While reading this recent commentary on Google’s progressive rollouts of algorithms worldwide, I picked up on Aaron’s mention that some sites competing in languages with less content are less likely to get whacked.
“In most foreign markets Google is not likely to be as aggressive with this type of algorithm as they are in the United States (because foreign ad markets are less liquid and there is less of a critical mass of content in some foreign markets), but I would be willing to bet that Google will be pretty aggressive with it in the UK when it rolls out.” [Emphasis mine.]
How do you figure out which languages those are? One rough proxy is Wikipedia's per-language article counts, which give a sense of how much content exists in each language relative to its number of speakers.
Of course, you need to take that information with a grain of salt. Wikipedia's data is not necessarily representative of the web as a whole. Cultural differences, for example, may make speakers of one language disproportionately more likely to contribute to Wikipedia than speakers of another. But it's a data point worth considering — if, of course, you have the capacity to compete in more than one language. This is exactly the sort of critical thinking encouraged and taught in my book.
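If you want to play with the comparison yourself, here's a minimal sketch of the idea in Python. All the numbers below are illustrative placeholders, not real statistics, and the metric (articles per million speakers) is just one plausible way to operationalize "less content per market" — the commentary quoted above doesn't prescribe a specific method:

```python
# Sketch: rank languages by how much Wikipedia content exists per speaker.
# The article counts and speaker numbers are HYPOTHETICAL placeholders --
# substitute real figures from Wikipedia's per-language statistics pages
# before drawing any conclusions.

# language -> (wikipedia_articles, native_speakers_in_millions)
SAMPLE = {
    "lang_a": (2_000_000, 400),
    "lang_b": (150_000, 200),
    "lang_c": (900_000, 80),
}

def articles_per_million_speakers(data):
    """Return (language, ratio) pairs sorted ascending by ratio.

    A low ratio suggests comparatively little content relative to the
    potential audience -- the kind of market where, per the quoted
    commentary, an algorithm like this may be applied less aggressively.
    """
    return sorted(
        ((lang, articles / speakers)
         for lang, (articles, speakers) in data.items()),
        key=lambda pair: pair[1],
    )

for lang, ratio in articles_per_million_speakers(SAMPLE):
    print(f"{lang}: {ratio:,.0f} articles per million speakers")
```

With real inputs, the languages floating to the top of this list would be the candidates worth a closer look.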
“Whooooosh!” – That was the sound of dozens of content farms going multilingual ;).
If you liked this post, add my RSS feed to your reader!