Resource aggregators and search engine algorithms

Resource aggregators (feed aggregators, post aggregators and the like) can actually be harmful to SEO. The sad truth about these kinds of "hubs" is that their ranking is often low (a PageRank of 3, 2 or even less), so when they insert a link to your resources you end up associated with a bad neighborhood, and this affects the calculation of your overall PageRank. Worse still, you have no control over these aggregators, which are often robots that scan the web in search of fresh content. I have noticed that only Google tends to show this kind of link in its top results: other search engines (such as Bing or Yahoo) simply ignore them, because they use different algorithms. In essence, Google's algorithm is based on popularity: the more links you get, the more likely you are to appear in top results.
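To make the popularity idea concrete, here is a minimal sketch of a link-based ranking in the spirit of PageRank. The graph, the damping factor, and the iteration count are illustrative assumptions, not Google's actual data or parameters.

```python
# A toy, popularity-based ranking sketch (PageRank-style power iteration).
# All page names, the damping factor and the iteration count are made up
# for illustration; they are not Google's real parameters.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = set(links) | {p for targets in links.values() for p in targets}
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        # Every page starts each round with the "random jump" share...
        new_rank = {p: (1.0 - damping) / n for p in pages}
        # ...and then receives an equal slice of each linking page's rank.
        for page, targets in links.items():
            if targets:
                share = rank[page] / len(targets)
                for t in targets:
                    new_rank[t] += damping * share
        rank = new_rank
    return rank

# A page that many others link to ends up with a higher score:
graph = {
    "my-article": ["aggregator"],
    "aggregator": ["my-article"],
    "blog-a": ["my-article"],
    "blog-b": ["my-article"],
}
ranks = pagerank(graph)
```

Running this, "my-article" outranks the pages that merely link to it, which is exactly the popularity effect described above.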

However, this approach has its drawbacks. First of all, it doesn't take into account the authority and influence of a website. For example, if I write an article on CSS and it is quoted and linked by Eric Meyer, this will surely count more in terms of ranking. But that happens not because Google considers Eric's site authoritative and influential, but because of its PageRank (it's seven, as far as I know). And what happens when a website that has nothing to do with CSS and web development links to my article and its PageRank is seven or more? Sadly, that link will count exactly as if my article had been linked from Eric's site. In a nutshell: some search algorithms have no concept of a trusted source, because they simply scan the web without organizing it into a sort of "top list" of trusted and influential websites. Further, they seem to ignore contextualization when indexing and ranking resources.
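The topic-blindness described above can be sketched in a couple of lines: in a purely link-based scheme, the value a link passes depends only on the linking page's rank and its number of outlinks, never on topical relevance. The numbers below are made up for illustration.

```python
# Hypothetical illustration: link value in a purely link-based scheme.
# The rank and outlink counts are invented; the point is that the topic
# of the linking page never enters the formula.

def link_contribution(linking_page_rank, outlink_count, damping=0.85):
    # Each outlink passes an equal, damped share of the page's rank.
    return damping * linking_page_rank / outlink_count

# A CSS authority and a completely unrelated high-rank site pass exactly
# the same value if their rank and outlink counts happen to be equal:
css_authority = link_contribution(linking_page_rank=7.0, outlink_count=10)
unrelated_site = link_contribution(linking_page_rank=7.0, outlink_count=10)
```

Both contributions come out identical, which is the flaw the paragraph above points at: nothing in the formula knows what the linking site is about.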

This entry was posted by Gabriele Romanato.
