Search Engines to Crack Down on Duplicate Pages

Maximum PC Staff

When you consider the complexity of modern-day web pages, it’s actually a bit of a miracle that search engines work as well as they do. Dealing with duplicate links, especially on sites such as Amazon that may promote an individual product a thousand times or more, has always been a challenge. Finally, after years of debate, Google, Yahoo, and Microsoft are putting the past behind them to solve this age-old issue. The solution is a simple “canonical” value added to the standard link tag.
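For readers curious what the markup looks like, here is a minimal sketch (the example.com URL is illustrative, not from the announcement): a site owner adds one line to the head of every duplicate page, pointing back at the preferred URL.

```python
# Sketch of the canonical hint a site owner would add to duplicate pages.
# The example.com URL is hypothetical, used only for illustration.
preferred = "http://www.example.com/product?id=42"

canonical_element = f'<link rel="canonical" href="{preferred}"/>'

# This element goes in the <head> of each duplicate or parameterized copy
# of the page, telling search engines which URL should be indexed.
print(canonical_element)
```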

The tag is designed to solve issues associated with multiple URLs pointing to the same page, but it may also be helpful when multiple versions of a page exist. Currently, the search engines rely on a process that examines URL structure for similarities. This generally works pretty well, but it is far from perfect. It is considered somewhat rare for search engines to come together on any issue, but it isn’t unprecedented. In 2006 they joined forces to put unanimous support behind sitemaps.org, and in June of 2008 they jointly announced new standards for robots.txt directives. Matt Cutts of Google and Nathan Buggia of Microsoft say this new approach should help reduce the clutter on the web and improve the accuracy of all search engines.
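To make the idea concrete, here is a hypothetical sketch of how an indexer might honor the hint when collapsing duplicate URLs. The page snippets and URLs below are made up for illustration and are not part of the announcement; this is not how any particular search engine actually implements it.

```python
# Sketch: group duplicate URLs under the canonical URL they declare.
from collections import defaultdict
from html.parser import HTMLParser


class CanonicalFinder(HTMLParser):
    """Pulls the href of a <link rel="canonical"> element, if one is present."""

    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "link" and attrs.get("rel") == "canonical":
            self.canonical = attrs.get("href")


def canonical_for(url, html):
    """Return the page's declared canonical URL, falling back to the URL itself."""
    parser = CanonicalFinder()
    parser.feed(html)
    return parser.canonical or url


# Three URL variants that all serve the same product page and declare
# the same canonical target in their <head> (hypothetical data).
pages = {
    "http://www.example.com/product?id=42&ref=home":
        '<head><link rel="canonical" href="http://www.example.com/product?id=42"/></head>',
    "http://www.example.com/product?id=42&sort=price":
        '<head><link rel="canonical" href="http://www.example.com/product?id=42"/></head>',
    "http://www.example.com/product?id=42":
        '<head><link rel="canonical" href="http://www.example.com/product?id=42"/></head>',
}

index = defaultdict(list)
for url, html in pages.items():
    index[canonical_for(url, html)].append(url)

for canonical, variants in index.items():
    print(canonical, "<-", len(variants), "URL variant(s)")
```

Instead of guessing from URL structure alone, the indexer can treat every page that points at the same canonical URL as a single entry.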

Even though the new tag won’t completely solve every duplicate-content problem on the web, it should significantly improve the indexing performance of search engines, particularly on e-commerce sites. The tag will be discussed in depth at this year’s Ask the Search Engines panel at SMX West.
