The home page is important simply by virtue of its technical location at the root level of the domain. When the home page is redirected to a “lower level” of the website (such as another directory or subdirectory), it appears less important to the search engines. This is simply a matter of information hierarchy: the most important page on the site now appears to live in an “unimportant” place.
High-level pages tend to be considered more relevant and important by search engines. In the past, SEO experts recommended keeping directories as close to the root as possible in order to assist spidering and relevance. Much of that advice was anecdotal, based on experience and individual interpretation. As search engine technology improved, deep file structures became less of an issue. No search engine has claimed a preference one way or the other, but keeping the home page at the root level is one area where SEO experts agree that a clean hierarchical structure can assist your site’s visibility.
However, adding a 302 redirect from the root level of a website to a deep directory creates problems beyond hierarchy. The second problem deals with link value. Most links to any website are domain-level links; most people simply link to the domain. When a 302 redirect is used, the link value of the original page is not passed to the new destination (because the redirect is temporary). As a result, all of the link value earned from other websites now points to a page that effectively does not exist.
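If you want to see whether your own home page issues this kind of redirect, a quick check of the response status code is enough. The following is a minimal sketch, not something from this book: it assumes the `requests` library is installed, and the domain shown is a placeholder for your own.

```python
# Minimal sketch: inspect what kind of redirect (if any) a home page issues.
# "www.example.com" is a placeholder; substitute your own domain.
import requests

def check_home_page_redirect(domain: str) -> None:
    # allow_redirects=False lets us look at the first response itself
    response = requests.get(f"https://{domain}/", allow_redirects=False, timeout=10)

    if response.status_code in (301, 308):
        print(f"Permanent redirect to {response.headers.get('Location')} "
              "- link value is generally passed along.")
    elif response.status_code in (302, 303, 307):
        print(f"Temporary redirect to {response.headers.get('Location')} "
              "- link value may not be passed to the destination.")
    else:
        print(f"No redirect; the home page answers at the root "
              f"(status {response.status_code}).")

check_home_page_redirect("www.example.com")
```

A temporary status code on the root URL is the warning sign discussed above; a page that answers directly at the root avoids the problem entirely.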
The optimal situation is for all incoming links to point to a URL whose content reflects the purpose of those links. When there are incoming links but no destination page and no content, many of the relevance factors will be incomplete, and rankings will fall or vanish altogether.
Avoid domain-level redirects, as in the example of the American Cancer Society’s Cancer.org, and avoid CMSs that lock a website into a redirect-based deep site structure. By following these guidelines when selecting a CMS provider, you can avoid future issues with the search engines.
Thursday: Uncover Duplicate Content
As mentioned many times in this chapter, duplicate content is a very real issue when managing a website. As your site grows larger and more complex, and as more functionality is added to provide better navigation tools, there is always the possibility of creating multiple URLs for the same page of content.
Many consider duplicate content to cause a penalty in the search engines, but there is no such thing. Duplicate content creates difficulty in assigning unique content to a specific page. When the same supposedly unique content is found at two or more URLs, the search engines cannot identify the real source of that content. Rather than a penalty, it is a consequence: the search engines are simply attempting to process multiple sources of the same information and to choose a primary page, and that page’s value is lessened in the process. By ensuring that each page is unique and has its own unique URL, you can avoid duplicate content issues.
Managing duplicate content is a considerable challenge with dynamic websites, because it requires substantial knowledge to track down duplicate pages and then identify the cause. Today I’ll discuss some of the typical causes of duplicate content.
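Before digging into the causes, it can help to confirm that several URL variants really do return the same page. The following is a minimal sketch, not part of the book’s toolset: the URLs are placeholders standing in for the kinds of variants (trailing slashes, session parameters, www versus non-www) that commonly produce duplicates, and it assumes the `requests` library is installed.

```python
# Minimal sketch: detect URL variants that return identical page content.
# The URLs below are placeholders; substitute variants from your own site.
import hashlib
import requests

candidate_urls = [
    "https://www.example.com/products",
    "https://www.example.com/products/",
    "https://www.example.com/products?sessionid=123",
    "https://example.com/products",
]

seen = {}  # content hash -> first URL that returned that content
for url in candidate_urls:
    body = requests.get(url, timeout=10).text
    digest = hashlib.sha256(body.encode("utf-8")).hexdigest()
    if digest in seen:
        print(f"{url} duplicates {seen[digest]}")
    else:
        seen[digest] = url
```

Identical hashes mean the search engines are seeing one page of content under several addresses, which is exactly the situation described above.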
Use Google Webmaster Tools
Google’s Webmaster Tools also includes a report that shows you duplicated title tags. This is a good first-level check, because you may find that some of your page titles are duplicated, which may indicate that the entire pages are duplicated. However, if the duplicated titles appear on multiple similar URLs and you are using a content management system, then you have more investigating to do, because something may be set up incorrectly in your CMS.
Keep in mind that duplicated title tags do not necessarily mean that your content is duplicated.
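You can run the same first-level check yourself on a handful of pages. This is a minimal sketch, not Google’s report: the URLs are placeholders (in practice you might pull them from your sitemap), it assumes the `requests` library is installed, and, per the caveat above, matching titles are only a hint that warrants a closer look at the pages themselves.

```python
# Minimal sketch: group a list of pages by their <title> text to spot
# duplicated titles. The URLs are placeholders; substitute your own pages.
import re
from collections import defaultdict

import requests

pages = [
    "https://www.example.com/",
    "https://www.example.com/about",
    "https://www.example.com/contact",
]

titles = defaultdict(list)  # title text -> URLs that use it
for url in pages:
    html = requests.get(url, timeout=10).text
    match = re.search(r"<title>(.*?)</title>", html, re.IGNORECASE | re.DOTALL)
    title = match.group(1).strip() if match else "(no title)"
    titles[title].append(url)

for title, urls in titles.items():
    if len(urls) > 1:
        print(f"Duplicate title '{title}' on: {', '.join(urls)}")
```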