Ranking issues? Just save the spiders some time!

Mon 10 August 2009 13:00, Dennis Sievers

Are you having problems getting good rankings in the search engines? If so, you might want to read the article about crawling and indexing over at Google's official blog.

In the article they state that "many questions about website architecture, crawling and indexing, and even ranking issues can be boiled down to one central issue: How easy is it for search engines to crawl your site?"

I agree, but it's a little simplistic. Of course ranking issues can be the result of a URL structure that spiders cannot handle. But a search engine (and user) friendly website is only the foundation for a site that has a chance to rank. By getting things right when it comes to consistent URL structures, duplicate content, loopholes, etc., you have only put the website in a position to get crawled and indexed. Step 2 is making sure your pages are set up (SEO-ed) for the right keywords, so you get into contact with your target audience.
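As a sketch of what "getting things right" can look like at the server level, here is a minimal Apache .htaccess fragment (assuming Apache with mod_rewrite enabled; example.com is a placeholder domain, not from the article) that 301-redirects the bare domain to the www variant, so every page lives on exactly one hostname:

```apacheconf
# Sketch: force one hostname (the www variant) with a permanent redirect.
RewriteEngine On
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]
```

The same idea applies in reverse if you prefer the non-www variant; the point is to pick one and stick to it.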

The interesting part of the article, though, is that Google has a finite number of resources. Hard to believe, isn't it? But with billions and billions of pages to be crawled and recrawled every now and then, it is very important to realize that your website might only get 0.001% of the time a spider has available to visit and crawl it. This share will increase when you update your content more frequently or write new content for your visitors. But most of you probably have a static website. Nonetheless, it is very important to remember that Google, or any other search engine, will not have, and will not take, the time to crawl every single page on your website.

So, when it comes to SEO, always think of things like:

  • Make sure every single piece of unique content is accessible via only one URL. Don't just think of session IDs in the URL; also check that all your domains redirect to the main domain, make a decision on using www or not and stick to it, etc.
  • Don't put your important pages away in the deep dungeons of your website. See to it that all pages you want indexed and ranked are accessible within one click. This is not only good for the search engines, but also benefits your normal visitors.
  • Pages that don't need to be indexed don't need to be visited by the search engine either. List them in the robots.txt file and save the spiders time, so they are able to crawl the important pages on your website.
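To illustrate that last point, here is a small robots.txt sketch (the paths are hypothetical) that keeps spiders away from pages that don't need to rank:

```
User-agent: *
# These sections add no search value; let spiders spend their
# limited crawl time on the content pages instead.
Disallow: /search/
Disallow: /print/
Disallow: /login/
```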

Of course getting good rankings depends on many more factors, but by saving the search engine spiders time, you make sure they only visit the right pages, and are able to visit them all instead of just a portion.
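The one-URL-per-content rule from the checklist above can be sketched in code. This is a minimal, hypothetical normalizer; the host www.example.com and the session parameter sid are illustrative only, not taken from any real site:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def canonical_url(url):
    """Map the many URL variants a CMS can emit onto one canonical form."""
    scheme, netloc, path, query, _ = urlsplit(url)
    # Enforce one hostname: pick the www variant and stick to it.
    if netloc == "example.com":
        netloc = "www.example.com"
    # Drop session IDs so the same content is not indexed twice.
    params = [(k, v) for k, v in parse_qsl(query) if k != "sid"]
    # Treat /page and /page/ as the same resource.
    if path != "/" and path.endswith("/"):
        path = path.rstrip("/")
    return urlunsplit((scheme, netloc, path, urlencode(params), ""))

print(canonical_url("http://example.com/hotels/?sid=abc123"))
# -> http://www.example.com/hotels
```

Running something like this over a list of your own URLs is a quick way to spot variants that should be collapsed into one address.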


Comments (6)


    • Tristan Teunissen

    Hi Bas,

    When you have largely similar content, or content that's accessible through multiple URLs, the canonical <link> tag can also solve some problems.


    Mon 10 Aug 2009, 13:42

  • I hear you Dennis, it's very typical of Google to make it sound so easy.

    Great points on how to make sure the spiders crawl and index as many pages as possible.

    I would also suggest including solid internal links, and submitting an XML sitemap to Google Webmaster Tools. I know a lot of SEOs don't like/trust Google Webmaster Tools, but I've always found it incredibly useful for measuring crawl frequency and any additional data about how/when/how often the spiders access my sites.

    Also, @Tristan, I wouldn't recommend using the canonical tag as the preferred option. Not enough testing has been done on the effect of this attribute yet.

    I wrote a blogpost on canonicalisation a while back http://www.vervesearch.com/blog/seo/canonicalisation-issues-why-its-bad-an...

    I will use 301 redirects in most instances, although on /index pages I will use the canonical tag/attribute.

    For anyone that wants to know exactly what canonicalisation is, I've made this video explaining it http://www.screenjelly.com/watch/_n7xY8pz0Nc

    Mon 10 Aug 2009, 14:19

    • Tristan Teunissen

    @Lisa, the canonical tag is not the preferred option, but it's a method people can look into.

    From a user perspective, you can't always use a 301 redirect: for instance when the URL provides a lot of semantic information for the user, or when your breadcrumbs are based on the URL.

    /x/y/x 301 redirect --> /a/b/x

    Mon 10 Aug 2009, 14:46

  • Thanks Lisa and Tristan for adding more valuable tips.

    @Lisa, I agree on the XML sitemap submission. Although it doesn't earn you higher rankings, it's an easy way to tell Google which pages to visit and how often you update them. And of course, a solid internal linking structure takes care of optimal routing paths within the site. This helps spiders and users quickly and easily reach the important / desired pages.

    @Tristan, it is a good thing to tell search engines which page is leading. But as Lisa says, it's not really a proven method to solve duplicate content issues. I believe it is a lightweight way to prevent duplicate content problems, for example when you have two pages that are (almost) the same, or serve session IDs and want to tell Google which URL to use to index the page.

    A 301 redirect is always the best and therefore preferred solution to solve duplicate content issues.

    Maybe some readers of this post and its comments who have done some testing with canonicalisation can shed some more light on this?

    Mon 10 Aug 2009, 15:02

    • Tristan Teunissen

    @Dennis, true, 95% of the time 301 redirects are okay. But sometimes you can't solve it with a 301 redirect, because you're interfering with user logic.

    For instance, HOTELx is accessible through

    A: /citytrip/stockholm/story-hotel

    but also

    B: /sweden/stockholm/hotels/story-hotel

    you can't redirect the user from A to B, because it interferes with that logic.

    So canonicalisation could be an answer to this problem (or exclusion through robots.txt). But it's pretty cost-intensive to decide for the crawler which URL is best.

    Results from canonicalisation tests would be nice.

    Mon 10 Aug 2009, 15:19

  • @Tristan: yep, you're right, in some instances it just isn't logical (and can be detrimental) to use 301 redirects. I've seen instances where infinite redirect loops were created by 301 redirecting. And yes, I think the canonical tag is a great option in those instances. For example (as mentioned above) I usually use the canonical tag for any /index pages.

    With regard to your example, I would probably still use a 301 redirect in that instance. It shouldn't interfere with technical logic. But that's just my opinion.

    Mon 10 Aug 2009, 15:30



© 2016 Searchcowboys.com - All Rights Reserved - All views and opinions expressed are those of the authors of Searchcowboys.

All trademarks, slogans, text or logo representation used or referred to in this website are the property of their respective owners.