Eliminate Spider Traps

Search engines use special programs, known as spiders (or crawlers), to harvest your pages on a regular basis. When a spider visits your site, it can usually grab your content and store it away in the search index, but not always. What is it about your Web site that traps a spider, anyway? The list is long and distinguished:

  • You are telling the spider to scram. Spiders will stay away from parts of your site (or even your whole site) if your robots.txt file tells them to. In addition, robots tags in your Web pages can request that the spider leave those pages out of the search index. Check your robots directives to ensure that nothing is excluded from search indexes in error.
  • Your navigation requires a human being. If your site requires visitors to click buttons in pop-up windows or fill out registration forms to see some of your pages, the spiders won't be able to do that. To open your site to the spiders, you must change these navigation techniques or offer alternatives that use regular links on your pages.
  • Your site demands technology that spiders can't use. Spiders can't view Flash pages, execute JavaScript code, or accept cookies. If your Web site refuses to display a page unless these or other technologies are supported, the spider will turn tail and run. (Do spiders have tails?) Reserve Flash for content that you are happy to leave unindexed, such as that 3D interactive view of your product. Similarly, spiders don't handle frames very well, so dump them. To get a better idea of what spiders see, try turning off graphics, cookies, and JavaScript in your browser, or use the text-only Lynx browser (if you do not want to download Lynx, you can use the Lynx Viewer). In sum, any time visitors must do anything more complicated than clicking a link to continue, the spider is probably blocked.
  • Your pages are poorly coded. Browsers are incredibly forgiving of incorrect HTML coding, but spiders are far less so. Most Web sites are rife with coding errors that visitors never notice but that, left uncorrected, may cause content to be lost or misinterpreted. Make sure that every new or changed Web page on your site is run through an HTML validator before going public.
  • You don't answer the door. You won't know in advance when the spider will come to call, so you need to be ready all the time. If your server is down for maintenance, or just plain slow, spiders will go knock on someone else's door.
  • You redirect the spider improperly. Whenever you change a page's URL, you must tell the spider to go to the new URL, using a technique called a redirect. JavaScript and meta-refresh redirects work for browsers, but only server-side redirects (such as 301 redirects) work for spiders.
  • Your pages are too fat. Spiders make their living by visiting as many pages as possible, so they limit the amount of material they read on any one page. Normally this limit is quite large (about 100,000 characters), but it can be used up quite quickly if you are not careful. Every character on your page counts, so if your site contains huge JavaScript routines and style sheets embedded in each page, get them out. Move them to external files that won't count as part of your page's size. Additionally, if you want your 500-page PDF file to be found by searchers, break it up so that each chapter is in a separate PDF.
  • Your URLs are too complex. Many excellent Web sites are generated on the fly—programs generate Web pages as they are requested by visitors. These Web sites sometimes use complex parameters in their URLs to tell the programs what to display, such as asking your commerce catalog program to show product number 870 from the Switzerland catalog in French. Spiders flee when they sense that the permutations of your catalog are endless. If you have more than a couple of parameters, or if your URLs are just plain long (over 1000 characters, say), then you risk the spiders skipping your page. Your Web server software should support a way of changing your URLs so they don't suffer from these problems. If you are using the Apache Web server, for example, its mod_rewrite module does the trick, and other Web servers offer similar URL-rewriting techniques. One more difficult URL problem concerns so-called "session" parameters, in which your programmers have encoded information about which visitor is using your site right in the URL. Because this technique can show exactly the same page content under different URLs, spiders scram when they see session identifiers in URLs. Use cookies or Web application server techniques to capture session data instead.
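
The server-side redirect advice above can be sketched for the Apache Web server using the Redirect directive in a configuration file or .htaccess; the old and new URLs here are hypothetical:

```apache
# Permanent (301) redirect from a retired URL to its replacement.
# Unlike JavaScript or meta-refresh redirects, spiders follow this one.
Redirect 301 /old-catalog/product870.html http://www.example.com/products/870
```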
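
Slimming down fat pages, as described above, mostly amounts to replacing large inline blocks with references to external files, which don't count toward the page-size limit. A sketch, with hypothetical file names:

```html
<!-- Instead of embedding huge <style> and <script> blocks in every page,
     reference external files that spiders don't count as page content: -->
<link rel="stylesheet" href="/styles/site.css">
<script src="/scripts/site.js"></script>
```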
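
The mod_rewrite approach from the last item might look roughly like this; the script name show.cgi and the parameter names catalog, lang, and product are hypothetical placeholders for whatever your catalog program expects:

```apache
RewriteEngine On
# Map a short, parameter-free URL such as /catalog/ch/fr/870
# onto the real query-string URL the catalog program understands.
RewriteRule ^catalog/([a-z]+)/([a-z]+)/([0-9]+)$ /show.cgi?catalog=$1&lang=$2&product=$3 [L]
```

Spiders then see only the clean address, while your program still receives its parameters.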
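
You can check your robots directives from the first item programmatically before a spider ever arrives. Here is a minimal sketch using Python's standard urllib.robotparser; the /checkout/ and /products/870 paths are hypothetical, invented for illustration:

```python
from urllib import robotparser

# A hypothetical robots.txt: block the checkout flow, allow everything else.
rules = """\
User-agent: *
Disallow: /checkout/
""".splitlines()

rp = robotparser.RobotFileParser()
rp.parse(rules)

# A spider honoring these rules may fetch product pages...
print(rp.can_fetch("*", "/products/870"))   # True
# ...but will skip anything under /checkout/.
print(rp.can_fetch("*", "/checkout/cart"))  # False
```

Running a check like this against your real robots.txt is a quick way to confirm that nothing is excluded from search indexes in error.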

Every page on your site may contain one or more of these spider traps, so it pays to be vigilant. Every trap you remove allows more pages on your site to be indexed.
