Crawler-based engines send crawlers, or "spiders," out into cyberspace. These crawlers visit a Web site, read the content of its pages, read the site's meta tags, and follow the links the site contains. The crawler sends all of that information back to a central repository, where the data is indexed.
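A minimal sketch of that fetch-parse-follow loop, written in Python using only the standard library, might look like the following. It is illustrative only: a real crawler adds politeness delays, robots.txt handling, and far more robust parsing, and the seed URL here is whatever page the crawl starts from.

```python
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class PageParser(HTMLParser):
    """Collects the meta tags and outgoing links of one page."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.meta = {}
        self.links = []
        self.text = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and "name" in attrs:
            self.meta[attrs["name"]] = attrs.get("content", "")
        elif tag == "a" and "href" in attrs:
            self.links.append(urljoin(self.base_url, attrs["href"]))

    def handle_data(self, data):
        self.text.append(data)

def crawl(seed_url, max_pages=10):
    """Visit pages breadth-first and return the harvested data
    that would be shipped back to the central repository."""
    repository = {}
    frontier = [seed_url]
    while frontier and len(repository) < max_pages:
        url = frontier.pop(0)
        if url in repository:
            continue  # already visited this page
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", "replace")
        except OSError:
            continue  # unreachable page; skip it
        parser = PageParser(url)
        parser.feed(html)
        repository[url] = {"meta": parser.meta,
                           "text": " ".join(parser.text)}
        frontier.extend(parser.links)  # follow the site's links
    return repository
```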

The crawler periodically returns to each site to check for information that has changed; how often it does so is determined by the search engine's administrators.
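One simple way to express such a revisit policy is a priority queue keyed by each site's next due time. The sketch below assumes per-site intervals an administrator might configure; the URLs and interval values are invented for illustration, not any real engine's policy.

```python
import heapq
import time

# Revisit intervals (seconds) chosen by the engine's administrators;
# these URLs and values are illustrative only.
REVISIT_INTERVALS = {
    "https://news.example.com": 60 * 60,          # hourly: changes often
    "https://static.example.com": 7 * 24 * 3600,  # weekly: rarely changes
}

def build_schedule(sites):
    """Seed a priority queue of (next_due_time, url) entries."""
    now = time.time()
    queue = [(now, url) for url in sites]
    heapq.heapify(queue)
    return queue

def run_scheduler(queue, recrawl):
    """Pop whichever site is due next, re-crawl it, and requeue it.
    `recrawl` is a caller-supplied function that fetches and re-indexes."""
    while queue:
        due, url = heapq.heappop(queue)
        wait = due - time.time()
        if wait > 0:
            time.sleep(wait)  # nothing is due yet
        recrawl(url)
        interval = REVISIT_INTERVALS.get(url, 24 * 3600)  # default: daily
        heapq.heappush(queue, (time.time() + interval, url))
```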

When you query a search engine to locate information, you are actually searching the index the engine has created; you are not searching the Web itself. These indices are giant databases of information that has been collected, stored, and made searchable. This explains why a search on a commercial engine such as Yahoo! or Google will sometimes return results that are dead links: because the results come from the index, a page that became invalid after the last update is still listed as active, and it will remain so until the index is refreshed.
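Such an index is commonly an inverted index: a mapping from each word to the pages that contain it. The toy version below (page URLs and contents are made up for the example) shows why a page that goes offline keeps appearing in results until the index is rebuilt.

```python
from collections import defaultdict

def build_index(pages):
    """Map each word to the set of URLs whose stored text contains it."""
    index = defaultdict(set)
    for url, text in pages.items():
        for word in text.lower().split():
            index[word].add(url)
    return index

def search(index, query):
    """Return URLs matching every query word -- this reads only
    the index, never the live Web."""
    results = None
    for word in query.lower().split():
        matches = index.get(word, set())
        results = matches if results is None else results & matches
    return results or set()

# Snapshot of the Web at crawl time.
pages = {
    "http://example.com/a": "search engines build indices",
    "http://example.com/b": "spiders follow links between sites",
}
index = build_index(pages)

del pages["http://example.com/b"]      # the page goes offline...
print(search(index, "spiders links"))  # ...but the stale index still returns it
```

The final line prints the URL of the deleted page: the index has no way of knowing the page is gone until the crawler revisits it and the index is updated.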