Location-based search engines allow users to easily find Web pages relevant to a specific region or city. Most people find what they are looking for on the World Wide Web by using search engines such as Yahoo!, AltaVista, or Google. It is the search engines that ultimately bring your website to the notice of prospective customers, so it is worth knowing how these engines actually work and how they present information to the user initiating a search.

When you ask a search engine to locate information, it searches through the index it has created rather than through the Web itself. Different search engines produce different rankings because not every engine uses the same algorithm to search its indices. Many leading search engines use software programs called spiders or crawlers to find information on the Internet and store it for search results in giant databases, or indexes. Some spiders record every word on a Web site for their respective indexes, while others record only certain keywords listed in title tags or meta tags.

Search engines use spiders to index websites. When you submit your website's pages to a search engine by completing its required submission page, the search engine's spider will index your entire site. A spider is an automated program run by the search engine system. Search engine indexing collects, parses, and stores data to facilitate fast and accurate information retrieval. Spiders cannot index pictures or read text contained within graphics, so relying too heavily on such elements has long been a concern for online marketers.

WebCrawler was the Internet's first search engine to perform keyword searches in both the names and the texts of pages on the World Wide Web, and it quickly won popularity and loyalty among surfers looking for information. WebCrawler was born in January 1994, during the Web's infancy.
It was developed by Brian Pinkerton, a computer science student at the University of Washington, to cope with the growing complexity of the Web. Pinkerton's application, WebCrawler, could automatically scan individual sites on the Web, register their content, and create an index that surfers could query with keywords to find Web sites relevant to their interests.
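The crawl-and-index process described above, where a spider records the words on each page into an index and queries are answered from that index rather than by scanning the Web, can be sketched with a toy inverted index. This is a minimal illustration, not WebCrawler's actual implementation; the page URLs and contents here are invented for the example.

```python
# A tiny in-memory "web": URL -> page text. A real spider would fetch
# these pages over the network and follow links between them.
pages = {
    "example.com/home": "search engines index the web",
    "example.com/about": "spiders crawl pages and record every word",
}

def build_index(pages):
    """Indexing step: map each word to the set of pages containing it."""
    index = {}
    for url, text in pages.items():
        for word in text.lower().split():
            index.setdefault(word, set()).add(url)
    return index

def search(index, keyword):
    """Query step: look the keyword up in the prebuilt index,
    rather than re-reading every page at search time."""
    return sorted(index.get(keyword.lower(), set()))

index = build_index(pages)
print(search(index, "spiders"))
```

Because queries hit the precomputed index, lookup cost does not depend on re-reading page text, which is why engines answer from their indexes and why pages a spider never recorded simply do not appear in results.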