How Search Engines Gather Information

Search engines gather information by crawling websites. They move from page to page, visiting sites they already know about and following the links they find along the way. While crawling, these robots, or spiders, read the source code of each page and send the information back for indexing. Spiders were designed to read HTML and related markup such as XHTML, including the HTML output of server-side languages like PHP. They find it difficult to read pages built in Flash and some other popular web technologies, and they cannot directly read JavaScript or images. They can, however, read the alt attributes provided with GIF, JPEG or PNG images.
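To make this concrete, here is a minimal sketch of what a spider "sees" on a page: visible text, image alt attributes, and links to follow, but not the contents of scripts. It uses Python's standard html.parser module; the page markup and class name are invented for illustration, not taken from any real crawler.

```python
from html.parser import HTMLParser

# A toy "spider" that collects only what a crawler can read:
# visible text, image alt attributes, and links to crawl next.
class ToySpider(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_script = False  # spiders skip script contents
        self.text = []
        self.alts = []
        self.links = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "script":
            self.in_script = True
        elif tag == "img" and "alt" in attrs:
            self.alts.append(attrs["alt"])   # alt text stands in for the image
        elif tag == "a" and "href" in attrs:
            self.links.append(attrs["href"])  # links queued for the next crawl

    def handle_endtag(self, tag):
        if tag == "script":
            self.in_script = False

    def handle_data(self, data):
        if not self.in_script and data.strip():
            self.text.append(data.strip())

page = """
<html><body>
  <h1>Welcome</h1>
  <script>var hidden = "spiders skip this";</script>
  <img src="logo.png" alt="Acme Widgets logo">
  <a href="/about.html">About us</a>
</body></html>
"""

spider = ToySpider()
spider.feed(page)
print(spider.text)   # → ['Welcome', 'About us']
print(spider.alts)   # → ['Acme Widgets logo']
print(spider.links)  # → ['/about.html']
```

Notice that the JavaScript variable never appears in the collected text, while the image contributes only its alt attribute. This is why descriptive alt text matters: it is the only part of an image a spider can index.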

Related SEO Tips

How often do Search Engines Crawl?
Search engines crawl the web constantly; how often they revisit your site depends in part on how often you update its content.

Giving search engines something to read
Your website must be readable by search engines. Ask yourself: can a search engine actually read your site?

Accessible to search engines and humans
Your site must be easily accessible to search engines and human visitors.