A spider, sometimes known as a "crawler" or "robot", is a software program used by search engines to stay up to date with new content appearing on the internet. Spiders are constantly seeking out new, changed, and removed content on webpages.

How does it work?

To understand how spiders work, it helps to think of them as automated data-gathering robots. As mentioned above, spiders traverse every website they can reach, looking for as many new or updated web pages and links as possible. When you submit your web pages to a search engine through a "Submit a URL" page in its webmaster tools, they are added to the spider's list of pages to visit on its next trip out onto the internet. Your pages can be found even if you never submit them: spiders will discover them if they are linked from any other page on a "known" website.
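To make that discovery process concrete, here is a minimal sketch in Python of a crawl frontier: a queue seeded with pages the spider already knows about, from which newly linked pages are picked up even if they were never submitted. The URLs and the tiny link graph are made up purely for illustration; a real spider fetches live pages rather than reading from a dictionary.

from collections import deque

# Toy link graph standing in for the web: each URL maps to the outbound
# links found on that page (all URLs here are hypothetical).
link_graph = {
    "https://known-site.example/": ["https://your-site.example/page1"],
    "https://your-site.example/page1": ["https://your-site.example/page2"],
    "https://your-site.example/page2": [],
}

# The frontier starts with pages the spider already knows about, e.g. URLs
# submitted through a "Submit a URL" form or seen on earlier crawls.
frontier = deque(["https://known-site.example/"])
visited = set()

while frontier:
    url = frontier.popleft()
    if url in visited:
        continue
    visited.add(url)
    # Pages linked from a known page get queued, so your pages can be
    # discovered even if you never submitted them yourself.
    for link in link_graph.get(url, []):
        if link not in visited:
            frontier.append(link)

print(visited)  # all three pages end up crawled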

When a spider arrives at your webpage, it first looks for a robots.txt file, which tells spiders which areas of your site they may visit and which they should skip. The spider's next step is to gather the outbound links on the page. Spiders follow links from one page to the next; this is the basic idea behind how spiders work.
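Below is a minimal sketch, using only the Python standard library, of the two steps just described: checking a URL against robots.txt rules and collecting the outbound links from a page's HTML. The robots.txt content, the HTML snippet, the URLs, and the "ExampleBot" user-agent name are all hypothetical; a production spider would fetch these over the network and handle many more edge cases.

from urllib.robotparser import RobotFileParser
from urllib.parse import urljoin
from html.parser import HTMLParser

# --- Step 1: honour robots.txt --------------------------------------------
# Example robots.txt content (made up); a real spider would download it
# from https://your-site.example/robots.txt before fetching anything else.
robots_txt = """\
User-agent: *
Disallow: /private/
"""

rules = RobotFileParser()
rules.parse(robots_txt.splitlines())

print(rules.can_fetch("ExampleBot", "https://your-site.example/blog/post"))  # True
print(rules.can_fetch("ExampleBot", "https://your-site.example/private/x"))  # False

# --- Step 2: gather outbound links from the page ---------------------------
class LinkCollector(HTMLParser):
    """Collects the href of every <a> tag, resolved against the page URL."""

    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(urljoin(self.base_url, value))

# Hypothetical HTML a spider might have just fetched.
html = '<p>See <a href="/about">about</a> and <a href="https://other.example/">a friend</a>.</p>'

collector = LinkCollector("https://your-site.example/blog/post")
collector.feed(html)
print(collector.links)
# ['https://your-site.example/about', 'https://other.example/']

The links collected in step 2 would then be fed back into a frontier queue like the one sketched earlier, which is how a spider keeps moving from page to page.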