With the widespread growth of the World Wide Web, a specially designed tool was developed to search through the available information: the search engine. Using both algorithms and human editing, a search engine presents results organized in a list of web pages, information, links, and images. Users see these results after typing a keyword or keyword phrase into the search engine's search field.
A search engine uses web crawling, indexing and searching, in that order, to provide the most accurate results for a particular search. Search engines work by storing information about millions of web pages that can be retrieved at the request of the user. The web crawler, or "spider," acts as an automated web browser: it follows every appropriate visible link and analyzes the contents of each page it reaches to determine how that page should be indexed.
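To make the crawling step concrete, here is a minimal sketch in Python using only the standard library. The starting URL, the page limit and the way pages are stored are illustrative assumptions for this article, not any particular engine's implementation.

import urllib.request
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    """Collects the href targets of every anchor tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(start_url, max_pages=10):
    seen, queue, pages = set(), [start_url], {}
    while queue and len(pages) < max_pages:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        try:
            html = urllib.request.urlopen(url, timeout=5).read().decode("utf-8", "ignore")
        except Exception:
            continue          # skip links that cannot be fetched
        pages[url] = html     # keep the page contents so they can be indexed later
        parser = LinkExtractor()
        parser.feed(html)
        # follow every visible link, resolved against the current page's address
        queue.extend(urljoin(url, link) for link in parser.links)
    return pages

A real crawler adds politeness rules (robots.txt, rate limiting) and far better error handling; the point here is simply the loop of fetch, store, extract links, follow.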
Words found inside the pages are extracted from descriptions and from the appropriate meta tags; meta tags are also read against the contents of the web page itself to establish its relevance. The data collected from the sites is indexed and stored so it can be retrieved when it is needed.
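A common way to store those extracted words is an inverted index: each word maps to the set of pages that contain it. The sketch below, in Python, uses deliberately naive tokenisation and assumes the pages dictionary produced by the crawler above.

import re
from collections import defaultdict

def build_index(pages):
    """pages: dict mapping URL -> page text, as collected by a crawler."""
    index = defaultdict(set)
    for url, text in pages.items():
        # split the page into lowercase word tokens and record where each occurs
        for word in re.findall(r"[a-z0-9]+", text.lower()):
            index[word].add(url)
    return index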
All search engines work on more or less the same principle. Google stores the source pages, also called the cache, of all the web pages it crawls, along with information available on the web page itself. AltaVista differs slightly in operation, as it stores everything a web page has on offer.
Cache storage helps keep track of updates to a web page and helps in filtering. The system of indexing used by Google ensures that only up-to-date content is made available to its users, doing away with link rot. The cache is also useful for finding content that has since been removed: it allows that content to be recovered, much like an archive source.

The search process starts with the user keying a keyword or keyword phrase, related to the content they are looking for, into the engine's search box. The engine then uses its index to produce the web pages that best suit the search phrase. The resulting list includes a short description of the contents each web page has to offer.
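The lookup step can be sketched on top of the inverted index from earlier: the keyword phrase is split into words, pages containing all of them are retrieved, and a short snippet of each page stands in for its description. The function and parameter names here are illustrative assumptions.

def search(phrase, index, pages, snippet_len=120):
    """Return (url, snippet) pairs for pages containing every word in the phrase."""
    words = phrase.lower().split()
    if not words:
        return []
    # intersect the page sets for each word: only pages matching all words remain
    matches = set.intersection(*(index.get(w, set()) for w in words))
    results = []
    for url in matches:
        text = pages[url]
        results.append((url, text[:snippet_len] + "..."))  # short description shown to the user
    return results

For example, search("keyword phrase", index, pages) would return every stored page that contains both "keyword" and "phrase", each paired with the opening of its text.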
The goal of the major search engines is to supply the most relevant results. Not every site containing the requested keywords is actually relevant to the search, so search engines use their spiders and indexes to filter out useless information, each generating its own system for analyzing a website's content.
Increasingly, search engines have been implementing page ranking systems in which each page's description, keywords and content are scanned for relevance to the entered keyword and to the index. Pages with higher ranks appear more often at the top of the list, and if a site is linked to from a high-ranking website, that link counts as a vote that increases the site's ranking.
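The "link as a vote" idea can be illustrated with a simplified PageRank-style calculation: each page shares its current rank across its outgoing links, and a damping factor keeps the scores stable. This is a sketch of the principle only, not Google's actual formula, and the link graph passed in is an assumed input.

def page_rank(links, iterations=20, damping=0.85):
    """links: dict mapping page -> list of pages it links to."""
    pages = set(links) | {p for targets in links.values() for p in targets}
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, targets in links.items():
            if not targets:
                continue
            share = damping * rank[page] / len(targets)
            for target in targets:
                new_rank[target] += share   # each outgoing link acts as a vote for its target
        rank = new_rank
    return rank

A page linked to by pages that themselves hold high rank ends up with a larger score, which is exactly the "vote from a high-ranking website" effect described above.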
About the Author:
Justin Harrison is an internationally recognised Internet marketing consultant who provides world-class Search Engine Optimization to website owners. For more information visit: http://www.seorankings.co.za