Search engines can be described as online answer machines: a searcher enters a query, and the engine responds with a list of web pages containing relevant information. Search Engine Optimization (SEO), in turn, is the practice of promoting and improving a website to increase the number of visitors it receives from search engines. Search engines perform two major functions on the World Wide Web. First, they crawl and index the billions of documents on the web. Second, they provide useful answers to the queries that internet users submit.
In performing the first task, search engines use automated robots known as "crawlers" or "spiders." These crawlers move through the billions of documents, pages, files, videos, news items, and other media available on the World Wide Web. To illustrate, imagine a person tasked with providing information about the shops in an enormous city. The person would have to run through the city (crawling), gather information about every shop, and store it (indexing) in a way that makes it easy to retrieve on request. To do this, well-established paths (links) are required.
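The crawling process described above can be sketched as a link-following traversal. The snippet below is a minimal illustration, not a real crawler: it walks a tiny, made-up in-memory "web" (all page names and links are hypothetical) instead of fetching real pages over HTTP.

```python
from collections import deque

# A toy "web": each page maps to the pages it links to.
# These names are invented purely for illustration.
TOY_WEB = {
    "home.html": ["about.html", "products.html"],
    "about.html": ["home.html"],
    "products.html": ["home.html", "widget.html"],
    "widget.html": [],
}

def crawl(start_page, web):
    """Breadth-first traversal: visit a page, record it, queue its links."""
    seen = {start_page}
    queue = deque([start_page])
    order = []
    while queue:
        page = queue.popleft()
        order.append(page)        # a real crawler would index the page here
        for link in web.get(page, []):
            if link not in seen:  # skip pages already discovered
                seen.add(link)
                queue.append(link)
    return order

print(crawl("home.html", TOY_WEB))
```

Note that `widget.html` is only discovered because `products.html` links to it; a page with no inbound links would never be found, which is why the analogy stresses that well-established paths (links) are required.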
As in the analogy above, search engines use crawlers to gather information on the billions of documents on the World Wide Web. Once a document is found, information about the page is decoded and stored on massive hard drives, from which it is retrieved when a matching query is made. This is how results can be delivered within a fraction of a second. In providing these answers, which is the second function of search engines, the engines weigh numerous factors that SEO addresses in the writing and design of web pages.
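The storage-and-retrieval step is commonly implemented with an inverted index: a map from each word to the documents that contain it, so a query never has to scan every document. The sketch below uses hypothetical page names and text, and a deliberately naive tokenizer, just to show the idea.

```python
# Hypothetical documents (names and text invented for illustration).
docs = {
    "page1": "cheap running shoes for sale",
    "page2": "running tips for beginners",
    "page3": "shoes and boots store",
}

# Build the inverted index: word -> set of documents containing it.
index = {}
for name, text in docs.items():
    for word in text.split():
        index.setdefault(word, set()).add(name)

def search(query):
    """Return documents containing every word of the query."""
    results = None
    for word in query.split():
        matches = index.get(word, set())
        results = matches if results is None else results & matches
    return sorted(results or [])

print(search("running shoes"))  # only page1 contains both words
```

Because the index is built once at crawl time, answering a query is just a few dictionary lookups and a set intersection, which is what makes sub-second answers over billions of documents feasible.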
Search engines answer queries by retrieving and ranking relevant pages. As answer machines, they respond to online queries by scouring their corpus of billions of documents from the WWW. They first select results relevant to the query and then rank them in order of perceived importance; this is where SEO finds its purpose. This is a move away from earlier ranking methods, in which engines simply considered the words on a page. Ranking and relevance are now calculated from hundreds of factors, which makes manipulation by webmasters harder. Some of these factors are discussed here.
Major engines interpret importance as the popularity of a website, page, or document. This assumption has been successful, as search engines have continued to improve searcher satisfaction. To determine the popularity and relevance of a page, search engine engineers have crafted mathematical procedures known as algorithms that separate the wheat from the chaff and then rank the wheat by quality. These algorithms take into account hundreds of signals that search marketers refer to as "ranking factors."
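To make the idea of combining "hundreds of ranking factors" concrete, here is a toy sketch that reduces them to three invented signals with invented weights. This is purely illustrative; it is not Google's or any real engine's algorithm, whose actual factors and weights are not public.

```python
# Hypothetical factor names and weights, invented for illustration.
WEIGHTS = {"keyword_match": 0.5, "inbound_links": 0.3, "freshness": 0.2}

def score(page_signals):
    """Combine normalized signals (each in 0..1) into one relevance score."""
    return sum(WEIGHTS[f] * page_signals.get(f, 0.0) for f in WEIGHTS)

# Two hypothetical pages with different strengths.
pages = {
    "page_a": {"keyword_match": 0.9, "inbound_links": 0.2, "freshness": 0.5},
    "page_b": {"keyword_match": 0.6, "inbound_links": 0.9, "freshness": 0.8},
}

# Rank pages by descending score.
ranked = sorted(pages, key=lambda p: score(pages[p]), reverse=True)
print(ranked)
```

Even in this toy version, page_b outranks page_a despite a weaker keyword match, because other factors (links, freshness) compensate; with hundreds of real factors, gaming any single one becomes much less effective.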
At first glance, a search engine's algorithm might seem impenetrable to novices. Search engines publish little information on how to earn more traffic or higher rankings, and what little they do provide is not easy to act on. The Google and Bing webmaster guidelines, however, offer some insight into how to succeed as an SEO practitioner.
Google gives the following recommendations for earning higher rankings:
1. Google advises webmasters to design pages primarily for users, not search engines. It warns against cloaking, the practice of showing search engines different content from what users see.
2. It is also advisable to make every page reachable from at least one static text link. Google recommends designing a site with a clear hierarchy and text links to stand a better chance of ranking highly.
3. Google also advises building an information-rich site whose pages clearly and accurately describe their content. In particular, <title> elements and ALT attributes should be descriptive and accurate.
4. Finally, webmasters are advised to use keywords when creating human-friendly URLs. A single version of a URL should reach each document; use 301 redirects or the rel="canonical" link element to address duplicate content.
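The duplicate-content problem in point 4 arises because trivially different URLs can serve the same document. The sketch below shows a few simple normalization rules (lowercasing the host, dropping the fragment, trimming a trailing slash) that collapse such variants to one form; the URLs are made up, and real canonicalization involves more rules than these.

```python
from urllib.parse import urlsplit, urlunsplit

def canonicalize(url):
    """Lowercase the host, drop the fragment, strip a trailing slash."""
    parts = urlsplit(url)
    path = parts.path.rstrip("/") or "/"
    return urlunsplit((parts.scheme, parts.netloc.lower(), path, parts.query, ""))

# Hypothetical variants that would all serve the same document.
variants = [
    "http://Example.com/shoes/",
    "http://example.com/shoes",
    "http://example.com/shoes#reviews",
]
for url in variants:
    print(canonicalize(url))
```

If a site serves all three variants without redirects or a rel="canonical" hint, a search engine may index them as separate pages and split their link popularity; normalizing to one URL keeps those signals consolidated.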