Search engine optimization (SEO) is the process of improving the volume and quality of traffic to a web site from search engines via "natural" or unpaid ("organic" or "algorithmic") search results, as opposed to search engine marketing (SEM), which deals with paid inclusion. Typically, the earlier (or higher) a site appears in the search results list, the more visitors it will receive from the search engine. SEO may target different kinds of search, including image search, local search, video search, and industry-specific vertical search engines, giving a website a broader web presence.
The leading search engines, such as Google and Yahoo!, use crawlers to find pages for their algorithmic search results. Pages that are linked from other pages already in a search engine's index do not need to be submitted because they are found automatically. Some search engines, notably Yahoo!, operate a paid submission service that guarantees crawling for either a set fee or a cost per click.
Such programs usually guarantee inclusion in the database but cannot guarantee a specific ranking within the search results. The two major directories, the Yahoo! Directory and the Open Directory Project, both require manual submission and human editorial review. Google offers Google Webmaster Tools, through which an XML Sitemap feed can be created and submitted for free to ensure that all pages are found, especially pages that are not discoverable by automatically following links.
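As a rough illustration, an XML Sitemap is simply a list of URLs in the sitemaps.org format; the addresses below are placeholders, not real pages:

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <!-- One <url> entry per page the crawler should know about -->
      <url>
        <loc>http://www.example.com/</loc>
        <lastmod>2007-03-01</lastmod>
        <changefreq>weekly</changefreq>
      </url>
      <url>
        <!-- A deep page that is not reachable by following links -->
        <loc>http://www.example.com/archive/old-page.html</loc>
      </url>
    </urlset>

The file is usually placed at the root of the site (for example, http://www.example.com/sitemap.xml) and then submitted through Google Webmaster Tools so the crawler knows where to find it.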
To keep undesirable content out of the search indexes, webmasters can instruct spiders not to crawl certain files or directories through the standard robots.txt file in the root directory of the domain. Additionally, a page can be explicitly excluded from a search engine's database by using a meta tag specific to robots. When a search engine visits a site, the robots.txt file located in the root directory is the first file to be crawled. The robots.txt file is then parsed, which instructs the robot as to which pages are not to be crawled.
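A minimal robots.txt might look like the following sketch; the directory names are only placeholders for whatever a webmaster wants kept out of the crawl:

    # robots.txt served from http://www.example.com/robots.txt
    # Rules for all crawlers
    User-agent: *
    Disallow: /cgi-bin/
    Disallow: /private/

    # Additional rule for one specific crawler
    User-agent: Googlebot
    Disallow: /drafts/

To exclude an individual page rather than a whole directory, the robots meta tag mentioned above can be placed in that page's HTML head:

    <meta name="robots" content="noindex, nofollow">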
Because a search engine crawler may keep a cached copy of this file, it may on occasion crawl pages a webmaster does not wish crawled. Pages typically prevented from being crawled include login-specific pages such as shopping carts and user-specific content such as the results of internal searches. In March 2007, Google warned webmasters that they should prevent indexing of internal search results because those pages are considered search spam.
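Continuing the placeholder example above, if a site's internal search results live under a hypothetical /search/ path, they could be kept out of the crawl with a rule such as:

    User-agent: *
    Disallow: /search/

Note that robots.txt only stops crawling; the noindex value in the robots meta tag asks engines not to index a page they are still able to crawl, so the two mechanisms are often used together.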
About the Author:
Don't buy another SEO program; simply learn how to build thousands of backlinks in just a few hours. We can show you how, so visit our website today.