When you are new to any field, part of playing catch-up is learning the lexicon. Like any other field, SEO (Search Engine Optimization) is full of terminology that might seem like another language, yet mastering it is critical to carrying out the everyday tasks of a professional SEO.
With that said, here is a list of 15 Critical SEO Terms you need to know.
SERP, or Search Engine Results Page, is the page you see any time you complete a search query in Google. The SERP contains the organic search results, PPC Google Ads, and, if the search is local, a search carousel citing specific businesses and locations.
In terms of search, a spider refers to a program deployed by search engines to index your website. A spider works its way through your site to catalog, archive and index every web page it contains. Much like the Dewey Decimal System indexes books in a library, a spider indexes web pages to allow for proper archiving and searching. A spider can also be called a crawler (active verb: crawling).
Indexing is what a spider does with the pages it crawls. As spiders are constantly crawling the web for new content and web pages, any time a spider locates new content, it indexes that content so it can be found in search results.
Local Search is the term applied to a search query conducted to find a business, service, restaurant, or location in the local vicinity. If you live in New York City, an example of a local search query would be "Thai Food, SoHo NYC." As the search query is local to your current location, the SERP will feature Thai restaurants local to that area. A more recent addition to the local SERP is the carousel (pictured below). In addition, the local SERP will feature a map showing locations that meet your search criteria (pictured below).
404 is the term given to a web page that is no longer available. The page might have been deleted on purpose or, through human error, by accident. 404 pages are common after dated sales. For example, a company runs a sale on an item from December 1 to December 31. Due to the popularity of that item, customers link to the sale page, yet once the sale is finished, the company kills the content while those inbound links stay up. Clicking one of those links now leads to a non-existent (404'ing) page. 404'ing web pages frustrate search traffic and can hurt a website's rankings. The common fix for a 404'ing page is the 301 redirect.
When a page 404's, webmasters apply a 301 (permanent) redirect to route incoming traffic to a live web page. A 301 redirect allows search traffic and spiders to reach live web pages instead of 404'ing ones, saving headaches for visitors, spiders and domain holders alike.
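As a minimal sketch, on an Apache server a 301 redirect can be declared in an .htaccess file (the paths here are made-up placeholders, not from a real site):

```apache
# Permanently redirect the expired sale page to a live page (example paths)
Redirect 301 /december-sale /products/featured-item
```

Other servers, such as nginx or IIS, have their own equivalent directives; the key point is the 301 status code, which tells spiders the move is permanent.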
If your aim is to understand SEO, knowing how to use a robots.txt file is critical. Robots.txt is a plain-text file placed at the root of a website that tells spiders which pages not to crawl. The file works in conjunction with the sitemap, allowing spiders to understand which pages should be indexed and which should not. Webmasters use robots.txt for various reasons, such as keeping a WordPress admin page or a /cart page from being crawled and accessed. Another option for keeping pages out of the index is to apply a "noindex" meta robots tag to each page. While this works, it is a longer and more tedious approach than a robots.txt file. You can view the robots.txt file of any website by appending /robots.txt to its root level domain. Example: http://www.expedia.com/robots.txt
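For illustration, a bare-bones robots.txt covering the two cases mentioned above (the paths and sitemap URL are placeholders) might look like:

```
User-agent: *
Disallow: /wp-admin/
Disallow: /cart/

Sitemap: http://www.example.com/sitemap.xml
```

The `User-agent: *` line applies the rules to all spiders, and each `Disallow` line names a path they should not crawl.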
As mentioned above, a sitemap is a categorized list of the pages held within your website. A sitemap is used both for visitor navigation and to give spiders a map of how your site is constructed. There are two forms of sitemap: HTML and XML. An HTML sitemap is a visual page meant for human visitors; an XML sitemap is meant for spiders.
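To make the XML form concrete, here is a bare-bones XML sitemap following the sitemaps.org schema (the URLs and date are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.example.com/</loc>
    <lastmod>2014-11-01</lastmod>
  </url>
  <url>
    <loc>http://www.example.com/blogs/</loc>
  </url>
</urlset>
```

Each `<url>` entry names one page for spiders to crawl; the optional `<lastmod>` date hints at when the page last changed.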
Sometimes someone might ask you what the root level of your domain is; they are asking what your top level domain is. This blog has the URL http://www.informit.com/blogs/blog.aspx?uk=15-Critical-SEO-Terms. In that URL, the root domain is www.informit.com and the subdirectory is www.informit.com/blogs. The root level domain is always your highest level domain or, more clearly, the starting point for all other subdomains and directories held within your website.
The root level domain of this website is: www.informit.com
A subdomain of this website is:
A subdirectory of this website is: www.informit.com/blogs
The reasons for using subdomains and subdirectories vary, but the simple answer is website structure: how a site is organized for searching and crawling. Within the search optimization world, a battle rages over which practice, subdomain or subdirectory, is better for your website's SEO. The debate is long-running and full of varying opinions. Instead of waging that battle here, we will let Matt Cutts, Google's Head of Web Spam, elucidate.
There are internal links and backlinks. Internal links are links placed within a website that point inward, keeping traffic bouncing around the site; these are also called deep links. Backlinks are links pointing to a site from an external domain. As such, a link on this blog leading to the SEO page on Wikipedia constitutes a backlink for Wikipedia. The more backlinks a website has from other relevant websites, the higher the ranking and SEO benefit it will carry.
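As a toy sketch of the internal-versus-external distinction (the URLs and the `classify_link` helper are illustrative, not a real SEO tool), you can classify a link by comparing its hostname against your own domain:

```python
from urllib.parse import urlparse

def classify_link(href: str, site_domain: str) -> str:
    """Label a link as 'internal' (it stays within site_domain) or
    'external' (it points to another domain, i.e. a potential
    backlink for that other site)."""
    host = urlparse(href).netloc
    return "internal" if host == site_domain else "external"

links = [
    "http://www.informit.com/blogs/blog.aspx?uk=15-Critical-SEO-Terms",
    "http://en.wikipedia.org/wiki/Search_engine_optimization",
]
labels = [classify_link(u, "www.informit.com") for u in links]
print(labels)  # ['internal', 'external']
```

From Wikipedia's point of view, the second link above is a backlink; from this blog's point of view, it is simply an outbound link.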
Content is king in search engine optimization, yet not all content is created equal. Content that overuses keywords (targeted search terms) is not good. An example of keyword stuffing is this sentence:
Keyword stuffing is bad for your SEO rankings because if you want your SEO rankings to be good, positive SEO rankings will be impacted by keyword stuffing resulting in terrible SEO rankings.
Not only is that a terribly written sentence that holds no value for the reader, it also uses a single term far too often. It constitutes keyword stuffing and will lead you down the path of Black Hat SEO.
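To see just how overused that term is, here is a crude density check (the `keyword_density` helper and its interpretation are illustrative only; real search engines use far more nuanced signals):

```python
import re

def keyword_density(text: str, keyword: str) -> float:
    """Fraction of the words in `text` that belong to occurrences
    of `keyword` (which may be a multi-word phrase)."""
    words = re.findall(r"[a-z]+", text.lower())
    phrase = keyword.lower().split()
    n = len(phrase)
    hits = sum(1 for i in range(len(words) - n + 1) if words[i:i + n] == phrase)
    return (hits * n) / len(words) if words else 0.0

stuffed = ("Keyword stuffing is bad for your SEO rankings because if you want "
           "your SEO rankings to be good, positive SEO rankings will be impacted "
           "by keyword stuffing resulting in terrible SEO rankings.")
print(keyword_density(stuffed, "SEO rankings"))  # 0.25
```

A quarter of the words in that sentence are the phrase "SEO rankings", which is exactly the kind of pattern that reads as stuffing.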
Black Hat SEO is the term applied to negative search manipulation. Examples of Black Hat SEO include keyword stuffing, buying links, duplicate content and link farms. Essentially, anything deemed negative by Google and Matt Cutts can be slated as Black Hat SEO.
Duplicate content is just what it sounds like: content that is duplicated, verbatim, across your website. As a technology website, InformIT.com will have various blogs, articles and items covering the same general topic from different angles. That is alright. What isn't alright is taking this blog and duplicating it word-for-word in multiple locations on InformIT.com. Duplicate content serves no purpose. It is Black Hat SEO.
Like a new Android device or iPhone version, the Google algorithm goes through periodic updates. Just as Windows, iOS and OS X come in multiple versions, so does Google. Google periodically updates its search algorithms to continually improve SERPs. Check out the Moz Google Algorithm History for a full look. The current version is Penguin 3.0.
There are many other SEO terms that, if you are embarking on a career as an SEO, you will need to learn. We hope this small guide has started you in the right direction.
Remember, if you like this content and want to chat about it, you can reach me at the following social spaces: