SEO Best Practices for Beginners

Introduction to SEO

This is a simple overview of basic, industry-standard best practices that help search engines and web crawlers better index your site and content. Our focus is a value-based, content-first approach, and these tips are not a substitute for a formal SEO marketing strategy.

HTML, Site Structure, and URLs

Remember, web crawlers do not see your site's presentation and styling; they see only markup and data. These structural considerations will help crawlers better "see" and index the content on your pages.

Directory Structure and URLs

Create descriptive, easy-to-understand page and file names for the documents on your website. Avoid indecipherable, cryptic, or overly long URLs and file names. This is not only good UX practice, but it can also help search engines crawl your web content more efficiently.

Create a simple and efficient directory structure. Avoid unnecessarily deep subdirectory nesting and irrelevant directory names.

Consider using a lowercase-only naming convention for URLs, and use hyphens, not underscores, to join words in your file names. Google treats a hyphen as a word separator but treats an underscore as a word joiner, so "red_sneakers" is the same as "redsneakers" to Google. Google has confirmed this behavior, and choosing hyphens over underscores can carry a minor ranking benefit. And no camelCase!
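For example, here is a hedged sketch of URL styles to prefer and avoid (the domain and paths are placeholders):

Good: http://website.com/products/red-sneakers.html
Avoid: http://website.com/Products/red_sneakers.html (mixed case and an underscore)
Avoid: http://website.com/misc/2016/final/v2/sneakers.html (unnecessarily deep nesting)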

Angular One-Page Apps and AJAX Content

According to the latest Google recommendations, as long as your JavaScript and CSS files are crawlable, search bots will be able to render and index the views that are dynamically updated in one-page apps.
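In practice, this means your robots.txt must not block your script and style assets. A hedged example of rules to avoid (the directory names are placeholders):

User-agent: *
Disallow: /js/
Disallow: /css/

Blocking these directories would prevent Google from rendering your views the way a user sees them.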

Until recently, web crawlers had a difficult time indexing dynamically rendered content. Pre-rendering AJAX content with services like prerender.io and using the HTML5 pushState API have been popular methods for addressing AJAX SEO.
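For historical context, a page serving pre-rendered snapshots under the now-deprecated AJAX crawling scheme opted in by adding a fragment meta tag to its <head>:

<meta name="fragment" content="!">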

Additional Reading: Google Webmasters: Deprecating Our AJAX Crawling Scheme

Additional Reading: codelord.net: How to set up pushState with html5Mode

<meta> tag

<meta name="description" content="description content here..." />

This is optional, and not likely to have a big impact on your page ranking, but Google may use the description content as the snippet shown in search results.

<meta name="robots" content="nofollow" />

The above code added to the <head> will prevent web crawlers from indexing a page.

<title> tag

Use the <title> tag appropriately. The title tag tells both users and search engines what the topic of a particular page is, and will be featured in search engine results. Titles should be accurate, brief, descriptive, and unique for each page.
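For example, a hedged sketch of a brief, descriptive, page-specific title (the page and site names are placeholders):

<head>
	<title>Red Sneakers | Website.com</title>
</head>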

Links

Navigation is important to search engines. Use the <nav> tag to contain your navigation links.
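A minimal sketch of a <nav> block (the link targets are placeholders):

<nav>
	<a href="/">Home</a>
	<a href="/products">Products</a>
	<a href="/contact">Contact</a>
</nav>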

<a href="http://website.com" target="_blank">This is Anchor Text!</a>

Use descriptive text in your <a> links. Search engines pay attention to anchor text and take it into account when ranking. "Click Here" and "Read More" are not descriptive. Hint: adding the rel="nofollow" attribute to a link tells web crawlers not to follow it or pass ranking credit to its target.
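A hedged example that combines descriptive anchor text with rel="nofollow" (the URL is a placeholder):

<a href="http://website.com/shoe-care-guide" rel="nofollow">Beginner's Guide to Shoe Care</a>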

Additional reading: MDN: <a> tag rel Attribute

Images

Always include alt text on your images. Alt text gives search engines information about your images, and it is not only good SEO practice but also required for accessibility and standard good HTML practice.

In addition to using the alt attribute on your images, you can use the <figcaption> tag to include an image caption. See the example below.

<figure id="figure2">
	<img src="images/shasta.jpg" alt="My Dog Shasta">
	<figcaption>My Dog Shasta</figcaption>
</figure>

Don't forget to use descriptive file names. If you feel your images are relevant to your content, you can also create an XML Site Map for your images.
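XML Site Maps are covered in the next section, but here is a minimal sketch of an image Site Map using Google's image sitemap extension (the URLs are placeholders):

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
	xmlns:image="http://www.google.com/schemas/sitemap-image/1.1">
	<url>
		<loc>http://website.com/dogs</loc>
		<image:image>
			<image:loc>http://website.com/images/shasta.jpg</image:loc>
		</image:image>
	</url>
</urlset>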

Site Maps and robots.txt

Creating site maps is recommended for larger sites with a lot of content. Create two site maps, one for your users (HTML), and one for search engines (XML).

An HTML site map is a simple page on your site that displays the structure of your website, usually as a listing of the pages on your site. Visitors can use this page if they are having trouble navigating your site.
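A minimal sketch of an HTML site map page, written as a nested list of links (the page names are placeholders):

<ul>
	<li><a href="/">Home</a></li>
	<li><a href="/products">Products</a>
		<ul>
			<li><a href="/products/red-sneakers">Red Sneakers</a></li>
		</ul>
	</li>
	<li><a href="/contact">Contact</a></li>
</ul>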

An XML Site Map is a file that you can submit to search engines via their webmaster tools to help web crawlers index the content on your site. The XML Site Map contains metadata about your site and its content. You can write this file in XML yourself or use a third-party tool to generate one. Google and Bing Webmaster Tools both provide interfaces for submitting XML site maps.
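A minimal sketch of an XML Site Map that follows the sitemaps.org protocol (the URL and date are placeholders):

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
	<url>
		<loc>http://website.com/</loc>
		<lastmod>2016-01-01</lastmod>
		<changefreq>monthly</changefreq>
	</url>
</urlset>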

robots.txt

You can restrict crawling of pages where it is not needed or wanted with a robots.txt file. This file must be named "robots.txt" and must live in the root directory of your site. The "robots.txt" file tells search engines whether or not they can access, and therefore crawl, specific parts of your site.

# Rules apply to all web crawlers
User-agent: *
# Block any URL path starting with /images/ or /search
Disallow: /images/
Disallow: /search

Additional Reading: Google Webmaster Blog: Speaking the Language of Robots

Additional Reading: Google Search Console Help: Learn about robots.txt