There are many free solutions on the internet that allow even a novice to create a website. Content Management Systems (CMS) like Joomla and Drupal, or blogging platforms like WordPress and Blogger, are very easy to use. With the latter you don’t even need hosting, you are almost guaranteed to find a template (design) that appeals to you, and you can be up and running in less than 10 minutes. That’s all good, except that there is a lot happening behind the scenes that you are not aware of but should be.

Organic traffic is basically visitors sent to your website by search engines. Say you’ve written about “where to get the best deals on laptops” and someone searches for that; if Google thinks your webpage is a good match for the user’s query, it will refer the user to your site, and that’s a good thing. Search traffic is how most people get their sites known on the web, so it is important that your website is properly optimised in that respect.

However, misconfiguration or inadequate knowledge of SEO can lead to a loss of potential rankings. Say, for example, you have a page that should really be ranking #1 for a search term but is currently on page 5 (#48). You are missing out on a lot of search traffic. A deep analysis of your website then reveals a lot of duplicate content, which has caused a penalty you were not aware of.

This happened to one of my clients. She was using Blogger for her site and had written 10 blog posts to help people understand the benefits of her product. When I investigated, Google had indexed 90 pages from her site, which showed there was a problem. What happened was that Blogger uses a feature called “Labels”: tags that associate words with a blog post. For instance, after writing a post on “how to fix a leaking water tap”, you could tag it with “tap”, “leak”, “fixing tap”, etc. Each label generates its own listing page, so this resulted in several pages with the same content. In a post-Panda world, that is an absolute NO-NO: duplicate content should be removed.

Using the robots.txt file to tell search engines which parts of your site they can index

Blogger already provides a default robots.txt file for you, but it was misconfigured. Blogger knows that search/label pages do not add any value to the blog and lists them in the robot exclusion file, but the entry was wrong, so Google was interpreting it as an invitation to index them. I created a custom one for the site with the exclusion as follows:

User-agent: *

Disallow: /search

This basically means that I don’t want any search engine to index any part of the site whose path starts with “/search”. So /search-me.html, /search/, /search/labels/ and /search/dummy/content.html would all be excluded.

If you choose Disallow: /search/ instead, everything within that folder will be excluded, but /search-me.html will not.

You can test specific URLs of your website against your robots.txt with this tool.
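If you prefer to test locally, Python’s standard library ships a robots.txt parser that applies the same prefix-matching rules. Here is a minimal sketch (the domain is just a placeholder) checking the rules above against a few sample URLs:

```python
from urllib.robotparser import RobotFileParser

# parse() accepts the rules as a list of lines, so no network access is needed
rp = RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /search",
])

# Any path starting with /search is blocked; everything else is allowed
for path in ["/search-me.html", "/search/", "/search/labels/", "/about.html"]:
    allowed = rp.can_fetch("*", "https://example.com" + path)
    print(path, "allowed" if allowed else "blocked")
```

Running this shows the first three paths blocked and /about.html allowed, matching the prefix behaviour described above.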

We’ve also prevented archive pages from being indexed by adding a bit of code to the Blogger template. This should help the site rank better for the keywords it is targeting.
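For reference, one way to do this is with a Blogger conditional tag in the template’s head section. This is only a sketch, assuming Blogger’s standard pageType condition; your template may differ:

```xml
<b:if cond='data:blog.pageType == "archive"'>
  <meta content='noindex' name='robots'/>
</b:if>
```

This emits a noindex meta tag only on archive pages, so crawlers can still follow links through them without adding the duplicate listings to the index.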

If your robots.txt is wrongly configured, access to your whole website could be denied, and you wouldn’t appear in Google search results at all.
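A single stray character is enough to cause this. The following (deliberately broken) robots.txt tells every crawler to stay away from the entire site:

```
User-agent: *
Disallow: /
```

Always double-check that a bare “/” hasn’t slipped into a Disallow line before publishing the file.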

Although free CMSs are great at first glance, it is important that you check, or have someone who knows their stuff check, that you are not inhibiting your own website’s success.

PS: If you’re using Blogger or WordPress, make sure your post titles come before your blog name in the page title, as this helps you rank better.


Posted in: SEO
