Exact Match Domain Filters on New Websites

[dropcap color=”black”]S[/dropcap]omething I have noticed over the last few months, and even into late last year, gets back to the direction in which Google has been taking its algorithm recently.

[highlight]Google expects sites to showcase brand signals[/highlight]

One thing I plan to take a deeper dive into on my blog in the coming months is this idea of portraying brand signals for a small mom-and-pop site and/or affiliate site (basically any site that has a hard time naturally looking like a Brand entity). It is critical going forward to make your site look bigger than it really is if you want to compete organically.

So back to the topic at hand: thinking about starting a new site? Like that exact match variation because you read an article from 2009 that says they are the way to go? Well, encouraging users to search for your domain name and stick around for a while is a great place to start. That’s one tip; there are many other signals you can portray. But it certainly seems that IF the algorithm doesn’t see search volume for your domain, users don’t stick around for long, and there isn’t much natural link variation taking hold, then you are going to struggle to get over the hump.

What do you think? Do you still roll with exact match domains if they make sense? Or do you completely disown them and go for a memorable name that people will search for after visiting your website?

What To Make of the New Google Webmaster Tools Site Crawl Data

[dropcap color=”black”]G[/dropcap]oogle recently gave webmasters a nifty little feature inside of GWT that allows you to view your Site Crawl health. It’s a helpful little feature for any SEO practitioner to at least take a look at. In the past one could approximate this data with a proprietary tool or via manual data collection and trending over time; however, the site: operator at Google is anything but reliable, so this feature can be handy.

I decided to open up a few of the websites I have access to in GWT to take a look at this data, and I noted a few things that I thought I would share.

Now, if you are not sure how to find this data, here is a quick screen capture showing how to navigate to it.

How To Find Google Webmaster Tools Site Crawl Data

Legend (via Google)

Total Index: The total number of URLs from your site that have been added to Google’s index.

Ever Crawled: The cumulative total of URLs from your site that Google has ever accessed.

Not Selected: URLs from your site that redirect to other pages or URLs whose contents are substantially similar to other pages.

Blocked By Robots: URLs Google could not access because they are blocked in your robots.txt file.
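As an illustration of that last item, a single overly broad rule in robots.txt can block far more URLs than intended. The paths below are hypothetical, purely to show the pattern:

```
User-agent: *
# Intended: keep crawlers out of internal search results.
Disallow: /search/
# Accidental: the missing trailing slash blocks every URL beginning
# with /p (e.g. /products/, /pages/), not just a single directory --
# which could account for hundreds of 'Blocked By Robots' URLs.
Disallow: /p
```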


Graph #1

Graph #2

Graph #3

Graph #4

Google Webmaster Tools Site Crawl Takeaways

  • ‘Total indexed’ – this is what you want to focus on over time if your site is in content-creation mode. Ideally this line should trend up. Keep a close eye on its relationship with ‘not selected’.
  • ‘Ever crawled’ – somewhat interesting when taken in context with ‘total indexed’ and ‘not selected’. In the most efficient scenario, ‘ever crawled’ would sit near ‘total indexed’: the thinking is that you give Googlebot exactly what it needs and nothing more, saving it time and resources. As you can see above, however, the sites I picked never really had ‘ever crawled’ near ‘total indexed’, since it is in fact a cumulative trend line while the others are not.
  • ‘Not selected’ – if this line segment weren’t in this tool, I probably wouldn’t even be writing this post. It is the closest glimpse Google gives you, via THEIR data, of how your site is viewed in terms of indexable content. What might be seen as indexable content, you might ask? Obviously the more unique it is the better, but even rehashed content can be seen as fit and avoid being grouped into this category.
    • If your ‘not selected’ line is above your ‘total indexed’ line, you know you have an issue at hand that deserves looking into (a quick way to check for this programmatically is sketched after this list). Many CMSes can create this issue right out of the box. You can see this is the case in Graph #2 above.
    • Overall, I would want this segment of the chart to stay as near the bottom as possible. Graph #3 above is a good example of what I would like to see: a site with a lot of indexed content and proportionately very few pages ‘not selected’. To me this indicates a site in good health: you are controlling the robots appropriately and not wasting their time with ‘bad content’.
    • It would be interesting to compare the graphs of sites seemingly hit by Panda against those of sites that have not been hit. Remember, Panda was Google’s attempt to remove thin content from its index.
  • ‘Blocked By Robots’ – an obvious one that can alert practitioners to problematic situations. Didn’t mean to have 500 pages blocked by robots.txt? This line can help.
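If you would rather monitor these relationships programmatically than eyeball the chart, here is a minimal sketch in Python. It assumes you have exported the Index Status counts into a CSV with hypothetical column names (date, total_indexed, not_selected, blocked_by_robots); GWT’s actual export format may differ, so adjust accordingly.

```python
import csv

CSV_PATH = "index_status.csv"  # hypothetical export of the Index Status chart

def check_crawl_health(path):
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            date = row["date"]
            total_indexed = int(row["total_indexed"])
            not_selected = int(row["not_selected"])
            blocked = int(row["blocked_by_robots"])

            # Red flag from the takeaways above: more pages 'not selected'
            # than indexed usually points to duplicate or thin URLs,
            # often a CMS quirk (see Graph #2).
            if not_selected > total_indexed:
                print(f"{date}: WARNING - 'not selected' ({not_selected}) "
                      f"exceeds 'total indexed' ({total_indexed})")

            # A sudden jump here usually means an unintended Disallow rule.
            if blocked > 0:
                print(f"{date}: {blocked} URLs blocked by robots.txt")

if __name__ == "__main__":
    check_crawl_health(CSV_PATH)
```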

Danger Ahead: Google Will Burn You

It takes a lot to get me to blog nowadays. To say my plate is full is an understatement, between the lead-gen projects I have going on and the agency job at Ignite. An article that appeared a few days ago, however, has caused me to break my silence. In fact, it’s not even the article that makes my blood boil; it’s one comment shared by the owner of Blogcatalog.com (h/t @johnandrews):

http://www.inc.com/magazine/20100901/how-google-cost-me-$4-million.html#comment-74033863

This is a pretty tough subject to stomach when you see webmasters getting decimated by Google. I saw this same experience firsthand when I worked in the ticket industry. In fact, in 2008, Google’s spam team made a point of showing the world that they were serious about enforcing the massive PR campaign against paid links that launched in late 2007. Since that time, many websites that once filled their pockets with cash have been ‘dinged’, ‘removed’, ‘penalized’, etc. from the SERPs.

What happened to Blog Catalog is a case in point of why it is so critical to stay lightweight when dealing with Google. Here are a few tips I want to share to help you avoid these types of scenarios:

    1. Don’t let Google Organic be > 75% of your revenue stream. Period. Just don’t do it. (A quick sketch for monitoring this share appears after this list.)
    2. Don’t build giant enterprise-level sites and utilize tactics that work so extremely well that you literally own the SERPs. Google doesn’t like this even if you are white hat. You don’t have the golden ticket like Mahalo, Wikipedia, Demand Media, etc., so get over it. Google doesn’t care about you, as you are not helping them make any money.
    3. If the above is true and you are relying on Google for a lot of your income, then it would be wise to actually give some of it back through Google AdWords. Figure out how to make ROI on AdWords. Google is going to be much less likely to kick you completely out of the SERPs if they see you are spending a million dollars a year with them. Trust me, whoever makes the call to remove you from the SERPs will check to see whether you are a paying customer.

A Google representative has since clarified they would never do this. 😉

    4. Build multiple sites in the same niche, expand into different niches, and continue to build, build, and build. I for one will never rely on one site to pay my bills. It’s just nonsense; there is no need. The internet is not a traditional brick-and-mortar business, and it’s very easy to start a new venture: buy a new domain name, host the content, and get it rolling in a week, versus the 9 months it takes to build a new brick-and-mortar location.
    5. Bottom line: don’t be too aggressive, and don’t be too successful. It’s much easier to be the #3 guy in your vertical than the lone wolf sitting at the top, pushing the limits of what is possible and ranking across the board for mid-tail competitive terms.
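To put a number on tip #1, here is a minimal sketch that computes Google organic’s share of revenue and flags over-reliance. The channel names and figures are hypothetical; in practice you would pull the real numbers from your analytics export.

```python
# Hypothetical revenue-by-channel figures; pull the real numbers
# from your analytics platform's export.
revenue_by_channel = {
    "google_organic": 82000.00,
    "paid_search": 9000.00,
    "direct": 14000.00,
    "referral": 6500.00,
    "email": 3500.00,
}

ORGANIC_CEILING = 0.75  # tip #1: keep Google organic under 75% of revenue

total = sum(revenue_by_channel.values())
organic_share = revenue_by_channel["google_organic"] / total

print(f"Google organic share of revenue: {organic_share:.1%}")
if organic_share > ORGANIC_CEILING:
    print("WARNING: over-reliant on Google organic - diversify traffic sources.")
```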

Every webmaster has a different motto, a different purpose. There is no cookie-cutter formula for making money through Google. If you want to make money through Google, unfortunately, you have to play by their rules. Trust me, I am not any more excited to say that than the next person. Building a wall of defensible traffic is one of the smartest moves any online marketer can make in today’s game.


Google AdWords at the Bottom of the SERPs?

It appears Google is testing ad placements near the bottom of the page. I am seeing them at the bottom by themselves (when there is no placement at the top), as well as alongside an accompanying placement at the top. It will be very interesting to see how this one plays out, and whether they institute it across the board.

It appears the last time they tested this was back in ’05.

Here is a screencast:

Google Knol Does Know How to Search After All

Google Knol recently launched. If you have not started writing articles for it, well then, hurry the heck up before you get passed by all the other folks who understand the need to get an early start. My latest post was on Increasing Conversions; I started it off with 68+ tips and tricks, and I fully welcome others to add their thoughts to the article.

Now, you might have noticed that after writing an article, you could not find it or anyone else’s. It appeared the “search feature” was not working; the only way to see your article through searching was by being logged in. In fact, a gentleman at the LA Times thought nobody was even writing articles, which obviously was not the case. It turns out it takes “time” for Knols to become searchable, according to a Google representative.

Now that we know Google knows how to search, 😉 the remaining question is how well the articles are going to rank. I have a strong feeling these pages are going to rank well. In fact, they have already started doing just that.

[Screenshot: a Knol article ranking in the SERPs]

[Screenshot: a second Knol article ranking in the SERPs]

Now, I know these are somewhat “long-tail” terms, but this is a brand-new subdomain that has been live for only about a week. Give Knol some time to build up backlinks and age, and you can start to understand the ramifications of all of this.

Read More on this topic:

Google Knol Review