It’s election season here in Israel, and many politicians whose views I oppose are showing me Facebook ads.
This is a guest post by Stacey Armstrong, Web Analytics consultant
Stacey’s Bio: I have seven years’ experience as a web analytics professional, providing analytics solutions for Microsoft Office, MSN Entertainment, T-Mobile, HomeAway, and Les Schwab. My goal as a web analyst is to identify the story within the data.
When I notice a dramatic change in analytics data I first take a look at the analytics tagging to make sure that the change wasn’t caused by a tracking issue.
I use an HTTP tool to look at the analytics tagging passing with the page load or link click. (more…)
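As a hedged illustration of what that inspection looks like: analytics tags usually travel as query parameters on a tracking beacon, so any URL parser can decode them. The beacon below is a made-up example in the style of a Google Analytics hit; the parameter names are assumptions for illustration.

```python
from urllib.parse import urlsplit, parse_qs

# A captured analytics beacon (made-up example in the style of a GA hit).
beacon = "http://www.google-analytics.com/collect?v=1&tid=UA-12345-1&t=pageview&dp=%2Fpricing"

# parse_qs decodes the query string into a dict of parameter -> values.
params = parse_qs(urlsplit(beacon).query)
print(params["tid"][0])  # which analytics account the hit is credited to
print(params["dp"][0])   # "/pricing" -- the page path being reported
```

If the expected parameters are missing or wrong after a site change, the "dramatic change" in the reports is a tagging issue, not a real traffic shift.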
Can You Tell Me If This Is A New Algo Loophole? Does Link Disavowal Enable Mafia-Style Link Laundering?
The new link disavowal tools seem to create a unique, certainly unintended opportunity for SEOs to get short term AND long term results via “link laundering.” This is a new algorithm loophole it seems, spam which resembles the mafia’s (and Russian and Iranian governments’) money laundering tactics. I’d love to hear your comments on this. (more…)
Right after the Penguin update, I sent the following email to people who downloaded a <a href="http://book.seoroi.com">free chapter</a> from my book.
It shares 9 concrete tactics to protect your business and grow beyond Google so that a Google ban would be the loss of A marketing channel, not The marketing channel.
There are tactics for ecommerce, ad publishers (CPM, CPC, CPA), lead generators and non-profits who want to fire-proof themselves like the mythical phoenix.
“It’s spring and Google’s whacked another group of link buyers, and we in the SEO world again buzz about the winners, losers and what to do next.
But even whitehats who don’t buy their links MUST ask how to protect themselves from Google’s whims, as must every open-eyed SEO.
Q: Why worry if you’re whitehat? (more…)
Answering prospects’ questions is key to selling them, and sometimes it’s frustrating to try and guess people’s questions/objections from scratch. I noticed the following great examples of marketers answering questions at exactly the right time – the point of action – and thought others would benefit from learning these objections and answers.
Andy Hagans used to run TropicalSEO.com, and I’ve reposted his popular quiz as to whether your site was defensible and updated it for 2012.
The Multi-Armed Bandit Of CRO Doesn’t Grab Higher Conversion Rates – It Grabs Lower Conversion Rates
Steve Hanov recently suggested a thought-provoking idea: applying multi-armed bandit algorithms to A/B testing, and Visual Website Optimizer (VWO) did an interesting and accessible-to-non-statisticians statistical analysis and explanation of Steve’s method, to evaluate the accuracy of his claims. While I loved their analysis, VWO’s conclusion is a howler! (more…)
I spend an inordinate amount of time deleting spam emails from my Gmail inbox.
Update: Someone said I shouldn’t try to get these guys penalized. Here’s my response:
They’re wasting my time daily and I should stand back quietly? It’s called protecting what’s yours. You don’t think house alarms are bad, even though they might embarrass thieves or get them caught, do you? I’m protecting my time from thieves.
If my email is public and scraped, Gmail should still block the spam people send me. Since they’ve been horrible at stopping this for at least a year, and probably more like three years, I’m going to start posting these emails here and hopefully Google will improve its spam filters as a result. As I get more spam emails, I’m going to keep posting them here. And maybe these people will see their own emails scraped and rendered useless by their fellow spammers… (more…)
Google is cutting the ROI on ecommerce SEO and has been for 8+ years. If ecommerce merchants don’t understand the changes below, they’re going to lose market share to savvier competitors.
Read on for what you need to do to survive and thrive. (more…)
Steve recently got a warning from Google that a number of pages were returning soft 404s, and the next week, his traffic dropped 10%.
The following week, another 30% was gone. What are these dramatically harsh “soft” 404 errors? How do you fix them?
What is a soft 404? (more…)
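For readers who want to check their own pages while reading: a soft 404 is a page that tells the visitor "not found" but still returns HTTP 200, so crawlers treat it as a real (thin) page. A minimal heuristic sketch — the phrase list is my assumption for illustration, not Google's actual classifier:

```python
# Phrases that suggest a "not found" page (an assumption, not Google's list).
NOT_FOUND_PHRASES = ("page not found", "no longer available", "does not exist")

def is_soft_404(status_code, body):
    """Heuristic: HTTP 200 with 'not found' wording in the body is a soft 404."""
    if status_code != 200:
        return False  # a real 404/410 is fine; search engines understand those
    text = body.lower()
    return any(phrase in text for phrase in NOT_FOUND_PHRASES)
```

The fix is to make dead URLs return a real 404 or 410, or 301-redirect them to a relevant live page, rather than serving an error message with a 200 status.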
Use Advanced Segments in GA to Discover SEO, Social Media and Ecommerce Insights
Google Analytics (GA) is a tool that, while very useful to basic users who just want a quick look at their site’s statistics, is meant for power users that can take advantage of all of GA’s hidden features. Advanced Segments is an example of that hidden power. (more…)
This is a guest post from Everett Sizemore, who is an eCommerce SEO consultant operating off his 38-acre farm in the Blue Ridge mountains of Virginia. He enjoys gardening, collecting eggs and tackling tough SEO problems.
Want to fight the scrapers outranking you with your own content post-Panda?
Google is losing the war against content scrapers and we’re the ones paying the price. It has gotten even worse since Panda, despite efforts to fix the problem. I know of several dozen websites that are being outranked by scrapers.
While Google might hope we’re going to do their job for them with tags like rel=canonical and rel=author, this problem isn’t going away any time soon. You know things are bad when powerhouses like TIME.com are outranked by sites like this.
Fortunately, there are many ways to keep sites from scraping your content, (more…)
This is a guest post by Phil Golobish, Senior SEO Consultant at Slingshot SEO. When he’s not writing posts for Gab, Phil helps Slingshot achieve digital relevance for deserving brands. You can follow Phil at @saintphilip or +Philip Golobish.
In 2006, AOL accidentally released a ton of Google click-through rate data. Clever marketers then used this data to estimate the traffic a site could receive in any ranking position, and to forecast SEO ROI.
Since then, Google has made countless algorithm changes, incorporated personalization options, and blended results with images, videos, news, etc.
Given these changes, are the AOL CTR numbers still relevant? More specifically, what impact have blended search results had on CTR? Read the study Slingshot performed after the jump! (more…)
In a thought-provoking article, Russ Jones of Virante asks whether, instead of manually checking through competitors’ backlink profiles, it’s perhaps possible to automate the analysis, at least to dig for paid links. He suggests that by using SEOmoz’s link index, and comparing the numbers on some backlink profile metrics against those of Wikipedia [which has never manipulated its backlink profile], it’s possible to get an idea of how natural a site’s backlink profile is. (more…)
If you’ve read about Panda, you’ll know that the quality of your writing and editing is a key notion targeted by Google’s update. (more…)
The Google Panda Update hit many webmasters like a freight train, leaving a long line of quality websites as collateral damage. While the Panda update did have the noble cause of weakening the grip that content farms had on the SERPs, many high-quality, content-rich websites were swept out of Google in one fell swoop. To address the outrage found across the blogosphere, Google has provided a list of questions to ask yourself if you want your rankings to return.
From the list of questions Google provided, I’d like to propose 3 cheap solutions that could help get you back in Google’s favor.
1. Would you be comfortable giving your credit card information to this site?
I think it is a little ridiculous of Google to even consider this as something to base rankings on, but I won’t get very far arguing with them (I’ll leave that to Aaron Wall).
The question is, what could we possibly do to have Google think the answer for this is ‘yes’ for our website? Remember, they are doing this via algorithm, so it’d be pretty hard for them to analyze our design or look for other superficial indicators of trust.
However, there are ‘tangible’ items that they can check for to indicate trust, and what makes the most sense to me is for Google to check for the ‘Verisign Verified’ seal. It’d be pretty easy for Google to look for this, and if it is there, the website gets the box checked on this question. At ~$19/month, it’s a minimal investment if your website was previously making a great deal of money but took a big Panda hit.
2. Does this article have spelling, stylistic, or factual errors?
This question does have merit, but I really doubt Google’s sophistication to check for deep stylistic and grammar errors. While the built-in Microsoft Word grammar check is ok and probably on par with what Google would be able to do (my speculation), Apple’s built in grammar check is atrocious.
Rather than rely on these built-in tools, my preference of late has become a very comprehensive ‘cloud’ grammar check tool called Grammarly. This tool grades the grammar of an article, performs comprehensive content reviews, and offers rich suggestions for improving the quality of an article. At around $10/month, it is well worth the investment and I’m pretty confident it’s much more comprehensive in reviewing content than Google could ever be (because grammar isn’t the space Google operates in full-time).
3. Is this the sort of page you’d want to bookmark, share with a friend, or recommend?
The recent ranking correlation data gathered and analyzed by SEOmoz is nothing short of shocking. Of all of the metrics they track, Facebook Shares (not ‘Likes’) have the highest correlation to rankings. Google’s question on ‘sharing’ here, plus this correlation data, leads me to believe that Facebook and other social shares could be a critical signal in the Panda algorithm. There are hundreds of tools to help in this space, but I’d like to point out a few of my favorites… and the best part is, these are free!
- ShareThis – Put this button at the end of your article. It’ll give people an opportunity to easily share your content across a number of social and bookmarking sites.
- fbShare.me – Facebook is making it more difficult to do ‘sharing’ and instead favors their ‘Like’ button. Sharing on Facebook means that it can show up in another user’s ‘Top News’, whereas just ‘Liking’ a piece of content will not show up in one of your friends’ ‘Top News’. This handy little widget does all of the work for you to get ‘Share’ on your site.
In addition to these two tools, increasing your overall engagement in social media will ensure that your content constantly stays in people’s various social streams.
So there you have it, 3 tools that can help you beat the Panda update. Matt Cutts has indicated that the Panda algorithm is not run daily, so it could take some time to bounce back after implementing all of the various changes being suggested.
This is one of the most sweeping updates Google has performed, and sites caught in the cross-fire can expect numerous tweaks and adjustments by Google as time goes by. By focusing on what Google is saying publicly about the update, we can attempt to make educated guesses on how to satisfy the various pieces of the algorithm.
Learn more about Brian Patterson and his ORM work here.
I recently came across what is to me a new SEO problem.
A site I consult with has some thin pages with a handful of ads at the top, some relevant local content sourced from a third party beneath that…
and a bunch of inbound links to said pages. Not just any links, but links from powerful news sites. My impression is that said links are paid (sidebar links, anchor text… nice number of footprints.) (more…)
Google’s promoting scrapers over Time magazine. Seems Google’s exception list isn’t as robust as one might have thought, or perhaps it’s that the news business just ain’t what it was…
Ideas why this is happening?
Update: 1 idea is that it’s tied to the latest algo change. See also Mashable’s status.
One guess I’d have is a high ratio of surrounding junk to the main body of content. That would fit under the layout/design/user experience angle targeted by the latest algo update.
If you have a legitimate need, read this review of three web screen scrapers.
Many businesses may be confused or worried with all of the changes that have been happening in Google’s local SERPs lately. Google literally turned local search on its head about two months ago, and as the dust settles, many of the long-term effects are becoming clear. One specific change which I’d like to cover here has many of our clients filled with pride, and has helped us close some new local SEO client deals. (more…)
Fastcase is a new legal research tool that is bringing cool lateral thinking to the traditional problem of serving up legal search results. The results are some highly impressive innovations both in the algorithms that sort and rank the results, as well as in how the results are presented.
Ed Walters, CEO of Fastcase, gave me an interview to describe their process.
1) You found that legal research has a three way tension between cases with the most citations, cases from the highest courts, and recency of the decision. How does your algorithm solve that problem? (more…)
Trying out Google instant, I saw a particular query on which Google blanked. Notice the size of the below screenshot. (more…)
While googling around to help my sister Dahlia because her Gateway PC broke down (again … I think she got that 1/1,000,000 that makes it through QC when it’s a lemon), I saw the following search result. It’s entirely made up of forums, which is the first time I’ve ever seen such a thing (at least, when not searching for forums or info about them). Screenshot after the jump. (more…)
“I saw someone write to be careful or use the ilb in moderation to avoid getting penalized or something like that.
What are your thoughts on best practices?” (more…)
Just saw this piece via the ever-industrious Bill Slawski: Google’s Acquisition of MetaWeb & its Named Entities technology.
Reading the curiosity-arousing article on SEL, “The Phone, Calling,” I noticed that the use and presence of call tracking numbers, toll-free numbers and other non-main-line phone numbers could cause trouble for search engines.
“First, these numbers throw a monkey wrench in business identification. Second, they could expire, inadvertently creating a dead-end for a consumer. Publishers today struggle with how to accurately identify an actual business when many phone numbers are involved.”
The easiest solution, imho, is (more…)
In response to Google’s efforts to block access to Latma’s We Con The World parody, which is another proof of Google’s political bias and unreliability, I am going to go a month without Google search. I look forward to seeing how this works, and will report back. If you want to make a widget to this effect, or do the same, please feel free, and do let me know. (more…)
The situation: Your competitors have inbound links that are broken because of typos, changes in URL structures etc.
The common link building commentary: Most SEOs who’ve been around the link building block will tell you that it’s an opportunity to ‘build links’ for free – to pick low hanging fruit. Just drop the site owner a little email and voila – good as new. More juice for you! (more…)
I’ve just been reading some of SEOmoz’s Pro member tips and seeing their suggestions to use affiliate URLs that use hashtags, also referred to as the pound sign, number sign or hash mark.* For example, site.com/#aff123 . (more…)
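For background on why the hash matters: everything after the # is a URL fragment, which the browser never sends to the server, so it stays out of server logs and doesn’t create duplicate URLs for crawlers. A quick illustration with Python’s standard library, using the post’s example URL (with a scheme added so it parses):

```python
from urllib.parse import urlparse

# The affiliate ID rides in the fragment, after the "#".
url = "http://site.com/#aff123"
parts = urlparse(url)

print(parts.fragment)  # "aff123" -- visible only to client-side JavaScript
print(parts.path)      # "/" -- all the server ever sees in the request line
```

This is why a hash-based affiliate parameter has to be read and recorded client-side (e.g. by a script on the landing page) rather than by the web server.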
The most common way to buy links is to find a site that shows up in Google’s index, then contact the owner asking them to add your link in exchange for monetary compensation. What is often not considered is that published pages don’t change very often. (more…)
I often take screenshots as I browse the web, and I’ve found some weird rankings. This isn’t really outing, just some odd things I’ve noticed around. I’d love to hear your theories as to what’s behind each of these. (more…)
Question: Does it matter for my site’s SEO in Google or other search engines if I use www in the domain name? For example, do I need to use http://www.example.com or can i use http://example.com ?
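The answer commonly given: either form works for rankings, as long as you pick one hostname and 301-redirect the other so links and history consolidate on a single canonical host. A minimal sketch of that canonicalization logic — example.com and the choice of the www form are placeholder assumptions, and real sites usually do this in server config rather than application code:

```python
from urllib.parse import urlsplit, urlunsplit

CANONICAL_HOST = "www.example.com"  # assumed choice; the bare domain works too

def canonical_redirect(requested_url):
    """Return a 301 target if the request used the non-canonical host, else None."""
    scheme, host, path, query, frag = urlsplit(requested_url)
    if host == CANONICAL_HOST:
        return None  # already canonical; serve the page as-is
    if host in ("example.com", "www.example.com"):
        # Rebuild the same URL on the canonical hostname.
        return urlunsplit((scheme, CANONICAL_HOST, path, query, frag))
    return None
```

The design point is that both hostnames keep working for visitors, while search engines see exactly one version of every page.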
PageRank was supposed to be a probability distribution of the likelihood of someone clicking a link.
When that changed, (more…)
Actually, this post should be titled: “How did I cloak my way to lower rankings?!” Because the truth is that it was completely unintentional – as evidenced by it causing my traffic to drop like a rock. (more…)
A client’s site ranks for some keywords that aren’t core to any of its themes, and it keeps getting bounced back to page 2. This is my hypothesis of what’s going on.
The page ranks by fluke, really. It’s propped up by site authority and the unique content having attracted nice links. The problem is that the content only tangentially touches on the keywords it’s ranking for, but doesn’t quite satisfy the user intent of the query. As a result, the bounce rate is astronomical. (more…)
Does Trademark Productions steal other people’s content, edit it for the sake of passing through search engine duplicate content filters, and try to pass itself off as experts you should trust? (more…)
Google has officially announced first-click free, its new attempt to fist clueless fools for all they’re worth.
Huh? What? Read on to find out how Google is encouraging content producers to lose a little more control over their content in its ongoing efforts to fight copyright. (more…)
I’ve just seen this in Google on an experiment an acquaintance of mine is running (she doesn’t blog on SEO, hence it being here; she also OKayed me writing this up). A recently registered domain, without having any links pointing to it, is now indexed. (more…)
“What does Google want in SEO” is a common question that many pretentious SEOs claim to know the answer to. I’m about to join their number. Google has moved beyond measuring SERP quality based on relevance and is now aiming to provide the best user experience possible.
Other titles I was considering for this post were:
Why Matt Cutts’ “Make Content For Users” Was Very Insightful
How Googlers Measure SERP Quality – Relevance Is No Longer King
Sorry Rand, But The Googlers Were Very Expressive, IMHO.
They are aiming not just for relevance, but overall positive user experience.
That’s what’s behind labelling of cracked sites in SERPs. That’s the reason for Universal search. That’s why AdWords integrates with GA, and GA with Feedburner. That’s why Quality Score counts loading times.
That’s probably what’s behind Knols (remember, Wiki + AdSense is good user experience ;).) For the inspiration to this post, lookie : what google wants/researching the territory.
David Mihm – web designer and local SEO extraordinaire – recently asked me to participate in his local SEO ranking factors survey. And it got me thinking about how a search engineer might weigh the usefulness of any particular factor for ranking sites. Let’s see what the thought process in this part of a search engineer’s workday is like. (more…)
Welcome Search Newz visitors! It seems that Search Newz’s syndicated version of my article, “If You Listened When Google Announced Submarine Crawling,” which follows up the one you’re seeing now, forgot to link to an important Matt Cutts video. So there’s the link to help you out. Anyways, on with the show – here’s what submarine crawling is all about, as interpreted from Matt Cutts’ explanations.
Matt Cutts’ post and this Webmaster Central post recently explained that “high quality” sites are being given special treatment – submarine crawling.
We all know that links from high quality sites are more valuable than those from average or mediocre sites. Now, Matt and Google have given us a new measurement for finding high quality sites – submarine crawling – and thus high quality link prospects.
Russian Submarine courtesy of Orpheus Grey.
So WHAT is Submarine Crawling? (more…)
And it isn’t Google Analytics, as I mistakenly thought. So I need to apologize to Google (and to you, my readers) for the error/false accusation and getting people worried for nothing.
Even more humbling, both Matt Cutts and the official Google Webmaster Central blog have called yours truly’s site “high quality.” So let’s see … (more…)
Formal writing is really frustrating because it requires you to dress up simple ideas in complete sentences, edit your work for grammar and spend an unholy amount of time writing what it would take you a few minutes to express verbally. When you come up with new ideas or discover new stuff as often as I do, that can get really frustrating.
So I’m hereby inaugurating what I hope will be a regular column here: Scratchpad (scratchpad picture courtesy of one eye fish). I’ll share my latest ideas, in a raw scratchpad type format and be paying even more attention than usual to your feedback. (The Post #88 reference was the pre-naming version of this post’s title and I found it quite appropriate to an informal column.)
For this first issue, I’ve got
Google Maps Folds Google Earth Booking Engine, Reviews, Own Pics Directly Into SERPs + A Big List of Hotel Review Sites
Google Maps has been doing a lot of testing and playing with its search engine results pages (SERPs) lately. I’ve seen the EarthBooker hotel booking engine tightly integrated with many hotels. At the same time, when I performed a longtail search for a hotel to stay at during the SMX West conference, I found some reviews (or other stuff Google seems to find relevant) folded directly into the SERPs (you used to have to click more info to see the reviews). And there are also pictures being folded in from Google’s Panoramio.com. (more…)
Google is broken because PageRank no longer models what it did in Brin and Page’s foundational paper. (more…)
I’m often asked about Google’s Supplemental Index (SI) and how to get pages out of it. Consider this post directions to the exit, the light at the end of Google’s dark tunnel, the “this way to the egress,” if you will. (Ok, so I like my hype, but I promise it’ll be worth it.)
If you’re impatient, click here to get the solution to getting your page/pages out of the Supplemental Index and skip the background info. I think you’re better off with a full understanding of the SI, but I’ll leave it up to you.
As Aaron Wall puts it in his excellent SEO Book, search engines are frequently developing new ways to save computing power. When you do things on a large scale and have billions of websites and web pages to work with, as Google does, getting your computers to run more efficiently will save you money. Lots of it. The Supplemental Index is a way for Google to save money by being more efficient with its computing.
To understand how to get your pages out of the Supplemental Index, you need to understand why they’re there to begin with. In other words, you need to know how the SI works. So let me explain.
When somebody performs a search with Google, what happens behind the scenes is something like this.
- Google looks at the keywords.
- It compares them to the pages in its index (think of a library’s index card system).
- Google determines which pages are most relevant and ranks them.
- Finally, the results are displayed to the searcher.
It’s incredible, but Google really does do all that in fractions of a second.
(This is a simplified explanation of Google’s processes, based on what Dan Thies wrote at SEO Fast Start. If anyone has the precise page’s link, I’d appreciate a comment pointing it out, as I’ve lost it. Wouldn’t want that page to end up in the Supplemental Index, now would I?)
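The steps above can be sketched as a tiny inverted index. This is a toy, with made-up page names and text, just to make the "library index card" analogy concrete:

```python
# Toy corpus illustrating steps 1-3 above (page names and text are made up).
pages = {
    "page-a": "cheap hotel rooms in seattle",
    "page-b": "seattle coffee guide",
    "page-c": "hotel reviews and ratings",
}

# Build the index once: word -> set of pages containing it.
index = {}
for page, text in pages.items():
    for word in text.split():
        index.setdefault(word, set()).add(page)

def search(query):
    """Look up each keyword, then rank pages by how many keywords they match."""
    scores = {}
    for word in query.lower().split():
        for page in index.get(word, set()):
            scores[page] = scores.get(page, 0) + 1
    return sorted(scores, key=scores.get, reverse=True)

print(search("seattle hotel"))  # page-a matches both terms, so it ranks first
```

Precomputing the index is what makes step two fast: at query time the engine only touches pages that contain the keywords, not the whole web.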
The second part of the process gets exponentially more expensive for Google every day, as more sites and pages go online. Their computers need to run through more and more content. Google hit upon the idea of using a Supplemental Index as a shortcut in this process. By putting pages of lesser importance in the SI, Google has fewer pages to assess and rank at step two. (Then, only if Google feels its main index has too few pages to allow it to do a good job, will Google resort to the Supplemental Index. It supplements their main index when this one is thin for a particular topic.)
Amazon also had Supplemental Index issues at one point, as Tamar Weinberg pointed out in this picture. It’s nothing to feel bad about. (Note that Google no longer labels pages in the supplemental index as such.)
So how does Google determine if a web site or web page is of lesser importance? The behemoth of search considers how likely a web surfer is to arrive at that page if they randomly click links in a never ending browsing session. The more likely someone is to visit your page through links on other websites and on your own, the more important your page. This idea is the foundation of Google, better known as PageRank. See Danny Sullivan’s article on PageRank (which he seems very enthusiastic about ranking for the keyword “PageRank,” based upon his linking practices on Search Engine Land) for more info.
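As a hedged illustration of that random-surfer idea (the three-page link graph is made up, and real PageRank adds refinements this sketch omits), the ranks can be computed by repeatedly redistributing probability along links:

```python
# Random-surfer PageRank on a tiny three-page web (the link graph is made up).
links = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
damping = 0.85  # chance the surfer follows a link vs. jumping to a random page

# Start with equal probability of being on each page.
rank = {p: 1 / len(links) for p in links}

for _ in range(50):  # power iteration until the ranks settle
    new = {p: (1 - damping) / len(links) for p in links}
    for page, outlinks in links.items():
        share = damping * rank[page] / len(outlinks)
        for target in outlinks:
            new[target] += share
    rank = new

print(rank)  # the ranks sum to 1: PageRank is a probability distribution
```

Here page c ends up with the highest rank because both a and b link to it, which matches the intuition in the paragraph above: more (and better-linked) paths into a page mean a higher chance the random surfer lands there.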
The solution to getting your web page (or web pages, as the case may be) out of the Supplemental Index is to get more links to them. These can be links from within your own website, but it’s usually better – particularly with newer, less established sites – to get links from other websites.
If your site is non-commercial, it generally shouldn’t be difficult to get other sites to link to your inner pages, increase your PageRank and help you out of the Supplemental Index. But if your site is commercial, people are generally less willing to link to your pages. For example, the average webmaster (who isn’t an affiliate) has no incentive to link to a product page.
(As an aside, it would make sense for Google to count affiliate links towards determining whether a page belongs in the SI or main index. This despite the fact that they might be considered paid, PageRank manipulating links – which Google despises – otherwise).
Ironically, you create more pages!
Just not on your own website.
The pages you’ll create will be what’s commonly referred to as “User Generated Content.” For example, the photo-sharing site Flickr lets people upload pictures that then get an individual page (the picture at the top of this post comes from this Flickr user, as a matter of fact). Flickr’s users generate its content. And they link back to their own sites, or other pages they like, from their Flickr profiles, photo pages and so on.
A particular type of user generated content site – which I detailed in my article The New Directory – is likely more useful for getting these links than other types of site. This is because the quality of the user generated content is controlled by member ratings and reviews. While I again encourage you to read the article and learn more, I’m happy to provide you with the easy solution: a list of sites accepting user-generated content where you can get links to your site’s pages.
- Squidoo is a perfect example of The New Directory.
- Gooruze is similar, but focused on the internet marketing niche.
- SEOmoz is like a second home to me. I’d be crazy not to point out their great user generated blog Youmoz.
- Still in marketing, search marketing social media site Sphinn gives you a nice page – do see my profile and add me as a friend.
- eBay gives users About Me pages.
- Myspace is a classic and should need no explanation.
- Ezine Articles, for what it’s worth, lets you have a profile page. I’m not sure Google cares much about Ezine Articles, though, because contrary to The New Directory, their quality control is nil.
- Dofollow blogs allow you to leave comments where the author’s name links to a page of his/her choosing. These links are counted by Google, as opposed to most blogs, which use nofollow (I’ll implement dofollow once I choose the right plugin). Courtney Tuttle has a good list of Dofollow blogs.
- WordPress lets users create blogs of their own on their site. (I also recommend using the WordPress software if you want your own full-featured blog with plugins such as dofollow. This blog runs on it.)
- PR offers free press release distribution. I’d link to PR Web too, but they don’t offer a free service, so no link love. (Told you being commercial was tough!)
- Search for “your product” and “social networking” on Google. Try related searches like “your industry” + “social networking,” “your industry” + “social media,” “your industry” + “forum(s),” “your product” + “file-sharing,” “your product” + “article distribution.” The sites you find will let you create a profile and links.
- At the end of the day, networking is still crucial, as I told Marketing Sherpa (PDF) (see tip #27). If you can get bloggers and journalists writing about you, that’s going to mean better links, traffic and copycat links from bloggers who repeat what they read in the daily paper.
I know that aggressive ad positioning, especially when your site is new, is a flag for spam with Google’s algorithms. I created a blog on fishing once that had ads in the top left and in much of the sidebar, which led to me having to make it through a Captcha for each post. What’s interesting to me now is the question: does aggressive ad positioning decrease PageRank? (more…)
Google’s blog search patent application came out in March 2007 (though it was filed two years earlier) and it, like Google’s near-legendary Anatomy of a Large-Scale Search Engine paper, explains how the search engine will rank documents. In other words, it gives an idea of what makes the first blog show up first, and what makes the second show up second. So understanding the patent is essential to ranking a blog in Google’s blog search.
Luckily for you, you don’t need to go through the drudgery of reading through the whole, long technical paper. Various SEOs (Bill Slawski) and other good folk (Alister Cameron) have analyzed the patent and published this analysis for the public benefit. What follows is my comprehensive summary that aggregates and simplifies the patent and its analysis. (more…)