SMX 2016 Ranking Talk – Paul Haahr

I just watched the latest SMX video of Paul Haahr's talk. He was surprisingly forthright. Some notes, just to organize my brain.

What Google shows in response to a query is based on two basic categories of scoring signals:

  • Query-independent: e.g. PageRank, language, mobile-friendliness
  • Query-dependent: e.g. keywords, synonyms, proximity
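To get the idea straight in my head, here's a toy sketch of how those two kinds of signals could combine into one score. This is purely my own illustration, not anything Paul described; the signal names and weights are made up.

```python
# Toy illustration only -- invented signals/weights, not Google's actual scoring.

def score(page, query_terms):
    # Query-independent: computable once per page, no query needed.
    static = page["pagerank"] + (0.5 if page["mobile_friendly"] else 0.0)

    # Query-dependent: depends on how the page matches this query.
    words = page["text"].lower().split()
    matches = sum(1 for t in query_terms if t in words)
    dynamic = matches / len(query_terms)

    return static + dynamic

page = {"pagerank": 0.8, "mobile_friendly": True, "text": "Fast mobile SEO guide"}
print(round(score(page, ["mobile", "seo"]), 2))  # -> 2.3
```

The point of the split: the static part can be precomputed and stored in the index, while the dynamic part has to be computed at query time.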

Paul then shifts gears to talk about pages themselves and brings in the quality raters guidelines. Paul says, “If you wonder why Google is doing something [experiments], it is to make it look more like what the raters guidelines say.” Paul's statement emphasizes that this is a document that was created with much effort and needs to be digested to fully understand current SEO.

So key metrics for scoring are broken down into three major categories:

  • Needs Met
  • Page Quality
  • Time to Result (pretty simple: be fast)

Needs Met

Needs met translates to relevance. But the more Paul talks, the more he sounds like he is actually talking about intent. That seems to dovetail with what Roger posted on SEJ.  Give searchers what they think they want, not what they ask for.

Page Quality

Paul gives an anecdote about 2009–2011 results being quite high in relevance but awful in quality. He calls that the definition of a content farm.

So page quality is then further divided into Expertise, Authoritativeness, and Trust. Paul mentions that Trust is more heavily weighted for Your Money or Your Life pages as well as “buying a product.” The raters guide says legal and sensitive areas like child adoption or car safety are pertinent. My first thought was SSL. Probably not the only trust signal?

Some of the things that make High Quality content:

  • Good rep – Gotta be defensive and offensive with ORM.
  • Expertise – How do you establish that your authors are experts? During the Q&A Paul won't answer directly, but does say that manual quality reviewers have the ability to determine that. I've got a few ideas on what that means.
  • Authority – Thinking this has to do with a site's historical topicality, although citations may be in play as well.
  • Trust – Too many factors to count. Suffice it to say this is where you have to put your big boy digital pants on. Churn-and-burn sites don't survive here.
  • Sufficient helpful main content, which means secondary content isn't distracting. Oldie but goody of GOOPLA, h/t Gypsy Dave.

RankBrain

In the Q&A, Danny asks about RankBrain. Paul gives up the following:

  • “[He] knows how it works but doesn’t know what it is doing.”
  • It's part of the post-retrieval adjustment phase
  • “It sees webpages, queries, and Other Signals.”

What are these other signals? So far most people are only talking about unseen long-tail query reformulation.

Brand Bias

Danny asks about brand bias by raters. Paul mentions that they have metrics to try to counter brand bias that aren't built into quality. He gets quiet… almost somber for a few seconds. But then he perks up and mentions that the raters do a great job rating small quality sites.

I never paid much attention to Aaron Wall banging on about big-company favoritism. Maybe he is on to something, though. I don't think Google is intentionally punitive to small brands, but perhaps there's a flaw in their quality theory.

CTR

Lastly, Danny asks about CTR being used in ranking. Both Paul and Gary Illyes squirm in their seats. Gary says engagement is hard to interpret. Neither outright denies the use of engagement for rank scoring. Again, I think it's one of those things that doesn't have an easy answer and they'd rather not go down the rabbit hole. Many Panda-hit sites see a steep increase in traffic before they fall. I've always thought that was Google measuring quality engagement data before a demotion.
