This post is subtitled “careful what you wish for” or “know where you’re headed”.
Credentials aren’t everything. God knows our world is pretty messed up these days, and it clearly got that way via the actions of a few generations of very credentialed leaders. From Ph.D.s to Grand Poobahs, with every sort of title in between, our “civilized” society has run amok with titles and credentials.
In an almost wicked reversal, our immediate environment is overrun with fake credentials. Nowadays I can place an advertisement on Fortune.com and then add the Fortune Magazine logo to my website with the claim “as seen in Fortune.” Or I can buy a set of glowing testimonials from seemingly real people for my product, without having any customers or even a product. It’s crazy. And it’s supported by a community of consumers behind the curve.
We don’t always need credentials to prove trustworthiness in fact, ability, experience, or legal rights. Some of the most worthy people in the universe are formally un-credentialed (and should stay that way, IMHO).
Truth is, I am actually one of the credentialing contrarians. I walked away from a nearly completed doctoral dissertation in Engineering, forgoing the Ph.D. credential after passing the qualifiers, completing the formal requirements, and producing ninety percent of the research and most of the dissertation. I was a Ph.D. candidate and walked away, specifically because in my eyes, the credential failed my real-life value test. I simply didn’t want to be what I was becoming.
But sometimes, we do need some third-party validation.
Statisticians lost a lot of credibility in the past 100 years for numerous reasons, including a lot of funny business. Today anyone with a copy of SPSS can claim to be a statistician. No one is defending the title. Not every consumer knows that an undergraduate degree is not enough to be a research statistician. Even a graduate degree is rarely adequate for research statistics work. And some people exploit that ignorance.
I’ve known Ph.D.s who, armed with a basic grad school knowledge of statistics, invested in expensive statistical modeling software and became hyperlocal “stats gurus” in their niche communities. They bought every new advancement made by others, and brought the new idea into their own niche community. Have software, be successful. Many, many times their work was crap. Rarely was that obvious to their peers.
Not too many people actually like statistics, so if a local guy appears to be able to get the job done and is willing to take the heat on critical review of the results, otherwise thoughtful scientists, researchers, and leaders choose to pay the man and move on.
Psychology went downhill the past century and took a lot of clinical research with it. The field of “neuroscience” is split between scientists and pseudo-scientists, with a lot of hucksters in between. Behind the scenes you sometimes find Engineers (capital “E”, meaning they graduated from an accredited Engineering program with an Engineering degree) working in research laboratories (such as labs doing neurological research). That credential ensures the Engineers know the fundamentals, and degreed Engineers don’t tend to make fundamental mistakes. It’s not a guarantee, but 5 years of passing hard undergraduate courses in Math, Science, and Engineering does not leave too many Engineers ignorant of the fundamentals of our physical world.
But you also see a ton of “neuroscience” laboratories doing the same sort of research, with technical experts doing the Engineering work. Specialists. Many are quite good at all the things that need to be done. Digital circuits. Analog interfacing. Computer algorithms. Fundamentals of measurements and statistics. Some of the greatest developments have come from such labs. However, an awful lot of ignorance has also stemmed from such labs, where outside opinions are often eschewed, local expertise can be overly revered, and cultures of “our way” may prevail.
Not too many scientists do great work in isolation. Peer review is important, and unfortunately, lesser-credentialed individuals are too often shut out of formal peer review processes.
In SEO, we are seeing a new age of Scientism with new applications of advanced statistical techniques coming out of so-called “research” projects. Is it valid? Does anyone know? Can anyone even check?
Is it safe to simply “pay the man” for his work, and move on with new metrics and new techniques that someone assures us are trustworthy?
History shouts “NO!” in response to that question. Where there is trust, there is an exploit. And the old adage remains true… “follow the money”.
Before you believe in new tools and techniques proffered by for-profit salesmen backed by un-credentialed or questionably credentialed scientists, technologists, and statisticians, ask yourself if you can afford to trust them. If the data are incorrect, what is your downside risk? If the data are correct, where will it lead you?
If you chase search engine rankings via correlation analysis of the Google search results, where will it lead you? If you can reverse engineer the ranking algorithm by such observational analysis, to place your documents into the #1 spot, where will it lead you?
If your web site belongs there, such actions lead to success for you and for Google. If not, they lead to increased scrutiny and algorithm changes, as Google corrects itself and drops you out. If that sounds like success to you, you are chasing the fast money at the expense of stability and awareness of what actually matters. That’s not professional SEO, and I am not addressing that.
The key to understanding the risk of correlation analysis is that even if it were valid, it assumes no basis for ranking. It works on the status of ranking sites, not the basis for their ranking. If Google were a dumb, static algorithm, that might be useful. But Google is not dumb, and not static. Google broadcasts its intent to produce a user-meaningful SERP, and Google outlines characteristics of both quality URLs and URLs Google considers to be unworthy of ranking.
As dynamic as the Google SERPs are, I have a hard time believing any correlation work is worthwhile. Google engineers have frequently commented on the way the SERPs are incrementally built using filters and data from different places, under specific circumstances. In research, these are “environmental factors” and must be controlled when doing experiments. Correlation studies on data sets by definition do not control nor attempt to compensate for such factors.
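The problem with uncontrolled environmental factors can be shown in a few lines of code. The sketch below is purely illustrative: it simulates a made-up confounder (call it “authority”) that drives both a measured “ranking factor” and the rank itself. The factor has zero causal effect on rank, yet a naive correlation study reports a strong relationship. All names and numbers here are invented; none are real Google signals.

```python
# Illustrative simulation: a spurious "ranking factor" correlation
# produced entirely by an uncontrolled confounder.
import random

random.seed(42)

def spearman(xs, ys):
    """Spearman rank correlation: Pearson correlation computed on ranks.
    (No tie handling; fine for the continuous simulated values below.)"""
    def ranks(vals):
        order = sorted(range(len(vals)), key=lambda i: vals[i])
        r = [0.0] * len(vals)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hidden environmental factor ("authority") -- never measured by the study.
n = 500
authority = [random.gauss(0, 1) for _ in range(n)]

# The measured factor tracks authority; it does NOT cause ranking.
factor = [a + random.gauss(0, 0.5) for a in authority]

# Rank is driven by authority alone.
rank_score = [a + random.gauss(0, 0.5) for a in authority]

rho = spearman(factor, rank_score)
print(round(rho, 2))  # a strong correlation, with zero causation behind it
```

A study that never measures the confounder cannot distinguish this situation from a genuine ranking signal, which is exactly why uncontrolled observational correlations on SERP data prove so little.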
I simply can’t afford to allow baseless observational analysis to drive my expensive SEO activities.
So what can be done? Certainly new techniques and analyses can be useful. How can we manage the risk that the for-profit tool vendors may be full of baloney? How can we leverage our professional status to hedge our bets that these published “ranking factor” correlations are worth trying?
Credentials. Both hard credentials and soft credentials.
First, openly ask the question “Is he qualified to do this work?”
Why not ask? A few undergrad courses in statistics are not adequate for research in statistical modeling of data as complex and valuable as Google’s index. Asking about credentials isn’t damning. A serious researcher, when questioned about credentialing and demonstrated abilities, will at least seek to achieve adequate peer respect over the long term via credentialing or formal demonstrated achievements (such as published papers and peer-recognized citations).
Very few are so gifted that they don’t need any additional training over time, and very few stable personalities will defend themselves with no basis. Otherwise, we may see the sort of independent labs that evolved in neuroscience… specialists working on special topics who expect to be trusted and do not subject themselves to outside scrutiny. If we see that, we can be wary.
I can’t imagine a for-profit tool vendor unwilling to place a seasoned, respected, credentialed individual onto the public board of advisers if the staff are not adequately credentialed to satisfy our need for assurances. Sometimes that is all that is needed… let someone known, respected, and with an earned reputation worth defending, stand up for the work. They won’t do it unless we ask for it, so why not ask?
Second, ask to see real data that supports claims. It is usually trivial to produce data sets that others can use to verify findings, as that work had to be done anyway before such claims could be put in front of the public.
It is also pretty easy for you to grab your own data and ask “does it hold true with this real-world data I have?” Is the claim reasonable to me in my work? If the tool fails to deliver on your real-world data, you know not to trust it with…that’s right…your real-world data.
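That kind of sanity check can be very simple. Here is a minimal sketch, assuming you have a claimed factor value and an observed rank for a handful of your own pages (all numbers below are made up for illustration): for every pair of pages, check whether a higher factor value actually coincides with a better rank. A vendor’s claimed “ranking factor” that can’t even get the direction right on your own data doesn’t deserve your wallet.

```python
# A pairwise directional check of a claimed ranking factor
# against your own observed data.

def concordance(pages):
    """Fraction of page pairs where the page with the higher factor
    value also has the better (numerically lower) observed rank."""
    agree = total = 0
    for i in range(len(pages)):
        for j in range(i + 1, len(pages)):
            (f1, r1), (f2, r2) = pages[i], pages[j]
            if f1 == f2 or r1 == r2:
                continue  # no direction to compare
            total += 1
            if (f1 > f2) == (r1 < r2):
                agree += 1
    return agree / total if total else 0.0

# (claimed_factor_value, observed_rank) for your own pages -- invented data
my_pages = [(0.91, 3), (0.40, 18), (0.77, 5), (0.30, 42), (0.85, 9)]

score = concordance(my_pages)
print(round(score, 2))  # near 1.0: claim holds directionally; near 0.5: coin flip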
Everyone gains when new research reveals new understanding, and when people ask questions. Only the for-profit sellers gain when consumers unquestioningly accept the hype and open their wallets, or encourage others to do the same. In the long run, quiet acceptance of claims made by profit-motivated sellers eventually leads to a field of pseudoscience that no one believes, including your paying clients.