John Andrews is a Competitive Webmaster and Search Engine Optimization Consultant in Seattle, Washington. This is John Andrews' blog on issues of interest to the SEO community and competitive webmasters. Want to know more?

johnon.com  Competitive Web & SEO
March 23rd, 2007 by john andrews

I expected Matt to be Smarter than That

Matt over at WordPress posted “Selling Links” which goes across syndication with this starting paragraph:

“Let’s face it, we’re selling links here. Call it ‘buzz’ all you want, but it boils down to selling links. That skews Google’s index and they’ve come out against that quite publicly. If we’re all given the freedom to disclose in our own manner, we’re a moving target. If we’ve all got disclosure badges everywhere, it’s easy for them to penalize/ban us all.”

Of course if you click thru (and you would click thru), that’s not Matt speaking but a quote from the Pay Per Post blog that Matt subsequently criticizes (and he links to their blogger comments page, as if to avoid passing any link power to their blog. How sad. Hence the nofollow here). Matt Mullenweg, the guy who (desperately?) spammed Google with commercial doorway pages when he was down on his A-list luck (and got caught), is now taking the high road and saying that link selling is bad for the web. Really?

Sorry, but I thought more of Matt than this. Sure, the conflict of interest is obvious (wordpress.com can’t monetize very well if a middleman like PPP is monetizing out from under him), but to brand it as a benevolent action is below where I expected Matt to be. The web is commercial. Matt is commercial. He’s good at it, and naturally he shouldn’t allow PPP on wordpress.com. (I also noticed a BlueHost advertisement on the PayperPost site that says “Move Off of WordPress.com”, so clearly these guys are competitors). But really, Matt, to pander to the idealistic half of the audience so blatantly… you alienate the rest, you know? And you need the rest, don’t you? Maybe you don’t think you do. In my book, that’s a bad sign.

It wasn’t long ago we read about Matt Mullenweg spamming Google with 160,000 doorway pages on topics like “debt consolidation” and “asbestos”:

Mullenweg hosted at least 160,000 pieces of “content” on his site wordpress.org which use a cloaking technique to hide keywords such as “asbestos”, “debit consolidation” and “mortgages”. Mullenweg was paid a flat fee by Hot Nacho Inc., which creates software for search engine gamers to use. It’s been dubbed “Adsense bait” – Adsense is Google’s keyword-based classified advertising service…Mullenweg employed “negative positioning”, which uses a CSS directive to place the text offscreen, out of sight of the user, but where search engines can still read it….

This seems way hypocritical. I am reminded of the days when Steve Case and company spammed “the Internet” with AOL commercials. The Internet was non-commercial, and the idealists gathered with pitchforks to “defend the Internet” against commercialization. Who did more for the development of the Internet and web, AOL or those idealist defenders of usenet?  Painful question to answer, but the answer is the truth. Matt is painting the market makers as evil-doers, as Matt quietly protects his share of the market. Puh-lease.

 

March 22nd, 2007 by john andrews

SEO for AJAX

Update March 2010:

Google has advanced in its plan to impose new technical methods on webmasters in order to make AJAX accessible. In a new tutorial, the Google team describes a process of tagging URLs such that Google recognizes they utilize callback functions for dynamic content loading, and outlines an “HTML snapshot” method of presenting the content for indexing.
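As I read the tutorial (and this is only a sketch of my reading, not Google's code; the function name and the example.com URL are mine), the mechanics are roughly this: the publisher exposes each AJAX view behind a “#!” (hashbang) URL, the crawler rewrites that URL into an _escaped_fragment_ query string, and the server answers that rewritten request with an HTML snapshot of the view.

```typescript
// Sketch of the URL rewriting in Google's AJAX crawling proposal, as I
// read it: a "#!" (hashbang) view URL is requested by the crawler as an
// _escaped_fragment_ query, and the server is expected to answer that
// request with an HTML snapshot. The example.com URL is made up.

function toCrawlerUrl(hashbangUrl: string): string {
  const [base, fragment = ""] = hashbangUrl.split("#!");
  const separator = base.includes("?") ? "&" : "?";
  return `${base}${separator}_escaped_fragment_=${encodeURIComponent(fragment)}`;
}

// A user (and your JavaScript) sees:
//   http://example.com/stocks#!view=quotes&symbol=GOOG
// The crawler instead fetches, and you serve a snapshot for:
//   http://example.com/stocks?_escaped_fragment_=view%3Dquotes%26symbol%3DGOOG
console.log(toCrawlerUrl("http://example.com/stocks#!view=quotes&symbol=GOOG"));
```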

Still quite cumbersome. If you look at the current search results, some webmasters who have been experimenting since Google’s last pronouncement are getting hosed: pages improperly indexed and, in some cases, messed-up search results for those testing these new methods.

Read the latest descriptions from Google here.

Update October 2009:

Google has released a blog post describing a proposed change to the way we publish information to the web, in order to more robustly support Google’s indexing efforts. In short, Google can’t properly index web pages which change dynamically (such as AJAX-based views) without the publisher helping out. What is the “final state” or perhaps better, what is the state of the page that the publisher wants Google to index?

That has always been the core question, but also represents the sensitive trust point. If Google trusts the publisher to say “this is what should be indexed”, many publishers mis-represent in order to try and rank higher in Google. Google calls this “search spam”. The act of mis-representing to Google what is really in your web page view has been historically called “cloaking”. When you cloak Google, you show Google different page content than you intend to show users that are referred by Google search. Also historically, cloaking can get you banned from Google.

So if Google now wants publishers to use these new HTML constructs (specifically, new alternative URLs that tell Google which state of the AJAX-loaded page is to be considered the “static” state, or the one that should be indexed), how does Google defend against cloaking and search spam?
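Mechanically, compliance on the publisher's side would look something like the sketch below. This is an assumption-laden illustration, not Google's reference code: it assumes the _escaped_fragment_ convention from the proposal, and renderSnapshot and renderAjaxShell are invented placeholders for whatever actually builds your pages. Note how thin the line to cloaking is: the snapshot branch is supposed to return the same content users end up seeing.

```typescript
// Minimal sketch of a publisher serving the "static" state of an AJAX
// view when the crawler asks for it via _escaped_fragment_.
import * as http from "http";

// Hypothetical stand-ins for whatever actually builds your pages.
const renderSnapshot = (viewState: string): string =>
  `<html><body><h1>Snapshot of view: ${viewState}</h1></body></html>`;
const renderAjaxShell = (): string =>
  `<html><body><div id="content">loading…</div><script src="/app.js"></script></body></html>`;

http.createServer((req, res) => {
  const url = new URL(req.url ?? "/", "http://localhost");
  const fragment = url.searchParams.get("_escaped_fragment_");

  res.writeHead(200, { "Content-Type": "text/html" });
  if (fragment !== null) {
    // Crawler request: return the indexable snapshot of this view.
    // If this diverges from what users see, you are back to cloaking.
    res.end(renderSnapshot(fragment));
  } else {
    // Normal visitor: return the page that loads its content via AJAX.
    res.end(renderAjaxShell());
  }
}).listen(8080);
```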

Perhaps more interestingly, what is the reward for compliance? Better ranking? Or simply indexing? In other words, if you don’t comply with these new Google-specific technical publishing requirements, do you fail to get indexed? And if you then cloak in order to get indexed (using standard HTML), have you broken guidelines and will you get banned?

The competitive webmaster sees all sorts of opportunities and problems lurking in this approach. I’m sure Google does as well, which is why Google needs to try and make this an HTML standard or a web publishing standard. As long as it is a Google-specific technology, or perhaps even a search engine specific technology, it will suffer from that uncertain grey area of trust.

Read the announcement, and listen carefully to the future discussions. This is YEARS away, and in the meantime, I can’t ignore the fact that webmaster trust is becoming an essential ingredient in rankings.

Update early 2009: This is an old post, written when AJAX was a new concept. It addresses a philosophy of SEO vs. a culture of using AJAX… but today in 2009 we have advanced well beyond that stage. So to address the issue of SEO and AJAX, I want to point to a great article on “Hijax”, which is a progressive enhancement approach to designing the AJAX system while actually implementing AJAX at the end. The result is content that is accessible, and a presentation layer based on a dynamic AJAX implementation. Check out DOM Scripting: Hijax; a rough sketch of the idea appears below, and my original blog post on AJAX and SEO follows after that:
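The gist of Hijax, as I understand it: every link in the markup is a real, crawlable URL, and script intercepts the click to load that same URL asynchronously. The sketch below assumes hypothetical markup (an “a.hijax” class and a “#content” container) and uses the modern fetch API for brevity rather than the XMLHttpRequest of the era.

```typescript
// Hijax-style progressive enhancement, sketched against invented markup:
// links with class "hijax" point at real, crawlable URLs; script hijacks
// the click and loads the same URL into a container instead. Without
// JavaScript (or for a search spider), the links still work as ordinary
// page requests, so every view keeps its own indexable URL.

document.querySelectorAll<HTMLAnchorElement>("a.hijax").forEach((link) => {
  link.addEventListener("click", async (event) => {
    event.preventDefault();                        // cancel the full page load
    const response = await fetch(link.href);       // fetch the same crawlable URL
    const container = document.querySelector("#content");
    if (container) {
      container.innerHTML = await response.text(); // swap the fetched view in
    }
  });
});
```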
Subtitled: “Meet my 5 sons: all named George” or “Advanced SEO”

George Foreman has five sons, all named George. If you were to visit George and he were to introduce you to his sons, he would present each one in turn, call them by their name “George” and then maybe add a second label like (“my second George” or “the smart one” or whatever). He said he named them George because he wanted them to always know who their father was. They are not identical: they just have the same name. You can’t tell them apart by their names, but with the added second label or their looks, you could easily identify them. But not if you were blind. You’d need more info, right?

So how do you name your web pages?

Most webmasters know not to use frames because the frame set URL remains in place as each framed page is displayed. The result is like naming all of your kids George – they all have the same URL. A search engine will see just one “name” and list the one URL as a single page in the search index. Bad idea.

Most webmasters these days also know not to use a URL which is almost exactly the same for multiple pages, differing only by a long “id” value. If it isn’t sufficiently distinct to the search engine spider, it won’t be any different from The Other Page Also Named George. Most webmasters are also keeping page titles distinct, as a second label (“the smart George”). More advanced webmasters have learned to recognize overly “templatic” web pages, which can look nearly identical to other pages, and suffer a similar identity crisis to nearly blind search spiders (less severe consequences, but still sub-optimal).

But what about modern web applications deploying asynchronous user interface technologies like AJAX, where on-page content (or in-page content) can vary depending on context, not just URL? If the middle paragraph updates via asynchronous server calls, without a page refresh, then the URL hasn’t changed and the new resource (or “view”) becomes just another Web Page Named George. Sure it has different content, but it doesn’t exist as a separate entity according to the search engine labels (URL and page title). In other words, it won’t get indexed.
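Concretely, and only as an illustration (the “/api/next-article” endpoint and the element ids are invented; this is the inverse of the Hijax sketch above): the pattern below swaps new content into the page without ever changing the URL, so there is nothing new for a spider to discover or index.

```typescript
// The "another page named George" pattern: new content arrives via an
// asynchronous call, but location.href never changes, so the new view
// has no URL of its own. The endpoint and element ids are made up.

document.querySelector("#load-more")?.addEventListener("click", async () => {
  const response = await fetch("/api/next-article"); // asynchronous server call
  const html = await response.text();
  const middle = document.querySelector("#middle");
  if (middle) {
    middle.innerHTML = html; // the middle paragraph updates...
  }
  // ...but the address bar, and therefore the indexable URL, stays put.
});
```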

In the current search world, it is essential that each view of content you consider a valuable, site-defining, user-facing page on your site be indexed as a unique web resource. If you collate content to create such views so they are specifically relevant for visitors in a specific context (such as we do when we optimize for SEO… matching views of content to referred search engine visitors), then your hard work collating will be for naught if it doesn’t get labeled and indexed.

So what to do?

Many SEO Advice web sites suggest that you generate a second set of static views (or dynamic, but URL-unique views) to be fed to search engines. I don’t think that is a very creative solution. The Sitemaps protocol is also a way to define views to reflect your pre-defined collections, yet archaic anti-cloaking guidelines (like the one at Google) require that those URLs deliver the same content to both users and search spiders. That makes the sitemap little more than a representation of static content. Again, that’s not very creative. For truly asynchronous, data-based content, it’s also not very practical. There are simply too many possible views.

If you are thinking this issue through analytically, and seeking a practical solution that enables deployment of asynchronous visitor views while simultaneously allowing a statically-labeled set of entry points to exist such that a traditional spidering of that content enables a contextual indexing of the content by labeled URL, go ahead and call yourself an SEO. If you get it done in a way that requires a substantial amount of extra work for a web designer, like maybe creating a second static web site just for search engines, then call yourself a second-tier SEO and try to increase your R&D time or training budget. You’re stuck in old-skool SEO. You’re going to need to update your skills sooner than you might like.

But if you have already mapped out your site’s user experience, and documented the intent and strategy that drives the UI designers to create the asynchronous interfaces deployed, then rest assured you are doing well. You understand the reasons why page sections update asynchronously, the underlying data structures in use behind the scenes, and the link between view and content. You already know how you can use that to define a set of defining views, to be sitemapped and exploited as landing pages. A second set of static pages? You betcha. A lot of extra work? On the contrary. It will probably be defining work for the marketing team, worthy of the effort even without search engines requiring it. It won’t require top-dollar designer and developer hours, but merely DHTML web designer hours. Once you get the basic infrastructure in place to support your “sitemap”, you can endeavor to optimize the site independent of the UI designers, the way nature intended :-)
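For what it's worth, that last step is small. Here is a sketch (the view URLs are invented; the point is only that once each defining view has its own crawlable URL, producing the Sitemaps-protocol file is trivial):

```typescript
// Sketch: turn a hand-picked list of defining views into a Sitemaps
// protocol file. The view URLs are invented for illustration.
import { writeFileSync } from "fs";

const definingViews = [
  "https://www.example.com/widgets/blue",
  "https://www.example.com/widgets/red",
  "https://www.example.com/widgets/blue/reviews",
];

const sitemap =
  `<?xml version="1.0" encoding="UTF-8"?>\n` +
  `<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n` +
  definingViews.map((loc) => `  <url><loc>${loc}</loc></url>`).join("\n") +
  `\n</urlset>\n`;

writeFileSync("sitemap.xml", sitemap);
```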

Now if you find yourself working with a javascript team that has the patience required to document that interface, or one that actually had a strategy in place for the user experience before they built it, or one that is working with a data structure actually designed to support that strategically-defined user experience, consider yourself extremely lucky.

If not, there’s still hope. If you have a good marketing team, you might end up delivering some clue packages to the web too dot oh development primadonnas, along with a proper specification for how they might try and get it a little more right with the next refactoring. In order to keep the SEO bill less than twice the development bill (or less than the first year’s PPC bill). I’m just sayin’, thaz all.

March 21st, 2007 by john andrews

SEO: Not Ready for Video

A bunch of SEO types got together to demonstrate the poor quality of video blogging in SEO land today. They were right: varying audio levels, sync problems, generally horrible production values all around. One was obviously scripted; the other two obviously not. Not sure what the plan is besides link bait for a slow news day, but I’m guessing from the comments it’s the start of an I-do-better-video-than-you meme. Cool. I look forward to round two.


Competitive Webmaster

Wonder how to be more competitive at some aspect of the web? Submit your thoughts.

SEO Secret

Not Post Secret

Click HERE




Recent Posts: ★ Do you want to WIN, or just “Be the Winner”? ★ 503: GONE ★ Cloud Storage ★ Identity Poetry for Marketers ★ PR is where the Money Is ★ Google is an Addict ★ When there are no Jobs ★ Google Stifles Innovation, starts Strangling Itself ★ Flying the SEO Helicopter ★ Penguin 2.0 Forewarning Propaganda? ★ Dedicated Class “C” IP addresses for SEO ★ New Domain Extensions (gTLDs) Could Change Everything ★ Kapost Review ★ Aaron Von Frankenstein ★ 2013 is The Year of the Proxy ★ Preparing for the Google Apocalypse ★ Rank #1 in Google for Your Name (for a fee) ★ Pseudo-Random Thoughts on Search ★ Twitter, Facebook, Google Plus, or a Blog ★ The BlueGlass Conference Opportunity ★ Google Execs Take a Break from Marissa Mayer, Lend Her to Yahoo! ★ Google SEO Guidelines ★ Reasons your Post-Penguin Link Building Sucks ★ Painful Example of Google’s Capricious Do Not Care Attitude ★ Seeing the Trees, but Missing the Forest 

Subscribe

☆ about

John Andrews is a mobile web professional and competitive search engine optimizer (SEO). He's been quietly earning top rank for websites since 1997. About John

☆ navigation

  • John Andrews and Competitive Webmastering
  • E-mail Contact Form
  • What does Creativity have to do with SEO?
  • How to Kill Someone Else’s AdSense Account: 10 Steps
  • Invitation to Twitter Followers
  • …unrelated: another good movie “Clean” with Maggie Cheung
  • …unrelated: My Hundred Dollar Mouse
  • Competitive Thinking
  • Free SEO for NYPHP PHP Talk Members
  • Smart People
  • Disclosure Statement
  • Google Sponsored SPAM
  • Blog Post ideas
  • X-Cart SEO: How to SEO the X Cart Shopping Cart
  • IncrediBill.blogspot.com
  • the nastiest bloke in seo
  • Seattle Domainers Conference
  • Import large file into MySQL : use SOURCE command
  • Valentine’s Day Gift Ideas: Chocolate Fragrance!
  • SEM Rush Keyword Research

☆ blogroll

  • Bellingham SEO
  • Domain Name Consultant
  • Hans Cave Diving in Mexico
  • Healthcare Search Marketing
  • John Andrews
  • John Andrews SEO
  • SEMPDX Interview
  • SEO Quiz
  • SEO Trophy Phrases
  • SMX Search Marketing Expo
  • T.R.A.F.F.I.C. East 2007
  • TOR

☆ categories

    Competition (39)
    Competitive Intelligence (15)
    Competitive Webmastering (546)
    Webmasters to Watch (4)
    domainers (63)
    Oprah (1)
    photography (3)
    Privacy (16)
    Public Relations (187)
    SEO (397)
    Client vs. SEO (2)
    Link Building (3)
    Search Engines vs. SEO (1)
    SEO SECRETS (11)
    SEO vs. SEO (1)
    ThreadWatch Watching (5)
    Silliness (24)
    Social Media (7)
    society (31)
    Uncategorized (23)

☆ archives

  • September 2014
  • December 2013
  • October 2013
  • September 2013
  • August 2013
  • May 2013
  • April 2013
  • March 2013
  • February 2013
  • January 2013
  • November 2012
  • September 2012
  • August 2012
  • July 2012
  • June 2012
  • April 2012
  • March 2012
  • February 2012
  • January 2012
  • November 2011
  • October 2011
  • September 2011
  • July 2011
  • May 2011
  • April 2011
  • March 2011
  • January 2011
  • December 2010
  • November 2010
  • October 2010
  • September 2010
  • August 2010
  • July 2010
  • June 2010
  • May 2010
  • April 2010
  • March 2010
  • February 2010
  • January 2010
  • December 2009
  • November 2009
  • October 2009
  • September 2009
  • August 2009
  • July 2009
  • June 2009
  • May 2009
  • April 2009
  • March 2009
  • February 2009
  • January 2009
  • December 2008
  • November 2008
  • October 2008
  • September 2008
  • August 2008
  • July 2008
  • June 2008
  • May 2008
  • April 2008
  • March 2008
  • February 2008
  • January 2008
  • December 2007
  • November 2007
  • October 2007
  • September 2007
  • August 2007
  • July 2007
  • June 2007
  • May 2007
  • April 2007
  • March 2007
  • February 2007
  • January 2007
  • December 2006
  • November 2006
  • October 2006
  • September 2006
  • August 2006
  • July 2006