Google wants to “organize the world’s information,” and it will request from your web server whatever it wants. Google says it strictly honors robots.txt, so unless you explicitly make your CSS files hands-off there, Googlebot is free to grab them. Why would Google look at CSS? Many reasons.
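For the record, keeping Googlebot out of your stylesheets is a one-liner. A minimal robots.txt sketch, assuming your CSS lives under a /css/ directory (the path is hypothetical):

```
User-agent: Googlebot
Disallow: /css/
```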
CSS files reveal a ton about your web page (especially when you use asynchronous web services) that Google otherwise would not know about. Looking for display:none is child’s play. I doubt that’s on the agenda except for specific cases, such as research on dodgy domains.
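How easy? These are the classic hidden-text patterns, and every one of them sits in plain sight in the stylesheet (class names are made up for illustration):

```css
/* All three are trivially detectable by anything that parses the CSS file */
.stuffing { display: none; }          /* removed from rendering entirely */
.ghost    { visibility: hidden; }     /* invisible, but still takes up space */
.parked   { position: absolute;
            left: -9999px; }          /* shoved way off the left of the page */
```

A crawler doesn’t even need to render the page; a single pass over the stylesheet flags all three.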
At the very minimum, one can grade quality simply by looking at the CSS file. At the more extreme edges of the modern web, the CSS file may map out most of the page’s AJAX activity, since the same selectors and class names that style elements also serve as the hooks scripts bind behavior to… especially as behavior-driven techniques take root. Again, I doubt it’s on the agenda right now, but we are already past the time when companies like Google should have started spidering external CSS and JavaScript files.
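As a sketch of why the stylesheet maps the behavior: in selector-driven scripting, the class names in the CSS are the same hooks the JavaScript binds to, so the stylesheet alone advertises what the page does dynamically. The selector and target element here are hypothetical:

```ts
// Selector-driven behavior binding: the ".ajax-nav" class appears in the
// stylesheet, and the script hangs all of its dynamic behavior off of it.
document.querySelectorAll("a.ajax-nav").forEach((el) => {
  el.addEventListener("click", (e) => {
    e.preventDefault();
    // Fetch the linked fragment asynchronously instead of navigating.
    fetch((el as HTMLAnchorElement).href)
      .then((r) => r.text())
      .then((html) => {
        document.getElementById("content")!.innerHTML = html;
      });
  });
});
```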
One guy suggests cloaking when Googlebot asks for external CSS files, serving it a different stylesheet than users get. Not wise, IMHO. Cloaking is defined by Google as “bad,” so if you do it, you can be labeled as “bad.” If you merely use display:none, you can only be labeled as “capable” or “advanced.” Don’t compromise plausible deniability for this. It’s not worth it.
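Just so it’s clear what that suggestion means in practice, here is a minimal sketch of user-agent cloaking on CSS requests. This is the thing I’m advising against, shown only so the risk is obvious; the file names are hypothetical:

```ts
// NOT a recommendation: serve Googlebot a sanitized stylesheet and
// everyone else the real one. This is exactly what Google calls cloaking.
import * as http from "http";
import * as fs from "fs";

http.createServer((req, res) => {
  if (req.url === "/style.css") {
    const ua = String(req.headers["user-agent"] || "");
    const file = /Googlebot/i.test(ua) ? "clean.css" : "style.css";
    res.writeHead(200, { "Content-Type": "text/css" });
    fs.createReadStream(file).pipe(res);
  } else {
    res.writeHead(404);
    res.end();
  }
}).listen(8080);
```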
My gut says this is one of the PR aspects of Google’s anti-SEO efforts: hit a bunch of JavaScript and external CSS files every once in a while, and generate some buzz as a deterrent to widespread adoption of display:none and way-off-the-left-edge hidden text. That’s a lot easier than actually parsing and categorizing DHTML.
Once again, if you wonder what actually matters in Google, hit the SERPs instead of the blogs. And since you know to diversify your holdings, you’ll be OK even when something new gets implemented.