The developer survey seems to have become a mainstay of technical blogs lately. I don’t mean to pick on anyone in particular, but the problem is that almost none of them pass the eye test. This isn’t surprising when you think about it, though. Several problems with these surveys prevent them from being truly representative of the community they claim to describe.
The main issue is that all of these surveys suffer from major selection bias. Even with thousands of responses, the results likely represent a very small fraction of the overall community.
A small sample is fine, provided it is representative. In most cases, however, these surveys are promoted through channels that only a specific type of developer would ever see - one who reads a certain blog, follows particular individuals on Twitter, or finds articles on sites like EchoJS. Even then, it’s merely the subset of developers in those outlets who are willing to fill out a survey. While the overall reach of these promotional channels might be large, keep in mind that the type of developer you’ll reach is the type who keeps up with the latest trends, who wants to learn about new tools, who wants to stay on the cutting edge.
This is not the typical developer.
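To see how much this kind of selection bias can distort a result, here is a minimal sketch. All the numbers are invented for illustration: suppose 20% of developers are "trend-followers" who see survey promotions, and suppose they adopt a hypothetical new framework at a much higher rate than everyone else. A survey that mostly reaches trend-followers will wildly overstate adoption.

```python
import random

random.seed(42)

# Hypothetical population of 100,000 developers. Assume 20% are
# "trend-followers" (read dev blogs, follow dev Twitter) who adopt a
# new framework at 60%, versus 10% for everyone else. These rates are
# made up purely to illustrate the mechanism.
population = (
    [{"trendy": True,  "adopted": random.random() < 0.60} for _ in range(20_000)]
    + [{"trendy": False, "adopted": random.random() < 0.10} for _ in range(80_000)]
)

true_rate = sum(d["adopted"] for d in population) / len(population)

# A survey promoted through blogs and Twitter mostly reaches
# trend-followers: say 95% of its 5,000 respondents come from that group.
respondents = (
    random.sample([d for d in population if d["trendy"]], 4_750)
    + random.sample([d for d in population if not d["trendy"]], 250)
)
survey_rate = sum(d["adopted"] for d in respondents) / len(respondents)

print(f"true adoption rate:   {true_rate:.1%}")
print(f"survey adoption rate: {survey_rate:.1%}")
```

With these assumed numbers the true adoption rate sits around 20%, while the survey reports something closer to 55-60% - not because anyone lied, but because of who was reachable in the first place.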
This sort of information can still be valuable, especially when we break down answers along certain demographics. At the very least, it would help us gauge how representative the responses are of the community as a whole, or help us determine which specific subset of that audience they do represent. But we should not be fooled into assuming the results are widely representative of the developer community.
Look, if you want to run a survey that helps you determine who reads (or might read) your site and what topics they are most interested in, it’s a very useful tool. Surveys can be fun, too (and generate a lot of debate in the comments). But, please, let’s not presume too much from the results of any one of the surveys I’ve seen so far.