I think that the organisers should be applauded for the efforts they’re putting into the survey. And, as I’ve said before, I think the value from it is likely to be in the trends rather than the particular data points, so they’re playing a long game with dedication.
To this end, the 2016 report shows direct comparisons to 2015 in places and has statements like this in others:
We are starting to see a trend where testing teams are getting smaller year after year in comparison with the results from the previous surveys.
I’d like to see this kind of analysis presented alongside the time-series data from previous years and perhaps comparisons to other relevant industries where data is available. Is this a trend in testing or a trend in software development, for instance?
I’d also like to see some thought going into how comparable the year-to-year data really is. For example: is the set of participants sufficiently similar (in statistically important respects) that direct comparisons are possible? Or do some adjustments need to be made to account for, say, a larger number of respondents from some part of the world or from some particular sector than in previous years? Essentially: are changes in the data really reflecting a trend in our industry, or a change in the set of respondents, or both, or something else?
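One standard way to make such an adjustment is post-stratification weighting: reweight each year's respondents so that the mix (by region, sector, and so on) matches a common baseline before comparing averages. The sketch below uses entirely hypothetical respondent data and region names; it is an illustration of the idea, not anything from the survey itself.

```python
# Hypothetical sketch: post-stratification weighting, so that a year-on-year
# comparison of mean team size is made against a common respondent mix.
from collections import Counter

# Hypothetical respondents: (region, team_size). Not survey data.
resp_2015 = [("EU", 8), ("EU", 6), ("US", 10), ("US", 12), ("ASIA", 5)]
resp_2016 = [("EU", 7), ("US", 9), ("ASIA", 4), ("ASIA", 6), ("ASIA", 5)]

def weighted_mean_team_size(responses, target_mix):
    """Mean team size, with each region reweighted to match target_mix."""
    counts = Counter(region for region, _ in responses)
    total = 0.0
    for region, size in responses:
        # weight = desired share of this region / observed share in this year
        observed_share = counts[region] / len(responses)
        weight = target_mix.get(region, 0) / observed_share
        total += weight * size
    return total / len(responses)

# Use the 2015 regional mix as the common baseline.
mix_2015 = {r: c / len(resp_2015)
            for r, c in Counter(region for region, _ in resp_2015).items()}

raw_2016 = sum(size for _, size in resp_2016) / len(resp_2016)
adj_2016 = weighted_mean_team_size(resp_2016, mix_2015)
print(f"raw 2016 mean: {raw_2016:.2f}, adjusted to 2015 mix: {adj_2016:.2f}")
```

In this toy example the raw 2016 mean is lower than the adjusted one, purely because 2016 over-samples a region with smaller teams; that is exactly the kind of composition effect that can masquerade as (or mask) a real trend.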
While I’m wearing my wishing hat, I’d be interested in questions which ask about the value of the changes that are being observed. For example, are smaller teams resulting in better outcomes? What kind of outcomes? For whom? I wonder whether customers or consumers of testing could be polled too, to give another perspective, with a different set of biases.