Most software metrics are terrible. They are as overhyped as they are poorly researched.
But I think it’s part of the story of humanity that we’ve always worked with imperfect tools and always will. We succeed by learning the strengths, weaknesses and risks of our tools, improving them when we can, and mitigating their risks.
So how do we deal with this in a way that manages the risks while still getting useful information to the people we care about?
I don’t think there are easy answers, but several people in the field are grappling with this constructively. I’ve seen several intriguing conference-talk descriptions in the last few months and hope to post comments praising some of them later. For now, here’s my latest set of notes on the topic: http://kaner.com/pdfs/PracticalApproachToSoftwareMetrics.pdf