As I’ve mentioned in a previous post, I love The New Yorker. I also love appropriating things I’ve read in the magazine to work for my personal interests (personal side-note: you can see one such personal appropriation here). And in the December 12th edition of the magazine, I found two interesting references to data that I’m excited to appropriate for social impact assessment. The first was in an article called “The Power of Nothing,” about the possible positive health benefits associated with the placebo effect. The first point I picked out of the article was the importance of having reliable data for credibility, in a quote from an advocate of the benefits of the placebo effect:
“I realized long ago that at least some people respond even to the suggestion of treatment… We know that. We have for centuries. But unless we figured out how that process worked, and unless we did it with data that other researchers would consider valid, nobody would pay attention to a word we said.”
Although I am a huge champion of data and believe it is imperative for any social venture to be able to continually assess and improve its programs and activities, I do confess that I also rely on my gut belief – based on qualitative factors, personal experience, and some other aspects that I can’t quite put a name to – that the work social ventures do does make a positive social impact. That gut belief may be good enough to power me through my days and my own work, but it is not (and should not be) enough to convince anyone else to join the work, which is an essential part of building a critical mass to make transformational change.
We are in a nebulous place, talking about data and gut belief as equally important aspects of social impact assessment. As with the placebo effect, it is “essential to consider both the science and the art” of social impact, but, similarly, “[t]antalizing hints and possible effects are not data.” How can we reconcile these differing aspects of social impact work and social impact assessment? If we can see with our eyes that our programs are doing good work, then how do we turn that into data?
Perhaps it is in the questions we ask, the answers we collect, the actual assessing that we do. Another article in the same New Yorker issue also focuses on data and a similar problem – this time in regard to football. Famed coach-turned-commentator Jon Gruden is a wealth of football information, and “demonstrates every Monday night [that] it’s not possible to assess football without statistics.” I would say the same is true about assessing social impact – we need hard data. But Jon Gruden notices so much more than easily quantifiable data like the number of receptions or yards rushed. And the author of the article suggests that what football needs is “more and better statistics: a way to measure all the things that Gruden notices when he is watching and rewatching plays.” We’ve seen over and over again how our work can positively effect change. How can we turn that knowledge into more and better statistics?
I have a gut belief that it is possible to create a process and a database that would capture that not-easy-to-capture data and information that shows social impact happening outside of numbers. Next stop: reading everything I can about theoretical database design.
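To make that gut belief a little more concrete, here is one very rough sketch of what such a database could look like – every table and column name below is my own invention, purely illustrative, not anything from the articles or an existing system. The idea is simply that a single table can hold both the hard numbers and the qualitative observations side by side, so neither kind of evidence gets lost:

```python
import sqlite3

# Hypothetical schema: each row is one observation about a program.
# It can carry a hard metric (metric_name/metric_value), a qualitative
# note, or both. All names here are illustrative placeholders.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE observations (
        id           INTEGER PRIMARY KEY,
        program      TEXT NOT NULL,   -- which program or activity
        recorded_on  TEXT NOT NULL,   -- ISO date of the observation
        metric_name  TEXT,            -- e.g. 'attendance' (NULL if none)
        metric_value REAL,            -- the hard number, if there is one
        note         TEXT             -- the qualitative 'gut' observation
    )
""")

# One row with quantifiable data, one capturing only something seen.
conn.executemany(
    "INSERT INTO observations "
    "(program, recorded_on, metric_name, metric_value, note) "
    "VALUES (?, ?, ?, ?, ?)",
    [
        ("tutoring", "2011-12-12", "attendance", 23, None),
        ("tutoring", "2011-12-12", None, None,
         "Two students stayed late to help a newcomer settle in."),
    ],
)

# Both kinds of evidence can then be pulled together for one program.
rows = conn.execute(
    "SELECT metric_name, metric_value, note "
    "FROM observations WHERE program = ?",
    ("tutoring",),
).fetchall()
for metric, value, note in rows:
    print(metric, value, note)
```

The point of the sketch is the design choice, not the code: by letting the metric columns be nullable and giving qualitative notes a first-class column, the not-easy-to-capture observations live in the same place as the statistics, where they can later be coded, counted, and turned into the “more and better statistics” the work needs.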