Author Archives: Emily Hicks-Rotella

Inspiration for a Social Impact Database


Just sharing this little article that has given me a lot of inspiration for the Social Impact Database. Favorite quote:

“That’s the beautiful thing about data – it can either back up your instincts or prove them totally wrong.”

Check out the full article here:

The Eight Pillars Of Innovation


I Dream of a Database: My StartingBloc Pitch

Hey Everyone,

Right now I’m in sunny Southern California attending the StartingBloc Institute for Social Innovation, and really having my mind blown by the kind of talented, hard-working, incredible dreamers and doers here with me.

I’ll write more about this when the Institute is all done and I’m back in New York, but I wanted to share my pitch for my dream. I pitched this to 99 other social entrepreneurs and amazing people, and I’ve gotten some good feedback on it. So now, calling all developers, artists, and dreamers: help me make this a reality!! Here it is:

“While many folks are doing great work recording and databasing quantitative measures of social impact, far less is being done on the possibly more challenging work of gathering, recording, and databasing qualitative measures. I want to create a database that mission-driven organizations can reference for qualitative social impact assessments – one that will help improve programming and connect seemingly disparate organizations through measurable qualitative social impacts.”


More and Better Data!

Image from Visual Sport via Random, Yes. Useful, No.

As I’ve mentioned in a previous post, I love The New Yorker. I also love appropriating things I’ve read in the magazine to work for my personal interests (personal side-note: you can see one such personal appropriation here). And in the December 12th edition of the magazine, I found two interesting references to data that I’m excited to appropriate in terms of social impact assessment. The first was in an article called “The Power of Nothing,” about the possible positive health benefits associated with the placebo effect. The point I picked out of the article was the importance of having reliable data for credibility, in a quote from an advocate of the placebo effect’s benefits:

“I realized long ago that at least some people respond even to the suggestion of treatment…we know that. We have for centuries. But unless we figured out how that process worked, and unless we did it with data that other researchers would consider valid, nobody would pay attention to a word we said.”

Although I am a huge champion of data and believe it is imperative for any social venture to be able to continually assess and improve its programs and activities, I confess that I also rely on my gut belief – based on qualitative factors, personal experience, and some other aspects I can’t quite put a name to – that the work social ventures do makes a positive social impact. That gut belief may be good enough to power me through my days and my own work, but it is not (and should not be) enough to convince anyone else to join the work, which is an essential part of building a critical mass to make transformational change.

We are in a nebulous place, talking about data and gut belief as equally important aspects of social impact assessment. As with the placebo effect, it is “essential to consider both the science and the art” of social impact, but, similarly, “[t]antalizing hints and possible effects are not data.” How are we going to be able to reconcile these differing aspects of social impact work and social impact assessment? If we can see with our eyes that our programs are doing good work, then how do we turn that into data?

Perhaps it is in the questions we ask, the answers we collect, the actual assessing that we do. Another article in the same New Yorker issue also focuses on data, with a similar problem – this time in regard to football. Famed coach-turned-commentator Jon Gruden is a wealth of football information, and “demonstrates every Monday night [that] it’s not possible to assess football without statistics.” I would say the same is true about assessing social impact – we need hard data. But Jon Gruden notices so much more than easily quantifiable data like numbers of receptions or yards rushed. And the author of the article suggests that what football needs is “more and better statistics: a way to measure all the things that Gruden notices when he is watching and rewatching plays.” We’ve seen over and over again how our work can positively effect change. How can we turn that knowledge into more and better statistics?

I have a gut belief that it is possible to create a process and a database that would capture that not-easy-to-capture data and information that shows social impact happening outside of numbers. Next stop: reading everything I can about theoretical database design.
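To make that gut belief slightly more concrete, here is a minimal sketch of what such a database could look like, using SQLite through Python’s standard library. Every table, column, and tag name is hypothetical – just one possible shape for tying qualitative evidence (stories, interview excerpts, rubric-scored observations) to organizations and programs so it can be queried and compared later.

```python
# A hypothetical starting point for a qualitative social impact
# database. Table and column names are illustrative, not a design.
import sqlite3

conn = sqlite3.connect("social_impact.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS organizations (
    org_id  INTEGER PRIMARY KEY,
    name    TEXT NOT NULL,
    mission TEXT
);

CREATE TABLE IF NOT EXISTS programs (
    program_id INTEGER PRIMARY KEY,
    org_id     INTEGER REFERENCES organizations(org_id),
    name       TEXT NOT NULL
);

-- Qualitative evidence: stories, interview excerpts, observed
-- behaviors. A shared rubric score plus free-form tags make entries
-- comparable across otherwise disparate organizations.
CREATE TABLE IF NOT EXISTS qualitative_observations (
    obs_id       INTEGER PRIMARY KEY,
    program_id   INTEGER REFERENCES programs(program_id),
    recorded_on  TEXT,          -- ISO date
    source       TEXT,          -- e.g. 'interview', 'site visit'
    narrative    TEXT NOT NULL, -- the story itself
    rubric_score INTEGER,       -- e.g. 1-5 against a shared rubric
    tags         TEXT           -- e.g. 'engagement,confidence'
);
""")
conn.commit()

# Example query: every organization with observations tagged
# 'confidence' -- one way to connect seemingly disparate groups.
rows = conn.execute("""
    SELECT DISTINCT o.name
    FROM organizations o
    JOIN programs p ON p.org_id = o.org_id
    JOIN qualitative_observations q ON q.program_id = p.program_id
    WHERE q.tags LIKE '%confidence%'
""").fetchall()
print(rows)
```

The real design work would be in the shared rubric and the tag vocabulary – that is where not-easy-to-capture impact gets a common language across organizations.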


Could a predictive model competition help build better social impact assessment?

Image via cc by Flickr user N.Calzas

Teach For America is obsessed with data. While teachers collect data on their students to help make sure they are on track for success, the back-office teams collect data on teachers and alumni (people who have already gone through the 2-year teaching corps) to make sure they are on track for having positive social impact, particularly in the field of educational equity. I know this because for the past two months that I have been employed at TFA, I have been partly responsible for maintaining the infrastructure and integrity of this data. Call me a nerd, but it is awesome. It really isn’t surprising that I love it so much, because (as this blog shows) I truly believe that information organized into data can be used to help and improve programs, activities, and actions that make positive social impact on the world.

Some important questions should come to mind when we think about using data to improve social impact (in other words, when creating a social impact assessment): 1) what data is being collected, and 2) how that data is being utilized to inform decisions and improve programs and activities. After reading a recent FastCompany article and then doing some research, I found an organization called Kaggle that is answering the second question in a very interesting way.

We have a dataset. We have a problem. What do we do?

The idea behind Kaggle rests on the assumption that predictive modeling can actually predict the future to a reasonable degree. A very good informational video on their website explains that “predictive modeling is a way of finding patterns and relationships in existing data, and then using those to predict what will happen in spaces where data isn’t available.” If there were a way to predict the future, why wouldn’t everybody be doing it? The answer is the motivation behind Kaggle: “most organizations don’t have access to the advanced machine learning and statistical techniques that would allow them to extract maximum value from their data. Meanwhile, data scientists crave real-world data to develop and refine their techniques. Kaggle corrects this mismatch by offering companies a cost-effective way to harness the ‘cognitive surplus’ of the world’s best data scientists.”
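For fellow non-data-scientists, here’s a toy illustration of that definition in Python with scikit-learn. Every number below is invented; the point is only the shape of the process: fit a model on existing data, then ask it about a case where no outcome data exists yet.

```python
# Toy predictive-modeling sketch: find patterns in existing data,
# then predict where data isn't available. All values are invented.
from sklearn.ensemble import RandomForestRegressor

# Existing data: [participants, sessions_per_week, staff] for past
# programs, paired with an observed outcome score for each.
X_known = [[30, 2, 3], [120, 1, 5], [60, 3, 4], [200, 2, 10], [45, 4, 2]]
y_known = [0.62, 0.40, 0.75, 0.55, 0.80]

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_known, y_known)

# A proposed program that hasn't run yet -- no outcome data exists.
proposed_program = [[80, 3, 6]]
print(model.predict(proposed_program))  # the model's best guess
```

A real competition dataset would have thousands of rows and far richer features, but the fit-then-predict shape stays the same.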

Could data scientists help us predict what programs will be most effective, given a particular social environment, group of participants, schedule, partner organizations, background information, etc.? Well, two things will have to happen before we will ever be able to test it out. First, we need to get mission-driven leaders into the game.

Check out the current make-up of the Skillbases of Kaggle users:

Perhaps non-profit leaders are in the 1.6% “social science other” category, but for some reason I doubt it. Not many people working at mission-driven organizations are employed because they are data scientists – and I’m guessing most of us aren’t being data scientists in our spare time, either. So we need to get into the game, and we need to change the culture of mission-driven organizations to value employee time being spent on data.

Second, the culture of mission-driven organizations needs to embrace serious and studied data collection. I’m sure you’ve heard the adage “garbage in, garbage out,” meaning that your models are only as good as the data you put into them; only good data will result in truly helpful conclusions. Why should mission-driven organizations go through such culture shifts? Because predictive modeling could be incredibly useful for leaders of nonprofit organizations who are trying to decide what programs or activities to pursue, and how effective their impact could potentially be. In the same vein, corporations could use this information to help them choose their CRM activities. I want to be clear that I do not think this is a panacea – there is never any one solution to solving the world’s problems. But this could be a powerful resource for leaders deciding how they could have the most positive impact on communities.

As I’ve talked about before, it is no surprise that one of the hurdles to using a predictive model for social impact is the difficulty of collecting good data. There is no real model for quantifying transformational change (yet). I bet lots of mission-driven organizations have collected tons of data but don’t necessarily know what to do with it. If we could identify and organize the data we have, or the data we want to have, and then start a competition to find a predictive model for positive social impact, or transformational change, what might we end up with? Potentially, we could create tools (algorithms) that allow mission-driven organizations to enter data within certain parameters (size of organization, number of people being served, demographics of the environment, etc.) and determine with reasonable accuracy whether current programs are truly effective, or whether a new program may have the desired outcome of positive social impact.
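As a sketch of what the organization-facing side of such a tool could look like – every field name here is my own hypothetical stand-in for the parameters above, not anything that exists – the whole thing might reduce to a single function call against whatever winning model a competition produced:

```python
# Hypothetical interface for the tool described above: an organization
# enters its parameters and gets back a predicted impact score.
from dataclasses import dataclass

@dataclass
class ProgramProfile:
    org_size: int           # staff headcount
    people_served: int      # number of people being served
    median_income: int      # crude stand-in for neighborhood demographics
    sessions_per_week: int  # program schedule

def predict_impact(model, profile: ProgramProfile) -> float:
    """Feed the profile to a pre-trained model (say, one produced by a
    Kaggle competition) and return a predicted impact score. The feature
    order must match whatever the model was trained on."""
    features = [[profile.org_size, profile.people_served,
                 profile.median_income, profile.sessions_per_week]]
    return float(model.predict(features)[0])
```

The wrapper is trivial; the value, as ever, lives in the training data behind it.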

The improvement of social impact assessment relies on two factors: collecting good data and creating a set of tools that make use of that data. A social impact assessment exchange would be incredibly powerful if paired with a competition on Kaggle. Imagine crowdsourcing an incredibly huge and interesting data set of mission-driven work and then finding a predictive model to help better understand the impact of that work. I can imagine it, and it gets my mind and heart racing thinking about all of the possibilities!

What is this blog, anyway?

Image via cc by Flickr user Imagematters1

I’ve been asked a few times (not least by myself) why I choose to write this blog. This past week I actually sat down to think about this question, not only so I can have a clear and succinct answer to give people, but also to motivate myself to continue to research and write, even if the only time I have to do that is after midnight on days before I have to get up at 6am for work. I reconnected with my belief that self- and organizational evaluation is a critical part of creating and continually delivering positive social impact. I also thought about how, when I started writing this blog this summer, my vision was for it to become a meeting place where people could exchange information and best practices, celebrate wins, and help each other through struggles – and to grow into a database and data warehouse for leaders to reference when considering evaluation and social impact assessment.

Then I read the Vision for The Evaluation Exchange, a periodical that comes from the Harvard Family Research Project at the Harvard Graduate School of Education: “We publish The Evaluation Exchange based on the belief that policies and programs must be grounded in continuous and actionable learning. Evaluation is one crucial tool in this learning process, and its power to inform and transform needs to be shared broadly.” While The Evaluation Exchange has a stated focus on programs and policies specifically related to children, families, and communities, the idea is the same – let’s get talking and sharing and learning from each other about assessment.

Their issues are all free and archived – I’m about to dig into Volume XI, Number 2, from the summer of 2005, whose topic is Evaluation Methodology. Hopefully it will have tons of great information that I can share here in a future post.

I’m tempted to change the name of my blog to the Social Impact Assessment Exchange. Right now it is just me sharing everything I can find on SIA with anyone who can find me. I’d love to get some more people/organizations on this project… (that ellipsis is where YOU come in!)


What Are We Measuring? The EdTech Edition

Image of Wordle built from the U.S. National Educational Technology Plan via drapestakes.blogspot

Education, educational equity, and education technology. I’ve always wanted to be involved in the first, I’m now working for a nonprofit dedicated to the second, and my incredibly talented, intelligent, and entrepreneurial friend James has gotten me totally hooked on the third. Now I see EdTech everywhere I go, and I am loving it! But of course, my number one question whenever I am learning about new EdTech is: where is the Social Impact Assessment?

Easy to ask, not easy to answer. EdSurge for FastCompany put out a great article recently that led me to discover a cool new tool from the NewSchools Venture Fund that begins the daunting but important task of mapping the edtech world. The tool is good-looking and easy to use. Here’s the overview shot of the map:

And when you drill down to data – which is of course the section I want to showcase – you see something like this:

So far, so good. But once we get down to Student Information Systems (SIS), Reporting Systems, and Data Warehousing, we get back to the hard questions. I actually love how interactive and tech-driven almost all of the mapped companies’ services appear (I don’t have any demos, so I’m only talking about what information I get from the websites). But exactly what data are we gathering? The basics all appear to be there – grades, reading comprehension, math skills, attendance…easily quantifiable metrics that are cornerstones of assessing the educational experience. Look how pretty this all got wrapped up by the winner of GOOD’s “redesign the report card” competition. But what I can’t necessarily tell from the map and the individual websites is whether any of these systems are gathering, storing, and reporting on some of the more qualitative aspects of social impact in the education sector. Are students more engaged in their classrooms? Do they collaborate more with their peers? Do they enjoy learning? Are they inspired and motivated to advance their learning beyond the classroom? Do they believe they can succeed in life? Are they compelled to teach others, formally or informally?

I’ve been working at Teach For America for just about six weeks now, but having already had several deep conversations about the validity and actualization of our core values, I’ve thought a lot about transformational change. It is plain to me (and I believe to many others) that the things we are accustomed to measuring do not paint the whole picture, especially when it comes to transformational change. But how can we quantify inspiration and motivation? And clearly measuring a child’s motivation to learn is only half of the challenge, because we then need to measure their follow-through on that motivation. I’m going to make a guess here (and I’ll be happily proved wrong if anyone wants to speak up) that all of these student information systems and data warehouses stop tracking a child once they leave the school or school system in which they were enrolled. And should they switch schools, their data most likely will not follow them. So how can we really tell the long-term, transformational impact of how we are educating our kids? My first thought goes to gamification of self-reported data – make it a rewards-system game for kids to enter their own data to track their learning experience. All the systems I saw seemed to be aimed almost completely at adult users, with little interaction and engagement from students themselves – except for one: lookred.

Check out lookred’s “what we do” section and you’ll see why they describe themselves as “a student centred experience” serving “a new breed of Digital Natives are moving up through education systems across the world.” Despite the glaringly obvious typo in this next piece of information about one of the offerings from lookred, quoted from the company website, the information is beautiful: “lookred® Spaces begins to address these changing practices and behaviours by providing a platform for education that is more in line with the consumer experience, and draws on inspiration from ‘knowing as much about our learners as Tesco/Wal-mart/Amazon do about thier customers’.  By applying logic traditionally constrained to marketing and advertising domains we can begin to not only better understand individual learners needs and preferences but to specifically target appropriate resources, content and support.” This goes right back to my last post about User Behavior Analytics (…) and how they can be leveraged for social impact assessment. Is lookred® Spaces the tool we can use to gather those missing metrics about transformational change? Will this platform be able to link up (by opt-in) to future Facebook, LinkedIn, and other pages to continue to track transformational change over time?

It’s a brave new world for data collection and for transformational change. How are we going to marry these concepts into a Social Impact Assessment that truly helps us assess the effects of our actions on our stakeholders as well as on society and the world around us? I’m really exhilarated to see so much of this work happening in the education sector. I see two main takeaways from the mapping of the edtech world: doing the hard work of determining what transformational change really is and creating new platforms for metrics and reporting is critical, and engaging students/stakeholders in that work is revolutionary.


Can User Behavior Analytics be adapted for Social Impact Assessment?

I’m a big fan of adaptation. That is to say that when I learn about something that is working for one business or industry or sector, I like to imagine how it could be adapted to benefit something else. These days, that something else is always Social Impact Assessment. And for today’s post, the thing that is working is User Behavior Analytics, and particularly how the data they are based on is gathered.

While I was reading a recent article from FastCompany (Are User Behavior Analytics The Real Predictors Of Customer Engagement?), I found myself substituting phrases like “community engagement” and “social impact measures” for “market share” and “customer economics.” The article is very interesting, and describes how social gaming companies and analytics firms have taken advantage of the “blessing [of having] unlimited amount of data about their customers at their fingertips” as well as the “curse [of having] to figure out a way to sift through all of that data to figure out what’s meaningful and what isn’t.” Those would be champagne problems for mission-driven organizations. I read the phrase “[t]hey used customer data to drive engagement, fuel product development, and improve the user experience,” but adapted it as I was reading to state that “we can use stakeholder data to drive engagement, fuel service development, and improve the social impact.” Here’s one more example of adaptation. Instead of “Understanding and engaging customers requires looking beyond traditional web analytics. To optimize and engage the end user experience across multiple touch points including the web, social sites, and mobile applications, companies must instead focus on user behavior dynamics to analyze and identify deep behavioral insights from their data,” I read “Understanding and engaging stakeholders requires looking beyond traditional impact analytics. To optimize and engage the end user experience across multiple services, activities, and resources, mission-driven organizations must instead focus on stakeholder behavior dynamics to analyze and identify deep behavioral [and impact] insights from their data.”

Did I lose anyone with all that repetition? The differences are key. Developing Social Impact Assessment can seem daunting because it is so important and critical, but also because we don’t have any singularly recognized model from which to work. But maybe we don’t have to reinvent the wheel – at least not completely. Take the example from the article of how Pittsburgh-based startup NoWait engages customers through mobile devices by letting them know when their table at a restaurant is ready, connecting the restaurant to the customer and opening the path for a slew of data to be collected. Mission-driven organizations can find similar paths to engaging their stakeholders by providing a service that also deepens the stakeholder’s connection to the organization and opens the channel for collecting data. For instance, I’ll use my favorite non-profit example, the literacy and mentoring organization Everybody Wins!. Since our main stakeholders are kids, the service we provide them may not be as appreciated as a restaurant place-holder, so let’s mask it in a game. We build a points-and-reward system based on how many hours a child puts into reading or otherwise interacting with a book (this could be having a book read to them, drawing pictures based on a book, writing their own story changing parts of a book, etc.). On our website, we make an interactive game that leads the child to enter their hours and activities. Kids get fun and rewards (and incentive) for reading or otherwise interacting with books, and our organization gets self-reported data on how our program is (or isn’t) encouraging kids to read. We also engage kids more with our organization’s services and with books by making a game out of the data collection system. Finally, we get User Behavior Analytics on how kids are interacting with books, what they like best, how we can better reach them, etc. This is all theoretical, but why not try it out? Why not at least explore it, especially if similar approaches are working in other environments, like the ones described in the FastCompany article?
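To show how little machinery the game itself would need, here is a back-of-the-napkin sketch in Python. The activity names and point values are all invented, and the log it builds is exactly the self-reported dataset that the User Behavior Analytics would later run on.

```python
# Toy points-and-reward tracker for the reading game described above.
# Activity names and point values are invented for illustration.
from datetime import date

POINTS_PER_HOUR = {
    "read_alone": 10,     # independent reading
    "was_read_to": 8,     # had a book read to them
    "drew_pictures": 5,   # drew pictures based on a book
    "rewrote_story": 12,  # wrote their own version of a story
}

activity_log = []  # this log *is* the self-reported dataset

def record_activity(kid_id: str, activity: str, hours: float) -> int:
    """Record a self-reported activity and return the points earned."""
    earned = int(POINTS_PER_HOUR[activity] * hours)
    activity_log.append({
        "kid": kid_id,
        "activity": activity,
        "hours": hours,
        "points": earned,
        "on": date.today().isoformat(),
    })
    return earned

# A kid reports an hour of reading and a half-hour drawing session:
record_activity("kid-042", "read_alone", 1.0)     # earns 10 points
record_activity("kid-042", "drew_pictures", 0.5)  # earns 2 points
```

Everything the rewards screen needs (and everything the analytics need) comes out of that one log.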

Now, the real challenge is clocking things other than hours, number of books read, or other similarly quantitative measures. How do we utilize User Behavior Analytics to capture and gauge things like how engaging with books helps kids succeed more broadly in school, become more interested in learning, advance their creative and analytical skills, and communicate more effectively with others about books? I can’t wait to hear your input on this!