“One of the great mistakes is to judge policies and programs by their intentions rather than their results” -Milton Friedman
Program evaluation is the process of collecting data and information about a program with the goal of making decisions about it. Including an evaluation plan in your programs demonstrates a serious commitment to outcomes and objectives – that you want to continually improve and know how well you have achieved organizational goals. Increasingly, foundations expect to see an evaluation component in the programs they fund.
What is a Dashboard?
Dashboards have become ubiquitous in both the nonprofit and corporate sectors. A dashboard is an easy-to-use evaluation tool that gives nonprofit leaders and funders an at-a-glance view of what’s happening within the organization, and it is one of the most consistently successful best practices that thriving nonprofits have in their toolbox. For a new or emerging nonprofit, you can launch more quickly and efficiently if you know which parts of your programs are most effective, so that you can continue to hone and perfect those areas. Additionally, you can identify which programs need improvement and which you should consider dropping altogether.
Continuous development and evaluation go hand-in-hand; donors can’t resist supporting organizations that can say with confidence, “This is what we do; this is what we know works; this is what we’ve learned . . . won’t you learn with us?”
If you’re not familiar with what a dashboard actually looks like, you can review an elegant example created for a fictitious nonprofit named Happy Paws Animal Shelter by evaluation consultant Myia Welsh.
This dashboard may at first appear overly simplistic, but don’t underestimate its power in conveying critical information. In general, dashboards are supremely effective and important in supporting program decision-making, funding channels, and board governance. Also, you’ll probably be relieved to learn that you don’t need to learn new software to create effective infographics. The charts and graphs in the Happy Paws dashboard were all created in Excel and then copied and pasted into Word, so it’s very easy. That said, there are many software programs dedicated to creating very slick dashboard graphics; a Google search will yield a number from which to choose. For now, my advice would be to keep it simple.
Organizations and individuals that contribute to nonprofits want to know that their investment is sound. They want to know that they’re getting a sound return on that investment, and that their favorite organization is able to learn from their work and course-correct to ensure continuous improvement. With sound program evaluation in place, an organization should be able to answer the question, “What are you doing and how well are you doing it?”
Systematic Collection of Information
The term “systematic” indicates that evaluations have a clearly defined outline for how information and data are collected. There is also a well-defined rationale for what types of data are collected and why. You can collect evaluation data on program activities, characteristics, and outcomes. Although we usually confine our evaluation solely to outcomes, you should keep in mind that evaluation can (and should) extend to all aspects of programming.
Activities: If you’re conducting training events, how many did you have? How many participants attended? Collecting and documenting this type of data is often tedious and an afterthought – particularly for major events, when everybody is wearing a dozen ‘hats’ and is so busy. Nevertheless, this is exactly the data that funders want to see; it’s worth dedicating extra staff or volunteer time to ‘count beans’ during such events.
Characteristics: The parameters of your program environment.
Outcomes: Benefit or change resulting from an action. Outcomes are often used to refer to benefits to individuals, families, communities, or larger populations during or after program participation – for example, reduction in the number of low-birthweight babies. Outcomes can also refer to change in systems, organizations, or programs – for example, improved communication or increase in number of clients served.
When is a good time to start?
You can begin collecting evaluation data anytime – why not start today? As far as reporting to funding sources, you may opt to include evaluation data in your annual report, produce quarterly reports for your board, and/or follow a reporting cycle that coincides with major funders’ requests.
Your annual report is a piece of the organization’s ‘public face’, so this may be a natural first step in reporting evaluation data. You should be able to articulate and demonstrate what you’ve been doing and what the results are – not that you’ve just been ‘busy’ – but that you’ve been busy doing the things that matter a great deal to your community. The advantage of starting now is that you can begin to detect trends and establish best practices immediately, so that you’re ready to show some data in your next annual report.
When a funder has questions or you want to put together a grant proposal, you can have information readily available to strengthen your case for support – you know where you’ve been and how you’ve improved.
Am I Replacing the Storytelling Component of Program Success?
Stories represent the human value of your organization and are indispensable in conveying its success. Quantitative data are not a replacement for testimonials and stories; in fact, they’re inextricably linked to the stories, together depicting a more comprehensive narrative. Try coupling testimonials with scale-based satisfaction data – that’s a winning combination!
For example, if you have a training program, you can collect data on how many participants attended, how satisfied participants were with the program, and how many would recommend it to a friend. Three months after the training, you can contact participants again to ask how they’re putting the training to work in their everyday experiences. Now you’ve got some great concrete information about the value and efficacy of the program – funders will love it!
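For the technically inclined, the training-program metrics described above can be tallied with a few lines of code. This is a minimal sketch using hypothetical survey records (each participant’s satisfaction score and whether they would recommend the program); the function name and data shape are illustrative assumptions, not part of any particular toolkit.

```python
# Summarize hypothetical training-survey responses into the three
# headline numbers funders like to see: attendance, average
# satisfaction, and the share who would recommend the program.

def summarize_training(responses):
    """Each response is a (satisfaction 1-5, would_recommend bool) pair."""
    attended = len(responses)
    avg_satisfaction = sum(score for score, _ in responses) / attended
    recommend_rate = sum(1 for _, rec in responses if rec) / attended
    return attended, round(avg_satisfaction, 2), round(recommend_rate * 100)

# Ten hypothetical participants
data = [(5, True), (4, True), (4, True), (3, False), (5, True),
        (4, True), (2, False), (5, True), (4, True), (3, True)]
attended, avg, pct = summarize_training(data)
print(f"{attended} attended; avg satisfaction {avg}/5; {pct}% would recommend")
```

The same summary could just as easily be produced with an Excel formula; the point is that three simple aggregates turn raw sign-in sheets and surveys into reportable evidence.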
Start by developing data-collection forms for your program team, with their input, of course, and ask them to start with proverbial ‘baby steps’. The form doesn’t have to be complicated, time-consuming, or overly sophisticated. Your goal is to collect data in an accurate, reliable way without having to go back and recreate the program experience later – that’s an epic waste of time and you won’t get a lot of ‘buy in’ for your evaluation scheme.
Start with the highest-priority areas to which your mission directly relates. Counting everything can drive you mad, so you must prioritize the evaluation effort. Select one or two programs and keep track of the information that shakes out naturally. One small step that is well-executed is much better than doing nothing at all.
The Happy Paws Evaluation Dashboard
Our fictional organization Happy Paws is a small organization with an annual budget of ~$100,000 and 2.5 staff members. The organization has two objectives: 1) to help its board members have more efficient meetings and governance, and 2) to always “know where they are” in terms of the goals attached to its mission. One way it can do that is to create a dashboard. The tool is called a dashboard because it serves the same function as the dashboard in your car – with a quick scan, you can easily see the gauges, the measures, and any “warning lights” that may require your immediate attention.
These ‘visual reports’ are a fantastic way of presenting information to your board, funders, members, and constituents – nobody enjoys or has time to read through reams of text! This gives everybody a big-picture view of your programmatic areas in a top-level format. They don’t necessarily need to know how things are happening – don’t send them down a rabbit hole. Instead, they can see that, for instance, Happy Paws’ volunteer program has shown a slight increase over the years – they’re doing great. That’s the end of the story – there’s no need to waste additional time in a board meeting poring over the details of that program.
As to what items you’ll want to focus on in creating your dashboard, the lead staff and board should reach consensus about what is most useful and valuable – don’t count everything for every program. What are the high-level measures the board needs to know so that it has confidence that things are running smoothly programmatically? This gives your board the clearance to focus on the thing that it does best – outstanding governance.
Let’s examine the Happy Paws dashboard and its elements more closely. It has metrics for funding, client feedback, board performance, volunteer hours, and shelter activity – a nice balance of organizational activities. Let’s review a few . . .
Programmatic – In this example, you’ll note that there are three color indicators for performance – red, yellow, and green – where green equals success, yellow indicates caution, and red marks an area of performance that requires your attention. The board can track success over time at a glance.
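The red/yellow/green logic behind such an indicator is simple enough to sketch in a few lines. The thresholds below are illustrative assumptions (a metric at or above target is green, within 80% of target is yellow, anything lower is red); the Happy Paws example doesn’t specify its cutoffs, so adjust them to your own programs.

```python
# Map a metric's actual value against its target to a red/yellow/green
# status, the way a dashboard cell might be colored.
# The 0.8 "caution" cutoff is an assumed threshold, not a standard.

def status(actual, target, caution=0.8):
    """Green at/above target, yellow within `caution` of it, else red."""
    if actual >= target:
        return "green"
    if actual >= caution * target:
        return "yellow"
    return "red"

print(status(105, 100))  # green: met the goal
print(status(85, 100))   # yellow: close, worth watching
print(status(60, 100))   # red: needs attention
```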
Board Metrics – This is a unique idea – and a good one. Why would a board want to keep track of things like attendance at board meetings, new board member recruitment, committee meeting participation, executive director performance, etc.? This is the board’s effort to hold itself accountable.
Fund Development Chart – This is, of course, always on the minds of board members – Are we on track for meeting our fundraising goals this year? What’s the revenue picture? Very clear, succinct, and at-a-glance.
Customer Satisfaction Survey Results – This is scale-based information and is very easy to do! In this case, you might see that there’s an equal number of customers who are “very satisfied” as are “somewhat satisfied”. So, this is really useful information. The “very unsatisfied” in this context could mean that Happy Paws didn’t have the breed that a client wanted; it could mean that some were unhappy that they had to wait a week to pass an adoption application, etc. How do you know? The results of this survey would indicate that there is additional information that can (and should) be gleaned; therefore, you can contact each group for follow-up to investigate specific reasons for dissatisfaction and/or ask more specific questions in future surveys. The bottom line is this . . . if you don’t ask questions, you’ll never know that clients are dissatisfied, and so you lose opportunities to steward the relationship into the future and win them back (or keep them).
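Turning raw scale-based survey responses into the counts a dashboard chart displays is a one-step tally. This is a minimal sketch with hypothetical responses and an assumed four-point scale; your own survey’s labels may differ.

```python
# Tally scale-based satisfaction responses into per-level counts,
# the raw numbers behind a dashboard bar chart.
from collections import Counter

# Assumed four-point scale; substitute your own survey's labels.
SCALE = ["very unsatisfied", "somewhat unsatisfied",
         "somewhat satisfied", "very satisfied"]

def tally(responses):
    """Count responses at each scale level, including zero-count levels."""
    counts = Counter(responses)
    return {level: counts.get(level, 0) for level in SCALE}

responses = ["very satisfied", "somewhat satisfied", "very satisfied",
             "very unsatisfied", "somewhat satisfied", "very satisfied"]
print(tally(responses))
```

Pasting these counts into an Excel bar or pie chart produces exactly the kind of at-a-glance satisfaction graphic the Happy Paws dashboard uses.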
It may be a challenge to inspire your program staff to implement a new evaluation plan. A dashboard is an accountability mechanism, and the program staff may perceive this as yet another thing on their already lengthy ‘to do’ list. As an E.D., what can you do to overcome resistance among critical staff members who, most often, will be the data collectors?
Be aware that staff may feel like the evaluation and dashboard are vehicles for criticism. After all, you’re identifying areas that aren’t performing as well as you’d like, in addition to those that shine. This isn’t about internal criticism; rather, it’s about making the organization better by identifying skill sets that together will bring the organization to the next level of greatness. Evaluation should be a collaborative, participatory endeavor. Work with your staff to identify ways for them to incorporate data collection into their existing work schedules in a way that doesn’t seem cumbersome and tedious.
Program staff want to know if they’re indeed making a difference to their service communities. Intuiting that your work matters is one thing – making a concerted effort to measure and report on it is another. Take your programs to the next level and do the best work that you can.
The key to effective, easy evaluation is knowing what it is that you want to measure. Start with the primary mission-related programs of your organization. Another area of evaluation to consider is what your board members need to effectively govern – ask them for input and empower them to focus their attention on dashboards.
Evaluation may make program staff uncomfortable because it increases accountability. Nevertheless, the ability to elucidate specific areas that can be improved for greater organizational impact may inspire them to want to participate.