Evaluation Plans & Systems
As entities that serve the public, nonprofit organizations have an obligation to demonstrate their value to the public. The public has a stake in nonprofit performance and is entitled to information regarding organization results. Nonprofits should regularly measure their performance against a clear set of goals and should share such information with their constituents. Nonprofit evaluation should be appropriate to the size and purpose of the organization and evaluation data should be used to continually improve the quality of processes, programs and activities.
- 1 Evaluation Planning
- 2 Logic Models
- 3 Environmental Scan & SWOT Analysis
- 4 Community Needs/Assets Assessment
- 5 Formative Evaluation
- 6 Summative Evaluation
- 7 Program Evaluation
- 8 Outcome Evaluation
- 9 Impact Evaluation
- 10 Sharing with the Public
- 11 Stakeholder Feedback
- 12 Resources, Sample Documents & Further Reading
- 13 Articles
- 14 References
Evaluation Planning

From the Minnesota Council of Nonprofits:
Before you begin the evaluation process, you should have the end in mind – what will you do with your evaluation results? Who will you share them with? How will you incorporate findings to improve programs? What are your goals for conducting the evaluation? Considering up front your goals for the evaluation and how you will use the results will help ensure an effective evaluation process.
In larger organizations, most evaluations involve more than one person. One person should be designated the "lead," responsible for ensuring effective planning, implementation and use of the evaluation. Evaluation team members should be identified and informed early on, and should take an active part in all aspects of the evaluation. The team includes organizational leadership, who should support the evaluation from start to finish; the lead staff person, who oversees its creation, implementation and use; the evaluation design team; and the program staff members who will carry out the evaluation. Communicating clearly with, and involving, all of these people ensures that the evaluation is carried out effectively.
In smaller nonprofits, one person may perform many of the evaluation duties. During the planning stage, small organizations may consider including volunteers, board members or clients to ensure an adequate focus on community needs. Alternatively, many small nonprofits may choose to hire a short-term consultant or a college intern to support intensive data collection over a short time period.
It is easy to forget that conducting an evaluation can require extra resources – including staff time, systems implementation, and for some, hiring a consultant. It is important to be realistic up front about the amount of time and resources your evaluation will take. Neglecting to think about resources can result in an evaluation that isn't thoroughly implemented or complete. If you are considering hiring an outside consultant to help with your evaluation, plan to spend 5–20% of the total program cost on evaluation.
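As a rough sketch of that 5–20% rule of thumb, the consultant budget range can be computed directly. The $100,000 program cost below is a hypothetical figure used only for illustration.

```python
# A minimal sketch of the "5-20% of total program cost" rule of thumb
# described above. The program cost is a hypothetical example figure.
def evaluation_budget_range(program_cost: float,
                            low: float = 0.05,
                            high: float = 0.20) -> tuple[float, float]:
    """Return the low and high ends of a suggested evaluation budget."""
    return program_cost * low, program_cost * high

low, high = evaluation_budget_range(100_000)
print(f"${low:,.0f} - ${high:,.0f}")  # $5,000 - $20,000
```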
Logic Models

The Logic Model process is a tool that has been used for more than 20 years by program managers and evaluators to describe the effectiveness of their programs. The model describes logical linkages among program resources, activities, outputs, audiences, and short-, intermediate-, and long-term outcomes related to a specific problem or situation. Once a program has been described in terms of the logic model, critical measures of performance can be identified.
Logic models are narrative or graphical depictions of processes in real life that communicate the underlying assumptions upon which an activity is expected to lead to a specific result. Logic models illustrate a sequence of cause-and-effect relationships—a systems approach to communicate the path toward a desired result.
A common concern of impact measurement is limited control over complex outcomes. Establishing desired long-term outcomes, such as improved financial security or reduced teenage violence, is tenuous because of the limited influence we may have over the target audience and complex, uncontrolled environmental variables. Logic models address this issue because they describe the concepts that need to be considered when we seek such outcomes. Logic models link the problem (situation) to the intervention (our inputs and outputs), and the impact (outcome). Further, the model helps to identify partnerships critical to enhancing our performance.
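The chain of components described above can be sketched as a simple data type. The field names follow the logic-model components named in the text; the example values are invented for illustration and are not a standard schema.

```python
from dataclasses import dataclass, field

# A minimal sketch of a logic model as a data structure. All example
# entries below are invented, not drawn from a real program.
@dataclass
class LogicModel:
    situation: str                                        # the problem addressed
    inputs: list[str] = field(default_factory=list)       # resources invested
    activities: list[str] = field(default_factory=list)   # what the program does
    outputs: list[str] = field(default_factory=list)      # direct products
    short_term: list[str] = field(default_factory=list)   # near-term outcomes
    intermediate: list[str] = field(default_factory=list)
    long_term: list[str] = field(default_factory=list)    # ultimate impact

model = LogicModel(
    situation="High youth unemployment",
    inputs=["staff", "funding", "training space"],
    activities=["weekly job-skills workshops"],
    outputs=["participants complete the workshop series"],
    short_term=["increased confidence"],
    intermediate=["changed job-search behavior"],
    long_term=["stable employment"],
)
```

Writing the model down this way makes gaps visible: any empty list marks a link in the cause-and-effect chain that still needs to be specified.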
Environmental Scan & SWOT Analysis
SEE ALSO: Strategic Plan
The environmental scan helps you to understand the broader context in which you’re operating. By investing the time to identify key trends and environmental factors that impact your nonprofit, you can begin to think through the implications and, where appropriate, plan a course of action. An environmental scan is an objective review of the current and anticipated environmental factors that impact your organization. These can include, for example, the political, economic and demographic environment in which you’re operating.
Nonprofits exist in a strange netherworld between market forces and social change. They are trying to create a solution to a social problem, but as much as some might like to deny it, that desired social change exists within a market economy. That means that in order to be successful, nonprofits, just like any business, must continually analyze, understand and create strategies around whatever market forces are at play (competition for funding, clients, partnerships, inputs, results; increased/decreased regulation; changing client/funder demand; changing input costs; changing technology, etc.).
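One lightweight way to record environmental-scan findings is the classic four-quadrant SWOT grouping, separating internal from external factors. All entries below are invented examples.

```python
# A minimal SWOT sketch: internal factors (strengths/weaknesses) vs.
# external factors (opportunities/threats). All entries are invented.
swot = {
    "strengths":     ["experienced program staff"],
    "weaknesses":    ["reliance on a single major funder"],
    "opportunities": ["new state grant program"],
    "threats":       ["increased competition for clients and funding"],
}

internal = swot["strengths"] + swot["weaknesses"]        # within your control
external = swot["opportunities"] + swot["threats"]       # market forces at play
```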
Community Needs/Assets Assessment
A community needs assessment identifies the strengths and resources available in the community to meet the needs of its inhabitants. The assessment focuses on the capabilities of the community, including its citizens, agencies, and organizations. It provides a framework for developing and identifying services and solutions and building communities that support and nurture children and families.
A community assessment may be limited to a compilation of demographic data from census records, results of surveys conducted by others, and informal feedback from community partners. Or, assessments may be expanded to include focus group discussions, town meetings, interviews with stakeholders, and telephone or mailed surveys to partnership members and the community.
Understanding a community's concerns enables us to effectively characterize its needs and respond with appropriate interventions. In order to assess communities and create a community profile, we need to discover those things that matter to the community, what issues the community feels are most important to address, and what resources are available to bring about change. By interviewing community members, conducting listening sessions and public forums, and spending time in the place, we can develop an assessment (or profile) of the community that helps identify critical issues and plan future interventions.
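A first-pass way to turn raw interview and listening-session notes into a ranked picture of community concerns is a simple tally. The issue mentions below are hypothetical.

```python
from collections import Counter

# Hypothetical issue mentions collected from interviews and listening
# sessions; tallying them gives a first-pass ranking of community concerns.
mentions = ["housing", "transit", "housing", "childcare", "housing", "transit"]
priorities = Counter(mentions).most_common()
print(priorities)  # [('housing', 3), ('transit', 2), ('childcare', 1)]
```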
Formative Evaluation

At its most basic, formative evaluation is an assessment of efforts prior to their completion for the purpose of improving the efforts. Formative evaluation encourages a process of reflective practice.
There are many evaluation tools -- observation, in-depth interviews, surveys, focus groups, analysis, reports, and dialogue with participants -- each of which can be part of formative evaluation. Depending on the goals of the formative evaluation, it may emphasize one or more of these tools.
There are four main goals for formative evaluation, each of which may be more or less emphasized depending on the program needs:
- Planning Evaluation: Planning evaluation clarifies and assesses a project's plans. Are the goals and timelines appropriate? Are the methods utilized to reach the goals appropriate? In addition, a planning evaluation can lay the groundwork for future formative and summative evaluations by developing indicators and benchmarks. In conflict resolution work, it is often useful to include a planning evaluation component in order to ensure that all stakeholders share common enough visions of the project plans. A planning evaluation can be a form of consensus building amongst those involved in conflict resolution.
- Implementation Evaluation: An implementation evaluation focuses on the extent to which a program is proceeding according to plan. Information about ways in which a program is not proceeding according to plan can be used to either revise plans or to revise programming. In conflict resolution assessment, implementation evaluation can be a useful component to feed into a planning-focused evaluation. (Implementation evaluations can also be part of Summative Evaluations.) Where work is not proceeding according to plan, participants and facilitators can use an implementation evaluation with a planning focus to ask themselves why things are not going according to plan, and adjust plans or strategies accordingly.
- Monitoring Evaluation: A monitoring evaluation is usually conducted by an outside evaluator during the course of a program. A funder may choose to monitor implementation of a conflict resolution project by visiting a workshop, checking in with participants, or talking with project personnel. For long-term conflict resolution work, a monitoring evaluation can provide a funder useful reassurance that money is being well spent.
- Progress Evaluation: A progress evaluation assesses a program's progress. The project's unique goals should serve as a benchmark for measuring progress. Information from a progress evaluation can later be used in a summative evaluation. In conflict resolution work, a progress evaluation might assess attitude change part-way through a multi-year program, providing both feedback on what's working, and evidence of impact early on in a program.
Summative Evaluation

A summative evaluation is a type of assessment that occurs at the end of a project or activity. The purpose of the evaluation is to go over the particulars of what transpired during the course of the activity, identify the key events or factors that influenced the outcome, and determine what could have been done to address negative aspects or to reinforce positive aspects of that activity so the outcome was more profitable or positive. Performing a summative evaluation can be very helpful as a tool to learn from the experiences of one activity so that future activities can be structured with greater efficiency.
The idea behind a summative evaluation is different from that of a formative evaluation. With the former, a project that is now complete is assessed with the goal of learning from the experience, both in terms of what worked and what did not. By contrast, a formative evaluation occurs prior to or at the point at which a project is launched, with the goal being to identify potential strengths and weaknesses that are likely to impact the outcome. Both forms of evaluation are critical, with one making it possible to minimize liabilities with a new project and the other serving to provide data that can aid in achieving success with future projects.
Conducting a summative evaluation will involve identification of all benefits and liabilities that came to pass during the course of the project or activity. While the focus is often on direct benefits and liabilities, the evaluation may also consider indirect results. For example, along with noting that a project increased sales revenue by ten percent for the most recently closed accounting period, it may also be noted that the project had the benefit of increasing efficiency in one or more departments by a significant margin. While the goal may have been to increase sales revenue, identifying the additional benefit of higher productivity makes it easier to determine if the overall benefits received were worth any costs that may have been incurred during the course of the project.
Program Evaluation

Typically, organizations work from their mission to identify several overall goals which must be reached to accomplish their mission. In nonprofits, each of these goals often becomes a program. Nonprofit programs are organized methods to provide certain related services to constituents, e.g., clients, customers, patients, etc. Programs must be evaluated to decide if the programs are indeed useful to constituents. In a for-profit, a program is often a one-time effort to produce a new product or line of products.
Program evaluation is the careful collection of information about a program, or some aspect of a program, in order to make necessary decisions about it. Program evaluation can include any of at least 35 different types of evaluation, such as needs assessment, accreditation, cost/benefit analysis, effectiveness, efficiency, formative, summative, goal-based, process, and outcome evaluation. The type of evaluation you undertake to improve your programs depends on what you want to learn about the program. Don't worry about what type of evaluation you need or are doing -- worry about what you need to know to make your program decisions, and how you can accurately collect and understand that information.
Outcome Evaluation

Organizations that are serious about their theory of change engage in regular self-assessment and evaluation of outcomes. The results of measuring outcomes can be shared with stakeholders to illustrate the impact of an organization's programs and activities, and to demonstrate the difference the organization is making in its community and in peoples' lives. Seeing the difference an organization is making -- on paper, in video, through testimonials -- is powerful. To be successful a nonprofit must embrace a culture that supports outcomes thinking. This is not as daunting as it sounds.
Outcome management enables organizations to define and use specific indicators to continually measure how well services or programs are leading to the desired results. With this information, managers can better develop budgets, allocate their resources, and improve their services.
Outcome evaluation measures the change that has occurred as a result of a program. For example, your process evaluation might confirm that 200 people have completed your skills-training program. An outcome evaluation would tell you how many of those demonstrated increased confidence, changed behaviors, found jobs because of the new skills, etc. A successful outcome management program includes a process to measure outcomes plus the use of that information to help manage and improve services and organizational outcomes.
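Outcome indicators like these are often reported as rates against the number of completers. In the sketch below, only the 200 completions figure comes from the example above; the indicator counts are invented for illustration.

```python
# Hypothetical outcome tallies for the skills-training example above.
# Only the 200 completions figure comes from the text; the indicator
# counts are invented for illustration.
completed = 200
outcomes = {
    "increased confidence": 150,
    "changed behaviors": 110,
    "found jobs": 90,
}
rates = {name: count / completed for name, count in outcomes.items()}
print(rates["found jobs"])  # 0.45
```

Tracking these rates over successive program cycles, rather than as one-off numbers, is what turns outcome measurement into outcome management.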
Impact Evaluation

An impact evaluation looks at the long-term, deeper changes that have resulted from a program. This type of evaluation could, for example, suggest that the changes to your skills-training participants' lives continued over time and perhaps transferred across generations. While certain outcomes can be easily and reliably measured, true impact measurement is a much trickier business. In its truest sense, impact measurement often involves using an independent evaluator, establishing control groups, and measuring changes over extended periods of time. This can be extremely costly, and reliable results may take years to emerge.
Sharing with the Public
Resources, Sample Documents & Further Reading
Nonprofit Answer Guide: Evaluation
National Center for Charitable Statistics: Homepage
The Evaluation Center | Western Michigan University: Evaluation Checklists
Foundation Center, IssueLab: Evaluation Tools and Frameworks
Stanford Social Innovation Review: Seven Deadly Sins of Impact Evaluation
Stanford Social Innovation Review: Getting Results: Outputs, Outcomes and Impact
Urban Institute: Building Evaluation Capacity
U.S. Government Accountability Office (GAO): Designing Evaluations
W.K. Kellogg Foundation: Evaluation Handbook
Free Management Library: Evaluation Activities in Organizations
American Evaluation Association: Public Evaluations Documents Library
BlueAvocado.org: Getting Real about Real-Time Evaluation
The Aspen Institute, Rural Economic Policy Program: Measuring Community Capacity Building, A Workbook-in-Progress for Rural Communities
University of Iowa: Measuring Strengths in Community Collaboratives
Asset Based Community Development Institute (Northwestern University): Downloadable Publications and Resources
Innovation Network: State of Evaluation in Nonprofits 2010
W.K. Kellogg Foundation: Logic Model Development Guide
University of Wisconsin Extension Office: Logic Model Examples and Templates
Environmental Scanning/SWOT Analysis
Strategic Management Insight: SWOT Analysis - Do It Properly!
Social Velocity.net: SWOT Analysis
Journal of Extension: 10-Step Process for Environmental Scanning
Community Needs/Assets Assessment
Nonprofit Association of the Midlands: Midlands Community Compass
University of Kansas: Community Tool Box
Asset Based Community Development Institute: Building Communities from the Inside Out: A Path Toward Finding and Mobilizing a Community's Assets - Introduction
Community Action Partnership: Community Needs Assessment Online Tool
Social Solutions: How to Perform a Nonprofit Needs Assessment
Learning to Give: Community Needs Assessments - Definitions and Further Resources
Free Management Library: Basic Guide to Program Evaluation (Including Outcomes Evaluation)
The Bridgespan Group: Program Evaluation
Urban Institute: Key Steps in Outcome Management
National Council of Nonprofits: Evaluation and Measurement of Outcomes
GuideStar Blog: Your Engine of Impact: Impact Evaluation
Sharing with the Public
Centers for Disease Control and Prevention: Reporting Evaluation Findings to Different Audiences
Charity Channel: Six Steps to Effective Program Evaluation: Communicate Your Results
Robert Wood Johnson Foundation: A Practical Guide for Engaging Stakeholders in Developing Evaluations
Innovation Network: Expanding Stakeholder Involvement in Evaluation
Blueprint for Change: Identifying Stakeholders
Independent Sector: Charting Impact Sets Goal for 1,000 Completed Reports in 2012
Articles

Nonprofit Quarterly: Thinking About Nonprofit Evaluation as Affected by Time, September 4, 2013
References

- Scriven, Michael. "Beyond Formative and Summative Evaluation." In M.W. McLaughlin and D.C. Phillips, eds., Evaluation and Education: At Quarter Century. Chicago: University of Chicago Press, 1991.