The best evaluations are planned before the program starts, not after it ends. A clear evaluation strategy helps you match the evaluation to your purpose and goals. And once you know what you want to measure, you can begin collecting information from participants at the first application or interview.

Read Set up Program Evaluation and use the AMP Evaluation Plan Template and Sample Evaluation Plan to help you create your evaluation strategy.

What Are You Evaluating?

Evaluation can only succeed if you have targets to measure against. Set clear goals and expectations for your program; otherwise, you will have no benchmark and no way to show what the program was intended to accomplish. For example, your evaluation can target one or more of these questions:

Are you meeting your program's purpose and goals?

Did your participants meet their goals?

For example, did they:

  • Get a job
  • Grow a network
  • Smooth their cultural transition
  • Gain job search skills

Do your inputs meet the needs of your program?

For example:

  • Do you have the funding to run the program?
  • Do you have enough of the right people to run the program?
  • Do your people have the tools and training to support the program?

What are you doing to support the program and the people who are running it?

For example:

  • Is your marketing effective at drawing in enough participants?
  • Is your training effective for participants?
  • What kind of events are you running?
  • Are the matches effective?

What are your measurable outputs (Key Performance Indicators)?

For example:

  • Number of sessions for a type of program
  • Range of events you run
  • Number of participants at an event
  • Successful job searches
  • Satisfaction with the program

What are the short-term and long-term outcomes of the program?

For example:

  • Short-term outcomes could include more newcomers hired or more networking for mentors and mentees.
  • Long-term outcomes could include a more accepting community for newcomers or other programs that support related goals.

Remember! No single evaluation needs to meet all these targets, and your program doesn’t need to evaluate all these items. Start with the basics for a new program and build in more evaluation as the program becomes established and grows.

In the beginning, you can focus on short-term effects and, over time, look for long-term progress. Allow your program time to hit its targets; it will not get where you intend immediately.

Collecting and Analyzing Data

Evaluation can start with an opening interview and continue past the program end date. There are many options for evaluation, but it can be helpful to have:

  • Entry and exit evaluations with program participants
  • Mid-point evaluations
  • Long-term follow-up with a questionnaire to all participants two years after completion
  • Quarterly program stats
  • Intake and completion interviews for each participant
  • Intake questionnaires and focus groups for peer mentoring
  • Interviews with program administrators quarterly for the first year and annually afterward

Evaluation requires that you collect, review, and analyze responses and data. While this may sound complicated, you can start with a simple survey and build your analysis over time. Focusing on evaluation will help you build the program your participants need and meet the needs of your stakeholders or funders.

Data can be divided into two broad types: quantitative and qualitative. Quantitative data is the numbers, statistics, and other things you can measure, such as:

  • Demographics
  • Completion numbers
  • Satisfaction ratings of mentors/mentees
  • Program attendance and completion numbers
  • Time spent in training
  • Time spent in mentorship
  • Volunteer hours
  • Funding and expenses

Quantitative data can be compiled into tables and charts and compared from session to session or year to year. It can be helpful for planning, scheduling, and looking for growth or changing needs.

Qualitative data is the comments, anecdotes, and examples that you can collect, such as:

  • Comments and written responses in surveys
  • Interviews or focus group notes and transcripts
  • Observations from staff or volunteers
  • Stories and testimonials from mentors and mentees

This qualitative data can be grouped into themes and patterns to look for trends in what people are saying. The comments can also provide insight into the “why” behind some of the numbers. For example, comments can show which specific parts of your program leave participants satisfied or unsatisfied with their experiences. They may also provide insight into something that you did not specifically ask about.

Program evaluation can be as complicated or as simple as you need it to be. We have created Admission, Exit, and Follow Up Surveys for you to use as is or to customize. Use these surveys to check in with your program participants.

Download our Evaluation Tools

We encourage you to use any forms we provide as they are or to customize them for your program needs, including adding your logo. If you would like to customize these forms, you can find the full set of editable forms here.

Want to Learn More?

Read our Bootcamp: Evaluating Mentoring Success.

Learn more about creating an evaluation strategy with the Basic Guide to Program Evaluation.

Both of these books have sections and examples for program evaluation:

Create a Mentorship Program

Are you ready to create a mentorship program in your community? Contact us to start the process.