Essential Parts of an Evaluation Report
An evaluation report is a crucial tool that reflects the work conducted during an evaluation project. These reports often serve multiple purposes: a learning resource, a deliverable that shows funders and other audiences a program's effects, and a concise record of the work done in an evaluation project.
The structure of your evaluation report (or any evaluation deliverable, really) depends on its purpose and audience. Some reports are more formal than others, even mirroring an academic journal article, with a focus on methodological clarity and replicability. Others are simpler and more actionable, written for a practitioner audience and highlighting the areas most important for the client's goals.
The main components of an evaluation report often include an introduction, methods, findings, discussion, and appendix section. Some may include additional sections, like recommendations or references.
Like an academic article, the introduction section of an evaluation report details the program and its audience, defines the interest in evaluating that program, and introduces the team assembled to conduct the evaluation. It may also include things like logic models or intended outcomes of the program.
The methods section of an evaluation report describes the evaluation questions, the approaches selected to answer these questions, the instruments used, the analyses conducted, and the sample for each method.
The findings section presents the results of the evaluation, often organized in a way that relates to the main evaluation interests or evaluation questions. Here, detailed analyses are presented alongside tables and figures (we’re partial to data visualizations) that demonstrate the results. The author will point to interesting findings, especially those that relate to the evaluation questions.
The discussion section typically focuses on how the findings answered the evaluation questions, and often includes recommendations from the evaluator, the project team, or a combination of the two.
The report appendix includes exact instruments used in the evaluation, cites program materials (e.g., logic models, evaluation plans), and sometimes offers additional data or materials that did not necessarily fit into the rest of the report.
Style and purpose shape each report's design, and many evaluators will make slightly different presentation choices. Decisions about structure and what to include or exclude come down to the purpose and needs of the report, while individual preferences influence the final tone and style. When planning your own reports, this traditional model is a good starting point, but also consider whether you need additional sections, which sections deserve the most detail or emphasis, and any other changes that would create a report that works for you! A discussion with your evaluator, who can offer insights and take direction based on your needs, is a great way to ensure the final evaluation report meets your project's needs.
We hope you enjoyed this article! If you’d like to see more content like this as it comes out, subscribe to our newsletter Insights & Opportunities. Subscribers get first access to our blog posts, as well as Improved Insights updates and our 60-Second Suggestions. Join us!

