The Complexity of Evaluating Informal Learning

[Image: Group of nature scouts surrounded by plants in a field, lead scout looking through a spotting scope]


If you’re a frequent reader of our articles (or a practitioner employing evaluation in your own informal learning contexts), you might be asking yourself: Why is evaluating informal learning so darn challenging?

Unlike formal learning, informal learning is a much more nebulous prospect, occurring in contexts ranging from organized programs at museums or aquariums to spontaneous learning that happens in your backyard. Because contexts vary so widely, programs differ dramatically in everything from content focus to activities to audience and length of engagement. The experience of a teen in an internship program who works at the zoo each day for several summer months will be wildly different from that of a museum-goer who stops by an interpretation cart for a one-time activity.

The free-choice nature of informal learning adds another layer of difficulty. In many contexts, like perusing an exhibit at your favorite museum, you choose what you look at, what you do, and where you go next, meaning that no two experiences are exactly the same. This can make comparing learner experiences quite challenging in certain settings.

Another difficulty evaluators encounter is that the very nature of assessment can feel antithetical to the nature of informal learning. In layman's terms, it might bum you out (or otherwise affect you or your experience) to step back and take a “test” while you’re in the middle of an informal learning experience. 

In informal learning, we try hard to avoid replicating the assessment we see in formal learning. We work to include unobtrusive and embedded assessments, employing methods that are part of the experience or otherwise fun! We also develop methods the learner may not notice at all, like observations and timing-and-tracking studies, where visitors’ paths through an exhibit or other learning area (like a nature play space) are timed and tracked to uncover interesting patterns in their behavior.

And finally, we work with learners of all ages and abilities, where a method developed for one slice of our audiences (say, teens aged 14 to 18) simply won’t work with others. It is necessary to drill into not just the “what” we hope to answer or collect data on, but also the “how,” and to apply that “how” across all of our audiences, respecting language differences, cultural sensitivities, and age appropriateness.

It’s clear that there is a lot to consider when developing evaluations for informal learning environments. But, the complexity is part of why we love this work! Interested in learning more about these challenges and how they may be addressed? Check out these resources:

Allen, S., & Peterman, K. (2019). Evaluating informal STEM education: Issues and challenges in context. New Directions for Evaluation, 2019(161), 17–33. https://doi.org/10.1002/ev.20354

Fu, A. C., Kannan, A., & Shavelson, R. J. (2019). Direct and unobtrusive measures of informal STEM education outcomes. New Directions for Evaluation, 2019(161), 35–57. https://doi.org/10.1002/ev.20348

National Research Council. (2009). Learning science in informal environments: People, places, and pursuits. The National Academies Press. https://doi.org/10.17226/12190


We hope you enjoyed this article! If you’d like to see more content like this as it comes out, subscribe to our newsletter Insights & Opportunities. Subscribers get first access to our blog posts, as well as Improved Insights updates and our 60-Second Suggestions. Join us!
