Heart-felt purpose meets data-driven decision making

The Academy is committed to revolutionizing the way people work so that the world becomes a healthier place. To give this vision the greatest impact, the Evaluation team works with each program to develop concrete, achievable goals and data collection plans. Reaching this goal requires more than counting the number of people who attend our trainings or who report satisfaction, because neither of those data points can tell us whether we are any closer to our goal.


An intelligent data collection plan must be based on specific, measurable goals.

The Evaluation team helps programs translate their passion and heart-felt missions into specific, measurable goals that guide data collection plans. These plans support programs in using data analysis to guide decision making, demonstrate value to funding sources, and implement process improvements, all in service of the Academy’s commitment to provide exceptional learning experiences that transform individuals and communities.

The Evaluation team guides programs through the evaluation process to identify the objectives that matter most to them and to create a well-designed plan for collecting meaningful data that can be used for planning, decision making, and demonstrating the value of our programs. In this way, the Evaluation team and programs are partners who navigate the process together.

The Evaluation Team provides the following services: data analysis, survey development, data interpretation, dashboard visualization, logic modeling of work processes, focus group facilitation, and organizational assessments.

All of our evaluation methods are based upon the RE-AIM framework. Developed by Glasgow and colleagues (1999), the framework is designed to enhance the quality, speed, and impact of efforts to translate research and training into practice in five key steps:


Stakeholders rightfully want to know that training “works,” meaning that trainees learn useful knowledge, values, and skills that will translate to effective practice and support better client outcomes. However, many variables other than training can affect whether these desirable goals are achieved (e.g., agency policies, caseload size, availability of needed services). While it is impossible to design training that can control these variables, building a chain of evidence establishes a linkage between training and desired outcomes for the participant, the agency, and the client such that a reasonable person would agree that training played a part in the desired outcome. In addition, evaluation results enable the Academy to continually refine and enhance our training to improve the application of evidence-based practice by those working to enhance client outcomes.

Note: evaluation samples included on the website were developed by the San Diego State University School of Social Work Academy for Professional Excellence and are protected by copyright. You are welcome to adapt them for your use; however, we would appreciate your letting us know via email at jttatlow@sdsu.edu. Thank you.

1. Reach your intended target population

1. R-Reach
(Individual level)
Reach assesses the participants who attend training as well as those who do not. Information collected focuses on demographic data such as level of education, years of experience, race/ethnicity, gender, primary language, program area, job classification, and county of employment. This individual-level data is compiled and compared with data on those not in attendance to show whether we are reaching all potential trainees and, if not, where to focus future workforce development trainings and recruitment strategies. In addition, demographic information is used to determine whether the training works equally well for all participants or is biased toward a particular sub-category of trainees.

Who is attending the training? What experience or background do they bring that may affect the way the training is perceived (e.g., education level, years of experience, gender, race/ethnicity, age, region of the country)?

Who is not attending training? What are the characteristics of those not in attendance? Do they differ from those attending?

General Demographic Survey
Demographics of those in potential participant pool but not attending training.
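The attendee-versus-pool comparison above can be sketched in a few lines. This is a minimal illustration with entirely hypothetical data and group labels, not the Academy’s actual analysis pipeline; it simply computes the attendance rate within each demographic group so under-reached groups stand out.

```python
from collections import Counter

def reach_by_group(attendees, pool):
    """Compare attendance rates across demographic groups.

    attendees: list of group labels for trainees who attended
    pool: list of group labels for everyone eligible to attend
    Returns {group: attendance rate} for each group in the pool.
    """
    attended = Counter(attendees)
    eligible = Counter(pool)
    return {group: attended.get(group, 0) / total
            for group, total in eligible.items()}

# Hypothetical demographic data (job classification of eligible staff)
pool = ["social worker"] * 40 + ["supervisor"] * 10 + ["probation officer"] * 50
attendees = ["social worker"] * 30 + ["supervisor"] * 8 + ["probation officer"] * 12

rates = reach_by_group(attendees, pool)
# In this made-up example, probation officers are reached at 12/50 = 24%,
# versus 75% of social workers, flagging where to focus recruitment.
```

The same rates, broken down by education level or region instead of job classification, would answer the other Reach questions listed above.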

2. Efficacy or effectiveness of the intervention is determined

2. E-Efficacy
(Individual level)
Efficacy of curriculum content is determined by assessing the identified competencies and objectives and how well those competencies were incorporated into the training content.

Is the curriculum valid? Does it teach to the Core Competency, and does it facilitate the knowledge needed? These questions are measured through the trainees’ perception of, and satisfaction with, the training received.

Is the trainer presenting the information in the manner the curriculum requires?

End of Day Feedback Assessment
Trainer Observation Assessments
Trainer Evaluation Forms

3. Adoption by target staff, settings, or institutions is measured

3. A-Adoption
(Individual and Organizational levels)
The Adoption segment of the RE-AIM framework measures the adoption of implemented materials by the target staff, settings, or institutions. Adoption data is collected at both the individual and organizational levels. The individual-level data determines a trainee’s perception of potential change in their knowledge, attitudes, and/or skills due to the training. The organizational-level data determines the organization’s perception of the likelihood that training content will be adopted. Using both levels of data allows for an evaluation of the training environment, the planned application of knowledge, values, and skills, and overall participant satisfaction and reaction to the training itself. Results are used to identify areas where the curriculum appears to be successful in transferring knowledge and skills, as well as portions of the training that may need changing. The organizational-level data assists in determining how best to support the adoption of trainees’ knowledge, values, and skills within the workplace.

Does a trainee have a perceived change in their knowledge, attitudes, or skills due to the training?
What is the organization’s perception of the likelihood that the information and training content will be adopted?

End of Day Feedback Assessment
Quarterly Organizational Survey

4. Implementation consistency, costs and adaptations made during delivery are documented

4. I-Implementation
(Individual and Organizational levels)
At the individual level, implementation of the training content is measured in several ways: fidelity is assessed through trainer observations; potential change in the trainee’s knowledge, attitudes, and skills is gathered through pre/post-tests; application of skills is assessed through embedded assessments where applicable; and trainees’ self-perceived knowledge, skills, and comfort with the training area are measured with the End of Day Feedback Assessment. At the organizational level, a quarterly organizational survey is administered to determine whether the organizations where trainees are employed have implemented changes to support the training content. Data from both levels are used to measure the effectiveness of the training, assess whether the competencies and learning objectives were met, and provide guidance on where changes to the curriculum should occur.

Is there any change in the trainee’s knowledge, attitudes, and skills?
Can the trainee adequately perform the skill taught?
Has the trainee’s self-perceived knowledge, skills, comfort with the area of training changed?
Has the organization where trainees are employed implemented changes to support application of the training content?

Pre/Post Test
Embedded Evaluation Assessments
End of Day Feedback Assessment
Quarterly Organizational Survey
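The pre/post-test measurement described above can be illustrated with a short sketch. The data, score scale, and threshold here are hypothetical, and the paired t statistic is one common way to summarize matched pre/post scores rather than the Academy’s specific procedure; it computes each trainee’s gain score and how large the average gain is relative to its variability.

```python
from math import sqrt
from statistics import mean, stdev

def paired_gains(pre_scores, post_scores):
    """Per-trainee gain scores from matched pre/post tests."""
    assert len(pre_scores) == len(post_scores)
    return [post - pre for pre, post in zip(pre_scores, post_scores)]

def paired_t_statistic(gains):
    """t statistic for the mean gain (paired-samples t test)."""
    return mean(gains) / (stdev(gains) / sqrt(len(gains)))

# Hypothetical knowledge-test scores (percent correct) for six trainees
pre = [55, 60, 48, 70, 62, 58]
post = [72, 78, 65, 80, 70, 74]

gains = paired_gains(pre, post)  # change per trainee
t = paired_t_statistic(gains)    # large t suggests a real average gain
```

In practice the gain scores would feed into the broader analysis alongside embedded assessments and the organizational survey, since a knowledge gain alone does not show transfer to the workplace.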

5. Maintenance of intervention effects in individuals and setting over time is documented

5. M-Maintenance
(Individual and Organizational levels)
Maintenance uses both individual-level and organizational-level data to determine whether training content is sustained over time. Focus is placed on a trainee’s ability to transfer the knowledge, skills, and values taught in training, and on how the trainee’s organization helps them apply those skills in their work over time. Evaluation at this level assesses the relevancy of the training and measures its effect on a trainee’s ability to put the information into practice over time. To measure a participant’s transfer of learning, six-month follow-up surveys ask trainees to describe how they have used the knowledge, skills, and values presented in the training. Transfer of learning is further evaluated through follow-up skill assessments, focus groups, and organizational case studies, allowing findings from all levels of the RE-AIM framework to be triangulated to determine the impact of specific programming on practice.

Is the trainee able to transfer knowledge, values, and skills taught in the classroom and apply them at work? What effect did the training have on the trainee’s ability to utilize the information in practice?
Has the organization maintained changes over time? Have organizational policies and practices changed over time?

Six Month Follow-up Survey
Focus groups
Follow-up Organizational Assessment
Organizational Case Studies
