

The Academy for Professional Excellence has a deep commitment to quality training. As such, we evaluate every training we provide using a multi-level evaluation methodology based on Kirkpatrick's four-level evaluation schema. While Kirkpatrick includes only four levels (Satisfaction/Opinion, Knowledge, Behavior, and Outcomes), the Academy has added two additional levels (Tracking and Formative). These six evaluation levels help build a chain of evidence about training effectiveness and guide the refinement of our trainings.

Six-Level Evaluation Methodology

Stakeholders rightfully want to know that training "works," meaning that trainees learn useful knowledge, values, and skills that will translate to effective practice and support better client outcomes. However, many variables other than training can affect whether these desirable goals are achieved (e.g., agency policies, caseload size, availability of needed services). While it is impossible to design training that controls for these variables, building a chain of evidence establishes a linkage between training and desired outcomes for the participant, the agency, and the client, such that a reasonable person would agree that training played a part in the desired outcome. In addition, evaluation results enable the Academy to continually refine and enhance our training to improve the application of evidence-based practice by those working to enhance client outcomes.

For more information on how to build evaluation into your training program, the Academy has developed an eLearning course, Training Evaluation and Construction.

Note: the evaluation samples included on this website were developed by the San Diego State University School of Social Work Academy for Professional Excellence and are protected by copyright. You are welcome to adapt them for your use; however, we would appreciate your letting us know via email at jttatlow@mail.sdsu.edu. Thank you.

Level 1: Tracking


Assessment: Who is attending the training? What experience/background do they bring that may affect the way the training is perceived?

Evaluation Tool:

Trainee Profile / Demographic Survey
The first level of evaluation tracks the participants who attend training. Information collected at this level is mainly demographic data, such as level of education, years of experience, race/ethnicity, gender, primary language, program area, job classification, and county of employment. Compiled together, these data provide a snapshot of the demographics of trainees and a lens for understanding the other levels of training evaluation data. In addition, demographic information is used to determine whether the training is working equally well for all participants or is biased toward a particular sub-category of trainees.

Sample evaluation tools:

PCWTA Trainee Profile
MASTER Demographic Survey

Level 2: Formative Evaluation

Assessment: Is the curriculum valid? Does it teach to the Core Competency?

Evaluation Tool:

Trainer Observation Form / Delta Plus Tool
Level two is a formative evaluation that assesses whether the curriculum content is valid and teaches to the identified competencies and objectives. Since the effectiveness of training depends on both the content and the structure/delivery of the training, we employ two systems when conducting formative evaluation. First, at each training session, program staff use a Trainer Observation Form or a Delta Plus Tool to take notes on the training content and delivery. At the end of the training series, program staff meet to discuss the curriculum and propose any changes needed. Second, we have a trainer development process in place to enhance our trainers' ability to deliver the curriculum. Staff observe each trainer at regularly scheduled intervals, provide feedback, and update each trainer's personal development plan as needed.

Sample evaluation tools:

Training Observation Form
Trainer Evaluation Form

Level 3: Satisfaction - Opinion - Reaction

Assessment: What is the trainee’s opinion/satisfaction of the training received? What is the trainer’s opinion/satisfaction with the training delivered?

Evaluation Tool:

Satisfaction Survey

Level three evaluation measures a trainee's self-perceived change in attitude or values, increase in competence, and enhanced level of comfort with the content of a specific training. Data collected at this level help to evaluate the training environment, the planned application of knowledge, values, and skills, and overall participant satisfaction with and reaction to the training itself. Results are used to identify areas where the curriculum appears to be successful in transferring knowledge and skills, as well as those portions of the training that may need changing. To further assist with the refinement of training delivery, we ask our trainers for feedback on their satisfaction with the training delivery and pre-training planning. The results of our level three evaluation are used in concert with the other levels of evaluation to help refine the content and delivery of each training.

Sample evaluation tools:

Satisfaction Survey – Public Child Welfare Training Academy (from PCWTA program)
Satisfaction Survey – Behavioral Health Education Training Academy (from BHETA program)
Satisfaction Survey – BHETA Road Map to Recovery Program (from BHETA program)
Trainer Response Survey

Level 4: Knowledge - Skill Acquisition

Assessment: Is there any change in the trainee's knowledge, attitudes, and skills? Can the trainee adequately perform the skill taught? Have the trainee's self-perceived knowledge, skills, and comfort with the area of training changed?

Evaluation Tool:

Pre/Post Test; Embedded Evaluation

Level four evaluation focuses on actual changes in the knowledge, skills, or values of the participant as a result of the training. Data are used to measure the effectiveness of the training, to assess whether the competencies and learning objectives were met, and to provide guidance as to where changes to the curriculum should occur. To measure knowledge acquisition, a pre/post-test model is typically used, with multiple-choice questions mapped to the learning objectives of the training. To measure skill acquisition, an embedded evaluation is used, frequently in the form of a vignette that requires application of key skills. Trainee responses are recorded for evaluation purposes, and vignette exercises are frequently debriefed in class to further trainee learning.

Sample evaluation tools:

Pre-test Self Assessment – Leaders in Action Training (from SACHS program)
Post-test Self Assessment – Leaders in Action Training (from SACHS program)
Pre/Post Self-Assessment of Learning Voluntary Case Planning (from MASTER program)
Embedded Evaluation APS Risk Assessment (from MASTER program)
Embedded Evaluation Biopsychosocial Assessment (from MASTER program)

Level 5: Transfer


Assessment: Is the trainee able to transfer the knowledge, values, and skills taught in the classroom and apply them at work? What effect did the training have on the trainee's ability to utilize the information?

Evaluation Tool:

Follow-up Surveys; Skill Assessments

Level five evaluation focuses on a trainee's ability to transfer the knowledge, skills, and values discussed in training and apply them in the workplace. Evaluation at this level attempts to assess the relevancy of training and to measure the effect training had on a trainee's ability to utilize the information. To measure a participant's transfer of learning, follow-up surveys are typically conducted, asking trainees to describe how they have utilized the knowledge, skills, and values presented at the training. A more rigorous way to evaluate transfer of learning is to conduct a follow-up skill evaluation or to pull case files to determine any differences in practice.

Sample evaluation tools from simplest to most detailed:

Follow-up Survey – CADRE (from BHETA Program)
Follow-up to Personal Action Plan (from Tribal STAR Program)
Includes two pieces: a Personal Action Plan and a 3-month Follow-Up Survey
Participant and Supervisor Follow-up Surveys – Manager Core Training (from PCWTA Program)

Level 6: Outcomes


Assessment: Has the training affected client outcomes?

Evaluation Tool:

Follow-up Surveys; Pre/Post Case Review; Change in Outcome Trends

Level six evaluation looks at the impact of training on client outcomes. For training evaluation purposes, we typically use follow-up surveys which ask participants to provide examples of how the training impacted positive outcomes for any of their cases. More rigorous measures would include case/control studies or review of case files pre- and post-training.

Sample evaluation tools:

Tribal STAR Training Follow-up (from Tribal STAR Program)

