Commentary | Volume 54, Issue 3, Supplement, S10-S14, March 2014

Practical Experience From the Office of Adolescent Health's Large Scale Implementation of an Evidence-Based Teen Pregnancy Prevention Program

  • Amy Lynn Margolis
    Correspondence: Amy Lynn Margolis, M.P.H., Department of Health and Human Services, Office of Adolescent Health, 1101 Wootton Parkway, Suite 700, Rockville, MD 20852.
    Affiliation: Department of Health and Human Services, Office of the Secretary, Office of the Assistant Secretary for Health, Office of Adolescent Health, Rockville, Maryland
  • Allison Yvonne Roper
    Affiliation: Department of Health and Human Services, Office of the Secretary, Office of the Assistant Secretary for Health, Office of Disease Prevention and Health Promotion, Rockville, Maryland

      Abstract

      After 3 years of experience overseeing the implementation and evaluation of evidence-based teen pregnancy prevention programs in a diversity of populations and settings across the country, the Office of Adolescent Health (OAH) has learned numerous lessons through practical application and new experiences. These lessons and experiences are applicable to those working to implement evidence-based programs on a large scale. The lessons described in this paper focus on what it means for a program to be implementation ready, the role of the program developer in replicating evidence-based programs, the importance of a planning period to ensure quality implementation, the need to define and measure fidelity, and the conditions necessary to support rigorous grantee-level evaluation.


      The Teen Pregnancy Prevention (TPP) Program began in 2010 and is one of six major evidence-based policy initiatives currently funded across the Federal government. In preparation for large-scale implementation of the TPP Program, the Office of Adolescent Health (OAH) identified several guiding practices and concepts we believed to be critical to success. These key elements helped to shape the practical implementation of the TPP Program and the subsequent lessons learned.
      The OAH TPP Program is a two-tiered program focused on replicating evidence-based programs that are medically accurate, age appropriate, and proven through rigorous evaluation to prevent teen pregnancy and/or associated sexual risk behaviors (Tier 1), and on developing and testing additional models and innovative strategies for preventing teen pregnancy (Tier 2). When the OAH TPP Program was created, it was the first time federal funds had been dedicated to large-scale replication of evidence-based teen pregnancy prevention programs.
      In September 2010, after an intensely competitive grant competition, OAH awarded a total of $75 million to 75 grantees to replicate evidence-based TPP programs and a total of $25 million to 24 grantees to develop and test new and innovative approaches to prevent teen pregnancy, including eight grantees funded in partnership with the Division of Reproductive Health at the Centers for Disease Control and Prevention (CDC) to implement a communitywide approach to preventing teen pregnancy. OAH TPP grantees receive between $400,000 and $4 million per year during a 5-year grant period. Overall, 16 of the largest Tier 1 grantees and all of the Tier 2 grantees are undergoing rigorous, grantee-level evaluations by a third-party evaluator. OAH manages its grantees through cooperative agreements, which means that the Federal government is substantially involved in the implementation and evaluation of the grant program and provides ongoing technical assistance.
      Having entered the fourth year of a 5-year grant period, OAH and its grantees have learned a great deal about what it takes to implement and evaluate large-scale teen pregnancy prevention programs with fidelity, quality, and rigor. As the first federal funding source of its kind for teen pregnancy prevention, OAH navigated through new territory to ensure the success of the TPP Program. Important lessons were learned and reinforced regarding implementation readiness, the role of the program developer, the importance of a planning period, the need to define and measure fidelity, and the conditions necessary to support rigorous grantee-level evaluation. The utility of this experience extends beyond teen pregnancy prevention to strengthening the implementation of evidence-based programs in general.

      Implementation readiness of evidence-based programs

      Following a systematic, comprehensive review of the literature, the Department of Health and Human Services (HHS) Pregnancy Prevention Research Evidence Review (PPRER) identified 28 programs that had been demonstrated through rigorous evaluation to prevent teen pregnancy and/or associated sexual risk behaviors. The PPRER identified programs based on the quality of their evaluation and the program's outcomes. The PPRER did not examine whether or not a program was available or implementation ready. All 28 evidence-based programs identified by the PPRER were eligible to be replicated with TPP funding from OAH. Upon release of the funding announcement, however, it became apparent that not all of the programs were implementation ready.
      To be implementation ready, a program must include all of the components necessary for it to be implemented effectively by someone other than the original program developer. A program model may have proven to be effective, but if the materials needed to implement the program are not available, it can be difficult or impossible to replicate. Each of the program models was at a different stage of implementation readiness. Models that were not in common use in the field were less likely to have most of the necessary components; several models that were commonly used in the field, however, also lacked some of the needed elements. At the start of the TPP Program, OAH spent a significant amount of time working with program developers to make their programs implementation ready. This included identifying the program's core components, developing a training plan and training materials, publishing adaptation guidance, and developing tools to monitor fidelity. Ultimately, a few of the identified evidence-based program models could not be made implementation ready and were not replicated under this program.
      Through this hands-on experience working with program models and grantees, OAH identified several key elements that are important for a program to be considered implementation ready. Several programs in the field lacked one or more of these elements and needed practical guidance to become implementation ready. Based on this work, OAH promoted the following key elements (a brief illustrative checklist appears after the list):
      • 1.
        Core components—Core components are the program characteristics related to achieving the outcomes associated with the program. The three types of core components are content (what is taught), pedagogy (how the content is taught), and implementation (the learning environment in which the program is taught). When these core components are not clearly identified, an organization attempting implementation does not know which programmatic elements are required to implement the program with fidelity.
      • 2.
        Logic model and theory—A program logic model describes the connections among the resources available, the activities conducted, and the short- and long-term outcomes. A logic model shows how the program's activities are associated with its intended outcomes and identifies the critical mediators. Knowledge of the theory used to develop the program is equally important. Implementation-ready programs should therefore include both a detailed logic model and a description of the underlying theory so that implementers fully understand how the program works to achieve its stated outcomes.
      • 3.
        Facilitator guide and curriculum materials—For the program to be implemented as designed, a facilitator guide, curriculum materials, and any supplemental materials needed for implementation must be available.
      • 4.
        Training on program model—Implementers must be able to access formal training on the program model that addresses the core components, logic model, and theory of change; reviews the program content; and provides sufficient opportunities for participants to practice delivering that content. Formal training helps to ensure that the core components and key elements of the program are uniformly implemented, leads to greater understanding of how the program works, and promotes fidelity to the program model.
      • 5.
        Guidance on allowable adaptations—Implementers are often interested in adapting evidence-based programs to make them a better fit for the community or population being served. Guidance from the program developer on which adaptations are allowable and which are not helps to minimize adaptations that may have a negative impact on program outcomes. Adaptation guidance should be informed by the program's core components, logic model, and theory, as well as available research evidence.
      • 6.
        Tools for monitoring fidelity—Being implementation ready means that all of the materials needed to implement the program with fidelity are available, including a tool to monitor whether the program is being implemented with fidelity. Tools for monitoring fidelity help organizations assess program implementation and make continuous quality improvements to ensure the program is implemented as intended.
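      These six elements lend themselves to a simple checklist. The sketch below, written in Python, shows one way an organization or funder might track which elements a given program package already includes; the structure and field names are illustrative assumptions for this commentary, not an official OAH instrument.

```python
from dataclasses import dataclass, fields

@dataclass
class ReadinessChecklist:
    """Illustrative checklist of the six readiness elements described above."""
    core_components_documented: bool
    logic_model_and_theory: bool
    facilitator_guide_and_curriculum: bool
    formal_training_available: bool
    adaptation_guidance: bool
    fidelity_monitoring_tools: bool

def missing_elements(checklist: ReadinessChecklist) -> list[str]:
    """Return the readiness elements the program package still lacks."""
    return [f.name for f in fields(checklist) if not getattr(checklist, f.name)]

# Example: a hypothetical program with strong materials and training but no
# adaptation guidance or fidelity monitoring tools.
program = ReadinessChecklist(True, True, True, True, False, False)
print(missing_elements(program))  # ['adaptation_guidance', 'fidelity_monitoring_tools']
```

      A gap in any of these areas is the kind of issue OAH worked with developers to close before replication could begin, as described above.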

      Role of the program developer

      Program developers and the organizations that assist them in disseminating their programs are key partners in the successful replication of evidence-based programs. A program developer is the person or persons who created the program and often oversaw its implementation during the initial evaluation. Developers play a critical role in packaging the program materials so the program can be replicated by others. They are also often the only ones able to provide information on how the program was designed and implemented, its core components, and how its stated outcomes were achieved. As a result, program developers were a key resource for organizations interested in replicating programs and received numerous requests for their technical assistance and support. Program developers, however, can vary widely in their capacity to provide this level of support.
      Given the important role of the program developer in ensuring successful replications, OAH learned that the developers should be consulted as key stakeholders as soon as possible. Ideally, developers would receive advance notice that their program(s) have been identified as evidence-based and are going to be eligible for replication funding. Unfortunately for OAH, the HHS list of evidence-based teen pregnancy prevention programs was published at the same time as the replication funding announcement and OAH was unable to provide developers with any advance notice. As a result, many developers were caught off guard when they began receiving hundreds of phone calls inquiring about their program(s).
      OAH found that developers are key to helping implementers make informed decisions about program selection and fit by providing detailed information on the target population and implementation setting, requirements for implementation (e.g., length, content, group size, facilitators), and evaluation results. Developers have also been important throughout the implementation process in providing ongoing training and technical assistance, responding to questions about allowable adaptations, and providing suggestions to help enhance quality program implementation.
      At times, OAH grantees expressed concerns about having two entities (OAH and the program developer) to consult on matters related to implementation of the evidence-based program. Providing clear guidance to grantees and developers on the roles of the funding agency and the developer in making decisions reduced confusion. As the funding agency, OAH is ultimately responsible for making final decisions regarding implementation to ensure that guidelines are consistently applied across all grantees and program models. Program developers, of course, are the experts on their particular program model and provide key information to assist OAH in making the final decision. OAH, grantees, and program developers need to work together closely to ensure implementation is successful. To streamline communication among grantees, OAH, and program developers, OAH assigned a staff person as a program model lead for each evidence-based program being replicated. The program model lead is responsible for being knowledgeable about the program, establishing a relationship with the program developer, and serving as a liaison between grantees and the developer. The program model lead has helped open lines of communication among grantees and the developer and has increased the consistency of guidance provided to all TPP grantees replicating the same program model.

      Importance of a planning period to ensure quality implementation

      To ensure that grantees were fully prepared for program implementation, OAH required they engage in a planning, piloting, and readiness period for the first 6–12 months of funding. During this time, all grantees were required to complete a set of planning year milestones to demonstrate they were ready for full-scale quality implementation (Table 1).
      Table 1. Office of Adolescent Health planning year milestones
      • Hire key staff
      • Complete needs and resource assessment
      • Receive training for staff on program model
      • Develop training plan for new staff & provide ongoing training to existing staff
      • Submit program materials for OAH medical accuracy review & make necessary revisions
      • Purchase curricula and other program materials
      • Establish signed Memorandum of Understanding and a monitoring plan with all partners
      • Pilot test the program with a small number of participants from the target population
      • Submit all proposed adaptations for approval
      • Develop a detailed implementation plan for all implementation sites
      • Develop plan for monitoring fidelity
      • Obtain approval from OAH for evaluation plan
      One of the first activities during the planning period was a thorough needs and resource assessment of the target population to identify its needs and the resources already available. The results of the assessment were used to confirm that the program selected during the application process met the needs of the target population and added value rather than duplicating existing efforts.
      After assessing the community's needs and selecting the most appropriate program, organizations hired staff and had them trained in the program model and other related topics. In addition to the formal training on the program model provided by the program developers, OAH offered numerous training opportunities for grantees on topics related to teen pregnancy prevention, including classroom management, strategies for engaging youth, and recruitment and retention. Grantees were also required to submit all program materials to OAH for medical accuracy review and to make any revisions necessary to ensure the materials were medically accurate.
      Piloting the program with a small number of youth was critical during the planning period. The pilot period allowed staff to become comfortable with the program content, ensured the program was a good fit for the population, and identified any necessary adaptations to the program content or implementation. All requests for adaptations had to be submitted to OAH for approval prior to implementation.
      Grantees were also expected to establish formal Memoranda of Understanding (MOUs) and a monitoring plan with all partners. Grantees developed a detailed implementation plan for each site that outlined when and how the program was to be implemented and how fidelity and quality of implementation would be assessed.
      The first-year planning period proved to be invaluable. When grantees began full implementation of the program with large numbers of youth in their communities, they were better prepared to implement it with high fidelity and quality.

      Defining and measuring fidelity

      Implementing evidence-based programs with fidelity ensures the delivery of the program in the way it was intended and increases the likelihood of obtaining the same positive results associated with the original program evaluation. To ensure implementation with fidelity, OAH found that it was critical to define for the grantees what is meant by fidelity and to set up a system for monitoring, analyzing, and using fidelity data to make continuous program improvements. The requirements for implementing a program with fidelity vary by program, so grantees and OAH staff must have a thorough understanding of each individual evidence-based program model and what is required to maintain fidelity to it.
      For OAH, implementing a program with fidelity is defined as maintaining the core components of the program. Maintaining fidelity does not mean never making adaptations to the program. OAH allowed grantees to make minor adaptations, if necessary, to ensure that the program was a good fit for the population being served as long as the adaptation did not compromise or delete the program's core components and was determined to be appropriate by the developer. OAH defined an adaptation as anything that was not implemented and evaluated in the original program model. Grantees were required to submit all adaptation requests to OAH for approval prior to implementation. In collaboration with the program developer, OAH staff determined whether the adaptation was appropriate, and did not compromise the program's core components, before approval was granted.
      To measure fidelity, OAH defined a uniform set of measures that all grantees were required to collect and report to OAH twice per year. These included:
      • Participant attendance—Collected from every participant at the beginning of each session to determine how much of the program participants are receiving. Reviewing these data on a regular basis ensures that participants receive the entire program and allows any attendance issues with entire sites or individual participants to be addressed quickly.
      • Sessions implemented—Ensures that the intended number of sessions is implemented at each site.
      • Facilitator fidelity logs—Provides data on the number of activities implemented as intended for each session and allows facilitators to document any adaptations that were made. The fidelity logs allow for analysis of implementation across facilitators and across sessions.
      • Independent observations—At least 10% of all sessions must be observed by an independent observer for fidelity and implementation quality. Observers are required to complete the same fidelity log as the facilitator as well as a standardized instrument to assess the quality of the session.
      Defining fidelity and setting parameters for allowable adaptations are important steps in the process. It is also important that organizations continuously monitor fidelity and use the data collected to make program improvements. OAH grantees were required to develop a fidelity monitoring plan, collect fidelity monitoring data, regularly review and analyze the data, provide feedback based on the data to implementation staff, use the data to identify what was working well, and make continuous quality improvements. Collecting and reviewing the data regularly, and providing regular feedback to the staff implementing the program, ensured that issues were identified early and necessary adjustments made, and that best practices and effective implementation strategies could be identified and shared with other facilitators. Most importantly, the process of reviewing data, providing feedback, and making program improvements based on the data allowed grantees to take ownership of fidelity monitoring and to see its value.
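      As a concrete illustration of how such a review might be run, the brief sketch below aggregates session-level fidelity records of the kind described above into a few summary indicators. The record structure, field names, and example numbers are illustrative assumptions for this commentary, not OAH forms or reporting requirements.

```python
from dataclasses import dataclass

@dataclass
class SessionRecord:
    """One facilitator fidelity log entry (illustrative structure, not an OAH form)."""
    site: str
    session_number: int
    activities_planned: int
    activities_delivered_as_intended: int
    participants_enrolled: int
    participants_attending: int
    independently_observed: bool

def summarize_fidelity(records: list[SessionRecord]) -> dict:
    """Aggregate simple fidelity indicators across the reported sessions."""
    sessions = max(len(records), 1)
    attendance = sum(r.participants_attending for r in records) / max(
        sum(r.participants_enrolled for r in records), 1)
    adherence = sum(r.activities_delivered_as_intended for r in records) / max(
        sum(r.activities_planned for r in records), 1)
    observed = sum(r.independently_observed for r in records) / sessions
    return {
        "sessions_reported": len(records),
        "attendance_rate": round(attendance, 2),
        "activity_adherence_rate": round(adherence, 2),
        "share_independently_observed": round(observed, 2),
    }

# Example: two sessions at one hypothetical site.
logs = [
    SessionRecord("Site A", 1, 8, 8, 25, 22, True),
    SessionRecord("Site A", 2, 8, 7, 25, 20, False),
]
summary = summarize_fidelity(logs)
print(summary)
if summary["share_independently_observed"] < 0.10:
    print("Observation coverage is below the 10% benchmark described above.")
```

      OAH grantees reported measures such as these to OAH twice a year; an implementing organization could run a similar aggregation between reporting cycles to flag sites or facilitators needing additional support.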

      Conditions necessary to support rigorous grantee-level evaluation

      Requiring grantees to conduct an independent rigorous evaluation of their program can be effective if conducted under a specific set of conditions. The benefits of rigorous local, grantee-level evaluations include (1) the ability to evaluate a large number of interventions in a large number of settings with fewer resources than would be required if done through a federally sponsored cross-site evaluation; and (2) the ability to enhance the skills of a large number of local evaluators by providing intensive evaluation technical assistance throughout the entire evaluation.
      OAH currently supports 36 rigorous grantee-level evaluations—17 focused on evaluating replications of evidence-based programs and 19 focused on evaluating new or innovative approaches to prevent teen pregnancy. The term grantee-level evaluation refers to the fact that the program evaluation is conducted solely on the intervention being implemented by the grantee. OAH required that all grantee evaluations be conducted by an independent, third-party evaluator hired by the grantee and be designed and implemented to meet the research quality standards set for the HHS PPRER. From the initial awarding of funds, OAH provided intensive training, technical assistance, monitoring, and support to grantees to ensure the quality and rigor of the evaluations. As a result of the intensive support provided by OAH to its grantees, all 36 grantee-level evaluations are on track to meet the evidence standards and contribute significantly to the research base regarding what works in preventing teen pregnancy.
      Through previous experience with evaluation-focused programs, discussions with evaluation experts, and work with these TPP grantees to develop successful evaluation plans, OAH has found that a specific set of conditions is necessary to ensure that grantee-level evaluations maintain a high level of rigor. We applied these conditions at the outset of this grant program to prepare the grantees for success from the beginning and made adjustments as the evaluation progressed. These necessary conditions include:
      • Funders should include in the funding opportunity announcement a detailed description of the expectations for conducting a rigorous evaluation, and applications should be reviewed against a set of criteria to assess whether the evaluation plan is likely to meet the evidence standards and be well powered.
      • Grantees must select an evaluator who is independent of their organization to conduct the evaluation.
      • Sufficient resources must be dedicated to the evaluation to ensure adequate statistical power and the ability to maintain rigor. OAH requires grantees to allocate 20%–25% of their overall budget, but not more than $500,000 each year, to the evaluation.
      • Detailed criteria must be developed for what is considered a rigorous evaluation. For OAH, all grantee evaluations must meet the research quality standards set for the HHS Pregnancy Prevention Research Evidence Review. OAH translated the evidence standards against which the completed evaluations will be judged into a set of standards that each evaluation must follow through its design, implementation, analysis, and reporting, and communicated these standards to grantees.
      • The funder must place conditions on the grantees' funding stating that the funds are in jeopardy if the evaluation does not meet the standards set for rigor, and must be committed to holding the grantees accountable to those standards. To date, a few OAH grantees have been placed on corrective action because their evaluations were not meeting the evidence review standards. However, all were able to make the changes necessary to achieve the required level of rigor and maintain funding.
      • Grantees must be provided intensive programmatic and evaluation technical assistance to ensure high quality programs and rigorous evaluations.
        • °
          OAH staff provide intensive programmatic and evaluation support and technical assistance to grantees with the goal of ensuring quality implementation of each grantee's program. This is done through ongoing communication, including monthly calls, regular site visits, biannual reports, an annual conference, and continuing technical assistance. Because evaluation is a critical component of many of the grants, OAH staff themselves receive support on evaluation-related topics from the evaluation technical assistance contractor.
        • °
          OAH also provides grantees with targeted and intensive evaluation technical assistance through a contractor. Grantees are required to submit an evaluation plan and analysis plan to OAH for approval, as well as biannual reports on the status of their evaluation, which are assessed for baseline equivalence and sample attrition (a brief illustrative sketch of such checks follows this list). In addition, the contractor provides ongoing technical assistance through monthly calls, in-person meetings during the OAH annual conference, additional in-person meetings as necessary, and group-based technical assistance related to specific issues of interest.
        • °
          OAH has an in-house evaluator who works with both OAH staff and the evaluation technical assistance contractor to support the evaluations on a daily basis. The in-house evaluator provides targeted technical assistance to the grantees through the OAH staff and the contractor and ensures that the evaluations continue to meet the requirements for a rigorous OAH evaluation.
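      To make the attrition and baseline equivalence checks mentioned above more concrete, the sketch below computes the kinds of quantities an evaluator might report in a status update: overall and differential attrition between study arms, and a baseline difference expressed in pooled standard deviation units. The formulas are standard descriptive calculations and the numbers are invented; they are not the specific thresholds or procedures of the HHS evidence review.

```python
import math

def attrition_rates(randomized_t, analyzed_t, randomized_c, analyzed_c):
    """Overall and differential attrition between treatment and comparison groups."""
    overall = 1 - (analyzed_t + analyzed_c) / (randomized_t + randomized_c)
    differential = abs((1 - analyzed_t / randomized_t) - (1 - analyzed_c / randomized_c))
    return overall, differential

def standardized_mean_difference(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Baseline difference on a covariate, in pooled standard deviation units."""
    pooled_sd = math.sqrt(((n_t - 1) * sd_t ** 2 + (n_c - 1) * sd_c ** 2) / (n_t + n_c - 2))
    return (mean_t - mean_c) / pooled_sd

# Hypothetical example: 500 youth randomized per arm, some lost to follow-up,
# and mean age at baseline compared between the analytic samples.
overall, differential = attrition_rates(500, 430, 500, 455)
smd = standardized_mean_difference(15.4, 1.1, 430, 15.5, 1.2, 455)
print(f"overall attrition = {overall:.1%}, differential attrition = {differential:.1%}")
print(f"baseline age difference = {smd:.2f} standard deviations")
```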
      After 3 years of experience overseeing the implementation and evaluation of evidence-based teen pregnancy prevention programs in a diversity of populations and settings across the country, OAH has learned numerous lessons that are applicable to those working to implement evidence-based programs on a large scale. At the outset of programming, OAH applied several concepts believed to be critical for successful implementation based on past experience and discussions with experts in the field. As we progressed toward full implementation, we made adjustments as needed. In many cases, our initial plans were reinforced and strengthened as we learned better ways to promote the goals of the TPP Program. Developing a strong implementation plan at the outset and applying lessons learned along the way are both key to the successful implementation and evaluation of a large-scale program.
      For more information about the OAH TPP program, please visit the TPP Resource Center at http://www.hhs.gov/ash/oah/oah-initiatives/teen_pregnancy/. The TPP Resource Center includes information on OAH grantees, OAH guidance documents, and numerous training and technical assistance resources.