How Programs Are Considered

We are interested in reviewing any program for which there is evidence of a positive effect. A formal application is not required to submit a program for consideration, and we rely on publicly available information to review a program's effectiveness. We are interested in programs as they were designed and evaluated; programs do not have to have been replicated or be currently in operation to be included on the PPN site. In addition, even if a program's stated goal does not address an indicator, we will include the program under any indicator for which its evaluation shows a positive effect.

Evidence Levels

Proven and Promising Programs

Programs are generally assigned either a "Proven" or a "Promising" rating, depending on whether they have met the evidence criteria below. In some cases a program may receive a Proven rating for one indicator and a Promising rating for a different indicator. In this case the evidence level assigned will be Proven/Promising, and the program summary will specify how the evidence levels were assigned by indicator.

Other Reviewed Programs

Some programs on the PPN site are identified as "Other Reviewed Programs". These are programs that have not undergone a full review by PPN, but evidence of their effectiveness has been reviewed by one or more credible organizations that apply similar evidence criteria. Other Reviewed Programs may be fully reviewed by PPN in the future and identified as Proven or Promising, but will be identified as Other Reviewed Programs in the interim.


Evidence Criteria


The criteria below are organized by type of information. A program must meet all of the criteria in the "Proven" column to be listed as Proven, must meet at least all of the criteria in the "Promising" column to be listed as Promising, and will not be listed on the site if it meets any of the conditions in the "Not Listed on Site" column.

Type of Outcomes Affected
  • Proven: Program must directly impact one of the indicators used on the site.
  • Promising: Program may impact an intermediary outcome for which there is evidence that it is associated with one of the PPN indicators.
  • Not Listed on Site: Program impacts an outcome that is not related to children or their families, or for which there is little or no evidence that it is related to a PPN indicator (such as the number of applications for teaching positions).

Substantial Effect Size
  • Proven: At least one outcome is changed by 20%, 0.25 standard deviations, or more.
  • Promising: Change in outcome is more than 1%.
  • Not Listed on Site: No outcome is changed by more than 1%.

Statistical Significance
  • Proven: At least one outcome with a substantial effect size is statistically significant at the 5% level.
  • Promising: Outcome change is significant at the 10% level (marginally significant).
  • Not Listed on Site: No outcome change is significant at the 10% level or better.

Comparison Groups
  • Proven: Study design uses a convincing comparison group to identify program impacts, such as a randomized controlled trial (experimental design) or certain quasi-experimental designs.
  • Promising: Study has a comparison group, but it may exhibit some weaknesses; for example, the groups lack comparability on pre-existing variables, or the analysis does not employ appropriate statistical controls.
  • Not Listed on Site: Study does not use a convincing comparison group; for example, it relies on before-and-after comparisons for the treatment group only.

Sample Size
  • Proven: Sample size of the evaluation exceeds 30 in both the treatment and comparison groups.
  • Promising: Sample size of the evaluation exceeds 10 in both the treatment and comparison groups.
  • Not Listed on Site: Sample size of the evaluation includes fewer than 10 in the treatment or comparison group.

Availability of Program Evaluation Documentation
  • Proven: Publicly available.
  • Promising: Publicly available.
  • Not Listed on Site: Distribution is restricted, for example, only to the sponsor of the evaluation.

Note: Additional considerations, such as attrition and the quality of outcome measures, may also play a role on a case-by-case basis.
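To make the thresholds above concrete, the sketch below encodes them as a simple classification function for a single outcome. This is an illustrative approximation only, not PPN's actual review process: the function names, the Cohen's-d effect-size calculation, and the yes/no inputs for study design and documentation are our own assumptions, and real reviews also weigh the case-by-case considerations noted above.

    from math import sqrt

    def cohens_d(mean_treat, mean_ctrl, sd_treat, sd_ctrl, n_treat, n_ctrl):
        """Standardized effect size (Cohen's d) using a pooled standard deviation."""
        pooled_sd = sqrt(((n_treat - 1) * sd_treat**2 + (n_ctrl - 1) * sd_ctrl**2)
                         / (n_treat + n_ctrl - 2))
        return (mean_treat - mean_ctrl) / pooled_sd

    def classify(effect_sd, pct_change, p_value, n_treat, n_ctrl,
                 convincing_comparison_group, has_comparison_group, docs_public):
        """Rough mapping of one evaluated outcome onto the criteria above (illustrative only)."""
        # Conditions that keep a program off the site entirely.
        if not docs_public or not has_comparison_group or min(n_treat, n_ctrl) < 10:
            return "Not listed"
        proven = (
            (abs(effect_sd) >= 0.25 or abs(pct_change) >= 20)  # substantial effect size
            and p_value <= 0.05                                 # significant at the 5% level
            and convincing_comparison_group                     # RCT or strong quasi-experiment
            and min(n_treat, n_ctrl) > 30                       # more than 30 per group
        )
        if proven:
            return "Proven"
        promising = (
            abs(pct_change) > 1           # change of more than 1%
            and p_value <= 0.10           # at least marginally significant
            and min(n_treat, n_ctrl) > 10
        )
        return "Promising" if promising else "Not listed"

    # Hypothetical example: a 6-point gain on an outcome with standard deviation 20
    # (d = 0.3), p = 0.03, 60 participants per group, from a randomized controlled trial.
    d = cohens_d(mean_treat=78.0, mean_ctrl=72.0, sd_treat=20.0, sd_ctrl=20.0,
                 n_treat=60, n_ctrl=60)
    print(classify(effect_sd=d, pct_change=25, p_value=0.03,
                   n_treat=60, n_ctrl=60,
                   convincing_comparison_group=True,
                   has_comparison_group=True, docs_public=True))  # -> "Proven"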


Currently, we do not require programs to do the following:
  • Be currently implemented in some location and provide technical assistance or support.

  • Have been replicated numerous times. (While we recognize the importance of program replication and fidelity to program success, we believe there is value to including information about programs that have successfully improved outcomes for children and families but have not been replicated.)

  • Have articulated as program goals the outcomes they impact. (For example, if a program was designed to reduce violence, but met the criteria for a proven program because it reduced drug use, we would list the program as a "proven" program under the drug use reduction indicator, even though the program did not intend to reduce drug use.)

  • Have an evaluation that has appeared in a peer-reviewed journal. (Nor do we count as "proven" every program whose evaluation has been published in a peer-reviewed journal.)

How to Submit Your Program

If you would like to recommend a program for consideration, please write to promisingpractices@rand.org.