When is enough enough? Implementation science models and frameworks

Implementation scientists have created a large number of models, frameworks, and measures.1 The Society for Implementation Research Collaboration, for example, maintains a repository of over 400 implementation-related measures.1 It is perhaps stating the obvious to note that programs are developed to deliver some impact on identified problems by engaging the relevant people. If any conclusion can be drawn from this multitude of ways to improve programs and services, however, it may be that an ideal approach has yet to be identified. And there's a very good reason for that.

Whenever a particular way of implementing research-informed interventions is presented, there's one crucially important factor that needs to be considered. Context.

All research occurs within a context, and all implementation occurs within a context. Context, therefore, is crucial to both impact and engagement. Readers of this article, for example, are likely to be people interested in making services, interventions, and programs more effective than they currently are. The problem is that the program they select as their preferred intervention was very likely created in a completely different context from the one in which the protocol will be applied.

Why is context so important? For some time, the health care and health service delivery community has subscribed to the concept of “evidence-based practice” or EBP. In principle, EBP is a great idea and one that both researchers and clinicians might wish could be true. The general concept is that a particular approach, service, program, or intervention is developed based on the assessment of a certain situation and problem. That situation and problem, however, are located within a particular context. The package that has been developed to improve the situation and address the problem is then evaluated. This evaluation may determine that, in the specific situation in which the new protocol was implemented, the people who engaged with the procedures did better, on average, than the people who were offered some other activity.

Typically, it is then concluded that the new protocol is “evidence-based”. Dissemination to other contexts is generally regarded as an important component of the program development and evaluation cycle. Rarely, however, is adaptation to different contexts included as part of the process. Labeling an initiative as “evidence-based” seems to provide a license for that program to be applied widely: much more widely than the context in which it was created. If something is “evidence-based” then, so the story goes, it can be implemented across different groups and jurisdictions and will produce results, or “evidence”, similar to the original work. Unfortunately, context turns up to rain on the EBP parade.

A set of procedures or techniques can’t be transported to different contexts and be expected to deliver the same “evidence” without modification and adaptation. That's because, for psychological and social programs, evidence is accumulated through the interaction of the service recipient and the service provider. Essentially, the recipient and the service provider co-create the outcomes using the available resources of the program.

So, it is not the program that creates the effects. Evidence is not a “thing” that is produced by a particular program or protocol. It is the people implementing and using the program who produce the results that are desired. Programs are applied because some kind of change is required. Change, however, occurs within and between people.2 Change is a human process, not a technical one. This means we need to discover ways to recognize, value, and promote the impact of the people who are delivering services, as well as their ability to engage with people who are benefiting from their talents and skills, along with the resources to which they have access. For example, the value of a therapeutic relationship in the successful delivery of psychological treatment programs has long been recognized.3

Many frameworks or models have been evaluated and can claim to be “evidence-based” but, again, the framework was typically developed in a particular context based on the perspectives, experiences, and goals of the team doing the work. Will a model or framework developed in a faraway place be the right recipe to produce results for a practitioner or clinician in another place? It's unlikely, without, once again, modification and adaptation. And these must be localized modifications and adaptations.

So, what should be done? When choosing a particular intervention, how the intervention is chosen is likely to be more important than which intervention is chosen. Interventions should be selected with due regard for the importance of context, and recognizing the fundamental importance of culture as a contributor to context is also critical. Ideally, an implementer should take a local approach to gathering data to guide decisions. The best authorities on a context are the people within that context: only they understand it and know which results matter most to those who engage with local services. No model imported from elsewhere can substitute for that understanding.

Impact and engagement are key. If a fairer, more equitable, and socially just planet is ever going to be created, the agency of individuals needs to be recognized, honored, and accommodated. Rather than focusing on “evidence-based practices” and adopting frameworks to apply what worked in other jurisdictions, a culture of “evidence-building practices” needs to be created, in which those within a given context take responsibility for understanding the impact of the interventions that are to be delivered.

How many models are enough? My suggestion, prompted by an anonymous reviewer, is “one”. The one that maximizes impact and engagement in a given context through evidence-building rather than evidence-based practices.

If a steadfast focus on general principles such as impact and engagement is maintained, rather than a continued proliferation of bespoke representations of implementation successes in specific contexts, a genuine science of implementation has a hope of being realized. Such a science could be based on the premise of recognizing the agency of both the implementer and the implementee. It would also be more interested in developing fundamental, transferable common principles than in the ceaseless generation of idiosyncratic, context-specific models and frameworks.

Are the contributors to the field prepared to step away from the individual authoring of frameworks and models to endorse a more fundamental, more generic, and more robust approach to implementation that might actually plant the seed of a genuine scientific endeavor? An attitude such as this might help implementation science to finally fulfill its potential.

Pursuing a more applicable, robust, and contextualized approach does not necessarily mean starting from scratch. The updated model of evidence-based health care provided by JBI4 embodies many of the points made in this article. While a distinction between evidence-based and evidence-building practices is still relevant, the JBI model suggests that this does not have to be an either/or separation. Perhaps evidence-based practices can indicate the benefits that are possible in ideal circumstances, and evidence-building practices can then be adopted to maximize those benefits in local contexts.

The JBI model expounds the importance of not only evidence but also context, patient preferences, and clinician judgment in making the best health care decisions.4 A recognition of agency, therefore, is embedded within the JBI model. Moreover, the JBI model acknowledges the importance of evidence related to feasibility, appropriateness, and meaningfulness in addition to effectiveness. The JBI model, therefore, is one example of a decision-making resource that could be useful in identifying programs to be adopted in particular contexts to achieve impact and engagement.

Impact is also included in the model; however, it could perhaps have a more central focus. Impact could become the common denominator for all the components in the model, helping to amplify the effects in any given context. The potency of different aspects of the model might be increased by formulating them in terms of “units of impact.” What impact, for example, would be sought from the dissemination strategy? What impact would be assessed when considering the appropriateness or meaningfulness of evidence?

It is likely that aspects of impact will vary over time and across contexts. Such variation is entirely appropriate. By ensuring that impact remains center stage, regardless of the particular feature of implementation being considered, a genuine and tangible reduction in health inequities will be a far more attainable goal.

Acknowledgments

Many of the ideas in this editorial were developed and refined in an ongoing conversation with Rosalyn Bertram (University of Missouri-Kansas City), Editor-in-Chief of Global Implementation Research and Applications (GIRA). Anticipating the announcement of an exciting 2024-25 GIRA special issue that will focus on the practice of implementation, we are developing a manuscript expanding on these themes.

REFERENCES

1. Curran GM. Implementation science made too simple: a teaching tool. Implement Sci Commun 2020; 1:27.
2. Marken RS, Carey TA. Understanding the change process involved in solving psychological problems: a model-based approach to understanding how psychotherapy works. Clin Psychol Psychother 2015; 22:580–590.
3. Carey TA, Kelly RE, Mansell W, Tai SJ. What's therapeutic about the therapeutic relationship? A hypothesis for practice informed by Perceptual Control Theory. Cog Behav Ther 2012; 5(2–3):47–59.
4. Jordan Z, Lockwood C, Munn Z, Aromataris E. The updated JBI model of evidence-based healthcare. Int J Evid Based Healthc 2019; 17:58–71.
