January 05, 2024

Four questions to get started with your implementation research project

Blog 1

Obidimma Ezezika  

As a student or researcher, how do you gain clarity on your proposed implementation research? I teach the “Implementation Science in Practice” graduate course at Western University, where one of the first assignments asks students to develop an implementation science project. Students find this task complicated and challenging. To help them gain clarity on their topic, I typically use four questions to guide the selection and refinement of a topic. If you are on the cusp of selecting an implementation research project, these questions might help you too.

  1. What is your priority practice gap or problem in focus?

The first question is: what is the problem? Can you define it in one sentence? This might seem like a simple question for identifying a priority area, but it can be surprisingly difficult to articulate. Answering it is the first step in any implementation science research. For example, a problem area could be high cervical cancer incidence among women in London, Ontario, or inadequate physical activity among adolescents in a particular region. As students work through this question, I tell them to keep two considerations in mind. The first is to ensure that the problem matters to healthcare providers, patients, or other stakeholders, and that these groups have sufficient interest in the issue. The second is to confirm that evidence on best practices to address the problem likely exists, so they do not get stuck midway through their project.

  2. What is your evidence-based intervention (EBI)?

The next crucial question is: what evidence-based intervention might help address the identified problem? Evidence-based interventions are strategies or treatments shown to improve health behaviors, health outcomes, or health-related environments in one or more well-designed research studies; these interventions could be pills, products, practices, procedures, principles, programs, or policies.1 EBIs can typically be identified through high-quality guidelines, systematic reviews, and meta-analyses that inform practice. For example, consider the cervical cancer problem above. Searching Cochrane reviews for evidence-based interventions, I will come across the HPV vaccine as an evidence-based intervention to reduce cervical cancer.

  3. What is the know-do gap?

Typically, once the student has settled on an evidence-based intervention, the next question I ask is: what is the difference between what we know works (know) and what is actually done in practice (do)? This is the know-do gap. For example, we know that births attended by skilled health personnel save lives and reduce maternal mortality, and it is recommended that all births be attended by skilled health personnel. However, according to WHO 2021 data, coverage of births attended by skilled health personnel was only 65% in Africa. The question then becomes: why is implementation not 100%? The difference between what we know works (100%) and what is being done (65%) is 35% — the know-do gap. In contrast, the same data show 99% coverage in Europe, so there is essentially no know-do gap in that region for the evidence-based intervention of having births attended by skilled health personnel.
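The know-do gap calculation above is simple enough to express in a few lines. This is an illustrative sketch, not a standard tool: the function name and the region figures are taken from the example in this post (WHO 2021 data for skilled birth attendance).

```python
def know_do_gap(ideal_coverage: float, actual_coverage: float) -> float:
    """Return the know-do gap: the shortfall (in percentage points)
    between what we know works (ideal coverage) and what is actually
    done in practice (observed coverage)."""
    return ideal_coverage - actual_coverage

# We know all births should be attended by skilled health personnel,
# so the ideal coverage is 100%.
IDEAL = 100.0

africa_gap = know_do_gap(IDEAL, 65.0)   # 35-point gap
europe_gap = know_do_gap(IDEAL, 99.0)   # 1-point gap, essentially closed

print(f"Africa know-do gap: {africa_gap} percentage points")
print(f"Europe know-do gap: {europe_gap} percentage points")
```

The same subtraction applies to any EBI where a recommended coverage level can be compared against observed practice.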

  4. Why does the gap exist?

Once the know-do gap is identified, the next question is why the gap exists, i.e., what are the barriers to and facilitators of the evidence-based intervention? Answering this will help you understand why a practice gap exists and identify strategies to close it. These barriers and facilitators are called determinants: factors that obstruct or enable the adoption, institutionalization, or scale-up of the EBI. The response to this question gives you a sense of the size of the challenge ahead and how well prepared you will need to be to close the gap by developing strategies that overcome the barriers identified.


References

  1. Brown, C. H., Curran, G., Palinkas, L. A., Aarons, G. A., Wells, K. B., Jones, L., Collins, L. M., Duan, N., Mittman, B. S., Wallace, A., Tabak, R. G., Ducharme, L., Chambers, D. A., Neta, G., Wiley, T., Landsverk, J., Cheung, K., & Cruden, G. (2017). An overview of research and evaluation designs for dissemination and implementation. Annual Review of Public Health, 38, 1–22. https://doi.org/10.1146/annurev-publhealth-031816-044215