Generating evidence involves formulating the right research question(s), identifying and collecting fit-for-purpose data, applying suitable study designs, and conducting the appropriate analyses. Asking the right question is crucial as it is the stepping-stone to the development and conduct of a meaningful study.
The articles Research: Articulating Questions, Generating Hypotheses, and Choosing Study Designs; Setting a research question, aim and objective (CJHP 2014;67(1):31-34); and Formulating Answerable Questions: Question Negotiation in Evidence-based Practice (JCHLA/JABSC 2013;34:55-60) suggest stepwise approaches to the generation of the research question.
In an initial step, research topics are generated from clinical practice, patient experience, unmet medical need, pharmaceutical companies’ development plans, public health issues and, more generally, during regulatory and health technology assessment (HTA) processes. It is recommended to include all relevant stakeholders in this ideation process. In parallel, a critical and thorough review of the literature forms the basis for the theoretical framework of the research question and should be included in the background section of the study protocol. Such a review aims to evaluate the current evidence and identify gaps in knowledge that the study is intended to address. This process should allow the researcher(s) to select the most relevant question(s) for research and translate them into study objectives. The study can be hypothesis-generating or, if it involves hypothesis-testing, include a testable hypothesis. In Posing the research question: not so simple (Can J Anaesth. 2009;56(1):71-9), the FINER criteria (Feasible, Interesting, Novel, Ethical and Relevant) are proposed to verify the desirable properties of an appropriate, meaningful and purposeful research idea.
Research questions relevant to regulatory authorities and HTA bodies regarding the utilisation, safety, efficacy (or effectiveness) and impact of medicines are detailed in the European Public Assessment Report (EPAR) available for each centrally authorised product on the EMA website, with general pharmacovigilance-related aspects being described in Modules of the Good Pharmacovigilance Practices (GVP). The European Network of Health Technology Assessment (EUnetHTA) describes The criteria to select and prioritise health technologies for additional evidence generation (2012) and discusses clinical evidentiary requirements to support the study design strategy in Strengthening the Interface of Evidence-Based Decision Making Across European Regulators and Health Technology Assessment Bodies (Value Health 2022:S1098-3015(22)00104-8).
In a second step, a research question is formulated through “a logical statement that progresses from what is known or believed to be true to that which is unknown and requires validation”, as described in Developing great research questions (Am J Health Syst Pharm. 2008;65(17):1667-70). A poorly defined research question can create confusion and jeopardise the development of a clear and meaningful protocol; it can also make the evaluation of the study results against an unclear question irrelevant and hamper the publication of the study. How to formulate research recommendations (BMJ. 2006;333(7572):804-6) proposes the EPICOT format with five core elements for research recommendations on the effects of treatments: Evidence (source of the current evidence), Population (characterised by diagnosis, disease stage, comorbidity, risk factor, sex, age, ethnic group, specific inclusion or exclusion criteria, clinical setting), Intervention (type, frequency, dose, duration, prognostic factor), Comparison (placebo, routine care, alternative treatment/management), Outcome (which clinical or patient-related outcomes the researcher needs to measure, improve, influence or accomplish, and which methods of measurement should be used), and Time stamp (date of the literature search or recommendation). This format was adopted by EUnetHTA in its Position paper on how to formulate research recommendations (2015).
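As an illustration only (not part of the cited EPICOT publications), the five core elements and the time stamp can be thought of as a structured record in which no element may be left unspecified. The sketch below uses hypothetical field values for a question on anticoagulant effectiveness:

```python
from dataclasses import dataclass, fields

@dataclass
class EpicotQuestion:
    """Illustrative container for the EPICOT elements of a
    research recommendation (field names follow the acronym)."""
    evidence: str      # source of the current evidence
    population: str    # diagnosis, disease stage, age, setting, ...
    intervention: str  # type, frequency, dose, duration
    comparison: str    # placebo, routine care, alternative treatment
    outcome: str       # clinical or patient-related outcomes to measure
    time_stamp: str    # date of literature search or recommendation

    def is_complete(self) -> bool:
        # A recommendation is usable only if every element is specified.
        return all(getattr(self, f.name).strip() for f in fields(self))

# Hypothetical example (illustrative values, not a real recommendation):
q = EpicotQuestion(
    evidence="Systematic review of observational studies",
    population="Adults with non-valvular atrial fibrillation",
    intervention="Direct oral anticoagulant, standard dose",
    comparison="Vitamin K antagonist, routine care",
    outcome="Ischaemic stroke incidence over 12 months",
    time_stamp="Literature search: 2015-03",
)
print(q.is_complete())  # → True
```

Forcing each element to be filled in explicitly makes gaps in the question visible before the protocol is drafted.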
In a further step, the objectives of the study are defined. They are more operational than the research question and can be divided into primary, secondary and exploratory objectives. The primary objective corresponds to the single most important objective of the study: it drives the study design, the calculation of the sample size and the methods that will answer the research question. It can sometimes be a composite objective. Secondary objectives can be defined to provide additional details supporting the primary objective, to add new knowledge or comparisons, or to answer other relevant questions, potentially using other study designs to complement the evidence. The objectives should be closely related to the research question, cover all its aspects and be ordered in a logical sequence. The SMART criteria (Specific, Measurable, Appropriate (aligned with the research question), Realistic and Time specific) can be used to formulate the study objectives (There's a S.M.A.R.T. way to write management's goals and objectives, 1981).
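To make the link between the primary objective and the sample size concrete, the sketch below (an assumption of this text, not taken from the cited guidance) computes the per-group sample size for comparing two outcome proportions with the standard normal-approximation formula:

```python
import math
from statistics import NormalDist

def two_proportion_sample_size(p1: float, p2: float,
                               alpha: float = 0.05,
                               power: float = 0.80) -> int:
    """Per-group sample size to detect a difference between two outcome
    proportions with a two-sided test (normal-approximation formula)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p1 - p2) ** 2)

# Hypothetical primary objective: detect a reduction in event risk
# from 10% (comparator) to 5% (exposed) with 80% power.
print(two_proportion_sample_size(0.10, 0.05))
```

Changing the primary objective (e.g., a smaller expected difference or higher power) immediately changes the required sample size, which is why the primary objective must be fixed before feasibility can be assessed.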
Assessing the feasibility of the study constitutes an important preparatory step and is recommended to ensure that sufficient information is available to apply the proposed study design, for example, knowledge of the data available in the proposed data sources. The aim is not to answer the research question, but to determine whether the proposed study design could answer the research question with the expected statistical power and within the proposed timelines. A feasibility assessment can provide information on the number of subjects with a specific exposure or outcome, the availability of covariates and the follow-up period needed. It can also provide insights into the potential difficulties which may be encountered in the conduct of the study, or which may introduce bias. Importance of feasibility assessments before implementing non‐interventional pharmacoepidemiologic studies of vaccines: lessons learned and recommendations for future studies (Pharmacoepidemiol Drug Saf. 2016;25(12):1397-406) illustrates a pragmatic approach for conducting feasibility assessments for post-authorisation studies, which can be applied beyond vaccine research. The ISPE Good pharmacoepidemiology practice (GPP) explains how a data collection method or data source can answer a research question, with justifications based on feasibility when relevant. Linking electronic health data in pharmacoepidemiology: Appropriateness and feasibility (Pharmacoepidemiol Drug Saf. 2020;29(1):18-29) provides guidance to assess the feasibility of data linkage based on key areas including the design of the research question for study objectives addressed using secondary data collection. Other insights on formulating the research question and evaluating study feasibility are provided in Evaluating the Feasibility of Electronic Health Records and Claims Data Sources for Specific Research Purposes (Ther Innov Regul Sci. 2020;54(6):1296-1302).
Building on existing guidance and frameworks, the SPACE framework (A Structured Preapproval and Postapproval Comparative Study Design Framework to Generate Valid and Transparent Real-World Evidence for Regulatory Decisions, Clin Pharmacol Ther. 2019;106(1):103-115) describes a step-by-step process for identifying valid design elements and minimal feasibility criteria. STaRT-RWE: structured template for planning and reporting on the implementation of real world evidence studies (BMJ 2021;372:m4856) provides detailed templates to capture the final design and implementation details (e.g., specific algorithms for each study variable). In addition to these processes and templates, The Structured Process to Identify Fit-For-Purpose Data: A Data Feasibility Assessment Framework (Clin Pharmacol Ther. 2022;111(1):122-134) provides a systematic approach to determine if a data source is fit for regulatory decision-making, helping ensure justification and transparency throughout study development, from articulation of a specific and meaningful research question to identification of fit-for-purpose data and study design, illustrated by use cases.
It should be noted that formulating a research question is an iterative process. For example, the feedback obtained from a feasibility assessment may reveal that answering the research question is not feasible and consequently lead to changes in the objectives or design of the study.