Naila Arebi
The level of confidence in evidence generated by research is reflected in various aspects of study design and underpins clinical guidelines in Inflammatory Bowel Disease (IBD). The quality of research study designs, each with inherent strengths and limitations, determines confidence in the results, and it is acknowledged that some ‘robust’ study designs may not be feasible in specific contexts [1, 2]. Experimental designs such as randomised controlled trials, although characterised by methodological rigour, may be unsuitable for studying effectiveness owing to their weak external validity, which is attributable to their strict eligibility criteria. Reliance on complementary or alternative evidence from other study designs to address residual gaps in knowledge is therefore not uncommon. Non-experimental observational study designs capture a more diverse population, albeit in a less stringent setting, resulting in several methodological limitations, including a high risk of bias and unbalanced confounders [2]. The increasing digitalisation of medicine, together with access to diverse sources of data, partly explains the recent intensification of interest in evidence generated by analysing data from observational studies, with the purpose of offering new insights into the effectiveness and safety of interventions as well as understanding healthcare delivery and quality of care. Studies that systematically analyse data from multiple sources outside a research setting are referred to as real-world studies (RWS), and the conclusions drawn from their analysis as real-world evidence (RWE) [3].
RWE is gaining ground as a valuable tool to inform decisions on the effectiveness and safety of interventions in clinical practice, as well as for healthcare policy and health technology assessments. In recognition of the future potential of RWE, the Food and Drug Administration (FDA) has set out a framework to evaluate the use of RWE to support the approval of new drug indications and to monitor drug safety in clinical practice [4]. Within that remit, the framework includes guidance on a spectrum of study designs to overcome the limitations of interpreting data generated from various sources and issues related to data assurance. In doing so, the FDA identified specific milestones for reviewing RWE outputs with respect to regulatory considerations.
An understanding of and familiarity with the breadth of data sources is a prerequisite for critical appraisal of RWE [5]. Real-world data stem from one or more of the following sources:
1) Observational studies involving prospective data collection, such as disease registries, patient surveys, traditional cohort studies and data collected from mobile devices
2) Observational studies using existing administrative data, namely electronic medical records, medical claims data, birth or death registries, surveillance databases and spontaneous adverse drug event databases
3) Pragmatic clinical trials, which vary in design and may draw on several of the sources listed above for observational studies
An appreciation of the inherent limitations of RWE is critical to avoid drawing incorrect conclusions from it. Unlike epidemiological and observational prospective cohort studies, where data are collected to answer a specific research question, RWS use secondary data collected for different purposes within a non-research setting. The interpretation and credibility of evidence generated from such studies are hampered by issues that bear on the definition of high-quality evidence, including study design and analytic methods relating to temporality, confounding and sources of bias, as well as the description of outcomes, measurements and measurement tools.
Whilst it may take time and effort to overcome most limitations, two recent and ongoing initiatives to mitigate the methodological concerns linked with RWE are a step in the right direction. The first is the recently developed structured template for planning and reporting on RWE study implementation (STaRT-RWE), which aims to reduce misinterpretation and to facilitate reproducibility, validity assessment and evidence synthesis [6]. The STaRT-RWE template was developed with stakeholder engagement and includes templates for a range of study designs, together with the items considered a prerequisite for sharing the critical information needed to replicate findings and assess the validity of conclusions. Defining and explaining the logic of data analytic choices is also intended to facilitate comparisons across studies and collaborations across institutions.
The second initiative, led by the ECCO Epidemiological (EpiCom) and Clinical Research (ClinCom) Committees, involves the development of a Core Outcome Set (COS) for RWE, with the goal of harmonising data collection and reporting and thereby exploiting the potential of big data for collaborative projects. A working group was set up to oversee a comprehensive systematic review [submitted manuscript] of observational studies in IBD over a 20-year period. This literature review of 315 publications forms the basis for a series of statements designed to reach agreement among a broad range of stakeholders, using a Delphi method, on the most important items for future studies using real-world data. Following completion of voting and discussion, an ECCO Position Statement on a COS for RWE will be formulated in 2022.
COS are not new to IBD. Reviews of outcomes used in IBD clinical trials have been published specifically for Ulcerative Colitis [7], pouchitis [8], perianal Crohn’s Disease [9] and Crohn’s Disease [10]. In addition, three COS for clinical trials have been developed, covering perianal Crohn’s Disease [11], patient-reported outcomes in IBD [12] and paediatric IBD [13]. Uptake of IBD COS in prospective clinical trials is anticipated to be straightforward where planned data collection can be aligned with the corresponding COS. However, the visibility of IBD COS uptake in clinical trials lags behind COS publication because of the time required to conduct and report such trials. This was demonstrated in rheumatology, the field that pioneered COS: following publication of the rheumatoid arthritis (RA) COS in 1994, a review of uptake in 2009 reported that 60%–70% of trialists conducting RA trials were measuring outcomes using the RA-COS, and by 2016, 80% of trials were recording the RA-COS [14, 15].
The implementation of COS in RWE is anticipated to be more challenging, as it will occur outside a research setting, with data captured for other purposes. It remains to be seen whether the institutions housing databases and registries recognise the value of using COS to generate more meaningful evidence and the potential to harness big data for collaborative research that will influence healthcare; this will determine the future trajectory of RWE.
In conclusion, data collected from clinical activities may inform treatment decisions. The future of RWS rests on the scientific community’s confidence in the evidence they generate. Addressing methodological concerns, through structured study templates to improve replicability and through harmonisation of outcomes and other data elements to reduce data heterogeneity, is an initial step which, if successful, will strengthen the future of RWE.