Spotlight on Methodology Fundamentals

On the subject of methodology, the COVID-19 pandemic has stimulated much discussion about the design of clinical trials; from this came our idea to highlight essential methodological concepts.



To become more familiar with methodological issues, the Cochrane Neurological Sciences Field decided to interview two experts in neurology and clinical methodology, both from the world of stroke, international clinical trials and Cochrane, on various topics.

The Methodology Fundamentals are aimed primarily at young researchers, but can also be particularly useful to health professionals who wish to refresh and reorganize their knowledge of the methodological field.

We hope that these brief and informative discussions will tempt readers to investigate the issues further; to this end, we will select and attach papers that help to highlight the topics at hand.

Who are the experts?
Stefano Ricci - Editor, Cochrane Stroke Group, Perugia, Italy, and
Peter Sandercock - Emeritus Professor of Medical Neurology, University of Edinburgh, UK.

What are the Methodological Questions?

1) When are observational studies enough? (see below)
2) Why are RCTs better than observational studies? (see below)
3) What kind of RCTs are needed today? (see below)
4) Why is it so difficult to plan and arrange large pragmatic trials? (see below)
5) What should regulatory authorities modify to allow for the realisation of these kinds of trials? (see below)
6) How can we ensure that clinical research is done where the results will eventually be applied? (upcoming)

Q1:  When are observational studies enough? 

Observational studies are extremely useful and indeed irreplaceable in various relevant settings:

a)   Evaluation of the epidemiological characteristics of a disease (e.g. How common is the condition? What is its prognosis? What are the risk factors?).
b)   Evaluation of the external validity of a trial result.
c)   Planning a trial, i.e. estimating the expected effects of a treatment from observational epidemiology.

To give an example: an observational study shows that an increase of “Y” mmHg in systolic blood pressure is associated with an X% increase in the risk of stroke. In a randomized trial we can use this information to evaluate whether the administration of a drug “A” that lowers the BP by “Y” mmHg reduces the risk of stroke compared to control. Is the reduction that we expect to find confirmed by the results of the trial? And if it is not confirmed, what are the reasons for this difference? (A worked sketch of this logic follows the list.)

d)   To put the results of a treatment trial in clinical context, including the so-called phase 4 studies (mostly to evaluate side effects).
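To make the logic of point c) concrete, here is a minimal Python sketch using purely hypothetical numbers (Y = 10 mmHg, X = 25%; no specific study is implied). It derives the relative risk reduction a trial of drug “A” would be expected to show if the observational association were causal and fully reversible:

```python
# Purely hypothetical numbers for illustration (no specific study implied):
# suppose cohort data show that each 10 mmHg higher systolic BP
# is associated with a 25% higher risk of stroke.
Y_mmHg = 10            # BP difference observed in the cohort ("Y")
X_pct = 25             # associated % increase in stroke risk ("X")

# Risk ratio associated with +Y mmHg of systolic BP
rr_up = 1 + X_pct / 100            # 1.25

# If drug "A" lowers BP by the same Y mmHg, and the association is
# causal and reversible, the expected risk ratio is the inverse:
rr_down = 1 / rr_up                # 0.80

expected_rrr = 1 - rr_down         # expected relative risk reduction
print(f"Expected relative risk reduction: {expected_rrr:.0%}")   # 20%
# The trial then asks: does the observed reduction match this expectation?
```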

The role of observational studies in these specific fields of medicine should not be underestimated; however, they cannot substitute for randomized controlled trials when the aim of the study is to evaluate the effect of a new treatment on a specific outcome. This is because in modern medicine the problem is usually not to pick up the large effect of a treatment (for which no RCT is indeed needed) but a relatively moderate difference, which, however, in terms of absolute effect would modify the outcome of hundreds of thousands of patients in the world. In fact, a 3 or 4% absolute difference in death and disability in acute stroke looks like a modest result, but if that treatment were widely applied then a huge number of patients would benefit from it. If our aim is to test efficacy, we cannot rely on non-randomized observational studies of therapy, because this kind of study is prone to many biases, including selection and attrition bias, which can completely distort the result. So, let’s give observational studies their merits, but not give them more room than they actually deserve.
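The population-level arithmetic behind this argument can be sketched as follows; the 3% absolute risk reduction and the annual number of treatable strokes are round figures assumed purely for illustration:

```python
# Why a "modest" 3% absolute risk reduction matters at scale.
# Both numbers below are illustrative assumptions, not real estimates.
arr = 0.03                       # absolute reduction in death or disability
treatable_per_year = 10_000_000  # assumed annual number of treatable strokes worldwide

nnt = 1 / arr                    # number needed to treat for one outcome avoided
print(f"NNT: {nnt:.0f}")                                                  # ~33
print(f"Bad outcomes avoided per year: {arr * treatable_per_year:,.0f}")  # 300,000
```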

For further reading:


Q2: Why are RCTs better than observational studies?

The simplest answer to this question is: because they reduce (or should we say almost eliminate) many important sources of bias. Biases in clinical research are usually described as follows:

Selection bias is a systematic difference between patients who are selected for treatment with the new therapy and those who are not. This difference may be directly induced by a researcher who believes, for instance, that patients with a less severe disease should be treated with the new treatment, while more severe cases receive only standard care; in that case, the milder disease in the treated cases will mean they have a better outcome compared to controls, thus making the treatment appear more effective. The magic of randomization ensures that the two groups differ only with respect to the treatment being tested. However, even if the sequence of treatments is randomized, selection bias can occur if the researcher has prior knowledge of which treatment (active or control) the next subject to be enrolled in the study will receive. For example, in a trial where treatment allocation is determined by opening the next treatment pack in the randomized sequence, small differences between the active and control packs may allow the researcher to discover that the next patient will receive control, and to decide not to enroll that individual in the study. This can be prevented by adequate allocation concealment (e.g. use of a secure web-based treatment allocation system). Finally, a study with only a small number of patients (i.e. a very small sample size) may leave the groups imbalanced on various variables, but usually this can be avoided with a correct and prudent sample size calculation in advance.
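To illustrate why an unpredictable sequence plus allocation concealment matters, here is a minimal, hypothetical sketch of permuted-block randomization (real trials delegate this to a secure central system that reveals each allocation only after enrollment):

```python
import random

def block_randomise(n_blocks, block_size=4, seed=None):
    """Generate a 1:1 allocation sequence in random permuted blocks.

    In a real trial this sequence lives on a secure central server and
    each allocation is revealed only AFTER a patient has been enrolled,
    so no researcher can foresee the next assignment (allocation
    concealment). This stand-alone sketch only shows the sequence itself.
    """
    rng = random.Random(seed)
    sequence = []
    for _ in range(n_blocks):
        block = ["active"] * (block_size // 2) + ["control"] * (block_size // 2)
        rng.shuffle(block)        # order within each block is unpredictable
        sequence.extend(block)    # but each block stays balanced 1:1
    return sequence

print(block_randomise(n_blocks=3, seed=2021))
```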

Performance bias arises when the two groups receive different care and ancillary treatments; consequently, the group that receives better background treatment has a better outcome because of this, and not because of the new treatment. In general, this bias can be avoided with treatments that are truly blinded, so that neither researcher nor subject knows whether they are in the active or control arm of the trial. Some interventions that involve testing the organization of care (e.g. stroke unit care vs general medical ward), surgery or therapist intervention (e.g. physiotherapy) cannot be blinded, so the trial must include strategies to minimize the impact of performance bias between study groups. Some (new) treatments carry with them ancillary procedures (e.g. more frequent monitoring of BP and neurological status) which can be considered part of the treatment. If it is not possible to blind both the person delivering the treatment and the study participant, then it is vital, as far as possible, to prevent the person who is measuring the outcome of treatment from being aware of the treatment allocation for that subject.

Attrition bias is a difference in outcome due to differences in compliance with treatment or follow-up. It has been frequently shown in randomised placebo-controlled trials that good compliance is associated with better outcomes, even among patients allocated placebo, compared to poor compliance with the scheduled procedure. Loss of patients to follow-up is an additional source of bias, especially if it differs between treatment groups. This bias can be avoided by use of the intention-to-treat principle, i.e. analysing each randomized patient in the group to which he or she was assigned, no matter whether he or she actually took or received the assigned treatment, and by ensuring that follow-up is complete for both treatment groups.
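A minimal sketch of the intention-to-treat principle, using invented data: each patient is analysed in the arm to which he or she was randomized, regardless of the treatment actually received:

```python
# Invented data: (assigned arm, treatment actually received, good outcome?)
patients = [
    ("active",  "active",  True),
    ("active",  "control", False),   # crossed over, still analysed as "active"
    ("control", "control", True),
    ("control", "control", False),
    ("control", "active",  True),    # crossed over, still analysed as "control"
    ("active",  "active",  True),
]

def good_outcome_rate(arm):
    """Intention to treat: group patients by ASSIGNED arm,
    ignoring the treatment they actually received."""
    group = [good for assigned, _received, good in patients if assigned == arm]
    return sum(group) / len(group)

for arm in ("active", "control"):
    print(f"{arm}: {good_outcome_rate(arm):.0%} good outcomes")
```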

Detection bias arises when the outcome is evaluated differently depending on knowledge of the treatment the patient actually took. Apart from double-blind studies, this bias can be avoided (and indeed is in most recent big trials) with the so-called PROBE design (prospective, randomized, open-label, blinded end-point), in which both patients and the doctors who care for them know the treatment arm, but a third person, not involved in the patient’s care, performs the follow-up and the outcome evaluation.

Therefore, if the randomization procedure is correct (today, a web-based system is usually used) and the above-mentioned biases have been avoided, the trial will have good “internal validity”, meaning the results can be trusted. A clinician reading the trial report, however, should always ask this question: “Ok, they did it, but what about me?” This is the big problem of external validity or, in other words, the extent to which the results of a study can be generalised to the “real world” population. We will come back to this in the next short note.

For further reading: 
Collins R, Bowman L, Landray M, Peto R. The Magic of Randomization versus the Myth of Real-World Evidence. N Engl J Med 2020; 382: 674-678.

Q3: What kind of RCTs are needed today?

In stroke medicine, as well as in many other fields of therapy, treatments that offer an individual subject only small absolute reductions in important bad outcomes (e.g. survival with significant disability) may still be very worthwhile. If the treatment is safe, inexpensive and easy to deliver, and is applied to almost all stroke patients, it could avoid a significant number of bad outcomes across the whole population (aspirin, statins and blood pressure lowering are all good examples of this type of substantial population-wide benefit from modest treatment effects).

But how can we pick up these modest reductions in negative outcomes (i.e. death and disability) with reasonable certainty? We need trials with both high internal and high external validity. Internal validity is defined as “the degree to which observed changes in a dependent variable can be attributed to changes in an independent variable”, or “the integrity of the experimental design and, consequently, the degree to which a study establishes the cause-and-effect relationship between the treatment and the observed outcome”. External validity is defined as “the extent to which the results of a study can be generalised to the ‘real world’ population”. But my “real world” may well be different from yours… so we are actually talking about the appropriateness with which a study result can be applied to non-study patients or populations. So, external validity asks the question of generalizability: to what populations, settings, and treatment variables can this effect be generalized? The answer may well differ when different clinicians in different hospitals are asked the question; the more the various clinicians agree, the higher the external validity.

Internal validity is maximized when trials are large (adequately powered), well designed (a simple, efficient design ensuring high adherence to the protocol) and achieve high completeness of data with no loss to follow-up. External validity is highest when the trial has broad entry criteria (to ensure relevance to the widest variety of people with the disease) and incorporates a range of settings (for example, in hospital-based studies, by including primary, secondary and tertiary level care hospitals) spread across different regions, thus testing the intervention in the type of settings where it will be applied in practice. To have the highest chance of eventually being implemented in routine practice, the trial should select a clear measure of outcome that is relevant to patients, clinicians and health care planners.
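To give a feel for what “adequately powered” means for the modest effects discussed above, here is a sketch of the standard normal-approximation sample-size formula for comparing two proportions; the outcome rates and power below are hypothetical choices, not recommendations:

```python
from statistics import NormalDist

def n_per_group(p_control, p_active, alpha=0.05, power=0.9):
    """Approximate sample size per arm for comparing two proportions
    (normal approximation, two-sided test at significance level alpha)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    variance = p_control * (1 - p_control) + p_active * (1 - p_active)
    return (z_alpha + z_beta) ** 2 * variance / (p_control - p_active) ** 2

# Hypothetical rates: detecting a 3% absolute reduction in death or
# disability (50% -> 47%) with 90% power needs thousands of patients per arm.
print(round(n_per_group(0.50, 0.47)))   # roughly 5,800 per group
```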

We therefore need trials with minimal exclusion criteria, reflecting what happens in daily clinical practice (where we have to treat patients who present with the same condition at varying levels of severity and with differing clinical manifestations). The treatment under test should be feasible in everyday practice (or, if the treatment is complex, it should be possible to apply it in the near future to the majority of patients). In turn this requires trial procedures that are easy to “embed” in routine clinical practice. Busy physicians have very little time to dedicate to trials, and to obtain large numbers of included patients the extra work for the trialists should be minimal (efficient and high-quality trial design).

We will return to the question of improving the design, quality and efficiency of trials in a future edition.

Further reading:
"Fundamentals of Clinical Trials."  Friedman, Furberg, DeMets.  Springer 2010. A clear, well written and very wise book on trials!

Q4: What are the factors that make a large pragmatic trial of an intervention feasible?

1.   Choose the right question. By this I mean a question of major public health impact, of interest to the general public, clinicians and policymakers. The condition to be treated should be frequent and result in a significant health burden to those suffering from it and to society at large, and there should also be a) considerable clinical uncertainty about the intervention, b) evidence of substantial variation in its usage in normal clinical practice (widely used by some clinicians for certain patients, and not by others), and c) an intervention that is feasible in a range of health care settings (so that, if proven to be safe and effective, it would be applicable to many patients).

2.  Choose a high priority question. There are many research questions, and not every one is important enough to justify the cost (in time, effort, risk to participants and money) of a large trial. There should also be consensus among people affected by the condition, clinicians and government health departments that this particular clinical uncertainty is a burning issue right now that needs to be resolved as a priority. This should ensure the support of the public, politicians, funding agencies and healthcare administrations while the trial is underway. The use of the uncertainty principle to determine eligibility for the trial will emphasise the educational value of trial participation for trial staff.

3. The intervention should be widely practicable, affordable and have the potential to be implemented on a wide scale across the health service. In countries with a public health system funded by the state, the intervention to be tested and the trial itself should be implementable within that system. This means that when the trial is complete and the intervention confirmed effective, the intervention will be adopted in routine clinical practice fairly easily. However, it may be possible that a private sponsor is interested in the study and offers an “unconditional” grant for it, with a view to its wide applicability in the future. In this case, researchers must do everything to ensure that the private sponsor has no role in planning, conducting and analysing the study.

4.  The trial procedures should be as streamlined and cost-effective as possible. The trial should cause the trial participants very little inconvenience: questionnaires should be short, and follow-up should fit in with their routine clinical care as closely as possible; where that is not possible, central follow-up through national data systems, telephone follow-up or other remote follow-up methods should be used. Likewise, for the clinical staff responsible for the trial participants (who will already be very busy), the trial procedures should be simple and data collection should be limited to essential items only. Monitoring of the conduct of the trial should, as far as possible, not involve visiting sites or detailed source data verification; there are now accepted ‘risk-adapted’ methods of central monitoring that are acceptable to regulatory bodies. All of these steps will help minimise the cost of the study.

5.  The support of an experienced clinical trials unit is key. Trials are complex and are best supported by methodological teams experienced in solving the practical problems in the simplest and most efficient way. In some countries, clinical trials units provide this support and reduce the burden on the chief investigator. For very large-scale trials, there are only a limited number of clinical trials units around the world with the expertise to conduct and complete randomised trials involving several tens of thousands of patients. National stroke support organisations or charitable foundations can help with some aspects of funding or publicity, but in the case of a multinational trial, coordination at national level may be left to local organisations, with a “supervisor” maintained at international level.

Q5: Large, efficient, pragmatic trials: “What should clinical trial regulatory authorities modify to allow for the realisation of such trials?”

There are many bureaucratic processes to complete before a trial can begin, and these may differ between countries, making international multicentre trials difficult to start and conduct. These regulatory processes give rise to delay and increased cost, which in turn reduce sample size and the chance of providing reliable scientific evidence (and sometimes even lead to complete failure of the study).1 Such research waste is unacceptable.1 The COVID-19 pandemic has highlighted the scale of this problem.2 Tikkinen et al. reviewed research in the early phase of the pandemic in 2020; of the >2,000 planned drug studies examining COVID-19 treatments (https://www.covid-trials.org), most had delivered little or no directly useful information, and they concluded: ‘Throughout the world, however, over-regulation and the lack of national and international efforts to facilitate appropriately large trials represent a missed opportunity to improve care’.3

The RECOVERY trial (https://www.recoverytrial.net) provides an excellent case study to demonstrate how regulatory and ethical authorities could modify their procedures to reduce the barriers to the set-up and conduct of important trials. It involved 176 UK hospitals and, within a few months, recruited >12,000 hospitalized patients (15% of all UK COVID-19 cases admitted to hospital), and provided clear answers on the effectiveness of dexamethasone and the ineffectiveness of hydroxychloroquine and lopinavir–ritonavir.3 Even the USA, with all its health care resources, was not able to mount large-scale randomised trials rapidly enough to provide reliable answers in the early phases of the pandemic; regulatory barriers were a significant contributor.4,5 Bauchner et al. concluded ‘The USA could have done better’.5

Rapid approval of the protocol
In the RECOVERY trial, the period from protocol to first patient recruitment was nine days, in part because of a government-mandated prioritisation of certain projects, and efforts by the regulatory bodies in the UK to speed the approval process.6

Training of sites in the protocol and not exhaustive GCP training
Notably, RECOVERY sought to achieve reliability and quality by design rather than by compliance with good clinical practice or site monitors, relying instead on centralized computer checks on site behaviour and patient compliance, and utilizing central National Health Service medical records of treatment and outcome.6,7

Simple consent
Also, it did not necessarily require written consent where the medical emergency rendered this inappropriate (https://www.recoverytrial.net).7 This point is crucial, given the very different rules in different countries, and even the different approaches of ethics committees within the same country.

Reduced data collection and GCP training approved by regulators
Streamlined trial conduct and data collection enabled the participation of hospitals already stretched by patients with COVID-19, with many of the less research-experienced hospitals being among the best recruiters. Training in Good Clinical Practice (GCP) was simplified: staff did not need to undergo extensive training in all aspects of GCP, but focused on the specific tasks each member of the clinical staff was required to perform for the trial.

Support from the government’s medical officers
A strong letter of support from the Chief Medical Officers of England, Wales, Scotland and Northern Ireland emphasized that the trial was to be seen as part of clinical care and stated that “Use of treatments outside of a trial, where participation was possible, is a wasted opportunity to create information that will benefit others.”3 This kind of statement should be produced by regulatory authorities in each country (and, in principle, in each specific hospital) to facilitate trial conduct.

Quality by design (QbD).
The Clinical Trials Transformation Initiative is an international partnership that seeks to improve the quality and efficiency of randomised trials.7 Many of the regulatory obstacles experienced by clinicians wishing to perform clinical trials could be overcome if investigators, regulators and ethical boards were trained in, and understood, the principles of quality by design (QbD). QbD is not a checklist; rather, it is a common-sense approach in which stakeholders consider: (1) what aspects of a trial are critical to generating reliable data and providing appropriate protection of research participants (“critical to quality” factors); and (2) what strategies and actions will effectively and efficiently support quality in these critical areas. However, such principles of research design may appear at first sight to be in conflict with the extensive clinical trial regulatory system, and so dialogue is required between investigators and regulators to agree on research designs that efficiently generate reliable data.

References

1. Al-Shahi Salman R, Beller E, Kagan J, et al. Increasing value and reducing waste in biomedical research regulation and management. Lancet 2014; 383: 176–85.

2. Glasziou PP, Sanders S, Hoffmann T. Waste in covid-19 research. BMJ 2020; 369: m1847.

3. Tikkinen KAO, Malekzadeh R, Schlegel M, Rutanen J, Glasziou P. COVID-19 clinical trials: learning from exceptions in the research chaos. Nature Medicine 2020. Published online. https://doi.org/10.1038/s41591-020-1077-z

4. Burki TK. Completion of clinical trials in light of COVID-19. Lancet Respir Med 2020. Published Online October 1, 2020. https://doi.org/10.1016/S2213-2600(20)30460-4

5. Angus DC, Gordon AC, Bauchner H. Emerging lessons from COVID-19 for the US clinical research enterprise. JAMA 2021. Published online February 26, 2021. https://doi.org/10.1001/jama.2021.3284

6. Wise J, Coombes R. Covid-19: The inside story of the RECOVERY trial. BMJ 2020;370:m2670 http://dx.doi.org/10.1136/bmj.m2670 

7. https://www.ctti-clinicaltrials.org/news/new-case-studies-reveal-real-world-experience-quality-design-qbd