We have now identified current information that can answer our clinical question. The next step is to read the article and evaluate the study. There are three basic questions that need to be answered for every type of study:

Are the results of the study valid?
What are the results?
Will the results help me in caring for my patients?
This tutorial will focus on the first question: are the results of the study valid? The issue of validity speaks to the "truthfulness" of the information. The validity criteria should be applied before an extensive analysis of the study data. If the study is not valid, the data may not be useful.
The evidence that supports the validity or truthfulness of the information is found primarily in the study methodology. This is where the investigators address the issue of bias, both conscious and unconscious. Study methodologies such as randomization, blinding and follow-up of patients help ensure that the study results are not overly influenced by the investigators or the patients.
Evaluating the medical literature is a complex undertaking. This session will provide you with some basic criteria and information to consider when trying to decide if the study methodology is sound. You will find that the answers to the questions of validity are not always clearly stated in the article, and that you will have to make your own judgment about the importance of each question.
Once you have determined that the study methodology is valid, you must examine the results and their applicability to the patient. Clinicians may have additional concerns, such as whether the study represented patients similar to their own patients, whether the study covered the aspect of the problem that is most important to the patient, or whether the study suggested a clear and useful plan of action.
1. Were patients randomized? The assignment of patients to either group (treatment or control) must be done by random allocation. This might include a coin toss (heads to treatment/tails to control) or use of randomization tables, often computer generated (a brief sketch of computer-generated allocation follows this list). Research has shown that random allocation comes closest to ensuring the creation of groups of patients who will be similar in their risk of the events you hope to prevent. Randomization balances the groups for known prognostic factors (such as age, weight and gender) and unknown prognostic factors (such as compliance, genetics and socioeconomics). This reduces the chance of over-representation of any one characteristic within the study groups.
2. Was group allocation concealed? The randomization sequence should be concealed from the clinicians and researchers of the study to further eliminate conscious or unconscious selection bias. Concealment (part of the enrollment process) ensures that the researchers cannot predict or change the assignment of patients to treatment groups. If allocation is not concealed, it may be possible to influence the outcome (consciously or unconsciously) by changing the enrollment order or the order of treatments that have been randomly assigned. Concealed allocation can be accomplished by using a remote call center to enroll patients or by using opaque envelopes containing the assignments. This is different from blinding, which happens AFTER randomization.
3. Were patients in the study groups similar with respect to known prognostic variables? The treatment and the control group should be similar for all prognostic characteristics except whether or not they received the experimental treatment. This information is usually displayed in Table 1, which outlines the baseline characteristics of both groups. This is a good way to verify that randomization resulted in similar groups.
4. To what extent was the study blinded? Blinding means that the people involved in the study do not know which treatments were given to which patients. Patients, researchers, data collectors and others involved in the study should not know which treatment is being administered. This helps eliminate assessment bias and preconceived notions as to how the treatments should be working. When it is difficult or even unethical to blind patients to a treatment, such as a surgical procedure, then a "blinded" clinician or researcher is needed to interpret the results.
5. Was follow-up complete? The study should begin and end with the same number of patients in each group. Patients lost to the study must be accounted for; otherwise the conclusions may be invalid. Patients may drop out because of the adverse effects of the therapy being tested, and if they are not accounted for, the study can reach conclusions that are overly confident in the efficacy of the therapy. Good studies will have better than 80% follow-up for their patients. When there is a large loss to follow-up, the lost patients should be assigned to the "worst-case" outcomes and the results recalculated (a worked example follows this list). If these results still support the original conclusion of the study, then the loss may be acceptable.
6. Were patients analyzed in the groups to which they were first allocated? Anything that happens after randomization can affect the chances that a patient in a study has an event. Patients who forget or refuse their treatment should not be eliminated from the study results or allowed to “change groups”. Excluding noncompliant patients from a study group may leave behind only those who are more likely to have a positive outcome, compromising the unbiased comparison created by randomization. Therefore all patients must be analyzed within their assigned group, and randomization must be preserved. This is called "intention to treat" analysis, which is also shown in the worked example after this list.
7. Aside from the experimental intervention, were the groups treated equally? Both groups must be treated the same except for administration of the experimental treatment. If "cointerventions" (interventions other than the study treatment that are applied differently to the two groups) exist, they must be described in the methods section of the study.
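To make the idea of computer-generated random allocation (question 1) concrete, here is a minimal sketch in Python. The group labels, patient numbers and fixed seed are illustrative assumptions, not part of any particular trial protocol.

```python
import random

def allocate(patient_ids, seed=2024):
    """Randomly assign each patient to 'treatment' or 'control' with equal chance."""
    rng = random.Random(seed)
    return {pid: rng.choice(["treatment", "control"]) for pid in patient_ids}

# Allocate 10 hypothetical patients. The fixed seed only makes this
# illustration reproducible; in a real trial the sequence would be
# generated once and then concealed from the enrolling clinicians.
print(allocate(range(1, 11)))
```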
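The following sketch shows, with purely hypothetical numbers, how the "worst-case" recalculation for loss to follow-up (question 5) and an intention-to-treat comparison (question 6) might be carried out; the event counts and group sizes are invented for illustration only.

```python
def event_rate(events, randomized):
    """Proportion of randomized patients who had the outcome event."""
    return events / randomized

# Hypothetical trial: 100 patients randomized to each arm.
treat_events, treat_lost = 10, 8     # 10 events observed, 8 lost to follow-up
control_events, control_lost = 20, 2

# Intention to treat: analyze everyone in the arm to which they were randomized.
print("ITT rates:", event_rate(treat_events, 100), event_rate(control_events, 100))

# Worst-case sensitivity analysis: assume every lost treatment patient had
# the event and every lost control patient did not.
worst_treat_events = treat_events + treat_lost
print("Worst-case rates:", event_rate(worst_treat_events, 100),
      event_rate(control_events, 100))
# If the treatment arm still fares better (0.18 vs 0.20 here), the loss to
# follow-up is less likely to overturn the study's conclusion.
```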