# Drug trials


Whether you’re discussing the evidence on a drug treatment with a company rep, or evaluating a randomised controlled trial report yourself, or reading a summary of clinical trial evidence (such as a *Veterinary Prescriber* article), it’s essential to understand the terms used and the key features of randomised controlled trials, including the potential sources of bias. Here is a guide to help you.

A randomised controlled trial is the best type of trial design to find out about the efficacy of a drug treatment. The design aims to ensure that the only difference between the two groups of subjects in the trial is the drug of interest, so that any changes can be attributed only to the drug.

**Use of a control.** The trial needs to include a group of subjects that does not get the test treatment. This helps to rule out the possibility that factors other than the treatment were responsible for the outcomes. The way to find out whether a treatment works at all is to compare it with a placebo control. If there is already a standard treatment with known efficacy, it can be more meaningful to compare the test drug with the standard drug (an active control). The active comparator should, if possible, be identical in appearance to the test drug.

**Randomisation.** The treatment and the control are allocated randomly, so that each subject has the same chance of receiving the test treatment or the control. Randomisation ensures that a subject cannot be knowingly or subconsciously allocated a particular treatment. It also ensures that the groups are broadly similar apart from the test treatment, so that any differences in outcome can be attributed to the test treatment.
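The idea of 1:1 random allocation can be sketched in a few lines of code. This is an illustrative sketch only: the subject names, the fixed seed and the shuffled-labels approach are assumptions for the example, not part of any trial protocol.

```python
import random

def randomise(subjects, seed=42):
    """Allocate subjects to 'test' or 'control' at random (1:1).

    Shuffling a balanced list of labels gives every subject the same
    chance of either group while keeping the group sizes equal.
    """
    labels = ["test", "control"] * (len(subjects) // 2)
    rng = random.Random(seed)  # seed fixed only so this sketch is reproducible
    rng.shuffle(labels)
    return dict(zip(subjects, labels))

allocation = randomise([f"dog_{i}" for i in range(10)])
print(allocation)
```

In a real trial the allocation list is generated in advance and concealed from the investigators, which is what prevents knowing or subconscious assignment.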

**A placebo** is an intervention that matches the test treatment in every way possible, except that it does not contain the active drug (e.g. it can be an identical tablet or capsule containing the same inactive ingredients). Using a placebo helps to blind the study.

**Blinding** guards against subjective bias. The people measuring the outcomes must not be aware of (i.e. they are ‘blind to’) the treatment allocation. Blinding is especially important if subjective outcomes (such as changes in behaviour) are being measured. If the animal owner is involved in reporting treatment effects, they too should be blind to the treatment allocation. In human trials, the term ‘double-blind’ is used when both the investigators and the participants are blind to the treatment allocation.

The trial report should make a clear statement about what the trial is setting out to do. The outcome measures (sometimes called endpoints) should be chosen to suit the objective of the trial. For the purpose of obtaining a product licence (also known as a marketing authorisation) for a drug, the choice of outcome measures is guided by the regulatory authorities.

**Primary outcome measure:** The main measure of the effect of the treatment is called the primary outcome measure or primary endpoint. It should be clinically relevant and directly related to the main goal of the trial. The sample size of the trial should be determined by power calculations involving the primary outcome measure.

**Secondary outcome measures.** Trials often look at the effects of a drug on outcomes other than the primary one; these are secondary outcomes. They should also be specified before the start of the trial and be included in the methods section of the report. Secondary outcome measures do not have the same statistical authority as the primary measure, and it is more likely that positive changes in secondary outcomes will be due to chance. It’s important not to put too much weight on a secondary outcome result, but to see it as something interesting that requires proper testing.

The study should be adequately powered for the primary outcome measure and the results for all outcome measures should be reported.

The sample size is the number of subjects included in a study. This should be calculated at the design stage to make sure that the study is big enough to have a realistic chance of detecting a difference between the test treatment and the control. The calculation should appear in the methods section.

The power of a trial is a measure of how likely it is to be able to find a certain size of difference between the groups being compared, assuming such a difference exists. In general, the larger the study, the greater the power. A study with too few subjects is underpowered, so it is unlikely to be able to give convincing evidence on whether or not there is a real difference between the treatments being compared. Power also depends on how large a difference is expected between groups. For example, if only a small extra benefit is expected for one drug over another, more subjects will be needed to achieve the same power to detect the difference than if a very large difference is expected between the treatments. Study power is usually set at 80%, which means that there is a 20% likelihood of missing a real difference of the specified size between the two groups being compared. There should be a calculation of power in the methods section of the trial report.
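The link between power, expected difference and sample size can be made concrete with a standard normal-approximation formula for comparing two proportions. This is a sketch of one common textbook formula, not necessarily the calculation used in any particular trial, and the example event rates are invented.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Approximate subjects needed per group to detect a difference
    between two proportions (normal-approximation formula)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # about 1.96 for a two-sided 5% test
    z_beta = z.inv_cdf(power)           # about 0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# e.g. an event rate of 30% on control vs an expected 20% on treatment
print(n_per_group(0.30, 0.20))  # about 291 per group
# a smaller expected difference needs far more subjects for the same power
print(n_per_group(0.30, 0.25))
```

Note how halving the expected difference roughly quadruples the required sample size, which is why underpowered trials are so common when the true benefit is modest.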

In **superiority trials**, the trial is designed to find out whether the test treatment is superior to the comparator. This design is usual for placebo-controlled trials, because a substantial difference between the drug and placebo would be expected. But when a standard treatment already exists, an active comparator might be used instead. As a large difference in efficacy between the test and standard treatments is usually unlikely, it is more common for such a trial to find out whether the test drug is equivalent, or no worse than (that is, non-inferior to), the standard treatment.

It is impossible to show exact equivalence of two treatments, so an **equivalence trial** can only find out whether any difference in effect between the two treatments falls within a certain range. In other words, it can find out whether the test drug is roughly similar to the control, allowing for the possibility that it might be a little better or a little worse.

A **non-inferiority trial** is designed to find out only whether the test drug is no worse (within a certain margin) than the control. It does not rule out the possibility that the test drug might be superior to the control, and once non-inferiority has been demonstrated it is possible to go on to test statistically whether the test drug is superior. Non-inferiority trials require smaller sample sizes than superiority or equivalence trials, and so they are quicker and cheaper to perform. New drugs are commonly tested in non-inferiority trials. See these examples: imepitoin, telmisartan.
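One common way to judge non-inferiority is to compute a confidence interval for the difference in response rates and check its lower limit against the pre-specified margin. A minimal sketch, using a simple Wald interval; the trial figures and the 10% margin are invented for illustration.

```python
from statistics import NormalDist

def noninferior(resp_test, n_test, resp_ctrl, n_ctrl, margin, alpha=0.05):
    """Judge non-inferiority from a two-sided 95% CI on the difference in
    response proportions: the test drug is non-inferior if the lower limit
    of (test - control) lies above -margin. Wald interval; illustrative only."""
    p_t = resp_test / n_test
    p_c = resp_ctrl / n_ctrl
    se = (p_t * (1 - p_t) / n_test + p_c * (1 - p_c) / n_ctrl) ** 0.5
    z = NormalDist().inv_cdf(1 - alpha / 2)
    lower = (p_t - p_c) - z * se
    return lower > -margin, lower

# hypothetical data: 148/200 responders on test, 150/200 on control, 10% margin
ok, lower = noninferior(148, 200, 150, 200, margin=0.10)
print(ok, round(lower, 3))
```

In this made-up example the lower confidence limit sits just inside the margin, so non-inferiority would be claimed even though the point estimate slightly favours the control; this is exactly why the choice of margin matters so much in these trials.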

**Intention-to-treat and per protocol analysis**

It is possible that some subjects in a trial might not receive the treatment to which they were randomised, or might be removed from the trial before the end because of adverse effects or for some other reason. If only data from subjects that properly completed the trial (the per-protocol population) are analysed, the results may be biased: removing the results of subjects that did not receive the treatment as intended tends to exaggerate any difference between the treatments. This bias can be avoided by using an ‘intention-to-treat’ analysis, which uses data from all randomised subjects according to the group to which they were allocated. The intention-to-treat analysis is considered a more conservative, and therefore more believable, representation of the results.

Note, however, that this applies only to superiority trials, including placebo-controlled trials. In non-inferiority trials both types of analysis (intention-to-treat and per-protocol) should be done, and the result accepted only if the two analyses agree.
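A small worked example shows how the two analyses diverge. The numbers below are invented, and counting dropouts as non-responders is only one common (conservative) intention-to-treat convention.

```python
def response_rates(successes, completers, randomised):
    """Contrast per-protocol and intention-to-treat response rates.

    Per protocol: responders among subjects who completed as intended.
    Intention to treat: all randomised subjects, with dropouts counted
    as non-responders (one common conservative convention)."""
    per_protocol = successes / completers
    itt = successes / randomised
    return per_protocol, itt

# hypothetical arm: 100 dogs randomised, 10 withdrawn for adverse effects,
# 60 of the 90 completers responded
pp, itt = response_rates(successes=60, completers=90, randomised=100)
print(round(pp, 2), round(itt, 2))  # the per-protocol rate is the higher one
```

Dropping the 10 withdrawals lifts the apparent response rate from 60% to about 67%, which is the kind of flattering shift a per-protocol analysis can produce.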

The probability (p) value represents how likely it is that a particular result in a study occurred by chance alone, if you assume that in reality there is no difference between the treatment and the comparator. For example, a study might suggest that a drug reduces mortality from 30% (the rate on no treatment) to 20%, and report a p value of <0.05 as evidence of this. This statistic indicates that if the trial were to be performed repeatedly, a difference in outcome between the two study groups as big as this (i.e. 10 percentage points), or larger, could be expected to occur by chance alone in fewer than 5% of these studies. A p value can range from 0 to 1. The smaller the p value, the lower the likelihood that the result happened by chance, and the more confident you can be that there really is a difference between the two treatments being compared. By convention, p values of 0.05 and below are considered significant, and values above 0.05 non-significant. When p=0.05, a difference at least this large would arise by chance in 1 in 20 such trials if there were no real difference between the treatments; p=0.01 corresponds to a 1 in 100 chance, and p=0.001 to a 1 in 1,000 chance.
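The mortality example above can be turned into an actual p value with a standard two-proportion z-test. The group sizes (200 per arm) are an assumption added for illustration, and this normal-approximation test is just one of several the trial statisticians might use.

```python
from statistics import NormalDist

def two_proportion_p(events1, n1, events2, n2):
    """Two-sided p value for a difference between two proportions,
    using the pooled two-proportion z-test (normal approximation)."""
    p1, p2 = events1 / n1, events2 / n2
    pooled = (events1 + events2) / (n1 + n2)
    se = (pooled * (1 - pooled) * (1 / n1 + 1 / n2)) ** 0.5
    z = abs(p1 - p2) / se
    return 2 * (1 - NormalDist().cdf(z))

# mortality 30% (60/200) on no treatment vs 20% (40/200) on the drug
p = two_proportion_p(60, 200, 40, 200)
print(round(p, 3))  # about 0.021, below the conventional 0.05 threshold
```

With the same 10-percentage-point difference but only 50 animals per arm, the p value would rise well above 0.05: statistical significance depends on sample size as much as on the size of the effect.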

While the difference between two groups in a trial might be ‘statistically significant’ (i.e. the p value is 0.05 or less), it is not necessarily a clinically significant difference. For example, a study that showed one treatment was 5% more effective than the other could be statistically significant (i.e. the two are shown to be different in the statistical analysis) but not clinically significant; in other words, the 5% might, in reality, be so small that it would not be noticed, or not valued. This is why it is not enough to know that a result is statistically significant. You also need to look at the size of the difference and decide whether it is clinically important.

It is important to look for the data on adverse effects (harms) in a trial because you would probably not want to use a treatment if the benefits were outweighed by harms. However, clinical trials will only usually pick up the most common adverse effects. Randomised controlled trials are not usually big enough or long-lasting enough to detect effects that are rare or that only arise after long-term use.

It is easy to be misled by passively reading a clinical trial report from beginning to end, simply receiving the messages that the authors want to convey. Instead:

- Focus on what you need to know, not what the author measured.
- Be open-minded: clear your mind of any preconceptions you have about the treatment.
- Skim or simply ignore the abstract, the conclusion and the discussion.
- Focus on the methods and results.
- Consider whether the patients are like the ones you treat.
- Look for obvious sources of bias, such as lack of randomisation or blinding.
- Look for the data to answer your questions.
- Check that results are reported for all the outcomes specified in the methods.
- Think about the natural history of the disease. How long would a trial need to last to capture the ups and downs of a disease such as atopic dermatitis or epilepsy, for example?
- Remember to look for data on harms (adverse effects).

**Further reading/sources of information**

RCVS Knowledge EBVM Toolkit. Controlled Trial Checklist.

Tests for equivalence or non-inferiority - why? Drug Ther Bull 2008; 46: 55-6.

**For examples of new drug evaluations see any of these reviews:**


**Goal of activity:** Update knowledge; to help improve critical appraisal skills.

**Authors/disclosures:** *Veterinary Prescriber* editorial team/no conflict of interest

**Specific learning objectives:** to improve knowledge and understanding of randomised controlled trials.
