
The Long Road To Approval: Acorda Experience With Ampyra Shows Success Of Novel Analysis Plan

This article was originally published in Pharmaceutical Approvals Monthly

Executive Summary

Acorda Therapeutics' challenges in seeking approval for its oral therapy Ampyra to improve walking ability in patients with multiple sclerosis were greater than most from the start: it was a novel drug for a first-of-its-kind claim – and the drug yielded variable efficacy on an unproven endpoint. What seems to have made the difference is the creative analysis plan that Acorda came up with.


CEO Ron Cohen downplays the challenges and says Acorda's success is based on good science. Acorda found an area where it believed the drug worked – it was already used off-label in specially compounded formulations – and figured out how to prove it. Its executives maintain that its studies were, at bottom, basic efficacy and safety studies. But for FDA, it was a different story, one that involved accepting a novel endpoint analyzed in a novel way.

[Editor's note: Pharmaceutical Approvals Monthly analyzed the Ampyra approval in the April and May issues. This article provides the company's perspective on the development and approval process, including its experience with FDA.]

Acorda came to the dalfampridine program mid-stream, and started out working toward a spinal cord injury indication. It later took on the MS program and, after the SCI program failed in Phase III, concentrated on that opportunity. The MS program also faced failed trials, but Acorda salvaged the program with some creative thinking. [For more details on Acorda's development program and commercial performance, see the November issue of IN VIVO. For a sample copy, please contact customer care at 800-332-2181 or [email protected].]

Sorting Patients By Individual Responses

Because off-label use of dalfampridine, previously known as fampridine or 4-AP, was widespread, Cohen admits the company had more certainty about activity than most developers of new drugs face. But when dalfampridine didn't show a statistically significant difference from placebo on a timed 25-foot walk test in its final Phase II trial, it hit Acorda hard.

There was a trend towards benefit, though, and encouraged by that positive signal, Cohen delved further into the data, collecting printouts of the results for the 200 patients in the study. He included their walking tests for each of the nine study visits: four before treatment, four on treatment and one post-treatment. In an ideal world, he says, walking performance would be similar for the first four non-drug visits – with improvements visible as soon as the drug was administered and maintained for the subsequent on-drug visits. At the two-week follow-up visit, however, the gains would disappear. That would show a clear-cut drug effect.

Cohen tried that approach to break down the data. He organized the patient records by the number of on-drug visits showing improvement. Four visits with improvement constituted a clear response, and three visits were deemed a likely drug effect, allowing for a fluke reading or a bad day for the patient. He stacked those up against the lower scores (two, one or zero on-drug visits with improved walking performance). Cohen still remembers the look on the statistician's face after she unblinded the data: "Marvel and wonder," he says. Almost all the patients in the three- and four-visit group had been on drug, and the p-value for that difference was less than 0.00001.
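For readers who want to see the mechanics, the sketch below shows one way such a by-hand sort could be coded. It assumes "improvement" means a faster timed 25-foot walk than the patient's own pre-treatment average; that criterion, the record layout and the walk times are illustrative assumptions, not Acorda's documented rule.

```python
def improved_visit_count(pre_treatment_times, on_drug_times):
    """Number of on-drug visits with a faster walk than the pre-treatment average."""
    baseline = sum(pre_treatment_times) / len(pre_treatment_times)
    return sum(1 for t in on_drug_times if t < baseline)


def sort_patients(records):
    """Split patients into a '3 or 4 improved on-drug visits' group and the rest.

    `records` maps a patient ID to a dict with hypothetical keys:
    'pre' = four pre-treatment 25-foot walk times (seconds),
    'on'  = four on-treatment walk times (seconds).
    """
    likely_responders, others = [], []
    for patient_id, visits in records.items():
        n_improved = improved_visit_count(visits["pre"], visits["on"])
        (likely_responders if n_improved >= 3 else others).append(patient_id)
    return likely_responders, others


# Toy usage with made-up walk times (seconds; lower is faster).
records = {
    "pt01": {"pre": [8.1, 8.3, 8.0, 8.2], "on": [6.9, 7.0, 7.1, 6.8]},
    "pt02": {"pre": [9.5, 9.4, 9.6, 9.3], "on": [9.5, 9.2, 9.7, 9.6]},
}
print(sort_patients(records))  # (['pt01'], ['pt02'])
```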

The informal sort worked, but for FDA, Acorda needed a more rigorous methodology. Andy Blight and Acorda's lead statistician Lawrence Marinucci created a statistical algorithm to sort the data, and they ultimately developed a statistical proof to support it, which they are preparing to publish.

Instead of succumbing to the "tyranny of the means," the responder rate analysis winnows out the subset of responders.

Acorda's methodology is a means of assessing a dataset based on rates of individual response instead of average response across the cohort. "It's a very powerful statistical tool for identifying response when the great majority of the group is not actually responding," Cohen notes. Instead of succumbing to the "tyranny of the means" – whereby a signal of effect in a subset is lost when the results are averaged out over the entire population – the responder rate analysis winnows out the subset of responders. Acorda believes it can license its method for other companies to use.

For the Ampyra program, they applied the home-grown methodology to the intent-to-treat population – the data slice FDA generally prefers. "Lo and behold, about 35% of the drug patients were responders, and about 8% of the placebo group also met that definition," with a statistically significant p-value, Cohen notes.
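As a rough illustration of the arithmetic, the sketch below compares two responder proportions with a Fisher's exact test. The counts are invented to mirror the approximate 35% and 8% rates Cohen cites; they are not the trial's actual numbers, and the choice of test is an assumption for illustration.

```python
from scipy.stats import fisher_exact

# Hypothetical counts chosen only to mirror the ~35% vs. ~8% responder rates
# quoted above; these are not the trial's actual numbers.
drug_responders, drug_n = 35, 100
placebo_responders, placebo_n = 8, 100

table = [
    [drug_responders, drug_n - drug_responders],
    [placebo_responders, placebo_n - placebo_responders],
]
odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.2g}")
```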

Acorda presented the responder finding at an end-of-Phase II meeting with FDA in July 2004, bringing along an expert clinician as a consultant. "The neurology division poured cold water on our enthusiasm," Cohen reports. Agency officials found the responder analysis interesting, but remained focused on the trial's overall failure. "It was the low point for us in the program," Cohen recalls.

But Acorda saw some faint signs of optimism. The division appeared to be generally comfortable with using a responder analysis; it described Acorda's approach as a different "flavor," but found it a valid responder analysis, Cohen says. That encouragement presented another challenge, however. While the agency accepted that the drug was showing an effect based on the novel methodology, the data didn't clarify what that effect meant to the patient.

For a novel endpoint to get past FDA, it must first be shown to be clinically meaningful to the patient. Acorda managed to do that by marrying a patient-reported walking scale measure, the MSWS-12, to the responder-rate analysis. Acorda's analysis showed that responders scored higher than non-responders on self-ambulation in the MSWS-12. "That was a gamble on our part, because we had very little data on MSWS-12, but it was the best that we could come up with in terms of validation within the study," Blight notes.
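A hedged sketch of that kind of validation step appears below: comparing patient-reported MSWS-12 change scores between walk-test responders and non-responders. The rank-sum test, the scoring direction and the scores themselves are assumptions for illustration; the article does not specify Acorda's statistical method.

```python
from scipy.stats import mannwhitneyu

# Illustrative MSWS-12 change scores only; negative values here denote
# self-reported improvement. The scoring direction and the use of a
# rank-sum test are assumptions for this sketch.
responder_msws_change = [-8.2, -6.5, -10.1, -4.3, -7.8]
nonresponder_msws_change = [-1.0, 0.5, -2.2, 1.4, -0.8]

stat, p_value = mannwhitneyu(responder_msws_change, nonresponder_msws_change,
                             alternative="two-sided")
print(f"U = {stat}, p = {p_value:.3g}")
```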

Impatient Investors Doubted Analytic Approach

Here, however, they met another roadblock: impatient investors, who were "getting to the end of their ropes" after two failed Phase III trials for spinal cord injury and the problematic Phase II MS trial, Cohen says. The venture backers, including respected funds such as MPM Capital and Forward Ventures, hesitated to commit more funds, dragging out discussions for almost a year. During that time, Acorda's management almost walked away.

Compromise saved them. Both sides agreed to form a committee, consisting of company management, VC representatives (both physicians) and an outside expert, to vet the company's prospects. The process succeeded, with the group concluding that the company's analytic approach could work. One requirement in exchange for additional financing, however, was that Acorda secure a Special Protocol Assessment with FDA. Though SPAs are common in pharma, investors don't typically use them as a bargaining chip.

Though SPAs are common in pharma, investors don't typically use them as a bargaining chip.

But the delays cost Acorda at least a year and a half of development time. Acorda wanted to run two Phase III trials in parallel to make up for lost time, but the VCs refused to fund simultaneous trials. So the trials ran sequentially. Once the first trial came back positive, "everyone was happy to fund the next trial, obviously," Cohen adds.

Figuring Out The Endpoint

As the SPAs spelled out, the primary endpoint for each of the two Phase III trials was the timed 25-foot walk test, analyzed by Acorda's responder rate methodology. The MSWS-12 was included as a means of assessing the clinical meaningfulness of the walk test results.

In a memo summarizing the review decision, FDA Office of Drug Evaluation I Director Robert Temple notes the choice of endpoint was not the "most obvious" measure. More typical, he wrote, would have been "average walking speed in the drug and placebo groups or increase in baseline on drug vs. increase on placebo." But Cohen and Blight believed such an approach would not work, because only certain patients responded to the medication.

Crafting an outcome measure to capture a functional improvement in MS is difficult because of the inherent variability of the disease, both between patients and in individuals day to day. With most drugs in most diseases, one expects to see a quantifiable improvement against a stable baseline – that a patient improved 15% from baseline, for instance. But in an MS patient, the baseline shifts unpredictably. A patient could do well – or poorly – regardless of treatment. He or she could exhibit a 20% improvement in walking ability while receiving treatment, but separating the drug's effect from the disease's variability is tough, Blight explains.

That's where Acorda's unconventional responder analysis was critical, because it identifies patients who actually respond to the drug. By comparing the periods before and after treatment with the period on treatment, it became easier to see that responders got better when they were on the drug and got worse when they came off.

The responder analysis eliminated a lot of the background noise caused by variation over time.

That eliminated a lot of the background noise caused by variation over time, Blight points out. "For a very variable condition," he says, "really honing in on the consistent improvement over time I think is a very valuable way to look at the data. It's just hard for people to think about, because they're not used to it."
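One plausible way to formalize that before/during/after comparison is sketched below: a patient counts as a responder only if walking speed on most on-drug visits beats the fastest speed recorded at any off-drug visit. This is an illustrative reading of the description above, not Acorda's published algorithm, and the speeds shown are invented.

```python
def is_responder(off_drug_speeds, on_drug_speeds, min_consistent_visits=3):
    """Illustrative consistency criterion: faster than the best off-drug visit.

    Speeds are in feet per second for the timed 25-foot walk (higher is
    better). A patient qualifies only if at least `min_consistent_visits` of
    the on-drug visits beat the fastest speed recorded at any off-drug visit
    (the pre-treatment visits plus the post-treatment follow-up).
    """
    best_off_drug = max(off_drug_speeds)
    consistent = sum(1 for speed in on_drug_speeds if speed > best_off_drug)
    return consistent >= min_consistent_visits


# Toy example: a patient who walks consistently faster while on drug.
print(is_responder(off_drug_speeds=[2.9, 3.0, 2.8, 3.1, 3.0],
                   on_drug_speeds=[3.4, 3.5, 3.2, 3.6]))  # True
```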

Blight explains that the low rate of responders in the placebo arm (8% in one of the pivotal trials and 9% in the other) shows that the responder analysis is getting at a real treatment effect. If the data are examined simply by the magnitude of improvement – a threshold of 15% improvement, for example – then nearly double that share of placebo patients, 15%-16%, reached the mark.

Looking at an improvement in the walking speed itself would also be a difficult measure, "because statistically it looked like you would need a fairly large study to show a treatment effect on the walking speed," Blight notes.

The company also submitted as secondary endpoints more typical measures of average walking speed improvement, leg strength and spasticity, and FDA review documents indicate the agency was reassured by having that traditional evidence to back up the primary analysis (Also see "Acorda's Novel Primary Endpoint For Ampyra Was Made Possible By A Supporting Scaffolding Of Secondary Analyses" - Pink Sheet, 1 Apr, 2010.).

The agency also found reassurance in an alternate analysis of the responder rate data, which is included in the FDA-approved label as a cumulative distribution showing how many patients achieved various levels of improvement in walking speed. Acorda didn't plan on submitting that data, but Temple responded positively to its presentation at the advisory committee review, and it showed up in the label.

His review memo suggests that in that data, he saw a clear demonstration of the drug's effect. Temple cites the difference between Ampyra and placebo in the patients that had a 30% increase in walking speed: 15%-20% versus 3%. "That is a minority of patients, of course, but it would seem to be an obvious benefit," he concludes.
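The sketch below shows the shape of that cumulative-distribution calculation: for each improvement threshold, the share of patients in each arm whose change in walking speed meets or exceeds it. The percent-change values are invented purely to show the computation, not taken from the trials or the label.

```python
import numpy as np

def cumulative_response(pct_changes, thresholds):
    """Fraction of patients whose improvement meets or exceeds each threshold."""
    changes = np.asarray(pct_changes, dtype=float)
    return {t: float(np.mean(changes >= t)) for t in thresholds}

thresholds = [10, 20, 30, 40]  # percent improvement in walking speed

# Invented percent-change values, used only to show the calculation.
drug_pct_change = [5, 12, 18, 25, 31, 35, 42, -3, 8, 27]
placebo_pct_change = [2, -1, 4, 9, 11, 6, 3, 15, -5, 1]

print("drug   :", cumulative_response(drug_pct_change, thresholds))
print("placebo:", cumulative_response(placebo_pct_change, thresholds))
```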

Responder Analysis = Relevant To Clinical Practice

The responder analysis may actually be a more modern way of measuring drug effect. While traditional endpoints can be statistically and scientifically satisfying from the standpoint of quantification, they may not be the most relevant to clinical practice.

"FDA likes responder approaches because it gets you away from this concept of measuring a small statistically significant change that doesn't really mean anything" in the clinic, Blight observes. "You don't care to the infinitesimal place what is the average change, you want to know how many people saw benefit that was actually worth having," he says.

Temple and the agency's neurology division, among others, certainly agree. "Responder analysis may be particularly useful where response is confined to a subset," Temple wrote in his review memo.

He has since advanced that argument in other disease states. At a March 2 meeting on clinical trials for hypertension, he discussed how the distribution of positive results for individual patients can be helpful in discerning a drug's effect, especially when there isn't a mean benefit across the entire population – e.g. when there is a small subset of patients who have a strong response. He pointed to Ampyra as an example where the distribution of results supported efficacy in the absence of an average benefit (Also see "FDA's Temple Discusses Measuring Efficacy When Mean Effect Is Limited" - Pink Sheet, 1 Mar, 2010.).

Working with FDA to gain acceptance and approval, however, appears easier in hindsight. "It wasn't as much fun to live through," Cohen recounts.

Surprises From FDA

From the beginning, FDA took a critical approach to the dataset. In particular, there was "considerable discussion of the meaningfulness of the study endpoint," Temple reported.

The primary clinical reviewer argued that the responder rate measurement, with its unusual sequential analysis, was an intermediate variable rather than a true endpoint, and that it allowed statistical significance without clear clinical significance. "The responder variable ignores the importance of the extent of improvement in walking speed," the reviewer, Kachikwu Illoh, said.

That meant that a small benefit in a relatively large number of patients on the active drug could result in a positive trial, "even when the benefit is not clinically significant or meaningful for the patient." Illoh concluded that the analysis was not appropriate to support approval, and recommended a "complete response" letter.

Temple noted that the speed differences were numerically small, and FDA indeed brought the question of the meaningfulness of the effect to an advisory committee for interpretation by experts in the field.

Leading up to the panel review, Acorda was confident. Its leadership figured any uncertainties had to do with how to craft labeling, and potential conditions around use. But that changed when they saw the briefing documents. "We were surprised," Cohen says. "It seemed to come completely out of left field and it did not in our minds at all reflect what we thought we had understood from our ongoing dialog with FDA, up to and including the SPA agreements."

In a span of two weeks, the company had to rejigger its strategy for the meeting, which it had worked on for five months. Even worse, Illoh's negative review introduced new doubt and fear as to whether Ampyra would be cleared at all.

However, at the meeting, Temple and Division of Neurology Products Director Russell Katz made clear that they stood by the SPA agreement and the company's analyses, and that the briefing document should be considered an alternate way of looking at the data. Much of the meeting focused on the clinical meaningfulness of the signal, which the MS practitioners helped address. They led the panel to an overwhelming 12-1 vote that dalfampridine was effective at improving mobility (Also see "Acorda's Risks With Fampridine Development Look Likely To Pay Off" - Pink Sheet, 26 Oct, 2009.).

The committee's support and acceptance of the benefit as clinically meaningful helped bring FDA to its decision to approve the drug on Jan. 22, 2010.

Though Acorda's gamble on using a primary endpoint based on an unusual responder analysis was ultimately successful, it was still a risky play. Cohen attributes their success to doggedly doing good science and continual communication with important constituents, including the MS community, investors and FDA (see sidebar, "Ampyra Approval: Lessons Learned").

Agency review documents reveal discord within the review team, though the negative recommendation was overturned by higher-level officials. It took a great deal of willingness and flexibility from FDA to appreciate the unique evidence and accept the drug's effect. In a less debilitating disease or a condition without unmet medical need, it might not be a good bet.

Aiming for a novel treatment benefit may present an easier target, but it's important to address the clinical meaningfulness question. Sponsors should be prepared with a range of evidence. It helped Acorda that it didn't rely on the primary findings alone; it had more traditional endpoints as secondary evidence. And companies should be prepared to defend their choices. "If you're going after a novel endpoint, the onus is on you to demonstrate that what you're doing passes muster," Cohen notes.

By Mary Jo Laffler
