Wednesday, February 23, 2011
Friday, October 01, 2010
Remember the hype around the serotonin-norepinephrine reuptake inhibitors (SNRIs)? Effexor and Cymbalta impact both serotonin and norepinephrine, so they should be more effective than SSRIs in treating depression, right? Mind you, that’s not a high bar to clear - it’s not like SSRIs are much better than placebo. So get the hell outta the way, Prozac and Paxil, because Cymbalta and Effexor will unleash their incredible efficacy onto the world of psychiatry. Doubt me? Read this 2009 article regarding the wonders of Pristiq (son of Effexor) and learn about how “The emergence of the selective serotonin reuptake inhibitor (SSRI) and serotonin norepinephrine reuptake inhibitors (SNRI) antidepressants has improved the treatment of MDD.” Or this press release from Wyeth. Or Dr. Danny Carlat’s experience selling Effexor to his peers. I don’t think anyone who has followed drug marketing would deny that both Wyeth and Lilly tried to pimp Effexor and Cymbalta as working better because of their SNRI properties.
But is that actually true? A team of German researchers examined the data and concluded that neither Effexor nor Cymbalta really work better than SSRIs. They actually found a small advantage for Effexor over SSRIs for treatment response (but not depression remission), but they also found that the manufacturer was hiding studies from them (and the rest of the world). I haven’t said this for a while, but enter Charles Nemeroff. To understand the research by the Germans, we first need to recall that a 2008 study (lead author: Nemeroff) found
...the pooled effect size across all comparisons of venlafaxine versus SSRIs reflected an average difference in remission rates of 5.9%, which reflected a NNT of 17 (1/.059), that is, one would expect to treat approximately 17 patients with venlafaxine to see one more success than if all had been treated with another SSRI. Although this difference was reliable and would be important if applied to populations of depressed patients, it is also true that it is modest and might not be noticed by busy clinicians in everyday practice. Nonetheless, an NNT of 17 may be of public health relevance given the large number of patients treated for depression and the significant burden of illness associated with this disorder. [my emphasis]
As I wrote then, the benefit to public health claim is ridiculous. To understand the reasons why this is so laughable, please check out my prior post on the topic. This meta-analysis included a bunch of data from Wyeth that was previously unpublished...
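The NNT arithmetic in the quote above is easy to verify - the number needed to treat is just the reciprocal of the absolute difference in remission rates. A quick sketch, using the 5.9% figure from the quoted study:

```python
# Number needed to treat (NNT): how many patients must receive the new
# drug (rather than the comparator) for one additional treatment success.
# NNT = 1 / absolute risk difference.

def nnt(risk_difference: float) -> float:
    """Return the number needed to treat for a given absolute risk difference."""
    return 1.0 / risk_difference

# The Nemeroff meta-analysis reported a 5.9% difference in remission rates.
print(round(nnt(0.059)))  # ~17: treat about 17 patients to see one extra remission
```

Whether an NNT of 17 is "of public health relevance" is, of course, the question at issue.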
Which leads to the freshly published meta-analysis on how Effexor compares to SSRIs. The German researchers requested unpublished data from Wyeth and only got some of it - you’d think that just maybe Wyeth sent them the “good news” data and maybe held back on some of the “bad news” data. So when an ever-so-small benefit emerged for Effexor (a 5% higher treatment response rate), well, call me crazy, but I ignored it. We’re not playing with a full dataset because the manufacturer wants to keep some of it hidden, so shame on Wyeth and let’s look at Effexor with a little bit of suspicion. So Effexor vs. SSRIs - no difference. Except that more people drop out of clinical trials on Effexor due to side effects compared to SSRIs (about 3% more). So even if you believe that Wyeth’s hidden data really doesn’t impact these findings, we’re left with a very small advantage for Effexor that is probably negated by its slightly higher dropout rates.
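To put the "small advantage, slightly more dropouts" trade-off in perspective, here's a back-of-the-envelope comparison. The 5% response advantage and 3% extra dropout rate are the figures described above; treating them as clean point estimates is my simplification:

```python
# Rough trade-off: number needed to treat (NNT) for one extra responder
# versus number needed to harm (NNH) for one extra side-effect dropout.

response_advantage = 0.05   # ~5% higher response rate for Effexor vs. SSRIs
extra_dropouts = 0.03       # ~3% more dropouts due to adverse events

nnt = 1 / response_advantage   # treat 20 patients to gain one extra responder
nnh = 1 / extra_dropouts       # treat ~33 patients to lose one extra patient

print(f"NNT = {nnt:.0f}, NNH = {nnh:.0f}")
# For every 20 patients put on Effexor rather than an SSRI you gain
# roughly one responder; for every ~33 you lose one to side effects.
```

Numbers of the same order of magnitude, in other words - which is the whole point about the advantage being "probably negated."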
Cymbalta. It had a 3% higher dropout rate due to adverse events and the same efficacy as SSRIs. So nothing to write home about, except that it costs a boatload more than generic SSRIs and is harder to tolerate. But Cymbalta has been marketed to the gills and is clearing $3 billion a year in sales. Hey, this is the company that marketed Zyprexa for dementia (oops), and for, well, lots of other stuff (1, 2). So it’s not surprising at all that they can take a mediocre antidepressant like Cymbalta and turn it into a big moneymaker - the wonders of a good marketing department. But Depression Hurts and Cymbalta is a painkiller. Well, that’s fine and dandy until you actually look at the data which show Cymbalta doesn’t do much for pain in depression.
It’s time to get over the hype surrounding SNRIs. The next “advance” in antidepressants, well, who knows what it will be - but let’s hope it’s something a little more substantial than SNRIs. But I’m not hopeful. And no, I don’t want to hear anything more about agomelatine.
I know it’s been a long time between posts. So pardon me if my writing is more awful than usual. And it doesn’t mean I will be posting regularly. Thanks to the multiple readers who sent me a copy of this article.
Citation to new meta-analysis of Effexor and Cymbalta:
Schueler, Y., Koesters, M., Wieseler, B., Grouven, U., Kromp, M., Kerekes, M., Kreis, J., Kaiser, T., Becker, T., & Weinmann, S. (2010). A systematic review of duloxetine and venlafaxine in major depression, including unpublished data Acta Psychiatrica Scandinavica DOI: 10.1111/j.1600-0447.2010.01599.x
Friday, May 14, 2010
These folks at Lilly must think we are exceptionally stupid. As in can't tie our own shoes. A study in the Journal of Psychiatric Research recently found that their experimental antidepressant LY2216684 was no better than placebo. Here are a couple of quotes from the abstract:
LY2216684 did not show statistically significant improvement from baseline compared to placebo in the primary analysis of the Hamilton depression rating scale (HAM-D17) total score. Escitalopram demonstrated significant improvement compared to placebo on the HAM-D17 total score, suggesting adequate assay sensitivity.

On the primary outcome measure, the experimental drug failed whereas Lexapro worked to some extent. I know what you're thinking - "the sample size was probably too small to find a significant effect." Um, you're wrong. How about 269 people on the Lilly drug, 138 on placebo, and 62 on Lexapro?
But wait, here comes the good news...
Both LY2216684 and escitalopram showed statistically significant improvement from baseline on the patient-rated QIDS-SR total score compared to placebo... The results of this initial investigation of LY2216684’s efficacy suggest that it may have antidepressant potential.

The good news for Lilly is that most people who claim to "read journal articles" really just browse the abstract without actually looking at the full text of the paper. For the select few who have nothing better to do than read Lilly propaganda, take a look at Table 2. A total of 12 secondary outcome measures are listed. The Lilly drug beat placebo on... ONE of them. Lilly doesn't say much about how much better their drug was than placebo on the QIDS-SR measure besides throwing around that often meaningless term of "statistically significant." People on the drug improved by 10.2 points whereas placebo patients improved by 8.3 points. So about a 20% difference. If you bother to calculate an effect size, it is d = .24, which is quite small and clinically insignificant. So on the ONE measure where the drug was better than placebo, it was by a small margin, and it missed the mark on 11 other secondary measures as well as on the primary outcome measure. But "it may have antidepressant potential." Hell yes, I've never been so excited about a new drug.
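For the curious, that effect size is just the difference in mean improvement divided by the pooled standard deviation (Cohen's d). The paper's SD isn't quoted here, so the value below is back-calculated from the reported d of .24 - a sketch of the arithmetic, not the authors' computation:

```python
# Cohen's d for the QIDS-SR result: difference in mean improvement
# divided by the pooled standard deviation.

def cohens_d(mean_a: float, mean_b: float, pooled_sd: float) -> float:
    return (mean_a - mean_b) / pooled_sd

drug_improvement = 10.2     # QIDS-SR improvement, LY2216684 group
placebo_improvement = 8.3   # QIDS-SR improvement, placebo group
pooled_sd = 7.9             # ASSUMED: back-calculated so that d comes out near .24

d = cohens_d(drug_improvement, placebo_improvement, pooled_sd)
print(round(d, 2))  # ~0.24 - conventionally a "small" effect
```

A d of .2 is the textbook floor for a "small" effect, which is why a statistically significant result here is not the same thing as a clinically meaningful one.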
By the way, Lilly is apparently trying this wonder drug out in at least five trials. The journal in which this article appeared has published other dubious Eli Lilly research in the past. The editorial review process is clearly working wonders over at the Journal of Psychiatric Research. Sad, really. The journal publishes some really good work, but then runs this kind of junk as well.
Depression Self-Report Sidebar: The self-reported measure on which the drug had an advantage, the Quick Inventory of Depressive Symptoms (QIDS) - it's really awesome, according to Lilly. Remember, it's the only measure on which their experimental drug beat placebo.
What does Bristol-Myers Squibb think? In three trials of Abilify for depression, self-reports of depression were unfavorable. So the publications for these studies made sure to downplay these depression self-reports by saying that these measures were not sensitive, that they weren't picking up improvements in depression.
So if a self-report provided positive results, then BAM, it's an awesome measure of depression. But if it provided negative results, then it's a horrendously inaccurate measure and should never have been used in the first place.
Citation below. Yes, one of the authors' last names is Kielbasa.
Dubé, S., Dellva, M., Jones, M., Kielbasa, W., Padich, R., Saha, A., & Rao, P. (2010). A study of the effects of LY2216684, a selective norepinephrine reuptake inhibitor, in the treatment of major depression Journal of Psychiatric Research, 44 (6), 356-363 DOI: 10.1016/j.jpsychires.2009.09.013
Friday, April 02, 2010
Tuesday, March 16, 2010
Holy cow, I've been nominated for an award?!? Under the category of best health blog, with eight other nominees. Voting is already closed, and I can pretty much guarantee I didn't win, but it's an honor to have been nominated.
By now, everyone who has been paying attention should know that a journal article which lists "editorial support" is an article that was ghostwritten. Yet the average reader of these articles is apparently uninformed enough to not care. Why else would so many articles get published which feature "editorial support provided by [insert name of ghostwriter here]." One of my favorite journals, under the "so bad, it's good" category, is the Primary Care Companion to the Journal of Clinical Psychiatry. Good articles certainly make their way into the journal, perhaps by accident, but the journal can always be counted on to provide a steady supply of utter garbage.
Here's the acknowledgements section from one recent piece in the journal: "Editorial support was provided by George Rogan, MSc, Phase Five Communications Inc, New York, New York. Mr. Rogan reports no other financial affiliations relevant to the subject of this article." And in case you're wondering, "Funding for editorial support was provided by Bristol-Myers Squibb." If you've somehow guessed that this is an advertorial for Abilify, you win. Other ghostwritten pieces of fluff paid for by BMS include an article discussing the safety profile of Abilify in depression. It states that "In conclusion, this post hoc analysis extends previous findings demonstrating that aripiprazole is safe and generally well tolerated as an augmentation strategy to standard ADT in patients with MDD with a history of an inadequate response to antidepressant medication." But Abilify caused akathisia in a quarter of patients - I think that's a problem.
But wait... there's more. An article based on data from two trials, which showed (allegedly) that Seroquel improves anxiety in patients with bipolar disorder. This piece also acknowledges that it was ghostwritten. And we know that AstraZeneca, manufacturer of Seroquel, has cooked the books on Seroquel in the past. Feel free to look through the journal every month and have a giggle at some of the ridiculous pieces that make their way into print.
You can get your continuing medical education (CME) from the Primary Care Companion as well. One particularly awesome piece of medical wisdom is the CME article cited at the end of this post, in which a panel of experts tackles "partial response to depression treatment."

Back to the CME... Thase starts off by stating that only a third of patients achieve remission of depressive symptoms during treatment. Given that Abilify is being marketed for treatment-resistant depression, this is a perfect way to start off this piece of "education."
In particular, "Relying on the global statement “I’m definitely better” from the patient overlooks persistent, minor, or residual symptoms. Dr Thase recommended using a standardized symptom assessment measure and keeping track of the patient’s levels of symptom burden." So even if the patient says he or she is much better, don't believe it. Have the patient fill out rating scales and if any symptoms at any level are present, keep treating. In Thase's words, "If the current treatment is well tolerated and the individual has made significant symptom improvement but is still experiencing residual symptoms, then it may be necessary to adjust the treatment dose, add another medication, or combine pharmacotherapy and psychotherapy." Note that adding psychotherapy comes after adding another medication.
Then a series of other objective, expert psychiatrists chime in. Dr. Gaynes offers his wisdom, which includes "Dr Gaynes concluded that incomplete remission requires aggressive identification and management." Don't be afraid - be aggressive. The unspoken message: Hey, using an antipsychotic like Abilify for depression may seem freakin' crazy. But don't worry, you need to be aggressive. Dr. Trivedi then comments about using rating scales to measure side effects. I don't have much to say about his section, but things get worse momentarily...
Dr. Papakostas then checks in. "A meta-analysis of randomized, double-blind, placebo controlled studies found that augmentation of various antidepressants with the atypical antipsychotic agents olanzapine, risperidone, and quetiapine was more efficacious than adjunctive placebo therapy. In addition, Dr Papakostas noted that the atypical antipsychotic aripiprazole was recently approved by the US Food and Drug Administration (FDA) for use as an adjunctive therapy to antidepressants in MDD. Augmenting with atypical antipsychotics has so far been the best studied strategy for managing treatment-resistant depression, said Dr Papakostas." Dr. P was the coauthor of a meta-analysis that provided "considerable evidence" regarding the wonders of antipsychotic therapy for depression. The only problem was that the analysis actually did not find convincing evidence that the drugs were particularly effective, which I discussed in December 2009.
Next comes Dr. Shelton. Time to be aggressive, again: "Thus, said Dr Shelton, the long-term management of depression should be viewed in the context of acute treatment and the need for early aggressive management to get the patient as well as possible." Be aggressive by adding Abilify to the antidepressant regimen. If not, your patient won't achieve full remission and will suffer needlessly... "Dr Shelton advised clinicians to be aggressive in treatment and stay active over time, asking themselves if everything has honestly been done to help the patient." Psychotherapy is given a brief mention in this section, but let's face it -- most physicians think of "be aggressive" as upping the dosage and/or adding medications - not as "let's be aggressive by adding psychotherapy."
Then there's the exam at the end. Write up your answers, mail them in, and get your medical education credit. Here's one of the questions...
3. Scores on both patient- and clinician-rated scales found that Ms B is still experiencing residual depressive symptoms. You optimize her current SSRI dose, which produces some improvement. She has not reported any problems with side effects. What course of action to improve her outcome has the most comprehensive efficacy data?
a. Increase the dose of her current SSRI again
b. Augment her current SSRI with another SSRI
c. Switch her to a serotonin-norepinephrine reuptake inhibitor
d. Augment her current SSRI with an atypical antipsychotic
If you guessed that D is the correct answer, you're one step closer to CME credit. And one step closer to writing a prescription for Abilify despite the fact that it is as likely to induce akathisia as to induce remission of depressive symptoms. Or that its advantage over placebo is small on several measures and nonexistent on a patient-rated measure of depression. But D is still the "correct" answer.
The offending educational piece is cited below:
Thase, M., Gaynes, B., Papakostas, G., Shelton, R., & Trivedi, M. (2009). Tackling Partial Response to Depression Treatment The Primary Care Companion to The Journal of Clinical Psychiatry, 11 (4), 155-162 DOI: 10.4088/PCC.8133ah3c
Wednesday, March 03, 2010
P & G, Actonel, and trying to effectively manage data to best suit the needs of Actonel's marketing. Hey, wait, this sounds familiar. You may recall the case of Aubrey Blumsohn - a researcher at the same university investigating the same drug, followed by all sorts of strange happenings. Read more on the Blumsohn story here.
Wednesday, February 10, 2010
A. The disorder is characterized by severe recurrent temper outbursts in response to common stressors.
1. The temper outbursts are manifest verbally and/or behaviorally, such as in the form of verbal rages, or physical aggression towards people or property.
2. The reaction is grossly out of proportion in intensity or duration to the situation or provocation.
3. The responses are inconsistent with developmental level.
B. Frequency: The temper outbursts occur, on average, three or more times per week.
C. Mood between temper outbursts:
1. Nearly every day, the mood between temper outbursts is persistently negative (irritable, angry, and/or sad).
2. The negative mood is observable by others (e.g., parents, teachers, peers).
D. Duration: Criteria A-C have been present for at least 12 months. Throughout that time, the person has never been without the symptoms of Criteria A-C for more than 3 months at a time.
E. The temper outbursts and/or negative mood are present in at least two settings (at home, at school, or with peers) and must be severe in at least one setting.
F. Chronological age is at least 6 years (or equivalent developmental level).
G. The onset is before age 10 years.
H. In the past year, there has never been a distinct period lasting more than one day during which abnormally elevated or expansive mood was present most of the day for most days, and the abnormally elevated or expansive mood was accompanied by the onset, or worsening, of three of the “B” criteria of mania (i.e., grandiosity or inflated self esteem, decreased need for sleep, pressured speech, flight of ideas, distractibility, increase in goal directed activity, or excessive involvement in activities with a high potential for painful consequences; see pp. XX). Abnormally elevated mood should be differentiated from developmentally appropriate mood elevation, such as occurs in the context of a highly positive event or its anticipation.
I. The behaviors do not occur exclusively during the course of a Psychotic or Mood Disorder (e.g., Major Depressive Disorder, Dysthymic Disorder, Bipolar Disorder) and are not better accounted for by another mental disorder (e.g., Pervasive Developmental Disorder, post-traumatic stress disorder, separation anxiety disorder). (Note: This diagnosis can co-exist with Oppositional Defiant Disorder, ADHD, Conduct Disorder, and Substance Use Disorders.) The symptoms are not due to the direct physiological effects of a drug of abuse, or to a general medical or neurological condition.
I've not given this a lot of thought yet. The committee that examined the topic has some discussion of T-Triple D/bipolar here and here. The committee takes a couple of digs at the child bipolar diagnosis. So if this new disorder is adopted, we're going to have yet another name for children who behave badly. Fortunately, the criteria appear to require much worse behavior than what has been passing for "bipolar" according to some child psychiatrists. The diagnostic threshold is higher and should theoretically lead to fewer kids being unnecessarily diagnosed. But even if the current criteria are adopted without any changes - look for a movement to diagnose "subthreshold" cases of T-DDD, as untreated subthreshold T-DDD will be found to cause untold psychological and physical damages across the world. Damages that can only be mitigated through aggressive treatment using [insert name of latest patented tranquilizer here]. So whatever antipsychotics or "mood stabilizers" are hot in 2013 when the DSM-V is released... they will be the "cure" for T-DDD or bipolar or whatever the hell we decide to label kids with behavior problems.
That's my first impression. This is definitely going to be a hot-button topic. There is apparently some mechanism to send comments to the DSM-V folks, since this is only a draft version - feel free to comment here or send your ideas to the DSM-V posse.
Tuesday, January 05, 2010
- Mild to moderate depression: Effect size of d = .11, which is tiny (and was not statistically significant)
- Severe depression: Effect size of d = .17, which is pretty darn small (and not statistically significant)
- Very severe depression: Effect size of d = .47, which is moderate.
Hmmmm. Not looking so hot. Of course, anyone who has paid attention to the clinical trial literature on antidepressants over the past 10 years or so already knew this. But now it's in JAMA, so a wider audience may now pay attention. Or ignore it. Good marketing usually beats science, so maybe this won't make any difference.
Antidepressants for all but very severe depression: All the benefits of placebo plus the added bonus of side effects. Sign me up! To quote the authors: "What makes our findings surprising is the high level of depression symptom severity that appears to be required for clinically meaningful drug/placebo differences to emerge, particularly given the evidence that the majority of patients receiving ADM in clinical practice present with scores below these levels." In other words, most people who receive antidepressants would likely have done just as well on placebo (without the side effects).
A few other posts on the topic:
- The long-lasting placebo effect
- Sexual side effects of SSRIs
- Paxil: How to lie
- The much-vaunted public health benefits of antidepressants
- Antidepressants offer weak efficacy for all but most severe depression
- Hiding negative data on antidepressants
- Suicidal tendencies? Nah, not here
Wednesday, December 16, 2009
I've been wanting to write about this for months. Here goes. We know that antipsychotics are the new panacea for all things mental health-related, including depression (1, 2, 3). But critics kept pointing to a pesky lack of evidence that such treatments actually worked. Bristol-Myers Squibb, manufacturer of Abilify, has been running a disinformation campaign in medical journals to tout its drug as an antidepressant. Their attempts to paint a positive picture of Abilify's antidepressant properties and its allegedly fantastic safety/tolerability profile have been simultaneously tragic and amusing (1, 2, 3).
We're now moving on to something bigger... It ain't just Abilify, folks. It's all the atypicals. They are all antidepressants. According to the authors of a recent meta-analysis, for atypical antipsychotics: "At present, this body of evidence is considerably larger than that for any other augmentation strategy in the treatment of major depressive disorder." In other words, if you are not prescribing atypicals for your patients who don't show adequate response to antidepressants, you are not practicing evidence-based medicine. You are a [bleeping] cowboy who is willfully disregarding science. You are denying your patients the best possible treatment. The authors don't actually say any of those things, but those are the implications. If the evidence for using antipsychotics is "considerably larger" than the evidence for anything else, then the implications are clear-cut. And this is exactly how this study will be cited. Salespeople, from drug reps to academic psychiatrists, to practitioners looking to earn a few thousand extra bucks on the side through pharma speaking gigs, will discuss this study as if it were a landmark finding.
Response and Remission: But the "evidence" is not all that convincing. Here's why... The authors pooled together the results of 16 randomized controlled trials. In these studies, patients had failed to respond adequately (using various definitions) to an antidepressant. Patients were then assigned to receive either an atypical antipsychotic or a placebo in addition to their antidepressant. Outcomes were then tabulated somewhere between 4 and 12 weeks later. The results seem clear cut -- if your brain is turned to "off" -- the response rates were 44% for atypicals compared to 30% for placebo. The remission rates were 31% for atypicals and 17% for placebo. The advantage for atypicals is statistically significant. Well, there you have it. Done deal. Ask your doctor about Abilify/Zyprexa/Seroquel today...
But the most important thing in a treatment outcome study is... the outcomes. The authors of the meta-analysis did not bother to actually measure change in scores on rating scales. Instead, they only used response and remission rates. There is absolutely no good reason for doing this. It's potentially quite misleading. Doctors like remission and response rates because they provide the illusion that we are measuring depression exactly. A "responder" got a lot better and is functioning reasonably well whereas a "non-responder" is in bed 12 hours a day while spending the rest of her time watching the E! Network, eating Bon-Bons, and sobbing constantly. But it's not nearly that scientific. A "responder" is usually defined as someone who got 50% better on his or her depression rating score during the study period. So Bob's depression rating score improved by 52% (he's a responder), but Amy's score only improved by 48%, so she's a nonresponder. Is this 4% difference really meaningful?
Let's look at the following dataset for 20 participants in a fictional study...
Improvements in depression over course of 10 week study
Using a 50% improvement to determine if a patient is a "responder", we get a 60% response rate on drug and a 30% response rate on placebo. Lazy logic says: Oooh -- the drug is twice as effective as placebo. But if we take the average for each group, we get an average improvement of 42.7% on the drug compared to 40.6% on placebo. See the problem with response and remission rates? Similar arguments have been made by smarter people than myself.
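To make the fictional study concrete: the individual percentages below are invented for illustration, chosen only so that the summary numbers (60% vs. 30% response, 42.7% vs. 40.6% mean improvement) match the ones discussed above:

```python
# Hypothetical % improvement in depression scores for 20 patients.
# Drug group: six patients just clear the 50% "responder" cutoff, four miss
# it badly. Placebo group: three clear it, seven land just underneath it.
drug    = [51, 52, 53, 55, 50, 51, 30, 28, 25, 32]
placebo = [51, 52, 55, 35, 36, 37, 38, 32, 30, 40]

def response_rate(group, cutoff=50):
    """Fraction of patients whose improvement meets the responder cutoff."""
    return sum(x >= cutoff for x in group) / len(group)

def mean(group):
    return sum(group) / len(group)

print(response_rate(drug), response_rate(placebo))  # 0.6 0.3 - "twice as effective"!
print(mean(drug), mean(placebo))                    # 42.7 40.6 - barely different
```

Same patients, same data: dichotomize at an arbitrary cutoff and the drug looks twice as good; look at actual improvement and the two groups are nearly identical.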
Putting outcomes into convenient little categories makes good sense when the categories themselves make sense - events like having a heart attack, getting pregnant, or dying. If the death rate on a drug is 4% compared to 2% on a placebo, then the drug really reduced death by 50%. But if the "remission rate" or "response rate" for depression is 40% on drug compared to 20% on placebo, that does not mean the drug is twice as effective as placebo in treating depression. If you need to score a 7 or below on a depression rating scale to be "in remission", but you score an 8, are you really much worse off than the person who scored a 7?
Am I saying that the drugs really just squeaked by placebo in these studies? Well, I've read the Abilify studies and posted on them previously - in those studies, Abilify barely beat the placebo. And in the opinion of the patients themselves, Abilify didn't beat placebo at all. And the studies were designed to benefit Abilify, not to actually see if the drug worked. As I noted previously...
Patients were initially assigned to receive an antidepressant plus a placebo for eight weeks. Those who failed to respond to treatment were assigned to Abilify + antidepressant or placebo + antidepressant. Those who responded during the initial 8 weeks were then eliminated from the study. So we've already established that antidepressant + placebo didn't work for these people -- yet they were then assigned to treatment for 6 weeks with the same treatment (!) and compared to those who were assigned antidepressant + Abilify. So the antidepressant + placebo group started at a huge disadvantage because it was already established that they did not respond well to such a treatment regimen. No wonder Abilify came out on top (albeit by a modest margin).

I've not read the other antipsychotics for depression studies. I'll even give them the benefit of the doubt and assume they were not designed in the same biased manner as the Abilify trials. It is, however, worth noting that the "benefit" of Abilify, in terms of response and remission rates compared to placebo, was about the same as for the other atypicals. Which leads me to think that the other atypicals probably show similar marginal benefits for depression.
Here's an analogy. A group of 100 students is assigned to be tutored by Tutor A regarding math. The students are all tutored for 8 weeks. The 50 students whose math skills improve are sent on their merry way. That leaves 50 students who did not improve under Tutor A's tutelage. So Tutor B comes along to tutor 25 of these students, while Tutor A sticks with 25 of them. Tutor B's students do somewhat better than Tutor A's students on a math test 6 weeks later. Is Tutor B better than tutor A? Not really a fair comparison between Tutor A and Tutor B, is it?
But now, based solely on potentially quite misleading response and remission rates, an article appears in the American Journal of Psychiatry - a piece that has the potential to ramp up the prescribing of antipsychotics for depression to an even more ridiculous level. Let the good times roll.
Source of ironclad evidence that atypical antipsychotics are antidepressants (until you actually read the paper):
Nelson, J., & Papakostas, G. (2009). Atypical Antipsychotic Augmentation in Major Depressive Disorder: A Meta-Analysis of Placebo-Controlled Randomized Trials American Journal of Psychiatry, 166 (9), 980-991 DOI: 10.1176/appi.ajp.2009.09030312
Friday, October 30, 2009
Apparently, the FDA will approve just about anything as an antidepressant. Despite patients indicating that they don't perceive Abilify to work as an antidepressant, the FDA approved it, likely leading to tens of thousands of Americans being able to enjoy a taste of akathisia while getting all the psychological benefits of a placebo. Good work, FDA. The shift of antipsychotics into antidepressants has been documented in many places and is, ironically, very depressing (1, 2, 3, 4).
The FDA's "anything goes" attitude regarding antidepressants apparently extends to mediocre medical devices. In 2007, a paper in Biological Psychiatry presented results from a large trial comparing transcranial magnetic stimulation (TMS) to sham TMS. The article concluded that the treatment was a fantastic option for depression. Well, close to that anyway. The authors actually wrote that "Transcranial magnetic stimulation was effective in treating major depression with minimal side effects reported. It offers clinicians a novel alternative for the treatment of this disorder."
Before all of us poor depressed souls get in line for some sweet magnetic stimulation, maybe we should, like, look at the evidence. On the primary measure of outcome, the Montgomery-Asberg Depression Rating Scale, the results weren't quite statistically significant. So the sponsor tried to convince the FDA Neurological Devices Panel that the secondary measures showed super-impressive results. The problem: They didn't. The FDA review panel thought a few things (as can be seen in its entirety here):
- The Panel’s consensus was that the efficacy was not established; some stated that the device’s effectiveness was “small,” “borderline,” “marginal” and “of questionable clinical significance.” The Study 01 endpoint with a p value of 0.057 per se was not considered a fatal flaw in the study analysis. The Panel did not believe that clinical significance was demonstrated with these results.
- In general, the panel believed that the analyses of the secondary effectiveness endpoints did not contribute significant information to help establish the effectiveness of the device.
- The Panel agreed that unblinding was greater in the active group, and considering the magnitude of the effect size, it may have influenced the study results. (35.8% of people receiving TMS reported pain at the application site compared to only 3.8% in the sham TMS group. This is a quick way to make a study unblind, as people experiencing pain could logically surmise that they were receiving TMS).
- The Panel stated that there were too many non-random dropouts to reliably interpret these results. The Panel’s consensus was that the Week 6 data was of limited value and did not provide supportive data for establishing effectiveness. (After week 4, patients who did not show adequate improvement were given the option to quit the double-blind study; over half of patients departed the study after week 4).
The authors note that some patient outcome measures were collected in the trial but omitted from the article. Of the 15 secondary end points the authors included in the paper, 11 were statistically significant. Of the 11 secondary end points not included, 2 were statistically significant. Thus, the published end points were three times more likely to be statistically significant than the unpublished ones.

TMS was denied FDA approval in January 2007. But in October 2008, the FDA had a change of heart, approving the device. I'm not quite sure what changed the mind of the FDA.
The following disclaimer on the device's website is a bit funny:
NeuroStar TMS Therapy has not been studied in patients who have not received prior antidepressant treatment. Its effectiveness has also not been established in patients who have failed to receive benefit from two or more prior antidepressant medications at minimal effective dose and duration in the current episode.

So it's only demonstrated (weak) efficacy in people who have failed one (not zero, not more than one) antidepressant trial. Impressive, eh? To summarize, the sponsor and its affiliated academics wrote a paper in a major psychiatry journal in which positive outcomes were three times as likely to be reported as negative outcomes. The efficacy data were unimpressive according to an FDA panel -- and these panels are not known for being particularly choosy about efficacy data. It seemed that TMS was dead in the water, only to be resurrected in the form of a surprising FDA approval. And if being resurrected from the grave doesn't make for a great Halloween post, then what does?
O’Reardon, J., Solvason, H., Janicak, P., Sampson, S., Isenberg, K., Nahas, Z., McDonald, W., Avery, D., Fitzgerald, P., & Loo, C. (2007). Efficacy and Safety of Transcranial Magnetic Stimulation in the Acute Treatment of Major Depression: A Multisite Randomized Controlled Trial Biological Psychiatry, 62 (11), 1208-1216 DOI: 10.1016/j.biopsych.2007.01.018
Letter to Editor:
Yu, E., & Lurie, P. (2009). Transcranial Magnetic Stimulation Not Proven Effective Biological Psychiatry DOI: 10.1016/j.biopsych.2009.03.026