Oct 04, 2010
I had my last follow-up for the FTY720 (fingolimod/gilenya) trial in June (yes, I know I'm way behind on writing stuff up--I got a new job and moved halfway across the country this summer so cut me a little slack). As usual, I still can't make complete sense of the results, but since this is the closest thing to an objective, impartial evaluation of any changes that I have had post-CCSVI, I thought it was worth writing up.
I still don't get the EDSS (Expanded Disability Status Scale, http://ms.about.com/od/multiplesclerosis101/a/ms_edss.htm) and it's hard to have much faith in the results when all of a sudden I got a 2.5, which is lower than it has ever been since I started the trial two years ago (all the others have been 3 or 3.5), and I didn't feel subjectively better. In fact, my perception was that I was worse than I was in December. And by any measure that I can perceive, I was certainly worse than I was when I started the trial.
I think I mentioned last time that the neuro only asked me to take off my shoes and not my socks when I did the tuning fork/vibration test. This time I had the presence of mind not to just blindly follow instructions literally and asked whether I should take my socks off. The neuro didn't seem to think it was necessary, so I didn't. So that score got better, although I don't see how I could have better vibration sense with my socks on than off. I wonder if it's just a misinterpretation of the different sensations on my part. In the past, without my socks on, I could only hear the tuning fork, but didn't feel the vibrations. With the socks, I don't hear the tuning fork very much, so perhaps that's why I feel, or think I feel, the vibrations. I wonder what the vibration sensation corresponds to in everyday life. Anybody know?
I'm not sure how much faith to have in my own perceptions, but I don't have much faith in the EDSS, either, at this point. Obviously, this particular example is somewhat beyond the pale, but the whole process seems riddled with inconsistencies and doesn't seem to capture the changes I feel. Sometimes I think I'm much worse when the EDSS shows no change and then sometimes the EDSS changes for no reason that I can discern.
I walked better on the one foot in front of another test; even the neuro thought so. I also didn't fall on the Romberg, although I was unsteady.
Surprisingly, I did about the same or slightly better on the MSFC (Multiple Sclerosis Functional Composite, http://www.nationalmssociety.org/for-professionals/researchers/clinical-study-measures/msfc/index.aspx) than I did in December right after the CCSVI procedure. Not sure what's up with that since I'm clearly worse in everyday life. Mainly the 9-hole peg test was better. This is a test with a board that looks like one of those peg solitaire games, with nine holes in three rows of three. You're supposed to put the pegs in the holes one at a time and then take them out and put them back in a shallow storage area. I got my best time ever. Better even than before I started the trial. Interestingly, the trial coordinator said that I was the only patient she'd noticed who, after putting all the pegs in the holes, immediately grabs the last peg placed and pulls it out. Apparently, the other people start the pulling part with a seemingly random peg. It surprises me that anyone wouldn't do it the way I do, and it suggests that maybe physical coordination isn't all there is to these tests. Although I'm not sure how much difference that would make in the big picture since I think it's mostly the degree of change within a given patient that they look at. Went down to 59 (from a perfect 60) on the PASAT (the cognitive component of the MSFC, http://www.nationalmssociety.org/for-professionals/researchers/clinical-study-measures/pasat/index.aspx) and felt more unfocused than in December, when I could hold onto the numbers in my head with ease. My 25-foot walking time average was essentially the same.
My brain MRI never seems to change even though I keep getting worse. Once again, the report says "Above findings are very similar to prior examination." Of course, the radiologist seems to pick out different things to mention each time so it's impossible to say what might have changed.
The pulmonary function test seemed to go reasonably well, although not as effortlessly as the one in December. (When I got the results, though, I did seem to have gotten worse, and the report says clinical correlation is advised, which I would never have known if I didn't request copies of what they put in my medical record for myself.) I got a new tech and she seemed more conscientious than the other ones (maybe because she's new). She weighed me and measured my height with my shoes off, unlike all the other techs, who had me keep my shoes on. This, of course, changed the values slightly. Since that apparently affects how they score this test, it could have some effect on my results without there being any real change. Or I suppose it could also mask some real change. This just underscores how difficult it is to control all the variables in a trial even when you don't have seemingly gross violations of protocol like the neuro who thought I didn't need to remove my socks when testing for vibratory sensation.
In fact, all kinds of variables can affect experiments. A book I read not so long ago (The genius in all of us: why everything you've been told about genetics, talent, and IQ is wrong by David Shenk) talks about an experiment by neuroscientist John C. Crabbe, who ran the same study at the same time with the same strains of mice in three different locations (I think it might be this study: http://www.sciencemag.org/cgi/content/abstract/284/5420/1670). The researchers went to extraordinary lengths to standardize equipment, methods, and lab environment. Despite this, the mice responded significantly differently depending on the location, and each strain was affected in its own way. Not only did these hidden environmental differences significantly affect the results, they interacted with the different gene sets in different ways. An awful lot of complexity emerged even from a simple model with genetically pure mice in standard lab cages, so imagine the challenges of running experiments on a heterogeneous group of people living in all kinds of environments with such a heterogeneous disease as MS.
This is compounded by the fact that they don't even know for sure what to measure in MS trials. This is an interesting take on the validity of outcome measures in MS trials.
"At around the time the [British NHS risk-sharing] scheme was launched, my group’s independent analysis of data from the placebo arms of 31 large clinical trials found that the pivotal outcomes to be used in the scheme, including short term disability scores and relapse rates, were not valid surrogates. With no improvement in the treated arms within these original trials, efficacy hinged on preventing the worsening seen in those receiving placebo. The trials had defined disability progression as increases of 0.5 to 1 points on a standardised disability scale (Kurtzke [i.e., EDSS]) confirmed at 3-6 months, a measure that is clearly subject to--and jaw droppingly within--inter-rater variability. We found that patients in the placebo group improved just as often as they worsened, by amounts equivalent to the clinical criteria for treatment failure. It was thus evident that what was being measured was random variation and measurement error in imperfectly blinded studies.
"So if the disability measures were not measuring disability, what about the MRI changes? Multivariate analysis of data from the placebo arms found that changes in the MRI spots made no independent contribution to end of trial outcome; the effect of the changes was accounted for by clinical features such as duration of disease--something that can be measured at no cost.
"Thus the only outcome measure that remained was relapse frequency--and this was unambiguously reduced in patients undergoing treatment in the risk sharing scheme. However, total relapse numbers do not predict the time to disability or death. Although relapse frequency in the first two years after onset has some association with long term outcome, participants in the pivotal trials of interferon and glatiramer acetate had disease durations of several years." (http://www.bmj.com/content/340/bmj.c2693.full?view=long&pmid=20522659)
And even if they had a biomarker that measured objective amounts of damage to the brain, would that really work? After all, so much of it is about location. So one person can have a brain full of lesions and be mostly fine while another with only a few lesions can be in terrible shape. What really matters to me in the end is not how many holes I have in my brain, but how those holes affect me. How do you compare the apples and oranges of various kinds of disability? I sometimes imagine a kind of poker game where different problems are matched off against each other, i.e., I'll see you an eye with blurry vision and raise you a spastic leg and a drop foot.
I was contacted recently by the trial coordinator wanting to know how I was doing and if I had relapsed. I told them what they wanted to know, but the cynical part of me can't shake the suspicion that the trial neuros are just waiting for me to get worse so they can gloat over how CCSVI doesn't work.