26 sept. 2014

Triathlon and New Research II



We chose this title because it is time to begin with experimental research.  Michel Jouvet began with experimental research last century, shedding new light on how things work during sleep, in particular during REM sleep.  Experimental research became more restricted in the 1990s as awareness of animal rights grew and abuses by the experimenters themselves came to light; cats and rodents are no longer used for experiments the way they were before.  Even before Jouvet, and because of the Nazi experiments, nations came together to sign the Declaration of Helsinki, which deals with human experimentation:
The 1975 revision was almost twice the length of the original. It clearly stated that "concern for the interests of the subject must always prevail over the interests of science and society."[6] It also introduced the concept of oversight by an 'independent committee' (Article I.2) which became a system of Institutional Review Boards (IRB) in the US, and research ethics committees or ethical review boards in other countries.[7] In the United States regulations governing IRBs came into effect in 1981 and are now encapsulated in the Common Rule. Informed consent was developed further, made more prescriptive and partly moved from 'Medical Research Combined with Professional Care' into the first section (Basic Principles), with the burden of proof for not requiring consent being placed on the investigator to justify to the committee. 'Legal guardian' was replaced with 'responsible relative'. The duty to the individual was given primacy over that to society (Article I.5), and concepts of publication ethics were introduced (Article I.8). Any experimental manoeuvre was to be compared to the best available care as a comparator (Article II.2), and access to such care was assured (Article I.3). The document was also made gender neutral.

We are not talking about experimental research like Jouvet’s.  What Jouvet did is some of the most important research ever done on the brain and its functioning; as a theoretical framework it is a masterpiece to rank after Einstein and the Theory of Relativity:
Michel Valentin Marcel Jouvet (born 16 November 1925 in Lons-le-Saunier, Jura, France) is Emeritus Professor of Experimental Medicine at the University of Lyon. He spent one year in the laboratory of Horace Magoun in Long Beach, California, in 1955. Since then, he has carried out research in Experimental Neurophysiology at the Faculty of Medicine of Lyon and in Clinical Neurophysiology at the Neurological Hospital of Lyon.
In 1959 Michel Jouvet conducted several experiments on cats regarding muscle atonia (paralysis) during REM sleep. Jouvet demonstrated that the generation of REM sleep depends on an intact pontine tegmentum and that REM atonia is due to an inhibition of motor centres in the medulla oblongata. Cats with lesions around the locus coeruleus have less restricted muscle movement during REM sleep, and show a variety of complex behaviours including motor patterns suggesting that they are dreaming of attack, defence and exploration.

We are far from Jouvet.  We are talking about experiments that help us understand that running requires technique.  There is one project on the web that caught my attention:
The 4MM project is still in the testing phase, but initial results are promising. One of the study's test subjects ran an unassisted mile in 5 minutes and 20 seconds, and ran it in just 5 minutes and 2 seconds (a 5.625 percent decrease) with the 4MM. The same subject also experienced a 10-12 percent improvement during a 200-meter sprint, shaving a full 3 seconds off his time despite wearing the 11-pound pack.
While Kerestes hasn't quite gotten the average Joe down to a 4-minute mile, he can demonstrate marked improvements that future iterations of the 4MM can build upon.
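A quick check of the arithmetic quoted above, in a few lines of Python (the mile times are the ones reported in the article):

# Reported mile times with and without the 4MM pack
unassisted = 5 * 60 + 20   # 5:20 -> 320 seconds
assisted = 5 * 60 + 2      # 5:02 -> 302 seconds

decrease = (unassisted - assisted) / unassisted * 100
print(unassisted - assisted, "seconds saved")   # 18 seconds
print(round(decrease, 3), "% decrease")         # 5.625 %, as quoted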

Experimental research is also what the Mexican Federation has been doing, without following what the Declaration of Helsinki says about research on humans.  The FMTRI does not allow triathletes to compete internationally if they do not meet the “Marcas Mínimas” criteria.  The FMTRI’s own research says that “marcas mínimas” do not play a role in triathletes’ performance in a triathlon, yet they still use them as a way of controlling budgets and athletes.  “Marcas mínimas” could be a confounder in research, but they are not a requirement to perform well in a triathlon.  Please see our previous post on Medicine and research:
For example, diabetes confounds the relationship between renal failure and heart disease because it can lead to both conditions. Although patients with renal failure are at higher risk for heart disease, failing to account for the inherent risk of diabetes makes that association seem stronger than it actually is.
Confounding is a problem in every observational study, and statistical adjustment cannot always eliminate it. Even some of the best observational trials fall victim to confounding. Hormone replacement therapy was long thought to be protective for cardiac disease[25] until the Women’s Health Initiative randomized trial refuted that notion.[26] Despite the best attempts at statistical adjustment, there can always be residual confounding. However, simply putting more variables into a multivariate model is not necessarily a better option. Overadjusting can be just as problematic, and adjusting for unnecessary variables can lead to biased results.
Let’s continue with another kind of experimental research: Jens Voigt gave us tons of information about what a well-trained endurance cyclist can do when he broke the Hour Record:
5.3 mmol/L concentration 1 min post-effort. 412 watts. 102 cadence. 43 years old, and carrying them well. pic.twitter.com/Vqjxt4916V


It is interesting that he is comparable to a points racer.  We have posted what a points racer can do in previous posts:
Our sample file is from a male rider at a World Cup race, where the race is only 30km rather than 40km. The riders covered the 30km in just over 34 minutes, at an average speed of 51.22km/hr. That's breaking the speed limit on residential roads in Canada. This rider averaged 419W, with an average heart rate of 178.3bpm. You can see from the file that his power output varied massively throughout the race, and this is why we mentioned earlier that riders need to be able to tolerate repeated, sudden accelerations by recovering whenever they have the opportunity.
A big difference between a file like this and a road racing file is the average cadence - in this file, the average cadence was 117rpm, much higher than you'd see from most people in a road race. Remember the relationship between power and cadence? Just to refresh, power = cadence x force, so you can generate high power either through pedalling faster with less force, or more slowly with a higher force. On the track, where you can't change gears, you try to choose a gear that allows you to ride comfortably in the bunch and recover when you can, but also allows you to generate the power you need when you accelerate for sprints or breakaways.

At first glance, you can see how punchy this race was. The big surges in power, cadence and speed are characteristic of this type of race. They're a bit like stop-and-go traffic on the highway - you slow down, then things start to move again and everyone speeds up, then you have to hit the brakes again, over and over. This is very fatiguing, because each acceleration takes the riders up into their anaerobic zone, creating acidity that gradually builds up throughout the race unless they can recover and clear it out.
In terms of max power for different durations, track endurance riders are fairly similar to road riders - in fact, track riders gain some of their top end fitness by road racing. Points racers tend more towards the road sprinter type or lead-out man type than the smaller, lighter climber type. Here are the max power efforts from this file:
  • 1/2 second: 2,096 watts
  • 1 second: 1,638 watts
  • 5 seconds: 1,296 watts
  • 20 seconds: 1,119 watts
  • 1 minute: 677 watts
  • 4 minutes: 558 watts
  • 20 minutes: 443 watts
What all of these values tell us is that these riders need to not only hit high powers, but to sustain them as well, over and over throughout the race.
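The “power = cadence x force” relationship quoted above is shorthand for power = torque x angular velocity. A minimal sketch of that relationship, using the averages from the file above, shows why a fixed-gear track rider leans on cadence:

import math

def pedal_torque(power_w, cadence_rpm):
    # power = torque * angular velocity; angular velocity (rad/s) = rpm * 2*pi / 60
    omega = cadence_rpm * 2 * math.pi / 60
    return power_w / omega

# Same 419 W average: the higher the cadence, the less force (torque) per pedal stroke
print(round(pedal_torque(419, 117), 1), "N·m at 117 rpm")  # ~34 N·m, the points racer
print(round(pedal_torque(419, 90), 1), "N·m at 90 rpm")    # ~44 N·m at a typical road cadence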

At the same time we have posted what Froome has done in the past:
Vuelta Espana 2011, Stage 10: Salamanca, 47km time trial.
Stage Results: 1. Tony Martin (Ger) HTC-Highroad, 0:55:54; 2. Christopher Froome (GBr) Team Sky, +0:00:59; 3. Bradley Wiggins (GBr) Team Sky, +0:01:22; 4. Fabian Cancellara (Swi) Leopard Trek, +0:01:27; 5. Taylor Phinney (USA) BMC Racing Team, +0:01:33.
General Classification: 1. Christopher Froome (GBr) Team Sky, 38:09:13; 2. Jakob Fuglsang (Den) Leopard Trek, +0:00:12; 3. Bradley Wiggins (GBr) Team Sky, +0:00:20.
Average Watts: 406w (412np). TSS: 99. Avg Speed: 31mph. Max Speed: 45mph. Avg Cadence: 94. Avg Heart Rate: 147bpm.
Chris Froome rode the time trial of his life as he rode his way into the overall lead in the Vuelta. The Kenyan-born climber finished second behind Tony Martin (HTC-Highroad) in the 47km time trial to take the general classification lead, 12 seconds ahead of Jakob Fuglsang (Leopard Trek). Team Sky's head physiologist Tim Kerrison is delighted with Froome's performance: "Chris is doing a great job in the race looking after Brad and staying in contention himself."
Froome averaged 5.8w/kg at 406W for nearly an hour! He paced the event to perfection: the first half had a total altitude gain of 219m and he averaged 414w, versus the second half, where the course had a total elevation gain of only 86m and he averaged 398w. There were certainly riders who started the time trial too hard and suffered over the final 20km, where Froome ended up gaining ground.
This is the ideal test of one's true capabilities at what is termed Functional Threshold Power (FTP). A cyclist's FTP is the average watts they can maintain for a 60-minute effort. Given that Froome's 47km time trial took him 57 minutes, we can easily conclude that his FTP equals a tad more than 400w. Now that you know what it takes to compete at the highest levels, it is easy to see how you compare to the world's best. Well, it's easy to do if you have a power meter, that is. If you don't own one, try asking if your local fitness gym has any indoor bikes which display power, or ask at your local cycling club to see if you can rent one for a day in order to conduct some of your own field tests. How long can you maintain 5.8 watts per kilogram? Chris Froome can do this for 60 minutes, and now he knows his true potential and can apply those power values within his future training.
Another great concept we can learn from Froome's TT file is the idea of assigning a score, known as Training Stress Score (TSS), to each and every ride. Froome rode for almost 60 minutes at FTP, so that equals 99 TSS; one hour at FTP equals 100 TSS. Using TrainingPeaks and SRM power meters, Team Sky can quantify each day's training load in terms of intensity, duration and frequency. When viewed over time, TSS values paint a picture of each athlete's fitness, fatigue and form. There is no doubt that Froome started the Vuelta with high fitness and low fatigue. This is the ultimate scenario for any professional rider who hopes to enter their important races with peak form.
Metric         Min      Avg      Max     Unit
Power            0       405      766     W
Heart Rate      91       147      169     bpm
Cadence         20        94      114     rpm
Speed         24.4      50.2     73.8     kph
Pace         02:28     01:12    00:49     min/km
Elevation      832       930     1040     m
Temperature     28        30       35     C
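The FTP, watts-per-kilogram and TSS figures in that report can be reproduced with the standard formulas. Here is a minimal sketch; the ~405 W FTP used below is an assumption consistent with the post's "a tad more than 400w", and the 70 kg body mass is implied by the quoted 5.8 W/kg, not stated:

def watts_per_kg(power_w, weight_kg):
    return power_w / weight_kg

def training_stress_score(duration_s, normalized_power_w, ftp_w):
    # TrainingPeaks definition: one hour ridden exactly at FTP = 100 TSS
    intensity_factor = normalized_power_w / ftp_w
    return (duration_s * normalized_power_w * intensity_factor) / (ftp_w * 3600) * 100

# 406 W at 5.8 W/kg implies a body mass of roughly 70 kg
print(round(406 / 5.8, 1), "kg")

# ~57 minutes at 412 W normalized power, assuming an FTP of about 405 W
print(round(training_stress_score(57 * 60, 412, 405)), "TSS")   # ~98, close to the reported 99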

What can we take from these cyclists’ data? High cadence is a key to going fast.  Time trialists are very close to matching the Hour Record pace out on the road!  Wiggins won the last time trial World Championship averaging more than 50 km/h.  Somebody like Wiggo, who has raced on the track, has a better chance of breaking the Hour Record:
Wiggins was slower than Martin at the first time split but had the speed and power to gradually carve out a significant lead over the 47.1km course. He set the fastest time at the second time split and then gained more time on the climbs in the final part of the course as he stayed tucked in his aero position and pushed huge power down on the pedals on his Pinarello time trial bike.
He stopped the clock in a time of 56:25.52 to take the rainbow jersey. Martin tried to fight back, pushing his huge 58-tooth chain ring but lost further time on the climb and finished 26 seconds slower. Tom Dumoulin (Netherlands) took the bronze medal, confirming his time trialing talent by finishing 40 seconds slower than Wiggins.
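A quick check of the “more than 50 km/h” claim, using the course length and winning time quoted above:

distance_km = 47.1
time_h = (56 * 60 + 25.52) / 3600               # 56:25.52 expressed in hours
print(round(distance_km / time_h, 2), "km/h")   # ~50.08 km/h, just over 50 km/h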


It is possible to do experimental research without stepping on the “right to compete” or on athletes’ rights. One-subject experimentation is the way to go, given the multiple variables we must take into account and the limited number of high-performance subjects.  We can use the web to gather information about those subjects.


15 sept. 2014

Triathlon and Medicine II



We wrote a previous post on Medicine in which we timidly spoke about the problems related to research:
There are very few things like the one I mentioned above, observing and testing athletes, and those Medicine does well.  On the contrary, we have made many mistakes in Medicine that take a long time to recover from.  We have had the Framingham Study for a long time, but we continue to believe in consensus instead of looking closely at the data.

Lately, more and more doctors and researchers are looking at these problems of bias and error.  Our Federation has carried out, and advertises, research of its own that is steering us toward abuses against athletes and failures in our performance as a nation.  If Mario Mola or Richard Murray were Mexican, they would not be able to compete internationally, because they would be unable to meet the “MARCAS MÍNIMAS” required by the Mexican Federation.

We also spoke about one-subject research:
I have been avoiding this issue, but I need to explain everything to my athletes; everything that we human beings know regarding physiology and triathlon.  This knowledge is applied physiology and experimentation in one or two subjects.  Kazdin writes:

Going back to Medicine, Sarah Groff got injured and mentioned that her doctor advised her to use a boot and forget about her season.  The team trainer told her she could do something else instead, so as not to lose the season and to get back to training as quickly as possible.  She went with the trainer’s advice (no boot) and ended up finishing second in the series this year.  Lance Armstrong was told in Houston (MD Anderson) that he needed a specific treatment for his cancer that would damage his kidneys and lungs; he decided to go to Minneapolis for a second opinion, where he received a treatment that spared his kidneys and lungs (I read it in his book).  It is obvious that when dealing with sports we need a doctor who practices sports and is passionate about what he/she practices, to avoid medical decisions like the ones mentioned above.  The same goes for research: we need somebody who conscientiously practices a sport, so that he/she knows which variables of the sport to take into consideration when doing research.  Regression toward the mean is always present when we do not take all the variables into consideration.

The following article was taken from www.Medscape.org. It was just posted and is very relevant to what I have been saying:

It Ain't Necessarily So: Why Much of the Medical Literature Is Wrong
Christopher Labos, MD CM, MSc, FRCPC
September 09, 2014

In 1897, eight-year-old Virginia O'Hanlon wrote to the New York Sun to ask, "Is there a Santa Claus?"[1] Virginia's father, Dr. Phillip O'Hanlon, suggested that course of action because "if you see it in the Sun, it's so." Today many clinicians and health professionals may share the same faith in the printed word and assume that if it says it in the New England Journal of Medicine (NEJM) or JAMA or The Lancet, then it's so.
Putting the existence of Santa Claus aside, John Ioannidis[2] and others have argued that much of the medical literature is prone to bias and is, in fact, wrong.
Given a statistical association between X and Y, most people make the assumption that X caused Y. However, we can easily come up with 5 other scenarios to explain the same situation.
1. Reverse Causality
Given the association between X and Y, it is actually equally likely that Y caused X as it is that X caused Y. In most cases, it is obvious which variable is the cause and which is the effect. If a study showed a statistical association between smoking and coronary heart disease (CHD), it would be clear that smoking causes CHD and not that CHD makes people smoke. Because smoking preceded CHD, reverse causality in this case is impossible. But the situation is not always that clear-cut. Consider a study published in the NEJM that showed an association between diabetes and pancreatic cancer.[3] The casual reader might conclude that diabetes causes pancreatic cancer. However, further analysis showed that much of the diabetes was of recent onset. The pancreatic cancer preceded the diabetes, and the cancer subsequently destroyed the insulin-producing islet cells of the pancreas. Therefore, this was not a case of diabetes causing pancreatic cancer but of pancreatic cancer causing the diabetes.
Mistaking what came first in the order of causation is a form of protopathic bias.[4] There are numerous examples in the literature. For example, an assumed association between breast feeding and stunted growth, [5] actually reflected the fact that sicker infants were preferentially breastfed for longer periods. Thus, stunted growth led to more breastfeeding, not the other way around. Similarly, an apparent association between oral estrogens and endometrial cancer was not quite what it seemed.[6] Oral estrogens may be prescribed for uterine bleeding, and the bleeding may be caused by an undiagnosed cancer. Therefore, when the cancer is ultimately diagnosed down the road, it will seem as if the estrogens came before the cancer, when in fact it was the cancer (and the bleeding) that led to the prescription of estrogens. Clearly, sometimes it is difficult to disentangle which factor is the cause and which is the effect.
2. The Play of Chance and the DICE Miracle
Whenever a study finds an association between 2 variables, X and Y, there is always the possibility that the association was simply the result of random chance.
Most people assess whether a finding is due to chance by checking if the P value is less than .05. There are many reasons why this is the wrong way to approach the problem, and an excellent review by Steven Goodman[7] about the popular misconceptions surrounding the P value is a must-read for any consumer of medical literature.

To illustrate the point, consider the ISIS-2 trial,[8] which showed reduced mortality in patients given aspirin after myocardial infarction. However, subgroup analyses identified some patients who did not benefit: those born under the astrological signs of Gemini and Libra; patients born under other zodiac signs derived a clear benefit with a P value < .00001. Unless we are prepared to re-examine the validity of astrology, we would have to admit that this was a spurious finding due solely to chance. Similarly, Counsell et al.[9] performed an elegant experiment using 3 different colored dice to simulate the outcomes of theoretical clinical trials and subsequent meta-analysis. Students were asked to roll pairs of dice, with a 6 counting as patient death and any other number correlating to survival. The students were told that one dice may be more "effective" or less effective (ie, generate more sixes or study deaths). Sure enough, no effect was seen for red dice, but a subgroup of white and green dice showed a 39% risk reduction (P = .02). Some students even reported that their dice were "loaded." This finding was very surprising because Counsell had played a trick on his students and used only ordinary dice. Any difference seen for white and green dice was a completely random result.
The Frequency of False Positives
It is sometimes humbling and fairly disquieting to think that chance can play such a large role in the results of our analyses. Subgroup analyses, as shown above, are particularly prone to spurious associations. Most researchers set their significance level or rate of type 1 error at 5%. However, if you perform 2 analyses, then the chance of at least one of these tests being "wrong" is 9.75%. Perform 5 tests, and the probability becomes 22.62%; and with 10 tests, there is a 40.13% chance of at least 1 spurious association even if none of them are actually true. Because most papers present many different subgroups and composite endpoints, the chance of at least one spurious association is very high. Often, the one spurious association is published, and the other negative tests never see the light of day.[10]
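Those percentages follow from the fact that each independent test has a 95% chance of (correctly) coming up negative when there is nothing to find; a short sketch:

# Chance of at least one spurious "significant" result among n independent tests
# at a 5% significance level when no real effect exists anywhere
alpha = 0.05
for n_tests in (1, 2, 5, 10):
    p_at_least_one = 1 - (1 - alpha) ** n_tests
    print(n_tests, "tests ->", round(p_at_least_one * 100, 2), "%")
# 1 -> 5.0 %, 2 -> 9.75 %, 5 -> 22.62 %, 10 -> 40.13 %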
There is a way to guard against such spurious findings: replication. Unfortunately, the current structure of academic medicine does not favor the replication of published results,[11] and several studies have shown that many published trials do not stand up to independent verification and are likely false positives.[12,13] In 2005, John Ioannidis published a review of 45 highlighted studies in major medical journals. He found that 24% were never replicated, 16% were contradicted by subsequent research, and another 16% were shown to have smaller effect sizes than originally reported. Less than half (44%) were truly replicated.
The frequency of these false-positive studies in the published literature can be estimated to some degree.[2] Consider a situation in which 10% of all hypotheses are actually true. Now consider that most studies have a type 1 error rate (the probability of claiming an association when none exists [ie, a false positive]) of 5% and a type 2 error rate (the probability of claiming there is no association when one actually exists [ie, a false negative]) of 20%, which are the standard error rates presumed by most clinical trials. For a pool of 1,000 hypotheses, this allows us to create the following 2x2 table:

                     Association real   Association not real   Total
Study positive              80                    45             125
Study negative              20                   855             875
Total                      100                   900           1,000
This would imply that of the 125 studies with a positive finding, only 80/125 or 64% are true. Therefore, one third of statistically significant findings are false positives purely by random chance. That assumes, of course, that there is no bias in the studies, which we will deal with presently.
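The same arithmetic, written out (the pool of 1,000 hypotheses is an arbitrary choice; only the proportions matter):

n = 1000                      # arbitrary pool of hypotheses
true_hyps = 0.10 * n          # 10% are actually true -> 100
false_hyps = n - true_hyps    # 900 are null

true_positives = true_hyps * 0.80     # power = 1 - type 2 error (20%) -> 80 detected
false_positives = false_hyps * 0.05   # type 1 error (5%) -> 45 spurious "findings"

positives = true_positives + false_positives          # 125 "significant" results
print(round(true_positives / positives * 100), "%")   # 64% of positive findings are actually true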
3. Bias: Coffee, Cellphones, and Chocolate
Bias occurs when there is no real association between X and Y, but one is manufactured because of the way we conducted our study. Delgado-Rodriguez and Llorca[4] identified 74 types of bias in their glossary of the most common biases, which can be broadly categorized into 2 main types: selection bias and information bias.
One classic example of selection bias occurred in 1981 with a NEJM study showing an association between coffee consumption and pancreatic cancer.[15] The selection bias occurred when the controls were recruited for the study. The control group had a high incidence of peptic ulcer disease, and so as not to worsen their symptoms, they drank little coffee. Thus, the association between coffee and cancer was artificially created because the control group was fundamentally different from the general population in terms of their coffee consumption. When the study was repeated with proper controls, no effect was seen.[16]
Information bias, as opposed to selection bias, occurs when there is a systematic error in how the data are collected or measured. Misclassification bias occurs when the measurement of an exposure or outcome is imperfect; for example, smokers who identify themselves as nonsmokers to investigators or individuals who systematically underreport their weight or overreport their height.[17] A special situation, known as recall bias, occurs when subjects with a disease are more likely to remember the exposure under investigation than controls. In the INTERPHONE study, which was designed to investigate the association between cell phones and brain tumors, a spot-check of mobile phone records for cases and controls showed that random recall errors were large for both groups with an overestimation among cases for more distant time periods.[18] Such differential recall could induce an association between cell phones and brain tumors even if none actually exists.
An interesting type of information bias is the ecological fallacy. The ecological fallacy is the mistaken belief that population-level exposures can be used to draw conclusions about individual patient risks.[4] A recent example of the ecological fallacy was a tongue-in-cheek NEJM study by Messerli[19] showing that countries with high chocolate consumption won more Nobel prizes. The problem with country-level data is that countries don't eat chocolate, and countries don't win Nobel prizes. People eat chocolate, and people win Nobel prizes. This study, while amusing to read, did not establish the fundamental point that the individuals who won the Nobel prizes were the ones actually eating the chocolate.[20]
Another common ecological fallacy is the association between height and mortality. There are a number of reviews suggesting that shorter stature is associated with a longer life span.[21] However, most of these studies looked at country-level data. Danes are taller than Italians and also have more coronary heart disease. However, if you look at twins[22] or individuals within the same country,[23] you see the opposite association -- namely, it is the shorter individuals who have more heart disease. Again, the fault lies in looking at countries rather than individuals.
4. Confounding
Confounding, unlike bias, occurs when there really is an association between X and Y, but the magnitude of that association is influenced by a third variable. Whereas bias is a human creation, the product of inappropriate patient selection or errors in data collection, confounding exists in nature.[24]
For example, diabetes confounds the relationship between renal failure and heart disease because it can lead to both conditions. Although patients with renal failure are at higher risk for heart disease, failing to account for the inherent risk of diabetes makes that association seem stronger than it actually is.
Confounding is a problem in every observational study, and statistical adjustment cannot always eliminate it. Even some of the best observational trials fall victim to confounding. Hormone replacement therapy was long thought to be protective for cardiac disease[25] until the Women’s Health Initiative randomized trial refuted that notion.[26] Despite the best attempts at statistical adjustment, there can always be residual confounding. However, simply putting more variables into a multivariate model is not necessarily a better option. Overadjusting can be just as problematic, and adjusting for unnecessary variables can lead to biased results.[27,28]
Real-World Randomization
Confounding can be dealt with through randomization. When study subjects are randomly allocated to one group or another purely by chance, any confounders (even unknown confounders) should be equally present in both the study and control group. However, that assumes that randomization was handled correctly. A 1996 study sought to compare laparoscopic vs open appendectomy for appendicitis.[29] The study worked well during the day, but at night the presence of the attending surgeon was required for the laparoscopic cases but not the open cases. Consequently, the on-call residents, who didn't like calling in their attendings, adopted a practice of holding the translucent study envelopes up to the light to see if the person was randomly assigned to open or laparoscopic surgery. When they found an envelope that allocated a patient to the open procedure (which would not require calling in the attending and would therefore save time), they opened that envelope and left the remaining laparoscopic envelopes for the following morning. Because cases operated on at night were presumably sicker than those that could wait until morning, the actions of the on-call team biased the results. Sicker cases preferentially got open surgery, making the outcomes of the open procedure look worse than they actually were.[30] So, though randomized trials are often thought of as the solution to confounding, if randomization is not handled properly, confounding can still occur. In this case, an opaque envelope would have solved the problem.
5. Exaggerated Risk
Finally, let us make the unlikely assumption that we have a trial where nothing went wrong, and we are free of all of the problems discussed above. The greatest danger lies in our misinterpretation of the findings. A report in the New England Journal of Medicine stated that African Americans were 40% less likely to be sent for an angiogram than their white counterparts.[31] The report generated considerable media attention at the time, but a later article by Schwartz et al.[32] pointed out that the results were overstated. Had the authors used a risk ratio instead of an odds ratio, the result would have been 7% instead of 40%, and it's unlikely that the paper would have been given such prominence. Choosing the correct statistical test can be difficult. Nearly 20 years ago, Sackett and colleagues[33] proclaimed "Down with odds ratios!" and yet they remain frequently used in the literature.
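The gap between "40% less likely" and "7% less likely" is what happens when an odds ratio is read as a risk ratio for a common outcome. A minimal sketch with illustrative referral rates (these percentages are assumptions chosen to reproduce the effect, not figures from the study):

def risk_ratio(p_group, p_reference):
    return p_group / p_reference

def odds_ratio(p_group, p_reference):
    odds = lambda p: p / (1 - p)
    return odds(p_group) / odds(p_reference)

# Hypothetical referral rates: 90% for white patients, 84.7% for African American patients
p_reference, p_group = 0.90, 0.847
print(round(risk_ratio(p_group, p_reference), 2))   # ~0.94 -> about 6-7% less likely
print(round(odds_ratio(p_group, p_reference), 2))   # ~0.62 -> reads as "~40% less likely"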
Another major problem is the use of relative risks vs absolute risks. Although the latter are clearly preferable, one review of almost 350 studies found that 88% never reported the absolute risk.[34] Furthermore, overreliance on relative risks can be very misleading. Baylin and colleagues[35] reported that the relative risk for myocardial infarction in the hour after drinking a cup of coffee was 1.5 (ie, a 50% increase). This rather concerning finding was taken up by Poole in a bitingly satirical letter to the editor,[36] where he calculated that the relative risk of 1.5 translated to an absolute risk of 1 heart attack for every 2 million cups of coffee. Clearly, well-done studies have to be put in clinical context, and it is paramount to remember that statistical significance does not imply clinical significance.
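Poole's back-of-the-envelope point can be reproduced directly: a relative risk only matters in proportion to the baseline absolute risk, which for a heart attack in any single hour is tiny. The baseline figure below is an assumption chosen purely for illustration:

relative_risk = 1.5                    # MI risk in the hour after a coffee, as quoted
baseline_hourly_risk = 1 / 1_000_000   # assumed baseline risk of MI in any given hour

excess_risk_per_cup = baseline_hourly_risk * (relative_risk - 1)
print(f"one extra heart attack per {1 / excess_risk_per_cup:,.0f} cups")
# -> one extra heart attack per 2,000,000 cups, in line with Poole's figure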
Why Bother?
With all of the different ways that clinical trials can go wrong, one might wonder why we bother at all. Unlike little Virginia, who was prepared to believe whatever she saw in the newspaper, we have become, if not cynics, then at least skeptics when it comes to our published research. But skepticism is a good thing and makes us challenge what we think we know in favor of what we can prove. Without this skepticism, we would still be prescribing hormone replacement therapy to prevent heart disease in women, giving class I anti-arrhythmics to cardiac patients after myocardial infarction, and prescribing COX-2 inhibitors with reckless abandon.
As Dr. Fiona Godlee summed up in her BMJ editorial on evidence-based medicine, “[it’s a] flawed system but still the best we’ve got.”[37]