Scientists’ Elusive Goal: Reproducing Study Results

by GAUTAM NAIK, Wall Street Journal

[Photo caption: Researchers at Bayer’s labs often find their experiments fail to match claims made in the scientific literature.]

Two years ago, a group of Boston researchers published a study describing how they had destroyed cancer tumors by targeting a protein called STK33. Scientists at biotechnology firm Amgen Inc. quickly pounced on the idea and assigned two dozen researchers to try to repeat the experiment, with a goal of turning the findings into a drug.

It proved to be a waste of time and money. After six months of intensive lab work, Amgen found it couldn’t replicate the results and scrapped the project. “I was disappointed but not surprised,” says Glenn Begley, vice president of research at Amgen of Thousand Oaks, Calif. “More often than not, we are unable to reproduce findings” published by researchers in journals.

This is one of medicine’s dirty secrets: Most results, including those that appear in top-flight peer-reviewed journals, can’t be reproduced. “It’s a very serious and disturbing issue because it obviously misleads people” who implicitly trust findings published in a respected peer-reviewed journal, says Bruce Alberts, editor of Science. On Friday, the U.S. journal is devoting a large chunk of its Dec. 2 issue to the problem of scientific replication.

Reproducibility is the foundation of all modern research, the standard by which scientific claims are evaluated. In the U.S. alone, biomedical research is a $100 billion-a-year enterprise. So when published medical findings can’t be validated by others, there are major consequences. Drug manufacturers rely heavily on early-stage academic research and can waste millions of dollars on products if the original results are later shown to be unreliable. Patients may enroll in clinical trials based on conflicting data, and sometimes see no benefits or suffer harmful side effects.

There is also a more insidious and pervasive problem: a preference for positive results. Unlike pharmaceutical companies, academic researchers rarely conduct experiments in a “blinded” manner. This makes it easier to cherry-pick statistical findings that support a positive result. In the quest for jobs and funding, especially in an era of economic malaise, the growing army of scientists needs more successful experiments to its name, not failed ones. An explosion of scientific and academic journals has added to the pressure.

When it comes to results that can’t be replicated, Dr. Alberts says the increasing intricacy of experiments may be largely to blame. “It has to do with the complexity of biology and the fact that methods [used in labs] are getting more sophisticated,” he says.

It is hard to assess whether the reproducibility problem has been getting worse over the years; there are some signs suggesting it could be. For example, the success rate of Phase 2 human trials—where a drug’s efficacy is measured—fell to 18% in 2008-2010 from 28% in 2006-2007, according to a global analysis published in the journal Nature Reviews in May.

“Lack of reproducibility is one element in the decline in Phase 2 success,” says Khusru Asadullah, a Bayer AG research executive. In September, Bayer published a study describing how it had halted nearly two-thirds of its early drug target projects because in-house experiments failed to match claims made in the literature.
The German pharmaceutical company says that none of the claims it attempted to validate were in papers that had been retracted or were suspected of being flawed. Yet even the data in the most prestigious journals couldn’t be confirmed, Bayer said.

In 2008, Pfizer Inc. made a high-profile bet, potentially worth more than $725 million, that it could turn a 25-year-old Russian cold medicine into an effective drug for Alzheimer’s disease. The idea was promising. Published in the journal Lancet, data from researchers at Baylor College of Medicine and elsewhere suggested that the drug, an antihistamine called Dimebon, could improve symptoms in Alzheimer’s patients. Later findings, presented by researchers at the University of California Los Angeles at a Chicago conference, showed that the drug appeared to prevent symptoms from worsening for up to 18 months. “Statistically, the studies were very robust,” says David Hung, chief executive officer of Medivation Inc., a San Francisco biotech firm that sponsored both studies.

In 2010, Medivation, along with Pfizer, released data from their own clinical trial for Dimebon, involving nearly 600 patients with mild to moderate Alzheimer’s disease symptoms. The companies said they were unable to reproduce the Lancet results. They also indicated they had found no statistically significant difference between patients on the drug and those on an inactive placebo. Pfizer and Medivation have just completed a one-year study of Dimebon in over 1,000 patients, another effort to see if the drug could be a potential treatment for Alzheimer’s. They expect to announce the results in coming months.

Scientists offer a few theories as to why results can be so hard to duplicate. Two different labs can use slightly different equipment or materials, leading to divergent results. The more variables there are in an experiment, the more likely it is that small, unintended errors will pile up and swing a lab’s conclusions one way or the other. And, of course, data that have been rigged, invented or fraudulently altered won’t stand up to future scrutiny.

According to a report published by the U.K.’s Royal Society, there were 7.1 million researchers working globally across all scientific fields—academic and corporate—in 2007, a 25% increase from five years earlier. “Among the more obvious yet unquantifiable reasons, there is immense competition among laboratories and a pressure to publish,” wrote Dr. Asadullah and others from Bayer in their September paper. “There is also a bias toward publishing positive results, as it is easier to get positive results accepted in good journals.”

Science publications are under pressure, too. The number of research journals jumped 23% between 2001 and 2010, according to Elsevier, which has analyzed the data. Their proliferation has ratcheted up competitive pressure on even elite journals, which can generate buzz by publishing splashy papers, typically containing positive findings, to meet the demands of a 24-hour news cycle. Dr. Alberts of Science acknowledges that journals increasingly have to strike a balance between publishing studies “with broad appeal” and making sure they aren’t hyped.

Drugmakers also have a penchant for positive results. A 2008 study published in the journal PLoS Medicine by researchers at the University of California San Francisco looked at data from 33 new drug applications submitted between 2001 and 2002 to the U.S. Food and Drug Administration.
The agency requires drug companies to provide all data from clinical trials. However, the authors found that a quarter of the trial data—most of it unfavorable—never got published because the companies never submitted it to journals. The upshot: doctors who end up prescribing the FDA-approved drugs often don’t get to see the unfavorable data.

“I would say that selectively publishing data is unethical because there are human subjects involved,” says Lisa Bero of UCSF, co-author of the PLoS Medicine study.

In an email statement, a spokeswoman for the FDA said the agency considers all data it is given when reviewing a drug but “does not have the authority to control what a company chooses to publish.”

Venture capital firms say they, too, are increasingly encountering cases of nonrepeatable studies, and cite it as a key reason why they are less willing to finance early-stage projects. Before investing in very early-stage research, Atlas Ventures, a venture-capital firm that backs biotech companies, now asks an outside lab to validate any experimental data. In about half the cases the findings can’t be reproduced, says Bruce Booth, a partner in Atlas’ Life Sciences group.

There have been several prominent cases of nonreproducibility in recent months. In September, for example, the journal Science partially retracted a 2009 paper linking a virus to chronic fatigue syndrome because several labs couldn’t replicate the published results. The partial retraction came after two of the 13 study authors went back to the blood samples they had analyzed from chronic-fatigue patients and found they were contaminated.

Some studies can’t be redone for a more prosaic reason: the authors won’t make all their raw data available to rival scientists. John Ioannidis of Stanford University recently attempted to reproduce the findings of 18 papers published in the respected journal Nature Genetics. He noted that 16 of these papers stated that the underlying “gene expression” data for the studies were publicly available. But the supplied data apparently weren’t detailed enough, and results from 16 of the 18 major papers couldn’t be fully reproduced by Dr. Ioannidis and his colleagues. “We have to take it [on faith] that the findings are OK,” said Dr. Ioannidis, an epidemiologist who studies the credibility of medical research.

Veronique Kiermer, an editor at Nature, says she agrees with Dr. Ioannidis’ conclusions, noting that the findings have prompted the journal to be more cautious when publishing large-scale genome analyses.

When companies trying to find new drugs come up against the nonreproducibility problem, the repercussions can be significant. A few years ago, several groups of scientists began to seek out new cancer drugs by targeting a protein called KRAS. The KRAS protein transmits signals received on the outside of a cell to its interior and is therefore crucial for regulating cell growth. But when certain mutations occur, the signaling can become continuous, triggering excess growth such as tumors.

The mutated form of KRAS is believed to be responsible for more than 60% of pancreatic cancers and half of colorectal cancers. It has also been implicated in the growth of tumors in many other organs, such as the lung. So scientists have been especially keen to impede KRAS and, thus, stop the constant signaling that leads to tumor growth.
In 2008, researchers at Harvard Medical School used cell-culture experiments to show that by inhibiting another protein, STK33, they could prevent the growth of tumor cell lines driven by the malfunctioning KRAS. The finding galvanized researchers at Amgen, who first heard about the experiments at a scientific conference. “Everyone was trying to do this,” recalls Dr. Begley of Amgen, which derives nearly half of its revenues from cancer drugs and related treatments. “It was a really big deal.”

When the Harvard researchers published their results in the prestigious journal Cell, in May 2009, Amgen moved swiftly to capitalize on the findings. At a meeting in the company’s offices in Thousand Oaks, Calif., Dr. Begley assigned a group of Amgen researchers the task of identifying small molecules that might inhibit STK33. Another team got a more basic job: reproduce the Harvard data. “We’re talking about hundreds of millions of dollars in downstream investments if the approach works,” says Dr. Begley. “So we need to be sure we’re standing on something firm and solid.”

But over the next few months, Dr. Begley and his team got increasingly disheartened. Amgen scientists, it turned out, couldn’t reproduce any of the key findings published in Cell. For example, there was no difference in the growth of cells where STK33 was largely blocked, compared with a control group of cells where STK33 wasn’t blocked.

What could account for the irreproducibility of the results? “In our opinion there were methodological issues” in Amgen’s approach that could have led to the different findings, says Claudia Scholl, one of the lead authors of the original Cell paper. Dr. Scholl points out, for example, that Amgen used a different reagent to suppress STK33 than the one reported in Cell. Yet she acknowledges that even when slightly different reagents are used, “you should be able to reproduce the results.” Now a cancer researcher at the University Hospital of Ulm in Germany, Dr. Scholl says her team has reproduced the original Cell results multiple times, and continues to have faith in STK33 as a cancer target.

Amgen, however, killed its STK33 program. In September, two dozen of the firm’s scientists published a paper in the journal Cancer Research describing their failure to reproduce the main Cell findings.

Dr. Begley suggests that academic scientists, like drug companies, should perform more experiments in a “blinded” manner to reduce any bias toward positive findings. Otherwise, he says, “there is a human desire to get the results your boss wants you to get.” Adds Atlas’ Mr. Booth: “Nobody gets a promotion from publishing a negative study.”

Comments

Patrick Butler Wrote: Oh snap!

David Eyke Replied: What the scientific community is lacking is some good old-fashioned management science. Modern management science tells us that if you want more of something, all you have to do is measure it. If you want something to improve, simply measure the output and make sure the people involved in the production are aware that you are measuring (the same thing holds true for teacher evaluations). The measurement alone – even if you do nothing else, such as attach consequences to the values produced by the measurement – helps to massively improve the value of the output. This is a function of human nature – we pay more attention to our own level of effort when we know that others are paying attention. So, what aren’t we paying attention to – what aren’t we measuring?
What aren’t we evaluating that has led to trillions of dollars of public money over the last fifty years poured into research that is only 20% reproducible in full? What we aren’t measuring is the REPLICATION rate of scientific work by scientists. We aren’t measuring it, nor are we publishing it widely. In other words, we tell the scientific community that we ignore their poor efforts and wasted research dollars. What we aren’t measuring and ranking and discussing is the rate at which the work effort of all scientists is regularly reproduced and confirmed by other scientists.

Who are the best scientists? Are they at Harvard and Stanford and Yale? Or are they at Georgia, Oklahoma and Washington State? We don’t know, because we don’t measure and rank and discuss the rate at which the extremely expensive work effort of all scientists is regularly reproduced and confirmed by other scientists. We don’t know the identity of the best scientists who regularly turn out original AND highly reproducible work. This is because we are not measuring it. And management science tells us that if you don’t measure, you are going to waste a whole lot of money.

We don’t want to discourage scientific inquiry into sometimes very beneficial but very high-risk blind alleys, but we should at least reward, with some form of recognition, scientists who produce a career’s worth of highly verifiable science. Maybe something as simple as Congressional recognition for the top twenty highly reproducible retiring professors each year. A thank-you for a career of hard work. And a career spent not predictably wasting the taxpayers’ money.

Dan Laroque Wrote: Of course they don't reproduce the results. Like global climate change, they use regression models that create lines for select bits of data, however large or small. The level of confounding must be huge in drug trials, just as it is in plant pathology work. The companies grab some academic who creates models like the ones used in the mortgage meltdown - they look great, complicated and authoritative. They don't work. To get reproducible results one must first control the biology to which the treatment (drug) is applied. Modeling is not a lost art. It is a black art.

Orin Armstrong Replied: I was unaware that smog and pollution are now part of the climate. But then, since I question your position, I must be ignorant.

Ian Gilbert Wrote: The holy writ of "evidence-based medicine" is "Users' Guides to the Medical Literature", Guyatt, et al., JAMA 2008 (2nd edition). The underlying assumptions of "Users' Guides etc." are appalling: Physicians are assumed to know and understand little to nothing about statistics, probability, and experimental design. If they read and understand "Users' Guides etc.", physicians will end up with nothing more than a rudimentary, superficial understanding of statistics, probability, and experimental design.

"Users' Guides etc." implies that the educational progression of premeds and medical students is based on learning (memorizing) huge quantities of biological and medical data and information, with little or no effort put into case studies, probability theory, and other "impractical liberal arts courses" that might actually impart some critical-thinking ability. Young (and sometimes not-so-young) doctors are so taken with their own IQ-type intelligence that they never notice that they never learned how to deal with ambiguity and uncertainty.
For every doctor who is willing to say, "I don't know", there are 50 who say, "Do this, swallow that, we know it works".

Simon Gruber Replied: Interesting insight. You could try posting it on a relevant article?

Ian Gilbert Replied: Mr. Gruber, thanks for your help. I was under the impression that the article was about badly designed medical research experiments, misinterpreted by the medical researchers who conducted them. Live and learn.

William Brown Wrote: "Before investing in very early-stage research, Atlas Ventures, a venture-capital firm that backs biotech companies, now asks an outside lab to validate any experimental data. In about half the cases the findings can't be reproduced." Of course the peer reviewers are at the mercy of the experimenters: The inner workings of a living organism are mind-boggling. By comparison, nuclear physics was child's play (at least in hindsight) for several reviewers of the recent faster-than-light experiment. They noted the experiment neglected the relativistic effect from the satellites used to clock the particles. In either field I would have to rely on Atlas Ventures' method.

Kent Strand Wrote: Publishing enhanced, embellished, or fraudulent data is Criminal. It should be prosecuted as such. When publishing for public or industry consumption, there comes an inherent liability to be honest, transparent and direct. It could be argued, successfully I might add, that such people who publish erroneous data or studies, well, they should be put away.

Simon Gruber Replied: I actually think you'd have very little success arguing that scientists who make mistakes (publish "erroneous" data) should be imprisoned. Especially since there's no law against being wrong, but I guess go ahead and call it Criminal (capital C even!). Just remember that no banker is responsible for violating actual laws and ruining the world's economy. Just prosecute until you get the result you want; it'll work great, I'm sure.

Joseph Grcar Wrote: I believe that this article is flawed, like the previous one published by Mr. Naik.

Charles Davis Wrote: Great piece. Thanks.

William Clark Wrote: This is a common technique with dating. We have to confirm the drunken sex we had on the first night with a study--that is called a relationship.

Jack Vaczosky Replied: Scientific fraud would be one possible culprit why the results aren't replicable under similar conditions.

Bernard Levine Wrote: QUOTE: "In the quest for jobs and funding, especially in an era of economic malaise, the growing army of scientists needs more successful experiments to its name, not failed ones. An explosion of scientific and academic journals has added to the pressure." That is a nice, succinct summary of the self-parody that science has become. Many times in these comment pages I have pointed out that funded research produces the results that the funding source expects or demands, because otherwise the researcher won't get paid. EVERY time, some academic fresh from cashing his latest grant check has sneeringly accused me of not understanding the scientific method. The trouble is, I do understand the method... and the hustle. As with Gresham's Law pertaining to money, bad science drives out the good.

Matthew Tangeman Replied: Sound like global warming research to anyone??

Louis Ciola Wrote: It is the very core of the scientific method that any experiment should be reproducible.
If you conduct a scientific experiment with a new drug and you claim that your drug ABC cures the problem 70% of the time, then those results have to be reproducible or they are worthless. And there is no reason why the results cannot be reproduced, unless the study is flawed, deliberately not accurate, or is seeking to deceive the public. Now, these same scientific principles are used in engineering. That's why the FAA certifies a new aircraft for flight. If I design a new airplane out of composite materials, for example, it has to be certified for flight or no passengers can fly on it. If I design a new aircraft engine, that too must be thoroughly tested, or it can't be certified for flight. There is absolutely no excuse whatsoever for making false claims about any drug or product. It either works or it doesn't. Any other claims are pure nonsense. The scientific community must know which researchers are honest and accurate, and which ones aren't, and view their research accordingly. Still, this is a very disturbing article if it reflects the current state of pharmaceutical and medical research.

Susan Collingwood Replied: Mr. Ciola, you may know much about *engineering* research, but you don't know much about this arena of research. Please note that there is *zero* evidence that the results are not reproducible *within the same lab*. Thus, there is no need to rush to the assumption that the majority of non-reproducible studies are somehow fraudulent. In biomedical research such as that described here, the systems are incredibly sensitive to a huge number of factors (including the soap used to clean the glassware used to prepare the solutions, and the skill of the minimum-wage person doing the glassware cleaning). Further, a wide variety of conditions can affect the complicated "analytical" equipment used to conduct experiments or analyze results. Thus, unless the experimental results are invalidated in the original lab or in multiple other labs, there is no way of knowing which lab is getting the "valid" results. Lastly, one of the "non-reproducible" experiments described was the "repetition" of an experiment *with a different chemical* than the first experiment. In that case, it is entirely possible that the first compound may well be entirely as effective as first reported, and that the second compound, although having a similar effect on one aspect of the cellular chemistry, doesn't have the *identical* effect, thus explaining the differing results. (The use of the different chemical was likely a result of the drug company's attempt to avoid having to pay for the university's intellectual property in the compound it developed.)

Wade Yoder Wrote: To truly replicate, and make claims as this article did, "the original researchers should be there to ensure the same quality of research was used" before a drug company patents a process and then shelves it because it doesn't work based on a procedure they used "solely to discredit the findings"... I would be interested in knowing if another company will be able to test this, or if they bought the lock and key when they "pounced" on it ;)

Lester Marshall Replied: Or it could be what happened with the Cold Fusion announcement back in the eighties. The guys who did the original research didn't do it right, so they were wrong, and the other studies couldn't reproduce the results because the original results were bad.
Wade Yoder Replied: Or we could look at some things that are recommended in countries where medicine is free and people are educated in how to naturally keep their immune system strong, with a diet that is loaded with antioxidants to help fight disease, including cancer, instead of diets that help cause it. Like Hippocrates said in 460 B.C., nearly 2,500 years ago: "Let thy food be thy medicine and thy medicine be thy food."

Peter Melzer Wrote: Reproducibility of a research finding is essential to the scientific method. Reproducing a result with a different, independent method represents the strongest confirmation of a result and, if negative, the most powerful corrective to a hypothesis. Because scientists know that their findings will be tested, deliberate fraud remains rare. Rather, unrepresentative samples are a primary reason promising hypotheses fall apart. Read more here: http://brainmindinst.blogspot.com/2011/02/representative-sampling-mind.html

tom johnson Replied: Correct. As a retired scientist, I am not surprised that studies may not reproduce the original results. In some cases, it is the sample size that sends a hypothesis to the dust bin. In others, failing to follow the methods exactly results in a rejection of the null hypothesis. However, once the woefully uneducated media, politicians or the public get hold of a study that affirms their preconceived notions, a funny thing happens. Either that particular study will be quoted as an absolute truth in perpetuity, or, if the original study results are contradicted, the study is dismissed as quackery or the scientist's reputation is called into question. None of the three above-listed groups seem to realize that good science is ever-changing, not static, and seeks to gain further data that will support or reject the original null hypothesis. At some point in science (e.g. gravity, the earth revolves around the sun, etc.), science reaches a consensus, and although there may be some studies that contradict the consensus, the preponderance of data points in the same direction. The problem is, the three groups listed above will grasp at any study that supports their preconceived notions and will use the contradictory evidence, however flimsy, to make their point.

James Hussey Replied: Tom, I might argue that Bayer or Amgen licensed a technology from an academic center where certain representations were made regarding valid results. After licensing, these companies had every incentive to make sure the invention worked. They undoubtedly consulted the original academic scientists to assist in reproducing the results. Unfortunately, the results were never reproduced, or only partially reproduced, in 80% of the cases. Of course, the patents were never modified to reflect this fact. The patents are not valid because they cannot teach. So, what is the policy lesson for the government? First, a lot of representations are made about very shaky academic data and processes. Did the scientist actually get the result they are representing? In the majority of cases, the scientist got the result - once or twice, after many attempts. They rushed to patent the invention and publish. Then they applied for bigger grants and got tenure. But the invention process was very poor, and the data was neither reproducible nor robust. Instead of working on making the process more robust, the academic moved on to other experiments. This science is not fraud - but it is commercially flawed.
That is, it will not lead to any products, jobs or commercial benefit for the government, the university or the academic. So the taxpayer loses out in a commercial sense. The academic and the university make more money from licensing and grants. The government and the taxpayer are the losers. So the real issue is "what are we paying for" with government research? If we are advancing science solely for the benefit of humanity, we are doing fine. If we are expecting the research to lead to jobs, companies or products, we are failing miserably. The real issue is "should we invest $60 billion per year in basic research if it leads to little or no commercial benefit?" That is the real issue. Perhaps private companies will begin to invest in basic research again, knowing the poor quality of research from academic and government sources.

Anil Singh Wrote: If methodology is so exacting, those attempting to replicate results should follow the recipe exactly; otherwise they are essentially performing two different experiments. It is sad that finding out what does not work is seen as "failure." If a drug or treatment process does not work in certain populations, that can save lives. If some basic-level conclusions are negative, that will save companies more time and money by avoiding higher-level studies or investing in drug or treatment development. Negative results might inspire others to try new methodologies. What does not work is useful!! Scientists need to divorce themselves from the false reality that marketers and marketeers create, that only big successes are of any value.

James Beard Replied: If the recipe contains something responsible for the results that was neither recognized nor reported, simply following the recipe exactly would reproduce the same unrecognized effects. No, in the case at hand, better to use a slightly different means of doing the same things, and see if the effect of doing that accomplishes what the authors of the first experiments claim. The value of a paper is in the accuracy and pertinency of the conclusions reached, not in whether accidental, unintended, and unrecognized effects can be accurately reproduced. You are correct that negative results are valuable, often more important in avoiding huge waste of time and money than positive results that may prove less useful than initially expected. Still, human nature says you do not get rewarded for being wrong in picking out something to work on with a distant goal as the destination. If what you picked does not take you toward the desired destination (or appear to do so), your reward will be smaller. Of greater importance, this problem is not limited to bioscience. I have read estimates, based on "looking-back" studies of reports appearing in top-rated scientific journals years earlier, and the conclusions invariably say that more than half the reports should not have been published, because the findings were simply and flatly wrong (as proved by others later), or the design, methodology, or materials used in the experiment were too flawed to allow meaningful conclusions to be reliably reached. In a nutshell, a scientific paper published in a highly reputed peer-reviewed journal is more likely to be wrong than right. If more people recognized this, and recognized the implications, there would be a lot less hysteria (a la "Global Warming", for just one example) and a lot less "bandwagon effect" resulting from "new and startling revelations." We would be the better for it.
The value of the scientific method is not that every scientist is always or even mostly right, but that one scientist's findings can be examined and checked by others, and the erroneous eventually cast aside.

Rocco Papalia Replied: James Beard commenting on the 'recipe'. Hmmmmmmm, that's too good to pass up!!

Lester Marshall Replied: I don't know about that. Did you ever hear about the Alar tests that showed it caused cancer? The reason the test results showed that is because any results that didn't support the theory were thrown out. The tobacco companies did something similar when they stopped research into smoking because it showed that smoking was dangerous. They even made the researchers sign non-disclosure agreements so they couldn't tell the public what they found.

Brock Bose Wrote: The problem comes down to incentives... it can all be summed up by the last line of the article. Mr. Booth: "Nobody gets a promotion from publishing a negative study."

John Warren Replied: or more government grants.

Denis Grady Wrote: Not surprising; many researchers gain funding by publishing. More publications, more money. By the time anybody finds fault, it can be years, and nobody remembers! As for drugs, nothing works 100% of the time in 100% of cases, and some studies may be done in areas the companies decide not to pursue because of failure. The FDA approves drugs with data for certain indications. Why would one publish unfavorable results in areas not part of the application? As for doctors never seeing unfavorable data? Have you ever looked at the FDA-approved literature and package insert? About one paragraph of indications and 75 paragraphs of warnings. This area is way too complex to cover in an "article".

William Brown Wrote: "However, the authors found that a quarter of the trial data—most of it unfavorable—never got published because the companies never submitted it to journals." Product developers have been known to downplay glitches or "fudge" the efficiency of equipment being perfected. In reality, observing customers are not fooled. However, the customers do require that the delivered product performs better than anything else currently available. The burden of integrity is much higher for drugs and medical procedures that affect human lives.

Janice Stanger Wrote: Here is one finding that always gets duplicated: the healing power of a whole-foods, plant-based diet to prevent and reverse cardiovascular disease and lower the risk of just about any other chronic illness (often reversing these as well), including type 2 diabetes, headaches, arthritis, hypertension, high cholesterol, and even some cancers. Enjoy a wide variety of tasty whole plant foods - veggies, fruits, beans, potatoes, whole grains, nuts, seeds, herbs and spices. Leave processed and animal foods off your plate for spectacular - and reproducible - results with only positive side effects. http://perfectformuladiet.com/plant-based-nutrition/science-based-nutrition-and-health/

Wade Yoder Replied: Very true "still today" ;) Like Hippocrates said in 460 B.C., nearly 2,500 years ago: "Let thy food be thy medicine and thy medicine be thy food."

William Glasheen Replied:
> Leave processed and animal foods off your plate for
> spectacular - and reproducible - results with only
> positive side effects.
MARKETING ALERT!!!! Janice never misses an opportunity to hawk her wares. Life itself has a side effect called death.
I'm not opposed to a little capitalism lurking around the scientists, and I live a healthy lifestyle (which includes healthy animal-based foods). I do, however, object to hyperbole and fads masquerading as truth and good health. Caveat emptor.

Victor Kwong Replied: Janice, I thought what you wrote was sarcasm. I never realized you really meant it.

William Skiba Wrote: IDK - to me this article is quite positive - it tells me that overall the system works: First, some experiments are done - often using a statistically unsound methodology, due to the economic environment in which the researchers operate (i.e. publish or perish). Second, another set of researchers, with different economic incentives (killing your customers = bad for business), takes over and weeds out the bogus results from the first group. Third, a government regulator with yet another set of incentives (maintain a vibrant pharma industry + simultaneously maintain the trust of the electorate) gets involved in controlling and reviewing human-based experiments, and weeds out the bogus results that somehow made it through the first two stages. Sure, some bogus stuff slips through all of the screening stages. But nothing in life is perfectly certain. Although it may be costly, and may be inefficient, the currently used process seems to be working pretty decently.

Frank Seldin Replied: If I synthesize this all the way down, what it says is that the FDA's standards for medical research (the basis for pharma research) are much tougher than the standards applied at university research centers and the journals they publish in. Maybe the simple solution is to apply the FDA standards to those universities, and have the journals adopt those standards as well.

GEORGE DIMOPOULOS Wrote: It is somewhat common to see initial differences in basic medical research investigating novel ideas. As a trained biomedical research scientist, I can attest to the fact that I have seen incongruous data in the literature, especially in areas of research with only a finite number of published articles. From my experience, the primary reason for contradictions in the literature of "new" research is the variable approaches in methodology used in reproducing the data.

Jason Rife Wrote: I am a biomedical researcher at a university. Reproducing results is a big problem and isn't as easily assigned as simply fraud, although I'm sure that is sometimes a reason. One dirty secret is that experiments can't always be replicated within the same lab (full disclosure: I would never publish those results). The article makes it look like academic labs publish the bulk of irreproducible results. I can say that on occasion we have had a very difficult time reproducing work from big pharma. Another big problem is from the peer-reviewed journals. They don't like to publish too many details. They are constantly pushing for shorter papers that report the exciting experiments, leaving the less exciting but critical experiments as "data not shown". The pace of science is very fast and very competitive. I believe we stand a better chance of replicating published experiments from 40 years ago than those published one year ago. Finally, scientists have different skill levels. Can you faithfully reproduce a wonderful meal from a cookbook? Learned technique, skill, and innate talent go a long way to getting experiments to work.
Preston Garrison Wrote: Killing a particular tumor may depend on "synthetic lethality," the loss of two functions, one to a mutation in the tumor and another to the drug that is used. If the drug is tested on tumors that don't have the mutation, it won't work. You need to know the somatic mutation profile of the tumor. And of course heterogeneity within a particular tumor can complicate things even more.

paul rhoades Replied: the thoughtful, well informed and reasonable comment above has received 2 recommendations at this time. The divisive, pointless, mean-spirited comment below has received 9. Hmmm... on another note, I really used to love the WSJ.

Victor Kwong Replied: WSJ does not censor comments unless they are extremely abusive. Are you having a problem with this policy? Or are you saying that WSJ should not publish this article at all, so that no negative comment can be posted about the scientific research community?

Rocco Papalia Wrote: OMG! This article actually suggests that the white be-robed, government-funded saints of ivy-coated academia may actually be biased toward false positives in a quest ... for filthy lucre?? And for years we've been told that this is an exclusive ailment of the commercial set; pure researchers are supposed to be above all that... tsk tsk.

Peter Ashley Replied: You are giving bias and aspersion to what is not a mechanical process. To quote an opinion article from an editor in, I think, Nature, 96% of research is useless. Good research has to have the paradigm right, the experiment right, the equipment right, the data collection and recording right, and the math right, and I probably left out a dozen other factors; I haven't worked in a national lab in some time. Somehow, out of reviews and criticism, pre-experiment and post hoc, we hope good data comes out that the scientists interpret with a correct paradigm. Messy, messy, messy; science is a thoroughly human affair.

James Hussey Replied: Peter, true, science is messy. Then why represent science as something else when trying to get Congress or the private market to pay for it? Clearly, the companies thought or were told that the research was reproducible and robust. Otherwise, why pay for it? Clearly the companies did not feel like they got much for their license money. In fact, one of the major arguments for funding government research is that "it creates jobs". If research is messy and uncertain, why not drop the "commercial value creation" facade and tell Congress that "funding this research may, in fact, not produce anything of value"? That is the dilemma for scientists. No Buck Rogers, no bucks. You have to sell the vision and potential, and eventually produce something, or argue for funding science "to advance mankind and knowledge". Cannot have it both ways. I think scientists realize the $60 billion in funding would be much smaller if "results were not important".

Rocco Papalia Replied: Scientists susceptible to ordinary human weaknesses? Who knew?? They never seem to mention this to the press....

LOUIS COURY Wrote: "...a quarter of the trial data—most of it unfavorable—never got published because the companies never submitted it to journals." One point not made in this article is that it is actually quite difficult to publish negative findings. When an experiment fails, it is usually very difficult to prove conclusively why. Note that the examples of published negative findings mentioned in this article are all cases where a previously published theory or hypothesis was overturned.
That situation is rarely the case for clinical trials.

Chi Li Wrote: This problem is also observable in our next generation of scientists and researchers, namely in the results and conclusions that lead to many awards in science competitions for high school students, who are using the awards to get into top universities. The results are not reproducible and the methodologies are questionable!

David Rosenberg Wrote: And I thought that publishing articles that added nothing to the knowledge base was bad!

TODD KUENY Wrote: Cheating has been around for 1,000s of years. My unscientific and simple calculations show that the expected cheating rate you get by combining MBAs and others into a group is about, surprise, 64%, just as found by Bayer. Details here: http://lwgat.blogspot.com/2011/12/falsified-medical-studies-norm.html

Jerome Ewing Wrote: God bless WSJ for giving this issue the publicity it deserves. I grew up on a leading research campus and observed that scientists, while almost always brilliant and generally admirable, have a very high view of their personal integrity as a class (relative to the rest of us) which does not correspond to my experience. Businessmen have a degree of integrity, relatively speaking, because so few of us try to claim that they are not in it for the money!

NATASHA LIFTON Wrote: The academic community is under no obligation whatsoever to present accurate, honest science. University labs do not have regulations, and fines are not imposed by a governing body for mistakes. Why would someone assume that universities are honest and accurate about science? Look only to their athletic departments to see how easily they abuse youngsters for cold hard cash...

ADAM HENDRICKS Wrote: Inconsistent study results can occur for many reasons, including the following:
- Flawed assumption about equivalence in study designs.
- Bad data collection in one or both studies.
- Bad data aggregation in one or both studies.
- Bad analysis in one or both studies.
- Bad programming in one or both studies.
- Deliberately inaccurate findings in one or both studies.
- Flawed assumption of equivalence of treatment.
- Bad bioequivalence between two batches of what should be the same drug or biologic.
Sometimes it takes a series of independent studies over time to correctly see or infer what is true and what is bunk.

John Zebley Wrote: This might come off as conspiracy theory-esque, but cancer has already been cured. Inform yourselves and do something about it.

FREDRIC WILLIAMS Wrote: The underlying problem, alluded to in a couple of posts, is that there are significant financial incentives and status associated with producing research outcomes that either disprove some previous theory or offer proof of some new theory. These financial incentives come primarily from government and stimulate status and financial enhancements from both government-run "public" universities and private "tax-exempt" universities. When you pay scientists more for one result (it works!) than you do for another (it failed!), you will get predictable outcomes. Fraud, to those less sympathetic; honest error, to those who are kinder. Most people know about the military-industrial complex that Dwight Eisenhower thought a danger half a century ago -- but few know about the federal-university research danger described in the same speech. Corruption is bought and paid for in America -- and it will remain a curse to the economic well-being of its people until it is halted.
RICHARD EGGERMAN Replied: Thank goodness none of this is true of those finding dramatic things happening in global warming!

Eric Sharps Wrote: The article does a fine job of highlighting the "dirty little secret" of scientific research; however, the example of Pfizer/Medivation's failure in their late-stage trial of Dimebon to treat Alzheimer's is not a good example. Failure was foreseeable. A free report at my web site (www.foursquarepartners.com) posted over one year prior to the drug's failure shows why.

John Hedley Wrote: And people are willing to bet on the conventional wisdom of global warming in this kind of environment? Guess I have to add 'scientific rigor' to the oxymoronic dustbin.

William Glasheen Replied:
> John Hedley wrote:
> Guess I have to add 'scientific rigor' to the oxymoronic dustbin.
It's inappropriate to paint with such a broad brush. Life and capitalism itself are all about incentives. Fraud is its own separate issue.

Simon Gruber Replied: Do you know what rigor means?

John Hedley Replied: If you mean a strict adherence to accuracy, yes, though it would seem the form practiced by the pure research community is neither methodical nor scrupulous.

JOHN SHUEY Replied: Just as we may assume that there are enough reports about Mr. Cain's alleged problems with women that some of them must be true, we are safe in believing that the gist of the main global warming narrative is correct. Agreement is massive - too massive to be a fraud.

David States Wrote: The difference between business and academics is that businesses acknowledge conflicts of interest and pay for their mistakes. In academics, very few people acknowledge the huge bias toward publication of positive results, and once a paper is out, it is way too easy to just move on to the next problem. NIH is setting itself up for a huge fall by not addressing these issues.

JOHN SHUEY Wrote: We are losing our ability to judge well. We use the term "hero" much too loosely, and we are too quick to accept people as "experts" because they have a title and a college degree. The result is war after war with negative or inconclusive results, political gridlock, a corrupt educational system, and a massive financial meltdown. This is just one more story about failed institutions.

jack higbie Wrote: Since there is so much money at stake, maybe the industrial labs should invite the academic researchers to their labs to see if they can reproduce their results there. Maybe the academic researcher has a special "knack" he was unable to communicate in his paper.

Jose Hernandez Wrote: This article highlights a key problem of modern medical research (albeit one not limited to medicine): specifically, the failure to publish negative results. But I'll just forewarn: there are other intellectual issues that contribute to the limitations of medical research. Here I am referring to the post-Newtonian view that all natural processes are controlled by mechanistic, deterministic considerations.

William Lovin Wrote: "This is one of medicine's dirty secrets: Most results, including those that appear in top-flight peer-reviewed journals, can't be reproduced." Sure... But when the University of East Anglia makes up a truckload of fabricated data to support a man-caused global warming hypothesis, it's absolutely gold-plated!... But, as per Glasheen (above), "Fraud is its own separate issue." OK, why are so many climate "scientists" abject fraudsters? I thought the scientific method was supposed to flush out these trolls.
Clearly, something is broken in the practice of science, and the body politic is too stupid to understand why it's important that there is a scientific method and that science (and society) only advances through strict adherence to it...

BRIAN KULLMAN Replied: It gets worse. Climate science is based largely on computer modeling, not on laboratory science. There are no controlled experiments that can be tested independently in other labs. Climate science is closer to economics than to science.

Kent Nauman Wrote: The foundation and structure of science itself is false. Just two examples: 1) If you look in genetics for positive mutations in humans and the fruit fly, you will find none. On this basis evolution, and therefore mental health, firmly stand, as man is a machine according to medical science. 2) At Mt. Vernon, on the wall, there was in the 1970s a uniform of George Washington's, shot full of holes, with General Washington in it. He says that he was not harmed, in the prophecies of George Washington. This violates the basic rule of physics that two solid objects cannot occupy the same place. Do you believe modern physics or General Washington?

John H Noble Jr Wrote: The doctrine of reproducibility stands as the bedrock of valid science. Virtually every scientist has been socialized and trained to conduct replicable research. Commercial interests prefer secrecy over transparency and single-trial over multiple-trial, replicated research. Why? It is cheaper and faster in getting new drugs to market. Congress has promoted, through the Prescription Drug User Fee Act (PDUFA), an abbreviated version of the scientific method, a.k.a. "FDA science," which permits small-size and largely non-replicated research to justify approval of drugs for the market. Trade secrecy stands in the way of independent replication of reported results. Sadly, abbreviated research leads to the compromised practice of medicine and otherwise preventable loss of lives and increased mortality, as well as a waste of private and taxpayer money. How do we reinstitute bona fide science in the conduct of biomedical research? In my opinion, it must start with congressional reform. FDA bureaucrats and the biomedical research establishment will embrace the doctrine of reproducibility if and only if Congress mandates it through a radically revised PDUFA.

Dan Eustace Wrote: Thank you for the article, representing a business-ethics mindset, and for the comments, coming from perceptions of the ethics of academia and the discipline of science. What are we to make of it? Science is testing hypotheses, getting results, and interpreting them. Sometimes, and it is encouraged, interpretations lead to generalization. In the testing, results succeed or fail, as we know. Harford's book "Adapt" might be a point of information to interject here. In essence, the world is too complicated for "average" simple answers. The ethics of business and enterprise seem not to be the same as those of scientists who realize certain results and interpret them. In time, "things change"; even our world view changes. We go on. In business, it is closer to a zero-sum game (winners and losers), and standards of progress, although changing all the time, are judged as fixed. Going back to the article and Harford's "Adapt": the viewpoints of the business-focused article and the trial-and-error book are different. It is imperative to come to grips with these differences to explore how to put things into perspective and use the outcomes productively.
My comment is not to find fault with either view but to learn from them. Thank you for reading.

BRIAN KULLMAN Wrote: When money and science compete, money wins.