Streams

Studies Show.... Or Do They?

Friday, December 10, 2010

Jonah Lehrer talks about his New Yorker article on the limitations of the scientific method and how difficult it is for studies to actually prove anything.

Guests:

Jonah Lehrer

Comments [24]

oscar from ny

...Tesla said that the world works on luminiferous aether; this is why the Aztecs described a new beginning in aether. You ever try to put two magnets together? + - ? This energy, Tesla said, combined with radio waves, can create a truly free world without wasting fossil fuels, which other generations will need to cross space... Well, too bad: whoever doesn't believe this will not get to see this new world, an electric world. Picture everything hovering, from cars to bikes, sort of like Back to the Future. Manhattan will simply be covered with a magnetic field; cars will hover around, and one day you will even be able to have an accident-free rocket that can boost you anywhere. And since all these magnetic fields will be controlled by computers, even if you're drunk you can just type your address and the "lightweight" vehicle can take you home. These magnetic floors, or devices in the pavement, can harness their energy through the sun and everything else too... man will one day reason

Dec. 11 2010 09:49 PM
Ben from Brooklyn

One important problem in the peer review process is that academic journals have a tendency to publish positive findings rather than negative findings, meaning basically that they publish research where they "find something" rather than where they "find nothing". This occurs independently of drug company influence. It's simply an editorial tendency by honest academics.

The basic standard in much of social science (and medical) research is that if there is a 5 percent or greater chance that a study's results were obtained by chance, then we conclude that the study found nothing in terms of predicted hypotheses.

If 20 studies on a particular phenomenon are done and there is only one positive finding, there is a good chance that this one study will be published and the other 19 will not, either because the others are rejected, or because the researchers never submit the studies for publication because they know they will be rejected. However, using 5 percent as a critical value means it's expected that if you did 20 studies on a phenomenon, one would have positive results simply by chance (i.e., a false positive, or seeing something that isn't there).
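The arithmetic in this comment can be sketched with a minimal simulation (the study count and threshold mirror the comment's example; none of this comes from an actual dataset): with no real effect present, each study still has a 5% chance of a "significant" result.

```python
import random

random.seed(42)

ALPHA = 0.05      # conventional 5 percent significance threshold
N_STUDIES = 20    # studies of a phenomenon with no real effect

# Under the null hypothesis a study's p-value is uniform on [0, 1],
# so each study has a 5% chance of a "positive" finding by chance alone.
false_positives = sum(1 for _ in range(N_STUDIES) if random.random() < ALPHA)

print(f"Significant results out of {N_STUDIES} null studies: {false_positives}")
print(f"Expected by chance alone: {N_STUDIES * ALPHA:.1f}")
```

If only that one chance "hit" gets submitted and published, the literature records an effect that was never there.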

So, all in all, it is important to look at the totality of research on a problem, however impossible that may be, since most research in academia never sees the light of day in terms of publication.

Dec. 10 2010 03:19 PM
Amy from Manhattan

What were the sample sizes in the old vs. the new trials? And who was included? Some older studies didn't have much diversity among the subjects.

Psychology & the brain are much harder to pin down than the physical sciences. Every well-done medical trial has a section on what the possible sources of bias or other limitations were.

Finally, parts of this discussion remind me of Evelyn Fox Keller's book "Reflections on Gender and Science," which showed how societal constructs influence the way science is done & interpreted. She was talking about gender constructs, but there are others that may be more relevant to what Jonah Lehrer wrote about.

Dec. 10 2010 11:49 AM
gregb

There is an enormous gulf between a problem with the scientific method and a flawed implementation of it. If listeners leave the show with the impression that the method is flawed and that science, in general, is unreliable, all you've done is feed the flames of climate deniers and young-earthers and (sorry, Leonard) vaccine fearmongers. The US is rapidly losing its leadership in science and engineering; please don't contribute to this slide.

The Scientific Method is probably one of the top ten insights of humankind: it provides a window into objective truth undistorted by emotion and flawed experience, or at least the closest thing to objective truth we have. It is responsible for our improving lifespan, the amazing technology we benefit from every day, and, yes, many environmental ills. Overall, despite real problems in implementation, the scientific method is our most reliable map out of ignorance toward useful knowledge.

Jonah is correct to report that many scientific findings, particularly in the biological sciences, are deeply flawed. The FINDINGS are flawed, not the method. The root cause of his concern is "survivor bias". That is, a researcher follows a humdrum day of rather dull results when suddenly they observe a large effect and publish some amazing correlation. Of course, what really happened is a statistical fluctuation that occurred right on probabilistic schedule. That fluctuation gets their attention, and the less-capable scientist publishes it as a "finding". Naturally the effect fades over time; it was an outlier to begin with.

What they should have done was dilute the finding with all the past dull days, at which point the correlation is a great deal weaker. There are many institutional reasons the dull work remains unpublished, combined with mere venality (read a few chapters of Ben Goldacre's book, http://www.badscience.net/; he nails the problem clearly and amusingly).
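The survivor-bias mechanism described above can be illustrated with a toy simulation (the counts and noise level are invented for illustration): publish the single most striking result out of many null measurements, and unbiased replications will appear to "decline" back toward the true effect of zero.

```python
import random

random.seed(0)

def run_study(true_effect=0.0, noise=1.0):
    # Each study measures the true effect plus random noise.
    return random.gauss(true_effect, noise)

# A researcher runs many studies of a null effect and publishes only
# the most striking result: survivor bias.
results = [run_study() for _ in range(50)]
published = max(results)

# Follow-up replications are unbiased draws, so they cluster near the
# true effect (zero) and the published "finding" seems to fade.
replications = [run_study() for _ in range(50)]
mean_replication = sum(replications) / len(replications)

print(f"Published effect (best of 50): {published:.2f}")
print(f"Mean of 50 replications:      {mean_replication:.2f}")
```

Diluting the headline result with all the dull days, as the comment suggests, is exactly what the replication average does.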

The press likes to report the controversy rather than the consensus facts. Man bites dog, and so on...

Dec. 10 2010 11:31 AM
Susan from Manhattan

Let me give another example of effect modification that is NOT being considered by the American Academy of Pediatrics Nutrition Committee. They looked at one single study with a dropout rate of 40%. The infants were supplemented with iron from 1 month to 6 months of age. We know that iron supplements, as well as formula, decrease the absorption of iron overall, even though the total amount that gets into the infant may increase. The AAP thought these were breastfed babies; if you read the methods, they eventually were mostly formula-fed. The study ran three tests: one on physical development, one on visual acuity, and one on mental development. Only one test showed a difference between those who were supplemented and those who were not. By throwing out some of the subjects, down to a sample size of 17, they were able to then get a result for another test. They did not test for many of the side effects that have been found for iron supplements, including increased infections, slower growth, and, in some studies, lower cognitive development when iron levels were too high.

On the basis of this, the American Academy of Pediatrics made a recommendation for EXCLUSIVELY breastfed infants (not the study group) to start supplementation with iron at four months of age (not the age range in this study of a mere 20 infants).

If you actually tested that age range of supplementation with exclusively breastfed infants, I would not expect the results to be the same as supplementing mixed fed infants over a longer period of time.

This is an example of the current sloppiness in the medical realm.

Dec. 10 2010 11:16 AM
Susan from Manhattan

It always amazes me when the randomized clinical trial is held up as a "gold standard". In fact, much of the research I read does not delve into an equally important factor that leads to differences in findings: "effect modification". This is where the population you study is always slightly different. For instance, if you test a heart medication in men of a certain age, you may get one particular result; if you try it in a group of women, that heart medication may act differently. I've seen trials where beta-carotene was given to smokers in Norway to prevent cancer, and they saw no result. What no one really pondered was that these Norwegians were ALREADY ingesting lots of fish replete with retinol. The problem now is that many people never read the methods section; they only read the abstracts. Furthermore, science journalists seem to repeat back what the researchers say rather than investigating deeper themselves.

Dec. 10 2010 11:07 AM
J Fuller from New York, NY

Great topic, and one that deserves even more coverage. I would like to see journals accept proposals based on the hypotheses and methods, prior to data analysis. I wonder if Jonah Lehrer or anyone else thinks this could have an impact.

Anyone have any thoughts about this or other ways this problem could be addressed?

Dec. 10 2010 11:05 AM
DarkSymbolist from NYC!

"Really smart?"

I don't get it... what was different about what he was saying from the way science has ALWAYS developed? It's a process of discovery, for crying out loud...

The guy seemed to be spouting nonsense to me and saying absolutely nothing new at all.

Dec. 10 2010 11:01 AM
JM

Science is influenced by money, cultural fads, and the echo chamber that is the university. I know you guys want something to believe in, but people are people, and people do science.

Dec. 10 2010 11:00 AM
jm from NYC

Karl Popper emphasized that neither proof nor confirmation is central to science. What characterizes science is the falsifiability of its theories.

Dec. 10 2010 10:59 AM
LennieF from Manhattan

Isn't this a function, at least to some degree, of Gödel's theorems?

Also, sorry that I missed the first few minutes, but I assume he isn't talking about hard sciences like chemistry and physics.

Dec. 10 2010 10:57 AM
gary from Newark

I haven't heard anything that strikes me as interesting here. Why doesn't he cite examples from physics, materials science, and chemistry that support his thesis? He seems to be lumping the nonsense that they do in the soft sciences in with the other branches (physics, chemistry, etc.).

Dec. 10 2010 10:57 AM
pfox from Brooklyn

What's the impact of the pressure on academics to produce original research? When the research in a field or on a particular issue has established one point of view, isn't there then an opening for an academic who wants to be original to make a case against what is now the status quo?

Dec. 10 2010 10:56 AM
Joeseph

Wow, what a poor understanding of the scientific method your guest has. Man do Americans have a bad understanding of science.

Dec. 10 2010 10:53 AM
Hal from Brooklyn

I read the article.

It is my understanding that the 'decline effect' was coined to describe results in paranormal (PSI) research, where data tends to regress to the mean as more samples are taken.
In more scientific pursuits, 'decline effect' can be ascribed to things like poorly designed studies, publication bias, or simply that the research disproves the hypothesis in question.

Dec. 10 2010 10:51 AM
Ken from Little Neck

Maybe I'm missing something, but isn't this one of the great strengths of science? If we discover that something we thought was true maybe isn't, doesn't that lead to greater discovery?

Dec. 10 2010 10:50 AM
Tony

http://www.rawstory.com/rs/2010/12/drug-company-ghostwriters-author-work-bylined-academics-documents-show/

Hmm, some studies are not done honestly. Maybe we are just catching some drug companies cheating.

How about (real?) hard sciences?

Dec. 10 2010 10:49 AM
Matt

Brian,

Your guest is very confused and doesn't understand basic scientific principles. There is a major difference between the physical sciences, like physics, and social or medical sciences. The "decline effect" is nonsense. This is bad, bad science journalism.

Dec. 10 2010 10:49 AM
Julian from Manhattan

Science is all about overturning previous theories based on new evidence or better measurement. This has always been the case, for hundreds if not thousands of years. What is new here?

Dec. 10 2010 10:49 AM
Kelly from Greenpoint

But Science was my new religion! Now you're saying it isn't infallible? Does the Science Pope know about this?

Dec. 10 2010 10:48 AM
superf88

Interesting timing on this: just a few days ago, the news that aspirin has been shown to kill cancer was trumpeted across the world.

NOT because of the discovery, which had already been made, but because of the high quality of the scientific tests that validated it!

Dec. 10 2010 10:46 AM
Tonky from Red Hook

Jonah,

I read your article last night. Very cool.

Would you please explain the precognition study Schooler used to test the decline effect?

Specifically, I didn't understand the process as described. How was a false positive attained?

Dec. 10 2010 10:46 AM
John james from Forest hills

Everyone knows psychiatrists are all a little off, so I am not surprised they get different answers to the same question all the time.

Dec. 10 2010 10:44 AM
