# Thread: "Is there something wrong with the scientific method?"

1. Originally Posted by Randomnity
Hmm....this is why I think stats courses should be mandatory when you start research. The p-value threshold is completely arbitrary, and by definition it means that, when there is no real effect, results that extreme will still arise from random variation 5% of the time (at p=0.05), which can add up when thousands of studies are published yearly. It's only "magic" because the top journals decided to set that as a standard. They could easily have chosen 10% (p=0.1) or 1% (p=0.01) instead.

It isn't a flaw of science that the p-value isn't absolute, though. The popularity of "the cult of the p-value" signals a flaw in the teaching system and perhaps even our general understanding. Knowing that "statistical significance" isn't everything, and being aware of bias, just means you actually have to use your brain to interpret results, be objective, and look at more than one study. Shouldn't be that hard for a scientist.
How is that easy for a scientist, or anyone? Sounds like you are asking for perfection.
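The "5% of the time" claim above is easy to check directly. Here is a quick sketch in Python (standard library only; the group size, number of runs, and normal-approximation cutoff are my arbitrary choices for illustration): when there is no real effect at all, roughly one run in twenty still crosses the p<0.05 line.

```python
import math
import random

random.seed(42)

def significant_at_05(n=100):
    """Run one 'study' where the null hypothesis is TRUE:
    both groups are drawn from the same normal distribution."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    mean_a, mean_b = sum(a) / n, sum(b) / n
    var_a = sum((x - mean_a) ** 2 for x in a) / (n - 1)
    var_b = sum((x - mean_b) ** 2 for x in b) / (n - 1)
    # z statistic for the difference in means
    z = (mean_a - mean_b) / math.sqrt(var_a / n + var_b / n)
    return abs(z) > 1.96  # two-sided 5% cutoff (normal approximation)

trials = 2000
false_positives = sum(significant_at_05() for _ in range(trials))
print(false_positives / trials)  # hovers around 0.05
```

Nothing here depends on the threshold being 0.05; swap the 1.96 cutoff and the same logic gives you the false-positive rate for any other convention.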

2. I couldn't be bothered reading the whole article. If they stated their contention clearly and got to the point, I might have.

From what I did read: the cocaine+mouse test is ridiculous; aren't you meant to at least try to limit possible variables? Even if they get a statistical distribution, the test is stupid. It's like saying A=BCDEFGHIJKL, let's leave B, C, D, E, F, G, H, I, J and K to do what they want, and see what distribution we get when changing L. Again, in the other cases they look for tooth fairies or make up catchphrases rather than trying to see what they are leaving out of the system. Maybe no one has the time or resources to be thorough, I'm not sure. Still, their method seems so obviously flawed that I wouldn't use it as evidence of anything.

Apart from that, the way it was written reminded me more of a new age business book than a scientific analysis. Or maybe one of those websites where the writer explains how they've disproved relativity.

3. Originally Posted by Randomnity
Hmm....this is why I think stats courses should be mandatory when you start research. The p-value threshold is completely arbitrary, and by definition it means that, when there is no real effect, results that extreme will still arise from random variation 5% of the time (at p=0.05), which can add up when thousands of studies are published yearly. It's only "magic" because the top journals decided to set that as a standard. They could easily have chosen 10% (p=0.1) or 1% (p=0.01) instead.

It isn't a flaw of science that the p-value isn't absolute, though. The popularity of "the cult of the p-value" signals a flaw in the teaching system and perhaps even our general understanding. Knowing that "statistical significance" isn't everything, and being aware of bias, just means you actually have to use your brain to interpret results, be objective, and look at more than one study. Shouldn't be that hard for a scientist.
Thank you for assuming I graduated without knowing what a p-value is. I wasn't talking about accepting the results of a shitty study because p<0.05. I was talking about accepting the results of a randomized controlled trial with p<0.05 (or p<0.01). If the p-value is used to support contradictory or highly variable results 41% of the time in a research design that minimizes sources of bias/error (the "gold standard"), that blows me away. RCTs are not infallible, but the percentage is much higher than I expected. Of course, that number could diminish over time...

4. Sorry, I wasn't talking about you specifically, just the hordes of grad students/undergrads/etc. who don't understand what a p-value is (and stats in general). I see that it came off as directed at you, and that wasn't my intention; I was just using you as a springboard for ranting.

Is the 41% from the article? That's another good point showing that the p-value isn't the important thing at all: you can get a nice p-value from contradictory or highly variable results, as you say, RCT or not. This underlines the importance of repeating studies and comparing your data to all the data in the field (i.e. from independent labs).

edit: I've edited my earlier post to remove the reference to you, to better reflect my intentions.
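The replication point is easy to put in numbers. A back-of-the-envelope sketch (the study counts below are arbitrary illustrations): a truly null effect has a good chance of looking "significant" *somewhere* once enough independent studies run, but the chance of the same null effect clearing p<0.05 twice in a row is tiny.

```python
alpha = 0.05  # the conventional significance threshold

# Chance a truly null effect looks "significant" at least once
# across k independent studies: 1 - (1 - alpha)^k
for k in (1, 5, 20, 100):
    print(k, round(1 - (1 - alpha) ** k, 3))

# Chance the same null effect comes up "significant" in BOTH of two
# independent studies -- why replication is such a strong filter:
print(round(alpha ** 2, 4))  # 0.0025
```

At k=20 the at-least-one-false-positive figure is already about 64%, which is the sense in which lone unreplicated p<0.05 results "add up" across a large literature.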

5. Originally Posted by Randomnity
Sorry, I wasn't talking about you specifically, just the hordes of grad students/undergrads/etc. who don't understand what a p-value is (and stats in general). I see that it came off as directed at you, and that wasn't my intention; I was just using you as a springboard for ranting.

Is the 41% from the article? That's another good point showing that the p-value isn't the important thing at all: you can get a nice p-value from contradictory or highly variable results, as you say, RCT or not. This underlines the importance of repeating studies and comparing your data to all the data in the field (i.e. from independent labs).

edit: I've edited my earlier post to remove the reference to you, to better reflect my intentions.
Ah, I see, and I agree with your point. The 41% study was mentioned in the article, but I don't think he gave the source. I plan on tracking it down when I have time.

6. What are your thoughts on Science Based Medicine, as contrasted with Evidence Based Medicine? The charge is that evidence based medicine places undue trust in the results of randomized controlled trials without requiring a plausible pathology. One consequence, for example, is that placebo-controlled homeopathy trials which found significant (p<0.05) effects are deemed to be evidence of efficacy. Science Based Medicine proponents suggest this is insufficient.

The downside is that more controversial (but still scientifically plausible) hypotheses (e.g. vaccine injury risk) might not be researched, due to ideological or political reasons.

There is discussion about the differences at the following blog. http://www.sciencebasedmedicine.org/
I would post the specific links, but that site seems to be down right now. (To be updated)

7. I haven't read the blog much, but I read a few articles there in the past and found them very interesting. Mostly those were on how homeopathic "practitioners" prey on desperate people like cancer patients and encourage them to avoid medical treatment. I haven't read any of their comments on EBM; it sounds interesting, though. I'll have to remember to look it up when the site is up again.

I think that in theory both science and evidence are pretty essential, although some areas allow more flexibility than others (sore muscle remedies vs. cancer treatments, and so on). I can't really say more than that without a more precise description of the two approaches and seeing them in action, I think.

8. Originally Posted by Architectonic
What are your thoughts on Science Based Medicine, as contrasted with Evidence Based Medicine? The charge is that evidence based medicine places undue trust in the results of randomized controlled trials without requiring a plausible pathology. One consequence, for example, is that placebo-controlled homeopathy trials which found significant (p<0.05) effects are deemed to be evidence of efficacy. Science Based Medicine proponents suggest this is insufficient.

The downside is that more controversial (but still scientifically plausible) hypotheses (e.g. vaccine injury risk) might not be researched, due to ideological or political reasons.

There is discussion about the differences at the following blog. http://www.sciencebasedmedicine.org/
I would post the specific links, but that site seems to be down right now. (To be updated)
The site must still be down, because it isn't working for me. I read something on the Skeptic's Dictionary site (http://www.skepdic.com/sciencebasedmedicine.html), and I'm not sure I understand the process. Specifically, I don't know how someone goes about calculating "prior probability". It sounds too subjective... the prevailing opinions of the day should not determine the likelihood of something being true or false.
I'm reminded of how the first body of evidence on the nature of light "proved" that light acted as a wave, via Young's double-slit experiment. If my understanding of prior probability is correct, the prior probability of light being a particle would have been low, rendering those results less powerful/significant than those saying light is a wave, when in reality light acts as both. In this example, the concept of prior probability would not lead us to more accurate results, but would instead skew the results towards what we think they should be.
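For what it's worth, the usual way "prior probability" gets formalised is Bayes' theorem: the chance that an effect is real, given a "significant" trial, depends heavily on how plausible the effect was beforehand. A minimal sketch (the power, alpha, and prior values below are illustrative assumptions, not anyone's published figures):

```python
def posterior(prior, power=0.8, alpha=0.05):
    """P(effect is real | trial was 'significant'), by Bayes' theorem.
    power = P(significant | real effect); alpha = P(significant | no effect)."""
    return (power * prior) / (power * prior + alpha * (1 - prior))

# Illustrative priors only: 0.5 for a mechanistically plausible drug,
# 0.001 for homeopathy (no known mechanism for an effect).
print(round(posterior(0.5), 3))    # 0.941
print(round(posterior(0.001), 3))  # 0.016
```

So under this framing a single p<0.05 homeopathy trial barely moves the needle, which is the Science Based Medicine argument in a nutshell. It does nothing to answer the objection above about where the prior comes from, though; the subjectivity worry survives the arithmetic.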

Now, regarding clinical RCTs: overall, we haven't mastered translating the molecular-level interactions between medical interventions and the body into the resulting clinical effect. Until we can do that, and thus determine and control significant confounding variables, RCTs will never be truly unequivocal; in the meanwhile, we should press on with the best tools we have available while acknowledging their weaknesses.

EDIT: I'm so wordy! Let me know if you need clarification on these long sentences.

9. Originally Posted by erm
Well, why don't you explain how some simple formal logic, like 2 + 2 = 4, is based on empiricism?
I know it isn't quite relevant to the argument, but 2+2=4 is one of the 45 equations you have memorised to allow you to perform addition in base 10 with Arabic numerals. Its logic is fundamental to the natural numbers (and reality), though.
