
  1. #1
    The Eighth Colour Octarine
    Join Date
    Oct 2007
    MBTI
    Aeon
    Enneagram
    10w so
    Socionics
    LOL
    Posts
    1,366

    Scientific dishonesty

    If you were a scientist and you designed a series of experiments with a variety of experimental measures, would you consider it dishonest to report only a few of these outcomes, particularly the ones that showed a significant change rather than the ones that didn't?
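
    To make that first question concrete, here is a minimal simulation (illustrative only: the outcome count, group sizes and measurements are all hypothetical). Even when a treatment does nothing at all, a study with twenty outcome measures will usually produce at least one "significant" result to report:

    Code:
    # Null experiment with many outcome measures: some will cross p < 0.05
    # by chance alone, and reporting only those makes noise look like a finding.
    import math
    import random
    import statistics

    def t_test_p(a, b):
        # two-sample t-test p-value via the normal approximation
        # (adequate for n = 50 per group)
        se = math.sqrt(statistics.variance(a) / len(a) + statistics.variance(b) / len(b))
        t = (statistics.mean(a) - statistics.mean(b)) / se
        return 2 * (1 - 0.5 * (1 + math.erf(abs(t) / math.sqrt(2))))

    random.seed(1)
    n_outcomes, n_per_group = 20, 50
    significant = []
    for outcome in range(n_outcomes):
        control = [random.gauss(0, 1) for _ in range(n_per_group)]
        treated = [random.gauss(0, 1) for _ in range(n_per_group)]  # no true effect
        p = t_test_p(control, treated)
        if p < 0.05:
            significant.append((outcome, p))

    print(f"{len(significant)} of {n_outcomes} null outcomes came out 'significant':")
    for outcome, p in significant:
        print(f"  outcome {outcome}: p = {p:.3f}")

    On average, about one in twenty truly null outcomes crosses p < 0.05, so a report that mentions only the winners reads like a discovery.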

    If you had set a series of thresholds for what would be considered a significant change, but then found that this level of change was rarely met, would you change the thresholds in the report to make the results sound more significant?

    If, to secure funding, you were required to publish your experimental protocol beforehand, would you still cherry-pick the results and thresholds that you report, despite the fact that your bait and switch would be evident in the literature?

    If you did all the above, would you expect your research to be published in 'top' journals such as Science, Nature, PNAS, NEJM or The Lancet?

    Would you be surprised to learn that the above does occur regularly?

  2. #2
    / nonsequitur
    Join Date
    Sep 2008
    MBTI
    INTJ
    Enneagram
    512 sp/so
    Posts
    1,822


    LOL Architectonic. This is a topic very close to my own heart.

    If you were a scientist and you designed a series of experiments with a variety of experimental measures, would you consider it dishonest to report only a few of these outcomes, particularly the ones that showed a significant change rather than the ones that didn't?
    Yes, I would consider this to be dishonest. However, in my field (Biochemistry), 90% of the lab heads whom I know do it. I know of many specific examples where this is the case, which is why I take anything that I read in the literature with a grain of salt - regardless of the impact factor of the journal.

    If you had set a series of thresholds for what would be considered a significant change, but then found that this level of change was rarely met, would you change the thresholds in the report to make the results sound more significant?
    Again, yes, I would think that it's dishonest. My supervisor told my colleague (another PhD student) to change her methods of analysis to produce a significant outcome - any significant outcome - and she was not able to, even after trying quite a few statistical methods. So he told her to tweak the analysis, just so that we could produce a paper. I don't respect his attitude.

    If, to secure funding, you were required to publish your experimental protocol beforehand, would you still cherry-pick the results and thresholds that you report, despite the fact that your bait and switch would be evident in the literature?
    It wouldn't be evident, because to secure funding (at least in this country), all you need are preliminary results that SUGGEST something significant. Moreover, detailed protocols are not required. I personally know of one supervisor who got a multi-million dollar 5-year defence-related grant based on dodgy data produced by one student - data that has since been shown to be suspicious (possibly faked).

    If you did all the above, would you expect your research to be published in 'top' journals such as Science, Nature, PNAS, NEJM or The Lancet?
    I would say that the impact factor of the journal (what determines whether the journal is a 'top' journal or not) has virtually no effect on this. What determines whether something gets published in a high-impact journal is its perceived significance. If anything, because of the sheer amount of data required for publication in these journals, it's more likely that simple things like Western blots and confocal images get manipulated: to remain competitive and publish before others, time is obviously a factor. There are many instances of this, and the signs are evident to people who are looking for them in the literature.

    Unfortunately, most people don't look closely at the data once something has been published, relying instead on the editors of these journals to screen out fakes. As a result, even X-ray crystallography protein structures can be faked. There was a high-profile case last year where this guy at the University of Alabama faked lots of protein structures that had been published in Nature, Science and PNAS. He'd back-generated data that no one looked at closely. The only reason he was caught was that another lab had tried to crystallise a different form of what he'd published, looked closely at his structure to figure out the packing, and noticed irregularities in the packing and solvent structure. These irregularities could not have been picked up by editors, or by non-experts in crystallography. But such experts are busy working on their own stuff, and wouldn't have time to look at the structures of things that they weren't working on.

    Additionally, it was discovered last year that about 60 protein structures published by a Chinese university in Acta Crystallographica had been faked. Apparently it's really common in China, and has become more common in these international journals since Chinese labs started publishing in them.

    Let me just emphasise here that before last year, neither I nor most people thought that crystallography data could be faked. Protein structures were considered the "holy grail" of biochemistry, the one thing that couldn't be manipulated to "prove" something.

    Would you be surprised to learn that the above does occur regularly?
    As I said before, no. It's common, and it's one of the reasons why I'm so disillusioned with academia.

  3. #3
    pathwise dependent FDG
    Join Date
    Aug 2007
    MBTI
    ENTJ
    Enneagram
    7w8
    Socionics
    ENTj
    Posts
    5,908


    Yes, I would consider such practices dishonest, but I know that a large percentage of experiments and/or statistics-based research reports are conducted with such a bias; otherwise it's hard to obtain enough funding. I personally don't do that (I don't work in research, though), and I once had to accept a lower grade (econometrics exam) because I concluded that the model I was asked to test simply did not work.
    ENTj 7-3-8 sx/sp

  4. #4
    Senior Member KDude
    Join Date
    Jan 2010
    Posts
    8,263


    Quote Originally Posted by Architectonic
    Would you be surprised to learn that the above does occur regularly?
    Of course. I watched Spider-Man. "Back to formula??!"

  5. #5
    The Eighth Colour Octarine
    Join Date
    Oct 2007
    MBTI
    Aeon
    Enneagram
    10w so
    Socionics
    LOL
    Posts
    1,366


    Thanks for your reply, nonsequitur.


    "Crystallographer faked data"
    http://www.the-scientist.com/blog/display/56226/
    Hmm..

    A few points: the prestige of those journals was considered high long before the impact factor was invented. I see it as a sort of self-fulfilling prophecy, due to the age of those institutions.
    I guess published protocols are less common in primary research, but I expect them to become more common in medical trials. They are meant to keep the authors more honest. Well, they would, if the authors actually stuck to them.

    You bring up a few interesting points about the incentives involved, and I think there are a variety of them.

    E.g., you have spent lots of time on a particular study, but the results were not as you expected. It is demoralising to put a lot of work into something and get no returns. Your career and livelihood are at stake; you must publish or perish.

    Others do it, and it is considered standard practice in many labs, so why not do it yourself?

    The incentives of faking data are more complex. I think the individuals in question often believe that their science is genuine. Perhaps they feel they do not have the resources to "prove" the data that they'd like, so they fake it. Rational people can participate in such behaviour if they calculate their odds of getting away with it to be high. Kind of like copyright infringement on the internet...
    It makes you wonder how many have gotten away with it, whether through cherry-picking of the data shown to the referees, or through the referees' own lack of knowledge and understanding (which is common).

    The media has a strange idea that once research is published in a peer-reviewed journal, it can be considered "fact". This is wrong; the true peer review occurs after publication. Unfortunately, there are lots of limitations on things like letters to the editor (and who reads them anyway?).

    How can the field of science be reformed to become more honest?

  6. #6
    Minister of Propagandhi ajblaise
    Join Date
    Aug 2008
    MBTI
    INTP
    Posts
    7,917


    People are dishonest, including scientists.

    The good thing about science is peer review, and scientists have an incentive to call bullshit on other scientists in order to advance their own scientific careers. Especially when a scientist sees someone else in their own niche, with which they are very familiar, coming up with wacky numbers and theories.

  7. #7
    Senior Member KDude
    Join Date
    Jan 2010
    Posts
    8,263


    Quote Originally Posted by Architectonic
    How can the field of science be reformed to become more honest?
    By reading my post, and realizing I'm calling them potential supervillains. It's a trope, but there's probably a little kernel of truth to learn from it. And if you see wrongs (and it seems like most of you are in agreement on what entails dishonesty), you're not going to reform anything without power. You can define power however you see fit, and decide where it applies, but either way, you can't really stop things like this without finding some leverage and allying with people who share your principles.

  8. #8
    / nonsequitur
    Join Date
    Sep 2008
    MBTI
    INTJ
    Enneagram
    512 sp/so
    Posts
    1,822


    Quote Originally Posted by Architectonic
    Thanks for your reply, nonsequitur.
    Not at all, it's something that I complain about at length in my blog lol. I've also thought a lot about this, given the ethics of the people around me.

    A few points: the prestige of those journals was considered high long before the impact factor was invented. I see it as a sort of self-fulfilling prophecy, due to the age of those institutions.
    I guess published protocols are less common in primary research, but I expect them to become more common in medical trials. They are meant to keep the authors more honest. Well, they would, if the authors actually stuck to them.
    The impact factor was invented for convenience, but it is used in the consideration of grant applications. It has become almost the be-all and end-all in publishing; in my field it's commonly said that "3/5 JBC papers is the equivalent of 1 PNAS/Nature paper". Of course, what would make more sense would be counting direct references to the paper itself... But that also has drawbacks, because even if other papers refute the original paper, they still have to reference it, artificially boosting (in an extreme case) the reference count (and supposed importance) of that paper. There's a common joke that "a retraction is also a reference!"

    There's something deeply rotten at the core of (at least biomedical) academia and this publishing culture/system, but obviously revamping the entire thing would not be possible in the short term.

    I've never published in medical trials before, but I would assume that because of commercial interests and patent restrictions, the full details of such studies will never be made public. There's another joke: if you want to get something published without having to disclose the details of your protocols, just apply for a patent.

    You bring up a few interesting points about the incentives involved, and I think there are a variety of them.

    E.g., you have spent lots of time on a particular study, but the results were not as you expected. It is demoralising to put a lot of work into something and get no returns. Your career and livelihood are at stake; you must publish or perish.

    Others do it, and it is considered standard practice in many labs, so why not do it yourself?
    That is some people's attitude, but not mine. I feel strongly that in academia it is important not to be TOO ambitious, and that the main goal should always be to serve science and its progression, not the individual. But then, I am highly idealistic and firmly believed this even before those ethics classes became compulsory. I would have problems sleeping at night if this were not the case.

    The incentives of faking data are more complex. I think the individuals in question often believe that their science is genuine. Perhaps they feel they do not have the resources to "prove" the data that they'd like, so they fake it. Rational people can participate in such behaviour if they calculate their odds of getting away with it to be high. Kind of like copyright infringement on the internet...
    It makes you wonder how many have gotten away with it, whether through cherry-picking of the data shown to the referees, or through the referees' own lack of knowledge and understanding (which is common).
    The incentives of faking data are obvious. More money, more prestige, tenure, getting paid by conference organisers to fly first class to give a talk (on your fake data) on a nice tropical island...

    I have wondered this, but there's no way of knowing, of course. As my INTJ mentor said (when I asked him this), the amount of stuff out there is almost infinite, as is the number of people participating. The only thing that you can do is make sure that you're not one of them.

    The media has a strange idea that once research is published in a peer-reviewed journal, it can be considered "fact". This is wrong; the true peer review occurs after publication. Unfortunately, there are lots of limitations on things like letters to the editor (and who reads them anyway?).
    The media is stupid. I'm sorry, but it's true. So is anyone who takes something published in a newspaper under the headline "Science/Scientists say..." as fact. Even a lay person who goes back to the original paper will not be able to make head or tail of it, no matter how "well-read" they are. The vocabulary and technical terms used are very specific, and so are the methods. Interpretation of data is often based on experience with the equipment and the data, and spotting whether something is "wrong" depends on that experience. That's why I get upset when people try to tell me about "science" in general, or claim that a lay person's opinion of science actually matters in the big picture. And yet it's precisely lay people who are making policies about science and based on it! (With the aid of "scientific expert opinion", of course, from experts who are mostly more interested in advancing their personal interests.)

    How can the field of science be reformed to become more honest?
    Quote Originally Posted by ajblaise View Post
    People are dishonest, including scientists.

    The good thing about science is peer review, and scientists have an incentive to call bullshit on other scientists in order to advance their own scientific careers. Especially when a scientist sees someone else in their own niche, with which they are very familiar, coming up with wacky numbers and theories.
    My opinion is only partially in line with ajblaise's. People are people, and people are dishonest. There will always be people cheating. It is impossible to uncover all cases of fraud, and the rewards for this kind of behaviour are high. The peer review process is very flawed (I could go into A LOT more detail, but I think I've said a lot already). However, ideas for reform are sparse. It's partially that the system is already so entrenched - not only in academia, but also in the grant proposal bodies, the entire bureaucracy of it all. There are also a lot of people with vested interests in keeping the status quo. In fact, there are many scientists who believe that the current system works. I don't know if it's because they're deluded, or if they only choose to see the "good". More likely, as the people whom I've spoken to have said, it's impossible to change the entire system. That's the way it works, so the only thing that we can do is go along with it and try not to exploit it or be exploited.

  9. #9
    Senior Member Fan.of.Devin
    Join Date
    Jul 2010
    MBTI
    INTP
    Enneagram
    4w5
    Socionics
    INT-
    Posts
    294


    I think the practice described in the OP is pretty rare (though not exactly unheard of) in the hard sciences, probably because it's tantamount to career suicide and will inevitably come back to bite you in the ass eventually...*
    Though I must say it wouldn't exactly knock my socks off if it came to light that, say, climatology was rife with this practice... Well, more so than we already know it is, anyway. -_-

    Of course, anyone with even a high-school-level understanding of statistics sees the problem here... Suppressing unfavorable outcomes skews the overall picture of the results and exaggerates statistical significance. (Duh, I guess that would be the entire point.)
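
    A quick sketch of that skew (hypothetical numbers: a weak true effect, small studies, and only the "significant" results surviving into print), showing how the reported effect sizes end up exaggerated:

    Code:
    # Simulate many small studies of a weak true effect (0.2 SD), then compare
    # the average estimate across all studies with the average across only the
    # studies that reached significance - the ones that would get reported.
    import random
    import statistics

    random.seed(7)
    true_effect, n, trials = 0.2, 20, 10_000
    all_estimates, reported = [], []
    for _ in range(trials):
        sample = [random.gauss(true_effect, 1) for _ in range(n)]
        mean = statistics.mean(sample)
        sem = statistics.stdev(sample) / n ** 0.5
        all_estimates.append(mean)
        if abs(mean) > 1.96 * sem:  # crude two-sided significance test
            reported.append(mean)

    print(f"true effect:                {true_effect:.2f}")
    print(f"average over all studies:   {statistics.mean(all_estimates):.2f}")
    print(f"average over 'significant': {statistics.mean(reported):.2f}")  # inflated
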
    Dishonesty? I guess that's certainly arguable for some instances (as mentioned above and below), but at least one instance comes to mind of it happening not out of a desire for any foreseeable monetary gain (it in fact ended up costing assloads of money), but out of sheer incompetence on the part of people who really ought to have known better.





    *Unless you work in big pharma, in which case you're not only free to skew results in your favor whenever the fuck you feel like it, with (for the most part) financial and legal impunity, but are in fact hired specifically to do so from the get-go...
    I believe propoxyphene was JUST discontinued in the US, like, a matter of weeks ago? Gee, that didn't take long.
    INTP 4w5 SX/SP
    Tritype 4/5/8

  10. #10
    / nonsequitur
    Join Date
    Sep 2008
    MBTI
    INTJ
    Enneagram
    512 sp/so
    Posts
    1,822


    Quote Originally Posted by Fan.of.Devin
    I think the practice described in the OP is pretty rare (though not exactly unheard of) in the hard sciences, probably because it's tantamount to career suicide and will inevitably come back to bite you in the ass eventually...*
    Though I must say it wouldn't exactly knock my socks off if it came to light that, say, climatology was rife with this practice... Well, more so than we already know it is, anyway. -_-
    Where are you getting this opinion from? Especially the bit about career suicide. As far as I know, few people have even been caught doing this.

    I'll give another real-life example, pulled straight from my own supervisor.

    We had an experiment that showed exactly what he wanted to prove after 1 day. However, the original protocol called for running it for 3 days, and the unfortunate thing was that it didn't show what he wanted after 3 days. So he simply published the data under a modified protocol that ran for 1 day. Is that dishonest? How could he ever get "caught"? He was perfectly above board with all of this, and his methods are public. But if no one else who has actually done the experiment (rare, because no one ever gets credit for repeating published work) mentions it, no one will know about the inconsistency. Except the people in the lab, i.e. people like me. And even I'm ambivalent and unsure whether it's "okay".
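
    The statistics of this are easy to sketch. A minimal simulation (assuming, purely for simplicity, that the day-1 and day-3 readouts are independent): under a true null, each look at the data has a 5% chance of appearing significant, so keeping whichever timepoint "worked" nearly doubles the false-positive rate.

    Code:
    # Under a true null hypothesis, a p-value is uniform on (0, 1). Simulate
    # two looks at the data (day 1 and day 3) and count how often at least
    # one look crosses the significance threshold.
    import random

    random.seed(0)
    trials, alpha = 100_000, 0.05
    hits = sum(
        1 for _ in range(trials)
        if random.random() < alpha or random.random() < alpha  # day 1, else day 3
    )
    print(f"nominal false-positive rate:       {alpha:.1%}")
    print(f"rate when either timepoint counts: {hits / trials:.1%}")  # ~9.8%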

    There are many variations of what I just described happening every day. How are people to "catch" them? Nothing is being fabricated, but there is a certain level of dishonesty and manipulation there. You forget that in the hard sciences, what we use are systems to illustrate principles. Such artificial/model systems - cell culture, Western blots, experiment length, statistics, etc. - can all be manipulated to give a "favourable" result. It's all perfectly legitimate. Bad science, but completely legitimate. There are tonnes of medical studies done on things like biomarkers, yet they use crap statistical methods like Student's t-test (for God's sake, not designed for this type of statistics AT ALL) to show significance. The results get quoted and referenced over and over, and the "bubble" of crap builds. Anyone who publishes anything MUST address that bubble of crap, but will not be able to do so within the confines of a paper that they're writing to show something else. What do you do? You try to talk around it until, hopefully, someone from the original lab writes in a comment to explain their data or refute it directly. Which seldom happens.
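
    To put a rough number on the biomarker problem, here is a sketch (hypothetical counts: 50 truly null biomarkers, each tested at p < 0.05 with no correction) of how close to certain a spurious "hit" becomes, and what the simplest multiple-comparisons fix (Bonferroni, used here only as an example) does to it:

    Code:
    # With 50 null biomarkers each tested at alpha = 0.05, the chance of at
    # least one spurious "significant" result is 1 - 0.95^50, about 92%.
    import random

    random.seed(42)
    trials, n_biomarkers, alpha = 20_000, 50, 0.05
    uncorrected = corrected = 0
    for _ in range(trials):
        # under the null, each biomarker's p-value is uniform on (0, 1)
        p_min = min(random.random() for _ in range(n_biomarkers))
        if p_min < alpha:
            uncorrected += 1
        if p_min < alpha / n_biomarkers:  # Bonferroni-adjusted threshold
            corrected += 1

    print(f"chance of >=1 spurious hit, uncorrected: {uncorrected / trials:.1%}")  # ~92%
    print(f"chance of >=1 spurious hit, Bonferroni:  {corrected / trials:.1%}")   # ~5%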

