
Scientific dishonesty

Octarine

The Eighth Colour
Joined
Oct 14, 2007
Messages
1,351
MBTI Type
Aeon
Enneagram
10w
Instinctual Variant
so
If you were a scientist and you designed a series of experiments with a variety of experimental measures, would you consider it to be dishonest if you were only to report a few of these outcomes? Particularly the ones that show a significant change as opposed to the ones that didn't?

If you set a series of thresholds as to what would be considered a significant change, but then found that level of change was rarely met, would you change the thresholds in the report to make the results sound more significant?

If, to secure funding, you were required to publish your experimental protocol beforehand, would you still continue to cherry-pick the results and the experimental thresholds that you report, despite the fact that your bait and switch would be evident in the literature?

If you did all the above, would you expect your research to be published in 'top' journals such as Science, Nature, PNAS, NEJM or The Lancet?

Would you be surprised to learn that the above does occur regularly?
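To make the first question concrete, here is a minimal sketch (pure Python; the 20 measures, group sizes, and threshold are all invented for illustration) of why reporting only the outcomes that "worked" is misleading. If a study records 20 independent outcome measures and none of them reflects a real effect, the chance that at least one crosses the p < 0.05 bar by luck alone is about 1 - 0.95^20, roughly 64%:

```python
import random
import statistics

random.seed(0)

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    ma, mb = statistics.fmean(a), statistics.fmean(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    return (ma - mb) / ((va / len(a) + vb / len(b)) ** 0.5)

N_MEASURES = 20   # outcome measures recorded per study (hypothetical)
N_PER_ARM = 30    # subjects per group (hypothetical)
REPS = 2000       # simulated studies

hits = 0
for _ in range(REPS):
    # Every measure is pure noise: control and treatment share one distribution.
    significant = False
    for _ in range(N_MEASURES):
        control = [random.gauss(0, 1) for _ in range(N_PER_ARM)]
        treated = [random.gauss(0, 1) for _ in range(N_PER_ARM)]
        if abs(welch_t(control, treated)) > 2.0:  # roughly p < 0.05 at this n
            significant = True
            break
    hits += significant

frac = hits / REPS
print(f"Studies with at least one 'significant' measure: {frac:.0%}")
```

Report only that one lucky measure and the study looks like a clean positive result; report all 20 and it is obviously noise. This is exactly why pre-registered protocols list every outcome measure in advance.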
 
Joined
Sep 18, 2008
Messages
1,941
MBTI Type
INTJ
Enneagram
512
Instinctual Variant
sp/so
LOL Architectonic. This is a topic very close to my own heart.

If you were a scientist and you designed a series of experiments with a variety of experimental measures, would you consider it to be dishonest if you were only to report a few of these outcomes? Particularly the ones that show a significant change as opposed to the ones that didn't?
Yes, I would consider this to be dishonest. However, in my field (Biochemistry), 90% of lab heads whom I know do it. I know of many specific examples where this is the case, which is why I treat anything that I read in the literature with a grain of salt - regardless of the impact factor of the journal.

If you set a series of thresholds as to what would be considered a significant change, but then found that level of change was rarely met, would you change the thresholds in the report to make the results sound more significant?
Again, yes. I would think that it's dishonest. My supervisor told my colleague (another PhD student) to change her methods of analysis to produce a significant outcome - any significant outcome - and she was not able to, even after trying quite a few statistical methods. So he told her to tweak the analysis, just so that we could produce a paper. I don't respect his attitude.

If, to secure funding, you were required to publish your experimental protocol beforehand, would you still continue to cherry-pick the results and the experimental thresholds that you report, despite the fact that your bait and switch would be evident in the literature?
It wouldn't be evident. Because to secure funding (at least in this country), all you need are preliminary results that SUGGEST something significant. Moreover, detailed protocols are not required. I personally know of one supervisor who got a multi-million dollar 5 year defence-related grant based on dodgy data produced by one student which has since been proven suspicious (possibly faked).

If you did all the above, would you expect your research to be published in 'top' journals such as Science, Nature, PNAS, NEJM or The Lancet?
I would say that the impact factor of the journal (what determines if the journal is a 'top' journal or not) has virtually no effect on this. What determines if something gets published in a high impact journal is its perceived significance. If anything, because of the sheer amount of data required for publication in these journals, it's more likely that simple things like Western blots and confocal images are manipulated. To remain competitive and publish before others, time is obviously a factor. There are many instances of this, and such signs are evident to people who are looking for them in the literature.

Unfortunately, most people don't look so closely at the data once something has been published, relying instead on the editors of these journals to screen out fakes. As a result, even X-ray crystallography protein structures can be faked. There was a high-profile case last year where this guy at the University of Alabama faked lots of protein structures which had been published in Nature, Science and PNAS. He'd back-generated data which no one looked closely at. The only reason why he was caught was because another lab had tried to crystallise a different form of what he'd published and had tried to look at his structure closely to figure out the packing, and noticed irregularities in the packing and solvent structure. These irregularities could not have been picked up by editors, or by non-experts in crystallography. But such experts are busy working on their own stuff, and wouldn't have time to look at the structures of things that they weren't working on.

Additionally, it was discovered last year that about 60-odd protein structures published by a Chinese university in Acta Crystallographica had been faked. Apparently it's really common in China, and has become more common in these international journals since Chinese labs started publishing in them.

Let me just emphasise here that before last year, neither I nor most people thought that crystallography data could be faked. Protein structures were considered the "holy grail" of biochemistry, the one thing that couldn't be manipulated to "prove" something.

Would you be surprised to learn that the above does occur regularly?
As I said before, no. It's common, and it's one of the reasons why I'm so disillusioned with academia.
 

FDG

pathwise dependent
Joined
Aug 13, 2007
Messages
5,903
MBTI Type
ENTJ
Enneagram
7w8
Yes, I would consider such practices dishonest, but I know that a large percentage of experiments and/or statistics-based research reports are conducted with such a bias; otherwise it's hard to obtain enough funding. I personally don't do that (I don't work in research, though), and I once had to accept a lower grade (econometrics exam) because I concluded that the model I was asked to test simply did not work.
 

Octarine

The Eighth Colour
Joined
Oct 14, 2007
Messages
1,351
MBTI Type
Aeon
Enneagram
10w
Instinctual Variant
so
Thanks for your reply nonsequitur.


"Crystallographer faked data"
http://www.the-scientist.com/blog/display/56226/
Hmm..

A few points: the prestige of those journals was considered high long before the impact factor was invented. I see it as a sort of self-fulfilling prophecy, due to the age of those institutions.
I guess published protocols are less common in primary research, but I expect the move towards published protocols to become more common in medical trials. It is meant to keep the authors more honest. Well, it would, if they actually stuck to it.

You bring up a few interesting points; I think there are a variety of incentives involved.

E.g. you have spent lots of time on a particular study, but the results were not as you expected. It is demoralising to put a lot of work into something and get no returns. Your career and livelihood are at stake; you must publish or perish.

Others do it, it is considered standard practice in many labs, so why not do it yourself?

The incentives of faking data are more complex. I think the individuals in question often believe that their science is genuine. Perhaps they feel they do not have the resources to "prove" the data that they'd like, so they fake it. Rational people can participate in such behaviour if they calculate their odds of getting away with it to be high. Kind of like copyright infringement on the internet...
It makes you wonder how many have gotten away with it, due to cherry-picking of the data shown to the referees, or even a lack of knowledge and understanding on the part of the referees themselves (common).

The media has a strange idea that once research is published by a peer reviewed journal, then it can be considered "fact". This is wrong, the true peer review occurs after publication. Unfortunately, there are lots of limitations on things like letters to the editor etc (and who reads them anyway?).

How can the field of science be reformed to become more honest?
 

ajblaise

Minister of Propagandhi
Joined
Aug 3, 2008
Messages
7,914
MBTI Type
INTP
People are dishonest, including scientists.

The good thing about science is peer review, and scientists have an incentive to call bullshit on other scientists in order to advance their own careers. Especially when a scientist sees someone else in a niche with which they are very familiar coming up with wacky numbers and theories.
 

KDude

New member
Joined
Jan 26, 2010
Messages
8,243
How can the field of science be reformed to become more honest?

By reading my post, and realizing I'm calling them potential supervillains. ;) It's a trope, but there's probably a little kernel of truth to learn from it. And if you see wrongs (and it seems like most of you are in agreement on what counts as dishonesty), you're not going to reform anything without power. You can define power however you see fit, or in whichever areas you see it applying, but either way, you can't really stop things like this without finding some leverage and allying with people who share your principles.
 
Joined
Sep 18, 2008
Messages
1,941
MBTI Type
INTJ
Enneagram
512
Instinctual Variant
sp/so
Thanks for your reply nonsequitur.
Not at all, it's something that I complain about at length in my blog lol. I've also thought a lot about this, given the ethics of the people around me.

A few points: the prestige of those journals was considered high long before the impact factor was invented. I see it as a sort of self-fulfilling prophecy, due to the age of those institutions.
I guess published protocols are less common in primary research, but I expect the move towards published protocols to become more common in medical trials. It is meant to keep the authors more honest. Well, it would, if they actually stuck to it.
The impact factor was invented for convenience, but it is used in consideration for grant applications. It has become almost the be-all and end-all in publishing, and in my field it's commonly said that "3-5 JBC papers are the equivalent of 1 PNAS/Nature paper". Of course, what would make more sense would be direct references to the paper itself... But that also has drawbacks, because even papers that refute the original paper still have to reference it, artificially boosting (in an extreme case) the reference count (and supposed importance) of that paper. There's a common joke that "a retraction is also a reference!"

There's something deeply rotten in the core of (at least biomedical) academia and this publishing culture/system, but obviously to revamp the entire thing would not be possible in the short-term.

I've never published in medical trials before, but I would assume that because of commercial interests and patent restrictions, the full details of such studies will never be made public. There's another joke that goes that if you want to get something published without having to disclose the details of your protocols, you just apply for a patent.

You bring up a few interesting points; I think there are a variety of incentives involved.

E.g. you have spent lots of time on a particular study, but the results were not as you expected. It is demoralising to put a lot of work into something and get no returns. Your career and livelihood are at stake; you must publish or perish.

Others do it, it is considered standard practice in many labs, so why not do it yourself?
That is some people's attitude, but not mine. I feel strongly that in academia it is important not to be TOO ambitious, and that the main goal should always be to serve science and its progression, not the individual. But then, I am highly idealistic and firmly believed this even before those ethics classes became compulsory. I would have problems sleeping at night if this were not the case.

The incentives of faking data are more complex. I think the individuals in question often believe that their science is genuine. Perhaps they feel they do not have the resources to "prove" the data that they'd like, so they fake it. Rational people can participate in such behaviour if they calculate their odds of getting away with it to be high. Kind of like copyright infringement on the internet...
It makes you wonder how many have gotten away with it, due to cherry-picking of the data shown to the referees, or even a lack of knowledge and understanding on the part of the referees themselves (common).
The incentives of faking data are obvious. More money, more prestige, tenure, getting paid by conference organisers to fly first class to give a talk (on your fake data) on a nice tropical island...

I have wondered this, but there's no way of knowing, of course. As my INTJ mentor said (when I asked him this), the amount of stuff out there is almost infinite, as is the number of people participating. The only thing that you can do is make sure that you're not one of them.

The media has a strange idea that once research is published by a peer reviewed journal, then it can be considered "fact". This is wrong, the true peer review occurs after publication. Unfortunately, there are lots of limitations on things like letters to the editor etc (and who reads them anyway?).
The media is stupid. I'm sorry, but it's true. So is anyone who takes something published in a newspaper under the headline "Science/Scientists say..." as fact. Even a lay person who goes back to the original paper will not be able to make head or tail of it, no matter how "well-read" they are. The vocabulary and technical terms used are very specific, and so are the methods. Interpretation of data is often based on experience with the equipment, and spotting whether something is "wrong" depends on this. That's why I get upset when people try to tell me about "science" in general, or claim that a lay person's opinion of science actually matters in the big picture. And yet it's precisely lay people who are making policies about it and based on it! (With the aid of "scientific expert opinion", of course, from experts mostly more interested in advancing their personal interests.)

How can the field of science be reformed to become more honest?
People are dishonest, including scientists.

The good thing about science is peer review, and scientists have an incentive to call bullshit on other scientists in order to advance their own careers. Especially when a scientist sees someone else in a niche with which they are very familiar coming up with wacky numbers and theories.

My opinion is only partially in line with ajblaise's. People are people, and people are dishonest. There will always be people cheating. It is impossible to uncover all cases of fraud, and the rewards for this kind of behavior are high. The peer review process is very flawed (I could go into A LOT more detail, but I think I've said a lot already). However, ideas for reform are sparse. It's partly that the system is already so entrenched - not only in academia, but also in the grant proposal bodies, the entire bureaucracy of it all. There are also a lot of people with vested interests in keeping the status quo. In fact, there are many scientists who believe that the current system works. I don't know if it's because they're deluded, or if they only choose to see the "good". More likely, as the people whom I've spoken to have said, it's impossible to change the entire system. That's the way it works, so the only thing that we can do is go along with it and try not to exploit it or be exploited.
 

Fan.of.Devin

New member
Joined
Jul 12, 2010
Messages
292
MBTI Type
INTP
Enneagram
4w5
I think the practice described in the OP is pretty rare - though not exactly unheard of - in the hard sciences. Probably because it's tantamount to career suicide, and will inevitably come back to bite you in the ass eventually...*
Though I must say it wouldn't exactly knock my socks off if it came to light that, say, climatology was rife with this practice... Well, more so than we already know it is, anyway. -_-

Of course, anyone with even a high-school level understanding of statistics sees the problem here... Suppressing unfavorable outcomes skews the overall picture of results and exaggerates statistical significance. (Duh, I guess that would be the entire point.)

Dishonesty? That's certainly arguable in some instances (as mentioned above and below), but at least one instance comes to mind of it happening not out of desire for any foreseeable monetary gain (in fact it ended up costing assloads of money), but out of sheer incompetence on the part of people who really ought to have known better.

*Unless you work in big pharma, in which case you're not only free to skew results in your favor whenever the fuck you feel like it, with (for the most part) financial and legal impunity, but are in fact hired specifically to do so from the get-go...
I believe propoxyphene was JUST discontinued in the US, like, a matter of weeks ago? Gee, that didn't take long.
 
Joined
Sep 18, 2008
Messages
1,941
MBTI Type
INTJ
Enneagram
512
Instinctual Variant
sp/so
I think the practice described in the OP is pretty rare - though not exactly unheard of - in the hard sciences. Probably because it's tantamount to career suicide, and will inevitably come back to bite you in the ass eventually...*
Though I must say it wouldn't exactly knock my socks off if it came to light that, say, climatology was rife with this practice... Well, more so than we already know it is, anyway. -_-

Where are you getting this opinion from? Especially the bit about career suicide. As far as I know, few people have even been caught doing this.

I'll give another real-life example, pulled straight from my own supervisor.

We had an experiment that showed exactly what he wanted to prove after 1 day. However, the original protocol developed was to run it for 3 days. The unfortunate thing was that it didn't show what he wanted to show after 3 days. So he simply published the data with the modified protocol to run for 1 day. Is that dishonest? How could he ever get "caught"? He was perfectly above board with all of this, and his methods are public. But if no one else who's actually done the experiment (rare because no one ever gets credit for repeating published stuff) mentions it, no one will know about the inconsistency. Except the people in the lab, i.e. people like me. And even I'm ambivalent and unsure if it's "okay".

There are many variations of what I just described happening every day. How are people to "catch" them? It isn't fabricating anything, but there is a certain level of dishonesty and manipulation there. You forget that in the hard sciences, what we use are systems to illustrate principles. Such artificial/model systems, e.g. cell culture, western blots, experiment length, statistics, etc., can all be manipulated to give a "favourable" result. It's all perfectly legitimate. Bad science, but completely legitimate. There are tonnes of medical studies done on things like biomarkers, yet they use crap statistical methods like Student's t-test (for God's sake, not designed to do this type of statistics AT ALL) to show significance. The results get quoted and referenced over and over. The "bubble" of crap builds. Anyone who publishes anything MUST address that bubble of crap, but will not be able to do it within the confines of a paper that they're writing to show something else. What do you do? You talk around it until, hopefully, someone from the original lab writes a comment in to explain their data or refute it directly. Which seldom happens.
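The 1-day-versus-3-days story above can be put in numbers. This is a toy simulation (pure Python; sample sizes and the threshold are invented, and for simplicity the three time points are modelled as independent draws, which real repeated measurements are not), showing how reporting whichever time point "worked" inflates the false-positive rate past the nominal 5%:

```python
import random
import statistics

random.seed(1)

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    ma, mb = statistics.fmean(a), statistics.fmean(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    return (ma - mb) / ((va / len(a) + vb / len(b)) ** 0.5)

REPS = 4000  # simulated null experiments (no real effect anywhere)
N = 20       # samples per group at each time point (hypothetical)

fixed_hits = 0   # honest analysis: only the pre-specified day-3 endpoint
cherry_hits = 0  # report whichever day happens to look best
for _ in range(REPS):
    days = []
    for _day in range(3):
        control = [random.gauss(0, 1) for _ in range(N)]
        treated = [random.gauss(0, 1) for _ in range(N)]
        days.append(abs(welch_t(control, treated)) > 2.02)  # ~p < 0.05, df ~ 38
    fixed_hits += days[-1]
    cherry_hits += any(days)

fixed_rate = fixed_hits / REPS
cherry_rate = cherry_hits / REPS
print(f"false positives, pre-specified endpoint: {fixed_rate:.1%}")
print(f"false positives, best-looking endpoint:  {cherry_rate:.1%}")
```

With correlated time points the inflation is smaller but still present; the honest fix is to commit to the endpoint before looking at the data.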
 

Octarine

The Eighth Colour
Joined
Oct 14, 2007
Messages
1,351
MBTI Type
Aeon
Enneagram
10w
Instinctual Variant
so
I posted this thread because I read such an example this week. I'm not making this up - statistically insignificant results compared to controls are made out to be a positive change. The graphs, based on the unadjusted means, show 95% CIs overlapping the control's, but the authors then state significant p values for this comparison without explaining how the data was adjusted. They also failed to report seven of the measures that were used according to the protocol, and most of the measures that were reported had revised thresholds for what counted as significant or "normal".

This was a very expensive study funded by the respective government.
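One hedged statistical aside on the overlapping intervals: overlapping 95% CIs do not, on their own, rule out p < 0.05 for the difference, because the eyeball overlap check is more conservative than a direct test of the difference. A toy example with invented numbers (pure Python):

```python
import math

# Hypothetical summary statistics: two group means with equal standard errors.
m1, se1 = 0.0, 1.0
m2, se2 = 2.8, 1.0

ci1 = (m1 - 1.96 * se1, m1 + 1.96 * se1)   # 95% CI for group 1
ci2 = (m2 - 1.96 * se2, m2 + 1.96 * se2)   # 95% CI for group 2
overlap = ci1[1] > ci2[0]                  # do the intervals overlap?

# z-test for the difference in means (normal approximation).
z = (m2 - m1) / math.sqrt(se1 ** 2 + se2 ** 2)
p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

print(f"CIs overlap: {overlap}, z = {z:.2f}, p = {p:.3f}")
```

So overlap alone is weak evidence either way; the unreported measures and the silently revised thresholds described above are the clearer red flags.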
 

The_Liquid_Laser

Glowy Goopy Goodness
Joined
Jul 11, 2007
Messages
3,376
MBTI Type
ENTP
Would you be surprised to learn that the above does occur regularly?

Not in the slightest.

How can the field of science be reformed to become more honest?

I don't think it can be at this point. Science is too influential. Power corrupts, and the scientific community now has a lot of power. Ironically, the best science was done by people who saw it as something of a leisurely diversion. Once it became known that science could lead to major profits, the integrity of science went out the window.
 

entropie

Permabanned
Joined
Apr 24, 2008
Messages
16,767
MBTI Type
entp
Enneagram
783
Yes, it's dishonest. But if you want money for a project and some numbers are just making it look bad, you can change that. You just have to take the responsibility if it goes bad afterwards, and you have to think to yourself in the first place how likely a bad outcome is and whether it could damage anyone or anything.

There are seldom zero-risk operations that work like in a fairy world, and a bit of gambling is always necessary if you want to reach a certain goal, but it should always exclude the possibility that human life could be risked or things could be destroyed. I tend to take risks a lot, but I always have 3-4 safety nets. That way it becomes safer, and since I am a security freak, I think I am doing it right.
 

burymecloser

Member
Joined
Jan 31, 2010
Messages
516
MBTI Type
INTP
Enneagram
6w5
Clearly dishonest. And yet terribly tempting -- not to falsify data but merely to withhold some or mess around with parameters the way you've described. It's dishonest in a way that can be rationalised, justified to oneself.

Quite the contrary. I would be surprised if it didn't.
Likewise.
 

spin-1/2-nuclei

New member
Joined
May 2, 2010
Messages
381
MBTI Type
INTJ
I don't think it's fair to say dishonesty happens in science regularly... at least not in the natural sciences like chemistry, physics, mathematics... it is far more difficult to fake your data in these fields... if I claim that I discovered a new reaction for [3,3]-sigmatropic rearrangements and then publish the fudged data, it's not a question of if I get busted but when... When someone in another lab or big pharma tries to use my new reaction and they don't get what I claim they should get, people will be calling bullshit left and right... and either I am able to find my original procedure in my lab notebook, reproduce the results exactly, and inform them what is missing from the procedure they are using (and they subsequently try my suggestion and get it to work), or I will be without a job, in addition to answering to all kinds of review committees both at my uni/place of work and within the funding agency...

It is definitely easier to fudge data in the biological sciences and social sciences but I doubt that dishonesty is rampant in these fields either.. there is a healthy level of competition in science (even in the biological and social sciences) that keeps a lot of this to a minimum... At the end of the day there are dishonest people and these dishonest people will inevitably do dishonest things.... they will of course exploit flaws in the system that allow these types of behaviors and depending on what fields they work in sometimes they get caught and sometimes they don't...

But these people aren't inhabiting science exclusively, and if anything science has better safeguards in place than fields like business, real estate, government, finance, law, politics, entertainment... I mean, you've never seen a scandal of dishonesty in science that comes even close to what the banks did with student loans or real estate...

Because fields like chemistry, math, computer science, and physics fuel technological advancement, for the most part all of your major discoveries in science will be independently verified: if your new drug doesn't treat what it claims to treat, or the new algorithm doesn't describe what it is supposed to describe, then your data is not reproducible, and that tends to invite people to call bullshit on your original claims - and nearly every heavy-hitting scientist has his/her share of competitive rivals, and those that work in fields and on projects that directly contribute to technological advancement will have bullshit called on their claims by the end users of their technology...

The reality is there are dishonest people and they gravitate towards all fields - if anything science is better equipped to deal with these people before they do major damage than any other field...
 

Octarine

The Eighth Colour
Joined
Oct 14, 2007
Messages
1,351
MBTI Type
Aeon
Enneagram
10w
Instinctual Variant
so
it is far more difficult to fake your data in these fields...

Chemistry has its fair share of examples, from plagiarism to crystallography fraud. You have to realise that research is often very specialised, and it will often take significant time and resources to prove particular data false. It is far easier for a researcher to say 'you are doing it wrong' than to admit issues with the data.

But my major point wasn't so much about outright fraud, as little lies - cherry picking of data to make it sound more significant is rampant.
 

entropie

Permabanned
Joined
Apr 24, 2008
Messages
16,767
MBTI Type
entp
Enneagram
783
But my major point wasn't so much about outright fraud, as little lies - cherry picking of data to make it sound more significant is rampant.

That's what I meant. You can't really falsify data; I wouldn't like that. But you can make data look better by cherry-picking the right things. That is called marketing, and you need that in the real business world every day. I don't see what's wrong with that, because after all you want to market your research.
 

Randomnity

insert random title here
Joined
May 8, 2007
Messages
9,485
MBTI Type
ISTP
Enneagram
6w5
Instinctual Variant
sp/sx
That's what I meant. You can't really falsify data; I wouldn't like that. But you can make data look better by cherry-picking the right things. That is called marketing, and you need that in the real business world every day. I don't see what's wrong with that, because after all you want to market your research.
What kind of impact do you think this will have on the knowledge base in the field in question?

After all, the point of science is to increase our knowledge, not to market our research. The former usually requires the latter, of course.
 

entropie

Permabanned
Joined
Apr 24, 2008
Messages
16,767
MBTI Type
entp
Enneagram
783
Yeah, but I was thinking of when you have a project and it's yet to be decided whether it's funded or not. Then there is the moment where you have to step in front of the trustees and tell them why they should pay you for your research. In that moment you just tell them the basic facts and the benefits they'll get. The real research, which includes the boring data, you do afterwards with your colleagues, who understand what you are doing.

I don't know, maybe I am getting the question wrong. I am not saying you should falsify data, but it's a truth that many things in life have to be sold in a certain way if you want to reach a spectrum of people with them. The sad thing is all those strange scientific articles that postulate that because we beamed an entangled photon, we are going to have human teleporters soon. It's sad indeed that sometimes you can only reach people if you use big words and relate everything to Star Trek. Then again, to be fair: we are trained natural scientists, we have dedicated a huge amount of our lives to understanding these things, and we often have a far better sense of what a discovery means than someone foreign to the subject, to whom it would look like mumbo-jumbo.

Yet it's a shame what articles sometimes claim to be scientific, and it's a shame how some data is falsified or not correctly shown to get people's attention. It would be a task one could set for oneself: to market a project in a way that sounds interesting to everyone, yet without lying or falsifying data in the process. I don't know if that will ever really be possible, if you consider that most people react almost aggressively if you tell them something from science, and only want to see how it benefits them. I think in that regard scientific thinkers and researchers are still a rare breed.
 