I started this thread because I came across just such an example this week. I'm not making this up: results that were statistically insignificant compared to controls were presented as a positive change. The graphs, based on the unadjusted means, show 95% CIs that overlap the control group's, yet the authors go on to report significant p-values for the same comparison without explaining how the data were adjusted. They also failed to report seven of the outcome measures specified in the protocol, and most of the measures they did report had revised thresholds for what counted as significant or "normal".
This was a very expensive study, funded by the country's government.