EDITORIAL
Year: 2016 | Volume: 9 | Issue: 6 | Page: 671-673

When perfect is the enemy of the good


Department of Community Medicine, Dr. D. Y. Patil Medical College, Hospital and Research Centre, Dr. D. Y. Patil Vidyapeeth, Pune, Maharashtra, India

Date of Web Publication: 16-Nov-2016

Correspondence Address:
Amitav Banerjee
Department of Community Medicine, Dr. D. Y. Patil Medical College, Hospital and Research Centre, Dr. D. Y. Patil Vidyapeeth, Pune, Maharashtra
India

Source of Support: None, Conflict of Interest: None


DOI: 10.4103/0975-2870.194179


How to cite this article:
Banerjee A. When perfect is the enemy of the good. Med J DY Patil Univ 2016;9:671-3


An astute reviewer brought the following observation to the notice of the editor of this journal. The paper he reviewed concerned a trial of a lipid-lowering agent. There were fifty participants each in the intervention and control arms. Among the various outcome measures, serum cholesterol and serum triglycerides were included. Readings were taken at baseline and at three subsequent 30-day intervals, i.e., four serial estimations of these outcomes for both arms of the trial.

The mean serum cholesterol estimations (mg/dl), standard deviation in brackets, for the intervention arm were 220 (33.60), 209 (32.14), 199 (30.85), and 190 (28.81). For the control arm, the corresponding values were 201 (24.97), 200 (23.85), 200 (24.64), and 200 (25.07).

The mean triglyceride levels (mg/dl), standard deviation in brackets, for the intervention arm were 183 (39.19), 176 (37.09), 169 (36.08), and 161 (34.71). For the control arm, the corresponding values were 196 (40.84), 196 (39.30), 196 (40.04), and 196 (39.49).

Similar trends, i.e., a steady fall in the intervention arm and almost identical mean values in the control arm, were reported by the authors for the other lipid constituents as well. What raised our suspicion about the validity of the data were the almost identical mean values, particularly of serum triglycerides, in the control arm over the 4-month study period. To be doubly sure, we reviewed the literature and found that the reported variation for serum cholesterol ranges from 4% to 10%, [1] and for triglycerides from 5% to 25%. [2] These variations are mostly due to biological factors and partly due to laboratory variation. The neat results, i.e., the invariable and steady fall in the intervention arm and the constant means in the control arm, appeared too good to be true. It is worth noting that a statistician with no insight into the variation of serum lipid levels would have missed this anomaly.
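How implausible the constant control-arm means are can be illustrated with a short simulation. The sketch below is ours, not the authors': the subject count and the 196 mg/dl mean come from the manuscript's triglyceride data, while the between-subject standard deviation (40 mg/dl) and the 10% within-subject coefficient of variation are assumptions drawn from the middle of the cited ranges.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical control arm: 50 subjects re-measured at 4 monthly visits.
# The mean of 196 mg/dl matches the manuscript; the between-subject SD
# (40 mg/dl) and the within-subject CV (10%) are assumed values taken
# from the 5-25% biological variation reported for triglycerides. [2]
n_subjects, n_visits = 50, 4
baseline = rng.normal(196, 40, n_subjects)   # each subject's underlying level
within_cv = 0.10                             # assumed visit-to-visit variation

readings = baseline[:, None] * (1 + rng.normal(0, within_cv, (n_subjects, n_visits)))
print(readings.mean(axis=0).round(1))
```

In any run, the four visit means scatter by several mg/dl; the visit-to-visit standard error of the mean is roughly 196 × 0.10/√50 ≈ 2.8 mg/dl, so four means of exactly 196 across 4 months would be a remarkable coincidence.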

To take a very tolerant and lenient view, concerns about such data do not necessarily imply a lack of integrity on the part of the investigators. In a research project, many factors and players with differing interests and commitments to the research issue come into the picture. In a busy hospital laboratory handling a large number of samples, repetitive readings for a research project may hold a low priority. The technicians may simply enter the previous readings for "normal" subjects in the control arm without actually doing the test, thinking no harm is done since the samples do not belong to diseased subjects. In the worst-case scenario, the authors may have fabricated the data. Whether we take the tolerant view or assume the worst case, such manuscripts deserve outright rejection. Serious researchers should be aware of such research dynamics and ensure proper quality control, including of laboratory results.

An example of "touching up" of data is as follows. In a study evaluating a screening test, the following values (in percent) were presented: sensitivity 95.12, specificity 28.95, positive predictive value 32.50, and negative predictive value 94.29.

Since the sample size was modest (only 155 subjects), the reviewer asked the authors to present 95% confidence intervals. In the revised version, the authors presented the following: sensitivity 95.12 (95% confidence interval = 94.91, 95.33), specificity 28.95 (95% confidence interval = 28.68, 29.21), positive predictive value 32.50 (95% confidence interval = 32.23, 32.79), and negative predictive value 94.29 (95% confidence interval = 94.04, 94.53). What raised concern about the validity of the data was the very narrow confidence intervals. Anyone with basic statistical sense would understand that such precision is not possible with a sample of only 155.

When it was pointed out to the authors that the widths of the confidence intervals were not commensurate with the modest sample size, the authors stated that there had been an error in calculating the confidence intervals and presented the following figures in the revised manuscript: sensitivity 95.12 (95% confidence interval = 83.47, 99.40), specificity 28.95 (95% confidence interval = 20.84, 38.19), positive predictive value 32.50 (95% confidence interval = 24.23, 41.65), and negative predictive value 94.29 (95% confidence interval = 80.84, 99.30).
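The corrected figures can be sanity-checked independently. The reported percentages are consistent with 39/41 true positives, 33/114 true negatives, 39/120 test positives, and 33/35 test negatives; this 2 × 2 reconstruction is our assumption, as the manuscript's raw counts are not reproduced here. A standard Wilson score interval on those counts gives widths close to the authors' corrected intervals, and nothing like the sliver-thin intervals first submitted:

```python
from math import sqrt

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion, in percent."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return 100 * (centre - half), 100 * (centre + half)

# Counts reverse-engineered from the reported percentages (an assumption):
# sensitivity 39/41, specificity 33/114, PPV 39/120, NPV 33/35 (n = 155).
for label, k, n in [("sensitivity", 39, 41), ("specificity", 33, 114),
                    ("PPV", 39, 120), ("NPV", 33, 35)]:
    lo, hi = wilson_ci(k, n)
    print(f"{label}: {100 * k / n:.2f}% (95% CI {lo:.2f}, {hi:.2f})")
# For sensitivity this gives roughly 83.9 to 98.7, in the same territory as
# the authors' corrected 83.47-99.40 and incompatible with the originally
# claimed 94.91-95.33.
```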

The neat precision of the earlier confidence intervals was again too good to be true. The authors' plea of oversight, made while submitting the revised manuscript with more realistic confidence intervals, is not tenable. The earlier "touching up" to show narrow confidence intervals inflated the apparent precision of the estimates and appeared deliberate, intended to impress the reviewers and editors. Papers with such "neat" calculations deserve summary rejection by reviewers and editors. It is unfortunate that most reviewers simply gloss over the statistical aspects, and such papers sometimes get published.

What can be done to alert other editors to such misconduct? For instance, the authors may submit the same manuscript, with the more realistic confidence intervals, to another journal. Regrettably, not much can be done. Authors who fabricate data may become wiser after being caught by one set of reviewers, and nothing prevents them from sending the revised manuscript to another journal as a fresh submission. Research integrity has to come from the authors, who are squarely responsible for the correctness of their data. Mechanisms for detecting publication fraud are never foolproof, as is the case with other types of fraud.

How difficult it is for the scientific community to recognize data fraud is illustrated by a letter to the editor that drew attention to a very suspicious set of results in a series of articles published by a particular group of authors. [3] The results in this series were too nice, which raised suspicion about the authenticity of the data and prompted the letter. The letter pointed out that the results could not be explained by any stretch of probability. It was more or less ignored for almost a decade. The lead author, who had fabricated the data, went on to publish more than 200 studies before the scientific community realized that most of these papers were fraudulent. [4] A total of 193 of his papers were subsequently retracted for publication misconduct, a record of sorts.

Lifting of images from other sources without acknowledgment has also been part of our experience. Once we received a manuscript with a figure depicting part of the face, including the ear lobe, of a female patient.

When we were trying to select referees for this manuscript by feeding the keywords into a search engine, we retrieved an article with an image that looked uncannily similar. What clinched the plagiarism of the image was the patient's earring: the identical design of the earring in both figures made us scrutinize the other features of the visible portion of the face, and we were convinced that both images were of the same patient.

Sometimes the writing style itself gives important clues to an experienced reviewer. One reviewer remarked, "very poor opening paragraph … improves greatly after first few paragraphs as if written by two different hands." Sure enough, when we carried out a plagiarism check, we found that the subsequent paragraphs were plagiarized.

Fabrication or touching up involves many aspects. In addition to data and text, images also get touched up, particularly with presently available tools such as Photoshop. [5] An exhaustive study of image manipulation in a sample of more than 20,000 published papers from forty journals revealed that almost 4% contained suspect images. [6] The study concluded that image manipulation can be identified through visual inspection and noted that the phenomenon rose sharply after 2002, perhaps coinciding with the greater availability of image-editing software. It also found that a large proportion of papers with manipulated images came from China and India, emphasizing the need for publication ethics reforms in these countries. [6]

Why do authors indulge in such practices? The motive is to project excellence or perfection. This striving for apparent excellence is affecting science adversely. [7] Authors try to inflate their work, sometimes subtly and sometimes not so subtly, as illustrated in the examples above. While reviewers and editors may spot crude attempts at perfection, subtly doctored articles are likely to be published. In extreme cases, the race for perfection may encourage outright fraud in the pursuit of career advancement. [7] Sound research is realistic rather than perfect. A "too perfect" manuscript should alert editors and reviewers, who should devote extra time to such papers, even to the extent of asking the authors for raw data.

Research and debate on scientific misconduct should be encouraged, particularly in developing countries that have recently joined the bandwagon of "scientific publication." The gold rush for publications promotes a social environment in which a culture of cutting corners secures academic tenure and promotion. The factors that promote research misconduct are personal, sociocultural, and institutional. [8] A study of research misconduct from a developing country revealed that almost 70% of authors admitted to having indulged in at least one form of research misconduct, such as plagiarism, falsifying data, protocol violations related to subject enrollment or procedures, selectively dropping "outliers," falsifying the reference list, violating authorship criteria, and being influenced by study sponsors. [9]

At the global level, Fanelli [10] carried out a meta-analysis of surveys on scientific misconduct. It revealed that almost 2% of scientists admitted to having fabricated their research at some point in their career and about 34% admitted to other, lesser forms of research misconduct; when queried about the behavior of their colleagues, the corresponding figures were 14% and 72%, respectively. The meta-analysis concluded that these are conservative estimates of the true prevalence of research misconduct.

To conclude, authors should be transparent, which is possible only when research has been carried out with integrity. Authors should not hesitate to answer "letters to the editor" commenting on their published work; too often they fail to respond. Genuine research can contain errors and oversights; it is the attempt to cover them up that amounts to scientific misconduct. Similarly, nonresponse from authors to queries about their published work raises suspicion. Authors should acknowledge their lapses and limitations if they do not have tenable answers to observations on their work. The editorial board has no option but to publish letters to the editor commenting on published work even if the authors fail to respond. Such postpublication "peer review" helps set the record straight. We would appreciate it if authors answered all queries from readers honestly and transparently. We, the editors, become prejudiced when reviewing subsequent submissions from authors who fail to answer readers' queries. We would like to reassure authors that mistakes in research and occasional imperfections are acceptable. Integrity should be valued more than perfection. Let not perfection become the enemy of the good and honest.



 
References

1. Steven LS, Norman RC. Intraindividual variation in lipid and lipoprotein levels. Can Med Assoc J 1993;149:843-4.
2. Durrington PN. Biological variation in serum lipid concentrations. Scand J Clin Lab Invest Suppl 1990;198:86-91.
3. Kranke P, Apfel CC, Roewer N, Fujii Y. Reported data on granisetron and postoperative nausea and vomiting by Fujii et al. are incredibly nice! Anesth Analg 2000;90:1004-7.
4. Marcus A, Oransky I. How the Biggest Fabricator in Science Got Caught. Nautilus; 21 May 2015. Available from: http://www.nautil.us/issue/24/error/how-the-biggest-fabricator-in-science-got-caught. [Last accessed on 2016 Sep 14].
5. Rossner M, Yamada KM. What's in a picture? The temptation of image manipulation. J Cell Biol 2004;166:11-5.
6. Bik EM, Casadevall A, Fang FC. The prevalence of inappropriate image duplication in biomedical research publications. mBio 2016;7:e00809-16.
7. Mathews D. Focus on 'Excellence' is 'Damaging Science'. Times Higher Education. Available from: http://www.timeshighereducation.com/news/focus-on-research-excellence-is-damaging-science. [Last accessed on 2016 Jul 29].
8. Okonta PI, Rossouw T. Misconduct in research: A descriptive survey of attitudes, perceptions and associated factors in a developing country. BMC Med Ethics 2014;15:25.
9. Okonta P, Rossouw T. Prevalence of scientific misconduct among a group of researchers in Nigeria. Dev World Bioeth 2013;13:149-57.
10. Fanelli D. How many scientists fabricate and falsify research? A systematic review and meta-analysis of survey data. PLoS One 2009;4:e5738.




 
