The Ethics of Collecting and Processing Data and Publishing Results of Scientific Research

Michael D. Mann, Ph.D.
Department of Physiology and Biophysics
University of Nebraska Medical Center

Ethical transgressions of research scientists that capture the headlines of major newspapers, though few in number, have a profound effect on public attitudes toward science and science funding. Clearly, the Darsee case at Harvard (Broad, 1982) captured the attention of the scientific and nonscientific public. William Broad and Nicholas Wade, in their popular book, Betrayers of the Truth, recounted the details of this case and gave their opinions of its causes and effects. They argued that this and similar cases are probably only the tip of a large iceberg of fraud and deceit, not isolated transgressions of a few deranged individuals as claimed by many scientists. It is impossible to decide this issue, but perhaps it is not necessary for us to do so. It may be more important to do what we can to prevent their repetition by others in the future.

Fraud or Error?

There is no doubt that scientists make mistakes. Everyone else does; why shouldn't scientists? It's easy to transpose two digits in recording a number or press the wrong key on a computer keyboard. But isn't there a difference between this kind of mistake and fraud? According to the dictionary, fraud is an "intentional deception to cause a person to give up property or some lawful right" (Webster's New Universal Unabridged Dictionary, Deluxe Second Edition, Dorset & Baber, 1979). The difference between fraud and an honest mistake seems to be a matter of "intention." Fraud is done intentionally; a mistake is made by accident. A little guide for graduate students, On Being a Scientist (it should be required reading), makes this distinction eloquently.
"Of all the violations of the ethos of science, fraud is the gravest. As with error, fraud breaks the vital link between human understanding and the empirical world, a link that is science's greatest strength. But fraud goes beyond error to erode the foundation of trust on which science is built." (Committee on the Conduct of Science, 1989)
The National Institutes of Health (NIH) prefers to use the term misconduct in referring to things scientists may do that they shouldn't. Misconduct is defined as "fabrication, falsification, plagiarism, or other practices that seriously deviate from those that are commonly accepted within the scientific community for proposing, conducting or reporting research. It does not include honest error or honest differences in interpretations or judgments of data." (Italics mine, Public Health Service, Policies and Procedures for Dealing with Possible Misconduct in Extramural Research, as quoted by Association of American Medical Colleges, 1990.) The difference between misconduct and fraud appears to involve the purpose of the act. In committing fraud the perpetrator is attempting to cause a person to give up a right or property. Misconduct has a broader scope, not requiring this link to rights or property; it requires only deviation from accepted practices. This definition of misconduct is seriously flawed. Conducting any radically new laboratory research procedure assuredly deviates from common practice and could be viewed as misconduct. It certainly is not the spirit of the policy to punish this kind of deviation!

There may be clear distinctions between error, fraud and misconduct, but one consequence for science of all three acts is the same. Someone is led to disbelieve something that is actually true or to believe something that is actually false. So, it makes little difference to the progress of science whether the error was intentional or not.

It can be argued that mistakes should be punished just like fraud or misconduct, because it is difficult to know whether an error was accidental or deliberate. Perhaps sloppiness in science should be punished, but shouldn't the severity of the infraction determine the severity of the punishment? C.P. Snow warned that failing to penalize errors may encourage fraud, or lead to a situation in which scientists become inured to, and tolerant of, fraud.

"The only ethical principle which has made science possible is that the truth shall be told all the time. If we do not penalise false statements made in error, we open up the way, don't you see, for false statements by intention. And of course a false statement of fact made deliberately, is the most serious crime a scientist can commit." (Snow, 1959)
Admitting that everyone makes errors is not enough; scientists have an obligation to check all their data thoroughly and to draw only valid conclusions from them. Students have to learn that they must not publish research results and conclusions until they are certain of their accuracy. This may mean that some results have to be put into a drawer for years before their correct interpretation is clear [see Branscomb (1985) for a further discussion of this idea]. How many modern scientists have ever done this? Probably not enough! Hans Selye (1970), the noted "stress" researcher, once asked tongue-in-cheek, "Should we award prizes and medals to academics for refraining from publishing their results?" I think not, but a reduction in the haste for publishing is certainly merited.

Does that mean we can advance only those hypotheses of whose correctness we are 100% certain? Clearly not. Nor does it mean that there will be no contradictions between laboratories, or that one laboratory will never show that another has left a variable uncontrolled. No one can think of every possibility. On the other hand, hypotheses that are inherently unlikely should not be advanced, conclusions that are not supported by data should be withheld, and obvious variables must be controlled.

Why do errors matter in the first place? One answer has to do with the normal progression of science. Each scientist reads and digests the writings of others and builds upon what they have done (i.e., knowledge is built communally). But if others have forged their data or made serious errors that affect their conclusions, the progress of science can be delayed or stopped. The problem is worse in medical research, where the harm of errors may be more severe than in other kinds of research: patients are waiting for help with their ailments, and treatments may be recommended or discredited incorrectly.

On the other hand, there may be cases of misconduct in which there appears to be no victim. Probably, there is no such thing as a victimless crime. If he is not caught, the person who forges data usually gains something because of it. What he gained was given by someone else in good faith--the giver is a victim. Also, someone else may have been denied the same gain, which he actually deserved--he is also a victim. It is true that the possibility of replication creates the opportunity to detect fraud or error, but often it is a long time before detection occurs, sometimes too long.

Another reason why errors matter has to do with the self-esteem that comes from doing what is right. Most people want to abide by the rules of society. For them, this form of misconduct is contrary to the way of life they prefer. It's hard to maintain self-esteem when you know that you are doing something you consider to be wrong.

Finally, fraud, misconduct, and errors due to sloppiness undermine the most fundamental tenet of science: trust. The earlier quotation from On Being a Scientist points this out clearly. The entire edifice of science is erected upon trust. We have no policemen in science, no way of knowing whether a scientist has actually done what he reports in a paper. We have only his word for it. Every scientist must be trusted to do and say what is right. Every case of misconduct undermines science by bringing distrust into the activity.

Unethical Behavior

This paper is about unethical behavior, be it fraud or not. Ignorance that a behavior is inappropriate may not be an excuse (Figure 1). Even an honest mistake can discredit a scientist, especially if that mistake is repeated. A scientist who develops a reputation for doing sloppy science may not be believed even when he is right. Charles Babbage (1830), who designed the first programmable computing machines, pointed out that ". . . the character of an observer, as of a woman, if doubted is destroyed." This comment, though overly sexist for our times, points out that it may not be necessary for an experimenter to be proven guilty of misconduct. His career may be destroyed by the accusation alone.

Therefore, it is important that students learn as much as possible about the proper conduct of science while they are students. Unfortunately, students learn too well in some cases. A common practice of the "professor" in the laboratory may be unethical; the professor may even be unaware that it is unethical because his professor did it, too. So, how is a student to find out that this behavior is wrong? That is one reason why this kind of paper is important.

It is doubtful that Darsee, had he read this paper, would have refrained from his unethical behavior. The pressures for him to transgress were too strong (Figure 2). The pressures to advance in academia or to accomplish something significant can be very strong. Sometimes students experience strong pressure from their professors, not only to get results in the lab, but to get certain specific results, which may be impossible. Darsee knew what he did was wrong! But the prevailing view is that the kind of behavior indulged in by Darsee is rare among scientists. Other, less severe, unethical behaviors may be more common, but their impact on science is just as great, perhaps greater. We will, therefore, take this opportunity to point out some of the pitfalls: unethical behaviors that can occur in science, of whose unethical nature you may not even be aware.

Arenas for Unethical Behavior

Unethical behavior can creep into many of the things scientists do. Scientists do experiments, collect and publish data, submit and review grant proposals and research manuscripts, administer grant and contract funds, serve on university committees, and teach students (not listed in order of importance). Unethical behaviors can occur in any of these arenas. For this discussion, we will confine our attention to unethical behaviors that can creep into the design and execution of experiments; the collecting and processing of data; and the preparation, submission, and review of manuscripts. Even that is a huge topic, and we will only scratch the surface, but it's a start. Perhaps simply pointing out the potential for unethical behavior will capture your attention and make you more sensitive to the issue. If so, then this paper has achieved its goal.

Where to Go From Here:

Potential Problems With Gathering Data
Potential Problems With Data Processing
Problems With Writing a Paper
Problems With Reviewing
Problems With Editing
The Role of Scientific Judgment
Suggested Readings
