Reports of data faking continue to besmirch the image of scientific research. But, deplorable as this is, it only points to a larger problem.
Under pressure to publish or perish, young scientists can lose perspective as they scramble for grant money and promotion. And less-than-rigorous standards weaken the preparation and review of scientific papers.
Thus a cultural atmosphere has arisen within the scientific community that does more than encourage fraud. It tends to degrade the scientific literature generally. This has been said plainly enough by some prophets within the scientific community. But by and large that community has preferred to regard the cases of overt fraud as aberrations rather than as symptomatic of an underlying malaise.
A year ago, there were many discussions of the fraud problem in and out of congressional hearings. They were sparked partly by four instances of data faking widely reported in 1980. Again, many scientists considered these exceptional. However, at least three more cases have subsequently surfaced: at the University of Bristol in England and at Cornell and Harvard Universities in the United States.
The Harvard case was recently examined at length in Science, where questions were raised as to why Harvard took half a year to report the instance to the public and to the National Institutes of Health, the agency sponsoring the research. A subsequent report by a special investigating committee exonerated Harvard of any laxity in a situation that required lengthy investigation. But the overall question remains: When will the scientific establishment face up to the underlying problems?
Robert H. Ebert, former dean of Harvard Medical School and a prominent critic, continues to emphasize the corrosive effect of pressures to publish. He also warns of ''borderline falsification that is more common than anybody knows, in which you are anticipating results you are going to get when you put in an abstract (of a paper to be given later).'' He adds, ''That whole environment is bad.''
Harold Hillman of the University of Surrey in England has also called attention to shoddy publication practices. Collaborators and supervisors add their names to papers reporting work in which they have not shared. Researchers fail to report experiments or data that don't support their hypotheses and fail to cite the work of others that contradicts their ideas. Referees endorse papers for publication without adequately reading or understanding them.
''These widespread practices have a considerably greater impact on knowledge than the relatively rare acts of fraud,'' Hillman warns. ''There should be such a great value put on accuracy that it would never occur to anybody to do that. It is a kind of moral issue of our times,'' Ebert says.
Is the rest of the scientific community listening to such prophets? Or do most scientists still consider the ''rare cases of fraud'' to be of little concern?