Reinventing Discovery - Michael Nielsen [104]
This same kind of incentive building can be applied to any type of scientific knowledge: preprints, data, computer code, science wikis, collaboration markets, you name it. In each case the overall pattern is the same: citation leads to measurement leads to reward leads to people who are motivated to contribute. This is a way of expanding science’s reputation economy. There will, in practice, be many complications, and many possible variations on this theme. Indeed, even the arXiv-SPIRES story I told was oversimplified: SPIRES was just one factor among several that gave preprints real status in physics. But the basic picture is clear.
A case of particular importance is computer code. Today, scientists who write and release code often get little recognition for their work. Someone who has created a terrific open source software program that’s used by thousands of other scientists is likely to get little credit from peers. “It’s just software” is the response many scientists have to such work. From a career point of view, the author of the code would have been better off spending their time writing a few minor papers that no one reads. This is crazy: a lot of scientific knowledge is far better expressed as code than in the form of a scientific paper. But today, that knowledge often either remains hidden, or else is shoehorned into papers, because there’s no incentive to do otherwise. If, however, we got a citation-measurement-reward cycle going for code, then writing and sharing code would start to help rather than hurt scientists’ careers. This would have many positive consequences, but one would be particularly crucial: it would give scientists a strong motivation to create new tools for doing science. Scientists would be rewarded for developing tools such as Galaxy Zoo, Foldit, the arXiv, and so on. And if that happened we’d see scientists become leaders, not laggards, in developing new tools for the construction of knowledge.
There are limits to the citation-measurement-reward idea. Obviously, it’s neither possible nor desirable to judge a discovery based solely on what citations a paper (or preprint or data or code) has received. When it comes to assessing the importance of a discovery, there’s no replacement for understanding the discovery deeply. But with that said, the basis for the reputation economy in science remains the citation system. It’s the way scientists track the provenance of scientific knowledge. If scientists are to take seriously contributions outside the old paper-based forms, then we should extend the citation system, creating new tools and norms for citation, while keeping in mind the limitations citations have (and have always had) as a way of assessing scientific work.
Today, many scientists find the idea of working more openly almost unimaginable. After giving talks about open science I’ve sometimes been approached by skeptics who say, “Why would I help out my competitors by sharing ideas and data on these new websites? Isn’t that just inviting other people to steal my data, or to scoop me? Only someone naive could think this will ever be widespread.” As things currently stand, there’s a lot of truth to this point of view. But it’s also important to understand its limits. What these skeptics forget is that they already freely share their ideas and discoveries, whenever they publish papers describing their own scientific work. They’re so stuck inside the citation-measurement-reward