The Royal Society is conducting a policy study entitled ‘Science as a Public Enterprise’, focused on public engagement with science. This goes far beyond the traditional notions of ‘engagement’, in which the high priesthood of science may offer occasional public lectures and open days, write pop-science books, or contribute to TV documentaries. There is a growing realisation across science that modern communication media allow much more direct involvement: the public can see, grasp, and take part in scientific research to a much greater extent than has ever been possible before. There is also a sense that there are practical arguments for increased transparency – that it would benefit scientists as well as the public – as well as a moral case (the public purse funds most research, and the public are often profoundly affected even by private science – for instance medical science, or models of oil dispersal in deep-water blowouts). The Climate Code Foundation, of course, welcomes this study, which relates directly to our goal of improving the public understanding of climate science.
The study group is led by Geoffrey Boulton, an eminent geologist. As part of the study, there was a Town Hall Meeting on Wednesday (2011-06-08), looking specifically at ‘Open Science’, which David Jones and I (Nick Barnes) attended. It was divided into two panel sessions, “Why should science be open?” and “How should science be open?” The meeting was addressed by Paul Nurse, president of the Royal Society, by Mark Walport, director of the Wellcome Trust, and by Philip Campbell, editor-in-chief of Nature. Many more of the great and good of UK science were in attendance, either on the panels or contributing from the floor. The discussion was interesting, and for the most part was both constructive and well-informed.
Mark Walport described the case for open science as “obvious and powerful”, and summarised arguments for and against. He dismissed many of the arguments against as weak and insubstantial, but identified the following as stronger:
- there are no incentives for greater openness;
- the global equity question: is free access necessarily fair access?
- (especially in medical science) what about the confidentiality of the subjects?
- what about privately-funded science, or science with national-security ramifications?
- competitiveness: won’t groups or countries practising open science be disadvantaged?
He emphasized that even these last two arguments can’t stand in the way of an urgent and necessary change: negative results of medical trials must be published.
Geoffrey Boulton contrasted his first ever science publication, which had six data points, with a more recent paper of his which has six billion. Many modern papers cannot include all their data, and act instead almost as an advertisement for the dataset, where the real science value lies. I would argue that the same metaphor applies to papers on computational science: the paper cannot include a precise description of the computational methods, and should act as a pointer to the underlying code.
Stephen Emmott, head of computational science at Microsoft Research, said that we need a revolutionary change to maintain reproducibility and falsifiability in a world of model-based science. He emphasized the importance of open code: much research cannot be reproduced without the code. He referred to a genomics study (possibly among those described in this Nature editorial) which found that the results of most studies could not be reproduced, owing to a lack of openness.
Geoffrey Boulton rounded off the first session by encouraging us to ask of open science, “Is it worth the candle?”, suggesting that the answer is decidedly yes, and pointing out that we will probably have to do it anyway.
The “How?” session was introduced by Philip Campbell, who emphasized three key questions:
- Credit: How can the systems of acknowledgement, reward, professional advancement, and institutional assessment in science be evolved to properly recognise contributions other than the traditional peer-reviewed paper? Creating and curating datasets, writing and maintaining code, promoting public engagement, all must be recognised and rewarded.
- Cost: Creating and especially curating datasets is expensive, especially in fields such as particle physics and metagenomics where data volumes are enormous. Who is going to pay? Funding agencies need to step up for this. Opening, curating, and maintaining software resources also costs money (although much less) and funding agencies have failed to provide for it.
- Community: Each scientific community must decide on the appropriate level of openness. For example, data embargo times might vary from field to field according to the personal and institutional investment made in obtaining data. In many fields, openness is increasing. In genomics, researchers who wanted data embargoes have been persuaded to accept credit instead: open science wins citations.
Timo Hannay, of Digital Science (a division of Macmillan publishing), is working to provide better software tools to working scientists. He pointed out that almost all scientists have better software tools for managing their music collections or family snapshots than they do for managing their data and other digital resources.
From the floor, Peter Murray-Rust expressed the view that some groups can have valuable vested interests in the status quo, and be opposed to openness regardless of the interests of society or the views of scientists. Sometimes gradual “evolution” is possible, but sometimes a “fracture” is necessary.
The last comment I recorded was from Cameron Neylon, a biophysicist and open research expert who sits on our advisory committee (as does Peter Murray-Rust). He said that funding bodies should demand progress, but cannot move too far ahead of their scientific communities: the communities themselves have to come to see the provision of research outputs as adding value. However, institutions and agencies “should never spend money restricting access” to scientific data or information.
In the coffee break after the meeting, I met Philip Campbell, who invited me to attend a meeting to discuss journal software publication policies. I very much look forward to that. Geoffrey Boulton encouraged us to make a submission to the study group, which we will certainly do. I also spoke briefly to Nick von Behr of the Royal Society, and to Timo Hannay, and hope to be able to meet each of them again in future.
One last point raised, although I can’t recall who said it: access to science ought not to be limited according to perceived interest. Almost any scientific topic is of interest to some proportion of the public, and modern technology – in particular the web – allows those specific people to directly engage in the science, without the wasted effort and limits that traditional ‘broadcasting’ media would impose.
This has a direct bearing on citizen science – another important aspect of ‘Science as a Public Enterprise’, not really touched on by this meeting. There are dozens of amazing citizen science projects, covering astrophysics, climate prediction, malaria control, and historical climatology, among many other topics. Some simply allow the public to donate the spare computational power of their own machines. In others, participants contribute their own intelligence (for instance, to discriminate between different galaxy types, or to read and transcribe old hand-written ships’ logs). In either case, a large amount of excellent science is being done with the help and participation of the public, which would not be possible in any other way.
Overall, a constructive and interesting meeting. I look forward to future activities of the study, and to seeing its conclusions. It is easy to be impatient at the pace of change in large organisations or communities, but this change, however much delayed, is definitely coming.
More information about the Royal Society study is available here.