Avoiding ‘muddled science’ in the newsroom

On April 23, I was part of a webinar called ProtoCall, organised by Pro.to with the support of the International Centre for Journalists and the International AIDS Vaccine Initiative. It happens once a week and is hosted by Ameya Nagarajan and Nayantara Narayanan. Each week's theme, and the discussion around it, is picked to help non-science and non-health journalists cover the coronavirus pandemic. The session before mine discussed the role of data, the gaps in data and how journalists could help fill them. My session was entitled 'How muddled science drives misinformation', and my fellow panelists were Shruti Muralidhar and Shahid Jameel, neither of whom should need an introduction on the pages of this blog.

Given a brief ahead of the session (available to read here), I prepared some notes for the conversation, which I'm pasting below in full. Note that the conversation itself panned out differently (as military historians have noted, "no plan survives contact with the enemy"), so you can watch the full video if you're interested or read the transcript when it comes out. Both Shruti and Dr Jameel made some great points throughout the conversation, plus the occasional provocative opinion (as did I).

§

1. Science journalists should continue to do what we've always had to do – empower our readers to decide for themselves based on the data they have available. Yes, this is a slow process, and yes, it's tedious, but we shouldn't have to adopt radical tactics now just because we haven't been doing our job properly before. Introduce the relevant concepts, theories, hypotheses, etc., and explain how scientists evaluate data and what they keep in mind as they do.

I can think of at least three doctors I've spoken to recently, all of very good standing in the medical research community: one is pro-lockdown, one is anti-lockdown, and one argues that there's a time and place to impose a lockdown. This is a new virus for everybody, and there is disagreement among doctors as well. But this doesn't imply that some doctors are motivated by ideologies or whatever. It means the story here is that doctors disagree, period.

2. Because this is a new disease for everybody, be skeptical of every result, especially those that claim 100% certainty. No matter what anyone says, the only thing you can know with 100% certainty is that you cannot know anything with 100% certainty. This is a pandemic, and suddenly everyone is interested in what scientific studies have to say, because people are desperately looking for hope and there will be a high uptake of positive news – no matter how misinformed or misguided.

But before everyone was interested in scientific studies, it was always the case that results from tests and experiments and such were never 100% accurate. They all had error rates, they were all contingent on replication studies, they were and are all works in progress. So no matter what a study says, you can very safely assume it has a caveat or a shortcoming, or a specific, well-defined context in which it is true, and you need to go looking for it.

3. It’s okay to take time to check results. At a time of such confusion and more importantly heightened risk, misinformation can kill. So take your time, speak to doctors and scientists. Resisting the pressure to publish quickly is important. If you’re on a hard deadline, be as conservative in your language as possible, just go with the facts – but then even facts are not entirely harmless. There are different facts pointing to different possibilities.

Amitabh Joshi said a couple years back at a talk that science is not about facts but about interpreting collections of facts. And scientists often differ because they’re interpreting different groups of facts to explain trends in the data. Which also means expertise is not a straightforward affair, especially in the face of new threats.

4. Please become comfortable saying "I don't know". I think those are some of the most important words these days. Too many people – especially many celebrities – think that the opposite of 'true' is 'false' and that the opposite of 'false' is 'true'. But actually there's a no man's land in between called 'I don't know', which stands for claims, data, etc. that we haven't been able to verify yet.

Amitabh Bachchan recently recorded a video suggesting that the coronavirus is transmitted via human faeces and by flies that move between that faecal matter and nearby food items. The thing is, we don’t know if this is true. There have been some studies but obviously they didn’t specifically study what Amitabh Bachchan claimed. But saying ‘I don’t know’ here wouldn’t mean that the opposite of what Bachchan said is true. It would mean Bachchan was wrong to ascribe certainty to a claim that doesn’t presently deserve that certainty. And when you say you don’t know, please don’t attach caveats to a claim saying ‘it may be true’ or ‘it may be false’.

We need to get comfortable saying 'we don't know' because that's how we know we need more research – and that we need to support scientists, etc.

5. Generally beware of averages. Averages have a tendency to flatten the data, which is not good when regional differences matter.
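The point above can be made concrete with a toy calculation. A minimal sketch, with region names and positivity rates invented purely for illustration:

```python
# Hypothetical test-positivity rates by region (made-up numbers): the
# national average looks mild even though one region has a serious outbreak.
region_positivity = {
    "Region A": 0.02,  # 2% of tests positive
    "Region B": 0.03,
    "Region C": 0.25,  # a local outbreak, flattened away by the mean
    "Region D": 0.02,
}

# The average smooths the hotspot into an unremarkable-looking figure.
national_average = sum(region_positivity.values()) / len(region_positivity)
worst_region = max(region_positivity, key=region_positivity.get)

print(f"National average: {national_average:.0%}")  # prints 8%
print(f"Worst region: {worst_region}")              # prints Region C
```

An 8% headline figure tells readers nothing about Region C's 25%, which is exactly where the story – and the public-health response – ought to be.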

6. Has there been a lot of science journalism about the pandemic in India? I'm not sure. A lot of explanations have come forth as background to larger stories about the technology, sampling/testing methods, governance, rights, etc. But I've seen very little about the mathematics, or about the biology of and research into the virus as such.

I don’t think this is a problem of access to scientists or availability of accessible material, which to my mind are secondary issues, especially from journalists’ point of view. Yes, you need to be able to speak to doctors and medical researchers, and many of them are quite busy these days and their priorities are very different. But also many, many scientists are sitting at home because of the lockdown and many of them are keen to help.

To me, it’s more a problem of journalists not knowing which questions to ask. For example, unless you know that something called a cytokine storm exists, to you it remains an unknown-unknown. So the bigger issue for me is that journalists shouldn’t expect to do a good job covering this crisis without knowing the underlying science. A cytokine storm is one example, but I’d say not many journalists are asking more important questions, from my point of view, about statistical methods, clinical trials, scientific publishing, etc. and I suspect it’s because they’re not aware these issues exist.

If you want to cover the health aspects like a seasoned health journalist would, there are obviously other things you’re going to have to familiarise yourself with, like pharmaceutical policy, clinical trials, how diseases are tracked, hospital administration, etc.

So I'd say learn the science/health or you're going to have a tough time asking the right questions. You can't go into this thinking you can do a good job just by speaking to different doctors and scientists, because sooner rather than later you're going to miss asking the right questions.

7. Three things have worked for The Wire Science, vis-à-vis working with freelancers and other editors.

First, there needs to be clear communication. For example, if you disagree with a submission, please take time out to explain what you think is wrong about it, because it often happens that the author knows the science very well but may just not have laid it out in a way that’s completely clear. This is also exhausting but in the long run it helps.

Second, set clear expectations. For example, at The Wire Science, I insist on primary sources for all claims to the extent possible, so we don't accidentally help magnify a dubious claim made by a secondary source. I don't accept articles or comments on papers that have not been published in a peer-reviewed journal or in a legitimate preprint repository. And I insist that any article based on a scientific paper must carry an independent voice commenting on the merits and weaknesses of the study, even if the reporter hasn't spoken to the paper's authors themselves.

Interestingly enough, in our internal fact-check filters, these ‘clear expectations’ criteria act as pre-filters in the sense that if an article meets these three criteria, it’s also factually accurate more than 90% of the time. And because these criteria are fairly simple to define and identify in the article, anyone can check for them instead of just me.

Third, the flow of information and decisions in our newsroom is usually top-down-ish (not entirely top-down), but once the pandemic took centre stage, this organisation became sort of radial. Editors, reporters and news producers all have different ideas for stories, and I've been available as a sort of advisor, so before they pursue a story they sometimes come to me to discuss whether they're thinking about it the right way.

This arrangement automatically prevents a lot of unfeasible ideas from being followed up. Obviously it's not the ultimate solution, but it covers a lot of ground.

8. The urgency and tension of a pandemic can’t be an excuse to compromise on quality and nuance. And especially at a time like now, misinformation can kill, so I’m being very clear with my colleagues and freelancers that we’re going to take the time to verify, that I’m going to resist the temptation to publish quickly. Even if there’s an implicit need to publish stuff quickly since the pandemic is evolving so fast, I’d say if you can write pieces with complexity and nuance, please do.

The need for speed arises, at least from what I can see, from the push to get more traffic to your site – traffic that your product, business and editorial teams have together decided is going to be driven by primacy, i.e. by being seen by your readers as the publication that puts information out first. So you're going to need to have a conversation with your bosses and team members about the importance, at a time like this, of being correct over being fast. The Wire Science does incur a traffic penalty as a result of going a bit slower than others, but it's a clear choice for us because it's been the lesser price to pay.

In fact, I think now is a great time to say to your readers, “It’s a pandemic and we want to do this right. Give us money and we’ll stop rushing for ads.”

Full video:

Freeman Dyson’s PhD

The physicist, thinker and writer Freeman Dyson passed away on February 28, 2020, at the age of 96. I wrote his obituary for The Wire Science; excerpt:

The 1965 Nobel Prize for the development of [quantum electrodynamics] excluded Dyson. … If this troubled Dyson, it didn’t show; indeed, anyone who knew him wouldn’t have expected differently. Dyson’s life, work, thought and writing is a testament to a philosophy of doing science that has rapidly faded through the 20th century, although this was due to an unlikely combination of privileges. For one, in 1986, he said of PhDs, “I think it’s a thoroughly bad system, so it’s not quite accidental that I didn’t get one, but it was convenient.” But he also admitted it was easier for him to get by without a PhD.

His QED paper, together with a clutch of others in mathematical physics, gave him a free-pass to more than just dabble in a variety of other interests, not all of them related to theoretical physics and quite a few wandering into science fiction. … In 1951, he was offered a position to teach at Cornell even though he didn’t have a doctorate.

Since his passing, many people have latched on to the idea that Dyson didn't care for awards and that "he didn't even bother getting a PhD", celebrating it as if it were a difficult but inspiring personal choice. It's certainly an unlikely position to assume, and it makes for the sort of historical moment that those displeased with the status quo can anchor themselves to and swing from for reform, considering the greater centrality of PhDs to the research ecosystem together with the declining quality of PhD theses produced at 'less elite' institutions.

This said, I’m uncomfortable with such utterances when they don’t simultaneously acknowledge the privileges that secured for Dyson his undoubtedly deserved place in history. Even a casual reading of Dyson’s circumstances suggests he didn’t have to complete his doctoral thesis (under Hans Bethe at Cornell University) because he’d been offered a teaching position on the back of his contributions to the theory of quantum electrodynamics, and was hired by the Institute for Advanced Study in Princeton a year later.

It’s important to mention – and thus remember – which privileges were at play so that a) we don’t end up unduly eulogising Dyson, or anyone else, and b) we don’t attribute Dyson’s choice to his individual personality alone instead of also admitting the circumstances Dyson was able to take for granted and which shielded him from adverse consequences. He “didn’t bother getting a PhD” because he wasn’t the worse for it; in one interview, he says he feels himself “very lucky” he “didn’t have to go through it”. On the other hand, even those who don’t care for awards today are better off with one or two because:

  • The nature of research has changed
  • Physics has become much more specialised than it was in 1948-1952
  • Degrees, grants, publications and awards have become proxies for excellence when sifting through increasingly overcrowded applicants’ pools
  • Guided by business decisions, journals' definitions of 'good science' have changed
  • Vannevar Bush’s “free play of free intellects” paradigm of administering research is much less in currency
  • Funding for science has dropped, partly because The War ended, and took a chunk of administrative freedom with it

The expectations of scientists have also changed. IIRC Dyson didn’t take on any PhD students, perhaps as a result of his dislike for the system (among other reasons because he believed it penalises students not interested in working on a single problem for many years at a time). But considering how the burdens on national education systems have shifted, his decision would be much harder to sustain today even if all of the other problems didn’t exist. Moreover, he has referred to his decision as a personal choice – that it wasn’t his “style” – so treating it as a prescription for others may mischaracterise the scope and nature of his disagreement.

However, questions about whether Dyson might have acted differently if he’d had to really fight the PhD system, which he certainly had problems with, are moot. I’m not discussing his stomach for a struggle nor am I trying to find fault with Dyson’s stance; the former is a pointless consideration and the latter would be misguided.

Instead, it seems to me to be a question of what we do know: Dyson didn’t get a PhD because he didn’t have to. His privileges were a part of his decision and cemented its consequences, and a proper telling of the account should accommodate them even if only to suggest a “Dysonian pride” in doing science requires a strong personality as well as a conspiracy of conditions lying beyond the individual’s control, and to ensure reform is directed against the right challenges.

Featured image: Freeman Dyson, October 2005. Credit: ioerror/Wikimedia Commons, CC BY-SA 2.0.

The scientist as inadvertent loser

Twice this week, I've had occasion to write about how science is an immutably human enterprise, and therefore that some of its loftier ideals are aspirational at best, and about how transparency is one of the chief USPs of preprint repositories and post-publication peer-review. As if on cue, I stumbled upon a strange case of extreme scientific malpractice that bore out both points of view.

In an article published January 30, three editors of the Journal of Theoretical Biology (JTB) reported that one of their handling editors had engaged in the following acts:

  1. “At the first stage of the submission process, the Handling Editor on multiple occasions handled papers for which there was a potential conflict of interest. This conflict consisted of the Handling Editor handling papers of close colleagues at the Handling Editor’s own institute, which is contrary to journal policies.”
  2. “At the second stage of the submission process when reviewers are chosen, the Handling Editor on multiple occasions selected reviewers who, through our investigation, we discovered was the Handling Editor working under a pseudonym…”
  3. Many forms of reviewer coercion
  4. “In many cases, the Handling Editor was added as a co-author at the final stage of the review process, which again is contrary to journal policies.”

On the back of these acts of manipulation, this individual – whom the editors chose not to name, for reasons unknown, but whom one of them all but identified on Twitter as Kuo-Chen Chou, an identification backed up by an independent user – proudly trumpets his 'achievements' on his website.

The same webpage also declares that Chou “has published over 730 peer-reviewed scientific papers” and that “his papers have been cited more than 71,041 times”.

Without transparency[a] and without the right incentives, the scientific process – which I use loosely to denote all activities and decisions associated with synthesising, assimilating and organising scientific knowledge – becomes just as conducive to misconduct and unscrupulousness as any other enterprise if only because it allows people with even a little more power to exploit others' relative powerlessness.

a. Ironically, the JTB article lies behind a paywall.

In fact, Chou had also been found guilty of similar practices when working with a different journal, Bioinformatics, and an article its editors published last year is cited prominently in the article by JTB's editors.

Even if the JTB and Bioinformatics cases are exceptional in that their editors failed to weed out gross misconduct shortly after its first occurrence – and although there are many such cases, they are still likely to be in the minority (an assumption on my part) – a completely transparent review process eliminates such possibilities and, more importantly, naturally renders the process trustless[b]. That is, you shouldn't have to trust a reviewer to do right by your paper; the system itself should be designed such that there is no opportunity for a reviewer to do wrong.

b. As in trustlessness, not untrustworthiness.

Second, it seems Chou accrued over 71,000 citations because the number of citations has become a proxy for research excellence irrespective of whether the underlying research is actually excellent – a product of the unavoidable growth of a system in which evaluators replaced a complex combination of factors with a single number. As a result, Chou and others like him have been able to ‘hack’ the system, so to speak, and distort the scientific literature (which you might’ve seen as the stack of journals in a library representing troves of scientific knowledge).

But as long as the science is fine, no harm done, right? Wrong.

If you visualised the various authors of research papers as points and the lines connecting them to each other as citations, an inordinate number would converge on the point of Chou – and they would be wrong, led there not by Chou’s prowess as a scientist but misled there by his abilities as a credit-thief and extortionist.
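The graph described above can be sketched in miniature. A toy version, with author names and citation pairs invented for illustration (the point is only that in-degree concentrates on one node regardless of merit):

```python
# Authors as nodes, citations as directed edges (citing -> cited).
# All names and pairs below are made up for illustration.
from collections import Counter

citations = [
    ("A", "Chou"), ("B", "Chou"), ("C", "Chou"), ("D", "Chou"),
    ("E", "Chou"), ("A", "B"), ("B", "C"), ("C", "D"),
]

# Count incoming edges per author: the number of lines converging on each point.
in_degree = Counter(cited for _citing, cited in citations)

print(in_degree.most_common(1))  # prints [('Chou', 5)]
```

Whether those five converging lines were earned by good science or extracted by coercion is invisible in the count itself, which is precisely the problem with treating the count as a proxy for excellence.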

This graphing exercise isn't simply a form of visual communication. Imagine your life as a scientist as a series of opportunities, where each opportunity is contested by multiple people and the people in charge of deciding who 'wins' at each stage aren't all well-trained, well-compensated or well-supported. If X 'loses' at one of the early stages and Y 'wins', Y has a commensurately greater chance of winning a subsequent contest, and X a lower one. Such contests often determine the level of funding, access to suitable guidance and even networking possibilities. So over multiple rounds – by virtue of the evaluators at each step having more reasons to be impressed by Y's CV because, say, they had more citations, and fewer reasons to be impressed with X's – X ends up with more reasons to exit science and switch careers.

Additionally, because of the resources that Y has received opportunities to amass, they’re in a better position to conduct even more research, ascend to even more influential positions and – if they’re so inclined – accrue even more citations through means both straightforward and dubious. To me, such prejudicial biasing resembles the evolution of a Lorenz attractor: the initial conditions might appear to be the same to some approximation, but for a single trivial choice, one scientist ends up being disproportionately more successful than another.
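The Lorenz-attractor analogy above can be illustrated numerically. A rough sketch using the standard Lorenz parameters (sigma = 10, rho = 28, beta = 8/3) and a crude Euler integrator – the initial conditions differ by one part in a hundred million, yet the trajectories end up far apart:

```python
def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz system by one (crude) Euler step."""
    x, y, z = state
    return (
        x + sigma * (y - x) * dt,
        y + (x * (rho - z) - y) * dt,
        z + (x * y - beta * z) * dt,
    )

def separation(p, q):
    """Euclidean distance between two states."""
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

p = (1.0, 1.0, 1.0)
q = (1.0 + 1e-8, 1.0, 1.0)   # a nearly identical starting point
initial_gap = separation(p, q)

for _ in range(2000):         # integrate out to t = 20
    p, q = lorenz_step(p), lorenz_step(q)

growth = separation(p, q) / initial_gap
print(f"the gap grew by a factor of roughly {growth:.0f}")
```

Two careers that start from near-identical conditions, in other words, can diverge enormously off a single trivial difference amplified at every subsequent step.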

The answer, of course, is many things, including better ways to evaluate and reward research – and two of them in turn have to be eliminating the use of numbers to denote human abilities, and making the journey of a manuscript from the lab to the wild as free of opaque, and therefore potentially arbitrary, decision-making as possible.

Featured image: A still from an animation showing the divergence of nearby trajectories on a Lorenz system. Caption and credit: MicoFilós/Wikimedia Commons, CC BY-SA 3.0.

The chrysalis that isn’t there

I wrote the following post while listening to this track. Perhaps you will enjoy reading it to the same sounds. Otherwise, please consider it a whimsical recommendation. 🙂

I should really start keeping a log of different stories in the news, all of which point to the little-acknowledged but plainly evident fact that science – like so many things, including people – does not embody lofty ideals as much as the aspirations to those ideals. Nature News reported on January 31 that "a language analysis of titles and abstracts in more than 100,000 scientific articles," published in the British Medical Journal (BMJ), had "found that papers with first and last authors who were both women were about 12% less likely than male-authored papers to include sensationalistic terms such as 'unprecedented', 'novel', 'excellent' or 'remarkable';" further, "The articles in each comparison were presumably of similar quality, but those with positive words in the title or abstract garnered 9% more citations overall." The scientific literature, people!

Science is only as good as its exponents, and there is neither meaning nor advantage in assuming that there is such a thing as a science beyond, outside of and without these people. Doing so inflates science's importance beyond what it deserves, and it suppresses science's shortcomings and prevents them from being addressed. For example, the BMJ study prima facie points to gender discrimination, but it also describes a scientific literature that you will never find out is skewed, and therefore unrepresentative of reality, unless you acknowledge that it is constituted by papers authored by people of two genders, on a planet where one gender has maintained a social hegemony for millennia – much like you will never know Earth has an axis of rotation unless you are able to see its continents or make sense of its weather.

The scientific method describes a popular way to design experiments whose performance scientists can use to elucidate refined, and refinable, answers to increasingly complex questions. However, the method is an external object (of human construction) that only, and arguably asymptotically, mediates the relationship between the question and the answer. Everything that comes before the question and after the answer is mediated by a human consciousness undeniably shaped by social, cultural, economic and mental forces.

Even the industry that we associate with modern science – composed of people who trained to be scientists over at least 15 years of education, then went on to instruct and/or study in research institutes, universities and laboratories, being required to teach a fixed number of classes, publish a minimum number of papers and accrue citations, and/or produce X graduate students, while drafting proposals and applying for grants, participating in workshops and conferences, editing journals, possibly administering scientific work and consulting on policy – is steeped in human needs and aspirations, and is even designed to make room for them, but many of us non-scientists are frequently and successfully tempted to address the act of being a scientist as an act of transformation: characterised by an instant in time when a person changes into something else, a higher creature of sorts, like a larva enters a magical chrysalis and exits a butterfly.

But for a man to become a scientist has never meant the shedding of his identity or social stature; ultimately, to become a scientist is to terminate at some quasi-arbitrary moment the slow inculcation of well-founded knowledge crafted to serve a profitable industry. There is a science we know as simply the moment of discovery: it is the less problematic of the two kinds. The other, in the 21st century, is also funding, networking, negotiating, lobbying, travelling, fighting, communicating, introspecting and, inescapably, some suffering. Otherwise, scientific knowledge – one of the ultimate products of the modern scientific enterprise – wouldn’t be as well-organised, accessible and uplifting as it is today.

But it would be silly to think that in the process of constructing this world-machine of sorts, we baked in the best of us, locked out the worst of us, and threw the key away. Instead, like all human endeavour, science evolves with us. While it may from time to time present opportunities to realise one or two ideals, it remains for the most part a deep and truthful reflection of ourselves. This assertion isn’t morally polarised, however; as they say, it is what it is – and this is precisely why we must acknowledge failures in the practice of science instead of sweeping them under the rug.

One male scientist choosing more uninhibitedly to call his observation "unprecedented" than a female scientist might have been encouraged, among other things, by the peculiarities of a gendered scientific labour force and scientific enterprise, but many male scientists indulging just as freely in their evaluatory fantasies, such as they are, indicates a systemic corruption that transcends (but does not escape) science. The same goes, as in another recent example, for the view that science is self-correcting. It is not, because people are not, and they need to be pushed to be. In March 2019, for example, researchers uncovered at least 58 papers published in a six-week period whose authors had switched their desired outcomes between the start and end of their respective experiments to report positive, and to avoid reporting negative, results. When the researchers wrote to the authors as well as the editors of the journals that had published the problem papers, most of them denied there was an issue and refused to accept modifications.

Again, the scientific literature, people!

A science for the non-1%

David Michaels, an epidemiologist and a former US assistant secretary of labour for occupational safety and health under Barack Obama, writes in the Boston Review:

[Product defence] operations have on their payrolls—or can bring in on a moment’s notice—toxicologists, epidemiologists, biostatisticians, risk assessors, and any other professionally trained, media-savvy experts deemed necessary (economists too, especially for inflating the costs and deflating the benefits of proposed regulation, as well as for antitrust issues). Much of their work involves production of scientific materials that purport to show that a product a corporation makes or uses or even discharges as air or water pollution is just not very dangerous. These useful “experts” produce impressive-looking reports and publish the results of their studies in peer-reviewed scientific journals (reviewed, of course, by peers of the hired guns writing the articles). Simply put, the product defence machine cooks the books, and if the first recipe doesn’t pan out with the desired results, they commission a new effort and try again.

Members of the corporate class have played an instrumental role in undermining trust in science in the last century, and Michaels’s exposition provides an insightful glimpse of how they work, and why what they do works. However, the narrative Michaels employs, as illustrated above, treats scientists like minions – a group of people that will follow your instructions but will not endeavour to question how their research is going to be used as long as, presumably, their own goals are met – and also excuses them for it. This is silly: the corporate class couldn’t have done what it did without help from a sliver of the scientific class that sold its expertise to the highest bidder.

Even if such actions may have been more the result of incompetence than of malice, for too long have scientists claimed vincible ignorance in their quasi-traditional tendency to prize unattached scientific progress more than scientific progress in step with societal aspirations. They need to step up, step out and participate in political programmes that deploy scientific knowledge to solve messy real-world problems, which frequently fail and just as frequently serve misguided ends (such as – but sure as hell not limited to – laundering the soiled reputation of a pedophile and convicted sex offender).

But even so, even as the scientists’ conduct typifies the problem, the buck stops with the framework of incentives that guides them.

Despite its connections with technologies that powered colonialism and war, science has somehow accrued a reputation of being clean. To want to be a scientist today is to want to make sense of the natural universe – an aspiration both simple and respectable – and to make a break from the piddling problems of here and now to the more spiritually refined omnipresent and eternal. However, this image can’t afford to maintain itself by taking the deeply human world it is embedded in for granted.

Science has become the reason of state simply because the state is busy keeping science and politics separate. No academic programme in the world today considers scientific research to be on par with public engagement and political participation[a], when exactly this is necessary to establish science as an exercise through which, fundamentally, people construct knowledge about the world and then ensure it is used responsibly (as well as to demote it from the lofty pedestal where it currently lords over the social sciences and humanities). Instead, we have a system that encourages only the production of knowledge, tying it up with metrics of professional success, career advancement and, most importantly, a culture of higher education[b] and research that won't brook dissent and tolerates activist-scientists as lesser creatures.

a. And it is to the government’s credit that political participation has become synonymous with electoral politics and the public expression of allegiance to political ideologies.

b. Indeed, the problem most commonly manifests as a jaundiced impression of the purpose of teaching.

The perpetuators of this structure are responsible for the formation and subsequent profitability of “the strategy of manufacturing doubt”, which Michaels writes “has worked wonders … as a public relations tool in the current debate over the use of scientific evidence in public policy. … [The] main motivation all along has been only to sow confusion and buy time, sometimes lots of time, allowing entire industries to thrive or individual companies to maintain market share while developing a new product.”

To fight the vision of these perpetuators, to at least rescue the fruits of the methods of science from inadvertent ignominy, we need publicly active scientists to be the rule, not the exceptions to the rule. We need structural incentives to change to accommodate the fact that, if they don’t, this group of people will definitely remain limited to members of the upper class and/or upper castes. We need a stronger, closer marriage of science, the social sciences, business administration and policymaking.

To be sure, I’m saying neither that the mere presence of scientists in public debates will lead to swifter solutions nor that the absence of science alone in policymaking is responsible for so many of the crises of our times – but that their absence has left cracks so big, it’s difficult to see how they can be sealed any other way[c]. And yes, the world will slow down, the rich will become less rich and economic growth will become more halting – but these are all also excuses to maintain a status quo that has only exploited the non-1% for two centuries straight.

c. Michaels concludes his piece with a list of techniques the product-defence faction has used to sow doubt and, in the resulting moments of vulnerability, ‘sell science’ – i.e. techniques that represent the absence of guiding voices.

Of course, there’s only so much one can do if the political class isn’t receptive to one’s ideas – but we must begin somewhere, and what better place to begin than at the most knowledgeable one?

The not-so-obvious obvious

If your job requires you to pore through a dozen or two scientific papers every month – as mine does – you’ll notice a few every now and then couching a somewhat well-known fact in study-speak. I don’t mean scientific-speak, largely because there’s nothing wrong with trying to understand natural phenomena in the formalised language of science. However, there seems to be something iffy – often with humorous effect – about a statement like the following: “cutting emissions of ozone-forming gases offers a ‘unique opportunity’ to create a ‘natural climate solution'”[1] (source). Well… d’uh. This is study-speak: rephrasing mostly self-evident knowledge or truisms in unnecessarily formalised language, not infrequently in the style employed in research papers, without adding any new information but often including an element of doubt when there is likely to be none.

1. Caveat: These words were copied from a press release, so this could have been a case of the person composing the release being unaware of the study’s real significance. However, the words within single quotes are copied from the corresponding paper itself. That said, there have been some truly hilarious efforts to make sense of the obvious: consider many of the winners of the Ig Nobel Prizes.

Of course, it always pays to be cautious – but where do you draw the line between restating the obvious and producing the formal result a new course of action requires? For example, the Univ. of Exeter study, whose accompanying press release discussed the effect of “ozone-forming gases” on the climate, recommends cutting emissions of substances that combine in the lower atmosphere to form ozone, a compound form of oxygen that is harmful to both humans and plants. But this is as non-“unique” an idea as the corresponding solution that arises (of letting plants live better) is “natural”.

However, it’s possible the study’s authors needed to quantify these emissions to understand the extent to which ambient ozone concentration interferes with our climatic goals, and to use their data to inform the design and implementation of corresponding interventions. Such outcomes aren’t always obvious but they are there – often because the necessarily incremental nature of most scientific research can cut both ways. The pursuit of the obvious isn’t always as straightforward as one might believe.

The Univ. of Exeter group may have accumulated sufficient and sufficiently significant evidence to support their conclusion, allowing themselves as well as others to build towards newer, and hopefully more novel, ideas. A ladder must have rungs at the bottom irrespective of how tall it is. But when the incremental sword cuts the other way, often due to perverse incentives that require scientists to publish as many papers as possible to secure professional success, things can get pretty nasty.

For example, the Cornell University consumer behaviour researcher Brian Wansink was known to advise his students to “slice” the data obtained from a few experiments in as many different ways as possible in search of interesting patterns. Many of the papers he published were later found to contain numerous irreproducible conclusions – i.e. Wansink had searched so hard for patterns that he’d found quite a few even when they really weren’t there. As the British economist Ronald Coase said, “If you torture the data long enough, it will confess to anything.”
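Wansink-style “slicing” fails for a statistical reason that is easy to demonstrate: test enough hypotheses on the same data and some will clear the significance bar by chance alone. A minimal simulation of this – the dataset, the 200 “hypotheses” and the 5% threshold are illustrative assumptions, not Wansink’s actual numbers:

```python
import random

random.seed(42)

N = 100      # participants in the (simulated) study
TESTS = 200  # hypotheses "sliced" out of the same dataset

def pearson_r(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# The outcome and every predictor are pure noise, so any
# "significant" correlation is a false positive by construction.
outcome = [random.gauss(0, 1) for _ in range(N)]
threshold = 1.96 / N ** 0.5  # |r| above this is "significant" at p < 0.05

false_positives = sum(
    1 for _ in range(TESTS)
    if abs(pearson_r([random.gauss(0, 1) for _ in range(N)], outcome)) > threshold
)
print(f"{false_positives} 'significant' patterns found in {TESTS} tests on pure noise")
```

Roughly 5% of the tests – about ten – clear the bar despite there being nothing to find, which is exactly the confession Coase warned the tortured data would make.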

The dark side of incremental research, and the virtue of incremental research done right, stems from the fact that it’s non-evidently difficult to ascertain the truth of a finding when the strength of the finding is expected to be so small that it really tests the notion of significance or so large – or so pronounced – that it transcends intuitive comprehension.

For an example of the former: among particle physicists, a result qualifies as ‘fact’ only if the chances of it being a fluke are smaller than 1 in 3.5 million. So the Large Hadron Collider (LHC), which was built to discover the Higgs boson, had to perform at least 3.5 million proton-proton collisions capable of producing a Higgs boson – collisions its detectors could observe and its computers could analyse – to attain this significance.
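That “1 in 3.5 million” is the one-sided tail probability of a five-standard-deviation (5σ) fluctuation on a normal distribution – the physicists’ famous five-sigma convention. A quick check:

```python
import math

def one_sided_p(sigma):
    """Tail probability of a Gaussian fluctuation at least `sigma`
    standard deviations above the mean."""
    return 0.5 * math.erfc(sigma / math.sqrt(2))

p = one_sided_p(5)
print(f"p = {p:.2e}, i.e. about 1 in {1 / p:,.0f}")  # ~ 1 in 3.5 million
```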

But while protons are available abundantly and the LHC can perform on the order of a billion collisions per second, imagine undertaking an experiment that requires human participants to perform actions according to certain protocols. It’s never going to be possible to enrol billions of them for millions of hours to arrive at a rock-solid result. In such cases, researchers design experiments around very specific questions, with protocols that suppress, or even eliminate, interference, sources of doubt and confounding variables, and accentuate the effects of whatever action, decision or influence is being evaluated.
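One way to see why small expected effects force these design choices is a standard sample-size calculation: the smaller the effect, the more participants you need before noise stops masquerading as signal. A sketch using the common normal-approximation formula – the two-group setup, 5% significance level and 80% power are conventional assumptions, not figures from the text:

```python
import math
from statistics import NormalDist

def sample_size_per_group(effect_size, alpha=0.05, power=0.80):
    """Participants needed per group to detect a standardised effect
    (Cohen's d) in a two-group comparison, normal approximation."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # about 1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # about 0.84 for 80% power
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

for d in (0.8, 0.5, 0.2):  # conventionally 'large', 'medium', 'small' effects
    print(f"d = {d}: {sample_size_per_group(d)} participants per group")
```

Halving the effect size roughly quadruples the required sample – which is why a weak effect is so much harder, and more expensive, to pin down honestly.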

Such experiments often also require the use of sophisticated – but nonetheless well-understood – statistical methods to further eliminate the effects of undesirable phenomena from the data and, to the extent possible, leave behind information of good-enough quality to support or reject the hypotheses. In the course of navigating this winding path from observation to discovery, researchers are susceptible to, say, misapplying a technique, overlooking a confounder or – like Wansink – overanalysing the data so much that a weak effect masquerades as a strong one but only because it’s been submerged in a sea of even weaker effects.

Similar problems arise in experiments that require the use of models based on very large datasets, where researchers need to determine the relative contribution of each of thousands of causes on a given effect. The Univ. of Exeter study that determined ozone concentration in the lower atmosphere due to surface sources of different gases contains an example. The authors write in their paper (emphasis added):

We have provided the first assessment of the quantitative benefits to global and regional land ecosystem health from halving air pollutant emissions in the major source sectors. … Future large-scale changes in land cover [such as] conversion of forests to crops and/or afforestation, would alter the results. While we provide an evaluation of uncertainty based on the low and high ozone sensitivity parameters, there are several other uncertainties in the ozone damage model when applied at large-scale. More observations across a wider range of ozone concentrations and plant species are needed to improve the robustness of the results.

In effect, their data could be modified in future to reflect new information and/or methods, but in the meantime, and far from being a silly attempt at translating a claim into jargon-laden language, the study eliminates doubt to the extent possible with existing data and modelling techniques to ascertain something. And even in cases where this something is well known or already well understood, the validation of its existence could also serve to validate the methods the researchers employed to (re)discover it and – as mentioned before – generate data that is more likely to motivate political action than, say, demands from non-experts.
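The “low and high ozone sensitivity parameters” the authors mention reflect a common modelling practice: run the same model at both ends of an uncertain parameter’s plausible range and report the spread as the uncertainty. A toy illustration – the model, the numbers and the names below are entirely invented:

```python
def productivity_loss(ozone_ppb, sensitivity):
    """Toy stand-in for an ozone-damage model: per cent loss in plant
    productivity as a linear function of ambient ozone. Entirely invented."""
    return sensitivity * ozone_ppb

LOW_SENS, HIGH_SENS = 0.05, 0.12  # hypothetical low/high sensitivity parameters
ozone = 40                        # hypothetical ambient concentration, in ppb

low = productivity_loss(ozone, LOW_SENS)
high = productivity_loss(ozone, HIGH_SENS)
print(f"estimated productivity loss: {low:.1f}% to {high:.1f}%")
```

The headline claim then rests not on a single number but on the bracket, which narrows as better observations constrain the sensitivity parameter.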

In fact, the American mathematician Marc Abrahams, known much more for founding and awarding the Ig Nobel Prizes, identified this purpose of research as one of three possible reasons why people might try to “quantify the obvious” (source). The other two are being unaware of the obvious and, of course, wanting to disprove it.

How science is presented and consumed on Facebook

This post is a breakdown of the Pew study titled The Science People See on Social Media, published March 21, 2018. Without further ado…

In an effort to better understand the science information that social media users encounter on these platforms, Pew Research Center systematically analyzed six months’ worth of posts from 30 of the most followed science-related pages on Facebook. These science-related pages included 15 popular Facebook accounts from established “multiplatform” organizations … along with 15 popular “Facebook-primary” accounts from individuals or organizations that have a large social media presence on the platform but are not connected to any offline, legacy outlet.

Is popularity the best way to judge if a Facebook page counts as a page about science? Popularity is an easy measure but it often almost exclusively represents a section of the ‘market’ skewed towards popular science. Some such pages from the Pew dataset include facebook.com/healthdigest, /mindbodygreen, /DailyHealthTips, /DavidAvocadoWolfe and /droz – all “wellness” brands that may not represent the publication of scientific content as much as, more broadly, content that panders to a sense of societal insecurity that is not restricted to science. This doesn’t limit the Pew study insofar as the study aims to elucidate what passes off as ‘science’ on Facebook but it does limit Pew’s audience-specific insights.

§

… just 29% of the [6,528] Facebook posts from these pages [published in the first half of 2017] had a focus or “frame” around information about new scientific discoveries.

Not sure why the authors, Paul Hitlin and Kenneth Olmstead, think this is “just” 29% – that’s quite high! Science is not just about new research and research results, and if these pages are consciously acknowledging that by limiting their posts about such news to, on average, three of every 10, that’s fantastic. (Of course, if the reason for not sharing research results is that they’re not very marketable, that’s too bad.)

I’m also curious about what counts as research on the “wellness” pages. If their posts share research to a) dismiss it because it doesn’t fit the page authors’ worldview or b) popularise studies that are, say, pursuing a causative link between coffee consumption and cancer, then such data is useless.

From 'The science people see on social media'. Credit: Pew Research Center

§

The volume of posts from these science-related pages has increased over the past few years, especially among multiplatform pages. On average, the 15 popular multiplatform Facebook pages have increased their production of posts by 115% since 2014, compared with a 66% increase among Facebook-primary pages over the same time period. (emphasis in the original)

The first line in italics is a self-fulfilling prophecy, not a discovery. This is because the “multiplatform organisations” chosen by Pew for analysis all need to make money, and all organisations that need to continue making money need to grow. Growth is not an option, it’s a necessity, and it often implies growth on all platforms of publication in quantity and (hopefully) quality. In fact, the “Facebook-primary” pages, by which Hitlin and Olmstead mean “accounts from individuals or organizations that have a large social media presence on the platform but are not connected to any offline, legacy outlet”, are also driven to grow for the same reason: commerce, both on Facebook and off. As the authors write,

Across the set of 30 pages, 16% of posts were promotional in nature. Several accounts aimed a majority of their posts at promoting other media and public appearances. The four prominent scientists among the Facebook-primary pages posted fewer than 200 times over the course of 2017, but when they did, a majority of their posts were promotions (79% of posts from Dr. Michio Kaku, 78% of posts from Neil deGrasse Tyson, 64% of posts from Bill Nye and 58% of posts from Stephen Hawking). Most of these were self-promotional posts related to television appearances, book signings or speeches.

A page with a few million followers is likelier than not to be a revenue-generating exercise. While this is by no means an indictment of the material shared by these pages, at least not automatically, IFL Science is my favourite example: its owner Elise Andrew was offered $30 million for the page in 2015. I suspect that might’ve been a really strong draw to continue growing, and unfortunately, many of the “Facebook-primary” pages like IFLS find this quite easy to do by sharing well-dressed click-bait.

Second, if Facebook is the primary content distribution channel, then the number of video posts will also have shown an increase in the Pew data – as it did – because publishers both small and large that’ve made this deal with the devil have to give the devil whatever it wants. If Facebook says videos are the future and that it’s going to tweak its newsfeed algorithms accordingly, publishers are going to follow suit.

Source: Pew Research Center

So when Hitlin and Olmstead say, “Video was a common feature of these highly engaging posts whether they were aimed at explaining a scientific concept, highlighting new discoveries, or showcasing ways people can put science information to use in their lives”, they’re glossing over an important confounding factor: the platform itself. There’s a chance Facebook is soon going to say VR is the next big thing, and then there’s going to be a burst of posts with VR-mediated content. But that doesn’t mean the publishing houses themselves believe VR is good or bad for sharing science news.

§

The average number of user interactions per post – a common indicator of audience engagement based on the total number of shares, comments, and likes or other reactions – tends to be higher for posts from Facebook-primary accounts than posts from multiplatform accounts. From January 2014 to June 2017, Facebook-primary pages averaged 14,730 interactions per post, compared with 4,265 for posts on multiplatform pages. This relationship held up even when controlling for the frame of the post. (emphasis in the original)

Again, Hitlin and Olmstead refuse to distinguish between ‘legitimate’ posts and trash. This would involve a lot more work on their part, sure, but it would also make their insights into science consumption on social media that much more useful. But until then, for all I know, “the average number of user interactions per post … tends to be higher for posts from Facebook-primary accounts than posts from multiplatform accounts” simply because it’s Gwyneth Paltrow wondering about what stones to shove up which orifices.

§

… posts on Facebook-primary pages related to federal funding for agencies with a significant scientific research mission were particularly engaging, averaging more than 122,000 interactions per post in the first half of 2017.

Now that’s interesting and useful. Possible explanation: Trump must’ve been going nuts about something science-related. [Later in the report] Here it is: “Many of these highly engaging posts linked to stories suggesting Trump was considering a decrease in science-agency funding. For example, a Jan. 25, 2017, IFLScience post called Trump’s Freeze On EPA Grants Leaves Scientists Wondering What It Means was shared more than 22,000 times on Facebook and had 62,000 likes and other reactions.”

§

Highly engaging posts among these pages did not always feature science-related information. Four of the top 15 most-engaging posts from Facebook-primary pages featured inspirational sayings or advice such as “look after your friends” or “believe in yourself.”

Does mental-health-related messaging on the back of new findings or realisations about the need for, say, speaking out on depression and anxiety count as science communication? It does to me; by all means, it’s “news I can use”.

§

Three of the Facebook-primary pages belong to prominent astrophysicists. Not surprisingly, about half or more of the posts on these pages were related to astronomy or physics: Dr. Michio Kaku (58%), Stephen Hawking (58%) and Neil deGrasse Tyson (48%).

Ha! It would be interesting to find out why science’s most prominent public authority figures in the last few decades have all been physicists of some kind. I already have some ideas but that’ll be a different post.

§

Useful takeaways for me as science editor, The Wire:

  1. Pages that stick to a narrower range of topics do better than those that cover all areas of science
  2. Controversial topics such as GMOs “didn’t appear often” on the 30 pages surveyed – this is surprising because you’d think divisive issues would attract more audience engagement. However, I also imagine the pages’ owners might not want to post on those issues to avoid flame wars (😐), stay away from inconclusive evidence (😄), not have to take a stand that might hurt them (🤔) or because issue-specific nuances make an issue a hard-sell (🙄).
  3. Most posts that shared discoveries were focused on “energy and environment, geology, and archeology”; half of all posts about physics and astronomy were about discoveries

Featured image credit: geralt/pixabay.

By the way: the Chekhov’s gun and the science article

“If in the first act you have hung a pistol on the wall, then in the following one it should be fired. Otherwise don’t put it there.” (source)

This is the principle of the Chekhov’s gun: that all items within a narrative must contribute to the overarching narrative itself, and those that don’t should be removed. This is very, very true of the first two Harry Potter books, where J.K. Rowling includes seemingly random bits of information in the first half of each book that, voila, suddenly reappear during the climax in important ways. (Examples: Quirrell’s turban and the Whomping Willow). Thankfully, Rowling’s writing improves significantly from the third book, where the Chekhov’s guns are more subtly introduced, and don’t always stay out of sight before being revived for the grand finale.

However, does the Chekhov’s gun have a place in a science article?

Most writers, editors and readers (I suspect) would reply in the affirmative. The more a bit of science communication stays away from redundancy, the better. Why introduce a term if it’s not going to be reused, or if it won’t contribute to the reader understanding what a writer has set out to explain? This is common-sensical. But my concern is about introducing information deftly embedded in the overarching narrative but which does not play any role in further elucidating the writer’s overall point.

Consider this example: I’m explaining a new research paper that talks about how a bunch of astronomers used a bunch of cool techniques to identify the properties of a distant star. While what is entirely novel about the paper is the set of techniques, I also include two lines about how the telescopes the astronomers used to make their observations operate using a principle called long baseline interferometry. And a third line about why each telescope is equipped with an atomic clock.

Now, I have absolutely no need to mention the phrases ‘long baseline interferometry’ and ‘atomic clocks’ in the piece. I can make my point just as well without them. However, to me it seems like a good opportunity to communicate to – and not just inform – the reader about interesting technologies, an opportunity I may not get again. But a professional editor (again, I suspect) would argue that if I’m trying to make a point and I know what that point is, I should just make that. That, like a laser pointer, I should keep my arguments focused and coherent.

I’m not sure I would agree. A little bit of divergence is okay, maybe even desirable at times.

Yes, I’m aware that editors working on stories that are going to be printed, and/or are paying per word, would like to keep things as concisely pointy as possible. And yes, I’m aware that including something that needn’t be included risks throwing the reader off, that we ought to minimise risk at all times. Finally, yes, I’m aware that digressing off into rivulets of information also forces the writer to later segue back into the narrative river, and that may not be elegant.

Of these three arguments (that I’ve been able to think of; if you have others, please feel free to let me know), the first one alone has the potential to be non-negotiable. The other two are up to the writer and the editor: if she or they can tuck away little gems of trivia without disrupting the story’s flow, why not? I for one would love to discover them, to find out about connections – scientific, technological or otherwise – in the real world that frequently find expression only with the prefix of a “by the way, did you know…”.

Featured image credit: DariuszSankowski/pixabay.


Cybersecurity in space

Featured image: The International Space Station, 2011. Credit: gsfc/Flickr, CC BY 2.0

On May 19, 1998, the Galaxy IV satellite shut down unexpectedly in its geostationary orbit. Immediately, most of the pagers in the US stopped working even as the Reuters, CBS and NPR news channels struggled to stay online. The satellite was declared dead a day later but it was many days before the disrupted services could be restored. The problem was found to be an electrical short-circuit onboard.

Such are the effects of a single satellite going offline. What if satellites could be shut down en masse? The much-discussed consequences would be terrible, which is why satellite manufacturers and operators are constantly devising new safeguards against potential threats.

However, the pace of technological advancements, together with the proliferation of the autonomous channels through which satellites operate, has ensured that operators are constantly but only playing catch-up. There’s no broader vision guiding how affected parties could respond to rapidly evolving threats, especially in a way that constantly protects the interests of stakeholders across borders.

With the advent since the 1990s of low-cost launch options, including from agencies like ISRO, the use of satellites to prop up critical national infrastructure – including becoming part of that infrastructure themselves – stopped being the exclusive demesne of developed nations. But at the same time, the drop in costs signalled that the future of satellite operations might rest with commercial operators, leaving them to deal with technological capabilities that until then had been handled solely by the defence industry and its attendant legislative controls.

Today, satellites are used for four broad purposes: Earth-observation, meteorology and weather-forecasting; navigation and synchronisation; scientific research and education; and telecommunication. They’ve all contributed to a burgeoning of opportunities on the ground. But in terms of their own security, they’ve become a bloated balloon waiting for the slightest prick to deflate.

How did this happen?

Earlier in September, three Chinese engineers were able to hack into two Tesla electric cars from 19 km away. They were able to move the seats and mirrors and, worse, control the brakes. Fortunately, it was a controlled hack conducted with Tesla’s cooperation, after which the engineers reported the vulnerabilities they’d found to Tesla.

The white-hat attack demonstrated a paradigm: that physical access to an internet-enabled object was no longer necessary to mess with it. Its corollary was that physical separation between an attacker and the target no longer guaranteed safety. In this sense, satellites occupy the pinnacle of our thinking about the inadequacy of physical separation; we tend to leave them out of discussions on safety because satellites are so far away.

It’s in recognition of this paradigm that we need to formulate a multilateral response that ensures minimal service disruption and the protection of stakeholder interests at all times in the event of an attack, according to a new report published by Chatham House. It suggests:

Development of a flexible, multilateral space and cybersecurity regime is urgently required. International cooperation will be crucial, but highly regulated action led by government or similar institutions is likely to be too slow to enable an effective response to space-based cyberthreats. Instead, a lightly regulated approach developing industry-led standards, particularly on collaboration, risk assessment, knowledge exchange and innovation, will better promote agility and effective threat responses.

Then again, how much cybersecurity do satellites need really?

Because when we speak about cyber anything, our thoughts hardly venture out to include our space-borne assets. When we speak about cyber-warfare, we imagine some hackers at their laptops targeting servers on some other part of the world – but a part of the world, surely, and not a place floating above it. However, given how satellites are becoming space-borne proxies for state authority, they do need to be treated as significant space-borne liabilities as well. There’s even precedent: in November 2014, an NOAA satellite was hacked by Chinese actors with minor consequences. But in the process, the attack revealed major vulnerabilities that the NOAA rushed to patch.

So the better question would be: what kinds of protection do satellites need against cyber-threats? To begin with, hackers have been able to jam communications and replace legitimate signals with false ones (called spoofing). They’ve also been able to invade satellites’ SCADA systems, introduce viruses to trip up software and mount denial-of-service (DoS) attacks. The introduction of micro- and nanosatellites has also provided hackers with an easier conduit into larger networks.
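Spoofing succeeds when a satellite accepts commands without verifying who sent them. One textbook defence – of the kind the industry-led standards the report calls for could mandate – is to authenticate every uplinked command with a keyed message-authentication code. A minimal sketch; the pre-shared key, the command strings and the frame format here are simplified assumptions for illustration:

```python
import hmac
import hashlib

SECRET_KEY = b"shared-operator-key"  # hypothetical pre-shared key

def sign_command(command: bytes) -> bytes:
    """Operator side: append an HMAC-SHA256 tag to the command frame."""
    tag = hmac.new(SECRET_KEY, command, hashlib.sha256).digest()
    return command + tag

def verify_command(frame: bytes) -> bool:
    """Satellite side: accept the frame only if the tag checks out."""
    command, tag = frame[:-32], frame[-32:]
    expected = hmac.new(SECRET_KEY, command, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)

genuine = sign_command(b"ADJUST_ORBIT +0.5")
spoofed = b"ADJUST_ORBIT -90.0" + b"\x00" * 32  # attacker without the key

print(verify_command(genuine))  # True
print(verify_command(spoofed))  # False
```

An attacker who can jam or inject signals but doesn’t hold the key can no longer forge an acceptable command – though, as the report notes, retrofitting even this much onto legacy satellites is far from trivial.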

Another kind of protection that could be useful is from the unavoidable tardiness with which governments and international coalitions react to cyber-warfare, often due to over-regulation. The report states, “Too centralised an approach would give the illicit actors, who are generally unencumbered by process or legislative frameworks, an unassailable advantage simply because their response and decision-making time is more flexible and faster than that of their legitimate opponents.”

Do read the full report for an interesting discussion of the role cybersecurity plays in the satellite services sector. It’s worth your time.

Vanilla entertainment

One of the first, and most important in hindsight, bits of advice I got from the journalist Siddharth Varadarajan was about how to choose what to write: “Write what you’d like to read” (Dan Fagin would later add the important “why now” dimension). As someone avidly interested in scientific theories – especially in physics and astronomy – I’ve noticed that the best stories around today about theoretical research have narratives centred around some kind of human dilemma. One of the more recent examples of such stories is of Shinichi Mochizuki’s work trying to solve the abc conjecture. The principal ‘plot element’ was that Mochizuki’s proposed solution to the problem required some superhuman efforts of concentration and re-learning – the latter being something humans are not naturally good at.

However, I found my reading was only half-sated by stories like Mochizuki’s. The other half was best found in books that described the development of complex scientific theories through the lives of more than a few researchers. The last such book I read was Brian Greene’s The Elegant Universe, which posits string theory as likely being the ultimate and singular framework in physics and mathematics capable of describing this universe we inhabit. Despite my issues with the book, I really liked it because of its introduction to great scientists starting with how they got interested in some topics and how their work jumped thereon from one idea to another. Such stories sometimes don’t involve conflicts, and so science journalists are not motivated to write about them, which is understandable.

Yet, I think such stories need to make an appearance in the pages of the mainstream media because the glimpses they provide into the lives and thought-processes of scientists who’ve succeeded in making one contribution or another are seldom available anywhere else but books. So when I received a 5,400-word interview of the theoretical physicist Abhay Ashtekar to publish on The Wire, I published the whole thing. Though Ashtekar describes some friction within the field, the interview as such makes for pleasant, informative reading. I hope you enjoy reading it (for whatever reason).

From unboiling eggs to the effects of intense kissing, IgNobel Prizes reward good ol’ curiosity

The year’s IgNobel Awards were held on September 17, rewarding research that achieves a kind of excellence that impacts society without mustering the sobriety of character that often bags the more vaunted Nobel Prizes. The 25th edition, held as usual at Harvard University’s Sanders Theatre and presided over, as usual, by the magazine Improbable Research‘s editor Marc Abrahams, recognised work done in describing pain, diagnosing appendicitis, the effects of intense kissing and more.

Instituted and first awarded in 1991, the prizes were originally designed to identify work that shouldn’t be reproduced, although that snark has diminished over time. On the flipside, they’re known for juxtaposing meticulously conducted research with the banality of its subjects. For example, the citation for the management prize this year read, “… for discovering that many business leaders developed in childhood a fondness for risk-taking, when they experienced natural disasters that – for them – had no dire personal consequences.” The awarders’ take has been that “The Ig Nobel Prizes honour achievements that make people laugh, and then think. The prizes are intended to celebrate the unusual, honour the imaginative – and spur people’s interest in science, medicine, and technology.”

The 2015 literature prize went to Dutch linguists for discovering that a translation of “huh?” exists in almost every language, for reasons unknown. The biology prize was picked up by a Chilean dyad that found “that when you attach a weighted stick to the rear end of a chicken, the chicken then walks in a manner similar to that in which dinosaurs are thought to have walked.” The physics prize was claimed by scientists who, using the principles of fluid dynamics, found early last year that many mammals – across species – often take a uniform 21 seconds to take a leak (give or take 13 seconds). The diagnostic medicine prize awardees could actually have hit upon something more useful than you think: diagnosing appendicitis by having patients drive at a fixed speed over a speed-bump. If they experience a sharp pain in certain areas, it’s surgery time. The physiology and entomology prize was co-bagged by Justin Schmidt, for developing a relative pain index, and Michael Smith, for letting himself be stung in 25 parts of his body to find the places most sensitive (nostril, upper lip, penis shaft) and least sensitive (skull, middle toe tip, upper arm) to stinging pain. Brave souls all.

The citations also demonstrated how being persistently curious could someday enable you to do things you wouldn’t have thought scientifically (or mathematically) possible. For example, the chemistry prize went to a team from the USA and Australia that figured out how to partially unboil an egg (kudos to Abrahams & co. for being able to go past the paper’s title: “Shear-stress-mediated refolding of proteins from aggregates and inclusion bodies”). The medicine prize may have actually put too fine a point on what everyone probably already knew: kissing does people a world of good, and intense kissing does good intensely. And there’s no point trying to paraphrase the mathematics-prize-winning work: “for trying to use mathematical techniques to determine whether and how Moulay Ismael the Bloodthirsty, the Sharifian Emperor of Morocco, managed, during the years from 1697 through 1727, to father 888 children.”

However, it’s the work winning the 2015 economics prize that doesn’t deserve to be reproduced at all – and it’s probably telling that it involved not scientists but policemen. Specifically, the prize went to the Bangkok Metropolitan Police, which offered to pay its policemen extra if they refused to take bribes from others. The BMP can take pride in its work’s illustrious company, which includes the 2008 recession, the invention of virtual animal husbandry, and the finding that people would indeed postpone their deaths “if that would qualify them for a lower rate on the inheritance tax”.