As a software engineer who has worked in both academia and industry, I'm not at all surprised that the Imperial College coronavirus pandemic model is full of errors. Building computer models is extremely difficult, and being an expert in epidemiology doesn't mean you'll be able to build a functional epidemiological computer model. Computer modeling is a specialization of its own, not an add-on to other sets of expertise. And it's not just about individual expertise: institutional expertise is at least as important.

Processes, not people. This is important: the problem here is not really the individuals working on the model. The people on the Imperial team would quickly do a lot better if placed in the context of a well-run software company. The problem is the lack of institutional controls and processes. Every programmer has written buggy code they aren't proud of; the difference between ICL and the software industry is that the latter has processes to detect and prevent mistakes.

For standards to improve, academics must lose the mentality that the rules don't apply to them. In a formal petition to ICL to retract papers based on the model, you can see comments "explaining" that scientists don't need to unit test their code, that criticising them will just cause them to avoid peer review in future, and other entirely unacceptable positions. Eventually a modeller from the private sector gives them a reality check. In particular, academics shouldn't have to be convinced to open their code to scrutiny; it should be a mandatory condition of grant funding.
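To be concrete about what "unit testing your code" means here: it is not onerous. Below is a minimal sketch (a hypothetical toy SIR model, not ICL's actual code) showing the kind of cheap invariant checks the petition comments dismissed: a few lines that catch whole classes of bugs, like populations leaking out of the model.

```python
# Hypothetical discrete-time SIR model step -- an illustration of
# unit-testable model code, not the Imperial College implementation.

def sir_step(s, i, r, beta=0.3, gamma=0.1):
    """Advance susceptible/infected/recovered counts by one day."""
    n = s + i + r
    new_infections = beta * s * i / n
    new_recoveries = gamma * i
    return (s - new_infections,
            i + new_infections - new_recoveries,
            r + new_recoveries)

def test_population_is_conserved():
    # A closed model must not create or destroy people.
    s, i, r = 990.0, 10.0, 0.0
    for _ in range(100):
        s, i, r = sir_step(s, i, r)
    assert abs((s + i + r) - 1000.0) < 1e-6

def test_no_negative_compartments():
    # No compartment should ever go negative.
    s, i, r = 990.0, 10.0, 0.0
    for _ in range(1000):
        s, i, r = sir_step(s, i, r)
        assert s >= 0 and i >= 0 and r >= 0

test_population_is_conserved()
test_no_negative_compartments()
```

Tests like these are exactly the institutional process point above: they don't require the author to be a better scientist, only for the organization to insist the checks exist and run on every change.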

Frankly, I wouldn't trust any modeller who isn't risking their own money on the accuracy of their model.
