TITLE: More on Computational Simulation, Programming, and the Scientific Method
AUTHOR: Eugene Wallingford
DATE: January 24, 2008 6:39 AM
As I was running through some very cold, snow-covered streets,
it occurred to me that
my recent post
on James Quirk's AMRITA system neglected to highlight one of
the more interesting elements of Quirk's discussion:
Computational scientists have little or no incentive to become
better programmers, because research papers are the currency
of their disciplines. Publications earn tenure and promotion,
not to mention street cred in the profession. Code is viewed
by most as merely a means to an end, an ephemeral product on
the way to a citation.
What I take from Quirk's paper is that code isn't -- or shouldn't
be -- ephemeral, or only a means to an end. It is the experiment
and the source of data on which scientific claims rest. As I
thought more about the paper I began to wonder, can computational
scientists do better science if they become better programmers?
Even more to the point, will it become essential for a
computational scientist to be a good programmer just to do the
science of the future? That's certainly what I heard some of
the scientists at the
While googling to find a link to Quirk's article for my entry,
I found the paper
Computational Simulations and the Scientific Method
by Bil Kleb and Bill Wood. They take the programming-and-science angle
in a neat software direction, suggesting that:
- the creators of a new simulation technique should
publish unit tests that specify the technique's
intended behavior, and
- the developers of scientific simulation code for a
given technique should use its unit tests to demonstrate
that their component correctly implements the technique.
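Kleb and Wood's proposal is concrete enough to sketch. Here is a minimal example, in Python, of what published unit tests for a technique might look like, using composite trapezoidal integration as a stand-in technique; the function and test names are my own illustration, not taken from their paper:

```python
# Hypothetical illustration of Kleb and Wood's proposal: unit tests
# published alongside a numerical technique (here, composite
# trapezoidal integration) to specify its intended behavior.
import math
import unittest


def trapezoid(f, a, b, n):
    """Composite trapezoidal rule: integrate f over [a, b] using n panels."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return h * total


class TrapezoidSpec(unittest.TestCase):
    def test_exact_for_linear_integrands(self):
        # The technique is defined to be exact on linear functions:
        # the integral of 2x + 1 over [0, 3] is 12.
        self.assertAlmostEqual(trapezoid(lambda x: 2 * x + 1, 0.0, 3.0, 4), 12.0)

    def test_second_order_convergence(self):
        # Doubling the number of panels should cut the error by about 4x.
        exact = math.e - 1.0
        err_n = abs(trapezoid(math.exp, 0.0, 1.0, 50) - exact)
        err_2n = abs(trapezoid(math.exp, 0.0, 1.0, 100) - exact)
        self.assertAlmostEqual(err_n / err_2n, 4.0, places=1)


if __name__ == "__main__":
    unittest.main()
```

The point is that a second, independently written implementation of the same technique could be run against TrapezoidSpec to demonstrate that it, too, behaves as the technique's creators intended.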
These are not about programming or software development; they
are about a way to do science.
This is a really neat connection between (agile) software
development and doing science. The idea is not necessarily
new to folks in the agile software community. Some of these
folks speak of test-driven development in terms of being a
"more scientific" way to write code, and agile developers of
all flavors believe deeply in the observation/feedback cycle.
But I didn't know that computational scientists were talking
this way, too.
After reading the Kleb and Wood paper, I was not surprised
to learn that Bil has been involved in the Agile 200?
conferences over the years. I somehow missed the 2003
IEEE Software article that he and Wood co-wrote on
XP and scientific research, and so now have something new
to read.
I really like the way that Quirk and Kleb & Wood talk
about communication and its role in the practice of science.
It's refreshing and heartening.
Publishing a test fixture offers several potential benefits,
including:
- a way to communicate a technique or algorithm better
- a way to share the required functionality and performance
features of an implementation
- a way to improve repeatability of computational
experiments, by ensuring that scientists using the
same technique are actually getting the same output
from their component modules
- a way to improve comparison of different experiments
- a way to improve verification and validation of