Absurd precision

Our local swimming pool, like other public buildings, has to display an energy efficiency certificate.  While waiting for Tina to finish changing, I started to read this (OK, I should get a life!) and was amazed that the total usable floor area in square metres (about 2500) was quoted to two decimal places.  (So there were six significant digits in the printed record.)  It may be that the company was simply converting dimensions in imperial units (the building dates from the 1930s, before metrication in the UK) but I wondered whether anyone providing the dimension had the common sense to think that 0.01 square metres is a little smaller than the area of a postcard.
But how many OR studies have been guilty of the same absurd precision?  Papers that came to me to be refereed often had similarly absurdly precise data, based on parameters that were quoted to one or two significant figures.  It must be accurate -- the computer says so!
Oh dear!  No sooner had I posted the above two paragraphs than I opened the latest issue of the Journal of the O.R. Society.  It included several atrocities.
One row of a table told me that the four parts of an algorithm took 0.54 secs, 0.22 secs, 10.76 secs and 7620.64 secs, with a total, including other parts, of 7633.53 secs.  The fourth component used 99.8% of the time, so the algorithm's running time is effectively determined by that one part alone.
Another table had a column of figures -- 6349.09, 6244.68, 4103.79 and so on -- for the time taken to find a solution to a problem whose data was measured in metres, the total of all measurements being about 3000 metres.  So the problem solution was correct to no better than 1 in 3000 (let's be generous, and say 0.1%), yet the time in the first row was quoted as correct to 0.01 in about 6000.
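The fix for this sort of thing is trivial to automate.  As a minimal sketch (my own illustration in Python, not anything from the papers in question), here is a helper that rounds a value to a stated number of significant figures, applied to the numbers above:

```python
import math

def round_sig(x, sig=3):
    """Round x to the given number of significant figures."""
    if x == 0:
        return 0.0
    # Position of the leading digit decides how many decimal places survive.
    return round(x, sig - 1 - int(math.floor(math.log10(abs(x)))))

# The pool's floor area: quoting roughly 2500 square metres to two decimal
# places implies six significant figures; three is already generous.
print(round_sig(2512.37, 3))   # -> 2510.0

# The algorithm timings: 7620.64 of 7633.53 secs is 99.8% of the work.
print(round_sig(100 * 7620.64 / 7633.53, 3))   # -> 99.8
```

Reporting the timings through something like this would have said everything the table needed to say, in a third of the digits.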
