What a silly question!
There is, inevitably, a close connection between statistics and operational research. The data for O.R. work is collected and analysed by people with statistical expertise, and those engaged in an O.R. study must advise on which data sets are required and the sort of analysis that might help.
So it is hardly surprising that most O.R. training involves lecture modules and courses on elementary and advanced statistics. Hopefully, these include warnings about what might go wrong. I am sure that many O.R. practitioners have horror stories about mistakes in the way observations were made, or about incomplete data sets.
One of the warnings that I received many years ago, and which I stressed with students, concerned collecting data by means of questionnaires. Someone said that there is a popular attitude that "Anyone can produce a questionnaire", to which one replies "Anyone can make a mess of a questionnaire". Surveys and questionnaires need to be thoroughly tested before they are released.
Why mention this? Tina was sent a lifestyle questionnaire this week, for a long-term study for which she has volunteered. Several of the questions were amazingly badly thought out, but her prize was:
"In the past six months, on average, on weekdays, how many hours have you spent out-of-doors?"
The question was intended to look at risk of skin cancer.
But could you answer that question with any confidence of being accurate?
What do they mean by "Out-of-doors"? Who measures their time out of doors? Did any of the team test themselves on the questionnaire? Did they try it out on anyone else? A question like that should NEVER have got through to the final survey!
Good luck to whoever has to analyse the responses.