How to solve the problem with short answer questions


The problem with short answer questions is that they don't give learners good feedback on their responses. In fact, their overuse is a good indication of lazy instructional design. Some think the problem is that learners can 'cheat' and simply look at the model answer. Personally, I have no issue with this: the only people learners are cheating are themselves.

The instructional design reason for using short answer questions is that sometimes you need open questions. In another blog post on how to write effective multiple choice questions, I talk about how face-to-face trainers often overuse short answer questions when they come to elearning because, in classroom sessions, they're used to posing open questions to spark discussion and debate.

If a topic needs to be dealt with through a short answer, perhaps because the subject matter is debatable or complex, then self-paced learning might not be the best approach. The great thing about a blended-learning approach is that it allows these kinds of topics to be dealt with openly in discussion, either online or face to face.

With a short answer question, a learner writes something and then gets back a written model answer against which they can compare their response. To trigger the feedback, we use a button labelled something like:

  • Compare your answer

  • See an expert's response

  • See what other people have written

But most of the time we're guilty of cheating our learners out of great feedback. It's as though we set up a learning experience where we ask the learner to do something that often requires considerable thought, and then respond with a seemingly random comment.

There are three approaches to solving the problem with short answer questions and giving better feedback to learners.

Searching for correct keywords

With this approach, an instructional designer defines a series of keywords or key statements that need to appear in the learner's response. The response is then checked against these keywords or statements. The feedback might be a score, with the matches highlighted in the response. This works well when you're asking the learner to make a list or when the response needs certain technical terms. If you are asking more complex or reflective questions, though, this kind of feedback can be really frustrating for the learner.
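As a rough illustration, a keyword check of this kind can be sketched in a few lines of Python. The question, keywords and scoring rule below are invented for the example and not taken from any particular tool:

```python
import re

def keyword_feedback(response, keywords):
    """Check a free-text response against a list of expected keywords.

    Returns how many keywords were found, how many were expected, and the
    response text with each match wrapped in ** so it can be highlighted.
    """
    found = 0
    highlighted = response
    for keyword in keywords:
        pattern = re.compile(re.escape(keyword), re.IGNORECASE)
        if pattern.search(highlighted):
            found += 1
            highlighted = pattern.sub(lambda m: f"**{m.group(0)}**", highlighted)
    return found, len(keywords), highlighted

# Invented example: a fire-safety question expecting three key phrases.
keywords = ["raise the alarm", "evacuate", "assembly point"]
response = "First I would raise the alarm, then help everyone evacuate the building."
score, total, marked_up = keyword_feedback(response, keywords)
print(f"{score}/{total} key phrases found")
print(marked_up)
```

Simple matching like this is exactly why the approach suits lists and technical terms: it has no way of recognising a thoughtful answer that happens to use different words.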

We don’t yet have this function in our cloud-based authoring tool Glasshouse, but it's on our roadmap.    

Automatic marking using machine learning

With this approach, a computer program is trained to recognise patterns in text and compares those patterns to the learner's response. This is how the Google Answers system works. The drawback is that it needs a large amount of sample content to be effective. It has been used successfully in MOOCs, where there are large numbers of learners.
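Real marking engines are far more sophisticated than this, but a toy sketch of the underlying idea, matching a new response against human-marked samples by simple text similarity, might look like the following. All the sample responses and grades here are invented:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented sample set: responses a human marker has already graded.
sample_responses = [
    "Photosynthesis converts light energy into chemical energy in plants.",
    "Plants use sunlight, water and carbon dioxide to make glucose and oxygen.",
    "Photosynthesis is when plants breathe in oxygen at night.",
]
sample_grades = ["good", "good", "needs work"]

def auto_mark(new_response):
    """Grade a response by finding the most similar human-marked sample."""
    vectorizer = TfidfVectorizer()
    matrix = vectorizer.fit_transform(sample_responses + [new_response])
    # Compare the new response (last row) against every marked sample.
    similarities = cosine_similarity(matrix[-1:], matrix[:-1])[0]
    best = similarities.argmax()
    return sample_grades[best], float(similarities[best])

grade, similarity = auto_mark("Plants turn sunlight and CO2 into sugar and oxygen.")
print(grade, round(similarity, 2))
```

You can see from the tiny sample set why this only becomes reliable at scale: with a handful of marked responses, the nearest match is often not a meaningful one.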

Peer feedback on responses

This is the path we've followed with Glasshouse. After the learner completes a short answer question, they are shown another learner's response and asked to give feedback on it, which is then delivered to that learner anonymously. The learner doesn't get feedback on their own response instantly, but an added benefit of the delay is that they reconnect with the program over a period of time.
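A much-simplified sketch of the pairing step, assuming an invented in-memory store rather than Glasshouse's actual implementation, might look like this:

```python
import random

# Invented in-memory store; a real tool would keep this in a database,
# keyed by course, question and learner.
pending_reviews = []  # each entry: {"learner_id": ..., "response": ...}

def submit_response(learner_id, response):
    """Store a learner's response and hand back a peer response to review.

    Only the text of the peer response is returned, so the review stays
    anonymous; the feedback is later routed back to the original author.
    """
    candidates = [r for r in pending_reviews if r["learner_id"] != learner_id]
    to_review = random.choice(candidates) if candidates else None

    pending_reviews.append({"learner_id": learner_id, "response": response})
    return to_review["response"] if to_review else None

peer_text = submit_response("learner-42", "My answer to the open question...")
if peer_text is None:
    print("Nothing to review yet - feedback will arrive once a peer responds.")
```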

This peer assessment approach turns the short answer question into a higher-order learning tool, and the learning becomes more of a social experience. The trade-off is that the learner needs to put in more effort (which is also a positive). We have now used this approach on a couple of projects, and so far the learner responses have been positive.