September 24, 2014 — Blog Post
Creating (Unbiased) Decision Aids
Our choices are what give us a feeling of control over our lives, a sense of stability in what might otherwise feel like an endless, whirling chaos. But hundreds of tiny, seemingly insignificant things influence our decisions and emotions every day.
In fact, that’s the entire basis for the study of choice architecture. (For those encountering the phrase for the first time: choice architecture is the idea that decisions are influenced by how choices are presented.) Take, for example, the high tax on cigarettes, intended to discourage people from smoking as much. Some herald this as a healthy nudge, while others shout that it’s a paternalistic infringement on our American right to choose.
But what about shared decision making? After all, when it comes to people’s bodies and lives, there must be a certain level of care not to manipulate the presentation of information. But is it even possible to present genuinely unbiased data without manipulating the structure of the information?
The first thing to consider is that everyone has their biases. In my work creating decision aids, I have biases, my medical advisory panel has theirs, and the writers and clinicians authoring research content in the literature also have theirs. This means that I have to be aware of my own biases and analyze them. (And because I know I may be blind to them, I have my editor keep an eye out for them, too.)
I can note my advisors’ influences because they’re usually pretty forthright about them. For example, surgeons tend to be vehement that surgery is a strong treatment option. So when an advisor offers something that aligns with a known bias, I can flag it as a point on which I need to do a bit more outside research.
The next thing to consider is the audience’s bias. This can be trickier, since not everyone has the same background. Often, though, there are certain trends. Consider, for example, an American man who hears he has low-risk prostate cancer. Broadly speaking, American men are taught to be aggressive when faced with a challenge; in vast swaths of our society, being aggressive is viewed as “being a man.” We also know that the word “cancer” has a profound effect on people — there’s a fear of death, and often a panic to quickly treat anything that might lead to death and get it out. At the same time, there’s often a certain optimism that, with the right treatment, cancer can be beaten.
What we found in our focus groups was that to make active surveillance a viable option in a decision aid, it needs to be presented first, so it doesn’t appear as an afterthought — which can happen if it’s described last. It also needs an equal amount of time spent describing what it is and how it works, so it feels as weighty and aggressive as radiation or surgery. It doesn’t have to ultimately be the right choice for every man, but each man deserves to hear about it in a way that gives it the same weight in the decision-making process, instead of its being eliminated outright for not playing into the American male bias.
Is it manipulative to employ choice architecture to counteract bias? I don’t think so. Because even when the structure is re-ordered to play against bias, the information is still itself. The numbers aren’t fudged, and nothing is hidden. And despite all the best efforts towards making the treatment options seem equal, if one is fundamentally unlike the others (in this instance, active surveillance), our nature is to rule that option out in order to more easily compare the options that are more alike (surgery and radiation, both aggressive treatments).
So again, I ask: is it possible to present completely unbiased data without manipulating the structure of the information? No, I don’t think it is. But it’s something we can keep striving toward.