Algorithms Are Making Many of Your Decisions – and You Might Be OK With That

Dec. 6, 2021

A new study by Arizona law professor Derek Bambauer suggests that most people are content to let algorithms built on big data make many – but not all – of their day-to-day decisions.

This story was originally published on UA News.

The odds are good that at least a few algorithms helped you find this article.

After all, algorithms – which are essentially systems or processes that help make a choice – have been around nearly forever. But they've become ubiquitous with the rise of big data, and today they typically take the form of math formulas written as computer code.

Facebook uses an algorithm to deliver its News Feed to nearly 3 billion users. Algorithms are what allow Tesla's cars to drive themselves. And any Google search involves an algorithm that decides the order of the results.

Policymakers have long assumed that most people would rather not have a machine make certain day-to-day decisions – such as whether someone deserves a bank loan or is liable for a civil traffic offense. But a new study by Derek Bambauer, a professor in the University of Arizona James E. Rogers College of Law, finds that many people are perfectly happy letting a machine make certain decisions for them.

Bambauer, who studies internet censorship, cybersecurity and intellectual property, worked in the computer science field as a systems engineer before his legal career.

His new study, set to be published in the Arizona State Law Journal in early 2022, aims to help legal scholars and policymakers understand the public's perception of decision-making algorithms so they can regulate those algorithms in a way that better reflects consumers' views.

"We're at a moment where algorithms have power and potential, but there's also a good bit of fear about them," said Bambauer, who co-authored the study with Michael Risch, professor and vice dean of the Charles Widger School of Law at Villanova University.

That fear, he added, is likely overstated by legal scholars and policymakers.

"In general, I think both Michael and I think that technology tends to be more mundane – it does not do the terrific things that we thought it would, and it does not do the awful things that we thought it would," Bambauer said. "And, so, we thought people were jumping ahead and saying, 'We need to reform this,' before asking, 'How do people actually feel?'"

Preference for Algorithms Was 'Genuinely Surprising'

To better understand how people feel about the technology, Bambauer and Risch surveyed roughly 4,000 people online, asking whether they would prefer that a human or an algorithm make one of four hypothetical decisions:

  • Whether the participant would receive a $10-$20 gift card from a coffee shop
  • Whether the participant would be found liable for a civil traffic offense
  • Whether the participant would be approved for a bank loan
  • Whether the participant would be included in a clinical trial for a treatment for a disease that they have

Study participants were randomly assigned to one of the four scenarios and to a decision-maker – either human or algorithm. Participants also were given information about the decision-maker, such as its accuracy rate, how long it takes to decide and the cost of using it. With that information, participants could then choose whether they wanted to switch to the other decision-maker.
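For readers who want to see the design in code, here is a minimal sketch of that 4 × 2 random assignment. The scenario labels and decision-maker attributes below are invented for illustration – the study's actual survey materials are not reproduced in this article.

```python
import random

# Hypothetical sketch of the survey's randomization: each participant is
# assigned one of four scenarios and one of two decision-makers, along with
# information about that decision-maker. All attribute values are invented.
SCENARIOS = ["coffee gift card", "civil traffic offense", "bank loan", "clinical trial"]
DECIDERS = ["human", "algorithm"]

def assign_participant(rng: random.Random) -> dict:
    decider = rng.choice(DECIDERS)
    return {
        "scenario": rng.choice(SCENARIOS),
        "decider": decider,
        # Facts shown to the participant before they may switch (made-up values):
        "accuracy": 0.90 if decider == "algorithm" else 0.85,
        "days_to_decide": 1 if decider == "algorithm" else 7,
        "cost_usd": 5 if decider == "algorithm" else 20,
    }

print(assign_participant(random.Random(0)))
```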

The study found that 52.2% of all participants chose the algorithm, while 47.8% chose a human.

Even though the researchers suspected that negative public perception of algorithms had been oversold, they were surprised by their own findings.

"We thought that if people genuinely were nervous about algorithms, that would show up in that aggregate – that the percentage of people who chose algorithms would not only be under 50%, but that it would be statistically significantly lower," Bambauer said. "But that 4% difference – while it doesn't look like much – is statistically significant, and that was genuinely surprising."

The researchers also found that:

  • Less expensive algorithms are more popular. In scenarios where the algorithm cost less than the human, 61% of respondents chose the algorithm, but only 43% chose this option when the cost was the same.
  • If the stakes are high, people turn to humans. The scenarios presented to study participants had consequences ranging from receiving a gift card for coffee to having to pay several hundred dollars for a traffic ticket. The higher the stakes, the more often participants turned to humans.
  • Accuracy factors in heavily when deciding on a decision-maker. If one decision-maker had a better accuracy rate than the other, 74% of respondents picked the more accurate option. But participants were about evenly divided on their choice of decision-maker when the human and algorithm had nearly equal accuracy rates.
  • Faster algorithms are more attractive to consumers. If the algorithm was faster, participants chose it 57% of the time. But if the human was just as fast, the human was chosen 48% of the time.

The 'What' and 'How' for Policymakers

Bambauer said he hopes the study gets policymakers to ask two key questions with difficult answers: "What should they do?" and "How should they do it?"

The "What should they do?" question is difficult, in part, because algorithms are used across a range of industries, meaning one size can't possibly fit all, Bambauer said. Algorithms also lack a certain level of transparency that regulators and consumers have come to expect, he added, because algorithms' most tangible form is in computer code, which looks like gibberish to the average person.

"If Facebook published its algorithm tomorrow, nobody would know what it is," Bambauer said. "For most of us, it wouldn't make a bit of difference."

In searching for the "How should they do it?" answer, policymakers should avoid trying to simply regulate algorithms out of existence, Bambauer said. In addition to not being in the public interest, banning social media companies outright from using algorithms is "literally impossible," he said.

"Just displaying things in chronological order is an algorithm," he added. "There's just no getting around it."

Lawmakers might do well to look to the late 1980s, Bambauer said, when Congress enacted legislation requiring credit card companies to provide a cheat sheet summarizing the costs of their cards. The charts with this information, called Schumer boxes, were named after then-Rep. Charles Schumer of New York, who sponsored the legislation.

This could serve as a model, Bambauer said, for informing consumers about the algorithms that they're using to make decisions. He said algorithm owners could be required to provide plain-language facts about what their algorithms do, such as: "By using an algorithm, we save you money," or, "By using an algorithm, we make fewer mistakes."

Bambauer and Risch offer a deeper analysis of their policy recommendations in a recent essay published on TechStream, a Brookings Institution website that covers tech policy.

While policy solutions to address algorithms' shortcomings aren't yet clear, Bambauer said the Schumer box shows that lawmakers already have the tools to craft such solutions. He sees a future in which decision-making systems likely involve both humans and algorithms.

"The right thing to do," he said, "is to figure out a spot where we should have the person and figure out the spot where we should have the code."