Why We Don’t Trust AI Decisions
The human brain gives us a big clue as to why we are often distrustful of AI-generated advice

Every day we make many decisions. Most feel minor, with very few repercussions for suboptimal choices. Often, we are not even aware we are choosing between options. Take a typical Monday morning: On the commute to work, traffic starts to slow down. You’re in the right-hand lane, and you might switch to a faster-moving lane, but your exit is in little more than a mile. With barely a thought, you settle behind the slow-moving car in front of you.
As you are walking from the parking lot to the office building, you stop at the café on the corner. It is Monday morning, so coffee feels as essential as air, but that croissant is also tempting, even though you had promised yourself to cut back on carbs. As you settle in at your desk, wiping the pastry crumbs off your fingers, your boss stops by to ask about your weekend. You start to talk about the plumbing problems you had, but when you see his eyes shift away, you pivot and mention that you caught the second half of the big game Sunday night. A few minutes later, your boss is fist-bumping you, chuckling at your insight into how the home team might be able to salvage its season.
There are other times, though, when you feel the weight of a decision and wrestle with the pros and cons of one option versus another. For example, that same morning, you stare at the email you have written inviting one of your coworkers to join you on a big project. Your coworker has relevant expertise but is a pain to work with. Minutes go by as you contemplate hitting send, trying to predict how the next few weeks will go and, almost as important, how you will feel about it.
A fair number of tech companies, and their multitude of investors, are betting that in the near future, if you have not done so already, you will be willing to listen to AI to help you make all kinds of decisions, from the seemingly inconsequential to the important. Commercials show people asking large language model (LLM) AI for advice on a wide range of topics—from how to do a household task to how to care for your baby. Advice you used to get from friends and family will now be available 24-7 on your phone.
Trying to decide to change lanes? Ask the AI in your car what you should do. If you need help thinking about how bad it will be for your diet to eat that croissant, the AI on your wrist will be glad to give advice in the context of the health biomarkers it is monitoring. Wondering how to get your boss to pay attention to your conversation? AI has some suggestions for likely engaging topics. Your work laptop’s AI client will not only give you advice about whom to select for your team project, but also advise you on how to deal with a troubling coworker.
But not everyone is comfortable seeking help from AI to make decisions—at least not yet. And the human brain—and what makes it special—provides a clue as to why.
Missing the Human Element
Recent research from UC Berkeley highlights some of the biases people have against using AI for advice. In the study, subjects were asked whether they preferred to get advice from humans or from AI across a wide range of topics. Only when seeking advice about tech and software did subjects say they would prefer advice from AI. On some topics, such as cooking and recipes, people did not show much of a preference. But for others, such as dealing with relationships, personal development, career and education, there was a strong aversion to seeking AI advice. Yet when the researchers presented subjects with dating and relationship advice generated by ChatGPT and by humans without revealing its source, the subjects actually preferred the advice from ChatGPT.
The researchers suggested that the reason people are shying away from seeking advice from AI is not that the advice is bad, but that people are turned off by the “otherness” of AI. When we think of whom we turn to for advice, especially of a personal nature, it’s friends, family and mentors—people with whom we have many shared experiences and who know us best. When we turn to a trained professional, like a counselor or psychologist, we are reassured when that professional relates to us on a personal level.
How authentic the advice feels is connected to how well we relate to the person giving it. This feeling becomes much less pronounced the further one moves from asking “Is this good for me?” toward asking “Is this good?” When that occurs, expertise matters more than the feeling of connectedness, and so asking AI for technical advice does not generate the same aversion.
This conclusion makes sense if one considers the difference between how humans make decisions and how AI does. There are many unanswered questions about how our brains decide, but clues from neuroscience research suggest that even if AI-generated advice seems reasonable and passes the “Turing Test” by sounding indistinguishable from human-generated advice, LLM AI is missing key components of human brain function, and that gap exacerbates the feeling of AI’s “otherness.”
Mirror Images
One such component is the ability of some neurons to become active for two distinct reasons: when we have decided to engage in a behavior and when we observe others engaged in that same behavior. These “mirror neurons” were discovered by Italian neuroscientists who recorded the activity of neurons in macaque monkeys engaged in a simple grasping task. Since then, numerous functional brain imaging studies have shown that networks of neurons in human brains activate in similar fashion, coming alive not just when we are engaged in or planning a behavior but also when we are observing, or even thinking of, others engaged in a similar behavior. Scientists have hypothesized that these mirror neuron networks play roles ranging from language acquisition to the ability to feel empathy for others’ pain.
How does this relate to decision-making? Take one of the decisions I described earlier—when you were talking to your boss and decided to change the topic of conversation from plumbing to sports. When you observed your boss’s initial reaction, regions in your brain that process facial features activated a mirror neuron network in the inferior frontal gyrus and posterior parietal cortex, the same network that generates the behavior of your own eyes shifting away when you are bored. In other words, your brain internally mimicked the behavior you were observing in your boss, which allowed you to surmise quickly what your boss was feeling and then pivot the conversation to a more engaging topic.
The general ability to recognize that someone else is experiencing, thinking and feeling something different from you is referred to as Theory of Mind and is essential in navigating most personal and social interactions. These are exactly the types of interactions—where personal connection is desired and personal guidance is sought—in which people are most reluctant to seek advice from AI. A mirror neuron network facilitates the formation of a Theory of Mind by self-referencing someone else’s behavior. (While it is controversial, some neuroscientists have investigated whether individuals with diagnosed autism spectrum disorder have differences in their mirror neuron networks that contribute to their difficulty in using Theory of Mind to navigate social interactions.)
LLM AI, of course, has no mirror neuron system. While there is variation in how different programs work, there is no evidence that any of their code simulates a mirror neuron system. Rather, the advanced neural network that powers LLM AI relies on something called a feed-forward layer to model the intent of the user. The LLM receives data, which is processed by multiple “hidden layers,” and then the model generates an output: an answer or a piece of advice. These hidden layers allow the program to capture complex relationships among features of the data and so better determine what the user is asking.
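To make the idea of a hidden, feed-forward layer concrete, here is a minimal sketch in Python. It is illustrative only: the dimensions, weights and input are toy values I have assumed, and real LLMs stack many such layers inside transformer blocks alongside attention mechanisms.

```python
# A minimal sketch of a feed-forward layer of the kind described above.
# Illustrative only: all sizes and weights are toy values, not taken from
# any real model.
import numpy as np

rng = np.random.default_rng(0)

def feed_forward(x, w1, b1, w2, b2):
    """One hidden layer: expand, apply a nonlinearity, project back."""
    hidden = np.maximum(0, x @ w1 + b1)   # ReLU nonlinearity in the hidden layer
    return hidden @ w2 + b2               # project back to the model dimension

d_model, d_hidden = 8, 32                 # assumed toy dimensions
w1 = rng.normal(scale=0.1, size=(d_model, d_hidden))
b1 = np.zeros(d_hidden)
w2 = rng.normal(scale=0.1, size=(d_hidden, d_model))
b2 = np.zeros(d_model)

x = rng.normal(size=(1, d_model))         # a stand-in for one token's internal representation
y = feed_forward(x, w1, b1, w2, b2)
print(y.shape)                            # (1, 8): a transformed representation, same shape out
```

The point of the sketch is what it lacks as much as what it does: a numerical representation of the user’s words goes in and a transformed representation comes out, but nowhere in that transformation is there a model of the advisor’s own self or of what acting on the advice would feel like.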
But there are two key things missing from these models: the code for oneself and the code for engaging in the behavior. For example, if a friend asked me whether he should have a croissant in the morning, my mirror neuron system would generate the neural activity that would mimic what it would feel like if I ordered the tasty treat, and my advice would be tempered by that brain activity. An LLM AI does not consider that, even though its advice might be similar to mine. ChatGPT’s response when I asked this question was, “A croissant sounds like a solid choice—flaky, buttery and satisfying. If you are in the mood, go for it!” But because that advice isn’t grounded in a sense of self and the behavior that would follow from it, I trust it much less.
Can AI Overcome This Hurdle?
There is evidence suggesting that LLM AIs can mimic some aspects of brain function, such as processing new languages by generalizing from the language they were originally trained on. Could future iterations of LLM AI also incorporate some aspect of the mirror neuron system into their design?
There seem to be several huge challenges, the biggest being the sense of self. Neuroscience has little insight into how the sense of self is generated in the brain, though some clues indicate that certain regions play a role in the process. A Stanford neuroscience lab found that when a region called the anterior precuneus was activated by electrodes in human subjects, it specifically disrupted the subjects’ bodily sense of self. If future research can identify the mechanisms by which that sense is generated, maybe it could be computationally modeled in LLM AIs.
Another challenge in modeling a mirror neuron network in LLM AI is coding the prediction of what it would feel like to engage in a behavior. This is easier to program in an AI that has a defined and limited use. For example, if you had an AI in your car and asked for advice on whether to change lanes, the AI could more easily predict the consequences of the behavior (e.g., changing lanes would save 10 seconds on your commute at a 0.01 percent increased risk of an accident), as sketched below. It would be much more difficult for AI to do that in a more complex and varied circumstance, such as when you wondered whether you should ask a troubling coworker to join you in a work project.
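As a toy illustration of how such a narrowly scoped advisor might weigh that trade-off, here is a short Python sketch. Every number in it is assumed for the sake of the example; a real system would have to estimate the time saved, the added risk and the cost of an accident from sensor data.

```python
# A toy sketch of a narrowly scoped lane-change advisor weighing the
# trade-off described above. All numbers are assumed for illustration.
TIME_SAVED_S = 10.0        # expected seconds saved by changing lanes
ADDED_RISK = 0.0001        # assumed 0.01 percent increase in accident probability
ACCIDENT_COST_S = 3_600.0  # assumed cost of an accident, expressed in seconds of delay

expected_benefit = TIME_SAVED_S
expected_cost = ADDED_RISK * ACCIDENT_COST_S  # probability-weighted downside

if expected_benefit > expected_cost:
    print("Advise: change lanes")
else:
    print("Advise: stay in your lane")
```

Weighing a quantified benefit against a probability-weighted cost is tractable in a bounded setting like this; the coworker decision has no such clean numbers, which is what makes it so much harder to model.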
Research in LLM AI has exploded over the past few years, but efforts to use this emerging technology to make our lives and our society better have been tempered by mistrust and by concern that it could upend both. While this worry partly reflects the fears that accompany any technological revolution, there is something different about LLM AI, and I think it mostly has to do with how human the technology seems while clearly not being human. By understanding a bit more about how the brain functions, such as unlocking the way the mirror neuron system works, we could potentially program AI to have some of the perspective that we have. Maybe then we could trust its advice.