Bias, Blame, and Brainpower: What Happens When Minds and Machines Collide
Highlights from the LMU-TAU AI-Humanity-Society Workshop
From March 17–19, 2025, early-career researchers from Tel Aviv University and Ludwig Maximilian University of Munich (LMU) gathered in Munich for an interdisciplinary workshop on how artificial intelligence is reshaping our understanding of society, culture, and human behavior. Building on the virtual session in November 2024, the in-person event featured perspectives from law, social science, economics, business, psychology, philosophy, and data science.
As AI technologies—from generative models to predictive algorithms—become increasingly embedded in daily life, participants discussed timely questions: How do these systems affect human decision-making? What are the legal and ethical implications? And how can we keep innovation grounded in human values?
AI Meets the Human Mind and Personality
One of the workshop’s core themes was the interplay between personality traits and how people engage with AI. Dr. Shir Etgar presented research showing that AI-generated financial advice varies based on gendered cues—such as stereotypically male or female professions—often offering women less risky and more simplified, even patronizing, responses.
“Gender biases can have tremendous implications. We need a new approach that helps users become more aware and make informed decisions.”—Dr. Shir Etgar, TAU Faculty of Social Sciences
Research on how personality shapes perceptions of AI found that neuroticism was associated with treating AI as a threat, while agreeableness was linked with seeing it as an opportunity. These traits influence whether people override, avoid, or embrace AI recommendations.
Other contributions examined public attitudes toward AI, its role in policy-related research such as refugee integration, and how tailoring AI systems to users’ personality profiles might improve the user experience.
At the same time, researchers cautioned against over-anthropomorphizing machines, emphasizing the ethical consequences of designing systems that appear ‘too human’. Even so, participants agreed that AI can serve as a genuine tool for social change.
Questions of Legitimacy and Accountability
What happens to our right to due process when an algorithm makes the call? On the one hand, AI can help reduce bureaucratic barriers for people in need of legal aid by automating the initial assessment of cases. On the other, how can fair treatment be ensured?
“If promises are not made personally but by a computer system used by a human, without the human knowing the content of the given promises, does this break the fundamental human trust?”—Peter Moser, LMU
Building on earlier research presented at the TAU-LMU Workshop in 2022, discussions emphasized that AI systems are unlikely to completely replace humans in the near future. Instead, AI systems and humans are expected to work side by side, which raises new questions of liability and responsibility in decision-making, particularly when errors occur and blame must be assigned.
When AI Misses the Subtleties
Despite its rapidly growing capabilities, AI still struggles with nuance and consistency. AI-generated predictions of voting behavior during the EU elections revealed significant variability in accuracy across regions and languages, highlighting the risk of reinforcing social inequalities if these systems are not properly validated.
Prompt engineering also remains a major challenge. As anyone who has run the same prompt twice knows, repeated attempts can yield different results, raising concerns about reproducibility and transparency.
Another limitation is AI’s current difficulty with counterfactual ‘what if’ thinking and with reasoning from a first- or third-person perspective, both fundamental to human and animal cognition. This may constitute another area for future development.
“LLMs can mimic human logic to a point—but they still lack the personal and contextual depth that defines human thought.”—Roy Klein, TAU
A Shared Vision for Interdisciplinary Research
The workshop wrapped up with a forward-looking discussion on future collaboration. Representatives from both LMU and TAU emphasized their shared commitment to bridging the gap between technology and humanity.
“The partnership between LMU and TAU is more than just a collaboration—it’s a convergence of academic strengths that allows us to tackle today’s most pressing questions from multiple angles,” said Dr. Michal Linder Zarankin, from TAU.
“By combining our expertise across disciplines, we’re ensuring that the development of new technologies is guided by a deep understanding of cultural and ethical dimensions.”—Dr. Lior Zalmanson, Academic Coordinator and Senior Lecturer at TAU
Participants also gave highly positive feedback on the structure, setup, and substance of the event, praising the relevance and quality of the talks, the diverse yet focused group of participants, and the opportunities for meaningful conversation.
“In terms of talks and topics and people, it was the BEST workshop I've ever been to. People were extremely nice, smart, and motivated to connect and collaborate,” commented one of the participants.
Looking Ahead
As TAU and LMU continue to deepen their cooperation, new funding opportunities will support further research at the intersection of AI, humanities, and the social sciences.
Participants noted the ongoing challenge of interdisciplinary collaboration, especially the lack of a shared language across fields, and agreed that events like this workshop help build the bridges needed between disciplines.
By encouraging interdisciplinary dialogue, the AI–Humanity–Society workshop has not only sparked new ideas—it has laid the groundwork for future breakthroughs at the intersection of technology and the human experience.