MEDINDIA

Humans Vs. AI: Who's Better at Reading Social Interactions on the Move?

Humans outperform artificial intelligence systems in understanding social dynamics and predicting behavior in real-world situations.

Highlights:
  • Artificial intelligence fails to match human ability in interpreting social cues
  • Human perception remains superior in analyzing dynamic social interactions
  • AI neural networks are not structured to grasp real-world human behavior
Artificial intelligence models are consistently outmatched by humans when it comes to interpreting social interactions in moving scenes, a crucial ability for systems powering self-driving vehicles, assistive robotics, and other human-facing technologies (1).
Unlike humans, current artificial intelligence systems lack a nuanced understanding of social context and dynamic interactions, which may limit their effectiveness in real-world applications. This difficulty, researchers suggest, may stem from fundamental issues within the core architecture of the AI models themselves (1).


Importance of Social Awareness in Machine Perception

For example, an intelligent vehicle must be capable of assessing a pedestrian's likely next move, or of recognizing whether two individuals are conversing or preparing to cross a street. These scenarios require the AI to infer intentions and behaviors from subtle social cues. According to cognitive scientist Leyla Isik, these abilities are critical for safe and effective interaction between AI systems and humans, yet today's models still fall short.

Kathy Garcia, a co–first author and doctoral researcher in Isik’s lab, will present these findings at the International Conference on Learning Representations on April 24, shedding light on where current systems are lacking and why it matters for future development.


Testing Human Versus AI Interpretations of Social Scenes

To measure the gap between human and artificial perception, researchers had participants view three-second video clips and rate various features relevant to understanding social interaction, using a five-point scale. The video clips depicted people either engaging with each other, performing tasks side-by-side, or acting independently.

More than 350 artificial intelligence models—including language, image, and video models—were then tested to see how closely their predictions aligned with human responses. Language models were given short captions, while image and video models analyzed frames or footage.
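The article does not specify how the researchers scored agreement between model predictions and human responses, but the comparison it describes can be sketched in a few lines. The data below and the use of Pearson correlation as the alignment measure are illustrative assumptions, not the study's actual protocol.

```python
# Minimal sketch: scoring how closely a model's ratings of video clips
# align with average human ratings. Data and metric are hypothetical.
import numpy as np

def alignment_score(human_ratings, model_ratings):
    """Pearson correlation between the mean human rating and the model's
    rating across a set of clips (closer to 1.0 = closer to human consensus)."""
    human_mean = np.asarray(human_ratings, dtype=float).mean(axis=0)
    model = np.asarray(model_ratings, dtype=float)
    return float(np.corrcoef(human_mean, model)[0, 1])

# Hypothetical example: 4 participants rate 5 clips on a 1-5 scale
# for one social feature (e.g. "are the people interacting?").
humans = [
    [5, 1, 4, 2, 3],
    [5, 2, 4, 1, 3],
    [4, 1, 5, 2, 3],
    [5, 1, 4, 2, 2],
]
model_a = [4.8, 1.2, 4.1, 1.9, 2.7]   # tracks the human consensus
model_b = [1.0, 4.5, 2.0, 4.0, 3.5]   # reverses the human judgments

print(alignment_score(humans, model_a))  # strongly positive
print(alignment_score(humans, model_b))  # negative
```

A score like this would be computed separately for each model and each rated feature; the study's finding is that no model's scores approached the level of agreement humans showed with one another.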


AI Models Struggle With Context and Communication

While human participants largely agreed in their assessments, none of the AI models came close to replicating this consistency. Video models in particular failed to accurately describe social activity, and even when provided with sequences of still images, image-based models struggled to determine whether people were interacting. Language-based models showed relatively better performance in estimating human behavior, whereas video models more closely predicted brain activity.

The results stand in stark contrast to AI’s well-established success in recognizing objects and faces in static images.


Structural Limitations in AI Neural Networks

Garcia emphasized that understanding a still image is merely the first step for artificial intelligence—real-world interactions demand more. Scenes unfold over time, and AI must grasp evolving stories, relationships, and social context to effectively navigate them. This research highlights a significant blind spot in the development of current AI models.

One possible reason for this gap lies in the biological inspiration behind most AI neural networks. These networks are modeled after parts of the brain responsible for processing static images rather than those that analyze dynamic social environments.

As Isik notes, despite their advancements in object and facial recognition, artificial intelligence systems still fail to match human brain and behavioral responses when interpreting social scenes in motion. This fundamental shortcoming suggests that major changes are needed in how AI systems are designed and trained if they are to operate reliably in human-centric environments.

In conclusion, despite significant advances in artificial intelligence, current neural networks remain fundamentally limited in their ability to interpret real-world social interactions. Understanding dynamic human behavior requires more than recognizing static patterns—it demands context, intuition, and social awareness that AI has yet to achieve.

Reference:
  1. Modeling dynamic social vision highlights gaps between deep learning and humans - (https://openreview.net/forum?id=wAXsx2MYgV)

Source: Medindia

Did You Know?
Humans can judge social scenes in just three seconds more accurately than over 350 AI systems. The simple act of recognizing who is talking, walking, or interacting is still beyond even the most advanced machines.
