
The next frontier of digital wellbeing: Emotional intelligence in AI systems

Pope Francis has said, “The digital world can be an environment rich in humanity, a network not of wires but of people.” Artificial Intelligence is undergoing what many are calling an emotional turn. Once limited to carrying out instructions and crunching numbers, modern AI systems are now being infused with the ability to recognise and respond to human emotions — a field known as affective computing. By analysing users’ facial expressions, voice tone, physiological signals, and language patterns, these emotionally aware systems attempt to interpret how we feel. The global market for this kind of technology was already valued at nearly USD 88 billion in 2024, and, according to IMARC Group, it is projected to grow at a compound annual rate of more than 26%, potentially reaching USD 822 billion by 2033.

This shift is not merely technical; it is deeply personal and socially transformative. AI is no longer simply helping us work, learn, or shop; it is edging into the domain of digital wellbeing, offering emotional support, mental-health check-ins, and empathetic responses through chatbots and virtual companions. In mental-health contexts, for example, affective systems are being explored as a way to detect signs of distress or crisis, blurring the line between assistant and confidant. But when machines begin to sense, or even mimic, our emotions, the stakes become higher: what are the risks of relying on an AI that “knows” how we feel, and how do we ensure such systems improve our wellbeing rather than undermine it?

 

The Evolution from Functional AI to Emotional AI


AI has evolved notably from its early days, when systems were primarily logical, rule-based, and task-oriented, focused on completing structured commands with no awareness of context or emotion. Early search engines, calculators, and expert systems were powerful but emotionally blind. Today, however, AI has taken a dramatic leap into understanding human affect. Modern systems can analyse tone of voice, interpret facial expressions, detect stress levels from subtle speech patterns, and recognise emotional cues embedded in social-media messages. For instance, sentiment-analysis models used by platforms like Twitter (now X) help classify public mood during major events, while Zoom’s experimental emotion-detection features aim to infer meeting participants’ engagement or frustration. Wearables such as the Apple Watch can monitor heart-rate variability to detect stress episodes and, combined with machine-learning models, make real-time suggestions to help users regulate their emotional state.
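To make the idea of sentiment cues concrete, here is a minimal, illustrative Python sketch of a lexicon-based mood scorer. It is not the model any platform actually uses; the word lists, scoring rule, and example messages are invented purely for demonstration.

```python
# Toy lexicon-based mood scorer -- an illustrative sketch only,
# not any platform's production sentiment model.
POSITIVE = {"great", "happy", "relieved", "love", "calm", "excited"}
NEGATIVE = {"stressed", "angry", "anxious", "tired", "hate", "overwhelmed"}

def mood_score(message: str) -> float:
    """Return a score in [-1, 1]; negative values suggest negative affect."""
    words = [w.strip(".,!?").lower() for w in message.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

if __name__ == "__main__":
    for msg in ["I feel so anxious and overwhelmed today",
                "Really happy and relieved after the exam"]:
        print(f"{msg!r} -> {mood_score(msg):+.2f}")
```

Real systems replace the hard-coded lexicon with trained language models, but the basic loop is the same: turn raw text into an affect signal, then decide how the product should respond.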

These technological advancements are giving rise to a new class of tools, including AI-powered mental-health chatbots (e.g. Woebot and Wysa), emotion-recognising fitness bands, and behavioural-pattern mapping systems used by social platforms to tailor user experiences. The shift is promising because such tools can offer early detection of emotional distress, support personalised care, and improve digital wellbeing at scale. Yet it also introduces profound risks. When AI can infer emotions, it can also misread them, especially in culturally diverse contexts where expressions differ widely. There is the danger of emotional profiling, where platforms categorise users based on inferred psychological states, potentially influencing what they see online or how they are targeted by ads. More critically, emotionally intelligent AI can be used to manipulate behaviour, nudging users toward certain actions or decisions without their awareness. As AI begins to go beyond what we say to how we feel, the line between empowerment and exploitation becomes increasingly thin.

 


Why Emotional Intelligence Matters in an Era of Digital Overload

Concerns over mental health are no longer abstract; they are deeply intertwined with how we engage online. A 2025 survey by Adaptavist found that 63% of workers say workplace technology negatively impacted their lives over the past year, with 41% reporting stress or anxiety from notification overload, as reported by TechRadar. Meanwhile, longitudinal behavioural studies show a clear link between certain patterns of internet use (especially on social media, entertainment, and shopping sites) and elevated perceived stress. Together, these data suggest that digital overload is more than a productivity issue; it is a growing source of emotional strain across both work and personal life.

At the same time, our reliance on digital intermediaries (voice assistants, chatbots, and recommendation engines) is surging. Chatbot adoption is skyrocketing, with over 987 million people now using AI chatbots globally. Virtual assistants are also increasingly ubiquitous: 45% of U.S. adults have used Siri, Alexa, or Google Assistant, and 67% of consumers employ these tools primarily for customer service support. As these systems become our companions in navigating daily tasks, we, as emotional beings, turn to them not just for efficiency but for emotional engagement too.

Given this shift, there is a compelling need for emotionally sensitive AI that can mitigate digital harm instead of compounding it. Emotion-aware systems have the potential to reduce online harm by detecting signs of distress and flagging potentially harmful content; they can encourage healthier digital habits by nudging users toward breaks or mindful usage; and they can support vulnerable individuals, especially those experiencing loneliness or mental-health challenges. For readers of this paper who have followed my earlier exploration of digital wellbeing, it becomes clear that the next frontier is not simply smarter AI, but kinder, more empathetic AI that understands not only what we do online, but how we feel.


 

How AI Can Support Mental Wellness (When Designed Well)

AI can play a powerful role in early detection of emotional distress when it is built on responsible and human-centred principles. Modern models can analyse shifts in language, tone, or usage patterns to flag potential signs of burnout, depression, or crisis. For instance, workplace platforms like Microsoft Viva have begun integrating analytics that identify signs of chronic overload (such as late-night activity spikes or declining communication patterns) while mental health apps like Wysa can detect linguistic cues associated with anxiety or rumination. Universities are also experimenting with AI systems that monitor students’ digital engagement to spot early indicators of academic stress, prompting timely outreach by counsellors. When used ethically, these systems can provide alerts long before distress escalates.
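As a rough illustration of how one such signal might be computed, the sketch below flags a possible overload pattern from the timestamps of a person’s digital activity. It is not Microsoft Viva’s actual analytics; the 22:00 cut-off and the 25% threshold are assumptions made for this example only.

```python
from datetime import datetime

# Illustrative overload heuristic -- not Microsoft Viva's real analytics.
# The late-night window and threshold are invented for demonstration.
LATE_HOUR = 22  # activity at or after 22:00 (or before 05:00) counts as late night

def late_night_share(timestamps: list[datetime]) -> float:
    """Fraction of activity events that happen late at night."""
    if not timestamps:
        return 0.0
    late = sum(1 for t in timestamps if t.hour >= LATE_HOUR or t.hour < 5)
    return late / len(timestamps)

def overload_flag(timestamps: list[datetime], threshold: float = 0.25) -> bool:
    """Flag a possible overload pattern when late-night activity dominates."""
    return late_night_share(timestamps) >= threshold

if __name__ == "__main__":
    sample = [datetime(2025, 5, 6, h) for h in (9, 11, 14, 23, 23, 1)]
    print("late-night share:", round(late_night_share(sample), 2))
    print("possible overload:", overload_flag(sample))
```

In practice such a flag would only ever prompt a gentle check-in, not an automated judgement about the person behind the data.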

Beyond detection, AI is increasingly capable of delivering personalised mental-health support tailored to a user’s emotional state. AI-powered counselling assistants such as Woebot use evidence-based cognitive behavioural therapy techniques to help individuals manage anxiety and negative thoughts. Mood-tracking apps like Daylio or Youper combine journaling with emotional analytics to offer guided reflection and coping strategies aligned with the user’s patterns. In Nigeria, emerging digital mental-health platforms such as MindCare and MANI are exploring AI-driven triage and chat-based emotional support, helping users receive timely guidance in contexts where mental-health professionals are scarce. These tools are especially valuable in regions where access to mental-health professionals is limited, providing a low-cost, always-available support layer that complements human care.


Also, AI contributes significantly to creating safer digital environments, especially on social media platforms where harmful interactions can rapidly deteriorate user wellbeing. Emotion-aware moderation systems can detect anger, hostility, and hate speech in real time, enabling platforms to intervene before conflicts escalate. For example, Instagram uses machine learning to identify potentially bullying comments before they are posted, prompting users with reminders like “Are you sure you want to say this?” Similarly, YouTube and TikTok employ AI filters to flag videos containing violent or distressing content for review, reducing users’ exposure to psychologically harmful material. These interventions can meaningfully reduce online toxicity and protect vulnerable groups.
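A toy version of such a pre-posting nudge might look like the Python sketch below. It is purely illustrative and is not Instagram’s classifier; a real system would use a trained model rather than the hard-coded phrase list shown here.

```python
# Toy pre-posting nudge -- an illustrative sketch, not Instagram's classifier.
# The phrase list and nudge text are stand-ins for a trained toxicity model.
HOSTILE_CUES = {"idiot", "stupid", "hate you", "shut up", "worthless"}

def looks_hostile(comment: str) -> bool:
    """Very crude check: does the comment contain an obviously hostile phrase?"""
    lowered = comment.lower()
    return any(cue in lowered for cue in HOSTILE_CUES)

def prepost_check(comment: str) -> str:
    """Return a nudge before posting if the comment looks hostile."""
    if looks_hostile(comment):
        return "Are you sure you want to say this?"
    return "OK to post"

if __name__ == "__main__":
    print(prepost_check("You are such an idiot"))
    print(prepost_check("Congrats on the launch!"))
```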

Lastly, emotionally intelligent AI can improve digital wellbeing by encouraging healthier patterns of engagement. Devices like the Apple Watch already suggest breathing exercises or mindfulness breaks when they detect elevated stress signals through heart-rate variability. Web browsers and productivity apps (including Google’s Digital Wellbeing tools) now use behavioural analytics to recommend downtime, limit notifications, or highlight excessive screen usage. Future emotion-aware systems could go even further, adjusting content feeds, recommending offline activities, or shifting interface designs to reduce cognitive strain. By aligning technology use with human emotional rhythms, AI can help create a more balanced and psychologically supportive digital life.
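To show what a stress-aware nudge could look like in code, here is a minimal sketch that computes RMSSD, a standard heart-rate-variability metric, and suggests a breathing break when variability drops. This is not Apple’s actual algorithm, and the 20 ms threshold is a placeholder rather than a clinically validated cut-off.

```python
import math

# Illustrative HRV-based nudge -- not Apple's actual algorithm.
# RMSSD is a standard HRV metric; the 20 ms threshold is a placeholder only.

def rmssd(rr_intervals_ms: list[float]) -> float:
    """Root mean square of successive differences between heartbeat intervals (ms)."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

def suggest_break(rr_intervals_ms: list[float], threshold_ms: float = 20.0) -> bool:
    """Suggest a breathing break when HRV falls below the placeholder threshold."""
    return rmssd(rr_intervals_ms) < threshold_ms

if __name__ == "__main__":
    calm = [820, 860, 800, 870, 810, 880]    # highly variable beat-to-beat intervals
    tense = [700, 705, 702, 706, 701, 704]   # very low variability
    print("calm  -> break?", suggest_break(calm))
    print("tense -> break?", suggest_break(tense))
```

Lower beat-to-beat variability is commonly associated with stress, which is why the sketch nudges only when the RMSSD value dips.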


 

Emotion-Aware Systems in Key Sectors


Emotion-aware AI is transforming customer service, where understanding frustration or confusion can dramatically improve user experience. Modern chatbots, such as those deployed by major telecom and banking platforms, can detect rising frustration through cues like repeated queries, abrupt phrasing, or negative sentiment. When these signals are detected, the system can automatically shift to a more empathetic tone or escalate the issue to a human agent. For example, airlines increasingly use emotion-sensitive bots that recognise when a customer is distressed (e.g. during flight cancellations) and route them directly to priority support rather than looping them through automated menus. This ability to adapt in real time not only enhances customer satisfaction but also reduces service bottlenecks.
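The sketch below illustrates the general idea of frustration-aware routing: score a conversation on repetition, abrupt phrasing, and negative wording, then hand off to a human once the score crosses a threshold. The signals, weights, and cut-off are invented for the example and do not reflect any vendor’s actual system.

```python
# Illustrative frustration-escalation sketch -- not any vendor's production bot.
# All signals, weights, and the escalation threshold are invented assumptions.
NEGATIVE_WORDS = {"useless", "ridiculous", "terrible", "angry", "frustrated"}

def frustration_score(messages: list[str]) -> float:
    """Crude score built from repetition, shouting, and negative wording."""
    score = 0.0
    lowered = [m.lower().strip() for m in messages]
    score += 0.4 * (len(lowered) - len(set(lowered)))   # repeated identical queries
    for m, raw in zip(lowered, messages):
        if raw.isupper() or raw.endswith("!!"):          # abrupt, shouty phrasing
            score += 0.3
        score += 0.2 * sum(w in m for w in NEGATIVE_WORDS)
    return score

def route(messages: list[str], escalate_at: float = 0.6) -> str:
    """Decide whether the conversation should be handed to a human agent."""
    if frustration_score(messages) >= escalate_at:
        return "escalate to human agent"
    return "continue automated flow"

if __name__ == "__main__":
    chat = ["Where is my refund?", "Where is my refund?", "THIS IS RIDICULOUS!!"]
    print(route(chat))
```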

In education, emotion-aware AI is supporting teachers and learners in the creation of more responsive learning environments. AI tutoring systems like Carnegie Learning and platforms such as Coursera’s experimental emotion analytics can identify when students show signs of anxiety, boredom, or disengagement through video cues or interaction patterns. These tools adjust explanations, offer additional examples, or slow down the pacing to match a learner’s emotional state. In some digital classrooms, facial-recognition-enabled systems can alert teachers when a student appears confused or overwhelmed during a lesson. Such emotionally attuned tools make education more adaptive and supportive, especially in large or remote learning settings where individual attention is limited.


The healthcare sector is also leveraging emotionally intelligent AI to improve both diagnosis and patient support. Emotion-sensitive telemedicine tools can analyse patients’ tone of voice or facial expressions during virtual consultations to detect signs of stress, pain, or panic that may not be explicitly stated. For instance, some mental-health platforms use AI to monitor micro-expressions and vocal tremors to help clinicians assess depression severity. In emergency care, AI triage systems equipped with affect-recognition capabilities can prioritise patients displaying acute anxiety or distress even before symptoms intensify. These systems help providers deliver more empathetic and timely care, especially in high-volume or remote clinical environments.

Within workplaces, emotionally aware AI is reshaping how organisations understand and manage employee wellbeing. Productivity and collaboration tools like Microsoft Teams and Zoom are experimenting with metrics that can infer digital fatigue based on meeting length, response times, and communication patterns. Wearable devices used in high-stress environments—such as logistics or manufacturing—can detect spikes in heart rate or stress indicators, prompting micro-breaks to prevent burnout. AI-driven workload managers can track emotional strain by analysing patterns associated with overload, such as late-night work streaks or reduced engagement, and recommend task redistribution. These emotionally intelligent systems help employers foster healthier, more supportive workplaces in an increasingly digital world.
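As a purely illustrative example of such a workload signal, the sketch below combines meeting hours, reply latency, and after-hours messages into a simple fatigue index. The weights and thresholds are assumptions for demonstration, not metrics used by Teams, Zoom, or any real product.

```python
# Illustrative digital-fatigue heuristic -- not Teams' or Zoom's real metric.
# The weights and cut-offs below are assumptions made for this sketch.

def fatigue_index(meeting_hours: float, avg_reply_minutes: float,
                  after_hours_msgs: int) -> float:
    """Combine simple signals into a fatigue index clipped to the range 0..1."""
    score = (0.1 * meeting_hours            # long days of back-to-back calls
             + 0.01 * avg_reply_minutes     # slowing responses
             + 0.05 * after_hours_msgs)     # messages sent after hours
    return min(score, 1.0)

def recommend(meeting_hours: float, avg_reply_minutes: float,
              after_hours_msgs: int) -> str:
    """Turn the index into a gentle, human-reviewed suggestion."""
    idx = fatigue_index(meeting_hours, avg_reply_minutes, after_hours_msgs)
    if idx >= 0.7:
        return "suggest redistributing tasks and blocking focus time"
    if idx >= 0.4:
        return "suggest a no-meeting afternoon"
    return "no intervention"

if __name__ == "__main__":
    print(recommend(meeting_hours=6, avg_reply_minutes=25, after_hours_msgs=4))
```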

 

The Risks and Dark Sides: When Emotional AI Goes Wrong

When emotional AI is used for manipulation and behavioural engineering, the consequences can be deeply unsettling. Emotional data (how someone laughs, seethes in anger, or hesitates before speaking) is profoundly personal, and companies can weaponise it. According to legal scholars, emotional AI “has the potential to manipulate and influence consumer decision-making processes” by analysing vocal intonations or micro-expressions. Imagine an ad platform that detects a user’s sadness or vulnerability and then tailors emotionally persuasive content just for that moment. That is not just aggressive marketing but psychological profiling, walking the fine line between persuasion and exploitation.

Then there is the enormous concern around privacy. Emotion AI often depends on gathering deeply intimate signals: tone of voice, micro-expressions, biometric cues, even heart-rate variability via wearables. In many cases, consent mechanisms are weak or unclear. One recent survey, for instance, found that only 12% of deployed emotion-recognition systems had fully compliant consent processes. That means that for the vast majority of users, emotional data is being harvested without transparent, informed permission. Once collected, this data can be stored, shared, or even breached, raising profound questions about how safe and private our innermost feelings really are in the digital age.

Beyond privacy, emotional profiling introduces bias in deeply problematic ways. Research has repeatedly shown that emotion-recognition AI is far from neutral: models make significantly more errors when interpreting emotions in people from underrepresented demographic groups. For example, an independent analysis found error rates as low as 0.8% for light-skinned men but as high as 34.7% for dark-skinned women. Such disparity does not merely reflect technical weakness; it can lead to unfair or harmful outcomes in customer service, legal settings, workplace evaluations, or healthcare, particularly for marginalised communities.

Finally, there is a real danger in over-dependence on AI for psychological support. While emotionally aware chatbots like Woebot or ChatGPT are increasingly being used as mental health companions, experts warn they cannot replicate genuine human empathy and may present a false sense of connection. In fact, mental health professionals have cautioned against using AI as a standalone therapy option—a sentiment echoed by regulators in some U.S. states restricting AI for mental health use. There is a risk that users lean too heavily on these systems, delaying or foregoing therapy altogether. This dependency can encourage shallow emotional engagement and reinforce the illusion of companionship, while deeper, more nuanced support from human providers goes missing.

 

Designing Transparent, Ethical, and Humane Emotional AI

Designing ethical and human-centred emotional AI starts with robust frameworks that prioritise people over profit. Countries like Canada and Germany have been at the forefront, embedding human-centred principles into AI governance. Canada’s Directive on Automated Decision-Making emphasises accountability, transparency, and fairness in AI systems, while Germany’s Federal Ministry of Education and Research promotes datasets that are culturally sensitive and representative to reduce bias in emotion recognition. By designing systems that respect diverse populations and can be audited for fairness, developers can ensure AI improves rather than exploits human emotions.

Moreover, transparent data practices are essential for building trust. In countries like Japan and Australia, regulations encourage companies to clearly communicate what emotional data is collected and how it will be used. Japan’s approach to digital services includes opt-in mechanisms for biometric and affective data, allowing users to control the collection of micro-expressions or vocal cues. Similarly, Australia’s AI Ethics Framework emphasises informed consent and data minimisation. Such transparency ensures users retain autonomy over their personal and emotional information, fostering confidence in AI systems while reducing the risk of covert surveillance or manipulation.

Interdisciplinary collaboration is equally critical in creating emotionally intelligent AI that is both ethical and effective. In the United Kingdom, universities and tech firms are partnering with psychologists, sociologists, and ethicists to evaluate AI models for cultural sensitivity, emotional accuracy, and social impact. For instance, research labs in London are integrating social science insights into AI tutoring platforms to prevent misinterpretation of student frustration or disengagement. By combining technical expertise with human behavioural knowledge, developers can build AI systems that respect psychological nuance rather than oversimplifying complex emotional states.

Lastly, embedding guardrails for digital wellbeing ensures emotional AI promotes healthy interaction rather than dependency. In Singapore, government-backed initiatives in workplace technology encourage AI that nudges employees toward breaks, balanced screen time, and mindful digital habits. Similarly, the European Union’s AI Act encourages developers to incorporate wellbeing metrics into AI design, ensuring interventions enhance autonomy rather than replace human decision-making. These proactive measures create systems that understand emotions and also support sustainable, psychologically safe interactions, reinforcing AI as a partner in human wellbeing rather than a substitute.

 

The Future: AI That Supports, Not Replaces, Human Emotion

Looking ahead, emotional AI is poised to evolve from reactive systems into proactive, supportive companions in everyday life. Gartner analysts project that by 2035, over 75% of consumer-facing AI applications will incorporate some form of affective intelligence, ranging from mental-health assistants to workplace collaboration tools. This means future AI systems could detect emotional states in real time and provide context-appropriate support—encouraging empathy in virtual meetings, suggesting conflict-resolution strategies during tense online discussions, or offering calming interventions when stress indicators spike. In healthcare, emotionally aware systems could monitor a patient’s psychological wellbeing alongside physical vitals, enabling clinicians to shape more holistic interventions. This level of integration promises a world where AI augments human emotional capacities rather than simply automating tasks.

Yet the core principle for the future is clear: emotional AI must support human dignity and wellbeing, not replace authentic human relationships. According to a Pew Research Center study, 67% of people worry that AI could weaken genuine social connection, particularly in caregiving, education, and emotional-support roles. This makes design ethics essential. Emotional AI should enhance communication and empathy, not imitate or substitute human bonds. Imagine AI that helps a manager detect early signs of team burnout, guides a student through learning anxiety, or supports someone processing grief, while still emphasising the irreplaceable value of human connection. The challenge and opportunity of the next decade lie in ensuring AI becomes a partner in human flourishing, strengthening emotional intelligence and wellbeing without eroding the relationships that give life meaning.

 

Conclusion: A Call for Emotionally Responsible AI

“Technology is best when it brings people together, not when it drives them apart.” This sentiment perfectly frames the story of Bimpe, a university student struggling with anxiety during her final exams, and illustrates the potential of emotionally responsible AI. Using an AI-powered mental-health companion, she received real-time prompts to take mindful breaks, reflective journaling suggestions, and gentle reminders to reach out to her support network when stress levels spiked. The system never replaced her human friends or counsellors; instead, it enhanced her awareness of her emotional state, helping her navigate pressure with greater resilience. Bimpe’s experience demonstrates how AI designed with emotional intelligence can uplift, protect, and genuinely understand human needs—supporting wellbeing in ways traditional digital tools cannot.

Yet the flip side is equally cautionary. Imagine if Bimpe’s emotional data had been misused: sold to advertisers, mined for psychological vulnerabilities, or used to manipulate her decision-making during a stressful time. Such misuse could deepen anxiety, erode trust, and amplify the very digital harms emotional AI seeks to solve. As we look ahead, the imperative is unmistakable: we must build emotionally aware AI that centres humanity, safeguards privacy, and promotes flourishing rather than dependency or exploitation. By embedding ethics, transparency, and digital-wellbeing principles at the core of design, we can shape a future where AI improves human life (helping people like Bimpe not just survive the digital age, but thrive), with technology that understands without exploiting and supports without replacing. The question is no longer whether AI will understand human emotions, but whether we will build systems worthy of that power.

Thank you for your time! If you found this insightful, please share and follow for more updates: Medium: https://medium.com/@roariyo, Twitter: https://twitter.com/ariyor, LinkedIn: https://www.linkedin.com/in/olufemiariyo/, Or email me: [email protected]
