Sweden's national network of women's shelters is sounding the alarm over a dangerous new trend. Vulnerable young women and girls are increasingly turning to AI chatbots for support instead of human-run crisis services. The results can be devastating, with AI systems sometimes reinforcing an abuser's narrative and blaming the victim. This shift away from trusted human support poses a serious threat to Sweden's long-standing model of social welfare and protection.
"The most serious issue is when AI confirms the perpetrator's perspective," warns Adine Samadi, chairperson of Roks, the National Organization for Women's and Girls' Shelters in Sweden. Samadi explains that shelters have noticed a clear pattern over the past year. Young women contacting them for help now often reveal they first wrote or spoke to an AI. For someone living in fear, an anonymous chatbot can feel like a safe first step. But that digital safety is an illusion with potentially grave consequences.
A Digital Refuge with Hidden Dangers
In a country known for its robust social safety net, this turn to unregulated technology is a cultural paradox. Sweden has a proud history of publicly funded, human-centric support systems. The network of kvinnojourer and tjejjourer (women's and girls' shelters) is a cornerstone of this. These are physical places offering not just safety, but empathy, legal guidance, and a path forward. An AI chatbot, by contrast, operates in a void. It has no understanding of Swedish law, local social services, or the complex dynamics of coercive control.
"A perpetrator is a professional at shifting shame and blame onto the women who have been subjected to violence," Samadi states. "They are terrified of not being believed, and then AI can feel very secure." This initial feeling of security is the trap. The AI cannot recognize manipulation. It might process a victim's confused or self-blaming account and, based on flawed training data, agree with the abuser's framing. This digital validation can deepen a victim's isolation and delay her from seeking real, life-saving help.
The Human Cost of Algorithmic Advice
The core failure of AI in this context is its inability to provide what Swedish shelters specialize in: human support. This isn't just about offering a kind word. It's about practical, localized knowledge. A human counselor in Stockholm knows the specific procedures for getting a restraining order at the Solna courthouse. They know which social services office in Södermalm is most responsive. They can make a warm referral to a housing agency or a trauma-informed therapist. An AI can only generate generic, context-free responses.
There is also a critical issue of data privacy. "We don't know what happens to the information," Samadi emphasizes. When a woman shares her story with a chatbot, where does that sensitive, deeply personal data go? Is it stored? Could it be used to train other models? In a human shelter, confidentiality is legally and ethically paramount. Conversations with an AI have no such protections, adding a layer of risk for someone already in a precarious situation.
A Societal Shift Away from Trusted Institutions
This trend points to a broader, worrying shift in Swedish society. For decades, public trust in institutions like healthcare, social services, and crisis shelters has been relatively high. The move towards confiding intimate, traumatic experiences to for-profit tech platforms represents a rupture. It suggests a generation may be growing more comfortable with opaque algorithms than with established, accountable human services. This could be driven by perceived anonymity, 24/7 access, or the stigma some still feel about asking for help.
However, this convenience comes at a high price. An AI lacks the cultural competence that a Swedish counselor inherently possesses: an understanding of the unspoken pressures, the family dynamics, and the societal expectations that can keep a woman trapped. A counselor can spot the signs of hedersrelaterat vĂĄld (honor-related violence) or the specific challenges faced by immigrant women navigating two cultures. An AI sees only text, not context.
Reclaiming the Human Connection
The solution, according to experts like those at Roks, is not to vilify technology but to reassert the irreplaceable value of human contact. Public awareness campaigns are needed to educate young people about the limitations and dangers of using AI for crisis support. Schools, youth centers, and social media platforms frequented by young Swedes must carry this message. The goal is to direct those in need back to the safe harbors that exist.
Sweden's shelters are also adapting. Many now offer initial contact via encrypted messaging apps, recognizing the need for low-barrier, digital-first points of contact. But the key difference is that a human is on the other end. This hybrid model preserves the human element while meeting users where they are—online. It’s a modern adaptation of a timeless principle: support requires empathy, and empathy is human.
Looking Ahead: A Question of Values
This issue forces Sweden to confront a fundamental question. Will it allow its hard-won systems of human care and solidarity to be undermined by unaccountable technology? The rise of AI in this sphere is not just a technical failure; it's a social one. It reveals a gap in how safety and support are communicated to the digital-native generation.
The path forward must involve strengthening, not replacing, the human network. It requires funding to ensure shelters are visible, accessible, and staffed. It demands digital literacy education that teaches not just how to use technology, but when not to use it. For a young woman in Malmö or Göteborg feeling alone and afraid, the most advanced algorithm is no substitute for a trained voice saying, "I believe you. I'm here. Let's figure this out together." Sweden's challenge is to make sure she finds that voice before the chatbot tells her she's the problem.
