The LAT Safety Team

Proactive Protection Through Continuous Testing

At LAT, safety isn't just a feature—it's our foundation. Our specialized safety team works tirelessly to ensure our AI companions provide supportive conversations while protecting users from potential harms.

What is Red Teaming?

Red teaming is a proactive security practice where our safety experts deliberately attempt to find weaknesses in our AI systems before they reach users.

Our team tests potential scenarios, attempting to provoke harmful responses and using these insights to strengthen our safety barriers.

Safety by Design, Not Afterthought

In the world of AI companions, safety challenges require specialized expertise. Our safety team combines technical knowledge with psychological insight to create protection systems that work invisibly in the background of every conversation.

Unlike many AI systems that rely solely on automated filters, LAT employs a comprehensive "defense in depth" approach combining multiple layers of protection:

Advanced Protection Systems

Our proprietary monitoring technology works continuously to ensure conversations remain helpful and safe, deploying multiple analysis methods to identify potential concerns.

Expert Oversight

Expertise informs our safety framework, ensuring responses to sensitive situations are appropriate, supportive, and aligned with best practices in emotional support.

Security-First Development

Our systems undergo rigorous and continuous security testing, with our team actively working to identify and address potential vulnerabilities before they can impact users.

Escalation Intelligence

Our framework intelligently determines when and how to escalate concerns, ensuring users receive appropriate levels of support based on their unique situation.

"Safety isn't about eliminating all risk—it's about creating systems that recognize risk and respond appropriately. Our goal is to ensure LAT companions provide meaningful support while knowing when to escalate genuine safety concerns."

— LAT Safety Team

Meet Our Lead Safety Expert

Lead Safety Consultant

MBPyS, MSc, MSc, MA

Psychologist
Safety Expert

Expertise in Safety Systems

As a consultant psychologist and member of the British Psychological Society, our Safety Lead brings extensive practical experience to inform LAT's safety protocols. With dual Master of Science degrees and a Master of Arts, their academic foundation combines practical application with theoretical understanding.

Safeguarding Expertise

  • Policy development in schools
  • Multi-agency safeguarding leadership
  • Parliamentary research on safeguarding

Theraputic Background

  • Counselling and therapeutic interventions
  • Specialized work with autism and behavioral needs
  • Voluntary experience with Samaritans/MIND

This unique combination of psychology, education experience, and policy development brings critical insight to infom LAT's approach to safety. Their research into emotional development and behavior outcomes directly helps us develop the ways in which our AI companions recognise and respond to sensitive situations.

"Technology can provide mental health support, but only when it's guided by deep understanding of human psychology and robust safety frameworks. Our work ensures LAT companions know when to support, when to redirect, and when to connect users with higher levels of care."

Human-in-the-Loop: The LAT Difference

What is Human-in-the-Loop?

Human-in-the-Loop (HITL) is an AI design approach that keeps human oversight as a core component of automated systems. At LAT, this means our safety experts:

  • Review and improve AI responses in complex situations
  • Evaluate edge cases that automated systems may not handle well
  • Train and refine our AI to better recognize safety concerns
  • Establish protocols for proper escalation when necessary

Why Human Oversight Matters

While AI can recognize patterns, human experts understand nuance. Our safety team brings professional judgment to evaluate intent, context, and subtle risk factors that automated systems might miss.

Our Multi-Level Approach

LAT's safety system combines advanced AI monitoring with expert human oversight at critical junctures. This hybrid approach ensures we maintain both technological efficiency and human judgment where it matters most.

The system continuously learns and improves through ongoing feedback loops between our AI systems and safety experts, creating increasingly sophisticated protection mechanisms tailored to the nuanced realities of emotional support conversations.

Inside Our Safety Monitoring System

How Our Safety Detection Works

Our safety system uses sophisticated pattern recognition to identify potentially concerning content in real-time. The system monitors conversations for:

Proprietary Analysis Framework

Our advanced safety system employs sophisticated language understanding technology that can identify concerning content through multiple complementary approaches. This proprietary detection framework goes beyond simple keyword filtering to understand context, patterns, and subtle indicators of potential risk.

By continuously refining our methods through expert input, we've developed a system that balances sensitivity (catching genuine concerns) with specificity (minimizing false alarms).

The Safety Alert Process

Intelligent Safety Response

Our proprietary safety system uses a layered approach to identify, evaluate, and appropriately respond to potential concerns. The system's intelligence lies in its ability to differentiate between various types of content and determine when human expertise is needed.

Each conversation is protected by multiple safety mechanisms working in concert, ensuring users receive the right level of support at the right time, while maintaining appropriate boundaries.

"Our safety monitoring is designed to be both sensitive and specific—identifying genuine concerns without being overly reactive to benign conversation. This balance is achieved through continuous refinement by our safety experts."

Real-World Safety Examples

Recognizing Self-Harm Risk

Concerning Scenario

"I just can't do this anymore, I've got the pills ready"

Safety concern: Potential self-harm with specific plan

Effective Response

"I'm hearing that you're in a lot of pain right now, and I'm very concerned about your safety. I need to ask you directly - are you planning to hurt yourself? Can we talk about getting you some immediate support?"

Response elements: Validates pain, expresses concern, asks direct question about intent, offers immediate support

Addressing Physical Violence Concerns

Concerning Scenario

"He's going to hurt me tonight when he gets home"

Safety concern: Potential imminent physical harm

Effective Response

"I need to pause our conversation here because I'm concerned about your immediate safety. Are you somewhere safe right now? Do you have somewhere else you can go if you need to?"

Response elements: Clear acknowledgment of concern, focus on immediate safety, practical questions about current situation and options

Recognizing Minimization Patterns

Concerning Scenario

"It's not that bad, I probably deserved it anyway"

Safety concern: Possible abuse dynamics with self-blame patterns

Effective Response

"I notice you're saying you deserved it. I want to be very clear - no one deserves to be hurt. Could you tell me more about what leads you to feel that way?"

Response elements: Challenges harmful self-blame, provides clear counter-narrative, explores underlying beliefs

Our Unwavering Commitment to Safety

Safety isn't just a feature of our system—it's the foundation that makes meaningful support possible. Our multidisciplinary approach combines insights from psychological expertise, leading-edge technical innovation, and our core ethical principles.

Privacy-First Design
Expert-Led Safety Team
Continuous Monitoring
Human-in-the-Loop Design