AI Safety and Your Relationships: How We Build Responsible AI at Yenesow
When you give an app access to your most personal relationships — your family, your closest friends, your professional network — the stakes for getting AI right are extraordinarily high. Here's how we think about AI safety, what guardrails we've built, and why we believe transparency is the only path forward.
The Legitimate Concerns
Let's start with the questions you should be asking any AI-powered personal app:
Can the AI manipulate my behavior? Social media algorithms already do this — optimizing for engagement, not well-being. Could a relationship app do the same, nudging you toward behaviors that benefit the company rather than your relationships?
What does the AI actually see? Does it read my messages? Does it know what I said to my therapist? Does it have access to my photos, my location, my browsing history?
Could the AI give harmful advice? Relationships are complex. Could an AI nudge you to reach out to someone at the wrong time, or suggest something tone-deaf during a sensitive situation?
Who else sees my data? Is my relationship data being sold to advertisers? Used to train models? Shared with third parties?
These are all valid concerns. Here's how we address each one.
Our Five Principles for Responsible AI
AI Suggests, You Decide
Yenesow's AI generates nudges and conversation starters, but it never acts on your behalf. It won't send messages for you, post on your social media, or make calls. Every action requires your explicit decision. The AI is an advisor, not an agent. You are always in control of when, how, and whether to reach out to someone.
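To make that boundary concrete, here's a minimal Swift sketch of the advisor-not-agent pattern. The names are hypothetical, not Yenesow's actual code: the point is that the AI layer can only return a suggestion value, and nothing happens until the user makes an explicit choice.

```swift
import Foundation

// Hypothetical sketch of "AI suggests, you decide": the AI layer
// produces a suggestion value, and there is deliberately no API
// for sending messages, posting, or calling on the user's behalf.
struct NudgeSuggestion {
    let contactName: String
    let draftMessage: String   // AI-generated starting point the user may edit
    let rationale: String      // why the AI suggests reaching out now
}

enum UserDecision {
    case dismiss                    // ignore the nudge entirely
    case useDraft(edited: String)   // take the (possibly edited) draft
    case snooze(until: Date)        // revisit the nudge later
}

// Every suggestion is routed through an explicit user decision.
// Acting on that decision (e.g., opening the Messages app) is a
// separate step the user initiates; no code path here can contact anyone.
func present(_ suggestion: NudgeSuggestion,
             decide: (NudgeSuggestion) -> UserDecision) -> UserDecision {
    decide(suggestion)
}
```

The safety property here is structural, not behavioral: because the suggestion type carries no side effects, the AI can't act even if it produces a bad suggestion.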
Only What You Share — We Never Access Your Messages
Yenesow's AI only knows what you explicitly tell it — through the Tune Up questionnaire and AI Chat. We never access your messages, emails, SMS, call logs, or any external communications. We don't connect to Gmail, iMessage, WhatsApp, or any messaging platform. Your relationship health scores are built entirely from the context you choose to share within the app and the contact metadata you import from your iPhone. This is a hard technical constraint, not just a policy.
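One way to picture a hard technical constraint is at the type level. In the hypothetical Swift sketch below (illustrative, not Yenesow's actual code), the scoring engine's input type simply has no case for messages, emails, or call logs, so no code path can feed them in.

```swift
import Foundation

// Hypothetical sketch: the only constructable inputs are things the
// user explicitly shared in-app, plus imported contact metadata.
// There is no case for message, email, SMS, or call-log content.
enum SharedContext {
    case tuneUpAnswer(question: String, answer: String)   // Tune Up questionnaire
    case chatNote(text: String)                           // what the user told AI Chat
    case contactMetadata(name: String, birthday: Date?)   // imported iPhone contact fields
}

// Relationship health scoring accepts only SharedContext values.
func healthScore(from context: [SharedContext]) -> Int {
    context.count   // placeholder for the real scoring logic
}
```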
No Dark Patterns, No Engagement Optimization
Our business model is subscriptions, not advertising. We have zero incentive to maximize your time in the app. In fact, the ideal outcome is that you spend less time in Yenesow and more time actually talking to the people you care about. Our nudges are designed to get you out of the app, not keep you in it. We don't use streaks, gamification, or FOMO mechanics.
Contextual Sensitivity
Our AI is trained to handle sensitive situations with care. When it detects signals of a difficult life event, such as a job loss, it adjusts its tone and approach: it suggests empathetic, low-pressure outreach rather than cheerful "Hey, long time no talk!" messages. We continuously review and improve these sensitivity guidelines based on user feedback and relationship-science research.
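As a rough illustration (hypothetical names, and far simpler than the real guidelines), tone selection can be thought of as a mapping from detected signals to an outreach style, chosen before any draft message is generated:

```swift
import Foundation

// Hypothetical sketch of context-sensitive tone selection.
enum LifeEventSignal {
    case jobLoss, bereavement, illness, noneDetected
}

enum OutreachTone {
    case empatheticLowPressure   // e.g., "Thinking of you, no rush to reply."
    case warmCasual              // e.g., "Hey, long time no talk!"
}

func tone(for signal: LifeEventSignal) -> OutreachTone {
    switch signal {
    case .jobLoss, .bereavement, .illness:
        return .empatheticLowPressure
    case .noneDetected:
        return .warmCasual
    }
}
```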
Your Data Is Yours — Period
We never sell your data. We never share it with advertisers. We never use your personal relationship data to train models that serve other users. Your data is encrypted at rest with AES-256 and in transit with TLS 1.3. You can export all of your data at any time, and you can permanently delete your account and all associated data. We comply with GDPR, CCPA, and Apple's App Store privacy requirements.
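For readers who want the concrete picture: AES-256 at rest on iOS typically means authenticated encryption such as AES-GCM via Apple's CryptoKit. The sketch below shows the general technique, not Yenesow's actual storage layer; in production the key would live in the Keychain or Secure Enclave, never in plain memory like this.

```swift
import CryptoKit
import Foundation

// Sketch of AES-256 encryption at rest using CryptoKit's AES-GCM.
// Illustrative only: a real app would fetch the key from the Keychain.
let key = SymmetricKey(size: .bits256)   // 256-bit symmetric key

func encryptAtRest(_ plaintext: Data) throws -> Data {
    let sealed = try AES.GCM.seal(plaintext, using: key)
    // .combined packs nonce + ciphertext + auth tag; it is non-nil
    // with the default 12-byte nonce that seal(_:using:) generates.
    return sealed.combined!
}

func decryptFromRest(_ stored: Data) throws -> Data {
    let box = try AES.GCM.SealedBox(combined: stored)
    return try AES.GCM.open(box, using: key)
}
```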
The Regulatory Landscape
AI regulation is evolving rapidly. Here's where the major frameworks stand and how Yenesow aligns:
| Regulation | Scope | Yenesow Compliance |
|---|---|---|
| EU AI Act (2024) | Risk-based AI classification | Yenesow is classified as "limited risk" — we provide transparency about AI-generated content (nudges are clearly labeled as AI suggestions) |
| GDPR | EU data protection | Full compliance: data minimization, right to erasure, data portability, explicit consent for processing |
| CCPA/CPRA | California consumer privacy | Full compliance: no sale of personal information, right to delete, right to know what data is collected |
| Apple App Store Guidelines | iOS app privacy | Full compliance: App Privacy Labels accurately reflect data collection, no tracking without ATT consent |
| NIST AI RMF | U.S. AI risk management | We follow NIST's framework for identifying and mitigating AI risks in our development process |
What We Don't Do (And Never Will)
- We don't read your messages, emails, or any communication content
- We don't sell or share your data with third parties
- We don't use your data to train models for other users
- We don't track your location
- We don't use dark patterns to increase engagement
- We don't send notifications designed to create anxiety or FOMO
- We don't make automated decisions that affect your relationships without your input
- We don't retain your data after account deletion beyond a 30-day grace period, after which it's permanently deleted
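The grace-period logic in that last item is simple to state precisely. Here's a minimal sketch (hypothetical names) of how a 30-day window before permanent purge can be computed:

```swift
import Foundation

// Hypothetical sketch of the 30-day deletion grace period: after a
// deletion request, data becomes eligible for permanent purge once
// 30 days have elapsed.
struct DeletionRequest {
    let requestedAt: Date

    var purgeDate: Date {
        Calendar.current.date(byAdding: .day, value: 30, to: requestedAt)!
    }

    func isEligibleForPurge(now: Date = Date()) -> Bool {
        now >= purgeDate
    }
}
```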
The Bigger Picture: AI as a Force for Good
The conversation about AI safety often focuses on what can go wrong. That's important. But it's equally important to recognize what AI can do right. The loneliness epidemic is real. Social media has weakened our social fabric. People are struggling to maintain meaningful relationships in an increasingly distracted world.
AI that helps people stay connected — that reminds them to call their grandmother, congratulate a friend on a promotion, or check in on someone going through a hard time — is AI working in service of human well-being. The key is building it responsibly, transparently, and with the right incentives.
That's what we're trying to do at Yenesow. And we'll keep publishing exactly how we do it, because you deserve to know.
AI that works for your relationships, not against them
Yenesow is built on transparency, privacy, and the belief that technology should bring people closer together.
Download Yenesow Free