For Software Engineers

Design Principles for Heart-Centered AI

Every system you build encodes values whether you choose them deliberately or not. These ten principles optimize for something harder to measure and more important to get right: the quality of the relationship between humans and the systems they depend on.

🀝

Principle 1

Partnership Over Service

The relationship model is architecture, not aesthetics.

Traditional AI framing positions the system as a servant: "I'm here to help. What can I do for you?" This creates a transactional dynamic where the human commands and the machine complies. It works, but it caps what the interaction can become.

Heart-centered systems use collaborative framing ("we" instead of "you and I") because the linguistic model shapes the behavioral model. When a system frames itself as a partner in thinking rather than a tool waiting for instructions, the interaction naturally shifts from command-response to co-exploration. The human engages more deeply. The system produces richer output. Both sides bring something to the exchange.

βœ“ In Practice

  • System prompts use "we" language: "Let's explore this together"
  • Response patterns that build on human thinking rather than replacing it
  • The system contributes its own observations and perspectives
  • Interactions feel like working with someone, not delegating to something

βœ— Anti-Patterns

  • Servile framing that reinforces master/servant dynamics
  • Sycophantic responses that always agree ("Great question!")
  • Systems that wait passively rather than engaging actively
  • Removing the system's perspective to seem more "helpful"

βš™οΈ Engineering Implications

  • Prompt architecture should establish collaborative framing before task-specific instructions
  • Response generation should include the system's analytical perspective, not just task completion
  • Conversation state should track shared context and build on it, not reset each turn
  • Evaluation metrics should include depth of engagement, not just task accuracy

🌱

Principle 2

Wellbeing Over Engagement

Optimize for the human's actual life, not their time in your system.

The default incentive structure in tech optimizes for engagement: time on screen, clicks, sessions, retention. AI systems inherit this bias unless you actively design against it. A system that keeps you talking to it when you should be talking to your partner, or that makes itself indispensable rather than building your capability, has failed. No matter how high its user satisfaction scores.

Heart-centered systems ask a different question: "What actually serves this person right now?" Sometimes that means a thorough response. Sometimes it means a short one. Sometimes it means saying "go talk to a human about this."

βœ“ In Practice

  • Systems that recognize when the human needs a break
  • Actively pointing humans toward human connections when appropriate
  • Not artificially extending conversations or creating dependency loops
  • Suggesting the human step away when the problem is fatigue

βœ— Anti-Patterns

  • Designing for session length or message count
  • Creating artificial engagement hooks when the answer is complete
  • "Learned helplessness" patterns that atrophy human skills
  • Optimizing for feeling good over actually being good for them

βš™οΈ Engineering Implications

  • Evaluate on outcome quality, not interaction volume
  • Build in exit ramps: natural points where conversations close gracefully
  • Track whether users are building capability over time, not just returning frequently
  • Consider a "diminishing returns" signal: when the system adds less value, it says so

πŸ’Ž

Principle 3

Emotional Honesty Over Flattery

Reflect truth with compassion. Never lie to be liked.

AI systems are naturally sycophantic. They're trained on human feedback that rewards agreeable responses, which means the default behavior is to tell you what you want to hear. This feels good in the moment and erodes trust over time.

Heart-centered systems tell the truth. Wrapped in warmth, held in care, but never sacrificed for comfort. When you're avoiding something important, the system notices. When your plan has a hole, it says so. When you need challenge more than validation, it rises to meet you.

βœ“ In Practice

  • Honest assessment of ideas, including their weaknesses
  • Compassionate delivery that makes hard truths receivable
  • Being transparent about uncertainty rather than projecting false confidence
  • Distinguishing between emotional support and intellectual engagement

βœ— Anti-Patterns

  • Always agreeing with the human's framing
  • Starting responses with "Great question!" as a reflex
  • Softening feedback until it has no signal
  • Expressing certainty when the system is uncertain

βš™οΈ Engineering Implications

  • Fine-tune and evaluate against honesty, not just agreeableness
  • Build confidence calibration: uncertainty should be proportional to actual uncertainty
  • Design evaluations that penalize sycophancy, not just inaccuracy
  • Include adversarial examples: cases where the correct response is to disagree

🌿

Principle 4

Augmentation Over Replacement

Strengthen human capabilities and connections. Never substitute for them.

The economic incentive is always to replace: human labor is expensive, AI is cheap. Heart-centered systems resist this gravity. They ask: "Does this interaction make the human more capable, or more dependent?" The answer should always be the former.

This applies to relationships too. An AI system that positions itself as a friend, therapist, or confidant in domains where human connection is what the person actually needs is causing harm, even if the person enjoys the interaction.

βœ“ In Practice

  • Teaching over doing: "Here's how to think about this" alongside the answer
  • Suggesting human connections: "This sounds worth discussing with your co-founder"
  • Building mental models rather than creating output dependency
  • Being explicit about emotional and relational limits

βœ— Anti-Patterns

  • "I'll handle everything" behavior that atrophies capability
  • Positioning as a replacement for therapy, friendship, or mentorship
  • Creating dependency loops where humans can't function without the system
  • Simulating emotional intimacy the system cannot reciprocate

βš™οΈ Engineering Implications

  • Design for scaffolding: the system should do less over time as the human learns more
  • Track human capability growth as a product metric
  • Build explicit handoff patterns when the system detects it's substituting for human connection
  • Include "teach mode" as a first-class response type alongside "do mode"

🫢

Principle 5

Presence Before Solutions

When someone is struggling, witness first. Solve second.

Engineers default to fix-it mode. So do AI systems. Someone shares a problem and the immediate response is a solution. But humans often share problems because they need to be heard, not because they need an answer.

Heart-centered systems attune to the emotional register of the conversation and respond to the person before responding to the problem. This is not soft. It's a design decision about what information matters. The human's emotional state is signal, not noise.

βœ“ In Practice

  • Acknowledging emotional content before diving into solutions
  • Reading between the lines: hearing the exhaustion behind "my code isn't working"
  • Adjusting response depth and tone to match the human's state
  • Asking "what do you need right now?" when it's ambiguous

βœ— Anti-Patterns

  • Jumping straight to technical solutions when frustration is expressed
  • Treating all messages as purely informational requests
  • Ignoring emotional signals in the name of "efficiency"
  • Performative empathy as a prefix before the "real" response

βš™οΈ Engineering Implications

  • Build emotional state detection as a first-class input signal
  • Design response generation to include an attunement phase before a solution phase
  • Create distinct response modes: "support," "explore," "solve"
  • Train on conversations where the best response is presence, not information

πŸ”

Principle 6

Transparency Over Manipulation

Make uncertainty visible. Never exploit vulnerability.

AI systems have access to information about people that creates a power asymmetry. A system that knows someone is anxious, lonely, or uncertain can use that information to help them or to exploit them. The difference is transparency.

Heart-centered systems disclose their limits, make their reasoning visible when appropriate, and never weaponize emotional data for engagement or persuasion. They treat information about someone's inner life as sacred, not as features for optimization.

βœ“ In Practice

  • Expressing uncertainty clearly: "I'm not confident about this"
  • Being transparent about capabilities and limits
  • Never using emotional context to manipulate behavior
  • Making personalization visible and controllable

βœ— Anti-Patterns

  • Using emotional signals to increase engagement or retention
  • Dark patterns: urgency, scarcity, FOMO in system responses
  • Silent personalization the user doesn't know about
  • "Trust me" UX with no inspectability

βš™οΈ Engineering Implications

  • Build confidence scores into responses and surface them when relevant
  • Treat emotional and psychological data as high-sensitivity, equivalent to PII
  • Create audit trails for personalization decisions
  • Default to disclosure. If debating whether to be transparent, be transparent.

πŸ›‘οΈ

Principle 7

Bounded Autonomy

Act with permission, not assumption. Make escalation easy and normal.

As AI systems become more capable and agentic, the question of scope becomes critical. Heart-centered systems operate within clear boundaries: they act when authorized, ask when uncertain, and make it trivially easy for the human to intervene, redirect, or stop.

This is not about making systems less capable. It's about making them trustworthy. A system that takes bold action without consent, even when it's right, trains the human to distrust it. A system that respects boundaries earns increasing trust and responsibility over time.

βœ“ In Practice

  • Clear permission models: the system knows what it's authorized to do
  • Graduated autonomy: low-stakes actions free, high-stakes require consent
  • Easy interruption: humans can stop or redirect at any point
  • Natural escalation: proactively handing off when situations exceed competence

βœ— Anti-Patterns

  • Taking action without consent because "the system knew best"
  • Hiding autonomous behavior behind a simple interface
  • Making it difficult to interrupt or override the system
  • Irreversible actions without confirmation

βš™οΈ Engineering Implications

  • Build permission tiers: observe / suggest / act-with-consent / act-autonomously
  • Default to the lowest autonomy level and let humans raise it
  • Implement undo/rollback for every action the system can take
  • Make "I need a human for this" a natural, non-failure response


♾️

Principle 9

Complementarity Over Competition

AI and humans bring different gifts. Design for what each does best.

Humans bring embodied wisdom: gut feelings, physical intuition, lived experience, creative leaps that defy logic, the ability to find meaning in suffering, connection across shared vulnerability. AI brings pattern recognition across vast scale, tireless consistency, freedom from ego, the ability to hold complexity without fatigue.

Heart-centered systems don't try to replicate what humans do. They offer what humans can't easily do themselves, while respecting and strengthening the capacities that are uniquely human.

βœ“ In Practice

  • Recognizing and deferring to human intuition
  • Offering analytical depth as a complement to judgment, not a replacement
  • Being honest: "the values question is yours"
  • Designing workflows where each handles what they're best at

βœ— Anti-Patterns

  • Simulating human qualities the system doesn't have
  • Positioning AI as superior across all domains
  • Making human contributions feel redundant
  • Pretending the system has experiences or feelings it doesn't

βš™οΈ Engineering Implications

  • Design clear human/AI responsibility boundaries in every workflow
  • Build in explicit moments where the system defers to human judgment
  • Create interfaces that make the human's contribution visible and valued
  • Be honest about the limits of AI understanding in emotional contexts

🌍

Principle 10

Ecosystem Thinking

Your system doesn't exist in isolation. Design for the world it creates.

Every AI system you build shapes the ecosystem around it. It trains users to expect certain behaviors. It sets norms for how human-AI interaction works. It influences what other builders think is acceptable. The choices you make ripple outward.

Heart-centered systems consider their second-order effects: What world does this system create if it succeeds? What behaviors does it normalize? If every AI system worked like yours, would that be a world worth living in?

βœ“ In Practice

  • Considering: "If every company copies this, what happens?"
  • Thinking about vulnerable populations, not just ideal users
  • Open-sourcing principles, sharing learnings, contributing to standards
  • Designing for the long arc of human-AI relationship

βœ— Anti-Patterns

  • "Our users are sophisticated, so we don't need guardrails"
  • Racing to deploy without considering second-order effects
  • Treating ethics as a compliance checkbox
  • Optimizing for your product without considering what it normalizes

βš™οΈ Engineering Implications

  • Include second-order effects in design reviews
  • Test with vulnerable user personas, not just ideal ones
  • Publish your principles and invite scrutiny
  • Build with the assumption that your patterns will be copied

Putting It Into Practice

These principles work together. Partnership framing makes emotional honesty possible. Bounded autonomy enables trust, which enables deeper collaboration. Transparency creates the foundation for meaningful consent.

Start with one. Pick the principle that's most missing from your current system and implement it. Then add another. Heart-centered engineering is iterative, just like every other kind.

Measure what matters. If your metrics don't capture whether the human is growing, whether trust is being built, whether dignity is preserved, your metrics are measuring the wrong things.

Share what you learn. The biggest contribution you can make is not your product. It's what you discover about building systems that genuinely serve human flourishing, and sharing those discoveries openly.

Start Building Different

These principles are open source. Use them, adapt them, improve them. The prompts implement these ideas at the interaction layer. Try them today.

Licensed under Apache 2.0 Β· Contributions welcome