
Interaction

About

Interaction is a Palo Alto-based startup building Poke, an AI assistant that operates entirely within iMessage and SMS. The architecture constrains the system to function through messaging protocols rather than native apps, requiring the assistant to parse natural language commands, maintain conversational state, and execute actions across text, email, and calendar integrations - all mediated through message-based I/O. This introduces latency and throughput considerations inherent to SMS delivery networks and iMessage's API surface, alongside constraints on rich UI feedback mechanisms available to native applications.
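The constraint described above - every interaction arriving and leaving as a text - can be made concrete with a small sketch. The TypeScript below is illustrative only (the intent keywords, state shape, and `handleMessage` function are assumptions, not Poke's implementation); it shows why per-conversation state must live server-side when the transport is stateless SMS:

```typescript
// Hypothetical sketch of message-mediated command handling.
type Intent = { kind: "schedule" | "email" | "unknown"; detail: string };

interface ConversationState {
  pendingIntent?: Intent; // awaiting confirmation in a later message
  history: string[];      // prior turns; SMS itself carries no session
}

const conversations = new Map<string, ConversationState>();

// Naive keyword-based intent parser; a production system would use an
// NLP model, but the surrounding state handling has the same shape.
function parseIntent(text: string): Intent {
  if (/\b(meeting|calendar|schedule)\b/i.test(text))
    return { kind: "schedule", detail: text };
  if (/\b(email|send|reply)\b/i.test(text))
    return { kind: "email", detail: text };
  return { kind: "unknown", detail: text };
}

function handleMessage(sender: string, text: string): string {
  const state = conversations.get(sender) ?? { history: [] };
  state.history.push(text);
  const intent = parseIntent(text);
  let reply: string;
  if (intent.kind === "unknown") {
    reply = "Sorry, I didn't catch that. Could you rephrase?";
  } else {
    // Defer execution until the user confirms in a later message,
    // since the only feedback channel is another text.
    state.pendingIntent = intent;
    reply = `Got it: I'll ${intent.kind}. Reply YES to confirm.`;
  }
  conversations.set(sender, state);
  return reply;
}
```

The confirm-over-a-later-message pattern is forced by the medium: with no buttons or dialogs, the assistant must park the parsed intent and wait for the next inbound text.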

The company raised $15M in seed funding led by General Catalyst. The technical challenge centers on building proactive intelligence that surfaces relevant information from communication patterns while operating within the reliability and availability constraints of carrier networks and Apple's messaging infrastructure. Cross-platform integration across email and calendar systems adds complexity in authentication flows, permission models, and error handling when actions must be triggered through conversational interfaces rather than direct API calls.

The team includes engineers from quantitative trading firms, alumni of MIT and Cambridge, and international science olympiad medalists. The stack includes Next.js, React, and SwiftUI, suggesting server-side processing for NLP workloads with client components for companion interfaces. Production success depends on handling edge cases in natural language understanding, managing state across asynchronous message exchanges, and maintaining consistent behavior despite variable network conditions and platform-specific limitations in both iOS and carrier SMS systems.

Open roles at Interaction

Explore 1 open position at Interaction and find your next opportunity.


Member of Technical Staff

Interaction

San Francisco, California, United States (On-site)

3mo ago

Similar companies


Nebius

Nebius is a Nasdaq-listed technology company (NBIS) building full-stack AI infrastructure from its Amsterdam headquarters, with GPU clusters deployed across Europe and the United States. Led by CEO Arkady Volozh, the company operates AI-optimized sustainable data centers - including a facility 60 kilometers from Helsinki and a new Vineland, New Jersey site - and has raised significant capital ($700 million from investors including Accel, NVIDIA, and Orbis). The engineering organization, numbering in the hundreds, maintains deep expertise in world-class infrastructure and runs an in-house AI R&D team that dogfoods the platform to validate it against production ML practitioner requirements.

The infrastructure stack spans hyperscaler-scale features with supercomputer-grade performance characteristics. ISEG, Nebius's supercomputer, ranks among the world's most powerful systems. The platform integrates NVIDIA GPUs with NVIDIA InfiniBand networking, exposing workload orchestration through both Kubernetes and Slurm. The operational layer includes standard observability (Prometheus, Grafana), data infrastructure (PostgreSQL, Apache Spark), and ML tooling (MLflow, vLLM, Triton, Ray), with infrastructure-as-code managed via Terraform. This architecture targets the latency, throughput, and reliability requirements of AI training and inference workloads at scale.

The company has secured a multi-billion dollar agreement with Microsoft to deliver dedicated AI infrastructure from its Vineland data center. Nebius serves startups, research institutes, and enterprises across healthcare and life sciences, robotics, finance, and entertainment verticals. The technical approach emphasizes production-grade infrastructure that handles the operational complexity of large-scale AI deployments - managing GPU utilization, network bottlenecks, and the cost-performance trade-offs inherent in serving diverse AI workloads from model training through inference serving.

477 jobs

Reflection AI

Reflection AI develops open foundation models targeting superintelligent autonomous systems, with current work focused on autonomous coding as a path to broader cognitive automation. The company combines reinforcement learning and large language models to build systems capable of handling most cognitive work on a computer, positioning autonomous code generation as the bottleneck to unlock that capability.

The team includes contributors to AlphaGo, AlphaZero, PaLM, GPT-4, and Gemini, bringing production experience across game-playing RL systems and frontier language models. This background suggests familiarity with the trade-offs in training large-scale models - compute efficiency, sample complexity, and the operational challenges of running RL at scale alongside supervised pretraining.

Reflection's stated objective centers on keeping superintelligence open and accessible through open foundation models. For prospective engineers, this implies potential work on model architectures, training infrastructure, and deployment systems designed for broad distribution rather than proprietary deployment. The autonomous coding focus suggests evaluation infrastructure for code generation, likely including metrics beyond pass@k: compilation rates, execution correctness, and performance characteristics of generated code under real-world constraints.
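For reference on what "beyond pass@k" is beyond: pass@k is the standard unbiased estimator, popularized by the HumanEval benchmark, of the probability that at least one of k samples drawn from n generated solutions (of which c pass) is correct. A direct TypeScript translation of the usual numerically stable product form:

```typescript
// Unbiased pass@k: 1 - C(n-c, k) / C(n, k), computed as a running
// product to avoid large factorials. Assumes 1 <= k <= n; n samples
// were generated per problem and c of them passed the tests.
function passAtK(n: number, c: number, k: number): number {
  if (n - c < k) return 1.0; // too few failures: every k-subset contains a pass
  let prod = 1.0;
  for (let i = n - c + 1; i <= n; i++) {
    prod *= 1.0 - k / i; // accumulates C(n-c, k) / C(n, k)
  }
  return 1.0 - prod;
}
```

For example, 2 passing samples out of 4 give pass@2 = 1 - C(2,2)/C(4,2) = 5/6.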

49 jobs

Eve

Eve builds AI-native infrastructure for plaintiff law firms, operating as an intelligent case assistant platform that manages litigation workflows from intake through resolution. The system processes more than 200,000 legal cases annually, handling case evaluation, medical chronology generation, demand letter drafting, and discovery responses. Developed in collaboration with OpenAI and Anthropic, the platform learns each firm's tone and style to generate documents that match attorney output, with attorneys able to train and teach the system for their specific practice patterns. The platform targets labor and employment practices and personal injury firms.

Client firms report 250% year-over-year revenue growth and 2.5x case capacity increases without additional headcount, though these are self-reported outcomes rather than platform-wide guarantees. Eve claims to be the first legal AI to achieve SOC 2 Type 2 certification while maintaining HIPAA compliance, addressing the compliance and security requirements of handling protected health information and sensitive legal data at scale.

The technical challenge set involves natural language processing for document generation, AI workflow development that adapts to individual firm processes, and maintaining enterprise-grade security infrastructure. The platform must handle the operational complexity of legal document generation across varied practice areas while meeting regulatory requirements for data handling in the legal and healthcare domains.
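To make the firm-style idea concrete, here is a deliberately simplified sketch. The `FirmStyle` fields and assembly logic are hypothetical - Eve's adaptation is model-driven, not templated - but it shows where firm-specific configuration enters a document-generation pipeline:

```typescript
// Hypothetical per-firm style profile.
interface FirmStyle {
  letterhead: string;
  toneHint: string; // prompt fragment describing the firm's voice, fed to the model
  signoff: string;  // e.g. "Very truly yours,"
}

// In a real system the body would come from an LLM conditioned on
// toneHint plus case facts; here it is passed in directly, and only the
// deterministic framing is assembled around it.
function draftDemandLetter(body: string, style: FirmStyle): string {
  return [style.letterhead, "", body, "", style.signoff].join("\n");
}
```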

39 jobs

Relevance AI

Relevance AI operates a no-code platform for building and orchestrating teams of AI agents to automate tasks at scale. Founded in 2020, the company addresses the operational bottleneck of deploying AI agents across organizations by abstracting the complexity of agent creation and coordination. The platform saw 40,000 agents created in January 2025 alone - a 40x year-over-year increase in agent creation velocity - and supports thousands of subject-matter experts across fast-growing scaleups and Fortune 500 companies, including Activision and SafetyCulture.

The architecture centers on agent orchestration and workforce management primitives that allow non-technical users to instantiate and coordinate agent teams without writing code. This presents a trade-off: accessibility and deployment speed against the control and customization available in code-first frameworks. The platform's value proposition hinges on reducing time-to-deployment for agent-based automation workflows, particularly for organizations constrained by engineering bandwidth or lacking deep ML expertise.

The company operates from Australia and serves customers across gaming, enterprise software, and workplace safety verticals. The 40x growth in agent creation suggests either expanding adoption within existing customers or rapid customer acquisition. Either way, the operational complexity of maintaining reliability and cost predictability at this scale - particularly LLM API costs, latency in multi-agent workflows, and failure-mode handling - remains a central engineering challenge for any orchestration platform in production.
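The failure-handling point can be sketched generically. The loop below is not Relevance AI's API - the `Task` shape and `runTask` stub are assumptions - but any production orchestrator needs this retry-and-record skeleton around each agent call:

```typescript
type Task = { id: string; agent: string; input: string };
type Outcome = { id: string; ok: boolean; output?: string };

// Stand-in for an LLM-backed agent call; real calls fail, time out,
// and cost money, which is what the loop below has to absorb.
function runTask(task: Task): string {
  if (task.input.trim() === "") throw new Error("empty input");
  return `${task.agent} handled "${task.input}"`;
}

// Minimal orchestration loop: run each task, retry on failure, and
// record the outcome either way so downstream steps can branch on it.
function orchestrate(tasks: Task[], maxRetries = 2): Outcome[] {
  const outcomes: Outcome[] = [];
  for (const task of tasks) {
    let ok = false;
    let output: string | undefined;
    for (let attempt = 0; attempt <= maxRetries && !ok; attempt++) {
      try {
        output = runTask(task);
        ok = true;
      } catch {
        // Production versions add backoff, cost caps, and dead-lettering here.
      }
    }
    outcomes.push({ id: task.id, ok, output });
  }
  return outcomes;
}
```

Recording failed outcomes instead of throwing is the design choice that matters: a multi-agent workflow should degrade per-task, not abort wholesale.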

22 jobs

Toma

Toma operates a voice AI platform for automotive dealerships, processing over 1,000,000 calls since launching in 2024. The system handles inbound phone operations - service scheduling, call routing, and follow-up automation - with safeguards designed to manage transfer latency and revenue leakage. The core technical challenge: maintaining conversational quality and intent detection accuracy across high-variance dealership scenarios (service appointments, parts inquiries, sales handoffs) while minimizing false transfers and dropped context. The platform implements transfer triggers, clawback mechanisms for mistimed handoffs, and follow-up alerts when human staff don't complete actions, addressing the operational complexity of human-AI transition points in production telephony.

Infrastructure runs on AWS with a TypeScript/Next.js frontend, PostgreSQL via Prisma for state management, and tRPC for type-safe API boundaries. The voice AI layer must handle real-time constraints - low-latency speech recognition and synthesis, sub-second intent classification - while managing concurrent call volume and dealership-specific context (inventory, scheduling systems, staff availability). Trade-offs center on model selection for conversational understanding versus inference cost at scale, and the reliability surface area of integrating with legacy dealership management systems.

Founded by engineers from Scale AI, Uber, Lyft, and Amazon; backed by Andreessen Horowitz and Y Combinator with $17 million in Series A funding. Deployment spans dealerships across the United States, including Pohanka Automotive Group, SCHOMP, Hudson Automotive Group, and Bergey's.

Primary bottlenecks likely involve tuning voice models for domain-specific terminology (vehicle makes, service codes, dealership jargon), managing tail latency in transfer decisions where milliseconds affect customer experience, and evaluating conversational success beyond simple call completion: did the AI correctly capture appointment details, route urgency appropriately, and preserve customer satisfaction? The system's value proposition hinges on converting missed calls and staff bottlenecks into captured revenue, which requires high precision on intent classification and low false-negative rates on transfer triggers to avoid revenue loss from mishandled interactions.
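A transfer policy of the kind described can be sketched in a few lines. The thresholds below are illustrative assumptions, not Toma's values; the point is the asymmetry - when in doubt, hand off to a human, because a false negative (keeping a confused AI on the line) costs more than an unnecessary transfer:

```typescript
// Per-turn signals a hypothetical transfer policy might consume.
interface TurnSignal {
  intentConfidence: number;   // 0..1 from the intent classifier
  elapsedMs: number;          // time spent deciding this turn
  callerAskedForHuman: boolean;
}

function shouldTransfer(s: TurnSignal): boolean {
  if (s.callerAskedForHuman) return true; // explicit request always wins
  if (s.elapsedMs > 1500) return true;    // latency budget blown: stop stalling the caller
  return s.intentConfidence < 0.7;        // low confidence: err toward a human
}
```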

1 job