Now accepting waitlist signups

Manipulation, out of the box.

Dual-arm hardware — tabletop and mobile — paired with soda OS, our agentic OS for teleop, data collection, and VLM-orchestrated manipulation. Out of the box.

Research labs and AI developers only for now. Why we’re building this →

SOMA dual-arm tabletop robot · soda OS · bimanual teleop active

Why SOMA

Manipulation is the last mile of physical AI.

Foundation models can see and plan. What they cannot yet do reliably is act in the world. The bottleneck is no longer intelligence; it is the lack of a platform: a hardware-software stack that lets any AI developer collect data, run policies, and ship skills without building a robotics company first.

SOMA is that platform. Dual-arm robots that arrive working, with an agentic OS that treats manipulation the way Claude Code treats software: a VLM orchestrates a library of vision-action skills, calls tools, recovers from failure, and learns from every teleop session.

We build both layers because the interface between them is where every current system breaks.

Hardware

Two configurations. One stack.

Both ship calibrated, with soda OS pre-installed.

Tabletop Dual-Arm

Research-grade manipulation, lab-desk footprint.

Configuration: 2 × 6-DoF arms, fixed base
Payload: 3 kg per arm
Reach: 650 mm
Control: up to 1000 Hz, joint + Cartesian
Sensing: dual wrist cameras + scene camera
Mobile Dual-Arm

Navigate, manipulate, return. One platform.

Configuration: 2 × 6-DoF arms on holonomic base
Payload: 3 kg per arm, 50 kg base
Navigation: LiDAR + VIO SLAM out of the box
Sensing: dual wrist cameras + head-mounted RGBD
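What the joint- and Cartesian-space control modes above could look like from code, as a minimal sketch: the soda module, connect(), and every method below are hypothetical illustrations, not a published API.

  # Hypothetical sketch: the soda module, connect(), and all methods are
  # assumptions illustrating the joint + Cartesian control modes above.
  import time
  import soda  # hypothetical soda OS client library

  robot = soda.connect("tabletop-01")  # hypothetical session handle
  left, right = robot.arms             # two 6-DoF arms

  # Cartesian move: right wrist to a pose in the base frame (meters, radians).
  right.move_cartesian(xyz=(0.40, -0.15, 0.25), rpy=(3.14, 0.0, 0.0))

  # Joint-space streaming at 1 kHz (the "up to 1000 Hz" control rate).
  target = [0.0, -0.6, 1.1, 0.0, 0.5, 0.0]  # six joint angles, radians
  for _ in range(1000):                     # one second of commands
      left.command_joints(target)
      time.sleep(0.001)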

On the roadmap

Consumer-grade single arm. Same stack.

A Kickstarter campaign for hobbyists, classrooms, and indie AI developers. Same soda OS, same skill library, one arm instead of two.

soda OS

An agentic OS for manipulation.

A VLM sits at the top of the stack and orchestrates a growing library of vision-action skills — pick, place, pour, wipe, open, hand-off. Skills are trained from teleop data you collect, or from ours. Think Claude Code, but the tools are physical.

soda-os repl
> load the red mug into the dishwasher
  → plan: locate(red_mug) ▸ grasp ▸ open(dishwasher) ▸ place
  → skill: grasp.mug      [vla-policy v0.4]
  → skill: open.dishwasher
✓ done · 14.3s
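Under the hood, that transcript corresponds to a plan-execute-recover loop. The sketch below is an illustrative assumption of how VLM tool calling over a skill library can work; plan_next, Step, Result, and the stub skill are hypothetical names, not soda OS source.

  # Hypothetical sketch of the orchestration loop: a VLM proposes the next
  # skill call, an executor runs it, and failures trigger retry or handoff.
  from dataclasses import dataclass, field

  @dataclass
  class Step:
      skill: str
      args: dict = field(default_factory=dict)

  @dataclass
  class Result:
      ok: bool
      error: str = ""

  def grasp(target: str) -> Result:
      # Stub standing in for a learned vision-action policy.
      return Result(ok=True)

  SKILLS = {"grasp": grasp}  # a real registry would hold pick, place, pour, ...

  def plan_next(goal, history):
      # A VLM would pick the next skill call from the goal, the scene, and
      # past results; this stub emits a single step and then stops.
      return None if history else Step("grasp", {"target": "red_mug"})

  def run(goal, max_retries=2):
      history = []
      while (step := plan_next(goal, history)) is not None:
          for _ in range(max_retries + 1):
              result = SKILLS[step.skill](**step.args)
              if result.ok:
                  break  # skill succeeded; ask the planner for the next step
          else:
              raise RuntimeError(f"human handoff: {step.skill} kept failing")
          history.append((step, result))

  run("load the red mug into the dishwasher")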
  • Teleop: low-latency bimanual teleop. VR or leader-arm input; every session becomes training data.

  • Data: automatic collection. Trajectories, videos, proprioception, and natural-language annotations are captured by default and exportable to LeRobot format.

  • Orchestration: VLM tool calling for the physical world. High-level natural-language goals are decomposed into skill calls, with failure recovery, retries, and human-in-the-loop handoff built in.

  • Skills: a composable library. Ship pretrained primitives, fine-tune with a few dozen demos, or register your own skill in a single Python file (a sketch follows this list).
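A sketch of what a single-file skill could look like, assuming a decorator-based registration API; the soda_os module, @soda_os.skill, and load_policy are hypothetical names used for illustration only.

  # wipe_table.py: hypothetical sketch of a single-file skill registration.
  # The soda_os module, @skill decorator, and load_policy are assumptions.
  import soda_os  # hypothetical soda OS SDK

  policy = soda_os.load_policy("wipe-table-v0")  # fine-tuned VLA checkpoint

  @soda_os.skill(name="wipe.table", inputs=["wrist_rgb", "proprio"])
  def wipe_table(obs):
      # Map the current observation to the next action chunk.
      return policy(obs)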

Traction

Built with researchers. Validated by them.

GRASP Summit

Demoed live at Penn GRASP Robotics Summit — 200+ roboticists in the room.

Research demand

Multiple US robotics labs have requested early-access quotes after live demos.

Build the next manipulation policy on a platform that ships.

Join the waitlist for early access, research pricing, and soda OS beta.

Questions? hello@somarobotics.ai