About Human Archive
Human Archive is a research lab focused on modeling human embodied intelligence.
Humans are the most sophisticated biological systems we have ever observed, yet we still do not fully understand ourselves. Human physical intelligence, including the hand, proprioception, and vision, remains a largely unsolved research problem. Our mission is to recover human embodied intelligence as a learned model. To achieve this, we build custom hardware products, deploy them globally at scale, and publish research. Today our data is used for robotics and world modeling, but the broader opportunity is advancing scientific research into intelligence itself.
Founded by Stanford and UC Berkeley researchers, we are lean and deeply technical, and we operate at extreme speed, taking on unglamorous, conventionally impossible problems that directly unlock step-function gains in model capability.
The deployment of capable humanoids at scale will permanently redefine human labor. Undesirable physical work will disappear, and human effort will shift toward a new era of abundant creativity.
We are building the infrastructure to accelerate that transition by assembling the Human Archive mafia. You will own meaningful systems from day one and see your work directly impact model capabilities. This is a once-in-a-generation inflection point. If you want to help reshape physical labor and work on problems that matter at civilizational scale, join us.
The Opportunity
As a Machine Learning Engineer, you’ll work on multimodal perception, vision-language-action (VLA) model training, robotics post-training, and downstream policy evaluation. This is a hands-on role at the intersection of applied machine learning, data infrastructure, and robotics, where your work directly shapes how data is collected, validated, annotated, and evaluated.
You’ll help close the loop between research and data collection by fine-tuning VLAs, evaluating them on downstream policy performance, and building post-training and reinforcement learning systems around real-world robotics tasks. You’ll be expected to make architectural decisions, own projects end-to-end, and operate in highly ambiguous research environments given the novelty and scale of our multimodal datasets.
Your work will help shape how frontier labs and leading robotics companies train their models, transforming physical labor markets and economies while contributing to broader research into human embodied intelligence.
What You’ll Do
Build systems for multimodal perception, annotation, dataset QA, and robotics evaluation
Publish research on multimodal data by fine-tuning and evaluating VLA models on downstream robotics tasks and policy performance
Build post-training and reinforcement learning systems around robotics failure modes and corrective demonstrations
Work across video understanding, tracking, pose estimation, temporal modeling, and multimodal alignment
Develop tooling for benchmarking, observability, and temporal efficiency
Prototype quickly, ship rapidly, and iterate from real-world robotics deployments and research feedback
What We’re Looking For
Passionate, mission-driven individuals who have demonstrated exceptional ownership in previous work
Engineers who want their work to directly impact the next frontier of physical AGI
Strong ML engineering fundamentals across robotics, computer vision, and perception systems
Experience with video understanding, tracking, pose estimation, robotics, or real-world sensor systems
Strong technical intuition and ability to move quickly in ambiguous research environments
Published research or production experience in robotics, embodied AI, reinforcement learning, motion capture, or vision systems is a strong plus