About Hark
Hark is an artificial intelligence company building advanced, personalized intelligence: intelligence that is proactive, multimodal, and capable of interacting with the world through speech, text, vision, and persistent memory.
We're pairing that intelligence with next-generation hardware to create a universal interface between humans and machines. While today's AI largely operates through chat boxes and decade-old devices, Hark is focused on what comes next: agentic systems that interact naturally with people and the real world.
To get there, we're developing multimodal models and next-generation AI hardware together, designed from the ground up as a single, unified interface for a new era of intelligent systems.
About the Role
The Omni team at Hark is building the next generation of AI experiences beyond text, enabling models to understand and generate content across multiple modalities, including text and vision. Our goal is to create seamless, real-time multimodal intelligence that powers intuitive and immersive user experiences.
As part of the Omni team, you will help drive the development of text, video, and multimodal models. This includes working across the full stack, from data and modeling to training, serving, and product integration. You will contribute to both pretraining and posttraining efforts while collaborating closely with product teams to push the boundaries of model capability and deliver exceptional end-to-end user experiences.
Responsibilities
- Drive research and development to advance vision and video capabilities in multimodal models, including image understanding, video modeling, and generative vision systems.
- Develop and improve large-scale vision and video data pipelines, including data collection, filtering, labeling, and synthetic data generation.
- Design and implement state-of-the-art models for vision and video, including multimodal architectures that integrate vision with text and other modalities.
- Build evaluation frameworks and internal benchmarks to measure model performance, robustness, and visual quality across tasks.
- Optimize models and systems for scalability, efficiency, and real-time or production deployment.
- Collaborate closely with product and engineering teams to translate research innovations into impactful, user-facing AI experiences.
Requirements
- Proven track record of advancing vision or video models through innovations in data, modeling, or training.
- Strong experience in areas such as image understanding, video modeling, generative vision (e.g., diffusion, autoregressive models), or multimodal learning.
- Experience with large-scale machine learning systems and distributed training.
- Strong background in data-driven experimentation, evaluation, and iterative model development.
- Strong ownership mindset and ability to drive end-to-end impact from research to production.
Bonus Qualifications
- Background in graphics, rendering, simulation, or 3D vision is a plus.
- Experience with multimodal systems (vision + text, vision + audio) or real-time AI systems is a strong plus.
Compensation
The pay offered for this position may vary based on several individual factors, including job-related knowledge, skills, and experience. The total compensation package may also include additional components and benefits depending on the specific role. This information will be shared if an employment offer is extended.