About The Role
We are hiring ML Researchers to develop novel approaches that advance the frontier of multimodal vision AI and create product-defining capabilities for SpreeAI. This role exists because current generative and vision models are not designed for photorealistic human representation, controllable try-on, or real-world deployment constraints. You will explore new architectures, algorithms, and training strategies that improve realism, controllability, efficiency, and multimodal understanding — with a direct path from research to production.
You will work on research problems across:
- photorealistic virtual try-on
- human-centric visual representation learning
- video-based modeling and temporal consistency
- multimodal reasoning and generative pipelines
- compute-efficient diffusion and generative architectures
This is a research role with product impact: successful work leads to platform capabilities, white papers, patents, and, most importantly, industry differentiation.
Why This Role Exists
Modern multimodal AI systems struggle with identity preservation, pose consistency, physical realism, and controllability under production constraints. We are building new approaches where:
- diffusion models must produce consistent outputs across poses, viewpoints, and garments,
- generative models must learn human and garment interactions realistically,
- research innovations must scale to real-world deployment environments.
This role is for researchers who want to see novel ideas become shipped systems used by real customers.
What You'll Do
- Develop novel architectures and training approaches for vision and multimodal AI.
- Advance generative modeling techniques including controllable diffusion and video generation.
- Design experiments improving realism, temporal consistency, and human representation.
- Collaborate with applied engineering teams to translate research into production systems.
- Publish white papers or research outputs aligned with product differentiation.
- Evaluate new model paradigms for scalability and efficiency.
Core Research Areas & Model Architectures
Candidates should have familiarity with, or interest in advancing:
- Diffusion models and latent diffusion architectures.
- Transformer-based vision models (ViT, multimodal transformers).
- Image-to-image and video generation pipelines.
- Control mechanisms for generative models (conditioning, adapters, LoRA).
- Representation learning for human pose, geometry, or identity consistency.
- Multimodal architectures combining vision, text, and structured inputs.
Qualifications
- PhD in Computer Science, Artificial Intelligence, Robotics, Computer Vision, or related field.
- Strong research background in computer vision, generative modeling, or multimodal AI.
- Strong programming skills in Python and familiarity with object-oriented languages.
- Experience with deep learning frameworks (PyTorch preferred).
- Strong foundations in machine learning theory and experimental design.
Preferred Qualifications
- Publications at top conferences (CVPR, ICCV, NeurIPS, ICLR, SIGGRAPH, etc.).
- Experience with diffusion-based generative models.
- Video modeling or temporal learning experience.
- Experience bridging research into production systems.
- Interest in compute efficiency, distillation, or scalable generative pipelines.
SpreeAI is a fast-growing, innovative AI company at the forefront of fashion and e-commerce, revolutionizing how consumers engage with fashion through photorealistic try-on technology and hyper-personalized shopping experiences. Our mission is to redefine the retail landscape with cutting-edge AI solutions that blend high fashion and technology. We thrive in a dynamic, fast-paced environment where creativity meets technology to drive real impact. If you are passionate about innovation and shaping the future of fashion, SpreeAI offers a platform to make your mark.