About Us
At Neuracore, we're building the world's first robot learning cloud service (https://github.com/NeuracoreAI/neuracore).
Our platform eliminates the complexity of traditional robotics development by providing a complete end-to-end solution for data collection, model training, and deployment that works across different robot types and configurations.
Our multidisciplinary team is at the forefront of making robot learning accessible to organisations worldwide, from manufacturing and logistics to healthcare and research institutions. We're transforming how robotics teams develop, train, and deploy intelligent systems by providing cloud-native infrastructure that scales from small research projects to enterprise-wide robot fleets.
About the Role
We are seeking a Computer Vision Engineer to develop the visual perception systems that enable robots to understand and interact with their environments. You'll design and implement vision models for object detection, scene understanding, spatial reasoning, and manipulation planning across diverse robot platforms. This role offers the opportunity to build the visual intelligence layer of our platform, enabling robots to generalise across tasks and environments through cloud-native vision infrastructure.
Key Responsibilities
Design and implement computer vision models for robotic perception including object detection, segmentation, pose estimation, and scene understanding
Develop visual representation learning approaches that generalise across different robot cameras, environments, and task domains
Build and optimise real-time vision pipelines that run efficiently on both cloud infrastructure and edge devices
Integrate vision systems with robot control loops, bridging perception and action for manipulation and navigation tasks
Create evaluation benchmarks and testing frameworks for visual perception across diverse robotic scenarios
Collaborate with ML engineers and robotics scientists to incorporate vision into end-to-end robot learning pipelines
Required Skills
Advanced degree (PhD, Master's, or equivalent experience) in Computer Science, Computer Vision, Robotics, or a related field
Deep expertise in computer vision with hands-on experience in object detection, segmentation, depth estimation, or 3D vision
Strong proficiency in PyTorch with experience training and deploying vision models at scale
Experience with camera systems, including calibration, stereo vision, and multi-camera setups
Production software skills with the ability to build performant, maintainable vision systems beyond research prototypes
Familiarity with cloud infrastructure and containerisation for model training and deployment
Preferred Skills
Experience with vision-language models or foundation models for robotics
3D vision expertise including point cloud processing, NeRFs, or volumetric representations
Real-time inference optimisation using TensorRT, ONNX, or similar frameworks
Robotic manipulation experience with grasp detection, hand-eye coordination, or visual servoing
Familiarity with simulation environments for synthetic data generation and domain randomisation
Experience with ROS/ROS2 vision integration and sensor fusion with LiDAR, depth cameras, or tactile sensors