Who We Are
At Serval, we're building the AI platform for IT teams. Our goal is to take on legacy players like ServiceNow, a $230+ billion company, with a platform where AI agents, rather than humans, resolve IT issues.
Serval “automates the automation,” using a natural-language-to-code workflow builder and AI agents that discover and deliver automations for tedious IT workflows.
Our mission is to free IT departments from the #helpdesk channel by creating the simplest way to automate employee onboarding/offboarding, software access management, and the long tail of employee requests. In the long term, our vision extends to a universal workflow automation platform for all business functions.
Serval was founded by product and engineering leaders from Verkada and is backed by industry-leading investors like First Round, General Catalyst, Alt Capital, and Box Group.
Role Overview
As an Applied AI Engineer at Serval, you’ll help build the intelligence behind our platform: the foundational AI agents that reason, act, and automate complex IT workflows. You’ll apply cutting-edge models and techniques in creative, real-world ways to turn repetitive IT processes into intelligent automation.
Key Responsibilities
- Design, build, and deploy AI-powered features from the ground up.
- Develop and optimize Serval’s applied AI systems — from model selection and fine-tuning to inference and evaluation pipelines.
- Integrate AI capabilities into production environments, ensuring reliability, scalability, and performance.
- Collaborate across engineering and product to bring new customer experiences to life.
- Continuously evaluate model performance and improve results based on data and user feedback.
- Help establish AI engineering best practices and raise the technical bar across the team.
Requirements
- Experience as a software engineer or machine learning engineer with a focus on applied AI.
- Proven experience developing and deploying production-grade AI systems, ideally leveraging large language models or foundation models.
- Experience with prompt engineering, fine-tuning, or evaluation techniques for LLMs.
- Comfort working with APIs, data pipelines, and cloud environments (AWS, GCP, or similar).
- Deep appreciation for delivering high-quality user experiences, not just high-performing models.
- Excellent communication skills and ability to thrive in a fast-paced, collaborative startup environment.
- Degree in Computer Science or a related technical field.
Nice to Have
- Experience building AI-native applications or tools that use LLMs in production.
- Familiarity with our stack: Go, gRPC, React, TypeScript, Kubernetes, AWS, and Terraform.
- Early-stage startup experience or a track record of zero-to-one product development.
- Experience with retrieval-augmented generation (RAG), vector databases, or orchestration frameworks.
What We Offer
- Impact: Be a key player in shaping the success of our product and company.
- Growth: Build a fundamentally new AI product offering with the support of our experienced team and investors. Grow rapidly with the company.
- Culture: Join a culture that values innovation, ownership, accountability, and fun.
- Compensation: Competitive salary, equity, and benefits.