Concentrate provides one OpenAI-compatible API to access, route, and manage models across leading AI providers and open-source models through a single endpoint. We help teams save time, reduce token spend through credits from our bulk purchasing power, improve reliability, and avoid vendor lock-in.
Supported by top-tier VCs. This is a remote role.
You'll work directly with customers to solve LLM infrastructure and deployment problems, while also building the product and platform capabilities that make those solutions scalable. This is a highly hands-on role for someone who is technical, pragmatic, and excited to operate across customer work, engineering, and product at an early-stage AI API company.
What You'll Do
Work closely with customers to understand LLM deployment needs and solve technical problems in production
Debug issues end to end across application behavior, AI API integrations, infrastructure, and model and provider performance across OpenAI, Anthropic, Gemini, and open-source models
Build product features, internal tools, and platform improvements based on patterns you see in the field
Improve multi-provider routing, LLM reliability, AI observability, latency, and token cost efficiency
Help customers reduce AI infrastructure costs, navigate rate limits, and architect for provider failover and redundancy
Partner closely with founders on customer deployments, product direction, and technical strategy
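To make the failover and redundancy work above concrete, here is a minimal sketch of the pattern: try providers in order and fall back when one fails. The provider names and stub functions are illustrative only, standing in for real API calls behind an OpenAI-compatible endpoint.

```python
# Minimal provider-failover sketch. Provider names and stubs are
# hypothetical placeholders, not Concentrate's actual implementation.

def call_with_failover(providers, prompt):
    """Try each (name, call_fn) pair in order; return the first success.

    Raises RuntimeError with all collected errors if every provider fails.
    """
    errors = {}
    for name, call_fn in providers:
        try:
            return name, call_fn(prompt)
        except Exception as exc:  # e.g. rate limit, timeout, provider outage
            errors[name] = exc
    raise RuntimeError(f"All providers failed: {errors}")


# Stub providers standing in for real LLM API calls.
def rate_limited(prompt):
    raise TimeoutError("429: rate limited")


def healthy(prompt):
    return f"echo: {prompt}"


name, result = call_with_failover(
    [("primary", rate_limited), ("fallback", healthy)], "hello"
)
# name == "fallback", result == "echo: hello"
```

In production this logic would also account for per-provider latency, cost, and rate-limit budgets, which is the routing work described in the list above.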
What We're Looking For
Strong technical ability and high ownership
Strong debugging instincts across backend systems, AI APIs, infrastructure, and customer environments
Experience working with or around LLM APIs, model routing, or AI spend management is a strong plus
Comfort working directly with customers and operating in ambiguity
Startup experience or experience in fast-moving, high-ownership environments
Likely 5–12 years of experience, with flexibility for exceptional candidates
Experience with some of: Python, TypeScript/Node.js, PostgreSQL, Redis, AWS, Docker, Kubernetes, Terraform, and CI/CD workflows
Clear written and verbal communication skills
Fluent English required
Bonus
Experience with LLM gateways, AI gateway architecture, or enterprise AI infrastructure
Familiarity with zero data retention, PII redaction, or AI compliance requirements
Experience with LLM cost optimization, token spend analysis, or provider discount structures
Experience in forward deployed, solutions, or customer-facing technical roles
Founder or early startup experience
Interest in growing into broader technical leadership over time
Salary Range: $200K–$300K cash compensation + strong equity
Equal Opportunity & Fair Chance Notice
Concentrate AI is an affirmative action and equal opportunity employer. We are committed to providing equal employment opportunities and do not discriminate in recruiting, hiring, training, promotion, or other employment practices on the basis of race, color, sex, age, religion, national origin, ancestry, protected veteran status, disability, sexual orientation, gender identity or expression, genetic information, or any other status protected by applicable law.
Qualified applicants with arrest and conviction records will be considered in accordance with the San Francisco Fair Chance Ordinance and applicable state and local laws.
California Privacy Notice
California residents may contact privacy@concentrate.ai for additional information regarding how we collect, use, and disclose personal information during the job application process.
Recruitment Agency Notice
Concentrate AI does not accept unsolicited resumes from recruitment agencies and is not responsible for any fees related to unsolicited submissions.