About Slash
Slash is building the future of business banking, one industry at a time. We believe businesses deserve financial infrastructure tailored to how they actually operate. That's why we're creating a new category of business banking. We combine the reliability of traditional banking (high yields, competitive rewards, and comprehensive security) with industry-specific features that make businesses more efficient, more competitive, and more profitable.
Founded in 2021, Slash is one of the fastest-growing fintechs in the world, powering over ten billion dollars a year in business purchasing across numerous industries. We recently raised a $100M Series C led by Ribbit Capital with participation from Khosla Ventures, Goodwater Capital, NEA, and Y Combinator, accelerating our expansion into new markets and products.
Slash is headquartered in San Francisco and has a strong in-person culture.
About the role
Slash is building an AI-native financial platform, and we're forming Slash AI Labs, a small, high-leverage team responsible for bringing AI deeply into every surface of the product. We've already shipped Twin, our agent platform powered by a custom orchestration runtime, tool-calling over a structured ontology graph, MCP, and multi-surface delivery across web, Slack, and API. You'll be an early member of this team and will help incubate what comes next. If you want to build production AI systems at a fintech that's growing fast and ships constantly, this is the role.
What you’ll do
Ship full-stack AI features end to end, from prompt engineering and agent orchestration to React UI and API integration
Build and extend our agent runtime: orchestration, tool execution, MCP servers, graph-based conversation state, and context compaction
Design and implement multi-surface agent experiences across web, Slack, and programmatic APIs
Build internal AI tooling and platforms: model evaluation frameworks, prompt management, and LLM observability
Work directly with Anthropic and OpenAI APIs, the Vercel AI SDK, and our custom orchestration layer to solve challenging problems in tool-calling, long-context management, and multi-step reasoning
Improve and scale our AI infrastructure: inference routing, model tiering, prompt caching, and streaming architecture on AWS/EKS
We’re looking for someone who
Has shipped full-stack AI products to production: not just prototypes, but systems that serve real users reliably
Is proficient in TypeScript and modern web frameworks (React, Next.js, Node.js), with strong backend fundamentals
Has hands-on experience with LLM APIs (Anthropic, OpenAI) and a strong grasp of tool-calling, structured output, and streaming
Can build and iterate quickly: you're a builder who ships, not someone who over-plans
Has experience with one or more of: agent orchestration, MCP, RAG, structured extraction, or model evaluation
What's in it for you
Opportunity for high growth
High autonomy + ownership culture
Comprehensive health + benefits plan
Working out of our downtown San Francisco office space
Unlimited PTO