CoreWeave, the AI Hyperscaler™, acquired Weights & Biases to create the most powerful end-to-end platform to develop, deploy, and iterate AI faster. Since 2017, CoreWeave has operated a growing footprint of data centers covering every region of the US and across Europe, and was ranked as one of the TIME100 most influential companies of 2024. By bringing together CoreWeave’s industry-leading cloud infrastructure with the best-in-class tools AI practitioners know and love from Weights & Biases, we’re setting a new standard for how AI is built, trained, and scaled.
The integration of our teams and technologies is accelerating our shared mission: to empower developers with the tools and infrastructure they need to push the boundaries of what AI can do. From experiment tracking and model optimization to high-performance training clusters, agent building, and inference at scale, we’re combining forces to serve the full AI lifecycle — all in one seamless platform.
Weights & Biases has long been trusted by over 1,500 organizations — including AstraZeneca, Canva, Cohere, OpenAI, Meta, Snowflake, Square, Toyota, and Wayve — to build better models, AI agents, and applications. Now, as part of CoreWeave, that impact is amplified across a broader ecosystem of AI innovators, researchers, and enterprises.
As we unite under one vision, we’re looking for bold thinkers and agile builders who are excited to shape the future of AI alongside us. If you're passionate about solving complex problems at the intersection of software, hardware, and AI, there's never been a more exciting time to join our team.
What You'll Do
The AI team is a hands-on applied AI group at Weights & Biases that turns frontier research into teachable workflows. We collaborate with leading enterprises and the OSS community. We are the team that took W&B from a few hundred users to millions, making it one of the most beloved tools in the ML community.
This is a senior applied role at the research-to-production boundary. You will prototype, evaluate, and ship reusable DL/RL workflows for enterprise use on the W&B stack—then document and teach them to our customers and the community. The focus is application, not novel research: rapid prototyping, careful evaluation, and production-grade reference implementations with clear trade-offs.
About the role
- Track the state of the art in deep learning and AI, and turn research into practical workflows that our users, the open-source community, and enterprise customers alike can adopt
- Build in public: Publish engineering artifacts (code, reports, talks) that teach how to reproduce results; engage with OSS and customer engineers
- Design and ship reference workflows for post-training & agents (SFT/DPO/GRPO/PPO, reward models, online RLHF/RLAIF) with reproducible repos, W&B Reports, and dashboards others can run.
- Own end-to-end demos: data → distributed training (FSDP/ZeRO/DeepSpeed/JAX pjit) → evaluation (lm-eval-harness + agent benches) → serving (vLLM/TensorRT-LLM/Triton/SGLang)
- Partner with lighthouse customers; turn recurring patterns into templates and product feedback
- Track recent advances (papers, releases, kernels), run focused ablations, and translate wins into production-ready workflows
- Run growth experiments that measure how the artifacts you build drive adoption of the Weights & Biases suite of products
Who You Are:
- Deep learning: 5+ years training large models in PyTorch or JAX; strong numerics (autograd, initialization, mixed precision)
- RL/RLHF: hands-on with SFT/DPO/GRPO/PPO, reward modeling, preference data pipelines, and online/offline RL for LLMs/agents
- Inference/serving: production experience with vLLM/TensorRT-LLM/Triton; quantization, speculative decoding, caching
- Evaluation: built task/agent harnesses with statistically sound metrics (variance, CIs, power) and failure taxonomies
- Systems: strong Python plus one: CUDA/Triton kernels, custom C++ ops, or high-performance data ingestion
- Reproducibility: rigorous experiment tracking (sweeps, artifacts, lineage); minimal repros others can run
- Public signal: 2+ OSS repos/notebooks/talks with adoption (e.g., stars, forks, downloads, conference views)
Preferred
- Paper-to-production within weeks at a top lab or applied-AI team (pretrain → post-train → eval → serve)
- Data engines & feedback loops (rater pipelines, synthetic data, active learning)
- Prior customer enablement with external adoption at scale