About the Role:
Would you like to help build the fastest generative-model inference in the world? Join the Cerebras Inference Team to develop a unique combination of software and hardware that delivers the best inference performance on the market while running the largest models available.
The Cerebras wafer-scale inference platform runs generative models at unprecedented speed thanks to a hardware architecture that provides the fastest access to local memory, an ultra-fast interconnect, and an enormous amount of available compute.
You will be part of the team that works with the latest open and closed generative AI models, optimizing them for the Cerebras inference platform. Your responsibilities will include working on the model representation, optimization, and compilation stack to produce the best results on Cerebras's current and future platforms.
Job responsibilities:
- Analysis of new generative AI models and their impact on the compilation stack
- Implementation of compiler and frontend features to support new models, improve inference performance, and enhance the Cerebras user experience
- Collaboration with other teams throughout feature implementation
- Research into new model optimization methods that improve Cerebras inference
Requirements:
- Degree in Engineering or Computer Science, or equivalent experience and evidence of exceptional ability
- Strong experience with Python and C++
- Experience with PyTorch and the HuggingFace Transformers library
- Knowledge of and experience with Large Language Models (Transformer architecture variations, the generation cycle, etc.)
- Knowledge of MLIR-based compilation stacks is a plus