Job Title: AI Engineer
Location: Hybrid – Austin, TX (3 days in office)
Job Type: Long-term Contract
Job Description
We are seeking an experienced AI Engineer with strong hands-on expertise in AWS SageMaker and Amazon Bedrock to design, develop, deploy, and scale machine learning and generative AI solutions. The ideal candidate will work closely with data scientists, cloud engineers, and business stakeholders to build secure, production-grade AI systems on AWS.
Key Responsibilities
- Design, develop, and deploy machine learning and generative AI models using AWS SageMaker.
- Build and integrate generative AI applications using Amazon Bedrock foundation models (Anthropic Claude, Amazon Titan, Meta Llama, etc.).
- Develop end-to-end ML pipelines including data ingestion, training, tuning, deployment, and monitoring.
- Implement model inference, batch processing, and real-time endpoints using SageMaker.
- Optimize model performance, scalability, and cost on AWS.
- Apply MLOps best practices including CI/CD, model versioning, monitoring, and retraining.
- Integrate AI/ML services with backend systems, APIs, and microservices.
- Ensure security, compliance, and governance using AWS IAM, encryption, and logging.
- Collaborate with cross-functional teams to translate business requirements into AI solutions.
- Stay current with AWS AI/ML service updates and emerging GenAI best practices.
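To illustrate the kind of Bedrock integration work described above, here is a minimal sketch of calling a Claude model through Bedrock's Messages API. The model ID and parameter values are assumptions (check the Bedrock console for the identifiers enabled in your account and region), and the boto3 call itself is shown in comments because it requires AWS credentials:

```python
import json

# Hypothetical model ID -- verify against the models enabled in your region.
CLAUDE_MODEL_ID = "anthropic.claude-3-haiku-20240307-v1:0"

def build_claude_body(prompt: str, max_tokens: int = 512) -> str:
    """Serialize a Bedrock Messages-API request body for a Claude model."""
    return json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    })

# With AWS credentials configured, the request would be sent roughly like this:
#
#   import boto3
#   client = boto3.client("bedrock-runtime", region_name="us-east-1")
#   response = client.invoke_model(modelId=CLAUDE_MODEL_ID,
#                                  body=build_claude_body("Hello"))
#   print(json.loads(response["body"].read())["content"][0]["text"])

body = build_claude_body("Summarize our Q3 support tickets.")
```

The body-building logic is kept as a pure function so it can be unit-tested without touching AWS, which also fits the MLOps/CI practices listed above.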
Required Skills & Qualifications
- Bachelor’s or Master’s degree in Computer Science, AI, Data Science, or related field.
- 5+ years of experience in AI/ML engineering or applied machine learning.
- Strong hands-on experience with AWS SageMaker (training jobs, endpoints, pipelines).
- Proven experience working with Amazon Bedrock for generative AI use cases.
- Proficiency in Python and ML libraries (PyTorch, TensorFlow, scikit-learn).
- Experience with LLMs, prompt engineering, embeddings, and RAG architectures.
- Strong understanding of ML algorithms, deep learning, and NLP.
- Experience with AWS services such as S3, Lambda, API Gateway, EC2, ECR, CloudWatch.
- Knowledge of containerization and orchestration (Docker, Kubernetes preferred).
- Familiarity with MLOps tools and practices.
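As a hedged sketch of the embeddings/RAG experience listed above: the retrieval half of a RAG pipeline reduces to ranking document vectors by similarity to a query vector. The toy 3-dimensional vectors below stand in for real embedding-model output (e.g. from a Bedrock-hosted embedding model over document chunks); the document IDs are invented for illustration:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_k(query_vec, corpus, k=2):
    """Return the k doc IDs whose vectors are most similar to the query."""
    ranked = sorted(corpus, key=lambda item: cosine(query_vec, item[1]),
                    reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]

# Toy "embeddings" -- in a real pipeline these come from an embedding model.
corpus = [
    ("refund-policy", [0.9, 0.1, 0.0]),
    ("shipping-faq", [0.1, 0.8, 0.2]),
    ("api-guide", [0.0, 0.2, 0.9]),
]
query = [0.85, 0.15, 0.05]  # pretend-embedded user question
hits = top_k(query, corpus, k=2)
```

In production the brute-force scan would be replaced by a vector store, but the ranking logic is the same; the retrieved chunks are then stuffed into the LLM prompt as context.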