This role is hybrid and requires you to be at our client's office a minimum of 4 days per week, subject to change at any time.
We're looking for an AI/ML Engineer with hands‑on experience building and deploying production‑grade models, including Generative AI solutions. You will design, train, evaluate, and operationalize models on modern cloud data platforms, implement robust MLOps/LLMOps practices, and collaborate with data, platform, and product teams to drive end‑to‑end delivery.
Key Responsibilities
• Model Development: Design, train, fine‑tune, and evaluate ML and GenAI models (supervised/unsupervised, NLP, CV, and LLM‑based use cases).
• Model Deployment: Package and deploy models to production using containers and CI/CD; implement scalable serving with REST/gRPC, batch, and streaming pipelines.
• MLOps/LLMOps: Establish automated training, evaluation, model registry and feature store integration, monitoring (data drift, model drift, latency, cost), and safe rollback.
• GenAI Engineering: Build prompts, retrieval pipelines (RAG), and model adapters/LoRA; evaluate with quantitative metrics and human‑in‑the‑loop reviews.
• Data Engineering Collaboration: Ingest, transform, and validate datasets; partner with data engineering on schema design, data contracts, and lineage.
• Cloud & Platforms: Operate on AWS and/or Snowflake for storage, compute, orchestration, and governance; optimize cost and performance.
• Observability & Reliability: Instrument models and data pipelines with logging, tracing, metrics, and alerting; ensure SLAs/SLOs for availability and latency.
• Documentation & Compliance: Produce clear design docs, model cards, and runbooks; adhere to security, privacy, and responsible AI guidelines.
Required Qualifications (Must Have)
• AI/ML: Proficiency in Python and common ML stacks; strong with TensorFlow and/or PyTorch for training, fine‑tuning, and inference.
• Generative AI: Experience with LLMs or diffusion models; prompt engineering, RAG, evaluation frameworks, and safety/guardrail techniques.
• MLOps/LLMOps: Hands‑on with CI/CD for ML (e.g., GitHub Actions, GitLab CI), model packaging (Docker), model registries, feature stores, and monitoring.
• Cloud Data Platforms: Practical experience with AWS (e.g., S3, ECR, ECS/EKS, SageMaker, Lambda, Step Functions) and Snowflake (Snowpark, warehouses, governance).
• Data Pipelines: Building and operating ETL/ELT; familiarity with orchestration (Airflow/Prefect), schema management, and data quality checks.
• Collaboration: Ability to work with cross‑functional teams (Data Engineering, Platform, Product, Security) and communicate trade‑offs clearly.
Nice to Have (Preferred)
• Streaming: Kafka for real‑time features, streaming inference, and event‑driven retraining.
• Vector/RAG: Experience with vector databases (e.g., FAISS, Milvus, pgvector) and chunking/indexing strategies.
• Infrastructure as Code: Terraform/CloudFormation; Kubernetes (EKS) for model serving and autoscaling.
• Experiment Tracking: MLflow/Weights & Biases/Comet for experiment lineage and governance.
• Testing: Unit/integration tests for data and model pipelines; regression/benchmark suites for models.
• Security & Compliance: Secrets management, IAM, PII handling, and responsible AI practices.
• Snowflake Advanced: Snowpark ML, external functions, and UDFs for in‑database ML.
Tools & Technologies (Representative)
• Languages/Frameworks: Python, TensorFlow, PyTorch, scikit‑learn, Transformers
• MLOps/LLMOps: MLflow, Weights & Biases, Docker, Kubernetes, model registries, feature stores
• Cloud/Data: AWS (S3, ECR/ECS/EKS, SageMaker), Snowflake (Snowpark), SQL
• Pipelines & Orchestration: Airflow/Prefect, dbt (optional), REST/gRPC endpoints
• Streaming (Optional): Kafka, Kafka Connect, KSQL
• Observability: Prometheus/Grafana, OpenTelemetry, CloudWatch
What You'll Deliver (Outcomes)
• Productionized ML/GenAI services with defined SLAs/SLOs
• Automated training and deployment pipelines with traceable experiment lineage
• Reliable data and model monitoring (quality, drift, performance, cost)
• Clear documentation (architecture, model cards, runbooks) and knowledge transfer
"CGI is providing a reasonable estimate of the pay range for this role. The determination of this range includes factors such as skill set level, geographic market, experience and training, and licenses and certifications. Compensation decisions depend on the facts and circumstances of each case. A reasonable estimate of the current range is $80,000–$130,000. This role is an existing vacancy."
Together, as owners, let’s turn meaningful insights into action.
Life at CGI is rooted in ownership, teamwork, respect and belonging. Here, you’ll reach your full potential because…
You are invited to be an owner from day 1 as we work together to bring our Dream to life. That’s why we call ourselves CGI Partners rather than employees. We benefit from our collective success and actively shape our company’s strategy and direction.
Your work creates value. You’ll develop innovative solutions and build relationships with teammates and clients while accessing global capabilities to scale your ideas, embrace new opportunities, and benefit from expansive industry and technology expertise.
You’ll shape your career by joining a company built to grow and last. You’ll be supported by leaders who care about your health and well-being and provide you with opportunities to deepen your skills and broaden your horizons.
At CGI, we value the strength that diversity brings and are committed to fostering a workplace where everyone belongs. We collaborate with our clients to build more inclusive communities and empower all CGI partners to thrive. As an equal-opportunity employer, being able to perform your best during the recruitment process is important to us. If you require an accommodation, please inform your recruiter.
To learn more about accessibility at CGI, contact us via email. Please note that this email is strictly for accessibility requests and cannot be used for application status inquiries.
Come join our team—one of the largest IT and business consulting services firms in the world.