Red Hat AI on AWS Cloud
Introduction
This course describes how to deploy the two components of Red Hat AI, Red Hat OpenShift AI and Red Hat Enterprise Linux AI (RHEL AI), on Amazon Web Services (AWS). It prioritizes Red Hat managed offerings but includes self-managed configurations when no managed service is available.
In addition, this course introduces various AWS AI services, including Amazon SageMaker, Amazon Bedrock, Amazon Q, and Amazon Elastic Compute Cloud (EC2) instances, to help learners understand how these services complement and compete with Red Hat AI.
By the end of this course, learners will have a solid foundation in deploying and integrating Red Hat AI with various AWS AI Services. They will understand how to choose the right deployment model and services based on their specific use case and requirements.
Objectives
- Describe Red Hat AI, which includes Red Hat OpenShift AI and RHEL AI, and explain how it works.
- Configure AWS to host Red Hat AI workloads.
- Describe the benefits of Red Hat OpenShift AI Cloud Service.
- Explain the design decisions associated with a self-managed deployment of RHEL AI on AWS.
- Describe, at a high level, AWS AI services, including Amazon SageMaker, Amazon Bedrock, Amazon Q, and EC2 instances.
- Evaluate the differences between Red Hat AI and AWS AI offerings.
- Demonstrate practical skills in deploying and managing Red Hat AI integration with AWS AI services by completing hands-on labs (to be added).
Classroom Environment
We recommend starting the lab now so that it is available when you reach the AWS segment of the course.
Launch an AWS Blank Open Environment to follow along with this guided lab.