Lab Environment

Lab Environment Options

This course offers two lab environment options on the Red Hat Demo Platform (RHDP). Choose the environment that best matches your learning objectives and budget constraints.

Option 1: Red Hat OpenShift AI 3

Best for: General course labs, getting started, cost-conscious learning

This is the recommended lab environment for most course activities. It provides a production-ready OpenShift AI 3 environment with GPU acceleration.

Lab Configuration:

  • Single NVIDIA A10 GPU with 24GB memory

  • Pre-deployed Llama model utilizing the GPU

  • Full Red Hat OpenShift AI 3 operator and dashboard

  • Suitable for most GPU operator, model deployment, and observability labs

Important Limitations:

  • Model Deployment: To deploy a new model, you must first stop the running Llama model to free GPU resources.

  • MIG and GPU Slicing Labs: The single NVIDIA A10 GPU in this environment does not support Multi-Instance GPU (MIG) partitioning, and time-slicing exercises are constrained by having only one GPU. For these advanced GPU sharing labs, use Option 2.
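A minimal sketch of what freeing the GPU might look like from the CLI, assuming the Llama model is served as a KServe InferenceService (the `llama` name and `llm-serving` namespace are placeholders; check your environment for the actual resource names):

```shell
# List the InferenceServices deployed in the cluster (the name and
# namespace below are placeholders -- verify them in your lab).
oc get inferenceservice --all-namespaces

# Delete the running Llama InferenceService to release the A10 GPU.
oc delete inferenceservice llama -n llm-serving

# Watch the serving pods terminate; once they are gone, the GPU is
# free for your own model deployment.
oc get pods -n llm-serving -w
```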

Direct Link: Red Hat OpenShift AI 3


Option 2: Introducing llm-d - Production-Ready Scalable LLM Inference

Best for: Advanced GPU sharing labs (MIG, slicing), multi-GPU scenarios, production-like environments

This environment provides a more robust, multi-GPU setup designed for production-scale LLM inference patterns.

Lab Configuration:

  • Multiple NVIDIA GPUs across multiple nodes

  • Designed for GPU partitioning (MIG) and time-slicing demonstrations

  • Production-grade configuration for scalable LLM inference

  • Required for advanced GPU sharing and multi-GPU labs
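As one concrete example of what these labs cover, GPU time-slicing with the NVIDIA GPU Operator is typically configured through a ConfigMap that tells the device plugin to advertise each physical GPU as several schedulable replicas. The sketch below is illustrative only; the ConfigMap name, key, and replica count are assumptions, and the labs will provide the exact values to use:

```shell
# Illustrative sketch: create a time-slicing configuration for the
# NVIDIA GPU Operator device plugin. All names and values here are
# examples, not the lab's actual settings.
oc apply -n nvidia-gpu-operator -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: time-slicing-config
data:
  any: |-
    version: v1
    sharing:
      timeSlicing:
        resources:
          - name: nvidia.com/gpu
            replicas: 4
EOF
```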

Cost Warning:

This lab environment uses multiple expensive GPU resources and incurs significant costs while running.

You MUST shut down this lab when not actively using it. Do not leave this environment running overnight or between lab sessions.


Instructions to Launch Your Lab on RHDP

  1. Log in to the RHDP portal (see links below)

  2. Click on one of the direct lab links above, or search for the lab name in the RHDP catalog

  3. On the catalog page, click the Order button

  4. Fill out the required details in the order form

  5. Review the warning at the bottom of the form and check the box labeled:
    “I confirm that I understand the above warnings.”

  6. Click the Order button to place your lab order

Lab Provisioning Timeline

  • Lab provisioning typically takes 60-90 minutes

  • You will receive an email with access details once your lab environment is ready

  • You can also retrieve lab access directly from the RHDP portal under Services

How to Access Your Running Lab

  1. On the RHDP portal, click on the Services option in the left-hand menu

  2. Select your lab from the listings on the right-hand side of the page

  3. View access details including OpenShift console URL, credentials, and API endpoints
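In addition to the web console, you can log in from a terminal with the same access details. A minimal sketch, where the API URL, username, and password are placeholders to be replaced with the values shown on your lab's Services page:

```shell
# Placeholder values -- substitute the API endpoint and credentials
# from your lab's Services page.
oc login https://api.cluster-guid.example.com:6443 \
  -u admin -p '<password-from-services-page>'

# Confirm the login and the cluster you are pointed at.
oc whoami
oc whoami --show-server
```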


Lab Selection Guide

Use this guide to select the appropriate lab environment for each chapter:

Course Chapter/Lab                      | Option 1 (RHOAI 3)     | Option 2 (llm-d)
----------------------------------------|------------------------|------------------
Chapter 1: GPU Operator Deployment      | ✓ Recommended          | ✓ Works
Chapter 2: MIG and GPU Slicing          | ⚠ Limited (single GPU) | ✓ Recommended
Chapter 3: Observability and Monitoring | ✓ Recommended          | ✓ Works
Multi-GPU Scenarios                     | ✗ Not supported        | ✓ Required

Start with Option 1 (Red Hat OpenShift AI 3) for your initial learning. Only provision Option 2 (llm-d) when you reach labs that specifically require MIG, GPU slicing, or multi-GPU configurations.