Overview
This course introduces learners to the fundamentals of working with Granite family large language models (LLMs) using Red Hat Enterprise Linux AI. It focuses on the secure, enterprise-grade development, fine-tuning, deployment, and serving of generative AI models tailored to real-world business needs.
Objectives
At the end of this Red Hat Enterprise Linux AI training course, participants will be able to:
Prerequisites
- Basic understanding of AI and ML concepts (recommended)
- Familiarity with the Linux command line
- Access to a RHEL AI machine (for BYOD participants; GPU not required for core labs)
Course Outline
- Define generative AI: capabilities, benefits, and challenges
- Select appropriate models and techniques based on use cases
- Understand Granite models:
  - Compare them with larger, closed-source models
  - Assess their enterprise applicability
  - Enable collaboration between technical and non-technical stakeholders
- Train and fine-tune LLMs using Red Hat Enterprise Linux AI and IBM Granite
- Deploy and serve AI models
- Operate and manage models in production environments