Elective
Gen AI + Prompt Engineering
Master large language models and the art of effective prompt design for cutting-edge applications.
Duration
12 weeks, 24 sessions
Audience
Educators, developers, and analysts who need practical, model-agnostic GenAI skills.
Prerequisites
Basic Python recommended; an account with any LLM provider.
Tools
Browser; Google Colab or Jupyter; an LLM provider (Azure OpenAI, OpenAI, or Hugging Face); GitHub
Learning Outcomes
- ✓ Explain at a high level how LLMs work and identify their key limitations
- ✓ Apply prompt patterns (instruction, few-shot, chain-of-thought, style/role, tool-use)
- ✓ Evaluate and iterate prompts with objective rubrics and lightweight metrics
- ✓ Build a RAG pipeline with a simple retriever and guardrails; extend it to multimodal I/O
- ✓ Orchestrate small agent workflows and integrate via REST APIs
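The prompt patterns named above can be illustrated with a minimal few-shot template. The task, examples, and labels here are hypothetical; the assembled string would be sent as the user message to whichever LLM provider the course uses.

```python
# Minimal few-shot prompt template for a sentiment-labeling task.
# Examples and wording are illustrative only; any provider's chat API
# would receive the assembled string as the user message.

EXAMPLES = [
    ("The battery lasts all day.", "positive"),
    ("The screen cracked within a week.", "negative"),
]

def build_few_shot_prompt(query: str) -> str:
    """Assemble an instruction, labeled examples, and the new query."""
    lines = ["Classify the sentiment of each review as positive or negative.", ""]
    for text, label in EXAMPLES:
        lines.append(f"Review: {text}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    lines.append(f"Review: {query}")
    lines.append("Sentiment:")  # leave the final label for the model to complete
    return "\n".join(lines)

print(build_few_shot_prompt("Shipping was fast and the fit is perfect."))
```

Swapping the instruction line and examples is enough to retarget the same template at other patterns covered in Weeks 2–3 (style, structured output).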
Curriculum
12-Week Curriculum
| Week | Session A | Session B | Micro-lab |
|---|---|---|---|
| 1 | GenAI landscape & use-cases | LLM fundamentals: tokens, context, safety | Prompt a public model; note latency & cost |
| 2 | Prompt engineering basics | Advanced prompting (few-shot, style, structure) | Rewrite prompts using patterns; compare outputs |
| 3 | Prompt optimization & evaluation | Chatbot basics with system/user/assistant roles | Build a rubric; A/B test two prompts |
| 4 | Text generation & automation | Code generation & analysis | Write a small script generated by an LLM |
| 5 | Image generation I (concepts, prompts) | Image generation II (parameters, safety) | Create an image prompt book (3 variants) |
| 6 | Practice set: real-world tasks | Prompt engineering deep-dive I (optimization) | Submit an improved prompt + evidence |
| 7 | Prompt engineering deep-dive II (multi-model) | RAG I: retrieval basics, embeddings | Build a tiny RAG over 5–10 docs |
| 8 | RAG II: evaluation & guardrails | Multimodal I: text + image | Add answer citation and refusal policies |
| 9 | Multimodal II: text + image + audio | Bias & ethics; safety and red-teaming | Draft a safety checklist for your app |
| 10 | Data augmentation & RLHF concepts | Model evaluation & feedback loops | Create a synthetic QA set (20 items) |
| 11 | Automation & API integration (agents) | Education & business integrations | Call an LLM via REST; log prompts/outputs |
| 12 | Practice set & troubleshooting | Final synthesis & showcase | Present a 3-minute demo |
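The Week 7 micro-lab ("build a tiny RAG over 5–10 docs") can be sketched without an embedding service by using bag-of-words cosine similarity as the retriever. The documents and prompt wording below are hypothetical; a real build would swap in provider embeddings and an actual LLM call.

```python
# Toy retrieval step for a tiny RAG pipeline: rank documents by
# bag-of-words cosine similarity, then pack the top hit into a prompt.
import math
from collections import Counter

DOCS = [
    "Tokens are the basic units an LLM reads and generates.",
    "Few-shot prompting supplies labeled examples inside the prompt.",
    "Retrieval-augmented generation grounds answers in retrieved text.",
]

def vectorize(text: str) -> Counter:
    """Crude bag-of-words vector: lowercase token counts."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, docs=DOCS) -> str:
    """Return the document most similar to the query."""
    q = vectorize(query)
    return max(docs, key=lambda d: cosine(q, vectorize(d)))

def build_rag_prompt(query: str) -> str:
    """Ground the model in the retrieved context before asking the question."""
    context = retrieve(query)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_rag_prompt("What does retrieval-augmented generation do?"))
```

The Week 8 follow-up (guardrails) would extend `build_rag_prompt` with citation and refusal instructions, e.g. "If the context does not contain the answer, say so."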
Assessment
Attendance & Participation 30%
Micro-labs & Quizzes 40%
Mini-Capstone 30%
Pass: ≥70% overall and ≥80% attendance
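The weighting above implies a simple weighted average; a minimal sketch of the computation, with hypothetical component scores:

```python
# Overall score under the stated weights: 30% attendance & participation,
# 40% micro-labs & quizzes, 30% mini-capstone. The sample scores are
# hypothetical; a pass also requires >=80% attendance, not checked here.
WEIGHTS = {"participation": 0.30, "micro_labs": 0.40, "capstone": 0.30}

def overall(scores: dict) -> float:
    """Weighted average of component scores, each on a 0-100 scale."""
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

sample = {"participation": 85, "micro_labs": 72, "capstone": 68}
score = overall(sample)
print(f"{score:.1f}% -> {'pass' if score >= 70 else 'fail'}")  # 74.7% -> pass
```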
Take the Next Step
Applications are open. Secure your place in the next cohort and start your AI journey.
Apply Now