
GenAI: LLM Foundation and Applications

Course Overview

Advanced pre-trained models such as ChatGPT showcase remarkable abilities. Enterprises are swiftly moving towards developing personalized Large Language Model (LLM) applications to achieve improved efficiency, governance, adaptability, and notably, a competitive edge.

In this immersive and insightful training, we will delve into the exciting realm of generative artificial intelligence and explore the immense potential of Large Language Models. From understanding the foundational concepts to harnessing the power of LLMs for various applications, this workshop is designed to equip you with the knowledge and skills necessary to navigate the cutting-edge landscape of AI-driven content generation. Whether you are a seasoned AI enthusiast or just starting your journey, this training will empower you to leverage LLMs effectively.

The training workshop takes a completely hands-on approach to Generative AI and LLMs.


What will participants learn?

  1. What are the building blocks of language models: embeddings, attention mechanisms, and transformer architecture types?
  2. What are foundation models and large language models (LLMs)? How are pre-trained LLMs built?
  3. What are the different NLP tasks, and how are pre-trained models fine-tuned for specific tasks?
  4. What are prompts and prompt engineering, and what are the best practices for writing effective prompts for LLM applications?
  5. What does the open-source and commercial LLM landscape look like, and what are the advantages and disadvantages of each option?
  6. How to build next-generation NLP applications using agent frameworks, Retrieval-Augmented Generation (RAG), and vector databases, and how can these retrieve information from private data repositories such as documents and external knowledge bases?
  7. How to build solutions using LLM applications such as semantic search, question answering, and conversational bots?
  8. How to fine-tune custom LLMs in-house?
  9. How to deal with emerging challenges of LLMs such as bias, fairness, toxicity, and hallucinations?
  10. How to evaluate and monitor LLM-based applications?


Participants need a good understanding of machine learning and Python programming.

If you are not familiar with Machine Learning, please go through this ML Foundation Course before attending this workshop.

Course Duration:

  • 6 weeks, with a 2.5-hour class on Saturdays; about 16 hours of course content.
  • Every week, participants complete a take-home assignment and quizzes.
  • The recordings of the sessions will be available for viewing.

Detailed Course Outline

Session 1: What are the key building blocks of LLMs?

  • Embeddings and their applications
  • Tokenization
  • Attention mechanism
  • Encoder and Decoder architectures
  • Hands On: word2vec and embeddings exploration
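
As a taste of the Session 1 hands-on work, cosine similarity between embedding vectors is the workhorse of semantic comparison. A minimal sketch with toy 3-dimensional vectors (the numbers are made up for illustration; real word2vec embeddings have 100+ dimensions):

```python
import math

def cosine_similarity(a, b):
    # dot(a, b) / (|a| * |b|) -- 1.0 means identical direction
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy embeddings (illustrative values, not real word2vec output)
king = [0.8, 0.6, 0.1]
queen = [0.7, 0.7, 0.1]
banana = [0.1, 0.2, 0.9]

print(cosine_similarity(king, queen))   # close to 1: related words
print(cosine_similarity(king, banana))  # much lower: unrelated words
```

The session explores the same idea on real pre-trained vectors, where nearest neighbours in embedding space recover semantic relationships.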

Session 2: What are the different LM Architectures and NLP Tasks?

  • Pre-trained Models: GPT and BERT models
  • Hugging Face library and model hubs
  • NLP Tasks: Classification, NER and Semantic Similarity
  • Hands on: Using models from the Hugging Face model hub

Session 3: How are Foundation Models built?

  • Open Source and Proprietary LLM Models and leaderboards
  • Stages of Building Foundation Models
  • Pre-training, Instruction Tuning and Model Alignment
  • Llama2 Case Study: How Llama2 model was built

Session 4: What is Prompt Engineering and best practices?

  • Zero Shot, One Shot and Few Shot Prompting
  • Advanced Prompting: Chain of Thought (CoT), Tree of Thoughts (ToT)
  • Best Practices and patterns of writing prompts
  • Hands on: Prompt Engineering using OpenAI and LangChain APIs
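
The zero-, one-, and few-shot patterns from this session come down to how many worked examples you pack into the prompt. A framework-agnostic sketch that assembles a few-shot sentiment-classification prompt as a plain string (LangChain's prompt templates automate the same idea; the review texts here are invented for illustration):

```python
def build_few_shot_prompt(instruction, examples, query):
    """Assemble an instruction, labelled examples, and the new input
    into a single few-shot prompt string."""
    lines = [instruction, ""]
    for text, label in examples:
        lines.append(f"Review: {text}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    lines.append(f"Review: {query}")
    lines.append("Sentiment:")  # the model completes this line
    return "\n".join(lines)

examples = [
    ("The movie was a delight from start to finish.", "positive"),
    ("I walked out halfway through.", "negative"),
]
prompt = build_few_shot_prompt(
    "Classify the sentiment of each review as positive or negative.",
    examples,
    "A tedious, overlong mess.",
)
print(prompt)
```

With an empty `examples` list the same function produces a zero-shot prompt, which is exactly the progression the session walks through.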

Session 5 & 6: How to build LLM Based Applications?

  • Building LLM Agents using the LangChain framework
  • RAG: Retrieval-Augmented Generation using the LlamaIndex framework
  • Information Extraction
  • Hands on: Stock Performance Analysis using agents and tools
  • Hands On: QA over Private Documents
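
The retrieval half of RAG can be sketched without any framework: embed the documents and the question, fetch the closest chunk, and prepend it to the prompt. In this toy sketch a word-count vector stands in for a real embedding model, and the document texts are invented; LlamaIndex plus a vector database replace both pieces in practice:

```python
import math
from collections import Counter

def embed(text):
    # Toy "embedding": a word-count vector (real RAG uses a neural encoder)
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    return dot / (math.sqrt(sum(v * v for v in a.values())) *
                  math.sqrt(sum(v * v for v in b.values())) or 1.0)

# A tiny "private document repository"
docs = [
    "The refund policy allows returns within 30 days of purchase.",
    "Our office is open Monday to Friday, nine to five.",
    "Shipping is free for orders above fifty dollars.",
]

def retrieve(question, k=1):
    # Rank documents by similarity to the question and return the top k
    q = embed(question)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

context = retrieve("What is the refund policy?")[0]
prompt = f"Answer using only this context:\n{context}\n\nQuestion: What is the refund policy?"
print(prompt)
```

The LLM then answers from the retrieved context rather than from its parametric memory, which is what lets RAG work over private documents.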

Session 7: How to evaluate and monitor performance of LLMs?

  • LLM Evaluation Metrics and Frameworks
  • Understanding Prompt Injection, Jailbreaking, Hallucination
  • Human Alignment and Rewards Models
  • Hands On: LLM Evaluation
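
One baseline among the evaluation metrics this session covers is token-level F1 between a model answer and a reference answer, as popularised by QA benchmarks such as SQuAD. A minimal sketch using whitespace tokenization (production evaluation frameworks add text normalization and aggregation over a dataset):

```python
from collections import Counter

def token_f1(prediction, reference):
    """Token-overlap F1 between a model answer and a reference answer."""
    pred = prediction.lower().split()
    ref = reference.lower().split()
    common = Counter(pred) & Counter(ref)   # multiset intersection
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

print(token_f1("the capital of france is paris", "paris"))  # partial credit
print(token_f1("paris", "paris"))                           # exact match -> 1.0
```

Partial credit is the point: a verbose but correct answer scores above zero, unlike strict exact match.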

Session 8: How to build and finetune LLMs?

  • Understanding Quantization and LoRA Techniques
  • Full Finetuning vs Parameter efficient fine tuning
  • Fine-tuning a pre-trained model using a custom dataset
  • Hands On: Custom Fine Tuning
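
The LoRA technique from this session freezes the pre-trained weight matrix W and learns only a low-rank update BA, so a small fraction of the parameters train. A pure-Python sketch of the forward pass y = Wx + (alpha/r)·B(Ax), with made-up toy matrices (libraries such as PEFT apply this to every attention layer at scale):

```python
def matvec(M, x):
    # Multiply matrix M (list of rows) by vector x
    return [sum(m * v for m, v in zip(row, x)) for row in M]

def lora_forward(W, A, B, x, alpha=1.0, r=1):
    # Frozen base output Wx plus low-rank update B(Ax), scaled by alpha/r
    base = matvec(W, x)
    update = matvec(B, matvec(A, x))
    return [b + (alpha / r) * u for b, u in zip(base, update)]

# Toy shapes: W is 2x2 (frozen), A is 1x2, B is 2x1 -> a rank-1 update (r=1)
W = [[1.0, 0.0],
     [0.0, 1.0]]
A = [[0.5, 0.5]]        # projects the input down to rank r = 1
B = [[1.0], [-1.0]]     # projects back up to the output dimension
x = [2.0, 4.0]

print(lora_forward(W, A, B, x))
```

For a d×d weight matrix, LoRA trains 2·r·d parameters instead of d·d, which is why fine-tuning large models becomes feasible on modest hardware.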

Fees and Registration

The course fee is INR 18,000 + GST (18%) (eighteen thousand rupees plus applicable GST).

The fee does not include the cost of access to OpenAI or any other cloud-related costs. These costs are borne by the participants and are typically less than INR 1,000.

Interested participants can register their interest using the form below. The payment details will be emailed to you.

Registration Form

Please fill in this form to register for the course.

Please contact me for any queries.