The Practical AI Digest

By Mo Bhuiyan via NotebookLM

Distilling AI/ML theory into practical insights. One concept at a time. No jargon.

Categories: Technology

Listen to the latest episode:

This episode dives into strategies for fine-tuning very large AI models without massive compute. We explain parameter-efficient fine-tuning methods like LoRA (Low-Rank Adaptation), which freezes the original model and trains only small adapter weights, and QLoRA, which goes a step further by quantizing the frozen model parameters to 4-bit precision. You’ll learn why these techniques have become essential for customizing large language models on modest hardware, how they can match the performance of full fine-tuning, and what recent results (like fine-tuning a 65B-parameter model on a single GPU) mean for practitioners.
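The core idea behind LoRA can be shown in a few lines: the pretrained weight matrix stays frozen, and only a low-rank update (two small matrices, B and A) is trained. Below is a minimal NumPy sketch of this mechanism; the class name, dimensions, and initialization values are illustrative assumptions, not code from the episode.

```python
import numpy as np

class LoRALinear:
    """Sketch of a LoRA-adapted linear layer: y = x W^T + scale * x (B A)^T.
    W is frozen; only the small matrices A and B would be trained."""

    def __init__(self, in_dim, out_dim, rank=2, alpha=4, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(size=(out_dim, in_dim))       # frozen pretrained weight
        self.A = rng.normal(size=(rank, in_dim)) * 0.01   # trainable, small random init
        self.B = np.zeros((out_dim, rank))                # trainable, zero init
        self.scale = alpha / rank                         # standard LoRA scaling factor

    def forward(self, x):
        # Frozen path plus low-rank adapter path; B A is the rank-r update to W.
        return x @ self.W.T + self.scale * (x @ self.A.T) @ self.B.T

layer = LoRALinear(in_dim=16, out_dim=8, rank=2)
x = np.ones((1, 16))
# Because B starts at zero, the adapter initially contributes nothing,
# so the layer behaves exactly like the frozen pretrained layer.
assert np.allclose(layer.forward(x), x @ layer.W.T)
```

The parameter savings come from the shapes: the frozen W here has 8 × 16 = 128 entries, while the trainable A and B together have only rank × (16 + 8) = 48. At the scale of real language models, this gap is what lets a single GPU hold the optimizer state for the adapters alone.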

Previous episodes

  • 12 - Efficient Fine-Tuning: Adapting Large Models on a Budget 
    Tue, 03 Feb 2026
  • 11 - Diffusion Models: AI Image Generation Through Noise 
    Tue, 20 Jan 2026
  • 10 - Graph Neural Networks: Learning from Connections, Not Just Data 
    Tue, 30 Sep 2025
  • 9 - Neuro-Symbolic AI: Combining Learning With Logic 
    Tue, 16 Sep 2025
  • 8 - LLMs in Chip Design: How AI Is Entering the Hardware Workflow 
    Tue, 02 Sep 2025
