Talk

Local LLM fine-tuning: a practical example

Thursday, 29 May

16:15 - 16:45
Room: Gnocchi
Language: English
Audience level: Intermediate
Elevator pitch

Local language models are an increasingly discussed topic in the LLM space, but practical guidance on fine-tuning them is hard to find. This talk offers a concrete example of customizing a local LLM and adapting it to specific needs.

Description

Fine-tuning Large Language Models (LLMs) is one of the most discussed topics in AI, but it often causes confusion: “What does fine-tuning an LLM actually mean?”, “How does it differ from traditional fine-tuning?”, and most importantly, “How do you actually fine-tune a local LLM?”

In this talk we will answer these questions by walking through the complete process of fine-tuning an open-source LLM: from dataset creation, through hands-on training, to exporting the model for real use. We will show, step by step, how to do all of this in a familiar environment like Google Colab.
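For a rough idea of what that flow looks like in code, the sketch below goes from a toy dataset to LoRA training and export using the Hugging Face stack (datasets, peft, trl); the model name, the dataset, and the hyperparameters are illustrative assumptions, not necessarily the tooling used in the talk.

# A minimal sketch of the flow described above: dataset creation -> training -> export.
# Assumes the Hugging Face stack (datasets, peft, trl); names and values are illustrative.
from datasets import Dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

# 1) Dataset creation: a tiny instruction-style dataset with a "text" column.
train_data = Dataset.from_list([
    {"text": "### Question: What is fine-tuning?\n### Answer: Adapting a pre-trained model to a specific task."},
    {"text": "### Question: What is a local LLM?\n### Answer: A language model you run on your own hardware."},
])

# 2) Training: LoRA keeps the number of trainable parameters small enough for a Colab GPU.
trainer = SFTTrainer(
    model="Qwen/Qwen2.5-0.5B-Instruct",  # any small open-source causal LM works here
    train_dataset=train_data,
    peft_config=LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM"),
    args=SFTConfig(
        output_dir="finetuned-llm",
        num_train_epochs=1,
        per_device_train_batch_size=1,
    ),
)
trainer.train()

# 3) Export: save the adapter weights so the model can be reloaded locally.
trainer.save_model("finetuned-llm")

From here the adapter can be merged into the base weights or converted to another runtime, depending on where the model will actually run.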

With the help of a Python package optimized for this use case, the talk will be accessible to everyone: both those who are simply curious and want to better understand generative models, and those who want to dig into the technical details to apply them in their own projects.
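As a hedged sketch of that last step, the snippet below reloads the adapter saved in the previous example for local inference with peft and transformers; the directory and model names are again illustrative assumptions.

# Reload the fine-tuned LoRA adapter on top of its base model for local inference.
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model = AutoPeftModelForCausalLM.from_pretrained("finetuned-llm")  # path from the sketch above
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")

prompt = "### Question: What is a local LLM?\n### Answer:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))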

Tags: Machine-Learning, Jupyter/iPython Notebook, Natural Language Processing
Participant

Giunta

I am an AI Engineer specialized in generative artificial intelligence, passionate about football and curious about everything that can enrich my cultural background!