Discover how to navigate the hidden dimensions of AI minds. Learn to craft prompts that guide language models through their latent space - the abstract realm where knowledge becomes geometry and ideas are coordinates waiting to be explored.
Large Language Models can seem almost magical in their capabilities - brilliantly insightful one moment, frustratingly obtuse the next. This apparent inconsistency stems from how these models navigate their latent space: a high-dimensional realm where knowledge is encoded not as discrete facts, but as geometric relationships between concepts.
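As a minimal sketch of what “knowledge as geometry” means, the toy snippet below places a few made-up concept vectors in a small space and measures how close they sit using cosine similarity. The vectors and their three dimensions are invented for illustration; real models learn embeddings with hundreds or thousands of dimensions during training.

```python
import numpy as np

# Toy "concept vectors" - invented for illustration. Real models learn much
# higher-dimensional embeddings; only the geometric intuition carries over.
concepts = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.9, 0.2]),
    "apple": np.array([0.1, 0.2, 0.9]),
}

def cosine_similarity(a, b):
    """Angle-based closeness: values near 1.0 mean 'pointing the same way'."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity(concepts["king"], concepts["queen"]))  # high: related concepts
print(cosine_similarity(concepts["king"], concepts["apple"]))  # low: unrelated concepts
```

In this picture, “knowing” that kings and queens are related is nothing more than those two points lying close together in the space.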
In this talk, we’ll demystify this abstract space and provide practical tools for exploring it. You’ll learn why the same model can give wildly different responses depending on how you structure your prompts, and how to craft them effectively. We’ll examine why models sometimes display deep insight into topics not explicitly present in their training data, thanks to their ability to traverse novel paths in latent space.
Using Python code examples and visual analogies, we’ll build an intuitive understanding of latent space navigation. You’ll discover why providing a thoughtfully diverse set of “coordinate references” in your prompts often works better than hyper-focused queries, much like how dropout prevents neural networks from overfitting. We’ll explore practical techniques for prompting that treat the model not as a simple question-answering system, but as a geometric navigator responding to carefully placed waypoints.
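As a hedged illustration of that waypoint idea, the sketch below contrasts a narrow query with a prompt that surrounds the same question with a few diverse reference points. The template and the specific references are invented for this example, not a prescribed recipe from the talk.

```python
# A narrow query gives the model a single point to work from.
narrow_prompt = "Explain overfitting."

# Diverse "coordinate references" act like waypoints: each one pulls the
# response toward a slightly different neighbourhood of latent space.
waypoints = [
    "a student memorising past exam papers instead of learning the subject",
    "a tailor's suit that fits one customer perfectly and nobody else",
    "a weather model that reproduces last year's data but fails on tomorrow's",
]

waypoint_prompt = (
    "Explain overfitting in machine learning.\n"
    "Connect your explanation to these reference points:\n"
    + "\n".join(f"- {w}" for w in waypoints)
    + "\nThen suggest one practical way to detect it."
)

print(waypoint_prompt)
```

Fed to the same model, the two prompts land in very different regions: the narrow one relies entirely on the model’s default framing, while the waypoint version triangulates the answer from several directions at once.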
The talk will provide both theoretical foundations and hands-on techniques for navigating latent space in practice.
Whether you’re building AI applications or just curious about how these models really work, you’ll come away with a deeper understanding of the geometric nature of machine learning and practical tools for more effective model interaction. Join us for this journey into the hidden dimensions of artificial minds.
CTO at Agreenlab and Ormalab, focusing on Python development and AI. My background spans from cultural anthropology to technology, which has shaped my approach to building AI systems, particularly in natural language processing and making machine learning more interpretable. I’ve contributed to research projects in machine learning and computer vision, publishing in international journals while staying committed to practical, real-world applications. Beyond tech, I experiment with wild fermentation and co-organize PyBari, the Python community in Bari.