As a proud sponsor and attendee of Uphill Conf 2025, I was excited to experience the event, and it did not disappoint. Focused on cutting-edge AI topics, the conference revealed just how rapidly AI innovation is progressing: from agentic large language models (LLMs) to small LLMs, vision-language models (VLMs), and private LLMs. The lineup featured some of the most renowned names in AI education, including Joshua Starmer and Luis Serrano.

Agentic LLMs: From Passive Models to Autonomous Problem Solvers
Nicole Königstein opened the morning with a talk on agentic LLMs: moving beyond simple prompting to orchestrating multiple AI agents that collaborate on tasks such as data analysis and debugging. The takeaway was clear: the future of AI isn’t just bigger models, but smarter ecosystems — LLMs that reason, plan, and coordinate tasks independently.
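The orchestration idea can be sketched, in spirit, as a tiny Python loop. Everything here (the planner, the tool registry, the step names) is hypothetical and hard-coded for illustration; a real agentic system would route the planning and each step through an LLM:

```python
# Toy sketch of an agent loop: a planner decomposes a goal into steps,
# and a dispatcher sends each step to a registered "tool".
# All names and steps are illustrative, not from any real framework.

def planner(goal):
    # A real planner would ask an LLM to decompose the goal;
    # here we hard-code a plan for a data-analysis task.
    return [("load", "sales.csv"), ("analyze", "mean"), ("report", None)]

TOOLS = {
    "load": lambda arg: f"loaded {arg}",
    "analyze": lambda arg: f"computed {arg}",
    "report": lambda arg: "report written",
}

def run_agent(goal):
    log = []
    for step, arg in planner(goal):
        log.append(TOOLS[step](arg))  # dispatch each step to its tool
    return log

log = run_agent("summarize monthly sales")
```

The point of the pattern is the separation: planning, tool dispatch, and execution are distinct stages, which is what lets multiple agents coordinate on a shared task.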

SmolLMs: Small is the New Big
Next up, Loubna Ben Allal introduced us to the world of SmolLMs from Hugging Face — small, efficient language models that punch far above their weight. Forget the narrative that bigger is always better. The new generation of smaller models can deliver comparable performance to larger models while being cheaper, faster, and greener to run — a critical advantage for businesses seeking powerful AI without the associated infrastructure costs or privacy concerns.

Private LLMs: AI Behind Closed Doors
One of the most relevant topics of the day for enterprises was private large language models (LLMs) — models that run entirely within an organization’s walls rather than through public APIs. Multiple talks covered strategies for fine-tuning open-source models such as Mistral, Phi-2, Llama, and SmolLM, giving companies complete control over their data and AI workflows without exposing sensitive information externally. The trend is clear: private, tailored AI is no longer a luxury; it’s becoming the norm. Sandra Kublik gave an excellent talk on exactly this topic.

Teaching Machines to See: VLMs (Vision-Language Models)
Andrés Marafioti delivered a fascinating talk on Vision-Language Models (VLMs), which integrate images and text into a unified AI system. From generating rich captions to answering questions about images, VLMs are a massive leap toward making AI truly multimodal. Particularly striking was the emphasis on efficient VLMs that could soon run on even modest hardware, democratizing access to powerful visual AI tools.

YouTube Superstars Light Up the Stage
The excitement was palpable when two YouTube legends took the stage:
Joshua Starmer, of StatQuest, showed how reinforcement learning is already making an impact in business environments, teaching systems to optimize for success through feedback and iteration. His signature blend of clarity and humor made even complex RL topics feel approachable.
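That feedback-and-iteration loop can be illustrated with a minimal epsilon-greedy bandit — a toy sketch, not anything from the talk itself; the arms, rewards, and parameters below are all made up:

```python
import random

def epsilon_greedy_bandit(arm_rewards, steps=200, epsilon=0.1, seed=0):
    """Learn which arm pays best purely from reward feedback.
    arm_rewards: a deterministic reward per arm (a toy stand-in
    for a noisy business metric such as click-through rate)."""
    rng = random.Random(seed)
    n = len(arm_rewards)
    counts = [0] * n
    estimates = [0.0] * n
    for t in range(steps):
        if t < n:
            arm = t % n                 # try every arm once first
        elif rng.random() < epsilon:
            arm = rng.randrange(n)      # explore a random arm
        else:
            arm = max(range(n), key=estimates.__getitem__)  # exploit
        reward = arm_rewards[arm]
        counts[arm] += 1
        # incremental mean: nudge the estimate toward the observed reward
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
    return estimates

est = epsilon_greedy_bandit([0.2, 0.5, 0.9])
```

After a few hundred pulls, the value estimates converge and the system has “learned” the best option purely from trial and feedback — the core loop behind reinforcement learning in business settings.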

Luis Serrano, of Serrano Academy, delved deeply into the mechanics of attention in neural networks. He peeled back the layers of AI’s “secret sauce,” showing how attention mechanisms power breakthroughs in language, vision, and more. True to his YouTube style, Luis made the session lively, intuitive, and highly memorable.
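At its core, the mechanism Luis unpacked is scaled dot-product attention: each query scores every key, a softmax turns the scores into weights, and the output is a weighted mix of the values. A minimal NumPy sketch (toy shapes and random inputs, not any model’s actual weights):

```python
import numpy as np

def softmax(x, axis=-1):
    # subtract the max for numerical stability before exponentiating
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # how much each query matches each key
    weights = softmax(scores, axis=-1)   # rows are probability distributions
    return weights @ V, weights          # weighted mix of the values

rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))  # 3 queries of dimension 4
K = rng.normal(size=(5, 4))  # 5 keys
V = rng.normal(size=(5, 4))  # 5 values
out, w = scaled_dot_product_attention(Q, K, V)
```

Each row of the weight matrix sums to 1, which is exactly what lets the model decide, per token, where to “pay attention.”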

Having these well-known educators in person created an electric atmosphere, especially for the many fans who first learned about machine learning through their videos.

Final Thoughts
Uphill Conf 2025 was a strong signal that AI development is entering a more practical, efficient, and secure phase. Agentic LLMs, SmolLMs, private models, and VLMs are not futuristic dreams — they’re tools being built and deployed right now.

As a sponsor, in collaboration with our cloud infrastructure partner Exoscale, we were gratified to see the enthusiasm, critical thinking, and creativity on display — precisely the qualities that will drive the next wave of innovation. It was also a pleasure to present the Lego Batmobile to Maria Letizia Jannibelli.

Uphill Conf once again proved why it’s a must-attend event for anyone serious about staying ahead in the rapidly evolving AI landscape.
See you at the top — literally!
