Select your calendar and accept the calendar subscription prompt. Once accepted, all conference sessions will automatically be added to your calendar or mobile device.
To add a specific session to your calendar, simply click on the scheduled time in the agenda.
See you at the conference!
Upcoming events
Speaker: Jim Dowling
Jim Dowling will kick off the event.
Speaker: Jim Dowling
Hopsworks supports building batch, real-time, and agentic AI systems using a unified architecture around feature pipelines, training pipelines, and inference pipelines. In this talk, we walk through the journey of developing batch and real-time ML systems to agentic AI systems with this unified FTI pipeline architecture. We will look in particular at how we connect application state to agents using "application context protocol". That is, we will see how entity IDs in applications can be used to help agents reliably retrieve application state as context.
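The idea of resolving entity IDs into application state can be sketched in plain Python. The stores, entity names, and `fetch_context` helper below are hypothetical illustrations, not Hopsworks APIs; they only show how an ID like `user:u42` might be mapped to context an agent can retrieve reliably.

```python
from typing import Any, Callable

# Hypothetical in-memory stores standing in for real application state.
USERS = {"u42": {"name": "Alice", "plan": "pro", "open_tickets": 2}}
ORDERS = {"o7": {"user_id": "u42", "status": "shipped"}}

# Registry mapping an entity type to a function that fetches its state.
FETCHERS: dict[str, Callable[[str], Any]] = {
    "user": USERS.get,
    "order": ORDERS.get,
}

def fetch_context(entity_id: str) -> Any:
    """Resolve an ID like 'user:u42' into application state for an agent."""
    entity_type, _, key = entity_id.partition(":")
    fetcher = FETCHERS.get(entity_type)
    if fetcher is None:
        raise KeyError(f"unknown entity type: {entity_type!r}")
    return fetcher(key)
```

Because the ID carries both the entity type and the key, the agent never has to guess where state lives; retrieval is a deterministic lookup rather than a free-form search.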
Lyft’s Feature Store: Architecture, Optimization, and Evolution
Tuesday 14 October ⋅ 16:15 – 16:40 (UTC)
Speaker: Rohan Varshney
Lyft's Feature Store, a core infrastructure component of its Data Platform, optimizes the management and deployment of ML features at scale. It centralizes feature engineering and ensures uniformity across models and workflows by streamlining feature creation and storage for both offline and online model training and inference, facilitating low-latency access and high-throughput processing. This presentation covers its architecture, practical uses, performance, developer experience, optimization efforts, and evolution over the last 5+ years. We hope to demonstrate its role in empowering Lyft engineers to develop service components and models more effectively, including for future AI/LLM applications.
Powering Real-Time AI at Pinterest: Feature Management at Scale with Galaxy Signal Platform
Tuesday 14 October ⋅ 16:40 – 17:05 (UTC)
Speaker: Andrey Gubenko
How do you deliver fresh, consistent, and highly reliable features to AI models—across billions of entities and hundreds of millions of users—in real-time? Pinterest’s Galaxy Platform answers this at scale. In this talk, we’ll share the architecture powering Galaxy, our learnings from evolving it into a real-time online feature store, and lessons from serving features at sub-second latency to models powering Home Feed, Search, and Ads.
Speakers: Divya Nagar and Xiyuan Feng
With the rise of Generative AI and large-scale semantic search, embeddings have become a foundational building block for enabling high-value ML and AI use cases across Uber. From personalized recommendations on Uber Eats to conversational assistants, embeddings now power Retrieval-Augmented Generation (RAG), our GenAI platform, semantic search, and predictive models at global scale. Over the past year, the Michelangelo team has elevated embeddings to first-class citizens within Uber's ML ecosystem—building a unified platform to simplify their generation, ingestion, versioning, and use across diverse applications. Today, this platform powers GenAI use cases and semantic search within the Uber App as well as internal systems. This talk introduces Vector Store, Uber’s scalable platform for managing the full lifecycle of embeddings—including offline/streaming generation, batch/real-time ingestion, standardized retrieval APIs, and automated model switching—all backed by centralized metadata and governance.
Speaker: Gokulram Krishnan
- Growth of Predictive Analysis
- Impact on the Financial Sector, the customer experience, and the value in improving business revenue
- Role of Artificial Intelligence and Machine Learning in the Financial Industry
- Benefits from next-generation technology
- Importance of Machine Learning Algorithms and Exploratory Data Analysis
- Defining propensity and likeness aspects in the Financial Industry
Speaker: Krishna Chaitanya Chakka
This proposal outlines a presentation on Roku's approach to achieving real-time feature serving at scale and dramatically improving the development velocity for our ML models. We will cover the entire ML development lifecycle, from initial training to production deployment. A key focus will be on how we leveraged open-source projects, such as Chronon, to facilitate incremental aggregations for efficient feature computation.
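As a toy illustration of incremental aggregation (which Chronon-style systems implement far more generally, with windowing and persistence), a per-key aggregate can be updated in O(1) per event instead of being recomputed over the full history. Everything below is a hypothetical sketch, not Roku's or Chronon's API.

```python
from dataclasses import dataclass

@dataclass
class RunningAgg:
    """Incrementally maintained aggregate: O(1) update per event."""
    count: int = 0
    total: float = 0.0

    def update(self, value: float) -> None:
        self.count += 1
        self.total += value

    @property
    def mean(self) -> float:
        return self.total / self.count if self.count else 0.0

# Feature table keyed by user; each event updates aggregates in place
# instead of re-scanning all historical events.
features: dict[str, RunningAgg] = {}
for user, amount in [("u1", 10.0), ("u1", 30.0), ("u2", 5.0)]:
    features.setdefault(user, RunningAgg()).update(amount)
```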
Bridging Real-Time and Batch: Declarative Feature Engineering with Apache Hamilton + Narwhals
Tuesday 14 October ⋅ 18:55 – 19:20 (UTC)
Speaker: Ryan Whitten
Generating accurate training data for real-time features is notoriously difficult, often requiring duplicate logic and introducing the risk of train-serve skew. With Apache Hamilton and Narwhals, teams can define feature transformations once and execute them seamlessly across both real-time and batch environments. This unified approach supports low-latency inference, scalable backfills, and consistent feature definitions—streamlining development and enhancing reliability in high-throughput, real-time AI systems.
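The "define once, run in both batch and real-time" pattern can be shown without the libraries themselves. The feature names and function below are invented for illustration; Hamilton and Narwhals deliver the same guarantee with their own dataframe-agnostic APIs.

```python
from typing import Mapping

def trip_features(event: Mapping) -> dict:
    """Feature transformation defined once, engine-agnostic (hypothetical)."""
    return {
        "fare_per_km": event["fare"] / max(event["distance_km"], 0.001),
        "is_night": event["hour"] >= 22 or event["hour"] < 6,
    }

# Batch path: backfill training data over historical rows.
history = [
    {"fare": 12.0, "distance_km": 4.0, "hour": 23},
    {"fare": 8.0, "distance_km": 2.0, "hour": 14},
]
training_rows = [trip_features(row) for row in history]

# Real-time path: the exact same function serves a single live event,
# so training and serving cannot drift apart.
live = trip_features({"fare": 20.0, "distance_km": 10.0, "hour": 2})
```

Because one function body produces both the backfilled training rows and the online features, train-serve skew from duplicated logic is eliminated by construction.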
How Coinbase Builds Sequence Features for Machine Learning
Tuesday 14 October ⋅ 19:30 – 19:50 (UTC)
Speaker: Joseph McAllister
Coinbase uses ML to power products ranging from fraud detection to personalized recommendations, and sequence features have become critical to improving model performance across these domains. In this talk, we’ll share how Coinbase built a framework to productionize user action sequences at scale, enabling models to learn directly from rich behavioral histories.
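A minimal sketch of what a sequence feature is, assuming a bounded last-N action history per user (the names below are hypothetical; the framework in the talk handles this at production scale, with persistence and backfill):

```python
from collections import defaultdict, deque

N = 3  # keep only the most recent N actions per user

# Sequence feature store: a bounded, ordered action history per user.
sequences: dict[str, deque] = defaultdict(lambda: deque(maxlen=N))

def record(user: str, action: str) -> None:
    sequences[user].append(action)

for user, action in [
    ("u1", "view"), ("u1", "click"), ("u1", "buy"), ("u1", "view"),
]:
    record(user, action)

# The model consumes the sequence as an ordered feature; the oldest
# action is dropped automatically once maxlen is reached.
recent_actions = list(sequences["u1"])
```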
Real-time ML: Accelerating Python for inference (< 10ms) at scale
Tuesday 14 October ⋅ 19:50 – 20:10 (UTC)
Speaker: Chase Haddleton
Real-time machine learning depends on features and data that by definition can’t be pre-computed. Detecting fraud or acute diseases like sepsis requires processing events that emerged seconds ago. How do we build an infrastructure platform that executes complex data pipelines end-to-end and on-demand in under 10 ms, all while meeting data teams where they are: in Python, the language of ML? We’ll share how we built a symbolic Python interpreter that accelerates ML pipelines by transpiling Python into DAGs of static expressions. These expressions are optimized and run at scale with Velox, an open-source (~4k stars) unified query engine written in C++ from Meta.
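A heavily simplified sketch of the symbolic idea, using Python's stdlib `ast` module: lower a function's return expression into a static expression tree, a toy stand-in for the DAGs of static expressions described above. This illustrates the general technique only, not the speaker's implementation.

```python
import ast

OPS = {ast.Add: "+", ast.Sub: "-", ast.Mult: "*", ast.Div: "/"}

def _lower(node):
    """Recursively convert an AST expression into a static tuple tree."""
    if isinstance(node, ast.BinOp):
        return (OPS[type(node.op)], _lower(node.left), _lower(node.right))
    if isinstance(node, ast.Name):
        return node.id
    if isinstance(node, ast.Constant):
        return node.value
    raise NotImplementedError(ast.dump(node))

def to_expression_tree(source: str):
    """Compile a simple function's return expression into a static
    expression tree (a stand-in for a DAG handed to a query engine)."""
    fn = ast.parse(source).body[0]
    ret = fn.body[0].value  # assumes a single `return <expr>` body
    return _lower(ret)

tree = to_expression_tree("def risk(amount, n):\n    return amount * n + 1\n")
# tree is ("+", ("*", "amount", "n"), 1): a static DAG a vectorized
# engine can optimize and execute without a Python interpreter loop.
```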
Real-Time Feature Aggregation at Scale: iFood’s Path to Sub-Second Latency
Tuesday 14 October ⋅ 20:10 – 20:30 (UTC)
Speaker: Willian Moreira
At iFood, real-time ML features are essential for delivering personalized and responsive user experiences across critical use cases such as fraud detection, recommendations, and promotions. In this talk, we’ll walk through how we built a low-latency feature platform that aggregates and serves features in under one second using Spark Structured Streaming and Redis. The platform enables real-time updates that power models reacting instantly to user behavior, supporting high-throughput, low-latency pipelines in a production environment.
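The sliding-window aggregation pattern behind such a platform can be sketched with an in-memory store standing in for the online store (Redis in the talk's platform). All names below are hypothetical, and the real system does the heavy lifting in Spark Structured Streaming.

```python
import time
from collections import defaultdict, deque

WINDOW_S = 60.0  # 1-minute sliding window

# In-memory stand-in for the online feature store (e.g. Redis).
events: dict[str, deque] = defaultdict(deque)

def record_event(key: str, ts: float) -> None:
    events[key].append(ts)

def count_in_window(key: str, now: float) -> int:
    """Evict expired timestamps, then return the windowed count feature."""
    q = events[key]
    while q and now - q[0] > WINDOW_S:
        q.popleft()
    return len(q)

now = time.time()
for offset in (120.0, 30.0, 5.0):  # two recent events, one already expired
    record_event("orders:u1", now - offset)

windowed_count = count_in_window("orders:u1", now)  # → 2
```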
Speaker: Varant Zanoyan
Leveraging modern techniques for recommendations — such as sequence and generative modeling — requires complex infrastructure to generate training data, process high-throughput event streams, and serve low-latency inference at scale. Chronon, an open-source feature platform built by Airbnb and Stripe and adopted by companies like OpenAI, Netflix, Roku, and Uber, simplifies these challenges. In this talk, we’ll show you how to use Chronon to streamline the data and feature engineering behind next-generation recommenders and deploy a robust, production-grade system.