You already have an application, data model, and CI/CD pipeline. This livestream shows how to bolt on physical AI using NVIDIA Omniverse libraries—ovrtx, ovphysx, and ovstorage—instead of replatforming. In under an hour, Damien Fagnou and Ashley Goldstein will live-code a small, realistic integration: loading a USD scene, stepping a deterministic physics loop, rendering RTX sensor outputs, and wiring everything into a Model Context Protocol (MCP) server that LLM agents and tools like Claude or Cursor can safely call.
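The session's code is not reproduced in this description, but the "deterministic physics loop" idea can be sketched in plain Python. Everything below is hypothetical — it is not the ovphysx API — and the integrator is a generic semi-implicit Euler step; the point is that a fixed timestep, rather than wall-clock time, is what makes the loop reproducible across runs and machines:

```python
import dataclasses

# Hypothetical stand-in for a simulation state handle; the real ovphysx
# types and calls are not shown in this announcement.
@dataclasses.dataclass
class SimState:
    position: float = 0.0
    velocity: float = 0.0

def step(state: SimState, dt: float, gravity: float = -9.81) -> SimState:
    # Semi-implicit Euler: update velocity first, then position.
    v = state.velocity + gravity * dt
    return SimState(position=state.position + v * dt, velocity=v)

def run_deterministic(steps: int, dt: float = 1.0 / 60.0) -> SimState:
    # Fixed dt and a fixed step count: identical inputs always yield
    # identical trajectories, which is what RL and data generation need.
    state = SimState()
    for _ in range(steps):
        state = step(state, dt)
    return state

final = run_deterministic(60)  # one simulated second at 60 Hz
```

A real embedding would replace `step` with the library's stepping call and read sensor outputs between steps, but the fixed-timestep structure stays the same.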
You will learn:
- How to embed ovphysx and ovrtx into an existing Python microservice, with zero-copy tensor I/O via DLPack for RL, control, or data generation
- How ovstorage connects PLM/PDM and cloud storage directly into OpenUSD scenes without costly data migrations
- How to expose these capabilities to agentic stacks using MCP and deploy always-on agents with NemoClaw/OpenClaw-style guardrails
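To make the MCP point concrete, here is a schematic, stdlib-only dispatcher. The method names and request shape (`tools/list`, `tools/call` over JSON-RPC 2.0) follow MCP conventions, but the tool itself and its handler are invented for illustration — a production server would use an MCP SDK over stdio or HTTP, with guardrails deciding which tools an agent may call:

```python
import json

# Registry of tools an agent is allowed to invoke. "step_simulation" is a
# made-up example tool, not part of any shipped library.
TOOLS = {
    "step_simulation": {
        "description": "Advance the physics scene by n fixed timesteps.",
        "handler": lambda args: {"steps_run": int(args.get("n", 1))},
    },
}

def handle_request(raw: str) -> str:
    """Dispatch one MCP-style JSON-RPC request and return the response."""
    req = json.loads(raw)
    if req["method"] == "tools/list":
        result = {"tools": [{"name": name, "description": tool["description"]}
                            for name, tool in TOOLS.items()]}
    elif req["method"] == "tools/call":
        tool = TOOLS[req["params"]["name"]]
        result = tool["handler"](req["params"].get("arguments", {}))
    else:
        return json.dumps({"jsonrpc": "2.0", "id": req["id"],
                           "error": {"code": -32601,
                                     "message": "method not found"}})
    return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})

# An agent (e.g. via Claude or Cursor) asking the server to run 5 steps:
reply = handle_request(json.dumps({
    "jsonrpc": "2.0", "id": 1, "method": "tools/call",
    "params": {"name": "step_simulation", "arguments": {"n": 5}},
}))
```

The guardrail layer mentioned above would sit in front of `handle_request`, filtering which tool names and argument ranges an always-on agent can reach.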
The session closes with a roadmap and Q&A: when to use the standalone libraries vs. Omniverse Kit, how Isaac Lab is evolving on top of this modular architecture, and what to expect as physical AI becomes a first-class workload for ISVs and industrial platforms.
📆 Check out the full calendar for all of our upcoming events → https://nvda.ws/3JqaWnA
----------------------------------------------------------------------------
⬇️Get Started → https://nvda.ws/4cZAZO1
👀Explore OpenUSD → https://nvda.ws/3CeozBQ
👥Join the Community → https://nvda.ws/3ZMfc6e