At SIGGRAPH 2025, AI continued to gain traction across computer graphics, with clear signs of both progress and resistance. While interest was high and more research papers than ever involved deep learning, practical adoption remains fragmented and often constrained by infrastructure and workflow realities. This post offers a grounded perspective beyond the hype, covering industry economics and AI, 3D generation, and simulation. Rather than a deep technical dive, it shares observations and takeaways from a week of demos, papers, and conversations on the show floor.

Industry Pressure & AI for Efficiency
Social media and streaming are reshaping attention and revenue in the entertainment industry. People spend more time on social media and streaming platforms, and less in movie theaters or on high-end games, than before COVID, as Natalya Tatarchuk (CTO of Activision) put it during the Advances in Real-Time Rendering in Games sessions. This shift puts pressure on major film/animation studios and AAA game studios to do more with less while keeping quality high. AI could play a role in this evolution and has been a polarizing topic in the industry for the past few years. Lay-offs in the industry have partially been attributed to (hyped?) expectations that AI will automate artists and devs away. On the SIGGRAPH floor, however, there was careful yet broad interest in AI tooling.
Where is AI adoption the largest? The most popular applications lie in early ideation and discovery, where generative models bring small ideas to life quickly. Midjourney and ComfyUI were mentioned the most by far and seemed widely adopted, besides the obvious ChatGPT and Gemini as general assistants. For later stages in large projects, where the playbook is fixed, generative AI was deemed too low in quality for final renders. 3D-related AI tools are mostly in an experimental phase: many studios are looking into new tools, but adoption seems low.

What is coming in the next two years? In the papers program, the amount of research on AI for computer graphics was astounding, and breakthrough applications are expected to arrive in the coming years. Animation generation was covered widely; AnyTop, for example, generates animations directly, skipping mocap entirely. 3D generation quality is improving rapidly, although clearly not yet production-ready for everyone. NVIDIA is pushing heavily for neural shading, and for physics simulation aimed at robotics and extreme realism in digital environments. Later in this post, we cover 3D Generative AI and Neural Shading & Simulation more broadly.

What are the practical challenges for these new AI tools? On-premise solutions and walled gardens are often necessary to protect intellectual property. For many large studios, especially those outside North America and Europe, cloud-only platforms raise serious concerns around privacy, compliance, and long-term access. Yet most of the new tools on the market today are web-based or cloud-native, making them hard to adopt. Integration is another pain point: many tools are designed as one-size-fits-all platforms rather than plug-ins or extensions to existing software. Midjourney and ComfyUI are clear examples. Leaving your creative software environment breaks flow, adds friction, and makes it harder to stay focused. While tech artists are quick to explore these tools, many other creatives remain cynical and are much slower to adopt generative tools, especially because AI models are trained on their data and threaten their income. Automating the boring parts of creative workflows, e.g. retopology, enjoys much broader support among artists.
3D Generative AI
A hot space at SIGGRAPH this year was 3D generative AI. A wave of new startups like Deemos, Chat3D, Tripo, and Meshy is driving innovation in this area. These platforms make it faster than ever to go from an idea to a rough 3D model; however, they lack the quality and key features that AAA studios require.
Geometric quality is getting really good, and results are often visually convincing at first glance. Under the hood, however, most generated assets fall short of production standards. In particular, textures are often low-resolution or poorly aligned, topology is usually too chaotic to animate or modify, and UV unwrapping is either missing or completely unusable. Producing high-quality PBR materials remains a clear challenge. These are still critical bottlenecks if the goal is integration into a film, game, or simulation pipeline.
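To make the gap concrete, here is a minimal sketch of the kind of automated QC pass a pipeline might run on a generated asset before an artist touches it, using the open-source trimesh library. The file name, checks, and thresholds are illustrative assumptions, not a production standard.

```python
# Minimal QC sketch for a generated 3D asset using trimesh (pip install trimesh).
# File name and checks are illustrative assumptions, not production values.
import trimesh

mesh = trimesh.load("generated_asset.glb", force="mesh")  # flatten scene to one mesh

report = {
    "face_count": len(mesh.faces),             # chaotic topology shows up as huge counts
    "watertight": mesh.is_watertight,          # holes break simulation and 3D printing
    "has_uvs": (mesh.visual.kind == "texture"
                and mesh.visual.uv is not None),  # missing UVs block retexturing
}

# Texture resolution check: generated textures are often too small for close-ups.
material = getattr(mesh.visual, "material", None)
base_color = getattr(material, "baseColorTexture", None)
report["texture_resolution"] = base_color.size if base_color is not None else None

for key, value in report.items():
    print(f"{key}: {value}")
```

Even a crude script like this catches most of the failure modes described above before an asset wastes an artist's time.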

Control is another big challenge. Most current tools offer minimal ways to guide or constrain the output beyond a text prompt or reference images. Iterating on a result to refine a shape, preserving certain features, or regenerating only part of a mesh is sometimes supported, but rarely in a practical way. At Datameister, we've built a more structured approach in Trellis (see our presentation and whitepaper), where users iteratively guide generation using 3D constraints and highly precise 2D edits, making the process more interactive and reliable.
On the infrastructure side, most 3D generative tools run fully in the cloud and are often built by teams outside North America or the EU. That can be problematic for studios with strict data policies, especially when dealing with confidential assets or client data. Additionally, few of these tools integrate well into existing workflows. Even something as simple as naming conventions, material assignments, or unit scales is often missing, forcing teams to manually adapt outputs before use. These small gaps add up quickly and break creative flow. They're part of the reason many artists still prefer traditional modeling over wrangling with generative results.
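As a small illustration of how thin this adaptation layer can be, the sketch below normalizes the unit scale and naming of a generated export with trimesh. The centimeters-to-meters assumption and the naming convention are ours, purely for illustration.

```python
# Sketch: adapt a generated asset to (assumed) studio conventions with trimesh.
# Generated exports often lack sane unit scales and naming; this normalizes both.
import trimesh

SOURCE = "generated_asset.glb"     # output from a 3D generation tool (assumed)
ASSET_NAME = "prop_barrel_01"      # illustrative naming convention

scene = trimesh.load(SOURCE)
if isinstance(scene, trimesh.Trimesh):   # wrap single meshes in a scene for naming
    scene = trimesh.Scene({ASSET_NAME: scene})

# Many generators export in centimeters or arbitrary units; assume cm -> m here.
scene = scene.scaled(0.01)

scene.export(f"{ASSET_NAME}.glb")  # re-export under the conventional name
```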
Neural Shading & Simulation
NVIDIA demonstrated an Unreal Engine integration for neural shading, where neural networks optimize rendering by compressing textures ahead of time and approximating materials during real-time rendering. This gave us a feeling of inception: neural networks approximate functions, and a rendering pipeline approximates real-world physics. The benefits include reduced memory usage (3x-10x) and higher fps (2x-5x) at minimal visual loss, but the tech is still early, and no major productions are using it yet. NVIDIA offered multiple courses and theoretical sessions at SIGGRAPH to introduce the frameworks behind this approach, such as Slang.
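To give a flavor of the underlying idea, and emphatically not NVIDIA's actual implementation, the toy sketch below overfits a tiny MLP so that it maps a UV coordinate to an RGB value; the texture's storage cost then becomes the network's weights instead of a full-resolution image. The file name and network size are assumptions for illustration.

```python
# Toy neural texture compression sketch in PyTorch: overfit a small MLP that
# maps UV coordinates -> RGB, so the "texture" is stored as network weights.
# Illustrates the principle only; production neural texture compression differs.
import numpy as np
import torch
import torch.nn as nn
from PIL import Image

img = np.asarray(Image.open("albedo.png").convert("RGB"), dtype=np.float32) / 255.0
h, w, _ = img.shape

# Build (u, v) -> rgb training pairs covering the whole texture.
vs, us = np.meshgrid(np.linspace(0, 1, h), np.linspace(0, 1, w), indexing="ij")
uv = torch.tensor(np.stack([us, vs], axis=-1).reshape(-1, 2), dtype=torch.float32)
rgb = torch.tensor(img.reshape(-1, 3))

model = nn.Sequential(                 # a few KB of weights vs. megabytes of texels
    nn.Linear(2, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 3), nn.Sigmoid(),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(2000):
    batch = torch.randint(0, uv.shape[0], (4096,))
    loss = nn.functional.mse_loss(model(uv[batch]), rgb[batch])
    opt.zero_grad()
    loss.backward()
    opt.step()

# "Decompress" by querying the network; any UV resolution works at inference time.
with torch.no_grad():
    reconstructed = model(uv).reshape(h, w, 3)
```

A plain MLP on raw UVs reconstructs only a blurry approximation; the production systems add encodings and per-material training, but the memory trade-off is the same in spirit.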

Robotics and simulation teams are increasingly adopting real-time graphics tooling originally built for film and game production. Features like photorealistic sensor emulation, contact dynamics, and rigid body simulation are now being combined with rendering pipelines that support ray tracing, mesh-based collision, and procedural scene composition. At SIGGRAPH, several NVIDIA demos showed how procedural scene generation paired with high-fidelity physics can produce scalable virtual environments for robotics simulation, such as the Disney droids. In parallel, AI models are being used to estimate physically based rendering (PBR) material properties from real-world sensor data, like video footage captured by a self-driving car, allowing engineers to recreate complex scenes with realistic lighting and surface behavior. These materials can then be selectively altered, for example changing only the roughness of a road surface or the reflectivity of a wall, while keeping the geometry constant. This enables precise experimentation with visual variation, which is critical for stress-testing vision and control systems. As a result, the boundaries between creative rendering workflows and robotics simulation stacks are narrowing, driven by shared needs for realism, control, and reproducibility.
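This "change one parameter, keep everything else fixed" workflow maps naturally onto scene description formats like OpenUSD. As a minimal sketch, assuming the usd-core Python package and purely illustrative geometry and values, the snippet below writes three scene variants that differ only in the road material's roughness:

```python
# Sketch: material-variation sweep with OpenUSD (pip install usd-core).
# Geometry, paths, and roughness values are illustrative assumptions.
from pxr import Usd, UsdGeom, UsdShade, Sdf

for roughness in (0.1, 0.4, 0.8):
    stage = Usd.Stage.CreateNew(f"road_variant_r{roughness:.1f}.usda")

    # Geometry stays constant across variants: a simple quad standing in
    # for the reconstructed road surface.
    road = UsdGeom.Mesh.Define(stage, "/World/Road")
    road.CreatePointsAttr([(-5, 0, -5), (5, 0, -5), (5, 0, 5), (-5, 0, 5)])
    road.CreateFaceVertexCountsAttr([4])
    road.CreateFaceVertexIndicesAttr([0, 1, 2, 3])

    # Only this one material parameter changes between variants.
    material = UsdShade.Material.Define(stage, "/World/RoadMaterial")
    shader = UsdShade.Shader.Define(stage, "/World/RoadMaterial/PBRShader")
    shader.CreateIdAttr("UsdPreviewSurface")
    shader.CreateInput("roughness", Sdf.ValueTypeNames.Float).Set(roughness)
    material.CreateSurfaceOutput().ConnectToSource(shader.ConnectableAPI(), "surface")
    UsdShade.MaterialBindingAPI.Apply(road.GetPrim()).Bind(material)

    stage.GetRootLayer().Save()
```

Because every variant shares identical geometry, any difference in downstream perception results can be attributed to the material change alone, which is exactly the kind of controlled variation these demos emphasized.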
Conclusion
AI is more present than ever at SIGGRAPH. As a sign on the wall, the sheer volume of research papers involving neural networks indicates that this trend will continue in the coming years. This industry is driven by technological advancement (SIGGRAPH was born for exactly that reason), and AI is clearly becoming the next shift. As Datameister, we had a great time soaking up the energy, testing new tools, and discussing both the promises and limitations of current AI approaches. Many of the issues raised around integration, creative control, and production quality are challenges we actively help customers with.
If you are looking for a partner in this space, reach out. We offer end-to-end support, from AI development and infrastructure to hosting and seamless integration with your existing creative pipelines.