OptiX 5.0 introduces an AI-accelerated denoiser based on the NVIDIA Research paper "Interactive Reconstruction of Monte Carlo Image Sequences using a Recurrent Denoising Autoencoder". It uses GPU-accelerated artificial intelligence to dramatically reduce the time needed to render a high-fidelity image that is visually noiseless. This provides ultra-fast interactive feedback to artists, allowing them to iterate on their creative decisions more quickly and reach their final product much faster.
The AI-accelerated denoiser was trained on tens of thousands of images rendered from one thousand 3D scenes.
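For readers who want to try it, here is a minimal sketch of how the denoiser is exposed as a built-in post-processing stage through the OptiX 5.x C++ host API. The buffer setup and the ray generation program at entry point 0 are assumed; the stage name "DLDenoiser" and the `input_buffer`/`output_buffer`/`blend` variables follow the OptiX post-processing interface.

```cpp
#include <optixu/optixpp_namespace.h>

// Minimal sketch: render one frame, then run the built-in
// "DLDenoiser" post-processing stage on the result.
// Assumes inputBuffer/outputBuffer are RGBA32F buffers of width x height.
void renderAndDenoise(optix::Context context,
                      optix::Buffer  inputBuffer,   // noisy render target
                      optix::Buffer  outputBuffer,  // denoised result
                      unsigned       width,
                      unsigned       height)
{
    // Create the built-in AI denoiser stage.
    optix::PostprocessingStage denoiser =
        context->createBuiltinPostProcessingStage("DLDenoiser");

    // Wire the stage to its input and output buffers.
    denoiser->declareVariable("input_buffer")->set(inputBuffer);
    denoiser->declareVariable("output_buffer")->set(outputBuffer);

    // Optional blend: 0.0 = fully denoised, 1.0 = original noisy image.
    denoiser->declareVariable("blend")->setFloat(0.0f);

    // Build a command list: launch the renderer, then the denoiser.
    optix::CommandList commands = context->createCommandList();
    commands->appendLaunch(0, width, height);
    commands->appendPostprocessingStage(denoiser, width, height);
    commands->finalize();
    commands->execute();
}
```

Because the denoiser runs as a post-processing stage in a command list, it can be re-executed every frame for interactive preview while the accumulation buffer converges.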
So this works by predicting, from the grainy image, what the finished image will look like? It seems like a really good application for machine learning, since finished renders would make excellent training data.
The way of the future: real-time GPU graphics will merge with real-time noise removal and ray-tracing computation, and the game and VFX industries will be one and the same.