Bake all the surfaces in your scene under every variable condition (such as light rotation or object movement). See this Blender script for an example; it is also included in the example Blender scene linked above.
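The bake step can be sketched roughly as a sweep over the variable conditions, with one bake recorded per condition. This is an illustrative outline, not the actual Blender script: `bake_conditions` is a hypothetical helper, and in the real pipeline each step would call into Blender's bake operator rather than just record a name.

```python
import math

def bake_conditions(num_steps=8):
    """Return one (bake_name, light_angle) pair per condition step.

    Hypothetical sketch: in a real Blender script, each step would
    rotate the light and trigger a texture bake for that condition.
    """
    baked = []
    for i in range(num_steps):
        angle = 2 * math.pi * i / num_steps  # light rotation in radians
        baked.append((f"light_rot_{i:02d}", angle))
    return baked

for name, angle in bake_conditions():
    print(name, round(angle, 3))
```

Each bake then becomes one slice of training data for the surface's network: the network learns to reproduce the baked texture as a function of the condition value.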
Each surface in a scene becomes a trained neural network contained within a shader. Here's an example of the coffee cup's surface, unwrapped according to its UV map (you can hover over it with your mouse and explore its two interactive axes):
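To make "a neural network contained within a shader" concrete, here is a minimal sketch of what such a per-surface network could look like: a tiny MLP mapping a UV coordinate plus one condition value (say, normalized light rotation) to an RGB colour. The layer sizes and random weights are purely illustrative, not the demo's actual trained architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative tiny MLP: 3 inputs (u, v, condition) -> 16 hidden -> 3 RGB.
# A real baked network would use trained weights, not random ones.
W1, b1 = rng.normal(size=(16, 3)), np.zeros(16)
W2, b2 = rng.normal(size=(3, 16)), np.zeros(3)

def shade(u, v, cond):
    """Evaluate the network at one point of the surface's UV map."""
    x = np.array([u, v, cond])
    h = np.maximum(W1 @ x + b1, 0.0)            # ReLU hidden layer
    rgb = 1.0 / (1.0 + np.exp(-(W2 @ h + b2)))  # sigmoid keeps RGB in [0, 1]
    return rgb

color = shade(0.5, 0.25, 0.1)
print(color.shape)  # one RGB triple per evaluation
```

In the actual demo this evaluation happens in a fragment shader on the GPU, once per pixel, with the weights stored alongside the shader.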
That means every pixel on screen (roughly a million of them on an average display) runs inference through a network with thousands of parameters, ideally 60 times per second. That is simply a lot of compute, and worth keeping in mind for older devices (though the Coffee Cup demo, for example, does run fluidly on many devices).
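A rough back-of-envelope makes the scale clear. The numbers below are illustrative assumptions, not measurements from the demo: a million pixels, a 4,000-parameter network, about two FLOPs per parameter (one multiply and one add), and 60 frames per second.

```python
pixels = 1_000_000    # ~average screen resolution (assumed)
params = 4_000        # illustrative network size, "multi-thousand parameters"
flops_per_param = 2   # one multiply + one add per weight
fps = 60

flops_per_frame = pixels * params * flops_per_param
flops_per_second = flops_per_frame * fps
print(f"{flops_per_frame:.1e} FLOPs/frame, {flops_per_second:.1e} FLOPs/s")
```

Under these assumptions that is on the order of hundreds of gigaFLOPs per second, which modern GPUs handle comfortably but older integrated graphics may not.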
Although you need to pay attention to performance when using this approach, you can also look ahead a bit: given current and expected improvements in GPU capabilities, imagine what else becomes possible when pixels are "smart" and their networks grow ever deeper. If you're in a position to fund research in neural graphics, do say hello.