Good question.
Your observation is correct. I have noticed the same behavior and looked into it, and the learning process turns out to be less straightforward than one might think.
Training starts from randomly initialized weights. The optimizer then updates those weights so that the network's outputs match the training examples.
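Here is a minimal PyTorch sketch of that first point (the 6-unit layer is just a toy size I picked for illustration): two identically defined networks, seeded differently, begin at different points in weight space before any training happens.

```python
import torch
import torch.nn as nn

# Two identical architectures, initialized from different random seeds.
torch.manual_seed(0)
net_a = nn.Linear(6, 6)
torch.manual_seed(1)
net_b = nn.Linear(6, 6)

# Their weights already differ before a single training step.
print(torch.allclose(net_a.weight, net_b.weight))  # False
```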
A lottery network can never finish training with a model that solves any drawing round. The training process doesn't converge because the drawn numbers are stochastic: there is no underlying function mapping past draws to future ones for the network to learn.
So every time you train the network, it starts from a different random set of weights, adjusts them during training, and ends up somewhere different, because there is no single, universal solution for it to converge to, the way there is for predictable, well-defined problems.
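To make this concrete, here is a hedged sketch that uses hypothetical uniform random "draws" as stand-in lottery data (the data, the tiny linear model, the seeds, and the learning rate are all my own illustrative choices, not anything from a real lottery). It trains the same model twice and shows both symptoms: the loss plateaus near the noise floor instead of going to zero, and the two runs end at different weights.

```python
import torch
import torch.nn as nn

def train_once(seed: int) -> nn.Linear:
    """Train a tiny model to 'predict' the next draw from the previous one."""
    torch.manual_seed(seed)
    # Hypothetical data: 1000 past draws of 6 numbers, scaled to [0, 1].
    draws = torch.rand(1000, 6)
    x, y = draws[:-1], draws[1:]  # previous draw -> next draw
    model = nn.Linear(6, 6)
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = nn.MSELoss()
    for _ in range(2000):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    print(f"seed={seed}  final loss={loss.item():.4f}")
    return model

m1, m2 = train_once(0), train_once(1)

# The loss settles near the variance of uniform noise (1/12 ~= 0.083),
# because the best any model can do is predict the mean draw.
# The two runs also finish with different weights:
print(torch.allclose(m1.weight, m2.weight))  # False
```

Since the targets carry no signal, anything the model learns beyond the average draw is just fitting noise in that particular run's data, which is exactly why each training run lands somewhere different.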