Neural networks are among the most dominant AI algorithms in use today. They appear to be the right solution to a myriad of problems and are often assumed to provide objective answers to a variety of complex questions, but why? A neural network can be tuned to cope with a wide range of situations, which is useful, but is it always correct to do so? We will present a range of manipulations of a predefined neural network to show their effects on the results it produces. Examples include the use of different training sets and variations in configuration, but also changes to the neural network during runs, which can affect it in both predictable and unpredictable ways. We will explain why this is not necessarily a fault of the network, or even a bad thing in general, but why it does require careful thought when working with neural networks. A popular definition of insanity is doing the same thing over and over again and expecting different results, but how about the opposite? All over the world, people working on neural networks are taking different approaches while expecting the same results. Is that not insanity?
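
As a minimal illustration of the kind of variation we mean (a sketch for this introduction, not one of the experiments presented later): training the same tiny network on the same data, changing nothing but the random seed, can already produce different outcomes. The architecture, learning rate, and epoch count below are illustrative assumptions, and the printed accuracies will typically differ across seeds; some runs solve the XOR task and others stall, which is exactly the point.

    # Sketch: same data, same architecture, same training loop;
    # only the random seed used for weight initialisation varies.
    import numpy as np

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # XOR inputs
    y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

    def train(seed, hidden=3, lr=1.0, epochs=5000):
        rng = np.random.default_rng(seed)
        W1 = rng.normal(size=(2, hidden))   # input-to-hidden weights
        b1 = np.zeros(hidden)
        W2 = rng.normal(size=(hidden, 1))   # hidden-to-output weights
        b2 = np.zeros(1)
        sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
        for _ in range(epochs):
            h = sigmoid(X @ W1 + b1)                 # forward pass, hidden layer
            out = sigmoid(h @ W2 + b2)               # forward pass, output
            d_out = (out - y) * out * (1 - out)      # gradient at output (squared error)
            d_h = (d_out @ W2.T) * h * (1 - h)       # backpropagated hidden gradient
            W2 -= lr * h.T @ d_out
            b2 -= lr * d_out.sum(axis=0)
            W1 -= lr * X.T @ d_h
            b1 -= lr * d_h.sum(axis=0)
        return float(np.mean((out > 0.5) == y))      # training accuracy

    for seed in range(5):
        print(f"seed {seed}: accuracy {train(seed):.2f}")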