During training, a neural network solves an optimization problem: it searches for the set of parameters that reaches the lowest point of the error function, the global minimum. In practice, gradient-based training often gets stuck at a local minimum instead, and Neuton's patented method of training solves this local minima problem.
Part of the scientific breakthrough is that we do not use compression on our models. Instead, we grow them iteratively from scratch, neuron by neuron and weight by weight. Because our patented method resolves the local minima problem while growing the network efficiently, fewer neurons are needed to minimize the error. In traditional networks, escaping a local minimum usually means adding extra layers or neurons; Neuton, by contrast, finds the global minimum without an excessive increase in the number of neurons. As a result, the model is far more compact and accurate than those produced by other neural network frameworks.
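To make the idea of growing a network "neuron by neuron" concrete, here is a minimal generic sketch of a constructive approach in NumPy. It is an illustration only, not Neuton's patented algorithm: each step adds one tanh neuron fitted to the remaining error (the residual), so the training error can only decrease as the network grows. The random candidate search and all parameter scales are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task: approximate y = sin(3x) on [-1, 1].
X = np.linspace(-1, 1, 200).reshape(-1, 1)
y = np.sin(3 * X[:, 0])

def fit_unit(X, residual, n_candidates=500):
    """Pick the candidate tanh neuron that best fits the current residual.
    The random candidate search here is a stand-in for whatever
    optimization the real training method performs."""
    best, best_err = None, np.inf
    for _ in range(n_candidates):
        w = rng.normal(size=X.shape[1]) * 3.0  # input weight (assumed scale)
        b = rng.normal() * 3.0                 # bias (assumed scale)
        h = np.tanh(X @ w + b)                 # candidate neuron's activation
        # Optimal output weight for this unit by least squares on the residual;
        # a = 0 is always available, so the error never increases.
        a = h @ residual / (h @ h + 1e-12)
        err = np.mean((residual - a * h) ** 2)
        if err < best_err:
            best_err, best = err, (w, b, a)
    return best

units = []              # the growing network: one (w, b, a) triple per neuron
residual = y.copy()
for step in range(8):   # grow the network one neuron at a time
    w, b, a = fit_unit(X, residual)
    units.append((w, b, a))
    residual = residual - a * np.tanh(X @ w + b)
    print(f"neurons={len(units)}  mse={np.mean(residual**2):.4f}")
```

Each added neuron shrinks the residual error, which is the sense in which a constructive method can reach a low error with a small number of neurons rather than stacking extra layers.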