Neuton Framework
Neuton is a disruptive neural network framework invented and patented by our team of scientists. The models it produces are self-growing and so small that they can easily be embedded into microcontrollers and other computing devices, and their compactness does not come at the cost of accuracy.
How is it Different?
Self-growing Neural Networks with extremely compact models
Up to 1000 times:
Fewer coefficients than models built with existing neural frameworks
Smaller in size (KB) than models built with non-neural algorithms
Faster than models built by competitors
Self-growing
Neuton does not use any modification of the stochastic gradient descent algorithm. Instead, it uses a new, efficient global optimization algorithm that develops the optimal network structure during training.
Local Minima Free
Neuton finds the global minimum of the error function
No Vanishing Gradients
Neuton solves the problem of vanishing gradients
High Calculation Speed
Neuton models work extremely fast thanks to the small number of neurons and coefficients
Benchmarks
To benchmark a model correctly and allow a clear comparison against other solutions, Neuton reports three measurements: number of coefficients, model size, and Kaggle score.
Example benchmark: Bike Sharing Demand (Kaggle competition)
Self-growing
Unlike other neural networks, which have a predefined architecture, Neuton's patented algorithm develops the network structure by automatically determining the number of neurons during training. There is no longer any need to spend time and labor on AI/AutoML solutions that assess and build models with various non-neural algorithms and neural network frameworks to find the best solution to a given problem: Neuton identifies and grows the most optimal model automatically. This approach dramatically reduces the number of neurons, making the network compact while improving its accuracy.
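To make the idea concrete, here is a minimal sketch of a self-growing training loop. It is only an approximation of the concept, not Neuton's patented algorithm: it grows a hidden layer one neuron at a time and stops as soon as the validation error stops improving, so the final neuron count is determined by the data. The dataset, the scikit-learn MLPRegressor, and the 1% improvement threshold are illustrative assumptions.

```python
# Illustrative sketch only: grow a network one hidden neuron at a time and
# stop when validation error no longer improves. This approximates the idea
# of a self-growing architecture; it is NOT Neuton's patented algorithm.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

X, y = make_regression(n_samples=500, n_features=8, noise=10.0, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

best_err, best_size = np.inf, 0
for n in range(1, 21):                              # candidate hidden-layer sizes
    model = MLPRegressor(hidden_layer_sizes=(n,), max_iter=2000, random_state=0)
    model.fit(X_train, y_train)
    err = mean_squared_error(y_val, model.predict(X_val))
    if err < best_err * 0.99:                       # grow only while error improves by >1%
        best_err, best_size = err, n
    else:
        break                                       # growth stops: structure is "found"

print(f"selected {best_size} hidden neurons, validation MSE = {best_err:.1f}")
```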
Local Minima Free
During training, neural networks solve an optimization problem, trying to find the set of parameters that leads to the lowest point of the entire error function. This lowest point is the global minimum. Neuton's patented method of training solves the local minima problem. Part of the scientific breakthrough is that we do not compress our models: we grow them iteratively from scratch, neuron by neuron and weight by weight. Our patented method resolves the local minimum problem and grows the network efficiently, so fewer neurons are needed to minimize the error. In traditional networks, escaping a local minimum means adding extra layers or neurons; Neuton, however, finds the global minimum without excessively increasing the number of neurons. As a result, the model is much more compact and accurate than those of other neural networks.
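For readers unfamiliar with the problem itself, the toy example below shows how plain gradient descent can settle in a local minimum of a simple non-convex function depending on its starting point. The function and learning rate are arbitrary choices for illustration; the example says nothing about how Neuton's proprietary method actually escapes local minima.

```python
# Toy illustration of the local-minimum problem. f(x) = x^4 - 3x^2 + x has a
# local minimum near x ~ +1.13 and its global minimum near x ~ -1.30; plain
# gradient descent ends up in one or the other depending on where it starts.
def f(x):
    return x**4 - 3 * x**2 + x

def df(x):
    return 4 * x**3 - 6 * x + 1

def gradient_descent(x, lr=0.01, steps=2000):
    for _ in range(steps):
        x -= lr * df(x)
    return x

for x0 in (1.5, -1.5):
    x = gradient_descent(x0)
    print(f"start at {x0:+.1f} -> x = {x:+.3f}, f(x) = {f(x):+.3f}")
# Starting at +1.5 converges to the local minimum (f ~ -1.07); starting at
# -1.5 converges to the global minimum (f ~ -3.51).
```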
No Vanishing Gradients
The vanishing gradient problem is encountered when training neural networks with gradient-based learning methods and backpropagation. In such methods, each of the neural network's parameters receives an update proportional to the partial derivative of the error function with respect to the current weight in each iteration of training. The problem is that in some cases the gradient becomes vanishingly small, effectively preventing the weight from changing its value. In the worst case, this may completely stop the neural network from training any further. Neuton's proprietary algorithm of training and self-organization addresses the problem of vanishing gradients, making the model stable and accurate. Unlike traditional neural network architectures, Neuton develops the connections among neurons during training.
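The effect is easy to reproduce in a toy setting. The sketch below backpropagates through a deep chain of sigmoid units: each step multiplies the gradient by w * sigma'(z), and since sigma'(z) <= 0.25, the gradient reaching the first layers becomes astronomically small. The depth and weight values are arbitrary choices for illustration, and the example shows only the problem, not Neuton's remedy.

```python
# Toy demonstration of vanishing gradients in a chain of sigmoid units.
# Each backprop step multiplies the gradient by w * sigma'(z), and
# sigma'(z) <= 0.25, so with weights around 1 the gradient that reaches
# the first layer is vanishingly small. Depth and weights are illustrative.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

depth = 30
weights = [1.0] * depth            # one weight per layer in the chain

# Forward pass, remembering each activation for backpropagation.
a, activations = 0.5, []
for w in weights:
    a = sigmoid(w * a)
    activations.append(a)

# Backward pass: chain rule through every layer, sigma'(z) = a * (1 - a).
grad = 1.0
for w, a in zip(reversed(weights), reversed(activations)):
    grad *= w * a * (1.0 - a)

print(f"gradient reaching the first layer after {depth} layers: {grad:.2e}")
# Prints a value on the order of 1e-20, so the earliest weights barely move
# under the update w <- w - lr * dE/dw.
```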
High Calculation Speed
Because the optimized network structure has up to 100 times fewer neurons and coefficients, Neuton models run extremely fast, enabling a high volume of real-time predictions.
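To give a rough sense of why a small coefficient count matters, one prediction from a compact fully connected model boils down to a few dozen multiply-add operations, which even a modest microcontroller can execute very quickly. The sketch below is a generic dense-layer forward pass with made-up sizes and random weights, not code generated by Neuton.

```python
# Generic forward pass for a tiny fully connected model: with only 61
# coefficients, one prediction costs a few dozen multiply-adds, which is
# why compact models run fast even on microcontrollers.
# Sizes and weights are made up for illustration.
import math
import random

random.seed(0)

def dense(inputs, weights, biases):
    """One fully connected layer: out[j] = sum_i weights[j][i] * inputs[i] + biases[j]."""
    return [sum(w * x for w, x in zip(row, inputs)) + b
            for row, b in zip(weights, biases)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# A 10-input, 5-neuron, 1-output model: 10*5 + 5 + 5*1 + 1 = 61 coefficients.
w1 = [[random.uniform(-1, 1) for _ in range(10)] for _ in range(5)]
b1 = [random.uniform(-1, 1) for _ in range(5)]
w2 = [[random.uniform(-1, 1) for _ in range(5)]]
b2 = [random.uniform(-1, 1)]

x = [0.1 * i for i in range(10)]                  # one input sample
hidden = [sigmoid(z) for z in dense(x, w1, b1)]   # 5 hidden activations
prediction = dense(hidden, w2, b2)[0]             # single output
print(f"prediction: {prediction:.4f}")
```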
Contact Us
Get in touch to learn more