Prior to sending data to Model Training, Neuton automatically determines the following:
The system immediately notifies the user of any data errors in a comprehensible format.
After successful Data Validation, data analysis and transformation start automatically and include the following actions:
Learn more about Neuton Differentiators https://neuton.ai/neuton-key
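As an illustration of the kind of issues a data validation and analysis step surfaces, the sketch below runs a few generic checks (missing target values, constant columns, duplicate rows) on a toy dataset. The column names are hypothetical and the checks are not Neuton's internal logic; they only convey the general idea.

```python
# Illustrative only: a few checks a data validation step typically performs.
# Not Neuton's internal validation logic; column names are hypothetical.
import numpy as np
import pandas as pd

def basic_validation_report(df: pd.DataFrame, target: str) -> dict:
    """Collect simple data issues that would block or degrade model training."""
    return {
        "rows_with_missing_target": int(df[target].isna().sum()),
        "columns_with_missing_values": [c for c in df.columns if df[c].isna().any()],
        "constant_columns": [c for c in df.columns if df[c].nunique(dropna=True) <= 1],
        "duplicate_rows": int(df.duplicated().sum()),
    }

df = pd.DataFrame({
    "age": [25, 40, 31, 25],
    "plan": ["basic", "basic", "basic", "basic"],   # constant column
    "churn": [0, 1, np.nan, 0],                     # target with a missing value
})
print(basic_validation_report(df, target="churn"))
```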
Model quality can be improved through the automatic generation of new variables. Feature Engineering is available in Advanced Mode and performs the following actions:
Learn more about Neuton Differentiators https://neuton.ai/neuton-key
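The sketch below illustrates the general idea of automatic variable generation by deriving pairwise interaction and ratio features from numeric columns. The column names are hypothetical, and this is not the specific feature engineering Neuton applies in Advanced Mode.

```python
# Minimal sketch of automatic feature generation: pairwise interactions and ratios.
# Hypothetical columns; not the transformations Neuton applies internally.
import itertools
import numpy as np
import pandas as pd

def generate_features(df: pd.DataFrame, numeric_cols: list) -> pd.DataFrame:
    out = df.copy()
    for a, b in itertools.combinations(numeric_cols, 2):
        out[f"{a}_x_{b}"] = out[a] * out[b]                        # interaction term
        out[f"{a}_div_{b}"] = out[a] / out[b].replace(0, np.nan)   # ratio, avoiding /0
    return out

df = pd.DataFrame({"age": [25, 40, 31], "income": [40_000, 85_000, 52_000]})
print(generate_features(df, ["age", "income"]).columns.tolist())
```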
Neuton’s neural network creation is the core component of the training phase. Construction of the neural network starts automatically once all data preparation steps are complete. The Neuton neural network is self-growing, resulting in extremely compact models with the following features:
Learn more about Neuton Framework https://neuton.ai/frmwrk
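To convey the intuition behind a self-growing network, the sketch below keeps enlarging a small scikit-learn network only while validation quality keeps improving. This is a deliberately simplified illustration of the "grow only as much as needed" idea and is not the Neuton Framework's actual algorithm.

```python
# Simplified illustration of "grow the network until quality stops improving".
# NOT the Neuton Framework's algorithm; it only conveys the self-growing notion.
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

X, y = make_regression(n_samples=500, n_features=10, noise=5.0, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

best_score, best_model, neurons = -float("inf"), None, 1
while neurons <= 64:
    model = MLPRegressor(hidden_layer_sizes=(neurons,), max_iter=2000,
                         random_state=0).fit(X_tr, y_tr)
    score = model.score(X_val, y_val)        # R^2 on held-out data
    if score <= best_score + 1e-3:           # stop when growth no longer helps
        break
    best_score, best_model, neurons = score, model, neurons * 2

print(f"selected {best_model.hidden_layer_sizes} hidden neurons, R^2={best_score:.3f}")
```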
Prediction results for new data can be viewed or downloaded through a user-friendly web interface in just a few clicks, without any coding.
Neuton’s REST API can be used to augment your device or service with AI capabilities. The platform provides implementation examples in several programming languages.
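A minimal sketch of calling a trained model over REST is shown below. The endpoint URL, authentication header, and payload format are hypothetical placeholders; in practice, use the request examples the platform generates for your specific model.

```python
# Hypothetical sketch of scoring new data over a REST endpoint.
# URL, token, and payload shape are placeholders, not Neuton's documented API.
import requests

API_URL = "https://example.neuton.ai/api/v1/predict"   # hypothetical endpoint
API_TOKEN = "YOUR_API_TOKEN"                            # issued by the platform

payload = {"rows": [{"feature_1": 3.2, "feature_2": "red"}]}  # hypothetical features
response = requests.post(
    API_URL,
    json=payload,
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    timeout=30,
)
response.raise_for_status()
print(response.json())  # predictions returned as JSON
```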
The downloadable option simplifies the deployment of Neuton models, even to edge devices and microcontrollers. The downloadable solution can be used without any dependency on the Neuton platform and requires no Internet connection or special licensing. The Neuton Model Viewer allows you to view the parameters of the neural network generated by Neuton.
Model quality evaluation is an essential condition for building efficient machine learning solutions.
Learn more about Metrics https://neuton.ai/neuton-key
Learn more about Neuton Explainability Office https://neuton.ai/explainability
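As a generic illustration of quality evaluation, the sketch below computes a few common classification metrics on hypothetical holdout predictions. The exact metric set Neuton reports is described on the Metrics page linked above; this snippet only shows how such metrics are typically calculated.

```python
# Generic examples of metrics used for model quality evaluation on a holdout set.
# The labels and predictions below are hypothetical.
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

y_true  = [0, 1, 1, 0, 1, 0, 1, 1]                    # holdout labels
y_pred  = [0, 1, 0, 0, 1, 0, 1, 1]                    # predicted classes
y_proba = [0.2, 0.9, 0.4, 0.1, 0.8, 0.3, 0.7, 0.95]   # predicted probabilities

print("accuracy:", accuracy_score(y_true, y_pred))
print("f1:      ", f1_score(y_true, y_pred))
print("roc auc: ", roc_auc_score(y_true, y_proba))
```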
When building machine learning solutions, it is essential to evaluate the quality of the results at the row level (row-level explainability):
Historical Model-to-data Relevance is an excellent indicator of whether a model may need to be retrained. This feature is also available for downloadable models, which allows managing the model lifecycle even outside the platform.
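Neuton's relevance indicator is computed by the platform itself; as a generic stand-in for the same idea, the sketch below computes a Population Stability Index (PSI) between a training sample and drifted production data. PSI is a common drift measure, not Neuton's Model-to-data Relevance formula.

```python
# Generic drift check as a proxy for "does the model need retraining?".
# This is the Population Stability Index, not Neuton's relevance indicator.
import numpy as np

def psi(train_values: np.ndarray, new_values: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between a training and a production sample."""
    edges = np.histogram_bin_edges(train_values, bins=bins)
    expected, _ = np.histogram(train_values, bins=edges)
    actual, _ = np.histogram(new_values, bins=edges)
    expected = np.clip(expected / expected.sum(), 1e-6, None)
    actual = np.clip(actual / actual.sum(), 1e-6, None)
    return float(np.sum((actual - expected) * np.log(actual / expected)))

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, 5_000)
drifted = rng.normal(0.5, 1.2, 5_000)        # simulated shift in production data
print(f"PSI = {psi(train, drifted):.3f}")    # > 0.25 is often read as "retrain"
```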
Exploratory Data Analysis (EDA) is a tool that automates graphical data analysis and highlights the most important statistics for a single variable, for your data as a whole, and for its interconnections and relation to the target variable in the training dataset. Given the potentially wide feature space, up to 20 features with the highest statistical significance are selected for EDA based on machine learning modeling.
Learn more about Neuton Explainability Office https://neuton.ai/explainability
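The sketch below shows one generic way such a subset could be chosen: rank features by a statistical association score with the target and keep the top 20. Neuton's actual selection is driven by its own modeling, so treat this only as an illustration of the idea.

```python
# Illustrative selection of the top-20 features by statistical association with
# the target; not Neuton's selection procedure.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif

X, y = make_classification(n_samples=1_000, n_features=60, n_informative=15,
                           random_state=0)
selector = SelectKBest(score_func=mutual_info_classif, k=20).fit(X, y)
top_feature_indices = selector.get_support(indices=True)
print("features selected for EDA:", top_feature_indices.tolist())
```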
Once the model is trained, the platform displays a chart with the 10 most important features that affect the model's predictive power. Additionally, a user can select any other feature to check its relative importance. The Feature Importance Matrix has two modes, one displaying only the original features and the other displaying the features after feature engineering. When performing classification tasks, you can also see the feature importance for every class.
Learn more about Neuton Explainability Office https://neuton.ai/explainability
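For comparison outside the platform, the sketch below computes a generic, model-agnostic feature importance ranking via permutation importance. This is not the method behind Neuton's Feature Importance Matrix; it merely produces a similar "top features" view for sanity checks.

```python
# Generic permutation importance as a model-agnostic "top features" ranking.
# Not the computation behind Neuton's Feature Importance Matrix.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1_000, n_features=15, n_informative=5,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)

ranking = sorted(enumerate(result.importances_mean), key=lambda p: p[1], reverse=True)
for idx, score in ranking[:10]:              # 10 most important features
    print(f"feature_{idx}: {score:.4f}")
```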
Along with evaluating model quality, it is equally important to interpret the prediction results correctly.
Learn more about Model Interpreter https://neuton.ai/explainability
The platform automatically provisions and deprovisions storage for each model to ensure maximum data security. During both the training and prediction phases, it also automatically provisions and deprovisions virtual machines with the most suitable configurations, based on the dataset parameters.
During the prediction phase, virtual machine usage time is controlled by the user through a user-friendly interface.