# C Library
Neuton, our unique neural network framework, natively creates incredibly compact and accurate models that can easily be deployed into your firmware project using an automatically generated archive with a C library.
The library is written in accordance with the C99 standard, so it is portable and imposes no strict hardware requirements. Whether the library can be used depends mainly on the amount of memory available for its operation.
The archive contains the following files and folders:
neuton.h - header file of the library
neuton.c - library source code
model/model.h - model header file
StatFunctions.h / StatFunctions.c - statistical functions for preprocessing
converted_models - folder with the model in TensorFlow and ONNX formats
We don't recommend that you modify any files in the archive: uncontrolled changes may cause errors during model inference.
High-level integration steps include:
Copying all files from the archive to the project and including the header file of the library.
Creating a float array with model inputs and passing it to the `neuton_model_set_inputs` function.
Calling `neuton_model_run_inference` and processing the results.
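Taken together, a minimal end-to-end sketch might look like this for the simple case without the DSP window. It is illustrative only: the feature values are placeholders, and the calls are the ones documented in the sections below.

```c
#include <stdint.h>
#include <stdio.h>
#include "neuton.h"

int main(void)
{
    // Input count and order must match the training dataset
    float inputs[] = { /* feature_0, feature_1, ..., feature_N */ 0.0f };

    // Returns 0 when the model is ready for prediction
    if (neuton_model_set_inputs(inputs) == 0)
    {
        uint16_t index;
        float* outputs;

        // Returns 0 on successful prediction
        if (neuton_model_run_inference(&index, &outputs) == 0)
        {
            // For classification: predicted class and its probability
            printf("class %u, probability %f\n", (unsigned)index, outputs[index]);
        }
    }
    return 0;
}
```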
The instructions below provide detailed information on how to integrate Neuton into your firmware project.
# How to integrate Neuton into your firmware project
## Include header file
Copy all files from this archive to your project and include the header file:
#include "neuton.h"
The library contains functions to get model information such as:
task type (regression, classification, etc.);
neurons and weights count;
window buffer size;
input and output features count;
model size and RAM usage;
float support flag;
quantization level.
The main functions are:
`neuton_model_set_inputs` - to set input values;
`neuton_model_run_inference` - to make predictions.
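Of the informational functions, only `neuton_model_outputs_count` is referenced by name in this document; here is a small sketch of using it, assuming it takes no arguments and returns the output dimension (check neuton.h for the exact names and signatures of the others).

```c
#include <stdio.h>
#include "neuton.h"

void print_model_info(void)
{
    // Dimension of the outputs array filled by neuton_model_run_inference;
    // the no-argument signature is an assumption - see neuton.h
    printf("output count: %u\n", (unsigned)neuton_model_outputs_count());
}
```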
## Set input values
Create a float array with the model inputs. The input count and order must be the same as in the training dataset.

```c
float inputs[] = {
    feature_0,
    feature_1,
    // ... feature_N
};
```
If the digital signal processing option was selected on the platform, you should call `neuton_model_set_inputs` multiple times, once for each sample, to fill the internal window buffer. The function returns `0` when the buffer is full, which indicates that the model is ready for prediction.
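With DSP enabled, filling the window buffer might look like the sketch below; `read_next_sample` is a hypothetical application-side function standing in for however your code acquires one sample of sensor data.

```c
#include "neuton.h"

// Hypothetical helper: fills `inputs` with the next sample's feature
// values, in the same count and order as the training dataset
extern void read_next_sample(float* inputs);

void fill_window(float* inputs)
{
    // Keep feeding samples; neuton_model_set_inputs returns 0
    // once the internal window buffer is full
    do
    {
        read_next_sample(inputs);
    } while (neuton_model_set_inputs(inputs) != 0);

    // The model is now ready for prediction
}
```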
## Make predictions
When the buffer is ready, you should call `neuton_model_run_inference` with two arguments:
pointer to `index` of predicted class;
pointer to neural net `outputs` (the dimension of the array can be read using the `neuton_model_outputs_count` function).
For regression tasks, the output value will be stored at `outputs[0]`.
For classification tasks, `index` will contain the index of the class with maximal probability, and `outputs` will contain the probabilities of each class. Thus, you can get the predicted class probability at `outputs[index]`.
The function will return `0` on successful prediction.
```c
if (neuton_model_set_inputs(inputs) == 0)
{
    uint16_t index;
    float* outputs;

    if (neuton_model_run_inference(&index, &outputs) == 0)
    {
        // code for handling prediction result
    }
}
```
You can find the same instructions in the README.md file in the downloaded archive.
# Integration with TensorFlow and ONNX
For processed data (solutions with the DSP option turned off) and the 32FLOAT input data type, the created models are also available in TensorFlow and ONNX formats, which can be built into any pipeline. You can find the model in these formats in the `converted_models` folder.
The input data for prediction must be identical in format to the data used for training, including the order of columns. The target variable should be excluded. If some columns were dropped using the platform web interface, the same columns should be dropped from the new data used for prediction.
The C library returns predicted classes in an encoded representation; to convert them to the original representation, please refer to the README.md file, where you can find all the necessary information.
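For illustration, the conversion is typically a lookup table built from the mapping in README.md; the labels below are placeholders, not values from your model.

```c
#include <stdint.h>

// Hypothetical label table in encoded-index order, filled in
// from the class mapping documented in README.md
static const char* const class_labels[] = { "label_0", "label_1", "label_2" };

const char* decode_class(uint16_t index)
{
    return class_labels[index];
}
```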
Models in TensorFlow and ONNX formats are available only for the 32-bit version of the models.
In the model.h file, you can find features marked as “unused by model”. This means that these features are not used by the model during inference, but you cannot drop them from the data passed in for inference. To optimize model performance, you can train a new model on a training dataset with the “unused by model” features dropped; in this case, you can also drop the same features from the data used for prediction.


