How to Embed Models into Microcontrollers?
Neuton, our unique neural network framework, natively creates incredibly compact yet accurate models that can easily be integrated into even the smallest microcontrollers. Because our models are compact by construction, without compromising accuracy, no additional steps such as compression or other transformations are required. The C/C++ library produced after training can be installed on the device immediately.

The Neuton Platform works with tabular or sensor data, enabling you to solve problem types such as regression, time series, and classification (both binomial and multinomial).
1. Model Training
Prior to the start of training, you will need to enable and configure TinyML settings that will allow the model to be optimized for use on your specific device:
The Bit Depth of Calculations.
The choice of bit depth depends on the device and the amount of free memory on it. The available options include 8, 16, and 32 bits.
Data Normalization Type.
Allows you to control the amount of resources the model requires. You may select a single normalization scale if the data values in the dataset are approximately in the same range; this makes the model even more compact, since only one scaling range needs to be stored. If the variables in the dataset have different scales, select the unique (per-variable) normalization type. Both modes are illustrated in the first sketch after this list.
Support for Float Calculations.
If your device supports floating-point calculations, select this option to create models with higher accuracy. For the 32-bit depth, float support is enabled by default.
Digital Signal Processing.
Enable this setting to classify events based on signals received from accelerometers, gyroscopes, magnetometers, EMG sensors, etc.

It automatically determines the window over which the signals will be classified, as well as other data transformations that help produce more accurate models (a windowing sketch follows the normalization example below).
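To make the difference between the two normalization modes concrete, here is a minimal C sketch of min-max scaling with a single shared range versus a per-variable range. The function names and stored ranges are illustrative assumptions; the platform derives and applies the normalization for you based on the setting you choose.

```c
#include <stddef.h>

/* Single scale: one (min, max) pair shared by every feature.
 * Only two values need to be stored, which keeps the model smaller. */
static void normalize_single_scale(float *x, size_t n, float min, float max)
{
    for (size_t i = 0; i < n; i++)
        x[i] = (x[i] - min) / (max - min);
}

/* Unique (per-variable) scale: one (min, max) pair per feature,
 * needed when the variables live on different ranges. */
static void normalize_per_variable(float *x, size_t n,
                                   const float *mins, const float *maxs)
{
    for (size_t i = 0; i < n; i++)
        x[i] = (x[i] - mins[i]) / (maxs[i] - mins[i]);
}
```

Similarly, the sliding-window idea behind the Digital Signal Processing option can be sketched as follows. The window length, axis count, and buffering scheme here are assumptions for illustration only, since the platform determines the actual window automatically during training.

```c
#include <stdint.h>

/* Window length is illustrative; the platform picks the real
 * window size automatically during training. */
#define WINDOW_SIZE 128
#define AXES        3       /* accelerometer axes: x, y, z */

static float    window[WINDOW_SIZE * AXES];
static uint16_t filled;

/* Buffer one raw sample; returns 1 once a full window is ready
 * to be handed to the model for classification. */
static int push_sample(float ax, float ay, float az)
{
    window[filled * AXES + 0] = ax;
    window[filled * AXES + 1] = ay;
    window[filled * AXES + 2] = az;

    if (++filled == WINDOW_SIZE) {
        filled = 0;
        return 1;
    }
    return 0;
}
```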
During training, the Neuton Platform provides a real-time dashboard so you can monitor the model's quality, the number of coefficients, and the model size.

Once optimal model quality and size are achieved, training stops automatically. Alternatively, you may stop training manually once the model is deemed consistent and your target model requirements have been met.
2. Running Inference
After successful training, an archive containing the C/C++ library with the model ready for deployment is generated.
The library contains the following files:
neuton.h - the library header file
neuton.c - the library source code
model/model.h - the model header file
StatFunctions.h / StatFunctions.c - statistical functions used for preprocessing
All files are an integral part of the library and must be used unchanged.
The library is written to the C99 standard, so it is highly portable and places no strict requirements on the hardware. Whether you can use the library depends mainly on the amount of memory available for its operation.
The deployment consists of the following steps:
1. Copying all files from the archive into the project and including the library header file.
2. Creating a float array with the model inputs and passing it to the `neuton_model_set_inputs` function.
3. Calling `neuton_model_run_inference` and processing the results (a worked sketch follows below).
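Putting the three steps together, a minimal sketch could look like this. The two function names come from the generated library, but the exact signatures, return codes, and output layout shown here are assumptions that may vary between library versions, so check the header files shipped in your archive.

```c
#include <stdint.h>
#include "neuton.h"

/* Illustrative inputs: one value per model feature, in the same
 * order as the columns of the training dataset. */
static float inputs[] = { 0.5f, 1.2f, 3.4f /* , ... */ };

void predict(void)
{
    /* Step 2: feed the inputs. Assumed to return 0 once the model
     * has enough data to run (e.g. a complete signal window). */
    if (neuton_model_set_inputs(inputs) == 0)
    {
        uint16_t predicted_class;
        float   *outputs;

        /* Step 3: run inference. Assumed to return 0 on success,
         * yielding the winning class index and per-class outputs. */
        if (neuton_model_run_inference(&predicted_class, &outputs) == 0)
        {
            /* Process the results, e.g. toggle an LED or send
             * predicted_class over UART. */
        }
    }
}
```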