Try Benchmark Models Built with Neuton
Select a model
In this section, you can use pre-trained Neuton models to make predictions on a holdout dataset (i.e., data not used during training or validation).

The original dataset was split into two parts: one for training and one for validation. The holdout dataset is a separate set that was not used during model training or validation. Because the holdout dataset contains the actual values of the variable Neuton is trying to predict, you can check the accuracy of the predictions. This information, along with other model characteristics for each dataset, is shown below.
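For illustration only, a split of this kind can be sketched with scikit-learn; the file name, column layout, and split ratios below are assumptions, not part of Neuton's pipeline:

# A minimal sketch of producing train, validation, and holdout sets, assuming a
# generic CSV file; this is not Neuton's internal data-splitting procedure.
import pandas as pd
from sklearn.model_selection import train_test_split

data = pd.read_csv("titanic.csv")  # hypothetical file name
train_val, holdout = train_test_split(data, test_size=0.2, random_state=42)
train, validation = train_test_split(train_val, test_size=0.25, random_state=42)

# The holdout set keeps the true target values, so predictions made on it can
# later be compared against the actual outcomes.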
Choose a model
Survived (Target variable)
Survived = 1
Did not Survive = 0
Pclass
Ticket class (1 = 1st class, 2 = 2nd class, 3 = 3rd class)
Sex
Male = 1, female = 0
Age
Age of passengers
Sib Sp
Number of Siblings/Spouses aboard
Parch
Number of Parents/Children aboard
Fare
Passenger fare (ticket price)
Embarked C
Port of Embarkation is Cherbourg = 1, other port = 0
Embarked Q
Port of Embarkation is Queenstown = 1, other port = 0
Embarked S
Port of Embarkation is Southampton = 1, other port = 0
Predicted target variable
Survived = 1.0
Not Survived = 0.0
Factual target variable
Survived = 1.0
Not Survived = 0.0
2013-14 Enrolments
Number of new students in school (2013-14 academic year)
2014-15 Enrolments
Number of new students in school (2014-15 academic year)
2015-16 Enrolments
Number of new students in school (2015-16 academic year)
2013-14 DSIB Rating
School rating in 2013-14 academic year
2014-15 DSIB Rating
School rating in 2014-15 academic year
Target (Target variable)
School rating in the 2015-16 academic year (the variable to be predicted)
Predicted target variable
Predicted school rating
from 1.0 to 5.0 in steps of 1.0
Factual target variable
Actual school rating
from 1.0 to 5.0 in steps of 1.0
Age
Employee age
Attrition (Target variable)
Employee attrition: whether the employee leaves the company (No = 0, Yes = 1)
Business Travel
No Travel = 1, Travel Frequently = 2, Travel Rarely = 3
Daily Rate
Daily rate
Distance From Home
The distance from work to home
Education
Below College = 1, College = 2, Bachelor = 3, Master = 4, PhD = 5
Environment Satisfaction
Low = 1, Medium = 2, High = 3, Very High = 4
Gender
Numerical value: Female = 1, Male = 2
Hourly Rate
Hourly salary
Job Involvement
Low = 1, Medium = 2, High = 3, Very High = 4
Job Level
Job Level
Job Satisfaction
Low = 1, Medium = 2, High = 3, Very High = 4
Monthly Income
Monthly salary
Monthly Rate
Monthly rate
Num Companies Worked
Number of companies worked at
Over Time
Whether the employee works overtime (No = 1, Yes = 2)
Percent Salary Hike
Percentage increase in salary between two consecutive years (e.g., 2017 and 2018)
Performance Rating
Low = 1, Good = 2, Excellent = 3, Outstanding = 4
Relationship Satisfaction
Low = 1, Medium = 2, High = 3, Very High = 4
Stock Option Level
Level of company stock options held by the employee
Total Working Years
Total years worked
Training Hours Last Year
Total annual training hours
Work Life Balance
Bad = 1, Good = 2, Better = 3, Best = 4
Years At Company
Total number of years at the company
Years In Current Role
Years in current role
Years Since Last Promotion
Years since the last promotion
Years With Curr Manager
Years spent with current manager
Department Human Resources
HR Department = 1, other = 0
Department Research & Development
R&D Department = 1, other = 0
Department Sales
Sales Department = 1, other = 0
Education Field Human Resources
Human Resources education field = 1, other = 0
Education Field Life Sciences
Life Sciences education field = 1, other = 0
Education Field Marketing
Marketing education field = 1, other = 0
Education Field Medical
Medical education field = 1, other = 0
Education Field Other
Other education field = 1, mentioned in other columns = 0
Education Field Technical Degree
Technical Degree education field = 1, other = 0
Job Role Healthcare Representative
Job role Healthcare Representative = 1, other = 0
Job Role Human Resources
Job role Human Resources = 1, other = 0
Job Role Laboratory Technician
Job role Laboratory Technician = 1, other = 0
Job Role Manager
Job role Manager = 1, other = 0
Job Role Manufacturing Director
Job role Manufacturing Director = 1, other = 0
Job Role Research Director
Job role Research Director = 1, other = 0
Job Role Research Scientist
Job role Research Scientist = 1, other = 0
Job Role Sales Executive
Job role Sales Executive = 1, other = 0
Job Role Sales Representative
Job role Sales Representative = 1, other = 0
Marital Status Divorced
Marital Status Divorced = 1, other = 0
Marital Status Married
Marital status Married = 1, other = 0
Marital Status Single
Marital status Single = 1, other = 0
Predicted target variable
Attrition of a valuable employee.
Attrition = 1.0: the employee will probably leave the company.
Attrition = 0.0: the employee will probably stay with the company.
Factual target variable
Attrition of a valuable employee.
Attrition = 1.0: the employee left the company.
Attrition = 0.0: the employee stayed with the company.
CO (GT)
True hourly average CO concentration in mg/m^3 (reference analyzer)
PT08.S1 (CO)
Tin oxide. Hourly average sensor response (nominally CO targeted)
NMHC (GT)
True hourly average overall Non-Methane Hydrocarbons (NMHC) concentration in microg/m^3 (reference analyzer)
PT08.S2 (NMHC)
Titania. Hourly average sensor response (nominally NMHC targeted)
NOx (GT)
True hourly average NOx concentration in ppb (reference analyzer)
PT08.S3 (NOx)
Tungsten oxide. Hourly average sensor response (nominally NOx targeted)
NO2 (GT)
True hourly average NO2 concentration in microg/m^3 (reference analyzer)
PT08.S4 (NO2)
Tungsten oxide. Hourly average sensor response (nominally NO2 targeted)
PT08.S5 (O3)
Indium oxide. Hourly average sensor response (nominally O3 targeted)
T
Temperature in °C
RH
Relative Humidity (%)
AH
Absolute Humidity
C6H6 (GT) (Target variable)
True hourly averaged Benzene concentration in microg/m^3 (micrograms per cubic meter)
Predicted target variable
Averaged Benzene (C6H6 (GT)) concentration in microg/m^3
Factual target variable
Averaged Benzene (C6H6 (GT)) concentration in microg/m^3
Crim
Per capita crime rate by town
Zn
Proportion of residential land zoned for lots over 25,000 sq.ft
Indus
Proportion of non-retail business acres per town
Chas
Charles River dummy variable (= 1 if tract bounds river; 0 otherwise)
Nox
Nitrogen oxides concentration (parts per 10 million)
Rm
Average number of rooms per dwelling
Age
Proportion of owner-occupied units built prior to 1940
Dis
Weighted mean of distances to five Boston employment centres
Rad
Index of accessibility to radial highways
Tax
Full-value property-tax rate per $10,000
Ptratio
Pupil-teacher ratio by town
B
1000(Bk - 0.63)^2, where Bk is the proportion of Black residents by town
Lstat
% of lower status population
Medv (Target variable)
Median value of owner-occupied homes in $1000s
Predicted target variable
Median value of owner-occupied homes in $1000s
Factual target variable
Median value of owner-occupied homes in $1000s
Accuracy
Class prediction accuracy.
If the class values (0 or 1) of 73 out of 100 records were predicted correctly, the accuracy is 73/100 = 0.73, or 73 percent.
The higher the Accuracy score – the better the model
AUC
Area Under the receiver operating characteristic Curve.
Takes into account the predicted probability of each record falling into the corresponding class.
The higher the AUC score – the better the model
Gini
Gini normalized uses predicted probabilities and is another interpretation of AUC, calculated as AUC * 2 – 1.
The higher the Gini score – the better the model
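As an illustration of how these three scores relate, here is a sketch using scikit-learn on made-up labels and probabilities (the numbers are placeholders, not Neuton output):

# Accuracy compares predicted class labels to true labels; AUC uses the
# predicted probabilities; normalized Gini is derived directly from AUC.
from sklearn.metrics import accuracy_score, roc_auc_score

y_true = [0, 0, 1, 1, 1, 0]
y_prob = [0.2, 0.4, 0.8, 0.7, 0.3, 0.1]          # predicted probability of class 1
y_pred = [1 if p >= 0.5 else 0 for p in y_prob]  # class labels at a 0.5 threshold

accuracy = accuracy_score(y_true, y_pred)  # correct predictions / total records
auc = roc_auc_score(y_true, y_prob)        # ranks records by predicted probability
gini = 2 * auc - 1                         # Gini normalized, as defined above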
Precision Weighted
Positive Predictive Value. Ratio of True Positive values to combined values (True Positive + False Positive).
Each predicted record is treated as having equal weight.
The higher the Precision score – the better the model
Recall Weighted
Sensitivity or True Positive Rate. Ratio of True Positive to combined values (True Positive + False Negative).
Each predicted record is treated as having equal weight.
The higher the Recall score – the better the model
F1 Weighted
Harmonic mean of Precision & Recall.
Each predicted record is treated as having equal weight.
The higher the F1 score – the better the model
Precision Macro
Positive Predictive Value.
Ratio of True Positive values to combined values (True Positive + False Positive). Each class is treated as having equal weight.
The higher the Precision score – the better the model
Recall Macro
Sensitivity or True Positive Rate. Ratio of True Positive to combined values (True Positive + False Negative).
Each class is treated as having equal weight.
The higher the Recall score – the better the model
F1 Macro
Harmonic mean of Precision & Recall. Each class is treated as having equal weight.
The higher the F1 score – the better the model
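The difference between the Weighted and Macro variants can be sketched with scikit-learn's averaging options; the labels below are made up for illustration:

from sklearn.metrics import precision_score, recall_score, f1_score

y_true = [0, 0, 0, 0, 1, 1]
y_pred = [0, 0, 0, 1, 1, 0]

# "weighted": per-class scores are averaged in proportion to class support,
# so every predicted record effectively carries equal weight.
precision_weighted = precision_score(y_true, y_pred, average="weighted")
f1_weighted = f1_score(y_true, y_pred, average="weighted")

# "macro": per-class scores are averaged with equal weight per class,
# regardless of how many records each class contains.
recall_macro = recall_score(y_true, y_pred, average="macro")
f1_macro = f1_score(y_true, y_pred, average="macro")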
Logloss
Logarithmic Loss measures the confidence level of predictions. It reflects the difference between the actual class and the predicted probability of that class. For example, if the model correctly predicts class 1 with a probability of 0.90, it is fairly confident, but 0.10 of uncertainty remains; LogLoss penalizes this uncertainty.
The lower the LogLoss score - the better the model
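The 0.90 example above can be sketched numerically; scikit-learn's log_loss is used here only as an illustration:

import math
from sklearn.metrics import log_loss

# A correct class-1 prediction made with 0.90 probability still carries a
# small penalty for the remaining 0.10 of uncertainty.
penalty_single = -math.log(0.90)   # about 0.105

y_true = [1, 0, 1]
y_prob = [0.90, 0.10, 0.60]        # predicted probability of class 1
overall = log_loss(y_true, y_prob) # average penalty over all records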
Lift
Lift measures how well a targeting model identifies cases with an enhanced response (relative to the population as a whole), compared against a random-choice targeting model.
The higher the Lift score – the better the model
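One common way to compute Lift, shown here as an assumption rather than Neuton's exact definition, is the response rate among the top-scored records divided by the overall response rate:

import numpy as np

y_true = np.array([1, 0, 1, 0, 0, 1, 0, 0, 0, 1])   # made-up actual responses
y_prob = np.array([0.9, 0.2, 0.8, 0.4, 0.3, 0.7, 0.1, 0.2, 0.5, 0.6])

k = max(1, int(0.10 * len(y_true)))         # size of the top-scored 10%
top = np.argsort(y_prob)[::-1][:k]          # indices of the highest-scored records
lift = y_true[top].mean() / y_true.mean()   # enhanced response vs. random targeting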
Coefficients
Current number of neural network coefficients
The lower the number of coefficients - the better the model
Size kB
Neural model file size in kilobytes (without line breaks)
The lower the file size - the better the model
MAE
Mean Absolute Error between observed and predicted values.
The lower the error value – the better the model
AE max
Maximum Absolute Error.
The lower the error value – the better the model
AE min
Minimum Absolute Error.
The lower the error value – the better the model
R2
Proportion of the variance in the dependent variable that is predictable from the independent variable(s).
For example, if R2 = 0.95, then the model explains 95% of the variance in the target variable.
The higher the R2 value – the better the model
RMSE
Root Mean Squared Error.
Error between observed and predicted values (square root of the average squared error over all observations).
The lower the error value – the better the model
RMSLE
Root Mean Squared Logarithmic Error.
Error between observed and predicted values (square root of the average squared logarithmic error over all observations).
The lower the error value – the better the model
MSE
MSE measures the average of the squares of the errors - that is, the average squared difference between the estimated values and the true values.
Squaring removes negative signs and gives more weight to larger differences, so bigger errors are penalized more heavily.
The lower the error value – the better the model
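All of the regression metrics above can be sketched with scikit-learn on made-up observed and predicted values; this is an illustration, not Neuton's implementation:

import numpy as np
from sklearn.metrics import (mean_absolute_error, mean_squared_error,
                             mean_squared_log_error, r2_score)

y_true = np.array([3.0, 5.0, 2.5, 7.0])
y_pred = np.array([2.5, 5.0, 4.0, 8.0])

mae = mean_absolute_error(y_true, y_pred)                # MAE
mse = mean_squared_error(y_true, y_pred)                 # MSE
rmse = np.sqrt(mse)                                      # RMSE: square root of MSE
rmsle = np.sqrt(mean_squared_log_error(y_true, y_pred))  # RMSLE
r2 = r2_score(y_true, y_pred)                            # R2

ae = np.abs(y_true - y_pred)                             # per-record absolute errors
ae_max, ae_min = ae.max(), ae.min()                      # AE max and AE min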