Neuro-T
User Guide

Version 3.0
Contents

Quick Start
    Getting started
    Access Neuro-T
    Set up your project
    Manage your data
    Train your model
Introduction
    Neuro-T
    About Neurocle
    Model types provided by Neuro-T
    Requirement specification
Glossary
    Deep Learning & Statistics
    Unique concepts in Neuro-T
Getting started with Neuro-T
    Starting Neuro-T
        Log in
        Home screen interface
    Setting up workspace
        Workspace Setting
        Create image set
Creating deep learning models
    Manage your data: File tab
        Import and export images
        Edit images (preprocessing)
        Move images
        Delete images
    Label your images: Data tab
        Components in Data tab
        Create/Copy label set
        Import/Export label set
        Setting ROI and mask
        Label your images
        Split Train/Test dataset
        Merge and copy your class: Class mapping
        View-only mode
        Compare labels and models
        Convert labels to images
        Flowchart
    Train your own models: Train tab
        Adjust training conditions
Evaluating & reporting results
    Evaluate performance of your models: Result tab
        Key metrics
        Classification
        Segmentation
        Object Detection
        OCR
        Anomaly Detection
        See result by image
        Evaluate your model
        Import your model
    Generating reports: Report tab
Database managements & system setting
    Working with database
        Database storage management
    System setting
        GPU allocation
    Admin Console
        Accounts managements
        Workspace managements
Useful tips
    How to improve model performance
    Filtering noisy labels
    Manually splitting Test/Train set
    Importing labels as masking images
    Resize method
    Image basis analysis (View filter)
        Classification - Class Activation Map
        Segmentation, Object Detection, OCR - Predicted labels
    Definition of score in each model types
    How to use the Task list
        Models tab
        Activities tab
    Inference speed
    How to upgrade NVIDIA Driver
    How to get technical support
Keyboard shortcuts

Quick Start
Getting started
Quick start is a brief introduction for users who already have a certain level of knowledge in deep learning. These
users can follow the steps below to start their project.

This section assumes that Neuro-T is already installed. If Neuro-T is not yet installed on your computer, please
refer to ‘Neuro-T Installation Guide’.

Quick start includes information on how to do the following:


• Access Neuro-T
• Set up project
o Create workspace and share them with collaborators
o Create image sets and organize images
• Manage datasets
o Create label sets and choose label types
o Label images with labeling tools
o Split dataset into Train, Test sets
• Train model
o Set training conditions and create a new model
o See the results and write a report
Please make sure to upgrade to the latest version of the NVIDIA Driver (v471.11) before accessing Neuro-T.

Precaution: For program stability, shut down the Neuro-T server before upgrading the NVIDIA Driver. (Refer to
'How to Upgrade NVIDIA Driver' to see how to download the NVIDIA Driver.)

Access Neuro-T
1. In the Windows taskbar, click the Neuro-T logo icon or search 'Neuro-T' in the search box to start the server.

2. In the Neuro-T log window, choose the GPU you want to use. (The detailed procedure can be found in the
‘Neuro-T Installation Guide’.)
3. When done, type http://localhost:8000/ in the address bar of your web browser to start Neuro-T.
4. Log in to your Neuro-T account. If you don't have one, click [Don't have an account? Sign Up] and sign up.

NOTE

If you cannot connect to the server after entering http://localhost:8000, please refer to the 'How to Run
Neuro-T' and 'Set up Firewall on a Server PC' sections in the Neuro-T Installation Guide. Try to reconnect
after following the instructions.

If you are not using the server PC directly, you need to access Neuro-T using http:// [ IP Address of Server
PC ] :8000. If port 8000 is already in use, Neuro-T will be assigned an alternative port, which will be shown
in the 'View Config' window. Change 8000 in the address to the allocated port.

For example, if you are using port 8003, access Neuro-T using http:// [ IP Address ] :8003.

Set up your project


To get started with a project, you need to create a workspace and import images that will be used in the project.

Create workspaces
Workspaces are shared spaces where deep learning projects are carried out. You can create a workspace to
conduct single or multiple projects and share them with other user accounts to collaborate.
1. Click [New workspace] or click [+] in the navigation menu. A dialog appears to create a new workspace.
2. Enter the name of the workspace, share permission to members and click [Create].

Manage workspaces
Managing workspaces include editing the name of the workspace, sharing permission to other members, and
deleting the workspace. Managing can only be done by the owner of the workspace.
1. Open the Workspace page.
2. At the right of the Workspace name, click the [Edit] icon to open the edit panel.

3. In the edit panel, you can:
• Change the workspace name.
• Delete the workspace.

Create image sets


In the Workspace page, you may create as many image sets as you need. You can copy, duplicate, edit, and
delete image sets, giving you more room to explore.
1. In the Workspace page, click [New] or [Create image set].
2. In the dialog, enter the name and description of the image set and click [Create].

Import data files


After creating an image set, you can start importing your images. Each image set can contain up to 100,000
images (recommended), with a maximum file size of 64MB per image.
To import images, do the following:
1. Double-click the image set. A new page with 5 tabs (File, Data, Train, Result, Report) will appear.
2. In the File tab, import your images by doing one of the following:

• Drag and drop images from your computer onto the import area.
• Click [Add image] or [Add folder] > choose [Original] > select images from the directory.

To import resized images, do the following:


1. Click [Add image] or [Add folder] > choose [Resized] > select images from the directory.
2. In the Resize images dialog, choose the size you wish to resize to (512px or 1024px), then click [Resize and
import] to import the resized images. If you drag and drop, this dialog will always appear.
3. If you don’t want to resize, click [Skip and import original] to import original images.

NOTE

Depending on the amount of data, importing images at once on Microsoft Edge may take a long time. To
minimize the importing time for high resolution images, resize images when importing. Resized images will
maintain their aspect ratio.

Manage your data
In the Data tab, you can create classes, label images, and split data into Train and Test sets. You are free to create as
many label sets as you want using 5 different labeling methods. Label sets not only enable you to find the
method that best fits your data, but also allow you to modify, copy, and experiment with combinations of class labels
and dataset splits to compare and progressively develop the best model.

I. Create Label set


A label set is a unit of dataset composed of a labeling type (or method), labeling classes, and a dataset split. These
components determine how you'll create a deep learning model. It is important to select the labeling
method that fits your project purpose.
There are five label types:
• Classification: Categorizes each image by its class, such as Good/Bad.
• Segmentation: Recognizes the shape and location of objects in images.
• Object Detection: Detects the location and number of objects in images.
• Anomaly Detection: Categorizes Normal and Anomaly images by training Normal images only.
• OCR: Recognizes text, numbers, and symbols in images.

To create a new Label set, do the following:


1. In the Data tab, click [New].
2. In the Create new label set dialog, select the type of project (Classification, Segmentation, Object Detection,
Anomaly Detection, OCR), enter a name and description and click [Create].

NOTE

Depending on the label type, the labeling method and deep learning model will vary. You can also import an
existing labeling data as a JSON file or image file once you create a Label set.

II. Label images
Label the images in accordance with your project goal. Labeling tools are displayed in the toolbar at the right. You
can easily create a new class with the [+] button or edit existing classes by clicking the [Edit] button on the right
side of the class name. You may check the labeling status on the table to the left.
The tools for each label type are as follows:
• Classification: Select the image from the table on the left-hand side. Then click on the class name to assign
the image to a class.
• Segmentation: Click the class name and desired shape of labeling tools — line, circle, square, polygon and
magic wand. Then draw a region on the image directly.
• Object Detection: Draw a box on the image by clicking on the square icon and selecting a class.
• Anomaly Detection: Select image from the image table on the left, then click Normal or Anomaly to label a
class to the image. Images assigned as ‘Train set’ cannot be labeled as Anomaly.
• OCR: Draw a box around text in the image. Then type in matching characters.

III. Assign Train/Test set
The next step is to divide images into two groups: Train and Test set. The Train set will be used for model training,
the Test set for evaluation.
1. Set a ratio to randomly split images to either Train set or Test set and click [Split selected] or [Split all]. By
default, the ratio is 85:15.
2. If you want to manually assign images to a specific set, click [Train set] or [Test set].
3. Assign as [Not used] if you want to exclude specific images from training.

Train your model


In the Train tab, double-check the dataset details to make sure you’ve satisfied the minimum requirements. You
can go back to the Data tab if you need revision. When you’ve satisfied the requirements, you can set the train
settings by adjusting the hyperparameters to train a deep learning model.
To set hyperparameters, do the following:
1. Select a Label set.
2. Double-check the dataset details. You may go back to the Data tab to revise labels or dataset split.
3. Adjust training conditions and click [Start training]. For further information, refer to 'Setting training
conditions'. When training is complete, see the results in the Result tab and export a report in the Report tab.

Introduction

Neuro-T
V3.0

About Neurocle
Neurocle
"Making Deep Learning Vision Technology More Accessible"
As a group of computer vision & deep learning experts, Neurocle believes that deep learning can enhance the
quality of life.
Our vision is to enable people to apply deep learning technology anywhere they like with easy-to-use
software. No matter who the users are or what kind of system they use, we help people solve deep learning
image problems.

For more information, please visit our website or contact us by email.


• Website: www.neuro-cle.com
• Technical Support: support@neuro-cle.com

Product lineup: Neuro-T, Neuro-X & Neuro-R


Neuro-T and Neuro-X are deep learning software that train models for image recognition. Based on the intuitive
GUI and Auto Deep Learning, Neuro-T makes it possible for non-experts to create deep learning models and
Neuro-X allows deep learning experts to conduct experiments by adjusting numerous hyperparameters.
Deep learning models can be used together with Neuro-R for real-time inference. The models can run on flexible
platforms such as GPUs and embedded devices.

Model types provided by Neuro-T

Classification
Classification models identify the concept of the whole image and sort images into different classes. This
model type is appropriate for use cases such as classifying images into the categories "Normal" and "Defect".

Segmentation
Segmentation models not only identify an object, but also recognize its shape and location within an image. As
images are analyzed at the pixel level, segmentation can be used to locate the exact defect area on the product
surface, or to discover multiple types of objects within an image.

Object Detection
Object Detection models capture objects within an image and distinguish the class of each object. They show the
size and location of the object in the form of a box. This model type is useful when detecting instances of certain
object classes in a scene, such as cars parked on the street, or people's faces.

Anomaly Detection
Anomaly Detection is an unsupervised learning model that identifies outliers after training on normal images only. This
model is applicable in a variety of domains, including video surveillance systems.

OCR (Optical Character Recognition)


OCR models are specialized in recognizing words and text in images depicting scanned documents, bar codes,
signs with text, and billboards. This model type detects text in the given picture, splits it into individual characters,
and then recognizes each character.

Requirement specification

Type of license
The different types of licenses are distinguished by the number of accounts and GPUs. 'Maximum number of
GPUs' means the number of GPUs that Neuro-T can use simultaneously. A Neuro-T license can be upgraded when
users need more accounts and GPUs.

Type of license | Number of accounts | Maximum number of GPUs
Basic | 1 | 1
Standard | 3 | 2
Team | 5 | 4
Enterprise | 10 | 8

NOTE
Even within the same PC, if the user switches to a downgraded license dongle, the user must use accounts
according to the number permitted for that dongle. For example, if you create 3 accounts with
the Standard license but wish to switch to the Basic license, 2 accounts will be deleted (since the Basic license
only supports 1 account).

Requirement Specifications

Component | Minimum specifications | Recommended specifications
GPU | CUDA compute capability 3.5 or higher; 8GB or higher (NVIDIA RTX 3060, RTX 3070) | NVIDIA RTX 3080 Ti, NVIDIA RTX 3090
Server O/S | Windows 10 64-bit, Windows 11 64-bit | Windows 10 64-bit, Windows 11 64-bit
CPU | 1 GPU: i5 or higher; Multi GPU: i7 or higher | 1 GPU: i7 or higher; Multi GPU: i9 or higher
RAM | 16GB or higher | 32GB or higher
Client Browser | Chrome (recommended), Firefox, Microsoft Edge | Chrome (recommended), Firefox, Microsoft Edge

NOTE

A PC with a GPU is required for training deep learning models with Neuro-T. If other operations are running on
the Neuro-T PC and occupying CPU and GPU memory, the stability of the software may be affected. It is
strongly recommended that users avoid executing other programs simultaneously.

Glossary
Deep Learning & Statistics
These are the basic deep learning concepts and statistics which could assist you while conducting a project.

Precision and Recall


Before moving on to Precision and Recall, you need to understand the four possible outcomes shown below.
This table shows the actual answer and the model's predicted answer in a 2x2 matrix.

 | Predicted Positive | Predicted Negative
Actual Positive | True Positive (TP) | False Negative (FN)
Actual Negative | False Positive (FP) | True Negative (TN)

• True Positive(TP) : the model correctly identifies an image as belonging to the Positive class
• True Negative(TN) : the model correctly identifies an image as belonging to the Negative class
• False Positive(FP) : the model incorrectly identifies an image as belonging to the Positive class (image actually
belongs to the Negative class)
• False Negative(FN) : the model incorrectly identifies an image as belonging to the Negative class (image
actually belongs to the Positive class)

In the terminology true/false positive/negative, true or false refers to whether the assigned classification is correct,
while positive or negative refers to assignment to the positive or the negative category. For example, if
the model correctly predicts the image of an apple as an apple, it is a True Positive; if the model
predicts the image of an apple as a banana, it is a False Negative.

Precision is the ratio of actual Positives among what was predicted to be Positive: Precision = TP / (TP + FP).

Recall is the ratio of predicted Positives among what was actually Positive: Recall = TP / (TP + FN).

Both Precision and Recall deal with the case in which the model predicts an actual Positive as Positive, but from
different perspectives.

For example, consider a task that identifies the apple images out of 10 images, where 6 are apple images and 4 are
banana images. Let's look at the difference between Precision and Recall in different situations.

Let's say there are only two images that the model predicted as apples. If the actual answer for both
images was an apple, then Precision is 100%. However, there are four more apple images that the model
failed to predict, and this cannot be seen from Precision, since Precision only considers
what was predicted to be Positive.

Also, if the model predicted all 6 apple images as apples, Recall would be 100%. However, the model might
also mistakenly predict some banana images as apples. In other words, there are 6 actual apple images, but 8
images predicted as apples, so there are false predictions; this cannot be seen from Recall, which only
considers the actual Positives.

Thus, the higher the values of Precision and Recall, the better. However, a high value in either
Precision or Recall alone does not necessarily mean that the model performs well. Both values should
be considered.

Accuracy and F1 score


While Precision and Recall only deal with predicting actual Positives as Positive, Accuracy comprehensively deals with all
cases, including those in which an actual Negative is predicted as Negative. In general, Accuracy is the most intuitive
metric for judging model performance.

Accuracy = ratio of images whose label is correctly predicted among all images = (TP + TN) / (TP + TN + FP + FN)

For example, the ratio of ‘predicting actual apple images as an apple and actual banana images as a banana’
among total number of images is called Accuracy.

The F1 Score is also a measure of a test's accuracy and can be used as a metric to complement Accuracy. The
F1 Score is the harmonic mean of Precision and Recall: F1 = 2 × (Precision × Recall) / (Precision + Recall).
It reaches its best value at 1 (perfect Precision and Recall) and worst at 0.
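
As a minimal sketch of how these metrics relate (plain Python, not Neuro-T code; the counts below are illustrative assumptions based on the apple example above):

```python
def metrics(tp, fp, fn, tn):
    """Precision, Recall, Accuracy, and F1 from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, accuracy, f1

# Apple example: 6 actual apples, 4 bananas; the model predicts 8 images as
# apples, catching all 6 apples but mislabeling 2 bananas.
p, r, a, f1 = metrics(tp=6, fp=2, fn=0, tn=2)
print(f"Precision={p:.2f} Recall={r:.2f} Accuracy={a:.2f} F1={f1:.2f}")
# Precision=0.75 Recall=1.00 Accuracy=0.80 F1=0.86
```

Note how a perfect Recall (1.00) coexists with an imperfect Precision (0.75), which is exactly why both values should be considered together.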

ROC curve and AUC (Area under the ROC Curve)


A Receiver Operating Characteristic curve, or ROC curve, is a graph that illustrates the diagnostic ability of a
classification model as its discrimination threshold varies.

The ROC curve is created by plotting the True Positive Rate (TPR) against the False Positive Rate (FPR) at various
threshold settings.

True Positive Rate (= Recall)

TPR = TP / (TP + FN). It describes how good the model is at predicting the Positive class when the actual outcome is Positive.

False Positive Rate

FPR = FP / (FP + TN). It describes how often a Positive class is predicted when the actual outcome is Negative.

AUC (Area under the ROC curve)


AUC measures the entire area underneath the ROC curve from (0,0) to (1,1). AUC provides an aggregate measure
of model performance across all possible classification thresholds. Generally, the higher the AUC, the better the
model distinguishes classes.
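
To make the construction concrete, here is a rough sketch (illustrative only, with made-up scores and labels; not Neuro-T's implementation) of how ROC points and a trapezoidal AUC can be computed from per-image scores:

```python
def roc_auc(scores, labels):
    """ROC points and trapezoidal AUC from scores and 0/1 ground-truth labels."""
    pos = sum(labels)
    neg = len(labels) - pos
    # Sweep thresholds from high to low; each threshold yields one (FPR, TPR) point.
    points = [(0.0, 0.0)]
    for t in sorted(set(scores), reverse=True):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        points.append((fp / neg, tp / pos))
    points.append((1.0, 1.0))
    # Trapezoidal rule over the (FPR, TPR) points.
    auc = sum((x2 - x1) * (y1 + y2) / 2
              for (x1, y1), (x2, y2) in zip(points, points[1:]))
    return points, auc

_, auc = roc_auc([0.9, 0.8, 0.6, 0.4, 0.3], [1, 1, 0, 1, 0])
print(f"AUC = {auc:.2f}")  # AUC = 0.83
```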

Intersection over Union
IoU (Intersection over Union)

Intersection over Union is an evaluation metric used to evaluate the performance of an object detector on a particular
dataset.
Two types of bounding boxes are required to calculate IoU: ground-truth bounding boxes and
predicted bounding boxes.
• A ground-truth bounding box is a labeled region, provided as user input, that specifies where in the image a specific
object is.
• A predicted bounding box is a region predicted by the deep learning model, which specifies the location and
existence of a specific object.

The higher the IoU (i.e., the area of overlap divided by the area of union), the more the ground-truth and predicted
bounding boxes overlap.
Below are examples of bounding boxes with different IoUs.
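
For reference, a minimal sketch (not Neuro-T code) of IoU for two axis-aligned boxes given as (x1, y1, x2, y2) corner coordinates:

```python
def iou(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle.
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.143
```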

Patch size
While a Classification algorithm is trained on whole images, a Segmentation model is trained on patches. Patches are
square, and Neuro-T provides three patch sizes: 128x128, 256x256, and 512x512. If the images being used for
training are smaller than 512x512 pixels, the patch size will automatically be set to 512x512 without any cropping or
resizing of the image.

Guide to selecting the right patch size


• The performance of the model will be improved if the patch size and the original image size are the same. If
the image size is 512x512, select 512x512 for the patch size.
• If the image size is bigger than the available matching patch size, it is good to set the patch size to be slightly
bigger than the detecting area. For example, if the detecting area in a 1024x1024 image is 200x200, then it is
recommended that the patch size is 256x256.

Scale factor
Scale factor is parameter for Segmentation model.
If you set the Scale Factor to 2, you may get the results below.
For example, if the original image size is 256x256, it will be converted to a 512x512 size image, which doubles the
width and height of the original image.

Sensitivity
Sensitivity refers to the threshold that separates the Normal class and the Anomaly class. For example, if Sensitivity is set
to 30, an image with an anomaly score above 30 will be classified as 'Anomaly'; an image with an anomaly score below
30 will be classified as 'Normal'.

Sensitivity can be set by referring to the Sensitivity histogram provided in the 'Result' tab and the 'Data' tab; the
histogram consists of the anomaly scores of tested images. The x-axis represents the anomaly score, and the
y-axis represents the number of images with that anomaly score.
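
Conceptually, the classification rule is just a threshold on the anomaly score, as in this illustrative sketch (assumed logic, not Neuro-T internals):

```python
def classify(anomaly_score, sensitivity=30):
    """Threshold rule: scores above the sensitivity are Anomaly, the rest Normal."""
    return "Anomaly" if anomaly_score > sensitivity else "Normal"

print(classify(42))  # Anomaly
print(classify(12))  # Normal
```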

Unique concepts in Neuro-T
Workspace
Workspaces are shared spaces where deep learning projects are carried out. You can either conduct multiple
projects within the same workspace or have a separate workspace for each project. After creating a workspace,
you can share it with other account users and manage access to each project.

Image set
Within the workspace, you can create image sets, which are like folders where you import, export, and manage
the images you need for the project. If you want to create different types of deep learning models with the same
images, you can use the exact same image set instead of importing the images multiple times.

Label set
A label set is a unit of dataset, which consists of images, a label type, labeling, and an image split into Train/Test sets.
The label type determines the type of deep learning model that you are creating; thus, when creating a new
label set, you need to select the label type that is appropriate for your project goal.
You may create labels using Neuro-T's labeling tool or import existing labels as JSON or image files.
In addition to labeling, you can split images and assign them to the Train/Test set, which will impact model
performance, as each model may differ based on what it has learned and been evaluated on.

Getting started with Neuro-T
Starting Neuro-T
Log in

1. If you don't have an account, click [Don't have an account? Sign Up] to sign up.

2. Type in your user ID and password, and then click [Log in].

Home screen interface

1. User account: View user ID. Click the [<<] icon to minimize the navigation bar.
2. Home: Click this menu to open the Home screen.
3. Workspace: Click [+] to create a new workspace. Open a workspace to work on the project.
4. Settings, Documentation, Help: Click to adjust settings, read user guide manuals, or ask for help.

5. Software version: See the current version.
6. Notification and task list: Click to see the progress of training models and other loading activities.
7. New workspace: Click to create a new workspace.
8. Recently edited: Click a recently visited image set to quickly access the page.
9. Latest models: Go directly to the Result tab to check results. You will see models created within the last
3 days or, if there aren't any, the 5 most recently created models.
10. Tutorial videos: Watch tutorial videos for guidance.

Setting up workspace

Workspace Setting

Guide to creating a workspace


Workspaces are shared spaces where deep learning projects are carried out. You can create a workspace to
conduct single or multiple projects and share them with other user accounts to collaborate.
The workspace name should be your project name; each workspace holds a unique project of its own. You will need
a workspace in the following cases:
• When starting a new project
• When separating internal and external users while working on the same project
• When splitting multiple projects that are being carried out in a single workspace

To create a workspace, do the following:


1. Click [New workspace] or click [+] in the navigation menu. A dialog appears to create a new workspace.
2. Enter the name of the workspace, share permission to members and click [Create].

Workspace permissions
The owner (creator of the workspace) can manage team members who can get access to the project.
To manage permission, do the following:
1. Open the Workspace page.
2. In the members column to the right, set a user as a [Member] or remove their access by selecting [No access].

NOTE

If you are not the owner of a workspace, you cannot change the permissions of accounts. If you want to join a
group or add another account, please ask the owner of the workspace for permission.

Create image set


Within a workspace, you can create image sets (or image folders) where you import, export, delete, and edit
images used in the project. You can create multiple image sets if you have different sets of images you wish to
use in a particular workspace. If you wish to use the same set of images to create different types of deep learning
models, you can later choose multiple labeling types (or methods) within a single image set.
Here are some examples of when to use an image set:
• When adding new images to or deleting images from an existing image set
• When starting a new project without making use of the existing image set

NOTE

It is recommended to create only a few image sets, since most tasks can be separated by creating a new label set in
the Data tab.

To create an image set, do the following:


1. In the Workspace page, click [New] or [Create image set].
2. In the Create new image set dialog, enter the name and description of the image set and click [Create].

To edit image sets:
1. Click the [Menu] icon at the lower-right side of the image set card.
2. In the Edit panel, you can:
• Change the image set name and description, then click [Apply].
• Click the [Delete] icon to delete the image set.

To copy or delete multiple image sets:


1. Use the Ctrl key to select multiple image sets.
2. Click [Copy] to make an exact copy of the image set, including images, label sets, and models, in another
workspace.
3. Click [Delete] to delete the selected image sets.

Creating deep learning models
Manage your data: File tab
In the File tab, you can import images to the image set and later export them to your PC.

Import and export images


The type and number of images used for training significantly impact the model's performance. Follow the guide
below to achieve high model performance.

1) Import at least 100 images for each class


It is recommended to import more than 100 images per class because the images from each class will later be split into
Train and Test sets.
• Train images will be used for training the model.
• Test images will be used to evaluate the performance of the model.
Since the model is generated from the Train set, the more training data there is, the better the model's
performance will be.

Requirements

Classification, Segmentation, Object Detection: Prepare at least 10 Training images per class and a total of 3
Test images.

OCR: Prepare at least 10 Train images and 3 Test images.

Anomaly: Prepare at least 10 Train images for the Normal class and 3 Test images for each of the Normal and Anomaly
classes.

2) Balance the number of images in each class


To enhance the performance of your model, make sure to evenly distribute the number of images between classes.
For example, don’t use 10 images for one class and 500 images for another.

3) Diversify the images of each class


Gather images that best represent what you expect the model to predict. Also consider including images
of each class from multiple angles and under different lighting conditions to prevent the model from becoming biased
toward a specific type of image.

How to import and export images
The image requirements are as follows. The supported file formats are JPG(JPEG), PNG, BMP, TIF(TIFF), and DCM.
Note that BMP, TIF(TIFF), and DCM files will be saved in PNG format.

In the File tab, import your images by doing one of the following:
• Drag and drop images from your computer onto the import area.
• Click [Add image] or [Add folder] > choose [Original] > select images from the directory.

To import resized images, do the following:


1. Click [Add image] or [Add folder] > choose [Resized] > select images from the directory.
2. In the Resize images dialog, choose the size you wish to resize to (512px or 1024px), then click [Resize and
import] to import the resized images. If you drag and drop, the dialog will always appear.
3. If you don't want to resize, click [Skip and import original] to import the original images. When resizing, the larger
side of the image will be set to the maximum size you chose (512 or 1024), and the aspect ratio of the image will be
maintained (see the sketch below).
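
As an illustration of that rule (a sketch under the assumption that only the longer side is capped and the other side scales proportionally):

```python
def resized_dims(width, height, max_side=1024):
    """Scale so the longer side equals max_side, preserving the aspect ratio."""
    scale = max_side / max(width, height)
    return round(width * scale), round(height * scale)

print(resized_dims(4000, 3000, max_side=1024))  # (1024, 768)
```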

NOTE

Please note that depending on the amount of data, importing images at once on Microsoft Edge may take a
long time. To minimize the importing time for high-resolution images, resize images when importing. Resized
images will maintain their aspect ratio.

Export
1. Click [Export images].
2. Click [Export] to continue downloading images as a zip file.

Edit images (preprocessing)
In the File tab, you can preprocess (or prepare) your images with the following features.
• Transform: Rotate and flip images.
• Crop: Crop images to remove unwanted parts of an image. You may also enter the start index of x and y
coordinates and specify the width and height.
• Slice: Split images horizontally, vertically, or both.

NOTE

When you slice, one large image will be divided into several equal pieces. If slicing into equal pieces is
not possible, the remainder on the bottom-right side that falls outside the slices will be cropped out. For example, if
you slice a 100x100-pixel image by 3, the image will be sliced into three 33-pixel pieces and 1 pixel will be cropped
out (see the sketch below).
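
The piece size and cropped remainder follow from integer division, as in this small illustrative sketch:

```python
def slice_sizes(length, n):
    """Each piece is length // n pixels; the remainder is cropped out."""
    piece = length // n
    cropped = length - piece * n
    return piece, cropped

print(slice_sizes(100, 3))  # (33, 1)
```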

To edit images, do the following:


1. Click [Edit images] at the top right corner.
2. Select the image you want to edit.
3. Transform, crop, or slice images.
4. Click [Save as copy] to save changes.

Move images
In the File tab, you can copy or move images to another image set.
To move or copy images, do the following:
1. Select images you’d like to move or copy.
2. Click the [Move or Copy] icon.
3. Choose one option between move or copy.

4. Select a location in which you would like to send images. You can either send it to an existing or a new image
set.
5. Click [OK] to proceed.
6. If the current image set has a label set and you choose to move images, you will be asked for confirmation.
Enter the name of the image set in the field provided and click [Move].

Delete images
1. Select images you’d like to delete.
2. Delete selected images.
3. If you have a model, you will need to enter the name of the image set.

NOTE

Be careful when deleting images since deleted images will be entirely removed from the image set. Also, label
sets and models included in the image set will all be affected. If you do not wish to use specific images for
training or testing, it is strongly recommended to assign them as 'Not used' instead of deleting them.

Label your images: Data tab
In the Data tab, you can set ROI/Mask, label images, and split them into Train and Test sets. You may also check and
compare results after creating a model.
In addition, you may export your label information and predicted label information. You may also create image
sets based on label information.

Components in Data tab

1. Create a new label set. Click [Manage] to see and edit existing label sets.
2. Import or export ROI/Mask or labeling information as a JSON file. PNG files can also be imported/exported when
using Segmentation models.
3. Convert class labels to image set.
4. Select label sets and models (prediction results) you wish to display in the main panel.
See how to check prediction results by each image
5. See tips & shortcuts which can be used during image labeling process.
6. See the current label set’s summary of image distribution.
7. Main panel with a list of image files and labeling information.
8. Image view panel where you can directly draw labels in the image using labeling tools.
9. Toolbar where you can use ROI/Mask and labeling tools.

NOTE

To properly import or export a label set, make sure each image file has a unique name. Images with duplicate file
names will not be imported or exported properly. For Segmentation, you can import or
export labels as masking images.

Main Panel components
1. Select all images
2. Clear assigned class labels or sets
3. Export tables as a CSV file. When exporting the values of a model, additional information will also be exported:

Model type | Provided value
Classification | Probability by class
Segmentation | Avg. Probability, Avg. IoU

4. Filter columns you want to show or hide in the main panel.
5. Image list
6. Label set information (Train set/Test set, labels, etc.) and model test results for each image.
7. Sort columns in ascending or descending order.
8. Search for keywords in the column.

Image view panel

1. Threshold/Sensitivity: Adjust label settings.
2. Properties: Details on the displayed image and a properties panel to adjust image details.
3. Zoom settings: Adjust the zoom level of the displayed image.

Create/Copy label set

Label set
A label set is a unit of dataset, which consists of images, a label type, labeling, and an image split into Train/Test sets.
The label type determines the type of deep learning model you are creating. Thus, when you create a new label
set, you need to select the appropriate label type for your project.
You may create multiple label sets for one image set and create one model for each label set.

Label set types

There are five label types: Classification, Segmentation, Object Detection, OCR, and Anomaly Detection.
• Classification: Categorizes each image by its class, such as Good/Bad.
• Segmentation: Recognizes the shape and location of objects in images.
• Object Detection: Detects the location and number of objects in images.
• Anomaly Detection: Categorizes Normal and Anomaly images by training Normal images only.
• OCR: Recognizes text, numbers, and symbols in images.

See Classification, Segmentation, Object Detection, Anomaly Detection and OCR for more

How to create label sets


Creating a label set is the first step in the Data tab. Create a label set in the following cases:
• When creating your own label set instead of using the label set created by another user
• When class criteria have changed and new labeling needs to be done with different criteria
• When changes are made to the previously created label set by you or another user
• When comparing the performance of each model created by various label sets
• When improving the performance of a model by changing the labels and set status after the model
evaluation

Step-by-Step Instruction

1. Click 'New' to create a new label set.


2. In the ‘Create new label set’ dialog, select a project type (Classification, Segmentation, Object Detection,
Anomaly Detection or OCR), enter the name and description and click 'Create'.

Depending on the label type, the deep learning model type and labeling method will change accordingly.

For label sets created in a version earlier than v1.1, the set type will be displayed as N/A. If you would like to use
a previous label set, assign it a label type.

Copy
1. In the Data tab, click the [Manage] button located at the top of the screen.
2. In the Manage dialog, click [Copy] to make a copy of the label set.
3. The label set will be copied within the same image set.

Import/Export label set
Import
1. Click the [Import file] button.
2. Select whether to import ROI/Mask data or labeling data (Label, Train/Test set, Retest set, OCR rotation).

3. For ROI/Mask data import, select a JSON file and import.

4. For labeling data import, choose the file format you wish to import. For all label types, you may import JSON
files, but for Segmentation you may also import labels in the format of masking images.

5. After selecting a labeling data file, compare the classes in the JSON file (or masking image) with the ones that
already exist in the label set.

6. Then, select whether to replace all existing labeling data with the JSON file (or masking image) or import new
data only.

Export
1. Click the [Export file] button.
2. Then, from the provided list, select the label sets and models you wish to export.
3. You may choose to export either ROI/Mask or labeling data (or both) and for labeling data you may choose to
export as either JSON file or masking image (or both).
4. Finally, click [Export] to download as a zip file.

Setting ROI and mask


In the ROI/Mask tab, you can apply ROI (Region of Interest) and masks on the images before you start labeling.
This helps remove unwanted regions of the image and allows the model to focus only on valuable parts of the
image.

When applying ROI and mask, please note the following:


• ROI and mask will be equally applied to all images that have the same size as the current image. You may
check the current image size at the top of the ROI/Mask tab.
• Images that do not have an ROI and mask due to a difference in image size will not be included in training or
testing, and will be automatically assigned as [Not used] once training starts.
• For Object Detection, all labels extending outside the ROI will be displayed in red, and these labels will not be
included in training.

ROI
ROI (Region of Interest) enables you to select a specific region of the image you want the model to train on. Regions
outside the ROI will not be included in training or testing.

How to apply ROI
1. Select the [ROI] tool and draw a region on the image.

2. Using the [Select] tool, you can adjust the x, y position and the width/height of the ROI.

3. After you have applied the ROI to one image, click [Apply All] to apply the ROI to all images that have the same
image size.

Mask
Mask enables you to exclude regions of the image that you do not want the model to train on.

How to apply mask


1. Select the [Mask] tool and draw masks on parts of the image you wish to exclude.

2. Using the [Select] tool, you can adjust the position and size of the masks.

3. Check the [Invert mask] option if you wish to swap the masked and unmasked areas.

4. After you have applied the mask to one image, click [Apply all] to apply the mask to all images that have the
same image size.

Label your images


In the Labeling tab, you can create classes, draw labels, and split the images into Train and Test sets.

How to do labeling
1. Start labeling your images.

2. Labeling tools are displayed based on the model type of the selected label set.

3. Click the [+] button to create as many classes as you need. Click the menu icon on the right side of the class
name to edit the class name and color.

4. Label images.

Classification label
Select the image and click on the desired class; the class label will appear in the upper left corner of the image.

Segmentation label
Select a class and draw a label on the image using the labeling tool.
• If labels of the same class overlap while drawing, those labels will be merged.
• If you press and hold the Ctrl key while drawing, the overlapped region of the selected class will be erased.
• If you overlap labels of different classes, the overlapped region will belong to the label drawn last.
• If you wish to change the class after drawing one label, select the label and then click another class.
• If you wish to change the class of multiple drawn labels, use the [Select] tool to select multiple labels and
then click another class.

Object Detection label
Draw a box on the image by clicking on the box tool.

NOTE: For Object Detection, all labels extending outside the ROI will be displayed in red, and these labels will not
be included in training.

Anomaly Detection label
Select the image from the table on the left-hand side and then click Normal or Anomaly class to assign the image
to the class.

Generally, assigning classes in Anomaly Detection is similar to Classification. However, only two
classes exist in Anomaly Detection (the Anomaly class and the Normal class), and you cannot add or delete
classes. The Anomaly Detection model is trained only on Normal images and then categorizes images as Normal
or Anomaly. Thus, images in the Train set cannot be assigned to the Anomaly class. Test set images can
be assigned to either the Normal or the Anomaly class.

OCR label
1. Use the [Rotate image] in the Labeling tab to rotate the image to appear upright. Click [Apply selected] or
[Apply all] to apply the same angle to all images.

NOTE: If the image is not at a normal angle, it is not possible to label correctly, which can affect the training
results.

2. Click the labeling tool and draw a box on the image. After drawing a box, the size of the box will be fixed, making
it easy to continue drawing the next boxes. To release the fixed size, press the ESC key or click
the labeling tool button again. Every time you draw a box, an input space will be created on the right
side, one per box drawn.

3. Select the character case you would like to apply in the label box (sentence case, uppercase, or lowercase). In
Neuro-T, OCR recognizes English characters, numbers, and 5 special characters; no other languages or symbols are
recognized. The English recognition mode can be selected from the following three types:

• Sentence case (Tt): select this input method if your dataset consists of both uppercase and lowercase characters.
• Upper case (TT): select this input method if your dataset consists only of uppercase characters.
• Lower case (tt): select this input method if your dataset consists only of lowercase letters.
In all cases, you can also input numbers and 5 kinds of special characters (+ - : / &).

4. Then, select whether to stack labeled boxes by creation time, horizontally, or vertically.

5. Finally, type the characters in the labeled box.

You can change the input method while labeling. For example, if you have labeled in lowercase and decide to
switch the input method to uppercase partway through, all previously typed characters will be changed to
uppercase and trained accordingly. However, changing the input method to sentence case will not affect any
previous input; you can enter both uppercase and lowercase characters.

Auto-labeling
Auto-labeling helps you label your images using existing models. You can label images in either a new label set or
the current label set.

Step-by-Step Instructions
1. Choose the label set you want to auto-label. You can choose either [Current label set] or [New label set].
2. Click [Select] to choose an existing model for auto-labeling.
3. Select the images that you want to label and click [Apply selected], or click [Apply all] to label all images.

If you apply auto-labeling to an image that already has label information, the previous label information will be
deleted and replaced by the auto-generated labels.

Split Train/Test dataset

Guide to splitting data set


The dataset is divided into a Train set and a Test set, which are used to create a model. The default ratio is 85:15
(Train set : Test set) and can be changed. If the deep learning algorithm cannot train on a sufficient number of images,
the performance of the model may decrease. Therefore, the number of Train images should be greater than the
number of Test images.

There are two ways to assign images to the Train set or Test set. The first is to select images and manually assign
them to a set. The second is to automatically assign images to a set by a ratio (default 85:15) using 'Random
split', sketched below.
Effective method to split data set
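
As a conceptual sketch of what a ratio-based random split does (illustrative only, not Neuro-T's actual implementation):

```python
import random

def random_split(images, train_ratio=0.85, seed=0):
    """Shuffle and assign ~train_ratio of images to Train, the rest to Test."""
    shuffled = images[:]
    random.Random(seed).shuffle(shuffled)
    cut = round(len(shuffled) * train_ratio)
    return {"Train": shuffled[:cut], "Test": shuffled[cut:]}

split = random_split([f"img_{i:03}.png" for i in range(100)])
print(len(split["Train"]), len(split["Test"]))  # 85 15
```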

How to split data set


1. Set a ratio to randomly split and assign images to either the Train set or Test set. By default, the ratio is set to 85:15.
2. Click [Split selected] or [Split all].
3. Check the main panel to see if the ‘set’ column is displayed as Train or Test.

NOTE: This feature does not apply to images assigned as ‘Not used’.

Assign image as Train or Test set (Manually)

1. Select an image.
2. Click [Train set], or [Test set].
3. Check the main panel to see if the Set column is displayed as Train or Test.

Assign image as [Not used]


'Not used' is for when you do not want to use an image for the model without deleting it and damaging the image set.
1. Select an image you do not want to use for the model.
2. Click [Not used].

3. Check the main panel to see if the Set column is displayed as ‘Not used’.

Clear sets
At the top of the data table, click [Clear] and choose between [Clear class labels] or [Clear set] to clear the data
from the selected image(s).

Merge and copy your class: Class mapping
How to use class mapping
Class mapping allows you to create a new label set by changing the classes of the existing label set.
The class merging feature within [Class mapping] can combine multiple classes into one class.

1. Click the [Class mapping] icon.

2. Select the classes you wish to merge in the 'Classes of the existing label set' section, and then click [Merge].

3. The merged class will be displayed in the Classes of a new label set section.
4. If you would like to copy the existing class to a new label set, click [Move].

5. After clicking [Next], create a new label set with the classes you have arranged, and click [Submit].

6. The new label set will be displayed in the Data tab.

View-only mode
After generating a model, your label set will be in 'view-only mode' to prevent you from editing the label set and unintentionally losing your original work and the results of the generated models. For example, let's say that your Classification model correctly predicted 'Good' apple images as 'Good'. However, for some reason, you decide to change a few apple images labeled 'Good' to 'Bad' in the original label set. Later, when you go back to compare whether the predicted classes match the labeled classes, you will notice that they no longer match: while the model states the prediction is correct, you will see the model predicting an apple image labeled 'Bad' as 'Good'.
Thus, once you have generated a model with a label set, that label set will be in view-only mode. During view-only mode, certain edit features are limited to protect your original work from being unintentionally modified.

If you want to make changes to the label set to achieve good model performance, you can do one of the
following two options:
• Create copy (recommended): Protect your original labeling data by creating a copy of the label set in which
you will be doing the editing work. This allows you to keep track of your work history.
• Enable edit: If you don’t mind losing your original labeling data, you may directly edit the current label set.
Be aware that the labeled data and the predicted results may not match after editing the label set.

Compare labels and models
Click [Compare labels and models] to easily compare multiple label sets and results of generated models.

Step-by-Step Instructions
1. Click the [Compare labels and models] button on the top of the main panel and select the label sets or models
you wish to compare.

2. Each row will be presented in the main panel, which is located on the left-hand side of the Data tab.
3. Use the [View filter] button to show or hide the label sets' and models' predicted results and their information on the image view panel. This helps compare predicted/labeled regions. Only the label sets and models that are currently loaded in the main panel will be displayed in the view filter.
View filter
• Image adjustment: adjust image brightness and contrast
• Filter by features:
o Label opacity: Adjust the transparency of labels
o ROI opacity: Adjust the transparency of ROI
o Mask opacity: Adjust the transparency of mask areas
o Info text: choose the color of the text
o Show info text/all label sets/all models: show/hide all labels
• Filter by label set/model: Filter labels by class

Filter by label set/model


• Show all classes: show all label set/model class labels (you may show/hide each class individually)
• Show CAM: show/hide CAM(Class Activation Map) results for Classification models
• Show labeled boxes: show/hide labeled boxes for Anomaly Detection models.

Definition for ‘Class Activation Map (CAM)’ and how to utilize CAM
How to filter noisy labels

Convert labels to images
In the Data tab, you may convert your labels to image sets. Depending on the label type, the converting method will differ.

Classification
In Classification, the labels are the classes assigned to images. When converting class labels to images, the
images in the selected class are saved as a new image set.
To convert Classification labels to images, do the following:
1. Click [Labels to images] at the top right of the Data tab.
2. Select a class you wish to convert and click [Next].
3. Enter the name and description of the new image set and click [Convert].

Segmentation
In Segmentation, the labels are pixel regions (boxes, polygons, etc.) assigned with a class. When converting labels to images, labels will be cropped or masked as new images.
To convert Segmentation labels to images, do the following:
1. Click [Labels to images] at the top right of the Data tab.
2. In the dialog, select a class you wish to convert and click [Next].
3. Enter the name and description of the new image set.
4. Select a Convert method and click [Convert].

Convert method includes:


1. Shape of labels
• Original: Use the exact same shape as it was labeled
• Box: Create labels into boxes
• Fitted box: Create labels into boxes that tightly fit the object (boxes may be rotated).
2. Crop or mask labels
• Crop in equal size: Crop all labels into equal size images
• Crop in actual size: Crop labels into their actual size or with padding around the boundary. Images may vary in
size.
• Mask: Convert labels to masks. If an image contains two labels, the image will contain two masks.
• Invert mask: Mask all areas around the labels.

Object Detection
In Object Detection, the labels are bounding boxes. When converting labels to images, labels are cropped or
masked depending on the settings.
To convert Object Detection labels to images, do the following:
1. Click [Labels to images] at the top right of the Data tab.
2. In the dialog, select a class you wish to convert and click [Next].
3. Enter the name and description of the new image set.
4. Select a Convert method and click [Convert].

Convert method includes:


1. Crop or mask labels
• Crop in equal size: Crop all labels into equal size images
• Crop in actual size: Crop labels into their actual size or with padding around the boundary. Images may vary in
size.
• Mask: Convert labels to masks. If an image contains two labels, the image will contain two masks.
• Invert mask: Mask all areas around the labels.

Anomaly Detection
In Anomaly Detection, the labels are the classes assigned to images. When converting class labels to images, the
images in the selected class are saved as a new image set.
To convert Anomaly Detection labels to images, do the following:
1. Click [Labels to images] at the top right of the Data tab.
2. In the dialog, select a class you wish to convert and click [Next].
3. Enter the name and description of the new image set and click [Convert].

OCR
In OCR, converting labels to images is not supported.

Flowchart
Flowchart enables you to build, visualize, and connect task flows within your workspace. For example, if your task is to detect moving objects and classify what the objects are, you may use a single Object Detection model. However, to enhance the performance of the model, you may create multiple models: for example, an Object Detection model to detect moving objects, then a Classification model to classify the detected objects. To manage a series of connected tasks effectively, they should be kept together in one place, and 'Flowchart' serves as that place for workflow management.

Flowchart Concepts
1. Parent image set: The first image set you select to start a flowchart.
2. Child image set: An image set created from a label set using [Labels to images].

Step-by-Step Instructions

1. Click the [Flowchart] button at the top of your screen.


2. In the Flowchart dialog, click [+], enter a name, and click [Apply] to add a new flowchart to the list.
3. Use the [menu] icon to edit the name or delete the flowchart.
4. To start, click on the flowchart and click the [Click to start] node that appears at the center of the canvas.
5. In the right panel, select the parent image set to create the first node.
6. Click the created parent image set node, and from the right panel, select a label set to connect it to the parent
image set node. When the label set node is created, class nodes that pertain to the label set will also appear
on the canvas.
7. Click a class node, and in the right panel select a child image set or click [Convert labels to images] to create a
child image set.
8. Repeat steps 5 to 7 to create more complex flowcharts.

NOTE

Flowchart is managed at the workspace level. Therefore, you cannot connect image sets or label sets from another workspace. The depth of the flow is restricted to 5 levels; thus, you may connect up to 5 label sets in a single flowchart. The number of flowcharts you can create is unlimited.

Train your own models: Train tab
In the Train tab, you may train models using a label set for which ROI/Mask, labeling, and Train/Test splitting have been completed. You may adjust the model's settings, such as the model type and training conditions. Once training is completed, model evaluation will be performed automatically, so you can check the performance of the model.

Components in the Train tab

1. Select label set: Select the label set you wish to train.
2. Dataset details card: Check the image size, summary, and image distribution.
3. Create model card: Adjust training conditions and start training.

Adjust training conditions


Guide to adjusting training conditions
In the Train tab, set the parameters for each training trial. Different types of parameters are provided for each
label set type and the model performance may vary depending on the training conditions.
You can't start training if the label set does not meet the minimum requirements. If so, please go back to the Data tab.

Training precaution

Running other operations on the PC that occupy CPU and GPU memory may affect the stability of the software. It is strongly recommended not to run other operations while training.

It is recommended to turn off GPU acceleration in the browser of the PC.


Please follow these steps:

1. Open Google Chrome and click Customize and Control Google Chrome > Settings > Show advanced
settings.
2. In the System section, uncheck the box next to ‘Use hardware acceleration when available’.
3. Restart Google Chrome.
*These steps may vary depending on your browser.

Classification

1. Are all images in the Test set labeled?: Choose Yes or No. It is highly recommended to assign the ground truth
(labels) to all test images to get more accurate evaluation results.
2. Model name: Enter the name of the model.
3. Train method: Select a training method.
a. Quick learning: Quick learning quickly searches the best combination of hyperparameters. This is a good
method to start training and evaluating your model.
b. Auto deep learning: Searches the best combination of hyperparameters. As you move to Level 3, training
will take longer because it will search for more diverse cases. You may restrict training time.
c. Retraining: Retraining (fast or standard) allows you to reuse a previously constructed model's architecture and hyperparameters to create a new model. This will reduce training time.
4. Train parameters: Adjust training image size, inference speed and embedded device.

※If all images are equivalent in size, the default value will be displayed as the image size.

NOTE

If the imported images differ in size, the values will be displayed based on the size of the smallest image.
The minimum image size is 128x128 and the maximum image size is 512x512.
If the size of the original images and training images are different, images will be automatically resized during
training.

5. Advanced parameters: Select a resize method, turn on/off data augmentation or fix validation set if needed.
6. Start training: Click the button to start training. Once training is complete, evaluation of the model will start
automatically.

Segmentation

1. Are all images in the Test set labeled?: Choose Yes or No. It is highly recommended to assign the ground truth
(labels) to all test images to get more accurate evaluation results.
2. Model name: Enter the name of the model.
3. Train method: Select a training method.
a. Quick learning: Quick learning quickly searches the best combination of hyperparameters. This is a good
method to start training and evaluating your model.
b. Auto deep learning: Searches the best combination of hyperparameters. As you move to Level 3, training
will take longer because it will search for more diverse cases. You may restrict training time.
c. Retraining: Retraining (fast or standard) allows you to reuse a previously constructed model's architecture and hyperparameters to create a new model. This will reduce training time.
4. Train parameters: Adjust training image size, inference speed and embedded device.

※ If all images are equivalent in size, the default value will be displayed as the image size.

NOTE

For segmentation training, you will see different parameters depending on the size of your images to improve
the performance of the segmentation model.
1. If image sizes are smaller than 512x512, you will only be provided with the ‘Image size’ default parameter.
2. If image sizes are larger than 512 x 512, you will be provided with ‘Patch size’ and ‘Scale factor’ parameters.
3. If images all vary in size and at least one image or one side of an image is greater than 512x512, you will
be provided with ‘Patch size’ and ‘Scale factor’ parameters.

5. Advanced parameters: Select a resize method, turn on/off data augmentation or fix validation set if needed.
6. Start training: Click the button to start training. Once training is complete, evaluation of the model will start
automatically.

Object Detection

1. Are all images in the Test set labeled?: Choose Yes or No. It is highly recommended to assign the ground truth
(labels) to all test images to get more accurate evaluation results.
2. Model name: Enter the name of the model.
3. Train method: Select a training method.
a. Quick learning: Quick learning quickly searches the best combination of hyperparameters. This is a good
method to start training and evaluating your model.
b. Auto deep learning: Searches the best combination of hyperparameters. As you move to Level 3, training
will take longer because it will search for more diverse cases. You may restrict training time.
c. Retraining: Retraining (fast or standard) allows you to reuse a previously constructed model's architecture and hyperparameters to create a new model. This will reduce training time.
4. Train parameters: Adjust training image size, inference speed and embedded device.

※If all images are equivalent in size, the default value will be displayed as the image size.

NOTE

If the imported images differ in size, the values will be displayed based on the size of the smallest image.
If the size of the original images and training images are different, images will be automatically resized during
training.

5. Advanced parameters: Select a resize method, turn on/off data augmentation or fix validation set if needed.
6. Start training: Click the button to start training. Once training is complete, evaluation of the model will start
automatically.

Anomaly Detection

1. Are all images in the Test set labeled?: Choose Yes or No. It is highly recommended to assign the ground truth
(labels) to all test images to get more accurate evaluation results.
2. Model name: Enter the name of the model.
3. Train method: Select a training method.
a. Quick learning: Quick learning quickly searches the best combination of hyperparameters. This is a good
method to start training and evaluating your model.
b. Auto deep learning: Searches the best combination of hyperparameters. As you move to Level 3, training
will take longer because it will search for more diverse cases. You may restrict training time.
c. Retraining: Retraining (fast or standard) allows you to reuse a previously constructed model's architecture and hyperparameters to create a new model. This will reduce training time.
4. Train parameters: Adjust training image size, inference speed and embedded device.

※If all images are equivalent in size, the default value will be displayed as the image size.

NOTE

If the imported images differ in size, the values will be displayed based on the size of the smallest image.
The minimum image size to train in the Anomaly Detection model is 128x128 and the maximum image size is
512x512.
If the size of original images and training images are different, images will be automatically resized during
training.

5. Advanced parameters: Select a resize method, turn on/off data augmentation or fix validation set if needed.
6. Start training: Click the button to start training. Once training is complete, evaluation of the model will start
automatically.

OCR

1. Are all images in the Test set labeled?: Choose Yes or No. It is highly recommended to assign the ground truth
(labels) to all test images to get more accurate evaluation results.
2. Model name: Enter the name of the model.
3. Train method: Select a training method.
a. Quick learning: Quick learning quickly searches the best combination of hyperparameters. This is a good
method to start training and evaluating your model.
b. Auto deep learning: Searches the best combination of hyperparameters. As you move to Level 3, training
will take longer because it will search for more diverse cases. You may restrict training time.
c. Retraining: Retraining (fast or standard) allows you to reuse a previously constructed model's architecture and hyperparameters to create a new model. This will reduce training time.
4. Train parameters: Adjust training image size, inference speed and embedded device.

※If all images are equivalent in size, the default value will be displayed as the image size.

NOTE

If the imported images differ in size, the values will be displayed based on the size of the smallest image.
The minimum image size to train in the OCR model is 128x128 and the maximum image size is 512x512.
If the size of original images and training images are different, images will be automatically resized during
training.

5. Advanced parameters: Select a resize method, turn on/off data augmentation or fix validation set if needed.
6. Start training: Click the button to start training. Once training is complete, evaluation of the model will start
automatically.

When you train an OCR model, training will be divided into two processes, Rotating and Training. The remaining
time and the progress percentage for each training will be displayed in the ‘Task list’.

Evaluating & reporting results
Evaluate performance of your models: Result tab
The Result tab allows you to evaluate the performance of the generated model after completing training.
Evaluation of the performance is based on the images assigned to the Test set in the Data tab.
Neuro-T provides several metrics to evaluate the performance of a model. The confusion matrix shows the overall performance of the model, along with the Accuracy, Precision, Recall, and F1 Score for each label.

Key metrics

For all model types, four different key metrics are provided: Accuracy, Precision, Recall and F1 score.
The models shown in the table belong to the same Image Set. Each model has its four key metrics, and when you click a model, you can check the detailed results along with a donut chart of the key metrics.

• Accuracy: Ratio of images whose label is correctly predicted among all images.
• Precision: Ratio of actual Positives among those predicted to be Positive.
• Recall: Ratio of predicted Positives among those actually Positive.
• F1 score: Harmonic mean of Precision and Recall.

Refer to the 'Deep Learning & Statistics' section in this user guide for more information on these concepts.
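As a supplementary reference, the sketch below shows how the four key metrics are derived from the TP/FP/FN/TN counts of a confusion matrix (a simplified, one-class illustration with hypothetical counts, not Neuro-T's internal code):

    def key_metrics(tp, fp, fn, tn):
        """Compute the four key metrics from confusion-matrix counts."""
        accuracy = (tp + tn) / (tp + fp + fn + tn)
        precision = tp / (tp + fp) if (tp + fp) else 0.0  # actual Positives among predicted Positives
        recall = tp / (tp + fn) if (tp + fn) else 0.0     # predicted Positives among actual Positives
        f1 = (2 * precision * recall / (precision + recall)
              if (precision + recall) else 0.0)           # harmonic mean
        return accuracy, precision, recall, f1

    # Hypothetical counts for one class:
    print(key_metrics(tp=95, fp=5, fn=10, tn=90))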

Classification
Confusion matrix
                            Predicted
                   Class 1   Class 2   Class 3   Class 4
Actual    Class 1    100        0         1         0
(Ground   Class 2      0       99         0         0
truth)    Class 3      0        0       120         0
          Class 4      1        0         0        87

Each row of the matrix represents the cases in the actual class while each column represents the cases in the predicted class. Each cell shows the number of images.
The values on the diagonal are the results where the predicted and actual answers match. Therefore, the more values lie on the diagonal, the more accurate the results are.

ROC curve
A Receiver Operating Characteristic curve, or ROC curve, is a graph that illustrates the diagnostic ability of a classification model as the discrimination threshold varies.
The ROC curve is created by plotting the True Positive Rate (TPR) against the False Positive Rate (FPR) at various threshold settings.
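Conceptually, the curve can be traced as in the sketch below, which sweeps a threshold over predicted probabilities and records one (FPR, TPR) point per threshold (a hedged illustration; 'labels' and 'probs' are hypothetical inputs):

    def roc_points(labels, probs, steps=101):
        """labels: 1 = positive, 0 = negative; probs: model scores in [0, 1]."""
        points = []
        for i in range(steps):
            t = i / (steps - 1)  # threshold swept from 0 to 1
            tp = sum(1 for y, p in zip(labels, probs) if p >= t and y == 1)
            fp = sum(1 for y, p in zip(labels, probs) if p >= t and y == 0)
            fn = sum(1 for y, p in zip(labels, probs) if p < t and y == 1)
            tn = sum(1 for y, p in zip(labels, probs) if p < t and y == 0)
            tpr = tp / (tp + fn) if (tp + fn) else 0.0
            fpr = fp / (fp + tn) if (fp + tn) else 0.0
            points.append((fpr, tpr))
        return points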

ROC curve in Classification


All the classes in the model are shown on the graph by default. You can hide or show the graph of a specific class
by clicking the eye icon.

Confidence interval in Classification


The probabilities of test data generally tend to saturate close to 1. To show a confidence interval, the test data are randomly sampled to form sample groups, and a distribution is created using the averages of each sample group. The confidence interval in the 'Result' tab is displayed based on this distribution.
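The sketch below illustrates this bootstrap-style idea (a conceptual example, not Neuro-T's exact procedure): resample the per-image scores, average each sample group, and take a percentile interval over those averages.

    import random

    def bootstrap_interval(scores, n_samples=1000, confidence=0.95):
        rng = random.Random(0)
        means = sorted(
            sum(rng.choices(scores, k=len(scores))) / len(scores)
            for _ in range(n_samples)
        )
        lo_i = int((1 - confidence) / 2 * n_samples)      # e.g., 2.5th percentile
        hi_i = int((1 + confidence) / 2 * n_samples) - 1  # e.g., 97.5th percentile
        return means[lo_i], means[hi_i]

    print(bootstrap_interval([0.97, 0.99, 0.95, 1.0, 0.98] * 20))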

Score
You can find the score for each image in the Data tab if you click the [See result by image] button in the Result tab.
In Classification, the score is the probability by class. As shown in the image below, if the score is 99.92, it means that the model has predicted the 'Choco' image as Choco with 99.92% probability.

Probability threshold
You can reduce False Positives by applying a threshold to a Classification model. The probability threshold re-categorizes predicted classes with a low probability score into an 'unknown' class.
For example, if a model predicts 'Image_01' as the OK class with a probability of 56%, this image will not be assigned to the OK class under any threshold higher than 56%.
Please note that by using the probability threshold feature, you are applying a threshold to the prediction result of the existing model, instead of creating a new model.
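A minimal sketch of this behavior (assuming a hypothetical list of (image, predicted class, probability) tuples; not Neuro-T's internal code):

    def apply_threshold(predictions, threshold=57):
        """Re-categorize predictions below the threshold (%) as 'unknown'."""
        return [(img, cls if prob >= threshold else "unknown", prob)
                for img, cls, prob in predictions]

    preds = [("Image_01", "OK", 56.0), ("Image_02", "OK", 99.2)]
    print(apply_threshold(preds))
    # [('Image_01', 'unknown', 56.0), ('Image_02', 'OK', 99.2)]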

Guide to probability threshold


• Probability threshold is only applicable to Classification models.
• You can apply up to three different threshold values per model and may need to delete an existing threshold in order to create a new one.
• As threshold results are dependent on the original model, evaluating the original model impacts existing thresholds. You may need to re-apply thresholds after evaluating.
• The [Re-evaluate] and [Threshold] buttons will be grayed out for threshold results.
• By clicking the [Export model] button you can export the model and its threshold values, which allows you to run inference with the threshold applied without any additional process.
• Images below the threshold are classified into the unknown class and are excluded when calculating the key metrics (Accuracy, Precision, Recall, and F1 score).

Step-by-Step Instructions
1. Select a model.

2. If the model is a Classification model, you will be able to apply a probability threshold by clicking the [Apply threshold] button.

3. Click the checkbox to select the class and type in the minimum probability that you would like to set as a threshold. Only integers between 1 and 100 are valid.

4. The result of the threshold will be presented in the table with updated metrics, right below the original model. Also, you can click on the threshold to see the details just like other models.

Segmentation
Confusion matrix
In Segmentation, the confusion matrix is provided in three different ways: by pixel, by class, and by image.

1. Confusion matrix (by pixel)


Each row of the matrix represents the actual proportion for each class while each column represents the
predicted proportion for each class. The diagram below represents the table above.

                                Predicted
                                Class 1          Class 2 (Background)
Actual (Ground truth)
  Class 1                       87.2 (%) [A]     12.8 (%) [B]
  Class 2 (Background)           4.4 (%) [C]     95.6 (%) [D]

2. Confusion matrix (by class)


The matrix shows the predicted results for each class.
If the images are labeled with multiple classes within one image, they can be counted as duplicates in the matrix.

                 Total number of images   Good   Intermediate   Bad
  Background
  Class1
  Class2

Good:
• No ground truth in the image and no predicted area.
• There is ground truth in the image and the IoU of the predicted area is 0.5 or higher.

Intermediate:
• There is ground truth in the image and the IoU of the predicted area is between 0 and 0.5 (0 < IoU < 0.5).

Bad:
• There is no ground truth in the image, but a predicted area exists.
• There is ground truth in the image, and the IoU of the predicted area is 0.

3. Confusion matrix (by image)
The matrix shows the predicted results for each image.
When the image is labeled with multiple classes, it is divided into Good/Intermediate/Bad; all predicted results
are taken into account comprehensively for all classes.

                 Total   Good   Intermediate   Bad
  # of images

Good:
• No ground truth in the image and no predicted area.
• There is ground truth in the image and the IoU of the predicted area is 0.5 or higher (IoU ≥ 0.5).

Intermediate:
• There is ground truth in the image, the IoU of the predicted area for at least one class is greater than 0.5, and the IoU of the predicted area for the remaining classes is less than 0.5.

Bad:
• There is ground truth in the image, and the IoU of the predicted area is 0.
• There is ground truth in the image and the IoU of the predicted area for all classes is less than 0.5.

IoU (Intersection over Union)
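As a reference for the criteria above, here is a minimal sketch of how IoU can be computed for two axis-aligned regions given as (x1, y1, x2, y2) boxes (illustrative only, not Neuro-T's internal code):

    def iou(box_a, box_b):
        ax1, ay1, ax2, ay2 = box_a
        bx1, by1, bx2, by2 = box_b
        # Intersection rectangle area (zero if the boxes do not overlap).
        inter = (max(0, min(ax2, bx2) - max(ax1, bx1))
                 * max(0, min(ay2, by2) - max(ay1, by1)))
        union = ((ax2 - ax1) * (ay2 - ay1)
                 + (bx2 - bx1) * (by2 - by1) - inter)
        return inter / union if union else 0.0

    print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # ~0.143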

Score
In Segmentation, 'Score' means the average value of the F1 Score for each class over the pixels of each image. When you evaluate a model in Segmentation, each pixel is predicted as a specific class. For example, one pixel is predicted as class1, and another pixel is predicted as class2 (shown in the image below). If the score is 99.92, it means that the average value (average F1 Score) of pixels predicted as 'Choco' in the 'Choco_12' image is 99.92%.
The detailed concept of the F1 Score can be found in the 'Result' tab.
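For reference, a minimal sketch of this per-image score (conceptual; 'gt' and 'pred' are hypothetical per-pixel class lists of equal length, not Neuro-T's internal representation):

    def segmentation_score(gt, pred, classes):
        """Average the pixel-wise F1 Score over the classes present."""
        f1s = []
        for c in classes:
            tp = sum(1 for g, p in zip(gt, pred) if g == c and p == c)
            fp = sum(1 for g, p in zip(gt, pred) if g != c and p == c)
            fn = sum(1 for g, p in zip(gt, pred) if g == c and p != c)
            if tp + fp + fn == 0:
                continue  # class absent in both label and prediction
            f1s.append(2 * tp / (2 * tp + fp + fn))  # F1 = 2TP / (2TP + FP + FN)
        return 100 * sum(f1s) / len(f1s) if f1s else 0.0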

Probability/Size threshold
Probability threshold
Using the probability threshold function, you can delete areas with low probability. The standard for the threshold is Avg. probability by area, which is defined as 'the sum of the probabilities of each pixel in the area divided by the number of pixels in the area'.
For example, if you want to eliminate areas with an average probability under 56%, you can remove them using the probability threshold. Such areas will be considered background, the same as all pixels that you have not labeled and assigned to a specific class.
Please note that by using the probability threshold feature, you are applying a threshold to the prediction result of the existing model, instead of creating a new model.

Size threshold
In a Segmentation model, the predicted region may include extremely small areas which are not objects of interest. By using the size threshold, such areas, which lower the model performance, can be removed from the predicted region.
For example, if a predicted region for specific images includes negligible areas smaller than 2x2 pixels, you can eliminate such regions with the size threshold. Such areas will be considered background, the same as all pixels that you have not labeled and assigned to a specific class.
Please note that by using the size threshold feature, you are applying a threshold to the prediction result of the existing model, instead of creating a new model.
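The sketch below illustrates both thresholds applied to a list of predicted regions (a conceptual example; each region is assumed to be a dict carrying its per-pixel probabilities and its width/height, which is not Neuro-T's internal representation):

    def filter_regions(regions, min_avg_prob=None, min_w=None, min_h=None):
        kept = []
        for r in regions:
            avg_prob = sum(r["pixel_probs"]) / len(r["pixel_probs"])
            if min_avg_prob is not None and avg_prob < min_avg_prob:
                continue  # probability threshold: low-probability area removed
            if min_w is not None and r["width"] < min_w:
                continue  # size threshold ('and' condition between width/height shown)
            if min_h is not None and r["height"] < min_h:
                continue
            kept.append(r)  # surviving regions; dropped ones become background
        return kept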

Guide to threshold
• You can apply up to three different threshold values per model and may need to delete existing thresholds in order to create a new threshold.
• As threshold results are dependent on the original model, evaluating the original model impacts existing thresholds. You may need to re-apply thresholds after evaluation.
• The [Re-evaluate] and [Threshold] buttons will be grayed out for threshold results.
• By clicking the [Export model] button you can export the model and its threshold values, which allows you to run inference with the threshold applied without any additional process.

Step-by-Step Instructions
1. Select the model.

2. If the model is a Segmentation model, you will be able to apply a probability or size threshold by clicking the [Threshold] button.

3. Decide whether you want to use the size threshold or the probability threshold. Using both is also allowed.
• Size threshold: Click the checkbox to select the class and then type in the minimum pixels for width and height for each class. You can apply either an 'and' or an 'or' condition.
• Probability threshold: Click the checkbox to select the class and then type the Avg. probability by area for each class.

NOTE

If you want to apply a probability threshold to models that were created in a version earlier than 2.1, you will need to re-evaluate them first. Please re-evaluate the original model prior to applying the threshold.

4. The result of the threshold will be presented in the table with updated metrics, right below the original model.
Also, you can click on the threshold to see details just like in other models.

Object Detection
Confusion matrix
Confusion matrices by object and by image are provided in Object Detection.

1. Confusion matrix (by object)

           # of Labeled boxes   # of Predicted boxes   # of Matching boxes
  Class1
  Class2
  Class3

• Number of labeled boxes: Displays the number of labels by class.
• Number of predicted boxes: Displays the number of labels predicted by the model, by class.
• Number of matching boxes: Displays the number of labels where the IoU between the labeled and predicted boxes is greater than 0.5.

2. Confusion matrix (by image)
The matrix shows the predicted results for each image.
When the image is labeled with multiple classes, it is divided into Good/Intermediate/Bad; all predicted results
are taken into account comprehensively for all classes.

                 Total   Good   Intermediate   Bad
  # of images

Good:
• No ground truth in the image and no predicted area.
• There is ground truth in the image and the IoU of the predicted area is 0.5 or higher (IoU ≥ 0.5).

Intermediate:
• There are more than 4 ground-truth objects, all ground truths are predicted correctly, and there are no more than two overkills (false detections) in the background; or the predicted count and label count are the same but no more than two labels are bad.

Bad:
• There is ground truth in the image, and the IoU of the predicted area is 0.
• There is ground truth in the image and the IoU of the predicted area for all classes is less than 0.5.

IoU (Intersection over Union)

Score
In Object Detection, 'Score' means the average classification probability for each box. For example, think of a situation where two objects in the image below belong to Class1 and Class2 respectively; if the score is 99.92, then the average probability of predicting the Class1 object as Class1 and the Class2 object as Class2 is 99.92%.

Probability/Size threshold
Probability threshold
Using the probability threshold function, you can delete areas with low probability. The standard for the threshold is Avg. probability by area, which is defined as 'the sum of the probabilities of each pixel in the area divided by the number of pixels in the area'.
For example, if you want to eliminate areas with an average probability under 56%, you can remove them using the probability threshold. Such areas will be considered background, the same as all pixels that you have not labeled and assigned to a specific class.
Please note that by using the probability threshold feature, you are applying a threshold to the prediction result of the existing model, instead of creating a new model.

Size threshold
In an Object Detection model, the predicted region may include extremely small areas which are not objects of interest. By using the size threshold, such areas, which lower the model performance, can be removed from the predicted boxes.
For example, if a predicted box for specific images includes negligible areas smaller than 2x2 pixels, you can eliminate such regions with the size threshold. Such areas will be considered background, the same as all pixels that you have not labeled and assigned to a specific class.
Please note that by using the size threshold feature, you are applying a threshold to the prediction result of the existing model, instead of creating a new model.

Guide to threshold
• You can apply up to three different threshold values per model and may need to delete existing thresholds in order to create a new threshold.
• As threshold results are dependent on the original model, evaluating the original model impacts existing thresholds. You may need to re-apply thresholds after evaluation.
• The [Re-evaluate] and [Threshold] buttons will be grayed out for threshold results.
• By clicking the [Export model] button you can export the model and its threshold values, which allows you to run inference with the threshold applied without any additional process.

Step-by-Step Instructions
1. Select a model.

2. If the model is an Object Detection model, you will be able to apply a probability or size threshold by clicking the [Apply threshold] button.

3. Decide whether you want to use the size threshold or the probability threshold. Using both is also allowed.
• Size threshold: Click the checkbox to select the class and then type in the minimum pixels for width and height for each class. You can apply either an 'and' or an 'or' condition.
• Probability threshold: Click the checkbox to select the class and then type the Avg. probability by area for each class.

4. The result of the threshold will be presented in the table with updated metrics, right below the original model. Also, you can click on the threshold to see details just like in other models.

OCR
Confusion matrix
Confusion matrices by OCR box and by image are provided in OCR.

1. Confusion matrix (by OCR box)

                 Labeled boxes   Good   Bad
  # of boxes

Good:
• No ground truth in the image and no predicted characters.
• The predicted characters are correct, and the IoU is greater than 0.5.

Bad:
• The predicted characters are wrong.
• The predicted characters are correct, but the IoU is less than 0.5.

2. Confusion matrix (by image)

                 Total   Good   Intermediate   Bad
  # of images

Good:
• No ground truth in the image and no predicted characters.
• There is ground truth in the image and the IoU of the predicted area is 1 (IoU = 1).

Intermediate:
• There are more than 4 ground-truth objects, all ground truths are predicted correctly, and there are no more than two overkills (false detections) in the background; or the predicted count and label count are the same but no more than two labels are bad.

Bad:
• There is ground truth in the image, and the IoU of the predicted area is 0.
• There is ground truth in the image and the IoU of the predicted area for all characters is less than 0.5.

Please note that we do not provide a score for OCR models.

Anomaly Detection
Adjusting sensitivity
Step-by-Step Instructions
1. Once the training is complete, you can check the model in the Result tab.
2. The model result, including the key metrics (Accuracy, Precision, Recall, and F1 score), changes based on the Sensitivity value. The default values of the key metrics are calculated based on the recommended Sensitivity value.
Definition for Sensitivity
3. Adjust the Sensitivity value from the graph below, and the key metrics values will change accordingly. There are two ways to adjust the Sensitivity: from the Result tab and from the Data tab.

Adjust sensitivity from the Result tab


Adjust the sensitivity by utilizing the histogram in the Evaluation result section. Click the desired area on the histogram, or manually enter the value; click the [Apply all] button to apply the sensitivity value to the entire model.

Adjust sensitivity from Data tab


Adjust the sensitivity by clicking the [Sensitivity] button at the top of the Image viewer and selecting the desired model. In the Data tab, you will be able to preview the predicted results (i.e., the class of the image and the bounding boxes in the image) for each image in advance by adjusting the sensitivity. If you would like to apply the adjusted sensitivity value to all images, please click the [Apply all] button.

NOTE

A bounding box will appear on the predicted anomaly area; the number and size of the bounding boxes can change based on the sensitivity value.

Confusion matrix
The confusion matrix for anomaly detection is similar to Classification with two classes. Each row of the matrix represents the cases in the actual class while each column represents the cases in the predicted class. Each cell shows the number of images.

                            Predicted
                            Normal   Anomaly
Actual (Ground truth)
  Normal                      40        0
  Anomaly                      1       20

Anomaly score
In Anomaly Detection, the score means the anomaly score of each image. The higher the score, the more likely the image is an outlier.

See result by image


After checking the overall evaluation results of the model in the Result tab, there are two ways to check the
detailed prediction result for each image.

See result by image in the Result tab


1. Select the model you wish to check the prediction results for in the model list.
2. Click the [See result by image] button.
3. The screen will switch to the Data tab, and the prediction results of the selected model will be shown on the main panel and the image view panel.

Compare labels and models button in the Data tab
Click the [Compare labels and models] button at the top of the main panel and check the rows you want to
compare.

The prediction results of the selected model will be shown on the main panel and the image view panel.
How to utilize the [Compare labels and models]

How to check the results of model
The information displayed in the main panel and the image viewer for each model type is described below.

[Classification]
Main panel: You can check whether the answer and prediction match by comparing the Class column of the label
set and Predicted Class column of the model.
Image viewer: Class of the label set and class predicted by the model will be shown in the left-hand corner of the
image viewer. If you click the view filter in the left-hand corner of the image viewer, you can turn on/off the CAM
results.

[Segmentation]
Main panel: Compare Class, Labels column of the label set and Predicted class, Labels column of the model and
see if the answer and the prediction match.
Image viewer: You can compare the labels of the label set and the model and see the result. If you click the view filter in the left-hand corner of the image viewer, you can turn on/off the labels.

[Object Detection]
Main panel: Compare Class, Labels column of the label set and Predicted Class, Labels column of the model to
see if the answer and the prediction match.
Image viewer: You can compare the labels of the label set and the model to check the results. If you click the
view filter in the left-hand corner of the image viewer, you can turn on/off the box of both label set and the
model.

[OCR]
Main panel: Compare Labeled box, Characters column of the label set and Predicted box, Predicted characters
column of the model to see if the answer and the prediction match.
Image viewer: Compare Characters of the label set to the predicted characters by model to check results. The
result will be shown in Labeled / Predicted of the Labeling tab on the right side. If you click the view filter in the
left-hand corner of the image viewer, you can turn on/off the box of both label set and the model.

[Anomaly Detection]
Main panel: Compare Class column of the label set and Predicted Class column of the model to see if the answer
and the prediction match.
Image viewer: The class of the label set and the class predicted by the model will be shown in the left corner of the image viewer. Through this, you will be able to see if the prediction and the answer match.

Evaluate your model
In the Result tab, you can evaluate your model with different datasets, such as the Train/Test set or the Retest set.

Step-by-Step Instruction for Evaluation


1. Select a model to evaluate from the generated models. If it is difficult to find the model you want, you can search for it by using the filter.

2. Click the [Re-evaluate] button to evaluate. You may select the set you would like to evaluate. The first evaluation will start automatically upon the completion of training. Clicking the [Re-evaluate] button will allow you to choose options, and you can combine those options.

- Train set: Evaluate Train set images.


- Test set: Evaluate Test set images.
- Retest set: Evaluate only the Retest set images (refer to 'How to use the Retest set').

* The Retest set is only activated on a label set that has more than one model.
* The Retest set cannot be combined with other sets when you do the evaluation.
* If the label set was created in a version earlier than 1.1, you cannot see new information such as the Class Activation Map and the confusion matrix (by image). Please retest your model on version 1.1 to see the new information.

3. Select the Evaluate with metric/without metric option and click the [OK] button to evaluate.
• Evaluate with metric: It is recommended to use this option if you want to evaluate images with ground truth.
* In Segmentation, Object Detection, and OCR, all unlabeled images are included in the metric as a 'Background' class.
• Evaluate without metric: Metrics will not be displayed after evaluating. It is recommended to use this option if you want to evaluate images without ground truth.

4. Check the evaluation results of the model. The overall evaluation results for all labels are displayed by default. Click on the labels on the left to see the specific evaluation results for each label.

NOTE: The predicted values for each image can be found in the ‘Data’ Tab.

5. The confusion matrix only displays the top 10 classes. The confusion matrix for all classes can be exported as a CSV file.
6. You may export the model you created as a .net file by clicking the [Export model] button.

How to improve model performance

How to use the Retest set


Using the Retest set, you can evaluate only the images you wish to evaluate without damaging the original label set information.
The Retest set is only activated on a label set that has more than one model created.

Guide to using Retest set


The Retest set is useful in the following situations:
• When you only want to evaluate a portion of the images among those assigned to the Test set (e.g., if you import new images to the image set and want to test only those images).
• When you want to preserve the Train and Test set information that you used when creating the first model, and evaluate further.
• When you want to evaluate a partial selection of the Train and Test sets.

Step-by-Step Instruction
1. Check and see if there is a Retest set column in the main panel.
2. Select the images you wish to assign to the Retest set, then click [Retest set].

3. Move to the Result tab.


4. After selecting the model from the model list, click [Re-evaluate].

5. Click [Retest set] from the drop-down menu and click [OK].

Import your model
In the Result tab, you can import a model previously made with Neuro-T. An imported model is useful when you do auto-labeling.

Step-by-Step Instruction
1. Click [Import model] in the Result tab.
2. Select the model file (.net file) that you want to import from your computer.
3. Enter a label set name and model name. A new label set will be created when you import your model.
4. Click [Import] to import your model.

NOTE

The model will be imported into the image set that you are working on. A new label set will be made, and the imported model will be assigned to the new label set.

Generating reports: Report tab


Create reports on the evaluation results of the model. The report can be exported in HTML format.

Guide to writing a report


There are spaces where you can write a summary or memo so that the exported report can be used for reporting purposes. Use these fields to create effective reports.

How to write a report


1. Select a model you wish to analyze in the report.

2. Then, choose whether to include images or not. If you choose to include them, you may decide how many images, and which predicted images, to include.
• Images predicted Good: Images with labels predicted correctly.
• Images predicted Intermediate: Images with labels predicted both good and bad.
• Images predicted Bad: Images with labels predicted incorrectly.

3. Click the [Generate report] button. The report creation section below will be activated.

4. Enter the title of the report.


5. Fill out the information in the Report info field. It is recommended to write a general summary of the model and evaluation results.

6. Check the model information and Train & Evaluation Result.

7. If you decided to include images, you will see the images along with their prediction results and other information. You may also leave a memo.

8. When done, click the [Export report] button to export the report in HTML format.
9. For Segmentation models, choose the image format in which you wish to export your images:
• Export original images
• Export labeled images as image files
• Export predicted images as image files

Database managements & system setting
Working with database

Import database
1. Click [Show hidden icons] on the taskbar.
2. Right-click the Neuro-T icon and click [Import Database].

3. Select the import type that is appropriate for you from the three options and click the [OK] button.

4. When you select [Import all database]: a login window for the admin console will appear. Enter the admin ID and password and click the [Login] button. When you select [Import each workspace's database] or [Import each workspace's label database]: a new window will appear with options for the database import. You can decide whether to evaluate your model before importing.

5. When you select [Import all database]: you will see step 6 on your next screen. When you select [Import each workspace's database]: a login window will appear where you can select the account to migrate the database into. Enter the User ID and Password and click the [Login] button.
6. Browse and select the database to import.

7. Click [OK].

8. Check for the [Database import completed.] message in the Neuro-T log window.

9. Run Neuro-T to check that the database has been moved successfully.

Export database
1. Click [Show hidden icons] on the taskbar.
2. Right-click the Neuro-T icon and click [Export database].

3. Select the export type that is appropriate for you from the two options and click the [OK] button. If you would like to export the entire database because you are updating to a new version of Neuro-T or installing Neuro-T on another PC, select [Export all database]. If you only need to send the database of a specific project, select [Export each workspace's database].

4. When you select [Export all database]: a login window for the admin console will appear. Enter the admin ID and password and click the [Login] button. (The default ID is 'admin', and the password is 'admin12345'.)
When you select [Export each workspace's database]: a login window for the user account will appear. Enter the User ID and password and click the [Login] button.

5. When you select [Export all database]: you will see step 6 on your next screen.
When you select [Export each workspace's database]: a window will appear for you to select a workspace. After selecting the workspace, click the [Back up] button. It is also possible to select multiple workspaces.

6. Click [OK].

7. Enter a file name for the database and select the path where you want to save the file.

8. Check for the [Database export completed.] message in the Neuro-T log window.

9. Check that the database has been saved as an NTW file.

Reset database
1. Click [Show hidden icons] on the taskbar.
2. Right-click the Neuro-T icon and click [Reset Database].

3. Click [OK].

4. Check for the [Database reset completed.] message in the Neuro-T log window.

Database storage management
You may change the disk used for saving the database.

How to allocate (change) a disk for database


1. Click [Show hidden icons] on the taskbar.
2. Right-click the Neuro-T icon and click [Disk allocation].

3. Select the folder (disk) you would like to change to.

System setting
GPU allocation
If you use Neuro-T on a multi-GPU PC, you can allocate the GPUs that you would like to use for Neuro-T.

How to allocate GPUs for Neuro-T for the first time


1. Install Neuro-T.
2. Click the Neuro-T icon or type 'Neuro-T' in the search box located in the Windows taskbar to turn on the server.
3. In the Neuro-T log window, select the desired GPU for Neuro-T.

How to allocate GPUs for Neuro-T


1. Click [Show hidden icons] on the taskbar.
2. Right-click the Neuro-T icon and click [GPU Allocation].

3. Select the desired GPU for Neuro-T.

Admin Console
An admin console is always included regardless of the purchased version. You may use the admin console to do
the following:
• Manage user accounts
• Manage workspace

How to access admin console


1. Click [Show hidden icons] on the taskbar.
2. Right-click the Neuro-T icon and click 'admin'.

3. The admin console login window opens.

NOTE

Server PC: You can also access the admin page by typing ‘localhost:8000/admin’ into the browser's address
bar.
Client PC: You can access the admin page by typing ‘IP address of server PC:8000/admin’ into the browser’s
address bar.
Please refer to ‘How to Run Neuro-T on a Client PC’ in the Installation Guide.

4. The default ID is 'admin', and the password is 'admin12345'.

How to change password
1. Click the 'admin' button at the top right of the screen and click the [Change password] button.

2. Enter a new password and click the [change my password] button.

Accounts management
In Accounts, you can add and delete accounts, and change account passwords.

Add (create) accounts


1. Click the [Add account] button at the top right of the screen.

2. Enter an ID and password, then click the [Save] button.

Delete accounts
1. Check the account to delete.
2. Select [Delete selected account] from the drop-down menu at the bottom of the screen and click the [Go]
button.

3. After confirming the account to delete again, click the [Yes, I'm sure] button.

Change the password of accounts


1. Click [Userid].

2. Click the blue text in the password description.

3. Enter a new password, then click the [Change password] button.

Workspace management
In the Workspace section, you may set up the following:

Create a new workspace


1. Click the [Add workspace] button at the top right of the screen.

2. Type the name of the workspace you want to create and add accounts to the workspace.
3. Click the [Save] button.

Add or delete accounts of workspace
1. All workspaces are displayed on the screen.
2. Click the 'ID' and enter the group to modify the accounts.

3. Select an account to delete from or add to the workspace.


4. Click the [Save] button to complete the account change.

Change the group master
1. All workspaces are displayed on the screen.
2. Click the 'ID' and enter the group to change the master of the workspace.

3. Select the account you wish to change the master to.


4. Click the [Save] button to complete the master change.

Useful tips
How to improve model performance
If you are not satisfied with the results of the evaluation, you may improve the performance by training a new
model with modifications.
Try the following steps to create a well-performing model:
• Make sure that there are no mistakes in the labeling process. Compare multiple label sets to find labeling noise.
• Check if there is a sufficient number of images for each class. Imbalanced data could lower the model performance.
• Include a sufficient number of images that are similar but categorized as different classes, in order to train the subtle differences between classes. In this case, labeling should be done very precisely to prevent any confusion of the deep learning models.
• Manually assign images to the Train and Test sets, making sure to include representative examples of each class. If there is a non-representative sample, try to assign such images as [Not used].
• To find the root cause of low performance, gather all images with wrong predictions and try to find a pattern among such images.

Filtering noisy labels


If you have multiple label sets created from different sources, you can compare them easily from the main panel in the Data tab. Using this feature, you can find discrepancies between label sets and filter noisy labels. Correct the label noise and use the improved label set in order to create better models. Also, you can compare the test results of the models to see how different labels impact the prediction results.

Reduce labeling noise by comparing multiple label sets

When multiple label sets are created, you can reduce labeling noise by comparing the labels of each image. In
the image above, the same image is specified as Banana in label set01 and Apple in label set02. In such cases,
you may recheck the image and adjust the label to improve accuracy.

Improve the performance of the model by comparing evaluation results of each label set
The overall evaluation results, such as Accuracy and Precision, are displayed on the Result tab for each model,
and the detailed results for each image are displayed on the main panel of the Data tab. You may check the
differences between various models (Status of set and label) and change the conditions for retraining.

Manually splitting Test/Train set


Providing representative input that can be used for training is necessary to train a high-performing model that can distinguish the differences between classes. Therefore, when there is a limited number of images for a specific class, it is crucial that the representative images of that class are manually assigned to the Train set. Additionally, if a certain image seems like an outlier, assigning it as [Not used] will prevent a distorted model.
Usually, you may use the Random split feature to automatically assign images to the Train or Test set; however, if you have an imbalanced data set or an insufficient number of images, you may specify the images that should be assigned to the Train set.

Also, train images and test images should be as similar as possible. For example, suppose that you are using deep learning to distinguish between different types of cars. If you train a model using only front views of cars, predictions may not be accurate for side views of the cars. In this situation, you can either allocate all types of images (front view and side view) to both the Train set and the Test set, assuming that there is a sufficient number of images for each view, or exclude side view images from both the Test set and the Train set.

In addition, the resolution and background of the images also affect the prediction results. This should be noted
when uploading the images.

Importing labels as masking images


When importing labels as masking images previously exported from Neuro-T:
You can import the previously exported masking images of the labels with no further procedure. Please refer to the 'How to import / export label set' section.
When importing labels as image files that have not been exported from Neuro-T:
If the masking image was not generated from Neuro-T, the image must meet the following requirements when imported:
• Masking images must be created with a pixel value of 255 (white) for the labeled area and a pixel value of 0 (black) for the background area.
• Masking images must be named 'filename_classname' in order to be recognized when importing; thus, each image should only include one class.
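A minimal sketch of a masking image that satisfies these requirements, using NumPy and Pillow (the file name, image size, and label area are hypothetical):

    import numpy as np
    from PIL import Image

    mask = np.zeros((512, 512), dtype=np.uint8)          # background = 0 (black)
    mask[100:200, 150:300] = 255                         # labeled area = 255 (white)
    Image.fromarray(mask).save("IMG_0001_scratch.png")   # named 'filename_classname'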

Resize method
There are four different types of resize transform algorithms: Nearest, Linear, Area, and Cubic.
Each algorithm has a different image scaling methodology, which affects image quality (resolution) and requires a different computational cost.
You may choose the most appropriate resize transform algorithm; for more information, go to http://opencv.org/ .
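For reference, the four methods correspond to OpenCV's interpolation flags, as in this minimal sketch ('input.png' and the 512x512 target size are hypothetical):

    import cv2

    img = cv2.imread("input.png")
    for name, flag in [("Nearest", cv2.INTER_NEAREST),
                       ("Linear", cv2.INTER_LINEAR),
                       ("Area", cv2.INTER_AREA),
                       ("Cubic", cv2.INTER_CUBIC)]:
        resized = cv2.resize(img, (512, 512), interpolation=flag)
        print(name, resized.shape)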

Image basis analysis (View filter)
You can analyze model performance not only based on the key metrics and confusion matrix in the Result tab, which are numeric, but also on an image basis from the Data tab.
Image-based results are presented in a different format for each model type: a Class Activation Map for Classification models, and predicted regions for Segmentation, Object Detection, and OCR models.
You may click the [View filter] button in the Data tab after selecting label sets and models.

Classification - Class Activation Map


A Class Activation Map (CAM) is a heat map that shows which areas the deep learning model focused on during Classification. The closer an area is to red, the bigger the impact it had on determining the predicted class of the image. For example, consider a deep learning model that focused on the defective area in the left-hand image but focused on unrelated areas in the right-hand image. In this case, the Class Activation Map allows you to judge whether the model was well trained or not; you can also retrain by changing the training conditions.
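Conceptually, a CAM is formed by weighting the last convolutional feature maps with the classifier weights of the predicted class, as in this NumPy sketch (a conceptual illustration with random placeholder arrays, not Neuro-T's internal code):

    import numpy as np

    feature_maps = np.random.rand(8, 16, 16)   # (channels, H, W), placeholder values
    class_weights = np.random.rand(8)          # weights for the predicted class
    cam = np.tensordot(class_weights, feature_maps, axes=1)   # weighted sum -> (H, W)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to 0..1
    # The normalized map is then upsampled to the image size and rendered as a heat map.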

How to utilize Class Activation Map (CAM)


• When you train a model in fast mode, the performance of the CAM may not be optimized due to the limitation on training time.
• The performance of the CAM may vary depending on the characteristics of the model, so it is recommended to treat the CAM results as a reference only. Note that the performance of the CAM is distinct from the performance of the model.
• The performance of the CAM is optimized when the image size is 512x512. If the original image size is smaller than 512x512, resizing the image to 512x512 during training will provide better CAM results.

Segmentation, Object Detection, OCR - Predicted labels
You can turn the predicted labels on and off from the image panel in the 'Data' tab using the 'View filter' function.
This is useful when analyzing model results image by image or comparing model performance.

How to utilize view filters for model predicted regions


• You can compare labeled and predicted areas based on their location and size.
• If you select multiple models from the [Compare labels and models] window, you can compare the predicted
labels from each model. You can also turn each model's prediction result on and off from the image panel by
using the [View filter] function.
• Predicted regions can also be exported as a JSON file and masking images. You can use the model-
predicted regions as new labels and create a new model based on this label set.

Definition of score in each model type


From the Data tab, you can check the score of each image.
Please note that each model type has a different definition of score.

Classification
In Classification, the 'score' is the probability by class. As shown in the image below, if the score is 99.92, it means
that the model has predicted the ‘Choco_12’ image to be Choco with 99.92% probability.
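
As an illustration of score as a class probability, the sketch below computes a softmax over hypothetical raw model outputs (logits); the values are chosen so the top class comes out near 99.92%:

```python
# Sketch: a Classification score as the predicted class's probability.
import numpy as np

logits = np.array([7.5, 0.2, -1.3])            # hypothetical per-class outputs
probs = np.exp(logits) / np.exp(logits).sum()  # softmax

predicted = int(np.argmax(probs))
score = probs[predicted] * 100                 # ~99.92 for the top class
print(predicted, round(float(score), 2))
```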

Segmentation
In Segmentation, the 'score' is the average F1 Score over the classes, computed on the pixels of each image.
When you evaluate a Segmentation model, each pixel is predicted to belong to a particular class. For example,
one pixel may be predicted to be class1 while another is predicted to be class2 (shown in the image below). If
the score is 99.92, it means that the average F1 Score of the pixels predicted to be ‘Choco' in the
‘Choco_12’ image is 99.92%.
The detailed concept of F1 Score can be found in the Result tab.
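
The sketch below illustrates this computation on a tiny hypothetical example: a pixel-wise F1 score is computed per class and then averaged over the classes present in the image:

```python
# Sketch: average pixel-wise F1 score over classes for one image.
# The 3x3 masks are illustrative only.
import numpy as np

gt   = np.array([[0, 0, 1], [1, 2, 2], [1, 2, 2]])   # ground-truth class per pixel
pred = np.array([[0, 1, 1], [1, 2, 2], [1, 2, 0]])   # predicted class per pixel

f1_scores = []
for c in np.unique(gt):
    tp = np.sum((pred == c) & (gt == c))
    fp = np.sum((pred == c) & (gt != c))
    fn = np.sum((pred != c) & (gt == c))
    f1_scores.append(2 * tp / (2 * tp + fp + fn))    # per-class pixel F1
    # (assumes each class appears in the image; guard against 0/0 in practice)

score = np.mean(f1_scores) * 100                     # image-level score in percent
```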

Object Detection
In Object Detection, the 'score' is the average classification probability over the predicted boxes. For example,
suppose that the two objects in the image below belong to Class1 and Class2 respectively. If the score is
99.92, then the average probability of predicting the Class1 object as Class1 and the Class2 object as Class2 is 99.92%.
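
As a minimal illustration, the image-level score is simply the mean of the per-box classification probabilities; the two values below are hypothetical:

```python
# Sketch: averaging per-box classification probabilities into an image score.
box_scores = [0.9995, 0.9989]                     # hypothetical per-box probabilities

score = sum(box_scores) / len(box_scores) * 100   # 99.92
```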

OCR
Please note that a score is not provided for OCR models.

Anomaly Detection
In Anomaly Detection, the score is the anomaly score of each image. The higher the score, the more
likely the image is an outlier.
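
As one generic way to act on these scores outside Neuro-T, the sketch below flags images whose anomaly score exceeds a chosen threshold; the file names, scores, and threshold are purely illustrative, and Neuro-T computes the score internally:

```python
# Sketch: flagging likely outliers from hypothetical anomaly scores.
scores = {"part01.png": 12.3, "part02.png": 87.6, "part03.png": 9.1}
threshold = 50.0   # tune per data set

outliers = [name for name, s in scores.items() if s > threshold]
print(outliers)    # ['part02.png']
```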

How to use the Task list
Tasks that require lengthy loading or processing appear in the Task list.
The Task list is composed of two tabs: Models and Activities. Whenever you train or evaluate models, you can
track their progress in the Models tab. Whenever you import images, folders, JSON files, or image files, you
can see each loading process in the Activities tab.

Models tab
If you and your project members are training multiple models, they will appear in the Models tab of the Task list
menu. This allows you and your project members to track the progress of the models together.

Reorder models
When you train a new model, its card is always added at the bottom of the list by default. However, when
you evaluate an existing model, its card is added below the model currently in progress at the top, because
evaluating a model takes much less time than training one. You can always drag and
drop the cards to change the order of training or evaluation tasks.

Activities tab
Whenever you import, copy, or move images, import JSON files and masking images, or copy label sets, these tasks
appear in the Activities tab of the Task list menu. These activities are not shared with other users, and the tab
is useful for keeping track of lengthy loading processes while you continue working on the project.

[Show activity in data tab] button


For image import, when the process is complete, you may use the [Show activity in data tab] button to locate
the images and work more efficiently.

The [Show activity in data tab] button selects the images that belong to the imported folder for you in just one click.
This is especially helpful when you need to manually select numerous images, or when you want to quickly check the
images while working in another tab, or even another project.
1. When done importing, open the Activities tab and click on an imported folder or file.
2. This will direct you to the Data tab.
3. Images that belong to the folder will be selected for you.
4. If you want to select images from multiple folders (or files), select those folders (or files) using the checkbox.
5. Then, click the [Show activity in data tab] button.

This button is also useful when you are categorizing images for Classification but do not have the image file
names organized.
1. Simply, import images organized by folders.
2. When done importing, click on the card in the Activities tab.
3. This will direct you to the Data tab.
4. Images that belong to the folder will be selected for you.
5. Assign a class to the images.

Importing JSON file and masking images
If you are importing a JSON file or masking images that contain image labels, be sure to import all images first
and then import the corresponding JSON file or masking images. If you import the JSON file or masking images first,
the labels of images loaded afterward will not be displayed.

Inference speed

How to upgrade NVIDIA Driver
Please make sure to upgrade to the latest version of the NVIDIA Driver (v471.11) before accessing Neuro-T.

Precaution: For program stability, you must shut down the Neuro-T server before upgrading the NVIDIA
Driver.

To upgrade the version please follow these steps:


1. Right-click on the Windows desktop and select [NVIDIA Control Panel].

2. In the [NVIDIA Control Panel], check your driver version.

3. Go to https://www.nvidia.com and open the NVIDIA Driver Downloads section.

4. Search for the appropriate driver for your NVIDIA product; if you are using a 30-series GPU, select the matching series.

5. Download the latest version of the driver.

How to get technical support
If you have any problems, you can get technical support. You can find all the information you need on the Help page.

Step-by-Step Instructions
1. Go to the Help page and click [Download server log].
2. Explain your problem and attach the server log file.
3. It is better to include version information; the version can be seen at the bottom of the navigation
bar on the left.
4. You may also capture screenshots of the software to explain your situation in more detail.

Keyboard shortcuts

