Developers Platform Requirements

Summary of AI Engineers' Requests for Developers Platform:

Dashboard Overview:
An intuitive dashboard providing a summary of key metrics and insights related to model
performance, predictions, and data movements.

Alerts:

A real-time alerting system, to ensure timely turnaround, providing immediate notifications
on crucial events such as client uploads into the Egnyte cloud system, output pushes, and any
anomalies in the prediction process. Each alert should include details on time, size, client ID,
and client name, and identify the specific folder where the data was uploaded.
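
As a sketch only (the field names below are assumptions, not an existing Egnyte schema), a
client-upload alert could carry a payload such as:

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class UploadAlert:
        """Hypothetical payload for a real-time upload/output/anomaly notification."""
        event: str             # e.g. "client_upload", "output_push", "anomaly"
        timestamp: datetime    # when the event occurred
        client_id: str         # identifier of the uploading client
        client_name: str
        folder_path: str       # Egnyte folder the data landed in
        file_count: int
        total_size_bytes: int

    def format_alert(alert: UploadAlert) -> str:
        """Render a short notification message for the alerting channel."""
        return (f"[{alert.event}] {alert.timestamp:%Y-%m-%d %H:%M} "
                f"client={alert.client_id} ({alert.client_name}) "
                f"folder={alert.folder_path} files={alert.file_count} "
                f"size={alert.total_size_bytes / 1e6:.1f} MB")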

Tracking:

Log:
Comprehensive logs with detailed information on each step of the prediction process,
including timing, data movement, and prediction outcomes.
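
One possible approach is structured (JSON) logging, so each step is machine-readable; the
step names and fields below are illustrative, not a fixed schema:

    import json
    import logging
    import time

    logger = logging.getLogger("prediction_pipeline")
    logging.basicConfig(level=logging.INFO)

    def log_step(step: str, client_id: str, **details) -> None:
        """Emit one structured log line per pipeline step (timing, data movement, outcome)."""
        record = {"ts": time.time(), "step": step, "client_id": client_id, **details}
        logger.info(json.dumps(record))

    # Example: one document moving through the pipeline.
    log_step("raw_upload_received", client_id="C-042", files=3, folder="/raw_data/C-042")
    log_step("cleaning_finished", client_id="C-042", duration_s=12.4)
    log_step("prediction_finished", client_id="C-042", outcome="complete", predictions=3)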

Data Movement:
Monitoring and tracking the movement of data through the pipeline stages, from client raw
data uploads through cleaned_data to the final prediction outputs and final output folders.
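
A minimal sketch of tracking these stages per client (stage names are assumptions based on
the folders described in this document):

    from enum import Enum

    class Stage(Enum):
        """Pipeline stages a client's data passes through (names are illustrative)."""
        RAW_UPLOAD = "raw_data"
        CLEANED = "cleaned_data"
        PREDICTED = "prediction_output"
        FINAL_OUTPUT = "final_output"

    def record_stage(client_id: str, stage: Stage, tracker: dict) -> None:
        """Keep the latest known stage per client so the dashboard can show data movement."""
        tracker[client_id] = stage

    tracker = {}
    record_stage("C-042", Stage.RAW_UPLOAD, tracker)
    record_stage("C-042", Stage.CLEANED, tracker)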

Files:
Clear information on the total number of files that have undergone the final prediction
and analysis process, including timestamps and relevant details.

Prediction Monitoring:
A dedicated Egnyte section for monitoring final predictions made on uploaded
documents, with detailed insights into the prediction outcomes.

Performance Monitoring:

Metrics and visualizations that enable developers to monitor and analyze the overall
performance of the machine learning model, including accuracy, precision, recall, and
other relevant metrics.
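
A minimal sketch of computing these metrics with scikit-learn, assuming true labels and
model predictions are available as lists ("weighted" averaging is one choice among several):

    from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

    def performance_summary(y_true, y_pred) -> dict:
        """Compute the core classification metrics shown on the dashboard."""
        return {
            "accuracy":  accuracy_score(y_true, y_pred),
            "precision": precision_score(y_true, y_pred, average="weighted", zero_division=0),
            "recall":    recall_score(y_true, y_pred, average="weighted", zero_division=0),
            "f1":        f1_score(y_true, y_pred, average="weighted", zero_division=0),
        }

    # Example with placeholder labels:
    print(performance_summary(["approve", "deny", "approve"], ["approve", "approve", "approve"]))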

Model Training:

Establishing two folders:

1. Dedicated to storing the final output from prediction/analysis (not for training the model).
2. Reserved for adding new cases to the training dataset after receiving the IME from the
doctor, complete with real labels.

Folder Organization:
Logical organization of folders for raw data, cleaned data, and final output, facilitating
easy access and management for developers.
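
The exact folder names are a design decision; as a sketch, the layout described here
(including the two model-training folders above) could be created as follows:

    from pathlib import Path

    # Illustrative folder layout; names are assumptions, not the agreed Egnyte structure.
    BASE = Path("/Shared/manta")
    FOLDERS = [
        "raw_data",           # client uploads, untouched
        "cleaned_data",       # data after preprocessing
        "final_output",       # prediction/analysis results (not used for training)
        "training_additions", # new cases with real labels, added after the IME is received
    ]

    for name in FOLDERS:
        (BASE / name).mkdir(parents=True, exist_ok=True)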

Quality Assurance (QA) Tools:

Integration of tools and functionalities for quality assurance on final prediction/analysis
outputs, allowing developers or designated users to review and validate results.

Difference Visualization:
Visual representation of differences between MANTA's predictions and predictions from
other sources (e.g., IME doctors), with color-coded indicators for easy interpretation.
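
A rough pandas-based sketch of the color-coding idea (the column names and the green/red
choice are placeholders):

    import pandas as pd

    df = pd.DataFrame({
        "case_id":    ["C-001", "C-002", "C-003"],
        "manta_pred": ["approve", "deny", "approve"],
        "ime_doctor": ["approve", "approve", "approve"],
    })

    def highlight_diff(row):
        """Green when MANTA agrees with the IME doctor, red otherwise."""
        color = ("background-color: lightgreen"
                 if row["manta_pred"] == row["ime_doctor"]
                 else "background-color: salmon")
        return [color] * len(row)

    styled = df.style.apply(highlight_diff, axis=1)  # renders color-coded rows in a report
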
User-Specific Access Controls:
User roles and access controls to ensure that team members have appropriate
permissions based on their roles and responsibilities.
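
A simple sketch of role-based checks (the role names and permissions below are illustrative,
not an agreed list):

    # Illustrative role-to-permission mapping; actual roles would come from the team.
    PERMISSIONS = {
        "developer":   {"view_dashboard", "view_logs", "manage_folders", "run_qa"},
        "qa_reviewer": {"view_dashboard", "run_qa"},
        "viewer":      {"view_dashboard"},
    }

    def is_allowed(role: str, action: str) -> bool:
        """Check whether a role may perform an action before exposing the feature."""
        return action in PERMISSIONS.get(role, set())

    assert is_allowed("qa_reviewer", "run_qa")
    assert not is_allowed("viewer", "view_logs")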

Integration Support:
Seamless integration with other tools and platforms commonly used in the machine
learning development and deployment ecosystem.
