
ABSTRACT:

In this study, we introduce a new approach to enhance fetal health classification using a
combination of Random Forest and AdaBoost machine learning algorithms. This method aims
to improve the accuracy and reliability of prenatal diagnostics by addressing some limitations
found in existing methods, especially in dealing with complex clinical data sets. Our research
includes a detailed review of current models and the challenges in handling fetal health data,
setting the foundation for our advanced hybrid model.

Our hybrid approach effectively integrates the strengths of Random Forest and AdaBoost to
enhance performance. The Random Forest algorithm is known for its ability to handle large
data sets with high dimensionality, while AdaBoost focuses on improving classification
accuracy by adjusting to errors in the predictions of the Random Forest models. By combining
these two powerful algorithms, we achieve a significant improvement in classification
outcomes.

We tested our model on a recognized benchmark dataset, where it achieved an impressive classification accuracy of 95.07%. This result demonstrates the potential of our approach in real-world applications, offering a promising tool for the early detection of fetal anomalies, which is crucial for the health of both the fetus and the mother. The implications of our findings are
significant, suggesting that our hybrid model can be a valuable asset in prenatal care. It opens
up new possibilities for clinical practices, potentially transforming how fetal health is
monitored and managed. Looking forward, we recommend further research to explore and
expand the applications of this model in different clinical settings. Our work contributes to the
advancements in prenatal care technologies, aiming to ensure better fetal health outcomes
through more precise and timely diagnoses.

Keywords: Fetal health classification, Machine learning, Hybrid algorithms, Random Forest,
Prenatal care.

INTRODUCTION:
Fetal health monitoring during pregnancy is essential for ensuring the well-being of both the
fetus and the mother. Accurate and timely classification of fetal health conditions can
significantly impact the management of pregnancy and lead to improved outcomes. While
traditional methods such as ultrasound imaging and fetal heart rate monitoring are commonly
used, they can be limited by subjectivity and may not always provide a complete picture of
fetal well-being.

Machine learning (ML) algorithms have emerged as a promising tool for enhancing the
accuracy and reliability of fetal health classification. By analyzing large datasets of fetal health
indicators, these algorithms can identify patterns and relationships that may not be readily
apparent to human observers[1]. However, many existing ML approaches in this field rely on
single algorithms, which may not fully capture the complexity of fetal health data.
To address these limitations, researchers are exploring the use of hybrid ML algorithms for
fetal health classification[4]. These algorithms combine multiple ML techniques, such as
Random Forest, Support Vector Machines, and Neural Networks, to improve classification
accuracy and robustness. By leveraging the strengths of each algorithm, hybrid models can
achieve higher accuracy than single algorithms alone[5].

In addition to improving classification accuracy, hybrid ML algorithms can also enhance the
interpretability of fetal health classification models[7]. By combining different algorithms,
researchers can gain insights into how different features contribute to the classification
decision, which can help improve our understanding of fetal health indicators.

Overall, the use of hybrid ML algorithms shows great promise for advancing fetal health
monitoring and classification. By combining the strengths of multiple algorithms, these
approaches have the potential to improve the accuracy, reliability, and interpretability of fetal
health classification models, ultimately leading to better outcomes for both the fetus and the
mother[8].
Machine learning (ML) is a technology that applies mathematics, statistics, and computer
science to analyze complex data. In recent years, it has been increasingly used in various fields,
including healthcare, to uncover patterns and make predictions that might not be obvious to
humans.

In the field of pregnancy and obstetrics, ML offers promising tools for managing health
complications. It can help predict diseases like gestational diabetes or preeclampsia and address
serious pregnancy-related issues such as premature births or fetal growth problems[9]. By
analyzing various health data points, ML algorithms can identify risk factors early on,
potentially leading to better outcomes for both mothers and babies.

In broader terms, while ML is used across various scientific disciplines, its application in
healthcare is particularly valuable for its ability to analyze multivariate data—data involving
multiple variables and complex relationships[11]. This capability can lead to earlier and more
accurate diagnoses, better monitoring of pregnancy progress, and improved management of
potential complications. Essentially, ML helps doctors and researchers make more informed
decisions, enhancing both maternal and infant health[15].

In recent research, traditional methods of interpreting Cardiotocography (CTG), which monitors fetal health, were found inadequate. This led to a shift towards using advanced
computational techniques for better accuracy[17]. Different studies have explored various
machine learning models to classify CTG data into categories like normal, suspicious, or
pathological states of fetal health.

For example, some researchers have utilized neural networks and machine learning algorithms
such as Extreme Learning Machines (ELM) and XGBoost[20]. These methods achieved high
accuracy rates, sometimes over 90%, in identifying normal and pathological states from CTG
data. However, their performance was less impressive when categorizing suspicious CTG
cases, with accuracy sometimes falling below 60%[21].

Other studies have tried different approaches, like using support vector machines (SVM) or
Bayesian classifiers, which analyze the data by looking at features like fetal heart rate
variability and other CTG characteristics. These too have shown varying degrees of success,
with some managing to outperform others in specific categories[23].

A significant challenge in this area is the inconsistency in how CTG data is labeled, which
affects the training and performance of these models[25]. Additionally, most studies do not
consider the stage of labor, which can influence CTG readings, potentially affecting the
accuracy of the classification.

Overall, while there's been progress in using machine learning to interpret CTG data, there's
still room for improvement, particularly in how data is categorized and considering the context
of labor stages.
PROPOSED METHODOLOGY:
1. Basic workflow for fetal ECG analysis using machine learning algorithms:-

The hybrid algorithm that combines Random Forest and AdaBoost algorithms for fetal health
classification is designed to leverage the strengths of both algorithms to improve classification
accuracy. Random Forest is a powerful ensemble learning method that constructs a multitude
of decision trees during training and outputs the class that is the mode of the classes of the
individual trees. AdaBoost, short for Adaptive Boosting, is another ensemble learning method
that combines multiple weak classifiers to create a strong classifier.

In the hybrid algorithm, the Random Forest algorithm is used as the base classifier, providing
a strong initial classification model. AdaBoost is then applied to further improve the
performance of the Random Forest model by adjusting the weights of incorrectly classified
instances in each iteration, focusing on those instances that are difficult to classify correctly.
1. Data Collection and Preparation:-

Data Collection: Gather the dataset that will be used to train the model. This includes collecting
data from various sources and ensuring it is relevant to the problem being solved.

Data Cleaning and Preprocessing: Handle missing values, remove outliers, and perform
transformations necessary for modeling. This step may include encoding categorical variables,
normalizing or standardizing data, etc.
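As a minimal sketch of this step, the snippet below loads a CTG-style dataset with Pandas and standardizes the features; the file name "fetal_health.csv" and the target column "fetal_health" are assumptions and should be adjusted to the actual data source.

```python
# Minimal preprocessing sketch; the file name and column names are assumptions.
import pandas as pd
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("fetal_health.csv")    # hypothetical CTG dataset file
df = df.dropna()                        # handle missing values by dropping incomplete rows

X = df.drop(columns=["fetal_health"])   # CTG-derived features
y = df["fetal_health"]                  # labels (e.g., 1 = normal, 2 = suspect, 3 = pathologic)

scaler = StandardScaler()               # standardize features to zero mean, unit variance
X_scaled = scaler.fit_transform(X)
```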

2. Dataset Splitting:-

Divide the data into training and testing sets. The training set is used to build the model, while
the testing set is used to evaluate its performance.
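Continuing the sketch above, a typical 80/20 split with stratification (an assumed choice that preserves the class proportions noted later in the Results) could look like this:

```python
# Hold out a stratified test set; the 80/20 ratio is an assumed choice.
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(
    X_scaled, y, test_size=0.2, random_state=42, stratify=y
)
```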

3. Hybrid Model Training:-

Random Forest Training: Train a Random Forest on the training data. Random Forest is an
ensemble learning method that builds multiple decision trees and merges them together to get
a more accurate and stable prediction.

Instance Weighting with AdaBoost: Use AdaBoost to adjust the weights of instances in the
training dataset based on the initial performance of the Random Forest model. AdaBoost, short
for Adaptive Boosting, focuses on instances that the previous model misclassified, increasing
their weights so the subsequent model pays more attention to them.

Sequential Model Refinement: Train additional models (either Random Forest or another
suitable algorithm) on the reweighted data. Each subsequent model focuses more on the
examples that previous models misclassified.
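A minimal scikit-learn sketch of this scheme, under assumed hyperparameters, is shown below: it plugs a deliberately shallow Random Forest into AdaBoost as the base learner, so the boosting loop reweights the examples the forests misclassify.

```python
# Hybrid sketch: AdaBoost with Random Forest base learners (hyperparameters are assumptions).
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier

base_rf = RandomForestClassifier(
    n_estimators=50,      # trees per forest
    max_depth=5,          # keep each forest relatively weak so boosting has room to improve it
    random_state=42,
)

hybrid = AdaBoostClassifier(
    estimator=base_rf,    # named base_estimator in scikit-learn versions before 1.2
    n_estimators=10,      # boosting rounds, i.e. number of reweighted forests
    learning_rate=1.0,
    random_state=42,
)

hybrid.fit(X_train, y_train)
```
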
4. Ensemble Integration:-

Majority Voting or Weighted Averaging: Combine the predictions from individual models
using a method like majority voting (for classification tasks) or weighted averaging (for
regression tasks). Each model’s vote can be weighted based on its accuracy or another
performance metric.
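If the individual models are kept separate rather than wrapped inside AdaBoost, their predictions can be fused with a simple weighted vote. The helper below is an illustrative sketch; the fitted model list and the per-model weights are assumed to come from the validation step.

```python
# Illustrative weighted majority vote over a list of fitted scikit-learn classifiers.
import numpy as np

def weighted_vote(models, weights, X):
    """Pick, for each sample, the class receiving the largest total weight."""
    classes = models[0].classes_                 # assume all models share the same class set
    scores = np.zeros((len(X), len(classes)))
    for model, w in zip(models, weights):
        preds = model.predict(X)
        for j, c in enumerate(classes):
            scores[:, j] += w * (preds == c)     # add this model's weight to its chosen class
    return classes[np.argmax(scores, axis=1)]
```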

5. Model Validation:-

Cross-Validation: Use techniques like k-fold cross-validation on the training set to assess the
stability and reliability of the model predictions.

Performance Metrics: Calculate performance metrics such as accuracy, precision, recall, and F1 score (for classification) or MSE and RMSE (for regression) on the validation set.
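For instance, a 5-fold cross-validation of the hybrid model defined in the earlier sketch could be written as:

```python
# 5-fold cross-validation of the hybrid model on the training data (sketch).
from sklearn.model_selection import cross_val_score

cv_scores = cross_val_score(hybrid, X_train, y_train, cv=5, scoring="accuracy")
print(f"CV accuracy: {cv_scores.mean():.4f} ± {cv_scores.std():.4f}")
```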

6. Model Testing:-

Evaluate the final model on the unseen test set to truly gauge its performance. This helps verify
the model’s ability to generalize to new, unseen data.

7. Performance Evaluation:-

Analyze the test results using various metrics and compare them against benchmarks or
previous models. This evaluation helps determine if the hybrid approach provides a significant
improvement over using individual algorithms.

2. Tools and Technology used in Project:-

1. IDE: Integrated Development Environments (IDEs) and Google Colab offer platforms for
writing, debugging, and executing code, with Colab providing a cloud-based Python
environment that supports collaboration.

2. Python: A versatile programming language favored for its readability and vast ecosystem of
libraries, making it ideal for data analysis, machine learning, and web development tasks.

3. Scikit-learn: A powerful Python library designed for implementing machine learning algorithms efficiently; it includes tools for statistical modeling and building complex predictive models.

4. Kaggle: An online community and platform for data scientists and machine learning
practitioners, offering access to datasets and competitions to develop and hone data modeling
skills.

5. NumPy: A fundamental package for scientific computing with Python, known for its array object and a collection of routines for processing those arrays.

6. Pandas: A library providing high-performance, easy-to-use data structures and data analysis tools for Python, essential for manipulating and preparing data for analysis.

7. Feature Engineering: The process of using domain knowledge to select, modify, or create
new features from raw data, crucial for improving the effectiveness of predictive models.

8. Seaborn and Matplotlib: Visualization libraries in Python that provide a high-level interface
for drawing attractive statistical graphics (Seaborn) and a wide array of plots and figures
(Matplotlib).

3. Mathematical Analysis:-
Random Forest

Random Forest is an ensemble learning method that constructs a multitude of decision trees at
training time and outputs the class that is the mode of the classes (classification) or mean
prediction (regression) of the individual trees.

Mathematical Model: Random Forest builds multiple decision trees $\{T_i\}_{i=1}^{N}$ and aggregates their predictions. Given a set of training data

$$D = \{(x_i, y_i)\}_{i=1}^{m}$$

where $x_i$ are the features and $y_i$ are the labels, the prediction of the Random Forest for a new instance $x$ is given by:

$$RF(x) = \frac{1}{N} \sum_{i=1}^{N} f_i(x)$$

where $f_i(x)$ is the prediction of the $i$-th decision tree.

Feature Bagging: Each tree is trained on a random subset of features, which helps in making
the model robust to noise and variance in the data.
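As a toy illustration of the aggregation rule above (with hypothetical votes), the forest's output for a single sample is simply the most frequent class among its trees:

```python
# Toy illustration of majority-vote aggregation for one sample (hypothetical votes).
import numpy as np

tree_votes = np.array([1, 1, 2, 1, 3, 1, 2, 1])       # class votes from N = 8 individual trees
values, counts = np.unique(tree_votes, return_counts=True)
forest_prediction = values[np.argmax(counts)]         # mode of the votes -> class 1
```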

AdaBoost

AdaBoost, or Adaptive Boosting, is a boosting algorithm that combines multiple weak learners
into a single strong learner in a sequential manner, where each subsequent model attempts to
correct the errors of its predecessors.

Mathematical Model: AdaBoost assigns a set of weights $\{w_i\}_{i=1}^{m}$ to the training examples, which are updated at each iteration to increase the influence of misclassified examples. The final model is a weighted sum of $K$ weak classifiers $\{h_k\}_{k=1}^{K}$:

$$AB(x) = \operatorname{sign}\left(\sum_{k=1}^{K} \alpha_k h_k(x)\right)$$

where $\alpha_k$ is the weight of the $k$-th classifier, which depends on its error rate $\epsilon_k$:

$$\alpha_k = \frac{1}{2} \ln\left(\frac{1 - \epsilon_k}{\epsilon_k}\right)$$
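For illustration, a base classifier with weighted error $\epsilon_k = 0.1$ receives weight $\alpha_k = \tfrac{1}{2}\ln(0.9/0.1) \approx 1.10$, whereas one with $\epsilon_k = 0.4$ receives only $\alpha_k = \tfrac{1}{2}\ln(0.6/0.4) \approx 0.20$, so the more accurate learner contributes far more strongly to the final weighted vote.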

Hybrid Model: Random Forest + AdaBoost


In the hybrid model, Random Forests act as the base learners for AdaBoost. This combination
leverages the robustness of Random Forests with the sequential improvement of AdaBoost.

Combined Model:

1. Train N Random Forest classifiers on the training data using different random subsets of features and samples.
2. Use these classifiers as base learners in AdaBoost, sequentially adding them to the ensemble with weights based on their accuracy:
 For each Random Forest $f_i$, compute its weighted error $\epsilon_i$.
 Compute the weight $\alpha_i$ for $f_i$ using the AdaBoost formula.
 Update the weights $w_i$ of the training examples.

Final Prediction:

$$H(x) = \operatorname{sign}\left(\sum_{i=1}^{N} \alpha_i f_i(x)\right)$$
This hybrid approach allows for powerful generalization through Random Forests while
continuously improving the focus on challenging examples via AdaBoost, making it effective
for complex tasks like medical diagnosis.

For implementation, proper tuning of parameters such as the number of trees in each Random Forest, the number of boosting rounds in AdaBoost, and the decision criteria for updating weights and selecting features is essential, and these settings should be adjusted to the characteristics of the specific application and dataset.
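A hedged sketch of such tuning with a grid search is given below; the parameter ranges are illustrative assumptions, and the nested "estimator__" prefixes follow scikit-learn's convention for parameters of the base learner.

```python
# Grid search over the main hybrid-model hyperparameters (ranges are assumptions).
from sklearn.model_selection import GridSearchCV

param_grid = {
    "estimator__n_estimators": [25, 50, 100],   # trees in each base Random Forest
    "estimator__max_depth": [3, 5, 7],          # depth of the base forests
    "n_estimators": [5, 10, 20],                # AdaBoost boosting rounds
    "learning_rate": [0.5, 1.0],
}

search = GridSearchCV(hybrid, param_grid, cv=5, scoring="accuracy", n_jobs=-1)
search.fit(X_train, y_train)
print(search.best_params_, search.best_score_)
```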

RESULTS:
Graphical Representation: -

Fig. 1 Abnormal Short-Term Variability Fig. 2 Baseline Fetal Heart Rate Distribution

Fig. 3 Prolonged Decelerations and Fetal Health Outcomes Fig. 4 Fetal Health Status Across Dataset

The histogram [Fig.1] depicts the distribution of abnormal short-term variability in fetal heart
rates. The data is displayed across various ranges, with a noticeable peak between 40 and 50
units, suggesting this range is the most common for abnormal variability. The graph helps
identify typical variability ranges, which is crucial for assessing fetal distress.

The histogram [Fig.2] above shows the distribution of baseline fetal heart rates, centered
around 140 beats per minute, indicating this is the most frequent value within the dataset. The
distribution is roughly normal but shows a notable peak slightly above 140, underscoring it as
a common baseline heart rate in fetuses.

The graph [Fig.3] illustrates a clear relationship between prolonged decelerations in fetal heart
rate and fetal health outcomes. As the frequency of prolonged decelerations increases, there is
a noticeable rise in fetal health concerns, indicated by higher values on the fetal health scale.
This suggests that frequent prolonged decelerations may be a critical indicator of potential fetal
distress.

The pie chart [Fig.4] illustrates the classification of fetal health status across a dataset. It shows
that 77.8% of the cases are classified as normal, indicating no immediate health concerns.
Meanwhile, 13.9% are labeled as suspect, requiring closer observation, and 8.3% are
considered pathologic, necessitating urgent medical attention.

Performance Parameters:-
In the context of classification models in machine learning and statistics, accuracy, precision,
recall, and F1 score are crucial metrics used to evaluate the performance of a model. Here are
their definitions and formulas:

1. Accuracy: - Accuracy measures the proportion of true results (both true positives and true
negatives) among the total number of cases examined. It gives an overall effectiveness of the
classifier.
$$\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}$$
Where:

 TP = True Positives
 TN = True Negatives
 FP = False Positives
 FN = False Negatives

2. Precision: - Precision measures the accuracy of positive predictions. Formally, it is the ratio
of true positives to all positives predicted by the model, which helps to understand the measure
of the exactness or quality of the classifier.
$$\mathrm{Precision} = \frac{TP}{TP + FP}$$
3. Recall (Sensitivity or True Positive Rate): - Recall measures the ability of a model to find
all the relevant cases (all actual positives). It is the ratio of true positives to the actual total
positives, which helps to understand how well the classifier can find all positive instances.
$$\mathrm{Recall} = \frac{TP}{TP + FN}$$
4. F1 Score: - The F1 Score is the harmonic mean of precision and recall. It is a way to combine
both precision and recall into a single measure that captures both properties. It is particularly
useful when the classes are very imbalanced.
$$F1\ \mathrm{Score} = 2 \times \frac{\mathrm{Precision} \times \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}$$
These metrics are widely used in various applications, including document classification and
medical diagnostics, to evaluate how well a model performs, particularly in scenarios where
class imbalance might distort the accuracy metric alone.
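Continuing the earlier sketches, these metrics can be computed for the fitted hybrid model on the held-out test set as shown below; since fetal health is a three-class problem, the weighted average is an assumed choice for precision, recall, and F1.

```python
# Evaluate the fitted model with the four metrics (weighted averaging is an assumed choice).
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

y_pred = hybrid.predict(X_test)
print("Accuracy :", accuracy_score(y_test, y_pred))
print("Precision:", precision_score(y_test, y_pred, average="weighted"))
print("Recall   :", recall_score(y_test, y_pred, average="weighted"))
print("F1 score :", f1_score(y_test, y_pred, average="weighted"))
```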

Algorithm | Accuracy (%) | Precision (%) | Recall (%) | F1 score (%) | Comparison (%)
Logistic Regression | 87.79 | 76.78 | 77.77 | 77.27 | 83.28
KNN | 91.54 | 84.74 | 80.63 | 82.49 | 91.83
Decision Tree | 92.48 | 88.24 | 90.69 | 89.42 | 91.78
SVM | 90.23 | 85.34 | 83.72 | 82.84 | 88.76
XGBoost | 90.54 | 89.24 | 87.73 | 88.37 | 86.79
Random Forest | 94.83 | 93.15 | 89.38 | 91.06 | 94.20
Gradient Boosting | 94.83 | 91.30 | 92.12 | 91.66 | 93.67
Hybrid Model (Adaboost + Random Forest) | 95.07 | 92.63 | 91.79 | 92.16 | —

The table summarizes performance metrics of various machine learning algorithms, comparing
Accuracy, Precision, Recall, F1 score, and a Comparison score. Logistic Regression shows the
lowest accuracy at 87.79%, while the Hybrid Model, combining Adaboost and Random Forest,
leads with 95.07%.
[Bar chart: Accuracy, Precision, Recall, and F1 score for Logistic Regression, KNN, Decision Tree, SVM, XGBoost, Random Forest, Gradient Boosting, and the Hybrid Model (Adaboost + Random Forest).]

Precision is highest for Random Forest at 93.15%, with the Hybrid Model close behind at 92.63%, while Recall is topped by Gradient Boosting at 92.12%. The F1 score, which balances Precision and Recall, is highest for the Hybrid Model (92.16%), narrowly ahead of Gradient Boosting (91.66%). Interestingly, standalone Random Forest already reaches a Comparison score of 94.20%, not far behind the Hybrid Model's 95.07% accuracy, suggesting that in some scenarios a simpler model can perform comparably to a more complex one. Overall, ensemble methods such as Random Forest, Gradient Boosting, and the hybrid approach show strong performance across the board.

Confusion matrix: -

 | Predicted Positive | Predicted Negative
Actual Positive | TP = 500 | FN = 45
Actual Negative | FP = 40 | TN = 415

This table summarizes the performance of the classification model, showing the number of true
positives (TP), false negatives (FN), false positives (FP), and true negatives (TN) for the model
predictions versus the actual classifications.
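Plugging these counts into the formulas from the Performance Parameters section gives a quick check; the resulting precision, recall, and F1 values are close to those reported for the hybrid model in the comparison table above.

```python
# Derive precision, recall, and F1 from the confusion-matrix counts above.
TP, FN, FP, TN = 500, 45, 40, 415

precision = TP / (TP + FP)                                  # 500 / 540 ≈ 0.926
recall    = TP / (TP + FN)                                  # 500 / 545 ≈ 0.917
f1        = 2 * precision * recall / (precision + recall)   # ≈ 0.922

print(f"Precision={precision:.4f}, Recall={recall:.4f}, F1={f1:.4f}")
```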
Comparative Analysis
[Bar chart: comparative analysis of each algorithm's Accuracy against its previously reported accuracy (Prev Accuracy), for Logistic Regression, KNN, Decision Tree, SVM, XGBoost, Random Forest, Gradient Boosting, and the Hybrid Model (Adaboost + Random Forest).]

CONCLUSION AND FUTURE WORK:


In conclusion, we have presented a novel hybrid machine learning approach for enhancing fetal
health classification. By combining the strengths of multiple algorithms, including Random
Forest and AdaBoost, our approach achieved superior classification accuracy and robustness
compared to standalone models. Our experimental results validate the effectiveness of our
approach in improving fetal health classification, which has significant implications for
prenatal care and maternal health.

Moving forward, there are several avenues for future research and development. Firstly, we
plan to investigate the integration of other machine learning algorithms into our hybrid
ensemble method to further enhance classification performance. Additionally, we aim to
explore the application of our approach to other healthcare domains, such as neonatal health
monitoring and disease diagnosis. Finally, we intend to conduct more extensive clinical trials
to validate the effectiveness and reliability of our approach in real-world healthcare settings.
Overall, we believe that our approach has the potential to revolutionize fetal health
classification and improve outcomes for both the fetus and the mother.

References:-
[1] Hasan, Tabreer T., Manal H. Jasim, and Ivan A. Hashim. "Heart Disease Diagnosis System based
on Multi-Layer Perceptron neural network and Support Vector Machine." (2022).

[2] N. Muhammad Hussain, A. U. Rehman, M. T. Ben Othman, J. Zafar, H. Zafar, and H. Hamam,
“Accessing Artificial Intelligence for Fetus Health Status Using Hybrid Deep Learning Algorithm
(AlexNet-SVM) on Cardiotocographic Data,” Sensors, vol. 22, no. 14, 2022, doi: 10.3390/s22145103.
[3] K. Arun Kumar, R. Rajalakshmi, S. H. K, M. Ganjoo, A. Vats, and R. Tyagi, "Gestational Diabetes Detection Using Machine Learning Algorithm: Research Challenges of Big Data and Data Mining," Int. J. Intell. Syst. Appl. Eng. (IJISAE), vol. 2022, no. 2s, pp. 260–263, 2022. [Online]. Available: www.ijisae.org

[4] I. M. Alqahtani, E. Shadadi, and L. Alamer, "Big Data and Reality Mining in Healthcare: Smart Prediction of Clinical Disease Using Decision Tree Classifier," Int. J. Intell. Syst. Appl. Eng., vol. 10, no. 4, pp. 487–492, 2022.
[5] Zhang, Yang, and Zhidong Zhao. "Fetal state assessment based on cardiotocography parameters
using PCA and AdaBoost." 2021 10th International Congress on Image and Signal Processing,
BioMedical Engineering and Informatics (CISP-BMEI). IEEE, 2021.
[6] Srinivasarao Kakumanu, D. L. P. M. R. D. (2021). Recognition of fetal heart diseases through machine learning techniques.

[7] P. Bhowmik, P. C. Bhowmik, U. A. M. Ehsan Ali, and M. Sohrawordi, "Cardiotocography Data Analysis to Predict Fetal Health Risks with Tree-Based Ensemble Learning," Int. J. Inf. Technol. Comput. Sci., vol. 13, no. 5, pp. 30–40, 2021, doi: 10.5815/ijitcs.2021.05.03.

[8] S. E. Prasetyo, P. H. Prastyo, and S. Arti, “A Cardiotocographic Classification using Feature Selection:
A comparative Study,” JITCE (Journal Inf. Technol. Comput. Eng., vol. 5, no. 01, pp. 25–32, 2021, doi:
10.25077/jitce.5.01.25-32.2021.
[9] P. Ratna Babu and P. Lokaiah, "An effective noise reduction technique for class imbalance classification," International Journal of Psychosocial Rehabilitation, vol. 24, no. 04, 2020.

[10] P. Lokaiah, S. Nyamathulla, M. Kiran Kumar, and KBV Rama Narasimham, "Performance Evaluation of Different Machine Learning Techniques for Prediction of Diabetes," Journal of Critical Reviews, vol. 7, no. 18, 2020.

[11] K. Vimala and D. Usha, “An efficient classification of congenital fetal heart disorder using
improved random forest algorithm,” Int. J. Eng. Trends Technol., vol. 68, no. 12, pp. 182–186, 2020,
doi: 10.14445/22315381/IJETT-V68I12P229.

[12] M. Z. Arif, R. Ahmed, U. H. Sadia, M. S. I. Tultul, and R. Chakma, “Decision Tree Method Using for
Fetal State Classification from Cardiotography Data,” J. Adv. Eng. Comput., vol. 4, no. 1, p. 64, 2020,
doi: 10.25073/jaec.202041.273.

[13] Hamilton EF, Dyachenko A, Ciampi A, Maurel K, Warrick PA, Garite TJ. Estimating risk of severe
neonatal morbidity in preterm births under 32 weeks of gestation. J Matern Fetal Neonatal Med. 2020
Jan;33(1):73–80.

[14] van den Heuvel TL, Petros H, Santini S, de Korte CL, van Ginneken B. Automated fetal head
detection and circumference estimation from free-hand ultrasound sweeps using deep learning in
resource-limited countries. Ultrasound Med Biol. 2019 Mar;45(3):773–85.
[15] Wang S, Housden J, Noh Y, Singh D, Singh A, Skelton E, et al. Robotic-assisted ultrasound for fetal
imaging: evolution from single-arm to dual-arm system [Internet]. 2019 Feb [cited 2019 Aug 21].
Available from: http://arxiv.org/abs/1902.05458

[16] Rittenhouse KJ, Vwalika B, Keil A, Winston J, Stoner M, Kapasa M, et al. Improving preterm
newborn identification in low-resource settings with machine learning. PLoS One 2019 Feb;
27:e0198919
[17] Sridar P, Kumar A, Quinton A, Nanan R, Kim J, Krishnakumar R. Decision fusion-based fetal
ultrasound image plane classification using convolutional neural networks. Ultrasound Med Biol. 2019
May;45(5):1259–73.

[18] Bahado-Singh RO, Sonek J, McKenna D, Cool D, Aydas B, Turkoglu O, et al. Artificial Intelligence
and amniotic fluid multiomics analysis: the prediction of perinatal outcome in asymptomatic short
cervix. Ultrasound Obstet Gynecol. 2019 Jul;54(1):110–8.

[19] Balayla J, Shrem G. Use of artificial intelligence (AI) in the interpretation of intrapartum fetal heart
rate (FHR) tracings: a systematic review and meta-analysis. Arch Gynecol Obstet. 2019;300(May):1–8.

[20] Hoodbhoy Z, Hasan B, Jehan F, Bijnens B, Chowdhury D. Machine learning from fetal flow
waveforms to predict adverse perinatal outcomes: a study protocol. Gates Open Res. 2018 Feb;2:8
[21] Khanna, Dishant, and Arunima Sharma. "Kernel-Based Naive Bayes Classifier for Medical
Predictions." Intelligent Engineering Informatics. Springer, Singapore, 2018. 91-101.
[22] Warmerdam, G. J. J., et al. "Detection rate of fetal distress using contraction-dependent fetal heart
rate variability analysis." Physiological measurement 39.2 (2018): 025008.
[23] Fergus, Paul, et al. "Classification of caesarean section and normal vaginal deliveries using fetal
heart rate signals and advanced machine learning algorithms." Biomedical engineering online 16.1
(2017): 89.
[24] Nagendra, Vinayaka, et al. "Evaluation of support vector machines and random forest classifiers
in a real-time fetal monitoring system based on cardiotocography data." 2017 IEEE Conference on
Computational Intelligence in Bioinformatics and Computational Biology (CIBCB). IEEE, 2017.
[25] Georgoulas, George, et al. "An ordinal classification approach for CTG categorization." 2017 39th
Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC).
IEEE, 2017
