
Data Science

Machine learning
    Classical Learning
        Regression
        Classification
        Clustering
        Pattern search
    Ensemble methods
        Bagging
        Boosting
        Stacking
    Neural Nets
        CNN
        RNN
        Generative Adversarial Network
        Auto Encoders
    Reinforcement
    Time series
    Decomposition
        Linear
        Non-Linear
        Matrix Factorization
        Manifold Learning

ALGORITHM
Linear Regression

Polynomial Regression

Ridge Regression

Lasso

Elastic Net

Support Vector Machine

XGBoost

Decision Trees
Random forest

Bayesian Linear Regression

K-Nearest neighbours

Naïve Bayes

Support Vector Machine

Decision Trees

Logistic Regression
Gradient Boost

Random forest

K-Means

Mean shift
Hierarchical clustering

Density-Based spatial clustering of applications with noise (DBSCAN)

Agglomerative

Apriori

ECLAT

FP-Growth
Random forest

Extreme Gradient Boosting (XGBoost)

AdaBoost

LightGBM

Gradient Boosting Machine (GBM)


CatBoost

Stochastic Gradient Boosting

Diffusion Convolutional Recurrent Network

Deep residual network

LeNet-5

AlexNet

VGGNet

GoogLeNet

MobileNet

EfficientNet

Dense Convolutional Network

Long Short-Term Memory (LSTM)

Gated Recurrent Unit (GRU)

Recurrent Convolutional Network

Hierarchical Network

Bidirectional

Generator
Discriminator

seq2seq

Genetic Algorithm

Asynchronous Advantage Actor-Critic (A3C)

State Action Reward State Action (SARSA)

Deep Q-Network

Q-Learning

ARIMA

SARIMA

Exponential Smoothing

Seasonal Decomposition

Vector Autoregression (VAR)

State Space Model


Long Short-Term Memory (LSTM)

Gaussian Processes

Prophet

Neural prophet

Wavelet Transforms

Dynamic Linear Models

Principal Component Analysis

Linear Discriminant Analysis

Factor Analysis

Independent Component Analysis

Multidimensional Scaling

t-Distributed Stochastic Neighbor Embedding (t-SNE)

Isomap

Locally Linear Embedding

Kernel PCA

Self-Organising Maps

Auto Encoders

Singular Value Decomposition


Non-Negative Matrix Factorization

Gaussian Process Latent Variable Model

Diffusion Maps

DESCRIPTION
Predicts the target variable as a linear function of a single predictor variable.

Models the target variable as an nth-degree polynomial function of the predictor variables, fitted as a linear model on polynomial features.

Adds a penalty equal to the square of the magnitude of coefficients to the loss function to prevent
overfitting.
Adds a penalty equal to the absolute value of the magnitude of coefficients to the loss function,
leading to sparse models.
Combines L1 and L2 regularization penalties to take advantage of both ridge and lasso regression.
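
Example (a minimal sketch, assuming scikit-learn and a synthetic dataset, neither of which is prescribed by this sheet) showing how the ridge, lasso, and elastic-net penalties above map onto code via the alpha and l1_ratio hyperparameters listed later:

```python
# Sketch comparing the three penalized linear regressors (scikit-learn assumed).
import numpy as np
from sklearn.linear_model import Ridge, Lasso, ElasticNet
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = X[:, 0] * 3.0 - X[:, 1] * 2.0 + rng.normal(scale=0.5, size=200)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    "ridge": Ridge(alpha=1.0),                           # L2 penalty
    "lasso": Lasso(alpha=0.1),                           # L1 penalty, sparse coefficients
    "elastic_net": ElasticNet(alpha=0.1, l1_ratio=0.5),  # mix of L1 and L2
}
for name, model in models.items():
    model.fit(X_train, y_train)
    print(name, round(model.score(X_test, y_test), 3))
```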

Uses support vector machine principles for regression, capable of fitting complex, non-linear
functions.

Builds an ensemble of trees sequentially, where each tree corrects errors made by the previous
ones. Combines the predictions of weak learners to form a strong prediction.

Splits the data into regions and fits a simple model (usually a constant) within each region.
Prediction is the mean value of the target variable in each leaf node.
An ensemble of decision trees, where each tree is trained on a random subset of the data and
predictions are averaged. Aggregates predictions from multiple decision trees.

Incorporates prior distributions for the model parameters and updates these distributions based on
the data. Bayesian inference is used to derive the posterior distribution of the model parameters.

Classifies a data point based on the majority class among its k-nearest neighbors in the feature
space. Simple and effective for small datasets with clear class boundaries.
Assumes independence among predictors given the class. Simple, fast, and works well with small
datasets and text classification.
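
Example (an illustrative sketch, assuming scikit-learn and its bundled iris data, both my choice rather than the sheet's) of the two simple classifiers just described:

```python
# k-nearest neighbours vs. Gaussian Naive Bayes on a small built-in dataset.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

knn = KNeighborsClassifier(n_neighbors=5, weights="distance").fit(X_train, y_train)
nb = GaussianNB().fit(X_train, y_train)

print("KNN accuracy:", knn.score(X_test, y_test))
print("Naive Bayes accuracy:", nb.score(X_test, y_test))
```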

Finds the hyperplane that best separates classes in the feature space. Uses kernel functions to
handle non-linear boundaries.

Splits the data into subsets based on feature values, forming a tree-like structure. Can handle both
numerical and categorical data.

Models the probability of a binary outcome. Uses the logistic function to constrain the output
between 0 and 1.
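
Example (a hedged sketch, assuming scikit-learn and its breast-cancer dataset) of logistic regression constraining its output to probabilities between 0 and 1 via penalty, C, and solver:

```python
# Minimal logistic regression sketch (dataset choice is illustrative).
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(penalty="l2", C=1.0, solver="lbfgs", max_iter=5000)
clf.fit(X_train, y_train)

# predict_proba returns values constrained to (0, 1) by the logistic function.
print(clf.predict_proba(X_test[:3]))
print("accuracy:", clf.score(X_test, y_test))
```
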
Builds an ensemble of trees sequentially, where each tree corrects errors made by the previous
ones. Includes popular implementations like XGBoost, LightGBM, and CatBoost.

An ensemble of decision trees, where each tree is trained on a random subset of the data.
Combines predictions from multiple trees to improve accuracy and reduce overfitting.

One of the most popular clustering algorithms, K-means partitions the data into K clusters by
iteratively assigning data points to the nearest cluster centroid and updating the centroids based
on the mean of the points assigned to each cluster. It aims to minimize the within-cluster variance.
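
Example (a minimal sketch, assuming scikit-learn and synthetic blob data) of K-means minimizing within-cluster variance with the n_clusters and max_iter hyperparameters listed later:

```python
# K-means sketch: assign points to the nearest centroid, then update centroids.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=4, random_state=42)

km = KMeans(n_clusters=4, max_iter=300, n_init=10, random_state=42)
labels = km.fit_predict(X)

print("within-cluster variance (inertia):", round(km.inertia_, 2))
print("centroids:\n", km.cluster_centers_)
```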

Mean Shift is a non-parametric clustering algorithm that doesn't require specifying the number of
clusters beforehand. It works by iteratively shifting each data point towards the mode (peak) of the
kernel density estimate of the data until convergence.
This method creates a hierarchy of clusters by either bottom-up (agglomerative) or top-down
(divisive) approaches. In agglomerative clustering, each data point starts in its own cluster, and
pairs of clusters are merged as one moves up the hierarchy. In divisive clustering, all data points
start in one cluster, which is recursively split as one moves down the hierarchy.

DBSCAN groups together points that are closely packed together, based on a specified distance
(epsilon) and minimum number of points (minPts) within that distance. It can discover clusters of
arbitrary shapes and is robust to noise.
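
Example (a small sketch, assuming scikit-learn and synthetic two-moons data) of DBSCAN, where eps and min_samples play the roles of epsilon and minPts described above:

```python
# DBSCAN sketch: points with enough neighbours within eps form clusters; the rest is noise.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_moons

X, _ = make_moons(n_samples=300, noise=0.08, random_state=0)

db = DBSCAN(eps=0.2, min_samples=5).fit(X)
labels = db.labels_  # -1 marks noise points

print("clusters found:", len(set(labels)) - (1 if -1 in labels else 0))
print("noise points:", int(np.sum(labels == -1)))
```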

This hierarchical clustering algorithm starts with each point as its own cluster and merges the
closest pair of clusters iteratively until only one cluster remains.

Identifies frequent itemsets and generates association rules. Used in market basket analysis.

Uses a vertical database format for finding frequent itemsets. Efficient for datasets with many
frequent itemsets.

More efficient than Apriori by using a prefix tree to store frequent patterns. Avoids generating
candidate sets explicitly.
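
Example (a sketch assuming the mlxtend library, which this sheet does not name, and a tiny hand-made basket dataset) of mining frequent itemsets and association rules in the Apriori / FP-Growth style:

```python
# Frequent-itemset mining sketch using mlxtend (library choice is an assumption).
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori, fpgrowth, association_rules

transactions = [
    ["milk", "bread", "butter"],
    ["bread", "butter"],
    ["milk", "bread"],
    ["milk", "butter"],
    ["bread", "butter", "eggs"],
]
te = TransactionEncoder()
df = pd.DataFrame(te.fit(transactions).transform(transactions), columns=te.columns_)

frequent = fpgrowth(df, min_support=0.4, use_colnames=True)   # or apriori(...)
rules = association_rules(frequent, metric="confidence", min_threshold=0.6)
print(rules[["antecedents", "consequents", "support", "confidence"]])
```
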
An extension of bagged decision trees where each tree is trained on a bootstrap sample of the data
and, additionally, a random subset of features is considered for splitting at each node. This
decorrelates the trees and improves performance.

An optimized and scalable implementation of gradient boosting, XGBoost includes regularization terms to prevent overfitting and supports parallel processing.
AdaBoost adjusts the weights of incorrectly classified instances, increasing their influence on the
learning process. It combines multiple weak classifiers to form a strong classifier by focusing on
difficult cases in each iteration.
A gradient boosting framework that uses tree-based learning algorithms, LightGBM is designed for
efficiency and scalability. It uses a histogram-based approach for faster training.

GBM builds an ensemble of decision trees sequentially, where each tree is trained to correct the
errors of the previous trees. It uses gradient descent to minimize the loss function.
Specifically designed to handle categorical features efficiently, CatBoost uses ordered boosting to
reduce prediction shift and support categorical data directly.

A variant of gradient boosting that introduces randomness by subsampling the training data and
features. This can help reduce overfitting.

Combines multiple base models (e.g., decision trees, logistic regression, SVMs) and trains a meta-
model on their predictions. The base models are typically trained on the original data, and the
meta-model is trained on the outputs (predictions) of the base models.
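
Example (a compact sketch, assuming scikit-learn's StackingClassifier; the particular base models are illustrative) of training a meta-model on base-model predictions:

```python
# Stacking sketch: base models' predictions feed a logistic-regression meta-model.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("svm", SVC(probability=True, random_state=0)),
    ],
    final_estimator=LogisticRegression(max_iter=1000),  # meta-model
    cv=5,  # base-model predictions come from cross-validation folds
)
stack.fit(X_train, y_train)
print("stacked accuracy:", stack.score(X_test, y_test))
```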

Introduced the concept of residual learning to address the degradation problem in deep networks.
Uses skip connections to allow gradients to flow more easily through the network.
One of the earliest CNN architectures designed by Yann LeCun for handwritten digit recognition
(MNIST dataset).
A deeper architecture that popularized CNNs in large-scale image classification tasks. It introduced
the use of ReLU activation and dropout for regularization.
Characterized by its use of very small (3x3) convolutional filters and a deep architecture with up to
19 layers.
Introduced the Inception module, which allows for more efficient computation by combining
multiple convolutions with different filter sizes.
Designed for mobile and embedded vision applications, it uses depthwise separable convolutions
to reduce the number of parameters.
Uses a compound scaling method to balance network depth, width, and resolution for improved
efficiency.
Uses dense blocks where each layer receives input from all previous layers, improving gradient flow
and feature reuse.
An extension of RNN designed to overcome the vanishing gradient problem. LSTMs have a more
complex architecture that includes gates (input gate, forget gate, and output gate) to control the
flow of information.

A simplified version of LSTM that combines the input and forget gates into a single update gate and
merges the cell state and hidden state. GRUs are computationally more efficient than LSTMs while
still addressing the vanishing gradient problem.
Combines the strengths of CNNs and RNNs by applying convolutional layers to capture spatial
features and recurrent layers to capture temporal features.

Organizes RNN layers hierarchically to capture different levels of abstraction at different temporal
scales.
Consists of two RNNs running in parallel, one in the forward direction and one in the backward
direction. This allows the network to have both past and future context.
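
Example (a hedged sketch assuming TensorFlow/Keras, a framework choice not made by this sheet) stacking the recurrent layers described above, with a Bidirectional LSTM feeding a GRU on toy sequence data:

```python
# Recurrent-network sketch with Keras (framework and toy data are assumptions).
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

# Toy data: 1000 sequences of length 20 with 8 features each, binary labels.
X = np.random.rand(1000, 20, 8).astype("float32")
y = np.random.randint(0, 2, size=(1000,))

model = tf.keras.Sequential([
    layers.Input(shape=(20, 8)),
    layers.Bidirectional(layers.LSTM(32, return_sequences=True)),  # past + future context
    layers.GRU(16),                                                # gated recurrent layer
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
```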

The generator algorithm is responsible for creating new data samples. It takes random noise as
input and generates data that should ideally be indistinguishable from real data.
The discriminator algorithm is like a binary classifier. It tries to distinguish between real data
samples and the fake samples generated by the generator. Its goal is to correctly classify real data
as real (label 1) and fake data as fake (label 0).

Consists of an encoder and a decoder, widely used for tasks like machine translation and text summarization,
where it maps input sequences to output sequences, effectively capturing contextual information
and generating meaningful responses. It utilizes recurrent neural networks or transformers to
encode source sequences into fixed-length representations and decode them into target
sequences, enabling the modeling of complex relationships between inputs and outputs.

Genetic Algorithms are optimization techniques inspired by natural selection, using selection,
crossover, and mutation to evolve candidate solutions towards optimal outcomes across diverse
problem domains. They efficiently explore large search spaces, providing robust solutions to
complex optimization problems, albeit with sensitivity to parameter settings and computational
demands.

Actor-Critic methods combine value-based and policy-based approaches by maintaining both a policy network (actor) and a value network (critic). The critic evaluates actions, providing feedback to the actor to improve its policy.

DQN is an extension of Q-Learning that uses deep neural networks to approximate the action-value
function. It employs experience replay and target networks to stabilize training and improve
sample efficiency.

Q-Learning is a model-free RL algorithm that learns an action-value function Q(s, a), which estimates the expected cumulative reward of taking action a in state s. It uses the Bellman equation to update the Q-values iteratively.
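
Example (a minimal tabular sketch on an invented 5-state corridor environment) of the Q-learning update just described:

```python
# Tabular Q-learning sketch: action 1 moves right, action 0 moves left, and the
# agent earns a reward of 1 for reaching the rightmost state (environment is invented).
import numpy as np

n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
lr, gamma, epsilon = 0.1, 0.9, 0.1
rng = np.random.default_rng(0)

def choose_action(q_row):
    # epsilon-greedy with random tie-breaking among equally good actions
    if rng.random() < epsilon:
        return int(rng.integers(n_actions))
    best = np.flatnonzero(q_row == q_row.max())
    return int(rng.choice(best))

for episode in range(300):
    s = 0
    for _ in range(200):                       # cap episode length
        a = choose_action(Q[s])
        s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        r = 1.0 if s_next == n_states - 1 else 0.0
        # Bellman update: Q(s,a) <- Q(s,a) + lr * (r + gamma * max_a' Q(s',a') - Q(s,a))
        Q[s, a] += lr * (r + gamma * np.max(Q[s_next]) - Q[s, a])
        s = s_next
        if s == n_states - 1:
            break

print(np.round(Q, 2))                          # "move right" should dominate in every state
```
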
ARIMA is a popular linear model used for time series forecasting. It combines autoregression (AR),
differencing (I), and moving average (MA) components to capture different aspects of the time
series data.
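
Example (a short sketch, assuming statsmodels and a synthetic monthly series) where order=(p, d, q) maps to the AR, differencing, and MA components described above:

```python
# ARIMA sketch with statsmodels (library and data are assumptions).
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Synthetic monthly series with trend and noise.
idx = pd.date_range("2020-01-01", periods=100, freq="MS")
y = pd.Series(np.linspace(10, 60, 100) + np.random.normal(0, 2, 100), index=idx)

model = ARIMA(y, order=(1, 1, 1))   # p=1 (AR), d=1 (differencing), q=1 (MA)
fit = model.fit()
print(fit.forecast(steps=12))       # 12-step-ahead forecast
```
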
SARIMA extends ARIMA by incorporating seasonal components into the model to account for
periodic fluctuations in the data.

Exponential smoothing methods, such as Simple Exponential Smoothing (SES), Double Exponential
Smoothing (DES), and Triple Exponential Smoothing (Holt-Winters), are simple yet effective
techniques for forecasting time series data. They assign exponentially decreasing weights to past
observations.
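
Example (a Holt-Winters sketch, again assuming statsmodels and a synthetic seasonal series) of triple exponential smoothing:

```python
# Holt-Winters (triple exponential smoothing) sketch; statsmodels is an assumed dependency.
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

idx = pd.date_range("2019-01-01", periods=48, freq="MS")
seasonal = 10 * np.sin(2 * np.pi * np.arange(48) / 12)
y = pd.Series(50 + 0.5 * np.arange(48) + seasonal + np.random.normal(0, 1, 48), index=idx)

hw = ExponentialSmoothing(y, trend="add", seasonal="add", seasonal_periods=12).fit()
print(hw.forecast(6))   # six months ahead
```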

STL decomposes time series data into seasonal, trend, and remainder components using a process
of repeated filtering.
VAR models are used for multivariate time series forecasting, where multiple time series variables
are modeled simultaneously as a system of equations.
State space models, including the Kalman Filter and Bayesian Structural Time Series (BSTS) models,
represent time series data as a latent state process evolving over time.
LSTM networks are a type of recurrent neural network (RNN) capable of learning long-term
dependencies in sequential data, making them well-suited for time series forecasting tasks.

Gaussian processes are a flexible non-parametric Bayesian approach for modeling time series data.
They model the distribution over functions and can capture complex patterns in the data.
Prophet is a forecasting tool developed by Facebook that decomposes time series data into trend,
seasonality, and holiday effects and utilizes a piecewise linear or logistic growth curve model.

Neural Prophet is an extension of Facebook's Prophet that incorporates neural network architectures to capture complex patterns in time series data.

Wavelet transforms are used for time-frequency analysis of non-stationary time series data,
allowing for both time and frequency localization.

DLMs are Bayesian state space models that can be used for time series analysis and forecasting by
specifying dynamic relationships between observed and unobserved variables.

Projects data onto the directions of maximum variance. Commonly used for feature extraction and
data visualization.
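
Example (a brief sketch, assuming scikit-learn and the iris data) of PCA projecting onto the two directions of maximum variance:

```python
# PCA sketch: project onto the two components with the largest variance.
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

X, _ = load_iris(return_X_y=True)
X_scaled = StandardScaler().fit_transform(X)   # scale features before PCA

pca = PCA(n_components=2)
X_2d = pca.fit_transform(X_scaled)

print("explained variance ratio:", pca.explained_variance_ratio_)
print("projected shape:", X_2d.shape)
```
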
Projects data to a lower-dimensional space by maximizing class separability. Often used for
supervised classification tasks.
Models observed variables and their linear combinations using latent factors. Used for uncovering
hidden relationships in the data.
Decomposes multivariate signals into additive, independent components. Commonly used in signal
processing and for separating mixed signals.
Projects data into a lower-dimensional space while preserving pairwise distances as well as
possible. Used for data visualization.
Reduces dimensions by modeling pairwise similarities and preserving local structure. Primarily used
for visualization of high-dimensional data.
Preserves geodesic distances between all points. Captures the intrinsic geometry of the data
manifold.
Preserves local linear relationships among neighboring points. Good for unwrapping manifolds in
high-dimensional data.
Extends PCA using kernel methods to handle non-linear relationships. Used for capturing non-
linear structures.
Neural network-based method that reduces dimensions while preserving topological properties.
Used for clustering and visualization.
Neural networks trained to compress and reconstruct data. Capture complex non-linear
relationships and are used for unsupervised learning.
Decomposes a matrix into singular vectors and singular values. Used for latent semantic analysis,
noise reduction, and feature extraction.
Factorizes a matrix into non-negative factors. Useful for parts-based representation in image
processing and text mining.
Uses Gaussian processes to learn a latent space representation. Captures uncertainty in the
embedding process.
Uses diffusion processes to model the data geometry. Preserves the global data structure and is
robust to noise.

Hyperparameters

fit_intercept, normalize

degree, include_bias

alpha, solver

alpha, max_iter

alpha, l1_ratio

C, kernel, gamma

n_estimators, learning_rate, max_depth, min_child_weight, gamma, subsample, colsample_bytree, colsample_bylevel, colsample_bynode, reg_alpha, reg_lambda, scale_pos_weight, max_delta_step, objective, booster, tree_method, grow_policy, verbosity, sampling_method

criterion, splitter, max_depth, min_samples_split, min_samples_leaf, min_weight_fraction_leaf, max_features, random_state, max_leaf_nodes, min_impurity_decrease, ccp_alpha
n_estimators, criterion, max_depth, min_samples_split, min_samples_leaf, min_weight_fraction_leaf, max_features, max_leaf_nodes, min_impurity_decrease, bootstrap, oob_score, n_jobs, random_state, verbose, warm_start, ccp_alpha, max_samples

n_iter, tol, alpha_1, alpha_2, lambda_1, lambda_2, compute_score, fit_intercept, normalize

n_neighbors, weights

alpha, fit_prior, class_prior

C, kernel, gamma

max_depth, min_samples_split, min_samples_leaf

penalty, C, solver
n_estimators, learning_rate, max_depth, min_samples_split, min_samples_leaf, min_weight_fraction_leaf, max_features,
max_leaf_nodes, min_impurity_decrease, min_impurity_split, init, random_state, verbose, warm_start, presort,
validation_fraction, n_iter_no_change, tol, ccp_alpha

n_estimators, criterion, max_depth, min_samples_split, min_samples_leaf, min_weight_fraction_leaf, max_features, max_leaf_nodes, min_impurity_decrease, min_impurity_split, bootstrap, oob_score, n_jobs, random_state, verbose, warm_start, class_weight, ccp_alpha, max_samples

n_clusters, max_iter

bandwidth, seeds, bin_seeding, min_bin_freq, cluster_all, n_jobs, max_iter, verbose


n_clusters, affinity, linkage, distance_threshold, compute_full_tree, connectivity, compute_distances, memory, verbose

eps, min_samples, metric, metric_params, algorithm, leaf_size, p, n_jobs

n_clusters, affinity, linkage, distance_threshold, compute_full_tree, connectivity, compute_distances, memory, verbose

min_support, min_confidence, max_length, verbose, show_progress, n_jobs, random_state

min_support, max_length, verbose, show_progress, n_jobs, random_state

min_support, support, min_confidence, max_length, max_itemsets, n_jobs, verbose, random_state


n_estimators, criterion, max_depth, min_samples_split, min_samples_leaf, min_weight_fraction_leaf, max_features,
max_leaf_nodes, min_impurity_decrease, min_impurity_split, bootstrap, oob_score, n_jobs, random_state, verbose,
warm_start, class_weight, ccp_alpha, max_samples

n_estimators, learning_rate, max_depth, min_child_weight, gamma, subsample, colsample_bytree, colsample_bylevel, colsample_bynode, reg_alpha, reg_lambda, scale_pos_weight, max_delta_step, objective, booster, tree_method, grow_policy, verbosity, sampling_method

base_estimator, n_estimators, learning_rate, algorithm, random_state


num_leaves, max_depth, learning_rate, n_estimators, min_child_samples, subsample, colsample_bytree, reg_alpha, reg_lambda, min_split_gain, early_stopping_rounds, objective, boosting_type, metric, num_threads, random_state
learning_rate, n_estimators, max_depth, min_samples_split, min_samples_leaf, max_leaf_nodes, min_impurity_decrease, subsample, max_features, reg_alpha, reg_lambda, loss, early_stopping_rounds, warm_start, init
iterations, learning_rate, depth, max_leaves, l2_leaf_reg, bootstrap_type, subsample, colsample_bylevel,
colsample_bynode, max_bin, cat_features, reg_lambda, min_child_samples, eta_decay, loss_function,
eval_metric, nan_mode, random_seed, early_stopping_rounds, scale_pos_weight, train_dir, task_type, verbose

n_estimators, learning_rate, max_depth, min_samples_split, min_samples_leaf, max_leaf_nodes, subsample, max_features, reg_alpha, reg_lambda, loss, early_stopping_rounds, warm_start, init

num_nodes, num_layers, kernel_size, dilation_coefficient, dropout, output_dim, learning_rate, weight_decay, batch_size, num_epochs, early_stopping, patience, random_state, verbose

input_shape, include_top, weights, input_tensor, pooling, classes, classifier_activation


filter_size, number of filters, stride, padding, pooling, activation function, learning rate, batch size, number of
epochs, optimizer
filter_size, number of filters, stride, padding, pooling, activation function, learning rate, batch size, number of
epochs, optimizer

input_shape, include_top, weights, input_tensor, pooling, classes, classifier_activation

input_shape, include_top, weights, input_tensor, pooling, classes, classifier_activation

alpha, depth_multiplier, dropout, input_shape, include_top, weights, input_tensor, pooling, classes


width_coefficient, depth_coefficient, resolution, dropout_rate, num_classes, include_top, weights, input_tensor,
pooling, classifier_activation

growth_rate, block_config, num_init_features, bn_size, drop_rate, num_classes, transition_rate, compression

units, activation, recurrent_activation, dropout

units, activation, recurrent_activation, dropout

num_filters, kernel_size, strides, padding, activation, recurrent_activation, use_bias, kernel_initializer, recurrent_initializer, bias_initializer, kernel_regularizer, recurrent_regularizer, bias_regularizer, activity_regularizer, dropout, recurrent_dropout, return_sequences, return_state, go_backwards, stateful, unroll

input_shape, include_top, weights, input_tensor, pooling, classes, classifier_activation

units, activation, return_sequences, return_state, go_backwards, stateful, dropout, recurrent_dropout, kernel_initializer, recurrent_initializer, bias_initializer, kernel_regularizer, recurrent_regularizer, bias_regularizer, activity_regularizer, kernel_constraint, recurrent_constraint, bias_constraint, merge_mode

latent_dim, output_shape, num_filters, kernel_size, strides, padding, activation, batch_norm, dropout_rate, output_activation, random_state
input_shape, num_filters, kernel_size, strides, padding, activation, batch_norm, dropout_rate, output_activation,
random_state

encoder_embedding_dim, encoder_hidden_dim, encoder_num_layers, encoder_dropout, decoder_embedding_dim, decoder_hidden_dim, decoder_num_layers, decoder_dropout, teacher_forcing_ratio, max_length, output_dim, attention, bidirectional, attention_type, padding_idx, sos_token, eos_token, batch_first, random_state

population_size, crossover_rate, mutation_rate, elite_rate, max_generations, early_stopping, fitness_function

learning_rate, discount_factor, entropy_regularization, value_loss_coefficient, max_grad_norm, update_frequency, num_workers, gamma, tau, batch_size, num_episodes, max_steps_per_episode

learning_rate, discount_factor, batch_size, memory_size

learning_rate, discount_factor, epsilon

p, d, q, P, D, Q, m, trend, method, maxiter, start_params, solver, transparams, disp, seasonal

p, d, q, P, D, Q, m, trend, method, maxiter, start_params, solver, transparams, disp, seasonal_order, enforce_stationarity, enforce_invertibility, measure_error, time_varying_regression, mle_regression, simple_differencing

trend, damped_trend, seasonal, seasonal_periods, initial_level, initial_trend, initial_seasonal, use_boxcox, remove_bias, smoothing_level, smoothing_trend, smoothing_seasonal, damping_trend, optimized, start_params, method, use_brute
period, seasonal, trend, low_pass, robust, seasonal_deg, trend_deg, low_pass_deg, seasonal_jump, trend_jump, low_pass_jump

p, trend, seasonal, exog, max_lags, ic, solver, trend_offset, cov_type

initial_state_mean, initial_state_covariance, transition_matrix, transition_covariance, observation_matrix, observation_covariance, process_noise, measure_noise, state_intercept, design_matrix, selection_matrix, filter_method, inversion_method, stability_method, conserve_memory, time_invariant, loglikelihood_burn
units, activation, recurrent_activation, dropout, use_bias, kernel_initializer, recurrent_initializer, bias_initializer, unit_forget_bias, kernel_regularizer, bias_regularizer, activity_regularizer, recurrent_dropout, stateful, go_backwards, time_major, unroll, input_shape, batch_input_shape, input_length, implementation, mask_zero

length_scale, alpha, periodicity, noise_level, kernel, optimizer, n_restarts_optimizer, normalize_y, nugget

growth, changepoints, seasonality_mode, seasonality_prior_scale, changepoint_prior_scale, holidays_prior_scale, holidays, weekly_seasonality, daily_seasonality, fourier_order, interval_width, changepoint_range, add_regressor, regressor_prior_scale, mcmc_samples

n_lags, n_forecasts, n_changepoints, changepoints_range, trend_reg, yearly_seasonality, weekly_seasonality, daily_seasonality, seasonality_mode, learning_rate, batch_size, loss_func, num_hidden_layers, d_hidden, ar_sparsity, uncertainty_sampling, uncertainty_dist, optimizer_kwargs, learning_rate_scheduler, l1_reg, l2_reg, gradient_clip_value, early_stopping, early_stopping_patience, reduce_on_plateau_patience, extra_features

wavelet, level, mode, boundary, filter_bank, feature_extraction_method, num_features, model_architecture, learning_rate, num_epochs, batch_size, optimizer, loss_function, regularization, dropout_rate, early_stopping, validation_split, shuffle, random_state

level_var, trend_var, seasonal_var, cycle_var, autoregressive_order, observation_covariance, transition_covariance, initial_state_mean, initial_state_covariance, observed_values, date_col, seasonality, trend, cycle, autoregressive, holidays

n_components, copy, whiten, svd_solver, tol, iterated_power, random_state

solver, shrinkage, priors, n_components, store_covariance, tol

n_components, tol, copy, max_iter, noise_variance_init, svd_method, iterated_power, random_state

algorithm, fun, n_components, whiten, max_iter, tol, fun_args, random_state

n_components, metric, n_init, max_iter, verbose, eps, dissimilarity, random_state, n_jobs


n_components, perplexity, early_exaggeration, learning_rate, n_iter, n_iter_without_progress, min_grad_norm,
metric, init, verbose, random_state, method, angle
n_neighbors, n_components, eigen_solver, tol, max_iter, path_method, neighbors_algorithm, metric, metric_params, p,
random_state

n_neighbors, n_components, reg, eigen_solver, tol, max_iter, method, neighbors_algorithm, random_state


n_components, kernel, gamma, degree, coef0, kernel_params, alpha, fit_inverse_transform, eigen_solver, tol, max_iter,
remove_zero_eig, random_state, copy_X

map_shape, learning_rate, decay_function, neighborhood_function, sigma, batch_size, random_state, n_jobs


encoding_dim, activation, optimizer, loss, metrics, epochs, batch_size, validation_split, shuffle, verbose, callbacks,
learning_rate, dropout_rate, regularization

n_components, algorithm, randomized, n_iter, random_state


n_components, init, solver, beta_loss, tol, max_iter, random_state, alpha, l1_ratio

n_components, kernel, max_iter, tol, verbose, random_state, alpha, optimizer, callback

n_components, alpha, neighbors_algorithm, random_state, metric, neighborhood_params, affinity_params



Role and Range


'fit_intercept' determines whether the model will include an intercept term (the y-intercept) in the equation. Boolean range.
'normalize' determines whether the input features X should be normalized before fitting the model. Boolean range.

'degree' determines the highest power of x (the independent variable) to include in the polynomial features. Range (any positive integer).
'include_bias' determines whether an additional column of ones (the bias term, or intercept) is included in the transformed feature matrix. Boolean range.

'alpha' controls the regularization strength applied to the linear regression model. Range (non-negative real number).
'solver' specifies the algorithm used to optimize the ridge regression model.

'alpha' controls the strength of the regularization applied to the model. Range (non-negative real number).
'max_iter' determines the maximum number of iterations allowed for the solver to converge to a solution. Range (positive integer).
'alpha' controls the balance between the L1 and L2 regularization terms in the cost function. Range (non-negative real number).
'l1_ratio' determines the balance between the L1 (Lasso) and L2 (Ridge) regularization penalties in the cost function. Range (0 - 1).

'C' controls the regularization strength, which balances the trade-off between maximizing the margin and minimizing the classification error.
'kernel' selects the type of function used to map data into a higher-dimensional space, enabling the SVM to capture complex patterns.
'gamma' controls the influence of individual training examples. Range (positive real values).

'n_estimators' specifies the number of boosting rounds, or equivalently, the number of trees to be built sequentially. Range (10 and up).
'learning_rate' controls the step size at each iteration while moving toward a minimum of the loss function. Range (0.001 - 1).
'max_depth' determines how deeply each tree is allowed to grow during the boosting process. Range (1 - 20 or more).
'min_child_weight' specifies the minimum sum of instance weights (also known as the minimum sum of Hessians) needed in a child node.
'gamma' controls the minimum loss reduction required to make a further partition on a leaf node.
'subsample' controls the fraction of the training data that is randomly sampled to train each tree.
'colsample_bytree' controls the fraction of features (columns) to be randomly sampled for each tree.
'colsample_bylevel' controls the fraction of features (columns) to be randomly sampled at each level of the tree construction process. Range (0.1 - 1).
'colsample_bynode' controls the fraction of features (columns) sampled at each node of the tree construction process. Range (0.1 - 1).
'reg_alpha' applies L1 regularization to feature weights, penalizing large weights. Range (0 - 10).
'reg_lambda' applies L2 regularization to reduce overfitting and improve model generalization. Range (0.0 - 10.0).
'scale_pos_weight' balances the classes by adjusting the weight of positive examples.
'max_delta_step' limits the maximum step that a node weight can take during each boosting iteration. Range (0 - 10).
'objective' specifies the learning task and the corresponding loss function that the model aims to optimize. String type range.
'booster' specifies the type of boosting algorithm to use during training. String type range.
'tree_method' specifies the method used to build decision trees during training.
'grow_policy' determines how trees are grown during the boosting process. Categorical range.
'verbosity' controls the amount of information printed during the training process. Discrete integer values range.
'sampling_method' specifies the method used to sample data points or features during the tree construction process.

'criterion' specifies the function to measure the quality of a split. Range (string values).
'splitter' determines the strategy used to split each node during the construction of the tree. Range (string values).
'max_depth' sets the maximum number of levels in the tree. Range (any positive integer).
'min_samples_split' specifies the minimum number of samples required to split an internal node. Range (positive integers and floats).
'min_samples_leaf' specifies the minimum number of samples required to be at a leaf node. Range (positive integers and floats).
'min_weight_fraction_leaf' specifies the minimum fraction of the total sum of weights (of the input samples) required to be at a leaf node. Range (0.0 to 0.5).
'max_features' specifies the number of features to consider when looking for the best split at each node. Range (integer, float or string).
'random_state' is used to initialize the random number generator. Range (integer values or None).
'max_leaf_nodes' specifies the maximum number of leaf nodes allowed in the tree. Range (positive integer).
'min_impurity_decrease' specifies the minimum amount of impurity decrease required for a split to occur.
'ccp_alpha' controls the complexity of the tree through cost-complexity pruning.
'n_estimators' specifies the number of decision trees to be created in the forest. Range (positive integer values).
'criterion' determines the function used to measure the quality of a split at each node of the decision tree.
'max_depth' controls the maximum depth allowed for each decision tree in the forest.
'min_samples_split' specifies the minimum number of samples required to split an internal node. Range (positive integer and float values).
'min_samples_leaf' specifies the minimum number of samples required to be at a leaf node. Range (positive integer and float values).
'min_weight_fraction_leaf' specifies the minimum fraction of the total of weights (of all the input samples) required to be at a leaf node. Range (0.0 - 0.5).
'max_features' specifies the number of features to consider when looking for the best split at each node of the decision trees within the forest. Range (integer, float or string).
'random_state' controls the randomness involved in the bootstrapping of the samples and the feature selection for splits in each tree, seeding the random number generator used by the algorithm. Range (integer values or None).
'max_leaf_nodes' controls the maximum number of leaf nodes allowed in each decision tree within the forest. Range (positive integer).
'min_impurity_decrease' controls the threshold for splitting nodes based on impurity decrease.
'bootstrap' determines whether bootstrap samples are used when building trees. Range (Boolean values).
'oob_score' controls whether out-of-bag (OOB) samples are used to estimate generalization accuracy. Range (Boolean values).
'n_jobs' specifies the number of parallel jobs to run when fitting and predicting with the Random Forest algorithm. Range (positive integer greater than or equal to -1).
'verbose' controls the verbosity of the algorithm's output during training. Range (Boolean values).
'warm_start' allows you to fit additional trees to an existing forest while retaining the trees that were already fitted. Range (Boolean values).
'max_samples' specifies the maximum number or proportion of samples used to train each tree. Range (integer or a float value between 0 and 1).
'n_iter' specifies the number of iterations or samples used in the Markov chain Monte Carlo (MCMC) sampling process for posterior inference.
'tol' specifies the tolerance for the convergence criterion during the optimization process. Range (positive float values).
'alpha_1' determines the shape and strength of the prior distribution applied to the regression coefficients. Range (positive float values).
'alpha_2' controls the strength of the prior distribution.
'lambda_1' controls the strength of the prior applied to the regression coefficients. Range (positive float values).
'lambda_2' controls the strength of the prior distribution applied to the regression coefficients. Range (positive float values).
'compute_score' determines whether or not to compute the log marginal likelihood of the observations. Range (Boolean).
'fit_intercept' determines whether to calculate the intercept term in the linear model. Range (Boolean).
'normalize' determines whether the predictor variables should be normalized before fitting the model. Range (Boolean).

'n_neighbors' determines the number of neighbors to consider when making predictions for a new data point. Range (positive integer).
'weights' determines how the contributions of neighboring points are weighted when making predictions for a new data point.

'alpha' is applied to avoid zero probabilities for features that are not present in the training data. Range (0 to positive infinity).
'fit_prior' controls whether class prior probabilities are learned from the data during training. Range (Boolean).
'class_prior' allows you to specify the prior probabilities of the classes explicitly.

'C' controls the trade-off between achieving a low error on the training data and minimizing the complexity of the decision boundary.
'kernel' specifies the type of kernel function to be used in the algorithm. Range (string values).
'gamma' controls the influence of individual training examples. Range (positive float values).

'max_depth' controls the maximum depth of the tree. It determines the longest path from the root node to a leaf node. Range (positive integer).
'min_samples_split' determines the minimum number of samples required to split an internal node during the construction of the tree. Range (positive integer and float values).
'min_samples_leaf' specifies the minimum number of samples required to be at a leaf node. Range (positive integer and float values).

'penalty' determines the type of regularization to apply to the model. Range (string values or None).
'C' controls the regularization applied to the model. Range (positive float values).
'solver' determines the algorithm used to optimize the model. Range (string values).
'n_estimators' specifies the number of boosting stages (decision trees) to be iteratively built. Range (positive integer values).
'learning_rate' determines the contribution of each tree (or each iteration) to the overall ensemble.
'max_depth' controls the maximum depth of each individual tree in the ensemble.
'min_samples_split' determines the minimum number of samples required to split an internal node.
'min_samples_leaf' specifies the minimum number of samples required to be at a leaf node after a split. Range (positive integer and float values).
'min_weight_fraction_leaf' specifies the minimum fraction of the total input samples required to be in a leaf node. Range (0.0 - 0.5).
'max_features' specifies the number of features to consider when looking for the best split at each node. Range (integer, float and string).
'min_impurity_decrease' determines the minimum impurity decrease required for a split to occur at a node. Range (float values).
'min_impurity_split' is used to determine the minimum impurity threshold below which a node will not be split. Range (0 - 0.5).
'init' specifies the initial estimator that will be used to compute the initial predictions.
'random_state' serves as a source of randomness that controls the stochastic parts of the algorithm. Range (non-negative integer value).
'verbose' controls the verbosity of the algorithm's output during training.
'warm_start' allows reusing previously learned information and continuing training from that point onwards with additional iterations. Range (Boolean).
'presort' determines whether the data should be presorted for faster tree building. Range (Boolean).
'validation_fraction' specifies the fraction of training data to set aside as a validation set for early stopping. Range (0 - 1).
'n_iter_no_change' controls early stopping based on a lack of improvement in the validation score.
'tol' determines the stopping criterion for the optimization.
'ccp_alpha' controls the complexity of the individual trees during the training process. Range (0 and extends to positive infinity).

'n_estimators' determines the number of decision trees to be used in the ensemble. Range (1 and extends to any positive integer).
'criterion' determines the function used to measure the quality of a split when building decision trees within the forest.
'max_depth' controls the maximum depth of each decision tree in the ensemble.
'min_samples_split' determines the minimum number of samples required to split an internal node of each decision tree in the ensemble. Range (2 to infinity).
'min_samples_leaf' determines the minimum number of samples required to be at a leaf node. Range (positive integer or float value).
'min_weight_fraction_leaf' specifies the minimum weighted fraction of the input samples required to be at a leaf node. Range (0 - 0.5).
'max_features' determines the number of features to consider when looking for the best split at each node. Range (integer, float or string).
'max_leaf_nodes' controls the maximum number of leaf nodes allowed in each decision tree within the ensemble. Range (2 to infinity).
'min_impurity_decrease' specifies the minimum impurity decrease required for a split to happen during the construction of each decision tree within the ensemble.
'min_impurity_split' is the minimum impurity decrease required for a split to occur during the construction of each decision tree within the ensemble.
'bootstrap' controls whether bootstrap samples are used when building trees in the ensemble. Range (Boolean).
'oob_score' controls whether to use out-of-bag samples to estimate the generalization accuracy of the model. Range (Boolean).
'n_jobs' determines the number of parallel jobs to run during the model training process. Range (integer).
'random_state' serves as a seed for the random number generator. Range (integer values or RandomState instance).
'verbose' controls whether the algorithm will produce output messages during training. Range (integer values or Boolean).
'warm_start' reuses the solution of the previous call to fit() as the initialization for the next call. Range (Boolean).
'class_weight' allows you to assign different weights to classes in the dataset.
'ccp_alpha' controls the complexity of the trees in the forest by enforcing pruning based on the Cost-Complexity Pruning (CCP) criterion.
'max_samples' determines the maximum number of samples to be drawn from the dataset to train each tree.

'n_clusters' specifies the number of clusters the algorithm should form. Range (1 to the number of samples in the dataset).
'max_iter' specifies the maximum number of iterations the algorithm will run for a single run. Range (positive integer).

'bandwidth' determines the radius of the window used to define the neighborhood around each point. Range (positive real number).
'seeds' specifies the initial points from which the algorithm starts its iterations. Range (None, array).
'bin_seeding' is used to accelerate the clustering process by initially seeding from binned points. Range (Boolean).
'min_bin_freq' specifies the minimum number of points a bin must contain to be used as a potential seed point during the binning process. Range (1 to 10 or higher).
'cluster_all' determines whether all points are assigned to a cluster, even if they are far from any mode (peak) of the data distribution. Range (Boolean).
'n_jobs' determines the number of parallel jobs used by the algorithm.
'max_iter' specifies the maximum number of iterations the algorithm will perform to converge to the cluster centroids.
'verbose' controls the verbosity or level of detail of the algorithm's output during training. Range (non-negative integer).
'n_clusters' indicates the number of clusters the algorithm should ultimately produce. Range (1 to the total number of data points).
'affinity' specifies the metric used to compute the linkage between clusters. Range (distance metrics).
'linkage' specifies the method used to compute the distance between clusters.
'distance_threshold' specifies the distance threshold above which clusters will not be merged.
'compute_full_tree' determines whether to compute the full tree of hierarchical clusters or only a truncated version of it. Range (Boolean).
'connectivity' specifies the connectivity matrix representing the connectivity constraints of the data.
'compute_distances' specifies whether distances between clusters should be computed and stored.
'memory' is used to specify caching of intermediate results during the clustering process.
'verbose' controls the verbosity level of the algorithm's output during the clustering process. Range (integer values).

'eps' controls the maximum distance between two samples for them to be considered as neighbors.
'min_samples' determines the minimum number of points required to form a dense region. Range (integer, starting at 2).
'metric' specifies the distance metric to use when calculating the distance between points in the dataset.
'metric_params' allows you to specify additional parameters for the distance metric.
'algorithm' specifies the algorithm used for the nearest-neighbor search during the clustering process. Range ('auto', 'ball_tree', 'kd_tree', 'brute').
'leaf_size' is used for efficient nearest neighbor search. Range (integer value).
'p' is used in the Minkowski distance metric. Range (positive real numbers).
'n_jobs' specifies the number of parallel jobs to run for neighbor search and other parallelizable tasks. Range (-1 to the number of available CPU cores).
'n_clusters' specifies the number of clusters to find. Range (2 to the total number of data points in the dataset).
'affinity' specifies the metric used to compute the linkage between the clusters. Range (distance metrics).
'linkage' specifies the method used to compute the distance between clusters.
'distance_threshold' specifies the threshold to use when deciding whether to merge clusters.
'compute_full_tree' determines whether to compute the full tree of hierarchical clusters or only a truncated version of it. Range (Boolean).
'connectivity' specifies the connectivity matrix representing the connectivity constraints of the data.
'compute_distances' specifies whether distances between points should be computed and stored.
'memory' is used to specify caching of intermediate results.
'verbose' controls the verbosity level of the algorithm's output during the clustering process. Range (integer values).

'min_support' specifies the minimum support threshold for frequent itemsets. Range (0 - 1).
'min_confidence' represents the minimum confidence threshold for association rules to be considered.
'max_length' specifies the maximum number of items (or features) that can appear in an itemset. Range (1 to the number of unique items in the dataset).
'verbose' controls the verbosity level of the algorithm's output.
'show_progress' controls whether progress is displayed during the execution of the algorithm. Range (Boolean).
'n_jobs' specifies the number of parallel jobs to run for itemset generation and other parallelizable tasks. Range (-1 to the number of available CPU cores).
'random_state' controls the random seed used for generating random numbers. Range (integer value).
'min_support' specifies the minimum support threshold for frequent itemsets. Range (0 - 1).
'max_length' specifies the maximum number of items (or features) that can appear in an itemset.
'verbose' controls the verbosity level of the algorithm's output during the search.
'show_progress' controls whether progress is displayed during the execution of the algorithm. Range (Boolean).
'n_jobs' specifies the number of parallel jobs to run for the search and other parallelizable tasks. Range (-1 to the number of available CPU cores).
'random_state' controls the random seed used for generating random numbers. Range (integer value).

'min_support' specifies the minimum support threshold for frequent itemsets. Range (0 - 1).
'support' determines the minimum support threshold for identifying frequent itemsets. Range (0 - 1).
'min_confidence' represents the minimum confidence threshold for association rules.
'max_length' specifies the maximum number of items (or features) that can appear in an itemset. Range (1 to the number of unique items in the dataset).
'max_itemsets' determines the maximum number of frequent itemsets to generate.
'n_jobs' specifies the number of parallel jobs to run. Range (-1 to the number of available CPU cores).
'verbose' controls the verbosity level of the algorithm's output during the process. Range (integer values).
'random_state' controls the random seed used for generating random numbers. Range (integer value).
'n_estimators' determines the number of decision trees to include in the ensemble. Range (10 - infinity).
'criterion' determines the function used to measure the quality of a split.
'max_depth' controls the maximum depth of the decision trees in the ensemble.
'min_samples_split' determines the minimum number of samples required to split an internal node during the construction of each decision tree in the ensemble. Range (2 to 20).
'min_samples_leaf' determines the minimum number of samples required to be at a leaf node. Range (integer value greater than or equal to 1).
'min_weight_fraction_leaf' specifies the minimum weighted fraction of samples required to be at a leaf node. Range (0 - 0.5).
'max_features' determines the number of features to consider when looking for the best split. It can be specified as an integer, float, or string.
'max_leaf_nodes' specifies the maximum number of leaf nodes that any individual tree in the ensemble is allowed to have. Range (2 and can go up to infinity).
'min_impurity_decrease' specifies the minimum reduction in impurity that is required for a split to be considered in a node.
'min_impurity_split' is a threshold for early stopping in tree growth.
'bootstrap' determines whether bootstrap samples are used when building individual trees. Range (Boolean).
'oob_score' determines whether to estimate the generalization accuracy of the model using out-of-bag samples. Range (Boolean).
'n_jobs' controls the number of CPU cores used to perform parallel processing during model training and prediction.
'random_state' controls the randomness of various aspects of the algorithm, ensuring reproducibility.
'verbose' controls the verbosity level of the output.
'warm_start' allows you to reuse the solution of a previous fit and add more estimators to the ensemble. Range (Boolean).
'class_weight' allows you to assign different weights to classes in the dataset.
'ccp_alpha' controls the complexity of the trees in the ensemble by penalizing complex trees. Range (0 to positive infinity).
'max_samples' determines the maximum number or proportion of samples used to train each tree. Range (0 - 1).

'n_estimators' specifies the number of boosting rounds, i.e. the number of trees that the model will build. Range (10 - infinity).
'learning_rate' controls the step size at each iteration while moving toward a minimum of the loss function. Range (0.01 and up).
'max_depth' specifies the maximum depth of the individual trees.
'min_child_weight' specifies the minimum sum of instance weights needed in a child. Range (1 to infinity).
'gamma' controls the minimum loss reduction required to make a further partition on a leaf node of the tree. Range (0 to positive infinity).
'subsample' controls the fraction of the training data that is randomly sampled to grow each tree. Range (0.1 - 1.0).
'colsample_bytree' controls the fraction of features to consider when constructing each tree. Range (0.1 - 1.0).
'colsample_bylevel' controls the fraction of features to consider when constructing each level of a tree. Range (0.1 - 1.0).
'colsample_bynode' controls the fraction of features to consider at each node split. Range (0.1 - 1.0).
'reg_alpha' controls the L1 regularization term on weights. It penalizes large individual weights, encouraging sparsity in the model. Range (0, ∞).
'reg_lambda' controls the L2 regularization term on weights. Range (0, ∞).
'scale_pos_weight' is used to balance the class distribution in binary classification tasks.
'max_delta_step' is used to help control the step size during the update of the tree weights.
'objective' specifies the loss function to be optimized.
'booster' specifies the base learner used in the boosting process. Range (categorical).
'tree_method' specifies the method used to build decision trees during training.
'grow_policy' determines how trees are grown during the boosting process. Range (categorical).
'verbosity' controls the amount of information printed during training. Range (0 - 3).
'sampling_method' determines the method used for sampling data.

Regression: Linear, Polynomial, Ridge, Lasso, Elastic Net, Bayesian, Quantile, Support Vector
Classification: Support Vector, Logistic, KNN
