ML Exp. 1-10 Output


EXPERIMENT NO. 1

AIM: Introduction to the Anaconda and Google Colab platforms.

RESOURCES REQUIRED:
H/W: P4 machine
S/W: Google Colaboratory, Anaconda Navigator, Jupyter Notebook

THEORY:
Introduction to Anaconda
The Anaconda distribution is a comprehensive platform for data science and scientific computing in Python. Anaconda simplifies the process of setting up and working with the various libraries and tools commonly used in data science, machine learning, and scientific computing.

Experiment Steps:
1. Installation of Anaconda
1. Download Anaconda:
• Go to the Anaconda website.
• Choose the appropriate version for your operating system (Windows, macOS, Linux) and download the installer.
2. Install Anaconda:
• Follow the installation instructions for your operating system.
• During installation, you can choose to add Anaconda to your system PATH, which makes it easier to access Anaconda from the command line.
3. Verify Installation:
• Open a new terminal or command prompt.
• Type conda --version to check whether the installation was successful.
2. Anaconda Navigator
1. Launch Anaconda Navigator:
• Open Anaconda Navigator from your applications or Start menu.
2. Explore Navigator:
• Get familiar with the Anaconda Navigator interface.
• Identify the key components such as Home, Environments, and Jupyter Notebook.
3. Creating and Managing Environments
1. Create a New Environment:
• Use Anaconda Navigator to create a new environment.
• Choose the Python version and give your environment a name.
2. Manage Environments:
• Activate and deactivate environments.
• Install and remove packages using the Conda package manager (see the example commands below).
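For reference, the same environment operations can also be performed from the Anaconda Prompt or a terminal using standard Conda commands; the environment name ml_env below is only an example:

conda create -n ml_env python=3.9    # create a new environment named ml_env
conda activate ml_env                # activate the environment
conda install numpy pandas           # install packages into it
conda remove pandas                  # remove a package
conda deactivate                     # return to the base environment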
4. Jupyter Notebooks with Anaconda
1. Launch Jupyter Notebook:
• Open Jupyter Notebook from Anaconda Navigator.
2. Create a New Notebook:
• Create a new Jupyter Notebook within your Anaconda environment.
3. Execute Code in Notebook:
• Write and execute a simple Python code snippet in the Jupyter Notebook (a sample snippet is given below).
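A sample snippet of the kind you might run in a first notebook cell to confirm that the environment works (the array values are arbitrary):

import numpy as np

a = np.array([1, 2, 3])
print("Sum of array elements:", a.sum())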

5. Introduction to Google Colab


5.1 Overview of Google Colab
1. Go to Google Colab.
2. Overview of the interface and features.
5.2 Setting Up a Colab Notebook
1. Create a new Colab notebook.
2. Understand the collaborative features.
5.3 Collaboration Features
1. Share and collaborate on a Colab notebook.
2. Commenting and version history.
6. Comparing Google Colab and Jupyter Notebook

Feature                  | Google Colab                                     | Jupyter Notebook
Accessibility and Setup  | Cloud-based, requires no local setup.            | Requires local installation and setup.
Collaboration            | Real-time collaboration with team members.       | Collaboration requires external services or plugins.
Hardware Resources       | Free access to GPU and TPU resources.            | Relies on local hardware resources for computation.
Integration with Cloud   | Seamless integration with Google Drive.          | Limited native integration with cloud services.
Library Management       | Pre-installed libraries for machine learning.    | Requires manual installation of libraries.
Visualization Tools      | Supports Matplotlib, Seaborn, Plotly, etc.       | Similar support for visualization libraries.
Educational Use          | Ideal for teaching and learning with ease.       | Widely used in educational settings.
Offline Usage            | Limited offline functionality.                   | Full functionality available offline.
Command Line Integration | Supports shell commands within notebooks.        | Limited support; requires external plugins.
Ease of Sharing          | Shareable via links or Google Drive integration. | Sharing involves file transfer or external services.
Community Support        | Strong community support and resources.          | Well-established community with extensive resources.
Customization            | Limited customization options.                   | Highly customizable based on local environment.

7. Conclusion
This lab document provides a structured outline for an introductory experiment on Anaconda and Google Colab. In this experiment, we were introduced to the Anaconda distribution and its capabilities, and we learned how to install Anaconda, create and manage Python environments, and use Jupyter Notebooks for interactive coding.

Additional Resources
• Anaconda Documentation
• Jupyter Notebook Documentation
Installing Anaconda and Python
To learn machine learning, we will use the Python programming language in this tutorial. So, in order to use Python for machine learning, we need to install it on our computer system along with compatible IDEs (Integrated Development Environments).

In this topic, we will learn to install Python and an IDE with the help of the Anaconda distribution.

Anaconda distribution is a free and open-source platform for the Python/R programming languages. It can be easily installed on any OS, such as Windows, Linux, and macOS. It provides more than 1500 Python/R data science packages, which are suitable for developing machine learning and deep learning models.

Anaconda distribution provides installation of Python with various IDEs such as Jupyter Notebook, Spyder, and the Anaconda Prompt. Hence it is a very convenient packaged solution that you can easily download and install on your computer. It will automatically install Python and some basic IDEs and libraries with it.

Below some steps are given to show the downloading and installing process of Anaconda and
IDE:

Step 1: Download Anaconda Python:

• To download Anaconda on your system, first open your favorite browser, type "Download Anaconda Python", and then click on the first link, as shown in the image below. Alternatively, you can download it directly from this link: https://www.anaconda.com/distribution/#download-section.
• After clicking on the first link, you will reach the Anaconda download page, as shown in the image below:
• Since Anaconda is available for Windows, Linux, and macOS, you can download it for your OS type by clicking on the available options shown in the image below. It provides Python 2.7 and Python 3.7 versions; since the latest version is 3.7, we will download the Python 3.7 version. After clicking on the download option, it will start downloading on your computer.

Note: In this topic, we are downloading Anaconda for Windows; you can choose it as per your OS.
Step 2: Install Anaconda Python (Python 3.7 version):

Once the download completes, go to Downloads and double-click the ".exe" file (Anaconda3-2019.03-Windows-x86_64.exe) of Anaconda. It will open a setup window for the Anaconda installation, as shown in the image below; then click on Next.

• It will open a License Agreement window; click on the "I Agree" option and move further.
• In the next window, you will get two options for installation, as shown in the image below. Select the first option (Just Me) and click on Next.
• Now you will get a window for the installation location; here, you can leave it as the default or change it by browsing to a location, and then click on Next. Consider the image below:
• Now select the second option, and click on Install.
• Once the installation completes, click on Next.
• Now that installation is complete, tick the checkbox if you want to learn more about Anaconda and Anaconda Cloud. Click on Finish to end the process.

Note: Here, we will use the Spyder IDE to run Python programs.

Step 3: Open Anaconda Navigator

• After successful installation of Anaconda, use Anaconda Navigator to launch a Python IDE such as Spyder or Jupyter Notebook.
• To open Anaconda Navigator, press the Windows key, search for "Anaconda Navigator", and click on it. Consider the image below:
• After opening the Navigator, launch the Spyder IDE by clicking on the Launch button given below Spyder. It will open the Spyder IDE on your system.

Run your Python program in the Spyder IDE:

• Open the Spyder IDE; it will look like the image below:
• Write your first program and save it using the .py extension.
• Run the program using the triangular Run button.
• You can check the program's output in the console pane at the bottom right side.

Step 4: Close the Spyder IDE.


How to use Colaboratory

To use Colaboratory, you must have a Google account.

On your first visit, you will see a Welcome To Colaboratory notebook with links to video
introductions and basic information on how to use Colab.

Create a workbook

From the File menu, click New notebook to create a workbook.

If you are not yet logged in to a Google account, the system will prompt you to log in.

The notebook will by default have a generic name; click on the filename field to rename it.
The file type, IPYNB, is short for "IPython notebook" because IPython was the forerunner of
Jupyter Notebook.

The interface allows you to insert various kinds of cells, mainly text and code, either via shortcut buttons under the menu bar or via the Insert menu.

Because notebooks are meant for sharing, there are accommodations throughout for
structured documentation.

Code, debug, repeat

You can insert Python code to execute in a code cell. The code can be entirely standalone or
imported from various Python libraries.

A notebook can be treated as a rolling log of work, with earlier code snippets being no longer
executed in favor of later ones, or treated as an evolving set of code blocks intended for
ongoing execution. The Runtime menu offers execution options, such as Run all, Run
before or Run the focused cell, to match either approach.

Each code cell has a run icon on its left edge. You can type code into a cell and hit the run icon to execute it immediately.

If the code generates an error, the error output will appear beneath the cell. Correcting the problem and hitting run again replaces the error info with program output. In the example below, the first line of code imports the NumPy library, which is the source of the arange function. Colab has many common libraries pre-loaded for easy import into programs.
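A minimal code cell of this kind might look as follows; the range bounds are arbitrary:

import numpy as np

# arange produces evenly spaced values over a half-open interval
values = np.arange(0, 10, 2)
print(values)   # prints: [0 2 4 6 8]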

A text cell provides basic rich text using Markdown formatting by default and allows for the
insertion of images, HTML code and LaTeX formatting.

As you add text on the left side of the text cell, the formatted output appears on the right.
Once you stop editing a block, only the final formatted version shows.

Incorporating data into the notebook

After getting comfortable with the interface and using it for initial test coding, you must
eventually provide the code with data to analyze or otherwise manipulate.

Colab can mount a user's Google Drive to the VM hosting their notebook using a code cell.

Once you hit run, Google will ask for permission to mount the drive.

If you allow it to connect, you will then have access to the files in your Google Drive via the mount path given in the code cell (here, /my_drive).
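A typical mount cell is sketched below; the mount point /my_drive matches the path mentioned above, although /content/drive is the more common choice in Colab documentation:

from google.colab import drive

# Prompts for authorization, then exposes Drive files under the given mount point
drive.mount('/my_drive')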

If you prefer not to grant access to your Drive space, you can instead upload files from your local machine or from any network file space mounted as a drive.

With file access, many functions are available to read data in various ways. For example, importing the Pandas library gives access to functions such as read_csv and read_json.
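A sketch of such a read, assuming a hypothetical file data.csv stored in the mounted Drive (the path and filename are placeholders):

import pandas as pd

# Read a CSV file from the mounted Drive into a DataFrame
df = pd.read_csv('/my_drive/MyDrive/data.csv')
print(df.head())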
Save and share

By default, Colab puts notebooks in a Colab Notebooks folder under My Drive in Google
Drive.

The File menu enables notebooks to be saved as named revisions in the version history,
relocated using Move, or saved as a copy in Drive or GitHub. It also allows you to download
and upload notebooks. Tools based on Jupyter provide broad compatibility, so you can create notebooks in one place and then upload and use them in another.

You can use the Share button in the upper right to grant other Google users access to the
notebook and to copy links.

Google also provides example notebooks illustrating available resources, such as pre-trained
image classifiers and language transformers, as well as addressing common business
problems, such as working with BigQuery or performing time series analytics. It also
provides links to introductory Python coding notebooks.
Experiment No. 2

Aim: Study of Machine Learning Libraries and Tools

Objective: The objective of this experiment is to provide students with hands-on experience
in using popular machine learning libraries and tools. Participants will explore libraries such
as scikit-learn, TensorFlow, and PyTorch, and familiarize themselves with essential machine
learning tasks.
Prerequisites:
• Basic understanding of the Python programming language.
• Familiarity with fundamental machine learning concepts.
RESOURCES REQUIRED:
H/W: P4 machine
S/W: Jupyter Notebook

Theory:

Best Python libraries for Machine Learning


Machine learning is the science of programming computers so that they can learn from different types of data. According to Arthur Samuel's definition, machine learning is the "field of study that gives computers the ability to learn without being explicitly programmed". The concept of machine learning is used for solving many different kinds of real-life problems.

In earlier days, users performed machine learning tasks by manually coding all the algorithms and applying mathematical and statistical formulas. This process was time-consuming, inefficient, and tiresome compared to using Python libraries, frameworks, and modules. In today's world, users can rely on Python, the most popular and productive language for machine learning. Python has replaced many languages because it offers a vast collection of libraries that make work easier and simpler.

Here are some of the best libraries of Python used for Machine Learning:

• NumPy
• SciPy
• Scikit-learn
• Pandas
• Matplotlib
• Seaborn
• TensorFlow
• Keras
• PyTorch

Top Python Machine Learning Libraries


1) NumPy

NumPy is a well-known general-purpose array-processing package. An extensive collection of high-complexity mathematical functions makes NumPy powerful enough to process large multi-dimensional arrays and matrices. NumPy is very useful for handling linear algebra, Fourier transforms, and random numbers. Other libraries like TensorFlow use NumPy at the backend for manipulating tensors.

With NumPy, you can define arbitrary data types and easily integrate with most databases. NumPy can also serve as an efficient multi-dimensional container for generic data of any datatype. The key features of NumPy include a powerful N-dimensional array object, broadcasting functions, and out-of-the-box tools to integrate C/C++ and Fortran code.

Its key features are as below:

• Supports n-dimensional arrays to enable vectorization, indexing, and broadcasting operations.
• Supports Fourier transforms, mathematical functions, linear algebra methods, and random number generators.
• Implementable on different computing platforms, including distributed and GPU computing.
• Easy-to-use high-level syntax with optimized Python code to provide high speed and flexibility.
• In addition, NumPy enables the numerical operations of plenty of libraries associated with data science, data visualization, image processing, quantum computing, signal processing, geographic processing, bioinformatics, etc. So, it is one of the most versatile machine learning libraries.

Advantages:

• It can easily deal with multidimensional data.
• It helps in the matrix manipulation of data and operations such as transpose, reshape, and much more.
• It enables enhanced performance and management of garbage collection by providing a dynamic data structure.
• It allows improving the performance of Machine Learning models.

Disadvantages:

• It is highly dependent on non-Pythonic entities. It uses the functionalities of Cython and other libraries that use C or C++.
• Its high performance comes at a price: its data types are hardware-native and not Python-native, so it costs heavily when NumPy entities have to be translated back to Python-equivalent entities and vice versa.

Code:
import numpy as nup

# Then, create two arrays of rank 2

K = nup.array([[2, 4], [6, 8]])

R = nup.array([[1, 3], [5, 7]])

# Then, create two arrays of rank 1

P = nup.array([10, 12])

S = nup.array([9, 11])

# Then, we will print the Inner product of vectors

print ("Inner product of vectors: ", nup.dot(P, S), "\n")

# Then, we will print the Matrix and Vector product

print ("Matrix and Vector product: ", nup.dot(K, P), "\n")

# Now, we will print the Matrix and matrix product

print ("Matrix and matrix product: ", nup.dot(K, R))

Output:
Inner product of vectors: 222

Matrix and Vector product: [ 68 156]

Matrix and matrix product: [[22 34]
 [46 74]]
2) SciPy

SciPy is a popular library among Machine Learning developers as it contains numerous modules for performing optimization, linear algebra, integration, and statistics. The SciPy library is different from the SciPy stack, as the SciPy library is one of the core packages that make up the SciPy stack. The SciPy library is also used for image manipulation tasks.

Advantages:

• It is perfect for image manipulation.
• It offers basic processing features for mathematical operations.
• It provides effective integration for numerics and their optimizations.
• It also facilitates the processing of signals.

Disadvantages:

• There is no major disadvantage of using SciPy. However, there can be confusion between the SciPy stack and the SciPy library, as the SciPy library is included in the stack.

Example 1:

from scipy import signal as sg
import numpy as nup

K = nup.arange(45).reshape(9, 5)
domain_1 = nup.identity(3)
print(K, end='KK')
print(sg.order_filter(K, domain_1, 1))

Output:
[[ 0 1 2 3 4]
[ 5 6 7 8 9]
[10 11 12 13 14]
[15 16 17 18 19]
[20 21 22 23 24]
[25 26 27 28 29]
[30 31 32 33 34]
[35 36 37 38 39]
[40 41 42 43 44]] KK [[ 0. 1. 2. 3. 0.]
[ 5. 6. 7. 8. 3.]
[10. 11. 12. 13. 8.]
[15. 16. 17. 18. 13.]
[20. 21. 22. 23. 18.]
[25. 26. 27. 28. 23.]
[30. 31. 32. 33. 28.]
[35. 36. 37. 38. 33.]
[ 0. 35. 36. 37. 38.]]
Example 2:

from scipy.signal import chirp as cp
from scipy.signal import spectrogram as sp
import matplotlib.pyplot as plot
import numpy as nup

t_T = nup.linspace(3, 10, 300)
w_W = cp(t_T, f0=4, f1=2, t1=5, method='linear')
plot.plot(t_T, w_W)
plot.title("Linear Chirp")
plot.xlabel('Time in Seconds')
plot.show()

Output: A plot of the linear chirp signal is displayed.

3) Scikit-learn

Scikit-learn is a Python library which is used for classical machine learning algorithms. It is built on top of two basic Python libraries, NumPy and SciPy. Scikit-learn is popular among Machine Learning developers as it supports both supervised and unsupervised learning algorithms. This library can also be used for data analysis and data mining.

The following features of scikit-learn make it one of the best machine learning libraries in
Python:

• Easy to use for precise predictive data analysis
• Simplifies solving complex ML problems like classification, preprocessing, clustering, regression, model selection, and dimensionality reduction
• Plenty of inbuilt machine learning algorithms
• Helps build fundamental to advanced-level ML models
• Developed on top of prevalent libraries like SciPy, NumPy, and Matplotlib

Example:

from sklearn import datasets as ds
from sklearn import metrics as mt
from sklearn.tree import DecisionTreeClassifier as dtc

# load the iris dataset
dataset_1 = ds.load_iris()

# fit a CART model to the data
model_1 = dtc()
model_1.fit(dataset_1.data, dataset_1.target)
print(model_1)

# make predictions
expected_1 = dataset_1.target
predicted_1 = model_1.predict(dataset_1.data)

# summarize the fit of the model
print(mt.classification_report(expected_1, predicted_1))
print(mt.confusion_matrix(expected_1, predicted_1))

Output:

DecisionTreeClassifier()
              precision    recall  f1-score   support

           0       1.00      1.00      1.00        50
           1       1.00      1.00      1.00        50
           2       1.00      1.00      1.00        50

    accuracy                           1.00       150
   macro avg       1.00      1.00      1.00       150
weighted avg       1.00      1.00      1.00       150

[[50  0  0]
 [ 0 50  0]
 [ 0  0 50]]

4) Pandas

Pandas is a Python library that is mainly used for data analysis. Users have to prepare the dataset before using it for training a machine learning model. Pandas makes this easy for developers, as it was developed specifically for data extraction and preparation. It has a wide variety of tools for analysing data in detail, providing high-level data structures.
Advantages:

• It has descriptive, quick, and compliant data structures.
• It supports operations such as grouping, integrating, iterating, re-indexing, and representing data.
• It is very flexible for usage in association with other libraries.
• It contains inherent data manipulation functionalities that can be implemented using minimal commands.
• It can be implemented in a large variety of areas, especially related to business and education, due to its optimized performance.

Disadvantages:

• Its plotting is based on Matplotlib, which means that an inexperienced programmer needs to be acquainted with both libraries to understand which one will be better to solve a specific business problem.
• It is much less suitable for quantitative modeling and n-dimensional arrays. In such scenarios, where we need to work on quantitative or statistical modeling, we can use NumPy or SciPy.

The two main types of data structures used by pandas are:

• Series (1-dimensional)
• DataFrame (2-dimensional)

These two put together can handle the vast majority of data requirements and use cases from most sectors like science, statistics, social science, finance, and of course, analytics and other areas of engineering; a minimal illustration of both follows.
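A minimal sketch of the two structures (the values are arbitrary):

import pandas as pd

# Series: a labeled 1-dimensional array
s = pd.Series([10, 20, 30], index=['a', 'b', 'c'])

# DataFrame: a labeled 2-dimensional table
df = pd.DataFrame({'x': [1, 2, 3], 'y': [4.0, 5.0, 6.0]})

print(s)
print(df)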

Pandas supports and performs well with different kinds of data, including the below:

• Tabular data with columns of heterogeneous data, for instance, data coming from a SQL table or an Excel spreadsheet.
• Ordered and unordered time series data. The frequency of the time series need not be fixed, unlike in other libraries and tools; Pandas is exceptionally robust in handling uneven time-series data.
• Arbitrary matrix data with homogeneous or heterogeneous types of data in the rows and columns.
• Any other form of statistical or observational data sets. The data need not be labeled at all; the Pandas data structures can process it even without labeling.

It was launched as an open-source Python library in 2009. Currently, it has become one of the favourite Python libraries for machine learning among many ML enthusiasts, because it offers robust techniques for data analysis and data manipulation. This library is extensively used in academia. Moreover, it supports different commercial domains like business and web analytics, economics, statistics, neuroscience, finance, advertising, etc. It also works as a foundational library for many advanced Python libraries.
Here are some of its key features:

• Handles missing data
• Handles time series data
• Supports indexing, slicing, reshaping, subsetting, joining, and merging of large datasets
• Offers optimized code for Python using C and Cython
• Powerful DataFrame object for broad data manipulation support

Example:

import pandas as pad

data_1 = {"Countries": ["Bhutan", "Cape Verde", "Chad", "Estonia", "Guinea", "Kenya", "Libya", "Mexico"],
          "capital": ["Thimphu", "Praia", "N'Djamena", "Tallinn", "Conakry", "Nairobi", "Tripoli", "Mexico City"],
          "Currency": ["Ngultrum", "Cape Verdean escudo", "CFA Franc", "Estonia Kroon; Euro", "Guinean franc", "Kenya shilling", "Libyan dinar", "Mexican peso"],
          "population": [20.4, 143.5, 12.52, 135.7, 52.98, 76.21, 34.28, 54.32]}

data_1_table = pad.DataFrame(data_1)
print(data_1_table)

Output:

    Countries      capital             Currency  population
0      Bhutan      Thimphu             Ngultrum       20.40
1  Cape Verde        Praia  Cape Verdean escudo      143.50
2        Chad    N'Djamena            CFA Franc       12.52
3     Estonia      Tallinn  Estonia Kroon; Euro      135.70
4      Guinea      Conakry        Guinean franc       52.98
5       Kenya      Nairobi       Kenya shilling       76.21
6       Libya      Tripoli         Libyan dinar       34.28
7      Mexico  Mexico City         Mexican peso       54.32

5) Matplotlib
Matplotlib is a Python library used for data visualization. Developers use it when they want to visualize data and its patterns. It is a 2-D plotting library used to create 2-D graphs and plots.

It has a module, pyplot, which simplifies plotting graphs and provides features for controlling line styles, font properties, axis formatting, and more. Matplotlib provides different types of graphs and plots such as histograms, error charts, and bar charts.

Example 1:

import matplotlib.pyplot as plot
import numpy as nup

# Prepare the data
K = nup.linspace(2, 4, 8)
R = nup.linspace(5, 7, 9)
Q = nup.linspace(0, 1, 3)

# Plot the data
plot.plot(K, K, label='K')
plot.plot(R, R, label='R')
plot.plot(Q, Q, label='Q')

# Add a legend
plot.legend()

# Show the plot
plot.show()

Output:
Example 2:

import matplotlib.pyplot as plot

# Creating dataset 1
K_1 = [8, 4, 6, 3, 5, 10, 13, 16, 12, 21]
R_1 = [11, 6, 13, 15, 17, 5, 3, 2, 8, 19]

# Creating dataset 2
K_2 = [6, 9, 18, 14, 16, 15, 11, 16, 12, 20]
R_2 = [16, 4, 10, 13, 18, 20, 6, 2, 17, 15]

plot.scatter(K_1, R_1, c="Black", linewidths=2, marker="s", edgecolor="Brown", s=50)
plot.scatter(K_2, R_2, c="Purple", linewidths=2, marker="^", edgecolor="Grey", s=200)

plot.xlabel("X-axis")
plot.ylabel("Y-axis")
print("Scatter Plot")
plot.show()

Output:
Matplotlib is a data visualization library used for 2D plotting to produce publication-quality plots and figures in a variety of formats. The library helps generate histograms, plots, error charts, scatter plots, and bar charts with just a few lines of code.

It provides a MATLAB-like interface and is exceptionally user-friendly. It works by using standard GUI toolkits like GTK+, wxPython, Tkinter, or Qt to provide an object-oriented API that helps programmers embed graphs and plots into their applications.

It is the oldest Python machine learning library, yet it is still not obsolete. It is one of the most innovative data visualization libraries for Python, and the ML community admires it.

The following features of the Matplotlib library make it a famous Python machine learning library among the ML community:

• Its interactive charts and plots allow fascinating data storytelling
• Offers an extensive list of plots appropriate for particular use cases
• Charts and plots are customizable and exportable to various file formats
• Offers embeddable visualizations with different GUI applications
• Various Python frameworks and libraries extend Matplotlib

Below are some of the advantages and disadvantages of Matplotlib.

Advantages:

• It helps produce plots that are configurable, powerful, and accurate.
• It can be easily streamlined with the Jupyter Notebook.
• It supports GUI toolkits that include wxPython, Qt, and Tkinter.
• It is leveraged with a structure that can support Python as well as IPython shells.

Disadvantages:

• It has a strong dependency on NumPy and other such libraries of the SciPy stack.
• It has a high learning curve, as its use takes quite a lot of knowledge and application from the learner's end.
• It can be confusing for developers because it provides two distinct frameworks, object-oriented and MATLAB-style.
• It is primarily used for data visualization, not data analysis. To get both data visualization and data analysis, we have to integrate it with other libraries.
6) Seaborn

Seaborn is a library in Python that allows us to create analytical graphs. Seaborn is based on Matplotlib and integrates with the data structures of pandas.

Below are some advantages and disadvantages of Seaborn.

Advantages:

• It produces graphs that are more appealing than those created with Matplotlib.
• It has integrated packages that are unavailable in Matplotlib.
• It uses less code for visualizing graphs.
• It is integrated with pandas for visualizing and analyzing data.

Disadvantages:

• Prior knowledge of Matplotlib is required to work with Seaborn.
• Seaborn offers less fine-grained customization than Matplotlib.
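No code example was given for Seaborn above; a minimal sketch using one of its built-in sample datasets (the classic tips dataset, fetched by sns.load_dataset) could look like this:

import seaborn as sns
import matplotlib.pyplot as plt

# Load a small built-in example dataset and draw a scatter plot colored by day
tips = sns.load_dataset("tips")
sns.scatterplot(data=tips, x="total_bill", y="tip", hue="day")
plt.show()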

7) TensorFlow

TensorFlow was developed for Google's internal use by the Google Brain team. Its first release came in November 2015 under the Apache License 2.0. TensorFlow is a popular computational framework for creating machine learning models. TensorFlow supports a variety of different toolkits for constructing models at varying levels of abstraction.

TensorFlow exposes very stable Python and C++ APIs. It can also expose backward-compatible APIs for other languages, but they might be unstable. TensorFlow has a flexible architecture with which it can run on a variety of computational platforms: CPUs, GPUs, and TPUs. TPU stands for Tensor Processing Unit, a hardware chip built around TensorFlow for machine learning and artificial intelligence.
TensorFlow empowers some of the largest contemporary AI models globally. It is recognized as an end-to-end Deep Learning and Machine Learning library for solving practical challenges.

The following key features of TensorFlow make it one of the best Python machine learning libraries; a short usage sketch follows the list:

• Comprehensive control over developing a machine learning model and robust neural networks
• Deploy models on cloud, web, mobile, or edge devices through TFX, TensorFlow.js, and TensorFlow Lite
• Supports abundant extensions and libraries for solving complex problems
• Supports different tools for integration of Responsible AI and ML solutions
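A minimal, hedged sketch of basic tensor computation in TensorFlow (the values are arbitrary):

import tensorflow as tf

# Build two constant tensors and multiply them
a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.constant([[1.0], [2.0]])

product = tf.matmul(a, b)
print(product)   # a 2x1 tensor containing [[5.], [11.]]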

8) Keras

Keras had over 200,000 users as of November 2017. Keras is an open-source library used for neural networks and machine learning. Keras can run on top of TensorFlow, Theano, Microsoft Cognitive Toolkit, R, or PlaidML. Keras can also run efficiently on both CPU and GPU.

Keras works with neural-network building blocks like layers, objectives, activation functions, and optimizers. Keras also has a bunch of features for working on images and text that come in handy when writing deep neural network code.

Apart from standard neural networks, Keras supports convolutional and recurrent neural networks.

It was released in 2015, and by now it is a cutting-edge open-source Python deep learning framework and API. It is similar to TensorFlow in several aspects, but it is designed with a human-centred approach to make DL and ML accessible and easy for everybody.

You can conclude that Keras is one of the most versatile Python machine learning libraries because it includes:

• Everything that TensorFlow provides, but presented in an easy-to-understand format.
• The ability to quickly run various DL iterations with full deployment proficiencies.
• Support for large TPU and GPU clusters, which facilitates commercial Python machine learning.
• Use in various applications, including natural language processing, computer vision, reinforcement learning, and generative deep learning. So, it is useful for graph, structured, audio, and time series data.

A minimal model definition is sketched below.
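As a hedged illustration of the Keras Sequential API; the layer sizes and the 4-feature, 3-class shapes are arbitrary (loosely matching the iris data used earlier):

from tensorflow import keras

# A small fully connected network: 4 inputs -> 16 hidden units -> 3 output classes
model = keras.Sequential([
    keras.layers.Dense(16, activation='relu', input_shape=(4,)),
    keras.layers.Dense(3, activation='softmax'),
])

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.summary()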

9) PyTorch
PyTorch has a range of tools and libraries that support computer vision, machine learning, and natural language processing. The PyTorch library is open-source and is based on the Torch library. The most significant advantage of the PyTorch library is its ease of learning and use. PyTorch integrates smoothly with the Python data science stack, including NumPy; you will hardly notice a difference between NumPy and PyTorch. PyTorch also allows developers to perform computations on tensors. PyTorch has a robust framework for building computational graphs on the go and even changing them at runtime. Other advantages of PyTorch include multi-GPU support, simplified preprocessors, and custom data loaders.

Facebook released PyTorch as a powerful competitor to TensorFlow in 2016. It has since attained huge popularity among deep learning and machine learning researchers. Various aspects of PyTorch suggest that it is one of the outstanding Python libraries for machine learning. Here are some of its key capabilities; a short autograd sketch follows the list.

• Fully supports the development of customized deep neural networks
• Production-ready with TorchServe
• Supports distributed computing through the torch.distributed backend
• Supports various extensions and tools to solve complex problems
• Compatible with all leading cloud platforms for extensible deployment
• Also maintained on GitHub as an open-source Python framework
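A minimal sketch of the dynamic computational graph mentioned above, using PyTorch's autograd (the values are arbitrary):

import torch

# requires_grad=True records operations on x in a computational graph
x = torch.tensor([[1.0, 2.0], [3.0, 4.0]], requires_grad=True)
y = (x ** 2).sum()

# Backpropagate through the graph; the gradient dy/dx equals 2x
y.backward()
print(x.grad)   # tensor([[2., 4.], [6., 8.]])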

Conclusion:

In this experiment, we have discussed the different Python libraries that are used for performing machine learning tasks, and we have shown examples of each library.
Code:
def hebbian_learning(samples):
    print(f'{"INPUT":^8} {"TARGET":^16}{"WEIGHT CHANGES":^15}{"WEIGHTS":^25}')
    w1, w2, b = 0, 0, 0
    print(' ' * 45, f'({w1:2}, {w2:2}, {b:2})')
    for x1, x2, y in samples:
        w1 = w1 + x1 * y
        w2 = w2 + x2 * y
        b = b + y
        print(f'({x1:2}, {x2:2}) {y:2} ({x1*y:2}, {x2*y:2}, {y:2}) ({w1:2}, {w2:2}, {b:2})')

AND_samples = {
    'binary_input_binary_output': [
        [1, 1, 1],
        [1, 0, 0],
        [0, 1, 0],
        [0, 0, 0]
    ],
    'binary_input_bipolar_output': [
        [1, 1, 1],
        [1, 0, -1],
        [0, 1, -1],
        [0, 0, -1]
    ],
    'bipolar_input_bipolar_output': [
        [ 1,  1,  1],
        [ 1, -1, -1],
        [-1,  1, -1],
        [-1, -1, -1]
    ]
}
OR_samples = {
    'binary_input_binary_output': [
        [1, 1, 1],
        [1, 0, 1],
        [0, 1, 1],
        [0, 0, 0]
    ],
    'binary_input_bipolar_output': [
        [1, 1, 1],
        [1, 0, 1],
        [0, 1, 1],
        [0, 0, -1]
    ],
    'bipolar_input_bipolar_output': [
        [ 1,  1,  1],
        [ 1, -1,  1],
        [-1,  1,  1],
        [-1, -1, -1]
    ]
}
XOR_samples = {
    'binary_input_binary_output': [
        [1, 1, 0],
        [1, 0, 1],
        [0, 1, 1],
        [0, 0, 0]
    ],
    'binary_input_bipolar_output': [
        [1, 1, -1],
        [1, 0, 1],
        [0, 1, 1],
        [0, 0, -1]
    ],
    'bipolar_input_bipolar_output': [
        [ 1,  1, -1],
        [ 1, -1,  1],
        [-1,  1,  1],
        [-1, -1, -1]
    ]
}
# AND gate
print('-' * 20, 'HEBBIAN LEARNING', '-' * 20)
print('AND with Binary Input and Binary Output')
hebbian_learning(AND_samples['binary_input_binary_output'])
print('AND with Binary Input and Bipolar Output')
hebbian_learning(AND_samples['binary_input_bipolar_output'])
print('AND with Bipolar Input and Bipolar Output')
hebbian_learning(AND_samples['bipolar_input_bipolar_output'])

# OR gate
print('-' * 20, 'HEBBIAN LEARNING', '-' * 20)
print('OR with binary input and binary output')
hebbian_learning(OR_samples['binary_input_binary_output'])
print('OR with binary input and bipolar output')
hebbian_learning(OR_samples['binary_input_bipolar_output'])
print('OR with bipolar input and bipolar output')
hebbian_learning(OR_samples['bipolar_input_bipolar_output'])

# XOR gate
print('-' * 20, 'HEBBIAN LEARNING', '-' * 20)
print('XOR with binary input and binary output')
hebbian_learning(XOR_samples['binary_input_binary_output'])
print('XOR with binary input and bipolar output')
hebbian_learning(XOR_samples['binary_input_bipolar_output'])
print('XOR with bipolar input and bipolar output')
hebbian_learning(XOR_samples['bipolar_input_bipolar_output'])

Output:
-------------------- HEBBIAN LEARNING --------------------
AND with Binary Input and Binary Output
INPUT TARGET WEIGHT CHANGES WEIGHTS
( 0, 0, 0)
( 1, 1) 1 ( 1, 1, 1) ( 1, 1, 1)
( 1, 0) 0 ( 0, 0, 0) ( 1, 1, 1)
( 0, 1) 0 ( 0, 0, 0) ( 1, 1, 1)
( 0, 0) 0 ( 0, 0, 0) ( 1, 1, 1)
AND with Binary Input and Bipolar Output
INPUT TARGET WEIGHT CHANGES WEIGHTS
( 0, 0, 0)
( 1, 1) 1 ( 1, 1, 1) ( 1, 1, 1)
( 1, 0) -1 (-1, 0, -1) ( 0, 1, 0)
( 0, 1) -1 ( 0, -1, -1) ( 0, 0, -1)
( 0, 0) -1 ( 0, 0, -1) ( 0, 0, -2)
AND with Bipolar Input and Bipolar Output
INPUT TARGET WEIGHT CHANGES WEIGHTS
( 0, 0, 0)
( 1, 1) 1 ( 1, 1, 1) ( 1, 1, 1)
( 1, -1) -1 (-1, 1, -1) ( 0, 2, 0)
(-1, 1) -1 ( 1, -1, -1) ( 1, 1, -1)
(-1, -1) -1 ( 1, 1, -1) ( 2, 2, -2)
-------------------- HEBBIAN LEARNING --------------------
OR with binary input and binary output
INPUT TARGET WEIGHT CHANGES WEIGHTS
( 0, 0, 0)
( 1, 1) 1 ( 1, 1, 1) ( 1, 1, 1)
( 1, 0) 1 ( 1, 0, 1) ( 2, 1, 2)
( 0, 1) 1 ( 0, 1, 1) ( 2, 2, 3)
( 0, 0) 0 ( 0, 0, 0) ( 2, 2, 3)
OR with binary input and bipolar output
INPUT TARGET WEIGHT CHANGES WEIGHTS
( 0, 0, 0)
( 1, 1) 1 ( 1, 1, 1) ( 1, 1, 1)
( 1, 0) 1 ( 1, 0, 1) ( 2, 1, 2)
( 0, 1) 1 ( 0, 1, 1) ( 2, 2, 3)
( 0, 0) -1 ( 0, 0, -1) ( 2, 2, 2)

OR with bipolar input and bipolar output
INPUT TARGET WEIGHT CHANGES WEIGHTS
( 0, 0, 0)

( 1, 1) 1 ( 1, 1, 1) ( 1, 1, 1)
( 1, -1) 1 ( 1, -1, 1) ( 2, 0, 2)
(-1, 1) 1 (-1, 1, 1) ( 1, 1, 3)
(-1, -1) -1 ( 1, 1, -1) ( 2, 2, 2)

-------------------- HEBBIAN LEARNING --------------------
XOR with binary input and binary output
INPUT TARGET WEIGHT CHANGES WEIGHTS
( 0, 0, 0)
( 1, 1) 0 ( 0, 0, 0) ( 0, 0, 0)
( 1, 0) 1 ( 1, 0, 1) ( 1, 0, 1)
( 0, 1) 1 ( 0, 1, 1) ( 1, 1, 2)
( 0, 0) 0 ( 0, 0, 0) ( 1, 1, 2)
XOR with binary input and bipolar output
INPUT TARGET WEIGHT CHANGES WEIGHTS
( 0, 0, 0)
( 1, 1) -1 (-1, -1, -1) (-1, -1, -1)
( 1, 0) 1 ( 1, 0, 1) ( 0, -1, 0)
( 0, 1) 1 ( 0, 1, 1) ( 0, 0, 1)
( 0, 0) -1 ( 0, 0, -1) ( 0, 0, 0)

XOR with bipolar input and bipolar output
INPUT TARGET WEIGHT CHANGES WEIGHTS
( 0, 0, 0)
( 1, 1) -1 (-1, -1, -1) (-1, -1, -1)
( 1, -1) 1 ( 1, -1, 1) ( 0, -2, 0)
(-1, 1) 1 (-1, 1, 1) (-1, -1, 1)
(-1, -1) -1 ( 1, 1, -1) ( 0, 0, 0)
Code:

from numpy import hstack
from numpy.random import normal
import matplotlib.pyplot as plt

# Two Gaussian processes centred at 20 and 40
sample1 = normal(loc=20, scale=5, size=4000)
sample2 = normal(loc=40, scale=5, size=8000)
sample = hstack((sample1, sample2))

plt.hist(sample, bins=50, density=True)
plt.show()

Output:

The plot clearly shows the expected distribution, with the peak for the first process at 20 and the peak for the second process at 40. For many points in the middle of the distribution, it is unclear which distribution they were drawn from. We can model the problem of estimating the density of this data set using a Gaussian Mixture Model.
# Example of fitting a Gaussian mixture model with expectation-maximization
import numpy as np
from numpy.random import normal
from sklearn.mixture import GaussianMixture

# Generate a sample dataset
sample1 = normal(loc=20, scale=5, size=4000)
sample2 = normal(loc=40, scale=5, size=8000)
sample = np.hstack((sample1, sample2)).reshape(-1, 1)

# Fit a Gaussian mixture model with 2 components to the sample data
model = GaussianMixture(n_components=2, init_params='random')
model.fit(sample)

# Predict latent values for each sample point
yhat = model.predict(sample)

# Check latent value for first few points
print("Latent value for first 80 points:")
print(yhat[:80])

# Check latent value for last few points
print("Latent value for last 80 points:")
print(yhat[-80:])
Output:
Code:

import numpy as np
import matplotlib.pyplot as plt

class McCullochPittsModel:
    def __init__(self, weights, threshold):
        self.weights = weights
        self.threshold = threshold

    def predict(self, inputs):
        weighted_sum = np.dot(inputs, self.weights)
        if weighted_sum >= self.threshold:
            return 1
        else:
            return 0

# Example usage
weights = np.array([1, -1])
threshold = 0

mpm = McCullochPittsModel(weights, threshold)

# Test inputs
inputs = np.array([
    [0, 1],  # weighted sum -1 < 0, outputs 0
    [1, 1],  # weighted sum  0 >= 0, outputs 1
    [1, 0]   # weighted sum  1 >= 0, outputs 1
])

# Make predictions
predictions = np.array([mpm.predict(x) for x in inputs])
print(predictions)

# Example graph
x = np.linspace(-0.5, 1.5, 100)
y = -x + 1
plt.plot(x, y, '-r', label='y = -x + 1')
plt.scatter(inputs[:, 0][predictions == 1], inputs[:, 1][predictions == 1], c='b')
plt.xlabel('Wind')
plt.ylabel('Temp')
plt.title('McCulloch Pitts Model Example')
plt.legend()
plt.show()

Output: [0 1 1]
Code:

import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from matplotlib.colors import ListedColormap

# load the iris dataset
iris = load_iris()

# create PCA instance and fit to the data
pca = PCA(n_components=2)
X_transformed = pca.fit_transform(iris.data)

# plot the transformed data
colors = ['blue', 'red', 'green']
cmap = ListedColormap(colors[:len(np.unique(iris.target))])
plt.figure()
for i, target_name in enumerate(iris.target_names):
    plt.scatter(X_transformed[iris.target == i, 0], X_transformed[iris.target == i, 1],
                c=cmap(i), label=target_name)
plt.xlabel('PC1')
plt.ylabel('PC2')
plt.title('PCA of IRIS dataset')
plt.legend()
plt.show()
Output:
Code:

# Iris Classification
# Import Packages
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd

columns = ['Sepal length', 'Sepal width', 'Petal length', 'Petal width', 'Class_labels']  # As per the iris dataset information

# Load the data
df = pd.read_csv('iris.data', names=columns)
df.head()

# Some basic statistical analysis of the data
df.describe()

# Visualize the whole dataset
sns.pairplot(df, hue='Class_labels')

# Separate features and target
data = df.values
X = data[:, 0:4]
Y = data[:, 4]

# Calculate the average of each feature for all classes
Y_Data = np.array([np.average(X[:, i][Y == j].astype('float32')) for i in range(X.shape[1]) for j in np.unique(Y)])
Y_Data_reshaped = Y_Data.reshape(4, 3)
Y_Data_reshaped = np.swapaxes(Y_Data_reshaped, 0, 1)
X_axis = np.arange(len(columns) - 1)
width = 0.25

# Plot the averages
plt.bar(X_axis, Y_Data_reshaped[0], width, label='Setosa')
plt.bar(X_axis + width, Y_Data_reshaped[1], width, label='Versicolour')
plt.bar(X_axis + width * 2, Y_Data_reshaped[2], width, label='Virginica')
plt.xticks(X_axis, columns[:4])
plt.xlabel("Features")
plt.ylabel("Value in cm.")
plt.legend(bbox_to_anchor=(1.3, 1))
plt.show()

# Split the data into train and test datasets
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.2)

# Support vector machine algorithm
from sklearn.svm import SVC
svn = SVC()
svn.fit(X_train, y_train)

# Predict from the test dataset
predictions = svn.predict(X_test)

# Calculate the accuracy
from sklearn.metrics import accuracy_score
accuracy_score(y_test, predictions)

# A detailed classification report
from sklearn.metrics import classification_report
print(classification_report(y_test, predictions))

X_new = np.array([[3, 2, 1, 0.2], [4.9, 2.2, 3.8, 1.1], [5.3, 2.5, 4.6, 1.9]])

# Prediction of the species from the input vector
prediction = svn.predict(X_new)
print("Prediction of Species: {}".format(prediction))

# Save the model
import pickle
with open('SVM.pickle', 'wb') as f:
    pickle.dump(svn, f)

# Load the model
with open('SVM.pickle', 'rb') as f:
    model = pickle.load(f)
model.predict(X_new)

Output:
Code:

from tkinter import *


import numpy as np
from PIL import ImageGrab
from Prediction import predict

window = Tk()
window.title("Handwritten digit recognition")
l1 = Label()

def MyProject():
    global l1

    widget = cv
    # Setting co-ordinates of canvas
    x = window.winfo_rootx() + widget.winfo_x()
    y = window.winfo_rooty() + widget.winfo_y()
    x1 = x + widget.winfo_width()
    y1 = y + widget.winfo_height()

    # Image is captured from canvas and is resized to (28 x 28) px
    img = ImageGrab.grab().crop((x, y, x1, y1)).resize((28, 28))

    # Converting RGB to grayscale image
    img = img.convert('L')

    # Extracting pixel matrix of image and converting it to a vector of (1, 784)
    x = np.asarray(img)
    vec = np.zeros((1, 784))
    k = 0
    for i in range(28):
        for j in range(28):
            vec[0][k] = x[i][j]
            k += 1

    # Loading thetas (pre-trained network weights)
    Theta1 = np.loadtxt('Theta1.txt')
    Theta2 = np.loadtxt('Theta2.txt')

    # Calling function for prediction
    pred = predict(Theta1, Theta2, vec / 255)

    # Displaying the result
    l1 = Label(window, text="Digit = " + str(pred[0]), font=('Algerian', 20))
    l1.place(x=230, y=420)

lastx, lasty = None, None


# Clears the canvas
def clear_widget():
    global cv, l1
    cv.delete("all")
    l1.destroy()

# Activate canvas
def event_activation(event):
    global lastx, lasty
    cv.bind('<B1-Motion>', draw_lines)
    lastx, lasty = event.x, event.y

# To draw on canvas
def draw_lines(event):
    global lastx, lasty
    x, y = event.x, event.y
    cv.create_line((lastx, lasty, x, y), width=30, fill='white',
                   capstyle=ROUND, smooth=TRUE, splinesteps=12)
    lastx, lasty = x, y

# Label
L1 = Label(window, text="Handwritten Digit Recognition", font=('Algerian', 25), fg="blue")
L1.place(x=35, y=10)

# Button to clear canvas
b1 = Button(window, text="1. Clear Canvas", font=('Algerian', 15), bg="orange", fg="black",
            command=clear_widget)
b1.place(x=120, y=370)

# Button to predict digit drawn on canvas
b2 = Button(window, text="2. Prediction", font=('Algerian', 15), bg="white", fg="red",
            command=MyProject)
b2.place(x=320, y=370)

# Setting properties of canvas
cv = Canvas(window, width=350, height=290, bg='black')
cv.place(x=120, y=70)

cv.bind('<Button-1>', event_activation)
window.geometry("600x500")
window.mainloop()


Output:
