The Hello World of Deep Learning with Neural Networks

Like every first app, you should start with something super simple that shows the overall
scaffolding for how your code works. In the case of creating neural networks, one simple case is
learning the relationship between two numbers. So, for example, if you were writing code for a
function like this, you already know the 'rules':

def hw_function(x):
    y = (2 * x) - 1
    return y
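
With the rule written down explicitly, you can compute the answer for any input directly; for
example, an input of 10 gives 19 (worth remembering for later):

print(hw_function(10))  # (2 * 10) - 1 = 19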

So how would you train a neural network to do the equivalent task? By using data! By feeding it
a set of x's and y's, it should be able to figure out the relationship between them.

This is obviously a very different paradigm from what you might be used to. So let's step through
it piece by piece.

Imports
Let's start with the imports. Here, you are importing TensorFlow and aliasing it as tf by
convention and for ease of use.

You then import a library called numpy, which helps to represent data as arrays and to
optimize numerical operations.

The framework you will use to build a neural network as a sequence of layers is called keras, so
you will import that too.

import tensorflow as tf
import numpy as np
from tensorflow import keras

print(tf.__version__)

2.8.2

Define and Compile the Neural Network


Next, you will create the simplest possible neural network. It has 1 layer with 1 neuron, and the
input shape to it is just 1 value. You will build this model using Keras' Sequential class, which
allows you to define the network as a sequence of layers. You can use a single Dense layer to
build this simple network as shown below.

# Build a simple Sequential model
model = tf.keras.Sequential([
    keras.layers.Dense(units=1, input_shape=[1])
])

Now, you will compile the neural network. When you do so, you have to specify 2 functions: a
loss and an optimizer.

If you've seen lots of math for machine learning, here's where it's usually used. In this case,
though, it's nicely encapsulated in functions and classes for you. So what happens here? Let's explain...

You know that in the function declared at the start of this notebook, the relationship between the
numbers is y=2x-1 . When the computer is trying to 'learn' that, it makes a guess... maybe
y=10x+10 . The loss function compares the guessed answers against the known correct
answers and measures how well or how badly it did.

It then uses the optimizer to make another guess. Based on the result of the loss function,
it will try to minimize the loss. At that point maybe it will come up with something like
y=5x+5 , which, while still pretty bad, is closer to the correct result (i.e. the loss is lower).
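
To make those numbers concrete, here is a minimal standalone sketch (plain numpy, separate
from the model) that scores both of those hypothetical guesses against y=2x-1 on a handful of
sample points; the second guess produces a visibly lower mean squared error:

import numpy as np

sample_xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0])
true_ys = 2 * sample_xs - 1  # the relationship the network should learn

for w, b in [(10.0, 10.0), (5.0, 5.0)]:  # the two hypothetical guesses
    guessed_ys = w * sample_xs + b
    mse = np.mean(np.square(true_ys - guessed_ys))  # mean squared error
    print(f"y={w:g}x+{b:g} -> loss {mse:.1f}")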

It will repeat this for the number of epochs, which you will see shortly. But first, here's how you
tell it to use mean squared error for the loss and stochastic gradient descent for the
optimizer. You don't need to understand the math for these yet, but you can see that they work!

Over time, you will learn the different and appropriate loss and optimizer functions for different
scenarios.

# Compile the model
model.compile(optimizer='sgd', loss='mean_squared_error')

Providing the Data


Next up, you will feed in some data. In this case, you are taking 6 x's and 6 y's. You can see that
the relationship between these is y=2x-1 , so when x = -1 , y = -3 , and so on.

The de facto standard way of declaring model inputs and outputs is to use numpy , a Python
library that provides lots of array-type data structures. You can specify these values by building
numpy arrays with np.array() .


# Declare model inputs and outputs for training y = 2x - 1
xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0], dtype=float)
ys = np.array([-3.0, -1.0, 1.0, 3.0, 5.0, 7.0], dtype=float)

Training the Neural Network


The process of training the neural network, where it 'learns' the relationship between the x's and
y's, happens in the model.fit() call. This is where it goes through the loop we spoke about above:
making a guess, measuring how good or bad it is (i.e. the loss), using the optimizer to make
another guess, and so on. It will do this for the number of epochs you specify. When you run this
code, you'll see the loss printed for each epoch.

# Train the model
model.fit(xs, ys, epochs=500)

Epoch 1/500
1/1 [==============================] - 0s 386ms/step - loss: 0.6963
Epoch 2/500
1/1 [==============================] - 0s 7ms/step - loss: 0.6660
Epoch 3/500
1/1 [==============================] - 0s 5ms/step - loss: 0.6397
Epoch 4/500
1/1 [==============================] - 0s 5ms/step - loss: 0.6166
Epoch 5/500
1/1 [==============================] - 0s 5ms/step - loss: 0.5962
Epoch 6/500
1/1 [==============================] - 0s 5ms/step - loss: 0.5778
Epoch 7/500
1/1 [==============================] - 0s 6ms/step - loss: 0.5611
Epoch 8/500
1/1 [==============================] - 0s 5ms/step - loss: 0.5457
Epoch 9/500
1/1 [==============================] - 0s 7ms/step - loss: 0.5315
Epoch 10/500
1/1 [==============================] - 0s 8ms/step - loss: 0.5183
Epoch 11/500
1/1 [==============================] - 0s 9ms/step - loss: 0.5058
Epoch 12/500
1/1 [==============================] - 0s 5ms/step - loss: 0.4939
Epoch 13/500
1/1 [==============================] - 0s 5ms/step - loss: 0.4826
Epoch 14/500
1/1 [==============================] - 0s 5ms/step - loss: 0.4718
Epoch 15/500
1/1 [==============================] - 0s 5ms/step - loss: 0.4614
Epoch 16/500
1/1 [==============================] - 0s 5ms/step - loss: 0.4514
Epoch 17/500
1/1 [==============================] - 0s 5ms/step - loss: 0.4417
Epoch 18/500
1/1 [==============================] - 0s 5ms/step - loss: 0.4323
Epoch 19/500
1/1 [==============================] - 0s 6ms/step - loss: 0.4231
Epoch 20/500
1/1 [==============================] - 0s 5ms/step - loss: 0.4142
Epoch 21/500
1/1 [==============================] - 0s 6ms/step - loss: 0.4055
Epoch 22/500
1/1 [==============================] - 0s 6ms/step - loss: 0.3971
Epoch 23/500
1/1 [==============================] - 0s 15ms/step - loss: 0.3888
Epoch 24/500
1/1 [==============================] - 0s 13ms/step - loss: 0.3807
Epoch 25/500
1/1 [==============================] - 0s 7ms/step - loss: 0.3728
Epoch 26/500
1/1 [==============================] - 0s 5ms/step - loss: 0.3651
Epoch 27/500
1/1 [==============================] - 0s 5ms/step - loss: 0.3576
Epoch 28/500
1/1 [==============================] - 0s 5ms/step - loss: 0.3502
Epoch 29/500
1/1 [==============================] - 0s 5ms/step - loss: 0.3430
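
By the way, if you'd rather read the loss programmatically than scan the log above, note that
model.fit() returns a History object. Here's a small sketch (an aside, not part of the original
flow; calling fit() again continues training the existing model):

# fit() returns a History object; its .history dict records the loss per epoch.
# Note: calling fit() again trains the already-fitted model for 500 more epochs.
history = model.fit(xs, ys, epochs=500, verbose=0)
print(history.history['loss'][-1])  # final training loss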

Ok, now you have a model that has been trained to learn the relationship between x and y . You
can use the model.predict() method to have it figure out the y for a previously unknown x . So,
for example, if x=10 , what do you think y will be? Take a guess before you run this code:

# Make a prediction
print(model.predict([10.0]))

[[18.987118]]

You might have thought 19 , right? But it ended up being a little under. Why do you think that is?

Remember that neural networks deal with probabilities. So, given the data that we fed the model,
it calculated that there is a very high probability that the relationship between x and y is
y=2x-1 , but with only 6 data points it can't know for sure. As a result, the prediction for 10 is very
close to 19, but not necessarily exactly 19.
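
You can check this for yourself by inspecting what the single Dense layer actually learned. A
quick sketch (assuming the trained model above is still in scope) prints its weight and bias,
which typically land close to 2 and -1 without hitting them exactly:

# The Dense layer's learned parameters: a 1x1 kernel (the weight) and a bias
weight, bias = model.layers[0].get_weights()
print("weight:", weight)  # typically close to [[2.0]]
print("bias:", bias)      # typically close to [-1.0]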

As you work with neural networks, you'll see this pattern recurring. You will almost always deal
with probabilities, not certainties, and will do a little bit of coding to figure out what the result is
based on the probabilities, particularly when it comes to classification.

Activity

For the following data, build a neural network with two hidden layers of N units each and a
sigmoid activation function (i.e., activation='sigmoid' ). Choose N so that the prediction for an
input of 2.5 is as close as possible to the expected value (note the quadratic relationship
between the input and the output). The output layer must have a single unit with a linear
activation function (i.e., activation='linear' ). Compile your model with the same optimizer and
loss function as in the previous example. Train your model for 2000 epochs. Finally, compute the
mean squared error (MSE) on the training set.

# Declare model inputs and outputs for training y = x^2
xs = np.array([-4.0, -3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0, 4.0], dtype=float)
ys = np.array([16, 9, 4, 1, 0, 1, 4, 9, 16], dtype=float)

# Build a Sequential model with two sigmoid hidden layers (N = 20)
model = tf.keras.Sequential([
    keras.layers.Dense(units=20, activation='sigmoid', input_shape=[1]),
    keras.layers.Dense(units=20, activation='sigmoid'),
    keras.layers.Dense(units=1, activation='linear')
])

# Compile the model with the same optimizer and loss as before
model.compile(optimizer='sgd', loss='mean_squared_error')

# Train the model for 2000 epochs
model.fit(xs, ys, epochs=2000)

# Predict at x = 2.5 (the expected value is 2.5^2 = 6.25)
print(model.predict([2.5]))

# Compute the mean squared error on the training set
m_p = model.predict(xs).flatten()
ecm = np.mean(np.square(ys - m_p))  # ecm: error cuadrático medio (MSE)
ecm
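
Since the model was compiled with mean_squared_error as its loss, an equivalent check (a short
sketch using the trained model above) is to let Keras compute the same metric directly:

# evaluate() returns the compiled loss, which here is the training-set MSE
print(model.evaluate(xs, ys, verbose=0))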

Epoch 1/2000
1/1 [==============================] - 0s 238ms/step - loss: 83.5754
Epoch 2/2000
1/1 [==============================] - 0s 7ms/step - loss: 71.1072
Epoch 3/2000
1/1 [==============================] - 0s 6ms/step - loss: 61.7993
Epoch 4/2000
1/1 [==============================] - 0s 5ms/step - loss: 54.7385
Epoch 5/2000
1/1 [==============================] - 0s 6ms/step - loss: 49.3645
Epoch 6/2000
1/1 [==============================] - 0s 5ms/step - loss: 45.2952
Epoch 7/2000
1/1 [==============================] - 0s 5ms/step - loss: 42.2446
Epoch 8/2000
1/1 [==============================] - 0s 5ms/step - loss: 39.9865
Epoch 9/2000
1/1 [==============================] - 0s 5ms/step - loss: 38.3371
Epoch 10/2000
1/1 [==============================] - 0s 4ms/step - loss: 37.1480
Epoch 11/2000
1/1 [==============================] - 0s 4ms/step - loss: 36.3009
Epoch 12/2000
1/1 [==============================] - 0s 5ms/step - loss: 35.7038
Epoch 13/2000
1/1 [==============================] - 0s 5ms/step - loss: 35.2868
Epoch 14/2000
1/1 [==============================] - 0s 5ms/step - loss: 34.9977
Epoch 15/2000
1/1 [==============================] - 0s 4ms/step - loss: 34.7983
Epoch 16/2000
1/1 [==============================] - 0s 4ms/step - loss: 34.6614
Epoch 17/2000
1/1 [==============================] - 0s 6ms/step - loss: 34.5675
Epoch 18/2000
1/1 [==============================] - 0s 5ms/step - loss: 34.5030
Epoch 19/2000
1/1 [==============================] - 0s 5ms/step - loss: 34.4587
Epoch 20/2000
1/1 [==============================] - 0s 5ms/step - loss: 34.4279
Epoch 21/2000
1/1 [==============================] - 0s 5ms/step - loss: 34.4063
Epoch 22/2000
1/1 [==============================] - 0s 5ms/step - loss: 34.3909
Epoch 23/2000
1/1 [==============================] - 0s 5ms/step - loss: 34.3796
Epoch 24/2000
1/1 [==============================] - 0s 7ms/step - loss: 34.3712
Epoch 25/2000
1/1 [==============================] - 0s 6ms/step - loss: 34.3647
Epoch 26/2000
1/1 [==============================] - 0s 6ms/step - loss: 34.3594
Epoch 27/2000
1/1 [==============================] - 0s 6ms/step - loss: 34.3550
Epoch 28/2000
1/1 [==============================] - 0s 7ms/step - loss: 34.3513
Epoch 29/2000
1/1 [==============================] - 0s 5ms/step - loss: 34.3479
