Project 2


• Colab link:
https://drive.google.com/file/d/1WRWGuiNS9VDfBp447SSuIxUzCjsEpkve/view?usp=sharing
• Please add the Code Cells and Text Cells directly in the Colab Notebook. After
completing your work, save and download the Notebook in .ipynb format.
Rename the file with your Full Name and Student ID Number, and submit the
.ipynb file through Canvas.

Background: In this project, we will implement PyTorch neural network classifiers
for the Concrete Crack Images recognition task. Concrete is the most widely used
material in the construction industry. Concrete cracking occurs when water in the
concrete evaporates too quickly due to changes in weather. Many traditional
measures have been used to determine whether a building is cracked, but they
often lead to inaccurate conclusions. Here, we will use AI to classify the crack
condition. The original dataset contains concrete images, with and without cracks,
collected from various METU Campus Buildings. Each class has 20,000 images, for a
total of 40,000 images of 227 x 227 pixels with RGB channels. We have reduced the
dataset to a smaller version.

Example images without a crack (left) and with a crack (right)

Questions:

Q1: We use a learning rate lr = 0.0005 with the Adam optimizer to train the AI
model. Please try a different learning rate (at least ten times smaller or greater
than the original one) and check what happens to the training loss and test loss
after this change. Explain the possible reason for the phenomena you observe. (6
points)

Hint: You can use a line chart to visualize the changes in training loss and test loss
under different epoch and learning rate (LR) combinations.
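
A minimal plotting sketch, assuming you record one average training loss and one
average test loss per epoch in your training loop (the function and label names here
are only placeholders):

import matplotlib.pyplot as plt

def plot_loss_curves(history, title="Training vs. test loss"):
    # 'history' is assumed to be a dict mapping a curve label
    # (e.g. "lr=0.0005, train") to a list of per-epoch loss values
    # that you record yourself during training.
    for label, losses in history.items():
        plt.plot(range(1, len(losses) + 1), losses, label=label)
    plt.xlabel("Epoch")
    plt.ylabel("Loss")
    plt.title(title)
    plt.legend()
    plt.show()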

Q2: As introduced in the previous class, a model needs sufficient training epochs to
learn the information in the training dataset. However, an excessive number of
epochs can lead to worse results in testing. Increase the number of training epochs
to a large value (for example, 50 or larger) and find out what happens. What is the
reason for this result? (6 points)
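
If it helps, the sketch below shows one way to record per-epoch training and test
losses while training for many epochs; the model, data loaders, loss function,
optimizer, and device are assumed to come from the notebook and are passed in as
arguments:

import torch

def train_and_evaluate(model, train_loader, test_loader, criterion, optimizer,
                       device, num_epochs=50):
    # Returns the average training and test loss for every epoch so the two
    # curves can be compared afterwards (e.g. with the plotting hint above).
    train_losses, test_losses = [], []
    for epoch in range(num_epochs):
        model.train()
        running = 0.0
        for images, labels in train_loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
            running += loss.item()
        train_losses.append(running / len(train_loader))

        model.eval()
        running = 0.0
        with torch.no_grad():
            for images, labels in test_loader:
                images, labels = images.to(device), labels.to(device)
                running += criterion(model(images), labels).item()
        test_losses.append(running / len(test_loader))
    return train_losses, test_losses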

Q3: Now, keep the large number of epochs that you just set in the previous question.

a) Add some augmentation strategies mentioned in the notebook (an illustrative
sketch is shown below). Does the test accuracy improve? Why? (6 points)
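
As an illustration only (the exact strategies to use are the ones mentioned in the
notebook, and the transforms and parameters below are assumptions), a typical
torchvision augmentation pipeline for the training images might look like this:

from torchvision import transforms

# Example augmentation pipeline for the training set only; the test set should
# keep a plain (non-augmented) transform.
train_transform = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomRotation(degrees=10),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
])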

b) Regularization is an important technique to prevent model overfitting. Add
basic regularization (weight decay) during training. For reference, please look at the
API here (https://pytorch.org/docs/stable/generated/torch.optim.Adam.html); a
reasonable weight decay to add is 0.0001. What other regularization methods do you
know? Please list one or two different methods here. (7 points)
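
For the weight-decay part, a minimal sketch using the Adam API linked above is
shown here; 'model' and the learning rate are assumed to be the ones already
defined in the notebook:

import torch

# weight_decay adds L2-style regularization directly inside the Adam optimizer;
# 0.0001 is the value suggested in the question.
optimizer = torch.optim.Adam(model.parameters(), lr=0.0005, weight_decay=0.0001)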
