Ex. No. 5

Build a deep neural network for multi-class text classification using the Reuters
dataset. Plot the results in a graph.

Aim :- To build a deep neural network for multi-class text classification using the
Reuters dataset and to plot the training results.

Algorithm :-
1. Import the necessary libraries, including keras, the Reuters dataset from keras.datasets,
and matplotlib for plotting.

2. Preprocess the data: the Reuters articles are already provided as integer-encoded word
sequences, so pad them to a fixed length and one-hot encode the target labels.

3. Define the architecture of the deep neural network. This involves specifying the number of
layers, the number of neurons in each layer, the activation function, and any regularization
techniques such as dropout or batch normalization.

4. Compile the model by specifying the loss function, the optimizer, and the evaluation metric.

5. Train the model using the training data. This involves specifying the batch size, the number
of epochs, and the validation data.

6. Evaluate the performance of the model on the test data (a sample evaluation snippet is
shown after the program).

7. Optionally, fine-tune the hyperparameters of the model using techniques such as grid search
or Bayesian optimization (a simple grid-search sketch follows this list).
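
The grid search mentioned in step 7 is not implemented in the program below. The following
self-contained sketch is an illustration only: it sweeps two hyperparameters (the LSTM size
and the dropout rate) with a plain nested loop. The candidate values, the 3-epoch budget,
the 10% validation split, and the build_model helper are assumptions chosen for brevity;
libraries such as KerasTuner offer more systematic search.

# A minimal grid-search sketch for step 7 (illustrative values only)
from keras.datasets import reuters
from keras.models import Sequential
from keras.layers import Dense, Embedding, LSTM, SpatialDropout1D
from keras.preprocessing.sequence import pad_sequences
from keras.utils import to_categorical

max_words, maxlen = 10000, 100
(x_train, y_train), _ = reuters.load_data(num_words=max_words, test_split=0.2)
x_train = pad_sequences(x_train, maxlen=maxlen)
y_train = to_categorical(y_train, num_classes=46)

def build_model(units, dropout_rate):
    # Same architecture as the program below, parameterized by the values under search
    model = Sequential()
    model.add(Embedding(input_dim=max_words, output_dim=128, input_length=maxlen))
    model.add(SpatialDropout1D(dropout_rate))
    model.add(LSTM(units, dropout=dropout_rate))
    model.add(Dense(46, activation='softmax'))
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model

best_acc, best_config = 0.0, None
for units in [64, 128]:              # candidate LSTM sizes (assumed)
    for dropout_rate in [0.2, 0.4]:  # candidate dropout rates (assumed)
        history = build_model(units, dropout_rate).fit(
            x_train, y_train, validation_split=0.1,
            epochs=3, batch_size=64, verbose=0)
        val_acc = history.history['val_accuracy'][-1]
        if val_acc > best_acc:
            best_acc, best_config = val_acc, (units, dropout_rate)

print('Best (units, dropout):', best_config, 'validation accuracy: %.3f' % best_acc)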

Program :

import matplotlib.pyplot as plt
from keras.datasets import reuters
from keras.models import Sequential
from keras.layers import Dense, Embedding, LSTM, SpatialDropout1D
from keras.preprocessing.sequence import pad_sequences
from keras.utils import to_categorical

# Load the Reuters newswire dataset (integer-encoded articles, 46 topic classes)
max_words = 10000
(x_train, y_train), (x_test, y_test) = reuters.load_data(num_words=max_words, test_split=0.2)

# Preprocess the data: pad the integer sequences to a fixed length and one-hot encode the labels
maxlen = 100  # Set the maximum length of input sequences
x_train = pad_sequences(x_train, maxlen=maxlen)
x_test = pad_sequences(x_test, maxlen=maxlen)
y_train = to_categorical(y_train, num_classes=46)  # 46 classes in the Reuters dataset
y_test = to_categorical(y_test, num_classes=46)

# Build the model
model = Sequential()
model.add(Embedding(input_dim=max_words, output_dim=128, input_length=maxlen))
model.add(SpatialDropout1D(0.2))
model.add(LSTM(100, dropout=0.2, recurrent_dropout=0.2))
model.add(Dense(46, activation='softmax'))

# Compile the model
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

# Train the model
history = model.fit(x_train, y_train, validation_data=(x_test, y_test), epochs=10, batch_size=64)

# Plot the training and validation accuracy
plt.plot(history.history['accuracy'], label='Training Accuracy')
plt.plot(history.history['val_accuracy'], label='Validation Accuracy')
plt.title('Training and Validation Accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
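
Step 6 of the algorithm calls for evaluating the model on the test data, which the program
above does not report explicitly. A minimal continuation, reusing the model, x_test, and
y_test defined above, could look like this (note that the same split already served as the
validation data during training):

# Evaluate the trained model on the test data (step 6 of the algorithm)
test_loss, test_acc = model.evaluate(x_test, y_test, batch_size=64, verbose=0)
print('Test loss: %.4f' % test_loss)
print('Test accuracy: %.4f' % test_acc)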
