Just like the backpropagation techniques discussed in Chapter 17, Deep Learning, the
algorithm evaluates a loss function to obtain the gradients and update the weight
parameters. In the RNN context, backpropagation runs from right to left in the
computational graph, updating the parameters from the final time step all the way to the
initial time step. Hence, the algorithm is called backpropagation through time.
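To make the mechanics concrete, the following minimal NumPy sketch unrolls a vanilla tanh RNN forward and then runs the backward pass from the final time step back to the initial one, accumulating gradients for the shared weight matrices at every step. All names, shapes, and the squared-error loss on the final output are illustrative assumptions, not the book's own code:

import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_out, T = 3, 5, 1, 4           # toy dimensions and sequence length

W_xh = rng.normal(scale=0.1, size=(n_hidden, n_in))
W_hh = rng.normal(scale=0.1, size=(n_hidden, n_hidden))
W_hy = rng.normal(scale=0.1, size=(n_out, n_hidden))

x = rng.normal(size=(T, n_in))                  # one input sequence (assumed data)
y_true = rng.normal(size=(n_out,))              # target for the final step (assumed)

# Forward pass: unroll the recurrence and keep every hidden state
h = [np.zeros(n_hidden)]
for t in range(T):
    h.append(np.tanh(W_xh @ x[t] + W_hh @ h[-1]))
y_pred = W_hy @ h[-1]

# Backward pass: run from the last time step to the first, accumulating
# gradients for the weight matrices shared across all time steps
dW_xh, dW_hh = np.zeros_like(W_xh), np.zeros_like(W_hh)
dy = 2 * (y_pred - y_true)                      # dL/dy for squared-error loss
dW_hy = np.outer(dy, h[-1])
dh = W_hy.T @ dy                                # gradient flowing into the final state
for t in reversed(range(T)):
    dz = dh * (1 - h[t + 1] ** 2)               # backprop through tanh
    dW_xh += np.outer(dz, x[t])
    dW_hh += np.outer(dz, h[t])
    dh = W_hh.T @ dz                            # pass gradient back to h_{t-1}

# One gradient-descent update of the shared parameters
lr = 0.01
W_xh -= lr * dW_xh; W_hh -= lr * dW_hh; W_hy -= lr * dW_hy

The key point the sketch illustrates is that the same three weight matrices appear at every time step, so their gradients are sums of per-step contributions collected while walking backward through the unrolled graph.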
In addition to the recurrent connections between the hidden states, there are several alternative approaches, including recurrent output relationships, bidirectional RNNs, and encoder-decoder architectures (you can refer to GitHub for more background references to complement this brief summary here: https://github.com/PacktPublishing/Hands-On-Machine-Learning-for-Algorithmic-Trading).
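As a quick illustration of one of these alternatives, the following Keras sketch wraps a SimpleRNN in a Bidirectional layer so that hidden states process the sequence in both directions; the layer sizes and mean-squared-error setup are illustrative assumptions:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Bidirectional, SimpleRNN, Dense

model = Sequential([
    # forward and backward passes over the sequence, outputs concatenated
    Bidirectional(SimpleRNN(16), input_shape=(None, 3)),
    Dense(1)
])
model.compile(optimizer='adam', loss='mse')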
However, successfully learning relevant past information requires the training output samples to capture this information so that backpropagation can adjust the network parameters accordingly. Feeding the ground-truth values of previous outcomes to the network alongside the input vectors during training is called teacher forcing.
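The following minimal sketch shows how the training pairs look under teacher forcing; the toy target sequence and the start value of 0.0 are illustrative assumptions. At each step, the network receives the ground-truth previous outcome as input rather than its own prior prediction:

import numpy as np

targets = np.array([0.5, 0.8, 1.1, 0.9])        # illustrative target sequence

# Teacher forcing: inputs are the true targets shifted right by one
# step, with a start value (0.0) in the first slot
decoder_inputs = np.concatenate(([0.0], targets[:-1]))

for x_prev, y_t in zip(decoder_inputs, targets):
    print(f"input (true previous outcome): {x_prev:.1f} -> target: {y_t:.1f}")

At inference time, the true outcomes are unavailable, so the network's own predictions take their place.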
Connections from the output to the subsequent hidden states can also be used in combination with hidden recurrences. However, because each step then depends on the previous step's output, training requires backpropagation through time and cannot be parallelized across time steps.
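The sketch below, again with assumed names and toy shapes, shows the forward pass of such an output recurrence, h_t = tanh(W_xh x_t + W_hh h_{t-1} + W_oh y_{t-1}); note that every step needs the previous output, which is why the time steps must be processed strictly in order:

import numpy as np

rng = np.random.default_rng(1)
n_in, n_hidden, n_out, T = 3, 5, 1, 4           # toy dimensions (assumed)
W_xh = rng.normal(scale=0.1, size=(n_hidden, n_in))
W_hh = rng.normal(scale=0.1, size=(n_hidden, n_hidden))
W_oh = rng.normal(scale=0.1, size=(n_hidden, n_out))  # output-to-hidden connection
W_hy = rng.normal(scale=0.1, size=(n_out, n_hidden))

x = rng.normal(size=(T, n_in))
h = np.zeros(n_hidden)
y = np.zeros(n_out)
for t in range(T):
    # each update depends on the previous output y, forcing
    # strictly sequential computation over the time dimension
    h = np.tanh(W_xh @ x[t] + W_hh @ h + W_oh @ y)
    y = W_hy @ h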