“…[61,63,64] For example, a fully connected layer can be mapped directly onto a single RRAM array, or partitioned and implemented across several smaller arrays. [63,65,66] For long short-term memory (LSTM) networks, the synaptic weights of an LSTM layer feeding the input, output, and forget gates can be deployed on separate RRAM arrays.…”
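The partitioning idea above can be sketched in software: a weight matrix larger than one crossbar is tiled into sub-blocks, each sub-block is assigned to one (hypothetical) small array, and the partial dot products along the input dimension are accumulated. This is a minimal NumPy sketch, not an RRAM simulator; the tile size (`TILE = 4`) and the function name `partitioned_matvec` are illustrative choices, and real crossbar tiles are typically much larger (e.g., 128 × 128) with analog accumulation and ADCs handling the partial sums.

```python
import numpy as np

TILE = 4  # hypothetical crossbar size (rows x columns); illustrative only


def partitioned_matvec(W, x, tile=TILE):
    """Compute y = W @ x as if W were partitioned across several
    tile x tile RRAM arrays.

    Each sub-block of W plays the role of one small crossbar array;
    partial products along the input dimension are accumulated,
    mirroring how a fully connected layer too large for one array
    can be split over a few smaller ones.
    """
    out_dim, in_dim = W.shape
    y = np.zeros(out_dim)
    for r in range(0, out_dim, tile):          # tile rows (output neurons)
        for c in range(0, in_dim, tile):       # tile columns (input neurons)
            # one "array" computes this sub-block's partial matrix-vector product
            y[r:r + tile] += W[r:r + tile, c:c + tile] @ x[c:c + tile]
    return y


# The tiled result matches the unpartitioned matrix-vector product.
W = np.arange(48, dtype=float).reshape(6, 8)
x = np.ones(8)
assert np.allclose(partitioned_matvec(W, x), W @ x)
```

The same tiling applies per gate in the LSTM case: the weight matrices toward the input, output, and forget gates would each be mapped to their own set of arrays.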