Learning Precise Timing with LSTM Recurrent Networks

F. A. Gers, N. N. Schraudolph, and J. Schmidhuber. Learning Precise Timing with LSTM Recurrent Networks. Journal of Machine Learning Research, 3:115–143, 2002.

Download

pdf (333.8 kB)   djvu (289.5 kB)   ps.gz (239.0 kB)

Abstract

The temporal distance between events conveys information essential for numerous sequential tasks such as motor control and rhythm detection. While Hidden Markov Models tend to ignore this information, recurrent neural networks (RNNs) can in principle learn to make use of it. We focus on Long Short-Term Memory (LSTM) because it has been shown to outperform other RNNs on tasks involving long time lags. We find that LSTM augmented by "peephole connections" from its internal cells to its multiplicative gates can learn the fine distinction between sequences of spikes spaced either 50 or 49 time steps apart without the help of any short training exemplars. Without external resets or teacher forcing, our LSTM variant also learns to generate stable streams of precisely timed spikes and other highly nonlinear periodic patterns. This makes LSTM a promising approach for tasks that require the accurate measurement or generation of time intervals.
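The "peephole connections" described above let each multiplicative gate read the cell state directly, which is what allows the gates to count time steps. As a rough illustration, here is a minimal NumPy sketch of one step of a peephole LSTM cell in a common formulation (the parameter names and the exact placement of the output squashing are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def peephole_lstm_step(x, h, c, params):
    """One step of a peephole LSTM cell (a common formulation; details
    such as the output squashing may differ from the paper).  The gates
    'peek' at the cell state via elementwise peephole weights."""
    Wx, Wh, b = params["Wx"], params["Wh"], params["b"]
    p_i, p_f, p_o = params["p_i"], params["p_f"], params["p_o"]
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    z = Wx @ x + Wh @ h + b          # stacked pre-activations for all gates
    zi, zf, zg, zo = np.split(z, 4)
    i = sigmoid(zi + p_i * c)        # input gate peeks at c_{t-1}
    f = sigmoid(zf + p_f * c)        # forget gate peeks at c_{t-1}
    g = np.tanh(zg)                  # candidate cell input
    c_new = f * c + i * g            # cell state update
    o = sigmoid(zo + p_o * c_new)    # output gate peeks at the new cell state
    h_new = o * np.tanh(c_new)
    return h_new, c_new

# Tiny usage example with random weights (hypothetical sizes: n_in=2, n_h=3).
rng = np.random.default_rng(0)
n_in, n_h = 2, 3
params = {
    "Wx": rng.standard_normal((4 * n_h, n_in)) * 0.1,
    "Wh": rng.standard_normal((4 * n_h, n_h)) * 0.1,
    "b":  np.zeros(4 * n_h),
    "p_i": rng.standard_normal(n_h) * 0.1,
    "p_f": rng.standard_normal(n_h) * 0.1,
    "p_o": rng.standard_normal(n_h) * 0.1,
}
h, c = np.zeros(n_h), np.zeros(n_h)
h, c = peephole_lstm_step(np.array([1.0, -1.0]), h, c, params)
```

Because the peephole weights are elementwise, the output gate can fire as soon as the cell state crosses a learned threshold, which is how the network measures intervals such as the 49- vs 50-step spike spacings in the paper.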

BibTeX Entry

@article{GerSchSch02,
     author = {Felix A. Gers and Nicol N. Schraudolph
               and J\"urgen Schmid\-huber},
      title = {\href{http://nic.schraudolph.org/pubs/GerSchSch02.pdf}{
               Learning Precise Timing with {LSTM} Recurrent Networks}},
      pages = {115--143},
    journal = {Journal of Machine Learning Research},
     volume =  3,
       year = 2002,
   b2h_type = {Journal Papers},
  b2h_topic = {Other},
   abstract = {
    The temporal distance between events conveys information essential for
    numerous sequential tasks such as motor control and rhythm detection.
    While Hidden Markov Models tend to ignore this information, recurrent
    neural networks (RNNs) can in principle learn to make use of it.  We focus
    on Long Short-Term Memory (LSTM) because it has been shown to outperform
    other RNNs on tasks involving long time lags.  We find that LSTM augmented
    by ``peephole connections'' from its internal cells to its multiplicative
    gates can learn the fine distinction between sequences of spikes spaced
    either 50 or 49 time steps apart without the help of any short training
    exemplars.  Without external resets or teacher forcing, our LSTM variant
    also learns to generate stable streams of precisely timed spikes and other
    highly nonlinear periodic patterns.  This makes LSTM a promising approach
    for tasks that require the accurate measurement or generation of time
    intervals.
}}

Generated by bib2html.pl (written by Patrick Riley) on Thu Sep 25, 2014 12:00:33