grh.mur.at - nonlinear
http://grh.mur.at/taxonomy/term/55/0

Echo State Networks with Filter Neurons and a Delay&Sum Readout
http://grh.mur.at/publications/esns-with-filters-and-delay-sum-readout

Year: 2010
Authors: Georg Holzmann, Helmut Hauser
Type: Journal paper
Publisher: Neural Networks

Abstract:
Echo state networks (ESNs) are a novel approach to recurrent neural network training with the advantage of a very simple and linear learning algorithm. It has been demonstrated that ESNs outperform other methods on a number of benchmark tasks. Although the approach is appealing, there are still some inherent limitations in the original formulation.

Here we suggest two enhancements of this network model. First, the previously proposed idea of filters in neurons is extended to arbitrary infinite impulse response (IIR) filter neurons. This enables such networks to learn multiple attractors and signals at different timescales, which is especially important for modeling real-world time series. Second, a delay&sum readout is introduced, which adds trainable delays to the synaptic connections of output neurons and therefore vastly improves the memory capacity of echo state networks.

It is shown on commonly used benchmark tasks and real-world examples that this new structure significantly outperforms standard ESNs and other state-of-the-art models for nonlinear dynamical system modeling.

Publication: Echo State Networks with Filter Neurons and a Delay&Sum Readout (preprint, PDF): http://grh.mur.at/sites/default/files/ESNFilterDelaySum_0.pdf

Tags: echo state networks, machine learning, neural networks, nonlinear, reservoir computing, Signal Processing
Posted: Mon, 13 Jul 2009 17:05:14 +0000 by grh
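
The two enhancements can be illustrated with a small, self-contained sketch. The following Python/NumPy code is not the implementation from the paper: it assumes one second-order band-pass filter per reservoir neuron (the paper allows arbitrary IIR filters), chooses each readout delay by a plain cross-correlation against the target, and fits the readout weights by ridge regression; reservoir size, spectral radius, filter bands, maximum delay and regularization are illustrative values.

```python
import numpy as np
from scipy.signal import butter

rng = np.random.default_rng(0)
N, washout, ridge, max_delay = 100, 100, 1e-6, 50   # illustrative settings

# Sparse random reservoir, rescaled to spectral radius 0.9 (standard ESN setup).
W = rng.uniform(-1, 1, (N, N)) * (rng.random((N, N)) < 0.1)
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-1, 1, N)

# One band-pass biquad per neuron with log-spaced center frequencies, so that
# different neurons operate on different timescales.
centers = np.logspace(np.log10(0.01), np.log10(0.45), N)    # normalized frequencies
b = np.empty((N, 3)); a = np.empty((N, 3))
for i, f in enumerate(centers):
    b[i], a[i] = butter(1, [0.8 * f, f], btype="band")      # order-1 band-pass -> biquad

def run_reservoir(u):
    """Drive the filtered reservoir with input u (shape (T,)); return states (T, N)."""
    x = np.zeros(N)                      # filtered neuron activations
    s1 = np.zeros(N); s2 = np.zeros(N)   # per-neuron biquad state (direct form II transposed)
    states = np.zeros((len(u), N))
    for t, ut in enumerate(u):
        pre = np.tanh(W @ x + W_in * ut)          # ordinary ESN update
        x = b[:, 0] * pre + s1                    # vectorized biquad across all neurons
        s1 = b[:, 1] * pre - a[:, 1] * x + s2
        s2 = b[:, 2] * pre - a[:, 2] * x
        states[t] = x
    return states

def train_delay_and_sum(states, y):
    """Pick one delay per neuron by cross-correlation, then fit weights by ridge regression."""
    T = len(y)
    delays = np.zeros(N, dtype=int)
    for i in range(N):
        corr = [np.dot(states[:T - d, i], y[d:]) for d in range(max_delay)]
        delays[i] = int(np.argmax(np.abs(corr)))
    X = np.column_stack([np.roll(states[:, i], delays[i]) for i in range(N)])
    X, yt = X[washout:], y[washout:]     # washout also discards the roll wrap-around
    W_out = np.linalg.solve(X.T @ X + ridge * np.eye(N), X.T @ yt)
    return delays, W_out

# Toy usage: learn a target that needs both memory and a nonlinearity.
u = rng.uniform(-0.5, 0.5, 2000)
y = np.tanh(np.convolve(u, np.ones(10) / 10, mode="same"))
states = run_reservoir(u)
delays, W_out = train_delay_and_sum(states, y)
y_hat = np.column_stack([np.roll(states[:, i], delays[i]) for i in range(N)]) @ W_out
print("train MSE:", np.mean((y_hat[washout:] - y[washout:]) ** 2))
```

In this sketch the per-neuron filters give the reservoir units different timescales, while the per-neuron delays let the readout combine reservoir signals from different points in the past, which is what extends the short-term memory beyond that of a standard ESN readout.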

Reservoir Computing: a powerful Black-Box Framework for Nonlinear Audio Processing
http://grh.mur.at/publications/reservoir-computing-for-audio

Year: 2009
Authors: Georg Holzmann
Type: Conference paper
Publisher: Proc. of the 12th Int. Conference on Digital Audio Effects (DAFx-09)

Abstract:
This paper proposes reservoir computing as a general framework for nonlinear audio processing. Reservoir computing is a novel approach to recurrent neural network training with the advantage of a very simple and linear learning algorithm. It can in theory approximate arbitrary nonlinear dynamical systems with arbitrary precision, has an inherent temporal processing capability, and is therefore well suited for many nonlinear audio processing problems. Whenever nonlinear relationships are present in the data and time information is crucial, reservoir computing can be applied.
Examples from three application areas are presented: nonlinear system identification of a tube amplifier emulator algorithm; nonlinear audio prediction, as needed in wireless audio transmission where dropouts may occur; and automatic melody transcription from a polyphonic audio stream, as one example from the large field of music information retrieval. Reservoir computing was able to outperform state-of-the-art alternative models in all studied tasks.

Publication: Reservoir Computing DAFx-09 paper (PDF): http://grh.mur.at/sites/default/files/RCandAudio.pdf
Media: Audio Examples for DAFx-09 paper (ZIP, 5.3 MB): http://grh.mur.at/sites/default/files/DAFX09AudioExamples.zip

Tags: audio, echo state networks, machine learning, neural networks, nonlinear, reservoir computing, Signal Processing
Posted: Thu, 25 Jun 2009 14:23:28 +0000 by grh
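
As a companion to the system-identification example mentioned above, the following Python/NumPy sketch trains a plain echo state network as a black box on input/output recordings of a toy "tube-like" nonlinearity (soft clipping followed by a one-pole low-pass). The reference system, reservoir size and regularization are invented for illustration and are not taken from the paper; the point is only that the reservoir stays fixed and just the linear readout is trained.

```python
import numpy as np

rng = np.random.default_rng(1)
N, washout, ridge = 200, 200, 1e-6   # illustrative settings

def tube_like(u, cutoff=0.3):
    """Toy reference system: soft clipping followed by a one-pole low-pass."""
    clipped = np.tanh(3.0 * u)
    out = np.zeros_like(clipped); state = 0.0
    for t, v in enumerate(clipped):
        state = (1.0 - cutoff) * state + cutoff * v
        out[t] = state
    return out

# Sparse random reservoir with fixed weights; only the readout is trained.
W = rng.uniform(-1, 1, (N, N)) * (rng.random((N, N)) < 0.05)
W *= 0.8 / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-1, 1, N)

def states_of(u):
    """Collect reservoir states for an input signal u (shape (T,))."""
    x = np.zeros(N)
    S = np.zeros((len(u), N))
    for t, ut in enumerate(u):
        x = np.tanh(W @ x + W_in * ut)
        S[t] = x
    return S

# Black-box identification: fit the linear readout on input/output recordings.
u_train = rng.uniform(-1.0, 1.0, 5000)
y_train = tube_like(u_train)
S = states_of(u_train)[washout:]
W_out = np.linalg.solve(S.T @ S + ridge * np.eye(N), S.T @ y_train[washout:])

# Evaluate on unseen material (a crude sweep-like test signal).
u_test = 0.8 * np.sin(2 * np.pi * 0.01 * np.arange(2000.0) ** 1.2)
y_hat = states_of(u_test) @ W_out
print("test MSE:", np.mean((y_hat[washout:] - tube_like(u_test)[washout:]) ** 2))
```

The same recipe carries over to the other two application areas named in the abstract: only the training targets change (future audio samples for prediction, melody annotations for transcription), while the reservoir and the linear learning step stay the same.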