aureservoir Namespace Reference


Data Structures

class  AUExcept
class  CalcDelay
 template class for delay calculation More...
class  DelayLine
 template class for a signal delay line More...
class  ESN
 class for a basic Echo State Network More...
class  BPFilter
 simple bandpass filter based on an exponential moving average More...
class  IIRFilter
 general IIR filter implemented in transposed direct form 2 More...
class  SerialIIRFilter
 series of IIR filters More...
class  InitBase
 abstract base class for initialization algorithms More...
class  InitStd
 standard initialization as described in Jaeger's initial paper More...
class  SimBase
 abstract base class for simulation algorithms More...
class  SimStd
 standard simulation algorithm as in Jaeger's initial paper More...
class  SimLI
 algorithm with leaky integrator neurons More...
class  SimBP
 algorithm with bandpass style neurons as in Wustlich and Siewert More...
class  SimFilter
 algorithm with general IIR-Filter neurons More...
class  SimFilter2
 algorithm with IIR-Filter before neuron nonlinearity More...
class  SimFilterDS
 IIR-Filter neurons with additional delay&sum readout. More...
class  SimSquare
 algorithm with additional squared state updates More...
class  TrainBase
 abstract base class for training algorithms More...
class  TrainPI
 offline training algorithm using the pseudo inverse More...
class  TrainLS
 offline training algorithm using the least squares solution More...
class  TrainRidgeReg
 offline algorithm with Ridge Regression / Tikhonov Regularization More...
class  TrainDSPI
 offline algorithm for delay&sum readout with PI More...
struct  SPMatrix
 typedef trait class of a real sparse matrix with type T More...
struct  DEMatrix
 typedef trait class of a real dense matrix with type T More...
struct  DEVector
 typedef trait class of a real dense vector with type T More...
struct  CDEVector
 typedef trait class of a complex dense vector with type T More...
class  Rand
 template class for random number generation More...

tanh2 activation functions

DEVector< double >::Type tanh2_a_
DEVector< double >::Type tanh2_b_
template<typename T>
void act_tanh2 (T *data, int size)
template<typename T>
void act_invtanh2 (T *data, int size)

Enumerations

enum  ActivationFunction { ACT_LINEAR, ACT_TANH, ACT_TANH2, ACT_SIGMOID }
enum  InitAlgorithm { INIT_STD }
enum  InitParameter {
  CONNECTIVITY, ALPHA, IN_CONNECTIVITY, IN_SCALE,
  IN_SHIFT, FB_CONNECTIVITY, FB_SCALE, FB_SHIFT,
  LEAKING_RATE, TIKHONOV_FACTOR, DS_USE_CROSSCORR, DS_USE_GCC,
  DS_MAXDELAY, IP_LEARNRATE, IP_MEAN, IP_VAR
}
enum  SimAlgorithm {
  SIM_STD, SIM_SQUARE, SIM_LI, SIM_BP,
  SIM_FILTER, SIM_FILTER2, SIM_FILTER_DS
}
enum  TrainAlgorithm { TRAIN_PI, TRAIN_LS, TRAIN_RIDGEREG, TRAIN_DS_PI }

Functions

void set_denormal_flags () throw (AUExcept)
void denormals_add_dc (float *data, int size)
void denormals_add_dc (double *data, int size)
double stringToDouble (const string &s) throw (AUExcept)
int stringToInt (const string &s) throw (AUExcept)
linear activation functions


template<typename T>
void act_linear (T *data, int size)
template<typename T>
void act_invlinear (T *data, int size)
tanh activation functions


template<typename T>
void act_tanh (T *data, int size)
template<typename T>
void act_invtanh (T *data, int size)
sigmoid activation functions
template<typename T>
void act_sigmoid (T *data, int size)
template<typename T>
void act_invsigmoid (T *data, int size)
FFT routines using FFTW
void rfft (const DEVector< double >::Type &x, CDEVector< double >::Type &X, int fftsize)
void rfft (const DEVector< float >::Type &x, CDEVector< float >::Type &X, int fftsize)
void irfft (CDEVector< double >::Type &X, DEVector< double >::Type &x)
void irfft (CDEVector< float >::Type &X, DEVector< float >::Type &x)

Variables

const float SINGLE_DENORMAL_DC = 1.0E-25
const double DOUBLE_DENORMAL_DC = 1.0E-30


Enumeration Type Documentation

enum aureservoir::ActivationFunction

all possible activation functions for reservoir and output neurons

Enumerator:
ACT_LINEAR  linear activation function
ACT_TANH  tanh activation function
ACT_TANH2  tanh activation function with local slope and bias
ACT_SIGMOID  sigmoid activation function

enum aureservoir::InitAlgorithm

all possible initialization algorithms

Enumerator:
INIT_STD  standard initialization,

See also:
class InitStd

enum aureservoir::InitParameter

possible parameters of the initialization algorithms

Note:
not every algorithm uses all of them!
Enumerator:
CONNECTIVITY  connectivity of the reservoir weight matrix
ALPHA  spectral radius of the reservoir weight matrix
IN_CONNECTIVITY  connectivity of the input weight matrix
IN_SCALE  scaling of the random values in the input weight matrix
IN_SHIFT  shift of the random values in the input weight matrix
FB_CONNECTIVITY  connectivity of the feedback weight matrix
FB_SCALE  scaling of the random values in the feedback weight matrix
FB_SHIFT  shift of the random values in the feedback weight matrix
LEAKING_RATE  leaking rate for Leaky Integrator ESNs
TIKHONOV_FACTOR  regularization factor for TrainRidgeReg
DS_USE_CROSSCORR  use simple cross-correlation for delay calculation
DS_USE_GCC  use generalized cross-correlation for delay calculation
DS_MAXDELAY  maximum delay for delay&sum readout
IP_LEARNRATE  learning rate for Gaussian-IP reservoir adaptation
IP_MEAN  desired mean for Gaussian-IP reservoir adaptation
IP_VAR  desired variance for Gaussian-IP reservoir adaptation

enum aureservoir::SimAlgorithm

all possible simulation algorithms

See also:
class SimStd
Enumerator:
SIM_STD  standard simulation

See also:
class SimStd
SIM_SQUARE  additional squared state updates

See also:
class SimSquare
SIM_LI  simulation with leaky integrator neurons

See also:
class SimLI
SIM_BP  simulation with bandpass neurons

See also:
class SimBP
SIM_FILTER  simulation with IIR-Filter neurons

See also:
class SimFilter
SIM_FILTER2  IIR-Filter before nonlinearity.

See also:
class SimFilter2
SIM_FILTER_DS  IIR-Filter neurons with delay&sum readout

See also:
class SimFilterDS

enum aureservoir::TrainAlgorithm

all possible training algorithms

Enumerator:
TRAIN_PI  offline, pseudo inverse based

See also:
class TrainPI
TRAIN_LS  offline least squares algorithm,

See also:
class TrainLS
TRAIN_RIDGEREG  with ridge regression,

See also:
class TrainRidgeReg
TRAIN_DS_PI  trains a delay&sum readout with PI

See also:
class TrainDSPI
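
Example: a minimal sketch showing how these enums could be used to configure an ESN. The umbrella header and the setter methods (setSize, setInputs, setOutputs, setInitAlgorithm, setSimAlgorithm, setTrainAlgorithm, setInitParam) are assumptions for illustration and may not match the actual interface; see class ESN for the authoritative API.

    #include <aureservoir/aureservoir.h>   // assumed umbrella header

    using namespace aureservoir;

    int main()
    {
        ESN<double> net;                    // a basic Echo State Network

        // hypothetical setters, shown only to illustrate the enums
        net.setSize(100);                   // 100 reservoir neurons
        net.setInputs(1);
        net.setOutputs(1);

        net.setInitAlgorithm(INIT_STD);     // see class InitStd
        net.setSimAlgorithm(SIM_STD);       // see class SimStd
        net.setTrainAlgorithm(TRAIN_PI);    // see class TrainPI

        // initialization parameters (not every algorithm uses all of them)
        net.setInitParam(CONNECTIVITY, 0.1);
        net.setInitParam(ALPHA, 0.8);       // spectral radius
        net.setInitParam(IN_CONNECTIVITY, 1.0);

        net.init();                         // build reservoir and weight matrices
        return 0;
    }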


Function Documentation

template<typename T>
void aureservoir::act_invlinear ( T *  data,
int  size 
) [inline]

inverse linear activation function, performed on each element

Parameters:
data pointer to the data
size of the data

template<typename T>
void aureservoir::act_invsigmoid ( T *  data,
int  size 
) [inline]

inverse sigmoid activation function, performed on each element: y(x) = ln( 1/x - 1 )

template<typename T>
void aureservoir::act_invtanh ( T *  data,
int  size 
) [inline]

inverse tanh activation function, performed on each element

Parameters:
data pointer to the data
size of the data

template<typename T>
void aureservoir::act_invtanh2 ( T *  data,
int  size 
) [inline]

inverse tanh2 activation function, performed on each element: y(x) = (atanh(x) - b) / a

Parameters:
data pointer to the data
size of the data

template<typename T>
void aureservoir::act_linear ( T *  data,
int  size 
) [inline]

linear activation function, performed on each element

Parameters:
data pointer to the data
size of the data

template<typename T>
void aureservoir::act_sigmoid ( T *  data,
int  size 
) [inline]

sigmoid activation function, performed on each element: y(x) = 1 / (1 + exp(x))

Parameters:
data pointer to the data
size of the data
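
The two briefs for act_sigmoid and act_invsigmoid are each other's inverse. A standalone sketch of such elementwise loops, written here with plain standard-library functions and not taken from the library source, could look like this:

    #include <cmath>

    // elementwise sigmoid as documented above: y(x) = 1 / (1 + exp(x))
    template <typename T>
    void sigmoid_sketch(T *data, int size)
    {
        for (int i = 0; i < size; ++i)
            data[i] = T(1) / (T(1) + std::exp(data[i]));
    }

    // elementwise inverse: y(x) = ln( 1/x - 1 )
    template <typename T>
    void invsigmoid_sketch(T *data, int size)
    {
        for (int i = 0; i < size; ++i)
            data[i] = std::log(T(1) / data[i] - T(1));
    }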

template<typename T>
void aureservoir::act_tanh ( T *  data,
int  size 
) [inline]

tanh activation function, performed on each element

Parameters:
data pointer to the data
size of the data

template<typename T>
void aureservoir::act_tanh2 ( T *  data,
int  size 
) [inline]

tanh2 activation function with local slope a and bias b, performed on each element: y(x) = tanh( a*x + b ), where a and b are vectors with the same size as the data

Parameters:
data pointer to the data
size of the data
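
A standalone sketch of the tanh2 pair (act_tanh2 and act_invtanh2 above), with the per-element slope and bias passed in explicitly instead of the global tanh2_a_ / tanh2_b_ vectors; this is an illustration under those assumptions, not the library implementation:

    #include <cmath>
    #include <vector>

    // elementwise y = tanh( a*x + b ) with per-element slope a and bias b
    template <typename T>
    void tanh2_sketch(T *data, int size,
                      const std::vector<double> &a, const std::vector<double> &b)
    {
        for (int i = 0; i < size; ++i)
            data[i] = std::tanh(a[i] * data[i] + b[i]);
    }

    // elementwise inverse y = ( atanh(x) - b ) / a
    template <typename T>
    void invtanh2_sketch(T *data, int size,
                         const std::vector<double> &a, const std::vector<double> &b)
    {
        for (int i = 0; i < size; ++i)
            data[i] = (std::atanh(data[i]) - b[i]) / a[i];
    }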

void aureservoir::denormals_add_dc ( double *  data,
int  size 
) [inline]

adds a constant DC offset to the data to prevent the FPU from going into denormal mode

Parameters:
data pointer to double precision data
size of the data
See also:
http://www.musicdsp.org/files/other001.txt

void aureservoir::denormals_add_dc ( float *  data,
int  size 
) [inline]

adds a constant DC offset to the data to prevent the FPU from going into denormal mode

Parameters:
data pointer to single precision data
size of the data
See also:
http://www.musicdsp.org/files/other001.txt
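
A minimal usage sketch of the DC-offset trick; the buffer, the call site and the include path are made up for illustration, and the added constant corresponds to SINGLE_DENORMAL_DC / DOUBLE_DENORMAL_DC documented below:

    #include <vector>
    #include <aureservoir/utilities.hpp>   // header path is an assumption

    void filter_block(std::vector<float> &state)
    {
        // ... recursive filtering / feedback processing on state ...

        // periodically pull the signal away from the denormal range
        aureservoir::denormals_add_dc(&state[0], static_cast<int>(state.size()));
    }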

void aureservoir::irfft ( CDEVector< float >::Type &  X,
DEVector< float >::Type &  x 
)

calculates inverse real fft in single precision

Parameters:
X complex frequency domain input vector
x real IFFT output vector, will be resized to correct size

void aureservoir::irfft ( CDEVector< double >::Type &  X,
DEVector< double >::Type &  x 
)

calculates inverse real fft in double precision

Parameters:
X complex frequency domain input vector
x real IFFT output vector, will be resized to correct size

void aureservoir::rfft ( const DEVector< float >::Type &  x,
CDEVector< float >::Type &  X,
int  fftsize 
)

calculates real fft with zero padding in single precision

Parameters:
x real input vector
X complex FFT output vector, will be resized to fftsize/2+1
fftsize fftsize, x will be zero-padded to this size

void aureservoir::rfft ( const DEVector< double >::Type &  x,
CDEVector< double >::Type &  X,
int  fftsize 
)

calculates real fft with zero padding in double precision

Parameters:
x real input vector
X complex FFT output vector, will be resized to fftsize/2+1
fftsize fftsize, x will be zero-padded to this size
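
A standalone sketch of a zero-padded real-to-complex transform with FFTW, analogous to what rfft documents (the library's own implementation and plan handling may differ):

    #include <algorithm>
    #include <complex>
    #include <vector>
    #include <fftw3.h>

    // real FFT with zero padding: x is copied into a buffer of length fftsize,
    // the remainder stays zero, and the spectrum has fftsize/2+1 bins
    std::vector< std::complex<double> > rfft_sketch(const std::vector<double> &x,
                                                    int fftsize)
    {
        std::vector<double> in(fftsize, 0.0);
        std::copy(x.begin(), x.end(), in.begin());   // assumes x.size() <= fftsize

        std::vector< std::complex<double> > X(fftsize / 2 + 1);
        fftw_plan p = fftw_plan_dft_r2c_1d(fftsize, &in[0],
                                           reinterpret_cast<fftw_complex *>(&X[0]),
                                           FFTW_ESTIMATE);
        fftw_execute(p);
        fftw_destroy_plan(p);
        return X;
    }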

void aureservoir::set_denormal_flags (  )  throw (AUExcept) [inline]

deactivate Pentium 4 denormals

This disables denormals on Pentium 4 processors, which gives a considerable speedup. However, this only works for the SSE unit and is not supported by older models, so all libraries used have to be compiled with the -mfpmath=sse -msse flags. To turn denormals off on SSE, the Denormals Are Zero and Flush to Zero (DAZ and FZ) bits in the MXCSR register are turned on.

See also:
http://developer.apple.com/documentation/Performance/Conceptual/Accelerate_sse_migration/migration_sse_translation/chapter_4_section_2.html
Note:
For some strange reason this function must be inline, otherwise linker problems occur!
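
A sketch of how the DAZ and FZ bits can be set with SSE intrinsics; set_denormal_flags() itself may manipulate MXCSR differently and additionally check the CPU capabilities before doing so:

    #include <xmmintrin.h>   // _MM_SET_FLUSH_ZERO_MODE
    #include <pmmintrin.h>   // _MM_SET_DENORMALS_ZERO_MODE

    inline void enable_ftz_daz_sketch()
    {
        _MM_SET_FLUSH_ZERO_MODE(_MM_FLUSH_ZERO_ON);          // FZ bit of MXCSR
        _MM_SET_DENORMALS_ZERO_MODE(_MM_DENORMALS_ZERO_ON);  // DAZ bit of MXCSR
    }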

double aureservoir::stringToDouble ( const string &  s  )  throw (AUExcept) [inline]

converts a value from a string to a double; used to set parameters from strings

int aureservoir::stringToInt ( const string &  s  )  throw (AUExcept) [inline]

converts a value from a string to an integer; used to set parameters from strings
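
A minimal sketch of such a string-to-number helper, using std::istringstream and a std::runtime_error standing in for the library's AUExcept:

    #include <sstream>
    #include <stdexcept>
    #include <string>

    int string_to_int_sketch(const std::string &s)
    {
        std::istringstream iss(s);
        int value;
        if (!(iss >> value))
            throw std::runtime_error("could not convert '" + s + "' to int");
        return value;
    }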


Variable Documentation

const double aureservoir::DOUBLE_DENORMAL_DC = 1.0E-30

DC offset to prevent denormals for double precision

const float aureservoir::SINGLE_DENORMAL_DC = 1.0E-25

DC offset to prevent denormals for single precision

DEVector<double>::Type aureservoir::tanh2_a_

slope vector for tanh2

DEVector<double>::Type aureservoir::tanh2_b_

bias vector for tanh2

