tether.nn package
Submodules
tether.nn.alif module
tether.nn.attention module
- class tether.nn.attention.SpikingSelfAttention(dim, num_heads=8, decay=0.9, threshold=1.0)
Bases: Module
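A minimal usage sketch. The (batch, tokens, dim) input layout and a shape-preserving output are assumptions about the API, not confirmed by this reference:

import torch
from tether.nn.attention import SpikingSelfAttention

attn = SpikingSelfAttention(dim=64, num_heads=8, decay=0.9, threshold=1.0)
x = torch.randn(2, 16, 64)  # assumed layout: batch of 2, 16 tokens, dim=64
out = attn(x)               # spiking attention output; same shape assumed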
tether.nn.block module
tether.nn.lif module
tether.nn.plif module
tether.nn.surrogates module
- class tether.nn.surrogates.Arctan(alpha=2.0, trainable=False)
Bases: Surrogate
Arctan surrogate gradient.
The surrogate derivative is given by:
\[f'(x) = \frac{1}{1 + (\alpha \pi x)^2}\]
where x is the normalized membrane potential (v - threshold).
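This page does not show the forward/backward wiring; the sketch below illustrates how a surrogate with this derivative is typically attached to the Heaviside spike via torch.autograd.Function. The ArctanSpike name and interface are hypothetical, not tether's actual code:

import torch

class ArctanSpike(torch.autograd.Function):
    # Hypothetical illustration, not tether's implementation.
    @staticmethod
    def forward(ctx, x, alpha=2.0):
        # x is the normalized membrane potential, v - threshold
        ctx.save_for_backward(x)
        ctx.alpha = alpha
        return (x > 0).float()  # Heaviside spike: non-differentiable

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # f'(x) = 1 / (1 + (alpha * pi * x)^2), the arctan surrogate above
        surrogate = 1.0 / (1.0 + (ctx.alpha * torch.pi * x) ** 2)
        return grad_output * surrogate, None  # no gradient for fixed alpha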
- class tether.nn.surrogates.FastSigmoid(alpha=2.0, trainable=False)
Bases: Surrogate
Fast sigmoid (approximate) surrogate gradient.
Uses a computationally cheaper approximation of the sigmoid derivative:
\[f'(x) = \frac{1}{(1 + |\alpha x|)^2}\]
This avoids expensive exponential operations.
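A quick way to see the saving: the fast form needs only an absolute value and a division, while the exact sigmoid derivative calls exp() internally. A small illustrative comparison (not library code):

import torch

x, alpha = torch.linspace(-3, 3, 7), 2.0
fast = 1.0 / (1.0 + (alpha * x).abs()) ** 2  # no exponentials
s = torch.sigmoid(alpha * x)                 # exp() under the hood
exact = alpha * s * (1.0 - s)
# Both are bell-shaped around x = 0; the fast form has heavier tails.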
- class tether.nn.surrogates.Sigmoid(alpha=2.0, trainable=False)
Bases: Surrogate
Sigmoid surrogate gradient.
The surrogate function is a sigmoid, and its derivative is:
\[f'(x) = \alpha \cdot \sigma(\alpha x) \cdot (1 - \sigma(\alpha x))\]
where \(\sigma\) is the logistic sigmoid function and x is the membrane potential gap.
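The \(\alpha\) prefactor follows from the chain rule applied to \(\sigma(\alpha x)\):
\[\frac{d}{dx}\,\sigma(\alpha x) = \alpha\,\sigma'(\alpha x) = \alpha\,\sigma(\alpha x)\,(1 - \sigma(\alpha x))\]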
- class tether.nn.surrogates.Surrogate(alpha=2.0, trainable=False)
Bases: Module
Base class for surrogate gradient functions used in Spiking Neural Networks.
Surrogate gradients allow for backpropagation through the non-differentiable Heaviside step function used for spike generation.
- Parameters:
alpha (float, optional) – Scaling parameter that controls the steepness/width of the surrogate derivative. Default is 2.0.
trainable (bool, optional) – If True, alpha becomes a learnable parameter. Default is False.
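Since LIF and PLIF below accept a surrogate argument, a typical configuration sketch looks like this (grounded in the signatures on this page; the training loop itself is omitted):

from tether.nn import LIF, FastSigmoid

# trainable=True makes alpha a learnable parameter (see above), so the
# steepness of the surrogate is optimized jointly with the weights.
sg = FastSigmoid(alpha=2.0, trainable=True)
lif = LIF(n_neurons=128, surrogate=sg)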
Module contents
- class tether.nn.ALIF(n_neurons, decay_v=0.9, decay_a=0.9, threshold=1.0, beta=0.5, alpha=2.0, store_traces=False)
Bases: Module
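The dynamics are not spelled out in this reference. A common adaptive-LIF formulation consistent with these parameter names (an assumption, not a confirmed description of tether's update rule) couples a leaky membrane \(v_t\) with a spike-driven adaptation variable \(a_t\) that raises the effective threshold:
\[v_t = \text{decay\_v} \cdot v_{t-1} + I_t, \quad a_t = \text{decay\_a} \cdot a_{t-1} + s_{t-1}, \quad s_t = H\left(v_t - (\text{threshold} + \beta \cdot a_t)\right)\]
where H is the Heaviside step, replaced on the backward pass by a surrogate with steepness alpha.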
- class tether.nn.Arctan(alpha=2.0, trainable=False)
Bases: Surrogate
Arctan surrogate gradient.
The surrogate derivative is given by:
\[f'(x) = \frac{1}{1 + (\alpha \pi x)^2}\]
where x is the normalized membrane potential (v - threshold).
- class tether.nn.FastSigmoid(alpha=2.0, trainable=False)
Bases: Surrogate
Fast sigmoid (approximate) surrogate gradient.
Uses a computationally cheaper approximation of the sigmoid derivative:
\[f'(x) = \frac{1}{(1 + |\alpha x|)^2}\]
This avoids expensive exponential operations.
- class tether.nn.LIF(n_neurons, decay=0.9, threshold=1.0, alpha=2.0, surrogate=None, store_traces=False)
Bases: Module
- property alpha
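A per-timestep usage sketch. That the module is called once per timestep and keeps its membrane state internally is an assumption about the API, not confirmed here:

import torch
from tether.nn import LIF

lif = LIF(n_neurons=128, decay=0.9, threshold=1.0, store_traces=True)
current = torch.randn(20, 4, 128)  # hypothetical input: 20 steps, batch of 4
spikes = [lif(current[t]) for t in range(current.shape[0])]  # assumed stepping API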
- class tether.nn.PLIF(n_neurons, init_decay=0.9, init_threshold=1.0, alpha=2.0, surrogate=None, store_traces=False)
Bases: Module
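The init_decay and init_threshold names suggest that PLIF learns these quantities rather than fixing them (the "parametric" in PLIF). A common way to keep a learned decay inside (0, 1), shown purely as an assumption about the parameterization and not as tether's actual code:

import torch
import torch.nn as nn

init_decay = 0.9
# Store the logit of the decay; sigmoid(w) stays in (0, 1) while training.
w = nn.Parameter(torch.logit(torch.tensor(init_decay)))
decay = torch.sigmoid(w)  # ~0.9 at initialization, learnable thereafter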
- class tether.nn.Sigmoid(alpha=2.0, trainable=False)
Bases: Surrogate
Sigmoid surrogate gradient.
The surrogate function is a sigmoid, and its derivative is:
\[f'(x) = \alpha \cdot \sigma(\alpha x) \cdot (1 - \sigma(\alpha x))\]
where \(\sigma\) is the logistic sigmoid function and x is the membrane potential gap.
- class tether.nn.SpikingSelfAttention(dim, num_heads=8, decay=0.9, threshold=1.0)
Bases: Module
- class tether.nn.Surrogate(alpha=2.0, trainable=False)
Bases: Module
Base class for surrogate gradient functions used in Spiking Neural Networks.
Surrogate gradients allow for backpropagation through the non-differentiable Heaviside step function used for spike generation.
- Parameters:
alpha (float, optional) – Scaling parameter that controls the steepness/width of the surrogate derivative. Default is 2.0.
trainable (bool, optional) – If True, alpha becomes a learnable parameter. Default is False.