LinearDense¶
- class LinearDense(in_shape: tuple[int, ...] | int, out_shape: tuple[int, ...] | int, step_time: float, *, synapse: SynapseConstructor, bias: bool = False, delay: float | None = None, batch_size: int = 1, weight_init: OneToOne[Tensor] | None = None, bias_init: OneToOne[Tensor] | None = None, delay_init: OneToOne[Tensor] | None = None)[source]¶
Bases: WeightBiasDelayMixin, Connection
Linear all-to-all connection.
\[y = x W^\intercal + b\]
- Parameters:
in_shape (tuple[int, ...] | int) – expected shape of input tensor, excluding batch dimension.
out_shape (tuple[int, ...] | int) – expected shape of output tensor, excluding batch dimension.
step_time (float) – length of a simulation time step, in \(\text{ms}\).
synapse (SynapseConstructor) – partial constructor for the inner Synapse.
bias (bool, optional) – if the connection should support learnable additive bias. Defaults to False.
delay (float | None, optional) – maximum supported delay length, in \(\text{ms}\); excludes delays when None. Defaults to None.
batch_size (int, optional) – size of input batches for simulation. Defaults to 1.
weight_init (OneToOne[torch.Tensor] | None, optional) – initializer for weights. Defaults to None.
bias_init (OneToOne[torch.Tensor] | None, optional) – initializer for biases. Defaults to None.
delay_init (OneToOne[torch.Tensor] | None, optional) – initializer for delays. Defaults to None.
Shape
LinearDense.weight, LinearDense.delay: \(\prod(N_0, \ldots) \times \prod(M_0, \ldots)\)
LinearDense.bias: \(\prod(N_0, \ldots)\)
- Where:
\(N_0, \ldots\) are the unbatched output dimensions.
\(M_0, \ldots\) are the unbatched input dimensions.
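At the tensor level, the transform \(y = x W^\intercal + b\) operates on flattened input and output shapes, which is why the weight has shape \(\prod(N_0, \ldots) \times \prod(M_0, \ldots)\). A minimal shape-level sketch in plain torch (the shapes (3, 4) and (2, 5) are arbitrary examples, not library defaults):

```python
import torch

# Hypothetical shapes for illustration: input (3, 4) -> output (2, 5).
B, in_shape, out_shape = 8, (3, 4), (2, 5)
M = 3 * 4   # product of the input dimensions
N = 2 * 5   # product of the output dimensions

W = torch.rand(N, M)           # weight: prod(out) x prod(in)
b = torch.rand(N)              # bias:   prod(out)
x = torch.rand(B, *in_shape)   # batched input

y = x.reshape(B, M) @ W.t() + b   # y = x W^T + b on flattened inputs
y = y.reshape(B, *out_shape)      # restore the unbatched output shape
```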
Note
When delay is None, no delay_ parameter is created and altering the maximum delay of synapse will have no effect. Setting delay to 0 will create and register a delay_ parameter, but delays will not be used unless it is later changed.
Note
If weight_init or bias_init is None, weight and bias are, respectively, initialized as uniform random values over the interval \([0, 1)\) using torch.rand(). If delay_init is None, delay is initialized as zeros using torch.zeros().
- forward(*inputs: Tensor, **kwargs) Tensor [source]¶
Generates connection output from inputs, after passing through the synapse.
Outputs are determined as the learned linear transformation applied to synaptic currents, after new input is applied to the synapse, then reshaped to match batched_outshape.
- Parameters:
*inputs (torch.Tensor) – inputs to the connection.
- Returns:
outputs from the connection.
- Return type:
torch.Tensor
Shape
*inputs: \(B \times M_0 \times \cdots\)
return: \(B \times N_0 \times \cdots\)
- Where:
\(B\) is the batch size.
\(M_0, \ldots\) are the unbatched input dimensions.
\(N_0, \ldots\) are the unbatched output dimensions.
Note
*inputs are reshaped using like_synaptic() then passed to Synapse.forward() of synapse. Keyword arguments are also passed through.
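The pipeline the note describes can be sketched at the shape level with a stand-in identity synapse (an assumption for illustration; real synapses are stateful and built via the synapse constructor argument):

```python
import torch

# Hypothetical shapes: input (3, 4) -> output (2, 5), so M = 12, N = 10.
B, in_shape, out_shape = 8, (3, 4), (2, 5)
M, N = 12, 10
W = torch.rand(N, M)

def like_synaptic(data):    # B x M_0 x ... -> B x M
    return data.reshape(data.shape[0], -1)

def synapse_forward(data):  # placeholder pass-through "synapse"
    return data

x = torch.rand(B, *in_shape)
currents = synapse_forward(like_synaptic(x))      # synaptic currents, B x M
out = (currents @ W.t()).reshape(B, *out_shape)   # linear map, then reshape
```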
- property inshape: tuple[int, ...]¶
Shape of inputs to the connection, excluding the batch dimension.
- like_bias(data: Tensor) Tensor [source]¶
Reshapes data like reduced postsynaptic receptive spikes to connection bias.
- Parameters:
data (torch.Tensor) – data shaped like reduced postsynaptic receptive spikes.
- Returns:
reshaped data.
- Return type:
torch.Tensor
Shape
data: \(N \times 1\)
return: \(N\)
- Where:
\(N\) is the number of elements across output dimensions.
- like_input(data: Tensor) Tensor [source]¶
Reshapes data like synapse input to connection input.
- Parameters:
data (torch.Tensor) – data shaped like synapse input.
- Returns:
reshaped data.
- Return type:
torch.Tensor
Shape
data: \(B \times M\)
return: \(B \times M_0 \times \cdots\)
- Where:
\(B\) is the batch size.
\(M\) is the number of elements across input dimensions.
\(M_0, \ldots\) are the unbatched input dimensions.
- like_synaptic(data: Tensor) Tensor [source]¶
Reshapes data like connection input to synapse input.
- Parameters:
data (torch.Tensor) – data shaped like connection input.
- Returns:
reshaped data.
- Return type:
torch.Tensor
Shape
data: \(B \times M_0 \times \cdots\)
return: \(B \times M\)
- Where:
\(B\) is the batch size.
\(M_0, \ldots\) are the unbatched input dimensions.
\(M\) is the number of elements across input dimensions.
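like_input() and like_synaptic() are shape-level inverses of one another; a minimal sketch, assuming a hypothetical in_shape of (3, 4) so that M = 12:

```python
import torch

B, in_shape, M = 8, (3, 4), 12

def like_synaptic(data):   # B x M_0 x ... -> B x M (flatten)
    return data.reshape(data.shape[0], -1)

def like_input(data):      # B x M -> B x M_0 x ... (unflatten)
    return data.reshape(data.shape[0], *in_shape)

x = torch.rand(B, *in_shape)
flat = like_synaptic(x)        # shape (8, 12)
roundtrip = like_input(flat)   # shape (8, 3, 4), element-wise equal to x
```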
- property outshape: tuple[int, ...]¶
Shape of outputs from the connection, excluding the batch dimension.
- postsyn_receptive(data: Tensor) Tensor [source]¶
Reshapes data like connection output for pre-post learning methods.
- Parameters:
data (torch.Tensor) – data shaped like output of forward().
- Returns:
reshaped data.
- Return type:
torch.Tensor
Shape
data: \(B \times N_0 \times \cdots\)
return: \(B \times N \times 1 \times 1\)
- Where:
\(B\) is the batch size.
\(N_0, \ldots\) are the unbatched output dimensions.
\(N\) is the number of elements across output dimensions.
- presyn_receptive(data: Tensor) Tensor [source]¶
Reshapes data like the synapse state for pre-post learning methods.
- Parameters:
data (torch.Tensor) – data shaped like output of like_synaptic().
- Returns:
reshaped data.
- Return type:
torch.Tensor
Shape
data: \(B \times M \times [N]\)
return: \(B \times N \times M \times 1\) or \(B \times 1 \times M \times 1\)
- Where:
\(B\) is the batch size.
\(M\) is the number of elements across input dimensions.
\(N\) is the number of elements across output dimensions.
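The singleton dimensions in the two receptive shapes exist so they broadcast against each other: the postsynaptic view \(B \times N \times 1 \times 1\) times the presynaptic view \(B \times 1 \times M \times 1\) yields one value per (output, input) pair, the \(B \times N \times M\) layout a pairwise weight update needs. A shape-only sketch (this is not the library's update rule):

```python
import torch

B, M, N = 8, 12, 10

post = torch.rand(B, N).reshape(B, N, 1, 1)  # like postsyn_receptive output
pre = torch.rand(B, M).reshape(B, 1, M, 1)   # like presyn_receptive, no delays

pairwise = post * pre   # broadcasts to B x N x M x 1, one value per pair
```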
- property selector: Tensor | None¶
Learned delays as a selector for synaptic currents and delays.
- Returns:
delay selector if the connection has learnable delays.
- Return type:
torch.Tensor | None
Shape
\(1 \times M \times N\)
- Where:
\(M\) is the number of elements across input dimensions.
\(N\) is the number of elements across output dimensions.