Networks

This module contains functions to create network topologies and initialize weights. Currently, only the random connectivity topology is implemented.

hetero.networks.create_topology(N, N_syn, allow_autapse=False, topology='random', rng=None)[source]

Create sparse connectivity matrix following specified topology rules.

Parameters:
  • N (int) – Number of neurons

  • N_syn (int) – Number of synapses (determines sparsity)

  • allow_autapse (bool, optional) – Whether to allow self-connections (default False)

  • topology (str, optional) – Type of topology to create (default ‘random’). Currently supported: ‘random’

  • rng (numpy.random.Generator, optional) – Random number generator instance

Returns:

Sparse adjacency matrix of shape (N, N)

Return type:

scipy.sparse.csr_matrix
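The behavior described above can be sketched in a few lines of NumPy/SciPy. This is an illustrative stand-in, not the library's source: it samples `N_syn` distinct (post, pre) pairs uniformly at random, excluding the diagonal unless autapses are allowed, and packs them into a CSR matrix.

```python
import numpy as np
from scipy import sparse

def random_topology_sketch(N, N_syn, allow_autapse=False, rng=None):
    """Sketch of a 'random' topology: N_syn connections drawn uniformly."""
    rng = np.random.default_rng() if rng is None else rng
    # Enumerate all candidate (post, pre) pairs, dropping the diagonal
    # unless self-connections (autapses) are allowed.
    pairs = [(i, j) for i in range(N) for j in range(N)
             if allow_autapse or i != j]
    # Sample N_syn distinct pairs, so the matrix has exactly N_syn entries.
    idx = rng.choice(len(pairs), size=N_syn, replace=False)
    rows, cols = zip(*(pairs[k] for k in idx))
    return sparse.csr_matrix((np.ones(N_syn), (rows, cols)), shape=(N, N))

A = random_topology_sketch(20, 50, rng=np.random.default_rng(0))
print(A.nnz)               # 50 synapses
print(A.diagonal().sum())  # 0.0 — no autapses by default
```

Note that `N_syn` directly sets the sparsity: the resulting matrix always has exactly `N_syn` nonzero entries.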

hetero.networks.intialize_weights(N_input, N, N_output, p, f, mu_e, sigma_0, return_adj=True, rng=None)[source]

Creates three weight matrices:

  • the feedforward weights w_ff mapping inputs to neurons,

  • the recurrent weights w between neurons, and

  • an empty feedback weight array w_fb (a placeholder for future use).

The recurrent weights are initialized to ensure excitatory/inhibitory balance, whereas the feedforward weights are initialized from a centered normal distribution.

Parameters:
  • N_input (int) – Number of input dimensions

  • N (int) – Number of neurons

  • N_output (int) – Number of output dimensions

  • p (float) – Connection probability

  • f (float) – Fraction of excitatory neurons

  • mu_e (float) – Mean excitatory weight. The mean inhibitory weight is computed based on f to ensure E/I balance.

  • sigma_0 (float) – Recurrent weight standard deviation (same for both excitatory and inhibitory)

  • return_adj (bool, optional) – Whether to return adjacency matrices. Default is True.

  • rng (numpy.random.Generator, optional) – Random number generator instance

Returns:

  • w_ff (ndarray) – Input weights of shape (N, N_input)

  • w (ndarray) – Recurrent weights of shape (N, N)

  • w_fb (ndarray) – Feedback weights of shape (N, N_output)

Note

  • The recurrent weights are scaled by \(1/\sqrt{Np}\) so that the expected recurrent input variance is equal to sigma_0, independent of the network size and connection probability.

  • The feedforward weights are scaled by \(1/\sqrt{dim_{input}}\) so that the feedforward input variance on each neuron is equal to 1, independent of the input dimension.

  • The feedforward and recurrent gains (\(J_u\) and \(J\), respectively) will be applied later, when the dynamical system is integrated.
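The initialization described above can be sketched as follows. This is a minimal illustration, not the library's source; in particular, the exact balance condition is an assumption here, taken as \(f\,\mu_e + (1-f)\,\mu_i = 0\), i.e. the mean inhibitory weight is \(\mu_i = -f\,\mu_e/(1-f)\), and the first `int(f * N)` neurons are treated as excitatory.

```python
import numpy as np

def initialize_weights_sketch(N_input, N, N_output, p, f, mu_e, sigma_0,
                              rng=None):
    """Sketch of the weight initialization described above."""
    rng = np.random.default_rng() if rng is None else rng
    N_e = int(f * N)                 # number of excitatory neurons
    # Assumed balance condition: f*mu_e + (1-f)*mu_i = 0.
    mu_i = -f * mu_e / (1 - f)
    scale = 1.0 / np.sqrt(N * p)     # keeps recurrent input variance fixed
    # Recurrent weights: excitatory columns then inhibitory columns.
    w = np.empty((N, N))
    w[:, :N_e] = rng.normal(mu_e, sigma_0, size=(N, N_e)) * scale
    w[:, N_e:] = rng.normal(mu_i, sigma_0, size=(N, N - N_e)) * scale
    # Sparsify with connection probability p.
    w *= rng.random((N, N)) < p
    # Feedforward weights: centered normal, scaled by 1/sqrt(N_input).
    w_ff = rng.normal(0.0, 1.0, size=(N, N_input)) / np.sqrt(N_input)
    # Feedback weights: empty placeholder, as in the documented API.
    w_fb = np.zeros((N, N_output))
    return w_ff, w, w_fb

w_ff, w, w_fb = initialize_weights_sketch(5, 100, 2, 0.1, 0.8, 1.0, 0.5,
                                          rng=np.random.default_rng(1))
print(w_ff.shape, w.shape, w_fb.shape)  # (100, 5) (100, 100) (100, 2)
```

The returned shapes match the documented ones: `w_ff` is `(N, N_input)`, `w` is `(N, N)`, and `w_fb` is `(N, N_output)`. Whether the means are scaled together with the standard deviation by \(1/\sqrt{Np}\) is an assumption of this sketch.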