Configuration¶
The configuration is designed as a nested dictionary for the sake of readability. It contains 9 segments that define the specifics of the following:
storage
heterogeneity profile
network architecture
network dynamic
gains
stimulus
task design
dataset
readout
Note
The default values in the configuration dictionary are used as a reference for naming each simulation. Key-value pairs that differ from the default configuration file will be appended to the simulation name.
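For illustration only, a minimal sketch of such a naming scheme; the helper sim_name and the exact name format are hypothetical, not the package's implementation:

def sim_name(default_cfg, cfg, base='sim'):
    # append every key-value pair that differs from the defaults
    diffs = {k: v for k, v in cfg.items() if default_cfg.get(k) != v}
    return base + ''.join(f'_{k}={v}' for k, v in sorted(diffs.items()))

# e.g., sim_name({'N': 250, 'p': 0.1}, {'N': 500, 'p': 0.1}) returns 'sim_N=500'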
Storage¶
'root': osjoin('results/'), # the default path
'stype': 'float16', # data type of the stored state
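A minimal sketch of how these two keys could be consumed, assuming numpy-based storage; the file name and save call are illustrative, not the package's actual I/O code:

import os
import numpy as np
from os.path import join as osjoin

root, stype = osjoin('results/'), 'float16'
os.makedirs(root, exist_ok=True)
states = np.random.rand(1000, 250)                           # (time steps, neurons)
np.save(osjoin(root, 'states.npy'), states.astype(stype))    # store at reduced precision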
Heterogeneity profile¶
'distro': 'lognormal', # or any numpy-compatible distribution
'n_means': 3, # number of mean levels
'n_vars': 3, # number of variance levels
'log_distance': 0.5, # log10-distance between distinct levels
n_means × (n_vars + 1) sets of time constants are drawn from the distro
distribution. The default value of the mean and variance levels is set to 1. When
n_means or n_vars is larger than one, the distinct levels are centered
around 1 with a logarithmic distance of log_distance (in base 10).
The default values above generate 12 distinct networks with mean time constants of \(\mu = \{10^{-0.5}, 1, 10^{0.5}\}\) and standard deviation levels of \(\sigma = \{0, 10^{-0.5}, 1, 10^{0.5}\}\).
Note
The default zero variance level is always added to create a homogeneous network as the control.
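The level grid can be sketched as below; the exact lognormal parameterization used by the package may differ, this only illustrates the n_means × (n_vars + 1) layout:

import numpy as np

n_means, n_vars, log_distance = 3, 3, 0.5
mus = 10.0 ** (log_distance * (np.arange(n_means) - (n_means - 1) / 2))    # centered around 1
sigmas = np.concatenate([[0.0],                                            # homogeneous control
                         10.0 ** (log_distance * (np.arange(n_vars) - (n_vars - 1) / 2))])
print(len(mus) * len(sigmas))                                              # 12 distinct networks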
Network architecture¶
'N': 250, # number of neurons in the network
'p': 0.1, # synaptic connection probability
'f': 0.8, # fraction of excitatory neurons
'mue': 1., # average synaptic strength of excitatory neurons
'sig0': 1., # standard deviation of synaptic strengths
'autapse': False, # is self-connection allowed?
'topology': 'random', # network topology
'delay': 0, # synaptic delay
Note
Currently only random connectivity is supported.
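A minimal sketch of an Erdős–Rényi-style weight matrix under the defaults above; the package's actual sampling and the sign convention for inhibitory synapses are assumptions here:

import numpy as np

N, p, f, mue, sig0, autapse = 250, 0.1, 0.8, 1., 1., False
rng = np.random.default_rng(0)

W = rng.normal(mue, sig0, size=(N, N)) * (rng.random((N, N)) < p)   # random connectivity
n_exc = int(f * N)
W[:, n_exc:] *= -1                  # flip the sign of the inhibitory (last 20%) columns
if not autapse:
    np.fill_diagonal(W, 0.)         # remove self-connections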
Network dynamic¶
'dyn': {
'LIF': {
'E': -70e-3, # reversal potential (in volts)
'vreset': -70e-3, # reset potential (in volts)
'thr': -69e-3, # threshold potential (in volts)
'tau_ref': 0.002, # refractory period (must be larger than dt to take effect)
'RI': 1e-3, # factor converting gained synaptic inputs to proper voltages (mV --> V)
'nu0': 5, # firing rate of the reset state (in Hz)
'kernel': None, # synaptic kernel of the recurrent synapses (None means delta spikes)
'tau_s': 10, # synaptic timescale of the kernel
},
'LI':{
'af': 'possig', # activation function of the rate network
'v0': 1., # activation function's inverse slope
'thr': 0., # activation function's bias
},
# define other dynamics here
}
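A hand-written Euler sketch of the LIF parameters above; the membrane timescale tau_m is an assumed placeholder (it is not part of this segment), and the refractory period and synaptic kernel are omitted for brevity:

import numpy as np

dt, tau_m = 1e-3, 20e-3             # tau_m is assumed, not taken from the config
E, vreset, thr, RI = -70e-3, -70e-3, -69e-3, 1e-3

def lif_step(v, I_syn):
    v = v + dt / tau_m * (E - v + RI * I_syn)   # leaky integration of the scaled input
    spiked = v >= thr
    v[spiked] = vreset                          # reset after a threshold crossing
    return v, spiked

v = np.full(250, E)                             # all neurons start at rest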
Gains¶
'Ju': 1., # gain of input synapses
'J': 1., # gain of recurrent synapses
'Jn': .1, # gain (intensity) of the white noise
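How the three gains enter the dynamics can be pictured as below; the additive composition is an assumption based on the descriptions above, not the package's exact update equation:

import numpy as np

Ju, J, Jn = 1., 1., .1

def total_drive(W_in, u, W, r, rng):
    return (Ju * (W_in @ u)                       # scaled input synapses
            + J * (W @ r)                         # scaled recurrent synapses
            + Jn * rng.standard_normal(len(r)))   # scaled white noise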
Stimulus¶
'stim': {
'lorenz': {'rho': 28., 'sigma':10., 'beta':8/3.}, # Lorenz 1963 system (chaotic)
'mackey_glass-10': {'tau': 10}, # Mackey-Glass with a delay of 10 (periodic)
'mackey_glass-50': {'tau': 50}, # Mackey-Glass with a delay of 50 (chaotic)
'mackey_glass-80': {'tau': 80}, # Mackey-Glass with a delay of 80 (chaotic)
'sign': {'freq': 1., 'phase': 0.}, # sign of a sine wave (periodic)
'sin': {'freq': 1., 'phase': 0.}, # sin input with frequency of 1 (periodic)
'sin-5': {'freq': 5., 'phase': 0.}, # sin input with frequency of 5 (periodic)
'narma': {'order': 30, 'a1': .2, 'a2': .04, 'b': 1.5, 'c': .001}, # non-linear auto-regressive moving average of order 30 (chaotic)
# define other stimuli here ...
}
Synthetic stimuli defined in the Stimulus module can be parameterized here.
The stimuli can be generated from an arbitrary underlying process. To define
new processes, add a new key and its set of parameters (e.g., compare sin
and sin-5).
Note
Different stimuli can be mixed together to form a multi-dimensional input.
Note
The physical time used in the processes above will be scaled to ensure the average timescale of the (possibly compound) input is equal to 1. Due to this time-rescaling, for example, sin-5 and sin behave identically; upon rescaling, the physical frequency of 5 becomes 1. See [] for the reason and more details.
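A hand-written sketch of this rescaling for the sine process; sin_stimulus is illustrative and not the Stimulus module's API:

import numpy as np

def sin_stimulus(freq=1., phase=0., dt=0.01, n_units=30):
    # rescaled time t counts periods: physical time is t / freq, so the
    # signal completes one cycle per unit of t regardless of freq
    t = np.arange(0, n_units, dt)
    return np.sin(2 * np.pi * freq * (t / freq) + phase)

# after rescaling, sin_stimulus(freq=5.) is identical to sin_stimulus(freq=1.)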
Task design¶
'task': {
'taylor': {'deg_min': 1, 'deg_max': 6, 'deg_n': 6},
'nostradamus':{'delt_min': -2, 'delt_max': 2, 'delt_n': 29},
'tayloramus': {
'deg_min': 1, 'deg_max': 6, 'deg_n': 6,
'delt_min': -2, 'delt_max': 2, 'delt_n': 29
}
# define other temporal tasks here
},
Synthetic tasks defined in the tasks module can be parameterized here.
Please refer to [tasks] for further details.
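For orientation, a hedged sketch of how these parameter grids could map onto targets; the exact definitions live in the tasks module, and both target functions below are assumed interpretations:

import numpy as np

degs = np.linspace(1, 6, 6)         # 'taylor': polynomial degrees 1 through 6
delts = np.linspace(-2, 2, 29)      # 'nostradamus': temporal shifts in base timescales

def taylor_target(u, deg):
    return u ** deg                 # memoryless nonlinear transform of the input

def nostradamus_target(u, delt, dt=0.01):
    shift = int(round(delt / dt))
    return np.roll(u, -shift)       # delt > 0: prediction; delt < 0: recall
                                    # (wrap-around at the edges is ignored in this toy)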
Dataset size¶
'n_trn': 20, # size of the training set in units of the base timescale
'n_tst': 10, # size of the test set in units of the base timescale
'n_trials': 3, # number of independent readout trials the training set is expanded for
'scale_by_size': True, # expand the training set by the size of the network
Synthesizing a dataset necessitates specifying the sizes of the training and test sets. These are defined as multiples of the stimulus base timescale, which is designed to be 1.
Note
The number of samples in each set depends on the sampling rate. We often sample every dt=0.01 units. Thus, there are 100 samples in one timescale of the stimulus.
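A worked example of the resulting sample counts under the defaults; the exact effect of scale_by_size is assumed to be a proportional expansion:

dt = 0.01
n_trn, n_tst, n_trials, N = 20, 10, 3, 250

samples_per_timescale = int(1 / dt)             # 100 samples per unit timescale
trn_samples = n_trn * samples_per_timescale     # 2000 training samples
tst_samples = n_tst * samples_per_timescale     # 1000 test samples
# with scale_by_size, the training set additionally grows with the network size N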
Readout¶
'readout_method': 'ridge', # or any sklearn-compatible regression model (e.g., ols, lasso, elasticnet)
'readout_regularizer': 1e-6, # regularization strength of the readout
'readout_has_bias': True, # fit intercept?
'readout_activation': { # this can be different from the activation of the neurons within the network
'None': {}, # no activation, return raw states (whether rates or spikes).
# for rate neurons pass through an activation function
'rate': {'af': 'possig', 'v0': 1., 'thr': 0.}, # same as `LI` dynamic
# for spiking neurons, convolve the spike train
'array': {}, # with nothing, just return the spike train as an array
# custom kernels
'g50': {'kernel': 'gaussian', 'sigma': 50}, # causal gaussian (sigma is in ticks)
'gnc50':{'kernel': 'gaussian_nc','sigma': 50}, # non-causal gaussian (sigma is in ticks)
'd50': {'kernel': 'exp', 'tau_k': 50}, # exponential decay (tau_k is in ticks)
'a50': {'kernel': 'alpha', 'tau_k': 50}, # alpha kernel (tau_k is in ticks)
'ma50': {'kernel': 'moving_avg', 'win': 50}, # moving-average kernel with window size win (in ticks)
# add other activations here ...
}
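A minimal sketch of the ridge readout on (possibly kernel-filtered) states, assuming the sklearn-style interface the keys above suggest; the data here is a random placeholder:

import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.random((2000, 250))             # filtered network states (samples x neurons)
y = rng.random(2000)                    # task target

readout = Ridge(alpha=1e-6,             # readout_regularizer
                fit_intercept=True)     # readout_has_bias
readout.fit(X, y)
y_hat = readout.predict(X)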