neural_data_simulator.core

Neural Data Simulator core package.

The Neural Data Simulator, or NDS, aims to create a system that can generate spiking data from behavioral data (e.g., cursor movement, arm kinematics) in real time.

The structure of this package is as follows:
  • encoder contains the encoder implementation.

  • ephys_generator currently hosts all classes that are used to generate spikes from spike rates.

  • filters contains all the filter implementations used for signal processing.

  • health_checker hosts utilities that monitor the data processing time consistency. It prints messages to the console when the data processing time is increasing or is taking longer than expected.

  • inputs contains various types of inputs that can be used by NDS, mainly inputs.SamplesInput and inputs.LSLInput.

  • models is meant to host model implementations that can be used by the encoder to convert behavior data into spike rates.

  • outputs contains classes that NDS uses to stream data out, mainly outputs.FileOutput, outputs.ConsoleOutput and outputs.LSLOutputDevice.

  • runner is where the function that runs the encoder pipeline is implemented.

  • samples contains the samples.Samples class, which is used to represent data in NDS.

  • settings contains the data model used to parse and validate the config.

  • timing contains the timing.Timer class, which is used by every script that processes data at regular intervals.

neural_data_simulator.core.encoder

This module contains the Encoder implementation.

class neural_data_simulator.core.encoder.Processor(*args, **kwargs)[source]

Bases: Protocol

Protocol for an encoder Processor class.

A processor can be used to transform data, usually for the purpose of adapting it to match the requirements of the:

  • encoder model: in this case the processor is called a preprocessor.

  • consumer of the encoder output (spike rates): in this case the processor represents a postprocessor.

A python protocol (PEP-544) works in a similar way to an abstract class. The __init__() method of this protocol should never be called as protocols are not meant to be instantiated. An __init__() method may be defined in a concrete implementation of this protocol if needed.

execute(data: Samples) Samples[source]

Execute processing on the samples input data.

Parameters

data – Input data to process.

Returns

Data samples after processing.
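A minimal sketch of a concrete Processor follows. It assumes only what the protocol documents (an execute method mapping Samples to Samples); the Samples stand-in and the mean-centering behavior are illustrative, not part of NDS.

```python
from dataclasses import dataclass

import numpy as np


# Stand-in for neural_data_simulator.core.samples.Samples (assumed layout:
# timestamps with shape (n_samples,), data with shape (n_samples, n_axes)).
@dataclass
class Samples:
    timestamps: np.ndarray
    data: np.ndarray


class MeanCenteringPreprocessor:
    """A hypothetical preprocessor that removes the per-axis mean from behavior data."""

    def execute(self, data: Samples) -> Samples:
        centered = data.data - data.data.mean(axis=0)
        return Samples(timestamps=data.timestamps, data=centered)


pre = MeanCenteringPreprocessor()
out = pre.execute(
    Samples(np.arange(4.0), np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0], [7.0, 8.0]]))
)
```

Because the protocol is structural, any class with a matching execute signature conforms; no inheritance is required.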

__init__(*args, **kwargs)
class neural_data_simulator.core.encoder.Encoder(*, input_: Input, preprocessor: Optional[Processor], model: EncoderModel, postprocessor: Optional[Processor], output: Output)[source]

Bases: object

Encoder class implementation.

It manages the data through all the necessary steps to convert from behavior data into spiking data. These steps currently include an optional preprocessor, the model transformation and an optional postprocessor.

__init__(*, input_: Input, preprocessor: Optional[Processor], model: EncoderModel, postprocessor: Optional[Processor], output: Output)[source]

Initialize the Encoder class.

Parameters
  • input – a class that implements reading one or multiple samples with the read method, and can be connected to through a context manager.

  • preprocessor – optional processor to transform the samples before they are passed to the model.

  • model – a class that can convert samples of behavior data into samples of spike rates for each call of the encode method.

  • postprocessor – optional processor to transform the samples that are returned by the model.

  • output – a class that can take one or multiple samples for each call of the send method, and can be connected to through a context manager.

iterate() None[source]

Move samples through all the stages of the Encoder.

That is: behavior samples input -> preprocessing -> encoding -> postprocessing -> spike rates samples output.

connect() Iterator[None][source]

Connect to both input and output.

Yields

Yields after both connections are established.
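The iterate() data flow can be sketched with plain callables standing in for the input, processors, model, and output; the function below is illustrative of the staging only, not the NDS implementation.

```python
def iterate_once(read, preprocessor, encode, postprocessor, send):
    """One pass of the encoder pipeline: read -> preprocess -> encode ->
    postprocess -> send. Optional stages are skipped when None."""
    samples = read()
    if preprocessor is not None:
        samples = preprocessor(samples)
    rates = encode(samples)
    if postprocessor is not None:
        rates = postprocessor(rates)
    return send(rates)
```

For example, with trivial callables the staging is easy to trace: reading 2, preprocessing to 3, encoding to 30, no postprocessor, and sending unchanged yields 30.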

neural_data_simulator.core.ephys_generator

This module contains classes that are used to generate spikes from spike rates.

class neural_data_simulator.core.ephys_generator.SpikeRateInput(*args, **kwargs)[source]

Bases: Protocol

An abstract input that can be used to read spike rates.

A python protocol (PEP-544) works in a similar way to an abstract class. The __init__() method of this protocol should never be called as protocols are not meant to be instantiated. An __init__() method may be defined in a concrete implementation of this protocol if needed.

property channel_count: int

Get the number of input channels.

Returns

The input channel count.

read() Optional[ndarray][source]

Read spike rates, one per channel.

Returns

An array of spike rates with shape (n_units,) or None if no samples are available.

__init__(*args, **kwargs)
class neural_data_simulator.core.ephys_generator.LSLSpikeRateInputAdapter(lsl_input: LSLInput)[source]

Bases: SpikeRateInput

Reads spike rates from an LSL input.

property channel_count: int

Get the LSL stream channel count.

Returns

The input channel count.

__init__(lsl_input: LSLInput)[source]

Create an adapter for a given LSL input.

Parameters

lsl_input – The LSL input to adapt.

connect()[source]

Connect to the LSL input stream.

read() Optional[ndarray][source]

Read spike rates from the LSL input stream.

Returns

An array of spike rates with shape (n_units,) or None if no samples are available.

class neural_data_simulator.core.ephys_generator.SpikeRateTestingInput(n_channels: int, n_units: int)[source]

Bases: SpikeRateInput

A constant spike rate input that can be used for testing.

Generates spike rates so that spikes are more likely to happen on higher-numbered channels and less likely on lower-numbered channels. The spike rate for a given channel is always constant.

__init__(n_channels: int, n_units: int)[source]

Create a testing spike rate input.

Parameters
  • n_channels – The number of input channels.

  • n_units – The total number of units, which should be a multiple of the number of channels.

property channel_count: int

Get the number of input channels.

Returns

The input channel count.

read() Optional[ndarray][source]

Read spike rates, one per channel.

Returns

The array of testing spike rates with shape (n_units,). For example, if n_channels = 50 and n_units_per_channel = 1, the spike rates will be constant and equal to:

[ 0. 2. 4. 6. 8. 10. 12. 14. 16. 18. 20. 22. 24. 26. 28. 30. 32. 34. 36. 38. 40. 42. 44. 46. 48. 50. 52. 54. 56. 58. 60. 62. 64. 66. 68. 70. 72. 74. 76. 78. 80. 82. 84. 86. 88. 90. 92. 94. 96. 98.]
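The constant rates listed above can be reproduced as follows; the formula (rates increasing linearly from 0 toward 100 spikes/s across units) is inferred from the example output and is an assumption, not taken from the implementation.

```python
import numpy as np

# Assumed formula: unit i gets rate i * (100 / n_units) spikes/s.
n_channels, n_units_per_channel = 50, 1
n_units = n_channels * n_units_per_channel
rates = np.arange(n_units) * (100.0 / n_units)
```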

class neural_data_simulator.core.ephys_generator.SpikeEvents(time_idx: ndarray, unit: ndarray, waveform: Optional[ndarray])[source]

Bases: object

Spike events that are generated during an execution of ProcessOutput.

During each execution, n_samples of data are processed and sent to the output. Depending on the current spike rates, any number of spike events (n_spike_events >= 0) may be generated across all units (n_units) and inserted into the output data.

time_idx: ndarray

An array of time indices of spike events with shape (n_spike_events,). The time index is the index of the sample in the output data, from 0 to n_samples - 1. For example, time_idx can be [6 3 5] in the case of 3 spike events. This means that the first spike event corresponds to the 6th sample, the second to the 3rd sample, and the third to the 5th sample.

unit: ndarray

An array of the same size as time_idx that contains the unit number that spiked for the corresponding time_idx entries. For example, unit can be [0 1 1] in case of 3 spike events. This means that the first spike event corresponds to the unit 0, the second and the third to the unit 1.

waveform: Optional[ndarray]

The spike waveforms with shape (n_samples_waveform, n_spike_events), where n_samples_waveform is configurable. The values are the amplitudes of the spike waveforms in counts.

Can be None if only using spike times without waveforms.

class SpikeEvent(time_idx: int, unit: int, waveform: Optional[ndarray])[source]

Bases: NamedTuple

A single spike event.

time_idx: int

The time index of the spike event.

unit: int

The unit that spiked.

waveform: Optional[ndarray]

The spike waveform. Can be None if only using the spike time.

get_spike_event(index: int) SpikeEvent[source]

Get a single SpikeEvent at the specified index.

__init__(time_idx: ndarray, unit: ndarray, waveform: Optional[ndarray]) None
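The relationship between the array fields and a single SpikeEvent can be sketched with stand-in data; the arrays below reuse the documented examples, and the waveform length (48 samples) is an illustrative assumption.

```python
from typing import NamedTuple, Optional

import numpy as np


# Stand-in mirroring the documented SpikeEvents.SpikeEvent fields.
class SpikeEvent(NamedTuple):
    time_idx: int
    unit: int
    waveform: Optional[np.ndarray]


time_idx = np.array([6, 3, 5])  # sample index of each of the 3 events
unit = np.array([0, 1, 1])      # unit that fired each event
waveform = np.zeros((48, 3))    # (n_samples_waveform, n_spike_events)


def get_spike_event(i: int) -> SpikeEvent:
    # Select the i-th event across the parallel arrays; waveforms are stored
    # column-wise, one column per event.
    wf = waveform[:, i] if waveform is not None else None
    return SpikeEvent(int(time_idx[i]), int(unit[i]), wf)
```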
class neural_data_simulator.core.ephys_generator.NoiseData(n_channels: int, beta: float, standard_deviation: float, fmin: float, samples: int, random_seed: Optional[int])[source]

Bases: object

Multi-channel colored noise generator.

__init__(n_channels: int, beta: float, standard_deviation: float, fmin: float, samples: int, random_seed: Optional[int]) None[source]

Initialize the noise generator.

Gaussian distributed noise will be pre-generated on all channels.

Parameters
  • n_channels – The number of channels.

  • beta – The power-spectrum of the generated noise is proportional to (1 / f)**beta.

  • standard_deviation – The desired standard deviation of the noise.

  • fmin – Low-frequency cutoff. fmin is normalized frequency and the range is between 1/samples and 0.5, which corresponds to the Nyquist frequency.

  • samples – The number of samples to generate per channel.

  • random_seed – The random seed to use. Use a fixed seed for reproducible results.

get_slice(shape: Tuple[int, int]) ndarray[source]

Get the next random noise slice of a given size.

After the current noise slice is computed, the window is advanced by n_samples, so the next call to this function returns the subsequent window.

Parameters

shape – The (n_samples, n_channels) shape of the noise slice to return.

Returns

An array with shape (n_samples, n_channels).
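Colored noise of this kind can be sketched by spectrally shaping Gaussian white noise; the function below mirrors the documented parameters (beta, standard_deviation, fmin as a normalized frequency) but is an illustrative method, not the NDS implementation.

```python
import numpy as np


def colored_noise(samples, n_channels, beta, standard_deviation, fmin, random_seed=None):
    """Generate (1 / f)**beta noise per channel via FFT spectral shaping."""
    rng = np.random.default_rng(random_seed)
    freqs = np.fft.rfftfreq(samples)  # normalized frequencies in [0, 0.5]
    cutoff = max(fmin, 1.0 / samples)  # flatten the spectrum below fmin
    scale = np.ones_like(freqs)
    nonzero = freqs > 0
    # Amplitude scaling of f**(-beta/2) yields a (1/f)**beta power spectrum.
    scale[nonzero] = np.maximum(freqs[nonzero], cutoff) ** (-beta / 2)
    scale[0] = 0.0  # drop the DC component
    white = rng.standard_normal((samples, n_channels))
    spectrum = np.fft.rfft(white, axis=0) * scale[:, None]
    noise = np.fft.irfft(spectrum, n=samples, axis=0)
    # Rescale each channel to the requested standard deviation.
    noise *= standard_deviation / noise.std(axis=0)
    return noise
```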

class neural_data_simulator.core.ephys_generator.ContinuousData(noise_data: NoiseData, n_channels: int, params: Params)[source]

Bases: object

Generator of the electrophysiology raw data.

class Params(raw_data_frequency: float, n_units_per_channel: int, n_samples_waveform: int, lfp_data_frequency: float, lfp_filter_cutoff: float, lfp_filter_order: int)[source]

Bases: object

Initialization parameters for the ContinuousData class.

raw_data_frequency: float

The electrophysiology raw data output sample rate in Hz.

n_units_per_channel: int

The number of units per channel.

n_samples_waveform: int

The number of samples in a spike waveform.

lfp_data_frequency: float

The LFP data sample rate in Hz.

lfp_filter_cutoff: float

The LFP filter cutoff frequency in Hz.

lfp_filter_order: int

The LFP filter order.

property lfp_downsample_rate: float

Get the LFP downsample rate.

__init__(raw_data_frequency: float, n_units_per_channel: int, n_samples_waveform: int, lfp_data_frequency: float, lfp_filter_cutoff: float, lfp_filter_order: int) None
__init__(noise_data: NoiseData, n_channels: int, params: Params)[source]

Initialize the ContinuousData class.

Parameters
  • noise_data – The background noise generator.

  • n_channels – The number of input/output channels.

  • params – The initialization parameters.

get_continuous_data(n_samples: int, spike_events: SpikeEvents) ndarray[source]

Get continuous data from spike events and background noise.

Parameters
  • n_samples – The number of continuous data samples to return.

  • spike_events – The spike events to use for generating the data.

Returns

The synthesized continuous data with combined spikes and noise as a (n_samples, n_units) array.

get_lfp_data(data: ndarray) ndarray[source]

Filter and downsample raw data to get LFP data.

Parameters

data – The raw continuous data with shape (n_samples, n_units).

Returns

The filtered and downsampled data with shape (n_filtered_samples, n_units).

class neural_data_simulator.core.ephys_generator.Waveforms(params: Params, n_units: int)[source]

Bases: object

Spike waveforms loader.

class Params(prototypes_definitions: Dict[int, List[float]], unit_prototype_mapping: Dict[str, int], n_samples: int)[source]

Bases: object

Initialization parameters for the Waveforms class.

prototypes_definitions: Dict[int, List[float]]

The waveform prototypes definitions. The keys are the prototype IDs and the values are the waveform definitions.

unit_prototype_mapping: Dict[str, int]

The unit prototype mapping. The keys are the unit numbers and the values are the prototype IDs. The “default” key is used for all units that are not explicitly defined. Unit numbers are 0-based.

n_samples: int

The number of samples in the waveform.

property prototypes: ndarray

The waveform prototypes.

property prototypes_ids: list[int]

The waveform prototypes IDs.

__init__(prototypes_definitions: Dict[int, List[float]], unit_prototype_mapping: Dict[str, int], n_samples: int) None
__init__(params: Params, n_units: int)[source]

Initialize Waveforms class.

Parameters
  • params – The waveforms parameters.

  • n_units – The number of units.

get_spike_waveforms(units: ndarray) ndarray[source]

Get the spike waveforms for a given list of units.

Parameters

units – The units array to get the waveforms for.

Returns

The waveform samples for the given units.

class neural_data_simulator.core.ephys_generator.SpikeTimes(n_channels: int, params: Params)[source]

Bases: object

Spike-time generator without waveforms.

This class generates random spike events according to unit spike rates within a given time interval determined by a number of samples and a known raw data sample rate. It also ensures that spikes on the same unit are separated by at least the refractory period, taking into account the spikes generated in the previous time interval.

class Params(raw_data_frequency: float, n_units_per_channel: int, refractory_time: float)[source]

Bases: object

Initialization parameters for the SpikeTimes class.

raw_data_frequency: float

The electrophysiology raw data output sample rate in Hz.

n_units_per_channel: int

The number of units per channel.

refractory_time: float

The refractory time in seconds.

property n_refractory_samples: int

The number of samples in the refractory period.

__init__(raw_data_frequency: float, n_units_per_channel: int, refractory_time: float) None
__init__(n_channels: int, params: Params)[source]

Initialize the SpikeTimes class.

Parameters
  • n_channels – The number of channels. This value together with the configured number of units per channel determines the total number of units for which spikes are generated.

  • params – The spike generator parameters.

generate_spikes(rates: ndarray, n_samples: int) SpikeEvents[source]

Generate spikes for the given rates and a given number of samples.

Spike times are calculated using the spike rate input for each channel. When there are multiple units in a channel, the rate is divided equally across the units.

First, the chance of a spike for each sample of each unit is calculated from the rates (spikes/sec). A random draw is performed for each of the n_samples for each unit, and a spike is assigned or not depending on the spike chance. An iterative process then removes spikes that fall within the configured refractory period.

Parameters
  • rates – The spike rates array with shape (n_units,). Each element in the array represents the spike rate in spikes per second for the corresponding unit.

  • n_samples – The number of samples to output.

Returns

The generated spikes as SpikeEvents.
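The algorithm described above (per-sample Bernoulli draws from the rates, followed by refractory-period enforcement) can be sketched in plain numpy; this operates directly on per-unit rates and uses illustrative names, not the NDS internals.

```python
import numpy as np


def generate_spike_times(rates, n_samples, raw_data_frequency, refractory_time, rng):
    """Return (time_idx, unit) arrays of spike events for per-unit rates."""
    # Chance of a spike in one sample, from rates in spikes/sec.
    p_spike = np.asarray(rates) / raw_data_frequency
    # One Bernoulli draw per sample per unit.
    draws = rng.random((n_samples, len(rates))) < p_spike
    n_refractory = int(round(refractory_time * raw_data_frequency))
    time_idx, unit = [], []
    # Initialize so the first spike of each unit is never suppressed.
    last_spike = {u: -n_refractory - 1 for u in range(len(rates))}
    # np.nonzero yields events in increasing time order.
    for t, u in zip(*np.nonzero(draws)):
        if t - last_spike[u] > n_refractory:  # outside the refractory period
            time_idx.append(t)
            unit.append(u)
            last_spike[u] = t
    return np.array(time_idx, dtype=int), np.array(unit, dtype=int)
```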

class neural_data_simulator.core.ephys_generator.Spikes(n_channels: int, waveforms: Waveforms, params: Params)[source]

Bases: object

Spike generator with waveforms.

This class generates random spike events according to unit spike rates within a given time interval determined by a number of samples and a known raw data sample rate. It also ensures that spikes on the same unit are separated by at least the refractory period, taking into account the spikes generated in the previous time interval. It applies the corresponding waveform to each SpikeEvent.

class Params(raw_data_frequency: float, n_units_per_channel: int, refractory_time: float)[source]

Bases: Params

Initialization parameters for the Spikes class.

Alias for SpikeTimes.Params.

raw_data_frequency: float

The electrophysiology raw data output sample rate in Hz.

n_units_per_channel: int

The number of units per channel.

refractory_time: float

The refractory time in seconds.

__init__(n_channels: int, waveforms: Waveforms, params: Params)[source]

Initialize Spikes class.

Parameters
  • n_channels – The number of channels. This value together with the configured number of units per channel determines the total number of units for which spikes are generated.

  • waveforms – The Waveforms instance with spike waveform prototypes.

  • params – The SpikeTimes generator parameters.

generate_spikes(rates: ndarray, n_samples: int) SpikeEvents[source]

Generate spikes for the given rates and a given number of samples.

Calls into the SpikeTimes.generate_spikes() method, then applies the waveforms.

Parameters
  • rates – The spike rates array with shape (n_units,). Each element in the array represents the spike rate in spikes per second for the corresponding unit.

  • n_samples – The number of samples to output.

Returns

The generated spikes as SpikeEvents.

class neural_data_simulator.core.ephys_generator.ProcessOutput(continuous_data: ContinuousData, spikes: Spikes, input_: SpikeRateInput, outputs: LSLOutputs, params: Params, health_checker: HealthChecker)[source]

Bases: object

Process that reads spike rates and outputs spiking data.

The process can be started by calling the start() method. The output streams are the raw continuous data stream, the LFP data stream and spike events stream.

class Params(n_units_per_channel: int, lsl_chunk_frequency: float, raw_data_frequency: float, resolution: float)[source]

Bases: object

Initialization parameters for the ProcessOutput class.

n_units_per_channel: int

The number of units in each channel.

lsl_chunk_frequency: float

The frequency at which to stream data to the LSL outlets in Hz.

raw_data_frequency: float

The electrophysiology raw data output sample rate in Hz.

resolution: float

The unit resolution in uV per count.

property lsl_chunk_interval: float

The interval at which to stream data to the LSL outlets.

__init__(n_units_per_channel: int, lsl_chunk_frequency: float, raw_data_frequency: float, resolution: float) None
class LSLOutputs(raw: Optional[LSLOutputDevice] = None, lfp: Optional[LSLOutputDevice] = None, spike_events: Optional[LSLOutputDevice] = None)[source]

Bases: object

Possible LSL output streams.

__init__(raw: Optional[LSLOutputDevice] = None, lfp: Optional[LSLOutputDevice] = None, spike_events: Optional[LSLOutputDevice] = None) None
raw: Optional[LSLOutputDevice] = None

The raw continuous data stream.

lfp: Optional[LSLOutputDevice] = None

The LFP data stream.

spike_events: Optional[LSLOutputDevice] = None

The spike events stream.

__init__(continuous_data: ContinuousData, spikes: Spikes, input_: SpikeRateInput, outputs: LSLOutputs, params: Params, health_checker: HealthChecker)[source]

Initialize the ProcessOutput class.

Parameters
  • continuous_data – The continuous data generator.

  • spikes – The spikes generator.

  • input – The spike rates input.

  • outputs – The LSL output streams.

  • params – The initialization parameters.

  • health_checker – The health monitor.

start()[source]

Start the process.

The process keeps iterating until the stop() method is called. On each execution, spike rates are read from the input and used to generate spike events. For each spike event, the corresponding waveform is selected and combined with random noise to obtain raw continuous data. The raw continuous data is then filtered to obtain the LFP data. Spike events, raw continuous data and LFP data are then streamed via the configured LSL outlets.

stop()[source]

Stop the process.
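One iteration of the start() loop described above can be sketched schematically, with the collaborators (rate input, spike generator, continuous data generator, outputs) replaced by plain callables; this shows only the data flow, not the NDS implementation.

```python
def run_iteration(read_rates, generate_spikes, get_continuous_data, get_lfp_data, send, n_samples):
    """One execution: read rates, generate spikes, synthesize raw and LFP
    data, then stream everything out. Returns False when no rates are available."""
    rates = read_rates()
    if rates is None:
        return False
    spikes = generate_spikes(rates, n_samples)
    raw = get_continuous_data(n_samples, spikes)  # waveforms combined with noise
    lfp = get_lfp_data(raw)                       # filtered and downsampled
    send(spikes, raw, lfp)
    return True
```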

neural_data_simulator.core.filters

Filter implementations for signal processing.

class neural_data_simulator.core.filters.RealTimeFilter(*args, **kwargs)[source]

Bases: Protocol

A protocol for filters operating on chunked data.

A python protocol (PEP-544) works in a similar way to an abstract class. The __init__() method of this protocol should never be called as protocols are not meant to be instantiated. An __init__() method may be defined in a concrete implementation of this protocol if needed.

execute(data: ndarray) ndarray[source]

Perform filtering on the current chunk of data.

Parameters

data – Data that should be filtered.

Returns

Filtered data with the same dimensions as the input.

__init__(*args, **kwargs)
class neural_data_simulator.core.filters.GaussianFilter(*, name: str, window_size: int, std: float, normalization_coeff: float, num_channels: int, enabled: bool = True)[source]

Bases: RealTimeFilter

An implementation of a Gaussian filter.

__init__(*, name: str, window_size: int, std: float, normalization_coeff: float, num_channels: int, enabled: bool = True)[source]

Initialize the GaussianFilter class.

Parameters
  • name – A label that identifies the filter instance.

  • window_size – The number of samples defining the size of the Gaussian window.

  • std – Standard deviation.

  • normalization_coeff – When applying the filter both numerator and denominator are normalized by this value.

  • num_channels – Number of channels.

  • enabled – Whether to apply the filter. If false, the data will be passed through without modification.

execute(data: ndarray) ndarray[source]

Perform filtering on data.

Parameters

data – Data that should be filtered as a two-dimensional array. The first dimension represents samples and the second dimension represents channels.

Returns

Filtered data with the same dimensions as the input.

class neural_data_simulator.core.filters.ButterworthFilter(*, name: str, filter_order: int, critical_frequency: Union[float, Tuple[float, float]], sample_rate: float, num_channels: int, btype: str, enabled: bool = True)[source]

Bases: RealTimeFilter

Generic class for Butterworth filters.

__init__(*, name: str, filter_order: int, critical_frequency: Union[float, Tuple[float, float]], sample_rate: float, num_channels: int, btype: str, enabled: bool = True)[source]

Perform Butterworth filtering.

Parameters
  • name – A label that identifies the filter instance.

  • filter_order – The order of the filter.

  • critical_frequency – Critical frequency in Hz. For lowpass or highpass it is a scalar representing the cutoff frequency. For bandpass it is a tuple of two scalars representing the lower and upper cutoff frequencies.

  • sample_rate – Sample rate in Hz.

  • num_channels – Number of channels.

  • btype – Type of filter: lowpass, highpass, or bandpass.

  • enabled – Whether to apply the filter. If false, the data will be passed through without modification.

execute(data: ndarray) ndarray[source]

Perform filtering on data.

Parameters

data – Data that should be filtered as a two-dimensional array. The first dimension represents samples and the second dimension represents channels.

Returns

Filtered data with the same dimensions as the input.
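For chunked data, a filter of this kind must carry its internal state across execute() calls so that chunk boundaries do not introduce transients. A sketch using scipy's Butterworth design and stateful lfilter follows; it mirrors the documented parameters but is illustrative, not the NDS implementation.

```python
import numpy as np
from scipy import signal


class ChunkedButterworth:
    """A hypothetical chunk-aware Butterworth filter: the filter state (zi)
    is carried between calls so chunked output matches whole-signal output."""

    def __init__(self, filter_order, critical_frequency, sample_rate, num_channels, btype="lowpass"):
        self._b, self._a = signal.butter(
            filter_order, critical_frequency, btype=btype, fs=sample_rate)
        zi = signal.lfilter_zi(self._b, self._a)  # per-channel initial conditions
        self._zi = np.repeat(zi[:, None], num_channels, axis=1)

    def execute(self, data):
        # data: (n_samples, num_channels); returns filtered data, same shape.
        out, self._zi = signal.lfilter(self._b, self._a, data, axis=0, zi=self._zi)
        return out
```

Filtering a signal in two chunks produces the same result as filtering it in one call, which is the property a RealTimeFilter needs.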

class neural_data_simulator.core.filters.HighpassFilter(*, name: str, filter_order: int = 2, critical_frequency: float = 0.5, sample_rate: float, num_channels: int, enabled: bool = True)[source]

Bases: ButterworthFilter

Perform highpass filtering.

This class is based around the scipy Butterworth digital filter implementation and uses the same language as the underlying scipy package.

__init__(*, name: str, filter_order: int = 2, critical_frequency: float = 0.5, sample_rate: float, num_channels: int, enabled: bool = True)[source]

Create a new instance.

Parameters
  • name – A label that identifies the filter instance.

  • filter_order – The order of the filter.

  • critical_frequency – Critical frequency in Hz.

  • sample_rate – Sample rate in Hz.

  • num_channels – Number of channels.

  • enabled – Whether to apply the filter. If false, the data will be passed through without modification.

class neural_data_simulator.core.filters.LowpassFilter(*, name: str, filter_order: int = 2, critical_frequency: float = 50, sample_rate: float, num_channels: int, enabled: bool = True)[source]

Bases: ButterworthFilter

Perform lowpass filtering.

This class is based around the scipy Butterworth digital filter implementation and uses the same language as the underlying scipy package.

__init__(*, name: str, filter_order: int = 2, critical_frequency: float = 50, sample_rate: float, num_channels: int, enabled: bool = True)[source]

Perform lowpass filtering.

Parameters
  • name – A label that identifies the filter instance.

  • filter_order – The order of the filter.

  • critical_frequency – Critical frequency in Hz.

  • sample_rate – Sample rate in Hz.

  • num_channels – Number of channels.

  • enabled – Whether to apply the filter. If false, the data will be passed through without modification.

class neural_data_simulator.core.filters.BandpassFilter(*, name: str, filter_order: int = 2, critical_frequencies: Tuple[float, float], sample_rate: float, num_channels: int, enabled: bool = True)[source]

Bases: ButterworthFilter

Perform bandpass filtering.

This class is based around the scipy Butterworth digital filter implementation and uses the same language as the underlying scipy package.

__init__(*, name: str, filter_order: int = 2, critical_frequencies: Tuple[float, float], sample_rate: float, num_channels: int, enabled: bool = True)[source]

Perform bandpass filtering.

Parameters
  • name – A label that identifies the filter instance.

  • filter_order – The order of the filter.

  • critical_frequencies – Tuple of low and high critical frequencies in Hz.

  • sample_rate – Sample rate in Hz.

  • num_channels – Number of channels.

  • enabled – Whether to apply the filter. If false, the data will be passed through without modification.

neural_data_simulator.core.health_checker

Health checker for NDS components.

class neural_data_simulator.core.health_checker.HealthChecker(queue_size: int, optimal_num_samples_per_iteration: int)[source]

Bases: object

Class to monitor the processing time consistency.

It prints messages to the console when the data processing time is increasing or is taking longer than expected.

__init__(queue_size: int, optimal_num_samples_per_iteration: int) None[source]

Initialize the HealthChecker class.

Parameters
  • queue_size – size of the queue to store the number of samples processed in each iteration. The queue acts as a sliding window over the last n iterations.

  • optimal_num_samples_per_iteration – the number of samples that should be processed in each iteration.

record_processed_samples(n_samples: int)[source]

Record the samples processed and check the health.

Parameters

n_samples – number of samples processed in the current iteration.

neural_data_simulator.core.inputs

A collection of inputs that can be used by NDS.

class neural_data_simulator.core.inputs.Input[source]

Bases: ABC

Represents an input from which data can be consumed.

This can be an interface for a joystick, a behavior data generator, a data streamer that loads data from disk, etc. Each read should return all newly available data since the last read call.

abstract read() Samples[source]

Read available data.

abstract connect() None[source]

Connect to input.

disconnect() None[source]

Disconnect from input. The default implementation does nothing.

class neural_data_simulator.core.inputs.SamplesInput(input_samples: Samples)[source]

Bases: Input

An input object based on neural_data_simulator.core.samples.Samples.

The underlying samples dataclass will have its timestamps modified to be in reference to when the first read was made from this class, simulating the appearance of data being collected in real-time. Alternatively, the function set_reference_time_to_now can be called prior to the first read of the data to use that as a reference time.

A timer is synced between the reference time and the first timestamp in the input samples. Any calls to the read function will calculate the current time in reference to the synced timer and return the appropriate samples.

__init__(input_samples: Samples) None[source]

Initialize the SamplesInput class.

Parameters

input_samples – Dataclass containing timestamps and behavior data.

set_reference_time_to_now()[source]

Set current time as starting time for data stream.

read() Samples[source]

Get new samples from the time of last read.

On the first call to read, samples are returned starting from the time of the call to set_reference_time_to_now. If set_reference_time_to_now was not previously called, it is called automatically.

Returns

neural_data_simulator.core.samples.Samples dataclass with timestamps and data available since last read call.

connect() None[source]

No action required during connect for this class.
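The replay logic described above can be sketched as follows: timestamps are re-referenced to the wall-clock time of the first read, and each read() returns the samples that have "occurred" since the previous call. The class below is illustrative only and returns plain arrays rather than Samples.

```python
import time

import numpy as np


class ReplayInput:
    """A hypothetical SamplesInput-style replayer for pre-recorded data."""

    def __init__(self, timestamps, data):
        self._timestamps = np.asarray(timestamps, dtype=float)
        self._data = np.asarray(data)
        self._reference = None
        self._cursor = 0

    def set_reference_time_to_now(self):
        # Sync the timer: first timestamp maps to the current wall-clock time.
        self._reference = time.monotonic() - self._timestamps[0]

    def read(self):
        if self._reference is None:
            self.set_reference_time_to_now()
        now = time.monotonic() - self._reference
        # Return all samples whose (re-referenced) timestamps have elapsed.
        end = np.searchsorted(self._timestamps, now, side="right")
        chunk = (self._timestamps[self._cursor:end], self._data[self._cursor:end])
        self._cursor = end
        return chunk
```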

class neural_data_simulator.core.inputs.StreamInfo(lsl_stream_info: dataclasses.InitVar[StreamInfo])[source]

Bases: object

Selected advertised properties of an LSL stream.

name: str

Name of the LSL stream.

sample_rate: float

Advertised sample rate of the LSL stream.

channel_count: int

Number of channels in the LSL stream.

lsl_stream_info: dataclasses.InitVar[StreamInfo]

pylsl stream info object.

__init__(lsl_stream_info: dataclasses.InitVar[StreamInfo]) None
class neural_data_simulator.core.inputs.LSLInput(stream_name: str, connection_timeout: float = 60.0, resolve_streams_wait_time: float = 1.0)[source]

Bases: Input

Represents an LSL Inlet stream for behavior data.

__init__(stream_name: str, connection_timeout: float = 60.0, resolve_streams_wait_time: float = 1.0)[source]

Initialize LSLInput class.

Parameters
  • stream_name – Name of the LSL stream to retrieve data from.

  • connection_timeout – Maximum time for attempting a connection to an LSL input stream.

  • resolve_streams_wait_time – Maximum waiting time to get the list of available streams. Should be bigger than 0.5 to ensure all streams are returned.

get_info() StreamInfo[source]

Get information about the LSL stream.

If the stream is not connected, it will try to resolve the stream and return the information.

Returns

LSL stream properties.

Raises

ValueError – If the stream is not found.

read() Samples[source]

Read available data from the inlet as Samples.

Returns

neural_data_simulator.core.samples.Samples dataclass with timestamps and data read from the LSL StreamInlet. If no data is available, an empty Samples is returned.

Raises
ValueError – If the LSL StreamInlet is not connected; connect should be called before read.

set_connection_timeout(timeout: float) None[source]

Set the maximum time that the inlet searches for the desired LSL stream.

Parameters

timeout – Maximum time to wait in seconds.

Raises

ValueError – If the timeout is less than or equal to 0.

connect()[source]

Connect to the LSL Inlet stream.

disconnect()[source]

Disconnect from the LSL Inlet stream.

neural_data_simulator.core.models

Classes that implement a model for encoding neural data from behavior data.

class neural_data_simulator.core.models.EncoderModel(*args, **kwargs)[source]

Bases: Protocol

Protocol for an Encoder model.

Classes that conform to this protocol can be used by the neural_data_simulator.core.encoder.Encoder to convert behavioral input data into spiking rate data.

The Encoder processes data in chunks represented as neural_data_simulator.core.samples.Samples. One chunk may contain several behavioral data points (n_samples) across multiple axes (n_axes). The Encoder calls the EncoderModel’s encode() method for each chunk in order to transform the behavioral data into spiking rates (n_samples) across multiple units (n_units).

A python protocol (PEP-544) works in a similar way to an abstract class. The __init__() method of this protocol should never be called as protocols are not meant to be instantiated. An __init__() method may be defined in a concrete implementation of this protocol if needed.

encode(data: Samples) Samples[source]

Encode behavior into spiking rates.

Parameters

data – Behavioral data as neural_data_simulator.core.samples.Samples. For example, in case of modeling velocities in a horizontal and vertical direction (2 axes), the data is a 2D array with shape (n_samples, 2).

Returns

Spiking rates as neural_data_simulator.core.samples.Samples. The spiking rates are represented as a 2D array with shape (n_samples, n_units).

__init__(*args, **kwargs)
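A concrete EncoderModel only needs an encode() method with this signature. The sketch below implements a hypothetical linear-gain model; a plain-list stand-in replaces the real Samples dataclass (which wraps NumPy arrays) so the example is self-contained:

```python
from dataclasses import dataclass
from typing import List

# Stand-in for neural_data_simulator.core.samples.Samples (the real
# class wraps NumPy arrays); plain lists keep this sketch runnable.
@dataclass
class Samples:
    timestamps: List[float]
    data: List[List[float]]  # shape (n_samples, n_axes)

class GainModel:
    """Hypothetical EncoderModel: each unit's rate is a scaled sum of axes."""

    def __init__(self, gains: List[float]):
        self.gains = gains  # one gain per output unit

    def encode(self, data: Samples) -> Samples:
        # Map each behavioral sample (n_axes values) to n_units rates.
        rates = [
            [g * sum(sample) for g in self.gains]
            for sample in data.data
        ]
        return Samples(timestamps=data.timestamps, data=rates)

model = GainModel(gains=[1.0, 2.0])
behavior = Samples(timestamps=[0.0, 0.02], data=[[0.5, 0.5], [1.0, -1.0]])
rates = model.encode(behavior)  # (2 samples, 2 units)
```

Because EncoderModel is a protocol, GainModel conforms simply by providing a compatible encode(); no inheritance is required.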

neural_data_simulator.core.outputs

A collection of outputs that can be used by NDS.

class neural_data_simulator.core.outputs.Output[source]

Bases: ABC

Represents an abstract output that can be used to send samples.

abstract property channel_count: int

Return the number of channels.

wait_for_consumers(timeout: int) bool[source]

Wait for consumers to connect until the timeout expires.

Parameters

timeout – Timeout in seconds.

Returns

True if consumers are connected, False otherwise.

has_consumers() bool[source]

Return whether there are consumers connected to the output.

abstract connect() None[source]

Connect to output.

disconnect() None[source]

Disconnect from output. The default implementation does nothing.

send(samples: Samples) Samples[source]

Push samples to output and return the data unchanged.

Parameters

samples – Samples to output.

Returns

The input samples unchanged.
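A concrete output implements channel_count and connect() and inherits the rest. The sketch below mirrors the documented surface of the ABC with a local stand-in (so it runs without the package) and adds a hypothetical in-memory output, e.g. for tests:

```python
from abc import ABC, abstractmethod
from typing import Any, List

# Local mirror of the documented Output surface, for illustration only.
class Output(ABC):
    @property
    @abstractmethod
    def channel_count(self) -> int: ...

    @abstractmethod
    def connect(self) -> None: ...

    def disconnect(self) -> None:
        pass  # default implementation does nothing

    def send(self, samples: Any) -> Any:
        return samples  # push to device, return data unchanged

class ListOutput(Output):
    """Hypothetical output that buffers samples in memory."""

    def __init__(self, channel_count: int):
        self._channel_count = channel_count
        self.buffer: List[Any] = []

    @property
    def channel_count(self) -> int:
        return self._channel_count

    def connect(self) -> None:
        self.buffer.clear()  # nothing to open; just reset the buffer

    def send(self, samples: Any) -> Any:
        self.buffer.append(samples)
        return samples  # keep the contract: input returned unchanged

out = ListOutput(channel_count=4)
out.connect()
echoed = out.send([0.1, 0.2, 0.3, 0.4])
```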

class neural_data_simulator.core.outputs.ConsoleOutput(channel_count: int)[source]

Bases: Output

Represents an output device that prints to the terminal.

__init__(channel_count: int)[source]

Initialize the ConsoleOutput class.

property channel_count: int

The number of channels.

Returns

Number of channels of the output.

connect() None[source]

Connect to the device within a context.

The default implementation does nothing.

class neural_data_simulator.core.outputs.FileOutput(channel_count: int, file_name: str = 'output.csv')[source]

Bases: Output

Represents an output device that writes to a file.

__init__(channel_count: int, file_name: str = 'output.csv')[source]

Initialize FileOutput class.

Parameters
  • channel_count – Number of channels for this output.

  • file_name – File path to write the samples via the send method. Defaults to “output.csv”.

property channel_count: int

The number of channels.

Returns

Number of channels of the output.

connect() None[source]

Open the output file.

disconnect() None[source]

Close the output file.

class neural_data_simulator.core.outputs.StreamConfig(name: str, type: str, source_id: str, acquisition: dict, sample_rate: Union[float, Callable[[], float]], channel_format: str, channel_labels: List[str])[source]

Bases: object

Parameters of an LSL stream.

name: str

LSL stream name.

type: str

LSL stream type.

source_id: str

LSL source id.

acquisition: dict

Information regarding the acquisition device.

sample_rate: Union[float, Callable[[], float]]

Sampling rate in Hz.

channel_format: str

Stream data type, for example float32 or int32.

channel_labels: List[str]

Channel labels. The number of labels must match the number of channels.
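For illustration, a StreamConfig for a two-channel 50 Hz float32 stream could be built as follows; the field values are hypothetical, and the dataclass is replicated locally so the sketch is self-contained:

```python
from dataclasses import dataclass
from typing import Callable, List, Union

# Local replica of the documented StreamConfig fields, so the sketch
# runs without the package installed.
@dataclass
class StreamConfig:
    name: str
    type: str
    source_id: str
    acquisition: dict
    sample_rate: Union[float, Callable[[], float]]
    channel_format: str
    channel_labels: List[str]

config = StreamConfig(
    name="NDS-SpikeRates",               # hypothetical stream name
    type="SpikeRates",
    source_id="nds-example-01",
    acquisition={"manufacturer": "NDS"},
    sample_rate=50.0,                    # Hz; a callable is also accepted
    channel_format="float32",
    channel_labels=["unit_0", "unit_1"],
)

# The label count must match the channel count.
assert len(config.channel_labels) == 2
```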

classmethod from_lsl_settings(lsl_settings: LSLOutputModel, sampling_rate: Union[float, Callable], n_channels: int)[source]

Create a StreamConfig from an LSLOutputModel.

Parameters
  • lsl_settings – neural_data_simulator.core.settings.LSLOutputModel instance.

  • sampling_rate – Sampling rate in Hz, or a callable that returns it.

  • n_channels – Number of channels.

__init__(name: str, type: str, source_id: str, acquisition: dict, sample_rate: Union[float, Callable[[], float]], channel_format: str, channel_labels: List[str]) None

class neural_data_simulator.core.outputs.LSLOutputDevice(stream_config: StreamConfig)[source]

Bases: Output

An output device that can be used to stream data via LSL.

__init__(stream_config: StreamConfig)[source]

Initialize the LSL Output Device from a StreamConfig.

Parameters

stream_config – A neural_data_simulator.core.outputs.StreamConfig instance.

classmethod from_lsl_settings(lsl_settings: LSLOutputModel, sampling_rate: Union[float, Callable], n_channels: int)[source]

Initialize from neural_data_simulator.core.settings.LSLOutputModel.

Parameters
  • lsl_settings – neural_data_simulator.core.settings.LSLOutputModel instance.

  • sampling_rate – Sampling rate in Hz, or a callable that returns it.

  • n_channels – Number of channels.

property channel_count: int

The number of channels.

Returns

Number of channels of the output.

property sample_rate: Union[float, Callable[[], float]]

Sample rate of the stream.

Returns

The sample rate in Hz.

property name: str

The name of the stream.

Returns

The configured name of the output stream.

send_as_chunk(data: ndarray, timestamp: Optional[float] = None)[source]

Send a list of data points to the LSL outlet.

Parameters
  • data – An array of data points.

  • timestamp – An optional timestamp corresponding to the data points.

Raises
  • ValueError – LSL StreamOutlet is not connected. connect should be called before send.

  • ValueError – There was nothing to send because the data array is empty.

send_as_sample(data: ndarray, timestamp: Optional[float] = None)[source]

Send a single sample with the corresponding timestamp.

A sample consisting of a data point per channel will be pushed to the LSL outlet together with an optional timestamp.

Parameters
  • data – A single data point as an array of 1 value per channel.

  • timestamp – An optional timestamp corresponding to the data point.

Raises
  • ValueError – LSL StreamOutlet is not connected. connect should be called before send.

  • ValueError – There was nothing to send because the data array is empty.

connect()[source]

Connect to the LSL stream.

disconnect()[source]

Forget the connection to the LSL stream.

has_consumers() bool[source]

Check if there are consumers connected to the stream.

Returns

True if there are consumers, False if there aren’t any.

wait_for_consumers(timeout: int) bool[source]

Wait for consumers to connect until the timeout expires.

Parameters

timeout – Timeout in seconds.

Returns

True if consumers are connected, False otherwise.

neural_data_simulator.core.runner

Runner that uses a timer object to run the simulation.

class neural_data_simulator.core.runner.Encoder(*args, **kwargs)[source]

Bases: Protocol

Protocol of an Encoder class.

A python protocol (PEP-544) works in a similar way to an abstract class. The __init__() method of this protocol should never be called as protocols are not meant to be instantiated. An __init__() method may be defined in a concrete implementation of this protocol if needed.

iterate() None[source]

Iterate over steps of a simulation.

connect() Iterator[None][source]

Connect to an encoder.

__init__(*args, **kwargs)

class neural_data_simulator.core.runner.Timer(*args, **kwargs)[source]

Bases: Protocol

Protocol for a Timer class.

A python protocol (PEP-544) works in a similar way to an abstract class. The __init__() method of this protocol should never be called as protocols are not meant to be instantiated. An __init__() method may be defined in a concrete implementation of this protocol if needed.

start() None[source]

Start timer.

wait() None[source]

Wait the appropriate amount of time.

total_elapsed_time() float[source]

Get total time since start.

Returns

Total time since start in seconds.

__init__(*args, **kwargs)

neural_data_simulator.core.runner.run(encoder: Encoder, timer: Timer, total_seconds_of_simulation: Optional[int] = None)[source]

Loop over the provided encoder using a timer.

Connects to all devices in the encoder before starting the timer. Using the timer, the loop executes the encoder once per timer period. CTRL+C can be used to stop the loop at any time, after which all devices will be disconnected and the function will return.

Parameters
  • encoder – Encoder object to be executed periodically. It can be connected to through its context-manager connect method and iterated using the iterate method.

  • timer – Timer object that can be started with start, waits (sleeps) for the necessary time using wait, and returns the total elapsed time since start through the total_elapsed_time method.

  • total_seconds_of_simulation – Total time to run the simulation (in seconds), or None if it should run indefinitely (until CTRL+C is pressed). Defaults to None.
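The interaction between run(), the Encoder protocol, and the Timer protocol can be sketched with minimal stand-ins (the real run() additionally handles CTRL+C and disconnects devices on exit):

```python
from contextlib import contextmanager
from typing import Iterator, Optional

# Minimal stand-in satisfying the Encoder protocol.
class CountingEncoder:
    def __init__(self) -> None:
        self.iterations = 0

    @contextmanager
    def connect(self) -> Iterator[None]:
        yield  # devices would be connected/disconnected here

    def iterate(self) -> None:
        self.iterations += 1

class FakeTimer:
    """Hypothetical timer that advances virtual time instead of sleeping."""

    def __init__(self, period: float) -> None:
        self.period = period
        self.elapsed = 0.0

    def start(self) -> None:
        self.elapsed = 0.0

    def wait(self) -> None:
        self.elapsed += self.period  # a real timer would sleep here

    def total_elapsed_time(self) -> float:
        return self.elapsed

def run(encoder, timer, total_seconds_of_simulation: Optional[float] = None):
    # Same shape as the documented loop: connect, start, iterate + wait.
    with encoder.connect():
        timer.start()
        while (total_seconds_of_simulation is None
               or timer.total_elapsed_time() < total_seconds_of_simulation):
            encoder.iterate()
            timer.wait()

encoder = CountingEncoder()
run(encoder, FakeTimer(period=1.0), total_seconds_of_simulation=5.0)
# The encoder iterates once per timer period: 5 iterations here.
```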

neural_data_simulator.core.samples

Utilities for handling data in the desired NDS format.

class neural_data_simulator.core.samples.Samples(timestamps: ndarray, data: ndarray)[source]

Bases: object

Unified collection of timestamps and data points.

timestamps: ndarray

Timestamps for each data sample. Each row corresponds to a data sample.

data: ndarray

Array of data samples. Each row corresponds to a data sample, while each column corresponds to a dimension of the data sample.

property empty: bool

Check if the samples are empty.

Returns

True if there are no data points.

classmethod empty_samples() Samples[source]

Create an empty samples instance.

Returns

Samples instance with empty timestamps and data arrays.

classmethod load_from_npz(filepath: str, timestamps_array_name: str = 'timestamps', data_array_name: str = 'data') Samples[source]

Load the timestamps and data from the file into a new samples instance.

Parameters
  • filepath – Path to the .npz file containing the timestamps and data.

  • timestamps_array_name – Name of the timestamps array defined when creating the file (see the np.savez documentation for details). The loaded array should have shape (Nx1), where N is the number of samples. Defaults to “timestamps”.

  • data_array_name – Name of the data array defined when creating the file (see the np.savez documentation for details). The loaded array should have shape (NxM), where N is the number of samples and M is the number of channels. Defaults to “data”.

__init__(timestamps: ndarray, data: ndarray) None
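A file in the layout load_from_npz expects can be produced with np.savez; in this sketch the 50 Hz rate, array shapes, and file name are arbitrary:

```python
import os
import tempfile

import numpy as np

# Write an .npz file in the layout load_from_npz expects: an N-element
# timestamps array and an (N, M) data array under configurable names.
n_samples, n_channels = 100, 2
timestamps = np.arange(n_samples) / 50.0  # 50 Hz, hypothetical
data = np.random.default_rng(0).standard_normal((n_samples, n_channels))

path = os.path.join(tempfile.mkdtemp(), "behavior.npz")
np.savez(path, timestamps=timestamps, data=data)

# Samples.load_from_npz(path) would now return a Samples instance
# holding these arrays; here we only verify the file layout.
loaded = np.load(path)
```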

neural_data_simulator.core.settings

Models for parsing and validating the contents of settings.yaml.

class neural_data_simulator.core.settings.LogLevel(value)[source]

Bases: str, Enum

Possible log levels.

class neural_data_simulator.core.settings.EncoderEndpointType(value)[source]

Bases: str, Enum

Possible types for the encoder input or output.

FILE = 'file'

LSL = 'LSL'

class neural_data_simulator.core.settings.EphysGeneratorEndpointType(value)[source]

Bases: str, Enum

Possible types of input for the ephys generator.

TESTING = 'testing'

LSL = 'LSL'

class neural_data_simulator.core.settings.EncoderModelType(value)[source]

Bases: str, Enum

Possible types of input for the encoder model.

PLUGIN = 'plugin'

VELOCITY_TUNING_CURVES = 'velocity_tuning_curves'

class neural_data_simulator.core.settings.LSLChannelFormatType(value)[source]

Bases: str, Enum

Possible values for the LSL channel format.

class neural_data_simulator.core.settings.TimerModel(*, max_cpu_buffer: float, loop_time: float)[source]

Bases: BaseModel

Settings for the timer implementation.

max_cpu_buffer: float

loop_time: float

class neural_data_simulator.core.settings.LSLInputModel(*, connection_timeout: float, stream_name: str)[source]

Bases: BaseModel

Settings for all LSL inlets.

connection_timeout: float

stream_name: str

class neural_data_simulator.core.settings.LSLOutputModel(*, channel_format: LSLChannelFormatType, stream_name: str, stream_type: str, source_id: str, instrument: _Instrument, channel_labels: Optional[list[str]] = None)[source]

Bases: BaseModel

Settings for all LSL outlets.

channel_format: LSLChannelFormatType

stream_name: str

stream_type: str

source_id: str

instrument: _Instrument

channel_labels: Optional[list[str]]

class neural_data_simulator.core.settings.EncoderSettings(*, model: str, preprocessor: Optional[str] = None, postprocessor: Optional[str] = None, model_weights_file: Optional[str] = None, input: Input, output: Output)[source]

Bases: BaseModel

Settings for the encoder.

class Input(*, type: EncoderEndpointType, file: Optional[File] = None, lsl: Optional[LSLInputModel] = None)[source]

Bases: BaseModel

Settings for the encoder input.

class File(*, path: str, sampling_rate: float, timestamps_array_name: str, data_array_name: str)[source]

Bases: BaseModel

Settings for the encoder input type file.

path: str

sampling_rate: float

timestamps_array_name: str

data_array_name: str

type: EncoderEndpointType

file: Optional[File]

lsl: Optional[LSLInputModel]

class Output(*, n_channels: int, type: EncoderEndpointType, file: Optional[str] = None, lsl: Optional[LSLOutputModel] = None)[source]

Bases: BaseModel

Settings for the encoder output.

n_channels: int

type: EncoderEndpointType

file: Optional[str]

lsl: Optional[LSLOutputModel]

model: str

preprocessor: Optional[str]

postprocessor: Optional[str]

model_weights_file: Optional[str]

input: Input

output: Output
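Assembled from the field definitions above, an encoder section of settings.yaml could look like the following; every path and value here is hypothetical, so consult the example config shipped with NDS for authoritative settings:

```yaml
encoder:
  model: plugins/my_model.py         # hypothetical plugin path
  input:
    type: file
    file:
      path: sample_data/session.npz  # hypothetical
      sampling_rate: 50.0
      timestamps_array_name: timestamps
      data_array_name: data
  output:
    type: file
    n_channels: 190                  # hypothetical
    file: encoder_output.csv         # hypothetical
```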
class neural_data_simulator.core.settings.EphysGeneratorSettings(*, waveforms: Waveforms, input: Input, output: Output, noise: Noise, resolution: float, random_seed: Optional[int] = None, raw_data_frequency: float, n_units_per_channel: int, refractory_time: float, lsl_chunk_frequency: float)[source]

Bases: BaseModel

Settings for the spike generator.

class Waveforms(*, n_samples: int, prototypes: Dict[int, JsonWrapperValue], unit_prototype_mapping: Dict[str, int])[source]

Bases: BaseModel

Settings for the spike waveform prototypes.

n_samples: int

prototypes: Dict[int, JsonWrapperValue]

unit_prototype_mapping: Dict[str, int]

class Input(*, type: EphysGeneratorEndpointType, lsl: Optional[LSLInputModel] = None, testing: Optional[Testing] = None)[source]

Bases: BaseModel

Settings for the ephys generator input.

class Testing(*, n_channels: int)[source]

Bases: BaseModel

Settings for the ephys generator input type testing.

n_channels: int

type: EphysGeneratorEndpointType

lsl: Optional[LSLInputModel]

testing: Optional[Testing]

class Output(*, raw: Raw, lfp: LFP, spike_events: SpikeEvents)[source]

Bases: BaseModel

Settings for the ephys generator output.

class Raw(*, lsl: LSLOutputModel)[source]

Bases: BaseModel

Settings for the ephys generator output type raw.

lsl: LSLOutputModel

class LFP(*, data_frequency: float, filter_cutoff: float, filter_order: int, lsl: LSLOutputModel)[source]

Bases: BaseModel

Settings for the ephys generator output type LFP.

data_frequency: float

filter_cutoff: float

filter_order: int

lsl: LSLOutputModel

class SpikeEvents(*, lsl: LSLOutputModel)[source]

Bases: BaseModel

Settings for the ephys generator output type spike events.

lsl: LSLOutputModel

raw: Raw

lfp: LFP

spike_events: SpikeEvents

class Noise(*, beta: float, standard_deviation: float, fmin: float, samples: int)[source]

Bases: BaseModel

Settings for the ephys generator noise.

beta: float

standard_deviation: float

fmin: float

samples: int

waveforms: Waveforms

input: Input

output: Output

noise: Noise

resolution: float

random_seed: Optional[int]

raw_data_frequency: float

n_units_per_channel: int

refractory_time: float

lsl_chunk_frequency: float

class neural_data_simulator.core.settings.Settings(*, version: SemVer, log_level: LogLevel, timer: TimerModel, encoder: EncoderSettings, ephys_generator: EphysGeneratorSettings)[source]

Bases: VersionedYamlModel

All settings for the NDS main package.

log_level: LogLevel

timer: TimerModel

encoder: EncoderSettings

ephys_generator: EphysGeneratorSettings

class Config[source]

Bases: object

Pydantic configuration.

extra = 'forbid'

neural_data_simulator.core.timing

Implement an accurate timer class that works across platforms.

class neural_data_simulator.core.timing.Timer(period: float, max_cpu_buffer: float = 0.005)[source]

Bases: object

A simple timer class.

It provides a custom implementation of Python’s sleep by adding a CPU-bound routine that busy-waits for the last few milliseconds before the next loop execution.

__init__(period: float, max_cpu_buffer: float = 0.005)[source]

Initialize timer class.

Parameters
  • period – Expected time between returns from the wait function (in seconds).

  • max_cpu_buffer – Maximum time to stay in cpu bound loop (i.e. a while loop that does nothing) waiting for correct time to return from a wait call (in seconds). Defaults to 0.005.

start() None[source]

Start the timer.

wait() None[source]

Wait until the end of the next timer loop.

It uses very small sleep intervals to avoid jitter caused by the time.sleep() function.

total_elapsed_time() float[source]

Get total time since the start function call.

Returns

Elapsed time (in seconds).
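The hybrid strategy described above, sleeping through most of the period and busy-waiting the final max_cpu_buffer seconds, can be sketched as (a self-contained illustration, not the NDS implementation):

```python
import time

def hybrid_wait(deadline: float, max_cpu_buffer: float = 0.005) -> None:
    """Sleep until close to `deadline`, then busy-wait the remainder."""
    remaining = deadline - time.perf_counter()
    if remaining > max_cpu_buffer:
        # time.sleep() handles the bulk of the interval cheaply...
        time.sleep(remaining - max_cpu_buffer)
    # ...and a CPU-bound loop absorbs the final few milliseconds,
    # trading CPU time for a more precise wake-up.
    while time.perf_counter() < deadline:
        pass

start = time.perf_counter()
hybrid_wait(start + 0.02)  # one 20 ms period
elapsed = time.perf_counter() - start
```

The max_cpu_buffer trade-off is visible here: a larger buffer burns more CPU but tightens the return time; a smaller one relies more on the OS scheduler.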

neural_data_simulator.core.timing.get_timer(loop_time: float = 0.02, max_cpu_buffer: float = 0.005) Timer[source]

Get timer object.

Parameters
  • loop_time – Expected time between returns from the wait function (in seconds). Defaults to 0.02.

  • max_cpu_buffer – Maximum time to stay in cpu bound loop (i.e. a while loop that does nothing) waiting for correct time to return from a wait call (in seconds). Defaults to 0.005.

Returns

An instance of the neural_data_simulator.core.timing.Timer class based on the input parameters.