neural_data_simulator.tasks.center_out_reach

Center-out reaching task.

During the task, the GUI keeps track of two cursors: the real cursor, which is controlled by the user, and the decoded cursor, which is controlled by the output of the decoder. The task consists of a sequence of trials. During each trial, the user has to make the decoded cursor reach the target. The trial ends when the decoded cursor hovers over the target for a configurable amount of time. The trials alternate between a “reaching out” trial (from the center outwards) and a “back to center” trial (from the current position back to the center). The user can reset the cursor positions to the center of the screen at any time by pressing the space bar. The real cursor can be toggled on and off by pressing the ‘c’ key. The task can be stopped by pressing the ‘ESC’ key.

At the end of the task, the cursor velocities and trajectories are plotted.

neural_data_simulator.tasks.center_out_reach.buttons

A button as a Sprite. Pressing the button will execute the associated action.

class neural_data_simulator.tasks.center_out_reach.buttons.Button(text: str, color: str, font: Font, size: Tuple[int, int], xy: Tuple[int, int], action=None)[source]

Bases: Sprite

A button object that can be drawn on the screen.

It consists of a rectangle with a text label in the center.

__init__(text: str, color: str, font: Font, size: Tuple[int, int], xy: Tuple[int, int], action=None)[source]

Initialize the Button class.

Parameters
  • text – The text to display on the button.

  • color – The color of the button.

  • font – The font to use for the text.

  • size – The size of the button.

  • xy – The position of the button.

  • action – An optional action to execute when the button is pressed.

property is_mouse_over: bool

Check if the mouse cursor is hovering over the button.

change_color(color)[source]

Set a new color for the button.

Parameters

color – The new button color.

press()[source]

Call the action associated with the button.
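
The following is a minimal sketch of constructing and pressing a Button. It assumes pygame has been initialized with a display; the “Start” label, geometry, and lambda action are purely illustrative.

import pygame

from neural_data_simulator.tasks.center_out_reach.buttons import Button

pygame.init()
pygame.display.set_mode((800, 600))  # a display is assumed for rendering and mouse queries
font = pygame.font.SysFont("arial", 24)

# Hypothetical "Start" button placed near the center of the window.
start_button = Button(
    text="Start",
    color="gray",
    font=font,
    size=(120, 40),
    xy=(400, 300),
    action=lambda: print("start pressed"),
)

if start_button.is_mouse_over:
    start_button.press()  # executes the lambda passed as the action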

neural_data_simulator.tasks.center_out_reach.input_events

Module for handling events such as key presses.

class neural_data_simulator.tasks.center_out_reach.input_events.InputEvent(value)[source]

Bases: Enum

An enumeration of the possible input events.

NONE = 0
EXIT = 1
RESET = 2
TOGGLE_CURSOR = 3
CLEAR_METRICS = 4
MOUSE_BUTTON_PRESSED = 5
class neural_data_simulator.tasks.center_out_reach.input_events.InputHandler[source]

Bases: object

Listens for input events and dispatches them to the registered handlers.

__init__()[source]

Create a new instance.

property input_device_name

Get the name of the input device.

set_handler_for_event(event: InputEvent, handler: Callable)[source]

Set a function as a handler for a specific input event.

get_cursor_relative_position() Tuple[int, int][source]

Get the relative position of the joystick or mouse cursor.

The position is relative to the position at the time this function was last called. If a joystick was detected at the start of the GUI, the joystick position is returned. Otherwise, the mouse position is returned.

Returns

The relative position of the cursor as a tuple.

poll()[source]

Poll for input events and call the appropriate handler.

This method should be called once per iteration of the main loop.
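
A sketch of how the handler might be wired into a main loop. It assumes pygame has been initialized with a display, and it assumes handlers are zero-argument callables, which is not stated by the API above.

import pygame

from neural_data_simulator.tasks.center_out_reach.input_events import InputEvent, InputHandler

pygame.init()
pygame.display.set_mode((800, 600))  # a display is assumed for mouse/joystick polling

input_handler = InputHandler()
# Assumption: handlers are simple zero-argument callables.
input_handler.set_handler_for_event(InputEvent.EXIT, lambda: print("exit requested"))
input_handler.set_handler_for_event(InputEvent.RESET, lambda: print("reset cursors"))

# Inside the main loop: poll once per iteration, then read the cursor delta.
input_handler.poll()
dx, dy = input_handler.get_cursor_relative_position()
print(f"cursor moved by ({dx}, {dy}) since the last call")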

neural_data_simulator.tasks.center_out_reach.joystick

Module for handling joysticks and gamepads.

class neural_data_simulator.tasks.center_out_reach.joystick.JoystickInput[source]

Bases: object

Captures joystick movement and converts it to relative positions.

__init__()[source]

Create a new instance.

property relative_position: Optional[Tuple[int, int]]

Get joystick position relative to the last event poll.

Returns

The relative position of the joystick as a tuple.

process_event(event: Event)[source]

Handle input event.

neural_data_simulator.tasks.center_out_reach.metrics

Collect and plot velocities resulting from running the task.

class neural_data_simulator.tasks.center_out_reach.metrics.MetricsCollector(window_rect: Tuple[int, int], target_size: float, unit_converter: PixelsToMetersConverter, actual_cursor_color, decoded_cursor_color)[source]

Bases: object

Collect and plot velocities resulting from running the task.

__init__(window_rect: Tuple[int, int], target_size: float, unit_converter: PixelsToMetersConverter, actual_cursor_color, decoded_cursor_color)[source]

Create a new instance.

Parameters
  • window_rect – The size of the task window.

  • target_size – The radius of the target in meters.

  • unit_converter – The unit converter to use for converting from meters to pixels.

  • actual_cursor_color – The color to use for plotting the actual cursor.

  • decoded_cursor_color – The color to use for plotting the decoded cursor.

clear_data()[source]

Remove all recorded data so far.

record_decoded_velocities(decoded_velocities, timestamps)[source]

Record decoded velocities.

Parameters
  • decoded_velocities – List of decoded velocities.

  • timestamps – The timestamps corresponding to the decoded velocities.

record_actual_velocities(actual_velocities, timestamps)[source]

Record actual velocities.

Parameters
  • actual_velocities – List of actual velocities.

  • timestamps – The timestamps corresponding to the actual velocities.

record_cursor_positions(trial_count, actual_position, decoded_position)[source]

Record cursor positions.

Parameters
  • trial_count – The number of the current trial.

  • actual_position – The position of the actual cursor.

  • decoded_position – The position of the decoded cursor.

plot_metrics(targets)[source]

Show the velocities plot and the R-values.

Parameters

targets – List of target positions in pixels.
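
A sketch of how the collector might be used. The window size, target size, colors, and especially the shape of the velocity and timestamp sequences are assumptions for illustration.

from neural_data_simulator.tasks.center_out_reach.metrics import MetricsCollector
from neural_data_simulator.tasks.center_out_reach.scalers import PixelsToMetersConverter

# Hypothetical setup: an 800x600 window, a 1 cm target, a 96 PPI display.
converter = PixelsToMetersConverter(ppi=96.0)
collector = MetricsCollector(
    window_rect=(800, 600),
    target_size=0.01,
    unit_converter=converter,
    actual_cursor_color="blue",
    decoded_cursor_color="red",
)

# Assumption: velocities are sequences of (x, y) pairs with matching timestamps.
timestamps = [0.0, 0.02, 0.04]
collector.record_actual_velocities([(1.0, 0.5), (0.8, 0.4), (0.6, 0.2)], timestamps)
collector.record_decoded_velocities([(0.9, 0.4), (0.7, 0.3), (0.5, 0.1)], timestamps)
collector.record_cursor_positions(1, (400, 300), (402, 298))

# Plot velocities and R-values against a single, hypothetical centered target.
collector.plot_metrics(targets=[(400, 300)])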

neural_data_simulator.tasks.center_out_reach.scalers

Scalers and unit converters.

class neural_data_simulator.tasks.center_out_reach.scalers.PixelsToMetersConverter(ppi: Optional[float])[source]

Bases: object

Unit conversion between pixels and meters.

__init__(ppi: Optional[float])[source]

Initialize the PixelsToMetersConverter class.

Parameters

ppi – The pixels per inch of the display or None. If ppi is None, the function will try to calculate it based on the default monitor.

pixels_to_millimeters(X: ndarray) ndarray[source]

Convert pixels to millimeters.

Parameters

X – An array of pixel values.

Returns

The array resulting from the conversion.

millimeters_to_pixels(X: ndarray) ndarray[source]

Convert millimeters to pixels.

Parameters

X – An array of millimeter values.

Returns

The array resulting from the conversion.

pixels_to_meters(X: ndarray) ndarray[source]

Convert pixels to meters.

Parameters

X – An array of pixel values.

Returns

The array resulting from the conversion.

meters_to_pixels(X: Union[float, ndarray]) Union[float, ndarray][source]

Convert meters to pixels.

Parameters

X – An array of meter values or a single float value.

Returns

The array resulting from the conversion.
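
A short sketch of the converter, assuming a 96 PPI display for round numbers; passing ppi=None lets the converter estimate the value from the default monitor.

import numpy as np

from neural_data_simulator.tasks.center_out_reach.scalers import PixelsToMetersConverter

converter = PixelsToMetersConverter(ppi=96.0)

pixels = np.array([96.0, 192.0])
print(converter.pixels_to_millimeters(pixels))  # approximately [25.4, 50.8]
print(converter.pixels_to_meters(pixels))       # approximately [0.0254, 0.0508]
print(converter.meters_to_pixels(0.0254))       # approximately 96.0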

class neural_data_simulator.tasks.center_out_reach.scalers.StandardVelocityScaler(scale: ndarray, mean: ndarray, unit_converter: PixelsToMetersConverter)[source]

Bases: object

A simple standard scaler for velocities.

__init__(scale: ndarray, mean: ndarray, unit_converter: PixelsToMetersConverter)[source]

Instantiate a new scaler for velocities.

The purpose of this scaler is to scale the velocity to a standard deviation of 1 and a mean of 0 when calling the transform function.

Parameters
  • scale – The scale to apply to the velocities.

  • mean – The mean to offset the velocities.

  • unit_converter – The unit converter to use to convert between pixels and millimeters.

transform(X: ndarray) ndarray[source]

Apply the transformation required to standardize the velocity.

This transformation should be applied to the velocity before sending it to the encoder.

Parameters

X – The velocity to transform.

Returns

Standardized velocity.

inverse_transform(X: ndarray) ndarray[source]

Apply the transformation required to reverse the scaling of the velocity.

This transformation should be applied to the velocity after receiving it from the decoder.

Parameters

X – The velocity to transform.

Returns

Scaled velocity.
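
A sketch of the scaler round trip. The scale and mean values below are placeholders; in the actual task they come from the standard_scaler settings.

import numpy as np

from neural_data_simulator.tasks.center_out_reach.scalers import (
    PixelsToMetersConverter,
    StandardVelocityScaler,
)

converter = PixelsToMetersConverter(ppi=96.0)
scaler = StandardVelocityScaler(
    scale=np.array([50.0, 50.0]),   # placeholder standard deviations
    mean=np.array([0.0, 0.0]),      # placeholder means
    unit_converter=converter,
)

velocity = np.array([[120.0, -60.0]])               # raw cursor velocity in pixels
standardized = scaler.transform(velocity)            # what would be sent to the encoder
recovered = scaler.inverse_transform(standardized)   # what would be applied to decoder output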

neural_data_simulator.tasks.center_out_reach.screen_info

A wrapper around screeninfo with improved functionality on macOS.

neural_data_simulator.tasks.center_out_reach.screen_info.get_monitors() List[Monitor][source]

Return a list of Monitor objects representing the connected screens.

neural_data_simulator.tasks.center_out_reach.screen_info.get_ppmm(monitor: Monitor) Optional[Tuple[float, float]][source]

Return the pixels per millimeter of the given monitor.
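
A small sketch of the two helpers; get_ppmm may return None when the monitor does not report its physical size.

from neural_data_simulator.tasks.center_out_reach import screen_info

monitors = screen_info.get_monitors()
if monitors:
    ppmm = screen_info.get_ppmm(monitors[0])
    print(f"primary monitor: {monitors[0]}, pixels per millimeter: {ppmm}")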

neural_data_simulator.tasks.center_out_reach.settings

Settings schema for the center-out reach task.

class neural_data_simulator.tasks.center_out_reach.settings.CenterOutReach(*, input: Input, output: Output, sampling_rate: float, window: Window, with_metrics: bool, standard_scaler: StandardScaler, task: Task, task_window_output: Optional[Output] = None)[source]

Bases: BaseModel

Center-out reach settings.

class Input(*, enabled: bool, lsl: Optional[LSLInputModel] = None)[source]

Bases: BaseModel

Input settings.

enabled: bool
lsl: Optional[LSLInputModel]
class Output(*, lsl: LSLOutputModel)[source]

Bases: BaseModel

Output settings.

lsl: LSLOutputModel
class Window(*, width: Optional[float] = None, height: Optional[float] = None, ppi: Optional[float] = None, colors: Colors)[source]

Bases: BaseModel

Window settings.

class Colors(*, background: str, decoded_cursor: str, actual_cursor: str, target: str, target_waiting_for_cue: str, decoded_cursor_on_target: str)[source]

Bases: BaseModel

Colors used in the GUI.

background: str
decoded_cursor: str
actual_cursor: str
target: str
target_waiting_for_cue: str
decoded_cursor_on_target: str
width: Optional[float]
height: Optional[float]
ppi: Optional[float]
colors: Colors
class StandardScaler(*, scale: list[float], mean: list[float])[source]

Bases: BaseModel

Velocity scaler settings.

scale: list[float]
mean: list[float]
class Task(*, target_radius: float, cursor_radius: float, radius_to_target: float, number_of_targets: int, delay_to_begin: float, delay_waiting_for_cue: float, target_holding_time: float, max_trial_time: int)[source]

Bases: BaseModel

Task settings.

target_radius: float
cursor_radius: float
radius_to_target: float
number_of_targets: int
delay_to_begin: float
delay_waiting_for_cue: float
target_holding_time: float
max_trial_time: int
input: Input
output: Output
sampling_rate: float
window: Window
with_metrics: bool
standard_scaler: StandardScaler
task: Task
task_window_output: Optional[Output]
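
The nested models can be constructed directly, as sketched below. The values are illustrative only (check the packaged example configuration for the expected units), and the Input/Output sections are omitted because their LSL models are defined elsewhere in the settings schema.

from neural_data_simulator.tasks.center_out_reach.settings import CenterOutReach

task = CenterOutReach.Task(
    target_radius=0.01,
    cursor_radius=0.005,
    radius_to_target=0.1,
    number_of_targets=8,
    delay_to_begin=1.0,
    delay_waiting_for_cue=0.5,
    target_holding_time=0.5,
    max_trial_time=10,
)

window = CenterOutReach.Window(
    ppi=None,  # let the task estimate the display PPI
    colors=CenterOutReach.Window.Colors(
        background="white",
        decoded_cursor="red",
        actual_cursor="blue",
        target="green",
        target_waiting_for_cue="orange",
        decoded_cursor_on_target="purple",
    ),
)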

neural_data_simulator.tasks.center_out_reach.sprites

Circular shapes that are being drawn in the window on the screen.

class neural_data_simulator.tasks.center_out_reach.sprites.Sprite(color: str, radius: int, xy: Tuple[int, int])[source]

Bases: Sprite

An object that can be drawn on the screen.

This class is a wrapper around pygame.sprite.Sprite that is used to represent a circle of a given color and radius.

__init__(color: str, radius: int, xy: Tuple[int, int])[source]

Create a sprite with a given color, radius, and position.

Parameters
  • color – The color of the sprite.

  • radius – the radius of the sprite.

  • xy – The position of the sprite.

property position: Tuple[int, int]

Get the sprite coordinates in the window.

Returns

The sprite coordinates as a tuple.

collides_with(other_sprite: Sprite) bool[source]

Check if this sprite collides with another sprite.

Parameters

other_sprite – The sprite to check for collision with.

Returns

True if the sprites collide, False otherwise.

update_position(xy: Tuple[float, float])[source]

Adjust the position of the sprite by a given amount.

Parameters

xy – The delta to adjust the position by.

change_color(color)[source]

Set a new color for the sprite.

Parameters

color – The new sprite color.
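
A brief sketch of two sprites interacting; pygame is assumed to be initialized with a display, and the colors and geometry are arbitrary.

import pygame

from neural_data_simulator.tasks.center_out_reach.sprites import Sprite

pygame.init()
pygame.display.set_mode((800, 600))  # a display is assumed for creating drawable surfaces

target = Sprite(color="green", radius=20, xy=(400, 300))
cursor = Sprite(color="red", radius=10, xy=(395, 295))

cursor.update_position((3.0, 4.0))  # move the cursor by a small delta
if cursor.collides_with(target):
    target.change_color("purple")
print(cursor.position)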

neural_data_simulator.tasks.center_out_reach.task_runner

Run trial rounds in a loop.

Each iteration of the loop consists of:
  1. Polling for input events.

  2. Advancing the state.

  3. Updating the window.

  4. Pausing so that the GUI doesn’t run faster than the targeted sampling rate.

The loop repeats until it is signaled to stop.

class neural_data_simulator.tasks.center_out_reach.task_runner.VelocityScaler(*args, **kwargs)[source]

Bases: Protocol

Scales the cursor velocity.

A Python protocol (PEP 544) works in a similar way to an abstract class. The __init__() method of this protocol should never be called, as protocols are not meant to be instantiated. An __init__() method may be defined in a concrete implementation of this protocol if needed.

transform(X: ndarray) ndarray[source]

Scales the real velocity.

inverse_transform(X: ndarray) ndarray[source]

Scales the decoded velocity.

__init__(*args, **kwargs)
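
Because VelocityScaler is a protocol, any object with matching transform and inverse_transform methods satisfies it structurally; no inheritance is needed. The identity scaler below is a hypothetical minimal example of such an object, which could be passed to TaskRunner as velocity_scaler.

import numpy as np
from numpy import ndarray

class IdentityVelocityScaler:
    """A no-op scaler that structurally satisfies the VelocityScaler protocol."""

    def transform(self, X: ndarray) -> ndarray:
        return X

    def inverse_transform(self, X: ndarray) -> ndarray:
        return X

scaler = IdentityVelocityScaler()
print(scaler.transform(np.array([[1.0, 2.0]])))
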
class neural_data_simulator.tasks.center_out_reach.task_runner.TaskRunner(sample_rate: float, decoded_cursor_input: Optional[Input], actual_cursor_output: Optional[Output], velocity_scaler: VelocityScaler, with_decoded_cursor: bool, metrics_collector: Optional[MetricsCollector], task_window_output: Optional[Output] = None)[source]

Bases: object

The BCI task runner.

__init__(sample_rate: float, decoded_cursor_input: Optional[Input], actual_cursor_output: Optional[Output], velocity_scaler: VelocityScaler, with_decoded_cursor: bool, metrics_collector: Optional[MetricsCollector], task_window_output: Optional[Output] = None)[source]

Create a new instance to run the center_out_reach task.

Parameters
  • sample_rate – sampling rate of input data (Hz).

  • decoded_cursor_input – The input (e.g., LSLInput) for decoded cursor velocities.

  • actual_cursor_output – The output (e.g., LSLOutput) for actual cursor velocities.

  • velocity_scaler – Scales the actual cursor velocities.

  • with_decoded_cursor – If True, use the decoded cursor velocities; otherwise use the actual cursor velocities.

  • metrics_collector – Collects and plots velocities resulting from running the task.

  • task_window_output – The output (e.g., LSLOutput) for target positions and the task’s cursor positions.

stop()[source]

Signal the loop that it should stop.

run(task_state: TaskState, user_input: InputHandler)[source]

Start the loop.

Parameters
  • task_state – The state machine that should be updated by the loop.

  • user_input – The user input controller for the actual cursor.

neural_data_simulator.tasks.center_out_reach.task_state

The state machine for the BCI task.

class neural_data_simulator.tasks.center_out_reach.task_state.State(*args, **kwargs)[source]

Bases: Protocol

The model of a state in a state machine.

A Python protocol (PEP 544) works in a similar way to an abstract class. The __init__() method of this protocol should never be called, as protocols are not meant to be instantiated. An __init__() method may be defined in a concrete implementation of this protocol if needed.

is_valid_next_state(s: State) bool[source]

Check if a transition is possible to the given next state.

Parameters

s – The next state to validate the transition to.

Returns

True if a transition is possible from the current state to the given next state.

enter(previous_state: Optional[State] = None)[source]

Enter the state.

This method is called when the state machine has transitioned to this state.

Parameters

previous_state – The previous state that the state machine was in.

exit(next_state: Optional[State] = None)[source]

Leave the state.

This method is called when the state machine has transitioned from this state.

Parameters

next_state – The next state that the state machine is transitioning to.

__init__(*args, **kwargs)
class neural_data_simulator.tasks.center_out_reach.task_state.StateParams(delay_to_begin: float, delay_waiting_for_cue: float, target_holding_time: float, max_trial_time: int)[source]

Bases: object

Configuration for the state machine.

delay_to_begin: float

The delay before the trial begins.

delay_waiting_for_cue: float

The time between when the target is shown and the go cue.

target_holding_time: float

The time the cursor is required to hover over the target.

max_trial_time: int

The time allocated for a trial to be completed.

__init__(delay_to_begin: float, delay_waiting_for_cue: float, target_holding_time: float, max_trial_time: int) None
class neural_data_simulator.tasks.center_out_reach.task_state.BaseState(task_window: TaskWindow, params: StateParams)[source]

Bases: State

Base class for all states in the state machine.

property time_in_state

Get the time spent in the current state.

property trial_timed_out: bool

Check if the trial has timed out.

__init__(task_window: TaskWindow, params: StateParams) None[source]

Create a new instance.

enter(previous_state: Optional[State] = None)[source]

Handle the transition to this state.

exit(next_state: Optional[State] = None)[source]

Handle the transition from this state.

abstract is_valid_next_state(s: State) bool[source]

Check if a transition is possible to the given next state.

Parameters

s – The next state to validate the transition to.

Returns

True if a transition is possible from the current state to the given next state.

class neural_data_simulator.tasks.center_out_reach.task_state.MenuScreen(task_window: TaskWindow, params: StateParams)[source]

Bases: BaseState

The state that the state machine starts in.

It represents the time before the task starts, when a menu is presented on the screen.

is_valid_next_state(s: State) bool[source]

Check if a transition is possible to the given next state.

The only valid transition is to the WaitingToBegin state after the GUI window is no longer showing the menu.

class neural_data_simulator.tasks.center_out_reach.task_state.WaitingToBegin(task_window: TaskWindow, params: StateParams)[source]

Bases: BaseState

The state before the first trial.

is_valid_next_state(s: State) bool[source]

Check if a transition is possible to the given next state.

The only valid transition is to the WaitingForCue state after the delay_to_begin.

class neural_data_simulator.tasks.center_out_reach.task_state.WaitingForCue(task_window: TaskWindow, params: StateParams)[source]

Bases: BaseState

The state that the state machine is in at the start of every trial round.

is_valid_next_state(s: State) bool[source]

Check if a transition is possible to the given next state.

The only valid transition is to the Reaching state after the delay_waiting_for_cue.

enter(previous_state: Optional[State] = None)[source]

Enter the state.

Position the target in a random location or in the center, depending on the previous position. Inform the task window that the target is not ready so that its appearance can be updated.

exit(next_state: Optional[State] = None)[source]

Exit the state.

Inform the task window that the target is ready so that its appearance can be updated.

class neural_data_simulator.tasks.center_out_reach.task_state.Reaching(task_window: TaskWindow, params: StateParams)[source]

Bases: BaseState

In this state the cursor is trying to reach the target.

is_valid_next_state(s: State) bool[source]

Check if a transition is possible to the given next state.

Valid transitions are:
  • to the InTarget state if the cursor is on the target

  • to the WaitingForCue state if the trial has timed out.

enter(previous_state: Optional[State] = None)[source]

Enter the state.

If the previous state was WaitingForCue, then set the time that the trial started to be the time this state was entered.

exit(next_state: Optional[State] = None)[source]

Exit the state.

Reset the time that the trial started if the next state is not InTarget.

class neural_data_simulator.tasks.center_out_reach.task_state.InTarget(task_window: TaskWindow, params: StateParams)[source]

Bases: BaseState

In this state the cursor is hovering over the target.

is_valid_next_state(s: State) bool[source]

Check if a transition is possible to the given next state.

Valid transitions are:
  • to the WaitingForCue state if the cursor was hovering over the target for the target_holding_time.

  • to the WaitingForCue state if the trial has timed out.

  • to the Reaching state if the cursor is no longer over the target.

enter(previous_state: Optional[State] = None)[source]

Enter the state.

If the previous state was the Reaching state, then copy the time that the trial started.

Inform the task window that the cursor is on the target so that its appearance can be updated.

exit(next_state: Optional[State] = None)[source]

Exit the state.

If the next state is not the Reaching state, then reset the time that the trial started.

Inform the task window that the cursor is no longer on the target so that its appearance can be updated.

class neural_data_simulator.tasks.center_out_reach.task_state.StateMachine(states: list[neural_data_simulator.tasks.center_out_reach.task_state.State])[source]

Bases: object

The state machine that controls the BCI task.

It is responsible for transitioning between states and calling the enter and exit methods on the states. It also provides a method to get the next state that should be transitioned to.

__init__(states: list[neural_data_simulator.tasks.center_out_reach.task_state.State])[source]

Initialize the state machine with the given states.

get_next_state() Optional[State][source]

Get the next state that should be transitioned to.

enter(s: State) bool[source]

Transition to the given state.

class neural_data_simulator.tasks.center_out_reach.task_state.TaskState(task_window: TaskWindow, params: StateParams)[source]

Bases: object

The state of the task during each trial round.

__init__(task_window: TaskWindow, params: StateParams)[source]

Initialize the state with parameters and display it in the given window.

advance()[source]

Try to advance the state machine to the next state.

If the state machine is able to transition to the next state, then transition to it.

Do nothing if the state machine is not able to transition to the next state.
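
A sketch of how the state machinery might be driven. All numeric values and colors are placeholders, and a working pygame environment is assumed since TaskWindow draws to the screen.

from neural_data_simulator.tasks.center_out_reach.task_state import StateParams, TaskState
from neural_data_simulator.tasks.center_out_reach.task_window import TaskWindow

state_params = StateParams(
    delay_to_begin=1.0,
    delay_waiting_for_cue=0.5,
    target_holding_time=0.5,
    max_trial_time=10,
)

window_params = TaskWindow.Params(
    target_radius=20,
    cursor_radius=10,
    radius_to_target=200,
    number_of_targets=8,
    background_color="white",
    decoded_cursor_color="red",
    decoded_cursor_on_target_color="purple",
    actual_cursor_color="blue",
    target_color="green",
    target_waiting_color="orange",
    font_size=24,
    button_size=(120, 40),
    button_spacing=10,
    button_offset_top=50,
)

task_window = TaskWindow(window_rect=(800, 600), params=window_params)
task_state = TaskState(task_window=task_window, params=state_params)

# One iteration of the main loop: advance the state machine if a transition is valid.
task_state.advance()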

neural_data_simulator.tasks.center_out_reach.task_window

The BCI task GUI.

class neural_data_simulator.tasks.center_out_reach.task_window.RichText(surface: Surface, rect: Rect)[source]

Bases: NamedTuple

A single rich text surface.

surface: Surface

The text surface.

rect: Rect

The text bounds.

class neural_data_simulator.tasks.center_out_reach.task_window.TaskWindow(window_rect: Tuple[int, int], params: Params, menu_text: Optional[List[Dict[str, str]]] = None)[source]

Bases: object

Implements the BCI task GUI using pygame.

This class is responsible for showing sprites on the screen and updating their positions.

class Params(target_radius: int, cursor_radius: int, radius_to_target: int, number_of_targets: int, background_color: str, decoded_cursor_color: str, decoded_cursor_on_target_color: str, actual_cursor_color: str, target_color: str, target_waiting_color: str, font_size: int, button_size: Tuple[int, int], button_spacing: int, button_offset_top: int, button_color: str = 'gray', button_color_on_hover: str = 'lightgray')[source]

Bases: object

Task window specific configuration.

target_radius: int

The radius of the target in pixels.

cursor_radius: int

The radius of the cursor in pixels.

radius_to_target: int

The distance from the center of the window to the target in pixels.

number_of_targets: int

The number of targets to show.

background_color: str

The window background color.

decoded_cursor_color: str

The color of the decoded cursor.

decoded_cursor_on_target_color: str

The color of the decoded cursor when it is hovering over the target.

actual_cursor_color: str

The color of the actual cursor.

target_color: str

The color of the target.

target_waiting_color: str

The color of the target when it is waiting for the cue.

font_size: int

The font size in pixels.

button_size: Tuple[int, int]

The button width and height in pixels.

button_spacing: int

The vertical spacing between buttons in pixels.

button_offset_top: int

The top offset of the buttons from the center of the window in pixels.

button_color: str = 'gray'

The button background color.

button_color_on_hover: str = 'lightgray'

The button background color when the mouse is hovering over it.

__init__(target_radius: int, cursor_radius: int, radius_to_target: int, number_of_targets: int, background_color: str, decoded_cursor_color: str, decoded_cursor_on_target_color: str, actual_cursor_color: str, target_color: str, target_waiting_color: str, font_size: int, button_size: Tuple[int, int], button_spacing: int, button_offset_top: int, button_color: str = 'gray', button_color_on_hover: str = 'lightgray') None
__init__(window_rect: Tuple[int, int], params: Params, menu_text: Optional[List[Dict[str, str]]] = None)[source]

Create a new instance.

property window_center

Get the center of the window.

property is_cursor_on_target: bool

Check if the cursor is hovering over the target.

show_hint(hint: Optional[List[Dict[str, str]]])[source]

Show or hide a rich text hint at the top of the screen.

Parameters

hint – The rich text to show, or None to hide the hint. The value should be a list of rich text elements, where each element is a dictionary with the following keys:

  • text – The text to show.

  • color – The color of the text.

start_task()[source]

Start the task.

stop_task()[source]

Stop the task.

reset_cursor()[source]

Reset the cursor position to the center of the screen.

center_target()[source]

Position the target in the center of the screen.

Also resets the target color to the default color.

randomize_target()[source]

Place the target in a random position.

The new position is selected from a predefined list of possible positions.

property is_target_centered: bool

Check if the target is in the center of the screen.

reset_target_color()[source]

Reset the target color to the default color.

set_decoded_cursor_on_target(on_target: bool)[source]

Set the decoded cursor color depending on whether it is on the target.

set_target_ready(ready: bool)[source]

Set the target color depending on whether it is ready for reaching.

toggle_actual_cursor()[source]

Toggle the visibility of the actual (real) cursor.

update_cursor(actual_velocity: list[Tuple[float, float]], decoded_velocity: list[Tuple[float, float]]) Tuple[Tuple[int, int], Tuple[int, int]][source]

Adjust the position of the actual and decoded cursors.

Parameters
  • actual_velocity – History of velocities for the actual (real) cursor.

  • decoded_velocity – History of velocities for the decoded cursor.

Returns

Updated actual and decoded positions.

try_press_button()[source]

Try to press a button if the mouse is hovering over it.

draw()[source]

Draw all sprites on the screen.

tick(framerate)[source]

Tick the screen update clock.

This method should be called for every frame in order to update the screen and limit the game speed to match the frame rate.

leave()[source]

Quit pygame nicely.