neural_data_simulator.tasks.center_out_reach
Center-out reaching task.
During the task, the GUI keeps track of two cursors: the real cursor, which is controlled by the user, and the decoded cursor, which is controlled by the output of the decoder. The task consists of a sequence of trials. During each trial, the user has to make the decoded cursor reach the target. The trial ends when the decoded cursor hovers over the target for a configurable amount of time. The trials alternate between a “reaching out” trial (from center outwards) and a “back to center” trial (from the current position back to center). The user can reset the cursor positions to the center of the screen at any time by pressing the space bar. The real cursor can be toggled on and off by pressing the ‘c’ key. The task can be stopped by pressing the ‘ESC’ key.
At the end of the task, the cursor velocities and trajectories are plotted.
neural_data_simulator.tasks.center_out_reach.input_events
Module for handling events such as key presses.
- class neural_data_simulator.tasks.center_out_reach.input_events.InputEvent(value)[source]
Bases:
Enum
An enumeration of the possible input events.
- NONE = 0
- EXIT = 1
- RESET = 2
- TOGGLE_CURSOR = 3
- CLEAR_METRICS = 4
- MOUSE_BUTTON_PRESSED = 5
- class neural_data_simulator.tasks.center_out_reach.input_events.InputHandler[source]
Bases:
object
Listeners for input event handling.
- property input_device_name
Get the name of the input device.
- set_handler_for_event(event: InputEvent, handler: Callable)[source]
Set a function as a handler for a specific input event.
- get_cursor_relative_position() Tuple[int, int] [source]
Get the relative position of the joystick or mouse cursor.
The position is relative to the previous position when this function was last called. If a joystick was detected at the start of the GUI, the joystick position is returned. Otherwise, the mouse position is returned.
- Returns
The relative position of the cursor as a tuple.
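The listener pattern above can be sketched as a minimal handler registry. This is a simplified illustration, not the library's implementation; the `SimpleInputHandler` class and its `dispatch` method are hypothetical, while the `InputEvent` values mirror the enumeration above:

```python
from enum import Enum
from typing import Callable, Dict


class InputEvent(Enum):
    """Mirrors the InputEvent enumeration above."""
    NONE = 0
    EXIT = 1
    RESET = 2
    TOGGLE_CURSOR = 3
    CLEAR_METRICS = 4
    MOUSE_BUTTON_PRESSED = 5


class SimpleInputHandler:
    """Hypothetical sketch of the handler-registry pattern."""

    def __init__(self) -> None:
        self._handlers: Dict[InputEvent, Callable[[], None]] = {}

    def set_handler_for_event(self, event: InputEvent, handler: Callable[[], None]) -> None:
        """Register a function as the handler for a specific input event."""
        self._handlers[event] = handler

    def dispatch(self, event: InputEvent) -> None:
        """Call the registered handler, if any; events without a handler are ignored."""
        handler = self._handlers.get(event)
        if handler is not None:
            handler()


handler = SimpleInputHandler()
events_seen = []
handler.set_handler_for_event(InputEvent.RESET, lambda: events_seen.append("reset"))
handler.dispatch(InputEvent.RESET)   # runs the registered handler
handler.dispatch(InputEvent.EXIT)    # no handler registered, silently ignored
```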
neural_data_simulator.tasks.center_out_reach.joystick
Module for handling joysticks and gamepads.
- class neural_data_simulator.tasks.center_out_reach.joystick.JoystickInput[source]
Bases:
object
Captures joystick movement and converts it to relative positions.
- property relative_position: Optional[Tuple[int, int]]
Get joystick position relative to the last event poll.
- Returns
The relative position of the joystick as a tuple.
neural_data_simulator.tasks.center_out_reach.metrics
Collect and plot velocities resulting from running the task.
- class neural_data_simulator.tasks.center_out_reach.metrics.MetricsCollector(window_rect: Tuple[int, int], target_size: float, unit_converter: PixelsToMetersConverter, actual_cursor_color, decoded_cursor_color)[source]
Bases:
object
Collect and plot velocities resulting from running the task.
- __init__(window_rect: Tuple[int, int], target_size: float, unit_converter: PixelsToMetersConverter, actual_cursor_color, decoded_cursor_color)[source]
Create a new instance.
- Parameters
window_rect – The size of the task window.
target_size – The radius of the target in meters.
unit_converter – The unit converter to use for converting from meters to pixels.
actual_cursor_color – The color to use for plotting the actual cursor.
decoded_cursor_color – The color to use for plotting the decoded cursor.
- record_decoded_velocities(decoded_velocities, timestamps)[source]
Record decoded velocities.
- Parameters
decoded_velocities – List of decoded velocities.
timestamps – The timestamps corresponding to the decoded velocities.
- record_actual_velocities(actual_velocities, timestamps)[source]
Record actual velocities.
- Parameters
actual_velocities – List of actual velocities.
timestamps – The timestamps corresponding to the actual velocities.
neural_data_simulator.tasks.center_out_reach.scalers
Scalers and unit converters.
- class neural_data_simulator.tasks.center_out_reach.scalers.PixelsToMetersConverter(ppi: Optional[float])[source]
Bases:
object
Unit conversion between pixels and meters.
- __init__(ppi: Optional[float])[source]
Initialize the PixelsToMetersConverter class.
- Parameters
ppi – The pixels per inch of the display or None. If ppi is None, the function will try to calculate it based on the default monitor.
- pixels_to_millimeters(X: ndarray) ndarray [source]
Convert pixels to millimeters.
- Parameters
X – An array of pixel values.
- Returns
The array resulting from the conversion.
- millimeters_to_pixels(X: ndarray) ndarray [source]
Convert millimeters to pixels.
- Parameters
X – An array of millimeter values.
- Returns
The array resulting from the conversion.
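The conversion follows directly from the PPI value: one inch is 25.4 mm, so `ppi` pixels correspond to 25.4 mm. A minimal sketch, assuming a known PPI (the `SimplePixelsToMeters` class is hypothetical; the real class can also try to infer the PPI from the default monitor):

```python
import numpy as np


class SimplePixelsToMeters:
    """Hypothetical sketch of pixel/millimeter conversion from a known PPI.

    A display with `ppi` pixels per inch maps `ppi` pixels to 25.4 mm.
    """

    def __init__(self, ppi: float) -> None:
        self.ppi = ppi

    def pixels_to_millimeters(self, x: np.ndarray) -> np.ndarray:
        # pixels -> inches -> millimeters
        return x / self.ppi * 25.4

    def millimeters_to_pixels(self, x: np.ndarray) -> np.ndarray:
        # millimeters -> inches -> pixels
        return x * self.ppi / 25.4


converter = SimplePixelsToMeters(ppi=254.0)  # a convenient round value: 10 px per mm
mm = converter.pixels_to_millimeters(np.array([10.0, 100.0]))
# converting back recovers the original pixel values
px = converter.millimeters_to_pixels(mm)
```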
- class neural_data_simulator.tasks.center_out_reach.scalers.StandardVelocityScaler(scale: ndarray, mean: ndarray, unit_converter: PixelsToMetersConverter)[source]
Bases:
object
A simple standard scaler for velocities.
- __init__(scale: ndarray, mean: ndarray, unit_converter: PixelsToMetersConverter)[source]
Instantiate a new scaler for velocities.
The purpose of this scaler is to scale the velocity to a standard deviation of 1 and a mean of 0 when calling the transform function.
- Parameters
scale – The scale to apply to the velocities.
mean – The mean to offset the velocities.
unit_converter – The unit converter to use to convert between pixels and millimeters.
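The transform described above is a standard z-scoring step: subtract the mean, then divide by the scale, following the same convention as scikit-learn's StandardScaler. A minimal sketch (the `standard_scale` function is hypothetical, and it omits the pixel/millimeter conversion that the real scaler performs via its unit converter):

```python
import numpy as np


def standard_scale(velocities: np.ndarray, mean: np.ndarray, scale: np.ndarray) -> np.ndarray:
    """Scale velocities toward zero mean and unit standard deviation.

    `mean` and `scale` are assumed to be per-axis statistics:
    transform(v) = (v - mean) / scale.
    """
    return (velocities - mean) / scale


# Two 2-D velocity samples, with per-axis mean and scale:
v = np.array([[2.0, 4.0], [4.0, 8.0]])
scaled = standard_scale(v, mean=np.array([3.0, 6.0]), scale=np.array([1.0, 2.0]))
```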
neural_data_simulator.tasks.center_out_reach.screen_info
A wrapper around screeninfo with improved functionality on macOS.
neural_data_simulator.tasks.center_out_reach.settings
Settings schema for the center-out reach task.
- class neural_data_simulator.tasks.center_out_reach.settings.CenterOutReach(*, input: Input, output: Output, sampling_rate: float, window: Window, with_metrics: bool, standard_scaler: StandardScaler, task: Task, task_window_output: Optional[Output] = None)[source]
Bases:
BaseModel
Center-out reach settings.
- class Input(*, enabled: bool, lsl: Optional[LSLInputModel] = None)[source]
Bases:
BaseModel
Input settings.
- enabled: bool
- lsl: Optional[LSLInputModel]
- class Output(*, lsl: LSLOutputModel)[source]
Bases:
BaseModel
Output settings.
- lsl: LSLOutputModel
- class Window(*, width: Optional[float] = None, height: Optional[float] = None, ppi: Optional[float] = None, colors: Colors)[source]
Bases:
BaseModel
Window settings.
- class Colors(*, background: str, decoded_cursor: str, actual_cursor: str, target: str, target_waiting_for_cue: str, decoded_cursor_on_target: str)[source]
Bases:
BaseModel
Colors used in the GUI.
- background: str
- decoded_cursor: str
- actual_cursor: str
- target: str
- target_waiting_for_cue: str
- decoded_cursor_on_target: str
- width: Optional[float]
- height: Optional[float]
- ppi: Optional[float]
- class StandardScaler(*, scale: list[float], mean: list[float])[source]
Bases:
BaseModel
Velocity scaler settings.
- scale: list[float]
- mean: list[float]
- class Task(*, target_radius: float, cursor_radius: float, radius_to_target: float, number_of_targets: int, delay_to_begin: float, delay_waiting_for_cue: float, target_holding_time: float, max_trial_time: int)[source]
Bases:
BaseModel
Task settings.
- target_radius: float
- cursor_radius: float
- radius_to_target: float
- number_of_targets: int
- delay_to_begin: float
- delay_waiting_for_cue: float
- target_holding_time: float
- max_trial_time: int
- sampling_rate: float
- with_metrics: bool
- standard_scaler: StandardScaler
neural_data_simulator.tasks.center_out_reach.sprites
Circular shapes that are drawn in the task window on the screen.
- class neural_data_simulator.tasks.center_out_reach.sprites.Sprite(color: str, radius: int, xy: Tuple[int, int])[source]
Bases:
Sprite
An object that can be drawn on the screen.
This class is a wrapper around pygame.sprite.Sprite that is used to represent a circle of a given color and radius.
- __init__(color: str, radius: int, xy: Tuple[int, int])[source]
Create a sprite with a given color, radius, and position.
- Parameters
color – The color of the sprite.
radius – the radius of the sprite.
xy – The position of the sprite.
- property position: Tuple[int, int]
Get the sprite coordinates in the window.
- Returns
The sprite coordinates as a tuple.
- collides_with(other_sprite: Sprite) bool [source]
Check if this sprite collides with another sprite.
- Parameters
other_sprite – The sprite to check for collision with.
- Returns
True if the sprites collide, False otherwise.
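For circular sprites like these, a collision check reduces to comparing the distance between the centers with the sum of the radii. A hypothetical geometric sketch (the real class builds on pygame.sprite.Sprite and may implement the check differently):

```python
import math
from typing import Tuple


def circles_collide(center_a: Tuple[float, float], radius_a: float,
                    center_b: Tuple[float, float], radius_b: float) -> bool:
    """Two circles collide when the distance between their centers
    is no greater than the sum of their radii."""
    distance = math.dist(center_a, center_b)
    return distance <= radius_a + radius_b


# A cursor of radius 5 at (0, 0) against a target of radius 10 at x = 15 or 16:
touching = circles_collide((0.0, 0.0), 5.0, (15.0, 0.0), 10.0)  # exactly touching
apart = circles_collide((0.0, 0.0), 5.0, (16.0, 0.0), 10.0)     # 1 px short
```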
neural_data_simulator.tasks.center_out_reach.task_runner
Run trial rounds in a loop.
- An iteration of the loop consists of:
Polling for input events.
Advancing the state.
Updating the window.
Pausing so that the GUI doesn’t run faster than the targeted sampling rate.
Repeat until the loop is signaled to stop.
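The iteration steps above can be sketched as a fixed-rate loop that sleeps off the remainder of each sampling period. A simplified illustration with callbacks standing in for the real input, state, and window objects (all names here are hypothetical, not the library's implementation):

```python
import time


def run_loop(sampling_rate: float, n_iterations: int,
             poll_input, advance_state, update_window) -> None:
    """Run the task loop at a fixed rate.

    Each iteration polls input, advances the state machine, redraws the
    window, then sleeps off whatever time remains in the sampling period
    so the GUI does not run faster than the targeted rate.
    """
    period = 1.0 / sampling_rate
    for _ in range(n_iterations):
        started = time.perf_counter()
        poll_input()
        advance_state()
        update_window()
        elapsed = time.perf_counter() - started
        if elapsed < period:
            time.sleep(period - elapsed)


calls = []
run_loop(
    sampling_rate=100.0,
    n_iterations=3,
    poll_input=lambda: calls.append("poll"),
    advance_state=lambda: calls.append("advance"),
    update_window=lambda: calls.append("draw"),
)
```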
- class neural_data_simulator.tasks.center_out_reach.task_runner.VelocityScaler(*args, **kwargs)[source]
Bases:
Protocol
Scales the cursor velocity.
A Python protocol (PEP 544) works in a similar way to an abstract class. The __init__() method of this protocol should never be called, as protocols are not meant to be instantiated. An __init__() method may be defined in a concrete implementation of this protocol if needed.
- __init__(*args, **kwargs)
- class neural_data_simulator.tasks.center_out_reach.task_runner.TaskRunner(sample_rate: float, decoded_cursor_input: Optional[Input], actual_cursor_output: Optional[Output], velocity_scaler: VelocityScaler, with_decoded_cursor: bool, metrics_collector: Optional[MetricsCollector], task_window_output: Optional[Output] = None)[source]
Bases:
object
The BCI task runner.
- __init__(sample_rate: float, decoded_cursor_input: Optional[Input], actual_cursor_output: Optional[Output], velocity_scaler: VelocityScaler, with_decoded_cursor: bool, metrics_collector: Optional[MetricsCollector], task_window_output: Optional[Output] = None)[source]
Create a new instance to run the center_out_reach task.
- Parameters
sample_rate – sampling rate of input data (Hz).
decoded_cursor_input – The input (e.g., LSLInput) for decoded cursor velocities.
actual_cursor_output – The output (e.g., LSLOutput) for actual cursor velocities.
velocity_scaler – Scales the actual cursor velocities.
with_decoded_cursor – if True, use the decoded cursor velocities, else use the actual cursor velocities.
metrics_collector – Collects and plots velocities resulting from running the task.
task_window_output – The output (e.g., LSLOutput) for target positions and the task’s cursor positions.
- run(task_state: TaskState, user_input: InputHandler)[source]
Start the loop.
- Parameters
task_state – The state machine that should be updated by the loop.
user_input – The user input controller for actual cursor.
neural_data_simulator.tasks.center_out_reach.task_state
The state machine for the BCI task.
- class neural_data_simulator.tasks.center_out_reach.task_state.State(*args, **kwargs)[source]
Bases:
Protocol
The model of a state in a state machine.
A Python protocol (PEP 544) works in a similar way to an abstract class. The __init__() method of this protocol should never be called, as protocols are not meant to be instantiated. An __init__() method may be defined in a concrete implementation of this protocol if needed.
- is_valid_next_state(s: State) bool [source]
Check if a transition is possible to the given next state.
- Parameters
s – The next state to validate the transition to.
- Returns
True if a transition is possible from the current state to the given next state.
- enter(previous_state: Optional[State] = None)[source]
Enter the state.
This method is called when the state machine has transitioned to this state.
- Parameters
previous_state – The previous state that the state machine was in.
- exit(next_state: Optional[State] = None)[source]
Leave the state.
This method is called when the state machine transitions out of this state.
- Parameters
next_state – The next state that the state machine is transitioning to.
- __init__(*args, **kwargs)
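As the note above says, a concrete state does not inherit from the protocol; it only needs to provide the three methods structurally. A minimal sketch (the `LoggingState` class is hypothetical):

```python
from typing import Optional, Protocol


class State(Protocol):
    """Mirrors the State protocol above; not meant to be instantiated."""

    def is_valid_next_state(self, s: "State") -> bool: ...
    def enter(self, previous_state: Optional["State"] = None) -> None: ...
    def exit(self, next_state: Optional["State"] = None) -> None: ...


class LoggingState:
    """A concrete state; satisfies the protocol structurally, without inheritance."""

    def __init__(self, name: str, log: list) -> None:
        self.name = name
        self.log = log

    def is_valid_next_state(self, s: State) -> bool:
        # For this sketch, any other state is a valid successor.
        return s is not self

    def enter(self, previous_state: Optional[State] = None) -> None:
        self.log.append(f"enter {self.name}")

    def exit(self, next_state: Optional[State] = None) -> None:
        self.log.append(f"exit {self.name}")


log: list = []
a, b = LoggingState("A", log), LoggingState("B", log)
if a.is_valid_next_state(b):
    a.exit(b)    # leave the current state...
    b.enter(a)   # ...then enter the next one
```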
- class neural_data_simulator.tasks.center_out_reach.task_state.StateParams(delay_to_begin: float, delay_waiting_for_cue: float, target_holding_time: float, max_trial_time: int)[source]
Bases:
object
Configuration for the state machine.
- delay_to_begin: float
The delay before the trial begins.
- delay_waiting_for_cue: float
The time between when the target is shown and the go cue.
- target_holding_time: float
The time required for the mouse to hover over the target.
- max_trial_time: int
The time allocated for a trial to be completed.
- __init__(delay_to_begin: float, delay_waiting_for_cue: float, target_holding_time: float, max_trial_time: int) None
- class neural_data_simulator.tasks.center_out_reach.task_state.BaseState(task_window: TaskWindow, params: StateParams)[source]
Bases:
State
Base class for all states in the state machine.
- property time_in_state
Get the time spent in the current state.
- property trial_timed_out: bool
Check if the trial has timed out.
- __init__(task_window: TaskWindow, params: StateParams) None [source]
Create a new instance.
- class neural_data_simulator.tasks.center_out_reach.task_state.MenuScreen(task_window: TaskWindow, params: StateParams)[source]
Bases:
BaseState
The state that the state machine starts in.
It represents the time before the task starts, when a menu is presented on the screen.
- class neural_data_simulator.tasks.center_out_reach.task_state.WaitingToBegin(task_window: TaskWindow, params: StateParams)[source]
Bases:
BaseState
The state before the first trial.
- class neural_data_simulator.tasks.center_out_reach.task_state.WaitingForCue(task_window: TaskWindow, params: StateParams)[source]
Bases:
BaseState
The state that the state machine is in at the start of every trial round.
- is_valid_next_state(s: State) bool [source]
Check if a transition is possible to the given next state.
The only valid transition is to the Reaching state after the delay_waiting_for_cue.
- class neural_data_simulator.tasks.center_out_reach.task_state.Reaching(task_window: TaskWindow, params: StateParams)[source]
Bases:
BaseState
In this state the cursor is trying to reach the target.
- is_valid_next_state(s: State) bool [source]
Check if a transition is possible to the given next state.
- Valid transitions are:
to the InTarget state if the cursor is on the target
to the WaitingForCue state if the trial has timed out.
- class neural_data_simulator.tasks.center_out_reach.task_state.InTarget(task_window: TaskWindow, params: StateParams)[source]
Bases:
BaseState
In this state the cursor is hovering over the target.
- is_valid_next_state(s: State) bool [source]
Check if a transition is possible to the given next state.
- Valid transitions are:
to the WaitingForCue state if the cursor was hovering over the target for the target_holding_time.
to the WaitingForCue state if the trial has timed out.
to the Reaching state if the cursor is no longer over the target.
- class neural_data_simulator.tasks.center_out_reach.task_state.StateMachine(states: list[neural_data_simulator.tasks.center_out_reach.task_state.State])[source]
Bases:
object
The state machine that controls the BCI task.
It is responsible for transitioning between states and calling the enter and exit methods on the states. It also provides a method to get the next state that should be transitioned to.
- __init__(states: list[neural_data_simulator.tasks.center_out_reach.task_state.State])[source]
Initialize the state machine with the given states.
- class neural_data_simulator.tasks.center_out_reach.task_state.TaskState(task_window: TaskWindow, params: StateParams)[source]
Bases:
object
The state of the task during each trial round.
- __init__(task_window: TaskWindow, params: StateParams)[source]
Initialize the state with parameters and display it in given window.
neural_data_simulator.tasks.center_out_reach.task_window
The BCI task GUI.
- class neural_data_simulator.tasks.center_out_reach.task_window.RichText(surface: Surface, rect: Rect)[source]
Bases:
NamedTuple
A single rich text surface.
- surface: Surface
The text surface.
- rect: Rect
The text bounds.
- class neural_data_simulator.tasks.center_out_reach.task_window.TaskWindow(window_rect: Tuple[int, int], params: Params, menu_text: Optional[List[Dict[str, str]]] = None)[source]
Bases:
object
Implements the BCI task GUI using pygame.
This class is responsible for showing sprites on the screen and updating their positions.
- class Params(target_radius: int, cursor_radius: int, radius_to_target: int, number_of_targets: int, background_color: str, decoded_cursor_color: str, decoded_cursor_on_target_color: str, actual_cursor_color: str, target_color: str, target_waiting_color: str, font_size: int, button_size: Tuple[int, int], button_spacing: int, button_offset_top: int, button_color: str = 'gray', button_color_on_hover: str = 'lightgray')[source]
Bases:
object
Task window specific configuration.
- target_radius: int
The radius of the target in pixels.
- cursor_radius: int
The radius of the cursor in pixels.
- radius_to_target: int
The distance from the center of the window to the target in pixels.
- number_of_targets: int
The number of targets to show.
- background_color: str
The window background color.
- decoded_cursor_color: str
The color of the decoded cursor.
- decoded_cursor_on_target_color: str
The color of the decoded cursor when it is hovering over the target.
- actual_cursor_color: str
The color of the actual cursor.
- target_color: str
The color of the target.
- target_waiting_color: str
The color of the target when it is waiting for the cue.
- font_size: int
The font size in pixels.
- button_size: Tuple[int, int]
The button width and height in pixels.
- button_spacing: int
The vertical spacing between buttons in pixels.
- button_offset_top: int
The top offset of the buttons from the center of the window in pixels.
- button_color: str = 'gray'
The button background color.
- button_color_on_hover: str = 'lightgray'
The button background color when the mouse is hovering over it.
- __init__(target_radius: int, cursor_radius: int, radius_to_target: int, number_of_targets: int, background_color: str, decoded_cursor_color: str, decoded_cursor_on_target_color: str, actual_cursor_color: str, target_color: str, target_waiting_color: str, font_size: int, button_size: Tuple[int, int], button_spacing: int, button_offset_top: int, button_color: str = 'gray', button_color_on_hover: str = 'lightgray') None
- __init__(window_rect: Tuple[int, int], params: Params, menu_text: Optional[List[Dict[str, str]]] = None)[source]
Create a new instance.
- property window_center
Get the center of the window.
- property is_cursor_on_target: bool
Check if the cursor is hovering over the target.
- show_hint(hint: Optional[List[Dict[str, str]]])[source]
Show/Hide a rich text at the top of the screen.
- Parameters
hint – The rich text to show. If None, the hint is hidden. The value should be a list of rich text elements, where each element is a dictionary with the keys: text – the text to show; color – the color of the text.
- center_target()[source]
Position the target in the center of the screen.
Also resets the target color to the default color.
- randomize_target()[source]
Place the target in a random position.
The new position is selected from a predefined list of possible positions.
- property is_target_centered: bool
Check if the target is in the center of the screen.
- set_decoded_cursor_on_target(on_target: bool)[source]
Set the decoded cursor color depending on whether it is on the target.
- set_target_ready(ready: bool)[source]
Set the target color depending on whether it is ready for reaching.
- update_cursor(actual_velocity: list[Tuple[float, float]], decoded_velocity: list[Tuple[float, float]]) Tuple[Tuple[int, int], Tuple[int, int]] [source]
Adjust the position of the actual and decoded cursors.
- Parameters
actual_velocity – History of velocities for the actual (real) cursor.
decoded_velocity – History of velocities for the decoded cursor.
- Returns
Updated actual and decoded positions.
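Updating a cursor from velocity samples amounts to integrating velocity over the sampling period. A hypothetical sketch using simple Euler integration (the real method may clamp positions to the window bounds and handle units differently):

```python
from typing import List, Tuple


def integrate_cursor(position: Tuple[float, float],
                     velocities: List[Tuple[float, float]],
                     dt: float) -> Tuple[float, float]:
    """Advance a cursor position by integrating velocity samples.

    Each velocity sample is assumed to apply for one sampling period `dt`
    (simple Euler integration). Clamping to the window bounds is omitted.
    """
    x, y = position
    for vx, vy in velocities:
        x += vx * dt
        y += vy * dt
    return x, y


# Two samples at 50 Hz (dt = 0.02 s), moving right and then slightly down:
new_pos = integrate_cursor((100.0, 100.0), [(50.0, 0.0), (50.0, 10.0)], dt=0.02)
# new_pos is approximately (102.0, 100.2)
```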