unlimited-research-cooperative / bio-silicon-synergetic-intelligence-system
Bio-Silicon Synergetic Intelligence System
Home Page: https://discord.gg/bKpF32REAj
License: Other
We currently calculate several features from signal data on a nightly basis, since these calculations are computationally expensive. If we can speed them up, we can use these features within the live BCI system. This body of work aims to use Convolutional Neural Networks (CNNs) to learn to predict the nightly feature values from signal data, so that these features can be computed quickly.
We should start by predicting the live-system features to understand how well CNNs perform at this task. Once we've done that, we can move on to predicting the nightly features.
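As a first sketch of what such a predictor could look like, here is a minimal 1D CNN regressor in TensorFlow. The window length, channel count, and number of target features are placeholder assumptions, not values from the live system:
import tensorflow as tf

# Hypothetical shapes: 512-sample windows, 8 ECoG channels, 12 target features
WINDOW_LEN, N_CHANNELS, N_FEATURES = 512, 8, 12

def build_feature_regressor():
    # Map a raw signal window to a vector of predicted feature values
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(WINDOW_LEN, N_CHANNELS)),
        tf.keras.layers.Conv1D(32, 7, activation="relu"),
        tf.keras.layers.MaxPooling1D(2),
        tf.keras.layers.Conv1D(64, 5, activation="relu"),
        tf.keras.layers.GlobalAveragePooling1D(),
        tf.keras.layers.Dense(N_FEATURES),  # linear output for regression
    ])

model = build_feature_regressor()
model.compile(optimizer="adam", loss="mse", metrics=["mae"])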
Since we currently don't have live rat ECoG signals to use, we can learn how effective CNNs are at predicting nightly features based on human ECoG data. This isn't exactly analogous, but it should give us an indication of how effective this ML system would be. We have ECoG data from humans playing a game:
Dataset link - https://openneuro.org/datasets/ds004770/versions/1.0.0
Dataset Google Drive Link: https://drive.google.com/file/d/1Wh8SJ1qZ3_mBZdX_Hukz04uYbYQvfXSQ/view?usp=sharing
Dataset Paper: https://assets.researchsquare.com/files/rs-3581007/v1/c27bf88d-3f89-4b8f-bf79-0fd848624f38.pdf?c=1702544255
Dataset Name: sub-01_ses-task_task-game_run-01_ieeg.edf
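A minimal way to inspect this recording, assuming MNE-Python is installed and the .edf file has been downloaded locally:
import mne

# Load the iEEG recording listed above (path is wherever you saved the download)
raw = mne.io.read_raw_edf("sub-01_ses-task_task-game_run-01_ieeg.edf", preload=True)
print(raw.info)          # sampling rate, channel names, etc.
data, times = raw[:, :]  # (n_channels, n_samples) array plus a time vector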
In this section, we will list out the nightly features we want to be able to predict quickly from ECoG data. These features can be found here:
These features are fast to calculate and are therefore included in the live BCI system. Try to predict these first to gauge how effective CNNs are at this task. You can find the features listed here:
Peaks:
We need to take a look at the relevant features to extract and experiment with different combinations of them.
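For instance, peak-based features like the one named above can be computed per signal window with scipy. This is a sketch; the project's exact peak definitions may differ:
import numpy as np
from scipy.signal import find_peaks

def peak_features(window):
    # Basic peak statistics for one 1-D signal window
    peaks, props = find_peaks(window, height=0)
    heights = props["peak_heights"]
    return {
        "n_peaks": len(peaks),
        "mean_peak_height": float(np.mean(heights)) if len(peaks) else 0.0,
        "max_peak_height": float(np.max(heights)) if len(peaks) else 0.0,
    }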
For our first experimental setup, we will use a one-dimensional DOOM aim-and-shoot game.
Old game.py code section for DOOM:
# scenarios: https://vizdoom.farama.org/environments/default/
# custom scenario: https://vizdoom.farama.org/environments/creatingCustom/
# Initialize the game environment using the given configuration and scenario paths
def initialize_vizdoom(config_path, scenario_path):
# Create a new DoomGame instance
game = vzd.DoomGame()
# NOTE: the arguments are currently overridden with project-specific paths
config_path = "AAA_projects/UnlimitedResearchCooperative/Synthetic_Intelligence_Labs/ViZDoom/scenarios/my_way_home.cfg"
scenario_path = "AAA_projects/UnlimitedResearchCooperative/Synthetic_Intelligence_Labs/ViZDoom/scenarios/my_way_home.wad"
# Load the config and scenario so the paths above are actually applied
game.load_config(config_path)
game.set_doom_scenario_path(scenario_path)
# Make the game window visible
game.set_window_visible(True)
# Set the game mode to PLAYER (as opposed to SPECTATOR)
game.set_mode(vzd.Mode.PLAYER)
# Enable detailed objects information
game.set_objects_info_enabled(True)
game.set_sectors_info_enabled(True)
# Set the screen resolution
game.set_screen_resolution(vzd.ScreenResolution.RES_640X480)
# Enable rendering of the HUD
game.set_render_hud(True)
game.set_automap_render_textures(True)
game.set_render_weapon(True)
game.set_render_decals(True)
game.set_render_particles(True)
game.set_render_effects_sprites(True)
game.set_render_messages(True)
game.set_render_corpses(True)
game.set_render_all_frames(True)
game.set_sound_enabled(True)
# Clear any previously available buttons and specify new ones for this game instance
game.clear_available_buttons()
game.add_available_button(vzd.Button.ATTACK)
game.add_available_button(vzd.Button.USE)
game.add_available_button(vzd.Button.MOVE_BACKWARD)
game.add_available_button(vzd.Button.MOVE_FORWARD)
game.add_available_button(vzd.Button.TURN_RIGHT)
game.add_available_button(vzd.Button.TURN_LEFT)
# Reference: set_button_max_value(button, max_value) caps the absolute value a
# button can take; 0 means no constraint (infinity). It only makes sense for
# delta buttons, applies in all Modes, and has no effect while the game is running.
# Initialize the game with the specified settings
game.init()
return game
# buttons for actions:
# https://vizdoom.farama.org/api/python/doomGame/#vizdoom.DoomGame.set_available_buttons
# https://github.com/Farama-Foundation/ViZDoom/blob/master/examples/python/delta_buttons.py
# Decode action strings into boolean arrays indicating which actions are active
def decode_actions(action_str):
# Convert the comma-separated string into a list of integers
action_codes = [int(code) for code in action_str.split(',') if code.isdigit()]
# Initialize a boolean list to represent the activation state of each action
action = [False] * len(vzd.Button)
# Set the corresponding action to True based on the action codes
for code in action_codes:
if code < len(action):
action[code] = True
return action
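# Example (hypothetical input): with the six buttons added in initialize_vizdoom,
# decode_actions("0,3") activates ATTACK (index 0) and MOVE_FORWARD (index 3).
# Note: game.make_action() expects a list as long as the *available* buttons
# (six here), so the len(vzd.Button)-sized list above is oversized and may need
# trimming to match.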
# variables for game state:
# https://vizdoom.farama.org/main/api/python/gameState/#
# https://vizdoom.farama.org/api/python/enums/
# https://github.com/Farama-Foundation/ViZDoom/issues/361
# https://github.com/Farama-Foundation/ViZDoom
# https://github.com/Farama-Foundation/ViZDoom/blob/master/examples/python/buffers.py
# https://github.com/Farama-Foundation/ViZDoom/blob/master/examples/python/labels_buffer.py
# https://github.com/Farama-Foundation/ViZDoom/blob/master/examples/python/objects_and_sectors.py
# https://vizdoom.farama.org/main/api/python/doomGame/#vizdoom.DoomGame.set_sectors_info_enabled
# https://vizdoom.farama.org/main/api/python/gameState/#vizdoom.GameState.objects
# Extract and return game state information as a dictionary
def extract_game_state(game):
# Retrieve various game variables
hitcount = game.get_game_variable(vzd.GameVariable.HITCOUNT)
hits_taken = game.get_game_variable(vzd.GameVariable.HITS_TAKEN)
dead = game.get_game_variable(vzd.GameVariable.DEAD) > 0
health = game.get_game_variable(vzd.GameVariable.HEALTH)
attack_ready = game.get_game_variable(vzd.GameVariable.ATTACK_READY) > 0
# Player position for distance calculation
player_x = game.get_game_variable(vzd.GameVariable.POSITION_X)
player_y = game.get_game_variable(vzd.GameVariable.POSITION_Y)
player_z = game.get_game_variable(vzd.GameVariable.POSITION_Z)
def detect_doors(labels):
# Example logic; you'll need to adjust based on how doors are labeled in your scenario
doors_detected = any("door" in label.object_name.lower() for label in labels)
return doors_detected
def categorize_enemy_type(labels):
enemy_types_detected = {"weak": 0, "strong": 0, "boss": 0}
for label in labels:
if "Imp" in label.object_name: # Example: assuming 'Imp' as a weak enemy
enemy_types_detected["weak"] += 1
elif "Demon" in label.object_name: # Example: a stronger enemy
enemy_types_detected["strong"] += 1
# Add more conditions based on known enemy types in ViZDoom
return enemy_types_detected
# This requires keeping track of past actions and outcomes
action_states = {"moving": False, "shooting": False, "escaping_enemy": False}
def determine_exploring_state(depth_buffer):
# Example heuristic: a narrower field in the depth buffer might indicate a corridor
# This will require custom logic based on your game's design and scenarios
return "corridor" if is_corridor(depth_buffer) else "open room"
# Example logic for tracking if a key has been picked up
level_states = {"looking_for_door_key": True, "have_door_key": False}
def detect_wall_states(depth_buffer):
# Example logic to process the depth buffer and determine wall proximity and orientation
wall_states_detected = {"wall_to_the_left": False, "wall_to_the_right": False, "wall_in_front": False}
# Fill in the logic based on depth buffer analysis
return wall_states_detected
# Initialize variables for enemy information
enemy_in_view = 0.0
enemy_position_x = 0.0
enemy_position_y = 0.0
enemy_position_z = 0.0
enemy_angle = 0.0
enemy_pitch = 0.0
enemy_roll = 0.0
enemy_velocity_x = 0.0
enemy_velocity_y = 0.0
enemy_velocity_z = 0.0
# Retrieve the current game state once; its labels identify visible objects
state = game.get_state()
visible_objects = []
if state and state.labels:
for obj in state.labels: # Using labels for visible objects
obj_distance = np.sqrt((obj.object_position_x - player_x) ** 2 + (obj.object_position_y - player_y) ** 2 + (obj.object_position_z - player_z) ** 2)
visible_objects.append({
"label": obj.value,
"name": obj.object_name,
"distance": obj_distance,
"position": {
"x": obj.object_position_x,
"y": obj.object_position_y,
"z": obj.object_position_z,
}
})
# Identify the first enemy among the labeled objects
if state and state.labels:
for label in state.labels:
if label.object_name == "DoomPlayer" and label.object_id != 0:
# Update enemy information based on the first encountered enemy
enemy_in_view = 1.0
enemy_position_x = label.object_position_x
enemy_position_y = label.object_position_y
enemy_position_z = label.object_position_z
enemy_angle = label.object_angle
enemy_pitch = label.object_pitch
enemy_roll = label.object_roll
enemy_velocity_x = label.object_velocity_x
enemy_velocity_y = label.object_velocity_y
enemy_velocity_z = label.object_velocity_z
break # Exit loop after finding the first enemy
# https://vizdoom.farama.org/main/api/python/gameState/#data-types-used-in-gamestate
# add_available_game_variable(self: vizdoom.DoomGame, variable: vizdoom.GameVariable) -> None
# Adds the specified GameVariable to the list of available game variables (e.g. HEALTH, AMMO1, ATTACK_READY) in the GameState returned by the get_state() method.
# Has no effect when the game is running.
# Config key: availableGameVariables/available_game_variables (list of values)
# Attempt to extract the screen buffer, if available
screen_buffer = None
if state is not None and game.get_screen_format() != vzd.ScreenFormat.CRCGCB:
screen_buffer = state.screen_buffer
# Compile extracted information into a dictionary and return it
game_state_info = {
"hitcount": hitcount,
"hits_taken": hits_taken,
"dead": dead,
"health": health,
"attack_ready": attack_ready,
"enemy_in_view": enemy_in_view,
"enemy_position": {
"x": enemy_position_x,
"y": enemy_position_y,
"z": enemy_position_z
},
"enemy_angle": enemy_angle,
"enemy_pitch": enemy_pitch,
"enemy_roll": enemy_roll,
"enemy_velocity": {
"x": enemy_velocity_x,
"y": enemy_velocity_y,
"z": enemy_velocity_z
},
"screen_buffer": screen_buffer
}
return game_state_info
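A minimal driver loop tying these functions together might look like the following. This is a sketch, not the project's actual main; the paths are the hard-coded ones inside initialize_vizdoom:
game = initialize_vizdoom(None, None)  # arguments are overridden inside
game.new_episode()
while not game.is_episode_finished():
    # ATTACK only, sized to match the six available buttons
    action = [True] + [False] * 5
    reward = game.make_action(action)
    info = extract_game_state(game)
    print(reward, info["health"], info["enemy_in_view"])
game.close()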
In your D_game.py you are using these config files:
config_path = "AAA_projects/UnlimitedResearchCooperative/Synthetic_Intelligence_Labs/ViZDoom/scenarios/my_way_home.cfg"
scenario_path = "AAA_projects/UnlimitedResearchCooperative/Synthetic_Intelligence_Labs/ViZDoom/scenarios/my_way_home.wad"
But they are not included in this repository, so running the main function in the file will produce errors. Please include them in the repository.
Create a 1d_enemy_shooter_DOOM script: game + player metadata extraction (1D position of the enemy, 1D position of the player's aim) and a game-specific reward system (move the reward/punish trigger into the game script); a sketch follows.
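A rough sketch of what that metadata extraction and reward shaping could look like. The function names and reward values are placeholder assumptions, not the final design:
def extract_1d_metadata(game):
    # Collapse the aim task to one axis: player aim angle vs. enemy x-position
    state = game.get_state()
    player_aim = game.get_game_variable(vzd.GameVariable.ANGLE)
    enemy_x = None
    if state and state.labels:
        for label in state.labels:
            if label.object_name != "DoomPlayer":
                enemy_x = label.object_position_x
                break
    return {"player_aim": player_aim, "enemy_x": enemy_x}

def compute_reward(hit, shot_fired):
    # Placeholder shaping: reward hits, lightly punish wasted shots
    if hit:
        return 10.0
    if shot_fired:
        return -1.0
    return 0.0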
File used in the screenshot above (image not included in this document): CMSIS-DAP-V1-F103.hex
Here are some methods of fusion and synthesis we can use for our features (a short sketch of feature aggregation and PCA follows the list):
1. Deep Feature Synthesis (DFS)
2. Feature Embedding
3. Residual Connections
4. Attention Mechanisms
5. Bottleneck Layers
6. Skip Connections and Feature Concatenation
7. Regularization Techniques
8. Auxiliary Outputs
9. Custom Loss Functions and Multi-task Learning
10. Model Interpretability and Feature Importance Analysis
11. Feature Aggregation:
- Summation and Averaging
- Weighted Sums
- Higher-Order Combinations
- Polynomials and Cross-terms
12. Dimensionality Reduction Techniques:
- Principal Component Analysis (PCA)
- Autoencoders
13. Non-linear Combinations:
- Functions of Features
- Bucketing/Binning
14. Concatenation for Embedding Layers
15. Conditional Features
16. Clustering-Based Features
17. Temporal or Sequential Combinations
18. Feature Transformation with Domain Knowledge
19. Feature Normalization
20. Manifold Learning
21. Kernel Methods
22. Spectral Methods
23. Advanced Probabilistic Models
24. Tensor Decomposition
25. Complex Networks and Graph Analysis
26. Wavelet Transforms
27. Differential Equations and Dynamical Systems
28. Information Theory
29. Lie Groups and Differential Geometry
30. Topological Data Analysis (TDA)
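As a small illustration of items 11 and 12, here is a weighted-sum aggregation and a PCA reduction on a hypothetical feature matrix (the data and weights are placeholders):
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical feature matrix: rows = signal segments, columns = features
X = np.random.randn(1000, 30)

# 11. Feature aggregation via a weighted sum (weights are placeholders)
w = np.linspace(1.0, 0.1, X.shape[1])
aggregated = X @ w  # one scalar per segment

# 12. Dimensionality reduction with PCA, keeping 95% of the variance
pca = PCA(n_components=0.95)
X_reduced = pca.fit_transform(X)
print(X_reduced.shape, pca.explained_variance_ratio_.sum())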
Feature extraction using CNN quantization involves using a convolutional neural network (CNN) to extract meaningful features from neural data and then quantizing these features to reduce memory and computational requirements. This process allows for efficient representation of the data while preserving important information for downstream tasks such as classification or prediction.
Acquire a continuous stream of neural data:
Obtain a continuous stream of data from neural sources, such as EEG signals.
Preprocess data:
Segment the data into short intervals, normalize each segment, and filter out noise and artifacts (see the code below).
Evaluate the performance of the system:
Measure metrics such as accuracy, precision, recall, F1-score, or any domain-specific performance indicators.
Iterate and refine:
This might involve adjusting hyperparameters, refining the preprocessing steps, modifying the network architecture, or exploring alternative algorithms.
Deployment and integration:
Integrate the system into existing infrastructure or applications, ensuring compatibility and scalability.
Continuous monitoring and maintenance:
Monitor the performance of the deployed system over time. Perform maintenance tasks such as updating models with new data, retraining periodically, or addressing any issues that arise during operation.
By following these steps, we can develop a robust system for analyzing neural data using a quantized CNN and leverage the extracted features for various applications.
Implementing the entire process of feature extraction and CNN-based quantization in a single code snippet would be quite complex. However, I can provide you with a simplified Python code example that demonstrates the basic steps of feature extraction and quantization using a hypothetical dataset and a simple CNN architecture.
Here's a basic outline of the code:
import numpy as np
import tensorflow as tf
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
# Step 1: Acquire continuous stream of neural data (assumed to be already available)
# Load your data here...
# Step 2: Preprocess data
# Assuming data is stored in X_data and labels in y_data
# You may need to define functions for filtering and feature extraction
def preprocess_data(X_data):
# Segment data into 0.25-second intervals
segments = segment_data(X_data, segment_length=0.25)
# Normalize each segment
scaler = StandardScaler()
normalized_segments = [scaler.fit_transform(seg) for seg in segments]
# Apply filtering to remove noise and artifacts
filtered_segments = [apply_filtering(seg) for seg in normalized_segments]
# Extract features
features = [extract_features(seg) for seg in filtered_segments]
return features
# Define helper functions for segmentation, filtering, and feature extraction...
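# A minimal sketch of those helpers (window length, sampling rate, and filter
# band are assumptions, not project-specified values):
def segment_data(X, segment_length=0.25, fs=1000):
    # Split a (n_samples, n_channels) array into fixed-length windows
    win = int(segment_length * fs)
    return [X[i:i + win] for i in range(0, len(X) - win + 1, win)]

def apply_filtering(segment, fs=1000):
    # Band-pass 1-100 Hz with a zero-phase Butterworth filter (scipy)
    from scipy.signal import butter, filtfilt
    b, a = butter(4, [1, 100], btype="bandpass", fs=fs)
    return filtfilt(b, a, segment, axis=0)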
# Step 3: Extract features (assumed to be implemented)
def extract_features(segment):
# Compute various features such as peak heights, variance, etc.
# Perform spectral analysis
# Compute other features like centroids, zero-crossing rate, etc.
return computed_features
# Step 4: Design CNN architecture
def create_model(input_shape, num_classes):
model = tf.keras.Sequential([
tf.keras.layers.Conv1D(32, 3, activation='relu', input_shape=input_shape),
tf.keras.layers.MaxPooling1D(2),
tf.keras.layers.Conv1D(64, 3, activation='relu'),
tf.keras.layers.MaxPooling1D(2),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(num_classes, activation='softmax')
])
return model
# Step 5: Train CNN on extracted features
X_train, X_test, y_train, y_test = train_test_split(X_data, y_data, test_size=0.2, random_state=42)
X_train_features = preprocess_data(X_train)
X_test_features = preprocess_data(X_test)
num_classes = len(np.unique(y_data))  # derive the class count from the labels
model = create_model(input_shape=X_train_features[0].shape, num_classes=num_classes)
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.fit(np.array(X_train_features), np.array(y_train), epochs=10, batch_size=32, validation_split=0.1)
# Step 6: Quantize CNN weights and activations (optional)
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
quantized_tflite_model = converter.convert()
# Step 7: Apply quantized CNN for feature quantization
interpreter = tf.lite.Interpreter(model_content=quantized_tflite_model)
interpreter.allocate_tensors()
input_index = interpreter.get_input_details()[0]['index']
output_index = interpreter.get_output_details()[0]['index']
quantized_features = []
for segment in X_new_data:
# Assuming segment is preprocessed
input_data = np.expand_dims(segment, axis=0).astype(np.float32)
interpreter.set_tensor(input_index, input_data)
interpreter.invoke()
output_data = interpreter.get_tensor(output_index)
quantized_features.append(output_data)
quantized_features = np.array(quantized_features)
# Step 8: Utilize quantized features for downstream tasks
# Note: the TFLite outputs above are already class probabilities, so take the
# argmax directly rather than feeding them back through the Keras model
predictions = np.argmax(quantized_features, axis=-1)
# Step 9: Evaluate the performance of the system
test_loss, test_accuracy = model.evaluate(np.array(X_test_features), np.array(y_test))
print("Test Accuracy:", test_accuracy)
# Step 10: Iterate and refine (if necessary)
# You can iterate on the model architecture, hyperparameters, or preprocessing steps based on performance evaluation.
# For example, you might adjust the learning rate, add regularization, or experiment with different network architectures.
# Example:
# model = create_model(input_shape=X_train_features[0].shape, num_classes=num_classes)
# model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001), loss='sparse_categorical_crossentropy', metrics=['accuracy'])
# model.fit(np.array(X_train_features), np.array(y_train), epochs=20, batch_size=32, validation_split=0.1)
# Step 11: Deployment and integration
# Deploy the model for real-world use, integrating it into your application or infrastructure.
# Depending on your deployment environment, you might deploy as a web service, mobile app, or embedded system.
# Example:
# Save the quantized model for deployment
# with open("quantized_model.tflite", "wb") as f:
# f.write(quantized_tflite_model)
# Step 12: Continuous monitoring and maintenance
# Monitor the deployed system's performance and perform maintenance tasks as needed.
# This might involve retraining the model with new data, updating the model architecture, or addressing any issues that arise in production.
# Example:
# Load the deployed model
# interpreter = tf.lite.Interpreter(model_path="quantized_model.tflite")
# interpreter.allocate_tensors()
# Perform inference on new data
# input_index = interpreter.get_input_details()[0]['index']
# output_index = interpreter.get_output_details()[0]['index']
# input_data = np.expand_dims(new_data, axis=0).astype(np.float32)
# interpreter.set_tensor(input_index, input_data)
# interpreter.invoke()
# output_data = interpreter.get_tensor(output_index)
# Perform further processing or actions based on the output_data
We need to experiment with various combinations of features for intuitive data visualisations, balancing the time it takes to plot against the computational expense.
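For example, a quick way to compare a couple of feature combinations visually, assuming a feature matrix like the one above (matplotlib; the feature indices are placeholders):
import numpy as np
import matplotlib.pyplot as plt

features = np.random.randn(1000, 30)  # placeholder: segments x features

fig, axes = plt.subplots(1, 2, figsize=(10, 4))
axes[0].scatter(features[:, 0], features[:, 1], s=4)
axes[0].set_title("feature 0 vs feature 1")
axes[1].hist(features[:, 0] * features[:, 1], bins=50)  # a simple cross-term
axes[1].set_title("cross-term distribution")
plt.tight_layout()
plt.show()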