sparks-baird / self-driving-lab-demo

Software and instructions for setting up and running a self-driving lab (autonomous experimentation) demo using dimmable RGB LEDs, an 8-channel spectrophotometer, a microcontroller, and an adaptive design algorithm, as well as extensions to liquid- and solid-based color matching demos.

Home Page: https://self-driving-lab-demo.readthedocs.io/

License: MIT License

Topics: automation, materials-informatics, materials-science, optics, self-driving-lab, as7341, circuitpython, micropython, neopixel, pico-w

self-driving-lab-demo's Introduction


If you're reading this on GitHub, navigate to the documentation for tutorials, APIs, and more

self-driving-lab-demo Open In Colab

Software and instructions for setting up and running an autonomous (self-driving) laboratory optics demo using dimmable RGB LEDs, an 8-channel spectrophotometer, a microcontroller, and an adaptive design algorithm, as well as extensions to liquid- and solid-based color matching demos.

Demos

This repository covers three teaching and prototyping demos for self-driving laboratories in the fields of optics (light-mixing), chemistry (liquid-mixing), and solid-state materials science (solid-mixing).

CLSLab:Light

NOTE: Some updates have occurred since the creation of the video tutorial and the publication of the manuscript. Please read the description section of the YouTube video and see #245.

White paper [postprint] · Build instructions manuscript · YouTube build instructions · Purchase*

Self-driving labs are the future; however, the capital and expertise required can be daunting. We introduce the idea of an experimental optimization task for less than $100, a square foot of desk space, and an hour of total setup time from the shopping cart to the first "autonomous drive." For our first demo, we use optics rather than chemistry; after all, light is easier to move than matter. While not strictly materials-based, importantly, several core principles of a self-driving materials discovery lab are retained in this cross-domain example:

  • sending commands to hardware to adjust physical parameters
  • receiving measured objective properties
  • decision-making via active learning
  • utilizing cloud-based simulations

The demo is accessible, extensible, modular, and repeatable, making it an ideal candidate for both low-cost experimental adaptive design prototyping and learning the principles of self-driving laboratories in a low-risk setting.

(Images: summary, unassembled kit, assembled kit)

Users

University instructors utilizing CLSLab:Light during Spring 2023: 4 (~40 kits in total)

At-cost Commercialization

  • GroupGets round 1: funded and fulfilled (19 kits)
  • GroupGets round 2: funded and fulfilled (20 kits)

*CLSLab:Light is stocked in the GroupGets Store. It has a higher GroupGets fee (only GroupGets sees the extra profit). If you don't want to wait for new rounds and you'd rather order a pre-built kit, this is the best option right now.

CLSLab:Liquid

Bill of materials

We extend the light-mixing demo to a color-matching materials optimization problem using dilute colored dyes. This optimization task costs less than 300 USD, requires less than three square feet of desk space, and less than three hours of total setup time from the shopping cart to the first “autonomous drive.” The demo is modular and extensible; additional peristaltic pump channels can be added, the dye reservoirs can be increased, and chemically sensitive parts can be replaced with chemically resistant ones.

(Images: summary, schematic, assembled demo)

CLSLab:Solid

There are few to no examples of a low-cost demo platform involving the handling of solid-state materials (i.e., powders, pellets). For this demo, we propose using red, yellow, and blue powdered wax as a replacement for the liquid colored dyes. The demo is more expensive due to the need for robotics. It involves transferring tealight candle holders to a rotating stage via a robotic arm, dispensing a combination of powders, melting the wax with an incandescent light bulb, measuring a discrete color spectrum, and moving the completed sample to a separate storage area.

(Image: CLSLab:Solid)

See Also

Basic Usage

I recommend going through the introductory Colab notebook, but here is a shorter version of how an optimization comparison can be run between grid search, random search, and Bayesian optimization using a free public demo.

Basic Installation

pip install self-driving-lab-demo

Client Setup for Public Test Demo

from self_driving_lab_demo import (
    SelfDrivingLabDemoLight,
    # SelfDrivingLabDemoLiquid,
    mqtt_observe_sensor_data,
    get_paho_client,
)

PICO_ID = "test"
sensor_topic = f"sdl-demo/picow/{PICO_ID}/as7341/"  # to match with Pico W code

# instantiate client once and reuse to avoid opening too many connections
client = get_paho_client(sensor_topic)

sdl = SelfDrivingLabDemoLight(
    autoload=True,  # perform target data experiment automatically, default is False
    observe_sensor_data_fn=mqtt_observe_sensor_data,  # default
    observe_sensor_data_kwargs=dict(pico_id=PICO_ID, client=client),
    simulation=False,  # default
)

Optimization Comparison

from self_driving_lab_demo.utils.search import (
    grid_search,
    random_search,
    ax_bayesian_optimization,
)

num_iter = 27

grid, grid_data = grid_search(sdl, num_iter)
random_inputs, random_data = random_search(sdl, num_iter)
best_parameters, values, experiment, model = ax_bayesian_optimization(sdl, num_iter)

Visualization

import plotly.express as px
import pandas as pd

# grid
grid_input_df = pd.DataFrame(grid)
grid_output_df = pd.DataFrame(grid_data)[["frechet"]]
grid_df = pd.concat([grid_input_df, grid_output_df], axis=1)
grid_df["best_so_far"] = grid_df["frechet"].cummin()

# random
random_input_df = pd.DataFrame(random_inputs, columns=["R", "G", "B"])
random_output_df = pd.DataFrame(random_data)[["frechet"]]
random_df = pd.concat([random_input_df, random_output_df], axis=1)
random_df["best_so_far"] = random_df["frechet"].cummin()

# bayes
trials = list(experiment.trials.values())
bayes_input_df = pd.DataFrame([t.arm.parameters for t in trials])
bayes_output_df = pd.Series(
    [t.objective_mean for t in trials], name="frechet"
).to_frame()
bayes_df = pd.concat([bayes_input_df, bayes_output_df], axis=1)
bayes_df["best_so_far"] = bayes_df["frechet"].cummin()

# concatenation
grid_df["type"] = "grid"
random_df["type"] = "random"
bayes_df["type"] = "bayesian"
df = pd.concat([grid_df, random_df, bayes_df], axis=0)

# plotting
px.line(df, x=df.index, y="best_so_far", color="type").update_layout(
    xaxis_title="iteration",
    yaxis_title="Best error so far",
)

Example Output

Advanced Installation

PyPI

conda create -n self-driving-lab-demo python=3.10.*
conda activate self-driving-lab-demo
pip install self-driving-lab-demo

Local

In order to set up the necessary environment:

  1. review and uncomment what you need in environment.yml and create an environment self-driving-lab-demo with the help of conda:
    conda env create -f environment.yml
    
  2. activate the new environment with:
    conda activate self-driving-lab-demo
    

NOTE: The conda environment will have self-driving-lab-demo installed in editable mode. Some changes, e.g. in setup.cfg, might require you to run pip install -e . again.

Optional and needed only once after git clone:

  1. install several pre-commit git hooks with:

    pre-commit install
    # You might also want to run `pre-commit autoupdate`

    and check out the configuration under .pre-commit-config.yaml. The -n, --no-verify flag of git commit can be used to deactivate pre-commit hooks temporarily.

  2. install nbstripout git hooks to remove the output cells of committed notebooks with:

    nbstripout --install --attributes notebooks/.gitattributes

    This is useful to avoid large diffs due to plots in your notebooks. A simple nbstripout --uninstall will revert these changes.

Then take a look into the scripts and notebooks folders.

Dependency Management & Reproducibility

  1. Always keep your abstract (unpinned) dependencies updated in environment.yml and eventually in setup.cfg if you want to ship and install your package via pip later on.
  2. Create concrete dependencies as environment.lock.yml for the exact reproduction of your environment with:
    conda env export -n self-driving-lab-demo -f environment.lock.yml
    For multi-OS development, consider using --no-builds during the export.
  3. Update your current environment with respect to a new environment.lock.yml using:
    conda env update -f environment.lock.yml --prune

Project Organization

├── AUTHORS.md              <- List of developers and maintainers.
├── CHANGELOG.md            <- Changelog to keep track of new features and fixes.
├── CONTRIBUTING.md         <- Guidelines for contributing to this project.
├── Dockerfile              <- Build a docker container with `docker build .`.
├── LICENSE.txt             <- License as chosen on the command-line.
├── README.md               <- The top-level README for developers.
├── configs                 <- Directory for configurations of model & application.
├── data
│   ├── external            <- Data from third party sources.
│   ├── interim             <- Intermediate data that has been transformed.
│   ├── processed           <- The final, canonical data sets for modeling.
│   └── raw                 <- The original, immutable data dump.
├── docs                    <- Directory for Sphinx documentation in rst or md.
├── environment.yml         <- The conda environment file for reproducibility.
├── models                  <- Trained and serialized models, model predictions,
│                              or model summaries.
├── notebooks               <- Jupyter notebooks. Naming convention is a number (for
│                              ordering), the creator's initials and a description,
│                              e.g. `1.0-fw-initial-data-exploration`.
├── pyproject.toml          <- Build configuration. Don't change! Use `pip install -e .`
│                              to install for development or `tox -e build` to build.
├── references              <- Data dictionaries, manuals, and all other materials.
├── reports                 <- Generated analysis as HTML, PDF, LaTeX, etc.
│   └── figures             <- Generated plots and figures for reports.
├── scripts                 <- Analysis and production scripts which import the
│                              actual `self_driving_lab_demo` package, e.g. train_model.
├── setup.cfg               <- Declarative configuration of your project.
├── setup.py                <- [DEPRECATED] Use `python setup.py develop` to install for
│                              development or `python setup.py bdist_wheel` to build.
├── src
│   └── self_driving_lab_demo <- Actual Python package where the main functionality goes.
├── tests                   <- Unit tests which can be run with `pytest`.
├── .coveragerc             <- Configuration for coverage reports of unit tests.
├── .isort.cfg              <- Configuration for git hook that sorts imports.
└── .pre-commit-config.yaml <- Configuration of pre-commit git hooks.

Note

This project has been set up using PyScaffold 4.2.3.post1.dev10+g7a0f254 and the dsproject extension 0.7.2.post1.dev3+g948a662.


self-driving-lab-demo's Issues

science-as-a-service model for demos

Science-as-a-service is a compelling concept to me, i.e., the equivalent of Amazon AWS, but for experiments instead of compute time. What would a subscription model for larger/higher-cost demos look like?

Which microcontroller/computer to use? RPi, Arduino, which model

  • RPi Pico W
  • RPi Zero W 2
  • RPi 4B
  • Arduino Uno

Re: Hardware: Why not just run it off a Raspberry Pi Pico (you can also get them with Stemma connectors, no supply shortage, only $4) and do the algorithmic stuff on a laptop? Send commands to set the LEDs and read the current spectrometer reading by USB serial

https://twitter.com/JoshuaSchrier/status/1543374043230904321

Joshua Schrier suggested using a Pico and offloading the decision-making to a computer. I was thinking it would be nice to be able to run the module standalone, with it connected to WiFi. A USB connection is probably fine for the demo, but I wonder if it might be better to have it connected to WiFi for the scale-up/transition to a "real" task, especially if there are going to be interactions/submissions via Google Colab. In that case, a free Colab session would probably time out by the time the real experiment has completed the optimization.

Pico not being recognized by SparksOne computer

Will need to check the brand again, but I think it's a custom build from a company. I should also probably try with a different USB cable, but it seems likely to me that it's an issue with the computer hardware.

Feature Request: surrogate models of the objectives

Hey @sgbaird!

This repo is super cool! It is great to see Ax is useful for these optimization problems.

In the interest of lightweight R&D, it would be awesome if this repo had multi-fidelity surrogate models of the objective functions. This would make it easier to develop better Bayesian optimization methods (and run multiple replications of optimization loops), without needing the custom hardware.

Would it be possible to add some multi-fidelity surrogate models of the objective functions (e.g. Random Forests) to the repo that could be downloaded and used?

Thanks!

cc @eytan @Balandat
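
A minimal single-fidelity sketch of the idea, assuming past (R, G, B) → frechet observations have been exported to a CSV (the file name and column names here are hypothetical placeholders); a true multi-fidelity setup would add a fidelity parameter on top of this:

# Sketch only: train a Random Forest surrogate on logged observations so that
# optimization algorithms can be benchmarked without the hardware.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

history = pd.read_csv("logged_experiments.csv")  # columns: R, G, B, frechet (assumed)
X, y = history[["R", "G", "B"]], history["frechet"]
surrogate = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

def surrogate_objective(R, G, B):
    # cheap stand-in for the hardware objective during algorithm development
    return float(surrogate.predict([[R, G, B]])[0])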

Throw a better error than Queue.Empty when timing out

RayTaskError(Empty): ray::evaluate() (pid=21684, ip=127.0.0.1)
  File "python\ray\_raylet.pyx", line 662, in ray._raylet.execute_task
  File "python\ray\_raylet.pyx", line 666, in ray._raylet.execute_task
  File "C:\Users\sterg\AppData\Local\Temp\ipykernel_31088\897486387.py", line 22, in evaluate
  File "C:\Users\sterg\Documents\GitHub\sparks-baird\self-driving-lab-demo\src\self_driving_lab_demo\core.py", line 220, in evaluate
    results = self.observe_sensor_data(R, G, B, atime=atime, astep=astep, gain=gain)
  File "C:\Users\sterg\Documents\GitHub\sparks-baird\self-driving-lab-demo\src\self_driving_lab_demo\core.py", line 152, in observe_sensor_data
    return self.observe_sensor_data_fn(
  File "C:\Users\sterg\Documents\GitHub\sparks-baird\self-driving-lab-demo\src\self_driving_lab_demo\utils\observe.py", line 96, in mqtt_observe_sensor_data
    sensor_data = sensor_data_queue.get(True, queue_timeout)
  File "c:\Users\sterg\Miniconda3\envs\sdl-demo\lib\queue.py", line 179, in get
    raise Empty
_queue.Empty
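
One possible fix (a sketch, not the repo's current implementation) is to catch the bare Empty where the queue is read and re-raise it as a more descriptive error; the variable names below follow the traceback above:

from queue import Empty

try:
    sensor_data = sensor_data_queue.get(True, queue_timeout)
except Empty as e:
    raise TimeoutError(
        f"No sensor data received within {queue_timeout} seconds. Check that the "
        "Pico W is powered on, connected to WiFi, and publishing to the expected "
        "MQTT topic."
    ) from e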

public-private key for hardware validation

Related:

Based on discussion with @rekumar #127 (comment):

  1. Maybe a public-private key encryption scheme (like RSA, python example) is appropriate here?
    Example: the private key lives on your hardware device, the public key is the device identifier within your database. Any data generated by the device is terminated with some signature, then encrypted using the private key. Your database API then decrypts the data using the public key and checks for the signature before accepting the new data.

As a bonus, you can use these keys to do device-side validation (using its private key) to block instructions from bad actors. Instructions sent to the device would be encrypted using its public key, then decrypted on the device side to check the signature. In this scenario, maybe you don't want to use the public key as the device's ID in the database... you could encrypt the device's ID using a server-side private key to hide this from external viewers of the database.
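
To make the scheme concrete, here is a hedged server-side sketch of the sign-and-verify half using the `cryptography` package (an illustration of the idea above, not code from the repo; the device side on the Pico W would need MicroPython primitives, see below):

# Illustration only: the device would hold private_key and sign each payload;
# the database/API would hold public_key and verify before accepting data.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

payload = b'{"ch410": 123, "ch440": 456}'  # hypothetical sensor reading

pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)
signature = private_key.sign(payload, pss, hashes.SHA256())

# raises cryptography.exceptions.InvalidSignature if the payload was tampered with
public_key.verify(signature, payload, pss, hashes.SHA256())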

#127 (reply in thread):

For encryption on the Pico W, probably: https://docs.micropython.org/en/latest/library/cryptolib.html
For random ID generation on the Pico W, probably: https://github.com/pfalcon/pycopy-lib/blob/master/uuid/uuid.py

Aside: unique hardware ID generation code given at

from machine import unique_id
from ubinascii import hexlify  # MicroPython built-ins on the Pico W

my_id = hexlify(unique_id()).decode()

secrets.py: invalid syntax for integer with base 10


The person mentioned the password only had integers and letters, and that it worked as expected the second time.

As long as the SSID and password are wrapped in a string and neither uses outlandish characters, it really shouldn't matter what's in there. My best guess is that the person had a syntax error in the Python file. Posting here for provenance; if someone else runs into this issue, please post here as well.
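
For reference, a minimal secrets.py looks something like the following (the variable names are illustrative; use whichever names the Pico W script actually imports, and keep both values as quoted Python strings):

# secrets.py: minimal illustration with placeholder values
SSID = "my-2.4GHz-network"
PASSWORD = "my-wifi-password"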

Issues connecting to WiFi

Two people reported having issues with connecting to WiFi. Note that the Pico W only supports connecting to 2.4 GHz WiFi networks, not 5 GHz. Also, connecting to a WiFi network that has a captive portal (e.g., most hotel and coffee shop WiFi) or WPA-Enterprise authentication (e.g., Eduroam) won't work out-of-the-box on the Pico W. As long as it doesn't violate the terms of service for the WiFi network in question, you can probably use MAC spoofing (see discussion at https://github.com/orgs/micropython/discussions/9264). I don't know how to do MAC spoofing with the Pico W, but if someone figures this out, please post here!

Also note that mobile hotspots and home networks sometimes default to broadcasting a 5 GHz network, but there may be an option in the hotspot settings to switch it to 2.4 GHz ("extended compatibility", for example).

I tested with my own hotspot on a Pixel 4, and it didn't connect at first. After I toggled the "Extend compatibility" option, which "Helps other devices find this hotspot. Reduces hotspot connection speed" (Settings --> Network --> Hotspot --> WiFi Hotspot --> Extend Compatibility), i.e., switches the hotspot to 2.4 GHz instead of 5 GHz, it was able to connect. I think I read somewhere that whether the hotspot broadcasts at 5 GHz or 2.4 GHz can also depend on whether the phone is already connected to a WiFi network.

See also george-hawkins/micropython-wifi-setup#4

EDIT: You may also need to turn off "Limit IP Address Tracking" on iPhones.
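
For quick troubleshooting from the REPL, a minimal MicroPython connection check looks something like this (SSID and password are placeholders, and this is a generic sketch rather than the repo's startup code):

import time
import network

wlan = network.WLAN(network.STA_IF)
wlan.active(True)
wlan.connect("my-2.4GHz-network", "my-wifi-password")

for _ in range(20):  # wait up to ~20 seconds
    if wlan.isconnected():
        print("connected:", wlan.ifconfig())
        break
    time.sleep(1)
else:
    print("failed to connect; check that the network is 2.4 GHz with no captive portal")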

Suggestions for optimization algorithms to test

design of experiments (DOE)

(suggestion by @kjelljorner, see twitter post)

L-BFGS

(suggestion by @CompRhys, see twitter post)

derivative-free method, e.g. simplex

(suggestion by @WardLT, see twitter post)

Olympus

Supports various optimization algorithms in addition to benchmarks (suggestion by Alan Aspuru-Guzik, see twitter post, cc @rileyhickman). Includes many options from above.

Supported algorithms in Olympus

From Colab notebook

['BasinHopping',
 'Cma',
 'ConjugateGradient',
 'DifferentialEvolution',
 'Genetic',
 'Gpyopt',
 'Grid',
 'Hyperopt',
 'LatinHypercube',
 'Lbfgs',
 'ParticleSwarms',
 'Phoenics',
 'RandomSearch',
 'Simplex',
 'Slsqp',
 'Snobfit',
 'Sobol',
 'SteepestDescent']

Olympus also has a nice page categorizing the algorithms into "Bayesian", "Evolutionary", "Gradient", "Grid-like", and "Other". This is right in line with a results figure I'm thinking about that uses plotly legend groups (see group click toggle behavior).

Ensure that main public-facing tutorial can run to completion

MongoDB logging stopped working on the device after some time

I noticed that the experiments were running a few seconds faster - because the MongoDB logging stopped on the device side for some reason. The results were still collected by the client but not stored in the database. Resetting the device caused the experiment logging to resume.

The actual errors may be embedded in the results stored in the Jupyter notebook kernel I'm running. Too much time had passed to easily check whether it was an issue of too many connections or some other resource limitation imposed by MongoDB; it seems more likely to have been recurring ENOMEM errors on the Pico W (i.e., running out of RAM).

Issue connecting to AS7341

From another user:

%Run -c $EDITOR_CONTENT
prefix: sdl-demo/picow/<PICO_ID removed>/
Detected devices at I2C-addresses:
I2C read_byte at 0xA9, error [Errno 5] EIO
I2C write_byte at 0xA9, error [Errno 5] EIO
I2C write_byte at 0x70, error [Errno 5] EIO
I2C read_byte at 0xA9, error [Errno 5] EIO
I2C write_byte at 0xA9, error [Errno 5] EIO
I2C write_byte at 0x80, error [Errno 5] EIO
I2C write_byte at 0x80, error [Errno 5] EIO
I2C read_byte at 0x92, error [Errno 5] EIO
Failed to contact AS7341 at I2C address 0x39
Traceback (most recent call last):
  File "<stdin>", line 24, in <module>
  File "/lib/as7341_sensor.py", line 58, in __init__
ExternalDeviceNotFound: Failed to contact AS7341, terminating
>>>

Ensure that the AS7341 sensor is connected to Grove Port #6 and that a green indicator LED is lit on the sensor. If the indicator LED doesn't light up, the sensor or cable may be broken, or there may be an issue with the power supplied by the Pico W.
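
A quick way to check whether the sensor is visible on the bus at all is an I2C scan from the REPL; this is a generic MicroPython sketch, and the SDA/SCL pin numbers below are placeholders rather than the specific pins wired to the Grove port:

from machine import Pin, SoftI2C

i2c = SoftI2C(sda=Pin(26), scl=Pin(27))  # placeholder pins; use your board's wiring
addresses = i2c.scan()
print([hex(a) for a in addresses])
print("AS7341 found" if 0x39 in addresses else "AS7341 (0x39) not detected")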

Wire gauges

The 18 gauge electrical wire I've linked to elsewhere may be too thick, and the 20 gauge wire I linked to may be too flimsy. Planning to update soon. The 14 gauge sculpting wire from Amazon should work just fine, though.

write backup data to SD card slot if present

Some options:

  • Append to CSV file (see the sketch after this list)
  • Separate via directory structure
    • day of experiment
    • user-supplied project name
  • file dump in main folder
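
As a sketch of the "append to CSV" option, assuming an SD card is already mounted at /sd and that `payload` is the dict of sensor data being logged:

import os

def append_backup(payload, path="/sd/backup.csv"):
    # write a header the first time, then append one row per experiment
    try:
        os.stat(path)
        write_header = False
    except OSError:
        write_header = True
    with open(path, "a") as f:
        if write_header:
            f.write(",".join(payload.keys()) + "\n")
        f.write(",".join(str(v) for v in payload.values()) + "\n")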

Tutorials / example code:

Asynchronous MQTT and general MQTT resources

Pyserial general resources

Potential for non-linear correlations between parameters?

"Mixing" usually implies first-order, linear relationships. What about higher-order correlations between input parameters? Is there a way this could be introduced physically or artificially into the setup?

I need to think about this one more.

Suggestions for assessing performance

Suggestion by @BAMcvoelker, see twitter post:

You could also plot the sample performance (e.g. in terms of performance quantile) instead of model performance (in terms of MAE). The plot would show success rate VS draws. This would have the advantage to be data set invariant for random and grid search.

I think this is similar to (if not the same as) this Towards Data Science post comparing grid, random, and Bayesian search.

Oh, also realizing I might have misinterpreted the suggestion (and caused some confusion by not including enough info). MAE refers to the MAE between some fixed target spectrum and an observed spectrum. Hence the MAE isn't referring to model performance from a regression quality perspective, but rather how well we match a (discrete) target spectrum. @BAMcvoelker, ignore this comment if that was already clear.
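
A hedged sketch of that view, reusing the `df` built in the Visualization section of the README above; here the quantile is computed against the pooled set of observed frechet values, which is only a proxy for the true objective distribution:

import plotly.express as px

pooled = df["frechet"].sort_values().values
df["quantile_so_far"] = df["best_so_far"].apply(lambda v: (pooled <= v).mean())
px.line(df, x=df.index, y="quantile_so_far", color="type").update_layout(
    xaxis_title="number of draws",
    yaxis_title="best-so-far quantile (lower is better)",
)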

RGB + brightness has a degenerate dimension

i.e. there are a minimum of 3 "true" underlying parameters, namely the brightness of the red LED, the brightness of the green LED, and the brightness of the blue LED. RGB + brightness is more convenient from an understanding point of view - i.e. fix the color and change the brightness for the fixed color. In the end, RGB + brightness has to convert to currents supplied to each of the 3 LEDs. This is something I've been aware of, but I think I'll leave it as-is for now until I do a demo tutorial showing how to remove the degeneracy from the search space. I think it's a good teaching example since degeneracies come into play with just about every materials optimization task.
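
As a toy illustration of the degeneracy (a sketch, not the NeoPixel driver code), scaling RGB by brightness collapses the four inputs to three per-channel intensities, so distinct (color, brightness) pairs map to the same physical state:

def to_channel_intensities(R, G, B, brightness):
    # map RGB (0-255) plus brightness (0-1) to three per-LED intensities
    return (R * brightness / 255, G * brightness / 255, B * brightness / 255)

# two different (color, brightness) settings give the same physical state
assert to_channel_intensities(255, 128, 0, 0.5) == to_channel_intensities(127.5, 64, 0, 1.0)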

Something also worth noting is that the search space might be slightly larger if the currents are directly accessed (not sure how to do this at the moment, but it seems like I'd need to write out custom buffer arrays or do something relatively low-level).

Not exactly sure how the degeneracy affects #6 and #9.

Last, the brightness values are 32-bit according to the datasheet, though they're currently being treated as float values.

Having trouble getting the ArduCam SPI (MP2) camera to work

pydantic minimal working example for experiment input validation (and some serverless options)

From #127 (reply in thread) by @rekumar:

Have you looked at pydantic for this kind of thing? Its pretty nice, clear to work with, and stitches into API's very cleanly. Example for validating job submission to our API:

Defining our Data Model:
https://github.com/CederGroupHub/alab_management/blob/94d02870623eb198e663e7789021e7f3596768c3/alab_management/experiment_view/experiment.py

Validating incoming jobs against the Data Model
https://github.com/CederGroupHub/alab_management/blob/94d02870623eb198e663e7789021e7f3596768c3/alab_management/dashboard/routes/experiment.py#L14-L29

This would be at the application side (perhaps in the observation function).
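
As a hedged illustration of what that could look like for the light-mixing demo (field names and bounds here are assumptions, not the repo's actual schema):

from pydantic import BaseModel, Field, ValidationError

class LightMixingRequest(BaseModel):
    R: int = Field(ge=0, le=255)
    G: int = Field(ge=0, le=255)
    B: int = Field(ge=0, le=255)
    atime: int = Field(default=100, ge=0, le=255)    # AS7341 integration-time setting
    astep: int = Field(default=999, ge=0, le=65534)  # AS7341 integration-step setting
    gain: float = Field(default=128.0, gt=0)

try:
    LightMixingRequest(R=300, G=10, B=10)  # rejected: R out of range
except ValidationError as e:
    print(e)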

Once pydantic is implemented in the codebase, I'd like to figure out a way to put a virtual- or hardware-based "firewall" in-between the user requesting the experiment and the device(s) carrying out the commands, in line with #127 (reply in thread):

For SDLs with multiple steps (i.e. multiple pieces of hardware that require separate communication), I lean toward the idea of the user communicating with a central processor as a "firewall" of sorts (e.g., physical hardware or cloud-based server) rather than having the user directly communicate with the individual pieces of hardware. Something where the scope of users that can change code on the physical hardware, server, etc. is narrower than the scope of users that can send requests to it.

I.e., a way to expose a controlled instance of the API pinned to a specific package version via HTTP requests, for free.

Google search: host python package api for free

Vercel

RGB LEDs vs. having 10+ monochromatic light sources

An array of RGB LEDs + a spectrometer might not be the best match-up. RGB LEDs + a sensor designed for a single color value (e.g. RGB Color Sensor with IR filter and White LED - TCS34725) or monochromatic lasers (e.g. Laser Diode - 5mW 650nm Red) + the 10-channel spectrometer might be a better match-up. Tunable laser(s) probably too expensive.

See also https://www.adafruit.com/product/3595

Will see once the hardware comes in and I start testing.

access model example(s)

Instead of the FIFO, no-authentication MQTT model, something that onboards new users, performs workload management and job scheduling, and imposes user access restrictions.

Alternative demo ideas

In a similar vein to a Rubens tube, vibrating a drumhead or a liquid surface and trying to match particular harmonics using speakers as inputs (a 2D signal example).

In a 3D signal example, measuring WiFi strength in 3D Euclidean space (maybe with separate sensors) where the WiFi source positions and/or sensor positions can be adjusted, and trying to match a target distribution of the WiFi signal. Starts to become reminiscent of DFT relaxation, where multiple local optima may start to appear. See also https://www.google.com/search?q=wifi+heatmapper

In a materials example, perhaps liquid suspensions of particles that are being stirred, mixing the solutions, and measuring the packing fraction (visually?) of a dried solution based on drying temperature and composition? Wash out the container with vigorous mixing and then extract the waste into a waste container (all via peristaltic pumps most likely).

Might want to look at other low-cost sensors for chemistry, physics, and materials science that might not appear on Adafruit. pH is a nice idea, one that appeared in one of the papers.

EDIT: nice example of motor control with an RPi Pico W

CytronTechnologies/MAKER-PI-PICO#13
