
jeans's Introduction

jeans

An exploration of multi-level selection (group selection) by simulation with a genetic algorithm and deep learning in Python.

The goal of this repository is to better understand this weird concept known as "group selection" by programming an evolutionary simulation in which groups compete with each other and the members of each group can choose whether or not to cooperate with their groupmates. Hopefully, altruism will emerge, but I am also unsure which fundamental parts of biology I misunderstand. So that's another goal: to learn what I don't know about biology by blindly simulating a small piece of it.

Find more technical details in issue #1.

Inspirational Readings/Lectures

Tools

  • Pymunk - physics engine
  • Pyglet - game/visualization library
  • Numpy - multi-dimensional math library
  • Keras - deep learning library

Running The Simulation

Make a Python virtual environment

python3 -m venv venv

Activate the virtual environment

source venv/bin/activate

Install all of the requirements

pip install -r requirements.txt

Enter the matrix

python3 sim.py


jeans's Issues

The Plan - Getting To v1.0

Overview

This issue will describe the plan to get to a minimum viable simulation. It's like a software requirements document, but with room for more technical details.

Step 0: Make A Simple Physics-based 2D Environment 🛠

This step partially satisfies my need for cognitive closure by starting with something to check off, but it also specifies the environment where the organisms live.

Requirements For The Simulation Environment

  • The simulation has basic physics/collisions/etc
    • Pymunk takes care of physics
  • The simulation can draw shapes (at least circles, squares, triangles, line-segments)
    • Pymunk makes shapes and Pyglet displays shapes
  • The simulation has outer borders (or at least some way to keep organisms in a finite space... yes, limited energy per organism and a food source clustered in a small area is a reasonable solution); see the borders in the sketch after this list
  • The simulation has arbitrarily placed walls/caves/etc.
    • Edit: September 8, 2019 Marking as a low priority task for now because I want to minimize the complexity of the basic simulation to begin with.
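
Below is a minimal sketch of what Step 0 might look like with Pymunk and Pyglet. The window size, border placement, and the single bouncing circle are illustrative assumptions, not this repo's actual code.

import pyglet
import pymunk
import pymunk.pyglet_util

window = pyglet.window.Window(800, 600)
draw_options = pymunk.pyglet_util.DrawOptions()

space = pymunk.Space()
space.gravity = (0, 0)  # top-down world, so no gravity

# Outer borders: four static segments that keep organisms in a finite space.
corners = [(0, 0), (800, 0), (800, 600), (0, 600)]
for a, b in zip(corners, corners[1:] + corners[:1]):
    border = pymunk.Segment(space.static_body, a, b, 1)
    border.elasticity = 0.9
    space.add(border)

# One stand-in "organism": a dynamic circle that bounces off the borders.
body = pymunk.Body(mass=1, moment=pymunk.moment_for_circle(1, 0, 10))
body.position = (400, 300)
body.velocity = (120, 80)
space.add(body, pymunk.Circle(body, radius=10))

@window.event
def on_draw():
    window.clear()
    space.debug_draw(draw_options)  # Pyglet displays what Pymunk simulates

pyglet.clock.schedule_interval(lambda dt: space.step(dt), 1 / 60.0)
pyglet.app.run()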

Step 1: Implement A "Save State" Mechanism

  • Pull Request: #2

Requirements

  • The simulation's "state" can be saved, including every object and relevant data attached to each object (e.g., location history?) to:
    • 1. be able to restart from that state
      • Edit: September 8, 2019 Marking as low priority because I am not sure restarting from a given state is worthwhile. A restart from a given state is already possible because the replay feature in #2 works by saving the binary representation of the entire "space" (a Pymunk object), which includes all shapes inside the simulation. Perhaps this feature will become more useful as the simulation gets more complex, but even then the code should be trivial to implement. The non-trivial part is considering how that initial state might interact with any randomly generated numbers, and whether those consequences are acceptable. ¯\_(ツ)_/¯
    • 2. be able to replay a timelapse of the simulation
  • The "save state" mechanism will be fast, perhaps saving to a file in batches after building a cache?
  • Consider using pickle, because this example and @ryanprior suggest it.
    • Edit: September 13, 2019 While #2 got this done, the same code should eventually be re-implemented with something other than pickle, because pickle is slow and results in giant files (see #2 for more details).
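
As a sketch of the batched approach (hypothetical names, not the code from #2), something like this could cache snapshots in memory and flush them to disk in batches; it assumes a recent Pymunk where the Space object is picklable:

import pickle

class StateRecorder:
    """Cache pickled snapshots of the Pymunk space; flush to disk in batches."""

    def __init__(self, path="states.pkl", batch_size=100):
        self.path = path
        self.batch_size = batch_size
        self.cache = []

    def record(self, space):
        # Snapshot the whole space: every body, shape, and position.
        self.cache.append(pickle.dumps(space))
        if len(self.cache) >= self.batch_size:
            self.flush()

    def flush(self):
        with open(self.path, "ab") as f:
            for snapshot in self.cache:
                pickle.dump(snapshot, f)
        self.cache = []

def replay(path="states.pkl"):
    # Yield one reconstructed space per saved frame, for a timelapse replay.
    with open(path, "rb") as f:
        while True:
            try:
                yield pickle.loads(pickle.load(f))
            except EOFError:
                return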

Step 2: Extend The 2D Environment With Important Stuff For Evolution

Requirements

  • The simulation can arbitrarily spawn food
  • The simulation will allow the user to increase/decrease the food spawn rate
  • The organisms can move in any direction (up, down, left, right, etc.)
  • An organism can sense another object inside its "field of view" (FOV)
  • The organism has a limited angle and limited range for its FOV (see the ray-casting sketch after this list).
    Here's a crude drawing of ray casting to calculate the "field of view":
        /|
       / |
     /   |
  /      |
O----o--.|
   \     |
     \   |
       \ |
        \|
  • There exists some way for an organism to know that a food object in its FOV is genuine food
  • The organism can choose to eat food
  • The organism can physically interact (collision) with food
  • The organism can "grab" food and "un-grab" food (throw?)
  • Can the organism see color? (Yes, via a multi-channel FOV? But is that necessary? Why not a single hex code, to use fewer data points?)
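
Here is a minimal sketch of the FOV requirement using Pymunk's built-in ray casting (space.segment_query_first). The ray count, FOV angle, and range are illustrative parameters, not decided values:

import math
import pymunk

def field_of_view(space, origin, heading, fov=math.pi / 2, rays=9, reach=150):
    """Cast a fan of rays from origin; return the shape each ray hits (or None)."""
    hits = []
    for i in range(rays):
        # Spread the rays evenly across the FOV, centered on the heading.
        angle = heading - fov / 2 + fov * i / (rays - 1)
        end = (origin[0] + reach * math.cos(angle),
               origin[1] + reach * math.sin(angle))
        info = space.segment_query_first(origin, end, 1, pymunk.ShapeFilter())
        hits.append(info.shape if info else None)
    return hits

Tagging food shapes with a dedicated collision_type (e.g., shape.collision_type = FOOD) would be one way to satisfy the "genuine food" requirement: the organism can check the type of whatever a ray hits.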

Step 3: Extend The Organisms With Simple Brains

After reading Up and Down the Ladder of Abstraction (UDLA), I think the simulation would benefit significantly from incremental development with lots of visual representations of each step in the development process. So before adding fancy neural network brains, the organisms in the simulation should be able to follow a simple handwritten ruleset.
Requirements

  • The organisms can follow a simple rule like, "if food is in range of sensors, then move to it and eat" (see the sketch after this list)
  • The simulation's state can be recorded while running a simple ruleset.
  • The simulation will display an interactive visualization of all states, at all times
    • see UDLA for inspiration
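
A sketch of such a ruleset, reusing the hypothetical field_of_view() helper from Step 2 (the FOOD tag and the speed constant are assumptions):

FOOD = 1  # assumed collision_type tag on food shapes

def simple_brain(space, body, speed=100):
    """If food is in range of the sensors, move toward it; otherwise stand still."""
    for shape in field_of_view(space, body.position, body.angle):
        if shape is not None and shape.collision_type == FOOD:
            direction = (shape.body.position - body.position).normalized()
            body.velocity = direction * speed  # chase the first food seen
            return
    body.velocity = (0, 0)

Eating itself could be handled separately, e.g., with a Pymunk collision handler between the organism and food collision types.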

Step 4: Extend The Organisms With Automated Brains

This step is the fuzziest in my mind. Should a convolutional neural network be used? How about a recurrent neural network? Reinforcement learning? I have no clue what's best.

Requirements

  • The organisms have neural network brains (a Keras sketch follows this list)
  • The brains can output
    • move_up
    • move_down
    • move_left
    • move_right
    • eat
    • grab
    • ungrab
  • The brains take as input
    • An array of ray-cast-projections for FOV in Red
    • An array of ray-cast-projections for FOV in Blue
    • An array of ray-cast-projections for FOV in Green
    • the previous K actions, where K is some arbitrary number
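
Here is a sketch of one possible brain using the Keras functional API. NUM_RAYS, K, and the layer sizes are unvalidated guesses, not tuned values:

from tensorflow import keras
from tensorflow.keras import layers

NUM_RAYS = 9     # rays per color channel (matches the FOV sketch above)
K = 4            # how many previous actions the brain remembers
NUM_ACTIONS = 7  # move_up, move_down, move_left, move_right, eat, grab, ungrab

# One input per FOV color channel, plus the previous K actions (one-hot, flattened).
rays_red = keras.Input(shape=(NUM_RAYS,), name="fov_red")
rays_blue = keras.Input(shape=(NUM_RAYS,), name="fov_blue")
rays_green = keras.Input(shape=(NUM_RAYS,), name="fov_green")
prev_actions = keras.Input(shape=(K * NUM_ACTIONS,), name="previous_actions")

x = layers.Concatenate()([rays_red, rays_blue, rays_green, prev_actions])
x = layers.Dense(32, activation="relu")(x)
x = layers.Dense(32, activation="relu")(x)
action_probs = layers.Dense(NUM_ACTIONS, activation="softmax", name="action")(x)

brain = keras.Model(
    inputs=[rays_red, rays_blue, rays_green, prev_actions],
    outputs=action_probs,
)

With a genetic algorithm, brain.get_weights() could serve as the genome that mutates and crosses over between generations.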
