shoutsid / townhall


A Python-based chatbot project built on the autogen and tinygrad foundations, using advanced agents for dynamic conversations and function orchestration to enhance and expand traditional chatbot capabilities.

Home Page: https://shoutsid.github.io/townhall/

License: GNU General Public License v3.0

Languages: Python 5.89%, Shell 0.09%, Dockerfile 0.03%, Jupyter Notebook 93.99%
Topics: agent-based, agent-based-framework, autogen, chat-application, chatbot, gpt, gpt-3, gpt3-turbo, gpt4, llm, ai, llama, tinygrad, gpt-2, gpt2

townhall's Introduction

Townhall Banner

Townhall

🚧 Under heavy development and in transition from a previous personal project. Not ready for production use. 🚧


Townhall is a cutting-edge chatbot framework crafted in Python and built on the robust Autogen foundation. This isn't just another chatbot; Townhall leverages the power of advanced agents to breathe life into conversations and elevate them to a whole new level.

🧱 The Autogen Foundation: The Bedrock of Innovation

At its core, Townhall is built upon the Autogen framework, a pioneering platform for LLMs. Autogen enables the creation of agents that are not only customizable but also conversational. These agents can interact with each other, and seamlessly incorporate human inputs, setting the stage for more dynamic and intelligent dialogues.

Our advanced agents go beyond merely responding to user queries; they orchestrate multiple functions to provide a cohesive and engaging user experience. Think of them as the conductors of a grand symphony, where each instrument is a unique function or feature. They coordinate these functions to create a harmonious and effective dialogue, far outclassing traditional chatbots which often feel like disjointed sets of scripted responses. The advanced agents adapt and learn, making each conversation better than the last. They can switch between various modes, employing a blend of LLMs, human inputs, and specialized tools to deliver a personalized conversational experience.

Table of Contents

  1. Features
  2. Prerequisites
  3. Installation
  4. Usage
  5. Contributing
  6. Testing
  7. Roadmap
  8. Credits
  9. License

πŸ“ Prerequisites

Before you begin, ensure you have met the following requirements:

  • Python 3.10 or higher
  • pip package manager
  • sqlite3

πŸ› οΈ Installation

Docker Compose

git clone --recurse-submodules https://github.com/shoutsid/townhall.git
cd townhall
docker compose up -d
docker compose exec townhall bash
./setup.sh

For Linux

git clone --recurse-submodules https://github.com/shoutsid/townhall.git
cd townhall
./setup.sh

For Mac/Windows

The easiest way to get set up on Windows or Mac is to click below and start playing in a GitHub Codespace. Otherwise, note that this project was developed on WSL Ubuntu.

Open in GitHub Codespaces

🌐 Usage

Agents

Each agent can be run independently. To start a product_manager agent, for example, run the following commands:

export OPENAI_API_KEY=<your-api-key>
python3 townhall/agents/product_manager.py

LLaMa Integration

To start the Llama module, run the following commands:

pip install -r requirements.txt
cd townhall/models/llama/weights/
bash pull_llama.sh
cd ../../../..
python3 townhall/models/llama/llama.py

GPT-2 Integration

To start the GPT-2 module, run the following command:

python3 townhall/models/gpt2/gpt2.py

🤝 Contributing

If you would like to contribute to Townhall, please fork the repository and use a feature branch. Pull requests are warmly welcome.

🧪 Testing

To run the tests:

pytest

πŸ—ΊοΈ Roadmap

For the detailed roadmap of upcoming features, please visit our Project Board.

πŸ‘ Credits

  • Prompt Contributions: A big thank you to Josh-XT for the various prompts. Check out his repository AGiXT for more details.
  • Autogen Foundation: Townhall is built upon the robust Autogen Framework, a pioneering platform for LLMs by Microsoft.
  • OpenAI Assistant: Special mention to OpenAI Assistant for aiding in the development process.
  • tinygrad: Special thanks to tinygrad for providing the foundational machine learning framework that enhances our project's capabilities.

Developed by @shoutsid.

📜 License

This project is licensed under the GNU General Public License v3.0. See the LICENSE.md file for details.

townhall's People

Contributors

dependabot[bot], shoutsid


townhall's Issues

Implement Multi-Agent Conversations

Title: Implement Multi-Agent Conversations
Description:
Add the capability for multiple agents to participate in a single conversation, allowing for more dynamic and interactive chat experiences.

Context

The current chatbot architecture supports conversations between a user and a single agent. Enabling multi-agent conversations will make the system more flexible and capable of handling complex scenarios.

Expected Outcomes

  • Implement a conversation manager that can coordinate between multiple agents.
  • Enable real-time message broadcasting to all participating agents.
  • Add functionality to dynamically add or remove agents in an ongoing conversation.

Challenges

  • Managing message sequencing and timing between multiple agents.
  • Ensuring that all agents have the necessary context to contribute meaningfully to the conversation.

Recommended Libraries and Algorithms

  • Conversation Management
    • Use Python's built-in asyncio for asynchronous message handling.
    • Consider using websockets for real-time communication between agents.

Resources

  • Official asyncio documentation for asynchronous programming in Python.
  • Tutorials or articles explaining real-time communication using websockets.
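The coordination described above can be sketched with stdlib asyncio alone. The `ConversationManager` class, agent names, and `reply_fn` signature below are hypothetical illustrations, not part of the existing codebase:

```python
import asyncio
from typing import Callable, Dict, Tuple

class ConversationManager:
    """Hypothetical sketch: broadcast each message to every registered agent.

    Agent names and the reply_fn signature are illustrative only.
    """

    def __init__(self) -> None:
        self.agents: Dict[str, Callable[[str, str], str]] = {}

    def add_agent(self, name: str, reply_fn: Callable[[str, str], str]) -> None:
        self.agents[name] = reply_fn

    def remove_agent(self, name: str) -> None:
        self.agents.pop(name, None)

    async def broadcast(self, sender: str, message: str) -> Dict[str, str]:
        # Deliver the message to all agents except the sender, concurrently.
        async def deliver(name: str) -> Tuple[str, str]:
            reply = await asyncio.to_thread(self.agents[name], sender, message)
            return name, reply

        targets = [n for n in self.agents if n != sender]
        replies = await asyncio.gather(*(deliver(n) for n in targets))
        return dict(replies)

manager = ConversationManager()
manager.add_agent("planner", lambda sender, msg: f"planner saw: {msg}")
manager.add_agent("coder", lambda sender, msg: f"coder saw: {msg}")
replies = asyncio.run(manager.broadcast("user", "hello"))
```

Dynamically adding or removing agents mid-conversation then reduces to calling `add_agent`/`remove_agent` between broadcasts.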

Implement Interactive Chatroom with Multiple Agents

Title: Implement Interactive Chatroom with Multiple Agents
Description:
Extend the interactive chat interface to support conversations involving multiple agents, enhancing the chatbot's capability to handle complex dialog scenarios.

Context

While the chatbot can handle one-on-one conversations, the ability to conduct conversations involving multiple agents can open up new possibilities, such as group discussions or more complex task handling.

Expected Outcomes

  • Integrate multi-agent support into the existing interactive chat interface built with Streamlit.
  • Provide a way to dynamically add or remove agents from the chatroom.

Challenges

  • Coordinating real-time updates among multiple agents.
  • Maintaining conversation context when multiple agents are involved.

Recommended Libraries and Frameworks

  • UI Framework
    • Use Streamlit for extending the interactive chat interface to support multiple agents.

Resources

  • Streamlit documentation for more advanced features like multi-threading or asynchronous updates.
  • Research papers or articles on multi-agent communication models.

Add RAG (Retrieval-Augmented Generation) Capabilities

Title: Add RAG (Retrieval-Augmented Generation) Capabilities
Description:
Integrate Retrieval-Augmented Generation (RAG) capabilities to enable advanced document retrieval and question-answering functionalities within the chatbot.

Context

The chatbot aims to support document embedding and interactions. Adding RAG capabilities would allow the chatbot to search, retrieve, and generate responses based on the embedded documents.

Expected Outcomes

  • Implement a RAG model that can search and retrieve relevant information from embedded documents.
  • Integrate the RAG model into the chatbot's existing architecture.

Challenges

  • Ensuring efficient and accurate document retrieval.
  • Seamlessly integrating RAG capabilities with the chatbot's other functionalities.

Recommended Libraries and Frameworks

  • NLP Frameworks
    • Hugging Face Transformers is a well-known option for implementing RAG models.
    • Given the fast pace of NLP research, also consider exploring other emerging frameworks for possible advantages in performance or features.

Resources

  • Hugging Face documentation on RAG models.
  • Research papers or articles on Retrieval-Augmented Generation techniques.
  • Keep an eye on recent publications and repositories for the latest advancements in NLP.
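To make the retrieval half of the pipeline concrete, here is a minimal bag-of-words cosine-similarity retriever. A real RAG model (e.g. via Hugging Face Transformers) would use learned dense embeddings instead; the document texts below are illustrative:

```python
import math
from collections import Counter
from typing import List

def vectorize(text: str) -> Counter:
    # Naive bag-of-words term frequencies as a stand-in for dense embeddings.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, documents: List[str], k: int = 1) -> List[str]:
    # Rank documents by similarity to the query and return the top k.
    qv = vectorize(query)
    ranked = sorted(documents, key=lambda d: cosine(qv, vectorize(d)), reverse=True)
    return ranked[:k]

docs = [
    "Townhall is a chatbot framework built on Autogen.",
    "tinygrad is a lightweight machine learning framework.",
]
best = retrieve("which framework does the chatbot build on", docs)
```

In a full RAG loop, the retrieved passages would be prepended to the generator's prompt before answering.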

Improve UI/UX with Streamlit

Title: Improve User Interface and User Experience with Streamlit
Description:
Revamp the existing placeholder user interface using Streamlit to make the chatbot more intuitive, visually appealing, and easier to navigate.

Context

The current user interface serves as a placeholder with no functionality. Utilizing Streamlit will allow for rapid development and deployment of a more polished UI/UX.

Expected Outcomes

  • Build a new chat interface using Streamlit that is both user-friendly and functional.
  • Implement interactive Streamlit widgets like buttons, sliders, and text boxes to enhance user interaction.
  • Test and optimize the interface for responsiveness and loading times.

Challenges

  • Ensuring that the Streamlit interface integrates well with the backend systems.
  • Conducting user testing to validate the new interface and its functionalities.

Recommended Libraries and Frameworks

  • UI Framework
    • Use Streamlit for building the entire user interface.

Resources

  • Streamlit documentation and tutorials for building interactive web apps.
  • Articles on best practices for UI/UX design in chatbots.

PlannerService should not use config_list as llm_config

Issue Summary

The config_list parameter passed to the AssistantAgent constructor is incorrectly used as the llm_config itself, when the two should remain distinct.

Steps to Reproduce

  1. Create an instance of AssistantAgent with a config_list parameter.
  2. Observe that the config_list parameter is being used as the llm_config list, contrary to expected behavior.

Expected Results

The config_list parameter should be distinct from the llm_config list, allowing for customization of the OpenAI config within the planner.

Actual Results

The config_list parameter is being utilized as the llm_config list, limiting customization.

Reproducibility

Always
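In autogen, the config_list normally sits inside the llm_config dictionary rather than replacing it. A minimal sketch of the intended wiring follows; `build_llm_config` and the sample entries are hypothetical stand-ins, not the actual PlannerService code:

```python
from typing import Dict, List

def build_llm_config(config_list: List[Dict], seed: int = 42, temperature: float = 0.0) -> Dict:
    # config_list enumerates model/endpoint entries; llm_config wraps it
    # alongside generation settings instead of being replaced by it.
    return {
        "config_list": config_list,
        "seed": seed,
        "temperature": temperature,
    }

config_list = [{"model": "gpt-4"}]  # illustrative entry
llm_config = build_llm_config(config_list, temperature=0.2)
```

With this separation, the planner can customize its OpenAI config (seed, temperature, etc.) independently of which endpoints are listed.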

Add Support for Audio Embedding

Title: Add Support for Audio Embedding
Description:
Incorporate audio embedding features into the interactive chat interface, allowing users to share and play audio clips, thereby enriching the conversational experience.

Context

Adding audio capabilities can make the chatbot useful in new contexts, such as podcast sharing or voice memo exchanges, further diversifying its application.

Expected Outcomes

  • Enable audio file upload and playback within the chat interface.
  • Implement basic audio controls like play, pause, and volume adjustment.

Challenges

  • Supporting various audio formats like MP3, WAV, and AAC.
  • Ensuring secure and efficient storage and streaming of audio files.

Resources

  • Streamlit documentation for implementing audio file upload and playback.
  • pydub library for audio processing tasks.

Unable to install on Mac M1

When I build the docker container I get this error, with docker.
34.01 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 14.2/14.2 MB 779.4 kB/s eta 0:00:00
34.18 ERROR: Ignored the following versions that require a different python version: 1.21.2 Requires-Python >=3.7,<3.11; 1.21.3 Requires-Python >=3.7,<3.11; 1.21.4 Requires-Python >=3.7,<3.11; 1.21.5 Requires-Python >=3.7,<3.11; 1.21.6 Requires-Python >=3.7,<3.11
34.18 ERROR: Could not find a version that satisfies the requirement nvidia-cublas-cu12>=12.2.5.6 (from versions: 0.0.1.dev5)
34.18 ERROR: No matching distribution found for nvidia-cublas-cu12>=12.2.5.6
34.40
34.40 [notice] A new release of pip is available: 23.1.2 -> 23.3.1
34.40 [notice] To update, run: pip install --upgrade pip

Dockerfile:14

12 |
13 | # Install any needed packages specified in requirements.txt
14 | >>> RUN pip install --no-cache-dir -r requirements.txt
15 |
16 | # Make port 80 available to the world outside this container

ERROR: failed to solve: process "/bin/sh -c pip install --no-cache-dir -r requirements.txt" did not complete successfully: exit code: 1

Please help.

Add functionality for Agent Configurable Predictable -> Creative States

Issue Summary

Add functionality for Agent Configurable Predictable -> Creative States, affecting llm seed & temperature.

Feature Description

The feature aims to allow configuration of an agent's behavior from being predictable to creative by adjusting the llm seed and temperature parameters. Currently, there is no way to modify these settings for a specific agent.

Use Case

This feature is crucial for users who want to control the level of creativity or predictability in the responses and actions of their agents. It can be especially useful in dynamic conversation scenarios, customer service applications, and data analytics tasks.

Steps to Implement

  1. Extend the AssistantAgent class to include methods for setting and getting llm seed & temperature.
  2. Update the agent initialization process to accept configurable llm seed & temperature.
  3. Create unit tests to verify the new functionality works as expected.

Expected Results

After implementation, users should be able to:

  • Configure llm seed & temperature during agent initialization
  • Change llm seed & temperature for an existing agent
  • Observe noticeable changes in agent behavior based on these settings
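One way to expose this is a single "creativity" knob that maps onto the llm seed and temperature. The mapping and field names below are assumptions for illustration, not existing Townhall APIs:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AgentLLMSettings:
    seed: Optional[int]
    temperature: float

def settings_for(creativity: float, base_seed: int = 42) -> AgentLLMSettings:
    """Map creativity in [0, 1] (0 = predictable, 1 = creative) to llm settings."""
    if not 0.0 <= creativity <= 1.0:
        raise ValueError("creativity must be in [0, 1]")
    # A fixed seed plus low temperature keeps output reproducible; at high
    # creativity the seed is dropped and the temperature raised.
    seed = base_seed if creativity < 0.5 else None
    return AgentLLMSettings(seed=seed, temperature=round(creativity * 1.5, 2))

predictable = settings_for(0.0)
creative = settings_for(1.0)
```

An agent could then accept a `creativity` argument at initialization and re-derive its settings whenever the knob changes.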

Add Prompt Templating for Forced Choice Questions/Answers

Title: Add Prompt Templating for Forced Choice Questions/Answers
Description:
Develop a templating system that can generate forced-choice questions and answers, enhancing the chatbot's ability to guide conversations in a more structured manner.

Context

Conversations often require users to make choices, and guiding them through these decisions can be critical for user experience. A templating system for generating forced-choice questions can streamline this process.

Expected Outcomes

  • Create a templating engine that can generate questions and multiple-choice answers dynamically.
  • Integrate the templating engine into the existing conversation management system.
  • Provide a simple interface for agents to define and use these templates.

Challenges

  • Designing a templating syntax that is both powerful and easy to use.
  • Ensuring that the templating engine can handle various data types and conditional logic.

Recommended Libraries and Frameworks

  • Templating Libraries
    • Use Jinja2 for text-based templating.
    • Consider PyYAML for defining templates in YAML format.

Resources

  • Jinja2 and PyYAML documentation for understanding their capabilities and limitations.
  • Articles on best practices for question design and user experience.
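A forced-choice template can be prototyped with the stdlib string.Template as a lightweight stand-in for Jinja2. The template text and choice format here are illustrative:

```python
from string import Template
from typing import List

# Minimal forced-choice prompt template; Jinja2 would add loops and
# conditionals on top of this basic substitution.
FORCED_CHOICE = Template(
    "$question\n"
    "Please answer with exactly one of the following options:\n"
    "$choices"
)

def render_forced_choice(question: str, choices: List[str]) -> str:
    numbered = "\n".join(f"  {i}. {c}" for i, c in enumerate(choices, 1))
    return FORCED_CHOICE.substitute(question=question, choices=numbered)

prompt = render_forced_choice(
    "Which environment should the agent target?",
    ["Docker", "Linux", "GitHub Codespaces"],
)
```

Agents would register templates like this one and validate the user's reply against the numbered options.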

Develop an Interactive Chat Interface

Title: Develop an Interactive Chat Interface
Description:
Create a fully interactive chat interface using Streamlit, allowing users to engage in dynamic conversations with multiple agents.

Context

The chatbot is intended to have advanced functionalities, including multi-agent conversations. An interactive chat interface will greatly enhance user engagement and provide a platform for these advanced features.

Expected Outcomes

  • Implement a chat window within Streamlit where messages can be displayed in real-time.
  • Add support for interactive elements like quick replies, buttons, and media sharing within the chat interface.

Challenges

  • Ensuring seamless real-time updates in the chat window.
  • Implementing a user-friendly design that accommodates various interactive elements without clutter.

Recommended Libraries and Frameworks

  • UI Framework
    • Use Streamlit for building the interactive chat interface.

Resources

  • Streamlit documentation for building dynamic and real-time updated interfaces.
  • UX design principles for chat interfaces.

Add Support for Voice Commands

Title: Add Support for Voice Commands
Description:
Implement voice recognition capabilities to allow users to interact with the chatbot using voice commands, thus expanding the chatbot's accessibility and usability features.

Context

As voice interfaces become more prevalent, adding voice command support will make the chatbot more versatile and accessible to a broader audience, including those who may have difficulty with text-based interfaces.

Expected Outcomes

  • Integrate a voice recognition system that can accurately interpret spoken commands.
  • Add support for basic voice commands like "start," "stop," "help," etc.
  • Provide a simple way for other agents to add new voice commands.

Challenges

  • Ensuring low-latency and high-accuracy voice recognition.
  • Handling various accents and dialects effectively.
  • Providing effective feedback mechanisms for misunderstood commands.

Recommended Libraries and Frameworks

  • Voice Recognition Libraries
    • SpeechRecognition for basic voice recognition capabilities.
    • Google Speech-to-Text API for more advanced features and better accuracy.

Resources

  • Documentation for SpeechRecognition and Google Speech-to-Text API.
  • Research papers or articles on effective voice interface design.

Restructure Multi-Agent Simulation Experiments into `app/models/`

Overview

Due to the increasing complexity and overlapping functionalities of the 12 experiment files, it's essential to consolidate them into a more organized and maintainable structure. This restructuring aims to improve code modularity and ease of use, by categorizing functionality into separate classes and files within the app/models/ directory.

New Structure and Purpose

app/models/

  • constants.py: To store constant variables and configurations.
  • cooperative_agent.py: For defining agents with grid-specific features.
  • dqn.py: To implement the Deep Q-Network architecture.
  • environment.py: For simulating a basic environment for agent interaction.
  • improved_agent.py: To define agents with DQN and custom learning models (ImprovedLLM).
  • life_long_model.py: Renamed from ImprovedLLM to house the custom learning model.
  • message.py: For inter-agent communication features.
  • multi_agent_environment.py: To simulate an environment with multiple interacting agents and tasks.
  • reply_memory.py: Renamed from ReplayMemory for managing storage and sampling of past transitions.
  • transition.py: To represent a single transition in the environment.

Tasks

  1. Create a models/ directory inside app/.
  2. Migrate and refactor code from each experiment into its respective new file.
  3. Update all references and import statements.
  4. Implement unit tests to ensure functionality remains consistent.
  5. Update README to reflect the new structure.

Validation Steps

  • Run unit tests to confirm code integrity.
  • Perform functionality tests to ensure that all features are working as expected.
  • Review code for compliance with coding standards and performance metrics.

BUG: Buffer expansion causes position to exceed bounds, consider strategies.

This issue can appear in transformer blocks

    /app/models/llama/llama.py", line 260, in <module>
      probs = llama.model(
    /app/models/llama/transformer.py", line 83, in __call__
      pos = Variable("pos", 1, 1024).bind(start_pos)
    /ext/tinygrad/tinygrad/shape/symbolic.py", line 155, in bind
      assert self.val is None and self.min<=val<=self.max, f"cannot bind {val} to {self}"

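One candidate strategy is a sliding context window: before binding the position, truncate the context so `start_pos` can never exceed the symbolic variable's upper bound. The sketch below is illustrative only, not the actual tinygrad/llama fix:

```python
from typing import List, Tuple

# The bound matches Variable("pos", 1, 1024) from the traceback above.
MAX_POS = 1024

def clamp_context(tokens: List[int], start_pos: int, max_pos: int = MAX_POS) -> Tuple[List[int], int]:
    # If the position is already within bounds, leave everything untouched.
    if start_pos <= max_pos:
        return tokens, start_pos
    # Otherwise drop the oldest tokens so the bound position stays in range.
    overflow = start_pos - max_pos
    return tokens[overflow:], max_pos

tokens, pos = clamp_context(list(range(1500)), 1500)
```

A real fix would also need to shift or evict the corresponding KV-cache entries, which this sketch does not model.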

Integrate GPT-2 Model into Townhall using Tinygrad Framework

Description

We aim to implement the GPT-2 model into our Townhall project. The existing GPT-2 code is written using the tinygrad framework, and we need to modularize it to fit into the best practices of the project.

Tasks

  • Review the existing GPT-2 code written in tinygrad.
  • Identify the components of the GPT-2 code that need to be modularized.
  • Refactor the code to make it modular and conform to the project's coding standards.
  • Integrate the modularized GPT-2 code into the Townhall project.
  • Test the integration to ensure that the GPT-2 model functions as expected.
  • Optimize for performance and scalability.

Challenges

  • Ensuring that the GPT-2 integration does not negatively impact the existing functionalities of Townhall.
  • Performance optimization to handle the computational needs of GPT-2.

Recommended Libraries and Frameworks

  • Tinygrad for running the GPT-2 model.

Restructure into ./app folder

Refactor the project structure to be more modular and easier to maintain.
Separate services into the ./app/services folder.
Separate helpers into the ./app/helpers folder.

Integration of Llama with Tinygrad

Integrate the Llama library with Tinygrad to enhance the project's capabilities and efficiency.

Background

Llama is a powerful language model family that can provide valuable functionality to our Townhall project. By integrating it with Tinygrad, we can leverage its features for improved performance and functionality.

Proposed Changes

  • Add Llama as a dependency to the project.
  • Implement integration with Llama in relevant parts of the codebase.
  • Test and ensure the integration works seamlessly.

Benefits

  • Improved performance and functionality through the use of Llama.
  • Access to additional features and capabilities.

Additional Information

Context

This integration will enhance our project by leveraging the capabilities of the Llama library. It's essential for keeping our project up-to-date and competitive in terms of functionality and performance.

Implement New React Dashboard Application

Description:

We are planning to launch a new React Dashboard application that provides an enhanced user interface and integrates with our current backend services. As part of this initiative, we've designed a styles.css file to style the app components.

Tasks:

  1. Setup the React Application: Create a new React application using Create React App or our standard setup.
  2. Integrate styles.css: Import and apply the styles.css file to the relevant React components. Ensure styles are appropriately applied.
  3. Component Development: Develop the necessary React components such as Header, Feature Section, and Footer.
  4. Mobile Responsiveness: Test the application on various screen sizes to ensure mobile responsiveness.
  5. Backend Integration: Connect the React application to our existing backend services. Ensure API calls and data fetching are working correctly.

Develop Vector Store Management Pattern Similar to ORM

Title: Develop Vector Store Management Pattern Similar to ORM
Description:
Develop a management pattern similar to Object-Relational Mapping (ORM) that is specifically designed to handle open-source vector space models. This pattern aims to streamline how we interact with vector data stores, improving data access and manipulation.

Context

The project aims to leverage open-source vector space models for various functionalities. Traditional ORMs are not suited to this kind of data. The goal is to create an ORM-like pattern designed to work with vector stores.

Expected Outcomes

  • Design and implement a vector store management pattern that provides CRUD (Create, Read, Update, Delete) operations for vector data.
  • The pattern should be compatible with multiple types of vector space models and data stores.

Challenges

  • Creating a pattern flexible enough to accommodate different types of vector space models and data stores.
  • Ensuring efficient handling of large-scale vector data.

Recommended Vector Data Stores and Libraries

  • Vector Data Stores
    • Weaviate for GraphQL-based vector search.
    • Pinecone for distributed vector indexing and search.

Resources

  • Weaviate and Pinecone documentation for understanding their capabilities and limitations.
  • Research papers or articles on efficient data access patterns could be a good starting point.
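The CRUD surface described above can be sketched as a small in-memory class; a backend like Weaviate or Pinecone would replace the dict and the brute-force search. All names here are illustrative:

```python
import math
import uuid
from typing import Dict, List, Optional

class VectorStore:
    """Minimal in-memory sketch of an ORM-like vector store."""

    def __init__(self) -> None:
        self._records: Dict[str, List[float]] = {}

    def create(self, vector: List[float]) -> str:
        record_id = str(uuid.uuid4())
        self._records[record_id] = vector
        return record_id

    def read(self, record_id: str) -> Optional[List[float]]:
        return self._records.get(record_id)

    def update(self, record_id: str, vector: List[float]) -> None:
        if record_id not in self._records:
            raise KeyError(record_id)
        self._records[record_id] = vector

    def delete(self, record_id: str) -> None:
        self._records.pop(record_id, None)

    def nearest(self, query: List[float]) -> Optional[str]:
        # Brute-force cosine similarity; a real backend would use an
        # approximate nearest-neighbour index instead.
        def cos(a: List[float], b: List[float]) -> float:
            dot = sum(x * y for x, y in zip(a, b))
            na, nb = math.hypot(*a), math.hypot(*b)
            return dot / (na * nb) if na and nb else 0.0

        return max(self._records, key=lambda rid: cos(query, self._records[rid]), default=None)

store = VectorStore()
a = store.create([1.0, 0.0])
b = store.create([0.0, 1.0])
```

The point of the pattern is that agents program against this one interface while the storage backend remains swappable.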

Completely remove the requirement of PyTorch

PyTorch is significantly slowing down our build times. The primary application does not use Torch apart from the experimental simulation network; tinygrad should be used instead, as it is much lighter weight.

Token prediction and compression techniques

Title: Implement Token Prediction and Compression Techniques
Description:
Develop and implement specific techniques for token prediction and data compression to improve the efficiency and responsiveness of the Townhall chatbot.

Context

The Townhall chatbot is growing in complexity and is handling larger datasets. Efficient token prediction and data compression are critical for improving computational performance and user experience.

Expected Outcomes

  • Implement Markov Chain and N-gram models for token prediction.
  • Implement Huffman coding for data compression.
  • Achieve at least a 25% reduction in processing time for token-related operations.

Challenges

  • Balancing the trade-off between computational complexity and prediction/compression effectiveness.
  • Ensuring backward compatibility and that new features do not introduce bugs.

Recommended Libraries and Algorithms

  • Token Prediction
    • Use nltk for implementing N-gram models.
    • Use numpy or pandas for Markov Chain calculations.
  • Compression
    • Use Python's zlib library for general-purpose compression (its DEFLATE algorithm combines LZ77 with Huffman coding).

Resources

  • NLTK documentation for N-gram models
  • Research papers on Huffman coding and its applications
  • Markov Chain tutorials to understand the algorithm's behavior in text prediction
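The token-prediction side can be sketched as a first-order Markov (bigram) model with stdlib containers; nltk's n-gram utilities would replace this in a real implementation, and the training text is illustrative:

```python
from collections import Counter, defaultdict
from typing import Dict, List, Optional

def train_bigrams(tokens: List[str]) -> Dict[str, Counter]:
    # Count, for each token, how often each successor follows it.
    model: Dict[str, Counter] = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        model[prev][nxt] += 1
    return model

def predict_next(model: Dict[str, Counter], token: str) -> Optional[str]:
    counts = model.get(token)
    if not counts:
        return None
    # Most frequent continuation; weighted sampling is the stochastic variant.
    return counts.most_common(1)[0][0]

tokens = "the agent asks the user and the agent replies".split()
model = train_bigrams(tokens)
```

Higher-order n-grams generalize this by keying the model on tuples of preceding tokens rather than a single token.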

Add Support for Image Embedding

Title: Add Support for Image Embedding
Description:
Extend the chat interface to support the embedding of images, allowing for richer and more visually engaging conversations.

Context

With the RAG capabilities in place, the next step is to enrich the chat interface with the ability to embed and interact with images, making the chatbot more versatile.

Expected Outcomes

  • Implement image upload and display functionalities within the chat interface.
  • Provide options for annotating and interacting with the embedded images.

Challenges

  • Handling various image formats and sizes efficiently.
  • Ensuring secure file upload and storage.

Recommended Libraries and Frameworks

  • UI Framework
    • Use Streamlit for adding the image embedding features.
  • Image Processing
    • Consider using PIL (Pillow) for basic image handling tasks.

Resources

  • Streamlit documentation on handling file uploads and displaying images.
  • PIL documentation for image processing tasks.

Add Support for Video Embedding

Title: Add Support for Video Embedding
Description:
Further enhance the interactive chat interface by adding the capability to embed videos, allowing for an even more dynamic and rich user experience.

Context

Adding video embedding capabilities will augment the chatbot's functionality, making it more versatile and engaging for users who prefer or require visual content.

Expected Outcomes

  • Implement video upload and playback features within the chat interface.
  • Add options for video controls like play, pause, and seek.

Challenges

  • Handling various video formats and codecs efficiently.
  • Ensuring secure and efficient video storage and streaming.

Recommended Libraries and Frameworks

  • UI Framework
    • Use Streamlit for adding the video embedding features.
  • Video Processing
    • Consider using OpenCV for video processing tasks.

Resources

  • Streamlit documentation on handling file uploads and displaying videos.
  • OpenCV documentation for video processing tasks.
