
robosuite's Introduction

robosuite

[Gallery of environments]

[Homepage] [White Paper] [Documentation] [ARISE Initiative]




robosuite is a simulation framework powered by the MuJoCo physics engine for robot learning. It also offers a suite of benchmark environments for reproducible research. The current release (v1.4) features long-term support with the official MuJoCo binding from DeepMind. This project is part of the broader Advancing Robot Intelligence through Simulated Environments (ARISE) Initiative, with the aim of lowering the barriers to entry for cutting-edge research at the intersection of AI and Robotics.

Data-driven algorithms, such as reinforcement learning and imitation learning, provide a powerful and generic tool in robotics. These learning paradigms, fueled by new advances in deep learning, have achieved some exciting successes in a variety of robot control problems. However, the challenges of reproducibility and the limited accessibility of robot hardware (especially during a pandemic) have impaired research progress. The overarching goal of robosuite is to provide researchers with:

  • a standardized set of benchmarking tasks for rigorous evaluation and algorithm development;
  • a modular design that offers great flexibility to design new robot simulation environments;
  • a high-quality implementation of robot controllers and off-the-shelf learning algorithms to lower the barriers to entry.

This framework has been under development since late 2017 by researchers in the Stanford Vision and Learning Lab (SVL) as an internal tool for robot learning research. It is now actively maintained and used for robotics research projects in SVL and the UT Robot Perception and Learning Lab (RPL). We welcome community contributions to this project. For details, please check out our contributing guidelines.

This release of robosuite contains seven robot models, eight gripper models, six controller modes, and nine standardized tasks. It also offers a modular design of APIs for building new environments with procedural generation. We highlight these primary features below:

  • standardized tasks: a set of standardized manipulation tasks of large diversity and varying complexity and RL benchmarking results for reproducible research;
  • procedural generation: modular APIs for programmatically creating new environments and new tasks as combinations of robot models, arenas, and parameterized 3D objects;
  • robot controllers: a selection of controller types to command the robots, such as joint-space velocity control, inverse kinematics control, operational space control, and 3D motion devices for teleoperation;
  • multi-modal sensors: heterogeneous types of sensory signals, including low-level physical states, RGB cameras, depth maps, and proprioception;
  • human demonstrations: utilities for collecting human demonstrations, replaying demonstration datasets, and leveraging demonstration data for learning. Check out our sister project robomimic;
  • photorealistic rendering: integration with advanced graphics tools that provide real-time photorealistic renderings of simulated scenes.

Citation

Please cite robosuite if you use this framework in your publications:

@inproceedings{robosuite2020,
  title={robosuite: A Modular Simulation Framework and Benchmark for Robot Learning},
  author={Yuke Zhu and Josiah Wong and Ajay Mandlekar and Roberto Mart\'{i}n-Mart\'{i}n and Abhishek Joshi and Soroush Nasiriany and Yifeng Zhu},
  booktitle={arXiv preprint arXiv:2009.12293},
  year={2020}
}


robosuite's Issues

Add solref and solimp arguments to generated objects

We should add solref and solimp arguments to MujocoGeneratedObject and its subclasses to easily play around and experiment with contact modeling. The default behavior for some objects is pretty bad - for example thin cylinders tend to sink into the table.
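A minimal sketch of the kind of interface this proposal would enable, assuming solref and solimp become constructor arguments on the generated-object classes (the argument names mirror the MuJoCo geom attributes; this is a hypothetical signature, not the current API):

from robosuite.models.objects import CylinderObject

# Hypothetical arguments: softer contact parameters for a thin cylinder so it
# rests on the table instead of sinking into it.
thin_cylinder = CylinderObject(
    size=[0.015, 0.06],          # radius, half-height
    rgba=[0.8, 0.2, 0.2, 1],
    solref=[0.02, 1.0],          # MuJoCo (timeconst, dampratio)
    solimp=[0.9, 0.95, 0.001],   # MuJoCo-style impedance triple
)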

GLFW NameError

In case it ever comes up for anyone else: I was getting NameError: name '_raise_glfw_errors_as_exceptions' when running demo.py after following the instructions in the README from scratch. I had to downgrade glfw from 1.10.0 to fix it. I went ahead and used 1.7.0 since that's what I've used before, but given that 1.10.0 came out yesterday, it's likely just a 1.10.0 issue.

Computing inverse kinematics for an absolute end-effector position

Hi,

I'd like to compute inverse kinematics for an absolute end-effector position. I'm trying to reuse parts of robosuite's inverse kinematics controller for this.

Here is my attempt at doing it.
https://github.com/kpertsch/robosuite/blob/dev_frederik/robosuite/scripts/test_absolute_inv_kin.py
Unfortunately the solution is incorrect; am I missing some transformations? (I added the transform from base to world, but that didn't help.)

Thanks!

Changing scene/object property on the fly

Hi, I am wondering whether there is a way to remove/add objects (and also change properties of objects) in a particular environment on the fly.
It seems that this used to be a problem with mujoco-py, but they have fixed the read-write issue. How can I do the same in robosuite environments?

Renderer real-time compensation is out of sync

Hello,

I noticed that the renderer sometimes behaves strangely with regard to the frequency of rendering and timing.

It is possible to see this by slightly changing the demo.py file, adding sleep and print calls:
[screenshot of the modified demo.py]

Launching the demo then with, say, Sawyer Lift, we can see that it only renders every so often, not at every call of render(). Moreover, if we set up a test at a control frequency of 5 Hz and allow for 20 timesteps, we can see that it does not render in 4 s, which is what would be expected in real time.

Upon further investigation, this comes from mujoco-py, which implements a compensation to make the rendering real-time (cf. lines 194-204 in mjviewer.py):
[screenshot of mjviewer.py lines 194-204]

Now, this compensation relies on sim.nsubsteps, which tells mujoco-py how many simulation steps to take for every control step. Here it is set to 1, but the functionality is re-implemented in base.py (note that step in base.py reasons in terms of frequencies and time, while sim.nsubsteps just considers the number of simulation substeps):
[screenshot of the relevant code in base.py]

The result of this is that every time render() is called, mujoco-py compensates time as if only one simulation step had been taken between consecutive render calls (since sim.nsubsteps = 1), while more are actually taken, depending on control_freq; this makes the whole thing go out of sync.

I made a quick and dirty hack to compensate for this, but better ideas are more than welcome:
In base.py:

  1. I added self.viewer._render_every_frame = True in _reset_internal to stop the compensation by mujoco-py
  2. At every render call I added time.sleep(1/self.control_freq) to get back to real-time.

This does not take into account the actual compute time between render calls (as opposed to mujoco-py, which does), so it is not actually showing real time, but it's an approximate fix if you don't care about being exact.
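For concreteness, a rough sketch of the hack described above, written as a subclass so the two changes are visible in one place (the import path and attribute names follow the issue text and mujoco-py's viewer; this is a workaround, not an official fix):

import time

from robosuite.environments.base import MujocoEnv  # assumed v0.x module layout


class RealTimeRenderEnv(MujocoEnv):
    """Approximate real-time rendering regardless of control_freq."""

    def _reset_internal(self):
        super()._reset_internal()
        if self.viewer is not None:
            # Disable mujoco-py's own compensation, which assumes a single
            # simulation substep between consecutive render() calls.
            self.viewer._render_every_frame = True

    def render(self):
        super().render()
        # Sleep one control period per render; this ignores compute time,
        # so it is only approximately real time.
        time.sleep(1.0 / self.control_freq)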


Thanks!!

Velocity actuator gain values on the Sawyer too low?

Hello,

Thanks again for the repo, it has been really helpful! :)

Could you please provide some information on how you decided on the velocity actuator gains for the Sawyer, as well as the control ranges for both the velocity and torque versions of the Sawyer?

I have set up a pushing task with a small cubic object (5 cm edges, 200 grams) on the table arena, and am noticing that if the friction of the object is high enough (0.8-0.9), the Sawyer fails to provide enough torque to push the object.

This is fixed if I increase the values of the velocity actuator gains, but I would like to better understand whether this is a sensible thing to do, and how to get a good idea of what sensible gains are.

I would also like to create a PID controller on top of the MuJoCo torque actuators by modifying the base, sawyer_robot, and sawyer modules, but I am struggling to figure out a way to get sensible P and I gains for each joint.

I would appreciate any insights you could give me on this!
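Gain tuning is hard to answer in general, but for reference, a joint-space PID that outputs torque commands can be as simple as the sketch below (generic NumPy code, not a robosuite controller; the gains are placeholders to be tuned per joint):

import numpy as np


class JointPID:
    """Minimal per-joint PID producing torque commands."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd = np.asarray(kp), np.asarray(ki), np.asarray(kd)
        self.dt = dt
        self.integral = 0.0
        self.prev_error = None

    def __call__(self, q_desired, q_current):
        error = np.asarray(q_desired) - np.asarray(q_current)
        self.integral = self.integral + error * self.dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / self.dt
        self.prev_error = error
        # Torque command; clip to the actuator's control range before applying.
        return self.kp * error + self.ki * self.integral + self.kd * derivative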

Test linear interpolator error

test_linear_interpolator produced the following error in the v1.0 branch:

Testing controller EE_IK with trajectory pos and interpolator=linear...
Completed trajectory. Took 30 timesteps total.
Traceback (most recent call last):
  File "test_linear_interpolator.py", line 170, in <module>
    test_linear_interpolator()
  File "test_linear_interpolator.py", line 160, in test_linear_interpolator
    assert timesteps[1] > min_ratio * timesteps[0], "Error: Interpolated trajectory time should be longer " \
AssertionError: Error: Interpolated trajectory time should be longer than non-interpolated!

Standardize placement initializers for tasks and multiple objects

  • Currently, for SawyerPickPlace, the placement_initializer argument is ignored, and no placement initializer is used by the PickPlaceTask to place objects - instead the placement is hardcoded to the left side of the table (the bin).
  • We should unify placement initializers that can work for all tasks across different numbers of objects and different ranges - this will be helpful in addressing issues with SawyerNutAssembly as well where we need different ranges depending on the type of object.

From the actual Sawyer URDF to updated MuJoCo XML files?

Hello,

Please correct me if I am wrong, but I have noticed that your MuJoCo XML files have been constructed using the "standard" URDF description of the Sawyer provided by Rethink Robotics here.

Do you by any chance have functionality that could take the URDF obtained from the parameter server of the actual physical Sawyer and update the MuJoCo XML files accordingly? (Did you have to write all the XML files by hand, or do you have any scripts that helped you obtain them from the URDF description?)

Thanks!

Novint Falcon interface

This is a query regarding interfacing the Novint Falcon and getting force feedback from the simulation. Is it possible?

Potential issue with UniformRandomSampler

Hello,

That repository is fantastic, thank you for the good work!

I have noticed that in the UniformRandomSampler under models.tasks.placement_sampler, the z_rotation argument defaults to "random", in the SawyerLift environment it is passed as True, and in the sample_quat function where it is used, it seems that the only supported options are [None, Iterable, scalar].

In case I am misunderstanding something, and because I am not sure what the intended defaults were originally, I am just raising an issue here for now!


Thanks!

Setting friction for generated objects

The MujocoGeneratedObject class supports directly setting a vector of friction coefficients, but subclasses such as BoxObject (see here) instead convert the friction parameter to a friction_range that is just used by the superclass to sample a translational friction value, keeping the other friction parameters at their default values. The solution is to make sure that the friction argument is passed through from the subclasses to the super calls.
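A sketch of the proposed fix, assuming the superclass accepts the full friction vector as described above (the subclass name and signature are illustrative, not the current API):

from robosuite.models.objects import MujocoGeneratedObject


class FrictionBoxObject(MujocoGeneratedObject):
    """Illustrative subclass that forwards the full friction vector."""

    def __init__(self, size, rgba=None, friction=None, **kwargs):
        # Proposed behaviour: pass (sliding, torsional, rolling) friction
        # straight through instead of collapsing it to a sampled range.
        super().__init__(size=size, rgba=rgba, friction=friction, **kwargs)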

Setting object positions during simulation

Hi,

I'd like to set object positions during simulation; however, mujoco-py only supports setting the complete qpos vector.

Is there a way to get the qpos-address of the object joints? Or is there another way to modify object positions/orientations?

Thanks!
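For what it's worth, mujoco-py does expose per-joint access on top of the flat qpos vector, so something along these lines may work (the joint name "cube" is illustrative; a free joint's qpos holds 7 values, xyz position followed by a wxyz quaternion):

import numpy as np

# Assumes `env` is an existing robosuite environment whose object of interest
# has a free joint named "cube".
sim = env.sim

start, end = sim.model.get_joint_qpos_addr("cube")   # slice into sim.data.qpos
print("qpos indices for the object joint:", start, end)

# Overwrite position (x, y, z) and orientation (w, x, y, z) in one call,
# then propagate the new state without stepping the dynamics.
sim.data.set_joint_qpos("cube", np.array([0.5, 0.0, 0.9, 1.0, 0.0, 0.0, 0.0]))
sim.forward()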

Coordinate conversion

Great work! The repo provides images (256x256) as extracted observations. However, I'm encountering some difficulties performing coordinate conversions from the 3D scene to the 2D image and vice versa. For example, the table in the scene is defined in the MJCF model with pos="0.5, 0.3, 0.8" and a quat; I want to extract the 2D image of this particular region, and after performing some detections on objects in this cropped region, transfer the 2D coordinates back to the 3D scene in pos and quat format. Can you provide some guidance on how to do this?

Many thanks!
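There isn't a built-in helper for this in the versions discussed here, to my knowledge, but the pieces are all on the sim object. A rough sketch of the 3D-to-2D direction, assuming a pinhole model built from the camera's fovy (MuJoCo cameras look along their local -z axis; depending on how the offscreen buffer is read out, the result may still need a vertical flip):

import numpy as np

def world_to_pixel(sim, point_world, camera_name, width, height):
    """Project a world-frame point into (u, v) pixel coordinates (sketch)."""
    cam_id = sim.model.camera_name2id(camera_name)
    fovy = sim.model.cam_fovy[cam_id]
    focal = 0.5 * height / np.tan(np.deg2rad(fovy) / 2.0)

    # Extrinsics: cam_xmat maps camera-frame vectors into the world frame.
    rot = sim.data.cam_xmat[cam_id].reshape(3, 3)
    pos = sim.data.cam_xpos[cam_id]
    p_cam = rot.T @ (np.asarray(point_world) - pos)

    # Perspective divide; z is negative in front of the camera.
    u = width / 2.0 + focal * p_cam[0] / (-p_cam[2])
    v = height / 2.0 - focal * p_cam[1] / (-p_cam[2])
    return u, v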

Support for MuJoCo 2.0

It would be nice if robosuite officially supported MuJoCo 2.0. I've run most of the existing environments with MuJoCo 2.0 and they seem to run fine. I think it's possible that the upgrade is just a matter of bumping up the mujoco version in requirements.txt and setup.py, with no changes needed in the robosuite code base.

I'd be happy to submit a PR for this.

Compatibility with Python 3.6

Are there any plans to make robosuite compatible with Python 3.6? TensorFlow doesn't offer support for Python 3.7, and I want to use both together.

Increasing Simulation speed

Hi,

Simulation speed with the standard parameters is currently very slow.
A single env.step() takes about 0.15 seconds.

Is there a parameter that allows speeding this up? I know there are MuJoCo solver parameters, like the number of iterations taken at every step. Is that exposed anywhere in robosuite?

Thanks!
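As far as I know these solver options aren't exposed through the robosuite API in this version, but they can be changed on the underlying mujoco-py model after the environment is created. A sketch (the values are illustrative; lowering the iteration count or raising the timestep trades physics accuracy for speed):

import robosuite as suite

env = suite.make(
    "SawyerLift",
    has_renderer=False,
    has_offscreen_renderer=False,
    use_camera_obs=False,
)

# MuJoCo solver options live on the model.
opt = env.sim.model.opt
print("timestep:", opt.timestep, "solver iterations:", opt.iterations)
opt.iterations = 20   # fewer constraint-solver iterations per step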

get_model StringIO usage

In robosuite/models/base.py, why does the get_model() function use StringIO? Can't load_model_from_xml directly take a string? The current way appears to create a new .xml file under /tmp/ every time it is called.

def get_model(self, mode="mujoco_py"):
        """
        Returns a MjModel instance from the current xml tree.
        """

        available_modes = ["mujoco_py"]
        with io.StringIO() as string:
            string.write(ET.tostring(self.root, encoding="unicode"))
            if mode == "mujoco_py":
                from mujoco_py import load_model_from_xml

                model = load_model_from_xml(string.getvalue())
                return model
            raise ValueError(
                "Unkown model mode: {}. Available options are: {}".format(
                    mode, ",".join(available_modes)
                )
            )
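For reference, load_model_from_xml does accept an XML string directly, so the StringIO detour could likely be dropped; a simplified sketch of the same logic as a standalone helper:

import xml.etree.ElementTree as ET

from mujoco_py import load_model_from_xml


def model_from_tree(root):
    """Build an MjModel directly from an ElementTree root, without StringIO."""
    xml_string = ET.tostring(root, encoding="unicode")
    return load_model_from_xml(xml_string)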

Failed to get image observation

Hi,

I think this is related to issue #17 and probably has to do with environment configuration. I am on Ubuntu 18.04 and followed all the installation instructions. I was able to run the demo successfully. However, in the tutorial, whenever use_camera_obs is set to True, I get the following error message:

File "/home/robosuite/robosuite/environments/base.py", line 149, in reset
return self._get_observation()
File "/home/robosuite/robosuite/environments/sawyer_lift.py", line 274, in _get_observation
depth=self.camera_depth,
File "mjsim.pyx", line 149, in mujoco_py.cymj.MjSim.render
File "mjsim.pyx", line 151, in mujoco_py.cymj.MjSim.render
File "mjrendercontext.pyx", line 43, in mujoco_py.cymj.MjRenderContext.init
File "mjrendercontext.pyx", line 108, in mujoco_py.cymj.MjRenderContext._setup_opengl_context
File "opengl_context.pyx", line 128, in mujoco_py.cymj.OffscreenOpenGLContext.init
RuntimeError: Failed to initialize OpenGL

I am just wondering if you know what is causing this problem.

Thanks!

How can I move the objects as I wish in this environment?

Hi, thank you very much for providing a complete robotic arm simulation.
However, I now want to capture more varied pictures of the scene state, and I don't know how to move the objects (e.g. the bottles and milk cartons).
I tried to modify the location of the random initialization, but the change was not obvious.
Perhaps this project is too big for me to review all the details, especially since my English is not strong, and maybe I simply could not find your example programs.
Could you please point me in the right direction to learn how to move those objects, whether in your project or in mujoco-py?

Running the demo gives an error

Thanks for your work, it is very exciting for robotics. When I run the demo I get an error; could you help me? Thanks very much!
GLFW error (code %d): %s 65544 b'X11: RandR gamma ramp support seems broken' Creating window glfw Creating window glfw

Replaying demos using actions doesn't work

I'm trying to replay some demos in various tasks and environments. While they work when setting use-actions to False, they always fail when setting it to True.
Is there a solution? Is it caused by some source of non-determinism?

placement initializer x and y ranges ignored in SawyerNutAssembly

There is a bug where the x_range and y_range arguments are ignored by the UniformRandomPegsSampler placement initializer - probably because it needs to place multiple object types (square and round nuts); see here.

This placement initializer should probably take dictionaries as input to specify ranges per object type and use the input dictionary appropriately.
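A minimal sketch of what the proposed dictionary input could look like (object names and ranges are illustrative only, not an existing API):

# Hypothetical per-object-type placement ranges for SawyerNutAssembly.
placement_ranges = {
    "SquareNut": {"x_range": (-0.10, 0.10), "y_range": (0.10, 0.20)},
    "RoundNut":  {"x_range": (-0.10, 0.10), "y_range": (-0.20, -0.10)},
}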

Is it possible to manipulate soft bodies with robosuite?

Hi,
I find the robosuite framework very cool. Does it support manipulating soft objects like cloth and ropes? As far as I understand, MuJoCo does support soft object manipulation. Does that support come out of the box in robosuite? If not, any ideas on how to do it?
Thanks!

Memory leak in the IK controller?

Hello!

I would like to use the ik_wrapper and the sawyer_ik_controller in order to control the robot in end-effector pose, but I am noticing that my environment consumes more and more memory over time!

Because I want to use distributed environments in order to speed up learning, my program ends up running out of memory.

I am wondering if this could be a memory leak somewhere in the controller. The only thing that I could think of is pybullet (looking at the code, I don't see where else this could be coming from).

Any ideas? Or perhaps a recommendation on what other packages I could use to replace pybullet for the inverse kinematics?

Many Thanks,
Eugene

Support Bullet Physics

While MuJoCo has advantages, $500 per year is a very high barrier to entry for many. It will severely limit any organic community of tutorials and blogs which could spring up around this repository.

Please consider support for an open source and free engine such as Bullet.

Robot appears differently in window rendering vs offscreen rendering from camera

Hi,

I am trying to generate camera observations of rollouts of a policy trained with Surreal.
However, I found that the Sawyer robot rendered onscreen with env.unwrapped.render() and the observation rendered with env.unwrapped.sim.render() look different. More specifically, the one rendered in the simulation window has more detail, whereas the one rendered in offscreen mode does not resemble the real look of the robot:

[comparison of the onscreen and offscreen renderings]

Is there a way to obtain the more realistic-looking robot in the observation images?

Thanks

Why do I get some dark images?

Hi,
When I try to save some images, I find that some of them come out black among the otherwise normal images.
I want to know how I can avoid this, and whether it still happens if I don't render the environment.
The dark images:
[image_10, image_20]

The normal image:
[image_0]

thanks a lot!

Action data of human demo

Hi,

I'm generating demo data through the "collect_human_demonstrations" script file. I have a question related to this.

Can I get the data corresponding to the robot actions in the demo (e.g., the joint torques)?

Velocity limits for Sawyer model

I was wondering how you came up with the velocity limits and kv value for the velocity actuators that are defined in the Sawyer model. I've checked the real robot's velocity limits, but they don't match your values. Also, how did you set the kv gains for MuJoCo?

Sensor Reading is Brittle in Some Environments

In some environments (which don't look to have been merged yet, such as the wiping environments in the vices_19 branch), data from the sensor with id sensor_id is referenced using env.sim.data.sensordata[sensor_id*3:sensor_id*3+3], where the magic number 3 is used based on the assumption that all previous sensors have 3 values associated with them. If sensors with different dimensions are added (e.g. 1-dim contact sensors), the returned values will be wrong.

Note: this is due to the fact that Mujoco/mujoco-py simply puts all sensor data in a single array in env.sim.data.sensordata (in the order that the sensors were added).

We will eventually need some way of tracking the kinds of sensors that have been added to mitigate the issue; filing the issue here for reference in the meantime. Users will have to manually keep track of what sensors are used and be careful with how they reference them.
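Until such tracking exists, a more robust way to slice sensordata is to use the address and dimension tables MuJoCo already stores on the model, e.g. (a sketch; sensor_id is assumed to be a valid sensor index):

def read_sensor(sim, sensor_id):
    """Return one sensor's values regardless of the dimensions of earlier sensors."""
    adr = sim.model.sensor_adr[sensor_id]   # start index into sim.data.sensordata
    dim = sim.model.sensor_dim[sensor_id]   # number of values this sensor produces
    return sim.data.sensordata[adr:adr + dim].copy()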

Missing and Broken Documentation

I read through all the documentation, and a lot of details necessary to run and contribute to this project appear to be missing, so I'd appreciate your help. Will the documentation be expanded at some point?

Examples of missing info for robosuite:

  1. The main README.md does not seem to contain a link to surreal itself.
  2. Could the quickstart be updated to cover the steps from a fresh single machine to running a demo? The current quickstart assumes everything is already installed and working.
  3. The link to code in creating_environment.md is broken
  4. Some examples seem to be in the scripts folder, could you add an overview of their purpose?
  5. The scripts folder isn't mentioned in the main readme.md
  6. The existence of action_spec() and environment_spec() is only mentioned in a separate repository. The specs are dictionaries, but what do those specs typically contain? Are there recommended strings?

Thanks for your help!

position of object does not change when I follow 'How to build a custom environment'

Hello, thank you very much for the repo.
When I followed the steps of 'How to build a custom environment', I found a problem. In the 'adding the object' part, the example code is:

from robosuite.models.objects import BoxObject
from robosuite.utils.mjcf_utils import new_joint

object_mjcf = BoxObject()
world.merge_asset(object_mjcf)

obj = object_mjcf.get_collision(name="box_object", site=True)
obj.append(new_joint(name="box_object", type="free"))
obj.set("pos", [0, 0, 0.5])
world.worldbody.append(obj)

At this point I tried to change the position of the object, i.e. change obj.set("pos", "0 0 0.5") to other values, but the rendering result shows that the position of the object remains unchanged (at the bottom of the table, as shown in the picture attached).
Has anybody else faced this problem, and could you please give me some advice? Thank you very much!
[screenshot of the rendered scene]

Share source code for Real Sawyer Robot interface

robosuite is an amazing open-source codebase that helps researchers a lot in benchmarking their algorithms. I really appreciate your effort and kindness in sharing this source code. Could you share the API interface for working with the real Sawyer robot? That would also help people a lot to see algorithms working on the real robot. Currently, I am working with a real Sawyer robot. I really look forward to you sharing it.

Thank you

Renderer not working in tutorial

Following the README.md
When I run $ python robosuite/demo.py, everything works fine. However, under the Quick Start section of your README.md, when I run:

import numpy as np
import robosuite as suite

# create environment instance
env = suite.make("SawyerLift", has_renderer=True)

# reset the environment
env.reset()

for i in range(1000):
    action = np.random.randn(env.dof)  # sample random action
    obs, reward, done, info = env.step(action)  # take action in the environment
    env.render()  # render on display

I get a blank (black) window that appears for three seconds before vanishing. I don't see any images or animations. I find that strange since running the demo.py worked fine.

Rendering problem

Hi,
Thank you for providing this benchmarking framework.

I have a couple of inconveniences with using this framework.

First, when rendering via env.render(), the rendering window is re-opened whenever the environment is reset. Is there a solution for this?

Second, a GLFW missing error occurs if the has_offscreen_renderer option is set to True while the has_renderer option is False. If both options are True, the rendering window blinks.

Thank you.

Getting the top_site and bottom_site parameters

Hi,

I am trying to load different objects into your amazing environment to train a general-purpose grasper. For that I have to provide a different XML for each object. In these XMLs there are "site" parameters that specify the topmost and bottommost parts of the object relative to its center; looking at those values, they are only computed in the z-direction. But when I use the models provided in the meshes of this repository and try to compute the top site and bottom site myself, I do not get the same answers as those written in the XML files. My procedure is: take the mean of the STL vertices as the center, subtract this center so everything is expressed relative to it, and then take the min and max in the z-direction to get the bottom site and top site.
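For reference, the procedure described above can be written in a few lines, e.g. with trimesh (the mesh path is illustrative, and this reproduces the computation described above rather than however the shipped XML values were generated):

import numpy as np
import trimesh

mesh = trimesh.load("meshes/can.stl")          # illustrative path
center = mesh.vertices.mean(axis=0)            # vertex centroid as the "center"
verts = mesh.vertices - center                 # vertices relative to the center

bottom_site = np.array([0.0, 0.0, verts[:, 2].min()])
top_site = np.array([0.0, 0.0, verts[:, 2].max()])
print("bottom_site:", bottom_site, "top_site:", top_site)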

Maybe I am doing something wrong, any help regarding this topic would be great.

Thanks

[BUG] Off-screen render images are incorrectly mirrored

I've found that images rendered off-screen by mujoco_py (with sim.render) are upside down and mirrored when compared to images rendered by the MuJoCo viewer. While the former issue can be fixed by adjusting the camera transform, the latter can only be fixed by flipping the image buffers in code.

For example, below is the default image returned by the sawyer_pick_place environment, a rotated version of it for convenience, and the image shown by the MuJoCo viewer. Note that the bin is mirrored between the rotated and "real" images.

I believe the MuJoCo viewer's rendering is correct, as the right Baxter gripper appears on the robot's left in the sim rendering. To fix this issue I flipped the image buffer along its width in code, though there may be a better way to resolve this.

[Images: raw sim render (raw_sim), the same image rotated 180 degrees (rotated_sim), and the viewer rendering (viewer_resize)]
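In the meantime, the flip described above is a one-liner in NumPy (assuming env is a robosuite environment; frontview is one of the stock camera names):

import numpy as np

img = env.sim.render(width=256, height=256, camera_name="frontview")
img_upright = np.flipud(img)      # undo the upside-down rendering
img_unmirrored = np.fliplr(img)   # undo the left-right mirroring described above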

Playback Demonstrations Failed

I want to replay the demonstrations.

However, when I "python playback_demonstrations_from_hdf5.py --folder ../models/assets/demonstrations/SawyerPickPlace/ --use-actions" under the robosuite/robosuite/scripts/, the program failed the assertion and the robot can't grab the things.
Screenshot from 2020-03-30 12-44-51
Screenshot from 2020-03-30 12-43-14

It failed with both mujoco-py==2.0.2.2 & robosuite==0.3.0 and mujoco-py==1.50.1.68 & robosuite==0.2.0.
Could you give any suggestions?

BTW, when I installed surreal, it says
"ERROR: surreal 0.2.1 has requirement mujoco-py<1.50.2,>=1.50.1, but you'll have mujoco-py 2.0.2.2 which is incompatible."
"ERROR: robosuite 0.3.0 has requirement mujoco-py==2.0.2.2, but you'll have mujoco-py 1.50.1.68 which is incompatible."
Does this mean RoboTurk collected the demonstrations using mujoco-py 1.50.1.68, and that they may be incompatible with the current robosuite?

reset_from_xml_string bug

The function reset_from_xml_string in base.py doesn't call _reset_internal, which can be problematic when subclasses expect this to be called on every env reset. We should factor MjSim creation outside of _reset_internal and then call it in reset_from_xml_string.

Offscreen render returns black images (zero arrays) as episodes increase

As the number of episodes increases, at a fixed step obs starts returning black images and keeps returning them until the end of the entire episode.

Very strangely, there are no problems for more than a hundred episodes at the beginning; the resulting images are normal and clear.
Then, once a bad image appears, it keeps appearing regularly!

In https://github.com/SurrealAI/surreal/blob/da705c02a243dbc7709c6002a02f1f8df6007674/surreal/main/ddpg_configs.py#L133
I found that your team has also encountered similar bad images; I really want to know how you avoided it.

env = suite.make(
    'SawyerLift',
    has_renderer=True,
    use_camera_obs=True,
    camera_depth=False,
    ignore_done=False,
    render_visual_mesh=False,
    reward_shaping=True,
    camera_height=Origin_size,
    camera_width=Origin_size,
    camera_name=camera_name,
    control_freq=10,
    reach_flag=False,
)
