nicrusso7 / rex-gym
OpenAI Gym environments for an open-source quadruped robot (SpotMicro)
License: Apache License 2.0
Hi, I ran into an error when running the demo walk; rex-gym installed successfully on my Windows machine.
Could you please tell me how to solve this problem?
Thank you!
PS F:\program_study\quadruped_robot\rex-gym> rex-gym policy --env walk
pybullet build time: Oct 24 2021 14:53:06
RuntimeError: module compiled against API version 0xe but this version of numpy is 0xd
Traceback (most recent call last):
File "E:\program\Anaconda3\envs\pytorch\lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "E:\program\Anaconda3\envs\pytorch\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "E:\program\Anaconda3\envs\pytorch\Scripts\rex-gym.exe\__main__.py", line 7, in <module>
File "E:\program\Anaconda3\envs\pytorch\lib\site-packages\click\core.py", line 1137, in __call__
return self.main(*args, **kwargs)
File "E:\program\Anaconda3\envs\pytorch\lib\site-packages\click\core.py", line 1062, in main
rv = self.invoke(ctx)
File "E:\program\Anaconda3\envs\pytorch\lib\site-packages\click\core.py", line 1668, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "E:\program\Anaconda3\envs\pytorch\lib\site-packages\click\core.py", line 1404, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "E:\program\Anaconda3\envs\pytorch\lib\site-packages\click\core.py", line 763, in invoke
return __callback(*args, **kwargs)
File "E:\program\Anaconda3\envs\pytorch\lib\site-packages\rex_gym\cli\entry_point.py", line 31, in policy
PolicyPlayer(env, args, signal_type).play()
File "E:\program\Anaconda3\envs\pytorch\lib\site-packages\rex_gym\playground\policy_player.py", line 29, in play
config = utility.load_config(policy_dir)
File "E:\program\Anaconda3\envs\pytorch\lib\site-packages\rex_gym\agents\scripts\utility.py", line 197, in load_config
config = yaml.load(file_)
File "E:\program\Anaconda3\envs\pytorch\lib\site-packages\ruamel\yaml\main.py", line 951, in load
return loader._constructor.get_single_data()
File "E:\program\Anaconda3\envs\pytorch\lib\site-packages\ruamel\yaml\constructor.py", line 113, in get_single_data
return self.construct_document(node)
File "E:\program\Anaconda3\envs\pytorch\lib\site-packages\ruamel\yaml\constructor.py", line 118, in construct_document
data = self.construct_object(node)
File "E:\program\Anaconda3\envs\pytorch\lib\site-packages\ruamel\yaml\constructor.py", line 146, in construct_object
data = self.construct_non_recursive_object(node)
File "E:\program\Anaconda3\envs\pytorch\lib\site-packages\ruamel\yaml\constructor.py", line 183, in construct_non_recursive_object
data = constructor(self, tag_suffix, node)
File "E:\program\Anaconda3\envs\pytorch\lib\site-packages\ruamel\yaml\constructor.py", line 1013, in construct_python_object_new
return self.construct_python_object_apply(suffix, node, newobj=True)
File "E:\program\Anaconda3\envs\pytorch\lib\site-packages\ruamel\yaml\constructor.py", line 995, in construct_python_object_apply
value = self.construct_mapping(node, deep=True)
File "E:\program\Anaconda3\envs\pytorch\lib\site-packages\ruamel\yaml\constructor.py", line 440, in construct_mapping
return BaseConstructor.construct_mapping(self, node, deep=deep)
File "E:\program\Anaconda3\envs\pytorch\lib\site-packages\ruamel\yaml\constructor.py", line 255, in construct_mapping
value = self.construct_object(value_node, deep=deep)
File "E:\program\Anaconda3\envs\pytorch\lib\site-packages\ruamel\yaml\constructor.py", line 146, in construct_object
data = self.construct_non_recursive_object(node)
File "E:\program\Anaconda3\envs\pytorch\lib\site-packages\ruamel\yaml\constructor.py", line 188, in construct_non_recursive_object
for _dummy in generator:
File "E:\program\Anaconda3\envs\pytorch\lib\site-packages\ruamel\yaml\constructor.py", line 723, in construct_yaml_map
value = self.construct_mapping(node)
File "E:\program\Anaconda3\envs\pytorch\lib\site-packages\ruamel\yaml\constructor.py", line 440, in construct_mapping
return BaseConstructor.construct_mapping(self, node, deep=deep)
File "E:\program\Anaconda3\envs\pytorch\lib\site-packages\ruamel\yaml\constructor.py", line 255, in construct_mapping
value = self.construct_object(value_node, deep=deep)
File "E:\program\Anaconda3\envs\pytorch\lib\site-packages\ruamel\yaml\constructor.py", line 146, in construct_object
data = self.construct_non_recursive_object(node)
File "E:\program\Anaconda3\envs\pytorch\lib\site-packages\ruamel\yaml\constructor.py", line 183, in construct_non_recursive_object
data = constructor(self, tag_suffix, node)
File "E:\program\Anaconda3\envs\pytorch\lib\site-packages\ruamel\yaml\constructor.py", line 995, in construct_python_object_apply
value = self.construct_mapping(node, deep=True)
File "E:\program\Anaconda3\envs\pytorch\lib\site-packages\ruamel\yaml\constructor.py", line 440, in construct_mapping
return BaseConstructor.construct_mapping(self, node, deep=deep)
File "E:\program\Anaconda3\envs\pytorch\lib\site-packages\ruamel\yaml\constructor.py", line 255, in construct_mapping
value = self.construct_object(value_node, deep=deep)
File "E:\program\Anaconda3\envs\pytorch\lib\site-packages\ruamel\yaml\constructor.py", line 146, in construct_object
data = self.construct_non_recursive_object(node)
File "E:\program\Anaconda3\envs\pytorch\lib\site-packages\ruamel\yaml\constructor.py", line 188, in construct_non_recursive_object
for _dummy in generator:
File "E:\program\Anaconda3\envs\pytorch\lib\site-packages\ruamel\yaml\constructor.py", line 717, in construct_yaml_seq
data.extend(self.construct_sequence(node))
File "E:\program\Anaconda3\envs\pytorch\lib\site-packages\ruamel\yaml\constructor.py", line 211, in construct_sequence
return [self.construct_object(child, deep=deep) for child in node.value]
File "E:\program\Anaconda3\envs\pytorch\lib\site-packages\ruamel\yaml\constructor.py", line 211, in <listcomp>
return [self.construct_object(child, deep=deep) for child in node.value]
File "E:\program\Anaconda3\envs\pytorch\lib\site-packages\ruamel\yaml\constructor.py", line 146, in construct_object
data = self.construct_non_recursive_object(node)
File "E:\program\Anaconda3\envs\pytorch\lib\site-packages\ruamel\yaml\constructor.py", line 183, in construct_non_recursive_object
data = constructor(self, tag_suffix, node)
File "E:\program\Anaconda3\envs\pytorch\lib\site-packages\ruamel\yaml\constructor.py", line 913, in construct_python_name
return self.find_python_name(suffix, node.start_mark)
File "E:\program\Anaconda3\envs\pytorch\lib\site-packages\ruamel\yaml\constructor.py", line 898, in find_python_name
mark,
ruamel.yaml.constructor.ConstructorError: while constructing a Python object
cannot find 'walk_env.RexWalkEnv' in the module 'rex_gym.envs.gym'
in "E:\program\Anaconda3\envs\pytorch\lib\site-packages\rex_gym/policies/walk/ik\config.yaml", line 7, column 7
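The ConstructorError at the bottom is ruamel.yaml failing to resolve the dotted name stored in the policy's config.yaml back to a Python object. Roughly, a `!!python/name:module.attr` tag is resolved by importing the module and walking the attribute chain; the sketch below mimics that with the stdlib (the helper name and the `os.path.join` target are illustrative, not rex-gym code):

```python
# Sketch of how a !!python/name:module.attr YAML tag is resolved: import the
# module, then walk the attribute chain. If a step fails, ruamel.yaml wraps
# the failure in the "cannot find ... in the module ..." ConstructorError.
import importlib

def resolve_python_name(module_name, dotted_attr):
    obj = importlib.import_module(module_name)
    for part in dotted_attr.split('.'):
        if not hasattr(obj, part):
            raise LookupError(f"cannot find {part!r} in the module {module_name!r}")
        obj = getattr(obj, part)
    return obj

print(resolve_python_name('os.path', 'join').__name__)  # prints: join
```

So the failure usually indicates a version mismatch: the installed rex_gym.envs.gym package no longer exposes the class under the name recorded in the shipped config.yaml.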
Hi,
I'm trying to add a visual signal of the terrain to the agent.
While rendering in offscreen mode (I set the render option to False when initializing the environment), I found that the terrain became invisible, but the robot looks fine. Do you have any idea how I could render the terrain without the GUI?
Thanks!
Just a few general questions:
Firstly, well done on what I have read so far. I am trying to install it and followed your instructions; however, I received an error:
pip install rex_gym
Collecting rex_gym
Using cached https://files.pythonhosted.org/packages/98/1c/6d979037d598d3ddd410908eba0bbcf76b464ebd22143a8d9682c3a9af26/rex_gym-0.1.7.tar.gz
ERROR: Command errored out with exit status 1:
command: 'C:\Users\user.conda\envs\rex\python.exe' -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\Users\user\AppData\Local\Temp\pip-install-jqupm9ki\rex-gym\setup.py'"'"'; __file__='"'"'C:\Users\user\AppData\Local\Temp\pip-install-jqupm9ki\rex-gym\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base 'C:\Users\user\AppData\Local\Temp\pip-install-jqupm9ki\rex-gym\pip-egg-info'
cwd: C:\Users\user\AppData\Local\Temp\pip-install-jqupm9ki\rex-gym
Complete output (9 lines):
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Users\user\AppData\Local\Temp\pip-install-jqupm9ki\rex-gym\setup.py", line 37, in <module>
'': [f for f in copy_assets('policies')] + [a for a in copy_assets('util')]
File "C:\Users\user\AppData\Local\Temp\pip-install-jqupm9ki\rex-gym\setup.py", line 37, in <listcomp>
'': [f for f in copy_assets('policies')] + [a for a in copy_assets('util')]
File "C:\Users\user\AppData\Local\Temp\pip-install-jqupm9ki\rex-gym\setup.py", line 16, in copy_assets
yield os.path.join(dirpath.split('/', 1)[1], f)
IndexError: list index out of range
----------------------------------------
ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.
Have you seen this error before? I hope you can help; looking forward to your review.
--RW
Hi,
After I fixed the headless rendering issue I had another problem.
While running the policy_player in headless mode, I set the render option to False when initializing the environment and used a gym.wrappers.monitoring.video_recorder to record the scene.
Here you can see the robot walking smoothly backwards (walk environment, on the mounts terrain).
Another case (gallop, plane), but this time it is turning around.
I tried enabling the sleep control in rex-gym/rex_gym/envs/rex_gym_env.py (lines 387 to 398 in 2666304), but the problem still exists. Any ideas?
Thanks!
Hello,
Thanks for the project, it looks awesome.
I've been trying to use Stable-Baselines3 on it (we created a fork to register the gym env: https://github.com/osigaud/rex-gym)
and could train an agent on it; however, after training, or when using a second env for testing, we could not reproduce the results.
Do you know what can change between two instantiations of the environment?
It seems that the observation provided to the agent is somehow quite different from the one seen during training.
(We are testing a simple walk forward on a plane.)
I'm using the RL Zoo to train the agent (and to rule out any mistake on my part). It works perfectly with other pybullet envs (e.g. with HalfCheetahBulletEnv-v0) but not with rex-gym :/
Additionally, it seems that the env is not deterministic; could you confirm? And do you know why?
PS: if needed I can provide a minimal example to reproduce the issue
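A quick way to probe the determinism question above is to seed the env twice and compare full rollouts. Here is a generic sketch with a toy stand-in env; rex-gym's actual reset/seed API is the assumption you would adapt:

```python
import random

class ToyEnv:
    """Stand-in for a gym-style env; the seeding API mirrors modern gym."""
    def reset(self, seed=None):
        self.rng = random.Random(seed)
        return self.rng.random()

    def step(self, action):
        # Toy transition: next observation depends only on rng state and action.
        return self.rng.random() + action

def rollout(env, seed, actions):
    obs = [env.reset(seed=seed)]
    obs.extend(env.step(a) for a in actions)
    return obs

env = ToyEnv()
assert rollout(env, 42, [0.1] * 5) == rollout(env, 42, [0.1] * 5)  # deterministic
assert rollout(env, 1, [0.1] * 5) != rollout(env, 2, [0.1] * 5)    # seed matters
```

If the same check fails on rex-gym, some state (e.g. randomized terrain or initial pose) is likely drawn from an unseeded source at construction time.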
Hi, I'm using Windows 11 (OS build 22621.674) and running this in a virtual env on Anaconda. I installed rex-gym from source (not PyPI).
Then I faced the same problem as this one and tried pip install --no-dependencies .
but I'm still stuck on pip's dependency problems.
The install seems to have succeeded, but in fact a lot of modules that rex-gym needs, such as gym, were not installed.
I tried to install them manually, but I couldn't find appropriate versions.
How do you solve the dependency problems? Please let me know. Thank you.
Hi, I am looking to implement a custom object detection algorithm in rex-gym. Can you please tell me how I can do so?
Using the code, I can train a lot of policies, but how do I choose the best one? In other words, what kind of standard can be used to judge a trained policy?
Thanks!
Hello, I have a question: if I train with the GPU, the training process gets stuck, and I don't know why; but if I train with the CPU, everything is OK.
I don't understand why.
Can you give an example of training a policy directly from the command line?
I've tried this but it doesn't seem to produce anything in the logging directory:
rex-gym train --playground True --env walk --log-dir ~/rex-gym/logging
Am I missing some arguments?
Hello, I found a performance issue in the definition of append in rex_gym/agents/ppo/memory.py: tf.stack and tf.gather will be evaluated repeatedly during program execution, reducing efficiency. I think they should be created before the loop (with) in append.
Looking forward to your reply. Btw, I would be very glad to create a PR to fix it if you are too busy.
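The suggested fix is standard loop-invariant hoisting: in TF1's graph mode, creating tf.stack/tf.gather inside a loop adds fresh ops on every call. The same idea can be illustrated with numpy (the buffer layout and function names below are illustrative, not the actual memory.py code):

```python
import numpy as np

episodes = np.random.rand(8, 32)  # stand-in for the stored episode buffers

def append_slow(rows):
    out = []
    for r in rows:
        gathered = np.stack([episodes[i] for i in range(4)])  # rebuilt every pass
        out.append(gathered.sum(axis=0) + r)
    return out

def append_fast(rows):
    gathered = np.stack([episodes[i] for i in range(4)])      # hoisted: built once
    return [gathered.sum(axis=0) + r for r in rows]

rows = [np.random.rand(32) for _ in range(100)]
assert all(np.allclose(a, b) for a, b in zip(append_slow(rows), append_fast(rows)))
```

Both versions compute the same values; only the number of times the invariant work runs changes.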
Hi @nicrusso7,
Thank you for this fantastic effort to open-source work in this niche field. I was browsing through your source code as well as the research paper that inspired this work, and made some comparisons with more recent advancements in Deep Reinforcement Learning for robotic quadrupeds.
How much effort is required to use this existing work to train another robotic quadruped with an entirely different URDF? For instance, how many changes would we need to make in order to train the ANYmal C robot (URDF here)?
If adapting the existing framework to accommodate other types of quadrupeds is simple, how much effort is required to update the approach to match current ones? For instance, the work shown by ETH Zurich's Robotic Systems Lab? (For one, they use RaiSim while this repository uses the OpenAI Gym environment.)
I am really excited about the prospect of Legged robots, and would love to invest the time to furthering the work in this repository. Looking forward to hearing from you!
Regards,
Derek
I want to train the robot to learn to walk from scratch, but I don't know what command or steps achieve this.
Kindly let me know what to do.
The following errors are reported after installing and running:
(rex) lrh@lrh:~/桌面/rex-gym$ rex-gym policy -e walk
/home/lrh/anaconda3/envs/rex/lib/python3.7/site-packages/tensorflow_core/python/pywrap_tensorflow_internal.py:15: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
import imp
/home/lrh/anaconda3/envs/rex/lib/python3.7/site-packages/tensorflow_core/python/framework/dtypes.py:597: DeprecationWarning: np.object is a deprecated alias for the builtin object. To silence this warning, use object by itself. Doing this will not modify any behavior and is safe.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
np.object,
/home/lrh/anaconda3/envs/rex/lib/python3.7/site-packages/tensorflow_core/python/framework/dtypes.py:605: DeprecationWarning: np.bool is a deprecated alias for the builtin bool. To silence this warning, use bool by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use np.bool_ here.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
np.bool,
/home/lrh/anaconda3/envs/rex/lib/python3.7/site-packages/tensorflow_core/python/framework/tensor_util.py:106: DeprecationWarning: np.object is a deprecated alias for the builtin object. To silence this warning, use object by itself. Doing this will not modify any behavior and is safe.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
np.object:
/home/lrh/anaconda3/envs/rex/lib/python3.7/site-packages/tensorflow_core/python/framework/tensor_util.py:108: DeprecationWarning: np.bool is a deprecated alias for the builtin bool. To silence this warning, use bool by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use np.bool_ here.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
np.bool:
/home/lrh/anaconda3/envs/rex/lib/python3.7/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:568: DeprecationWarning: np.object is a deprecated alias for the builtin object. To silence this warning, use object by itself. Doing this will not modify any behavior and is safe.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
(np.object, string),
/home/lrh/anaconda3/envs/rex/lib/python3.7/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:569: DeprecationWarning: np.bool is a deprecated alias for the builtin bool. To silence this warning, use bool by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use np.bool_ here.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
(np.bool, bool),
/home/lrh/anaconda3/envs/rex/lib/python3.7/site-packages/tensorboard/util/tensor_util.py:100: DeprecationWarning: np.object is a deprecated alias for the builtin object. To silence this warning, use object by itself. Doing this will not modify any behavior and is safe.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
np.object: SlowAppendObjectArrayToTensorProto,
/home/lrh/anaconda3/envs/rex/lib/python3.7/site-packages/tensorboard/util/tensor_util.py:101: DeprecationWarning: np.bool is a deprecated alias for the builtin bool. To silence this warning, use bool by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use np.bool_ here.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
np.bool: SlowAppendBoolArrayToTensorProto,
Traceback (most recent call last):
File "/home/lrh/anaconda3/envs/rex/bin/rex-gym", line 8, in <module>
sys.exit(cli())
File "/home/lrh/anaconda3/envs/rex/lib/python3.7/site-packages/click/core.py", line 1130, in __call__
return self.main(*args, **kwargs)
File "/home/lrh/anaconda3/envs/rex/lib/python3.7/site-packages/click/core.py", line 1055, in main
rv = self.invoke(ctx)
File "/home/lrh/anaconda3/envs/rex/lib/python3.7/site-packages/click/core.py", line 1657, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/home/lrh/anaconda3/envs/rex/lib/python3.7/site-packages/click/core.py", line 1404, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/home/lrh/anaconda3/envs/rex/lib/python3.7/site-packages/click/core.py", line 760, in invoke
return __callback(*args, **kwargs)
File "/home/lrh/anaconda3/envs/rex/lib/python3.7/site-packages/rex_gym/cli/entry_point.py", line 31, in policy
PolicyPlayer(env, args, signal_type).play()
File "/home/lrh/anaconda3/envs/rex/lib/python3.7/site-packages/rex_gym/playground/policy_player.py", line 29, in play
config = utility.load_config(policy_dir)
File "/home/lrh/anaconda3/envs/rex/lib/python3.7/site-packages/rex_gym/agents/scripts/utility.py", line 197, in load_config
config = yaml.load(file_)
File "/home/lrh/anaconda3/envs/rex/lib/python3.7/site-packages/ruamel/yaml/main.py", line 1071, in load
return loader._constructor.get_single_data()
File "/home/lrh/anaconda3/envs/rex/lib/python3.7/site-packages/ruamel/yaml/constructor.py", line 121, in get_single_data
return self.construct_document(node)
File "/home/lrh/anaconda3/envs/rex/lib/python3.7/site-packages/ruamel/yaml/constructor.py", line 126, in construct_document
data = self.construct_object(node)
File "/home/lrh/anaconda3/envs/rex/lib/python3.7/site-packages/ruamel/yaml/constructor.py", line 154, in construct_object
data = self.construct_non_recursive_object(node)
File "/home/lrh/anaconda3/envs/rex/lib/python3.7/site-packages/ruamel/yaml/constructor.py", line 191, in construct_non_recursive_object
data = constructor(self, tag_suffix, node)
File "/home/lrh/anaconda3/envs/rex/lib/python3.7/site-packages/ruamel/yaml/constructor.py", line 955, in construct_python_object_new
return self.construct_python_object_apply(suffix, node, newobj=True)
File "/home/lrh/anaconda3/envs/rex/lib/python3.7/site-packages/ruamel/yaml/constructor.py", line 937, in construct_python_object_apply
value = self.construct_mapping(node, deep=True)
File "/home/lrh/anaconda3/envs/rex/lib/python3.7/site-packages/ruamel/yaml/constructor.py", line 445, in construct_mapping
return BaseConstructor.construct_mapping(self, node, deep=deep)
File "/home/lrh/anaconda3/envs/rex/lib/python3.7/site-packages/ruamel/yaml/constructor.py", line 261, in construct_mapping
value = self.construct_object(value_node, deep=deep)
File "/home/lrh/anaconda3/envs/rex/lib/python3.7/site-packages/ruamel/yaml/constructor.py", line 154, in construct_object
data = self.construct_non_recursive_object(node)
File "/home/lrh/anaconda3/envs/rex/lib/python3.7/site-packages/ruamel/yaml/constructor.py", line 196, in construct_non_recursive_object
for _dummy in generator:
File "/home/lrh/anaconda3/envs/rex/lib/python3.7/site-packages/ruamel/yaml/constructor.py", line 674, in construct_yaml_map
value = self.construct_mapping(node)
File "/home/lrh/anaconda3/envs/rex/lib/python3.7/site-packages/ruamel/yaml/constructor.py", line 445, in construct_mapping
return BaseConstructor.construct_mapping(self, node, deep=deep)
File "/home/lrh/anaconda3/envs/rex/lib/python3.7/site-packages/ruamel/yaml/constructor.py", line 261, in construct_mapping
value = self.construct_object(value_node, deep=deep)
File "/home/lrh/anaconda3/envs/rex/lib/python3.7/site-packages/ruamel/yaml/constructor.py", line 154, in construct_object
data = self.construct_non_recursive_object(node)
File "/home/lrh/anaconda3/envs/rex/lib/python3.7/site-packages/ruamel/yaml/constructor.py", line 191, in construct_non_recursive_object
data = constructor(self, tag_suffix, node)
File "/home/lrh/anaconda3/envs/rex/lib/python3.7/site-packages/ruamel/yaml/constructor.py", line 865, in construct_python_name
return self.find_python_name(suffix, node.start_mark)
File "/home/lrh/anaconda3/envs/rex/lib/python3.7/site-packages/ruamel/yaml/constructor.py", line 850, in find_python_name
mark,
ruamel.yaml.constructor.ConstructorError: while constructing a Python object
cannot find 'algorithm.PPOAlgorithm' in the module 'rex_gym.agents.ppo'
in "/home/lrh/anaconda3/envs/rex/lib/python3.7/site-packages/rex_gym/policies/walk/ik/config.yaml", line 3, column 14
I'm new to robot simulation in pybullet using gym and Ubuntu, and I'm confused about a few questions.
1. Through the command line the simulation can be run, which is unfamiliar to me. How can I view the source code in PyCharm? (I mean, which files contain the simulation environment, the PPO algorithm, and the other code? Is it this package, "/home/iver/anaconda3/envs/rex/lib/python3.7/site-packages/rex_gym"? And which main function corresponds to the command line? How can I run the training code from the source code?)
2. If I want to use DDPG or any other learning algorithm, which files do I need to change?
3. I used the command line to train a robot, but it failed. PS: the TensorFlow in my virtual environment is 1.15.0.
Thanks a lot if you can offer me some help!
Heyhey Nicola,
Great work with the repo and the robot!
I have a question about the servos: You're using the standard MG996R servos, right? And they don't give you any position feedback or torque feedback or literally any kind of feedback, right? I have one here and all it does is it accepts incoming PWM signals and then goes to the corresponding position.
My question is: why do you have the Rex.GetMotorTorques() function in your robot class (or rather, why is it part of your observation)? My concern is that a policy trained in simulation wouldn't transfer to the real robot if the real state doesn't contain this information.
Cheers,
Flo