
Baloo

The Baloo code is archived on Zenodo

Baloo is a design framework for network stacks based on Synchronous Transmissions (i.e., using a flooding protocol like Glossy as underlying communication primitive). Baloo is flexible enough to implement a wide variety of network layer protocols, while introducing only limited memory and energy overhead.

Using Baloo, one can relatively easily re-implement network layer protocols like the Low-power Wireless Bus, Crystal, and Sleeping Beauty.

The general concept of Baloo and its working principles have been presented in the following paper.

Synchronous Transmissions Made Easy: Design Your Network Stack with Baloo
Romain Jacob, Jonas Bächli, Reto Da Forno, Lothar Thiele
Proceedings of the 2019 International Conference on Embedded Wireless Systems and Networks (EWSN) 2019.

Unless explicitly stated otherwise, all Baloo sources are distributed under the terms of the 3-clause BSD license. This license gives everyone the right to use and distribute the code, in either binary or source code form, as long as the copyright notice is retained in the source code.

How to cite Baloo

To cite Baloo, please use the paper listed above. If you want to cite specifically the code, please use the Zenodo archive corresponding to the version you want to cite.

Baloo
Romain Jacob, Jonas Bächli, Reto Da Forno.
Version x.y. Zenodo.
http://doi.org/10.5281/zenodo.XXXXXXX

Disclaimer

Although we have tested the code extensively, Baloo is a research prototype that likely contains bugs. We take no responsibility for, and give no warranties with respect to, the use of this code.

This repository contains an implementation of Baloo using the Contiki-NG operating system. Some minor modifications to the original OS code were made. The list of modified files (with the rationale for the changes, where appropriate) is available here.

The Baloo source files contain detailed Doxygen comments. The Doxygen documentation can be generated with the following commands:

cd /tools/doxygen/
make html

This generates the complete documentation for Contiki-NG. The documentation of the Baloo files can be accessed directly at /tools/doxygen/html/group__gmw.html.


Issues

Configuring GMW_CONF_T_DATA for Chaos to reach all Flocklab nodes

Hi @romain-jacob, I have been playing extensively with Baloo+Chaos over the past few days and I think I've encountered an issue, which I managed to replicate with the following code (see below and in the attached pastebins).

I think the current maximum allowed value for GMW_CONF_T_DATA does not allow Chaos to execute correctly, i.e., to reach and aggregate data from all nodes when run on Flocklab.

If my testing is correct, a Baloo data slot currently has a maximum size of ~128 ms.

Interpreting the results from the A2/Synchrotron paper (the Flocklab tests), it seems that an individual Chaos round (a full network flood plus payload aggregation from each node) usually takes between 200 and 300 ms.

When testing on Flocklab with the maximum ~128 ms slot size, I am never able to reliably see the payload from all nodes aggregated correctly into the final message that reaches the initiator.

I have been testing mechanisms to recover from this (executing multiple Chaos rounds, keeping track locally of the Chaos flags and payload so they can be reused in the following round), but overall this yields terrible latency (~2-5 times slower than A2/Synchrotron at performing the same protocols).

Therefore I am wondering if:

  1. There is a way to extend the maximum GMW_CONF_T_DATA size.
  2. There is a way to have nodes "resume" an interrupted Chaos flood. I can reassign flags and payloads, but my understanding of Chaos is that it is always triggered by one node declared as is_initiator; when multiple nodes have the initiation flag set, the floods do not seem to occur correctly (or am I testing badly?).
  3. Does this data make sense, or is my testing completely wrong? If so, could you give a quick hint as to what my biggest misunderstanding might be?

I have linked the code and results of running on Flocklab.


The code:

This is a minor modification of the baloo-test-chaos example, in order to keep it as simple as possible.

Full baloo-test-chaos.c file.
Full project-conf.h file.
Full Flocklab results file (initiator was node 10).

As I am testing on all 26 Flocklab nodes, I modified the following:

#define NODE_LIST  \
 { 1,2,3,4,6,7,8,10,11,13,14,15,16,17,18,19,20,22,23,24,25,26,27,28,32,33 }
#define NUM_NODES                       26
#define CHAOS_CONF_PAYLOAD_LEN          4

I configured GMW_CONF_T_DATA to the maximum value, rounded down to a 10 ms boundary:

#define GMW_CONF_T_DATA                 120000UL

Greater values (such as 130000UL or 200000UL) do not compile, failing with the error:

error: large integer implicitly truncated to unsigned type
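For what it's worth, my guess at the mechanism behind this error is sketched below. It assumes the slot time (given in µs) is converted at compile time into ticks of a high-frequency timer and stored in a 16-bit field; the 512 ticks/ms rate is an assumption I picked because it matches the ~128 ms ceiling (65535 / 512 = 128 ms):

#include <stdint.h>

/* Assumed conversion: slot time in us -> 16-bit timer ticks.
 * 512 ticks/ms is a guess matching the ~128 ms ceiling:
 * 65535 ticks / 512 ticks-per-ms = 128 ms. */
#define TICKS_PER_MS      512UL
#define US_TO_TICKS(us)   ((us) * TICKS_PER_MS / 1000UL)

#define T_DATA_US         130000UL   /* 130 ms -> 66560 ticks > 65535 */

/* With -Werror, GCC rejects this initializer with:
 *   error: large integer implicitly truncated to unsigned type */
static const uint16_t t_data_ticks = US_TO_TICKS(T_DATA_US);

int main(void) { return (int)t_data_ticks; }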

When it comes to the code, I am just looking at the payload. All nodes want to aggregate their payload every round:

static gmw_skip_event_t
src_on_slot_pre_callback(uint8_t slot_index,
                         uint16_t slot_assignee,
                         uint8_t* out_len,
                         uint8_t* out_payload,
                         uint8_t is_initiator,
                         uint8_t is_contention_slot)
{
  /* Always aggregate into the payload */
  *out_payload = 1;
  *out_len = 1;
  return GMW_EVT_SKIP_DEFAULT;
}

And aggregation just sets the relevant bit in the payload:

__attribute__((always_inline))
inline void chaos_set_payload_cb(uint8_t* chaos_payload_field,
                                 uint16_t node_index,
                                 uint8_t* payload)
{
  if(*payload) {
    chaos_payload_field[node_index / 8] |= (1 << (node_index % 8));
  }
}

Results are printed simply by inspecting the contents of the payload to see whether all nodes have replied:

static gmw_repeat_event_t
src_on_slot_post_callback(uint8_t slot_index,
                          uint16_t slot_assignee,
                          uint8_t len,
                          uint8_t* payload,
                          uint8_t is_initiator,
                          uint8_t is_contention_slot,
                          gmw_pkt_event_t event)
{
  DEBUG_PRINT_INFO("payload: %u %u %u %u", payload[0], payload[1], payload[2], payload[3]);
  return GMW_EVT_REPEAT_DEFAULT;
}

A correct Chaos round would have reached all 26 nodes, yielding the following output on the initiator (node ID 10):

payload: 255 255 255 3
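For reference, this expected bitmap can be reproduced with a few lines of standalone C, mirroring the chaos_set_payload_cb logic above (illustrative only):

#include <stdint.h>
#include <stdio.h>

/* Standalone check: set one bit per node index, as
 * chaos_set_payload_cb does, for all 26 node indices. */
int main(void)
{
  uint8_t bitmap[4] = {0};
  uint16_t idx;
  for(idx = 0; idx < 26; idx++) {
    bitmap[idx / 8] |= (1 << (idx % 8));
  }
  printf("payload: %u %u %u %u\n",
         bitmap[0], bitmap[1], bitmap[2], bitmap[3]);
  /* prints: payload: 255 255 255 3 */
  return 0;
}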

Whereas when testing on Flocklab I get results of the form:

1555065953.090825,10,10,r,[   0] payload: 128 128 129 3
[...]
1555065955.090949,10,10,r,[   0] payload: 128 1 48 0
[...]
1555065957.091816,10,10,r,[   0] payload: 132 8 0 1
[...]
1555065959.094163,10,10,r,[   0] payload: 128 3 16 0
[...]
1555065961.092170,10,10,r,[   0] payload: 128 2 49 0
[...]

Therefore, the payloads from all nodes are never all present.

Vanilla baloo-minimal missing packets

As discussed in #6, I tested a vanilla baloo-minimal.c implementation with 3 nodes.

I am obtaining the same problems (missing packets) as in issue #6: one of the src nodes receives two packets, while all other nodes receive only one.

Host:
[INFO:     0] time: 94, period: 2
[INFO:     0] received 1 of 26 packets
[INFO:     0] dummy application task

Src 1:
[INFO:     0] received 2 of 26 packets
[INFO:     0] dummy application task

Src 2:
[INFO:     0] received 1 of 26 packets
[INFO:     0] dummy application task

My only modification was to change HOST_ID from 1 to 10.

Cumulative Round Time

I was trying to determine the cumulative execution time of a Baloo round with n scheduled slots.

I was wondering if there is already a way within GMW to determine this.

I personally tried the following code, though it does not seem quite right (the final slots are consistently skipped):

#define TOTAL_T_ROUND_COMPUTE(n)    (((GMW_CONF_T_DATA + GMW_CONF_T_GAP)*(n) + GMW_CONF_T_CONTROL + GMW_CONF_T_GAP_CONTROL) / 1000LU) // in ms

So I was wondering what other parameters I should take into account.
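For context, my best guess at the missing term is the pre-process time that GMW adds before the round; whether (and where) guard times also enter the budget is an assumption on my part:

/* Hedged guess: same budget as above plus the pre-process time.
 * GMW_CONF_T_PREPROCESS is in ms (it goes through GMW_MS_TO_TICKS
 * in gmw.c), while the other GMW_CONF_T_* values are in us. */
#define TOTAL_T_ROUND_COMPUTE(n)                        \
  ((GMW_CONF_T_CONTROL + GMW_CONF_T_GAP_CONTROL +       \
    (n) * (GMW_CONF_T_DATA + GMW_CONF_T_GAP) +          \
    GMW_CONF_T_PREPROCESS * 1000UL) / 1000UL)  /* in ms */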

Building `baloo-test-chaos` error

I was trying to build the Chaos example and ran into the following error when building:

In file included from ../../os/net/mac/gmw/gmw.h:62:0,
                 from ../../os/net/mac/gmw/gmw-noise-detect.c:55:
../../os/net/mac/gmw/gmw-types.h:67:17: error: 'TOGMW_CONF_MAX_SLOTS' undeclared here (not in a function)
../../Makefile.include:316: recipe for target 'obj_sky/gmw-noise-detect.o' failed
make: *** [obj_sky/gmw-noise-detect.o] Error 1

It seems that TOGMW_CONF_MAX_SLOTS is never defined anywhere in the codebase; it only appears in gmw-types.h:

#define GMW_SCHED_SECTION_HEADER_LEN    (8)
typedef struct __attribute__((packed)) gmw_schedule {
  uint32_t time;                  /* multiple of GMW_CONF_PERIOD_TIME_BASE */
  uint16_t period;                /* multiple of GMW_CONF_PERIOD_TIME_BASE */
  uint16_t n_slots;
  uint16_t slot[TOGMW_CONF_MAX_SLOTS];
} gmw_schedule_t;

I was wondering how the value ends up being used (i.e., what you would consider sensible values) and whether I can safely define it at the project-conf.h level.

Again thanks for the great work on Baloo!


EDIT:

Could it perhaps be a typo?
All GMW macros seem to be prefixed GMW_*, and editing as follows seems to work:

#define GMW_SCHED_SECTION_HEADER_LEN    (8)
typedef struct __attribute__((packed)) gmw_schedule {
  uint32_t time;                  /* multiple of GMW_CONF_PERIOD_TIME_BASE */
  uint16_t period;                /* multiple of GMW_CONF_PERIOD_TIME_BASE */
  uint16_t n_slots;
  uint16_t slot[GMW_CONF_MAX_SLOTS];
} gmw_schedule_t;
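With that rename, the value can then presumably be set per project, e.g.:

/* project-conf.h -- illustrative; must be at least as large as the
 * longest schedule the host will ever send (26 in my Flocklab setup) */
#define GMW_CONF_MAX_SLOTS              26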

Multi-hop chaos

I was playing a bit with Chaos and I have a number of questions. Everything was built on top of baloo-test-chaos running on 6 nodes on Flocklab.

~

  1. How many Chaos slots are executed in one Baloo/GMW slot?

As far as I understand, for every slot GMW determines whether the current node is an initiator or a receiver, and based on that it executes either GMW_SEND_PACKET() or GMW_RCV_PACKET().

Going deeper into GMW_SEND_PACKET(), it mainly consists of GMW_START_PRIM(...), GMW_WAIT_UNTIL(...), and GMW_STOP_PRIM().

When using the Chaos primitive, GMW_START_PRIM(...) maps to chaos_start(...) and GMW_STOP_PRIM() maps to chaos_stop().
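If my reading is right, the slot macro boils down to something like the following (a paraphrase of the structure described above, not the literal gmw.h code):

/* Paraphrased slot structure; with the Chaos primitive, the start/stop
 * macros map to chaos_start()/chaos_stop(). */
#define GMW_SEND_PACKET()                                                  \
  do {                                                                     \
    GMW_START_PRIM();            /* -> chaos_start(...) */                 \
    GMW_WAIT_UNTIL(t_slot_end);  /* wait out the fixed slot budget */      \
    GMW_STOP_PRIM();             /* -> chaos_stop(), regardless of flags */\
  } while(0)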

From my understanding of Chaos, it would normally terminate autonomously once all of the CHAOS_FLAGS are set. But by scheduling just one slot using GMW, isn't the protocol forced to terminate after just one flood? (So chaos_stop() is called independently of whether the node has received flags from all other nodes?)

Also, is it safe to assume that the current setup only executes one Chaos slot on the network?

~
2. Are flags carried over between slots?

As far as I currently understand, Chaos flags are not carried over between rounds. My question is: if I were to set schedule.n_slots to 2 or even 5, would flags be carried over between these consecutive Chaos slots? I currently suspect that the flags are reset by each invocation of chaos_start() and the subsequent chaos_stop().

~
3. Access to CHAOS_FLAGS_FIELD.
Is there currently an elegant way to access the CHAOS_FLAGS_FIELD from the packet after a Chaos slot has completed?

~
4. Multi-hop chaos.
When running the Chaos example over a multi-hop network, the value of the initiator/host node seems to be flooded to every node, whereas every source node is only ever able to get its payload to its one-hop neighbours.
Am I testing incorrectly?

~
5. Chaos and control packet.
I was just wondering how the presence of the control packet is handled with Chaos. I believe the packet reaches all the nodes in the network, synchronizing them and marking the beginning of the Chaos protocol. But what would be the recommended way to keep Chaos "going"?
I was mainly thinking of:
A) Scheduling "enough" schedule.n_slots for the protocol to finish. If these are not enough, a new control packet is sent out in the next round (back-to-back) with more slots.
B) Using the GMW_EVT_REPEAT_SLOT feature to keep re-broadcasting until all nodes are finished. As mentioned in the wiki, though, there is no guarantee that nodes will finish, and should they overrun schedule.period they might miss subsequent control packets from the host and desynchronize.

Problems when using `GMW_PRIM_GLOSSY`

I've been trying to set the primitive field of the control section of the control packet to GMW_PRIM_GLOSSY, and it currently does not compile. I've tried the following fixes with no luck:

In arch/platform/sky/gmw-conf-sky.h:112:

FROM:

  #define GMW_PRIM_DEFAULT           GMW_PRIM_GLOSSY     /* default Glossy */
  #define GMW_PRIM_GLOSSY            0
  #define GMW_PRIM_CHAOS             1

TO:

  #define GMW_PRIM_GLOSSY            0
  #define GMW_PRIM_DEFAULT           GMW_PRIM_GLOSSY     /* default Glossy */
  #define GMW_PRIM_CHAOS             1

And In arch/platform/dpp-cc430/gmw-conf-dpp-cc430.h:111:

FROM:

  #define GMW_PRIM_DEFAULT           0     /* default Glossy */
  #define GMW_PRIM_CHAOS             1

TO:

  #define GMW_PRIM_GLOSSY            0
  #define GMW_PRIM_DEFAULT           GMW_PRIM_GLOSSY     /* default Glossy */
  #define GMW_PRIM_CHAOS             1

From what I can see, wherever GMW_PRIM_CHAOS is defined, GMW_PRIM_GLOSSY also is; yet I cannot quite get GMW_PRIM_GLOSSY to work. I was just looking for a more readable approach than config->primitive = 0;
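For reference, this is the kind of usage I am after (illustrative only):

/* readable, once GMW_PRIM_GLOSSY is defined on every platform */
config->primitive = GMW_PRIM_GLOSSY;   /* instead of config->primitive = 0; */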

Control packet Wiki - GMW_CONTROL_CLEAR

I believe there is a typo in the Control packet Wiki for GMW_CONTROL_CLEAR instructions.

Clear flags are defined as CLR (instead of CLEAR) in the codebase:

os/net/mac/gmw/gmw-control.h:76:#define GMW_CONTROL_CLR_CONFIG(c)  ((c)->schedule.n_slots &= \

Minimal Baloo Test

I was testing a minimal Baloo configuration using 3 nodes.

ID  |  Type
----+-----
10  |  Host
 6  |  Src
 7  |  Src

I am working on top of the Baloo minimal example.

Out of the box, the nodes did not seem to communicate as expected: two nodes only ever receive 1 packet per schedule when 2 are expected. While debugging, I saw that the scheduling behaves as follows:

SCHEDULE

Time Slot  | Node ID
-----------+--------
    00     |    10
    01     |    07
    02     |    06

When executing, however, I get the following logs:

Node 10 (Host)
[INFO:     0] PRE  [Time Slot: 0][Node ID: 10] [INITIATOR] Prepared Payload 10 19
[INFO:     0] POST [Time Slot: 0][Node ID: 10] [NOT INITIATOR]  Skip
[INFO:     0] PRE  [Time Slot: 1][Node ID: 10] [NOT INITIATOR]  Skip
[INFO:     0] POST [Time Slot: 1][Node ID: 10] [INITIATOR] INVALID LEN PACKET. Len: 0
[INFO:     0] PRE  [Time Slot: 2][Node ID: 10] [NOT INITIATOR]  Skip
[INFO:     0] POST [Time Slot: 2][Node ID: 10] [INITIATOR] received 7 19
[INFO:     0] time: 552, period: 2
[INFO:     0] [Node ID: 10] HOST received 1 of 3 packets
[WARN:     0] Missed 1 slots! 

Node 7 (Src)
[INFO:     0] PRE  [Time Slot: 0][Node ID: 7] [NOT INITIATOR] Skip
[INFO:     0] POST [Time Slot: 0][Node ID: 7] [INITIATOR] received 10 128
[INFO:     0] PRE  [Time Slot: 1][Node ID: 7] [NOT INITIATOR] Skip
[INFO:     0] POST [Time Slot: 1][Node ID: 7] [INITIATOR] HOST INVALID LEN PACKET. Len: 0
[INFO:     0] PRE  [Time Slot: 2][Node ID: 7] [INITIATOR] Prepared Payload 7 128
[INFO:     0] POST [Time Slot: 2][Node ID: 7] [NOT INITIATOR] Skip
[INFO:     0] [Node ID: 7] SRC received 1 of 3 packets
[WARN:     0] Missed 1 slots! Binary: 2


Node 6 (Src)
[INFO:     0] PRE  [Time Slot: 0][Node ID: 6] [NOT INITIATOR] Skip
[INFO:     0] POST [Time Slot: 0][Node ID: 6] [INITIATOR] received 10 105
[INFO:     0] PRE  [Time Slot: 1][Node ID: 6] [INITIATOR] Prepared Payload 6 105
[INFO:     0] POST [Time Slot: 1][Node ID: 6] [NOT INITIATOR] Skip
[INFO:     0] PRE  [Time Slot: 2][Node ID: 6] [NOT INITIATOR] Skip
[INFO:     0] POST [Time Slot: 2][Node ID: 6] [INITIATOR] received 7 105
[INFO:     0] [Node ID: 6] SRC received 2 of 3 packets
[WARN:     0] Missed 1 slots! Binary: 2

Which makes me think the dynamics are as follows:

TIME 0) Host node 10 prepares a packet and sends it; both source nodes receive it.
TIME 1) Src node 6 prepares a packet and sends it, but nobody receives it. Nodes 10 and 7 instead receive a packet of null length; inspecting the data, we read stale memory, as the contents of the buffer were never overwritten.
TIME 2) Src node 7 prepares a packet and sends it, and all nodes receive it.

Also, both host and source have the same slot callbacks registered:

gmw_init(...)
{
  /* load the host node implementation */
  host_impl->on_control_slot_post   = &host_on_control_slot_post_callback;
  host_impl->on_slot_pre            = &host_on_slot_pre_callback;
  host_impl->on_slot_post           = &host_on_slot_post_callback;
  host_impl->on_round_finished      = &host_on_round_finished;

  /* load the source node implementation
   * (note: the slot callbacks intentionally reuse the host ones) */
  src_impl->on_control_slot_post    = &src_on_control_slot_post_callback;
  src_impl->on_slot_pre             = &host_on_slot_pre_callback;
  src_impl->on_slot_post            = &host_on_slot_post_callback;
  src_impl->on_round_finished       = &src_on_round_finished;
  src_impl->on_bootstrap_timeout    = &src_on_bootstrap_timeout;

  ...
}

So my question is: have I configured something completely wrong? The behaviour of the nodes does not seem to be affected by the topology (as far as I can test). Or is this the expected behaviour?

Swapping out node 6 for another TelosB shows exactly the same behaviour. If node 7 is ordered second in the schedule (instead of third), its messages do not reach the other two nodes.

Location of new protocols

I was wondering where, within the directory structure, new protocols built on top of Baloo should be located.

Looking at the existing implementations I mainly see two types of approach:

  1. Modular, such as lwb: all files live within net/mac/gmw/lwb and an API is exposed to the user; all Baloo callbacks are encapsulated within the gmw-lwb files.
  2. More experimental, such as sleeping-beauty or crystal: all source for the algorithm lives within its example directory, and all code is visible to the user, as the protocol is implemented directly in the user's code within the Baloo callbacks.

I completely understand that all protocols will start from approach (2), but I think off-the-shelf functionality can only be achieved with (1)-style implementations (i.e., a simple API that anyone can use).

Because of this, I was wondering where consensus protocols (such as 2PC and 3PC) should be located. I don't think they strictly belong within the gmw sections; or should all Baloo-based implementations currently be located there?

SUSPENDED -> BOOTSTRAP Delay

Hi @romain-jacob

I was running some tests the other day and occasionally a node would miss the Control packet and get the following behaviour:

[figure: trace of node 8's radio activity after missing the control packet]

Node 8 misses the control packet, sleeps for the old round time (2s), wakes up again, and finally keeps the radio always on waiting for a new control packet.

The src_on_bootstrap_timeout function for node 8:

static uint32_t
src_on_bootstrap_timeout(void)
{
  leds_off(LEDS_GREEN);
  leds_on(LEDS_RED);
  return 0;
}

What I was expecting was for the node to instantly keep the radio on when a control packet is missed.

From inspecting the gmw.c source I think what is happening is:

  1. The control slot is missed.
  2. impl_sync_state = gmw_impl->on_control_slot_post(&control, sync_event, pkt_event); is called with sync_event = GMW_EVT_CONTROL_MISSED (which has ID 1).
  3. on_control_slot_post returns GMW_DEFAULT, meaning that sync_state gets assigned GMW_SUSPENDED.
  4. process_poll then gets called, and I suspect that it uses the old round time from the previous round, rather than the new value set by the on_bootstrap_timeout callback.
  5. At the next iteration, the state changes to GMW_BOOTSTRAP and the node instantly wakes up again to listen.

So I was wondering if:

  1. Is there some extra configuration I should do to allow on_bootstrap_timeout to set the new round time immediately?
  2. Would it be best to modify the on_control_slot_post callback to return GMW_BOOTSTRAP rather than GMW_DEFAULT when sync_event is GMW_EVT_CONTROL_MISSED (sketched after this list)?
  3. Or am I completely missing the point? :p
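To illustrate option 2, here is the callback change I have in mind. The callback name and argument list follow the gmw.c call quoted above; I am treating the exact prototype and return type as assumptions:

/* Sketch: force an immediate return to bootstrap when the control
 * packet is missed, instead of going through GMW_SUSPENDED first. */
static gmw_sync_state_t
src_on_control_slot_post_callback(gmw_control_t* in_out_control,
                                  gmw_sync_event_t sync_event,
                                  gmw_pkt_event_t pkt_event)
{
  if(sync_event == GMW_EVT_CONTROL_MISSED) {
    return GMW_BOOTSTRAP;   /* keep the radio on right away */
  }
  return GMW_DEFAULT;       /* otherwise, default state machine */
}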

Configurable pre_post_proc

Currently, the pre_post_proc struct is configured in gmw_start and remains constant throughout the whole protocol execution.

This means that at the end of each Baloo round we are forced to poll the post_process_current_round (and wake up early for the pre_process_next_round), should they be defined.

    /* poll the post process */
    if(pre_post_proc.post_process_current_round) {
      process_poll(pre_post_proc.post_process_current_round);
    }

    /* wake-up earlier to execute the pre-process (if one is defined) */
    pre_process_offset = (!pre_post_proc.pre_process_next_round) ? 0 :
                         GMW_MS_TO_TICKS(GMW_CONF_T_PREPROCESS);

I was wondering whether an extension to this is planned on your side, or whether a modification via PR would be welcome.

The idea would be to either

  1. Add a configuration flag to the struct gmw_pre_post_processes that allows toggling the pre and post polls on or off (see the sketch below).
  2. Allow full configuration of the struct gmw_pre_post_processes, exposing methods to change its contents after the initial gmw_start call. This would allow temporarily setting the post_process_current_round field to NULL, should we wish not to preempt.

Is this reasonable? The idea is that certain protocols might wish to run back-to-back floods without being forced to preempt the app process before they are finished.
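To make option 1 concrete, here is a sketch; the post_process_enabled field is hypothetical, while the two process pointers exist in Baloo today:

/* Sketch of option 1: a toggle checked before polling. */
typedef struct gmw_pre_post_processes {
  struct process* pre_process_next_round;
  struct process* post_process_current_round;
  uint8_t         post_process_enabled;    /* hypothetical toggle */
} gmw_pre_post_processes_t;

/* ...and in gmw.c, the poll would become conditional: */
if(pre_post_proc.post_process_enabled &&
   pre_post_proc.post_process_current_round) {
  process_poll(pre_post_proc.post_process_current_round);
}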

Modify Schedule n_slots dynamically

I was wondering whether there is a current effort to support dynamic changes to the n_slots parameter of the schedule, to allow varying schedules with different nodes at different time intervals.

A little background on how I would use this feature:

I am trying to reimplement the contents of the following paper within Baloo:
Beshr Al Nahas, Simon Duquennoy and Olaf Landsiedel. 2017. "Network-wide Consensus Utilizing the Capture Effect in Low-power Wireless Networks". In Proceedings of the Conference on Embedded Networked Sensor Systems (ACM SenSys)

This means starting from 2PC and 3PC and then building my way to more robust consensus protocols.

When working on 2PC, I need a way to tell each source node whether its vote (cast in the previous round) has been registered by the host. In the paper this is achieved through packet aggregation on each node, though from my understanding that does not fit well with Baloo's design: each node transmits in each round (if scheduled), so rather than having big packets that we constantly aggregate into, it is best to have small, slim packets to reduce the overall time taken by each round.

Hence my current issue: the host asks all source nodes to cast their vote; all nodes vote in the next round, but let's assume that one vote is lost. Currently I see three ways of fixing this:

  1. I ask all nodes to vote again. Nodes are unaware of whether their vote was registered, but voting is deterministic, so they will vote in exactly the same way when prompted again by the host. The host keeps track of the votes received in the previous round and aggregates the missing votes in the next round. Drawback: with n nodes we have n messages in each round, even if only 1 vote is missing.
  2. I use a slightly larger packet that tells the source nodes which votes have been received. Upon inspecting the message broadcast by the host, source nodes that know their vote has been received will not vote again. Benefit: fewer messages in the network. Drawback: the same completion time as option 1 (as all nodes have to be scheduled anyway), and the host now sends a bigger packet around the network.
  3. Alter the schedule for the next round. If I could schedule only the nodes from which I have been unable to register a reply, this would not increase the size of the host packets and would greatly reduce the completion time, as the nodes whose votes I already have would no longer occupy slots in which they stay quiet (see the sketch below).

So overall I was wondering whether the "Modify schedule.n_slots" item currently listed as WIP will be worked on soon, so I can design around points 1, 2, or 3 accordingly. Or is there a feature of Baloo that already solves this issue?
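To make option 3 concrete, here is a host-side sketch of what I would like to do; vote_received and node_ids are hypothetical application state, and gmw_schedule_t is the struct quoted in an issue above:

/* Sketch: rebuild next round's slot list from the missing voters only. */
static uint8_t  vote_received[NUM_NODES];   /* hypothetical app state */
static uint16_t node_ids[NUM_NODES];        /* hypothetical app state */

static void
schedule_missing_voters(gmw_schedule_t* sched)
{
  uint16_t n = 0;
  uint16_t i;
  for(i = 0; i < NUM_NODES; i++) {
    if(!vote_received[i]) {
      sched->slot[n++] = node_ids[i];       /* only the missing voters */
    }
  }
  sched->n_slots = n;   /* requires GMW to accept a changed n_slots */
}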

Impact of GMW_EVT_SKIP_SLOT on multi-hop networks

I've been trying the GMW_EVT_SKIP_SLOT feature to decrease node power usage.

Experimentally, GMW_EVT_SKIP_SLOT seems to prevent packet relaying across multi-hop networks.

H<--------------------->S1<--------------------->S2

schedule.n_slots = 2
schedule.slots[0] = S1
schedule.slots[1] = S2

If, in the second slot, source node S1 returns GMW_EVT_SKIP_SLOT from the pre-slot callback, then S2's packet is never relayed to host node H.

This would seem like expected behaviour if the radio is shut off, though I was wondering whether my testing is incorrect and this should not be the case. A sketch of the policy I am using follows.
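The sketch (node_is_needed_as_relay() is a hypothetical helper based on my static view of the topology):

/* Sketch of my pre-slot policy: skip any slot where this node is
 * neither the initiator nor (by my reckoning) a needed relay. */
static gmw_skip_event_t
src_on_slot_pre_callback(uint8_t slot_index,
                         uint16_t slot_assignee,
                         uint8_t* out_len,
                         uint8_t* out_payload,
                         uint8_t is_initiator,
                         uint8_t is_contention_slot)
{
  if(!is_initiator && !node_is_needed_as_relay(slot_assignee)) {
    return GMW_EVT_SKIP_SLOT;      /* stay idle during this slot */
  }
  return GMW_EVT_SKIP_DEFAULT;     /* participate as usual */
}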
