nebulex's Issues

Adapter for FoundationDB

Hi Cabol,

I was wondering what approach I should take to implementing an adapter for FoundationDB?

Thanks.

Custom ttl for every cache record?

One of the cool features of Cachex is the ability to set a custom TTL for each record when required. I don't see any way to achieve this in Nebulex, just a global one via "gc_interval", if I'm not wrong.

So... is it possible to do this with Nebulex?

Greetings
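Later reports in this same issue list do set per-entry TTLs via `Cache.set(key, value, ttl: seconds)`, so a per-record TTL option appears to exist in the API. The underlying semantics amount to storing an absolute expiration timestamp per entry; here is a standalone sketch of that idea (the module and function names are made up for illustration, this is not Nebulex code):

```elixir
defmodule TtlSketch do
  # Standalone sketch of per-entry TTL semantics: each entry stores an
  # absolute expiration timestamp (unix seconds), and reads treat expired
  # entries as missing.
  def put(store, key, value, ttl_seconds, now \\ System.os_time(:second)) do
    Map.put(store, key, {value, now + ttl_seconds})
  end

  def get(store, key, now \\ System.os_time(:second)) do
    case Map.get(store, key) do
      {value, expire_at} when expire_at > now -> value
      _ -> nil
    end
  end
end
```

With a 60-second TTL, a read before the deadline returns the value and a read after it returns nil.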

Sporadic :badarg error

I'm experimenting with Nebulex multi-level caching with the following structure:

defmodule Hodor.Cache do
  use Nebulex.Cache,
    otp_app: :hodor,
    adapter: Nebulex.Adapters.Multilevel

  defmodule L1 do
    use Nebulex.Cache,
      otp_app: :hodor,
      adapter: Nebulex.Adapters.Local
  end

  defmodule L2 do
    use Nebulex.Cache,
      otp_app: :hodor,
      adapter: NebulexRedisAdapter
  end
end

and the following configuration:

config :hodor, Hodor.Cache,
  levels: [
    Hodor.Cache.L1,
    Hodor.Cache.L2
  ]

config :hodor, Hodor.Cache.L1,
  gc_interval: 5 * 60

config :hodor, Hodor.Cache.L2,
  mode: :standalone,
  conn_opts: [
    # Redis connection parameters are injected here by Hodor.Application.start
  ]

This configuration seems to work fine for the most part, but I get occasional error messages in my console that look like this:

iex(Hodor@Jasons-MacBook-Pro)25> 16:39:05.551 pid=<0.660.0> [error] GenServer Hodor.Cache.L1.Generation terminating
** (ArgumentError) argument error: {:badarg, [{:ets, :new, [:"Elixir.Hodor.Cache.L1.0", [:set, :named_table, :public, {:read_concurrency, true}]], []}, {:shards_owner_sup, :init, 1, [file: '/Users/jvoegele/work/hodor/deps/shards/src/shards_owner_sup.erl', line: 60]}, {:supervisor, :init, 1, [file: 'supervisor.erl', line: 295]}, {:gen_server, :init_it, 2, [file: 'gen_server.erl', line: 374]}, {:gen_server, :init_it, 6, [file: 'gen_server.erl', line: 342]}, {:proc_lib, :init_p_do_apply, 3, [file: 'proc_lib.erl', line: 249]}]}
    (shards) /Users/jvoegele/work/hodor/deps/shards/src/shards_local.erl:897: :shards_local.do_new/3
    (nebulex) lib/nebulex/adapters/local/generation.ex:130: Nebulex.Adapters.Local.Generation.new_gen/2
    (nebulex) lib/nebulex/adapters/local/generation.ex:105: Nebulex.Adapters.Local.Generation.handle_info/2
    (stdlib) gen_server.erl:637: :gen_server.try_dispatch/4
    (stdlib) gen_server.erl:711: :gen_server.handle_msg/6
    (stdlib) proc_lib.erl:249: :proc_lib.init_p_do_apply/3
Last message: :timeout
State: %{cache: Hodor.Cache.L1, gc_interval: 300, gen_index: 0, time_ref: {-576459550140147, #Reference<0.4266848188.3638034439.1982>}}

Any idea what the issue might be?

Invalidate keys cluster-wide

Currently keys which are set are available for the whole cluster, multilevel style. Could a function be added to invalidate the keys cluster-wide?

Oversimplified example:

Node1: Nebulex.set("foo", "bar")
bar
Node2: Nebulex.get("foo")
bar
Node2: Nebulex.drop("foo")
nil (or last value)
Node1: Nebulex.get("foo")
nil
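No such built-in call is shown in this issue, but the idea could be sketched with `:rpc.multicall/4`, broadcasting a delete to every connected node. The module and function names below are made up, and a real implementation would need to handle partial failures more carefully:

```elixir
defmodule ClusterInvalidate do
  # Sketch: ask every connected node (plus this one) to delete `key` from
  # the same cache module. Returns true only if every node responded.
  def delete_on_all_nodes(cache, key) do
    {_replies, bad_nodes} =
      :rpc.multicall([node() | Node.list()], cache, :delete, [key])

    bad_nodes == []
  end
end
```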

Random test failure - UndefinedFunctionError

Environment

Elixir version (elixir -v): Elixir 1.7.3 (compiled with Erlang/OTP 21)
Nebulex version (mix deps): 1.0.0-rc.3
Operating system: Ubuntu 18.04

Expected behavior

Fetching a key from the cache should return nil or an already cached value.

DistCache.get({__MODULE__, type, cache_key})

Actual behavior

The other day, I was running some tests locally, and had this error pop up.

     ** (UndefinedFunctionError) function :function_clause.exception/1 is undefined (module :function_clause is not available)
     code: assert "http://foo.admin/baz" = ResourceURL.get_admin_url(product)
     stacktrace:
       :function_clause.exception([])
       (nebulex) lib/nebulex/adapters/dist.ex:265: Nebulex.Adapters.Dist.rpc_call/4
       (app) lib/app/support/caches/dist_cache.ex:2: App.DistCache.execute/2
       (app) lib/app/ecommerce/resource_urls/resource_url_cache.ex:17: App.ResourceURLCache.get/2
       (app) lib/app/ecommerce/resource_urls/resource_url_cache.ex:24: App.ResourceURLCache.get_or_set/2
       (app) lib/app/ecommerce/resource_urls/implements/product.ex:10: App.ResourceURL.App.Product.get_admin_url/1
       test/ecommerce/resource_urls/resource_url_test.exs:52: (test)

It actually ended up occurring twice, in different locations, in different test runs.

I haven't been able to easily identify the problem.

Thanks

Custom ttl on multilevel cache gets overwritten

Hi cabol! Thanks so much for the work you've put into this. I've found a weird bug that I can't sort out. I have a multilevel cache with Local and Dist adapters, implemented just the same as in your examples. But if I set a custom ttl on a record, then when I get that record, the expire_at value gets overwritten to something similar to desired_ttl - (DateTime.utc_now() |> DateTime.to_unix()), and it happens again if I rapidly call get a second time. Here's an example from the REPL:

iex(node1@dell)209> Simple.Service.Cache.set("awesome", "sauce", [ttl: 60, return: :object])
%Nebulex.Object{
  expire_at: 1555474083,
  key: "awesome",
  value: "sauce",
  version: nil
}
iex(node1@dell)210> Simple.Service.Cache.get("awesome", [return: :object])
Nebulex.Adapters.Multilevel
%Nebulex.Object{
  expire_at: 53,
  key: "awesome",
  value: "sauce",
  version: nil
}
iex(node1@dell)211> Simple.Service.Cache.get("awesome", [return: :object])
Nebulex.Adapters.Multilevel
%Nebulex.Object{
  expire_at: -1555473977,
  key: "awesome",
  value: "sauce",
  version: nil
}
iex(node1@dell)212> Simple.Service.Cache.get("awesome", [return: :object])
Nebulex.Adapters.Multilevel
nil

Also, I'm currently running 3 nodes which are all connected, and I just found this, which looks like it could be the culprit:
https://github.com/cabol/nebulex/blob/master/lib/nebulex/adapters/multilevel.ex#L417

Multilevel Cache: replicate/2 is attempting to subtract from :infinity

Environment

Elixir version (elixir -v): Elixir 1.7.4 (compiled with Erlang/OTP 21)
Nebulex version (mix deps): 1.0.0
Operating system: Debian Stretch

Expected behavior

Fetching a key from the cache should return nil or an already cached value.

Cache.get({:get_route, ~D[2018-12-05], "1", [solver_settings: nil]})

Actual behavior

Last message (from #PID<0.3307.3>): {:get_route, ~D[2018-12-05], "1", [solver_settings: nil]}
20:14:36.594 [info] Converted exit {{:badarith, [{:erlang, :-, [:infinity, 1544040876], []}, {Nebulex.Adapters.Multilevel, :replicate, 2, [file: 'lib/nebulex/adapters/multilevel.ex', line: 417]}, {Enum, :"-reduce/3-lists^foldl/2-0-", 3, [file: 'lib/enum.ex', line: 1925]}, {Nebulex.Cache.Object, :get, 3, [file: 'lib/nebulex/cache/object.ex', line: 12]}, {Cache, :with_hooks, 3, [file: 'lib/cache.ex', line: 2]}, {Routes.Manager, :handle_call, 3, [file: 'lib/routes/manager.ex', line: 38]}, {:gen_server, :try_handle_call, 4, [file: 'gen_server.erl', line: 661]}, {:gen_server, :handle_msg, 6, [file: 'gen_server.erl', line: 690]}]}, {GenServer, :call, [Routes.Manager, {:get_route, ~D[2018-12-05], "1", [solver_settings: nil]}, :infinity]}} to 500 response
20:14:36.597 [error] #PID<0.3307.3> running Api.Endpoint (connection #PID<0.3306.3>, stream id 1) terminated
Server: localhost:80 (http)
Request: POST /graphql
** (exit) exited in: GenServer.call(Routes.Manager, {:get_route, ~D[2018-12-05], "1", [solver_settings: nil]}, :infinity)
    ** (EXIT) an exception was raised:
        ** (ArithmeticError) bad argument in arithmetic expression
            :erlang.-(:infinity, 1544040876)
            (nebulex) lib/nebulex/adapters/multilevel.ex:417: Nebulex.Adapters.Multilevel.replicate/2
            (elixir) lib/enum.ex:1925: Enum."-reduce/3-lists^foldl/2-0-"/3
            (nebulex) lib/nebulex/cache/object.ex:12: Nebulex.Cache.Object.get/3
            (db) lib/cache.ex:2: Cache.with_hooks/3
            (app) lib/routes/manager.ex:38: Routes.Manager.handle_call/3
            (stdlib) gen_server.erl:661: :gen_server.try_handle_call/4
            (stdlib) gen_server.erl:690: :gen_server.handle_msg/6

More information

defp replicate(cache, %Object{expire_at: expire_at} = object) do
  object =
    if expire_at,
      do: %{object | expire_at: expire_at - DateTime.to_unix(DateTime.utc_now())},
      else: object

  true = cache.__adapter__.set(cache, object, [])
  object
end

The above function is called to replicate from the L2 to the L1 cache, but the expire_at is set to :infinity on the Nebulex.Object struct. I am attempting to use a two level cache with L1 as Nebulex.Adapter.Local and L2 as NebulexRedisAdapter.
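One way to avoid the bad arithmetic (a sketch of the idea, not a patch against Nebulex) is to treat `:infinity` as a passthrough and only subtract for integer timestamps:

```elixir
defmodule ReplicateTtl do
  # Standalone illustration: convert an absolute expire_at (unix seconds)
  # into a remaining TTL, leaving :infinity (no expiration) untouched so the
  # subtraction never sees a non-integer.
  def remaining(:infinity, _now), do: :infinity
  def remaining(expire_at, now) when is_integer(expire_at), do: expire_at - now
end
```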

has_key?/1 does not respect ttl

Environment

Elixir version (elixir -v): Elixir 1.7.3 (compiled with Erlang/OTP 21)
Nebulex version (mix deps): 1.0.0
Operating system: Ubuntu 18.04

Expected behavior

Using has_key?/1 on a key that was set with a ttl should return false when the ttl has expired.

Actual behavior

iex> Cache.set(:foo, 1, ttl: 0)
1
iex> Cache.get(:foo)
nil
iex> Cache.set(:foo, 1, ttl: 0)
1
iex> Cache.has_key?(:foo)
true

Fix documentation about hooks

Options :pre_hooks_strategy and :post_hooks_strategy were removed, now the strategy is passed along with the list of hooks, like so:

def pre_hooks do
  pre_hook =
    fn
      (result, {_, :get, _} = call) ->
        # do your stuff ...
        result

      (result, _) ->
        result
    end

  {:async, [pre_hook]}
end

Fix the documentation in Nebulex.Hook module and also in the hooks.md guide – perhaps, check out other possible places.

Publish a rc.3 release

Hi!

Since there are some important fixes on master, would you mind cutting a -rc.3 release?

Load/Stress Tests

Create load/stress tests for Nebulex, covering both the local generational cache and the distributed cache. This can be done in a separate repo (e.g.: nebulex_bench), and the results could be documented in a blog post.

Add counters support – increments and decrements by a given amount

Create a callback update_counter in Nebulex.Cache to handle both scenarios, increments and decrements, depending on the incr value, which can be positive or negative. If incr is not a valid integer, it should raise an ArgumentError exception.

@callback update_counter(key, incr :: integer, opts) :: value | no_return

By default, incr should be 1.
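The proposed semantics can be sketched standalone with an Agent standing in for a real Nebulex adapter (all module and function names here are assumptions, not the eventual API):

```elixir
defmodule CounterSketch do
  # In-memory sketch of the proposed update_counter semantics: integer incr
  # values (positive or negative) adjust the counter; anything else raises.
  use Agent

  def start_link do
    Agent.start_link(fn -> %{} end, name: __MODULE__)
  end

  # Defaults to incrementing by 1, as proposed above.
  def update_counter(key, incr \\ 1)

  def update_counter(key, incr) when is_integer(incr) do
    Agent.get_and_update(__MODULE__, fn state ->
      value = Map.get(state, key, 0) + incr
      {value, Map.put(state, key, value)}
    end)
  end

  def update_counter(_key, incr) do
    raise ArgumentError, "expected incr to be an integer, got: #{inspect(incr)}"
  end
end
```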

Update Getting Started guide

In the "Fetching all entries" section we use Blog.Cache.all/0,1, which doesn't exist anymore. Replace it with the to_map function.

FAQ list

Hi, is there a forum or FAQ list where new users can ask questions or find the answers they are looking for?

Since I couldn't wait for such a forum or FAQ, I would like to find out a few things before deciding on the design of a new project. I would like to use Elixir, Ecto, Phoenix, and Nebulex. However, I would like to know the difference between a local and a distributed cache. Does the distributed cache refer to the caches found in several running copies of Nebulex on different hardware boxes? What are the benefits compared to increasing local cache memory?

Multilevel Cache: transaction/3 is attempting to change all levels multiple times.

Environment

Elixir version (elixir -v): Elixir 1.7.4 (compiled with Erlang/OTP 21)
Nebulex version (mix deps): 1.0.0
Operating system: Debian Stretch

Expected behavior

Operations wrapped in a transaction should occur once.

test "check cache set and get in transaction" do
  value = ["old value"]

  Cache.set(:test, value)

  Cache.transaction(
    fn ->
      old_value = Cache.get(:test)

      assert old_value == value

      Cache.set(:test, ["new value" | old_value])
    end,
    keys: [:test]
  )

  assert Cache.get(:test) == ["new value", "old value"]
end

Actual behavior

1) test check cache set and get in transaction (Cache.MultilevelTest)
      test/cache/multilevel_test.exs:14
      Assertion with == failed
      code:  assert old_value == value
      left:  ["new value", "old value"]
      right: ["old value"]
      stacktrace:
        test/cache/multilevel_test.exs:23: anonymous fn/0 in Cache.MultilevelTest."test check cache set and get in transaction"/1
        (nebulex_redis_adapter) lib/nebulex_redis_adapter.ex:118: NebulexRedisAdapter.do_transaction/5
        (nebulex) lib/nebulex/adapters/multilevel.ex:366: anonymous fn/4 in Nebulex.Adapters.Multilevel.eval/3
        (elixir) lib/enum.ex:1925: Enum."-reduce/3-lists^foldl/2-0-"/3
        (db) lib/cache.ex:2: Cache.with_hooks/3
        test/cache/multilevel_test.exs:19: (test)

More information

  ## Transaction

  @impl true
  def transaction(cache, fun, opts) do
    eval(cache, :transaction, [fun, opts], [])
  end

  ## Helpers

  defp eval(ml_cache, fun, args, opts) do
    eval(levels(opts, ml_cache), fun, args)
  end

  defp eval([l1 | next], fun, args) do
    Enum.reduce(next, apply(l1.__adapter__, fun, [l1 | args]), fn cache, acc ->
      ^acc = apply(cache.__adapter__, fun, [cache | args])
    end)
  end

The above code seems to run the transaction operation for all cache layers, but since every other operation (:get, :set, etc.) also goes through this code path, all the changes will have occurred for all layers after running the function inside the transaction once. It looks like the transaction lock should be acquired for all layers, the wrapped function should run once, and then the lock should be released for all layers.

Pre Expire Hook

I'm trying to implement a delayed queue. I was thinking I could add items into the cache with a ttl and, before expiration, get a pre-expire or pre-evict callback to do what I need to do with the items. It doesn't appear that this is supported. Any ideas?

Create first cache generation by default when the cache is started

Currently, when the local cache is started, the first generation is not created by default, so if we perform some operation on the cache, an exception is raised. To avoid this exception, we have to call MyCache.new_generation() before using the cache, or set the option gc_interval in the config.

The idea is to avoid this situation by creating the first cache generation automatically when the cache is started, maybe in the Nebulex.Adapters.Local.Generation.init callback. Then, even if we don't set gc_interval in the config (perhaps because we want to create generations in a different way, using some other strategy), at least the first generation will be created and we won't get any exception.

Adapter for Mnesia

The idea with this adapter is to provide another backend and be able to support different distributed topologies (replicated topology is a must).

Will nebulex support replicating cache partitions?

I'm sorry if this has been asked before; I couldn't find it with a quick search.

My question is pretty much just the title. I have a real use case for wanting to be able to replicate partitions, as a full replication à la #15 would require too much RAM, and I don't have the brainpower/time/knowledge to write a partition replicator that I would have any confidence in. I'm also somewhat curious whether this effect could be created by just running two Dist caches per node; I feel like it might work, but it might also just double the data stored on each node with no real gain.

Question: disabling cache conditionally in defcacheable

Hello there!

I would like to turn a defcacheable function into a pass-through function when I'm in development, and turn on cache only in staging/production. My current environment is stored in the application config, so it's accessible at the time the macro is expanded. I'm a bit new at Elixir, so my attempt at a macro didn't prove very successful :)

The best solution I could come up with was to write a _cached variant of my function that calls an uncached variant, and then do this at every call site:

if @env == :dev do
  do_thing()
else
  do_thing_cached()
end
|> ...

But I was wondering if there is a built-in way of doing this.

Add capability to limit cache size

In order to prevent memory usage from growing out of control, it would be nice to be able to limit the size of Nebulex caches. If the size limit has been reached, then a new item cannot be added to the cache without evicting some other item from the cache first. Eviction policy could be LRU, LFU, or perhaps based on generation.

Ideally, the size limitation would be in terms of the actual amount of memory occupied by the cache values, but if that is difficult to determine efficiently, then the number of keys in the cache could be used as a reasonable surrogate.

Ability to "get or set" a key

A common caching scenario is to attempt to get a key and, if it doesn't exist, set it. This could quite easily be implemented with something like the following:

  def get_or_set(key, function, opts \\ []) do
    transaction(fn ->
      with nil <- get(key, opts) do
        set(key, function.(), opts)
      end
    end)
  end

MyCache.get_or_set(:foo, fn -> :bar end)

If this is something of interest, I could look at implementing it soon.

Caching utility macros: `defcacheable`, `defevict` and `defupdatable`

Implement utility macros:

defcacheable

Defines a cacheable function with the given name fun and arguments args. The value returned by the code block is cached if it doesn't already exist in the cache; otherwise, it is returned directly from the cache and the code block is not executed.

defcacheable cacheable_fun(x, y \\ "y"), cache: Cache, key: x do
  # logic ...
end

defevict

Defines a function with cache eviction enabled on function completion (one or all values are removed on function completion).

defevict evict_fun(x), cache: Cache, key: x do
  # logic ...
end

defupdatable

Defines an updatable caching function. The content of the cache is updated without interfering with the function execution; that is, the function is always executed and the result is cached.

defupdatable updatable_fun(x), cache: Cache, key: x do
  # logic ...
end

Performance Problem.

I tried switching to Nebulex from my hand-rolled version using GenServer + Redis. There was quite a performance hit. The 3 bottom results (below 79734e2) were from before Nebulex, and everything on top is Nebulex results.

Could it be that the workload has now shifted from Redis to the internal node, and hence is more affected by CPU (it's running in a container and I cap the CPU to 1 core)? I think further testing is needed.

(screenshot of benchmark results attached)

Add telemetry integration

Perhaps create a separate repo nebulex_telemetry, and use pre and/or post hooks for the implementation.

Add matching option on returned result to Nebulex.Caching

There is a use that I believe is fairly common that I do not think fits in the Nebulex.Caching macro DSL.

Sometimes one wants to conditionally cache a value based on the result of the underlying function logic. This is easy to achieve with the lower-level APIs Cache.get and Cache.set, but it is fairly verbose and repetitive. Also, I was really enjoying the Nebulex.Caching DSL.

In my specific use case, I have a cache-through layer in front of a REST API and I only want to cache the results when the REST query underneath succeeds. Sometimes there are transient error responses or short periods of time where this API is down.

I added an optional function argument called match (for lack of better naming at the moment) to defcacheable that receives the result of the logic and returns a boolean value. Based on that, the value will be cached or not cached. For example:

defcacheable get_user(id), cache: Cache, key: {User, id}, match: &match?/1 do
  Tesla.get("users/#{id}")
end

defcacheable get_users(id), cache: Cache, key: {User, :list}, match: &match?/1 do
  Tesla.get("users")
end

# Successful requests will be cached
def match?({:ok, _}), do: true
# Failed requests will not be cached
def match?(_), do: false

If you think this would be worth adding, I would be happy to write a PR; I already have something I can base it on. It is fairly straightforward.

By the way, thanks for such a great library, cheers 🙇 !

Fulfil the open-source checklist

General Items

  • It has a GitHub repo
  • It has a proper MIT LICENSE file
  • It has a clear and useful README.md
  • It's documented (with examples)
  • It's tested
  • It has coverage > 98%

Features for First Version

  • Local Generational Cache Adapter
  • Distributed Cache Adapter
  • Multilevel Cache Adapter

Exhibition

  • There is a blog post about it
  • It's shared on social networks
  • It's shared on reddit
  • It's shared on hacker news with a title like Show HN: description

Examples

  • It provides a sample application
  • Examples of use are documented in the README or linked from there
  • Simple benchmarks

Publish

Support for persistence operations

New callbacks to Nebulex.Cache:

@callback dump(path :: Path.t(), opts) :: :ok | {:error, term}

@callback load(path :: Path.t(), opts) :: :ok | {:error, term}

And a new behavior Nebulex.Adapter.Persistence with the callbacks:

@callback dump(cache :: Nebulex.Cache.t(), path :: Path.t(), Nebulex.Cache.opts()) :: :ok | {:error, term}

@callback load(cache :: Nebulex.Cache.t(), path :: Path.t(), Nebulex.Cache.opts()) :: :ok | {:error, term}

And provide the implementation for them for the current built-in adapters.
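As a rough illustration of what such dump/load callbacks could do for a map-backed cache (an assumption-laden sketch, not the adapters' actual implementation), entries could be serialized with Erlang's external term format:

```elixir
defmodule PersistenceSketch do
  # Standalone sketch: persist a map of cache entries to disk and read it
  # back, using :erlang.term_to_binary/1 and :erlang.binary_to_term/1.
  def dump(entries, path) when is_map(entries) do
    File.write(path, :erlang.term_to_binary(entries))
  end

  def load(path) do
    with {:ok, bin} <- File.read(path) do
      {:ok, :erlang.binary_to_term(bin)}
    end
  end
end
```

A real implementation would also need to honor the opts (e.g. timeouts) and return {:error, term} on serialization failures, per the proposed specs.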

Multi Level with dist not working as expected

Hey, great project!
I have 2 nodes running with multilevel; each has a local cache + dist, with Redis as a backup.
Let's say one node goes down and then comes back up. From my understanding, it is supposed to check the local cache, which is empty, then go to the distributed level and take the value from the 2nd node without going to Redis. But in my situation I see it go straight to Redis and skip dist. Maybe I misunderstood the usage?

defmodule SimpleServer.Cache do
  use Nebulex.Cache,
    otp_app: :simple_server,
    adapter: Nebulex.Adapters.Multilevel

  defmodule L1 do
    use Nebulex.Cache,
      otp_app: :simple_server,
      adapter: Nebulex.Adapters.Local
  end

  defmodule L2 do
    use Nebulex.Cache,
      otp_app: :simple_server,
      adapter: Nebulex.Adapters.Dist
  end

  defmodule Primary do
    use Nebulex.Cache,
      otp_app: :simple_server,
      adapter: Nebulex.Adapters.Local
  end

  defmodule Redis do
    use Nebulex.Cache,
      otp_app: :simple_server,
      adapter: NebulexRedisAdapter
  end
end

configuration:

import Config

# Distributed Cache
config :simple_server, SimpleServer.Cache,
  levels: [
    SimpleServer.Cache.L1,
    SimpleServer.Cache.L2,
    SimpleServer.Cache.Redis
  ]

config :simple_server, SimpleServer.Cache.L1,
  # 1 hour
  gc_interval: 60 * 60

# Internal local cache used by PartitionedCache.Dist
config :simple_server, SimpleServer.Cache.L2,
    local: SimpleServer.Cache.Primary

config :simple_server, SimpleServer.Cache.Redis,
  conn_opts: [
    # Redix options
    host: "127.0.0.1",
    port: 6379
  ]

config :simple_server,
  port: 4000,
  nodes: [:"gw2@yuri-pc", :"gw1@yuri-pc"]
