
source{d} Lookout

A service for assisted code review that allows running custom code analyzers on pull requests.


Website • Documentation • Blog • Slack • Twitter

Introduction

With source{d} Lookout, we're introducing a service for assisted code review that allows running custom code analyzers on pull requests.

Jump to the Quickstart section to start using it!

Table of Contents

  • Motivation and Scope
  • Current Status
  • Further Reading
  • Quickstart
  • Available Analyzers
  • Create an Analyzer
  • Contribute
  • Community
  • Code of Conduct
  • License

Motivation and Scope

source{d} is the company driving the Machine Learning on Code (#MLonCode) movement. Doing Machine Learning on Code consists of applying ML techniques to train models that can cluster, identify and predict useful aspects of source code and software repositories.

source{d} Lookout is the first step towards a full suite of Machine Learning on Code applications for AI-assisted coding, but you can also create your own analyzers without an ML approach.

The benefits of using source{d} Lookout are:

  • Keeps your codebase style/patterns consistent.
  • Provides language-agnostic assisted code reviews.
  • Identifies where to focus your attention during code reviews.
  • Automatically warns about common mistakes before human code review.

Current Status

source{d} Lookout is currently under development.

Further Reading

This repository contains the code of source{d} Lookout and the project documentation, which you can also see properly rendered at https://docs.sourced.tech/lookout.

Quickstart

There are several ways to run source{d} Lookout; we recommend using docker-compose because it is the most straightforward, but you can learn more about the other ways to run source{d} Lookout.

Please refer to the Configuring source{d} Lookout guide for documentation about the config.yml file, and to learn how to configure source{d} Lookout to analyze your repositories or to use your own analyzers.

A docker-compose.yml config file is provided for Docker Compose; it starts source{d} Lookout, its dependencies (bblfsh and PostgreSQL), and a dummy analyzer that adds some stats to the watched pull requests.

To do so, clone this repository or download docker-compose.yml directly.

Create the config.yml file in the same directory as docker-compose.yml; you can use config.yml.tpl as a template. Make sure config.yml specifies the repositories that source{d} Lookout will watch. Then run, passing a valid GitHub user/token:
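For orientation, a minimal config.yml might look like the sketch below; the field names here are assumptions, so check config.yml.tpl in this repository for the authoritative schema:

```yaml
# Hypothetical sketch; verify field names against config.yml.tpl.
repositories:
  - url: github.com/<user>/<repo>   # repositories to be watched

analyzers:
  - name: Dummy
    addr: ipv4://localhost:10302    # gRPC address of a running analyzer
```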

$ docker-compose pull
$ GITHUB_USER=<user> GITHUB_TOKEN=<token> docker-compose up --force-recreate

Once it is running, source{d} Lookout will start posting the comments returned by the dummy analyzer on pull requests opened against the watched repositories on GitHub.

You can stop it by pressing ctrl+c.

If you want to try source{d} Lookout with your own analyzer instead of the dummy one, start your analyzer first, register it in config.yml, and then run:

$ docker-compose pull
$ GITHUB_USER=<user> GITHUB_TOKEN=<token> docker-compose up --force-recreate lookout bblfsh postgres

If you need to reset the database to a clean state, drop the postgres container. To do so, stop source{d} Lookout with ctrl+c and then execute:

$ docker rm lookout_postgres_1

Available Analyzers

This is the list of the known implemented analyzers for source{d} Lookout:

Name                   | Description                                                              | Targeted files | Maturity level
style-analyzer         | Code style analyzer                                                      |                | development
terraform              | Checks if Terraform files are correctly formatted                        | Terraform      | usable
gometalint             | Reports gometalinter results on pull requests                            | Go             | testing and demo
sonarcheck             | Reports SonarSource checks results on pull requests using bblfsh UAST    | Java           | testing and demo
flake8                 | Reports flake8 results on pull requests                                  | Python         | testing and demo
npm-audit              | Reports issues with newly added dependencies using npm-audit             | JavaScript     | development
function-name analyzer | Applies a translation model from function identifiers to function names. |                | development

Create an Analyzer

If you are developing an Analyzer, or you want more info about how they work, please check the documentation about source{d} Lookout analyzers.

Contribute

Contributions are more than welcome; if you are interested, please take a look at our Contributing Guidelines.

Community

source{d} has an amazing community of developers and contributors who are interested in Code As Data and/or Machine Learning on Code. Please join us! 👋

Code of Conduct

All activities under source{d} projects are governed by the source{d} code of conduct.

License

Affero GPL v3.0 or later, see LICENSE.


lookout's Issues

Propose advanced schema for HistoryDB

#35 introduces simple storage for the input (events) and output (comments) of the analysis.

This issue is about proposing an additional data model needed for further operation of the lookout server on GitHub.

Things that one might want to persist (not necessarily all in this DB):

  • installations: users enabling/disabling the lookout service on their repositories
  • GH App access tokens: YOUR_APP_ID, private key, JWT tokens (10 min expiration), YOUR_INSTALLATION_ACCESS_TOKEN (a GH API access token obtained with the JWT, 1 h expiration)
  • maybe some analyzer state?
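For illustration, the two records from #35 plus a processing status might look like the sketch below; every type, field, and status name here is hypothetical, not a schema proposal:

```go
package main

import (
	"fmt"
	"time"
)

// ReviewEventRecord is a hypothetical persisted input event.
type ReviewEventRecord struct {
	ID         int64
	Provider   string // e.g. "github"
	InternalID string // provider-side event id
	Status     string // e.g. "new", "processed"
	CreatedAt  time.Time
}

// CommentRecord is a hypothetical persisted analyzer output,
// linked back to the event that produced it.
type CommentRecord struct {
	ID       int64
	EventID  int64 // references ReviewEventRecord.ID
	Analyzer string
	File     string
	Line     int
	Text     string
}

func main() {
	ev := ReviewEventRecord{ID: 1, Provider: "github", InternalID: "42", Status: "new", CreatedAt: time.Now()}
	c := CommentRecord{ID: 1, EventID: ev.ID, Analyzer: "dummy", Text: "hello"}
	fmt.Println(c.EventID == ev.ID) // comments reference their originating event
}
```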

[SDK] Make user experience more like curl

Improve the user experience of the lookout SDK:

  1. Add a -v flag to the push/review commands, hiding the debug details (i.e. all the nifty gRPC logs) unless it is given #37
  2. If no bblfsh service is available, show just one warning that no UASTs will be available and still trigger the analyzer #34
    Right now it prints lots of logs and takes ~3 sec
  3. If no analyzer is running, just print one line of warning and stop (same as cURL) #34
    Right now it attempts to re-connect with lots of verbose logs

Better testing for server

Currently, we have a small make test-json test, which definitely isn't enough.

Proposal:

Core tests. These make sure the core of our service works correctly and that we don't rely on GitHub too heavily (meaning everything works with the json provider):

  1. Rework the current make test-json using the same (or even improved) approach as we use for make test-sdk
  2. Use a new DB for each run, and remove it after the test finishes
  3. Add tests for:
  • a wrong CommitRevision
  • multiple analyzers
  • push events
  • an analyzer response with an error
  • a local .lookout.yml (as a bonus, because it would require a separate repository and might be too much for this task)

We also need better tests for the github provider (both unit & integration), but that will be covered in another issue.

Add DB storage service with kallax

This is the initial step of adding storage to lookout serve.

Let's store something really simple: only the input (events) and the output (comments), linked to each other.

[SDK] Add CI automation for release

Add automation to produce a release on tag that includes the following artifacts:

  • the content of the ./sdk dir
  • lookout binaries for Linux/macOS

in a single archive uploaded to the GH release.

Panic on ChangesRequest

I am sending the following ChangesRequest to ./lookout review ipv4://localhost:2000:

base {
  internal_repository_url: "file:///home/vadim/Projects/style-analyzer/lookout/core"
  hash: "e2ca665f6f45a3e9c44b3e1c6bee07c236efc842"
}
head {
  internal_repository_url: "file:///home/vadim/Projects/style-analyzer/lookout/core"
  hash: "b6365d0cdf4a2eb1b389960ac8f437a9b3cd1cf7"
}
exclude_vendored: true
want_uast: true

(want_content is false, not shown in the text for some reason).

and get:

panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x8 pc=0xac0082]

goroutine 70 [running]:
github.com/src-d/lookout/service/git.filterVendor(0x0, 0x0, 0x0, 0xc4202c46e0)
	/home/travis/gopath/src/github.com/src-d/lookout/service/git/scanner.go:299 +0x22
github.com/src-d/lookout/service/git.NewChangeExcludeVendorScanner.func1(0xc42026cc20, 0xc42026cc20, 0xc420c636ca, 0xc420c6c000)
	/home/travis/gopath/src/github.com/src-d/lookout/service/git/scanner.go:305 +0x2f
github.com/src-d/lookout.(*FnChangeScanner).Next(0xc4200c16c0, 0xc420be8b00)
	/home/travis/gopath/src/github.com/src-d/lookout/data.go:288 +0x99
github.com/src-d/lookout.(*FnChangeScanner).Next(0xc4200c1800, 0xc420af0d00)
	/home/travis/gopath/src/github.com/src-d/lookout/data.go:286 +0x5a
github.com/src-d/lookout/service/bblfsh.(*ChangeScanner).Next(0xc42025a6e0, 0xc420af0da0)
	/home/travis/gopath/src/github.com/src-d/lookout/service/bblfsh/bblfsh.go:146 +0x45
github.com/src-d/lookout.(*DataServerHandler).GetChanges(0xc42000cce0, 0xc4200c0300, 0xfa8120, 0xc4201e0fe0, 0x0, 0x0)
	/home/travis/gopath/src/github.com/src-d/lookout/data.go:86 +0x151
github.com/src-d/lookout/pb._Data_GetChanges_Handler(0xc54de0, 0xc42000cce0, 0xfa7340, 0xc4200dee70, 0x17c8060, 0xc420a18720)
	/home/travis/gopath/src/github.com/src-d/lookout/pb/service_data.pb.go:189 +0x10e
github.com/src-d/lookout/vendor/google.golang.org/grpc.(*Server).processStreamingRPC(0xc4200f4380, 0xfa9200, 0xc420a24200, 0xc42022e100, 0xc420abc3c0, 0x179adc0, 0x0, 0x0, 0x0)
	/home/travis/gopath/src/github.com/src-d/lookout/vendor/google.golang.org/grpc/server.go:1160 +0xa2d
github.com/src-d/lookout/vendor/google.golang.org/grpc.(*Server).handleStream(0xc4200f4380, 0xfa9200, 0xc420a24200, 0xc42022e100, 0x0)
	/home/travis/gopath/src/github.com/src-d/lookout/vendor/google.golang.org/grpc/server.go:1253 +0x12b1
github.com/src-d/lookout/vendor/google.golang.org/grpc.(*Server).serveStreams.func1.1(0xc4200b8120, 0xc4200f4380, 0xfa9200, 0xc420a24200, 0xc42022e100)
	/home/travis/gopath/src/github.com/src-d/lookout/vendor/google.golang.org/grpc/server.go:680 +0x9f
created by github.com/src-d/lookout/vendor/google.golang.org/grpc.(*Server).serveStreams.func1
	/home/travis/gopath/src/github.com/src-d/lookout/vendor/google.golang.org/grpc/server.go:678 +0xa1
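From the trace, filterVendor at scanner.go:299 receives a nil change (the 0x0 argument). A defensive guard, sketched below with simplified stand-in types rather than the real lookout API, is the kind of fix that avoids this crash:

```go
package main

import (
	"fmt"
	"strings"
)

// File and Change loosely mimic the shapes involved;
// they are illustrative only, not the real lookout types.
type File struct{ Path string }

type Change struct {
	Base *File
	Head *File
}

// isVendored reports whether a change touches a vendored path.
// The nil guards are the point: a nil change, or a change with a
// nil Head (e.g. a deletion), must not be dereferenced.
func isVendored(c *Change) bool {
	if c == nil {
		return false
	}
	f := c.Head
	if f == nil {
		f = c.Base // fall back to the base side for deletions
	}
	if f == nil {
		return false
	}
	return strings.HasPrefix(f.Path, "vendor/")
}

func main() {
	fmt.Println(isVendored(nil))                                      // false, no panic
	fmt.Println(isVendored(&Change{Head: &File{Path: "vendor/x.go"}})) // true
}
```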

Include migration in lookout binary

We want to have a single binary as a release artifact, which means migrations must be included in it. It also makes sense to expose some of the kallax migration commands through lookout.

  • use go-bindata to include the SQL migration files in the binary
  • expose a migrate command that does the same as kallax migrate up (maybe with --all by default), but with the migrations coming from go-bindata
  • update the makefile and travis to use the new command
  • update the documentation & docker-compose to run migrations on start

Remove merge field from review event

This field is very confusing when you study the proto files together with its current GitHub-bound behavior. Other providers have a similar field but with a different meaning. It contains the sha of the merge commit ONLY AFTER the pull request is merged; before that it's empty.

According to our discussions, we don't need this field (at least for the first version of the service/analyzers) because we don't need to comment on closed pull requests.

lookout client sends negative seconds

I execute ./lookout review ipv4://localhost:2000 and receive the request on the Python side. I get the following ReviewEvent:

created_at {
  seconds: -62135596800
}
updated_at {
  seconds: -62135596800
}

Indeed, those integers are negative.

More tests for Github provider

More rigorous tests for things that might go wrong:

  • Poster/Watcher network failure, partial JSON download, etc
  • Exceeding API limits

Deploy to staging environment

As part of Phase 1 of development, we want to have a staging environment analyzing repositories under the src-d org.

TODOs:

  • prepare containers + instructions to run them #79
  • ask Infra by creating an issue with a request for:
    • a chart: a deployment? for the server, a service for the Dummy analyzer + bblfsh + postgres (later, another service for the Style analyzer will be added)
    • a staging env (Infra's choice, most probably our pipeline cluster?)
    • a CD system (Infra's choice)
  • configure
  • cut a release and deploy

[SDK] Update Data Service API

  • Get the full repository content, i.e. stream back all files from a given revision
    (but keep the current defaults of GetChanges)
  • Remove the "changed_only" flag
  • Add an "exclude_vendored" flag

Don't fail on broken cache dir

Steps to reproduce:

rm -rf /tmp/lookout/github.com/src-d/lookout
mkdir -p /tmp/lookout/github.com/src-d/lookout
make dry-run

results in:

repository does not exist
exit status 1

In real life this happens when the process dies during cloning. After that, lookout serve doesn't work until the cache dir is deleted manually.

[SDK] Add simple integration test to CI

Add a CI profile for integration tests that:

  • builds the lookout binary (part of the SDK)
  • launches the dummy analyzer: dummy serve
  • runs lookout review ipv4://localhost:10302
  • at least checks with grep BEGIN RESULT
  • runs lookout push ipv4://localhost:10302
  • greps for "dummy comment for push event"

Add another profile that verifies that

Refactor github provider to use pull requests api

I started digging into #77 and found out that the current /events API isn't suitable for us.

An example of the current state of lookout:
there is pull request #81, which was force-pushed after a rebase on master. Now let's look at the github responses:

Pull requests API:
https://api.github.com/repos/src-d/lookout/pulls
https://gist.github.com/smacker/6378f49c4c6108b50bae2e1041e3fa2f

[0].head.sha = 'cd15e2c050763e2ac3d444521eb28d06c9f47326'

Events API:
https://api.github.com/repos/src-d/lookout/events
https://gist.github.com/smacker/1c82e55e001f2e27efdfe0242b33bd9d

[13].payload.pull_request.head.sha = 'dfa9b65007a3c737d682e561ae2f7a1741d81b2b'

There is no push event to the pull request branch, nor any other event that updates the PR:

$ grep "cd15e2c050763e2ac3d444521eb28d06c9f47326" events.json  | wc -l
       0

So there is no way to get an up-to-date sha for a pull request using the events API.

Watch multiple repositories

Right now lookout serve watches a single given repository.

This issue is about adding the ability to watch multiple repositories, i.e. from the CLI using an env var/flag or some other similar mechanism (it may eventually become part of the server configuration file, but not in this issue).

Eventually, it will take the list of repositories to watch from each GH Installation.
Even further out (most probably this Q), there will be a UI for this.

Prototype analyzer SDK with means to test it

As discussed, this issue is about creating a PoC of an SDK for analyzers.

To make it easier to reference from other teams, it will live in the ./sdk folder inside this repo. For now it will only have a main README, containing links to the .proto files in this repo and links to documentation on how to generate code. It should probably also contain instructions on how to run bblfshd.

As part of the release, we need to provide a small pre-built binary for testing the analyzers. This binary, dubbed look, will be a simplified version of the whole lookout server.

On execution, it will:

  • trigger an immediate gRPC call to the analyzer
  • provide any required repository data (act as a real DataServer)
  • print the analyzer's response in the console
  • exit

It should be as simple as:

  • look review --path path/to/single-git-repo --from revision --to revision <Analyzer gRPC>
  • look push, a hook for training, working the same way

In this case review would be a subcommand, to accommodate any number of methods we may add in the future.
Names, use of arguments, flags or env. vars., etc. will all be decided later on.

As suggested by @bzz, the default behaviour would be look review ipv4://localhost:NNNN, which triggers the review API for the changes between the latest head and the commit before it, for the repository in the current directory.

Configurable Analyzers

  • add support for multiple Analyzers in the lookout server
  • read per-repository configuration with the DataServer API: GetFiles({IncludePattern: "^\.lookout\.yml$"}) #55
  • hard-code a static global configuration of the server
  • merge the global and repo-specific configurations #71
  • pass the configuration to each individual Analyzer in the NotifyReview gRPC call #55

Network failures end the serve process

For lookout serve with the github provider, if the network fails, the process ends with the message context deadline exceeded.

We probably don't want that; instead, the watcher should keep polling so it recovers when the network problem is fixed.

SDK: uast handling

  • implement an optional request for UAST in the dummy analyzer
  • the SDK data server should return an error if a UAST was requested but there is no connection to bblfsh
  • add tests for these cases

Configurable polling interval for Github

Right now the github watcher uses Basic Authentication with a single user's OAuth token to poll the Events API. It works well, but as @carlosms noted, this API, on top of the usual single-token API limits, is subject to additional restrictions on frequency, AKA "X-Poll-Interval".

The current implementation should be enough for Phase 1 of development but, again, as @carlosms noted, the polling interval can sometimes be e.g. 1 min, and then the latency of processing new PRs becomes unacceptably high, even for the small deployments we are targeting this Q.

This issue is about changing the github provider so it:

  • has a configurable max poll delay of N sec, with some low default (e.g. every 5 sec)
  • supports multiple access tokens, which for now are provided through the server configuration file
    (eventually these will be tokens based on GitHub App Installations)
  • provides the ability to poll the actual GH API at >= the given frequency by switching to the next GH client with a different token whenever the current one's Poll-Interval is too high to satisfy the configured frequency

This would allow getting the poll interval as close as possible to the configured N, given a big enough number of tokens, while obeying the API usage policy.

Most probably, such a low interval would require the implementation to be careful not to schedule events that were already processed, so as not to overwhelm the analyzers, e.g. by first checking the event status with the DB from #53.
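The token-rotation idea above can be reduced to a small scheduling helper; this is a hypothetical sketch, not the actual implementation: pick the token whose last-reported X-Poll-Interval is lowest, and never poll faster than that server-imposed minimum.

```go
package main

import "fmt"

// tokenState tracks the poll restriction GitHub last reported for one
// token via the X-Poll-Interval header (in seconds).
type tokenState struct {
	Name         string
	PollInterval int
}

// pickToken returns the token with the smallest reported poll interval
// and the effective delay: as close as possible to the configured
// target, but never below the server-imposed minimum.
func pickToken(target int, tokens []tokenState) (string, int) {
	best := tokens[0]
	for _, t := range tokens[1:] {
		if t.PollInterval < best.PollInterval {
			best = t
		}
	}
	delay := target
	if best.PollInterval > delay {
		delay = best.PollInterval // obey the API usage policy
	}
	return best.Name, delay
}

func main() {
	tokens := []tokenState{{"a", 60}, {"b", 10}}
	name, delay := pickToken(5, tokens)
	fmt.Println(name, delay) // token "b" allows polling every 10s, not the target 5s
}
```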

Discard old events

Steps to reproduce:

  1. Create PR
  2. Amend commit & push --force
  3. Run the server on this repository

Results in:

[2018-07-27T13:54:10.417860965+02:00]  INFO processing pull request app=lookout head=refs/pull/74/head provider=github repository=git://github.com/src-d/lookout.git
...
[2018-07-27T13:54:11.32091199+02:00]  INFO processing pull request app=lookout head=refs/pull/74/head provider=github repository=git://github.com/src-d/lookout.git
object not found
exit status 1

This error may also happen if push --force is done during event processing.

Refactor usage of AnalyzerComments

We have introduced the AnalyzerComments structure, which forces us to use for-in-for loops in many places. It looks kind of strange and even ugly.

We can try to refactor it.

Some proposals from @carlosms:

For instance, one alternative I can think of is:

  • remove the analyzer feedback URL from the main yml config file
  • add a new string field to each pb.Comment for the feedback link
  • filling this field is the responsibility of each analyzer

This would also allow an analyzer to provide a different feedback URL for different types of comments.

As another alternative, we can make the structure itself expose a richer interface so it is easier to use.

Add more debug level messages

Processing an event can take some time, but looking at the logs you can't easily see how much time each step takes.

For instance, a recent test took around 2 minutes, and it felt like that was due to the git repo sync to check for the local config file. It would be good to have more fine-grained info.

Add option to github provider to report status

With the repo:status OAuth scope one can report statuses to GitHub using the Status API.

This issue is about adding an option for lookout to report the in-progress status of the code analysis to the PR using state:pending of the GH Status API.

Most probably this should be provider-specific, as e.g. in GitLab the similar concept is related to Pipelines.

The initial implementation should only report a global status of the PR: it is pending if it is being processed by any of the analyzers in lookout.

Use src-d logger for grpc or remove grpc logging completely

If we want to have logs for grpc, it's better to use our own logger so the logs come out in the same format.
Another option is to just remove grpc logging completely.

If we want to go with the first option, here is a proposal:

The interface of gopkg.in/src-d/go-log.v1 is quite different from grpclog.LoggerV2, but the logrus one is almost the same.

I would propose to set the default logrus logger as the grpc logger (something like this):

package grpclogrus

import (
	"github.com/sirupsen/logrus"
	"google.golang.org/grpc/grpclog"
)

// GrpcLogrus implements grpclog.LoggerV2 interface using logrus logger
type GrpcLogrus struct {
	*logrus.Logger
}

// V reports whether verbosity level l is at least the requested verbose level.
func (lw GrpcLogrus) V(l int) bool {
	// logrus & grpc levels are inverted: grpc INFO(0) maps to logrus Info(4),
	// so l is enabled when the logrus level is at least as verbose
	logrusLevel := 4 - l
	return int(lw.Logger.Level) >= logrusLevel
}

func init() {
	l := GrpcLogrus{logrus.StandardLogger()}
	grpclog.SetLoggerV2(l)
}

And then configure it using LoggerFactory.ApplyToLogrus

Thoughts?

/cc @smola @bzz

Correctness guarantees: same comment should not be posted twice

The same review request event can be processed multiple times by the lookout service, for a number of different reasons.

Part of the service's correctness guarantees for users is that they never get the same feedback multiple times in the same review thread.

Most probably this can be done by the server checking with HistoryDB whether the same comments were already provided to this review thread, before asking a specific Poster to actually go and create new ones in the code review system.

(At the current Phase 1, we want to keep polling the code review system for events and not be smart about scheduling things based on HistoryDB yet.)

Marked as umbrella, as this will most probably require some new integration tests in a separate profile on CI.
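The dedup check could take roughly this shape; the key derivation and types are hypothetical, and a real implementation would consult HistoryDB rather than an in-memory set:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// commentKey is a stable fingerprint of a comment within a review
// thread; posting the same comment twice yields the same key.
func commentKey(reviewID, analyzer, file string, line int, text string) string {
	h := sha256.Sum256([]byte(fmt.Sprintf("%s|%s|%s|%d|%s", reviewID, analyzer, file, line, text)))
	return fmt.Sprintf("%x", h[:8])
}

// poster deduplicates against a set of already-posted keys
// (standing in for a HistoryDB lookup).
type poster struct{ seen map[string]bool }

// post returns true if the comment was actually posted,
// false if it was already delivered to this review thread.
func (p *poster) post(reviewID, analyzer, file string, line int, text string) bool {
	k := commentKey(reviewID, analyzer, file, line, text)
	if p.seen[k] {
		return false // duplicate: skip
	}
	p.seen[k] = true
	return true
}

func main() {
	p := &poster{seen: map[string]bool{}}
	fmt.Println(p.post("pr-74", "dummy", "a.go", 1, "hi")) // true: first time
	fmt.Println(p.post("pr-74", "dummy", "a.go", 1, "hi")) // false: duplicate
}
```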

Naming inconsistency in event proto

Abstract from sdk/event.proto:

message ReferencePointer {
    // InternalRepositoryURL is the original clone URL, not canonized.
    string internal_repository_url = 1 [(gogoproto.customname) = "InternalRepositoryURL"];
    // ReferenceName is the name of the reference pointing.
    string ReferenceName = 2 [(gogoproto.casttype) = "gopkg.in/src-d/go-git.v4/plumbing.ReferenceName"];
    // hash is the hash of the reference pointing.
    string Hash = 3;
}

// PushEvent represents a Push to a git repository.
message PushEvent {
    // Provider triggering this event.
    string provider = 1;
    // InternalId is the internal id for this event at the provider.
    string internal_id = 2 [(gogoproto.customname) = "InternalID"];
    // CreateAt is the timestamp of the creation date of the push event.
    google.protobuf.Timestamp created_at = 3 [(gogoproto.nullable) = false, (gogoproto.stdtime) = true];
    // Commits is the number of commits in the push.
    uint32 commits = 4;
    // Commits is the number of distinct commits in the push.
    uint32 distinct_commits = 5;
    // Configuration contains any configuration related to specific analyzer
    google.protobuf.Struct configuration = 6 [(gogoproto.nullable) = false];

    CommitRevision commit_revision = 7 [(gogoproto.nullable) = false, (gogoproto.embed) = true];
}

We've got two different naming conventions here: lower_underscore and UpperCamelCase. I am fine with either, but please choose one and rename everything else accordingly.

Add server Docker image to release artefacts

To be able to deploy the lookout server with the dummy analyzer on staging with k8s, we need Docker images for:

  • lookout serve
  • the dummy analyzer

It makes sense to add the server image as part of our usual release-process CI automation (so eventually it will include lookoutd instead of lookout serve).

Not sure if the same is needed for Dummy; it might be enough to have a one-command automation to push a new Dummy image version manually from time to time.

Add a user feedback link to comment text

Research has found that one of the best ways to keep the quality of analyzer suggestions high is to provide the user with a link to file an issue for each particular analyzer.

This task is about:

  • adding an "issue tracker URL" field to the Analyzer configuration settings of serve
  • adding a "feedback request template" field to the global lookout server configuration, with a default like If you think this is not relevant for you, please, [tell us](<link>)
  • appending this template to the end of every comment's text before posting

Add simple CI

Add TravisCI for Golang with a profile for running go test.

Logging

  1. All commands (serve/review/push/dummy) should have a -v flag.
  2. When logging is verbose, the successful path should be printed. That includes at least Connected to and Sent/Received event; servers should report when they have started. Please inspect the code to find more useful cases.
  3. What should we do with grpc logging? There was a suggestion to replace it with our own logging: #37 (comment) @bzz @carlosms need your input
  4. We may also want to be able to configure the level more flexibly. Example: https://github.com/smola/go-cli/pull/1/files#diff-308aaa719425f4333033c97cfb3b9868
  5. review/push/(dummy?) should default to the INFO level, while serve should most probably default to WARNING. The -v flag would make it one level more verbose.

CLI input/output for serve

Right now lookout serve can only trigger analysis from actual GitHub events.
This issue is about changing that, so that STDIO can be used as input and output from the CLI.

It includes:

  • a CLI trigger, to separate the polling-GitHub and reading-stdio modes:
    i.e. lookout serve -mode cli that reads stdio and exits VS lookout serve -mode github that polls GitHub, or something similar
  • an implementation of a CLI Provider that
    • listens to the stdin of the process
    • prints only the comments to stdout
    • supports structured output in JSON, i.e. with -o json
  • prints logs only to stderr
  • has a simple integration test:
     ./dummy
     cat series-of-events.json | lookout serve -mode cli | grep comment
    

WantUAST shouldn't return content of files

To parse the code, the bblfsh service currently sets req.WantContents.
We need to improve the service to remove the content from the response after parsing if the client didn't ask for it.
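A sketch of that post-parse cleanup, using simplified stand-ins for the real data-service messages:

```go
package main

import "fmt"

// file is a simplified stand-in for the data-service file message.
type file struct {
	Path    string
	Content []byte
	UAST    string // stand-in for the parsed UAST
}

// stripContent clears file contents that were only fetched so bblfsh
// could parse them, honoring the client's original WantContents flag.
func stripContent(files []file, wantContents bool) {
	if wantContents {
		return
	}
	for i := range files {
		files[i].Content = nil
	}
}

func main() {
	fs := []file{{Path: "a.go", Content: []byte("package a"), UAST: "(root)"}}
	stripContent(fs, false)
	fmt.Println(fs[0].Content == nil, fs[0].UAST) // content dropped, UAST kept
}
```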

Run tests on macos

  1. Don't run tests that require bblfsh when the -short flag is provided
  2. Add tests on macOS to CI with this flag, on the master branch.
