
kafka-webview's People

Contributors

blueicarus, bygui86, crim, dependabot[bot], lucrito, quentingodeau


kafka-webview's Issues

Initial run requires 2+ GB of RAM

Hey there.

Don't know if that's expected, but the app requires 2+ GB of RAM available to boot.
If you set an upper memory limit of 2-3 GB on it in Kubernetes/Docker, it crashes without booting.

webview-845ddb486d-sv6wj webview Error starting ApplicationContext. To display the conditions report re-run your application with 'debug' enabled.
webview-845ddb486d-sv6wj webview 2019-03-14 19:43:05.412 ERROR 8 --- [           main] o.s.boot.SpringApplication               : Application run failed
webview-845ddb486d-sv6wj webview
webview-845ddb486d-sv6wj webview org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'stompWebSocketHandlerMapping' defined in class path resource [org/springframework/web/socket/config/annotation/DelegatingWebSocketMessageBrokerConfiguration.class]: Bean instantiation via factory method failed; nested exception is org.springframework.beans.BeanInstantiationException: Failed to instantiate [org.springframework.web.servlet.HandlerMapping]: Factory method 'stompWebSocketHandlerMapping' threw exception; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'messageBrokerTaskScheduler' defined in class path resource [org/springframework/web/socket/config/annotation/DelegatingWebSocketMessageBrokerConfiguration.class]: Bean instantiation via factory method failed; nested exception is org.springframework.beans.BeanInstantiationException: Failed to instantiate [org.springframework.scheduling.concurrent.ThreadPoolTaskScheduler]: Factory method 'messageBrokerTaskScheduler' threw exception; nested exception is java.lang.IllegalArgumentException: 'poolSize' must be 1 or higher
webview-845ddb486d-sv6wj webview 	at org.springframework.beans.factory.support.ConstructorResolver.instantiateUsingFactoryMethod(ConstructorResolver.java:591) ~[spring-beans-5.0.12.RELEASE.jar!/:5.0.12.RELEASE]

That seems like an enormous amount of RAM just to boot a web app.
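For anyone wanting to reproduce the crash, a minimal sketch with Docker (assuming the published sourcelaborg/kafka-webview image name; the 2 GB cap mirrors the limit described above):

docker run --memory=2g -p 8080:8080 sourcelaborg/kafka-webview:latest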

SASL Authentication

Greetings! Is it possible to use kafka-webview against a cluster with SASL authentication? With the plain console consumer I can do this by passing a JAAS config file (with login/password) as a JVM argument, like

export KAFKA_OPTS="-Djava.security.auth.login.config=/home/nkm/Apps/kafka_2.11-2.0.0/config/jaas_client.conf"

and a consumer.properties file with the lines

security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN

as command-line arguments, like:

bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test_topic --from-beginning --consumer.config config/consumer.properties
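For context, such a jaas_client.conf for the PLAIN mechanism typically looks like the sketch below (a generic example with placeholder credentials, not taken from this report):

KafkaClient {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="alice"
  password="alice-secret";
};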

Allow for more control over consumer group used

Clusters that have authorization configured need more options around the consumer group id (consumerIdPrefix), as ACLs are often based on the consumer group used.

For example, as it is now, the consumerIdPrefix is not sufficient, because ACLs are linked to a specific/static consumer group id, so a varying consumerIdPrefix + suffix combination does not work in this scenario.

Besides supporting a static consumer group id, it would be great if a cluster-wide default consumer group id could be provided when creating a new Cluster, and could potentially also be overridden per View.
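To illustrate why a static group id matters: ACLs are typically granted against an exact group name, along these lines (a generic kafka-acls invocation with placeholder names, not from this report):

bin/kafka-acls.sh --bootstrap-server localhost:9092 --add \
  --allow-principal User:webview --operation Read \
  --topic some-topic --group webview-static-group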

User Permission to Create Views

First of all, great project. I just deployed it with a custom Apache Avro deserializer and everything works like a charm.

For my setup, I would like a little more flexibility in terms of user permissions. I do not want users to be able to modify topic settings, but I do want to give them the ability to create views. This effectively means I cannot share admin access (since that gives too much power over cluster/topic configuration), but giving user permissions is not enough (since admins would have to create views for users). Is this something that could be changed easily? Happy to help with a PR if you send me some pointers.

Add ability to set arbitrary kafka consumer properties when defining a cluster.

Problem

When defining a cluster and its connection properties, only a subset of common properties is exposed in the UI. More advanced options, such as disabling server host name verification, are definitely nice to have and could be added to the UI. But undoubtedly there are a hundred other use cases that also require setting various configuration properties.

Currently there exists no way to do advanced configuration of connection properties without committing code to change the UI.

Possible Solution

Provide a general way to supply additional configuration properties when defining a cluster. Perhaps we add an "Advanced" tab that allows selecting from known configuration options, as well as the option to define your own keys and values.

This should allow for future-proofing the application as well as supporting use cases currently unknown to the dev team. It also doesn't prevent us from adding a more user-friendly UI for the more common settings.
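For instance, the host name verification case mentioned above comes down to a single standard Kafka client property (shown here as an illustration; setting it to an empty value disables the check):

ssl.endpoint.identification.algorithm=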

LDAP Integration does not use Bind

It looks like the current LDAP integration of kafka-webview wants to retrieve the user's password attribute and compare it locally. This is a somewhat odd requirement, since most LDAP servers are set up to not return the password, especially not in a hashed and salted form that corresponds to the Spring LDAP encoder.

Instead, most projects make use of the LDAP bind functionality, which gives the server authority to make the authentication decision.
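For comparison, bind-based authentication in Spring Security is usually configured roughly as in the sketch below (a generic example, not kafka-webview's actual code; the DN patterns and URL are placeholders):

import org.springframework.security.config.annotation.authentication.builders.AuthenticationManagerBuilder;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;

public class LdapBindSecurityConfig extends WebSecurityConfigurerAdapter {
    @Override
    protected void configure(AuthenticationManagerBuilder auth) throws Exception {
        auth.ldapAuthentication()
            // Bind as the user; the LDAP server makes the authentication decision.
            .userDnPatterns("uid={0},ou=people")
            .groupSearchBase("ou=groups")
            .contextSource()
            .url("ldap://ldap.example.org:389/dc=example,dc=org");
    }
}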

Open stream behind reverse proxy

Thanks to the work on #137 we are running kafka-webview behind a reverse proxy on our K8s clusters. It has been a tremendous help to quickly debug our flows!

The only hiccup we encounter now is that the stream view/websocket still redirects to the old (non-prefixed) URL. When checking the calls we see it performing a GET https://myhost/websocket/info?t=1550846293947 instead of a GET https://myhost/{configured pathPrefix}/websocket/info?t=1550846293947

org.sourcelab.kafka.webview.ui.controller.api.exceptions.ApiException: Failed to construct kafka consumer

I have followed the instructions in issue #81 to create a new format (screenshot omitted).
But when trying to view the topic, I am getting a "Failed to construct kafka consumer" message.
The view works to some extent when I use the String format instead of Avro, but obviously the message is then unreadable.

2018-12-20 23:17:22.264 INFO 9 --- [nio-8080-exec-5] o.a.kafka.common.utils.AppInfoParser : Kafka version : 1.1.1
2018-12-20 23:17:22.264 INFO 9 --- [nio-8080-exec-5] o.a.kafka.common.utils.AppInfoParser : Kafka commitId : 98b6346a977495f6
2018-12-20 23:17:22.399 WARN 9 --- [nio-8080-exec-9] .m.m.a.ExceptionHandlerExceptionResolver : Resolved [org.sourcelab.kafka.webview.ui.controller.api.exceptions.ApiException: Failed to construct kafka consumer]

Extremely slow filtering

Not sure if this is known or not,

but I've got kafka-webview up and running with my data system. (just having the ability to view topics has been awesome!)

Our kafka topics typically contain millions of records serialized with protocol buffers.
When I attempted to write my own filter, as well as when using the example string filter, I see approximately 5 records filtered per second.

This makes filters virtually unusable for any topic with more than a handful of records.

Is this known? Are there any plans to improve this, or a known reason why? Also, perhaps it's something on my end...

Also, I'm running the backend on a bare-metal server with 12 physical cores and 65 GB RAM. It doesn't seem to be using anywhere close to 10% of system resources.

Crash listing topics when filter available

If I add a filter (I was using the example ones, unmodified) and then go to create a view, the application crashes after I have selected my cluster, and no topics are available in the list. If I do not have any filters, it all works fine.

The stack trace is very long, but I did spot this; I don't know if it is relevant:

ERROR 23754 --- [nio-7070-exec-1] org.thymeleaf.TemplateEngine : [THYMELEAF][http-nio-7070-exec-1] Exception processing template "configuration/view/create": An error happened during template parsing (template: "class path resource [templates/configuration/view/create.html]")

org.thymeleaf.exceptions.TemplateInputException: An error happened during template parsing (template: "class path resource [templates/configuration/view/create.html]")

and later....

Caused by: org.attoparser.ParseException: Exception evaluating SpringEL expression: "filterParameters.containsKey(filter.id)" (template: "configuration/view/create" - line 352, col 45)

and later....

ERROR 23754 --- [nio-7070-exec-1] o.a.c.c.C.[.[.[/].[dispatcherServlet] : Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed; nested exception is org.thymeleaf.exceptions.TemplateInputException: An error happened during template parsing (template: "class path resource [templates/configuration/view/create.html]")] with root cause

I got the success message when setting up the filter. I used the StringSearchFilter, specified where to find the jar file (I used the 'with-dependencies' one), and set the classpath to examples.filter.StringSearchFilter.

Create Kafka Producer

I would like to know if you could offer a Kafka producer feature; it would be great.

Using KafkaAvroDeserializer

Hi,

I'm trying to declare a new Message Format using the io.confluent.kafka.serializers.KafkaAvroDeserializer class and uploading the kafka-avro-serializer-4.1.0.jar.
The declaration failed: the class is not found.
Do you have an idea? Do you know how to read topics serialized with Avro?

Regards

Antoine

For SASL+SSL clusters do not require KEYSTORE or KEYSTORE PASSWORD

Created from #109 and #105

Issue
Currently, creating a connection to a SASL+SSL cluster requires providing a KEYSTORE and KEYSTORE PASSWORD. These are not required for connecting to a SASL+SSL cluster; only the TRUSTSTORE and TRUSTSTORE PASSWORD are.

Suggested Fix
No longer require these fields when creating or updating a cluster connection in this scenario.
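For reference, a typical SASL_SSL client configuration needs only trust material, along these lines (standard Kafka client properties; paths and passwords are placeholders):

security.protocol=SASL_SSL
sasl.mechanism=PLAIN
ssl.truststore.location=/path/to/truststore.jks
ssl.truststore.password=changeit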

Cluster connection issue with Docker

Hi @Crim ,

Thanks for this awesome project!
When I use the Docker image to connect to an unsecured Kafka server, I get the error below. webview, kafka, and zookeeper are all deployed in the same EC2 VM.

WARN 9 --- [eration-UserId1] org.apache.kafka.clients.NetworkClient : Connection to node -1 could not be established. Broker may not be available.

Error connecting to cluster: org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment.

But I can connect to the Kafka server via the command-line tool.

Did I miss anything?

Thanks.
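A common cause of exactly this symptom (an assumption on my part, not confirmed in this report) is that the broker advertises an address the container cannot reach; the usual broker-side fix looks something like this in server.properties, with <host-ip> as a placeholder:

listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://<host-ip>:9092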

Cluster and SSL config via file

Hi,

I'm using kafka-webview and I really like it so far. One big problem is that most of the config has to be done via the UI. Is there any way to pre-configure clusters and the SSL config in the config.yml?

Unable to view consumer group

Hi,
I have configured kafka-webview to connect to my Kafka cluster. The issue I am seeing is that the consumer groups pertaining to the topic are not shown in the Cluster DEV Consumer Groups section on my Cluster Explorer page. However, the Kafka Tool desktop UI I have configured locally shows all the consumer groups and messages nicely. Is there anything I am missing or need to configure in the configuration yaml files?

Thanks
Sujit

Question: Problems with custom deserializer, UnknownFieldSet

After creating a custom deserializer and uploading it into the app, I get the following when trying to view a View which uses that message format.

 Error Type definition error: [simple type, class com.google.protobuf.UnknownFieldSet$Parser]; nested exception is com.fasterxml.jackson.databind.exc.InvalidDefinitionException: No serializer found for class com.google.protobuf.UnknownFieldSet$Parser and no properties discovered to create BeanSerializer (to avoid exception, disable SerializationFeature.FAIL_ON_EMPTY_BEANS) (through reference chain: org.sourcelab.kafka.webview.ui.manager.kafka.dto.KafkaResults["results"]->java.util.Collections$UnmodifiableRandomAccessList[0]-

Is there any way to avoid this? #75 looks like it tried to mitigate this, but I'm still seeing it on master HEAD.
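The exception message itself points at one mitigation. A minimal Jackson sketch of that setting (generic code, not kafka-webview's actual implementation):

import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.SerializationFeature;

public class MapperConfig {
    public static ObjectMapper build() {
        ObjectMapper mapper = new ObjectMapper();
        // Serialize types with no discoverable properties as {} instead of throwing.
        mapper.disable(SerializationFeature.FAIL_ON_EMPTY_BEANS);
        return mapper;
    }
}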

Issue with Docker version of Kafka Webview and MacOS

Hello,

I noticed that the Docker version of Kafka Webview cannot reach my local Kafka cluster on my macOS machine. Is this a known issue? It might have to do with the /etc/hosts file, but I am not sure.

The issue does not occur with the zipped version.

P.S. Kudos for this nice software, by the way.
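A likely explanation (an assumption, not confirmed in this report): inside the container, localhost refers to the container itself, so a broker running on the Mac host usually has to be addressed through Docker for Mac's special hostname, e.g. using

host.docker.internal:9092

as the broker address when defining the cluster (and the broker must advertise an address reachable from the container).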

Got 500 when creating new message format

Hi @Crim ,

Thanks for this awesome project!
I am trying to play around with this tool and had two custom deserializers ready for test purposes.
I am able to add one of them successfully, but the other one always gets a 500.
Both custom deserializers implement the Kafka Deserializer interface, just with different logic inside.

The exception in the log is as below:
2018-01-25 15:10:01.375 ERROR 18792 --- [nio-8080-exec-9] o.a.c.c.C.[.[.[/].[dispatcherServlet] : Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed; nested exception is org.thymeleaf.exceptions.TemplateInputException: Error resolving template "/configuration/messageFormat/create", template might not exist or might not be accessible by any of the configured Template Resolvers] with root cause

org.thymeleaf.exceptions.TemplateInputException: Error resolving template "/configuration/messageFormat/create", template might not exist or might not be accessible by any of the configured Template Resolvers
at org.thymeleaf.engine.TemplateManager.resolveTemplate(TemplateManager.java:870) ~[thymeleaf-3.0.7.RELEASE.jar!/:3.0.7.RELEASE]
at org.thymeleaf.engine.TemplateManager.parseAndProcess(TemplateManager.java:607) ~[thymeleaf-3.0.7.RELEASE.jar!/:3.0.7.RELEASE]
at org.thymeleaf.TemplateEngine.process(TemplateEngine.java:1098) ~[thymeleaf-3.0.7.RELEASE.jar!/:3.0.7.RELEASE]
at org.thymeleaf.TemplateEngine.process(TemplateEngine.java:1072) ~[thymeleaf-3.0.7.RELEASE.jar!/:3.0.7.RELEASE]
......

Any clue about this issue?

streamlined configuration options

Excellent tool, it really provides good insights; we're using it on several production environments because it's such a great addition to our Kafka setup. When we deploy it, we first spin up the H2 db in server mode and use a Python script to execute SQL updates that set all the environment-specific values (clusters, views, users, ...) before starting kafka-webview. This approach runs as part of our automated deploy chain, and updating H2 this way takes just a few seconds to complete, but it is only valid for the 1.0.5-specific H2 backend. A more streamlined approach would be welcome to guard against future updates or model changes.
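For anyone replicating this approach, H2's server mode can be started along these lines (a generic H2 invocation; the jar path is a placeholder for wherever the H2 jar lives):

java -cp h2-*.jar org.h2.tools.Server -tcp -tcpAllowOthers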

Once again, thanks for this really great application!

Support running Kafka Webview behind reverse proxy

I am running Kafka Webview in a Kubernetes cluster and I'm trying to expose it publicly using Kubernetes Ingress (reverse proxy mechanism, backed by traefik).

I want to set up a reverse proxy rewrite rule that translates all external requests that match the path prefix /dashboards/kafka to / on the service that is running Kafka Webview. This works for accessing the index page or the static resources, but the page links are incorrect and redirects fail, because you are using absolute paths in the templates (e.g. the authentication procedure automatically redirects to /login, but should redirect to login so it is externally reachable as /dashboards/kafka/login through the reverse proxy).

Another solution could be to use the Spring Boot server.servlet.context-path configuration property. Using this property I could set a prefix path for Kafka Webview (dashboards/kafka) and let the reverse proxy forward the paths as-is. For this to work, I guess you would have to add the variable ${pageContext.request.contextPath} before all href paths?
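For reference, the context-path approach is a one-line Spring Boot setting (a standard Spring Boot 2 property; the value just mirrors the prefix used above):

server.servlet.context-path=/dashboards/kafka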

Or is there another easy workaround?

ERR_TOO_MANY_REDIRECTS chrome Error

When trying to use start.sh from the release bin package, the app is up and running, but Chrome shows the error ERR_TOO_MANY_REDIRECTS when sslrequire is set to true in the config file.

Consumer Switch to Stream Failed

Hello

Got the following stack trace when I try to switch to the stream view. The browser view is working fine!
And if I try to log out from my default user, I get the Error 500 "Houston, we have a problem :)" page!

Thanks for your help

2019-01-15 16:46:08.816 ERROR 33528 --- [oundChannel-109] .WebSocketAnnotationMethodMessageHandler : Unhandled exception from message handler method

java.lang.NullPointerException: null
at org.sourcelab.kafka.webview.ui.controller.stream.StreamController.getLoggedInUser(StreamController.java:189) ~[classes!/:2.1.2]
at org.sourcelab.kafka.webview.ui.controller.stream.StreamController.getLoggedInUserId(StreamController.java:182) ~[classes!/:2.1.2]
at org.sourcelab.kafka.webview.ui.controller.stream.StreamController.newConsumer(StreamController.java:130) ~[classes!/:2.1.2]
at org.sourcelab.kafka.webview.ui.controller.stream.StreamController$$FastClassBySpringCGLIB$$9d63246f.invoke() ~[classes!/:2.1.2]
at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204) ~[spring-core-5.0.12.RELEASE.jar!/:5.0.12.RELEASE]
at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:746) ~[spring-aop-5.0.12.RELEASE.jar!/:5.0.12.RELEASE]
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163) ~[spring-aop-5.0.12.RELEASE.jar!/:5.0.12.RELEASE]
at org.springframework.transaction.interceptor.TransactionAspectSupport.invokeWithinTransaction(TransactionAspectSupport.java:294) ~[spring-tx-5.0.12.RELEASE.jar!/:5.0.12.RELEASE]
at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:98) ~[spring-tx-5.0.12.RELEASE.jar!/:5.0.12.RELEASE]
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:185) ~[spring-aop-5.0.12.RELEASE.jar!/:5.0.12.RELEASE]
at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:688) ~[spring-aop-5.0.12.RELEASE.jar!/:5.0.12.RELEASE]
at org.sourcelab.kafka.webview.ui.controller.stream.StreamController$$EnhancerBySpringCGLIB$$65d5c314.newConsumer() ~[classes!/:2.1.2]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_191]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_191]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_191]
at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_191]
at org.springframework.messaging.handler.invocation.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:181) ~[spring-messaging-5.0.12.RELEASE.jar!/:5.0.12.RELEASE]
at org.springframework.messaging.handler.invocation.InvocableHandlerMethod.invoke(InvocableHandlerMethod.java:114) ~[spring-messaging-5.0.12.RELEASE.jar!/:5.0.12.RELEASE]
at org.springframework.messaging.handler.invocation.AbstractMethodMessageHandler.handleMatch(AbstractMethodMessageHandler.java:517) [spring-messaging-5.0.12.RELEASE.jar!/:5.0.12.RELEASE]
at org.springframework.messaging.simp.annotation.support.SimpAnnotationMethodMessageHandler.handleMatch(SimpAnnotationMethodMessageHandler.java:495) [spring-messaging-5.0.12.RELEASE.jar!/:5.0.12.RELEASE]
at org.springframework.messaging.simp.annotation.support.SimpAnnotationMethodMessageHandler.handleMatch(SimpAnnotationMethodMessageHandler.java:88) [spring-messaging-5.0.12.RELEASE.jar!/:5.0.12.RELEASE]
at org.springframework.messaging.handler.invocation.AbstractMethodMessageHandler.handleMessageInternal(AbstractMethodMessageHandler.java:475) [spring-messaging-5.0.12.RELEASE.jar!/:5.0.12.RELEASE]
at org.springframework.messaging.handler.invocation.AbstractMethodMessageHandler.handleMessage(AbstractMethodMessageHandler.java:411) [spring-messaging-5.0.12.RELEASE.jar!/:5.0.12.RELEASE]
at org.springframework.messaging.support.ExecutorSubscribableChannel$SendTask.run(ExecutorSubscribableChannel.java:138) [spring-messaging-5.0.12.RELEASE.jar!/:5.0.12.RELEASE]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [na:1.8.0_191]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [na:1.8.0_191]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_191]

Improve 'cluster explorer' pages

Let's improve the 'cluster explorer' pages in the app. I think we can do a better job visualizing a cluster, its topics, and its partition distribution.

Also add some screenshots to the README.

How to increase session time

Hi, I get automatically logged out. It's very irritating when I'm watching a stream and it suddenly stops, and when I refresh the page I get the login page.

How do I increase the user session time?
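If kafka-webview honors the standard Spring Boot servlet settings here (an assumption on my part, not confirmed in this report), the session lifetime would be controlled by a single property, e.g.:

server.servlet.session.timeout=8h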

Ability to Customize LDAP integration

Hey there,

Could you consider modifying the LDAP configuration to provide more advanced custom LDAP search filters or options?

The issue is that the default Spring configuration finds groups under the provided base path with a type of groupOfUniqueNames and looks for uniqueMemberId attributes to match the user.

For example, we don't use the groupOfUniqueNames type for our groups, so the LDAP configuration won't work.

Could it be updated to offer an option of providing the full LDAP search path/filter, without looking up groups/users?

Issue with the start.sh script

Hello,

When one uses the start.sh script, java -jar tries to start the following jar: kafka-webview-ui-2.0.0-javadoc.jar instead of this one: kafka-webview-ui-2.0.0.jar.

See this line: https://github.com/SourceLabOrg/kafka-webview/blob/master/kafka-webview-ui/src/assembly/distribution/start.sh#L20

## launch webapp
java -jar kafka-webview-ui-*.jar $HEAP_OPTS $LOG_OPTS

I had to modify the script as follows in order for the Spring Boot app to start:

## launch webapp
java -jar kafka-webview-ui-2.0.0.jar $HEAP_OPTS $LOG_OPTS
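A more general fix (a sketch, untested against the actual distribution layout) is to exclude the javadoc and sources jars from the glob:

## launch webapp
java -jar $(ls kafka-webview-ui-*.jar | grep -vE '(javadoc|sources)' | head -n 1) $HEAP_OPTS $LOG_OPTS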

search for topic

Hi, your app is really cool. What I miss is the possibility to search for and remove topics. When testing we are dealing with many topics, and this would make life much easier. Thanks!

Error "No serializer found for class..."

When deserializing from Kafka into objects that have no registered Jackson serializer, you get the error below.

We should handle this situation better by telling Jackson to fall back to using that object's toString() method.

Error Could not write JSON: No serializer found for class com.google.protobuf.UnknownFieldSet$Parser and no properties discovered to create BeanSerializer (to avoid exception, disable SerializationFeature.FAIL_ON_EMPTY_BEANS); nested exception is com.fasterxml.jackson.databind.JsonMappingException: No serializer found for class com.google.protobuf.UnknownFieldSet$Parser and no properties discovered to create BeanSerializer (to avoid exception, disable SerializationFeature.FAIL_ON_EMPTY_BEANS) (through reference chain: org.sourcelab.kafka.webview.ui.manager.kafka.dto.KafkaResults["results"]->java.util.Collections$UnmodifiableRandomAccessList[0]->org.sourcelab.kafka.webview.ui.manager.kafka.dto.KafkaResult["value"]
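A minimal sketch of that fallback using Jackson's ToStringSerializer, registered per type (generic code, not the project's actual implementation; UnknownFieldSet is just the example type from the error above and requires the protobuf dependency):

import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.module.SimpleModule;
import com.fasterxml.jackson.databind.ser.std.ToStringSerializer;

public class ToStringFallback {
    public static ObjectMapper build() {
        SimpleModule fallback = new SimpleModule();
        // Render this introspection-less type via its toString() instead of bean serialization.
        fallback.addSerializer(com.google.protobuf.UnknownFieldSet.class, ToStringSerializer.instance);
        return new ObjectMapper().registerModule(fallback);
    }
}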

Docker image

I would love to give this a spin, but it would be amazing if it had a Docker image to quickly get it up and running against existing setups.

auto set offset of view to tail to avoid burrow alert

When using Burrow, it alerts because the webview-managed topic offsets are not updated automatically.

Could you please add a scheduler that consumes the topic to the tail?

BTW, webview does not use a group.id, so I cannot operate on it in management tools. Could you please add an option to allow the admin to set the group.id?

Thanks!
Jiming
