
skuber's People

Contributors

amir, aperkins1310, arkadius, awh21, cgbaker, chessman, chrisbeach, coralbg, dbuschman7, dmitry-erokhin, doriordan, dream1master, dtaniwaki, farmdawgnation, hagay3, harana-bot, hollinwilkins, javierarrieta, jroper, marcuslonnberg, mzeljkovic, ngortheone, nicolasrouquette, nsutcliffe, olivierlemasle, olofwalker, pbarron, s4nk, sakulk, scala-steward


skuber's Issues

Can't seem to get a failed future on delete of non-existent object

If I do effectively this:

val fut = (k8s usingNamespace namespace delete[ConfigMap] name).map { r => Right(r) }.recover {
      case t: Throwable =>
        Left(t)
    }

    println(Await.result(fut, Duration.Inf))

I can repeatedly execute the above, with no Left(...) ever being returned. The first time, of course, it should succeed, but in subsequent invocations, it should raise an exception.

The corresponding code works fine for create (and other CRUD) operations:

//    val fut = (k8s usingNamespace namespace create configMap).map { r => Right(r) }.recover {
//      case t: Throwable =>
//        Left(t)
//    }

Attempting to invoke more than once will return a Left, such as this one:

Left(skuber.api.client.package$K8SException: Status(v1,Status,ListMeta(,,None),Some(Failure),Some(configmaps "foo22" already exists),Some(AlreadyExists),Some({"name":"foo22","kind":"configmaps"}),Some(409)))

In the case of delete, if the object doesn't exist, we see:

[INFO] [01/13/2022 15:30:00.738] [default-akka.actor.default-dispatcher-4] [skuber.api] [Response: non-ok status returned - Status(v1,Status,ListMeta(,,None),Some(Failure),Some(configmaps "foo22" not found),Some(NotFound),Some({"name":"foo22","kind":"configmaps"}),Some(404))
Right(())

Clearly, the non-200 status (404) is returned, but skuber doesn't appear to raise an exception.

I'm using:

"io.github.hagay3" %% "skuber" % "2.7.0",

Refresh EKS tokens.

The skuber client appears to have no way to refresh EKS tokens, unless I am missing something.
If I am not missing anything, we need a way to do this.

The following is from AWS....

"Hello,

We have identified applications running in one or more of your Amazon EKS clusters that are not refreshing service account tokens. Applications making requests to Kubernetes API server with expired tokens will fail. You can resolve the issue by updating your application and its dependencies to use newer versions of Kubernetes client SDK that automatically refreshes the tokens.

What is the problem?

Kubernetes version 1.21 graduated BoundServiceAccountTokenVolume feature [1] to beta and enabled it by default. This feature improves security of service account tokens by requiring a one hour expiry time, over the previous default of no expiration. This means that applications that do not refetch service account tokens periodically will receive an HTTP 401 unauthorized error response on requests to Kubernetes API server with expired tokens. You can learn more about the BoundServiceAccountToken feature in EKS Kubernetes 1.21 release notes [2].

To enable a smooth migration of applications to the newer time-bound service account tokens, EKS v1.21+ extends the lifetime of service account tokens to 90 days. Applications on EKS v1.21+ clusters that make API server requests with tokens that are older than 90 days will receive an HTTP 401 unauthorized error response.

How can you resolve the issue?

To make the transition to time bound service account tokens easier, Kubernetes has updated the below official versions of client SDKs to automatically refetch tokens before the one hour expiration:

  • Go v0.15.7 and later
  • Python v12.0.0 and later
  • Java v9.0.0 and later
  • Javascript v0.10.3 and later
  • Ruby master branch
  • Haskell v0.3.0.0

We recommend that you update your application and its dependencies to use one of the above client SDK versions if you are on an older version.

While not an exhaustive list, the below AWS components have been updated to use the newer Kubernetes client SDKs that automatically refetches the token :

  • Amazon VPC CNI: v1.8.0 and later
  • CoreDNS: v1.8.4 and later
  • AWS Load Balancer Controller: v2.0.0 and later
  • kube-proxy: v1.21.2-eksbuild.2 and later

You can find instructions for upgrading Amazon VPC CNI, CoreDNS, AWS Load Balancer Controller and kube-proxy in EKS documentation at [3], [4], [5] and [6] respectively. Please check EKS documentation [2] for list of components and the versions that have addressed this issue.

How can I identify service accounts using stale tokens?

As of April 20th 2022, we have identified the below service accounts attached to pods in one or more of your EKS clusters using stale (older than 1 hour) tokens. Service accounts are listed in the format : |namespace:serviceaccount

..."

Http client settings taken from ActorSystem settings

Is your feature request related to a problem? Please describe.
Currently, when somebody wants to configure the HTTP client, they have to configure it at the root ActorSystem settings level. This is inconvenient for applications that:

  • use multiple skuber clients with different configurations
  • have a more complex root application configuration, e.g. nested configuration with the skuber client configuration at some deeper level

Describe the solution you'd like
The problem is in the places that use actorSystem.settings.config, e.g.: https://github.com/hagay3/skuber/blob/master/client/src/main/scala/skuber/api/client/impl/KubernetesClientImpl.scala#L719 . What do you think about changing actorSystem.settings.config to appConfig.getConfig(skuberKeyPath)? That way this configuration would be isolated, similar to the way e.g. an akka.dispatcher is resolved.
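
For illustration, a minimal sketch of the proposed lookup (the key path and fallback behaviour are assumptions, not current skuber behaviour):

import com.typesafe.config.{Config, ConfigFactory}

// Sketch: resolve the client's HTTP settings from a dedicated config subtree instead of
// actorSystem.settings.config, so each skuber client instance can carry its own settings.
// "my-app.skuber" is a hypothetical key path.
val appConfig: Config = ConfigFactory.load()
val skuberKeyPath = "my-app.skuber"
val clientConfig: Config =
  if (appConfig.hasPath(skuberKeyPath)) appConfig.getConfig(skuberKeyPath).withFallback(appConfig)
  else appConfig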

If you agree with that, I can prepare PR.

Add namespace parameter to every method in KubernetesClient as an optional parameter

Instead of using the following api for switching between namespaces

def usingNamespace(newNamespace: String): KubernetesClient

it would be better to add an optional parameter that is passed down to the client and drives the right HTTP call, for example:

def delete[O <: ObjectResource](name: String, gracePeriodSeconds: Int = -1, namespace: Option[String] = None)(implicit rd: ResourceDefinition[O], lc: LoggingContext): Future[Unit]
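
For comparison, hypothetical usage of the proposed signature next to today's approach (the namespace parameter does not exist yet):

// proposed (hypothetical):
k8s.delete[ConfigMap]("my-config", namespace = Some("team-a"))

// today, with the existing API:
k8s.usingNamespace("team-a").delete[ConfigMap]("my-config")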

NoClassDefFoundError for BouncyCastle with Skuber 2.7.1

Hi,

I've recently been working on replacing an older version of Skuber (2.6.0, from the doriordan fork) with 2.7.1 (I'm especially interested in some changes related to resource watches).

When building my app with the 2.7.1 release, I get a NoClassDefFoundError at runtime, related to BouncyCastle:

Exception in thread "main" java.lang.NoClassDefFoundError: org/bouncycastle/jce/provider/BouncyCastleProvider
	at skuber.api.security.TLS$.$anonfun$getTrustManagers$1(TLS.scala:60)
	at scala.Option.map(Option.scala:242)
	at skuber.api.security.TLS$.getTrustManagers(TLS.scala:59)
	at skuber.api.security.TLS$.buildSSLContext(TLS.scala:47)
	at skuber.api.security.TLS$.establishSSLContext(TLS.scala:36)
	at skuber.api.client.impl.KubernetesClientImpl$.apply(KubernetesClientImpl.scala:748)
	at skuber.api.client.package$.init(package.scala:218)
	at skuber.api.client.package$.init(package.scala:208)
	at skuber.api.client.package$.init(package.scala:196)
	at skuber.package$.k8sInit(package.scala:288)
	at scala.Function0.apply$mcV$sp(Function0.scala:39)
	at scala.Function0.apply$mcV$sp$(Function0.scala:39)
	at scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:17)
	at scala.App.$anonfun$main$1(App.scala:76)
	at scala.App.$anonfun$main$1$adapted(App.scala:76)
	at scala.collection.IterableOnceOps.foreach(IterableOnce.scala:563)
	at scala.collection.IterableOnceOps.foreach$(IterableOnce.scala:561)
	at scala.collection.AbstractIterable.foreach(Iterable.scala:919)
	at scala.App.main(App.scala:76)
	at scala.App.main$(App.scala:74)
Caused by: java.lang.ClassNotFoundException: org.bouncycastle.jce.provider.BouncyCastleProvider
	at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
	... 25 more

I think this might be fixed by the latest commit in master (5b28e55), but I haven't had the chance to try and build and run the code.

Are there any workarounds so we can get the current 2.7.1 release working, or is it better just to wait for the next release? I tried explicitly including bouncycastle as a dependency, but that then leads to a host of other issues and I think it's not really the way to go.
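
For reference, this is roughly what explicitly including BouncyCastle looks like in sbt; the versions are assumptions, and as noted above this route led to other issues here:

// build.sbt (illustrative only; skuber 2.7.x may expect different artifacts/versions)
libraryDependencies ++= Seq(
  "org.bouncycastle" % "bcprov-jdk15on" % "1.70",
  "org.bouncycastle" % "bcpkix-jdk15on" % "1.70"
)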

Thank you and thanks for working on this project!

Automatically generate ObjectResource case classes from CRDs

Is your feature request related to a problem? Please describe.
You can already create ObjectResource case classes manually as described here. These days it's common to have many operators and their CRDs installed in a k8s cluster, which makes it quite cumbersome to manually create those often deeply nested structures.

Describe the solution you'd like
To me it seems the only easy way out (at least from the user's point of view) is to have some kind of code generation in place which translates those YAML-based CRDs (EventBus example) into simple case classes. I quite like the approach ScalablyTyped takes: it's an sbt plugin that takes TypeScript and converts it to Scala.js facades, so you can use JavaScript libraries in a typed way directly from Scala.js. This all happens at build time, so there's no need to commit the generated classes into a VCS. For this to work you'd have to define all the CRD dependency URLs in your build file (similar to the NPM dependencies for ScalablyTyped). In theory, just linking to the (often existing) manifests/kustomization.yaml of a k8s application (EventBus example) would be sufficient.

Additional context
I'm currently using kustomize with ArgoCD to follow GitOps principles. It works great, but with increasing cluster complexity kustomize becomes harder and harder to maintain. With the CRD generation feature in place, it should be possible to describe the whole k8s cluster using Skuber, export it as one big YAML, and feed it back into ArgoCD. IaC using typed Scala code would be so awesome :)

too old resource version

Using 2.7.3, watchContinuously(...) throws this exception:

Restarting stream due to failure [1]: skuber.api.client.package$K8SException: Status(v1,Status,ListMeta(,,None),Some(Failure),Some(too old resource version: 762700013 (762702363)),Some(Expired),None,Some(410))
skuber.api.client.package$K8SException: Status(v1,Status,ListMeta(,,None),Some(Failure),Some(too old resource version: 762700013 (762702363)),Some(Expired),None,Some(410))
	at skuber.api.watch.BytesToWatchEventSource$.$anonfun$apply$1(BytesToWatchEventSource.scala:22)
	at akka.stream.impl.fusing.Map$$anon$1.onPush(Ops.scala:52)
	at akka.stream.impl.fusing.GraphInterpreter.processPush(GraphInterpreter.scala:542)
	at akka.stream.impl.fusing.GraphInterpreter.execute(GraphInterpreter.scala:423)
	at akka.stream.impl.fusing.GraphInterpreterShell.runBatch(ActorGraphInterpreter.scala:650)
	at akka.stream.impl.fusing.GraphInterpreterShell$AsyncInput.execute(ActorGraphInterpreter.scala:521)
	at akka.stream.impl.fusing.GraphInterpreterShell.processEvent(ActorGraphInterpreter.scala:625)
	at akka.stream.impl.fusing.ActorGraphInterpreter.akka$stream$impl$fusing$ActorGraphInterpreter$$processEvent(ActorGraphInterpreter.scala:800)
	at akka.stream.impl.fusing.ActorGraphInterpreter$$anonfun$receive$1.applyOrElse(ActorGraphInterpreter.scala:818)
	at akka.actor.Actor.aroundReceive(Actor.scala:537)
	at akka.actor.Actor.aroundReceive$(Actor.scala:535)
	at akka.stream.impl.fusing.ActorGraphInterpreter.aroundReceive(ActorGraphInterpreter.scala:716)
	at akka.actor.ActorCell.receiveMessage(ActorCell.scala:580)
	at akka.actor.ActorCell.invoke(ActorCell.scala:548)
	at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:270)
	at akka.dispatch.Mailbox.run(Mailbox.scala:231)
	at akka.dispatch.Mailbox.exec(Mailbox.scala:243)
	at java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:290)
	at java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1020)
	at java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1656)
	at java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1594)
	at java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:183)

Using 2.6.x we don't see this issue, and we don't see a reason why this K8SException is being thrown. We are not using Job or Pod TTL, and we only delete the job/pod after watchContinuously returns success or failure. There might be a race condition in when Skuber creates the job and associates the resourceVersion: the watch appears, for some reason, to be seeing an older version of the resource when it should not.

Here is the job.

...
    val job = Job(_jobId).withTemplate(templateSpec)
      .withActiveDeadlineSeconds(activeDeadlineSeconds)
      .withCompletions(1)
      .withParallelism(1)
      .withBackoffLimit(backOffLimit)

Here is the watch.

...
val stream: (UniqueKillSwitch, Future[Done]) = RestartSource.withBackoff(
      restartSettings.withMaxRestarts(3, 30.seconds))(
      () => _kubernetesClient.watchContinuously[Job](_jobId))
      .viaMat(KillSwitches.single)(Keep.right)
      .toMat(podPhaseMonitor)(Keep.both)
      .run()
...
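
One hedged workaround sketch, assuming the consumer can tolerate a gap in events: when the watch fails with the 410 "Expired" status, open a fresh watch from the server's current resourceVersion instead of the stale one. The names here are illustrative, not taken from our code above:

import akka.NotUsed
import akka.stream.scaladsl.Source
import skuber.api.client.{K8SException, KubernetesClient, WatchEvent}
import skuber.batch.Job
import skuber.json.batch.format._

// Restart the watch on 410 Expired; -1 means retry indefinitely.
def resilientJobWatch(client: KubernetesClient, jobName: String): Source[WatchEvent[Job], _] =
  client.watchContinuously[Job](jobName)
    .recoverWithRetries(-1, {
      case e: K8SException if e.status.code.contains(410) =>
        client.watchContinuously[Job](jobName).mapMaterializedValue(_ => NotUsed)
    })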

Support for Bound Service Account Token Volume for In-Cluster communication (kubernetes 1.22)

Is your feature request related to a problem? Please describe.
When running with in-cluster configuration on Kubernetes 1.22, the service account token mounted on the pod's filesystem expires and is refreshed periodically by the kubelet. Skuber does not take this into account: it caches the token and keeps using it after it has expired.

Describe the solution you'd like
Skuber needs to periodically reload the token from the filesystem.

Describe alternatives you've considered
This is a change in how service account tokens behave in Kubernetes after version 1.21. This feature is required, or skuber won't work for in-cluster communication on Kubernetes 1.22.

Additional context
Resources about the Bound Service Account Token Volume feature
GCloud - kubernetes-bound-service-account-tokens
service-accounts-admin/#bound-service-account-token-volume
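
A minimal sketch of the behaviour being asked for, outside of skuber itself: re-read the projected token from disk once the cached copy is older than a threshold, instead of caching it for the lifetime of the client. The path and refresh interval are the usual in-cluster defaults but should be treated as assumptions:

import java.nio.charset.StandardCharsets
import java.nio.file.{Files, Paths}
import java.time.{Duration, Instant}

// Re-reads the kubelet-refreshed service account token when the cached copy gets stale.
final class FileTokenProvider(
    path: String = "/var/run/secrets/kubernetes.io/serviceaccount/token",
    maxAge: Duration = Duration.ofMinutes(5)) {

  @volatile private var cached: Option[(Instant, String)] = None

  def token: String = cached match {
    case Some((readAt, tok)) if Instant.now.isBefore(readAt.plus(maxAge)) => tok
    case _ =>
      val tok = new String(Files.readAllBytes(Paths.get(path)), StandardCharsets.UTF_8).trim
      cached = Some((Instant.now, tok))
      tok
  }
}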

akka.actor.ActorInitializationException when trying to send API requests

Hi,

This is a follow-up to a previous issue (#59).

After updating skuber to 2.7.2, all API requests sent by Skuber fail with an Akka initialization exception. I've tried this inside a Kubernetes cluster, with default configuration settings. Full log of my dummy app:

[INFO] [07/29/2021 09:28:18.485] [main] [skuber.api] Using following context for connecting to Kubernetes cluster: Context(Cluster(v1,https://192.168.0.1:443,false,Some(Left(/var/run/secrets/kubernetes.io/serviceaccount/ca.crt))),TokenAuth(token=<redacted>),Namespace(Namespace,v1,ObjectMeta(test-ns,,,,,,None,None,None,Map(),Map(),List(),0,None,None),None,None))
[INFO] [07/29/2021 09:28:19.382] [main] [skuber.api] [ { reqId=d69cff65-6a8b-4515-a6bb-47acb4c297c7} } - about to send HTTP request: GET https://192.168.0.1/api/v1/namespaces/test-ns/configmaps/test-config]
[ERROR] [07/29/2021 09:28:19.781] [default-akka.actor.default-dispatcher-5] [akka://default/system/Materializers/StreamSupervisor-0/TLS-for-flow-1-1] Client/Server mode has not yet been set.
akka.actor.ActorInitializationException: akka://default/system/Materializers/StreamSupervisor-0/TLS-for-flow-1-1: exception during creation
	at akka.actor.ActorInitializationException$.apply(Actor.scala:196)
	at akka.actor.ActorCell.create(ActorCell.scala:664)
	at akka.actor.ActorCell.invokeAll$1(ActorCell.scala:514)
	at akka.actor.ActorCell.systemInvoke(ActorCell.scala:536)
	at akka.dispatch.Mailbox.processAllSystemMessages(Mailbox.scala:295)
	at akka.dispatch.Mailbox.run(Mailbox.scala:230)
	at akka.dispatch.Mailbox.exec(Mailbox.scala:243)
	at java.base/java.util.concurrent.ForkJoinTask.doExec(Unknown Source)
	at java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(Unknown Source)
	at java.base/java.util.concurrent.ForkJoinPool.scan(Unknown Source)
	at java.base/java.util.concurrent.ForkJoinPool.runWorker(Unknown Source)
	at java.base/java.util.concurrent.ForkJoinWorkerThread.run(Unknown Source)
Caused by: java.lang.IllegalStateException: Client/Server mode has not yet been set.
	at java.base/sun.security.ssl.SSLEngineImpl.beginHandshake(Unknown Source)
	at akka.stream.impl.io.TLSActor.<init>(TLSActor.scala:165)
	at akka.stream.impl.io.TLSActor$.$anonfun$props$1(TLSActor.scala:40)
	at akka.actor.TypedCreatorFunctionConsumer.produce(IndirectActorProducer.scala:91)
	at akka.actor.Props.newActor(Props.scala:226)
	at akka.actor.ActorCell.newActor(ActorCell.scala:616)
	at akka.actor.ActorCell.create(ActorCell.scala:643)
	... 10 more

Note that this works if I downgrade Skuber to 2.7.0.

My build.sbt file is quite simple, only Skuber and a few extra things:

libraryDependencies += "io.github.hagay3"           %% "skuber"          % "2.7.2"
libraryDependencies += "ch.qos.logback"              % "logback-classic" % "1.2.3"
libraryDependencies += "com.typesafe.scala-logging" %% "scala-logging"   % "3.9.3"
libraryDependencies += "com.github.pureconfig"      %% "pureconfig"      % "0.16.0"

assembly / assemblyJarName := "dummy-skuber.jar"

ThisBuild / assemblyMergeStrategy := {
  case PathList("module-info.class")         => MergeStrategy.discard
  case x if x.endsWith("/module-info.class") => MergeStrategy.discard
  case x =>
    val oldStrategy = (ThisBuild / assemblyMergeStrategy).value
    oldStrategy(x)
}

The actual code doesn't do much, just tries to retrieve a ConfigMap and then start a watch:

private def run(appConfig: AppConfig): Unit = {
    val k8s                = k8sInit
    val getConfigMapFuture = k8s.getInNamespace[ConfigMap](appConfig.configMapName, appConfig.namespace)
    getConfigMapFuture.onComplete {
      case Success(configMap) =>
        logger.info(s"Succesfully read ConfigMap: $configMap")
        k8s.watchContinuously(configMap).runWith(configMapMonitor)
      case Failure(e) =>
        logger.warn(s"Failed to read ConfigMap, due to: $e")
    }
  }

Note that it doesn't even create the Future (so it doesn't reach the Failure branch in onComplete).

Let me know if there's anything else I can try! I've run out of ideas.

Thanks!

skuber-light

skuber is not a light library at the moment, considering it includes Akka HTTP, Akka Streams, Play JSON and the AWS SDK.

Creating skuber-light would increase flexibility and remove unneeded dependencies for simple use cases.
skuber-light could share code with the main skuber library and include far fewer features.

Creating custom resource object from yaml

Hello -

How can we create objects from YAML?
I can create an Argo EventBus custom resource using kubectl apply -f test-eventbus.yaml.
How can I create the same object programmatically using the skuber API?

---- test-eventbus.yaml -----
apiVersion: argoproj.io/v1alpha1
kind: EventBus
metadata:
  name: default
  namespace: test1-event-bus
spec:
  nats:
    native: {}

Thank you!
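
For reference, a hedged sketch of the programmatic equivalent of the kubectl apply above, following skuber's documented CustomResource support. The spec and status case classes are simplified assumptions about the EventBus schema, not its full definition:

import akka.actor.ActorSystem
import play.api.libs.json.{Format, Json}
import skuber._

object EventBusExample extends App {
  // Simplified, assumed shape of the CRD's spec/status
  case class NatsSpec(native: Option[Map[String, String]] = Some(Map.empty))
  case class EventBusSpec(nats: NatsSpec)
  case class EventBusStatus(phase: Option[String] = None)

  type EventBus = CustomResource[EventBusSpec, EventBusStatus]

  implicit val natsFmt: Format[NatsSpec] = Json.format[NatsSpec]
  implicit val specFmt: Format[EventBusSpec] = Json.format[EventBusSpec]
  implicit val statusFmt: Format[EventBusStatus] = Json.format[EventBusStatus]

  implicit val eventBusDefinition: ResourceDefinition[EventBus] = ResourceDefinition[EventBus](
    group = "argoproj.io",
    version = "v1alpha1",
    kind = "EventBus"
  )

  implicit val system: ActorSystem = ActorSystem()
  val k8s = k8sInit

  // Equivalent of the YAML: metadata.name=default in namespace test1-event-bus, spec.nats.native={}
  val eventBus: EventBus =
    CustomResource[EventBusSpec, EventBusStatus](EventBusSpec(NatsSpec())).withName("default")

  k8s.usingNamespace("test1-event-bus").create(eventBus) // returns Future[EventBus]
}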

Unable to connect to a GKE cluster from the local machine

I created a cluster in GKE. Then, I tried to run an application locally to connect to the cluster, and it failed. The in-cluster method works fine.

here is the error:

java.util.NoSuchElementException: None.get
	at scala.None$.get(Option.scala:627)
	at scala.None$.get(Option.scala:626)
	at skuber.api.Configuration$.valueAt$1(Configuration.scala:118)
	at skuber.api.Configuration$.authProviderRead$1(Configuration.scala:180)
	at skuber.api.Configuration$.toK8SAuthInfo$1(Configuration.scala:189)
	at skuber.api.Configuration$.$anonfun$parseKubeconfigStream$9(Configuration.scala:209)
	at skuber.api.Configuration$.$anonfun$parseKubeconfigStream$6(Configuration.scala:154)
	at scala.collection.StrictOptimizedIterableOps.map(StrictOptimizedIterableOps.scala:100)
	at scala.collection.StrictOptimizedIterableOps.map$(StrictOptimizedIterableOps.scala:87)
	at scala.collection.convert.JavaCollectionWrappers$JListWrapper.map(JavaCollectionWrappers.scala:138)
	at skuber.api.Configuration$.topLevelYamlToK8SConfigMap$1(Configuration.scala:154)
	at skuber.api.Configuration$.$anonfun$parseKubeconfigStream$1(Configuration.scala:209)
	at scala.util.Try$.apply(Try.scala:210)
	at skuber.api.Configuration$.parseKubeconfigStream(Configuration.scala:104)
	at skuber.api.Configuration$.$anonfun$parseKubeconfigFile$2(Configuration.scala:95)
	at scala.util.Success.flatMap(Try.scala:258)
	at skuber.api.Configuration$.parseKubeconfigFile(Configuration.scala:94)
	at skuber.api.Configuration$.$anonfun$defaultK8sConfig$3(Configuration.scala:308)
	at scala.util.Failure.orElse(Try.scala:221)
	at skuber.api.Configuration$.$anonfun$defaultK8sConfig$2(Configuration.scala:308)
	at scala.Option.getOrElse(Option.scala:201)
	at skuber.api.Configuration$.defaultK8sConfig(Configuration.scala:307)
	at skuber.api.client.package$.defaultK8sConfig(package.scala:218)
	at skuber.api.client.package$.init(package.scala:193)
	at skuber.package$.k8sInit(package.scala:288)

To Reproduce

  1. Create a cluster:

PROJECT=dataedu
CLUSTER_NAME=<your-cluster-name>
gcloud auth application-default login --project=$PROJECT
gcloud config set project $PROJECT
gcloud container clusters create $CLUSTER_NAME --project=$PROJECT
export USE_GKE_GCLOUD_AUTH_PLUGIN=True
gcloud container clusters get-credentials $CLUSTER_NAME --zone=us-central1-a --project=$PROJECT

  2. Connect to the cluster:

implicit val system: ActorSystem = ActorSystem()
implicit val dispatcher: ExecutionContextExecutor = system.dispatcher

val k8s = k8sInit

Expected behavior
Be able to connect to GKE cluster from local machine same as in-cluster

skuber version
2.6.7

kubernetes server version

❯ kubectl version
Client Version: v1.28.2
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.26.7-gke.500

Cloud Platform
GCP

Additional context

When it tries to create the client, it's looking for the following structure:

- name: 
  user:
    auth-provider:
      config:
        access-token: 
        cmd-args: 
        cmd-path: 
        expiry: 
        expiry-key: '{.credential.token_expiry}'
        token-key: '{.credential.access_token}'
      name: gcp

but newer versions of gcloud no longer support this format and instead produce the following:

- name: 
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args: null
      command: gke-gcloud-auth-plugin
      env:
      installHint: 
      interactiveMode: IfAvailable
      provideClusterInfo: true

So either I have a configuration issue, or the new exec-based authentication method is not supported in skuber.
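
A hedged workaround sketch (not built-in skuber behaviour): bypass kubeconfig parsing entirely, run the GKE exec plugin yourself, and build the skuber Configuration by hand. The server URL and CA path are placeholders for your cluster's values, and the Configuration field names assume skuber 2.x:

import akka.actor.ActorSystem
import play.api.libs.json.Json
import skuber._
import skuber.api.Configuration
import skuber.api.client.{Cluster, Context, TokenAuth}
import scala.sys.process._

object GkeManualAuth extends App {
  implicit val system: ActorSystem = ActorSystem()

  // gke-gcloud-auth-plugin prints an ExecCredential JSON document on stdout
  val execCredential = "gke-gcloud-auth-plugin".!!
  val token = (Json.parse(execCredential) \ "status" \ "token").as[String]

  val cluster = Cluster(
    server = "https://<your-gke-endpoint>",
    certificateAuthority = Some(Left("/path/to/cluster-ca.crt")) // or Right(bytes) for inline CA data
  )
  val context = Context(cluster = cluster, authInfo = TokenAuth(token))

  val k8s = k8sInit(Configuration(
    clusters = Map("gke" -> cluster),
    contexts = Map("gke" -> context),
    currentContext = context
  ))
}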

Add ability to delete objects in namespace

Current methods "delete" and "deleteWithOptions" will take "default" namespace from the context.
Either add new param to these methods or new signature , i.e. "deleteInNamespace" and "deleteInNamespaceWithOptions".
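
For reference, the workaround available today with the existing API, next to what the requested addition might look like (the latter is hypothetical):

// today: bind a client to the target namespace, then delete
k8s.usingNamespace("team-a").delete[ConfigMap]("my-config")

// requested (hypothetical signature):
// k8s.deleteInNamespace[ConfigMap]("my-config", "team-a")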
