teamhephy / workflow
Hephy Workflow - An open source fork of Deis Workflow - The open source PaaS for Kubernetes.
License: MIT License
From @chaintng on July 21, 2017 4:49
I use Google Container Engine and recently upgraded the Kubernetes cluster to 1.7.0.
However, I get this error when I try to set autoscale for my application:
> deis autoscale:set web --min=15 --max=50 --cpu=60 -a MY-APP
Applying autoscale settings for process type web on MY-APP... Error: Unknown Error (400): {"detail":"Invalid version: '1.7+'"}
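The trailing "+" in '1.7+' is how GKE reports its patched server version; a plausible reading of the 400 is that a strict version parser rejects that suffix. A minimal sketch of the tolerant handling (hypothetical, not the controller's actual code):

```shell
# GKE reports the server minor version with a trailing "+" (e.g. "1.7+"),
# which a strict version parser rejects; stripping the suffix first yields
# a string that parses cleanly.
raw="1.7+"
clean="${raw%+}"   # shell suffix removal
echo "$clean"      # -> 1.7
```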
Copied from original issue: deis/workflow#841
From @jmspring on January 19, 2017 20:31
At the Azure / Deis hackfest in December, there was time to work on additional enhancements. One such enhancement was a PoC for adding support for Let's Encrypt certificates (to enable TLS) when new applications are deployed to Workflow.
The initial PoC takes on two parts:
1. An application that makes periodic checks for new applications: https://github.com/jmspring/deis_certificate_manager
2. A worker process that (1) spawns to do the actual Let's Encrypt/ACME handshake: https://github.com/jmspring/deis_cert_letsencrypt_worker
This is just a PoC and real support should directly work with the ACME protocol and not necessarily shell out another process to do the handshake.
Further, some research into limitations in/around Let's Encrypt certificate issuance policies (limits per domain, etc) should be considered.
Also, the PoC only retrieves the initial certificate. Let's Encrypt certs have a ~90-day lifetime, so renewal, as well as revocation as apps are removed, should be considered as well.
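The renewal check mentioned above can be sketched with plain openssl; this uses a throwaway self-signed cert as a stand-in for a Let's Encrypt cert (paths and the 30-day threshold are illustrative):

```shell
# Generate a 90-day self-signed cert (stand-in for a Let's Encrypt cert),
# then ask whether it expires within the next 30 days.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/le-demo.key \
  -out /tmp/le-demo.crt -days 90 -subj "/CN=example.com" 2>/dev/null
if openssl x509 -checkend $((30 * 24 * 3600)) -noout -in /tmp/le-demo.crt >/dev/null; then
  status="ok"      # more than 30 days of validity left
else
  status="renew"   # time to renew
fi
echo "$status"     # -> ok
```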
@mboersma - Jason Hansen mentioned you were Workflow lead.
Copied from original issue: deis/workflow#708
From @carraher on January 9, 2017 17:40
Need the ability to add root certificate authorities to containers that want to access object storage (database, builder, registry). This is needed to host secure HTTPS on-prem object storage that is signed by a non-public CA.
Currently, object storage served over HTTPS with a certificate signed by a non-public CA results in:
ssl.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:645)
The only workaround is skipping verification entirely (the equivalent of curl -k).
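One possible shape for this, sketched as a hypothetical pod-spec fragment (the ConfigMap name and mount path are illustrative, not actual Workflow chart values): mount the private CA's certificate into the affected component's trust store.

```yaml
# Hypothetical fragment: surface an internal CA cert to a component so its
# TLS client can verify the on-prem object storage endpoint.
spec:
  containers:
  - name: registry
    volumeMounts:
    - name: internal-ca
      mountPath: /usr/local/share/ca-certificates/internal-ca.crt
      subPath: internal-ca.crt
  volumes:
  - name: internal-ca
    configMap:
      name: internal-ca   # created beforehand from the private CA's cert
```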
Copied from original issue: deis/workflow#690
From @miracle2k on July 19, 2017 0:13
In https://deis.com/docs/workflow/managing-workflow/tuning-component-settings/#setting-resource-limits it says I can use BUILDER_POD_NODE_SELECTOR to target specific nodes. How do I actually provide this environment variable? Can I set it through helm/values.yaml?
As far as I can tell, if I want a host that is exclusively responsible for building, I need to taint the node and assign tolerations to the builder pods; this is not supported yet though, or is it?
The limits_cpu and limits_memory settings I can apply to the "builder component", but that is distinct from the builder pods, right? Is there a way to resource-limit them?
I am asking this because I have had repeated trouble with the builders taking down other containers, I can only assume due to resource usage. My nodes are, admittedly, not the most powerful ones.
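For reference, the taint half of the approach described above looks like this (node name and taint key are illustrative; the builder pods would still need a matching toleration, which is the part Workflow doesn't expose yet):

```shell
# Reserve a node for builds: only pods tolerating this taint will schedule there.
kubectl taint nodes build-node-1 dedicated=builder:NoSchedule
```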
Copied from original issue: deis/workflow#840
From @mboersma on February 15, 2017 23:38
Application deployment should be triggerable via GitHub webhooks. Allow operators and developers to configure Workflow as a webhook endpoint. Workflow should trigger deploys upon release and update the build status on success or failure with the release number.
Copied from original issue: deis/workflow#739
From @mboersma on March 8, 2017 15:55
In the Azure Quick Start docs, we suggest running this command to discover what locations exist for container services:
$ az account list-locations --query [].name --output tsv
eastasia
southeastasia
centralus
eastus
eastus2
westus
northcentralus
southcentralus
northeurope
westeurope
westus2
....
However, many of those locations will later fail when trying to actually create the service:
$ az acs create --resource-group="${AZURE_RG_NAME}" --location="${AZURE_DC_LOCATION}" \
> --orchestrator-type=kubernetes --master-count=1 --agent-count=1 \
> --agent-vm-size="Standard_D2_v2" \
> --admin-username="matt" \
> --name="${AZURE_SERVICE_NAME}" --dns-prefix="mboersma" \
> --ssh-key-value @/Users/matt/.ssh/id_rsa.pub
creating service principal.........done
waiting for AAD role to propagate.done
The provided location 'westus2' is not available for resource type 'Microsoft.ContainerService/containerServices'. List of available regions for the resource type is 'japaneast,centralus,eastus2,japanwest,eastasia,southcentralus,australiaeast,australiasoutheast,brazilsouth,southeastasia,westus,northcentralus,westeurope,northeurope,eastus'.
The first command should be replaced with one that lists only the appropriate regions.
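Until the docs are fixed, a query against the resource provider itself lists only the usable regions (real az syntax, though the exact output shape can vary by CLI version):

```shell
# List only the locations where Microsoft.ContainerService/containerServices
# can actually be created, instead of all account locations.
az provider show --namespace Microsoft.ContainerService \
  --query "resourceTypes[?resourceType=='containerServices'].locations" \
  --output tsv
```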
Copied from original issue: deis/workflow#761
From @rhenretta on May 30, 2017 18:42
Running deis workflow v2.14.0 on kubernetes v1.6.4 built with kops.
Used CNI options when deploying deis workflow:
use_cni: true
registry_proxy_bind_addr: "127.0.0.1:5555"
I tried deploying the example-go chart
$ deis create health-check --no-remote
Creating Application... done, created health-check
If you want to add a git remote for this app later, use `deis git:remote -a health-check`
$ deis pull deis/example-go:latest -a health-check
Creating build... Error: Unknown Error (400): {"detail":"(app::deploy): rpc error: code = 2 desc = Error while pulling image: Get http://127.0.0.1:5555/v1/repositories/health-check/images: dial tcp 127.0.0.1:5555: getsockopt: connection refused"}
$ deis pull deis/example-go:latest -a health-check
Creating build... done
I checked the logs on all the deis-registry-proxy pods. Looks like 1/3 can't see the registry come online:
2017-05-30T18:22:16.691091782Z waiting for the registry (100.71.45.99:80) to come online...
2017-05-30T18:24:24.946897342Z waiting for the registry (100.71.45.99:80) to come online...
2017-05-30T18:26:33.330851271Z waiting for the registry (100.71.45.99:80) to come online...
2017-05-30T18:28:41.586901797Z waiting for the registry (100.71.45.99:80) to come online...
2017-05-30T18:30:49.971669483Z waiting for the registry (100.71.45.99:80) to come online...
one came online as expected:
2017-05-30T16:07:31.983198358Z waiting for the registry (100.71.45.99:80) to come online...
2017-05-30T16:07:34.083123317Z waiting for the registry (100.71.45.99:80) to come online...
2017-05-30T16:07:35.27908879Z starting registry-proxy...
and the last shows the output from the successful pull
2017-05-30T18:29:27.249769638Z 127.0.0.1 - - [30/May/2017:18:29:27 +0000] "GET /v2/ HTTP/1.1" 200 2 "-" "docker/1.12.6 go/go1.6.4 git-commit/78d1802 kernel/4.4.65-k8s os/linux arch/amd64 UpstreamClient(docker-py/1.10.6)" "-"
2017-05-30T18:29:27.343427138Z 127.0.0.1 - - [30/May/2017:18:29:27 +0000] "GET /v2/health-check/manifests/v3 HTTP/1.1" 200 1146 "-" "docker/1.12.6 go/go1.6.4 git-commit/78d1802 kernel/4.4.65-k8s os/linux arch/amd64 UpstreamClient(docker-py/1.10.6)" "-"
Copied from original issue: deis/workflow#815
From @mirague on March 1, 2017 9:20
For a variety of implementations it's necessary to attach annotations to Pods (kube2iam, k8s alpha features e.g. Node Affinity/Anti-Affinity, etc.). Normally one would add the annotation to the Deployment's .spec.template.metadata.annotations, resulting in each Pod carrying the proper annotation.
However, with Deis Workflow this is currently impossible to achieve, or at least to persist. One can run kubectl edit deployment myapp-dev-cmd -n myapp-dev and add the desired annotation to .spec.template.metadata.annotations, but this would be lost once a new version of the app is released.
Can we allow for a way to annotate Pods? Either through a deis command, or by at least persisting the Deployment's .spec.template.metadata.annotations when a new version is rolled out. This would allow for the combined use of existing k8s implementations with Deis Workflow.
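For context, the temporary workaround can also be applied non-interactively; the annotation key here is just an example for kube2iam, and as noted, a new release overwrites it:

```shell
# Patch the pod template directly; the next Workflow release will overwrite this.
kubectl patch deployment myapp-dev-cmd -n myapp-dev --patch \
  '{"spec":{"template":{"metadata":{"annotations":{"iam.amazonaws.com/role":"my-app-role"}}}}}'
```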
Copied from original issue: deis/workflow#747
From @vdice on March 8, 2017 21:12
In order to switch over from our in-house registry-proxy to the official/upstream kube-registry-proxy (as the original PR deis/workflow#734 proposed), we will need to sort out the following issue when upgrading.
v2.12.0 release candidate testing showed that after a Workflow install that uses the in-house variant of deis-registry-proxy (say, v2.11.0), when one goes to upgrade (helm upgrade luminous-hummingbird workflow-staging/workflow --version v2.12.0), although the deis-registry-proxy pod appears to have been removed, the new luminous-hummingbird-kube-registry-proxy sometimes does not appear due to a host port conflict:
$ helm ls
NAME REVISION UPDATED STATUS CHART NAMESPACE
luminous-hummingbird 4 Wed Mar 8 14:01:02 2017 DEPLOYED workflow-v2.12.0 deis
$ kd get po,ds
NAME READY STATUS RESTARTS AGE
po/deis-builder-574483744-qnf44 1/1 Running 0 24m
po/deis-controller-3953262871-jqkmd 1/1 Running 2 24m
po/deis-database-83844344-m5x4x 1/1 Running 0 24m
po/deis-logger-176328999-d7fxc 1/1 Running 9 1h
po/deis-logger-fluentd-0hqfs 1/1 Running 0 1h
po/deis-logger-fluentd-drfh6 1/1 Running 0 1h
po/deis-logger-redis-304849759-nbrdp 1/1 Running 0 1h
po/deis-minio-676004970-g2bj9 1/1 Running 0 1h
po/deis-monitor-grafana-432627134-87b1z 1/1 Running 0 24m
po/deis-monitor-influxdb-2729788615-q67f9 1/1 Running 0 25m
po/deis-monitor-telegraf-6q562 1/1 Running 0 1h
po/deis-monitor-telegraf-rzwnv 1/1 Running 6 1h
po/deis-nsqd-3597503299-94nhx 1/1 Running 0 1h
po/deis-registry-756475849-v0rmw 1/1 Running 0 24m
po/deis-router-1001573613-mk07g 1/1 Running 0 13m
po/deis-workflow-manager-1013677227-kh5vt 1/1 Running 0 25m
NAME DESIRED CURRENT READY NODE-SELECTOR AGE
ds/deis-logger-fluentd 2 2 2 <none> 1h
ds/deis-monitor-telegraf 2 2 2 <none> 1h
ds/luminous-hummingbird-kube-registry-proxy 0 0 0 <none> 24m
$ kd describe ds luminous-hummingbird-kube-registry-proxy
Name: luminous-hummingbird-kube-registry-proxy
Image(s): gcr.io/google_containers/kube-registry-proxy:0.4
Selector: app=luminous-hummingbird-kube-registry-proxy
Node-Selector: <none>
Labels: chart=kube-registry-proxy-0.1.0
heritage=Tiller
release=luminous-hummingbird
Desired Number of Nodes Scheduled: 0
Current Number of Nodes Scheduled: 0
Number of Nodes Misscheduled: 0
Pods Status: 0 Running / 0 Waiting / 0 Succeeded / 0 Failed
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
25m 25m 2 {daemonset-controller } Normal FailedPlacement failed to place pod on "k8s-agent-fbf26383-0": host port conflict
25m 25m 2 {daemonset-controller } Normal FailedPlacement failed to place pod on "k8s-master-fbf26383-0": host port conflict
Copied from original issue: deis/workflow#766
From @Shashwatsh on July 28, 2017 17:9
I'm trying to install Deis Workflow on Kubernetes 1.5 but I'm getting an error:
kubectl version
Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.3", GitCommit:"029c3a408176b55c30846f0faedf56aae5992e9b", GitTreeState:"clean", BuildDate:"2017-03-09T11:55:06Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.3", GitCommit:"029c3a408176b55c30846f0faedf56aae5992e9b", GitTreeState:"clean", BuildDate:"2017-03-09T11:55:06Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
[root@fedora-1gb-blr1-01 ~]# kubectl create clusterrolebinding helm --clusterrole=cluster-admin --serviceaccount=kube-system:tiller-deploy
Error: unknown flag: --clusterrole
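kubectl create clusterrolebinding (and its --clusterrole flag) only arrived with kubectl 1.6. On a 1.5 client the same binding can be created from a manifest instead; note that RBAC was alpha on 1.5-era clusters, so the API version below is a best guess for that setup:

```shell
# Create the binding declaratively, which works on older kubectl versions.
cat <<EOF | kubectl create -f -
apiVersion: rbac.authorization.k8s.io/v1alpha1
kind: ClusterRoleBinding
metadata:
  name: helm
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller-deploy
  namespace: kube-system
EOF
```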
Copied from original issue: deis/workflow#843
From @gottfrois on March 6, 2017 15:18
If the feature already exists, this is more of a question, otherwise it would be really nice to be able to do this.
Kubernetes allows defining environment variables using pod fields as values:
apiVersion: v1
kind: Pod
metadata:
  name: ...
spec:
  containers:
  - name: ...
    image: ...
    env:
    - name: MY_NODE_NAME
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName
    - name: MY_POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: MY_POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: MY_POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
    - name: MY_POD_SERVICE_ACCOUNT
      valueFrom:
        fieldRef:
          fieldPath: spec.serviceAccountName
Is this possible using the deis config:set command? If not, what would it take to allow this in a future release of deis?
More information here.
Copied from original issue: deis/workflow#751
From @amingilani on November 5, 2016 21:55
Currently I can keys:add my primary ssh-ed25519 SSH key, but I can't push with it and have to rely on my alternate RSA key.
GitHub lets me push using my primary ssh-ed25519 key, and I can log in as core@CoreOS using my ssh-ed25519 key just fine.
Steps to reproduce:
ssh-keygen -o -a 100 -t ed25519
Copied from original issue: deis/workflow#598
From @kingdonb on August 24, 2017 3:38
I tried to launch Workflow on OpenShift, following helm/helm#2517, where it was explained how Helm can be used on OpenShift:
kingdonb$ helm install --tiller-namespace myproject -n deis --namespace myproject workflow/
Error: release deis failed: User "system:serviceaccount:myproject:default" cannot create daemonsets.extensions in project "myproject"
Is this an issue with OpenShift or with Deis? My reading is that there is a resource kind that does not exist on OpenShift yet, which Deis leveraged from a newer version of Kubernetes...
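The error itself reads as a permissions failure: the service account isn't allowed to create DaemonSets in the project. On OpenShift the usual (very broad, experimentation-only) unblock is to grant the service account more rights, e.g.:

```shell
# Grant cluster-admin to the default service account in myproject
# (far too broad for production; fine for a PoC).
oc adm policy add-cluster-role-to-user cluster-admin -z default -n myproject
```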
Copied from original issue: deis/workflow#856
From @deis-admin on January 19, 2017 21:15
From @renegare on December 6, 2014 14:25
In order to deploy a privately built image to deis cluster
As a developer
I would like to execute the following commands to push and deploy a local image
`deis push <local_image_name>:<tag>`
`deis deploy <local_image_name>:<tag>` # not to be confused with `deis pull`
Currently, as of deis v1.0.2, all of this is possible, but through the [not so elegant] steps:
docker build -t <app_name>... # build your image locally
docker tag <app_name>:latest 127.0.0.1:5000/<app_name>:`git rev-parse --short HEAD` # tag your app locally
ssh -Nf -L 5000:127.0.0.1:5000 core@$REGISTRY_HOST # port-forward the deis-registry port (make sure also your docker daemon allows this insecure registry)
docker push 127.0.0.1:5000/<app_name>
killall ssh # not elegant
deis pull <core-host-docker-registry-service-ip>:5000/<app_name>:<tag> # actually deploy your image
This is clearly not elegant (I have it running on a CI #ignant) and involves a few too many moving parts.
I'm hoping for a discussion around this, I am still oblivious to any related concerns around the possibility of this feature.
Copied from original issue: deis/deis#2680
Copied from original issue: deis/workflow#709
From @intellix on October 15, 2016 18:47
So I'm playing with the API right now. I'm creating pipelines by attaching config data to my applications. When I request a list of applications from the API I'd also like to get the config data with those. At the moment I have to loop through each application and make a request for the config each time. If I have 10 applications that's an extra 10 requests.
GraphQL is making waves around the web as a REST API killer. The upcoming Github API changes are going to be using it as well.
It would be awesome to create a GraphQL API for Deis, along with subscriptions for pushing data in real time to the client/UI whenever something happens, instead of doing long-polling.
Example of an API call I'd like to make:
query Applications {
  applications: {
    id,
    created,
    config: {
      values
    }
  }
}
Copied from original issue: deis/workflow#558
From @mboersma on February 15, 2017 23:45
Developers should be able to list, provision and deprovision services available to them through the deis workflow cli. This is forward-looking, anticipating a service broker implementation that eventually comes out of k8s sig-service-catalog efforts, or steward.
This could be implemented as a deis CLI plugin, if we want to keep things decoupled or vendor-neutral.
Copied from original issue: deis/workflow#741
From @aberba on January 31, 2017 12:53
Devs new to Workflow (including me) would want to know how to set it up locally for testing before deploying to the cloud (staging, testing).
Copied from original issue: deis/workflow#718
From @felixbuenemann on December 13, 2016 16:34
It would be great if Deis Workflow provided a way to trigger one-off Kubernetes Jobs and scheduled Cron Jobs for batch processing.
There should be a deis job command, similar to deis run, which can trigger a custom command in the app, but this should not be interactive (don't wait for completion in the CLI) and not be limited by the 20-minute kill timeout. In addition, these tasks should be started with the currently deployed release, but not be killed if a new release is deployed.
If this feature were present, it could then be further extended to Kubernetes Cron Jobs, which would provide a reliable way to schedule recurring jobs. While it is possible to deploy schedulers as their own apps or proc types, this is best left to the cluster.
Cron jobs could then be scheduled using a command like deis cron:add <name> <cron> <command>, e.g. deis cron:add daily_import 30 5 * * * rake import:prices import:inventory. Deis would then create and update K8s Cron Jobs to trigger these commands with the current app release and environment.
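A sketch of the Kubernetes CronJob such a deis cron:add could generate for the example above (the API version matches 2017-era clusters, where CronJob was still batch/v2alpha1; the image name is illustrative):

```yaml
apiVersion: batch/v2alpha1
kind: CronJob
metadata:
  name: daily-import
spec:
  schedule: "30 5 * * *"          # from the <cron> argument
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: daily-import
            image: registry.example.com/myapp:current-release  # current app release
            command: ["rake", "import:prices", "import:inventory"]
          restartPolicy: Never
```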
Copied from original issue: deis/workflow#652
From @intellix on October 16, 2016 20:30
In my quest to achieve pipelines in Deis, I've got two applications.
Underneath the myproject-staging app I want a button, "Promote to Production", which when clicked will put the exact same image onto the production app (myproject).
It seems the way to get this working is to make an API call like:
POST `http://deis.${cluster}/${apiVersion}/apps/builds`
Data: {
image: "home/myproject-staging:git-db6645f5/push/slug.tgz"
}
The problem is that this only works for Docker image references and doesn't support passing in a slug-built reference.
I'm not going to say buildpacks are a requirement for me right now, but it would allow me to implement pipelines for all application types.
Copied from original issue: deis/workflow#559
From @xiaoping378 on May 5, 2017 7:45
SETUP:
kubectl create clusterrolebinding cluster-admin-helm --clusterrole=cluster-admin --serviceaccount=kube-system:helm
➜ helm version
Client: &version.Version{SemVer:"v2.4.1", GitCommit:"46d9ea82e2c925186e1fc620a8320ce1314cbb02", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.4.1", GitCommit:"46d9ea82e2c925186e1fc620a8320ce1314cbb02", GitTreeState:"clean"}
helm repo update && helm repo add deis https://charts.deis.com/workflow
helm install deis/workflow --namespace deis
➜ kubectl get po -n deis
NAME READY STATUS RESTARTS AGE
deis-builder-1134410811-pnwkl 0/1 Running 1 1h
deis-controller-2000207379-4nw2g 1/1 Running 0 1h
deis-database-244447703-xkzdw 1/1 Running 0 1h
deis-logger-2533678197-wm668 1/1 Running 2 1h
deis-logger-fluentd-45kw4 0/1 CrashLoopBackOff 13 46m
deis-logger-redis-1307646428-pghzd 1/1 Running 0 1h
deis-minio-3195500219-hdxqw 1/1 Running 0 1h
deis-monitor-grafana-59098797-xv98s 1/1 Running 0 1h
deis-monitor-influxdb-168332144-vc7wb 1/1 Running 0 1h
deis-monitor-telegraf-9sjtw 0/1 CrashLoopBackOff 16 1h
deis-nsqd-1042535208-lb1f0 1/1 Running 0 1h
deis-registry-2249489191-9jpgw 1/1 Running 2 1h
deis-registry-proxy-53089 1/1 Running 0 1h
deis-router-3258454730-jh10h 1/1 Running 16 1h
deis-workflow-manager-3582051402-39vrh 1/1 Running 0 1h
The crashed fluentd pod said:
➜ kubectl logs -n deis deis-logger-fluentd-45kw4
/var/log/containers contains broken links
linked /var/lib/docker/containers to /home/Docker/docker/containers
2017-05-05 07:38:41 +0000 [info]: reading config file path="/opt/fluentd/conf/fluentd.conf"
2017-05-05 07:38:41 +0000 [info]: starting fluentd-0.14.15 pid=1
2017-05-05 07:38:41 +0000 [info]: spawn command to main: cmdline=["/usr/bin/ruby2.3", "-Eascii-8bit:ascii-8bit", "/usr/local/bin/fluentd", "-c", "/opt/fluentd/conf/fluentd.conf", "--under-supervisor"]
2017-05-05 07:38:41 +0000 [info]: gem 'fluent-mixin-config-placeholders' version '0.4.0'
2017-05-05 07:38:41 +0000 [info]: gem 'fluent-mixin-plaintextformatter' version '0.2.6'
2017-05-05 07:38:41 +0000 [info]: gem 'fluent-mixin-rewrite-tag-name' version '0.1.0'
2017-05-05 07:38:41 +0000 [info]: gem 'fluent-plugin-deis_output' version '0.1.0'
2017-05-05 07:38:41 +0000 [info]: gem 'fluent-plugin-elasticsearch' version '1.7.0'
2017-05-05 07:38:41 +0000 [info]: gem 'fluent-plugin-kubernetes_metadata_filter' version '0.25.3'
2017-05-05 07:38:41 +0000 [info]: gem 'fluent-plugin-remote_syslog' version '0.3.2'
2017-05-05 07:38:41 +0000 [info]: gem 'fluent-plugin-sumologic-mattk42' version '0.0.4'
2017-05-05 07:38:41 +0000 [info]: gem 'fluentd' version '0.14.15'
2017-05-05 07:38:41 +0000 [info]: gem 'fluentd' version '0.14.13'
2017-05-05 07:38:41 +0000 [info]: adding filter pattern="kubernetes.**" type="kubernetes_metadata"
2017-05-05 07:38:41 +0000 [info]: adding match pattern="**" type="copy"
2017-05-05 07:38:41 +0000 [error]: #0 config error file="/opt/fluentd/conf/fluentd.conf" error_class=Fluent::ConfigError error="Exception encountered fetching metadata from Kubernetes API endpoint: 403 Forbidden"
2017-05-05 07:38:41 +0000 [info]: Worker 0 finished unexpectedly with status 2
2017-05-05 07:38:41 +0000 [info]: Received graceful stop
The crashed telegraf pod said:
➜ kubectl logs -n deis deis-monitor-telegraf-9sjtw
parse error: Invalid numeric literal at line 1, column 5
Note: I have added the cluster-admin role to the deis-router service account; if I don't, it crashes as well.
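The fluentd 403 points the same way: its service account lacks read access to the Kubernetes API. A broad, experimentation-only grant along the lines of the one used for the router would be (binding name is illustrative):

```shell
# Let pods running as the default service account in the deis namespace
# read the API (too broad for production; narrow roles are preferable).
kubectl create clusterrolebinding deis-default-admin \
  --clusterrole=cluster-admin --serviceaccount=deis:default
```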
Copied from original issue: deis/workflow#809
From @Cryptophobia on June 12, 2017 20:46
We are trying to implement lifecycle preStop and postStart hooks that will be respected by resque workers inside Kubernetes pods. We want to define scripts that will run in Deis applications by setting environment variables for the preStop and postStart hooks. Imagine that the command part is just a long environment variable set in Deis environment variables:
lifecycle:
  postStart:
    exec:
      command:
      - "/bin/bash"
      - "-c"
      - >
        echo 'Ran the postStart Lifecycle Handler' > test;
  preStop:
    exec:
      command:
      - "/bin/bash"
      - "-c"
      - >
        kill -QUIT 1;
The problem that we are encountering is this:
The deis/kubernetes process is monitoring the worker process, in our case resque-pool with PID 1. Once we send -QUIT to this process, the Kubernetes worker pod enters CrashLoopBackOff and restarts the main resque-pool worker process. This is great behavior for resiliency, but not the behaviour we want, because we want the pod to be terminated after the workers terminate gracefully. However, the worker pod is restarted because something is monitoring the main process.
Is there a way we can unhook the deis/kubernetes main process monitor from binding to PID 1 from inside the pod, or is there a simpler way to implement lifecycle hooks like the example above?
We were hoping for a successful test so that we can contribute to the deis-workflow by adding environment variables for lifecycle hooks and support for them.
Any help would be greatly appreciated.
Copied from original issue: deis/workflow#825
From @mboersma on February 15, 2017 23:42
We have investigated this several times and found k8s RBAC not-quite-complete, and not a simple replacement for Workflow's existing users & perms mechanisms. Nonetheless, this is something to keep in mind: we would like Workflow to play nice with existing RBAC if possible, even if that behavior needs to be hidden behind an experimental feature flag.
Copied from original issue: deis/workflow#740
From @kmala on September 29, 2016 16:35
Copied from original issue: deis/workflow#529
From @bacongobbler on February 6, 2017 20:55
It looks like the image for GCP is out of date. There is no "Turn on Cloud Logging" checkbox anymore.
As seen at https://deis.com/docs/workflow/quickstart/provider/gke/boot/
Copied from original issue: deis/workflow#727
From @Overdrivr on June 14, 2017 6:39
I'm trying to deploy from GitLab CI to Deis using a buildpack (I cannot use the Docker-based approach because of #823).
Authentication to Deis from the gitlab-ci runner works fine, and the code is built successfully. But at the end of the deployment, I get the following error message:
remote: Successfully built 76385f485fae
remote: Pushing to registry
remote: received unexpected HTTP status: 500 Internal Server Error
remote: 2017/06/14 06:09:56 Error running git receive hook [Build pod exited with code 1, stopping build.]
To ssh://[email protected]:2222/cronobo-dispatcher.git
! [remote rejected] master -> master (pre-receive hook declined)
error: failed to push some refs to 'ssh://[email protected]:2222/cronobo-dispatcher.git'
ERROR: Job failed: exit code 1
My cluster is GKE with 3 n1-standard-1 nodes, each with 50 GB of storage.
All pods are running properly:
$ kubectl --namespace=deis get pods
NAME READY STATUS RESTARTS AGE
deis-builder-1134541883-4mvsv 1/1 Running 0 2h
deis-controller-2381495828-61b1t 1/1 Running 1 2h
deis-database-401209816-csdkz 1/1 Running 0 2h
deis-logger-2717637750-16mtl 1/1 Running 3 2h
deis-logger-fluentd-4n8b3 1/1 Running 0 2h
deis-logger-fluentd-86dsg 1/1 Running 0 2h
deis-logger-fluentd-x1w8f 1/1 Running 0 2h
deis-logger-redis-1413683677-vl4kd 1/1 Running 0 2h
deis-minio-3370481340-8nlfx 1/1 Running 0 2h
deis-monitor-grafana-1073006293-45rnw 1/1 Running 0 2h
deis-monitor-influxdb-2675149720-vmp7s 1/1 Running 0 2h
deis-monitor-telegraf-n9z5j 1/1 Running 1 2h
deis-monitor-telegraf-qhbtf 1/1 Running 1 2h
deis-monitor-telegraf-ttsgr 1/1 Running 1 2h
deis-nsqd-1225249577-c3jnk 1/1 Running 0 2h
deis-registry-2623437609-x1fcp 1/1 Running 1 2h
deis-registry-proxy-3mwz3 1/1 Running 0 2h
deis-registry-proxy-3pftk 1/1 Running 0 2h
deis-registry-proxy-ccqxp 1/1 Running 0 2h
deis-router-3258585802-fmfm9 1/1 Running 0 2h
deis-workflow-manager-3757950027-vkwn8 1/1 Running 0 2h
How can I debug this?
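Since the 500 comes back from the registry push inside the build, the builder and registry logs are the usual place to start (pod names taken from the listing above):

```shell
# Inspect the two components involved in the failing push.
kubectl logs -n deis deis-builder-1134541883-4mvsv
kubectl logs -n deis deis-registry-2623437609-x1fcp
```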
Copied from original issue: deis/workflow#826
From @joevandyk on December 12, 2017 2:15
https://deis.com/docs/workflow/quickstart/provider/minikube/boot/ links to https://github.com/kubernetes/minikube/blob/master/DRIVERS.md#xhyve-driver for the xhyve driver, which is a 404.
Copied from original issue: deis/workflow#863
From @pfeodrippe on April 22, 2017 20:32
I'm trying minikube with deis. Has something changed now? It was pulling the repo 2 hours ago
$ helm repo add deis https://charts.deis.com/workflow
Error: Looks like "https://charts.deis.com/workflow" is not a valid chart repository
or cannot be reached: Get https://charts.deis.com/workflow/index.yaml: net/http:
TLS handshake timeout
Copied from original issue: deis/workflow#800
From @cristiangrama on September 19, 2017 12:51
My app has 2 web instances. Every now and then it returns a 502 Bad Gateway message from nginx.
When running deis commands like deis apps or deis logs, every now and then I get a 502 Bad Gateway from it as well.
Copied from original issue: deis/workflow#860
From @pixeleet on April 6, 2017 9:55
Hey There,
Looking for a migration path from Helmc to Helm so I could use the latest and greatest charts.
Copied from original issue: deis/workflow#788
From @Overdrivr on June 8, 2017 17:13
I'm trying to deploy Docker images to Deis from GitLab CI, but it fails to authenticate to a private Docker registry.
I created a specific user, gitlab-ci, which authenticates successfully.
This is the output of the CI script
Running with gitlab-ci-multi-runner 9.2.0 (adfc387)
on docker-auto-scale (e11ae361)
Using Docker executor with image felixbuenemann/deis-workflow-cli:latest ...
Starting service docker:dind ...
Pulling docker image docker:dind ...
Using docker image docker:dind ID=sha256:dee518b729774dca2b75e356b4e5d288f4abd00daea5a934c63c4a5a20fe6655 for docker service...
Waiting for services to be up and running...
Using docker image sha256:93836450ba09f3d8d835cf8da9bd33f5d283aca7b89a8410a8597e0a2b8ca79e for predefined container...
Pulling docker image felixbuenemann/deis-workflow-cli:latest ...
Using docker image felixbuenemann/deis-workflow-cli:latest ID=sha256:69f5967add43047a280852adca72ac7c16e9b25af0aab9537350a80db6af0aa1 for build container...
Running on runner-e11ae361-project-3003608-concurrent-0 via runner-e11ae361-machine-1496941443-0cbe3239-digital-ocean-2gb...
Cloning repository...
Cloning into '/builds/MYUSERNAME/MYPROJECT'...
Checking out f7c7d01f as push-to-deis...
Skipping Git submodules setup
$ deis version
Logged in as gitlab-ci
Configuration file written to /root/.deis/client.json
v2.13.0
$ deis login $DEIS_CONTROLLER --username=$DEIS_USERNAME --password=$DEIS_PASSWORD
Logged in as gitlab-ci
Configuration file written to /root/.deis/client.json
Logged in as gitlab-ci
Configuration file written to /root/.deis/client.json
$ deis whoami
Logged in as gitlab-ci
Configuration file written to /root/.deis/client.json
You are gitlab-ci at http://deis.XXX.XX.XXX.XXX.nip.io
$ deis registry:set username=$CI_REGISTRY_USER password=$CI_REGISTRY_PASSWORD -a $DEIS_APP_NAME
Logged in as gitlab-ci
Configuration file written to /root/.deis/client.json
Applying registry information... ...Error: You do not have permission to perform this action.
ERROR: Job failed: exit code 1
Authentication to the registry fails, although I am using the correct variables for the username and password (I was using the exact same ones to connect to the registry with docker), and $DEIS_APP_NAME is defined in GitLab as a secret, with a value equal to my app name inside Deis.
Any idea why this action is not allowed, or how I can debug this further? The error message is not very explicit.
Copied from original issue: deis/workflow#823
From @sheerun on March 6, 2017 19:5
Hey,
By default Azure's Load Balancer timeout (used by ACS) is 4 minutes: https://azure.microsoft.com/en-us/blog/new-configurable-idle-timeout-for-azure-load-balancer/
It turns out this is too low for a decent deis build. I think you could document this at https://deis.com/docs/workflow/managing-workflow/configuring-load-balancers/
Copied from original issue: deis/workflow#753
From @IulianParaian on July 9, 2017 18:44
I tried to set up Workflow to store persistent data in AWS S3.
I followed the steps from here.
This is what the custom values.yaml file looks like:
global:
  # Set the storage backend
  storage: s3

s3:
  # Your AWS access key. Leave it empty if you want to use IAM credentials.
  accesskey: "xxxx"
  # Your AWS secret key. Leave it empty if you want to use IAM credentials.
  secretkey: "xxxx"
  # Any S3 region
  region: "xx"
  # Your buckets.
  registry_bucket: "registry-xxxx"
  database_bucket: "database-xxxx"
  builder_bucket: "builder-xxxx"
After installing Workflow
helm install deis/workflow --namespace deis -f values.yaml
the deis-controller pod is not starting, and in the logs I'm getting:
system information:
Django Version: 1.11.3
Python 3.5.2
Django checks:
System check identified no issues (2 silenced).
Health Checks:
Checking if database is alive
There was a problem connecting to the database
FATAL: password authentication failed for user "xxxxx"
Copied from original issue: deis/workflow#839
From @eteng on October 4, 2017 5:40
Hi,
I'm getting an Internal Server Error when I run `deis certs:add server.bundle server.key` with a Comodo EV multi-domain cert. I have created the bundle/chain cert using different combinations, but it still doesn't work.
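One check worth doing before blaming the controller is confirming that the key actually matches the bundle's leaf certificate. A self-contained sketch, with a throwaway self-signed pair standing in for the Comodo files:

```shell
# Generate a throwaway cert/key pair, then compare RSA moduli; for a real
# bundle, run the two modulus commands against server.bundle and server.key.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/server.key \
  -out /tmp/server.crt -days 1 -subj "/CN=example.com" 2>/dev/null
cert_mod=$(openssl x509 -noout -modulus -in /tmp/server.crt)
key_mod=$(openssl rsa -noout -modulus -in /tmp/server.key 2>/dev/null)
[ "$cert_mod" = "$key_mod" ] && echo "key matches certificate"
```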
Copied from original issue: deis/workflow#861
From @pixeleet on February 14, 2017 9:13
25m 15h 9 api-cmd-4281948355-qecb1 Pod spec.containers{api-cmd} Normal Pulling {kubelet ip-10-0-10-170.eu-central-1.compute.internal} pulling image "bannerwise/core:production-735dd428f31808e52bd422e6e54f5d78bddde289"
25m 15h 9 api-cmd-4281948355-qecb1 Pod spec.containers{api-cmd} Normal Pulled {kubelet ip-10-0-10-170.eu-central-1.compute.internal} Successfully pulled image "bannerwise/core:production-735dd428f31808e52bd422e6e54f5d78bddde289"
25m 31m 8 api-cmd-4281948355-qecb1 Pod Warning FailedSync {kubelet ip-10-0-10-170.eu-central-1.compute.internal} Error syncing pod, skipping: failed to "StartContainer" for "api-cmd" with RunContainerError: "GenerateRunContainerOptions: secrets \"api-v115-env\" not found"
What happens is: one of my containers died, and a new machine which hadn't pulled the image yet tried to pull it, but it somehow didn't have the latest env secret, so the container could not start. Eventually this resulted in all api containers being unavailable. I'm still investigating why, but if you know something please let me know 😛 It's an annoying little thing that happens every now and then, and I still can't put my finger on it.
Kube version
Server Version: version.Info{Major:"1", Minor:"3", GitVersion:"v1.3.4", GitCommit:"dd6b458ef8dbf24aff55795baa68f83383c9b3a9", GitTreeState:"clean", BuildDate:"2016-08-01T16:38:31Z", GoVersion:"go1.6.2", Compiler:"gc", Platform:"linux/amd64"}
Deis version
v2.4.0
Cloud Provider: AWS
4.7.3-coreos-r2
Copied from original issue: deis/workflow#736
From @olalonde on October 24, 2016 11:39
I just successfully deployed a new release with deis pull ...
but when accessing the app from my browser, I am getting a 503 Service Temporarily Unavailable nginx error page.
The app logs show that it is responding to health check requests:
$ deis logs
myapp-cmd-2847749395-jx1u0 -- ::ffff:10.244.1.1 - - [24/Oct/2016:11:36:19 +0000] "GET /_health HTTP/1.1" 200 - "-" "Go 1.1 package http"
myapp-cmd-2847749395-jx1u0 -- ::ffff:10.244.1.1 - - [24/Oct/2016:11:36:29 +0000] "GET /_health HTTP/1.1" 200 - "-" "Go 1.1 package http"
myapp-cmd-2847749395-jx1u0 -- ::ffff:10.244.1.1 - - [24/Oct/2016:11:36:39 +0000] "GET /_health HTTP/1.1" 200 - "-" "Go 1.1 package http"
myapp-cmd-2847749395-jx1u0 -- ::ffff:10.244.1.1 - - [24/Oct/2016:11:36:49 +0000] "GET /_health HTTP/1.1" 200 - "-" "Go 1.1 package http"
$ deis config | grep PORT
PORT 5000
Some logs from deis router:
[2016-10-24T11:38:24+00:00] - myapp-ui - 172.20.0.83 - - - 503 - "GET / HTTP/1.1" - 792 - "http://myapp.com/" - "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/36.0.1985.125 Safari/537.36" - "myapp.com" - -- myapp.com - - - 0.000
[2016-10-24T11:38:29+00:00] - myapp-ui - 172.20.0.131 - - - 503 - "GET / HTTP/1.1" - 390 - "-" - "Pingdom.com_bot_version_1.4_(http://www.pingdom.com/)" - "myapp.com" - - - myapp.com - - - 0.000
[2016-10-24T11:38:52+00:00] - deis/deis-controller - 172.20.0.147 - - - 201 - "POST /v2/apps/myapp-ui/config/ HTTP/1.1" -2298 - "-" - "Deis Client v2.7.0" - "~^deis\x5C.(?<domain>.+)$" - 10.0.34.240:80 - deis.d.myapp.com - 82.117 - 82.183
[2016-10-24T11:38:52+00:00] - deis/deis-controller - 172.20.0.131 - - - 200 - "GET /v2/apps/myapp-ui/config/ HTTP/1.1" - 1820 - "-" - "Deis Client v2.7.0" - "~^deis\x5C.(?<domain>.+)$" - 10.0.34.240:80 - deis.d.myapp.com - 0.031 - 0.031
[2016-10-24T11:38:54+00:00] - deis/deis-controller - 10.244.0.1 - - - 200 - "GET /v2/apps/myapp-ui/config/ HTTP/1.1" - 1820 - "-" - "Deis Client v2.7.0" - "~^deis\x5C.(?<domain>.+)$" - 10.0.34.240:80 - deis.d.myapp.com - 0.019 - 0.019
[2016-10-24T11:39:29+00:00] - myapp-ui - 172.20.0.147 - - - 503 - "GET / HTTP/1.1" - 390 - "-" - "Pingdom.com_bot_version_1.4_(http://www.pingdom.com/)" - "myapp.com" - - - myapp.com - - - 0.000
[2016-10-24T11:39:41+00:00] - myapp-ui - 172.20.0.83 - - - 503 - "GET /c/dJMPJb HTTP/1.1" - 390 - "-" - "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" - "myapp.com" - - - myapp.com - - - 0.000
But it doesn't show any of the requests coming from my browser, so I'm guessing the deis router is having trouble with something. I tried deleting the deis-router pod, but the problem persists. Any ideas?
Copied from original issue: deis/workflow#569
From @rbankole on May 5, 2017 15:51
deis logs -a <app_name> --lines -1
is not spitting out all the logs since app launch. As a workaround, you have to supply an arbitrarily large number, e.g. deis logs -a <app_name> --lines 10000.
Copied from original issue: deis/workflow#810
From @ColdHeat on July 1, 2017 23:43
I'm just trying to follow the Deploy Your First Application tutorial and I'm running into this error:
❯ deis pull deis/example-go -a expert-neckwear
Creating build... Error: Unknown Error (400): {"detail":"Put http://127.0.0.1:5555/v1/repositories/expert-neckwear/: dial tcp 127.0.0.1:5555: getsockopt: connection refused"}
Copied from original issue: deis/workflow#834
From @pfeodrippe on April 28, 2017 13:12
Let me describe a high-level sequence of my commands: I installed Workflow with helm, created two apps, did some git pushes to these apps, deleted the release, deleted the namespace because of Error: secrets "builder-key-auth" already exists, deleted the registry, database, and builder S3 buckets because of the deis-controller logs below, and finally reinstalled Workflow to the cluster with all the necessary Deis configuration.
Is there any way I could attach the running apps (started from the old release) to this new release, so I could run deis apps and see them?
$ kubectl -n deis logs deis-controller-3875226455-sz1p8 -f
system information:
Django Version: 1.10.6
Python 3.5.2
Django checks:
System check identified no issues (2 silenced).
Health Checks:
Checking if database is alive
There was a problem connecting to the database
FATAL: password authentication failed for user "XXXXXXXXXXXXXXX"
Copied from original issue: deis/workflow#802
From @mboersma on February 15, 2017 23:47
This would imply a new UI allowing configuration of ingress & egress rules. See also https://github.com/orgs/deis/projects/4.
Copied from original issue: deis/workflow#742
From @pie6k on July 31, 2017 21:53
I've got a single service that needs other services to function (database, etc.). I wonder what a good local development workflow for working with it looks like.
My goal is to be able to quickly restart the service on every file save (i.e. run node server.js again), so I can iterate quickly during development. Right now, the only idea I have is to git push deis master after every file change to see how it works together with all the other local services, but that takes quite a long time per iteration.
Copied from original issue: deis/workflow#844
From @sbulman on August 5, 2017 7:27
Hi All,
I'm following the instructions to set up Deis on Azure Container Service. One of the deis-logger-fluentd pods is crashing with the following log.
2017-08-05 07:21:26 +0000 [info]: reading config file path="/opt/fluentd/conf/fluentd.conf"
2017-08-05 07:22:27 +0000 [error]: config error file="/opt/fluentd/conf/fluentd.conf" error_class=Fluent::ConfigError error="Invalid Kubernetes API v1 endpoint https://10.0.0.1:443: Timed out connecting to server"
Any ideas?
Thanks.
Copied from original issue: deis/workflow#847
From @davidchua on April 8, 2017 1:27
After deploying a Dockerfile onto Deis Workflow, the application gets stuck in "Launching App...", and attempting to redeploy brings down the builder.
I am unable to pull out further logs to find out what went wrong inside the builder.
Below are my logs:
$ kubectl version
Server Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.4", GitCommit:"7243c69eb523aa4377bce883e7c0dd76b84709a1", GitTreeState:"clean", BuildDate:"2017-03-07T23:34:32Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
Deis Workflow installed with: v2.9.0 on GKE
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
2m 2m 1 {default-scheduler } Normal Scheduled Successfully assigned deis-builder-1805754844-qk5z7 to gke-kubernetes-pool-1-b3e2ffed-p0wg
2m 2m 1 {kubelet gke-kubernetes-pool-1-b3e2ffed-p0wg} spec.containers{deis-builder} Normal Created Created container with docker id 41a3bf2cb2a1; Security:[seccomp=unconfined]
2m 2m 1 {kubelet gke-kubernetes-pool-1-b3e2ffed-p0wg} spec.containers{deis-builder} Normal Started Started container with docker id 41a3bf2cb2a1
1m 1m 1 {kubelet gke-kubernetes-pool-1-b3e2ffed-p0wg} spec.containers{deis-builder} Normal Killing Killing container with docker id 41a3bf2cb2a1: pod "deis-builder-1805754844-qk5z7_deis(85eb255b-1bf9-11e7-a4f7-42010a920022)" container "deis-builder" is unhealthy, it will be killed and re-created.
1m 1m 1 {kubelet gke-kubernetes-pool-1-b3e2ffed-p0wg} spec.containers{deis-builder} Normal Created Created container with docker id 60a6cabe6e4c; Security:[seccomp=unconfined]
1m 1m 1 {kubelet gke-kubernetes-pool-1-b3e2ffed-p0wg} spec.containers{deis-builder} Normal Started Started container with docker id 60a6cabe6e4c
54s 54s 1 {kubelet gke-kubernetes-pool-1-b3e2ffed-p0wg} spec.containers{deis-builder} Normal Started Started container with docker id 37f98b501301
54s 54s 1 {kubelet gke-kubernetes-pool-1-b3e2ffed-p0wg} spec.containers{deis-builder} Normal Killing Killing container with docker id 60a6cabe6e4c: pod "deis-builder-1805754844-qk5z7_deis(85eb255b-1bf9-11e7-a4f7-42010a920022)" container "deis-builder" is unhealthy, it will be killed and re-created.
54s 54s 1 {kubelet gke-kubernetes-pool-1-b3e2ffed-p0wg} spec.containers{deis-builder} Normal Created Created container with docker id 37f98b501301; Security:[seccomp=unconfined]
1m 15s 4 {kubelet gke-kubernetes-pool-1-b3e2ffed-p0wg} spec.containers{deis-builder} Warning Unhealthy Readiness probe failed: Get http://10.4.0.143:8092/readiness: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
1m 15s 5 {kubelet gke-kubernetes-pool-1-b3e2ffed-p0wg} spec.containers{deis-builder} Warning Unhealthy Liveness probe failed: Get http://10.4.0.143:8092/healthz: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2m 12s 4 {kubelet gke-kubernetes-pool-1-b3e2ffed-p0wg} spec.containers{deis-builder} Normal Pulled Container image "quay.io/deis/builder:v2.6.1" already present on machine
12s 12s 1 {kubelet gke-kubernetes-pool-1-b3e2ffed-p0wg} spec.containers{deis-builder} Normal Killing Killing container with docker id 37f98b501301: pod "deis-builder-1805754844-qk5z7_deis(85eb255b-1bf9-11e7-a4f7-42010a920022)" container "deis-builder" is unhealthy, it will be killed and re-created.
12s 12s 1 {kubelet gke-kubernetes-pool-1-b3e2ffed-p0wg} spec.containers{deis-builder} Normal Created Created container with docker id 93597879ef60; Security:[seccomp=unconfined]
12s 12s 1 {kubelet gke-kubernetes-pool-1-b3e2ffed-p0wg} spec.containers{deis-builder} Normal Started Started container with docker id 93597879ef60
$ kubectl logs deis-builder-1805754844-qk5z7 -n deis -f
2017/04/08 01:24:11 Starting health check server on port 8092
2017/04/08 01:24:11 Starting deleted app cleaner
2017/04/08 01:24:11 Starting SSH server on 0.0.0.0:2223
Listening on 0.0.0.0:2223
Accepting new connections.
Copied from original issue: deis/workflow#790
From @tmc on June 12, 2017 2:50
Right now the experimental native ingress creates an Ingress per app; this typically implies manual steps, DNS records, etc., lowering its utility and ease of use.
If native ingress support allowed the use of a single Ingress in SNI mode, converting a Deis installation to use Let's Encrypt via kube-lego would be a single annotation and DNS record configuration.
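For reference, kube-lego keys off a single annotation (its documented kubernetes.io/tls-acme). A sketch of what such a shared Ingress could look like; the resource name, hosts, and backing service are hypothetical:

```yaml
# Hypothetical shared ingress; kube-lego watches for the tls-acme
# annotation and provisions/renews the Let's Encrypt certificate
# into the named TLS secret.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: deis-apps
  annotations:
    kubernetes.io/tls-acme: "true"
spec:
  tls:
  - hosts:
    - myapp.example.com
    secretName: myapp-example-com-tls
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: myapp
          servicePort: 80
```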
Refs deis/workflow#708
Copied from original issue: deis/workflow#824
From @krancour on October 14, 2016 17:02
This issue supersedes deis/controller#355 and attempts to distill its most salient suggestions into something more digestible.
One persistent shortcoming of Workflow is that it does not easily accommodate applications requiring e2e SSL. Currently, SSL is typically terminated at the router(s). (It can be terminated at the load balancer, but this is uncommon.) The fact that unencrypted traffic flows between the router(s) and application pods (which may reside on other nodes) precludes the use of Workflow as a platform for any applications subject to stringent regulatory requirements like HIPAA or PCI-DSS that mandate the encryption of all over-the-wire transmissions.
The most efficient way to solve this is to bypass the router and deliver encrypted TCP packets directly to application pods. Most application frameworks, however, do not know how to terminate SSL. This is, and should remain, chiefly a platform concern. So the problems to solve for are:
Bypassing the router is easy. Currently the router routes traffic to all "routable services." By not annotating an application's k8s service as routable, the router will cease to route traffic to it.
Traffic can be routed directly to application pods by making the app's k8s service one of type: LoadBalancer.
Pods can host multiple containers, and these are able to communicate with one another over the local interface. I propose that wherever e2e SSL is required, a dedicated router can be installed as a "sidecar" container in each application pod. Such router instances would run in a new "terminator mode," wherein the router retains all of its usual configurability and flexibility, but ceases to route traffic for all applications (routable services). Instead, it becomes concerned only with terminating SSL and proxying requests to a single upstream (the application) over the local interface.
This is relatively easy to implement.
I propose the deis CLI and controller be enhanced to allow applications to opt in to e2e SSL. This would require the controller to modify an application's service definition to not be "routable", to be of type: LoadBalancer, and deployments to include the terminator sidecar (which must also include any applicable certificates).
Above, I stated that the terminator sidecar "must also include any applicable certificates." I believe this must explicitly exclude the platform certificate (which is always a wildcard) and should be limited only to certificates owned by / associated to the application. Otherwise, the private key for a certificate not owned by the application in question (e.g. the platform certificate) could be exposed to developers who should not have access to it.
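A sketch of the opt-in service change described above; the router discovers services by a routable label, but the exact label key and the app/namespace names here should be treated as assumptions:

```yaml
# Hypothetical service for an app opted in to e2e SSL: the routable
# label is absent, so the router ignores the service, and
# type: LoadBalancer delivers encrypted TCP straight to the pods
# (where the terminator sidecar handles SSL).
apiVersion: v1
kind: Service
metadata:
  name: myapp
  namespace: myapp
spec:
  type: LoadBalancer
  ports:
  - name: https
    port: 443
    targetPort: 443
  selector:
    app: myapp
```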
Copied from original issue: deis/workflow#557
From @abh on May 29, 2017 20:10
Running on Kubernetes 1.6.2 (using Tectonic, so RBAC etc.). I used the rbac gist to set up the RBAC rules (which made the builder run).
All the pods except the telegraf ones are running and healthy. I created the user okay, but creating an application gives a 500 error.
10.2.3.0 "GET / HTTP/1.1" 404 74 "curl/7.51.0"
10.2.5.1 "GET /v2/ HTTP/1.1" 401 58 "Deis Client v2.14.0"
10.2.4.0 "POST /v2/auth/register/ HTTP/1.1" 201 240 "Deis Client v2.14.0"
10.2.5.1 "GET /v2/ HTTP/1.1" 401 58 "Deis Client v2.14.0"
10.2.3.0 "POST /v2/auth/login/ HTTP/1.1" 200 52 "Deis Client v2.14.0"
ERROR:root:Uncaught Exception
Traceback (most recent call last):
File "/usr/local/lib/python3.5/dist-packages/rest_framework/views.py", line 480, in dispatch
response = handler(request, *args, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/rest_framework/mixins.py", line 21, in create
self.perform_create(serializer)
File "/app/api/viewsets.py", line 20, in perform_create
obj = serializer.save(owner=self.request.user)
File "/usr/local/lib/python3.5/dist-packages/rest_framework/serializers.py", line 214, in save
self.instance = self.create(validated_data)
File "/usr/local/lib/python3.5/dist-packages/rest_framework/serializers.py", line 906, in create
instance = ModelClass.objects.create(**validated_data)
File "/usr/local/lib/python3.5/dist-packages/django/db/models/manager.py", line 85, in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/django/db/models/query.py", line 393, in create
obj.save(force_insert=True, using=self.db)
File "/app/api/models/app.py", line 92, in save
if self._scheduler.ns.get(self.id).status_code == 200:
File "/app/scheduler/resources/namespace.py", line 25, in get
raise KubeHTTPException(response, message, *args)
File "/app/scheduler/exceptions.py", line 10, in __init__
data = response.json()
File "/usr/local/lib/python3.5/dist-packages/requests/models.py", line 866, in json
return complexjson.loads(self.text, **kwargs)
File "/usr/lib/python3.5/json/__init__.py", line 319, in loads
return _default_decoder.decode(s)
File "/usr/lib/python3.5/json/decoder.py", line 339, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/lib/python3.5/json/decoder.py", line 357, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
10.2.5.1 "POST /v2/apps/ HTTP/1.1" 500 25 "Deis Client v2.14.0"
Copied from original issue: deis/workflow#814
From @seanknox on June 16, 2017 23:17
$ deis version
v2.14.0
Running deis ps:scale cmd=1000 sat indefinitely (I waited ~10 minutes before giving up).
Copied from original issue: deis/workflow#827
From @deis-admin on January 19, 2017 23:32
From @kamilchm on July 14, 2014 19:01
Does anyone use Deis for building deployment pipelines?
Heroku has pipelines in labs: https://devcenter.heroku.com/articles/labs-pipelines. How can I implement an equivalent using Deis?
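Workflow has no first-class pipelines, but a rough equivalent can be scripted around deis pull, which creates a new release from an existing Docker image: build once, deploy to a staging app, then promote the identical image to production. A sketch; the app names and registry URL are hypothetical:

```shell
# Promote one prebuilt image through staging to production, so both
# apps run the exact same artifact (app names are hypothetical).
promote() {
  image="$1"
  deis pull "$image" -a myapp-staging      # deploy to staging first
  # ... run smoke tests against staging here ...
  deis pull "$image" -a myapp-production   # then promote unchanged
}
# promote "registry.example.com/myapp:v42"
```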
Copied from original issue: deis/deis#1318
Copied from original issue: deis/workflow#711
From @helloravi on February 11, 2017 5:39
I have tried looking for examples of Deis deployments on Lightsail, but did not find any. Has anyone tried this?
Copied from original issue: deis/workflow#735
From @rimusz on March 8, 2017 16:4
Native ingress support is WIP (#732 and #1243), so it would be nice to have support for a hybrid mode where both the native ingress and the router are enabled.
Especially, it would be handy to be able to test native ingress without breaking the router setup; once the user is happy with the ingress and has swung all traffic to it, the router mode can be disabled. :)
Copied from original issue: deis/workflow#762
From @timfpark on April 5, 2017 15:03
In my deployment of Deis Workflow v2.12.0, I am seeing a CrashLoopBackOff on the Router component:
deis deis-router-2126433040-drr9t 0/1 CrashLoopBackOff 85 5d
Looking at the logs with kubectl, I see the following:
no such container: "169d0df2740125110584a290b4b7da6b1fa8c3d9ebd82120ec93ee621fd2070e"
Is this a known issue? Is there anything I can collect to help diagnose what's going on here?
Copied from original issue: deis/workflow#784