
kubo-deployment's Issues

Can't use insecure-registry with kubo

When I try to pull a Docker image from an insecure registry, I receive "Failed to pull image".

Message from the pod:

Failed to pull image "harbor.local/devops-web/springboot-web-app:latest": rpc error: code = 2 desc = Error response from daemon: Get https://harbor.local/v1/_ping: dial tcp 10.171.162.27:443: getsockopt: connection refused
Error syncing pod

I looked for a way to set an "insecure-registry" property, but could not find one. I tried putting it in "kubo-manifest.yml" (screenshot omitted).

Looking at "/var/vcap/jobs/docker/bin/job_properties.sh" on the worker VM, there is no such property.

Has anyone tried to do this?

Thanks,
Gustavo
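One possible workaround, sketched as a BOSH ops-file: the docker-boshrelease exposes a list of insecure registries, so an ops-file could set it on the worker's docker job. The property name, paths, and file name here are assumptions, not confirmed against the kubo manifest:

```yaml
# Hypothetical ops-file (insecure-registry.yml); assumes the docker job
# exposes a docker.insecure_registries list property.
- type: replace
  path: /instance_groups/name=worker/jobs/name=docker/properties/docker/insecure_registries?
  value:
  - harbor.local
```

It would be applied with something like `bosh deploy kubo-manifest.yml -o insecure-registry.yml`.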

deploy K8s cluster - Failed to find variable '/kubobosh/mykubocluster-3/vcenter_ip'

When I try to deploy a K8s cluster, I get these error messages:

$ /usr/local/bin/bosh -e kubobosh -d mykubocluster-3 deploy mykubo-3.yml

Task 11 | 21:30:13 | Preparing deployment: Preparing deployment
Task 11 | 21:30:16 | Warning: DNS address not available for the link provider instance: etcd/864bd57d-b195-4325-8939-3cf665f76f84
Task 11 | 21:30:16 | Warning: DNS address not available for the link provider instance: etcd/49736647-7982-452d-9481-098bf8e6461c
Task 11 | 21:30:16 | Warning: DNS address not available for the link provider instance: etcd/d051341f-5cab-406c-9028-5e435d15fcb9
Task 11 | 21:30:16 | Warning: DNS address not available for the link provider instance: master/c7a95105-e9d5-40c6-8bbc-672c96650284
Task 11 | 21:30:16 | Warning: DNS address not available for the link provider instance: master/b4372d8a-ab1e-4add-9f1f-8cf251906b47
Task 11 | 21:30:16 | Warning: DNS address not available for the link provider instance: etcd/864bd57d-b195-4325-8939-3cf665f76f84
Task 11 | 21:30:16 | Warning: DNS address not available for the link provider instance: etcd/49736647-7982-452d-9481-098bf8e6461c
Task 11 | 21:30:16 | Warning: DNS address not available for the link provider instance: etcd/d051341f-5cab-406c-9028-5e435d15fcb9
Task 11 | 21:30:16 | Warning: DNS address not available for the link provider instance: etcd/864bd57d-b195-4325-8939-3cf665f76f84
Task 11 | 21:30:16 | Warning: DNS address not available for the link provider instance: etcd/49736647-7982-452d-9481-098bf8e6461c
Task 11 | 21:30:16 | Warning: DNS address not available for the link provider instance: etcd/d051341f-5cab-406c-9028-5e435d15fcb9
Task 11 | 21:30:16 | Warning: DNS address not available for the link provider instance: master/c7a95105-e9d5-40c6-8bbc-672c96650284
Task 11 | 21:30:16 | Warning: DNS address not available for the link provider instance: master/b4372d8a-ab1e-4add-9f1f-8cf251906b47
Task 11 | 21:30:16 | Warning: DNS address not available for the link provider instance: master/c7a95105-e9d5-40c6-8bbc-672c96650284
Task 11 | 21:30:16 | Warning: DNS address not available for the link provider instance: master/b4372d8a-ab1e-4add-9f1f-8cf251906b47
Task 11 | 21:30:16 | Warning: DNS address not available for the link provider instance: master/c7a95105-e9d5-40c6-8bbc-672c96650284
Task 11 | 21:30:16 | Warning: DNS address not available for the link provider instance: master/b4372d8a-ab1e-4add-9f1f-8cf251906b47
Task 11 | 21:30:16 | Warning: DNS address not available for the link provider instance: master/c7a95105-e9d5-40c6-8bbc-672c96650284
Task 11 | 21:30:16 | Warning: DNS address not available for the link provider instance: master/b4372d8a-ab1e-4add-9f1f-8cf251906b47
Task 11 | 21:30:16 | Warning: DNS address not available for the link provider instance: etcd/864bd57d-b195-4325-8939-3cf665f76f84
Task 11 | 21:30:16 | Warning: DNS address not available for the link provider instance: etcd/49736647-7982-452d-9481-098bf8e6461c
Task 11 | 21:30:16 | Warning: DNS address not available for the link provider instance: etcd/d051341f-5cab-406c-9028-5e435d15fcb9
Task 11 | 21:30:16 | Warning: DNS address not available for the link provider instance: worker/b01dc389-ed90-4337-9966-0e6c8affd3e2
Task 11 | 21:30:16 | Warning: DNS address not available for the link provider instance: worker/e260602f-a98b-466d-8960-1b46061879f2
Task 11 | 21:30:16 | Warning: DNS address not available for the link provider instance: worker/f51b2907-d08c-4842-a3c2-21b6c3dd94b5
Task 11 | 21:30:16 | Preparing deployment: Preparing deployment (00:00:03)
Task 11 | 21:30:17 | Error: Unable to render instance groups for deployment. Errors are:

  • Unable to render jobs for instance group 'master'. Errors are:
    • Unable to render templates for job 'cloud-provider'. Errors are:
      • Failed to find variable '/kubobosh/mykubocluster-3/vcenter_user' from config server: HTTP code '404'
      • Failed to find variable '/kubobosh/mykubocluster-3/vcenter_password' from config server: HTTP code '404'
      • Failed to find variable '/kubobosh/mykubocluster-3/vcenter_ip' from config server: HTTP code '404'
      • Failed to find variable '/kubobosh/mykubocluster-3/vcenter_dc' from config server: HTTP code '404'
      • Failed to find variable '/kubobosh/mykubocluster-3/vcenter_ds' from config server: HTTP code '404'
      • Failed to find variable '/kubobosh/mykubocluster-3/vcenter_vms' from config server: HTTP code '404'
      • Failed to find variable '/kubobosh/mykubocluster-3/director_uuid' from config server: HTTP code '404'
  • Unable to render jobs for instance group 'worker'. Errors are:
    • Unable to render templates for job 'cloud-provider'. Errors are:
      • Failed to find variable '/kubobosh/mykubocluster-3/vcenter_user' from config server: HTTP code '404'
      • Failed to find variable '/kubobosh/mykubocluster-3/vcenter_password' from config server: HTTP code '404'
      • Failed to find variable '/kubobosh/mykubocluster-3/vcenter_ip' from config server: HTTP code '404'
      • Failed to find variable '/kubobosh/mykubocluster-3/vcenter_dc' from config server: HTTP code '404'
      • Failed to find variable '/kubobosh/mykubocluster-3/vcenter_ds' from config server: HTTP code '404'
      • Failed to find variable '/kubobosh/mykubocluster-3/vcenter_vms' from config server: HTTP code '404'
      • Failed to find variable '/kubobosh/mykubocluster-3/director_uuid' from config server: HTTP code '404'

Task 11 Started Fri Nov 3 21:30:13 UTC 2017
Task 11 Finished Fri Nov 3 21:30:17 UTC 2017
Task 11 Duration 00:00:04
Task 11 error

Updating deployment:
Expected task '11' to succeed but state is 'error'

Exit code 1

The BOSH director was deployed successfully, and the vCenter IP and credentials were filled in correctly at that time.
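When the director uses a config server (CredHub), every ((variable)) in the manifest must either already exist under the deployment's path or be supplied at deploy time. One way to supply them is a vars file; this is a sketch, and all values are placeholders:

```yaml
# Hypothetical vars file (vcenter-vars.yml); every value is a placeholder.
vcenter_user: administrator@vsphere.local
vcenter_password: change-me
vcenter_ip: 10.0.0.1
vcenter_dc: my-datacenter
vcenter_ds: my-datastore
vcenter_vms: kubo-vms
director_uuid: 00000000-0000-0000-0000-000000000000
```

Passed with `bosh -e kubobosh -d mykubocluster-3 deploy mykubo-3.yml -l vcenter-vars.yml`, the `-l` flag fills the variables client-side, so the director never has to look them up in the config server.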

Add generate_bosh_manifest script

Right now we generate the manifest and deploy the BOSH director immediately. In some cases a developer needs to customize the manifest prior to deployment.

Acceptance:

  • Run generate_bosh_manifest and save its output to a file
  • Apply the bosh_deployment/local_bosh_release.yml ops-file with a BOSH director release whose version differs from the generated one (e.g. 262)
  • Deploy BOSH using bosh-cli create-env
  • bosh-cli env should show that you are using the provided BOSH release
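The acceptance flow might look roughly like this; the script name, arguments, and paths are assumptions sketched from the existing kubo-deployment scripts, not an existing interface:

```shell
# Hypothetical workflow; generate_bosh_manifest does not exist yet.
./bin/generate_bosh_manifest ~/kubo-env/kubo > director.yml
# ...hand-edit director.yml, or apply an ops-file, e.g.:
bosh interpolate director.yml \
  -o bosh-deployment/local_bosh_release.yml > director-final.yml
bosh create-env director-final.yml --state ~/kubo-env/kubo/state.json
bosh -e <director-ip> env  # should report the provided BOSH release version
```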

director deployment fails trying to download dns-release

This is in a vSphere environment. Getting the following error:

Task 1 error

Processing release 'bosh-dns/0.0.7':
  Uploading remote release 'https://bosh.io/d/github.com/cloudfoundry/dns-release?v=0.0.7':
    Expected task '1' to succeed but state is 'error'

Log for task:
task_log.txt

The director VM comes up with 8.8.8.8 in /etc/resolv.conf, which is blocked on our network. Previously, with PowerDNS, we would get whatever was in dns_recursor_ip. I think this value needs to be picked up in some ops file (vsphere/cpi.yml?) so that environments where 8.8.8.8 is blocked still work.
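As a stopgap, the default resolvers can be overridden with an ops-file. This sketch assumes the director manifest defines its DNS list at /networks/name=default/subnets/0/dns (as bosh-deployment's bosh.yml does); the variable name is an assumption:

```yaml
# Hypothetical ops-file: replaces the hard-coded 8.8.8.8 resolver with an
# internal recursor supplied as a variable at create-env time.
- type: replace
  path: /networks/name=default/subnets/0/dns
  value:
  - ((dns_recursor_ip))
```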

Trying to deploy Kubo on vSphere failed

Hi all,
Running the command ./bin/deploy_k8s ~/kubo-env/kubo k8scls public failed with this error message:

Task 12 | 01:23:39 | Preparing deployment: Preparing deployment (00:00:04)
Task 12 | 01:23:48 | Error:

Can't use release 'kubo/0.8.0-dev.29'. It references packages without source code and are not compiled against stemcell 'bosh-vsphere-esxi-ubuntu-trusty-go_agent/3421.11':

  • 'cni/fb66deef2826ccd6c6c135dbc915094e6cef2ab6'
  • 'etcdctl/35165b48a3100f6f0e4af03c211f913dcf0055b2'
  • 'flanneld/69e5913473152bb3a97fee5ad5f237cb6b3becba'
  • 'golang/dd608878e7f3335773a316e718b07a7e5c3cd32b'
  • 'govc/02be57c077b9ed2a47481deb5f5dfa0d295ad242'
  • 'jq/a8a92d1eb93b806ff9e4f9e8daab4d0dec04b962'
  • 'kubernetes/4f4cc4d51b1ab75753e832be76154258deb4b17b'
  • 'pid_utils/96db60d4d683939fd187297035544c340e75d9a4'
  • 'route-sync/4d89a033084648e6143f4d94cf4a4c210f0bedea'
  • 'socat/44be0e2da76a8c3db0409993b168aa26d4bc3cd4'

Task 12 Started Fri Oct 13 01:23:39 UTC 2017
Task 12 Finished Fri Oct 13 01:23:48 UTC 2017
Task 12 Duration 00:00:09
Task 12 error

Updating deployment:
Expected task '12' to succeed but state is 'error'

Exit code 1
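A compiled release is only valid for the exact stemcell line it was compiled against, so this error usually means the uploaded kubo release was compiled for a different stemcell than bosh-vsphere-esxi-ubuntu-trusty-go_agent/3421.11. One way to check, assuming the bosh v2 CLI:

```shell
# Compare what the director has against what the deployment asks for.
bosh -e kubobosh stemcells
bosh -e kubobosh releases
# If the release is compiled for another stemcell, re-upload it from source
# (the path below is a placeholder) so the director can compile it itself:
bosh -e kubobosh upload-release <kubo-release-source.tgz>
```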

AWS Support

Hey, all.

I want to try out Kubo, and the best way for me to do that is to run it on AWS. Since the project is built on top of BOSH, I don't think running it on AWS is a problem, unless the Kubernetes release itself has some constraints.

Do you see any reason the project won't run on AWS?

Uploading stemcell timeout error

In my cloud account, uploading a stemcell takes around 7 minutes, but the code only allows 5 minutes and throws a timeout error.

creating stemcell (bosh-openstack-kvm-ubuntu-trusty-go_agent 3445.7):
CPI 'create_stemcell' method responded with error: CmdError{"type":"Bosh::Clouds::CloudError","message":"Timed out waiting for Image `24316d83-d90d-4643-9ef5-736e1decb15a' to be active","ok_to_retry":false}

Is there any way I can change the default timeout value?

Thanks,
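Given the "Timed out waiting for Image ... to be active" message, the wait is likely governed by the OpenStack CPI's state_timeout property (in seconds, defaulting to roughly the 5 minutes observed). An ops-file against the director manifest could raise it; the property name and path are assumptions:

```yaml
# Hypothetical ops-file: raises the OpenStack CPI's resource polling timeout
# from the ~300s default to 900s.
- type: replace
  path: /instance_groups/name=bosh/properties/openstack/state_timeout?
  value: 900
```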

Use pre-compiled releases when using scripts to deploy kubo

In order to reduce the time required to deploy Kubo,
we should use pre-compiled releases whenever possible.

Acceptance criteria:

  • A major version of a stemcell can be pre-defined as default for deploying the kubo release
  • The CI/CD pipeline for kubo tests kubo-deployment and kubo-release against that stemcell, and produces a publicly available pre-compiled release for that stemcell in addition to the current one.
  • Pre-compiled releases of kubo-etcd, docker-boshrelease and haproxy are guaranteed to be available at a public URL for the same major stemcell version as above.
  • When the public (default) option is used to deploy kubo with the deployment scripts, and the major stemcell version matches the pre-defined default, the pre-compiled releases will be used for deployment.

Deployment Error when using two Cluster Setup (vSphere)

I could not deploy a Kubernetes Cluster with Kubo in the following setup:

  • Mgmt-Edge Cluster with local Storage
  • Compute Cluster with VSAN Storage
  • Distributed Switch across both Clusters with a Management Port Group

I deployed BOSH on Mgmt-Edge with local storage; that worked.
Then I tried to deploy a Kubernetes cluster on Compute; that failed.

In create-cloud-config.sh and create-kubo-deployment.sh (I followed the HOWTOs) there is no storage specified, so it takes the storage from the -v vcenter_ds=ds-esxi6-1 entry in deploy-bosh.sh. That is wrong, because on the Compute cluster I only have VSAN storage available. Is it possible to specify the storage for my Kubo deployment explicitly?

P.S. Using the same storage (e.g. NFS in my setup) across both clusters, I guess the deployment would work. I did not test that, because for the moment I just use the Compute cluster for everything, which works fine.
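One way this could be expressed is in the cloud-config rather than the deployment, assuming the vSphere CPI version in use supports a datastores list in vm_type cloud_properties; all names below are placeholders:

```yaml
# Hypothetical cloud-config fragment: pins Kubo VMs to the compute cluster's
# VSAN datastore instead of the director-level vcenter_ds default.
vm_types:
- name: worker
  cloud_properties:
    cpu: 2
    ram: 4096
    disk: 10240
    datastores:
    - vsan-compute-ds
```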

Please configure GITBOT

Pivotal uses GITBOT to synchronize Github issues and pull requests with Pivotal Tracker.
Please add your new repo to the GITBOT config-production.yml in the Gitbot configuration repo.
If you don't have access you can send an ask ticket to the CF admins. We prefer teams to submit their changes via a pull request.

Steps:

  • Fork this repo: cfgitbot-config
  • Add your project to config-production.yml file
  • Submit a PR

If there are any questions, please reach out to [email protected].

Action Failed get_task: 1 of 1 post-start scripts failed. Failed Jobs: kubelet.

I have a running PCF on GCP, with the prerequisites detailed here. I followed the instructions for "Deploy Kubo with an existing Pivotal Cloud Foundry installation". KuBOSH deploys beautifully, and the k8s deploy looks healthy until this error. I'm including the task output from the deploy; let me know if anything else is needed.

Task 5
14:06:43 | Preparing deployment: Preparing deployment (00:00:02)
14:06:49 | Preparing package compilation: Finding packages to compile (00:00:00)
14:06:49 | Compiling packages: nginx/8529a0248c092a9a5d6c25c2418baa40713bd6be
14:06:49 | Compiling packages: golang/dd608878e7f3335773a316e718b07a7e5c3cd32b
14:06:49 | Compiling packages: docker/cef2cbc7b3e2898f19426f4a78556ecff9eb9bb0
14:06:49 | Compiling packages: bosh-helpers/f469b9d9cd643a3e706ca661deea572de6e522b4 (00:01:32)
14:08:21 | Compiling packages: cni/fb66deef2826ccd6c6c135dbc915094e6cef2ab6
14:08:30 | Compiling packages: docker/cef2cbc7b3e2898f19426f4a78556ecff9eb9bb0 (00:01:41)
14:08:30 | Compiling packages: flanneld/69e5913473152bb3a97fee5ad5f237cb6b3becba
14:08:40 | Compiling packages: cni/fb66deef2826ccd6c6c135dbc915094e6cef2ab6 (00:00:19)
14:08:40 | Compiling packages: etcdctl/35165b48a3100f6f0e4af03c211f913dcf0055b2
14:08:45 | Compiling packages: flanneld/69e5913473152bb3a97fee5ad5f237cb6b3becba (00:00:15)
14:08:45 | Compiling packages: jq/a8a92d1eb93b806ff9e4f9e8daab4d0dec04b962
14:08:53 | Compiling packages: golang/dd608878e7f3335773a316e718b07a7e5c3cd32b (00:02:04)
14:08:53 | Compiling packages: kubernetes/cc9d2c04acabe975800122fa0a5b1cd4c2a65bae
14:08:56 | Compiling packages: etcdctl/35165b48a3100f6f0e4af03c211f913dcf0055b2 (00:00:16)
14:08:56 | Compiling packages: pid_utils/96db60d4d683939fd187297035544c340e75d9a4
14:08:57 | Compiling packages: nginx/8529a0248c092a9a5d6c25c2418baa40713bd6be (00:02:08)
14:08:57 | Compiling packages: golang1.7/65a8c296d8c2319d411165f5dc837a63b3846c7b
14:09:00 | Compiling packages: jq/a8a92d1eb93b806ff9e4f9e8daab4d0dec04b962 (00:00:15)
14:09:00 | Compiling packages: etcd/36afa1e6f720df5501c57710d5cfbfc9d8f52fba
14:09:10 | Compiling packages: pid_utils/96db60d4d683939fd187297035544c340e75d9a4 (00:00:14)
14:09:10 | Compiling packages: etcd-common/0f365b3a98184c2a6537efd51f67e8d5e9d2c486
14:09:19 | Compiling packages: etcd/36afa1e6f720df5501c57710d5cfbfc9d8f52fba (00:00:19)
14:09:19 | Compiling packages: route-sync/4d89a033084648e6143f4d94cf4a4c210f0bedea
14:09:25 | Compiling packages: etcd-common/0f365b3a98184c2a6537efd51f67e8d5e9d2c486 (00:00:15)
14:09:35 | Compiling packages: golang1.7/65a8c296d8c2319d411165f5dc837a63b3846c7b (00:00:38)
14:09:35 | Compiling packages: etcd-consistency-checker/c8d9275bf004706858bcb6274ea7558aac8542c7
14:09:35 | Compiling packages: etcd-dns-checker/60b7621d73d52e1cb4cd64f80cb0416605665744 (00:00:24)
14:10:00 | Compiling packages: etcd-consistency-checker/c8d9275bf004706858bcb6274ea7558aac8542c7 (00:00:25)
14:10:11 | Compiling packages: kubernetes/cc9d2c04acabe975800122fa0a5b1cd4c2a65bae (00:01:18)
14:10:47 | Compiling packages: route-sync/4d89a033084648e6143f4d94cf4a4c210f0bedea (00:01:28)
14:13:46 | Creating missing vms: etcd/c96228a7-8678-4246-89c3-0efb58f37b63 (0)
14:13:46 | Creating missing vms: etcd/5dc4e4cf-2fee-4fed-9fd6-bb5efdfd5b04 (2)
14:13:46 | Creating missing vms: etcd/c50dcf00-931c-4475-8147-edd6020622a4 (1)
14:13:46 | Creating missing vms: master/2b637083-019d-42ed-8a98-71e21835ef8e (0)
14:13:46 | Creating missing vms: master/1181d3fa-8fa6-4919-babf-b590727a02a5 (1)
14:13:46 | Creating missing vms: worker/aca2bddf-59b1-4802-a9c5-e09aa09a0efd (0)
14:13:46 | Creating missing vms: worker/b94cd112-7eeb-440e-aaf7-e86646fe87bc (1)
14:13:46 | Creating missing vms: worker/f4eaa97c-acd9-4f78-9ac3-c4e4f80086ab (2)
14:13:46 | Creating missing vms: proxy/209b335b-caa6-4892-a029-d87fcc0dd698 (0)
14:14:52 | Creating missing vms: master/1181d3fa-8fa6-4919-babf-b590727a02a5 (1) (00:01:06)
14:14:54 | Creating missing vms: proxy/209b335b-caa6-4892-a029-d87fcc0dd698 (0) (00:01:08)
14:14:54 | Creating missing vms: etcd/5dc4e4cf-2fee-4fed-9fd6-bb5efdfd5b04 (2) (00:01:08)
14:14:54 | Creating missing vms: etcd/c96228a7-8678-4246-89c3-0efb58f37b63 (0) (00:01:08)
14:14:56 | Creating missing vms: worker/aca2bddf-59b1-4802-a9c5-e09aa09a0efd (0) (00:01:10)
14:14:56 | Creating missing vms: etcd/c50dcf00-931c-4475-8147-edd6020622a4 (1) (00:01:10)
14:14:57 | Creating missing vms: master/2b637083-019d-42ed-8a98-71e21835ef8e (0) (00:01:11)
14:14:57 | Creating missing vms: worker/f4eaa97c-acd9-4f78-9ac3-c4e4f80086ab (2) (00:01:11)
14:14:57 | Creating missing vms: worker/b94cd112-7eeb-440e-aaf7-e86646fe87bc (1) (00:01:11)
14:14:57 | Updating instance etcd: etcd/c96228a7-8678-4246-89c3-0efb58f37b63 (0) (canary) (00:00:53)
14:15:50 | Updating instance etcd: etcd/5dc4e4cf-2fee-4fed-9fd6-bb5efdfd5b04 (2) (00:01:02)
14:16:52 | Updating instance etcd: etcd/c50dcf00-931c-4475-8147-edd6020622a4 (1) (00:00:52)
14:17:44 | Updating instance master: master/2b637083-019d-42ed-8a98-71e21835ef8e (0) (canary) (00:00:41)
14:18:25 | Updating instance master: master/1181d3fa-8fa6-4919-babf-b590727a02a5 (1) (00:00:40)
14:19:05 | Updating instance worker: worker/aca2bddf-59b1-4802-a9c5-e09aa09a0efd (0) (canary) (00:03:57)
14:23:02 | Error: Action Failed get_task: Task 75bce522-4fba-4295-7b8c-d63d86f7dcc6 result: 1 of 1 post-start scripts failed. Failed Jobs: kubelet.

Started Tue Apr 25 14:06:17 UTC 2017
Finished Tue Apr 25 14:23:02 UTC 2017
Duration 00:16:45

Task 5 error

Updating deployment:
Expected task '5' to succeed but state is 'error'

Exit code 1
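To see why kubelet's post-start check failed, the job logs on the failing worker are the place to look. A sketch assuming the bosh v2 CLI and the standard BOSH log layout (the deployment name is a placeholder):

```shell
# Pull all logs from the failing worker (instance ID from the task output above):
bosh -e <env> -d <deployment> logs worker/aca2bddf-59b1-4802-a9c5-e09aa09a0efd
# Or inspect the post-start output directly on the VM:
bosh -e <env> -d <deployment> ssh worker/aca2bddf-59b1-4802-a9c5-e09aa09a0efd \
  -c 'sudo tail -n 50 /var/vcap/sys/log/kubelet/post-start.std*.log'
```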

dashboard and service access

Is it possible to register the dashboard and service endpoints with the CF router?
Or is there any project working on that?

Restructure Kubo deployment documentation

This is a documentation structure story only.

README.md structure

    1. paving - with bbl (terraform)
       1a, 1b, 1c - for each IAAS
    2. deploy bosh - with bbl or something else (deploy director)
       2a, 2b, 2c - for each IAAS (deploy_bosh script goes away)
    3. deploy kubo (deploy_k8s)
    4. get kubernetes credentials (kubeconfig)
    5. access kubernetes with kubectl
    6. deploy an example service/app with kubectl

Acceptance Criteria

  • I am able to follow the documentation steps to deploy Kubo on each IAAS (GCP, vSphere) ...

Azure Support

Please provide a story for how to run Kubo together with PCF on Azure (even if Azure Container Services might not be available).

Gaps in paving GCP docs

The first set of shell commands includes

export network=<An existing GCP network for deploying kubo>

I have no idea what that means. Could there be a link to some GCP docs, or something else to tell me what to do?

Similarly

Make sure that the IP prefix below denotes a free CIDR range

How would I know if it was free?
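Both questions can be answered from the gcloud CLI, assuming the Google Cloud SDK is installed and authenticated against the target project:

```shell
# Existing networks you could export as "network":
gcloud compute networks list
# Subnet CIDR ranges already in use, to check that your chosen prefix is free:
gcloud compute networks subnets list
```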

Update to credhub 1.0.0

Why
With credhub 0.8.0 we will be able to generate and reference certificates in the kubo manifest instead of using the credhub cli. See this PR.

Acceptance

  • I can deploy Kubo using credhub version 0.8.0
  • I can access the cluster and deploy a service

failed to deploy

I got an error while deploying kubo release 0.0.4 on OpenStack.

I set kubernetes-api-url in kubernetes-system-specs.properties.

My YAML:

name: kubo
director_uuid: dae9452b-0148-4cad-a860-6925024ad55a

releases:
- name: etcd
  version: 108+dev.2
  url: https://storage.googleapis.com/kubo-etcd/kubo-etcd-release.108%2Bdev.2.tgz
  sha1: 6f452dae3d6399b38cbdee983847b81f88fc8163
- name: kubo
  version: latest
- name: docker
  version: 28.0.1
  url: https://bosh.io/d/github.com/cf-platform-eng/docker-boshrelease?v=28.0.1
  sha1: 448eaa2f478dc8794933781b478fae02aa44ed6b

stemcells:
- alias: trusty
  os: ubuntu-trusty
  version: latest

instance_groups:
- name: etcd
  instances: 1
  networks:
  - name: &network-name kubo_network
  azs: [default]
  jobs:
  - name: etcd
    release: etcd
    properties:
      etcd:
        require_ssl: false
        peer_require_ssl: false
  stemcell: trusty
  vm_type: small
  persistent_disk_type: 5g

- name: master
  instances: 1
  networks:
  - name: *network-name
  azs: [default]
  jobs:
  - name: kubernetes-api
    release: kubo
    properties:
      admin-username: admin
      admin-password: xxxx
      kubelet-password: xxxx
      tls:
        kubernetes: xxx

  - name: kubeconfig
    release: kubo
    properties:
      kubernetes-api-url: &kubo_url "https://xxx:8443"
      kubelet-password: xxxx
      tls:
        kubernetes: xxx
  - name: kubernetes-controller-manager
    release: kubo
  - name: kubernetes-scheduler
    release: kubo
  - name: kubernetes-system-specs
    release: kubo
    properties:
      kubernetaes-api-url: *kubo_url
  stemcell: trusty
  vm_type: small
...

Error Message

Are you sure you want to deploy? (type 'yes' to continue): yes

Director task 881975
Started preparing deployment > Preparing deployment. Done (00:00:00)

Error 100: Unable to render instance groups for deployment. Errors are:

  • Unable to render jobs for instance group 'master'. Errors are:
    • Unable to render templates for job 'kubernetes-system-specs'. Errors are:
      • Error filling in template 'heapster.yml.erb' (line 21: Can't find property '["kubernetes-api-url"]')
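The error matches a typo visible in the manifest above: under kubernetes-system-specs the property is spelled kubernetaes-api-url, so the heapster template cannot find kubernetes-api-url. The corrected fragment:

```yaml
  - name: kubernetes-system-specs
    release: kubo
    properties:
      kubernetes-api-url: *kubo_url
```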

post-start script failing on kubelet

Hi,

I'm currently following the GCP guide for deploying kubo, but I'm running into an issue with the post-start script of the kubelet job.

The post-start.stderr.log reports:

...
Error from server (NotFound): nodes "vm-7d3507f5-872e-4e6c-7b6a-f22890688cdc.c.<my project id>.internal" not found
Error from server (NotFound): nodes "vm-7d3507f5-872e-4e6c-7b6a-f22890688cdc.c.<my project id>.internal" not found
( repeats )

The post-start.stdout.log reports:

...
loading cached container: /var/vcap/packages/kubernetes/container-images/kubernetes_heapster_influxdb:v0.6.tgz
kubelet failed post-start checks after 120 seconds
...

Seems like it's having trouble with some internal DNS? @zachgersh and I are willing to remote pair on this if you have some time.

This is using the latest kubo-release and we are following all the steps outlined in the guide (without customization).

Thanks,

Kevin and @zachgersh

Failed to deploy

I got an error while deploying kubo 0.0.4 on OpenStack.

Following is worker/0's flanneld_ctl.stderr.log:

Error:  client: etcd cluster is unavailable or misconfigured
error #0: dial tcp: lookup 9ce829f9-bebb-4b3d-8546-ad982f38920a.etcd.kubo-network.kubo.bosh on 192.168.17.12:53: no such host

My DNS status

worker/966fb1f2-fe36-4a1f-af3a-674310cf910d:/var/vcap/sys/log/flanneld# nc -v 192.168.17.12 53
Connection to 192.168.17.12 53 port [tcp/domain] succeeded!

My etcd health

etcd/9ce829f9-bebb-4b3d-8546-ad982f38920a:/var/vcap/jobs/etcd/bin# /var/vcap/packages/etcd/etcdctl cluster-health
member f7c77ad6832e05a9 is healthy: got healthy result from http://172.16.21.7:4001
cluster is healthy
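The etcd cluster itself is healthy, so the failure is in resolving the BOSH DNS name for the etcd instance. One way to narrow it down, assuming dig is available on the worker:

```shell
# Ask the same DNS server the worker uses for the record flanneld could not resolve:
dig @192.168.17.12 9ce829f9-bebb-4b3d-8546-ad982f38920a.etcd.kubo-network.kubo.bosh
# NXDOMAIN here would confirm the record is missing from that server rather than
# a connectivity problem (nc already showed port 53 is reachable).
```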

BOSH deploy error log

Director task 886116
Started preparing deployment > Preparing deployment. Done (00:00:01)

Started preparing package compilation > Finding packages to compile. Done (00:00:00)

Started compiling packages
Started compiling packages > cni/fb66deef2826ccd6c6c135dbc915094e6cef2ab6
Started compiling packages > flanneld/69e5913473152bb3a97fee5ad5f237cb6b3becba
Started compiling packages > etcdctl/35165b48a3100f6f0e4af03c211f913dcf0055b2
Started compiling packages > jq/a8a92d1eb93b806ff9e4f9e8daab4d0dec04b962
Started compiling packages > kubernetes/cc9d2c04acabe975800122fa0a5b1cd4c2a65bae
Done compiling packages > jq/a8a92d1eb93b806ff9e4f9e8daab4d0dec04b962 (00:01:12)
Started compiling packages > pid_utils/96db60d4d683939fd187297035544c340e75d9a4
Done compiling packages > flanneld/69e5913473152bb3a97fee5ad5f237cb6b3becba (00:01:12)
Done compiling packages > etcdctl/35165b48a3100f6f0e4af03c211f913dcf0055b2 (00:01:17)
Done compiling packages > pid_utils/96db60d4d683939fd187297035544c340e75d9a4 (00:00:05)
Done compiling packages > cni/fb66deef2826ccd6c6c135dbc915094e6cef2ab6 (00:01:20)
Done compiling packages > kubernetes/cc9d2c04acabe975800122fa0a5b1cd4c2a65bae (00:02:13)
Done compiling packages (00:02:13)

Started creating missing vms
Started creating missing vms > etcd/9ce829f9-bebb-4b3d-8546-ad982f38920a (0)
Started creating missing vms > master/e89abc82-a44f-45e3-9f86-4b50589ee288 (0)
Started creating missing vms > worker/966fb1f2-fe36-4a1f-af3a-674310cf910d (0). Done (00:00:55)
Done creating missing vms > master/e89abc82-a44f-45e3-9f86-4b50589ee288 (0) (00:00:57)
Done creating missing vms > etcd/9ce829f9-bebb-4b3d-8546-ad982f38920a (0) (00:00:58)
Done creating missing vms (00:00:58)

Started updating instance etcd > etcd/9ce829f9-bebb-4b3d-8546-ad982f38920a (0) (canary). Done (00:00:46)
Started updating instance master > master/e89abc82-a44f-45e3-9f86-4b50589ee288 (0) (canary). Done (00:00:56)
Started updating instance worker > worker/966fb1f2-fe36-4a1f-af3a-674310cf910d (0) (canary). Failed: 'worker/0 (966fb1f2-fe36-4a1f-af3a-674310cf910d)' is not running after update. Review logs for failed jobs: flanneld, docker, kubelet (00:11:14)

Error 400007: 'worker/0 (966fb1f2-fe36-4a1f-af3a-674310cf910d)' is not running after update. Review logs for failed jobs: flanneld, docker, kubelet

job status

RSA 1024 bit CA certificates are loaded due to old openssl compatibility
Acting as user 'admin' on deployment 'kubo' on 'my-bosh'

Director task 886640

Task 886640 done

+--------------------------------------------------+---------+---------+---------+--------------+
| Instance | State | AZ | VM Type | IPs |
+--------------------------------------------------+---------+---------+---------+--------------+
| etcd/0 (9ce829f9-bebb-4b3d-8546-ad982f38920a)* | running | default | small | 172.16.21.7 |
| etcd | running | | | |
| etcd_consistency_checker | running | | | |
+--------------------------------------------------+---------+---------+---------+--------------+
| master/0 (e89abc82-a44f-45e3-9f86-4b50589ee288)* | running | default | small | 172.16.21.9 |
| kubernetes-api | running | | | |
| kubernetes-controller-manager | running | | | |
| kubernetes-scheduler | running | | | |
+--------------------------------------------------+---------+---------+---------+--------------+
| worker/0 (966fb1f2-fe36-4a1f-af3a-674310cf910d)* | failing | default | small | 172.16.21.14 |
| flanneld | failing | | | |
| docker | unknown | | | |
| kubelet | unknown | | | |
| kubernetes-proxy | running | | | |
+--------------------------------------------------+---------+---------+---------+--------------+

my manifest

name: kubo
director_uuid: dae9452b-0148-4cad-a860-6925024ad55a

releases:
- name: etcd
  version: 108+dev.2
  url: https://storage.googleapis.com/kubo-etcd/kubo-etcd-release.108%2Bdev.2.tgz
  sha1: 6f452dae3d6399b38cbdee983847b81f88fc8163
- name: kubo
  version: latest
- name: docker
  version: 28.0.1
  url: https://bosh.io/d/github.com/cf-platform-eng/docker-boshrelease?v=28.0.1
  sha1: 448eaa2f478dc8794933781b478fae02aa44ed6b

stemcells:
- alias: trusty
  os: ubuntu-trusty
  version: latest

instance_groups:
- name: etcd
  instances: 1
  networks:
  - name: &network-name kubo_network
  azs: [default]
  jobs:
  - name: etcd
    release: etcd
    properties:
      etcd:
        require_ssl: false
        peer_require_ssl: false
  stemcell: trusty
  vm_type: small
  persistent_disk_type: 5g

- name: master
  instances: 1
  networks:
  - name: *network-name
  azs: [default]
  jobs:
  - name: kubernetes-api
    release: kubo
    properties:
      admin-username: admin
      admin-password: aaa
      kubelet-password: bbb
      tls:
        kubernetes: xxx
  - name: kubeconfig
    release: kubo
    properties:
      kubernetes-api-url: &kubo_url "https://192.168.17.88:8443"
      kubelet-password: bbb
      tls:
        kubernetes: xxx
  - name: kubernetes-controller-manager
    release: kubo
  - name: kubernetes-scheduler
    release: kubo
  - name: kubernetes-system-specs
    release: kubo
    properties:
      kubernetes-api-url: *kubo_url
  stemcell: trusty
  vm_type: small

- name: worker
  instances: 1
  networks:
  - name: *network-name
  azs: [default]
  jobs:
  - name: flanneld
    release: kubo
  - name: docker
    release: docker
    properties:
      docker:
        flannel: true
        iptables: false
        ip_masq: false
        log_level: error
        storage_driver: overlay
      env: {}
  - name: kubeconfig
    release: kubo
    properties:
      kubernetes-api-url: *kubo_url
      kubelet-password: bbb
      tls:
        kubernetes: xxx
  - name: kubelet
    release: kubo
    properties:
      kubernetes-api-url: *kubo_url
      tls:
        kubelet: xxx
  - name: kubernetes-proxy
    release: kubo
    properties:
      kubernetes-api-url: *kubo_url
  stemcell: trusty
  vm_type: small
  persistent_disk_type: 10g

update:
  canaries: 1
  canary_watch_time: 10000-600000
  max_in_flight: 1
  serial: true
  update_watch_time: 10000-600000

variables:
- name: kubo-admin-password
  type: aaa
- name: kubelet-password
  type: bbb
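Note that the variables block in this manifest uses the literal passwords as the variable type. In a BOSH manifest, type tells the config server how to generate the value (e.g. password or certificate); a corrected sketch:

```yaml
variables:
- name: kubo-admin-password
  type: password
- name: kubelet-password
  type: password
```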

Expected to find a map at path '/ca' but found '<nil>'

I followed all the directions to install Kubo, got to the very end, tried to use it, and got this message.

I'm including all the output from the install and from trying to set up the environment.

Let me know what other info you might need.

ashafer@bosh-bastion:~/kubo-deployment$ bin/deploy_bosh ${state_dir} ${service_account_creds}

| KuBOSH Deployer |

Deployment manifest: '/home/ashafer/kubo-deployment/bosh-deployment/bosh.yml'
Deployment state: '/home/ashafer/kubo-env/kube/state.json'
Started validating
Downloading release 'bosh'... Finished (00:00:06)
Validating release 'bosh'... Finished (00:00:03)
Downloading release 'bosh-google-cpi'... Finished (00:00:06)
Validating release 'bosh-google-cpi'... Finished (00:00:03)
Downloading release 'uaa'... Finished (00:00:04)
Validating release 'uaa'... Finished (00:00:01)
Downloading release 'credhub'... Finished (00:00:04)
Validating release 'credhub'... Finished (00:00:06)
Validating cpi release... Finished (00:00:00)
Validating deployment manifest... Finished (00:00:00)
Downloading stemcell... Finished (00:00:00)
Validating stemcell... Finished (00:00:00)
Finished validating (00:00:39)
Started installing CPI
Compiling package 'golang/b7c20853a73ad56fad8d60b980503e22d3ac7f43'... Finished (00:00:22)
Compiling package 'bosh-google-cpi/4ad0377566e35d667396164d6f210b3723c61a63'... Finished (00:00:28)
Installing packages... Finished (00:00:03)
Rendering job templates... Finished (00:00:00)
Installing job 'google_cpi'... Finished (00:00:00)
Finished installing CPI (00:00:54)

Starting registry... Finished (00:00:00)
Uploading stemcell 'bosh-google-kvm-ubuntu-trusty-go_agent/3312.15'... Finished (00:00:55)

Started deploying
Creating VM for instance 'bosh/0' from stemcell 'stemcell-71314557-e838-4559-5e95-e683b04f7818'... Finished (00:00:52)
Waiting for the agent on VM 'vm-511fc259-d3fe-4d82-63a6-6c3b2aeb5d7b' to be ready... Finished (00:00:19)
Creating disk... Finished (00:00:02)
Attaching disk 'disk-deaf98f0-db01-4f14-7e21-51c82cf2bf19' to VM 'vm-511fc259-d3fe-4d82-63a6-6c3b2aeb5d7b'... Finished (00:00:23)
Rendering job templates... Finished (00:00:03)
Compiling package 'ruby/c1086875b047d112e46756dcb63d8f19e63b3ac4'... Finished (00:03:03)
Compiling package 'mysql/b7e73acc0bfe05f1c6cbfd97bf92d39b0d3155d5'... Finished (00:00:39)
Compiling package 'libpq/661f5817afe24fa2f18946d2757bff63246b1d0d'... Finished (00:00:22)
Compiling package 'openjdk_1.8.0/2b211db4acf8cb063c414f55cced7e0a7f955d52'... Finished (00:00:35)
Compiling package 'golang/b7c20853a73ad56fad8d60b980503e22d3ac7f43'... Finished (00:00:31)
Compiling package 's3cli/f87a0db2430a633402b76704fda9328d22fea7ad'... Finished (00:00:02)
Compiling package 'uaa/d5df660429556d1094c0d333de4b344f9d5959bf'... Skipped [Package already compiled] (00:00:04)
Compiling package 'davcli/5f08f8d5ab3addd0e11171f739f072b107b30b8c'... Finished (00:00:02)
Compiling package 'director/7bce891171c3d4b16b26ed5dc55fa7045fa553da'... Finished (00:02:09)
Compiling package 'credhub/9e5470c81ed7df77e6645a7178c3aa28c4d16f05'... Finished (00:00:13)
Compiling package 'nginx/2ec2f63293bf6f544e95969bf5e5242bc226a800'... Finished (00:01:00)
Compiling package 'bosh-google-cpi/4ad0377566e35d667396164d6f210b3723c61a63'... Finished (00:00:34)
Compiling package 'nats/63ae42eb73527625307ff522fb402832b407321d'... Finished (00:00:20)
Compiling package 'verify_multidigest/8fc5d654cebad7725c34bb08b3f60b912db7094a'... Finished (00:00:02)
Compiling package 'lunaclient/b922e045db5246ec742f0c4d1496844942d6167a'... Finished (00:00:02)
Compiling package 'health_monitor/fd2714251a49b15f5b51a0f28329440303fbdf8c'... Finished (00:01:35)
Compiling package 'postgres/3b1089109c074984577a0bac1b38018d7a2890ef'... Finished (00:04:12)
Compiling package 'postgres-9.4/ded764a075ae7513d4718b7cf200642fdbf81ae4'... Finished (00:04:56)
Compiling package 'powerdns/7e06981d3adfda288c7056dac2da293a1597909e'... Finished (00:00:02)
Compiling package 'dsm-client/2dd1bab877ef2c1c3ea1919c357c3824e2b4baf8'... Finished (00:00:01)
Compiling package 'uaa_utils/20557445bf996af17995a5f13bf5f87000600f2e'... Skipped [Package already compiled] (00:00:00)
Updating instance 'bosh/0'... Finished (00:02:39)
Waiting for instance 'bosh/0' to be running... Finished (00:02:18)
Running the post-start scripts 'bosh/0'... Finished (00:00:03)
Finished deploying (00:27:21)

Stopping registry... Finished (00:00:00)
Cleaning up rendered CPI jobs... Finished (00:00:00)

Succeeded
Warning: The targeted TLS certificate has not been verified for this connection.
Setting the target url: https://10.0.1.6:8844
Login Successful
CA not found. Please validate your input and retry your request.
Type: root
Name: default
Certificate: -----BEGIN CERTIFICATE-----
MIICxzCCAa+gAwIBAgIUAcWmpoHr1gi3KBLnem2VaFLGiGswDQYJKoZIhvcNAQEL
BQAwEzERMA8GA1UEAwwIMTAuMC4xLjYwHhcNMTcwMzE0MDIxNDQxWhcNMTgwMzE0
MDIxNDQxWjATMREwDwYDVQQDDAgxMC4wLjEuNjCCASIwDQYJKoZIhvcNAQEBBQAD
ggEPADCCAQoCggEBAMxnBlMg1FZPkJIGukJ2h2GVJVBUeFrym65QG3IGFlp5nfwJ
U2vajym/1s3NsHnzwAC0I4kneJGHLisKeyGEyCntxaXLUrOGHaXLvv56+Qf//PR0
lD3m0qheCyMGv5XKKI4OvJlAEr5SRqxdnht0u0FZJPzENnPsjl4nH129JFV3fxW6
TnaeEl2MuXg+FtyiKZbdI/fZ2DkJI6/jFgP96OFd7qMmRYoLlAbHEM/WjDc6A9HG
4xL+GV+wcCkOIK+QPN09f+643OdSCIpxS9vCqVvdLjfsZOAbt0V1bzsDzoMe5W0Z
h4ESl6RRew0/ahiJogLrQr+XxpHEw7nnDoc8dPECAwEAAaMTMBEwDwYDVR0TAQH/
BAUwAwEB/zANBgkqhkiG9w0BAQsFAAOCAQEAWpaT3LBCqDbTSeiMteCPP3zBLO4r
3MkYBSm5uyMSg7TgYYYwpDscOW9yCD5mVzFCxSLSgBFxXxqR00mFXhk9wxqyi+El
Kqbx4iqPZuOC3EAfBfqj8T3yoH8obzhzBp35HeRJiz/CSrsEaeD2voPr540wBSHW
PUEPZnWo3RwbcxPCRc5GfvVMArjqUYvQ8UyqmCm29Qq5FP4D21+usvYv3Ezc1ch6
yP88Es7abHfEQTbsGOKZdcC7P85ao/tOV8S9FptKBrVSk4VQPdHvw7S8ZuUjRfak
MKt/mKVXs9tHK9cSNYx1SlvQPJUKEqw10yaWDchkLE0m0UzI4OL3Ww7KkQ==
-----END CERTIFICATE-----

Private Key: -----BEGIN RSA PRIVATE KEY-----
MIIEpAIBAAKCAQEAzGcGUyDUVk+Qkga6QnaHYZUlUFR4WvKbrlAbcgYWWnmd/AlT
a9qPKb/Wzc2wefPAALQjiSd4kYcuKwp7IYTIKe3FpctSs4Ydpcu+/nr5B//89HSU
PebSqF4LIwa/lcoojg68mUASvlJGrF2eG3S7QVkk/MQ2c+yOXicfXb0kVXd/FbpO
dp4SXYy5eD4W3KIplt0j99nYOQkjr+MWA/3o4V3uoyZFiguUBscQz9aMNzoD0cbj
Ev4ZX7BwKQ4gr5A83T1/7rjc51IIinFL28KpW90uN+xk4Bu3RXVvOwPOgx7lbRmH
gRKXpFF7DT9qGImiAutCv5fGkcTDuecOhzx08QIDAQABAoIBAQCZ94jmGTaZBTnr
JTIsWkhEEyqWReqa52CpfyINU9SGtlFwxj2WYn2wfxb401V5p0gbv5V8/MRvKpp2
RWDWsMRuAPL+nhdfr0ip2L23xz3K7uLF6QK5ViOcO6q76Ztq42qFB0i9T2xO/H7L
24D7QYTEBrg7xjkWPTxIY8PWwmCaFx1psPBz/sRHUChkIGdtXlfcc1JmhNL/EB5/
RfYgceMMfzPx2ofbOt+Tfjqt9lCrR/3sQn6mGN9CIKBIgZmOPAqaXRBf6xZDqYxc
OT5BQzmcN+709FR/gmoW9FUwUtiIbZVefu/4+r5s4zGXh688jrgzHbRW7RUTEUSf
8pDOcNyhAoGBAPebVKm7gNJUXuQeaFmuskx7jL2xoddsYqxsd+LTwIy4pSHORPVi
kBIJCnQHfcs8LuZxZpBIQwg/do9Npb89xfvQNai0+bQ+Dm0uNP36NPy7qyYdMeia
v4y1D0zkcQFav5W3CNGVcpicd6E2Ml1i3qii7K0HlldP0T46Zkyoc8KlAoGBANNU
xxqT2g08XDS50bJHuMOME84CztapGe8mmrRftcP3LQVfHpmQpVJPuX1dxjYjqkpb
PX28vfbBniXiS4ba+uG8D91MVdMnkb7nlthT5uqpIK80oTvQFamuOakM4jjT/90I
cdEQJlTDc+14NQ8YBWjmk2CaXegwTBM1RJPwopNdAoGBAJb9JZeLO3cG9AZvdHqb
ySZSgPR8CZDwCwvR6RlsvxIQ1sHSosJwJCKbWMCAgPkZ7g+gP0bkidvRt16TnusL
pFt2EAKcuVhsLyfs8Wue1Aj599f6HaEWHJCVKItfEnoc+I83Wi1T0Nm3MEwiXHwN
+nEjSOgKpGcByTsFKbS9VDnxAoGAJqw391QhLhTipr9ucVqQpDBJG4UGBuBRH6OH
4gQ1xhPAiGAcwGto5YQzZI65jATAz/ScbxsQBEzwPOyJd7cw/AgnOw8SEZ8HG9FT
mGjaNA0ZLxbJfqGYpUF9ycLSzyV0iCVYdrKm4RIXb9h0lTuHGehABgiZsLjN4yH3
V79McP0CgYAafkvAhLDPHK2wy4/Sa1/+AG15g2VZJyOhjFNo4SFQnMjMIuP2PdRA
+gLniPJFIfG+qdeY4DMZpeOdo1V/PxO4cyB0GS1puHCYNfCdwi4utkzhABkODIs2
TINpDiXpp1WVhJ6fiS7OBnpQLOZN0YE4pYsuhJCaB4CB8cG19bIAvg==
-----END RSA PRIVATE KEY-----

Updated: 2017-03-14T02:14:41Z
Using environment '10.0.1.6' as client 'bosh_admin'

Name kube
UUID 3fd1a749-35d8-4d2d-90b0-19098dfdeaa4
Version 261.2.0 (00000000)
CPI google_cpi
Features compiled_package_cache: disabled
config_server: enabled
dns: enabled
snapshots: disabled
User bosh_admin

Succeeded
ashafer@bosh-bastion:~/kubo-deployment$ bin/set_kubeconfig ${state_dir} kube
Credential not found. Please validate your input and retry your request.
Expected to find a map at path '/ca' but found ''
Exit code 1

Creating stemcell failed.

I am trying to deploy the BOSH director on OpenStack, but while deploying I get the error below.

[linux@vibh-pcf3 kubo-deployment]$ bin/deploy_bosh ~/temp/bosh/ ~/pcf.pem

| KuBOSH Deployer |

Deployment manifest: '/tmp/tmp.GyYIKa9ixp'
Deployment state: '/home/linux/temp/bosh/state.json'

Started validating
Downloading release 'bosh'... Skipped [Found in local cache] (00:00:00)
Validating release 'bosh'... Finished (00:00:01)
Downloading release 'uaa'... Skipped [Found in local cache] (00:00:00)
Validating release 'uaa'... Finished (00:00:01)
Downloading release 'credhub'... Skipped [Found in local cache] (00:00:00)
Validating release 'credhub'... Finished (00:00:01)
Downloading release 'bosh-openstack-cpi'... Skipped [Found in local cache] (00:00:00)
Validating release 'bosh-openstack-cpi'... Finished (00:00:00)
Validating cpi release... Finished (00:00:00)
Validating deployment manifest... Finished (00:00:00)
Downloading stemcell... Skipped [Found in local cache] (00:00:00)
Validating stemcell... Finished (00:00:18)
Finished validating (00:00:30)

Started installing CPI
Compiling package 'ruby_openstack_cpi/6576c0d52231e773f4ad53f5c5a0785c4247696a'... Finished (00:02:29)
Compiling package 'bosh_openstack_cpi/1177a2d7556f20dab1b6cf6839a8c2748d2c82dd'... Finished (00:00:02)
Installing packages... Finished (00:00:00)
Rendering job templates... Finished (00:00:00)
Installing job 'openstack_cpi'... Finished (00:00:00)
Finished installing CPI (00:02:33)

Starting registry... Finished (00:00:00)
Uploading stemcell 'bosh-openstack-kvm-ubuntu-trusty-go_agent/3445.7'... Failed (00:00:05)
Stopping registry... Finished (00:00:00)
Cleaning up rendered CPI jobs... Finished (00:00:00)

creating stemcell (bosh-openstack-kvm-ubuntu-trusty-go_agent 3445.7):
CPI 'create_stemcell' method responded with error: CmdError{"type":"Bosh::Clouds::CloudError","message":"Extracting stemcell root image failed. Check task debug log for details.","ok_to_retry":false}

Exit code 1

========================================
In addition,

[linux@vibh-pcf3 kubo-deployment]$ bosh verify stemcell bosh-openstack-kvm-ubuntu-trusty-go_agent/3445.7

Verifying stemcell...
File exists and readable FAILED

Validation errors:

  • Cannot find stemcell file /home/linux/kubo-deployment/bosh-openstack-kvm-ubuntu-trusty-go_agent/3445.7
    'bosh-openstack-kvm-ubuntu-trusty-go_agent/3445.7' is not a valid stemcell
    ======================================

Please help.

Missing arrows for user workload to the routers on the main architectural diagram

The arrows for workload flow are not clear. The red arrows from the GoRouter to the router-api are, I assume, for registering TCP routes. In essence, the personas or the arrows are missing. For example, the orange arrows are HTTP workload traffic originating from the GoRouter, but the diagram is missing the user-to-LB-to-GoRouter path that initiates that request. The lighter orange is TCP workload, I believe.

The ubuntu and vmtools versions in the latest bosh release for vsphere need to be upgraded

Following the guide here https://docs-kubo.cfapps.io/installing/vsphere/deploying-bosh-vsphere/, I downloaded the latest bosh release for vsphere and deployed bosh to a vCenter. Then I deployed a kubo cluster through the bosh director (https://docs-kubo.cfapps.io/installing/vsphere/deploying-bosh-vsphere/).

However, after SSHing into the BOSH director VM, I found that it was running Ubuntu 14.04 with VMware Tools 9.4.0, while the latest version of open-vm-tools is 10.1.15. The same is true for all the other VMs deployed by kubo.

Although the deployment works just fine, the integration of kubo on vSphere with vRNI is impacted: some of the fields on the vRNI dashboard do not show up properly, possibly due to the outdated Ubuntu and VMware Tools versions.

So I'm wondering if the kubo team could provide new bosh release with the latest ubuntu and open-vm-tools versions.

Error while deploying Kubo Cluster

Hello Team,

I am facing an issue while deploying the kubo cluster on BOSH.
We have provided the region as us-east-1.

Please refer to the attached screenshot
screen shot 09-22-17 at 11 44 am

Let credhub manage all release passwords and certificates

Why:
We have some variables stored in the vars store file and some in CredHub, and it is really confusing.

How:
Credhub respects the variables section in the manifest, as well as properties in our spec that have specific types defined. If we do not specify a --vars-store param during the deploy, these variables will be managed entirely by CredHub. Note: spec properties can also be shared via links instead of being passed as properties to every job.
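For illustration, a variables section that CredHub can generate and manage might look like the following sketch (the CA variable name and its options are assumptions for the example, not taken from the kubo manifests):

```yaml
variables:
- name: kubo-admin-password
  type: password
- name: kubelet-password
  type: password
- name: kubo_ca            # hypothetical CA used to sign the certificate below
  type: certificate
  options:
    is_ca: true
    common_name: ca
- name: tls-kubernetes
  type: certificate
  options:
    ca: kubo_ca
    common_name: kubernetes
```

With types like these in the manifest and no --vars-store flag, the director's config server (CredHub) generates and stores the values on first deploy.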

Acceptance

  • Make sure we don't have any pre-generated certificates in credhub.
  • Generate manifest and store it to file.
  • Make sure we have variables in the manifest for ((kubelet-password)), ((tls-kubernetes)) and ((kubo-admin-password)).
  • Deploy using the generated manifest. (Note: make sure we don't specify any vars-store parameters during the deploy.)
  • Deploy same Kubo from different machine - it is noop.
  • Check credentials are stored in CredHub using credhub get -n /<director_name>/<deployment name>/kubelet-password
  • Check the certificate is stored in CredHub using credhub get -n /<director_name>/<deployment name>/tls-kubernetes
  • Check that the bin/set_kubeconfig script works
  • Change the password using credhub set -n /<director_name>/<deployment name>/kubelet-password -t password -w <new-password>
  • Redeploy the service and verify that the new password is set

[#143252077]

InvalidParameterValue: Network vpc-xxxxxx already has an internet gateway

After executing the apply command (based on Paving the infrastructure), there is an error:

* aws_internet_gateway.gateway: InvalidParameterValue: Network vpc-xxxxxx already has an internet gateway attached
        status code: 400, request id: xxxxxxxx

I tried to delete the gateway in the AWS Console; however, it does not allow it.

Also, in hashicorp/terraform/vendor/github.com/terraform-providers/terraform-provider-aws/aws/resource_aws_egress_only_internet_gateway.go, there is a ForceNew: true flag set. However, I do not know whether it is related to the error.

Any suggestion is much appreciated. Thanks!
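One common way out (assuming the intent is to adopt the already-attached gateway rather than create a new one) is to import it into the Terraform state so that apply stops trying to create a duplicate; the gateway ID below is a placeholder, not a value from this report:

```shell
# Hypothetical: adopt the existing internet gateway into Terraform state.
# Replace igw-xxxxxxxx with the ID shown in the AWS console for vpc-xxxxxx;
# the resource address matches the one named in the error above.
terraform import aws_internet_gateway.gateway igw-xxxxxxxx
```

After a successful import, terraform plan should no longer propose creating the gateway.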

`KubernetesCluster` tag is invalid for gcp vm

This commit added a KubernetesCluster tag to VMs. Now, when deploying kubo to GCP, it fails with this error message:

Error: CPI error 'Bosh::Clouds::CloudError' with message 'Setting metadata for vm 
'vm-ee9480aa-6be1-4e4a-5ccd-a6a23336bd30': Failed to set labels for Google 
Instance 'vm-ee9480aa-6be1-4e4a-5ccd-a6a23336bd30': googleapi: Error 400: 
Invalid value for field 'labels': ''. Label key 'KubernetesCluster' violates format constraints. 
The key must start with a lowercase character, can only contain lowercase letters, 
numeric characters, underscores and dashes. The key can be at most 63 characters 
long. International characters are allowed., invalid' in 'set_vm_metadata' CPI method

The task cpi logs show the params passed to the set_vm_metadata method:

{set_vm_metadata [vm-ee9480aa-6be1-4e4a-5ccd-a6a23336bd30 
map[director:frb-kubobosh 
deployment:frb-kubo id:32badce4-832b-4656-87d2-334f6fb4bf76 
job:etcd index:2 name:etcd/32badce4-832b-4656-87d2-334f6fb4bf76 
created_at:2017-08-15T22:05:03Z 
KubernetesCluster:frb-kubobosh/frb-kubo]]}
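The constraint in the error can be checked locally; this sketch uses a regex derived from the error text's description of GCP label keys (an approximation — the error also permits international characters, which this pattern omits) to show why the camel-case key is rejected:

```shell
# Validate a GCP label key per the constraints quoted in the error above:
# starts with a lowercase letter; only lowercase letters, digits, '_' and '-';
# at most 63 characters total.
valid_gcp_label_key() {
  printf '%s' "$1" | grep -Eq '^[a-z][a-z0-9_-]{0,62}$'
}

valid_gcp_label_key "KubernetesCluster" && echo valid || echo invalid   # invalid: uppercase
valid_gcp_label_key "kubernetes-cluster" && echo valid || echo invalid  # valid
```

The value in the params dump ("frb-kubobosh/frb-kubo") would also fail, since '/' is not an allowed character.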

K8s deployment failed - Failed Jobs: kubelet

Kubo 0.8.0

I tried to deploy a K8s cluster and got the following error message:

/usr/local/bin/bosh -e kubobosh -d mykubocluster-1 deploy mykubo-1.yml

Task 30

Task 30 | 23:53:54 | Preparing deployment: Preparing deployment
Task 30 | 23:53:57 | Warning: DNS address not available for the link provider instance: etcd/95c68d8a-e55c-417a-871d-ec719c26854a
Task 30 | 23:53:57 | Warning: DNS address not available for the link provider instance: etcd/568d77fc-9c02-4d07-ad5a-8a6fd11cdafe
Task 30 | 23:53:57 | Warning: DNS address not available for the link provider instance: etcd/c28e7027-486e-404e-bf79-bc9529799b6c
Task 30 | 23:53:57 | Warning: DNS address not available for the link provider instance: master/3c865011-7bed-442a-810c-0ae1a3086c9f
Task 30 | 23:53:57 | Warning: DNS address not available for the link provider instance: master/bd3de4fc-6057-43f1-9789-03b753f52457
Task 30 | 23:53:57 | Warning: DNS address not available for the link provider instance: etcd/95c68d8a-e55c-417a-871d-ec719c26854a
Task 30 | 23:53:57 | Warning: DNS address not available for the link provider instance: etcd/568d77fc-9c02-4d07-ad5a-8a6fd11cdafe
Task 30 | 23:53:57 | Warning: DNS address not available for the link provider instance: etcd/c28e7027-486e-404e-bf79-bc9529799b6c
Task 30 | 23:53:57 | Warning: DNS address not available for the link provider instance: etcd/95c68d8a-e55c-417a-871d-ec719c26854a
Task 30 | 23:53:57 | Warning: DNS address not available for the link provider instance: etcd/568d77fc-9c02-4d07-ad5a-8a6fd11cdafe
Task 30 | 23:53:57 | Warning: DNS address not available for the link provider instance: etcd/c28e7027-486e-404e-bf79-bc9529799b6c
Task 30 | 23:53:57 | Warning: DNS address not available for the link provider instance: master/3c865011-7bed-442a-810c-0ae1a3086c9f
Task 30 | 23:53:57 | Warning: DNS address not available for the link provider instance: master/bd3de4fc-6057-43f1-9789-03b753f52457
Task 30 | 23:53:57 | Warning: DNS address not available for the link provider instance: master/3c865011-7bed-442a-810c-0ae1a3086c9f
Task 30 | 23:53:57 | Warning: DNS address not available for the link provider instance: master/bd3de4fc-6057-43f1-9789-03b753f52457
Task 30 | 23:53:57 | Warning: DNS address not available for the link provider instance: master/3c865011-7bed-442a-810c-0ae1a3086c9f
Task 30 | 23:53:57 | Warning: DNS address not available for the link provider instance: master/bd3de4fc-6057-43f1-9789-03b753f52457
Task 30 | 23:53:57 | Warning: DNS address not available for the link provider instance: master/3c865011-7bed-442a-810c-0ae1a3086c9f
Task 30 | 23:53:57 | Warning: DNS address not available for the link provider instance: master/bd3de4fc-6057-43f1-9789-03b753f52457
Task 30 | 23:53:57 | Warning: DNS address not available for the link provider instance: etcd/95c68d8a-e55c-417a-871d-ec719c26854a
Task 30 | 23:53:57 | Warning: DNS address not available for the link provider instance: etcd/568d77fc-9c02-4d07-ad5a-8a6fd11cdafe
Task 30 | 23:53:57 | Warning: DNS address not available for the link provider instance: etcd/c28e7027-486e-404e-bf79-bc9529799b6c
Task 30 | 23:53:57 | Warning: DNS address not available for the link provider instance: worker/8e7e9765-e009-44ed-822d-3da7503381c1
Task 30 | 23:53:57 | Warning: DNS address not available for the link provider instance: worker/133508de-d846-41a3-9a3a-719e07e46355
Task 30 | 23:53:57 | Warning: DNS address not available for the link provider instance: worker/861ce8df-cc67-4908-a6f7-b8c340832ee3
Task 30 | 23:53:57 | Preparing deployment: Preparing deployment (00:00:03)
Task 30 | 23:54:00 | Preparing package compilation: Finding packages to compile (00:00:01)
Task 30 | 23:54:01 | Creating missing vms: etcd/95c68d8a-e55c-417a-871d-ec719c26854a (0)
Task 30 | 23:54:01 | Creating missing vms: etcd/c28e7027-486e-404e-bf79-bc9529799b6c (1)
Task 30 | 23:54:01 | Creating missing vms: etcd/568d77fc-9c02-4d07-ad5a-8a6fd11cdafe (2)
Task 30 | 23:54:01 | Creating missing vms: master/3c865011-7bed-442a-810c-0ae1a3086c9f (0)
Task 30 | 23:54:01 | Creating missing vms: worker/8e7e9765-e009-44ed-822d-3da7503381c1 (0)
Task 30 | 23:54:01 | Creating missing vms: master-haproxy/c49b3f1a-8ca1-4bb1-a48f-365c0c5e0650 (0)
Task 30 | 23:54:01 | Creating missing vms: worker/861ce8df-cc67-4908-a6f7-b8c340832ee3 (1)
Task 30 | 23:54:01 | Creating missing vms: master/bd3de4fc-6057-43f1-9789-03b753f52457 (1)
Task 30 | 23:54:01 | Creating missing vms: worker/133508de-d846-41a3-9a3a-719e07e46355 (2)
Task 30 | 23:54:01 | Creating missing vms: worker-haproxy/ade84456-8df6-4101-aacd-b39063383225 (0)
Task 30 | 23:55:11 | Creating missing vms: worker/133508de-d846-41a3-9a3a-719e07e46355 (2) (00:01:10)
Task 30 | 23:55:20 | Creating missing vms: etcd/95c68d8a-e55c-417a-871d-ec719c26854a (0) (00:01:19)
Task 30 | 23:55:20 | Creating missing vms: etcd/c28e7027-486e-404e-bf79-bc9529799b6c (1) (00:01:19)
Task 30 | 23:55:21 | Creating missing vms: worker/861ce8df-cc67-4908-a6f7-b8c340832ee3 (1) (00:01:20)
Task 30 | 23:55:21 | Creating missing vms: worker/8e7e9765-e009-44ed-822d-3da7503381c1 (0) (00:01:20)
Task 30 | 23:55:22 | Creating missing vms: worker-haproxy/ade84456-8df6-4101-aacd-b39063383225 (0) (00:01:21)
Task 30 | 23:55:23 | Creating missing vms: master/3c865011-7bed-442a-810c-0ae1a3086c9f (0) (00:01:22)
Task 30 | 23:55:23 | Creating missing vms: master-haproxy/c49b3f1a-8ca1-4bb1-a48f-365c0c5e0650 (0) (00:01:22)
Task 30 | 23:55:24 | Creating missing vms: etcd/568d77fc-9c02-4d07-ad5a-8a6fd11cdafe (2) (00:01:23)
Task 30 | 23:55:25 | Creating missing vms: master/bd3de4fc-6057-43f1-9789-03b753f52457 (1) (00:01:24)
Task 30 | 23:55:26 | Updating instance etcd: etcd/95c68d8a-e55c-417a-871d-ec719c26854a (0) (canary) (00:01:03)
Task 30 | 23:56:29 | Updating instance etcd: etcd/568d77fc-9c02-4d07-ad5a-8a6fd11cdafe (2) (00:01:04)
Task 30 | 23:57:33 | Updating instance etcd: etcd/c28e7027-486e-404e-bf79-bc9529799b6c (1) (00:01:01)
Task 30 | 23:58:34 | Updating instance master: master/3c865011-7bed-442a-810c-0ae1a3086c9f (0) (canary) (00:01:01)
Task 30 | 23:59:35 | Updating instance master: master/bd3de4fc-6057-43f1-9789-03b753f52457 (1) (00:01:01)
Task 30 | 00:00:36 | Updating instance master-haproxy: master-haproxy/c49b3f1a-8ca1-4bb1-a48f-365c0c5e0650 (0) (canary) (00:00:30)
Task 30 | 00:01:06 | Updating instance worker: worker/8e7e9765-e009-44ed-822d-3da7503381c1 (0) (canary) (00:03:55)
L Error: Action Failed get_task: Task 0cfc19ee-513c-4423-5f14-a40a8d3b2fce result: 1 of 1 post-start scripts failed. Failed Jobs: kubelet.
Task 30 | 00:05:01 | Error: Action Failed get_task: Task 0cfc19ee-513c-4423-5f14-a40a8d3b2fce result: 1 of 1 post-start scripts failed. Failed Jobs: kubelet.

Task 30 Started Fri Nov 3 23:53:54 UTC 2017
Task 30 Finished Sat Nov 4 00:05:01 UTC 2017
Task 30 Duration 00:11:07
Task 30 error

Updating deployment:
Expected task '30' to succeed but state is 'error'

Exit code 1

I checked Credhub and there is a valid certificate for this deployment:

$ credhub get -n "/kubobosh/mykubocluster-1/tls-kubernetes"
id: b06a2519-5ff7-41df-858a-1de92807b4a9
name: /kubobosh/mykubocluster-1/tls-kubernetes
type: certificate
value:
ca: |
-----BEGIN CERTIFICATE-----
MIIDJDCCAgygAwIBAgIUYvzMcqq7a+G8RRou4/5J89+U3S4wDQYJKoZIhvcNAQEL
BQAwDTELMAkGA1UEAxMCY2EwHhcNMTcxMTAzMjMxMjQzWhcNMTgxMTAzMjMxMjQz
WjANMQswCQYDVQQDEwJjYTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEB
ALBMu+qDuyNa2aOyueYA03hc/DbcdPm5BjNsCOsXPSKPXAiT3Byy5m5keuYVU4MN
ILUcPmODdyJjn1MSEZ7ZPw2ebNMOQ2q3rnf3ceZPvFEZWk7Vg9oM0PwxPMkF/Sil
B4yHEanzZkl6DVpvHZyzF6IYaF51WBZCri3WipbZFDcdkUeiI2HiMD8zFb+FHO5+
HqJAkX62a6HRjZJo1OAh4Q3tjYGL+ElDrkF7cJI9KB3pEa49bcvq2hpfEPXGfa0x
4F6JcD6uUcFA/FMs0S3glRDjgHUbjBUkImXJyRrm0qM9exxYDaBIrgHlbbQVSATY
Xwri144sVMPpGJ439QU02VkCAwEAAaN8MHowHQYDVR0OBBYEFAhE7MX+t5qOl/BM
mDqCaE0/ljJAMEgGA1UdIwRBMD+AFAhE7MX+t5qOl/BMmDqCaE0/ljJAoRGkDzAN
MQswCQYDVQQDEwJjYYIUYvzMcqq7a+G8RRou4/5J89+U3S4wDwYDVR0TAQH/BAUw
AwEB/zANBgkqhkiG9w0BAQsFAAOCAQEAfuSdPXSIdZB1brm0XuANkj1KHahD5WNo
h0rQ+ny2QVtRV8wTnJQVeLDCjQZQTzu8+YwvjWKumI5Wvl9fNEGz0lBVdYrd2aJx
QAdYbq8UfGcxuuHliHx3pie8PRMXnjg5IyKStTY3UdaDUbGHeoLxJutkgGqlsJYz
iSbgE3hQCQ9DYZQpeN9zvZ+NNNHidht9Wbj0KfgYv5WZirtt6IwAFkkB+kTUNj8u
P7/vHhGWb4v3ldPC/OQdbAb5ZmvhGBLrt6XjvsIbq2xeb/cRbN92nQGhKC+PV18n
8eCrCKM7als7RRc8h9wpVQDnlDu1LVPIX1Tc4R2st+BWjXLu90Q5tA==
-----END CERTIFICATE-----
certificate: |
-----BEGIN CERTIFICATE-----
MIIDyTCCArGgAwIBAgIULRtw+d44wksk5linDP5cmvRH+LowDQYJKoZIhvcNAQEL
BQAwDTELMAkGA1UEAxMCY2EwHhcNMTcxMTAzMjM1MzU1WhcNMTgxMTAzMjM1MzU1
WjAwMRUwEwYDVQQDEwwxMC40MC4yMDcuNDYxFzAVBgNVBAoTDnN5c3RlbTptYXN0
ZXJzMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAy5zdOor7V5GuzjN0
moJNo+uWA11nV1opEFyccfNRNGIv+Fk/Di4QXfC16UBN4EgT3toOMP2Vp3r73KcL
EendZTTi/9Uw7cVp9/QGFrkkryFAIfPJCyU9GZcAvPolEdUsweGVIvB/nsDhaOoA
1hgHJkzVc/Ez7EqanCfFJ1K7FFxIc7g5yKIbciA0DMevr1nApDQfgDHsNokJ3jel
bq5VEeMJ9td/TwbUh4qQsNKWH4Rk1hFanQRW8TBuF6b5MzH0qlEPcYNCqNqFzcca
V/yh2GNw3Zr2FBYccK1YXHyFWJ5030vaRlQlvPRhsRjHME56147AfDpFy550XPAc
YY4ypQIDAQABo4H9MIH6MB0GA1UdDgQWBBSUL4GPNQCMcUsjy5KuklyplkEp7zCB
gAYDVR0RBHkwd4cECijPLocECmTIAYIKa3ViZXJuZXRlc4ISa3ViZXJuZXRlcy5k
ZWZhdWx0ghZrdWJlcm5ldGVzLmRlZmF1bHQuc3ZjgiRrdWJlcm5ldGVzLmRlZmF1
bHQuc3ZjLmNsdXN0ZXIubG9jYWyCC21hc3Rlci5rdWJvMEgGA1UdIwRBMD+AFAhE
7MX+t5qOl/BMmDqCaE0/ljJAoRGkDzANMQswCQYDVQQDEwJjYYIUYvzMcqq7a+G8
RRou4/5J89+U3S4wDAYDVR0TAQH/BAIwADANBgkqhkiG9w0BAQsFAAOCAQEAUwnn
24PbcznkqIUvqIiEomRDyOeYdrYK+ZCrKsscP15mR/N0Q+wM53rxcuSPaiZtRZ6B
FxP9GWk3adtA77d2oKPjXqjG0H2PhgzF9b2UIYr1DiDEP3bCKgpcuQsG9dufHQ48
tgwh5n9HqyDan/6trmdtXKeCw9aZ7tGPEBpswGPdg4ryi5NxfQQPwtKdQ/K/0Dlj
mKy8S7mc3qsksZlr/UaJ1K6+WTxuJPXwigxo4Lvr0tFVDwq7DFJWUEaGONOehjlx
ekc9ckaDDZkg4hm0z/2YOJK06s8hbJVBy6s9v/Gremw/ZGM2rT47esNbU3U/rQ/z
Dk8zNoTGv4OINaz/5Q==
-----END CERTIFICATE-----
private_key: |
-----BEGIN RSA PRIVATE KEY-----
MIIEpQIBAAKCAQEAy5zdOor7V5GuzjN0moJNo+uWA11nV1opEFyccfNRNGIv+Fk/
Di4QXfC16UBN4EgT3toOMP2Vp3r73KcLEendZTTi/9Uw7cVp9/QGFrkkryFAIfPJ
CyU9GZcAvPolEdUsweGVIvB/nsDhaOoA1hgHJkzVc/Ez7EqanCfFJ1K7FFxIc7g5
yKIbciA0DMevr1nApDQfgDHsNokJ3jelbq5VEeMJ9td/TwbUh4qQsNKWH4Rk1hFa
nQRW8TBuF6b5MzH0qlEPcYNCqNqFzccaV/yh2GNw3Zr2FBYccK1YXHyFWJ5030va
RlQlvPRhsRjHME56147AfDpFy550XPAcYY4ypQIDAQABAoIBAQCubVg8Abnlv8C/
lucKQhxfE7/0a/zOoUdSY/QFzkq/lGnB2DqjXCTbRQ2hn1vXigezvpuvrl2ZF0tS
MKvUEcN/Ivpf7fO3jYoFR0A8ESly+goly+FrseAQ3wJb4fMFvthT03tebib2CghQ
Rz6mKfld/y5Q3836W8LtiUhlMoNfoVKHpEXubNFoulbdVIIsdu410Slz5d8kaT1u
DOE5K6Ni3WTqe0s9Zo4jEoiMYdGRGJYRoIvgDTUkXpk2cECrTN3xTMyr4YgLal9B
TnQl9CDcNDuG03xz7lohN9OYtwF5OpLLxPQAx9oX3t1zRabS1VINxRVa1zt6o9Ju
NjlbKloBAoGBAPZNPD2cc3M8bkaJ5LrITHqK6w80SETOENkceIRzqfSfsGDoHtOk
ZR+GaYFpI9218TrGG5HCEN5fbN6HfwlFtuk22me8qOphzPe1CcTnmmh4hikw7D9o
apbI1VmyxrRQbMmOPIfcdh0EZLhOi/nnq5jzMGQ3wiLbJknG8OSohdWBAoGBANOh
URMqFf1Iognt6qtk9fFhDgI3NIstWs2piYupOWw4wmBuCJmt+GnY4HoEDf4kGe2R
E6PIEGW5yUyFOogSTGf5DX0E6tCrd3UjSaZh57VGpyST3jmphMgpfNv0uHJY1Wbf
abT/T3RpIBdkydRhkF7EE1jJ53ZQyfbJX6U5Z9clAoGBAIDvxc1rDXUR+ZirrzWo
jYDJIGyBLiP2zBMcOGr+McaBok/Ys+qPcPCj6K96XvA9wt7FvsD7GuGOiuujevlb
qXlE4ejUdojcUfSKrWaK5+Yw0erWVZaMDuCImkeusx7Jy2loMH/fBWYDWsaxN83H
XalgBcEw/0xH9S9CGfFZ11YBAoGBALz3w2QgZUgnzgCdv7hRS0bAifiygKlx0y3n
H5lkfpDC0dW3CtjmvfUNoctxyWjPpZM6wtWw8+tRjIxWPmB4Ll98xG2IsX+oS999
per6ayKzttVzb6//TUBJw2LITtZTuiHEhigG/VSN9gjNh2arw3TLEhdrGdHM67oA
L/ZhnvY9AoGAWxei3h35ACkaT8Df1xDmitlbqgRmNMCRt1sAwLQjZtIdHKUWu9+z
clmGM7BfWoHeIHVhcKZvfZW6yi8b94tm6f0ySmYs9f6Zm9UbTaJCHDGBI/U0Rs0F
/0JrTwO2siXfGVe4mes+SLgdBjl0xiSIWnRMZya60QhDvYqrdY5e028=
-----END RSA PRIVATE KEY-----

Please configure GITBOT

Pivotal uses GITBOT to synchronize Github issues and pull requests with Pivotal Tracker.
Please add your new repo to the GITBOT config-production.yml in the Gitbot configuration repo.
If you don't have access you can send an ask ticket to the CF admins. We prefer teams to submit their changes via a pull request.

Steps:

  • Fork this repo: cfgitbot-config
  • Add your project to config-production.yml file
  • Submit a PR

If there are any questions, please reach out to [email protected].

Avoid wrapper deployment scripts and terraform-generated scripts

I have a preexisting GCP bastion + BOSH, and I went to deploy kubo. I tried to read through the docs https://docs-kubo.cfapps.io/, which led to wrapper scripts, and also to missing helper scripts (e.g. update_gcp_env) that turn out to only be available if I had used kubo's terraform plans to create my bastion (I had created my GCP bastion using the community GCP instructions https://github.com/cloudfoundry-incubator/bosh-google-cpi-release/tree/master/docs/bosh).

Could we please move away from bespoke kubo terraform plans and wrapper scripts around bosh commands, and move to a simple bosh deploy kubo.yml command?

For example, I would like a simple-to-comprehend, bosh2-standard, non-invasive set of instructions for my existing bosh/bastion, such as:

export BOSH_DEPLOYMENT=kubo
bosh deploy manifests/kubo.yml -l <(./bin/discover-gcp-env.sh) -o <(./bin/select-from-cloud-config.yml) -v system_domain=mygcp.starkandwayne.com

xoxo
@drnic

BTW, new docs site looks nice https://docs-kubo.cfapps.io

How to deploy KuBOSH on AWS

Hello experts,

I'm trying to set up KuBOSH on AWS.
I'm referring to https://github.com/pivotal-cf-experimental/kubo-deployment/blob/master/docs/guides/customized-installation.md for the setup.

I tried to generate the env config by running:
bin/generate_env_config my-target-folder mybosh-name aws

but there are no configuration files available for AWS:
https://github.com/pivotal-cf-experimental/kubo-deployment/tree/master/configurations

Please let me know if there is a user guide available for installation on AWS, or whether I'm missing something.

Any inputs/insights appreciated.

Regards,
Deepak Upadhyay

Error Deploying to vSphere

./bin/deploy_bosh: line 37: bosh-cli: command not found

An example of a deployment.yaml for vSphere would also be really helpful.

docs have you deploy two bosh directors

Hi,

I followed the Kubo on GCP docs, and they instructed me to deploy a BOSH director via the bin/deploy_bosh ${state_dir} ${service_account_creds} script. However, earlier these same docs tell the user to go to the "Deploying supporting infrastructure" docs, which already have you deploy a BOSH director.

Since the prerequisites for the "Kubo on GCP" docs state that you would already have a BOSH director, maybe this part can be removed? I'm not sure if this script deploys a BOSH director with extra functionality.

Support deploying KuBOSH on AWS

Rationale

  • OSS developers and users want to deploy kubo on AWS

Acceptance Criteria

  • I can follow the scripts in the kubo-deployment repo and deploy KuBOSH in an AWS env
  • Verify bosh is accessible
  • Verify that bosh has internet access
  • Verify that we have a CI pipeline w/locks in place to test KuBOSH on AWS

Include HA-Proxy as an optional release in Kubo OSS

Rationale
As an OSS developer who doesn't want a CF dependency, when deploying KuBOSH on vSphere/OpenStack I should be able to frontload the master and worker nodes with a proxy such that the cluster is operational after it is deployed.

BOSH DNS v2 may be a viable long-term solution, but we haven't spiked on it yet and not all the features we need are available (e.g. links).

Acceptance criteria

(AC deployment)

  • Deploy KuboSH on Vsphere with routing-mode=iaas
  • Add a new property in director.yml to specify I would like proxy included in the deployment

(AC Master/API)

  • Verify that after the deployment, the Master nodes are frontloaded by the Proxy
  • Verify that the API server works by using kubectl

(AC Worker/Pods)

  • Verify that after the deployment, a specific port on the proxy is forwarded to the worker nodes (fixed port for Ingress)
  • We don't need route sync to work, as we can let developers use an Ingress Controller for app routing.
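To make the second deployment AC concrete, the director.yml addition might look like this sketch (the include-haproxy property name is hypothetical, not an existing kubo setting; routing-mode comes from the AC above):

```yaml
routing-mode: iaas
include-haproxy: true   # hypothetical property opting the proxy into the deployment
```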

Slack channel?

Which Slack channel is the Kubo team using for internal discussions?

Problem when uploading stemcell - dns_recursor_ip

When trying to upload the stemcell, I got this error:

~/kubo/kubo-deployment$ /usr/local/bin/bosh -e kubobosh upload-stemcell https://s3.amazonaws.com/bosh-core-stemcells/vsphere/bosh-stemcell-3421.11-vsphere-esxi-ubuntu-trusty-go_agent.tgz
Using environment '10.173.13.24' as client 'admin'

Task 32

21:26:38 | Error: Relative paths are not allowed in this context. The following must be be switched to use absolute paths: 'dns_recursor_ip'

Started Thu Nov 2 21:26:38 UTC 2017
Finished Thu Nov 2 21:26:38 UTC 2017
Duration 00:00:00

Task 32 error

Uploading remote stemcell 'https://s3.amazonaws.com/bosh-core-stemcells/vsphere/bosh-stemcell-3421.11-vsphere-esxi-ubuntu-trusty-go_agent.tgz':
Expected task '32' to succeed but state is 'error'

Exit code 1

For reference, I deployed the BOSH Director using the following command:

/usr/local/bin/bosh create-env bosh.yml \
  --state=mystate.json \
  --vars-store=mycreds.yml \
  -o vsphere/cpi.yml \
  -o uaa.yml \
  -o misc/powerdns.yml \
  -o credhub.yml \
  -v director_name=kubobosh \
  -v internal_cidr=10.173.13.0/25 \
  -v internal_gw=10.173.13.125 \
  -v internal_ip=10.173.13.24 \
  -v network_name=ROUTED-VLAN-1525 \
  -v vcenter_dc=DC1 \
  -v vcenter_ds=Datastore2 \
  -v vcenter_ip=10.173.13.21 \
  -v vcenter_user='[email protected]' \
  -v vcenter_password='VMware1!' \
  -v vcenter_templates=kubobosh-templates \
  -v vcenter_vms=kubobosh-vms \
  -v vcenter_disks=Datastore1 \
  -v vcenter_cluster=Cluster-MGMT \
  -v dns_recursor_ip=10.20.20.1
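The error text complains about a relative interpolation path. The output doesn't show which file triggers it, but as a general, hedged illustration: BOSH interpolation paths (the `--path` flag and `path:` entries in ops files) must be absolute, i.e. start with `/`.

```yaml
# hypothetical ops-file fragment, for illustration only
- type: replace
  path: /dns_recursor_ip     # absolute path: accepted
  value: ((dns_recursor_ip))
# 'path: dns_recursor_ip' (no leading slash) would be rejected with a
# "Relative paths are not allowed" error like the one above
```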

Deployment Fails with BOSH CLIv1 and Director version 259.0.0

Hi,
I am trying to install Kubo using BOSH CLI v1 against a preinstalled Director, version 259.0.0. I've prepared a deployment manifest compatible with that version.

However, when I run bosh deploy, it fails with the error below:
Deprecation: Ignoring cloud config. Manifest contains 'networks' section.

Started preparing deployment > Preparing deployment. Failed: 419: unexpected token at '"etcd.' (00:00:00)

Error 100: 419: unexpected token at '"etcd.'

Are the Kubo releases not compatible with older versions of BOSH, or am I missing something?

BOSH API access - 401 Unauthorized

I tried to access the BOSH API:

1. Retrieve the admin password:
/usr/local/bin/bosh int ./mycreds.yml --path /admin_password
syx43o9vbkcsjbd5hs6b

2. Use the password in the API call:
curl -v -s -k https://admin:'syx43o9vbkcsjbd5hs6b'@10.40.206.148:25555/deployments | jq .

* Trying 10.40.206.148...
* TCP_NODELAY set
* Connected to 10.40.206.148 (10.40.206.148) port 25555 (#0)
* WARNING: disabling hostname validation also disables SNI.
* TLS 1.2 connection using TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
* Server certificate: 10.40.206.148
* Server auth using Basic with user 'admin'
> GET /deployments HTTP/1.1
> Host: 10.40.206.148:25555
> Authorization: Basic YWRtaW46c3l4NDNvOXZia2NzamJkNWhzNmI=
> User-Agent: curl/7.54.0
> Accept: */*
>
< HTTP/1.1 401 Unauthorized
< Server: nginx
< Date: Tue, 07 Nov 2017 18:37:08 GMT
< Content-Type: text/html;charset=utf-8
< Content-Length: 31
< Connection: keep-alive
* Authentication problem. Ignoring this.
< WWW-Authenticate: Basic realm="BOSH Director"
< X-XSS-Protection: 1; mode=block
< X-Content-Type-Options: nosniff
< X-Frame-Options: SAMEORIGIN
<
{ [31 bytes data]
* Connection #0 to host 10.40.206.148 left intact
parse error: Invalid numeric literal at line 1, column 4

So the request is rejected:
=> HTTP/1.1 401 Unauthorized

Did I do something wrong here?
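One way to rule out a typo is to decode the Authorization header from the curl trace and confirm it carries the expected credentials. This is only a sanity check, not a fix:

```shell
# Decode the Basic auth header that curl sent; it should match user:password.
printf 'YWRtaW46c3l4NDNvOXZia2NzamJkNWhzNmI=' | base64 --decode
# → admin:syx43o9vbkcsjbd5hs6b
```

If the decoded value matches the password from mycreds.yml, the 401 is likely not a credential typo. One common cause, given that the Director above was deployed with the uaa.yml ops file, is that the Director expects UAA token authentication and rejects plain basic auth; the BOSH CLI handles the UAA token exchange automatically.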
