
OpenShift Ansible

This repository contains Ansible roles and playbooks to install, upgrade, and manage OpenShift clusters.

Note: the Ansible playbooks in this repository require an RPM package that provides docker. Currently, the RPMs from dockerproject.org do not provide this requirement, though they may in the future. This limitation is being tracked by #2720.

Getting the correct version

When choosing an openshift release, ensure that the necessary origin packages are available in your distribution's repository. By default, openshift-ansible will not configure extra repositories for testing or staging packages for end users.

We recommend using a release branch. We maintain stable branches corresponding to upstream Origin releases, e.g.: we guarantee an openshift-ansible 3.2 release will fully support an origin 1.2 release.

The most recent branch will often receive minor feature backports and fixes. Older branches will receive only critical fixes.

In addition to the release branches, the master branch tracks our current work in development and should be compatible with the Origin master branch (code in development).

Getting the right openshift-ansible release

Follow this release pattern and you can't go wrong:

Origin/OCP    OpenShift-Ansible version    openshift-ansible branch
1.3 / 3.3     3.3                          release-1.3
1.4 / 3.4     3.4                          release-1.4
1.5 / 3.5     3.5                          release-1.5
3.X           3.X                          release-3.x

If you're running from the openshift-ansible master branch we can only guarantee compatibility with the newest origin releases in development. Use a branch corresponding to your origin version if you are not running a stable release.
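
For example, a minimal sketch of cloning the repository and checking out a matching release branch (substitute the branch from the table above that corresponds to your Origin/OCP version):

git clone https://github.com/openshift/openshift-ansible
cd openshift-ansible
git checkout release-3.10   # pick the branch matching your Origin/OCP version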

Setup

Install base dependencies:

Requirements:

  • Ansible >= 2.4.3.0, 2.5.x is not currently supported for OCP installations
  • Jinja >= 2.7
  • pyOpenSSL
  • python-lxml

Fedora:

dnf install -y ansible pyOpenSSL python-cryptography python-lxml
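
If the Ansible version packaged by your distribution falls outside the supported range above, one possible workaround (not covered by this README, just a sketch) is to install a pinned version with pip:

pip install 'ansible>=2.4.3.0,<2.5.0'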

Additional requirements:

Logging:

  • java-1.8.0-openjdk-headless
  • patch

Metrics:

  • httpd-tools
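
If you plan to deploy logging or metrics, these optional packages can be installed on Fedora with a single command, roughly:

dnf install -y java-1.8.0-openjdk-headless patch httpd-tools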

Simple all-in-one localhost Installation

This assumes that you've installed the base dependencies and that you're running on Fedora or RHEL.

git clone https://github.com/openshift/openshift-ansible
cd openshift-ansible
sudo ansible-playbook -i inventory/hosts.localhost playbooks/prerequisites.yml
sudo ansible-playbook -i inventory/hosts.localhost playbooks/deploy_cluster.yml
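
Once both playbooks finish, a quick sanity check (not part of this README, just a sketch; it assumes the admin kubeconfig has been set up for root on the host) is to list the cluster's nodes and pods:

sudo oc get nodes
sudo oc get pods --all-namespaces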

Node Group Definition and Mapping

In 3.10 and newer, all members of the [nodes] inventory group must be assigned an openshift_node_group_name. This value is used to select the configmap that configures each node. By default three configmaps are created, one for each node group defined in openshift_node_groups; they are named node-config-master, node-config-infra, and node-config-compute. It's important to note that the configmap is also the authoritative definition of node labels; the old openshift_node_labels value is effectively ignored.
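
For example, in an INI inventory the assignment might look like this (hostnames are placeholders; the group names come from the defaults shown below):

[nodes]
master.example.com    openshift_node_group_name='node-config-master'
infra.example.com     openshift_node_group_name='node-config-infra'
node1.example.com     openshift_node_group_name='node-config-compute'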

There are also two configmaps that label nodes with multiple roles. These are not recommended for production clusters, but they are named node-config-all-in-one and node-config-master-infra if you'd like to use them to deploy non-production clusters.

The default set of node groups is defined in roles/openshift_facts/defaults/main.yml like so:

openshift_node_groups:
  - name: node-config-master
    labels:
      - 'node-role.kubernetes.io/master=true'
    edits: []
  - name: node-config-infra
    labels:
      - 'node-role.kubernetes.io/infra=true'
    edits: []
  - name: node-config-compute
    labels:
      - 'node-role.kubernetes.io/compute=true'
    edits: []
  - name: node-config-master-infra
    labels:
      - 'node-role.kubernetes.io/infra=true,node-role.kubernetes.io/master=true'
    edits: []
  - name: node-config-all-in-one
    labels:
      - 'node-role.kubernetes.io/infra=true,node-role.kubernetes.io/master=true,node-role.kubernetes.io/compute=true'
    edits: []

When configuring this in the INI-based inventory, the value must be translated into a Python dictionary. Here's an example of a group named node-config-all-in-one, suitable for an all-in-one installation, with kubeletArguments.pods-per-core set to 20:

openshift_node_groups=[{'name': 'node-config-all-in-one', 'labels': ['node-role.kubernetes.io/master=true', 'node-role.kubernetes.io/infra=true', 'node-role.kubernetes.io/compute=true'], 'edits': [{ 'key': 'kubeletArguments.pods-per-core','value': ['20']}]}]

The upgrade process will block until the required configmaps are present in the openshift-node namespace. Please define openshift_node_groups as explained above, or accept the defaults and run the playbooks/openshift-master/openshift_node_group.yml playbook to have them created for you automatically.
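
For example, assuming your inventory file is inventory/hosts (adjust the path to your own inventory):

ansible-playbook -i inventory/hosts playbooks/openshift-master/openshift_node_group.yml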

Complete Production Installation Documentation:

Containerized OpenShift Ansible

See README_CONTAINER_IMAGE.md for information on how to package openshift-ansible as a container image.

Installer Hooks

See the hooks documentation.

Contributing

See the contribution guide.

Building openshift-ansible RPMs and container images

See the build instructions.


openshift-ansible's Issues

Istio template fails to install - certificate already exists

Executing the command
oc new-app istio_installer_template.yaml --param=OPENSHIFT_ISTIO_MASTER_PUBLIC_URL=master.server
causes the openshift-ansible-istio-job to fail with the error:
"Error from server (AlreadyExists): error when creating "roles/openshift_istio/files/csr.yaml": certificatesigningrequests.certificates.k8s.io "istio-sidecar-injector.istio-system" already exists"

Attaching the full log:
openshift-ansible-istio-job-khwjd.log

Uninstalling istio

The documentation presents a manual for how to install Istio.

Can we add a section on how to uninstall it from the system?

Potential race when creating istio CRDs and their instances

It appears there may be a race between defining the CRDs and then being able to create instances. When installing istio.yaml from the Helm templates, we sometimes see the following errors:

 "[unable to recognize \"roles/openshift_istio/files/istio.yaml\": no matches for config.istio.io/, Kind=stdio,
   unable to recognize \"roles/openshift_istio/files/istio.yaml\": no matches for config.istio.io/, Kind=logentry,
   unable to recognize \"roles/openshift_istio/files/istio.yaml\": no matches for config.istio.io/, Kind=metric,
   unable to recognize \"roles/openshift_istio/files/istio.yaml\": no matches for config.istio.io/, Kind=metric,
   unable to recognize \"roles/openshift_istio/files/istio.yaml\": no matches for config.istio.io/, Kind=metric,
   unable to recognize \"roles/openshift_istio/files/istio.yaml\": no matches for config.istio.io/, Kind=metric,
   unable to recognize \"roles/openshift_istio/files/istio.yaml\": no matches for config.istio.io/, Kind=metric,
   unable to recognize \"roles/openshift_istio/files/istio.yaml\": no matches for config.istio.io/, Kind=metric,
   unable to recognize \"roles/openshift_istio/files/istio.yaml\": no matches for config.istio.io/, Kind=kubernetes]"

Ensure system:authenticated can create Istio CRDs

Normal users are unable to create instances of the Istio CRDs, failing with messages similar to the following

Error from server (Forbidden): gateways.networking.istio.io "greeting-gateway" is forbidden: User "developer" cannot get gateways.networking.istio.io in the namespace "myproject": User "developer" cannot get gateways.networking.istio.io in project "myproject"

Installation of elasticsearch fails when running offline

The error message is

create Pod elasticsearch-0 in StatefulSet elasticsearch failed error: Pod "elasticsearch-0" is invalid: spec.containers[0].image: Invalid value: " ": must not have leading or trailing whitespace

This is related to the way in which the elasticsearch installation uses triggers to populate the image information from the imagestreams.

Include OPENSHIFT_ISTIO_KIALI_IMAGE_PREFIX to Istio Template

Description

  • Adding a variable that allows changing the image_prefix used for Kiali would be helpful.

I suggest OPENSHIFT_ISTIO_KIALI_IMAGE_PREFIX in order to keep variable names consistent.

Best Regards,
Guilherme Baufaker Rêgo

Kiali Route is not being exposed

Description

When you install Kiali via openshift-ansible without kiali parameters:

oc process -f https://raw.githubusercontent.com/openshift-istio/openshift-ansible/istio-3.10-0.8.0/istio/istio_installer_template.yaml -p=OPENSHIFT_ISTIO_MASTER_PUBLIC_URL=openshift-url:8443

Kiali is installed without its route exposed. (Maybe Kiali shouldn't be installed in this case, but if it is, the Kiali route should be exposed as well.)

When you install Kiali via openshift-ansible with kiali parameters:

oc process -f https://raw.githubusercontent.com/openshift-istio/openshift-ansible/istio-3.10-0.8.0/istio/istio_installer_template.yaml -p=OPENSHIFT_ISTIO_MASTER_PUBLIC_URL=openshift_url:8443 -p=OPENSHIFT_ISTIO_KIALI_USERNAME=admin -p=OPENSHIFT_ISTIO_KIALI_PASSWORD=admin | oc create -f -

Kiali is installed (as expected) but the route is not exposed.


Version

ansible --version

ansible 2.5.2
config file = /home/gbaufake/Redhat/istio/install/ansible/ansible.cfg
configured module search path = [u'/home/gbaufake/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.15 (default, May 16 2018, 17:50:09) [GCC 8.1.1 20180502 (Red Hat 8.1.1-1)]

Istio-sidecar-injector pod startup failure

Description

The Ansible job ran to completion but the istio-sidecar-injector pod failed to start due to a permission error. cc @gbaufake

master-config:

admissionConfig:
  pluginConfig:
...
    MutatingAdmissionWebhook:
      configuration:
        apiVersion: v1
        disable: false
        kind: DefaultAdmissionConfig
# oc get pods
NAME                                          READY     STATUS             RESTARTS   AGE
istio-citadel-69cc84849c-tskdb                1/1       Running            0          16m
istio-egressgateway-7f8bbcbc4f-xrgsr          1/1       Running            0          16m
istio-ingress-7d945799fc-gnknh                1/1       Running            0          16m
istio-ingressgateway-7f6d5ccc65-xvdm2         1/1       Running            0          16m
istio-pilot-578b974bcc-gphq5                  2/2       Running            0          16m
istio-policy-b5bf474cc-kmwxn                  2/2       Running            0          16m
istio-sidecar-injector-57c6b96dc4-47cc4       0/1       CrashLoopBackOff   9          16m
istio-statsd-prom-bridge-6dbb7dcc7f-kjs9p     1/1       Running            0          16m
istio-telemetry-9445d68d5-7pbk9               2/2       Running            0          16m
openshift-ansible-istio-installer-job-sm488   0/1       Completed          0          17m
prometheus-586d95b8d9-97r6v                   1/1       Running            0          16m
# oc log -f istio-sidecar-injector-57c6b96dc4-47cc4
W0622 15:42:59.841060  123350 cmd.go:358] log is DEPRECATED and will be removed in a future version. Use logs instead.
2018-06-22T19:39:29.842384Z	info	version [email protected]/openshiftistio-0.8.0-6f9f420f0c7119ff4fa6a1966a6f6d89b1b4db84-Clean
2018-06-22T19:39:29.853598Z	info	New configuration: sha256sum fae3fe3c7b0fbb7ee2ac5f3555d73214d1bce510a46cc8a31fbf9f9f077115b0
2018-06-22T19:39:29.853645Z	info	Policy: disabled
2018-06-22T19:39:29.853704Z	info	Template: |
  initContainers:
  - name: istio-init
    image: docker.io/openshiftistio/proxy-init-centos7:0.8.0
    args:
    - "-p"
    - [[ .MeshConfig.ProxyListenPort ]]
    - "-u"
    - 1337
    - "-m"
    - [[ or (index .ObjectMeta.Annotations "sidecar.istio.io/interceptionMode") .ProxyConfig.InterceptionMode.String ]]
    - "-i"
    [[ if (isset .ObjectMeta.Annotations "traffic.sidecar.istio.io/includeOutboundIPRanges") -]]
    - "[[ index .ObjectMeta.Annotations "traffic.sidecar.istio.io/includeOutboundIPRanges"  ]]"
    [[ else -]]
    - "*"
    [[ end -]]
    - "-x"
    [[ if (isset .ObjectMeta.Annotations "traffic.sidecar.istio.io/excludeOutboundIPRanges") -]]
    - "[[ index .ObjectMeta.Annotations "traffic.sidecar.istio.io/excludeOutboundIPRanges"  ]]"
    [[ else -]]
    - ""
    [[ end -]]
    - "-b"
    [[ if (isset .ObjectMeta.Annotations "traffic.sidecar.istio.io/includeInboundPorts") -]]
    - "[[ index .ObjectMeta.Annotations "traffic.sidecar.istio.io/includeInboundPorts"  ]]"
    [[ else -]]
    - [[ range .Spec.Containers -]][[ range .Ports -]][[ .ContainerPort -]], [[ end -]][[ end -]][[ end]]
    - "-d"
    [[ if (isset .ObjectMeta.Annotations "traffic.sidecar.istio.io/excludeInboundPorts") -]]
    - "[[ index .ObjectMeta.Annotations "traffic.sidecar.istio.io/excludeInboundPorts" ]]"
    [[ else -]]
    - ""
    [[ end -]]
    imagePullPolicy: IfNotPresent
    securityContext:
      capabilities:
        add:
        - NET_ADMIN
      privileged: true
    restartPolicy: Always
  
  containers:
  - name: istio-proxy
    image: [[ if (isset .ObjectMeta.Annotations "sidecar.istio.io/proxyImage") -]]
    "[[ index .ObjectMeta.Annotations "sidecar.istio.io/proxyImage" ]]"
    [[ else -]]
    docker.io/openshiftistio/proxyv2-centos7:0.8.0
    [[ end -]]
    args:
    - proxy
    - sidecar
    - --configPath
    - [[ .ProxyConfig.ConfigPath ]]
    - --binaryPath
    - [[ .ProxyConfig.BinaryPath ]]
    - --serviceCluster
    [[ if ne "" (index .ObjectMeta.Labels "app") -]]
    - [[ index .ObjectMeta.Labels "app" ]]
    [[ else -]]
    - "istio-proxy"
    [[ end -]]
    - --drainDuration
    - [[ formatDuration .ProxyConfig.DrainDuration ]]
    - --parentShutdownDuration
    - [[ formatDuration .ProxyConfig.ParentShutdownDuration ]]
    - --discoveryAddress
    - [[ .ProxyConfig.DiscoveryAddress ]]
    - --discoveryRefreshDelay
    - [[ formatDuration .ProxyConfig.DiscoveryRefreshDelay ]]
    - --zipkinAddress
    - [[ .ProxyConfig.ZipkinAddress ]]
    - --connectTimeout
    - [[ formatDuration .ProxyConfig.ConnectTimeout ]]
    - --statsdUdpAddress
    - [[ .ProxyConfig.StatsdUdpAddress ]]
    - --proxyAdminPort
    - [[ .ProxyConfig.ProxyAdminPort ]]
    - --controlPlaneAuthPolicy
    - [[ .ProxyConfig.ControlPlaneAuthPolicy ]]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: INSTANCE_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
    - name: ISTIO_META_POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: ISTIO_META_INTERCEPTION_MODE
      value: [[ or (index .ObjectMeta.Annotations "sidecar.istio.io/interceptionMode") .ProxyConfig.InterceptionMode.String ]]
    imagePullPolicy: IfNotPresent
    securityContext:
        privileged: false
        readOnlyRootFilesystem: true
        [[ if eq (or (index .ObjectMeta.Annotations "sidecar.istio.io/interceptionMode") .ProxyConfig.InterceptionMode.String) "TPROXY" -]]
        capabilities:
          add:
          - NET_ADMIN
        [[ else -]]
        runAsUser: 1337
        [[ end -]]
    restartPolicy: Always
    resources:
      requests:
        cpu: 100m
        memory: 128Mi
      
    volumeMounts:
    - mountPath: /etc/istio/proxy
      name: istio-envoy
    - mountPath: /etc/certs/
      name: istio-certs
      readOnly: true
  volumes:
  - emptyDir:
      medium: Memory
    name: istio-envoy
  - name: istio-certs
    secret:
      optional: true
      [[ if eq .Spec.ServiceAccountName "" -]]
      secretName: istio.default
      [[ else -]]
      secretName: [[ printf "istio.%s" .Spec.ServiceAccountName ]]
      [[ end -]]
2018-06-22T19:39:29.855413Z	warn	Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
2018-06-22T19:39:29.889872Z	error	Register webhook failed: mutatingwebhookconfigurations.admissionregistration.k8s.io "istio-sidecar-injector" is forbidden: User "system:serviceaccount:istio-system:istio-sidecar-injector-service-account" cannot get mutatingwebhookconfigurations.admissionregistration.k8s.io at the cluster scope: User "system:serviceaccount:istio-system:istio-sidecar-injector-service-account" cannot get mutatingwebhookconfigurations.admissionregistration.k8s.io at the cluster scope. Retrying...
Version

Elasticsearch is failing with a limit of 512Mi

Description

Deploying Istio 0.8 with
oc process -f https://raw.githubusercontent.com/openshift-istio/openshift-ansible/istio-3.10-0.8.0/istio/istio_installer_template.yaml -p=OPENSHIFT_ISTIO_MASTER_PUBLIC_URL=https://MASTER_URL:8443 -p=OPENSHIFT_ISTIO_INSTALL_AUTH=true

causes a problem with Elasticsearch due to a memory limit on the pod:

limits:
  memory: 512Mi
requests:
  memory: 512Mi

oc describe pod elasticsearch-0

Name: elasticsearch-0
Namespace: istio-system
Start Time: Mon, 16 Jul 2018 10:27:00 -0300
Labels: app=elasticsearch
controller-revision-hash=elasticsearch-5c8fbb565f
statefulset.kubernetes.io/pod-name=elasticsearch-0
Annotations: openshift.io/scc=restricted
Status: Running
IP: 10.130.1.62
Controlled By: StatefulSet/elasticsearch
Containers:
elasticsearch:
Container ID: docker://cdba7d930d1ded1b24fb5ca599d333247a22edde38d54526593751972fc82f77
Image: registry.centos.org/rhsyseng/elasticsearch@sha256:bca624bae5ec63d38896e4a02f652d245c69c98f0b7fae999ebc0dc6f5bc2eb2
Image ID: docker-pullable://registry.centos.org/rhsyseng/elasticsearch@sha256:bca624bae5ec63d38896e4a02f652d245c69c98f0b7fae999ebc0dc6f5bc2eb2
Ports: 9200/TCP, 9300/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: OOMKilled
Exit Code: 137

Started: Mon, 16 Jul 2018 10:43:16 -0300
Finished: Mon, 16 Jul 2018 10:45:23 -0300
Ready: False
Restart Count: 6

Can we add a way to increase this limit?

Best Regards,
Guilherme Baufaker Rêgo

openshift-ansible-istio-installer complete, but no pods started

Description

When attempting to install 0.7.1 (this could also be an issue with 0.8.0), the openshift-ansible-istio-installer job exits but NO pods are started. This happens intermittently when installing into OpenShift v3.9.14.

Jenkins job output example below.

Version

Please put the following version information in the code block
indicated below.

VERSION INFORMATION HERE PLEASE
Steps To Reproduce
Expected Results

Expected all istio pods to start successfully.

Example Jenkins pipeline output:

[Istio-Downstream] Running shell script
+ oc get namespace bookinfo
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Create Pod to Install Istio Downstream)
[Pipeline] dir
Running in /home/jenkins/workspace/Istio-Downstream/openshift-ansible/istio
[Pipeline] {
[Pipeline] sh
[istio] Running shell script
+ oc new-project istio-system
Now using project "istio-system" on server "<hostname>".

You can add applications to this project with the 'new-app' command. For example, try:

    oc new-app centos/ruby-22-centos7~https://github.com/openshift/ruby-ex.git

to build a new example application in Ruby.
[Pipeline] sh
[istio] Running shell script
+ oc create sa openshift-ansible
serviceaccount "openshift-ansible" created
[Pipeline] sh
[istio] Running shell script
+ oc adm policy add-scc-to-user privileged -z openshift-ansible
scc "privileged" added to: ["system:serviceaccount:istio-system:openshift-ansible"]
[Pipeline] sh
[istio] Running shell script
+ oc adm policy add-cluster-role-to-user cluster-admin -z openshift-ansible
cluster role "cluster-admin" added: "openshift-ansible"
[Pipeline] sh
[istio] Running shell script
+ oc process -f https://raw.githubusercontent.com/openshift-istio/openshift-ansible/istio-3.9-0.7.1/istio/istio_installer_template.yaml -p=OPENSHIFT_ISTIO_MASTER_PUBLIC_URL=<hostname>
+ oc create -f -
configmap "install.istio.inventory" created
job "openshift-ansible-istio-installer-job" created
[Pipeline] }
[Pipeline] // dir
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Observed Results

Include 'tracing' service for Jaeger

Description

Istio 0.8.0 introduces a tracing sub-chart that deploys the Jaeger backend, but in a limited form. It exposes the zipkin and tracing services: zipkin as a collector for zipkin-formatted tracing data, and tracing to access a UI.

If users want the full Jaeger backend they currently need to set tracing.jaeger.enabled=true when installing with the helm chart.
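
For reference, a rough sketch of passing that value with Helm (the chart path and release name here are assumptions, not taken from this issue):

helm install install/kubernetes/helm/istio --name istio --namespace istio-system --set tracing.jaeger.enabled=true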

I think it would be a good idea for the product to expose the same tracing service, to remain consistent with the community install.

Version

n/a

Expected Results

UI exposed via the tracing service, as well as jaeger-query.

Fix default injection policy

The Helm charts changed the variable used for disabling the policy, so our current istio.yaml and istio-auth.yaml files are shipping with the policy enabled.

Istio playbook fails to install -> "Path roles/openshift_istio/files/istio.yaml does not exist !"

Command executed

git clone https://github.com/openshift-istio/openshift-ansible.git && cd openshift-ansible
ansible-playbook -i cloud_host playbooks/openshift-istio/config.yml

Result

...
TASK [openshift_istio : Update images in the configuration files if not running community] ********************************************************************************************************************************************
Thursday 26 April 2018  10:48:41 +0200 (0:00:00.114)       0:00:09.062 ********
included: /Users/dabou/Temp/to-be-deleted/openshift-ansible/roles/openshift_istio/tasks/modify_image_names.yml for 192.168.99.50
included: /Users/dabou/Temp/to-be-deleted/openshift-ansible/roles/openshift_istio/tasks/modify_image_names.yml for 192.168.99.50
included: /Users/dabou/Temp/to-be-deleted/openshift-ansible/roles/openshift_istio/tasks/modify_image_names.yml for 192.168.99.50
included: /Users/dabou/Temp/to-be-deleted/openshift-ansible/roles/openshift_istio/tasks/modify_image_names.yml for 192.168.99.50
included: /Users/dabou/Temp/to-be-deleted/openshift-ansible/roles/openshift_istio/tasks/modify_image_names.yml for 192.168.99.50
included: /Users/dabou/Temp/to-be-deleted/openshift-ansible/roles/openshift_istio/tasks/modify_image_names.yml for 192.168.99.50

TASK [openshift_istio : Modify image names in configuration file roles/openshift_istio/files/istio.yaml] ******************************************************************************************************************************
Thursday 26 April 2018  10:48:42 +0200 (0:00:00.650)       0:00:09.712 ********
failed: [192.168.99.50] (item={'value': u'openshiftistio/istio-ca-centos7:0.7.1', 'key': u'docker.io/istio/istio-ca:0.7.1'}) => {"changed": false, "config_item": {"key": "docker.io/istio/istio-ca:{{default_istio_image_tag}}", "value": "openshiftistio/istio-ca-centos7:0.7.1"}, "msg": "Path roles/openshift_istio/files/istio.yaml does not exist !", "rc": 257}
failed: [192.168.99.50] (item={'value': u'openshiftistio/pilot-centos7:0.7.1', 'key': u'docker.io/istio/pilot:0.7.1'}) => {"changed": false, "config_item": {"key": "docker.io/istio/pilot:{{default_istio_image_tag}}", "value": "openshiftistio/pilot-centos7:0.7.1"}, "msg": "Path roles/openshift_istio/files/istio.yaml does not exist !", "rc": 257}
failed: [192.168.99.50] (item={'value': u'openshiftistio/proxy-init-centos7:0.7.1', 'key': u'docker.io/istio/proxy_init:0.7.1'}) => {"changed": false, "config_item": {"key": "docker.io/istio/proxy_init:{{default_istio_image_tag}}", "value": "openshiftistio/proxy-init-centos7:0.7.1"}, "msg": "Path roles/openshift_istio/files/istio.yaml does not exist !", "rc": 257}
failed: [192.168.99.50] (item={'value': u'openshiftistio/proxy-centos7:0.7.1', 'key': u'docker.io/istio/proxy:0.7.1'}) => {"changed": false, "config_item": {"key": "docker.io/istio/proxy:{{default_istio_image_tag}}", "value": "openshiftistio/proxy-centos7:0.7.1"}, "msg": "Path roles/openshift_istio/files/istio.yaml does not exist !", "rc": 257}
failed: [192.168.99.50] (item={'value': u'openshiftistio/sidecar-injector-centos7:0.7.1', 'key': u'docker.io/istio/sidecar_injector:0.7.1'}) => {"changed": false, "config_item": {"key": "docker.io/istio/sidecar_injector:{{default_istio_image_tag}}", "value": "openshiftistio/sidecar-injector-centos7:0.7.1"}, "msg": "Path roles/openshift_istio/files/istio.yaml does not exist !", "rc": 257}
failed: [192.168.99.50] (item={'value': u'openshiftistio/mixer-centos7:0.7.1', 'key': u'docker.io/istio/mixer:0.7.1'}) => {"changed": false, "config_item": {"key": "docker.io/istio/mixer:{{default_istio_image_tag}}", "value": "openshiftistio/mixer-centos7:0.7.1"}, "msg": "Path roles/openshift_istio/files/istio.yaml does not exist !", "rc": 257}

Add route permissions to kiali.yaml

(Copied from openshift-istio/origin#8)

Some permissions are missing for Kiali to enable some functionalities related to OpenShift: in the Kiali deployment we've added a ClusterRoleBinding used to read openshift routes: see https://github.com/kiali/kiali/blob/master/deploy/openshift/kiali.yaml#L110-L115

Without it, we cannot get URLs to Grafana or Jaeger automatically (some extra config is needed).
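
For reference, the linked kiali.yaml adds a cluster role allowing route reads; a hedged sketch of the kind of rule involved (exact names and contents may differ from the linked file):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kiali-route-reader        # illustrative name, not from the linked file
rules:
- apiGroups: ["route.openshift.io"]
  resources: ["routes"]
  verbs: ["get", "list", "watch"]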

I'm not sure where the Kiali role/binding is defined in openshift-istio. I haven't found it, but if you can point me to it I'll be happy to contribute.

Version
oc istio-3.9-0.8.0-alpha2+cd746ec
kubernetes v1.9.1+a0ce1bc657
features: Basic-Auth GSSAPI Kerberos SPNEGO

Server https://127.0.0.1:8443
openshift v3.9.0+a96a520-22
kubernetes v1.9.1+a0ce1bc657

Steps To Reproduce

1. Run cluster up:
   istiooc cluster up --istio --istio-kiali-version=0.3.1.Alpha --istio-kiali-username=admin --istio-kiali-password=admin
2. Deploy a service mesh, for instance the bookinfo demo.
3. In Kiali, navigate to an Istio-enabled service's details, then click on "Metrics".
Current Result
An error shows that the Grafana URL cannot be detected. No link to Grafana at the bottom of the page.

Expected Result
No error; links to Grafana at the bottom of the page.

Galley validatingwebhookconfiguration update failed

Once the installation is finished, the galley component throws the following error, constantly:

2018-08-09T15:34:26.751839Z     error   istio-galley validatingwebhookconfiguration update failed: validatingwebhookconfigurations.admissionregistration.k8s.io "istio-galley" is forbidden: cannot set blockOwnerDeletion if an ownerReference refers to a resource you can't set finalizers on: User "system:serviceaccount:istio-system:istio-galley-service-account" cannot update deployments/finalizers.extensions at the cluster scope, <nil>
2018-08-09T15:34:27.752264Z     error   istio-galley validatingwebhookconfiguration update failed: validatingwebhookconfigurations.admissionregistration.k8s.io "istio-galley" is forbidden: cannot set blockOwnerDeletion if an ownerReference refers to a resource you can't set finalizers on: User "system:serviceaccount:istio-system:istio-galley-service-account" cannot update deployments/finalizers.extensions at the cluster scope, <nil>
2018-08-09T15:34:28.750673Z     error   istio-galley validatingwebhookconfiguration update failed: validatingwebhookconfigurations.admissionregistration.k8s.io "istio-galley" is forbidden: cannot set blockOwnerDeletion if an ownerReference refers to a resource you can't set finalizers on: User "system:serviceaccount:istio-system:istio-galley-service-account" cannot update deployments/finalizers.extensions at the cluster scope, <nil>
2018-08-09T15:34:30.506136Z     error   istio-galley validatingwebhookconfiguration update failed: validatingwebhookconfigurations.admissionregistration.k8s.io "istio-galley" is forbidden: cannot set blockOwnerDeletion if an ownerReference refers to a resource you can't set finalizers on: User "system:serviceaccount:istio-system:istio-galley-service-account" cannot update deployments/finalizers.extensions at the cluster scope, <nil>
2018-08-09T15:34:30.751741Z     error   istio-galley validatingwebhookconfiguration update failed: validatingwebhookconfigurations.admissionregistration.k8s.io "istio-galley" is forbidden: cannot set blockOwnerDeletion if an ownerReference refers to a resource you can't set finalizers on: User "system:serviceaccount:istio-system:istio-galley-service-account" cannot update deployments/finalizers.extensions at the cluster scope, <nil>

Nothing was done after the installation; not even the bookinfo sample was installed.

Installed in a cluster with 3 nodes (simple configuration), OCP 3.9, latest origin istio-1.0.0-snapshot.2 tag (not openshift-enterprise).
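
For context, the error above indicates that the istio-galley service account lacks permission to update the deployments/finalizers subresource. A hedged sketch of the kind of ClusterRole rule that would grant it (not taken from this issue or from the installer):

rules:
- apiGroups: ["extensions", "apps"]
  resources: ["deployments/finalizers"]
  verbs: ["update"]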

Istio removal template should be more forgiving

The istio_removal_template.yaml template is very strict about the objects that must exist for removal. If an object is missing, it mostly fails instead of considering that object removed and moving on to the next one.

This could possibly be achieved by setting ignore_errors: true on most of the inquiry tasks in the removal playbook and making the subsequent delete tasks conditional on whether the object exists.
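
A minimal sketch of that pattern in Ansible (the task names, module choice, and object names are hypothetical, not taken from the actual removal playbook):

- name: Check whether the object still exists (illustrative)
  command: oc get deployment istio-citadel -n istio-system
  register: citadel_check
  ignore_errors: true

- name: Delete the object only if the check succeeded
  command: oc delete deployment istio-citadel -n istio-system
  when: citadel_check.rc == 0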
