
satellite's Issues

Update references to IBM block storage CSI driver

"the Block Storage CSI driver documentation"
https://www.ibm.com/docs/en/stg-block-csi-driver/1.4.0?topic=installation-compatibility-requirements

This should be written as "the IBM block storage CSI driver documentation". "block storage" isn't capitalized.
In addition, please link to the general page or at least the latest version, if possible.
www.ibm.com/docs/en/stg-block-csi-driver/
www.ibm.com/docs/en/stg-block-csi-driver/latest?topic=installation-compatibility-requirements.

Thank you,
Rivka Pollack
CSS ID Team Lead and Writer

Disconnected use for Satellite components - Default Value of accessTokenMaxAgeSeconds

Hello,

in the Disconnected use for Satellite components we explain how to change the accessTokenMaxAgeSeconds to 168h, the maximum we support.

We don't mention the default value, in case someone wants to switch back to it.
Also, the default value is not shown in the show command; it just shows "default".

This might lead to the question: What is the default value for accessTokenMaxAgeSeconds? (Example in Slack)

Solution: Can we add a sentence like

You can modify this setting by changing the accessTokenMaxAgeSeconds value for all your OAuth clients.
The default value for accessTokenMaxAgeSeconds is 86400 seconds.

Reference: 3.4. Configuring the internal OAuth server’s token duration
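If a sentence is added, it could also point readers at how to inspect and reset the value. A rough sketch, assuming an OpenShift cluster and the standard oc CLI (the client name in the patch command is illustrative, not from the Satellite docs):

```sh
# List the current accessTokenMaxAgeSeconds for all OAuth clients; an empty
# value means the cluster default of 86400 seconds (24 hours) applies.
oc get oauthclients -o custom-columns=NAME:.metadata.name,MAXAGE:.accessTokenMaxAgeSeconds

# Restore the default for one client by removing the explicit override:
oc patch oauthclient console --type=json \
  -p '[{"op": "remove", "path": "/accessTokenMaxAgeSeconds"}]'
```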

Documentation doesn't match the code example

The following bullet point doesn't match the config file example in the code box.
URL: https://cloud.ibm.com/docs/satellite?topic=satellite-config-http-proxy&locale=en#http-proxy-config
Section: Configuring your HTTP proxy
Bullet point no. 4:
"Navigate to the /etc/environment file on your host. Enter the for NO_PROXY from step 2. If that file does not exist, create it."

The bullet point names the /etc/environment file, but the code example refers to "/etc/profile.d/ibm-proxy.sh". These don't match.
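For reference, a minimal sketch of what a proxy profile script could export, so that the bullet point and the code box can be aligned on one path (all values below are placeholders, not the documented ones):

```sh
# Hypothetical contents of /etc/profile.d/ibm-proxy.sh; the proxy host and
# the NO_PROXY entries are placeholders, not values from the Satellite docs.
export HTTP_PROXY="http://proxy.example.com:3128"
export HTTPS_PROXY="http://proxy.example.com:3128"
export NO_PROXY="localhost,127.0.0.1,166.8.0.0/14"
```

Note that /etc/environment takes plain KEY=VALUE lines without `export` (it is read by PAM, not by a shell), which is another reason the two locations should not be conflated.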

NO_PROXY config for on-premises locations

The doc is confusing regarding setting up the mirror location ($REDHAT_PACKAGE_MIRROR_LOCATION) for an IBM Cloud Satellite location on premises (RHEL 8.7). There is no environment variable $REDHAT_PACKAGE_MIRROR_LOCATION defined. The only repo config we have is /etc/yum.repos.d/redhat.repo, with entries like this:

baseurl = https://cdn.redhat.com/content/dist/layered/rhel8/x86_64/sat-tools/6.7/source/SRPMS
enabled = 0
gpgcheck = 1
gpgkey = file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
sslverify = 1
sslcacert = /etc/rhsm/ca/redhat-uep.pem
sslclientkey = /etc/pki/entitlement/8454176169347351717-key.pem
sslclientcert = /etc/pki/entitlement/8454176169347351717.pem
metadata_expire = 86400
enabled_metadata = 0

Do we need to set "cdn.redhat.com" for NO_PROXY?

Also, there is a typo in the Note at step 8, referring to REDHHAT_PACKAGE_MIRROR_LOCATION instead of REDHAT_PACKAGE_MIRROR_LOCATION.


Typo in Link tunnel server endpoint Destination hostnames

In

Destination hostnames: c-01-ws.us-east.link.satellite.cloud.ibm.com, c-02-ws.us-eat.link.satellite.cloud.ibm.com, c-03-ws.us-east.link.satellite.cloud.ibm.com, c-04-ws.us-east.link.satellite.cloud.ibm.com, api.link.satellite.cloud.ibm.com

c-02-ws.us-eat.link.satellite.cloud.ibm.com

is wrong; the "s" in "east" is missing. It must be

c-02-ws.us-east.link.satellite.cloud.ibm.com

Disconnected use for Satellite components - Update text

In Disconnected use for Satellite components we mention a YAML file called accessTokenMaxAgeSeconds.yaml. But the file is no longer in the document.

We had posted a YAML file in a previous version, but then changed it to an edit command.
We should now change this sentence.

How do I set how long my location can run disconnected from IBM Cloud?
Update and apply the accessTokenMaxAgeSeconds.yaml file to set the time. For more information, see Setting the disconnected usage time.

How do I set how long my location can run disconnected from IBM Cloud?
Add the accessTokenMaxAgeSeconds: 604800 parameter by using the oc edit oauthclients command.
For more information, see Setting the disconnected usage time.

Please add an "Important" note that the ODF add-on is NOT supported on Satellite & is not needed in order to configure local storage using ODF

An update we got from a customer who had much difficulty deploying ODF to their Satellite cluster:

"First the ODF add-on is NOT SUPPORTED with IBM Cloud Satellite. Plus the API-Key I was using was still not correct. Baker walked me through creating a new API-Key. With that and JUST deploying the storage configuration using the ODF and local storage, the ODF deployed as expected."

This is for case CS3544384. Thanks!

Accessing your Red Hat OpenShift API Satellite link endpoints

Hi Team,
please define and describe the filter option for the source endpoints (https://cloud.ibm.com/docs/satellite?topic=satellite-link-endpoint-secure). The current documentation describes only how to do this in the UI. Please also describe how to do it with the CLI.

I tried to do this with classic and VPC hosts within the IBM Cloud infrastructure. It works only with classic. If VPC is not possible, please document that as well.

Define specifics of what is backed up to IBM COS

Documentation says "All Satellite control plane and cluster data.", which is causing a lot of concern to customers and creating a blind spot on what is being backed up.

Please update the documentation to state clearly WHAT data is backed up, and do not leave it open to interpretation.

We need this updated by 8/27/2021

Attach hosts to the location by running the script manually

Hi,

In the documentation https://cloud.ibm.com/docs/satellite?topic=satellite-azure#azure-host-attach (step 6)

It is explained that the script must be passed in the property "custom-data" of the VM created in Azure.

But it is also possible to execute this script manually after the VM creation, instead of using it as custom-data.

Customer does not want to provision their VMs this way as they are running the template from a centralized repo and do not want dependencies with any other repo/script.

We should add this to the documentation.

CASE: CS2489524

Thank you.
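If this is documented, the manual flow might be sketched roughly like this (the user name, key path, and <vm_ip> are placeholders; the attach script itself is still the one downloaded from the Satellite location):

```sh
# Copy the downloaded attach script to the already-created Azure VM
# and run it there as root, instead of passing it as custom-data.
scp -i ~/.ssh/<azure_key> attachHost.sh azureuser@<vm_ip>:/tmp/attach.sh
ssh -i ~/.ssh/<azure_key> azureuser@<vm_ip> 'sudo bash /tmp/attach.sh'
```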

Ingress for Link connectors scale-down on 23 Feb 2023

Hello,

we use tugboats for our satellite-link service.
To reduce costs, we will scale down / delete some of the tugboats on 23 Feb 2023.
(See Second round Link Tugboat scale down)

This means that in the documentation under https://cloud.ibm.com/docs/satellite?topic=satellite-reqs-host-network-outbound, for each region, the list under Allow Link connectors to connect to the Link tunnel server endpoint needs to change.

US East will still have 2 tugboats; all other regions will have only one.

Delete Region Ingress Name Zone 1 Zone 2 Zone 3
no au-syd/ap-south c-01-ws.au-syd.link.satellite.cloud.ibm.com 135.90.67.154 130.198.75.74 168.1.201.194
no jp-osa c-01-ws.jp-osa.link.satellite.cloud.ibm.com 163.68.78.234 163.73.70.50 163.69.70.106
no jp-tok/ap-north c-01-ws.jp-tok.link.satellite.cloud.ibm.com 128.168.89.146 165.192.71.226 161.202.150.66
no eu-de / eu-central c-01-ws.eu-de.link.satellite.cloud.ibm.com 149.81.188.130 161.156.38.2 158.177.75.210
yes eu-de / eu-central c-02-ws.eu-de.link.satellite.cloud.ibm.com 149.81.188.138 158.177.109.210 161.156.38.10
yes eu-de / eu-central c-03-ws.eu-de.link.satellite.cloud.ibm.com 161.156.38.18 158.177.179.154 149.81.188.146
yes eu-de / eu-central c-04-ws.eu-de.link.satellite.cloud.ibm.com 161.156.38.26 149.81.188.154 158.177.169.162
no eu-gb c-01-ws.eu-gb.link.satellite.cloud.ibm.com 158.175.130.138 141.125.87.226 158.176.74.242
yes eu-gb c-02-ws.eu-gb.link.satellite.cloud.ibm.com 141.125.66.114 158.176.104.186 158.175.131.242
yes eu-gb c-03-ws.eu-gb.link.satellite.cloud.ibm.com 141.125.137.98 158.176.135.26 158.175.140.106
yes eu-gb c-04-ws.eu-gb.link.satellite.cloud.ibm.com 158.176.142.106 158.175.125.50 141.125.137.50
no ca-tor c-01-ws.ca-tor.link.satellite.cloud.ibm.com 163.74.67.114 163.75.70.74 158.85.79.18
no us-south c-01-ws.us-south.link.satellite.cloud.ibm.com 169.61.31.178 169.61.156.226 169.46.88.106
yes us-south c-02-ws.us-south.link.satellite.cloud.ibm.com 169.61.140.18 52.117.55.50 169.48.139.210
yes us-south c-03-ws.us-south.link.satellite.cloud.ibm.com 169.60.2.74 150.239.137.98 169.62.221.10
yes us-south c-04-ws.us-south.link.satellite.cloud.ibm.com 169.59.239.66 169.61.38.178 169.48.188.146
no us-east c-01-ws.us-east.link.satellite.cloud.ibm.com 169.47.156.154 169.63.148.250 169.62.1.34
yes us-east c-02-ws.us-east.link.satellite.cloud.ibm.com 169.63.113.122 169.60.122.226 169.61.101.226
yes us-east c-03-ws.us-east.link.satellite.cloud.ibm.com 169.63.133.10 169.47.174.178 52.117.112.242
yes us-east c-04-ws.us-east.link.satellite.cloud.ibm.com 169.63.121.178 169.62.53.58 169.59.135.26
no br-sao c-01-ws.br-sao.link.satellite.cloud.ibm.com 163.109.70.234 163.107.69.114 169.57.155.74

Can you please update the documentation on 25 Feb 2023?

Consistency of connectivity rules

Hi

  1. In the section Allow Hosts to Connect to IBM, for the hosts cloud.ibm.com and containers.cloud.ibm.com, the protocol is missing; only the port is mentioned.
    Suggestion: Please add the protocol, whether TCP or HTTPS or both.
  2. "NTP protocol and UDP port 123" could be interpreted incorrectly, as if both the NTP and UDP protocols are to be allowed on port 123.
    Suggestion: Specify UDP on port 123.
  3. In the section Allow control plane worker nodes to back up control plane etcd data to IBM Cloud Object Storage, the port is missing.
    Suggestion: Add port 443, or whichever port applies.
  4. Overall, we advise providing a firewall template file or a script.

Cloud Object Storage requirements. Clarify documentation

Hi,

In the documentation https://cloud.ibm.com/docs/satellite?topic=satellite-locations#location-create-console

"Create a bucket in this service instance to back up your Satellite location control plane. The IBM Cloud Object Storage bucket must be in the same region as your Satellite location. The bucket endpoint must match the instance endpoint, such as a Cross Region bucket for a Global instance."

What does "The bucket endpoint must match the instance endpoint, such as a Cross Region bucket for a Global instance" mean, and what should I configure?

My understanding is that I have to create a regional bucket in the same region as the location. But I do not understand that second part. I'm not sure it adds anything, and it's confusing.

Thank you.

Important information left out

Related website: https://cloud.ibm.com/docs/satellite?topic=satellite-link-endpoint-secure

A very important piece of information is the allowed CIDR addresses. I tried to use a CIDR that belongs to the subnet 192.168.x.x/24 and received this error message:

Unable to Add Source. IP/CIDR out of range. Service source should be subset of following CIDRs: '10.0.0.0/8, 161.26.0.0/16, 166.8.0.0/14, 172.16.0.0/12'.

Either allow the 192.168.0.0/16 address space, or at least document the allowed CIDRs in the page above.

Create a directory

Hi,
How do I create a directory for the configuration files (in this example, ~/agent/env-files), and then create a file in the ~/agent/env-files directory? Which commands do I use in the terminal?
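Assuming a standard Linux or macOS terminal, the commands would look like this (the file name env.txt is only an example):

```sh
# Create the directory, including any missing parent directories.
mkdir -p ~/agent/env-files

# Create an empty file inside it; fill it in afterwards with any text editor.
touch ~/agent/env-files/env.txt
```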

Running a Connector agent - API Key issues

In the documentation:

https://cloud.ibm.com/docs/satellite?topic=satellite-run-agent-locally&interface=ui

It notes:

Create a file in the ~/agent/env-files directory called apikey with a single line value of your IBM Cloud API Key that can access the Satellite Connector.
SATELLITE_CONNECTOR_IAM_APIKEY=~/agent/env-files/apikey

However, in testing, using a file with only the API key in it does not work. As a workaround, SATELLITE_CONNECTOR_IAM_APIKEY was set to the actual API key instead of a file containing the API key, and that worked. Here is an example according to the documentation:

Example:

SATELLITE_CONNECTOR_IAM_APIKEY=" ~/Development/cs01-connector/agent/env-files/apikey" 

Created env.txt file with the following values:
$ vi ~/Development/cs01-connector/agent/env-files/env.txt

Errors in log:

- ..."tOps","msgid":"O03-400","msg":"GetToken error","err":"status code: 400. Provided API key could not be found."
- ..."agent_tunnel","msgid":"LAT07","msg":"Failed to get configuration from API, will retry in 120 seconds","statusCode":401,"details":{"incidentID":"bfb28653-4037-4554-9dae-ec34cee14f81","code":"LA0001","description":"You must specify an IAM token as an Authorization header.","type":"Authentication"}
- ..."agent_tunnel","msgid":"LAT06","msg":"Failed to get configuration from API, will retry in 120 seconds"}

Can the documentation be updated to include using the API key by itself, and could you verify that this process works as expected when using a file containing the API key?
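One hypothetical explanation for the failure: in the example above, the value is quoted with a leading space and a ~, and ~ is not expanded inside double quotes, so the agent may be looking for a literally-named path that does not exist. A sketch that avoids both pitfalls (the key value is a placeholder):

```sh
API_KEY="example-api-key"   # placeholder, not a real key

# Write the key as the file's only content, with no trailing newline or spaces.
mkdir -p "$HOME/agent/env-files"
printf '%s' "$API_KEY" > "$HOME/agent/env-files/apikey"

# Reference the file with an absolute path: $HOME expands inside double
# quotes, while '~' does not.
SATELLITE_CONNECTOR_IAM_APIKEY="$HOME/agent/env-files/apikey"
```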

`Why don't cluster list or get updates to Kubernetes resources that are managed by Satellite Config?` in the FAQ does not make sense

The question is awkwardly worded and difficult to understand. Even if it were to read, "Why doesn't cluster list or get updates to Kubernetes resources that are managed by Satellite Config?", it isn't quite asking the question that is being answered.

Perhaps it should read something more like, "Why isn't my Resources list updating after registering my cluster or creating a subscription?".

vCPU vs CPU

Confirm that you have at least three host machines in separate zones that meet the minimum hardware requirements, such as 4 CPUs and 16 GB of memory with RHEL 7 packages and any provider-specific requirement from the guide.

Link to section

In your on-premises environment, identify or create at least three host machines in physically separate racks, which are called zones in Satellite, that meet the minimum hardware requirements, such as 4 CPUs and 16 GB of memory with RHEL 7 packages.

Link to section

I believe it should be "4 vCPUs" not "4 CPUs".

Indentation in YAML file to set accessTokenMaxAgeSeconds in disconnected-usage is wrong

In High Availability, Disaster Recovery, and Disconnected Usage, in the last chapter, Disconnected Usage, the indentation of the YAML file shown to change accessTokenMaxAgeSeconds is wrong.

If I copy and paste the file and try to apply it I get an error.

# cat DisconnectedUsage.yaml

apiVersion: config.openshift.io/v1
kind: OAuth
  metadata:
    name: cluster
  spec:
    tokenConfig:
      accessTokenMaxAgeSeconds: 259200

# oc apply -f DisconnectedUsage.yaml
error: error parsing DisconnectedUsage.yaml: error converting YAML to JSON: yaml: line 3: mapping values are not allowed in this context

The correct file is

# cat accessTokenMaxAgeSeconds-72h.yaml

apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  tokenConfig:
    accessTokenMaxAgeSeconds: 259200

# oc apply -f accessTokenMaxAgeSeconds-72h.yaml
oauth.config.openshift.io/cluster configured

Can you please correct this?

Reference to steps 3-5 in linked Openshift documentation is misleading

In the table specified below, it refers to steps 3-5 in the Red Hat OpenShift on IBM Cloud firewall documentation. These steps do not appear to exist, although the information is correct. Might want to remove reference to the steps 3-5 since it is not present on the linked OpenShift documentation.

https://cloud.ibm.com/docs/satellite?topic=satellite-host-reqs#reqs-host-network-firewall-outbound

Allow IBM Cloud services to set up and manage your location

All IP addresses listed for US South (dal) in steps 3 - 5 of the Red Hat OpenShift on IBM Cloud firewall documentation

Consistent terminology

Hi, some comments to bring more clarity.

  1. The section "Basic Control Plane Worker setup" — I think it refers to the Satellite location, so it should not be referred to as the Master control plane; most of us understand the Master control plane to reside in IBM Cloud, not in the Satellite location.
    Suggestion: Change it to "Satellite location control plane".
  2. This page falls short of explaining disaster recovery concepts with authenticity. DR should talk about RPO, RTO, and other key metrics, in the context of business continuity, not at a component level.
    Suggestion: Please change "Disaster Recovery" to "Backup and Recovery", as the description aptly describes procedures for backup and how backups can be used for restores. Also be careful to provide full clarity: I don't think you can simply restore control plane data to create a new location. It's also confusing in the context of a disaster, because when a disaster happens we need to restore services in a completely different physical location, which, as per our understanding, requires setting up a new location.
  3. The High Availability section should talk about SLAs. How much does the SLA improve by adding more hosts? Is it only for the control plane, or also for the workload clusters? Touch on how it helps HA for customer applications too, and add any caveats.
    Suggestion: Show separate power and racks in the diagrams to support the text. Also highlight the Satellite components that benefit from high availability, and link to the main architecture diagram. HA is ideally explained as the full technology stack for the Satellite location, the workload clusters, and other Satellite-enabled services.

SCP command for VPC fails

scp <path_to_attachHost.sh> root@<ip_address>:/tmp/attach.sh

Permission denied (publickey,gssapi-keyex,gssapi-with-mic).
lost connection

I think the error is caused by the private key not being specified anywhere in the command.
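A sketch of what the fixed command might look like, passing the private key explicitly with -i (the key path is a placeholder; the other <...> placeholders are from the original command):

```sh
# Pass the private key that matches the SSH key attached to the VPC instance.
scp -i ~/.ssh/<vpc_private_key> <path_to_attachHost.sh> root@<ip_address>:/tmp/attach.sh
```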
