cpma's Issues

cpma.log.json is not valid JSON

Example:

{
  "level": "error",
  "msg": "Unable to read private key: ",
  "time": "2019-06-06T16:31:59+02:00"
}
{
  "level": "error",
  "msg": "Unable to read private key: ",
  "time": "2019-06-06T16:31:59+02:00"
}
{
  "level": "error",
  "msg": "Unable to read private key: ",
  "time": "2019-06-06T16:31:59+02:00"
}
{
  "level": "error",
  "msg": "Unable to read private key: ",
  "time": "2019-06-06T16:31:59+02:00"
}

We need to wrap these objects in an array ([{...},{...}]) or change the format of this file.
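
In the meantime, the current format is effectively a stream of concatenated JSON documents, which encoding/json can already consume one value at a time. A minimal Go sketch of reading the log that way (entry fields taken from the example above):

package main

import (
	"encoding/json"
	"fmt"
	"io"
	"log"
	"os"
)

type logEntry struct {
	Level string `json:"level"`
	Msg   string `json:"msg"`
	Time  string `json:"time"`
}

func main() {
	f, err := os.Open("cpma.log.json")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// json.Decoder reads one JSON value at a time, so it tolerates the
	// concatenated top-level objects that json.Unmarshal rejects.
	dec := json.NewDecoder(f)
	for {
		var e logEntry
		if err := dec.Decode(&e); err == io.EOF {
			break
		} else if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s %s: %s\n", e.Time, e.Level, e.Msg)
	}
}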

CI implementation

Leverage the work being done for the controller and implement end-to-end tests to:

  • bring up both an OCP3 and an OCP4 cluster
  • generate a report
  • apply transforms

Reduce the amount of console logs

Since users will be prompted for values using survey, we need to remove all logs from the console, either by moving all info-level logs to debug level or by writing only to the cpma.log file.
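
A minimal sketch of what the file-only option could look like with logrus (the real env.InitLogger would also need to honor the --debug flag):

package env

import (
	"os"

	"github.com/sirupsen/logrus"
)

// InitLogger routes all log output to cpma.log only, so nothing is
// printed to the console while survey is prompting the user. A sketch;
// the real implementation would also honor the --debug flag.
func InitLogger() error {
	f, err := os.OpenFile("cpma.log", os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0640)
	if err != nil {
		return err
	}
	logrus.SetOutput(f)
	logrus.SetLevel(logrus.InfoLevel)
	return nil
}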

Factor out Master and Node configuration file unmarshalling.

I thought someone mentioned it, but I cannot find a comment about it in PR #128.

Anyway, since lots of components are subsets of the same configuration file, shouldn't we rely on a helper each time we tap into a MasterConfig, NodeConfig, etc.?

For instance, these 2 blocks have the same use and both share configV1.MasterConfig:
https://github.com/fusor/cpma/blob/master/pkg/transform/oauth_transform.go#L57-L66
https://github.com/fusor/cpma/blob/master/pkg/transform/sdn_transform.go#L103-L110

Helpers could be something like:
https://github.com/fusor/cpma/blob/8c7523c9ec9a90959fae074a9ab02b6533f7f835/pkg/ocp3/ocp3.go#L24-L49
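
For illustration, a shared decode helper might look like the following sketch, assuming the legacyconfig API types and sigs.k8s.io/yaml (names are illustrative, not the current code):

package decode

import (
	configv1 "github.com/openshift/api/legacyconfig/v1"
	"sigs.k8s.io/yaml"
)

// Master unmarshals fetched master-config.yaml content once, so every
// transform taps into the same decoded MasterConfig. A NodeConfig twin
// would follow the same shape. (Sketch only; package and names TBD.)
func Master(content []byte) (*configv1.MasterConfig, error) {
	var masterConfig configv1.MasterConfig
	if err := yaml.Unmarshal(content, &masterConfig); err != nil {
		return nil, err
	}
	return &masterConfig, nil
}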

Your thoughts?

Consider using the Golang structured test paradigm

https://github.com/automationbroker/bundle-lib/blob/master/registries/adapters/dockerhub_adapter_test.go#L35-L147

Basically, create an array of test cases to cover both edge cases and easy cases. The actual struct can be tailored to your individual test needs. The important part is the array.

	testCases := []struct {
		name        string
		c           Configuration
		expected    []string
		expectederr bool
		handlerFunc http.HandlerFunc
	}{
		...
	}

Then create your test loop:

	for _, tc := range testCases {
		t.Run(tc.name, func(t *testing.T) {
			// get test server
			serv := httptest.NewServer(tc.handlerFunc)
			defer serv.Close()

			// use the test server's url
			dockerHubLoginURL = strings.Join([]string{serv.URL, "/v2/users/login/"}, "")
			dockerHubRepoImages = strings.Join([]string{serv.URL,
				"/v2/repositories/%v/?page_size=100"}, "")
			dockerHubManifestURL = strings.Join([]string{serv.URL, "/v2/%v/manifests/%v"}, "")

			// create the adapter we want to test
			dha := DockerHubAdapter{Config: tc.c}

			// test the GetImageNames method
			output, err := dha.GetImageNames()

			if tc.expectederr {
				if !assert.Error(t, err) {
					t.Fatal(err)
				}
				assert.NotEmpty(t, err.Error())
			} else if err != nil {
				t.Fatalf("unexpected error during test: %v\n", err)
			}

			errmsg := fmt.Sprintf("%s returned the wrong value", tc.name)
			assert.Equal(t, tc.expected, output, errmsg)
		})
	}

Cluster Report Improvements

For Persistent Volumes:

  • Add Capacity if not already there

  • Add Phase (Bound, Lost, Available, etc.)

  • Add Driver (NFS, EBS, etc.)
    (Note from John: assume we will need some logic to determine this; I'm not aware of a simple 'type' field we can just query on the PV spec.)

  • Add Node list to the report, including:
    • CPU
    • Memory
    • Consumed memory, if possible (did not see an API response for this)
    • Number of pods running (this might be more difficult, in that you may have to look at pods to determine which node they are running on); see the sketch below

  • List Masters if possible. Didn't see a way to list this in the API, but we get a list somehow for the settings survey?
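
As a sketch of the node-list items above: recent client-go versions let us read capacity from node status, and per-node pod counts can be derived by listing pods cluster-wide and grouping on spec.nodeName (the NodeReport type and function names here are hypothetical):

package report

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// NodeReport is a hypothetical report row; field names are placeholders.
type NodeReport struct {
	Name     string
	CPU      string
	Memory   string
	PodCount int
}

// ListNodes gathers per-node CPU, memory, and running-pod counts. Pod
// counts require a second API call: pods are listed cluster-wide and
// grouped by spec.nodeName, since nodes don't carry a pod list.
func ListNodes(client kubernetes.Interface) ([]NodeReport, error) {
	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return nil, err
	}
	pods, err := client.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return nil, err
	}

	podsPerNode := map[string]int{}
	for _, pod := range pods.Items {
		if pod.Status.Phase == corev1.PodRunning {
			podsPerNode[pod.Spec.NodeName]++
		}
	}

	var out []NodeReport
	for _, node := range nodes.Items {
		out = append(out, NodeReport{
			Name:     node.Name,
			CPU:      node.Status.Capacity.Cpu().String(),
			Memory:   node.Status.Capacity.Memory().String(),
			PodCount: podsPerNode[node.Name],
		})
	}
	return out, nil
}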

What belongs in the transform/secrets pkg?

Spawned from #128 (comment)

@jwmatthews pointed out that there are a set of types that are tightly coupled to OAuth that don't feel right to be in a generic secrets package:

https://github.com/fusor/cpma/blob/master/pkg/transform/secrets/secrets.go#L11-L24

This raises the question: should these be part of the oauth pkg? And furthermore, what actually belongs in a secrets pkg, and where should it be located?

It makes sense to me that we should have a generic secrets pkg for logic that works with them, but I'm not sure that it belongs under the transform dir. Separately, I think the above types should be in the oauth pkg.

If the target cluster has its config mounted into containers, we can't know the file's true location

Wanted to capture this while it's still fresh in my mind in case anyone else runs into this. I'm using https://github.com/fusor/origin3-dev/ for my example ocp3 cluster, which brings up an ec2 vm and runs oc cluster up to launch a cluster via docker containers. The cluster's configuration files are stored in a non-default location on the vm, and are mounted into possibly different locations inside the containers.

What this means is that when I run the tool against the cluster, we read the master config and see that htpasswd is configured to be at /etc/origin/master/htpasswd. In reality, that file actually exists at /tmp/openshift.local.clusterup/kube-apiserver/htpasswd on the vm, but it's mounted into the container at /etc/origin/master/htpasswd.

I don't see how we can figure out the true location of the source files if the values in the master config differ from where they actually are on the machine.

@jwmatthews, is this just a product of our particular origin3-dev setup, and we're unlikely to encounter this scenario with the production environments we're migrating? Or is this something we need to address?

Use ssh instead of sftp to be able to use sudo commands

Although it might sound more natural to use sftp to retrieve the nodes' configuration files, ssh is more flexible for advanced admin operations.
The use case is specifically when nodes are configured with a dedicated user that has sudo privileges, and the admin (root) user is not allowed to ssh/sftp.
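
A sketch of what that could look like with golang.org/x/crypto/ssh — running a sudo'd cat over a plain SSH session (FetchWithSudo is a hypothetical helper, not existing cpma code):

package fetch // hypothetical pkg; cpma's real fetch code may differ

import "golang.org/x/crypto/ssh"

// FetchWithSudo runs a sudo'd cat over a plain SSH session, which works
// where root sftp is disabled and only a sudo-capable user may log in.
func FetchWithSudo(addr, path string, config *ssh.ClientConfig) ([]byte, error) {
	client, err := ssh.Dial("tcp", addr, config)
	if err != nil {
		return nil, err
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		return nil, err
	}
	defer session.Close()

	// sudo -n fails fast instead of hanging on a password prompt.
	return session.Output("sudo -n cat " + path)
}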

Unit of translation should not be an individual config file

In master I'm seeing a series of funcs (fetch, decode, translate, dump, or FDTD) run per remote config file (master, node). I'd like to propose an alternative model because this doesn't sufficiently address the problem of configuring OCP4 features, like container runtimes, which require multiple input configuration files.

The unit of translation should be on an OCP4 feature basis, not on a remote config basis. Let's call the OCP4 features components. For example, there is an OAuth component, an SDN component, and a ContainerRuntime component.

Ideally, you should be able to define the FDTD functions for each "component" and have it fetch/decode the N config files that are relevant to that component; see the sketch below.
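
A sketch of that abstraction (illustrative names only, not an existing cpma API):

package transform

// Component is a sketch of the proposed unit of translation: one OCP4
// feature that declares every OCP3 config file it needs.
type Component interface {
	// ConfigFiles lists the remote files this component depends on,
	// e.g. the master config plus crio.conf for a ContainerRuntime
	// component.
	ConfigFiles() []string
	Fetch(paths []string) error
	Decode() error
	Translate() error
	Dump() error
}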

Report: Unit tests

Would it make sense to have unit tests for the report?
I'm thinking about making sure that a parameter that is not translated, and is therefore reported, actually appears on the report side.

cpma fails with FATAL when run with the --help flag

$ ./bin/cpma --help
Helps migration cluster configuration of a OCP 3.x cluster to OCP 4.x

Usage:
  cpma [flags]

Flags:
      --config string       config file (default is $HOME/.cpma.yaml)
      --debug               set log level to debug
  -h, --help                help for cpma
  -l, --local-only          do not fetch files, use only local files.
  -o, --output-dir string   set the directory to store extracted configuration.
40/40 10:40:41 FATAL unable to read private key: open : no such file or directory

That's wrong, right?

Handle all types of client secrets

OAuth: Add missing and report unsupported parameters

Add the following parameters which are supported in OCP4 OAuth CRD spec:

  • template -> error -> name
  • template -> login -> name
  • template -> providerSelection -> name
  • tokenConfig -> accessTokenMaxAgeSeconds

Mark as unsupported, only if value is set:

  • AlwaysShowProviderSelection
  • AssetPublicURL
  • MasterCA
  • MasterPublicURL
  • MasterURL
  • SessionConfig:
    • sessionMaxAgeSeconds
    • sessionName
    • sessionSecretsFile
  • tokenConfig -> authorizeTokenMaxAgeSeconds

Consider file retrieval as an independent feature

Spawned from PR #128

We have a generic requirement that most (all?) transforms are going to need to get remote file contents as part of their Extract methods, to be decoded and stored on an Extraction object. It makes sense to have a dedicated part of the codebase whose only responsibility is to provide this feature. Additionally, we should be able to easily mock it with something that provides testdata-sourced contents so we can test Extract methods well.

If the above is true, we should move Fetch out of transform/transform.go and into its own file, and I would advocate for a different pkg than transform. This is the kind of thing that typically ends up in a util package if it's a handful of functions, but if it grows to be more than that, I think it makes sense as its own pkg (name TBD).

It should handle file caching, since most transforms will be requesting the same files from it. I know we do that today; I'm just speccing it here. Additionally, @gildub has pointed out that we will likely need the ability to provide the host as an argument, since this may need to support a multi-host scenario.

Finally, it could possibly be useful to represent this concept with something like a FileManager struct rather than a handful of functions. I don't have a reason for this right now, but I've certainly seen it done for something like this in the past.

At the very least, Fetch as it is today should be moved out of that transform file and into something more focused on the concept of file retrieval, and it shouldn't be called from anywhere but transform Extract methods, since that's the only place that should be downloading remote files.

Some Q's to be answered:

  • Is it possible that in a multi-master scenario, the master config files would differ between masters? If that's not possible, just pick one, and we don't have to care about specifying the host.
  • Do we need to get files from nodes as well? If that's true, we know we'll need to at least get files from a master and a node, so that's more than one host, and we'd have to accept the host as an argument here as well. It seems valuable to do this regardless of the answer to the Q's.
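
A sketch of the caching FileManager idea, with the download function injected so tests can substitute testdata-backed contents (all names hypothetical):

package fileretriever // name TBD, per the issue

import "fmt"

// GetFileFunc abstracts the actual download so tests can substitute a
// testdata-backed implementation.
type GetFileFunc func(host, path string) ([]byte, error)

// FileManager caches fetched files per host+path so transforms asking
// for the same master config don't trigger repeated downloads.
type FileManager struct {
	Get   GetFileFunc
	cache map[string][]byte
}

func NewFileManager(get GetFileFunc) *FileManager {
	return &FileManager{Get: get, cache: map[string][]byte{}}
}

// Fetch returns the cached contents if present, downloading on a miss.
func (m *FileManager) Fetch(host, path string) ([]byte, error) {
	key := fmt.Sprintf("%s:%s", host, path)
	if content, ok := m.cache[key]; ok {
		return content, nil
	}
	content, err := m.Get(host, path)
	if err != nil {
		return nil, err
	}
	m.cache[key] = content
	return content, nil
}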

Naming validation for output manifests

It seems like the naming of output CRs should follow a certain pattern to be accepted as valid manifests when they are consumed on an OCP4 cluster. We need to validate these values against the DNS-1123 standard.

The Secret "htpasswd_auth-secret" is invalid: metadata.name: Invalid value: "htpasswd_auth-secret": a DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')

Similar to https://bugzilla.redhat.com/show_bug.cgi?id=1488366
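
Since the API server enforces this with the DNS-1123 subdomain rule, we could validate with the same helper from k8s.io/apimachinery before writing manifests. A sketch (validateObjectName is a hypothetical helper; the caller would also want to sanitize, e.g. "_" -> "-"):

package transform

import (
	"fmt"
	"strings"

	"k8s.io/apimachinery/pkg/util/validation"
)

// validateObjectName checks an output manifest's metadata.name against
// the same DNS-1123 subdomain rule the API server enforces, catching
// names like "htpasswd_auth-secret" before the manifest is written.
func validateObjectName(name string) error {
	if errs := validation.IsDNS1123Subdomain(name); len(errs) > 0 {
		return fmt.Errorf("invalid manifest name %q: %s", name, strings.Join(errs, "; "))
	}
	return nil
}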

Proper Error Handling

We should probably have better error handling than calling logrus.Fatal and logrus.Panic.

There are times where we're going to have to expect errors and deal with them gracefully.

One example off the top of my head is looking for files that might not exist. CRI-O was not the default runtime in OpenShift 3.x, but it was possible to use. In order to transfer the container runtime configuration we'll need to at least try to fetch the file. If that fails, it shouldn't error the program out; instead, we simply don't create a CR for the runtime configuration.
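
A sketch of the CRI-O case (the fetch and build funcs are parameters purely to keep the sketch self-contained; they stand in for whatever helpers we end up with):

package transform

import "github.com/sirupsen/logrus"

// GenCrioManifest sketches graceful handling of an optional file: a
// missing crio.conf is expected on clusters that never used CRI-O, so
// we skip the CR instead of calling logrus.Fatal.
func GenCrioManifest(fetch func(path string) ([]byte, error), build func(conf []byte) ([]byte, error)) ([]byte, error) {
	conf, err := fetch("/etc/crio/crio.conf")
	if err != nil {
		logrus.Debugf("no CRI-O config found, skipping runtime CR: %v", err)
		return nil, nil // absent optional config is not a fatal condition
	}
	return build(conf)
}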

Consider what the translation unit should be called

If we believe this problem can be described as a series of Translations, where something asks them to do their ETL work, I think it's important we settle on what we should call this fundamental type that represents the workflow. The names thus far:

  • Translator (used today)
  • Migration
  • Transform
  • Operation

We're looking for a name for a type that abstracts the entire workflow of translating a given OCP4 feature, and implements the whole ETL process. Inviting opinion on the existing names, or proposals for new ones.

Issue spawned from a previous PR: #113 (comment)

Use machine-config-operator structs for Crio when possible

This is just to track replacing the structs we added for Crio when possible

This prevents us from using machine-config-operator
openshift/machine-config-operator#848

I also hit this issue and haven't been able to get client-go and apimachinery to play nice after adding machine-config-operator

# k8s.io/client-go/rest
../../../../pkg/mod/k8s.io/client-go@…+incompatible/rest/request.go:598:31: not enough arguments in call to watch.NewStreamWatcher
	have (*versioned.Decoder)
	want (watch.Decoder, watch.Reporter)
make: *** [Makefile:20: build] Error 2

I can work on this some more when the other issue gets fixed.

Document supported use cases and workflows to execute.

This issue is to track work to create a 'scenarios' folder similar to https://github.com/fusor/mig-controller/tree/master/docs/scenarios

The purpose of 'docs/scenarios' for CPMA is to create documentation on the high level use cases we will support along with any sample data needed to recreate those scenarios.

For example, this may include sample yamls or master-config.yaml snippets which have certain configurations to apply in OCP 3 so we may exercise CPMA tool and produce desired Reports/CRs to apply to OCP 4.x.

These engineering docs will help the community to understand the capabilities of the tool and serve as on-boarding, as well as help QE and Docs.

Implement ability to migrate Identity Management configuration from OCP 3.x cluster to OCP 4.0 cluster.

As we build the functionality to assist with migrating Identity Management configuration, let's ensure we capture:

  • Able to work with user input via survey
  • Able to work with data from a configuration file

For Identity Management configuration in 3.x see:
https://docs.openshift.com/container-platform/3.11/install_config/configuring_authentication.html

Please test at least 3 different authentication provider configurations:

  • HTPasswd
  • GitHub Auth
  • Google

related PR - #29

lint: the code fails lint pretty badly

[jesusr@transam cpma{master}]$ make lint
pkg/ocp/ocp.go:20:6: exported type ConfigFile should have comment or be unexported
pkg/ocp/ocp.go:26:6: exported type OAuthTranslator should have comment or be unexported
pkg/ocp/ocp.go:33:6: exported type SDNTranslator should have comment or be unexported
pkg/ocp/ocp.go:39:6: exported type Translator should have comment or be unexported
pkg/ocp/ocp.go:50:1: exported function NewOAuthTranslator should have comment or be unexported
pkg/ocp/ocp.go:62:1: exported function NewSDNTranslator should have comment or be unexported
pkg/ocp/ocp.go:107:1: exported method OAuthTranslator.Load should have comment or be unexported
pkg/ocp/ocp.go:122:1: exported method SDNTranslator.Load should have comment or be unexported
pkg/ocp3/ocp3.go:17:6: exported type Master should have comment or be unexported
pkg/ocp3/ocp3.go:19:6: exported type Node should have comment or be unexported
pkg/ocp3/ocp3.go:25:1: comment on exported function MasterDecode should be of the form "MasterDecode ..."
pkg/ocp3/ocp3.go:38:1: comment on exported function NodeDecode should be of the form "NodeDecode ..."
pkg/ocp4/ocp4.go:8:6: exported type Manifests should have comment or be unexported
pkg/ocp4/ocp4.go:26:1: exported function OAuthManifest should have comment or be unexported
pkg/ocp4/ocp4.go:37:1: exported function SecretsManifest should have comment or be unexported
pkg/ocp4/ocp4.go:44:1: exported function SDNManifest should have comment or be unexported
pkg/ocp4/oauth/basicauth.go:12:6: exported type IdentityProviderBasicAuth should have comment or be unexported
pkg/ocp4/oauth/github.go:12:6: exported type IdentityProviderGitHub should have comment or be unexported
pkg/ocp4/oauth/gitlab.go:10:6: exported type IdentityProviderGitLab should have comment or be unexported
pkg/ocp4/oauth/google.go:10:6: exported type IdentityProviderGoogle should have comment or be unexported
pkg/ocp4/oauth/htpasswd.go:14:6: exported type IdentityProviderHTPasswd should have comment or be unexported
pkg/ocp4/oauth/keystone.go:15:6: exported type IdentityProviderKeystone should have comment or be unexported
pkg/ocp4/oauth/ldap.go:9:6: exported type IdentityProviderLDAP should have comment or be unexported
pkg/ocp4/oauth/oauth.go:27:1: comment on exported type OAuthCRD should be of the form "OAuthCRD ..." (with optional leading article)
pkg/ocp4/oauth/oauth.go:28:6: type name will be used as oauth.OAuthCRD by other packages, and that stutters; consider calling this CRD
pkg/ocp4/oauth/oauth.go:45:6: exported type MetaData should have comment or be unexported
pkg/ocp4/oauth/oauth.go:51:2: exported var APIVersion should have comment or be unexported
pkg/ocp4/oauth/openid.go:10:6: exported type IdentityProviderOpenID should have comment or be unexported
pkg/ocp4/oauth/requestheader.go:9:6: exported type IdentityProviderRequestHeader should have comment or be unexported
pkg/ocp4/sdn/sdn.go:65:1: exported function TranslateClusterNetworks should have comment or be unexported
pkg/ocp4/sdn/sdn.go:80:1: exported function SelectNetworkPlugin should have comment or be unexported
pkg/ocp4/secrets/secrets.go:11:6: exported type HTPasswdFileSecret should have comment or be unexported
pkg/ocp4/secrets/secrets.go:15:6: exported type KeystoneFileSecret should have comment or be unexported
pkg/ocp4/secrets/secrets.go:19:6: exported type LiteralSecret should have comment or be unexported
pkg/ocp4/secrets/secrets.go:22:6: exported type BasicAuthFileSecret should have comment or be unexported
pkg/ocp4/secrets/secrets.go:26:6: exported type Secret should have comment or be unexported
pkg/ocp4/secrets/secrets.go:34:6: exported type MetaData should have comment or be unexported
pkg/ocp4/secrets/secrets.go:39:5: exported var APIVersion should have comment or be unexported
pkg/ocp4/secrets/secrets.go:41:1: exported function GenSecret should have comment or be unexported
internal/io/io.go:10:5: exported var GetFile should have comment or be unexported
internal/io/io.go:24:9: if block ends with a return statement, so drop this else and outdent its block
Found 41 lint suggestions; failing.
make: *** [Makefile:31: lint] Error 1

Omit optional values if they are empty

We should omit optional values if they are empty. For example:

identityProviders:
  - name: my_remote_basic_auth_provider 
    challenge: true 
    login: true 
    mappingMethod: claim 
    provider:
      apiVersion: v1
      kind: BasicAuthPasswordIdentityProvider
      url: https://www.example.com/remote-idp 
      ca: /path/to/ca.file # OPTIONAL
      certFile: /path/to/client.crt  # OPTIONAL
      keyFile: /path/to/client.key # OPTIONAL

certFile, ca, and keyFile can be empty; when they are, we should not generate secrets for them or reference them in the translated CR.
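
A minimal sketch of the guard, with illustrative stand-in types for the oauth structs:

package oauth

// Minimal sketch of the omit-if-empty rule: a client cert secret is
// only generated and referenced in the CR when certFile was actually
// set in the OCP3 provider config. Types and helpers are illustrative.

type secretRef struct{ Name string }

type basicAuthIDP struct {
	TLSClientCert secretRef
	TLSClientKey  secretRef
}

func addClientCert(idP *basicAuthIDP, providerName, certFile string, gen func(name, content string) error) error {
	if certFile == "" {
		return nil // optional field absent: no secret, nothing in the CR
	}
	name := providerName + "-client-cert-secret"
	if err := gen(name, certFile); err != nil {
		return err
	}
	idP.TLSClientCert.Name = name
	return nil
}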

File organization feels arbitrary

It's not very clear where anything goes in the project. Pulling from my comment on another PR:

It's not clear what should go into something like the ocp/ocp.go file, for example, especially when there is an ocp3 and an ocp4 pkg. This needs more thought. This is probably something that deserves its own issue, and should be dealt with separately.

Multiple translations could possibly write to the same output CR

Spawned from the discussion on #113 (comment)

Quoting from that comment:

Have we considered the fact that more than one ocp4 feature may need to provide their configuration on the same CR, which means they'd need to edit a shared output file? Perhaps we need to share a data structure between transforms that represents the total file output so that more than one Translator can mutate the same individual output file.

secretType with GenSecret and the switch in buildData should be an enum

https://github.com/fusor/cpma/blob/master/pkg/transform/secrets/secrets.go#L68

It's preferable to use an enum to represent something like this so it uses a strong type and won't accept a secretType that isn't part of the possible set (the compiler will stop it in its tracks).

Go doesn't have enums, but there are idiomatic ways they can be approximated:

type Direction int

const (
    North Direction = iota
    East
    South
    West
)

func (d Direction) String() string {
    return [...]string{"North", "East", "South", "West"}[d]
}

https://yourbasic.org/golang/iota/
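
Applied to the secrets pkg, that pattern might look like this (type and constant names are a sketch based on the existing HTPasswd/Keystone/Literal/BasicAuth secret types, not current code):

package secrets

// SecretType replaces the stringly-typed secretType argument, so the
// compiler rejects values outside the known set.
type SecretType int

const (
	KeystoneSecretType SecretType = iota
	HTPasswdSecretType
	LiteralSecretType
	BasicAuthSecretType
)

func (t SecretType) String() string {
	return [...]string{"keystone", "htpasswd", "literal", "basicauth"}[t]
}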

root.go should not have any logic of the core application

That's exactly what I thought.
The reason is that apparently it's the practice when using "github.com/spf13/cobra".

I disagree; cobra does not dictate how you call the rest of your app. Let's be clear here: I'm not debating the need for the root.go file, and if cobra requires it to be called root.go, that is also okay. What I'm against is this code living in root.go.

		ocpOAuth := new(ocp.OAuthConfig)
		ocpSDN := new(ocp.SDNConfig)
		ocpOAuth.Add(source)
		ocpSDN.Add(source)

		configs = append(configs, ocpOAuth, ocpSDN)

		for _, config := range configs {
			config.Extract()
			config.Transform()
			ocp.DumpManifests(config.GenYAML())
		}

I'm of the opinion this should be in the core of your application. IMO root.go is NOT the core of your application. It is merely the "UI" for this application. Basically the struct that knows how to handle CLI parameters really doesn't care about what the core of the application needs.

Basically I know to look in cmd/ to see what parameters there are and how they are parsed. And I look to see how the application is started.

So to satisfy the request I'm making, it's as simple as moving the above code to another file in the pkg/ directory, something like this:

type MigrationAssistant struct {
 // the fields required to handle the transformations etc, I would defer to the POC that @eriknelson has been working on for this
}

func (m *MigrationAssistant) Start(params map[string]string) {
  // take in the parameters you need to start this application
}

Then in root.go I would do the following:

var rootCmd = &cobra.Command{
	Use:   "cpma",
	Short: "Helps migration cluster configuration of a OCP 3.x cluster to OCP 4.x",
	Long:  `Helps migration cluster configuration of a OCP 3.x cluster to OCP 4.x`,
	Run: func(cmd *cobra.Command, args []string) {
		env.InitConfig()
		env.InitLogger()

		// start the application
		ma := MigrationAssistant{}
		ma.Start(PASS_YOUR_PARAMS_HERE)
	},
...

Do you see now? Basically root.go knows all about parsing parameters. It knows how to transform those CLI parameters into something that can be passed into the Start of the application. It's cleaner and easier to see where the stuff for cobra is and the stuff required by your application.

README needs developer getting started instructions

I checked the project out into my GOPATH, and using go 1.12.4 it will not build with make. Cloning to a location outside of my GOPATH does work. I understand there's some weirdness around go modules and go binary versions, so we should clearly communicate expectations in the README:

  • What go versions are supported?
  • Where should the project be checked out to?
  • Any special requirements like setting GO111MODULE="on"?

Consider a classic ETL workflow

Currently the code is structured as fetch, decode, transform, dump. I'm proposing we model the workflow to match the established ETL pattern. "Component" here refers to the abstraction described in #106:

  • Extract: Fetch + decode the dependent config files for each component into usable data structures. Cache and reuse files that are shared between components (perhaps a FileRetriever type that checks for a cached copy, if miss, download).
  • Validate: Ensure the data we've extracted looks correct to prevent garbage-in-garbage-out. This is important and not currently part of the workflow. @dymurray and I have already run into issues due to the lack of validate.
  • Transform: Take the old data structures and turn them into the new structures
  • Load (Dump, or Flush): Serialize the transformed structures into whatever destination it is that we desire, in this case, a set of CR files.

These methods should be defined on each component we write. A runner should accept a list of these components and execute each of them in sequence (could be parallelized via channels, but let's not pre-optimize).
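
A sketch of the interface and runner (names illustrative, not current code):

package transform

// ETLComponent is the abstraction from #106, with the stages proposed
// here.
type ETLComponent interface {
	Extract() error
	Validate() error
	Transform() error
	Load() error
}

// Run executes each component's full ETL sequence in order, stopping a
// component at its first failing stage. Sequential on purpose: this
// could be parallelized via channels later, but let's not pre-optimize.
func Run(components []ETLComponent) error {
	for _, c := range components {
		for _, stage := range []func() error{c.Extract, c.Validate, c.Transform, c.Load} {
			if err := stage(); err != nil {
				return err
			}
		}
	}
	return nil
}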

make ci should include lint, gofmt, and govet targets

https://github.com/fusor/cpma/blob/master/Makefile#L25

Right now we're just checking the build and tests; it would be good practice to also enforce lint and gofmt in our CI runs, with the intention that all PRs must pass these before being merged. Devs can confirm their work will pass the checks by running make ci manually.

I don't believe the project is passing lint and/or gofmt currently, so as part of this issue we should repair the breakages.
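
One way the Makefile could wire this up, assuming the usual gofmt -l / go vet idioms (the directory names are a guess at this repo's layout):

ci: build test lint fmt vet

fmt:
	test -z "$(shell gofmt -l pkg cmd internal)"

vet:
	go vet ./...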

OAuth transform for Basic Auth has placeholder data and is incomplete.

Basic Auth support needs to be re-examined, I imagine this hasn't been tested against a real cluster yet.

https://github.com/fusor/cpma/blob/master/pkg/transform/oauth/basicauth.go#L45-L55

	// TODO: Fetch cert and key
	certFile := "456This is pretend content"
	encoded := base64.StdEncoding.EncodeToString([]byte(certFile))
	certSecret := secrets.GenSecret(certSecretName, encoded, "openshift-config", "basicauth")

	keySecretName := p.Name + "-client-key-secret"
	idP.BasicAuth.TLSClientKey.Name = keySecretName

	keyFile := "123This is pretend content"
	encoded = base64.StdEncoding.EncodeToString([]byte(keyFile))
	keySecret := secrets.GenSecret(keySecretName, encoded, "openshift-config", "basicauth")
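
For contrast, a sketch of the cert half once real file retrieval lands; GetFile is a hypothetical fetch helper (see the file-retrieval issue), while the GenSecret signature is taken from the quoted code:

	// Fetch the real client cert from the master instead of placeholder
	// content; GetFile is hypothetical, pending the file-retrieval work.
	certContent, err := GetFile(host, p.Provider.CertFile)
	if err != nil {
		return nil, err
	}
	encoded := base64.StdEncoding.EncodeToString(certContent)
	certSecret := secrets.GenSecret(certSecretName, encoded, "openshift-config", "basicauth")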

The new transforms need an example unit test

Spawned from #128 (comment)

I provided an example for how to accomplish this here: eriknelson@bdd651a

Specifically this is how I would convert Flush into overridable behavior so it becomes testable: eriknelson@bdd651a

I'm not 100% satisfied with pkg level default functions; it doesn't feel like that's a sustainable model for dependency injection (DI). Something to consider, I was chatting with another kubernetes go developer about how people deal with DI in go and there have been some promising proposals in major projects that maybe we could leverage? kubernetes-sigs/controller-runtime#285

Pragmatically I do not think we should adopt this, but it's an interesting option to explore, should we choose to do so.

Add 'lint' to Makefile

What do others think about adding a local 'lint' target to our Makefile and have it be called each time 'build' is called?

Thinking this would help to surface basic issues before submitting a PR.

I've used the below and would suggest:
https://github.com/golangci/golangci-lint

Example in Makefile

${GOPATH}/bin/golangci-lint:
	go get -u github.com/golangci/golangci-lint/cmd/golangci-lint

lint: ${GOPATH}/bin/golangci-lint
	${GOPATH}/bin/golangci-lint run

Tool shouldn't create manifests for configurations that don't exist

I don't have any oauth or secret configurations in my ocp3 config, and that led to manifests being created that have empty data.

[dymurray@pups cpma]$ cat data/manifests/100_CPMA-cluster-config-oauth.yaml 
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
  namespace: openshift-config
spec:
  identityProviders:
  - null
[dymurray@pups cpma]$ 
[dymurray@pups cpma]$ ls data/manifests/
100_CPMA-cluster-config-oauth.yaml    100_CPMA-cluster-config-secret-.yaml  
[dymurray@pups cpma]$ cat data/manifests/100_CPMA-cluster-config-secret-.yaml 
apiVersion: ""
kind: ""
type: ""
metadata:
  name: ""
  namespace: ""
data: null

We should have some better error checking so that we aren't creating null manifests.
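
A sketch of the guard (Manifest is a trimmed, illustrative stand-in for cpma's manifest type; buildCRD is injected to keep the sketch self-contained):

package transform

// Manifest is a trimmed, illustrative stand-in for cpma's manifest type.
type Manifest struct {
	Name string
	CRD  []byte
}

// oauthManifests sketches the guard this issue asks for: if extraction
// found no identity providers, emit no manifest at all instead of a CR
// containing "identityProviders: - null".
func oauthManifests(identityProviders []interface{}, buildCRD func() ([]byte, error)) ([]Manifest, error) {
	if len(identityProviders) == 0 {
		return nil, nil // nothing configured in OCP3: skip the file entirely
	}
	crd, err := buildCRD()
	if err != nil {
		return nil, err
	}
	return []Manifest{{Name: "100_CPMA-cluster-config-oauth.yaml", CRD: crd}}, nil
}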

How should we accept our program arguments?

Spawned from #128 (comment)

Reading up some more on cobra, I think I was wrong in the above comment, where I stated that we shouldn't have multiple cmd files. It makes sense to me that for each command we should have a different cmd/X file (if we have a tool with different commands, e.g. cpma report, cpma transform).

Firstly, do we prefer to accept the output passed as a command vs. an option? Ex:

  • Command: cpma report X
  • Argument: cpma --output-type={report,transform}, defaulting to transform?

The first would result in a bunch of cmd/COMMAND.go files, and they would also share command line args, like output directory. So does that really make sense over the argument approach?

For maintainability reasons, I like the argument approach. It means we have a single entrypoint. If we think the UX of a command approach is the right choice, then multiple files totally make sense under cmd/COMMAND.go.

On top of all this, if we choose to go the command route, what does just cpma $OPTIONS do? Just list the help? Default to transform?

@jwmatthews, curious about your thoughts on this, because you stated you liked the command approach. If I had to choose, I like the argument approach because of its maintainability. Admittedly, the command approach is much closer to what oc and kubectl do today, so I could see that as a point in its favor.
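
If we did go the command route, the cobra wiring is straightforward: each verb registers itself on rootCmd, and shared flags become persistent flags. A sketch (the report command body is illustrative):

package cmd

import "github.com/spf13/cobra"

// reportCmd sketches one verb living in its own cmd/report.go file.
var reportCmd = &cobra.Command{
	Use:   "report",
	Short: "Generate a report of the OCP 3.x cluster configuration",
	Run: func(cmd *cobra.Command, args []string) {
		// hand off to the core application here, not inline logic
	},
}

func init() {
	// Shared flags become persistent flags on rootCmd so every
	// subcommand (report, transform) sees them.
	rootCmd.AddCommand(reportCmd)
}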
