
gitops-engine's Introduction


Argoproj - Get stuff done with Kubernetes


What is Argoproj?

Argoproj is a collection of tools for getting work done with Kubernetes.

  • Argo Workflows - Container-native Workflow Engine
  • Argo CD - Declarative GitOps Continuous Delivery
  • Argo Events - Event-based Dependency Manager
  • Argo Rollouts - Progressive Delivery with support for Canary and Blue Green deployment strategies

argoproj-labs is a separate GitHub org that we set up for community contributions related to the Argoproj ecosystem. Repos in argoproj-labs are administered by the owners of each project. Please reach out to us on the Argo Slack channel if you have a project that you would like to add to the org, to make it easier for others in the Argo community to find, use, and contribute back.

Community Blogs and Presentations

Project-specific community blogs and presentations are maintained in each project's documentation.

Adopters

Each Argo sub-project maintains its own list of adopters. Those lists are available in the respective project repositories.

Contributing

To learn about how to contribute to Argoproj, see our contributing documentation. Argo contributors must follow the CNCF Code of Conduct.

For help contributing, visit the #argo-contributors channel in CNCF Slack.

To learn about Argoproj governance, see our community governance document.

Project Resources

gitops-engine's People

Contributors

2opremio, ahalay, alexmt, ash2k, ashutosh16, blakepettersson, crenshaw-dev, darshanime, dependabot[bot], fengshunli, gdsoumya, jannfis, jaypipes, jessesuen, jgwest, jsoref, kshamajain99, leoluz, linuxsuren, maruina, mayzhang2000, mikebryant, pasha-codefresh, sbose78, suzuki-shunsuke, svghadi, terrytangyuan, tommyknows, wtam2018, yujunz


gitops-engine's Issues

Support gitops-engine.argoproj.io annotations

The synchronization process is controlled by the following annotations:

  • argocd.argoproj.io/sync-wave
  • argocd.argoproj.io/hook
  • argocd.argoproj.io/hook-delete-policy

We should get rid of the argocd part, since the engine is not specific to Argo CD. There are two possibilities (a sketch of the first follows the list):

  • Make annotation "domain" customizable
  • Support gitops-engine.argoproj.io/<type> in addition to argocd.argoproj.io/<type>.
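
As an illustration of the first option, here is a minimal, hypothetical sketch (the type and function names below are not part of the engine's API) of how a configurable annotation domain could be exposed to embedding applications:

// Hypothetical sketch: the engine reads its sync annotations under whatever
// prefix ("domain") the embedding application configures, falling back to a
// neutral gitops-engine.argoproj.io default.
package main

import "fmt"

const defaultDomain = "gitops-engine.argoproj.io"

// AnnotationKeys builds the fully qualified annotation names for one domain.
type AnnotationKeys struct {
	domain string
}

func NewAnnotationKeys(domain string) AnnotationKeys {
	if domain == "" {
		domain = defaultDomain
	}
	return AnnotationKeys{domain: domain}
}

func (a AnnotationKeys) SyncWave() string         { return a.domain + "/sync-wave" }
func (a AnnotationKeys) Hook() string             { return a.domain + "/hook" }
func (a AnnotationKeys) HookDeletePolicy() string { return a.domain + "/hook-delete-policy" }

func main() {
	// Argo CD keeps its existing prefix; other consumers get the neutral one.
	fmt.Println(NewAnnotationKeys("argocd.argoproj.io").SyncWave()) // argocd.argoproj.io/sync-wave
	fmt.Println(NewAnnotationKeys("").SyncWave())                   // gitops-engine.argoproj.io/sync-wave
}

The second option would instead have the engine recognize both prefixes on the same resource.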

Slack AMA

It'd be great to run an ask-me-anything session on Slack. It might be good to run it in our own Slack (after we figure out #7), so people who care can stay there afterwards.

Stuff that needs to be done:

  • pick date and time
  • decide where
  • let engineers and other folks involved know, so they can help answer questions
  • make sure AMA is mentioned in relevant announcements

/cc @mewzherder and @staceypotter

Fix race condition

Currently the gitops-engine code has a lot of data races.
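
Many of the reports below involve unsynchronized access to the shared resource maps in clusterCache (setNode/onNodeUpdated/onNodeRemoved racing with replaceResourceCache); others are in the sync package's task runner. As a minimal sketch of the usual fix (illustrative Go only, not the actual clusterCache code), the shared map gets guarded by a mutex so concurrent readers and writers are serialized:

// Illustrative only: a hypothetical index type whose map is protected by a
// sync.RWMutex, which is the standard way to remove races like those below.
package main

import (
	"fmt"
	"sync"
)

type resourceIndex struct {
	mu    sync.RWMutex
	nodes map[string]string // key -> resource version, as a stand-in for *Resource
}

func (i *resourceIndex) set(key, version string) {
	i.mu.Lock()
	defer i.mu.Unlock()
	i.nodes[key] = version
}

func (i *resourceIndex) remove(key string) {
	i.mu.Lock()
	defer i.mu.Unlock()
	delete(i.nodes, key)
}

func (i *resourceIndex) len() int {
	i.mu.RLock()
	defer i.mu.RUnlock()
	return len(i.nodes)
}

func main() {
	idx := &resourceIndex{nodes: map[string]string{}}
	var wg sync.WaitGroup
	// Two goroutines writing concurrently; with the mutex in place,
	// `go run -race` reports no data race.
	for g := 0; g < 2; g++ {
		wg.Add(1)
		go func(g int) {
			defer wg.Done()
			for n := 0; n < 100; n++ {
				idx.set(fmt.Sprintf("pod-%d-%d", g, n), "123")
			}
		}(g)
	}
	wg.Wait()
	fmt.Println(idx.len())
}

Running the tests with the race detector reproduces the current behaviour: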

$ go test ./... --race
?   	github.com/argoproj/gitops-engine	[no test files]
?   	github.com/argoproj/gitops-engine/agent	[no test files]
time="2020-06-03T13:37:40+09:00" level=info msg="Start syncing cluster" server="https://test"
==================
WARNING: DATA RACE
Write at 0x00c000456ca0 by goroutine 31:
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).sync.func1.1()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:548 +0x3a8
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).processApi()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:479 +0x2cc
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).sync.func1()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:536 +0x430
  github.com/argoproj/gitops-engine/pkg/utils/kube.RunAllAsync.func1()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/utils/kube/ctl.go:510 +0x45
  golang.org/x/sync/errgroup.(*Group).Go.func1()
      /Users/d-kuro/go/pkg/mod/golang.org/x/[email protected]/errgroup/errgroup.go:57 +0x85

Previous write at 0x00c000456ca0 by goroutine 30:
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).sync.func1.1()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:548 +0x3a8
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).processApi()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:479 +0x2cc
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).sync.func1()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:536 +0x430
  github.com/argoproj/gitops-engine/pkg/utils/kube.RunAllAsync.func1()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/utils/kube/ctl.go:510 +0x45
  golang.org/x/sync/errgroup.(*Group).Go.func1()
      /Users/d-kuro/go/pkg/mod/golang.org/x/[email protected]/errgroup/errgroup.go:57 +0x85

Goroutine 31 (running) created at:
  golang.org/x/sync/errgroup.(*Group).Go()
      /Users/d-kuro/go/pkg/mod/golang.org/x/[email protected]/errgroup/errgroup.go:54 +0x73
  github.com/argoproj/gitops-engine/pkg/utils/kube.RunAllAsync()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/utils/kube/ctl.go:509 +0x1a1
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).sync()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:526 +0x98b
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).EnsureSynced()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:592 +0xe9
  github.com/argoproj/gitops-engine/pkg/cache.TestEnsureSynced()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster_test.go:140 +0xe2
  testing.tRunner()
      /Users/d-kuro/sdk/go1.14.4/src/testing/testing.go:991 +0x1eb

Goroutine 30 (finished) created at:
  golang.org/x/sync/errgroup.(*Group).Go()
      /Users/d-kuro/go/pkg/mod/golang.org/x/[email protected]/errgroup/errgroup.go:54 +0x73
  github.com/argoproj/gitops-engine/pkg/utils/kube.RunAllAsync()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/utils/kube/ctl.go:509 +0x1a1
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).sync()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:526 +0x98b
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).EnsureSynced()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:592 +0xe9
  github.com/argoproj/gitops-engine/pkg/cache.TestEnsureSynced()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster_test.go:140 +0xe2
  testing.tRunner()
      /Users/d-kuro/sdk/go1.14.4/src/testing/testing.go:991 +0x1eb
==================
==================
WARNING: DATA RACE
Write at 0x00c000456ca0 by goroutine 32:
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).sync.func1.1()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:548 +0x3a8
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).processApi()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:479 +0x2cc
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).sync.func1()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:536 +0x430
  github.com/argoproj/gitops-engine/pkg/utils/kube.RunAllAsync.func1()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/utils/kube/ctl.go:510 +0x45
  golang.org/x/sync/errgroup.(*Group).Go.func1()
      /Users/d-kuro/go/pkg/mod/golang.org/x/[email protected]/errgroup/errgroup.go:57 +0x85

Previous write at 0x00c000456ca0 by goroutine 30:
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).sync.func1.1()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:548 +0x3a8
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).processApi()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:479 +0x2cc
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).sync.func1()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:536 +0x430
  github.com/argoproj/gitops-engine/pkg/utils/kube.RunAllAsync.func1()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/utils/kube/ctl.go:510 +0x45
  golang.org/x/sync/errgroup.(*Group).Go.func1()
      /Users/d-kuro/go/pkg/mod/golang.org/x/[email protected]/errgroup/errgroup.go:57 +0x85

Goroutine 32 (running) created at:
  golang.org/x/sync/errgroup.(*Group).Go()
      /Users/d-kuro/go/pkg/mod/golang.org/x/[email protected]/errgroup/errgroup.go:54 +0x73
  github.com/argoproj/gitops-engine/pkg/utils/kube.RunAllAsync()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/utils/kube/ctl.go:509 +0x1a1
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).sync()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:526 +0x98b
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).EnsureSynced()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:592 +0xe9
  github.com/argoproj/gitops-engine/pkg/cache.TestEnsureSynced()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster_test.go:140 +0xe2
  testing.tRunner()
      /Users/d-kuro/sdk/go1.14.4/src/testing/testing.go:991 +0x1eb

Goroutine 30 (finished) created at:
  golang.org/x/sync/errgroup.(*Group).Go()
      /Users/d-kuro/go/pkg/mod/golang.org/x/[email protected]/errgroup/errgroup.go:54 +0x73
  github.com/argoproj/gitops-engine/pkg/utils/kube.RunAllAsync()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/utils/kube/ctl.go:509 +0x1a1
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).sync()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:526 +0x98b
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).EnsureSynced()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:592 +0xe9
  github.com/argoproj/gitops-engine/pkg/cache.TestEnsureSynced()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster_test.go:140 +0xe2
  testing.tRunner()
      /Users/d-kuro/sdk/go1.14.4/src/testing/testing.go:991 +0x1eb
==================
time="2020-06-03T13:37:40+09:00" level=info msg="Cluster successfully synced" server="https://test"
--- FAIL: TestEnsureSynced (0.00s)
    testing.go:906: race detected during execution of test
time="2020-06-03T13:37:40+09:00" level=info msg="Start syncing cluster" server="https://test"
==================
WARNING: DATA RACE
Write at 0x00c0003752c0 by goroutine 38:
  runtime.mapassign()
      /Users/d-kuro/sdk/go1.14.4/src/runtime/map.go:571 +0x0
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).setNode()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:300 +0x140
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).onNodeUpdated()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:751 +0x66
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).replaceResourceCache()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:209 +0x3a3
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).watchEvents.func1.2()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:414 +0x708
  github.com/argoproj/gitops-engine/pkg/cache.runSynced()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:382 +0x8c
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).watchEvents.func1()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:393 +0x219
  github.com/argoproj/gitops-engine/pkg/utils/kube.RetryUntilSucceed()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/utils/kube/kube.go:384 +0x2c6
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).watchEvents()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:386 +0x20f

Previous read at 0x00c0003752c0 by goroutine 29:
  reflect.maplen()
      /Users/d-kuro/sdk/go1.14.4/src/runtime/map.go:1351 +0x0
  reflect.Value.Len()
      /Users/d-kuro/sdk/go1.14.4/src/reflect/value.go:1132 +0x2f1
  github.com/stretchr/testify/assert.getLen()
      /Users/d-kuro/go/pkg/mod/github.com/stretchr/[email protected]/assert/assertions.go:575 +0xda
  github.com/stretchr/testify/assert.Len()
      /Users/d-kuro/go/pkg/mod/github.com/stretchr/[email protected]/assert/assertions.go:586 +0x9c
  github.com/argoproj/gitops-engine/pkg/cache.TestEnsureSynced()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster_test.go:143 +0x1a0
  testing.tRunner()
      /Users/d-kuro/sdk/go1.14.4/src/testing/testing.go:991 +0x1eb

Goroutine 38 (running) created at:
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).sync.func1.1()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:563 +0x69d
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).processApi()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:479 +0x2cc
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).sync.func1()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:536 +0x430
  github.com/argoproj/gitops-engine/pkg/utils/kube.RunAllAsync.func1()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/utils/kube/ctl.go:510 +0x45
  golang.org/x/sync/errgroup.(*Group).Go.func1()
      /Users/d-kuro/go/pkg/mod/golang.org/x/[email protected]/errgroup/errgroup.go:57 +0x85

Goroutine 29 (finished) created at:
  testing.(*T).Run()
      /Users/d-kuro/sdk/go1.14.4/src/testing/testing.go:1042 +0x660
  testing.runTests.func1()
      /Users/d-kuro/sdk/go1.14.4/src/testing/testing.go:1284 +0xa6
  testing.tRunner()
      /Users/d-kuro/sdk/go1.14.4/src/testing/testing.go:991 +0x1eb
  testing.runTests()
      /Users/d-kuro/sdk/go1.14.4/src/testing/testing.go:1282 +0x527
  testing.(*M).Run()
      /Users/d-kuro/sdk/go1.14.4/src/testing/testing.go:1199 +0x2ff
  main.main()
      _testmain.go:76 +0x223
==================
==================
WARNING: DATA RACE
Write at 0x00c000254bb0 by goroutine 47:
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).sync.func1.1()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:548 +0x3a8
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).processApi()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:487 +0x1dc
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).sync.func1()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:536 +0x430
  github.com/argoproj/gitops-engine/pkg/utils/kube.RunAllAsync.func1()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/utils/kube/ctl.go:510 +0x45
  golang.org/x/sync/errgroup.(*Group).Go.func1()
      /Users/d-kuro/go/pkg/mod/golang.org/x/[email protected]/errgroup/errgroup.go:57 +0x85

Previous write at 0x00c000254bb0 by goroutine 46:
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).sync.func1.1()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:548 +0x3a8
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).processApi()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:487 +0x1dc
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).sync.func1()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:536 +0x430
  github.com/argoproj/gitops-engine/pkg/utils/kube.RunAllAsync.func1()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/utils/kube/ctl.go:510 +0x45
  golang.org/x/sync/errgroup.(*Group).Go.func1()
      /Users/d-kuro/go/pkg/mod/golang.org/x/[email protected]/errgroup/errgroup.go:57 +0x85

Goroutine 47 (running) created at:
  golang.org/x/sync/errgroup.(*Group).Go()
      /Users/d-kuro/go/pkg/mod/golang.org/x/[email protected]/errgroup/errgroup.go:54 +0x73
  github.com/argoproj/gitops-engine/pkg/utils/kube.RunAllAsync()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/utils/kube/ctl.go:509 +0x1a1
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).sync()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:526 +0x98b
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).EnsureSynced()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:592 +0xe9
  github.com/argoproj/gitops-engine/pkg/cache.TestEnsureSyncedSingleNamespace()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster_test.go:165 +0x18b
  testing.tRunner()
      /Users/d-kuro/sdk/go1.14.4/src/testing/testing.go:991 +0x1eb

Goroutine 46 (running) created at:
  golang.org/x/sync/errgroup.(*Group).Go()
      /Users/d-kuro/go/pkg/mod/golang.org/x/[email protected]/errgroup/errgroup.go:54 +0x73
  github.com/argoproj/gitops-engine/pkg/utils/kube.RunAllAsync()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/utils/kube/ctl.go:509 +0x1a1
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).sync()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:526 +0x98b
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).EnsureSynced()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:592 +0xe9
  github.com/argoproj/gitops-engine/pkg/cache.TestEnsureSyncedSingleNamespace()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster_test.go:165 +0x18b
  testing.tRunner()
      /Users/d-kuro/sdk/go1.14.4/src/testing/testing.go:991 +0x1eb
==================
time="2020-06-03T13:37:40+09:00" level=info msg="Cluster successfully synced" server="https://test"
--- FAIL: TestEnsureSyncedSingleNamespace (0.00s)
    testing.go:906: race detected during execution of test
==================
WARNING: DATA RACE
Write at 0x00c000702ea0 by goroutine 55:
  runtime.mapassign()
      /Users/d-kuro/sdk/go1.14.4/src/runtime/map.go:571 +0x0
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).setNode()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:300 +0x140
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).onNodeUpdated()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:751 +0x66
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).replaceResourceCache()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:209 +0x3a3
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).watchEvents.func1.2()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:414 +0x708
  github.com/argoproj/gitops-engine/pkg/cache.runSynced()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:382 +0x8c
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).watchEvents.func1()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:393 +0x219
  github.com/argoproj/gitops-engine/pkg/utils/kube.RetryUntilSucceed()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/utils/kube/kube.go:384 +0x2c6
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).watchEvents()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:386 +0x20f

Previous read at 0x00c000702ea0 by goroutine 43:
  reflect.maplen()
      /Users/d-kuro/sdk/go1.14.4/src/runtime/map.go:1351 +0x0
  reflect.Value.Len()
      /Users/d-kuro/sdk/go1.14.4/src/reflect/value.go:1132 +0x2f1
  github.com/stretchr/testify/assert.getLen()
      /Users/d-kuro/go/pkg/mod/github.com/stretchr/[email protected]/assert/assertions.go:575 +0xda
  github.com/stretchr/testify/assert.Len()
      /Users/d-kuro/go/pkg/mod/github.com/stretchr/[email protected]/assert/assertions.go:586 +0x9c
  github.com/argoproj/gitops-engine/pkg/cache.TestEnsureSyncedSingleNamespace()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster_test.go:168 +0x247
  testing.tRunner()
      /Users/d-kuro/sdk/go1.14.4/src/testing/testing.go:991 +0x1eb

Goroutine 55 (running) created at:
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).sync.func1.1()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:563 +0x69d
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).processApi()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:487 +0x1dc
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).sync.func1()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:536 +0x430
  github.com/argoproj/gitops-engine/pkg/utils/kube.RunAllAsync.func1()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/utils/kube/ctl.go:510 +0x45
  golang.org/x/sync/errgroup.(*Group).Go.func1()
      /Users/d-kuro/go/pkg/mod/golang.org/x/[email protected]/errgroup/errgroup.go:57 +0x85

Goroutine 43 (finished) created at:
  testing.(*T).Run()
      /Users/d-kuro/sdk/go1.14.4/src/testing/testing.go:1042 +0x660
  testing.runTests.func1()
      /Users/d-kuro/sdk/go1.14.4/src/testing/testing.go:1284 +0xa6
  testing.tRunner()
      /Users/d-kuro/sdk/go1.14.4/src/testing/testing.go:991 +0x1eb
  testing.runTests()
      /Users/d-kuro/sdk/go1.14.4/src/testing/testing.go:1282 +0x527
  testing.(*M).Run()
      /Users/d-kuro/sdk/go1.14.4/src/testing/testing.go:1199 +0x2ff
  main.main()
      _testmain.go:76 +0x223
==================
time="2020-06-03T13:37:40+09:00" level=info msg="Start syncing cluster" server="https://test"
time="2020-06-03T13:37:40+09:00" level=info msg="Cluster successfully synced" server="https://test"
--- FAIL: TestGetNamespaceResources (0.00s)
    testing.go:906: race detected during execution of test
time="2020-06-03T13:37:40+09:00" level=info msg="Start syncing cluster" server="https://test"
time="2020-06-03T13:37:40+09:00" level=info msg="Cluster successfully synced" server="https://test"
time="2020-06-03T13:37:40+09:00" level=warning msg="invalidated cluster" server="https://test"
time="2020-06-03T13:37:40+09:00" level=info msg="Start syncing cluster" server="https://test"
time="2020-06-03T13:37:40+09:00" level=info msg="Cluster successfully synced" server="https://test"
time="2020-06-03T13:37:40+09:00" level=info msg="Start syncing cluster" server="https://test"
time="2020-06-03T13:37:40+09:00" level=info msg="Cluster successfully synced" server="https://test"
--- FAIL: TestChildDeletedEvent (0.00s)
    cluster_test.go:278: 
        	Error Trace:	cluster_test.go:278
        	Error:      	Not equal: 
        	            	expected: []*cache.Resource{}
        	            	actual  : []*cache.Resource{(*cache.Resource)(0xc000472580)}
        	            	
        	            	Diff:
        	            	--- Expected
        	            	+++ Actual
        	            	@@ -1,2 +1,11 @@
        	            	-([]*cache.Resource) {
        	            	+([]*cache.Resource) (len=1) {
        	            	+ (*cache.Resource)({
        	            	+  ResourceVersion: (string) (len=3) "123",
        	            	+  Ref: (v1.ObjectReference) &ObjectReference{Kind:Pod,Namespace:default,Name:helm-guestbook-pod,UID:1,APIVersion:v1,ResourceVersion:,FieldPath:,},
        	            	+  OwnerRefs: ([]v1.OwnerReference) (len=1) {
        	            	+   (v1.OwnerReference) &OwnerReference{Kind:ReplicaSet,Name:helm-guestbook-rs,UID:2,APIVersion:apps/v1,Controller:nil,BlockOwnerDeletion:nil,}
        	            	+  },
        	            	+  Info: (interface {}) <nil>,
        	            	+  Resource: (*unstructured.Unstructured)(<nil>)
        	            	+ })
        	            	 }
        	Test:       	TestChildDeletedEvent
time="2020-06-03T13:37:40+09:00" level=info msg="Start syncing cluster" server="https://test"
time="2020-06-03T13:37:40+09:00" level=info msg="Cluster successfully synced" server="https://test"
time="2020-06-03T13:37:40+09:00" level=info msg="Start syncing cluster" server="https://test"
time="2020-06-03T13:37:40+09:00" level=info msg="Cluster successfully synced" server="https://test"
time="2020-06-03T13:37:40+09:00" level=info msg="Start syncing cluster" server="https://test"
time="2020-06-03T13:37:40+09:00" level=info msg="Cluster successfully synced" server="https://test"
==================
WARNING: DATA RACE
Read at 0x00c0008c5260 by goroutine 66:
  runtime.mapiterinit()
      /Users/d-kuro/sdk/go1.14.4/src/runtime/map.go:797 +0x0
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).replaceResourceCache()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:212 +0x421
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).watchEvents.func1.2()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:414 +0x708
  github.com/argoproj/gitops-engine/pkg/cache.runSynced()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:382 +0x8c
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).watchEvents.func1()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:393 +0x219
  github.com/argoproj/gitops-engine/pkg/utils/kube.RetryUntilSucceed()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/utils/kube/kube.go:384 +0x2c6
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).watchEvents()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:386 +0x20f

Previous write at 0x00c0008c5260 by goroutine 50:
  runtime.mapdelete()
      /Users/d-kuro/sdk/go1.14.4/src/runtime/map.go:685 +0x0
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).onNodeRemoved()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:760 +0x17f
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).replaceResourceCache()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:218 +0x5d2
  github.com/argoproj/gitops-engine/pkg/cache.TestNamespaceModeReplace()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster_test.go:377 +0x292
  testing.tRunner()
      /Users/d-kuro/sdk/go1.14.4/src/testing/testing.go:991 +0x1eb

Goroutine 66 (running) created at:
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).sync.func1.1()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:563 +0x69d
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).processApi()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:479 +0x2cc
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).sync.func1()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:536 +0x430
  github.com/argoproj/gitops-engine/pkg/utils/kube.RunAllAsync.func1()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/utils/kube/ctl.go:510 +0x45
  golang.org/x/sync/errgroup.(*Group).Go.func1()
      /Users/d-kuro/go/pkg/mod/golang.org/x/[email protected]/errgroup/errgroup.go:57 +0x85

Goroutine 50 (finished) created at:
  testing.(*T).Run()
      /Users/d-kuro/sdk/go1.14.4/src/testing/testing.go:1042 +0x660
  testing.runTests.func1()
      /Users/d-kuro/sdk/go1.14.4/src/testing/testing.go:1284 +0xa6
  testing.tRunner()
      /Users/d-kuro/sdk/go1.14.4/src/testing/testing.go:991 +0x1eb
  testing.runTests()
      /Users/d-kuro/sdk/go1.14.4/src/testing/testing.go:1282 +0x527
  testing.(*M).Run()
      /Users/d-kuro/sdk/go1.14.4/src/testing/testing.go:1199 +0x2ff
  main.main()
      _testmain.go:76 +0x223
==================
time="2020-06-03T13:37:40+09:00" level=info msg="Start syncing cluster" server="https://test"
==================
WARNING: DATA RACE
Read at 0x00c0003d7b08 by goroutine 60:
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).watchEvents.func1.2()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:394 +0xe1
  github.com/argoproj/gitops-engine/pkg/cache.runSynced()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:382 +0x8c
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).watchEvents.func1()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:393 +0x219
  github.com/argoproj/gitops-engine/pkg/utils/kube.RetryUntilSucceed()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/utils/kube/kube.go:384 +0x2c6
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).watchEvents()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:386 +0x20f

Previous write at 0x00c0003d7b08 by goroutine 50:
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).replaceResourceCache()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:221 +0x6e6
  github.com/argoproj/gitops-engine/pkg/cache.TestNamespaceModeReplace()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster_test.go:377 +0x292
  testing.tRunner()
      /Users/d-kuro/sdk/go1.14.4/src/testing/testing.go:991 +0x1eb

Goroutine 60 (running) created at:
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).sync.func1.1()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:563 +0x69d
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).processApi()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:479 +0x2cc
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).sync.func1()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:536 +0x430
  github.com/argoproj/gitops-engine/pkg/utils/kube.RunAllAsync.func1()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/utils/kube/ctl.go:510 +0x45
  golang.org/x/sync/errgroup.(*Group).Go.func1()
      /Users/d-kuro/go/pkg/mod/golang.org/x/[email protected]/errgroup/errgroup.go:57 +0x85

Goroutine 50 (finished) created at:
  testing.(*T).Run()
      /Users/d-kuro/sdk/go1.14.4/src/testing/testing.go:1042 +0x660
  testing.runTests.func1()
      /Users/d-kuro/sdk/go1.14.4/src/testing/testing.go:1284 +0xa6
  testing.tRunner()
      /Users/d-kuro/sdk/go1.14.4/src/testing/testing.go:991 +0x1eb
  testing.runTests()
      /Users/d-kuro/sdk/go1.14.4/src/testing/testing.go:1282 +0x527
  testing.(*M).Run()
      /Users/d-kuro/sdk/go1.14.4/src/testing/testing.go:1199 +0x2ff
  main.main()
      _testmain.go:76 +0x223
==================
==================
WARNING: DATA RACE
Write at 0x00c000308488 by goroutine 60:
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).setNode()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:300 +0x155
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).onNodeUpdated()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:751 +0x66
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).replaceResourceCache()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:209 +0x3a3
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).watchEvents.func1.2()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:414 +0x708
  github.com/argoproj/gitops-engine/pkg/cache.runSynced()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:382 +0x8c
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).watchEvents.func1()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:393 +0x219
  github.com/argoproj/gitops-engine/pkg/utils/kube.RetryUntilSucceed()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/utils/kube/kube.go:384 +0x2c6
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).watchEvents()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:386 +0x20f

Previous read at 0x00c000308488 by goroutine 50:
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).onNodeRemoved()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:758 +0xe8
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).replaceResourceCache()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:218 +0x5d2
  github.com/argoproj/gitops-engine/pkg/cache.TestNamespaceModeReplace()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster_test.go:377 +0x292
  testing.tRunner()
      /Users/d-kuro/sdk/go1.14.4/src/testing/testing.go:991 +0x1eb

Goroutine 60 (running) created at:
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).sync.func1.1()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:563 +0x69d
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).processApi()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:479 +0x2cc
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).sync.func1()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:536 +0x430
  github.com/argoproj/gitops-engine/pkg/utils/kube.RunAllAsync.func1()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/utils/kube/ctl.go:510 +0x45
  golang.org/x/sync/errgroup.(*Group).Go.func1()
      /Users/d-kuro/go/pkg/mod/golang.org/x/[email protected]/errgroup/errgroup.go:57 +0x85

Goroutine 50 (finished) created at:
  testing.(*T).Run()
      /Users/d-kuro/sdk/go1.14.4/src/testing/testing.go:1042 +0x660
  testing.runTests.func1()
      /Users/d-kuro/sdk/go1.14.4/src/testing/testing.go:1284 +0xa6
  testing.tRunner()
      /Users/d-kuro/sdk/go1.14.4/src/testing/testing.go:991 +0x1eb
  testing.runTests()
      /Users/d-kuro/sdk/go1.14.4/src/testing/testing.go:1282 +0x527
  testing.(*M).Run()
      /Users/d-kuro/sdk/go1.14.4/src/testing/testing.go:1199 +0x2ff
  main.main()
      _testmain.go:76 +0x223
==================
==================
WARNING: DATA RACE
Read at 0x00c0008c50e0 by goroutine 60:
  runtime.mapaccess2_faststr()
      /Users/d-kuro/sdk/go1.14.4/src/runtime/map_faststr.go:107 +0x0
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).setNode()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:301 +0x1c7
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).onNodeUpdated()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:751 +0x66
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).replaceResourceCache()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:209 +0x3a3
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).watchEvents.func1.2()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:414 +0x708
  github.com/argoproj/gitops-engine/pkg/cache.runSynced()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:382 +0x8c
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).watchEvents.func1()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:393 +0x219
  github.com/argoproj/gitops-engine/pkg/utils/kube.RetryUntilSucceed()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/utils/kube/kube.go:384 +0x2c6
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).watchEvents()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:386 +0x20f

time="2020-06-03T13:37:40+09:00" level=info msg="Cluster successfully synced" server="https://test"
Previous write at 0x00c0008c50e0 by goroutine 50:
  runtime.mapdelete_faststr()
      /Users/d-kuro/sdk/go1.14.4/src/runtime/map_faststr.go:297 +0x0
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).onNodeRemoved()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:765 +0x374
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).replaceResourceCache()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:218 +0x5d2
  github.com/argoproj/gitops-engine/pkg/cache.TestNamespaceModeReplace()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster_test.go:377 +0x292
  testing.tRunner()
      /Users/d-kuro/sdk/go1.14.4/src/testing/testing.go:991 +0x1eb

Goroutine 60 (running) created at:
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).sync.func1.1()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:563 +0x69d
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).processApi()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:479 +0x2cc
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).sync.func1()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:536 +0x430
  github.com/argoproj/gitops-engine/pkg/utils/kube.RunAllAsync.func1()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/utils/kube/ctl.go:510 +0x45
  golang.org/x/sync/errgroup.(*Group).Go.func1()
      /Users/d-kuro/go/pkg/mod/golang.org/x/[email protected]/errgroup/errgroup.go:57 +0x85

Goroutine 50 (finished) created at:
  testing.(*T).Run()
      /Users/d-kuro/sdk/go1.14.4/src/testing/testing.go:1042 +0x660
  testing.runTests.func1()
      /Users/d-kuro/sdk/go1.14.4/src/testing/testing.go:1284 +0xa6
  testing.tRunner()
      /Users/d-kuro/sdk/go1.14.4/src/testing/testing.go:991 +0x1eb
  testing.runTests()
      /Users/d-kuro/sdk/go1.14.4/src/testing/testing.go:1282 +0x527
  testing.(*M).Run()
      /Users/d-kuro/sdk/go1.14.4/src/testing/testing.go:1199 +0x2ff
  main.main()
      _testmain.go:76 +0x223
==================
==================
WARNING: DATA RACE
Write at 0x00c0000dd678 by goroutine 60:
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).setNode()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:304 +0x321
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).onNodeUpdated()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:751 +0x66
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).replaceResourceCache()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:209 +0x3a3
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).watchEvents.func1.2()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:414 +0x708
  github.com/argoproj/gitops-engine/pkg/cache.runSynced()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:382 +0x8c
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).watchEvents.func1()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:393 +0x219
  github.com/argoproj/gitops-engine/pkg/utils/kube.RetryUntilSucceed()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/utils/kube/kube.go:384 +0x2c6
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).watchEvents()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:386 +0x20f

Previous read at 0x00c0000dd678 by goroutine 50:
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).onNodeRemoved()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:761 +0x1f3
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).replaceResourceCache()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:218 +0x5d2
  github.com/argoproj/gitops-engine/pkg/cache.TestNamespaceModeReplace()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster_test.go:377 +0x292
  testing.tRunner()
      /Users/d-kuro/sdk/go1.14.4/src/testing/testing.go:991 +0x1eb

Goroutine 60 (running) created at:
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).sync.func1.1()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:563 +0x69d
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).processApi()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:479 +0x2cc
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).sync.func1()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/cache/cluster.go:536 +0x430
  github.com/argoproj/gitops-engine/pkg/utils/kube.RunAllAsync.func1()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/utils/kube/ctl.go:510 +0x45
  golang.org/x/sync/errgroup.(*Group).Go.func1()
      /Users/d-kuro/go/pkg/mod/golang.org/x/[email protected]/errgroup/errgroup.go:57 +0x85

Goroutine 50 (finished) created at:
  testing.(*T).Run()
      /Users/d-kuro/sdk/go1.14.4/src/testing/testing.go:1042 +0x660
  testing.runTests.func1()
      /Users/d-kuro/sdk/go1.14.4/src/testing/testing.go:1284 +0xa6
  testing.tRunner()
      /Users/d-kuro/sdk/go1.14.4/src/testing/testing.go:991 +0x1eb
  testing.runTests()
      /Users/d-kuro/sdk/go1.14.4/src/testing/testing.go:1282 +0x527
  testing.(*M).Run()
      /Users/d-kuro/sdk/go1.14.4/src/testing/testing.go:1199 +0x2ff
  main.main()
      _testmain.go:76 +0x223
==================
--- FAIL: TestGetDuplicatedChildren (0.00s)
    testing.go:906: race detected during execution of test
time="2020-06-03T13:37:40+09:00" level=warning msg="invalidated cluster" server=
time="2020-06-03T13:37:40+09:00" level=info msg="Changing cluster config to: &rest.Config{Host:\"http://newhost\", APIPath:\"\", ContentConfig:rest.ContentConfig{AcceptContentTypes:\"\", ContentType:\"\", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:\"\", Password:\"\", BearerToken:\"\", BearerTokenFile:\"\", Impersonate:rest.ImpersonationConfig{UserName:\"\", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:\"\", CertFile:\"\", KeyFile:\"\", CAFile:\"\", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:\"\", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(nil), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil)}" server=
time="2020-06-03T13:37:40+09:00" level=warning msg="invalidated cluster" server=
time="2020-06-03T13:37:40+09:00" level=info msg="Changing cluster namespaces to: [default]" server=
time="2020-06-03T13:37:40+09:00" level=info msg="Changing cluster namespaces to: [updated]" server=
time="2020-06-03T13:37:40+09:00" level=warning msg="invalidated cluster" server=
FAIL
FAIL	github.com/argoproj/gitops-engine/pkg/cache	0.848s
?   	github.com/argoproj/gitops-engine/pkg/cache/mocks	[no test files]
ok  	github.com/argoproj/gitops-engine/pkg/diff	(cached)
?   	github.com/argoproj/gitops-engine/pkg/engine	[no test files]
ok  	github.com/argoproj/gitops-engine/pkg/health	(cached)
time="2020-06-03T13:37:40+09:00" level=info msg=syncing application=fake-app skipHooks=false started=false
time="2020-06-03T13:37:40+09:00" level=info msg=tasks application=fake-app tasks="[Sync/0 resource /Pod:fake-argocd-ns/my-pod obj->obj (,,)]"
time="2020-06-03T13:37:40+09:00" level=info msg="Updating operation state. phase:  -> Running, message: '' -> 'one or more tasks are running'" application=fake-app
time="2020-06-03T13:37:40+09:00" level=info msg="adding resource result, status: 'Synced', phase: 'Running', message: ''" application=fake-app kind=Pod name=my-pod namespace=fake-argocd-ns phase=Sync
time="2020-06-03T13:37:40+09:00" level=info msg="Updating operation state. phase: Running -> Succeeded, message: 'one or more tasks are running' -> 'successfully synced (all tasks run)'" application=fake-app
time="2020-06-03T13:37:40+09:00" level=info msg=syncing application=fake-app skipHooks=false started=false
time="2020-06-03T13:37:40+09:00" level=info msg="adding resource result, status: 'SyncFailed', phase: '', message: 'not permitted in project'" application=fake-app kind=Pod name=my-pod namespace=kube-system phase=Sync
time="2020-06-03T13:37:40+09:00" level=info msg="adding resource result, status: 'SyncFailed', phase: '', message: 'not permitted in project'" application=fake-app kind=Service name=my-service namespace=fake-argocd-ns phase=Sync
time="2020-06-03T13:37:40+09:00" level=info msg="Updating operation state. phase:  -> Failed, message: '' -> 'one or more synchronization tasks are not valid'" application=fake-app
time="2020-06-03T13:37:40+09:00" level=info msg=syncing application=fake-app skipHooks=false started=false
time="2020-06-03T13:37:40+09:00" level=info msg=tasks application=fake-app tasks="[Sync/0 resource /Service:fake-argocd-ns/my-service nil->obj (,,), Sync/0 resource /Pod:fake-argocd-ns/my-pod nil->obj (,,)]"
time="2020-06-03T13:37:40+09:00" level=info msg="Updating operation state. phase:  -> Running, message: '' -> 'one or more tasks are running'" application=fake-app
time="2020-06-03T13:37:40+09:00" level=info msg="adding resource result, status: 'Synced', phase: 'Running', message: ''" application=fake-app kind=Service name=my-service namespace=fake-argocd-ns phase=Sync
time="2020-06-03T13:37:40+09:00" level=info msg="adding resource result, status: 'Synced', phase: 'Running', message: ''" application=fake-app kind=Pod name=my-pod namespace=fake-argocd-ns phase=Sync
time="2020-06-03T13:37:40+09:00" level=info msg="Updating operation state. phase: Running -> Succeeded, message: 'one or more tasks are running' -> 'successfully synced (all tasks run)'" application=fake-app
time="2020-06-03T13:37:40+09:00" level=info msg="adding resource result, status: 'SyncFailed', phase: '', message: 'TestCrd.argoproj.io \"\" not found'" application=fake-app kind=TestCrd name=my-resource namespace=fake-argocd-ns phase=Sync
time="2020-06-03T13:37:40+09:00" level=info msg=syncing application=fake-app skipHooks=false started=false
time="2020-06-03T13:37:40+09:00" level=info msg=tasks application=fake-app tasks="[Sync/0 resource /Service:fake-argocd-ns/my-service nil->obj (,,), Sync/0 resource /Pod:fake-argocd-ns/my-pod obj->nil (,,)]"
time="2020-06-03T13:37:40+09:00" level=info msg="Updating operation state. phase:  -> Running, message: '' -> 'one or more tasks are running'" application=fake-app
time="2020-06-03T13:37:40+09:00" level=info msg="adding resource result, status: 'Pruned', phase: 'Succeeded', message: 'pruned'" application=fake-app kind=Pod name=my-pod namespace=fake-argocd-ns phase=Sync
time="2020-06-03T13:37:40+09:00" level=info msg="adding resource result, status: 'Synced', phase: 'Running', message: ''" application=fake-app kind=Service name=my-service namespace=fake-argocd-ns phase=Sync
time="2020-06-03T13:37:40+09:00" level=info msg="Updating operation state. phase: Running -> Succeeded, message: 'one or more tasks are running' -> 'successfully synced (all tasks run)'" application=fake-app
time="2020-06-03T13:37:40+09:00" level=info msg=syncing application=fake-app skipHooks=false started=false
time="2020-06-03T13:37:40+09:00" level=info msg=tasks application=fake-app tasks="[Sync/0 resource /Service:fake-argocd-ns/my-service obj->nil (,,), Sync/0 resource /Pod:fake-argocd-ns/my-pod obj->nil (,,)]"
time="2020-06-03T13:37:40+09:00" level=info msg="Updating operation state. phase:  -> Running, message: '' -> 'one or more tasks are running'" application=fake-app
time="2020-06-03T13:37:40+09:00" level=info msg="adding resource result, status: 'Pruned', phase: 'Succeeded', message: 'pruned'" application=fake-app kind=Service name=my-service namespace=fake-argocd-ns phase=Sync
time="2020-06-03T13:37:40+09:00" level=info msg="adding resource result, status: 'Pruned', phase: 'Succeeded', message: 'pruned'" application=fake-app kind=Pod name=my-pod namespace=fake-argocd-ns phase=Sync
time="2020-06-03T13:37:40+09:00" level=info msg="Updating operation state. phase: Running -> Succeeded, message: 'one or more tasks are running' -> 'successfully synced (all tasks run)'" application=fake-app
time="2020-06-03T13:37:40+09:00" level=info msg=syncing application=fake-app skipHooks=false started=false
time="2020-06-03T13:37:40+09:00" level=info msg=tasks application=fake-app tasks="[Sync/0 resource /Service:fake-argocd-ns/my-service nil->obj (,,)]"
time="2020-06-03T13:37:40+09:00" level=info msg="apply failed" application=fake-app dryRun=true message=foo task="Sync/0 resource /Service:fake-argocd-ns/my-service nil->obj (,,)"
time="2020-06-03T13:37:40+09:00" level=info msg="adding resource result, status: 'SyncFailed', phase: 'Failed', message: 'foo'" application=fake-app kind=Service name=my-service namespace=fake-argocd-ns phase=Sync
time="2020-06-03T13:37:40+09:00" level=info msg="Updating operation state. phase:  -> Failed, message: '' -> 'one or more objects failed to apply (dry run)'" application=fake-app
time="2020-06-03T13:37:40+09:00" level=info msg=syncing application=fake-app skipHooks=false started=false
time="2020-06-03T13:37:40+09:00" level=info msg=tasks application=fake-app tasks="[Sync/0 resource /Service:fake-argocd-ns/test-service obj->obj (,,)]"
time="2020-06-03T13:37:40+09:00" level=info msg="apply failed" application=fake-app dryRun=true message=foo task="Sync/0 resource /Service:fake-argocd-ns/test-service obj->obj (,,)"
time="2020-06-03T13:37:40+09:00" level=info msg="adding resource result, status: 'SyncFailed', phase: 'Failed', message: 'foo'" application=fake-app kind=Service name=test-service namespace=fake-argocd-ns phase=Sync
time="2020-06-03T13:37:40+09:00" level=info msg="Updating operation state. phase:  -> Failed, message: '' -> 'one or more objects failed to apply (dry run)'" application=fake-app
time="2020-06-03T13:37:40+09:00" level=info msg=syncing application=fake-app skipHooks=true started=false
time="2020-06-03T13:37:40+09:00" level=info msg=tasks application=fake-app tasks="[]"
time="2020-06-03T13:37:40+09:00" level=info msg="Updating operation state. phase:  -> Succeeded, message: '' -> 'successfully synced (no more tasks)'" application=fake-app
time="2020-06-03T13:37:40+09:00" level=info msg=syncing application=fake-app skipHooks=false started=false
time="2020-06-03T13:37:40+09:00" level=info msg=tasks application=fake-app tasks="[Sync/0 resource /Pod:fake-argocd-ns/my-pod obj->nil (,,)]"
time="2020-06-03T13:37:40+09:00" level=info msg="Updating operation state. phase:  -> Running, message: '' -> 'one or more tasks are running'" application=fake-app
time="2020-06-03T13:37:40+09:00" level=info msg="adding resource result, status: 'PruneSkipped', phase: 'Succeeded', message: 'ignored (no prune)'" application=fake-app kind=Pod name=my-pod namespace=fake-argocd-ns phase=Sync
time="2020-06-03T13:37:40+09:00" level=info msg="Updating operation state. phase: Running -> Succeeded, message: 'one or more tasks are running' -> 'successfully synced (all tasks run)'" application=fake-app
time="2020-06-03T13:37:40+09:00" level=info msg=syncing application=fake-app skipHooks=false started=true
time="2020-06-03T13:37:40+09:00" level=info msg=tasks application=fake-app tasks="[Sync/0 resource /Pod:fake-argocd-ns/my-pod obj->nil (PruneSkipped,Succeeded,ignored (no prune))]"
time="2020-06-03T13:37:40+09:00" level=info msg="Updating operation state. phase: Succeeded -> Succeeded, message: 'successfully synced (all tasks run)' -> 'successfully synced (no more tasks)'" application=fake-app
time="2020-06-03T13:37:40+09:00" level=info msg=syncing application=fake-app skipHooks=false started=false
time="2020-06-03T13:37:40+09:00" level=info msg=tasks application=fake-app tasks="[Sync/0 resource /Pod:fake-argocd-ns/my-pod obj->obj (,,)]"
time="2020-06-03T13:37:40+09:00" level=info msg="Updating operation state. phase:  -> Running, message: '' -> 'one or more tasks are running'" application=fake-app
time="2020-06-03T13:37:40+09:00" level=info msg="adding resource result, status: 'Synced', phase: 'Running', message: ''" application=fake-app kind=Pod name=my-pod namespace=fake-argocd-ns phase=Sync
time="2020-06-03T13:37:40+09:00" level=info msg="Updating operation state. phase: Running -> Succeeded, message: 'one or more tasks are running' -> 'successfully synced (all tasks run)'" application=fake-app
time="2020-06-03T13:37:40+09:00" level=info msg=syncing application=fake-app skipHooks=false started=false
time="2020-06-03T13:37:40+09:00" level=info msg=tasks application=fake-app tasks="[Sync/0 resource /Pod:fake-argocd-ns/my-pod obj->obj (,,)]"
time="2020-06-03T13:37:40+09:00" level=info msg="Updating operation state. phase:  -> Running, message: '' -> 'one or more tasks are running'" application=fake-app
time="2020-06-03T13:37:40+09:00" level=info msg="adding resource result, status: 'Synced', phase: 'Running', message: ''" application=fake-app kind=Pod name=my-pod namespace=fake-argocd-ns phase=Sync
time="2020-06-03T13:37:40+09:00" level=info msg="Updating operation state. phase: Running -> Succeeded, message: 'one or more tasks are running' -> 'successfully synced (all tasks run)'" application=fake-app
time="2020-06-03T13:37:40+09:00" level=info msg=syncing application=fake-app skipHooks=false started=false
time="2020-06-03T13:37:40+09:00" level=info msg=tasks application=fake-app tasks="[Sync/0 resource /Pod:fake-argocd-ns/my-pod obj->obj (,,)]"
time="2020-06-03T13:37:40+09:00" level=info msg="Updating operation state. phase:  -> Running, message: '' -> 'one or more tasks are running'" application=fake-app
time="2020-06-03T13:37:40+09:00" level=info msg="adding resource result, status: 'Synced', phase: 'Running', message: ''" application=fake-app kind=Pod name=my-pod namespace=fake-argocd-ns phase=Sync
time="2020-06-03T13:37:40+09:00" level=info msg="Updating operation state. phase: Running -> Succeeded, message: 'one or more tasks are running' -> 'successfully synced (all tasks run)'" application=fake-app
time="2020-06-03T13:37:40+09:00" level=info msg=syncing application=fake-app skipHooks=false started=false
time="2020-06-03T13:37:40+09:00" level=info msg=tasks application=fake-app tasks="[Sync/0 resource /Pod:fake-argocd-ns/my-pod nil->obj (,,), SyncFail/0 hook /Pod:fake-argocd-ns/my-pod nil->obj (,,)]"
==================
WARNING: DATA RACE
Write at 0x00c000414178 by goroutine 81:
  github.com/argoproj/gitops-engine/pkg/utils/kube/kubetest.(*MockKubectlCmd).ApplyResource()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/utils/kube/kubetest/mock.go:56 +0x56
  github.com/argoproj/gitops-engine/pkg/sync.(*syncContext).applyObject()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/sync/sync_context.go:572 +0x190
  github.com/argoproj/gitops-engine/pkg/sync.(*syncContext).runTasks.func5.1()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/sync/sync_context.go:783 +0x35d

Previous write at 0x00c000414178 by goroutine 80:
  github.com/argoproj/gitops-engine/pkg/utils/kube/kubetest.(*MockKubectlCmd).ApplyResource()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/utils/kube/kubetest/mock.go:56 +0x56
  github.com/argoproj/gitops-engine/pkg/sync.(*syncContext).applyObject()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/sync/sync_context.go:572 +0x190
  github.com/argoproj/gitops-engine/pkg/sync.(*syncContext).runTasks.func5.1()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/sync/sync_context.go:783 +0x35d

Goroutine 81 (running) created at:
  github.com/argoproj/gitops-engine/pkg/sync.(*syncContext).runTasks.func5()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/sync/sync_context.go:778 +0x165
  github.com/argoproj/gitops-engine/pkg/sync.(*syncContext).runTasks()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/sync/sync_context.go:807 +0xb32
  github.com/argoproj/gitops-engine/pkg/sync.(*syncContext).Sync()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/sync/sync_context.go:265 +0x1b3d
  github.com/argoproj/gitops-engine/pkg/sync.TestSyncFailureHookWithSuccessfulSync()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/sync/sync_context_test.go:513 +0x277
  testing.tRunner()
      /Users/d-kuro/sdk/go1.14.4/src/testing/testing.go:991 +0x1eb

Goroutine 80 (finished) created at:
  github.com/argoproj/gitops-engine/pkg/sync.(*syncContext).runTasks.func5()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/sync/sync_context.go:778 +0x165
  github.com/argoproj/gitops-engine/pkg/sync.(*syncContext).runTasks()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/sync/sync_context.go:807 +0xb32
  github.com/argoproj/gitops-engine/pkg/sync.(*syncContext).Sync()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/sync/sync_context.go:265 +0x1b3d
  github.com/argoproj/gitops-engine/pkg/sync.TestSyncFailureHookWithSuccessfulSync()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/sync/sync_context_test.go:513 +0x277
  testing.tRunner()
      /Users/d-kuro/sdk/go1.14.4/src/testing/testing.go:991 +0x1eb
==================
time="2020-06-03T13:37:40+09:00" level=info msg="Updating operation state. phase:  -> Running, message: '' -> 'one or more tasks are running'" application=fake-app
time="2020-06-03T13:37:40+09:00" level=info msg="adding resource result, status: 'Synced', phase: 'Running', message: ''" application=fake-app kind=Pod name=my-pod namespace=fake-argocd-ns phase=Sync
time="2020-06-03T13:37:40+09:00" level=info msg="Updating operation state. phase: Running -> Succeeded, message: 'one or more tasks are running' -> 'successfully synced (all tasks run)'" application=fake-app
--- FAIL: TestSyncFailureHookWithSuccessfulSync (0.00s)
    testing.go:906: race detected during execution of test
time="2020-06-03T13:37:40+09:00" level=info msg=syncing application=fake-app skipHooks=false started=false
time="2020-06-03T13:37:40+09:00" level=info msg=tasks application=fake-app tasks="[Sync/0 resource /Pod:fake-argocd-ns/my-pod nil->obj (,,), SyncFail/0 hook /Pod:fake-argocd-ns/my-pod nil->obj (,,)]"
time="2020-06-03T13:37:40+09:00" level=info msg="apply failed" application=fake-app dryRun=true message= task="Sync/0 resource /Pod:fake-argocd-ns/my-pod nil->obj (,,)"
time="2020-06-03T13:37:40+09:00" level=info msg="apply failed" application=fake-app dryRun=true message= task="SyncFail/0 hook /Pod:fake-argocd-ns/my-pod nil->obj (,,)"
==================
WARNING: DATA RACE
Write at 0x00c0003d4cd8 by goroutine 85:
  github.com/argoproj/gitops-engine/pkg/sync.(*syncContext).runTasks.func5.1()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/sync/sync_context.go:786 +0x68c

Previous write at 0x00c0003d4cd8 by goroutine 84:
  github.com/argoproj/gitops-engine/pkg/sync.(*syncContext).runTasks.func5.1()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/sync/sync_context.go:786 +0x68c

Goroutine 85 (running) created at:
  github.com/argoproj/gitops-engine/pkg/sync.(*syncContext).runTasks.func5()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/sync/sync_context.go:778 +0x165
  github.com/argoproj/gitops-engine/pkg/sync.(*syncContext).runTasks()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/sync/sync_context.go:807 +0xb32
  github.com/argoproj/gitops-engine/pkg/sync.(*syncContext).Sync()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/sync/sync_context.go:265 +0x1b3d
  github.com/argoproj/gitops-engine/pkg/sync.TestSyncFailureHookWithFailedSync()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/sync/sync_context_test.go:532 +0x4e5
time="2020-06-03T13:37:40+09:00" level=info msg="adding resource result, status: 'SyncFailed', phase: 'Failed', message: ''" application=fake-app kind=Pod name=my-pod namespace=fake-argocd-ns phase=Sync
  testing.tRunner()
      /Users/d-kuro/sdk/go1.14.4/src/testing/testing.go:991 +0x1eb

Goroutine 84 (running) created at:
  github.com/argoproj/gitops-engine/pkg/sync.(*syncContext).runTasks.func5()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/sync/sync_context.go:778 +0x165
  github.com/argoproj/gitops-engine/pkg/sync.(*syncContext).runTasks()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/sync/sync_context.go:807 +0xb32
  github.com/argoproj/gitops-engine/pkg/sync.(*syncContext).Sync()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/sync/sync_context.go:265 +0x1b3d
  github.com/argoproj/gitops-engine/pkg/sync.TestSyncFailureHookWithFailedSync()
      /Users/d-kuro/.ghq/github.com/argoproj/gitops-engine/pkg/sync/sync_context_test.go:532 +0x4e5
  testing.tRunner()
      /Users/d-kuro/sdk/go1.14.4/src/testing/testing.go:991 +0x1eb
==================
time="2020-06-03T13:37:40+09:00" level=info msg="adding resource result, status: 'SyncFailed', phase: 'Failed', message: ''" application=fake-app kind=Pod name=my-pod namespace=fake-argocd-ns phase=SyncFail
time="2020-06-03T13:37:40+09:00" level=info msg="Updating operation state. phase:  -> Failed, message: '' -> 'one or more objects failed to apply (dry run)'" application=fake-app
time="2020-06-03T13:37:40+09:00" level=info msg=syncing application=fake-app skipHooks=false started=true
time="2020-06-03T13:37:40+09:00" level=info msg=tasks application=fake-app tasks="[Sync/0 resource /Pod:fake-argocd-ns/my-pod nil->obj (SyncFailed,Failed,), SyncFail/0 hook /Pod:fake-argocd-ns/my-pod nil->obj (SyncFailed,Failed,)]"
time="2020-06-03T13:37:40+09:00" level=info msg="Updating operation state. phase: Failed -> Failed, message: 'one or more objects failed to apply (dry run)' -> 'one or more synchronization tasks completed unsuccessfully'" application=fake-app
--- FAIL: TestSyncFailureHookWithFailedSync (0.00s)
    testing.go:906: race detected during execution of test
time="2020-06-03T13:37:40+09:00" level=info msg=syncing application=fake-app skipHooks=false started=false
time="2020-06-03T13:37:40+09:00" level=info msg=tasks application=fake-app tasks="[Sync/0 hook /Pod:fake-argocd-ns/my-pod obj->obj (,,)]"
time="2020-06-03T13:37:40+09:00" level=info msg="Updating operation state. phase:  -> Running, message: '' -> 'one or more tasks are running'" application=fake-app
time="2020-06-03T13:37:40+09:00" level=info msg="adding resource result, status: 'Synced', phase: 'Running', message: ''" application=fake-app kind=Pod name=my-pod namespace=fake-argocd-ns phase=Sync
time="2020-06-03T13:37:40+09:00" level=info msg=syncing application=fake-app skipHooks=false started=false
time="2020-06-03T13:37:40+09:00" level=info msg=tasks application=fake-app tasks="[Sync/0 resource /Pod:fake-argocd-ns/my-pod nil->obj (,,), SyncFail/0 hook /Pod:fake-argocd-ns/failed-sync-fail-hook nil->obj (,,), SyncFail/0 hook /Pod:fake-argocd-ns/successful-sync-fail-hook nil->obj (,,)]"
time="2020-06-03T13:37:40+09:00" level=info msg="apply failed" application=fake-app dryRun=true message= task="Sync/0 resource /Pod:fake-argocd-ns/my-pod nil->obj (,,)"
time="2020-06-03T13:37:40+09:00" level=info msg="adding resource result, status: 'SyncFailed', phase: 'Failed', message: ''" application=fake-app kind=Pod name=my-pod namespace=fake-argocd-ns phase=Sync
time="2020-06-03T13:37:40+09:00" level=info msg="apply failed" application=fake-app dryRun=true message= task="SyncFail/0 hook /Pod:fake-argocd-ns/failed-sync-fail-hook nil->obj (,,)"
time="2020-06-03T13:37:40+09:00" level=info msg="adding resource result, status: 'SyncFailed', phase: 'Failed', message: ''" application=fake-app kind=Pod name=failed-sync-fail-hook namespace=fake-argocd-ns phase=SyncFail
time="2020-06-03T13:37:40+09:00" level=info msg="Updating operation state. phase:  -> Failed, message: '' -> 'one or more objects failed to apply (dry run)'" application=fake-app
time="2020-06-03T13:37:40+09:00" level=info msg=syncing application=fake-app skipHooks=false started=true
time="2020-06-03T13:37:40+09:00" level=info msg=tasks application=fake-app tasks="[Sync/0 resource /Pod:fake-argocd-ns/my-pod nil->obj (SyncFailed,Failed,), SyncFail/0 hook /Pod:fake-argocd-ns/failed-sync-fail-hook nil->obj (SyncFailed,Failed,), SyncFail/0 hook /Pod:fake-argocd-ns/successful-sync-fail-hook nil->obj (,,)]"
time="2020-06-03T13:37:40+09:00" level=info msg="apply failed" application=fake-app dryRun=false message= task="SyncFail/0 hook /Pod:fake-argocd-ns/failed-sync-fail-hook nil->obj (SyncFailed,Failed,)"
time="2020-06-03T13:37:40+09:00" level=info msg="adding resource result, status: 'Synced', phase: 'Running', message: ''" application=fake-app kind=Pod name=successful-sync-fail-hook namespace=fake-argocd-ns phase=SyncFail
time="2020-06-03T13:37:40+09:00" level=info msg="Updating operation state. phase: Failed -> Failed, message: 'one or more objects failed to apply (dry run)' -> 'one or more synchronization tasks completed unsuccessfully'" application=fake-app
FAIL
FAIL	github.com/argoproj/gitops-engine/pkg/sync	0.560s
ok  	github.com/argoproj/gitops-engine/pkg/sync/common	(cached)
ok  	github.com/argoproj/gitops-engine/pkg/sync/hook	(cached)
ok  	github.com/argoproj/gitops-engine/pkg/sync/hook/helm	(cached)
ok  	github.com/argoproj/gitops-engine/pkg/sync/ignore	(cached)
ok  	github.com/argoproj/gitops-engine/pkg/sync/resource	(cached)
ok  	github.com/argoproj/gitops-engine/pkg/sync/syncwaves	(cached)
?   	github.com/argoproj/gitops-engine/pkg/utils/errors	[no test files]
ok  	github.com/argoproj/gitops-engine/pkg/utils/exec	(cached)
?   	github.com/argoproj/gitops-engine/pkg/utils/io	[no test files]
?   	github.com/argoproj/gitops-engine/pkg/utils/json	[no test files]
ok  	github.com/argoproj/gitops-engine/pkg/utils/kube	(cached)
?   	github.com/argoproj/gitops-engine/pkg/utils/kube/kubetest	[no test files]
?   	github.com/argoproj/gitops-engine/pkg/utils/testing	[no test files]
?   	github.com/argoproj/gitops-engine/pkg/utils/text	[no test files]
ok  	github.com/argoproj/gitops-engine/pkg/utils/tracing	(cached)
FAIL

Health check for HPA doesn't catch all good states

The health check for HPA doesn't seem to catch all the states which we could consider as healthy.

For example, given the following status, the HPA should be considered healthy rather than progressing:

    - lastTransitionTime: '2020-09-02T21:38:22Z'
      message: recommended size matches current size
      reason: ReadyForNewScale
      status: 'True'
      type: AbleToScale

I think when checking the health conditions here, we should check that the condition's status is True instead of matching the reason field against known patterns.
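A minimal sketch of that idea, assuming the health function has access to the parsed HPA conditions (the helper name is illustrative and not part of the existing gitops-engine code):

package health

import (
	autoscalingv2 "k8s.io/api/autoscaling/v2"
	corev1 "k8s.io/api/core/v1"
)

// isAbleToScale reports whether the AbleToScale condition is True, regardless
// of the specific Reason value (ReadyForNewScale, ScaleDownStabilized, ...).
func isAbleToScale(conditions []autoscalingv2.HorizontalPodAutoscalerCondition) bool {
	for _, c := range conditions {
		if c.Type == autoscalingv2.AbleToScale && c.Status == corev1.ConditionTrue {
			return true
		}
	}
	return false
}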

Something wrong with clusterCache.EnsureSynced()

config := &rest.Config{}
clusterCache := cache.NewClusterCache(config,
	// cache default namespace only
	cache.SetNamespaces([]string{"default", "kube-system"}),
	// configure custom logic to cache resources manifest and additional metadata
	cache.SetPopulateResourceInfoHandler(func(un *unstructured.Unstructured, isRoot bool) (info interface{}, cacheManifest bool) {
		// if resource belongs to 'extensions' group then mark it with 'deprecated' label
		if un.GroupVersionKind().Group == "extensions" {
			info = []string{"deprecated"}
		}
		_, ok := un.GetLabels()["acme.io/my-label"]
		// cache whole manifest if resource has label
		cacheManifest = ok
		return
	}),
)
// Ensure cluster is synced before using it
if err := clusterCache.EnsureSynced(); err != nil {
	panic(err)
}
// Iterate default namespace resources tree
for _, root := range clusterCache.FindResources("default", cache.TopLevelResource) {
	clusterCache.IterateHierarchy(root.ResourceKey(), func(resource *cache.Resource, _ map[kube.ResourceKey]*cache.Resource) bool {
		fmt.Printf("name:%s, resource: %s, \n\n", resource.Ref.Name, resource.Ref.String())
		return true
	})
}

**When I use the example code, it reports an error:**

I0507 17:59:03.375877   49921 cluster.go:688]  "msg"="Start syncing cluster"  
panic: SchemaError(dev.oam.core.v1beta1.ApplicationRevision.spec.referredObjects): array should have exactly one sub-item 

Impossible to include gitops-engine from Go modules

When trying to use gitops-engine as a Go module dependency, it fails due to an invalid requirement specification in go.mod:

go: finding module for package github.com/argoproj/gitops-engine
go: downloading github.com/argoproj/gitops-engine v0.1.0
go: found github.com/argoproj/gitops-engine in github.com/argoproj/gitops-engine v0.1.0
go: github.com/argoproj/gitops-engine@v0.1.0 requires
        k8s.io/kube-aggregator@v0.0.0: reading k8s.io/kube-aggregator/go.mod at revision v0.0.0: unknown revision v0.0.0

go.mod should specify a valid version for the kube-aggregator dependency.
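A common workaround until go.mod is fixed is to add replace directives in the consuming project's go.mod that pin the k8s.io/* staging modules to a concrete release. The versions below are purely illustrative; they should match the Kubernetes libraries the gitops-engine release was built against:

replace (
	k8s.io/api => k8s.io/api v0.16.6
	k8s.io/apimachinery => k8s.io/apimachinery v0.16.6
	k8s.io/client-go => k8s.io/client-go v0.16.6
	k8s.io/kube-aggregator => k8s.io/kube-aggregator v0.16.6
)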

Synchronization failed due to nil pointer dereference error

Observed panic in argocd logs:

time="2020-05-18T17:28:21Z" level=error msg="Recovered from panic: runtime error: invalid memory address or nil pointer dereference
goroutine 87 [running]:
runtime/debug.Stack(0xc001515200, 0x1b98c60, 0x3461f90)
\t/opt/hostedtoolcache/go/1.14.1/x64/src/runtime/debug/stack.go:24 +0x9d
github.com/argoproj/argo-cd/controller.(*ApplicationController).processAppRefreshQueueItem.func1(0xc0003bea00, 0x1af3d60, 0xc006329f60)
\t/home/runner/work/argo-cd/argo-cd/src/github.com/argoproj/argo-cd/controller/appcontroller.go:809 +0x85
panic(0x1b98c60, 0x3461f90)
\t/opt/hostedtoolcache/go/1.14.1/x64/src/runtime/panic.go:973 +0x3e3
github.com/argoproj/argo-cd/vendor/github.com/argoproj/gitops-engine/pkg/utils/kube.filterAPIResources(0xc0005ea000, 0x0, 0x0, 0x1f024e0, 0xc001508000, 0x2421, 0x3e00, 0xc0056edde0, 0x10)
\t/home/runner/work/argo-cd/argo-cd/src/github.com/argoproj/argo-cd/vendor/github.com/argoproj/gitops-engine/pkg/utils/kube/ctl.go:83 +0x309
github.com/argoproj/argo-cd/vendor/github.com/argoproj/gitops-engine/pkg/utils/kube.(*KubectlCmd).GetAPIResources(0xc00011a7b8, 0xc0005ea000, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
\t/home/runner/work/argo-cd/argo-cd/src/github.com/argoproj/argo-cd/vendor/github.com/argoproj/gitops-engine/pkg/utils/kube/ctl.go:130 +0x1cd
github.com/argoproj/argo-cd/vendor/github.com/argoproj/gitops-engine/pkg/utils/kube/cache.(*clusterCache).sync(0xc000a5c000, 0xc002032a00, 0xc0019cf0b8)
\t/home/runner/work/argo-cd/argo-cd/src/github.com/argoproj/argo-cd/vendor/github.com/argoproj/gitops-engine/pkg/utils/kube/cache/cluster.go:494 +0x333
github.com/argoproj/argo-cd/vendor/github.com/argoproj/gitops-engine/pkg/utils/kube/cache.(*clusterCache).EnsureSynced(0xc000a5c000, 0x0, 0x0)
\t/home/runner/work/argo-cd/argo-cd/src/github.com/argoproj/argo-cd/vendor/github.com/argoproj/gitops-engine/pkg/utils/kube/cache/cluster.go:569 +0xaa
github.com/argoproj/argo-cd/controller/cache.(*liveStateCache).getSyncedCluster(0xc00053c280, 0xc0000154c0, 0x1e, 0xc03844f6d4, 0x3844f6d4019cefc0, 0x5ec2c5b4, 0xc001569980)
\t/home/runner/work/argo-cd/argo-cd/src/github.com/argoproj/argo-cd/controller/cache/cache.go:271 +0x70
github.com/argoproj/argo-cd/controller/cache.(*liveStateCache).GetVersionsInfo(0xc00053c280, 0xc0000154c0, 0x1e, 0xc0006505d0, 0x8, 0x0, 0x0, 0x0, 0x0, 0x0)
\t/home/runner/work/argo-cd/argo-cd/src/github.com/argoproj/argo-cd/controller/cache/cache.go:335 +0x43
github.com/argoproj/argo-cd/controller.(*appStateManager).getRepoObjs(0xc00028a000, 0xc003695180, 0xc0006a7c50, 0x30, 0xc0006505d0, 0x8, 0x0, 0x0, 0x0, 0x0, ...)
\t/home/runner/work/argo-cd/argo-cd/src/github.com/argoproj/argo-cd/controller/state.go:138 +0x859
github.com/argoproj/argo-cd/controller.(*appStateManager).CompareAppState(0xc00028a000, 0xc003695180, 0xc00067e588, 0x0, 0x0, 0xc0006a7c50, 0x30, 0xc0006505d0, 0x8, 0x0, ...)
\t/home/runner/work/argo-cd/argo-cd/src/github.com/argoproj/argo-cd/controller/state.go:274 +0x47b
github.com/argoproj/argo-cd/controller.(*ApplicationController).processAppRefreshQueueItem(0xc0003bea00, 0x1)
\t/home/runner/work/argo-cd/argo-cd/src/github.com/argoproj/argo-cd/controller/appcontroller.go:898 +0x66e
github.com/argoproj/argo-cd/controller.(*ApplicationController).Run.func3()
\t/home/runner/work/argo-cd/argo-cd/src/github.com/argoproj/argo-cd/controller/appcontroller.go:432 +0x36
github.com/argoproj/argo-cd/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc0003b3c20)
\t/home/runner/work/argo-cd/argo-cd/src/github.com/argoproj/argo-cd/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:152 +0x5f
github.com/argoproj/argo-cd/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0003b3c20, 0x3b9aca00, 0x0, 0x1, 0xc0000969c0)
\t/home/runner/work/argo-cd/argo-cd/src/github.com/argoproj/argo-cd/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:153 +0xf8
github.com/argoproj/argo-cd/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0xc0003b3c20, 0x3b9aca00, 0xc0000969c0)
\t/home/runner/work/argo-cd/argo-cd/src/github.com/argoproj/argo-cd/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88 +0x4d
created by github.com/argoproj/argo-cd/controller.(*ApplicationController).Run
\t/home/runner/work/argo-cd/argo-cd/src/github.com/argoproj/argo-cd/controller/appcontroller.go:431 +0x377

docs: update FAQ

https://github.com/argoproj/gitops-engine/blob/master/docs/faq.md still refers a lot to the collaboration between Argo and Flux. In #126 I mostly just explained the current situation and how we got there. I didn't change any of the other text as I didn't want to speak for the Argo project and am not really qualified to write a good gitops-engine FAQ.

The page might need an update to reflect the current reality or aspirations.

gitops-engine should not perform logging on its own

gitops-engine API should not perform logging on its own.

We should drop logrus dependency from gitops-engine and have consumers specify an event callback instead if they want to process (i.e. log) specific events emitted from the engine. This way, gitops-engine would not interfere with logging requirements from the consumer, i.e. log levels, log format and log output.

If no callbacks are registered, events should be discarded.
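A rough sketch of what such a hook could look like; the names below are hypothetical and not an existing gitops-engine API:

package engine

// SyncEvent describes something that happened inside the engine that a
// consumer may want to log or otherwise process.
type SyncEvent struct {
	Message string
	Fields  map[string]interface{}
}

// EventHandler is provided by the consumer; if none is registered, events are
// simply discarded.
type EventHandler func(e SyncEvent)

type options struct {
	onEvent EventHandler
}

// WithEventHandler registers the consumer's handler as an engine option.
func WithEventHandler(h EventHandler) func(*options) {
	return func(o *options) { o.onEvent = h }
}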

error="error creating gvk parser: duplicate entry for /v1, Kind=APIGroup"

After a recent upgrade to argo-cd v2.4.3, coming from v2.2.x, we're seeing the following error in the argocd-application-controller component:

time="2022-07-04T16:18:16Z" level=error msg="warning loading openapi schema" error="error creating gvk parser: duplicate entry for /v1, Kind=APIGroup" server="https://172.16.8.2"
time="2022-07-04T16:18:17Z" level=error msg="warning loading openapi schema" error="error creating gvk parser: duplicate entry for /v1, Kind=APIGroup" server="https://172.16.8.2"
time="2022-07-04T16:18:25Z" level=error msg="warning loading openapi schema" error="error creating gvk parser: duplicate entry for /v1, Kind=APIGroup" server="https://kubernetes.default.svc"
time="2022-07-04T16:18:25Z" level=error msg="warning loading openapi schema" error="error creating gvk parser: duplicate entry for /v1, Kind=APIGroup" server="https://kubernetes.default.svc"
time="2022-07-04T16:18:36Z" level=error msg="warning loading openapi schema" error="error creating gvk parser: duplicate entry for /v1, Kind=APIGroup" server="https://172.16.8.2"
time="2022-07-04T16:18:36Z" level=error msg="warning loading openapi schema" error="error creating gvk parser: duplicate entry for /v1, Kind=APIGroup" server="https://172.16.8.2"
time="2022-07-04T16:18:41Z" level=error msg="warning loading openapi schema" error="error creating gvk parser: duplicate entry for /v1, Kind=APIGroup" server="https://172.16.8.2"
time="2022-07-04T16:18:43Z" level=error msg="warning loading openapi schema" error="error creating gvk parser: duplicate entry for /v1, Kind=APIGroup" server="https://172.16.8.2"
time="2022-07-04T16:18:44Z" level=error msg="warning loading openapi schema" error="error creating gvk parser: duplicate entry for /v1, Kind=APIGroup" server="https://172.16.8.2"

The error is output constantly; as you can see, it is reported for both clusters we have under Argo CD management.

I suspect it is closely related to the last commit from release v0.7.0 @leoluz

Let me know if you need more information and I'll gladly help in troubleshooting this issue as I'm a bit lost.

Argo CD Server version:

{
    "Version": "v2.4.3+471685f",
    "BuildDate": "2022-06-27T21:02:55Z",
    "GitCommit": "471685feae063c1c2e36a5ff268c4da87c697b85",
    "GitTreeState": "clean",
    "GoVersion": "go1.18.3",
    "Compiler": "gc",
    "Platform": "linux/amd64",
    "KustomizeVersion": "v4.4.1 2021-11-11T23:36:27Z",
    "HelmVersion": "v3.8.1+g5cb9af4",
    "KubectlVersion": "v0.23.1",
    "JsonnetVersion": "v0.18.0"
}

Inconsistent health status values

My preference is to capitalize the first letter and omit the trailing "...":

return &HealthStatus{
	Status:  HealthStatusProgressing,
	Message: "Waiting for statefulset spec update to be observed...",

return &HealthStatus{
	Status:  HealthStatusHealthy,
	Message: fmt.Sprintf("partitioned roll out complete: %d new pods have been updated...", sts.Status.UpdatedReplicas),

if sts.Spec.UpdateStrategy.Type == appsv1.OnDeleteStatefulSetStrategyType {
	return &HealthStatus{
		Status:  HealthStatusHealthy,
		Message: fmt.Sprintf("statefulset has %d ready pods", sts.Status.ReadyReplicas),

return &HealthStatus{
	Status:  HealthStatusHealthy,
	Message: fmt.Sprintf("statefulset rolling update complete %d pods at revision %s...", sts.Status.CurrentReplicas, sts.Status.CurrentRevision),

return &HealthStatus{
	Status:  HealthStatusHealthy,
	Message: fmt.Sprintf("partitioned roll out complete: %d new pods have been updated...", sts.Status.UpdatedReplicas),

Upgrade to k8s 1.23

I've started upgrading the k8s dependency in a Go project that depends on Argo CD, and as such indirectly on gitops-engine too.

However, for k8s 1.23, some components in the kubectl package have been refactored, and a build against the v0.23.1 k8s.io/* packages fails. The problematic code can be found here: https://github.com/argoproj/gitops-engine/blob/master/pkg/utils/kube/resource_ops.go#L244

The apply.NewApplyOptions function has been removed / split up into apply.NewApplyFlags and applyFlags.ToOptions. Using these is a bit problematic because they expect a cmdutil.Factory and a *cobra.Command.

I'd be happy to submit a patch, but I'm not sure what the best way to solve this would be. Maybe we can try just constructing an empty ApplyOptions and building it up from there - it looks like we already set a lot of fields; maybe we set all the necessary fields already.

What do you think? Should I just submit a PR so this can be discussed within the PR already?
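Not sure it is the right approach, but a rough illustration of the "construct ApplyOptions directly" idea could look like the sketch below. The field names are taken from k8s.io/kubectl v0.23.x and should be double-checked; treat this as a sketch, not a drop-in patch:

package kube

import (
	"k8s.io/cli-runtime/pkg/genericclioptions"
	"k8s.io/kubectl/pkg/cmd/apply"
)

// newApplyOptions builds apply.ApplyOptions without going through
// apply.NewApplyFlags/ToOptions, which expect a cmdutil.Factory and a
// *cobra.Command that are not available in a library context.
func newApplyOptions(ioStreams genericclioptions.IOStreams) *apply.ApplyOptions {
	o := &apply.ApplyOptions{
		IOStreams:    ioStreams,
		Overwrite:    true,
		OpenAPIPatch: true,
	}
	// The remaining fields (Recorder, DeleteOptions, Builder, Mapper,
	// DynamicClient, ...) would be filled in from the existing wiring in
	// resource_ops.go, just as before the refactoring.
	return o
}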

feat: Add support for real tracing

Currently, a Tracer interface exists to support adding fields to the logger: https://github.com/argoproj/gitops-engine/blob/master/pkg/utils/tracing/api.go#L10.
We want to add real opentracing/opentelemetry support based on this interface. However, to support real tracing, an additional ctx argument needs to be passed to the StartSpan method so that the span context can be propagated.

Additionally, it would be great to refactor this interface; it is hard to add opentracing span options using the current one.
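One possible shape for a context-aware interface (a hypothetical proposal, not the current API):

package tracing

import "context"

// Tracer starts spans. Accepting a context and returning a derived one allows
// span context propagation for real opentracing/opentelemetry backends.
type Tracer interface {
	StartSpan(ctx context.Context, operationName string, opts ...SpanOption) (Span, context.Context)
}

// Span is a minimal span abstraction.
type Span interface {
	SetBaggageItem(key string, value interface{})
	Finish()
}

// SpanOption carries backend-specific start options (tags, timestamps, ...).
type SpanOption func(o *spanOptions)

type spanOptions struct {
	tags map[string]interface{}
}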

Getting Started / Example / Demo

Provide an example of how to use the next-generation Flux/Argo (i.e. GitOps Engine). Posts point you here to "take it for a spin", but there are no instructions on how to do so.

gitops-engine should consider supported verbs when resolving GVK to resources

Some API extensions provide two or more resources for the same GVK, but for different verbs.

One example of this is the Template API in OpenShift, which is implemented as an API extension instead of a CRD. For example:

$ oc api-resources -o wide | grep ' Template '
processedtemplates                                        template.openshift.io/v1                      true         Template                             [create]
templates                                                 template.openshift.io/v1                      true         Template                             [create delete deletecollection get list patch update watch]

So the GVK G=template.openshift.io, K=Template, V=v1 resolves to two APIs, processedtemplates and templates. However, only the templates API supports a full list of verbs such as get, delete, etc.

When performing the conversion from GVK to API resource, gitops-engine just picks the first match, which in above case would result in picking processedtemplates. However, gitops-engine should ensure that it also checks whether the resource supports the requested verb for the operation.

One could argue that the above example is bad design and that one GVK should only resolve to a single API, and I would agree with that. I think this quirk is required for some backwards compatibility in OpenShift. However, it is currently not possible to manage Template resources in OpenShift due to this behavior. A Template resource can be created (because OpenShift under the hood converts processedtemplate to template), but processedtemplate only supports the create verb.
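A minimal sketch of filtering candidate API resources by supported verb before picking one (illustrative, not the actual gitops-engine code):

package kube

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// filterByVerb keeps only the API resources that support the given verb, e.g.
// "get" or "delete", so that processedtemplates (create only) would be skipped
// when resolving Template for anything other than a create operation.
func filterByVerb(resources []metav1.APIResource, verb string) []metav1.APIResource {
	var out []metav1.APIResource
	for _, r := range resources {
		for _, v := range r.Verbs {
			if v == verb {
				out = append(out, r)
				break
			}
		}
	}
	return out
}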

How To: Add CRD Health Check?

This issue is to ask about how one might manage the health check of a CRD. Maybe Argo Rollouts would provide a good example?
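For reference, the built-in checks live under pkg/health and are dispatched per group/kind. A check for a custom resource generally follows the same shape: read the status fields you care about from the unstructured object and map them to a HealthStatus. The CRD and its status.phase values below are hypothetical:

package customhealth

import (
	"github.com/argoproj/gitops-engine/pkg/health"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

// getMyCRDHealth maps a hypothetical `.status.phase` field to a health status.
func getMyCRDHealth(obj *unstructured.Unstructured) (*health.HealthStatus, error) {
	phase, found, err := unstructured.NestedString(obj.Object, "status", "phase")
	if err != nil {
		return nil, err
	}
	if !found {
		return &health.HealthStatus{Status: health.HealthStatusProgressing, Message: "status.phase not reported yet"}, nil
	}
	switch phase {
	case "Ready":
		return &health.HealthStatus{Status: health.HealthStatusHealthy}, nil
	case "Failed":
		return &health.HealthStatus{Status: health.HealthStatusDegraded, Message: "resource is in Failed phase"}, nil
	default:
		return &health.HealthStatus{Status: health.HealthStatusProgressing}, nil
	}
}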

Figure out mailing list

Hot on the heels of #5, it might be good to set up a public mailing list for the project. A Google group might be the quickest way to go about it, it'd allow easy meeting invites and sharing docs with the entire group.

If we go with a mailing list (which is useful for longer feedback or longer, more structured discussions and announcements), we obviously need to document it too.

Sync may get deadlocked

When using both git-sync and gitops-engine in one Docker container, I always get stuck after a certain number of syncs. I explored the code and found that the lock is held by clusterCache.processEvent, so clusterCache.GetManagedLiveObjs in engine.Sync would be locked forever.

Since there is a timer that triggers fetching the sync state, the resUpdated channel seems unnecessary; after removing the OnResourceUpdated handler in the Sync function, everything works well.

Unfortunately, I didn't find the root cause.

Add annotation `argocd.argoproj.io/sync-options: Force=true`

Summary

This issue is about the need for a new annotation, argocd.argoproj.io/sync-options: Force=true, for use cases such as Job resources that should be re-created on every sync.

Goals & Proposal

Could we implement this new annotation feature in this repo?

Based on kubectl replace --help, --force is needed to delete and then re-create the resources.

~  kubectl replace --help
  
  # Force replace, delete and then re-create the resource
  kubectl replace --force -f ./pod.json
  

Currently, there is nothing like argocd.argoproj.io/sync-options: Force=true.
It seems argocd.argoproj.io/sync-options: Replace=true is implemented in this library:

func WithReplace(replace bool) SyncOpt {
	return func(ctx *syncContext) {
		ctx.replace = replace
	}
}
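A hypothetical WithForce option could mirror WithReplace in the same package (sketch only; the force field and the delete-then-re-create wiring in the apply path would still have to be added):

// WithForce would make the engine use `kubectl replace --force` semantics,
// i.e. delete and then re-create the resource on sync.
func WithForce(force bool) SyncOpt {
	return func(ctx *syncContext) {
		ctx.force = force
	}
}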

Related Issues

There are some related issues in the argo-cd repo. Please take a look:
argoproj/argo-cd#5882
argoproj/argo-cd#7459
argoproj/argo-cd#9163

Thank you for reading.

Is this project still on?

Just curious to know if you are still looking to join forces with Argo CD. This repo has not been updated for quite a few months.

Figure out public meeting

We want to have regular public meetings to be transparent, inclusive of new contributors and able to figure out direction for the project. My suggestion would be:

  • meeting time in the EU afternoon/evenings (until we have more APAC contributors)
  • cadence: every 1 or 2 weeks(?) - maybe 1 in the beginning and 2 once the project is more stable?
  • video call, so we're able to record, screenshare and see faces
  • Google doc for agenda/notes

Action items:

  • figure out meeting times and cadence
  • set up agenda doc
  • document meeting times
  • set up meeting invite

Contribute Argo CD diffing customizations to GitOps Engine

Summary

Argo CD supports diffing customization https://argoproj.github.io/argo-cd/user-guide/diffing/ that allows users to better handle well-known limitations and edge cases.

Proposal

Migrate the Argo CD code that implements diffing customization into GitOps Engine so that the functionality becomes available to all consumers. The following features should be available:

SyncContext deletes in progress resources on termination

https://github.com/argoproj/gitops-engine/blob/ff6e9f853241994e4e9ac9346fe4d22bab6d55e6/pkg/sync/sync_context.go#L649-L648

I'm not sure if this is intended but it looks like it may be causing deletions of resources that are being updated. E.g. if a Deployment is being updated to roll out a new version of an image and the sync context is terminated, it will be deleted.

Behavior on termination should probably be configurable (a rough sketch of such a knob follows below). Off the top of my head, options are:

  • Don't do anything, just stop doing whatever it's doing.
  • Roll things back. Delete created objects, undo changes to updated objects.

I might have misunderstood the code, I'm sorry if that's the case.
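A rough sketch of what a configurable knob could look like (the names are hypothetical):

package sync

// TerminationBehavior controls what the sync context does with in-flight
// resources when the operation is terminated.
type TerminationBehavior int

const (
	// TerminateLeaveInPlace stops scheduling new tasks and leaves resources as they are.
	TerminateLeaveInPlace TerminationBehavior = iota
	// TerminateDeleteCreated deletes objects the terminated operation created
	// and rolls back updates it made.
	TerminateDeleteCreated
)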

Is argocd compatible with openkruise?

Argo CD is a very popular GitOps project right now, and more and more people are using it for deployments. OpenKruise is also a widely used CNCF project; can Argo CD be used to deploy OpenKruise CloneSets and other workloads?

cncf project: https://www.cncf.io/projects/openkruise/
openkruise web: https://openkruise.io/

The change would be as follows:
// GetHealthCheckFunc returns built-in health check function or nil if health check is not supported
func GetHealthCheckFunc(gvk schema.GroupVersionKind) func(obj *unstructured.Unstructured) (*HealthStatus, error) {
	switch gvk.Group {
	case "apps.kruise.io":
		.....

If it works, I can submit that part of the code.

How to: enable SkipHooks

How do I enable the skipHooks feature? The documentation does not show how to set it up correctly.

I tried to add:

sync.WithSkipHooks(true)

The console is always logging:

"msg"="Syncing" "skipHooks"=false "

Thanks for any help.

Suffix generateName is always 62135596800

Description

62135596800 is added as a suffix when using generateName.

Example YAML:

apiVersion: batch/v1
kind: Job
metadata:
  generateName: some-operation-
  annotations:
    argocd.argoproj.io/hook: PreSync

e.g. some-operation-{git_hash}-presync-62135596800

Question

Is this intentional behavior?
If you get the same value every time, I don't think this suffix is needed.

Also, I would like to keep a separate Job each time by using generateName, but as it is now, if it is the same commit, the Job will be deleted.

Cause

Presumably the suffix comes from formatting an uninitialized time.Time: 62135596800 is the number of seconds between year 1 and the Unix epoch, i.e. the value produced when a zero time is converted to Unix seconds.

Should remarshal LastAppliedConfigAnnotation before ThreeWayDiff

Description

The config and live objects are both handled with remarshal(); shouldn't the object from LastAppliedConfigAnnotation be handled with remarshal() too? This would avoid null values in LastAppliedConfigAnnotation making the diff result different.

Related Code

func Diff(config, live *unstructured.Unstructured, opts ...Option) (*DiffResult, error) {
	o := applyOptions(opts)
	if config != nil {
		config = remarshal(config, o)
		Normalize(config, opts...)
	}
	if live != nil {
		live = remarshal(live, o)
		Normalize(live, opts...)
	}
	orig, err := GetLastAppliedConfigAnnotation(live)
	if err != nil {
		o.log.V(1).Info(fmt.Sprintf("Failed to get last applied configuration: %v", err))
	} else {
		if orig != nil && config != nil {
			orig = remarshal(orig, o) // add code here
			Normalize(orig, opts...)
			dr, err := ThreeWayDiff(orig, config, live)
			if err == nil {
				return dr, nil
			}
			o.log.V(1).Info(fmt.Sprintf("three-way diff calculation failed: %v. Falling back to two-way diff", err))
		}
	}
	return TwoWayDiff(config, live)
}

Example

live object:

apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"argocd.argoproj.io/instance":"test"},"name":"test","namespace":"default"},"spec":{"ports":[{"name":"http","nodePort":null,"port":8080,"targetPort":8080}],"selector":{"app":"test"},"type":"NodePort"}}
  creationTimestamp: "2022-01-17T10:34:45Z"
  labels:
    argocd.argoproj.io/instance: test
  name: test
  namespace: default
  resourceVersion: "199634708"
  selfLink: /api/v1/namespaces/default/services/test
  uid: 384f0d3f-5146-4654-a11b-e7d7065cab88
spec:
  clusterIP: 10.45.102.60
  externalTrafficPolicy: Cluster
  ports:
  - name: http
    nodePort: 30395
    port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: test
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}

config object:

apiVersion: v1
kind: Service
metadata:
  labels:
    argocd.argoproj.io/instance: test
  name: test
  namespace: default
spec:
  ports:
  - name: http
    nodePort: null
    port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: test
  type: NodePort

When someone sets the nodePort of a Service to null, k8s will allocate a port to the Service, so the live object will have nodePort: 30395 and the config object will have nodePort: null. remarshal() clears nodePort: null, so there will be no diff between the live object and the config object, but the object from LastAppliedConfigAnnotation, without remarshal(), breaks the result and marks the application OutOfSync. Maybe we can prevent this meaningless sync.

Document GitOps engine public API

Summary

The first version of k8s caching with reconciliation/syncing functionality is ready. The next step is to document it:

  • Ensure publicly exposed methods/data structures have comments
  • Implement sample "poor man's" gitops operator that leverages GitOps engine
  • Prepare documentation website using godoc.
  • Prepare documentation website using mkdocs

Allow delete from git repo

Currently, I am using Flux CD for syncing our cluster from a GitOps repo, which is a centralized repo for all of our services' and infra tools' manifests. There is something I wish Flux CD offered: deleting a resource from the GitOps repo should be reflected in the sync process. I don't have experience with Argo CD, and I am not sure if it offers this. If it does, that would be fantastic; if not, this space can be used for discussing it. Right now, when we delete anything from the GitOps repo we have to delete it manually from the cluster. It's a nuisance, since it can often be forgotten.

I am thinking of making it a feature toggled by an annotation, to have more control over it. I have an idea of how to implement it.

Improve release process

Summary

We don't have a well-defined GitOps Engine release process. Library consumers use a commit SHA instead of a release tag.

Referencing a commit in the main branch worked well since we rarely make big changes in GitOps Engine and the only consumer was Argo CD. However, this does not allow cherry-picking hot-fixes into a previous release, and Argo CD is no longer the only consumer: https://gitlab.com/gitlab-org/cluster-integration/gitlab-agent

Proposal

Start creating release branches and tags:

  • Instead of assuming that the main branch is 100% stable, create a release branch for every release. Release branches allow us to cherry-pick bug fixes into recent stable releases.
  • Release branch naming convention: release-0.1, release-0.2, etc.
  • Release = a release tag (v0.1.1, v0.1.2, v0.2.0, etc.) pointing to a commit in a release branch.

Additional notes:

  • Argo CD is the main source of new feature requests, and we still heavily rely on Argo CD to test changes. To avoid overhead, the Argo CD dev/main branch is going to keep referencing a commit SHA, so the release process won't change anything for Argo CD developers.

  • Release frequency: let's not create a separate release for every commit. With two consumers, we know when a release is required, so it can be done on demand.

  • The Argo CD release branch should reference GitOps Engine using a release tag.

Required permissions are too broad and not configurable

Currently the GitOps engine gets the list of API resources and starts watches for all of them that support the list+watch verbs. This means a program using the GitOps engine needs permissions for all of the above, which is not desirable most of the time, because the user typically needs to manage only a subset of resource types, not everything.

For example:

time="2020-07-20T05:32:12Z" level=error msg="Failed to sync cluster https://10.96.0.1:443: failed to load initial state of resource EndpointSlice.discovery.k8s.io: endpointslices.discovery.k8s.io is forbidden: User \"system:serviceaccount:agentk:agentk\" cannot list resource \"endpointslices\" in API group \"discovery.k8s.io\" at the cluster scope"

time="2020-07-20T05:32:35Z" level=warning msg="engine.Run() failed" error="failed to load initial state of resource LimitRange: limitranges is forbidden: User \"system:serviceaccount:agentk:agentk\" cannot list resource \"limitranges\" in API group \"\" at the cluster scope" project_id=root/gitops-manifests

time="2020-07-20T05:31:52Z" level=warning msg="engine.Run() failed" error="failed to load initial state of resource ResourceQuota: resourcequotas is forbidden: User \"system:serviceaccount:agentk:agentk\" cannot list resource \"resourcequotas\" in API group \"\" at the cluster scope" project_id=root/gitops-manifests

time="2020-07-20T05:31:12Z" level=warning msg="engine.Run() failed" error="failed to load initial state of resource Event.events.k8s.io: events.events.k8s.io is forbidden: User \"system:serviceaccount:agentk:agentk\" cannot list resource \"events\" in API group \"events.k8s.io\" at the cluster scope" project_id=root/gitops-manifests

I think a better way would be to only start watching resources of the types passed to the Sync() method of the engine, i.e. dynamically add and remove watches depending on what has been passed.
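A minimal sketch of the dynamic-watch idea: reconcile the set of watched group/kinds against whatever the latest Sync() call referenced (illustrative only; the real cache currently watches everything it can list):

package cache

import (
	"context"

	"k8s.io/apimachinery/pkg/runtime/schema"
)

// ensureWatches stops watches for group/kinds that are no longer desired and
// starts watches for newly desired ones.
func ensureWatches(
	current map[schema.GroupKind]context.CancelFunc,
	desired map[schema.GroupKind]bool,
	start func(schema.GroupKind) context.CancelFunc,
) {
	for gk, cancel := range current {
		if !desired[gk] {
			cancel()
			delete(current, gk)
		}
	}
	for gk := range desired {
		if _, ok := current[gk]; !ok {
			current[gk] = start(gk)
		}
	}
}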

Drop k8s.io/kubernetes dependency

Please consider dropping the dependency on the k8s.io/kubernetes module, as it makes the engine really unwieldy to use.

Imported:

Describe infrastructure in two git repos instead of one: one public, one private

Feature request:

Aim:

Split the git repo into two git repos: one public, one private.

Context:

We need to deploy nearly the same infrastructure (K8s namespaces/deployments/serviceAccounts... or Helm charts), described in a git repo, onto different K8s clusters.
Secrets and certificates must not be stored inside this first git repo, because this infrastructure is deployed on different K8s clusters that don't share the same secrets/certificates.

The secret git repo should also enable/disable some parts described in the first git repo.

Use case:

In the public repo we describe several products/objects that need to be installed on K8s, such as OpenLDAP, GitLab, Prometheus, Grafana, etc.

This repo will be used to create several instances of this infrastructure.
On the Paris K8s cluster, they need to install everything.
On the Tokyo K8s cluster, as they already have an OpenLDAP, they need to install only GitLab/Prometheus/Grafana etc., but no OpenLDAP.

Paris and Tokyo describe nearly the same K8s content, except for the OpenLDAP installation and the secrets and certificates.
