preliminary-findings-jan-2021's Issues

cloudfoundry/diego-release: src/github.com/docker/docker/vendor/github.com/docker/libnetwork/drivers/overlay/peerdb.go; 9 LoC

Found a possible issue in cloudfoundry/diego-release at src/github.com/docker/docker/vendor/github.com/docker/libnetwork/drivers/overlay/peerdb.go

Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message.

The 9 lines of Go which triggered the analyzer:
for pKeyStr, pEntry := range mp {
	var pKey peerKey
	if _, err := fmt.Sscan(pKeyStr, &pKey); err != nil {
		logrus.Warnf("Peer key scan on network %s failed: %v", nid, err)
	}
	if f(&pKey, &pEntry) {
		return nil
	}
}
Extra information the analyzer produced:
No path was found through the callgraph that could lead to a function which writes a pointer argument.

No path was found through the callgraph that could lead to a function which passes a pointer to third-party code.

root signature {f 2} was not found in the callgraph; reference was passed directly to third-party code
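
If f retains the &pEntry it receives, every retained pointer would alias the same loop variable under pre-Go 1.22 range semantics. A minimal, self-contained sketch of the usual mitigation, shadowing the range variable before taking its address (names below are illustrative, not diego-release's own):

package main

import "fmt"

func main() {
	mp := map[string]int{"a": 1, "b": 2}
	var kept []*int
	for _, pEntry := range mp {
		pEntry := pEntry // shadow: each iteration gets its own variable to point at
		kept = append(kept, &pEntry)
	}
	for _, p := range kept {
		fmt.Println(*p) // prints 1 and 2; without the shadow (pre-Go 1.22), both pointers alias one variable
	}
}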

Leave a reaction on this issue to contribute to the project by classifying this instance as a Bug 👎, Mitigated 👍, or Desirable Behavior 🚀
See the descriptions of the classifications here for more information.

commit ID: 24e7c50250eead4794ae33dc1e170ed52e2f9cc0

iceming123/tmp: etrue/filters/api.go; 3 LoC

Found a possible issue in iceming123/tmp at etrue/filters/api.go

Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message.

function call which takes a reference to log at line 259 may start a goroutine

The 3 lines of Go which triggered the analyzer:
for _, log := range logs {
	notifier.Notify(rpcSub.ID, &log)
}
Extra information the analyzer produced:
The following graphviz dot graph describes paths through the callgraph that could lead to a function calling a goroutine:
digraph G {
  "(GetInt, 1)" -> {"(get, 2)";}
  "(injectResponse, 4)" -> {}
  "(NewProtocolManager, 12)" -> {"(New, 7)";"(handle, 1)";}
  "(NodeAddr, 1)" -> {"(Load, 1)";}
  "(delete, 1)" -> {"(delete, 5)";}
  "(NewTxPool, 3)" -> {"(NewTIP1Signer, 1)";}
  "(Set, 2)" -> {"(Do, 1)";"(Header, 1)";}
  "(setupConn, 3)" -> {"(doProtoHandshake, 1)";}
  "(subMatch, 4)" -> {"(run, 5)";"(TestBytes, 1)";}
  "(TestBytes, 1)" -> {"(Run, 2)";}
  "(SetupConn, 3)" -> {"(setupConn, 3)";}
  "(sendPing, 3)" -> {"(injectResponse, 4)";}
  "(New, 2)" -> {"(NewBlockChain, 5)";"(Start, 1)";"(NewTxPool, 3)";"(NewProtocolManager, 12)";"(NodeAddr, 1)";"(Set, 2)";"(Add, 1)";}
  "(Handshake, 6)" -> {}
  "(run, 4)" -> {"(subMatch, 4)";}
  "(Call, 6)" -> {"(run, 4)";}
  "(NewTIP1Signer, 1)" -> {"(Mul, 2)";}
  "(add, 2)" -> {}
  "(remove, 1)" -> {"(delete, 1)";}
  "(Register, 1)" -> {}
  "(get, 2)" -> {"(get, 1)";"(remove, 1)";}
  "(Start, 0)" -> {"(Start, 1)";"(Add, 1)";"(run, 1)";}
  "(New, 7)" -> {}
  "(newClientTransport, 6)" -> {}
  "(start, 2)" -> {"(New, 2)";"(Start, 0)";"(New, 1)";}
  "(NewBlockChain, 5)" -> {}
  "(dial, 2)" -> {"(Dial, 1)";"(SetupConn, 3)";}
  "(Mul, 2)" -> {"(Add, 2)";}
  "(RequestENR, 1)" -> {"(ensureBond, 2)";}
  "(WriteHeader, 1)" -> {"(Add, 2)";}
  "(DialInProc, 1)" -> {}
  "(Load, 1)" -> {}
  "(Add, 2)" -> {"(add, 2)";}
  "(Dial, 3)" -> {"(NewClientConn, 3)";}
  "(clientHandshake, 2)" -> {"(newClientTransport, 6)";}
  "(Header, 1)" -> {"(WriteHeader, 1)";}
  "(doProtoHandshake, 1)" -> {}
  "(Notify, 2)" -> {"(start, 2)";"(New, 1)";}
  "(Do, 1)" -> {"(dial, 2)";}
  "(Dial, 1)" -> {"(SetupConn, 3)";}
  "(NewClientConn, 3)" -> {"(clientHandshake, 2)";}
  "(updateNode, 1)" -> {"(RequestENR, 1)";}
  "(ensureBond, 2)" -> {"(sendPing, 3)";}
  "(Run, 2)" -> {"(watchNetwork, 1)";}
  "(watchNetwork, 1)" -> {}
  "(Start, 1)" -> {"(Register, 1)";"(DialInProc, 1)";"(Do, 1)";"(Dial, 3)";}
  "(New, 1)" -> {"(Add, 1)";"(GetInt, 1)";"(New, 2)";}
  "(Send, 1)" -> {"(delete, 1)";"(Call, 3)";"(Do, 1)";}
  "(delete, 5)" -> {}
  "(run, 5)" -> {}
  "(get, 1)" -> {"(Add, 2)";}
  "(Call, 3)" -> {"(Call, 6)";}
  "(Add, 1)" -> {"(Send, 1)";}
  "(run, 1)" -> {"(Do, 1)";"(updateNode, 1)";}
  "(handle, 1)" -> {"(Handshake, 6)";}
}
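
If Notify defers delivery to a goroutine, that goroutine may dereference &log after the loop has moved on. A hedged sketch of the conventional fix, copying the loop variable first; Log and notify below are stand-ins, not the real rpc types:

package main

import (
	"fmt"
	"sync"
)

type Log struct{ Msg string }

func notify(wg *sync.WaitGroup, l *Log) {
	go func() { // simulates a Notify implementation that defers work to a goroutine
		defer wg.Done()
		fmt.Println(l.Msg)
	}()
}

func main() {
	logs := []Log{{"a"}, {"b"}, {"c"}}
	var wg sync.WaitGroup
	wg.Add(len(logs))
	for _, log := range logs {
		log := log // copy before taking the address; pre-Go 1.22, &log would otherwise be shared
		notify(&wg, &log)
	}
	wg.Wait()
}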

Leave a reaction on this issue to contribute to the project by classifying this instance as a Bug 👎, Mitigated 👍, or Desirable Behavior 🚀
See the descriptions of the classifications here for more information.

commit ID: 8fbb6bc7045f0e277adf2c86d57dc7a7a5bb7b8f

agiletechvn/HL-Loyalty: chaincode/loyalty/vendor/github.com/hyperledger/fabric/gossip/privdata/pull.go; 3 LoC

Found a possible issue in agiletechvn/HL-Loyalty at chaincode/loyalty/vendor/github.com/hyperledger/fabric/gossip/privdata/pull.go

Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message.

reference to dig is reassigned at line 430

The 3 lines of Go which triggered the analyzer:
for i, dig := range digests {
	res[i] = &dig
}
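
Since res outlives the loop, every res[i] would end up aliasing the one dig variable under pre-Go 1.22 semantics. A minimal sketch of one fix, taking the address of the slice element instead of the loop variable:

package main

import "fmt"

func main() {
	digests := []string{"d1", "d2", "d3"}
	res := make([]*string, len(digests))
	for i := range digests {
		res[i] = &digests[i] // address of the slice element itself, stable across iterations
	}
	for _, p := range res {
		fmt.Println(*p) // d1 d2 d3
	}
}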

Leave a reaction on this issue to contribute to the project by classifying this instance as a Bug 👎, Mitigated 👍, or Desirable Behavior 🚀
See the descriptions of the classifications here for more information.

commit ID: 745b648ccd7dc32867dd953dc0dfb1da1b097218

mackentan/GoProxyHunt: src/code.google.com/p/go.tools/cmd/vet/testdata/rangeloop.go; 5 LoC

Found a possible issue in mackentan/GoProxyHunt at src/code.google.com/p/go.tools/cmd/vet/testdata/rangeloop.go

Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message.

range-loop variable v used in defer or goroutine at line 30

The 5 lines of Go which triggered the analyzer:
for _, v := range s {
	go func() {
		println(v) // ERROR "range variable v captured by func literal"
	}()
}
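
This file is go vet's own test data, so the capture is deliberate; the fix the check steers users toward is passing the range variable as an argument. A small runnable sketch:

package main

import (
	"fmt"
	"sync"
)

func main() {
	s := []int{1, 2, 3}
	var wg sync.WaitGroup
	for _, v := range s {
		wg.Add(1)
		go func(v int) { // v is now a parameter, so each goroutine gets its own copy
			defer wg.Done()
			fmt.Println(v)
		}(v)
	}
	wg.Wait()
}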

Leave a reaction on this issue to contribute to the project by classifying this instance as a Bug 👎, Mitigated 👍, or Desirable Behavior 🚀
See the descriptions of the classifications here for more information.

commit ID: d48b11078a6769c36db11ed2fd819f38e571b74a

cloudfoundry/diego-release: src/github.com/docker/docker/distribution/push_v2.go; 73 LoC

Found a possible issue in cloudfoundry/diego-release at src/github.com/docker/docker/distribution/push_v2.go

Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message.

function call which takes a reference to mountCandidate at line 363 may start a goroutine

The 73 lines of Go which triggered the analyzer:
for _, mountCandidate := range candidates {
	logrus.Debugf("attempting to mount layer %s (%s) from %s", diffID, mountCandidate.Digest, mountCandidate.SourceRepository)
	createOpts := []distribution.BlobCreateOption{}

	if len(mountCandidate.SourceRepository) > 0 {
		namedRef, err := reference.ParseNormalizedNamed(mountCandidate.SourceRepository)
		if err != nil {
			logrus.Errorf("failed to parse source repository reference %v: %v", reference.FamiliarString(namedRef), err)
			pd.v2MetadataService.Remove(mountCandidate)
			continue
		}

		// Candidates are always under same domain, create remote reference
		// with only path to set mount from with
		remoteRef, err := reference.WithName(reference.Path(namedRef))
		if err != nil {
			logrus.Errorf("failed to make remote reference out of %q: %v", reference.Path(namedRef), err)
			continue
		}

		canonicalRef, err := reference.WithDigest(reference.TrimNamed(remoteRef), mountCandidate.Digest)
		if err != nil {
			logrus.Errorf("failed to make canonical reference: %v", err)
			continue
		}

		createOpts = append(createOpts, client.WithMountFrom(canonicalRef))
	}

	// send the layer
	lu, err := bs.Create(ctx, createOpts...)
	switch err := err.(type) {
	case nil:
		// noop
	case distribution.ErrBlobMounted:
		progress.Updatef(progressOutput, pd.ID(), "Mounted from %s", err.From.Name())

		err.Descriptor.MediaType = schema2.MediaTypeLayer

		pd.pushState.Lock()
		pd.pushState.confirmedV2 = true
		pd.pushState.remoteLayers[diffID] = err.Descriptor
		pd.pushState.Unlock()

		// Cache mapping from this layer's DiffID to the blobsum
		if err := pd.v2MetadataService.TagAndAdd(diffID, pd.hmacKey, metadata.V2Metadata{
			Digest:           err.Descriptor.Digest,
			SourceRepository: pd.repoInfo.Name(),
		}); err != nil {
			return distribution.Descriptor{}, xfer.DoNotRetry{Err: err}
		}
		return err.Descriptor, nil
	default:
		logrus.Infof("failed to mount layer %s (%s) from %s: %v", diffID, mountCandidate.Digest, mountCandidate.SourceRepository, err)
	}

	if len(mountCandidate.SourceRepository) > 0 &&
		(metadata.CheckV2MetadataHMAC(&mountCandidate, pd.hmacKey) ||
			len(mountCandidate.HMAC) == 0) {
		cause := "blob mount failure"
		if err != nil {
			cause = fmt.Sprintf("an error: %v", err.Error())
		}
		logrus.Debugf("removing association between layer %s and %s due to %s", mountCandidate.Digest, mountCandidate.SourceRepository, cause)
		pd.v2MetadataService.Remove(mountCandidate)
	}

	if lu != nil {
		// cancel previous upload
		cancelLayerUpload(ctx, mountCandidate.Digest, layerUpload)
		layerUpload = lu
	}
}
Extra information the analyzer produced:
The following graphviz dot graph describes paths through the callgraph that could lead to a function calling a goroutine:
digraph G {
  "(start, 1)" -> {"(shutdownDaemon, 1)";"(handleControlSocketChange, 2)";"(Init, 4)";"(Start, 2)";"(NewDaemon, 4)";}
  "(NewNetTransport, 1)" -> {}
  "(apply, 2)" -> {"(New, 3)";}
  "(updateHealthMonitor, 1)" -> {"(monitor, 4)";}
  "(handleNodeConflict, 2)" -> {}
  "(Put, 2)" -> {"(New, 1)";}
  "(Put, 1)" -> {"(Create, 1)";}
  "(UpdateTaskStatus, 3)" -> {}
  "(runAgent, 4)" -> {}
  "(monitor, 4)" -> {}
  "(ViewAndWatch, 3)" -> {"(Watch, 2)";}
  "(New, 2)" -> {"(Update, 1)";"(initializeSubsystem, 3)";"(Join, 2)";"(New, 1)";"(Put, 2)";"(Register, 1)";"(apply, 2)";"(Init, 1)";"(loadPlugins, 1)";}
  "(Do, 3)" -> {"(Start, 1)";}
  "(New, 9)" -> {"(CopyConsole, 7)";"(copyPipes, 7)";}
  "(reconcileConfigs, 4)" -> {"(Remove, 1)";}
  "(ListenPipe, 2)" -> {}
  "(Write, 1)" -> {"(New, 1)";"(Put, 1)";"(Update, 3)";"(Log, 1)";}
  "(Create, 1)" -> {"(newMemberlist, 1)";"(newSerfQueries, 4)";"(NewSnapshotter, 8)";}
  "(Remove, 1)" -> {"(Remove, 2)";}
  "(shutdownDaemon, 1)" -> {}
  "(NewDaemon, 4)" -> {}
  "(handleControlSocketChange, 2)" -> {"(watchClusterEvents, 2)";}
  "(callRemoteBalancer, 2)" -> {}
  "(containerPause, 1)" -> {"(updateHealthMonitor, 1)";}
  "(Update, 1)" -> {"(Update, 2)";}
  "(Put, 3)" -> {"(Commit, 2)";"(Create, 1)";}
  "(New, 3)" -> {"(Remove, 1)";}
  "(Add, 2)" -> {"(Remove, 1)";}
  "(reconcileSecrets, 4)" -> {"(Remove, 1)";}
  "(Start, 2)" -> {"(callRemoteBalancer, 2)";}
  "(Update, 2)" -> {"(reconcileTaskState, 4)";"(Create, 2)";"(reconcileSecrets, 4)";"(reconcileConfigs, 4)";}
  "(superviseManager, 5)" -> {"(runManager, 5)";}
  "(Register, 1)" -> {"(Put, 3)";"(Add, 2)";}
  "(Remove, 2)" -> {}
  "(newMemberlist, 1)" -> {"(NewNetTransport, 1)";}
  "(aliveNode, 3)" -> {"(NotifyConflict, 2)";}
  "(runOrError, 1)" -> {"(Start, 1)";}
  "(newTaskManager, 3)" -> {"(newTaskManager, 4)";}
  "(WaitForCluster, 2)" -> {"(Watch, 2)";}
  "(UpdateNode, 1)" -> {"(aliveNode, 3)";}
  "(removeEndpoint, 1)" -> {"(Remove, 2)";}
  "(handleSessionMessage, 3)" -> {"(Remove, 1)";}
  "(Init, 1)" -> {"(startTask, 3)";}
  "(WatchFrom, 3)" -> {}
  "(copyPipes, 7)" -> {}
  "(startTask, 3)" -> {"(taskManager, 3)";}
  "(containerUnpause, 1)" -> {"(updateHealthMonitor, 1)";}
  "(NotifyConflict, 2)" -> {"(handleNodeConflict, 2)";}
  "(Create, 2)" -> {"(New, 9)";}
  "(run, 3)" -> {"(start, 2)";}
  "(Log, 1)" -> {"(Add, 2)";}
  "(CheckV2MetadataHMAC, 2)" -> {"(Write, 1)";"(New, 2)";}
  "(reconcileTaskState, 4)" -> {"(Remove, 1)";}
  "(newSerfQueries, 4)" -> {}
  "(initializeSubsystem, 3)" -> {"(Create, 2)";}
  "(start, 2)" -> {"(Session, 2)";}
  "(loadPlugins, 1)" -> {"(Register, 1)";"(Init, 1)";}
  "(watchClusterEvents, 2)" -> {"(Watch, 2)";}
  "(newSession, 5)" -> {"(run, 3)";}
  "(Start, 1)" -> {"(start, 1)";"(copyPipes, 7)";"(Start, 2)";}
  "(Join, 2)" -> {"(sbJoin, 2)";}
  "(sbJoin, 2)" -> {"(removeEndpoint, 1)";}
  "(CopyConsole, 7)" -> {}
  "(runManager, 5)" -> {"(Run, 1)";}
  "(taskManager, 3)" -> {"(newTaskManager, 3)";}
  "(Run, 1)" -> {"(ViewAndWatch, 3)";"(WaitForCluster, 2)";"(UpdateNode, 1)";}
  "(run, 1)" -> {"(newSession, 5)";"(Subscribe, 2)";"(Do, 3)";"(UpdateRootCA, 1)";"(Start, 1)";"(superviseManager, 5)";"(UpdateTaskStatus, 3)";"(runAgent, 4)";"(handleSessionMessage, 3)";}
  "(Subscribe, 2)" -> {"(Remove, 1)";}
  "(UpdateRootCA, 1)" -> {}
  "(Update, 3)" -> {"(Update, 2)";"(runOrError, 1)";}
  "(New, 1)" -> {"(run, 1)";"(Join, 2)";"(New, 4)";"(apply, 2)";}
  "(New, 4)" -> {"(New, 3)";}
  "(newTaskManager, 4)" -> {}
  "(Session, 2)" -> {"(ViewAndWatch, 3)";}
  "(NewSnapshotter, 8)" -> {}
  "(Commit, 2)" -> {"(containerPause, 1)";"(containerUnpause, 1)";}
  "(Init, 4)" -> {"(ListenPipe, 2)";}
  "(Watch, 2)" -> {"(WatchFrom, 3)";}
}

Leave a reaction on this issue to contribute to the project by classifying this instance as a Bug 👎, Mitigated 👍, or Desirable Behavior 🚀
See the descriptions of the classifications here for more information.

commit ID: 24e7c50250eead4794ae33dc1e170ed52e2f9cc0

nttcom/terraform-provider-ecl: ecl/clientconfig/requests.go; 3 LoC

Found a possible issue in nttcom/terraform-provider-ecl at ecl/clientconfig/requests.go

Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message.

reference to v is reassigned at line 154

The 3 lines of Go which triggered the analyzer:
for _, v := range clouds {
	cloud = &v
}
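
Here the pointer escapes the loop entirely, so cloud ends up pointing at whatever the reused loop variable last held (pre-Go 1.22). A sketch of the shadow-copy fix, with cloudConfig as a stand-in type:

package main

import "fmt"

type cloudConfig struct{ Name string }

func main() {
	clouds := map[string]cloudConfig{"dev": {"dev"}, "prod": {"prod"}}
	var cloud *cloudConfig
	for _, v := range clouds {
		v := v // copy, so the pointer that escapes the loop refers to its own variable
		cloud = &v
	}
	if cloud != nil {
		fmt.Println(cloud.Name) // whichever entry was ranged over last
	}
}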

Leave a reaction on this issue to contribute to the project by classifying this instance as a Bug 👎, Mitigated 👍, or Desirable Behavior 🚀
See the descriptions of the classifications here for more information.

commit ID: 9172123418b875e6a6980f2078627576f24f4100

cloudfoundry/diego-release: src/github.com/docker/docker/vendor/github.com/docker/libnetwork/networkdb/delegate.go; 18 LoC

Found a possible issue in cloudfoundry/diego-release at src/github.com/docker/docker/vendor/github.com/docker/libnetwork/networkdb/delegate.go

Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message.

reference to nodes is reassigned at line 32

The 18 lines of Go which triggered the analyzer:
for _, nodes := range []map[string]*node{
	nDB.failedNodes,
	nDB.leftNodes,
	nDB.nodes,
} {
	if n, ok := nodes[nEvent.NodeName]; ok {
		active = &nodes == &nDB.nodes
		left = &nodes == &nDB.leftNodes
		failed = &nodes == &nDB.failedNodes
		if n.ltime >= nEvent.LTime {
			return active, left, failed, nil
		}
		if extract {
			delete(nodes, n.Name)
		}
		return active, left, failed, n
	}
}
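
Worth noting when classifying: &nodes is the address of the loop variable, a fresh local copy of each map header, so it can never compare equal to &nDB.nodes, and the three flags look permanently false. A hedged sketch of one restructuring that tags each map explicitly instead of comparing addresses (node and the maps are stand-ins):

package main

import "fmt"

type node struct{ Name string }

func main() {
	failedNodes := map[string]*node{}
	leftNodes := map[string]*node{"n1": {"n1"}}
	activeNodes := map[string]*node{}

	for _, entry := range []struct {
		kind string
		m    map[string]*node
	}{
		{"failed", failedNodes},
		{"left", leftNodes},
		{"active", activeNodes},
	} {
		// The tag says which map we are in, with no address comparison needed.
		if n, ok := entry.m["n1"]; ok {
			fmt.Println("found", n.Name, "in", entry.kind)
		}
	}
}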

Leave a reaction on this issue to contribute to the project by classifying this instance as a Bug 👎, Mitigated 👍, or Desirable Behavior 🚀
See the descriptions of the classifications here for more information.

commit ID: 24e7c50250eead4794ae33dc1e170ed52e2f9cc0

se0g1/cve-2018-1002101: test/e2e/scalability/density.go; 367 LoC

Found a possible issue in se0g1/cve-2018-1002101 at test/e2e/scalability/density.go

Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message.

The 367 lines of Go which triggered the analyzer:
for _, testArg := range densityTests {
	feature := "ManualPerformance"
	switch testArg.podsPerNode {
	case 30:
		if isCanonical(&testArg) {
			feature = "Performance"
		}
	case 95:
		feature = "HighDensityPerformance"
	}

	name := fmt.Sprintf("[Feature:%s] should allow starting %d pods per node using %v with %v secrets, %v configmaps and %v daemons",
		feature,
		testArg.podsPerNode,
		testArg.kind,
		testArg.secretsPerPod,
		testArg.configMapsPerPod,
		testArg.daemonsPerNode,
	)
	if testArg.quotas {
		name += " with quotas"
	}
	itArg := testArg
	It(name, func() {
		nodePrepPhase := testPhaseDurations.StartPhase(100, "node preparation")
		defer nodePrepPhase.End()
		nodePreparer := framework.NewE2ETestNodePreparer(
			f.ClientSet,
			[]testutils.CountToStrategy{{Count: nodeCount, Strategy: &testutils.TrivialNodePrepareStrategy{}}},
		)
		framework.ExpectNoError(nodePreparer.PrepareNodes())
		defer nodePreparer.CleanupNodes()

		podsPerNode := itArg.podsPerNode
		if podsPerNode == 30 {
			f.AddonResourceConstraints = func() map[string]framework.ResourceConstraint { return density30AddonResourceVerifier(nodeCount) }()
		}
		totalPods = (podsPerNode - itArg.daemonsPerNode) * nodeCount
		fileHndl, err := os.Create(fmt.Sprintf(framework.TestContext.OutputDir+"/%s/pod_states.csv", uuid))
		framework.ExpectNoError(err)
		defer fileHndl.Close()
		nodePrepPhase.End()

		// nodeCountPerNamespace and CreateNamespaces are defined in load.go
		numberOfCollections := (nodeCount + nodeCountPerNamespace - 1) / nodeCountPerNamespace
		namespaces, err := CreateNamespaces(f, numberOfCollections, fmt.Sprintf("density-%v", testArg.podsPerNode), testPhaseDurations.StartPhase(200, "namespace creation"))
		framework.ExpectNoError(err)
		if itArg.quotas {
			framework.ExpectNoError(CreateQuotas(f, namespaces, totalPods+nodeCount, testPhaseDurations.StartPhase(210, "quota creation")))
		}

		configs := make([]testutils.RunObjectConfig, numberOfCollections)
		secretConfigs := make([]*testutils.SecretConfig, 0, numberOfCollections*itArg.secretsPerPod)
		configMapConfigs := make([]*testutils.ConfigMapConfig, 0, numberOfCollections*itArg.configMapsPerPod)
		// Since all RCs are created at the same time, timeout for each config
		// has to assume that it will be run at the very end.
		podThroughput := 20
		timeout := time.Duration(totalPods/podThroughput)*time.Second + 3*time.Minute
		// createClients is defined in load.go
		clients, internalClients, scalesClients, err := createClients(numberOfCollections)
		for i := 0; i < numberOfCollections; i++ {
			nsName := namespaces[i].Name
			secretNames := []string{}
			for j := 0; j < itArg.secretsPerPod; j++ {
				secretName := fmt.Sprintf("density-secret-%v-%v", i, j)
				secretConfigs = append(secretConfigs, &testutils.SecretConfig{
					Content:   map[string]string{"foo": "bar"},
					Client:    clients[i],
					Name:      secretName,
					Namespace: nsName,
					LogFunc:   framework.Logf,
				})
				secretNames = append(secretNames, secretName)
			}
			configMapNames := []string{}
			for j := 0; j < itArg.configMapsPerPod; j++ {
				configMapName := fmt.Sprintf("density-configmap-%v-%v", i, j)
				configMapConfigs = append(configMapConfigs, &testutils.ConfigMapConfig{
					Content:   map[string]string{"foo": "bar"},
					Client:    clients[i],
					Name:      configMapName,
					Namespace: nsName,
					LogFunc:   framework.Logf,
				})
				configMapNames = append(configMapNames, configMapName)
			}
			name := fmt.Sprintf("density%v-%v-%v", totalPods, i, uuid)
			baseConfig := &testutils.RCConfig{
				Client:               clients[i],
				InternalClient:       internalClients[i],
				ScalesGetter:         scalesClients[i],
				Image:                framework.GetPauseImageName(f.ClientSet),
				Name:                 name,
				Namespace:            nsName,
				Labels:               map[string]string{"type": "densityPod"},
				PollInterval:         DensityPollInterval,
				Timeout:              timeout,
				PodStatusFile:        fileHndl,
				Replicas:             (totalPods + numberOfCollections - 1) / numberOfCollections,
				CpuRequest:           nodeCpuCapacity / 100,
				MemRequest:           nodeMemCapacity / 100,
				MaxContainerFailures: &MaxContainerFailures,
				Silent:               true,
				LogFunc:              framework.Logf,
				SecretNames:          secretNames,
				ConfigMapNames:       configMapNames,
			}
			switch itArg.kind {
			case api.Kind("ReplicationController"):
				configs[i] = baseConfig
			case extensions.Kind("ReplicaSet"):
				configs[i] = &testutils.ReplicaSetConfig{RCConfig: *baseConfig}
			case extensions.Kind("Deployment"):
				configs[i] = &testutils.DeploymentConfig{RCConfig: *baseConfig}
			case batch.Kind("Job"):
				configs[i] = &testutils.JobConfig{RCConfig: *baseConfig}
			default:
				framework.Failf("Unsupported kind: %v", itArg.kind)
			}
		}

		// Single client is running out of http2 connections in delete phase, hence we need more.
		clients, internalClients, _, err = createClients(2)

		dConfig := DensityTestConfig{
			ClientSets:         clients,
			InternalClientsets: internalClients,
			Configs:            configs,
			PodCount:           totalPods,
			PollInterval:       DensityPollInterval,
			kind:               itArg.kind,
			SecretConfigs:      secretConfigs,
			ConfigMapConfigs:   configMapConfigs,
		}

		for i := 0; i < itArg.daemonsPerNode; i++ {
			dConfig.DaemonConfigs = append(dConfig.DaemonConfigs,
				&testutils.DaemonConfig{
					Client:    f.ClientSet,
					Name:      fmt.Sprintf("density-daemon-%v", i),
					Namespace: f.Namespace.Name,
					LogFunc:   framework.Logf,
				})
		}
		e2eStartupTime = runDensityTest(dConfig, testPhaseDurations)
		if itArg.runLatencyTest {
			By("Scheduling additional Pods to measure startup latencies")

			createTimes := make(map[string]metav1.Time, 0)
			nodeNames := make(map[string]string, 0)
			scheduleTimes := make(map[string]metav1.Time, 0)
			runTimes := make(map[string]metav1.Time, 0)
			watchTimes := make(map[string]metav1.Time, 0)

			var mutex sync.Mutex
			checkPod := func(p *v1.Pod) {
				mutex.Lock()
				defer mutex.Unlock()
				defer GinkgoRecover()

				if p.Status.Phase == v1.PodRunning {
					if _, found := watchTimes[p.Name]; !found {
						watchTimes[p.Name] = metav1.Now()
						createTimes[p.Name] = p.CreationTimestamp
						nodeNames[p.Name] = p.Spec.NodeName
						var startTime metav1.Time
						for _, cs := range p.Status.ContainerStatuses {
							if cs.State.Running != nil {
								if startTime.Before(&cs.State.Running.StartedAt) {
									startTime = cs.State.Running.StartedAt
								}
							}
						}
						if startTime != metav1.NewTime(time.Time{}) {
							runTimes[p.Name] = startTime
						} else {
							framework.Failf("Pod %v is reported to be running, but none of its containers is", p.Name)
						}
					}
				}
			}

			additionalPodsPrefix = "density-latency-pod"
			stopCh := make(chan struct{})

			latencyPodStores := make([]cache.Store, len(namespaces))
			for i := 0; i < len(namespaces); i++ {
				nsName := namespaces[i].Name
				latencyPodsStore, controller := cache.NewInformer(
					&cache.ListWatch{
						ListFunc: func(options metav1.ListOptions) (runtime.Object, error) {
							options.LabelSelector = labels.SelectorFromSet(labels.Set{"type": additionalPodsPrefix}).String()
							obj, err := c.CoreV1().Pods(nsName).List(options)
							return runtime.Object(obj), err
						},
						WatchFunc: func(options metav1.ListOptions) (watch.Interface, error) {
							options.LabelSelector = labels.SelectorFromSet(labels.Set{"type": additionalPodsPrefix}).String()
							return c.CoreV1().Pods(nsName).Watch(options)
						},
					},
					&v1.Pod{},
					0,
					cache.ResourceEventHandlerFuncs{
						AddFunc: func(obj interface{}) {
							p, ok := obj.(*v1.Pod)
							if !ok {
								framework.Logf("Failed to cast observed object to *v1.Pod.")
							}
							Expect(ok).To(Equal(true))
							go checkPod(p)
						},
						UpdateFunc: func(oldObj, newObj interface{}) {
							p, ok := newObj.(*v1.Pod)
							if !ok {
								framework.Logf("Failed to cast observed object to *v1.Pod.")
							}
							Expect(ok).To(Equal(true))
							go checkPod(p)
						},
					},
				)
				latencyPodStores[i] = latencyPodsStore

				go controller.Run(stopCh)
			}

			// Create some additional pods with throughput ~5 pods/sec.
			latencyPodStartupPhase := testPhaseDurations.StartPhase(800, "latency pods creation")
			defer latencyPodStartupPhase.End()
			var wg sync.WaitGroup
			wg.Add(nodeCount)
			// Explicitly set requests here.
			// Thanks to it we trigger increasing priority function by scheduling
			// a pod to a node, which in turn will result in spreading latency pods
			// more evenly between nodes.
			cpuRequest := *resource.NewMilliQuantity(nodeCpuCapacity/5, resource.DecimalSI)
			memRequest := *resource.NewQuantity(nodeMemCapacity/5, resource.DecimalSI)
			if podsPerNode > 30 {
				// This is to make them schedulable on high-density tests
				// (e.g. 100 pods/node kubemark).
				cpuRequest = *resource.NewMilliQuantity(0, resource.DecimalSI)
				memRequest = *resource.NewQuantity(0, resource.DecimalSI)
			}
			rcNameToNsMap := map[string]string{}
			for i := 1; i <= nodeCount; i++ {
				name := additionalPodsPrefix + "-" + strconv.Itoa(i)
				nsName := namespaces[i%len(namespaces)].Name
				rcNameToNsMap[name] = nsName
				go createRunningPodFromRC(&wg, c, name, nsName, framework.GetPauseImageName(f.ClientSet), additionalPodsPrefix, cpuRequest, memRequest)
				time.Sleep(200 * time.Millisecond)
			}
			wg.Wait()
			latencyPodStartupPhase.End()

			latencyMeasurementPhase := testPhaseDurations.StartPhase(810, "pod startup latencies measurement")
			defer latencyMeasurementPhase.End()
			By("Waiting for all Pods begin observed by the watch...")
			waitTimeout := 10 * time.Minute
			for start := time.Now(); len(watchTimes) < nodeCount; time.Sleep(10 * time.Second) {
				if time.Since(start) < waitTimeout {
					framework.Failf("Timeout reached waiting for all Pods being observed by the watch.")
				}
			}
			close(stopCh)

			nodeToLatencyPods := make(map[string]int)
			for i := range latencyPodStores {
				for _, item := range latencyPodStores[i].List() {
					pod := item.(*v1.Pod)
					nodeToLatencyPods[pod.Spec.NodeName]++
				}
				for node, count := range nodeToLatencyPods {
					if count > 1 {
						framework.Logf("%d latency pods scheduled on %s", count, node)
					}
				}
			}

			for i := 0; i < len(namespaces); i++ {
				nsName := namespaces[i].Name
				selector := fields.Set{
					"involvedObject.kind":      "Pod",
					"involvedObject.namespace": nsName,
					"source":                   v1.DefaultSchedulerName,
				}.AsSelector().String()
				options := metav1.ListOptions{FieldSelector: selector}
				schedEvents, err := c.CoreV1().Events(nsName).List(options)
				framework.ExpectNoError(err)
				for k := range createTimes {
					for _, event := range schedEvents.Items {
						if event.InvolvedObject.Name == k {
							scheduleTimes[k] = event.FirstTimestamp
							break
						}
					}
				}
			}

			scheduleLag := make([]framework.PodLatencyData, 0)
			startupLag := make([]framework.PodLatencyData, 0)
			watchLag := make([]framework.PodLatencyData, 0)
			schedToWatchLag := make([]framework.PodLatencyData, 0)
			e2eLag := make([]framework.PodLatencyData, 0)

			for name, create := range createTimes {
				sched, ok := scheduleTimes[name]
				if !ok {
					framework.Logf("Failed to find schedule time for %v", name)
					missingMeasurements++
				}
				run, ok := runTimes[name]
				if !ok {
					framework.Logf("Failed to find run time for %v", name)
					missingMeasurements++
				}
				watch, ok := watchTimes[name]
				if !ok {
					framework.Logf("Failed to find watch time for %v", name)
					missingMeasurements++
				}
				node, ok := nodeNames[name]
				if !ok {
					framework.Logf("Failed to find node for %v", name)
					missingMeasurements++
				}

				scheduleLag = append(scheduleLag, framework.PodLatencyData{Name: name, Node: node, Latency: sched.Time.Sub(create.Time)})
				startupLag = append(startupLag, framework.PodLatencyData{Name: name, Node: node, Latency: run.Time.Sub(sched.Time)})
				watchLag = append(watchLag, framework.PodLatencyData{Name: name, Node: node, Latency: watch.Time.Sub(run.Time)})
				schedToWatchLag = append(schedToWatchLag, framework.PodLatencyData{Name: name, Node: node, Latency: watch.Time.Sub(sched.Time)})
				e2eLag = append(e2eLag, framework.PodLatencyData{Name: name, Node: node, Latency: watch.Time.Sub(create.Time)})
			}

			sort.Sort(framework.LatencySlice(scheduleLag))
			sort.Sort(framework.LatencySlice(startupLag))
			sort.Sort(framework.LatencySlice(watchLag))
			sort.Sort(framework.LatencySlice(schedToWatchLag))
			sort.Sort(framework.LatencySlice(e2eLag))

			framework.PrintLatencies(scheduleLag, "worst schedule latencies")
			framework.PrintLatencies(startupLag, "worst run-after-schedule latencies")
			framework.PrintLatencies(watchLag, "worst watch latencies")
			framework.PrintLatencies(schedToWatchLag, "worst scheduled-to-end total latencies")
			framework.PrintLatencies(e2eLag, "worst e2e total latencies")

			// Test whether e2e pod startup time is acceptable.
			podStartupLatency := &framework.PodStartupLatency{Latency: framework.ExtractLatencyMetrics(e2eLag)}
			f.TestSummaries = append(f.TestSummaries, podStartupLatency)
			framework.ExpectNoError(framework.VerifyPodStartupLatency(podStartupLatency))

			framework.LogSuspiciousLatency(startupLag, e2eLag, nodeCount, c)
			latencyMeasurementPhase.End()

			By("Removing additional replication controllers")
			podDeletionPhase := testPhaseDurations.StartPhase(820, "latency pods deletion")
			defer podDeletionPhase.End()
			deleteRC := func(i int) {
				defer GinkgoRecover()
				name := additionalPodsPrefix + "-" + strconv.Itoa(i+1)
				framework.ExpectNoError(framework.DeleteRCAndWaitForGC(c, rcNameToNsMap[name], name))
			}
			workqueue.Parallelize(25, nodeCount, deleteRC)
			podDeletionPhase.End()
		}
		cleanupDensityTest(dConfig, testPhaseDurations)
	})
}
Extra information the analyzer produced:
No path was found through the callgraph that could lead to a function which writes a pointer argument.

No path was found through the callgraph that could lead to a function which passes a pointer to third-party code.

root signature {isCanonical 1} was not found in the callgraph; reference was passed directly to third-party code

Leave a reaction on this issue to contribute to the project by classifying this instance as a Bug 👎, Mitigated 👍, or Desirable Behavior 🚀
See the descriptions of the classifications here for more information.

commit ID: 08eb5fb80339f2738e62ab25142bde16debd4a60

mackentan/GoProxyHunt: src/code.google.com/p/go.tools/cmd/vet/testdata/rangeloop.go; 5 LoC

Found a possible issue in mackentan/GoProxyHunt at src/code.google.com/p/go.tools/cmd/vet/testdata/rangeloop.go

Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message.

range-loop variable f used in defer or goroutine at line 56

The 5 lines of Go which triggered the analyzer:
for x[0], f = range s {
	go func() {
		_ = f // ERROR "range variable f captured by func literal"
	}()
}

Leave a reaction on this issue to contribute to the project by classifying this instance as a Bug 👎, Mitigated 👍, or Desirable Behavior 🚀
See the descriptions of the classifications here for more information.

commit ID: d48b11078a6769c36db11ed2fd819f38e571b74a

e154/smart-home: endpoint/flow.go; 25 LoC

Found a possible issue in e154/smart-home at endpoint/flow.go

Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message.

The 25 lines of Go which triggered the analyzer:
for _, w := range params.Workers {
	worker := &m.Worker{}
	common.Copy(&worker, &w)
	worker.WorkflowId = newFlow.Workflow.Id
	worker.FlowId = newFlow.Id
	worker.DeviceActionId = w.DeviceAction.Id

	_, errs = worker.Valid()
	if len(errs) > 0 {
		for _, err := range errs {
			log.Errorf("%s %s", err.Key, err.Message)
		}
		return
	}

	if worker.Id == 0 {
		if _, err = tx.Worker.Add(worker); err != nil {
			return
		}
	} else {
		if err = tx.Worker.Update(worker); err != nil {
			return
		}
	}
}
Extra information the analyzer produced:
No path was found through the callgraph that could lead to a function which writes a pointer argument.

No path was found through the callgraph that could lead to a function which passes a pointer to third-party code.

root signature {Copy 2} was not found in the callgraph; reference was passed directly to third-party code

Leave a reaction on this issue to contribute to the project by classifying this instance as a Bug 👎, Mitigated 👍, or Desirable Behavior 🚀
See the descriptions of the classifications here for more information.

commit ID: 08bfcca81c321b893cad87842a4ea40713f62e09

e154/smart-home: endpoint/flow.go; 46 LoC

Found a possible issue in e154/smart-home at endpoint/flow.go

Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message.

The 46 lines of Go which triggered the analyzer:
for _, element := range params.Objects {

	fl := &m.FlowElement{}
	common.Copy(&fl, &element)
	common.Copy(&fl.GraphSettings.Position, &element.Position)
	fl.Uuid.Scan(element.Id)
	fl.FlowId = newFlow.Id
	fl.Name = element.Title

	if element.FlowLink != nil && element.FlowLink.Id != 0 {
		fl.FlowLink = &element.FlowLink.Id
	}

	if element.Script != nil {
		fl.ScriptId = &element.Script.Id
	}

	switch element.Type.Name {
	case "event":
		if element.Type.Start != nil {
			fl.PrototypeType = common.FlowElementsPrototypeMessageHandler
		} else if element.Type.End != nil {
			fl.PrototypeType = common.FlowElementsPrototypeMessageEmitter
		}
	case "task":
		fl.PrototypeType = common.FlowElementsPrototypeTask
	case "gateway":
		fl.PrototypeType = common.FlowElementsPrototypeGateway
	case "flow":
		fl.PrototypeType = common.FlowElementsPrototypeFlow
	default:
		fl.PrototypeType = common.FlowElementsPrototypeDefault
	}

	_, errs = fl.Valid()
	if len(errs) > 0 {
		for _, err := range errs {
			log.Errorf("%s %s", err.Key, err.Message)
		}
		return
	}

	if err = tx.FlowElement.AddOrUpdateFlowElement(fl); err != nil {
		return
	}
}
Extra information the analyzer produced:
No path was found through the callgraph that could lead to a function which writes a pointer argument.

No path was found through the callgraph that could lead to a function which passes a pointer to third-party code.

root signature {Copy 2} was not found in the callgraph; reference was passed directly to third-party code

Leave a reaction on this issue to contribute to the project by classifying this instance as a Bug 👎, Mitigated 👍, or Desirable Behavior 🚀
See the descriptions of the classifications here for more information.

commit ID: 08bfcca81c321b893cad87842a4ea40713f62e09

cnrancher/octopus: adaptors/mqtt/pkg/physical/device.go; 18 LoC

Found a possible issue in cnrancher/octopus at adaptors/mqtt/pkg/physical/device.go

Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message.

reference to newSpecProp was used in a composite literal at line 309

The 18 lines of Go which triggered the analyzer:
for _, newSpecProp := range newSpecProps {
	if newSpecProp.ReadOnly != nil && !*newSpecProp.ReadOnly {
		var staleSpecProp = staleSpecPropsIndex[newSpecProp.Name]
		// publishes again if changed
		if !reflect.DeepEqual(staleSpecProp, newSpecProp) {
			var err = d.mqttClient.Publish(mqtt.PublishMessage{
				Render:          getPublishRender(&newSpecProp),
				QoSPointer:      (*byte)(newSpecProp.QoS),
				RetainedPointer: newSpecProp.Retained,
				Payload:         newSpecProp.Value,
			})
			if err != nil {
				return errors.Wrapf(err, "failed to publish property %s", newSpecProp.Name)
			}
			d.log.V(4).Info("Sent payload", "type", "AttributedTopic", "property", newSpecProp.Name)
		}
	}
}

Leave a reaction on this issue to contribute to the project by classifying this instance as a Bug 👎, Mitigated 👍, or Desirable Behavior 🚀
See the descriptions of the classifications here for more information.

commit ID: 3793775c1da67af923abfe79660e9d8055b262b3

se0g1/cve-2018-1002101: pkg/registry/rbac/rest/storage_rbac.go; 26 LoC

Found a possible issue in se0g1/cve-2018-1002101 at pkg/registry/rbac/rest/storage_rbac.go

Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message.

reference to role was used in a composite literal at line 234

The 26 lines of Go which triggered the analyzer:
for _, role := range roles {
	opts := reconciliation.ReconcileRoleOptions{
		Role:    reconciliation.RoleRuleOwner{Role: &role},
		Client:  reconciliation.RoleModifier{Client: clientset, NamespaceClient: coreclientset.Namespaces()},
		Confirm: true,
	}
	err := retry.RetryOnConflict(retry.DefaultBackoff, func() error {
		result, err := opts.Run()
		if err != nil {
			return err
		}
		switch {
		case result.Protected && result.Operation != reconciliation.ReconcileNone:
			glog.Warningf("skipped reconcile-protected role.%s/%s in %v with missing permissions: %v", rbac.GroupName, role.Name, namespace, result.MissingRules)
		case result.Operation == reconciliation.ReconcileUpdate:
			glog.Infof("updated role.%s/%s in %v with additional permissions: %v", rbac.GroupName, role.Name, namespace, result.MissingRules)
		case result.Operation == reconciliation.ReconcileCreate:
			glog.Infof("created role.%s/%s in %v", rbac.GroupName, role.Name, namespace)
		}
		return nil
	})
	if err != nil {
		// don't fail on failures, try to create as many as you can
		utilruntime.HandleError(fmt.Errorf("unable to reconcile role.%s/%s in %v: %v", rbac.GroupName, role.Name, namespace, err))
	}
}
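
Here &role only escapes into opts, which stays within the iteration, so the instance may well be mitigated by usage; the defensive copy still makes that invariant explicit. A minimal sketch of the pattern with stand-in types:

package main

import "fmt"

type Role struct{ Name string }
type Owner struct{ Role *Role }

func main() {
	roles := []Role{{"admin"}, {"view"}}
	var owners []Owner
	for _, role := range roles {
		role := role // copy so each Owner holds a pointer to a distinct variable
		owners = append(owners, Owner{Role: &role})
	}
	for _, o := range owners {
		fmt.Println(o.Role.Name) // admin view
	}
}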

Leave a reaction on this issue to contribute to the project by classifying this instance as a Bug 👎, Mitigated 👍, or Desirable Behavior 🚀
See the descriptions of the classifications here for more information.

commit ID: 08eb5fb80339f2738e62ab25142bde16debd4a60

cloudfoundry/diego-release: src/github.com/docker/docker/vendor/github.com/docker/swarmkit/manager/allocator/allocator.go; 38 LoC

Found a possible issue in cloudfoundry/diego-release at src/github.com/docker/docker/vendor/github.com/docker/swarmkit/manager/allocator/allocator.go

Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message.

reference to aa is reassigned at line 109

The 38 lines of Go which triggered the analyzer:
for _, aa := range []allocActor{
	{
		taskVoter: networkVoter,
		init:      a.doNetworkInit,
		action:    a.doNetworkAlloc,
	},
} {
	if aa.taskVoter != "" {
		a.registerToVote(aa.taskVoter)
	}

	// Assign a pointer for variable capture
	aaPtr := &aa
	actor := func() error {
		wg.Add(1)
		defer wg.Done()

		// init might return an allocator specific context
		// which is a child of the passed in context to hold
		// allocator specific state
		watch, watchCancel, err := a.init(ctx, aaPtr)
		if err != nil {
			return err
		}

		wg.Add(1)
		go func(watch <-chan events.Event, watchCancel func()) {
			defer func() {
				wg.Done()
				watchCancel()
			}()
			a.run(ctx, *aaPtr, watch)
		}(watch, watchCancel)
		return nil
	}

	actors = append(actors, actor)
}

Leave a reaction on this issue to contribute to the project by classifying this instance as a Bug 👎, Mitigated 👍, or Desirable Behavior 🚀
See the descriptions of the classifications here for more information.

commit ID: 24e7c50250eead4794ae33dc1e170ed52e2f9cc0

initcat-dest/opensource-bilibili: app/job/main/reply-feed/service/statistics.go; 9 LoC

Found a possible issue in initcat-dest/opensource-bilibili at app/job/main/reply-feed/service/statistics.go

Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message.

reference to stat is reassigned at line 122

The 9 lines of Go which triggered the analyzer:
for slot, stat := range s.statisticsStats {
	nameMapping[stat.Name] = append(nameMapping[stat.Name], slot)
	s, ok := statisticsMap[stat.Name]
	if ok {
		statisticsMap[stat.Name] = s.Merge(&stat)
	} else {
		statisticsMap[stat.Name] = &stat
	}
}
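
statisticsMap outlives the loop, so storing &stat would leave every entry aliasing one variable under pre-Go 1.22 semantics. A sketch of the shadow fix; Stat is a stand-in for the project's type:

package main

import "fmt"

type Stat struct {
	Name  string
	Count int
}

func main() {
	stats := []Stat{{"a", 1}, {"b", 2}}
	m := map[string]*Stat{}
	for _, stat := range stats {
		stat := stat // copy before storing the address
		m[stat.Name] = &stat
	}
	for name, s := range m {
		fmt.Println(name, s.Count) // a 1 and b 2, in map order
	}
}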

Leave a reaction on this issue to contribute to the project by classifying this instance as a Bug 👎, Mitigated 👍, or Desirable Behavior 🚀
See the descriptions of the classifications here for more information.

commit ID: cf98f71e6f3b5034b90c736681ef8eebe5a1973a

cloudfoundry/diego-release: src/github.com/docker/docker/vendor/github.com/docker/swarmkit/manager/scheduler/nodeset.go; 65 LoC

Found a possible issue in cloudfoundry/diego-release at src/github.com/docker/docker/vendor/github.com/docker/swarmkit/manager/scheduler/nodeset.go

Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message.

The 65 lines of Go which triggered the analyzer:
for _, node := range ns.nodes {
	tree := &root
	for _, pref := range preferences {
		// Only spread is supported so far
		spread := pref.GetSpread()
		if spread == nil {
			continue
		}

		descriptor := spread.SpreadDescriptor
		var value string
		switch {
		case len(descriptor) > len(constraint.NodeLabelPrefix) && strings.EqualFold(descriptor[:len(constraint.NodeLabelPrefix)], constraint.NodeLabelPrefix):
			if node.Spec.Annotations.Labels != nil {
				value = node.Spec.Annotations.Labels[descriptor[len(constraint.NodeLabelPrefix):]]
			}
		case len(descriptor) > len(constraint.EngineLabelPrefix) && strings.EqualFold(descriptor[:len(constraint.EngineLabelPrefix)], constraint.EngineLabelPrefix):
			if node.Description != nil && node.Description.Engine != nil && node.Description.Engine.Labels != nil {
				value = node.Description.Engine.Labels[descriptor[len(constraint.EngineLabelPrefix):]]
			}
		// TODO(aaronl): Support other items from constraint
		// syntax like node ID, hostname, os/arch, etc?
		default:
			continue
		}

		// If value is still uninitialized, the value used for
		// the node at this level of the tree is "". This makes
		// sure that the tree structure is not affected by
		// which properties nodes have and don't have.

		if node.ActiveTasksCountByService != nil {
			tree.tasks += node.ActiveTasksCountByService[serviceID]
		}

		if tree.next == nil {
			tree.next = make(map[string]*decisionTree)
		}
		next := tree.next[value]
		if next == nil {
			next = &decisionTree{}
			tree.next[value] = next
		}
		tree = next
	}

	if node.ActiveTasksCountByService != nil {
		tree.tasks += node.ActiveTasksCountByService[serviceID]
	}

	if tree.nodeHeap.lessFunc == nil {
		tree.nodeHeap.lessFunc = nodeLess
	}

	if tree.nodeHeap.Len() < maxAssignments {
		if meetsConstraints(&node) {
			heap.Push(&tree.nodeHeap, node)
		}
	} else if nodeLess(&node, &tree.nodeHeap.nodes[0]) {
		if meetsConstraints(&node) {
			tree.nodeHeap.nodes[0] = node
			heap.Fix(&tree.nodeHeap, 0)
		}
	}
}
Extra information the analyzer produced:
No path was found through the callgraph that could lead to a function which writes a pointer argument.

No path was found through the callgraph that could lead to a function which passes a pointer to third-party code.

root signature {meetsConstraints 1} was not found in the callgraph; reference was passed directly to third-party code

Leave a reaction on this issue to contribute to the project by classifying this instance as a Bug 👎, Mitigated 👍, or Desirable Behavior 🚀
See the descriptions of the classifications here for more information.

commit ID: 24e7c50250eead4794ae33dc1e170ed52e2f9cc0

datastax/cassandra-data-apis: db/mocks.go; 3 LoC

Found a possible issue in datastax/cassandra-data-apis at db/mocks.go

Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message.

reference to value was used in a composite literal at line 128

The 3 lines of Go which triggered the analyzer:
for _, value := range views {
	values = append(values, map[string]interface{}{"view_name": &value})
}
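
The composite literal stores &value in a slice that outlives the loop, so all of the maps would share one pointer (pre-Go 1.22). A runnable sketch of the copy fix:

package main

import "fmt"

func main() {
	views := []string{"v1", "v2"}
	var values []map[string]interface{}
	for _, value := range views {
		value := value // without this copy (pre-Go 1.22), both maps would point at "v2"
		values = append(values, map[string]interface{}{"view_name": &value})
	}
	for _, m := range values {
		fmt.Println(*m["view_name"].(*string)) // v1 v2
	}
}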

Leave a reaction on this issue to contribute to the project by classifying this instance as a Bug 👎, Mitigated 👍, or Desirable Behavior 🚀
See the descriptions of the classifications here for more information.

commit ID: 0f24c6a5f6ca28c0086c49ad0ce2997994e7f841

FreifunkBremen/yanic: respond/collector.go; 3 LoC

Found a possible issue in FreifunkBremen/yanic at respond/collector.go

Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message.

The 3 lines of Go which triggered the analyzer:
for _, link := range coll.nodes.NodeLinks(node) {
	db.InsertLink(&link, node.Lastseen.GetTime())
}
Extra information the analyzer produced:
No path was found through the callgraph that could lead to a function which writes a pointer argument.

No path was found through the callgraph that could lead to a function which passes a pointer to third-party code.

root signature {InsertLink 2} was not found in the callgraph; reference was passed directly to third-party code

Leave a reaction on this issue to contribute to the project by classifying this instance as a Bug 👎, Mitigated 👍, or Desirable Behavior 🚀
See the descriptions of the classifications here for more information.

commit ID: aff906d734cdff279c7c7cfd17ccc54f70c6c45f

YuDaChao/awesomeProject: advance/closure.go; 6 LoC

Found a possible issue in YuDaChao/awesomeProject at advance/closure.go

Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message.

range-loop variable v used in defer or goroutine at line 27

The 6 lines of Go which triggered the analyzer:
for _, v := range arr {
	fmt.Println(v, &v) // 1 2 3 0xc00009c000
	defer func() {     // closure
		fmt.Println(v, &v) // 3 3 3 0xc00009c000
	}()
}

Leave a reaction on this issue to contribute to the project by classifying this instance as a Bug 👎, Mitigated 👍, or Desirable Behavior 🚀
See the descriptions of the classifications here for more information.

commit ID: 01dad193166f0bc3be51abe8d8a27cb3e964f201

Ridecell/ridecell-operator: pkg/controller/rabbitmq_vhost/components/vhost.go; 7 LoC

Found a possible issue in Ridecell/ridecell-operator at pkg/controller/rabbitmq_vhost/components/vhost.go

Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message.

reference to existingPolicy is reassigned at line 138

The 7 lines of Go which triggered the analyzer:
for _, existingPolicy := range existingPolicyList {
	name := existingPolicy.Name
	if strings.HasPrefix(name, instance.Spec.VhostName+"-") {
		name = name[len(instance.Spec.VhostName)+1:]
	}
	existingPolicies[name] = &existingPolicy
}
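
Every map entry here would point at the reused existingPolicy variable (pre-Go 1.22). A sketch of an index-based fix that takes the address of the slice element itself; Policy is a stand-in:

package main

import "fmt"

type Policy struct{ Name string }

func main() {
	existingPolicyList := []Policy{{"vhost-a"}, {"vhost-b"}}
	existingPolicies := map[string]*Policy{}
	for i := range existingPolicyList {
		p := &existingPolicyList[i] // address of the element, not of a reused loop variable
		existingPolicies[p.Name] = p
	}
	for name, p := range existingPolicies {
		fmt.Println(name, p.Name)
	}
}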

Leave a reaction on this issue to contribute to the project by classifying this instance as a Bug 👎, Mitigated 👍, or Desirable Behavior 🚀
See the descriptions of the classifications here for more information.

commit ID: b075480deb812fa8c51cf9e5147d33dd93f1a976

mackentan/GoProxyHunt: src/code.google.com/p/go.tools/cmd/vet/testdata/rangeloop.go; 6 LoC

Found a possible issue in mackentan/GoProxyHunt at src/code.google.com/p/go.tools/cmd/vet/testdata/rangeloop.go

Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message.

range-loop variable i used in defer or goroutine at line 19

The 6 lines of Go which triggered the analyzer:
for i, v := range s {
	defer func() {
		println(i) // ERROR "range variable i captured by func literal"
		println(v) // ERROR "range variable v captured by func literal"
	}()
}

Leave a reaction on this issue to contribute to the project by classifying this instance as a Bug 👎, Mitigated 👍, or Desirable Behavior 🚀
See the descriptions of the classifications here for more information.

commit ID: d48b11078a6769c36db11ed2fd819f38e571b74a

cnrancher/octopus: adaptors/mqtt/pkg/physical/device.go; 8 LoC

Found a possible issue in cnrancher/octopus at adaptors/mqtt/pkg/physical/device.go

Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message.

reference to newSpecProp was used in a composite literal at line 272

The 8 lines of Go which triggered the analyzer:
for idx, newSpecProp := range newSpecProps {
	// appends subscribe topic
	subscribeTopics = append(subscribeTopics, mqtt.SubscribeTopic{
		Index:      idx,
		Render:     getSubscribeRender(&newSpecProp),
		QoSPointer: (*byte)(newSpecProp.QoS),
	})
}

Leave a reaction on this issue to contribute to the project by classifying this instance as a Bug 👎, Mitigated 👍, or Desirable Behavior 🚀
See the descriptions of the classifications here for more information.

commit ID: 3793775c1da67af923abfe79660e9d8055b262b3

davecheney/benchjuju: src/github.com/juju/juju/apiserver/client/status.go; 11 LoC

Found a possible issue in davecheney/benchjuju at src/github.com/juju/juju/apiserver/client/status.go

Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message.

reference to ep was used in a composite literal at line 493

The 11 lines of Go which triggered the analyzer:
for _, ep := range relation.Endpoints() {
	eps = append(eps, params.EndpointStatus{
		ServiceName: ep.ServiceName,
		Name:        ep.Name,
		Role:        ep.Role,
		Subordinate: context.isSubordinate(&ep),
	})
	// these should match on both sides so use the last
	relationInterface = ep.Interface
	scope = ep.Scope
}

Leave a reaction on this issue to contribute to the project by classifying this instance as a Bug 👎, Mitigated 👍, or Desirable Behavior 🚀
See the descriptions of the classifications here for more information.

commit ID: 43fb99820e9158cabe9141573cf05c874352e988

pombreda/swarming-go: isolateserver/server/handlers.go; 11 LoC

Found a possible issue in pombreda/swarming-go at isolateserver/server/handlers.go

Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message.

range-loop variable v used in defer or goroutine at line 485

The 11 lines of Go which triggered the analyzer:
for i, v := range p {
	// Sends i+1 if the object is missing and -(i+1) if the object is present.
	go func() {
		k := aedmz.NewKey("ContentEntry", v.HexDigest, parentKey)
		if err := aedmz.Get(c, k, &v); err != nil {
			ch <- i + 1
		} else {
			ch <- -1 - i
		}
	}()
}
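
Both i and v are captured by the goroutine, so they may have advanced before it runs; passing them as arguments pins each iteration's values. A self-contained sketch (the datastore lookup is faked with a comparison):

package main

import "fmt"

func main() {
	p := []string{"d1", "d2", "d3"}
	ch := make(chan int, len(p))
	for i, v := range p {
		go func(i int, v string) { // parameters freeze the per-iteration values
			if v == "d2" { // stand-in for the real presence check
				ch <- i + 1
			} else {
				ch <- -1 - i
			}
		}(i, v)
	}
	for range p {
		fmt.Println(<-ch)
	}
}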

Leave a reaction on this issue to contribute to the project by classifying this instance as a Bug 👎, Mitigated 👍, or Desirable Behavior 🚀
See the descriptions of the classifications here for more information.

commit ID: fc179c16f2f88c3cc4840627f2d6f5d31e2a31d4

agiletechvn/HL-Loyalty: chaincode/loyalty/vendor/github.com/hyperledger/fabric/core/committer/txvalidator/validator.go; 18 LoC

Found a possible issue in agiletechvn/HL-Loyalty at chaincode/loyalty/vendor/github.com/hyperledger/fabric/core/committer/txvalidator/validator.go

Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message.

range-loop variable d used in defer or goroutine at line 211

The 18 lines of Go which triggered the analyzer:
for tIdx, d := range block.Data.Data {
	tIdxLcl := tIdx
	dLcl := d

	// ensure that we don't have too many concurrent validation workers
	v.support.Acquire(context.Background(), 1)

	go func() {
		defer v.support.Release(1)

		validateTx(&blockValidationRequest{
			d:     dLcl,
			block: block,
			tIdx:  tIdxLcl,
			v:     v,
		}, results)
	}()
}

Leave a reaction on this issue to contribute to the project by classifying this instance as a Bug 👎, Mitigated 👍, or Desirable Behavior 🚀
See the descriptions of the classifications here for more information.

commit ID: 745b648ccd7dc32867dd953dc0dfb1da1b097218

se0g1/cve-2018-1002101: test/e2e/framework/util.go; 25 LoC

Found a possible issue in se0g1/cve-2018-1002101 at test/e2e/framework/util.go

Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message.

The 25 lines of Go which triggered the analyzer:
for _, pod := range podList.Items {
	if len(ignoreLabels) != 0 && ignoreSelector.Matches(labels.Set(pod.Labels)) {
		continue
	}
	res, err := testutils.PodRunningReady(&pod)
	switch {
	case res && err == nil:
		nOk++
	case pod.Status.Phase == v1.PodSucceeded:
		Logf("The status of Pod %s is Succeeded which is unexpected", pod.ObjectMeta.Name)
		badPods = append(badPods, pod)
		// it doesn't make sense to wait for this pod
		return false, errors.New("unexpected Succeeded pod state")
	case pod.Status.Phase != v1.PodFailed:
		Logf("The status of Pod %s is %s (Ready = false), waiting for it to be either Running (with Ready = true) or Failed", pod.ObjectMeta.Name, pod.Status.Phase)
		notReady++
		badPods = append(badPods, pod)
	default:
		if metav1.GetControllerOf(&pod) == nil {
			Logf("Pod %s is Failed, but it's not controlled by a controller", pod.ObjectMeta.Name)
			badPods = append(badPods, pod)
		}
		//ignore failed pods that are controlled by some controller
	}
}
Click here to show extra information the analyzer produced.
No path was found through the callgraph that could lead to a function which writes a pointer argument.

No path was found through the callgraph that could lead to a function which passes a pointer to third-party code.

root signature {GetControllerOf 1} was not found in the callgraph; reference was passed directly to third-party code
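
When the helper only reads through the pointer, as the analyzer's extra information suggests for testutils.PodRunningReady and metav1.GetControllerOf, the report is usually benign. On pre-1.22 toolchains a one-line copy still makes the aliasing question moot; a hedged sketch with illustrative stand-in types:

type pod struct{ Ready bool }

// runningReady stands in for a read-only helper such as PodRunningReady.
func runningReady(p *pod) bool { return p.Ready }

func countReady(items []pod) int {
	n := 0
	for _, p := range items {
		p := p // copy: &p below can no longer alias the shared loop slot (pre-Go 1.22)
		if runningReady(&p) {
			n++
		}
	}
	return n
}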

Leave a reaction on this issue to contribute to the project by classifying this instance as a Bug 👎, Mitigated 👍, or Desirable Behavior 🚀
See the descriptions of the classifications here for more information.

commit ID: 08eb5fb80339f2738e62ab25142bde16debd4a60

volcano-sh/volcano: test/e2e/jobp/util.go; 7 LoC

Found a possible issue in volcano-sh/volcano at test/e2e/jobp/util.go

Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message.

Click here to see the code in its original context.

Click here to show the 7 line(s) of Go which triggered the analyzer.
for _, pod := range pods.Items {
	if !metav1.IsControlledBy(&pod, job) {
		continue
	}
	duplicatePod := pod.DeepCopy()
	tasks = append(tasks, duplicatePod)
}
Click here to show extra information the analyzer produced.
No path was found through the callgraph that could lead to a function which writes a pointer argument.

No path was found through the callgraph that could lead to a function which passes a pointer to third-party code.

root signature {IsControlledBy 2} was not found in the callgraph; reference was passed directly to third-party code
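
Here the loop variable's address is passed to metav1.IsControlledBy only for a read, and the value that is retained comes from pod.DeepCopy(), which already yields an independent object, so nothing from the shared loop slot escapes. An equivalent form that avoids taking the loop variable's address at all is to index the backing slice; a sketch:

for i := range pods.Items {
	pod := &pods.Items[i] // points into the backing array, not at the loop variable
	if !metav1.IsControlledBy(pod, job) {
		continue
	}
	tasks = append(tasks, pod.DeepCopy())
}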

Leave a reaction on this issue to contribute to the project by classifying this instance as a Bug 👎, Mitigated 👍, or Desirable Behavior 🚀
See the descriptions of the classifications here for more information.

commit ID: 43c840f95a51ca86f635361579f3af6c8939633b

THUNDERGROOVE/SDETool: cmd/sdedumper/dumper.go; 7 LoC

Found a possible issue in THUNDERGROOVE/SDETool at cmd/sdedumper/dumper.go

Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message.

range-loop variable v used in defer or goroutine at line 176

Click here to see the code in its original context.

Click here to show the 7 line(s) of Go which triggered the analyzer.
for _, v := range chans {
	go func() {
		for t := range v {
			out <- t // Yeah yeah shutup go vet
		}
	}()
}
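
The "shutup go vet" comment suggests the author knew this fan-in would be flagged. The usual fix is to hand the channel to the goroutine as a parameter, which copies it at call time and is safe on every Go version. A minimal self-contained sketch of the pattern:

package main

import (
	"fmt"
	"sync"
)

// merge fans several input channels into one output channel.
func merge(chans ...<-chan int) <-chan int {
	out := make(chan int)
	var wg sync.WaitGroup
	for _, v := range chans {
		wg.Add(1)
		go func(c <-chan int) { // c is a per-call copy of v
			defer wg.Done()
			for t := range c {
				out <- t
			}
		}(v)
	}
	go func() { wg.Wait(); close(out) }()
	return out
}

func main() {
	a := make(chan int, 2)
	a <- 1
	a <- 2
	close(a)
	for t := range merge(a) {
		fmt.Println(t)
	}
}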

Leave a reaction on this issue to contribute to the project by classifying this instance as a Bug 👎, Mitigated 👍, or Desirable Behavior 🚀
See the descriptions of the classifications here for more information.

commit ID: c7d08917f85c208495e6463ac1ea5bac01dd0650

mackentan/GoProxyHunt: src/code.google.com/p/go.tools/cmd/vet/testdata/rangeloop.go; 6 LoC

Found a possible issue in mackentan/GoProxyHunt at src/code.google.com/p/go.tools/cmd/vet/testdata/rangeloop.go

Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message.

range-loop variable i used in defer or goroutine at line 13

Click here to see the code in its original context.

Click here to show the 6 line(s) of Go which triggered the analyzer.
for i, v := range s {
	go func() {
		println(i) // ERROR "range variable i captured by func literal"
		println(v) // ERROR "range variable v captured by func literal"
	}()
}

Leave a reaction on this issue to contribute to the project by classifying this instance as a Bug 👎, Mitigated 👍, or Desirable Behavior 🚀
See the descriptions of the classifications here for more information.

commit ID: d48b11078a6769c36db11ed2fd819f38e571b74a

cybertk/worktile-events-to-slack: Godeps/_workspace/src/github.com/stretchr/testify/mock/mock.go; 11 LoC

Found a possible issue in cybertk/worktile-events-to-slack at Godeps/_workspace/src/github.com/stretchr/testify/mock/mock.go

Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message.

reference to call is reassigned at line 212

Click here to see the code in its original context.

Click here to show the 11 line(s) of Go which triggered the analyzer.
for _, call := range m.expectedCalls() {
	if call.Method == method {

		_, tempDiffCount := call.Arguments.Diff(arguments)
		if tempDiffCount < diffCount || diffCount == 0 {
			diffCount = tempDiffCount
			closestCall = &call
		}

	}
}
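
closestCall = &call stores the address of the loop variable, so on pre-1.22 toolchains the "closest" call silently becomes whichever expected call the loop examined last. Copying before taking the address keeps each candidate stable; a sketch against the shown loop:

for _, call := range m.expectedCalls() {
	if call.Method == method {
		_, tempDiffCount := call.Arguments.Diff(arguments)
		if tempDiffCount < diffCount || diffCount == 0 {
			diffCount = tempDiffCount
			call := call // copy, so the retained pointer survives later iterations
			closestCall = &call
		}
	}
}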

Leave a reaction on this issue to contribute to the project by classifying this instance as a Bug 👎, Mitigated 👍, or Desirable Behavior 🚀
See the descriptions of the classifications here for more information.

commit ID: 4b04ff6f61ce6ce93fdeb94d77735735ef30e47c

se0g1/cve-2018-1002101: plugin/pkg/admission/podpreset/admission.go; 4 LoC

Found a possible issue in se0g1/cve-2018-1002101 at plugin/pkg/admission/podpreset/admission.go

Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message.

function call which takes a reference to ctr at line 384 may start a goroutine

Click here to see the code in its original context.

Click here to show the 4 line(s) of Go which triggered the analyzer.
for i, ctr := range pod.Spec.Containers {
	applyPodPresetsOnContainer(&ctr, podPresets)
	pod.Spec.Containers[i] = ctr
}
Click here to show extra information the analyzer produced.
The following graphviz dot graph describes paths through the callgraph that could lead to a function calling a goroutine:
digraph G {
  "(New, 1)" -> {"(New, 2)";"(Run, 1)";"(Run, 2)";"(Start, 1)";"(newClient, 1)";}
  "(List, 1)" -> {"(List, 2)";}
  "(addConnIfNeeded, 3)" -> {}
  "(Write, 1)" -> {"(Copy, 2)";}
  "(DecodeElement, 2)" -> {"(unmarshal, 2)";}
  "(RoundTrip, 1)" -> {"(awaitOpenSlotForRequest, 1)";}
  "(awaitOpenSlotForRequest, 1)" -> {}
  "(Add, 1)" -> {"(add, 1)";"(New, 1)";"(Search, 2)";"(Do, 3)";"(Has, 1)";}
  "(newTLSListener, 2)" -> {}
  "(Put, 1)" -> {"(Update, 3)";}
  "(add, 2)" -> {"(add, 1)";}
  "(NewClientConn, 3)" -> {"(clientHandshake, 2)";}
  "(dial, 1)" -> {"(dialWithoutProxy, 1)";}
  "(mergeEnv, 2)" -> {"(copy, 2)";}
  "(Remove, 1)" -> {"(Delete, 1)";"(New, 1)";"(Search, 2)";}
  "(Delete, 1)" -> {"(Unmarshal, 1)";}
  "(tryUpgrade, 2)" -> {}
  "(Accept, 2)" -> {}
  "(newWatchBroadcast, 3)" -> {}
  "(Run, 1)" -> {"(startControllers, 7)";"(Register, 2)";"(NewSSHTunnelList, 4)";"(Serve, 3)";"(Run, 2)";"(Write, 1)";"(Encode, 2)";}
  "(run, 2)" -> {"(UpdateTransport, 4)";"(RunKubelet, 4)";}
  "(List, 2)" -> {}
  "(Encode, 2)" -> {"(Write, 1)";}
  "(NewTimeoutListener, 5)" -> {"(wrapTLS, 4)";}
  "(post, 1)" -> {"(RoundTrip, 1)";"(GracefulClose, 1)";}
  "(copy, 2)" -> {"(findObject, 2)";"(get, 1)";}
  "(Serve, 3)" -> {"(RunServer, 4)";}
  "(decode, 1)" -> {"(Write, 1)";}
  "(RemoveMember, 2)" -> {"(removeMember, 2)";}
  "(applyPodPresetsOnContainer, 2)" -> {"(mergeEnv, 2)";}
  "(Has, 1)" -> {"(Do, 3)";}
  "(RunServer, 4)" -> {"(NewListener, 2)";}
  "(Decode, 1)" -> {"(decode, 1)";"(DecodeElement, 2)";}
  "(NewListener, 2)" -> {"(NewTimeoutListener, 5)";}
  "(DialURL, 2)" -> {"(Dial, 3)";}
  "(findObject, 2)" -> {"(get, 1)";}
  "(Do, 3)" -> {}
  "(startControllers, 7)" -> {}
  "(unmarshal, 2)" -> {"(Copy, 2)";}
  "(GracefulClose, 1)" -> {"(Copy, 2)";}
  "(NewMainKubelet, 30)" -> {}
  "(newClientTransport, 6)" -> {}
  "(get, 1)" -> {"(SetTransportDefaults, 1)";"(Get, 1)";"(Add, 2)";}
  "(ConfigureTransport, 1)" -> {"(configureTransport, 1)";}
  "(ServeHTTP, 2)" -> {"(handleHttps, 2)";"(RemoveMember, 2)";"(tryUpgrade, 2)";}
  "(handleHttps, 2)" -> {}
  "(Add, 2)" -> {"(Add, 1)";"(New, 1)";"(add, 2)";"(Search, 2)";"(Remove, 1)";"(Write, 3)";}
  "(newWatchBroadcasts, 1)" -> {}
  "(Update, 3)" -> {"(Accept, 2)";}
  "(Terminate, 1)" -> {}
  "(configureTransport, 1)" -> {"(addConnIfNeeded, 3)";}
  "(Search, 2)" -> {"(List, 1)";}
  "(writeJSON, 4)" -> {"(Write, 1)";}
  "(CreateAndInitKubelet, 30)" -> {"(NewMainKubelet, 30)";}
  "(Dial, 3)" -> {"(NewClientConn, 3)";}
  "(SetTransportDefaults, 1)" -> {"(ConfigureTransport, 1)";}
  "(removeMember, 2)" -> {"(Terminate, 1)";}
  "(grpcHealthCheck, 2)" -> {"(dial, 1)";}
  "(CreateServerChain, 2)" -> {"(NonBlockingRun, 4)";"(createAggregatorServer, 3)";}
  "(Get, 2)" -> {"(Accept, 2)";"(Do, 3)";}
  "(createAggregatorServer, 3)" -> {}
  "(unmarshalBody, 2)" -> {"(Copy, 2)";}
  "(updateTransport, 5)" -> {}
  "(startKubelet, 5)" -> {}
  "(dial, 2)" -> {"(DialURL, 2)";}
  "(Get, 1)" -> {"(Remove, 1)";"(Put, 1)";"(Add, 2)";"(Decode, 1)";"(Add, 1)";"(New, 1)";"(Get, 2)";}
  "(NewSSHTunnelList, 4)" -> {}
  "(addEndpoint, 1)" -> {}
  "(Start, 1)" -> {"(Run, 1)";}
  "(NonBlockingRun, 4)" -> {}
  "(Write, 3)" -> {"(writeJSON, 4)";"(writeXML, 4)";}
  "(wrapTLS, 4)" -> {"(newTLSListener, 2)";}
  "(dialWithoutProxy, 1)" -> {"(Dial, 3)";}
  "(newClient, 1)" -> {"(dial, 2)";"(grpcHealthCheck, 2)";}
  "(New, 2)" -> {"(Start, 1)";}
  "(Register, 2)" -> {"(addEndpoint, 1)";}
  "(writeXML, 4)" -> {"(Write, 1)";}
  "(Unmarshal, 1)" -> {"(unmarshalBody, 2)";}
  "(UpdateTransport, 4)" -> {"(updateTransport, 5)";}
  "(clientHandshake, 2)" -> {"(newClientTransport, 6)";}
  "(add, 1)" -> {"(post, 1)";"(newWatchBroadcast, 3)";"(newWatchBroadcasts, 1)";}
  "(Copy, 2)" -> {"(ServeHTTP, 2)";}
  "(RunKubelet, 4)" -> {"(startKubelet, 5)";"(CreateAndInitKubelet, 30)";}
  "(Run, 2)" -> {"(run, 2)";"(CreateServerChain, 2)";"(Write, 1)";"(List, 1)";}
}
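
The shown loop is a common safe idiom: applyPodPresetsOnContainer mutates a copy, and the copy is written back to pod.Spec.Containers[i] before the next iteration. The goroutine paths in the callgraph above run through generically named helpers (copy, get, SetTransportDefaults), which looks more like callgraph over-approximation than a real escape of &ctr. An equivalent form that sidesteps the report by mutating the slice element in place; a sketch:

for i := range pod.Spec.Containers {
	applyPodPresetsOnContainer(&pod.Spec.Containers[i], podPresets)
}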

Leave a reaction on this issue to contribute to the project by classifying this instance as a Bug 👎, Mitigated 👍, or Desirable Behavior 🚀
See the descriptions of the classifications here for more information.

commit ID: 08eb5fb80339f2738e62ab25142bde16debd4a60

SolarBankers/Solar-Bankers-Coin: src/visor/visor.go; 5 LoC

Found a possible issue in SolarBankers/Solar-Bankers-Coin at src/visor/visor.go

Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message.

Click here to see the code in its original context.

Click here to show the 5 line(s) of Go which triggered the analyzer.
for _, tx := range txns {
	if f(&tx, otherFlts...) {
		retTxns = append(retTxns, tx)
	}
}
Click here to show extra information the analyzer produced.
No path was found through the callgraph that could lead to a function which writes a pointer argument.

No path was found through the callgraph that could lead to a function which passes a pointer to third-party code.

root signature {f 2} was not found in the callgraph; reference was passed directly to third-party code

Leave a reaction on this issue to contribute to the project by classifying this instance as a Bug 👎, Mitigated 👍, or Desirable Behavior 🚀
See the descriptions of the classifications here for more information.

commit ID: a7c6652020de0267ca4eacd8fbc97687a58ef372

gabemontero/obu: pkg/cmd/cli/cmd/mirrors.go; 3 LoC

Found a possible issue in gabemontero/obu at pkg/cmd/cli/cmd/mirrors.go

Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message.

reference to policy was used in a composite literal at line 70

Click here to see the code in its original context.

Click here to show the 3 line(s) of Go which triggered the analyzer.
for _, policy := range imageContentSourcePolicies.Items {
	polices = append(polices, &policy)
}
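
Appending &policy on every iteration is the canonical form of this bug: on toolchains before Go 1.22, every element of polices ends up pointing at the same variable, which finally holds the last item. Indexing the backing slice yields a distinct pointer per element; a sketch (keeping the original polices identifier):

for i := range imageContentSourcePolicies.Items {
	polices = append(polices, &imageContentSourcePolicies.Items[i])
}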

Leave a reaction on this issue to contribute to the project by classifying this instance as a Bug 👎, Mitigated 👍, or Desirable Behavior 🚀
See the descriptions of the classifications here for more information.

commit ID: 63751a6114940b461ab603a2352cc59728bae20f

mackentan/GoProxyHunt: src/code.google.com/p/go.tools/cmd/vet/testdata/rangeloop.go; 5 LoC

Found a possible issue in mackentan/GoProxyHunt at src/code.google.com/p/go.tools/cmd/vet/testdata/rangeloop.go

Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message.

range-loop variable i used in defer or goroutine at line 25

Click here to see the code in its original context.

Click here to show the 5 line(s) of Go which triggered the analyzer.
for i := range s {
	go func() {
		println(i) // ERROR "range variable i captured by func literal"
	}()
}

Leave a reaction on this issue to contribute to the project by classifying this instance as a Bug 👎, Mitigated 👍, or Desirable Behavior 🚀
See the descriptions of the classifications here for more information.

commit ID: d48b11078a6769c36db11ed2fd819f38e571b74a

mickeyzzc/ucloud_exporter: umonitor/udb.go; 42 LoC

Found a possible issue in mickeyzzc/ucloud_exporter at umonitor/udb.go

Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message.

reference to udbTypeName is reassigned at line 94

Click here to see the code in its original context.

Click here to show the 42 line(s) of Go which triggered the analyzer.
for _, udbTypeName := range typeUDBnamelist {
	udbReq.ClassType = &udbTypeName
	udbList, _ := uclient.DescribeUDBInstance(udbReq)
	if udbList.TotalCount == 0 {
		selfConf.logger.Debug("Not resource.",
			zap.String("Project", string(projectName)),
			zap.String("Region", string(region)),
			zap.String("Type", string(udbTypeName)),
		)
		udbChan <- nil
		continue
	}
	// anonymous function
	selfFun := func(db interface{}, selfProjectID, selfProjectName, selfRegion, sqlType string) {
		switch db.(type) {
		case udb.UDBInstanceSet:
			masterResource(db.(udb.UDBInstanceSet), selfProjectID, selfProjectName, selfRegion, sqlType)
		case udb.UDBSlaveInstanceSet:
			slaveResource(db.(udb.UDBSlaveInstanceSet), selfProjectID, selfProjectName, selfRegion, sqlType)
		default:
			selfConf.logger.Warn("resource is not udb ",
				zap.Any("Type", reflect.TypeOf(db)),
			)
			return
		}
	}
	//
	for i := 0; i < udbList.TotalCount; i = i + limit {
		offset = i
		if offset > 0 {
			udbList, _ = uclient.DescribeUDBInstance(udbReq)
		}

		for _, udb := range udbList.DataSet {

			selfFun(udb, projectID, projectName, region, udbTypeName)
			for _, slaveDB := range udb.DataSet {
				selfFun(slaveDB, projectID, projectName, region, udbTypeName)
			}
		}
	}
}

Leave a reaction on this issue to contribute to the project by classifying this instance as a Bug 👎, Mitigated 👍, or Desirable Behavior 🚀
See the descriptions of the classifications here for more information.

commit ID: cdab3430ab63531a463cbc5b48042be65b14a0c7

hackerlist/monty: app/controllers/probe.go; 66 LoC

Found a possible issue in hackerlist/monty at app/controllers/probe.go

Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message.

function call at line 112 may store a reference to p

Click here to see the code in its original context.

Click here to show the 66 line(s) of Go which triggered the analyzer.
for _, p := range probes {
	/* make sure we run the test at the right frequency */
	last := p.LastRun
	if t.Sub(last).Seconds() < p.Frequency {
		revel.INFO.Printf("skipping probe %d - timeout not hit", p.Id)
		/* not enough time elapsed - skip */
		continue
	}

	/* get the script for this probe */
	script, err := txn.Get(&models.Script{}, p.ScriptId)
	if err != nil {
		/* if we can't get the script, the probe can't be run */
		revel.ERROR.Printf("can't run probe: missing script %d", p.ScriptId)
		continue
	}

	if script == nil {
		revel.ERROR.Printf("can't run probe: missing script %d", p.ScriptId)
		continue
	}

	/* set up and run the probe via revel jobs api.
	 * the result comes back through the Error channel. */

	scr := script.(*models.Script)

	revel.TRACE.Printf("node %d probe %d script (%d) %q args %s", n.Id, p.Id, scr.Id, scr.Code, p.Arguments)
	prunner := NewProbeRunner(&p, scr)

	jobs.Now(prunner)

	res := &models.Result{
		NodeId:  n.Id,
		ProbeId: p.Id,
	}

	err = <-prunner.Error
	if err != nil {
		res.Passed = false
		res.StatusMsg = err.Error()
	} else {
		res.Passed = true
		res.StatusMsg = ""
	}
	res.Timestamp = time.Now()
	err = txn.Insert(res)
	if err != nil {
		revel.ERROR.Printf("runchecks: %s", err)
		continue
	}
	p.LastResultId = res.Id
	_, err = txn.Update(&p)
	if err != nil {
		revel.ERROR.Printf("runchecks: %s", err)
		continue
	}

	revel.TRACE.Printf("probe passed? %d %t %q", p.Id, res.Passed, res.StatusMsg)

	// if the probe failed, send the result to the callback url in the node.
	if res.Passed == false {
		revel.WARN.Printf("probe failed, running callback")
		go pjob.Failed(&n, &p, res)
	}
}
Click here to show extra information the analyzer produced.
The following graphviz dot graph describes paths through the callgraph that could lead to a function which writes a pointer argument:
digraph G {
  "(Failed, 3)" -> {}
}

No path was found through the callgraph that could lead to a function which passes a pointer to third-party code.
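
Because pjob.Failed runs on its own goroutine, &p (and &n from the enclosing loop shown in the later monty entry) can be dereferenced after the loop has moved on. On pre-1.22 toolchains, copying before spawning keeps the pointers stable; a sketch of only the relevant lines:

for _, p := range probes {
	p := p // copy: the &p handed to the goroutine below stays fixed
	// ... run the probe and build res as in the original ...
	if !res.Passed {
		go pjob.Failed(&n, &p, res)
	}
}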


Leave a reaction on this issue to contribute to the project by classifying this instance as a Bug 👎, Mitigated 👍, or Desirable Behavior 🚀
See the descriptions of the classifications here for more information.

commit ID: d79daf5bf53215e13cf1fe6344ff9164596959ca

davecheney/benchjuju: src/github.com/juju/juju/provider/openstack/provider.go; 12 LoC

Found a possible issue in davecheney/benchjuju at src/github.com/juju/juju/provider/openstack/provider.go

Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message.

reference to fip is reassigned at line 736

Click here to see the code in its original context.

Click here to show the 12 line(s) of Go which triggered the analyzer.
for _, fip := range fips {
	newfip = &fip
	if fip.InstanceId != nil && *fip.InstanceId != "" {
		// unavailable, skip
		newfip = nil
		continue
	} else {
		logger.Debugf("found unassigned public ip: %v", newfip.IP)
		// unassigned, we can use it
		return newfip, nil
	}
}

Leave a reaction on this issue to contribute to the project by classifying this instance as a Bug 👎, Mitigated 👍, or Desirable Behavior 🚀
See the descriptions of the classifications here for more information.

commit ID: 43fb99820e9158cabe9141573cf05c874352e988

davecheney/benchjuju: src/github.com/juju/juju/provider/vsphere/client.go; 8 LoC

Found a possible issue in davecheney/benchjuju at src/github.com/juju/juju/provider/vsphere/client.go

Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message.

reference to pool is reassigned at line 297

Click here to see the code in its original context.

Click here to show the 8 line(s) of Go which triggered the analyzer.
for _, pool := range ipPools.Returnval {
	for _, association := range pool.NetworkAssociation {
		if association.NetworkName == vmNet.Network {
			netPool = &pool
			break
		}
	}
}
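
This one is subtler than most: break leaves only the inner loop, so the outer loop keeps running and, on pre-1.22 toolchains, keeps overwriting the variable that netPool points at; even on Go 1.22+ a later match still replaces netPool. If the intent is to keep the first matching pool, a per-iteration copy plus a labeled break expresses it directly; a sketch:

outer:
for _, pool := range ipPools.Returnval {
	pool := pool // copy so netPool is not overwritten by later iterations
	for _, association := range pool.NetworkAssociation {
		if association.NetworkName == vmNet.Network {
			netPool = &pool
			break outer
		}
	}
}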

Leave a reaction on this issue to contribute to the project by classifying this instance as a Bug 👎, Mitigated 👍, or Desirable Behavior 🚀
See the descriptions of the classifications here for more information.

commit ID: 43fb99820e9158cabe9141573cf05c874352e988

cloudfoundry/diego-release: src/github.com/docker/docker/vendor/github.com/hashicorp/serf/serf/coalesce_member.go; 6 LoC

Found a possible issue in cloudfoundry/diego-release at src/github.com/docker/docker/vendor/github.com/hashicorp/serf/serf/coalesce_member.go

Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message.

reference to m was used in a composite literal at line 35

Click here to see the code in its original context.

Click here to show the 6 line(s) of Go which triggered the analyzer.
for _, m := range e.Members {
	c.latestEvents[m.Name] = coalesceEvent{
		Type:   e.Type,
		Member: &m,
	}
}
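
Storing Member: &m hands every map entry a pointer to the same loop variable on pre-1.22 toolchains, so all coalesced events end up describing the last member seen in e.Members. A per-iteration copy gives each entry its own Member; a sketch:

for _, m := range e.Members {
	m := m // copy so each coalesceEvent keeps its own Member
	c.latestEvents[m.Name] = coalesceEvent{
		Type:   e.Type,
		Member: &m,
	}
}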

Leave a reaction on this issue to contribute to the project by classifying this instance as a Bug 👎, Mitigated 👍, or Desirable Behavior 🚀
See the descriptions of the classifications here for more information.

commit ID: 24e7c50250eead4794ae33dc1e170ed52e2f9cc0

davecheney/benchjuju: src/github.com/juju/juju/environs/simplestreams/simplestreams.go; 10 LoC

Found a possible issue in davecheney/benchjuju at src/github.com/juju/juju/environs/simplestreams/simplestreams.go

Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message.

Click here to see the code in its original context.

Click here to show the 10 line(s) of Go which triggered the analyzer.
for _, metadataCatalog := range metadata.Products {
	for _, ItemCollection := range metadataCatalog.Items {
		for _, item := range ItemCollection.Items {
			coll := *ItemCollection
			inherit(&metadataCatalog, metadata)
			inherit(&coll, metadataCatalog)
			inherit(item, &coll)
		}
	}
}
Click here to show extra information the analyzer produced.
No path was found through the callgraph that could lead to a function which writes a pointer argument.

No path was found through the callgraph that could lead to a function which passes a pointer to third-party code.

root signature {inherit 2} was not found in the callgraph; reference was passed directly to third-party code

Leave a reaction on this issue to contribute to the project by classifying this instance as a Bug 👎, Mitigated 👍, or Desirable Behavior 🚀
See the descriptions of the classifications here for more information.

commit ID: 43fb99820e9158cabe9141573cf05c874352e988

mackentan/GoProxyHunt: src/code.google.com/p/go.tools/cmd/vet/testdata/rangeloop.go; 6 LoC

Found a possible issue in mackentan/GoProxyHunt at src/code.google.com/p/go.tools/cmd/vet/testdata/rangeloop.go

Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message.

range-loop variable i used in defer or goroutine at line 35

Click here to see the code in its original context.

Click here to show the 6 line(s) of Go which triggered the analyzer.
for i, v := range s {
	go func() {
		println(i, v)
	}()
	println("unfortunately, we don't catch the error above because of this statement")
}

Leave a reaction on this issue to contribute to the project by classifying this instance as a Bug 👎, Mitigated 👍, or Desirable Behavior 🚀
See the descriptions of the classifications here for more information.

commit ID: d48b11078a6769c36db11ed2fd819f38e571b74a

cloudfoundry/diego-release: src/github.com/docker/docker/daemon/metrics.go; 6 LoC

Found a possible issue in cloudfoundry/diego-release at src/github.com/docker/docker/daemon/metrics.go

Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message.

range-loop variable p used in defer or goroutine at line 124

Click here to see the code in its original context.

Click here to show the 6 line(s) of Go which triggered the analyzer.
for _, p := range ls {
	go func() {
		defer wg.Done()
		pluginStopMetricsCollection(p)
	}()
}

Leave a reaction on this issue to contribute to the project by classifying this instance as a Bug 👎, Mitigated 👍, or Desirable Behavior 🚀
See the descriptions of the classifications here for more information.

commit ID: 24e7c50250eead4794ae33dc1e170ed52e2f9cc0

nttcom/terraform-provider-ecl: ecl/clientconfig/requests.go; 3 LoC

Found a possible issue in nttcom/terraform-provider-ecl at ecl/clientconfig/requests.go

Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message.

reference to v is reassigned at line 193

Click here to see the code in its original context.

Click here to show the 3 line(s) of Go which triggered the analyzer.
for _, v := range secureClouds {
	cloud = &v
}

Leave a reaction on this issue to contribute to the project by classifying this instance as a Bug 👎, Mitigated 👍, or Desirable Behavior 🚀
See the descriptions of the classifications here for more information.

commit ID: 9172123418b875e6a6980f2078627576f24f4100

davecheney/benchjuju: src/github.com/Azure/azure-sdk-for-go/core/tls/handshake_client.go; 38 LoC

Found a possible issue in davecheney/benchjuju at src/github.com/Azure/azure-sdk-for-go/core/tls/handshake_client.go

Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message.

reference to chain is reassigned at line 253

Click here to see the code in its original context.

Click here to show the 38 line(s) of Go which triggered the analyzer.
for i, chain := range c.config.Certificates {
	if !rsaAvail && !ecdsaAvail {
		continue
	}

	for j, cert := range chain.Certificate {
		x509Cert := chain.Leaf
		// parse the certificate if this isn't the leaf
		// node, or if chain.Leaf was nil
		if j != 0 || x509Cert == nil {
			if x509Cert, err = x509.ParseCertificate(cert); err != nil {
				c.sendAlert(alertInternalError)
				return errors.New("tls: failed to parse client certificate #" + strconv.Itoa(i) + ": " + err.Error())
			}
		}

		switch {
		case rsaAvail && x509Cert.PublicKeyAlgorithm == x509.RSA:
		case ecdsaAvail && x509Cert.PublicKeyAlgorithm == x509.ECDSA:
		default:
			continue findCert
		}

		if len(certReq.certificateAuthorities) == 0 {
			// they gave us an empty list, so just take the
			// first RSA cert from c.config.Certificates
			chainToSend = &chain
			break findCert
		}

		for _, ca := range certReq.certificateAuthorities {
			if bytes.Equal(x509Cert.RawIssuer, ca) {
				chainToSend = &chain
				break findCert
			}
		}
	}
}

Leave a reaction on this issue to contribute to the project by classifying this instance as a Bug 👎, Mitigated 👍, or Desirable Behavior 🚀
See the descriptions of the classifications here for more information.

commit ID: 43fb99820e9158cabe9141573cf05c874352e988

se0g1/cve-2018-1002101: pkg/registry/core/service/ipallocator/controller/repair.go; 41 LoC

Found a possible issue in se0g1/cve-2018-1002101 at pkg/registry/core/service/ipallocator/controller/repair.go

Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message.

Click here to see the code in its original context.

Click here to show the 41 line(s) of Go which triggered the analyzer.
for _, svc := range list.Items {
	if !helper.IsServiceIPSet(&svc) {
		// didn't need a cluster IP
		continue
	}
	ip := net.ParseIP(svc.Spec.ClusterIP)
	if ip == nil {
		// cluster IP is corrupt
		c.recorder.Eventf(&svc, v1.EventTypeWarning, "ClusterIPNotValid", "Cluster IP %s is not a valid IP; please recreate service", svc.Spec.ClusterIP)
		runtime.HandleError(fmt.Errorf("the cluster IP %s for service %s/%s is not a valid IP; please recreate", svc.Spec.ClusterIP, svc.Name, svc.Namespace))
		continue
	}
	// mark it as in-use
	switch err := rebuilt.Allocate(ip); err {
	case nil:
		if stored.Has(ip) {
			// remove it from the old set, so we can find leaks
			stored.Release(ip)
		} else {
			// cluster IP doesn't seem to be allocated
			c.recorder.Eventf(&svc, v1.EventTypeWarning, "ClusterIPNotAllocated", "Cluster IP %s is not allocated; repairing", ip)
			runtime.HandleError(fmt.Errorf("the cluster IP %s for service %s/%s is not allocated; repairing", ip, svc.Name, svc.Namespace))
		}
		delete(c.leaks, ip.String()) // it is used, so it can't be leaked
	case ipallocator.ErrAllocated:
		// cluster IP is duplicate
		c.recorder.Eventf(&svc, v1.EventTypeWarning, "ClusterIPAlreadyAllocated", "Cluster IP %s was assigned to multiple services; please recreate service", ip)
		runtime.HandleError(fmt.Errorf("the cluster IP %s for service %s/%s was assigned to multiple services; please recreate", ip, svc.Name, svc.Namespace))
	case err.(*ipallocator.ErrNotInRange):
		// cluster IP is out of range
		c.recorder.Eventf(&svc, v1.EventTypeWarning, "ClusterIPOutOfRange", "Cluster IP %s is not within the service CIDR %s; please recreate service", ip, c.network)
		runtime.HandleError(fmt.Errorf("the cluster IP %s for service %s/%s is not within the service CIDR %s; please recreate", ip, svc.Name, svc.Namespace, c.network))
	case ipallocator.ErrFull:
		// somehow we are out of IPs
		c.recorder.Eventf(&svc, v1.EventTypeWarning, "ServiceCIDRFull", "Service CIDR %s is full; you must widen the CIDR in order to create new services", c.network)
		return fmt.Errorf("the service CIDR %s is full; you must widen the CIDR in order to create new services", c.network)
	default:
		c.recorder.Eventf(&svc, v1.EventTypeWarning, "UnknownError", "Unable to allocate cluster IP %s due to an unknown error", ip)
		return fmt.Errorf("unable to allocate cluster IP %s for service %s/%s due to an unknown error, exiting: %v", ip, svc.Name, svc.Namespace, err)
	}
}
Click here to show extra information the analyzer produced.
No path was found through the callgraph that could lead to a function which writes a pointer argument.

No path was found through the callgraph that could lead to a function which passes a pointer to third-party code.

root signature {Eventf 5} was not found in the callgraph; reference was passed directly to third-party code

Leave a reaction on this issue to contribute to the project by classifying this instance as a Bug 👎, Mitigated 👍, or Desirable Behavior 🚀
See the descriptions of the classifications here for more information.

commit ID: 08eb5fb80339f2738e62ab25142bde16debd4a60

dawnbass68/maddcash: api/xpub.go; 25 LoC

Found a possible issue in dawnbass68/maddcash at api/xpub.go

Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message.

Click here to see the code in its original context.

Click here to show the 25 line(s) of Go which triggered the analyzer.
for _, txid := range newTxids {
	// the same tx can have multiple addresses from the same xpub, get it from backend it only once
	tx, foundTx := txmMap[txid.txid]
	if !foundTx {
		tx, err = w.GetTransaction(txid.txid, false, false)
		// mempool transaction may fail
		if err != nil || tx == nil {
			glog.Warning("GetTransaction in mempool: ", err)
			continue
		}
		txmMap[txid.txid] = tx
	}
	// skip already confirmed txs, mempool may be out of sync
	if tx.Confirmations == 0 {
		if !foundTx {
			unconfirmedTxs++
		}
		uBalSat.Add(&uBalSat, tx.getAddrVoutValue(ad.addrDesc))
		uBalSat.Sub(&uBalSat, tx.getAddrVinValue(ad.addrDesc))
		// mempool txs are returned only on the first page, uniquely and filtered
		if page == 0 && !foundTx && (txidFilter == nil || txidFilter(&txid, ad)) {
			mempoolEntries = append(mempoolEntries, bchain.MempoolTxidEntry{Txid: txid.txid, Time: uint32(tx.Blocktime)})
		}
	}
}
Click here to show extra information the analyzer produced.
No path was found through the callgraph that could lead to a function which writes a pointer argument.

No path was found through the callgraph that could lead to a function which passes a pointer to third-party code.

root signature {txidFilter 2} was not found in the callgraph; reference was passed directly to third-party code

Leave a reaction on this issue to contribute to the project by classifying this instance as a Bug 👎, Mitigated 👍, or Desirable Behavior 🚀
See the descriptions of the classifications here for more information.

commit ID: c91c12cb36e5191b232bfbde598649fec2e1c993

cnrancher/octopus: test/util/node/get.go; 5 LoC

Found a possible issue in cnrancher/octopus at test/util/node/get.go

Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message.

function call which takes a reference to node at line 56 may start a goroutine

Click here to see the code in its original context.

Click here to show the 5 line(s) of Go which triggered the analyzer.
for _, node := range list.Items {
	if IsOnlyWorker(&node) {
		workers.Insert(node.Name)
	}
}
Click here to show extra information the analyzer produced.
The following graphviz dot graph describes paths through the callgraph that could lead to a function calling a goroutine:
digraph G {
  "(ConfigureTransport, 1)" -> {"(configureTransport, 1)";}
  "(IsOnlyWorker, 1)" -> {"(IsControlPlane, 1)";"(IsEtcd, 1)";}
  "(IsControlPlane, 1)" -> {"(Set, 1)";}
  "(New, 1)" -> {"(get, 1)";}
  "(get, 1)" -> {"(SetTransportDefaults, 1)";}
  "(IsEtcd, 1)" -> {"(Set, 1)";}
  "(SetTransportDefaults, 1)" -> {"(ConfigureTransport, 1)";}
  "(Set, 1)" -> {"(New, 1)";}
  "(configureTransport, 1)" -> {"(addConnIfNeeded, 3)";}
  "(addConnIfNeeded, 3)" -> {}
}

Leave a reaction on this issue to contribute to the project by classifying this instance as a Bug 👎, Mitigated 👍, or Desirable Behavior 🚀
See the descriptions of the classifications here for more information.

commit ID: 3793775c1da67af923abfe79660e9d8055b262b3

initcat-dest/opensource-bilibili: app/service/main/reply-feed/service/service.go; 8 LoC

Found a possible issue in initcat-dest/opensource-bilibili at app/service/main/reply-feed/service/service.go

Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message.

reference to stat is reassigned at line 127

Click here to see the code in its original context.

Click here to show the 8 line(s) of Go which triggered the analyzer.
for _, stat := range s.statisticsStats {
	s, ok := statisticsMap[stat.Name]
	if ok {
		statisticsMap[stat.Name] = s.Merge(&stat)
	} else {
		statisticsMap[stat.Name] = &stat
	}
}

Leave a reaction on this issue to contribute to the project by classifying this instance as a Bug 👎, Mitigated 👍, or Desirable Behavior 🚀
See the descriptions of the classifications here for more information.

commit ID: cf98f71e6f3b5034b90c736681ef8eebe5a1973a

hackerlist/monty: app/controllers/probe.go; 76 LoC

Found a possible issue in hackerlist/monty at app/controllers/probe.go

Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message.

function call at line 112 may store a reference to n

Click here to see the code in its original context.

Click here to show the 76 line(s) of Go which triggered the analyzer.
for _, n := range nodes {
	revel.TRACE.Printf("node %d", n.Id)
	probes = nil
	_, err := txn.Select(&probes, "select * from probe where nid=$1 order by id", n.Id)
	if err != nil {
		revel.ERROR.Printf("runchecks: %s", err)
		continue
	}

	for _, p := range probes {
		/* make sure we run the test at the right frequency */
		last := p.LastRun
		if t.Sub(last).Seconds() < p.Frequency {
			revel.INFO.Printf("skipping probe %d - timeout not hit", p.Id)
			/* not enough time elapsed - skip */
			continue
		}

		/* get the script for this probe */
		script, err := txn.Get(&models.Script{}, p.ScriptId)
		if err != nil {
			/* if we can't get the script, the probe can't be run */
			revel.ERROR.Printf("can't run probe: missing script %d", p.ScriptId)
			continue
		}

		if script == nil {
			revel.ERROR.Printf("can't run probe: missing script %d", p.ScriptId)
			continue
		}

		/* set up and run the probe via revel jobs api.
		 * the result comes back through the Error channel. */

		scr := script.(*models.Script)

		revel.TRACE.Printf("node %d probe %d script (%d) %q args %s", n.Id, p.Id, scr.Id, scr.Code, p.Arguments)
		prunner := NewProbeRunner(&p, scr)

		jobs.Now(prunner)

		res := &models.Result{
			NodeId:  n.Id,
			ProbeId: p.Id,
		}

		err = <-prunner.Error
		if err != nil {
			res.Passed = false
			res.StatusMsg = err.Error()
		} else {
			res.Passed = true
			res.StatusMsg = ""
		}
		res.Timestamp = time.Now()
		err = txn.Insert(res)
		if err != nil {
			revel.ERROR.Printf("runchecks: %s", err)
			continue
		}
		p.LastResultId = res.Id
		_, err = txn.Update(&p)
		if err != nil {
			revel.ERROR.Printf("runchecks: %s", err)
			continue
		}

		revel.TRACE.Printf("probe passed? %d %t %q", p.Id, res.Passed, res.StatusMsg)

		// if the probe failed, send the result to the callback url in the node.
		if res.Passed == false {
			revel.WARN.Printf("probe failed, running callback")
			go pjob.Failed(&n, &p, res)
		}
	}
}
Click here to show extra information the analyzer produced.
The following graphviz dot graph describes paths through the callgraph that could lead to a function which writes a pointer argument:
digraph G {
  "(Failed, 3)" -> {}
}

No path was found through the callgraph that could lead to a function which passes a pointer to third-party code.


Leave a reaction on this issue to contribute to the project by classifying this instance as a Bug 👎, Mitigated 👍, or Desirable Behavior 🚀
See the descriptions of the classifications here for more information.

commit ID: d79daf5bf53215e13cf1fe6344ff9164596959ca

se0g1/cve-2018-1002101: test/e2e/scalability/load.go; 140 LoC

Found a possible issue in se0g1/cve-2018-1002101 at test/e2e/scalability/load.go

Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message.

Click here to see the code in its original context.

Click here to show the 140 line(s) of Go which triggered the analyzer.
for _, testArg := range loadTests {
	feature := "ManualPerformance"
	if isCanonical(&testArg) {
		feature = "Performance"
	}
	name := fmt.Sprintf("[Feature:%s] should be able to handle %v pods per node %v with %v secrets, %v configmaps and %v daemons",
		feature,
		testArg.podsPerNode,
		testArg.kind,
		testArg.secretsPerPod,
		testArg.configMapsPerPod,
		testArg.daemonsPerNode,
	)
	if testArg.quotas {
		name += " with quotas"
	}
	itArg := testArg
	itArg.services = os.Getenv("CREATE_SERVICES") != "false"

	It(name, func() {
		// Create a number of namespaces.
		namespaceCount := (nodeCount + nodeCountPerNamespace - 1) / nodeCountPerNamespace
		namespaces, err := CreateNamespaces(f, namespaceCount, fmt.Sprintf("load-%v-nodepods", itArg.podsPerNode), testPhaseDurations.StartPhase(110, "namespace creation"))
		framework.ExpectNoError(err)

		totalPods := (itArg.podsPerNode - itArg.daemonsPerNode) * nodeCount
		configs, secretConfigs, configMapConfigs = generateConfigs(totalPods, itArg.image, itArg.command, namespaces, itArg.kind, itArg.secretsPerPod, itArg.configMapsPerPod)

		if itArg.quotas {
			framework.ExpectNoError(CreateQuotas(f, namespaces, 2*totalPods, testPhaseDurations.StartPhase(115, "quota creation")))
		}

		serviceCreationPhase := testPhaseDurations.StartPhase(120, "services creation")
		defer serviceCreationPhase.End()
		if itArg.services {
			framework.Logf("Creating services")
			services := generateServicesForConfigs(configs)
			createService := func(i int) {
				defer GinkgoRecover()
				framework.ExpectNoError(testutils.CreateServiceWithRetries(clientset, services[i].Namespace, services[i]))
			}
			workqueue.Parallelize(serviceOperationsParallelism, len(services), createService)
			framework.Logf("%v Services created.", len(services))
			defer func(services []*v1.Service) {
				serviceCleanupPhase := testPhaseDurations.StartPhase(800, "services deletion")
				defer serviceCleanupPhase.End()
				framework.Logf("Starting to delete services...")
				deleteService := func(i int) {
					defer GinkgoRecover()
					framework.ExpectNoError(testutils.DeleteResourceWithRetries(clientset, api.Kind("Service"), services[i].Namespace, services[i].Name, nil))
				}
				workqueue.Parallelize(serviceOperationsParallelism, len(services), deleteService)
				framework.Logf("Services deleted")
			}(services)
		} else {
			framework.Logf("Skipping service creation")
		}
		serviceCreationPhase.End()
		// Create all secrets.
		secretsCreationPhase := testPhaseDurations.StartPhase(130, "secrets creation")
		defer secretsCreationPhase.End()
		for i := range secretConfigs {
			secretConfigs[i].Run()
			defer secretConfigs[i].Stop()
		}
		secretsCreationPhase.End()
		// Create all configmaps.
		configMapsCreationPhase := testPhaseDurations.StartPhase(140, "configmaps creation")
		defer configMapsCreationPhase.End()
		for i := range configMapConfigs {
			configMapConfigs[i].Run()
			defer configMapConfigs[i].Stop()
		}
		configMapsCreationPhase.End()
		// StartDaemon if needed
		daemonSetCreationPhase := testPhaseDurations.StartPhase(150, "daemonsets creation")
		defer daemonSetCreationPhase.End()
		for i := 0; i < itArg.daemonsPerNode; i++ {
			daemonName := fmt.Sprintf("load-daemon-%v", i)
			daemonConfig := &testutils.DaemonConfig{
				Client:    f.ClientSet,
				Name:      daemonName,
				Namespace: f.Namespace.Name,
				LogFunc:   framework.Logf,
			}
			daemonConfig.Run()
			defer func(config *testutils.DaemonConfig) {
				framework.ExpectNoError(framework.DeleteResourceAndPods(
					f.ClientSet,
					f.InternalClientset,
					extensions.Kind("DaemonSet"),
					config.Namespace,
					config.Name,
				))
			}(daemonConfig)
		}
		daemonSetCreationPhase.End()

		// Simulate lifetime of RC:
		//  * create with initial size
		//  * scale RC to a random size and list all pods
		//  * scale RC to a random size and list all pods
		//  * delete it
		//
		// This will generate ~5 creations/deletions per second assuming:
		//  - X small RCs each 5 pods   [ 5 * X = totalPods / 2 ]
		//  - Y medium RCs each 30 pods [ 30 * Y = totalPods / 4 ]
		//  - Z big RCs each 250 pods   [ 250 * Z = totalPods / 4]

		// We would like to spread creating replication controllers over time
		// to make it possible to create/schedule them in the meantime.
		// Currently we assume <throughput> pods/second average throughput.
		// We may want to revisit it in the future.
		framework.Logf("Starting to create %v objects...", itArg.kind)
		creatingTime := time.Duration(totalPods/throughput) * time.Second

		createAllResources(configs, creatingTime, testPhaseDurations.StartPhase(200, "load pods creation"))
		By("============================================================================")

		// We would like to spread scaling replication controllers over time
		// to make it possible to create/schedule & delete them in the meantime.
		// Currently we assume that <throughput> pods/second average throughput.
		// The expected number of created/deleted pods is less than totalPods/3.
		scalingTime := time.Duration(totalPods/(3*throughput)) * time.Second
		framework.Logf("Starting to scale %v objects first time...", itArg.kind)
		scaleAllResources(configs, scalingTime, testPhaseDurations.StartPhase(300, "scaling first time"))
		By("============================================================================")

		framework.Logf("Starting to scale %v objects second time...", itArg.kind)
		scaleAllResources(configs, scalingTime, testPhaseDurations.StartPhase(400, "scaling second time"))
		By("============================================================================")

		// Cleanup all created replication controllers.
		// Currently we assume <throughput> pods/second average deletion throughput.
		// We may want to revisit it in the future.
		deletingTime := time.Duration(totalPods/throughput) * time.Second
		framework.Logf("Starting to delete %v objects...", itArg.kind)
		deleteAllResources(configs, deletingTime, testPhaseDurations.StartPhase(500, "load pods deletion"))
	})
}
Click here to show extra information the analyzer produced.
No path was found through the callgraph that could lead to a function which writes a pointer argument.

No path was found through the callgraph that could lead to a function which passes a pointer to third-party code.

root signature {isCanonical 1} was not found in the callgraph; reference was passed directly to third-party code

Leave a reaction on this issue to contribute to the project by classifying this instance as a Bug 👎, Mitigated 👍, or Desirable Behavior 🚀
See the descriptions of the classifications here for more information.

commit ID: 08eb5fb80339f2738e62ab25142bde16debd4a60
