
The Go Programming Language

Go is an open source programming language that makes it easy to build simple, reliable, and efficient software.

This repository, microsoft/go, contains the infrastructure Microsoft uses to build Go. The submodule named go contains the Go source code. By default, the submodule's remote URL is the official GitHub mirror of Go, golang/go. The canonical Git repository for Go source code is located at https://go.googlesource.com/go.

This project is not involved in producing the official binary distributions of Go.

Unless otherwise noted, the Go source files are distributed under the BSD-style license found in the LICENSE file.

If you are using this fork and have a Microsoft corporate account, consider clicking here to instantly join the Microsoft Go Toolset Announcements email distribution list ๐Ÿ“ง and receive notifications about Microsoft releases of Go and breaking changes. We also maintain an internal doc page.

Why does this fork exist?

This repository produces a modified version of Go that can be used to build FIPS 140-2 compliant applications. Our goal is to share this implementation with others in the Go community who have the same requirement, and to merge this capability into upstream Go as soon as possible. See eng/doc/fips for more information about this feature and the history of FIPS 140-2 compliance in Go.

The binaries produced by this repository are also intended for general use within Microsoft in place of the official binary distribution of Go.

We call this repository a fork even though it isn't a traditional Git fork. Its branches do not share Git ancestry with the Go repository. However, the repository serves the same purpose as a Git fork: maintaining a modified version of the Go source code over time.

Support

This project follows the upstream Go Release Policy. This means we support each major release (1.X) until there are two newer major releases. A new Go major version is released every six months, so each Go major version is supported for about one year.

When upstream Go releases a new minor version (1.X.Y), we release a corresponding microsoft/go version that may also include fork-specific changes. This normally happens once a month. At any time, we may release a new revision (1.X.Y-Z) to fix an issue without waiting for the next upstream minor release. Revision releases are uncommon.

Each microsoft/go release is announced in the internal Microsoft Go Toolset Announcements email distribution list ๐Ÿ“ง.

Download and install

This repository's infrastructure currently supports these OS/Arch combinations:

  • linux_amd64
  • linux_armv6l
  • linux_arm64
  • windows_amd64

See eng/README.md for more details about the infrastructure.

Binary distribution

Don't see an option that works for you? Let us know! File a GitHub issue, or comment on an existing issue in this tag:

Build from source

Prerequisites:

After cloning the repository, use the following build command. You can pass the -help flag to show more options:

pwsh eng/run.ps1 build -refresh

The resulting Go binary can then be found at go/bin/go.

If you download a source archive from a GitHub release, use the official Go install from source instructions. These source archives only include the go directory, not the microsoft/go build infrastructure.

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact [email protected] with any additional questions or comments.

Trademarks

This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party's policies.

People

Contributors

alexperovich, dagood, dependabot[bot], dotnet-maestro[bot], gdams, malhotrag, microsoft-golang-bot, microsoft-golang-review-bot, qmuntal


Issues

Improve how the build script sets up the "stage 0" Go toolset

The build script currently downloads, verifies, and extracts the Go linux-amd64 binary release 1.16 to ~/.go-stage-0/1.16. This is used in CI, and it can be used locally to get a build going on a machine with minimal dependencies.

There are a few things to improve:

It should detect if Go is already installed and (optionally) use that.

  • We should probably still make it possible to download a fresh copy of 1.16 anyway, in case it makes it easier to reproduce failures that happen in CI.

It should download Go somewhere under the repository directory, not $HOME.

  • IMO, building a repo should not modify state in my home directory unless it has an extremely good reason.
  • This makes it easy to clean up in case a fresh repro is needed (it gets deleted by git clean -xdf).
  • However, putting Go inside the repo causes the test suite to fail, probably due to upwards directory traversal from go.
    --- FAIL: TestAllDependencies (6.36s)
        --- FAIL: TestAllDependencies/std(quick) (1.80s)
            moddeps_test.go:63: /work/bin/go list -mod=vendor -deps ./...: exit status 1
                package std/bytes
                    bytes/bytes.go:10:2: use of internal package internal/bytealg not allowed
                [...]
            moddeps_test.go:64: (Run 'go mod vendor' in /work/microsoft/.go-stage-0/go/src to ensure that dependencies have been vendored.)
    FAIL
    FAIL    cmd/internal/moddeps    6.405s
    

This issue is referenced in source.
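The detection step described above could be sketched like this. This is a hypothetical helper, not part of the repo's build scripts; `detectGo` and `parseGoVersion` are made-up names:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// parseGoVersion extracts the toolchain version from `go version` output,
// e.g. "go version go1.16 linux/amd64" -> "go1.16".
func parseGoVersion(out string) (string, bool) {
	fields := strings.Fields(out)
	if len(fields) >= 3 && fields[0] == "go" && fields[1] == "version" {
		return fields[2], true
	}
	return "", false
}

// detectGo reports the version of any Go toolchain already on PATH.
func detectGo() (string, bool) {
	goBin, err := exec.LookPath("go")
	if err != nil {
		return "", false
	}
	out, err := exec.Command(goBin, "version").Output()
	if err != nil {
		return "", false
	}
	return parseGoVersion(string(out))
}

func main() {
	if v, ok := detectGo(); ok {
		fmt.Println("reusing existing toolchain:", v)
	} else {
		fmt.Println("no Go on PATH; download a fresh stage 0 copy")
	}
}
```

A real implementation would also compare the detected version against the minimum the build requires, falling back to the download path when it's too old.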

Auto-update stage 0 Go toolset

I think that we should have a bot that automatically submits a PR to update Go when there's a new release.

I'm not sure if there's a nice way to do this--what comes to mind is a bot that polls https://golang.org/dl/ periodically and scrapes data. I haven't looked very far for a better source of this data, though. We could also manually trigger the bot when we hear about a release on golang-announce if the polling's slow.
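One alternative to scraping HTML: the download page serves a JSON feed at https://go.dev/dl/?mode=json. A minimal poller sketch, assuming the feed lists releases newest-first and that only the version and stable fields are needed (`latestStable` is a made-up name):

```go
package main

import (
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

// release mirrors part of an entry in the https://go.dev/dl/?mode=json feed.
type release struct {
	Version string `json:"version"`
	Stable  bool   `json:"stable"`
}

// latestStable picks the first stable version out of the feed body,
// relying on the feed's newest-first ordering.
func latestStable(feed []byte) (string, error) {
	var rs []release
	if err := json.Unmarshal(feed, &rs); err != nil {
		return "", err
	}
	for _, r := range rs {
		if r.Stable {
			return r.Version, nil
		}
	}
	return "", fmt.Errorf("no stable release in feed")
}

func main() {
	resp, err := http.Get("https://go.dev/dl/?mode=json")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	v, err := latestStable(body)
	if err != nil {
		panic(err)
	}
	fmt.Println("latest stable:", v)
}
```

The bot would compare this against the version pinned in the repo and open a PR when they differ.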

Build official build using `buildmode=pie`, relro, stack protection

Normally, Go builds without PIE (Position Independent Executable) or other C-style security measures because these attacks are dealt with at a language/runtime level: https://groups.google.com/g/golang-nuts/c/Jd9tlNc6jUE. However, Go compiles to native binaries, so our SDL tooling (binskim) treats it like any other binary and scans for these security measures.

Cgo and unsafe may make it worthwhile to apply the C-style security measures to Go, but this is debatable.

Automatic branch merging from upstream

We should automatically (on some schedule) create PRs that merge changes from upstream into our forked branches. These PRs should be auto-merged.

Related: #4 tracks mirroring branches from upstream with no changes.

Use module dependencies for build scripts and utilities

As of writing, our build scripts are standalone Go files. They have no dependencies and don't live in a module. This is convenient, but means we have some code that would probably be better to replace with a library. (Git command interactions, GitHub API.) If we put the scripts into a module, we could add dependencies, with nice tooling around downloading them on demand and verifying them.

I tried it for #71 (using gotestsum as a dependency to convert go tool dist test output to a format that works in AzDO), but hit a few problems with tests:

  • There's a TestAllDependencies test that scans the repo for all modules (including the microsoft dir) and enforces that there are no dependencies that would require downloading. The goal is that someone can build the Go repo offline.
    • I'm able to fix this with go mod vendor, which copies the source code of the dependencies into the repo. ~400 files. (The test failure message had this as a suggestion.)
  • TestDependencyVersionsConsistent requires that every go.mod file in the repo depend on the same version of each library if present. gotestsum has some transitive dependencies with different versions from the others in the repo, so it failed.
    • I'm able to pin my module's dependencies to the same version to fix this.

These fixes would cause a fork maintenance problem: we'd need to update our pins and the vendored code every time upstream updates the version numbers, which happens fairly frequently:
https://github.com/golang/go/blame/master/src/go.mod
https://github.com/golang/go/blame/master/src/cmd/go.mod
We would hit test failures in sync PRs that bring in a change to these dependencies.

Alternative fixes:

  1. Remove dependencies entirely. gotestsum brings in a lot of dependencies we don't use--we only need a very small bit of what it has to offer. We could probably write our own simplified go test -json -> JUnit XML converter.
  2. Patch the tests to ignore our module.
    • It seems wrong at first glance, but we do have different infrastructural requirements/desires, so this might not be so bad.
  3. Move these tools to a different repo. Install the builder tool on the build machine during CI and/or during Docker image preparation.
  4. Install dependencies and compile against them without a Go module. Avoid getting "caught" by the test, but lose the tooling that Go modules provide (checksums of all dependencies, straightforward build command line).
  5. Install gotestsum as a command-line tool on the build machine and use it that way. (This is how I had the PR set up initially.) This is super easy to include conditionally--we can dynamically turn it off when we don't want an online dependency.
    • This limits access to the API: we end up needing to print logs twice: once during the run to keep track of progress, and once during the conversion.
  6. ...?

I prefer removing dependencies to keep the project as simple as possible. Storing everything in one Git tree is great to avoid managing cross-repo dependencies.

Publish to branch name, not just the last `/` segment of branch name

blobDestinationUrl: 'https://golangartifacts.blob.core.windows.net/microsoft/$(Build.SourceBranchName)/$(Build.BuildNumber)'

$(Build.SourceBranchName) is misleading: with the ref name refs/heads/microsoft/main, it gives only the last segment, main. (This is documented AzDO behavior.) I would call the branch's name microsoft/main.

This can cause overlap if someone runs a dev build of a branch called e.g. dev/dagood/do-foo/main. I sometimes do this when I plan to develop the same change for multiple branches (dev/dagood/do-foo/release-branch.go1.16). The overlap means my build output would be indistinguishable from a microsoft/main build, if you only look at blob storage.

We handled this in .NET (runtime/setup) with a step like this:

  - powershell: |
      $prefix = "refs/heads/"
      $branch = "$(Build.SourceBranch)"
      $branchName = $branch
      if ($branchName.StartsWith($prefix))
      {
        $branchName = $branchName.Substring($prefix.Length)
      }
      Write-Host "For Build.SourceBranch $branch, FullBranchName is $branchName"
      Write-Host "##vso[task.setvariable variable=FullBranchName;]$branchName"
    displayName: Find true SourceBranchName

Looks like Roslyn uses a one-liner that assumes it'll always have a refs/heads/ prefix:

    - powershell: Write-Host "##vso[task.setvariable variable=SourceBranchName]$('$(Build.SourceBranch)'.Substring('refs/heads/'.Length))"
      displayName: Setting SourceBranchName variable
      condition: succeeded()
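For comparison, the same prefix-trimming logic as a Go sketch (`fullBranchName` is a made-up name, not part of the repo's tooling):

```go
package main

import (
	"fmt"
	"strings"
)

// fullBranchName strips the "refs/heads/" prefix from an AzDO
// Build.SourceBranch ref, keeping every remaining "/" segment.
// Refs without the prefix are returned unchanged.
func fullBranchName(ref string) string {
	return strings.TrimPrefix(ref, "refs/heads/")
}

func main() {
	fmt.Println(fullBranchName("refs/heads/microsoft/main")) // microsoft/main
}
```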

Produce a Debian package that works on Ubuntu 18.04

To match upstream, the primary artifact from our official builds is a tar.gz file. We should also produce a Deb package, sign it, and host it in a package repository for more convenient installs and automatic upgrading.

The tool I'm most familiar with to do this is FPM (https://github.com/jordansissel/fpm), because .NET [Core] uses it to produce some Linux packages. This would make it easy to make RPM packages, too.

Also consider using a Go module to create the packages. It might be better to consistently use Go libraries rather than use a Ruby gem.

Change merge email titles

Can we change the title of merge PRs from upstream such that they are more filterable in email ๐Ÿ˜„

Filtering these can be tricky because there really isn't anything other than the title to filter on, and Outlook only allows substring matching. Most of our other repositories use a title that has auto merge and the repository name next to each other for easy filtering. For example: [AutoMerge] Roslyn

The current merge emails have the pattern [microsoft/go] [microsoft/main] Merge upstream. This means we have to write a rule per target branch.

Run full linux-amd64 tests in this repo's CI

Running src/run.bash uses go tool dist test, which by default runs a bunch of go test commands with -short included on the command line. This skips a significant number of tests.

The goal seems to be running the same tests as amd64 longtest on https://build.golang.org.
(More about longtest at golang/go#12508, https://go-review.googlesource.com/c/build/+/113436/)

Setting GO_TEST_SHORT=false disables -short and lets the full set of tests run, but there are some extra dependencies required and it doesn't work on my machine/container. (Mercurial, permissions, ...?)

A few common skip patterns are:

  • testing.Short() && os.Getenv("GO_BUILDER_NAME") == ""
  • testing.Short() && testenv.Builder() == ""

and it shows up in helpers:

func MustHaveExternalNetwork(t testing.TB) {
	if runtime.GOOS == "js" {
		t.Skipf("skipping test: no external network on %s", runtime.GOOS)
	}
	if testing.Short() {
		t.Skipf("skipping test: no external network in -short mode")
	}
}

Running go test with -v makes it show --- SKIP: lines for these (and a lot of other output, too). It looks like you have to modify dist/test itself to get it to pass -v; there's no existing feature like GO_TEST_SHORT for it.

Auto-sync script shouldn't overwrite ongoing fixup work

Right now, the auto-sync script force pushes to the PR branch, then tries to create a PR, then (if PR creation fails) tries to find the PR so it can re-enable auto-merge.

It should either do nothing if the PR exists, or detect whether or not it makes sense to push the new commit to the old PR.

Hook up tests to AzDO test infrastructure

It can be a little tedious to see that the tests failed and scroll up in the log to find the cause(s), particularly with verbosity turned up.

If we can get the Go tests to show up in the AzDO UI, it might be easier to tell what happened at a glance. It might also be easier to track test results over time. (E.g. are certain tests flaky?)

I know there's a go test -json command, but I haven't looked into its output.
We might need to modify go tool dist test to accept a parameter that passes -json to its go test subcommands. (Similar to how there isn't an existing flag to pass -v to subcommands.)

os/user TestGroupIds failure "The user name could not be found" on Windows 10 AAD machine

When I try to run the tests on Windows on my work machine, I get this error:

--- FAIL: TestGroupIds (0.00s)
    user_test.go:144: &{Uid:[...] Gid:[...] Username:NORTHAMERICA\dagood Name:Davis Goodin HomeDir:C:\Users\dagood}.GroupIds(): The user name could not be found.
FAIL
FAIL    os/user 1.819s

Someone else hit this in 2018, and it ended up seeming to be a configuration issue on their machine. Another machine also using AAD worked: golang/go#26041. It doesn't look like they were able to figure out the underlying issue.

To check for AAD: https://stackoverflow.com/a/51852296

C:\Users\dagood>dsregcmd /status

+----------------------------------------------------------------------+
| Device State                                                         |
+----------------------------------------------------------------------+

             AzureAdJoined : YES
          EnterpriseJoined : NO
              DomainJoined : NO
               Device Name : dagood-3
[...]

Failure in microsoft-go-upstream-sync: "module requires Go 1.16"

While trying to build the sync util, the build failed:

https://dev.azure.com/dnceng/internal/_build/results?buildId=1183280&view=logs&j=4a0cec82-aaa7-5c07-9c1e-30cc88945a51&t=02349945-0a2d-5e11-acd0-7f93c76bdad9&l=18

# github.com/microsoft/go/_util/cmd/sync
cmd/sync/sync.go:77:2: undefined: flag.Func
note: module requires Go 1.16
##[error]Bash exited with code '2'.

Our stage0 is 1.16, but the hosted agents actually already have a few versions of Go on them. The one we get on PATH is old--I ran go version on a hosted agent to get: go1.15.13 linux/amd64

ee905cb caused the error by changing a direct dependency on our stage0 (1.16) Go:

. microsoft/init-stage0.sh
"$stage0_dir/go/bin/go" run microsoft/sync/sync-upstream-refs.go \

into a soft dependency that prefers to use an existing Go on PATH first (1.15.13):

go/microsoft/run-util.sh

Lines 68 to 74 in ee905cb

. "$scriptroot/init-stage0.sh"
PATH="$PATH:$stage0_dir/go/bin"
(
# Move into module so "go build" detects it and fetches dependencies.
cd "$toolroot"
go build -o "$tool_output" "./cmd/$tool"

There's no good reason for this. run-util.sh should be changed to use the stage0 Go no matter what.

Set up our team's signing cert

Mostly internal procedures. At the end, we'll need to update the version of MicroBuild we use to a new one that knows our cert exists, and change the cert name in our signing csproj.

Flaky test: TestLookupGmailTXT, `got [globalsign-smime-dv=CD...KX8=]; want a record containing spf`

This showed up once in linux-amd64-racecompile in https://dev.azure.com/dnceng/internal/_build/results?buildId=1078273&view=results, then when I removed racecompile, it showed up in linux-amd64-regabi in https://dev.azure.com/dnceng/internal/_build/results?buildId=1078855&view=logs&j=17f0ed56-45c3-5f4e-2883-c1105f3261d7&t=be2aae34-cdee-5b9f-80ac-cc542ed93061&l=352:

--- FAIL: TestLookupGmailTXT (0.00s)
    lookup_test.go:257: got [globalsign-smime-dv=CDYX+XFHUw2wml6/Gb8+59BsH31KzUr6c1l2BPvqKX8=]; want a record containing spf, google.com
    lookup_test.go:257: got [globalsign-smime-dv=CDYX+XFHUw2wml6/Gb8+59BsH31KzUr6c1l2BPvqKX8=]; want a record containing spf, google.com
FAIL
FAIL	net	12.192s

(In #52.)

It looks like these are network tests that request DNS info from gmail.com. In some cases these are only run during longtest but it looks like that behavior is explicitly overridden here, enabling them (not skipping them) if it's running in any builder:

go/src/net/dial_test.go

Lines 990 to 998 in bc0c82c

// mustHaveExternalNetwork is like testenv.MustHaveExternalNetwork
// except that it won't skip testing on non-mobile builders.
func mustHaveExternalNetwork(t *testing.T) {
	t.Helper()
	mobile := runtime.GOOS == "android" || runtime.GOOS == "ios"
	if testenv.Builder() == "" || mobile {
		testenv.MustHaveExternalNetwork(t)
	}
}


Related issues upstream: golang/go#29698, maybe golang/go#22857, golang/go#29722

"src/run.bat" doesn't pass args through to "dist test" like "src/run.bash" does, making "-json" ineffective

If we call run.bash -json, it passes args through to dist test ("$@"), making it emit JSON test events for gotestsum to parse in CI:

exec ../bin/go tool dist test -rebuild "$@"

run.bat doesn't do this, so -json (and therefore gotestsum) is ineffective:

go/src/run.bat

Line 43 in 6985bbf

..\bin\go tool dist test

As of writing, I'm going to make CI call go tool dist test -json directly rather than use src/run.bat at all. Filing this issue to track context and to link in a code comment.


When developing on upstream, you'd normally use src/run.bat to build/test on Windows. If we ever make any changes to our repo that don't work with src/run.bat, we might not detect it until we try to upstream the changes. I don't think this is a big risk--it seems unlikely to me that our changes would have this kind of effect, and it's not hard to try it out locally before submitting if it seems like a change would cause a problem.

If we end up wanting to fix this, we could either use src/run.bat as-is and give up gotestsum results for the windows-amd64-devscript job, or we could patch src/run.bat.

Failure in rolling build "Init stage 0 Go toolset": "curl: (18) transfer closed with X bytes remaining to read"

https://dev.azure.com/dnceng/internal/_build/results?buildId=1182013&view=logs&j=0dc5894a-280c-5daf-7974-626bf869c742&t=a448ff55-576e-5019-d913-0f00cfd70f20&l=25

...
 97  123M   97  120M    0     0  36.5M      0  0:00:03  0:00:03 --:--:-- 44.1M
curl: (18) transfer closed with 2912825 bytes remaining to read
...

Filing this issue to keep track in case it ends up happening often.

A long-term fix is to pre-install Go into a build prereqs Dockerfile: #5.

Documentation on .NET infrastructure reuse decisions

Currently there are some bits we aren't reusing from .NET infrastructure:

  • https://github.com/dotnet/dotnet-buildtools-prereqs-docker
    • At first glance doesn't seem right to put our prereqs in repo with dotnet in the name.
    • Docker team is hesitant.
    • Short-term, we are using an image produced by this repo. Long-term, we plan to build our own images rather than asking to put them in this repo.
  • An ACR, or publishing into MCR.
    • Not investigated in detail. Similar factors as dotnet-buildtools-prereqs-docker.
    • ACR: my understanding is the current ACRs are very .NET-specific, and there are already multiple in use for parts of .NET. Even if we reused, we may need to create our own.
    • MCR: I believe this is public and broader than .NET.
  • dotnetcli.blob.core.windows.net
    • Has dotnet in the name.
    • A storage account seems relatively easy to maintain: reuse is not as inherently beneficial as most other items.
    • Haven't discussed with .NET team.
  • Maestro++/BAR and associated publishing infra
    • Designed for .NET build orchestration and .NET's artifacts in particular. Not necessarily easy to adapt. (Up-front cost.)
    • We have a simple release scenario compared to .NET: one repo, simple acquisition.
    • Haven't discussed with .NET team.

We are reusing:

  • Signing infrastructure.
    • Both Arcade's SignTool and underlying VS Eng MicroBuild.
    • MicroBuild tooling interacts with ESRP on our behalf.
  • SDL validation scripts, running Guardian (in progress).
    • Acquiring and running Guardian isn't trivial. This is at least a starting point.
  • GitHub -> AzDO mirroring infra, Maestro.
  • https://github.com/dotnet/docker-tools (planned)
    • Used to maintain and build a repo with Dockerfiles with bonus features for dependency tracking and templating.

Filing this issue to keep track of which bits we are/aren't reusing and why. (May want to turn this into a checked-in doc at some point.)

Maintainable patches/forked branches

We need to figure out how to maintain forked branches (and/or patches). We should find an approach that:

  1. Makes it easy to submit changes back to upstream
  2. Lets us produce "baseline" builds that don't include the changes
  3. Works well with repeated merges from upstream
  4. Gives us reasonable dev workflows

There are a bunch of ways to go, here are two extremes:

  • One forked branch. Each feature is stored in a microsoft/patches/*.patch file that only gets applied during the build. All patches are disabled by a build arg when we want a baseline build.
  • Many forked branches. Each feature is forked from the "basic infra" branch that produces our baseline build. Periodically, all features are merged into a central "features" branch to produce a full-featured build.

Patch files are not great for dev workflows: viewing a diff between two versions of a patch file is difficult (++, +-, -+, -- ๐Ÿ˜จ), and to work on a patch, you need to apply it, make changes, and extract it again. Merge conflicts with another dev would be particularly tricky. (Although with a low number of contributors, this is not a huge risk.)

We plan to start with .patch files. They are a simple way to start, and we have company: Debian uses patch files in debian/patches/*.patch extensively to fix bugs. We can relatively easily move to a different strategy later if the dev workflow ends up being totally unreasonable when the changes are more than bug fixes.

Simplify publishing: combine signing and publish jobs

For now, Sign and Publish are separate jobs that run in sequence. Sign runs on a specific build agent queue that's capable of signing, and Publish runs on a generic image provided by AzDO that has az installed.

We could combine the jobs and run both on the signing-capable machines. However, those machines don't have az installed, which AzureCLI@2 needs. We would either need to upload to blob storage some other way (via Arcade, an MSBuild task?), or get az installed on the signing build machines.

Consider moving official build signing out of the main build pipeline

We can move signing out of the main build pipeline into an independent pipeline.

  • The main build pipeline could complete whether or not signing has finished yet, for more throughput regardless of signing infra speed/state.
  • If the signing infrastructure needs to change, it can be updated in one infra branch that applies to all Go branches.
    • A "floating" dependency like this is a red flag. It can cause unanticipated build breaks, and makes it hard to resurrect an old branch if the floating branch has had breaking changes since the last successful build.
    • This might be ok to float, because the set of artifacts seems like it will be relatively simple and stable over time.
    • If we need to update the version of the signing tooling, it could be tedious to cherry-pick the change to each branch. Updating just one branch is a way to avoid that. Another way would be to figure out how to set up auto-updates for these versions.

Move signing job implementation to a yml template

There are several places the os_arch is repeated:

dependsOn:
- Build_linux_amd64

# Download all assets that need signing. (As of writing, only linux-amd64.)
- download: current
  artifact: Binaries linux_amd64
  displayName: 'Download: Binaries linux_amd64'

Contents: |
  Binaries linux_amd64/*

It would be bad to add more platforms the same way. I think this job should be turned into a template that accepts a list of os_arch as a parameter. (That would also take the clutter out of the root azure-pipelines.yml file.)

This should be done before (or along with) windows_amd64 (#28).

Make windows-amd64 Go binaries work with BinSkim SDL tool (expects PDB)

Currently, there are no PDB files for Go. Debug information is in DWARF format:

BinSkim requires a PDB for each library, and has no way to turn this off:

Someone tried to extract a PDB file from a binary built on windows with MinGW, but that didn't work:

Running BinSkim on our windows-amd64 build gets this bunch of errors for each exe file:

error ERR997.ExceptionLoadingPdb : BA2002 : 'go.exe' was not evaluated for check 'DoNotIncorporateVulnerableDependencies' because its PDB could not be loaded.
error ERR997.ExceptionLoadingPdb : BA2006 : 'go.exe' was not evaluated for check 'BuildWithSecureTools' because its PDB could not be loaded.
error ERR997.ExceptionLoadingPdb : BA2007 : 'go.exe' was not evaluated for check 'EnableCriticalCompilerWarnings' because its PDB could not be loaded.
error ERR997.ExceptionLoadingPdb : BA2011 : 'go.exe' was not evaluated for check 'EnableStackProtection' because its PDB could not be loaded.
error ERR997.ExceptionLoadingPdb : BA2013 : 'go.exe' was not evaluated for check 'InitializeStackProtection' because its PDB could not be loaded.
error ERR997.ExceptionLoadingPdb : BA2014 : 'go.exe' was not evaluated for check 'DoNotDisableStackProtectionForFunctions' because its PDB could not be loaded.
error ERR997.ExceptionLoadingPdb : BA2024 : 'go.exe' was not evaluated for check 'EnableSpectreMitigations' because its PDB could not be loaded.

That happens even when I use a --config file to configure all those rules (BA*) to not run. It seems that you can turn off rules, but this is an "error" with no way to ignore it.

Config file with exceptions for these rules:
  <Properties Key="BA2002.DoNotIncorporateVulnerableDependencies.Options" Type="PropertiesDictionary">
    <Property Key="RuleEnabled" Value="Disabled" Type="Driver.RuleEnabledState" />
  </Properties>
  <Properties Key="BA2006.BuildWithSecureTools.Options" Type="PropertiesDictionary">
    <Property Key="RuleEnabled" Value="Disabled" Type="Driver.RuleEnabledState" />
  </Properties>
  <Properties Key="BA2007.EnableCriticalCompilerWarnings.Options" Type="PropertiesDictionary">
    <Property Key="RuleEnabled" Value="Disabled" Type="Driver.RuleEnabledState" />
  </Properties>
  <Properties Key="BA2011.EnableStackProtection.Options" Type="PropertiesDictionary">
    <Property Key="RuleEnabled" Value="Disabled" Type="Driver.RuleEnabledState" />
  </Properties>
  <Properties Key="BA2013.InitializeStackProtection.Options" Type="PropertiesDictionary">
    <Property Key="RuleEnabled" Value="Disabled" Type="Driver.RuleEnabledState" />
  </Properties>
  <Properties Key="BA2014.DoNotDisableStackProtectionForFunctions.Options" Type="PropertiesDictionary">
    <Property Key="RuleEnabled" Value="Disabled" Type="Driver.RuleEnabledState" />
  </Properties>
  <Properties Key="BA2024.EnableSpectreMitigations.Options" Type="PropertiesDictionary">
    <Property Key="RuleEnabled" Value="Disabled" Type="Driver.RuleEnabledState" />
  </Properties>

I think I can entirely disable scanning the exe files with BinSkim to get around it temporarily, but this is obviously not acceptable in the long run.


Options that come to mind:

  • Fix BinSkim--pick any one of:
    • Make these rules compatible with GCC/MinGW symbols. (Do not require PDBs for these checks with Go binaries.)
    • Make ERR997 ignorable.
    • Make it so ignoring BA2002, BA2006, ... BA2024 stops BinSkim from trying and failing to load the PDB.
    • Figure out how to baseline the results. According to microsoft/binskim#299, this requires some work on the BinSkim side.
  • Make Go produce PDBs. (May not be an option: I think we'd probably want to release before figuring it out, and if we want to ship both modified and unmodified versions of Go, we can't satisfy SDL with the unmodified non-PDB Go.)

Add linux-amd64-racecompile

This builder is unusual: instead of running any ordinary tests, it installs (compiles) the compiler and linker packages with -race, then rebuilds Go using those race-enabled compiler binaries. This exercises concurrent compilation to surface data races.

https://github.com/golang/build/blob/83a8520724285855120f774cc4a7b57540a1d50b/dashboard/builders.go

		Name:                "linux-amd64-racecompile",
		HostType:            "host-linux-jessie",
		tryBot:              nil, // TODO: add a func to conditionally run this trybot if compiler dirs are touched
		CompileOnly:         true,
		SkipSnapshot:        true,
		StopAfterMake:       true,
		InstallRacePackages: []string{"cmd/compile", "cmd/link"}, ...
func (c *BuildConfig) GoInstallRacePackages() []string {
	if c.InstallRacePackages != nil {
		return append([]string(nil), c.InstallRacePackages...)
	}
	if c.IsRace() {
		return []string{"std"}
	}
	return nil
}

https://github.com/golang/build/blob/83a8520724285855120f774cc4a7b57540a1d50b/internal/buildgo/buildgo.go

func (gb GoBuilder) RunMake(ctx context.Context, bc *buildlet.Client, w io.Writer) (remoteErr, err error) {
...
	// Need to run "go install -race std" before the snapshot + tests.
	if pkgs := gb.Conf.GoInstallRacePackages(); len(pkgs) > 0 {
		sp := gb.CreateSpan("install_race_std")
		remoteErr, err = bc.Exec(ctx, path.Join(gb.Goroot, "bin/go"), buildlet.ExecOpts{
			Output:   w,
			ExtraEnv: append(gb.Conf.Env(), "GOBIN="),
			Debug:    true,
			Args:     append([]string{"install", "-race"}, pkgs...),
		})
...
	if gb.Name == "linux-amd64-racecompile" {
		return gb.runConcurrentGoBuildStdCmd(ctx, bc, w)
	}
// runConcurrentGoBuildStdCmd is a step specific only to the
// "linux-amd64-racecompile" builder to exercise the Go 1.9's new
// concurrent compilation. It re-builds the standard library and tools
// with -gcflags=-c=8 using a race-enabled cmd/compile and cmd/link
// (built by caller, RunMake, per builder config).
// The idea is that this might find data races in cmd/compile and cmd/link.
func (gb GoBuilder) runConcurrentGoBuildStdCmd(ctx context.Context, bc *buildlet.Client, w io.Writer) (remoteErr, err error) {
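
For intuition, the class of bug this builder hunts for is unsynchronized access to shared state. Here is a generic illustration (not taken from cmd/compile) with the mutex fix already applied; deleting the Lock/Unlock pair and running the program under `go run -race` reports the race:

```go
package main

import (
	"fmt"
	"sync"
)

// concurrentCount increments a shared counter from several goroutines.
// The mutex makes it correct; remove the Lock/Unlock pair and the race
// detector reports a data race, the same class of defect this builder
// hunts for in cmd/compile and cmd/link.
func concurrentCount(goroutines, iters int) int {
	var (
		mu      sync.Mutex
		counter int
		wg      sync.WaitGroup
	)
	for i := 0; i < goroutines; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := 0; j < iters; j++ {
				mu.Lock()
				counter++
				mu.Unlock()
			}
		}()
	}
	wg.Wait()
	return counter
}

func main() {
	fmt.Println(concurrentCount(8, 1000)) // 8000
}
```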

Apply SDL validation task on windows-amd64 outputs

Building for windows-amd64 is tracked by #27.
Applying SDL tasks is WIP on an internal branch, as of writing.

I'm preemptively filing this issue to track the combination of the two: make sure the windows-amd64 assets are processed properly by the SDL tasks. There may be different requirements/issues that come up for Windows vs. Linux.

Set up dnceng PR validation triggers for GitHub repo

While this is a private repo, we need to use the internal project in our AzDO instance for auth reasons. The internal project needs a service connection to Microsoft repos so the web hooks can be set up.

Once the repo is public, there's an existing "Microsoft" service connection in the public AzDO project we can use.

Update SUPPORT.md

Fill this out:

go/SUPPORT.md

Lines 1 to 10 in 0e8523d

# TODO: The maintainer of this repo has not yet edited this file
**REPO OWNER**: Do you want Customer Service & Support (CSS) support for this product/project?
- **No CSS support:** Fill out this template with information about how to file issues and get help.
- **Yes CSS support:** Fill out an intake form at [aka.ms/spot](https://aka.ms/spot). CSS will work with/help you to determine next steps. More details also available at [aka.ms/onboardsupport](https://aka.ms/onboardsupport).
- **Not sure?** Fill out a SPOT intake as though the answer were "Yes". CSS will help you decide.
*Then remove this first heading from this SUPPORT.MD file before publishing your repo.*

Failure downloading stage 0 Go: no progress for 20 seconds, then checksum failure

It looks like the download makes no progress for a while (20 seconds), then fails the checksum verification. I assume the file is truncated; I'm not sure why curl itself isn't reporting an error:

https://dev.azure.com/dnceng/internal/_build/results?buildId=1186727&view=logs&j=d64f3f73-a65e-5047-8380-514dab748946&t=8bc915d5-88fc-5303-cb01-49be6dc3eb68&l=42

Downloading stage 0 Go compiler and extracting to '/home/vsts_azpcontainer/.go-stage-0/1.16' ...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100    73  100    73    0     0   1216      0 --:--:-- --:--:-- --:--:--  1216

  0     0    0     0    0     0      0      0 --:--:--  0:00:01 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:02 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:03 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:04 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:05 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:06 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:07 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:08 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:09 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:10 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:11 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:12 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:13 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:14 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:15 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:16 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:17 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:18 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:19 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:20 --:--:--     0
100  1613  100  1613    0     0     78      0  0:00:20  0:00:20 --:--:--   369
/home/vsts_azpcontainer/.go-stage-0/1.16/go.tar.gz: FAILED
sha256sum: WARNING: 1 computed checksum did NOT match

For comparison, a healthy download from another job in that build (completed in 5 seconds):

https://dev.azure.com/dnceng/internal/_build/results?buildId=1186727&view=logs&j=0062eaab-e429-5c7e-1d0d-476982c7b962&t=f4a61628-2565-5334-9bea-d42b30749d7a&l=24

Downloading stage 0 Go compiler and extracting to '/home/vsts_azpcontainer/.go-stage-0/1.16' ...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100    73  100    73    0     0    646      0 --:--:-- --:--:-- --:--:--   646

 96  123M   96  118M    0     0   119M      0  0:00:01 --:--:--  0:00:01  119M
100  123M  100  123M    0     0   120M      0  0:00:01  0:00:01 --:--:--  129M
/home/vsts_azpcontainer/.go-stage-0/1.16/go.tar.gz: OK
Done extracting stage 0 Go compiler to '/home/vsts_azpcontainer/.go-stage-0/1.16'

If it's simply flaky, this will be fixed by using a Docker container: #5.

Sign official build outputs (linux-amd64) with '.sig' detached cert

The outputs of the build need to be signed.

What we're able to do right now is produce a detached signature file (.sig) that can be checked against the .tar.gz file using gpg --verify or gpgv.


There are some other approaches out there. For example, "IMA appraisal" verifies a signature in xattr (filesystem extended attributes). I don't know if the signing infra we have access to can support this, or if IMA appraisal will actually be used for our Go builds. These can be tracked separately, but we need info on who would use one of these approaches.

Add git-codereview gofmt checks and tests

I noticed "git-codereview gofmt checks and tests" mentioned in .gitattributes as the justification for blocking autocrlf in the repo:

go/.gitattributes

Lines 1 to 14 in e3168d7

# Treat all files in the Go repo as binary, with no git magic updating
# line endings. This produces predictable results in different environments.
#
# Windows users contributing to Go will need to use a modern version
# of git and editors capable of LF line endings.
#
# Windows .bat files are known to have multiple bugs when run with LF
# endings, and so they are checked in with CRLF endings, with a test
# in test/winbatch.go to catch problems. (See golang.org/issue/37791.)
#
# We'll prevent accidental CRLF line endings from entering the repo
# via the git-codereview gofmt checks and tests.
#
# See golang.org/issue/9281.

This may already be covered by the existing set of tests, but I'm not sure. It seems worth investigating to reach parity with upstream testing.
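
For reference, the core of a gofmt cleanliness check is small. Here's a sketch using the stdlib go/format package (`gofmtClean` is a hypothetical helper; git-codereview's actual check is more involved). Note that CRLF endings also fail the check, which is exactly the protection the .gitattributes comment describes:

```go
package main

import (
	"bytes"
	"fmt"
	"go/format"
)

// gofmtClean reports whether src is already in canonical gofmt form,
// which is essentially what a CI gofmt check asserts per file.
func gofmtClean(src []byte) (bool, error) {
	formatted, err := format.Source(src)
	if err != nil {
		return false, err // not valid Go at all
	}
	return bytes.Equal(src, formatted), nil
}

func main() {
	clean := []byte("package p\n\nvar X = 1\n")
	dirty := []byte("package p\n\nvar X=1\n")         // missing spaces around =
	crlf := []byte("package p\r\n\r\nvar X = 1\r\n") // CRLF endings are not canonical
	ok1, _ := gofmtClean(clean)
	ok2, _ := gofmtClean(dirty)
	ok3, _ := gofmtClean(crlf)
	fmt.Println(ok1, ok2, ok3)
}
```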

Build a Docker prereqs image for linux-amd64 longtest

Installing packages from the distro during CI can be slow and sometimes unreliable, so we should use an image with dependencies preinstalled. The image should be able to run the full set of tests (longtest, see #1), be hosted in a public ACR, and be based on Ubuntu (our first target).

We might want to set up a repo like https://github.com/dotnet/dotnet-buildtools-prereqs-docker. If we do, we should consider having it auto-update periodically to pick up base image changes and updates to any packages we use from the distro.

Skip tests during official builds

The official build (the internal build that produces artifacts) doesn't need to run tests, because the commit has already been validated. It should do the minimum work needed to produce artifacts, to improve turnaround time.

If we don't have rolling builds, we might not want to do this: there's a chance that two PRs merged close together conflict in a way that tests would catch but Git doesn't. We should have some way to detect this. Maybe running -short tests is sufficient?

Document licensing information

The current plan is that the Go files will continue with the existing BSD-style license, and any changes we make to those files will be under the BSD-style license. Infrastructure-only files checked into the microsoft/ dir will be licensed under the MIT license.

This should be clearly explained in the project docs.

Review ultra-verbose `go tool dist test` logs for unexpected skips

Broken off from original issue tracking full coverage vs. upstream: #1

Something we can do to be more confident our tests are running properly is to look at the verbose logs. Here's the modification I made to make the tests more verbose and show SKIPs:

diff --git a/src/cmd/dist/test.go b/src/cmd/dist/test.go
index 0c8e2c56bc..1248d9145b 100644
--- a/src/cmd/dist/test.go
+++ b/src/cmd/dist/test.go
@@ -271,7 +271,7 @@ func short() string {
 // defaults as later arguments in the command line.
 func (t *tester) goTest() []string {
 	return []string{
-		"go", "test", "-short=" + short(), "-count=1", t.tags(), t.runFlag(""),
+		"go", "test", "-v", "-short=" + short(), "-count=1", t.tags(), t.runFlag(""),
 	}
 }
 
@@ -350,6 +350,7 @@ func (t *tester) registerStdTest(pkg string, useG3 bool) {
 			}
 			args := []string{
 				"test",
+				"-v",
 				"-short=" + short(),
 				t.tags(),
 				t.timeout(timeoutSec),
@@ -388,6 +389,7 @@ func (t *tester) registerRaceBenchTest(pkg string) {
 			ranGoBench = true
 			args := []string{
 				"test",
+				"-v",
 				"-short=" + short(),
 				"-race",
 				t.timeout(1200), // longer timeout for race with benchmarks

~500 results for 'SKIP': https://gist.github.com/dagood/2bc08e37da295c2022b9196572d378a4
Results with 1 line of context (skip reason): https://gist.github.com/dagood/8105bad6aff4803794a4ed42c265a4fc

~500 lines isn't small, but it's manageable. There are duplicate skip reasons, which helps.

Unfortunately, there is no baseline from upstream to check against. We need to reason out whether each one is a problem.

This is based on a golang-devs thread: https://groups.google.com/g/golang-dev/c/PNzwZXOe7bQ/m/M43Gl9mVDAAJ.
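
To make the ~500 SKIP results easier to review, tallying duplicate skip reasons helps. Here's a sketch (`tallySkips` is a hypothetical helper; the sample log lines are made up but follow the real `go test -v` output shape):

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// tallySkips counts how many times each skip reason appears in
// `go test -v` output, where a "--- SKIP:" line is followed by an
// indented "file_test.go:NN: reason" line.
func tallySkips(log string) map[string]int {
	counts := make(map[string]int)
	lines := strings.Split(log, "\n")
	for i, line := range lines {
		if strings.Contains(line, "--- SKIP:") && i+1 < len(lines) {
			trimmed := strings.TrimSpace(lines[i+1])
			// Drop the "file_test.go:NN: " prefix so identical reasons group together.
			if parts := strings.SplitN(trimmed, ": ", 2); len(parts) == 2 {
				counts[parts[1]]++
			}
		}
	}
	return counts
}

func main() {
	// Made-up sample in the shape of real `go test -v` output.
	log := `=== RUN   TestA
--- SKIP: TestA (0.00s)
    a_test.go:10: skipping in short mode
=== RUN   TestB
--- SKIP: TestB (0.00s)
    b_test.go:20: skipping in short mode
=== RUN   TestC
--- SKIP: TestC (0.00s)
    c_test.go:30: requires network`

	counts := tallySkips(log)
	reasons := make([]string, 0, len(counts))
	for r := range counts {
		reasons = append(reasons, r)
	}
	sort.Strings(reasons)
	for _, r := range reasons {
		fmt.Println(counts[r], r)
	}
}
```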
