orion's Introduction

Orion logo

Task Status Matrix

Monorepo for building and publishing multiple Docker containers as microservices within a single repository.

What is Orion?

Orion is a build environment for containerized services we run in our fuzzing infrastructure (e.g. libFuzzer).

For spawning a cluster of Docker containers on EC2 or other cloud providers, see the parent project Laniakea.

How does it operate?

CI and CD are performed autonomously with Taskcluster and the Orion Decision service. A build is initiated only if a file belonging to a particular service, or one of its parent images, has been modified. Each image is tagged with either the latest revision or latest before being published to the Docker registry and as Taskcluster artifacts. For more information about a service, take a look at its README.md, or check out the Wiki pages for FAQs and a Docker cheat sheet.

Build Instructions and Development

Usage

You can build, test and push locally, which is useful for development and debugging. The commands below are general; each service may have more specific instructions defined in its README.md.

# run from within the service directory; the build context ../.. is the repository root
# ("service" here is a placeholder for the image name)
TAG=dev
docker build -t mozillasecurity/service:$TAG ../.. -f Dockerfile

... or to test the latest build:

TAG=latest

Running the fuzzer locally:

# obtain temporary Taskcluster credentials and export them into the environment
eval $(TASKCLUSTER_ROOT_URL=https://community-tc.services.mozilla.com taskcluster signin)
# collect logs in a timestamped directory, mounted into the container at /logs
LOGS="logs-$(date +%Y%m%d%H%M%S)"
mkdir -p "$LOGS"
docker run --rm -e TASKCLUSTER_ROOT_URL -e TASKCLUSTER_CLIENT_ID -e TASKCLUSTER_ACCESS_TOKEN -it -v "$(pwd)/$LOGS":/logs mozillasecurity/service:$TAG 2>&1 | tee "$LOGS/live.log"

... add any environment variables required by the fuzzer using -e VAR=value. Some fuzzer images alter kernel sysctls and will require docker run --privileged.
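For example, a hypothetical invocation that passes two fuzzer-specific variables and runs privileged (FUZZER and TOKENS are placeholder names, not variables defined by this repository):

# same as above, plus fuzzer-specific variables and --privileged for images that alter sysctls
docker run --rm -it --privileged \
  -e TASKCLUSTER_ROOT_URL -e TASKCLUSTER_CLIENT_ID -e TASKCLUSTER_ACCESS_TOKEN \
  -e FUZZER=libFuzzer -e TOKENS=dict.dict \
  -v "$(pwd)/$LOGS":/logs mozillasecurity/service:$TAG 2>&1 | tee "$LOGS/live.log"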

Testing

Before a build task is initiated in Taskcluster, each shell script and Dockerfile undergoes linting and testing, and a failure aborts the succeeding tasks. To ensure your Dockerfile passes, you are encouraged to install the pre-commit hook (pre-commit install) before committing, and to run any tests defined in the service folder before pushing your commit.
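A minimal local workflow, assuming pre-commit is not yet installed on your machine:

pip install pre-commit       # one-time setup
pre-commit install           # register the git hook for this checkout
pre-commit run --all-files   # run the same linters locally before pushing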

orion's People

Contributors

albill, alex, choller, garbas, jschwartzentruber, matt-boris, mozilla-github-standards, nth10sd, petemoore, posidron, pyoor, tysmith

orion's Issues

Inspect intermittent launch errors of test container

Running:

[Monorepo] DockerHub (test) INFO: ['docker', 'run', '--rm', 
'-v', '/var/run/docker.sock:/var/run/docker.sock', 
'-v', '/home/travis/build/MozillaSecurity/orion/base/fuzzos/tests/:/tmp/tests/', 
'gcr.io/gcp-runtimes/container-structure-test:latest', 
'test',  '--quiet', 
'--image', 'mozillasecurity/fuzzos:latest', 
'--config', '/tmp/tests/fuzzos_command_test.yaml', 
'--config', '/tmp/tests/fuzzos_metadata_test.yaml']

results in:

Error: Error creating container: API error (400): {"message":"OCI runtime create failed: container_linux.go:348: starting container process caused \"exec: \\\"rr\\\": executable file not found in $PATH\": unknown"}

This happens intermittently in cron builds, but it has also been reported locally.

`git clone --depth 1` may benefit from `--no-tags --shallow-submodules` as well

I noticed in fuzzmanager.sh that we're using --depth 1, which implies --single-branch, to get a checkout that I suspect we never run git on again. Adding the options --no-tags and (if used) --shallow-submodules would carry over that intent into git tags (which would be skipped) and submodules (which would get the same depth 1 indication).

I don't know if this will have any significant performance impact, or if the install process cares about git tags, but I figured I'd point it out since we must run this clone quite often.
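As a sketch, the adjusted clone could look like the following (the exact URL and other arguments used in fuzzmanager.sh may differ):

# shallow clone without tags; submodules, if any, are also cloned at depth 1
git clone --depth 1 --no-tags --shallow-submodules \
    https://github.com/MozillaSecurity/FuzzManager.git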

All services should be rebuilt periodically

When Orion ran in Travis, we had a weekly cron build that rebuilt all services. We need something like this again in Taskcluster, or services that change infrequently will have expired artifacts indexed.

Workers should be divided by spec, not fuzzing target

Currently we allocate a TC worker-pool per fuzzing pool. We should instead define pools based on specs (machine types allowed, worker features/capabilities required), which can be shared by one or more fuzzing pools. The maximum pool size would scale with the sum of the fuzzing pools using it.

This would allow us to modify machine specs without worrying about "stale" workers using an older configuration.

IndexError: list index out of range

from task BTRj0MD3RZSDZ6M7pic_0w:

Traceback (most recent call last):
  File "/usr/bin/decision", line 33, in <module>
    sys.exit(load_entry_point('orion-decision', 'console_scripts', 'decision')())
  File "/src/orion-decision/src/orion_decision/cli.py", line 163, in main
    sys.exit(Scheduler.main(args))
  File "/src/orion-decision/src/orion_decision/scheduler.py", line 306, in main
    evt = GithubEvent.from_taskcluster(args.github_action, args.github_event)
  File "/src/orion-decision/src/orion_decision/git.py", line 231, in from_taskcluster
    self.commit_range = f"{event['commits'][0]['id']}^..{event['after']}"
IndexError: list index out of range

Inputs:

{
      "GITHUB_EVENT": "{\"after\":\"07ad2a6d72db870aaec8b31e95a1b6d13721ec4e\",\"base_ref\":null,\"before\":\"0000000000000000000000000000000000000000\",\"commits\":[],\"compare\":\"https://github.com/MozillaSecurity/orion/compare/fix-cov-rev\",\"created\":true,\"deleted\":false,\"forced\":false,\"head_commit\":{\"added\":[],\"author\":{\"email\":\"[email protected]\",\"name\":\"Jesse Schwartzentruber\",\"username\":\"jschwartzentruber\"},\"committer\":{\"email\":\"[email protected]\",\"name\":\"Jesse Schwartzentruber\",\"username\":\"jschwartzentruber\"},\"distinct\":true,\"id\":\"07ad2a6d72db870aaec8b31e95a1b6d13721ec4e\",\"message\":\"[coverage-revision] Bash isn't installed. Emulate `-o pipefail` instead.\",\"modified\":[\"services/coverage-revision/launch.sh\"],\"removed\":[],\"timestamp\":\"2021-02-11T17:22:17-05:00\",\"tree_id\":\"87bd50a5890b9560ba64f61613b915d3533103cb\",\"url\":\"https://github.com/MozillaSecurity/orion/commit/07ad2a6d72db870aaec8b31e95a1b6d13721ec4e\"},\"installation\":{\"id\":5342972,\"node_id\":\"MDIzOkludGVncmF0aW9uSW5zdGFsbGF0aW9uNTM0Mjk3Mg==\"},\"organization\":{\"avatar_url\":\"https://avatars.githubusercontent.com/u/7916837?v=4\",\"description\":\"Fuzzing projects at the Mozilla Corporation\",\"events_url\":\"https://api.github.com/orgs/MozillaSecurity/events\",\"hooks_url\":\"https://api.github.com/orgs/MozillaSecurity/hooks\",\"id\":7916837,\"issues_url\":\"https://api.github.com/orgs/MozillaSecurity/issues\",\"login\":\"MozillaSecurity\",\"members_url\":\"https://api.github.com/orgs/MozillaSecurity/members{/member}\",\"node_id\":\"MDEyOk9yZ2FuaXphdGlvbjc5MTY4Mzc=\",\"public_members_url\":\"https://api.github.com/orgs/MozillaSecurity/public_members{/member}\",\"repos_url\":\"https://api.github.com/orgs/MozillaSecurity/repos\",\"url\":\"https://api.github.com/orgs/MozillaSecurity\"},\"pusher\":{\"email\":\"[email protected]\",\"name\":\"jschwartzentruber\"},\"ref\":\"refs/heads/fix-cov-rev\",\"repository\":{\"archive_url\":\"https://api.github.com/repos/MozillaSecurity/orion/{archive_format}{/ref}\",\"archived\":false,\"assignees_url\":\"https://api.github.com/repos/MozillaSecurity/orion/assignees{/user}\",\"blobs_url\":\"https://api.github.com/repos/MozillaSecurity/orion/git/blobs{/sha}\",\"branches_url\":\"https://api.github.com/repos/MozillaSecurity/orion/branches{/branch}\",\"clone_url\":\"https://github.com/MozillaSecurity/orion.git\",\"collaborators_url\":\"https://api.github.com/repos/MozillaSecurity/orion/collaborators{/collaborator}\",\"comments_url\":\"https://api.github.com/repos/MozillaSecurity/orion/comments{/number}\",\"commits_url\":\"https://api.github.com/repos/MozillaSecurity/orion/commits{/sha}\",\"compare_url\":\"https://api.github.com/repos/MozillaSecurity/orion/compare/{base}...{head}\",\"contents_url\":\"https://api.github.com/repos/MozillaSecurity/orion/contents/{+path}\",\"contributors_url\":\"https://api.github.com/repos/MozillaSecurity/orion/contributors\",\"created_at\":1496355919,\"default_branch\":\"master\",\"deployments_url\":\"https://api.github.com/repos/MozillaSecurity/orion/deployments\",\"description\":\"CI/CD pipeline for building and publishing multiple ๐Ÿณ containers as microservices within a mono 
repository.\",\"disabled\":false,\"downloads_url\":\"https://api.github.com/repos/MozillaSecurity/orion/downloads\",\"events_url\":\"https://api.github.com/repos/MozillaSecurity/orion/events\",\"fork\":false,\"forks\":10,\"forks_count\":10,\"forks_url\":\"https://api.github.com/repos/MozillaSecurity/orion/forks\",\"full_name\":\"MozillaSecurity/orion\",\"git_commits_url\":\"https://api.github.com/repos/MozillaSecurity/orion/git/commits{/sha}\",\"git_refs_url\":\"https://api.github.com/repos/MozillaSecurity/orion/git/refs{/sha}\",\"git_tags_url\":\"https://api.github.com/repos/MozillaSecurity/orion/git/tags{/sha}\",\"git_url\":\"git://github.com/MozillaSecurity/orion.git\",\"has_downloads\":true,\"has_issues\":true,\"has_pages\":false,\"has_projects\":false,\"has_wiki\":true,\"homepage\":\"\",\"hooks_url\":\"https://api.github.com/repos/MozillaSecurity/orion/hooks\",\"html_url\":\"https://github.com/MozillaSecurity/orion\",\"id\":93104559,\"issue_comment_url\":\"https://api.github.com/repos/MozillaSecurity/orion/issues/comments{/number}\",\"issue_events_url\":\"https://api.github.com/repos/MozillaSecurity/orion/issues/events{/number}\",\"issues_url\":\"https://api.github.com/repos/MozillaSecurity/orion/issues{/number}\",\"keys_url\":\"https://api.github.com/repos/MozillaSecurity/orion/keys{/key_id}\",\"labels_url\":\"https://api.github.com/repos/MozillaSecurity/orion/labels{/name}\",\"language\":\"Python\",\"languages_url\":\"https://api.github.com/repos/MozillaSecurity/orion/languages\",\"license\":{\"key\":\"mpl-2.0\",\"name\":\"Mozilla Public License 2.0\",\"node_id\":\"MDc6TGljZW5zZTE0\",\"spdx_id\":\"MPL-2.0\",\"url\":\"https://api.github.com/licenses/mpl-2.0\"},\"master_branch\":\"master\",\"merges_url\":\"https://api.github.com/repos/MozillaSecurity/orion/merges\",\"milestones_url\":\"https://api.github.com/repos/MozillaSecurity/orion/milestones{/number}\",\"mirror_url\":null,\"name\":\"orion\",\"node_id\":\"MDEwOlJlcG9zaXRvcnk5MzEwNDU1OQ==\",\"notifications_url\":\"https://api.github.com/repos/MozillaSecurity/orion/notifications{?since,all,participating}\",\"open_issues\":8,\"open_issues_count\":8,\"organization\":\"MozillaSecurity\",\"owner\":{\"avatar_url\":\"https://avatars.githubusercontent.com/u/7916837?v=4\",\"email\":\"[email 
protected]\",\"events_url\":\"https://api.github.com/users/MozillaSecurity/events{/privacy}\",\"followers_url\":\"https://api.github.com/users/MozillaSecurity/followers\",\"following_url\":\"https://api.github.com/users/MozillaSecurity/following{/other_user}\",\"gists_url\":\"https://api.github.com/users/MozillaSecurity/gists{/gist_id}\",\"gravatar_id\":\"\",\"html_url\":\"https://github.com/MozillaSecurity\",\"id\":7916837,\"login\":\"MozillaSecurity\",\"name\":\"MozillaSecurity\",\"node_id\":\"MDEyOk9yZ2FuaXphdGlvbjc5MTY4Mzc=\",\"organizations_url\":\"https://api.github.com/users/MozillaSecurity/orgs\",\"received_events_url\":\"https://api.github.com/users/MozillaSecurity/received_events\",\"repos_url\":\"https://api.github.com/users/MozillaSecurity/repos\",\"site_admin\":false,\"starred_url\":\"https://api.github.com/users/MozillaSecurity/starred{/owner}{/repo}\",\"subscriptions_url\":\"https://api.github.com/users/MozillaSecurity/subscriptions\",\"type\":\"Organization\",\"url\":\"https://api.github.com/users/MozillaSecurity\"},\"private\":false,\"pulls_url\":\"https://api.github.com/repos/MozillaSecurity/orion/pulls{/number}\",\"pushed_at\":1613485836,\"releases_url\":\"https://api.github.com/repos/MozillaSecurity/orion/releases{/id}\",\"size\":4097,\"ssh_url\":\"[email protected]:MozillaSecurity/orion.git\",\"stargazers\":39,\"stargazers_count\":39,\"stargazers_url\":\"https://api.github.com/repos/MozillaSecurity/orion/stargazers\",\"statuses_url\":\"https://api.github.com/repos/MozillaSecurity/orion/statuses/{sha}\",\"subscribers_url\":\"https://api.github.com/repos/MozillaSecurity/orion/subscribers\",\"subscription_url\":\"https://api.github.com/repos/MozillaSecurity/orion/subscription\",\"svn_url\":\"https://github.com/MozillaSecurity/orion\",\"tags_url\":\"https://api.github.com/repos/MozillaSecurity/orion/tags\",\"teams_url\":\"https://api.github.com/repos/MozillaSecurity/orion/teams\",\"trees_url\":\"https://api.github.com/repos/MozillaSecurity/orion/git/trees{/sha}\",\"updated_at\":\"2021-02-11T22:13:05Z\",\"url\":\"https://github.com/MozillaSecurity/orion\",\"watchers\":39,\"watchers_count\":39},\"sender\":{\"avatar_url\":\"https://avatars.githubusercontent.com/u/4673461?v=4\",\"events_url\":\"https://api.github.com/users/jschwartzentruber/events{/privacy}\",\"followers_url\":\"https://api.github.com/users/jschwartzentruber/followers\",\"following_url\":\"https://api.github.com/users/jschwartzentruber/following{/other_user}\",\"gists_url\":\"https://api.github.com/users/jschwartzentruber/gists{/gist_id}\",\"gravatar_id\":\"\",\"html_url\":\"https://github.com/jschwartzentruber\",\"id\":4673461,\"login\":\"jschwartzentruber\",\"node_id\":\"MDQ6VXNlcjQ2NzM0NjE=\",\"organizations_url\":\"https://api.github.com/users/jschwartzentruber/orgs\",\"received_events_url\":\"https://api.github.com/users/jschwartzentruber/received_events\",\"repos_url\":\"https://api.github.com/users/jschwartzentruber/repos\",\"site_admin\":false,\"starred_url\":\"https://api.github.com/users/jschwartzentruber/starred{/owner}{/repo}\",\"subscriptions_url\":\"https://api.github.com/users/jschwartzentruber/subscriptions\",\"type\":\"User\",\"url\":\"https://api.github.com/users/jschwartzentruber\"}}",
      "GITHUB_ACTION": "github-push",
}

Add a script to trigger a build manually and remotely

Sometimes a dependency of a base image changes upstream (e.g. on GitHub) while a new service is being developed. Once the service is pushed, the change is not immediately reflected in the base image because nothing triggered a rebuild (exception: a cron task triggers one). A user should be able to trigger a build manually in this case.

PR CI dependencies not found

PR #76 modifies the recipe recipes/linux/fuzzing_tc.sh, yet the dependency calculation (https://community-tc.services.mozilla.com/tasks/TygNjNkDTo6C0YJI7uRzXw/runs/0/logs/public/logs/live.log) contains:

DEBUG:orion_decision.orion:found path: fuzzing_tc.sh
INFO:orion_decision.orion:Image libfuzzer depends on path fuzzing_tc.sh
INFO:orion_decision.orion:Image grizzly depends on path fuzzing_tc.sh
INFO:orion_decision.orion:Image funfuzz depends on path fuzzing_tc.sh
INFO:orion_decision.git:Path changed in a5d450838a57e94ce99d6f5499a9cc4e3ba83f6a..42584ee060dcc715d994202e5a4570cc06a17c71: recipes/linux/fuzzing_tc.sh
INFO:orion_decision.scheduler:service funfuzz doesn't need to be rebuilt
INFO:orion_decision.scheduler:service grizzly doesn't need to be rebuilt
INFO:orion_decision.scheduler:service libfuzzer doesn't need to be rebuilt

Recipes are treated specially, so that only a basename match is required to create a dependency, but clearly this is incomplete.
This was probably broken in 580c9a2.

Index artifacts in push tasks

We currently use routes in the build task to index service tasks. We should instead index them manually with insertTask in the push task, so that only successful builds are indexed and post-build tests can run before the image is published (either to Docker Hub or the Taskcluster Index).

domato patches public?

Can the following patches be made public? Perhaps as a gist.
grizzly/corpman/resources/domato/add_fuzzPriv.patch
grizzly/corpman/resources/domato/grammar.patch

๐Ÿ› Fix build bustage of Grizzly

https://api.travis-ci.org/v3/job/488277277/log.txt

[Monorepo] DockerHub (build) INFO: Building image for services/grizzly/Dockerfile
Sending build context to Docker daemon  2.232kB


Step 1/8 : FROM mozillasecurity/fuzzos:latest
latest: Pulling from mozillasecurity/fuzzos
Digest: sha256:69c7bad5faadc6d1974fcedb83c8cbe77f09aa8879fb8f7d49b52ca39bf49019
Status: Image is up to date for mozillasecurity/fuzzos:latest
 ---> c54facd88970
Step 2/8 : LABEL maintainer Jesse Schwartzentruber <truber_mozilla.com>
 ---> Running in 8ae6fc7b7700
Removing intermediate container 8ae6fc7b7700
 ---> 2a1cd2c3a60d
Step 3/8 : USER root
 ---> Running in 862029d51d69
Removing intermediate container 862029d51d69
 ---> f29af429870c
Step 4/8 : COPY recipes/ /tmp/recipes/
 ---> cfe6068535bb
Step 5/8 : RUN /tmp/recipes/grizzly.sh     && rm -rf /tmp/recipes
 ---> Running in 2eb404ce044b
+ apt-get update -y -qq
+ cat
++ lsb_release -cs
++ lsb_release -cs
++ lsb_release -cs
+ curl -sL http://ddebs.ubuntu.com/dbgsym-release-key.asc
+ apt-key add -
Warning: apt-key output should not be parsed (stdout is not a terminal)
OK
+ apt-get update -y -qq
+ apt-get install -q -y libasound2 libdbus-glib-1-2 libglu1-mesa libglu1-mesa-dbgsym libosmesa6 libpulse0 libwayland-egl1-mesa-dbgsym nodejs p7zip-full python-dev python-setuptools python-wheel screen subversion ubuntu-restricted-addons virtualenv wget xvfb zip
Reading package lists...
Building dependency tree...
Reading state information...
python-setuptools is already the newest version (39.0.1-2).
xvfb is already the newest version (2:1.19.6-1ubuntu4.2).
nodejs is already the newest version (10.15.1-1nodesource1).
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:

The following packages have unmet dependencies:
 libwayland-egl1-mesa-dbgsym : Depends: libwayland-egl1-mesa (= 18.0.0~rc5-1ubuntu1)
E: Unable to correct problems, you have held broken packages.
The command '/bin/sh -c /tmp/recipes/grizzly.sh     && rm -rf /tmp/recipes' returned a non-zero code: 100
Traceback (most recent call last):
  File "./monorepo.py", line 409, in <module>
    sys.exit(MonorepoManager.main())
  File "./monorepo.py", line 393, in main
    Travis().run(args)
  File "./monorepo.py", line 253, in run
    self.deliver(folder, options, version)
  File "./monorepo.py", line 236, in deliver
    docker.build()
  File "./monorepo.py", line 127, in build
    os.path.dirname(self.dockerfile)
  File "/usr/lib/python3.4/subprocess.py", line 561, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['docker', 'build', '--pull', '--compress', '-t', 'mozillasecurity/grizzly:nightly', '-t', 'mozillasecurity/grizzly:latest', '-f', 'services/grizzly/Dockerfile', 'services/grizzly']' returned non-zero exit status 100

Also, the following packages can be removed as they are already present in the base image:

xvfb, nano, nodejs, python-setuptools

The base image comes with unzip and bzip2, whereas this image installs p7zip-full and zip. We should analyze the use cases and remove one or the other.

Use role in fuzzing decision.

Because of taskcluster/taskcluster#5660, a long scope list can cause a 401 error when the hook decision runs whenever a fuzzing pool is cycled. The workaround is to use a client or role instead of listing scopes, and we already have an appropriate role to use: the hook role.

It does have one extra scope – queue:create-task:highest:proj-fuzzing/ci – which is used to create the decision task itself but not needed by other tasks in the task group. This is not a big deal.

Support for building Windows AMIs

For some projects, we will need to maintain Windows AMIs in the same way as we maintain Docker images.

community-tc-config already has support for applying a bootstrap script to an existing AMI and taking a snapshot. We could implement this in orion instead as another service type.

This is different from existing services:

  • artifacts live in EC2, not TC.
  • "build" tasks would work with an external EC2 instance.

The following needs to be implemented:

  • Create an EC2 instance, apply a bootstrap script, and take a snapshot. The resulting AMI id(s) should be in an artifact of the task. It may be possible to reuse laniakea for this.
  • Copy the resulting AMI to other regions (see the CLI sketch after this list).
  • Share the AMI(s) with another AWS account (e.g. community-tc).
  • Update TC pools that use the AMI to use the new one. (trigger fuzzing-tc-config)
  • Delete old AMIs.
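As a rough sketch of the copy and share steps above using the AWS CLI (AMI IDs, regions and the account ID are placeholders):

# copy the freshly built AMI from the build region to another region
aws ec2 copy-image --source-region us-east-1 --source-image-id ami-0123456789abcdef0 \
    --region us-west-2 --name orion-windows-worker

# share the copied AMI with another AWS account (e.g. the community-tc account)
aws ec2 modify-image-attribute --region us-west-2 --image-id ami-0fedcba9876543210 \
    --launch-permission "Add=[{UserId=123456789012}]"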

The external EC2 instance could be left running if a task exception or failure occurs, so we would need a periodic hook to check for and remove orphans.

In the future this may need to support GCE or Azure VMs, but for now we only run Windows instances in EC2.

rr test fails

The RR installation test currently fails. I'm going to temporarily disable it so the image updates again on pushes.

=== RUN: Command Test: rr installation

--- FAIL

duration: 774.72321ms

stderr: rr: /tmp/recipes/rr.build.iQCOY9y38s/rr/src/util.cc:1141: bool rr::running_under_rr(): Assertion `ret == 0 || (ret == -1 && (*__errno_location ()) == 38)' failed.

Error: Expected string '--disable-cpuid-features' not found in output ''

Error: Test 'rr installation' exited with incorrect error code. Expected: 0, Actual: 139

apply_to should be allowed to specify heterogenous pools

Currently we require most fields to be identical for the whole set of apply_to configs.

# while these fields are not required to be defined here, they must be the same
# for the entire set .. at least for now
same_fields = (
    "cloud",
    "cores_per_task",
    "cpu",
    "cycle_time",
    "disk_size",
    "gpu",
    "imageset",
    "metal",
    "minimum_memory_per_core",
    "platform",
    "schedule_start",
)

This allows a shortcut later on where those values are set in the parent pool to be used by the decision somehow (according to the comment).

# set the field on self, so it can easily be used by decision
setattr(self, field, getattr(pools[0], field))

This shouldn't be necessary. It prevents us from creating a pool (e.g. with the macro COVERAGE=1) that applies to pools in disparate clouds or on different OSes.

Manually triggered decisions should use schedule when setting deadlines

If we cycle a fuzzing pool by manually triggering its hook, we set the new task deadlines based on the fuzzing config without regard for when the hook will next run based on the schedule. Since fuzzing-decision cancels existing tasks in the worker pool, this produces canceled tasks in Taskmanager which lack artifacts.

fuzzing-decision should check when the next scheduled hook will fire, and set deadlines accordingly. We'll still have some canceled tasks caused by manually cycling the pool, which can't be helped.

Add Clang 5

I installed only Clang 4, but Clang 5 has libFuzzer support built in. See the recipe for llvm.sh.

๐Ÿ› Fix build bustage of FunFuzz

https://travis-ci.org/MozillaSecurity/orion/jobs/495171136#L9078

+ apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 8B48AD6246925553
Warning: apt-key output should not be parsed (stdout is not a terminal)
Executing: /tmp/apt-key-gpghome.wFqZWlFnRp/gpg.1.sh --keyserver keyserver.ubuntu.com --recv-keys 8B48AD6246925553
gpg: key 8B48AD6246925553: 27 signatures not checked due to missing keys
gpg: key 8B48AD6246925553: public key "Debian Archive Automatic Signing Key (7.0/wheezy) <[email protected]>" imported
gpg: Total number processed: 1
gpg:               imported: 1
+ echo 'deb http://ftp.us.debian.org/debian testing main contrib non-free'
+ apt-get update -y -qq
W: GPG error: http://ftp.us.debian.org/debian testing InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 7638D0442B90D010 NO_PUBKEY 04EE7237B7D453EC
E: The repository 'http://ftp.us.debian.org/debian testing InRelease' is not signed.
The command '/bin/sh -c /tmp/recipes/install_prerequisites.sh     && rm -f /tmp/recipes/install_prerequisites.sh     && chown -R worker:worker /home/worker' returned a non-zero code: 100

Taskboot fails to checkout PR from a fork.

The PR is from https://github.com/jschwartzentruber/orion

We should use the clone URL of the fork, or git fetch pull/{n}/head before checkout.

[taskcluster 2021-05-26 15:29:10.621Z] === Task Starting ===
Linux 24bb6a3bf791 5.4.0-1029-gcp #31~18.04.1-Ubuntu SMP Thu Oct 22 09:43:51 UTC 2020 x86_64 Linux
INFO:root:Target setup in /tmp/taskboot.f3ojx7o5
INFO:taskboot.target:Cloning https://github.com/MozillaSecurity/orion @ dc63e916ba10f5f59e7e79836cd9cd462c04bd83
INFO:taskboot.target:Cloned into /tmp/taskboot.f3ojx7o5
fatal: reference is not a tree: dc63e916ba10f5f59e7e79836cd9cd462c04bd83
Traceback (most recent call last):
  File "/usr/bin/build", line 33, in <module>
    sys.exit(load_entry_point('orion-builder', 'console_scripts', 'build')())
  File "/src/orion-builder/src/orion_builder/build.py", line 102, in main
    target = Target(args)
  File "/usr/lib/python3.8/site-packages/taskboot/target.py", line 31, in __init__
  File "/usr/lib/python3.8/site-packages/taskboot/target.py", line 45, in clone
  File "/usr/lib/python3.8/subprocess.py", line 411, in check_output
  File "/usr/lib/python3.8/subprocess.py", line 512, in run
subprocess.CalledProcessError: Command '['git', 'checkout', 'dc63e916ba10f5f59e7e79836cd9cd462c04bd83', '-b', 'taskboot']' returned non-zero exit status 128.
[taskcluster 2021-05-26 15:29:14.492Z] === Task Finished ===
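A sketch of the second option: fetching the pull request head ref from the base repository makes commits that only exist on a fork reachable before the checkout (PR_NUMBER is a placeholder):

# fetch the PR head from the upstream repository, then check out the revision as before
git fetch origin "pull/${PR_NUMBER}/head"
git checkout dc63e916ba10f5f59e7e79836cd9cd462c04bd83 -b taskboot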

CODE_OF_CONDUCT.md file missing

As of January 1 2019, Mozilla requires that all GitHub projects include this CODE_OF_CONDUCT.md file in the project root. The file has two parts:

  1. Required Text - All text under the headings Community Participation Guidelines and How to Report, are required, and should not be altered.
  2. Optional Text - The Project Specific Etiquette heading provides a space to speak more specifically about ways people can work effectively and inclusively together. Some examples of those can be found on the Firefox Debugger project, and Common Voice. (The optional part is commented out in the raw template file, and will not be visible until you modify and uncomment that part.)

If you have any questions about this file, or Code of Conduct policies and procedures, please see Mozilla-GitHub-Standards or email [email protected].

(Message COC001)

Deal with network connectivity issues of remote peers

This isn't about DNS resolution issues.
This is about building only; at runtime we have HEALTHCHECK.

apt-get mirrors like ddebs.ubuntu.com were intermittently unreachable, which caused the cron task to fail. This should be prevented in the future, for other hosts as well.

Possible solution (for scripts only, not for Dockerfiles):

alias apt-get="retry apt-get"

retry is defined in commons.sh but only used in FuzzOS and not exposed to each service image.
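For illustration, a minimal retry helper along the lines of the one in commons.sh (the actual implementation there may differ):

retry () {
  # retry the given command up to 5 times, pausing between attempts
  local i
  for i in 1 2 3 4 5; do
    "$@" && return 0
    sleep 30
  done
  return 1
}

retry apt-get update -y -qq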

Alternatively, and even better: if the build job fails, simply relaunch it from the Monorepo manager script up to N times. If that still doesn't work, let the entire job fail.

Add back arm64 support

arm64 images were supported in Travis CI. Once Taskcluster supports arm64, we'll have to add support to orion-builder.

Let all Dockerfiles pass Hadolint

Analyze the following results and, where necessary, add ignores (see the example after the listing).

https://github.com/hadolint/hadolint

% hadolint Dockerfile
Dockerfile:12 SC2038 Use -print0/-0 or -exec + to allow for non-alphanumeric filenames.
Dockerfile:12 DL3003 Use WORKDIR to switch to a directory
Dockerfile:12 DL3008 Pin versions in apt get install. Instead of `apt-get install <package>` use `apt-get install <package>=<version>`
Dockerfile:12 DL3009 Delete the apt-get lists after installing something
hadolint images/grizzly/Dockerfile
images/grizzly/Dockerfile DL4001 Either use Wget or Curl but not both
images/grizzly/Dockerfile:1 DL3007 Using latest is prone to errors if the image will ever update. Pin the version explicitly to a release tag.
images/grizzly/Dockerfile:5 DL3002 Do not switch to root USER
images/grizzly/Dockerfile:7 DL3008 Pin versions in apt get install. Instead of `apt-get install <package>` use `apt-get install <package>=<version>`
images/grizzly/Dockerfile:7 DL3013 Pin versions in pip. Instead of `pip install <package>` use `pip install <package>==<version>`
images/libfuzzer/Dockerfile:1 DL3007 Using latest is prone to errors if the image will ever update. Pin the version explicitly to a release tag.
images/libfuzzer/Dockerfile:7 DL3002 Do not switch to root USER
images/libfuzzer/Dockerfile:8 DL3008 Pin versions in apt get install. Instead of `apt-get install <package>` use `apt-get install <package>=<version>`
images/libfuzzer/Dockerfile:8 DL3009 Delete the apt-get lists after installing something
hadolint images/u2f-hid-rs/Dockerfile
images/u2f-hid-rs/Dockerfile:1 DL3007 Using latest is prone to errors if the image will ever update. Pin the version explicitly to a release tag.
images/u2f-hid-rs/Dockerfile:7 DL3002 Do not switch to root USER
images/u2f-hid-rs/Dockerfile:8 DL3008 Pin versions in apt get install. Instead of `apt-get install <package>` use `apt-get install <package>=<version>`
images/u2f-hid-rs/Dockerfile:8 DL3009 Delete the apt-get lists after installing something
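Rules we decide not to fix can be suppressed on the command line, or inline by placing a comment such as # hadolint ignore=DL3008 directly above the offending instruction. For example:

# suppress selected rules for a single invocation
hadolint --ignore DL3007 --ignore DL3008 services/grizzly/Dockerfile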

Support arch manifests in Monorepo Manager for FuzzOS

This makes sure the correct image is fetched for each corresponding platform while keeping the same image name.

docker manifest create \
    mozillasecurity/fuzzos:latest \
    mozillasecurity/fuzzos-linux-amd64:latest \
    mozillasecurity/fuzzos-windows-amd64:latest

docker manifest push mozillasecurity/fuzzos:latest
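If the platform of an entry is not inferred correctly from the per-arch image, it can be set explicitly before pushing, e.g.:

docker manifest annotate --os windows --arch amd64 \
    mozillasecurity/fuzzos:latest mozillasecurity/fuzzos-windows-amd64:latest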

Use a fuzzing specific `schedulerId`

We currently use the default schedulerId (-). This requires queue:cancel-tasks:-/*, which is very permissive and would allow us to cancel anyone's tasks on the community-tc instance. We should use our own schedulerId to restrict our scopes further.

pmoore: note, tasks in a task graph need to have the same schedulerId, but that is entirely feasible

`credstash get` broken in Travis CI build: `Error loading shared library libssl.so.45`

Describe the bug
credstash get returns a missing-lib error.

To Reproduce
Steps to reproduce the behavior:

  1. Run docker run --rm mozillasecurity/credstash:latest get keys
  2. See error
Traceback (most recent call last):
  File "/usr/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/usr/lib/python3.6/site-packages/credstash.py", line 935, in <module>
    main()
  File "/usr/lib/python3.6/site-packages/credstash.py", line 922, in main
    getSecretAction(args, region, **session_params)
  File "/usr/lib/python3.6/site-packages/credstash.py", line 246, in func_wrapper
    return func(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/credstash.py", line 453, in getSecretAction
    **session_params))
  File "/usr/lib/python3.6/site-packages/credstash.py", line 501, in getSecret
    return open_aes_ctr_legacy(key_service, material)
  File "/usr/lib/python3.6/site-packages/credstash.py", line 603, in open_aes_ctr_legacy
    return _open_aes_ctr(key, LEGACY_NONCE, ciphertext, hmac, digest_method).decode("utf-8")
  File "/usr/lib/python3.6/site-packages/credstash.py", line 627, in _open_aes_ctr
    hmac = _get_hmac(hmac_key, ciphertext, digest_method)
  File "/usr/lib/python3.6/site-packages/credstash.py", line 656, in _get_hmac
    backend=default_backend()
  File "/usr/lib/python3.6/site-packages/cryptography/hazmat/backends/__init__.py", line 15, in default_backend
    from cryptography.hazmat.backends.openssl.backend import backend
  File "/usr/lib/python3.6/site-packages/cryptography/hazmat/backends/openssl/__init__.py", line 7, in <module>
    from cryptography.hazmat.backends.openssl.backend import backend
  File "/usr/lib/python3.6/site-packages/cryptography/hazmat/backends/openssl/backend.py", line 71, in <module>
    from cryptography.hazmat.bindings.openssl import binding
  File "/usr/lib/python3.6/site-packages/cryptography/hazmat/bindings/openssl/binding.py", line 15, in <module>
    from cryptography.hazmat.bindings._openssl import ffi, lib
ImportError: Error loading shared library libssl.so.45: No such file or directory (needed by /usr/lib/python3.6/site-packages/cryptography/hazmat/bindings/_openssl.abi3.so)

Expected behavior
credstash get should work. I have rebuilt the image locally (no changes) and it works as expected.

Logs
The credstash image digest is: sha256:7a2b225221e60504dd543ba502aaefcd82c35c1e2677eb453adbd0d3cae72523
I don't see anything out of the ordinary in the corresponding travis log.

Desktop (please complete the following information):

  • reproduced both on AWS CoreOS using IAM role auth, and locally on Ubuntu 16.04 with the .aws credentials folder mounted. Same behavior.

Changes to the config repository should not trigger everything

Currently any change to the configuration repository (even a README update) will cause the hook cron schedule to be recalculated for every pool, and thus everything will be cycled. We should only update Taskcluster (and cycle) pools that are actually changed.
