act-js's Issues

Parse `set-env` and `add-path` from logs

Feature request
Same as #66, but for other commands. Note #70 might invalidate this?

Additional context
Example:

[Android Build/Unit Tests]   ✅  Success - Main Set up Java for Android SDK.
[Android Build/Unit Tests]   ⚙  ::set-env:: JAVA_HOME=/opt/hostedtoolcache/Java_Temurin-Hotspot_jdk/17.0.10-7/x64
[Android Build/Unit Tests]   ⚙  ::set-env:: JAVA_HOME_17_X64=/opt/hostedtoolcache/Java_Temurin-Hotspot_jdk/17.0.10-7/x64
[Android Build/Unit Tests]   ⚙  ::set-output:: distribution=Temurin-Hotspot
[Android Build/Unit Tests]   ⚙  ::set-output:: path=/opt/hostedtoolcache/Java_Temurin-Hotspot_jdk/17.0.10-7/x64
[Android Build/Unit Tests]   ⚙  ::set-output:: version=17.0.10+7
[Android Build/Unit Tests]   ⚙  ::add-path:: /opt/hostedtoolcache/Java_Temurin-Hotspot_jdk/17.0.10-7/x64/bin
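A minimal sketch of how such lines could be extracted from a raw act log, assuming the ⚙ ::command:: key=value shape shown above (the function and field names are illustrative, not act-js's actual parser):

// Hypothetical parser for ⚙-annotated command lines in an act log.
// Assumes lines look like: "[Job]   ⚙  ::set-env:: KEY=value"
const COMMAND_LINE = /^\[([^\]]+)\]\s+⚙\s+::([\w-]+)::\s*(.*)$/;

function parseCommands(log) {
  const commands = [];
  for (const line of log.split("\n")) {
    const match = line.match(COMMAND_LINE);
    if (!match) continue;
    const [, rawJob, command, rest] = match;
    const job = rawJob.trim();
    if (command === "set-env" || command === "set-output") {
      const eq = rest.indexOf("=");
      commands.push({ job, command, name: rest.slice(0, eq), value: rest.slice(eq + 1) });
    } else if (command === "add-path") {
      commands.push({ job, command, value: rest.trim() });
    }
  }
  return commands;
}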

Passing `undefined` as `cwd` to constructor causes strange behavior

Describe the bug
new Act(gh.repo.getPath("some invalid repo")) ends up passing undefined to the Act constructor. This, in my case, caused an error where (somehow) a .travis.yml file was being parsed by Act, with the following output:

Error: workflow is not valid. .travis.yml: yaml: unmarshal errors:
line 28: cannot unmarshal !!seq into map[string]string

In this case, the .travis.yml file was located in a Node module at path: /path/to/my/project/node_modules/function-bind

To Reproduce
Pass undefined to the constructor while you have an invalid *.yml file somewhere, e.g. under node_modules.

Expected behavior
The constructor probably should require a cwd, or throw if one is provided as null or undefined.
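A minimal sketch of the kind of guard that would give the suggested behavior, assuming the two-argument constructor used elsewhere on this page (the real constructor and its defaults may differ):

// Illustrative guard only; not act-js's actual constructor.
class Act {
  constructor(cwd, workflowFile) {
    if (cwd == null) {
      // fail fast instead of letting act scan an unrelated directory for *.yml files
      throw new TypeError("Act: 'cwd' must be the path of an existing repository");
    }
    this.cwd = cwd;
    this.workflowFile = workflowFile;
  }
}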

Enable logging even when act stalls

Describe the bug
When running act with logging enabled, the log file is only created if act finishes running. In some cases act can stall and the only way to debug is to look at the raw logs. However, the log file is not created for such cases.

Expected behavior
Logs should be added to the log file even if act stalls.
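One way to get this, sketched under the assumption that act-js spawns the act binary as a child process (the helper below is illustrative, not the library's actual implementation): pipe stdout/stderr into the log file as data arrives instead of writing it after the process exits.

const { spawn } = require("child_process");
const fs = require("fs");

function runActWithLiveLog(args, logFile) {
  const logStream = fs.createWriteStream(logFile, { flags: "a" });
  const child = spawn(process.env.ACT_BINARY ?? "act", args);
  // written incrementally, so the file exists even if act stalls
  child.stdout.pipe(logStream, { end: false });
  child.stderr.pipe(logStream, { end: false });
  return new Promise((resolve) =>
    child.on("close", (code) => {
      logStream.end();
      resolve(code);
    })
  );
}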

Cannot use `mockSteps` with specific `workflowFile` passed to Act

Hello there! First of all, thank you for this lib; it's turning out to be very useful so far, so I appreciate the work put into it!

Describe the bug
I believe there's a bug when you try to use step mocking and pass a specific workflow file to the act runner at the same time.

To Reproduce

Workflow file
name: Test
on: [push]

jobs:
  repro:
    name: bug repro
    runs-on: ubuntu-latest
    steps:
    - name: step 1 (mocked)
      run: echo "this is the real command"
    - name: step 2 (not mocked)
      run: echo "this is the real command"

Minimal test file
const path = require("path");
const { Act } = require("@kie/act-js");
const { MockGithub } = require("@kie/mock-github");
const { beforeEach, test, afterEach, expect} = require("@jest/globals");

let mockGithub;


beforeEach(async () => {
    mockGithub = new MockGithub({
        repo: {
            test: {
                files: [
                    {
                        src: path.join(__dirname, "workflowfile-plus-mocksteps-repro.yaml"),
                        dest: "/.github/workflows/workflow-file-plus-mocksteps-repro.yaml",
                    }
                ]
            },
        }
    })
    await mockGithub.setup();

})

afterEach(async () => {
    await mockGithub.teardown();
})

test("workflow file + mockSteps repro", async () => {
    const act = new Act(mockGithub.repo.getPath("test"), ".github/workflows/workflow-file-plus-mocksteps-repro.yaml");
    const result = await act.runEvent("push", {
        logFile: process.env.ACT_LOG ? "workflow.log" : undefined,
        mockSteps: {
            "repro": [{name: "step 1 (mocked)", mockWith: "echo this is the mocked command"}]
        }
    })
    expect(result).toStrictEqual([
        {name: "Main step 1 (mocked)", status: 0, output: "this is the mocked command"},
        {name: "Main step 2 (not mocked)", status: 0, output: "this is the real command"}
    ])
})

Expected behavior

Both options can be used together, and the given workflow file executes with the proper step mocked.

Logs
It never gets far enough for act to actually execute, but the Jest output is shown below.

 FAIL  action/deploy/v3/test/repro.test.js
  ✕ workflow file + mockSteps repro (1268 ms)

  ● workflow file + mockSteps repro

    Could not locate workflow-file-plus-mocksteps-repro.yaml

      30 | test("workflow file + mockSteps repro", async () => {
      31 |     const act = new Act(mockGithub.repo.getPath("test"), ".github/workflows/workflow-file-plus-mocksteps-repro.yaml");
    > 32 |     const result = await act.runEvent("push", {
         |                    ^
      33 |         logFile: process.env.ACT_LOG ? "workflow.log" : undefined,
      34 |         mockSteps: {
      35 |             "repro": [{name: "step 1 (mocked)", mockWith: "echo this is the mocked command"}]

      at StepMocker.getWorkflowPath (node_modules/@kie/act-js/build/src/step-mocker/step-mocker.js:63:19)
      at StepMocker.mock (node_modules/@kie/act-js/build/src/step-mocker/step-mocker.js:17:31)
      at node_modules/@kie/act-js/build/src/act/act.js:131:35
          at Array.map (<anonymous>)
      at Act.handleStepMocking (node_modules/@kie/act-js/build/src/act/act.js:129:46)
      at Act.runEvent (node_modules/@kie/act-js/build/src/act/act.js:118:9)
      at Object.<anonymous> (action/deploy/v3/test/repro.test.js:32:20)

Test Suites: 1 failed, 1 total
Tests:       1 failed, 1 total
Snapshots:   0 total
Time:        1.556 s, estimated 2 s
Ran all test suites matching /action\/deploy\/v3\/test\/repro.test.js/i.

Additional context
I suspect this is a problem with the arguments passed to the StepMocker from the Act class. The StepMocker expects the workflowFile + cwd here, but what actually gets passed from Act's handleStepMocking call is the path to the workflow file twice, so when this function tries to find the workflow file, none of the if branches match.
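For illustration only, a resolution along these lines in getWorkflowPath would make the reported setup work; the real function and its inputs may differ:

// Illustrative sketch, assuming StepMocker receives both cwd and workflowFile.
const path = require("path");
const fs = require("fs");

function getWorkflowPath(cwd, workflowFile) {
  const candidates = [
    workflowFile,                                                 // already absolute or relative to the process CWD
    path.join(cwd, workflowFile),                                 // relative to the repo root
    path.join(cwd, ".github", "workflows", path.basename(workflowFile)),
  ];
  const found = candidates.find((candidate) => fs.existsSync(candidate));
  if (!found) {
    throw new Error(`Could not locate ${path.basename(workflowFile)}`);
  }
  return found;
}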

Installation of v2.0.7 errors.

Describe the bug
Installation of @kie/act-js v2.0.7 fails with this error:

npm install --save-dev @kie/act-js  
npm ERR! code 127
npm ERR! path /Users/ringods/Projects/pulumi/build-automation/node_modules/@kie/act-js
npm ERR! command failed
npm ERR! command sh -c npm run prebuild
npm ERR! > @kie/[email protected] prebuild
npm ERR! > ./scripts/act.sh 0.2.43
npm ERR! sh: ./scripts/act.sh: No such file or directory

npm ERR! A complete log of this run can be found in:
npm ERR!     /Users/ringods/.npm/_logs/2023-04-25T14_57_58_799Z-debug-0.log

To Reproduce

$ npm install --save-dev @kie/act-js

Expected behavior

Installation should work as with earlier versions. As a workaround, I fixed my version to v2.0.6.

Logs

Snippet from the generated log file:

83 info run @kie/[email protected] preinstall { code: 127, signal: null }
84 timing reify:rollback:createSparse Completed in 33ms
85 timing reify:rollback:retireShallow Completed in 0ms
86 timing command:install Completed in 787ms
87 verbose stack Error: command failed
87 verbose stack     at ChildProcess.<anonymous> (/Users/ringods/.volta/tools/image/node/16.15.0/lib/node_modules/npm/node_modules/@npmcli/promise-spawn/index.js:64:27)
87 verbose stack     at ChildProcess.emit (node:events:527:28)
87 verbose stack     at maybeClose (node:internal/child_process:1092:16)
87 verbose stack     at Process.ChildProcess._handle.onexit (node:internal/child_process:302:5)
88 verbose pkgid @kie/[email protected]

Additional context

There is a v2.0.7 on npmjs.com but no corresponding GitHub tag or GitHub release.

Ability to add a step to a job

Feature request
I would like to insert a step into a workflow from a test.

I think this syntax would work:

      mockSteps: {
        "job-name": [
          {
            before: N, // or after: N,
            mockWith: {...}
          }
        ]
      }

Additional context
When testing local actions we need to clone the MockGithub repository, but in production we don't need this, and we don't want to create noise in the real executions by adding an if: ... condition or a placeholder step to replace with a mock.

Workaround
For now I did this:

      # "Placeholder for test to hook into."
      - id: checkout
        run: |
          true

+

      mockSteps: {
        "job-name": [
          {
            id: "checkout",
            mockWith: {
              name: "Checkout (test only)"
              uses: "actions/checkout@v4",
              run: undefined,
            }
          }
        ]
      }

Proposed solution

No yaml changes.
+

      mockSteps: {
        "job-name": [
          {
            before: 0,
            mockWith: {
              name: "Checkout (test only)"
              uses: "actions/checkout@v4",
            }
          }
        ]
      }
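For illustration, applying such an entry to a job's parsed steps array could look like this (purely a sketch of the proposed before/after semantics, not an existing act-js function):

// Insert the mocked step before or after the given index in a job's steps.
function applyInsertMock(steps, mock) {
  const inserted = { ...mock.mockWith };
  if (typeof mock.before === "number") {
    steps.splice(mock.before, 0, inserted);      // insert before index N
  } else if (typeof mock.after === "number") {
    steps.splice(mock.after + 1, 0, inserted);   // insert after index N
  }
  return steps;
}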

Shipped `act` binary does not work on macOS

Describe the bug
The error is:

Errors .../mock-github-act-js-examples/workflow/simple/node_modules/@kie/act-js/build/bin/act:
.../mock-github-act-js-examples/workflow/simple/node_modules/@kie/act-js/build/bin/act: cannot execute binary file

npx act-js prints the same error.

It starts working when the user sets ACT_BINARY to their local act installation, so there seems to be an issue with the act binary built by this package.

To Reproduce
Clone and run shubhbapna/mock-github-act-js-examples in the following environment:

Apple M1 Ventura 13.0
node: v16.15.0
git: 2.40.0

Expected behavior
The act binary shipped with this package should work on macOS as well.

Additional context
Reported by @aleksei-commmune in shubhbapna/mock-github-act-js-examples#1

`mockApi` failing on WSL2

Describe the bug
It seems like mockApi is failing in a WSL2 Windows 11 environment.

To Reproduce

Workflow
name: Act Push Test 1
on: push
jobs:
  push1:
    runs-on: ubuntu-latest
    steps:
      - run: echo "push 1"
      - name: secrets
        run: echo $TEST1
        env:
          TEST1: ${{secrets.SECRET1}}
      - name: env
        run: echo $ENV1
      - name: pass
        run: echo "pass"
      - name: fail
        run: echo "fail" && exit 1
Code

Taken from act.test.ts

 test("run with proxy", async () => {
    const mockapi = new Mockapi({
      google: {
        baseUrl: "http://google.com",
        endpoints: {
          root: {
            get: {
              path: "/",
              method: "get",
              parameters: {
                query: [],
                path: [],
                body: [],
              },
            },
          },
        },
      },
    });

    const act = new Act();
    const output = await act.runJob("mock", {
      workflowFile: resources,
      mockApi: [
        mockapi.mock.google.root
          .get()
          .setResponse({ status: 200, data: "mock response" }),
      ],
    });
    expect(output).toStrictEqual([
      {
        name: "Main https api call",
        status: 0,
        output: expect.stringMatching(/<HTML><HEAD>.+/),
      },
      {
        name: "Main http api call",
        status: 0,
        output: "mock response",
      },
    ]);
  });

Expected behavior
The test should pass; it passes in other Linux environments.

Logs

run › run with proxy

    expect(received).toStrictEqual(expected) // deep equality

    - Expected  - 7
    + Received  + 2

      Array [
        Object {
          "name": "Main https api call",
    -     "output": StringMatching /<HTML><HEAD>.+/,
    -     "status": 0,
    -   },
    -   Object {
    -     "name": "Main http api call",
    -     "output": "mock response",
    -     "status": 0,
    +     "output": "",
    +     "status": 1,
        },
      ]

      133 |       ],
      134 |     });
    > 135 |     expect(output).toStrictEqual([
          |                    ^
      136 |       {
      137 |         name: "Main https api call",
      138 |         status: 0,

      at Object.<anonymous> (test/unit/act/act.test.ts:135:20)

Parse workflow command messages like debug/notice/warning/error

Feature request
Same as #66, but for other commands. Note #70 might invalidate this?

Additional context
Docs: https://docs.github.com/en/actions/using-workflows/workflow-commands-for-github-actions

Example:

[Android/Build] ⭐ Run Main Upload 'Application APKs' artifact.
[Android/Build]   🐳  docker cp src=~/.cache/act/actions-upload-artifact@v3/ dst=/var/run/act/actions/actions-upload-artifact@v3/
[Android/Build]   🐳  docker exec cmd=[node /var/run/act/actions/actions-upload-artifact@v3/dist/index.js] user= workdir=
[Android/Build]   💬  ::debug::followSymbolicLinks 'true'
[Android/Build]   💬  ::debug::implicitDescendants 'true'
[Android/Build]   💬  ::debug::omitBrokenSymbolicLinks 'true'
[Android/Build]   💬  ::debug::followSymbolicLinks 'true'
[Android/Build]   💬  ::debug::implicitDescendants 'true'
[Android/Build]   💬  ::debug::matchDirectories 'true'
[Android/Build]   💬  ::debug::omitBrokenSymbolicLinks 'true'
[Android/Build]   💬  ::debug::Search path '~/.../testRepo/app/build/outputs/apk'
[Android/Build]   💬  ::debug::Search path '~/.../testRepo/app/build/outputs/mapping'
[Android/Build]   | Multiple search paths detected. Calculating the least common ancestor of all paths
[Android/Build]   💬  ::debug::Using search path ~/.../testRepo/app/build/outputs/apk
[Android/Build]   💬  ::debug::Using search path ~/.../testRepo/app/build/outputs/mapping
[Android/Build]   | The least common ancestor is ~/.../testRepo/app/build/outputs. This will be the root directory of the artifact
[Android/Build]   ❗  ::error::No files were found with the provided path: ~/.../testRepo/app/build/outputs/apk/*/*.apk%0A~/.../testRepo/app/build/outputs/mapping/*/mapping.txt%0A~/.../testRepo/app/build/outputs/mapping/*/configuration.txt. No artifacts will be uploaded.
[Android/Build]   ❌  Failure - Main Upload 'Application APKs' artifact.
[Android/Build] exitcode '1': failure

It should be possible to assert on `No files were found with the provided path: ...` or even `Using search path ...`.
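A hypothetical helper for that kind of assertion, assuming the 💬/❗ lines keep the ::level::message shape shown above:

// Collect ::debug::/::notice::/::warning::/::error:: messages from a raw act log.
const ANNOTATION = /::(debug|notice|warning|error)::(.*)$/;

function annotations(log, level) {
  return log
    .split("\n")
    .map((line) => line.match(ANNOTATION))
    .filter((match) => match && (!level || match[1] === level))
    .map((match) => ({ level: match[1], message: match[2].trim() }));
}

// e.g. expect(annotations(log, "error")[0].message).toMatch(/No files were found/);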

Flaky tests caused by parallel jobs log parsing

Describe the bug
If one job starts a group, and another logs while that group is open, then the parser crashes with:

Cannot read properties of undefined (reading 'output')
TypeError: Cannot read properties of undefined (reading 'output')
    at OutputParser.parseStepOutput (node_modules/@kie/act-js/build/src/output-parser/output-parser.js:119:61)
    at OutputParser.parseOutput (node_modules/@kie/act-js/build/src/output-parser/output-parser.js:26:18)
this.groupMatrix[stepOutputMatcherResult[1]][length - 1].output += ...
                                                        ^ LHS undefined

this.groupMatrix[stepOutputMatcherResult[1]] === [], therefore length is 0, and ...[-1] === undefined.

To Reproduce

test("original", async () => {
    const contents = await fs.readFile("act.log");
    console.log(new OutputParser(contents.toString()).parseOutput());
});

act.log:

[Build/Tests            ] ⭐ Run Main Testing
[Build/Tests            ]   ❓  ::group::Unit Tests
[Build/Static Analysis  ] ⭐ Run Main Lint
[Build/Static Analysis  ]   | No problems found.
[Build/Tests            ]   | 1231 tests executed.
[Build/Tests            ]   ❓  ::endgroup::
[Build/Tests            ]   ❓  ::group::Integration Tests
[Build/Tests            ]   | 32 tests executed.
[Build/Tests            ]   ❓  ::endgroup::
[Build/Static Analysis  ]   ✅  Success - Main Linting
[Build/Tests            ]   ✅  Success - Main Testing

Expected behavior
"Just works".

Logs

Distilled real log focused on the issue
[Android Build/Unit Tests       ] 🚀  Start image=ghcr.io/catthehacker/ubuntu:act-latest
[Android Build/Android Lint     ] 🚀  Start image=ghcr.io/catthehacker/ubuntu:act-latest
[Android Build/Unit Tests       ] ⭐ Run Main Set up Android SDK.
[Android Build/Unit Tests       ]   🐳  docker cp src=/home/runner/.cache/act/android-actions-setup-android@v3/ dst=/var/run/act/actions/android-actions-setup-android@v3/
[Android Build/Android Lint     ] ⭐ Run Main Set up Android SDK.
[Android Build/Android Lint     ]   🐳  docker cp src=/home/runner/.cache/act/android-actions-setup-android@v3/ dst=/var/run/act/actions/android-actions-setup-android@v3/
[Android Build/Android Lint     ]   | [command]/root/.android/sdk/cmdline-tools/11.0/bin/sdkmanager tools
[Android Build/Unit Tests       ]   | [command]/root/.android/sdk/cmdline-tools/11.0/bin/sdkmanager tools
[Android Build/Android Lint     ]   | [command]/root/.android/sdk/cmdline-tools/11.0/bin/sdkmanager platform-tools
[Android Build/Android Lint     ]   ❓ add-matcher /run/act/actions/android-actions-setup-android@v3/matchers.json
[Android Build/Android Lint     ]   ✅  Success - Main Set up Android SDK.
[Android Build/Android Lint     ] ⭐ Run Main Set up Java for Project.
[Android Build/Android Lint     ]   🐳  docker cp src=/home/runner/.cache/act/actions-setup-java@v3/ dst=/var/run/act/actions/actions-setup-java@v3/
[Android Build/Android Lint     ]   ❓  ::group::Installed distributions
[Android Build/Android Lint     ]   | Resolved Java 17.0.9+9 from tool-cache
[Android Build/Unit Tests       ]   | [command]/root/.android/sdk/cmdline-tools/11.0/bin/sdkmanager platform-tools
[Android Build/Android Lint     ]   |   Path: /opt/hostedtoolcache/Java_Temurin-Hotspot_jdk/17.0.9-9/x64
[Android Build/Android Lint     ]   ❓  ::endgroup::
[Android Build/Android Lint     ]   ✅  Success - Main Set up Java for Project.
[Android Build/Unit Tests       ]   ❓ add-matcher /run/act/actions/android-actions-setup-android@v3/matchers.json
[Android Build/Unit Tests       ]   ✅  Success - Main Set up Android SDK.

Additional context
Looking at output-parser.js, as far as I can see .isPartOfGroup is handled at the parser level, but it should be handled per job.
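A sketch of what per-job group tracking could look like; the class and method names are illustrative, not the actual output-parser internals:

// Track the currently open ::group:: separately for each job so that log lines
// from a parallel job never get appended to another job's group.
class GroupTracker {
  constructor() {
    this.openGroupByJob = new Map(); // job name -> currently open group, if any
  }
  startGroup(job, name) {
    this.openGroupByJob.set(job, { name, output: "" });
  }
  appendOutput(job, text) {
    const group = this.openGroupByJob.get(job);
    if (group) group.output += text;   // only lines from the same job join the group
    return group !== undefined;
  }
  endGroup(job) {
    const group = this.openGroupByJob.get(job);
    this.openGroupByJob.delete(job);
    return group;
  }
}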

Add `run` command to execute using event and job

Feature request
We can define both an event and a job when executing act, so add a run command that uses both.
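A later issue on this page already calls a method of this shape (runEventAndJob), so a usage sketch under that assumption:

// Assumes a combined runner of the shape runEventAndJob(event, job, opts),
// matching the call shown in the artifactServer issue further down this page.
const { Act } = require("@kie/act-js");

async function run() {
  const act = new Act();
  return act.runEventAndJob("push", "unit-tests", { logFile: "act.log" });
}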


The automated release is failing 🚨

🚨 The automated release from the main branch failed. 🚨

I recommend you give this issue a high priority, so other packages depending on you can benefit from your bug fixes and new features again.

You can find below the list of errors reported by semantic-release. Each one of them has to be resolved in order to automatically publish your package. I’m sure you can fix this 💪.

Errors are usually caused by a misconfiguration or an authentication problem. With each error reported below you will find explanation and guidance to help you to resolve it.

Once all the errors are resolved, semantic-release will release your package the next time you push a commit to the main branch. You can also manually restart the failed CI job that runs semantic-release.

If you are not sure how to resolve this, here are some links that can help you:

If those don’t help, or if this issue is reporting something you think isn’t right, you can always ask the humans behind semantic-release.


Invalid npm token.

The npm token configured in the NPM_TOKEN environment variable must be a valid token allowing to publish to the registry https://registry.npmjs.org/.

If you are using Two Factor Authentication for your account, set its level to "Authorization only" in your account settings. semantic-release cannot publish with the default "Authorization and writes" level.

Please make sure to set the NPM_TOKEN environment variable in your CI with the exact value of the npm token.


Good luck with your project ✨

Your semantic-release bot 📦🚀

Remove node 14

Node 14 is reaching EOL. We should stop supporting it as well.

Enable custom parseRunOptions

Feature request
Either:

  1. stop defining parseRunOpts as private - it's only a type-level declaration, and it would be great to be able to customize that behavior without errors
  2. allow customizing the actArguments used at the parseRunOpts level

Additional context
First of all, thank you for this library 🙏

We're creating a class extending Act, and right now we are overriding parseRunOpts, which is declared private only at the .d.ts level, so it causes type errors but doesn't actually restrict anything. I think it would be great to have more freedom in customizing how parsing is done if someone decides to go further in using the Act class as a base.

https://github.com/Expensify/App/blob/main/workflow_tests/utils/ExtendedAct.js
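For reference, the current workaround looks roughly like this (the parseRunOpts signature is assumed here, not taken from act-js's published types):

// Works at runtime because the "private" marker only exists in the .d.ts;
// TypeScript consumers currently see an error on this override.
const { Act } = require("@kie/act-js");

class ExtendedAct extends Act {
  parseRunOpts(opts) {
    // adjust or append act CLI arguments here, then defer to the base class
    return super.parseRunOpts(opts);
  }
}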

Add option to use different platforms

Feature request
act lets us set different platforms using the --platform flag. This would allow us to set different base images for each run, which will be useful when we want to test on macos-latest or other platforms.

Add option to set input

Feature request
Just like we set env and secrets, we need a way to set input as well.

const act = new Act();
await act.setInput("input", "value").runEvent("push")


Node 20 Support Missing Due to Outdated act Version Installed by act-js

Describe the bug

act-js installs act version 0.2.48, which lacks support for Node 20, even though act added Node 20 support in version 0.2.55. This discrepancy leads to failures when trying to run workflows with the latest versions of GitHub Actions designed for Node 20.

To Reproduce

  1. Install act-js
  2. Check act version

OR

  1. Install act-js
  2. Try to run a workflow containing a github action that uses node20

Expected behavior
The latest act version is installed and the workflow runs as expected.

Logs

Error: The runs.using key in action.yml must be one of: [composite docker node12 node16], got node20

Additional context
act's fixed issue

Can mock APIs for HTTP but not HTTPS

Describe the bug

We are able to mock HTTP API requests but when we attempt to mock HTTPS API requests, they instead pass through to the original, un-mocked endpoints.

To Reproduce

Here I've set up three mocks.

I added the moctokit one so I could be sure I wasn't doing the mock setup incorrectly.

Workflow
name: Mock API Test

on:
  pull_request:

jobs:
  metrics:
    name: Metrics
    runs-on: ubuntu-latest
    steps:
      - name: HTTP api call
        run: |
          result=$(curl -s http://google.com)
          echo "$result"

      - name: HTTPS api call
        run: |
          result=$(curl -s https://google.com)
          echo "$result"

      - run: |
          curl -s -L \
            -H "Accept: application/vnd.github+json" \
            -H "Authorization: Bearer ${{ github.token }}" \
            -H "X-GitHub-Api-Version: 2022-11-28" \
            -o response.json \
            https://api.github.com/rate_limit
          cat response.json
Jest Test Code
let { MockGithub, Moctokit } = require("@kie/mock-github")
let { Act, Mockapi } = require("@kie/act-js")
let path = require("path")

jest.setTimeout(60000)

let mockGithub
beforeEach(async () => {
  mockGithub = new MockGithub({
    repo: {
      testCompositeAction: {
        files: [
          {
            src: path.join(__dirname, "mock-api.test.yml"),
            dest: ".github/workflows/mock-api.test.yml",
          },
        ],
      },
    },
  })

  await mockGithub.setup()
})

afterEach(async () => {
  await mockGithub.teardown()
})

test("it mocks api calls", async () => {
  const moctokit = new Moctokit()
  const mockapi = new Mockapi({
    google_https: {
      baseUrl: "https://google.com",
      endpoints: {
        root: {
          get: {
            path: "/",
            method: "get",
            parameters: {
              query: [],
              path: [],
              body: [],
            },
          },
        },
      },
    },
    google_http: {
      baseUrl: "http://google.com",
      endpoints: {
        root: {
          get: {
            path: "/",
            method: "get",
            parameters: {
              query: [],
              path: [],
              body: [],
            },
          },
        },
      },
    }
  })
  const act = new Act(mockGithub.repo.getPath("testCompositeAction"))
    .setGithubToken("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") // fake token
  const result = await act.runEvent("pull_request", {
    logFile: path.join(__dirname, "../logs/metrics.log"),
    mockApi: [
      mockapi.mock.google_http.root
        .get()
        .setResponse({ status: 200, data: "mock response" }),
      mockapi.mock.google_https.root
        .get()
        .setResponse({ status: 200, data: "mock response" }),
      moctokit.rest.rateLimit
        .get()
        .setResponse({ status: 200, data: "mock response" }),
    ]
  })
  console.log(result)
})
Test Output
console.log
  [
    { name: 'Main HTTP api call', status: 0, output: 'mock response' },
    {
      name: 'Main HTTPS api call',
      status: 0,
      output: '<HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">\n' +
        '<TITLE>301 Moved</TITLE></HEAD><BODY>\n' +
        '<H1>301 Moved</H1>\n' +
        'The document has moved\n' +
        '<A HREF="https://www.google.com/">here</A>.\n' +
        '</BODY></HTML>'
    },
    {
      name: 'Main curl -s -L \\',
      status: 0,
      output: '{\n' +
        '"message": "Bad credentials",\n' +
        '"documentation_url": "https://docs.github.com/rest"\n' +
        '}'
    }
  ]

Expected behavior

All three mocks should intercept the api calls and respond with 'mock response'.

Instead only the HTTP API request is being mocked. The other two are hitting the actual APIs.

Logs

Logs
[Mock API Test/Metrics] 🚀  Start image=ghcr.io/catthehacker/ubuntu:act-latest
[Mock API Test/Metrics]   🐳  docker pull image=ghcr.io/catthehacker/ubuntu:act-latest platform=linux/amd64 username= forcePull=true
[Mock API Test/Metrics]   🐳  docker create image=ghcr.io/catthehacker/ubuntu:act-latest platform=linux/amd64 entrypoint=["tail" "-f" "/dev/null"] cmd=[]
[Mock API Test/Metrics]   🐳  docker run image=ghcr.io/catthehacker/ubuntu:act-latest platform=linux/amd64 entrypoint=["tail" "-f" "/dev/null"] cmd=[]
[Mock API Test/Metrics] ⭐ Run Main HTTP api call
[Mock API Test/Metrics]   🐳  docker exec cmd=[bash --noprofile --norc -e -o pipefail /var/run/act/workflow/0] user= workdir=
[Mock API Test/Metrics]   | mock response
[Mock API Test/Metrics]   ✅  Success - Main HTTP api call
[Mock API Test/Metrics] ⭐ Run Main HTTPS api call
[Mock API Test/Metrics]   🐳  docker exec cmd=[bash --noprofile --norc -e -o pipefail /var/run/act/workflow/1] user= workdir=
[Mock API Test/Metrics]   | <HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
[Mock API Test/Metrics]   | <TITLE>301 Moved</TITLE></HEAD><BODY>
[Mock API Test/Metrics]   | <H1>301 Moved</H1>
[Mock API Test/Metrics]   | The document has moved
[Mock API Test/Metrics]   | <A HREF="https://www.google.com/">here</A>.
[Mock API Test/Metrics]   | </BODY></HTML>
[Mock API Test/Metrics]   ✅  Success - Main HTTPS api call
[Mock API Test/Metrics] ⭐ Run Main curl -s -L \
  -H "Accept: application/vnd.github+json" \
  -H "Authorization: Bearer ***" \
  -H "X-GitHub-Api-Version: 2022-11-28" \
  -o response.json \
  https://api.github.com/rate_limit
cat response.json
[Mock API Test/Metrics]   🐳  docker exec cmd=[bash --noprofile --norc -e -o pipefail /var/run/act/workflow/2] user= workdir=
[Mock API Test/Metrics]   | {
[Mock API Test/Metrics]   |   "message": "Bad credentials",
[Mock API Test/Metrics]   |   "documentation_url": "https://docs.github.com/rest"
[Mock API Test/Metrics]   | }
[Mock API Test/Metrics]   ✅  Success - Main curl -s -L \
  -H "Accept: application/vnd.github+json" \
  -H "Authorization: Bearer ***" \
  -H "X-GitHub-Api-Version: 2022-11-28" \
  -o response.json \
  https://api.github.com/rate_limit
cat response.json
[Mock API Test/Metrics] 🏁  Job succeeded

Additional context

I've tried this on a MacBook (Ventura 13.5) as well as on an AWS EC2 instance (Amazon Linux 2023), with the same results.

Make artifactServer.port optional

Feature request
artifactServer.port should allow an undefined value, because act has a default for --artifact-server-port: https://github.com/nektos/act/blob/v0.2.54/cmd/root.go#L93

Additional context
I'm trying to override artifactServer.path so the output ends up in a folder I can inspect in the test. For this I currently have to do the following (testOutput returns an absolute path):

const result = await act.runEventAndJob("workflow_call", "unit-tests", {
      logFile: testOutput("act.log"),
      artifactServer: {
          path: testOutput("artifacts"),
          port: 34567 /* default: https://github.com/nektos/act/blob/v0.2.54/cmd/root.go#L93 */
      },
});

Notice that I do not actually need to set the port value to achieve what I want; the default would suffice.

Failure during Act run is not "raised" in test output

Describe the bug
If an error occurs (from Act) while running a test, no information is surfaced unless the log is captured and inspected.

To Reproduce

  1. Introduce a failure within the action's YML under test (for instance, add a step with a uses that points to a non-existent file path, e.g. step.should-fail.uses: ./path/to/no/dir).
  2. Run tests e.g. npm test.

The test may show as passed when Act actually failed under the hood. (Screenshot of the passing test output omitted.)

Expected behavior
If Act fails during a test run, the test should fail (e.g. an error should be thrown).

Logs
See above.
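Until act-level failures are surfaced, a hedged workaround is to assert on the parsed step results directly (this assumes an act instance set up as in the other examples on this page):

// Fails the test if act produced no parseable steps or if any step reported a non-zero status.
test("workflow runs without act-level failures", async () => {
  const result = await act.runEvent("push", { logFile: "act.log" });
  expect(result.length).toBeGreaterThan(0);                      // act produced parseable output at all
  expect(result.every((step) => step.status === 0)).toBe(true);  // no step reported a failure
});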

Parsing of `$GITHUB_OUTPUT` variables

Feature request
I have workflows that pass their outputs via $GITHUB_OUTPUT.

For example:

[test-add-assignee-to-issue/add-assignee-to-issue]   ✅  Success - Main Test
[test-add-assignee-to-issue/add-assignee-to-issue]   ⚙  ::set-output:: ORG_NAME=automated-test-org
[test-add-assignee-to-issue/add-assignee-to-issue]   ⚙  ::set-output:: REPO_NAME=assignee-repo
[test-add-assignee-to-issue/add-assignee-to-issue]   ⚙  ::set-output:: ISSUE_ASSIGNEES=user1,user2
[test-add-assignee-to-issue/add-assignee-to-issue]   ⚙  ::set-output:: HOSTNAME=github.com

It would be nice if these outputs could be parsed and packaged so they can more easily integrate with a test runner.


use `--json` flag for better output parsing

Feature request

act has an option to enable JSON output. Using JSON would provide much easier output parsing than the current regex-based string parsing and would also significantly reduce maintenance complexity.

Additional context

The output parser is a pluggable module, which makes it easy for us to add another output parser without any breaking changes.
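A sketch of what JSON-based parsing could look like, assuming act --json emits one JSON object per line with at least msg and level fields (the exact field names would need to be confirmed against act's output):

// Parse newline-delimited JSON log entries into a simple, assertable shape.
function parseJsonOutput(raw) {
  return raw
    .split("\n")
    .filter((line) => line.trim().startsWith("{"))
    .map((line) => JSON.parse(line))
    .map((entry) => ({ level: entry.level, message: entry.msg, job: entry.job }));
}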

Mocking HTTPS requests even when a CONNECT request is sent

Feature request

Currently we are not able to mock HTTPS requests if the client sends a CONNECT request first. I do try to "fool" these clients by setting HTTPS_PROXY to an HTTP location, but it doesn't work for all clients: for example, it doesn't work for curl but it does work for axios.

The issue with a CONNECT request is that it tells the proxy to set up a TCP tunnel to the destination, which is then secured by TLS. Since the tunnel is encrypted, the proxy cannot read the actual requests and therefore cannot mock them.

So for example:

  1. The client wants to make a request to https://google.com/ via the proxy running at http://localhost:3000/
  2. The client issues a CONNECT request to the proxy. This request only contains the host ("google.com") and port ("443"), and nothing else from the original request
  3. The proxy sets up a tunnel between the client and google.com
  4. The client initiates a TLS handshake, after which any data flowing through the tunnel is encrypted

One option to explore would be implementing a MITM proxy, but the issue with that is getting the containers spun up by act to accept the CA certs without having to manually force it.
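For context, a minimal Node proxy shows why the tunneled traffic is opaque: on CONNECT the proxy only learns host:port and then blindly pipes encrypted bytes (plain HTTP requests hitting the same server could still be mocked).

const http = require("http");
const net = require("net");

const proxy = http.createServer(); // plain HTTP requests arriving here could still be inspected and mocked
proxy.on("connect", (req, clientSocket, head) => {
  const [host, port] = req.url.split(":"); // e.g. "google.com:443" is all the proxy ever sees
  const upstream = net.connect(Number(port) || 443, host, () => {
    clientSocket.write("HTTP/1.1 200 Connection Established\r\n\r\n");
    upstream.write(head);
    upstream.pipe(clientSocket); // from here on it is TLS ciphertext in both directions
    clientSocket.pipe(upstream);
  });
});
proxy.listen(3000);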
