node's Issues

Call to unimplemented callback napi_create_int32

When trying to use replay to run Jest tests I get this error:

❯ replay-node --exec npm test

> [email protected] test /Users/dan/devel/backend
> jest

Unimplemented callback napi_create_int32
[the line above is repeated 24 times in the output]
Unimplemented callback napi_create_int64

When I run replay-recordings ls I see:

  {
    "id": 1529019873,
    "createTime": "Fri Sep 17 2021 14:42:46 GMT-0400 (Eastern Daylight Time)",
    "runtime": "node",
    "metadata": {
      "argv": [
        "/Users/dan/devel/backend/node_modules/.bin/jest"
      ]
    },
    "status": "onDisk",
    "path": "/Users/dan/.replay/recording-1529019873.dat",
    "unusableReason": "Call to unimplemented callback napi_create_int32"
  }  

as the most recent recording

For Replay folks, fwiw, this was me running replay-node --exec npm test in the backend repository.

Use a stable value for function offsets

Currently the offsets we use within functions are indexes into the global array of instrumentation sites. For breakpoints in replay to work, these need to be consistent, which requires script compilation order to be the same between the recording and different replays. That isn't the case, and isn't something we can reasonably expect to enforce without running into problems --- compilation can be triggered by all sorts of non-deterministic VM activity. We should use bytecode offsets for function offsets instead, like we do in gecko.
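A toy sketch (pure illustration, not V8 or driver code) of why global site indexes drift with compilation order while per-function bytecode offsets stay stable:

```javascript
// Two scripts with instrumentation sites at fixed bytecode offsets
// within their functions.
const scripts = {
  a: { sites: [0, 12, 40] }, // bytecode offsets inside function `a`
  b: { sites: [0, 8] },
};

// Global indexes are handed out in compilation order.
function assignGlobalIndexes(order) {
  const indexes = {};
  let next = 0;
  for (const name of order) {
    indexes[name] = scripts[name].sites.map(() => next++);
  }
  return indexes;
}

const recording = assignGlobalIndexes(["a", "b"]);
const replaying = assignGlobalIndexes(["b", "a"]); // different compile order

console.log(recording.a); // [ 0, 1, 2 ]
console.log(replaying.a); // [ 2, 3, 4 ]  -- global indexes drift...
console.log(scripts.a.sites); // [ 0, 12, 40 ] -- ...bytecode offsets don't
```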

Embed record/replay driver in node binary

Currently the path to the record/replay driver has to be specified via the RECORD_REPLAY_DRIVER environment variable when running node in order to record. This file has to be downloaded and kept track of separately, which is unfortunate as node is otherwise a fully self-contained binary.

It would be better to embed the driver's contents in the node binary at build time, so that when RECORD_REPLAY_DRIVER is not set, the driver can be written to a temporary file on startup and dlopen'ed from there.

Unsupported signal 2 delivered

I tried to record an infinite loop because I wanted to see where it was getting stuck. The only way to terminate my program was to send it a SIGINT via ctrl+c. Unfortunately that seems to render the recording unusable. When I run replay-recordings ls I get:

  {
    "id": 663276012,
    "createTime": "Thu Oct 21 2021 16:39:05 GMT-0400 (Eastern Daylight Time)",
    "runtime": "node",
    "metadata": {
      "argv": [
        "./node_modules/.bin/ts-node",
        "scripts/dev-backfill.ts"
      ]
    },
    "status": "unusable",
    "path": "/Users/dan/.replay/recording-663276012.dat",
    "unusableReason": "Recording invalidated: Unsupported signal 2 delivered"
  }

Rebase Node onto 16.x

We've been having problems recording node scripts for development / testing in the backend because we haven't rebased for around a year and a half. We should rebase onto node 16.x to resolve this before making further performance and other improvements.

Environment variable redaction

It would be good to redact environment variables from recordings by default --- when making a node recording, users will not expect it to expose all of their potentially sensitive environment variables to people viewing the recording. The ideal behavior here is that the contents of environment variables which are never explicitly accessed by the script should be redacted. With node that is a bit tricky: there is a process.env object that contains all environment variables, and while it is normally accessed like process.env.FOO, materializing it requires the VM to load the contents of every environment variable into its state. The environment is also frequently used in spread operators when spawning subprocesses with different environment settings. So this is non-trivial to get right, and whatever we do should be reusable more generally for redaction.
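A minimal sketch of the access-tracking idea, assuming a Proxy wrapper around process.env (`makeTrackedEnv` is a hypothetical name, not a real API); the ownKeys trap shows why spreads defeat naive tracking:

```javascript
function makeTrackedEnv(realEnv) {
  const accessed = new Set();
  const env = new Proxy(realEnv, {
    get(target, prop) {
      // Explicit reads like env.FOO mark FOO as accessed.
      if (typeof prop === "string") accessed.add(prop);
      return target[prop];
    },
    ownKeys(target) {
      // A spread like { ...env } enumerates every key, marking everything
      // as accessed; one reason naive tracking is not enough.
      const keys = Reflect.ownKeys(target);
      for (const key of keys) accessed.add(key);
      return keys;
    },
  });
  // What the recording would keep: real values for accessed variables,
  // placeholders for the rest.
  const redacted = () =>
    Object.fromEntries(
      Object.keys(realEnv).map((k) => [
        k,
        accessed.has(k) ? realEnv[k] : "<redacted>",
      ])
    );
  return { env, redacted };
}
```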

Near term, we should make sure we have good messaging around the fact that node recordings have information about the environment variables that are available. Filed https://github.com/RecordReplay/devtools/issues/6064 and https://github.com/RecordReplay/devtools/issues/6065 for this.

Use RecordReplaySetProgressCallback API

This is a relatively new driver API which greatly improves performance when replaying recordings with lots of JS execution --- when replaying we can run through large parts of the recording without incurring overhead from the instrumentation callbacks. Adopting it should fix the performance discrepancies vs. gecko on recordings like this. Here are the gecko changes from a few months ago: replayio/gecko-dev@e07aff1

replay-node creates no recordings for jest in a Yarn PnP project

Version

v16.18.0

Platform

Darwin VT62F39XMV 22.4.0 Darwin Kernel Version 22.4.0: Mon Mar 6 20:59:28 PST 2023; root:xnu-8796.101.5~3/RELEASE_ARM64_T6000 arm64

Subsystem

No response

What steps will reproduce the bug?

mkdir test
cd test
yarn init -y
yarn set version berry
yarn add @replayio/node @replayio/replay jest
echo "test('hello, world', () => { expect(true).toBe(true) })" > hello.test.js
yarn replay-node --exec jest hello.test.js # or yarn replay-node --exec yarn jest -- hello.test.js
yarn replay ls

How often does it reproduce? Is there a required condition?

100% of the time

What is the expected behavior?

Creates replay

What do you see instead?

Creates no recording

$ yarn replay ls
ID  Status  Title  Created At

Additional information

$ yarn -v
4.0.0-rc.42

$ yarn jest --version
29.5.0

Node sourcemaps are not being applied

Version

No response

Platform

No response

Subsystem

No response

What steps will reproduce the bug?

Create a recording with node of a script with sourcemaps and open the recording in Replay.

How often does it reproduce? Is there a required condition?

No response

What is the expected behavior?

No response

What do you see instead?

Sourcemaps are not being applied.

Additional information

The source objects we get from basic processing in node recordings don't contain a sourceMapURL, but the sourcemaps are uploaded with their sourceMapURL, so the backend thinks they don't match.
We could:

  1. ensure that the source objects from basic processing do contain their sourceMapURL
  2. write the sourcemaps in node recordings without their sourceMapURL
  3. ignore the sourceMapURL when matching sourcemaps in the backend if it is undefined in the source object from basic processing

The first option seems to be the most desirable, but I don't know how much effort that would be. Alternatively the third option would be easy to do and should be safe if we require the source's contentHash to be available and match in that case.
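The third option could look something like this sketch (field names like `sourceMapURL` and `contentHash` follow the discussion above, but the surrounding structure is assumed, not the actual backend schema):

```javascript
// Treat a sourcemap as matching a source when the sourceMapURLs agree, or,
// when the source object has no sourceMapURL (as with sources from basic
// processing of node recordings), when the contentHash matches.
function sourcemapMatches(source, map) {
  if (source.sourceMapURL !== undefined) {
    return source.sourceMapURL === map.sourceMapURL;
  }
  // Fall back to contentHash, and require it to be present on both sides
  // so the relaxed rule stays safe.
  return (
    source.contentHash !== undefined && source.contentHash === map.contentHash
  );
}
```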

Build in a container on linux

When building node on linux the resulting binary has dependencies on a version of libc / libc++ that is >= the version on the machine used to build node in the first place. This is a problem when trying to use the version of node in a portable way, as containers often have an older version of libc installed.

It isn't easy to install an old version of libc on the system and then link against it when building node. I think it would be best to build node inside a container with only the desired glibc version installed. This will make it much easier to manage the build environment and have control over the resulting binary.

Replay not pausing at correct position in call stack for breakpoint

Version

v14.17.6

Platform

Darwin Nates-MacBook-Air.local 21.4.0 Darwin Kernel Version 21.4.0: Mon Feb 21 20:36:53 PST 2022; root:xnu-8020.101.4~2/RELEASE_ARM64_T8101 x86_64

Subsystem

No response

What steps will reproduce the bug?

As a possible workaround for #28 I used process.exit to exit my script at a desirable point. The command I used to make the recording is replay-node --exec ts-node ./src/index.ts.

How often does it reproduce? Is there a required condition?

Seemingly every time

What is the expected behavior?

Replay should pause at the specified breakpoint

What do you see instead?

Replay seems to be pausing at an unrelated breakpoint

Additional information

The replay I'm experiencing this bug on is https://app.replay.io/recording/replay-of-ts-node--2833115f-26fd-4160-8b1d-ec19f86d8d16

Replay of the bug occurring is at https://app.replay.io/recording/replay-misidentifying-where-to-pause-for-breakpoint--3668d82a-b2e6-49cd-a1cb-0ba103cdcd85

Reported on Discord at https://discord.com/channels/779097926135054346/798693728999047203/980889833998282792

Screen recording uploaded to Discord at https://discord.com/channels/779097926135054346/798693728999047203/980890518898753566

Add build ID

The record/replay driver needs to be passed a build ID that uniquely identifies the running software. Node doesn't seem to have any suitable identifier for this --- it has major/minor/patch versions, but these are too coarse-grained. We should generate a unique ID every time we build node and compile it into the binary so that it can be passed to RecordReplayAttach.

Add back build script

It isn't currently possible to build from source without access to our backend repository. We should add back the build script needed and avoid having any dependencies on the backend.

Add --build-id CLI option

Node has a --version option to print the version, but when recording/replaying we also have a build ID that uniquely identifies the running software. It should be possible to print this as well with a --build-id option.

Add handler for Target.currentGeneratorId

This command is used to associate separate activations of the same generator frame with a unique ID, which allows the backend to stitch together these activations into the overall frame and allow stepping across awaits and yields.

Devtools e2e perma-failures after 16.x rebase

Devtools e2e has started perma-failing after releasing a new version of node based on 16.x:

https://github.com/RecordReplay/devtools/actions/runs/2353796782

I looked at one recording and it looks like evaluations are failing:

https://app.replay.io/recording/replay-of-devtools-nf5vxx1f2-recordreplayvercelapp--dcd57557-9f81-4d5d-87b4-3be6f4d0f55f?point=38617707887746171743783085932544000&time=20359.512012012012&hasFrames=false

I'm going to roll back to the last version before the rebase for now.

Node recording doesn't show current scope, nor the current breakpoint is highlighted

Description:

I've made a node recording:
https://app.replay.io/recording/6e5d8fce-4570-4c6c-a759-b1b402fe0352

Unfortunately, it's not quite usable at the moment. I've located a call site that was supposed to be a starting point of my debugging session and I can verify that most likely the number of calls to this function is correct (so something is working here). However, I don't see the current breakpoint being highlighted, stepping doesn't help, I can't hover over local identifiers, and trying to evaluate them in the console fails as well.

I've recorded my experience with this here:
https://app.replay.io/recording/bc9d2aaa-3c82-47f6-b518-75e0a7a496ec

Avoid calling getentropy

The recording driver currently has a problem where uses of missing library symbols won't be recorded/replayed properly. This should be fixed soon, but in the meantime we should avoid calling getentropy, a symbol which is only present in recent versions of glibc.

Show uncaught exceptions

The console in node recordings doesn't show uncaught exceptions. We should add this --- even though the exception will stop the node process, it's still important to be able to quickly jump to that last error.
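For reference, the standard Node hook for observing these is uncaughtException; how the recorder would mirror the message into the recording's console log is the open part of this issue:

```javascript
// Log the final uncaught exception before the process dies, so it can be
// surfaced in the recording's console timeline (the forwarding into the
// recording is the assumption here; the hook itself is standard Node).
process.on("uncaughtException", (err) => {
  console.error("Uncaught exception:", err && err.stack ? err.stack : err);
  process.exitCode = 1;
});
```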

Listing recordings always shows a single result

I assume that replay-recordings ls should list all existing (or at least the last X) recordings. At the moment it only shows one.

I've recorded a replay and got such a result after running replay-recordings ls:

[
  {
    "id": 1337562688,
    "createTime": "Mon Sep 20 2021 17:08:55 GMT+0200 (Central European Summer Time)",
    "runtime": "node",
    "metadata": {
      "argv": [
        "./node_modules/.bin/tsc",
        "--noEmit"
      ]
    },
    "status": "startedUpload",
    "path": "/Users/mateuszburzynski/.replay/recording-1337562688.dat",
    "server": "wss://dispatch.replay.io",
    "recordingId": "e47e0791-68a5-4bf9-bcaf-fb06e49c3a1d"
  }
]

then I've made a second recording (using the same script and all) and got this back:

[
  {
    "id": 1337562688,
    "createTime": "Mon Sep 20 2021 17:08:55 GMT+0200 (Central European Summer Time)",
    "runtime": "node",
    "metadata": {
      "argv": [
        "./node_modules/.bin/tsc",
        "--noEmit"
      ]
    },
    "status": "startedUpload",
    "path": "/Users/mateuszburzynski/.replay/recording-1337562688.dat",
    "server": "wss://dispatch.replay.io",
    "recordingId": "2ca00a03-1a0e-496c-bbc0-3c08f0763007"
  }
]

Notice that recordingId is different (and IIRC it uploads as a new recording) but the rest is the same. createTime looks like the createTime of my very first node recording from yesterday.

I've actually made some small changes to the script I'm trying to debug since yesterday. After uploading a new recording and going through the debugger I noticed that I wasn't debugging what I expected - I realized I was debugging the old content of the script. That's what prompted this bug report.

Node Source Map support

Now that the replay-cli can upload source maps, it would be great if the Node runtime could write the maps to disk.

Node wrapper

It would be nice to have a small node module that wrapped the recorder and handled downloading the latest version, running the scripts, and outputting the recording URL. Also, debatably, we should hide the replay logs behind a -v flag.

0 hits reported incorrectly

I'm debugging this recording:
https://app.replay.io/recording/21594023-f9ae-4787-912c-c391a68304aa

and I've noticed that sometimes I'm not able to step into certain functions. Take a look at this loom:
https://www.loom.com/share/9628264e684a4d0eb66e5176296972e5

Notice how I wanted to step into reportImplicitAny but I was taken out of this frame, to the point after the call in reportImplicitAny's caller.

One additional thing I've noticed here is that 0 hits are reported on the line containing that reportImplicitAny call.

This is clearly incorrect as I was paused at this line of code.

RecordReplayInstrumentation opcode length can vary

The indexes associated with RecordReplayInstrumentation opcodes can vary between recording and replaying, or between different replays, according to differing compilation behavior. We deal with this by using bytecode offsets to identify instrumentation sites (#5), but this isn't working right either: the varying indexes can cause the lengths of the RecordReplayInstrumentation operands to vary, and thus the bytecode offsets themselves. I don't see a good operand type which will always use four bytes, but we can hack around this by always setting the high bits of the operand so that the bytecode emitter is never able to compress the emitted operands.
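A toy model of the workaround (not the real bytecode emitter): with a varint-style operand encoding, small indexes compress to fewer bytes, but OR-ing in a high bit pins every operand to the same width:

```javascript
// Toy varint: 7 payload bits per byte, as in typical varint compression.
function varintLength(value) {
  let bytes = 1;
  while (value >= 0x80) {
    value = Math.floor(value / 0x80);
    bytes++;
  }
  return bytes;
}

// Setting a high bit forces every operand into the same length bucket.
const HIGH_BIT = 0x10000000;

function pinnedOperand(index) {
  return HIGH_BIT | index;
}

// Raw indexes encode to different lengths, shifting later bytecode offsets:
console.log(varintLength(5), varintLength(300)); // 1 2
// Pinned operands always encode to the same length:
console.log(
  varintLength(pinnedOperand(5)) === varintLength(pinnedOperand(300))
); // true
```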

node_build_id.o: No such file or directory

Hi
I get the following error message on several different linux distributions when executing
"./configure && make -j4 -C out BUILDTYPE=Release V=8":

ar: /mnt/c/Users/steph/Downloads/replay/node/out/Release/obj.target/libnode/src/node_build_id.o: No such file or directory
make: *** [libnode.target.mk:413: /mnt/c/Users/steph/Downloads/replay/node/out/Release/obj.target/libnode.a] Error 1

How could I resolve this issue and build this project correctly?

Thanks in advance!
Steve

Generate symbols when building

After building a new version of node we need to generate a symbols archive which can be used when symbolifying backtraces.

Node doesn't implement inspecting Proxy objects

Version

replay-jest "--version" 28.1.0

Platform

Darwin Dans-MacBook-Pro.local 21.5.0 Darwin Kernel Version 21.5.0: Tue Apr 26 21:08:37 PDT 2022; root:xnu-8020.121.3~4/RELEASE_ARM64_T6000 arm64

Subsystem

No response

What steps will reproduce the bug?

In this replay (https://app.replay.io/recording/replay-of-jestjs--db53cc51-f3e5-4917-94eb-2c2209e68f0a?point=649037108049433612651252949188830&time=1324.5876182829033&hasFrames=true#) I try to inspect process.env and its values.

How often does it reproduce? Is there a required condition?

No response

What is the expected behavior?

I would expect to get a map of all of my environment variables

What do you see instead?

Instead I get an empty Proxy object.


Additional information

No response

ExecutionPoints are not monotonically increasing

  • Version: v16.0.0
  • Platform: Linux 693f93ffa32a 5.4.0-1043-gcp #46-Ubuntu SMP Mon Apr 19 19:17:04 UTC 2021 x86_64 GNU/Linux
  • Subsystem:

What steps will reproduce the bug?

If I set up several breakpoints, sometimes when looking for the next continue targets, the ExecutionPoints are not always monotonically increasing (w.r.t. time). Here's a small trace from a session against https://replay.io/view?id=6dabd641-7891-470e-b2aa-e6c52e06a7f9 obtained by adding a few breakpoints and then calling Debugger.findResumeTarget on each ExecutionPoint:

INFO:root:Moving forward from 0 to {'point': '3458764449396031495', 'time': 0.005393452348848498, 'frame': [{'sourceId': '95', 'line': 2, 'column': 2}], 'reason': 'breakpoint'}
INFO:root:Moving forward from 3458764449396031495 to {'point': '4611685954002878504', 'time': 0.008090178523272746, 'frame': [{'sourceId': '95', 'line': 4, 'column': 18}], 'reason': 'breakpoint'}
INFO:root:Moving backwards from 4611685954002878504 to {'point': '31224573109202812141809', 'time': 73.03273825575751, 'frame': [{'sourceId': '95', 'line': 17, 'column': 15}], 'reason': 'breakpoint'}
INFO:root:Moving forward from 31224573109202812141809 to {'point': '42752635233766675054906', 'time': 99.99730327382558, 'frame': [{'sourceId': '95', 'line': 22, 'column': 2}], 'reason': 'breakpoint'}

What do you see instead?

Even though the times are increasing, the ExecutionPoints are not.

Additional information

Fiddling with the protocol directly, I cannot reproduce this from replay.io
