callstack / reassure
Performance testing companion for React and React Native
Home Page: https://callstack.github.io/reassure/
License: MIT License
Is your feature request related to a problem? Please describe.
Add an init command to the CLI. This command would be invoked like this:
$ yarn reassure init
This command would apply common Reassure setup steps that currently need to be done by hand:
- copy the reassure-tests.sh script from a template
- add the .reassure folder to .gitignore, if present
Describe the solution you'd like
Before applying the changes mentioned above, the tool should check whether reassure-tests.sh exists, and assume that if it does, the folder has already been initialized for Reassure.
For the subsequent steps (danger file, gitignore): they should only be carried out if the danger file does not yet exist, or if the gitignore does not contain a reassure entry.
The tool should be user-friendly and have clear help messages.
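The idempotency rules above could be sketched as pure planning logic. This is an illustration only, not the actual CLI code; the helper names and the exact .gitignore entry are assumptions:

```typescript
// Hypothetical sketch of the `reassure init` idempotency rules.
type InitState = {
  hasTestScript: boolean;   // does reassure-tests.sh already exist?
  hasDangerfile: boolean;   // does a danger file already exist?
  gitignore: string | null; // .gitignore contents, or null when missing
};

function planInitSteps(state: InitState): string[] {
  // reassure-tests.sh present => folder already initialized, do nothing.
  if (state.hasTestScript) return [];

  const steps = ['copy reassure-tests.sh from template'];
  if (!state.hasDangerfile) steps.push('create danger file');

  const hasEntry =
    state.gitignore?.split('\n').some((line) => line.trim() === '.reassure/') ?? false;
  if (!hasEntry) steps.push('add .reassure/ entry to .gitignore');
  return steps;
}
```

A real implementation would derive `InitState` from the filesystem and then execute the returned steps, printing what it did for each one.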
Is your feature request related to a problem? Please describe.
Automatically detect branch name and commit hash when reassure
detects that it is run inside a Git repo.
Describe the solution you'd like
Basically, make the current --branch $(git branch --show-current) --commitHash $(git rev-parse HEAD) options automatic when the user is using Git.
Keep the existing CLI options to allow users to override the default values and/or support other source control systems.
Invoke git binary commands using the Node child_process API (?) to avoid external deps.
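A hedged sketch of that behavior: auto-detect via the git binary through child_process, with explicit CLI flags taking precedence. The function names are assumptions, not Reassure's actual code:

```typescript
import { execSync } from 'child_process';

// Run a git subcommand, returning null when git is unavailable or the
// directory is not a repository.
function runGit(args: string): string | null {
  try {
    return execSync(`git ${args}`, { stdio: ['ignore', 'pipe', 'ignore'] })
      .toString()
      .trim();
  } catch {
    return null; // not a git repo, or git is not installed
  }
}

function resolveVcsInfo(flags: { branch?: string; commitHash?: string }) {
  return {
    // `??` short-circuits, so git is only invoked when the flag is absent.
    branch: flags.branch ?? runGit('branch --show-current'),
    commitHash: flags.commitHash ?? runGit('rev-parse HEAD'),
  };
}
```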
Describe alternatives you've considered
Use simple-git to communicate with the Git repo.
Additional context
Describe the improvement
Consider updating the default testMatch to better reflect the default patterns used in Jest, and allow users to also place their performance tests in a specific /perf/ directory without needing the *.perf-test. prefix. For reference, Jest's defaults are:
[ "**/__tests__/**/*.[jt]s?(x)", "**/?(*.)+(spec|test).[jt]s?(x)" ]
Scope of improvement
Update the measure command in the reassure-cli package from:
'<rootDir>/**/*.perf-test.[jt]s?(x)'
to:
[ "<rootDir>/**/__perf__/**/*.[jt]s?(x)", "<rootDir>/**/*.perf-test.[jt]s?(x)" ]
Suggested implementation steps
One quick PR should do. Additionally, we need to make sure we're properly handling testMatch everywhere else, since it would now be an array, as per the Jest docs.
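For illustration, the proposed default would look like this as a Jest config fragment (a sketch of the intent, not the CLI's generated config):

```javascript
// Illustrative Jest config fragment with the proposed default testMatch.
module.exports = {
  testMatch: [
    '<rootDir>/**/__perf__/**/*.[jt]s?(x)',
    '<rootDir>/**/*.perf-test.[jt]s?(x)',
  ],
};
```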
Describe the bug
I am not sure whether this issue is supposed to be a feature request or a bug.
When I run Reassure on Windows (tested on 11 and 10), I get this error:
To Reproduce
Steps to reproduce the behavior:
yarn reassure
Expected behavior
For now I am setting TEST_RUNNER_PATH to node_modules/jest/bin/jest.
Screenshots
After setting TEST_RUNNER_PATH:
Desktop (please complete the following information):
Additional context
If anyone else has the same problem: my solution was to add the cross-env package and set the package.json script to this
"perf-test": "cross-env TEST_RUNNER_PATH=node_modules/jest/bin/jest reassure"
Currently Reassure contains two binaries:
reassure-test
reassure-stability
As well as two low-level scripts:
perf-measure
perf-compare
All of the above are written in bash.
Desired state:
reassure measure
reassure compare
reassure check-stability
The CLI should be written in Node.js instead of bash, for cross-platform support (I'm talking about you, Windows).
Is your feature request related to a problem? Please describe.
When working locally, the user has to run separate reassure measure and reassure compare steps to get the measurement results.
Describe the solution you'd like
If, after running reassure measure, there are both current and baseline performance measurements, then automatically display compare results in the console. It is probably not worth generating MD & JSON output in such cases.
Add a --no-compare flag to disable this behaviour (i.e. just measure and do not display compare results in the console).
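The decision itself is simple; a sketch under the assumptions above (the helper name is hypothetical):

```typescript
// Auto-run compare after `reassure measure` only when both result files
// exist and --no-compare was not passed.
function shouldAutoCompare(opts: {
  noCompare: boolean;
  hasCurrentResults: boolean;
  hasBaselineResults: boolean;
}): boolean {
  if (opts.noCompare) return false;
  return opts.hasCurrentResults && opts.hasBaselineResults;
}
```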
Describe alternatives you've considered
Combining measure and compare for the local workflow.
Additional context
The V8 test suite behaves in a similar manner: they do not even have a separate compare script in their normal workflow, but just do it with the current code measure setup.
Is your feature request related to a problem? Please describe.
This would be an optional feature activated by a measurePerformance option at the per-test level and a configure option at the global level. The option name could be detectRedundantRenders or something similar.
When turned on, the measuring code would analyze the rendered output and notify the user if a render resulted in the same user interface being generated, i.e. that the render was redundant.
Describe the solution you'd like
When turned on, after each onRender callback from the React.Profiler component, the measuring code would run the .toJSON method from RNTL/RTL in order to generate a host component representation of the output. Next, it would compare that output with the one generated on the previous onRender callback and warn the user if the output is the same, meaning that the render did not change the host component representation of the UI, i.e. the UI did not change.
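The comparison step could be as simple as this sketch, assuming the serialized host tree from .toJSON() is JSON-stringifiable (the detector shape is an assumption, not the proposed implementation):

```typescript
// Illustrative detector: remembers the previous serialized host tree and
// warns when a commit produced an identical one (a redundant render).
function makeRedundantRenderDetector(warn: (message: string) => void) {
  let previous: string | null = null;
  return (hostTree: unknown) => {
    const serialized = JSON.stringify(hostTree); // stand-in for .toJSON() output
    if (previous !== null && serialized === previous) {
      warn('Redundant render: host component output did not change.');
    }
    previous = serialized;
  };
}
```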
Describe the improvement
We currently have TypeScript type checks, however they are not automatically invoked on CI, hence the code is not effectively checked with them.
Scope of improvement
Suggested implementation steps
Is your feature request related to a problem? Please describe.
Reassure can be thought of as a general performance execution measurement and comparison tool. The current measurePerformance works on a JSX element by rendering it with React.Profiler; this is run a number of times and the results are saved to a performance file. Next, it is compared with a baseline performance file using statistical tools.
All of the above steps, except rendering using React.Profiler, just work on render durations (& counts), and might as well serve to analyze non-render-related measurements.
Describe the solution you'd like
Add a measureDuration function accepting a callback function with the code to measure:
function measureDuration(callback: () => void, options)
The function body would be similar to the existing measurePerformance, but would swap the rendering code for just running the callback.
The user could optionally call the following methods inside their code:
function timerStart(name)
function timerEnd(name)
Calling timerStart(name) would capture the current timestamp for the given name, using performance.now() if available or Date.now() otherwise.
Calling timerEnd(name) would also capture the current timestamp for the given name, but would additionally take the end and start timestamps, calculate their difference, and treat this as the duration of the name event.
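A minimal sketch of those two helpers as described above (the storage shape is an assumption; the proposal prefers performance.now() when available, Date.now() is used here for simplicity):

```typescript
// Illustrative timer helpers matching the proposed API. Durations are
// grouped per name so a test run can later sum them and count invocations.
const startTimestamps = new Map<string, number>();
const timerDurations = new Map<string, number[]>();

const now = (): number => Date.now();

function timerStart(name: string): void {
  startTimestamps.set(name, now());
}

function timerEnd(name: string): void {
  const start = startTimestamps.get(name);
  if (start === undefined) return; // timerEnd without a matching timerStart
  const durations = timerDurations.get(name) ?? [];
  durations.push(now() - start);
  timerDurations.set(name, durations);
}
```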
Having a test that does not use timerStart/timerEnd would add a single line to the performance results file, in the same format as current performance entries. The name field would be taken from the Jest test name, as it currently is for render tests. The durations-related fields would contain the code execution duration, while the count-related fields would assume that the count is 1.
If timerStart/End is used, then each timer name should generate an additional line:
- the name would be the Jest test name concatenated with the timer name: {testName} - {timerName}
- the durations fields would contain the sum of the given timer's durations from the given test run
- the count fields would contain the number of times the given timer was run in the same test.
Since the migration to the monorepo setup, there are some unnecessary deps in the packages extracted from the single large package.
Review deps, devDeps & peerDeps in all packages and remove unnecessary ones.
Is your feature request related to a problem? Please describe.
Currently we do not have a documentation page, only a README. As the amount of documentation grows, it would make sense to have it better organized into separate Getting Started, API, guides, etc. sections.
Describe the solution you'd like
Create a Docusaurus setup for Reassure! Regarding content, we should start with a single "Getting Started" page containing the README.md file. We also need a typical sidebar where we will be able to add more docs pages in the future.
This documentation will be published using GH Pages.
Describe the bug
After adding the lib, I tried to write a test for a component, and after resolving some issues with my implementation I got the test to pass, but with a console.error message:
● Console
console.error
❌ Reassure: measure code is running under incorrect Node.js configuration.
Performance test code should be run in Jest with certain Node.js flags to increase measurements stability.
Make sure you use the Reassure CLI and run it using "reassure" command.
27 |
28 | test('ImageGallery', async () => {
> 29 | await measurePerformance(
| ^
30 | <Wrapper>
31 | <ImageGallery />
32 | </Wrapper>,
To Reproduce
Steps to reproduce the behavior:
yarn reassure
Expected behavior
The test to pass successfully and without an error output
Screenshots
jest.config.js
module.exports = {
...
preset: 'jest-expo',
...
};
module.exports = {
...
preset: '@testing-library/react-native',
...
};
Desktop (please complete the following information):
Smartphone (please complete the following information):
Additional context
Is your feature request related to a problem? Please describe.
Currently, reading of the performance file by the compare package does not perform any validation, so the CLI could crash with cryptic errors if the file is corrupted.
Describe the solution you'd like
Use zod or another high-quality library to parse/validate the contents of the results file. This will require defining schemas for header and entry rows, and should be configured to work with TypeScript. In case of an error, information about the incorrect line should be displayed.
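The issue proposes zod; the hand-rolled sketch below only illustrates the desired behavior (failing with the offending line number instead of a cryptic crash). The entry field names are assumptions, not the actual results-file format:

```typescript
// Illustrative per-line validation with explicit line numbers in errors.
type PerfEntry = { name: string; meanDuration: number; meanCount: number };

function parsePerfLine(line: string, lineNumber: number): PerfEntry {
  let raw: unknown;
  try {
    raw = JSON.parse(line);
  } catch {
    throw new Error(`Line ${lineNumber}: not valid JSON`);
  }
  const entry = raw as Partial<PerfEntry>;
  if (
    typeof entry.name !== 'string' ||
    typeof entry.meanDuration !== 'number' ||
    typeof entry.meanCount !== 'number'
  ) {
    throw new Error(`Line ${lineNumber}: missing or invalid entry fields`);
  }
  return entry as PerfEntry;
}
```

With zod, the `safeParse` result would replace the manual typeof checks while keeping the same per-line error reporting.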
Describe alternatives you've considered
Write validation by hand, but that would not be worth the tradeoff of avoiding additional deps.
Additional context
N/A
In the current implementation we use the scale parameter for "smoothing" the results, mainly for small components rendering in < 10 ms, where the 1 ms measurement grain can have a large % impact on render times.
This method however is not statistically sound as it leads to smoothing/averaging of results for individual runs.
Proposed solutions:
- remove the scale parameter
- replace it with a scale parameter that would cause n instances of the component to be rendered side-by-side (e.g. under a common React.Fragment or View parent). This approach, however, seems to conflict with the scenario param. Hence, the solution here might be to allow either scale or scenario, but not both.
Describe the bug
After cloning the repo, yarn reassure yields a validation error:
● Validation Error:
Module @testing-library/jest-native/extend-expect in the setupFilesAfterEnv option was not found.
<rootDir> is: /Users/proximity/IdeaProjects/reassure/examples/native-expo
Configuration Documentation:
https://jestjs.io/docs/configuration
❌ Something went wrong, current performance file (.reassure/current.perf) does not exist
✨ Done in 0.98s.
To Reproduce
Steps to reproduce the behavior:
- go to the native-expo example project
- yarn
- yarn reassure
Expected behavior
The tests run successfully.
Desktop (please complete the following information):
Smartphone (please complete the following information):
Currently, any messages, warnings and errors generated by the compare script are only displayed in the console. There should be a way to forward them to the Danger JS / output.md output.
Is your feature request related to a problem? Please describe.
Currently, our current.perf and baseline.perf files contain only performance results. Hence, having such files does not let you tell on which code version these results were gathered. Having that info would tag our measurements with the code version.
Describe the solution you'd like
The .perf file could contain { metadata: { branch: 'branchname', hash: 'commitHash' }} as its first line; that line would not contain regular performance measurement entries, only metadata related to the measurements.
In order for reassure measure, which creates these files, to work with any version control system, these values could be passed as runtime flags, e.g. reassure measure --branch branchname --hash commitHash.
The flags would be set by the reassure-tests.sh script, on CI and perhaps locally.
Describe alternatives you've considered
None
Additional context
None
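The proposed layout could be read back as in this sketch. The per-line JSON shape is an assumption for illustration; Reassure's actual file format may differ:

```typescript
// Illustrative reader for a .perf file whose first line may hold metadata:
// { "metadata": { "branch": "...", "hash": "..." } }, followed by one
// measurement entry per line. Files without metadata still parse (legacy).
type PerfMetadata = { branch?: string; hash?: string };

function parsePerfFile(contents: string): { metadata: PerfMetadata | null; entries: unknown[] } {
  const lines = contents.split('\n').filter((line) => line.trim().length > 0);
  if (lines.length === 0) return { metadata: null, entries: [] };

  const first = JSON.parse(lines[0]);
  const hasMetadata =
    typeof first === 'object' && first !== null && 'metadata' in first;

  const entryLines = hasMetadata ? lines.slice(1) : lines;
  return {
    metadata: hasMetadata ? (first.metadata as PerfMetadata) : null,
    entries: entryLines.map((line) => JSON.parse(line)),
  };
}
```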
Describe the solution you'd like
We started using Reassure to measure a particular screen's performance. To be able to render it under the right conditions, we currently render our entire app navigation container with all the necessary context providers (e.g. Jotai, RQ, NativeBase, etc.) and navigate to that problematic screen (i.e. open a modal screen).
When running that test, we see that it takes on average 500 ms and 20 re-renders.
It'd be great to have a way to see what components took the longest and re-rendered the most during that test. Maybe we can specify a list of component names to track (e.g. high level screen components) or track the slowest 10 or only n level deep.
Describe alternatives you've considered
We're currently using the Profiler inside ReactDevTools (inside Flipper) to see those component render times. It's pretty good for debugging but it'd be great if we can have them from our automated perf tests as well.
Additional context
I'd be happy to contribute if you think this is a feasible feature to implement.
Thanks for this amazing tool. Render (and re-render) issues are the bottleneck of our app right now (we think NativeBase is not helping here). We're working hard on fixing some of these issues but we need a way to prevent future regressions. It's reassuring (pun intended) to see there's a tool that can warn us now.
Bitrise offers easy-to-use "steps" to be included in their CI pipelines. This could potentially make it easier for people to adopt Reassure. Especially if they're on paid plans, in which case Bitrise promises to deliver dedicated machines spawning VMs that are exact copies of each other (which we could leverage for sharding tests in the future).
Let's explore creating such a step. Here's relevant docs: https://devcenter.bitrise.io/en/steps-and-workflows/developing-your-own-bitrise-step/developing-a-new-step.html
Describe the improvement
API docs need usage examples to help the user make sense of the interface: when and where to use it.
This is something that struck me when browsing our freshly released https://callstack.github.io/reassure/docs/api website :)
Is your feature request related to a problem? Please describe.
Create an example setup for a Next.js app. Similar to our current examples/web-vite, there would be an examples/web-nextjs folder which would showcase how to connect Reassure with Next.js.
Describe the solution you'd like
Generate a basic Next.js app, preferably in TS, though it could also be in JS if there are any serious problems with TS. Then add the Jest, RNTL and Reassure deps, and the Reassure scripts, to it. In order to have something to measure, add the [SlowList](https://github.com/callstack/reassure/blob/main/examples/web-vite/src/SlowList.perf-test.tsx) component and its tests (both unit and perf) from the web-vite example.
Make sure that you can run Reassure locally and that it does not generate warnings about not being able to set performance flags.
Voila! That's it.
Potential issues
There could be problems with making Jest run unit/perf tests in some cases (there were some for Create React App, but none for the Vite web app).
Currently, perf tests consist of measureRender and writeStats calls (+ a dummy expect). In order to simplify the API, a single method should be created, e.g. async function perfTest(jsx, options).
Investigate what it is in the jest-expo preset that prevents global.gc from being correctly populated by the --expose-gc flag. See: #174
Describe the improvement
We currently have ESLint integration, however it is not automatically invoked on CI, hence the code is not effectively checked with it.
Scope of improvement
Suggested implementation steps
Describe the bug
When specifying TEST_RUNNER_PATH, the specified path is not taken into account; the default path node_modules/.bin/jest is used instead.
I think the issue is in the measure command (measure.ts):
The current line that grabs the test runner path:
const testRunnerPath =
process.env.TEST_RUNNER_PATH ?? process.platform === 'win32'
? 'node_modules/jest/bin/jest'
: 'node_modules/.bin/jest';
Should be:
const testRunnerPath =
process.env.TEST_RUNNER_PATH ?? (process.platform === 'win32'
? 'node_modules/jest/bin/jest'
: 'node_modules/.bin/jest');
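The parentheses matter because `??` binds tighter than the conditional operator, so the unparenthesized version uses the env value only as the ternary's condition and then discards it. A standalone demonstration (paths taken from the snippet above, with the env variable simulated as a constant):

```javascript
// Simulate TEST_RUNNER_PATH being set.
const envPath = 'custom/jest/path';
const isWin32 = false;

// Buggy: parsed as (envPath ?? isWin32) ? winPath : defaultPath,
// so the env value only decides which *default* is returned.
const buggy = envPath ?? isWin32 ? 'node_modules/jest/bin/jest' : 'node_modules/.bin/jest';

// Fixed: parentheses make the ternary compute the platform default,
// and ?? prefers the env value when present.
const fixed = envPath ?? (isWin32 ? 'node_modules/jest/bin/jest' : 'node_modules/.bin/jest');

console.log(buggy); // 'node_modules/jest/bin/jest' -- env path silently lost
console.log(fixed); // 'custom/jest/path'
```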
To Reproduce
Steps to reproduce the behavior:
TEST_RUNNER_PATH=somepath yarn reassure
node_modules/.bin/jest cannot be found
Expected behavior
The TEST_RUNNER_PATH environment variable should be used instead of the default.
Desktop:
I'm trying to use Reassure with Expo and it displays some errors, as you can see below.
Does anyone have any idea whether Reassure should work with Expo, or whether the error is related to something different?
"expo": "~45.0.0",
"react-native": "0.68.2",
"react": "17.0.2",
"jest": "^26.6.3",
"jest-expo": "^45.0.1",
"@testing-library/jest-native": "^4.0.5",
"@testing-library/react-native": "^10.1.1",
"react-test-renderer": "^17.0.2",
"reassure": "^0.1.0",
@TMaszko & @Xiltyn reported that under some circumstances render counts fluctuate by +/- 1 or +/- 2 between runs when applying perf tests to a complex codebase. Their initial research pointed to update renders triggered by VirtualizedList/FlatList when the underlying list data changes.
Investigate the impact of FlatList etc. on the variable number of renders. Create test scenarios that try to replicate the random behavior and isolate it as much as possible in order to find the root cause and a possible solution or workaround.
This issue lists Renovate updates and detected dependencies. Read the Dependency Dashboard docs to learn more.
These updates are awaiting their schedule. Click on a checkbox to get an update now.
These updates have all been created already. Click a checkbox below to force a retry/rebase of any.
Pending updates: @babel/core, @testing-library/react-native, @tsconfig/docusaurus, @types/jest, babel-jest, eslint, jest, mathjs, react-native, typescript, zod (plus react, react-dom).
These are blocked by an existing closed PR and will not be recreated unless you click a checkbox below.
.circleci/config.yml
circleci/node 10
.github/workflows/docs-deploy.yml
actions/checkout v3
actions/setup-node v3
peaceiris/actions-gh-pages v3
.github/workflows/main.yml
actions/checkout v3
actions/setup-node v3
.github/workflows/stability.yml
actions/checkout v3
actions/setup-node v3
.github/workflows/test-example-apps.yml
actions/checkout v3
actions/setup-node v3
actions/checkout v3
actions/setup-node v3
docusaurus/package.json
@docusaurus/core 2.3.1
@docusaurus/preset-classic 2.3.1
@mdx-js/react ^1.6.22
clsx ^1.2.1
prism-react-renderer ^1.3.5
react ^17.0.2
react-dom ^17.0.2
@docusaurus/module-type-aliases 2.3.1
@tsconfig/docusaurus ^1.0.5
typescript ^4.7.4
node >=16.14
package.json
@babel/core ^7.20.12
@babel/runtime ^7.20.7
@callstack/eslint-config ^13.0.2
@changesets/cli ^2.26.0
@testing-library/react ^14.0.0
@testing-library/react-native ^11.5.1
@types/jest ^29.2.5
@types/react ^18.0.26
@types/react-native 0.71.3
babel-jest ^29.3.1
danger ^11.2.3
eslint ^8.32.0
eslint-config-prettier ^8.6.0
eslint-plugin-prettier ^4.2.1
jest ^29.3.1
pod-install ^0.1.38
prettier ^2.8.3
react 18.2.0
react-dom 18.2.0
react-native 0.71.3
react-native-builder-bob ^0.20.3
react-test-renderer 18.2.0
turbo ^1.6.3
typescript ^4.9.4
react *
packages/reassure-cli/package.json
simple-git ^3.16.0
yargs ^17.6.2
@types/yargs ^17.0.20
packages/reassure-compare/package.json
markdown-builder ^0.9.0
markdown-table ^2.0.0
zod ^3.20.2
babel-jest ^29.3.1
ts-jest ^29.0.5
packages/reassure-danger/package.json
packages/reassure-logger/package.json
chalk 4.1.2
packages/reassure-measure/package.json
mathjs ^11.5.0
react *
packages/reassure/package.json
Is your feature request related to a problem? Please describe.
Is it possible to run Reassure for a single file? I have a lot of perf-test.js files in my project, and sometimes when I change one file it does not automatically get picked up to run first.
Describe the solution you'd like
Being able to run tests for a single file like npx reassure ./tests/example.perf-test.js
Describe alternatives you've considered
none
Additional context
none
I'm getting this message when running yarn reassure:
console.error
❌ Reassure: measure code is running under incorrect Node.js configuration.
Performance test code should be run in Jest with certain Node.js flags to increase measurements stability.
Make sure you use the Reassure CLI and run it using "reassure" command.
Same with npx reassure or yarn node --expose_gc $(yarn bin reassure).
reassure: v0.4.1
node: v14.17.6
yarn: v1.22.19
jest: v26.6.3
MBP M1
macOS: v12.4
Describe the bug
Running the command yarn reassure fails with an error, though the test completes successfully:
Reassure: measure code is running under incorrect Node.js configuration.
Performance test code should be run in Jest with certain Node.js flags to increase measurements stability.
Make sure you use the Reassure CLI and run it using "reassure" command.
I have created a fresh Expo SDK 48 project and added reassure by following the official tutorial. I have set up a simple component:
export default function Index() {
return (
<View style={{ flex: 1, justifyContent: 'center', alignItems: 'center' }}>
<Text>hello world</Text>
</View>
)
}
Then I set up a simple performance measurement test:
import { measurePerformance } from 'reassure';
import Component from '__path_to_component';
test('auth performance', async () => {
await measurePerformance(<Component />);
});
Running yarn reassure throws the above error.
To Reproduce
Steps to reproduce the behavior:
- install jest, jest-expo, @testing-library/react-native and configure accordingly
- yarn reassure
Expected behavior
Execution should complete without warning or error
Desktop (please complete the following information):
Additional context
"reassure": "^0.7.1"
"expo": 48.0.1,
"react-native": "0.71.3",
"jest": "^29.2.1",
"jest-expo": "^48.0.1",
"@testing-library/jest-native": "^5.4.2",
jest.config.js
module.exports = {
preset: "jest-expo",
setupFilesAfterEnv: ["<rootDir>/jest.setup.tsx"],
transformIgnorePatterns: [
"node_modules/(?!((jest-)?react-native|@react-native(-community)?)|expo(nent)?|@expo(nent)?/.*|@expo-google-fonts/.*|react-navigation|@react-navigation/.*|@unimodules/.*|unimodules|sentry-expo|native-base|react-native-svg)"
]
}
Is your feature request related to a problem? Please describe.
Currently we piggyback on Danger JS to offer support for popular CIs. This request is about building a direct integration with GitHub Actions.
Describe the solution you'd like
Describe alternatives you've considered
None
Additional context
None
Scope:
Yeah, I must say Reassure is really an awesome tool and I have decided to use it in our app development. But I'm facing an issue: how can I make native modules work well? As you know, with Reassure all the React Native components are rendered in the Node.js environment, but a real-life commercial app definitely depends on many native modules. Could someone tell me the best practice for handling native modules? Thanks.
Account for recent changes: perfTest, a single CLI command, RTL/web support.
Describe the bug
Error: Cannot find module '/home/Projects/<mono-repo>/packages/benchmarks/node_modules/.bin/jest'
Trying to run Reassure within an npm or yarn workspace fails. It appears the path to Jest has been hardcoded, but in a monorepo the Jest binary may be in the root node_modules.
Scope:
renderMeasure using RTL render and RenderAPI
Describe the improvement
Currently the Danger JS plugin code is in the main reassure package. As we now have a monorepo and plan to have multiple CI integrations, it should be moved to its own reassure-danger package.
Scope of improvement
Move the plugin code from the reassure package to reassure-danger.
Suggested implementation steps
Ideally something simple, like we have it in RN CLI for example.
Originally posted by @thymikee in #76 (comment)
Is your feature request related to a problem? Please describe.
When tests are unstable locally, it is hard to figure out why this would be happening.
Describe the solution you'd like
When we have run into issues with unstable or long-running tests, it has been useful to pass an --inspect flag to the Node child process (just by editing the lib commonjs folder in reassure-cli). It would be nice if we could just run yarn reassure --baseline --inspect and have that flag automatically added. Or we could use the --inspect-brk option, or both. I am assuming (but have not confirmed) that this has the potential to affect the stability of test results, so we could have a big warning saying something along the lines of "You are running node with the --inspect flag; this should not be used in CI to measure performance as it can result in greater render variance".
I'd be happy to put up a PR with the change and some documentation.
Describe alternatives you've considered
Editing node modules is a bit of a pain but it is fine if we don't want to support this in the library.
Additional context
We currently use markdown-table.ts, which contains some code from the Internet; please resolve the licensing here.
Potential solutions:
- use the markdown-table package
Right now, while developing features, we need to build the project every time to check it against the provided examples.
If we have unit tests integrated then a lot of this effort might be reduced.
Also this will ensure no existing functionality breaks during feature development.
Maybe some Jest/Mocha based scripts for unit testing which can later be evolved to check coverage and integrate with CI.
Describe the bug
When using pnpm as the package manager, the reassure binary cannot be found.
After running the command pnpm reassure I get:
ERR_PNPM_RECURSIVE_EXEC_FIRST_FAIL not found: reassure
I noticed that the reassure binary is missing from the node_modules/.bin folder.
To Reproduce
Steps to reproduce the behavior:
Parallelizing tests creates extra variance and uncertainty, making it hard to measure performance reliably. For now, let's pass the --runInBand option to Jest so it always runs tests serially. Later on, once we have a fairly stable execution environment, we should figure out a way to parallelize tests across different CI workers (e.g. using Jest's --shard flag, or reimplementing it).
Describe the improvement
It seems like the reassure-cli process always returns a success exit code, no matter what the result of the tests is. Currently this means the task that runs on CI always succeeds, even if some code breaks the tests.
I think test failures usually mean something is wrong with the test rather than a perf regression (unless maybe the test times out, although that could also be because of an invalid scenario).
Scope of improvement
Return the error code returned by the jest process.
Suggested implementation steps
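A sketch of the suggested fix, assuming the CLI spawns Jest via child_process (the helper is illustrative, not the actual reassure-cli code):

```typescript
import { spawnSync } from 'child_process';

// Run the test runner and surface its exit code instead of swallowing it.
function runTestRunner(command: string, args: string[]): number {
  const result = spawnSync(command, args, { stdio: 'inherit' });
  // status is null when the child was killed by a signal; treat as failure.
  return result.status ?? 1;
}

// In the CLI entry point one would then do, e.g.:
// process.exitCode = runTestRunner('node_modules/.bin/jest', jestArgs);
```

Setting `process.exitCode` rather than calling `process.exit()` lets pending output flush before the process terminates.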
And merge "insignificant" parts there as well. We can always mark them somehow, e.g. with a
Describe the improvement
This section is missing "Examples". I would expect it to be a simple list to our existing examples on GitHub: https://github.com/callstack/reassure/tree/main/examples