Comments (11)
This will require some work, since we're plotting data for different platforms (e.g. linux64 vs linux64-nightly) on the same graph.
The platform will now have to be included in here [1].
The platform these days is passed through here:
https://github.com/mozilla-frontend-infra/firefox-performance-dashboard/blob/master/src/utils/fetchData.js#L6
which is then included in here:
https://github.com/mozilla-frontend-infra/firefox-performance-dashboard/blob/master/src/utils/fetchData.js#L16
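As a rough sketch (this is not the actual fetchData.js code; the function name is made up for illustration), threading a new `platform` field from each config entry into the Perfherder query could look like this:

```javascript
// Hypothetical sketch, NOT the dashboard's real fetchData.js:
// shows how a `platform` field on each config entry could be passed
// into the query alongside the existing suite/framework/buildType.
const buildSeriesQuery = ({ suite, frameworkId, buildType, platform }) => {
  const params = new URLSearchParams({
    framework: String(frameworkId),
    suite,
    option: buildType,
    platform, // newly added: distinguishes linux64 from linux64-nightly
  });
  return params.toString();
};

console.log(buildSeriesQuery({
  suite: 'raptor-assorted-dom-chrome',
  frameworkId: 10, // assumption: Raptor's framework id, per config.js pattern
  buildType: 'opt',
  platform: 'linux64-nightly',
}));
```

Without the `platform` field, the two series below would be indistinguishable once Chrome moved to the nightly platform.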
[1]
armenzg@armenzg-mbp perf-dashboard$ git diff -p -U6
diff --git a/src/config.js b/src/config.js
index 424b1f3..c3e30a7 100644
--- a/src/config.js
+++ b/src/config.js
@@ -5,19 +5,21 @@ const JSBENCH_FRAMEWORK_ID = 11;
export const BENCHMARKS = {
'assorted-dom': {
compare: {
'raptor-assorted-dom-firefox': {
color: '#e55525',
label: 'Firefox',
+ platform: 'linux64',
frameworkId: RAPTOR_FRAMEWORK_ID,
suite: 'raptor-assorted-dom-firefox',
buildType: 'opt',
},
'raptor-assorted-dom-chrome': {
color: '#ffcd02',
label: 'Chrome',
+ platform: 'linux64-nightly',
frameworkId: RAPTOR_FRAMEWORK_ID,
suite: 'raptor-assorted-dom-chrome',
buildType: 'opt',
},
},
label: 'Assorted DOM',
from firefox-performance-dashboards.
Hi @armenzg, I couldn't understand @jmaher's requirement, as there is no visual attached to it to refer to.
we switched our chrome test runs from per commit on m-c to the nightly scheduler-
What is m-c and nightly scheduler?
I assume we can add this to the same graph.
Which graph?
I understand your comment in general, but I don't know the actual issue here, so I can't connect the dots.
Would you mind elaborating?
Thanks,
@aimenbatool this is my fault for filing something with a cryptic request.
m-c = mozilla-central repository
nightly scheduler = instead of running the tests on every commit in the repository, we produce a "nightly" build 1-2 times/day and we only run the tests on those nightly builds
In this case, this will be all the graphs that display Google Chrome data; those tests were all switched. Looking at the above code, it would be anywhere there is a |label: 'Chrome'|. We don't have historical data for Chrome, but we also just have a static version of Google Chrome, so I do not think it is important to show the old data along with the new data.
This should help clarify the request- ask more questions if there is more confusion and thanks for working on this!
If you follow this link, you will see a few Chrome jobs that run after a Firefox nightly build has happened:
If you select one of those jobs you will see this UI at the bottom of the page:
If you click on that score (45.51 on the screenshot above) it will take you here:
If you look closely you will see the platform associated with this performance data:
Until 5 days ago the Chrome jobs ran as normal 'linux64' jobs; they now run as 'linux64-nightly' jobs (link):
This means that the Firefox performance dashboard does not have new data anymore (since it is looking for non-nightly data):
This lack of data is not isolated to the MotionMark-animometer benchmark or the Linux64 platform; it affects a bunch more. You can look at this link and scroll down to see which benchmarks are failing to show Chrome data (benchmarks that say 'Chrome v8' should not have missing data).
On that note, we probably should add a 3rd series like this:
armenzg@armenzg-mbp perf-dashboard$ git diff -p -U6
diff --git a/src/config.js b/src/config.js
index 424b1f3..c3e30a7 100644
--- a/src/config.js
+++ b/src/config.js
@@ -5,19 +5,29 @@ const JSBENCH_FRAMEWORK_ID = 11;
export const BENCHMARKS = {
'assorted-dom': {
compare: {
'raptor-assorted-dom-firefox': {
color: '#e55525',
label: 'Firefox',
+ platform: 'linux64',
frameworkId: RAPTOR_FRAMEWORK_ID,
suite: 'raptor-assorted-dom-firefox',
buildType: 'opt',
},
'raptor-assorted-dom-chrome': {
color: '#ffcd02',
label: 'Chrome',
+ platform: 'linux64-nightly',
frameworkId: RAPTOR_FRAMEWORK_ID,
suite: 'raptor-assorted-dom-chrome',
buildType: 'opt',
},
+ 'raptor-assorted-dom-chrome-old': {
+ color: '#e55525',
+ label: 'Chrome',
+ platform: 'linux64',
+ frameworkId: RAPTOR_FRAMEWORK_ID,
+ suite: 'raptor-assorted-dom-chrome',
+ buildType: 'opt',
+ },
},
label: 'Assorted DOM',
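Reconstructed from the diff above (the RAPTOR_FRAMEWORK_ID value of 10 is an assumption based on the config.js pattern), the resulting `compare` block would hold three series: current Firefox, current Chrome on the nightly platform, and the historical Chrome data on the old platform:

```javascript
// Sketch of the resulting `compare` block after applying the diff above.
// RAPTOR_FRAMEWORK_ID = 10 is an assumption; the real constant lives in config.js.
const RAPTOR_FRAMEWORK_ID = 10;

const compare = {
  'raptor-assorted-dom-firefox': {
    color: '#e55525', label: 'Firefox', platform: 'linux64',
    frameworkId: RAPTOR_FRAMEWORK_ID,
    suite: 'raptor-assorted-dom-firefox', buildType: 'opt',
  },
  'raptor-assorted-dom-chrome': {
    color: '#ffcd02', label: 'Chrome', platform: 'linux64-nightly',
    frameworkId: RAPTOR_FRAMEWORK_ID,
    suite: 'raptor-assorted-dom-chrome', buildType: 'opt',
  },
  // Same suite as above, but pointed at the old platform so the
  // pre-switch history still shows up on the graph.
  'raptor-assorted-dom-chrome-old': {
    color: '#e55525', label: 'Chrome', platform: 'linux64',
    frameworkId: RAPTOR_FRAMEWORK_ID,
    suite: 'raptor-assorted-dom-chrome', buildType: 'opt',
  },
};

console.log(Object.keys(compare).length); // 3
```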
Hi @jmaher, Thanks for the clarification. This makes sense to me now.
Hi @armenzg, this still makes little sense to me; I have a few points of confusion.
- Benchmarks are defined here, with keys such as 'assorted-dom' or 'raptor-assorted-dom-firefox'. And you also added 'raptor-assorted-dom-chrome-old' in your comment above:
On that note, we probably should add a 3rd series like this:
+ 'raptor-assorted-dom-chrome-old': {
+   color: '#e55525',
+   label: 'Chrome',
+   platform: 'linux64',
+   frameworkId: RAPTOR_FRAMEWORK_ID,
+   suite: 'raptor-assorted-dom-chrome',
+   buildType: 'opt',
+ },
I am curious how this naming convention is defined. For example, you added 'old' at the end of 'raptor-assorted-dom-chrome'; can we name these keys arbitrarily?
I don't know quite how to put my question, but I have had it in mind from day one: how are these benchmarks and their key values defined?
I know that the queryPerfData library plays some role in fetching data from Perfherder, but that is also not clear to me.
Just as a test run, I added the code above to the file and it didn't work; it gave me errors.
In another PR a new benchmark was added. I was confused to see that as well: where are these benchmarks available and how are they defined?
These questions may be annoying, but I won't be able to fix things until I totally understand the workflow.
You can look at this link and scroll down to see which benchmarks are failing to show Chrome data (benchmarks that say 'Chrome v8' should not have missing data).
The link is not working; it gives the following error.
Please define these boxes on the left, because they show different colors and information in the three different links:
link1, link2, link3.
I am curious how this naming convention is defined. For example, you added 'old' at the end of 'raptor-assorted-dom-chrome'; can we name these keys arbitrarily?
Yes. The key of each entry is mostly irrelevant.
At most, it might be used as the unique identifier (key) when iterating and creating React elements.
You can read more about it here:
https://reactjs.org/docs/lists-and-keys.html#keys
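For instance (a minimal sketch, not the dashboard's actual rendering code; `ChartSeries` is a made-up component name), the entry keys under `compare` only need to be unique within the list when mapped to React elements:

```javascript
// Minimal sketch (hypothetical, not the dashboard's real code): the object
// keys under `compare` become the React `key` prop when mapping entries to
// chart components, so they only need to be unique within the list — the
// name itself carries no meaning to Perfherder.
const compare = {
  'raptor-assorted-dom-firefox': { label: 'Firefox', platform: 'linux64' },
  'raptor-assorted-dom-chrome-old': { label: 'Chrome', platform: 'linux64' },
};

// In JSX this would be roughly:
//   Object.entries(compare).map(([key, series]) =>
//     <ChartSeries key={key} {...series} />)
const seriesList = Object.entries(compare).map(([key, series]) => ({
  key, // unique identifier used for React's list reconciliation
  ...series,
}));

console.log(seriesList.map(s => s.key));
```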
I don't know quite how to put my question, but I have had it in mind from day one: how are these benchmarks and their key values defined?
I know that the queryPerfData library plays some role in fetching data from Perfherder, but that is also not clear to me.
That is a very good question, and perhaps you can take what I mention here and add it to an FAQ.
On Treeherder we run thousands of jobs. Some of them are "build" jobs that generate Firefox for a specific platform (e.g. win10) and a specific build target ("opt", "debug", "pgo" or "nightly").
The build targets are:
- Optimized
- Debug - You can attach a debugger to it
- PGO - Profile guided optimized build
- It runs faster than normal builds but takes longer to complete
- Nightly - A PGO build that actually gets shipped to users
For some of these platform/buildtype combinations we run performance jobs. These performance jobs download Firefox and run it against different benchmarks. Some of these jobs use a framework called Talos while others use the newer Raptor framework. There are a lot of them (e.g. jsbench).
Now, to specify which data points you want, you have to either provide a unique signature ID or use the various parameters which that signature ID represents. Specifying parameters is easier for humans than finding the right signature ID.
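As a rough sketch of that idea (the endpoint path and parameter names here are assumptions based on Treeherder's public API, not something confirmed in this thread), resolving the human-readable parameters to a signature might look like:

```javascript
// Sketch only: the /performance/signatures/ endpoint and its parameter
// names are assumptions about Treeherder's API, not taken from this thread.
const signaturesUrl = ({ project, frameworkId, platform }) =>
  `https://treeherder.mozilla.org/api/project/${project}/performance/signatures/` +
  `?framework=${frameworkId}&platform=${encodeURIComponent(platform)}`;

// The human-readable parameters from config.js stand in for one signature ID:
console.log(signaturesUrl({
  project: 'mozilla-central',
  frameworkId: 10, // assumption: Raptor
  platform: 'linux64-nightly',
}));
```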
Jobs can run for a specific project/repository:
- e.g. mozilla-central:
- e.g. autoland:
Let's think of the following job:
- Platform -> Android (Pixel 2)
- Buildtype -> opt (I think PGO builds are only for Linux/Windows)
- Benchmark -> Speedometer
If I type "raptor speedometer" in the filter on the 'mozilla-central' tree, you can see the following:
If I click on the "Pixel 2" job I can see this panel:
You can follow the link for the score and you reach Perfherder:
In the image above you get most values you need:
- suite: raptor-speedometer-geckoview
- buildType: opt
- project: mozilla-central
- platform: android-hw-p2-8-0-arm7-api-16
If you click on "Add more test data":
You will see the 'raptor' framework pre-selected, which is the last piece of data needed to uniquely identify this series of performance data.
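Putting those values together, a config.js entry for that series would look something like this (a sketch; the key name and color are made up, and frameworkId 10 for Raptor is an assumption based on the config.js pattern):

```javascript
// Sketch combining the values identified above. The key name and color are
// invented; frameworkId = 10 for Raptor is an assumption.
const RAPTOR_FRAMEWORK_ID = 10;

const speedometerGeckoview = {
  color: '#e55525',
  label: 'Firefox (GeckoView)',
  suite: 'raptor-speedometer-geckoview',
  buildType: 'opt',
  platform: 'android-hw-p2-8-0-arm7-api-16',
  frameworkId: RAPTOR_FRAMEWORK_ID,
  // The project (e.g. 'mozilla-central') is presumably implied by where the
  // dashboard queries data from rather than stored per entry — assumption.
};

console.log(speedometerGeckoview.suite);
```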
Just as a test run, I added the code above to the file and it didn't work; it gave me errors.
In another PR a new benchmark was added. I was confused to see that as well: where are these benchmarks available and how are they defined?
Use the filter field on 'mozilla-central' and search for 'raptor', 'talos' and 'bench' to see all the different performance jobs:
e.g https://treeherder.mozilla.org/#/jobs?repo=mozilla-central&searchStr=raptor&group_state=expanded&selectedJob=208848791
These questions may be annoying, but I won't be able to fix things until I totally understand the workflow.
You can look at this link and scroll down to see which benchmarks are failing to show Chrome data (benchmarks that say 'Chrome v8' should not have missing data).
Try this link.
My apologies; I had to jump in on this.
I was trying to verify the work on https://bugzilla.mozilla.org/show_bug.cgi?id=1502036
and once there I noticed that it was all due to using old Chrome data.
I've pushed a fix for this.