
Comments (11)

armenzg commented on May 30, 2024

This will require some work, since on the same graph we're plotting data for different platforms (e.g. linux64 vs linux64-nightly).
The platform will now have to be included in here [1].

The platform these days is passed through here:
https://github.com/mozilla-frontend-infra/firefox-performance-dashboard/blob/master/src/utils/fetchData.js#L6
which is then included in here:
https://github.com/mozilla-frontend-infra/firefox-performance-dashboard/blob/master/src/utils/fetchData.js#L16

[1]

armenzg@armenzg-mbp perf-dashboard$ git diff -p -U6
diff --git a/src/config.js b/src/config.js
index 424b1f3..c3e30a7 100644
--- a/src/config.js
+++ b/src/config.js
@@ -5,19 +5,21 @@ const JSBENCH_FRAMEWORK_ID = 11;
 export const BENCHMARKS = {
   'assorted-dom': {
     compare: {
       'raptor-assorted-dom-firefox': {
         color: '#e55525',
         label: 'Firefox',
+        platform: 'linux64',
         frameworkId: RAPTOR_FRAMEWORK_ID,
         suite: 'raptor-assorted-dom-firefox',
         buildType: 'opt',
       },
       'raptor-assorted-dom-chrome': {
         color: '#ffcd02',
         label: 'Chrome',
+        platform: 'linux64-nightly',
         frameworkId: RAPTOR_FRAMEWORK_ID,
         suite: 'raptor-assorted-dom-chrome',
         buildType: 'opt',
       },
     },
     label: 'Assorted DOM',

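Beyond config.js, fetchData.js would also need to thread the new platform field into the Perfherder query. A minimal sketch, assuming the endpoint path and query-parameter names (this is not the verified Treeherder API):

// Hypothetical sketch: threading `platform` from each config.js entry into
// the Perfherder signatures query, next to framework, suite and buildType.
// The endpoint path and parameter names here are assumptions.
const TREEHERDER = 'https://treeherder.mozilla.org';

const signaturesUrl = ({ frameworkId, suite, buildType, platform }, project = 'mozilla-central') =>
  `${TREEHERDER}/api/project/${project}/performance/signatures/` +
  `?framework=${frameworkId}&suite=${suite}` +
  `&option=${buildType}&platform=${platform}`;

// Example: the Chrome series now lives on the nightly platform.
console.log(signaturesUrl({
  frameworkId: 10, // RAPTOR_FRAMEWORK_ID (assumed value)
  suite: 'raptor-assorted-dom-chrome',
  buildType: 'opt',
  platform: 'linux64-nightly',
}));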

aimenbatool commented on May 30, 2024

Hi @armenzg, I couldn't understand @jmaher's request, since there is no visual attached to it to see and relate to.

we switched our chrome test runs from per commit on m-c to the nightly scheduler-

What are m-c and the nightly scheduler?

I assume we can add this to the same graph.

Which graph?

I can follow your comment, but I don't know what the actual issue is here, so I can't connect the dots.
Would you mind elaborating?

Thanks,


jmaher commented on May 30, 2024

@aimenbatool this is my fault for filing something with a cryptic request.

m-c = the mozilla-central repository
nightly scheduler = instead of running the tests on every commit in the repository, a "nightly" build is produced 1-2 times per day, and we only run the tests on those nightly builds

In this case, this affects all the graphs that display Google Chrome data; those tests were all switched. Looking at the code above, it would be anywhere there is a |label: Chrome|. We don't have historical data for Chrome, but we also just have a static version of Google Chrome, so I do not think it is important to show the old data along with the new data.

This should help clarify the request. Ask more questions if there is more confusion, and thanks for working on this!


armenzg commented on May 30, 2024

If you follow this link, you will see a few Chrome jobs that run after a Firefox nightly build happens:
[screenshot]
If you select one of those jobs, you will see this UI at the bottom of the page:
[screenshot]
If you click on that score (45.51 in the screenshot above), it will take you here:
[screenshot]
If you look closely, you will see the platform associated with this performance data:
[screenshot]
Until 5 days ago, the Chrome jobs used to run as normal 'linux64' jobs, while they now run as 'linux64-nightly' jobs (link):
[screenshot]
This means that the Firefox performance dashboard does not get new data anymore (since it is looking for non-nightly data):
[screenshot]
This lack of data is not isolated to the MotionMark animometer benchmark or to the Linux64 platform; it affects a bunch more. You can look at this link and scroll down to see which benchmarks are failing to show Chrome data (benchmarks that say 'Chrome v8' should not have missing data).

On that note, we probably should add a 3rd series like this:

armenzg@armenzg-mbp perf-dashboard$ git diff -p -U6
diff --git a/src/config.js b/src/config.js
index 424b1f3..c3e30a7 100644
--- a/src/config.js
+++ b/src/config.js
@@ -5,19 +5,21 @@ const JSBENCH_FRAMEWORK_ID = 11;
 export const BENCHMARKS = {
   'assorted-dom': {
     compare: {
       'raptor-assorted-dom-firefox': {
         color: '#e55525',
         label: 'Firefox',
+        platform: 'linux64',
         frameworkId: RAPTOR_FRAMEWORK_ID,
         suite: 'raptor-assorted-dom-firefox',
         buildType: 'opt',
       },
       'raptor-assorted-dom-chrome': {
         color: '#ffcd02',
         label: 'Chrome',
+        platform: 'linux64-nightly',
         frameworkId: RAPTOR_FRAMEWORK_ID,
         suite: 'raptor-assorted-dom-chrome',
         buildType: 'opt',
       },
+      'raptor-assorted-dom-chrome-old': {
+        color: '#e55525',
+        label: 'Chrome',
+        platform: 'linux64',
+        frameworkId: RAPTOR_FRAMEWORK_ID,
+        suite: 'raptor-assorted-dom-chrome',
+        buildType: 'opt',
+      },
     },
     label: 'Assorted DOM',


aimenbatool commented on May 30, 2024

Hi @jmaher, Thanks for the clarification. This makes sense to me now.


aimenbatool commented on May 30, 2024

Hi @armenzg, this makes some sense now, but I still have a few points of confusion.

  • Benchmarks are defined here, with entries like assorted-dom or raptor-assorted-dom-firefox. You also added 'raptor-assorted-dom-chrome-old' in your comment above.

On that note, we probably should add a 3rd series like this:

+      'raptor-assorted-dom-chrome-old': {
+        color: '#e55525',
+        label: 'Chrome',
+        platform: 'linux64',
+        frameworkId: RAPTOR_FRAMEWORK_ID,
+        suite: 'raptor-assorted-dom-chrome',
+        buildType: 'opt',
+      },

I am curious how this naming convention is defined. You added 'old' at the end of 'raptor-assorted-dom-chrome'; can we name it arbitrarily?

I don't know quite how to put my question, but I have had it in mind from day one: how are these benchmarks and their key values defined?
I know that the queryPerfData library plays some role in fetching data from Perfherder, but that is also not clear to me.

Just as a test run, I added the code above to the file; it didn't work and gave me errors.

In another PR, a new benchmark was added. That confused me as well: where are these benchmarks available and how are they defined?

These questions may be annoying, but I won't be able to fix things until I fully understand the workflow.


aimenbatool commented on May 30, 2024

You can look at this link and scroll down to see which benchmarks are failing to have Chrome data (benchmarks that say 'Chrome v8' should not have missing data).

The link is not working and gives the following error:
[screenshot of the error]


aimenbatool commented on May 30, 2024

Please define these boxes on the left, because they have different colors and information in the three different links:
link1, link2, link3.

raptor motionmark animometer chrome opt linux64 nightly mozilla central and others


armenzg commented on May 30, 2024

I am curious how this naming convention is defined. You added 'old' at the end of 'raptor-assorted-dom-chrome'; can we name it arbitrarily?

Yes. The key of each entry is mostly irrelevant.
At most, it might serve as the unique identifier (key) when iterating and creating React elements.
You can read more about it here:
https://reactjs.org/docs/lists-and-keys.html#keys
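As a hypothetical illustration (the rendering code below is made up, not the dashboard's actual component), the key only has to be unique within the iterated list:

import React from 'react';
import { BENCHMARKS } from './config';

// The object key is only used as React's `key` prop when building a list,
// so any unique string works; 'raptor-assorted-dom-chrome-old' is as good
// as any other name.
const legendItems = Object.entries(BENCHMARKS['assorted-dom'].compare).map(
  ([key, cfg]) => React.createElement('li', { key }, `${cfg.label} (${cfg.buildType})`),
);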

I don't know quite how to put my question, but I have had it in mind from day one: how are these benchmarks and their key values defined?
I know that the queryPerfData library plays some role in fetching data from Perfherder, but that is also not clear to me.

That is a very good question, and perhaps you can take what I mention here and add it to an FAQ.

On Treeherder we run thousands of jobs. Some of them are "build" jobs that generate Firefox for a specific platform (e.g. win10) and a specific build target ("opt", "debug", "pgo" or "nightly").
The build targets are:

  • Optimized
  • Debug - You can attach a debugger to it
  • PGO - Profile guided optimized build
    • It runs faster than normal builds but takes longer to complete
  • Nightly - A PGO build that actually gets shipped to users

For some of these platform/buildtype combinations we run performance jobs. These performance jobs download Firefox and run it against different benchmarks. Some of these jobs use a framework called Talos, while others use the newer Raptor framework. There are a lot of them (e.g. jsbench).
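For reference, each framework maps to a numeric ID in config.js. JSBENCH_FRAMEWORK_ID = 11 appears in the diff above; the Talos and Raptor values below are assumptions for illustration:

// Framework IDs as used in config.js. JSBENCH_FRAMEWORK_ID matches the
// diff above; the Talos and Raptor values are assumed for illustration.
const TALOS_FRAMEWORK_ID = 1;
const RAPTOR_FRAMEWORK_ID = 10;
const JSBENCH_FRAMEWORK_ID = 11;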

Now, to specify which data points you want, you have to either provide a unique signature ID or use the various parameters that such a signature ID represents. Specifying parameters is easier for humans than finding the right signature ID.

Jobs run for a specific project/repository.

Let's think of the following job:

  • Platform -> Android (Pixel 2)
  • Buildtype -> opt (I think PGO builds are only for Linux/Windows)
  • Benchmark -> Speedometer

If I type "raptor speedometer" into the filter on the 'mozilla-central' tree, you can see the following:
[screenshot]

If I click on the "Pixel 2" job, I can see this panel:
[screenshot]

You can follow the link from the score to reach Perfherder:
[screenshot]

In the image above you get most of the values you need:

  • suite: raptor-speedometer-geckoview
  • buildType: opt
  • project: mozilla-central
  • platform: android-hw-p2-8-0-arm7-api-16

If you click on "Add more test data":
[screenshot]

You will see the 'raptor' framework pre-selected, which is the last piece of data needed to uniquely identify this series of performance data.
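Putting it all together, the parameters that stand in for one unique signature ID look like this (a sketch that restates the values above; the Raptor framework ID of 10 is an assumption for illustration):

// All the parameters that together identify one series of performance data,
// using the geckoview Speedometer values from the screenshots above.
const speedometerGeckoview = {
  frameworkId: 10, // raptor (assumed ID)
  suite: 'raptor-speedometer-geckoview',
  buildType: 'opt',
  project: 'mozilla-central',
  platform: 'android-hw-p2-8-0-arm7-api-16',
};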

Just as a test run, I added the code above to the file; it didn't work and gave me errors.

In another PR, a new benchmark was added. That confused me as well: where are these benchmarks available and how are they defined?

Use the filter field on 'mozilla-central' with 'raptor', 'talos' and 'bench' to see all the different performance jobs:
e.g https://treeherder.mozilla.org/#/jobs?repo=mozilla-central&searchStr=raptor&group_state=expanded&selectedJob=208848791

These questions may be annoying, but I won't be able to fix things until I fully understand the workflow.


armenzg commented on May 30, 2024

You can look at this link and scroll down to see which benchmarks are failing to show Chrome data (benchmarks that say 'Chrome v8' should not have missing data).

Try this link.


armenzg commented on May 30, 2024

My apologies, I had to jump on this.

I was trying to verify the work on https://bugzilla.mozilla.org/show_bug.cgi?id=1502036
and once I was there I noticed that it was all due to using old Chrome data.

I've pushed a fix for this.

