googleads / publisher-ads-lighthouse-plugin

Publisher Ads Audits for Lighthouse is a tool that improves ad speed and overall quality through a series of automated audits. It helps resolve discovered problems, provides a way to evaluate the effectiveness of iterative changes, and suggests actionable feedback.

Home Page: https://developers.google.com/publisher-ads-audits

License: Apache License 2.0

JavaScript 88.82% HTML 11.17% CSS 0.01%
audit best-practices lighthouse lighthouse-audits performance-analysis web

publisher-ads-lighthouse-plugin's Introduction

Publisher Ads Audits for Lighthouse

Linux Build Status NPM lighthouse package

Publisher Ads Audits for Lighthouse is a tool that improves ad speed and overall quality through a series of automated audits. At the moment, it is primarily targeted at sites using Google Ad Manager. It helps resolve discovered problems, provides a way to evaluate the effectiveness of iterative changes, and suggests actionable feedback.

This tool is a plugin for Lighthouse, an open-source tool integrated into Chrome DevTools that is widely used by developers.

To help us improve, please file an issue to let us know of any problems or suggestions you may have.

⚠️ Publisher Ads Audits for Lighthouse audit results aren't an indication of compliance or non-compliance with any Google Publisher Policies.

Web App

We currently have a web app version of Publisher Ads Audits for Lighthouse. It can be accessed at developers.google.com/publisher-ads-audits.

Lighthouse Node CLI

Publisher Ads Audits is available as a node package which can be used with the Lighthouse CLI.

Setup

mkdir pub-ads-audits-wrapper && cd pub-ads-audits-wrapper && \
npm init -y && \
yarn add -D lighthouse && \
yarn add -D lighthouse-plugin-publisher-ads

Usage

From within the wrapper directory:

yarn lighthouse {url} --plugins=lighthouse-plugin-publisher-ads

See Lighthouse documentation for additional options.

Development

Setup

git clone [email protected]:googleads/publisher-ads-lighthouse-plugin.git
cd publisher-ads-lighthouse-plugin
yarn

Usage

node index.js <url>

Available options:

  • --view: Open report in Chrome after execution.
  • --full: Run all Lighthouse categories.
  • Any other Lighthouse flags.

Some common options are:

  • --additional-trace-categories=performance to include general web performance audits.
  • --emulated-form-factor=desktop to run on the desktop version of the site.
  • --extra-headers "{\"Cookie\":\"monster=blue\"}" to include additional cookies on all requests.

Continuous Integration

This plugin can be integrated with your existing CI using Lighthouse CI to ensure that ad performance hasn't regressed. Learn More.
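For example, a minimal `lighthouserc.js` that enables the plugin during LHCI collection might look like this. The `ci.collect.settings` shape is a sketch based on the Lighthouse CI configuration format; check the LHCI docs for the exact schema your version supports.

```javascript
// lighthouserc.js — illustrative Lighthouse CI config enabling this plugin.
// The URL and assertion setup are placeholders, not project defaults.
module.exports = {
  ci: {
    collect: {
      url: ['https://example.com/'],
      settings: {
        plugins: ['lighthouse-plugin-publisher-ads'],
      },
    },
  },
};
```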

Tests

# Lint and test all files.
yarn test

Contributions

See CONTRIBUTING.md


publisher-ads-lighthouse-plugin's People

Contributors

adamraine avatar adamsilverstein avatar avparmar avatar brendankenny avatar connorjclark avatar datablocksproduction avatar dependabot[bot] avatar gabomatute avatar heyawhite avatar jburger424 avatar jimper avatar jonkeller avatar lusilva avatar publisher-ads-audits-bot avatar tollmanz avatar warrengm avatar


publisher-ads-lighthouse-plugin's Issues

Audit GPT API usage

GPT can throw errors or log messages when its API is used incorrectly. It would be great to audit these as well. In particular, it seems like we can have at least two sets of audits:

  1. Audit usage of deprecated APIs specifically
  2. Audit all other GPT errors and exceptions

For (1), I think we can just search for deprecated in warning messages. For (2), we would audit all other messages. We might want to iterate on (2) in testing to split it out into more granular audits and possibly include errors from other scripts.

See https://github.com/GoogleChrome/lighthouse/lighthouse-core/audits/errors-in-console.js as a code pointer for a similar audit.
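As a rough sketch of the two buckets above (the message shape is an assumption modeled loosely on Lighthouse's console-message artifacts, not the plugin's actual API):

```javascript
// Split GPT console messages into (1) deprecation warnings and (2) all
// other warnings/errors, per the proposal above. Illustrative only.
function classifyGptMessages(messages) {
  const gptMessages = messages.filter(
      (m) => m.url && m.url.includes('/tag/js/gpt'));
  return {
    // (1) Deprecated API usage: flagged by the word "deprecated".
    deprecations: gptMessages.filter((m) => /deprecated/i.test(m.text)),
    // (2) All other GPT warnings and errors.
    otherIssues: gptMessages.filter(
        (m) => !/deprecated/i.test(m.text) &&
            (m.level === 'warning' || m.level === 'error')),
  };
}
```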

Audit for minimizing layout shift best practices

Documentation for minimizing layout shift is up at https://developers.google.com/doubleclick-gpt/guides/minimize-layout-shift. It would be great to audit for some of these best practices on real pages. In particular, we could verify that ad slots have some reserved space before being filled. For single-size slots, this space should match the ad size. For multi-size slots it's a bit trickier since there are more trade-offs; at the very least we should verify that the site is styled to fit the smallest size. (We don't want to audit for the largest size, since there are trade-offs between size and blank space, and it's possible that the largest size won't fill due to responsiveness issues.)

Technically, we want to do this by inspecting the computed width/height style on the slot element. More discussion to follow.
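A minimal sketch of that check, assuming we already have the slot's computed width/height and the smallest configured ad size (names here are illustrative, not plugin code):

```javascript
// Does the slot reserve at least the smallest creative's space before fill?
function hasReservedSpace(computedStyle, smallestAdSize) {
  const width = parseFloat(computedStyle.width) || 0;
  const height = parseFloat(computedStyle.height) || 0;
  return width >= smallestAdSize.width && height >= smallestAdSize.height;
}
```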

Getting discrepancy in result of CLI and https://pub-ads-audits.appspot.com/

I am using the CLI command to run the ads-lighthouse-plugin, which is "node index.js --output=json --output-path="results" --output=html --max-wait-for-load 60000 --quiet --chrome-flags="--headless" --full --emulated-form-factor desktop ", and getting results.

Since last week the Perf score value I am getting is very low, so I verified my website on https://developers.google.com/publisher-ads-audits and found a huge difference in the perf score.

The Perf score I get through the CLI is 14, while through the web it is 60.
Can anyone help me solve this issue or let me know if I am doing anything wrong?

I am sharing screenshots for the same.

CLIScore

WebsiteScore

Thanks,
Ankit Jain

Critical path audit is hard to grok

The critical path audit is long and hard to understand. It often adds more noise without providing any useful actions. I propose the following set of changes:

  • Mark the audit as FYI so it doesn't directly factor into scoring
  • Identify new audits to extract possible actions to improve the path. For example parallelize bid requests (which is already an existing audit) or speed up bottleneck requests

Also, the term "critical path" is a bit of a misnomer since the loading graph is often not a path. Perhaps "waterfall" is a more understandable, industry-standard term.

Feedback and discussion are welcome on this issue.

Leverage the font-display CSS feature to ensure text is user-visible while webfonts are loading.

I'm getting the message: "Leverage the font-display CSS feature to ensure text is user-visible while webfonts are loading"

Which, as I understand it, is really a recommendation to use font-display: swap in a @font-face declaration.

Does this mean that embedding fonts via <link> is now discouraged, and @font-face is preferable?

If so, this seems like a pretty major Google recommendation. Should we all be making this transition?


NOTE: The link provided for more information ( https://web.dev/font-display?utm_source=lighthouse&utm_medium=lr/////////////////// ) does not appear to be working.

Local Reference to IFrameElements

Hello all,

I'm trying to figure out how to reference the iframe-elements gatherer the proper way locally.

I also found this GoogleChrome/lighthouse#8979. Should I just await its integration into core lighthouse?

Lighthouse Configuration:

const { report } = await lighthouse(url, {
  port: new URL(browser.wsEndpoint()).port,
  output: "html",
  extends: "lighthouse:full",
  logLevel: "info",
  plugins: ["lighthouse-plugin-publisher-ads"],
  passes: [
    {
      passName: "iframe",
      gatherers: [
        require.resolve(join(__dirname, "../lighthouse-plugin-publisher-ads/gatherers/iframe-elements")),
      ],
    },
  ],
});

Appropriate Log Level info:

  status Auditing: Structured data is valid +1ms
  status Auditing: No long tasks blocking ad-related network requests +0ms
  status Auditing: Minimal render-blocking resources found +8ms
  status Auditing: Minimal requests found in ad critical path +14ms
  status Auditing: [Experimental] Network is efficiently utilized before ad requests +57ms
  status Auditing: Few or no ads loaded outside viewport +24ms
  Runner:warn IFrameElements gatherer, required by audit ads-in-viewport, did not run. +0ms
  ads-in-viewport:warn Caught exception: Required IFrameElements gatherer did not run. +0ms
  status Auditing: GPT tag is loaded asynchronously +0ms
  status Auditing: GPT tag is loaded over HTTPS +3ms
  status Auditing: GPT tag is loaded from recommended host +3ms
  status Auditing: GPT tag is loaded statically +3ms
  status Auditing: Ad density in initial viewport is within recommended range +3ms
  Runner:warn IFrameElements gatherer, required by audit viewport-ad-density, did not run. +0ms
  viewport-ad-density:warn Caught exception: Required IFrameElements gatherer did not run. +0ms
  status Auditing: Tag load time +0ms
  status Auditing: Latency of first ad request +66ms
  status Auditing: Latency of first ad request, from tag load +51ms
  status Auditing: Latency of first ad render +1ms
  Runner:warn IFrameElements gatherer, required by audit first-ad-paint, did not run. +0ms
  first-ad-paint:warn Caught exception: Required IFrameElements gatherer did not run. +0ms
  status Auditing: Ad slots effectively use horizontal space +0ms
  status Auditing: No ad found at the very top of the viewport +3ms
  Runner:warn IFrameElements gatherer, required by audit ad-top-of-viewport, did not run. +0ms
  ad-top-of-viewport:warn Caught exception: Required IFrameElements gatherer did not run. +0ms
  status Auditing: No duplicate tags found in any frame +0ms
  status Auditing: Header bidding is parallelized +1ms
  status Auditing: No script-injected tags found +4ms
  status Generating results... +4ms

Migrate plugin to Lighthouse's i18n framework

The plugin currently uses a quickly hacked i18n framework, but we should migrate to the framework used by Lighthouse. This means:

  • Use ICU for templating and pluralization
  • Use the UIStrings framework for loading locales
  • Use collect strings to easily export files for translation

Support Simulated Metrics in Audits

For consistency with core Lighthouse audits, this plugin should simulate throttling whenever necessary. Right now all metrics are reported as observed regardless of throttling mode, so there can be inconsistencies.

In general, I propose that every audit should be in one of the following states:

  1. Full support of simulated throttling, in reported number and in details
  2. Hide metrics when simulated throttling is on. For example, we can hide the start and end times in the critical paths audit while still reporting the critical path.
  3. Show metrics but disclose that they are observed numbers. This provides more information than the solution above but may be more confusing to some users.

If Solution 1 is not feasible for any audit for whatever reason, either Solution 2 or Solution 3 should be taken for that audit in the short term.

As a quick rundown of TODOs by audit:

  • Support by tag load time audit [#64]
  • Support by ad request time audits [#64]
  • Support by ad paint time audit [#64]
  • Support by long tasks audit [#84]
  • Support by blocking requests audit (details only)
  • Support by script injected tags (details only)
  • Support by static tag loading audit (details only)
  • Support by critical path audit (details only) [#71]
  • Support by idle times audit (experimental)

Display full resource URL in audit results


The resource URL is hidden when viewing the audit results.

This essentially makes the recommendation useless, as I have many assets/resources that start with a similar path. Whether I open it in a new tab or hover, there is no way to see the full URL of the resource.

Async tag is not always lowest priority

If the tag is preloaded using <link rel="preload" ...>, Chrome will classify it as High priority. Therefore, the async check at audits/async-ad-tags.js:29 (return tagReq.priority == 'Low';) is inaccurate.
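One possible direction for the fix, sketched under the assumption that the audit can see which URLs were preloaded (`tagReq` mimics a Lighthouse network record; `preloadedUrls` is a hypothetical input):

```javascript
// Accept a High-priority tag request when it was initiated by a preload.
function isAsyncTag(tagReq, preloadedUrls) {
  return tagReq.priority === 'Low' || preloadedUrls.has(tagReq.url);
}
```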

Audit FR: Improve audits of gpt.js script loading

As a quick background, our plugin currently audits pages for whether gpt.js is loaded statically, that is in the form:

<script async src="https://securepubads.g.doubleclick.net/tag/js/gpt.js"></script>

The main benefits of the snippet are (1) it guarantees that there are no other scripts in the critical path of gpt.js and (2) the URL is visible to the browser's resource scanner so the request can be preloaded before the script element is laid out.

However, there are some legitimate cases where a site may prefer to load gpt.js dynamically rather than statically (i.e. using document.head.appendChild or similar APIs). For example, a site may gate ad loading on loading gpt.js:

const consentCookie = …; // cookie lookup elided
if (consentCookie == '1') {
  const gpt = document.createElement('script');
  gpt.src = 'https://securepubads.g.doubleclick.net/tag/js/gpt.js';
  document.head.appendChild(gpt);
}

It should also be noted that the site loses some of the benefits of (2) when gpt.js is cached, because the doubleclick.net connection will be cold for subsequent uncached requests (e.g. /gampad/ads?...). The cached case is far more common in the wild but often ignored in lab-based tests.

To handle some of these nuances I propose a few changes to our audits:

  • Update the static tag loading audit to independently check for:
    • Static gpt.js requests -- by either using a static script tag or link rel=preload
    • Lack of blocking requests -- by either using a static script tag or verifying the gpt.js script element was not added by another external script (*)
  • Add a new audit for warming the doubleclick.net connection by using link rel=preconnect
    • There are still more nuances, but those likely don't warrant further caveats in our audit. For example, there is some risk of a preload request being made needlessly--which is a standard preload risk.

(*) Implementation note: the latter requires https://chromedevtools.github.io/devtools-protocol/tot/DOM#method-getNodeStackTraces, which is still experimental. Based on local testing, it does not yet work on all versions of Chrome.
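The proposed preconnect audit could start from something like this (the link-element shape is an assumption modeled on Lighthouse's LinkElements artifact, not confirmed plugin code):

```javascript
// Detect a warmed doubleclick.net connection via <link rel="preconnect">.
function hasDoubleclickPreconnect(linkElements) {
  return linkElements.some((link) =>
    link.rel === 'preconnect' &&
    /(^|\.)doubleclick\.net$/.test(new URL(link.href).hostname));
}
```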

Of course, the changes to script loading should have corresponding user docs.

More feedback or related issues would be welcome in this thread!

Support AdSense

Currently only GPT is supported.
To include AdSense support:

  • include iframe id "google_ads_frame*" in gatherer
  • include "pagead/ads" in hasAdRequestPath
  • create isAdSense() function to classify tag
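A sketch of the classifier along those lines (the URL and iframe-id patterns are assumptions drawn from the bullet points above, not confirmed plugin code):

```javascript
// Classify a request URL as AdSense rather than GPT.
function isAdSense(url) {
  return /\/pagead\/ads/.test(url) || /adsbygoogle\.js/.test(url);
}

// Classify an iframe as an AdSense ad frame by its id prefix.
function isAdSenseIframe(id) {
  return typeof id === 'string' && id.startsWith('google_ads_frame');
}
```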

Avoid long tasks that block ad-related network requests (show_ads_impl_fy2019.js)

I'm receiving the warning message "Avoid long tasks that block ad-related network requests -- 0.1 s blocked".

The warning message applies to the resource show_ads_impl_fy2019.js, which is part of adsbygoogle.js.

In other words, a script which is required to show ads is being flagged itself as a "long task that blocks ad-related network requests."

Measure ad density based on document height, not viewport area

The "Reduce ad density in initial viewport" audit measures ad density in the initial viewport based on area, while industry best practices usually discuss density vertically, based on content length. Note that "content length" is somewhat vague, so it's likely hard to define precisely and will need some further discussion.

See https://www.betterads.org/mobile-ad-density-higher-than-30/ and https://developers.google.com/publisher-ads-audits/reference/audits/viewport-ad-density#fn1
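As a starting point, a vertical density measure might look like this. This is a simplification: overlapping slots are not merged, and "content length" is approximated by document height.

```javascript
// Fraction of the document's height occupied by ad slots, measured vertically.
function verticalAdDensity(slotRects, documentHeight) {
  const adHeight = slotRects.reduce((sum, rect) => sum + rect.height, 0);
  return documentHeight > 0 ? adHeight / documentHeight : 0;
}
```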

BUG: wrong metric units in JSON

I'm using lighthouse with the ads plugin to retrieve data about some of our sites, the data is correct, but the units that appear in the JSON file are wrong:

{
  ...
  "audits": {
    ...
    "ad-request-from-page-start": {
      "id": "ad-request-from-page-start",
      "title": "Reduce time to send the first ad request",
      "description": "This metric measures the elapsed time from the start of page load until the first ad request is made. Delayed ad requests will decrease impressions and viewability, and have a negative impact on ad revenue. [Learn more](https://developers.google.com/publisher-ads-audits/reference/audits/ad-request-from-page-start).",
      "score": 0.01,
      "scoreDisplayMode": "numeric",
      "numericValue": 23.949903000000003,
      "numericUnit": "millisecond",
      "displayValue": "23.9 s"
    },
    "first-ad-render": {
      "id": "first-ad-render",
      "title": "Reduce time to render first ad",
      "description": "This metric measures the time for the first ad iframe to render from page navigation. [Learn more](https://developers.google.com/publisher-ads-audits/reference/audits/first-ad-render).",
      "score": 0.39,
      "scoreDisplayMode": "numeric",
      "numericValue": 24.756349000000007,
      "numericUnit": "millisecond",
      "displayValue": "24.8 s"
    },
    "tag-load-time": {
      "id": "tag-load-time",
      "title": "Reduce tag load time",
      "description": "This metric measures the time for the ad tag's implementation script (pubads_impl.js for GPT; adsbygoogle.js for AdSense) to load after the page loads. [Learn more](https://developers.google.com/publisher-ads-audits/reference/audits/tag-load-time).",
      "score": 0.16,
      "scoreDisplayMode": "numeric",
      "numericValue": 13.969837,
      "numericUnit": "millisecond",
      "displayValue": "14.0 s"
    }
  }
}

As you can see in the extract, numericUnit says millisecond, but the displayValue uses seconds, and the numericValue is roughly the same figure as the seconds display.

I expect numericUnit to be second.
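Until the units are fixed, a consumer of the JSON could normalize defensively. A sketch, assuming the displayValue (e.g. "23.9 s") reflects the intended unit:

```javascript
// Return the audit's value in seconds, working around the mislabeled unit.
function auditSeconds(audit) {
  const looksLikeSeconds = / s$/.test(audit.displayValue) &&
      Math.abs(audit.numericValue - parseFloat(audit.displayValue)) < 1;
  if (audit.numericUnit === 'millisecond' && looksLikeSeconds) {
    return audit.numericValue; // Already second-scale despite the label.
  }
  return audit.numericUnit === 'millisecond' ?
      audit.numericValue / 1000 : audit.numericValue;
}
```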

Enable auditing Ads on AMP pages (`amp-ad`)

Description

Feature request - when running the Publish Ads Lighthouse Audits against AMP pages, the plugin should provide ad audit details.

Currently running the tests for an AMP site with <amp-ad /> ads reports "No ads were requested when fetching this page.":


Viewing the site's source code and front end, the ads are apparent.


Handle anonymous functions in blocking load events audit

The audit reports the name of the load event listener, but the function name will be blank if the function is anonymous. In that case it would be better to report (anonymous) and include the line number. (The line number may be useful for named functions as well.)
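The fallback could be as simple as this (an illustrative naming helper, not plugin code):

```javascript
// Report "(anonymous)" for nameless listeners, appending the line number.
function formatListenerName(fn, lineNumber) {
  const name = (fn && fn.name) || '(anonymous)';
  return lineNumber != null ? `${name}:${lineNumber}` : name;
}
```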

Weigh metrics over performance audits in scoring

Our audits should be re-weighted so that time to first ad render reflects the majority of the publisher's score. Performance audits and other measurements shouldn't be weighted highly, since they exist solely to offer insights for improving time to first ad render.

Note that audits that aren't targeted at improving time to first ad render in the lab should still be weighted highly. For example, auditing ad placement aims to improve real-world viewability, not performance. Or consider an audit that targets loading in the warm-cache case, which isn't usually reflected in lab measurements.

See https://github.com/googleads/publisher-ads-lighthouse-plugin/blob/master/lighthouse-plugin-publisher-ads/plugin.js for existing weights.

Audit wrongly interprets <link rel="preload"> as duplicate tag loading

Site: https://mejorconsalud.com

You can see an audit example here

In order to balance Lighthouse for Ads and normal Lighthouse, GPT library is preloaded using <link rel=prefetch as=script href="https://securepubads.g.doubleclick.net/tag/js/gpt.js"/> which is later converted into <script src="https://securepubads.g.doubleclick.net/tag/js/gpt.js">

With this approach, Lighthouse for Ads yields 2 issues which we consider are not accurate:

  1. Tags best practices - Load tags only once per frame. Here I think that Lighthouse is wrongly detecting that the GPT library is being loaded twice. If you check the network panel in DevTools it does appear twice, but the first request is initiated by <link rel="prefetch"> and the second one by <script>, which is served from the cache.

  2. Ad speed - Load ad scripts statically. Again, it seems that Lighthouse is yielding this error because of our approach of first using <link rel="preload"> and later converting it to <script>. We consider this a fair approach, in order to control the initial amount of JavaScript being executed and not penalize Time to Interactive or First Input Delay.

In our case, we use this approach to prioritize the execution of the CMP over the GPT library, in order to retrieve user consent, which is mandatory in Europe before ads start loading.

errors logged in devtools

Don't know what this is, but it looks Google-y. Can these errors be suppressed? Strangely, it happens even with pubads unchecked.

No ads were rendered when rendering this page

I'm getting the following error when testing a few different websites running ads - a couple of colleagues have also tried and they get the same error:
There were issues affecting this run of Lighthouse:
No ads were rendered when rendering this page.

I2I: Golden-based tests

Instead of having made-up fake data in our tests, which is fragile and likely isn't representative of the real world, I propose deleting those tests in favor of using real traces and hardcoding the expectations for each audit. (Though fake data is certainly useful for some unit tests.)

A strawman proposal is as follows:

const fixtures = [
  {
    trace: require('test/data/foo-trace.json'),
    description: 'foo',
    expectedAuditResults: { // Keyed by audit name
      'tag-load-time': {
        score: 0,
        rawValue: 1337,
      },
      'ad-blocking-tasks': {
        score: 0,
        rawValue: 4,
        extraDetails: {
          ...
        },
      },
    },
  },
];

The test runner would verify that each expected field matches the audit result. Omitted fields would not be checked.
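A minimal matcher for that "omitted fields are not checked" behavior could look like this (illustrative, not the actual test runner):

```javascript
// Deep partial match: every field present in `expected` must equal the
// corresponding field in `actual`; fields absent from `expected` are ignored.
function matchesExpectation(actual, expected) {
  return Object.keys(expected).every((key) => {
    const exp = expected[key];
    if (exp !== null && typeof exp === 'object') {
      return matchesExpectation(actual[key] || {}, exp);
    }
    return actual[key] === exp;
  });
}
```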

I should point out that there are some risks of leaking your own credentials or other confidential headers if we simply capture traces from your main browser.
