google-auth-library-nodejs

This is Google's officially supported node.js client library for using OAuth 2.0 authorization and authentication with Google APIs.

A comprehensive list of changes in each version may be found in the CHANGELOG.

Read more about the client libraries for Cloud APIs, including the older Google APIs Client Libraries, in Client Libraries Explained.

Quickstart

Installing the client library

npm install google-auth-library

Ways to authenticate

This library provides a variety of ways to authenticate to your Google services.

  • Application Default Credentials - Use Application Default Credentials when you use a single identity for all users in your application. Especially useful for applications running on Google Cloud. Application Default Credentials also support workload identity federation to access Google Cloud resources from non-Google Cloud platforms.
  • OAuth 2 - Use OAuth2 when you need to perform actions on behalf of the end user.
  • JSON Web Tokens - Use JWT when you are using a single identity for all users. Especially useful for server->server or server->API communication.
  • Google Compute - Directly use a service account on Google Cloud Platform. Useful for server->server or server->API communication.
  • Workload Identity Federation - Use workload identity federation to access Google Cloud resources from Amazon Web Services (AWS), Microsoft Azure or any identity provider that supports OpenID Connect (OIDC).
  • Workforce Identity Federation - Use workforce identity federation to access Google Cloud resources using an external identity provider (IdP) to authenticate and authorize a workforce—a group of users, such as employees, partners, and contractors—using IAM, so that the users can access Google Cloud services.
  • Impersonated Credentials Client - Access protected resources on behalf of another service account.
  • Downscoped Client - Use Downscoped Client with Credential Access Boundary to generate a short-lived credential with downscoped, restricted IAM permissions that can be used with Cloud Storage.

Application Default Credentials

This library provides an implementation of Application Default Credentials for Node.js. The Application Default Credentials provide a simple way to get authorization credentials for use in calling Google APIs.

They are best suited for cases when the call needs to have the same identity and authorization level for the application independent of the user. This is the recommended approach to authorize calls to Cloud APIs, particularly when you're building an application that uses Google Cloud Platform.

Application Default Credentials also support workload identity federation to access Google Cloud resources from non-Google Cloud platforms including Amazon Web Services (AWS), Microsoft Azure or any identity provider that supports OpenID Connect (OIDC). Workload identity federation is recommended for non-Google Cloud environments as it avoids the need to download, manage and store service account private keys locally, see: Workload Identity Federation.

Download your Service Account Credentials JSON file

To use Application Default Credentials, you first need to download a set of JSON credentials for your project. Go to APIs & Auth > Credentials in the Google Developers Console and select Service account from the Add credentials dropdown.

This file is your only copy of these credentials. It should never be committed with your source code, and should be stored securely.

Once downloaded, store the path to this file in the GOOGLE_APPLICATION_CREDENTIALS environment variable.
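
Alternatively, the path can be passed to the GoogleAuth constructor directly. A minimal sketch, where the path is a placeholder for wherever you stored the JSON file:

const {GoogleAuth} = require('google-auth-library');

// Sketch: point GoogleAuth at the downloaded key file instead of relying on
// the GOOGLE_APPLICATION_CREDENTIALS environment variable.
const auth = new GoogleAuth({
  keyFilename: '/path/to/your-service-account.json',
  scopes: 'https://www.googleapis.com/auth/cloud-platform',
});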

Enable the API you want to use

Before making your API call, you must be sure the API you're calling has been enabled. Go to APIs & Auth > APIs in the Google Developers Console and enable the APIs you'd like to call. For the example below, you must enable the DNS API.

Choosing the correct credential type automatically

Rather than manually creating an OAuth2 client, JWT client, or Compute client, the auth library can create the correct credential type for you, depending upon the environment your code is running under.

For example, a JWT auth client will be created when your code is running on your local developer machine, and a Compute client will be created when the same code is running on Google Cloud Platform. If you need a specific set of scopes, you can pass those in the form of a string or an array to the GoogleAuth constructor.

The code below shows how to retrieve a default credential type, depending upon the runtime environment.

const {GoogleAuth} = require('google-auth-library');

/**
* Instead of specifying the type of client you'd like to use (JWT, OAuth2, etc)
* this library will automatically choose the right client based on the environment.
*/
async function main() {
  const auth = new GoogleAuth({
    scopes: 'https://www.googleapis.com/auth/cloud-platform'
  });
  const client = await auth.getClient();
  const projectId = await auth.getProjectId();
  const url = `https://dns.googleapis.com/dns/v1/projects/${projectId}`;
  const res = await client.request({ url });
  console.log(res.data);
}

main().catch(console.error);
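
As noted above, the scopes option also accepts an array of strings; a brief sketch:

const {GoogleAuth} = require('google-auth-library');

// Sketch: scopes may be a single string (as above) or an array.
const auth = new GoogleAuth({
  scopes: [
    'https://www.googleapis.com/auth/cloud-platform',
    'https://www.googleapis.com/auth/userinfo.email',
  ],
});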

OAuth2

This library comes with an OAuth2 client that allows you to retrieve an access token, and that refreshes the token and retries the request seamlessly when you also provide an expiry_date and the token has expired. The basics of Google's OAuth2 implementation are explained in the Google Authorization and Authentication documentation.

In the following examples, you may need a CLIENT_ID, CLIENT_SECRET and REDIRECT_URL. You can find these pieces of information by going to the Developer Console, clicking your project > APIs & auth > credentials.

For more information about OAuth2 and how it works, see here.

A complete OAuth2 example

Let's take a look at a complete example.

const {OAuth2Client} = require('google-auth-library');
const http = require('http');
const url = require('url');
const open = require('open');
const destroyer = require('server-destroy');

// Download your OAuth2 configuration from the Google Developers Console.
const keys = require('./oauth2.keys.json');

/**
* Start by acquiring a pre-authenticated oAuth2 client.
*/
async function main() {
  const oAuth2Client = await getAuthenticatedClient();
  // Make a simple request to the People API using our pre-authenticated client. The `request()` method
  // takes a GaxiosOptions object.  Visit https://github.com/JustinBeckwith/gaxios.
  const url = 'https://people.googleapis.com/v1/people/me?personFields=names';
  const res = await oAuth2Client.request({url});
  console.log(res.data);

  // After acquiring an access_token, you may want to check on the audience, expiration,
  // or original scopes requested.  You can do that with the `getTokenInfo` method.
  const tokenInfo = await oAuth2Client.getTokenInfo(
    oAuth2Client.credentials.access_token
  );
  console.log(tokenInfo);
}

/**
* Create a new OAuth2Client, and go through the OAuth2 consent
* workflow.  Return the full client to the callback.
*/
function getAuthenticatedClient() {
  return new Promise((resolve, reject) => {
    // create an oAuth client to authorize the API call.  Secrets are kept in a `keys.json` file,
    // which should be downloaded from the Google Developers Console.
    const oAuth2Client = new OAuth2Client(
      keys.web.client_id,
      keys.web.client_secret,
      keys.web.redirect_uris[0]
    );

    // Generate the url that will be used for the consent dialog.
    const authorizeUrl = oAuth2Client.generateAuthUrl({
      access_type: 'offline',
      scope: 'https://www.googleapis.com/auth/userinfo.profile',
    });

    // Open an http server to accept the oauth callback. In this simple example, the
    // only request to our webserver is to /oauth2callback?code=<code>
    const server = http
      .createServer(async (req, res) => {
        try {
          if (req.url.indexOf('/oauth2callback') > -1) {
            // acquire the code from the querystring, and close the web server.
            const qs = new url.URL(req.url, 'http://localhost:3000')
              .searchParams;
            const code = qs.get('code');
            console.log(`Code is ${code}`);
            res.end('Authentication successful! Please return to the console.');
            server.destroy();

            // Now that we have the code, use that to acquire tokens.
            const r = await oAuth2Client.getToken(code);
            // Make sure to set the credentials on the OAuth2 client.
            oAuth2Client.setCredentials(r.tokens);
            console.info('Tokens acquired.');
            resolve(oAuth2Client);
          }
        } catch (e) {
          reject(e);
        }
      })
      .listen(3000, () => {
        // open the browser to the authorize url to start the workflow
        open(authorizeUrl, {wait: false}).then(cp => cp.unref());
      });
    destroyer(server);
  });
}

main().catch(console.error);

Handling token events

This library will automatically obtain an access_token, and automatically refresh the access_token if a refresh_token is present. The refresh_token is only returned on the first authorization, so make sure you store it safely. An easy way to make sure you always store the most recent tokens is to use the tokens event:

const client = await auth.getClient();

client.on('tokens', (tokens) => {
  if (tokens.refresh_token) {
    // store the refresh_token in my database!
    console.log(tokens.refresh_token);
  }
  console.log(tokens.access_token);
});

const url = `https://dns.googleapis.com/dns/v1/projects/${projectId}`;
const res = await client.request({ url });
// The `tokens` event would now be raised if this was the first request

Retrieve access token

With the code returned, you can ask for an access token as shown below:

const {tokens} = await oauth2Client.getToken(code);
// Now tokens contains an access_token and an optional refresh_token. Save them.
oauth2Client.setCredentials(tokens);

Obtaining a new Refresh Token

If you need to obtain a new refresh_token, ensure the call to generateAuthUrl sets the access_type to offline. The refresh token will only be returned for the first authorization by the user. To force consent, set the prompt property to consent:

// Generate the url that will be used for the consent dialog.
const authorizeUrl = oAuth2Client.generateAuthUrl({
  // To get a refresh token, you MUST set access_type to `offline`.
  access_type: 'offline',
  // set the appropriate scopes
  scope: 'https://www.googleapis.com/auth/userinfo.profile',
  // A refresh token is only returned the first time the user
  // consents to providing access.  For illustration purposes,
  // setting the prompt to 'consent' will force this consent
  // every time, forcing a refresh_token to be returned.
  prompt: 'consent'
});

Checking access_token information

After obtaining and storing an access_token, you may later want to check its expiration date, original scopes, or audience. To get the token info, you can use the getTokenInfo method:

// after acquiring an oAuth2Client...
const tokenInfo = await oAuth2Client.getTokenInfo('my-access-token');

// take a look at the scopes originally provisioned for the access token
console.log(tokenInfo.scopes);

This method will throw if the token is invalid.
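
Since the call rejects for invalid tokens, you may want to guard it. A minimal sketch, assuming an already authenticated oAuth2Client and an access token string:

// Sketch: guard the lookup so invalid, revoked, or expired tokens are handled.
async function describeToken(oAuth2Client, accessToken) {
  try {
    const tokenInfo = await oAuth2Client.getTokenInfo(accessToken);
    console.log(tokenInfo.expiry_date, tokenInfo.scopes);
  } catch (e) {
    console.error('Token is not valid:', e.message);
  }
}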

JSON Web Tokens

The Google Developers Console provides a .json file that you can use to configure a JWT auth client and authenticate your requests, for example when using a service account.

const {JWT} = require('google-auth-library');
const keys = require('./jwt.keys.json');

async function main() {
  const client = new JWT({
    email: keys.client_email,
    key: keys.private_key,
    scopes: ['https://www.googleapis.com/auth/cloud-platform'],
  });
  const url = `https://dns.googleapis.com/dns/v1/projects/${keys.project_id}`;
  const res = await client.request({url});
  console.log(res.data);
}

main().catch(console.error);

The parameters for the JWT auth client, including how to use it with a .pem file, are explained in samples/jwt.js.
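
For reference, a hedged sketch of constructing the client from a key file on disk instead of inline credentials; the email and path are placeholders, and samples/jwt.js remains the authoritative example:

const {JWT} = require('google-auth-library');

// Sketch: `keyFile` points at a service account private key on disk.
const client = new JWT({
  email: 'your-service-account@your-project.iam.gserviceaccount.com',
  keyFile: '/path/to/key.pem',
  scopes: ['https://www.googleapis.com/auth/cloud-platform'],
});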

Loading credentials from environment variables

Instead of loading credentials from a key file, you can also provide them using an environment variable and the GoogleAuth.fromJSON() method. This is particularly convenient for systems that deploy directly from source control (Heroku, App Engine, etc).

Start by exporting your credentials:

$ export CREDS='{
  "type": "service_account",
  "project_id": "your-project-id",
  "private_key_id": "your-private-key-id",
  "private_key": "your-private-key",
  "client_email": "your-client-email",
  "client_id": "your-client-id",
  "auth_uri": "https://accounts.google.com/o/oauth2/auth",
  "token_uri": "https://accounts.google.com/o/oauth2/token",
  "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
  "client_x509_cert_url": "your-cert-url"
}'

Now you can create a new client from the credentials:

const {auth} = require('google-auth-library');

// load the environment variable with our keys
const keysEnvVar = process.env['CREDS'];
if (!keysEnvVar) {
  throw new Error('The $CREDS environment variable was not found!');
}
const keys = JSON.parse(keysEnvVar);

async function main() {
  // load the JWT or UserRefreshClient from the keys
  const client = auth.fromJSON(keys);
  client.scopes = ['https://www.googleapis.com/auth/cloud-platform'];
  const url = `https://dns.googleapis.com/dns/v1/projects/${keys.project_id}`;
  const res = await client.request({url});
  console.log(res.data);
}

main().catch(console.error);

Using a Proxy

You can set the HTTPS_PROXY or https_proxy environment variables to proxy HTTPS requests. When HTTPS_PROXY or https_proxy are set, they will be used to proxy SSL requests that do not have an explicit proxy configuration option present.

Compute

If your application is running on Google Cloud Platform, you can authenticate using the default service account or by specifying a specific service account.

Note: In most cases, you will want to use Application Default Credentials. Direct use of the Compute class is for very specific scenarios.

const {auth, Compute} = require('google-auth-library');

async function main() {
  const client = new Compute({
    // Specifying the service account email is optional.
    serviceAccountEmail: '[email protected]'
  });
  const projectId = await auth.getProjectId();
  const url = `https://dns.googleapis.com/dns/v1/projects/${projectId}`;
  const res = await client.request({url});
  console.log(res.data);
}

main().catch(console.error);

Workload Identity Federation

Using workload identity federation, your application can access Google Cloud resources from Amazon Web Services (AWS), Microsoft Azure or any identity provider that supports OpenID Connect (OIDC).

Traditionally, applications running outside Google Cloud have used service account keys to access Google Cloud resources. Using identity federation, you can allow your workload to impersonate a service account. This lets you access Google Cloud resources directly, eliminating the maintenance and security burden associated with service account keys.

Accessing resources from AWS

In order to access Google Cloud resources from Amazon Web Services (AWS), the following requirements are needed:

  • A workload identity pool needs to be created.
  • AWS needs to be added as an identity provider in the workload identity pool (The Google organization policy needs to allow federation from AWS).
  • Permission to impersonate a service account needs to be granted to the external identity.

Follow the detailed instructions on how to configure workload identity federation from AWS.

After configuring the AWS provider to impersonate a service account, a credential configuration file needs to be generated. Unlike service account credential files, the generated credential configuration file will only contain non-sensitive metadata to instruct the library on how to retrieve external subject tokens and exchange them for service account access tokens. The configuration file can be generated by using the gcloud CLI.

To generate the AWS workload identity configuration, run the following command:

# Generate an AWS configuration file.
gcloud iam workload-identity-pools create-cred-config \
    projects/$PROJECT_NUMBER/locations/global/workloadIdentityPools/$POOL_ID/providers/$AWS_PROVIDER_ID \
    --service-account $SERVICE_ACCOUNT_EMAIL \
    --aws \
    --output-file /path/to/generated/config.json

Where the following variables need to be substituted:

  • $PROJECT_NUMBER: The Google Cloud project number.
  • $POOL_ID: The workload identity pool ID.
  • $AWS_PROVIDER_ID: The AWS provider ID.
  • $SERVICE_ACCOUNT_EMAIL: The email of the service account to impersonate.

This will generate the configuration file in the specified output file.

If you want to use the AWS IMDSv2 flow, add the field "imdsv2_session_token_url": "http://169.254.169.254/latest/api/token" to the credential_source section of your AWS ADC configuration file. The gcloud create-cred-config command will be updated to support this soon.

You can now start using the Auth library to call Google Cloud resources from AWS.
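
For example, a minimal sketch assuming the generated file's path has been exported in GOOGLE_APPLICATION_CREDENTIALS (see Using External Identities below), so Application Default Credentials picks up the AWS configuration automatically:

const {GoogleAuth} = require('google-auth-library');

async function main() {
  // ADC reads the AWS workload identity configuration from the file referenced
  // by GOOGLE_APPLICATION_CREDENTIALS.
  const auth = new GoogleAuth({
    scopes: 'https://www.googleapis.com/auth/cloud-platform',
  });
  const client = await auth.getClient();
  const projectId = await auth.getProjectId();
  const res = await client.request({
    url: `https://storage.googleapis.com/storage/v1/b?project=${projectId}`,
  });
  console.log(res.data);
}

main().catch(console.error);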

Accessing resources from AWS using a custom AWS security credentials supplier

In order to access Google Cloud resources from Amazon Web Services (AWS), the following requirements are needed:

  • A workload identity pool needs to be created.
  • AWS needs to be added as an identity provider in the workload identity pool (The Google organization policy needs to allow federation from AWS).
  • Permission to impersonate a service account needs to be granted to the external identity.

Follow the detailed instructions on how to configure workload identity federation from AWS.

If you want to use AWS security credentials that cannot be retrieved using methods supported natively by this library, a custom AwsSecurityCredentialsSupplier implementation may be specified when creating an AWS client. The supplier must return valid, unexpired AWS security credentials when called by the GCP credential.

Note that the client does not cache the returned AWS security credentials, so caching logic should be implemented in the supplier to prevent multiple requests for the same resources.

import {
  AwsClient,
  AwsSecurityCredentials,
  AwsSecurityCredentialsSupplier,
  ExternalAccountSupplierContext,
} from 'google-auth-library';

class AwsSupplier implements AwsSecurityCredentialsSupplier {
  async getAwsRegion(context: ExternalAccountSupplierContext): Promise<string> {
    // Return the current AWS region, i.e. 'us-east-2'.
  }

  async getAwsSecurityCredentials(
    context: ExternalAccountSupplierContext
  ): Promise<AwsSecurityCredentials> {
    const audience = context.audience;
    // Return valid AWS security credentials for the requested audience.
  }
}

const clientOptions = {
  audience: '//iam.googleapis.com/projects/$PROJECT_NUMBER/locations/global/workloadIdentityPools/$WORKLOAD_POOL_ID/providers/$PROVIDER_ID', // Set the GCP audience.
  subject_token_type: 'urn:ietf:params:aws:token-type:aws4_request', // Set the subject token type.
  aws_security_credentials_supplier: new AwsSupplier() // Set the custom supplier.
}

const client = new AwsClient(clientOptions);

Where the audience is: //iam.googleapis.com/projects/$PROJECT_NUMBER/locations/global/workloadIdentityPools/$WORKLOAD_POOL_ID/providers/$PROVIDER_ID

Where the following variables need to be substituted:

  • $PROJECT_NUMBER: The Google Cloud project number.
  • $WORKLOAD_POOL_ID: The workload pool ID.
  • $PROVIDER_ID: The provider ID.

The values for audience, service account impersonation URL, and any other builder field can also be found by generating a credential configuration file with the gcloud CLI.
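
Once constructed, the client above can be used like the other auth clients in this library; a brief sketch:

// Sketch: make an authenticated call with the AwsClient created above.
async function listBuckets(client) {
  const projectId = await client.getProjectId();
  const res = await client.request({
    url: `https://storage.googleapis.com/storage/v1/b?project=${projectId}`,
  });
  console.log(res.data);
}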

Accessing resources from Microsoft Azure

In order to access Google Cloud resources from Microsoft Azure, the following requirements are needed:

  • A workload identity pool needs to be created.
  • Azure needs to be added as an identity provider in the workload identity pool (The Google organization policy needs to allow federation from Azure).
  • The Azure tenant needs to be configured for identity federation.
  • Permission to impersonate a service account needs to be granted to the external identity.

Follow the detailed instructions on how to configure workload identity federation from Microsoft Azure.

After configuring the Azure provider to impersonate a service account, a credential configuration file needs to be generated. Unlike service account credential files, the generated credential configuration file will only contain non-sensitive metadata to instruct the library on how to retrieve external subject tokens and exchange them for service account access tokens. The configuration file can be generated by using the gcloud CLI.

To generate the Azure workload identity configuration, run the following command:

# Generate an Azure configuration file.
gcloud iam workload-identity-pools create-cred-config \
    projects/$PROJECT_NUMBER/locations/global/workloadIdentityPools/$POOL_ID/providers/$AZURE_PROVIDER_ID \
    --service-account $SERVICE_ACCOUNT_EMAIL \
    --azure \
    --output-file /path/to/generated/config.json

Where the following variables need to be substituted:

  • $PROJECT_NUMBER: The Google Cloud project number.
  • $POOL_ID: The workload identity pool ID.
  • $AZURE_PROVIDER_ID: The Azure provider ID.
  • $SERVICE_ACCOUNT_EMAIL: The email of the service account to impersonate.

This will generate the configuration file in the specified output file.

You can now start using the Auth library to call Google Cloud resources from Azure.

Accessing resources from an OIDC identity provider

In order to access Google Cloud resources from an identity provider that supports OpenID Connect (OIDC), the following requirements are needed:

  • A workload identity pool needs to be created.
  • An OIDC identity provider needs to be added in the workload identity pool (The Google organization policy needs to allow federation from the identity provider).
  • Permission to impersonate a service account needs to be granted to the external identity.

Follow the detailed instructions on how to configure workload identity federation from an OIDC identity provider.

After configuring the OIDC provider to impersonate a service account, a credential configuration file needs to be generated. Unlike service account credential files, the generated credential configuration file will only contain non-sensitive metadata to instruct the library on how to retrieve external subject tokens and exchange them for service account access tokens. The configuration file can be generated by using the gcloud CLI.

For OIDC providers, the Auth library can retrieve OIDC tokens either from a local file location (file-sourced credentials) or from a local server (URL-sourced credentials).

File-sourced credentials

For file-sourced credentials, a background process needs to be continuously refreshing the file location with a new OIDC token prior to expiration. For tokens with one hour lifetimes, the token needs to be updated in the file every hour. The token can be stored directly as plain text or in JSON format.

To generate a file-sourced OIDC configuration, run the following command:

# Generate an OIDC configuration file for file-sourced credentials.
gcloud iam workload-identity-pools create-cred-config \
    projects/$PROJECT_NUMBER/locations/global/workloadIdentityPools/$POOL_ID/providers/$OIDC_PROVIDER_ID \
    --service-account $SERVICE_ACCOUNT_EMAIL \
    --credential-source-file $PATH_TO_OIDC_ID_TOKEN \
    # Optional arguments for file types. Default is "text":
    # --credential-source-type "json" \
    # Optional argument for the field that contains the OIDC credential.
    # This is required for json.
    # --credential-source-field-name "id_token" \
    --output-file /path/to/generated/config.json

Where the following variables need to be substituted:

  • $PROJECT_NUMBER: The Google Cloud project number.
  • $POOL_ID: The workload identity pool ID.
  • $OIDC_PROVIDER_ID: The OIDC provider ID.
  • $SERVICE_ACCOUNT_EMAIL: The email of the service account to impersonate.
  • $PATH_TO_OIDC_ID_TOKEN: The file path where the OIDC token will be retrieved from.

This will generate the configuration file in the specified output file.
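
As a concrete illustration of the file-sourced flow described above, here is a hedged sketch of a background refresher; fetchIdTokenFromYourIdp() is a hypothetical stand-in for however your identity provider issues OIDC tokens, and the path and interval are placeholders:

const fs = require('fs/promises');

// Hypothetical: obtain a fresh OIDC ID token from your identity provider.
async function fetchIdTokenFromYourIdp() {
  // Provider-specific logic goes here; a placeholder is returned for the sketch.
  return 'HEADER.PAYLOAD.SIGNATURE';
}

// Keep the credential source file up to date (plain text format assumed).
async function refreshTokenFile(path) {
  const idToken = await fetchIdTokenFromYourIdp();
  await fs.writeFile(path, idToken, 'utf8');
}

// Refresh well before the one hour token lifetime mentioned above runs out.
setInterval(
  () => refreshTokenFile('/path/to/oidc/token').catch(console.error),
  50 * 60 * 1000
);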

URL-sourced credentials

For URL-sourced credentials, a local server needs to host a GET endpoint to return the OIDC token. The response can be in plain text or JSON. Additional required request headers can also be specified.

To generate a URL-sourced OIDC workload identity configuration, run the following command:

# Generate an OIDC configuration file for URL-sourced credentials.
gcloud iam workload-identity-pools create-cred-config \
    projects/$PROJECT_NUMBER/locations/global/workloadIdentityPools/$POOL_ID/providers/$OIDC_PROVIDER_ID \
    --service-account $SERVICE_ACCOUNT_EMAIL \
    --credential-source-url $URL_TO_GET_OIDC_TOKEN \
    --credential-source-headers $HEADER_KEY=$HEADER_VALUE \
    # Optional arguments for file types. Default is "text":
    # --credential-source-type "json" \
    # Optional argument for the field that contains the OIDC credential.
    # This is required for json.
    # --credential-source-field-name "id_token" \
    --output-file /path/to/generated/config.json

Where the following variables need to be substituted:

  • $PROJECT_NUMBER: The Google Cloud project number.
  • $POOL_ID: The workload identity pool ID.
  • $OIDC_PROVIDER_ID: The OIDC provider ID.
  • $SERVICE_ACCOUNT_EMAIL: The email of the service account to impersonate.
  • $URL_TO_GET_OIDC_TOKEN: The URL of the local server endpoint to call to retrieve the OIDC token.
  • $HEADER_KEY and $HEADER_VALUE: The additional header key/value pairs to pass along the GET request to $URL_TO_GET_OIDC_TOKEN, e.g. Metadata-Flavor=Google.
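
To illustrate the URL-sourced flow described above, a minimal sketch of a local token endpoint; getCurrentOidcToken() is hypothetical, and the port, path, and plain-text response are placeholder choices:

const http = require('http');

// Hypothetical: return the current, unexpired OIDC ID token as a string.
async function getCurrentOidcToken() {
  // Provider-specific logic goes here; a placeholder is returned for the sketch.
  return 'HEADER.PAYLOAD.SIGNATURE';
}

// Minimal GET endpoint that returns the token as plain text.
http
  .createServer(async (req, res) => {
    if (req.method === 'GET' && req.url === '/token') {
      res.writeHead(200, {'Content-Type': 'text/plain'});
      res.end(await getCurrentOidcToken());
    } else {
      res.writeHead(404);
      res.end();
    }
  })
  .listen(5000);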

Accessing resources from an OIDC or SAML 2.0 identity provider using a custom supplier

If you want to use an OIDC or SAML 2.0 subject token that cannot be retrieved using methods supported natively by this library, a custom SubjectTokenSupplier implementation may be specified when creating an identity pool client. The supplier must return a valid, unexpired subject token when called by the GCP credential.

Note that the client does not cache the returned subject token, so caching logic should be implemented in the supplier to prevent multiple requests for the same resources.

import {
  IdentityPoolClient,
  SubjectTokenSupplier,
  ExternalAccountSupplierContext,
} from 'google-auth-library';

class CustomSupplier implements SubjectTokenSupplier {
  async getSubjectToken(
    context: ExternalAccountSupplierContext
  ): Promise<string> {
    const audience = context.audience;
    const subjectTokenType = context.subjectTokenType;
    // Return a valid subject token for the requested audience and subject token type.
  }
}

const clientOptions = {
  audience: '//iam.googleapis.com/projects/$PROJECT_NUMBER/locations/global/workloadIdentityPools/$WORKLOAD_POOL_ID/providers/$PROVIDER_ID', // Set the GCP audience.
  subject_token_type: 'urn:ietf:params:oauth:token-type:id_token', // Set the subject token type.
  subject_token_supplier: new CustomSupplier() // Set the custom supplier.
}

const client = new IdentityPoolClient(clientOptions);

Where the audience is: //iam.googleapis.com/projects/$PROJECT_NUMBER/locations/global/workloadIdentityPools/$WORKLOAD_POOL_ID/providers/$PROVIDER_ID

Where the following variables need to be substituted:

  • $PROJECT_NUMBER: The Google Cloud project number.
  • $WORKLOAD_POOL_ID: The workload pool ID.
  • $PROVIDER_ID: The provider ID.

The values for audience, service account impersonation URL, and any other builder field can also be found by generating a credential configuration file with the gcloud CLI.

Using External Account Authorized User workforce credentials

External account authorized user credentials allow you to sign in with a web browser to an external identity provider account via the gcloud CLI and create a configuration for the auth library to use.

To generate an external account authorized user workforce identity configuration, run the following command:

gcloud auth application-default login --login-config=$LOGIN_CONFIG

Where the following variable needs to be substituted:

  • $LOGIN_CONFIG: The path to the login configuration file generated for your workforce pool provider.

This will open a browser flow for you to sign in via the configured third-party identity provider and then store the external account authorized user configuration at the well-known ADC location. The auth library will then use the provided refresh token from the configuration to generate and refresh an access token to call Google Cloud services.

Note that the default lifetime of the refresh token is one hour, after which a new configuration will need to be generated from the gcloud CLI. The lifetime can be modified by changing the session duration of the workforce pool, and can be set as high as 12 hours.

Using Executable-sourced credentials with OIDC and SAML

Executable-sourced credentials

For executable-sourced credentials, a local executable is used to retrieve the 3rd party token. The executable must handle providing a valid, unexpired OIDC ID token or SAML assertion in JSON format to stdout.

To use executable-sourced credentials, the GOOGLE_EXTERNAL_ACCOUNT_ALLOW_EXECUTABLES environment variable must be set to 1.

To generate an executable-sourced workload identity configuration, run the following command:

# Generate a configuration file for executable-sourced credentials.
gcloud iam workload-identity-pools create-cred-config \
    projects/$PROJECT_NUMBER/locations/global/workloadIdentityPools/$POOL_ID/providers/$PROVIDER_ID \
    --service-account=$SERVICE_ACCOUNT_EMAIL \
    --subject-token-type=$SUBJECT_TOKEN_TYPE \
    # The absolute path for the program, including arguments.
    # e.g. --executable-command="/path/to/command --foo=bar"
    --executable-command=$EXECUTABLE_COMMAND \
    # Optional argument for the executable timeout. Defaults to 30s.
    # --executable-timeout-millis=$EXECUTABLE_TIMEOUT \
    # Optional argument for the absolute path to the executable output file.
    # See below on how this argument impacts the library behaviour.
    # --executable-output-file=$EXECUTABLE_OUTPUT_FILE \
    --output-file /path/to/generated/config.json

Where the following variables need to be substituted:

  • $PROJECT_NUMBER: The Google Cloud project number.
  • $POOL_ID: The workload identity pool ID.
  • $PROVIDER_ID: The OIDC or SAML provider ID.
  • $SERVICE_ACCOUNT_EMAIL: The email of the service account to impersonate.
  • $SUBJECT_TOKEN_TYPE: The subject token type.
  • $EXECUTABLE_COMMAND: The full command to run, including arguments. Must be an absolute path to the program.

The --executable-timeout-millis flag is optional. This is the duration for which the auth library will wait for the executable to finish, in milliseconds. Defaults to 30 seconds when not provided. The maximum allowed value is 2 minutes. The minimum is 5 seconds.

The --executable-output-file flag is optional. If provided, the file path must point to the 3PI credential response generated by the executable. This is useful for caching the credentials. When this path is specified, the Auth libraries will first check for its existence before running the executable. Caching the executable's JSON response in this file improves performance, since the executable does not need to run again until the cached credentials in the output file have expired. The executable must handle writing to this file; the auth libraries will only attempt to read from this location. The contents of the file should match the JSON response format produced by the executable, as shown below.

To retrieve the 3rd party token, the library will call the executable using the command specified. The executable's output must adhere to the response format specified below. It must output the response to stdout.

A sample successful executable OIDC response:

{
  "version": 1,
  "success": true,
  "token_type": "urn:ietf:params:oauth:token-type:id_token",
  "id_token": "HEADER.PAYLOAD.SIGNATURE",
  "expiration_time": 1620499962
}

A sample successful executable SAML response:

{
  "version": 1,
  "success": true,
  "token_type": "urn:ietf:params:oauth:token-type:saml2",
  "saml_response": "...",
  "expiration_time": 1620499962
}

For successful responses, the expiration_time field is only required when an output file is specified in the credential configuration.

A sample executable error response:

{
  "version": 1,
  "success": false,
  "code": "401",
  "message": "Caller not authorized."
}

These are all required fields for an error response. The code and message fields will be used by the library as part of the thrown exception.

Response format fields summary:

  • version: The version of the JSON output. Currently, only version 1 is supported.
  • success: The status of the response. When true, the response must contain the 3rd party token and token type. The response must also contain the expiration time if an output file was specified in the credential configuration. The executable must also exit with exit code 0. When false, the response must contain the error code and message fields and exit with a non-zero value.
  • token_type: The 3rd party subject token type. Must be urn:ietf:params:oauth:token-type:jwt, urn:ietf:params:oauth:token-type:id_token, or urn:ietf:params:oauth:token-type:saml2.
  • id_token: The 3rd party OIDC token.
  • saml_response: The 3rd party SAML response.
  • expiration_time: The 3rd party subject token expiration time in seconds (unix epoch time).
  • code: The error code string.
  • message: The error message.

All response types must include both the version and success fields.

  • Successful responses must include the token_type and one of id_token or saml_response. The expiration_time field must also be present if an output file was specified in the credential configuration.
  • Error responses must include both the code and message fields.

The library will populate the following environment variables when the executable is run:

  • GOOGLE_EXTERNAL_ACCOUNT_AUDIENCE: The audience field from the credential configuration. Always present.
  • GOOGLE_EXTERNAL_ACCOUNT_IMPERSONATED_EMAIL: The service account email. Only present when service account impersonation is used.
  • GOOGLE_EXTERNAL_ACCOUNT_OUTPUT_FILE: The output file location from the credential configuration. Only present when specified in the credential configuration.
  • GOOGLE_EXTERNAL_ACCOUNT_TOKEN_TYPE: The expected subject token type. Always present.

These environment variables can be used by the executable to avoid hard-coding these values.
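
Putting the response format and these environment variables together, a hedged sketch of what such an executable might look like in Node.js; how the ID token is obtained (fetchIdTokenFromYourIdp()) is hypothetical and provider-specific:

#!/usr/bin/env node

// Hypothetical: obtain a fresh OIDC ID token for the given audience.
async function fetchIdTokenFromYourIdp(audience) {
  // Provider-specific logic goes here; a placeholder is returned for the sketch.
  return 'HEADER.PAYLOAD.SIGNATURE';
}

async function main() {
  // Populated by the auth library when it runs this executable.
  const audience = process.env.GOOGLE_EXTERNAL_ACCOUNT_AUDIENCE;
  const tokenType = process.env.GOOGLE_EXTERNAL_ACCOUNT_TOKEN_TYPE;

  const idToken = await fetchIdTokenFromYourIdp(audience);

  // Success response, written to stdout in the documented format.
  process.stdout.write(JSON.stringify({
    version: 1,
    success: true,
    token_type: tokenType, // e.g. urn:ietf:params:oauth:token-type:id_token
    id_token: idToken,
    // Only required when an output file is specified in the configuration.
    expiration_time: Math.floor(Date.now() / 1000) + 3600,
  }));
}

main().catch(err => {
  // Error response: code/message fields plus a non-zero exit code.
  process.stdout.write(JSON.stringify({
    version: 1,
    success: false,
    code: '500',
    message: err.message,
  }));
  process.exit(1);
});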

Security considerations

The following security practices are highly recommended:

  • Access to the script should be restricted as it will be displaying credentials to stdout. This ensures that rogue processes do not gain access to the script.
  • The configuration file should not be modifiable. Write access should be restricted to avoid processes modifying the executable command portion.

Given the complexity of using executable-sourced credentials, it is recommended to use the existing supported mechanisms (file-sourced/URL-sourced) for providing 3rd party credentials unless they do not meet your specific requirements.

You can now use the Auth library to call Google Cloud resources from an OIDC or SAML provider.

Configurable Token Lifetime

When creating a credential configuration with workload identity federation using service account impersonation, you can provide an optional argument to configure the service account access token lifetime.

To generate the configuration with configurable token lifetime, run the following command (this example uses an AWS configuration, but the token lifetime can be configured for all workload identity federation providers):

# Generate an AWS configuration file with configurable token lifetime.
gcloud iam workload-identity-pools create-cred-config \
    projects/$PROJECT_NUMBER/locations/global/workloadIdentityPools/$POOL_ID/providers/$AWS_PROVIDER_ID \
    --service-account $SERVICE_ACCOUNT_EMAIL \
    --aws \
    --output-file /path/to/generated/config.json \
    --service-account-token-lifetime-seconds $TOKEN_LIFETIME

Where the following variables need to be substituted:

  • $PROJECT_NUMBER: The Google Cloud project number.
  • $POOL_ID: The workload identity pool ID.
  • $AWS_PROVIDER_ID: The AWS provider ID.
  • $SERVICE_ACCOUNT_EMAIL: The email of the service account to impersonate.
  • $TOKEN_LIFETIME: The desired lifetime duration of the service account access token in seconds.

The service-account-token-lifetime-seconds flag is optional. If not provided, this defaults to one hour. The minimum allowed value is 600 (10 minutes) and the maximum allowed value is 43200 (12 hours). If a lifetime greater than one hour is required, the service account must be added as an allowed value in an Organization Policy that enforces the constraints/iam.allowServiceAccountCredentialLifetimeExtension constraint.

Note that configuring a short lifetime (e.g. 10 minutes) will result in the library initiating the entire token exchange flow every 10 minutes, which will call the 3rd party token provider even if the 3rd party token is not expired.

Workforce Identity Federation

Workforce identity federation lets you use an external identity provider (IdP) to authenticate and authorize a workforce—a group of users, such as employees, partners, and contractors—using IAM, so that the users can access Google Cloud services. Workforce identity federation extends Google Cloud's identity capabilities to support syncless, attribute-based single sign on.

With workforce identity federation, your workforce can access Google Cloud resources using an external identity provider (IdP) that supports OpenID Connect (OIDC) or SAML 2.0 such as Azure Active Directory (Azure AD), Active Directory Federation Services (AD FS), Okta, and others.

Accessing resources using an OIDC or SAML 2.0 identity provider

In order to access Google Cloud resources from an identity provider that supports OpenID Connect (OIDC), the following requirements are needed:

  • A workforce identity pool needs to be created.
  • An OIDC or SAML 2.0 identity provider needs to be added in the workforce pool.

Follow the detailed instructions on how to configure workforce identity federation.

After configuring an OIDC or SAML 2.0 provider, a credential configuration file needs to be generated. The generated credential configuration file contains non-sensitive metadata to instruct the library on how to retrieve external subject tokens and exchange them for GCP access tokens. The configuration file can be generated by using the gcloud CLI.

The Auth library can retrieve external subject tokens from a local file location (file-sourced credentials), from a local server (URL-sourced credentials) or by calling an executable (executable-sourced credentials).

File-sourced credentials

For file-sourced credentials, a background process needs to be continuously refreshing the file location with a new subject token prior to expiration. For tokens with one hour lifetimes, the token needs to be updated in the file every hour. The token can be stored directly as plain text or in JSON format.

To generate a file-sourced OIDC configuration, run the following command:

# Generate an OIDC configuration file for file-sourced credentials.
gcloud iam workforce-pools create-cred-config \
    locations/global/workforcePools/$WORKFORCE_POOL_ID/providers/$PROVIDER_ID \
    --subject-token-type=urn:ietf:params:oauth:token-type:id_token \
    --credential-source-file=$PATH_TO_OIDC_ID_TOKEN \
    --workforce-pool-user-project=$WORKFORCE_POOL_USER_PROJECT \
    # Optional arguments for file types. Default is "text":
    # --credential-source-type "json" \
    # Optional argument for the field that contains the OIDC credential.
    # This is required for json.
    # --credential-source-field-name "id_token" \
    --output-file=/path/to/generated/config.json

Where the following variables need to be substituted:

  • $WORKFORCE_POOL_ID: The workforce pool ID.
  • $PROVIDER_ID: The provider ID.
  • $PATH_TO_OIDC_ID_TOKEN: The file path used to retrieve the OIDC token.
  • $WORKFORCE_POOL_USER_PROJECT: The project number associated with the workforce pools user project.

To generate a file-sourced SAML configuration, run the following command:

# Generate a SAML configuration file for file-sourced credentials.
gcloud iam workforce-pools create-cred-config \
    locations/global/workforcePools/$WORKFORCE_POOL_ID/providers/$PROVIDER_ID \
    --credential-source-file=$PATH_TO_SAML_ASSERTION \
    --subject-token-type=urn:ietf:params:oauth:token-type:saml2 \
    --workforce-pool-user-project=$WORKFORCE_POOL_USER_PROJECT \
    --output-file=/path/to/generated/config.json

Where the following variables need to be substituted:

  • $WORKFORCE_POOL_ID: The workforce pool ID.
  • $PROVIDER_ID: The provider ID.
  • $PATH_TO_SAML_ASSERTION: The file path used to retrieve the base64-encoded SAML assertion.
  • $WORKFORCE_POOL_USER_PROJECT: The project number associated with the workforce pools user project.

These commands generate the configuration file in the specified output file.

URL-sourced credentials

For URL-sourced credentials, a local server needs to host a GET endpoint to return the OIDC token. The response can be in plain text or JSON. Additional required request headers can also be specified.

To generate a URL-sourced OIDC workforce identity configuration, run the following command:

# Generate an OIDC configuration file for URL-sourced credentials.
gcloud iam workforce-pools create-cred-config \
    locations/global/workforcePools/$WORKFORCE_POOL_ID/providers/$PROVIDER_ID \
    --subject-token-type=urn:ietf:params:oauth:token-type:id_token \
    --credential-source-url=$URL_TO_RETURN_OIDC_ID_TOKEN \
    --credential-source-headers $HEADER_KEY=$HEADER_VALUE \
    --workforce-pool-user-project=$WORKFORCE_POOL_USER_PROJECT \
    --output-file=/path/to/generated/config.json

Where the following variables need to be substituted:

  • $WORKFORCE_POOL_ID: The workforce pool ID.
  • $PROVIDER_ID: The provider ID.
  • $URL_TO_RETURN_OIDC_ID_TOKEN: The URL of the local server endpoint.
  • $HEADER_KEY and $HEADER_VALUE: The additional header key/value pairs to pass along the GET request to $URL_TO_RETURN_OIDC_ID_TOKEN, e.g. Metadata-Flavor=Google.
  • $WORKFORCE_POOL_USER_PROJECT: The project number associated with the workforce pools user project.

To generate a URL-sourced SAML configuration, run the following command:

# Generate a SAML configuration file for URL-sourced credentials.
gcloud iam workforce-pools create-cred-config \
    locations/global/workforcePools/$WORKFORCE_POOL_ID/providers/$PROVIDER_ID \
    --subject-token-type=urn:ietf:params:oauth:token-type:saml2 \
    --credential-source-url=$URL_TO_GET_SAML_ASSERTION \
    --credential-source-headers $HEADER_KEY=$HEADER_VALUE \
    --workforce-pool-user-project=$WORKFORCE_POOL_USER_PROJECT \
    --output-file=/path/to/generated/config.json

Where the following variables need to be substituted:

  • $WORKFORCE_POOL_ID: The workforce pool ID.
  • $PROVIDER_ID: The provider ID.
  • $URL_TO_GET_SAML_ASSERTION: The URL of the local server endpoint.
  • $HEADER_KEY and $HEADER_VALUE: The additional header key/value pairs to pass along the GET request to $URL_TO_GET_SAML_ASSERTION, e.g. Metadata-Flavor=Google.
  • $WORKFORCE_POOL_USER_PROJECT: The project number associated with the workforce pools user project.

These commands generate the configuration file in the specified output file.

Using Executable-sourced workforce credentials with OIDC and SAML

Executable-sourced credentials

For executable-sourced credentials, a local executable is used to retrieve the 3rd party token. The executable must handle providing a valid, unexpired OIDC ID token or SAML assertion in JSON format to stdout.

To use executable-sourced credentials, the GOOGLE_EXTERNAL_ACCOUNT_ALLOW_EXECUTABLES environment variable must be set to 1.

To generate an executable-sourced workforce identity configuration, run the following command:

# Generate a configuration file for executable-sourced credentials.
gcloud iam workforce-pools create-cred-config \
    locations/global/workforcePools/$WORKFORCE_POOL_ID/providers/$PROVIDER_ID \
    --subject-token-type=$SUBJECT_TOKEN_TYPE \
    # The absolute path for the program, including arguments.
    # e.g. --executable-command="/path/to/command --foo=bar"
    --executable-command=$EXECUTABLE_COMMAND \
    # Optional argument for the executable timeout. Defaults to 30s.
    # --executable-timeout-millis=$EXECUTABLE_TIMEOUT \
    # Optional argument for the absolute path to the executable output file.
    # See below on how this argument impacts the library behaviour.
    # --executable-output-file=$EXECUTABLE_OUTPUT_FILE \
    --workforce-pool-user-project=$WORKFORCE_POOL_USER_PROJECT \
    --output-file /path/to/generated/config.json

Where the following variables need to be substituted:

  • $WORKFORCE_POOL_ID: The workforce pool ID.
  • $PROVIDER_ID: The provider ID.
  • $SUBJECT_TOKEN_TYPE: The subject token type.
  • $EXECUTABLE_COMMAND: The full command to run, including arguments. Must be an absolute path to the program.
  • $WORKFORCE_POOL_USER_PROJECT: The project number associated with the workforce pools user project.

The --executable-timeout-millis flag is optional. This is the duration for which the auth library will wait for the executable to finish, in milliseconds. Defaults to 30 seconds when not provided. The maximum allowed value is 2 minutes. The minimum is 5 seconds.

The --executable-output-file flag is optional. If provided, the file path must point to the 3rd party credential response generated by the executable. This is useful for caching the credentials. When this path is specified, the Auth libraries will first check for its existence before running the executable. Caching the executable's JSON response in this file improves performance, since the executable does not need to run again until the cached credentials in the output file have expired. The executable must handle writing to this file; the auth libraries will only attempt to read from this location. The contents of the file should match the JSON response format produced by the executable, as shown below.

To retrieve the 3rd party token, the library will call the executable using the command specified. The executable's output must adhere to the response format specified below. It must output the response to stdout.

Refer to the executable-sourced credentials section under Workload Identity Federation above for the executable response specification.

Security considerations

The following security practices are highly recommended:

  • Access to the script should be restricted as it will be displaying credentials to stdout. This ensures that rogue processes do not gain access to the script.
  • The configuration file should not be modifiable. Write access should be restricted to avoid processes modifying the executable command portion.

Given the complexity of using executable-sourced credentials, it is recommended to use the existing supported mechanisms (file-sourced/URL-sourced) for providing 3rd party credentials unless they do not meet your specific requirements.

You can now use the Auth library to call Google Cloud resources from an OIDC or SAML provider.

Accessing resources from an OIDC or SAML 2.0 identity provider using a custom supplier

If you want to use an OIDC or SAML 2.0 subject token that cannot be retrieved using methods supported natively by this library, a custom SubjectTokenSupplier implementation may be specified when creating an identity pool client. The supplier must return a valid, unexpired subject token when called by the GCP credential.

Note that the client does not cache the returned subject token, so caching logic should be implemented in the supplier to prevent multiple requests for the same resources.

import {
  IdentityPoolClient,
  SubjectTokenSupplier,
  ExternalAccountSupplierContext,
} from 'google-auth-library';

class CustomSupplier implements SubjectTokenSupplier {
  async getSubjectToken(
    context: ExternalAccountSupplierContext
  ): Promise<string> {
    const audience = context.audience;
    const subjectTokenType = context.subjectTokenType;
    // Return a valid subject token for the requested audience and subject token type.
  }
}

const clientOptions = {
  audience: '//iam.googleapis.com/locations/global/workforcePools/$WORKFORCE_POOL_ID/providers/$PROVIDER_ID', // Set the GCP audience.
  subject_token_type: 'urn:ietf:params:oauth:token-type:id_token', // Set the subject token type.
  subject_token_supplier: new CustomSupplier() // Set the custom supplier.
}

const client = new IdentityPoolClient(clientOptions);

Where the audience is: //iam.googleapis.com/locations/global/workforcePools/$WORKFORCE_POOL_ID/providers/$PROVIDER_ID

Where the following variables need to be substituted:

  • $WORKFORCE_POOL_ID: The workforce pool ID.
  • $PROVIDER_ID: The provider ID.

The workforce pool user project is the project number associated with the workforce pools user project (the project used for quota and billing).

The values for audience, service account impersonation URL, and any other builder field can also be found by generating a credential configuration file with the gcloud CLI.

Using External Identities

External identities (AWS, Azure and OIDC-based providers) can be used with Application Default Credentials. In order to use external identities with Application Default Credentials, you need to generate the JSON credentials configuration file for your external identity as described above. Once generated, store the path to this file in the GOOGLE_APPLICATION_CREDENTIALS environment variable.

export GOOGLE_APPLICATION_CREDENTIALS=/path/to/config.json

The library can now automatically choose the right type of client and initialize credentials from the context provided in the configuration file.

const {GoogleAuth} = require('google-auth-library');

async function main() {
  const auth = new GoogleAuth({
    scopes: 'https://www.googleapis.com/auth/cloud-platform'
  });
  const client = await auth.getClient();
  const projectId = await auth.getProjectId();
  // List all buckets in a project.
  const url = `https://storage.googleapis.com/storage/v1/b?project=${projectId}`;
  const res = await client.request({ url });
  console.log(res.data);
}

When using external identities with Application Default Credentials in Node.js, the roles/browser role needs to be granted to the service account. The Cloud Resource Manager API should also be enabled on the project. This is needed since the library will try to auto-discover the project ID from the current environment using the impersonated credential. To avoid this requirement, the project ID can be explicitly specified on initialization.

const auth = new GoogleAuth({
  scopes: 'https://www.googleapis.com/auth/cloud-platform',
  // Pass the project ID explicitly to avoid the need to grant `roles/browser` to the service account
  // or enable Cloud Resource Manager API on the project.
  projectId: 'CLOUD_RESOURCE_PROJECT_ID',
});

You can also explicitly initialize external account clients using the generated configuration file.

const {ExternalAccountClient} = require('google-auth-library');
const jsonConfig = require('/path/to/config.json');

async function main() {
  const client = ExternalAccountClient.fromJSON(jsonConfig);
  client.scopes = ['https://www.googleapis.com/auth/cloud-platform'];
  // Resolve the project ID, then list all buckets in that project.
  const projectId = await client.getProjectId();
  const url = `https://storage.googleapis.com/storage/v1/b?project=${projectId}`;
  const res = await client.request({url});
  console.log(res.data);
}

Security Considerations

Note that this library does not perform any validation on the token_url, token_info_url, or service_account_impersonation_url fields of the credential configuration. It is not recommended to use a credential configuration that you did not generate with the gcloud CLI unless you verify that the URL fields point to a googleapis.com domain.
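
As a hedged illustration of that check, a small sketch that refuses a configuration whose URL fields do not point at a googleapis.com host; the path is a placeholder, and the host test is one reasonable interpretation of the recommendation above:

const config = require('/path/to/config.json');

for (const field of ['token_url', 'token_info_url', 'service_account_impersonation_url']) {
  const value = config[field];
  if (!value) continue; // The field may be absent.
  const host = new URL(value).host;
  if (host !== 'googleapis.com' && !host.endsWith('.googleapis.com')) {
    throw new Error(`Refusing to use config: ${field} points to ${host}`);
  }
}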

Working with ID Tokens

Fetching ID Tokens

If your application is running on Cloud Run or Cloud Functions, or using Cloud Identity-Aware Proxy (IAP), you will need to fetch an ID token to access your application. For this, use the method getIdTokenClient on the GoogleAuth client.

For invoking Cloud Run services, your service account will need the Cloud Run Invoker IAM permission.

For invoking Cloud Functions, your service account will need the Function Invoker IAM permission.

// Make a request to a protected Cloud Run service.
const {GoogleAuth} = require('google-auth-library');

async function main() {
  const url = 'https://cloud-run-1234-uc.a.run.app';
  const auth = new GoogleAuth();
  const client = await auth.getIdTokenClient(url);
  const res = await client.request({url});
  console.log(res.data);
}

main().catch(console.error);

A complete example can be found in samples/idtokens-serverless.js.

For invoking Cloud Identity-Aware Proxy, you will need to pass the Client ID used when you set up your protected resource as the target audience.

// Make a request to a protected Cloud Identity-Aware Proxy (IAP) resource
const {GoogleAuth} = require('google-auth-library');

async function main() {
  const targetAudience = 'iap-client-id';
  const url = 'https://iap-url.com';
  const auth = new GoogleAuth();
  const client = await auth.getIdTokenClient(targetAudience);
  const res = await client.request({url});
  console.log(res.data);
}

main().catch(console.error);

A complete example can be found in samples/idtokens-iap.js.

Verifying ID Tokens

If you've secured your IAP app with signed headers, you can use this library to verify the IAP header:

const {OAuth2Client} = require('google-auth-library');
// Expected audience for App Engine.
const expectedAudience = `/projects/your-project-number/apps/your-project-id`;
// IAP issuer
const issuers = ['https://cloud.google.com/iap'];
// Verify the token. OAuth2Client throws an Error if verification fails.
const oAuth2Client = new OAuth2Client();
const response = await oAuth2Client.getIapPublicKeys();
// `idToken` is the IAP JWT taken from the `x-goog-iap-jwt-assertion` request header.
const ticket = await oAuth2Client.verifySignedJwtWithCertsAsync(
  idToken,
  response.pubkeys,
  expectedAudience,
  issuers
);

// Print out the info contained in the IAP ID token
console.log(ticket)

A complete example can be found in samples/verifyIdToken-iap.js.

Impersonated Credentials Client

Google Cloud impersonated credentials are used for creating short-lived service account credentials.

Provides authentication for applications where local credentials impersonate a remote service account using the IAM Credentials API.

An Impersonated Credentials Client is instantiated with a sourceClient. This client should use credentials that have the "Service Account Token Creator" role (roles/iam.serviceAccountTokenCreator), and should authenticate with the https://www.googleapis.com/auth/cloud-platform, or https://www.googleapis.com/auth/iam scopes.

sourceClient is used by the Impersonated Credentials Client to impersonate a target service account with a specified set of scopes.

Sample Usage

const { GoogleAuth, Impersonated } = require('google-auth-library');
const { SecretManagerServiceClient } = require('@google-cloud/secret-manager');

async function main() {

  // Acquire source credentials:
  const auth = new GoogleAuth();
  const client = await auth.getClient();

  // Impersonate new credentials:
  let targetClient = new Impersonated({
    sourceClient: client,
    targetPrincipal: '[email protected]',
    lifetime: 30,
    delegates: [],
    targetScopes: ['https://www.googleapis.com/auth/cloud-platform']
  });

  // Get impersonated credentials:
  const authHeaders = await targetClient.getRequestHeaders();
  // Do something with `authHeaders.Authorization`.

  // Use impersonated credentials:
  const anotherProjectID = 'anotherProjectID'; // Placeholder: the project whose buckets to list.
  const url = `https://www.googleapis.com/storage/v1/b?project=${anotherProjectID}`;
  const resp = await targetClient.request({ url });
  for (const bucket of resp.data.items) {
    console.log(bucket.name);
  }

  // Use impersonated credentials with google-cloud client library
  // Note: this works only with certain cloud client libraries utilizing gRPC
  //    e.g., SecretManager, KMS, AIPlatform
  // will not currently work with libraries using REST, e.g., Storage, Compute
  const smClient = new SecretManagerServiceClient({
    projectId: anotherProjectID,
    auth: {
      getClient: () => targetClient,
    },
  });
  const secretName = 'projects/anotherProjectNumber/secrets/someProjectName/versions/1';
  const [accessResponse] = await smClient.accessSecretVersion({
    name: secretName,
  });

  const responsePayload = accessResponse.payload.data.toString('utf8');
  // Do something with the secret contained in `responsePayload`.
};

main();

Downscoped Client

Downscoping with Credential Access Boundaries is used to restrict the Identity and Access Management (IAM) permissions that a short-lived credential can use.

The DownscopedClient class can be used to produce a downscoped access token from a CredentialAccessBoundary and a source credential. The Credential Access Boundary specifies which resources the newly created credential can access, as well as an upper bound on the permissions that are available on each resource. Using downscoped credentials ensures tokens in flight always carry the least privilege necessary (the principle of least privilege).

Notice: Only Cloud Storage supports Credential Access Boundaries for now.

Sample Usage

There are two entities needed to generate and use credentials generated from Downscoped Client with Credential Access Boundaries.

  • Token broker: This is the entity with elevated permissions. This entity has the permissions needed to generate downscoped tokens. The common pattern of usage is to have a token broker with elevated access generate these downscoped credentials from higher access source credentials and pass the downscoped short-lived access tokens to a token consumer via some secure authenticated channel for limited access to Google Cloud Storage resources.
const {GoogleAuth, DownscopedClient} = require('google-auth-library');
// Define CAB rules which will restrict the downscoped token to have readonly
// access to objects starting with "customer-a" in bucket "bucket_name".
const cabRules = {
  accessBoundary: {
    accessBoundaryRules: [
      {
        availableResource: `//storage.googleapis.com/projects/_/buckets/bucket_name`,
        availablePermissions: ['inRole:roles/storage.objectViewer'],
        availabilityCondition: {
          expression:
            `resource.name.startsWith('projects/_/buckets/` +
            `bucket_name/objects/customer-a')`
        }
      },
    ],
  },
};

// This will use ADC to get the credentials used for the downscoped client.
const googleAuth = new GoogleAuth({
  scopes: ['https://www.googleapis.com/auth/cloud-platform']
});

// Obtain an authenticated client via ADC.
const client = await googleAuth.getClient();

// Use the client to create a DownscopedClient.
const cabClient = new DownscopedClient(client, cabRules);

// Refresh the tokens.
const refreshedAccessToken = await cabClient.getAccessToken();

// This will need to be passed to the token consumer.
const accessToken = refreshedAccessToken.token;
const expiryDate = refreshedAccessToken.expirationTime;

A token broker can be set up on a server in a private network. Various workloads (token consumers) in the same network will send authenticated requests to that broker for downscoped tokens to access or modify specific Google Cloud Storage buckets.

The broker will instantiate downscoped credentials instances that can be used to generate short lived downscoped access tokens which will be passed to the token consumer.
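
For illustration only, a minimal broker endpoint might look like the following sketch. It assumes Express is available, reuses the cabClient created in the snippet above, and leaves consumer authentication as a placeholder middleware.

const express = require('express');
const app = express();

// Placeholder: in a real broker this middleware would verify the consumer's
// identity over a secure, authenticated channel.
const authenticateConsumer = (req, res, next) => next();

app.get('/downscoped-token', authenticateConsumer, async (req, res) => {
  try {
    // cabClient is the DownscopedClient instantiated in the broker snippet above.
    const refreshedAccessToken = await cabClient.getAccessToken();
    res.json({
      access_token: refreshedAccessToken.token,
      expiry_date: refreshedAccessToken.expirationTime,
    });
  } catch (err) {
    res.status(500).json({error: err.message});
  }
});

app.listen(8080);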

  • Token consumer: This is the consumer of the downscoped tokens. This entity does not have the direct ability to generate access tokens and instead relies on the token broker to provide it with downscoped tokens to run operations on GCS buckets. It is assumed that the downscoped token consumer may have its own mechanism to authenticate itself with the token broker.
const {OAuth2Client} = require('google-auth-library');
const {Storage} = require('@google-cloud/storage');

// Create the OAuth credentials (the consumer).
const oauth2Client = new OAuth2Client();
// We are defining a refresh handler instead of a one-time access
// token/expiry pair.
// This will allow the consumer to obtain new downscoped tokens on
// demand every time a token is expired, without any additional code
// changes.
oauth2Client.refreshHandler = async () => {
  // The common pattern of usage is to have a token broker pass the
  // downscoped short-lived access tokens to a token consumer via some
  // secure authenticated channel.
  const refreshedAccessToken = await cabClient.getAccessToken();
  return {
    access_token: refreshedAccessToken.token,
    expiry_date: refreshedAccessToken.expirationTime,
  }
};

// Use the consumer client to define storageOptions and create a GCS object.
const storageOptions = {
  projectId: 'my_project_id',
  authClient: oauth2Client,
};

const storage = new Storage(storageOptions);

const downloadFile = await storage
    .bucket('bucket_name')
    .file('customer-a-data.txt')
    .download();
console.log(downloadFile.toString('utf8'));


Samples

Samples are in the samples/ directory. Each sample's README.md has instructions for running its sample.

  • Adc: source code | Open in Cloud Shell
  • Authenticate Explicit: source code | Open in Cloud Shell
  • Authenticate Implicit With Adc: source code | Open in Cloud Shell
  • Compute: source code | Open in Cloud Shell
  • Credentials: source code | Open in Cloud Shell
  • Downscopedclient: source code | Open in Cloud Shell
  • Headers: source code | Open in Cloud Shell
  • Id Token From Impersonated Credentials: source code | Open in Cloud Shell
  • Id Token From Metadata Server: source code | Open in Cloud Shell
  • Id Token From Service Account: source code | Open in Cloud Shell
  • ID Tokens for Identity-Aware Proxy (IAP): source code | Open in Cloud Shell
  • ID Tokens for Serverless: source code | Open in Cloud Shell
  • Jwt: source code | Open in Cloud Shell
  • Keepalive: source code | Open in Cloud Shell
  • Keyfile: source code | Open in Cloud Shell
  • Oauth2-code Verifier: source code | Open in Cloud Shell
  • Oauth2: source code | Open in Cloud Shell
  • Sign Blob: source code | Open in Cloud Shell
  • Sign Blob Impersonated: source code | Open in Cloud Shell
  • Verify Google Id Token: source code | Open in Cloud Shell
  • Verifying ID Tokens from Identity-Aware Proxy (IAP): source code | Open in Cloud Shell
  • Verify Id Token: source code | Open in Cloud Shell

The Google Auth Library Node.js Client API Reference documentation also contains samples.

Supported Node.js Versions

Our client libraries follow the Node.js release schedule. Libraries are compatible with all current active and maintenance versions of Node.js. If you are using an end-of-life version of Node.js, we recommend that you update as soon as possible to an actively supported LTS version.

Google's client libraries support legacy versions of Node.js runtimes on a best-efforts basis with the following warnings:

  • Legacy versions are not tested in continuous integration.
  • Some security patches and features cannot be backported.
  • Dependencies cannot be kept up-to-date.

Client libraries targeting some end-of-life versions of Node.js are available, and can be installed through npm dist-tags. The dist-tags follow the naming convention legacy-(version). For example, npm install google-auth-library@legacy-8 installs client libraries for versions compatible with Node.js 8.

Versioning

This library follows Semantic Versioning.

This library is considered to be stable. The code surface will not change in backwards-incompatible ways unless absolutely necessary (e.g. because of critical security issues) or with an extensive deprecation period. Issues and requests against stable libraries are addressed with the highest priority.

More Information: Google Cloud Platform Launch Stages

Contributing

Contributions welcome! See the Contributing Guide.

Please note that this README.md, the samples/README.md, and a variety of configuration files in this repository (including .nycrc and tsconfig.json) are generated from a central template. To edit one of these files, make an edit to its template in the corresponding template directory.

License

Apache Version 2.0

See LICENSE

google-auth-library-nodejs's Issues

generateAuthUrl() generated url always asks for offline access

the generateAuthUrl() method generates three different URLs for me, which differ depending on the access_type parameter

with access_type = 'online':

https://accounts.google.com/o/oauth2/auth?access_type=online&scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fuserinfo.profile&response_type=code...

with access_type = 'offline':

https://accounts.google.com/o/oauth2/auth?access_type=offline&scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fuserinfo.profile&response_type=code...

and without access_type (according to this example it should default to online):

https://accounts.google.com/o/oauth2/auth?scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fuserinfo.profile&response_type=code...

Regardless of the url, there is always a prompt asking for offline access.

The version of the library I've used: 0.9.6

The code I've used to generate the url:

var url = oauth2Client.generateAuthUrl({
    access_type: 'online',
    scope: 'https://www.googleapis.com/auth/userinfo.profile'
});

Buggy comment in oauth2client.js regarding replay on 401/403

The docstring on OAuth2Client.prototype.request says:

 * Provides a request implementation with OAuth 2.0 flow.
 * If credentials have a refresh_token, in cases of HTTP
 * 401 and 403 responses, it automatically asks for a new
 * access token and replays the unsuccessful request.

But I don't see any logic that would retry on 401/403, nor does that seem plausible given that it wouldn't work with stream bodies. AFAIK, the token is only refreshed before making the request. (Sorry if I missed something.)

TypeError: Not a Buffer

When using verifyIdToken to verify a really old token (so far I have only encountered this with these kinds of tokens), instead of getting an expired error the app throws this error.

I've dug a bit and found that the main issue resides on line 495 in oauth2client.js:

var pem = certs[envelope.kid];

That resolves to undefined sometimes. envelope.kid holds a value, but not one that matches the certs generated by the getFederatedSignonCerts function in oauth2client.js, thus causing pemVerifier.verify to throw.
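
A defensive check along these lines (illustrative only, not the actual fix that shipped) would at least turn the crash into a descriptive error:

var pem = certs[envelope.kid];
if (!pem) {
  // The key id in the token does not match any of the current federated
  // sign-on certs; this is what happens with very old tokens whose signing
  // cert has rotated out.
  throw new Error('No matching cert found for kid: ' + envelope.kid);
}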

API conventions

Oh boy, it's me again.

I think this library would be easier to fit into a modern Node developer's code if it was a bit easier to use.

Using the example from the readme:

(new GoogleAuth).getApplicationDefault(function(err, authClient) {

It's very rare you find the use of new Something() anymore. OO is still an important tool for structuring the code, but I think an approach like gcloud is better for the end user; basically, use OO internally, but don't require the user to see it. Alias by camelCase (as opposed to UpperCamelCase), and auto-initialize for them:

function GoogleAuth() {
  if (!(this instanceof GoogleAuth)) {
    return new GoogleAuth();
  }
}

module.exports.googleAuth = GoogleAuth;

// for the user:
googleAuth()
// is equivalent to:
new googleAuth()

  if (err === null) {
    // Inject scopes if they have not been injected by the environment
    if (authClient.createScopedRequired && authClient.createScopedRequired()) {

createScopedRequired being a method that may or may not exist is confusing. Can't this always be defined? Or maybe from the docs, it can be declared in what cases scopes are required up front?

** Edit ** Looking at this again, couldn't it just be a bool? If the function is sync, it should just be if (authClient.createScopedRequired) {}

      var scopes = [
        'https://www.googleapis.com/auth/cloud-platform',
        'https://www.googleapis.com/auth/compute'
      ];
      authClient = authClient.createScoped(scopes);
    }

The naming here is a bit confusing -- maybe it's just me. createScopedRequired and createScoped. Maybe this could be scopesRequired and addScopes.

Using addScopes would also avoid the re-assignment of the authClient. The pattern there feels a bit backwards. Get an authClient then get another one from it. It would be great if I could have gotten the authClient right from the start (as mentioned in the last suggestion with docs + requiring docs up front) or modify it so that it's good to go (addScopes).

  // Fetch the access token
    var _ = require('lodash');
    var optionalUri = null;  // optionally specify the URI being authorized
    var reqHeaders = {};
    authClient.getRequestMetadata(optionalUri, function(err, headers) {
      if (err === null) {
        // Use authorization headers
        reqHeaders = _.merge(allHeaders, headers);
      }
    });

This is a good case for #54 -- if this library should be for the dev who just needs a token so they can talk to the API, it shouldn't be this hard. There should be a "getToken" method and a "extendRequestOptions" or something similar.


These are just some thoughts from the only code example given. I think docs and convenience methods will go a very long way, but in general, more terse naming conventions and a modern, intuitive API will help fill in the blanks.

Thanks for listening!

Error code no longer passed through in response

[This is cross-posted from https://github.com/googleapis/google-api-nodejs-client/issues/424. I'm not sure what project it should be in.]

I'm using v2.0.3 of this library in the following way:

      client.users.drafts.update({
        userId: 'me',
        id: gmailDraftId,
        resource: {
          message: {
            raw: require('base64-url').encode(job.data.emailDraft)
          }
        }
      }, function(err, res, req) {
          // Detect 404 errors
         if (err.code === 404) console.log('does not exist');
      }));

In v1.x of this library, err.code was populated and correctly set to 404 when the draft didn't exist. However, since I upgraded to 2.x, err.code is now undefined. However, the response from the api hasn't changed; req.body is still: { error: { errors: [Object], code: 404, message: 'Not Found' } } }.

Did something break in 2.x causing the error code to not get passed through from the response?

Make it easier to use OAuth2Client with multiple processes?

If I have multiple processes working with the same clientId and token, I need to make sure both processes are exactly in sync, knowing about the latest token, and grabbing an exclusive lock before trying to refresh the token. But this use case doesn't seem to be supported in OAuth2Client. For example, the docstrings describe that the request can be retried if it fails due to an expired token, so .credentials could change without me knowing for a while. And there doesn't seem to be an obvious place to insert an exclusive lock to prevent two processes from trying to refresh the token at the same time. The most obvious route seems to be to subclass OAuth2Client, override OAuth2Client.prototype.getRequestMetadata, and reimplement that method to grab the token out of a file and also grab an exclusive lock before trying to refresh. But the method calls the private this.refreshToken_. I could call this.refreshAccessToken instead, but it is deprecated.

Is this the right approach, or should I be looking elsewhere?

If it is, I think some tweaks could make this easier for the user to implement, e.g. if OAuth2Client.prototype.getRequestMetadata called a public overrideable method instead of this.refreshToken_; then I could just override that method instead of getRequestMetadata. I think this.credentials would also need to be replaced with an overrideable method that calls a callback with the latest credentials. That seems like a bigger change.

JWT access token is created every time you call getRequestMetadata

It looks like the JWT access token auth implementation creates a new JWT access token every time you call getRequestMetadata, instead of reusing a token created previously (if that one is still valid and has the same authUri).

see https://github.com/google/google-auth-library-nodejs/blob/master/lib/auth/jwtaccess.js#L55

This brings two problems:
-- the auth headers will be different every time you make an RPC (issue time will differ so the access token will also be different), and HTTP/2 header compression won't be able to cache the header that is considerably larger in size than a regular OAuth2 token.
-- additional overhead signing the JWT for each request (not sure how much of an issue, but I think it can be a problem under heavy load).

The exact way of caching JWT access tokens needs some thought - it needs to correctly handle token expiration and handle access tokens for different authUris separately.

It would also be good to investigate how JWT access token implementations in other languages (Ruby, Java, etc.) handle this; I wouldn't be too surprised if they behaved the same as Node.

ADC Support for User Refresh Tokens

The initial implementation of Application Default Credentials is missing one of the 3 credential types: User Refresh Tokens. This is basically the end result of a 3LO flow, and is an important part of the tool integration, as the Cloud SDK writes out this form of credential in the well-known-file location.

The file format to support is this:

{  
  "type": "authorized_user",     
  "client_id": "kflc91.apps.googleusercontent.com",     
  "client_secret": "1/olgReg3YaBQqxm6T",
  "refresh_token": "2/fFAGRNJru1FTz70BzhT3Zg"     
}

The "type" value is to differentiate it from a service account. The other 3 values are needed to get refresh tokens back. This is a common OAuth2 scenario, so there is likely to be an existing object that already supports getting access tokens from these values. This file could be in either the well-known file location or the environment variable location.

For reference, the Java implementation of these features is here:
https://github.com/google/google-auth-library-java/blob/master/oauth2_http/java/com/google/auth/oauth2/GoogleCredentials.java
https://github.com/google/google-auth-library-java/blob/master/oauth2_http/java/com/google/auth/oauth2/UserCredentials.java

This should be about 3 days of work.

Bump Version to remove Security Vulnerability

Can you please publish the latest version of the code to NPM that includes the fix for the security vulnerability in request? It's also causing NPM shrinkwrap to fail due to a bug in NPM if you have a newer version of request required in your library

refresh token does not work for jwt client

After a long analysis I found following:

JWT.prototype.refreshToken_ = function(ignored_, opt_callback) {
  this._createGToken(function(err, gToken) {
    if (err) {
      callback(opt_callback, err);
    } else {
      gToken.getToken(function (err, token) {
        callback(opt_callback, err, {
          access_token: token,
          token_type: 'Bearer',
          expiry_date: gToken.token_expires * 1000 // <-- wrong
        });
      });
    }
  });
};

The problem is, the gToken object does not have a 'token_expires' property but a property called 'expires_at', which is a timestamp in ms.

The correct code is:

JWT.prototype.refreshToken_ = function(ignored_, opt_callback) {
  this._createGToken(function(err, gToken) {
    if (err) {
      callback(opt_callback, err);
    } else {
      gToken.getToken(function (err, token) {
        callback(opt_callback, err, {
          access_token: token,
          token_type: 'Bearer',
          expiry_date: gToken.expires_at // <-- right
        });
      });
    }
  });
};

I think there is a confusion between the gtoken library and the gapitoken library which has a 'token_expires' property in its token object

Add owners to the npm package

Can we add some new owners to the npm package?

npm owner add jmdobry
npm owner add murgatroid99
npm owner add stephenplusplus

JWT Client: Cannot read property refreshToken_ of undefined

Just noticed a bug in the lib/auth/jwtclient.js in the authorize method:

/**
 * Get the initial access token using gToken.
 * @param {function=} opt_callback Optional callback.
 */
JWT.prototype.authorize = function(opt_callback) {
  var that = this;
  var done = opt_callback || noop;

  that.refreshToken_(null, function(err, result) {
    if (!err) {
      that.credentials = result;
      that.credentials.refresh_token = 'jwt-placeholder';
      that.key = that.gtoken.key;
      that.email = that.gtoken.iss;
    }
    done(err, result);
  });
};

that is undefined!

Request callback function arguments are not in correct order

The callback functions in the request module have arguments error, response, body, in that order. But DefaultTransporter passes the arguments in the order error, body, response, on lines 70 and 106.

I suppose that's because the API is designed to return meaningful stuff in the body of the HTTP response. And that's fine.

The problem arises when someone writes a callback that uses the response argument and, due to the difference in design, naively ends up using body.

JWT client getRequestMetadata should return value

Just spent a short while debugging.

JWT.prototype.getRequestMetadata = function(opt_uri, cb_with_metadata) {
  if (this.createScopedRequired() && opt_uri) {
    // no scopes have been set, but a uri has been provided.  Use JWTAccess credentials.
    var alt = new JWTAccess(this.email, this.key);
    alt.getRequestMetadata(opt_uri, cb_with_metadata);
  } else {
    JWT.super_.prototype.getRequestMetadata.call(
        this, opt_uri, cb_with_metadata);
  }
};

The oauth2client getRequestMetaData returns a value. And if you look at OAuth2Client.prototype.request it will return whatever getRequestMetadata returns. JWT doesn't bubble up the return.

Not returning the value causes issues in https://github.com/google/google-api-nodejs-client/blob/master/lib/apirequest.js where it'll fall over when it tries to pipe the request.

Possible fix, just return the values returned.

JWT.prototype.getRequestMetadata = function(opt_uri, cb_with_metadata) {
  if (this.createScopedRequired() && opt_uri) {
    // no scopes have been set, but a uri has been provided.  Use JWTAccess credentials.
    var alt = new JWTAccess(this.email, this.key);
    return alt.getRequestMetadata(opt_uri, cb_with_metadata);
  } else {
    return JWT.super_.prototype.getRequestMetadata.call(
        this, opt_uri, cb_with_metadata);
  }
};

Brains not functioning 100%, so if anything didn't make sense, let me know

Browser support?

I want to use this in the browser via Browserify, but it looks like it depends on process.env. Is there another way to use Application Default Credentials in the browser? I've searched and can't seem to find anything.

Jwt Access Credentials should cache JWTs

The JWT Access Credentials should temporarily cache JWTs. Suggested algorithm (a sketch follows the list):

  • Hash JWTs using Audience as key.
  • Also store the timestamp of last use with each JWT.
  • On access clear any JWT unused for more than 1 hour.
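
A sketch of that algorithm (names are illustrative, not the library's API; expiry handling would still need to be layered on top):

const ONE_HOUR_MS = 60 * 60 * 1000;

class JwtCache {
  constructor() {
    // audience -> {jwt, lastUsed}
    this.entries = new Map();
  }

  get(audience, signFn) {
    const now = Date.now();
    // Evict anything unused for more than an hour.
    for (const [aud, entry] of this.entries) {
      if (now - entry.lastUsed > ONE_HOUR_MS) this.entries.delete(aud);
    }
    let entry = this.entries.get(audience);
    if (!entry) {
      // signFn produces a new JWT for this audience.
      entry = {jwt: signFn(audience), lastUsed: now};
      this.entries.set(audience, entry);
    }
    entry.lastUsed = now;
    return entry.jwt;
  }
}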

Simplify surface

The GoogleAuth class would work just as well as a simple module that exposes methods without being instantiated.

OAuth2.getAccessToken returns invalid accessToken if refreshed

If the token has expired and you have a refresh token setup, it returns a new access token that only has two parts, i.e. accessToken.split('.').length === 2, which results in a JSON parse error when doing:

let split = accessToken.split('.');
let payloadSegment = split[1];
payload = JSON.parse(oauth.decodeBase64(payloadSegment));

So in the getAccessToken method, the tokens that are returned seem to have an id_token which looks more like an accessToken, but that's not accessible since those aren't returned, so I wrote my own version of that method that returns all of the tokens.

/**
 * Copy of getAccessToken, but returning all of the tokens
 */
google.auth.OAuth2.prototype.getOrRefreshTokens = function(callback) {
  var credentials = this.credentials;
  var expiryDate = credentials.expiry_date;

  // if no expiry time, assume it's not expired
  var isTokenExpired = expiryDate ? expiryDate <= (new Date()).getTime() : false;

  if (!credentials.access_token && !credentials.refresh_token) {
    return callback(new Error('No access or refresh token is set.'), null);
  }

  var shouldRefresh = !credentials.access_token || isTokenExpired;
  if (shouldRefresh && credentials.refresh_token) {
    if (!this.credentials.refresh_token) {
      return callback(new Error('No refresh token is set.'), null);
    }

    this.refreshAccessToken(function(err, tokens, response) {
      if (err) {
        return callback(err, null, response);
      }     
      if (!tokens || (tokens && !tokens.access_token)) {
        return callback(new Error('Could not refresh access token.'), null, response);
      }     
      return callback(null, tokens, response);
    });   
  } else {
    return callback(null, credentials, null);
  }
};

I guess it's possible that I'm using the wrong token, and the access/id tokens need to be used differently? Which token does what and should be used for what purpose?

Add convenience methods for common use cases

Hello again!

I was wondering if it would be possible to design some convenience wrappers around common uses for this library?

I can't appreciate all the nuances and complexities this library has to support, so I can't comment on all of the use cases that can be simplified. But as an example, I made google-auto-auth to make authing with application default, JSON/p12 keyfile, and credentials JS objects easier: https://github.com/stephenplusplus/google-auto-auth/blob/0bea71c76c46158a4a4911b3f16af35f7a16ae9d/index.js#L81-L92

I'm still not entirely sure I'm using this library's API the right way (#53), but a developer using google-auto-auth is surely going to have an easier time getting started.

Thanks!

Add a new credential type, IAMCredential

It is constructed with and holds two fields:

  • iam-token
  • iam-authority-selector

IAMCredential applies these values to requests as a pair of HTTP headers (or equivalents) with the keys "x-goog-iam-authorization-token" and "x-goog-iam-authority-selector", respectively.

N.B., there is no requirement that an IAMCredential be returned by Application Default Credentials.
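
An illustrative shape for such a credential (not an existing class in this library) that simply applies the two headers to outgoing requests:

class IAMCredential {
  constructor(iamToken, iamAuthoritySelector) {
    this.iamToken = iamToken;
    this.iamAuthoritySelector = iamAuthoritySelector;
  }

  // Return the pair of headers described above so a caller can merge them
  // into its request options.
  getRequestHeaders() {
    return {
      'x-goog-iam-authorization-token': this.iamToken,
      'x-goog-iam-authority-selector': this.iamAuthoritySelector,
    };
  }
}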

Use done callback for async tests

This is missing in a couple of places. I have specifically noticed it missing in JWT#fromJSON tests. Those currently work anyway because the callback is called synchronously, but that should not be assumed.

Documentation

Hello, I just wanted to check if any documentation has been planned for this module? Something like this is a generally accepted format for docs. Maybe for this module, they can be grouped by auth types?


Default Credentials

This library provides an implementation of application default credentials for Node.js.

...

Example

var GoogleAuth = require('google-auth-library');
// ...

API

auth.getApplicationDefault(callback)

callback.err
  • Type: Error

An error occurred while trying to find the default credentials.

callback.authClient
  • Type: Object
Methods
createScopeRequired
  • Type: Function

If defined, scopes are required.

This is just a skeleton, it can of course be modified to your liking.

Thanks!

OAuth2 Refresh Tokens

I have implemented the following process flow:

  1. Try to authorize the client, using this function:
function _authorise(mailBox, callback) {
  let auth = new googleAuth();

  let clientId = eval(`process.env.GMAIL_API_CLIENT_ID_${mailBox.toUpperCase()}`);
  let clientSecret = eval(`process.env.GMAIL_API_CLIENT_SECRET_${mailBox.toUpperCase()}`);
  let redirectUri = eval(`process.env.GMAIL_API_REDIRECT_URI_${mailBox.toUpperCase()}`);
  let tokenFile = process.env.GMAIL_API_TOKEN_PATH + mailBox.toLowerCase()+ process.env.GMAIL_API_TOKEN_BASE_FILE_NAME;

  let oauth2Client = new auth.OAuth2(clientId, clientSecret, redirectUri);
  fs.readFile(tokenFile, ((err, token) => {
    if (err) {
      _getNewToken(mailBox,oauth2Client,callback);
    } else {
      oauth2Client.credentials = JSON.parse(token);
      callback(oauth2Client);
    }
  }))
}
  2. The method will check for existence of a token in a file. If the file is NOT found, the following functions will create the file:
function _getNewToken(mailBox, oauth2Client, callback) {
  var authUrl = oauth2Client.generateAuthUrl({
    access_type: 'offline',
    scope: process.env.GMAIL_API_SCOPES
  });
  console.log('To authorize this app, please use this url: ', authUrl);
  var rl = readline.createInterface({
    input: process.stdin,
    output: process.stdout
  });
  rl.question('Enter the code from that page here: ', ((code) => {
    rl.close();
    oauth2Client.getToken(code, function(err, token) {
      if (err) {
        console.log('Error while trying to retrieve access token', err);
        return;
      }
      oauth2Client.credentials = token;
      _storeToken(mailBox,token);
      callback(oauth2Client);
    });
  }));
}

function _storeToken(mailBox, token) {
  let tokenFile = process.env.GMAIL_API_TOKEN_PATH + mailBox.toLowerCase()+ process.env.GMAIL_API_TOKEN_BASE_FILE_NAME;
  fs.writeFile(tokenFile, JSON.stringify(token));
}

i am using https://www.googleapis.com/auth/gmail.readonly as the scopes.

Here's a sample of the file created:

{"access_token":"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","token_type":"Bearer","refresh_token":"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","expiry_date":1460509994081}

When processed, here's a sample of the auth object that is returned:

OAuth2Client {
  transporter: DefaultTransporter {},
  clientId_: 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.apps.googleusercontent.com',
  clientSecret_: 'xxxxxxxxxxxxxxxxxxxxxxxx',
  redirectUri_: 'urn:ietf:wg:oauth:2.0:oob',
  opts: {},
  credentials: {
access_token: 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx',
     token_type: 'Bearer',
     refresh_token: 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx',
     expiry_date: 1460509994081
  }
}

If I delete the file, and go through the manual consent process, then the authentication works 100%, until the token expires. After this, I get the "Invalid Credentials" message.

My assumption is that once the token expires, the refresh token should automatically recreate the access token. Am I missing something?

Very pathetic documentation!!

I wanted you to note this point. That's why I opened this as an issue. A good, detailed document will never do any bad. It's really hard to figure out how to use this. Please update this doc so that users with all levels of experience, from novices to professionals, can seamlessly use this.

low error margin on expiry_date => isTokenExpired false negatives

My app saves refresh tokens in long-term storage and caches the transient access tokens. I make calls to the Google Drive API, polling the changes endpoint in ~1 minute intervals.

Every once in a while (~24 hours), I get a 401 Invalid Credentials error. The refresh token is definitely fine - it hasn't expired or been revoked; using the same refresh token a minute later (on the next poll cycle), it continues to work, despite the auth failure in the previous poll cycle.

Because it occurs pretty randomly, I think this issue originates from the following lines in the auth client library:

tokens.expiry_date = ((new Date()).getTime() + (tokens.expires_in * 1000));
var isTokenExpired = expiryDate ? expiryDate <= (new Date()).getTime() : false;

If there's any significant network latency in the remote calls, then the true expiry date will be earlier than the computed expiry_date, and isTokenExpired may be set to false even when it should really be true. This would result in an API call with the expired access token, leading to a 401 Invalid Credentials error.

In my specific case, it would also make sense in that on the next poll cycle we would have passed the computed expiry_date, so the refresh token is then used to get a new access token and the system continues to work fine.

Let me know if I'm totally off base, and thanks for this great client library!
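
One way to mitigate this, sketched below, is to build a safety margin into the expiry check so network latency cannot push a request past the real expiry. (Newer releases of the library expose an eagerRefreshThresholdMillis option for the same purpose.)

// Illustrative only: treat the token as expired slightly early.
const EXPIRY_MARGIN_MS = 60 * 1000; // one minute of slack, chosen arbitrarily

function isTokenExpired(credentials) {
  const expiryDate = credentials.expiry_date;
  // If there is no expiry time, assume the token is still valid.
  if (!expiryDate) return false;
  return expiryDate - EXPIRY_MARGIN_MS <= Date.now();
}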

Fix lint errors

All of the travis builds are failing with dozens of lint errors, which makes it hard to tell if the tests are actually passing. This should be fixed.

Getting a 'conflicting engine id' error in JWT module

Hi there

I call the 'authorize' function of the JWT module and get error Error: error:26078067:engine routines:ENGINE_LIST_ADD:conflicting engine id

Full details
I am using a windows 7.1 machine but I am running this under docker, using the iron.io docker image. So in the docker BASH shell I execute command:

$ docker run --rm  -v "//$PWD":/worker -w //worker iron/node:4.1 node index.js

and receive stack trace

Error: error:26078067:engine routines:ENGINE_LIST_ADD:conflicting engine id
    at Error (native)
    at Sign.sign (crypto.js:279:26)
    at Object.sign (/worker/node_modules/jwa/index.js:53:47)
    at Object.jwsSign [as sign] (/worker/node_modules/jws/lib/sign-stream.js:23:26)
    at GoogleToken._signJWT (/worker/node_modules/gtoken/lib/index.js:238:25)
    at GoogleToken._requestToken (/worker/node_modules/gtoken/lib/index.js:193:8)
    at GoogleToken.getToken (/worker/node_modules/gtoken/lib/index.js:75:12)
    at /worker/node_modules/google-auth-library/lib/auth/jwtClient.js:137:21
    at JWT._createGToken (/worker/node_modules/google-auth-library/lib/auth/jwtClient.js:225:12)
    at JWT.refreshToken_ (/worker/node_modules/google-auth-library/lib/auth/jwtClient.js:133:15)

My code:

var jwt = require('google-auth-library/lib/auth/jwtClient');
var key = require('mykeyfile.json');
var jwtClient = new jwt(key.client_email, null, key.private_key, ['https://www.google.com/m8/feeds']); 

jwtClient.authorize(function(err, tokens) {
     if (err) {throw err;}

 console.log ("authorised :-)");
 console.log("token " + tokens.access_token);


 })

Any help much appreciated

Chris

generateAuthUrl doesn't pass state token

Should the generateAuthUrl method from OAuth2Client accept a state param in order to generate an auth url that provides anti-forgery unique session token validation?

The implementation flow suggested by Google (link) uses an auth url with the state param.
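
For reference, current versions of generateAuthUrl do accept a state option; a sketch of the desired flow is below (client id, secret, and redirect URI are placeholders, and the state value must also be stored in the user's session for later verification):

const crypto = require('crypto');
const {OAuth2Client} = require('google-auth-library');

const oauth2Client = new OAuth2Client(
  'your-client-id',
  'your-client-secret',
  'https://example.com/oauth2callback'
);

// Generate an unguessable anti-forgery token and keep it in the session.
const state = crypto.randomBytes(32).toString('hex');

const url = oauth2Client.generateAuthUrl({
  access_type: 'online',
  scope: 'https://www.googleapis.com/auth/userinfo.profile',
  state,
});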

0.9.4 Causes error:0906D06C:PEM routines:PEM_read_bio:no start line

Full error is
139661365319552:error:0906D06C:PEM routines:PEM_read_bio:no start line:../deps/openssl/openssl/crypto/pem/pem_lib.c:696:Expecting: ANY PRIVATE KEY

I use google apis node for BigQuery in some of my projects and was wiping my node_modules for another reason. When I restarted my project I got the above error even though my pem key hadn't changed. I got a new pem key but that didn't change the error.

After some investigation I found that google-auth-library had bumped from 0.9.3 to 0.9.4. Forcing it back to 0.9.3 with my original pem key fixed the auth problem.

DeprecationWarning: Using Buffer without `new` will soon stop working.

I get a DeprecationWarning when calling the authorize method on google.auth.JWT in node 7.2.0

This is caused by the base64url package inside the jws package, and has been fixed in the latest versions of each. This can be fixed by bumping the jws package from 3.0 to 3.1.

full trace when I use the node flag to throw deprecation warnings:

{ DeprecationWarning: Using Buffer without `new` will soon stop working. Use `new Buffer()`, or preferably `Buffer.from()`, `Buffer.allocUnsafe()` or `Buffer.alloc()` instead.
    at Buffer (buffer.js:79:13)
    at base64url (/Users/andrew/Desktop/someproject/node_modules/base64url/index.js:41:21)
    at jwsSecuredInput (/Users/andrew/Desktop/someproject/node_modules/jws/lib/sign-stream.js:11:25)
    at Object.jwsSign [as sign] (/Users/andrew/Desktop/someproject/node_modules/jws/lib/sign-stream.js:22:24)
    at GoogleToken._signJWT (/Users/andrew/Desktop/someproject/node_modules/gtoken/lib/index.js:240:25)
    at GoogleToken._requestToken (/Users/andrew/Desktop/someproject/node_modules/gtoken/lib/index.js:195:8)
    at GoogleToken.getToken (/Users/andrew/Desktop/someproject/node_modules/gtoken/lib/index.js:77:12)
    at /Users/andrew/Desktop/someproject/node_modules/google-auth-library/lib/auth/jwtclient.js:137:21
    at JWT._createGToken (/Users/andrew/Desktop/someproject/node_modules/google-auth-library/lib/auth/jwtclient.js:226:12)
    at JWT.refreshToken_ (/Users/andrew/Desktop/someproject/node_modules/google-auth-library/lib/auth/jwtclient.js:133:15)
  name: 'DeprecationWarning' }

Add a public method to retrieve the access token to OAuth2Client

I need this because I have a library that needs to use the access token to set HTTP2 headers, without routing any requests through the Auth library.

I think that this can be accomplished by moving the authorize function, or a very similar implementation, from jwtclient.js to oauth2client.js.
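
For what it's worth, current releases expose getAccessToken() on the auth clients, which covers this use case; a sketch follows (the environment variables for key material are purely illustrative):

const {JWT} = require('google-auth-library');

async function fetchToken() {
  const client = new JWT({
    email: process.env.SA_EMAIL,       // illustrative
    key: process.env.SA_PRIVATE_KEY,   // illustrative
    scopes: ['https://www.googleapis.com/auth/cloud-platform'],
  });
  // Returns the raw access token, which can then be placed on HTTP/2
  // headers without routing the request through the auth library.
  const {token} = await client.getAccessToken();
  return token;
}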

Error 'The incoming JSON object does not contain a client_email field' on JSON keys

Lately the Google Developer console has been giving me JSON service account keys with empty client_email and client_id fields. These are being generated at https://github.com/google/google-auth-library-nodejs/blob/71511d08a05253ede89930342ccdf4c48f93bead/lib/auth/jwtaccess.js#L101

Either there is a bug in this library (as in having an expectation that client_email must be present), or there is a bug in the developer console (as in generating a key without the required client_email property), but I will start by opening the bug here.

Here's an excerpt from a key I got today. I have verified that others are also getting keys with empty client_email.

  ..... ----END PRIVATE KEY-----\n",
  "client_email": "",
  "client_id": "",
  "type": "service_account"
}

cannot drive.files.update to "addParents" with service jwt client

I have a service account and I'm using the subject param in JWT function from google-auth-library-nodejs/lib/auth/jwtclient.js. I get a 200 OK status but the parents are not actually added.

Example:

const jwtClient = new google.auth.JWT(
        key.client_email,
        null,
        key.private_key,
        [
            'https://www.googleapis.com/auth/spreadsheets',
            'https://www.googleapis.com/auth/drive'
        ],
        '[email protected]'
);

async.waterfall(
    [
        (callback) => {
            sheets.spreadsheets.create(
                {
                    auth: jwtClient,
                    resource: {
                        properties: {
                            title: moment().format('YYYY-MM-DD')
                        }
                    }
                },
                (err, spreadsheet) => {
                    console.log(spreadsheet);
                    callback(err, spreadsheet.spreadsheetId)
                }
            )
        },
        (spreadsheetId, callback) => {
            console.log(config.spreadsheetFolders.international);
            drive.files.update(
                {
                    auth: jwtClient,
                    fileId: spreadsheetId,
                    resource: {
                        addParents: config.spreadsheetFolders.international
                    }
                },
                (err, file) => {
                    console.log(file);
                    callback(err, file);
                }
            )
        }
    ],
    (err, results) => {
        if (err) {
            console.log(err);
        } else {
                        something();
        }
    }
);

Ideas? This is Drive api v3 just fyi.

Promote OAuth2 service account to Jwt Access credentials

Application Default Credentials can return Oauth2 service account credentials. gRPC can use more efficient JwtAccess credentials in place of this if no scopes have been specified. This tracks logic to detect this case and "promote" the credentials to JwtAccess form with the same identity properties if the OAuth2 form is passed in for use by gRPC.

Logic should be similar to this (in Java):

if (credentials instanceof ServiceAccountOAuth2Credentials) {
  ServiceAccountOAuth2Credentials serviceAccount =
      (ServiceAccountOAuth2Credentials) credentials;
  if (serviceAccount.getScopes().length() == 0) {
    credentials = new ServiceAccountJwtAccessCredentials(
        serviceAccount.getAccountEmail(),
        serviceAccount.getPrivateKey(),
        serviceAccount.getPrivateKeyId());
  }
}
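
A rough Node.js translation of the same check, assuming the JWT client's email, key, and keyId fields (illustrative, not the shipped implementation):

const {JWT, JWTAccess} = require('google-auth-library');

function maybePromoteToJwtAccess(credentials) {
  // If ADC handed back a service-account JWT client with no scopes, build a
  // JWTAccess client from the same identity so gRPC can use self-signed JWTs.
  if (credentials instanceof JWT &&
      (!credentials.scopes || credentials.scopes.length === 0)) {
    return new JWTAccess(credentials.email, credentials.key, credentials.keyId);
  }
  return credentials;
}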

Support detection of project ID

RE: googleapis/google-cloud-node#1653

It would be nice if the authClient returned from this module also exported its best guess at the project_id. In the case of service account JSON files, they have the project_id set, so exporting that should be simple. For ADC, it would involve spawning a command:

$ gcloud -q config list core/project --format=json
{
  "core": {
    "project": "grape-spaceship-123"
  }
}

cc @tmatsuo @jmdobry
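
A sketch of that fallback (error handling is illustrative): read project_id straight from the service account key file when one is used, otherwise shell out to gcloud as above.

const {execFile} = require('child_process');

function getProjectIdFromGcloud(callback) {
  execFile(
    'gcloud',
    ['-q', 'config', 'list', 'core/project', '--format=json'],
    (err, stdout) => {
      if (err) return callback(err);
      const config = JSON.parse(stdout);
      callback(null, config.core && config.core.project);
    }
  );
}

// Example: getProjectIdFromGcloud((err, projectId) => console.log(projectId));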
