Boundary

Boundary is an identity-aware proxy that provides a simple, secure way to access hosts and critical systems on your network.

With Boundary you can:

  • Integrate with your IdP of choice using OpenID Connect, enabling users to securely sign in to their Boundary environment
  • Provide just-in-time network access to network resources, wherever they reside
  • Manage session credentials via a native static credential store, or dynamically generate unique per-session credentials by integrating with HashiCorp Vault
  • Automate discovery of new endpoints
  • Manage privileged sessions using Boundary’s session controls
  • Standardize your team's access workflow with a consistent experience for any type of infrastructure across any provider

Boundary is designed to be straightforward to understand, highly scalable, and resilient. It can run in clouds, on-prem, secure enclaves and more, and does not require an agent to be installed on every end host, making it suitable for access to managed/cloud services and container-based workflows in addition to traditional host systems and services.

For more information, refer to "What is Boundary?" on the Boundary website.

Getting Started

Boundary consists of two server components:

  • Controller, which serves the API and coordinates session requests
  • Workers, which perform session handling

A real-world Boundary installation will likely consist of one or more controllers paired with one or more workers. A single Boundary binary can act in either, or both, of these two modes.

Additionally, Boundary provides a Desktop client and CLI for end-users to request and establish authorized sessions to resources across a network.

Boundary does not require software to be installed on your hosts and services.

Requirements

Boundary has two external dependencies:

  • A SQL database
  • At least one KMS

SQL database

The database contains Boundary's configuration and session information. The controller nodes must be able to access the database.

Values that are secrets (e.g., credentials) are encrypted in the database. Currently, PostgreSQL is supported as the database and has been tested with Postgres 12 and above.

Boundary uses only common extensions and both hosted and self-managed instances are supported. In most instances, all that you need is a database endpoint and the appropriate credentials.
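
For illustration, the controller reads its database endpoint from the database block of its configuration file; a minimal sketch, with placeholder endpoint and credentials:

controller {
  name = "example-controller"
  database {
    # Placeholder endpoint and credentials
    url = "postgresql://boundary:boundary@db.example.com:5432/boundary"
  }
}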

KMS

Boundary uses KMS keys for various purposes, such as protecting secrets, authenticating workers, recovering data, encrypting values in Boundary’s configuration, and more. Boundary uses key derivation extensively to avoid key sprawl of these high-value keys.

You can use any cloud KMS or Vault's Transit Secrets Engine to satisfy the KMS requirement.
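
As a sketch, a root key backed by AWS KMS is declared with a kms stanza in the configuration; the region and key ID below are placeholders, and production setups typically declare additional purposes such as worker-auth and recovery:

kms "awskms" {
  purpose    = "root"
  region     = "us-east-1"                            # placeholder
  kms_key_id = "19ec80b0-dfdd-4d97-8164-c6examplekey" # placeholder
}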

Trying out Boundary

Running Boundary in a more permanent context requires a few more steps, such as writing some simple configuration files to tell the nodes how to reach their database and KMS. The steps below, along with the extra information needed for permanent installations, are detailed in our Installation Guide.

⚠️ Do not use the main branch except for dev or test cases. Boundary 0.10 introduced release branches, which should be safe to track; however, migrations in main may be renumbered if needed. The Boundary team will not be able to provide assistance if running main over the long term results in migration breakages or other bugs.

Download and Run from Release Page

Download the latest release of the server binary and appropriate desktop client(s) from our downloads page.
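
For example, on Linux (the version number below is a placeholder; check the downloads page for the current release):

VERSION=0.1.4
curl -fsSLO "https://releases.hashicorp.com/boundary/${VERSION}/boundary_${VERSION}_linux_amd64.zip"
sudo unzip "boundary_${VERSION}_linux_amd64.zip" -d /usr/local/bin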

Quickstart with Boundary Dev

Boundary has a dev mode that you can use for testing. In dev mode, you can start both a controller and worker with a single command, and they have the following properties:

  • The controller starts a PostgreSQL Docker container to use as storage. This container will be shut down and removed, if possible, when the controller is shut down gracefully.
  • The controller uses an internal KMS with ephemeral keys
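
A sketch of overriding the default dev-mode listen addresses; the -api-listen-address flag appears in an issue later in this document, and the other two flags are assumed to follow the same naming pattern:

boundary dev \
  -api-listen-address=127.0.0.1:9200 \
  -cluster-listen-address=127.0.0.1:9201 \
  -proxy-listen-address=127.0.0.1:9202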

Building from Source

If you meet the local build requirements (a recent Go toolchain, plus Node.js and Yarn for compiling the UI assets), you can quickly get up and running with Boundary.

Simply run:

make install

This will build Boundary. (The first time this is run it will fetch and compile UI assets, which will take a few extra minutes.) Once complete, run Boundary in dev mode:

$GOPATH/bin/boundary dev

Please note that development may require other tools; to install the set of tools at the versions used by the Boundary team, run:

make tools

Without doing so, you may encounter errors while running make install. It is important to also note that using make tools will install various tools used for Boundary development to the normal Go binary directory; this may overwrite or take precedence over tools that might already be installed on the system.

Start Boundary

Start the server binary with:

boundary dev

This will start a Controller service listening on http://127.0.0.1:9200 for incoming API requests and a Worker service listening on 127.0.0.1:9202 for incoming session requests. It will also create various default resources and display useful information, such as a login name and password that can be used to authenticate.

Configuring Resources

For a simple test of Boundary in dev mode you don't generally need to configure any resources at all! But it's useful to understand what dev mode did for you so you can then take further steps. By default, dev mode will create:

  • The global Scope for initial authentication, containing a Password-type Auth Method, along with an Account for login.
  • An organization Scope under global, and a project Scope inside the organization.
  • A Host Catalog with a default Host Set, which itself contains a Host with the address of the local machine (127.0.0.1)
  • A Target mapping the Host Set to a set of connection parameters, with a default port of 22 (e.g. SSH)

You can go into Boundary's web UI or use its API to change these default values, for instance if you want to connect to a different host or need to modify the port on which to connect.
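
For example, a sketch of changing the default port from the CLI; the target ID is the dev-mode default, and the -default-port flag is assumed from the targets CLI help:

boundary targets update tcp -id ttcp_1234567890 -default-port 5432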

Making the Connection

Next, let's actually make a connection to your local SSH daemon via Boundary:

  1. Authenticate to Boundary; using default dev values, this would be boundary authenticate password -auth-method-id ampw_1234567890 -login-name admin -password password. (Note that if you do not include the password flag you will be prompted for it.)
  2. Run boundary connect ssh -target-id ttcp_1234567890. If you want to adjust the username, pass -username <name> to the command.

Check out the possibilities for target configuration to test out limiting (or increasing) the number of connections per session or setting a maximum time limit; try canceling an active session from the sessions page or via boundary sessions, make your own commands with boundary connect -exec, and so on.
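
As a sketch of -exec, the connect command can substitute templated fields for the local proxy's address into the wrapped command (template names assumed from the connect help text):

boundary connect -exec psql -target-id ttcp_1234567890 -- -h {{boundary.ip}} -p {{boundary.port}} -U postgres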

Going Further

This example is a simple way to get started but omits several key steps that could be taken in a production context:

  • Using a firewall or other means to restrict the set of hosts allowed to connect to a local service to only Boundary Worker nodes, thereby making Boundary the only means of ingress to a host
  • Using the Boundary Terraform provider to easily integrate Boundary with your existing code-based infrastructure (a minimal sketch follows this list)
  • Pointing a BI tool (PowerBI, Tableau, etc.) at Boundary's data warehouse to generate insights and look for anomalies with respect to session access
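
A minimal sketch of that Terraform integration; the provider and resource arguments are assumed from the provider's documentation, and all values are dev-mode placeholders:

provider "boundary" {
  addr                            = "http://127.0.0.1:9200"
  auth_method_id                  = "ampw_1234567890"
  password_auth_method_login_name = "admin"
  password_auth_method_password   = "password"
}

resource "boundary_target" "db" {
  name         = "example-db"
  type         = "tcp"
  scope_id     = "p_1234567890"
  default_port = 5432
}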

Please note: We take Boundary's security and our users' trust very seriously. If you believe you have found a security issue in Boundary, please responsibly disclose by contacting us at security@hashicorp.com.


Contributing

Thank you for your interest in contributing! Please refer to CONTRIBUTING.md for guidance.

Issues

Systemd All-in-One Installation Script fails

Describe the bug
When running the Systemd All-in-One Installation Script, I get an error: * error running migrations: migration failed: syntax error at or near "function" (column 13) in line 80 (full output under Additional context).

To Reproduce
Steps to reproduce the behavior:

  1. Run ./service.sh controller
  2. See error

Expected behavior
The boundary controller service starts.

Additional context

Error running database migrations: 1 error occurred:
        * error running migrations: migration failed: syntax error at or near "function" (column 13) in line 80:
begin;

  -- wh_rollup_connections calculates the aggregate values from
  -- wh_session_connection_accumulating_fact for p_session_id and updates
  -- wh_session_accumulating_fact for p_session_id with those values.
  create or replace function wh_rollup_connections(p_session_id wt_public_id)
    returns void
  as $$
  declare
    session_row wh_session_accumulating_fact%rowtype;
  begin
    with
    session_totals (session_id, total_connection_count, total_bytes_up, total_bytes_down) as (
      select session_id,
             sum(connection_count),
             sum(bytes_up),
             sum(bytes_down)
        from wh_session_connection_accumulating_fact
       where session_id = p_session_id
       group by session_id
    )
    update wh_session_accumulating_fact
       set total_connection_count = session_totals.total_connection_count,
           total_bytes_up         = session_totals.total_bytes_up,
           total_bytes_down       = session_totals.total_bytes_down
      from session_totals
     where wh_session_accumulating_fact.session_id = session_totals.session_id
    returning wh_session_accumulating_fact.* into strict session_row;
  end;
  $$ language plpgsql;

  --
  -- Session triggers
  --

  -- wh_insert_session returns an after insert trigger for the session table
  -- which inserts a row in wh_session_accumulating_fact for the new session.
  -- wh_insert_session also calls the wh_upsert_host and wh_upsert_user
  -- functions which can result in new rows in wh_host_dimension and
  -- wh_user_dimension respectively.
  create or replace function wh_insert_session()
    returns trigger
  as $$
  declare
    new_row wh_session_accumulating_fact%rowtype;
  begin
    with
    pending_timestamp (date_dim_id, time_dim_id, ts) as (
      select wh_date_id(start_time), wh_time_id(start_time), start_time
        from session_state
       where session_id = new.public_id
         and state = 'pending'
    )
    insert into wh_session_accumulating_fact (
           session_id,
           auth_token_id,
           host_id,
           user_id,
           session_pending_date_id,
           session_pending_time_id,
           session_pending_time
    )
    select new.public_id,
           new.auth_token_id,
           wh_upsert_host(new.host_id, new.host_set_id, new.target_id),
           wh_upsert_user(new.user_id, new.auth_token_id),
           pending_timestamp.date_dim_id,
           pending_timestamp.time_dim_id,
           pending_timestamp.ts
      from pending_timestamp
      returning * into strict new_row;
    return null;
  end;
  $$ language plpgsql;

  create trigger wh_insert_session
    after insert on session
    for each row
    execute function wh_insert_session();

  --
  -- Session Connection triggers
  --

  -- wh_insert_session_connection returns an after insert trigger for the
  -- session_connection table which inserts a row in
  -- wh_session_connection_accumulating_fact for the new session connection.
  -- wh_insert_session_connection also calls wh_rollup_connections which can
  -- result in updates to wh_session_accumulating_fact.
  create or replace function wh_insert_session_connection()
    returns trigger
  as $$
  declare
    new_row wh_session_connection_accumulating_fact%rowtype;
  begin
    with
    authorized_timestamp (date_dim_id, time_dim_id, ts) as (
      select wh_date_id(start_time), wh_time_id(start_time), start_time
        from session_connection_state
       where connection_id = new.public_id
         and state = 'authorized'
    ),
    session_dimension (host_dim_id, user_dim_id) as (
      select host_id, user_id
        from wh_session_accumulating_fact
       where session_id = new.session_id
    )
    insert into wh_session_connection_accumulating_fact (
           connection_id,
           session_id,
           host_id,
           user_id,
           connection_authorized_date_id,
           connection_authorized_time_id,
           connection_authorized_time,
           client_tcp_address,
           client_tcp_port_number,
           endpoint_tcp_address,
           endpoint_tcp_port_number,
           bytes_up,
           bytes_down
    )
    select new.public_id,
           new.session_id,
           session_dimension.host_dim_id,
           session_dimension.user_dim_id,
           authorized_timestamp.date_dim_id,
           authorized_timestamp.time_dim_id,
           authorized_timestamp.ts,
           new.client_tcp_address,
           new.client_tcp_port,
           new.endpoint_tcp_address,
           new.endpoint_tcp_port,
           new.bytes_up,
           new.bytes_down
      from authorized_timestamp,
           session_dimension
      returning * into strict new_row;
    perform wh_rollup_connections(new.session_id);
    return null;
  end;
  $$ language plpgsql;

  create trigger wh_insert_session_connection
    after insert on session_connection
    for each row
    execute function wh_insert_session_connection();

  -- wh_update_session_connection returns an after update trigger for the
  -- session_connection table which updates a row in
  -- wh_session_connection_accumulating_fact for the session connection.
  -- wh_update_session_connection also calls wh_rollup_connections which can
  -- result in updates to wh_session_accumulating_fact.
  create or replace function wh_update_session_connection()
    returns trigger
  as $$
  declare
    updated_row wh_session_connection_accumulating_fact%rowtype;
  begin
        update wh_session_connection_accumulating_fact
           set client_tcp_address       = new.client_tcp_address,
               client_tcp_port_number   = new.client_tcp_port,
               endpoint_tcp_address     = new.endpoint_tcp_address,
               endpoint_tcp_port_number = new.endpoint_tcp_port,
               bytes_up                 = new.bytes_up,
               bytes_down               = new.bytes_down
         where connection_id = new.public_id
     returning * into strict updated_row;
    perform wh_rollup_connections(new.session_id);
    return null;
  end;
  $$ language plpgsql;

  create trigger wh_update_session_connection
    after update on session_connection
    for each row
    execute function wh_update_session_connection();

  --
  -- Session State trigger
  --

  -- wh_insert_session_state returns an after insert trigger for the
  -- session_state table which updates wh_session_accumulating_fact.
  create or replace function wh_insert_session_state()
    returns trigger
  as $$
  declare
    date_col text;
    time_col text;
    ts_col text;
    q text;
    session_row wh_session_accumulating_fact%rowtype;
  begin
    if new.state = 'pending' then
      -- The pending state is the first state which is handled by the
      -- wh_insert_session trigger. The update statement in this trigger will
      -- fail for the pending state because the row for the session has not yet
      -- been inserted into the wh_session_accumulating_fact table.
      return null;
    end if;

    date_col = 'session_' || new.state || '_date_id';
    time_col = 'session_' || new.state || '_time_id';
    ts_col   = 'session_' || new.state || '_time';

    q = format('update wh_session_accumulating_fact
                   set (%I, %I, %I) = (select wh_date_id(%L), wh_time_id(%L), %L::timestamptz)
                 where session_id = %L
                returning *',
                date_col,       time_col,       ts_col,
                new.start_time, new.start_time, new.start_time,
                new.session_id);
    execute q into strict session_row;

    return null;
  end;
  $$ language plpgsql;

  create trigger wh_insert_session_state
    after insert on session_state
    for each row
    execute function wh_insert_session_state();

  --
  -- Session Connection State trigger
  --

  -- wh_insert_session_connection_state returns an after insert trigger for the
  -- session_connection_state table which updates
  -- wh_session_connection_accumulating_fact.
  create or replace function wh_insert_session_connection_state()
    returns trigger
  as $$
  declare
    date_col text;
    time_col text;
    ts_col text;
    q text;
    connection_row wh_session_connection_accumulating_fact%rowtype;
  begin
    if new.state = 'authorized' then
      -- The authorized state is the first state which is handled by the
      -- wh_insert_session_connection trigger. The update statement in this
      -- trigger will fail for the authorized state because the row for the
      -- session connection has not yet been inserted into the
      -- wh_session_connection_accumulating_fact table.
      return null;
    end if;

    date_col = 'connection_' || new.state || '_date_id';
    time_col = 'connection_' || new.state || '_time_id';
    ts_col   = 'connection_' || new.state || '_time';

    q = format('update wh_session_connection_accumulating_fact
                   set (%I, %I, %I) = (select wh_date_id(%L), wh_time_id(%L), %L::timestamptz)
                 where connection_id = %L
                returning *',
                date_col,       time_col,       ts_col,
                new.start_time, new.start_time, new.start_time,
                new.connection_id);
    execute q into strict connection_row;

    return null;
  end;
  $$ language plpgsql;

  create trigger wh_insert_session_connection_state
    after insert on session_connection_state
    for each row
    execute function wh_insert_session_connection_state();

commit;

 (details: pq: syntax error at or near "function")

minikube and "could not connect to docker: dial tcp [::1]:32770: connectex:"

When using minikube as the Docker host:

minikube start
@FOR /f "tokens=*" %i IN ('minikube -p minikube docker-env') DO @%i
boundary dev

it returns

Error creating dev database container: unable to start dev database with dialect postgres: could not connect to docker: dial tcp [::1]:32770: connectex: No connection could be made because the target machine actively refused it.

It looks like it is trying to connect to localhost instead of using the DOCKER_HOST details.

The container is running and mapped

CONTAINER ID        IMAGE                  COMMAND                  CREATED             STATUS              PORTS                NAMES
3675ac38c494        postgres:12            "docker-entrypoint.s…"   3 minutes ago       Up 3 minutes        0.0.0.0:32770->5432/tcp   reverent_mcclintock

I assume this would be the case for any remotely hosted docker services.

url = "postgres://postgres:password@localhost:%s?sslmode=disable"

url = "postgres://postgres:password@localhost:%s?sslmode=disable"

If I am reading this right, it is hard-coded to localhost, whilst the port is a placeholder for the Docker port mapped to 5432.

Would it be a good idea to map this to a -host-address variable, so that you can specify it as part of startup?

boundary dev -host-address=192.168.99.100

Or is there a more generic way to get the Docker IP Address from the container?

Another way to get it, so that you don't need to introspect the container, would be:

boundary dev -host-address="$(minikube ip)"

Boundary roles add-grants lowercases IDs in the grant statement

Describe the bug

boundary roles add-grants lowercases IDs supplied in the grant text, causing the grant not to match on mixed-case IDs.

To Reproduce

Steps to reproduce the behavior:

  1. Run boundary roles create -name=foo ...
  2. Run boundary roles add-grants -id=(id#-from-step-1) -grant='id=hcst_MixedCase;type=*;actions=*'
  3. Run boundary roles read -id=(id#-from-step-1)
  4. See that the grant now shows as shown below, and does not succeed in granting the rights
Canonical Grants:
    id=hcst_mixedcase;type=*;actions=*

Expected behavior

Mixed-case IDs should be maintained as-is in the grant

Additional context

Seen using both the boundary server and client v0.1.1 on the following platforms: (the only ones available to me)

  • the Ubuntu/Debian package in the Hashicorp apt repo
  • Homebrew

Nondescript error message on failed account creation

Describe the bug
When creating an account via the UI, it's not apparent that the username must be all lowercase. The bug is that the error returned via the UI doesn't describe this at all:

The adapter operation failed due to a server error

To Reproduce
Steps to reproduce the behavior:

  1. Connect to Boundary via the UI
  2. Auth Method -> userpass -> Create Account
  3. Capitalise the username field and observe the error

Expected behavior
If you run the same command via the CLI the returned error is much more descriptive.

Oct 16 03:15:40 ip-10-0-101-209 boundary[1295]: 2020-10-16T03:15:40.383Z [ERROR] controller: internal error returned: error id=WfbRQxeQdl error="unable to create user: create: password account: invalid login name; must be all-lowercase alphanumeric: invalid parameter"

This same error message should be surfaced in the UI.


SSH connection to target with port forwarding

Is your feature request related to a problem? Please describe.
I need to be able to port-forward from a remote target (a server with a database) to my local machine to access the DB's web UI.

Describe the solution you'd like
I would like boundary connect ssh to accept SSH CLI flags like -L and -R to establish port forwarding.

Describe alternatives you've considered
No

Explain any additional use-cases
No

Additional context
No

Unable to use _ or - in user account names

Is your feature request related to a problem? Please describe.
Unable to use _ or - in user account names.

Describe the solution you'd like
Some customers use _ or - in their Linux account names, but Boundary does not allow - or _ in account settings.

Describe alternatives you've considered
Account settings need to accept '_' and '-'.

Target-to-Worker connections

Is your feature request related to a problem? Please describe.
Reaching targets that are in environments where a worker can't be deployed, or the target refuses all inbound connections (even SSH) and uses a reverse tunnel to allow connection.

Describe the solution you'd like
Either the ability to connect to other client machines as a valid target or to configure target hosts to connect to the worker (instead of worker to target).

Explain any additional use-cases
A very specific example is a host managed by a vendor that is deployed to a customer network. In order to manage the device the vendor must be able to connect to the host, but customer networks can change and tend to block inbound connections, especially if the host is placed behind a DMZ.

[Feature Request] Command allow list/deny list policy.

Is your feature request related to a problem? Please describe.
We're looking for some kind of whitelist option to create a policy on allowed/disallowed commands in an SSH or terminal session, similar to sudoers-file granularity.

Describe the solution you'd like
A policy engine similar to Consul/Vault HCL policies that can be combined to give hierarchical access to run certain commands. It might be nice to add functionality similar to Vault's Control Groups to allow quorum override. Also it would be nice to allow but enable alerts on certain commands or watch filters. Regex options would be amazing.

Describe alternatives you've considered

  • sudoers file
  • SELinux/AppArmor or other MAC enforcement

Additional context
This came up in a partner conversation and I'm not sure if it's on roadmap already.

Unable to specify listen port with any IPv6 address

Describe the bug
Boundary throws an error for any listen directive when an IPv6 address and port are specified.
This happens with runtime options as well as with the config file.

Error initializing listener of type tcp: listen tcp: address :::8080: too many colons in address

To Reproduce
Steps to reproduce the behavior:

boundary dev -api-listen-address="[2001:db8::1:cee:c0de:b0b]:8080"

OR

listener "tcp" {
  address = "[2001:db8::1:cee:c0de:b0b]:8080"
  purpose = "api"
}

both result in:

Error initializing listener of type tcp: listen tcp: address 2001:db8::1:cee:c0de:b0b:8080: too many colons in address

Expected behavior
Boundary should open a TCP socket on "2001:db8::1:cee:c0de:b0b" with port 8080.

Worker is unable to maintain connection with Controller located in another network

Describe the bug

We have deployed an infrastructure based on boundary-reference-architecture, but with several modifications, including GCP infrastructure:

In AWS:

  • Two public subnets and one private subnet
  • One controller and one worker in a public subnet
  • One target in the private subnet
  • The LoadBalancer has one new target group pointing to 9201 port to controller

In GCP:

  • One public subnet and one private subnet in GCP
  • One worker in public subnet in GCP
  • One target in the private subnet

We have zero problems when dealing with the AWS infra, but the worker in GCP is unable to maintain a connection to the controller in AWS. The configuration of the GCP instances is replicated from the AWS EC2 instances.

In the journalctl output from the GCP worker instance (screenshots omitted), we can see that the GCP worker initially reaches the AWS controller through the Load Balancer, but once communication has been established it is lost, because the controller tells the worker to reach it at its private address (10.0.0.11 in this case). As the instances are in different networks, this can't be accomplished.

We have made it work by executing the following command on the GCP worker instance:

iptables -t nat -A OUTPUT -d 10.0.0.11 -j DNAT --to-destination 54.239.32.132

## 10.0.0.11 is the private address of the controller, the IP that controller sends to worker
## 54.239.32.132 was the public IP address of the load balancer

Doing this, boundary works as expected, allowing us to ssh to the GCP target instance using boundary connect ssh.

We suppose that this is caused by the controller advertising its private IP address. A possible solution might be separate configuration parameters: a private IP address on which to expose the Boundary service, and an external IP address to advertise to workers.
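
Boundary's controller configuration has a public_cluster_addr setting along these lines; a minimal sketch reusing the addresses from this report:

controller {
  # Address advertised to workers, e.g. the load balancer's public address,
  # kept separate from the listener's private 10.0.0.11 (assumed parameter)
  public_cluster_addr = "54.239.32.132:9201"
}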

To Reproduce
Run boundary with configuration stated in boundary-reference-architecture and add a worker in another network.

Expected behavior
Boundary Worker should be able to maintain the connection with the Controller.

Usability - Enable and disable dark mode

Is your feature request related to a problem? Please describe.
Hello Boundary team!! 👋
I installed Boundary and started the tutorials. The first thing that jumped out at me is that there may be usability issue. It appears that the UI changes based on OS (MacOS, Safari, Dark Mode). I can see this being a usability issue for someone with eyesight issues.

Describe the solution you'd like
I would like to see an option to have dark mode disabled by default and a setting in the profile to enable it.

Describe alternatives you've considered
A way to disable dark mode.

Additional context
https://www.w3.org/WAI/fundamentals/accessibility-usability-inclusion/

Thank you for making this amazing project!

404 attempting to authenticate

Describe the bug
Attempting to authenticate against a dev-mode server returns a 404 Resource not found error (see below).

To Reproduce
Steps to reproduce the behavior:
  1. Install the Ubuntu package for boundary and start the dev env.
  2. Run boundary authenticate password -login-name=admin -password password -auth-method-id=ampw_123456789
  3. See error

Expected behavior
I expect to login.

Additional context
Error as reported:

Error reading auth token from system credential store: The name org.freedesktop.secrets was not provided by any .service files
Error from controller when performing authentication:
Error information:
Code: NotFound
Message: Resource not found.
Status: 404

Migrations fail without PgCrypto and Postgres permissions but Boundary doesn't know

Describe the bug
If you are missing pgcrypto when you run the initial database migrations and don't have the permissions to install it, Boundary will report an error and fail to properly initialise the database, but will still record that migrations were successful, preventing use of Boundary.

To Reproduce
Steps to reproduce the behavior:

  1. Install postgresql12-server.
  2. Make sure your current user does not exist in Postgres. On CentOS 8, this is the default for the root user.
  3. Run boundary database init -config /etc/boundary.hcl
  4. Observe the error message:
Error running database migrations: 1 error occurred:
        * error running migrations: 2 errors occurred:
        * migration failed: permission denied to create extension "pgcrypto" in line 0:
begin;

  create extension if not exists "pgcrypto";

  create domain wh_inet_port as integer
  check(
    value > 0
    and
    value <= 65535
  );
  comment on domain wh_inet_port is
  'An ordinal number between 1 and 65535 representing a network port';

  create domain wh_bytes_transmitted as bigint
  check(
    value >= 0
  );
  comment on domain wh_bytes_transmitted is
  'A non-negative integer representing the number of bytes transmitted';

  -- wh_dim_id generates and returns a random ID which should be considered as
  -- universally unique.
  create or replace function wh_dim_id()
    returns text
  as $$
    select encode(digest(gen_random_bytes(16), 'sha256'), 'base64');
  $$ language sql;

  create domain wh_dim_id as text
  check(
    length(trim(value)) > 0
  );
  comment on domain wh_dim_id is
  'Random ID generated with pgcrypto';

  create domain wh_public_id as text
  check(
    value = 'None'
    or
    length(trim(value)) > 10
  );
  comment on domain wh_public_id is
  'Equivalent to wt_public_id but also allows the value to be ''None''';

  create domain wh_timestamp as timestamp with time zone not null;
  comment on domain wh_timestamp is
  'Timestamp used in warehouse tables';

  create domain wh_dim_text as text not null
  check(
    length(trim(value)) > 0
  );
  comment on domain wh_dim_text is
  'Text fields in dimension tables are always not null and always not empty strings';

  -- wh_date_id returns the wh_date_dimension id for ts.
  create or replace function wh_date_id(ts wh_timestamp)
    returns integer
  as $$
    select to_char(ts, 'YYYYMMDD')::integer;
  $$ language sql;

  -- wh_time_id returns the wh_time_of_day_dimension id for ts.
  create or replace function wh_time_id(ts wh_timestamp)
    returns integer
  as $$
    select to_char(ts, 'SSSS')::integer;
  $$ language sql;

  -- wh_date_id returns the wh_date_dimension id for current_timestamp.
  create or replace function wh_current_date_id()
    returns integer
  as $$
    select wh_date_id(current_timestamp);
  $$ language sql;

  -- wh_time_id returns the wh_time_of_day_dimension id for current_timestamp.
  create or replace function wh_current_time_id()
    returns integer
  as $$
    select wh_time_id(current_timestamp);
  $$ language sql;

commit;

 (details: pq: permission denied to create extension "pgcrypto")
        * pq: current transaction is aborted, commands ignored until end of transaction block in line 0: SELECT pg_advisory_unlock($1)
  5. Try to run the command again and be told Database already initialized.

Boundary is technically set up, but it isn't functional (the web UI returns a blank page), and it hasn't given the user credentials to log in with, rendering it useless.

Expected behavior
In my opinion, Boundary should abandon the transaction if pgcrypto can't be installed, and record as such, allowing for the user to install the extension and re-run migrations. The docs should probably also mention that manually installing pgcrypto may be required.
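
For reference, installing the extension manually with a sufficiently privileged Postgres user before re-running the migrations is plain PostgreSQL, not Boundary-specific; the database name and user below are placeholders:

psql -U postgres -d boundary -c 'create extension if not exists "pgcrypto";'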

Additional context
Testing was performed on a CentOS 8.2004 system with Postgres 12.5.

Wrong field updated when editing role grants

Describe the bug
When I edit the grants for a role, the wrong field is updated. In my example I have three grants; I edit the second one by typing in the field. After clicking save, the third field is updated, not the second one as expected.
After a page refresh the fields are shown as expected.

To Reproduce
Go to Role > Grants
With at least two grants defined, edit the first grant (e.g. add 'authenticate').
Click 'Save'. The second grant will be updated.

Expected behavior
The correct field should be updated when clicking save.

Additional context
This makes it very confusing for me to try and change the grants, as I repeatedly edit the wrong one.
(Screen recording: Boundary_Roles_20201029)

Boundary systemd file

Describe the bug
When taking the systemd unit file as is, it exits for me with status 213/SECUREBITS.
Adding the SecureBits section to the service fixes this issue.
SecureBits=keep-caps

To Reproduce
Steps to reproduce the behavior:

  1. Use the systemd unit file as is:
[Unit]
Description=boundary controller

[Service]
ExecStart=/usr/local/bin/boundary server -config /etc/boundary/boundary-controller.hcl
User=boundary
Group=boundary
LimitMEMLOCK=infinity
Capabilities=CAP_IPC_LOCK+ep
CapabilityBoundingSet=CAP_SYSLOG CAP_IPC_LOCK

[Install]
WantedBy=multi-user.target
  2. Reload and run the service
  3. Check status of the service or look at the log with journalctl -u boundary-controller

Expected behavior
Service starts up correctly

Additional context
Running on Ubuntu 16.04

(question): SOCKS5 proxy & DNS over HTTPS as L7 VPN alternative?

Is it possible that Boundary will someday work as a VPN alternative? It's possible to do L4 now, which is super good, but I am asking about L7 HTTP.

Zero-trust/BeyondCorp is really expensive these days, and for certain environments it's still taboo.

There are not many cloud-native VPN solutions (AFAIK only OpenVPN Cloud/AS, Tailscale, and some enterprise firewalls can handle SAML/OpenID integrations, ACLs, and audits, but most of these are reworked legacy apps; Tailscale is not, but it's quite expensive).

Related: https://github.com/inlets/inlets

SOCKS5 server:
https://github.com/shadowsocks/go-shadowsocks2
DNS with rfc8484 support:
https://github.com/coredns/coredns

Issue with secrets storage on Ubuntu

@jefferai, taking the conversation here, along with the Katacoda issue.
I am in the process of writing a blog on clean installation steps for Ubuntu/CentOS.
There are a few things that do not seem to work, and if I can get them resolved we can have a clean install guide that I believe can be helpful for others in the community as well.

I am planning to test on 3 environments:
a bare CentOS 7 VM, an Ubuntu 18.04 VM, and the Katacoda Ubuntu playground.

For Ubuntu I get the errors below:

boundary authenticate password -auth-method-id=ampw_1234567890 \
>     -login-name=admin -password=password
Error reading auth token from system credential store: exec: "dbus-launch": executable file not found in $PATH

Authentication information:
  Account ID:      apw_EJK7s6DcCQ
  Auth Method ID:  ampw_1234567890
  Expiration Time: Thu, 22 Oct 2020 10:17:30 BST
  Token:
  at_h1qSNMdB1c_s125wLmVW7LgzNKf1TFeNrFDRUZNcR2qeWkNyjZ7Qd6E2DWGsyPT9KmPKZkxBaQps7JKkbeeoJEwt7xXMyR6YKjEkqbrFWcsCdXm8rYgqjsFJwUYS2WBFbNh2
  User ID:         u_1234567890
Error saving auth token to system credential store: exec: "dbus-launch": executable file not found in $PATH

Installed dbus

apt install dbus-x11
boundary authenticate password -auth-method-id=ampw_1234567890     -login-name=admin -password=password
Error reading auth token from system credential store: The name org.freedesktop.secrets was not provided by any .service files

Authentication information:
  Account ID:      apw_EJK7s6DcCQ
  Auth Method ID:  ampw_1234567890
  Expiration Time: Thu, 22 Oct 2020 10:19:18 BST
  Token:
  at_VqoR4tBhxy_s18pG3buSH7fMcBVL1YP14NXipTgeg4Jm9aVd3WXL2WCpUvqs8xnCwnNC7D41zejxUfRYhJPGMvCcp1mbN1q9cPyM5AAJRvakFGQUNLC5ZjiyyoTuic4ZpoQUmu9XXtC2YY2u9BTdJDg8
  User ID:         u_1234567890
Error saving auth token to system credential store: The name org.freedesktop.secrets was not provided by any .service files

Next installed

sudo apt-get install -y gnome-keyring
boundary authenticate password -auth-method-id=ampw_1234567890     -login-name=admin -password=password
Error reading auth token from system credential store: failed to unlock correct collection '/org/freedesktop/secrets/aliases/default'

Authentication information:
  Account ID:      apw_EJK7s6DcCQ
  Auth Method ID:  ampw_1234567890
  Expiration Time: Thu, 22 Oct 2020 10:21:00 BST
  Token:
  at_7y54bPBIDx_s13PefhTbtKamPce8iph3t8HMK85nJWEBh9n2JXCk6oiWQ9K9qusfJDx6TEJPLoGo8GPqbawpLAxtMgk9aS5wyzr3S6qVgMG7S939K93pUXLffE7a5KXdwAc464p42rV2
  User ID:         u_1234567890
Error saving auth token to system credential store: failed to unlock correct collection '/org/freedesktop/secrets/aliases/default'

OK, on Katacoda, after the documentation steps:

apt install dbus-x11

boundary authenticate password -auth-method-id=ampw_1234567890 \
>     -login-name=admin -password=password
panic: runtime error: slice bounds out of range [237:151]

goroutine 1 [running]:
github.com/godbus/dbus.getSessionBusPlatformAddress(0x17d1fc7, 0x18, 0x0, 0x0)
        /root/go/pkg/mod/github.com/godbus/[email protected]+incompatible/conn_other.go:30 +0x295
github.com/godbus/dbus.getSessionBusAddress(0x0, 0x0, 0x0, 0x0)
        /root/go/pkg/mod/github.com/godbus/[email protected]+incompatible/conn.go:96 +0xf8
github.com/godbus/dbus.SessionBusPrivate(0x0, 0x40d900, 0xc0003f5c80)
        /root/go/pkg/mod/github.com/godbus/[email protected]+incompatible/conn.go:101 +0x25
github.com/godbus/dbus.SessionBus(0x0, 0x0, 0x0)
        /root/go/pkg/mod/github.com/godbus/[email protected]+incompatible/conn.go:73 +0xb5
github.com/zalando/go-keyring/secret_service.NewSecretService(0x7, 0x1b, 0x7)
        /root/go/pkg/mod/github.com/zalando/[email protected]/secret_service/secret_service.go:50 +0x26
github.com/zalando/go-keyring.secretServiceProvider.Get(0x17dc55a, 0x1d, 0x17b78d7, 0x7, 0x0, 0x0, 0x0, 0x0)
        /root/go/pkg/mod/github.com/zalando/[email protected]/keyring_linux.go:78 +0x59
github.com/zalando/go-keyring.Get(...)
        /root/go/pkg/mod/github.com/zalando/[email protected]/keyring.go:32
github.com/hashicorp/boundary/internal/cmd/base.(*Command).ReadTokenFromKeyring(0xc0003cb080, 0x17b78d7, 0x7, 0x7)
        /go/internal/cmd/base/base.go:232 +0x77
github.com/hashicorp/boundary/internal/cmd/base.(*Command).Client(0xc0003cb080, 0xc0005dfbe8, 0x2, 0x2, 0x0, 0x0, 0x44aad5)
        /go/internal/cmd/base/base.go:217 +0x389
github.com/hashicorp/boundary/internal/cmd/commands/authenticate.(*PasswordCommand).Run(0xc000501440, 0xc00003a210, 0x3, 0x3, 0xc0000aae40)
        /go/internal/cmd/commands/authenticate/password.go:116 +0x136
github.com/mitchellh/cli.(*CLI).Run(0xc000498640, 0xc000498640, 0xc0000abce0, 0xc0000aada0)
        /root/go/pkg/mod/github.com/mitchellh/[email protected]/cli.go:262 +0x1cf
github.com/hashicorp/boundary/internal/cmd.RunCustom(0xc00003a1f0, 0x5, 0x5, 0xc0005dfe60, 0xc00007c058)
        /go/internal/cmd/main.go:186 +0x846
github.com/hashicorp/boundary/internal/cmd.Run(...)
        /go/internal/cmd/main.go:92
main.main()
        /go/cmd/boundary/main.go:13 +0xda
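
A workaround consistent with the errors above, using the -keyring-type flag and BOUNDARY_TOKEN variable referenced elsewhere in these issues, is to bypass the system credential store entirely:

boundary authenticate password -auth-method-id=ampw_1234567890 \
    -login-name=admin -password=password -keyring-type=none
export BOUNDARY_TOKEN=<token printed by the command above>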

[Feature Request] PagerDuty integration

Is your feature request related to a problem? Please describe.
I just read the announcement post. This seems especially valuable for ephemeral, incident-triggered access for on-call engineers. A PagerDuty integration would make this quite nice.

Describe the solution you'd like
Ideally, first-class integration with PagerDuty.

Recording SSH sessions

Is your feature request related to a problem? Please describe.
For enterprises in healthcare and finance it's sometimes a requirement to have the SSH session recorded.
Also for ease of use, metadata is just not sufficient.

Describe the solution you'd like
Would Boundary support recording SSH sessions so that they can be shared?

Describe alternatives you've considered
The current alternative is Gravitational Teleport's session recording.

Explain any additional use-cases

  • Meeting compliance requirements
  • Knowledge-sharing with the team
  • Better visibility, as you don't have to sift through logs.

ssh_exchange_identification: Connection closed by remote host

Describe the bug
Following the tutorial on HashiCorp Learn to establish an SSH connection, Boundary refuses to establish a connection on macOS and Ubuntu 18.

To Reproduce
Steps to reproduce the behavior:

  1. Follow these steps https://learn.hashicorp.com/tutorials/boundary/getting-started-dev#authenticate-with-boundary
user@ubuntu:~$ boundary authenticate password -auth-method-id=ampw_1234567890 -login-name=admin
Password is not set as flag or in env, please enter it now (will be hidden): 
Error opening keyring: Specified keyring backend not available
Token must be provided via BOUNDARY_TOKEN env var or -token flag. Reading the token can also be disabled via -keyring-type=none.

Authentication information:
  Account ID:      apw_MxsVWAdqFV
  Auth Method ID:  ampw_1234567890
  Expiration Time: Sat, 07 Nov 2020 21:50:51 PST
  Token:
  at_Qt1d8azvau_s15nqPksvFhf9qBJa6m585ze8sZNyVadKrsgAXPAYQTkBMW2g4Ci6hqUhravqbfhqKP3GMy7YCHAde6atjDg1ut6Qjv5H5fXxENGh8v74DP42EuXMN6eaXJ6g4
  User ID:         u_1234567890
Error opening "pass" keyring: Specified keyring backend not available
The token printed above must be manually passed in via the BOUNDARY_TOKEN env var or -token flag. Storing the token can also be disabled via -keyring-type=none.
user@ubuntu:~$ export BOUNDARY_TOKEN=at_Qt1d8azvau_s15nqPksvFhf9qBJa6m585ze8sZNyVadKrsgAXPAYQTkBMW2g4Ci6hqUhravqbfhqKP3GMy7YCHAde6atjDg1ut6Qjv5H5fXxENGh8v74DP42EuXMN6eaXJ6g4
user@ubuntu:~$ boundary targets read -id ttcp_1234567890

Target information:
  Created Time:               Sat, 31 Oct 2020 22:47:05 PDT
  Description:                Provides an initial target in Boundary
  ID:                         ttcp_1234567890
  Name:                       Generated target
  Session Connection Limit:   1
  Session Max Seconds:        28800
  Type:                       tcp
  Updated Time:               Sat, 31 Oct 2020 22:47:05 PDT
  Version:                    1

  Scope:
    ID:                       p_1234567890
    Name:                     Generated project scope
    Parent Scope ID:          o_1234567890
    Type:                     project

  Host Sets:
    Host Catalog ID:          hcst_1234567890
    ID:                       hsst_1234567890

  Attributes:
    Default Port:             22
user@ubuntu:~$ boundary connect ssh -target-id ttcp_1234567890
ssh_exchange_identification: Connection closed by remote host
user@ubuntu:~$ boundary connect ssh -target-id ttcp_1234567890
ssh_exchange_identification: Connection closed by remote host
user@ubuntu:~$ ^C
user@ubuntu:~$ ^C
user@ubuntu:~$ sudo boundary connect ssh -target-id ttcp_1234567890
[sudo] password for user: 
Error opening keyring: Specified keyring backend not available
Token must be provided via BOUNDARY_TOKEN env var or -token flag. Reading the token can also be disabled via -keyring-type=none.
Error from controller when performing authorize-session against target: 
Error information:
  Code:                Unauthenticated
  Message:             Unauthenticated, or invalid token.
  Status:              401
user@ubuntu:~$ sudo boundary connect ssh -target-id ttcp_1234567890^C
user@ubuntu:~$ ^C
user@ubuntu:~$ boundary connect ssh -target-id ttcp_1234567890 -username user
ssh_exchange_identification: read: Connection reset by peer
user@ubuntu:~$ boundary connect ssh -target-id ttcp_1234567890 -username user
ssh_exchange_identification: Connection closed by remote host

Expected behavior
The tutorial needs more information on how to connect with SSH on macOS and Ubuntu 18 hosts.

Errors in systemd example docs

Describe the bug
So reading this one, I found lots of errors regarding the service file:
https://www.boundaryproject.io/docs/installing/systemd

There should not be a ${TYPE} as flag to boundary:
ExecStart=/usr/local/bin/${NAME} ${TYPE} -config /etc/${NAME}-${TYPE}.hcl

It should instead say:
ExecStart=/usr/local/bin/${NAME} server -config /etc/${NAME}-${TYPE}.hcl

In CentOS 8, this seems deprecated; you get lots of warnings if you don't comment out:
Capabilities=CAP_IPC_LOCK+ep

This doesn't work since there is no group created in the script:
sudo adduser --system --group boundary || true

So would work better with:

sudo groupadd boundary
sudo adduser --system --group boundary boundary || true

Using the RPM package, this is created flawlessly.

Usage output for plural objects (host-catalogs and host-sets) not correct, missing dash

Describe the bug
The help function isn't showing the correct syntax of the command itself

To Reproduce
Steps to reproduce the behavior:

  1. Run boundary host-catalogs list -help or boundary host-sets list -help
  2. Returns:
Usage: boundary host catalogs list [options] [args]

  List host catalogs within an enclosing scope or resource. Example:

    $ boundary host catalogs list

which is wrong (missing dash between host and catalogs in the usage example)

Expected behavior
Should output:

Usage: boundary host-catalogs list [options] [args]

  List host catalogs within an enclosing scope or resource. Example:

    $ boundary host-catalogs list

Additional context
Might be due to these plural settings:

https://github.com/hashicorp/boundary/blob/main/internal/cmd/commands/hostcatalogs/hostcatalog.go#L170

add Apache Guacamole support for RDP / VNC / SSH

Is your feature request related to a problem? Please describe.
There is SSH support already; adding RDP/VNC should not be too much work. This would allow Windows services to be accessed in the same way.

Describe the solution you'd like
Being able to define Apache Guacamole config, or a pointer to said config, where the passwords for RDP are managed via a Boundary request; i.e., a user could request access to an RDP session and get said access if they are authorized. Behind the scenes it would use HashiCorp Vault's Active Directory library support to roll the passwords for the requested session.

Describe alternatives you've considered
I have started coding this for HashiCorp Vault, but am going slowly due to other commitments. I previously developed this for ForgeRock Identity Manager, but since Vault already has this functionality and Boundary now exists, this makes an ideal place to move the functionality to.

Explain any additional use-cases
Apache Guacamole already has VNC/RDP/SSH session recording available, and this would fit in nicely for authentication.

Additional context
take a look at https://guacamole.apache.org/

Boundary for VFX HPC workflows: Will it work without users having emails in an FQDN? Is a VPN still possible?

I'm excited to see improved session authentication methods like what Boundary might offer. I am surprised how difficult it still is, in this era, to provide simple secure access to a private network for other open source projects. The only other close contender to a product like this that I've seen is Teleport, but I don't think it's able to solve my use case for open source IAC with users that don't have emails in an FQDN.

Is your feature request related to a problem? Please describe.
A problem I have encountered in my own open source IAC project for VFX workflows is that I want to provide any random group of artists the ability to collaborate on a project, and dissolve their infra when they are done. This has been a very difficult problem to map.

That might mean they don't even have an FQDN, and this is a pain point for me, because most authentication with MFA these days, and even SSH certificates, carries the presumption of an FQDN.

I'd like to see in a product like Boundary the ability to provide access and admin of private subnets without worrying about an FQDN, while still ensuring MFA is used. There might be 20 different artists with different emails (probably standard Gmail accounts), and I'd like them all to be able to collaborate with ease; hopefully Boundary could allow something like that.

Describe the solution you'd like
Boundary could be able to send an invite to any email address (including those not under a company FQDN), authenticate with MFA, and optionally enable VPN like solutions as well. In VFX, artists may have their own compute capability, but may share a cluster of cloud based spot instances, and usually seamless communication for those resources really only works with a VPN since nodes and schedulers (like AWS deadline) need to identify each other for a broad set of processes to function for rendering images. If a VPN isn't possible, then boundary would still be great for initial configuration of a VPN via Vault and using an SSH CA.

Describe alternatives you've considered
Teleport (but it doesn't have OIDC for OSS), everything the old school way with shell scripts, Vault.

Explain any additional use-cases
Any community based open source IAC project that needs to invite collaborators into a private network to produce content/output.

Backstory: if anyone else is interested in my SIGGRAPH presentation on what I'd conceptually like to apply Boundary to in my own project, you can check out my presentation here: https://www.youtube.com/watch?v=Ahw8pXu5RyY
At this point, it's a proof of concept for a single user, but the next phase of the project is to enable collaborators.

Docs clarification on how to run

Describe the bug
Using the example systemd service file doesn't successfully start boundary, instead printing usage information.

To Reproduce
Steps to reproduce the behavior:

  1. Manually run the steps from the Systemd All-In-One script
  2. Deploy the following systemd service file, pulled from (https://www.boundaryproject.io/docs/installing/systemd)
[Unit]
Description=boundary controller

[Service]
ExecStart=/usr/local/bin/boundary controller -config /etc/boundary-controller.hcl
User=boundary
Group=boundary
LimitMEMLOCK=infinity
Capabilities=CAP_IPC_LOCK+ep
CapabilityBoundingSet=CAP_SYSLOG CAP_IPC_LOCK

[Install]
WantedBy=multi-user.target
  3. Deploy the following config file to /etc/boundary-controller.hcl
listener "tcp" {
  purpose = "api"
  address = "127.0.0.1:9200"
  tls_disable = true
}

listener "tcp" {
  purpose = "cluster"
  address = "10.0.0.1:9200"
  tls_disable = true
}

kms "gcpckms" {
  purpose     = "root"
  credentials = "/usr/boundary/boundary-project-user-creds.json"
  project     = "<censored>"
  region      = "global"
  key_ring    = "boundary-test-1-jsw"
  crypto_key  = "boundary-test-jsw-1-key"
}
controller {
  name = "boundary-demo--controller"
  description = "An example controller"
  database {
    url = "postgresql://postgres:<censored>@<censored>5432/boundary-demo"
  }
}
# Disable memory lock: https://www.man7.org/linux/man-pages/man2/mlock.2.html
disable_mlock = true
  4. Run 'systemctl restart boundary-controller' and observe in the syslog that it prints usage information instead of starting successfully.

Expected behavior
Running systemctl restart boundary-controller results in boundary being started.

Additional context

Debian 10 running Google Cloud.

I'm quite sure the mistake is on my side; your testing would obviously have caught such a problem, but I'm afraid I just can't quite spot it. Indeed, there is no 'controller' command in the usage information. It's late in my day here, so I apologize if this is a rather obvious user error; I did double check my work before posting here.

Boundary looks awesome and I can't wait to sink my teeth into it, this potentially solves all of our VPN problems and I'm so excited to try it out, my sincere thanks to everyone who worked on this!

Kubernetes Target support

Is your feature request related to a problem? Please describe.

Boundary provides an easy-to-use, platform-agnostic way to access all of your hosts and services across clouds, Kubernetes clusters, and on-premises datacenters through a single workflow based on trusted identity. It lets you remove hard-coded credentials and firewall rules, and makes access control more dynamic.
~ https://www.hashicorp.com/blog/hashicorp-boundary

As per the above announcement, Kubernetes clusters as targets is either an existing or planned feature. Is there any guidance on how to use Boundary today for Kubernetes use-cases? If not, happy to contribute it as a target.

Describe the solution you'd like
I would like to log in to any Kubernetes cluster that exists in a given Boundary project using an expiring service account, for a limited amount of time, with specified RBAC permissions for my user. I expect a new Kubernetes context to be added either to export KUBECONFIG= or to my ~/.kube/config file.

Describe alternatives you've considered
Should I write my own target plugin?

Explain any additional use-cases
Apart from temporary access to a k8s cluster, it would be interesting to have the ability to request different RBAC permissions. For example, maybe I just want read-only access, and I can escalate my access on an as-needed basis.

Additional context
Existing solutions focused on Kubernetes:

Package management download steps broken for Ubuntu 20.10 users

Describe the bug

Ubuntu 20.10 (Groovy Gorilla) users will experience the following two issues when following the instructions within the getting started guide, as well as the downloads page:

  • A deprecation warning for the apt-key command:
    Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8)).

    • According to the man page: apt-key(8) will last be available in Debian 11 and Ubuntu 22.04.
  • An inability to access the apt repo:
    The repository 'https://apt.releases.hashicorp.com groovy Release' does not have a Release file.

To Reproduce

On an Ubuntu 20.10 host:

  1. curl -fsSL https://apt.releases.hashicorp.com/gpg | sudo apt-key add -
  2. sudo apt-add-repository "deb [arch=amd64] https://apt.releases.hashicorp.com $(lsb_release -cs) main"
  3. sudo apt-get update && sudo apt-get install boundary

Expected behavior

  • Adding the Hashicorp GPG key should not result in a deprecation warning, based on the provided instructions.
  • Adding the Hashicorp public apt repo and downloading Boundary should complete successfully, based on the provided instructions.

Additional context

Using the focal distribution (Ubuntu 20.04) appears to complete successfully as a temporary workaround:

sudo apt-add-repository "deb [arch=amd64] https://apt.releases.hashicorp.com focal main"

Boundary command parameter name in documentation is different than actual

In documentation:

https://www.boundaryproject.io/docs/installing/no-gen-resources

To initialize the Boundary database without generated resources:

boundary database init -skip-initial-login-role -config /etc/boundary.hcl
(-skip-initial-login-role should be -skip-initial-login-role-creation)

But boundary 0.1.2 doesn't have this parameter:

boundary database init -skip-initial-login-role -config /etc/boundary.hcl
flag provided but not defined: -skip-initial-login-role

The working command:

boundary database init -skip-initial-login-role-creation -config /etc/boundary/boundary.hcl

If we run boundary database init -h, we see that there is no -skip-initial-login-role parameter and that we should use -skip-initial-login-role-creation instead:

boundary database init  -h
Usage: boundary database init [options]

  Initialize Boundary's database:

    $ boundary database init -config=/etc/boundary/controller.hcl

  Unless told not to via flags, some initial resources will be created, in the following order and in the indicated scopes:

    Initial Login Role (global)
    Password-Type Auth Method (global)
    Org Scope (global)
      Project Scope (org)
        Static-Type Host Catalog (project)
          Static-Type Host Set
          Static-Type Host
        Target (project)

  If flags are used to skip any of these resources, any resources that would be created afterwards are also skipped.

  For a full list of examples, please see the documentation.

Connection Options:

  -addr=<string>
      Addr of the Boundary controller, as a complete URL (e.g.
      https://boundary.example.com:9200). This can also be specified via the
      BOUNDARY_ADDR environment variable.

  -ca-cert=<string>
      Path on the local disk to a single PEM-encoded CA certificate to
      verify the Controller or Worker's server's SSL certificate. This
      takes precedence over -ca-path. This can also be specified via the
      BOUNDARY_CACERT environment variable.

  -ca-path=<string>
      Path on the local disk to a directory of PEM-encoded CA certificates to
      verify the SSL certificate of the Controller. This can also be specified
      via the BOUNDARY_CAPATH environment variable.

  -client-cert=<string>
      Path on the local disk to a single PEM-encoded CA certificate to use
      for TLS authentication to the Boundary Controller. If this flag is
      specified, -client-key is also required. This can also be specified via
      the BOUNDARY_CLIENT_CERT environment variable.

  -client-key=<string>
      Path on the local disk to a single PEM-encoded private key matching the
      client certificate from -client-cert. This can also be specified via the
      BOUNDARY_CLIENT_KEY environment variable.

  -tls-insecure
      Disable verification of TLS certificates. Using this option is highly
      discouraged as it decreases the security of data transmissions to
      and from the Boundary server. The default is false. This can also be
      specified via the BOUNDARY_TLS_INSECURE environment variable.

  -tls-server-name=<string>
      Name to use as the SNI host when connecting to the Boundary server
      via TLS. This can also be specified via the BOUNDARY_TLS_SERVER_NAME
      environment variable.

Command Options:

  -config=<string>
      Path to the configuration file.

  -config-kms=<string>
      Path to a configuration file containing a "kms" block marked for
      "config" purpose, to perform decryption of the main configuration file.
      If not set, will look for such a block in the main configuration file,
      which has some drawbacks; see the help output for "boundary config
      encrypt -h" for details.

  -log-format=<string>
      Log format. Supported values are "standard" and "json".

  -log-level=<string>
      Log verbosity level. Supported values (in order of more detail to
      less) are "trace", "debug", "info", "warn", and "err". This can also be
      specified via the BOUNDARY_LOG_LEVEL environment variable.

Init Options:

  -migration-url=<string>
      If set, overrides a migration URL set in config, and specifies the
      URL used to connect to the database for initialization. This can allow
      different permissions for the user running initialization vs. normal
      operation. This can refer to a file on disk (file://) from which a URL
      will be read; an env var (env://) from which the URL will be read; or a
      direct database URL.

  -skip-auth-method-creation
      If not set, an auth method will not be created as part of
      initialization. If set, the recovery KMS will be needed to perform any
      actions. The default is false.

  -skip-host-resources-creation
      If not set, host resources (host catalog, host set, host) will not be
      created as part of initialization. The default is false.

  -skip-initial-login-role-creation
      If not set, a default role allowing necessary grants for logging in will
      not be created as part of initialization. If set, the recovery KMS will
      be needed to perform any actions. The default is false.

  -skip-scopes-creation
      If not set, scopes will not be created as part of initialization. The
      default is false.

  -skip-target-creation
      If not set, a target will not be created as part of initialization. The
      default is false.

boundary version
Version information:
  Git Revision:        d8020842ae8b6c742b94538baada313d7eb52809
  Version Number:      0.1.2

JSON output invalid due to errors on stdout

Describe the bug

JSON output is invalid due to errors on stdout.

To Reproduce

Steps to reproduce the behavior:

  1. Without being authenticated, run boundary auth-methods list -format=json 2> /dev/null
  2. First line of output is not valid JSON:

    No saved credential found, continuing without
    [{"id":"ampw_foobar","scope_id":"global","scope":{"id": ...

Expected behavior

Valid JSON that could be used by a program, like jq to get the auth method id:

    boundary auth-methods list -format=json |  jq -r '.[] | select(.type=="password").id'

Additional context

This is the required workaround, which is messy:

    boundary auth-methods list -format=json | grep -v 'No saved credential' | jq '.[]'
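
If the warning moved to stderr, ordinary stream separation would make the first command in this report work as written:

    boundary auth-methods list -format=json 2> /dev/null | jq -r '.[] | select(.type=="password").id'

Until then, filtering stdout with grep as above appears to be the only option.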

Boundary dev - connection closed by remote host when trying to ssh connect

Describe the bug
I was following the getting started tutorial for Boundary, and everything worked just fine until I had to run the following:
boundary connect ssh -target-id ttcp_1234567890
I get back:
kex_exchange_identification: Connection closed by remote host

and from the Boundary logs:

2020-10-25T16:21:15.501+0100 [INFO] controller.worker-handler: session activated: session_id=s_nIpfzJ89ao target_id=ttcp_1234567890 user_id=u_1234567890 host_set_id=hsst_1234567890 host_id=hst_1234567890
2020-10-25T16:21:15.513+0100 [INFO] controller.worker-handler: authorized connection: session_id=s_nIpfzJ89ao connection_id=sc_5XgYcfCbUx connections_left=0
2020-10-25T16:21:15.514+0100 [ERROR] worker: error dialing endpoint: error="dial tcp [::1]:22: connect: connection refused" endpoint=tcp://localhost:22
2020-10-25T16:21:15.541+0100 [INFO] controller.worker-handler: connection closed: connection_id=sc_5XgYcfCbUx

To Reproduce
Steps to reproduce the behavior:
Just follow the guide

Expected behavior
From the guide: When prompted, enter the password to proceed.
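
The worker log above shows the real failure: the dev-mode default target proxies to localhost:22, and nothing is accepting connections there (note the "dial tcp [::1]:22: connect: connection refused"). A quick way to confirm, assuming a systemd host with the OpenSSH service named ssh (names vary by distro):

    # is anything listening locally on port 22?
    ss -tln | grep ':22'
    # if not, start the OpenSSH server
    sudo systemctl start ssh

Once sshd answers on localhost:22, the boundary connect ssh command from the guide should get past the banner exchange.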

Port forwarding

Is your feature request related to a problem? Please describe.
I want to use software that does not support the boundary integration. To be more specific, I want to manage databases on MySQL server in a private network. I want to do it with the help of DataGrip.

Describe the solution you'd like
Let's allow binding to a TCP port on localhost:
boundary connect bind2localhost -local-port 3306 -target-id XXX
and then I can use mysql -h127.0.0.1 or DataGrip to manage my databases.

Describe alternatives you've considered
SSH port forwarding.

Explain any additional use-cases
It's a generic solution that does not depend on a specific protocol; I can establish a connection to any target that speaks TCP.

Additional context
None.
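
For what it's worth, boundary connect already acts as a local TCP proxy and, if I recall the flags correctly, lets you pin the listening port, which covers the DataGrip/MySQL case without a protocol-specific subcommand. A sketch, with a hypothetical target ID:

    boundary connect -target-id ttcp_XXXXXXXXXX -listen-port 3306
    mysql -h 127.0.0.1 -P 3306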

UI - Hosts added inside a host set are not actually added to the host set.

Describe the bug
When adding a host from the host set, the host is created, but no relation to the host set is created in the database.
The host set will always be empty if it's populated from the UI.

To Reproduce
Steps to reproduce the behavior:
In the UI (web interface on :9200):
1 - Go to host catalog
2 - Create a new host catalog (or use existing)
3 - Create a new host set
4 - Go to the host sets own host tab
5 - It’s empty. Create a host, which it says is successfully added.
6 - Now go back to your host set, go to the host tab again… and it's empty.
7 - Go up to your host catalog again, press hosts and you can see your host there.

The database table for members is empty:
select * from static_host_set_member;

The host table contains the newly created host:
select * from static_host where name = 'created_inside_newcat';
hst_k4MVhZujH5 | hcst_amh9fDqevR | created_inside_newcat | | 8.8.8.8 | 2020-10-15 21:24:49.939941+00 | 2020-10-15 21:24:49.939941+00 | 1

Expected behavior
Ability to manage hosts and host sets from the UI without using the CLI in between.

So the admin console example on https://www.boundaryproject.io/docs/common-workflows/manage-targets#define-a-host-set doesn't really work when adding hosts to a host set.
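
Until the UI association is fixed, the relation can be created from the CLI, which should populate static_host_set_member; boundary host-sets add-hosts is the relevant command. A sketch using the host ID from the report (the host set ID is a placeholder):

    boundary host-sets add-hosts -id hsst_<your_host_set_id> -host hst_k4MVhZujH5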

Ability to toggle dark mode

Is your feature request related to a problem? Please describe.
Dark mode is currently driven solely by the prefers-color-scheme CSS media query. Users are unable to indicate if they want to view the app in a mode different from their system preference.

Describe the solution you'd like
Add a toggle button to the current user profile "rose menu" (top right)

Describe alternatives you've considered

  • Toggle in the header next to user profile
  • Toggle in a user profile page (not sure there is one yet)

Explain any additional use-cases
None at this time

Additional context
None at this time

Postgres database password with symbols outside a-z, A-Z, 0-9 leads to an error and cannot be set via the HCL config file

The Postgres database password cannot contain any symbols other than letters and numbers; I wasn't able to escape special symbols. But in Postgres itself, I can set a password with special symbols. Most systems' passwords must consist of a lengthy set of characters, often including numbers, special characters, and a mix of upper and lower case.
Example: the password postgres#Boundary2020 is rejected.

With the password in the URL as-is:

Error querying database for init status:

parse "postgresql://postgres:postgres": invalid port ":postgres" after host

Another quoting attempt in the config file gives:

Error parsing config file: At 10:3: key 'postgres' expected start of object ('{') or assignment ('=')

And with the # escaped:

url = "postgresql://postgres:postgres\#Boundary2020@[host]:5432/postgres?sslmode=disable"
boundary database init -config /etc/boundary/boundary.hcl
Error parsing config file: At 9:43: illegal char escape
boundary version
Version information:
  Git Revision:        d8020842ae8b6c742b94538baada313d7eb52809
  Version Number:      0.1.2

Expected result: any password set by the user is accepted
Current result: the password is limited to a-z, A-Z, 0-9, which makes it less secure

References:

  1. https://cwe.mitre.org/data/definitions/521.html
  2. https://owasp.org/www-project-web-security-testing-guide/latest/4-Web_Application_Security_Testing/04-Authentication_Testing/07-Testing_for_Weak_Password_Policy
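
A common workaround for special characters in connection URLs is percent-encoding the password, which the Postgres URL parser accepts; whether Boundary's config layer passes it through intact is worth testing. A sketch (the # in postgres#Boundary2020 becomes %23; host shown as 127.0.0.1 for illustration):

    # compute the encoded form
    python3 -c 'import urllib.parse; print(urllib.parse.quote("postgres#Boundary2020", safe=""))'
    # -> postgres%23Boundary2020
    # then, in the HCL file:
    # url = "postgresql://postgres:postgres%23Boundary2020@127.0.0.1:5432/postgres?sslmode=disable"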

Azure AAD OIDC Authentication

Is your feature request related to a problem? Please describe.
This is pretty self-explanatory, but having access to this now would be valuable. I am coincidentally in the middle of setting up several Terraform plans to get a VPN gateway running in Azure so that the k8s cluster can be configured for private networking only.

Describe the solution you'd like
Connect to Azure AD as a gallery application, allow users in specific AD groups to log in, and then assign scopes to these users mapped to their groups.

Describe alternatives you've considered
Existing username and password auth in Boundary; outside of that, Azure VPN supports this on Windows only, unfortunately.

Boundary tries to connect to 127.0.0.1:9201 even when -cluster-listen-address is set

Describe the bug
When trying to run boundary in dev mode, with a different cluster-listen-address, boundary tries to reach 127.0.0.1:9201 instead of the provided cluster-listen-address.

To Reproduce
Steps to reproduce the behavior:

  1. Run boundary dev -cluster-listen-address=10.69.0.24
  2. Boundary starts, but shows the following error:
2020-10-18T18:35:48.201Z [ERROR] worker: error making status request to controller: error="rpc error: code = Unavailable desc = last connection error: connection error: desc = "transport: Error while dialing unable to dial to controller: dial tcp 127.0.0.1:9201: connect: connection refused""

Expected behavior
Boundary should try to connect to the provided cluster-listen-address.

I realize there's not much reason to change the cluster-listen-address in dev mode, but when trying the options, it was confusing to see it fail.

Database init error when locale is not english

Hi,

When testing Boundary, I faced an issue during database init.

After digging for a while, the init command told me that the database was already initialized, so I started looking at the code and found this:

case strings.Contains(err.Error(), "does not exist"):

My PostgreSQL installation was not set up with an English locale (a French one), so the error message doesn't contain "does not exist" but rather the French equivalent, "n'existe pas".

Maybe a good way to handle this would be to match on an error code, which doesn't depend on the locale?
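
Until the matching is code-based, one server-side workaround is to force untranslated messages, since Postgres localizes error text according to its lc_messages setting. A sketch, assuming superuser access to the instance:

    psql -U postgres -c "ALTER SYSTEM SET lc_messages = 'C';"
    psql -U postgres -c "SELECT pg_reload_conf();"

After that, server errors such as "does not exist" come back in English regardless of the installation locale.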

If not all options are set in the configuration file, we get a panic

If not all parameters are set in the Boundary configuration file, we get a runtime panic (invalid memory address or nil pointer dereference) instead of an error message:

sudo boundary database init -config /etc/boundary/boundary.hcl
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x20 pc=0x126af3f]
goroutine 1 [running]:
github.com/hashicorp/boundary/internal/cmd/commands/database.(*InitCommand).Run(0xc0005401e0, 0xc00010c030, 0x2, 0x2, 0x0)
	/go/internal/cmd/commands/database/init.go:208 +0x2ff
github.com/mitchellh/cli.(*CLI).Run(0xc0003683c0, 0xc0003683c0, 0xc00052aa00, 0xc000074260)
	/root/go/pkg/mod/github.com/mitchellh/[email protected]/cli.go:262 +0x1cf
github.com/hashicorp/boundary/internal/cmd.RunCustom(0xc00010c010, 0x4, 0x4, 0xc000841e60, 0xc000080058)
	/go/internal/cmd/main.go:186 +0x846
github.com/hashicorp/boundary/internal/cmd.Run(...)
	/go/internal/cmd/main.go:92
main.main()
	/go/cmd/boundary/main.go:13 +0xda
boundary version
Version information:
  Git Revision:        d8020842ae8b6c742b94538baada313d7eb52809
  Version Number:      0.1.2

Expected result: an error message with a short clarification
Current result: a Go traceback

Could you please add a "check config" option to the boundary command? It would also be helpful to add a sample Boundary config file to the documentation.

Boundary worker tries to connect to controller bind address instead of configured cluster public address

Describe the bug
When a worker connects to the Boundary controller on a different IP than the one the controller binds to, the worker connection is redirected to the bind address (e.g. 0.0.0.0) instead of continuing on the controller's external IP.

To Reproduce

  1. Set up a controller host (mine is called "boundary-controller"), run a Vault server and create transit keys for Boundary in Vault (I have Vault running on the Boundary controller host, with Transit engine mounted at /boundary)
  2. Set up Postgres on boundary-controller and run boundary init to populate the database (I have Postgres running on the Boundary controller host, and the database is named boundary)
  3. Run the Boundary server on boundary-controller with the controller config below (under "Additional context"). Output looks normal:
# docker run --name boundary-controller -v /root/boundary/boundary-controller.hcl:/etc/boundary/boundary-controller.hcl -p 9200:9200 -p 9201:9201 -p 9202:9202 --ulimit memlock=-1:-1 hashicorp/boundary:0.1.2 server -config /etc/boundary/boundary-controller.hcl
==> Boundary server configuration:
      Transit Address: http://[controller host IP]:8200
     Transit Key Name: worker-auth
   Transit Mount Path: boundary
                  Cgo: disabled
           Listener 1: tcp (addr: "0.0.0.0:9200", max_request_duration: "1m30s", purpose: "api")
           Listener 2: tcp (addr: "0.0.0.0:9201", max_request_duration: "1m30s", purpose: "cluster")
            Log Level: info
                Mlock: supported: true, enabled: true
  Public Cluster Addr: 0.0.0.0:9201
              Version: Boundary v0.1.2
          Version Sha: d8020842ae8b6c742b94538baada313d7eb52809
==> Boundary server started! Log data will stream in below:
  4. Run the Boundary worker on another host (mine is called "boundary-worker") with the worker config below. Output on controller and worker shows the initial connection is good, but then the worker starts trying to connect to the controller on 0.0.0.0:

boundary-controller output:

2020-12-02T19:56:16.034Z [INFO]  controller: worker successfully authed: name=boundary-demo-worker

boundary-worker output:

# docker run --name boundary-worker -v /root/boundary/boundary-worker.hcl:/etc/boundary/boundary-worker.hcl -p 9200:9200 -p 9201:9201 -p 9202:9202 --ulimit memlock=-1:-1 hashicorp/boundary:0.1.2 server -config /etc/boundary/boundary-worker.hcl
==> Boundary server configuration:
     Transit Address: http://boundary-controller:8200
    Transit Key Name: worker-auth
  Transit Mount Path: boundary
                 Cgo: disabled
          Listener 1: tcp (addr: "0.0.0.0:9202", max_request_duration: "1m30s", purpose: "proxy")
           Log Level: info
               Mlock: supported: true, enabled:true
         Public Addr: [worker host IP]:9202
             Version: Boundary v0.1.2
         Version Sha: d8020842ae8b6c742b94538baada313d7eb52809
==> Boundary server started! Log data will stream in below:
2020-12-02T19:56:16.021Z [INFO]  worker: connected to controller: address=boundary-controller:9201
2020-12-02T19:56:18.112Z [ERROR] worker: error making status request to controller: error="rpc error: code = Unavailable desc = last connection error: connection error: desc = "transport: Error while dialing unable to dial to controller: dial tcp 0.0.0.0:9201: connect: connection refused""
2020-12-02T19:56:20.299Z [ERROR] worker: error making status request to controller: error="rpc error: code = Unavailable desc = last connection error: connection error: desc = "transport: Error while dialing unable to dial to controller: dial tcp 0.0.0.0:9201: connect: connection refused""
2020-12-02T19:56:21.995Z [ERROR] worker: error making status request to controller: error="rpc error: code = Unavailable desc = last connection error: connection error: desc = "transport: Error while dialing unable to dial to controller: dial tcp 0.0.0.0:9201: connect: connection refused""

This looks similar to #758, but in this case it's just connecting between two Docker hosts on the same private network instead of on completely different networks.

Expected behavior
Boundary worker connection to the controller continues on the controller host IP.

Additional context
/root/boundary/boundary-controller.hcl:

listener "tcp" {
  purpose = "api"
  address = "0.0.0.0"
  public_address = "[controller host IP]"
  tls_disable = "true"
}

listener "tcp" {
  purpose = "cluster"
  address = "0.0.0.0"
  public_address = "[controller host IP]"
  tls_disable = "true"
}

controller {
  name = "demo-controller"
  description = "Demo Boundary controller"
  public_cluster_address = "[controller host IP]"
  database {
    url = "postgres://postgres:[redacted]@[controller host IP]:5432/boundary?sslmode=disable"
  }
}

kms "transit" {
  purpose = "root"
  address = "http://[controller host IP]:8200"
  token = "[redacted]"
  disable_renewal = "true"
  key_name = "root"
  mount_path = "boundary"
}

kms "transit" {
  purpose = "recovery"
  address = "http://[controller host IP]:8200"
  token = "[redacted]"
  disable_renewal = "true"
  key_name = "recovery"
  mount_path = "boundary"
}

kms "transit" {
  purpose = "worker-auth"
  address = "http://[controller host IP]:8200"
  token = "[redacted]"
  disable_renewal = "true"
  key_name = "worker-auth"
  mount_path = "boundary"
}

/root/boundary/boundary-worker.hcl:

listener "tcp" {
  purpose = "proxy"
  address = "0.0.0.0"
  public_addr = "[worker host IP]"
  tls_disable = "true"
}

worker {
  name = "boundary-demo-worker"
  description = "Demo worker instance"
  address = "0.0.0.0"
  public_addr = "[worker host IP]"
  controllers = [
    "boundary-controller"
  ]
}

kms "transit" {
  purpose = "worker-auth"
  address = "http://boundary-controller:8200"
  token = "[redacted]"
  disable_renewal = "true"
  key_name = "worker-auth"
  mount_path = "boundary"
}
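
One possible explanation for the "Public Cluster Addr: 0.0.0.0:9201" line above: current Boundary documentation spells the controller option public_cluster_addr, while the config uses public_cluster_address. If the longer key is simply not recognized, the controller would fall back to advertising its cluster bind address, which is exactly what the worker then dials. A sketch of the stanza with the documented spelling (a guess worth testing, not a confirmed fix):

controller {
  name = "demo-controller"
  description = "Demo Boundary controller"
  # documented key is public_cluster_addr, not public_cluster_address
  public_cluster_addr = "[controller host IP]"
  database {
    url = "postgres://postgres:[redacted]@[controller host IP]:5432/boundary?sslmode=disable"
  }
}

The same may apply to the listener blocks' public_address keys; the worker stanza already uses the shorter public_addr form.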

I would like to offer my services to help with the documentation of your project

Is your feature request related to a problem? Please describe.
The documentation flows poorly. It is difficult to consume and does not address many of the items one might need to instantiate a running service.

Describe the solution you'd like
I'd like someone to run through the documentation as though they have no idea how to use the product and see how they make out. I'm perplexed that so much work has gone into tidying up a dev release and so little thought into those of us who need something more robust. I've been doing this long enough that documentation is usually there to fill in some blanks as needed; in this instance, I'm at a total loss. I simply don't understand how to set up a basic service, and the documentation does not support my requirements.

Describe alternatives you've considered
I've tried "winging it"


Ability to use a Unix Domain Socket with listeners in addition to localhost

Is your feature request related to a problem? Please describe.
Many use cases will have some sort of proxy in front of Boundary, only requiring direct connections to Boundary from processes on the same host / the same namespace.
While opening a TCP socket on localhost is an option, it carries unnecessary overhead for this use case.
A Unix Domain Socket would be much simpler and more efficient for inter-process communication.

Describe the solution you'd like
Either a separate runtime option for Unix domain sockets, e.g. -api-listen-socket="/var/run/boundary/api.sock",
or the option to specify a socket location through the existing options, e.g. -api-listen-address="unix:///var/run/boundary/api.sock".

Configuring a UDS listener could look like this:

listener "unix" {
  purpose = "api"
  socket = "/var/run/boundary/api.sock"
}
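
If such a listener existed, a local client could exercise the API without any TCP socket at all; curl already supports this pattern. A sketch, assuming Boundary served its API on the socket path above (the request would still need a normal auth token):

    curl --unix-socket /var/run/boundary/api.sock http://boundary/v1/scopes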

Postgresql syntax error when trying to init database

Describe the bug
When I try to init the database using the CLI, I receive a syntax error. I would expect that any DB scripts should run in the backend successfully. This happens before initial scopes and auth methods are configured, meaning I can't authenticate to the service at all, leaving me unable to use it outside of dev mode. I could manually insert a global scope, maybe, but that shouldn't be necessary. It looks like tables are created successfully, but then fail to populate with initial data.

To Reproduce

  1. Create a database named boundary in psql
  2. Configure /etc/boundary-controller.hcl
  3. Run boundary database init -skip-target-creation -skip-host-resources-creation -config /etc/boundary-controller.hcl

Expected behavior
The database is initialized.

Additional context

The full error message:

ubuntu@ip-10-0-20-207:~$ sudo boundary database init -skip-target-creation -skip-host-resources-creation -config /etc/boundary-controller.hcl
Error running database migrations: 1 error occurred:
	* error running migrations: migration failed: syntax error at or near "function" (column 13) in line 80:
begin;

  -- wh_rollup_connections calculates the aggregate values from
  -- wh_session_connection_accumulating_fact for p_session_id and updates
  -- wh_session_accumulating_fact for p_session_id with those values.
  create or replace function wh_rollup_connections(p_session_id wt_public_id)
    returns void
  as $$
  declare
    session_row wh_session_accumulating_fact%rowtype;
  begin
    with
    session_totals (session_id, total_connection_count, total_bytes_up, total_bytes_down) as (
      select session_id,
             sum(connection_count),
             sum(bytes_up),
             sum(bytes_down)
        from wh_session_connection_accumulating_fact
       where session_id = p_session_id
       group by session_id
    )
    update wh_session_accumulating_fact
       set total_connection_count = session_totals.total_connection_count,
           total_bytes_up         = session_totals.total_bytes_up,
           total_bytes_down       = session_totals.total_bytes_down
      from session_totals
     where wh_session_accumulating_fact.session_id = session_totals.session_id
    returning wh_session_accumulating_fact.* into strict session_row;
  end;
  $$ language plpgsql;

  --
  -- Session triggers
  --

  -- wh_insert_session returns an after insert trigger for the session table
  -- which inserts a row in wh_session_accumulating_fact for the new session.
  -- wh_insert_session also calls the wh_upsert_host and wh_upsert_user
  -- functions which can result in new rows in wh_host_dimension and
  -- wh_user_dimension respectively.
  create or replace function wh_insert_session()
    returns trigger
  as $$
  declare
    new_row wh_session_accumulating_fact%rowtype;
  begin
    with
    pending_timestamp (date_dim_id, time_dim_id, ts) as (
      select wh_date_id(start_time), wh_time_id(start_time), start_time
        from session_state
       where session_id = new.public_id
         and state = 'pending'
    )
    insert into wh_session_accumulating_fact (
           session_id,
           auth_token_id,
           host_id,
           user_id,
           session_pending_date_id,
           session_pending_time_id,
           session_pending_time
    )
    select new.public_id,
           new.auth_token_id,
           wh_upsert_host(new.host_id, new.host_set_id, new.target_id),
           wh_upsert_user(new.user_id, new.auth_token_id),
           pending_timestamp.date_dim_id,
           pending_timestamp.time_dim_id,
           pending_timestamp.ts
      from pending_timestamp
      returning * into strict new_row;
    return null;
  end;
  $$ language plpgsql;

  create trigger wh_insert_session
    after insert on session
    for each row
    execute function wh_insert_session();

  --
  -- Session Connection triggers
  --

  -- wh_insert_session_connection returns an after insert trigger for the
  -- session_connection table which inserts a row in
  -- wh_session_connection_accumulating_fact for the new session connection.
  -- wh_insert_session_connection also calls wh_rollup_connections which can
  -- result in updates to wh_session_accumulating_fact.
  create or replace function wh_insert_session_connection()
    returns trigger
  as $$
  declare
    new_row wh_session_connection_accumulating_fact%rowtype;
  begin
    with
    authorized_timestamp (date_dim_id, time_dim_id, ts) as (
      select wh_date_id(start_time), wh_time_id(start_time), start_time
        from session_connection_state
       where connection_id = new.public_id
         and state = 'authorized'
    ),
    session_dimension (host_dim_id, user_dim_id) as (
      select host_id, user_id
        from wh_session_accumulating_fact
       where session_id = new.session_id
    )
    insert into wh_session_connection_accumulating_fact (
           connection_id,
           session_id,
           host_id,
           user_id,
           connection_authorized_date_id,
           connection_authorized_time_id,
           connection_authorized_time,
           client_tcp_address,
           client_tcp_port_number,
           endpoint_tcp_address,
           endpoint_tcp_port_number,
           bytes_up,
           bytes_down
    )
    select new.public_id,
           new.session_id,
           session_dimension.host_dim_id,
           session_dimension.user_dim_id,
           authorized_timestamp.date_dim_id,
           authorized_timestamp.time_dim_id,
           authorized_timestamp.ts,
           new.client_tcp_address,
           new.client_tcp_port,
           new.endpoint_tcp_address,
           new.endpoint_tcp_port,
           new.bytes_up,
           new.bytes_down
      from authorized_timestamp,
           session_dimension
      returning * into strict new_row;
    perform wh_rollup_connections(new.session_id);
    return null;
  end;
  $$ language plpgsql;

  create trigger wh_insert_session_connection
    after insert on session_connection
    for each row
    execute function wh_insert_session_connection();

  -- wh_update_session_connection returns an after update trigger for the
  -- session_connection table which updates a row in
  -- wh_session_connection_accumulating_fact for the session connection.
  -- wh_update_session_connection also calls wh_rollup_connections which can
  -- result in updates to wh_session_accumulating_fact.
  create or replace function wh_update_session_connection()
    returns trigger
  as $$
  declare
    updated_row wh_session_connection_accumulating_fact%rowtype;
  begin
        update wh_session_connection_accumulating_fact
           set client_tcp_address       = new.client_tcp_address,
               client_tcp_port_number   = new.client_tcp_port,
               endpoint_tcp_address     = new.endpoint_tcp_address,
               endpoint_tcp_port_number = new.endpoint_tcp_port,
               bytes_up                 = new.bytes_up,
               bytes_down               = new.bytes_down
         where connection_id = new.public_id
     returning * into strict updated_row;
    perform wh_rollup_connections(new.session_id);
    return null;
  end;
  $$ language plpgsql;

  create trigger wh_update_session_connection
    after update on session_connection
    for each row
    execute function wh_update_session_connection();

  --
  -- Session State trigger
  --

  -- wh_insert_session_state returns an after insert trigger for the
  -- session_state table which updates wh_session_accumulating_fact.
  create or replace function wh_insert_session_state()
    returns trigger
  as $$
  declare
    date_col text;
    time_col text;
    ts_col text;
    q text;
    session_row wh_session_accumulating_fact%rowtype;
  begin
    if new.state = 'pending' then
      -- The pending state is the first state which is handled by the
      -- wh_insert_session trigger. The update statement in this trigger will
      -- fail for the pending state because the row for the session has not yet
      -- been inserted into the wh_session_accumulating_fact table.
      return null;
    end if;

    date_col = 'session_' || new.state || '_date_id';
    time_col = 'session_' || new.state || '_time_id';
    ts_col   = 'session_' || new.state || '_time';

    q = format('update wh_session_accumulating_fact
                   set (%I, %I, %I) = (select wh_date_id(%L), wh_time_id(%L), %L::timestamptz)
                 where session_id = %L
                returning *',
                date_col,       time_col,       ts_col,
                new.start_time, new.start_time, new.start_time,
                new.session_id);
    execute q into strict session_row;

    return null;
  end;
  $$ language plpgsql;

  create trigger wh_insert_session_state
    after insert on session_state
    for each row
    execute function wh_insert_session_state();

  --
  -- Session Connection State trigger
  --

  -- wh_insert_session_connection_state returns an after insert trigger for the
  -- session_connection_state table which updates
  -- wh_session_connection_accumulating_fact.
  create or replace function wh_insert_session_connection_state()
    returns trigger
  as $$
  declare
    date_col text;
    time_col text;
    ts_col text;
    q text;
    connection_row wh_session_connection_accumulating_fact%rowtype;
  begin
    if new.state = 'authorized' then
      -- The authorized state is the first state which is handled by the
      -- wh_insert_session_connection trigger. The update statement in this
      -- trigger will fail for the authorized state because the row for the
      -- session connection has not yet been inserted into the
      -- wh_session_connection_accumulating_fact table.
      return null;
    end if;

    date_col = 'connection_' || new.state || '_date_id';
    time_col = 'connection_' || new.state || '_time_id';
    ts_col   = 'connection_' || new.state || '_time';

    q = format('update wh_session_connection_accumulating_fact
                   set (%I, %I, %I) = (select wh_date_id(%L), wh_time_id(%L), %L::timestamptz)
                 where connection_id = %L
                returning *',
                date_col,       time_col,       ts_col,
                new.start_time, new.start_time, new.start_time,
                new.connection_id);
    execute q into strict connection_row;

    return null;
  end;
  $$ language plpgsql;

  create trigger wh_insert_session_connection_state
    after insert on session_connection_state
    for each row
    execute function wh_insert_session_connection_state();

commit;

 (details: pq: syntax error at or near "function")
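
For anyone else hitting this: create trigger ... execute function is PostgreSQL 11+ syntax; Postgres 10 and older only accept execute procedure, which would produce exactly this syntax error at the word "function". As the system info below shows, this host runs Ubuntu 18.04, whose default postgresql package is version 10. A quick check, assuming psql can reach the database with the credentials from the config:

    psql "postgresql://boundary:<password>@localhost:5432/boundary" -c 'show server_version;'

If that prints 10.x or older, pointing Boundary at a newer Postgres server (11 at minimum for this syntax) should get the migrations past this point.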

system info

ubuntu@ip-10-0-20-207:~$ boundary -version

Version information:
  Git Revision:        eccd68d73c3edf14863ecfd31f9023063b809d5a
  Version Number:      0.1.1

ubuntu@ip-10-0-20-207:~$ lsb_release -a
No LSB modules are available.
Distributor ID:	Ubuntu
Description:	Ubuntu 18.04.1 LTS
Release:	18.04
Codename:	bionic

boundary-controller.hcl

controller {
  name = "dev-controller"
  description = "Dev"
  database {
    url = "postgresql://boundary:b0und@ry!@localhost:5432/boundary"
  }
}

listener "tcp" {
  # Should be the address of the NIC that the controller server will be reached on
  address = "10.0.20.207"
  # The purpose of this listener block
  purpose = "api"

  tls_disable = true

}

# Data-plane listener configuration block (used for worker coordination)
listener "tcp" {
  # Should be the IP of the NIC that the worker will connect on
  address = "10.0.20.207"
  # The purpose of this listener
  purpose = "cluster"

  tls_disable = true
}

# Root KMS configuration block: this is the root key for Boundary
# Use a production KMS such as AWS KMS in production installs
kms "aead" {
  purpose = "root"
  aead_type = "aes-gcm"
  key = "<REDACTED>"
  key_id = "global_root"
}

# Worker authorization KMS
# Use a production KMS such as AWS KMS for production installs
# This key is the same key used in the worker configuration
kms "aead" {
  purpose = "worker-auth"
  aead_type = "aes-gcm"
  key = "<REDACTED>"
  key_id = "global_worker-auth"
}

# Recovery KMS block: configures the recovery key for Boundary
# Use a production KMS such as AWS KMS for production installs
kms "aead" {
  purpose = "recovery"
  aead_type = "aes-gcm"
  key = "<REDACTED>"
  key_id = "global_recovery"
}

Documented boundary proxy command doesn't exist

Describe the bug

Boundary proxy command documented at https://www.boundaryproject.io/docs/common-workflows/manage-sessions#advanced-session-establishment doesn't exist

To Reproduce

boundary proxy -authz foo

Gives error:

Usage: boundary <command> [args]

...

The command appears to actually be boundary connect without either a subcommand or -exec:

boundary connect  -target-scope-id=p_Foo -target-name=foo

Proxy listening information:
  Address:             127.0.0.1
  Connection Limit:    1
...

Expected behavior

Not having a subcommand is inconsistent with the help text and general command usage.

It would be more consistent to make proxy a subcommand:

boundary connect proxy -target-id ttcp_foo

Additional context

none

500!

Half a thousand!

Exposing errors in the message field would greatly improve debugging

Is your feature request related to a problem? Please describe.

Errors regarding scope or duplicate assignment give 500 errors with no detail:

$ boundary users add-accounts -id=u_BarFoo1234 -account=apw_FooBar1234
Error from controller when performing add-accounts on user:
Error information:
  Code:                Internal
  Error ID:            1B7vyMQ0bD
  Message:
  Status:              500

Describe the solution you'd like

The log information available via journalctl is very informative. If this could be returned in the Message field it would greatly assist debugging.

Nov 18 23:44:29 boundary-worker-01 boundary[41692]: 2020-11-18T23:44:29.946Z [ERROR] controller: internal error returned: error id=zJClPrNyng error="Status: 500, Code: "Internal", Error: "Unable to set accounts for the user: set associated accounts: set associated accounts: unable to associate ids: associate user with accounts: apw_FooBar1234 account is associated with a user u_BarFoo1234: invalid parameter.""
Nov 18 23:46:25 boundary-worker-01 boundary[41692]: 2020-11-18T23:46:25.956Z [ERROR] controller: internal error returned: error id=1B7vyMQ0bD error="Status: 500, Code: "Internal", Error: "Unable to add accounts to user: associate accounts: error associating account ids: associate accounts: associate user with accouts: failed to associate apw_FooBar1234 account: update: failed: pq: new row for relation \"auth_account\" violates check constraint \"user_and_auth_account_in_same_scope\".""

Describe alternatives you've considered

none

Explain any additional use-cases

Helps return useful responses to the user/admin.

Additional context

none

Postgres error when initializing database on Azure

Describe the bug
When attempting to initialize the database using an Azure VM and connecting to an instance of Azure Database for PostgreSQL Server, I am receiving an error.

To Reproduce
Steps to reproduce the behavior:

  1. Prepare an Azure VM to act as a boundary controller
  2. Create an instance of Azure Database for PostgreSQL Server and allow Azure Services to connect
  3. Run the command sudo /usr/bin/boundary database init -skip-auth-method-creation -skip-host-resources-creation -skip-scopes-creation -skip-target-creation -config /etc/boundary-controller.hcl || true

Expected behavior
The default postgres database is initialized properly.

Additional context

I am running Boundary 0.1.1
The VM is running Ubuntu 18.04 LTS

The postgres connection string for Azure looks like this:

url = "postgresql://<servername>.postgres.database.azure.com:5432/postgres?user=<username>@<servername>&password=<password>&sslmode=require"

Here's the full error:

Error running database migrations: 1 error occurred:
        * error running migrations: migration failed: syntax error at or near "function" (column 13) in line 80:
  [migration SQL identical to the block quoted in the previous issue, from begin; through commit;]

 (details: pq: syntax error at or near "function")

Boundary CLI UX

Is your feature request related to a problem? Please describe.

This is about the UX of the CLI, which breaks 40+ years of *nix conventions for CLIs with a long form --option and a short form -o. Though the commands seem to work with both, it just looks bizarre in documentation without the --. It would be nice to have short forms for things like --password with -p, and --login-name as -l.

Describe the solution you'd like

This concerns the UX of the CLI tool, so it applies to all branches of the command.

Describe alternatives you've considered

The alternative is in the description: emphasize --option and, where there's a short form, -o. Provide actual short forms for common repetitive options, like -p for password and -l for login-name.

Explain any additional use-cases

This applies to all use cases that pass command-line options to any subcommand, and to the corresponding documentation.

Additional context

Description should be sufficient.
