
openfga's Introduction

OpenFGA


A high-performance and flexible authorization/permission engine built for developers and inspired by Google Zanzibar.

OpenFGA is designed to make it easy for developers to model their application permissions and integrate fine-grained authorization into their applications.

It allows in-memory data storage for quick development, as well as pluggable database modules. It currently supports PostgreSQL 14 and MySQL 8.

It offers an HTTP API and a gRPC API. It has SDKs for Java, Node.js/JavaScript, GoLang, Python and .NET. Look in our Community section for third-party SDKs and tools. It can also be used as a library (see example).

Getting Started

The following section aims to help you get started quickly. Please look at our official documentation for in-depth information.

Setup and Installation

ℹ️ The following sections set up an OpenFGA server using the default configuration values. These defaults are intended for rapid development, not for a production environment. Data written to an OpenFGA instance using the default configuration with the memory storage engine will not persist after the service is stopped.

For more information on how to configure the OpenFGA server, please take a look at our official documentation on Running in Production.

Docker

OpenFGA is available on Dockerhub, so you can quickly start it using the in-memory datastore by running the following commands:

docker pull openfga/openfga
docker run -p 8080:8080 -p 3000:3000 openfga/openfga run

Tip

The OPENFGA_HTTP_ADDR environment variable can be used to configure the address at which the playground expects the OpenFGA server to be. For example, docker run -e OPENFGA_PLAYGROUND_ENABLED=true -e OPENFGA_HTTP_ADDR=0.0.0.0:4000 -p 4000:4000 -p 3000:3000 openfga/openfga run will start the OpenFGA server on port 4000 and configure the playground accordingly.

Docker Compose

docker-compose.yaml provides an example of how to launch OpenFGA with Postgres using docker compose.

  1. First, either clone this repo or curl the docker-compose.yaml file with the following command:

    curl -LO https://openfga.dev/docker-compose.yaml
  2. Then, run the following command:

    docker compose up

Package Managers

If you are a Homebrew user, you can install OpenFGA with the following command:

brew install openfga

Pre-compiled Binaries

Download your platform's latest release and extract it. Then run the binary with the command:

./openfga run

Building from Source

There are two recommended options for building OpenFGA from source code:

Building from source with go install

Make sure you have Go 1.20 or later installed. See the Go downloads page.

You can install from source using Go modules:

  1. First, make sure $GOBIN is on your shell $PATH:

    export PATH=$PATH:$(go env GOBIN)
  2. Then use the install command:

    go install github.com/openfga/openfga/cmd/openfga
  3. Run the server with:

    ./openfga run

Building from source with go build

Alternatively, you can build OpenFGA by cloning the project from this GitHub repo and then building it with the go build command:

  1. Clone the repo to a local directory, and navigate to that directory:

    git clone https://github.com/openfga/openfga.git && cd openfga
  2. Then use the build command:

    go build -o ./openfga ./cmd/openfga
  3. Run the server with:

    ./openfga run

Verifying the Installation

Now that you have set up and installed OpenFGA, you can test your installation by creating an OpenFGA Store.

curl -X POST 'localhost:8080/stores' \
--header 'Content-Type: application/json' \
--data-raw '{
    "name": "openfga-demo"
}'

If everything is running correctly, you should get a response with information about the newly created store, for example:

{
  "id": "01G3EMTKQRKJ93PFVDA1SJHWD2",
  "name": "openfga-demo",
  "created_at": "2022-05-19T17:11:12.888680Z",
  "updated_at": "2022-05-19T17:11:12.888680Z"
}

Playground

The Playground facilitates rapid development by allowing you to visualize and model your application's authorization model(s) and manage relationship tuples with a locally running OpenFGA instance.

To run OpenFGA with the Playground disabled, provide the --playground-enabled=false flag.

./openfga run --playground-enabled=false

Once OpenFGA is running, by default, the Playground can be accessed at http://localhost:3000/playground.

In the event that a port other than the default port is required, the --playground-port flag can be set to change it. For example,

./openfga run --playground-enabled --playground-port 3001

Profiler (pprof)

Profiling through pprof can be enabled on the OpenFGA server by providing the --profiler-enabled flag.

./openfga run --profiler-enabled

This will start serving profiling data on port 3001. You can see that data by visiting http://localhost:3001/debug/pprof.

If you need to serve the profiler on a different address, you can do so by specifying the --profiler-addr flag. For example,

./openfga run --profiler-enabled --profiler-addr :3002

Once the OpenFGA server is running, in another window you can run the following command to generate a compressed CPU profile:

go tool pprof -proto -seconds 60 http://localhost:3001/debug/pprof/profile
# will collect data for 60 seconds and generate a file like pprof.samples.cpu.001.pb.gz

That file can be analyzed visually by running the following command and then visiting http://localhost:8084:

go tool pprof -http=localhost:8084 pprof.samples.cpu.001.pb.gz

Next Steps

Take a look at the examples of how to use OpenFGA in our documentation, and don't hesitate to browse the official Documentation and API Reference.

Limitations

MySQL Storage engine

The MySQL storage engine has a lower length limit for some properties of a tuple compared with other storage backends. For more information see the docs.

OpenFGA's MySQL Storage Adapter was contributed to OpenFGA by @twintag. Thanks!

Production Readiness

The core OpenFGA service has been in use by Okta FGA in production since December 2021.

OpenFGA's Memory Storage Adapter was built for development purposes only and is not recommended for a production environment, because it is not designed for scalable queries and has no support for persistence.

You can learn about more organizations using OpenFGA in production here. If your organization is using OpenFGA in production please consider adding it to the list.

The OpenFGA team will do its best to address all production issues with high priority.

Contributing

See CONTRIBUTING.

Community Meetings

We hold a monthly meeting to interact with the community, collaborate and receive/provide feedback. You can find more details, including the time, our agenda, and the meeting minutes here.

openfga's People

Contributors

aaguiarz, adriantam, chenrui333, curfew-marathon, dblclik, dependabot[bot], elbuo8, evansims, harshabangi, ilaleksin, jingchu000, jon-whit, jpadilla, le-yams, lekaf974, matoous, matthewpereira, midaslamb, miparnisari, poovamraj, raj-saxena, razorness, rhamzeh, sanketrai1, saurabhkr952, sergiught, siddhant-k-code, stgraber, tranngoclam, willvedd


openfga's Issues

Invalid tuple deletes not flagged as invalid tuple error

If I try to delete by writing http://localhost:8080/stores/01G0CECAB4TT41TKN3E7D7J27Q/write with the body

{
  "deletes": {
    "tuple_keys": [
      {
        "user": "",
        "relation": "writer",
        "object": "document:2021-budget"
      }
    ]
  }
}

It only fails because the tuple does not exist:

{
    "code": "write_failed_due_to_invalid_input",
    "message": "cannot delete a tuple which does not exist: user: '', relation: 'writer', object: 'document:2021-budget': invalid write input"
}

Note that when I try to write the same tuples (with no user)

{
  "writes": {
    "tuple_keys": [
      {
        "user": "",
        "relation": "writer",
        "object": "document:2021-budget"
      }
    ]
  }
}

It will fail with

{
    "code": "invalid_tuple",
    "message": "Invalid tuple 'object:\"document:2021-budget\" relation:\"writer\"'. Reason: missing user"
}

Thus, it looks like we validate tuples on writes but not on deletes.

feat: add a `migrate` target command to run database schema migrations

It would be nice if we added a migrate command to our CLI that runs database schema migrations so that developers can migrate up to the latest schema for a given release of OpenFGA.

Run database schema migrations needed for the OpenFGA server.

The migrate command is used to migrate up to a specific revision of the database schema needed for OpenFGA. If
the revision is omitted the latest or "HEAD" revision of the schema will be used.

Usage:
openfga migrate [revision]

Flags:
    --datastore-engine string - the type of database engine to run the migrations for (postgres, mysql, cockroachdb, etc.)
    --datastore-uri string    - the database connection uri (driver included) of the database to run the migrations against

Examples:

  • openfga migrate --datastore-engine postgres --datastore-uri postgres://localhost:5432 - migrates up to the latest "HEAD" revision of the database schema for postgres

  • openfga migrate --datastore-engine mysql --datastore-uri mysql://localhost:3306 1 - migrates up to the first revision of the database schema for mysql

I recommend we look into the https://github.com/golang-migrate/migrate package to simplify the migration execution process for a variety of database engines.
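
If we go with golang-migrate, a minimal sketch of what the command's internals could look like is below; the migrations directory and connection URI are illustrative, not OpenFGA's actual layout.

package main

import (
    "log"

    "github.com/golang-migrate/migrate/v4"
    _ "github.com/golang-migrate/migrate/v4/database/postgres"
    _ "github.com/golang-migrate/migrate/v4/source/file"
)

func main() {
    // The migrations path and connection URI below are placeholders.
    m, err := migrate.New(
        "file://assets/migrations/postgres",
        "postgres://postgres:password@localhost:5432/openfga?sslmode=disable",
    )
    if err != nil {
        log.Fatalf("failed to initialize migrations: %v", err)
    }
    // Up() applies all pending migrations; ErrNoChange means the schema is already at HEAD.
    if err := m.Up(); err != nil && err != migrate.ErrNoChange {
        log.Fatalf("migration failed: %v", err)
    }
    log.Println("database schema is up to date")
}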

bug: validate type and relations in userset values

Given the following authorization model:

type document
  relations
    define viewer as self

I can write the following relationship tuple Write("document:doc1", "viewer", "group:engineering#member").

I shouldn't be able to do this because the group type does not exist in the authorization model.

bug: conflicting authorization models cause unexpected errors

Consider the following two authorization models.

model1

type group
  relations
    define member as self

type document
  relations
    define viewer as self

model2

type team
  relations
    define member as self

type document
  relations
    define viewer as self

The following writes are allowed by OpenFGA.

write(document:doc1, viewer, group:eng#member, model1)
write(group:eng, member, jon, model1)

write(document:doc1, viewer, team:buzz#member, model2)
write(team:buzz, member, et, model2)

But, when evaluating Check(document:doc1, viewer, someuser, model1), I get a 400 Bad Request error with message

{
    "code": "type_not_found",
    "message": "Type 'team' not found"
}

I would expect a 200 OK with {"allowed": false}, because the Check is issued as of model1 and the 'team' type definition does not pertain whatsoever to model1.

support grpc health spec and register HTTP gateway `/healthz` with it

The grpc protocol has a Health Check Specification that we should be following. We should register the grpc health check endpoints as part of the OpenFGA grpc server, and with the HTTP gateway we can use runtime.WithHealthzEndpoint to register the /healthz endpoint on the HTTP gateway server in a way that delegates to the grpc health check.

This approach will unify our health checks in OpenFGA.

The grpc-health-probe utility can be used to probe the grpc health check endpoint or (if the HTTP gateway is enabled) you can query the /healthz endpoint as well.
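
A rough sketch of that wiring, assuming the runtime.WithHealthzEndpoint option available in recent grpc-gateway v2 releases (addresses and the OpenFGA service registration are abbreviated):

package main

import (
    "log"
    "net/http"

    "github.com/grpc-ecosystem/grpc-gateway/v2/runtime"
    "google.golang.org/grpc"
    "google.golang.org/grpc/credentials/insecure"
    "google.golang.org/grpc/health"
    healthv1 "google.golang.org/grpc/health/grpc_health_v1"
)

func main() {
    // gRPC side: register the standard health service on the gRPC server.
    grpcServer := grpc.NewServer()
    healthServer := health.NewServer()
    healthServer.SetServingStatus("", healthv1.HealthCheckResponse_SERVING)
    healthv1.RegisterHealthServer(grpcServer, healthServer)
    // ... register the OpenFGA service and serve grpcServer on :8081 as usual.

    // Gateway side: dial the gRPC server and expose /healthz on the HTTP gateway,
    // delegating to the gRPC health check.
    conn, err := grpc.Dial("localhost:8081", grpc.WithTransportCredentials(insecure.NewCredentials()))
    if err != nil {
        log.Fatal(err)
    }
    mux := runtime.NewServeMux(
        runtime.WithHealthzEndpoint(healthv1.NewHealthClient(conn)),
    )
    log.Fatal(http.ListenAndServe(":8080", mux))
}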

ContextualTuples don't seem to work

Code Version

github.com/openfga/go-sdk v0.0.1
image: "openfga/openfga:v0.1.7"

Tutorial

organization-context-authorization

type organization
  relations
    define base_project_editor as self or project_manager
    define member as self
    define project_editor as base_project_editor and user_in_context
    define project_manager as self and user_in_context
    define user_in_context as self
type project
  relations
    define can_delete as manager
    define can_edit as editor
    define can_view as editor
    define editor as project_editor from owner or project_editor from partner or manager
    define manager as project_manager from owner
    define owner as self
    define partner as self

queries

is anne related to organization:A as user_in_context?
  true
is anne related to organization:A as project_manager?
 true
is organization:A related to project:X as owner?
  true
is carl related to project:X as can_delete?
  false
is anne related to project:X as can_delete?
  true

I wish I could export all the tuples, but they are exactly what the tutorial had.

I purposefully put a garbage tuple in for my context (Object: fgaSdk.PtrString(uuid.New().String())).

I expected a "FALSE" to come back but got a "TRUE".

package main

import (
	"context"

	"github.com/fluffy-bunny/openfga/internal"
	"github.com/google/uuid"
	fgaSdk "github.com/openfga/go-sdk"
	"github.com/rs/zerolog/log"
)

func DoContextAuthorizaiton() {
	internal.DumpStores()
	storeName := "organization-context-authorization"
	storeID, err := internal.GetStoreIDByName(storeName)
	if err != nil {
		log.Fatal().Err(err).Msg("Fail to get store ID")
	}
	fgaClient, err := internal.GetApiClientByStoreID(storeID)
	if err != nil {
		log.Fatal().Err(err).Msg("Fail to get API client")
	}
	// is anne related to project:X as can_delete?
	//  returns true in playground
	//  no context here, so bad, because anne doesn't have can_delete to project:X in other orgs
	body := fgaSdk.CheckRequest{
		TupleKey: &fgaSdk.TupleKey{
			User:     fgaSdk.PtrString("anne"),
			Relation: fgaSdk.PtrString("can_delete"),
			Object:   fgaSdk.PtrString("project:X"),
		},
	}
	data, response, err := fgaClient.OpenFgaApi.Check(context.Background()).Body(body).Execute()
	if err != nil {
		log.Error().Err(err).Msg("Failed to check permission")
	} else {
		log.Info().Int("StatusCode", response.StatusCode).Send()
		log.Info().Interface("data", data).Send()
	}

	// is anne related to project:X as can_delete?  IN CONTEXT NOW
	//  Lets do a garbage request to see if it works
	body = fgaSdk.CheckRequest{
		TupleKey: &fgaSdk.TupleKey{
			User:     fgaSdk.PtrString("anne"),
			Relation: fgaSdk.PtrString("can_delete"),
			Object:   fgaSdk.PtrString("project:X"),
		},
		ContextualTuples: &fgaSdk.ContextualTupleKeys{
			TupleKeys: []fgaSdk.TupleKey{
				{
					User:     fgaSdk.PtrString("anne"),
					Relation: fgaSdk.PtrString("user_in_context"),
					Object:   fgaSdk.PtrString(uuid.New().String()),
				},
			},
		},
	}
	data, response, err = fgaClient.OpenFgaApi.Check(context.Background()).Body(body).Execute()
	if err != nil {
		log.Error().Err(err).Msg("Failed to check permission")
	} else {
		log.Info().Int("StatusCode", response.StatusCode).Send()
		log.Info().Interface("data", data).Send()
	}

} 

Logs

{"level":"info","store":{"created_at":"2022-07-28T18:10:04.681422Z","id":"01G9300PR8Q3TJZTJ06J3M9BMJ","name":"mystore2","updated_at":"2022-07-28T18:10:04.681422Z"},"time":"2022-08-08T08:14:42-07:00"}
{"level":"info","store":{"created_at":"2022-08-06T13:38:14.182322Z","id":"01G9SP1DH24F98HGRWEYJ01E2F","name":"tenant","updated_at":"2022-08-06T13:38:14.182322Z"},"time":"2022-08-08T08:14:42-07:00"}
{"level":"info","store":{"created_at":"2022-08-06T17:10:12.911545Z","id":"01G9T25J5E9ZMKK57MYJ201CAK","name":"entitlements","updated_at":"2022-08-06T17:10:12.911545Z"},"time":"2022-08-08T08:14:42-07:00"}
{"level":"info","store":{"created_at":"2022-08-06T20:39:05.151638Z","id":"01G9TE40NYT1GCTCEKBXZM5E2X","name":"multiteams","updated_at":"2022-08-06T20:39:05.151638Z"},"time":"2022-08-08T08:14:42-07:00"}
{"level":"info","store":{"created_at":"2022-08-06T21:02:36.609097Z","id":"01G9TFF320RJHSPBDWHGMZ688N","name":"projectmanagment","updated_at":"2022-08-06T21:02:36.609097Z"},"time":"2022-08-08T08:14:42-07:00"}
{"level":"info","store":{"created_at":"2022-08-06T21:04:19.055002Z","id":"01G9TFJ73EYM8X3FJKEX026QT5","name":"multiple-restrictions","updated_at":"2022-08-06T21:04:19.055002Z"},"time":"2022-08-08T08:14:42-07:00"}
{"level":"info","store":{"created_at":"2022-08-08T12:52:59.421785Z","id":"01G9YR80CF6G5R7TCBM96A1V7Z","name":"organization-context-authorization","updated_at":"2022-08-08T12:52:59.421785Z"},"time":"2022-08-08T08:14:42-07:00"}
{"level":"info","time":"2022-08-08T08:14:42-07:00","message":"Found store: 01G9YR80CF6G5R7TCBM96A1V7Z"}  
{"level":"info","StatusCode":200,"time":"2022-08-08T08:14:42-07:00"}
{"level":"info","data":{"allowed":true,"resolution":""},"time":"2022-08-08T08:14:42-07:00"}
{"level":"info","StatusCode":200,"time":"2022-08-08T08:14:46-07:00"}
{"level":"info","data":{"allowed":true,"resolution":""},"time":"2022-08-08T08:14:46-07:00"}

list objects with filter

After reading Search with Permission, I find it natural to use the Build A List Of IDs, Then Search pattern in some scenarios, but if the number of objects is too big, that's not practical. I wonder whether we could add some kind of filter to GET /list-objects so that this pattern can be used more widely. For example, with the model

type document
  relations
    define parent as self
    define editor as self
type folder
  relations
    define viewer as self

If I want to search for document object IDs, the result set may be big, but what if I want to search only for document object IDs whose parent is folder:a? If the API could provide this, we could use this pattern in a lot more scenarios. Maybe something like this:

body := fgaSdk.ListObjectsRequest{
    User:     PtrString("bob"),
    Relation: PtrString("reader"),
    Type:     PtrString("document"),
    FilterTuples: &FilterTupleKeys{
        TupleKeys: []TupleKey{{
            User:     PtrString("folder:a"),
            Relation: PtrString("parent"),
        }},
    },
}

data, response, err := apiClient.OpenFgaApi.ListObjects(context.Background()).Body(body).Execute()

I wonder if this is possible to implement.

side effect of contextual tuple

There are two issues with contextual tuples that I discovered. Below is my model:

type user
type repository
  relations
    define admin as self
    define user as self
type group
  relations
    define member as self
type tenure
  relations
    define user as self
type artifact
  relations
    define allowed_tenure as self
    define can_download as user from repo and member from group
    define can_upload as can_download and user from allowed_tenure
    define group as self
    define repo as self

And I populate the store with the following relationship tuples:

{
  "tuple_key": {
    "user": "user:anne",
    "relation": "member",
    "object": "group:engineering"
  }
}

{
  "tuple_key": {
    "user": "user:anne",
    "relation": "user",
    "object": "repository:us-east"
  }
}

{
  "tuple_key": {
    "user": "group:engineering",
    "relation": "group",
    "object": "artifact:doc1"
  }
}

{
  "tuple_key": {
    "user": "repository:us-east",
    "relation": "repo",
    "object": "artifact:doc1"
  }
}

{
  "tuple_key": {
    "user": "tenure:gt3",
    "relation": "allowed_tenure",
    "object": "artifact:doc1"
  }
}

It's apparent that user:anne has a can_download relation with artifact:doc1, and this can be verified with the Check API. However, if I add a contextual tuple to the request body:

{
  "tuple_key": {
    "user": "anne",
    "relation": "can_download",
    "object": "artifact:doc1"
  },
  "contextual_tuples": {
    "tuple_keys": [
      {
        "user": "user:anne",
        "relation": "user",
        "object": "tenure:gt3"
      }
    ]
  }
}

The returned response is false. Is this an undesirable side effect of contextual tuples?

Another issue is that the contextual tuple does not seem to take effect: the answer to a can_upload query is still false with the following request:

{
  "tuple_key": {
    "user": "anne",
    "relation": "can_upload",
    "object": "artifact:doc1"
  },
  "contextual_tuples": {
    "tuple_keys": [
      {
        "user": "user:anne",
        "relation": "user",
        "object": "tenure:gt3"
      }
    ]
  }
}

ci(release): fix our CI auto release notes generator

We need to fix our CI auto release notes generation mechanism so that it can support richer markdown text. The v0.1.5 release initially failed because it included richer markdown text, which ran into issues with bash escape sequences.

Ideally we introduce very few (ideally no) additional dependencies into our tooling to do so, but if it makes it more elegant and easier to manage, then it's probably worth it.

add support to enable/disable the HTTP (gateway) API

If you only want to run OpenFGA's grpc server you should be able to disable the HTTP API (grpc to http/json transcoding gateway).

To add this support we should:

  • Introduce a configuration OPENFGA_HTTP_ENABLED
  • If OPENFGA_HTTP_ENABLED=true, then we initialize and start the HTTP server; otherwise we skip it (a rough sketch of this gating follows).
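
A minimal sketch of the gating, assuming the proposed OPENFGA_HTTP_ENABLED variable; startHTTPGateway stands in for the existing gateway startup code:

package main

import (
    "log"
    "os"
    "strconv"
)

func main() {
    // OPENFGA_HTTP_ENABLED is the setting proposed in this issue.
    enabled, _ := strconv.ParseBool(os.Getenv("OPENFGA_HTTP_ENABLED"))
    if !enabled {
        log.Println("HTTP gateway disabled; serving gRPC only")
        return
    }
    startHTTPGateway()
}

// startHTTPGateway is a placeholder for the real HTTP gateway startup.
func startHTTPGateway() {
    log.Println("starting HTTP gateway")
}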

register grpc request context tags middleware to inject common context

It'd be nice to use the go-grpc-middleware/tags package to register a middleware that injects things like the request store_id, authorization_model_id, etc.. into the request context. Then we can use the injected tags in the request context in our logging and telemetry throughout the app.

Something like

opts := []grpc_ctxtags.Option{
    // TagBasedRequestFieldExtractor extracts the request field based on the Go struct tags of the generated protobufs
    grpc_ctxtags.WithFieldExtractor(grpc_ctxtags.TagBasedRequestFieldExtractor("store_id")),
    grpc_ctxtags.WithFieldExtractor(grpc_ctxtags.TagBasedRequestFieldExtractor("authorization_model_id")),
    // ... others
}

_ = grpc.NewServer(
    grpc.StreamInterceptor(grpc_ctxtags.StreamServerInterceptor(opts...)),
    grpc.UnaryInterceptor(grpc_ctxtags.UnaryServerInterceptor(opts...)),
)

and then those ctx tags will be available to us in the request context. For example

package logger

type ZapLogger struct {...}

var _ Logger = &ZapLogger{} // verify we implement the interface

func (l *ZapLogger) InfoWithContext(ctx context.Context, msg string, fields ...zap.Field) {
    ctxfields := zaptags.TagsToFields(ctx)
    fields = append(fields, ctxfields...)
    l.underlying.Info(msg, fields...)
}

See these integrations for additional niceties:
https://pkg.go.dev/github.com/grpc-ecosystem/[email protected]/tags/zap

ListObjects returning IDs not from the chosen type when contextual tuples are involved

To reproduce:

type user
type resource
  relations
    define parent as self
    define viewer as viewer from parent 
type folder
  relations
    define allowed as self
    define viewer as allowed

With the following tuples:

- user: folder:folder-1
  relation: parent
  object: resource:resource-1

Note that this issue does not appear when the relation definition is `define viewer as self or allowed` or `define viewer as self`, but it does appear when the relation definition is `define viewer as self and allowed`.

To try this out, the sample store: https://play.fga.dev/stores/create/?id=01GJ41921A4W4A3DCN9A5JQEJ8

curl  -X POST 'https://api.playground.fga.dev/stores/01GJ41921A4W4A3DCN9A5JQEJ8/list-objects' \
-H "Content-Type: application/json" \
-d '{"user": "user:emily", "relation":"viewer","type":"resource","contextual_tuples":{"tuple_keys":[{"user":"user:emily", "relation":"allowed", "object":"folder:folder-1"}]}}' | jq

Thanks to Suresh for raising this issue on discord


TLS certificates config path mismatch

The config schema describes the config paths grpc.tls.cert and grpc.tls.key. Neither the runtime flags --grpc-tls-cert/--grpc-tls-key nor the YAML settings grpc.tls.cert and grpc.tls.key are read correctly.
The mismatch is caused by:

type TLSConfig struct {
    Enabled  bool
    CertPath string
    KeyPath  string
}

The cert and key fields cannot be mapped to CertPath and KeyPath without a mapstructure tag.
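
A minimal sketch of the fix, assuming viper/mapstructure decoding of the grpc.tls section: give the fields explicit mapstructure tags so that cert and key map onto CertPath and KeyPath.

package config

// Sketch only: the tag names mirror the config schema (grpc.tls.enabled, grpc.tls.cert, grpc.tls.key).
type TLSConfig struct {
    Enabled  bool   `mapstructure:"enabled"`
    CertPath string `mapstructure:"cert"`
    KeyPath  string `mapstructure:"key"`
}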

feat: allow specifying CORS AllowedOrigins

Currently, OpenFGA hard-codes the CORS allowed origins to []string{"*"}. It should be possible to specify a more restrictive value for the allowed origins.
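
A hedged sketch of what configurable origins could look like using the rs/cors package; the wiring below is illustrative, not the actual OpenFGA server code:

package main

import (
    "net/http"

    "github.com/rs/cors"
)

// newCORSHandler wraps next with a CORS policy restricted to the given origins
// instead of the hard-coded []string{"*"}.
func newCORSHandler(next http.Handler, allowedOrigins []string) http.Handler {
    c := cors.New(cors.Options{
        AllowedOrigins: allowedOrigins, // e.g. []string{"https://app.example.com"}
        AllowedMethods: []string{http.MethodGet, http.MethodPost, http.MethodDelete},
        AllowedHeaders: []string{"Authorization", "Content-Type"},
    })
    return c.Handler(next)
}

func main() {
    mux := http.NewServeMux()
    mux.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) { w.WriteHeader(http.StatusOK) })
    handler := newCORSHandler(mux, []string{"https://app.example.com"})
    _ = http.ListenAndServe(":8080", handler)
}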

[enhancement] A richer READ query for tuples

I am finding that the current Read API is too broad in its results.
If the tuples are simple it's OK; however, for more complex objects (complex by convention) it ironically isn't fine-grained enough.

I have objects that look like the following.

permission_group:{{org_id}}/blah/something/else

Doing a Read, I can retrieve objects at the permission_group: scope. This gives me too many results.

I would like something like a lucene query where I can do a Read that matches all objects that look like "X"
i.e. permission_group:{{org_id}}

Also, Reads where I want the USER object returned should be at least as capable as Reads for OBJECTs.
I would like my USER input to also be namespaced, but for now it looks like I can only pass "user": "".

{
    "tuple_key": {
        "user": "permission_group:a/b",
        "relation": "associated_permission_group",
        "object":"permission_set:"
    },
    "page_size": 50
}

returns all permission_set(s)

vs

{
    "tuple_key": {
        "user": "",
        "relation": "associated_permission_group",
        "object":"permission_set:b"
    },
    "page_size": 50
}

Here I can only put an empty string; I would like to be able to namespace the user.

I imagine this could explode into a bunch of opinions on how to do this, but eventually we will have to address it.

bug: open fga crashes for no apparent reason

Hey,

I'm running OpenFGA with Docker (using the latest version, 0.2.0). The container works fine, but after some time, sending a request to OpenFGA (using the SDK) returns a 500 error with the following message:
"FGA API Internal Error: post check : Error Internal Server Error"

Logs from docker container:

2022-08-17T20:55:47.861Z ERROR grpc error {"error": "rpc error: code = Unknown desc = rpc error: code = Code(4000) desc = Internal Server Error", "request_url": "/stores/01G9TE715A51EDP84Q7S9VH9CX/check"}
github.com/openfga/openfga/pkg/logger.(*ZapLogger).ErrorWithContext
        /home/runner/work/openfga/openfga/pkg/logger/logger.go:80
github.com/openfga/openfga/server.New.func1
        /home/runner/work/openfga/openfga/server/server.go:143
github.com/grpc-ecosystem/grpc-gateway/v2/runtime.HTTPError
        /home/runner/go/pkg/mod/github.com/grpc-ecosystem/grpc-gateway/[email protected]/runtime/errors.go:81
go.buf.build/openfga/go/openfga/api/openfga/v1.RegisterOpenFGAServiceHandlerClient.func3
        /home/runner/go/pkg/mod/go.buf.build/openfga/go/openfga/[email protected]/openfga/v1/openfga_service.pb.gw.go:1591
github.com/grpc-ecosystem/grpc-gateway/v2/runtime.(*ServeMux).ServeHTTP
        /home/runner/go/pkg/mod/github.com/grpc-ecosystem/grpc-gateway/[email protected]/runtime/mux.go:386
github.com/rs/cors.(*Cors).Handler.func1
        /home/runner/go/pkg/mod/github.com/rs/[email protected]/cors.go:231
net/http.HandlerFunc.ServeHTTP
        /opt/hostedtoolcache/go/1.18.5/x64/src/net/http/server.go:2084
net/http.serverHandler.ServeHTTP
        /opt/hostedtoolcache/go/1.18.5/x64/src/net/http/server.go:2916
net/http.(*conn).serve
        /opt/hostedtoolcache/go/1.18.5/x64/src/net/http/server.go:1966

I can't seem to reproduce it. The only thing that solves this is restarting the docker container.

Incorrect build instruction in README.md

The current README suggests that building openfga involves

go build cmd/openfga/openfga.go

However, there is no cmd/openfga/openfga.go. Instead, it is cmd/openfga/main.go

Instead of asking the user to run go build, we can ask them to run make build, for which we already configure the filename and output binary name.

Latest release (v0.2.3) is crashing when http or grpc addr is provided in configuration

Removing these two configs (OPENFGA_HTTP_ADDR and OPENFGA_GRPC_ADDR) allows the service to start.

The configuration is passed through docker compose, like so:

services:
  openfga:
    image: openfga/openfga
    # ...
    environment:
      OPENFGA_HTTP_ADDR: ":${OPENFGA_HTTP_PORT}"
      OPENFGA_GRPC_ADDR: ":${OPENFGA_GRPC_PORT}"

Note: This used to work on v0.2.2

More info

  • Run through: official docker image & docker-compose
  • OpenFGA version: v0.2.3

Logs

{"level":"info","ts":1665181651.7706127,"caller":"logger/logger.go:48","msg":"using 'postgres' storage engine","build.version":"v0.2.3","build.commit":"07a7a1a7eaa4d04f2c9e7efed40072ae2f3560b4"}
{"level":"warn","ts":1665181651.7706547,"caller":"logger/logger.go:52","msg":"grpc TLS is disabled, serving connections using insecure plaintext","build.version":"v0.2.3","build.commit":"07a7a1a7eaa4d04f2c9e7efed40072ae2f3560b4"}
{"level":"warn","ts":1665181651.7706614,"caller":"logger/logger.go:52","msg":"HTTP TLS is disabled, serving connections using insecure plaintext","build.version":"v0.2.3","build.commit":"07a7a1a7eaa4d04f2c9e7efed40072ae2f3560b4"}
{"level":"warn","ts":1665181651.7706642,"caller":"logger/logger.go:52","msg":"authentication is disabled","build.version":"v0.2.3","build.commit":"07a7a1a7eaa4d04f2c9e7efed40072ae2f3560b4"}
{"level":"fatal","ts":1665181651.7706723,"caller":"logger/logger.go:64","msg":"failed to initialize openfga server","build.version":"v0.2.3","build.commit":"07a7a1a7eaa4d04f2c9e7efed40072ae2f3560b4","error":"failed to parse the 'grpc.addr' config: no IP","stacktrace":"github.com/openfga/openfga/pkg/logger.(*ZapLogger).Fatal\n\t/home/runner/work/openfga/openfga/pkg/logger/logger.go:64\ngithub.com/openfga/openfga/pkg/cmd.run\n\t/home/runner/work/openfga/openfga/pkg/cmd/run.go:55\ngithub.com/spf13/cobra.(*Command).execute\n\t/home/runner/go/pkg/mod/github.com/spf13/[email protected]/command.go:876\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\t/home/runner/go/pkg/mod/github.com/spf13/[email protected]/command.go:990\ngithub.com/spf13/cobra.(*Command).Execute\n\t/home/runner/go/pkg/mod/github.com/spf13/[email protected]/command.go:918\nmain.main\n\t/home/runner/work/openfga/openfga/cmd/openfga/openfga.go:18\nruntime.main\n\t/opt/hostedtoolcache/go/1.18.6/x64/src/runtime/proc.go:250"}

Reported by @ghstahl

Store ULID as 128 bits instead of TEXT

I might be missing some context, but just wanted to check anyway. I think it would be more efficient to store the ULID as 128 bits instead of keeping the string representation. Something like a snowflake id is probably even better.

feat: add pprof profiling

The OpenFGA server should be extended to support pprof profiling. For more info about pprof see the official documentation.

This feature should be configurable so that you can enable/disable pprof profiling, and the profiler should be served by its own HTTP server on a configurable host:port address.

We should add the following config and CLI flags:

config            flag              env var                   type    default
profiler.enabled  profiler-enabled  OPENFGA_PROFILER_ENABLED  bool    false
profiler.addr     profiler-addr     OPENFGA_PROFILER_ADDR     string  :3001
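
A minimal sketch of the feature under those proposed settings, serving the standard net/http/pprof handlers on a dedicated server:

package main

import (
    "log"
    "net/http"
    _ "net/http/pprof" // registers the /debug/pprof handlers on http.DefaultServeMux
)

// startProfiler serves the pprof handlers on their own address when enabled;
// ":3001" mirrors the default proposed above.
func startProfiler(enabled bool, addr string) {
    if !enabled {
        return
    }
    go func() {
        if err := http.ListenAndServe(addr, nil); err != nil {
            log.Printf("pprof server stopped: %v", err)
        }
    }()
}

func main() {
    startProfiler(true, ":3001")
    select {} // keep the process alive; the real server would block on its own listeners instead
}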

The stacktrace when OpenFGA errors is not useful

When something causes OpenFGA to error we receive a stacktrace that looks like the following:

github.com/openfga/openfga/pkg/logger.(*ZapLogger).Error
        /Users/craig/git/openfga/pkg/logger/logger.go:56
github.com/openfga/openfga/server/middleware.NewErrorLoggingInterceptor.func1
        /Users/craig/git/openfga/server/middleware/logging.go:20
google.golang.org/grpc.chainUnaryInterceptors.func1.1
        /Users/craig/go/pkg/mod/google.golang.org/[email protected]/server.go:1135
github.com/grpc-ecosystem/go-grpc-middleware/auth.UnaryServerInterceptor.func1
        /Users/craig/go/pkg/mod/github.com/grpc-ecosystem/[email protected]/auth/auth.go:47
google.golang.org/grpc.chainUnaryInterceptors.func1.1
        /Users/craig/go/pkg/mod/google.golang.org/[email protected]/server.go:1138
github.com/grpc-ecosystem/go-grpc-middleware/validator.UnaryServerInterceptor.func1
        /Users/craig/go/pkg/mod/github.com/grpc-ecosystem/[email protected]/validator/validator.go:47
google.golang.org/grpc.chainUnaryInterceptors.func1.1
        /Users/craig/go/pkg/mod/google.golang.org/[email protected]/server.go:1138
google.golang.org/grpc.chainUnaryInterceptors.func1
        /Users/craig/go/pkg/mod/google.golang.org/[email protected]/server.go:1140
go.buf.build/openfga/go/openfga/api/openfga/v1._OpenFGAService_Check_Handler
        /Users/craig/go/pkg/mod/go.buf.build/openfga/go/openfga/[email protected]/openfga/v1/openfga_service_grpc.pb.go:377
google.golang.org/grpc.(*Server).processUnaryRPC
        /Users/craig/go/pkg/mod/google.golang.org/[email protected]/server.go:1301
google.golang.org/grpc.(*Server).handleStream
        /Users/craig/go/pkg/mod/google.golang.org/[email protected]/server.go:1642
google.golang.org/grpc.(*Server).serveStreams.func1.2
        /Users/craig/go/pkg/mod/google.golang.org/[email protected]/server.go:938

(This is the stack trace that came from a "failed to connect to Postgres" error.) This is not very useful. Can we do something to improve this?
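
One possible direction (a sketch of the general idea, not the project's actual fix): wrap lower-level errors with context as they bubble up, so the logged message explains the cause without relying on the interceptor's stack trace. The names below are illustrative.

package main

import (
    "errors"
    "fmt"
)

// errConnRefused stands in for the low-level driver error.
var errConnRefused = errors.New("connection refused")

func connectPostgres() error {
    // Wrapping with %w keeps the cause inspectable while adding context.
    return fmt.Errorf("failed to connect to Postgres: %w", errConnRefused)
}

func main() {
    err := connectPostgres()
    fmt.Println(err)                            // failed to connect to Postgres: connection refused
    fmt.Println(errors.Is(err, errConnRefused)) // true, so callers can still branch on the cause
}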

Read API: "type not found" is thrown when in fact the authorization model id was not found

To reproduce:

  1. Create an authorization model
curl --location --request POST 'http://localhost:8080/stores/01GCJB1DJ88RJ1KA5TMG5PJXQ4/authorization-models' \
--header 'Content-Type: application/json' \
--header 'Accept: application/json' \
--data-raw '{
    "type_definitions": [
        {
            "type": "document",
            "relations": {
                "owner": {
                    "this": {}
                }
            }
        },
        {
            "type": "group",
            "relations": {
                "member": {
                    "this": {}
                }
            }
        }
    ]
}'
  2. Call read with an incorrect auth model id
curl --location --request POST 'http://localhost:8080/stores/01GCJB1DJ88RJ1KA5TMG5PJXQ4/read' \
--header 'Content-Type: application/json' \
--header 'Accept: application/json' \
--data-raw '{
  "tuple_key": {
    "object": "document:2021-budget",
    "relation": "owner",
    "user": "anne"
  },
  "authorization_model_id": "01GDKQT51EGCGAAAAAAAAAAAAA"
}'

You get this:

{
    "code": "type_not_found",
    "message": "Type 'document' not found"
}

But I expected "auth model id not found"

ns, err := q.datastore.ReadTypeDefinition(ctx, store, authorizationModelID, objectType)
if err != nil {
    if errors.Is(err, storage.ErrNotFound) {
        return serverErrors.TypeNotFound(objectType)
    }
    ...
}
Unable to connect to an Azure-hosted Postgres instance

I'm experimenting with connecting OpenFGA to an Azure managed Postgres instance by running the OpenFGA docker container locally, and passing the postgres:// connection URI as the OPENFGA_DATASTORE_CONNECTION_URI env var. I've verified that I can connect to the Postgres instance through other means, but when I make a cURL request to create a new store in OpenFGA I get the following error:

ERROR   grpc error      {"error": "rpc error: code = Unknown desc = rpc error: code = Code(4000) desc = Internal Server Error", "request_url": "/stores"}
github.com/openfga/openfga/pkg/logger.(*ZapLogger).ErrorWithContext
        /home/runner/work/openfga/openfga/pkg/logger/logger.go:80
github.com/openfga/openfga/server.New.func1
        /home/runner/work/openfga/openfga/server/server.go:134
github.com/grpc-ecosystem/grpc-gateway/v2/runtime.HTTPError
        /home/runner/go/pkg/mod/github.com/grpc-ecosystem/grpc-gateway/[email protected]/runtime/errors.go:81
go.buf.build/openfga/go/openfga/api/openfga/v1.RegisterOpenFGAServiceHandlerClient.func12
        /home/runner/go/pkg/mod/go.buf.build/openfga/go/openfga/[email protected]/openfga/v1/openfga_service.pb.gw.go:1639
github.com/grpc-ecosystem/grpc-gateway/v2/runtime.(*ServeMux).ServeHTTP
        /home/runner/go/pkg/mod/github.com/grpc-ecosystem/grpc-gateway/[email protected]/runtime/mux.go:386
github.com/rs/cors.(*Cors).Handler.func1
        /home/runner/go/pkg/mod/github.com/rs/[email protected]/cors.go:231
net/http.HandlerFunc.ServeHTTP
        /opt/hostedtoolcache/go/1.18.3/x64/src/net/http/server.go:2084
net/http.serverHandler.ServeHTTP
        /opt/hostedtoolcache/go/1.18.3/x64/src/net/http/server.go:2916
net/http.(*conn).serve
        /opt/hostedtoolcache/go/1.18.3/x64/src/net/http/server.go:1966

Steps to reproduce:

  1. Populate .openfga.env with:
OPENFGA_DATASTORE_ENGINE=postgres
OPENFGA_DATASTORE_CONNECTION_URI=postgres://<username>:<password>@<server_name>.postgres.database.azure.com/<database_name>
OPENFGA_GRPC_TLS_ENABLED=false
OPENFGA_HTTP_TLS_ENABLED=false
  2. Run docker run --env-file ./openfga.env -p 8080:8080 openfga/openfga run
  3. Run curl -X POST 'localhost:8080/stores' \ --header 'Content-Type: application/json' \ --data-raw '{ "name": "openfga-demo" }' in a separate terminal

generate dot graph visualizations for authorization models

Given an authorization model *openfgapb.AuthorizationModel, generate a dot graph representing the direct and indirect edges in the graph of relationships.

In our graph package we can parse the model and export a digraph structure using https://github.com/emicklei/dot, for example:

package graph

import (
  ...
  "github.com/emicklei/dot"
)

// ToDigraph processes the provided authorization model and produces a directed-acyclic graph (digraph)
// in graphviz DOT format.
func ToDigraph(model *openfgapb.AuthorizationModel) (*dot.Graph, error) {
    digraph := dot.NewGraph(dot.Directed)
    // todo: construct the digraph from the model

    return digraph, nil
}

Then we can use the Digraph structure and print it out and run it through https://dot-to-ascii.ggerganov.com/ . For example,

input

digraph {
    
    "document editor" -> "user";
    "document editor" -> "group member";
    "document viewer" -> "document editor";
    "document viewer" -> "group member";
    "group member" -> "user";
    "group member" -> "group member";
}

output

     +-----------------+
     | document viewer | -+
     +-----------------+  |
       |                  |
       |                  |
       v                  |
     +-----------------+  |
  +- | document editor |  |
  |  +-----------------+  |
  |    |                  |
  |    |                  |
  |    v                  |
  |  +-----------------+  |
  |  |  group member   | <+
  |  +-----------------+
  |    |
  |    |
  |    v
  |  +-----------------+
  +> |      user       |
     +-----------------+
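
For the construction step left as a TODO above, here is an illustrative sketch using emicklei/dot; it builds the digraph from a hand-rolled edge map rather than walking a real *openfgapb.AuthorizationModel:

package main

import (
    "fmt"

    "github.com/emicklei/dot"
)

func main() {
    // Edges written out by hand; the real implementation would derive them from
    // the model's type definitions and userset rewrites.
    edges := map[string][]string{
        "document editor": {"user", "group member"},
        "document viewer": {"document editor", "group member"},
        "group member":    {"user", "group member"},
    }

    digraph := dot.NewGraph(dot.Directed)
    for from, tos := range edges {
        src := digraph.Node(from)
        for _, to := range tos {
            digraph.Edge(src, digraph.Node(to))
        }
    }

    // Print DOT output that can be rendered with graphviz or dot-to-ascii.
    fmt.Println(digraph.String())
}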

ci: add `.goreleaser-nightly.yaml` manifest for nightly releases

It would be nice to get nightly releases of the main branch so that each new day a developer can run the latest software on the main branch. We should add a .goreleaser-nightly.yaml manifest specifically for nightly releases.

Design goals:

  • A nightly release should only build and publish a new container image (e.g. openfga/openfga:nightly). No Github releases, Homebrew packages, etc..
  • Nightly releases should not override the openfga/openfga:latest image, because the latest image tag should always point to the latest "tagged" release.

We also need to make sure we add documentation to the README indicating that you can use nightly builds to test out new functionality on main. An example GitHub Actions workflow would look like:

name: Nightly Release
on:
  schedule:
    - cron: '0 2 * * *' # run at 2 AM UTC

jobs:
  goreleaser:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2
        with:
          fetch-depth: 0

      - name: Set up Go
        uses: actions/setup-go@v2
        with:
          go-version: 1.18

      - name: Login to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}

      - name: Run GoReleaser
        uses: goreleaser/goreleaser-action@v2
        with:
          distribution: goreleaser
          version: latest
          args: release --rm-dist -f .goreleaser-nightly.yaml
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

feat: Reject tuple input with direct relationship if not allowed instead of writing

Currently, a WriteCommand (ref: server/commands/write.go) with a tupleset that is not valid because of attempting to write a DirectUserset (ref: openfga/v1/authzmodel.proto in openfga/api) is not rejected as invalid. This tupleset does not cause an invalid authorization check (which is great), but it does create two potential pain points with users:

  1. Services that write tuples to an fga store that violate this condition will not receive an error and assume that they provided a semantically valid tuple (they are already syntactically validated with the provided regex patterns); and,
  2. fga admins (global or store) may get complaints about authorization checks failing that a service believes should succeed and, upon investigating and seeing the invalid direct relationship, assume there is a deeper or more complex issue than a simple violation of the indirect-only relationship

Looking through server/commands/write.go, we do not currently pull in the AuthorizationModel (as in server/commands/read_authzmodel.go), though we do have the StoreId and AuthorizationModelId in the request input for Execute and should be able to fetch it. If we did, we would have access to the TypeDefinitions and could then check whether the TypeDefinition.relations map contains a DirectUserset or not.


Sporadic authorization_model_resolution_too_complex error

Docker on Windows
OpenFGA v.0.2.2

I have a reproducible model, dataset, and check that gives me the following sporadic error:

{
    "code": "authorization_model_resolution_too_complex",
    "message": "Authorization Model resolution required too many rewrite rules to be resolved. Check your authorization model for infinite recursion or too much nesting"
}

I make the following Check call via POSTMAN

{
    "tuple_key": {
      "object": "org:f708c725-b072-4678-87d1-7fc47c52ffd3",
      "relation": "member",
      "user": "a2226717-7702-480a-a4ce-4608d0dcaea5"
    },
    "contextual_tuples": {
      "tuple_keys": [
        {
          "object": "org:f708c725-b072-4678-87d1-7fc47c52ffd3",
          "relation": "user_in_context",
          "user": "a2226717-7702-480a-a4ce-4608d0dcaea5"
        }
      ]
    } 
  }

or Curl

curl --location --request POST 'http://localhost:3601/stores/01GFS6AQP6WTBCHNNQR7YSA9AW/check' \
--header 'Content-Type: application/json' \
--header 'Accept: application/json' \
--data-raw '{
    "tuple_key": {
      "object": "org:f708c725-b072-4678-87d1-7fc47c52ffd3",
      "relation": "member",
      "user": "a2226717-7702-480a-a4ce-4608d0dcaea5"
    },
    "contextual_tuples": {
      "tuple_keys": [
        {
          "object": "org:f708c725-b072-4678-87d1-7fc47c52ffd3",
          "relation": "user_in_context",
          "user": "a2226717-7702-480a-a4ce-4608d0dcaea5"
        }
      ]
    } 
  }'

My Authorization Model

type feature
  relations
    define access as subscriber_member from associated_plan
    define associated_plan as self
    define subscriber as subscriber from associated_plan
type feature_repo
  relations
    define feature as self
type integrity
  relations
    define member as self
type org
  relations
    define admin as self and member
    define member as self and member from user_repo and user_in_context
    define plan_provider as self
    define user_in_context as self
    define user_repo as self
type permission
  relations
    define access as access from associated_feature and member from associated_permission_group
    define associated_feature as self
    define associated_permission_group as self
    define org_access as subscriber from associated_feature
type permission_group
  relations
    define member as self or admin from owner
    define owner as self
type plan
  relations
    define subscriber as self
    define subscriber_member as member from subscriber
type plan_provider
  relations
    define plan as self and plan from plan_repo and plan_in_context
    define plan_in_context as self
    define plan_repo as self
type plan_repo
  relations
    define plan as self
type user_repo
  relations
    define integrity as self
    define member as self and member from integrity

complex_error_reproducible.zip

wire up CLI flags in addition to server configuration in `config.yaml`

The functional tests in #83 launch a container for OpenFGA to test against, and I want to be able to launch the container with specific config but without having to construct a config.yaml in the automation. I'd like to be able to run openfga with, for example,

openfga run --authn-method preshared --authn-preshared-keys key1,key2

and

# enable HTTP TLS only
openfga run --http-tls-enabled --http-tls-cert /some/path --http-tls-key /some/path

# enable grpc TLS only
openfga run --grpc-tls-enabled --grpc-tls-cert /some/path --grpc-tls-key /some/path

# enable both
openfga run \
  --grpc-tls-enabled --grpc-tls-cert /some/path --grpc-tls-key /some/path \
  --http-tls-enabled --http-tls-cert /some/path --http-tls-key /some/path

Since we have the CLI, it'd be nice to expose these flags as part of our CLI in addition to being configurable via config.yaml, so that config.yaml and CLI flags work together. We'll have to choose which one takes precedence, but my hunch is that we should just follow viper's default.

Viper uses the following precedence order. Each item takes precedence over the item below it:

  • explicit call to Set
  • flag
  • env
  • config
  • key/value store
  • default

So flags should take precedence over the config.yaml values.
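
A small sketch of the wiring implied here (not OpenFGA's actual run command), binding pflag-defined flags into viper so that flag values override config.yaml values:

package main

import (
    "fmt"

    "github.com/spf13/pflag"
    "github.com/spf13/viper"
)

func main() {
    // Flags mirror a couple of the examples above.
    pflag.Bool("http-tls-enabled", false, "enable TLS for the HTTP server")
    pflag.String("http-tls-cert", "", "path to the TLS certificate")
    pflag.Parse()

    // Load config.yaml if present; flags bound below take precedence over it.
    viper.SetConfigName("config")
    viper.AddConfigPath(".")
    _ = viper.ReadInConfig()
    _ = viper.BindPFlags(pflag.CommandLine)

    fmt.Println("http-tls-enabled =", viper.GetBool("http-tls-enabled"))
    fmt.Println("http-tls-cert    =", viper.GetString("http-tls-cert"))
}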

Migrate does not respect the same configuration methods as openfga

To Reproduce

docker run --env OPENFGA_DATASTORE_ENGINE=postgres --env OPENFGA_DATASTORE_URI=postgres://postgres:postgres@localhost:5432/openfga openfga/openfga:v0.1.7 migrate

Expected Result(s)

  • An error connecting to postgres if postgres is not running or the uri is invalid
  • A successful migration

Actual Result

  • Complaint about datastore params not being passed


bug: unauthenticated/unauthorized requests return 500 status code instead of 401/403

When we introduced authn into OpenFGA we didn't introduce the corresponding API error codes for authn related API errors in openfga/api, so every 401 or 403 (unauthenticated or unauthorized) ends up returning a 500 (internal) error code.

Steps to reproduce (for example):

➜ ./openfga run --authn-method preshared --authn-preshared-keys key1

➜ curl --request POST -d '{"name":"openfga-demo"}' http://localhost:8080/stores

{"code":"internal_error","message":"missing bearer token"}

➜ curl --header "Authorization: Bearer key2" --request POST -d '{"name":"openfga-demo"}' http://localhost:8080/stores

{"code":"internal_error","message":"unauthorized"}

To fix this we need to introduce authn errors into openfga/api and encode authn errors to the appropriate error code herein.
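
A sketch of the intended mapping (the variable names are illustrative, not OpenFGA's API): authn failures should carry codes.Unauthenticated or codes.PermissionDenied, which the HTTP gateway transcodes to 401 and 403 instead of 500.

package main

import (
    "fmt"

    "google.golang.org/grpc/codes"
    "google.golang.org/grpc/status"
)

var (
    errMissingBearerToken = status.Error(codes.Unauthenticated, "missing bearer token")
    errUnauthorized       = status.Error(codes.PermissionDenied, "unauthorized")
)

func main() {
    fmt.Println(status.Code(errMissingBearerToken)) // Unauthenticated, transcoded to HTTP 401
    fmt.Println(status.Code(errUnauthorized))       // PermissionDenied, transcoded to HTTP 403
}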

OpenFGA server crashes with `"error":"rpc error: code = Canceled desc = context canceled"`

Description

I've been coming across the context canceled error many times over the last couple of weeks and have been unable to resolve it. I've tried adjusting the keepalive/timeout settings for the TCP sockets on the client side. I think it's unlikely to be the latency between the server and the DB, as they are both hosted in the same region, and I have yet to enable replication on the DB. There is also plenty of headroom on CPU/memory etc. on the OpenFGA server as well as the DB.

Once the error shows up, the server will continuously throw it in response to any request and must be rebooted to get it working again. I can’t seem to identify a pattern here, as it seems sporadic. There’ve been times where the server has been up for >24 hours, serving requests before it craps out with context canceled; on other occasions the error has shown up within 30 minutes of booting up. It has errored out when there's been just a single user on the environment, and worked fine when there's been a dozen or so concurrent users.

Configuration

Client

import axios from "axios";
import Agent from "agentkeepalive";
import { CredentialsMethod, OpenFgaApi } from "@openfga/sdk";

const axiosInstance = axios;
const keepAliveAgent = new Agent({
  timeout: 60000,
  maxSockets: 10,
  maxFreeSockets: 10,
  freeSocketTimeout: 2000,
});
axiosInstance.defaults.httpAgent = keepAliveAgent;
const endpointURL = new URL(endpoint);
const fga = new OpenFgaApi(
  {
    storeId: storeId,
    apiHost: endpointURL.host,
    apiScheme: endpointURL.protocol.split(":")[0],
    credentials: {
      method: CredentialsMethod.ApiToken,
      config: { token: authKey },
    },
  },
  axiosInstance
);

OpenFGA Server

{"level":"info","ts":1658864039.7765493,"caller":"logger/logger.go:48","msg":"🚀 starting openfga service...","build.version":"v0.1.4","build.commit":"226299f267219b7e36c3449270ee00d4b1234693","version":"v0.1.4","date":"2022-06-28T01:07:19Z","commit":"226299f267219b7e36c3449270ee00d4b1234693","go-version":"go1.18.3"}

Server Env Vars

OPENFGA_DATASTORE_ENGINE=postgres
OPENFGA_DATASTORE_CONNECTION_URI=postgres://username@[email protected]:5432/spike-test
OPENFGA_LOG_FORMAT=json
OPENFGA_AUTH_METHOD=preshared
OPENFGA_AUTH_PRESHARED_KEYS="redacted,redacted,redacted"

Database

Configuration: Burstable, B2s, 2 vCores, 4 GiB RAM, 32 GiB storage (Azure Database for PostgreSQL flexible server)
PostgreSQL version: 13.6

Sample logs

{"log":"{\"level\":\"info\",\"ts\":1658803072.0395293,\"caller\":\"logger/logger.go:72\",\"msg\":\"db_stats\",\"build.version\":\"v0.1.4\",\"build.commit\":\"226299f267219b7e36c3449270ee00d4b1234693\",\"method\":\"Check\",\"reads\":22,\"writes\":0}\n","stream":"stderr","time":"2022-07-26T02:37:52.039780237Z"}
{"log":"{\"level\":\"info\",\"ts\":1658803072.0679107,\"caller\":\"logger/logger.go:72\",\"msg\":\"db_stats\",\"build.version\":\"v0.1.4\",\"build.commit\":\"226299f267219b7e36c3449270ee00d4b1234693\",\"method\":\"Check\",\"reads\":7,\"writes\":0}\n","stream":"stderr","time":"2022-07-26T02:37:52.068089867Z"}
{"log":"{\"level\":\"info\",\"ts\":1658803072.182001,\"caller\":\"logger/logger.go:72\",\"msg\":\"db_stats\",\"build.version\":\"v0.1.4\",\"build.commit\":\"226299f267219b7e36c3449270ee00d4b1234693\",\"method\":\"Check\",\"reads\":22,\"writes\":0}\n","stream":"stderr","time":"2022-07-26T02:37:52.182169198Z"}
{"log":"{\"level\":\"info\",\"ts\":1658803072.295995,\"caller\":\"logger/logger.go:72\",\"msg\":\"db_stats\",\"build.version\":\"v0.1.4\",\"build.commit\":\"226299f267219b7e36c3449270ee00d4b1234693\",\"method\":\"Check\",\"reads\":12,\"writes\":0}\n","stream":"stderr","time":"2022-07-26T02:37:52.296167227Z"}
{"log":"{\"level\":\"info\",\"ts\":1658803072.3244417,\"caller\":\"logger/logger.go:72\",\"msg\":\"db_stats\",\"build.version\":\"v0.1.4\",\"build.commit\":\"226299f267219b7e36c3449270ee00d4b1234693\",\"method\":\"Check\",\"reads\":9,\"writes\":0}\n","stream":"stderr","time":"2022-07-26T02:37:52.324641959Z"}
{"log":"{\"level\":\"info\",\"ts\":1658803072.3807542,\"caller\":\"logger/logger.go:72\",\"msg\":\"db_stats\",\"build.version\":\"v0.1.4\",\"build.commit\":\"226299f267219b7e36c3449270ee00d4b1234693\",\"method\":\"Check\",\"reads\":10,\"writes\":0}\n","stream":"stderr","time":"2022-07-26T02:37:52.380939816Z"}
{"log":"{\"level\":\"info\",\"ts\":1658803074.0693555,\"caller\":\"logger/logger.go:72\",\"msg\":\"db_stats\",\"build.version\":\"v0.1.4\",\"build.commit\":\"226299f267219b7e36c3449270ee00d4b1234693\",\"method\":\"Check\",\"reads\":7,\"writes\":0}\n","stream":"stderr","time":"2022-07-26T02:37:54.069591812Z"}
{"log":"{\"level\":\"info\",\"ts\":1658803076.356419,\"caller\":\"logger/logger.go:72\",\"msg\":\"db_stats\",\"build.version\":\"v0.1.4\",\"build.commit\":\"226299f267219b7e36c3449270ee00d4b1234693\",\"method\":\"Check\",\"reads\":10,\"writes\":0}\n","stream":"stderr","time":"2022-07-26T02:37:56.356608286Z"}
{"log":"{\"level\":\"info\",\"ts\":1658803078.5058677,\"caller\":\"logger/logger.go:72\",\"msg\":\"db_stats\",\"build.version\":\"v0.1.4\",\"build.commit\":\"226299f267219b7e36c3449270ee00d4b1234693\",\"method\":\"Check\",\"reads\":10,\"writes\":0}\n","stream":"stderr","time":"2022-07-26T02:37:58.506099157Z"}
{"log":"{\"level\":\"info\",\"ts\":1658803082.297125,\"caller\":\"logger/logger.go:72\",\"msg\":\"db_stats\",\"build.version\":\"v0.1.4\",\"build.commit\":\"226299f267219b7e36c3449270ee00d4b1234693\",\"method\":\"Check\",\"reads\":7,\"writes\":0}\n","stream":"stderr","time":"2022-07-26T02:38:02.297355339Z"}
{"log":"{\"level\":\"info\",\"ts\":1658803084.4732785,\"caller\":\"logger/logger.go:72\",\"msg\":\"db_stats\",\"build.version\":\"v0.1.4\",\"build.commit\":\"226299f267219b7e36c3449270ee00d4b1234693\",\"method\":\"Check\",\"reads\":10,\"writes\":0}\n","stream":"stderr","time":"2022-07-26T02:38:04.473669834Z"}
{"log":"{\"level\":\"info\",\"ts\":1658803086.5859325,\"caller\":\"logger/logger.go:72\",\"msg\":\"db_stats\",\"build.version\":\"v0.1.4\",\"build.commit\":\"226299f267219b7e36c3449270ee00d4b1234693\",\"method\":\"Check\",\"reads\":10,\"writes\":0}\n","stream":"stderr","time":"2022-07-26T02:38:06.586111568Z"}
{"log":"{\"level\":\"info\",\"ts\":1658803090.624179,\"caller\":\"logger/logger.go:72\",\"msg\":\"db_stats\",\"build.version\":\"v0.1.4\",\"build.commit\":\"226299f267219b7e36c3449270ee00d4b1234693\",\"method\":\"Check\",\"reads\":7,\"writes\":0}\n","stream":"stderr","time":"2022-07-26T02:38:10.624356101Z"}
{"log":"{\"level\":\"info\",\"ts\":1658803093.0691795,\"caller\":\"logger/logger.go:72\",\"msg\":\"db_stats\",\"build.version\":\"v0.1.4\",\"build.commit\":\"226299f267219b7e36c3449270ee00d4b1234693\",\"method\":\"Check\",\"reads\":10,\"writes\":0}\n","stream":"stderr","time":"2022-07-26T02:38:13.069395844Z"}
{"log":"{\"level\":\"info\",\"ts\":1658803093.6985922,\"caller\":\"logger/logger.go:72\",\"msg\":\"db_stats\",\"build.version\":\"v0.1.4\",\"build.commit\":\"226299f267219b7e36c3449270ee00d4b1234693\",\"method\":\"Check\",\"reads\":22,\"writes\":0}\n","stream":"stderr","time":"2022-07-26T02:38:13.699084949Z"}
{"log":"{\"level\":\"info\",\"ts\":1658803094.6023476,\"caller\":\"logger/logger.go:72\",\"msg\":\"db_stats\",\"build.version\":\"v0.1.4\",\"build.commit\":\"226299f267219b7e36c3449270ee00d4b1234693\",\"method\":\"Check\",\"reads\":11,\"writes\":0}\n","stream":"stderr","time":"2022-07-26T02:38:14.602779921Z"}
{"log":"{\"level\":\"info\",\"ts\":1658803095.1992974,\"caller\":\"logger/logger.go:72\",\"msg\":\"db_stats\",\"build.version\":\"v0.1.4\",\"build.commit\":\"226299f267219b7e36c3449270ee00d4b1234693\",\"method\":\"Check\",\"reads\":11,\"writes\":0}\n","stream":"stderr","time":"2022-07-26T02:38:15.199572796Z"}
{"log":"{\"level\":\"info\",\"ts\":1658803095.2273736,\"caller\":\"logger/logger.go:72\",\"msg\":\"db_stats\",\"build.version\":\"v0.1.4\",\"build.commit\":\"226299f267219b7e36c3449270ee00d4b1234693\",\"method\":\"Check\",\"reads\":8,\"writes\":0}\n","stream":"stderr","time":"2022-07-26T02:38:15.227614223Z"}
{"log":"{\"level\":\"info\",\"ts\":1658803095.6119893,\"caller\":\"logger/logger.go:72\",\"msg\":\"db_stats\",\"build.version\":\"v0.1.4\",\"build.commit\":\"226299f267219b7e36c3449270ee00d4b1234693\",\"method\":\"Check\",\"reads\":10,\"writes\":0}\n","stream":"stderr","time":"2022-07-26T02:38:15.61217451Z"}
{"log":"{\"level\":\"info\",\"ts\":1658803098.2356906,\"caller\":\"logger/logger.go:72\",\"msg\":\"db_stats\",\"build.version\":\"v0.1.4\",\"build.commit\":\"226299f267219b7e36c3449270ee00d4b1234693\",\"method\":\"Check\",\"reads\":19,\"writes\":0}\n","stream":"stderr","time":"2022-07-26T02:38:18.236149229Z"}
{"log":"{\"level\":\"info\",\"ts\":1658803098.26291,\"caller\":\"logger/logger.go:72\",\"msg\":\"db_stats\",\"build.version\":\"v0.1.4\",\"build.commit\":\"226299f267219b7e36c3449270ee00d4b1234693\",\"method\":\"Check\",\"reads\":7,\"writes\":0}\n","stream":"stderr","time":"2022-07-26T02:38:18.263224645Z"}
{"log":"{\"level\":\"info\",\"ts\":1658803098.8293798,\"caller\":\"logger/logger.go:72\",\"msg\":\"db_stats\",\"build.version\":\"v0.1.4\",\"build.commit\":\"226299f267219b7e36c3449270ee00d4b1234693\",\"method\":\"Check\",\"reads\":22,\"writes\":0}\n","stream":"stderr","time":"2022-07-26T02:38:18.829775356Z"}
{"log":"{\"level\":\"info\",\"ts\":1658803099.6448712,\"caller\":\"logger/logger.go:72\",\"msg\":\"db_stats\",\"build.version\":\"v0.1.4\",\"build.commit\":\"226299f267219b7e36c3449270ee00d4b1234693\",\"method\":\"Check\",\"reads\":45,\"writes\":0}\n","stream":"stderr","time":"2022-07-26T02:38:19.645405573Z"}
{"log":"{\"level\":\"info\",\"ts\":1658803099.8589158,\"caller\":\"logger/logger.go:72\",\"msg\":\"db_stats\",\"build.version\":\"v0.1.4\",\"build.commit\":\"226299f267219b7e36c3449270ee00d4b1234693\",\"method\":\"Check\",\"reads\":22,\"writes\":0}\n","stream":"stderr","time":"2022-07-26T02:38:19.859251568Z"}
{"log":"{\"level\":\"info\",\"ts\":1658803100.072831,\"caller\":\"logger/logger.go:72\",\"msg\":\"db_stats\",\"build.version\":\"v0.1.4\",\"build.commit\":\"226299f267219b7e36c3449270ee00d4b1234693\",\"method\":\"Check\",\"reads\":12,\"writes\":0}\n","stream":"stderr","time":"2022-07-26T02:38:20.073210865Z"}
{"log":"{\"level\":\"error\",\"ts\":1658803348.6587253,\"caller\":\"logger/logger.go:80\",\"msg\":\"grpc error\",\"build.version\":\"v0.1.4\",\"build.commit\":\"226299f267219b7e36c3449270ee00d4b1234693\",\"error\":\"rpc error: code = Canceled desc = context canceled\",\"request_url\":\"/stores/01G7QVP4XNWB9W64BEWACZ15FQ/check\",\"stacktrace\":\"github.com/openfga/openfga/pkg/logger.(*ZapLogger).ErrorWithContext\\n\\t/home/runner/work/openfga/openfga/pkg/logger/logger.go:80\\ngithub.com/openfga/openfga/server.New.func1\\n\\t/home/runner/work/openfga/openfga/server/server.go:136\\ngithub.com/grpc-ecosystem/grpc-gateway/v2/runtime.HTTPError\\n\\t/home/runner/go/pkg/mod/github.com/grpc-ecosystem/grpc-gateway/[email protected]/runtime/errors.go:81\\ngo.buf.build/openfga/go/openfga/api/openfga/v1.RegisterOpenFGAServiceHandlerClient.func3\\n\\t/home/runner/go/pkg/mod/go.buf.build/openfga/go/openfga/[email protected]/openfga/v1/openfga_service.pb.gw.go:1450\\ngithub.com/grpc-ecosystem/grpc-gateway/v2/runtime.(*ServeMux).ServeHTTP\\n\\t/home/runner/go/pkg/mod/github.com/grpc-ecosystem/grpc-gateway/[email protected]/runtime/mux.go:386\\ngithub.com/rs/cors.(*Cors).Handler.func1\\n\\t/home/runner/go/pkg/mod/github.com/rs/[email protected]/cors.go:231\\nnet/http.HandlerFunc.ServeHTTP\\n\\t/opt/hostedtoolcache/go/1.18.3/x64/src/net/http/server.go:2084\\nnet/http.serverHandler.ServeHTTP\\n\\t/opt/hostedtoolcache/go/1.18.3/x64/src/net/http/server.go:2916\\nnet/http.(*conn).serve\\n\\t/opt/hostedtoolcache/go/1.18.3/x64/src/net/http/server.go:1966\"}\n","stream":"stderr","time":"2022-07-26T02:42:28.659035685Z"}

gRPC server not accessible outside of docker

Currently, the gRPC server listens on port 8081, but it is bound to localhost and is therefore only reachable from within the Docker container itself, not from the outside.
The Docker image itself exposes port 8081, so the intention seems to be that the gRPC server should bind to 0.0.0.0:8081 rather than localhost:8081.
It would also be nice if this were configurable, e.g. through an environment variable, for users who do not want to expose the gRPC server at all.
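A minimal sketch of the desired behaviour, assuming a build where the gRPC bind address is configurable (the OPENFGA_GRPC_ADDR environment variable is an assumption here and may not exist in the release being discussed):

# Assumption: OPENFGA_GRPC_ADDR controls the gRPC bind address in this build.
docker run -e OPENFGA_GRPC_ADDR=0.0.0.0:8081 -p 8080:8080 -p 8081:8081 -p 3000:3000 openfga/openfga run

With the server bound to 0.0.0.0:8081 and the port published, a gRPC client on the host could reach it at localhost:8081.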

[Bug] OpenFGA returning 500 on ListObjects

Hello, we are testing OpenFGA for a project and ran into issues with the (non-streaming) list_objects endpoint. The problem is not missing data, but a server error.
The model we tested is as follows:

type organization
  relations
    define admin as self
    define editor as self or admin
    define member as self or viewer
    define viewer as self or editor
type type_1
  relations
    define owner as self
    define editor as self or owner
    define viewer as self or editor
type type_2
  relations
    define owner as self
    define editor as self or owner or editor from parent
    define viewer as self or editor or viewer from parent
    define parent as self

The server was set up as the documentation shows for the Postgres engine:

docker network create openfga
docker run -d --name postgres --network=openfga -e POSTGRES_USER=postgres -e POSTGRES_PASSWORD=password postgres:14
docker run --rm --network=openfga openfga/openfga migrate --datastore-engine postgres --datastore-uri 'postgres://postgres:password@postgres:5432/postgres?sslmode=disable'
docker run --name openfga --network=openfga -p 3000:3000 -p 8080:8080 -p 8081:8081 openfga/openfga run --datastore-engine postgres --datastore-uri 'postgres://postgres:password@postgres:5432/postgres?sslmode=disable' --listObjects-max-results 50000  --listObjects-deadline 360s

First, we added a user as admin of organization ORG1: {"user": "user_1","relation": "admin","object": "organization:ORG1"}

We then performed 600 iterations, each adding 25 parent tuples of the form {"user": "type_1:UUID_1","relation": "parent","object": "type_2:UUID_n"}
plus 1 tuple giving members of ORG1 the "editor" relation on that type_1 element: {"user": "organization:ORG1#member", "relation": "editor", "object": "type_1:UUID_1"}.

This created 600 type_1 elements and 15000 type_2 elements, where each type_1 element is the parent of 25 type_2 elements.
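For reference, a hedged sketch of how such tuples can be written through the HTTP Write API (the store ID and UUIDs are placeholders for this example):

curl -X POST http://localhost:8080/stores/$STORE_ID/write \
  -H 'Content-Type: application/json' \
  -d '{"writes": {"tuple_keys": [
        {"user": "type_1:UUID_1", "relation": "parent", "object": "type_2:UUID_1"},
        {"user": "organization:ORG1#member", "relation": "editor", "object": "type_1:UUID_1"}
      ]}}'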

When we list the type_2 objects the user can access using the list_objects endpoint with {"type": "type_2","relation": "editor","user": "user_1"}, we get the 15000 IDs, but if we list with the "viewer" relation instead, {"type": "type_2","relation": "viewer","user": "user_1"}, we get a 500 response.
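For reference, a sketch of the failing request against the HTTP API (the store ID is a placeholder):

curl -X POST http://localhost:8080/stores/$STORE_ID/list-objects \
  -H 'Content-Type: application/json' \
  -d '{"type": "type_2", "relation": "viewer", "user": "user_1"}'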

On the server we get the following errors:

2022-10-26T15:50:17.700Z        INFO    db_stats        {"method": "Check", "reads": 50, "writes": 0}
2022-10-26T15:50:17.700Z        INFO    db_stats        {"method": "Check", "reads": 38, "writes": 0}
2022-10-26T15:50:17.701Z        INFO    db_stats        {"method": "Check", "reads": 50, "writes": 0}
2022-10-26T15:50:17.702Z        INFO    db_stats        {"method": "Check", "reads": 32, "writes": 0}
2022-10-26T15:50:17.702Z        INFO    db_stats        {"method": "Check", "reads": 50, "writes": 0}
2022-10-26T15:50:17.702Z        INFO    db_stats        {"method": "Check", "reads": 29, "writes": 0}
2022-10-26T15:50:17.703Z        INFO    db_stats        {"method": "Check", "reads": 38, "writes": 0}
2022-10-26T15:50:17.710Z        INFO    db_stats        {"method": "Check", "reads": 38, "writes": 0}
2022-10-26T15:50:17.710Z        INFO    db_stats        {"method": "Check", "reads": 26, "writes": 0}
2022-10-26T15:50:17.710Z        ERROR   check_error     {"error": "postgres error: context deadline exceeded"}
github.com/openfga/openfga/pkg/logger.(*ZapLogger).ErrorWithContext
        /home/runner/work/openfga/openfga/pkg/logger/logger.go:80
github.com/openfga/openfga/server/commands.(*ListObjectsQuery).internalCheck
        /home/runner/work/openfga/openfga/server/commands/list_objects.go:233
github.com/openfga/openfga/server/commands.(*ListObjectsQuery).performChecks.func1
        /home/runner/work/openfga/openfga/server/commands/list_objects.go:207
golang.org/x/sync/errgroup.(*Group).Go.func1
        /home/runner/go/pkg/mod/golang.org/x/[email protected]/errgroup/errgroup.go:75
2022-10-26T15:50:17.710Z        INFO    db_stats        {"method": "Check", "reads": 11, "writes": 0}
2022-10-26T15:50:17.710Z        INFO    db_stats        {"method": "Check", "reads": 11, "writes": 0}
2022-10-26T15:50:17.710Z        INFO    db_stats        {"method": "Check", "reads": 32, "writes": 0}
2022-10-26T15:50:17.710Z        ERROR   check_error     {"error": "rpc error: code = Code(4000) desc = Internal Server Error"}
github.com/openfga/openfga/pkg/logger.(*ZapLogger).ErrorWithContext
        /home/runner/work/openfga/openfga/pkg/logger/logger.go:80
github.com/openfga/openfga/server/commands.(*ListObjectsQuery).internalCheck
        /home/runner/work/openfga/openfga/server/commands/list_objects.go:233
github.com/openfga/openfga/server/commands.(*ListObjectsQuery).performChecks.func1
        /home/runner/work/openfga/openfga/server/commands/list_objects.go:207
golang.org/x/sync/errgroup.(*Group).Go.func1

Second, we populated the database step by step and ran the listing at each size:

  • With 50 type_1, listing worked, returning 1250 type_2 IDs
  • With 100 type_1, listing worked, returning 2500 type_2 IDs
  • With 150 type_1, listing worked, returning 3750 type_2 IDs
  • With 200 type_1, listing worked, returning 5000 type_2 IDs
  • With 250 type_1, listing worked, returning 6250 type_2 IDs
  • With 300 type_1, listing worked, returning 7500 type_2 IDs
  • With 350 type_1, listing worked, returning 8750 type_2 IDs
  • With 400 type_1, listing stopped working for the "viewer" relation, but the "editor" relation still returned the 10000 IDs.
  • Then we jumped to 750 type_1 elements and the "editor" listing failed too.
