
json-api's Introduction


This library creates a JSON API-compliant REST API from your Node app and automatically generates API documentation.

It currently integrates with Express or Koa apps that use Mongoose models, but it can easily be integrated with other frameworks and databases. If you want to see an integration with another stack, just open an issue!

This library implements all the required portions of the 1.0 spec, which is more than enough for basic CRUD. It does not yet implement some of the smaller, optional pieces, like related resource URIs.

V3 Installation

$ npm install json-api

Example API

Check out the full, working v3 example repo for all the details on building an API with this library. Or, take a look at the basic example below:

  var app = require('express')()
    , API = require('json-api');

  var models = {
    "Person": require('./models/person'),
    "Place": require('./models/place')
  };

  var adapter = new API.dbAdapters.Mongoose(models);
  var registry = new API.ResourceTypeRegistry({
    "people": {
      beforeRender: function(resource, req, res) {
        if(!userIsAdmin(req)) resource.removeAttr("password");
        return resource;
      }
    },
    "places": {}
  }, {
    "dbAdapter": adapter,
    "urlTemplates": { 
      "self": "/{type}/{id}"
    }
  });

  // Tell the lib the host name our API is served from; needed for security.
  const opts = { host: 'example.com' };

  // Set up a front controller, passing it controllers that'll be used
  // to handle requests for API resources and for the auto-generated docs.
  var Front = new API.httpStrategies.Express(
    new API.controllers.API(registry), 
    new API.controllers.Documentation(registry, {name: 'Example API'}),
    opts
  );

  // Render the docs at /
  app.get("/", Front.docsRequest);

  // Add routes for basic list, read, create, update, delete operations
  app.get("/:type(people|places)", Front.apiRequest);
  app.get("/:type(people|places)/:id", Front.apiRequest);
  app.post("/:type(people|places)", Front.apiRequest);
  app.patch("/:type(people|places)/:id", Front.apiRequest);
  app.delete("/:type(people|places)/:id", Front.apiRequest);

  // Add routes for adding to, removing from, or updating resource relationships
  app.post("/:type(people|places)/:id/relationships/:relationship", Front.apiRequest);
  app.patch("/:type(people|places)/:id/relationships/:relationship", Front.apiRequest);
  app.delete("/:type(people|places)/:id/relationships/:relationship", Front.apiRequest);


  app.listen(3000);

Core Concepts

Resource Type Descriptions

The JSON-API spec is built around the idea of typed resource collections. For example, you can have a "people" collection and a "companies" collection. (By convention, type names are plural, lowercase, and dasherized.)

To use this library, you describe the special behavior (if any) that resources of each type should have, and then register those descriptions with a central ResourceTypeRegistry. Then the library takes care of the rest. Each resource type description is simply an object whose keys configure that type's behavior (e.g., the beforeRender function shown in the example above).

Query Factories

When a request comes in, the json-api library extracts various parameters from it to build a query that will be used to fulfill the user's request.

However, to support advanced use cases, you might want to override how the library generates this query in order to select/update different data, or to modify how the query's result (data or error) is placed into the JSON:API response. To do this, you can just pass in your own function (a "query factory") for constructing this query. See an example here.

One simple thing you can do with query factories is to create urls (or, in REST terminology, resources) that map to different database items over time. For example, you could have an /events/upcoming resource or a /users/me resource. To do that, your query factory function would call the library's built-in function to get its auto-generated query, and then modify that query (which would likely be for all the events or users, respectively) to add a filter constraint that only returns the appropriate resources.
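As a sketch of that pattern (the names below are illustrative, not the library's actual API; the real query factory receives the library's own request and query objects):

```javascript
// Sketch of the query-factory pattern: wrap the default query builder
// and narrow its result before it runs. All names here are illustrative,
// not json-api's real API.
function makeUpcomingEventsQueryFactory(buildDefaultQuery) {
  return function (request) {
    // Start from the query the library would have built on its own...
    const query = buildDefaultQuery(request);
    // ...then constrain it so /events/upcoming only returns future events.
    query.filters = (query.filters || []).concat([
      { field: "date", operator: "gte", value: Date.now() }
    ]);
    return query;
  };
}
```

A /users/me resource would work the same way, adding an id constraint derived from the authenticated user on the request.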

Query factory functions are also a good place to do permissions checks that rely on access to the parsed request body. If the user doesn't have permission, you can throw an error, and the library will pass it along gracefully in the response. See error handling.

Filtering

This library supports filtering out of the box, using a syntax that's designed to be easier to read, easier to write, and more expressive than the square bracket syntax used by libraries like qs.

For example, to include only items where the zip code is either 90210 or 10012, you'd write:

?filter=(zip,:in,[90210,10012]).

By contrast, with the square-bracket syntax, you'd have to write something like:

?filter[zip][in][]=90210&filter[zip][in][]=10012.

Also, the square-bracket syntax can't represent empty arrays or distinguish between non-string literals (e.g. true) and strings, while this library's format can. See details below.

Formatting filtering constraints

In this library's default format, the value of the filter parameter is one or more "filter constraints" listed next to each other. These constraints narrow the results to only include those that match. The format of a filter constraint is: (fieldName,:operator,value). For example:

  • (name,:eq,`Bob`): only include items where the name equals "Bob"
  • (salary,:gte,150000): only include items where the salary is greater than or equal to 150,000.

The value can be a number, null, true, or false, or a backtick-delimited string (like `Bob`). To define a list of values, surround the values in square brackets and separate them with commas (e.g. [90210,10012] is a list of values used with the in operator above).

The valid operators (for the built-in Mongoose adapter) are: eq, neq, in, nin, lt, gt, lte, and gte.
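As an illustration of these value rules, here's a minimal parser for a single constraint's value (a hypothetical helper, not the library's actual parser):

```javascript
// Parse a single filter-constraint value per the rules above: numbers,
// null, true/false, backtick-delimited strings, and [a,b,...] lists.
// A simplified sketch (e.g., it doesn't handle commas inside strings);
// not the library's actual parser.
function parseFilterValue(raw) {
  if (raw === "null") return null;
  if (raw === "true") return true;
  if (raw === "false") return false;
  if (raw.startsWith("`") && raw.endsWith("`")) return raw.slice(1, -1);
  if (raw.startsWith("[") && raw.endsWith("]")) {
    const inner = raw.slice(1, -1);
    return inner === "" ? [] : inner.split(",").map(parseFilterValue);
  }
  const asNumber = Number(raw);
  if (!Number.isNaN(asNumber)) return asNumber;
  throw new Error("Unrecognized filter value: " + raw);
}
```

Note that, unlike the square-bracket syntax, this format can represent both an empty list (`[]`) and the difference between the boolean true and the string `` `true` ``.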

If you have multiple constraints, you can choose whether to combine them with an AND or an OR. To do that, instead of providing three items in your field constraint (i.e., the field name, operator, and value), provide :and or :or as the operator, followed by the applicable constraints. E.g.:

GET /people?filter=(:or,(name,:eq,`Bob`),(zip,:eq,90210))

Will find all the people who are named Bob or who live in the 90210 zip code.

Filter constraints listed next to each other at the top-level are combined with an "AND". E.g., GET /people?filter=(name,`Bob`)(zip,90210) will give only the people named Bob who live in the 90210 zip code.

The (name,`Bob`) constraint above is equivalent to (name,:eq,`Bob`); it's a shorthand. Whenever you don't provide an operator, eq is inferred.

Putting it all together, you could do:

GET /people?filter=(:or,(email,`[email protected]`),(name,`Test`))(dob,:gte,1963)

This will find everyone born after 1963 who also has either the name "Test" or the email "[email protected]".

Note: your API can use a totally different filter query parameter format if you so desire, by providing a custom parser (see example). Also, each adapter can indicate support for a custom set of operators.

On URL Encoding

When sending filter constraints, make sure you don't URL encode characters that the syntax above uses as delimiters (namely, commas, parentheses, backticks, square brackets, and the exclamation point), unless you mean for these characters to be interpreted as part of your data (e.g., part of a field name or value) rather than as a separator.

Be aware that some clients, by default, automatically percent-encode certain characters in url query strings -- especially the backtick and square brackets. This will cause the server to error, because it won't see the appropriate (unescaped) delimiter characters. Whether the client is correct to do this automatic encoding is a nuanced question, as there are competing and conflicting standards governing URLs today (namely, RFC 3986 and the WHATWG URL Standard). If you encounter this problem, check your client for a way to make requests without any automatic encoding. If your client absolutely insists on URL encoding backticks (the trickiest character), you can delimit your strings with exclamation points (!) instead, and then encode exclamation points within the string.
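You can see the problem directly in JavaScript: encodeURIComponent, which many clients use under the hood, percent-encodes most of the delimiter characters, though notably it leaves parentheses and the exclamation point alone:

```javascript
// encodeURIComponent escapes the comma, colon, square-bracket, and
// backtick delimiters that the filter syntax relies on...
console.log(encodeURIComponent("(zip,:in,[90210,10012])"));
// → (zip%2C%3Ain%2C%5B90210%2C10012%5D)

// ...and backticks in particular:
console.log(encodeURIComponent("`Bob`")); // → %60Bob%60

// But it leaves exclamation points untouched, which is why !-delimited
// strings survive clients that insist on encoding:
console.log(encodeURIComponent("!Bob!")); // → !Bob!
```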

Pagination

Pagination limit and offset parameters are accepted in the following form:

?page[offset]=<value>&page[limit]=<value>

For example, to get 25 people starting at the 51st person:

GET /people?page[offset]=50&page[limit]=25
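For instance, a small client-side helper (hypothetical; the library only cares about the resulting query string) for turning a 1-based page number and page size into these parameters:

```javascript
// Build the page[offset]/page[limit] query string for a 1-based page
// number and a page size. A hypothetical client-side convenience.
function pageParams(pageNumber, pageSize) {
  const offset = (pageNumber - 1) * pageSize;
  return `page[offset]=${offset}&page[limit]=${pageSize}`;
}

// Page 3 at 25 per page reproduces the request above:
// GET /people?page[offset]=50&page[limit]=25
```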

Error Handling

Code that you provide the library (e.g., a query factory function) can throw an error or, in some cases, return a promise that rejects with an error.

Any time an Error is encountered, the library responds with a generic "An error has occurred" response. The library doesn't pass any information from the error object (e.g., its message or stack trace) back to the client by default, as that could leak details about the server's implementation, which is not great for security.

If you want to pass information about the error back to the user, you need to explicitly mark it as safe to expose to the user, which you can do in two ways. First, you can throw an instance of the library's APIError class directly, instead of a more general Error. (See the APIError constructor and the json:api spec on errors for details about what properties you can expose.) Second, if your error is coming from some existing code, you can attach a special, Symbol-named property to it with a truthy value, and that'll trigger the framework to know that it's safe to display.
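The second mechanism can be sketched like this (DISPLAY_SAFE below is a stand-in; the real Symbol to use comes from the library's exports):

```javascript
// Sketch of the "mark an existing error as display-safe" pattern.
// DISPLAY_SAFE is a stand-in for the Symbol the library actually
// exports; it only illustrates the mechanism.
const DISPLAY_SAFE = Symbol("displaySafe");

function markDisplaySafe(err) {
  err[DISPLAY_SAFE] = true; // truthy value => safe to serialize
  return err;
}

function isDisplaySafe(err) {
  return Boolean(err[DISPLAY_SAFE]);
}

// An error thrown by existing code, now safe to expose in the response:
const err = markDisplaySafe(new Error("That file type isn't allowed."));
```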

Many errors are also serialized by the library with standard URIs in their code field, which serves as a reliable way to identify each type of error. The library exports an Errors object, which contains a collection of functions for making errors of known types with these codes already set in them. If you throw errors, you should reuse these functions/codes where possible. To detect an APIError's type in your code, call .toJSON() and read the resulting .code property.

Routing, Authentication & Controllers

This library gives you a Front controller (shown in the example) that can handle requests for API results or for the documentation. But the library doesn't prescribe how requests get to this controller. This allows you to use any url scheme, routing layer, or authentication system you already have in place.

You just need to make sure that: req.params.type reflects the requested resource type; req.params.id reflects the requested id, if any; and req.params.relationship reflects the relationship name, in the event that the user is requesting a relationship url. The library will read these values to help it construct the query needed to fulfill the user's request.

The library may, in the future, also read req.params.related for related resource urls, so make sure you're not using that param if you don't want the library to pick up its value and use it in the query.

In the example above, routing is handled with Express's built-in app[VERB] methods, and the three parameters are set properly using express's built-in :param syntax. If you're looking for something more robust, you might be interested in Express Simple Router. For authentication, check out Express Simple Firewall.

Database Adapters

An adapter handles all the interaction with the database. It is responsible for turning requests into standard Resource or Collection objects that the rest of the library will use. See the built-in MongooseAdapter for an example.

json-api's People

Contributors

abhiagarwal, andrewbranch, carlbennettnz, ethanresnick, martinchau, philippmi, snyk-bot, warp


json-api's Issues

Make “transforms” more flexible by allowing async behavior

I’d like to use the beforeSave function not so much as a transformation, but as a place to do some asynchronous work, and have the ability to carry out or cancel the save based on the results of the async task.

For example, the specific case I have right now is a resource that represents a file in an S3 bucket: I want to perform an upload to S3, which upon succeeding, will continue to save to Mongo. If the bucket upload fails, though, I want to be able to reject or something and kick off an error response.

A related thought that I’ve been thinking for some time is that maybe it would be worth exploring making each of these steps a distinct middleware. Although, that would be a function signature specific to Connect and Express servers, so maybe that doesn’t jive with the project’s vision. At any rate, I think these transforms could detect whether a Promise is being returned, and wait for it to settle if needed.

What do you think, @ethanresnick?

Handle "self" link protocol when proxied

I currently have json-api set up behind an Nginx proxy. The proxy terminates SSL. All the clients speak only HTTPS, but everything behind the proxy is plain HTTP.

Self links derive their protocol from the incoming request protocol only. The simple fix is to use the traditional X-Forwarded-Proto header instead (if set).

There is probably a better way to handle this, but I'd like to get your thoughts before I post a PR. I would just change this line to this:

it.uri = (req.get('X-Forwarded-Proto') || req.protocol) + "://" + req.get("Host") + req.originalUrl;

More tests!

This library now has pretty good test coverage (about 90%, with 265 tests), but still, it'd be good to:

  • Add "integration" tests for the JSON version of the auto-generated documentation, since its format has already been subject to a couple regressions in the past.
  • Make sure that each endpoint type/method has at least basic tests, to catch major regressions. Most do already, but there may be one or two that don't. These tests could be as simple as "did it return a 200 with data that's even vaguely right (e.g., non-empty)?"
  • Add tests for the most-complex features of the spec, like nested includes.

The lowest priority is to write tests for the "middle complexity" features, like precise status codes and the like. Not that those aren't important, but they're just lower priority.

Failed PUTs return 200

If you PUT to a resource, and the data you're PUTting is in the right format but the db rejects it (e.g. because it fails to pass validation rules)...

  1. The resource isn't actually modified in the database (good!), but
  2. the server returns a 200 with the body holding the contents of the resource as it would be if the update had succeeded.

Handle meta

Pass off incoming meta to some user function to store/whatever, and give the user some function through which to set meta info on the response should they wish. This would also presumably live in the resource type description, though I haven't thought about its design beyond that.

Dasherizing member names

The JSON API spec recommends that member names be dasherized. I think we should be dasherizing by default, but there probably needs to be an easy way to opt out of it. There are also some challenges to un-dasherizing (camelizing) some names. For instance, say you have a Mongoose schema like:

let PersonSchema = new mongoose.Schema({
  name: String,
  imageURL: String
});

It’s easy to dasherize imageURL to image-url on outgoing responses, but on an incoming POST request, for example, you might have a request body that looks like

{
  "data": {
    "type": "people",
    "attributes": {
      "name": "Andrew",
      "image-url": "http://placehold.it/100x100"
    }
  }
}

the camelization of image-url will result in imageUrl rather than imageURL.

So, I’m wondering if we should build in a way to customize the dasherized/camelized relationship. In my project, I’m handling this by adding a virtual field called imageUrl on the schema.

We could kill two birds (the “opt-in/opt-out” bird and the “customized transform” bird) with one stone by allowing users to specify a pair of functions when defining resources:

import camelize from 'camelize';
import dasherize from 'dasherize';

registry.type('people', {
  adapter: adapter,
  urlTemplates: {
    "self": "/people/{id}"
  },
  transformMemberNames: {
    fromRequest: camelize, // request comes in dasherized, camelize it before accessing store
    fromStore: dasherize   // names are camelized in store, dasherize before sending response
  }
});

Maybe, for the simple case, plain camelize/dasherize functions could be the default, but it would be easy to override them on a per-resource basis, checking for special cases like

fromRequest: dasherized => {
  if (dasherized === 'image-url') {
    return 'imageURL';
  }
  return camelize(dasherized);
}

or to opt out of transformations:

transformMemberNames: {
  fromRequest: name => name,
  fromStore: name => name
}

I’m definitely not set on the exact naming above, maybe fromRequest and fromStore would be better as deserialize and serialize. But what do you think of going in this direction?

Make it easier to mark a field as readonly

I see two meaningful types of readonly fields:

  1. "final" fields, which are fields that, once they're set (on the initial POST), are then unmodifiable (i.e. on PATCH). An example might be a username: settable to anything at first, but then not mutable.
  2. "server-managed" fields, which are fields that are utterly unsettable through the API (i.e. on POST or PATCH), even on initial creation. This is stuff like: created-date, modified-date, and id (when server-generated ids are in use).

Right now, if the user wants one of their API fields to be readonly, whether final or server-managed, they have to manually add logic in the beforeSave transform in the resource description, to 403 if the client attempts to change one of these readonly fields. (For the final case, they even have to inspect the request to check the method.)
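The manual approach described above looks roughly like this (a sketch with illustrative names; the real transform receives the library's resource and request objects):

```javascript
// Sketch of manually enforcing server-managed fields in a beforeSave
// transform: reject with a 403 if the client tries to set any of them.
// Names are illustrative, not the library's actual API.
const SERVER_MANAGED = ["created-at", "modified-at"];

function beforeSave(resource, req) {
  for (const field of SERVER_MANAGED) {
    if (resource.attrs && field in resource.attrs) {
      const err = new Error(`Field "${field}" is read-only.`);
      err.status = 403;
      throw err;
    }
  }
  return resource;
}
```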

But this use case is common enough that there should be a declarative solution. That is, the user should be able to simply set a "server-managed" or "final" key to true in the resource description somewhere, and a transform would automatically be added to handle the validation.

One natural place to put this key would be in the validation key that exists under each field in the info section. My issue with putting it here, though, is that it's not intuitive that values under info could actually affect the behavior of the app. So, instead, I'm thinking that resource type descriptions could get a "behaviors" key, structured like so:

{
  "behaviors": {
    "fields": {
      "created-at": {
        "server-managed": true
      }
    }
  }
}

Adding this info under behaviors would then also need to be taken into account by the DocumentationController, to add it to the auto-generated docs.

Babel as dev-dependency

Currently we are trying to execute ES6 tests in a module where we have json-api as a dependency. When I require 'babel/register' in one of my build steps, I get the error only one instance of babel/polyfill is allowed.
The problem is that we have two different versions of babel as dependencies (the one we specified in our top-level module and the one that is json-api's dependency), and both try to add `global._babelPolyfill`. Even though I've seen #11, this causes us serious headaches. Could you consider specifying babel as a dev-dependency again?

Validate the type on incoming requests

This should just be another simple check in the APIController to make sure the adapter for the type requested exists. Will handle cases in which bad requests are routed to the library, as happened in #9 (comment)

JSON API RC3 tracking

Hey there! I just wanted to let you know that JSON API has hit RC3, and we'd like client libraries to start implementing it like it was 1.0, as a final check that we're good for 1.0. I wanted to make this issue to let you know!

json-api/json-api#484

PostgreSQL Adapter

The documentation told me to open this issue! :)
Anyway I would like to try this with a PostgreSQL database, so an adapter for that would be welcome.

If there were some additional information on how adapters are supposed to work I might be able to look into this myself.

sendError always sending 500s

It appears that when you use sendError as suggested in the sample project, it always hits the default case and sends a general unknown error.

{"errors":[{"status":"500","title":"An unknown error occurred while trying to process this request."}]}

After diving through the code a bit it seems to only work if you send it an instance of APIError. Is that the case or am I missing something?

If that is so, perhaps the documentation should be updated slightly? Something along the lines of:

var API = require('json-api');
var APIError = API.types.Error;
...
apiRouter.use(function(req, res, next) {
    Front.sendError(new APIError(404, undefined, 'Not Found'), req, res);
});

Restify integration

The Restify framework shares syntax with Express with a few minor variations. Is Restify currently supported, and if not what would be required to ensure integration?

Relationship self-links should be contained in links object

Relationship links should be contained in a links object but json-api currently adds the self-link as a plain string to the relationship object.
Actual:

"relationships": {
  "author": {
    "data": { ... },
    "self": "example.com/books/12345/relationships/author"
  }
}

Expected:

"relationships": {
  "author": {
    "data": { ... },
    "links": {
      "self": "example.com/books/12345/relationships/author"
    }
  }
}

Should filtering be supported?

I’m opening this to discuss support of filtering. I started by needing some filtering and wondering if this package already supported it. I couldn’t find anything about it in the docs, but I see some code floating around in do-get.js. I can’t figure out exactly how to use it or what syntax is supported, at least not without spending a lot of time debugging and reverse engineering it.

But, filtering isn’t actually part of the JSON API spec, and there are bound to be cases where we need support for more complex cases than it makes sense to incorporate into this package. I think it would make sense to remove any built-in implementation of filtering. But right now, it doesn’t look like there’s a place we (users of this package) can add our own implementations back in. Or maybe there is, but it’s not obvious or not documented?

Have you given any thought to a good way forward?

Errors in beforeSave and beforeRender hooks cause hangs

beforeRender: function() {
    notAFunction()
}

Related: I can't figure out how I'm supposed to handle errors there. You mentioned something about the APIError type, but throwing or returning that causes hangs too. How do I reject a request entirely?

Multi-level include paths

I wanted to experiment a bit with multi-level include paths but noticed that they aren't implemented yet. Maybe you could help me out and tell me which result I should expect when running the following query against the json-api example. In addition to the example, please assume that there is a to-many relationship friends of type people on each people resource.

http://127.0.0.1:3000/schools/abc...?include=liaisons,liaisons.friends

Would I get a result looking like the following?


{
  "data": {
   // ...
    "links": {
      "liaisons": {
        "linkage": [{ "type": "people", "id": "..." }, ...],
        "friends": {
          "linkage": [{"type: "people", "id":"..."}, ...]
        }
      }
    }
  }
}

Remove `id` field from the auto-generated documentation

The only reason I can think of for including it is this: if the API accepted client-generated ids, it might want to include id as a field to indicate that the client can provide it. But given that APIs generated with this library don't take client-generated ids anyway, it shouldn't be there for now.

Apply transforms to linkage

Right now, when the response data being transformed is:

  • a single Resource (e.g. for non-bulk PATCH/POST requests, or requests to GET a single resource), the beforeSave/beforeRender function is called once, with the Resource as an argument
  • a Collection, the beforeSave/beforeRender function is called once for each resource, with the Resource as an argument.
  • a Linkage object (e.g. for requests to relationship endpoints), the linkage passes through untransformed. This is unacceptable, since linkage may need to be transformed too, e.g. if the server maps internal ids to different external ones, or wants to transparently rename a type.

However, I want to maintain the constraint that the transform functions are always called with a single resource object, so that users writing beforeSave/beforeRender functions don't have to do any switching on types (which the codebase already does way too much of, in part because the types aren't expressive enough).

So, my inclination, when linkage is the subject of the transform, is to wrap the linkage in an empty Resource object (preferably of the correct type, and preferably with the linkage at the right relationship path) and pass that Resource to the transform, for consistency.

This shouldn't cause any problems because, if linkage needs to be transformed, it should be transformed in the same way whether it's returned alone, as part of a larger resource, or in a collection of resources. However, we could also pass the original/raw Linkage|Collection|Resource object as an extra argument to the transform function, just in case there are some edge cases for which knowing the original subject of the transform is important.

Custom query parameter support

Are there any plans to support custom query parameters? The standard allows them.

In my particular case I have a custom dbAdapter and want to pass locale parameters to the adapter (e.g. /chapter/12345?Locale=en_us).

Does not support objects in an array

It would appear that json-api does not support arrays with objects inside, take the following schema:

var discussionSchema = new Mongoose.Schema({
  messageCount: Number,
  lastIndex: Number,
  message: [{
    index: { type: Number, required: true },
    parentMessage: Number,
    author: { type: ObjectId, ref: 'User', required: true },
    text: { type: String, required: true, validate: Validators.isLength(2, 1024) },
    dateCreated: { type: Date, default: Date.now }
  }]
});

It errors with the following stack:

TypeError: Cannot read property 'name' of undefined
    at getFieldType (/node_modules/json-api/build/src/db-adapters/Mongoose/MongooseAdapter.js:630:51)
    at /node_modules/json-api/build/src/db-adapters/Mongoose/MongooseAdapter.js:641:25
    at Schema.eachPath (/node_modules/mongoose/lib/schema.js:506:5)
    at Function.getStandardizedSchema (/node_modules/json-api/build/src/db-adapters/Mongoose/MongooseAdapter.js:636:20)
    at DocumentationController.getTypeInfo (/node_modules/json-api/build/src/controllers/Documentation.js:117:40)
    at /node_modules/json-api/build/src/controllers/Documentation.js:69:39
    at Array.forEach (native)
    at new DocumentationController (/node_modules/json-api/build/src/controllers/Documentation.js:68:27)
    ...

When I remove the message property it works fine.

Set Vary header properly

Really, I think this means we just need to add Vary: Accept to every response with the vary library.

We're clearly varying on accept for the documentation request, since we actually have distinct html and json representations. But we're also varying on accept for the api requests, since invalid Accept headers can trigger a 406 for which caches can't use the 200 response.

Delete Response has false Content-Type

I'm doing some testing with supertest:

request(app)
        .delete '/some/' + someId.toString()
        .set headers
        .expect 'Content-Type', headers['Accept']
        .expect 204
        .end done

headers =
        'Content-Type' : 'application/vnd.api+json; charset=utf-8'
        'Accept' : 'application/vnd.api+json; charset=utf-8; supported-ext=bulk'

I get the following error:

Error: expected "Content-Type" of "application/vnd.api+json; charset=utf-8; supported-ext=bulk", got "application/vnd.api+json; supported-ext="bulk""

GET, POST and PATCH work very well

Handling PUT requests

I'm trying to get this package working with Ember Data. Ember Data is generating PUT requests, but I can't figure out how to respond to them. Is there an example somewhere?

Testing the example

Hi,

When I test the example, it currently gives me this error:

/tmp/json-api-example/src/index.js:25
  , Controller = new API.controllers.API(registry);
                 ^
TypeError: undefined is not a function
    at Object.<anonymous> (/tmp/json-api-example/src/index.js:25:18)
    at Module._compile (module.js:460:26)
    at Object.Module._extensions..js (module.js:478:10)
    at Module.load (module.js:355:32)
    at Function.Module._load (module.js:310:12)
    at Function.Module.runMain (module.js:501:10)
    at startup (node.js:129:16)
    at node.js:814:3

Integrate with Koa?

I've been hearing a lot of good things about Koa, written by the team behind Express. It doesn't seem to have a json-api serializer yet, from what I can find. Have you thought about whether this project could also be used with Koa, in addition to Express? Or if not that, have you thought about writing a serializer similar to this one for Koa?

Example error

I recently discovered this repo after attempting to roll my own. Thank you for your hard work on this, I've learned first hand that it's not as easy as it looks.

As I attempted to use your example, I discovered an undefined variable.

var registry = new API.ResourceTypeRegistry();
var controller = new API.controllers.Base(Registry);
var adapter = new API.adapters.Mongoose(models);

Is 'Registry' defined somewhere? My linter is throwing an error there.

If I change 'Registry' to 'registry' (as I assume it should be), I then get the following error when pinging the endpoint:

{"error":"TypeError: Cannot call method 'adapter' of undefined\n at prototype.GET (/node_modules/json-api/build/lib/controllers/Base.js:24:31)

Again, I totally appreciate your efforts here. Thanks.

Incorrect filtering leads to problems

GET /people?filter[simple][name][going][deeper][gives][errors] gives a 500 error

It might also be helpful for newcomers if GET /people?filter[name]=John gave an error of some sort. It took me a while to find [simple].

Throw an error if model not found

Just a nicety: at the moment, when the adapter's find method gets called, if the type that it tries to find was never provided to the adapter, rather than erroring, it just hangs indefinitely. (I stupidly forgot to add a new model to my new API.dbAdapters.MongooseAdapter({...}) call, and it took me forever to figure out what was going wrong.)

When both ends of a relationship are of the same type, relationships are returned as attributes

For example, if I have the following schema

var PersonSchema = new Schema({
    parent: {
        type: Schema.ObjectId,
        rel: 'Person'
    }
})

mongoose.model('Person', PersonSchema)

and I GET request it, I get something like

{
    "data": [
        {
            "id": "1",
            "type": "person",
            "attributes": {
                "parent": "2"
            },
            "relationships": {}    
        }
    ]
}

instead of

{
    "data": [
        {
            "id": "1",
            "type": "person",
            "attributes": {},
            "relationships": {
                "parent": {
                    "id": "2",
                    "type": "person"
                }
            }    
        }
    ]
}

PUT handlers hang

If you try to do this…

app.put('/people/:id', handler)

…requesting PUT /people/1 results in a hang. 404 would probably be best, or a crash with a nice error message telling the dev to use PATCH.

How do I remove a particular resource from the response entirely?

I can get the beforeRender hook to remove specific fields, but I can't seem to get it to remove the resource altogether.

Looking at the code, it seems as though the way to do it is to replace the resource with undefined, but that logic is only applied to Collections and returning undefined from the beforeRender hook just results in a hung request for me.
