Comments (16)
I think the generated service files will add a tremendous value on their own, the question in my head is more whether and how we want to add a more "natural" API on top of that.
Why not copy the JS SDK architecture, and just type it statically at the bounds of the API?
I like the general idea, but I'm not well-versed in the JS SDK architecture. Could you please outline how this suggestion differs from the PR you've submitted? I get the difference between the protocols, but how would the JSON-defined API become a typed Dart file? I'm just trying to understand what the difference would be compared to package:googleapis (the generated API libs) and package:gcloud (a well-typed lib built on top of googleapis).
Btw. thinking about it, I think the least controversial HTTP library to use as the transport layer would be package:http.
from aws_client.
The hardest issue I found when writing a general code generator was defining the code inside the service functions, e.g. DynamoDB.getItem(). There is very clear type information in the API JSON files about what goes into and what comes back from each function. The trouble is defining what happens inside the function: the service metadata declares the protocol, and the function definition says which HTTP operation it uses, etc.
The JS SDK has generic operations done in each service call:
- Set the HTTP operation (GET/POST etc) according to the JSON spec
- Generate a URI according to the JSON spec
- Generate the required headers according to the JSON spec
- Set the host prefix if endpoint discovery is enabled
- Format payload according to the JSON spec (JSON/XML/Query etc)
- Make the call
- Extract errors in a generic, ergonomic form according to the JSON spec
- OR
- Extract data from headers/body/http codes etc into the specified structure according to the JSON spec
- Return the response
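The steps above could be sketched in Dart roughly as follows. This is a hypothetical outline, specialized to the `json` protocol for brevity; every name here (`AwsRequest`, `prepareJsonRequest`) is made up for illustration, and signing and error extraction are omitted:

```dart
import 'dart:convert';

/// Hypothetical carrier for a prepared request; not a final API.
class AwsRequest {
  final String method; // HTTP operation taken from the JSON spec
  final Uri uri; // generated from the endpoint and the spec's requestUri
  final Map<String, String> headers; // required headers per the spec
  final String body; // payload formatted per the protocol
  AwsRequest(this.method, this.uri, this.headers, this.body);
}

/// Steps 1-5 of the list above, for the `json` protocol.
AwsRequest prepareJsonRequest({
  required String endpoint,
  required String target, // e.g. 'DynamoDB_20120810.GetItem'
  required Map<String, dynamic> payload,
}) {
  return AwsRequest(
    'POST',
    Uri.parse(endpoint),
    {
      'Content-Type': 'application/x-amz-json-1.0',
      'X-Amz-Target': target,
    },
    json.encode(payload),
  );
}
```

A generated service method would only have to supply the endpoint, target and typed payload; the rest is shared.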
My generated code is incomplete and probably non-functional in its current form. The code generation can contribute the Dart types and the serialization to and from those types via Dart JSON.
I am not well versed in the package:googleapis or package:gcloud packages, but from the short look I had, it looks like the latter has more hand-crafted code for ergonomic use. The AWS equivalent would be the DynamoDB.DocumentClient versus the raw DynamoDB.
DocumentClient is handcrafted to make the API more ergonomic. We could write an equivalent, although we wouldn't be able to autogenerate that code from a specification.
I have no experience with package:http, but it can surely suit our needs.
> serializing to and from those types into Dart JSON
One idea that I've explored recently is to generate the Dart classes with package:json_serializable; that way the JSON-encode and -decode parts are handed off to another library. One example of this can be seen here, in an experimental HTTP-client/HTTP-server stub library. There I take a service definition and generate various classes, including the messages, and the referenced library generates the fromJson and toJson methods.
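For illustration, a message class of the kind such a generator might emit (the shape is a hypothetical DynamoDB-style request; with package:json_serializable the fromJson/toJson bodies below would live in a generated `.g.dart` part file, but they are written out by hand here so the example is self-contained):

```dart
/// Sketch of a generated message class. With json_serializable the
/// fromJson/toJson implementations would come from build_runner output;
/// they are inlined here for the sake of a runnable example.
class GetItemRequest {
  final String tableName;
  final Map<String, dynamic> key;

  GetItemRequest({required this.tableName, required this.key});

  // Maps the upstream wire names (TableName, Key) to Dart fields.
  factory GetItemRequest.fromJson(Map<String, dynamic> json) =>
      GetItemRequest(
        tableName: json['TableName'] as String,
        key: json['Key'] as Map<String, dynamic>,
      );

  Map<String, dynamic> toJson() => {'TableName': tableName, 'Key': key};
}
```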
> by the short look I had at it, it looks like the latter have more hand crafted code for ergonomic use
It is exactly that. I think as a first step, we could aim to have the "raw" generated client first. Your description of what it should do seems plausible and doable. Figuring out how it should work as a high-level library may be done in the second phase. What do you think?
I was using package:json_serializable in the generation approach as well. It greatly simplifies the development on our end.
I agree on starting out with the purely generated solution. Typing the interfaces is basically solved with the generator I have written so far. The remaining part is creating the core functionality that the generated functions will call.
We can start small and implement the protocols and small classes like request and response.
We'll have to translate idiomatic JS to idiomatic Dart, and it will take a moderate upfront effort to build these helper classes. But they will apply to all services, which is a lot of saved work.
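One such shared helper could be a protocol abstraction that every generated service reuses. A minimal sketch, assuming a simple encode/decode split (the interface and names are not final):

```dart
import 'dart:convert';

/// Hypothetical shared protocol abstraction.
abstract class Protocol {
  /// Serializes a request input into the wire format.
  String encodeBody(Map<String, dynamic> input);

  /// Parses a response body back into a map that generated
  /// fromJson constructors can consume.
  Map<String, dynamic> decodeBody(String body);
}

/// The `json` protocol is the simplest case; `query`, `rest-json`,
/// `rest-xml` and `ec2` would be sibling implementations.
class JsonProtocol implements Protocol {
  @override
  String encodeBody(Map<String, dynamic> input) => json.encode(input);

  @override
  Map<String, dynamic> decodeBody(String body) =>
      json.decode(body) as Map<String, dynamic>;
}
```

The generated service code then only picks the right `Protocol` implementation based on the service metadata.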
I have started working on the downloading and generation part so that the SDK can easily be updated and generated when upstream SDKs update their definitions.
How are we coordinating this work, do we merge work into a feature branch?
> How are we coordinating this work
The work should be merged into master. I think there is not much parallel activity going on, and my main concern is to honour semantic versioning and offer an easy upgrade path for current users. That can be done while working on master, and it also makes coordination easier.
Let's move the current root-level package to the aws_client/ subdirectory. (I'll do that soon.) Separating the development-related tools from the actual library is probably better for the users. I wouldn't move the development-related tools to a separate repository, though; it would be harder to keep them in sync.
With that, let's create a root-level directory (maybe generator?) for your toolkit. I'll add a few comments on your current code, and you may already submit PRs for that directory structure.
> We can start small and implement the protocols and small classes like request and response.
Agreed. It may take me a few days until I can carve out a reasonable time to do the coding myself, but I'll review and accept PRs.
I've made the "shell" of the code generation: #16
@Schwusch: what do you have in mind for the next step?
I'm thinking from a code generation perspective: if we have the protocols implemented, we could "prepare" requests and construct responses. I'm not an expert on package:http, but I can do a draft of the json protocol, for example.
Hopefully, the job is simply translating JS to Dart. I'm going to give it a go this weekend.
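As a starting point, a first draft of the `json` protocol round trip on top of package:http might look like this (the function name and endpoint/target values are illustrative, and request signing is left out; it would wrap this call):

```dart
import 'dart:convert';
import 'package:http/http.dart' as http;

/// Draft of one `json`-protocol call. Errors are simplified; the real
/// error shape would also be driven by the JSON spec.
Future<Map<String, dynamic>> jsonProtocolCall(
  http.Client client, {
  required Uri endpoint,
  required String target, // '<TargetPrefix>.<OperationName>'
  required Map<String, dynamic> input,
}) async {
  final response = await client.post(
    endpoint,
    headers: {
      'Content-Type': 'application/x-amz-json-1.0',
      'X-Amz-Target': target,
    },
    body: json.encode(input),
  );
  if (response.statusCode != 200) {
    throw Exception('AWS error ${response.statusCode}: ${response.body}');
  }
  return json.decode(response.body) as Map<String, dynamic>;
}
```

Taking `http.Client` as a parameter also keeps the call testable with the `MockClient` from package:http/testing.dart.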
@isoos I've done my take on the architecture in #21. It's a draft, and I haven't thought everything out, but it's a start.
I've come to realise that the model classes might be the best things to start with.
My plan so far has been to convert the JSON definitions to a const Dart Map, but that has proven to be an encoding nightmare. The pattern field contains all possible permutations of Dart-illegal character sequences; it can't be contained in a Dart string.
Still, the metadata has to be compilable in order for us to avoid runtime loading from disk, which I believe is not feasible for a library anyway. The metadata contains all the validation, parsing, etc.
One idea I have is to encode the JSON spec to Base64, store it as a const String in the service class, and then decode it at runtime, marshalling it into the model classes mentioned above.
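The round trip itself is simple; a minimal sketch of the two halves (function names are placeholders):

```dart
import 'dart:convert';

/// Codegen side: turn the raw JSON spec into a Dart-safe literal,
/// sidestepping the string-escaping problems of `pattern` fields.
String encodeSpec(Map<String, dynamic> spec) =>
    base64.encode(utf8.encode(json.encode(spec)));

/// Runtime side: what a generated service class would do with its
/// embedded const String before marshalling into model classes.
Map<String, dynamic> decodeSpec(String encoded) =>
    json.decode(utf8.decode(base64.decode(encoded)))
        as Map<String, dynamic>;
```

Base64 output only uses `[A-Za-z0-9+/=]`, so the embedded literal never needs escaping, regardless of what the `pattern` fields contain.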
@isoos what are your thoughts?
> The pattern field contains all possible permutations of Dart illegal character sequences, it can't be contained as a Dart string.
I think it is not critical to provide validation for values at this point. The spec may declare a max size, a pattern, or other constraints, but let's get the service call working first. Validation can be added later as an optional feature; if a value is invalid, it will be rejected on the AWS side anyway, so why double the work?
> One idea I have is to encode the JSON spec to Base64, and store it as a const String in the service class, and then decode it at runtime, marshalling into the model classes mentioned above.
This seems overly complex, and I'm not really convinced we should do it at runtime. Parsing the JSON spec at codegen time should not be fundamentally different; maybe we can't cover 100% of it, but 90% should be doable. E.g. let's drop the field constraints and focus only on the bare minimal message structure definitions and message signing. What do you think is the hardest part when parsing the JSON spec in preparation for the codegen? (I'll give it a go soon...)
> I think it is not critical to provide a verification for values at this point
You're right, it can be a later issue. The main argument for adding it is SDK ergonomics, since the backend might return cryptic error messages.
> What do you think is the hardest part when parsing the JSON spec
Well, the complexity has to go somewhere: the logic that lives at runtime in the JS SDK will live in the code generation on our end. For example, we will have to split the codegen into a fairly involved architecture where the different protocols take different approaches. What we might gain at runtime by not shipping the JSON spec, we will lose in code size and duplication. Any bugs will be in the code generator, and testing generated code is generally more difficult and takes longer.
Either way, it can be beneficial to create model classes that marshal the JSON spec into typed classes. It will then be easier for us, both in codegen and perhaps at runtime, whichever route we choose. A complete spec of those is found here.
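A minimal sketch of one such model class, covering a single operation entry of a service definition file (the field subset and class name are illustrative; the JSON keys follow the upstream spec):

```dart
/// Typed model of one operation in a service definition JSON file.
class Operation {
  final String name;
  final String httpMethod;
  final String requestUri;
  final String? inputShape;
  final String? outputShape;

  Operation.fromJson(this.name, Map<String, dynamic> json)
      // Defaults mirror what is commonly implied when the spec
      // omits the `http` block.
      : httpMethod = (json['http']?['method'] as String?) ?? 'POST',
        requestUri = (json['http']?['requestUri'] as String?) ?? '/',
        inputShape = json['input']?['shape'] as String?,
        outputShape = json['output']?['shape'] as String?;
}
```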
I've done a draft of the model classes here: #23
FYI, I did some research, and it seems like this is the route AWS themselves have gone with their Java SDK v2! Perhaps we can draw some inspiration from them.
Since we are deep into the implementation of this request, perhaps this can be closed?
Discussions can be held in specific issues, and perhaps linked to a project task?