ardanlabs / service
Starter-kit for writing services in Go using Kubernetes.
Home Page: https://www.ardanlabs.com
License: Apache License 2.0
I think it would be good if we could add authz into the service. Authorization is an important part of any system, and OPA is a really interesting product: it is part of the CNCF and has integrations with lots of popular products like Kubernetes, Istio, etc.
OPA is a lightweight, general-purpose policy engine that can be co-located with a service.
Specifically talking about internal/platform/docker/docker.go
I see that we use os/exec to pipe out commands to docker. This works right now, but if the commands start getting more complex, it's going to get messy.
Docker has an excellent official client which I believe would make this a lot cleaner. The README says that docker is going to be an integral part of the project. If so, I think having this as a dependency should not be an issue.
https://godoc.org/github.com/docker/docker/client
I'm happy to submit a PR if you like.
In the handler, you defined:
// Product represents the Product API method handler set.
type Product struct {
db *sqlx.DB
// ADD OTHER STATE LIKE THE LOGGER IF NEEDED.
}
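One common answer to the second question is a small storage interface so handlers depend on behavior rather than on *sqlx.DB directly. A hypothetical sketch of that repository shape (the ProductStore name, fields, and in-memory fake are all assumptions for illustration, not the repo's API):

```go
package main

import "fmt"

// Product is a minimal domain type for the sketch.
type Product struct {
	ID   string
	Name string
}

// ProductStore is a hypothetical repository interface; a *sqlx.DB-backed
// implementation would live in its own package and own all the SQL.
type ProductStore interface {
	List() ([]Product, error)
}

// memStore is an in-memory fake, useful for handler tests.
type memStore struct{ items []Product }

func (m *memStore) List() ([]Product, error) { return m.items, nil }

// ProductHandlers holds the handler-set state; swapping the concrete
// store never touches the handlers themselves.
type ProductHandlers struct {
	store ProductStore
}

func main() {
	h := ProductHandlers{store: &memStore{items: []Product{{ID: "1", Name: "Comic"}}}}
	ps, err := h.store.List()
	fmt.Println(len(ps), err)
}
```

The trade-off is extra indirection; passing *sqlx.DB in the handler struct keeps the code flat when there is only one storage backend.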
What's the reasoning behind passing database access down the call stack? Also, why not have a dedicated layer, a repository, to manage all the queries to the db?
Is there any plan to add the image server functionality to this package?
It is normally not an issue, but from my point of view, a simple recommendation: to build the project a private key private.pem must be provided, so it would maybe be better for the makefile to generate the private key automatically.
My example in the makefile
all: keys sales-api metrics up
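A sketch of what the keys target could look like (this assumes openssl is on the PATH and a 2048-bit key is acceptable; the repo may want different parameters):

```make
# keys: generate private.pem if it does not already exist (sketch).
keys:
	[ -f private.pem ] || openssl genrsa -out private.pem 2048
```

Guarding with the existence check avoids overwriting a key a user already generated.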
Right now internal/mid/errors.go logs the errors it receives using %+v. If the error in question was wrapped with pkg/errors then it will print a stack trace.
This is good for unexpected errors but rather annoying for normal errors like a bad request. For example, if someone tries to make a product with a quantity of -10 I don't need to see a whole stack trace.
We implemented our own code to marshal the trace context across the wire. This package does things a bit better.
https://godoc.org/go.opencensus.io/plugin/ochttp/propagation/tracecontext
In this project, I see it already uses HTTP for all API methods. If I want to create a WebSocket method between the client and server after the token method, is it possible to add it directly in routes.go?
In many APIs I've developed we have List endpoints, but they don't return the entire collection, just a single page of records. We should discuss whether this is worth implementing in service.
instead of opencensus?
In Downloading The Project, there is a typo in the command:
$ GO111MODULE=off go get -d gitHub.com/ardanlabs/service
when it should be
$ GO111MODULE=off go get -d github.com/ardanlabs/service
instead!
I had started this repo quickly for a training earlier this year.
https://github.com/ardanlabs/srvtraining
I think the code needs to be broken down into consumable parts. It's hard in the classroom to copy and paste the code and clean it up. It would be nice to have exercises in between to allow people to get comfortable as you add more complexity.
Thoughts?
Consider adding a CircleCI / TravisCI file.
On the wiki Getting Started page https://github.com/ardanlabs/service/wiki/Getting-Started
Could we add staticcheck as a prerequisite along with docker? The tests fail if it is not installed or cannot be found in the PATH.
/bin/bash: staticcheck: command not found
makefile:50: recipe for target 'test' failed
make: *** [test] Error 127
We need to add a minimal but complete layer for authentication and authorization.
If I add the file system with http.Handle in the routes, is it automatically closed when the API shuts down?
// routes.go
func API(shutdown chan os.Signal, log *log.Logger, db *sqlx.DB, authenticator *auth.Authenticator) http.Handler {
	app := web.NewApp(shutdown, mid.Logger(log), mid.Errors(log), mid.Metrics(), mid.Panics(log))

	check := Check{
		db: db,
	}
	app.Handle("GET", "/v1/health", check.Health)
	...
	http.Handle("/files", http.FileServer(http.Dir(uploadPath))) // This line

	return app
}
In the HTTP handler, I see that ctx is passed explicitly (along with the *http.Request).
Do you have any reason not to use Request.Context()? And Request.WithContext(ctx) when the context must be augmented?
Thanks for putting together this repo. It is a goldmine to learn more about structuring production-ready applications!
Initial make fails with:
Step 10/27 : COPY private.pem private.pem
COPY failed: stat /var/lib/docker/tmp/docker-builder401590106/private.pem: no such file or directory
makefile:20: recipe for target 'sales-api' failed
Steps to reproduce:
git clone --depth 1 https://github.com/ardanlabs/service.git
cd service/ && make
We need to add support for deploying to k8s. @jcbwlkr has a great idea of giving students a droplet on DO with a k8s environment already set up. We don't need to teach installing k8s but deploying to it. Though this has a cost per student in each class, it is minimal and worth it.
We need to set up a k8s image and have instructions for creating a student lab environment.
Then teach how to deploy.
I have found that some companies lack a well-defined philosophy on how to expose functionality... i.e. I need to develop functionality X: do I expose it as a shared library or an endpoint on a service? Perhaps we could define a philosophy on this topic as part of this service / course?
We need to create the relevant k8s yaml assets that will enable the service to be deployed to k8s; currently we're using the docker-compose.yml file to deploy locally.
We will be adding a flag to indicate if the service should run insecure.
There is a mention in the readme that the project contains distributed logging, but as far as I understand there is a simple logger that writes to stdout and nothing more. Do you plan to achieve distributed logging by adding some logging driver to docker-compose, or in another way?
### Getting the project
...
$ go get -u github.com/ardanlabs/gotraining
Should be
### Getting the project
...
$ go get -u github.com/ardanlabs/service
Peter Bourgon, in his talk in Iceland, made the point that services should have flags for all possible configuration options. This helps operations and documentation. The cfg package should add support for this since it is the single place for configuration knowledge.
The .Dockerfile extension provides better editor-friendliness, i.e. metrics.Dockerfile instead of dockerfile.metrics.
It appears you are unable to do: go run main.go something.txt
... i.e. reading a standard non-flag argument, because a call to flag.Process errors out in the default case:
internal/platform/flag/flag.go

	switch {
	case strings.HasPrefix(osArg, "-test"):
		return nil
	case strings.HasPrefix(osArg, "--"):
		flag = osArg[2:]
	case strings.HasPrefix(osArg, "-"):
		flag = osArg[1:]
	default:
		return fmt.Errorf("invalid command line %q", osArg)
	}
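One possible fix is to collect positional arguments in the default case instead of returning an error. A self-contained sketch of that idea (this is a suggested behavior, not the package's current one):

```go
package main

import (
	"fmt"
	"strings"
)

// parseArgs splits os.Args-style input into flag names and positional
// arguments rather than rejecting anything that lacks a dash prefix.
func parseArgs(osArgs []string) (flags, positional []string) {
	for _, osArg := range osArgs {
		switch {
		case strings.HasPrefix(osArg, "--"):
			flags = append(flags, osArg[2:])
		case strings.HasPrefix(osArg, "-"):
			flags = append(flags, osArg[1:])
		default:
			positional = append(positional, osArg) // e.g. something.txt
		}
	}
	return flags, positional
}

func main() {
	f, p := parseArgs([]string{"--verbose", "something.txt"})
	fmt.Println(f, p)
}
```

The caller can then decide whether positional arguments are valid for the command instead of the parser hard-failing.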
Implement authorization using the claims that get parsed from JWT.
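A minimal sketch of what that check could look like, assuming the claims carry a list of roles (the Claims fields and HasRole helper here are assumptions, not the repo's auth.Claims):

```go
package main

import "fmt"

// Claims is a sketch of what the JWT middleware might parse.
type Claims struct {
	Subject string
	Roles   []string
}

// HasRole reports whether the claims contain at least one of the
// required roles; middleware can reject the request otherwise.
func (c Claims) HasRole(want ...string) bool {
	for _, w := range want {
		for _, r := range c.Roles {
			if r == w {
				return true
			}
		}
	}
	return false
}

func main() {
	c := Claims{Subject: "u1", Roles: []string{"USER"}}
	fmt.Println(c.HasRole("ADMIN"), c.HasRole("ADMIN", "USER"))
}
```

An authorization middleware would run after the authentication middleware, pull the claims from the context, and call HasRole before invoking the handler.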
There is an intermittent failure in the tests for internal/platform/trace. If you look at the build output for this https://circleci.com/gh/ardanlabs/service/19 it failed, but when I restarted the build it passed.
The tests say
trace_test.go:69: Test: 2 When running test: SendOnTime
trace_test.go:99: ✓ Should have zero spans.
trace_test.go:112: ✓ Should have a batch to send.
trace_test.go:117: Got : [0xc1fbe0 0xc1fcc0]
trace_test.go:118: Want: [0xc1fb00 0xc1fbe0 0xc1fcc0]
trace_test.go:119: ✗ Should have an expected match of the batch to send.
Just curious, is the sales-api accessible? Because I'm getting the following after running make up:
pull access denied for sales-api-amd64, repository does not exist or may require 'docker login'
Thanks
Skaffold is a tool that makes developing applications for kubernetes easier. It supports a live-reload development mode where it watches for changes to code and will trigger docker builds / kubectl apply's (really handy when using minikube during local development). It also can be used to simplify build/deploy pipelines. Check it out:
I like the idea of having a type config struct in the main package. It provides a simple, declarative way of showing all inputs that can affect application behavior. You can then populate it from the env (i.e. https://github.com/kelseyhightower/envconfig) or somewhere else on startup.
I see we use gopkg.in/mgo.v2, which is not supported anymore. I think we should consider starting to use the official MongoDB driver for Go.
A fresh clone currently takes a good amount of time because of binary files in the commit history.
You can identify them by running this oneliner: git rev-list --objects --all | grep -f <(git verify-pack -v .git/objects/pack/*.idx| sort -k 3 -n | cut -f 1 -d " " | tail -10)
1930a9b3151f0dd2b3ff245b9f77fb83407ede70 cmd/search/search
728a7a93403083b8fd06accde6c79d6ed8cd82a4 cmd/search/search
41265c8f26433009e979fa88ca50fa5395dc734a cmd/sidecar/metrics/metrics
8ba505fcf5bad430d4e3e7df1a9b38802260c9a2 cmd/sidecar/metrics/metrics
90a30e0e48d378088c3a594688da3b5a2b785ce3 cmd/sidecar/metrics/metrics
e82d16d2e603ec8b6f58aef127d871331c150306 cmd/crud/crud
97e1a3e5cf9b86726d960fb30d8cf05189cb681b cmd/crud/crud
fc933685fe478c4686f4733e4f8d01242906cc1e cmd/crud/crud
218ad79408ee5e4e2f71748ecccfc8e1d4f54133 cmd/crud/crud
099217f9bae311d3159d43af64e2ccac8076d315 cmd/crud/crud
crud    | 218ad794 (7.6 MB), 099217f9 (7.6 MB), ...
metrics | 90a30e0e (5.8 MB), 8ba505fc (6.2 MB), 41265c8f (6.2 MB)
search  | 1930a9b3 (11.8 MB), 728a7a93 (12.4 MB)
My first proposal is that the dockerfiles be moved out of the root. We can get away with it with only 3 services, but the more services you have the messier the root is going to be.
I have a suggestion on how I would structure it, drawn inspiration from previous client projects and my own personal projects:
cmd/
sales-api/
deploy/
Dockerfile
apiary.apib
DEPLOY.md
sidecar/
metrics/
deploy/
Dockerfile
apiary.apib
DEPLOY.md
tracer/
deploy/
Dockerfile
apiary.apib
DEPLOY.md
internal/
docker-compose.yml
makefile
I've omitted a decent amount of file structure above, but that should be enough to see what I'm proposing. A deploy folder can be found in each program's directory. The deploy folder contains the Dockerfile (most important) and maybe some documentation related to the service/environment needed for deployment (examples being apiary.apib and DEPLOY.md).
Currently the compose file is fine the way it is; the Dockerfiles would, however, have to be changed. I like the idea behind using a container not only to run the program but also to build it. It makes deployment even more trivial. I normally do not do this. I normally build the binary and only use docker to host it, automated through makefile rules, and link the containers in a network through compose. This keeps the Dockerfiles really easy to read, trivial, and fast. In order to keep building the way they are currently built, you'll have to make minor modifications to the Dockerfiles and build from the root, allowing the build context to be in a place where it can properly copy all the files.
My second proposal is pretty simple: make a .dockerignore and clean up the COPY commands. I noticed we were blindly copying the entire contents of the project into each image to build. We really only need the module files, internal/, and the specific cmd/ subfolder for the requested service.
A DO droplet is the default for the k8s environment deployment; however, we should have an option for those who want to run locally.
Based on the blog post regarding package oriented design, packages at the same level inside internal should not be importing each other. However, mid imports auth here. Is there a reason why this is ok? Thank you.
Our Go code should all work with modules in 1.11 and be buildable outside of GOPATH. The problem is some of our tooling like the makefile have explicit dependencies on GOPATH. Remove those.
We need to create a script / tool that will provision the Digital Ocean (for now) infra required and install kubernetes ready for the service to be deployed to:
The project uses the default admin password "gophers" and private.pem; how do I generate my own secure credentials?
Consider removing time.Now() and passing in a now time.Time parameter into functions such as internal/user.Create(...) so that time is treated as a dependency.
https://github.com/ardanlabs/service/blob/master/internal/user/user.go#L72
Add a CLI or API endpoint to demonstrate token generation.
The material would be easier to digest if our main service wasn't simply crud and if we were manipulating records besides just User. Some ideas include:
- Task values.
- Pet values.
- Comics.
- Products I have for sale in my garage, how much they cost, and who they came from. Many people do multi-family sales and it could be useful to know that item 12345 came from Cindy so the money should go to her.
We could probably come up with several and bikeshed this forever.
Currently auth.Key is only used to retrieve the Claims from the current Context:
claims, ok := ctx.Value(auth.Key).(auth.Claims)
It could be made unexported (key) by adding two small helpers in the auth package:
// ClaimsValue extracts the Claims from the context.
func ClaimsValue(ctx context.Context) (Claims, bool) {
	claims, ok := ctx.Value(key).(Claims)
	return claims, ok
}

// WithClaims adds the Claims to the context.
func WithClaims(ctx context.Context, claims Claims) context.Context {
	return context.WithValue(ctx, key, claims)
}
Which would be used like this:
claims, ok := auth.ClaimsValue(ctx)
If you think this approach could make sense, I would be happy to make a PR.
Having some sort of interface definition is extremely useful as it provides a clear API contract for clients. For a REST API this should probably follow the OpenAPI spec (https://en.wikipedia.org/wiki/OpenAPI_Specification).
Usually there is some sort of automated link between the server's codebase and the IDL. This can be one of two directions: code generated from IDL or IDL generated from code. A few thoughts on each direction:
These items should be tackled sooner rather than later b/c if we choose to link the codebase and IDL in some automated way, it has big implications on the project's code structure.
Hi Bill,
On the search branch I bootstrapped the services with make up, and browsing the health endpoint works fine using http://0.0.0.0:5000/health
But what endpoint should I hit to get search.html displayed?
cmd/search/internal/handlers/routes.go shows that /static/ is stripped away from the path, so my guess is that http://0.0.0.0:5000/ should serve the UI but I currently get 404 for that.
Same for http://0.0.0.0:5000/css/main.css
SEARCH : 2020/02/20 17:28:00.746631 main.go:78: main : Started : Application Initializing version "c391787a78543663191e11e4f555e02c9f3e55a7"
SEARCH : 2020/02/20 17:28:00.746709 main.go:85: main : Config :
--web-api-host=0.0.0.0:5000
--web-debug-host=0.0.0.0:6000
--web-read-timeout=5s
--web-write-timeout=5s
--web-shutdown-timeout=5s
--zipkin-local-endpoint=0.0.0.0:5000
--zipkin-reporter-uri=http://zipkin:9411/api/v2/spans
--zipkin-service-name=search
--zipkin-probability=0.05
SEARCH : 2020/02/20 17:28:00.746726 main.go:102: main : Started : Initializing zipkin tracing support
SEARCH : 2020/02/20 17:28:00.746831 main.go:95: main : Debug Listening 0.0.0.0:6000
SEARCH : 2020/02/20 17:28:00.747080 main.go:150: main : API Listening 0.0.0.0:5000
SEARCH : 2020/02/20 17:31:59.074752 logger.go:34: d3d41c829cb0d6ea83aed5b97bc624e9 : (500) : GET /search -> 192.168.48.1:35650 (1.299281ms)
SEARCH : 2020/02/20 20:10:29.941274 logger.go:34: 7761c6e16a413deb561770312fb6877d : (200) : GET /health -> 192.168.48.1:48664 (33.701µs)