
bricks's People

Contributors

agandrabur, alessandromiceli, alinrosca97, arnold-iakab, daemonfire300, danielbalteanu96, dependabot[bot], ferlonas, fezde, floaust, flofuenf, gabrielruiu, jpwenzel, mihailozarinschi, mikekasperlik, monstermunchkin, nicmue, pieceofsoul, pnull, roberth1988, schlottip, threez


bricks's Issues

Extended generated REST API

Validation

  • use the x-validator attribute in the spec to select additional validations and apply them directly (e.g. ISO codes)
  • add more validations for simple types like country ID

Scopes

  • x-must-scopes defines the set of scopes that need to be checked in order to make a request (see the sketch below). Special values:
    • *: all of the already provided scopes need to be granted
    • 1: one of the already provided scopes needs to be granted
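A minimal sketch of how the x-must-scopes semantics could be evaluated; the package and function names, and the interpretation of "*" and "1" relative to the scopes already provided in the spec, are assumptions, not the generated code:

package scopes

// scopesSatisfied is a hypothetical helper: mustScopes is the value of
// x-must-scopes, declared are the scopes already provided in the spec for
// the operation, granted are the scopes of the current token.
func scopesSatisfied(mustScopes string, declared, granted []string) bool {
	has := make(map[string]bool, len(granted))
	for _, s := range granted {
		has[s] = true
	}
	switch mustScopes {
	case "*": // all of the declared scopes need to be granted
		for _, s := range declared {
			if !has[s] {
				return false
			}
		}
		return true
	case "1": // at least one declared scope needs to be granted
		for _, s := range declared {
			if has[s] {
				return true
			}
		}
		return false
	default: // a single explicit scope
		return has[mustScopes]
	}
}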

Generate gitlab ci file

This ci file will be inspired by the ci file that is generated for go-microservice.

  • Should be created during pace service new ...
  • and be accessible via pace service generate gitlab-ci ...

Implement OAuth2 Middleware

Implement middleware that handles OAuth2.

  • create a new muxable middleware, see https://github.com/gorilla/mux#middleware
  • implement the new middleware type oauth2.Middleware
  • put the user info and client scope into the context, see https://www.youtube.com/watch?v=LSzR0VEraWw
    • create a function to check the client scope: oauth2.HasScope(ctx context.Context, scope string) bool
    • create a function to receive values from the id provider: oauth2.Get(ctx context.Context, key string) string
    • create a function to receive the BearerToken from the request (created by the id provider): oauth2.BearerToken(ctx context.Context) string
    • oauth2.ClientID(ctx context.Context) string

Example program:

package main

import (
	"fmt"
	"log"
	"net/http"

	"github.com/gorilla/mux"
	"lab.jamit.de/pace/web/libs/go-microservice/http/oauth2"
)

func main() {
	r := mux.NewRouter()
	r.Use(oauth2.Middleware{
		// ... not exactly sure what we need to authenticate against
		// cockpit ...
		Host:         "id.pace.cloud",
		ClientID:     "dtc",
		ClientSecret: "some secret",
	})
	r.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		log.Printf("AUDIT: User %s does something", oauth2.Get(r.Context(), "X-UID"))

		if oauth2.HasScope(r.Context(), "dtc:codes:read") {
			fmt.Fprintf(w, "The secret code is 42")
			return
		}

		fmt.Fprintf(w, "Your client may not have the scopes to see the secret code")
	})

	srv := &http.Server{
		Handler: r,
		Addr:    "127.0.0.1:8000",
	}

	log.Fatal(srv.ListenAndServe())
}

Implement maintenance - errors

Add package for error wrapping and sentry message handling

License

Currently, the project has the MIT license set. I would argue that we should use Apache 2.0, which is very similar but has the extra "we don't sue you" (patent) clause. It is a good choice for businesses.

Implement Maintenance - pprof / health

We should run our microservices with pprof enabled to allow profiling and debugging of staging and production systems more easily.

Profiling only happens if the endpoint is hit (with a small performance overhead during that time), see https://golang.org/src/net/http/pprof/pprof.go


  • Implement the pprof handler (see the sketch after this list)
  • Implement a "/health" endpoint that returns 200 -> important for the API gateway
  • Implement an http handler that contains all maintenance endpoints
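A minimal sketch, using only the standard library, of how the pprof and "/health" endpoints could be exposed; the bricks maintenance handler may wire this up differently:

package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers the /debug/pprof/* handlers on http.DefaultServeMux
)

func main() {
	// plain health endpoint for the API gateway: always returns 200
	http.HandleFunc("/health", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
		w.Write([]byte("OK"))
	})

	// http.DefaultServeMux now serves both /health and /debug/pprof/*
	log.Fatal(http.ListenAndServe("127.0.0.1:8000", nil))
}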

Default buckets for paceHTTPDuration are off

		prometheus.HistogramOpts{
			Buckets: []float64{.1, .25, .5, 1, 2.5, 5, 10},

better would be

		prometheus.HistogramOpts{
			Buckets: []float64{10, 50, 100, 300, 600, 1000, 2500, 5000, 10000, 60000},

same is true for responseSize, currently:

			Buckets: []float64{100, 200, 500, 900, 1500},

better: 100 B, 1 KB, 10 KB, 100 KB, 1 MB, 5 MB, 10 MB, 100 MB

Tasks

  • adapt the numbers (see the sketch below)
  • check the other buckets for general validity
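A sketch of how the proposed buckets could be declared; the metric and label names are assumptions, and the values assume durations are observed in milliseconds and sizes in bytes:

package metrics

import "github.com/prometheus/client_golang/prometheus"

var paceHTTPDuration = prometheus.NewHistogramVec(
	prometheus.HistogramOpts{
		Name:    "pace_http_request_duration_milliseconds",
		Help:    "HTTP request duration in milliseconds",
		Buckets: []float64{10, 50, 100, 300, 600, 1000, 2500, 5000, 10000, 60000},
	},
	[]string{"method", "path"},
)

var paceHTTPResponseSize = prometheus.NewHistogramVec(
	prometheus.HistogramOpts{
		Name: "pace_http_response_size_bytes",
		Help: "HTTP response size in bytes",
		// 100 B, 1 KB, 10 KB, 100 KB, 1 MB, 5 MB, 10 MB, 100 MB
		Buckets: []float64{100, 1 << 10, 10 << 10, 100 << 10, 1 << 20, 5 << 20, 10 << 20, 100 << 20},
	},
	[]string{"method", "path"},
)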

Tests failing when creating new service because of go-microservice dependency

The following line https://github.com/pace/bricks/blob/master/maintenance/log/log.go#L127 leads to the following result, after creating a new service and executing the test suite

https://github.com/pace/web/service/poi/blob/621ddb17fd0038ade2816ab79c28ef00729c9ec2/Gopkg.toml#L30

See also rs/zerolog#116

Used Go version: 1.11

Can be fixed by adding the following https://github.com/pace/web/service/poi/blob/master/Gopkg.toml#L30 to Gopkg.toml and running dep ensure afterwards.

Build a Cloud SDK for Golang

What is that?

We want to use Golang to build (most of) our microservices. Since some of these services need to access other services, we should encapsulate that functionality. This project should integrate all the communication stuff as soon as we have either a grant, an access token, or a session token.

We should build an SDK that handles Cloud requests (through the PACE API Gateway). This SDK should then:

  • be used by our own (micro)services
  • enable partners to build services for our platform
  • ... and be made open source, as we did with other Cloud SDKs on https://github.com/pace-car

Add gitlab ci pipeline

The tests for the go-microservice kit should be executed on each check-in:

  • docker based ci pipeline
  • execute unit tests with coverage
  • execute code lint

Add label for all http request related stats to filter

In order to filter technical stats from the Grafana dashboards, add a label to all request related stats.

Request-Source: (uptime|kubernetes|nginx|livetest)

  • none: Regular requests
  • uptime: Uptime Robot and similar sources
  • kubernetes: Kubernetes health/alive checks
  • nginx: Backend availability checks
  • livetest: Backend functional checks

The set of allowed values needs to be checked, otherwise one could DoS our Prometheus by using random labels (an explosion of time series).
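A small sketch (the package, variable and function names are illustrative, not existing bricks code) of how the label value could be restricted to the known set before it is attached to a metric:

package metrics

import "net/http"

// allowed values of the Request-Source header; anything else is mapped to
// the empty value ("none") so arbitrary header values cannot create new
// time series
var allowedRequestSources = map[string]bool{
	"uptime":     true,
	"kubernetes": true,
	"nginx":      true,
	"livetest":   true,
}

func requestSourceLabel(r *http.Request) string {
	src := r.Header.Get("Request-Source")
	if allowedRequestSources[src] {
		return src
	}
	return "" // regular request or unknown value
}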

Only generate a bare method handler in case the response and/or request is a protocol buffer

  • Generate service method correctly (response writer and request parser)
    • use the x- extension attributes from OpenAPI v3 to generate the methods correctly
    • adapt the github.com/pace/bricks/http/jsonapi/runtime package to handle protocol buffers
  • Generate protocol buffer file next to the generated REST API file
  • Extend the pace command (service new and generate rest) to respect protocol buffers correctly

Build Helm Chart

Configure a pace/bricks microservice using a helm chart

Tasks

  • Helm Chart to configure
    • Sentry
    • Postgres
    • ENV
    • Redis
    • Jaeger

Use unformatted query in postgres metrics

postgres.go:132

This should probably be changed to

q, qe := event.UnformattedQuery()

The way it's set up now, the wrapper will generate a new Prometheus label for each individual query (new params generate new labels). UnformattedQuery will return the query string with placeholders (?), which is most likely what we want.

Allow to generate JSON API from remote source

If we execute pace service new dtc --source https://lab.jamit.de/pace/web/api-definitions/raw/master/dtc.yaml, we get a runtime error like this

Nil pointer in generate_tapes:18

If we manually download the JSON definition of an API from our developer hub and use that file, everything works nicely. So this error probably occurs due to unresolved schemas. We should catch such errors and use log.Fatalf(...) instead to show a proper error message.

Update 07.09.18

Even with resolved yaml files deployed, the above error is still thrown. So maybe this is more of a yaml vs json issue.

More updates

This actually is an issue with files loaded from remote sources. Adding a loadSwaggerFromURI call will fix this issue.
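A minimal sketch of the fix, assuming the generator uses github.com/getkin/kin-openapi (whose older loader API exposes LoadSwaggerFromURI); the URL and error handling are illustrative only:

package main

import (
	"log"
	"net/url"

	"github.com/getkin/kin-openapi/openapi3"
)

func main() {
	u, err := url.Parse("https://example.com/api-definitions/dtc.yaml")
	if err != nil {
		log.Fatalf("invalid source URL: %v", err)
	}

	// load and resolve the spec directly from the remote source instead of a local file
	loader := openapi3.NewSwaggerLoader()
	swagger, err := loader.LoadSwaggerFromURI(u)
	if err != nil {
		// proper error message instead of a nil pointer panic
		log.Fatalf("failed to load API definition: %v", err)
	}

	log.Printf("loaded %q with %d paths", swagger.Info.Title, len(swagger.Paths))
}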

CI pipeline integration using real Redis, PostgreSQL and Jaeger

  • add the docker services to the integration step of the pipeline
    • use docker compose for the setup of the different services instead of the makefile
    • use docker compose to start the services while executing the integration test suite
  • reuse the test server to do a real integration test of all the components (in other words, start the test server and send an HTTP request to it)
    • Tests should be skipped on -short
    • Tests should be prefixed with TestIntegration (see the sketch below)
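A short sketch of the naming and -short convention; the package, test name and comments are illustrative:

package service

import "testing"

// prefixed with TestIntegration so it can be selected with
// `go test -run TestIntegration` and excluded from the fast test stage
func TestIntegrationServer(t *testing.T) {
	if testing.Short() {
		t.Skip("skipping integration test in -short mode")
	}

	// start the test server here and send a real HTTP request to it,
	// backed by the Redis, PostgreSQL and Jaeger containers started
	// via docker compose
}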

More metrics

  • regular http server metrics
  • metrics for redis and postgres

OAuth2 Middleware integration

  • Add a new integration test stage and add a -short flag to the current test stage
    • filter the tests in the integration suite using -run TestIntegration
  • Automatic configuration using environment variables, e.g.:
    import "github.com/caarlos0/env"
    
    type config struct {
    	URL    string `env:"OAUTH2_URL" envDefault:"`https://oauth.example.com`"`
    	Client string `env:"OAUTH2_CLIENT"`
    	Secret string `env:"OAUTH2_SECRET"`
    }
    
    func func NewMiddleware() *Middleware {
    	var cfg config
    	err := env.Parse(&cfg)
    	if err != nil {
    		log.Fatalf("Failed to parse postgres environment: %v", err)
    	}
    	return ....
    }
    The idea is, that we can 12-factor-app like create a new middleware which automatically takes the config from the environment.
  • Use the "lab.jamit.de/pace/go-microservice/maintenance/log" package for logging
  • Use ClientID in metrics https://github.com/pace/bricks/blob/master/maintenance/metrics/jsonapi/jsonapi.go#L60
    • Adapt tests (we (@vil and @why-el) elected to do this some other time by implementing a local Prometheus client to be used by the testserver).
  • Use ClientID and UserID in logs (the ClientID and UserID should be logged once for all requests, see https://github.com/pace/bricks/blob/master/maintenance/log/handler.go#L28). UPDATE: Logging should be done in the router. The oauth2 package will inject its own logging handler middleware.
    • Adapt tests
  • Add ClientID and UserID in traces (https://github.com/pace/bricks/blob/master/http/jsonapi/generator/generate_handler.go#L459-461) This will be reverted, see the opentracing item below.
    • check that it works in Jaeger using the make testserver
  • document configuration for the oauth2 middleware via environment variables
  • Do opentracing directly from oauth2, since we want to capture the cockpit request.
    • check that it works in Jaeger using the make testserver

Generate enum for APIs

  • generate a new type and consts for string, float and int enum types (see the sketch below)
  • conversion from the raw type to the new type, plus a to-string implementation
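A hedged sketch of what the generated code could look like for a string enum with the values "diesel" and "petrol"; the package, type and function names are illustrative, not the actual generator output:

package api

// FuelType is the generated type for the string enum
type FuelType string

const (
	FuelTypeDiesel FuelType = "diesel"
	FuelTypePetrol FuelType = "petrol"
)

// String implements fmt.Stringer for the generated type
func (f FuelType) String() string { return string(f) }

// FuelTypeFromString converts the raw type into the generated type and
// reports whether the value is one of the declared enum values
func FuelTypeFromString(s string) (FuelType, bool) {
	switch FuelType(s) {
	case FuelTypeDiesel, FuelTypePetrol:
		return FuelType(s), true
	}
	return "", false
}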

Fix metrics histogram negative value addition

https://sentry.jamit.de/pace-telematics/poi-dev/issues/11229/?environment=edge

Code at https://github.com/pace/go-microservice/blob/master/backend/postgres/postgres.go#L193 does an .Add() with a r.RowsAffected() value, a method which can also return -1, as documented in

// A Result summarizes an executed SQL command.
type Result interface {
	Model() Model

	// RowsAffected returns the number of rows affected by SELECT, INSERT, UPDATE,
	// or DELETE queries. It returns -1 if query can't possibly affect any rows,
	// e.g. in case of CREATE or SHOW queries.
	RowsAffected() int

	// RowsReturned returns the number of rows returned by the query.
	RowsReturned() int
}

An .Add call should only be done with values >= 0, otherwise it panics.

Potential fix:

Replace code at https://github.com/pace/go-microservice/blob/master/backend/postgres/postgres.go#L193:

pacePostgresQueryAffectedTotal.With(labels).Add(float64(r.RowsAffected()))

with

pacePostgresQueryAffectedTotal.With(labels).Add(math.Max(0, float64(r.RowsAffected())))

Support testing in live

To assure that the service functions correctly in a certain environment, a package with supportive functions should be created.

Tasks

  • Should look similar to tests and therefore has to implement testing.TB (see the sketch after this list)
    • such that libraries like testify/assert can be used
    • tests could be executed in an integration test scenario as well
  • Test logging should end up in the log package
  • Test results are exposed via Prometheus
  • Filtering of tests is possible
  • Tests are repeated in a given interval
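A minimal sketch of how such a type could satisfy testing.TB by embedding the interface; the package and type names and the logging are assumptions, and only the methods needed by libraries like testify/assert are overridden:

package livetest

import (
	"log"
	"testing"
)

// T embeds testing.TB so it satisfies the interface (including its
// unexported method); the overridden methods route output to the logger.
type T struct {
	testing.TB
	failed bool
}

func (t *T) Errorf(format string, args ...interface{}) {
	t.failed = true
	log.Printf("LIVETEST ERROR: "+format, args...)
}

func (t *T) Logf(format string, args ...interface{}) {
	log.Printf("LIVETEST: "+format, args...)
}

// Failed reports whether the live test recorded an error; the result can
// then be exposed via Prometheus.
func (t *T) Failed() bool { return t.failed }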

Use new google/jsonapi library

  1. panic: reflect: call of reflect.Value.NumField on ptr Value
  2. Can't parse content: Pointer type in struct is not supported
  3. No support for array attributes

Fix missing relationships

json:api descriptions can contain relationships; the corresponding attributes need to be generated.

Rename and branding

The go-microservice project is developing into a small, opinionated microservice generation kit and utility.
Currently, the util is still called pace and is a bit separated from the rest of the code base. The main reason for this is that it was developed in a separate repository.

So currently we have go-microservice with pace service new pay to create a new microservice.

We quickly realized that the two things belong together, and merging the pace tool into go-microservice made changes to both at the same time much easier. Now the tool name is a bit off and the project name is a bit generic.

  • Rename go-microservice to pace/bricks
  • Remove the service argument in the command so that we can create new microservices with pace new pay
  • Rename pace to pb
  • Replace jsonapi example
  • Use docker-compose in TestIntegration...

Fix wrong pointers in json errors

Currently, we don't implement http://jsonapi.org/format/#errors-processing correctly.
The details we provide should help the caller though.

Problem

The govalidator (github.com/asaskevich/govalidator) error has no reference to the original StructField. That makes it impossible to generate correct pointers. Since the actual data structure and the incoming JSON are very different, fork the library and add struct field tags: add a custom tag and use it to produce the correct source pointer/parameter.

Health check that reports service status

Next to the health check that only returns OK, we need to add a health check that checks its dependencies.

  • Check for redis / postgres availability
  • New endpoint under /health/service
  • cache the health result, don't allow DoS via health (e.g. cache for 10 seconds); see the sketch below

  • TODO: Think about how a circuit breaker comes into play in the scenario.
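A minimal sketch of a cached /health/service handler; the package name is illustrative, checkRedis and checkPostgres are hypothetical callbacks, and the 10-second cache is the value suggested above:

package health

import (
	"net/http"
	"sync"
	"time"
)

var (
	mu          sync.Mutex
	lastCheck   time.Time
	lastHealthy bool
)

// cachedCheck runs the dependency check at most once per 10 seconds and
// serves the cached result in between, so the endpoint cannot be used for DoS.
func cachedCheck(check func() bool) bool {
	mu.Lock()
	defer mu.Unlock()
	if time.Since(lastCheck) < 10*time.Second {
		return lastHealthy
	}
	lastHealthy = check()
	lastCheck = time.Now()
	return lastHealthy
}

// Handler serves /health/service and reports the state of the dependencies.
func Handler(checkRedis, checkPostgres func() bool) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		ok := cachedCheck(func() bool { return checkRedis() && checkPostgres() })
		if !ok {
			http.Error(w, "ERR", http.StatusServiceUnavailable)
			return
		}
		w.Write([]byte("OK"))
	}
}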

Localization support

  • error detail
  • file based translation?
  • nominatim + locale
  • context with locale
  • response with content-language header
  • accept-language header processing (see the sketch below)
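A small sketch of accept-language processing and a content-language response header, assuming golang.org/x/text/language is used; the supported locales are illustrative:

package main

import (
	"fmt"
	"log"
	"net/http"

	"golang.org/x/text/language"
)

// supported locales of the service; the first entry is the fallback
var matcher = language.NewMatcher([]language.Tag{
	language.English,
	language.German,
})

func handler(w http.ResponseWriter, r *http.Request) {
	// parse the accept-language header and pick the best supported match
	tags, _, _ := language.ParseAcceptLanguage(r.Header.Get("Accept-Language"))
	tag, _, _ := matcher.Match(tags...)

	// expose the selected locale in the response
	w.Header().Set("Content-Language", tag.String())
	fmt.Fprintf(w, "locale: %s", tag)
}

func main() {
	http.HandleFunc("/", handler)
	log.Fatal(http.ListenAndServe("127.0.0.1:8000", nil))
}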
