envoyproxy / envoy

Cloud-native high-performance edge/middle/service proxy

Home Page: https://www.envoyproxy.io

License: Apache License 2.0

Shell 0.49% C++ 88.81% Python 1.23% C 0.10% Emacs Lisp 0.01% PureBasic 0.01% Go 0.32% Thrift 0.01% Dockerfile 0.01% Rust 0.08% Starlark 5.49% Makefile 0.01% Batchfile 0.01% Jinja 0.09% JavaScript 0.13% Smarty 0.01% CSS 0.01% HTML 0.01% Kotlin 0.63% Java 2.59%
cats rocket-ships cars more-cats cats-over-dogs nanoservices corgis cncf

envoy's People

Contributors

abeyad, adisuissa, alyssawilk, asraa, augustyniak, danzh2010, dependabot[bot], dio, fredyw, ggreenway, goaway, htuch, jmarantz, jpsim, junr03, kbaichoo, keith, kyessenov, lizan, mattklein123, phlax, piotrsikora, ramaraochavali, ravenblackx, rebello95, ryantheoptimist, snowp, wbpcode, yanavlasov, zuercher


envoy's Issues

Unable to get external IP address behind AWS ELB

As far as I know, AWS ELB doesn't support HTTP/2 yet, so I have to use its TCP mode to route traffic to an Envoy instance. In this case Envoy populates X-Forwarded-For with an internal IP address that seems to belong to the ELB itself, and X-Envoy-External-Address is empty.

Is there any solution to this problem?

PS, log format as below:

"format": "[%START_TIME%] "%REQ(:METHOD)%" "%REQ(X-ENVOY-ORIGINAL-PATH?:PATH)%" %PROTOCOL% %RESPONSE_CODE% %FAILURE_REASON% %BYTES_RECEIVED% %BYTES_SENT% %DURATION% "%RESP(X-ENVOY-UPSTREAM-SERVICE-TIME)%" "%REQ(X-FORWARDED-FOR)%" "%REQ(X-ENVOY-EXTERNAL-ADDRESS)%" "%REQ(appid)%/%REQ(appversion)%" "%REQ(X-REQUEST-ID)%" "%REQ(:AUTHORITY)%" "%UPSTREAM_HOST%"\n"

TLS: Server side SNI

The current TLS context implementation only supports one certificate/key file pair.
Many use cases require several certificate files, so it would be great if we could specify folders in TLS contexts.
We should be able to add/update any file in these folders without restarting.
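
As a purely hypothetical sketch of what this could look like in a listener's ssl_context (cert_chain_file and private_key_file are existing fields; the cert_folder field and its semantics are illustrative of the request only):

"ssl_context": {
  "cert_chain_file": "/etc/envoy/certs/default.crt",
  "private_key_file": "/etc/envoy/certs/default.key",
  "cert_folder": "/etc/envoy/certs/sni/"
}

The idea being that any name.crt / name.key pair dropped into the folder is picked up, selected by SNI server name, and reloaded without an Envoy restart.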

Support for fault injection

For the purposes of resilience testing, the sidecar needs to be able to inject faults into the outbound API calls made by a microservice. Abort, delay and mangle are the three fault primitives that are needed. These primitives kick in before the load balancing stage.

Can these be implemented as TCP/HTTP filters (netfilter style)?

At minimum, two types of faults are needed to construct elaborate failure scenarios across the application: aborting HTTP requests with different error codes (or terminating TCP connections), and delaying HTTP requests by a specified duration. Fault injection should be scoped to a particular subset of requests (e.g., ones with a specific HTTP header or a particular cluster).

[as a wish list, being able to slow down a TCP pipe bandwidth would be cool].
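
To illustrate the shape such a filter's configuration could take (the filter and every field name below are hypothetical at this point), an HTTP fault filter scoped by a header might look like:

{
  "type": "decoder",
  "name": "fault",
  "config": {
    "abort": { "abort_percent": 10, "http_status": 503 },
    "delay": { "type": "fixed", "fixed_delay_percent": 10, "fixed_duration_ms": 2000 },
    "headers": [ { "name": "x-fault-inject", "value": "true" } ]
  }
}

i.e., 10% of matching requests would be aborted with a 503 and another 10% delayed by 2 seconds before reaching the load balancing stage.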

perf: don't use bufferevent

Further perf investigation shows that a big delta on a raw throughput test has to do with how libevent deals with write events in the bufferevent. bufferevent_sock does not use edge-triggered behavior, so it deletes and then re-adds the write event on every write. I have a proof of concept that does not use bufferevent, and the perf improvement is 20-25%. Along with another change that I have going to swap out the header map implementation, it gets up to around a 40% improvement. That is tracked in: #120

perf: replace evbuffer

evbuffer has a lot of extra functionality that we don't need. It also does suboptimal things during reading (calling ioctl, etc.) that are not well tuned for using a fast allocator like tcmalloc.

This is not something we will do short term but just opening this item to track.

rate limit: configuration/filter enhancements

We are considering a number of enhancements/changes to how rate limiting can be configured on a per virtual host / route basis for HTTP requests. This will include a shift in configuration from the filter to the route/vhost, as well as new rules (potentially logical and/or, etc.) for setting descriptors.

@ccaraman when we have a better idea of what we are planning please post a summary here.

cc @chowchow316 @mandarjog let's discuss here if you have any requests since I think this issue will be relevant to what you are working on.

Spotting memory leak on Ubuntu 14.04

Compiled an Envoy binary statically on a dev machine (Ubuntu 14.04) and deployed it on a cloud virtual machine (Ubuntu 14.04).

We see a steady CPU and memory increase until the host becomes unresponsive. The blue line in the image below represents CPU usage.

We have very low traffic, so restarting it daily is not a big deal. Just reporting it; we have since migrated to an AWS host running Ubuntu 16.04 and will monitor it further.

[image: CPU and memory usage over time]

perf: more efficient header processing

As part of recent profiling comparison with nginx:

Right now how Envoy handles headers is extremely inefficient. We use a list, along with std::string, and generally iterate over the entire list for remove operations, etc.

Move to a more efficient implementation that:

  • Look at each header on receive and store a direct pointer into the list if Envoy will ever use it (via hash lookup). All add/remove/modify operations become O(1).
  • Move away from std::string; implement an explicit short string optimization, with sentinels for common headers.

rethink request ID generation wrt runtime routing

As of #188, we use a process wide incrementing ID for the request ID (which is used as the stable random number during runtime routing).

This is good for performance reasons (and in many deployments provides effectively random behavior), but the behavior is confusing for newcomers. We should make the new and old behavior configurable.

Configuration schema and better error messages

  1. Create a schema for the configuration that fails on unknown config elements.
  2. Better error messages and line numbers for config errors.

If anyone has requests for how to better deal with configuration errors, please add them to this issue.

NT support

Get Envoy working on NT. This will require an OS shim layer for most/all POSIX operations and all Linux-specific operations, and especially a new hot restart implementation.

OSX support

Get Envoy compiling on OSX. Shim out all Linux-specific features and either turn them off in the OSX build or provide an OS shim layer to make them work on BSD.

perf: remaining small items

A few more small items for this perf pass before calling it good:

  • Don't allocate PooledStreamEncoder (class can be merged into Router, not used anywhere else)
  • Cache date string for date header every 500ms or so
  • Store integer headers as integers also to avoid converting back and forth
  • Optimize buffer callbacks
  • Remove locking from libevent base
  • 0 copy grpc encode
  • Only do PRNG in runtime when needed
  • stat flushing code
  • Don't copy http/2 static headers when encoding
  • remove fmt::format from dynamic stats in router
  • remove copy from path match route action
  • allocate "light" host when doing service discovery so we don't allocate full stat block which we never use.
  • std::vector<nghttp2_settings_entry> iv in http2 codec can be a fixed array

hot restart: make drain time configurable

Right now it's hard-coded to 15 minutes for no particular reason. Non-edge Envoys can typically use much shorter drain times if desired. Let's make this time configurable.

integration tests using different language/libraries

Right now all of our integration tests use Envoy itself for fake clients and fake upstreams. This is useful for ease of testing (compiles to a single binary), but it would be good to have an additional set of tests that are written using a different language (probably Go) so that we can test interop.

cc @louiscryan

1.1.0 release tracker

We are stabilizing after all of the recent perf changes. Plan on doing a 1.1.0 release in the next 2 weeks. If anyone wants any other small features/fixes to be in the official release please speak up.

auto host rewrite for DNS clusters

Ability to automatically rewrite the host header if we are proxying to a DNS based cluster. This is a little complicated and will mean we need to:

  1. Store the original hostname in the host description.
  2. Once LB returns a host, rewrite host header if an option is set with this information.
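
A rough sketch of what step 2 could look like on a route (the auto_host_rewrite flag is illustrative only, not an existing option):

{
  "prefix": "/",
  "cluster": "backend_dns",
  "auto_host_rewrite": true
}

When the load balancer picks a host out of the strict_dns cluster, the Host/:authority header would be rewritten to the DNS name stored in that host's description.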

Envoy seems not to honor "Connection: close" HTTP header

We have a setup where Varnish sits in front of an Envoy-fronted server, and we have defined Varnish probes to test the backend's health. The problem is that Varnish considers the backend to be perpetually sick because Envoy does not close the connection after forwarding the response, even if the probe sends the Connection: close header.

Here is a network capture from a probe being sent:

20:01:39.092382 IP private.38315 > 10.0.18.222.8282: Flags [S], seq 795357912, win 26883, options [mss 8961,sackOK,TS val 3658896941 ecr 0,nop,wscale 7], length 0
	0x0000:  4500 003c 64f5 4000 4006 9ad3 0a00 1416  E..<d.@.@.......
	0x0010:  0a00 12de 95ab 205a 2f68 32d8 0000 0000  .......Z/h2.....
	0x0020:  a002 6903 3b22 0000 0204 2301 0402 080a  ..i.;"....#.....
	0x0030:  da16 562d 0000 0000 0103 0307            ..V-........
20:01:39.092988 IP 10.0.18.222.8282 > private.38315: Flags [S.], seq 2488892053, ack 795357913, win 26847, options [mss 8961,sackOK,TS val 519304291 ecr 3658896941,nop,wscale 11], length 0
	0x0000:  4500 003c 0000 4000 4006 ffc8 0a00 12de  E..<..@.@.......
	0x0010:  0a00 1416 205a 95ab 9459 7a95 2f68 32d9  .....Z...Yz./h2.
	0x0020:  a012 68df 1bfb 0000 0204 2301 0402 080a  ..h.......#.....
	0x0030:  1ef3 f463 da16 562d 0103 030b            ...c..V-....
20:01:39.093001 IP private.38315 > 10.0.18.222.8282: Flags [.], ack 1, win 211, options [nop,nop,TS val 3658896942 ecr 519304291], length 0
	0x0000:  4500 0034 64f6 4000 4006 9ada 0a00 1416  E..4d.@.@.......
	0x0010:  0a00 12de 95ab 205a 2f68 32d9 9459 7a96  .......Z/h2..Yz.
	0x0020:  8010 00d3 3b1a 0000 0101 080a da16 562e  ....;.........V.
	0x0030:  1ef3 f463                                ...c
20:01:39.093028 IP private.38315 > 10.0.18.222.8282: Flags [P.], seq 1:89, ack 1, win 211, options [nop,nop,TS val 3658896942 ecr 519304291], length 88
	0x0000:  4500 008c 64f7 4000 4006 9a81 0a00 1416  E...d.@.@.......
	0x0010:  0a00 12de 95ab 205a 2f68 32d9 9459 7a96  .......Z/h2..Yz.
	0x0020:  8018 00d3 3b72 0000 0101 080a da16 562e  ....;r........V.
	0x0030:  1ef3 f463 4745 5420 2f6a 656c 6c6f 2f61  ...cGET./jello/a
	0x0040:  7069 2048 5454 502f 312e 310d 0a58 2d56  pi.HTTP/1.1..X-V
	0x0050:  5343 4f2d 5365 7276 6963 653a 206d 6f6e  SCO-Service:.mon
	0x0060:  6f6c 6974 680d 0a48 6f73 743a 206d 6f6e  olith..Host:.mon
	0x0070:  6f6c 6974 680d 0a43 6f6e 6e65 6374 696f  olith..Connectio
	0x0080:  6e3a 2063 6c6f 7365 0d0a 0d0a            n:.close....
20:01:39.093577 IP 10.0.18.222.8282 > private.38315: Flags [.], ack 89, win 14, options [nop,nop,TS val 519304291 ecr 3658896942], length 0
	0x0000:  4500 0034 8ef5 4000 4006 70db 0a00 12de  E..4..@[email protected].....
	0x0010:  0a00 1416 205a 95ab 9459 7a96 2f68 3331  .....Z...Yz./h31
	0x0020:  8010 000e d090 0000 0101 080a 1ef3 f463  ...............c
	0x0030:  da16 562e                                ..V.
20:01:39.106115 IP 10.0.18.222.8282 > private.38315: Flags [P.], seq 1:207, ack 89, win 14, options [nop,nop,TS val 519304294 ecr 3658896942], length 206
	0x0000:  4500 0102 8ef6 4000 4006 700c 0a00 12de  E.....@[email protected].....
	0x0010:  0a00 1416 205a 95ab 9459 7a96 2f68 3331  .....Z...Yz./h31
	0x0020:  8018 000e f63e 0000 0101 080a 1ef3 f466  .....>.........f
	0x0030:  da16 562e 4854 5450 2f31 2e31 2032 3030  ..V.HTTP/1.1.200
	0x0040:  204f 4b0d 0a63 6f6e 7465 6e74 2d6c 656e  .OK..content-len
	0x0050:  6774 683a 2035 0d0a 636f 6e74 656e 742d  gth:.5..content-
	0x0060:  7479 7065 3a20 7465 7874 2f68 746d 6c3b  type:.text/html;
	0x0070:  2063 6861 7273 6574 3d55 5446 2d38 0d0a  .charset=UTF-8..
	0x0080:  782d 656e 766f 792d 7570 7374 7265 616d  x-envoy-upstream
	0x0090:  2d73 6572 7669 6365 2d74 696d 653a 2031  -service-time:.1
	0x00a0:  320d 0a73 6572 7665 723a 2065 6e76 6f79  2..server:.envoy
	0x00b0:  0d0a 6461 7465 3a20 4672 692c 2031 3820  ..date:.Fri,.18.
	0x00c0:  4e6f 7620 3230 3136 2032 303a 3031 3a33  Nov.2016.20:01:3
	0x00d0:  3920 474d 540d 0a78 2d65 6e76 6f79 2d70  9.GMT..x-envoy-p
	0x00e0:  726f 746f 636f 6c2d 7665 7273 696f 6e3a  rotocol-version:
	0x00f0:  2048 5454 502f 312e 310d 0a0d 0a6f 6b20  .HTTP/1.1....ok.
	0x0100:  676f                                     go
20:01:39.106122 IP private.38315 > 10.0.18.222.8282: Flags [.], ack 207, win 219, options [nop,nop,TS val 3658896945 ecr 519304294], length 0
	0x0000:  4500 0034 64f8 4000 4006 9ad8 0a00 1416  E..4d.@.@.......
	0x0010:  0a00 12de 95ab 205a 2f68 3331 9459 7b64  .......Z/h31.Y{d
	0x0020:  8010 00db 3b1a 0000 0101 080a da16 5631  ....;.........V1
	0x0030:  1ef3 f466                                ...f
20:01:42.108259 IP private.38315 > 10.0.18.222.8282: Flags [F.], seq 89, ack 207, win 219, options [nop,nop,TS val 3658897695 ecr 519304294], length 0
	0x0000:  4500 0034 64f9 4000 4006 9ad7 0a00 1416  E..4d.@.@.......
	0x0010:  0a00 12de 95ab 205a 2f68 3331 9459 7b64  .......Z/h31.Y{d
	0x0020:  8011 00db 3b1a 0000 0101 080a da16 591f  ....;.........Y.
	0x0030:  1ef3 f466                                ...f
20:01:42.108939 IP 10.0.18.222.8282 > private.38315: Flags [F.], seq 207, ack 90, win 14, options [nop,nop,TS val 519305045 ecr 3658897695], length 0
	0x0000:  4500 0034 8ef7 4000 4006 70d9 0a00 12de  E..4..@[email protected].....
	0x0010:  0a00 1416 205a 95ab 9459 7b64 2f68 3332  .....Z...Y{d/h32
	0x0020:  8011 000e c9dd 0000 0101 080a 1ef3 f755  ...............U
	0x0030:  da16 591f                                ..Y.
20:01:42.108953 IP private.38315 > 10.0.18.222.8282: Flags [.], ack 208, win 219, options [nop,nop,TS val 3658897696 ecr 519305045], length 0
	0x0000:  4500 0034 64fa 4000 4006 9ad6 0a00 1416  E..4d.@.@.......
	0x0010:  0a00 12de 95ab 205a 2f68 3332 9459 7b65  .......Z/h32.Y{e
	0x0020:  8010 00db 3b1a 0000 0101 080a da16 5920  ....;.........Y.
	0x0030:  1ef3 f755                                ...U

Varnish is sending Connection: close, but Envoy is not closing the connection after sending its response. Varnish closes the connection itself after 3 seconds, but that causes it to consider the probe failed and reject the backend.

perf: TCP_CORK / better buffering pre-write

As part of the recent nginx comparison profiling:

Like most proxies Envoy uses TCP_NODELAY for all connection handling. A few improvements can be made here to reduce the number of small packet writes.

  • Make TCP_NODELAY optional during TCP proxying. During high throughput scenarios there is no reason to use no delay.
  • Utilize TCP_CORK and better pre-buffering in both the HTTP/1.1 and HTTP/2 codecs. There are several flows where we may write several times that could be collapsed. Though modern kernels do have some level of auto-corking, it would still be better to be explicit.

CLI log levels shouldn't just be a uint_t

Right now, per --help

-l <uint64_t>,  --log-level <uint64_t>
  Log level

This isn't the most helpful... IMO this should conform to what people are used to (info, fine, debug, etc.), but perhaps the convention in C++ is different? Regardless, there should be some way to know which value will get you what.

Unable to use as a TCP proxy for mysql

Hi,
Envoy seems to have trouble proxying the MySQL protocol.

My setup is a ubuntu 14.04.5 with mysqld Ver 5.5.52-0ubuntu0.14.04.1 running on port 3306.

If I use the mysql client to talk directly to mysqld it works.
If I try to go through envoy, it hangs.

This is my envoy config file:

{ "listeners": [ { "port": 3307, "filters": [ { "type": "read", "name": "tcp_proxy", "config": { "cluster": "mysql", "stat_prefix": "mysql" } } ] } ], "admin": { "access_log_path": "/dev/null", "port": 8001 }, "cluster_manager": { "clusters": [ { "name": "mysql", "connect_timeout_ms": 250, "type": "strict_dns", "lb_type": "round_robin", "hosts": [ { "url": "tcp://127.0.0.1:3306" } ] } ] } }

This is the sh session:

enricos@enricos:/ws$ mysql -h 127.0.0.1 -P 3306 -u admin
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 14
Server version: 5.5.52-0ubuntu0.14.04.1 (Ubuntu)
...
mysql> quit
Bye
enricos@enricos:/ws$ mysql -h 127.0.0.1 -P 3307 -u admin
<... hangs ...>

On the envoy console, I see:

./build/source/exe/envoy -c ../envoy/examples/envoy-mysql.conf -l 0
...
[2016-10-20 17:20:50.398][167386][info][main] [C2] new connection
[2016-10-20 17:20:50.398][167386][info][filter] [C2] new tcp proxy session

This is the hash I built from:

commit 2cb3635

These are the stats:

http.admin.downstream_cx_rx_bytes_total: 260
http.admin.downstream_cx_ssl_active: 0
http.admin.downstream_cx_ssl_total: 0
http.admin.downstream_cx_total: 3
http.admin.downstream_cx_tx_bytes_buffered: 0
http.admin.downstream_cx_tx_bytes_total: 1564
http.admin.downstream_rq_2xx: 1
http.admin.downstream_rq_3xx: 0
http.admin.downstream_rq_4xx: 1
http.admin.downstream_rq_5xx: 0
http.admin.downstream_rq_active: 1
http.admin.downstream_rq_http1_total: 3
http.admin.downstream_rq_http2_total: 0
http.admin.downstream_rq_non_relative_path: 0
http.admin.downstream_rq_response_before_rq_complete: 0
http.admin.downstream_rq_rx_reset: 0
http.admin.downstream_rq_total: 3
http.admin.downstream_rq_tx_reset: 0
http.admin.failed_generate_uuid: 0
http.async-client.no_route: 0
http.async-client.rq_redirect: 0
http.async-client.rq_total: 0
listener.3307.downstream_cx_active: 0
listener.3307.downstream_cx_destroy: 1
listener.3307.downstream_cx_total: 1
listener.8001.downstream_cx_active: 1
listener.8001.downstream_cx_destroy: 2
listener.8001.downstream_cx_total: 3
server.days_until_first_cert_expiring: 2147483647
server.live: 1
server.memory_allocated: 428400
server.memory_heap_size: 1048576
server.parent_connections: 0
server.total_connections: 0
server.uptime: 45
server.version: 2929507
server.watchdog_mega_miss: 0
server.watchdog_miss: 0
stats.overflow: 0
tcp.mysql.downstream_cx_tx_bytes_buffered: 0
tcp.mysql.downstream_cx_tx_bytes_total: 0

Comparison to linkerd

The documentation's comparison list is both thorough and useful. Thanks for that!

Would you consider adding a brief comparison to linkerd? It's based on Finagle, so I would expect the existing section to cover most of the details, but as a proxy product, linkerd is even more similar to Envoy (and a competitor).

SRV service discovery support

Support SRV queries (used by Consul DNS, etc.) so that DNS can return IP/port combinations.

SRV is not supported by getaddrinfo_a, so we need to integrate with a DNS library to make the query directly.

XSRF protection filter

cc @heston

Ideally, Envoy would handle the full XSRF lifecycle:

  1. Send a cookie containing a cryptographic hash (XSRF token) on GET requests.

  2. On PUT/POST/DELETE requests, Envoy would inspect various parts of the request looking for the token (XSRF_TOKEN header, encoded form body, json body).

  3. Envoy would validate the token (hash is valid, not expired).

  4. If valid, the request is passed to the origin.

  5. If invalid, Envoy would send a 406 status.

We'd also need a way to opt-in/opt-out of xsrf protection for certain endpoints.
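
As a purely hypothetical sketch of how such a filter might be configured (no such filter exists yet; every field name below is illustrative only):

{
  "type": "decoder",
  "name": "xsrf",
  "config": {
    "token_cookie": "XSRF-TOKEN",
    "token_header": "XSRF_TOKEN",
    "token_ttl_seconds": 3600,
    "exempt_paths": [ "/login", "/webhooks" ]
  }
}

Here exempt_paths would be the opt-out mechanism for endpoints that cannot carry a token.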

Document the minimal interfaces for providing new discovery modes

@mattklein123 as we discussed last night, if we could get some documentation that shows the bare minimum interface(s) for providing a new mode of discovery, that would be excellent! Preferably it would be something that is not mixed up with the DNS part, which is far more complicated. Having such documentation would make contributing new modes of discovery far easier.

Thanks in advance.

Support for regular expression parsing in header match

It would be nice to be able to match headers based on regular expressions. Some simple use cases could be matching based on Cookies, where a specific type of user is making a request.

Depending on the library being used (e.g., PCRE vs. something else), for maintaining performance it might be okay to explicitly ask the user to specify whether the header match block contains a regex or not, with an isregex: true entry. For example,

"headers" : [
                          {
                              "name" : "Cookie",
                              "value" : "user=test-.*?",
                              "isregex" : true
                          }
                      ]

In the absence of the isregex block, the header match can be considered as simple string equality checks.

envoy binary file size - currently 127MB

Hi guys,

I'm thinking about using Envoy in a Kubernetes setup.

One of the challenges in such a setup is keeping the Docker/rkt container image size small, and therefore I was a bit surprised that the envoy binary is about 127 MB.

However, I was able to "shrink" the binary down to about 8 MB using the ELF "strip" approach :

/tmp$ strip -S --strip-unneeded --remove-section=.note.gnu.gold-version --remove-section=.comment --remove-section=.note --remove-section=.note.gnu.build-id --remove-section=.note.ABI-tag envoy

-rwxr-xr-x 1 jj jj 127M Nov 22 20:17 envoy.orig
-rwxr-xr-x 1 jj jj 7,7M Nov 22 20:18 envoy

The question is, what makes the binary so large?

Thank you.

Cheers,
jj

multi-cluster weighted routing rule

Right now it's clumsy if the user wants to have a routing rule that matches and then sends traffic to a number of upstream clusters on a % basis. We should support this directly both via static percentages as well as runtime override of the percentages.

The rule might look something like:

{
...
"weighted_clusters": [
  { "name": "cluster1", "weight": 33 },
  { "name": "cluster2", "weight": 33 },
  { "name": "cluster3", "weight": 33 }
]
}

Need to also specify what the rule would look like with runtime overrides. Open to suggestions here.

cc @rshriram

ACME support

Automated Let's Encrypt certificate creation/renewal would be great.

Support for consistent hashing lb

This is an enhancement request.
We need a load balancing policy that can route based on a ketama hash of an HTTP header field (or source IP:port). This is useful in situations where a cache is sharded across a cluster of hosts. Incoming requests for a specific set of data can be routed to the same host if a particular HTTP header is present.
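
A sketch of what this could look like in the existing config structure, assuming a ring_hash lb_type on the cluster and a per-route hash policy keyed on a header (the ring_hash value and hash_policy/header_name fields are assumptions, not final syntax):

"clusters": [
  {
    "name": "cache_cluster",
    "connect_timeout_ms": 250,
    "type": "strict_dns",
    "lb_type": "ring_hash",
    "hosts": [ { "url": "tcp://cache1:11211" }, { "url": "tcp://cache2:11211" } ]
  }
]

and on the route:

{ "prefix": "/", "cluster": "cache_cluster", "hash_policy": { "header_name": "x-shard-key" } }

Requests carrying the same x-shard-key value would then consistently land on the same cache host.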

replace getaddrinfo_a() with c-ares

getaddrinfo_a() is not portable (it exists only in glibc) and Envoy already uses libevent's features quite heavily.

I'd be fine with a build-time alternative, if you prefer to use getaddrinfo_a() on Linux.

perf: remove unnecessary copies in data path

As part of recent perf comparison with nginx:

During high throughput scenarios Envoy does a bunch of extra copies of body data for HTTP as well as raw payload data for TCP proxy.

Many of these copies can be eliminated by changing the filter and connection interfaces to take pointers to buffers where applicable. Those buffers can then be written out by reference and finally freed.

custom access log format won't break line

Using the default access log format results in log entries without line breaks. I have to append a \n to the format string.

"format": "[%START_TIME%] "%REQ(:METHOD)%" "%REQ(X-ENVOY-ORIGINAL-PATH?:PATH)%" %PROTOCOL% %RESPONSE_CODE% %FAILURE_REASON% %BYTES_RECEIVED% %BYTES_SENT% %DURATION% "%RESP(X-ENVOY-UPSTREAM-SERVICE-TIME)%" "%REQ(X-FORWARDED-FOR?X-ENVOY-EXTERNAL-ADDRESS)%" "%REQ(X-REQUEST-ID)%" "%REQ(:AUTHORITY)%" "%UPSTREAM_HOST%""

Tested on ubuntu 14.04 with envoy 1.0.
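
For reference, the workaround is simply to append the newline escape at the end of the format value in the access_log entry, e.g. (the shortened format and path are illustrative; as I recall the v1 access_log entry takes path and format):

"access_log": [
  {
    "path": "/var/log/envoy/access.log",
    "format": "[%START_TIME%] %PROTOCOL% %RESPONSE_CODE% %BYTES_RECEIVED% %BYTES_SENT% %DURATION%\n"
  }
]

It would still be nicer if log entries were terminated with a newline automatically.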

Support for HTTP/1.0

Envoy responds to an HTTP/1.0 request with a 426 Upgrade Required response:

GET / HTTP/1.0

HTTP/1.1 426 Upgrade Required
server: envoy
date: Mon, 24 Oct 2016 21:11:59 GMT
x-envoy-protocol-version: HTTP/1.1
content-length: 0

It would be nice to support 1.0 as there are some ancient clients out there.

Logging to syslog

It would be nice to be able to send access logs directly to a syslog server, making it easier to centralize logs.
