Comments (12)
Prefork enables the SO_REUSEPORT socket option, which is available in newer versions of many operating systems, including DragonFly BSD and Linux (kernel 3.9 and later). This option allows multiple sockets to listen on the same IP address and port combination; the kernel then load-balances incoming connections across those sockets.
SO_REUSEPORT scales well when many concurrent client connections (at least thousands) are established over a real network (preferably 10 Gbit with per-CPU hardware packet queues). When only a small number of concurrent client connections are established over localhost, SO_REUSEPORT usually gives no performance gain.
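In Fiber, prefork (and with it SO_REUSEPORT) is switched on through the app config. A minimal sketch, assuming Fiber v2; the route and port are placeholders:

```go
package main

import (
	"log"

	"github.com/gofiber/fiber/v2"
)

func main() {
	// Prefork spawns one child process per CPU core; each child
	// listens on the same address via SO_REUSEPORT, and the kernel
	// load-balances incoming connections across them.
	app := fiber.New(fiber.Config{
		Prefork: true,
	})

	app.Get("/", func(c *fiber.Ctx) error {
		return c.SendString("hello")
	})

	log.Fatal(app.Listen(":3000"))
}
```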
Benchmarks where preforking is enabled.
https://www.techempower.com/benchmarks/#section=test&runid=350f0783-cc9b-4259-9831-28987799782a&hw=ph&test=json&l=zijocf-1r
NGINX on socket sharding
https://www.nginx.com/blog/socket-sharding-nginx-release-1-9-1/
from fiber.
A single Go process can easily support thousands of concurrent connections. Preforking spawns several such Go processes and load-balances connections across them at the OS level.
Whether preforking benefits your web app is up to you to measure; we only provide the experimental option to enable it.
Feel free to re-open this issue if you have further questions!
from fiber.
database automigrate works in every process
https://docs.gofiber.io/api/fiber#ischild
if !fiber.IsChild() {
    // runs only once, in the parent process
}
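A fuller sketch of that guard, assuming Fiber v2; `automigrate` here is a hypothetical stand-in for your real migration call:

```go
package main

import (
	"log"

	"github.com/gofiber/fiber/v2"
)

// automigrate is a hypothetical placeholder for your actual migration
// logic, e.g. db.AutoMigrate(&User{}) when using GORM.
func automigrate() {
	log.Println("running migrations")
}

func main() {
	app := fiber.New(fiber.Config{Prefork: true})

	// IsChild reports whether this process is one of the forked
	// children; the guard ensures migrations run only once, in the
	// parent process.
	if !fiber.IsChild() {
		automigrate()
	}

	log.Fatal(app.Listen(":3000"))
}
```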
from fiber.
Does using Fiber behind a reverse proxy like nginx reduce the possible benefit, since the proxy already does the same thing, or is there still a benefit? I assume there are fewer TCP connections between the reverse proxy and Fiber.
from fiber.
When Fiber prefork is active, database automigrate runs in every process. How can I make it run only once?
from fiber.
Well, that's the problem with in-memory caches when you use multiple processes.
Regarding 1: you have to inform all processes, and for this there are concepts like message queues or pub/sub mechanisms.
In Kubernetes there is often the same problem: you have a deployment with multiple pods, and sometimes you have to establish communication between them if you want to update something across all pods.
I personally use Redis and the pub/sub concept for this purpose.
Regarding 2 and 3: I think not, but you lose the benefit of processing across several processes.
Instead of building your own solution (i.e., a server endpoint with control logic and an in-memory instance), I would recommend using a Redis server.
It's fast to install (at least with Docker), really fast at runtime, and it offers many features plus the possibility of scaling through a primary/replica cluster.
from fiber.
https://redis.io/docs/manual/pubsub/
https://redis.com/redis-best-practices/communication-patterns/pub-sub/
https://hub.docker.com/_/redis/ -> alpine 28-32mb
or helm https://artifacthub.io/packages/helm/bitnami/redis
go client
https://redis.io/resources/clients/#go
-> https://github.com/go-redis/redis
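The pub/sub idea above can be sketched with the go-redis client linked here: each prefork process subscribes to an invalidation channel, and whichever process receives the external update publishes to it. The channel name, payload, and Redis address are assumptions chosen for this sketch, not Fiber or Redis conventions:

```go
package main

import (
	"context"
	"log"

	"github.com/go-redis/redis/v8"
)

func main() {
	ctx := context.Background()
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})

	// Every process (parent and each prefork child) subscribes to a
	// shared channel; "cache-invalidate" is an arbitrary name used
	// for this sketch.
	sub := rdb.Subscribe(ctx, "cache-invalidate")
	go func() {
		for msg := range sub.Channel() {
			// On receipt, each process refreshes its own local cache
			// entry for the key named in the payload.
			log.Printf("invalidate key: %s", msg.Payload)
		}
	}()

	// The one process that handles the external HTTP POST publishes
	// the update; all subscribers, in every process, receive it.
	rdb.Publish(ctx, "cache-invalidate", "products")
}
```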
from fiber.
Thanks for that explanation. Do you have evidence to suggest a single Go process cannot support "[thousands] of concurrent client connections" without SO_REUSEPORT?
from fiber.
Thanks for opening your first issue here! Be sure to follow the issue template!
from fiber.
So prefork runs multiple worker processes?
If each worker is a separate process, then memory will not be shared between workers, if I'm not wrong?
from fiber.
Thanks @ReneWerner87
from fiber.
I also have a question about sharing memory.
We have implemented a FastCache instance for our Fiber application. When the application starts, the data is pulled from an external server with an HTTP GET request and the cache is populated. So far so good, as each child process gets its own copy.
We also have a feature to update the cache via an HTTP POST from the external server, so that the Fiber application does not need a restart. Will that update reach every process? I assume it will not. My questions are:
- Is there a way to make it work?
- If we run a separate Fiber instance as a cache server on the same machine, outside of the pre-forked application, will there be a performance penalty?
- Also, if a separate cache server is an option, which protocol(s) would be the most efficient?
If we could share the cache between forks within the application, that would be a solution, but after some research I don't see that it's possible.
Sharing memory would also reduce memory requirements, since we would need only one cache instance for all the processes.
from fiber.