Comments (12)
Prefork enables use of the SO_REUSEPORT socket option, which is available in newer versions of many operating systems, including DragonFly BSD and Linux (kernel version 3.9 and later). This socket option allows multiple sockets to listen on the same IP address and port combination. The kernel then load balances incoming connections across the sockets.
SO_REUSEPORT scales well when many concurrent client connections (at least thousands) are established over a real network (preferably 10 Gbit with per-CPU hardware packet queues). When only a small number of concurrent client connections are established over localhost, SO_REUSEPORT usually doesn't give any performance gain.
Benchmarks with preforking enabled:
https://www.techempower.com/benchmarks/#section=test&runid=350f0783-cc9b-4259-9831-28987799782a&hw=ph&test=json&l=zijocf-1r
NGINX on socket sharding
https://www.nginx.com/blog/socket-sharding-nginx-release-1-9-1/
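The kernel behavior described above can be demonstrated with the Go standard library alone: when SO_REUSEPORT is set on both sockets, two listeners can bind the same address and port, and the kernel distributes incoming connections between them. This is a minimal, Linux-specific sketch (the constant 0xf is Linux's SO_REUSEPORT value, which the frozen syscall package does not export); it is not how Fiber's prefork is implemented internally.

```go
package main

import (
	"context"
	"fmt"
	"net"
	"syscall"
)

// soReusePort is Linux's SO_REUSEPORT value; the frozen syscall
// package does not export this constant.
const soReusePort = 0xf

// reusePortListener opens a TCP listener with SO_REUSEPORT set,
// so several listeners can share one address/port pair.
func reusePortListener(addr string) (net.Listener, error) {
	lc := net.ListenConfig{
		Control: func(network, address string, c syscall.RawConn) error {
			var sockErr error
			if err := c.Control(func(fd uintptr) {
				sockErr = syscall.SetsockoptInt(int(fd), syscall.SOL_SOCKET, soReusePort, 1)
			}); err != nil {
				return err
			}
			return sockErr
		},
	}
	return lc.Listen(context.Background(), "tcp", addr)
}

func main() {
	// First listener picks a free port.
	l1, err := reusePortListener("127.0.0.1:0")
	if err != nil {
		panic(err)
	}
	defer l1.Close()

	// Second listener binds the *same* port - this would fail
	// with "address already in use" without SO_REUSEPORT.
	l2, err := reusePortListener(l1.Addr().String())
	if err != nil {
		panic(err)
	}
	defer l2.Close()

	fmt.Println("both listeners bound to", l1.Addr())
}
```

With prefork enabled, Fiber arranges something equivalent for its child processes, so the kernel load balances accepted connections across them.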
from fiber.
A single Go process can easily support thousands of concurrent connections. Preforking still uses single Go processes, but load balances incoming connections across them at the OS level.
It's up to you whether preforking benefits your web app; we only provide the experimental option to enable it.
Feel free to re-open this issue if you have further questions!
from fiber.
database automigrate works in every process
https://docs.gofiber.io/api/fiber#ischild
if !fiber.IsChild() {
    // runs only once, in the parent process
}
from fiber.
Does using Fiber behind a reverse proxy like nginx reduce the possible benefit by doing the same thing, or is there still a benefit?
I assume there are fewer TCP connections between the reverse proxy and Fiber.
from fiber.
https://redis.io/docs/manual/pubsub/
https://redis.com/redis-best-practices/communication-patterns/pub-sub/
https://hub.docker.com/_/redis/ (alpine image, 28-32 MB)
or helm https://artifacthub.io/packages/helm/bitnami/redis
Go client:
https://redis.io/resources/clients/#go
-> https://github.com/go-redis/redis
from fiber.
When Fiber prefork is active, database automigrate runs in every process; how can I make it run only once?
from fiber.
well, that's the problem with in-memory caches when you want to use multiple processes
to 1.: you have to inform all processes, and for this there are concepts like message queues or pub/sub mechanisms
in kubernetes there is often the same problem: you have a deployment with multiple pods, and there you sometimes have to establish communication if you want to update something across all pods
i personally use redis and the pub/sub concept for this purpose
to 2. and 3.: i think no, but you lose the benefit of processing across several processes
instead of building your own solution, i.e. a server endpoint with control and an in-memory instance, i would recommend you use a redis server
it's fast to install (at least with docker) and really fast, with many features and the possibility of scaling through a master/slave cluster
from fiber.
Thanks for opening your first issue here! Be sure to follow the issue template!
from fiber.
Thanks for that explanation. Do you have evidence to suggest a single Go process cannot support "[thousands] of concurrent client connections" without SO_REUSEPORT?
from fiber.
So prefork runs multiple worker processes?
If each worker is a separate process, then memory is not shared between workers, if I'm not wrong?
from fiber.
Thanks @ReneWerner87
from fiber.
I also have a question about sharing memory.
We have implemented a FastCache instance for our Fiber application. When the application starts, the data is pulled from an external server with an HTTP GET request and the cache is updated. So far so good, as each child process gets its own update.
We also have a feature to update the cache by pushing it via HTTP POST from the external server, so it doesn't require a server restart on the Fiber application's side. Will that update every process? I assume it will not. My questions are:
- Is there a way to make it work?
- If we run a separate Fiber instance as a cache server on the same machine, outside of the pre-forked application, will there be a performance penalty?
- Also, if a separate cache server is an option, what protocol(s) would be the most efficient?
Sharing the cache between forks within the application would be a solution, but after my research I don't see that it's possible.
Sharing memory would also reduce memory requirements, since we would need only one instance for all the processes.
from fiber.