
golang-ipc's Introduction

golang-ipc


Golang Inter-process communication library forked from james-barrow/golang-ipc with the following features added:

  • Adds the configurable ability to spawn multiple clients; to allow multiple client connections, socket connections are dynamically allocated
  • Adds ReadTimed methods which return after the provided time.Duration
  • Adds a ConnectionPool instance to easily poll read requests from multiple clients and to close connections
  • Adds improved logging for better visibility
  • Removes race conditions by using sync.Mutex locks
  • Improves and adds more tests
  • Makes both StartClient and StartServer blocking, removing the need for time.Sleep between Server and Client instantiation. All tests are run with 0 millisecond wait times using IPC_WAIT=0
  • Adds TCP support in place of Unix domain sockets

Overview

A simple-to-use package that uses Unix domain sockets on Mac/Linux to create a communication channel between two Go processes.

Usage

Create a server with the default configuration and start listening for the client:

s, err := ipc.StartServer(&ipc.ServerConfig{Name: "<name of connection>"})
if err != nil {
	log.Println(err)
	return
}

Create a client and connect to the server:

c, err := ipc.StartClient(&ipc.ClientConfig{Name: "<name of connection>"})
if err != nil {
	log.Println(err)
	return
}

Read messages

Read each message sent (blocking):

for {

	//message, err := s.Read() // server
	message, err := c.Read() // client

	if err != nil {
		// handle error
		break
	}

	// do something with the received message
}

Read each message sent, giving up each attempt after the provided duration has elapsed:

for {

	message, err := c.ReadTimed(5 * time.Second)

	if err != nil {
		// handle error
		break
	}

	if message == ipc.TimeoutMessage {
		continue
	}

	if c.StatusCode() != ipc.Connecting {
		// do something with the received message
	}
}

MultiClient Mode

Poll newly created clients on each iteration, giving up each read after the provided duration has elapsed.

s, err := ipc.StartServer(&ipc.ServerConfig{Name: "<name of connection>", MultiClient: true})
if err != nil {
	log.Println(err)
	return
}

for {
	s.Connections.ReadTimed(5*time.Second, func(srv *ipc.Server, message *ipc.Message, err error) {
		if message == ipc.TimeoutMessage {
			return // continue is invalid inside the callback; return instead
		}

		if message.MsgType == -1 && message.Status == "Connected" {
			// a new client connected
		}
	})
}
  • Server.Connections.ReadTimed will block until the slowest ReadTimed callback completes.
  • Server.Connections.ReadTimedFastest will unblock after the first ReadTimed callback completes.

While ReadTimedFastest results in faster iterations, it also results in more running goroutines in scenarios where client requests are not evenly distributed.

To get a better idea of how these work, run the following examples:

Using ReadTimed:

go run --race example/multiclient/multiclient.go

Using ReadTimedFastest:

FAST=true go run --race example/multiclient/multiclient.go

Notice that the Server receives messages faster and the process finishes sooner.

Message Struct

All received messages are formatted into the Message type:

type Message struct {
	Err     error  // details of any error
	MsgType int    // 0 = reserved, -1 = internal message (disconnection, error, etc.); all messages received will be > 0
	Data    []byte // message data received
	Status  string // the status of the connection
}

Write a message

//err := s.Write(1, []byte("<Message for client>"))
err := c.Write(1, []byte("<Message for server>"))

if err != nil {
	// handle error
}

Advanced Configuration

Server options:

config := &ipc.ServerConfig{
	Name: (string),            // the name of the queue (required)
	Encryption: (bool),        // allows encryption to be switched off (bool - default is true)
	MaxMsgSize: (int) ,        // the maximum size in bytes of each message ( default is 3145728 / 3Mb)
	UnmaskPermissions: (bool), // make the socket writeable for other users (default is false)
	MultiClient: (bool),       // allow the server to connect with multiple clients (default is false)
}

Client options:

config := &ipc.ClientConfig  {
	Name: (string),             // the name of the queue needs to match the name of the ServerConfig (required)
	Encryption: (bool),         // allows encryption to be switched off (bool - default is true)
	Timeout: (time.Duration),   // duration to wait while attempting to connect to the server (default is 0, i.e. no timeout)
	RetryTimer: (time.Duration),// duration to wait before iterating the dial loop or reconnecting (default is 1 second)
}

By default, the Timeout value is 0 which allows the dial loop to iterate in perpetuity until a connection to the server is established.

In scenarios where perpetually attempting to reconnect is impractical, a Timeout value should be provided. When the connection times out, no further retries will be attempted.

When a Client is no longer used, ensure that the .Close() method is called to prevent unnecessary perpetual connection attempts.
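Putting the above together, a bounded-retry client might look like the fragment below. The specific durations are illustrative, and Name must match the server's:

```go
// Give up dialing after 10 seconds, retrying every 500ms in the
// meantime, and always release the connection when done.
c, err := ipc.StartClient(&ipc.ClientConfig{
	Name:       "<name of connection>",
	Timeout:    10 * time.Second,
	RetryTimer: 500 * time.Millisecond,
})
if err != nil {
	log.Println(err)
	return
}
defer c.Close()
```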

Encryption

By default, the connection established will be encrypted: ECDH384 is used for the key exchange and AES-256-GCM for the cipher.

Encryption can be switched off by passing in a custom configuration to the server & client start function:

Encryption: false
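For example, the flag is passed to both the server and client start functions (both sides presumably need to agree, since the handshake establishes the cipher):

```go
s, err := ipc.StartServer(&ipc.ServerConfig{Name: "<name of connection>", Encryption: false})
// ...
c, err := ipc.StartClient(&ipc.ClientConfig{Name: "<name of connection>", Encryption: false})
```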

Unix Socket Permissions

Under most configurations, a socket created by one user will by default not be writable by another user, making it impossible for the client and server to communicate when run by separate users. The permission mask can be dropped during socket creation by passing a custom configuration to the server start function. This makes the socket writable for any user.

UnmaskPermissions: true	

TCP Support

Instead of using Unix domain sockets, you can also use TCP. This provides the benefits of TCP's reliability and platform interoperability (e.g. Windows), but sacrifices performance and CPU/memory efficiency.

To build with TCP support:

go build -tags network

You can customize the following using runtime environment variables:

  • IPC_NETWORK_HOST: the host address the TCP connection is bound to (default is 127.0.0.1)
  • IPC_NETWORK_PORT: the port the TCP connection is bound to (default is 8100)

IPC_NETWORK_HOST=10.0.2.15 IPC_NETWORK_PORT=7200 go run -tags network

Debugging

Environment Variables

You can specify debug verbosity using the IPC_DEBUG environment variable.

IPC_DEBUG=true make run

IPC_DEBUG accepts the following values:

  • true: sets the debug level to debug
  • debug: has the same effect as true
  • info: sets the debug level to info
  • warn: sets the debug level to warn
  • error: sets the debug level to error

Testing

The package has been tested on Mac and Linux and has extensive test coverage. The following commands will run all the tests and examples with race condition detection enabled.

make test run

You can change the speed of the tests by providing a value for the IPC_WAIT environment variable. A value > 5 specifies the number of milliseconds to wait between critical intervals, whereas a value <= 5 is interpreted as the number of seconds to wait between the same. The default is 10 milliseconds. You can also set IPC_DEBUG=true to put the logrus log level into debug mode. The following command runs the tests in debug mode while waiting 500ms between critical intervals:

IPC_WAIT=500 IPC_DEBUG=true make test run

golang-ipc's People

Contributors

joe-at-startupmedia, james-barrow, scrouthtv, basnijholt, nickycakes

golang-ipc's Issues

provide benchmarking

Provide the following benchmarks:

  • golang-ipc versus posix_mq , I expect posix_mq to win by about 200%

  • golang-ipc versus a tcp-based IPC; qtalk-go seems to be a good candidate. Because golang-ipc is using unix sockets, it should theoretically be about 350% faster.

  • golang-ipc versus itself using different configuration settings

Windows testing workflow hangs

I've debugged this far enough to see that, on windows-latest, both the client and server hang on the handshake call. Where in the handshake this occurs has yet to be determined. Further testing requires access to a Windows machine.

research configurability for named pipe implementation

The current implementation uses unix domain sockets. These are great for bi-directional communication between the client and server and are faster than tcp/udp because of the nature of how unix sockets work. In multi-client mode, a separate socket is established for each new client-server connection. Technically a socket can be shared between multiple clients, but this would require segmentation to be handled in the read/write logic, which would require "recycling" socket messages back into the stack (currently a channel of the Message struct) similar to how ReadTimed works. While there are certain tradeoffs to each, my initial assumption (without benchmarking) is that the performance penalties of handling multiple connections over the same socket due to message recycling would outweigh the performance penalties incurred from creating per-client sockets. Aside from the presumed performance penalties (or not), the code is also a lot easier to maintain using multiple sockets.

Further research indicates that when using smaller messages < 1kb, named pipes out-perform sockets. If this is the case it may be worth allowing this functionality through configuration. While only one socket is necessary for communication between the client and server, two named pipes would be required per-client connection since they are uni-directional FIFOs. Benchmarks representing significant performance gains would be required to justify addition of this feature.

simple "named pipe" example
https://gist.github.com/matishsiao/fc1601a3a3f37c70d91ab3b1ed8485c4

os.Pipe benchmarking, not "named pipes"?
https://go.dev/src/os/pipe_test.go

(screenshot: benchmark chart comparing IPC throughput by block size)

"We can notice that the performance of each method depends on the block size. We can see that pipes are slightly faster than sockets when we use the smallest block sizes of 100 bytes and 500 bytes. However, when we used the biggest block sizes of 10 Kbytes and 1 Mbyte, sockets were faster than pipes.

The fastest IPC method with a 100-byte block size was the named pipe, and the slowest was the UNIX socket. The named pipe transmitted at a speed of 318 Mbits/s, while the UNIX socket transmitted at 245 Mbits/s. So, in relative terms, named pipes are approximately 30% faster than UNIX sockets with a block size of 100 bytes."

source: https://www.baeldung.com/linux/ipc-performance-comparison#:~:text=Conclusion&text=We%20compared%20anonymous%20pipes%2C%20named,UNIX%20sockets%20being%20the%20fastest.
