
goaio's People

Contributors

dzaninovic, traetox


goaio's Issues

WaitFor() holds mutex whilst waiting

If two AIO requests (A and B) are pending, WaitFor() is called for A, and before that call returns WaitFor() is called for B, then the WaitFor() call for B will not return until the WaitFor() call for A has returned. Also, and more seriously, during this period no new request can be submitted. This assumes I am reading the code right!

This is because WaitFor() holds a.mtx even while waiting. I believe the mutex is only meant to protect the members of AIO, rather than be held while the code is actually in the syscall. I suspect the correct behaviour is to drop the mutex immediately before the call to wait and retake it afterwards (so the defer still does its job), as sketched below.
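
A minimal sketch of the suggested change, assuming WaitFor() is roughly a lock/defer-unlock wrapper around the blocking wait; the identifiers a.mtx, a.waitOn and RequestId are placeholders based on the issue text, not necessarily the actual goaio names:

func (a *AIO) WaitFor(id RequestId) (int, error) {
	a.mtx.Lock()
	defer a.mtx.Unlock()

	// ... validate that id refers to a pending request ...

	// Release the mutex while blocked in the wait syscall so other
	// goroutines can submit new requests or wait on different IDs.
	a.mtx.Unlock()
	n, err := a.waitOn(id) // placeholder for the blocking wait on the kernel AIO context
	a.mtx.Lock()           // retake the lock so the deferred Unlock still matches

	// ... update bookkeeping for the completed request under the lock ...
	return n, err
}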

Random IO throughput

Have you guys tested this async implementation to determine the random read throughput you can achieve?

We (at dgraph-io/badger) have been noticing that the Go model of doing disk reads (creating pthreads and blocking them per read) is causing our throughput to always be much lower than what's possible on an SSD. https://github.com/dgraph-io/badger-bench/blob/master/randread/main.go

So we've been looking into building what you seem to have built. Hence the question: have you tested the random IO throughput?
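
For reference, a minimal sketch of the blocking-read pattern described above (not the actual code from the linked badger-bench benchmark): each goroutine issues ReadAt calls, and every call parks an OS thread in pread(2) until the device answers, which is what limits throughput on fast SSDs.

package main

import (
	"math/rand"
	"os"
	"sync"
)

const (
	blockSize  = 4096
	numReaders = 64
	readsEach  = 1000
)

func main() {
	f, err := os.Open("data.bin") // placeholder: any large pre-filled test file
	if err != nil {
		panic(err)
	}
	defer f.Close()

	fi, _ := f.Stat()
	blocks := fi.Size() / blockSize

	var wg sync.WaitGroup
	wg.Add(numReaders)
	for i := 0; i < numReaders; i++ {
		go func() {
			defer wg.Done()
			buf := make([]byte, blockSize)
			for j := 0; j < readsEach; j++ {
				off := rand.Int63n(blocks) * blockSize
				// ReadAt blocks an OS thread in pread(2); the Go runtime
				// spawns additional threads to keep other goroutines running.
				if _, err := f.ReadAt(buf, off); err != nil {
					panic(err)
				}
			}
		}()
	}
	wg.Wait()
}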

Compared with read/write operations, AIO has higher latency overhead

I created a 400 MB file (fileSize in const.go below) and tested it with 4 KB random reads and writes.

With 50 worker goroutines, each executing 1000 I/O operations:
r/w latency: 1.469s
aio latency: 6.378s

const.go

package compared

const (
	goroutineNum = 50
	rwNum        = 1000
	fileSize     = 4 * 100 * 1024 * 1024
	sloat        = fileSize / (4 * 1024) // number of 4 KB slots in the file
)

// Only used on Linux. Note: on linux/amd64 syscall.O_DIRECT is 0x4000, so this
// hard-coded value may not actually enable direct IO there.
const O_DIRECT = 0x100000

test code: r/w

package compared

import (
	"fmt"
	"math/rand"
	"os"
	"sync"
	"testing"
	"time"
)

func dorw(wg *sync.WaitGroup, file *os.File) {
	buf := make([]byte, 4096)
	for i := 0; i < rwNum; i++ {
		// Pick a random op (0 = read, 1 = write) and a random 4 KB-aligned offset.
		randNum := rand.Int31()
		op := randNum % int32(2)
		off := int64((randNum % int32(sloat)) * 4096)

		start := time.Now()
		if op == 0 {

			n, _ := file.ReadAt(buf, off)
			if n != 4096 {
				panic("n != 4096")
			}
		} else {

			n, _ := file.WriteAt(buf, off)
			if n != 4096 {
				panic("n != 4096")
			}
		}
		end := time.Now()

		fmt.Printf("op:%d %d\n", op, end.UnixNano() - start.UnixNano())
	}

	wg.Done()
}

func TestRW(t *testing.T) {
	// prepare the test environment
	file, err := os.OpenFile("1.txt", os.O_CREATE|os.O_TRUNC|os.O_WRONLY, 0644)
	if err != nil {
		t.Fatalf("open 1.txt failed, err: %v", err)
	}
	// create a file of the specified size (sparse: seek to fileSize and write a few bytes past the end)
	file.Seek(fileSize, 0)
	file.Write([]byte("aaa"))
	file.Close()

	file, err = os.OpenFile("1.txt", os.O_RDWR | O_DIRECT, 0644)
	if err != nil {
		t.Fatalf("open 1.txt failed, err: %v", err)
	}

	start := time.Now()
	wg := &sync.WaitGroup{}
	wg.Add(goroutineNum)
	for i := 0; i < goroutineNum; i++ {
		go dorw(wg, file)
	}
	wg.Wait()
	end := time.Now()

	fmt.Printf("rw delay %d ms\n", (end.UnixNano()-start.UnixNano())/(1000*1000))

	file.Close()
}

test code: aio

package compared

import (
	"fmt"
	"go-aio"
	"math/rand"
	"os"
	"sync"
	"testing"
	"time"
)

func do(wg *sync.WaitGroup, file *os.File, gaio *aio.AIO) {
	buf := make([]byte, 4096)
	for i := 0; i < rwNum; i++ {
		// Pick a random op (0 = read, 1 = write) and a random 4 KB-aligned offset.
		randNum := rand.Int31()
		op := randNum % int32(2)
		off := int64((randNum % int32(sloat)) * 4096)

		start := time.Now()
		if op == 0 {
			n, _ := gaio.DoRequest(aio.ReadOP, file, buf, off)
			if n != 4096 {
				panic("n != 4096")
			}
		} else {
			n, _ := gaio.DoRequest(aio.WriteOP, file, buf, off)
			if n != 4096 {
				panic("n != 4096")
			}

		}
		end := time.Now()

		fmt.Printf("op:%d %d\n", op, end.UnixNano() - start.UnixNano())
	}

	wg.Done()
}

func TestAIO(t *testing.T) {
	// prepare the test environment
	file, err := os.OpenFile("1.txt", os.O_CREATE|os.O_TRUNC|os.O_WRONLY, 0644)
	if err != nil {
		t.Fatalf("open 1.txt failed, err: %v", err)
	}
	// create a file of the specified size (sparse: seek to fileSize and write a few bytes past the end)
	file.Seek(fileSize, 0)
	file.Write([]byte("aaa"))
	file.Close()

	file, err = os.OpenFile("1.txt", os.O_RDWR | O_DIRECT, 0644)
	if err != nil {
		t.Fatalf("open 1.txt failed, err: %v", err)
	}

	gaio := aio.Setup(8, 4096)

	start := time.Now()
	wg := &sync.WaitGroup{}
	wg.Add(goroutineNum)
	for i := 0; i < goroutineNum; i++ {
		go do(wg, file, gaio)
	}
	wg.Wait()
	end := time.Now()

	fmt.Printf("aio rw delay %d ms\n", (end.UnixNano()-start.UnixNano())/(1000*1000))

	file.Close()
}

Ability to abort pending io?

Not sure if this repo is still maintained, but wondering: on top of things like Wait() or Ready(), if a request is unable to complete for a long time, is there a way to kill it?
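
For reference, a minimal sketch of what an abort could look like at the kernel level, assuming access to the AIO submission context and the iocb pointer (internal details of goaio; nothing here is goaio's API). The struct layout mirrors <linux/aio_abi.h>; in practice io_cancel(2) usually returns EAGAIN because most file and block-device requests cannot be cancelled once submitted:

//go:build linux

// Package aioabort is a hypothetical sketch, not part of goaio.
package aioabort

import (
	"syscall"
	"unsafe"
)

// aioContext mirrors the kernel's aio_context_t.
type aioContext uintptr

// ioEvent mirrors struct io_event from <linux/aio_abi.h>.
type ioEvent struct {
	data uint64
	obj  uint64
	res  int64
	res2 int64
}

// cancel asks the kernel to abort a previously submitted iocb via io_cancel(2).
// On success the completion event for the cancelled request is written to ev;
// if the request cannot be cancelled it completes normally and EAGAIN is returned.
func cancel(ctx aioContext, iocb unsafe.Pointer) error {
	var ev ioEvent
	_, _, errno := syscall.Syscall(syscall.SYS_IO_CANCEL,
		uintptr(ctx), uintptr(iocb), uintptr(unsafe.Pointer(&ev)))
	if errno != 0 {
		return errno
	}
	return nil
}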
