
vhost-device's People

Contributors

ablu, aesteve-rh, alexandruag, andreeaflorescu, bilelmoussaoui, cutelizebin, dependabot[bot], dorindabassey, dorjoy03, epilys, gaelan, harshanavkis, ikicha, jiangliu, lauralt, mathieupoirier, matiasvara, mz-pdm, obeis, q-liuwei, ramyak-mehra, slp, stefano-garzarella, stsquad, techiepriyansh, toolmanp, uran0sh, vireshk, vitkyrka


vhost-device's Issues

vsock: host listening, guest connecting partially works

Environment: crosvm
In the case of host listening and guest connecting, only the first packet is delivered from host to guest; the guest-to-host direction works well.
The guest-listening scenario also works well.

I suspect this is a regression from 7f809ee.

It looks like epoll_register is called with different fds (the first time it uses the original fd; after that, it uses the cloned fd).
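
A minimal illustration of why that matters (assuming the fds come from try_clone() or an equivalent dup): the clone is a distinct fd, so epoll registrations against the original and the clone do not refer to the same epoll entry.

    use std::os::unix::io::AsRawFd;
    use std::os::unix::net::UnixStream;

    fn main() -> std::io::Result<()> {
        let (stream, _peer) = UnixStream::pair()?;
        let clone = stream.try_clone()?; // dup(2): a new fd for the same socket
        // epoll_ctl(ADD) on one fd and epoll_ctl(MOD) on the other touch
        // different epoll entries, so re-registering via the clone diverges
        // from the original registration.
        assert_ne!(stream.as_raw_fd(), clone.as_raw_fd());
        Ok(())
    }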

[sound] add a comment to explain latency value

Based on a crosvm comment, the latency of a used buffer is the number of bytes before the current buffer is played:

/// It returns how many bytes need to be consumed
/// before the current playback buffer will be played.

My guess is that crosvm is using "0" because it notifies the guest after the current period has been consumed. We could add a comment to explain that too.
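
A hedged sketch of the kind of comment we could add (the wording and placement are only suggestions):

    /// Latency to report when completing a playback buffer.
    fn latency_for_completed_buffer() -> u32 {
        // latency_bytes (virtio_snd_pcm_status): the number of bytes that
        // still need to be consumed before the buffer being completed starts
        // playing. We return 0 because (like crosvm) we complete the buffer
        // only after the current period has already been consumed, so
        // nothing is pending at that point.
        0
    }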

gpio: Drop unsafe Send traits on structs including libgpiod structures

A new lint pointed out some weird Arc<Mutex<>> use, which turned out to happen because we are (unsafely) extending the contract of libgpiod [1].

The following code is marking PhysDevice as Send:

// SAFETY: Safe as the structure can be sent to another thread.
unsafe impl Send for PhysDevice {}
// SAFETY: Safe as the structure can be shared with another thread as the state
// is protected with a lock.
unsafe impl Sync for PhysDevice {}

PhysDevice is not Send automatically since PhysLineState contains request::Request which is not Send (because it contains a raw pointer).

Instead of just marking it as Send, the annotation in libgpiod needs to be fixed. Or, if that is not possible, we need to rewrite things so that the Send trait is not needed on PhysDevice. I am not immediately seeing anything that would disallow marking the line_request wrapper as Send, but it needs a discussion on whether libgpiod considers this part of its contract.

I kicked off a discussion around documenting libgpiod's threading safety guarantees [2]. Until that is settled, we should silence the warning in order to keep CI happy.

[1] #435 (comment)
[2] https://lore.kernel.org/linux-gpio/CVHO091CC80Y.3KUOSLSOBVL0T@ablu-work/

/cc @vireshk
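
Until the libgpiod discussion settles, a minimal sketch of silencing the lint (assuming the new lint is clippy's arc_with_non_send_sync; everything here is illustrative):

    use std::sync::{Arc, Mutex};

    struct PhysDevice; // stand-in for the real libgpiod-backed structure

    // Hypothetical: allow the lint at the use site until libgpiod documents
    // its threading guarantees, instead of widening the unsafe Send/Sync impls.
    #[allow(clippy::arc_with_non_send_sync)]
    fn make_shared(dev: PhysDevice) -> Arc<Mutex<PhysDevice>> {
        Arc::new(Mutex::new(dev))
    }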

vsock: Issues in sibling VM communication

These issues were discovered while trying to test the current implementation of sibling VM communication in vhost-user-vsock. The testing was done with iperf-vsock and nc-vsock, both patched to set .svm_flags = VMADDR_FLAG_TO_HOST.

Issues

Deadlock

If you try to test the sibling communication by running iperf-vsock or by transferring big files with nc-vsock, the vhost-user-vsock process hangs and becomes completely unresponsive. After a bit of debugging, I discovered that there is a deadlock.

The deadlock occurs when two sibling VMs simultaneously try to send each other packets. The VhostUserVsockThreads corresponding to both the VMs hold their own locks while executing thread_backend.send_pkt and then try to lock each other to access their counterpart's raw_pkts_queue. This ultimately results in a deadlock.

In particular, this line of code triggers the deadlock.

The deadlock can be resolved by separating the mutex over raw_pkts_queue from the mutex over VhostUserVsockThread.
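
A rough sketch of that split (only the names from the issue are real; the rest is hypothetical): give raw_pkts_queue its own lock, so that sending to a sibling never requires locking the sibling's whole thread.

    use std::collections::VecDeque;
    use std::sync::{Arc, Mutex};

    struct RawPkt(Vec<u8>); // stand-in for the real raw packet type

    struct VhostUserVsockThread {
        // Before: this queue lived under the Mutex protecting the whole
        // thread, so send_pkt had to lock the sibling's entire thread.
        // After: only the queue itself is locked, so two siblings sending
        // to each other at the same time can no longer deadlock.
        raw_pkts_queue: Arc<Mutex<VecDeque<RawPkt>>>,
        // ... rest of the thread state, protected separately
    }

    fn send_to_sibling(sibling_queue: &Arc<Mutex<VecDeque<RawPkt>>>, pkt: RawPkt) {
        sibling_queue.lock().unwrap().push_back(pkt);
    }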

Raw packets queue not being processed completely

Even after resolving the deadlock, the vhost-user-vsock process still hangs during testing, though it is not completely unresponsive this time. It turns out that sometimes the raw packets pending on the raw_pkts_queue are never processed, resulting in the hang.

This happens because, currently, the raw_pkts_queue is processed only when a SIBLING_VM_EVENT is received. But it may happen that the raw_pkts_queue cannot be processed completely due to insufficient space in the RX virtqueue at that time.

This can be resolved by trying to process raw packets on other events too, similar to what happens in the RX path for standard packets.

Current status

While fixing the above two issues seems to make nc-vsock run flawlessly, testing with iperf-vsock still results in the vhost-user-vsock process hanging. There might be a notification problem, possibly related to the EVENT_IDX feature.

virtio-video device

Hi,

I am interested in developing a virtio-video device and wanted to ask whether there are already any ongoing efforts or plans, and whether there is interest in this device altogether.

I did post an RFC patch for Qemu here: https://www.mail-archive.com/[email protected]/msg950676.html
The device was initially proposed by Linaro, and I took it over. But right now we would like to shift efforts toward having the device implementation here.

I have just started working on an initial skeleton. As soon as I have something ready, I will share it here in this issue.
The skeleton (for now) will be based on v3 of the virtio-video patch, as the specs are still under discussion and there is no clear path forward (virtio-v4l2 vs virtio-video), but also because v3 already has a virtio driver for testing.

So the idea is to have a virtio-video device working with spec v3, and to update it once the specs have settled.

For the backend, the v4l2r lib would fit perfectly imo.

[i2c] reduce the use of unsafe blocks

We need to reduce the following unsafe code:


As a rule of thumb, all unsafe code blocks need to be accompanied by a comment that describes why the code inside an unsafe block accesses memory in a valid way. Also, unsafe code needs to be limited to the calls that are actually unsafe, and results need to be validated. You can see an example here: https://github.com/rust-vmm/kvm-ioctls/blob/66529690273e1ee555356705daca39e2991f3f9c/src/ioctls/vcpu.rs#L163.
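
For illustration, a minimal pattern along those lines (using a generic I2C_FUNCS ioctl as the example, not our actual code):

    use std::io;
    use std::os::unix::io::RawFd;

    fn get_funcs(fd: RawFd) -> io::Result<u64> {
        let mut funcs: u64 = 0;
        // SAFETY: the fd is a valid, open I2C device node, the request code
        // matches the pointed-to type, and `funcs` lives for the whole call,
        // so the kernel writes into valid memory. Only the ioctl itself is
        // inside the unsafe block; the result is validated right after.
        let ret = unsafe { libc::ioctl(fd, 0x0705 /* I2C_FUNCS */, &mut funcs) };
        if ret < 0 {
            return Err(io::Error::last_os_error());
        }
        Ok(funcs)
    }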

cargo build failure for libgpiod dependency

It seems that Cargo.lock points to a reference removed in libgpiod:

cargo build
    Updating git repository `https://github.com/vireshk/libgpiod`
error: failed to get `libgpiod` as a dependency of package `vhost-device-gpio v0.1.0 (/home/stefano/repos/vhost-device/gpio)`

Caused by:
  failed to load source for dependency `libgpiod`

Caused by:
  Unable to update https://github.com/vireshk/libgpiod#9d8e18e2

Caused by:
  object not found - no match for id (9d8e18e2ad2d4bc4f5e315c01c9c03418ff47993); class=Odb (9); code=NotFound (-3)

@vireshk Has it been rebased?

cargo update fixes the issue, but I think we should point to a tag or something that doesn't change over time.
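
For example, a hedged Cargo.toml sketch (the tag name is made up):

    [dependencies]
    # Pin to a tag (or a commit on a long-lived branch) instead of a floating
    # revision that can disappear after a rebase of the upstream repository.
    libgpiod = { git = "https://github.com/vireshk/libgpiod", tag = "v0.1.0" }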

vsock: implement VhostUserBackend instead of VhostUserBackendMut for VhostUserVsockBackend

The VhostUserVsockBackend structure implements the VhostUserBackendMut trait, which is intended for structures without interior mutability.
VhostUserVsockBackend already uses a Mutex to protect its threads, so it could implement the interior-mutability trait instead (i.e. VhostUserBackend), and maybe use an RwLock instead of a Mutex to allow more concurrency in the future, if we support multiple worker threads or need to share the VhostUserVsockBackend reference with multiple objects.

An example to follow could be the VhostUserFsBackend in https://gitlab.com/virtio-fs/virtiofsd
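
Conceptually, the change looks like this (only a shape sketch; the real vhost-user-backend traits have more methods and different signatures):

    use std::sync::RwLock;

    struct VsockThread; // stand-in for the per-queue worker state

    struct VhostUserVsockBackend {
        // Interior mutability: methods can take &self and still mutate
        // state, and RwLock lets multiple readers proceed concurrently.
        threads: Vec<RwLock<VsockThread>>,
    }

    impl VhostUserVsockBackend {
        // With VhostUserBackend (vs. VhostUserBackendMut), handlers take &self:
        fn handle_event(&self, thread_id: usize) {
            let _thread = self.threads[thread_id].write().unwrap();
            // ... process the event
        }
    }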

[BUG] vsock: Mem leak caused by circular referencing

There might be a memory leak bug caused by circular referencing.

VhostUserBackend ==> VhostUserVsockThread ==> VringEpollHandler
       ^                                             ||
       ||                                            ||
       ===============================================

These three structs refer to each other using Arc, but nothing uses Weak, which makes it a reference cycle.

As a result, none of the three structs can ever be dropped.

Reproduction Process:

  1. Launch vhost-device-vsock
  2. Launch and kill QEMU or other VMM
  3. Use top -Hp $(pgrep vhost-device-vsock) to watch the thread details

Advice

Maybe we can use Weak<VhostUserBackend> in VringEpollHandler.
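
A minimal sketch of breaking the cycle that way (types are stand-ins):

    use std::sync::{Arc, Mutex, Weak};

    struct VhostUserBackend {
        handler: Option<Arc<VringEpollHandler>>,
    }

    struct VringEpollHandler {
        // Weak does not keep the backend alive, so dropping the last external
        // Arc<Mutex<VhostUserBackend>> actually frees the whole chain.
        backend: Weak<Mutex<VhostUserBackend>>,
    }

    fn handle(h: &VringEpollHandler) {
        // Upgrade on demand; returns None once the backend is gone.
        if let Some(backend) = h.backend.upgrade() {
            let _guard = backend.lock().unwrap();
            // ... handle the event
        }
    }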

CI: run nightly cargo fmt with a custom rustfmt.toml

@epilys suggested running rustfmt with nightly (e.g. cargo +nightly fmt) and a custom rustfmt.toml to enforce several checks, like the import groups.
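
For example, a possible rustfmt.toml (both options are nightly-only and just suggestions):

    # rustfmt.toml
    group_imports = "StdExternalCrate"  # std / external / crate import groups
    imports_granularity = "Crate"       # merge imports from the same crate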

[sound] creation of multiple streams that are not destroyed

The regular PCM command lifecycle for playing a stream is SET_PARAMS -> PREPARE -> START -> STOP -> RELEASE -> SET_PARAMS -> PREPARE -> ...
However, there are some exceptional cases that do not follow the regular PCM command lifecycle, such as using the sox tool to play audio from the guest. To test this:
start the vhost-device-sound daemon on the host with the PipeWire backend option enabled, and enable Rust debug output on the daemon;
install the sox tool on the guest and run play /usr/share/sounds/alsa/Front_Left.wav from the guest.
The order of PCM commands using this tool is SET_PARAMS -> PREPARE -> SET_PARAMS -> PREPARE -> SET_PARAMS -> PREPARE -> START -> STOP -> RELEASE -> ...
This order of PCM commands is also a valid sequence; however, this scenario leads to the creation of multiple PipeWire streams that are never destroyed. We need to ensure that, in a case like this, when the prepare fn() in the PipeWire backend is called multiple times without the release fn(), there is only one PipeWire stream.

see virtio-sound#43
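
A hedged sketch of such a guard (all names hypothetical): prepare() reuses the stream already associated with the stream id instead of unconditionally creating a new one.

    use std::collections::HashMap;

    struct PwStream; // stand-in for the real PipeWire stream handle

    struct PwBackend {
        streams: HashMap<u32, PwStream>,
    }

    impl PwBackend {
        fn prepare(&mut self, stream_id: u32) {
            // Only create a stream the first time; repeated SET_PARAMS/PREPARE
            // without RELEASE must not leak a new PipeWire stream each time.
            self.streams.entry(stream_id).or_insert_with(|| {
                // ... build and connect the real stream here
                PwStream
            });
        }

        fn release(&mut self, stream_id: u32) {
            self.streams.remove(&stream_id); // destroy the stream on RELEASE
        }
    }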

[i2c] move the parsing logic in a single place

Right now, in the I2C implementation, parsing happens in 2 places:

  • in main.rs there is a basic parsing of the parameters sent via the CLI
  • in i2c.rs the devices string is further parsed to obtain devices

We should clean up the interface to clearly separate the device logic from the parsing. This will also allow implementing multiple frontends for the device (i.e. you could implement a binary version where parameters are configured as JSON instead of YAML), and will enable testing the backend and frontend separately.

vsock: reduce credit update messages sent to the guest

The virtio-vsock driver and the vhost-vsock device emulation in the Linux kernel have been optimized [1] to reduce the number of credit update messages exchanged [2].
The device in vhost-user-vsock behaves like the device in Linux before those changes, sending a credit update message after each read, even of a single byte.
We should reduce the credit update messages sent to the guest by doing something similar to what the Linux driver and device do.

[1] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=c69e6eafff5f725bc29dcb8b52b6782dca8ea8a2
[2] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=b89d882dc9fc279c8acbf1df71d51b22394186d5
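
A hedged sketch of the idea (the threshold value is illustrative; the Linux patches use a comparable low-water check): only send a credit update once a sizable amount of buffer space has been freed since the last one.

    struct ConnState {
        fwd_cnt: u32,      // bytes we have consumed from the peer so far
        last_fwd_cnt: u32, // fwd_cnt at the time of the last credit update
    }

    const CREDIT_UPDATE_THRESHOLD: u32 = 64 * 1024; // illustrative value

    fn should_send_credit_update(c: &ConnState) -> bool {
        // Instead of a credit update after every read (even of one byte),
        // send one only after a sizable amount of buffer space was freed.
        c.fwd_cnt.wrapping_sub(c.last_fwd_cnt) >= CREDIT_UPDATE_THRESHOLD
    }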

virtio-sound device

Implement a virtio-sound device using the vhost-user protocol and gstreamer crate as backend (plus other backends if needed).

dependabot: unignore vm-memory

vm-memory update to v0.13.0 is failing, so we ignored it for now.

Re-enable it when we are ready for the update.
This also depends on other crates (e.g. vhost, vhost-user-backend).

[i2c] Disallow invalid adapter configuration

It is possible to pass the same adapter number twice in the list, as follows: "1:4,1:32:21,5:10:23". Instead of this returning an error because adapter number 1 appears twice, the parsing succeeds.
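
A minimal sketch of such a check during parsing (assuming adapter numbers parse to u32):

    use std::collections::HashSet;

    fn check_unique(adapter_nos: &[u32]) -> Result<(), String> {
        let mut seen = HashSet::new();
        for &no in adapter_nos {
            // HashSet::insert returns false if the value was already present.
            if !seen.insert(no) {
                return Err(format!("adapter number {no} specified twice"));
            }
        }
        Ok(())
    }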

Add workspace-wide clippy lint lists

Quoting #514 (comment):

Is it possible to enable these in some way for the entire workspace?

Good question. It's gonna be possible in stable soon:

https://rust-lang.github.io/rfcs/3389-manifest-lint.html

Quoting the RFC:

Currently, you can configure lints through

    #[<level>(<lint>)] or #![<level>(<lint>)], like #[forbid(unsafe)]
        But this doesn't scale up with additional targets (benches, examples, tests) or workspaces
    On the command line, like cargo clippy -- --forbid unsafe
        This puts the burden on the caller
    Through RUSTFLAGS, like RUSTFLAGS=--forbid=unsafe cargo clippy
        This puts the burden on the caller
    In .cargo/config.toml's target.*.rustflags
        This couples you to running in specific directories, and not running in the right directory causes rebuilds
        The cargo team has previously stated that they would like to see package-specific config moved to manifests (https://internals.rust-lang.org/t/proposal-move-some-cargo-config-settings-to-cargo-toml/13336/14?u=epage)

So to enable them for the workspace we could put them in RUSTFLAGS for all targets in a .cargo/config.toml.

We should fix this when [lints] is available in the stable toolchain we target.
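
When it is, the shape would be roughly this (the lint shown is just an example):

    # Workspace root Cargo.toml
    [workspace.lints.clippy]
    undocumented_unsafe_blocks = "deny"

    # Each member crate's Cargo.toml then opts in:
    [lints]
    workspace = true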

vsock: Have two separate threads for handling TX and RX

Currently, the VhostUserVsockBackend handles both TX and RX in a single thread. It would be nice to have two separate threads handling TX and RX.

Implementing this would involve the following:

  • Define two threads in VhostUserVsockBackend with the appropriate queue_masks: 1 for the RX thread and 2 for the TX thread.
  • Set the TX thread's vring worker to also listen for BACKEND_EVENTs by calling set_vring_worker during initialization in start_backend_server.
  • Handle RX or TX in VhostUserVsockBackend::handle_event depending on the thread_id.
  • Refactor the hashmaps and backend_rxq contained in VsockThreadBackend to be accessible by both the threads (maybe over an RwLock).
  • Make the option to choose between single-threaded or multi-threaded configurable by the user.

How to let cloud-hypervisor communicate with the vhost RNG daemon

Start the RNG backend daemon:

# vhost-device-rng --socket-path=/tmp/rng.sock -c 1 -m 512 -p 1000

Use ch-remote to let a running cloud-hypervisor communicate with the RNG daemon:

# ch-remote --api-socket /work/run/clh.sock add-user-device socket=/tmp/rng.sock0,id=rng0
Error running command: Server responded with an error: InternalServerError: ApiError(VmAddUserDevice(DeviceManager(VfioUserCreateClient(StreamRead(Os { code: 104, kind: ConnectionReset, message: "Connection reset by peer" })))))

# ch-remote --api-socket /work/run/clh.sock add-device socket=/tmp/rng.sock0,id=rng0
Error running command: Error parsing device syntax: Error parsing --device: unknown option: socket

The error occurs with both the add-device and the add-user-device commands.

vsock: make TX buffer size configurable

The buffer space used for the TX queue (TX is from the guest's point of view, as specified in the spec, so this is the guest -> host side) has a fixed size (i.e. CONN_TX_BUF_SIZE).
That buffer is used to store bytes coming from the guest before sending them to the Unix domain socket.
Some use cases might want to increase or decrease this space, so it would be best to make it user-customizable.
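
For example, a sketch of a clap option for it (the flag name and default are hypothetical):

    use clap::Parser;

    #[derive(Parser)]
    struct VsockArgs {
        /// Size (in bytes) of the per-connection TX buffer (guest -> host),
        /// replacing the fixed CONN_TX_BUF_SIZE.
        #[arg(long, default_value_t = 64 * 1024)]
        tx_buffer_size: u32,
    }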

Revisit pub types

For binary crates, most types should be local to the crate. Revisit the existing crates to find any violations.

[i2c] Define an `Error` type

The error type is needed so we can:

  • get rid of prints in case of error; the error message needs to be comprehensive enough to replace println in quite a few places
  • be able to properly add negative tests where we actually assert against the Error type and not just call is_err.
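
For instance, a minimal shape using the thiserror crate (variant names are examples):

    use thiserror::Error;

    #[derive(Error, Debug, PartialEq)]
    pub enum I2cError {
        #[error("invalid device address: {0}")]
        InvalidDeviceAddress(usize),
        #[error("adapter {0} specified more than once")]
        DuplicateAdapter(u32),
    }

    // Negative tests can then assert the exact variant instead of is_err(),
    // e.g. (hypothetical): assert_eq!(parse("bad"), Err(I2cError::InvalidDeviceAddress(300)));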

[vsock] tx/rx hang after a large packet is delivered

guest side: while true; do ncat -e /bin/cat --vsock -l 1234 -k; done;
host side:

import socket

socket_path = '/tmp/vm3.vsock'
client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
client.connect(socket_path)

# Ask the vhost-user-vsock daemon to connect to port 1234 in the guest.
message = 'connect 1234\n'
client.sendall(message.encode())
response = client.recv(1024)
print(response)

# Repeatedly push 1 MiB payloads through the guest's ncat echo loop.
i = 0
while True:
    message = ' ' * 1024 * 1024
    client.sendall(message.encode())
    response = client.recv(1024 * 1024)
    print(response, i)
    i += 1

client.close()

While doing a stress test of vhost-device-vsock, I found a bug with large RX packets. It gets stuck after VSOCK_OP_CREDIT_REQUEST in VsockConnection.recv_pkt() (and it looks like there is no VSOCK_OP_CREDIT_UPDATE in TX). It happens both in QEMU and in crosvm. I was curious whether similar bugs have happened before. (BTW, I found https://lore.kernel.org/netdev/[email protected]/, which is similar.)

[i2c] Improve the `I2cAdapterTrait`

There are a few odd things in the trait. For example, get methods that need a mutable reference, and set methods that DO NOT need a mutable reference:

    /// Sets device's address for an I2C adapter.
    fn set_device_addr(&self, addr: usize) -> Result<()>;

    /// Gets adapter's functionality
    fn get_func(&mut self) -> Result<()>;

Is this a naming problem or something else?
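
If it is just a mutability mix-up, the expected shape would be the opposite (a sketch with a placeholder Result type):

    type Result<T> = std::result::Result<T, ()>; // placeholder error type

    trait I2cAdapterTrait {
        /// Sets the device's address for an I2C adapter: this mutates
        /// adapter state, so it should take &mut self.
        fn set_device_addr(&mut self, addr: usize) -> Result<()>;

        /// Gets the adapter's functionality: read-only, so &self suffices.
        fn get_func(&self) -> Result<()>;
    }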

[vsock] Questions about queue size

For now, the queue size for vsock is 256. Is this determined by the spec, or is it an arbitrary value?

I assume the vhost-user frontend can send VHOST_USER_SET_VRING_NUM with any value below the maximum queue size, which means the back-end should handle every valid value. Is that the right assumption? (And there is no API for the frontend to query the backend's queue size.)

Based on that assumption, some VMMs (crosvm) send VHOST_USER_SET_VRING_NUM with the maximum value (32768). So I was curious whether we can increase the queue size, if there is no regression in QEMU.

Potential Bug: The update of cid_map doesn't check for conflicting CIDs

The update of cid_map uses the insert() method, which updates the value if the map already contained the key. The code below doesn't check the return value:
https://github.com/rust-vmm/vhost-device/blob/2fa80555d242f5aaa1d188cf79f982263c8bfd51/crates/vsock/src/vhu_vsock_thread.rs#L114C41-L114C41
This could happen if somebody accidentally writes a config that gives the same CID to different VMs. It would be nice to have a CID conflict check in the config parser.
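
A minimal sketch of checking the return value (map types simplified):

    use std::collections::HashMap;

    fn register_cid(
        cid_map: &mut HashMap<u64, String>,
        cid: u64,
        uds_path: String,
    ) -> Result<(), String> {
        // HashMap::insert returns Some(old_value) if the key already existed.
        if cid_map.insert(cid, uds_path).is_some() {
            return Err(format!("guest CID {cid} is already in use"));
        }
        Ok(())
    }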

Add tests to increase code coverage for vhost-device-sound

The goal is to increase the code coverage percentage by adding a couple of small tests for the vhost-device-sound crate. Currently the alsa.rs testing module is not covered; consider adding tests for alsa.rs or lib.rs (specifically, the Drop impl for IOMessage is not covered yet).
You can run cargo llvm-cov test --summary-only in the staging crate to see statistics on covered and missed functions.

Crash during vhost-device-sound test execution

> RUSTFLAGS=-Zsanitizer=address cargo +nightly test  -Zbuild-std --target x86_64-unknown-linux-gnu
=================================================================
==113604==ERROR: AddressSanitizer: global-buffer-overflow on address 0x5638bab45194 at pc 0x5638b97b4302 bp 0x7f2b838fa440 sp 0x7f2b838f9bc8
READ of size 21 at 0x5638bab45194 thread T3
    #0 0x5638b97b4301 in printf_common(void*, char const*, __va_list_tag*) /rustc/llvm/src/llvm-project/compiler-rt/lib/asan/../sanitizer_common/sanitizer_common_interceptors_format.inc:561:9
    #1 0x5638b97b4699 in vsnprintf /rustc/llvm/src/llvm-project/compiler-rt/lib/asan/../sanitizer_common/sanitizer_common_interceptors.inc:1649:1
    #2 0x5638b97b6150 in snprintf /rustc/llvm/src/llvm-project/compiler-rt/lib/asan/../sanitizer_common/sanitizer_common_interceptors.inc:1720:1
    #3 0x7f2b87313bda  (/lib64/libpipewire-0.3.so.0+0x90bda) (BuildId: e562ee24d1ec529637bda61d89c8fd3c1b3a7610)
    #4 0x5638ba1238aa in pipewire::thread_loop::ThreadLoopInner::new::h14ad85864c86dc6b /home/ablu/.cargo/git/checkouts/pipewire-rs-e803a8db90410a99/5fe090b/pipewire/src/thread_loop.rs:119:21
    #5 0x5638ba123112 in pipewire::thread_loop::ThreadLoop::new::hc350740d617dd9fc /home/ablu/.cargo/git/checkouts/pipewire-rs-e803a8db90410a99/5fe090b/pipewire/src/thread_loop.rs:27:21
    #6 0x5638b9bb1d79 in vhost_device_sound::audio_backends::pipewire::PwBackend::new::h09c310b67c56527e /home/ablu/projects/rust-vmm/vhost-device/staging/vhost-device-sound/src/audio_backends/pipewire.rs:91:36
    #7 0x5638b9ad24cf in vhost_device_sound::audio_backends::pipewire::tests::test_pipewire_backend_invalid_stream::hdd55eb512f704995 /home/ablu/projects/rust-vmm/vhost-device/staging/vhost-device-sound/src/audio_backends/pipewire.rs:672:26
    #8 0x5638b9888736 in vhost_device_sound::audio_backends::pipewire::tests::test_pipewire_backend_invalid_stream::_$u7b$$u7b$closure$u7d$$u7d$::h0f039bbaaa962ba9 /home/ablu/projects/rust-vmm/vhost-device/staging/vhost-device-sound/src/audio_backends/pipewire.rs:667:46
    #9 0x5638b98919fb in core::ops::function::FnOnce::call_once::h6914c5dd48499231 /home/ablu/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/ops/function.rs:250:5
    #10 0x5638b9ce0611 in core::ops::function::FnOnce::call_once::h70320ddee4c636ac /home/ablu/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/ops/function.rs:250:5
    #11 0x5638b9ccb2a4 in test::__rust_begin_short_backtrace::hfb575a9368441a2d /home/ablu/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/test/src/lib.rs:628:18
    #12 0x5638b9e1d1c2 in test::types::RunnableTest::run::hf7c171c9edc28fc5 /home/ablu/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/test/src/types.rs:146:40
    #13 0x5638b9cccce2 in test::run_test_in_process::_$u7b$$u7b$closure$u7d$$u7d$::h572fdf15830248f4 /home/ablu/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/test/src/lib.rs:651:60
    #14 0x5638b9c59262 in _$LT$core..panic..unwind_safe..AssertUnwindSafe$LT$F$GT$$u20$as$u20$core..ops..function..FnOnce$LT$$LP$$RP$$GT$$GT$::call_once::h71a1813592659c55 /home/ablu/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/panic/unwind_safe.rs:272:9
    #15 0x5638b9c5c097 in std::panicking::try::do_call::h49dbcf71bc85d7eb /home/ablu/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/std/src/panicking.rs:552:40
    #16 0x5638b9c79f8a in __rust_try test.e0adccd545cddd4a-cgu.04
    #17 0x5638b9c5b9bd in std::panicking::try::he9b294f69e4c06f1 /home/ablu/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/std/src/panicking.rs:516:19
    #18 0x5638b9d521dd in std::panic::catch_unwind::hcffe6df29e6c16b9 /home/ablu/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/std/src/panic.rs:142:14
    #19 0x5638b9ccbca3 in test::run_test_in_process::hdc33a415e9ce1dac /home/ablu/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/test/src/lib.rs:651:27
    #20 0x5638b9cca1a0 in test::run_test::_$u7b$$u7b$closure$u7d$$u7d$::hc9c2c1ea7f6be189 /home/ablu/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/test/src/lib.rs:574:43
    #21 0x5638b9ccaaeb in test::run_test::_$u7b$$u7b$closure$u7d$$u7d$::h62db2dbecba302d8 /home/ablu/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/test/src/lib.rs:602:41
    #22 0x5638b9e7d537 in std::sys_common::backtrace::__rust_begin_short_backtrace::h815185ac21d7b89f /home/ablu/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/std/src/sys_common/backtrace.rs:154:18
    #23 0x5638b9cae0ca in std::thread::Builder::spawn_unchecked_::_$u7b$$u7b$closure$u7d$$u7d$::_$u7b$$u7b$closure$u7d$$u7d$::h9ef91c8c7ae4dd0a /home/ablu/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/std/src/thread/mod.rs:529:17
    #24 0x5638b9c5930e in _$LT$core..panic..unwind_safe..AssertUnwindSafe$LT$F$GT$$u20$as$u20$core..ops..function..FnOnce$LT$$LP$$RP$$GT$$GT$::call_once::ha9c6508541a5ab85 /home/ablu/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/panic/unwind_safe.rs:272:9
    #25 0x5638b9c5c196 in std::panicking::try::do_call::hb7fb385ffe79970c /home/ablu/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/std/src/panicking.rs:552:40
    #26 0x5638b9c79f8a in __rust_try test.e0adccd545cddd4a-cgu.04
    #27 0x5638b9c5a605 in std::panicking::try::h2604580b318e399e /home/ablu/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/std/src/panicking.rs:516:19
    #28 0x5638b9d5203d in std::panic::catch_unwind::h3b0182f6623e95d9 /home/ablu/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/std/src/panic.rs:142:14
    #29 0x5638b9cad952 in std::thread::Builder::spawn_unchecked_::_$u7b$$u7b$closure$u7d$$u7d$::h07a5bcc7448902ae /home/ablu/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/std/src/thread/mod.rs:528:30
    #30 0x5638b9cde7de in core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::h61bcf01c6e8d57d4 /home/ablu/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/ops/function.rs:250:5
    #31 0x5638ba49c721 in _$LT$alloc..boxed..Box$LT$F$C$A$GT$$u20$as$u20$core..ops..function..FnOnce$LT$Args$GT$$GT$::call_once::ha4efd1e3cb3b5d65 /home/ablu/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/alloc/src/boxed.rs:2007:9
    #32 0x5638ba49cb43 in _$LT$alloc..boxed..Box$LT$F$C$A$GT$$u20$as$u20$core..ops..function..FnOnce$LT$Args$GT$$GT$::call_once::hd4e9872d06c8c505 /home/ablu/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/alloc/src/boxed.rs:2007:9
    #33 0x5638ba522b71 in std::sys::unix::thread::Thread::new::thread_start::hb29e99cc3e80c68a /home/ablu/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/std/src/sys/unix/thread.rs:108:17
    #34 0x5638b981a24a in asan_thread_start(void*) /rustc/llvm/src/llvm-project/compiler-rt/lib/asan/asan_interceptors.cpp:225:31
    #35 0x7f2b86f16896 in start_thread (/lib64/libc.so.6+0x8e896) (BuildId: 651b2bed7ecaf18098a63b8f10299821749766e6)
    #36 0x7f2b86f9d6bb in __GI___clone3 (/lib64/libc.so.6+0x1156bb) (BuildId: 651b2bed7ecaf18098a63b8f10299821749766e6)

0x5638bab45194 is located 44 bytes before global variable 'alloc_06e2b0e850ef61146012249bb845fb03' defined in 'o3ah09rws3xh3gh' (0x5638bab451c0) of size 49
0x5638bab45194 is located 0 bytes after global variable 'alloc_3e10e733cdb6951eb77efacb33035e81' defined in 'o3ah09rws3xh3gh' (0x5638bab45180) of size 20
SUMMARY: AddressSanitizer: global-buffer-overflow /rustc/llvm/src/llvm-project/compiler-rt/lib/asan/../sanitizer_common/sanitizer_common_interceptors_format.inc:561:9 in printf_common(void*, char const*, __va_list_tag*)
Shadow bytes around the buggy address:
  0x5638bab44f00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x5638bab44f80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x5638bab45000: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x5638bab45080: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x5638bab45100: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
=>0x5638bab45180: 00 00[04]f9 f9 f9 f9 f9 00 00 00 00 00 00 01 f9
  0x5638bab45200: f9 f9 f9 f9 00 00 00 f9 f9 f9 f9 f9 00 00 00 01
  0x5638bab45280: f9 f9 f9 f9 00 03 f9 f9 00 f9 f9 f9 00 00 00 f9
  0x5638bab45300: f9 f9 f9 f9 00 00 00 00 00 04 f9 f9 f9 f9 f9 f9
  0x5638bab45380: 00 00 00 07 f9 f9 f9 f9 07 f9 f9 f9 00 00 f9 f9
  0x5638bab45400: 00 00 f9 f9 00 01 f9 f9 00 f9 f9 f9 07 f9 f9 f9
Shadow byte legend (one shadow byte represents 8 application bytes):
  Addressable:           00
  Partially addressable: 01 02 03 04 05 06 07 
  Heap left redzone:       fa
  Freed heap region:       fd
  Stack left redzone:      f1
  Stack mid redzone:       f2
  Stack right redzone:     f3
  Stack after return:      f5
  Stack use after scope:   f8
  Global redzone:          f9
  Global init order:       f6
  Poisoned by user:        f7
  Container overflow:      fc
  Array cookie:            ac
  Intra object redzone:    bb
  ASan internal:           fe
  Left alloca redzone:     ca
  Right alloca redzone:    cb
Thread T3 created by T0 here:
    #0 0x5638b980218d in pthread_create /rustc/llvm/src/llvm-project/compiler-rt/lib/asan/asan_interceptors.cpp:237:3
    #1 0x5638ba521fe1 in std::sys::unix::thread::Thread::new::haaa15b4190fcb8a6 /home/ablu/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/std/src/sys/unix/thread.rs:87:19
    #2 0x5638b9cac6a5 in std::thread::Builder::spawn_unchecked_::hdb6fabfafe730d67 /home/ablu/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/std/src/thread/mod.rs:571:30
    #3 0x5638b9cab101 in std::thread::Builder::spawn_unchecked::h9f4cc87b4a1f2b98 /home/ablu/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/std/src/thread/mod.rs:457:32
    #4 0x5638b9cae0e6 in std::thread::Builder::spawn::hf135da90b20714c1 /home/ablu/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/std/src/thread/mod.rs:389:18
    #5 0x5638b9cc18f6 in test::run_tests::hfcdd633cf9ee2d3a /home/ablu/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/test/src/lib.rs:414:21
    #6 0x5638b9d78515 in test::console::run_tests_console::hb2a690ec4d88d036 /home/ablu/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/test/src/console.rs:329:5
    #7 0x5638b9cb9840 in test::test_main::h4e620082feefe0b9 /home/ablu/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/test/src/lib.rs:143:15
    #8 0x5638b9cbb92d in test::test_main_static::hfd0c485c189709e8 /home/ablu/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/test/src/lib.rs:162:5
    #9 0x5638b9b5c3a2 in vhost_device_sound::main::h24c18a897f39109f /home/ablu/projects/rust-vmm/vhost-device/staging/vhost-device-sound/src/lib.rs:1:1
    #10 0x5638b9845bb3 in std::rt::lang_start::_$u7b$$u7b$closure$u7d$$u7d$::hc5c0737ed3849833 /home/ablu/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/std/src/rt.rs:167:18
    #11 0x5638ba4b198e in std::panicking::try::do_call::h54a4b3c1072629ac /home/ablu/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/std/src/panicking.rs:552:40
    #12 0x5638ba4bdc4a in __rust_try std.877bf41837851e8a-cgu.04
    #13 0x5638ba66c9b9 in std::panic::catch_unwind::h24a7dc974abb9554 /home/ablu/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/std/src/panic.rs:142:14
    #14 0x5638ba44d6e0 in std::rt::lang_start_internal::_$u7b$$u7b$closure$u7d$$u7d$::h4aa34b44ac44b24c /home/ablu/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/std/src/rt.rs:148:48
    #15 0x5638ba4b1cd6 in std::panicking::try::do_call::hb8fc96dc7d3209e8 /home/ablu/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/std/src/panicking.rs:552:40
    #16 0x5638ba4bdc4a in __rust_try std.877bf41837851e8a-cgu.04
    #17 0x5638ba66cb69 in std::panic::catch_unwind::h8da81933790a21d5 /home/ablu/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/std/src/panic.rs:142:14
    #18 0x5638ba44d1ff in std::rt::lang_start_internal::h93a2eefffde5b872 /home/ablu/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/std/src/rt.rs:148:20
    #19 0x5638b9845b0f in std::rt::lang_start::h19e48858eba19b4a /home/ablu/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/std/src/rt.rs:166:17
    #20 0x5638b9b5c3cd in main (/home/ablu/projects/rust-vmm/vhost-device/staging/target/x86_64-unknown-linux-gnu/debug/deps/vhost_device_sound-42fc582550183513+0x6ca3cd) (BuildId: d63f1806839caf18577e2df66bf58558d1bb1147)
    #21 0x7f2b86eb020a in __libc_start_main@GLIBC_2.2.5 (/lib64/libc.so.6+0x2820a) (BuildId: 651b2bed7ecaf18098a63b8f10299821749766e6)
    #22 0x5638b9794304 in _start (/home/ablu/projects/rust-vmm/vhost-device/staging/target/x86_64-unknown-linux-gnu/debug/deps/vhost_device_sound-42fc582550183513+0x302304) (BuildId: d63f1806839caf18577e2df66bf58558d1bb1147)

==113604==ABORTING
error: test failed, to rerun pass `-p vhost-device-sound --lib`

Caused by:
  process didn't exit successfully: `/home/ablu/projects/rust-vmm/vhost-device/staging/target/x86_64-unknown-linux-gnu/debug/deps/vhost_device_sound-42fc582550183513` (exit status: 1)
note: test exited abnormally; to see the full output pass --nocapture to the harness.

Addresses should be addr2line'able with Fedora's debuginfod.

Found while testing #461

/cc @dorindabassey @MatiasVara @epilys @stefano-garzarella

vsock: support configuration file

Currently the application only allows parameters (vhost socket, vsock socket, guest CID) to be specified via the CLI, but it might be useful to use a configuration file (e.g. in JSON) with this information.
Keep in mind that in the future a single application may handle multiple guests, so it would be good to provide the ability to easily extend it.
Additionally, it would be nice to support runtime configuration: the daemon could start without any backend and receive the configuration later (e.g. through a socket).
It's not very useful right now since we only support one guest, but it will be useful in the future, when we support more guests, to add/remove them at runtime.
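
For example, a hypothetical JSON configuration (all field names are invented):

    {
      "vms": [
        { "guest_cid": 3, "socket": "/tmp/vhost3.socket", "uds_path": "/tmp/vm3.vsock" },
        { "guest_cid": 4, "socket": "/tmp/vhost4.socket", "uds_path": "/tmp/vm4.vsock" }
      ]
    }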

[i2c] Inefficient use of memory when saving devices

Devices need to be unique. To achieve this, they're defined as device_map: [u32; MAX_I2C_VDEV]. This is then initialized as follows:

let mut device_map: [u32; MAX_I2C_VDEV] = [I2C_INVALID_ADAPTER; MAX_I2C_VDEV];

The value of MAX_I2C_VDEV is 1 << 7, which makes for inefficient memory usage, as this array is sparse (only a few elements are valid). A better structure for this is a HashMap if we need to remember indexes, or a HashSet otherwise.
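
A sketch of the suggested shape:

    use std::collections::HashMap;

    struct I2cDeviceConfig; // stand-in for the per-device configuration

    // Sparse by construction: only configured entries are stored, instead
    // of a [u32; 1 << 7] array that is mostly I2C_INVALID_ADAPTER.
    type DeviceMap = HashMap<u32, I2cDeviceConfig>;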

vsock: how can a host vsock server accept every guest CID?

I'd like to set up a vsock server on the host that accepts every connection, acting like svm_cid=VMADDR_CID_HOST. But for now, it needs to bind to the uds-path of a specific CID. Is there any way to do that currently, or do we need to set up multiple sockets, one for each CID?

Initial release (v0.1.0)

In the next few days, we should start releasing the first version of the binaries available in this workspace.

I was starting to look at the process and have a few questions I would like to discuss:

  • should we create a tag for each different crate (e.g. vhost-user-vsock-v0.1.0, etc.) or a single tag (e.g. v0.1.0)?
  • do we have to create a crate in crates.io for each binary (I think so, but I don't know if there is a way to publish them all together)
  • when do we want to stop development and tag the first versions?

Next Friday I would like to give it a first try, but since the workspace is big, maybe we should define a roadmap together.

Add a folder or nested workspace to support experimental devices

Some devices (such as virtio-gpu that we are discussing here: #445) do not yet have a final specification merged into virtio-spec.

However, it would be useful to merge these devices anyway and maintain them until the specification is stable. For this reason, we should provide a sub-directory (as suggested by @Ablu here: #445 (comment)) or a nested workspace, called for example experimental, to support these devices.

[i2c] Decide on a unified way of displaying errors, warnings, and info

The code has scattered println in it. We should decide on the logging strategy, and use the appropriate macros (i.e. info!, error!, warn! as they're defined in the log crate).

Extensive use of println and unwraps is making the code hard to test for error scenarios. We should replace these with returning errors, and only use expects or unwraps in main.rs.
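
For example, with the log crate (the function is illustrative):

    use log::{error, info};

    fn open_adapter(path: &str) -> std::io::Result<std::fs::File> {
        info!("opening I2C adapter {path}");
        std::fs::File::open(path).map_err(|e| {
            // Log and return the error instead of println! + carrying on;
            // the caller (ultimately main.rs) decides whether to unwrap.
            error!("failed to open {path}: {e}");
            e
        })
    }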

Allow controlling of the maximum queue size for daemons

This came up with the Gunyah integration testing with crosvm. As their implementation has a fairly small amount of memory in the shared-memory portion, they couldn't fit a QUEUE_SIZE-sized queue in it. This was solved with a local patch for the demo, but obviously it would be nice to control this via an option/config.
