
kvssd's People

Contributors

changhochoi, ilgulove, jingpei-yang, kvssdsamsung, nengjunma, xadongfei, yuanyinggao, ywkang-ssi


kvssd's Issues

kvs_init_* not documented

Location (Korea, USA, China, India, etc.)
USA

Describe the bug
The KV_ADI spec does not document anything related to the initialization functions, even though they are clearly used in the sample code and in kvs_api.h.

The comments in kvs_api.h do not explain what the initialization fields in kv_init_options do or how they affect behavior.
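For context, the environment-initialization sequence that the sample code (reproduced in a later issue on this page) goes through looks roughly like this; what each option field actually controls is exactly what the spec leaves undocumented, so the comments below are best guesses, not documented semantics:

kvs_init_options options;
kvs_init_env_opts(&options);              /* populate defaults */
options.memory.use_dpdk = 0;              /* 0 appears to select the kernel-driver/emulator path */
options.aio.iocomplete_fn = NULL;         /* async completion callback, when async I/O is used */
options.emul_config_file = "../kvssd_emul.conf";  /* presumably only consulted by the emulator */
kvs_init_env(&options);                   /* must run before kvs_open_device() */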


kernel support update needed: kernel-devel-3.10.0-1062.9.1.el7

Location (Korea, USA, China, India, etc.)
South Korea

Describe the bug
kernel-devel-3.10.0-1062.9.1.el7 (published on Dec. 9) removes the function below:

  • [nvme] blk-mq: remove blk_mq_complete_request_sync (Ming Lei) [1763624 1730922]

so the driver for kernel_v3.10.0-1062-centos-7_7 no longer compiles.
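A typical workaround (an illustrative sketch only, not the actual patch: the RHEL release-check macro here is an assumption, and since the symbol was removed inside a z-stream update, a configure-time symbol probe may be needed instead) is to guard the removed call:

/* sketch: fall back when blk_mq_complete_request_sync() no longer exists */
#if defined(RHEL_RELEASE_CODE) && RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(7, 7)
	blk_mq_complete_request(req);        /* the _sync variant was removed */
#else
	blk_mq_complete_request_sync(req);
#endif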

To Reproduce
Steps to reproduce the behavior:

  1. yum update
  2. yum install kernel-devel
  3. compile the KVSSD driver for kernel_v3.10.0-1062-centos-7_7

Screenshots
image

System environment (please complete the following information)

  • Firmware version : -
  • Number of SSDs : 2
  • OS & Kernel version [e.g., Ubuntu 16.04 Kernel v4.9.5]: Centos kernel-3.10.0-1062.9.1.el7
  • GCC version : gcc v7.3.1
  • User driver version :
  • Driver [Kernel or user driver or emulator] :

[kvbench] About KVS_ERR_DEV_CAPACITY error

I tried to run kvbench (KVS) with 5 billion documents and 5 billion operations, but it stopped with a KVS_ERR_DEV_CAPACITY error during the benchmark.
However, when I checked the current state of the KVSSD with the 'sudo nvme list' command, it turned out that only 5.12MB is used.
Is there an internal restriction on the usable capacity of a KVSSD for a single container?

Regarding iterator and KVSSD's utilization

Location
Korea.

Describe the bug
When deleting tuples using kvs_delete_tuple_async(), the utilization of the KVSSD does not change. Moreover, the utilization reported by the API differs from the 'sudo nvme list' output.

To Reproduce
Steps to reproduce the behavior:

  1. Use an iterator (bitmask=0x80000000, bit_pattern=0x8000000) to get keys (I had already written a large number of key-value pairs, about 250GB, to the KVSSD).
  2. In the print_iterator_keyvals() function, I checked whether each key-value pair exists, then deleted it using kvs_delete_tuple_async().
  3. I confirmed that a large number of key-value pairs were deleted.

Expected behavior

  1. After I delete key-value pairs using kvs_delete_tuple_async(), the utilization of the KVSSD should decrease, but it does not change.
  2. Moreover, after deleting key-value pairs, I used the iterator again and it still returned keys whose values I had already deleted, which wastes time on the second pass.
  3. The utilization of the KVSSD reported by Samsung's APIs differs from the 'sudo nvme list' output (see the sketch after this list).
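For reference, the comparison in point 3 was made roughly as follows (a sketch; the exact getter name and signature are assumptions from my reading of kvs_api.h, to be verified against the installed headers):

int32_t util = 0;
/* assumed API: kvs_get_device_utilization(dev, &util) fills a scaled percentage */
kvs_get_device_utilization(dev, &util);
printf("utilization via KV API: %d\n", util);
/* versus the block-level view reported by: sudo nvme list */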

Screenshots
Utilization of KVSSD checked by APIs
Capture

Utilization of KVSSD checked by 'sudo nvme list'
Capture1

System environment (please complete the following information)

  • Firmware version : EEA51K0Q
  • Number of SSDs : 1 (block device: nvme0n1 / controller: nvme0)
  • OS & Kernel version: Ubuntu 16.04 Kernel v4.4.0-131-generic
  • GCC version: 5.4.0 20160609
  • KV API version: 1.0.0 (latest)
  • Driver: KDD (kernel_v4.4.0-141-ubuntu-16_04)

[kvbench] Questions about factors in bench_config.ini

  1. What does 'cache_size_MB' in [db_config] mean? Does it refer to main memory used as a cache, or something else (e.g., a buffer cache inside the KVSSD)?

  2. [blobfs] includes two settings, spdk_conf_file and spdk_cache_size. Are they only for UDD mode, or do they also affect KDD/Emulator?

Repo is too big to clone


Question about small value padding

In "KV SSD Firmware Introduction" file of the KV SSD Presentation wiki, the KV-SSD features value size of 0B~2MB with padded up to 1KB for small values internally. I have an idea for using KV-SSD for my research. However, I use key-value storage that all values are size of 4 or 8 bytes, which is very small compared to 1KB. In that case, there could be a lot of useless space by padding. I'd like to know weather there is a way not to pad for storing lots of small values in KV-SSD.

[KV API] Requirement for using UDD

The provided UDD requires a CPU that supports the BMI2 instruction set, because of the 'shrx' instruction in the rte_cpu_get_flag_enabled() function. Is there a user-space device driver that can be used with the KV API without BMI2?
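A quick standalone check for whether a host can run that build (this probe is my own sketch using a GCC/Clang builtin, not part of the KVSSD tree):

#include <stdio.h>

int main(void) {
    /* __builtin_cpu_supports() queries CPUID for the named feature bit */
    if (__builtin_cpu_supports("bmi2"))
        printf("BMI2 present: the shrx path in rte_cpu_get_flag_enabled() should be safe\n");
    else
        printf("BMI2 absent: this uNVMe/DPDK build would hit SIGILL here\n");
    return 0;
}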

KVSSD utilization in sample_test_async

Location (Korea, USA, China, India, etc.)
Korea

Describe the bug
When running sample_test_async, the KVSSD's capacity is printed correctly, but its utilization is always printed as 0.

To Reproduce
Steps to reproduce the behavior:

  1. sudo ./sample_code_async -d /dev/nvme1n1 -n 524288 -q 64 -o 1 -k 16 -v 1048576

Expected behavior
The total size written is 512 GiB (524,288 values × 1 MiB), so sample_code_async should print the KVSSD's utilization as 143 rather than 0 (512 GiB is roughly 14.3% of a 3.84 TB device, which would match 143 if utilization is reported in units of 0.1%).

Screenshots
image

System environment (please complete the following information)

  • Firmware version : ETA51KBE
  • Number of SSDs : 1
  • OS & Kernel version : Ubuntu 16.04 Kernel v4.9.5
  • GCC version : 5.4.0
  • KV API version : the most recent one provided on GitHub
  • Driver [Kernel or user driver or emulator] : kernel driver

Workload

  • number of records : 524288
  • workload : sample_code_async
  • key size : 16
  • value size : 1048576
  • operation option if available [e.g., sync or async mode] : async

Additional context
Add any other context about the problem here.

[KVBench] Running kv_bench with iterator causes error

I'm testing kv_bench with the kernel device driver, and every time I run kv_bench with an iterator when write_mode=async, it fails with a KVS_ERR_SYS_IO error followed by a segmentation fault.

[Steps]
(1) When the with_iterator option is set to 'false' (no iterator), it terminates normally. However, when I set it to 'true' or 'alone', it becomes buggy, terminates with a segmentation fault, and everything on my machine freezes.
(2) Even the 'reboot' and 'shutdown' commands don't work, so I have to power-cycle the machine.
(3) When I check whether everything is fine with a test program (usually sample_code_async) after reboot, it fails with KVS_ERR_DEV_INIT.
(4) I recompile/reinstall the NVMe modules for the KVSSD to resolve the error, but after that the system doesn't recognize the NVMe controller and block device. To solve this, I reboot the machine again.
(5) Now the KVSSD firmware is damaged, shown as 'ERRORMOD'. Luckily, I can recover the firmware successfully every time it is damaged. However, when I run kv_bench with the same settings again, the same error occurs.

Below, the line marked with '=>' is where I encountered the segmentation fault according to gdb
('mov (%rdx), %r8' in the assembly)
(KVSSD/application/kvbench/wrappers/couch_kv.cc):


void on_io_complete(kvs_callback_context* ioctx) {
  if (ioctx->result != KVS_SUCCESS && ioctx->result != KVS_ERR_KEY_NOT_EXIST) {
=>  fprintf(stdout, "io error: op = %d, key = %s, result = %s\n",
            ioctx->opcode, (char*)ioctx->key->key, kvs_errstr(ioctx->result));
    exit(1);
  }
}

Until now, I couldn't find any information related to this error. Is this a known issue?
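One defensive variant (a sketch only, not a root-cause fix; that ioctx->key can be NULL on the failing completion is an assumption based on the faulting instruction) guards the dereference before formatting the message:

void on_io_complete(kvs_callback_context* ioctx) {
  if (ioctx->result != KVS_SUCCESS && ioctx->result != KVS_ERR_KEY_NOT_EXIST) {
    /* guard against a completion that carries no key */
    const char *k = (ioctx->key && ioctx->key->key)
                        ? (const char*)ioctx->key->key : "(null)";
    fprintf(stdout, "io error: op = %d, key = %s, result = %s\n",
            ioctx->opcode, k, kvs_errstr(ioctx->result));
    exit(1);
  }
}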

Why does user library use only one submission/completion queue?

Location (Korea, USA, China, India, etc.)
Korea

Describe the bug
I have a question about the KVSSD user library design, not a bug report.


System environment (please complete the following information)

  • Firmware version : ETA51KBV_20190809_ENC.bin
  • Number of SSDs : 1
  • OS & Kernel version [e.g., Ubuntu 16.04 Kernel v4.9.5]: kernel_v4.13.15-041315-ubuntu-16_04
  • GCC version [e.g., gcc v5.0.0] : gcc/g++ v7.5.0
  • kvbench version if kvbench runs [e.g., v0.6.0]:
  • KV API version [e.g., v0.6.0]: Latest
  • User driver version : Latest
  • Driver [Kernel or user driver or emulator] : kernel driver

Workload

  • key size : 16B
  • value size : 4KB
  • operation option if available [e.g., sync or async mode] : Both of them

Additional context

Why does the user library use only one queue pair in the current implementation? To exploit the scalability and maximum bandwidth of a device in the NVMe architecture, a multi-queue structure is appropriate. For the kernel driver, I confirmed that an AIO completion thread is created per core. So my question is why the user library uses only a single queue pair. Are there technical issues or limitations that prevent multi-queue support in the user library?

KV-SSD internals

I am trying to understand the internals of KV-SSD and at what granularity the KV device is implemented in the KV Emulator.

  1. On the FAQ page you mention that KV-SSD does not support offset read/write. But on page 17 of a SNIA KV-SSD presentation, the diagram seems to suggest that a key-value pair (KVP) is fetched directly from a page. In a scenario where a physical KV-SSD page stores multiple KVPs and an application issues a read for a specific KVP, does the KV-SSD read the whole page into the device cache, extract the requested KVP from the page content, and return it to the device driver? Or does the KV-SSD read only the target KVP from the page?

  2. The above presentation also mentions that values have 32B granularity. Since KV-SSD does not support offset read/write, what does such a granularity signify for reads and writes? Is it just that the physical device reads/writes values in multiples of 32B?

  3. What does the User/Device Hash Key signify (page 17 of the above-mentioned presentation)? Is it a hash of the key in the key-value pair provided by an application?

  4. What is stored in the metadata section, as depicted on page 17 of the above-mentioned presentation? What is it used for?

  5. Does KV ADI play a role similar to that of file systems on block SSDs? If yes, how are the two similar or dissimilar? If not, what is the role of KV ADI?

  6. How does the KV I/F send data that is too big for one I/F command to handle?

[Question] what makes the maximum key/value size to 255B/2MB, respectively

Dear all,

I have a tiny question about KVSSD system's features.

I have noticed that

  1. KVSSD uses a KV interface extended from the NVMe standard, and
  2. the maximum key and value sizes of KVSSD are 255B and 2MB, respectively,

but I can't work out what limits the maximum key/value size to 255B/2MB.
(Naively, the maximum value size could be 512MB or even 4GB, because 4 bytes are allocated to represent the 'value size' in the KV I/F.)

Is there any specific reason for these limits?

There are some comments like 'max length of key/value in bytes that device is able to support', but I can't understand the exact meaning of 'device is able to support'. Which characteristics of the storage device (or SSD) limit the key/value length?

[KVBench]There are no disk write throughput data when testing KV SSD

When I'm running kv_bench on a KV SSD, there seems to be no data for some throughput results:

warming up
(5.0 s / 5 s, 77944.48 ops/s, 77895.79 ops/s)
(0.00 KB, 0.00 KB/s 0.0 x, 0.00 KB/s 0.0 x)
evaluation
(5.0 s / 5 s, 78038.87 ops/s, 78259.20 ops/s)
(0.00 KB, 0.00 KB/s 0.0 x, 0.00 KB/s 0.0 x)
5.0 sec elapsed
195123 reads (39010.49 ops/sec, 25.63 us/read)
195100 writes (39005.90 ops/sec, 25.64 us/write)
total 390223 operations performed
Throughput(Benchmark) 78016.39 ops/sec
average latency 12.817820
total 0 bytes (0.00 KB) written during benchmark
average disk write throughput: 0.00 MB/s
0.00 KB written per doc update (0.0 x write amplification)

The full process as follows:

[root@localhost build_kv]# ./kv_bench -f bench_config.ini
Using "bench_config.ini" config file
with iterator 0 - mode 0
db name /mnt/rocksdb
node 0: 0 1 2 3 4 5 6 7
pop -- dev nvme0n1, numaid -1, core: -1
bench --- dev nvme0n1, numaid -1, core: -1
ratio is 50:50:0:0

 === benchmark configuration ===
DB module: KVS
random seed: 1599480647
filename: /mnt/rocksdb# (initialize)
# documents (i.e. working set size): 100000
# threads: reader 0, iterator 0, writer 1, deleter 0
# auto-compaction threads: 4
block cache size: 16.00 GB
key length: Fixed size(16) / body length: Fixed size(4096)
batch distribution: Uniform
benchmark duration: 5 seconds
read batch size: point Uniform(1,1), range Uniform(500,1500)
write batch size: Uniform(1,1)
inside batch distribution: Uniform (-1 ~ +1, total 2)
write ratio: 50 % (synchronous)
insertion order: sequential fill
master core = 5, mask = 20
 device init done
db 0 name is /mnt/rocksdb/0
device open /dev/nvme0n1
thread 0 initialize key pool - base is 0x7f659aa51800, next free 0x7f659aa51800 free 128
thread 0 initialize value pool - base is 0x7f659999dd00, next free 0x7f659999dd00 free 128, unit size 4096
[2.0 s] 100000 / 100000 (49991.53 ops/sec, 32990.69 ops/sec, 100.0 %, 0.00 KB)  (-0 s)
Throughput(Insertion) 49989.40 latency=20.004240

write latency distribution
148 samples (100 Hz), average: 915.96 us
46 us (1.00%), 165 us (5.00%), 636 us (50.00%), 2159 us (95.00%), 2484 us (99.00%), N/A us (99.99%)

population done
2.0 sec elapsed (49981.68 ops/sec)

benchmark
opening DB instance ..
2.0 sec elapsed

warming up
(5.0 s / 5 s, 77944.48 ops/s, 77895.79 ops/s)
(0.00 KB, 0.00 KB/s 0.0 x, 0.00 KB/s 0.0 x)
evaluation
(5.0 s / 5 s, 78038.87 ops/s, 78259.20 ops/s)
(0.00 KB, 0.00 KB/s 0.0 x, 0.00 KB/s 0.0 x)
5.0 sec elapsed
195123 reads (39010.49 ops/sec, 25.63 us/read)
195100 writes (39005.90 ops/sec, 25.64 us/write)
total 390223 operations performed
Throughput(Benchmark) 78016.39 ops/sec
average latency 12.817820
total 0 bytes (0.00 KB) written during benchmark
average disk write throughput: 0.00 MB/s
0.00 KB written per doc update (0.0 x write amplification)

write latency distribution
213 samples (100 Hz), average: 1252.24 us
53 us (1.00%), 115 us (5.00%), 1148 us (50.00%), 3061 us (95.00%), 3728 us (99.00%), N/A us (99.99%)

read latency distribution
286 samples (100 Hz), average: 270.97 us
116 us (1.00%), 130 us (5.00%), 240 us (50.00%), 567 us (95.00%), 901 us (99.00%), N/A us (99.99%)

waiting for termination of DB module..

5.0 sec elapsed

And the bench_config.ini:

[document]
ndocs = 100000
amp_factor = 1

[log]
filename = logs/ops

[system]
allocator = posix
key_pool_size = 128
key_pool_unit = 16
key_pool_alignment = 4096
value_pool_size = 128
value_pool_unit = 4096
value_pool_alignment = 4096
device_path = /dev/nvme0n1

[kvs]
device_path = /dev/nvme0n1 # /dev/nvme0n1 # 0000:06:00.0
emul_configfile = /tmp/kvemul.conf
store_option = post
queue_depth = 64
aiothreads_per_device = 1
core_ids = 1,3,5
cq_thread_ids = 2,4,6
mem_size_mb = 1024
write_mode = async
with_iterator = false #alone # true/
iterator_mode = key

[aerospike]
hosts = 127.0.0.1
port = 3000
loop_capacity = 10
namespace = test2

[blobfs]
spdk_conf_file = /tmp/rocksdb.conf
spdk_cache_size = 4096

[db_config]
cache_size_MB = 16384
compaction_mode = auto
auto_compaction_threads = 4
wbs_init_MB = 64
wbs_bench_MB = 4
bloom_bits_per_key = 10
compaction_style = level
fdb_wal = 4096
wt_type = b-tree
compression = false
split_pct = 100
leaf_pg_size_KB = 4
int_pg_size_KB = 4

[db_file]
filename = /mnt/rocksdb
nfiles = 1

[population]
pop_first = true
nthreads = 1
batchsize = 1
seq_fill = true


[threads]
readers = 1
iterators = 0
writers = 0
deleters = 0
reader_ops = 0
writer_ops = 0
disjoint_write = false

[key_length]
#distribution = uniform
distribution = fixed
fixed_size = 16
upper_bound = 16
lower_bound = 16

[prefix]
level = 0
nprefixes = 0
distribution = uniform
lower_bound = 0
upper_bound = 0

[body_length]
#distribution = uniform
distribution = fixed
fixed_size = 4096
value_size = 512,2048,4096
value_size_ratio = 10:50:40
upper_bound = 4096
lower_bound = 4096
compressibility = 30

[operation]
warmingup = 5
duration = 5
#nops = 100

batch_distribution = uniform
#batch_parameter1 = 0.0
#batch_parameter2 = 8

batchsize_distribution = uniform

#read_batchsize_median = 3
#read_batchsize_standard_deviation = 1
read_batchsize_lower_bound = 1
read_batchsize_upper_bound = 1

iterate_batchsize_median = 1000
iterate_batchsize_standard_deviation = 100

#write_batchsize_median = 1
#write_batchsize_standard_deviation = 1
write_batchsize_lower_bound = 1
write_batchsize_upper_bound = 1

read_write_insert_delete = 50:50:0:0
write_type = sync

[compaction]
threshold = 50
period = 60
block_reuse = 70

[latency_monitor]
rate = 100
max_samples = 1000000
print_term_ms = 1000

Rocksdb-SPDK compilation failure

Location (Korea, USA, China, India, etc.)
USA

Describe the bug
Rocksdb-SPDK compilation fails because the libspdk_event_rpc library cannot be found for SPDK versions above 17 (18, 19).

To Reproduce
Steps to reproduce the behavior:

  1. Go to /KVSSD/application/kvbench/spdk/spdk/ to build SPDK. The build fails with errors that header files cannot be found.
  2. Download and build spdk-18.04 from GitHub to replace /KVSSD/application/kvbench/spdk/spdk/.
  3. go to /KVSSD/application/kvbench/spdk/rocksdb/ to build rocksdb. Succeed.
  4. copy necessary libraries for spdk, dpdk and rocksdb to /usr/local/lib/
  5. cd kvbench
    mkdir build_spdk && cd build_spdk
    cmake ../
    make blobfs_rocksdb_bench
  6. error: -lspdk_event_rpc not found.
  7. Download and compile spdk-17.10 from Github to replace /KVSSD/application/kvbench/spdk/spdk/, copy libspdk_event_rpc.a to /usr/local/lib.
  8. An 'undefined reference' error happens.

Expected behavior
compile Rocksdb-SPDK successfully.

System environment (please complete the following information)

  • Firmware version : ETA51KCA_20191021_ENC.bin
  • Number of SSDs : 1
  • OS & Kernel version [e.g., Ubuntu 16.04 Kernel v4.9.5]: Ubuntu 16.04 Kernel v4.13.15
  • GCC version [e.g., gcc v5.0.0] :5.4.0
  • User driver version : spdk-18.04
  • Driver [Kernel or user driver or emulator] : User Driver

Additional context

  1. Benchmark test on kernel device driver succeeds for kvbench and rocksdb.
  2. The sample code test for the user device driver succeeds via uNVMe.

Documentation claims no CentOS 7 support.

Location (Korea, USA, China, India, etc.)
San Diego

Describe the bug
Requiring gcc v. 5.0 precludes building on CentOS 7

To Reproduce

  1. Try a base CentOS 7 distribution build.
  2. Please refer to the documentation.

Expected behavior
Unknown - I haven't tried it.


System environment (please complete the following information)

  • OS & Kernel version [e.g., Ubuntu 16.04 Kernel v4.9.5]:
    CentOS 7
    3.10.0-957.5.1.el7.x86_64

  • GCC version [e.g., gcc v5.0.0] :
    gcc 4.8.5 20150623 (Red Hat 4.8.5-36)

Additional context
[email protected]

Can't build on Ubuntu

For some odd reason I'm not able to build master/0.7.0/0.8.1.
Tested on my dev machine and on a clean AWS instance running Ubuntu 18.04 with the latest updates installed.

git clone https://github.com/OpenMPDK/KVSSD.git
cd KVSSD/
cd PDK/core/
./tools/install_deps.sh
mkdir build
cd build/
cmake -DWITH_EMU=ON ../
make -j

So, first I encounter a compilation error:

/home/ubuntu/KVSSD/PDK/core/src/device_abstract_layer/include/private/history.hpp:139:30: error: expected ‘)’ before ‘<’ token
latency_model(std::vector iops_model_coefficients) {
^
In file included from /home/ubuntu/KVSSD/PDK/core/src/device_abstract_layer/include/private/kv_emulator.hpp:45:0,
from /home/ubuntu/KVSSD/PDK/core/src/device_abstract_layer/emulator/src/kv_emulator.cpp:41:
/home/ubuntu/KVSSD/PDK/core/src/device_abstract_layer/include/private/history.hpp:277:27: error: expected ‘)’ before ‘<’ token
kv_history(std::vector iops_model_coefficients) : model(iops_model_coefficients) {}
^
In file included from /home/ubuntu/KVSSD/PDK/core/src/device_abstract_layer/emulator/src/kv_emulator.cpp:41:0:
/home/ubuntu/KVSSD/PDK/core/src/device_abstract_layer/include/private/kv_emulator.hpp:115:41: error: ‘std::vector’ has not been declared
kv_emulator(uint64_t capacity, std::vector iops_model_coefficients, bool_t use_iops_model, uint32_t nsid);
^~~~~~
/home/ubuntu/KVSSD/PDK/core/src/device_abstract_layer/include/private/kv_emulator.hpp:115:47: error: expected ‘,’ or ‘...’ before ‘<’ token
kv_emulator(uint64_t capacity, std::vector iops_model_coefficients, bool_t use_iops_model, uint32_t nsid);
^
In file included from /home/ubuntu/KVSSD/PDK/core/src/device_abstract_layer/include/private/kv_emulator.hpp:45:0,
from /home/ubuntu/KVSSD/PDK/core/src/device_abstract_layer/emulator/src/kv_emulator.cpp:41:
/home/ubuntu/KVSSD/PDK/core/src/device_abstract_layer/include/private/history.hpp:139:30: error: expected ‘)’ before ‘<’ token
latency_model(std::vector iops_model_coefficients) {
^
In file included from /home/ubuntu/KVSSD/PDK/core/src/device_abstract_layer/include/private/kv_emulator.hpp:45:0,
from /home/ubuntu/KVSSD/PDK/core/src/device_abstract_layer/emulator/src/kv_emulator.cpp:41:
/home/ubuntu/KVSSD/PDK/core/src/device_abstract_layer/include/private/history.hpp:277:27: error: expected ‘)’ before ‘<’ token
kv_history(std::vector iops_model_coefficients) : model(iops_model_coefficients) {}
^
/home/ubuntu/KVSSD/PDK/core/src/device_abstract_layer/emulator/src/kv_emulator.cpp:49:50: error: ‘std::vector’ has not been declared
kv_emulator::kv_emulator(uint64_t capacity, std::vector iops_model_coefficients, bool_t use_iops_model, uint32_t nsid): stat(iops_model_coefficients), m_capacity(capacity),m_available(capacity), m_use_iops_model(use_iops_model), m_nsid(nsid) {
^~~~~~
In file included from /home/ubuntu/KVSSD/PDK/core/src/device_abstract_layer/emulator/src/kv_emulator.cpp:41:0:
/home/ubuntu/KVSSD/PDK/core/src/device_abstract_layer/include/private/kv_emulator.hpp:115:41: error: ‘std::vector’ has not been declared
kv_emulator(uint64_t capacity, std::vector iops_model_coefficients, bool_t use_iops_model, uint32_t nsid);
^~~~~~

I've added a <vector> include to ./src/device_abstract_layer/include/private/history.hpp, and it compiled and linked.
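For anyone hitting the same errors, the fix amounts to one line near the top of that header:

// KVSSD/PDK/core/src/device_abstract_layer/include/private/history.hpp
#include <vector>   // std::vector is used throughout but was never included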

[KVSSD_Firmware] firmware update problem

Location (Korea, USA, China, India, etc.)
Korea

Describe the bug
Can't perform firmware update via nvme-cli 'nvme-fw-activate' command.

To Reproduce
Steps to reproduce the behavior:

  1. After the firmware was damaged (ERRORMOD), tried to update it with new firmware.
    Followed the instructions from "KVSSD_seminar_2018_fw_introduction.pdf".
  2. Install the nvme-cli package.
  3. Run "nvme fw-download" -> firmware download succeeds.
  4. Run "sleep 10".
  5. Run "nvme fw-activate" -> NVMe Admin command error: FIRMWARE_IMAGE(2107).

Expected behavior
After step 5, I expected the log "Success activating action:1, slot:2, but a conventional reset is required".
(According to "KVSSD_seminar_2018_fw_introduction.pdf", use slot 2 & action 1.)
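For reference, the seminar's procedure maps to nvme-cli roughly as follows (a sketch: the device node is a placeholder, the firmware file is this report's, and 'fw-activate' is the nvme-cli command name of that era, later renamed 'fw-commit'):

$ sudo nvme fw-download /dev/nvme0 --fw=EEA51K0Q_20181221_ENC.bin
$ sleep 10
$ sudo nvme fw-activate /dev/nvme0 --slot=2 --action=1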

Screenshots
image

System environment (please complete the following information)

  • Firmware version : EEA51K0Q_20181221_ENC.bin(NEW), EGA50K01(ERRORMOD)
  • Number of SSDs : SAMSUNG MZQLB3T8HALS-000KV(Model), S4DRNY0KA00033(SN)
  • OS & Kernel version : Ubuntu 16.04, Kernel 4.4.0-131-generic
  • GCC version : v5.4.0

[KVS] Latest version of CMakeLists.txt for kvbench is likely buggy

Location (Korea, USA, China, India, etc.)
Korea

Describe the bug
Compiling kvbench against the latest version (v1.2.0) of the KVSSD Host SW Packages fails.
KVSSD/application/kvbench/CMakeLists.txt seems to be the cause, since changing CMakeLists.txt results in a successful compilation.

To fix the compilation error, I changed CMakeLists.txt from

160 #set(KVS_LIB -L${CMAKE_LIBRARY_PATH} -lkvapi)
161 target_link_libraries(kv_bench ${COMMON_LIB} ${CMAKE_LIBRARY_PATH})

to

160 set(KVS_LIB -L${CMAKE_LIBRARY_PATH} -lkvapi)
161 target_link_libraries(kv_bench ${COMMON_LIB} ${KVS_LIB})

, as it was in the older version.

Screenshots
error

System environment

  • Firmware version : EEA51K0Q
  • Number of SSDs : 1 (nvme0n1)
  • OS & Kernel version: Ubuntu 18.04 Kernel v4.15.0
  • GCC version: gcc v7.4.0
  • kvbench version: v1.2.0 (latest)
  • KV API version: v1.2.0 (latest)
  • User driver version : uNVMe 18.11
  • Driver: Both KDD and UDD result in the same error

ASYNC shows lower insert performance than SYNC when the value size is random

Location (Korea, USA, China, India, etc.)
Korea

Describe the bug
When the value size is random, async shows lower insert performance than sync

To Reproduce
Steps to reproduce the behavior:

  1. format KV SSD
  2. set kv_bench config with body_length distribution as uniform, upper_bound as 2MB, and lower bound as 512KB
  3. run kv_bench with sync option and async option

Expected behavior
When the value size is fixed, async shows higher performance than sync. I expected similar behavior when the value size is random.

Screenshots
image

System environment (please complete the following information)

  • Firmware version : ETA51KBV
  • Number of SSDs : 1
  • OS & Kernel version [e.g., Ubuntu 16.04 Kernel v4.9.5]: Ubuntu 16.04 Kernel v4.9.5
  • GCC version [e.g., gcc v5.0.0] : 5.4.0
  • kvbench version if kvbench runs [e.g., v0.6.0]: the most recent one provided in Github
  • KV API version [e.g., v0.6.0]: the most recent one provided in Github
  • User driver version :
  • Driver [Kernel or user driver or emulator] : kernel driver

Workload

  • 10000
  • sequential insert using kv_bench
  • key size : 16B
  • value size : uniform(512KB ~ 2MB)
  • operation option if available [e.g., sync or async mode] : sync & async

KVSSD firmware was damaged while testing

I've been testing the KVSSD with sample code (sample_code_async), following the instructions in chapter 3.1 of the KVSSD_QUICK_START_GUIDE.
The first test using KDD terminated successfully. However, when I tried the same test using UDD, it failed because the SPDK option was not set correctly, even though I recompiled the KV API with the '-DWITH_SPDK=ON' option.
To find out what the problem was, I recompiled the KV API with the KDD option and executed sample_code_async with the same options. But it didn't work, terminating with a 'KVS_ERR_SYS_IO' error.

I removed and reinstalled all the dependencies and the KVSSD host software package, disconnected and reconnected the KVSSD, and reset the NVMe driver, but nothing changed. Testing with KDD results in the same 'KVS_ERR_SYS_IO' error, and testing with UDD raises SIGILL. The only thing that works without error is the emulator.

After debugging, I found the reason: my KVSSD's firmware revision was 'ERRORMOD'. I guess the firmware was damaged when I tried to run sample_code_async with UDD, but I still don't know why.
Are there any solutions for this situation? Is it possible for me to recover the damaged firmware?

[kvbench] formatting after running benchmarks

After running the KVS benchmark, written data is not removed until a delete operation is executed for the associated keys.
When running Aerospike with an SSD (not in-memory), does the same situation occur? If so, how can I remove this data, which has become useless after the benchmark? Should I just format the SSD?

[KVCeph] User guide typo?

In KVCeph v0.8.0 user guide page 6, there are several commands for installing kernel driver.
$ make
$ make install
$ ./re_insmod.sh
However, I can't run 'make install' under the KVSSD/PDK/driver/PCIe/kernel_driver/kernel_v directory - it says there's no rule to make target 'install'. Is this normal?
(For now, I'm working on ubuntu 16.04, kernel v4.13)

Thank you.

[kvbench] Running kvbench with UDD in synchronous mode keeps failing

In order to use the KVSSD with UDD in synchronous mode, I set it up during device initialization by setting the 2nd argument of KUDDriver::init() to 'true' at line 480 of cfrontend.cpp.
Since sample_code_sync terminated normally and sample_code_async terminated with the error KVS_ERR_DD_INVALID_QUEUE_TYPE, I assumed the setup was successful.

However, when I tried to run kvbench with UDD, it only worked in asynchronous mode (in other words, only when write_mode = async in the [kvs] section).
Every time I tried to run it in synchronous mode, a uNVMe driver function terminated with KVS_ERR_DD_INVALID_QUEUE_TYPE.

Am I doing something wrong? How can I run kv_bench with UDD, in synchronous mode?

lib dependency for build KVSSD simulator

Please add the following libraries to PDK/core/tools/install_deps.sh:
libboost-all-dev
libnuma-dev

I am testing on an Ubuntu 14.04 machine. After installing the above libraries, the target builds successfully.

Thanks,
Mian

likely data race conditions in ioctl handling

Location (Korea, USA, China, India, etc.)
USA, San Diego

Describe the bug
There are probably some data race conditions in the 4.15 version of the driver. The pattern:

  • get variable from userspace
  • take spinlock, save irq state
  • write to variable
  • release spinlock

One issue is that by the time you get the variable, another thread has the opportunity to write to it. Another issue is that 2 or more threads may read the same variable, then independently increment it and write it back out. Instead of being incremented twice, it's only incremented once. I'm not claiming that the code is incrementing a variable, I'm simply highlighting this as a potential problem. The correct pattern is:

  • take spinlock, save irq state
  • get variable from userspace
  • write to variable
  • release spinlock

Note: there is a spinlock acquire of a critical section that has been commented out. This is definitely a race condition.
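The lost-update case described above can be illustrated in isolation (a made-up minimal example, not code from the driver):

/* racy pattern: the read happens outside the critical section */
int v = shared;                       /* two threads can read the same value */
spin_lock_irqsave(&lock, flags);
shared = v + 1;                       /* one of the two increments is lost */
spin_unlock_irqrestore(&lock, flags);

/* correct pattern: read-modify-write entirely under the lock */
spin_lock_irqsave(&lock, flags);
shared = shared + 1;
spin_unlock_irqrestore(&lock, flags);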

To Reproduce
Steps to reproduce the behavior:

  1. It's obvious upon reading the code

Expected behavior
See above.

Screenshots
It's in the code, primarily in core.c.

System environment (please complete the following information)

  • Firmware version : N/A
  • Number of SSDs : N/A
  • OS & Kernel version [e.g., Ubuntu 16.04 Kernel v4.9.5]: Ubuntu 4.15
  • GCC version [e.g., gcc v5.0.0] : N/A
  • kvbench version if kvbench runs [e.g., v0.6.0]: N/A
  • KV API version [e.g., v0.6.0] N/A
  • User driver version : N/A
  • Driver [Kernel or user driver or emulator] : kernel driver

Workload

  • number of records or data size N/A
  • Workload(insert, mixed workload, etc.) [e.g., sequential or random insert, or 50% Read & 50% write] N/A
  • key size : N/A
  • value size : N/A
  • operation option if available [e.g., sync or async mode] : N/A

Additional context
See above.

[email protected]

Memory leak if user doesn't handle async queue

Location (Korea, USA, China, India, etc.)
Korea

Describe the bug
When the user doesn't check if the queue is full in their program, memory leaks indefinitely as IO commands created in kv_store (and other commands) are not freed when the queue is full, and new commands are created on the retry.

To Reproduce
Steps to reproduce the behavior:

  1. Confirm free system memory with free
  2. Run the following modified version of the insertion routine in test_async.cpp
 sudo ./sample_code_async -d /dev/nvme0n1 -n 100000000 -k 16 -v 4096 -o 1
int perform_insertion(kvs_container_handle cont_hd, int count,
                      int maxdepth, kvs_key_t klen, uint32_t vlen) {
  int num_submitted = 0;
  long int seq = 0;
  fprintf(stdout, "\n===========\n");
  fprintf(stdout, "   Do Write Operation\n");
  fprintf(stdout, "===========\n");

  struct timespec t1, t2;
  clock_gettime(CLOCK_REALTIME, &t1);

  kvs_key *kvskey = (kvs_key*)malloc(sizeof(kvs_key));
  kvs_value *kvsvalue = (kvs_value*)malloc(sizeof(kvs_value));

  char *key   = (char*)kvs_malloc(klen, 4096);
  char *value = (char*)kvs_malloc(vlen, 4096);

  while (num_submitted < count) {

      if(!key || !value) {
        fprintf(stderr, "no mem is allocated\n");
        exit(1);
      }
      
      sprintf(key, "%0*ld", klen - 1, seq++);
      sprintf(value, "value%ld", seq);
      kvs_store_option option;
      memset(&option, 0, sizeof(kvs_store_option));
      option.st_type = KVS_STORE_POST;
      option.kvs_store_compress = false;
      
      const kvs_store_context put_ctx = {option, NULL, NULL };
      kvskey->key = key;
      kvskey->length = klen;
      kvsvalue->value = value;
      kvsvalue->length = vlen;
      kvsvalue->actual_value_size = kvsvalue->offset = 0;
      int ret = kvs_store_tuple_async(cont_hd, kvskey, kvsvalue, &put_ctx, NULL);
           
      if (ret != KVS_SUCCESS) {
        fprintf(stderr, "store tuple failed with err %s\n", kvs_errstr(ret)); 
        //ret = kvs_store_tuple_async(cont_hd, kvskey, kvsvalue, 
        //&put_ctx, complete);
        exit(1);
      }

      num_submitted++;
  }
  
  clock_gettime(CLOCK_REALTIME, &t2);
  unsigned long long start, end;
  start = t1.tv_sec * 1000000000L + t1.tv_nsec;
  end = t2.tv_sec * 1000000000L + t2.tv_nsec;
  double sec = (double)(end - start) / 1000000000L;
  fprintf(stdout, "Total time %.2f sec; Throughput %.2f ops/sec\n", sec, 
  (double)count /sec );
  
  return 0;
}
  3. Check free memory with free again, as often as desired

Expected behavior
Reasonable memory usage

System environment (please complete the following information)

  • Firmware version : EEA51K0Q
  • Number of SSDs : 1
  • OS & Kernel version [e.g., Ubuntu 16.04 Kernel v4.9.5]: Ubuntu 16.04 v4.9.5
  • GCC version [e.g., gcc v5.0.0] : v5.5.0
  • KV API version [e.g., v0.6.0]: v0.9.0
  • Driver [Kernel or user driver or emulator] : Kernel

Workload

  • number of records or data size: Any
  • Workload(insert, mixed workload, etc.) [e.g., sequential or random insert, or 50% Read & 50% write]: Insert (possibly retrieve, exist, and delete too)
  • key size : 16
  • value size : 4096
  • operation option if available [e.g., sync or async mode] : async

Additional context

The problem can be fixed by modifying kernel_driver_adapter/src/kv_device.cpp: kv_store, from

    io_cmd *cmd = new io_cmd(dev, ns, que_hdl);

    cmd->ioctx.key = key;
    cmd->ioctx.value = const_cast<kv_value *>(value);
    cmd->ioctx.timeout_usec = 0;
    if (post_fn) {
        cmd->ioctx.post_fn = post_fn->post_fn;
        cmd->ioctx.private_data = post_fn->private_data;
    } else {
        cmd->ioctx.post_fn = NULL;
    }
    cmd->ioctx.opcode = KV_OPC_STORE;
    cmd->ioctx.command.store_info = info;


    return dev->submit_io(que_hdl, cmd);

to

    io_cmd *cmd = new io_cmd(dev, ns, que_hdl);

    cmd->ioctx.key = key;
    cmd->ioctx.value = const_cast<kv_value *>(value);
    cmd->ioctx.timeout_usec = 0;
    if (post_fn) {
        cmd->ioctx.post_fn = post_fn->post_fn;
        cmd->ioctx.private_data = post_fn->private_data;
    } else {
        cmd->ioctx.post_fn = NULL;
    }
    cmd->ioctx.opcode = KV_OPC_STORE;
    cmd->ioctx.command.store_info = info;

    kv_result ret = dev->submit_io(que_hdl, cmd);

    if (ret != KV_SUCCESS) delete cmd;   // io_cmd was allocated with new, so delete it

    return ret;

Alternatively, it can also be fixed by changing kernel_driver_adapter/src/kv_device.cpp: kv_store: submit_io, from

    if (kque->full()) {
        return KV_ERR_QUEUE_IS_FULL;
    } else {
        // send to real device directly for execution
        // actually only in a queue inside device driver for asyncIO
        // Please note this is not actual execution, it's just sent to device
        // for asynchronous execution.
        // Ideally device should export a queue interface for sending commands
        kv_result res = cmd->execute_cmd();
        if (res == KV_SUCCESS) {
            kque->increase_qdepth();
        }

        return res;
    }

to

    if (kque->full()) {
        delete cmd;   // io_cmd was allocated with new, so delete it
        return KV_ERR_QUEUE_IS_FULL;
    } else {
        // send to real device directly for execution
        // actually only in a queue inside device driver for asyncIO
        // Please note this is not actual execution, it's just sent to device
        // for asynchronous execution.
        // Ideally device should export a queue interface for sending commands
        kv_result res = cmd->execute_cmd();
        if (res == KV_SUCCESS) {
            kque->increase_qdepth();
        }

        return res;
    }

The same pattern is also repeated in many places throughout the code.
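A more systematic variant of the first fix (my own sketch, not the project's patch) is to keep ownership in a smart pointer until the submission is known to have succeeded, so every error path cleans up automatically:

// requires <memory>; sketch of the tail of kv_store
std::unique_ptr<io_cmd> cmd(new io_cmd(dev, ns, que_hdl));
/* ... fill cmd->ioctx exactly as above ... */
kv_result ret = dev->submit_io(que_hdl, cmd.get());
if (ret == KV_SUCCESS)
    cmd.release();    // the queue/device now owns the command
return ret;           // on any error, unique_ptr deletes the io_cmd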

[KV API] About closing iterator when test is done

When a test / benchmark run against a KVSSD (using KDD) with iterators finishes, are all iterators closed automatically, even when the program does not terminate normally but exits with an error (e.g., KVS_ERR_SYS_IO)?

[KV API] Some questions about iterator

Dear all,
I have some questions about iterator's functionalities.

  1. Do the KVSSD Host SW Packages now support KVS_ITERATOR_KEY_VALUE mode (as of version 0.9.0)? According to the Samsung_KV_API_spec, they don't. However, I saw some lines in the source code that were once used to restrict KVS_ITERATOR_KEY_VALUE mode but are now commented out.

  2. What exactly does the kvs_iterator_next() function do? Does it append the data (key, key length, etc.) for only one key to it_list?

Thank you.

[KV API] About using UDD

  1. UDD for KVSSD requires root privileges. Is this normal?

  2. Why does UDD (DPDK) need huge pages of 2^30 bytes (1 GB)? It doesn't work with 2 MB huge pages.

SPDK Device open failed 0x3

I was using KDD on this machine before. When I moved to SPDK, I got the following error. Could you please advise on this issue?

$ sudo ./sample_code_async -d 0000:06:00.0 -n 100 -q 64 -o 1 -k 16 -v 4096
EAL: Detected 32 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Auto-detected process type: PRIMARY
EAL: Multi-process socket /var/run/dpdk/spdk4251/mp_socket
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
kv_driver.c:: 219:: _kv_env_init:: mem_size_mb: 1024 shm_id: 4251

kv_driver.c:: 222:: _kv_env_init:: Initialized the KV API Environment
kv_driver.c:: 243:: _kv_env_init:: Done

init udd
kv_driver.c:: 372:: kv_nvme_init:: Cannot Use the Requested Device 0000:06:00.0
Device open failed 0x3

$ sudo nvme list
Node SN Model Namespace Usage Format FW Rev


/dev/nvme0n1 S4DRNY0MXXXXXX SAMSUNG MZQLB3T8HALS-000KV 1 1.46 GB / 3.84 TB 512 B + 0 B ETA51KCA
/dev/nvme1n1 S4DRNY0MXXXXXX SAMSUNG MZQLB3T8HALS-000KV 1 0.00 B / 3.84 TB 512 B + 0 B EDA53W0Q

Install error: rmmod: ERROR: Module nvme is in use

Location (Korea, USA, China, India, etc.)
Korea

Describe the bug
After successful compilation I tried to install the kernel driver using ./re_insmod.sh. The following error occurred. What should I check?

Error Messages
root@london:~/KVSSD/PDK/driver/PCIe/kernel_driver/kernel_v4.13.15-041315-ubuntu-16_04# ./re_insmod.sh
rmmod: ERROR: Module nvme is in use
rmmod: ERROR: Module nvme_core is in use by: nvme
insmod: ERROR: could not insert module nvme-core.ko: File exists
insmod: ERROR: could not insert module nvme.ko: File exists

root@london:~/KVSSD/PDK/driver/PCIe/kernel_driver/kernel_v4.13.15-041315-ubuntu-16_04# uname -a
Linux london 4.13.15-041315-generic #201711211030 SMP Tue Nov 21 10:31:16 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux

root@london:~/KVSSD/PDK/driver/PCIe/kernel_driver/kernel_v4.13.15-041315-ubuntu-16_04# ls -al
total 1184
drwxr-xr-x 3 root root 4096 Aug 19 15:35 .
drwxr-xr-x 11 root root 4096 Aug 19 14:25 ..
-rw-r--r-- 1 root root 104040 Aug 19 14:25 core.c
-rw-r--r-- 1 root root 102208 Aug 19 15:35 core.o
-rw-r--r-- 1 root root 50469 Aug 19 15:35 .core.o.cmd
-rw-r--r-- 1 root root 26507 Aug 19 14:25 fabrics.c
-rw-r--r-- 1 root root 5510 Aug 19 14:25 fabrics.h
-rw-r--r-- 1 root root 79613 Aug 19 14:25 fc.c
-rw-r--r-- 1 root root 1421 Aug 19 14:25 Kconfig
-rw-r--r-- 1 root root 25561 Aug 19 14:25 lightnvm.c
-rw-r--r-- 1 root root 20512 Aug 19 15:35 lightnvm.o
-rw-r--r-- 1 root root 42726 Aug 19 15:35 .lightnvm.o.cmd
-rw-r--r-- 1 root root 30337 Aug 19 14:25 linux_nvme.h
-rw-r--r-- 1 root root 3445 Aug 19 14:25 linux_nvme_ioctl.h
-rw-r--r-- 1 root root 408 Aug 19 14:25 Makefile
-rw-r--r-- 1 root root 191 Aug 19 15:35 modules.order
-rw-r--r-- 1 root root 4732 Aug 19 15:35 Module.symvers
-rw-r--r-- 1 root root 105400 Aug 19 15:35 nvme-core.ko
-rw-r--r-- 1 root root 447 Aug 19 15:35 .nvme-core.ko.cmd
-rw-r--r-- 1 root root 542 Aug 19 15:35 nvme-core.mod.c
-rw-r--r-- 1 root root 2536 Aug 19 15:35 nvme-core.mod.o
-rw-r--r-- 1 root root 28470 Aug 19 15:35 .nvme-core.mod.o.cmd
-rw-r--r-- 1 root root 110240 Aug 19 15:35 nvme-core.o
-rw-r--r-- 1 root root 387 Aug 19 15:35 .nvme-core.o.cmd
-rw-r--r-- 1 root root 10813 Aug 19 14:25 nvme.h
-rw-r--r-- 1 root root 55000 Aug 19 15:35 nvme.ko
-rw-r--r-- 1 root root 427 Aug 19 15:35 .nvme.ko.cmd
-rw-r--r-- 1 root root 1243 Aug 19 15:35 nvme.mod.c
-rw-r--r-- 1 root root 3680 Aug 19 15:35 nvme.mod.o
-rw-r--r-- 1 root root 28410 Aug 19 15:35 .nvme.mod.o.cmd
-rw-r--r-- 1 root root 52280 Aug 19 15:35 nvme.o
-rw-r--r-- 1 root root 287 Aug 19 15:35 .nvme.o.cmd
-rw-r--r-- 1 root root 71564 Aug 19 14:25 pci.c
-rw-r--r-- 1 root root 54424 Aug 19 15:35 pci.o
-rw-r--r-- 1 root root 44723 Aug 19 15:35 .pci.o.cmd
-rw-r--r-- 1 root root 51614 Aug 19 14:25 rdma.c
-rwxr-xr-x 1 root root 96 Aug 19 14:25 re_insmod.sh
drwxr-xr-x 2 root root 4096 Aug 19 15:35 .tmp_versions
root@london:~/KVSSD/PDK/driver/PCIe/kernel_driver/kernel_v4.13.15-041315-ubuntu-16_04#

System environment (please complete the following information)

  • Firmware version : ETA51KBV
  • Number of SSDs : 4EA
  • OS & Kernel version [e.g., Ubuntu 16.04 Kernel v4.9.5]: Ubuntu 16.04 Kernel v4.13.15-041315
  • GCC version [e.g., gcc v5.0.0] : 5.4.0 20160609
    KVSSD-Kernel Driver-Install-error.txt

Installation Error

I followed the procedure of KVSSD_QUICK_START_GUIDE_0_7_0.pdf.
I got this kind of error message:

root@bitcoin:~/KVSSD-master/PDK/driver/PCIe/kernel_driver/kernel_v4.15# ./re_insmod.sh
insmod: ERROR: could not insert module nvme-core.ko: Required key not available
insmod: ERROR: could not insert module nvme.ko: Required key not available

root@bitcoin:~/KVSSD-master/PDK/driver/PCIe/kernel_driver/kernel_v4.15# uname -a
Linux bitcoin 4.15.0-22-generic #24~16.04.1-Ubuntu SMP Fri May 18 09:46:31 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

Any idea how to fix this error?

[KVCeph] ./bin/rados bench doesn't use KV-SSD, use normal SSD.

Dear contributors,

First,
I installed the KV-SSD kernel driver and ran the sample test program (./sample_code_async -d /dev/nvme0n1 -n 100 -q 64 -o 1 -k 16 -v 4096). I saw the KV-SSD (nvme0n1) being used in the "iostat" output.

==================iostat========================
avg-cpu: %user %nice %system %iowait %steal %idle
8.09 0.00 8.78 0.03 0.00 83.10

Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn
loop0 0.00 0.00 0.00 0 0
sda 17.40 0.00 396.40 0 3964
nvme1n1 0.00 0.00 0.00 0 0
nvme0n1 54123.30 0.00 216493.20 0 2164932
=====================sample code async=================
io_complete: op = 1, key = 000000000999999, result = KVS_SUCCESS
io_complete: op = 1, key = 000000000999989, result = KVS_SUCCESS
io_complete: op = 1, key = 000000000999995, result = KVS_SUCCESS
io_complete: op = 1, key = 000000000999996, result = KVS_SUCCESS
Total time 18.47 sec; Throughput 54135.79 ops/sec
After: Total size is 3840755982336 bytes, used is 346
root@bitcoin:~/KVSSD/PDK/core/build#
============================end of sample...==================

Second, after building and installing KVCeph with the kernel driver, running the rados bench command does not use the KV-SSD. Which part do I have to check?

../src/vstart.sh -d -n -x -l
./bin/rados -p rbd bench 30 write

===========result of rados bench====================
28 16 134 118 16.8549 0 - 3.28753
29 16 134 118 16.2737 0 - 3.28753
30 16 134 118 15.7312 0 - 3.28753
31 16 135 119 15.3528 0.666667 7.59606 3.32374
Total time run: 31.128823
Total writes made: 135
Write size: 4194304
Object size: 4194304
Bandwidth (MB/sec): 17.3473
Stddev Bandwidth: 15.9201
Max bandwidth (MB/sec): 64
Min bandwidth (MB/sec): 0
Average IOPS: 4
Stddev IOPS: 3
Max IOPS: 16
Min IOPS: 0
Average Latency(s): 3.68917
Stddev Latency(s): 2.33163
Max latency(s): 8.72245
Min latency(s): 0.107901
Cleaning up (deleting benchmark objects)
Removed 135 objects
Clean up completed and total clean up time :0.097754
root@bitcoin:~/KVSSD/application/KVCeph/build#

===========result of iostat============================
avg-cpu: %user %nice %system %iowait %steal %idle
2.63 0.00 2.24 8.60 0.00 86.53

Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn
loop0 0.00 0.00 0.00 0 0
sda 203.20 0.00 132416.40 0 1324164
nvme1n1 0.00 0.00 0.00 0 0
nvme0n1 0.00 0.00 0.00 0 0
=================end of result================

ceph.conf.txt

regards,
Moon-Kee Bahk
m k b a h [email protected]

[KV API] About kvs_store_type

Does the Samsung KVSSD support only KVS_STORE_POST and KVS_STORE_NOOVERWRITE, even when using the SNIA-standard API?

build process error

After running "./do_cmake.sh", I got the error message below. Which part do I have to check?
==============Error Message======================
-- Performing Test COMPILER_SUPPORTS_DIAGNOSTICS_COLOR
-- Performing Test COMPILER_SUPPORTS_DIAGNOSTICS_COLOR - Success
CMake Error at build/src/CMakeFiles/git-data/grabRef.cmake:39 (file):
file failed to open for reading (No such file or directory):

/root/KVSSD-master/application/KVCeph/build/src/CMakeFiles/git-data/head-ref

Call Stack (most recent call first):
cmake/modules/GetGitRevisionDescription.cmake:77 (include)
src/CMakeLists.txt:209 (get_git_head_revision)

CMake Error at build/src/CMakeFiles/git-data/grabRef.cmake:39 (file):
file failed to open for reading (No such file or directory):

/root/KVSSD-master/application/KVCeph/build/src/CMakeFiles/git-data/head-ref

Call Stack (most recent call first):
cmake/modules/GetGitRevisionDescription.cmake:77 (include)
cmake/modules/GetGitRevisionDescription.cmake:87 (get_git_head_revision)
src/CMakeLists.txt:210 (git_describe)

CMake Error at src/CMakeLists.txt:212 (if):
if given arguments:

"STREQUAL" "GITDIR-NOTFOUND"

Unknown arguments specified

-- Configuring incomplete, errors occurred!
See also "/root/KVSSD-master/application/KVCeph/build/CMakeFiles/CMakeOutput.log".
See also "/root/KVSSD-master/application/KVCeph/build/CMakeFiles/CMakeError.log".

=====================End of Error Message=======================
CMakeOutput.log
CMakeError.log

KDDriver::store_tuple() leaks memory in default case.

Location (Korea, USA, China, India, etc.)
San Diego, USA ([email protected])

Describe the bug
KDDriver::store_tuple() leaks memory because prep_io_context allocates memory that doesn't get freed if the default case gets hit.


Spec 0.7.0 typo

Location (Korea, USA, China, India, etc.)
San Diego

Describe the bug
Section 6.2.1:
typedef enum {
KVS_ITERAOR_KEY_VALUE =1,
} kvs_iterator_type;

should be
typedef enum {
KVS_ITERATOR_KEY_VALUE = 1,
} kvs_iterator_type;

Also, this isn't clear:
// [OPTION] iterator command retrieves key and delete

Does iterator with delete return the key and value before deleting the tuple, or does it just return the key?

To Reproduce
Steps to reproduce the behavior:

  1. Read the spec.


" make rocksdb_bench " failure

Location (Korea, USA, China, India, etc.)
Korea

Describe the bug

"make rocksdb_bench" failure like below.
root@london:/KVSSD/application/kvbench/build_rxdb# make rocksdb_bench
Scanning dependencies of target rocksdb_bench
[ 8%] Building CXX object CMakeFiles/rocksdb_bench.dir/bench/couch_bench.cc.o
[ 16%] Building CXX object CMakeFiles/rocksdb_bench.dir/wrappers/couch_rocksdb.cc.o
[ 25%] Building CXX object CMakeFiles/rocksdb_bench.dir/utils/thpool.cc.o
[ 33%] Building CXX object CMakeFiles/rocksdb_bench.dir/utils/avltree.cc.o
[ 33%] Building CXX object CMakeFiles/rocksdb_bench.dir/utils/stopwatch.cc.o
[ 41%] Building CXX object CMakeFiles/rocksdb_bench.dir/utils/iniparser.cc.o
[ 50%] Building CXX object CMakeFiles/rocksdb_bench.dir/utils/crc32.cc.o
[ 58%] Building CXX object CMakeFiles/rocksdb_bench.dir/utils/memleak.cc.o
[ 66%] Building CXX object CMakeFiles/rocksdb_bench.dir/utils/zipfian_random.cc.o
[ 75%] Building CXX object CMakeFiles/rocksdb_bench.dir/utils/keyloader.cc.o
[ 83%] Building CXX object CMakeFiles/rocksdb_bench.dir/utils/memory.cc.o
[ 91%] Building CXX object CMakeFiles/rocksdb_bench.dir/utils/keygen.cc.o
[100%] Linking CXX executable rocksdb_bench
/usr/bin/ld: cannot find -lssl
/usr/bin/ld: cannot find -lcrypto
/usr/bin/ld: cannot find -lz
collect2: error: ld returned 1 exit status
CMakeFiles/rocksdb_bench.dir/build.make:380: recipe for target 'rocksdb_bench' failed
make[3]: *** [rocksdb_bench] Error 1
CMakeFiles/Makefile2:252: recipe for target 'CMakeFiles/rocksdb_bench.dir/all' failed
make[2]: *** [CMakeFiles/rocksdb_bench.dir/all] Error 2
CMakeFiles/Makefile2:264: recipe for target 'CMakeFiles/rocksdb_bench.dir/rule' failed
make[1]: *** [CMakeFiles/rocksdb_bench.dir/rule] Error 2
Makefile:183: recipe for target 'rocksdb_bench' failed
make: *** [rocksdb_bench] Error 2
root@london:~/KVSSD/application/kvbench/build_rxdb#

kvbench-racksdb failure-moonkee--20190821.txt

[KV API] length of key and value

In the Samsung KV API, does the length of a key/value include the trailing '\0'?
For example, if the key string is "abcd", is it correct to say its length is 4B, or should I count it as 5B?

[thread safety issue] seg fault on multithread using sync api with kvs emulator

I encountered some seg faults when using the kvs emulator sync API. Here is a small test program that reproduces the problem:
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <time.h>
#include <kvs_api.h>
#include <thread>

void usage(char *program)
{
  printf("==============\n");
  printf("usage: %s -d device_path [-n num_ios] [-k klen] [-v vlen]\n", program);
  printf("-d device_path : kvssd device path. e.g. emul: /dev/kvemul; kdd: /dev/nvme0n1; udd: 0000:06:00.0\n");
  printf("-n num_ios : total number of ios per thread\n");
  printf("-t num_threads : total number of client threads\n");
  printf("-k klen : key length\n");
  printf("-v vlen : value length\n");
  printf("==============\n");
}

void perform_insertion(kvs_container_handle cont_hd, int offset, int count, uint16_t klen, uint32_t vlen) {

  char *key   = (char*)kvs_malloc(klen, 4096);
  char *value = (char*)kvs_malloc(vlen, 4096);
  if (key == NULL || value == NULL) {
    fprintf(stderr, "failed to allocate\n");
    exit(1);
  }

  fprintf(stdout, "\n===========\n");
  fprintf(stdout, " Do Write Operation \n");
  fprintf(stdout, "===========\n");

  struct timespec t1, t2;
  clock_gettime(CLOCK_REALTIME, &t1);

  for (int i = offset; i < count + offset; i++) {
    sprintf(key, "%0*d", klen - 1, i);
    sprintf(value, "%0*d", klen - 1, i + 10);

    const kvs_store_context put_ctx = { KVS_STORE_POST | KVS_SYNC_IO, 0, 0, 0 };
    const kvs_key   kvskey   = { key, klen };
    const kvs_value kvsvalue = { value, vlen, 0 };

    int ret = kvs_store_tuple(cont_hd, &kvskey, &kvsvalue, &put_ctx);
    if (ret != KVS_SUCCESS) {
      fprintf(stderr, "store tuple failed with error 0x%x - %s\n", ret, kvs_errstr(ret));
      exit(1);
    } else {
      fprintf(stderr, "store key %s with value %s done %d\n", key, value, vlen);
    }
  }

  clock_gettime(CLOCK_REALTIME, &t2);
  unsigned long long start, end;
  start = t1.tv_sec * 1000000000L + t1.tv_nsec;
  end = t2.tv_sec * 1000000000L + t2.tv_nsec;
  double sec = (double)(end - start) / 1000000000L;
  fprintf(stdout, "Total time %.2f sec; Throughput %.2f ops/sec\n", sec, (double)count / sec);

  if (key) kvs_free(key);
  if (value) kvs_free(value);
}

int main(int argc, char *argv[]) {
  char *dev_path = NULL;
  int num_ios = 10;
  uint16_t klen = 16;
  uint32_t vlen = 4096;
  int num_thread = 1;
  int c;

  while ((c = getopt(argc, argv, "d:n:t:k:v:h")) != -1) {
    switch (c) {
    case 'd':
      dev_path = optarg;
      break;
    case 'n':
      num_ios = atoi(optarg);
      break;
    case 't':
      num_thread = atoi(optarg);
      break;
    case 'k':
      klen = atoi(optarg);
      break;
    case 'v':
      vlen = atoi(optarg);
      break;
    case 'h':
      usage(argv[0]);
      exit(1);
      break;
    default:
      usage(argv[0]);
      exit(0);
    }
  }

  if (dev_path == NULL) {
    fprintf(stderr, "Please specify KV SSD device path\n");
    usage(argv[0]);
    exit(0);
  }

  kvs_init_options options;
  kvs_init_env_opts(&options);

  options.memory.use_dpdk = 0;
  options.aio.iocomplete_fn = NULL;

  const char *configfile = "../kvssd_emul.conf";
  options.emul_config_file = configfile;

  // initialize the environment
  kvs_init_env(&options);

  kvs_device_handle dev;
  kvs_open_device(dev_path, &dev);

  kvs_container_context ctx;
  kvs_create_container(dev, "test", 4, &ctx);

  kvs_container_handle cont_handle;
  kvs_open_container(dev, "test", &cont_handle);

  std::thread *worker = new std::thread[num_thread];

  for (int i = 0; i < num_thread; i++) {
    worker[i] = std::thread(perform_insertion, cont_handle, num_ios * i, num_ios, klen, vlen);
  }

  for (int i = 0; i < num_thread; i++) {
    worker[i].join();
  }
  kvs_close_container(cont_handle);
  kvs_delete_container(dev, "test");
  kvs_exit_env();

  return 0;
}

compile script:
~/KVSSD/PDK/core/build$ g++ test_sync.cpp -I../include -L./ -lpthread -lkvapi -std=c++11 -fpic -Wall -g -O0

run script:
./a.out -d /dev/kvemul -n 100 -k 16 -v 1024 -t 4

I haven't checked the emulator source code yet. Is this a known issue?
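
While waiting for an answer, one blunt way to test the hypothesis (a sketch, assuming the crash comes from concurrent entry into the emulator's sync path) is to serialize the calls behind a process-wide mutex and see whether the seg fault disappears:

#include <kvs_api.h>
#include <mutex>

// Shared by all worker threads, so kvs_store_tuple is entered by one
// thread at a time. Purely diagnostic: if this makes the crash go away,
// the emulator's sync path is likely not thread-safe.
static std::mutex g_kvs_mutex;

int locked_store_tuple(kvs_container_handle cont_hd, const kvs_key *key,
                       const kvs_value *value, const kvs_store_context *ctx) {
    std::lock_guard<std::mutex> guard(g_kvs_mutex);
    return kvs_store_tuple(cont_hd, key, value, ctx);
}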

Thanks,
Mian

[KVBench] Question about core IDs in cpu.txt

Dears,
After I generate cpu.txt with the command "kv_bench -c", I get a default CPU file as below:
nodeid,coreid,dbid_load,dbid_perf
0,0,-1,-1
0,1,-1,-1
0,2,-1,-1
0,3,-1,-1
0,4,-1,-1
0,5,-1,-1
0,6,-1,-1
0,7,-1,-1
But it does not seem to fit the system:
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
NUMA node(s): 1

Shouldn't the number of cores be 4 instead of 8 in cpu.txt?
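
A likely explanation (my reading, not confirmed from the kvbench source): the tool enumerates logical CPUs rather than physical cores, and with 2 threads per core x 4 cores per socket, Linux exposes 8 logical CPU IDs (0-7), one row each. A quick check of what the OS reports:

#include <cstdio>
#include <thread>

int main() {
    // Logical CPUs visible to the OS; on a 4-core, 2-way-SMT machine
    // this prints 8, matching the 8 rows in the generated cpu.txt.
    std::printf("logical CPUs: %u\n", std::thread::hardware_concurrency());
    return 0;
}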

About bitmask and bit pattern when using iterator

According to the specification document, the KVSSD iterator operation returns the set of keys that satisfy
'(bitmask & key) == bit_pattern'.
However, while bitmask and bit_pattern are of type unsigned int, the key is an array of char.
Does the KVSSD iterator therefore only consider the first 4 bytes of the key when applying the '&' operation?
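
For concreteness, here is the reading I am asking about as a sketch (a hypothetical illustration of the semantics, not confirmed device behavior; the byte order of the comparison would also be device-defined, and this sketch uses host order):

#include <cstdio>
#include <cstring>

// Hypothetical: interpret only the first 4 bytes of the key as an
// unsigned int and test it under the mask; later bytes are ignored.
static bool iter_match(const char *key, size_t klen,
                       unsigned int bitmask, unsigned int bit_pattern) {
    unsigned int prefix = 0;
    std::memcpy(&prefix, key, klen < 4 ? klen : 4);  // first 4 key bytes
    return (prefix & bitmask) == bit_pattern;
}

int main() {
    unsigned int pattern = 0;
    std::memcpy(&pattern, "grp0", 4);
    // With a full mask, only keys whose first 4 bytes equal "grp0" match.
    std::printf("%d\n", iter_match("grp0_user1", 10, 0xFFFFFFFFu, pattern)); // 1
    std::printf("%d\n", iter_match("abcd_user1", 10, 0xFFFFFFFFu, pattern)); // 0
    return 0;
}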

[emulator issue] kvs_retrieve_tuple value length

I found another issue with the KVS emulator. The document says "The tuple value is copied to value.value buffer and value.size is set to the actual size of value." However, in both sync and async mode the value size is not set to the actual size of the value; it remains the original buffer size.
For sync mode this is easy to fix, but for async mode it takes a bit of hacking, which is not a graceful way to do it.
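
To make the report concrete, this is the shape of the check (a fragment meant to drop into the sync test setup from my earlier report; the kvs_value field names are assumed from its {buffer, length, offset} initializer order, and ret_ctx stands for a retrieve context built analogously to the store context):

// Store a 10-byte tuple under kvskey, then retrieve into a 4096-byte buffer.
char *buf = (char*)kvs_malloc(4096, 4096);
kvs_value out = { buf, 4096, 0 };
int ret = kvs_retrieve_tuple(cont_handle, &kvskey, &out, &ret_ctx);
// Per the document, out.length should now be 10 (the actual tuple size);
// in both sync and async mode it instead stays 4096 (the buffer size).
printf("ret=0x%x, retrieved length = %u (expected 10)\n", ret, out.length);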

Please double-check the issue and fix it in the next release.
Thanks,
Mian

queue full errors result in io_cmd leaks

Location (Korea, USA, China, India, etc.)
USA

Describe the bug
In the function kv_store in kv_device.cpp, an io_cmd object is allocated and passed to submit_io. If submit_io returns any error, kv_store returns without freeing the object, leaking the io_cmd.

There are several bugs of this type in the code (I count at least a dozen).
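
A conventional fix pattern for this class of leak (a sketch with hypothetical stand-in types; the real io_cmd constructor and submit_io signature in kv_device.cpp will differ) is to keep the command in a std::unique_ptr and release ownership only after submission succeeds:

#include <cstdio>
#include <memory>

// Hypothetical stand-ins for the real kv_device.cpp types/functions.
struct io_cmd { int opcode = 0; };
static int submit_io(io_cmd *) { return -1; }  // simulate a QFULL error

static int kv_store_sketch() {
    auto cmd = std::make_unique<io_cmd>();
    int ret = submit_io(cmd.get());
    if (ret != 0)
        return ret;    // unique_ptr frees cmd here: no leak on QFULL
    cmd.release();     // success: the completion path now owns the object
    return 0;
}

int main() {
    std::printf("submit returned %d\n", kv_store_sketch());
    return 0;
}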

To Reproduce
Steps to reproduce the behavior:

  1. Use the kernel driver to overflow a queue.
  2. Once it hits a QFULL the code leaks memory.


[KVSSD KDD] Unit test and SNIA API test fail

I have updated the firmware successfully (the correct firmware version shows up in $ sudo nvme list) and installed the KDD on Ubuntu 18.04 LTS, kernel version 4.15.0-66-generic. The unit tests for the kernel driver fail.
$ sudo ./kv_store_16 -d /dev/nvme0n1 -k student0 -value KJKJKJ
[sudo] password for msaha002:
[dump issued cmd opcode (81)]
opcode(81)
flags(00)
rsvd1(0000)
nsid(00000001)
cdw2(00000000)
cdw3(00000000)
rsvd2(00000000)
cdw5(00000000)
data_addr(0x55a7e9742260)
data_length(00001000)
key_length(00000010)
cdw10(00000000)
cdw11(00000000)
cdw12(64757473)
cdw13(30746e65)
cdw14(00000000)
cdw15(00000000)
timeout_ms(00000000)
result(00000000)

opcode(81) error(-1) and cmd.result(0) cmd.status(0).
fail to store for /dev/nvme0n1

When I try the SNIA API test code, I get the following error.
$ sudo ./sample_code_async -d /dev/nvme0n1 -n 1000 -q 64 -o 1 -k 16 -v 4096
ENTER: open
fail to set_aioctx
EXIT : open
KV device is closed: fd 3
kv_initialize_device failed 0x20 - KVS_ERR_UNCORRECTIBLE
Device open failed 0x13

What could be the issue?
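
One thing worth ruling out first (a guess on my part, since error(-1) means the ioctl itself failed rather than the device returning a KV status): the distribution's stock nvme driver may still be bound to the device instead of the rebuilt KV kernel driver. Checking the loaded modules, e.g.

lsmod | grep nvme
modinfo nvme

and confirming that the module path in the modinfo output points at the KVSSD build (rather than the stock kernel tree) would rule this out.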

Spec 0.7 does not match kvs_api.h

Location (Korea, USA, China, India, etc.)
USA

Describe the bug

In kvs_api.h:
#define KVS_ALIGNMENT_UNIT 4

In the KV SSD API spec:
6.1.1 KVS_ALIGNMENT_UNIT
This is an alignment unit. An offset of value must be a multiple of this value.
[SAMSUNG] The default alignment unit for the Samsung key-value SSD is 32 bytes.
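
Until the two numbers are reconciled, code should not hard-code either one; a round-up helper keyed to whatever constant the header actually defines keeps callers correct under both values (sketch):

#include <cstdio>

// Round an offset up to the alignment unit; correct for any
// power-of-two unit, so it works whether KVS_ALIGNMENT_UNIT turns out
// to be 4 (kvs_api.h) or 32 (spec text for Samsung KV SSDs).
constexpr unsigned align_up(unsigned x, unsigned unit) {
    return (x + unit - 1) & ~(unit - 1);
}

int main() {
    std::printf("%u\n", align_up(10, 4));   // 12
    std::printf("%u\n", align_up(10, 32));  // 32
    return 0;
}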


Unable to run Benchmark application

Location (Korea, USA, China, India, etc.)
Denmark

Describe the bug
When trying to run the KV Benchmark application the following happens:

Using "bench_config.ini" config file
with iterator 0 - mode 0
node 0: 1 3 5 0 0 0 0 0 
pop -- dev 0000:01:00.0, numaid 0, core: 1 
bench --- dev 0000:01:00.0, numaid 0, core: 1 
ratio is 50:50:0:0
 === benchmark configuration ===
DB module: KVS
random seed: 1616169051
filename: 0000:01:00.0#  (initialize)
# documents (i.e. working set size): 100
# threads: reader 0, iterator 0, writer 1, deleter 0
# auto-compaction threads: 4
block cache size: 16.00 GB
key length: Fixed size(16) / body length: Fixed size(2048)
batch distribution: Uniform
benchmark duration: 5 seconds
read batch size: point Uniform(1,1), range Uniform(500,1500)
write batch size: Uniform(1,1)
inside batch distribution: Uniform (-1 ~ +1, total 2)
write ratio: 50 % (synchronous)
insertion order: sequential fill
master core = 5, mask = 20
 EAL: Detected 8 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Auto-detected process type: PRIMARY
EAL: Multi-process socket /run/user/1001/dpdk/spdk2827/mp_socket
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: Cannot obtain physical addresses: No such file or directory. Only vfio will function.
EAL: Couldn't get fd on hugepage file
EAL: FATAL: Cannot init memory

EAL: Cannot init memory

Failed to initialize DPDK
kv_driver.c:: 221:: _kv_env_init:: spdk_env_init failed
device init done
init udd
EAL: Couldn't get fd on hugepage file
nvme.c: 302:nvme_driver_init: *ERROR*: primary process failed to reserve memory
kv_driver.c:: 411:: kv_nvme_init:: SPDK NVMe Probe failed, ret = -1
Failed to open device: KVS_ERR_DEV_INIT 0xffffffff
Device open failed KVS_ERR_SYS_IO

To Reproduce

  1. Follow quick installation guide
  2. Setup CPU.txt and bench_config in same way

System environment (please complete the following information)

Firmware version : 
Number of SSDs : 1
OS & Kernel version [e.g., Ubuntu 16.04 Kernel v4.9.5]: Ubuntu 18.04 Kernel v4.15.18-041518-generic
GCC version [e.g., gcc v5.0.0] : 7.4.0
kvbench version if kvbench runs [e.g., v0.6.0]: 
KV API version [e.g., v0.6.0]
User driver version :
Driver [Kernel or user driver or emulator] : User driver

CPU.txt setup:
nodeid,coreid,dbid_load,dbid_perf
0,1,0,0
0,3,1,1
0,5,2,2

bench_config.ini setup (only the changes I've made are listed):

[system]
allocator = spdk
key_pool_size = 128
key_pool_unit = 16
key_pool_alignment = 2048
value_pool_size = 128
value_pool_unit = 2048
value_pool_alignment = 2048
device_path = 0000:01:00.0

[kvs]
device_path = 0000:01:00.0

[body_length]
#distribution = uniform
distribution = fixed
fixed_size = 2048
value_size = 512,2048
value_size_ratio = 10:50:40
upper_bound = 2048
lower_bound = 2048
compressibility = 30

Additional context

HugePage setup:
HugePages_Total: 1024
HugePages_Free: 512
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB

The CPU setup is configured the way I've understood it is supposed to be; the error is the same as when cpu.txt hasn't been modified.

device_path in bench_config.ini was set after successfully running "sample_code_async" to confirm the device path is correct.
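
For what it's worth, the "No free hugepages reported in hugepages-1048576kB" line is harmless (it only means no 1 GB pages are reserved), but "Couldn't get fd on hugepage file" plus "Cannot init memory" usually points at DPDK being unable to create files on a hugetlbfs mount, either because the mount is missing or because the benchmark is running without the needed privileges. The generic DPDK prerequisites (not KVSSD-specific; run as root) are:

echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
mkdir -p /mnt/huge
mount -t hugetlbfs nodev /mnt/huge

It is also worth re-running after a reboot: with HugePages_Free at 512 of 1024, half of the reserved pages are already held, possibly by a previous SPDK process.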
