
lmdbjava's Introduction


LMDB for Java

LMDB offers:

  • Transactions (full ACID semantics)
  • Ordered keys (enabling very fast cursor-based iteration)
  • Memory-mapped files (enabling optimal OS-level memory management)
  • Zero copy design (no serialization or memory copy overhead)
  • No blocking between readers and writers
  • Configuration-free (no need to "tune" it to your storage)
  • Instant crash recovery (no logs, journals or other complexity)
  • Minimal file handle consumption (just one data file; not 100,000's like some stores)
  • Same-thread operation (LMDB is invoked within your application thread; no compactor thread is needed)
  • Freedom from application-side data caching (memory-mapped files are more efficient)
  • Multi-threading support (each thread can have its own MVCC-isolated transaction)
  • Multi-process support (on the same host with a local file system)
  • Atomic hot backups

LmdbJava adds Java-specific features to LMDB:

  • Extremely fast across a broad range of benchmarks, data sizes and access patterns
  • Modern, idiomatic Java API (including iterators, key ranges, enums, exceptions etc)
  • Nothing to install (the JAR embeds the latest LMDB libraries for Linux, OS X and Windows)
  • Buffer agnostic (Java ByteBuffer, Agrona DirectBuffer, Netty ByteBuf, your own buffer)
  • 100% stock-standard, officially-released, widely-tested LMDB C code (no extra C/JNI code)
  • Low latency design (allocation-free; buffer pools; optional checks can be easily disabled in production etc)
  • Mature code (commenced in 2016) and used for heavy production workloads (eg > 500 TB of HFT data)
  • Actively maintained and with a "Zero Bug Policy" before every release (see issues)
  • Available from Maven Central and OSS Sonatype Snapshots
  • Continuous integration testing on Linux, Windows and macOS with Java 8, 11, 17 and 21
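
For readers new to the API, here is a minimal usage sketch. Paths, names and sizes are illustrative only; consult the project's tutorial for the canonical examples.

```java
import static java.nio.charset.StandardCharsets.UTF_8;
import static org.lmdbjava.DbiFlags.MDB_CREATE;

import java.io.File;
import java.nio.ByteBuffer;

import org.lmdbjava.Dbi;
import org.lmdbjava.Env;
import org.lmdbjava.Txn;

public class QuickStart {
  public static void main(String[] args) {
    final File path = new File("/tmp/lmdb-demo"); // illustrative location
    path.mkdirs();
    // The map size is the maximum database size and must be set up front.
    try (Env<ByteBuffer> env = Env.create()
        .setMapSize(10_485_760) // 10 MiB
        .setMaxDbs(1)
        .open(path)) {
      final Dbi<ByteBuffer> db = env.openDbi("demo", MDB_CREATE);
      final ByteBuffer key = ByteBuffer.allocateDirect(env.getMaxKeySize());
      final ByteBuffer val = ByteBuffer.allocateDirect(64);
      key.put("greeting".getBytes(UTF_8)).flip();
      val.put("hello".getBytes(UTF_8)).flip();
      db.put(key, val); // convenience overload: opens and commits a write txn
      try (Txn<ByteBuffer> txn = env.txnRead()) {
        final ByteBuffer fetched = db.get(txn, key);
        // Zero-copy: 'fetched' is only valid while 'txn' remains open.
        System.out.println(UTF_8.decode(fetched));
      }
    }
  }
}
```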

Performance


Full details are in the latest benchmark report.

Documentation

Support

We're happy to help you use LmdbJava. Simply open a GitHub issue if you have any questions.

Building

This project uses Zig to cross-compile the LMDB native library for all supported architectures. To build LmdbJava locally you must first install a recent version of Zig and then execute the project's cross-compile.sh script. This only needs to be repeated when the cross-compile.sh script is updated (eg following a new official release of the upstream LMDB library).

If you do not wish to install Zig, or use an operating system which cannot easily execute the cross-compile.sh script, you can download a compiled LMDB native library for your platform from a location of your choice and set the lmdbjava.native.lib system property to the resulting file system location. Possible sources of a compiled LMDB native library include operating system package managers, running cross-compile.sh on a supported system, or copying it from the org/lmdbjava directory of any recent, officially released LmdbJava JAR.
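
For example, a launch command using that system property might look like the following (the library path and JAR names are illustrative only):

```shell
# Point LmdbJava at an externally obtained LMDB native library.
# Substitute the real path to the library for your platform.
java -Dlmdbjava.native.lib=/usr/lib/x86_64-linux-gnu/liblmdb.so \
     -cp app.jar:lmdbjava.jar com.example.Main
```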

Contributing

Contributions are welcome! Please see the Contributing Guidelines.

License

This project is licensed under the Apache License, Version 2.0.

This project distribution JAR includes LMDB, which is licensed under The OpenLDAP Public License.

lmdbjava's People

Contributors

2018ik, alepar, altesse, at055612, benalexau, danielcranford, domsj, harrigan, huahaiy, jheister, krisskross, lfoppiano, lgtm-com[bot], maithem, maurice-betzel, pedrolamarao, phraktle, pstutz, seancarroll, sidnt, svegaxmr, sylvyrfysh, tran4o, wardle, weidaru

lmdbjava's Issues

add Dbi#close method

Consider exposing mdb_dbi_close on Dbi.

Based on the LMDB docs this is not necessary and potentially problematic if there are pending modifications, so I'm not quite sure this is a good idea. But pointing it out for completeness.

add Dbi<byte[]> support

In some contexts (such as migrating from legacy lmdbjni/leveldbjni APIs) it would be nice to have a Dbi<byte[]> instead of wrapping arrays with ByteBuffers and copying out the data in the caller. On first glance the design seems to indicate this should be accomplished with a BufferProxy implementation – but it's not clear how... eg. at the point of the allocate() call the size is unknown, etc.
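
In the meantime, the byte[] bridging the issue describes can be done by hand. A minimal sketch of the two directions (the helper names are hypothetical, not part of the LmdbJava API):

```java
import java.nio.ByteBuffer;

public class ByteArrayBridge {

  // Wrap a byte[] in a direct ByteBuffer, as LMDB requires direct buffers.
  static ByteBuffer toDirect(byte[] in) {
    final ByteBuffer b = ByteBuffer.allocateDirect(in.length);
    b.put(in).flip();
    return b;
  }

  // Copy a value buffer out to a byte[]. The copy is needed because the
  // buffer LMDB returns is only valid while the transaction is open.
  static byte[] toArray(ByteBuffer in) {
    final byte[] out = new byte[in.remaining()];
    in.duplicate().get(out); // duplicate() leaves the caller's position intact
    return out;
  }
}
```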

BadReaderLockException on concurrent read transactions

Opening two concurrent read txns gives an error. Here's a unit test to reproduce (for TxnTest.java):

  @Test
  public void readOnlyConcurrentTxnAllowedInReadOnlyEnv() {
    env.openDbi(DB_1, MDB_CREATE);
    final Env<ByteBuffer> roEnv = create().open(path, MDB_NOSUBDIR,
            MDB_RDONLY_ENV);
    Txn<ByteBuffer> txn1 = roEnv.txnRead();
    Txn<ByteBuffer> txn2 = roEnv.txnRead();
    assertThat(txn1, is(notNullValue()));
    assertThat(txn2, is(notNullValue()));
    assertThat(txn1, is(not(sameInstance(txn2))));
    assertThat(txn1.getId(), is(not(txn2.getId())));
    txn1.close();
    txn2.close();
  }

Stacktrace:

org.lmdbjava.Txn$BadReaderLockException: Invalid reuse of reader locktable slot (-30783)

    at org.lmdbjava.ResultCodeMapper.<clinit>(ResultCodeMapper.java:54)
    at org.lmdbjava.Env$Builder.open(Env.java:369)
    at org.lmdbjava.TxnTest.before(TxnTest.java:81)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
    at org.junit.rules.RunRules.evaluate(RunRules.java:20)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
    at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
    at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
    at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
    at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
    at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:117)
    at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:42)
    at com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:262)
    at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:84)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at com.intellij.rt.execution.application.AppMain.main(AppMain.java:147)


Process finished with exit code 255

MDB_INTEGERKEY usage

Hi,

I'm trying to use ints as keys. As such, I assumed that MDB_INTEGERKEY is the most efficient way to proceed. Unfortunately, I'm getting some keys "out of order". I put that in quotation marks because it seems like LMDB is probably doing the correct thing, being the highly used and tested library that it is. This is likely a case of user error, in which case guidance would be much appreciated, or something funky between lmdbjava and lmdb. My first guess is that there is something funky going on with the byte ordering.

The int keys 0-255 seem to be in order, but when I transition to 256, it comes before 255.

Here are the relevant details to reproduce: https://gist.github.com/devinrsmith/ccb12bb1cb81dfc88e8f72e60cfb5666

Thanks.
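
The symptom is consistent with a byte-order mismatch: LMDB's default comparator is an unsigned lexicographic byte comparison, which matches numeric order only for big-endian encodings, while MDB_INTEGERKEY compares keys as native-width integers (little-endian on x86). The following self-contained sketch demonstrates why 256 sorts before 255 when little-endian keys meet a lexicographic comparator:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class KeyOrder {

  // Encode an int as 4 key bytes in the given byte order.
  static byte[] encode(int v, ByteOrder order) {
    return ByteBuffer.allocate(4).order(order).putInt(v).array();
  }

  // Unsigned lexicographic comparison of equal-length keys, mirroring
  // LMDB's default comparator.
  static int compareLex(byte[] a, byte[] b) {
    for (int i = 0; i < a.length; i++) {
      final int d = (a[i] & 0xFF) - (b[i] & 0xFF);
      if (d != 0) {
        return d;
      }
    }
    return 0;
  }
}
```

Big-endian 255 (00 00 00 FF) sorts before 256 (00 00 01 00), but little-endian 255 (FF 00 00 00) sorts after 256 (00 01 00 00). So with the default comparator, write keys big-endian; with MDB_INTEGERKEY, write them in the platform's native order.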

Java env loading issue

I am using LMDB-Java as maven dependency and trying to run some code with LMDB, here is my maven dependencies:

<dependency>
  <groupId>org.lmdbjava</groupId>
  <artifactId>lmdbjava</artifactId>
  <version>0.5.0</version>
</dependency>
<dependency>
  <groupId>org.lmdbjava</groupId>
  <artifactId>lmdbjava-native-linux-x86_64</artifactId>
  <version>0.9.20-2</version>
</dependency>

And I am trying to create the Env so I can later do some DB work. The code I am using:

File f = new File("/home/bahaa/junk/1");
final Env<ByteBuffer> env = Env.create().setMapSize(10_485_760).setMaxDbs(1).open(f);
System.out.println("done");

And still I am getting the exception below:

Exception in thread "main" java.lang.UnsatisfiedLinkError: could not load FFI provider jnr.ffi.provider.jffi.Provider
at jnr.ffi.provider.InvalidProvider$1.loadLibrary(InvalidProvider.java:48)
at jnr.ffi.LibraryLoader.load(LibraryLoader.java:325)
at jnr.ffi.LibraryLoader.load(LibraryLoader.java:304)
at org.lmdbjava.Library.(Library.java:107)
at org.lmdbjava.Env$Builder.open(Env.java:406)
at org.lmdbjava.Env$Builder.open(Env.java:430)
at Bahaaa.main(Bahaaa.java:10)
Caused by: java.lang.ExceptionInInitializerError
at jnr.ffi.provider.jffi.NativeRuntime.getInstance(NativeRuntime.java:58)
at jnr.ffi.provider.jffi.Provider.(Provider.java:29)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at java.lang.Class.newInstance(Class.java:442)
at jnr.ffi.provider.FFIProvider$SystemProviderSingletonHolder.getInstance(FFIProvider.java:68)
at jnr.ffi.provider.FFIProvider$SystemProviderSingletonHolder.(FFIProvider.java:57)
at jnr.ffi.provider.FFIProvider.getSystemProvider(FFIProvider.java:35)
at jnr.ffi.LibraryLoader.create(LibraryLoader.java:73)
... 4 more
Caused by: java.lang.IllegalStateException: Can't overwrite cause with java.lang.UnsatisfiedLinkError: java.lang.UnsatisfiedLinkError: could not locate stub library in jar file. Tried [jni/x86_64-Linux/libjffi-1.2.so, /jni/x86_64-Linux/libjffi-1.2.so]
at com.kenai.jffi.internal.StubLoader.getStubLibraryStream(StubLoader.java:407)
at com.kenai.jffi.internal.StubLoader.loadFromJar(StubLoader.java:355)
at com.kenai.jffi.internal.StubLoader.load(StubLoader.java:258)
at com.kenai.jffi.internal.StubLoader.(StubLoader.java:444)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at com.kenai.jffi.Init.load(Init.java:68)
at com.kenai.jffi.Foreign$InstanceHolder.getInstanceHolder(Foreign.java:49)
at com.kenai.jffi.Foreign$InstanceHolder.(Foreign.java:45)
at com.kenai.jffi.Foreign.getInstance(Foreign.java:103)
at com.kenai.jffi.Type$Builtin.lookupTypeInfo(Type.java:242)
at com.kenai.jffi.Type$Builtin.getTypeInfo(Type.java:237)
at com.kenai.jffi.Type.resolveSize(Type.java:155)
at com.kenai.jffi.Type.size(Type.java:138)
at jnr.ffi.provider.jffi.NativeRuntime$TypeDelegate.size(NativeRuntime.java:187)
at jnr.ffi.provider.AbstractRuntime.(AbstractRuntime.java:48)
at jnr.ffi.provider.jffi.NativeRuntime.(NativeRuntime.java:66)
at jnr.ffi.provider.jffi.NativeRuntime.(NativeRuntime.java:41)
at jnr.ffi.provider.jffi.NativeRuntime$SingletonHolder.(NativeRuntime.java:62)
at jnr.ffi.provider.jffi.NativeRuntime.getInstance(NativeRuntime.java:58)
at jnr.ffi.provider.jffi.Provider.(Provider.java:29)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at java.lang.Class.newInstance(Class.java:442)
at jnr.ffi.provider.FFIProvider$SystemProviderSingletonHolder.getInstance(FFIProvider.java:68)
at jnr.ffi.provider.FFIProvider$SystemProviderSingletonHolder.(FFIProvider.java:57)
at jnr.ffi.provider.FFIProvider.getSystemProvider(FFIProvider.java:35)
at jnr.ffi.LibraryLoader.create(LibraryLoader.java:73)
at org.lmdbjava.Library.(Library.java:107)
at org.lmdbjava.Env$Builder.open(Env.java:406)
at org.lmdbjava.Env$Builder.open(Env.java:430)
at Bahaaa.main(Bahaaa.java:10)

at java.lang.Throwable.initCause(Throwable.java:457)
at com.kenai.jffi.Type$Builtin.lookupTypeInfo(Type.java:252)
at com.kenai.jffi.Type$Builtin.getTypeInfo(Type.java:237)
at com.kenai.jffi.Type.resolveSize(Type.java:155)
at com.kenai.jffi.Type.size(Type.java:138)
at jnr.ffi.provider.jffi.NativeRuntime$TypeDelegate.size(NativeRuntime.java:187)
at jnr.ffi.provider.AbstractRuntime.<init>(AbstractRuntime.java:48)
at jnr.ffi.provider.jffi.NativeRuntime.<init>(NativeRuntime.java:66)
at jnr.ffi.provider.jffi.NativeRuntime.<init>(NativeRuntime.java:41)
at jnr.ffi.provider.jffi.NativeRuntime$SingletonHolder.<clinit>(NativeRuntime.java:62)
... 15 more

Caused by: java.lang.UnsatisfiedLinkError: java.lang.UnsatisfiedLinkError: could not locate stub library in jar file. Tried [jni/x86_64-Linux/libjffi-1.2.so, /jni/x86_64-Linux/libjffi-1.2.so]
at com.kenai.jffi.internal.StubLoader.getStubLibraryStream(StubLoader.java:407)
at com.kenai.jffi.internal.StubLoader.loadFromJar(StubLoader.java:355)
at com.kenai.jffi.internal.StubLoader.load(StubLoader.java:258)
at com.kenai.jffi.internal.StubLoader.(StubLoader.java:444)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at com.kenai.jffi.Init.load(Init.java:68)
at com.kenai.jffi.Foreign$InstanceHolder.getInstanceHolder(Foreign.java:49)
at com.kenai.jffi.Foreign$InstanceHolder.(Foreign.java:45)
at com.kenai.jffi.Foreign.getInstance(Foreign.java:103)
at com.kenai.jffi.Type$Builtin.lookupTypeInfo(Type.java:242)
at com.kenai.jffi.Type$Builtin.getTypeInfo(Type.java:237)
at com.kenai.jffi.Type.resolveSize(Type.java:155)
at com.kenai.jffi.Type.size(Type.java:138)
at jnr.ffi.provider.jffi.NativeRuntime$TypeDelegate.size(NativeRuntime.java:187)
at jnr.ffi.provider.AbstractRuntime.(AbstractRuntime.java:48)
at jnr.ffi.provider.jffi.NativeRuntime.(NativeRuntime.java:66)
at jnr.ffi.provider.jffi.NativeRuntime.(NativeRuntime.java:41)
at jnr.ffi.provider.jffi.NativeRuntime$SingletonHolder.(NativeRuntime.java:62)
at jnr.ffi.provider.jffi.NativeRuntime.getInstance(NativeRuntime.java:58)
at jnr.ffi.provider.jffi.Provider.(Provider.java:29)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at java.lang.Class.newInstance(Class.java:442)
at jnr.ffi.provider.FFIProvider$SystemProviderSingletonHolder.getInstance(FFIProvider.java:68)
at jnr.ffi.provider.FFIProvider$SystemProviderSingletonHolder.(FFIProvider.java:57)
at jnr.ffi.provider.FFIProvider.getSystemProvider(FFIProvider.java:35)
at jnr.ffi.LibraryLoader.create(LibraryLoader.java:73)
at org.lmdbjava.Library.(Library.java:107)
at org.lmdbjava.Env$Builder.open(Env.java:406)
at org.lmdbjava.Env$Builder.open(Env.java:430)
at Bahaaa.main(Bahaaa.java:10)

at com.kenai.jffi.Foreign.newLoadError(Foreign.java:72)
at com.kenai.jffi.Foreign.access$300(Foreign.java:42)
at com.kenai.jffi.Foreign$InValidInstanceHolder.getForeign(Foreign.java:98)
at com.kenai.jffi.Foreign.getInstance(Foreign.java:103)
at com.kenai.jffi.Type$Builtin.lookupTypeInfo(Type.java:242)
... 23 more

Caused by: java.lang.UnsatisfiedLinkError: java.lang.UnsatisfiedLinkError: could not locate stub library in jar file. Tried [jni/x86_64-Linux/libjffi-1.2.so, /jni/x86_64-Linux/libjffi-1.2.so]
at com.kenai.jffi.internal.StubLoader.getStubLibraryStream(StubLoader.java:407)
at com.kenai.jffi.internal.StubLoader.loadFromJar(StubLoader.java:355)
at com.kenai.jffi.internal.StubLoader.load(StubLoader.java:258)
at com.kenai.jffi.internal.StubLoader.(StubLoader.java:444)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at com.kenai.jffi.Init.load(Init.java:68)
at com.kenai.jffi.Foreign$InstanceHolder.getInstanceHolder(Foreign.java:49)
at com.kenai.jffi.Foreign$InstanceHolder.(Foreign.java:45)
at com.kenai.jffi.Foreign.getInstance(Foreign.java:103)
at com.kenai.jffi.Type$Builtin.lookupTypeInfo(Type.java:242)
at com.kenai.jffi.Type$Builtin.getTypeInfo(Type.java:237)
at com.kenai.jffi.Type.resolveSize(Type.java:155)
at com.kenai.jffi.Type.size(Type.java:138)
at jnr.ffi.provider.jffi.NativeRuntime$TypeDelegate.size(NativeRuntime.java:187)
at jnr.ffi.provider.AbstractRuntime.(AbstractRuntime.java:48)
at jnr.ffi.provider.jffi.NativeRuntime.(NativeRuntime.java:66)
at jnr.ffi.provider.jffi.NativeRuntime.(NativeRuntime.java:41)
at jnr.ffi.provider.jffi.NativeRuntime$SingletonHolder.(NativeRuntime.java:62)
at jnr.ffi.provider.jffi.NativeRuntime.getInstance(NativeRuntime.java:58)
at jnr.ffi.provider.jffi.Provider.(Provider.java:29)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at java.lang.Class.newInstance(Class.java:442)
at jnr.ffi.provider.FFIProvider$SystemProviderSingletonHolder.getInstance(FFIProvider.java:68)
at jnr.ffi.provider.FFIProvider$SystemProviderSingletonHolder.(FFIProvider.java:57)
at jnr.ffi.provider.FFIProvider.getSystemProvider(FFIProvider.java:35)
at jnr.ffi.LibraryLoader.create(LibraryLoader.java:73)
at org.lmdbjava.Library.(Library.java:107)
at org.lmdbjava.Env$Builder.open(Env.java:406)
at org.lmdbjava.Env$Builder.open(Env.java:430)
at Bahaaa.main(Bahaaa.java:10)

at com.kenai.jffi.internal.StubLoader.load(StubLoader.java:270)
at com.kenai.jffi.internal.StubLoader.<clinit>(StubLoader.java:444)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at com.kenai.jffi.Init.load(Init.java:68)
at com.kenai.jffi.Foreign$InstanceHolder.getInstanceHolder(Foreign.java:49)
at com.kenai.jffi.Foreign$InstanceHolder.<clinit>(Foreign.java:45)
... 25 more

API concerns: Atomic operations, putIfAbsent, void returning put / delete

lmdb looks very interesting, but the feature set exposed in APIs like this one is woefully lacking if I were to try and replace my existing solution with it.

The C API supports a put with the MDB_NOOVERWRITE flag, but this API throws an exception (awkward) if there is a value. I would expect that users of this flag actually want either a boolean or the existing value returned instead of an error. One could catch the exception, then issue a get() to get the value, but that is probably significantly slower --- the exception handling on one side, and the additional get traversing the index on the other, when it could all be done in a single index traversal.

Ideally one would have a putIfAbsent(key, value) method. This needs to return (a pointer to) the prior value, if present.

Ideally ordinary put supports easily retrieving the value that was replaced as well. From what I can tell, it should be possible to move the cursor to a key (if it exists), fetch the old value, and put the new one in, atomically, for both 'put' and 'putIfAbsent' use cases. LMDBs concurrency model seems strong enough to do so.

Additionally supporting conditional replace -- where the put succeeds atomically only if the prior value matches some expectation, would be of great benefit for some use cases.

I have an application where multiple concurrent threads are updating data, and I need to atomically capture every state transition. This is fairly easy with any store that supports atomic operations and virtually impossible otherwise.

Basically, I want ConcurrentMap: http://docs.oracle.com/javase/7/docs/api/java/util/concurrent/ConcurrentMap.html

And I want delete / put to not return void, since there are valid non-exceptional results for both of those. In the case of delete -- did it actually delete something? In the case of put -- was there a prior value?
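
One way the requested semantics could be layered on the existing Cursor API is shown below. This is an illustration, not a committed design; it relies on LMDB permitting only one write transaction at a time, which makes the check and the write atomic with respect to other writers. The helper name is hypothetical.

```java
import java.nio.ByteBuffer;

import org.lmdbjava.Cursor;
import org.lmdbjava.Dbi;
import org.lmdbjava.Env;
import org.lmdbjava.GetOp;
import org.lmdbjava.Txn;

public class AtomicOps {

  // Sketch of ConcurrentMap.putIfAbsent on top of the Cursor API.
  // Returns null if the value was stored, otherwise a copy of the
  // existing value (copied because the buffer dies with the txn).
  static byte[] putIfAbsent(final Env<ByteBuffer> env,
                            final Dbi<ByteBuffer> db,
                            final ByteBuffer key, final ByteBuffer val) {
    try (Txn<ByteBuffer> txn = env.txnWrite()) {
      byte[] prior = null;
      try (Cursor<ByteBuffer> c = db.openCursor(txn)) {
        if (c.get(key, GetOp.MDB_SET_KEY)) {
          final ByteBuffer existing = c.val();
          prior = new byte[existing.remaining()];
          existing.get(prior); // copy out before the txn closes
        } else {
          c.put(key, val);
        }
      }
      txn.commit();
      return prior;
    }
  }
}
```

A conditional replace ("put only if the prior value equals X") follows the same shape: position the cursor, compare the current value, then overwrite or bail out, all inside the single write transaction.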

Use return value instead of exception?

Based on the special case handling of the MDB_NOOVERWRITE put flag I assume that this is a supported way of retrieving an existing value when putting. The ability to put and know the entry already existed is really useful for my use case, where I build an index bottom-up, cascading until an entry already exists (MDB_DUPSORT on the Dbi, MDB_NODUPDATA on the put):

    try {
      db.put(txn, key, value, MDB_NODUPDATA)
      true
    } catch {
      case e: KeyExistsException =>
        false
      case NonFatal(e) => ...
    }

The issue with this solution is that my index loading time is dominated by string formatting due to exception initialisations.

I was wondering if it might be possible to instead use a return value to communicate if a key/value already existed. I tested it and the performance was much improved.

Is there a better approach to achieving my goal? Would you be interested in a pull request for an approach where the return value is used to communicate extra information instead of an exception?
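
For reference, recent LmdbJava releases moved in this direction: when put flags are supplied, Dbi.put returns a boolean rather than throwing, so the exception-construction cost disappears. A hedged sketch follows; verify the exact signature against the Javadoc of the version you use.

```java
import java.nio.ByteBuffer;

import org.lmdbjava.Dbi;
import org.lmdbjava.PutFlags;
import org.lmdbjava.Txn;

public class PutIfNew {

  // Returns true if the pair was stored, false if it already existed.
  // With MDB_NODUPDATA (or MDB_NOOVERWRITE) supplied, put(...) reports
  // the duplicate via its boolean return value instead of raising
  // KeyExistsException on the hot path.
  static boolean putIfNew(final Dbi<ByteBuffer> db, final Txn<ByteBuffer> txn,
                          final ByteBuffer key, final ByteBuffer val) {
    return db.put(txn, key, val, PutFlags.MDB_NODUPDATA);
  }
}
```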

Regression - Env.create gives Dbi$BadValueSizeException

My code that worked in 0.0.1 is broken under 0.0.2 and 0.0.3. Giving the following exception:

org.lmdbjava.Dbi$BadValueSizeException: Unsupported size of key/DB name/data, or wrong DUPFIXED size (-30781)

    at org.lmdbjava.ResultCodeMapper.<clinit>(ResultCodeMapper.java:56)
    at org.lmdbjava.Env$Builder.open(Env.java:369)
    at org.lmdbjava.Env$Builder.open(Env.java:393)
    at com.unsilo.conceptstore.LmdbConceptStoreConnector.<init>(LmdbConceptStoreConnector.java:59)
    at com.unsilo.conceptstore.LmdbConceptStoreConnector.<init>(LmdbConceptStoreConnector.java:68)
    at com.unsilo.conceptstore.ConceptStoreConnector.lookup(ConceptStoreConnector.java:17)
    at com.unsilo.conceptstore.ConceptStoreConnectorTest.foo(ConceptStoreConnectorTest.java:28)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
    at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
    at org.junit.rules.RunRules.evaluate(RunRules.java:20)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
    at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
    at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
    at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
    at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
    at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:117)
    at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:42)
    at com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:262)
    at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:84)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at com.intellij.rt.execution.application.AppMain.main(AppMain.java:147)

My (working) 0.0.1 code looks like:

        env = Env.create()
                .setMapSize(256, ByteUnit.MEBIBYTES)
                .setMaxDbs(32)
                .open(file, 0664,
                        EnvFlags.MDB_NOSYNC,
                        EnvFlags.MDB_WRITEMAP,
                        EnvFlags.MDB_MAPASYNC);

And for 0.0.3, where I get the error:

        env = Env.create()
                .setMapSize(MEBIBYTES.toBytes(256))
                .setMaxDbs(32)
                .open(file, 0664,
                        EnvFlags.MDB_NOSYNC,
                        EnvFlags.MDB_WRITEMAP,
                        EnvFlags.MDB_MAPASYNC);

I also tried setting maxDbs to 1 and omitting the file mode and env flags, but I get the same result.

Vote: Release 0.0.6 (update: 0.5.0)

It's been about 5 months since 0.0.5 was released, so there have been a number of dependency updates and some changes to CursorIterator to make it a little more efficient and flexible.

Are we fine to release 0.0.6?

Vote: Release 0.6.0

  • New CursorIterator capabilities (#7)
  • New mdb_set_compare capabilities (#56)
  • New MDB_MULTIPLE capability (#28)
  • Removed KeyValPopulator (discussed in #7)

Cursor.seek() broken in v0.0.5?

Maybe I found a bug in v0.0.5 which is not existent in v0.0.4 or v0.0.3.

Minimal example to reproduce the issue:

I created a minimal example lmdbtest.tar.bz2 on gdrive (355kb)

This archive contains a simple maven project with one java class (to trigger the bug) and a directory 'lmdb' containing some example data. The lmdb database comes in memory mapped flavor (that is: <900kb of data in a 10GB sparse file! please don't fill your hd with zeros! ;)

Problem description:

In the given data I'm searching for the first entry (SeekOp.MDB_FIRST) which is successful, because the file is not empty.

But the buffer from Cursor.key() is empty and thus the buffer throws an exception on read() calls.

In version v0.0.4 this problem does NOT occur and the returned buffer is filled with 4 bytes (an integer, as expected).

Please uncompress (with sparse support!) and just import the maven project into your preferred IDE. Call the main() method. Change the pom.xml for v0.0.4 vs. v0.0.5

I hope you can reproduce the bug. If you need further details, please feel free to contact me.

I like lmdb and lmdbjava very much. I hope this issue helps you and doesn't waste your time.

Txn allocation overhead?

While doing some performance testing for a simple put operation, I'm getting somewhat (~15%) worse performance than with lmdbjni. Based on profiling, it seems to me that this is related to buffer allocations in the Txn constructor. Could the allocations for the key and value buffers be avoided there? The key and value buffers that are to be written are already allocated off-heap in this use case, so this seems unnecessary.

Enumerating all dbs in an environment

I need to open an environment and discover all of the databases. From what I understand, the database names are stored in the default, nameless db. But what encoding? Creating a db ends in a native method stub passing a String... UTF-8?

    Txn<ByteBuffer> txn = env.txn(null);
    env.openDbi(null).iterate(txn).forEachRemaining(kv -> {
      ByteBuffer keybytes = kv.key();
      String key = ??? 
    });

It would probably be useful to add both the ability to retrieve all dbi names to Env, as well as the ability to return all Dbis in an Env as a Map<String, Dbi>. The LMDB docs suggest that it is good practice to retrieve and re-use these anyway.

So I suppose this is mostly a question, but also a minor feature request.
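
A sketch of what this could look like. Recent LmdbJava versions expose Env.getDbiNames() (returning the raw byte[] names) and a byte[] overload of openDbi; the UTF-8 assumption below matches how LmdbJava encodes String database names, but verify both points against your version's Javadoc.

```java
import static java.nio.charset.StandardCharsets.UTF_8;

import java.nio.ByteBuffer;
import java.util.HashMap;
import java.util.Map;

import org.lmdbjava.Dbi;
import org.lmdbjava.Env;

public class DbiDiscovery {

  // Names are stored as raw bytes in the unnamed database; decoding
  // them with UTF-8 recovers the original String names.
  static Map<String, Dbi<ByteBuffer>> openAll(final Env<ByteBuffer> env) {
    final Map<String, Dbi<ByteBuffer>> dbs = new HashMap<>();
    for (final byte[] name : env.getDbiNames()) {
      dbs.put(new String(name, UTF_8), env.openDbi(name));
    }
    return dbs; // re-use these handles, as the LMDB docs recommend
  }
}
```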

Corrupted stacktraces

It has been observed many times that stacktraces from lmdbjava often end up corrupted. See e.g. #23 and #21.

It also seems to end up pointing into org.lmdbjava.Env$Builder.open()... or maybe that's just for me.

Example usage of MDB_MULTIPLE

Thank you for creating LmdbJava!

I'm trying to adapt your tutorial5() method to use the MDB_MULTIPLE flag to store the three values in one call. The LMDB documentation says that the data argument must be two MDB_vals. Is this possible using LmdbJava?

MVStore comparison

Well done for the comparison, really helpful. However, it looks like you are not setting the MVStore to use off-heap memory, which will cause the OOME and also increase the need for GC as the data size grows.

install without maven

Trying to get it going without maven. Added following to my classpath:

jffi-1.2.13.jar
jnr-constants-0.9.6.jar
jnr-ffi-2.1.1.jar
lmdbjava-0.0.5-20170104.024957-17.jar

Get following trace:

java.lang.UnsatisfiedLinkError: could not load FFI provider jnr.ffi.provider.jffi.Provider
	... 
Caused by: java.lang.IllegalStateException: Can't overwrite cause with java.lang.UnsatisfiedLinkError: java.lang.UnsatisfiedLinkError: could not locate stub library in jar file.  Tried [jni/x86_64-Linux/libjffi-1.2.so, /jni/x86_64-Linux/libjffi-1.2.so]
	at com.kenai.jffi.internal.StubLoader.getStubLibraryStream(StubLoader.java:406)

Had a look at the poms and jars but don't see where those are supposed to come from.

Replace Unsafe and illegal reflection operations

Gave a quick try to lmdbjava with JDK 9 and wanted to record the findings here.

Due to the class access restrictions imposed by Jigsaw, the following JVM arguments have to be specified:

--add-opens java.base/java.nio=ALL-UNNAMED --add-opens java.base/sun.nio.ch=ALL-UNNAMED
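
A complete launch command would therefore look like the following (the JAR name is illustrative only):

```shell
# JDK 9+ invocation opening the internal packages LmdbJava's buffer
# handling needs to reflect into.
java --add-opens java.base/java.nio=ALL-UNNAMED \
     --add-opens java.base/sun.nio.ch=ALL-UNNAMED \
     -jar my-app.jar
```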

Vote: Release 0.0.5

@scottcarey has requested a 0.0.5 release in issue #36. As I just updated to LMDB 0.9.20 (as per #43) and merged a fresh PR (#45) I wanted to check in whether we're happy for a release at this stage or need a few days to test it out? All the automated tests pass fine.

cc @krisskross / @phraktle

Cannot open Dbi when Env is in read only mode.

When the environment is created in read-only mode, a Dbi handle fails to open. This is because the openDbi method creates a write Txn. I'm not sure why that is the logic.

The problem is here:

  public Dbi<T> openDbi(final String name, final DbiFlags... flags) {
    try (Txn<T> txn = txnWrite()) {
      final Dbi<T> dbi = new Dbi<>(this, txn, name, flags);
      txn.commit();
      return dbi;
    } catch (ReadWriteRequiredException | CommittedException e) {
      throw new IllegalStateException(e); // cannot happen (Txn is try scoped)
    }
  }

There is an exception thrown as a result of this:

org.lmdbjava.LmdbNativeException$ConstantDerviedException: Platform constant error code: EACCES (13)
    at org.lmdbjava.ResultCodeMapper.checkRc(ResultCodeMapper.java:82)
    at org.lmdbjava.Txn.<init>(Txn.java:75)
    at org.lmdbjava.Env.txnWrite(Env.java:289)
    at org.lmdbjava.Env.openDbi(Env.java:213)
...

Code like this will not function (please ignore the Scala syntax):

  private[this] class LmdbEnv {
    val env: Env[ByteBuffer] = Env.create()
        .setMapSize(4, ByteUnit.GIBIBYTES)
        .setMaxDbs(1)
        .open(new File(conf.db_path), EnvFlags.MDB_RDONLY)
    val db = env.openDbi(conf.db_name)

    def close =
      env.close
  }

segfaults after MDB_MAP_FULL

The following code segfaults, after the db is full.

import static com.jakewharton.byteunits.BinaryByteUnit.MEBIBYTES;

import java.io.File;
import java.nio.ByteBuffer;
import java.util.concurrent.ThreadLocalRandom;

import org.lmdbjava.Dbi;
import org.lmdbjava.DbiFlags;
import org.lmdbjava.Env;

public class LmdbFullTest {

    public static void main(String[] args) {
        File path = new File("/tmp/lmdb_full/");
        path.mkdir();
        for (;;) {
            writeFull(path);
        }
    }

    private static void writeFull(File path) {
        Env<ByteBuffer> env = Env.create().setMapSize(MEBIBYTES.toBytes(10)).setMaxDbs(1).open(path);
        Dbi<ByteBuffer> db = env.openDbi("test", DbiFlags.MDB_CREATE);

        try {
            byte[] k = new byte[64];
            ByteBuffer key = ByteBuffer.allocateDirect(64);
            ByteBuffer val = ByteBuffer.allocateDirect(1024);

            ThreadLocalRandom rnd = ThreadLocalRandom.current();
            int count = 0;
            for (;;) {
                rnd.nextBytes(k);
                key.clear();
                key.put(k).flip();
                val.clear();
                db.put(key, val);
                System.out.println("written " + ++count);
            }
        } catch (Exception e) {
            e.printStackTrace(System.out);
        }

        System.out.println("closing db");
        db.close();

        System.out.println("closing env");
        env.close();
    }

}

improve CursorIterator

Consider making CursorIterator more extensible. It would be reasonable to be able to subclass to provide a range iterator (i.e. a forward iterator that checks an upper bound key, or a reverse iterator checking a lower bound). Since the class is final and tryToComputeNext is private, this is not currently feasible.

Another minor point is that the state machine should probably include a CLOSED state (in which hasNext returns false, and repeated calls to close are idempotent).
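A minimal sketch of such a state machine (hypothetical design, not the lmdbjava API; all names here are illustrative) — subclasses supply computeNext(), so a range iterator only needs to override one method, and close() is idempotent:

```java
import java.util.Iterator;
import java.util.NoSuchElementException;

// Hypothetical extensible guarded iterator with an explicit CLOSED state.
abstract class GuardedIterator<T> implements Iterator<T>, AutoCloseable {
  private enum State { READY, NOT_READY, DONE, CLOSED }
  private State state = State.NOT_READY;
  private T next;

  // Subclasses (e.g. a bounded range iterator) produce the next element,
  // or call endOfData() and return null when the bound is reached.
  protected abstract T computeNext();

  protected final T endOfData() {
    state = State.DONE;
    return null;
  }

  @Override
  public final boolean hasNext() {
    switch (state) {
      case READY:  return true;
      case DONE:
      case CLOSED: return false;          // hasNext is false once closed
      default:
        next = computeNext();
        if (state == State.DONE) return false;
        state = State.READY;
        return true;
    }
  }

  @Override
  public final T next() {
    if (!hasNext()) throw new NoSuchElementException();
    state = State.NOT_READY;
    return next;
  }

  @Override
  public final void close() {             // idempotent: repeat calls are no-ops
    state = State.CLOSED;
  }
}
```

A forward range iterator would then subclass this, returning endOfData() once the cursor key passes the upper bound.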

ByteBuffer limit/position not considered

It appears that the position and limit of the passed-in buffers are not respected. So, for example, the tutorial actually ends up storing 511-byte keys instead of just the actual contents.

The tutorial should correctly read:

ByteBuffer key = allocateDirect(511);
key.put("foo".getBytes(UTF_8));
key.flip();

Note the added flip.

ByteBufferProxy#in should add the buffer's position to the starting address and consider limit when calculating the size.
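The effect of the missing flip can be seen with plain java.nio, no lmdbjava needed — a proxy that honours position and limit would see 3 readable bytes after the flip, rather than the buffer's full capacity:

```java
import java.nio.ByteBuffer;
import static java.nio.charset.StandardCharsets.UTF_8;

// Demonstrates why flip() matters for a proxy that respects position/limit.
public class FlipDemo {
  public static void main(String[] args) {
    ByteBuffer key = ByteBuffer.allocateDirect(511);
    key.put("foo".getBytes(UTF_8));
    // After put: position = 3, limit = 511 -- the readable region
    // (position..limit) is the 508 bytes AFTER "foo", not "foo" itself.
    System.out.println(key.remaining());   // 508
    key.flip();
    // After flip: position = 0, limit = 3 -- exactly the bytes written.
    System.out.println(key.remaining());   // 3
  }
}
```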

Exception when using parent transactions

I can't get parent transactions to work. This is the Scala code I run:

object ParentTxnProblem extends App {
  val db =  org.lmdbjava.Env.create
  val testFolder = new java.io.File("./test")
  testFolder.mkdir
  val env = db.open(testFolder)
  val parentTxn = env.txnRead
  val childTxn = env.txn(parentTxn, org.lmdbjava.TxnFlags.MDB_RDONLY_TXN)
}

This is the output of running the above with the current 0.0.5-SNAPSHOT from Sonatype:

Exception in thread "main" org.lmdbjava.LmdbNativeException$ConstantDerviedException: Platform constant error code: EINVAL (22)
	at org.lmdbjava.ResultCodeMapper.checkRc(ResultCodeMapper.java:113)
	at org.lmdbjava.Txn.<init>(Txn.java:73)
	at org.lmdbjava.Env.txn(Env.java:274)
	at last line of example code above

Am I doing something wrong or is there an issue with parent transactions? Thanks a lot for your time.

error using heap-allocated byte buffers

When putting a heap-allocated ByteBuffer, you get the following exception:

java.lang.ClassCastException: java.nio.HeapByteBuffer cannot be cast to sun.nio.ch.DirectBuffer
    at org.lmdbjava.ByteBufferProxy$UnsafeProxy.in(ByteBufferProxy.java:204)
    at org.lmdbjava.ByteBufferProxy$UnsafeProxy.in(ByteBufferProxy.java:166)
    at org.lmdbjava.Txn.keyIn(Txn.java:240)
    at org.lmdbjava.Dbi.put(Dbi.java:266)

Build refactoring

Currently refactoring the build ecosystem for LmdbJava projects. The changes and their rationale are:

  • Shift to Circle CI. While Travis is fine, Circle CI offers more concurrent builds for open source projects, it offers caching out of the box, its interface is much slicker, and commercial projects have finer-grained pricing (so it can be less expensive if you wanted to use Circle CI over Travis for a small private project).
  • Shift to CodeCov. While Coveralls is fine, CodeCov offers vastly simpler integration (a one-line bash command versus a full Maven plugin), authentication-free support for multiple CI environments (not just Travis), it can aggregate the results of multiple builds (useful for proving the various OS X, Windows and Linux-specific paths touch relevant lines), its interface is slicker (including graphical drill-down), and it includes branch / instruction / line coverage (versus just line coverage).
  • BinTray to perform GPG signature. Off-list support from BinTray (which is fantastic BTW) has confirmed private GPG keys are never displayed via the web UI, reducing my concerns with that attack vector. Allowing BinTray to sign releases means one less REST call, and no hassles with partial build setups (as needed for the native projects, where OS X builds separately from the Linux and Windows cross-compile).
  • Maven to perform Maven Central Sync. I've written a Groovy-based script that can do this, meaning no more curl invocation from a CI script. That makes it simple to run manually.
  • Discontinue encrypted files via Travis. Circle CI doesn't offer them, so we're back to environment variables. But that's OK as the aforementioned GPG and Maven Central Sync changes overcome the curl requirement of .json files, which was the only reason we needed the encrypted files in the first place. Circle CI has been confirmed to not display environment variables once recorded, eliminating concerns with that attack vector.
  • Establish a parent POM. This will hold all build config and dramatically reduce the individual POM sizes.
  • Establish a resources artifact. This will hold all classpath resources and files referenced by the above parent POM. For example, the OpenLDAP 2.8 license needs to be bundled somewhere, as do the various Checkstyle, PMD, Maven Versions Plugin etc configuration files.

It sounds like a lot, but I've already done most of this offline. I'm mainly making this ticket for those interested in the context behind the changes (and will understand why there will probably be various test deploys and intermittent build failures over coming days as this is moved to production and tweaked).

Need option for creating a new keyVal in the Txn class

As a user of lmdbjava, I want to use FlatBuffers to point directly to the LMDB database so I can access my data without allocating separate memory in the JVM.

FlatBuffers is a data exchange format that's much like ProtoBuf, but the key difference is it's zero-allocate/zero-parse. When constructing a FlatBuffer object, you pass it a ByteBuffer object, making it quite nice to work with in lmdbjava.

The problem, however, is that the underlying keyVal field in the Txn object is reused. If you assign one FlatBuffer object a ByteBuffer that is returned from a Txn, and then do the same to a different FlatBuffer object, they will both have the same values. You could copy the bytes into memory, but that can severely hamper the performance and memory usage of the program (especially for the kind of work I do).
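Until such an option exists, the only safe workaround is a defensive copy of each buffer before the transaction reuses it. A minimal sketch (copyOf is a hypothetical helper, not lmdbjava API):

```java
import java.nio.ByteBuffer;

// Hypothetical helper: snapshots the readable bytes of a buffer returned by
// a transaction, so a later cursor move cannot change what the caller holds.
public final class Buffers {
  public static ByteBuffer copyOf(ByteBuffer src) {
    ByteBuffer copy = ByteBuffer.allocateDirect(src.remaining());
    copy.put(src.duplicate());  // duplicate() leaves src's position intact
    copy.flip();
    return copy;
  }
}
```

This is exactly the copy-per-value cost the issue is trying to avoid, which is why a per-call keyVal option in Txn would still be preferable.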

Attached is a quick and dirty project demonstrating the problem.

LMDBFlatBufferTest.zip

How to run on CentOS 6.6

What is your advice for running lmdbjava on CentOS 6.6?

The problem is that glibc is too old:

Caused by: java.lang.UnsatisfiedLinkError: /lib64/libc.so.6: version `GLIBC_2.14' not found (required by /tmp/lmdbjava-native-library-2822335035865031061.so)

Can we install glibc on CentOS 6.6? Do you know of any good instructions of how to do that?

Or should we build our own version of lmdbjava and its native dependencies? How would we do that?

Guard against keys being too large

Quoting @kamstrup in #25:

I also got a corrupted trace yesterday because I tried writing something with a key len > 511. I suspect that it is pretty much all code that ends up mapping a native lmdb error code to a java exception that triggers this bug

LmdbJava should detect this issue and raise an exception.

Exception in thread "main" java.lang.IllegalArgumentException: Unknown result code 131

The code is based on a snippet from another issue:

import java.io.File;
import java.nio.ByteBuffer;
import java.util.concurrent.ThreadLocalRandom;

import org.lmdbjava.Dbi;
import org.lmdbjava.DbiFlags;
import org.lmdbjava.Env;

public class Test {

    public static void main(String[] args) {
        File path = new File("lmdbTest");
        path.mkdir();
        for (; ; ) {
            writeFull(path);
        }
    }

    private static void writeFull(File path) {
        int size = 1;
        int MB = 1024 * 1024 ;//* 1024;
        Env<ByteBuffer> env = Env.create().setMapSize(size * MB).setMaxDbs(1).open(path);
        Dbi<ByteBuffer> db = env.openDbi("test", DbiFlags.MDB_CREATE);

        byte[] k = new byte[64];
        ByteBuffer key = ByteBuffer.allocateDirect(64);
        ByteBuffer val = ByteBuffer.allocateDirect(MB/4);

        ThreadLocalRandom rnd = ThreadLocalRandom.current();
        int count = 0;
        for (int i = 0; i < 1024*100; i++) {
            try {
                rnd.nextBytes(k);
                key.clear();
                key.put(k).flip();
                val.clear();
                db.put(key, val);
                System.out.println("written " + ++count);
            } catch (Exception e) {
                //e.printStackTrace(System.out);
                System.out.println("map full, old size = "+size+" MB");
                db.close();
                env.close();
                size++;
                env = Env.create().setMapSize(size * MB).setMaxDbs(1).open(path);
                db = env.openDbi("test", DbiFlags.MDB_CREATE);
            }
        }

        System.out.println("closing db");
        db.close();

        System.out.println("closing env");
        env.close();
    }

}

The exception itself:

Exception in thread "main" java.lang.IllegalArgumentException: Unknown result code 131
    at org.lmdbjava.ResultCodeMapper.checkRc(ResultCodeMapper.java:96)
    at org.lmdbjava.Env$Builder.open(Env.java:376)
    at org.lmdbjava.Env$Builder.open(Env.java:388)
    at rhinodog.Run.Test.writeFull(Test.java:48)
    at rhinodog.Run.Test.main(Test.java:18)

Growing map size

Is there an API for map (auto)resize? Currently it looks like LMDB just gives up writing when the size limit is reached.

TutorialTest tutorial5

In tutorial 5 it states "Duplicate support requires both keys and values to be <= max key size". The Dbi is opened only with MDB_DUPSORT. Should this not result in being able to add arbitrary value sizes, as LMDB also allows MDB_DUPFIXED for fixed sizes?

add Dbi#stat method

While kicking the tires on this new LMDB wrapper, I've noticed that there's only mdb_env_stat (Env#stat), but no mdb_stat (Dbi#stat) exposed.

org.lmdbjava.LmdbNativeException$ConstantDerviedException: Platform constant error code: ENOENT (2)

I have the following code:

    public boolean init(Properties prop) {
        boolean fail = false;
        String csEnvironmentStr = prop.getProperty("csEnvironment");
        env = create()
            // LMDB also needs to know how large our DB might be. Over-estimating is OK.
            .setMapSize(53687091200L)
            // LMDB also needs to know how many DBs (Dbi) we want to store in this Env.
            .setMaxDbs(4)
            // Now let's open the Env. The same path can be concurrently opened and
            // used in different processes, but do not open the same path twice in
            // the same process at the same time.
            .open(new File(csEnvironmentStr + "/csdb"));

...

I get this exception; the last line above is the line 91 indicated below:

org.lmdbjava.LmdbNativeException$ConstantDerviedException: Platform constant error code: ENOENT (2)
at org.lmdbjava.ResultCodeMapper.checkRc(ResultCodeMapper.java:113)
at org.lmdbjava.Env$Builder.open(Env.java:458)
at org.lmdbjava.Env$Builder.open(Env.java:474)
at uk.co.example.LmDbStore.init(LmDbStore.java:91)

As you can see this code is lifted from the Tutorial example. The path specified does exist. Ideas?

Can't build with maven

ssantoro@stefanows:/var/tmp/lmdbjava [master|…44]
09:17 $ mvn -version
Apache Maven 3.3.9 (bb52d8502b132ec0a5a3f4c09453c07478323dc5; 2015-11-10T11:41:47-05:00)
Maven home: /home/ssantoro/.sdkman/candidates/maven/current
Java version: 1.8.0_101, vendor: Oracle Corporation
Java home: /usr/lib/jvm/java-8-oracle/jre
Default locale: en_US, platform encoding: UTF-8
OS name: "linux", version: "4.4.0-36-generic", arch: "amd64", family: "unix"
ssantoro@stefanows:
/var/tmp/lmdbjava [master|…44]
09:17 $ mvn package
[INFO] Scanning for projects...
[ERROR] [ERROR] Some problems were encountered while processing the POMs:
[FATAL] Non-resolvable parent POM for org.lmdbjava:lmdbjava:0.0.4-SNAPSHOT: Could not find artifact au.com.acegi:acegi-standard-project:pom:0.0.8-SNAPSHOT and 'parent.relativePath' points at wrong local POM @ line 4, column 11
@
[ERROR] The build could not read 1 project -> [Help 1]
[ERROR]
[ERROR] The project org.lmdbjava:lmdbjava:0.0.4-SNAPSHOT (/home/ssantoro/var/tmp/lmdbjava/pom.xml) has 1 error
[ERROR] Non-resolvable parent POM for org.lmdbjava:lmdbjava:0.0.4-SNAPSHOT: Could not find artifact au.com.acegi:acegi-standard-project:pom:0.0.8-SNAPSHOT and 'parent.relativePath' points at wrong local POM @ line 4, column 11 -> [Help 2]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/ProjectBuildingException
[ERROR] [Help 2] http://cwiki.apache.org/confluence/display/MAVEN/UnresolvableModelException
ssantoro@stefanows:~/var/tmp/lmdbjava [master|…44]
09:17 $

OSGi support

Dear lmdbjava,
i have implemented rudimentary OSGi support for lmdbjava, a blogpost can be found under betzel.net.
The dependencies like the jnr libs are mostly bundles already . So i figure it should not be such a task to add OSGi integration. The main issue is detecting an OSGi environment and loading the lmdb binary with the correct classloader. OSGi already takes care of extracting the matching platform binary inside the bundle cache folder. My 0.0.5_1 branch detects a Karaf environment variable that is a folder near the lib cache folders. A rather crude solution. Would love to help out and hear your thoughts on this.

Android support

As I try to phase users over from lmdbjni to lmdbjava, it might be worth considering supporting Android as well. The current lmdbjni releases for Android are built manually using the Android NDK through the ARM toolchain.

Since there are also other platforms such as MIPS, we might choose not to have an automated build procedure, but instead let users build releases at their own convenience?

Thoughts?
