
NFS4J


A pure Java implementation of an NFS server, versions 3, 4.0 and 4.1, including the pNFS extension with nfs4.1-files and flex-files layout types.

Building from sources

To build nfs4j from source, Java 11 and Maven 3 are required.

To run benchmarks:

mvn verify -Pbenchmark

Implementing your own NFS server

public class App {

    public static void main(String[] args) throws Exception {

        // create an instance of a filesystem to be exported
        VirtualFileSystem vfs = new ....;

        // create the RPC service which will handle NFS requests
        OncRpcSvc nfsSvc = new OncRpcSvcBuilder()
                .withPort(2049)
                .withTCP()
                .withAutoPublish()
                .withWorkerThreadIoStrategy()
                .build();

        // specify file with export entries
        ExportFile exportFile = new ExportFile(....);

        // create NFS v4.1 server
        NFSServerV41 nfs4 = new NFSServerV41.Builder()
                .withExportFile(exportFile)
                .withVfs(vfs)
                .withOperationFactory(new MDSOperationFactory())
                .build();

        // create NFS v3 and mountd servers
        NfsServerV3 nfs3 = new NfsServerV3(exportFile, vfs);
        MountServer mountd = new MountServer(exportFile, vfs);

        // register NFS servers at portmap service
        nfsSvc.register(new OncRpcProgram(100003, 4), nfs4);
        nfsSvc.register(new OncRpcProgram(100003, 3), nfs3);
        nfsSvc.register(new OncRpcProgram(100005, 3), mountd);

        // start RPC service
        nfsSvc.start();

        System.in.read();
    }
}

Use NFS4J in your project

<dependency>
    <groupId>org.dcache</groupId>
    <artifactId>nfs4j-core</artifactId>
    <version>0.19.0</version>
</dependency>

<repositories>
    <repository>
        <id>dcache-releases</id>
        <name>dCache.ORG maven repository</name>
        <url>https://download.dcache.org/nexus/repository/public/</url>
        <layout>default</layout>
    </repository>
</repositories>

IMPORTANT WARNINGS

Though NFS4J is used in production by dCache and other projects, the public API is still unstable and subject to change (as indicated by the leading zero in the version number). It should therefore be considered beta.

Please consult the API changes document when switching between version numbers. Patch-level releases are, of course, not affected by API changes.

License

Licensed under the LGPL v2 (or later).

How to contribute

NFS4J uses the Linux kernel model, where git is not only the source repository but also the way to track contributions and copyrights.

Each submitted patch must have a "Signed-off-by" line. Patches without this line will not be accepted.

The sign-off is a simple line at the end of the explanation for the patch, which certifies that you wrote it or otherwise have the right to pass it on as an open-source patch. The rules are pretty simple: if you can certify the below:

    Developer's Certificate of Origin 1.1

    By making a contribution to this project, I certify that:

    (a) The contribution was created in whole or in part by me and I
         have the right to submit it under the open source license
         indicated in the file; or

    (b) The contribution is based upon previous work that, to the best
        of my knowledge, is covered under an appropriate open source
        license and I have the right under that license to submit that
        work with modifications, whether created in whole or in part
        by me, under the same open source license (unless I am
        permitted to submit under a different license), as indicated
        in the file; or

    (c) The contribution was provided directly to me by some other
        person who certified (a), (b) or (c) and I have not modified
        it.

    (d) I understand and agree that this project and the contribution
        are public and that a record of the contribution (including all
        personal information I submit with it, including my sign-off) is
        maintained indefinitely and may be redistributed consistent with
        this project or the open source license(s) involved.

then you just add a line (created with git commit -s) saying

Signed-off-by: Random J Developer <[email protected]>

using your real name (sorry, no pseudonyms or anonymous contributions).

Contact Us

For help and development-related discussions, please contact us: dev (@) dcache (.) org

nfs4j's People

Contributors

amarcionek, ato, dependabot[bot], diwakergupta, dkocher, dmitrylitvintsev, dotphoton-ziad, hamletson, kofemann, paulmillar, radai-rosenblatt, ruanimal, svenin, toilal, ylangisc


nfs4j's Issues

Can I use this library as an NFS client?

Sorry if this question is trivial. I went through the README and existing issues, but couldn't figure out if this library can be used as an NFS client to mount an existing NFS share and read/write files to that share. If it can be used to mount, read and write files, could you provide a quick example?

Access Denied

2019-10-09 20:09:53.222 [] [nfs@2049 (8)] WARN o.d.n.v.PseudoFs - Access denied: 01caffee000000000000000100080000000000000006 192.168.0.106/192.168.0.106 T rwaDtc Subject: < UidPrincipal[0],GidPrincipal[0,primary]>

What is the meaning of the above log?

Deadlock in NFS server

Today we found a deadlock in our nfs4j-0.15.3-based deployment:

Found one Java-level deadlock:
=============================
"OncRpcSvc Worker(31)":
  waiting for ownable synchronizer 0x00000005c0ee2278, (a java.util.concurrent.locks.ReentrantLock$NonfairSync),
  which is held by "OncRpcSvc Worker(13)"
"OncRpcSvc Worker(13)":
  waiting to lock monitor 0x00007fcc8404dd98 (object 0x00000005c276d0e0, a org.dcache.nfs.v4.NFS4Client),
  which is held by "OncRpcSvc Worker(31)"

Java stack information for the threads listed above:
===================================================
"OncRpcSvc Worker(31)":
	at sun.misc.Unsafe.park(Native Method)
	- parking to wait for  <0x00000005c0ee2278> (a java.util.concurrent.locks.ReentrantLock$NonfairSync)
	at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:870)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1199)
	at java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(ReentrantLock.java:209)
	at java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:285)
	at org.dcache.nfs.v4.FileTracker.removeOpen(FileTracker.java:200)
	at org.dcache.nfs.v4.FileTracker.lambda$addOpen$3(FileTracker.java:141)
	at org.dcache.nfs.v4.FileTracker$$Lambda$152/1462883798.notifyDisposed(Unknown Source)
	at org.dcache.nfs.v4.NFS4State.disposeIgnoreFailures(NFS4State.java:119)
	- locked <0x00000005c27ca228> (a org.dcache.nfs.v4.NFS4State)
	at org.dcache.nfs.v4.NFS4Client.drainStates(NFS4Client.java:485)
	at org.dcache.nfs.v4.NFS4Client.updateLeaseTime(NFS4Client.java:277)
	- locked <0x00000005c276d0e0> (a org.dcache.nfs.v4.NFS4Client)
	at org.dcache.nfs.v4.OperationSEQUENCE.process(OperationSEQUENCE.java:60)
	at org.dcache.chimera.nfsv41.door.proxy.ProxyIoMdsOpFactory$1.lambda$process$0(ProxyIoMdsOpFactory.java:53)
	at org.dcache.chimera.nfsv41.door.proxy.ProxyIoMdsOpFactory$1$$Lambda$127/2020621766.run(Unknown Source)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:360)
	at org.dcache.chimera.nfsv41.door.proxy.ProxyIoMdsOpFactory$1.process(ProxyIoMdsOpFactory.java:47)
	at org.dcache.nfs.v4.NFSServerV41.NFSPROC4_COMPOUND_4(NFSServerV41.java:173)
	at org.dcache.nfs.v4.xdr.nfs4_prot_NFS4_PROGRAM_ServerStub.dispatchOncRpcCall(nfs4_prot_NFS4_PROGRAM_ServerStub.java:48)
	at org.dcache.xdr.RpcDispatcher$1.run(RpcDispatcher.java:110)
	at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:591)
	at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.run(AbstractThreadPool.java:571)
	at java.lang.Thread.run(Thread.java:748)
"OncRpcSvc Worker(13)":
	at org.dcache.nfs.v4.NFS4Client.isLeaseValid(NFS4Client.java:263)
	- waiting to lock <0x00000005c276d0e0> (a org.dcache.nfs.v4.NFS4Client)
	at org.dcache.nfs.v4.FileTracker.lambda$addOpen$1(FileTracker.java:119)
	at org.dcache.nfs.v4.FileTracker$$Lambda$150/1052688442.test(Unknown Source)
	at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:174)
	at java.util.ArrayList$ArrayListSpliterator.tryAdvance(ArrayList.java:1351)
	at java.util.stream.ReferencePipeline.forEachWithCancel(ReferencePipeline.java:126)
	at java.util.stream.AbstractPipeline.copyIntoWithCancel(AbstractPipeline.java:498)
	at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:485)
	at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
	at java.util.stream.MatchOps$MatchOp.evaluateSequential(MatchOps.java:230)
	at java.util.stream.MatchOps$MatchOp.evaluateSequential(MatchOps.java:196)
	at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
	at java.util.stream.ReferencePipeline.anyMatch(ReferencePipeline.java:449)
	at org.dcache.nfs.v4.FileTracker.addOpen(FileTracker.java:120)
	at org.dcache.nfs.v4.OperationOPEN.process(OperationOPEN.java:262)
	at org.dcache.chimera.nfsv41.door.AccessLogAwareOperationFactory$OpOpen.process(AccessLogAwareOperationFactory.java:251)
	at org.dcache.chimera.nfsv41.door.proxy.ProxyIoMdsOpFactory$1.lambda$process$0(ProxyIoMdsOpFactory.java:53)
	at org.dcache.chimera.nfsv41.door.proxy.ProxyIoMdsOpFactory$1$$Lambda$127/2020621766.run(Unknown Source)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:360)
	at org.dcache.chimera.nfsv41.door.proxy.ProxyIoMdsOpFactory$1.process(ProxyIoMdsOpFactory.java:47)
	at org.dcache.nfs.v4.NFSServerV41.NFSPROC4_COMPOUND_4(NFSServerV41.java:173)
	at org.dcache.nfs.v4.xdr.nfs4_prot_NFS4_PROGRAM_ServerStub.dispatchOncRpcCall(nfs4_prot_NFS4_PROGRAM_ServerStub.java:48)
	at org.dcache.xdr.RpcDispatcher$1.run(RpcDispatcher.java:110)
	at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:591)
	at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.run(AbstractThreadPool.java:571)
	at java.lang.Thread.run(Thread.java:748)

Found 1 deadlock.

NFS Client Problem

First, thank you for the library. I'm trying to implement an NFS client based on your dcache/nfs/v4/client example (it's just for test purposes). My question is: how can I mount the remote directory on disk, as when we mount with the Ubuntu command:
mount -t nfs 192.168.1.10:/media/user/ /mnt/media_rw/remote

Logs after I start the Client:
System.out: Connected to: Mr. X
System.out: pNFS MDS: true
System.out: pNFS DS: false
System.out: Using slots: 159
System.out: server lease time: 90 sec.
System.out: root fh = 0100010000000000

java.util.ConcurrentModificationException in Simple Lock manager

Under the throughput benchmark, SimpleLm throws java.util.ConcurrentModificationException on the master branch (985b50e):

java.util.ConcurrentModificationException
	at com.google.common.collect.AbstractMapBasedMultimap$WrappedCollection$WrappedIterator.validateIterator(AbstractMapBasedMultimap.java:476)
	at com.google.common.collect.AbstractMapBasedMultimap$WrappedCollection$WrappedIterator.hasNext(AbstractMapBasedMultimap.java:482)
	at java.util.Spliterators$IteratorSpliterator.tryAdvance(Spliterators.java:1811)
	at java.util.stream.ReferencePipeline.forEachWithCancel(ReferencePipeline.java:126)
	at java.util.stream.AbstractPipeline.copyIntoWithCancel(AbstractPipeline.java:498)
	at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:485)
	at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
	at java.util.stream.FindOps$FindOp.evaluateSequential(FindOps.java:152)
	at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
	at java.util.stream.ReferencePipeline.findAny(ReferencePipeline.java:469)
	at org.dcache.nfs.v4.nlm.AbstractLockManager.lock(AbstractLockManager.java:91)
	at org.dcache.nfs.v4.nlm.SimpleLmBenchmark.benchmarkConcurrentLocking(SimpleLmBenchmark.java:60)
	at org.dcache.nfs.v4.nlm.generated.SimpleLmBenchmark_benchmarkConcurrentLocking_jmhTest.benchmarkConcurrentLocking_thrpt_jmhStub(SimpleLmBenchmark_benchmarkConcurrentLocking_jmhTest.java:148)
	at org.dcache.nfs.v4.nlm.generated.SimpleLmBenchmark_benchmarkConcurrentLocking_jmhTest.benchmarkConcurrentLocking_Throughput(SimpleLmBenchmark_benchmarkConcurrentLocking_jmhTest.java:87)
	at sun.reflect.GeneratedMethodAccessor1.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.openjdk.jmh.runner.BenchmarkHandler$BenchmarkTask.call(BenchmarkHandler.java:453)
	at org.openjdk.jmh.runner.BenchmarkHandler$BenchmarkTask.call(BenchmarkHandler.java:437)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)

Listing of large directories is not sequential

When you list large folders, the list() method is called roughly once for every 100 elements. I keep track of the elements already retrieved from the set and the ones left to retrieve. But every time a sequential list() is called, the cookie is not the expected one. For example, on the first list() call, 115 elements are retrieved from the NavigableSet, but the next list() call comes with cookie = 100 and I have to move my pointer backwards.

When I put org.dcache.nfs.v4.OperationREADDIR on debug, I saw the following result:

Sending 115 entries (32556 bytes from 32680, dircount = 8170) cookie = 0 EOF=false
Sending 115 entries (32552 bytes from 32768, dircount = 8192) cookie = 100 EOF=false
Sending 115 entries (32540 bytes from 32768, dircount = 8192) cookie = 202 EOF=false
Sending 115 entries (32554 bytes from 32768, dircount = 8192) cookie = 304 EOF=false

Every time, the cookie received is not equal to the initial cookie plus the entries sent.
This is an issue on my end, as I use a database-like file system, and when a directory contains thousands of entries, I use a lazy collection of entries, loading only 1k files in memory. When they're exhausted, I remove them and load the next 1k. I may hit a case where I move to the next batch of 1k files, but the client wants to continue listing from an element that was in the previous 1k batch.
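One way to cope with non-sequential cookies is to make the listing resumable from any cookie the client sends back, including older ones. The sketch below is only an illustration (the class and method names are invented, not part of the nfs4j API): entries are keyed by cookie in an ordered map, so a backwards-seeking list() call costs nothing.

```java
import java.util.List;
import java.util.NavigableMap;
import java.util.TreeMap;
import java.util.stream.Collectors;

// Sketch (not nfs4j API): a directory listing that can resume from ANY
// previously issued cookie, since NFS clients may re-read from an older one.
public class ResumableListing {
    private final NavigableMap<Long, String> entriesByCookie = new TreeMap<>();

    public ResumableListing(List<String> names) {
        long cookie = 1; // cookie 0 is reserved for "start from the beginning"
        for (String name : names) {
            entriesByCookie.put(cookie++, name);
        }
    }

    /** Return up to 'count' entries strictly after the given cookie. */
    public List<String> list(long cookie, int count) {
        return entriesByCookie.tailMap(cookie, false).values().stream()
                .limit(count)
                .collect(Collectors.toList());
    }
}
```

For a database-backed VFS the TreeMap would be replaced by a cookie-indexed query, but the contract is the same: never assume the next request continues where the previous reply ended.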

nfs3 server does not correctly handle utf-8 filenames

When trying to cp files with Chinese/Japanese characters in their names, the nfs3 server replaces anything that isn't US-ASCII with ASCII 63, the question mark. This appears to be caused by Xdr decoding the strings as US-ASCII, using Java's String constructor in xdrDecodeString with Charset.US_ASCII as the second parameter.

If this NFS3 implementation is to follow the spec, I think it should either use UTF-8 or throw an error, as suggested by RFC 1813 section 3.2:

  1. In general, there will be characters that a server will
    not be able to handle as part of a filename. This set of
    characters will vary from server to server and from
    implementation to implementation. In most cases, it is the
    server which will control the client's view of the file system.
    If the server receives a filename containing characters that it
    can not handle, the error, NFS3ERR_EACCES, should be returned.
    Client implementations should be prepared to handle this side
    affect of heterogeneity.
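The mangling described above can be reproduced in plain Java (a standalone illustration, not nfs4j code): decoding UTF-8 wire bytes as US-ASCII destroys every non-ASCII character, while a UTF-8 decode round-trips the name intact.

```java
import java.nio.charset.StandardCharsets;

public class CharsetDemo {
    // Decode wire bytes the way the issue describes (US-ASCII) vs. UTF-8.
    public static String decodeAscii(byte[] b) {
        return new String(b, StandardCharsets.US_ASCII);
    }

    public static String decodeUtf8(byte[] b) {
        return new String(b, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        // A filename with Japanese characters, as sent by the client in UTF-8.
        byte[] wire = "ファイル.txt".getBytes(StandardCharsets.UTF_8);

        // Every multi-byte character is mangled byte-by-byte; the name is lost.
        System.out.println(decodeAscii(wire));

        // UTF-8 decoding preserves the original name.
        System.out.println(decodeUtf8(wire));
    }
}
```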

Handle more than the 16 aux groups limit

The NFS server, or to be more precise, RPC's AUTH_SYS, supports a maximum of 16 secondary groups per user. However, in modern environments, having more groups is not an exception. While we can't change the RPC layer, the NFS server itself can resolve a user's group membership by talking to an external provider.
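A hypothetical sketch of such an external provider (the class and method names here are invented for illustration; nothing like this is confirmed to exist in nfs4j): a lookup keyed by uid that returns the full group set, sidestepping the 16-entry limit baked into the AUTH_SYS credential.

```java
import java.util.Map;
import java.util.Set;

// Hypothetical sketch: resolve the FULL group membership of a uid from an
// external directory (e.g. LDAP or NIS), instead of trusting the at-most-16
// gids carried in the AUTH_SYS credential itself.
public class ExternalGroupResolver {
    private final Map<Long, Set<Long>> directory; // uid -> all gids

    public ExternalGroupResolver(Map<Long, Set<Long>> directory) {
        this.directory = directory;
    }

    /** All gids for the given uid; empty if the uid is unknown. */
    public Set<Long> groupsOf(long uid) {
        return directory.getOrDefault(uid, Set.of());
    }
}
```

In a real deployment the map would be backed by an LDAP or NIS query, and the server would merge (or replace) the AUTH_SYS gids with this result before permission checks.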

Access Has No User Credentials

According to both the V3 and V4 RFCs, the access() function is supposed to check the requested permissions against the permissions on the object, considering the user in the request. The current definition of access in VirtualFileSystem does not take any user (Subject?). Is this intentional?

Mountd runs on same port as NFS

I'm facing an issue running the NFS4J server on Linux while the OS's default NFS server is running. I'm changing the ports of the OS's NFS and mountd daemons, but there are still problems mounting the server.
I construct my server the following way:
mNFSService = new OncRpcSvcBuilder()
        .withPort(2049)
        .withTCP()
        .withAutoPublish()
        .withWorkerThreadIoStrategy()
        .withServiceName(serverName)
        .build();
NFSServerV41 nfs4 = new NFSServerV41.Builder()
        .withVfs(mVFS)
        .withOperationFactory(new MDSOperationFactory())
        .withExportFile(exportFile)
        .build();
NfsServerV3 nfs3 = new NfsServerV3(exportFile, mVFS);
MountServer mountd = new MountServer(exportFile, mVFS);
mNFSService.register(new OncRpcProgram(mount_prot.MOUNT_PROGRAM, mount_prot.MOUNT_V3), mountd);
mNFSService.register(new OncRpcProgram(nfs3_prot.NFS_PROGRAM, nfs3_prot.NFS_V3), nfs3);
mNFSService.register(new OncRpcProgram(nfs4_prot.NFS4_PROGRAM, nfs4_prot.NFS_V4), nfs4);
mNFSService.start();
But it seems that both the NFS server and the mountd server run on the same port, 2049. If I try to mount using the following command:
mount -v -t nfs -o vers=3,nolock,noacl,proto=tcp, :/ /
it works fine with both version 3 and 4.1 if the default NFS server is stopped.
But if I start the OS's NFS server on port 11200 and its mountd on 11300, it seems that portmap detects the mountd on 11300 as the default one and redirects calls to it instead of to the NFS4J mountd server.
So now if I run the same command, it fails with:
mount.nfs: mount(2): Permission denied
and a message in Linux's mountd log: refused mount request from for / (/): not exported
This works OK for version 4.1 but fails for 3; I'm not sure what the difference is.
If I add "port=2049,mountport=2049" to the mount command, I'm able to mount with version 3 too, so I suspect the problem is that mountd is running on the same port as nfs.

Is it possible to configure a different port for the mountd daemon, so that the mount command succeeds without specifying ports, for both version 3 and 4.1?

Add an example to README how to start the NFS server

It is currently not clear how to start the server. The README doesn't explain it, and java -jar target/jpnfs-0.5.10-jar-with-dependencies.jar seems to require a config file named "SpringRunner" which is mentioned nowhere (Spring: IOException parsing XML document from file [/root/jpnfs/target/SpringRunner]; nested exception is java.io.FileNotFoundException: SpringRunner (No such file or directory) occurs on invocation of java -jar target/jpnfs-0.5.10-jar-with-dependencies.jar SpringRunner).

I experienced this trouble with git tag jpnfs-0.5.10 after building with mvn install.

Missing directory entries

Hi,
There seems to be a bug in OperationREADDIR#process setting EOF to true too early. In my specific case I have 21 entries in my DirectoryStream. While processing the 21st and last entry, the condition (dircount + newDirSize > _args.opreaddir.dircount.value) is true and, due to the break, the while loop is left without processing the last entry. Since it was the last entry, dirList.hasNext() evaluates to false and thus res.resok4.reply.eof is set to true, which is wrong. The last item gets lost.

-Yves
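The off-by-one can be shown in isolation (a standalone illustration, not the actual OperationREADDIR code): if the size check breaks out of the loop before the current entry is emitted, computing EOF from the iterator alone silently drops that entry.

```java
import java.util.Iterator;
import java.util.List;

public class EofBugDemo {

    /**
     * Simulate "sending" at most 'budget' entries from the list.
     * Returns {buggyEof, correctEof}.
     */
    public static boolean[] eofFlags(List<String> entries, int budget) {
        Iterator<String> it = entries.iterator();
        int sent = 0;
        String pending = null;
        while (it.hasNext()) {
            String entry = it.next();
            if (sent + 1 > budget) { // entry does not fit into the reply
                pending = entry;     // consumed from the iterator but never emitted
                break;
            }
            sent++;
        }
        // Buggy EOF: looks only at the iterator, ignoring the pending entry.
        boolean buggyEof = !it.hasNext();
        // Correct EOF: the un-emitted entry must count as "still remaining".
        boolean correctEof = pending == null && !it.hasNext();
        return new boolean[]{buggyEof, correctEof};
    }
}
```

With 3 entries and room for only 2, the buggy computation reports EOF even though the third entry was never sent, which matches the lost-entry behaviour described in the issue.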

macOS compatibility

I am attempting to get an NFS server working on macOS Sierra (10.12.6), Java version "1.8.0_151". The same code works fine on CentOS 7.

In the macOS case, it successfully registers (visible in rpcinfo) and the bindings look correct. The system has the firewall disabled.

showmount -e fails with "showmount: Cannot retrieve info from host: localhost: RPC failed:: RPC: Procedure unavailable"

~aleigh$ rpcinfo -u localhost nfs
rpcinfo: RPC: Procedure unavailable
program 100003 version 0 is not available
rpcinfo: RPC: Procedure unavailable
program 100003 version 4294967295 is not available

~ aleigh$ rpcinfo -u localhost mountd
rpcinfo: RPC: Procedure unavailable
program 100005 version 0 is not available
rpcinfo: RPC: Procedure unavailable
program 100005 version 4294967295 is not available

I am willing to put in the time to fix this problem, but I wonder if this is a known issue, or if there is any advice.

public class NfsServer {

    private static final MechaLogger logger = MechaLoggerFactory.getLogger(NfsServer.class);
    private final OncRpcSvc service;

    public NfsServer(VirtualFileSystem vfs, ExportFile exportFile) throws IOException {
        // create the RPC service which will handle NFS requests
        service = new OncRpcSvcBuilder()
                .withPort(2049)
                .withTCP()
                .withUDP()
                .withAutoPublish()
                .withWorkerThreadIoStrategy()
                .build();

        // create NFS v4.1 server
        //   NFSServerV41 nfs4 = new NFSServerV41(new MDSOperationFactory(), null, vfs, exportFile);

        // create NFS v3 and mountd servers
        NfsServerV3 nfs3 = new NfsServerV3(exportFile, vfs);
        MountServer mountd = new MountServer(exportFile, vfs);

        // register NFS servers at portmap service
        //  service.register(new OncRpcProgram(100003, 4), nfs4);
        service.register(new OncRpcProgram(100003, 3), nfs3);
        service.register(new OncRpcProgram(100005, 3), mountd);
    }

    public void run() {
        try {
            service.start();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    public static void main(String[] args) throws IOException {
        NfsServer srv = new NfsServer(new SdsFs(), new ExportFile(new File("exports.nfs")));
        srv.run();
        ThreadUtil.forever();
    }
}

NPE when listing an empty directory

Using NfsServerV3 and my own VFS implementation, if VirtualFileSystem.list(Inode) returns an empty list (when the directory is empty), I get an NPE.

The flow behind the NPE is as follows:

  1. in NfsServerV3.NFSPROC3_READDIRPLUS_3() List dirList is assigned an empty list (a transform on the empty list that my VFS returned).
  2. in NfsServerV3 line 761, the code creates a new dirlistplus3 instance: res.resok.reply = new dirlistplus3();
  3. since dirList.size() is 0, the iteration loop starting at line 775 is never executed, which means the internal fields of the dirlistplus3 instance created in step 2 remain all null.
  4. later, when attempting to serialize the result object, it encounters that dirlistplus3 object, and calls xdrEncode() on it.
  5. in entryplus3 line 44 (inside xdrEncode) a NPE is generated since all fields are null ($this.fileid.xdrEncode(xdr) is the exact NPE cause)

I think res.resok.reply.entries in NfsServerV3 should only be initialized if there are results. Setting the field to null in a breakpoint produced the expected result for me (an "ls" call from the shell completed).

NFS server fails with stacktrace on create with krb5

When mounting with krb5 and the user mapping returns NOBODY as a result of a missing mapping, a create request will fail with the following stacktrace:

22 Jun 2016 13:04:02 (NFS-dcache-lab000) [] Unhandled exception:
java.util.NoSuchElementException: Subject has no UID
    at org.dcache.auth.Subjects.getUid(Subjects.java:164) ~[dcache-common-2.17.0-SNAPSHOT.jar:2.17.0-SNAPSHOT]
    at org.dcache.chimera.nfsv41.door.ChimeraVfs.create(ChimeraVfs.java:132) ~[dcache-nfs-2.17.0-SNAPSHOT.jar:2.17.0-SNAPSHOT]
    at org.dcache.nfs.vfs.VfsCache.create(VfsCache.java:156) ~[nfs4j-core-0.12.2.jar:0.12.2]
    at org.dcache.nfs.vfs.PseudoFs.create(PseudoFs.java:155) ~[nfs4j-core-0.12.2.jar:0.12.2]
    at org.dcache.nfs.v4.OperationOPEN.process(OperationOPEN.java:158) ~[nfs4j-core-0.12.2.jar:0.12.2]
    at org.dcache.chimera.nfsv41.door.proxy.ProxyIoMdsOpFactory$1.lambda$process$0(ProxyIoMdsOpFactory.java:53) ~[dcache-nfs-2.17.0-SNAPSHOT.jar:2.17.0-SNAPSHOT]
    at org.dcache.chimera.nfsv41.door.proxy.ProxyIoMdsOpFactory$1$$Lambda$72/1766112330.run(Unknown Source) ~[na:na]
    at java.security.AccessController.doPrivileged(Native Method) ~[na:1.8.0_31]
    at javax.security.auth.Subject.doAs(Subject.java:360) ~[na:1.8.0_31]
    at org.dcache.chimera.nfsv41.door.proxy.ProxyIoMdsOpFactory$1.process(ProxyIoMdsOpFactory.java:50) ~[dcache-nfs-2.17.0-SNAPSHOT.jar:2.17.0-SNAPSHOT]
    at org.dcache.nfs.v4.NFSServerV41.NFSPROC4_COMPOUND_4(NFSServerV41.java:152) ~[nfs4j-core-0.12.2.jar:0.12.2]
    at org.dcache.nfs.v4.xdr.nfs4_prot_NFS4_PROGRAM_ServerStub.dispatchOncRpcCall(nfs4_prot_NFS4_PROGRAM_ServerStub.java:48) [nfs4j-core-0.12.2.jar:0.12.2]
    at org.dcache.xdr.RpcDispatcher$1.run(RpcDispatcher.java:82) [oncrpc4j-core-2.5.3.jar:na]
    at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:591) [grizzly-framework-2.3.24.jar:2.3.24]
    at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.run(AbstractThreadPool.java:571) [grizzly-framework-2.3.24.jar:2.3.24]
    at java.lang.Thread.run(Thread.java:745) [na:1.8.0_31]

nfsv4 OperationWRITE has arbitrary limit on write offset + size

OperationWRITE contains this code:

if (_args.opwrite.offset.value + _args.opwrite.data.remaining() > 0x3ffffffe) {
    throw new InvalException("Arbitrary value");
}

This seems to prevent any write from succeeding where the offset plus the size is greater than around 1 GB. Why does this limit exist? And what is the meaning of the "Arbitrary value" exception thrown in this case?
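For reference, the constant in that check works out to 2^30 - 2 bytes, just under 1 GiB, which matches the observed ceiling:

```java
public class WriteLimit {
    // The constant from the OperationWRITE check quoted above.
    public static final long LIMIT = 0x3ffffffeL;

    public static void main(String[] args) {
        // 0x3ffffffe == 2^30 - 2 == 1073741822 bytes, i.e. 2 bytes short of 1 GiB
        System.out.println(LIMIT);
        System.out.println(LIMIT == (1L << 30) - 2);
    }
}
```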

support for xattr (rfc8276)

RFC 8276 describes 'File System Extended Attributes in NFSv4'. This functionality may help to provide a standard way to access internal metadata which is currently exposed through various magic files.

Running server on Windows

Hi,

I'm trying to get the NFS4J server to run on Windows.
I'm using the latest code from the repository, since the released version had some issues.
And I'm using the LocalFileSystem implementation from the simple-nfs project.
Note: I'm the using NFS3 protocol.

My NFS client connects correctly; however, it performs a number of operations that fail on the server, most noticeably ones involving wildcards (*) and requests for large files.

Wildcards
When running the client, I get exceptions like this on the server.

A * cannot be translated, which makes sense:

java.nio.file.InvalidPathException: Illegal char <*> at index 0: *.vmx9IizqK
	at java.base/sun.nio.fs.WindowsPathParser.normalize(WindowsPathParser.java:182)
	at java.base/sun.nio.fs.WindowsPathParser.parse(WindowsPathParser.java:153)
	at java.base/sun.nio.fs.WindowsPathParser.parse(WindowsPathParser.java:77)
	at java.base/sun.nio.fs.WindowsPath.parse(WindowsPath.java:92)
	at java.base/sun.nio.fs.WindowsFileSystem.getPath(WindowsFileSystem.java:229)
	at java.base/java.nio.file.Path.resolve(Path.java:515)
	at dk.ti.nfs.server.LocalFileSystem.lookup(LocalFileSystem.java:243)
	at org.dcache.nfs.vfs.PseudoFs.lookup(PseudoFs.java:191)
	at org.dcache.nfs.v3.NfsServerV3.NFSPROC3_LOOKUP_3(NfsServerV3.java:547)
	at org.dcache.nfs.v3.xdr.nfs3_protServerStub.dispatchOncRpcCall(nfs3_protServerStub.java:62)
	at org.dcache.oncrpc4j.rpc.RpcDispatcher$1.run(RpcDispatcher.java:110)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
	at java.base/java.lang.Thread.run(Thread.java:834)

I could easily implement a new lookup method with wildcard support, but I'm uncertain as to how to return multiple INodes within the LOOKUP3res...

Large file support
I'm uncertain as to the cause; it might be related to the problem above, or it might be an incorrect error message from the client. It might also be a configuration or implementation problem with the server.

error: Failed to clone disk: The destination file system does not support large files (12).

Does NFS4j support large files, and if so, how do I configure this?

Please advise

Best regards
Kjertil

expose write stability parameters to underlying VFS

Currently jpnfs disregards the incoming "committed" args and always reports FILE_SYNC in responses.
This is a problem for VFS implementations that do not necessarily write directly to persistent storage (such as a database).

It would be useful to pass the incoming stable arg as a parameter to vfs.write() and allow the VFS to set the returned value to indicate the stability level of the data written.

Publish artifacts into maven central

As there are more projects than dCache using nfs4j, it makes sense to publish artifacts into Maven Central to decouple external projects from the dcache.org maven repository.

Failure to connect to AWS EFS mounts

It looks like the client included in the project is unable to communicate with AWS's Elastic File System, which supports NFSv4. Have you had any luck connecting to it? I initially got a "minor version not supported" error (10021), then tried using withMinorVersion(0) and moved on to a "not supported" error (10004) when it attempts the exchange_id. I'm not sure whether it should be able to talk to it, but the Linux NFSv4 client works with it fine.

It would be amazing if this worked as you could then use EFS from AWS Lambda.

support for netgroup with ldap (and nis)

Netgroups are groups (sets) of NFS client machines that are defined to be given specific filesystem access. This is a convenient way to control access to an NFS server by managing netgroup membership.

Can it work on windows?

Actually, I need to start the server on Windows and connect a client to it from Linux, but I don't know how.

Missing callback for OP_CLOSE in VirtualFileSystem

I am missing a notification in the virtual filesystem for CLOSE operations. It would be nice if VirtualFileSystem could be extended with a close operation that is called from org.dcache.nfs.v4.OperationCLOSE.

Permission denied for OPEN with mode=EXCLUSIVE4_1

On NFSv4.1, it seems a file opened by OperationOPEN with EXCLUSIVE4_1 mode can't be written by a subsequent OperationWRITE operation (access denied in the PseudoFs checks).

cva_attrs contains a single attribute, mode=0, causing the file to be writable by nobody.

I'll try to fully understand the problem and add some details later.

set attribute error

Hello, I am encountering this problem; what might be the reason?

2019-10-10 01:14:11.794 [] [nfs@2049 (8)] ERROR o.d.n.v.NfsServerV3 - SETATTR
java.lang.IllegalArgumentException: Illegal mode "w" must be one of "r", "rw", "rws", or "rwd"
at java.base/java.io.RandomAccessFile.<init>(RandomAccessFile.java:239)
at java.base/java.io.RandomAccessFile.<init>(RandomAccessFile.java:214)
at org.dcache.simplenfs.LocalFileSystem.setattr(LocalFileSystem.java:527)
at org.dcache.nfs.vfs.PseudoFs.setattr(PseudoFs.java:326)
at org.dcache.nfs.v3.HimeraNfsUtils.set_sattr(HimeraNfsUtils.java:167)
at org.dcache.nfs.v3.NfsServerV3.NFSPROC3_SETATTR_3(NfsServerV3.java:1159)
at org.dcache.nfs.v3.xdr.nfs3_protServerStub.dispatchOncRpcCall(nfs3_protServerStub.java:55)
at org.dcache.oncrpc4j.rpc.RpcDispatcher$1.run(RpcDispatcher.java:110)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)

Too many levels of symbolic links

Hello, I have used the latest update and the previous set-attribute error is now gone.

I am actually trying to network-boot Android-x86 over NFS, and now I am stuck with this error:

2019-10-10 03:55:00.912 [] [nfs@2049 (11)] WARN  o.d.s.LocalFileSystem - Unable to set mode of file /home/ayx/test/data/dalvik-cache/x86/system@[email protected]: /home/ayx/test/data/dalvik-cache/x86/system@[email protected]: Too many levels of symbolic links or unable to access attributes of symbolic link
2019-10-10 03:55:00.913 [] [nfs@2049 (11)] ERROR o.d.n.v.NfsServerV3 - SYMLINK
java.lang.UnsupportedOperationException: set mode unsupported: /home/ayx/test/data/dalvik-cache/x86/system@[email protected]: Too many levels of symbolic links or unable to access attributes of symbolic link
	at org.dcache.simplenfs.LocalFileSystem.setattr(LocalFileSystem.java:523)
	at org.dcache.nfs.vfs.PseudoFs.setattr(PseudoFs.java:326)
	at org.dcache.nfs.v3.HimeraNfsUtils.set_sattr(HimeraNfsUtils.java:167)
	at org.dcache.nfs.v3.NfsServerV3.NFSPROC3_SYMLINK_3(NfsServerV3.java:1206)
	at org.dcache.nfs.v3.xdr.nfs3_protServerStub.dispatchOncRpcCall(nfs3_protServerStub.java:111)
	at org.dcache.oncrpc4j.rpc.RpcDispatcher$1.run(RpcDispatcher.java:110)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
	at java.base/java.lang.Thread.run(Thread.java:834)

I was able to successfully network-boot using the standard nfs-utils package provided by Ubuntu's apt package manager.

Thanks for your kind help.

NFS V3 and Mount Clients?

Curious whether any thought has been given to implementing NFS v3 and MOUNT v1 RPC clients. Since there is server-side code here, presumably it would be easy to build client-side RPC on top of it. If anyone knows of a library that does this, I would be grateful.

I don't even need I/O capability at the moment. Our use case is simply to enumerate the exports on a server. I'd like pure Java so it's not platform-specific.

Liquibase DDL compatibility issue with Oracle

changeset-1.9.2.xml: table t_acl.rs_id should be char(36) instead of varchar(36); the add-constraint would not work in Oracle.
changeset-1.9.2.xml: t_retention_policy_ipnfsid_fkey is 31 characters long; Oracle does not allow identifiers longer than 30 characters.
changeset-2.8.xml: in "FROM t_inodes AS p", the keyword AS is not needed, and it does not work with Oracle either.

Sunny

Retrieving mount options

Hello
Is it at all possible to retrieve the mount options that were used on the client? For example, given

mount -o user=myuser server1:/ /mnt/server1

how can I get the value of user?

Thanks!

Can I force the NFS client to perform lookups down to the current working directory (through error codes)?

I'm using nfs4j with a custom VFS implementation that uses file paths instead of inode numbers. As inode data is limited to 64/128 bytes, depending on the NFS version, I keep a cache that maps inodes to file paths. Everything works fine until I reboot the server and lose this cache: the NFS client caches too and sends previously obtained inode values, but I'm unable to rebuild a file path from the inode itself. I disabled all client-side caching (-o noac,lookupcache=none), but some cases still fail. If I enter some sub-folder structure, restart the NFS server, and execute "ls -al", I get "ls: cannot open directory '.': Stale file handle", since the inode is not in the cache and I'm unable to complete the getattr() call. On the other hand, if I call "ls -al " with the full path, the client correctly performs a lookup for every path component, I'm able to rebuild the cache mappings, and the operation succeeds.
So, when an operation arrives with an invalid inode, is there a way to force the client to perform those lookups? I've tried throwing StaleException in such cases, but without luck. The NFS client is v4.1.
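For illustration, a toy version of the cache described above (all names are made up): lookup() records the inode-to-path mapping as the client walks the path, while resolving an unknown inode, as a bare getattr() must, can only fail with a stale-handle error.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Toy sketch of an inode -> path cache; names and the inode derivation
// are made up for illustration, not taken from nfs4j.
public class InodePathCache {

    private final Map<Long, String> inodeToPath = new ConcurrentHashMap<>();

    // Called from lookup(): derive the child inode and remember its path.
    // parentId is unused here; a real VFS would validate it.
    long lookup(long parentId, String parentPath, String name) {
        String childPath = parentPath + "/" + name;
        long childId = childPath.hashCode(); // toy inode derivation
        inodeToPath.put(childId, childPath);
        return childId;
    }

    // Called from getattr(): without a cached mapping, the path cannot
    // be reconstructed and the handle is effectively stale.
    String resolve(long inodeId) {
        String path = inodeToPath.get(inodeId);
        if (path == null) {
            throw new IllegalStateException("stale file handle");
        }
        return path;
    }

    public static void main(String[] args) {
        InodePathCache cache = new InodePathCache();
        long id = cache.lookup(0L, "/export", "data");
        assert cache.resolve(id).equals("/export/data");
    }
}
```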

NFS Gateway capability?

Is this code suitable for implementing an NFS gateway to an arbitrary backend? If so, where would one start looking in the code?

SameThreadIOStrategy not working as expected

Hello,

I'm launching a custom NFS server with nfs4j. I wanted to build the RPC service with the "same thread IO strategy" because, according to the Grizzly documentation (https://javaee.github.io/grizzly/iostrategies.html), it can be the most efficient strategy. However, when specifying a number of selector threads in the builder, it seems that only one thread is used to process incoming requests on my NFS server.

Here is how I instantiate the RPC service:
rpcService = new OncRpcSvcBuilder().withPort(6115).withTCP().withAutoPublish()
        .withSameThreadIoStrategy().withSelectorThreadPoolSize(8)
        .withServiceName("test_service").build();

And in the logs I only see one thread (test_service:6115 (2) SelectorRunner):
[2019-04-05 07:33:59,443][INFO ] LOOKUP : .xdg-volume-info (MyVirtualFilesystem.java:113 - test_service:6115 (2) SelectorRunner)
[2019-04-05 07:33:59,443][INFO ] LOOKUP : .autorun.inf (MyVirtualFilesystem.java:113 - test_service:6115 (2) SelectorRunner)
[2019-04-05 07:33:59,443][INFO ] LOOKUP : .Trash (MyVirtualFilesystem.java:113 - test_service:6115 (2) SelectorRunner)

Maybe I'm doing something wrong?

Layout stateids must start with a seqid equal to one (1)

According to RFC 7530:

When such a set of locks is first created, the server returns a
stateid with a seqid value of one.

This is true for layout stateids as well. However, the current implementation returns a freshly created layout stateid with a seqid equal to two (2).

Is there any working example of server based on NFS4J?

I searched Google and went through the source code repository, but I cannot find a runnable server class that I could launch directly. Is there one, or what are the steps to build one? The example in the README requires a VirtualFileSystem implementation. Is there an example implementation?

Directory listing always traverse all elements

I'm using nfs4j 17.3, but as far as I can see the issue is present in the latest version. The problem is in the DirectoryStream initialization:

  1. If I use the constructor that takes a Collection, it is transformed into a TreeSet, whose addAll call traverses all elements.
  2. If I use the constructor that takes a NavigableSet, the collection is not traversed during initialization, but it is in PseudoFs.list(), where the set is converted to a collection and passed to a new DirectoryStream instance, which converts it to a TreeSet (the first option).

So no matter which constructor I use, it always comes down to a Collection-to-Set transformation that requires traversing all elements. This is problematic: if a folder has 10k items, I can't return them on demand, because all of them will be loaded into memory for every listing.

Even if I use a VFSCache, every listing processes all elements from the cookie start point to the end.
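The copy cost described above can be demonstrated in isolation with a toy lazy collection (this is plain JDK behavior, not nfs4j code): constructing a TreeSet from it forces every element to be produced immediately.

```java
import java.util.AbstractCollection;
import java.util.Iterator;
import java.util.TreeSet;
import java.util.concurrent.atomic.AtomicInteger;

// Demonstrates that TreeSet's copying constructor materializes every
// element of its source collection up front.
public class EagerCopyDemo {

    // Returns how many elements a TreeSet copy forces the source to produce.
    public static int copyCount(int n) {
        AtomicInteger produced = new AtomicInteger();
        AbstractCollection<Integer> lazy = new AbstractCollection<>() {
            @Override public Iterator<Integer> iterator() {
                return new Iterator<>() {
                    int i = 0;
                    @Override public boolean hasNext() { return i < n; }
                    @Override public Integer next() { produced.incrementAndGet(); return i++; }
                };
            }
            @Override public int size() { return n; }
        };
        new TreeSet<>(lazy); // the copy: traverses all n elements immediately
        return produced.get();
    }

    public static void main(String[] args) {
        // Even for a large "directory", construction materializes everything.
        assert copyCount(10_000) == 10_000;
    }
}
```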

Interoperability for lock implementation with macOS client

Follow-up for #42. With the latest changes in the master branch regarding state handling (possibly b873c76) and the merged lock support, the simple lock implementation is no longer interoperable with the NFS client on macOS 10.12.

When mounting the volume, a LOCK : NFS4ERR_BAD_STATEID error is thrown.

RENEW operation fails when lease has expired

[OncRpcSvc Worker(6)] WARN org.dcache.nfs.v4.NFSServerV41 - Bad client: op: RENEW : NFS4ERR_EXPIRED : lease time expired: (32209908): 00000000b8e85600dffe1002cda07f00000100000000000000006475636b2d30002f55736572732f646b6f636865722f4c6962726172792f47726f757020436f6e7461696e6572732f473639534358393458552e6475636b2f4c6962726172792f4170706c69636174696f6e20537570706f72742f6475636b2f566f6c756d65732f64726f70626f782e697465726174652e636820e2809320576562444156202848545450532900 (6307675351287857153).

Not sure who is to blame, but when accessing the server with the NFS client in OS X, the client endlessly retries the RENEW operation and always gets an NFS4ERR_EXPIRED error back once the default lease time has expired. It looks like the default lease time in NFSv4StateHandler cannot be configured.

Mount Ok but no files present at client side

Hello, and thanks for your library.
I was able to run the NFS server under Ubuntu; here is my exports file:

/home/ayx/AAA/nfs-Server *(rw,all_squash,sec=sys)

On the client side I used the following mount command:

$sudo mount.nfs -o nfsvers=4.1,nolock,noacl -rw -v 192.168.0.105:/home/ayx/AAA/nfs-Server nfs-Client

The folder gets mounted, but there are no files present.
What am I doing wrong?

java9 module compliance

Java 9 introduced a new module system. One of its restrictions is that a package may exist in only one jar file. This is currently not the case with nfs4j. As a result, making nfs4j module-system compliant requires restructuring the source code.
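As a sketch of where such a restructuring could lead (the module name and dependency below are assumptions, not the project's actual layout), a module descriptor for the core artifact might look like:

```java
// Hypothetical module descriptor for a restructured nfs4j-core.
// Requires that each exported package live in exactly one jar.
module org.dcache.nfs4j.core {
    requires org.dcache.oncrpc4j; // ONC RPC transport (assumed module name)

    exports org.dcache.nfs;
    exports org.dcache.nfs.vfs;
    exports org.dcache.nfs.v4;
}
```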
