
pyfilesystem's Introduction

PyFilesystem

Note: This project has largely been replaced by PyFilesystem2, which offers many improvements over the original.

PyFilesystem is an abstraction layer for filesystems. In the same way that Python's file-like objects provide a common way of accessing files, PyFilesystem provides a common way of accessing entire filesystems. You can write platform-independent code to work with local files that also works with any of the supported filesystems (zip, FTP, S3, etc.).

PyFilesystem works on Linux, Windows, and Mac.

Supported Filesystems

Here are a few of the filesystems that can be accessed with PyFilesystem:

  • DavFS access files & directories on a WebDAV server
  • FTPFS access files & directories on an FTP server
  • MemoryFS access files & directories stored in memory (non-permanent but very fast)
  • MountFS creates a virtual directory structure built from other filesystems
  • MultiFS a virtual filesystem that combines a list of filesystems into one, and checks them in order when opening files
  • OSFS the native filesystem
  • SFTPFS access files & directories stored on a Secure FTP server
  • S3FS access files & directories stored on Amazon S3 storage
  • TahoeLAFS access files & directories stored on a Tahoe distributed filesystem
  • ZipFS access files and directories contained in a zip file

Example

The following snippet prints the total number of bytes contained in all your Python files in C:/projects (including sub-directories)::

from fs.osfs import OSFS
projects_fs = OSFS('C:/projects')
print sum(projects_fs.getsize(path)
          for path in projects_fs.walkfiles(wildcard="*.py"))

That is, assuming you are on Windows and have a directory called 'projects' in your C drive. If you are on Linux / Mac, you might replace the second line with something like::

projects_fs = OSFS('~/projects')

If you later want to display the total size of Python files stored in a zip file, you could make the following change to the first two lines::

from fs.zipfs import ZipFS
projects_fs = ZipFS('source.zip')

In fact, you could use any of the supported filesystems above, and the code would continue to work as before.

An alternative to explicitly importing the filesystem class you want is to use an FS opener, which opens a filesystem from a URL-like syntax::

from fs.opener import fsopendir
projects_fs = fsopendir('C:/projects')

You could change C:/projects to zip://source.zip to open the zip file, or even to ftp://ftp.example.org/code/projects/ to sum up the bytes of Python stored on an FTP server.

Documentation

http://pyfilesystem.readthedocs.org

Screencast

This is from an early version of PyFilesystem, but still relevant.

http://vimeo.com/12680842

Discussion Group

http://groups.google.com/group/pyfilesystem-discussion

Further Information

http://www.willmcgugan.com/tag/fs/

pyfilesystem's People

Contributors

0x326, hmeine, jwilk, liryna, lurch, msabramo, selaux, whiterabbit1983, willmcgugan


pyfilesystem's Issues

re-implement CacheFS

(mostly adding this to remind myself...)

The current implementation of CacheFS is just dumb; it was written by me in 
about two hours to facilitate a quick demo of mounting S3FS via FUSE.  It 
should be thrown away and replaced with an implementation that:

  * uses fs.path.PathMap to keep an in-memory cache of info dicts
  * turns calls to exists(), isdir() and isfile() into getinfo() and then a local check of st_mode
  * turns calls to listdir() into listdirinfo() so it can pre-populate the cache
  * has all other methods make appropriate adjustments to the cache
  * dumps entries from the cache after a configurable timeout
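The timeout behaviour in the last bullet can be sketched with the stdlib alone. `InfoCache` below is a hypothetical stand-in, using a plain dict rather than the proposed fs.path.PathMap:

```python
import time

class InfoCache:
    """Hypothetical sketch of the timeout-based info cache described above,
    using a plain dict rather than fs.path.PathMap."""

    def __init__(self, timeout=30.0, clock=time.monotonic):
        self.timeout = timeout
        self.clock = clock          # injectable for testing
        self._entries = {}          # path -> (timestamp, info dict)

    def set(self, path, info):
        self._entries[path] = (self.clock(), info)

    def get(self, path):
        entry = self._entries.get(path)
        if entry is None:
            return None
        stamp, info = entry
        if self.clock() - stamp > self.timeout:
            del self._entries[path]  # expired: drop the stale entry
            return None
        return info

    def invalidate(self, path):
        # called by mutating methods (remove, setcontents, ...) to keep
        # the cache consistent
        self._entries.pop(path, None)
```

exists()/isdir()/isfile() would then consult get() first and fall back to a real getinfo() call only on a miss.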

Original issue reported on code.google.com by [email protected] on 29 Oct 2010 at 11:49

Generator support in listdir/listdirinfo for large directories

What steps will reproduce the problem?
Listing extra-large directories consumes an extreme amount of memory.

What is the expected output? What do you see instead?
Directory listing is an ideal candidate for generators. If FS used generators 
instead of lists, information about directory entries would be retrieved 
sequentially, on request.

Discussion: should all results switch from listdir(info) to a generator type? 
That would be much cleaner, but it breaks the current FS API. A parameter 
'as_generator=True' in listdir() is backward compatible, but brings new 
complications for FS developers (maintaining both versions of the code and 
switching based on the parameter).
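As a sketch of the backward-compatible option (using modern Python's os.scandir rather than the fs API itself; both function names are illustrative):

```python
import os

def iter_listdir(path):
    """Yield entry names one at a time (a generator), so memory stays flat
    even for directories with millions of entries."""
    with os.scandir(path) as it:
        for entry in it:
            yield entry.name

def listdir(path, as_generator=False):
    """Backward-compatible wrapper, as discussed above: returns a list by
    default, or the underlying generator when as_generator=True."""
    names = iter_listdir(path)
    return names if as_generator else list(names)
```

Callers that can stream entries opt in with as_generator=True; everyone else keeps getting a list.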

What version of the product are you using? On what operating system?
trunk

Please provide any additional information below.
I'm implementing a filesystem with hashed trees of directories for better 
performance on extra-large directories (say, millions of files). Files are 
stored in directory/<hash1>/<hash2>/<hash_n>/filename, but pyfs acts as a 
facade, so I can open a file with fs.open('directory/filename'). Everything works 
perfectly, but listdirinfo crashes on such large directories, because 
listdir('directory') has to return millions of rows.

Original issue reported on code.google.com by [email protected] on 18 Oct 2010 at 12:17

Wrong default value in HideDotFilesFS.listdir()

What steps will reproduce the problem?

import os
import tempfile
from fs.osfs import OSFS
from fs.wrapfs.hidedotfilesfs import HideDotFilesFS

temp_dir = tempfile.mkdtemp("fstest")
open(os.path.join(temp_dir, ".dotfile"), 'w').close()
open(os.path.join(temp_dir, "regularfile"), 'w').close()
fs = HideDotFilesFS(OSFS(temp_dir))
print fs.listdir()

What is the expected output? What do you see instead?

expected output: ['regularfile']
actual output: ['.dotfile', 'regularfile']

What version of the product are you using? On what operating system?

Problem exists in both 0.3.0 and svn trunk version.

Please provide any additional information below.

The documentation in hidedotfilesfs.py says:
    "The listdir() function takes an extra keyword argument 'hidden'
    indicating whether hidden dotfiles shoud be included in the output.
    It is False by default."
but it's actually True by default (which is wrong). The fix is simply to change:
        hidden = kwds.pop("hidden",True)
to:
        hidden = kwds.pop("hidden",False)
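With the corrected default, a toy model of the wrapper behaves as the reporter expects (hide_dotfiles here is a hypothetical stand-in that filters a plain list of names, not the real HideDotFilesFS):

```python
def hide_dotfiles(entries, **kwds):
    """Toy model (hypothetical) of HideDotFilesFS.listdir filtering:
    'hidden' defaults to False, so dotfiles are hidden by default."""
    hidden = kwds.pop("hidden", False)
    if not hidden:
        entries = [e for e in entries if not e.startswith(".")]
    return entries
```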


I've also attached a patch to add test-cases to the end of test_wrapfs.py

Original issue reported on code.google.com by [email protected] on 3 Oct 2010 at 4:14

Attachments:

r542 MultiFS.open unable to create files

hi,

multifs.py line 170:

for fs in self:
    if fs.exists(path):
        fs_file = fs.open(path, mode, **kwargs)
problem:
open() only searches for existing files.

should:
when using 'w'/'w+' mode, a host directory for the file path should be searched 
for, not the file path itself.

Write access should be tested: if found, open the file and break; if the write 
test fails, the host-path search should continue down the multifs list.

- on success, return fs_file as an open handle to the new file
- if at least one host path is found but all write tests fail, the filesystem 
should behave as read-only
- if no host path is found, raise ResourceNotFoundError

Currently, any attempt to touch a file raises ResourceNotFoundError, which 
leads to a "No such file or directory" error under FUSE, which is wrong.

Original issue reported on code.google.com by [email protected] on 9 Dec 2010 at 7:23

SpooledTemporaryFile has no _lock

I've found another problem with SpooledTemporaryFile. Due to an error in my 
code, RemoteFileBuffer.__del__ with an underlying SpooledTemporaryFile was 
accidentally called, and I got this error:

Exception AttributeError: "SpooledTemporaryFile instance has no attribute 
'_lock'" in <bound method RemoteFileBuffer.__del__ of 
fs.remote.RemoteFileBuffer object at 0x01EFE150>> ignored


Original issue reported on code.google.com by [email protected] on 27 Sep 2010 at 3:49

FS storage info API

The current base FS is missing a universal API for retrieving general 
information about the storage/filesystem, like free space, used space, etc.

* What is the expected output? What do you see instead?
We should implement a standard API for that.

* Please provide any additional information below.
Implementing this API is needed to provide correct information to FUSE/Dokan 
and other applications.
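The kind of information such an API would return can be sketched with the stdlib alone. get_storage_info() here is hypothetical and covers only an OSFS-style local backend:

```python
import shutil

def get_storage_info(root):
    """Hypothetical sketch of the proposed storage-info API, for a local
    (OSFS-style) backend only: report total/used/free bytes for the
    device holding 'root'."""
    usage = shutil.disk_usage(root)
    return {"total": usage.total, "used": usage.used, "free": usage.free}
```

Remote backends (S3FS, TahoeFS) would need backend-specific calls, and filesystems with no meaningful notion of capacity could raise UnsupportedError.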

Original issue reported on code.google.com by [email protected] on 9 Oct 2010 at 6:37

Function signature error whenever functools is not available

try:
    from functools import wraps
except ImportError:
    wraps = lambda f:f

Subsequent use of 'wraps' in the ImportError case gives a function
signature error: too many arguments.

A fix for the immediate error is:

    wraps = lambda f: lambda f: f
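A self-contained demonstration of why the one-level lambda fails and a two-level fallback works (build, target, and wrapper are illustrative names, not pyfilesystem code):

```python
# Fallbacks for when functools is unavailable.
# wraps(func) must return a decorator, so the one-level lambda is wrong:
broken_wraps = lambda f: f           # wraps(func) -> func, so func(wrapper) gets called
fixed_wraps = lambda f: lambda g: g  # wraps(func) -> identity decorator (the fix)

def build(wraps_impl):
    """Apply 'wraps_impl' the way library code applies functools.wraps."""
    def target(x):
        return x + 1

    @wraps_impl(target)          # broken_wraps evaluates target(wrapper) here
    def wrapper(x):
        return target(x) * 2
    return wrapper
```

With fixed_wraps the decorator is a no-op and the wrapper survives; with broken_wraps, target(wrapper) is evaluated at decoration time and blows up with a TypeError.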




Original issue reported on code.google.com by [email protected] on 13 Feb 2010 at 11:51

RemoteFileBuffer range requests feature

* What steps will reproduce the problem?
When I need just part of a file stored on a remote system (an FS using 
RemoteFileBuffer), I have to download the whole file from the start to the 
requested position (or the whole file to the end, without the on-demand feature).

* What is the expected output? What do you see instead?
Many file formats (mp3, video files) store additional information near the end 
of the file. I would expect some support for remote filesystems that can 
handle range requests, to allow downloading just the requested part of a file. 
That should greatly improve FS performance.
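A minimal sketch of the requested behaviour, assuming a hypothetical fetch(start, end) callable that performs the actual range request:

```python
class RangeBuffer:
    """Hypothetical sketch of a range-aware remote file: 'fetch(start, end)'
    stands in for a remote range request returning bytes [start, end)."""

    def __init__(self, size, fetch):
        self.size = size
        self.fetch = fetch
        self.pos = 0

    def seek(self, offset, whence=0):
        if whence == 2:            # from the end, e.g. reading an mp3 tag
            self.pos = self.size + offset
        elif whence == 1:          # relative to the current position
            self.pos += offset
        else:                      # absolute
            self.pos = offset

    def read(self, n=-1):
        # only the requested range is transferred, never the whole file
        end = self.size if n < 0 else min(self.pos + n, self.size)
        data = self.fetch(self.pos, end)
        self.pos = end
        return data
```

Seeking to the end and reading a small tag then costs one small range request instead of a full download.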


Original issue reported on code.google.com by [email protected] on 8 Oct 2010 at 5:28

SpooledTemporaryFile does not support parameter in truncate()

File "c:\python26\lib\site-packages\fs\remote.py", line 136, in truncate
   self.file.truncate(size)
TypeError: truncate() takes exactly 1 argument (2 given)

* What steps will reproduce the problem?
Write just a few bytes to RemoteFileBuffer (fewer than max_size_in_memory) and 
close.

* What is the expected output? What do you see instead?
I'd like to save file ;).

* What version of the product are you using? On what operating system?
trunk on Win7, python 2.6

* Additional info:
As some programs expect to be able to truncate a file to a specific size (in my 
case notepad.exe as a testing tool), there should be a workaround in truncate() 
to support 'size' on SpooledTemporaryFile as well.

Possible fix: 
If the file is a SpooledTemporaryFile and size is defined, load the file 
content into memory, truncate the file to zero, and then write the content back 
again. As SpooledTemporaryFile is used only for very short files, and this fix 
is needed only when the application passes the 'size' parameter, I suppose it 
is a fine workaround...

Working code:

def truncate(self, size=None):
    self._lock.acquire()
    try:
        if isinstance(self.file, SpooledTemporaryFile):
            # Workaround for the missing 'size' argument in SpooledTemporaryFile
            f = self.file
            if size is not None:  # note: 'if size:' would mishandle size=0
                f.seek(0)
                data = f.read(size)
                f.seek(0)       # rewind so truncate() cuts the file at 0
                f.truncate()
                f.write(data)   # write the retained prefix back
            else:
                f.truncate()
        else:
            self.file.truncate(size)
        self.flush()
    finally:
        self._lock.release()

Original issue reported on code.google.com by [email protected] on 27 Sep 2010 at 1:22

tempfs ResourceLockedError

* What steps will reproduce the problem?
Running unit tests on my Windows 7 box.

* What is the expected output? What do you see instead?
These tests pass on my Linux box without any problem. Because there is no RFB 
or Tahoe in the traceback, I don't suspect my code.

======================================================================
ERROR: test_writefile (__main__.TestRemoteFileBuffer)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "test_remote.py", line 55, in tearDown
    self.fs.close()
  File "c:\Users\marekp\data\eclipse\pyfilesystem\fs\tempfs.py", line 58, in close
    self._close()
  File "c:\Users\marekp\data\eclipse\pyfilesystem\fs\errors.py", line 204, in wrapper
    return func(self,*args,**kwds)
  File "c:\Users\marekp\data\eclipse\pyfilesystem\fs\tempfs.py", line 78, in _close
    os_remove(os.path.join(dir,filename))
  File "c:\Users\marekp\data\eclipse\pyfilesystem\fs\errors.py", line 204, in wrapper
    return func(self,*args,**kwds)
ResourceLockedError: Resource is locked: \\?\c:\users\marekp\appdata\local\temp\tmp2e6cvvTempFS\test1.txt

* What version of the product are you using? On what operating system?
Trunk on Windows7, python 2.6.5


Original issue reported on code.google.com by [email protected] on 9 Oct 2010 at 5:48

zipfs.getcontents fails on absolute path



if zf is a ZipFS instance,

assert zf.isfile('/an/absolut/path.txt')

is valid

but

zf.getcontents('/an/absolut/path.txt')

fails with ResourceNotFound

The ResourceNotFoundError is itself a re-raising of the KeyError coming from 
the underlying zipfile.


Original issue reported on code.google.com by [email protected] on 10 Jul 2010 at 9:19

DebugFS implementation

Hi, I just finished a simple wrapper for debugging my filesystem. I think it 
can be helpful for others too, so you can find it in the attachment. If you 
like it, feel free to include it in the FS source tree ;).

--------------

    DebugFS is a wrapper around filesystems to help developers
    debug their work. I wrote this class mainly for debugging
    TahoeFS and for fine-tuning TahoeFS over Dokan with higher-level
    applications like Total Commander, Winamp etc. Did you know
    that Total Commander needs to read a file before it deletes it? :-)

    I hope DebugFS can be helpful for other filesystem developers too,
    especially for those who are trying to implement their first one (like me).

    DebugFS prints (to stdout, by default) all calls to the
    filesystem interface, along with their parameters and results.

    Basic usage:
        fs = DebugFS(OSFS('~'))
        print fs.listdir('.')
        print fs.unsupportedfunction()
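The same idea can be sketched generically in a few lines. DebugWrapper below is a hypothetical stdlib-only stand-in (not the attached DebugFS) that logs every method call on any wrapped object:

```python
class DebugWrapper:
    """Hypothetical sketch of a DebugFS-style wrapper: intercept attribute
    access and log each method call with its arguments and result."""

    def __init__(self, wrapped, out=print):
        self._wrapped = wrapped
        self._out = out            # injectable sink, defaults to stdout

    def __getattr__(self, name):
        attr = getattr(self._wrapped, name)
        if not callable(attr):
            return attr            # plain attributes pass straight through
        def logged(*args, **kwargs):
            self._out("call %s args=%r kwargs=%r" % (name, args, kwargs))
            result = attr(*args, **kwargs)
            self._out("  -> %r" % (result,))
            return result
        return logged
```

Wrapping an FS instance the same way would trace every listdir/open/remove call without touching the filesystem code itself.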

Original issue reported on code.google.com by [email protected] on 29 Sep 2010 at 1:17

Attachments:

Typo in function _info2finddataw() in /fs/expose/dokan/__init__.py

This typo results in the absence of file time attributes on Windows.

Currently:
data.ftCreateTime = _datetime2filetime(info.get("created_time",None))
data.ftAccessTime = _datetime2filetime(info.get("accessed_time",None))
data.ftWriteTime = _datetime2filetime(info.get("modified_time",None))

Should be:
data.ftCreationTime = _datetime2filetime(info.get("created_time",None))
data.ftLastAccessTime = _datetime2filetime(info.get("accessed_time",None))
data.ftLastWriteTime = _datetime2filetime(info.get("modified_time",None))


Original issue reported on code.google.com by [email protected] on 17 Oct 2010 at 7:12

utils.copydir applied to MemoryFS destination opens MemoryFile with None


code like::

>>> copydir(osfs_instance, memoryfs_instance)

fails with:

TypeError: write() argument 1 must be string or read-only character buffer, not 
None

Traceback attached.

This happens with current trunk: r395, but didn't happen with r345.

Can be worked around with a modified file_factory:

class FileFactory(MemoryFile):
    # coerce a None value to an empty string before MemoryFile sees it
    def __init__(self, path, memory_fs, value, mode):
        value = value or ''
        super(FileFactory, self).__init__(path, memory_fs, value, mode)





Original issue reported on code.google.com by [email protected] on 4 Aug 2010 at 3:59

Attachments:

TahoeLAFS filesystem implementation

Hi, as per my roadmap, I have decided to publish the TahoeFS filesystem 
implementation now. I know it is not yet ideal (there are many overcomplicated 
things, like removedir() and listdir()), as it is my oldest code.

I hope there is nothing critically wrong, as the unit tests pass and the 
filesystem was tested very intensively with Dokan (and also FUSE, a little) 
and with many desktop applications. All remaining issues relate to other 
pieces of pyfilesystem, as I reported before, and I will work on them after we 
reach agreement on TahoeFS itself.

Please close your eyes to the few hacks inside (hacked support for colons on 
Windows, etc.); I will be happy to implement that elsewhere and remove it from 
the filesystem code.

A short intro to using the library itself. TahoeLAFS is a distributed 
filesystem using a P2P network. You always need a URL to some node with an 
accessible web API, and a so-called 'dircap', which is an auth hash allowing 
you to read/write your root directory.

Regarding the node: you can use the publicly available Pubgrid 
(http://pubgrid.tahoe-lafs.org/), but I cannot recommend it. TahoeFS 
generally works on it, but sometimes (ok, say nearly always) you will 
receive HTTP gateway errors. It is much better to invest the time and install 
your own node. For tests you don't even need any other node in the network, 
but you have to configure your node as standalone (mainly the happiness and 
share counts).

Regarding the dircap: when you work with Tahoe for the first time, you create 
your own dircap, keep it safe, and then use it as the key to your storage. All 
other files and directories are relative to this dircap. Creating dircaps is 
supported by the classmethod TahoeFS.createdircap().

I'm looking forward to your reviews!

Original issue reported on code.google.com by [email protected] on 1 Oct 2010 at 1:17

Attachments:

coding error in DAVFS.setcontents()

What steps will reproduce the problem?

In my case, I created a new file on the DAV server (Apache 2.2.3 on RHEL 5) and 
wrote to it, which resulted in a 204 status. This particular status was not 
caught and fell into a catch-all raise, which passed 'response' (an undefined 
variable in the method) instead of 'resp' to the exception class.

What is the expected output? What do you see instead?

First, I fixed the raise to use the correct variable. Also, I believe a 204 
status should be fine, since it indicates that the request succeeded and that 
the attributes of the content (but not the content itself) may have changed on 
the server as a result of the request. I added 204 to the list of acceptable 
statuses in setcontents().

What version of the product are you using? On what operating system?

trunk

Please provide any additional information below.

Here's a diff of the changes I made:


Index: pyfilesystem/fs/contrib/davfs/__init__.py
===================================================================
--- pyfilesystem/fs/contrib/davfs/__init__.py   (revision 446)
+++ pyfilesystem/fs/contrib/davfs/__init__.py   (working copy)
@@ -266,8 +266,8 @@
             raise ResourceInvalidError(path)
         if resp.status == 409:
             raise ParentDirectoryMissingError(path)
-        if resp.status not in (200,201):
-            raise_generic_error(response,"setcontents",path)
+        if resp.status not in (200,201,204):
+            raise_generic_error(resp,"setcontents",path)

     def open(self,path,mode="r"):
         # Truncate the file if requested
@@ -710,5 +710,5 @@
         raise ResourceLockedError(path,opname=opname,details=response)
     if response.status == 501:
         raise UnsupportedError(opname,details=response)
-    raise OperationFailedError(opname,details=response)
+    raise OperationFailedError(opname,details='HTTP status: %s'%response.status)

Original issue reported on code.google.com by [email protected] on 27 Sep 2010 at 12:48

Add support for URL

What steps will reproduce the problem?
There is no standard way to retrieve a universal identifier for a path, for 
interacting with the outside world (non-pyfilesystem).

What is the expected output? What do you see instead?
It would be great to extend the standard FS API to retrieve some kind of URL 
(http://en.wikipedia.org/wiki/Uniform_Resource_Identifier). Almost every 
filesystem can construct some string that is usable as an identifier.

Please provide any additional information below.
def geturl(path) could generate useful links for retrieving a file another way. 
For example:
For example:

file://<system path> for OSFS
ftp:// for FTPFS
http:// or https:// for S3FS
http:// for TahoeFS

and so on. Currently I implement geturl in TahoeFS, but I feel this should be 
standardized in some way in pyfilesystem itself. Filesystems that have no 
reasonable URL (memoryfs) should raise UnsupportedError instead.
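A minimal sketch of what such a helper could look like (make_geturl and its scheme/root parameters are hypothetical, not part of the FS API):

```python
import posixpath
from urllib.parse import quote

def make_geturl(scheme, root):
    """Hypothetical geturl() factory: build URLs for a filesystem whose
    contents live under 'root', prefixed with the backend's scheme."""
    def geturl(path):
        full = posixpath.join(root, path.lstrip("/"))
        return scheme + quote(full)   # quote() leaves '/' intact by default
    return geturl

osfs_url = make_geturl("file://", "/home/user/projects")
osfs_url("readme.txt")  # -> 'file:///home/user/projects/readme.txt'
```

A real implementation would also have to escape backend-specific characters and decide what to do for paths that have no external representation.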

Please comment or close as invalid if you think it is nonsense.



Original issue reported on code.google.com by [email protected] on 11 Oct 2010 at 8:03

Ensure path to zip file is relative before opening it

What steps will reproduce the problem?
1. fs = zipfs.ZipFS(myzipfile)
2. fs.open('/subdir/somefile.txt')
3.

What is the expected output? What do you see instead?

An OSFS object will open an absolute file path, but a ZipFS object gives a
ResourceNotFoundError. It is desirable that behaviour be consistent between
fs types.

To fix, change line 127 of file zipfs.py to:

    path = makerelative(normpath(path))



Original issue reported on code.google.com by [email protected] on 18 Jan 2010 at 5:35

'basename' is not defined in fs/wrapfs/hidedotfilesfs.py

What steps will reproduce the problem?

In [27]: HideDotFilesFS(OSFS('~/')).listdir(hidden=False)
---------------------------------------------------------------------------
NameError                                 Traceback (most recent call last)

/home/fm/<ipython console> in <module>()

/usr/lib/python2.6/site-packages/fs/wrapfs/hidedotfilesfs.py in listdir(self, 
path, **kwds)
     32         entries = self.wrapped_fs.listdir(path,**kwds)
     33         if not hidden:
---> 34             entries = [e for e in entries if not self.is_hidden(e)]
     35         return entries
     36 

/usr/lib/python2.6/site-packages/fs/wrapfs/hidedotfilesfs.py in is_hidden(self, 
path)
     20     def is_hidden(self, path):
     21         """Check whether the given path should be hidden."""
---> 22         return path and basename(path)[0] == "."


Quick fix:

--- /usr/lib/python2.6/site-packages/fs/wrapfs/hidedotfilesfs.py~       
2010-06-17 22:22:46.000000000 +0300
+++ /usr/lib/python2.6/site-packages/fs/wrapfs/hidedotfilesfs.py        
2010-08-26 20:30:05.006755949 +0300
@@ -7,6 +7,7 @@
 """

 from fs.wrapfs import WrapFS
+import os.path


 class HideDotFilesFS(WrapFS):
@@ -19,7 +20,7 @@

     def is_hidden(self, path):
         """Check whether the given path should be hidden."""
-        return path and basename(path)[0] == "."
+        return path and os.path.basename(path)[0] == "."

     def _encode(self, path):
         return path

Original issue reported on code.google.com by [email protected] on 26 Aug 2010 at 5:32

Unable to copy dir from osfs to memory fs

I have a directory with some files called Maps which I would like to copy
into a MemoryFS. I am using utils.copydir. If I first create a directory called
Maps in the memfs, I get a destination-exists error. If I don't, I get
a resource-not-found error.

Here is the code:
filesys = fs.memoryfs.MemoryFS()
programFS = fs.osfs.OSFS(".")
#filesys.makedir("Maps")
fs.utils.copydir((programFS, "Maps"), (filesys, "Maps"))

With the line uncommented:
...
  File "/usr/local/lib/python2.6/dist-packages/fs/utils.py", line 131, in copydir
    mount_fs.copydir('dir1', 'dir2', ignore_errors=ignore_errors, chunk_size=chunk_size)
  File "/usr/local/lib/python2.6/dist-packages/fs/base.py", line 664, in copydir
    self.makedir(dst, allow_recreate=overwrite)
  File "/usr/local/lib/python2.6/dist-packages/fs/base.py", line 112, in acquire_lock
    return func(self, *args, **kwargs)
  File "/usr/local/lib/python2.6/dist-packages/fs/mountfs.py", line 129, in makedir
    return fs.makedir(delegate_path, recursive=recursive, allow_recreate=allow_recreate)
  File "/usr/local/lib/python2.6/dist-packages/fs/base.py", line 764, in makedir
    return self.parent.makedir(self._delegate(path), recursive=recursive, allow_recreate=allow_recreate)
  File "/usr/local/lib/python2.6/dist-packages/fs/base.py", line 112, in acquire_lock
    return func(self, *args, **kwargs)
  File "/usr/local/lib/python2.6/dist-packages/fs/memoryfs.py", line 274, in makedir
    raise DestinationExistsError(dirname)
fs.errors.DestinationExistsError: Destination exists: Maps

And with it commented:
  File "/usr/local/lib/python2.6/dist-packages/fs/utils.py", line 126, in copydir
    fs2 = fs2.opendir(dir2)
  File "/usr/local/lib/python2.6/dist-packages/fs/base.py", line 422, in opendir
    raise ResourceNotFoundError(path)
fs.errors.ResourceNotFoundError: Resource not found: Maps

Original issue reported on code.google.com by [email protected] on 22 Oct 2009 at 9:19

Directories are not recognized in MemoryFS when mounted to Dokan

When MemoryFS is mounted via Dokan, directories appear not to be browsable in 
Windows.
This is because MemoryFS does not set the directory bit in the 'st_mode' of 
the file info dict.

Currently MemoryFS does this:
        if dir_entry.isdir():
            info['st_mode'] = 0755

But in dokan adapter:
        st_mode = info.get("st_mode",None)
        if st_mode:
            if statinfo.S_ISDIR(st_mode): # S_ISDIR will return False
                attrs |= FILE_ATTRIBUTE_DIRECTORY

so attrs will not have the directory bit, which confuses Dokan and the Windows 
file explorer

This can be fixed by adding directory bit to st_mode in MemoryFS:
        if dir_entry.isdir():
            info['st_mode'] = 040755
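The claim about S_ISDIR is easy to verify with the stat module (0o755 is the Python 3 spelling of the octal literals above):

```python
import stat

# Permission bits alone (0o755) carry no file-type information, so
# S_ISDIR() rejects them; OR-ing in S_IFDIR (0o40000) marks a directory.
assert not stat.S_ISDIR(0o755)      # what MemoryFS currently stores
assert stat.S_ISDIR(0o40755)        # directory bit + rwxr-xr-x
assert 0o40755 == stat.S_IFDIR | 0o755
```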

Original issue reported on code.google.com by [email protected] on 17 Oct 2010 at 7:28

Large file upload failed in fs.remote

I got this strange traceback (it is probably a mix of many tracebacks printed 
in parallel to my console):

File "c:\python26\lib\site-packages\fs\expose\dokan\__init__.py", line 152, in 
wrapper
I/O operation on closed fileValueError
return a(*args,**kwds)
res = func(*args,**kwds)
File "c:\python26\lib\site-packages\fs\errors.py", line 166, in wrapper
File "c:\python26\lib\tempfile.py", line 578, in seek

File "c:\python26\lib\site-packages\fs\expose\dokan\__init__.py", line 396, in 
WriteFile
I/O operation on closed file
self._file.seek(*args)
Traceback (most recent call last):
file.seek(offset)

return func(*args,**kwds)
  File "c:\python26\lib\site-packages\fs\expose\dokan\__init__.py", line 396, in WriteFile
  File "_ctypes/callbacks.c", line 295, in 'calling callback function'
  File "c:\python26\lib\site-packages\fs\remote.py", line 120, in call_with_lock

ValueError      File 
"c:\python26\lib\site-packages\fs\expose\dokan\__init__.py", line 152, in 
wrapper: file.seek(offset)
return a(*args,**kwds)


* What steps will reproduce the problem?
I was uploading large file (1.6 GB) using Dokan and RemoteFileBuffer to my 
virtual remote filesystem on Windows7, python2.6. Errors started immediately 
after operation starts, so it cannot be error in underlying network filesystem 
(setcontents was not called yet). I'm not smart from that traceback much, but 
some observations:

* It is not a bug just in RemoteFileBuffer, because uploading the same file 
using the following code works well:

fs = TahoeFS(...)
f = fs.open('garbage.avi', 'wb')
f2 = open('c:/garbage.avi', 'rb')
data = f2.read(16384*1024)
while data:
    f.write(data)
    data = f2.read(16384*1024)
    print f.tell()

print 'closing'
f.close()
f2.close()

This passes ('f' is the RemoteFileBuffer). Okay, it breaks later on close() in 
setcontents(), but that is another issue, and in my code :-).

* It is not just an error in Dokan, because copying the same file via Dokan to 
OSFS works without problems.

* A smaller file (~500MB) works well, no problem there.

So it looks like the problem is Dokan+RemoteFileBuffer+Large_File. My first 
idea was that Dokan uses mmap internally (and there is a limit on file size), 
but on my 32-bit computer that limit is around 3-4GB, not 1.6GB.

I hope my observation helps.

Original issue reported on code.google.com by [email protected] on 28 Sep 2010 at 10:44

SpooledTemporaryFile is a function, not a class

What steps will reproduce the problem?
Using RemoteFileBuffer on Python<2.6 or Jython

What is the expected output? What do you see instead?
Working RFB. I got exception that metaclass is invalid for SpooledTemporaryFile.

What version of the product are you using? On what operating system?
Linux, Jython 2.5 beta

Working patch (remote.py, line ~37):

try:
    from tempfile import SpooledTemporaryFile
except ImportError:
    from tempfile import NamedTemporaryFile

    def SpooledTemporaryFile(max_size=0, *args, **kwds):
        return NamedTemporaryFile(*args,**kwds)


Original issue reported on code.google.com by [email protected] on 13 Oct 2010 at 8:31

DAVFS copy/move assume a DAVFS source and destination

What steps will reproduce the problem?
1. Attempt to call the copy method with a DAVFS handle as source and an OSFS 
handle as destination

What is the expected output? What do you see instead?

The operation should copy data from the source handle file to the destination 
handle file, even if it is on a different server or of a different FS 
implementation.

What version of the product are you using? On what operating system?

trunk.

Please provide any additional information below.

It appears that the DAVFS class short-cuts the copy, assuming that the caller 
intended to copy/move a file on the same DAV server. The FS class uses 
getsyspath to try to determine whether the copy/move stays within the same 
"space", short-cutting if possible and falling back on a read from source and 
a write to destination if not. The DAVFS class should behave similarly.

Original issue reported on code.google.com by [email protected] on 1 Oct 2010 at 12:52

unused code in memoryfs.py


The result of pathsplit is not used in the following code:

    def _on_close_memory_file(self, open_file, path, value):
        filepath, filename = pathsplit(path)
        dir_entry = self._get_dir_entry(path)
        if dir_entry is not None and value is not None:
            dir_entry.data = value
            dir_entry.open_files.remove(open_file)
            self._unlock_dir_entry(path)

    def _on_flush_memory_file(self, path, value): 
        filepath, filename = pathsplit(path)
        dir_entry = self._get_dir_entry(path)
        dir_entry.data = value



Original issue reported on code.google.com by [email protected] on 19 Feb 2010 at 11:56

r317: multifs doesn't import ResourceNotFoundError

@multifs.py:27
from fs.base import FS, FSError, synchronize

is too restrictive and makes "ResourceNotFoundError" unreachable.

It should be:
from fs.base import FS, FSError, synchronize , ResourceNotFoundError
or add:
from fs.errors import ResourceNotFoundError


ps: thanks for the great job.
Additionally, fs.expose.fuse should give a simple way to provide a
fuse_ctypes-compatible library at load time (e.g. dokan+pywinfuse on win32).

Original issue reported on code.google.com by [email protected] on 1 Feb 2010 at 1:54

Incorrect modified_time in SFTPFS.getinfo()

I've just spotted this cut'n'paste typo in sftpfs.py in the svn trunk:
        at = info.get('st_atime', None)
        if at is not None:
            info['accessed_time'] = datetime.datetime.fromtimestamp(at)
        mt = info.get('st_mtime', None)
        if mt is not None:
            info['modified_time'] = datetime.datetime.fromtimestamp(at)

It seems fairly obvious that the last line *should* say .fromtimestamp(mt).

Original issue reported on code.google.com by [email protected] on 4 Oct 2010 at 11:46

CacheFS does not handle path in right way

There can be many metadata caches for one file, because CacheFS itself does not 
normalize paths in any way. This may lead (and does lead, in my case!) to crazy 
behaviour.

* What steps will reproduce the problem?

fs.exists('filename') => True
fs.remove('/filename')
fs.exists('filename') => True

* What is the expected output? What do you see instead?
first call True, second call False

By the way, is there any "path normalizer" for use in filesystem 
implementations? I'm too lazy to write something that covers the complete 
specification (for example /some/path/..//next path/./file => /some/next 
path/file, or the just-reported 'file' => '/file')
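A minimal normalizer covering the cases above can be built on the standard library's posixpath; this is a sketch, not necessarily what PyFilesystem itself should use:

```python
import posixpath

def normalize(path):
    # Anchor at the root and collapse ".", "..", and duplicate slashes,
    # so every path has exactly one canonical form usable as a cache key.
    return posixpath.normpath("/" + path.lstrip("/"))
```

With this, `'filename'` and `'/filename'` map to the same key, so CacheFS would no longer keep two divergent metadata entries for one file.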

Original issue reported on code.google.com by [email protected] on 30 Sep 2010 at 10:05

Not able to create file in memoryfs

Hello everyone, I'm trying to use your project but I have a problem: I cannot 
create a file when using MemoryFS.

What steps will reproduce the problem?
1. from fs.memoryfs import MemoryFS
2. fs = MemoryFS('/')
3. fs.open('file', 'w')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "~/venv/fs/fs/fs/base.py", line 536, in createfile
    f = self.open(path, 'wb')
  File "~/venv/fs/fs/fs/base.py", line 122, in acquire_lock
    return func(self, *args, **kwargs)
  File "fs/memoryfs.py", line 365, in open
    raise ResourceLockedError(path)
fs.errors.ResourceLockedError: Resource is locked: coucou

3bis. x.createfile('file2')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "~/venv/fs/fs/fs/base.py", line 536, in createfile
    f = self.open(path, 'wb')
  File "~/venv/fs/fs/fs/base.py", line 122, in acquire_lock
    return func(self, *args, **kwargs)
  File "fs/memoryfs.py", line 371, in open
    mem_file = self.file_factory(path, self, None, mode)
TypeError: 'str' object is not callable

What is the expected output? What do you see instead?

open() should return a file-like object instead, I think.

What version of the product are you using? On what operating system?

r513 on Mac OS X.6

Original issue reported on code.google.com by [email protected] on 31 Oct 2010 at 5:55

DAVFS.open() coding error

What steps will reproduce the problem?
1. use DAVFS.open() to open a file that can not be read due to permissions on 
the file.

What is the expected output? What do you see instead?

A 'permission denied' exception

What version of the product are you using? On what operating system?

trunk

Please provide any additional information below.

Looks like a cut-and-paste error, which calls the raise_generic_error() 
function using 'resp' instead of 'contents'.  This patch fixes it for me, and 
also checks for the Unauthorized/Forbidden access cases:


Index: __init__.py
===================================================================
--- __init__.py (revision 45213)
+++ __init__.py (working copy)
@@ -284,9 +284,12 @@
                     raise ResourceNotFoundError(path)
                 contents = ""
                 self.setcontents(path,contents)
+            if contents.status in (401,403):
+                contents.close()
+                raise PermissionDeniedError("open")
             elif contents.status != 200:
                 contents.close()
-                raise_generic_error(resp,"open",path)
+                raise_generic_error(contents,"open",path)
         if mode == "r-":
             contents.size = contents.getheader("Content-Length",None)
             if contents.size is not None:

Original issue reported on code.google.com by [email protected] on 4 Oct 2010 at 6:53

RemoteFileBuffer fillbuffer changes broke DAVFS

What steps will reproduce the problem?

Open a remote file using DAVFS with mode='r'.  The subsequent read() will 
yield an empty string.

What is the expected output? What do you see instead?

data from the read()

What version of the product are you using? On what operating system?

trunk on Linux

Please provide any additional information below.

DAVFS.open() appears to have made an assumption based on the old behavior of 
RemoteFileBuffer, which read the entire remote contents on open.  There is a 
try/finally at the bottom of DAVFS.open() that instantiates the 
RemoteFileBuffer and then calls the close method on the HTTPRequest object 
('contents') that it passed in as the file endpoint.  I'm not exactly sure why 
DAVFS.open() closes the request immediately instead of just letting the caller 
close the connection when done reading.  In 'r-' mode, this close does not 
happen.

Original issue reported on code.google.com by [email protected] on 12 Oct 2010 at 6:25

S3FS without setting keys produces low-level error

What steps will reproduce the problem?
Calling methods on an S3FS instance without correct parameters produces this 
low-level error. This issue has minor severity, as correct usage of S3FS 
probably works.

Traceback (most recent call last):
  File "C:\Users\marekp\data\eclipse\pyfilesystem\s3.py", line 8, in <module>
    print fs.listdir('/')
  File "C:\Users\marekp\data\eclipse\pyfilesystem\fs\s3fs.py", line 262, in listdir
    keys = self._list_keys(path)
  File "C:\Users\marekp\data\eclipse\pyfilesystem\fs\s3fs.py", line 278, in _list_keys
    for k in self._s3bukt.list(prefix=s3path,delimiter=self._separator):
  File "C:\Users\marekp\data\eclipse\pyfilesystem\fs\s3fs.py", line 112, in _s3bukt
    b = self._s3conn.get_bucket(self._bucket_name, validate=True)
  File "C:\Users\marekp\data\eclipse\pyfilesystem\fs\s3fs.py", line 102, in _s3conn
    c = boto.s3.connection.S3Connection(*self._access_keys)
  File "C:\Python26\lib\site-packages\boto-2.0b2-py2.6.egg\boto\s3\connection.py", line 136, in __init__
    path=path, provider=provider)
  File "C:\Python26\lib\site-packages\boto-2.0b2-py2.6.egg\boto\connection.py", line 186, in __init__
    self.hmac = hmac.new(self.aws_secret_access_key, digestmod=sha)
  File "C:\Python26\lib\hmac.py", line 133, in new
    return HMAC(key, msg, digestmod)
  File "C:\Python26\lib\hmac.py", line 68, in __init__
    if len(key) > blocksize:
TypeError: object of type 'NoneType' has no len()

What is the expected output? What do you see instead?
The S3FS constructor accepts None as a valid value for these keys, because they 
can be read from environment variables later. But I expect there should be some 
high-level check that the parameters are present; if not, some information should 
be given in the exception. Without that, I had to go to the sources and find the 
reason for the failure myself.
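The high-level check the reporter asks for could look something like this; `require_keys` is a hypothetical helper, not part of S3FS's actual API:

```python
def require_keys(access_key, secret_key):
    # Fail fast with a clear message instead of letting boto's hmac call
    # die deep inside with "object of type 'NoneType' has no len()".
    if access_key is None or secret_key is None:
        raise ValueError(
            "S3FS: missing AWS credentials; pass them to the constructor "
            "or set them in the environment"
        )
```

Called at the top of any method that needs a connection, this turns the opaque TypeError above into an actionable message.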

What version of the product are you using? On what operating system?
Trunk, Windows7, python2.6.5

Please provide any additional information below.


Original issue reported on code.google.com by [email protected] on 12 Oct 2010 at 7:32

clarify license for contrib.bigfs

This adds quick support for BIG files from Command & Conquer Generals and later 
games, up to and including Command & Conquer 4, including decompression support 
for the compressed BIG variant. Support is read-only.
I needed a utility class (SubrangeFile) to avoid in-memory copies of (in-archive) 
files of a few hundred MB when opening a file.

Original issue reported on code.google.com by [email protected] on 21 Jun 2010 at 11:21

Attachments:

zipfs should not re-assign self.zip_path

What steps will reproduce the problem?
1. create a ZipFS object called zip
2. "print zip"
3. do a zip.copy(src, dst) operation
4. "print zip"

What is the expected output? What do you see instead?
I expected to see <ZipFS: test.zip>. I got <ZipFS: dst> instead.

What version of the product are you using? On what operating system?
0.1

Please provide any additional information below.
I think in zipfs's open method you should not be assigning "path" to
self.zip_path.

david@centurion:~/downloads/fs-0.1.0/fs$ diff zipfs.py.orig zipfs.py
--- zipfs.py.orig   2008-09-28 23:07:57.000000000 -0700
+++ zipfs.py    2008-09-28 23:08:08.000000000 -0700
@@ -125,7 +125,6 @@
         self._lock.acquire()
         try:
             path = normpath(path)
-            self.zip_path = path

             if 'r' in mode:
                 if self.zip_mode not in 'ra':

Original issue reported on code.google.com by davidgrant on 29 Sep 2008 at 6:09

AESFS implementation

Hi, I needed transparent encryption over pyfilesystem, so I implemented a very 
basic WrapFS with AES support.

It is mainly designed for remote filesystems and primarily works with 
RemoteFileBuffer. It is also possible to use it with a general file, but there 
are many restrictions (no seeking, no reading and writing to the same file). 
It works pretty well with RFB.

What is the expected output? What do you see instead?
The file saved in storage is not plaintext but encrypted with AES-256. The user 
of pyfs does not see any difference; the file is transparently 
encrypted/decrypted on demand.

There is an ugly hack in AESFS._file_wrap(), but it currently works fine. I'm 
open to discussion about that solution, but I feel I need more tools like the 
FS class for working with the file itself (i.e. some standard API for wrapping 
file objects).

Original issue reported on code.google.com by [email protected] on 13 Oct 2010 at 11:11

Attachments:

if 'size' is None

In file base.py line 580:

if 'size' is None:

should be

if size is None:





Original issue reported on code.google.com by [email protected] on 20 Feb 2010 at 10:40

RemoteFileBuffer on-demand feature

* What steps will reproduce the problem?
Opening large files on remote filesystems hangs the process until the remote 
file is completely downloaded to the local machine.

* What is the expected output? What do you see instead?
open() should not block; it should serve content as soon as possible. Also, 
when I'm only interested in part of a file, downloading the whole file is 
unnecessary.

* What version of the product are you using? On what operating system?
Trunk, Linux, Windows


Original issue reported on code.google.com by [email protected] on 8 Oct 2010 at 2:09

tests.test_fs fails unexpectedly on Mac

What steps will reproduce the problem?
1. Run test_fs as unittest
2. osfs.remove() throws an unexpected error when testing remove("dir1")

What version of the product are you using? On what operating system?
* This is on current svn tip.

Please provide any additional information below.

The following svn diff shows a change that improves the behavior, using
an "ask permission" pattern rather than an "ask forgiveness" pattern.

Index: __init__.py
===================================================================
--- __init__.py (revision 348)
+++ __init__.py (working copy)
@@ -147,15 +147,10 @@
     @convert_os_errors
     def remove(self, path):
         sys_path = self.getsyspath(path)
-        try:
-            os.remove(sys_path)
-        except OSError, e:
-            if e.errno == errno.EACCES and sys.platform == "win32":
-                # sometimes windows says this for attempts to remove a dir
-                if os.path.isdir(sys_path):
-                    raise ResourceInvalidError(path)
-            raise
-
+        if os.path.isdir(sys_path):
+            raise ResourceInvalidError(path)
+        os.remove(sys_path)
+  

Original issue reported on code.google.com by [email protected] on 13 May 2010 at 5:51

factor out win32 hacks from TahoeFS


TahoeFS contains several hacks to make it work more nicely when mounted as a 
Windows filesystem:

  * replace ":" with __colon__ in all paths
  * avoid listing of autorun files

These should be factored out into a WrapFS subclass or two, probably under 
fs.expose.dokan, since that's where they'll most likely be used.
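The colon-escaping half of those hacks can be sketched as a pair of path-translation functions (hypothetical names; in a real WrapFS subclass they would plug into its path-encoding/decoding hooks):

```python
def encode_path(path):
    # Windows forbids ":" in file names, so escape it on the way down
    # to the exposed (Dokan-mounted) layer.
    return path.replace(":", "__colon__")

def decode_path(path):
    # Restore the original Tahoe path on the way back up.
    return path.replace("__colon__", ":")
```

Keeping this as a wrapper means TahoeFS itself stays platform-neutral and the hack only applies where it is needed.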

Original issue reported on code.google.com by [email protected] on 4 Nov 2010 at 11:56

suggestion - memoryfs.DirEntry.make_dir_entry()

Suggestion:

Move the function

    def _make_dir_entry(self, ...)

currently defined in MemoryFS class,

to DirEntry class.

I want to record the parent identity when a new DirEntry is created. It seems
easier to do this in a `DirEntry.make_dir_entry` method than in an external
class.
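The suggestion can be sketched as follows; this `DirEntry` is a hypothetical illustration, not the real memoryfs.DirEntry:

```python
class DirEntry:
    """Sketch of a DirEntry whose factory method records the parent."""
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent   # parent identity captured at creation time
        self.contents = {}
    def make_dir_entry(self, name):
        # Because children are created by their parent, every entry
        # knows where it hangs in the tree without external bookkeeping.
        entry = DirEntry(name, parent=self)
        self.contents[name] = entry
        return entry
```

With creation moved onto DirEntry, MemoryFS would call `parent.make_dir_entry(name)` instead of its own `_make_dir_entry` helper.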


Original issue reported on code.google.com by [email protected] on 24 Feb 2010 at 12:52

zipfs import datetime

Hi, the import of the datetime module is missing in zipfs.py.


* What steps will reproduce the problem?
from fs.zipfs import ZipFS

* What version of the product are you using? On what operating system?
Current trunk.


Original issue reported on code.google.com by [email protected] on 25 Sep 2010 at 6:16

undocumented variable 'info' in listdir

What steps will reproduce the problem?
When using my own FS with Dokan/Fuse, listdir() sometimes receives a variable 
'info', which is undocumented. When I strictly follow the docs, listdir crashes.

What is the expected output? What do you see instead?
Add 'info' to the documentation, or remove info=True from the sources of the 
exposed filesystems.

What version of the product are you using? On what operating system?
trunk on Win7, python 2.6


Original issue reported on code.google.com by [email protected] on 27 Sep 2010 at 3:33

module object has no attribute 'ENONET'

Traceback (most recent call last):
  File "_ctypes/callbacks.c", line 295, in 'calling callback function'
  File "c:\Users\marekp\data\eclipse\pyfilesystem\fs\expose\dokan\__init__.py", line 189, in wrapper
    return func(self,*args)
  File "c:\Users\marekp\data\eclipse\pyfilesystem\fs\expose\dokan\__init__.py", line 152, in wrapper
    res = func(*args,**kwds)
  File "c:\Users\marekp\data\eclipse\pyfilesystem\fs\errors.py", line 190, in wrapper
    raise OSError(errno.ENONET,str(e))
AttributeError: 'module' object has no attribute 'ENONET'
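The likely fix, assuming the intended constant was ENOENT ("no such file or directory"): ENOENT is defined on every platform, while ENONET is Linux-specific and absent from the Windows errno module, which is why the lookup above raises AttributeError. A sketch with a hypothetical wrapper name:

```python
import errno

def make_os_error(message):
    # errno.ENOENT exists on all platforms; errno.ENONET does not exist
    # on Windows, so referencing it there raises AttributeError.
    return OSError(errno.ENOENT, message)
```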

Original issue reported on code.google.com by [email protected] on 10 Oct 2010 at 3:18

CacheFS doesn't invalidate listdirinfo

What steps will reproduce the problem?
When I use listdirinfo instead of listdir, the directory listing is not 
affected by uncaching a file's metadata.

What is the expected output? What do you see instead?
When I uncache metadata for a file, I expect the listdirinfo cache to be 
invalidated.

What version of the product are you using? On what operating system?
trunk

The following is missing from _uncache() in remote.py:

    cache[""].pop("listdirinfo",None)

and this is also missing from the CacheFS class body:

    @_cached_method
    def listdirinfo(self,path="",**kwds):
        return super(CacheFS,self).listdirinfo(path,**kwds)

This breaks the unit tests for TahoeFS after migrating to listdirinfo().

Original issue reported on code.google.com by [email protected] on 9 Oct 2010 at 12:31
