
svfs's Introduction

*** This project is not maintained anymore ***

The Swift Virtual File System


SVFS is a Virtual File System over Openstack Swift built upon FUSE. It is compatible with hubiC, OVH Public Cloud Storage and basically every endpoint using a standard Openstack Swift setup. It brings a layer of abstraction over object storage, making it as accessible and convenient as a filesystem, without being intrusive about the way your data is stored.

Disclaimer

This is not an official project of the Openstack community.

Installation

Download and install the latest release packaged for your distribution.

Usage

Mount command

On Linux (requires fuse and ruby):

mount -t svfs -o <options> <device> /mountpoint

On OSX (requires osxfuse and ruby):

mount_svfs <device> /mountpoint -o <options>

Notes:

  • You can pick any name you want for the device parameter.
  • All available mount options are described later in this document.

Credentials can be specified in mount options; however, it may be desirable to read them from an external source. The following sections describe alternative approaches.

Reading credentials from the environment

SVFS supports reading the following sets of environment variables:

  • If you are using hubiC:
 HUBIC_AUTH
 HUBIC_TOKEN
  • If you are using a vanilla Swift endpoint (like OVH PCS), after sourcing your OpenRC file:
 OS_AUTH_URL
 OS_USERNAME
 OS_PASSWORD
 OS_REGION_NAME
 OS_TENANT_NAME
  • If you have already authenticated to an identity endpoint:
 OS_AUTH_TOKEN
 OS_STORAGE_URL

Reading credentials from a configuration file

All environment variables can also be set in a YAML configuration file placed at /etc/svfs.yaml.

For instance:

hubic_auth: XXXXXXXXXX..
hubic_token: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX...

Usage with OVH products

  • Usage with OVH Public Cloud Storage is explained here.
  • Usage with hubiC is explained here.

Mount options

Keystone options

  • auth_url: keystone URL (default is https://auth.cloud.ovh.net/v2.0).
  • username: your keystone user name.
  • password: your keystone password.
  • tenant: your project name.
  • region: the region where your tenant is.
  • version: authentication version (0 means auto-discovery which is the default).
  • storage_url: the storage endpoint holding your data.
  • internal_endpoint: the storage endpoint type (default is false).
  • token: a valid token.

Options region, version, storage_url and token are guessed during authentication if not provided.

Hubic options

  • hubic_auth: hubic authorization token as returned by the hubic-application command.
  • hubic_times: use file times set by hubic synchronization clients. Option attr should also be set for this to work.
  • hubic_token: hubic refresh token as returned by the hubic-application command.

Swift options

  • container: which container should be selected while mounting the filesystem. If not set, all containers within the tenant will be available under the chosen mountpoint.
  • storage_policy: expected containers storage policy. This is used to ignore containers not matching a particular storage policy name. If empty, this setting is ignored (default).
  • segment_size: large object segments size in MB. When an object has a content larger than this setting, it will be uploaded in multiple parts of the specified size. Default is 256 MB. Segment size should not exceed 5 GB.
  • connect_timeout: connection timeout to the swift storage endpoint. Default is 15 seconds.
  • request_timeout: timeout of requests sent to the swift storage endpoint. Default is 5 minutes.
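
To make the segment_size arithmetic concrete, a minimal sketch (the segmentCount helper is illustrative, not svfs code) of how many parts a large object upload needs:

```go
package main

import "fmt"

// segmentCount returns how many parts an upload needs, given object
// and segment sizes in MB. Objects at or below the segment size are
// uploaded whole. Illustrative arithmetic only.
func segmentCount(objectMB, segmentMB int64) int64 {
	if objectMB <= segmentMB {
		return 1
	}
	return (objectMB + segmentMB - 1) / segmentMB // ceiling division
}

func main() {
	// A 1000 MB object with the default 256 MB segment size
	// is uploaded in 4 parts.
	fmt.Println(segmentCount(1000, 256))
}
```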

Prefetch options

  • block_size: Filesystem block size in bytes. This is only used to report correct stat() results.
  • readahead_size: Readahead size in KB. Default is 128 KB.
  • readdir: Overall concurrency factor when listing segmented objects in directories (default is 20).
  • attr: Handle base attributes.
  • xattr: Handle extended attributes.
  • transfer_mode: Enforce network transfer optimizations. The following flags can be combined:
     • 1: disable explicit empty file creation.
     • 2: disable explicit directory creation.
     • 4: disable directory content check on removal.
     • 8: disable file check in read-only opening.
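
The flags combine as a bitmask. A sketch in Go (the constant names are ours, not svfs identifiers) showing how the documented values add up:

```go
package main

import "fmt"

// Hypothetical names for the documented transfer_mode flags.
const (
	SkipEmptyFileCreation = 1 << iota // 1
	SkipDirCreation                   // 2
	SkipDirContentCheck               // 4
	SkipReadOnlyFileCheck             // 8
)

func main() {
	// Disabling explicit directory creation and the removal-time
	// content check combines flags 2 and 4: transfer_mode=6.
	mode := SkipDirCreation | SkipDirContentCheck
	fmt.Println(mode)
	fmt.Println(mode&SkipDirCreation != 0)       // this optimization is on
	fmt.Println(mode&SkipEmptyFileCreation != 0) // this one is off
}
```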

Cache options

  • cache_access: cache entry access count before refresh. Default is -1 (unlimited access).
  • cache_entries: maximum entry count in cache. Default is -1 (unlimited).
  • cache_ttl: cache entry timeout before refresh. Default is 1 minute.

Access restriction options

  • allow_other: Bypass allow_root.
  • allow_root: Restrict access to root and the user mounting the filesystem.
  • default_perm: Restrict access based on file mode (useful with allow_other).
  • uid: default files uid (defaults to current user uid).
  • gid: default files gid (defaults to current user gid).
  • mode: default files permissions (default is 0700).
  • ro: enable read-only access.

Debug options

  • debug: enable debug log.
  • stdout: stdout redirection expression (e.g. >/dev/null).
  • stderr: stderr redirection expression (e.g. >>/var/log/svfs.log).
  • profile_addr: Golang profiling information will be served at this address (ip:port) if set.
  • profile_cpu: Golang CPU profiling information will be stored to this file if set.
  • profile_ram: Golang RAM profiling information will be stored to this file if set.

Performance options

  • go_gc: set the garbage collection target percentage. A collection is triggered when the heap size exceeds, by this ratio, the live heap remaining after the previous collection. A lower value triggers frequent collections, keeping memory usage lower at the cost of higher CPU usage; a higher value lets the heap grow by this percentage without collection, reducing GC frequency. A collection is forced if none has happened for 2 minutes. Note that unused heap memory is not reclaimed immediately after collection: it is returned to the operating system only if it appears unused for 5 minutes.
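
This option maps onto Go's standard GC target percentage, which any Go program can adjust via runtime/debug. A minimal sketch (the tuneGC wrapper name is ours; the underlying call is the standard library's):

```go
package main

import (
	"fmt"
	"runtime/debug"
)

// tuneGC sets the garbage collection target percentage and returns
// the previous value, just like the GOGC environment variable or the
// go_gc mount option.
func tuneGC(percent int) int {
	return debug.SetGCPercent(percent)
}

func main() {
	// A lower target trades CPU for memory: collections run more often.
	previous := tuneGC(50)
	fmt.Println("previous GC target:", previous)
	tuneGC(previous) // restore the prior behavior
}
```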

Limitations

Be aware that SVFS doesn't transform object storage to block storage.

SVFS doesn't support:

  • Opening files in modes other than O_CREAT, O_RDONLY and O_WRONLY.
  • Moving directories.
  • Renaming containers.
  • SLO (but supports DLO).
  • Per-file uid/gid/permissions (but per-mountpoint).
  • Symlink targets across containers (but within the same container).

Take a look at the docs for further discussion of the SVFS approach.

FAQ

Got errors using rsync with svfs? Can't change creation time? Why svfs after all?

Take a look at the FAQ.

Hacking

Make sure to use the latest version of go and follow contribution guidelines of SVFS.

License

This work is under the BSD license, see the LICENSE file for details.

svfs's People

Contributors

dduportal, kayrus, rledisez, vmalguy


svfs's Issues

Problem with OVH Object Storage

Hi, it looks like it mounts the storage without any problem, but there's an issue when trying to create a file inside the container. I'm trying to create a simple txt file with some text inside, but I get this:
"/mnt/swift-hls/test2.txt" E667: Fsync failed
WARNING: Original file may be lost or damaged
don't quit the editor until the file is successfully written!
Press ENTER or type command to continue

Any help is appreciated.

File has vanished with rsync

When using rsync between a data source which is a large container and local storage, "file has vanished" errors occur frequently.

Support for opening a file with an offset

Right now we throw OpenNonSeekable in all open() response flags. Since the swift library implements Range: byte=X-Y reading we can handle this case for OpenReadOnly mode.
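
The byte-range mechanism this relies on can be sketched with net/http directly (the real code would go through the swift library; the URL and token below are placeholders):

```go
package main

import (
	"fmt"
	"net/http"
)

// rangeRequest builds a GET for bytes [offset, offset+length) of an
// object, the HTTP mechanism behind offset reads in OpenReadOnly mode.
func rangeRequest(url, token string, offset, length int64) (*http.Request, error) {
	req, err := http.NewRequest("GET", url, nil)
	if err != nil {
		return nil, err
	}
	req.Header.Set("X-Auth-Token", token)
	// Range is inclusive on both ends, hence the -1.
	req.Header.Set("Range", fmt.Sprintf("bytes=%d-%d", offset, offset+length-1))
	return req, nil
}

func main() {
	req, _ := rangeRequest("https://storage.example.com/v1/AUTH_x/c/o", "token", 1024, 4096)
	fmt.Println(req.Header.Get("Range"))
}
```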

Add integration tests

We should add integration tests in order to make sure we don't break things now that many features are available.

We should at least cover these scenarios:

  • Create an empty file
  • Remove an empty file
  • Create a non-segmented file
  • Move a non-segmented file
  • Remove a non-segmented file
  • Create a segmented file
  • Move a segmented file
  • Remove a segmented file
  • Create a directory
  • Remove a directory

Encryption

Should give it a try using AEAD processing.

First, we need to check which of the following would be the best match:

  • AES-GCM (16, 24, 32 bytes key)
  • XSalsa20 + Poly1305 (32 bytes key)
  • something else

Second, check the consequences: since we would work on fixed block sizes, it is necessary to:

  • Pad data
  • Process closest blocks in byte range requests

Thus, besides pure encryption overhead, some extra actions will lower performance, but this is expected.

Encryption should only be used with extra_attr=true, since the original size would be written in a meta header and content size comparisons should not use the encrypted data size.
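
As a sketch of the AES-GCM candidate (illustrative only: per-block sealing with a random nonce prepended; key management, padding and block alignment are out of scope here):

```go
package main

import (
	"bytes"
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"
	"io"
)

// sealBlock encrypts one block with AES-GCM, prepending the nonce to
// the ciphertext so openBlock can recover it.
func sealBlock(key, plaintext []byte) ([]byte, error) {
	block, err := aes.NewCipher(key) // 16, 24 or 32 byte key
	if err != nil {
		return nil, err
	}
	aead, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce := make([]byte, aead.NonceSize())
	if _, err := io.ReadFull(rand.Reader, nonce); err != nil {
		return nil, err
	}
	return aead.Seal(nonce, nonce, plaintext, nil), nil
}

// openBlock authenticates and decrypts a block produced by sealBlock.
func openBlock(key, sealed []byte) ([]byte, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	aead, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce, ciphertext := sealed[:aead.NonceSize()], sealed[aead.NonceSize():]
	return aead.Open(nil, nonce, ciphertext, nil)
}

func main() {
	key := make([]byte, 32) // demo key; use a real KDF in practice
	sealed, _ := sealBlock(key, []byte("block data"))
	plain, _ := openBlock(key, sealed)
	fmt.Println(bytes.Equal(plain, []byte("block data")))
}
```

Note the overhead this implies: each block grows by the nonce (12 bytes) plus the authentication tag (16 bytes), which is part of the expected performance cost mentioned above.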

Segments not removed

It sometimes happens that segments are not removed.
The bulk-delete implementation in the swift library seems to end up with errors.

ls: reading directory .: Input/output error (recurrent on "root" folder)

Hello (I'm not saying how much of a fan I am each time anymore because, well I'm here ;))!

Whenever listing folders of the hubiC root, I get either the following error:

ls: reading directory .: Input/output error

or an incomplete list.

Same thing happens in file browser.

Once I get one folder deeper however, no problem at all.

Is that (once more) something I did not read right?

Thanks and have a nice day! :)

svfs with synology

Hi,

I would like to use svfs to back up the data from my Synology NAS to hubiC.
I tried to do it by myself but have not succeeded.

So, I would like to know if you plan to provide a .spk package for that?

Thanks !

Initial lookups trigger nil pointer dereference

Just after the fuse mount takes place, panicking can occur because of nil pointer dereference.
This seems to be an issue with directory readdirAll when the container pointer is nil.

rsync creating empty files

Hello,

I am really really enthusiastic about your app! I have been waiting for it for a lonnnng time!

Thanks a lot for all the great work!

I do have one issue.

I tried to copy a big folder by issuing the rsync -r command.

It seems to work well and files get copied (I have the corresponding Network activity on my computer), but when I visit the online hubic page or when I browse the directory on the hubic side, all the files are 0 bytes + have a random 6 letter extension (ex: ".r30nL4 ").

Am I doing anything wrong?

Thanks anyways and have a nice day!

upload large file fail on swift

Hi xlucas,
Thanks for your software. It's great work :)
I have used it to mount swift as a local file system on my server (/mnt).
I installed PureFTPd on that server and created an FTP account whose home directory is /mnt/container. But when I set segment_size=4200 (4 GB) and upload a large file, it fails: when the client uploads an object at 4200 MB, the FTP server returns 500 unknown command. (PureFTPd supports large files; I tested on a local disk.) Do you know what happens with this error?

Incomplete support for SLO/DLO

Static and Dynamic Large Objects don't get the right file attributes: their size appears to be 0, whilst the real size should be computed from the object segments. Also, download and upload of large files may not be possible due to manifests not being handled.

MIME type of object

Hi,
Thanks for svfs: it works fine. But the MIME type is not set in the Content-Type header of objects. That's a problem when using the object storage through the StaticWeb middleware. How can this be fixed?

Implement smarter cache

As of now, a very basic cache is used; it doesn't allow for:

  • Cache entry TTL
  • Cache entry access limit
  • Cache size (entry count)

Reading large files can lead to Out Of Memory case

When retrieving large files, the whole content is allocated in memory due to the HandleReadAller interface implementation.

Implementing a specific case in the Read function (avoiding range requests, which are unnecessary since the whole file can be read from the TCP socket) is one idea to solve this problem.

Support for uid/gid/permissions

Supporting per-file uid/gid/permissions would require sending a request for every single file/directory inside a container. Another issue is that we can have pseudo-directories, and setting this information would require additional processing (fetching an actual parent or child node and inheriting its metadata, for instance) or providing default uid/gid/permissions. That would be a strange behavior.

Instead, we should provide per-mountpoint support: every child node of a mountpoint would inherit the same ownership and permissions.

Hubic-application is broken

The program does not pass step 1: the response code of the request is 302, but the program accepts only 200.

The Ruby script does not follow redirections.

Moving directories

Hi,

As written in the README, it's impossible to move directories... (whereas it's available on hubiC)

But I don't understand why, as moving a directory amounts to copying it (and its content) and then removing it... (and these features are currently active...)
As Octave said in the mails he sent to the Cloud/hubiC mailing list, the future rsync/ftp/etc. gateway that will soon be available will be based on svfs running in a docker container... and maybe these protocols may use the mv syscall? And clients may need to have this feature available?

As an example, I would like to mount my PCS and use it as primary storage on my ownCloud server, without using the External storage support (which doesn't work... maybe a bug in ownCloud, or in the swift API at OVH... I don't know, see my issue here: owncloud/core#20968), and I can't because ownCloud needs the capability to move folders (Desktop client)

Support environment variables

The following openrc-like environment variables should be read if the relevant CLI options are not set.

OS_USERNAME
OS_PASSWORD
OS_TENANT_NAME
OS_AUTH_URL
OS_CONTAINER_NAME

bug when removing directories created outside of svfs

Hello!

I’m using git commit 110896e

svfs can’t remove directories created by the hubiC website or by hubicfuse (removing files works OK).
There is no error message from “rm”: it just silently fails!

# l
total 0
drwx------ 1 root root 4,0K 15 mars  11:02 archive
drwx------ 1 root root 4,0K 15 mars  11:02 hubiC_created
drwx------ 1 root root 4,0K 15 mars  11:02 logiciels
# rm -rf hubiC_created/
# l
total 0
drwx------ 1 root root 4,0K 15 mars  11:02 archive
drwx------ 1 root root 4,0K 15 mars  11:02 hubiC_created
drwx------ 1 root root 4,0K 15 mars  11:02 logiciels
2016/03/15 11:03:37 FUSE: <- Lookup [ID=0x29 Node=0x1 Uid=0 Gid=0 Pid=3149] "rm"
2016/03/15 11:03:37 FUSE: -> [ID=0x29] Lookup error=ENOENT
2016/03/15 11:03:37 FUSE: <- Open [ID=0x2a Node=0x2 Uid=0 Gid=0 Pid=3162] dir=true fl=OpenReadOnly+OpenDirectory+OpenNonblock+0x20000
2016/03/15 11:03:37 FUSE: -> [ID=0x2a] Open 0x1 fl=0
2016/03/15 11:03:37 FUSE: <- Read [ID=0x2b Node=0x2 Uid=0 Gid=0 Pid=3162] 0x1 4096 @0x0 dir=true fl=0 lock=0 ffl=OpenReadOnly+OpenDirectory+OpenNonblock+0x20000
2016/03/15 11:03:37 FUSE: -> [ID=0x2b] Read 0
2016/03/15 11:03:37 FUSE: <- Release [ID=0x2c Node=0x2 Uid=0 Gid=0 Pid=0] 0x1 fl=OpenReadOnly+OpenDirectory+OpenNonblock+0x20000 rfl=0 owner=0x0
2016/03/15 11:03:37 FUSE: -> [ID=0x2c] Release
2016/03/15 11:03:37 FUSE: <- Remove [ID=0x2d Node=0x1 Uid=0 Gid=0 Pid=3162] "hubiC_created" dir=true
2016/03/15 11:03:39 FUSE: -> [ID=0x2d] Remove
2016/03/15 11:03:39 FUSE: <- Forget [ID=0x2e Node=0x2 Uid=0 Gid=0 Pid=0] 1
2016/03/15 11:03:39 FUSE: -> [ID=0x2e] Forget
# mkdir svfs_created
# l
total 0
drwx------ 1 root root 4,0K 15 mars  11:02 archive
drwx------ 1 root root 4,0K 15 mars  11:02 hubiC_created
drwx------ 1 root root 4,0K 15 mars  11:02 logiciels
drwx------ 1 root root 4,0K 15 mars  11:02 svfs_created
# rm -rf svfs_created/
# l
total 0
drwx------ 1 root root 4,0K 15 mars  11:02 archive
drwx------ 1 root root 4,0K 15 mars  11:02 hubiC_created
drwx------ 1 root root 4,0K 15 mars  11:02 logiciels
2016/03/15 11:05:00 FUSE: <- Lookup [ID=0x66 Node=0x1 Uid=0 Gid=0 Pid=3149] "rm"
2016/03/15 11:05:00 FUSE: -> [ID=0x66] Lookup error=ENOENT
2016/03/15 11:05:00 FUSE: <- Open [ID=0x67 Node=0x5 Uid=0 Gid=0 Pid=3169] dir=true fl=OpenReadOnly+OpenDirectory+OpenNonblock+0x20000
2016/03/15 11:05:00 FUSE: -> [ID=0x67] Open 0x1 fl=0
2016/03/15 11:05:00 FUSE: <- Read [ID=0x68 Node=0x5 Uid=0 Gid=0 Pid=3169] 0x1 4096 @0x0 dir=true fl=0 lock=0 ffl=OpenReadOnly+OpenDirectory+OpenNonblock+0x20000
2016/03/15 11:05:00 FUSE: -> [ID=0x68] Read 0
2016/03/15 11:05:00 FUSE: <- Release [ID=0x69 Node=0x5 Uid=0 Gid=0 Pid=0] 0x1 fl=OpenReadOnly+OpenDirectory+OpenNonblock+0x20000 rfl=0 owner=0x0
2016/03/15 11:05:00 FUSE: -> [ID=0x69] Release
2016/03/15 11:05:00 FUSE: <- Remove [ID=0x6a Node=0x1 Uid=0 Gid=0 Pid=3169] "svfs_created" dir=true
2016/03/15 11:05:01 FUSE: -> [ID=0x6a] Remove
2016/03/15 11:05:01 FUSE: <- Forget [ID=0x6b Node=0x5 Uid=0 Gid=0 Pid=0] 1
2016/03/15 11:05:01 FUSE: -> [ID=0x6b] Forget

On the other hand, hubicfuse can’t remove svfs created directories:

$ l
total 0
drwxr-xr-x 2 laurent users 0  1 mars   2013 archive
drwxr-xr-x 2 laurent users 0 15 mars  09:54 hubiC_created
drwxr-xr-x 2 laurent users 0  1 mars   2013 logiciels
drwxr-xr-x 2 laurent users 0 15 mars  11:11 svfs_created
$ rm -rf svfs_created/
$ l
total 0
drwxr-xr-x 2 laurent users 0  1 mars   2013 archive
drwxr-xr-x 2 laurent users 0 15 mars  09:54 hubiC_created
drwxr-xr-x 2 laurent users 0  1 mars   2013 logiciels
drwxr-xr-x 2 laurent users 0 15 mars  11:11 svfs_created
$ rm -rf hubiC_created/
$ l
total 0
drwxr-xr-x 2 laurent users 0  1 mars   2013 archive
drwxr-xr-x 2 laurent users 0  1 mars   2013 logiciels
drwxr-xr-x 2 laurent users 0 15 mars  11:11 svfs_created

The debug log for a hubicfuse created directory:

# l
total 0
drwx------ 1 root root 4,0K 15 mars  11:24 archive
drwx------ 1 root root 4,0K 15 mars  11:24 hubicfuse_created
drwx------ 1 root root 4,0K 15 mars  11:24 logiciels
# rm -rf hubicfuse_created/
# l
total 0
drwx------ 1 root root 4,0K 15 mars  11:24 archive
drwx------ 1 root root 4,0K 15 mars  11:24 hubicfuse_created
drwx------ 1 root root 4,0K 15 mars  11:24 logiciels
2016/03/15 11:25:09 FUSE: <- Getattr [ID=0x36 Node=0x1 Uid=0 Gid=0 Pid=3690] 0x0 fl=0
2016/03/15 11:25:09 FUSE: -> [ID=0x36] Getattr valid=1m0s ino=1 size=1607005318 mode=drwx------
2016/03/15 11:25:09 FUSE: <- Lookup [ID=0x37 Node=0x1 Uid=0 Gid=0 Pid=3690] "rm"
2016/03/15 11:25:09 FUSE: -> [ID=0x37] Lookup error=ENOENT
2016/03/15 11:25:09 FUSE: <- Open [ID=0x38 Node=0x4 Uid=0 Gid=0 Pid=3699] dir=true fl=OpenReadOnly+OpenDirectory+OpenNonblock+0x20000
2016/03/15 11:25:09 FUSE: -> [ID=0x38] Open 0x1 fl=0
2016/03/15 11:25:09 FUSE: <- Read [ID=0x39 Node=0x4 Uid=0 Gid=0 Pid=3699] 0x1 4096 @0x0 dir=true fl=0 lock=0 ffl=OpenReadOnly+OpenDirectory+OpenNonblock+0x20000
2016/03/15 11:25:09 FUSE: -> [ID=0x39] Read 0
2016/03/15 11:25:09 FUSE: <- Release [ID=0x3a Node=0x4 Uid=0 Gid=0 Pid=0] 0x1 fl=OpenReadOnly+OpenDirectory+OpenNonblock+0x20000 rfl=0 owner=0x0
2016/03/15 11:25:09 FUSE: -> [ID=0x3a] Release
2016/03/15 11:25:09 FUSE: <- Remove [ID=0x3b Node=0x1 Uid=0 Gid=0 Pid=3699] "hubicfuse_created" dir=true
2016/03/15 11:25:10 FUSE: -> [ID=0x3b] Remove
2016/03/15 11:25:10 FUSE: <- Forget [ID=0x3c Node=0x4 Uid=0 Gid=0 Pid=0] 1
2016/03/15 11:25:10 FUSE: -> [ID=0x3c] Forget

Regards.

File size not updated when file content is modified

After writing to a file, size is not updated immediately.

# echo "foo" > bar
# cat bar
foo
# ll bar 
-rw------- 1 root root 0 Feb 17 10:46 bar

About 1 min later :

# ll bar
-rw------- 1 root root 4 Feb 17 10:46 bar

Check maximum segment size

Openstack swift doesn't support segment sizes > 5 GB.
We should return a mount error when the --os-segment-size value exceeds this limit.
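
A minimal sketch of such a pre-mount check (the checkSegmentSize helper is hypothetical, not svfs code):

```go
package main

import (
	"errors"
	"fmt"
)

// maxSegmentSizeMB is Swift's 5 GB per-segment limit, expressed in MB.
const maxSegmentSizeMB = 5 * 1024

// checkSegmentSize validates the --os-segment-size option before
// mounting, returning an error instead of failing later at upload time.
func checkSegmentSize(sizeMB int64) error {
	if sizeMB <= 0 || sizeMB > maxSegmentSizeMB {
		return errors.New("segment size must be within ]0, 5120] MB")
	}
	return nil
}

func main() {
	fmt.Println(checkSegmentSize(256))  // default value: OK
	fmt.Println(checkSegmentSize(6000)) // over the Swift limit: error
}
```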

HTTP 408 on file creation

It happens once in a while that a write() operation fails just after a call to open() with OpenCreate flag.

This is due to a partial PUT request standing by whilst an unnecessary HEAD request is emitted and completes in a time close to or greater than the remote load balancer timeout.

The first write() call then fails since we get an HTTP 408 status from the remote storage endpoint.

Trying to reach a file more than one level deep after mount fails

After a mount, when not using option -c container, accessing a file deeper than the root directory fails.

Fuse debug log shows that the lookup call is failing :

# ls container/file
2016/02/17 15:03:55 FUSE: <- Lookup [ID=0x2 Node=0x1 Uid=0 Gid=0 Pid=20385] "container"
2016/02/17 15:03:55 FUSE: -> [ID=0x2] Lookup error=ENOENT
2016/02/17 15:04:36 FUSE: <- Forget [ID=0x3 Node=0x1 Uid=0 Gid=0 Pid=0] 1
2016/02/17 15:04:36 FUSE: -> [ID=0x3] Forget

Bring support for swift ACL

If storage_url is specified along with username and password, it should be used and not overridden on re-auth.

This would permit handling swift ACLs.

Mounting HUBIC containers

Hi,

And thanks for SVFS!

Could you possibly add a short section to the documentation explaining how to use it to mount HUBIC containers?

That would be awesome.

Thanks!

Packaging

Hello, I use 64-bit Arch Linux. I would like to know which commands are needed to compile svfs, so I can build a PKGBUILD and publish it on https://aur.archlinux.org/

Rsync errors : symlink and mkstemp

Hi,

i just get this :

rsync: symlink "/mnt/svfs/hubic/.composer/vendor/bin/.laravel.15610" -> "../laravel/installer/laravel" failed: Input/output error (5)

Rsync freeze.

Hello,

I'm encountering an issue while using rsync on mounted containers.

After some files, it freezes and nothing happens.
SVFS: 0.6.2

Mounting with:

mount.svfs hubic /mnt/hubic2 -o hubic_auth=***,hubic_token=***,container=default,extra_attr=true

rsync :

abyss ~ # rsync -rtW --inplace --progress --delete --exclude "*.dll" /mnt/mp3/ /mnt/hubic2/mp3/
sending incremental file list
./
test
              0 100%    0.00kB/s    0:00:00 (xfr#1, ir-chk=1052/1065)
CUETools/
CUETools/ArCueDotNet.exe
          6,144 100%    0.00kB/s    0:00:00 (xfr#2, ir-chk=1042/1065)
CUETools/CUERipper.exe
         32,768  12% 1000.00kB/s    0:00:00

On another shell :

abyss ~ # df -h |grep hubic
hubic           9,9T  7,7T  2,3T  78% /mnt/hubic2
abyss ~ # lsof |grep /mnt/hubic2/
rsync     4453            root  cwd       DIR               0,38      4096 15692796082419027574 /mnt/hubic2/mp3
rsync     4454            root  cwd       DIR               0,38      4096 15692796082419027574 /mnt/hubic2/mp3


So i kill them : 
abyss ~ # ps aux |grep rsync
root      4452  0.0  0.0 124464  3124 pts/0    S+   18:57   0:00 rsync -rtW --inplace --progress --delete --exclude *.dll /mnt/mp3/ /mnt/hubic2/mp3/
root      4453  0.0  0.0 124088  2320 pts/0    S+   18:57   0:00 rsync -rtW --inplace --progress --delete --exclude *.dll /mnt/mp3/ /mnt/hubic2/mp3/
root      4454  0.0  0.0 124172  1532 pts/0    S+   18:57   0:00 rsync -rtW --inplace --progress --delete --exclude *.dll /mnt/mp3/ /mnt/hubic2/mp3/
root      4748  0.0  0.0 113928  2208 pts/1    S+   20:06   0:00 grep --colour=auto rsync

abyss ~ # kill -9 4452 4453 4454

And I still have one process that I cannot kill (I need to reboot the computer to free the mount...)

abyss ~ # lsof |grep /mnt/hubic2/
rsync     4454            root  cwd       DIR               0,38      4096 15692796082419027574 /mnt/hubic2/mp3

Can I have your help ?

Regards.

mount: mounting {myname} on /hubic failed: No such device

Hello

I have a strange bug on my docker image when I run
svfs -os-storage-url="https://lb1.hubic.ovh.net/v1/AUTH_{user}" -os-auth-token="{token}" test /hubic &
it works, but if I do

mount -t svfs -o token={token},storage_url=https://lb1.hubic.ovh.net/v1/AUTH_{user} test /hubic

it says mount: mounting test on /hubic failed: No such device

any idea ?

Error 401 with Hubic

Hi,

For many days, when I try to mount hubiC with svfs, I have had this error:

2016/04/18 09:02:37 Invalid reply from server when fetching hubic API token : 401

I use this command : mount -t svfs -o hubic_auth=https://lb1040.hubic.ovh.net/v1/AUTH_xyz123,hubic_token=mytoken123,container=default hubic /mnt/hubic I use the 0.5.1 version on debian 7.10 x64

Mount a specific container

It should be possible to mount a specific container instead of all the containers for a specific tenant with a new option -c container.

Profiling options

Options for memory and cpu profiling should be added so it's easier to track down potential performance issues and improve cpu usage and memory footprint.

These options should be :

--profile-cpu <path_to_prof>
--profile-ram <path_to_prof>

Hubic token expiration

Hi,

I would like to Dockerize svfs to back up my data to hubiC.
So, I plan to create a Python script to automate getting the access token from the hubiC API.
After the token expires, is the container automatically unmounted, even if data are being transferred?
Does svfs exit automatically when the token expires?
Have you planned to address the 24h expiration limitation?
How do I use svfs with a hubiC application?

Thanks !

you don't have the enough permission for watch the hubic content

Hi! I followed this guide https://github.com/ovh/svfs/blob/master/docs/HubiC.md and set my credentials; the CLI returned my hubic_token and hubic_auth. Then I tried to run svfs:

sudo mount -t svfs -o hubic_auth=YXBpX2h1YmljX3hBODg4eDFSdjZHYmt6x___secret____VUFFXOlZiZVJSd1VPd3NhS29SNUdoMk1JN3Fyc2I5dHRwODRUOTFQWkVBcWlCcDM4WXVuVnpZcEJPZGpKTThSdHNUNHg=,hubic_token=l8oyKv39Fv4Y0fp31dUiRC8mM___secret____zWvXZWWh2dgoSjgehlKcYyr60txv,container=default hubic /home/yo/Descargas/hubic

This mounts the unit, but when I try to enter it I get a message saying something like "you don't have enough permission to view the hubiC content" (it may not be exactly this, as I'm translating it from Spanish).

Do you know what the problem could be here?
Thank you so much!

space available

Hello, and thanks again for solving my NAS-free home headache! :)

It would be nice to be able to ask how much space is occupied vs free :)

of course, it's a "nice to have".

Thanks and have a nice day!

Support for symlinks

It would be better for svfs to support symlinks.

Implementation: in order to implement this feature, another type of object should be added with Content-Type: application/link. The symlink target will be stored in a Meta-Symlink-Target header.

Restriction: a link should always reference an object within the same container. This should be documented in the limitations section.
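
A sketch of the metadata such a symlink object would carry (header names follow the issue text, using Swift's X-Object-Meta- prefix for custom metadata; the real implementation may differ):

```go
package main

import "fmt"

// symlinkHeaders builds the object metadata proposed above for
// storing a symlink: a dedicated content type, plus the target in a
// custom meta header.
func symlinkHeaders(target string) map[string]string {
	return map[string]string{
		"Content-Type":                 "application/link",
		"X-Object-Meta-Symlink-Target": target,
	}
}

func main() {
	h := symlinkHeaders("dir/file.txt")
	fmt.Println(h["Content-Type"], h["X-Object-Meta-Symlink-Target"])
}
```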

Support mtime change

Bring a new option extra_attr=[true|false] that would allow setting mtime.
This will require listing all files (such as manifests) to get extended attributes, so it should be optional.

Add support for remote profiling

A new option --profile-bind=<host>:<port> (defaulting to "", i.e. no live profiling) should be added so that direct remote profiling is possible. For instance, one could fetch the live memory profile and use gcvis at the same time to correlate observations with heap size evolution.

Parallelize directory listing

Listing objects can be slow, especially for SLO/DLO objects, where the container listing is not sufficient to get the real object size and further HEAD requests are required.

A global directory listing concurrency option --max-readdir-concurrency should be introduced in order to run these extra requests inside a pool of workers.

Support mkdir calls

mkdir syscalls should be implemented as an upload of <dirname>/ objects in swift.
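
The path-to-object mapping this implies can be sketched as follows (hypothetical helper; the marker object itself would be a zero-byte PUT through the swift library):

```go
package main

import (
	"fmt"
	"strings"
)

// directoryObjectName maps a mkdir path inside a container to the
// "<dirname>/" marker object to upload: no leading slash, exactly one
// trailing slash.
func directoryObjectName(path string) string {
	return strings.TrimSuffix(strings.TrimPrefix(path, "/"), "/") + "/"
}

func main() {
	// mkdir /photos/2016 inside a container uploads the empty
	// object "photos/2016/".
	fmt.Println(directoryObjectName("/photos/2016"))
}
```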

mount hubic over svfs on boot

Hello,

I've tried to mount my Hubic over svfs on boot with :

  • Debian stretch amd64
  • Raspbian jessie (Pi 2)
  • All is OK on /etc/fstab.
  • "mount -a" once the network is up and reachable is OK.

Mount fails on startup because the network is unreachable when fetching credentials:
Post https://api.hubic.com/oauth/token: dial tcp: lookup api.hubic.com on *:53: dial udp *: connect: network is unreachable

Thx

Report filesystem information in common tools

For now, as svfs doesn't use blocks, no block size or block count metrics are reported to common syscalls.

However, it would be useful to use a fake block size and report this information to the stat(2) and statfs(2) syscalls, so that tools like df or du get the right information.

For swift accounts using quotas, like hubiC, the usage reported in statfs(2) will be tailored to the quota value.

For other accounts, we will show an "unlimited" storage space.
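
A sketch of deriving statfs(2) block counts from a fake block size, the quota and current usage (illustrative helper; accounts without a quota would report an "unlimited" total instead):

```go
package main

import "fmt"

// statfsBlocks converts a quota and current usage (both in bytes)
// into total and free block counts for a given fake block size,
// rounding partial blocks up.
func statfsBlocks(blockSize, quota, used uint64) (blocks, bfree uint64) {
	blocks = (quota + blockSize - 1) / blockSize
	usedBlocks := (used + blockSize - 1) / blockSize
	if usedBlocks > blocks {
		return blocks, 0
	}
	return blocks, blocks - usedBlocks
}

func main() {
	// A 25 GB hubiC-like quota with 10 GB used, reported as 4 KiB blocks.
	blocks, bfree := statfsBlocks(4096, 25<<30, 10<<30)
	fmt.Println(blocks, bfree)
}
```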

Directory removal does not work for non-pseudo directories.

When running rm -r on a directory, it is not removed unless it is a pseudo-directory. This seems to be an issue with the remove function for directories: the object name does not end with a slash when sending the delete request to swift.
