

Onedata

Onedata is a global data management system that provides easy access to distributed storage resources and supports a wide range of use cases, from personal data management to data-intensive scientific computations. Please visit the Onedata homepage (https://onedata.org) for more information, including the documentation and API specifications.

Changelog: CHANGELOG.md.

Contact us: https://onedata.org/#/home/contact.

Bug reports and discussions: please use GitHub issues.

Project structure

This repository serves as an entry point to the Onedata software ecosystem, containing general information, the changelog and the license.

Onedata is composed of numerous subprojects — see the organization page: https://github.com/onedata.

The main components are:

  • Onezone - connects multiple storage providers (Oneprovider instances) into a distributed domain and offers a centralized graphical user interface for navigating the domain and performing data management tasks,
  • Oneprovider - deployed at each storage provider site; unifies and controls access to data stored on the provider's low-level storage resources,
  • Oneclient - a command-line tool that enables transparent access to user data spaces through a FUSE virtual filesystem.
Package repositories:

  • Onezone: https://github.com/onedata/onezone-pkg
  • Oneprovider: https://github.com/onedata/oneprovider-pkg
  • Oneclient: https://github.com/onedata/oneclient-pkg
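As a concrete illustration of the Oneclient component, the sketch below assembles a typical mount command. The hostname, token variable and mountpoint are placeholders of my own, not values from this repository; the -H/-t flags follow oneclient's documented CLI, so verify against your installed version.

```shell
# Assemble a oneclient mount command; nothing here contacts a real provider.
MOUNTPOINT="/tmp/onedata-mnt"
cmd="oneclient -H oneprovider.example.com -t \$ACCESS_TOKEN $MOUNTPOINT"
echo "$cmd"
# After mounting, each space appears as a directory under $MOUNTPOINT;
# unmount with: fusermount -u "$MOUNTPOINT"
```

Once mounted, the space behaves like a local POSIX directory, which is what "transparent access" means in practice.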

Copyright and license

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

Acknowledgements

This work was supported in part by the 2017 research funds allocated within the framework of co-financed international projects (project no. 3711/H2020/2017/2).

This work is co-funded by the EOSC-hub project (Horizon 2020) under Grant number 777536.

People

Contributors

bkryza, bwalkowi, cwiertniamichal, darnik22, dulebapiotr, groundnuty, jakud, kasias999, kliput, krzysztof-trzepla, kzemek, lopiola, michalrw, mistanisz, mpaciore, mwrona, pkociepka, rslota, rwidzisz, wgslr, xorver, zmumi


Issues

Can't create subdomain delegation level 2

Can't create a subdomain delegation for a Oneprovider machine that is part of another subdomain. I am trying to create the subdomain delegations oneprovider01.ceta-ciemat.datahub.egi.eu and oneprovider02.ceta-ciemat.datahub.egi.eu.

When the script asks me for the name, I get the following error:

########### Certificates #############
You can choose to provide your own certificates or use subdomain delegation and request Onezone to help in generating the Let's Encrypt certificate for your subdomain.
Do you want use subdomain delegation to acquire a FQDN for your Oneprovider? (y/n, default: y)?: y
Please enter the subdomain for your oneprovider (auto-detected default: oneprovider01.ceta-ciemat):
The format of the subdomain must match the regular expression '[a-z0-9]([-a-z0-9]*[a-z0-9])'. Please try again.
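For context, the quoted pattern only admits a single lowercase DNS label, so a dotted value such as oneprovider01.ceta-ciemat is rejected. A minimal sketch of an equivalent check follows; the function name is mine, and I add a trailing `?` so single-character labels also pass:

```shell
# Reject anything that is not a single lowercase DNS label (no dots allowed).
is_valid_subdomain() {
  printf '%s' "$1" | grep -Eq '^[a-z0-9]([-a-z0-9]*[a-z0-9])?$'
}

is_valid_subdomain "oneprovider01" && echo "oneprovider01: valid"
is_valid_subdomain "oneprovider01.ceta-ciemat" || echo "oneprovider01.ceta-ciemat: rejected (contains a dot)"
```

In other words, only the label in front of the Onezone domain should be entered, not the full FQDN.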

Regards.

Files apparently appear with 0 bytes when using oneclient mounts (Oneprovider 19.02.5 and 20.02.4)

Good evening.

We are experiencing strange behaviour when we mount spaces with oneclient against the 19.02.5 and 20.02.4 releases of Oneprovider.

We have an application that creates local files and then progressively moves them into a space mounted with oneclient.

Although the files are small, the first file takes about a minute to move; the rest seem to move quickly. The actual problem is that many of these files appear with 0 bytes, even at datahub.egi.eu, and they cannot be downloaded.

However, the files were stored on the Oneprovider. For example:

[root@38c332668b15 run]# oneclient /mnt
....
RUN APP THAT CREATES LOCAL FILES AND THEN IT MOVES THEM INTO /mnt/test6/sac_xxxx 
....
[root@38c332668b15 run]# ls -alh /mnt/test6/sac_10_150.0_75600_QGSII_flat/
total 0
drwxr-xr-x 1 1034995 1073456    0 Dec 29 18:19 .
drwxrwxr-x 1 root    root       0 Dec 29 18:14 ..
-rw-r--r-- 1 1034995 1073456    0 Dec 29 18:17 DAT000703-0703-00000000024.input
-rw-r--r-- 1 1034995 1073456    0 Dec 29 18:17 DAT000703-0703-00000000024.lst.bz2
-rw-r--r-- 1 1034995 1073456    0 Dec 29 18:17 DAT000703.bz2
-rw-r--r-- 1 1034995 1073456    0 Dec 29 18:18 DAT000904-0904-00000000007.input
-rw-r--r-- 1 1034995 1073456    0 Dec 29 18:18 DAT000904-0904-00000000007.lst.bz2
-rw-r--r-- 1 1034995 1073456    0 Dec 29 18:18 DAT000904.bz2
-rw-r--r-- 1 1034995 1073456    0 Dec 29 18:17 DAT001105-1105-00000000017.input
-rw-r--r-- 1 1034995 1073456    0 Dec 29 18:18 DAT001105-1105-00000000017.lst.bz2
-rw-r--r-- 1 1034995 1073456    0 Dec 29 18:17 DAT001105.bz2
-rw-r--r-- 1 1034995 1073456  614 Dec 29 18:16 DAT001206-1206-00000000061.input
-rw-r--r-- 1 1034995 1073456 3.6K Dec 29 18:16 DAT001206-1206-00000000061.lst.bz2
-rw-r--r-- 1 1034995 1073456   14 Dec 29 18:16 DAT001206.bz2
-rw-r--r-- 1 1034995 1073456    0 Dec 29 18:18 DAT001407-1407-00000000013.input
-rw-r--r-- 1 1034995 1073456    0 Dec 29 18:18 DAT001407-1407-00000000013.lst.bz2
-rw-r--r-- 1 1034995 1073456    0 Dec 29 18:18 DAT001407.bz2
....

However, if I list these files directly on the Oneprovider:

# ls -alh sac_10_150.0_75600_QGSII_flat/
total 292K
drwxr-xr-x  2 1034995 1073456 4.0K Dec 29 19:21 .
drwxr-xr-x. 3 root    root      42 Dec 29 19:14 ..
-rw-r--r--  1 1034995 1073456  612 Dec 29 19:17 DAT000703-0703-00000000024.input
-rw-r--r--  1 1034995 1073456 3.6K Dec 29 19:17 DAT000703-0703-00000000024.lst.bz2
-rw-r--r--  1 1034995 1073456   14 Dec 29 19:17 DAT000703.bz2
-rw-r--r--  1 1034995 1073456  612 Dec 29 19:18 DAT000904-0904-00000000007.input
-rw-r--r--  1 1034995 1073456 3.6K Dec 29 19:18 DAT000904-0904-00000000007.lst.bz2
-rw-r--r--  1 1034995 1073456   14 Dec 29 19:18 DAT000904.bz2
-rw-r--r--  1 1034995 1073456  613 Dec 29 19:18 DAT001105-1105-00000000017.input
-rw-r--r--  1 1034995 1073456 3.6K Dec 29 19:18 DAT001105-1105-00000000017.lst.bz2
-rw-r--r--  1 1034995 1073456   14 Dec 29 19:18 DAT001105.bz2
-rw-r--r--  1 1034995 1073456  614 Dec 29 19:16 DAT001206-1206-00000000061.input
-rw-r--r--  1 1034995 1073456 3.6K Dec 29 19:16 DAT001206-1206-00000000061.lst.bz2
-rw-r--r--  1 1034995 1073456   14 Dec 29 19:16 DAT001206.bz2
-rw-r--r--  1 1034995 1073456  614 Dec 29 19:18 DAT001407-1407-00000000013.input
-rw-r--r--  1 1034995 1073456 3.6K Dec 29 19:18 DAT001407-1407-00000000013.lst.bz2
-rw-r--r--  1 1034995 1073456   14 Dec 29 19:18 DAT001407.bz2
-rw-r--r--  1 1034995 1073456  614 Dec 29 19:17 DAT001608-1608-00000000059.input
-rw-r--r--  1 1034995 1073456 3.6K Dec 29 19:17 DAT001608-1608-00000000059.lst.bz2
-rw-r--r--  1 1034995 1073456   14 Dec 29 19:17 DAT001608.bz2

Can you help us with the troubleshooting? We don't see any hint in the logs.

Cheers.
Happy new year.
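One hedged way to narrow this down (a sketch I would try, not a known fix): poll the size reported through the mount until it converges with the size on the provider, to see whether this is a metadata-sync delay rather than actual data loss. The helper below is hypothetical:

```shell
# Poll FILE until it reports EXPECTED bytes, or give up after TIMEOUT seconds.
wait_for_size() {
  file=$1; expected=$2; timeout=$3
  while [ "$timeout" -gt 0 ]; do
    actual=$(wc -c < "$file")
    [ "$actual" -eq "$expected" ] && return 0
    sleep 1
    timeout=$((timeout - 1))
  done
  echo "still $actual bytes after timeout (expected $expected)" >&2
  return 1
}
```

For instance, `wait_for_size /mnt/test6/sac_10_150.0_75600_QGSII_flat/DAT000703.bz2 14 60` would reveal whether the 0-byte size eventually catches up with the 14 bytes seen on the provider.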

Failed to install pre-built Linux package for oneclient on Ubuntu 14.04.5 LTS

Hi,

I was trying to install oneclient with

wget -q -O - http://get.onedata.org/oneclient.sh | bash # for oneclient

but it failed.

The full text output from the terminal is pasted below

OK
Ign http://ftp.acc.umu.se trusty/ InRelease
Ign http://extras.ubuntu.com trusty InRelease
Hit http://ftp.acc.umu.se trusty/ Release.gpg
Hit http://ftp.acc.umu.se trusty/ Release
Hit http://ftp.acc.umu.se trusty/ Packages
Ign http://se.archive.ubuntu.com trusty InRelease
Ign http://dl.google.com stable InRelease
Hit http://se.archive.ubuntu.com trusty-updates InRelease
Ign http://ftp.acc.umu.se trusty/ Translation-en_US
Hit http://se.archive.ubuntu.com trusty-backports InRelease
Ign http://ftp.acc.umu.se trusty/ Translation-en
Hit http://packages.onedata.org vivid InRelease
Hit http://packages.onedata.org trusty InRelease
Hit http://extras.ubuntu.com trusty Release.gpg
Hit https://apt.dockerproject.org ubuntu-trusty InRelease
Hit https://apt.dockerproject.org ubuntu-trusty/main amd64 Packages
Hit https://apt.dockerproject.org ubuntu-trusty/main i386 Packages
Hit http://packages.onedata.org vivid/main amd64 Packages
Get:1 https://apt.dockerproject.org ubuntu-trusty/main Translation-en_US
Hit http://dl.google.com stable Release.gpg
Hit http://dl.google.com stable Release
Hit http://se.archive.ubuntu.com trusty Release.gpg
Hit http://dl.google.com stable/main amd64 Packages
Hit http://extras.ubuntu.com trusty Release
Hit http://se.archive.ubuntu.com trusty-updates/main Sources
Ign https://apt.dockerproject.org ubuntu-trusty/main Translation-en_US
Ign https://apt.dockerproject.org ubuntu-trusty/main Translation-en
Hit http://se.archive.ubuntu.com trusty-updates/restricted Sources
Ign http://repository.egi.eu squeeze InRelease
Ign http://dl.google.com stable/main Translation-en_US
Ign http://dl.google.com stable/main Translation-en
Hit http://se.archive.ubuntu.com trusty-updates/universe Sources
Ign http://repository.egi.eu squeeze-updates InRelease
Hit http://packages.onedata.org trusty/main Sources
Hit http://packages.onedata.org trusty/main amd64 Packages
Hit http://extras.ubuntu.com trusty/main Sources
Hit http://extras.ubuntu.com trusty/main amd64 Packages
Hit http://extras.ubuntu.com trusty/main i386 Packages
Hit http://se.archive.ubuntu.com trusty-updates/multiverse Sources
Hit http://se.archive.ubuntu.com trusty-updates/main amd64 Packages
Hit http://se.archive.ubuntu.com trusty-updates/restricted amd64 Packages
Hit http://se.archive.ubuntu.com trusty-updates/universe amd64 Packages
Hit http://se.archive.ubuntu.com trusty-updates/multiverse amd64 Packages
Hit http://se.archive.ubuntu.com trusty-updates/main i386 Packages
Hit http://se.archive.ubuntu.com trusty-updates/restricted i386 Packages
Hit http://repository.egi.eu egi-igtf InRelease
Ign http://repository.egi.eu trusty InRelease
Hit http://se.archive.ubuntu.com trusty-updates/universe i386 Packages
Hit http://se.archive.ubuntu.com trusty-updates/multiverse i386 Packages
Hit http://se.archive.ubuntu.com trusty-updates/main Translation-en
Ign http://packages.onedata.org trusty/main Translation-en_US
Ign http://packages.onedata.org trusty/main Translation-en
Hit http://repository.egi.eu squeeze Release.gpg
Hit http://se.archive.ubuntu.com trusty-updates/multiverse Translation-en
Hit http://se.archive.ubuntu.com trusty-updates/restricted Translation-en
Hit http://se.archive.ubuntu.com trusty-updates/universe Translation-en
Ign http://toolbelt.heroku.com ./ InRelease
Hit http://repository.egi.eu squeeze-updates Release.gpg
Hit http://toolbelt.heroku.com ./ Release.gpg
Hit http://repository.egi.eu egi-igtf/core amd64 Packages
Hit http://toolbelt.heroku.com ./ Release
Hit http://se.archive.ubuntu.com trusty-backports/main Sources
Hit http://se.archive.ubuntu.com trusty-backports/restricted Sources
Hit http://se.archive.ubuntu.com trusty-backports/universe Sources
Hit http://toolbelt.heroku.com ./ Packages
Hit http://se.archive.ubuntu.com trusty-backports/multiverse Sources
Hit http://se.archive.ubuntu.com trusty-backports/main amd64 Packages
Hit http://se.archive.ubuntu.com trusty-backports/restricted amd64 Packages
Hit http://se.archive.ubuntu.com trusty-backports/universe amd64 Packages
Ign http://toolbelt.heroku.com ./ Translation-en_US
Hit http://se.archive.ubuntu.com trusty-backports/multiverse amd64 Packages
Ign http://toolbelt.heroku.com ./ Translation-en
Ign http://extras.ubuntu.com trusty/main Translation-en_US
Ign http://extras.ubuntu.com trusty/main Translation-en
Hit https://packagecloud.io trusty InRelease
Hit https://packagecloud.io trusty/main Sources
Hit https://packagecloud.io trusty/main amd64 Packages
Hit https://packagecloud.io trusty/main i386 Packages
Get:2 https://packagecloud.io trusty/main Translation-en_US
Hit http://se.archive.ubuntu.com trusty-backports/main i386 Packages
Hit http://repository.egi.eu egi-igtf/core i386 Packages
Hit http://se.archive.ubuntu.com trusty-backports/restricted i386 Packages
Hit http://repository.egi.eu trusty Release.gpg
Hit http://se.archive.ubuntu.com trusty-backports/universe i386 Packages
Hit http://repository.egi.eu squeeze Release
Hit http://se.archive.ubuntu.com trusty-backports/multiverse i386 Packages
Hit http://repository.egi.eu squeeze-updates Release
Ign https://packagecloud.io trusty/main Translation-en_US
Ign https://packagecloud.io trusty/main Translation-en
Hit http://se.archive.ubuntu.com trusty-backports/main Translation-en
Hit http://se.archive.ubuntu.com trusty-backports/multiverse Translation-en
Hit http://se.archive.ubuntu.com trusty-backports/restricted Translation-en
Hit http://se.archive.ubuntu.com trusty-backports/universe Translation-en
Hit http://repository.egi.eu trusty Release
Hit http://repository.egi.eu squeeze/main amd64 Packages
Hit http://se.archive.ubuntu.com trusty Release
Hit http://se.archive.ubuntu.com trusty/main Sources
Hit http://se.archive.ubuntu.com trusty/restricted Sources
Hit http://repository.egi.eu squeeze-updates/main amd64 Packages
Hit http://se.archive.ubuntu.com trusty/universe Sources
Hit http://se.archive.ubuntu.com trusty/multiverse Sources
Hit http://se.archive.ubuntu.com trusty/main amd64 Packages
Hit http://se.archive.ubuntu.com trusty/restricted amd64 Packages
Hit http://se.archive.ubuntu.com trusty/universe amd64 Packages
Hit http://se.archive.ubuntu.com trusty/multiverse amd64 Packages
Hit http://se.archive.ubuntu.com trusty/main i386 Packages
Hit http://se.archive.ubuntu.com trusty/restricted i386 Packages
Hit http://se.archive.ubuntu.com trusty/universe i386 Packages
Hit http://se.archive.ubuntu.com trusty/multiverse i386 Packages
Hit http://se.archive.ubuntu.com trusty/main Translation-en
Hit http://se.archive.ubuntu.com trusty/multiverse Translation-en
Hit http://repository.egi.eu trusty/main amd64 Packages
Hit http://se.archive.ubuntu.com trusty/restricted Translation-en
Hit http://se.archive.ubuntu.com trusty/universe Translation-en
Ign http://repository.egi.eu egi-igtf/core Translation-en_US
Ign http://repository.egi.eu egi-igtf/core Translation-en
Ign http://se.archive.ubuntu.com trusty/main Translation-en_US
Ign http://se.archive.ubuntu.com trusty/multiverse Translation-en_US
Ign http://se.archive.ubuntu.com trusty/restricted Translation-en_US
Ign http://se.archive.ubuntu.com trusty/universe Translation-en_US
Hit http://security.ubuntu.com trusty-security InRelease
Hit http://security.ubuntu.com trusty-security/main Sources
Hit http://security.ubuntu.com trusty-security/restricted Sources
Hit http://security.ubuntu.com trusty-security/universe Sources
Hit http://security.ubuntu.com trusty-security/multiverse Sources
Hit http://security.ubuntu.com trusty-security/main amd64 Packages
Hit http://security.ubuntu.com trusty-security/restricted amd64 Packages
Hit http://security.ubuntu.com trusty-security/universe amd64 Packages
Hit http://security.ubuntu.com trusty-security/multiverse amd64 Packages
Hit http://security.ubuntu.com trusty-security/main i386 Packages
Hit http://security.ubuntu.com trusty-security/restricted i386 Packages
Hit http://security.ubuntu.com trusty-security/universe i386 Packages
Hit http://security.ubuntu.com trusty-security/multiverse i386 Packages
Hit http://security.ubuntu.com trusty-security/main Translation-en
Hit http://security.ubuntu.com trusty-security/multiverse Translation-en
Hit http://security.ubuntu.com trusty-security/restricted Translation-en
Hit http://security.ubuntu.com trusty-security/universe Translation-en
W: Failed to fetch http://packages.onedata.org/apt/ubuntu/dists/vivid/InRelease Unable to find expected entry 'main/binary-i386/Packages' in Release file (Wrong sources.list entry or malformed file)

W: Failed to fetch http://repository.egi.eu/sw/production/umd/3/debian/dists/squeeze/Release Unable to find expected entry 'main/binary-i386/Packages' in Release file (Wrong sources.list entry or malformed file)

W: Failed to fetch http://repository.egi.eu/sw/production/umd/3/debian/dists/squeeze-updates/Release Unable to find expected entry 'main/binary-i386/Packages' in Release file (Wrong sources.list entry or malformed file)

W: Failed to fetch http://repository.egi.eu/community/software/rocci.cli/4.3.x/releases/ubuntu/dists/trusty/Release Unable to find expected entry 'main/binary-i386/Packages' in Release file (Wrong sources.list entry or malformed file)

E: Some index files failed to download. They have been ignored, or old ones used instead.

Best,

Nanjiang

Installation 2_0_oneprovider_onezone

Hi,
I downloaded the getting-started scenario 2_0_oneprovider_onezone. Attempting to start ./run_onedata.sh --zone as the root user fails at * service_couchbase: start:
onezone-1                  |
onezone-1                  | Error: Service Error
onezone-1                  | Description: Action 'deploy' for a service 'onezone' terminated with an error.

In the oz_panel error log I have:
more error.log
2017-01-11 21:12:19.130 [error] <0.481.0>@service_utils:log:226 Step service_couchbase:start failed
Node: '[email protected]'
Reason: badarg
Stacktrace: [{erlang,list_to_integer,
["/bin/sh: 1: ulimit: error setting limit (Operation not permitted)\n0"],
[]},
{onepanel_shell,call,1,
[{file,"/build/oz-panel-AEwFIY/oz-panel-3.0.0.rc11/_build/package/lib/onepanel/src/modules/onepanel_shell.erl"},
{line,91}]},
{onepanel_shell,check_call,1,
[{file,"/build/oz-panel-AEwFIY/oz-panel-3.0.0.rc11/_build/package/lib/onepanel/src/modules/onepanel_shell.erl"},
{line,101}]},
{onepanel_rpc,apply,3,
[{file,"/build/oz-panel-AEwFIY/oz-panel-3.0.0.rc11/_build/package/lib/onepanel/src/modules/onepanel_rpc.erl"},
{line,41}]},
{rpc,'-handle_call_call/6-fun-0-',5,
[{file,"rpc.erl"},{line,187}]}]
2017-01-11 21:12:19.131 [error] <0.481.0>@service_utils:log:216 Action onezone:deploy failed due to: {service_couchbase,start,
{[],
[{'[email protected]',
{error,onepanel_rpc,apply,3,
undefined,badarg,
[{erlang,list_to_integer,
["/bin/sh: 1: ulimit: error setting limit (Operation not permitted)\n0"],
[]},
{onepanel_shell,call,1,
[{file,
"/build/oz-panel-AEwFIY/oz-panel-3.0.0.rc11/_build/package/lib/onepanel/src/modules/onepanel_shell.erl"},
{line,91}]},
{onepanel_shell,check_call,1,
[{file,
"/build/oz-panel-AEwFIY/oz-panel-3.0.0.rc11/_build/package/lib/onepanel/src/modules/onepanel_shell.erl"},
{line,101}]},
{onepanel_rpc,apply,3,
[{file,
"/build/oz-panel-AEwFIY/oz-panel-3.0.0.rc11/_build/package/lib/onepanel/src/modules/onepanel_rpc.erl"},
{line,41}]},
{rpc,'-handle_call_call/6-fun-0-',
5,
[{file,"rpc.erl"},{line,187}]}],
43}}]}}
2017-01-11 21:12:20.863 [error] <0.510.0>@onepanel_errors:translate:139 Function: onepanel_rpc:apply/3
Args: undefined
Reason: badarg
....
Any ideas what can cause it?
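The stack trace points at couchbase's init script failing to raise a ulimit inside the container ("ulimit: error setting limit (Operation not permitted)"). A quick way to inspect the limit the container actually gets, plus a commonly suggested workaround (an assumption on my part, not a verified fix for this scenario):

```shell
# Print the soft open-files limit visible to this shell; couchbase needs a
# high value and aborts when it cannot raise it.
soft_nofile=$(ulimit -Sn)
echo "soft nofile limit: $soft_nofile"
# If the container is not permitted to raise limits itself, grant them at
# start-up instead, e.g. (flag per docker run's documented options):
#   docker run --ulimit nofile=65536:65536 ... onedata/onezone
```

In docker-compose deployments the equivalent is an `ulimits:` entry on the service.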

Deploy cluster

Dear all,

I'm new to Onedata and I'm configuring the cluster, but I receive this error...

"We are sorry, but cluster deployment failed!
The reason of failure:
Service Error"

Could you help me please?

Thanks

Mario

Error after logging in to onedata panel

Hi,

I got this error directly after logging in to the onedata server page

ERROR: error:{badmatch,{error,[70,97,105,108,117,114,101,32,111,110,32,97,108,108,32,110,111,100,101,115,46]}}

STACK: page_spaces_management:body/0:102
page_spaces_management:main/0:60
wf_core:run/1:15
n2o_handler:handle/2:39
cowboy_handler:handler_handle/4:111
cowboy_protocol:execute/4:442
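Erlang frequently renders strings as lists of character codes; decoding the list in the badmatch above recovers the human-readable message. A quick sketch:

```shell
# Decode the char-code list from the error into readable text.
codes="70 97 105 108 117 114 101 32 111 110 32 97 108 108 32 110 111 100 101 115 46"
escaped=$(for c in $codes; do printf '\\%03o' "$c"; done)
msg=$(printf '%b' "$escaped")
echo "$msg"   # -> Failure on all nodes.
```

So the panel page is reporting "Failure on all nodes.", i.e. the spaces-management call failed on every cluster node.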

The command I used to start the onedata is

$ ./run_onedata.sh --oneprovider -n 1 --oneprovider-data-dir "/media/storage"

The same error occurred when I used the default command to start the onedata daemon

$ ./run_onedata.sh --oneprovider

No error message was shown on the command line when I started the onedata server

Any ideas?

Best,

Nanjiang

P.S. Below I have also pasted the full log of the two commands I used to start the onedata server

$ ./run_onedata.sh --oneprovider -n 1 --oneprovider-data-dir "/media/storage"
IMPORTANT: After each start wait for a message: Congratulations! oneprovider has been successfully started.
To ensure that the oneprovider is completely setup.
Pulling node1.oneprovider.onedata.example.com (onedata/oneprovider:3.0.0-RC1)...
3.0.0-RC1: Pulling from onedata/oneprovider
Digest: sha256:45426ad8227f909716eb30ce149cda415cc26e64b44de80118409fa91009348f
Status: Image is up to date for onedata/oneprovider:3.0.0-RC1
Starting 10oneprovideronedataorg_node1.oneprovider.onedata.example.com_1
Attaching to 10oneprovideronedataorg_node1.oneprovider.onedata.example.com_1
node1.oneprovider.onedata.example.com_1 | Starting op_panel: [ OK ]
node1.oneprovider.onedata.example.com_1 | Starting couchbase-server: [ OK ]
node1.oneprovider.onedata.example.com_1 | Starting cluster_manager: [ OK ]
node1.oneprovider.onedata.example.com_1 |
node1.oneprovider.onedata.example.com_1 | Congratulations! oneprovider has been successfully started.
node1.oneprovider.onedata.example.com_1 |
node1.oneprovider.onedata.example.com_1 | Container details:
node1.oneprovider.onedata.example.com_1 | * IP Address: 172.18.0.2
node1.oneprovider.onedata.example.com_1 | * Ports: 0.0.0.0:53 -> 53/tcp
node1.oneprovider.onedata.example.com_1 | 0.0.0.0:7443 -> 7443/tcp
node1.oneprovider.onedata.example.com_1 | 0.0.0.0:6666 -> 6666/tcp
node1.oneprovider.onedata.example.com_1 | 0.0.0.0:8876 -> 8876/tcp
node1.oneprovider.onedata.example.com_1 | 0.0.0.0:5556 -> 5556/tcp
node1.oneprovider.onedata.example.com_1 | 0.0.0.0:9443 -> 9443/tcp
node1.oneprovider.onedata.example.com_1 | 0.0.0.0:8877 -> 8877/tcp
node1.oneprovider.onedata.example.com_1 | 0.0.0.0:6665 -> 6665/tcp
node1.oneprovider.onedata.example.com_1 | 0.0.0.0:5555 -> 5555/tcp
node1.oneprovider.onedata.example.com_1 | 0.0.0.0:80 -> 80/tcp
node1.oneprovider.onedata.example.com_1 | 0.0.0.0:8443 -> 8443/tcp
node1.oneprovider.onedata.example.com_1 | 0.0.0.0:443 -> 443/tcp
node1.oneprovider.onedata.example.com_1 | 0.0.0.0:53 -> 53/udp
^CGracefully stopping... (press Ctrl+C again to force)
Stopping 10oneprovideronedataorg_node1.oneprovider.onedata.example.com_1 ... done
nanjiang@centos7-test1 ~/soft/getting-started/scenarios/1_0_oneprovider_onedata_org
$ ./run_onedata.sh --oneprovider
IMPORTANT: After each start wait for a message: Congratulations! oneprovider has been successfully started.
To ensure that the oneprovider is completely setup.
Pulling node1.oneprovider.onedata.example.com (onedata/oneprovider:3.0.0-RC1)...
3.0.0-RC1: Pulling from onedata/oneprovider
Digest: sha256:45426ad8227f909716eb30ce149cda415cc26e64b44de80118409fa91009348f
Status: Image is up to date for onedata/oneprovider:3.0.0-RC1
Recreating 10oneprovideronedataorg_node1.oneprovider.onedata.example.com_1
Attaching to 10oneprovideronedataorg_node1.oneprovider.onedata.example.com_1
node1.oneprovider.onedata.example.com_1 | Starting op_panel: [ OK ]
node1.oneprovider.onedata.example.com_1 | Starting couchbase-server: [ OK ]
node1.oneprovider.onedata.example.com_1 | Starting cluster_manager: [ OK ]
node1.oneprovider.onedata.example.com_1 | Starting op_worker: [ OK ]
node1.oneprovider.onedata.example.com_1 |
node1.oneprovider.onedata.example.com_1 | Congratulations! oneprovider has been successfully started.
node1.oneprovider.onedata.example.com_1 |
node1.oneprovider.onedata.example.com_1 | Container details:
node1.oneprovider.onedata.example.com_1 | * IP Address: 172.18.0.2
node1.oneprovider.onedata.example.com_1 | * Ports: 0.0.0.0:53 -> 53/tcp
node1.oneprovider.onedata.example.com_1 | 0.0.0.0:7443 -> 7443/tcp
node1.oneprovider.onedata.example.com_1 | 0.0.0.0:6666 -> 6666/tcp
node1.oneprovider.onedata.example.com_1 | 0.0.0.0:8876 -> 8876/tcp
node1.oneprovider.onedata.example.com_1 | 0.0.0.0:5556 -> 5556/tcp
node1.oneprovider.onedata.example.com_1 | 0.0.0.0:9443 -> 9443/tcp
node1.oneprovider.onedata.example.com_1 | 0.0.0.0:8877 -> 8877/tcp
node1.oneprovider.onedata.example.com_1 | 0.0.0.0:6665 -> 6665/tcp
node1.oneprovider.onedata.example.com_1 | 0.0.0.0:5555 -> 5555/tcp
node1.oneprovider.onedata.example.com_1 | 0.0.0.0:80 -> 80/tcp
node1.oneprovider.onedata.example.com_1 | 0.0.0.0:8443 -> 8443/tcp
node1.oneprovider.onedata.example.com_1 | 0.0.0.0:443 -> 443/tcp
node1.oneprovider.onedata.example.com_1 | 0.0.0.0:53 -> 53/udp
^CGracefully stopping... (press Ctrl+C again to force)
Stopping 10oneprovideronedataorg_node1.oneprovide

Events and notifications

Is there an event and/or notification system in onedata to which 3rd party applications can subscribe? What is the recommended way for an external application to subscribe for changes in data, metadata, ACL, groups, spaces, etc? How does one track transfer progress asynchronously?

Configurable User for data import

Hello,
we have seen cases where, with storage import, the root user does not have read access to files (the ACLs do not allow it). Do you think it would be possible to make the user running the import configurable?
thanks
Andrea

Usage of default POSIX credentials for direct I/O

Hello,
I was wondering if it would be possible to use the default POSIX credentials configured in LUMA (both UID and GID) to enable direct I/O via oneclient, without needing to add a mapping to local UIDs for each user.
There are cases where there is no need for a separate UID per user.
thanks
Andrea

onedatify upgrade does not print a message on success

On the contrary, an error message is printed to stdout:

onedatify upgrade --version "18.02.0-rc10" -s
sed: can't read : No such file or directory

Although the upgrade seems to succeed, an unsuspecting user would assume the opposite.
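For reference, "sed: can't read : No such file or directory" is what sed prints when handed an empty filename, typically an unset shell variable in the upgrade script. A defensive pattern (a sketch with a hypothetical helper name, not the actual onedatify code):

```shell
# Run an in-place sed only when the target filename is non-empty and exists.
safe_sed() {
  expr=$1; file=$2
  if [ -z "$file" ] || [ ! -f "$file" ]; then
    echo "safe_sed: skipping, no such file: '$file'" >&2
    return 0
  fi
  sed -i "$expr" "$file"
}
```

Guarding like this, plus an explicit success message at the end of the upgrade path, would remove the misleading output.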

Unable to create a new cluster on Oneprovider

When trying to create a new cluster on a Oneprovider, I see the Service Error shown in the attached file from the web portal. From the logs:

journalctl -xe

Jul 23 14:33:17 onedata-dev.hpc.cineca.it systemd[1]: Stopped LSB: oneprovider worker node.
-- Subject: Unit op_worker.service has finished shutting down
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel

-- Unit op_worker.service has finished shutting down.
Jul 23 14:33:18 onedata-dev.hpc.cineca.it systemd[1]: Starting LSB: oneprovider worker node...
-- Subject: Unit op_worker.service has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel

-- Unit op_worker.service has begun starting up.
Jul 23 14:33:18 onedata-dev.hpc.cineca.it op_worker[15106]: Starting up
Jul 23 14:33:19 onedata-dev.hpc.cineca.it systemd[1]: Started LSB: oneprovider worker node.
-- Subject: Unit op_worker.service has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel

-- Unit op_worker.service has finished starting up.

-- The start-up result is done.
Jul 23 14:33:27 onedata-dev.hpc.cineca.it sudo[13562]: pam_unix(sudo:session): session closed for user root
Jul 23 14:33:27 onedata-dev.hpc.cineca.it sudo[15361]: ubuntu : TTY=pts/2 ; PWD=/home/ubuntu ; USER=root ; COMMAND=/bin/systemctl status op_worker.service
Jul 23 14:33:27 onedata-dev.hpc.cineca.it sudo[15361]: pam_unix(sudo:session): session opened for user root by ubuntu(uid=0)
Jul 23 14:33:28 onedata-dev.hpc.cineca.it sudo[15361]: pam_unix(sudo:session): session closed for user root
Jul 23 14:33:29 onedata-dev.hpc.cineca.it sudo[15364]: ubuntu : TTY=pts/2 ; PWD=/home/ubuntu ; USER=root ; COMMAND=/bin/systemctl status op_worker.service
Jul 23 14:33:29 onedata-dev.hpc.cineca.it sudo[15364]: pam_unix(sudo:session): session opened for user root by ubuntu(uid=0)

and

ubuntu@onedata-dev:~$ sudo systemctl status op_worker.service
● op_worker.service - LSB: oneprovider worker node
Loaded: loaded (/etc/init.d/op_worker; bad; vendor preset: enabled)
Active: active (running) since Tue 2019-07-23 14:33:19 CEST; 9s ago
Docs: man:systemd-sysv-generator(8)
Process: 14571 ExecStop=/etc/init.d/op_worker stop (code=exited, status=0/SUCCESS)
Process: 14783 ExecStart=/etc/init.d/op_worker start (code=exited, status=0/SUCCESS)
Tasks: 129
Memory: 75.9M
CPU: 2.610s
CGroup: /system.slice/op_worker.service
├─14964 /usr/lib/op_worker/erts-9.2/bin/run_erl -daemon /tmp/op_worker// /var/log/op_worker exec /usr/sbin/op_worker console -config /etc/op_worker/overlay.config
├─14967 /usr/lib/op_worker/erts-9.2/bin/beam.smp -W w -P 1000000 -K true -stbt db -sbwt none -hmax 134217728 -hmaxk false -- -root /usr/lib/op_worker -progname op_worker -- -home /var/lib/op_wo
├─15124 erl_child_setup 65536
├─15152 sh -s disksup
├─15154 /usr/lib/op_worker/lib/os_mon-2.4.4/priv/bin/memsup
├─15155 /usr/lib/op_worker/lib/os_mon-2.4.4/priv/bin/cpu_sup
├─15208 inet_gethost 4
└─15209 inet_gethost 4

Jul 23 14:33:18 onedata-dev.hpc.cineca.it systemd[1]: Starting LSB: oneprovider worker node...
Jul 23 14:33:19 onedata-dev.hpc.cineca.it systemd[1]: Started LSB: oneprovider worker node.

It is not really easy to make this work :(

It seems to be some problem with Erlang. Any suggestions?

thanks a lot
Michele

Continue getting a partially-downloaded file through CDMI interface

Good Morning.

(I apologize if this question is not an issue or it is at the wrong repository)

We have noticed that we cannot resume partially-downloaded files when using the CDMI interface. This is a problem for big files, especially for remote users located on the other side of the Atlantic Ocean, where micro-cuts are common.

We tested curl -C- and wget -c. For example:

curl -O -C- -k -H "X-Auth-Token: MDA...go" "https://mon01-tic.ciemat.es/cdmi/test4/file.tar.gz"
** Resuming transfer from byte position 8093688
% Total % Received % Xferd Average Speed Time Time Time Current
0 148M 0 0 0 0 0 0 --:--:-- 0:00:01 --:--:-- 0
curl: (33) HTTP server doesn't seem to support byte ranges. Cannot resume.

wget -c -d --header="X-Auth-Token: MDA...go" https://mon01-tic.ciemat.es/cdmi/test4/test4/file.tar.gz
...
...
file.tar.gz 4%[> ] 7,11M 237KB/s in 32s
2020-09-17 14:34:51 (228 KB/s) - Connection closed at byte 7454712. Retrying.

However, HTTP range reads should work in the 19.02 release, but if I try an example like the one described in the documentation:

curl -k -H "X-Auth-Token: MDA...go" -H 'Range: 0-3' -X GET "https://mon01-tic.ciemat.es/cdmi/test4/file.tar.gz"

I always obtain the following as the content of the downloaded file (or sometimes as terminal output):

{"error":"Bad value: provided \"range\" could not be understood by the server"}
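One thing worth noting: the HTTP Range header syntax requires a unit prefix, i.e. `Range: bytes=0-3` rather than `Range: 0-3` (per RFC 7233), and the missing `bytes=` alone can produce a "could not be understood" rejection. A sketch, with a helper name of my own:

```shell
# Build a syntactically valid HTTP Range header for a byte span.
range_header() {
  printf 'Range: bytes=%s-%s' "$1" "$2"
}

echo "$(range_header 0 3)"   # -> Range: bytes=0-3
# Usage against the endpoint from the report (token elided):
#   curl -k -H "X-Auth-Token: ..." -H "$(range_header 0 3)" \
#        "https://mon01-tic.ciemat.es/cdmi/test4/file.tar.gz"
```

If the corrected header still fails, the resume problem with curl -C- and wget -c is likely server-side rather than a client syntax issue.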

The OneProvider that we are using is mon01-tic.ciemat.es with the 19.02.3 release, registered in datahub.egi.eu.

Thanks in advance.
Cheers.

Starting dockerized OneProvider fails if backend storage does not allow file modifications

When trying to start Oneprovider from the dockerized scenario 1.0, it fails to start when using a filesystem backend that does not allow file modifications.

We observed this problem with Onedata RC14, using an NFS-mounted dCache 2.16 as a backend. Upon startup, a 100-byte test file containing random ASCII characters was successfully written to the storage space:

total 2
drwxr-xr-x 3 root root 512 May  4 15:41 .
drwxr-xr-x 5 root root 512 May  4 15:40 ..
-rw-rw-rw- 1 root root 100 May  4 15:41 cemumsktjrpnxnkegizpitnxzdvtdorr

But subsequently, the startup scripts seem to try to modify that file, which fails. Console output is:

Digest: sha256:65dbf6b494719c51e05099e0c40c7da8724ae9e4f09a168b92f4187cdd2a3589
Status: Image is up to date for onedata/oneprovider:3.0.0-rc14
Creating oneprovider-1
Attaching to oneprovider-1
oneprovider-1                | Starting op_panel	[  OK  ]
oneprovider-1                | 
oneprovider-1                | Configuring oneprovider:
oneprovider-1                | * service_couchbase: configure
oneprovider-1                | * service_couchbase: start
oneprovider-1                | * service_couchbase: wait_for_init
oneprovider-1                | * service_couchbase: init_cluster
oneprovider-1                | * service_couchbase: rebalance_cluster
oneprovider-1                | * service_couchbase: status
oneprovider-1                | * service: save
oneprovider-1                | * service_cluster_manager: configure
oneprovider-1                | * service_cluster_manager: stop
oneprovider-1                | * service_cluster_manager: start
oneprovider-1                | * service_cluster_manager: status
oneprovider-1                | * service_op_worker: configure
oneprovider-1                | * service_op_worker: setup_certs
oneprovider-1                | * service_op_worker: stop
oneprovider-1                | * service_op_worker: start
oneprovider-1                | * service_op_worker: wait_for_init
oneprovider-1                | * service_op_worker: status
oneprovider-1                | * service_op_worker: add_storages
oneprovider-1                | 
oneprovider-1                | Error: Service Error
oneprovider-1                | Description: Action 'deploy' for a service 'oneprovider' terminated with an error.
oneprovider-1                | Module: service_op_worker
oneprovider-1                | Function: add_storages
oneprovider-1                | Hosts: dcache-onedata-01.desy.de
oneprovider-1                | For more information please check the logs.
oneprovider-1 exited with code 1

Attached is the op_panel_error.log.txt with more details about the problem. We can provide a tcpdump capture of the relevant NFS traffic if needed.

In general, dCache treats files as immutable once they are CLOSEd. The expectation is that client software that needs to modify a file does so by replacing the existing file, as, e.g., the vim editor does. If mutability of files is tested during system startup, more detailed documentation (is this a required property of any file-system backend?) and different error handling might be beneficial.
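For illustration, the replace-style write that dCache expects can be sketched as writing a temporary file on the same filesystem and atomically renaming it over the original (function names below are illustrative, not Onedata code):

```python
# Replace-style write: never reopen and modify a CLOSEd file; write a
# sibling temporary file and atomically rename it over the original.
import os
import tempfile

def replace_file(path, data):
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=directory)  # same filesystem as the target
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())               # make sure data reaches storage
        os.replace(tmp, path)                  # atomic rename over the original
    except BaseException:
        os.unlink(tmp)
        raise
```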

Job for onedatify.service failed because the control process exited with error code

Hello again, folks! As I mentioned in this other issue, after slightly tweaking the code in order to be able to fetch the beta script at this URL, we ran the script and started to configure it with the requested parameters as described here.

After the last question, we are getting the following error:

Expose storage as read only? (y/n, default: n): n
Job for onedatify.service failed because the control process exited with error code. See "systemctl status onedatify.service" and "journalctl -xe" for details.

and the script hangs for a long time at "Waiting for oneprovider to start........". This is the output we get by running systemctl status onedatify.service in a separate window:

ubuntu@carmat-onedata-test1:~$ systemctl status onedatify.service
● onedatify.service - Onedatify Service
   Loaded: loaded (/usr/lib/systemd/system/onedatify.service; enabled; vendor preset: enabled)
   Active: inactive (dead) (Result: exit-code) since Tue 2018-05-29 13:40:19 UTC; 1min 23s ago
  Process: 10094 ExecStartPre=/usr/local/bin/docker-compose -f /opt/onedata/onedatify/docker-compose.yml config (code=exited, status=203/EXEC)

May 29 13:40:19 carmat-onedata-test1 systemd[1]: Failed to start Onedatify Service.
May 29 13:40:19 carmat-onedata-test1 systemd[1]: onedatify.service: Unit entered failed state.
May 29 13:40:19 carmat-onedata-test1 systemd[1]: onedatify.service: Failed with result 'exit-code'.
May 29 13:40:19 carmat-onedata-test1 systemd[1]: onedatify.service: Service hold-off time over, scheduling restart.
May 29 13:40:19 carmat-onedata-test1 systemd[1]: Stopped Onedatify Service.
May 29 13:40:19 carmat-onedata-test1 systemd[1]: onedatify.service: Start request repeated too quickly.
May 29 13:40:19 carmat-onedata-test1 systemd[1]: Failed to start Onedatify Service.
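For context, systemd's status=203/EXEC means the ExecStartPre binary could not be executed at the configured path; a small shell sketch for checking that path (the docker-compose path comes from the status output above):

```shell
# 203/EXEC: systemd could not execute the configured binary. Check that
# the path referenced by the unit file actually exists and is executable.
check_exec_path() {
    if [ -x "$1" ]; then
        echo "ok: $1 is executable"
    else
        echo "missing: $1 is not an executable file"
        command -v docker-compose || true   # where is it really installed?
    fi
}

check_exec_path /usr/local/bin/docker-compose
```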

Any insights that could help us correct any configuration errors?

Many thanks. Cheers!

Error reported by localFeedRemoveOnedataUserToCredentialsMapping

Hello,
we are noticing that removing a user mapping from the LUMA local feed on a non-imported storage returns an error, even though the mapping is actually removed from the feed.

the mapping is there:

[Onedata REST CLI - 20.02.7]$ onepanel-rest-cli -u onepanel:$PASSPHRASE localFeedGetOnedataUserToCredentialsMapping id=$STORAGE_ID onedata_user_id=$ONEDATA_USER_ID --host https://$ONEPANEL_HOST:9443
{"storageCredentials":{"uid":"1000"},"displayUid":1000}

we try to remove it, but we get an error:

[Onedata REST CLI - 20.02.7]$ onepanel-rest-cli -u onepanel:$PASSPHRASE localFeedRemoveOnedataUserToCredentialsMapping id=$STORAGE_ID onedata_user_id=$ONEDATA_USER_ID --host https://$ONEPANEL_HOST:9443
{"error":{"id":"errorOnNodes","details":{"hostnames":["notebooks-cesnet-00.datahub.egi.eu"],"error":{"id":"requiresImportedStorage","details":{"storageId":"7093397c7214c48e89aa2d3df81ab992ch8e65"},"description":"Cannot apply for storage 7093397c7214c48e89aa2d3df81ab992ch8e65 - this operation requires an imported storage."}},"description":"Error on nodes notebooks-cesnet-00.datahub.egi.eu: Cannot apply for storage 7093397c7214c48e89aa2d3df81ab992ch8e65 - this operation requires an imported storage."}}

if we check again, the mapping has been correctly removed:

[Onedata REST CLI - 20.02.7]$ onepanel-rest-cli -u onepanel:$PASSPHRASE localFeedGetOnedataUserToCredentialsMapping id=$STORAGE_ID onedata_user_id=$ONEDATA_USER_ID --host https://$ONEPANEL_HOST:9443
{"error":{"id":"errorOnNodes","details":{"hostnames":["notebooks-cesnet-00.datahub.egi.eu"],"error":{"id":"notFound","description":"The resource could not be found."}},"description":"Error on nodes notebooks-cesnet-00.datahub.egi.eu: The resource could not be found."}}

the behaviour is the same if we use the REST API directly.

thanks
Andrea

Oneclient 20.02.x for Ubuntu 20.04 LTS

Hello,
would it be possible to have a package for Ubuntu 20.04 for the oneclient 19.02.x release?

I'm getting this error when I try to install it:

Either your platform is not easily detectable, is not supported by this
installer script, or does not yet have a package for oneclient.
Currently supported distributions are: ubuntu trusty, ubuntu wily, ubuntu xenial, centos 7 and fedora 23

thanks
Andrea

problems with ceph/s3 configuration of OneProvider

Hello.

I've deployed a working OneData infrastructure based on the "getting-started" scenario 3.0. I'm now trying to modify the configuration files to use Ceph and S3 as the storage backend of OneProvider. Both of these fail with the error:

Starting op_panel	[  OK  ]

Failed to start configuration process (code: 400)
For more information please check the logs.

Looking through the logs, the OnePanel service started correctly, but there are no other logs with errors/information to indicate what has gone wrong.

Is it possible to use the OneProvider docker image with something other than a POSIX file system? If so, is there a working example somewhere? If not, how can Ceph/S3 support be tested?

The result is the same whether trying S3 or Ceph storage. The configuration file that is used for Ceph is:

version: '2.0'

services:
  node1.oneprovider.localhost:
    image: onedata/oneprovider:3.0.0-rc11
    hostname: node1.oneprovider.localhost
    # dns: 8.8.8.8 # uncomment if container can't ping any domain
    container_name: oneprovider-1
    volumes:
        - "/var/run/docker.sock:/var/run/docker.sock"
        # configuration persistence
        - "${ONEPROVIDER_CONFIG_DIR}:/volumes/persistence"
        # data persistence
        - "${ONEPROVIDER_DATA_DIR}:/volumes/storage"
        # Oneprovider
        #- "${OP_PRIV_KEY_PATH}:/volumes/persistence/etc/op_panel/certs/key.pem"
        #- "${OP_CERT_PATH}:/volumes/persistence/etc/op_panel/certs/cert.pem"
        #- "${OP_CACERT_PATH}:/volumes/persistence/etc/op_panel/cacerts/cacert.pem"
        #- "${OP_CACERT_PATH}:/volumes/persistence/etc/op_worker/cacerts/cacert.pem"
    ports:
      - "53:53"
      - "53:53/udp"
      - "443:443"
      - "80:80"
      - "5555:5555"
      - "5556:5556"
      - "6665:6665"
      - "6666:6666"
      - "7443:7443"
      - "8443:8443"
      - "8876:8876"
      - "8877:8877"
      - "9443:9443"
    environment:
      #ONEPANEL_DEBUG_MODE: "true" # prevents container exit on configuration error
      ONEPANEL_BATCH_MODE: "true"
      ONEPROVIDER_CONFIG: |
        cluster:
          domainName: "oneprovider.localhost"
          nodes:
            n1:
              hostname: "node1"
          managers:
            mainNode: "n1"
            nodes:
              - "n1"
          workers:
            nodes:
              - "n1"
          databases:
            nodes:
              - "n1"
          storages:
            nfs1:
              type: "posix"
              mountPoint: "/volumes/storage"
            ceph:
              type: "Ceph"
              username: "client.admin"
              key: "aaaaaaaaaaaaaaaaaa/aaaaaaaaaaaaaaaaaaa=="
              monitorHostname: "185.19.30.30"
              clusterName: "24880865-497f-4aff-80cc-151a7d4fcc60"
              name: "24880865-497f-4aff-80cc-151a7d4fcc60"
        oneprovider:
          register: true
          name: "${PROVIDER_TYPE}_${PROVIDER_FQDN}"
          redirectionPoint: "https://${PROVIDER_FQDN}" # OR IP ADDRESS
          geoLatitude: ${GEO_LATITUDE}
          geoLongitude: ${GEO_LONGITUDE}
        onezone:
          domainName: "${ZONE_FQDN}" # OR IP ADDRESS
        onepanel:
          users:
            admin:
              password: "password"
              userRole: "admin"

Question about repackaging OneProvider

Hello,

I've gotten Indigo IAM working and was using it in the OneProvider storage config for S3. The storage config pointed at our S3 clone, which doesn't support v4 auth headers. The AWS libraries used by OneProvider were generating v4 auth headers, so the S3 configuration failed.

I looked to see if I could swap out the AWS libraries for something older, but got a bit confused. In the OneData git repo, it looks like the C++ AWS library is referenced, but in the onedata/oneprovider:3.0.0-rc11 container I didn't find it; instead I found Erlang BEAM files like /usr/lib/op_worker/lib/op_worker-3.0.0-rc11/ebin/{amazonaws_iam,s3_user}.beam.

Am I missing something with the C++ AWS library? And is there a way you know of that I could repackage OneProvider container with an older version of AWS libraries?

Onedatify email validation reports bad format when the address contains characters like "-"

When I use an email address containing characters like "-", the validation script reports an invalid email format.

########## Basic parameters ###########
Please enter the preety name of your oneprovider (default: oneprovider-01.localhost):
Enter a new administration password for the Oneprovider or the password will be autogenerated: XXXXXXXX
Your new password to login into Oneprovider is: XXXXXXXXXX
Please enter the email address that will be used as the emergency contact for this provider (default: ): [email protected]
Invalid email format. The email must match: [a-zA-Z0-9]+(.[a-zA-Z0-9]+)*@[a-zA-Z0-9]+(.[a-zA-Z0-9]+)+
Please enter the email address that will be used as the emergency contact for this provider (default: ):
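Assuming the dots in the printed pattern are escaped in the actual code, its character classes indeed exclude "-", so any hyphenated address fails. A small sketch of the rejection, plus a hypothetical widened pattern (the sample addresses are illustrative):

```python
import re

# The validator's pattern as printed (dots assumed to be escaped in the
# real code): only letters and digits are allowed in each label.
reported = re.compile(r'[a-zA-Z0-9]+(\.[a-zA-Z0-9]+)*@[a-zA-Z0-9]+(\.[a-zA-Z0-9]+)+')

print(bool(reported.fullmatch("john.doe@example.org")))  # plain address matches
print(bool(reported.fullmatch("john-doe@my-host.org")))  # '-' is rejected

# A hypothetical widened pattern that also admits '-' inside labels:
widened = re.compile(r'[a-zA-Z0-9]+([.-][a-zA-Z0-9]+)*@[a-zA-Z0-9]+([.-][a-zA-Z0-9]+)+')
print(bool(widened.fullmatch("john-doe@my-host.org")))
```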

We'd need a way to disable chunked uploading

As mentioned in #10, dCache treats files as immutable as soon as they are CLOSEd. With the current OneProvider, we observe a chunked-uploading mechanism that issues CLOSE calls after touching (creating) the file; subsequently, there are sequences of GETATTR, OPEN and CLOSE calls during the transfer.

Is this transfer mode configurable in any way? If not, we'd like to ask for the possibility to turn off chunked uploading altogether for a given OneProvider instance. Alternatively, not CLOSEing files between subsequent writes would also suffice.

Handshake error: incompatible Oneprovider version

Hi,

Installing the oneclient through the usual installation approach:

curl -sS  http://get.onedata.org/oneclient.sh | bash

currently installs oneclient version 18.02.3. However, the plg-cyfronet-01 oneprovider (in EGI DataHub) runs oneprovider version 18.02.1. Therefore, attempting to mount the Onedata space results in an "incompatible Oneprovider version" error:

Connecting to provider 'plg-cyfronet-01.OMITTED' using session ID: '15240073247038925449'...
E0912 15:21:17.000768 23797 configuration.cc:41] Fatal error during handshake: incompatible Oneprovider version
E0912 15:21:17.001421 23797 clprotoHandshakeResponseHandler.h:49] Error during handshake: handshake:10
E0912 15:21:17.001543 23797 translator.h:95] **Handshake error: incompatible Oneprovider version(10)**
E0912 15:21:17.002769 23797 clprotoClientBootstrap.cc:225] Connection refused by remote Oneprovider at plg-cyfronet-01.datahub.egi.eu:443: std::runtime_error: Error during handshake.
Connection refused - aborting...

How can I install a specific version of the oneclient through the installation script to overcome this problem?

Thanks in advance.

ghost files

Two files appeared in my space with weird names (rtqcpbkkeetssflvgvesdthwnuwahpmk and wbtrzyybharrlcqmryclhxdbytkfjems).
I was able to open one of them:

cat wbtrzyybharrlcqmryclhxdbytkfjems
qvbysfcnleyisnhwjuaceyzreideujtmzzhpwumcfalsoqorjnamlitrhnrjgpmsmpyhftnbixidmphkvcfqenijnpzkrywcrmar

Then those two files disappeared from the server's point of view (physically) but are still present in Onedata (web GUI and Oneclient), making them ghosts.


Compared to a previous Onedata version (rc8, for example), it now seems possible to delete such ghost files.

Multi-host Docker swarm question

I'm running a Docker Swarm based on scenario 3.1, with two nodes: one for the OneZone and the other for the OneProvider. I'm using the 3.0.0-rc15 docker images for this.

I have an overlay network linking the nodes together. The system spins up fine using this docker-compose file, which I've combined from the other two to simplify deployment:

version: '3.0'
services:
  onezone:
    image: onedata/onezone:3.0.0-rc15
    hostname: onezone.example.net
    # dns: 8.8.8.8 # uncomment if container can't ping any domain
    networks:
      - 'goose'
    deploy:
      placement:
        constraints: [node.hostname == swarm-manager]
    ports:
      - "53:53"
      - "53:53/udp"
      - "443:443"
      - "80:80"
      - "5555:5555"
      - "5556:5556"
      - "6665:6665"
      - "6666:6666"
      - "7443:7443"
      - "8443:8443"
      - "8876:8876"
      - "8877:8877"
      - "9443:9443"
  oneprovider:
    image: onedata/oneprovider:3.0.0-rc15
    hostname: oneprovider.example.net
    # dns: 8.8.8.8 # uncomment if container can't ping any domain
    networks:
      - 'goose'
    deploy:
      placement:
        constraints: [node.hostname == swarm-worker]

networks:
  goose:
    driver: 'overlay'

After it spins up, I try to hit my public IP on port 9443, but get an 'Empty reply from server', and the container's server logs contain:

Container details:
* IP Address: 10.0.0.3
* Ports: -

Logging on 'info' level:
[oz_panel] 2017-06-13 04:04:08.802 [info] <0.33.0> Application lager started on node '[email protected]'
[oz_panel] 2017-06-13 04:04:08.802 [info] <0.33.0> Application ctool started on node '[email protected]'
[oz_panel] 2017-06-13 04:04:08.823 [info] <0.33.0> Application inets started on node '[email protected]'
[oz_panel] 2017-06-13 04:04:08.849 [info] <0.33.0> Application gproc started on node '[email protected]'
[oz_panel] 2017-06-13 04:04:08.961 [info] <0.33.0> Application onepanel_gui started on node '[email protected]'
[oz_panel] 2017-06-13 04:04:08.988 [info] <0.33.0> Application bcrypt started on node '[email protected]'
[oz_panel] 2017-06-13 04:04:09.014 [info] <0.33.0> Application mnesia exited with reason: stopped
[oz_panel] 2017-06-13 04:04:09.105 [info] <0.33.0> Application mnesia started on node '[email protected]'
[oz_panel] 2017-06-13 04:04:09.967 [info] <0.286.0>@rest_listener:start:81 REST listener successfully started
[oz_panel] 2017-06-13 04:04:10.045 [info] <0.33.0> Application onepanel started on node '[email protected]'
[oz_panel] 2017-06-13 04:04:30.018 [error] emulator Error in process <0.489.0> on node '[email protected]' with exit value:
[oz_panel] {'HTTP_REQUEST',[{ranch_etls,accept_ack,2,[{file,"/build/oz-panel-MksOa_/oz-panel-3.0.0.rc15/_build/default/lib/etls/src/ranch_etls.erl"},{line,87}]},{cowboy_protocol,init,4,[{file,"/build/oz-panel-MksOa_/oz-panel-3.0.0.rc15/_build/default/lib/cowboy/src/cowboy_protocol.erl"},{line,91}]}]}
[oz_panel] 2017-06-13 04:04:30.018 [error] <0.489.0> Ranch listener https terminated with reason: 'HTTP_REQUEST' in ranch_etls:accept_ack/2 line 87

Is there a port binding I'm missing somewhere?

Here is the script that deploys the containers (modified from the existing, provided scripts):

#!/bin/bash

REPO_ROOT="${PWD//getting-started*}getting-started/"
AUTH_CONF="bin/config/auth.conf"
ZONE_COMPOSE_FILE="docker-compose-onezone.yml"
PROVIDER_COMPOSE_FILE="docker-compose-oneprovider.yml"
COMPOSE_FILE="docker-compose.yml"

DEBUG=0;

docker_stack_deploy_sh=("docker" "stack" "deploy")

#Default Onezone
#ZONE_FQDN="beta.onedata.org"

# Error handling.
# $1 - error string
die() {
  echo "${0##*/}: error: $*" >&2
  exit 1
}

# Get variables from env
set_defaults_if_not_defined_in_env() {
  # Default coordinates
  [[ -z ${GEO_LATITUDE+x} ]] && GEO_LATITUDE="50.068968"
  [[ -z ${GEO_LONGITUDE+x} ]] && GEO_LONGITUDE="19.909444"

  # Default paths
  [[ -z ${ONEPROVIDER_DATA_DIR+x} ]] && ONEPROVIDER_DATA_DIR="${PWD}/oneprovider_data/"
  [[ -z ${ONEPROVIDER_CONFIG_DIR+x} ]] && ONEPROVIDER_CONFIG_DIR="${PWD}/config_oneprovider/"
  [[ -z ${ONEZONE_CONFIG_DIR+x} ]] && ONEZONE_CONFIG_DIR="${PWD}/config_onezone/"
  [[ -z ${AUTH_PATH+x} ]] && AUTH_PATH="${REPO_ROOT}${AUTH_CONF}"

  # Default names for provider and zone
  [[ -z ${ZONE_NAME+x} ]] && ZONE_NAME="Example Zone"
  [[ -z ${PROVIDER_NAME+x} ]] && PROVIDER_NAME="Example Provider"
}

print_docker_compose_file() {
  local compose_file_name=$1
  echo "The docker compose file with substituted variables be used:
BEGINING===="

  # http://mywiki.wooledge.org/TemplateFiles
  LC_COLLATE=C
  while read -r; do
    while [[ $REPLY =~ \$(([a-zA-Z_][a-zA-Z_0-9]*)|\{([a-zA-Z_][a-zA-Z_0-9]*)\})(.*) ]]; do
      if [[ -z ${BASH_REMATCH[3]} ]]; then   # found $var
        printf %s "${REPLY%"${BASH_REMATCH[0]}"}${!BASH_REMATCH[2]}"
      else # found ${var}
        printf %s "${REPLY%"${BASH_REMATCH[0]}"}${!BASH_REMATCH[3]}"
      fi
      REPLY=${BASH_REMATCH[4]}
    done
    printf "%s\n" "$REPLY"
  done < "${compose_file_name}"
  echo "====END"
}

# As the name suggests
usage() {
  echo "Usage: ${0##*/}  [-h] [ --zone  | --provider ] [ --(with-|without-)clean ] [ --debug ]

Onezone usage: ${0##*/} --zone
Oneprovider usage: ${0##*/} --provider [ --provider-fqdn <fqdn> ] [ --zone-fqdn <fqdn> ] [ --provider-data-dir ] [ --set-lat-long ]

Example usage:
${0##*/} --provider --provider-fqdn 'myonedataprovider.tk' --zone-fqdn 'myonezone.tk' --provider-data-dir '/mnt/super_fast_big_storage/' --provider-conf-dir '/etc/oneprovider/'

Options:
  -h, --help           display this help and exit
  --name               a name of a provider or a zone
  --zone               starts onezone service
  --provider           starts oneprovider service
  --provider-fqdn      FQDN for oneprovider (not providing this option causes the script to try to guess your public IP using the http://ipinfo.io/ip service)
  --zone-fqdn          FQDN for onezone (defaults to beta.onedata.org)
  --provider-data-dir  a directory where the provider will store users' raw data
  --provider-conf-dir  a directory where the provider will store its configuration files
  --zone-conf-dir      a directory where the zone will store its configuration files
  --set-lat-long       sets latitude and longitude from the freegeoip.net service based on your public IP
  --clean              clean all onezone, oneprovider and oneclient configuration and data files - provided all docker containers using them have been shut down
  --with-clean         run --clean prior to setting up service
  --without-clean      prevents running --clean prior to setting up service
  --debug              write to STDOUT the docker-compose config and commands that would be executed
  --detach             run container in background and print container name"
  exit 0
}

get_log_lat(){
  ip="$(curl http://ipinfo.io/ip)"
  read -r GEO_LATITUDE GEO_LONGITUDE <<< $(curl freegeoip.net/xml/"$ip" | grep -E "Latitude|Longitude" | cut -d '>' -f 2 | cut -d '<' -f 1)
}

debug() {
  set -o posix ; set
}

is_clean_needed () {

  [[ -d "$ONEZONE_CONFIG_DIR" ]] && return 0
  [[ -d "$ONEPROVIDER_CONFIG_DIR" ]] && return 0
  [[ -d "$ONEPROVIDER_DATA_DIR" ]] && return 0

  [[ $(docker ps -aqf 'name=onezone') != "" ]] && return 0
  [[ $(docker ps -aqf 'name=oneprovider') != "" ]] && return 0

  return 1
}

clean() {
  echo "The cleaning procedure will need to run commands using sudo, in order to remove volumes created by docker. Please provide a password if needed."

  [[ $(git status --porcelain "$ZONE_COMPOSE_FILE") != "" ]] && echo "Warning: the file $ZONE_COMPOSE_FILE has changed, the cleaning procedure may not work!"
  [[ $(git status --porcelain "$PROVIDER_COMPOSE_FILE") != "" ]] && echo "Warning: the file $PROVIDER_COMPOSE_FILE has changed, the cleaning procedure may not work!"

  echo "Removing provider and/or zone config dirs..."
  sudo rm -rf "${ONEZONE_CONFIG_DIR}"
  sudo rm -rf "${ONEPROVIDER_CONFIG_DIR}"

  echo "Removing provider data dir..."
  sudo rm -rf "${ONEPROVIDER_DATA_DIR}"

  clean_scenario
}

batch_mode_check() {
  local service=$1
  local compose_file_name=$2

  grep 'ONEPANEL_BATCH_MODE: "true"' "$compose_file_name" > /dev/null
  if [[ $? -eq 0 ]] ; then

    RED="$(tput setaf 1)"
    GREEN="$(tput setaf 2)"
    RESET="$(tput sgr0)"

    echo -e "${RED}IMPORTANT: After each start wait for a message: ${GREEN}Congratulations! ${service} has been successfully started.${RESET}"
    echo -e "${RED}To ensure that the ${service} is completely setup.${RESET}"
  fi
}

handle_stack() {
  local n=$1
  local compose_file_name=$2
  local stack_name=$3

  mkdir -p "$ONEZONE_CONFIG_DIR"

  _docker_stack_deploy_sh() {
    CMD="ZONE_NAME=\"${ZONE_NAME}\" ZONE_DOMAIN_NAME=\"${ZONE_DOMAIN_NAME}\" PROVIDER_FQDN=\"${PROVIDER_FQDN}\" ZONE_FQDN=\"${ZONE_FQDN}\" AUTH_PATH=\"${AUTH_PATH}\" ONEZONE_CONFIG_DIR=\"${ONEZONE_CONFIG_DIR}\" ${docker_stack_deploy_sh[*]} ${@}"
    echo "Executing ${CMD}";
    eval ${CMD}
  }

  _docker_stack_deploy_sh --compose-file $compose_file_name $stack_name
  docker service ls
}

main() {

  if (( ! $# )); then
    usage
  fi

  set_defaults_if_not_defined_in_env

  local n=1
  local get_log_lat_flag=0

  while (( $# )); do
      case $1 in
          -h|-\?|--help)   # Call a "usage" function to display a synopsis, then exit.
              usage
              exit 0
              ;;
          --name)
              STACK_NAME=$2
              shift
              ;;
          --zone-name)
              ZONE_NAME="$2"
              shift
              ;;
          --provider-name)
              PROVIDER_NAME="$2"
              shift
              ;;
          --zone-conf-dir)
              ONEZONE_CONFIG_DIR=$2
              shift
              ;;
          --provider-data-dir)
              ONEPROVIDER_DATA_DIR=$2
              shift
              ;;
          --provider-conf-dir)
              ONEPROVIDER_CONFIG_DIR=$2
              shift
              ;;
          --without-clean)
              keep_old_config='y'
              ;;
          --with-clean)
              keep_old_config='n'
              ;;
          --debug)
              DEBUG=1
              ;;
          --zone-fqdn)
              ZONE_FQDN=$2
              shift
              ;;
          --provider-fqdn)
              PROVIDER_FQDN=$2
              shift
              ;;
          --swarm-manager-ip)
              SWARM_MANAGER_IP=$2
              shift
              ;;
          --set-lat-long)
              get_log_lat_flag=1
              ;;
          -?*)
              printf 'WARN: Unknown option (ignored): %s\n' "$1" >&2
              exit 1
              ;;
          *)
              die "no option $1"
              ;;
      esac
      shift
  done

  if is_clean_needed ; then
    if [[ -z $keep_old_config ]]; then
      echo "We detected configuration files, data and docker containers from a previous Onedata deployment.
  Would you like to keep them (y) or start a new deployment (n)?"
      read -r keep_old_config
    fi
    if [[ $keep_old_config == 'n' ]]; then
        clean
    fi
  fi

  if [[ $get_log_lat_flag -eq 1 ]]; then
    get_log_lat
  fi

  [[ -z ${SWARM_MANAGER_IP} ]] && SWARM_MANAGER_IP="$(docker-machine ip swarm-manager)"

  echo "Found Swarm Manager ${SWARM_MANAGER_IP}"

  handle_stack ${n} ${COMPOSE_FILE} ${STACK_NAME}

  echo "Swarm deployed."
}

clean_scenario() {
  : # pass
}

main "$@"

Many, many thanks for any attention to this.

Dustin

Ghost folder

Hello,
I accidentally created a ghost folder when a multiple-file upload failed inside it. The folder seems to be empty, but I receive "Internal Server Error" when I try to delete it.
I've also checked that an error appears if I try to rename the folder, yet after reloading the web interface the name has changed.

screenshot from 2019-02-28 16-11-36

Oneprovider version: 18.02.1

Share view's file browser crashes when the root space directory is shared

Sharing the root space directory results in a share that cannot be properly displayed in the GUI:
- 20.02.4: an "endless spinner" is displayed when navigating to the Files tab in the share view
- 20.02.5: there is no spinner, but there is information stating that the shared directory has been deleted (which is misleading)

The problem is caused by a bug in the logic specific for the root space directory, which is handled differently than other files/directories in the space. This bug has not been observed with subdirectories or files in the space.

A fix is planned for the upcoming release (20.02.6).

Canceling a scheduled or active transfer

For unknown reasons, some of my transfers got stuck. I can see them in the transfer web UI. Is there a way to cancel them? Or a trigger to force the queue to be processed?

I'm running on version 18.02.0-rc8

errors in systemctl services and documentation (OneProvider 20 and 19)

Good evening.

I have noticed (installing from the CentOS 7 repos) that OneProvider's services return errors when the systemctl status command is used.

There are some errors that appear only in cluster_manager and op_panel logs, and we don't know their source:

$ systemctl status cluster_manager
...
 su[4363]: (to cluster_manager) root on none
...
$
$ systemctl status op_panel
...
 su[6473]: (to cluster_manager) root on none
...

And there are others that appear in all three services:

...
 cluster_manager[4352]: Starting cluster_manager: touch: cannot touch ‘/opt/onedata/onedata2002/root/var/lock/su…directory
...
 op_worker[4905]: Starting op_worker: touch: cannot touch ‘/opt/onedata/onedata2002/root/var/lock/subsys/op_wor… directory
...
 op_panel[831]: Starting op_panel: touch: cannot touch ‘/opt/onedata/onedata2002/root/var/lock/subsys/op_panel’… directory

But they can be easily corrected if the corresponding directories are created:

$ mkdir -p /opt/onedata/onedata2002/root/var/lock/subsys/op_panel
$ mkdir -p /opt/onedata/onedata2002/root/var/lock/subsys/op_worker
$ mkdir -p /opt/onedata/onedata2002/root/var/lock/subsys/cluster_manager

Moreover, I have seen in the 20.02.4 release that the op_panel service is now able to start the other required services. If you try to start (or stop) the services in the ordered fashion suggested by the current documentation, it results in many errors and orphan processes (not managed by systemctl). I suggest including this note in that section:

$ systemctl disable couchbase-server
$ systemctl disable cluster_manager
$ systemctl disable op_worker
$ systemctl enable op_panel
$ # reboot 

Cheers and happy new year.

problem with small files: Resource temporarily unavailable

Dear onedata team,

following the admin quickstart guide, I created a setup on two machines (scenario 3: one machine for onezone and one for oneprovider). Both machines have a 10-core Xeon CPU and 64 GB RAM; data was stored on SSDs. The network connection between the two computers is 1 GbE.

On a third computer, I used the FUSE client to mount the Onedata filesystem. That worked without problems, and for large files the full network bandwidth was usable. However, for small files the performance was extremely bad. I tried to create a large number of files with a size of 2 kB. It was not possible to create more than 30 files per second. After a few thousand files were created, the write process crashed with the I/O error Resource temporarily unavailable. At the same time, the 10-core CPU on the oneprovider machine became fully utilized and remained so for several minutes. It is then not possible to create any new files.
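The workload described above can be reproduced with a short script (the target path is a placeholder; point it at the Oneclient mount):

```python
# Create many 2 kB files in the target directory and report the creation
# rate, mirroring the small-file workload described above.
import os
import time

def create_small_files(target, count=1000, size=2048):
    payload = os.urandom(size)
    start = time.time()
    for i in range(count):
        with open(os.path.join(target, f"small_{i:06d}.bin"), "wb") as f:
            f.write(payload)
    return count / (time.time() - start)

# usage: rate = create_small_files("/path/to/oneclient/mount")
```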

Question: is the behavior described above to be expected? Is Onedata intended only for sharing large files that do not change often?

Unable to subscribe to file events

Trying to subscribe using curl as shown here, I always receive a 405 error.

curl -v logs:

$ curl --tlsv1.2 -v -N -X GET -H "X-Auth-Token: $TOKEN" \ 
"https://$HOST/api/v3/oneprovider/changes/metadata/xxx"                                                  
Note: Unnecessary use of -X or --request, GET is already inferred.
*   Trying 149.156.11.36...
* TCP_NODELAY set
* Connected to plg-cyfronet-01.datahub.egi.eu (149.156.11.36) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
*   CAfile: /etc/ssl/certs/ca-certificates.crt
  CApath: none
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES256-GCM-SHA384
* ALPN, server accepted to use http/1.1
* Server certificate:
*  subject: CN=plg-cyfronet-01.datahub.egi.eu
*  start date: May 22 10:08:42 2019 GMT
*  expire date: Aug 20 10:08:42 2019 GMT
*  subjectAltName: host "plg-cyfronet-01.datahub.egi.eu" matched cert's "plg-cyfronet-01.datahub.egi.eu"
*  issuer: C=US; O=Let's Encrypt; CN=Let's Encrypt Authority X3
*  SSL certificate verify ok.
> GET /api/v3/oneprovider/changes/metadata/xxx HTTP/1.1
> Host: plg-cyfronet-01.datahub.egi.eu
> User-Agent: curl/7.64.1
> Accept: */*
> X-Auth-Token:  XXX
> 
< HTTP/1.1 405 Method Not Allowed
< allow: POST
< content-length: 0
< date: Fri, 31 May 2019 09:50:27 GMT
< server: Cowboy
< x-frame-options: SAMEORIGIN
< 
* Connection #0 to host plg-cyfronet-01.datahub.egi.eu left intact
* Closing connection 0

Oneprovider version: 18.02.1.
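The `allow: POST` header in the response above suggests that this version of the endpoint expects POST rather than GET. A hedged sketch of a retry (whether a JSON body is required, and its shape, depends on the Oneprovider version — consult the API reference for your release):

```shell
# Retry the subscription with POST, as advertised by the Allow header.
# $HOST and $TOKEN as in the original command; the endpoint path is
# unchanged, and any required request body is version-dependent.
curl --tlsv1.2 -v -N -X POST \
  -H "X-Auth-Token: $TOKEN" \
  -H "Content-Type: application/json" \
  "https://$HOST/api/v3/oneprovider/changes/metadata/xxx"
```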

Oneclient access token with path caveat - Bug

User A has Oneprovider properly installed. User A creates the space "data" and two folders inside it: "folder1" and "folder2". They want to give User B access to "folder1" only (but not to "folder2"). User A creates a Oneclient access token with the "folder1" path caveat in the "data" space.
User B uses Oneclient and sees only "folder1", as expected. However, if User A then creates a new folder, say "folder3", User B can see that new folder even though they must not.

So Onedata doesn't hide folders created after the access token is generated.

Cannot upload a file in a given space

When uploading a file in a given space, I get the following error in the provider log:

[D 2018-10-24 14:34:45.619 <0.20890.313>] Supervisor {<0.20890.313>,session_sup} started event_manager_sup:start_link(<<"6caf085ceff1cf43ec6e9cd2d4704673">>) at pid <0.20963.313>
[op_worker] [E 2018-10-24 14:46:36.987 <0.25498.313>] Error while streaming file upload from user <<"b3fb1e96153974ef97dba0bea310804b">> - error:{badmatch,
[op_worker] {error,
[op_worker] enospc}}
[op_worker] Stacktrace:
[op_worker] proc_lib:init_p_do_apply/3 line 247
[op_worker] cowboy_stream_h:request_process/3 line 227
[op_worker] cowboy_stream_h:execute/3 line 249
[op_worker] cowboy_handler:execute/2 line 37
[op_worker] dynamic_page_handler:init/2 line 37
[op_worker] page_file_upload:handle/2 line 57
[op_worker] page_file_upload:multipart/3 line 221
[op_worker] page_file_upload:stream_file/4 line 249

There is no error in the Onezone log, and if I create a new space on the same provider, everything works fine in the new space.
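The `enospc` atom in the stack trace is the POSIX error `ENOSPC` ("No space left on device"), so the filesystem backing this particular space has likely filled up. A quick check (the mount point below is a placeholder — substitute the storage path configured for the affected space):

```shell
# ENOSPC means a filesystem ran out of space. '/' is a placeholder;
# check the mount point that backs the affected space's storage.
df -h /
```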

Thanks,

Frederic

OpenID login using GitHub fails with version 18.02.0-rc8

Since I updated to version 18.02.0-rc8, it seems GitHub authentication fails.
Here is the log message:

Query params:
[{<<"error">>,<<"redirect_uri_mismatch">>},
 {<<"error_description">>,
  <<"The redirect_uri MUST match the registered callback URL for this application.">>},
 {<<"error_uri">>,
  <<"https://developer.github.com/apps/managing-oauth-apps/troubleshooting-authorization-request-errors/#redirect-uri-mismatch">>},
 {<<"state">>,<<"90xxxxxxxxxxxxxxxxxxxxxxx33">>}]

On github side, I filled out the form field "Authorization callback URL" with https://MY ONEZONE URL/validate_login

Any idea?

Configuration:
Centos 7.5
Using Docker

Possible bug in the onedatify.sh script

Hello Folk!

As per your documentation, we are trying the scenario of deploying Oneprovider to expose an existing data collection.

However, when pasting the copied command into the terminal on the Oneprovider machine (as superuser), we get the following error:

Downloaded Onedatify installation script /tmp/onedatify_18.02.0.beta3.sh
chmod: cannot access '/tmp/onedatify_18.02.0.beta3.sh': No such file or directory
/tmp/onescript: 110: /tmp/onescript: /tmp/onedatify_18.02.0.beta3.sh: not found

By inspecting the onedatify.sh script, we traced it to the URL where the onedatify_18.02.0.beta3.sh script is supposed to exist: https://packages.onedata.org/ . The onedatify folder there contains all the different versions of the Oneprovider script, but the filename convention for beta scripts uses "_" (underscores), not "." (dots). Therefore, at lines 95 and 97 of the onedatify.sh script, an extra `| tr . _` should be added when the onezone_version is a beta one.
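A minimal sketch of the suggested transformation (the exact filename on the server may differ; this only illustrates the `tr . _` substitution):

```shell
# Translate dots to underscores in a beta version string before
# building the script filename, as suggested above.
version="18.02.0.beta3"
script_name="onedatify_$(echo "$version" | tr . _).sh"
echo "$script_name"   # → onedatify_18_02_0_beta3.sh
```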

What do you think? Should this be addressed? Or is it an assumption that a user will always have an Onezone service deployed on a dedicated VM rather than using beta.onedata.org?

Thanks for your help. Cheers!

Question about Transfer performance and data routing

I'm learning about Onedata at the moment and I have a few questions about remote transfer performance.

  • What protocol is currently used to move data between distributed Oneproviders?
  • Is that the same protocol used to move data from a Oneprovider to a Oneclient?
  • Is data proxied through an intermediate node once it leaves the Oneprovider, or does it go point-to-point between Oneproviders and Oneclients?

Unable to Deploy OneProvider via Onedatify

Executing the Onedatify installation script obtained from the EGI DataHub on a freshly deployed Ubuntu 16.04 EC2 instance (ami-0565af6e282977273), with the following ports open to anyone (0.0.0.0/0) in the security group: 80, 22, 443, 9443, 665, and the following configuration:

  • subdomain delegation
  • storage type: posix

results in the following error log trace when checking onedatify logs:

Apr 17 14:39:07 ip-172-31-55-27 systemd[1]: Starting Onedatify Service...
Apr 17 14:39:08 ip-172-31-55-27 docker-compose[5225]: services:
Apr 17 14:39:08 ip-172-31-55-27 docker-compose[5225]:   node1.oneprovider:
Apr 17 14:39:08 ip-172-31-55-27 docker-compose[5225]:     container_name: onedatify-oneprovider-1
Apr 17 14:39:08 ip-172-31-55-27 docker-compose[5225]:     domainname: datahub.egi.eu
Apr 17 14:39:08 ip-172-31-55-27 docker-compose[5225]:     environment:
Apr 17 14:39:08 ip-172-31-55-27 docker-compose[5225]:       ONEPANEL_BATCH_MODE: "true"
Apr 17 14:39:08 ip-172-31-55-27 docker-compose[5225]:       ONEPANEL_GENERATED_CERT_DOMAIN: upvaws.datahub.egi.eu
Apr 17 14:39:08 ip-172-31-55-27 docker-compose[5225]:       ONEPANEL_GENERATE_TEST_WEB_CERT: "false"
Apr 17 14:39:08 ip-172-31-55-27 docker-compose[5225]:       ONEPANEL_LOG_LEVEL: error
Apr 17 14:39:08 ip-172-31-55-27 docker-compose[5225]:       ONEPANEL_TRUST_TEST_CA: "false"
Apr 17 14:39:08 ip-172-31-55-27 docker-compose[5225]:       ONEPROVIDER_CONFIG: "# Cluster configuration allows to specify distribution\
Apr 17 14:39:08 ip-172-31-55-27 docker-compose[5225]:         \ of Oneprovider\n# components over multiple nodes - here we deploy entire\
Apr 17 14:39:08 ip-172-31-55-27 docker-compose[5225]:         \ service on\n# a single node\ncluster:\n  domainName: \"datahub.egi.eu\"\n\
Apr 17 14:39:08 ip-172-31-55-27 docker-compose[5225]:         \  nodes:\n    n1:\n      hostname: \"upvaws\"\n      #externalIp: \"10.87.23.53\"\
Apr 17 14:39:08 ip-172-31-55-27 docker-compose[5225]:         \n  managers:\n    mainNode: \"n1\"\n    nodes:\n      - \"n1\"\n  workers:\n\
Apr 17 14:39:08 ip-172-31-55-27 docker-compose[5225]:         \    nodes:\n      - \"n1\"\n  databases:\n    # Per node Couchbase cache\
Apr 17 14:39:08 ip-172-31-55-27 docker-compose[5225]:         \ size in MB for all buckets\n    serverQuota: 4096\n    # Per bucket Couchbase\
Apr 17 14:39:08 ip-172-31-55-27 docker-compose[5225]:         \ cache size in MB across the cluster\n    bucketQuota: 1024\n    nodes:\n\
Apr 17 14:39:08 ip-172-31-55-27 docker-compose[5225]:         \      - \"n1\"\n  #storages:\n    # Add initial storage resource (optional\
Apr 17 14:39:08 ip-172-31-55-27 docker-compose[5225]:         \ - can be added later)\n    # In this example NFS mounted at /mnt/nfs on\
Apr 17 14:39:08 ip-172-31-55-27 docker-compose[5225]:         \ the host, which is\n    # mounted to /volumes/storage directory inside Docker\
Apr 17 14:39:08 ip-172-31-55-27 docker-compose[5225]:         \ container\noneprovider:\n  # Automatically register this Oneprovider in\
Apr 17 14:39:08 ip-172-31-55-27 docker-compose[5225]:         \ Onezone\n  register: true\n  name: \"oneprovideraws\"\n  # Deprecated field\
Apr 17 14:39:08 ip-172-31-55-27 docker-compose[5225]:         \ used in 17.* versions\n  redirectionPoint: \"https://upvaws.datahub.egi.eu\"\
Apr 17 14:39:08 ip-172-31-55-27 docker-compose[5225]:         \n  domain: \"upvaws.datahub.egi.eu\"\n  # Oneprovider Registration token\
Apr 17 14:39:08 ip-172-31-55-27 docker-compose[5225]:         \ acquired in runtime from Onezone\n  token: \"\"\n  subdomain: \"upvaws\"\
Apr 17 14:39:08 ip-172-31-55-27 docker-compose[5225]:         \n  subdomainDelegation: \"true\"\n  letsEncryptEnabled: true\n  adminEmail:\
Apr 17 14:39:08 ip-172-31-55-27 docker-compose[5225]:         \ \"[email protected]\"\n  geoLatitude: 39.0481\n  geoLongitude: -77.4728\n\
Apr 17 14:39:08 ip-172-31-55-27 docker-compose[5225]:         onezone:\n  # Assign custom name to the Oneprovider instance\n  domainName:\
Apr 17 14:39:08 ip-172-31-55-27 docker-compose[5225]:         \ \"datahub.egi.eu\"\nonepanel:\n  # Create initially 1 administrator and\
Apr 17 14:39:08 ip-172-31-55-27 docker-compose[5225]:         \ 1 regular user\n  users:\n    \"admin\":\n      password: \"TVKmJcWaQzuKc\"\
Apr 17 14:39:08 ip-172-31-55-27 docker-compose[5225]:         \n      userRole: \"admin\"\n"
Apr 17 14:39:08 ip-172-31-55-27 docker-compose[5225]:     extra_hosts:
Apr 17 14:39:08 ip-172-31-55-27 docker-compose[5225]:     - upvaws.datahub.egi.eu:127.0.1.1
Apr 17 14:39:08 ip-172-31-55-27 docker-compose[5225]:     hostname: upvaws
Apr 17 14:39:08 ip-172-31-55-27 docker-compose[5225]:     image: onedata/oneprovider:18.02.0-rc13
Apr 17 14:39:08 ip-172-31-55-27 docker-compose[5225]:     network_mode: host
Apr 17 14:39:08 ip-172-31-55-27 docker-compose[5225]:     ulimits:
Apr 17 14:39:08 ip-172-31-55-27 docker-compose[5225]:       core: 0
Apr 17 14:39:08 ip-172-31-55-27 docker-compose[5225]:     volumes:
Apr 17 14:39:08 ip-172-31-55-27 docker-compose[5225]:     - /tmp:/onedatify/tmp:rw
Apr 17 14:39:08 ip-172-31-55-27 docker-compose[5225]:     - /var/run/docker.sock:/var/run/docker.sock:rw
Apr 17 14:39:08 ip-172-31-55-27 docker-compose[5225]:     - /opt/onedata/onedatify/oneprovider_conf:/volumes/persistence:rw
Apr 17 14:39:08 ip-172-31-55-27 docker-compose[5225]:     - /opt/onedata/onedatify/op-panel-overlay.config:/etc/op_panel/overlay.config:rw
Apr 17 14:39:08 ip-172-31-55-27 docker-compose[5225]:     - /opt/onedata/onedatify/op-worker-overlay.config:/etc/op_worker/overlay.config:rw
Apr 17 14:39:08 ip-172-31-55-27 docker-compose[5225]: version: '2.0'
Apr 17 14:39:08 ip-172-31-55-27 systemd[1]: Started Onedatify Service.
Apr 17 14:39:08 ip-172-31-55-27 docker-compose[5234]: Pulling node1.oneprovider (onedata/oneprovider:18.02.0-rc13)...
Apr 17 14:39:08 ip-172-31-55-27 docker-compose[5234]: 18.02.0-rc13: Pulling from onedata/oneprovider
Apr 17 14:39:47 ip-172-31-55-27 docker-compose[5234]: Digest: sha256:15a448f71db8468ca8b3d0c2cfeeafda504d74bbb4b16769ff06305758dcfcaf
Apr 17 14:39:47 ip-172-31-55-27 docker-compose[5234]: Status: Downloaded newer image for onedata/oneprovider:18.02.0-rc13
Apr 17 14:39:47 ip-172-31-55-27 docker-compose[5234]: Creating onedatify-oneprovider-1 ...
Apr 17 14:39:48 ip-172-31-55-27 docker-compose[5234]: [91B blob data]
Apr 17 14:39:48 ip-172-31-55-27 docker-compose[5234]: onedatify-oneprovider-1 | Copying missing persistent files...
Apr 17 14:39:48 ip-172-31-55-27 docker-compose[5234]: onedatify-oneprovider-1 | Done.
Apr 17 14:39:48 ip-172-31-55-27 docker-compose[5234]: onedatify-oneprovider-1 | Starting op_panel...
Apr 17 14:39:54 ip-172-31-55-27 docker-compose[5234]: onedatify-oneprovider-1 | [  OK  ] op_panel started
Apr 17 14:39:54 ip-172-31-55-27 docker-compose[5234]: onedatify-oneprovider-1 |
Apr 17 14:39:54 ip-172-31-55-27 docker-compose[5234]: onedatify-oneprovider-1 | Configuring oneprovider:
Apr 17 14:39:55 ip-172-31-55-27 docker-compose[5234]: onedatify-oneprovider-1 | * service_couchbase: configure
Apr 17 14:39:55 ip-172-31-55-27 docker-compose[5234]: onedatify-oneprovider-1 | * service_couchbase: start
Apr 17 14:40:00 ip-172-31-55-27 docker-compose[5234]: onedatify-oneprovider-1 | * service_couchbase: wait_for_init
Apr 17 14:40:01 ip-172-31-55-27 docker-compose[5234]: onedatify-oneprovider-1 | * service_couchbase: init_cluster
Apr 17 14:40:02 ip-172-31-55-27 docker-compose[5234]: onedatify-oneprovider-1 |
Apr 17 14:40:02 ip-172-31-55-27 docker-compose[5234]: onedatify-oneprovider-1 | Error: Service Error
Apr 17 14:40:02 ip-172-31-55-27 docker-compose[5234]: onedatify-oneprovider-1 | Description: Action 'deploy' for a service 'oneprovider' terminated with an error.
Apr 17 14:40:02 ip-172-31-55-27 docker-compose[5234]: onedatify-oneprovider-1 | Module: service_couchbase
Apr 17 14:40:02 ip-172-31-55-27 docker-compose[5234]: onedatify-oneprovider-1 | Function: init_cluster
Apr 17 14:40:02 ip-172-31-55-27 docker-compose[5234]: onedatify-oneprovider-1 | Details by host:
Apr 17 14:40:02 ip-172-31-55-27 docker-compose[5234]: onedatify-oneprovider-1 | * upvaws.datahub.egi.eu
Apr 17 14:40:02 ip-172-31-55-27 docker-compose[5234]: onedatify-oneprovider-1 |         Internal Error:
Apr 17 14:40:02 ip-172-31-55-27 docker-compose[5234]: onedatify-oneprovider-1 |         Server encountered an unexpected error.
Apr 17 14:40:02 ip-172-31-55-27 docker-compose[5234]: onedatify-oneprovider-1 |
Apr 17 14:40:02 ip-172-31-55-27 docker-compose[5234]: onedatify-oneprovider-1 | For more information please check the logs.
Apr 17 14:40:03 ip-172-31-55-27 docker-compose[5234]: onedatify-oneprovider-1 exited with code 1
Apr 17 14:40:03 ip-172-31-55-27 docker-compose[5234]: Aborting on container exit...
Apr 17 14:40:03 ip-172-31-55-27 systemd[1]: onedatify.service: Main process exited, code=exited, status=1/FAILURE
Apr 17 14:40:03 ip-172-31-55-27 docker-compose[6137]: Removing onedatify-oneprovider-1 ...
Apr 17 14:40:03 ip-172-31-55-27 docker-compose[6137]: [55B blob data]
Apr 17 14:40:03 ip-172-31-55-27 systemd[1]: onedatify.service: Unit entered failed state.
Apr 17 14:40:03 ip-172-31-55-27 systemd[1]: onedatify.service: Failed with result 'exit-code'.
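One hedged hypothesis, not a confirmed root cause: the generated configuration above sets a Couchbase `serverQuota` of 4096 MB, and Couchbase refuses to initialise a cluster when the quota exceeds what the node can offer (roughly 80% of its RAM), so a small EC2 instance could make `init_cluster` fail exactly like this. Checking the node's memory is a cheap first step:

```shell
# Compare total RAM against the 4096 MB Couchbase serverQuota in the
# generated config; a quota above ~80% of RAM makes init_cluster fail.
grep MemTotal /proc/meminfo
```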

rtransfer_link-0.1.0 continuously fails with unsupported arguments

Good evening.

I'm testing clean deployments of Oneprovider based on the 20.02.4 and 19.02.5 releases, and the rtransfer_link-0.1.0 service keeps failing because it is launched with unsupported parameters.

This entry repeats continuously in root/var/log/op_worker. Although it was obtained from the 19.02 release, the entries for 20.02.4 are similar:

[I 13:17:02.554 <0.2547.11>] Starting port  /opt/onedata/onedata1902/root/usr/lib64/op_worker/lib/rtransfer_link-0.1.0/priv/link --v=0 --log_dir=/opt/onedata/onedata1902/root/var/log/op_worker/link --log_link=/opt/onedata/onedata1902/root/var/log/op_worker --server_port=6665 --single_fetch_max_size=12582912 --graphite_namespace_prefix=rtransfer_link --graphite_reporting_period=60 --max_open_descriptors=1000 --descriptor_cache_duration=60000 --descriptor_cache_tick=1000 --number_of_data_conns=16 --recv_buf_size=8388608 --send_buf_size=8388608 --use_ssl --ssl_cert_path=/tmp/tmp.CM4K6kEP3m --ssl_key_path=/opt/onedata/onedata1902/root/etc/op_panel/certs/web_key.pem --max_incoming_buffered_size=20971520 --storage_buckets=100 --send_congestion_flavor=bbr --shaper_quantum_ms_size=50 --helper_workers=100 --webdav_helper_workers=25 --throughput_probe_interval=25
ERROR: unknown command line flag 'log_dir'
ERROR: unknown command line flag 'log_link'
ERROR: unknown command line flag 'v'
[E 13:17:02.671 <0.2547.11>] link port exited with 1
[E 13:17:02.671 <0.626.0>] Supervisor rtransfer_link_sup had child rtransfer_link_port started with rtransfer_link_port:start_link() at <0.2547.11> exit with reason {shutdown,link_port_down} in context child_terminated

If you debug it manually, logging into the op_worker account, sourcing the /opt/onedata/onedataxxx/enable profile and then executing the command above, it reports log_dir, log_link and v as unsupported parameters. When you remove them, the command seems to work.

The only way I could provisionally work around the issue is with a Bash wrapper that removes these arguments. For example, for the 20.02 release:

$
$ cd /opt/onedata/onedata2002/root/usr/lib64/op_worker/lib/rtransfer_link-0.1.0/priv
$ cat link.wrapper

#!/bin/bash

# Drop the flags this build of the link binary rejects.
ARGS=()
for var in "$@"; do
   # Ignore bad arguments
   [[ "$var" == --v=* ]] && continue
   [[ "$var" == --log_dir=* ]] && continue
   [[ "$var" == --log_link=* ]] && continue

   ARGS+=("$var")
done

# Forward the remaining arguments to the real binary (renamed to link.bin).
exec /opt/onedata/onedata2002/root/usr/lib64/op_worker/lib/rtransfer_link-0.1.0/priv/link.bin "${ARGS[@]}"

$
$ cp link link.bin
$ cp link.wrapper link
$ chmod +x link
$ # reboot or systemctl restart op_worker

I remember this behaviour occasionally appeared in previous releases, and it could mask other kinds of errors/bugs/misconfigurations.

Can you please give us some support?
(The behaviour is easily reproducible if you install a Oneprovider from the CentOS 7 repos.)

Thank you in advance.
Happy new year.

port numbers

Is it possible to configure different port numbers to deploy scenario 3.0?

Unable to get groups and roles from delegated credentials through EGI Check-in

Good evening.

(This issue concerns code in the oz-worker repository, but it affects the usability of the whole Onedata service.)

The custom parser that implements groups for the EGI Check-in IdP (https://aai.egi.eu) does not take groups from delegated credentials into consideration. This problem can be observed at https://datahub.egi.eu.

For example, I can log in to https://datahub.egi.eu with my eduTEAMS credential passing through EGI Check-in (https://aai.egi.eu). Then https://datahub.egi.eu obtains an "eduperson_entitlement" similar to this (for my LAGO-AAI virtual organisation):

urn:geant:eduteams.org:service:eduteams:group:LAGO-AAI#eduteams.org

However, the function "parse_egi_entitlement", at line 104 of (https://github.com/onedata/oz-worker/blob/05a6aca6a171df9767613368c15e4a9ddc2eeca4/rel/files/auth_plugins/custom_entitlement_parser.erl), only accepts URNs starting with "urn:mace:egi.eu:group:", so any group from eduTEAMS or any other delegated IdP will be ignored.

I think you should consider allowing any NAMESPACE after "urn:" and before ":group:".
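A minimal sketch of the more permissive matching, in shell rather than the Erlang of oz-worker (the pattern and the first example entitlement are illustrations of the idea, not the proposed production code):

```shell
# Accept any NAMESPACE between "urn:" and ":group:", instead of only
# the fixed "urn:mace:egi.eu:group:" prefix.
pattern='^urn:.+:group:'
for ent in \
    "urn:mace:egi.eu:group:vo.example.org#aai.egi.eu" \
    "urn:geant:eduteams.org:service:eduteams:group:LAGO-AAI#eduteams.org"
do
  if [[ "$ent" =~ $pattern ]]; then
    echo "accepted: $ent"
  else
    echo "rejected: $ent"
  fi
done
```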

--

Another issue is how permissions are set. Following the AARC-G002 recommendations (https://aarc-project.eu/wp-content/uploads/2017/11/AARC-JRA1.4A-201710.pdf), any entitlement declaring a group should imply granting the user permissions at least as a "member".

Additionally, the ROLE component is optional in AARC-G002. This has traditionally been solved by taking the last SUBGROUP as the role of the preceding group. For example, for the "manager" role:

urn:geant:eduteams.org:service:eduteams:group:LAGO-AAI:manager#eduteams.org

Nevertheless, although EGI also follows the AARC recommendations, I know it is making the ROLE component mandatory (https://wiki.egi.eu/wiki/AAI_guide_for_SPs). Consequently, I have no suggestion for solving this issue, but it affects compatibility.
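The "last SUBGROUP as role" convention described above can be sketched with plain shell string operations (a hedged illustration; the real parser in oz-worker is Erlang):

```shell
# Split an AARC-G002 entitlement into a group path and a candidate role
# taken from the last subgroup, per the convention described above.
ent="urn:geant:eduteams.org:service:eduteams:group:LAGO-AAI:manager#eduteams.org"
path="${ent%%#*}"        # drop the "#<authority>" suffix
path="${path#*:group:}"  # keep GROUP[:SUBGROUP...] only -> LAGO-AAI:manager
role="${path##*:}"       # last component as the candidate role -> manager
group="${path%:*}"       # the preceding group -> LAGO-AAI
echo "group=$group role=$role"   # → group=LAGO-AAI role=manager
```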

Thank you very much in advance.
Cheers.

Documentation for backing up / restoring oneprovider shares

Good Evening.

After rebooting my Oneprovider instance, users were unable to view the data in the configured spaces. The Oneprovider panel looked fine, but the "Browse Files" link didn't appear for the users. It seems the stored permissions were lost (was Couchbase corrupted? Possibly).

As this is not the first time it has happened, I finally reinstalled the instance, using the 1902 repo (http://packages.onedata.org/yum/1902/centos/7x) and forcing the 19.02.3 release for the cluster_manager, op_worker, op_panel and oneprovider packages. Now it seems I can reboot the system without problems, but I have lost all the previous metadata (not the data itself, which remains in the filesystem tree).

Therefore, I have doubts about how to recover metadata and user permissions for every file after any kind of event (hangs, Couchbase corruption or hardware failures of the Oneprovider). I suspect both are stored in the Couchbase database, but I have not found documentation about this.

Can you please add a section with backup tips? (Specifically, with methods to rebuild metadata and permissions for every file in a space.)
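In the absence of official guidance, one hedged option is a periodic backup of the Couchbase database itself using Couchbase's own `cbbackup` tool. This is a sketch rather than a documented Onedata procedure, and the endpoint, credentials and backup path are all assumptions:

```shell
# Back up the Couchbase database that holds Oneprovider metadata.
# Endpoint and credentials are assumptions; adjust to your deployment.
if command -v cbbackup >/dev/null 2>&1; then
  cbbackup http://127.0.0.1:8091 /backup/couchbase -u admin -p 'secret'
else
  echo "cbbackup not found on PATH"
fi
```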

Thank you very much in advance.
Cheers.
