LizardFS

Introduction

LizardFS is a highly reliable, scalable and efficient distributed file system. It spreads data over a number of physical servers, making it visible to an end user as a single file system.

LizardFS Windows Client: https://lizardfs.com/windows-client

For support please contact [email protected] or visit lizardfs.com.

Useful links

You can also join the community IRC channel, #lizardfs on FreeNode.

Participation

Please feel free to contribute to the documentation, the wiki and the code. Rules for submitting patches can be found in the Developer's Guide in the LizardFS documentation: https://dev.lizardfs.com/docs/devguide/submitting.html

We would love to hear about your use cases

We would like to engage more with our community, so if you can, please email us about your use case at: [email protected].

It will help us make LizardFS even better and will help others choose LizardFS for their storage.

You could mention the following:

- purpose of the installation,
- RAW capacity,
- features used,
- HW architecture (including network),
- number of concurrent users,
- some performance stats,
- whether or not you want us to anonymise the data.

Thank you for your help.

The LizardFS team.

lizardfs's People

Contributors

amokhuginnsson, blink69, darkhaze, etam, fretek, ictus4u, jedisct1, kazik208, kerneltravel, kskalski, lamvak, lgsilva3087, malcom, marcinsulikowski, mouseratti, onlyjob, pbeza, pilusx, pjanicki, przemekpjski, psarna, pyrovski, qbm, sfindeisen, sirnolaan, smurfix, three3, trzysiek, wojciesh, yummybian


lizardfs's Issues

Finish removing references to MooseFS and MFS

Hi there,

It would be awesome if we could completely strip "MFS", "mfs", "MooseFS" and variants thereof from LizardFS (with the exception of the CHANGELOG and other crucial areas). At this point they just add confusion.

I can't easily do a pull request right now, so instead here's the diff where I made these changes. I also documented below the best way to make the changes. https://github.com/Zorlin/lizardfs/commit/bf4e09510a0cb725ca449b09d512b3befd610b15

Thanks,
~ Benjamin

Please consider implementing erasure-coded tier

LizardFS is awesome, but it would be a killer storage solution if/when an erasure-coded tier is implemented. I see such a feature as "cold" storage behind the current replicated storage implementation, which could act as a cache for the slower erasure-coded tier. As for the particular choice of erasure algorithm, it could be the Mojette Transform (used in RozoFS) or Jerasure coding (used by Ceph and OpenStack Swift). The Jerasure library is already available in Debian. Thanks.

chunkserver: detect mount points

It is quite possible for a failed HDD not to be mounted at its corresponding mount point on reboot.
However, in such a case the chunkserver still creates its files in the designated directory as per mfshdd.cfg.
Unfortunately, the lack of mount point detection means the chunkserver may end up writing to the rootfs instead of the storage mount. Such an error would be hard to detect, as the chunkserver starts as if everything were OK.

Please consider detecting whether a configured storage location is mounted, and fail to start the corresponding storage if it isn't.

IMHO a check with the mountpoint utility or similar might be appropriate. According to the documentation, the chunkserver has to use mount points to properly manage free space, so perhaps mounted storage could be the only supported configuration... I'm not even sure it is worth making this override-able if bind mounts are allowed as well...

The following link may provide some insights into mount point detection logic:

3.9.4: FTBFS@i686 with "-DENABLE_TESTS=YES"

On i686, LizardFS fails to build from source (FTBFS) when built with "-DENABLE_TESTS=YES":

Building CXX object src/common/CMakeFiles/mfscommon_tests.dir/serialization_macros_unittest.cc.o 
cd /tmp/buildd/lizardfs-2.5.4.2/build/src/common && /usr/lib/ccache/c++   -DLIZARDFS_HAVE_CONFIG_H -DTHROW_INSTEAD_OF_ABORT -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -D__STDC_CONSTANT_MACROS -D__STDC_FORMAT_MACROS -D__STDC_LIMIT_MACROS -g -O2 -fstack-protector-strong -Wformat -Werror=format-security -D_FORTIFY_SOURCE=2  -pipe -std=c++0x -pthread -Wall -Wextra -fwrapv -pedantic -Wno-gnu -mcrc32 -O3 -DNDEBUG -O3 -DNDEBUG -g -I/tmp/buildd/lizardfs-2.5.4.2/build -I/tmp/buildd/lizardfs-2.5.4.2/src -I/tmp/buildd/lizardfs-2.5.4.2/src/common -I/tmp/buildd/lizardfs-2.5.4.2/external/crcutil-1.0/code    -o CMakeFiles/mfscommon_tests.dir/serialization_macros_unittest.cc.o -c /tmp/buildd/lizardfs-2.5.4.2/src/common/serialization_macros_unittest.cc                                                                                                                                  
In file included from /tmp/buildd/lizardfs-2.5.4.2/src/common/serialization_macros.h:7:0, 
                 from /tmp/buildd/lizardfs-2.5.4.2/src/common/serialization_macros_unittest.cc:2: 
/tmp/buildd/lizardfs-2.5.4.2/src/common/serialization.h: In instantiation of 'uint32_t serializedSize(const T&) [with T = long int; uint32_t = unsigned int]': 
/tmp/buildd/lizardfs-2.5.4.2/src/common/serialization.h:163:51:   recursively required from 'uint32_t serializedSize(const T&, const Args& ...) [with T = short int; Args = {long int}; uint32_t = unsigned int]'                                                                                                                                             
/tmp/buildd/lizardfs-2.5.4.2/src/common/serialization.h:163:51:   required from 'uint32_t serializedSize(const T&, const Args& ...) [with T = int; Args = {short int, long int}; uint32_t = unsigned int]'                                                                                                                                                    
/tmp/buildd/lizardfs-2.5.4.2/src/common/serialization_macros_unittest.cc:14:436:   required from here 
/tmp/buildd/lizardfs-2.5.4.2/src/common/serialization.h:158:26: error: request for member 'serializedSize' in 't', which is of non-class type 'const long int' 
  return t.serializedSize(); 
                          ^ 
/tmp/buildd/lizardfs-2.5.4.2/src/common/serialization.h: In instantiation of 'void serialize(uint8_t**, const T&) [with T = long int; uint8_t = unsigned char]': 
/tmp/buildd/lizardfs-2.5.4.2/src/common/serialization.h:301:32:   recursively required from 'void serialize(uint8_t**, const T&, const Args& ...) [with T = short int; Args = {long int}; uint8_t = unsigned char]'                                                                                                                                           
/tmp/buildd/lizardfs-2.5.4.2/src/common/serialization.h:301:32:   required from 'void serialize(uint8_t**, const T&, const Args& ...) [with T = int; Args = {short int, long int}; uint8_t = unsigned char]'                                                                                                                                                  
/tmp/buildd/lizardfs-2.5.4.2/src/common/serialization_macros_unittest.cc:14:536:   required from here 
/tmp/buildd/lizardfs-2.5.4.2/src/common/serialization.h:295:32: error: request for member 'serialize' in 't', which is of non-class type 'const long int' 
  return t.serialize(destination); 
                                ^ 
/tmp/buildd/lizardfs-2.5.4.2/src/common/serialization.h:295:32: error: return-statement with a value, in function returning 'void' [-fpermissive] 
In file included from /tmp/buildd/lizardfs-2.5.4.2/src/common/serialization_macros.h:7:0, 
                 from /tmp/buildd/lizardfs-2.5.4.2/src/common/serialization_macros_unittest.cc:2: 
/tmp/buildd/lizardfs-2.5.4.2/src/common/serialization.h: In instantiation of 'void deserialize(const uint8_t**, uint32_t&, T&) [with T = long int; uint8_t = unsigned char; uint32_t = unsigned int]':                                                                                                                                                        
/tmp/buildd/lizardfs-2.5.4.2/src/common/serialization.h:523:48:   recursively required from 'void deserialize(const uint8_t**, uint32_t&, T&, Args& ...) [with T = short int; Args = {long int}; uint8_t = unsigned char; uint32_t = unsigned int]' 
/tmp/buildd/lizardfs-2.5.4.2/src/common/serialization.h:523:48:   required from 'void deserialize(const uint8_t**, uint32_t&, T&, Args& ...) [with T = int; Args = {short int, long int}; uint8_t = unsigned char; uint32_t = unsigned int]' 
/tmp/buildd/lizardfs-2.5.4.2/src/common/serialization_macros_unittest.cc:14:678:   required from here 
/tmp/buildd/lizardfs-2.5.4.2/src/common/serialization.h:517:48: error: request for member 'deserialize' in 't', which is of non-class type 'long int' 
  return t.deserialize(source, bytesLeftInBuffer); 
                                                ^ 
/tmp/buildd/lizardfs-2.5.4.2/src/common/serialization.h:517:48: error: return-statement with a value, in function returning 'void' [-fpermissive] 
In file included from /tmp/buildd/lizardfs-2.5.4.2/src/common/serialization_macros.h:7:0, 
                 from /tmp/buildd/lizardfs-2.5.4.2/src/common/serialization_macros_unittest.cc:2: 
/tmp/buildd/lizardfs-2.5.4.2/src/common/serialization.h: In function 'uint32_t serializedSize(const T&) [with T = long int; uint32_t = unsigned int]': 
/tmp/buildd/lizardfs-2.5.4.2/src/common/serialization.h:159:1: warning: control reaches end of non-void function [-Wreturn-type] 
 } 
 ^ 
At global scope: 
cc1plus: warning: unrecognized command line option "-Wno-gnu" 
src/common/CMakeFiles/mfscommon_tests.dir/build.make:126: recipe for target 'src/common/CMakeFiles/mfscommon_tests.dir/serialization_macros_unittest.cc.o' failed 
make[3] *** [src/common/CMakeFiles/mfscommon_tests.dir/serialization_macros_unittest.cc.o] Error 1 
make[3] Leaving directory '/tmp/buildd/lizardfs-2.5.4.2/build' 
CMakeFiles/Makefile2:255: recipe for target 'src/common/CMakeFiles/mfscommon_tests.dir/all' failed 

It builds fine without tests on x86_32; it builds fine with tests on x86_64 (a.k.a. amd64). Apparently only building tests on x86_32 fails...

Questions about testing configure and installation

  1. Could you please add a clean/uninstall function to setup_machine.sh?

    I found that the files generated by setup_machine.sh differ from the first run when I run it again. For example, there are only 3 lines in /etc/sudoers.d/lizardfstest:

    [root@mfs4 tests]# cat /etc/sudoers.d/lizardfstest
    
    ALL ALL = (lizardfstest) NOPASSWD: ALL
    ALL ALL = NOPASSWD: /usr/bin/pkill -9 -u lizardfstest
    lizardfstest ALL = NOPASSWD: /bin/sh -c echo\ 1\ >\ /proc/sys/vm/drop_caches
    

When the testing is done, I would like to remove the testing environment. The test tools and LizardFS programs generated by cmake, make and make install from the lizardfs/tests directory should also be cleaned up.

  2. It seems that the file /etc/lizardfs_tests.conf does not work. I set ${LIZARDFS_ROOT:=/var/lib/lizardfstest/local}, but it is still installed into /usr/local. I have to use cmake .. -DENABLE_TESTS=YES -DCMAKE_INSTALL_PREFIX=/var/lib/lizardfstest/local/.

Documentation update

The mfsmaster options have no description of ACCEPTABLE_DIFFERENCE or of the deprecated CHUNKS_LOOP_TIME (it can still be used, but it is only shown when set). The man page should be extended.

Rename config files (drop "mfs" prefix)?

I'm wondering whether dropping the "mfs" prefix from config file names would be worth it.
From an aesthetic perspective, names like exports.cfg (instead of mfsexports.cfg) are nicer and could help distinguish LizardFS from MooseFS. On the other hand, compatible config file names may be handy for those migrating from MooseFS to LizardFS, but in that case renaming or symlinking the files is such a small effort that it is probably not worth considering... Any thoughts?

Could NOT find Polonaise v0.3.0

It says "Could NOT find Polonaise v0.3.0" when I run ./configure.
I do not know where to find Polonaise to install it on my Red Hat EL 6.5.

The file sys/rusage.h is also not found. I googled the problem and tried "yum install perl-BSD-Resource", which did not work.

mfsmount Segmentation fault

Hello,

Can you please help me with an issue that seems to be a bug?
The problem manifests on Red Hat 6.5.
I compiled the RPMs myself; they worked on LizardFS 2.5.0, and the problem seems to be present in 2.5.2 only.

When I try to mount it gives me: Segmentation fault
GDB debug information:

GNU gdb (GDB) Red Hat Enterprise Linux (7.2-60.el6_4.1)
Copyright (C) 2010 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later http://gnu.org/licenses/gpl.html
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law. Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-redhat-linux-gnu".
For bug reporting instructions, please see:
http://www.gnu.org/software/gdb/bugs/...
Reading symbols from /usr/bin/mfsmount...(no debugging symbols found)...done.
[New Thread 9186]
Reading symbols from /lib64/libfuse.so.2...(no debugging symbols found)...done.
Loaded symbols for /lib64/libfuse.so.2
Reading symbols from /lib64/librt.so.1...(no debugging symbols found)...done.
Loaded symbols for /lib64/librt.so.1
Reading symbols from /lib64/libz.so.1...(no debugging symbols found)...done.
Loaded symbols for /lib64/libz.so.1
Reading symbols from /lib64/libpthread.so.0...(no debugging symbols found)...done.
[Thread debugging using libthread_db enabled]
Loaded symbols for /lib64/libpthread.so.0
Reading symbols from /usr/lib64/libstdc++.so.6...done.
Loaded symbols for /usr/lib64/libstdc++.so.6
Reading symbols from /lib64/libm.so.6...(no debugging symbols found)...done.
Loaded symbols for /lib64/libm.so.6
Reading symbols from /lib64/libgcc_s.so.1...(no debugging symbols found)...done.
Loaded symbols for /lib64/libgcc_s.so.1
Reading symbols from /lib64/libc.so.6...(no debugging symbols found)...done.
Loaded symbols for /lib64/libc.so.6
Reading symbols from /lib64/libdl.so.2...(no debugging symbols found)...done.
Loaded symbols for /lib64/libdl.so.2
Reading symbols from /lib64/ld-linux-x86-64.so.2...(no debugging symbols found)...done.
Loaded symbols for /lib64/ld-linux-x86-64.so.2
Core was generated by `mfsmount -o mfsmaster=192.168.255.232,mfsdebug,mfsacl,mfssubfolder=/ /storage'.
Program terminated with signal 11, Segmentation fault.
#0 0x00007f749f67395a in __strchr_sse2 () from /lib64/libc.so.6

Missing separate debuginfos, use: debuginfo-install lizardfs-client-2.5.2-1rh.x86_64
(gdb) bt
#0 0x00007f749f67395a in __strchr_sse2 () from /lib64/libc.so.6
#1 0x00007f74a0777a3e in ?? () from /lib64/libfuse.so.2
#2 0x00007f74a0777ba3 in ?? () from /lib64/libfuse.so.2
#3 0x00007f74a0777d5a in ?? () from /lib64/libfuse.so.2
#4 0x00007f74a077811a in fuse_opt_parse () from /lib64/libfuse.so.2
#5 0x000000000041105f in main ()

There seems to be a problem with line 934 of /src/mount/fuse/main.cc.

Some more output with the debug headers enabled.
#0 0x00007ffff6ab895a in __strchr_sse2 () from /lib64/libc.so.6
#1 0x00007ffff7bbca3e in match_template (t=0x240166 <Address 0x240166 out of bounds>, arg=0x16e53b0 "mfsmaster=192.168.255.232", sepp=0x7fffffffe43c) at fuse_opt.c:171
#2 0x00007ffff7bbcba3 in find_opt (ctx=0x7fffffffe4c0, arg=0x16e53b0 "mfsmaster=192.168.255.232", iso=1) at fuse_opt.c:193
#3 process_gopt (ctx=0x7fffffffe4c0, arg=0x16e53b0 "mfsmaster=192.168.255.232", iso=1) at fuse_opt.c:274
#4 0x00007ffff7bbcd5a in process_real_option_group (ctx=0x7fffffffe4c0, opts=) at fuse_opt.c:302
#5 process_option_group (ctx=0x7fffffffe4c0, opts=) at fuse_opt.c:326
#6 0x00007ffff7bbd11a in process_one (args=0x7fffffffe560, data=, opts=, proc=) at fuse_opt.c:342
#7 opt_parse (args=0x7fffffffe560, data=, opts=, proc=) at fuse_opt.c:362
#8 fuse_opt_parse (args=0x7fffffffe560, data=, opts=, proc=) at fuse_opt.c:397
#9 0x000000000045d223 in main (argc=4, argv=0x7fffffffe688) at /root/lizardfs-2.5.2/src/mount/fuse/main.cc:934

Thank you

Master HA

Hello,

As I said in a previous post, I am a MooseFS user and I am interested in migrating to LizardFS.
In MooseFS I have managed to build some kind of HA for the master using ucarp and a script that does the metadata restore.
I have seen that master HA is on the roadmap.

Can you tell me what your plan is for this? Is it going to be something inside the code or some sort of script?

Is it going to be load balanced or failover only?

Is there a forecast date for implementing this?

Thank you

LizardFS Shadow -> Master transition failure

Steps to reproduce: Have two or more LizardFS masters (one master, one or more shadows) and try to transition a shadow master into a full master by changing its personality and reloading it. With the "right" dataset, the result is this message: Can't connect to MFS master (IP:127.0.0.1 ; PORT:9421)

The message persists until you run another reload, at which point full master functionality is resumed.

Deleting the metadata set and starting with a fresh one may fix the issue temporarily. Deleting stats.mfs and sessions.mfs has no effect.

Make Error on Alpine Linux

Hi,

I have tried to compile the LizardFS 2.5.4 branch on Alpine Linux, which uses musl libc.
I ran into the following error:
src/common/CMakeFiles/mfscommon.dir/build.make:333: recipe for target 'src/common/CMakeFiles/mfscommon.dir/lockfile.cc.o' failed
make[2]: *** [src/common/CMakeFiles/mfscommon.dir/lockfile.cc.o] Error 1

See full output:
http://pastebin.com/Ppm08Yss

It would be great if somebody could tell me whether I just made a stupid error or whether there is some kind of compatibility issue with Alpine/musl.

Although it is more or less unrelated, MooseFS 2.0.4.3 compiled without problems.

Thanks!

mfstopology.cfg throws syntax errors

Hey there,

When you try to start the LFS master on Debian 7, it throws syntax errors like this when it attempts to load the included mfstopology.cfg file:

mfstopology: incomplete definition in line: 28
mfstopology: incomplete definition in line: 28

Run lizardfs-tests: Permission denied

I have run setup_machine.sh successfully to configure the testing environment. The tests also compiled without any errors, but I get "Permission denied" when running lizardfs-tests.
For example:
For example:
[root@mfs4 tests]# lizardfs-tests --gtest_filter='SanityChecks.test_goals'
Running main() from gtest_main.cc
Note: Google Test filter = SanityChecks.test_goals
[==========] Running 1 test from 1 test case.
[----------] Global test environment set-up.
[----------] 1 test from SanityChecks
[ RUN ] SanityChecks.test_goals
bash: /root/lizardfs/tests/test_suites/SanityChecks/test_goals.sh: Permission denied
/root/lizardfs/tests/lizardfs-tests.cc:31: Failure
Failed
[ FAILED ] SanityChecks.test_goals (542 ms)
[----------] 1 test from SanityChecks (542 ms total)

[----------] Global test environment tear-down
[==========] 1 test from 1 test case ran. (542 ms total)
[ PASSED ] 0 tests.
[ FAILED ] 1 test, listed below:
[ FAILED ] SanityChecks.test_goals

1 FAILED TEST

However, it works well if I run ./run-test.sh test_suites/SanityChecks/test_goals.sh

Please expose recursive directory size

linux-doc-3.16/Documentation/filesystems/ceph.txt describes a very nice CephFS feature:

Ceph also provides some recursive accounting on directories for nested
files and bytes. That is, a 'getfattr -d foo' on any directory in the
system will reveal the total number of nested regular files and
subdirectories, and a summation of all nested file sizes. This makes
the identification of large disk space consumers relatively quick, as
no 'du' or similar recursive scan of the file system is required.

I'd very much like to see this awesome feature implemented in LizardFS.
I reckon it should be relatively easy to implement because all metadata is already in RAM...

chunk server labels

Hello,

I just want to point out something that I am not sure anyone considered when introducing chunkserver labels.

As I understand it, you can give a label to some category of chunkservers and force one copy to be written to them (the SSD example). Then there is the topology feature, which makes clients read from the chunkservers in the same rack. So for example:

  • ssd array in rack 1
  • servers with 7200 rpm drives in rack 2
  • clients in rack 2

The process would then be like this: clients write one copy to the SSD drives (due to labels) and the second copy to the 7200 rpm drives. The reads will go to the 7200 rpm drives due to the topology definition... Am I right, or are there other things I have to consider? If I am right, it does not seem very efficient; I think it would be more efficient if the labels could have some sort of priority and the read location decision were taken by a formula using the two variables.

With this issue I only want to help improve this software, because this project seems to me to be going in a good direction.

Thank you

Global Locks

Hello,

First of all, I have been a MooseFS user for some time and I like what you have done with LizardFS. I am planning to migrate to it mainly for the quota support and performance improvements, but the IO limiter is also a nice feature.

Another feature that I am interested in is global locking. Any news on this feature?

Thank you

Confusing/inconsistent LFS Chunkserver (and others) behaviour with cfg files

Hey there,

When you try to start the LFS chunkserver on Debian 7, it claims to try to load /etc/mfschunkserver.cfg but actually reads /etc/mfs/mfschunkserver.cfg, which is the only one that exists.

This is the exact set of messages:
Starting lizardfs-master:
cannot load config file: //etc/mfsmaster.cfg
can't load config file: //etc/mfsmaster.cfg - using defaults

It would be neat if this were made consistent. Keep being awesome.
~ B

How far to reach a windows client

I noticed "(polonaise) Add filesystem API for developers allowing to use the filesystem without FUSE (and thus working also on Windows)" in the NEWS for LizardFS 2.5.2 (2014-09-15).

So I guess it is possible to implement a Windows client in the future. I also found a video of a native Windows client for LizardFS on YouTube (http://youtu.be/YpfAc92ei90).
I want to know whether there is a plan to release the Windows client. Thanks.

Warning during compilation on Jenkins

Check the origin of this warning:

/home/jenkins/workspace/dev.LongRun/src/lizardfs/mfsmount/mfs_fuse.cc: In function ‘int mfs_errorconv(int)’:
/home/jenkins/workspace/dev.LongRun/src/lizardfs/mfsmount/mfs_fuse.cc:283:32: warning: ignoring return value of ‘char* strerror_r(int, char*, size_t)’, declared with attribute warn_unused_result [-Wunused-result]

Mfscgi shows RAM used: "not available" in v2.5.1

I have updated to the latest version, v2.5.1.
I found that RAM used shows "not available" on the mfscgi web page.
I tried to stop the mfscgi process and start it again, but I get an error:
[root@mfs1 ~]# mfscgiserv stop
sending SIGTERM to lock owner (pid:55509)
Traceback (most recent call last):
File "/usr/sbin/mfscgiserv", line 492, in
if wdlock(lockfname,mode,locktimeout)==1:
File "/usr/sbin/mfscgiserv", line 419, in wdlock
posix.kill(l,signal.SIGTERM)
OSError: [Errno 3] No such process
[root@mfs1 ~]# ps aux | grep mfs
mfs 5537 0.0 0.7 240252 119056 ? S< 09:23 0:00 mfsmaster
root 5550 0.0 0.0 186860 14200 ? S 09:25 0:00 python /usr/sbin/mfscgiserv
mfs 5553 0.1 0.0 274940 8800 ? S<l 09:25 0:00 mfschunkserver
root 5589 0.0 0.0 103240 860 pts/0 S+ 09:35 0:00 grep mfs

Enhance error messages with log path hint

An error message in the console should provide a path hint to the log file containing a description of the cause of the failure.

Currently you see messages like this:
"init: hdd space manager failed."

Kernel Panic on FreeBSD 9.2

Hello,
this is probably more FUSE related, but I thought it might be a good start to ask here.
There is a problem with client nodes. While starting rsync we encounter a kernel panic:

fuse4bsd: version 0.3.9-pre1, FUSE ABI 7.19


Fatal trap 12: page fault while in kernel mode
cpuid = 0; apic id = 00
fault virtual address       = 0x0
fault code                           = supervisor read instruction, page not present
instruction pointer         = 0x20:0x0
stack pointer             = 0x28:0xffffff811b71bac0
frame pointer           = 0x28:0xffffff811b71bb10
code segment                  = base 0x0, limit 0xfffff, type 0x1b
                                               = DPL 0, pres 1, long 1, def32 0, gran 1
processor eflags              = interrupt enabled, resume, IOPL = 0
current process                               = 16502 (rsync)
trap number                     = 12
panic: page fault
cpuid = 0
KDB: stack backtrace:
#0 0xffffffff80920bb6 at kdb_backtrace+0x66
#1 0xffffffff808eabce at panic+0x1ce
#2 0xffffffff80bd8640 at trap_fatal+0x290
#3 0xffffffff80bd897d at trap_pfault+0x1ed
#4 0xffffffff80bd8f9e at trap+0x3ce
#5 0xffffffff80bc355f at calltrap+0x8
#6 0xffffffff80bd7ee6 at amd64_syscall+0x546
#7 0xffffffff80bc3847 at Xfast_syscall+0xf7
Uptime: 21h22m6s
Dumping 847 out of 4043 MB:..2%..12%..21%..31%..42%..51%..61%..72%..82%..91%

Could you please advise a stable setup for FreeBSD clients?
Thank you!
Luke

chunkserver: fails if some volumes are inaccessible

While I was testing 9cec600 I noticed that the chunkserver fails to start when one storage volume can't be initialised, so the proposed configuration does not work. Such failures are undesirable because a server may have many HDDs dedicated to LizardFS, and if one of them couldn't be mounted on a restart of the server (in case of HDD failure), that should not affect the whole node. In other words, the failure of one HDD should not stop the chunkserver from serving the other, healthy storage volumes. IMHO logging a non-fatal error should be sufficient when one (or more) storage volumes are unavailable.

chunkserver: bypass OS cache (posix_fadvise/POSIX_FADV_DONTNEED)

On 2.5.4 I verified that thousands of files in the chunkserver's directories are completely or partially cached. It would be best to avoid caching the chunkserver's data, due to the low probability of a cache hit. Therefore I recommend implementing posix_fadvise/POSIX_FADV_DONTNEED to exclude chunks from the operating system cache, which will improve the chunkserver's co-existence with other applications. Currently the chunkserver's activity displaces the OS cache, which negatively affects the performance of other services, with very little hope of a cache hit.

Not caching the chunkserver's data will reduce cache pressure and help improve overall system performance by using the cache more effectively.

Please note that this will affect only data (file contents), not the cache of directory entries etc.

Please implement posix_fadvise/POSIX_FADV_DONTNEED to prevent caching of the chunkserver's data.

P.S. There is an interesting related project: nocache

chunkserver: please add user friendly error message when "mfshdd.cfg" is missing

If the chunkserver is started with mfschunkserver.cfg but without mfshdd.cfg, it fails as follows:

initializing mfschunkserver modules ...
init: hdd space manager failed !!!
error occured during initialization - exiting

without saying anything in particular about the problem.
It would be nice to have a friendly error message saying that it is actually unable to open mfshdd.cfg.
Thanks.

Documentation references MATOCU_LISTEN_(HOST|PORT) instead of MATOCL

The mfsmaster.cfg man page still lists MATOCU_LISTEN_HOST and MATOCU_LISTEN_PORT, which have been renamed to MATOCL_.... Additionally, the code providing backwards compatibility appears to be broken: if you configure MATOCU_LISTEN_PORT, you get the default value of MATOCL_LISTEN_PORT instead.

Print reason for chunkserver failure after installation

The chunkserver normally does not start after installation, because it is disabled.
Enabling and starting it causes a failure as follows:

root@debian:~# /usr/local/etc/init.d/lizardfs-chunkserver start
Starting LizardFS chunkserver daemon:
cannot load config file: /usr/local/etc/mfs/mfschunkserver.cfg
can't load config file: /usr/local/etc/mfs/mfschunkserver.cfg - using defaults
working directory: /usr/local/var/lib/mfs
lockfile created and locked
initializing mfschunkserver modules ...
init: hdd space manager failed !!!
error occured during initialization - exiting
[FAIL] Starting lizardfs-chunkserver... ... failed!

The cause of this error is the missing configuration file mfs/mfshdd.cfg. This should be stated explicitly.

What's the "shadow master" ?

LizardFS provides high availability: "the new 'shadow master' server which maintains a current copy of filesystem metadata and is prepared to immediately replace the master metadata server in case of a failure"; "Shadow master obtains metadata updates from the master server using the old MooseFS/LizardFS metalogger protocol." -- from the LizardFS website

However, I do not see the difference between the "shadow master" in LizardFS and the "metalogger server" in MooseFS. In MooseFS, the metadata can be restored from the metalogger server when the master server stops working.

bug: mfsmount -> mfsmaster unknown message

Hi, we are encountering a bug in our cluster where, on some mounts, the server keeps disconnecting the mount because it receives invalid data.

master log:

Apr 27 23:50:37 toronto2 mfsmaster[17477]: main master server module: got unknown message from mfsmount (type:1416)
Apr 27 23:50:38 toronto2 mfsmaster[17477]: main master server module: got unknown message from mfsmount (type:1416)
Apr 27 23:50:46 toronto2 mfsmaster[17477]: main master server module: got unknown message from mfsmount (type:1416)
Apr 27 23:50:49 toronto2 mfsmaster[17477]: main master server module: got unknown message from mfsmount (type:1416)
Apr 27 23:50:52 toronto2 mfsmaster[17477]: main master server module: got unknown message from mfsmount (type:1416)
Apr 27 23:50:55 toronto2 mfsmaster[17477]: main master server module: got unknown message from mfsmount (type:1416)
Apr 27 23:50:59 toronto2 mfsmaster[17477]: main master server module: got unknown message from mfsmount (type:1416)

mount log:

Apr 27 23:50:42 toronto4 mfsmount[29921]: master: connection lost
Apr 27 23:50:42 toronto4 mfsmount[29921]: registered to master
Apr 27 23:50:44 toronto4 mfsmount[29921]: master: connection lost
Apr 27 23:50:44 toronto4 mfsmount[29921]: registered to master
Apr 27 23:50:46 toronto4 mfsmount[29921]: master: connection lost
Apr 27 23:50:46 toronto4 mfsmount[29921]: registered to master
Apr 27 23:50:49 toronto4 mfsmount[29921]: master: connection lost
Apr 27 23:50:49 toronto4 mfsmount[29921]: registered to master
