Comments (19)
Yeah, this sounds like a timing issue where the client thread of the test tries connecting before the server has started listening. I probably should have added some sort of condition variable to ensure that doesn't happen. I'll try something here.
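A minimal sketch of that idea, assuming hypothetical server()/waitThenConnect() stand-ins for the real test code: the server sets a flag under a lock and notifies once it is listening, and the client thread waits on that predicate before connecting.

```cpp
#include <condition_variable>
#include <mutex>
#include <thread>

// Hypothetical sketch -- server() stands in for the real test's server.
std::mutex cvMutex;
std::condition_variable cv;
bool listening = false;             // the predicate the client waits on

void server()
{
    // ... socket(), bind(), listen() would go here ...
    {
        std::lock_guard<std::mutex> lock(cvMutex);
        listening = true;           // set the predicate under the lock
    }
    cv.notify_one();                // then wake the waiting client thread
    // ... accept() and serve ...
}

void waitThenConnect()
{
    std::thread serverThread(server);
    {
        std::unique_lock<std::mutex> lock(cvMutex);
        // Waiting on a predicate handles both spurious wakeups and the case
        // where the server notifies before this thread starts waiting.
        cv.wait(lock, []{ return listening; });
    }
    // ... client() can now connect safely ...
    serverThread.join();
}
```

Waiting on the predicate (rather than a bare cv.wait()) matters: if the server notifies before the client reaches the wait, the predicate is already true and the wait returns immediately instead of blocking forever.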
from fastcgipp.
Check out commit 9c747d4 and try it again.
Hi, thanks for the quick update :)
Unfortunately I now get "There are leftover file descriptors after they should all have been closed". I see it is something you have anticipated though, so any suggestions?
If it helps, here is what lsof -c make -r said during the run:
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
make 1915 peter_devoy cwd DIR 252,0 4096 5377982 /home/peter_devoy/fastcgi++.build
make 1915 peter_devoy rtd DIR 252,0 4096 2 /
make 1915 peter_devoy txt REG 252,0 207528 4856084 /usr/bin/make
make 1915 peter_devoy mem REG 252,0 1864888 4591636 /lib/x86_64-linux-gnu/libc-2.23.so
make 1915 peter_devoy mem REG 252,0 14608 4591637 /lib/x86_64-linux-gnu/libdl-2.23.so
make 1915 peter_devoy mem REG 252,0 162632 4591632 /lib/x86_64-linux-gnu/ld-2.23.so
make 1915 peter_devoy mem REG 252,0 1674800 4855565 /usr/lib/locale/locale-archive
make 1915 peter_devoy 0u CHR 136,2 0t0 5 /dev/pts/2
make 1915 peter_devoy 1u CHR 136,2 0t0 5 /dev/pts/2
make 1915 peter_devoy 2u CHR 136,2 0t0 5 /dev/pts/2
make 1915 peter_devoy 3r FIFO 0,10 0t0 18059 pipe
You know, it's not so much something I anticipated as it is something I assumed simply shouldn't happen. I chalked file descriptors not being properly closed up as "not good", but my method of checking likely leaves something to be desired. Can you modify it to check how many are left? Simply change
if(openfds() != initialFds)
FAIL_LOG("There are leftover file descriptors after they should all "\
"have been closed");
to
const auto fds = openfds()-initialFds;
if(fds != initialFds)
FAIL_LOG("There are " << fds << " leftover file descriptors after they "\
"should all have been closed");
and see what it says?
Just 1.
I also changed:
ss << "/proc/" << getpid() << "/fd";
DIR* const directory = opendir(ss.str().c_str());
to
ss << "/proc/" << getpid() << "/fd";
ERROR_LOG(ss.str().c_str());
DIR* const directory = opendir(ss.str().c_str());
resulting in:
[error] /proc/1552/fd
[error] /proc/1552/fd
[fail] There are 1 leftover file descriptors after they should all have been closed
P.S. Now that I have taken the time to understand the code, I see that extra information isn't very useful ;)
So I added a log line in here:
while((file = readdir(directory)) != nullptr)
ERROR_LOG(file->d_name);
++count;
Which gave the following:
[error] /proc/1684/fd
[error] .
[error] ..
[error] 0
[error] 1
[error] 2
[error] 3
[error] 4
[error] /proc/1684/fd
[error] .
[error] ..
[error] 0
[error] 1
[error] 2
[error] 3
[error] 4
[error] 5
[fail] There are 0 leftover file descriptors after they should all have been closed
Hmmm. Good bit of debugging, thanks! It looks like it is simply the OS itself cleaning up the file descriptors asynchronously. Perhaps a small std::this_thread::sleep_for() before checking how many file descriptors are still open would be the solution. What do you think?
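One way that suggestion could be sketched without guessing a single fixed delay: poll the count a few times with short sleeps. waitForFdCount and its parameters are hypothetical names; the test's existing openfds() would be passed in as the callable.

```cpp
#include <chrono>
#include <functional>
#include <thread>

// Hypothetical helper: retry the descriptor count with short sleeps instead
// of one fixed sleep_for(), in case the OS reclaims descriptors lazily.
bool waitForFdCount(const std::function<unsigned int()>& count,
                    unsigned int target,
                    int tries = 50)
{
    for(int i = 0; i < tries; ++i)
    {
        if(count() == target)
            return true;            // descriptor count settled
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
    }
    return false;                   // still not back to the initial count
}
```

The test could then call FAIL_LOG only if waitForFdCount(openfds, initialFds) returns false, so a genuinely closed-but-not-yet-reclaimed descriptor never trips it.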
Glad to help! Unfortunately I am new to C++, so not qualified enough to answer :( I did think there may be some semantic error in if(fds != initialFds)
seeing as your message says there are 0 leftover file descriptors, but I realise the original logic has changed from if(openfds() != initialFds)
...
I don't really understand how count is being decremented either... given the output in #issuecomment-216062252 I would have expected the first and second calls to openfds() to return 7 & 8 respectively but from the below I see it is returning 1 & 1.
const auto fds = openfds()-initialFds;
ERROR_LOG(fds); //0
ERROR_LOG(initialFds); //1
//below output shows fds = 0 therefore openfds() must return 1 to yield 1 - 1 = 0
if(fds != initialFds) //0 != 1
FAIL_LOG("There are " << fds << " leftover file descriptors after they "\
"should all have been closed");
//Log: "There are 0 leftover file descriptors after they should all have been closed"
It's all foreign to me so if you're happy the logic is sound I could try putting a sleep in before const auto fds = openfds()-initialFds;
, if you like?
Just realised that with the original logic the test would have passed on that run... so I see what you mean about the async, I think.
Yeah I'd say try a few milliseconds of sleep before that statement and see what happens. Perhaps the OS just needs a little time.
Unfortunately it fails even with 1 minute of sleep:
Code
int main()
{
const auto initialFds = openfds();
std::random_device trueRand;
std::uniform_int_distribution<> portDist(2048, 65534);
port = std::to_string(portDist(trueRand));
done=false;
std::thread serverThread(server);
{
std::unique_lock<std::mutex> cvLock(cvMutex);
cv.wait(cvLock);
}
client();
serverThread.join();
std::this_thread::sleep_for(std::chrono::milliseconds(60000));
const auto fds = openfds()-initialFds;
if(fds != initialFds)
FAIL_LOG("There are " << fds << " leftover file descriptors after they "\
"should all have been closed");
return 0;
}
Command
make tests;make test;cat Testing/Temporary/LastTest.log | grep -e .fail.*
Output
[fail] There are 1 leftover file descriptors after they should all have been closed
Man. What the fuck. I guess this has nothing to do with the OS taking a bit longer to clean file descriptors up at all. I am, of course, assuming that if there are file descriptors remaining, it is because they haven't been properly closed. Perhaps that assumption is incorrect? I dunno. Hmmmm. I guess the next step would be to find out which file descriptor is still open. Otherwise I suppose the check is not really important, but I feel as though it is. If you're feeling up for the challenge, I suppose you could try to match the file descriptor number to all those that have been opened. I would suspect it is a listen socket or epoll file descriptor. Otherwise, comment out the check and move on!
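On Linux, matching a leftover descriptor to what it refers to can be sketched by resolving the /proc/<pid>/fd symlinks: each link target names the resource, e.g. "socket:[12345]" for a socket or "anon_inode:[eventpoll]" for an epoll instance. listOpenFds is a hypothetical name, not part of the test suite.

```cpp
#include <dirent.h>
#include <unistd.h>
#include <string>
#include <vector>

// Hypothetical sketch: resolve each /proc/<pid>/fd symlink so a leftover
// descriptor can be matched to a listen socket, epoll instance, etc.
std::vector<std::string> listOpenFds()
{
    std::vector<std::string> result;
    const std::string path = "/proc/" + std::to_string(getpid()) + "/fd";
    DIR* const directory = opendir(path.c_str());
    if(directory == nullptr)
        return result;
    dirent* file;
    while((file = readdir(directory)) != nullptr)
    {
        if(file->d_name[0] == '.')
            continue;               // skip "." and ".."
        char target[256];
        const ssize_t length = readlink(
                (path + '/' + file->d_name).c_str(),
                target, sizeof(target) - 1);
        if(length > 0)
            result.push_back(std::string(file->d_name) + " -> "
                    + std::string(target, length));
    }
    closedir(directory);
    return result;
}
```

Dumping this list right before the final check would show immediately whether the stray descriptor is a socket or the epoll instance.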
Perhaps that assumption is incorrect?
I am not sure because I do not understand why count is only 1... do you know why? I will have access to the server later so I can try and do some more debugging.
Yeah. I haven't the foggiest idea why only one file descriptor is left over. Obviously it means all the other hundreds of file descriptors are being properly closed. Since it is only one, that may indicate either a listen or epoll file descriptor since there is only one of each of them.
I really mean 'how', not 'why'. I.e. if the count variable is declared in function scope, incremented in the while loop which executes 6 or 7 times, then returned from the function, how is it only 1 and not 6 or 7? Where along the line does it get decreased?
I have access to the server now so will have a dig anyway...
Maybe this will shed some light on it?
unsigned int openfds()
{
std::ostringstream ss;
ss << "/proc/" << getpid() << "/fd";
ERROR_LOG(ss.str().c_str());
DIR* const directory = opendir(ss.str().c_str());
dirent* file;
unsigned int count = 0;
ERROR_LOG("count before loop: " << count);
while((file = readdir(directory)) != nullptr)
ERROR_LOG("count before increment: " << count << "\tfile->d_name: " << file->d_name);
++count;
ERROR_LOG("count after increment: " << count << "\tfile->d_name: " << file->d_name);
closedir(directory);
ERROR_LOG("count before return: " << count);
return count;
}
Output:
----------------------------------------------------------
[error] /proc/1435/fd
[error] count before loop: 0
[error] count before increment: 0 file->d_name: .
[error] count before increment: 0 file->d_name: ..
[error] count before increment: 0 file->d_name: 0
[error] count before increment: 0 file->d_name: 1
[error] count before increment: 0 file->d_name: 2
[error] count before increment: 0 file->d_name: 3
[error] count before increment: 0 file->d_name: 4
[error] count after increment: 1 file->d_name:
<end of output>
Test time = 0.00 sec
Note it seems to crash before reaching ERROR_LOG("count before return: " << count); -- the last line printed is the ERROR_LOG("count after increment: " [...]
This gets weirder by the second:
while((file = readdir(directory)) != nullptr)
ERROR_LOG("count before increment: " << count << "\tfile->d_name: " << file->d_name);
++count;
ERROR_LOG("count after increment: " << count); //<-------removed file->d_name
Output:
----------------------------------------------------------
[error] /proc/1785/fd
[error] count before loop: 0
[error] count before increment: 0 file->d_name: .
[error] count before increment: 0 file->d_name: ..
[error] count before increment: 0 file->d_name: 0
[error] count before increment: 0 file->d_name: 1
[error] count before increment: 0 file->d_name: 2
[error] count before increment: 0 file->d_name: 3
[error] count before increment: 0 file->d_name: 4
[error] count after increment: 1
[error] /proc/1785/fd
[error] count before loop: 0
[error] count before increment: 0 file->d_name: .
[error] count before increment: 0 file->d_name: ..
[error] count before increment: 0 file->d_name: 0
[error] count before increment: 0 file->d_name: 1
[error] count before increment: 0 file->d_name: 2
[error] count before increment: 0 file->d_name: 3
[error] count before increment: 0 file->d_name: 4
[error] count before increment: 0 file->d_name: 5
[error] count after increment: 1
[fail] There are 0 leftover file descriptors after they should all have been closed
<end of output>
Test time = 4.69 sec
----------------------------------------------------------
Test Failed.
"Fastcgipp::Sockets" end time: May 03 21:27 BST
"Fastcgipp::Sockets" time elapsed: 00:00:04
----------------------------------------------------------
Hmmm. I'm pretty sure that
while((file = readdir(directory)) != nullptr)
ERROR_LOG("count before increment: " << count << "\tfile->d_name: " << file->d_name);
++count;
ERROR_LOG("count after increment: " << count << "\tfile->d_name: " << file->d_name);
would have to be
while((file = readdir(directory)) != nullptr)
{
ERROR_LOG("count before increment: " << count << "\tfile->d_name: " << file->d_name);
++count;
ERROR_LOG("count after increment: " << count << "\tfile->d_name: " << file->d_name);
}
to accomplish what you're trying to accomplish. What is this, Python? :P
Haha, well, now that you mention it, I did think it was odd that neither the while loop nor the if statement at the bottom had braces. I was surprised at how forgiving the compiler is, having always thought that whitespace is disregarded in C++. I now realise that an unbraced loop or if only covers the single statement that follows it (and that a condition can even be terminated with a lone semicolon)
-- bit of a gotcha for a C++ newb such as myself ;)
And that explains why it was only returning 1 above, doh. If you want to give me a patch to try to identify the remaining fds I am happy to have a look, but as it's not a show-stopper I will just move on to hello world in the meantime :)
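For the record, a braced sketch of what the loop presumably meant to do. This version also skips the "." and ".." entries, which the original count included; opendir's own descriptor is still counted, but since it appears in both the initial and final counts it cancels out of the difference.

```cpp
#include <dirent.h>
#include <unistd.h>
#include <cctype>
#include <sstream>

// Braced sketch of openfds(): count only the numeric entries of
// /proc/<pid>/fd, skipping "." and "..".
unsigned int openfds()
{
    std::ostringstream ss;
    ss << "/proc/" << getpid() << "/fd";
    DIR* const directory = opendir(ss.str().c_str());
    if(directory == nullptr)
        return 0;
    dirent* file;
    unsigned int count = 0;
    while((file = readdir(directory)) != nullptr)
    {
        if(std::isdigit(static_cast<unsigned char>(file->d_name[0])))
            ++count;
    }
    closedir(directory);
    return count;
}
```

With the braces in place, the earlier output (entries 0 through 4, then 0 through 5) would have yielded counts of 5 and 6, making the one extra descriptor visible directly.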
I first want to thank you for this library. I have two questions: 1. Can this work on Windows, or just Linux? 2. Can it work with an Apache server? I just want to know before using it. Thanks again.