Comments (10)

kvnwinata commented on July 20, 2024

Interesting...are you using the coordinate mapper? Did you generate a .DAT file on your server machine, copy it to your client machine, and specify the filename when you run the server?

kirubhakinect commented on July 20, 2024

Hi,
Thanks for the reply. Yes, I generated the .DAT file correctly, copied it to the client machine, and also specified the filename when running the server. But my question is why I am getting a double human in the first screenshot.
Another peculiar thing I noticed is that the depth image from the client machine's sensor overlaps the color image coming from the server machine's sensor. The server machine's sensor is facing the door, and the client machine's sensor is facing us. The image you are seeing is from the server machine's sensor.
[Screenshot: serverimage]

Steps to get this image:

  1. Server system: I run the following command on the command line: kv2server_application ###.#.#.1 kv2sensor01
  2. Client system: I run my application, which displays this point cloud.

We are sitting opposite the client system's sensor, but the server sensor is facing the door. As you can see in the figure, the two overlap, and I am confused about why this happens.

Please tell me why the depth data from the client system overlaps the color image from the server system.

Thanks
Kirubha

kvnwinata commented on July 20, 2024

Hi Kirubha,

I'm sorry but I can't seem to understand the situation clearly from your description about the double human. Let me clarify this: you're using two sensors, one connected to a server, called "secondary" and one connected to your client, called "primary", and your point cloud rendering from your primary sensor is fine, but the one from your secondary sensor has a double image for some reason--is this correct? Could you condense your code logic a little bit and highlight where it is different so that it's easier for me to parse?

As for the monitor shadow thing, I'm not sure why that is the case, but (if my understanding that you're using two sensors is correct) it might be caused by the interference of the infrared rays from each camera. Maybe try to see if you still have the issue in a bigger room (possibly less interference)?

I'd love to help but I will be busy until the end of this week and I will be traveling the entire week after, so I probably won't be able to reply much until then.

Kevin

kirubhakinect commented on July 20, 2024

Thanks Kevin,

[Let me clarify this: you're using two sensors, one connected to a server, called "secondary" and one connected to your client, called "primary", and your point cloud rendering from your primary sensor is fine, but the one from your secondary sensor has a double image for some reason--is this correct?]
Yes, your understanding is correct.

[As for the monitor shadow thing, I'm not sure why that is the case, but (if my understanding that you're using two sensors is correct) ]
That was our mistake: I had used the same CameraSpacePoint buffer for both the primary and the secondary sensor, which is why the primary sensor's depth overlaps the secondary sensor's RGB.
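
In other words, each sensor needs its own camera-space buffer so the two mappings never overwrite each other; a minimal sketch (the buffer names here are only illustrative, sized for the 1920x1080 color frame being mapped):

    #include <vector>

    // One mapping buffer per sensor so their CameraSpacePoint results never
    // overwrite each other (CameraSpacePoint is the Kinect v2 SDK struct).
    std::vector<CameraSpacePoint> primaryCameraSpacePoints(1920 * 1080);
    std::vector<CameraSpacePoint> secondaryCameraSpacePoints(1920 * 1080);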

I have some doubts about rendering the RGB, depth, and body index buffers from the secondary sensor.
I want to render an RGB point cloud together with the clipped bodies of the humans, so for the clipped bodies I need the body index buffer from the secondary sensor.

  1. I am getting correct player index bytes for the primary sensor, and from them I am able to isolate the human. But for the secondary sensor, the player index data comes as a signed char buffer.
     From your code, we are using it up to this point:

         void KV2ClientExample::AcquireAndProcessBodyIndexFrame()
         {
             IBodyIndexFramePtr bodyIndexFrame;
             if (bodyIndexStreamer->AcquireLatestFrame(&bodyIndexFrame))
             {
                 UINT bufferSize;
                 signed char* buffer;
                 bodyIndexFrame->AccessRawUnderlyingBuffer(&bufferSize, &buffer);
                 .....

     From the above code I get the body index frame as a signed char* buffer, which I convert to a byte* in our logic; but in our byte* array we get values like 234, 238, 240, whereas player index values should normally be in [-1, 0, 1, 2, 3, 4, 5], or 255 if there is no player.
     In our logic, we get indexSensor2buffer like this:

         void AcquireAndProcessBodyIndexFrame()
         {
             IBodyIndexFramePtr bodyIndexFrame;
             signed char* s2buffer;
             if (bodyIndexStreamer->AcquireLatestFrame(&bodyIndexFrame))
             {
                 UINT bufferSize;
                 bodyIndexFrame->AccessRawUnderlyingBuffer(&bufferSize, &s2buffer);
                 unsigned char* output = bodyIndexFrameRenderBuffer;
                 const signed char* bufferEnd = s2buffer + DEPTH_MULTICAST_WIDTH * DEPTH_MULTICAST_HEIGHT;
                 while (s2buffer < bufferEnd)
                 {
                     signed char index = *s2buffer;
                     *output = color_mapping[3 * (index + 1) + 0]; ++output;
                     *output = color_mapping[3 * (index + 1) + 1]; ++output;
                     *output = color_mapping[3 * (index + 1) + 2]; ++output;
                     ++s2buffer;
                 }
                 indexSensor2buffer = reinterpret_cast<BYTE*>(s2buffer);
                 bodyIndexFrame.reset();
             }
         }
         p = pDepthS2SpaceBuffer[colorIndex];
         int depthX = static_cast<int>(p.X);
         int depthY = static_cast<int>(p.Y);
         if ((depthX >= 0 && depthX < 512) && (depthY >= 0 && depthY < 424))
         {
             BYTE player = indexSensor2buffer[depthX + (depthY * 512)];
             const CameraSpacePoint& rPt1 = pCSS2Points[colorIndex];
             xx = (i - pp_x) * rPt1.Z / fl_x;
             yy = (j - pp_y) * rPt1.Z / fl_y;
             zz = rPt1.Z;
             if (player != 0xff)
             {
                 glColor4ub(pColorBuffer[4 * colorIndex], pColorBuffer[4 * colorIndex + 1], pColorBuffer[4 * colorIndex + 2], pColorBuffer[4 * colorIndex + 3]);
                 glVertex3f(xx, -yy, rPt1.Z);
             }
         }
     However, indexSensor2buffer returns values like 230, 234, 248, ..., when it should only contain values in [-1, 0, 1, 2, 3, 4, 5], or 255 if there is no player (see also the sketch after this list).
  2. When I access the secondary sensor over the network, most of the time the depth data does not arrive and it is very slow. Please suggest how I can increase the speed of the data coming from the secondary sensor over the network.
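
Regarding point 1, one thing visible in the posted code is that indexSensor2buffer is assigned from s2buffer only after the while loop has advanced s2buffer all the way to bufferEnd, and the frame is released right afterwards, so the pointer ends up referring to memory past the valid body index data. A minimal sketch of an alternative, assuming indexSensor2buffer is an application-owned BYTE array of at least DEPTH_MULTICAST_WIDTH * DEPTH_MULTICAST_HEIGHT elements:

    // Sketch (names taken from the snippet above): copy the body index values
    // into an application-owned buffer right after AccessRawUnderlyingBuffer,
    // before s2buffer is advanced and before the frame is released.
    UINT bufferSize = 0;
    signed char* s2buffer = nullptr;
    bodyIndexFrame->AccessRawUnderlyingBuffer(&bufferSize, &s2buffer);

    const int depthPixelCount = DEPTH_MULTICAST_WIDTH * DEPTH_MULTICAST_HEIGHT;
    for (int i = 0; i < depthPixelCount; ++i)
    {
        // -1 (no player) stored as signed char becomes 255 when converted to
        // BYTE, so the later (player != 0xff) test keeps working.
        indexSensor2buffer[i] = static_cast<BYTE>(s2buffer[i]);
    }

    // ... fill bodyIndexFrameRenderBuffer from s2buffer as before ...

    bodyIndexFrame.reset();

With a copy like this, the values read later through indexSensor2buffer stay in the expected 0-5 / 255 range.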

Thanks
Kirubha

kvnwinata commented on July 20, 2024
  1. Not sure why this is, but have you tested whether reinterpret_cast behaves the way you expect? Does the buffer have the correct values prior to the reinterpret_cast? (See the quick check sketched after this list.)
  2. I would suggest limiting the streams that you're requesting from the server. For example, if the full color streamer is being added to the client but is not actually doing anything, it will take most of the bandwidth, so removing that would improve the speed.
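
For the first point, a quick check along these lines (plain C++, using the s2buffer pointer your snippet gets from AccessRawUnderlyingBuffer) would show what the raw signed char buffer actually contains before any cast; with valid data, only -1 (no player) and 0-5 (player indices) should appear:

    #include <cstdio>
    #include <map>

    // Count how often each raw value occurs in the body index buffer
    // (s2buffer is the pointer returned by AccessRawUnderlyingBuffer above).
    std::map<int, int> histogram;
    const int depthPixelCount = DEPTH_MULTICAST_WIDTH * DEPTH_MULTICAST_HEIGHT;
    for (int i = 0; i < depthPixelCount; ++i)
    {
        ++histogram[static_cast<int>(s2buffer[i])];
    }
    for (const auto& entry : histogram)
    {
        std::printf("value %d : %d pixels\n", entry.first, entry.second);
    }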

-kevin

kirubhakinect commented on July 20, 2024

Thanks Kevin :)

Now I have fixed the player index problem by specifying the following condition correctly in my player index logic:
int player = buffer[w + (h * 512)];

So I am able to see the human point cloud alone by using the player index.

Basically, I do not receive the full color frame resolution (1920*1080 = 2073600) from the secondary sensor. I have checked the color frame length by executing the following code:

IColorFramePtr colorFrame;
if (colorStreamer->AcquireLatestFrame(&colorFrame))
{
    UINT bufferSize;
    unsigned char* buffer;
    colorFrame->AccessRawUnderlyingBuffer(&bufferSize, &buffer);
    for (int width = 0; width < 1920; width++)
    {
        for (int height = 0; height < 1080; height++)
        {
            int colorIndexes = width + (height * 1920);
            if (colorIndexes <= 1530000)   // guard added to avoid the access violation
            {
                BYTE R = buffer[4 * colorIndexes];
                BYTE G = buffer[1 + 4 * colorIndexes];
                BYTE B = buffer[2 + 4 * colorIndexes];
                testbytes[4 * colorIndexes]     = R;
                testbytes[1 + 4 * colorIndexes] = G;
                testbytes[2 + 4 * colorIndexes] = B;
            }
        }
    }
}

In the above code I added the condition
if (colorIndexes <= 1530000)
because beyond that I receive an access violation error, even though the color frame should have a length of 2073600. Because of this problem, the color and depth of the human do not match correctly.
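
A small sanity check using the bufferSize that AccessRawUnderlyingBuffer already returns would show how many pixels are actually arriving (assuming 4 bytes per pixel, as in the code above); any index into the buffer has to stay below that count:

    #include <cstdio>

    // Report the received color frame size instead of assuming 1920x1080.
    UINT bufferSize = 0;
    unsigned char* buffer = nullptr;
    colorFrame->AccessRawUnderlyingBuffer(&bufferSize, &buffer);

    const UINT receivedPixels = bufferSize / 4;   // 4 bytes per pixel assumed
    const UINT fullHdPixels   = 1920u * 1080u;    // 2073600
    std::printf("received %u pixels, full HD would be %u\n", receivedPixels, fullHdPixels);
    // Indexing at or past receivedPixels reads outside the buffer, which matches
    // the access violation described above.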

So every time, the depth of the human appears in front and the color of the human falls behind (as you can see in my first screenshot).

2. Thanks for your suggestion, but our situation is that we are already getting a low frame rate (5-8 fps). I have to increase it to at least 15-20 fps, even if not the full 30 fps.

Actually, my system's bandwidth is 2 Mbps.

So I think your suggestion would be to increase my bandwidth.

Kirubha

kvnwinata commented on July 20, 2024

Hi Kirubha,

Yes, the full resolution was not sent because it would take too much bandwidth. You can actually adjust the resolution in kv2streamer-lib/kv2client/SharedConfig.h:

#define COLOR_MULTICAST_WIDTH 1280
#define COLOR_MULTICAST_HEIGHT 720
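
If the client-side lookup keeps assuming 1920x1080 while the stream arrives at the multicast resolution, the indices will run past the received buffer. A rough sketch of indexing with the multicast dimensions instead, assuming the 4-bytes-per-pixel layout used in the earlier snippets and the buffer returned by AccessRawUnderlyingBuffer:

    // Index the received color buffer using the multicast resolution rather
    // than the sensor's native 1920x1080 (4 bytes per pixel assumed).
    UINT bufferSize = 0;
    unsigned char* buffer = nullptr;
    colorFrame->AccessRawUnderlyingBuffer(&bufferSize, &buffer);

    for (int y = 0; y < COLOR_MULTICAST_HEIGHT; ++y)
    {
        for (int x = 0; x < COLOR_MULTICAST_WIDTH; ++x)
        {
            int colorIndex = x + y * COLOR_MULTICAST_WIDTH;   // always inside the received buffer
            BYTE r = buffer[4 * colorIndex];
            BYTE g = buffer[4 * colorIndex + 1];
            BYTE b = buffer[4 * colorIndex + 2];
            // ... use r, g, b ...
        }
    }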

Best,

Kevin

kirubhakinect commented on July 20, 2024

Thanks Kevin,

Is it because the full resolution was not sent from my secondary sensor over the network that the color and depth are not mapped in my screenshot above? Do I have to change any encoding in the kv2server application in order to receive the full bytes over the network?

Please suggest what I have to modify in order to map the color and depth correctly in my screenshot above.

Kirubha

kvnwinata commented on July 20, 2024

Hi Kirubha,

Can you tell me what streamer you're using? If you're trying to construct the point cloud you would want to use the depth streamer and the colored depth streamer (original color image transformed to match the size of the depth image).
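
A rough sketch of that combination is below; the colored-depth frame and streamer names are assumptions modelled on the Acquire/Access pattern of the earlier snippets rather than taken from the kv2streamer headers, so check them against the actual class names:

    // Sketch only: names marked "assumed" are not verified against kv2streamer.
    IDepthFramePtr depthFrame;                 // depth streamer frame
    IColoredDepthFramePtr coloredDepthFrame;   // assumed type name

    if (depthStreamer->AcquireLatestFrame(&depthFrame) &&
        coloredDepthStreamer->AcquireLatestFrame(&coloredDepthFrame))
    {
        UINT depthSize = 0, colorSize = 0;
        UINT16* depthBuffer = nullptr;          // depth in millimetres, 512 x 424 (assumed layout)
        unsigned char* colorBuffer = nullptr;   // color already resampled to 512 x 424

        depthFrame->AccessRawUnderlyingBuffer(&depthSize, &depthBuffer);
        coloredDepthFrame->AccessRawUnderlyingBuffer(&colorSize, &colorBuffer);

        // Both buffers now share the depth resolution, so the same index
        // addresses matching depth and color samples when building the point cloud.

        depthFrame.reset();
        coloredDepthFrame.reset();
    }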

Kevin

kirubhakinect commented on July 20, 2024

Thanks Kevin,
We are using both the color stream and the depth stream. Yes, we are matching the color image to the size of the depth image.
