MartyG-RealSense commented on June 15, 2024

Hi @ronbitonn. When saving depth data to PNG files, most of the depth information is lost, and unfortunately there is no way to avoid this when saving to PNG. If possible, it is better to save to the .raw image format, which preserves the depth information.

The best RealSense reference about reading depth data from a .raw file (which follows the UVC specification for the depth stream) is at #2231 (comment)
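
As a rough illustration of that approach, here is a minimal sketch (not the script in the linked comment) that saves one depth frame as raw 16-bit data and reads it back with NumPy. The 1280x720 Z16 stream settings and the file name are assumptions made for the example.

import numpy as np
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 1280, 720, rs.format.z16, 30)
pipeline.start(config)

try:
    frames = pipeline.wait_for_frames()
    depth_frame = frames.get_depth_frame()

    # Copy the raw 16-bit depth values and write them to disk unmodified
    depth_image = np.asanyarray(depth_frame.get_data())   # dtype uint16, shape (720, 1280)
    depth_image.tofile("frame_0001.raw")                   # example file name
finally:
    pipeline.stop()

# A .raw file has no header, so the resolution must be supplied when reading it back
restored = np.fromfile("frame_0001.raw", dtype=np.uint16).reshape(720, 1280)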

ronbitonn commented on June 15, 2024

Hi @MartyG-RealSense,

I hope this message finds you well. I have been facing an issue with capturing RGB and depth information using the Intel D435 camera and saving them as PNG files. When attempting to read the depth information back from these PNG files, I get unexpected values that represent RGB data rather than the actual depth values.

After reviewing your guidance on the limitations of PNG for preserving depth information, I'm seeking recommendations on how to efficiently save both RGB and depth data for subsequent use in object detection algorithms.

My goal is to save PNG files for the RGB frames and, for each frame, a corresponding matrix of depth information with the same dimensions as the RGB image. Ultimately, I aim to create a folder with X*60 frames, along with an equal number of text files containing the depth information. This structure will enable me to correlate the depth information with the RGB data, facilitating the development of object detection algorithms.

Could you kindly recommend the best approach for achieving this goal while ensuring that the depth information is accurately preserved for subsequent analysis?

Thank you in advance for your assistance.

MartyG-RealSense commented on June 15, 2024

#4934 (comment) has a Python script for saving RGB as a PNG image and depth as an array of scaled matrices saved as an .npy file.
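
The general pattern, roughly, is to write the colour frame with OpenCV and dump the raw depth values with NumPy so that no depth precision is lost. A minimal sketch along those lines (the stream settings and file names below are assumptions, not taken from the linked script):

import numpy as np
import cv2
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.color, 1280, 720, rs.format.bgr8, 30)
config.enable_stream(rs.stream.depth, 1280, 720, rs.format.z16, 30)
profile = pipeline.start(config)

# The depth scale converts the raw uint16 units to metres during later analysis
depth_scale = profile.get_device().first_depth_sensor().get_depth_scale()

try:
    frames = pipeline.wait_for_frames()
    color = np.asanyarray(frames.get_color_frame().get_data())
    depth = np.asanyarray(frames.get_depth_frame().get_data())   # uint16, same resolution

    cv2.imwrite("frame_0001_color.png", color)   # PNG is lossless for the RGB data
    np.save("frame_0001_depth.npy", depth)       # full 16-bit depth, reload with np.load()
finally:
    pipeline.stop()

Saving a pair of files per frame like this keeps the depth values exactly aligned with the RGB pixels for later object detection work.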

MartyG-RealSense commented on June 15, 2024

Hi @ronbitonn Do you require further assistance with this case, please? Thanks!

MartyG-RealSense commented on June 15, 2024

Case closed due to no further comments received.

ronbitonn commented on June 15, 2024

Subject: Performance Issue in Bag File Conversion to JPEG and Depth Arrays with Jetson Orin and RealSense D435 Camera

Issue:

While using the provided scripts with a Jetson Orin and a RealSense D435 camera, there is a noticeable performance concern during the conversion of rosbag files to JPEG images and depth arrays. The current implementation takes approximately 30 seconds for the conversion of one minute of recorded data, which is suboptimal for real-time or large-scale applications.

Platform Details:

Jetson Orin
Camera: Intel RealSense D435

Problem Description:

The observed bottleneck lies in the code responsible for processing and converting the frames from the rosbag files to JPEG images and depth arrays. This performance concern might impact the scalability and real-time usability of the solution on the Jetson Orin platform.

Code (recording script, run with python3 record_multiple_bagfiles_d435.py):

import os
import pyrealsense2 as rs
import cv2
import numpy as np
import time
import datetime

import rosbag

NUM_RECORDINGS = 20
BREAK_TIME = 2
video_length = 60

ctx = rs.context()
devices = ctx.query_devices()

if len(devices) >= 1:
    first_device = devices[0]
    first_device_serial_number = first_device.get_info(rs.camera_info.serial_number)
else:
    raise RuntimeError("No RealSense device detected")

pipeline_record = rs.pipeline()
config_record = rs.config()
config_record.enable_device(first_device_serial_number)

config_record.enable_stream(rs.stream.color, 1280, 720, rs.format.rgb8, 30)
config_record.enable_stream(rs.stream.depth, 1280, 720, rs.format.z16, 30)

# Sensor index 1 is the RGB camera on the D400 series (index 0 is the stereo depth module)
color_sensor_record = first_device.query_sensors()[1]
color_sensor_record.set_option(rs.option.auto_exposure_priority, False)
color_sensor_record.set_option(rs.option.sharpness, 100)

for i in range(NUM_RECORDINGS):
    now = datetime.datetime.now()
    timestamp = now.strftime("%Y%m%d_%H%M%S_%f")
    output_filename = f"/data/records/records_ivri_farm/{timestamp}"
    bag_filename = output_filename + ".bag"
    config_record.enable_record_to_file(bag_filename)

    pipeline_record_profile = pipeline_record.start(config_record)
    device_record = pipeline_record_profile.get_device()
    device_recorder = device_record.as_recorder()

    color_sensor_record = device_record.query_sensors()[1]
    color_sensor_record.set_option(rs.option.auto_exposure_priority, False)

    try:
        rs.recorder.pause(device_recorder)
        print('Pausing.', end='')
        camera_startup_pause = 2
        for t in range(camera_startup_pause*2):
            print('\r','Pausing','.'*(t+2), sep='', end='')
            time.sleep(0.5)

        print()    
        rs.recorder.resume(device_recorder)

        colorizer = rs.colorizer()

        # Capture and preview frames for video_length seconds
        start = time.time()
        while time.time() - start < int(video_length):
            frames = pipeline_record.wait_for_frames()
            frames.keep()

            color_frame = frames.get_color_frame()
            depth_frame = frames.get_depth_frame()

            color_frame = np.asanyarray(color_frame.get_data())
            color_frame = cv2.cvtColor(color_frame, cv2.COLOR_RGB2BGR)
            depth_frame = np.asanyarray(colorizer.colorize(depth_frame).get_data())

            camera_images = np.vstack((color_frame, depth_frame))

            camera_pair = first_device_serial_number
            cv2.namedWindow('Camera ' + str(camera_pair), cv2.WINDOW_AUTOSIZE)
            cv2.imshow('Camera ' + str(camera_pair), camera_images)
            cv2.waitKey(5)

    finally:
        pipeline_record.stop()
        cv2.destroyAllWindows()

    print(f"Recorded video saved to: {bag_filename}")

    time.sleep(BREAK_TIME)

Second script (converting the recorded bag files to JPEG images and depth arrays):

#!/usr/bin/env python3

import os
import numpy as np
import cv2
import rosbag
from datetime import datetime, timedelta

bag_directory = "/data/records/records_ivri_farm/new_31_01_2024"

for bag_file in os.listdir(bag_directory):
    # Process only the .bag recordings in the directory
    if not bag_file.endswith(".bag"):
        continue

    bag_path = os.path.join(bag_directory, bag_file)
    pipeline = rosbag.Bag(bag_path)

    bag_file_name = os.path.splitext(os.path.basename(bag_path))[0]
    date_str = bag_file_name[:8]
    start_time_str = bag_file_name[9:15]

    def add_seconds_to_start_time(start_time_str, seconds):
        start_time = datetime.strptime(start_time_str, "%H%M%S")
        new_time = start_time + timedelta(seconds=seconds)
        return new_time.strftime("%H%M%S")

    output_dir = f"/data/records/records_ivri_farm/new_31_01_2024{bag_file_name}"
    os.makedirs(output_dir, exist_ok=True)
    print("create output_dir - " + output_dir)

    playback_speed = 5
    color_frame_count = 0
    depth_frame_count = 0

    for topic, msg, t in pipeline.read_messages():
        timestamp_seconds = t.to_sec()
        adjusted_time_str = add_seconds_to_start_time(start_time_str, timestamp_seconds)

        if topic == "/device_0/sensor_1/Color_0/image/data":
            color_image = np.frombuffer(msg.data, dtype=np.uint8).reshape(msg.height, msg.width, -1)

            # Keep only every 6th colour frame (30 FPS -> 5 FPS) to limit the output volume
            if color_frame_count % 6 == 0:
                color_image_path = f"{output_dir}/{date_str}_{adjusted_time_str}_{color_frame_count}_color.jpg"
                cv2.imwrite(color_image_path, cv2.cvtColor(color_image, cv2.COLOR_BGR2RGB))

            color_frame_count += 1

        elif topic == "/device_0/sensor_0/Depth_0/image/data":
            depth_image = np.frombuffer(msg.data, dtype=np.uint16).reshape(msg.height, msg.width)

            if depth_frame_count % 6 == 0:
                depth_npy_path = f"{output_dir}/{date_str}_{adjusted_time_str}_{depth_frame_count}_depth.npy"
                np.save(depth_npy_path, depth_image)

            depth_frame_count += 1

    pipeline.close()

MartyG-RealSense commented on June 15, 2024

Hi @ronbitonn If you increase the frame queue size then the pipeline will be able to contain more frames simultaneously (the default maximum pipeline capacity is 16 frames for each stream type), though this increases the amount of computer memory that is consumed. Python code for setting the frame queue size can be found at #6448 (comment)
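
For illustration, a minimal sketch of that approach using pyrealsense2's frame_queue (the capacity of 50 is only an example value; see the linked comment for the exact code):

import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.color, 1280, 720, rs.format.rgb8, 30)
config.enable_stream(rs.stream.depth, 1280, 720, rs.format.z16, 30)

# A larger queue lets slow processing lag behind capture without dropping frames,
# at the cost of extra memory; keep_frames=True stops the SDK from recycling them.
queue = rs.frame_queue(50, keep_frames=True)
pipeline.start(config, queue)

try:
    for _ in range(300):
        frame = queue.wait_for_frame()
        frameset = frame.as_frameset()
        depth_frame = frameset.get_depth_frame()
        # ... process or save the frames here ...
finally:
    pipeline.stop()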

ronbitonn commented on June 15, 2024

Hi @MartyG-RealSense

I appreciate your suggestion on increasing the frame queue size. Currently, due to storage limitations, I record as a rosbag file first and then convert to JPEGs and depth arrays separately. Any recommendations for a more efficient approach?

Additionally, if there's an option to reduce the number of frames and depth arrays saved per second in the rosbag file, it would be beneficial for storage optimization on my Jetson Orin.

Thanks for your advice.

MartyG-RealSense commented on June 15, 2024

Rosbags are the most efficient method of writing multiple frames of RealSense data.

A way to set a custom frame rate is to only use every 'nth' frame (e.g. every 5th frame out of 30 when using 30 FPS to simulate 6 FPS). A Python script that demonstrates this technique is at #3169
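
As a rough sketch of that frame-skipping idea (the stream settings are placeholders, and the kept frames would be saved or converted in place of the final comment):

import pyrealsense2 as rs

SOURCE_FPS = 30
TARGET_FPS = 6
KEEP_EVERY = SOURCE_FPS // TARGET_FPS   # keep every 5th frame -> effective 6 FPS

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 1280, 720, rs.format.z16, SOURCE_FPS)
pipeline.start(config)

try:
    for frame_count in range(SOURCE_FPS * 10):   # roughly 10 seconds of capture
        frames = pipeline.wait_for_frames()
        if frame_count % KEEP_EVERY != 0:
            continue                             # drop 4 out of every 5 frames
        depth_frame = frames.get_depth_frame()
        # ... save or process only the kept frame here ...
finally:
    pipeline.stop()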

ronbitonn commented on June 15, 2024

OK @MartyG-RealSense, thank you very much. I will check it out and update soon. I appreciate your help a lot.
