
record3d's Introduction

2022/08/16 Update: Added camera position streaming (introduced breaking changes). To be used with Record3D 1.7.2 and newer.

2021/07/28 Update: Introduced support for higher-quality RGB LiDAR streaming. To be used with Record3D 1.6 and newer.

2020/09/17 Update: Introduced LiDAR support. To be used with Record3D 1.4 and newer.

This project provides C++ and Python libraries for the iOS Record3D app, which allows you (among other features) to live-stream RGBD video from iOS devices with a TrueDepth camera to a computer via a USB cable.

Prerequisites

  • Install CMake >= 3.13.0 and make sure it is in PATH.
  • On macOS and Windows, install iTunes.
  • On Linux, install libusbmuxd (sudo apt install libusbmuxd-dev); it should be installed by default on Ubuntu.

Installing

The libraries are multiplatform — macOS, Linux and Windows are supported.

Python

You can install either via pip:

python -m pip install record3d

or build from source (run as admin/root):

git clone https://github.com/marek-simonik/record3d
cd record3d
python setup.py install

C++

After running the following, you will find the compiled static library in the build folder and the header files in the include folder.

macOS and Linux

git clone https://github.com/marek-simonik/record3d
cd record3d
mkdir build && cd build
cmake -DCMAKE_BUILD_TYPE=Release ..
make -j8 record3d_cpp
make install
# now you can link against the `record3d_cpp` library in your project

Windows

git clone https://github.com/marek-simonik/record3d
cd record3d
md build
cd build
cmake -DCMAKE_BUILD_TYPE=Release -G "Visual Studio 15 2017 Win64" ..
# Open the generated Visual Studio Solution (.sln) file and build the `record3d_cpp` project

Sample applications

There are Python (demo-main.py) and C++ (src/DemoMain.cpp) sample projects that demonstrate how to use the library to receive and display an RGBD stream.

Before running the sample applications, connect your iOS device to your computer and open the Record3D iOS app. Go to the Settings tab and enable "USB Streaming mode".

Python

After installing the record3d library, run `python demo-main.py`, then press the record button in the iOS app to start streaming RGBD data.
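
For reference, a minimal streaming sketch distilled from demo-main.py; it assumes a device is connected over USB with "USB Streaming mode" enabled, and that the installed record3d package exposes the Record3DStream API used by the demo:

from threading import Event
from record3d import Record3DStream

frame_ready = Event()
session = Record3DStream()
session.on_new_frame = frame_ready.set           # fires on a background thread
session.on_stream_stopped = lambda: print('Stream stopped')

devices = Record3DStream.get_connected_devices()
if not devices:
    raise RuntimeError('No iOS device found')
session.connect(devices[0])                      # start streaming from the first device

frame_ready.wait()                               # block until the first frame arrives
depth = session.get_depth_frame()                # float32 depth map (numpy array)
rgb = session.get_rgb_frame()                    # RGB image (numpy array)
print(depth.shape, rgb.shape)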

C++

You can build the C++ demo app by running the following (press the record button in the iOS app to start streaming RGBD data):

macOS and Linux

git clone https://github.com/marek-simonik/record3d
cd record3d
mkdir build && cd build
cmake -DCMAKE_BUILD_TYPE=Release ..
make -j8 demo
./demo

Windows

git clone https://github.com/marek-simonik/record3d
cd record3d
md build
cd build
cmake -DCMAKE_BUILD_TYPE=Release -G "Visual Studio 15 2017 Win64" ..
# Open the generated Visual Studio Solution (.sln) file and run the `demo` project

record3d's People

Contributors

adrelino, marek-simonik, ylabo0717


record3d's Issues

[feature-request] Also get the 6DoF camera odometry

Great app! We were rolling our own in-house solution for some research at NYU, but we might switch to using your app.

Currently, RGB, depth and the intrinsic parameters are exposed.
However, for full 3D reconstruction or SLAM applications, it would be essential to also stream the iPhone extrinsics (i.e., XYZ location, camera orientation, etc.). The closest project that exposes this info (though by logging to the iPhone's local disk) is https://github.com/PyojinKim/ARKit-Data-Logger

Is this something that you have plans to expose? It would be really helpful for SLAM applications.
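
Since the 2022/08/16 update noted at the top of this page, camera position streaming is available (Record3D 1.7.2 and newer). A hedged sketch of reading the pose via the Python bindings, assuming the get_camera_pose() accessor from the updated demo-main.py and a connected session as in the demo sketch above:

# Assumes Record3D 1.7.2+ and the get_camera_pose() accessor (to be verified);
# `session` is a connected Record3DStream.
pose = session.get_camera_pose()
# Rotation (quaternion) and position of the camera in the ARKit world frame:
print('quaternion:', pose.qx, pose.qy, pose.qz, pose.qw)
print('position:', pose.tx, pose.ty, pose.tz)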

Frames never arrive

Hi, I'm attempting to stream the depth frames to my Windows PC. I installed iTunes so that the USB interface works. When I attempt to run demo.exe, I get this in the log:

Found 1 iOS device(s):
        Device ID: 4776
        UDID: 00008101-000D392C01F0001E

Trying to connect to device with ID 4776.
Connected and starting to stream. Enable USB streaming in the Record3D iOS app (https://record3d.app/) in case you don't see RGBD stream.

And then it just hangs there. When I stop the stream from the Record3D app, it says 'Stream Stopped!', but the program never exits.

I've traced the problem to the usbmuxd_recv call on Record3DStream.cpp:243, which is called when attempting to load the PeerTalk header. It seems that it's not receiving any bytes: numTotalReceivedBytes stays at 0.

Any ideas? :) thanks

C++ demo app on Linux

Hi, thank you for your awesome project.

I have the exact same issue as this one, but on Ubuntu Linux.
I checked that the Python demo-main.py works just fine on the same machine,
but when I run ./demo after a successful build, it just hangs and the OpenCV display is not shown as expected.
Below is the configuration output after I run cmake -DCMAKE_BUILD_TYPE=Release:

-- The C compiler identification is GNU 7.5.0
-- The CXX compiler identification is GNU 7.5.0
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Performing Test CFLAG_Wall
-- Performing Test CFLAG_Wall - Success
-- Performing Test CFLAG_Wunknown_pragmas
-- Performing Test CFLAG_Wunknown_pragmas - Success
-- Performing Test CFLAG_Wunused_variable
-- Performing Test CFLAG_Wunused_variable - Success
-- Looking for stpncpy
-- Looking for stpncpy - found
-- Looking for sleep
-- Looking for sleep - not found
-- Looking for pselect
-- Looking for pselect - found
-- Looking for sys/inotify.h
-- Looking for sys/inotify.h - found
-- Found OpenCV: /usr (found version "3.2.0")
-- Configuring done
-- Generating done
-- Build files have been written to: /home/jaykim305/record3d/build

Any ideas?
P.S. I tried restarting the app and the iOS devices (both iPad and iPhone), but still no progress.

World/Camera coordinates

Let me start by saying that this is a great app and the API is a bliss to work with. I was wondering if it would be possible to send some camera coordinates, like relative XYZ and up/left/forward vectors. I'm not sure whether ARKit provides you with that data, but I think it would be very interesting for virtual productions and AR across devices.

Setting JPEG compression for in-app captures

Hi @marek-simonik, thanks for the fantastic app! I use depth cameras for computer vision research, and the ability to capture RGB-D videos without being tethered to a computer is a game-changer.

My one complaint is that the JPEG compression rate for in-app captures leaves a lot of artifacts. Are there any plans to expose that setting in the future? I've made a few captures using the USB interface with lossless RGB streaming, and the resulting images are much better. I realize that a higher setting will significantly increase file sizes on the iPad, but that is something I am willing to work around.

Thanks!

[feature request] Rear RGB+Odometry

Record3D successfully delivers on some parts of the following matrix:

Device and sensor           RGB   Depth   Odometry
Pro, FaceID                  Y      Y        N
Pro, LiDAR + rear camera     Y      Y        Y
non-Pro, FaceID              Y      Y        N
non-Pro, rear camera         N      N        N

The Pro models expose RGB + Depth + Odometry. The non-Pro models expose only RGB + Depth (via FaceID cam).

This feature request asks to enable the last row of the matrix, i.e., RGB + Odometry on non-Pro models.
The motivation is to use the much cheaper non-Pro models in a variety of robotics research settings.
Mixing and matching iPhones would make the research more cost-effective; there are many settings in which we don't need depth, but do need RGB and odometry.
We think this feature would be useful not just for us but for many others.

Thank you!

Import .r3d metadata into Blender and convert to animation

Hi Marek :)

We are currently implementing a Blender Python script that imports a .r3d file and then automatically animates the currently selected object in Blender based on the values in the metadata.

The repository with our blend file, data.r3d and Python script is here:
https://github.com/thesystemcollective/record3d-blender-import
(The Python script is embedded in Blender; changes to the .py file won't change the behaviour, it is just there for easier copy-pasting into different blend files.)

I have a few questions that would help me with debugging:

  1. Am I right in assuming that the poses array is ordered as follows:
    [qw, qx, qy, qz, px, py, pz], where q is the quaternion and p is the position?
  2. My rotation and positioning seem to be off; did you use different axes than Blender does? (See the sketch after this note.)

Please note that, for privacy reasons, I removed all files except the metadata file from the .r3d file; it works with a full .r3d file too, though.
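
On question 2, one likely culprit is the axis convention: ARKit uses a right-handed, Y-up world, while Blender is Z-up. Below is a hedged sketch of building a Blender world matrix from one 7-element pose entry; the [qw, qx, qy, qz, px, py, pz] ordering from question 1 and the 90-degree axis fix are assumptions to verify, not confirmed behaviour.

import math
from mathutils import Matrix, Quaternion

def pose_to_matrix(pose):
    # Assumed ordering from question 1 (verify; it may be qx, qy, qz, qw, px, py, pz).
    qw, qx, qy, qz, px, py, pz = pose
    mat = Quaternion((qw, qx, qy, qz)).to_matrix().to_4x4()  # mathutils wants (w, x, y, z)
    mat.translation = (px, py, pz)
    # Assumed axis fix: rotate ARKit's Y-up world into Blender's Z-up convention.
    return Matrix.Rotation(math.pi / 2.0, 4, 'X') @ mat

# e.g.: bpy.context.object.matrix_world = pose_to_matrix(metadata['poses'][frame_idx])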

What ARKit API are you using?

Hi,

What ARKit API are you using to get the depth, and is there any post-processing of the depth on the Record3D app side?

Launch start stream command from terminal

Hi,
I need to open the app and start the stream from the terminal (iPhone 13 Pro connected to my Mac via cable).
Is there something in the library (a command/example) I could use?

Thanks a lot,
Mattia

The resolution of LiDAR RGBD images

Hi, thanks for sharing this repo. While using the USB live RGBD streaming, I encountered a problem: the resolution of the image shot by the LiDAR camera is too small, while the FaceID resolution is normal. I'm wondering if the resolution of the RGBD images can be changed?

How to process depth confidence map files?

Hello Marek!
Thank you for a great app (once again! --> I emailed you before for other issues).
I wonder how to process the depth confidence map that is output in .r3d files?

def load_confidence(filepath):
    with open(filepath, 'rb') as confidence_fh:
        raw_bytes = confidence_fh.read()
        decompressed_bytes = liblzfse.decompress(raw_bytes)
        confidence_img = np.frombuffer(decompressed_bytes, dtype=np.float32)
        print(raw_bytes)
        print(confidence_img)

    confidence_img = confidence_img.reshape((256, 192))  # For a LiDAR 3D Video

    return confidence_img

I tried to process it the same way as the depth map, but I get the following error:
ValueError: cannot reshape array of size 12288 into shape (256,192)

Thank you in advance!
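
The arithmetic in the error message points at a likely fix: the decompressed buffer is 49152 bytes, which is exactly 256 × 192 single bytes, whereas reading it as float32 yields only 12288 elements. A hedged revision, assuming the confidence map stores one uint8 per pixel (ARKit confidence levels are 0, 1 and 2):

import liblzfse
import numpy as np

def load_confidence(filepath):
    with open(filepath, 'rb') as confidence_fh:
        raw_bytes = confidence_fh.read()
    decompressed_bytes = liblzfse.decompress(raw_bytes)
    # 49152 bytes == 256 * 192, i.e. one byte per pixel (assumption based on the sizes above).
    confidence_img = np.frombuffer(decompressed_bytes, dtype=np.uint8)
    return confidence_img.reshape((256, 192))  # for a LiDAR 3D video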

Landscape support?

Hey!
I'm having a lot of fun with record3d. Awesome work!

I was wondering why there is no landscape support for shooting video? I can of course rotate the video in USB mode, but when shooting and editing landscape in the app, things get really weird. Am I missing something?

Also tiny feature request: a clutter free, fullscreen shooting/preview mode would be wonderful!

Cheers!

Is it possible to get a mesh reconstruction of the entire scene observed throughout the entire video?

Hello,

As the title suggests, my question is whether it is possible to get a mesh reconstruction of the entire scene observed throughout the video, as opposed to a per-frame mesh reconstruction that isn't stitched together with the meshes from other frames.

I'm hoping to use a reconstructed mesh to center the camera poses such that the world origin lies at the center of the reconstructed scene mesh, as opposed to the camera center of the first frame of the video.

Thanks!

RGB-D images

Hello, I would like to find out how I can get RGB-D images instead of videos. Thank you.

Several questions on updated resolution

Hi,
I wonder if you've updated both LiDAR & FaceID depth resolution?

depth_img = depth_img.reshape((1280, 960)) # For a FaceID camera 3D Video
depth_img = depth_img.reshape((512, 384)) # For a LiDAR 3D Video

Is the above correct?
Three more questions:

  1. Is it possible you could update the FaceID RGB resolution? It is still 640×480.
  2. It seems the LiDAR confidence resolution has not been updated?
  3. What are the units of the depth measurements? I assume they are mm?

Thank you very much!

Originally posted by @zehuiz2 in #7 (comment)

Stream over Wi-Fi?

Hi,

If I want to scan a large space such as a room, I have to keep the iPad connected to the laptop.
This is quite cumbersome, so I want to stream the RGBD images over Wi-Fi.

Is there any way to stream RGBD images over Wi-Fi, or any chance of adding this feature in the future?

RuntimeError: Cannot connect to device #0, try different index.

os: Windows 10
device: iPhone 12 Pro Max
record3d version: 1.5.4

Hi!
I'm trying to run demo_main.py with an iPhone 12 Pro Max.
I've installed iTunes, built the library from source, connected the device and pressed the record button. Then I run the demo script and get the error below for each device index in the range [-1, 10]:

Searching for devices
0 device(s) found
[libusbmuxd] usbmuxd_get_device_list: error opening socket!
Traceback (most recent call last):
  File "D:/ARVR-DL/utilities/record3d/demo-main.py", line 74, in <module>
    app.connect_to_device(dev_idx=0)
  File "D:/ARVR-DL/utilities/record3d/demo-main.py", line 30, in connect_to_device
    .format(dev_idx))
RuntimeError: Cannot connect to device #0, try different index.

Process finished with exit code 1

While debugging, I figured out that an empty list of available devices is returned here.

Any suggestion?

Thanks in advance!

What does "high-quality RGBD stream" mean?

Hi, thanks for sharing this repo. What does "high-quality RGBD stream" mean? Does it mean higher resolution or something else? Can I get high-resolution (640×480) RGB and a 256×192 depth map? Hoping for your kind reply. Thanks.

Problem exporting zipped PLY

Hello Marek

An hour ago, I recorded some 3D videos with the Record3D app, and when I got home I wanted to export them in ABC/PLY format, but I only get 4 kB files with no info. I see thumbnails of my videos in the library, but I can't read them.
How do I recover these videos?
Please help me.

Visual Code window pop up

Dear Marek, your library is amazing. I want to use the LiDAR data in my university project on obstacle avoidance for truck platooning (prototypes). Is it possible to extract the point data from your library to use in my control algorithm?

Best,
Shahryar

Problem when trying to install record3d package

Hi, I'm trying to install the newest version (1.3.1) of the record3d package using pip. However, I got this error message:

Collecting record3d
Using cached record3d-1.3.1-1.tar.gz (539 kB)
Preparing metadata (setup.py) ... done
Discarding https://files.pythonhosted.org/packages/bd/ef/9f785310e900e88cc01d98443bec956c8bda1901a099642dae68db5c9f8b/record3d-1.3.1-1.tar.gz (from https://pypi.org/simple/record3d/): Requested record3d from https://files.pythonhosted.org/packages/bd/ef/9f785310e900e88cc01d98443bec956c8bda1901a099642dae68db5c9f8b/record3d-1.3.1-1.tar.gz has inconsistent version: expected '1.3.1.post1', but metadata has '1.3.1'
Using cached record3d-1.3.0-cp310-cp310-macosx_10_15_x86_64.whl
Requirement already satisfied: numpy in ./opt/miniconda3/envs/LipMotion/lib/python3.10/site-packages (from record3d) (1.23.5)
Installing collected packages: record3d
Successfully installed record3d-1.3.0

Instead of version 1.3.1, it automatically installed the old version, and I encountered the same segmentation fault problem. Could you please help figure it out?

Thank you so much!

Support for recording plug-in mics?

Hey, love the app. I was hoping to use a USB preamp and mic to record high-quality audio alongside the 3D video. Is this something that's possible in future iterations?

matrix data stream not visible

I am a student and I need to save the RGBD stream from an iPhone, connected to a robot, into a txt file. There is no monitor on the robot; it runs Ubuntu 18.04 and ROS Melodic. I installed the C++ version of record3d and the demo app on the robot.
If I log into the robot via ssh (using the -X flag), the demo app seems to work: I see the two windows with the optical and LiDAR images, but in the terminal window I cannot see the textual matrix of data, which is what I need.
Can you help me? I'm not a programmer but an electronics student.
Thank you.

WebRTC calls over HTTP causing security warnings

Thanks for the work - nice app. Purchased full version today.

Is there any way to get the iOS app to serve over HTTPS? It's failing in Chrome etc. due to mixed-content security. It is possible to set Chrome flags to work around this, but that is not ideal for obvious reasons. Happy to help if needed; I suppose there may be a way of using a proxy service within React/Angular etc.

Error in demo.exe

I've successfully built the solution, but when I run the demo.exe I get this error:

[screenshot of the error]

Getting an error ... iPhone 12 Pro plugged into a Windows 10 machine

Is it possible to plug the iPhone 12 into Windows 10 and use the Python app?

Searching for devices
[libusbmuxd] usbmuxd_get_device_list: error opening socket!
0 device(s) found
Traceback (most recent call last):
  File "C:\Users\GennaroSchiano\Documents\GitHub\record3d\demo-main.py", line 76, in <module>
    app.connect_to_device(dev_idx=0)
  File "C:\Users\GennaroSchiano\Documents\GitHub\record3d\demo-main.py", line 31, in connect_to_device
    raise RuntimeError('Cannot connect to device #{}, try different index.'
RuntimeError: Cannot connect to device #0, try different index.

Segfault when streaming from iOS 13

Hi Marek,

I am working on an app that uses the USB streaming feature of your app. The Python demo app works flawlessly with my iPhone; however, whenever I try to use it with an iPad Pro 2020, I get a segfault and Python crashes.

I am attaching several crash logs produced this way. It happens in 100% of cases with my iPad.

macOS: 10.14.6
iPadOS: 13.5

$ python3 demo-main.py
Searching for devices
1 device(s) found
	ID: 4779
	UDID: 00008027-001E091A3606802E

Segmentation fault: 11

Crash.zip

Load .r3d files in Python

Hey,

If I understand correctly, ".r3d" files contain all the RGBD information. Is it possible to load them via the Python API, or is there a small snippet you could provide on how to load them? Currently, I export to .mp4, but as I understand it, the quality is not optimal and the files are pretty large.

Possible to export without LZFSE?

I can write a converter, but it makes things more of a hassle, especially if I want to create a workflow for non-developers.

Ideally, a zip with JPGs for both RGB and depth would be nice.

Install error for the Record3D Python package on Windows, Python 3.10

During installation of the latest revision of the Record3D Python streaming package, I get an error when compiling Record3DStream.cpp:

[...]\record3d\python-bindings\pybind11\include\pybind11\cast.h(442,36): error C2027: use of undefined type '_frame' [[...]\record3d\build\temp.win-amd64-cpython-311\Release\record3d_py.vcxproj]
C:\Python\Python311\include\pytypedefs.h(22): message : see declaration of '_frame' [[...]\record3d\build\temp.win-amd64-cpython-311\Release\record3d_py.vcxproj]

Any hints as to what the reason could be?

Do 'poses' in 'metadata' refer to world-to-camera transformation (extrinsics) or camera-to-world transformation?

Hello,

first of all, thank you for your excellent work with record3d. It really has made extracting RGBD video from the iPad a fluid experience.
The question I would like to ask today is the following:

Do 'poses' in 'metadata' refer to world-to-camera transformation (extrinsics) or camera-to-world transformation?

I'm asking because different GitHub issues related to the ARKit poses provide conflicting information. For instance, in issue #31, t is referred to as the "world pose", which I assume refers to the coordinates of the world origin in the camera frame. This suggests that [R | t] is the world-to-camera transformation (extrinsics).

However, in the same issue, you reply that X_{world} = [R|t] X_{cam}, suggesting that [R | t] is actually the camera-to-world transformation.

I would really appreciate it if you could resolve this confusion.
Thank you.

Scale of the TrueDepth camera

Hi, thank you for the charming app.

I am currently using the TrueDepth camera of an iPhone 12 Pro Max. I basically follow demo-main.py to stream RGBD frames, and the returned depth map is of type float32. What should I do to recover the absolute depth from the depth map (say, in centimeters)?
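
If the stream follows ARKit's depth convention, the float32 values are already metric distances in meters, so centimeters are a single scale away; a minimal sketch under that assumption (worth verifying against an object at a known distance), where session is a connected Record3DStream:

import numpy as np

depth_m = session.get_depth_frame()   # float32; assumed to be meters (ARKit convention)
depth_cm = depth_m * 100.0            # centimeters, if that assumption holds
print(np.nanmin(depth_cm), np.nanmax(depth_cm))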

Segmentation fault (core dumped)

I am trying to stream RGBD video using record3d, but I run into a segmentation fault every single time:

1 device(s) found
        ID: 4776
        UDID: 00008101-0011348121B8001E

Camera Intrinsics: [[435.33657837   0.         240.14639282]
 [  0.         435.33657837 320.78341675]
 [  0.           0.           1.        ]]
INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
Segmentation fault (core dumped)

This is the software/hardware I am using:

  • iPhone 12 Pro Max;
  • iOS 16.2;
  • Record3D app 1.8.3;
  • record3d library 1.3.0 via pip;
  • Ubuntu 20.04.4 LTS 64-bit;
  • AMD Ryzen Pro 3975WX CPU with an NVIDIA RTX A4000.

Thank you!

List of issues 12/18/2021

Issue 1: Bought the full version but am still locked out of exporting RGBD. It brings up the modal but doesn't "restore". Happy to support, but I need my features please. $$$

Issue 2: One of the options resulted in an attempted export of 1.3 GB made of 3 files? I had "generate normals" on, but this seems excessive given it was a 7-second video.

Issue 3: I have no idea what each of the icons in the playback view is for, except the AR icon of course.

Issue 4: No USDZ export option? The AR view is available, but it needs to be shareable. This should be possible since glTF is there.

Issue 5: The library is full of preview images that are gray; it's hard to tell what I'm looking at sometimes.

Issue 6: The pull-right menu in the library view is a little finicky, with a risk of accidental deletion. It needs a better menu for actions. Happy to continue with feedback; I'm building an AR marketplace for scans and interactive content.

Issue 7: Can't rotate the AR preview around.

Hi, Question about intrinsics.

If I scale up the depthMap image size 4 times, is the correct intrinsic matrix obtained by multiplying each component by 4, where intrinsic_mat = self.get_intrinsic_mat_from_coeffs(self.session.get_intrinsic_mat())?
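
For reference, resizing an image by a factor s scales the pinhole parameters fx, fy, tx and ty by s, while the bottom-right 1 of the 3×3 matrix stays fixed, so it is not quite every component; a minimal sketch, assuming the 3×3 layout returned by get_intrinsic_mat_from_coeffs in demo-main.py:

import numpy as np

def scale_intrinsics(K, s):
    # Scale a 3x3 pinhole intrinsic matrix for an image resized by factor s.
    K_scaled = np.array(K, dtype=np.float64)
    K_scaled[0, 0] *= s   # fx
    K_scaled[1, 1] *= s   # fy
    K_scaled[0, 2] *= s   # tx (principal point x)
    K_scaled[1, 2] *= s   # ty (principal point y)
    return K_scaled       # K_scaled[2, 2] stays 1

# e.g. for a 4x upscaled depth map: K4 = scale_intrinsics(intrinsic_mat, 4.0)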

Get lens distortion characteristics

I have been using this app for quite a while and I am absolutely loving it.

As per Apple's documentation, "depth map representations are geometrically distorted to align with images produced by the camera", and we need to correct this distortion. Do we obtain distortion-corrected images and depth? If not, it would be great if you could provide the lens distortion characteristics supplied by Apple.

program crash

Connected to pydev debugger (build 183.4588.64)
Searching for devices
1 device(s) found
ID: 4779
UDID: ..

Process finished with exit code 139 (interrupted by signal 11: SIGSEGV)

I tried the Python version on both Mac and Linux, but the same problem occurred.

Any documentation on the depth frame format?

I've exported a recording in the native .r3d format and I'm attempting to read the depth data:

>>> import liblzfse, struct
>>> pth = 'winhome/Documents/3D Scans/2020-10-28--15-01-03/rgbd/1.depth'
>>> fh = open(pth, "rb")
>>> compressed = fh.read()
>>> decompressed = liblzfse.decompress(compressed)

But then I'm not sure what to do with the decompressed data. Is it just a case of reading each 4 bytes and unpacking them to a single-precision float? The JPGs are 192x256, and the maths on that seems to add up: 192 x 256 x 4 = 196608, and len(decompressed) gives me 196608.

So this looks right:

>>> f = [struct.unpack('f', decompressed[x:x+4]) for x in range(0, len(decompressed), 4)]

Then I guess I can just write f into any image format that supports floating point (.hdr or .exr, maybe).

Am I on the right lines? Are the values linear distances from the camera?

If so - it would be nice to add this to the docs.
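
That reasoning checks out under the assumption of raw little-endian float32 values; a more direct numpy version of the same decode (256 rows × 192 columns, per the byte count worked out above):

import liblzfse
import numpy as np

with open('rgbd/1.depth', 'rb') as fh:           # path as in the example above
    decompressed = liblzfse.decompress(fh.read())

# Assumption: raw little-endian float32 depth values, 256 x 192 pixels, in meters.
depth = np.frombuffer(decompressed, dtype=np.float32).reshape((256, 192))
print(depth.dtype, depth.shape, float(depth.min()), float(depth.max()))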

How to get the pointcloud from the USB-streamed depth images

Hi, many thanks for this great project. I'm looking for a way to transform the USB-streamed depth images into a point cloud on my PC, just like the results shown in the app. Could you share the code for how you make the transformation from depth to point cloud? Thanks!
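
While this can't claim to be the app's exact implementation, the standard pinhole back-projection produces this kind of point cloud; a sketch assuming a metric depth map and the 3×3 intrinsic matrix from the stream (points are in the camera frame; no world pose is applied):

import numpy as np

def depth_to_pointcloud(depth, K):
    # Back-project a depth map (meters) through pinhole intrinsics K (3x3).
    h, w = depth.shape
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)  # N x 3 points

If the depth map and intrinsics come from streams of different resolutions, scale the intrinsics first (see the intrinsics-scaling sketch earlier on this page).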

How to get higher-resolution RGB images (1920 x 1440)?

Hello,

first of all, thank you so much for your work. It has truly made extracting RGBD video from iphone / ipads a lot easier!

I just wanted to ask whether it's possible to extract higher-resolution RGB images than the current maximum resolution output by the app (960 x 720). I've been playing around with ARKit, and it seems the resolution of the capturedImage (RGB) is 1920 x 1440; is it possible to capture RGB video at this resolution on record3d as well, either via the app itself or via the USB stream to a Mac?

Thank you in advance :)

Strange LiDAR intrinsics

Hi,

I see strange behavior with my iPad Pro when inspecting the intrinsics of the LiDAR frames I receive through USB streaming: the fx, fy, tx and ty parameters are not constant over time; they change depending on the scene, which I find quite strange. Is that normal behavior?
Also, the frame resolution is 192x256, and the tx/ty values are around 366 and 476 respectively, which is also strange because they should not exceed the frame resolution. Am I missing something?

In comparison, with the FaceID camera, the intrinsics are consistent with the frame resolution and constant over time.

Thanks for your help,

Regards

Albert

PS: I'm using C++ Record3DStream class.

Few Questions Regarding Intrinsic and Extrinsic Parameters

First, I must say this is a great app, and I would like to thank the developer for being responsive!
I have some questions regarding the intrinsic and extrinsic parameters:

  1. Are the depth image and RGB image already fused? Do I need to estimate the rigid transformation between the LiDAR and the RGB camera?
  2. Do I need to calibrate the RGB/depth images in terms of radial and tangential distortions?
  3. I have difficulty understanding the poses matrix provided in the metadata. From previous posts, I think they are extrinsic parameters, and I assume each row corresponds to one image. But how could extrinsic parameters have only 7 elements? I would expect 12. (See the sketch below.)

I'm looking forward to hearing from you!
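
On question 3, the count itself is the clue: 7 numbers decompose as a unit quaternion (4 components) plus a translation (3 components), which expand into the usual 12-element 3×4 [R | t]; a hedged sketch using SciPy, with the [qx, qy, qz, qw, px, py, pz] ordering as an unverified assumption:

import numpy as np
from scipy.spatial.transform import Rotation

def pose7_to_Rt(pose):
    # Expand [qx, qy, qz, qw, px, py, pz] (assumed ordering) into a 3x4 [R | t].
    q, t = np.asarray(pose[:4]), np.asarray(pose[4:])
    R = Rotation.from_quat(q).as_matrix()   # SciPy expects (x, y, z, w)
    return np.hstack([R, t.reshape(3, 1)])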

Extending the recorded depth range

Hi!

I would like to know if the maximum depth could be extended. I believe it's set to around 3 meters by default, but the iPhone LiDAR supports 5 meters. Is there any way for us to manually change the maximum depth?

Also, I wonder if the new API coming with iOS 15.4 is supported.

Thank you!

Get the depth value in meters

Hi, how can I get the depth values in meters, in Python, from a recorded video (from the .ply file or the .r3d file)?

CMake errors

I must be missing some prerequisites for the USB setup; I can't figure it out. I have installed VS 2022 and 2019 and have the CMake tools installed.

I just keep getting errors from CMake.

I installed it to PATH for all users.

Any help would be great.

I have tried the Wi-Fi method, but it doesn't seem to work in Brave or Edge, and I am not installing Chrome.

Thanks.
