surround360's Introduction

Surround360 System

Surround360 is a hardware and software system for capturing and rendering 3D (stereo) 360 videos and photos, suitable for viewing in VR. We divide the open source components of the system into three subdirectories:

  • /surround360_design -- hardware designs and assembly instructions
  • /surround360_camera_ctl_ui -- Linux desktop application for controlling the camera to capture raw data
  • /surround360_render -- software for rendering the raw data that is captured into a format suitable for viewing in VR.

Instruction Manual

Please refer to our instruction manual (which covers use of both hardware and software) here:

https://github.com/facebook/Surround360/blob/master/surround360_design/assembly_guide/Surround360_Manual.pdf

Sample Data

We provide a sample dataset for those who are interested in testing the rendering software without first building a camera.

Join the Surround360 community

See the CONTRIBUTING file in each subdirectory for how to help out.

License

The hardware designs, camera control software, and rendering software are subject to different licenses. Please refer to the LICENSE file in each subdirectory. We also provide an additional patent grant.

surround360's People

Contributors

aparrapo, arnaudlopez, bkcabral, chaicko, emmanix2002, flarnie, fotolesny, hyunjinku, igorsugak, joelmarcey, np-csu, nschlemm, superdog8, thatch, tlian7, tox-fb, ybrsmm, zpao

surround360's Issues

Stereoscopic coverage

What is the degree of three-dimensional coverage? Why is the coverage centerline +/- 144 (h), 77 (v) degrees?

Rendering panorama with own sample data

Hello!

I managed to render the panorama with the provided sample data and now want to try to stitch my own images together. To that end I adapted the configuration files (intrinsics and JSON file), and I also created a new .yml file for the rectification as described in your configuration document.

I have a problem with the stereo panorama creation after the optical flow step. The picture pairs for optical flow are generated nicely but the resulting images to be horizontally stacked as a panorama are not 100% filled with image data, so there is a black bar on the right where there is no data (this seems like some projection problem to me).

Are there some other parameters (maybe in the Python script) that I have to change to get the stitching working with my own data? Which parameters should generally be altered?

Thank you!

USB 3.0 Expansion Cards UE-1008 vs UE-1004

Hi,

In the manual I see that 5 x UE-1004 cards are used.
Can you confirm that we can use 3 x UE-1008 cards in place of the 5 x UE-1004 without performance loss?

I am also considering an alternative setup, with the UE-1008 cards plugged directly into the PC without an intermediate USB3/fiber breakout box. In this case, would 2 x UE-1008 cards + 1 on-board USB3 port be sufficient?

Thanks.

Enrico

frames for testing

So far the project looks great. Thanks a lot.
Before we purchase the hardware I would like to test the render part. Are you able to share a set of frames (5 to 15 seconds) from each camera that we can use for testing? I know it is a lot to ask, but it would really help us understand what the software is doing.

Single capture box approach?

My dual-proc Xeon box has 8 x PCIe slots, so I'm thinking of putting the 5 x USB cards in there plus a SAS HBA plus a 10Gb/s NIC.

Then mounting the box on a custom-built wheeled dolly with the camera tripod secured to it along with the UPS .... run the 17 x USB cables directly into the box and store the live data directly onto the 16 x SSDs inside the box .... then run two Ethernet cables to my i7 render machine (plus the power cable to the UPS) .... one for the IPMI/KVM and the other for the 10Gb/s file transfer to the i7.

My rationale for this approach is this:

  1. I only need to have one box on set directly under the camera ... it's a heavy SuperMicro chassis so will be stable even on wheels.
  2. I only need to run 3 x black cables (robust, easily sourced and cheap) from the camera rig to the DIT station (Digital Imaging Technician).
  3. the DIT will have the live data on the Xeon machine under the camera (ZFS or BTRFS equivalent of RAID 5) and a back-up set on the i7 .... with 10Gb/s NFS we can pull data off the Xeon box at around 900 MB/s. DITs always make at least one back-up copy of the camera neg on set anyway, and this way we could also make another secure copy from the i7 data pool to an encrypted portable SSD if need be.
  4. while the Xeon box is securely capturing data the DIT can be rendering the RAW files on his i7 machine with decent monitors attached plus HMDs.

Any reasons for concern with this approach?

I totally get the thinking behind the LunchBox/USB Expansion Chassis/Areca 8 bay approach if you're out in the field or a live concert situation but on a Sound Stage or controlled environment, a single, heavy box under the camera rig with three long, durable, bog-standard cables running off it, might be a better alternative.

Appreciate feedback on the idea and any other approaches being considered by folks.

Bug in TestRingRectification.cpp

In TestRingRectification.cpp (Line 87), you have the following for loop for reading the side camera images for different frames:

vector<vector<Mat>> sideCamImages(framesList.size(), vector<Mat>());
for (int frameIdx = 0; frameIdx < framesList.size(); ++frameIdx) {
  const string& frameName = framesList[frameIdx];
  LOG(INFO) << "getting side camera images from frame: " << frameName;
  for (int i = 0; i < numSideCameras; ++i) {
    const string srcImagePath = FLAGS_root_dir + "/vid/" + framesList[0] +
      "/isp_out/" + camModelArray[i].cameraId + ".png";
    LOG(INFO) << "\t" << camModelArray[i].cameraId << ": " << srcImagePath;
    Mat origImage = imreadExceptionOnFail(srcImagePath, CV_LOAD_IMAGE_COLOR);
    Mat imageUndistorted;
    cvUndistortBicubic(
      origImage, imageUndistorted, intrinsic, distCoeffs, intrinsic);
    sideCamImages[frameIdx].push_back(imageUndistorted);
  }
}

I think the line

const string srcImagePath = FLAGS_root_dir + "/vid/" + framesList[0] +
        "/isp_out/" + camModelArray[i].cameraId + ".png";

should use framesList[frameIdx] instead; otherwise the images for the first frame are loaded repeatedly.
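
A corrected version of that statement, following the reporter's own suggestion, would look like this:

// Index by the current frame rather than always loading the first frame.
const string srcImagePath = FLAGS_root_dir + "/vid/" + framesList[frameIdx] +
        "/isp_out/" + camModelArray[i].cameraId + ".png";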

Facing error while compiling render on Mac

When I execute the following:

cd /surround360/surround360_render
cmake -DCMAKE_BUILD_TYPE=Release
make

I run into the following error:

ld: library not found for -lgflags
clang: error: linker command failed with exit code 1 (use -v to see invocation)
make[2]: *** [bin/TestRenderWithoutSides] Error 1
make[1]: *** [CMakeFiles/TestRenderWithoutSides.dir/all] Error 2
make: *** [all] Error 2

Any help regarding this issue?

Wide Angle Lens

I see that the Sunex wide-angle lenses are CS-mount, but the cameras are C-mount. Why?

Integration with 'Two Big Ears' audio workstation?

I know this forum is focused on the 360 camera and the visual side of VR content creation, but as we say in Hollywood, audio is 50% of a movie experience .... are there any plans to integrate the FB 360 camera engine workflow with the FB spatial audio workstation recently acquired from 'Two Big Ears'? .... it would seem to be a marriage made in virtual heaven.

numNovelViews larger than LazyNovelViewBuffer size leads to crash

Hello,

When I run TestRenderStereoPanorama, the program crashes because two variables conflict: numNovelViews (around 683) is larger than FLAGS_eqr_width / numCams (around 18).

In function a:

void generateRingOfNovelViewsAndRenderStereoSpherical() {
  ...
  const int numNovelViews = camImageWidth - overlapImageWidth; // per image pair
  ...
}

In function b:

void renderStereoPanoramaChunksThread(...) {
  ...
  LazyNovelViewBuffer lazyNovelViewBuffer(FLAGS_eqr_width / numCams, camImageHeight);
  ...
}

How should these two variables be set, and what is the relation between them? Thanks a lot!
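
Below is a minimal sketch of the size conflict described in the report. The variable names follow the issue text, and the guard itself is purely illustrative, not the project's actual control flow:

#include <glog/logging.h>

// Illustrative guard (not the project's code): check that the novel views
// generated per adjacent camera pair fit the per-chunk buffer width.
void checkNovelViewFit(int camImageWidth, int overlapImageWidth,
                       int eqrWidth, int numCams) {
  const int numNovelViews = camImageWidth - overlapImageWidth; // ~683 in the report
  const int chunkWidth = eqrWidth / numCams;                   // ~18 in the report
  if (numNovelViews > chunkWidth) {
    // Indexing past the end of the LazyNovelViewBuffer would explain the crash.
    LOG(FATAL) << "eqr_width " << eqrWidth << " too small: " << numNovelViews
               << " novel views vs chunk width " << chunkWidth;
  }
}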

Cameras overheat?

From the website:

"... and the industrial cameras can run for hours without overheating."

Does this imply they will overheat eventually? Or can this happily run 24/7? I guess adding a fan isn't a big deal.

Thanks for open sourcing.

Stereo inconsistency

I generated the panorama using the sample data, but the result has a lot of stereo inconsistency, both in geometric structure and in color. I am not sure whether I got the wrong result or your algorithm has issues handling stereo consistency. Could you please provide your results as a ground truth? Thank you!

GPU or CPU bound for render engine?

Any recommendations on an optimal hardware platform to drive the rendering software? E.g.:

  • GPU choice: NVIDIA v. AMD ... CUDA v. OpenCL/OpenGL
  • dual TITANs v. dual W9100s
  • CPU choice: i7 v. dual Xeon procs
  • Bandwidth requirements for writing the 14 x USB RAW data streams to disk.

Thanks
Neil

360 Surround Camera Rental in LA area?

Has anyone in the LA area built the 360 beast? Any rental packages available?

Anyone with a camera rig interested in collaborating on end-to-end workflow testing?

Brian, I don't suppose there's any chance that FB would let us book some time in your lab to let us shoot some test footage of our own on one of your rigs? ... bring our Capture and Render boxes up to Menlo Park to record some charts and people moving around ... I know it's a total pain in the butt to organize - I run the VR Lab @ LumaForge in Hollywood, so believe me I know ... but with the next batch of GP cameras not shipping until the end of Oct we're going to be snookered on testing the workflow in its entirety for another couple of months now.

Added afterthought:

Or maybe instead of the pain of booking individual visits to Menlo Park, organize a one day open house workshop only for peeps who have already built a capture/render box and run the Palace test footage .... let us attach our boxes to your rig and record some live footage ... have a bunch of 30 minute slots which are booked in advance by qualifying individuals ... almost like a mini f8 only for 360 camera folks.

That way we could attach our Linux boxes to a rig we know for sure is accurately calibrated and set-up correctly, record a ten minute take and then go off-line and put it through the render process ... any problems or issues in our software set-up would be immediately apparent and presumably you'd have one of your brilliant team members on hand to help diagnose the problem.

Obviously, not everyone lives in California or could make it to the 360 Lab Test day but if we managed to get a dozen or so systems through the testing we could do a wrap session at the end and publish the findings and insights to the wider community ... hell, we could even record the wrap session on your 360 rig and put it up on the FB 360 page ... just a thought.

Parameter camera_ring_radius

Hello!

Could you explain to me what the parameter "camera_ring_radius" is used for in the stitching, and how I have to set it to get proper results when stitching images (not necessarily images from the Facebook Surround camera, but any images)?

I only found this parameter description:

DEFINE_double(camera_ring_radius, 19.0, "used for computing shift for zero parallax as a function of IPD.");

Why is it that if I set this value too small, the projected images are not properly merged together, but a gap is left between them in the overall panorama?

Thank you for your help!!

Stereoscopic coverage

Could you explain in detail why the stereoscopic coverage is centerline +/- 144 (h), 77 (v) degrees?
In particular, why is the horizontal coverage +/- 144 degrees?

Alternative cameras ....

Just saw BC's comment on using alternative cameras to get around the single-source Point Grey bottleneck .... are we tied to USB 3 data feeds, or could we use HDMI or SDI video streams?

The main problem I see in using alternatives to the PG camera is the amount of thought and detail you guys have put into the design of the 'flying saucer' form factor, stereo sweet spot, lenses and workflow, to say nothing of the science underpinning your optical flow algorithm .... for ex-rocket scientists this is probably all very straightforward stuff, but for us mere mortals working in production and post in LA, it's a bit beyond our pay-grade and skill set.

Any suggestions on which cameras we might want to take a look at, assuming there's a slim chance we could figure out the hard stuff?

Placement of Surround360 - "safe zone" under the camera?

Hi,
I see the two suggested placement setups at 7.5a and 7.5b of the documentation manual.

It seems from the comparison of the two solutions that setup 7.5a, with only the breakout box and the UPS under the camera, will result in no post-production removal work at all, due to the smart stitching from the two bottom cameras. Is that correct?

If so, can you specify the exact size/location of the "safe zone" where we can place our equipment (angle, origin, etc.)?

Thank you in advance,

Enrico

Average run times?

I'd be interested to know what kind of speeds folks are getting on their render times for the Palace test dataset.

I've run the process twice and am getting around 15 minutes on a dual-proc Intel Xeon CPU E5-2697 v2 @ 2.70GHz with 48 cores and 96 GB RAM:

Runtime #1:
UNPACK runtime: 0:00:00.130109
ARRANGE runtime: 0:00:00.033033
total frames: 3
ISP runtime: 0:00:23.239405
RECTIFY runtime: 0:14:07.659499
RENDER runtime: 0:00:44.247832
FFMPEG runtime: 0:00:04.559142
TOTAL runtime: 0:15:19.870323

Runtime #2:
UNPACK runtime: 0:00:00.157266
ARRANGE runtime: 0:00:00.033929
total frames: 3
ISP runtime: 0:00:23.814318
RECTIFY runtime: 0:13:39.033048
RENDER runtime: 0:00:44.261449
FFMPEG runtime: 0:00:04.437203
TOTAL runtime: 0:14:51.738422

I'm about to try the same thing on an i7 PC with 32GB of RAM, but would like to know if others are getting much faster RECTIFY times - all my datasets are on NVMe drives, so I don't think I'm I/O bound, and the files are relatively small anyway.

Palace test dataset?

Have the 360 Render App up and running on Ubuntu 16.04 LTS .... very cool .. thanks for the help.

Any chance of giving us access to the 'Palace of Fine Arts' RAW .bin dataset? ... we know what the demo looks like, so it would be a good reference to play with.

Also, the hardware spec for the storage suggests 8 x 1TB SSDs for the FS .... presume NVMe drives would work OK? ... or 2 x pools of 8 x SSDs each ... one for READ and one for WRITE?

How to generate a stereo panorama from the sample data

Can someone let me know the complete steps to generate a stereo panorama from the sample data? There are no clear instructions in Readme.md on how to do this.

./bin/TestRenderStereoPanorama --help

This yields a lot of parameters, all with their own default values, and I have no idea which parameters I'm required to override besides -imgs_dir and -output_data_dir.

./bin/TestRenderStereoPanorama -imgs_dir ../sample_dataset/vid/000000/isp_out -output_data_dir ../sample_dataset/vid/000000/out

shows

E0805 15:27:22.046569  3968 SystemUtil.cpp:48] Terminated with VrCamException: missing required command line argument: src_intrinsic_param_file

However, there are no checkerboard images in the sample zip file for me to calibrate with, nor are there any files storing the intrinsic/extrinsic parameters.

Creating a panorama from the sample data

Hello!

Thanks a lot for providing some sample data for us to try out ourselves!

I was trying to stitch a panorama from these images using "TestRenderStereoPanorama".
So far, I only want to stitch without the top/bot fisheyes (therefore only using the side cameras).

My invocation of the program looks like this:

./TestRenderStereoPanorama \
  -src_intrinsic_param_file ../res/config/sunex_intrinsic.xml \
  -rig_json_file ../res/config/17cmosis_default.json \
  -imgs_dir ../sample_dataset/vid/000000/isp_out \
  -output_data_dir ../sample_dataset/vid/000000/output_data \
  -eqr_width 252

The program then yields the following error (OpenCV cannot find a writer for the specified file extension):

OpenCV Error: Unspecified error (could not find a writer for the specified extension) in imwrite_, file /opencv-3.1.0/modules/imgcodecs/src/loadsave.cpp, line 459
E0804 13:04:31.529429  8554 SystemUtil.cpp:50] Terminated with exception: /opencv-3.1.0/modules/imgcodecs/src/loadsave.cpp:459: error: (-2) could not find a writer for the specified extension in function imwrite_

Regarding this, I have some questions:

  1. When using the standard options, an error occurs saying that eqr_width must be divisible by the number of cameras used. The default value is 256, and by default no top cameras are used, leaving 14 cameras, and 256 is not divisible by 14. Am I correct in setting this value manually to, for example, 252, which is divisible by 14 (see the sketch after this list)?

  2. Is the general approach I'm following to create the panorama from the ISP-processed images correct, or am I missing something? This information would help me figure out whether something is wrong with my OpenCV installation or I just made a basic mistake using the stitching tools.
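
The divisibility requirement in question 1 is plain integer arithmetic; here is a minimal sketch (the helper name is hypothetical, not from the codebase) that rounds a target width down to the nearest valid value:

#include <cstdio>

// Round width down to the nearest multiple of numCams (integer division truncates).
int roundDownToMultiple(int width, int numCams) {
  return (width / numCams) * numCams;
}

int main() {
  std::printf("%d\n", roundDownToMultiple(256, 14));  // prints 252
  return 0;
}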

Thank you very much!

Top/Bottom Covers too large for milling?

Trying to get a quote on total costs for CNCing all the parts ... just got this email back from a prototyping company saying that the Top and Bottom Cover parts are too large to go through their CNC mills:

" ... Unfortunately, the parts listed below are too large for our current process. Currently, machined parts are generally limited to a maximum part size of 10in x 7in x 3.75in deep (254mm X 177.8mm X 95.2mm). Thinner parts in select materials are limited to 22in x 14in (559mm x 356mm). Turned parts are limited to 2.95 in (75mm) in diameter and 9.0 in (228mm) in length.

FB360_V1_29_A
File: FB360_V1_29_A(2).STEP

FB360_V1_30_A
File: FB360_V1_30_A(2).STEP ... "

Did FB have the Top/Bottom Cover parts milled in-house or through an external CNC shop?

Has anyone else run into difficulties getting these two parts milled or is it just a question of shopping around?

Fatal Error on compiling CameraControl Software

Just went through the install procedure for the CameraControl software, and in the final compile step I get this fatal error message .... any suggestions on where I might have gone wrong? (I'm not a C++ or C guy.)

fbvr22@FBVR22:/Surround360/surround360_camera_ctl$ make
[ 40%] Building CXX object CMakeFiles/CameraControl.dir/source/camera_control/CameraControl.cpp.o
/Surround360/surround360_camera_ctl/source/camera_control/CameraControl.cpp:49:32: fatal error: libavcodec/avcodec.h: No such file or directory
compilation terminated.
CMakeFiles/CameraControl.dir/build.make:86: recipe for target 'CMakeFiles/CameraControl.dir/source/camera_control/CameraControl.cpp.o' failed
make[2]: *** [CMakeFiles/CameraControl.dir/source/camera_control/CameraControl.cpp.o] Error 1
CMakeFiles/Makefile2:99: recipe for target 'CMakeFiles/CameraControl.dir/all' failed
make[1]: *** [CMakeFiles/CameraControl.dir/all] Error 2
Makefile:83: recipe for target 'all' failed
make: *** [all] Error 2
fbvr22@FBVR22:/Surround360/surround360_camera_ctl$

Here's the ls -l output:

fbvr22@FBVR22:/Surround360/surround360_camera_ctl$ ls -l
total 68
drwxr-xr-x 2 root root 4096 Aug 12 10:28 bin
-rw-r--r-- 1 root root 10486 Aug 12 10:28 CMakeCache.txt
drwxr-xr-x 8 root root 4096 Aug 12 10:29 CMakeFiles
-rw-r--r-- 1 root root 1388 Aug 12 10:28 cmake_install.cmake
-rwxr-xr-x 1 root root 1377 Aug 12 09:20 CMakeLists.txt
drwxr-xr-x 2 root root 4096 Aug 12 09:20 configs
-rw-r--r-- 1 root root 1550 Aug 12 09:20 CONTRIBUTING.md
-rw-r--r-- 1 root root 1531 Aug 12 09:20 LICENSE_camera_ctl.md
-rw-r--r-- 1 root root 8692 Aug 12 10:28 Makefile
-rw-r--r-- 1 root root 4106 Aug 12 09:20 README.md
drwxr-xr-x 2 root root 4096 Aug 12 09:20 scripts
drwxr-xr-x 4 root root 4096 Aug 12 09:20 source

Solidworks CAD data files?

Out of curiosity, would it be possible to provide the source Solidworks 2016 CAD files as well?

STEP is good for CAD data interchange, but for the people who have Solidworks 2016 themselves, the native models would be more useful. 😄

cameranames.txt file not found

Did a completely fresh install of Ubuntu 14.04 LTS and only installed the apps necessary for the 360 render engine ... want to keep the system as clean as possible - not even upgrading to 16.04 LTS.

Fired up the run_all.py GUI and keep getting this error message:

-------- Sun Aug 07 2016 15:45:42 PDT -------

python /Surround360/surround360_render/scripts/run_all.py --rectify_file /media/fbvr4/fbvol1/dest5/rectify.yml --src_intrinsic_param_file /Surround360/surround360_render/res/config/sunex_intrinsic.xml --data_dir /media/fbvr4/fbvol1/sample_dataset/vid/000000/raw --start_frame 0 --frame_count 0 --cubemap_format video --steps all --rig_json_file /Surround360/surround360_render/res/config/17cmosis_default.json --dest_dir /media/fbvr4/fbvol1/dest5 --pole_masks_dir /Surround360/surround360_render/res/pole_masks --quality 6k --cubemap_face_resolution 0 --cam_to_isp_config_file /Surround360/surround360_render/res/config/isp/cam_to_isp_config.json --flow_alg pixflow_low

** UNPACK ** [Step 1 of 6]

E0807 15:45:42.032326 4966 SystemUtil.cpp:48] Terminated with VrCamException: file read failed:/media/fbvr4/fbvol1/sample_dataset/vid/000000/raw/cameranames.txt
Aborted (core dumped)

UNPACK runtime: 0:00:00.129529

I cannot find the 'cameranames.txt' file anywhere on my system ... was it included in the download, or do we need to generate it?

Also, in the Render GUI app, the top-left argument says 'Data Directory' and underneath 'directory containing .bin files' ... exactly which directory do we need to point to in the Palace sample_download .zip you provided? ... I can only see .bmp and .png image files in the 'vid' folder. Tried both.

Streaming Solution

Hi,
thanks for your work. We're going to assemble the Surround and we'd like to create our own custom player for 8K VR video. Do you provide any software solution that takes advantage of your dynamic streaming algorithm and pyramid map? Otherwise, can we customize the FB/Oculus player? The player should be embedded into an Oculus/Gear VR app where we'll provide (our) 8K VR video experience, downloadable or streamed.
Thanks,
Francesco

Error installing Gooey (method 2 - Linux only):

vrlf@vrlf:~/ffmpeg_sources/ffmpeg$ sudo apt-get install python-wxgtk2.8
Reading package lists... Done
Building dependency tree
Reading state information... Done
Package python-wxgtk2.8 is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or
is only available from another source

E: Package 'python-wxgtk2.8' has no installation candidate

RAID array setup/detail

Hi,

I read the documentation, but I am missing details about the RAID setup.
You say 8 x 1TB SATA III SSDs in a RAID 5 array.
Is there a detailed setup document for this configuration, or can you be more specific?

Anyway, why RAID 5?
Despite the proper redundancy, as far as I know write performance is poor compared to RAID 0 (without redundancy) or RAID 10 (at the expense of many more disks) - I've seen RAID 5 often used in read-oriented arrays, while in FBS360 write performance seems to be the crucial point.

Thank you in advance for your reply.

Enrico

undefined reference to symbol 'pthread_create@@GLIBC_2.2.5'

I am on Ubuntu 14.04 LTS. I followed the exact steps listed in the Readme file in surround360_render. When compiling, the error below appears:

[ 42%] Building CXX object CMakeFiles/LibVrCamera.dir/source/util/CvUtil.cpp.o
[ 46%] Building CXX object CMakeFiles/LibVrCamera.dir/source/util/JsonUtil.cpp.o
[ 50%] Building CXX object CMakeFiles/LibVrCamera.dir/source/util/StringUtil.cpp.o
Linking CXX static library lib/libLibVrCamera.a
[ 50%] Built target LibVrCamera
[ 53%] Building CXX object CMakeFiles/Raw2Rgb.dir/source/camera_isp/Raw2Rgb.cpp.o
Linking CXX executable bin/Raw2Rgb
/usr/bin/ld: CMakeFiles/Raw2Rgb.dir/source/camera_isp/Raw2Rgb.cpp.o: undefined reference to symbol 'pthread_create@@GLIBC_2.2.5'
//lib/x86_64-linux-gnu/libpthread.so.0: error adding symbols: DSO missing from command line
collect2: error: ld returned 1 exit status
make[2]: *** [bin/Raw2Rgb] Error 1
make[1]: *** [CMakeFiles/Raw2Rgb.dir/all] Error 2
make: *** [all] Error 2

This happens in both release and debug mode.

Matthews VRIG?

Matthews Studio Equipment have been making pro production and grip gear for years and have just come out with the VRIG, which is purpose-designed for VR cameras.

Just wondering if the VRIG would work with the Surround360 and how easy/difficult it would be to attach the 'flying saucer' directly to the VRIG using the Baby Pin rather than the 22-inch Support Tube (FB360_V1_26).

.... or is the 22-inch Support Tube an integral part of the stereo geometry underneath the camera, needed for the 'pole removal' module to work correctly?

My thinking is that the VRIG attached directly to the Baby Pin might provide more stability/rigidity for when the 360 Camera is on a dolly, etc.

Also, what's the total weight of the 360 Surround fully loaded, including USB cables? ... there are two models available depending on the weight of the camera rig.

Here's the link to the intro video from Matthews for those who want to know more.

Fewer cameras

What would be the minimum number of cameras to use? I love the design, but there are so many cameras and it gets expensive. I wonder if I can do my own design with fewer cameras and maybe increase the FOV with different lenses.

Are you going to have this for Windows?

17 x camera simulator?

Given that the next batch of Point Grey cameras has slipped to the end of October, is there any way to emulate the input and bandwidth of the 17 x cameras into the 5 x USB cards?

Single-sourcing the key component of the FB 360 Surround ecosystem is going to cause problems ... are there any other sources apart from PG direct? i.e., does FB have a stash in a back room somewhere? ... just kidding .. but seriously wondering how we're going to test the end-to-end pipeline without any RAW files to play with :-(

Ubuntu 16.04 LTS?

Just did a clean install of Ubuntu 16.04 LTS on my dual-Xeon machine .... updated a few of the libs, and Point Grey has drivers for Ubuntu 16.04 LTS ... any reason to believe that the Camera Control software will not function as designed on 16.04 (assuming I can get the final compile to work - see Issue #6)?

Changing cmosis_fujinon.json

I see that cmosis_fujinon.json has all the parameters for color correction, noise removal, white balance, etc.
If I were to change to different cameras, how can I determine the right values to put there?
Do you have scripts to calculate those values? How did you find them for the current cameras?

Error installing gflags (method 1 - Linux only):

vrlf@vrlf:/media/sandshare$ sudo apt-get install libgflags2 libgflags-dev
Reading package lists... Done
Building dependency tree
Reading state information... Done
Package libgflags2 is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or
is only available from another source
However the following packages replace it:
libgflags2v5:i386 libgflags2v5

E: Package 'libgflags2' has no installation candidate

I'm installing on Ubuntu 16.04 LTS .... on a BTRFS share.

how to calculate min distance to object?

I know the user guide shows the minimum recommended distance to objects so that they look 3D and the 360 effect is good.
How is this calculated?
I am thinking of making some changes, so my r will be different:

i = r * sin(FOV/2 - 360/n)

Is there a relation between the formula (and r) and the minimum distance to objects?
Can I assume that the smaller the r, the higher the quality?
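
For what it's worth, a minimal sketch that simply evaluates the formula as quoted, plugging in numbers that appear elsewhere on this page (r = 19.0 from the camera_ring_radius default, n = 14 side cameras, 90 deg FOV); these inputs are assumptions for illustration, not values taken from the user guide:

#include <cmath>
#include <cstdio>

int main() {
  const double kDegToRad = M_PI / 180.0;
  const double r = 19.0;    // ring radius (camera_ring_radius default above)
  const double fov = 90.0;  // per-camera horizontal FOV in degrees (assumed)
  const double n = 14.0;    // number of side cameras
  // i = r * sin(FOV/2 - 360/n), with angles converted from degrees to radians
  const double i = r * std::sin((fov / 2.0 - 360.0 / n) * kDegToRad);
  std::printf("i = %.2f\n", i);  // ~6.28 with these inputs
  return 0;
}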

STEP drawings?

Real dumb question - apologies in advance.

  • In the 360 Construction Manual there are 11 (eleven) items listed in Machine Parts (pages #3 and #4).
  • In the surround360_design tab -> 3d_models -> STEP_files there are 15 (fifteen) STEP documents.

Why the difference and which ones do we send off to the CNC shop?

These files seem surplus to requirements:
160523_3.STEP
FB360_V1_32.STEP
FB360_V1_33.STEP
FB360_V1.STEP

Warned you it was going to be dumb :-(

80 character line length should also apply to comments

In the contribution section, the 80 character line length is mentioned in the coding style guideline.

I believe this restriction should apply not only to the code but also to the comments. I am scanning through the code with Sublime in multi-column mode. It works great for browsing the code but not so great when I want to read the comments; I end up having to scroll left and right with my mouse in order to read them.

Increase FOV

I would like to increase the default 90 deg FOV of the cameras to fisheye or full-frame fisheye. What changes do I have to make in the code to adjust/remove distortion? I hope to use fewer cameras by increasing the FOV. Thanks.
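
As a rough pointer only: in OpenCV-based pipelines, distortion handling flows through the camera intrinsics and distortion coefficients. Below is a minimal sketch using the standard cv::undistort call; the project itself uses its own cvUndistortBicubic wrapper, and strongly curved fisheye lenses are usually handled with OpenCV's cv::fisheye module instead. The file name and identity/zero calibration values are placeholders, not real data:

#include <opencv2/opencv.hpp>

int main() {
  cv::Mat orig = cv::imread("cam0.png", cv::IMREAD_COLOR);
  // Real values come from calibration (e.g. the intrinsics .xml file);
  // the identity / zero matrices here are placeholders only.
  cv::Mat intrinsic = cv::Mat::eye(3, 3, CV_64F);
  cv::Mat distCoeffs = cv::Mat::zeros(1, 5, CV_64F);
  cv::Mat undistorted;
  cv::undistort(orig, undistorted, intrinsic, distCoeffs);
  cv::imwrite("cam0_undistorted.png", undistorted);
  return 0;
}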
