
vivemediadecoder's Introduction

ViveMediaDecoder Copyright (c) 2015-2019, HTC Corporation. All rights reserved.

Introduction:

  • ViveMediaDecoder is a high-performance video decoding Unity plugin for Windows that supports streaming in multiple formats.

  • We also provide samples for playing various video types, including 2D, stereo 2D, 360, and stereo 360. You can easily build a custom VR video player with this plugin.

  • This software uses FFmpeg 3.4, licensed under the LGPL. To review the LGPL license, look here. The FFmpeg 3.4 source code can be downloaded from here.

Requirements:

  • Windows 7
  • DirectX 11
  • Unity 5
  • FFmpeg 3.4

vivemediadecoder's People

Contributors

jjwu168, kyo8568131


vivemediadecoder's Issues

Video not getting displayed on the Quad when following the guide

As specified in the Readme, I have added the required FFmpeg dlls. I am using the Quad from the demo; I only specified the video path and enabled Play On Awake. After playing in the editor, the logs show no errors, but I cannot see my video on the Quad's mesh renderer. The shader is YUV2RGBA.

Can anyone please tell me why? I am using Unity 2018.1.0b4.
Here is the log I am getting:

[ViveMediaDecoder] play on wake.
[ViveMediaDecoder] init Decoder.
[ViveMediaDecoder] Init success.
[ViveMediaDecoder] Video format: (4096, 2048)
[ViveMediaDecoder] Total time: 0.015625
[ViveMediaDecoder] OnApplicationQuit

Can anyone please tell me why my video is not being displayed?

RTSP communication not fluent

Hello,

I am trying to use the ViveMediaDecoder.
I am able to see the RTSP stream, but the lag is not constant.
Playback gets stuck for about 5 seconds, and after that everything plays back at high speed, as if it waits through a buffering period and then plays everything faster to catch up.
Is there a way to solve this, or at least to get a constant lag?

Thanks
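A common remedy for this stall-then-rush pattern (not something this thread shows the plugin exposing, so treat it as a general technique) is a jitter buffer: hold incoming frames for a fixed delay and schedule each one by its timestamp, so the perceived lag stays constant. A minimal sketch of the scheduling arithmetic, with hypothetical names and times in seconds:

```cpp
// Hypothetical helper: the local wall-clock time at which a frame should be
// displayed, given a fixed target delay. Frames that arrive early wait in
// the buffer; the perceived lag stays constant at fixedDelay.
double scheduledDisplayTime(double pts, double firstPts,
                            double firstArrivalTime, double fixedDelay) {
    return firstArrivalTime + fixedDelay + (pts - firstPts);
}
```

For example, with a 0.5 s buffer, a frame stamped half a second after the first one is shown exactly one second after the stream started arriving, regardless of network jitter in between.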

Building error for Nativescript, Header files are missing

While building the native code to generate ViveMediaDecoder.dll, I am getting errors about missing header files.

The command used in the Ubuntu terminal to build the native code:
gcc -shared -o ViveMediaDecoder.all ViveMediaDecoder.cpp

#include <d3d11.h> is reported as missing, and after adding it, other headers that d3d11.h pulls in, such as rpc.h and rpcndr.h, are reported missing as well.

Can anyone (@kyo8568131, @OlafZwe) please help regarding how to resolve this issue? Any help regarding this will be highly appreciated.

Dolby Surround

Hi there,
is there any way to play a Dolby Surround 7.1-channel video with this plugin?
Or how can I handle this?

audio is slower than video

Hi,
i have an issue
At the start of playback, audio and video are synchronized and there is no problem,
but after some seconds the audio gets slower, and the drift keeps growing during playback. After several seconds the audio is completely out of sync with the video.
Is it a hardware issue, or did I do something wrong?
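Growing drift like this usually means the audio path and the video path each follow their own timer and are never compared against a shared master clock. A standard remedy (sketched here with hypothetical names, times in seconds; not the plugin's actual code) is to measure the drift and drop or insert samples when it exceeds a threshold:

```cpp
#include <cmath>

// Positive drift = audio ahead of video, negative = audio behind (late).
double avDrift(double audioClockSec, double videoClockSec) {
    return audioClockSec - videoClockSec;
}

// If audio lags by more than a threshold, this many samples (per channel)
// would need to be dropped to catch up with the video clock.
long samplesToDrop(double drift, int sampleRate, double threshold = 0.05) {
    return (drift < -threshold) ? std::lround(-drift * sampleRate) : 0;
}
```

Whether the drift here is a hardware issue can be checked by comparing the two clocks over time: hardware-rate mismatch grows linearly, while a one-off buffering hiccup stays constant.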

How to link ffmpeg libraries while compiling the native source code in Visual Studio

While compiling the native source code on Microsoft visual studio (to generate dll), I am getting the following linking errors (LNK2019)

Severity Code Description: Error LNK2019 unresolved external symbol av_strerror referenced in function "private: void __cdecl DecoderFFmpeg::printErrorMsg(int)" (?printErrorMsg@DecoderFFmpeg@@AEAAXH@Z)
Severity Code Description: Error LNK2019 unresolved external symbol av_get_channel_layout_nb_channels referenced in function "private: int __cdecl DecoderFFmpeg::initSwrContext(void)"

There are several other unresolved symbols related to the FFmpeg libraries, such as av_dict_set, av_dict_free, and avcodec_decode2. Can anyone (@kyo8568131) please tell me how to properly add FFmpeg as a dependency to this project and link it correctly?
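LNK2019 on FFmpeg symbols from C++ code usually has one of two causes: the import libraries (avcodec.lib, avformat.lib, avutil.lib, swresample.lib, swscale.lib) are not listed under Linker → Input → Additional Dependencies (with their folder under Library Directories), or the C headers were included without extern "C", so MSVC looks for C++-mangled names that the FFmpeg DLLs do not export. The snippet below demonstrates the linkage point with a self-contained stand-in symbol rather than the real FFmpeg headers:

```cpp
// FFmpeg's headers are C. From C++ they must be wrapped, e.g.:
//   extern "C" {
//   #include <libavcodec/avcodec.h>
//   #include <libavformat/avformat.h>
//   }
// Without the wrapper the compiler emits a mangled reference (something like
// "?av_strerror@@...") that no FFmpeg import library exports -> LNK2019.

extern "C" int c_linkage_symbol(void); // unmangled, like an FFmpeg function
int c_linkage_symbol(void) { return 42; } // defined here so this sketch links
```

Alternatively, `#pragma comment(lib, "avcodec.lib")` and friends in a source file spare you the project-settings route; either way the .lib files must match the x64/x86 architecture of the build.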

All the scripts are missing when opening the demo scene

Unity 5.6.

Open a new project in Unity 5.6.
Drag everything under MediaDecoder-master\UnityPackage into the Assets folder of the Unity project.
Open the demo scene and click on any GameObject. All the attached scripts are missing.

How to load the decoded data into a buffer?

I'm trying to load the decoded frame data into a buffer.

If I change the Unity code, can I load a decoded frame into the buffer, or do I have to modify the native code?
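The less costly route is likely the native side, since the decoded frames live in native memory: instead of (or in addition to) uploading to a DirectX texture, push each decoded frame into a thread-safe buffer the caller can drain. A minimal sketch of such a buffer (hypothetical; not the plugin's actual API):

```cpp
#include <cstdint>
#include <mutex>
#include <queue>
#include <utility>
#include <vector>

// Decode thread pushes raw frame bytes; any other thread pops them.
class FrameBuffer {
public:
    void push(std::vector<uint8_t> frame) {
        std::lock_guard<std::mutex> lock(mMutex);
        mFrames.push(std::move(frame));
    }
    // Returns false if no frame is available.
    bool pop(std::vector<uint8_t>& out) {
        std::lock_guard<std::mutex> lock(mMutex);
        if (mFrames.empty()) return false;
        out = std::move(mFrames.front());
        mFrames.pop();
        return true;
    }
private:
    std::mutex mMutex;
    std::queue<std::vector<uint8_t>> mFrames;
};
```

On the Unity side you would then add a P/Invoke entry point that copies the next frame out of this buffer; as the threads here suggest, the existing plugin hands Unity textures rather than raw bytes, so the C# side alone cannot reach the frame data.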

Videoframe swapping in native script

mVideoFrames.swap(decltype(mVideoFrames)());

This line produces errors: "initial value of reference to non-const must be an lvalue" and a failed function overload. I commented this call out, generated the dll, and used it in Unity. Although it works, decoding and display are really slow (~100 ms), while in the native code the actual decoding function takes much less time. Can anyone (@kyo8568131) please tell me the purpose of mVideoFrames.swap(), how it can impact performance, and how to solve the issue above?

add sws_scale function to upscale the resolution

Hello. I want to add the sws_scale function, so I made a DecoderFFmpeg::scaleResolution() function, which is called from DecoderFFmpeg::updateVideoFrame().

AVFrame* DecoderFFmpeg::scaleResolution(AVFrame* frame, int scaledWidth, int scaledHeight)
{
	mVideoInfo.isScaled = true;
	mVideoInfo.scaledWidth = scaledWidth;
	mVideoInfo.scaledHeight = scaledHeight;

	SwsContext* pSwsCtx;

	pSwsCtx = sws_getContext(frame->width, frame->height, AV_PIX_FMT_YUV420P,
		scaledWidth, scaledHeight, AV_PIX_FMT_YUV420P,
		SWS_FAST_BILINEAR, NULL, NULL, NULL);

	AVFrame* pScaledFrame = av_frame_alloc(); 
	int ScaledByte = avpicture_get_size(AV_PIX_FMT_YUV420P, scaledWidth, scaledHeight);
	uint8_t* pBuffer = (uint8_t*)av_malloc(ScaledByte * sizeof(uint8_t)); // declared locally; note: nothing in this snippet frees it

	avpicture_fill((AVPicture*)pScaledFrame, pBuffer, AV_PIX_FMT_YUV420P, scaledWidth, scaledHeight); 
	sws_scale(pSwsCtx,
		frame->data, frame->linesize, 0, frame->height,
		pScaledFrame->data, pScaledFrame->linesize); 

	sws_freeContext(pSwsCtx);
	av_frame_free(&frame); 

	return pScaledFrame;
}

void DecoderFFmpeg::updateVideoFrame() 
{
	int isFrameAvailable = 0;
	AVFrame* frame = av_frame_alloc();

	clock_t start = clock();

	if (avcodec_decode_video2(mVideoCodecContext, frame, &isFrameAvailable, &mPacket) < 0)
	{
		LOG("Error processing data. \n");
		av_frame_free(&frame); // avoid leaking the frame on the error path
		return;
	}
	
	LOG("updateVideoFrame = %f\n", (float)(clock() - start) / CLOCKS_PER_SEC);

	if (isFrameAvailable)
	{
		std::lock_guard<std::mutex> lock(mVideoMutex);
		mVideoFrames.push(scaleResolution(frame, 4096, 2048)); 
		updateBufferState();
	}
}

Now, frames in "mVideoFrames queue" have upscaled resolution, so I modified ViveMediaDecoder::DoRendering() function like below.

void DoRendering (int id)
{
	LOG("[DoRendering] %d\n", nDoRender++);
	if (s_DeviceType == kUnityGfxRendererD3D11 && g_D3D11Device != NULL) 
	{
		ID3D11DeviceContext* ctx = NULL;
		g_D3D11Device->GetImmediateContext (&ctx); 
		shared_ptr<VideoContext> localVideoContext;
		if (getVideoContext(id, localVideoContext))  
		{
			AVHandler* localAVHandler = localVideoContext->avhandler.get(); 
			
			if (localAVHandler != NULL && localAVHandler->getDecoderState() >= AVHandler::DecoderState::INITIALIZED && localAVHandler->getVideoInfo().isEnabled) 
			{ 
				if (localVideoContext->textureObj == NULL) // 6
				{
					// unsigned int width = localAVHandler->getVideoInfo().width;
					// unsigned int height = localAVHandler->getVideoInfo().height;
					
					unsigned int width = localAVHandler->getVideoInfo().scaledWidth; // I modified
					unsigned int height = localAVHandler->getVideoInfo().scaledHeight; // I modified

					localVideoContext->textureObj = make_unique<DX11TextureObject>(); 
					localVideoContext->textureObj->create(g_D3D11Device, width, height); 
				}
				double videoDecCurTime = localAVHandler->getVideoInfo().lastTime;
				if (videoDecCurTime <= localVideoContext->progressTime) 
				{
					uint8_t* ptrY = NULL;
					uint8_t* ptrU = NULL;
					uint8_t* ptrV = NULL;
					double curFrameTime = localAVHandler->getVideoFrame(&ptrY, &ptrU, &ptrV); 
					if (	ptrY != NULL && 
							curFrameTime != -1 && 
							localVideoContext->lastUpdateTime != curFrameTime) 
					{
						localVideoContext->textureObj->upload(ptrY, ptrU, ptrV);  // error occured here
						localVideoContext->lastUpdateTime = (float)curFrameTime; 
						localVideoContext->isContentReady = true; 
					}
					localAVHandler->freeVideoFrame();
				}
			}
		}
		ctx->Release();
	}
}

Only the first mVideoBuffMax frames are rendered, and a runtime error occurs when trying to render frame mVideoBuffMax+1.

According to my LOG calls in localVideoContext->textureObj->upload():

std::thread YThread = std::thread([&]() {
    if (mWidthY == rowPitchY) {
        LOG("----------------- [Y2] before memcpy\n");
        memcpy(ptrMappedY, ych, mLengthY);  // error occurred here
        LOG("----------------- [Y3] after memcpy \n");
    }
});

The "[Y3] after memcpy" LOG is never reached when the current frame number is mVideoBuffMax+1.
When I checked mLengthY, there was no problem; its value was [upscaled width * upscaled height].
Why does this error happen, and how can I fix it?

Thank you.

Feature Request: Better Looping

It's possible to loop videos by calling 'replay' in the 'onVideoEnd' callback, but it takes a long time for the seek to complete, and it looks pretty bad, especially for larger videos.

Reducing the time for displaying a frame on the screen

The total time for decoding the video (BUFFERING --> START) is <= 22 ms in my case, but displaying the frame on the screen after that takes an additional 25-30 ms. Mainly these four APIs are executed to display the frame on the screen:

nativeSetVideoTime(decoderID, (float)setTime);
GL.IssuePluginEvent(GetRenderEventFunc(), decoderID);
getTextureFromNative();
setTextures(videoTexYch, videoTexUch, videoTexVch);

Is there any way to optimize this delay associated with displaying the frame on the screen? Any help is highly appreciated.

nativeLoadThumbnail causes unity editor to crash

To reproduce:

  1. load SampleScene
  2. uncheck MediaDecoder on the Quad from the Inspector
  3. add a new script TestThumb.cs to the Quad:
public class TestThumb : MonoBehaviour {

	// Use this for initialization
	void Start () {
        MediaDecoder.loadVideoThumb(gameObject, "C:\\Users\\video\\swim.mp4", 2);
	}
	
	// Update is called once per frame
	void Update () {
		
	}
}
  4. nativeLoadThumbnail will crash the Unity editor.

Failed to load specific module from the ViveMediaDecoder.dll

After linking with FFmpeg in Visual Studio, I generated my own .dll for x86_64 without changing any part of the native DLL project, but when using that dll as a native plugin in Unity, it shows the following "failed to load the specified module" error.

Error log:
Failed to load 'Assets/Plugins/x86_64/ViveMediaDecoder.dll' with error 'The specified module could not be found.
'.
RtlLookupFunctionEntry returned NULL function. Aborting stack walk.
0x00000001417CC67B (Unity) StackWalker::GetCurrentCallstack
0x00000001417CE35F (Unity) StackWalker::ShowCallstack
0x00000001417A8350 (Unity) GetStacktrace
0x0000000140D5520B (Unity) DebugStringToFile
0x0000000140D559EC (Unity) DebugStringToFile
0x0000000140DA521D (Unity) PlayerSettings::InitDefaultCursors
0x0000000140DB33F5 (Unity) FindAndLoadUnityPlugin
0x00007FFE8CF59A21 (mono) [loader.c:1298] mono_lookup_pinvoke_call
0x00007FFE8CF6D151 (mono) [marshal.c:8122] mono_marshal_get_native_wrapper
0x00007FFE8D01379B (mono) [method-to-ir.c:6435] mono_method_to_ir
0x00007FFE8D033F70 (mono) [mini.c:3590] mini_method_compile
0x00007FFE8D03503B (mono) [mini.c:4344] mono_jit_compile_method_inner
0x00007FFE8D035659 (mono) [mini.c:4556] mono_jit_compile_method_with_opt
0x00007FFE8D0356EC (mono) [mini.c:4580] mono_jit_compile_method
0x00007FFE8D02D7C3 (mono) [mini-trampolines.c:477] mono_magic_trampoline

Note: all the required FFmpeg dlls are already in Plugins/x86_64/. Can anyone please tell me which modules are missing and how to resolve this issue?

How to seek a particular frame and display on screen

I have encoded 5 frames in one video file. Is it possible to show exactly the 4th or 5th frame on the Unity screen?

From observation, I have seen that the current time of the video changes during playback, but it does not change uniformly.

In this context, can anyone (@kyo8568131, @OlafZwe) please tell me how to display a particular frame exactly?
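For a constant-frame-rate file, the mapping between a frame index and a presentation time is plain arithmetic, and the non-uniform time changes are expected: the reported time is the PTS of whichever frame was last presented. A hedged sketch of the index/time conversion (hypothetical helpers; the actual seek would go through whatever seek call the plugin exposes, which this thread does not document):

```cpp
#include <cmath>

// Hypothetical helpers: convert between a 0-based frame index and the
// presentation time of that frame, for a constant-frame-rate stream.
double frameToSeconds(int frameIndex, double fps) {
    return frameIndex / fps;
}
int secondsToFrame(double seconds, double fps) {
    // round to the nearest frame to absorb floating-point jitter
    return static_cast<int>(std::lround(seconds * fps));
}
```

Note that container-level seeks normally land on the nearest keyframe; guaranteeing exactly the 4th or 5th frame generally requires seeking to the preceding keyframe and decoding forward, discarding frames until the target timestamp.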

Using multi-channel audio support causes memory leaks

A (native) memory leak occurs with the following setup:
enabling all audio channels by initializing the decoder with enableAllAudioCh = true, and
pulling the multi-channel audio data using the C# function getAllAudioChannelData.

This memory leak appears to occur both when testing in the Unity editor and in a standalone Windows build of the Unity app.

Bug - Unity freezes when stopping app while the decode is in INITIALIZING state

There seems to be a bug in the asset. Unity will freeze when I stop the app or change scene while the decoder is in INITIALIZING state.

To reproduce it:

  1. Set the Media Path to some random rtsp or rtmp address that does not work.
  2. Press play. (initDecoderAsync() in ViveMediaDecoder.cs will stay in its while loop until it either fails or succeeds.)
  3. Stop the app while the result is 0 (if step one is followed properly, it will be 0 forever).
  4. Unity will freeze, and only a restart of Unity will help.

Solution I found:
Make sure you don't run the StopDecoding() method while the decoder is in the INITIALIZING state. To do that, remove the = (equals) sign from the state check in StopDecoding() in ViveMediaDecoder.cs. So far I don't see any problems with this solution; if I missed something, please let me know.

Playback is broken when building with 2017.2

Playback stutters when running the built version.

To reproduce the problem, load the example from the asset store in 2017.2.0f3. Launch in the editor, notice that the playback is working. Build & run the same project, and notice that the playback stalls almost to a halt.

The problem is not reproducible in 2017.1.

Video Decoding on windows is too slow

A video with just one I-frame and four P-frames takes ~25 ms to allocate and initialize the decoder context, and more than 100 ms on top of that to decode (to move the state from BUFFERING to START), which implies the FPS ViveMediaDecoder can support is less than 10.

Can anyone please tell me why it is so slow, and whether there is any way to improve it? I am using ViveMediaDecoder on the Unity Windows platform.

I have some issues about "swap" function in DecoderFFMpeg.cpp, NativeCode

Hello. I am having some trouble with the native code.

I made a project and included all of FFmpeg's libs and ViveMediaDecoder's include directory.

When I build this project, I get 4 errors.
These errors occur where "mAudioFrames.swap(decltype(mAudioFrames)())" and "mVideoFrames.swap(decltype(mVideoFrames)());" are called.

Error logs below

C2664 : 'void std::queue<AVFrame *,std::deque<_Ty,std::allocator<_Ty>>>::swap(std::queue<_Ty,std::deque<_Ty,std::allocator<_Ty>>> &) noexcept()' : cannot convert argument 1 from 'std::queue<AVFrame *,std::deque<_Ty,std::allocator<_Ty>>>' to 'std::queue<AVFrame *,std::deque<_Ty,std::allocator<_Ty>>> &'

I'm using Visual Studio 2017 and FFmpeg 3.4.
Why do these errors happen, and how can I fix them?

thank you.


Addendum: the swap function appears to exist to empty the queues.
So I made an empty queue of the same data type and swapped with it.
Is this method appropriate?

mVideoFrames.swap(decltype(mVideoFrames)());
-> mVideoFrames.swap(EmptyQueue);
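The error is plain C++, not FFmpeg: std::queue::swap takes an lvalue reference, and decltype(mVideoFrames)() is a temporary (an rvalue), which cannot bind to it. The call's only purpose is to clear the queue and release its storage, so swapping with a named empty queue, as above, is the idiomatic fix. A self-contained sketch:

```cpp
#include <queue>

// Clear a queue by swapping it with a named empty one. swap(queue&) needs
// an lvalue argument, so the temporary in the original code cannot be used.
template <typename Q>
void clearQueue(Q& q) {
    Q empty;
    q.swap(empty);      // OK: 'empty' is an lvalue
    // equivalent one-liners:
    //   Q().swap(q);   // member call on the temporary, q passed as lvalue
    //   q = Q();       // move-assign an empty queue (C++11)
}
```

Any of the three forms leaves the queue empty; which one compiled in the original project presumably depended on the compiler being less strict about rvalue-to-reference binding than Visual Studio 2017 is.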

Frame-rate mismatch and latency

It took a while to deduce this, but I found that when the streaming server's source was below 60 fps, the received video would play in Unity at a higher frame rate, causing it to freeze while it waits for more frames from the source. Also, while streaming at 60 fps works fine, the latency is quite bad, around 2-3 seconds.
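Playing faster than the source produces exactly this freeze-and-wait pattern; the general fix is to pace presentation by the stream's own timestamps rather than by the render loop's rate. A sketch of the pacing arithmetic (hypothetical names, times in seconds; not the plugin's actual code):

```cpp
#include <algorithm>

// Hypothetical helper: how long to wait before showing the next frame so
// that the on-screen frame interval matches the PTS interval of the stream.
double waitBeforeNextFrame(double nextPts, double prevPts,
                           double now, double prevShownAt) {
    double due = prevShownAt + (nextPts - prevPts); // when the frame is due
    return std::max(0.0, due - now);                // never a negative wait
}
```

With a 30 fps source the PTS delta is ~0.033 s, so a 60 Hz render loop shows each frame for two refreshes instead of draining the buffer at double speed. The 2-3 s latency is likely a separate issue, typically buffering in FFmpeg's network/demuxer layer, which would have to be tuned in the native code.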

RTSP Question

I've been using a Matrox® Maevex 6150 encoder appliance and VLC to stream mp4s over RTSP. When I use ViveMediaDecoder in Unity as the client, the audio is fine, but the video gets distorted; see the images below. The video plays fine if I play it from the hard drive as a file. I'm using a Corsair One Pro, which has more than enough CPU and GPU.

Can you explain what's happening here and how I might approach fixing this issue?

Thank you!

Video is from here, 1080p: https://www.youtube.com/watch?v=V57Nkmrerk8

(Screenshots: example01, example01a, example02, example05, example04)

Is there anyway to preload the video?

Hi, we are using this package in Unity to play some big videos for a university experiment. The issue is that, since the videos are really big (8K, uncompressed), there are usually several seconds of black screen before the video actually plays. We are trying to find a way, in Unity, to "pre-load" the video, but can't find any. We also tried adding a "Loading..." text to the camera (so that the text always follows the head movement), but it seems to be covered by the 360 video.
Can anyone help us with the "pre-load" issue? Or is there a way to bring the "Loading..." text to the foreground?
We are quite new to Unity and VR, so I'm very sorry if these questions sound stupid.

Poor performance with AES-128 encrypted HLS stream

When streaming an HLS stream with AES-128 encryption, the video stalls every second. This happens regardless of resolution; even low-resolution video stalls.
Since tests of the same encrypted HLS stream with mobile media players show no performance issue, good performance should also be achievable on Windows with MediaDecoder.

Feature request: DASH streaming support

DASH (or MPEG-DASH) is a streaming protocol and standard that is widely supported and offers some improvements to HLS streaming that make it a good choice for a VR video streaming protocol. For example, DASH supports codecs like VP9 and the webm container, where HLS is restricted to a specific transport stream container for its segments.

Is it possible to support DASH?

How to use TCP function in config file

Hello.

I want to use the TCP function in the config file, but I'm a beginner with Unity.
So, could you tell me how to use the TCP function?
A sample using that function would be very helpful.

Thanks.

Huge time difference between native script and C# decoding time

From the Unity C# script ViveMediaDecoder.cs, the decoding time (the time to change the state from BUFFERING to START) is really high, in the range of 60-90 ms. But in the native code, video decoding consists of four functions: av_read_frame(), updateVideoFrame(), av_packet_unref(), and updateBufferState(), which cumulatively take no more than 25 ms, with updateVideoFrame() taking most of that time.

Can anyone (@kyo8568131) please tell me why there is such a large gap between the decoding time in the native code and the decoding time observed from the C# Unity script?
