gpuimage's Issues

Zombie in GPUImageVideoCamera.m file

I have found a zombie in GPUImageVideoCamera.m, on line 52.
You allocate an AVCaptureDeviceInput object and send it an autorelease message, but then you also release the same object later, in the dealloc method.

I tried to repair it by removing the autorelease call. In the Simulator and under Instruments it seems fine, but on the device it still crashes when trying to release the GPUImageVideoCamera object.
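For reference, the over-release described above looks roughly like this (a sketch of pre-ARC manual reference counting; the variable names are assumptions, not the actual framework source):

```objc
// Sketch of the problem: an autoreleased object is released a second time.
// videoInput is assumed to be the ivar holding the AVCaptureDeviceInput.
videoInput = [[[AVCaptureDeviceInput alloc] initWithDevice:device
                                                     error:&error] autorelease];

// Later, in -dealloc:
// [videoInput release];   // over-release: the autorelease pool will also release it
```

Under manual reference counting, either the autorelease or the explicit release in dealloc has to go; if the object must survive until dealloc, retain it at assignment instead of autoreleasing it.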

Applying a filter to an already recorded video.

Hello. I am having an issue when trying to apply a filter to a video that is already recorded. I have tried the sample application that comes with the project, and I have also added the example from the main page to a fresh project. What I get is a long run of lines like "Problem appending pixel buffer at time: 203" .. "Problem appending pixel buffer at time: 207" .. etc.

The video I am using is the same m4v file as in the example, except I got it from http://support.apple.com/kb/HT1425 . When I check the output file, it sits at zero bytes until I close the Simulator, at which point it becomes roughly 800 bytes, while the original file is 2.2 MB.

GPUImageVideoCamera Output in a squared view

Hi All,

I'm trying to use GPUImage in my app and I'm facing the following problem: I would like to have the camera output in a square view (due to some UI constraints). Is there currently a way to do this using GPUImage? I have tried (using the examples as a reference), but the camera output is not scaled proportionally.

By the way, thanks for the effort you put into this project.
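One possible approach (a hedged sketch, not from the framework's documentation): crop the camera frames to a centered square with GPUImageCropFilter before they reach the view. The exact crop region depends on the capture preset and orientation, and `videoCamera`/`filterView` are assumed names.

```objc
// Crop a 4:3 portrait frame to a centered square.
// cropRegion is in normalized coordinates (0.0 - 1.0): for a frame whose
// height is 4/3 of its width, keeping the middle 0.75 of the height yields
// a square.
GPUImageCropFilter *squareCrop =
    [[GPUImageCropFilter alloc] initWithCropRegion:CGRectMake(0.0, 0.125, 1.0, 0.75)];

[videoCamera addTarget:squareCrop];
[squareCrop addTarget:filterView]; // a square GPUImageView
```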

GPUImageMovieWriter fires auto-exposure/wb/focus

I have set up my [GPUImageVideoCamera inputCamera] to lock exposure, white balance and focus, which works fine. The issue is that when I record with GPUImageMovieWriter, it auto-exposes, auto-focuses and auto-white-balances as soon as it starts recording. The real head-scratcher is that if I then stop recording and start recording again, it doesn't fire the auto-exposure/focus/WB.

Any ideas on why this is happening, or possibly some ideas for a work-around?
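A possible workaround sketch (untested): re-apply the locks right after recording starts, using the standard AVCaptureDevice configuration calls. `videoCamera` is assumed to be the GPUImageVideoCamera from the report.

```objc
// Re-lock exposure, white balance and focus after recording begins.
AVCaptureDevice *device = videoCamera.inputCamera;
NSError *error = nil;
if ([device lockForConfiguration:&error])
{
    if ([device isExposureModeSupported:AVCaptureExposureModeLocked])
        device.exposureMode = AVCaptureExposureModeLocked;
    if ([device isWhiteBalanceModeSupported:AVCaptureWhiteBalanceModeLocked])
        device.whiteBalanceMode = AVCaptureWhiteBalanceModeLocked;
    if ([device isFocusModeSupported:AVCaptureFocusModeLocked])
        device.focusMode = AVCaptureFocusModeLocked;
    [device unlockForConfiguration];
}
```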

Filtered video sped up

When using GPUImageMovieWriter with a GPUImageMovie, the output is sped up, and thus shorter; it doesn't seem to respect the original frame rate.

Recorded movies start with a short duration of black video

I'm not sure what's causing this, but there's a brief period at the beginning of all recorded videos from the live camera feed where it's black. This doesn't happen with prerecorded videos that are fed through filters and then encoded.

It could still be an initial timestamp issue, but the initial timestamp should be getting set to that of the first frame of video that comes in.

Blend two images

Could you show me how to blend two images from the bundle?
Example:
GPUImageColorDodgeBlendFilter

GPUImagePicture *pic1 = [[GPUImagePicture alloc] initWithImage:grayscaleImage];
GPUImagePicture *pic2 = [[GPUImagePicture alloc] initWithImage:invertImage];
[pic1 addTarget:colorDodge];
[pic2 addTarget:colorDodge];
[pic1 processImage];
UIImage *output = [colorDodge imageFromCurrentlyProcessedOutput];
I tried this but it returns a white image.
Thank you.
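A likely cause (an assumption from reading the snippet, not a confirmed fix): only pic1 is ever processed, so the second input of the blend filter never receives a frame. Processing both pictures before reading the output should help:

```objc
[pic1 addTarget:colorDodge];
[pic2 addTarget:colorDodge];

// Both inputs of a two-input blend filter must be processed:
[pic1 processImage];
[pic2 processImage];

UIImage *output = [colorDodge imageFromCurrentlyProcessedOutput];
```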

Crash on iOS4 because of fast texture loading?

Hi,

do you have access to an iOS 4 device? I seem to see crashes if I deploy an .ipa file, because CVOpenGLESTextureCacheCreate isn't available on iOS 4, and there's a dynamic linking error when I try to launch my app. This doesn't seem to happen when deploying directly from Xcode, though.

The problem appeared in 3512cd3 - can you reproduce this on your end or did I manage to screw up my setup? I'll have another look at this tomorrow but it would be great if you could try to reproduce this.
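The usual guard for this kind of dynamic-linking failure (a sketch, assuming CoreVideo is weak-linked in the target's build settings, i.e. marked Optional rather than Required) is to test the symbol at runtime before taking the fast texture-cache path:

```objc
// With CoreVideo weak-linked, an unavailable function resolves to NULL at
// runtime instead of aborting the launch.
if (CVOpenGLESTextureCacheCreate != NULL)
{
    // iOS 5+: use the fast texture-cache upload path
}
else
{
    // iOS 4: fall back to glTexImage2D / glReadPixels
}
```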

Background target of CustomView. Is it possible?

CustomViewBase.h
@interface CustomViewBase : UIView

CustomView.h

@interface CustomView : CustomViewBase

UIImage *inputImage = [UIImage imageNamed:@"WID-small.jpg"];
sourcePicture = [[GPUImagePicture alloc] initWithImage:inputImage smoothlyScaleOutput:YES];

[sourcePicture addTarget:filter];

How can I add a CustomView as a background target?
----- So it would be something like the below:

CustomView *inputImage = [CustomView imageNamed:@"WID-small.jpg"];
sourcePicture = [[GPUCustomView alloc] initWithImage:inputImage smoothlyScaleOutput:YES];
[sourcePicture addTarget:filter];

Do I need to create my own GPUCustomView?

Capture Video while mixing with audio file

I am trying to capture video from the camera and save the movie mixed with an audio file. Instead of the mic audio, I want to use a music track (a .caf file). I've tried to set up an AVAssetReader to read the audio file, but I couldn't make it work with the AVAssetWriter (maybe because the decoding of the audio happens very fast). Also, I don't want to save the movie without audio and mix it afterwards with an AVAssetExportSession; that would be too slow for my purpose. Any ideas? Thanks in advance.

Use of multiple filters

Hi, is there a way to use multiple filters at the same time (e.g. TransformFilter + SepiaFilter)? Thanks.
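Filters can be chained by making one the target of the next; a minimal sketch, assuming a `videoCamera` source and a `filterView` (GPUImageView) for display:

```objc
// Chain two filters: camera -> transform -> sepia -> view.
GPUImageTransformFilter *transformFilter = [[GPUImageTransformFilter alloc] init];
GPUImageSepiaFilter *sepiaFilter = [[GPUImageSepiaFilter alloc] init];

[videoCamera addTarget:transformFilter];
[transformFilter addTarget:sepiaFilter];
[sepiaFilter addTarget:filterView]; // GPUImageView that displays the result
```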

examples not working

I have a fresh Xcode 4.3.2 with the iOS 5.1 SDK and an iPhone 3GS running iOS 4.1.
I was trying to build the examples to see how they work. All examples build without failures, but nothing happens on my iPhone. Only the simple photo filter builds, runs, and shows a Capture Photo button and a horizontal scroll bar. That's all. When I scroll it, nothing happens. When I press Capture Photo I hear only the capture "click" sound, and that's all. I cannot see what the camera is shooting, and nothing goes to the photo library. I have very little experience, and am now thinking maybe I've done something wrong? I just opened the example in Xcode and pressed Build, nothing else. I've set the deployment target to iOS 4.1.

Please help

When using front camera, image is upside down in landscape

Somehow Apple's GLCameraRipple sample project handles this, though I haven't quite figured out how yet. Maybe it's handled by the shader itself? (Something I know nothing about, for now...)

I also looked at RosyWriter, which has a similar problem with the front camera.

Thanks Brad for creating this... after a couple of days of testing, I'm pretty sure it will replace Core Image in a project I'm working on.

NSInvocation exception with GPUImageFilterPipeline

When you use GPUImageExposureFilter in a configuration plist file, the app crashes with 'NSInvalidArgumentException', reason: '+[NSInvocation invocationWithMethodSignature:]: method signature argument cannot be nil'. It appears that at line 49 of GPUImageFilterPipeline.m:

Class theClass = NSClassFromString(filterName);

returns a null value.

The same happens with GPUImageSepiaFilter and many others, but GPUImageVignetteFilter and GPUImageContrastFilter work great.

My Sample plist Configuration File:

Filters (array)
--Item0 (dict)
----FilterName (string) - GPUImageVignetteFilter
--Item1 (dict)
----FilterName (string) - GPUImageContrastFilter
----Attributes (dict)
------setContrast: (string) - float(4.0)
--Item2 (dict)
----FilterName (string) - GPUImageExposureFilter <-- doesn't work with this filter
----Attributes (dict)
------setExposure: (string) - float(4.0)

My device: iPhone 4S with iOS 5 and Xcode 4.3
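A plausible explanation (an assumption, not confirmed): NSClassFromString returns nil when the class has not been linked into the binary, which makes the subsequent NSInvocation call throw. A defensive check in the pipeline code would at least fail loudly:

```objc
// Fail with a clear message instead of crashing inside NSInvocation.
Class theClass = NSClassFromString(filterName);
NSAssert1(theClass != nil, @"Filter class %@ is not linked into the binary", filterName);
```

If the framework is built as a static library, adding -ObjC (or -all_load) to Other Linker Flags, or referencing each filter class directly once in app code, usually keeps otherwise-unreferenced classes from being stripped by the linker.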

Improve the performance of the Kuwahara filter

There may be a way to improve the performance of the Kuwahara filter by removing the dependent texture reads and instead moving the offset calculations to the vertex shader. This seemed to work well for this person trying the same thing:

http://www.imgtec.com/forum/forum_posts.asp?TID=1017

However, my early attempts to apply his approach didn't produce quality results. It's also hard to see how this could be modified to allow for variable filter radii.

Output image screwed up on iOS4.*

Something has been done recently (since my patch was submitted, anyway) that has hosed the output. It looks like the tracking being off on an old TV, where the picture gets all shifted.

Crashing when saving an image from GPUImageStillCamera

The app is crashing on the 4S if an image is taken and is being manipulated (with the original code, with UIImageWriteToSavedPhotosAlbum(), as well as with ALAssetsLibrary).

So far it looks like there is not enough memory on the device to process the image; this is just a guess, as I am not getting any message even with zombies enabled. Might the 8-megapixel camera just be too much for the image handling? I have even tried to save the image on a separate thread (don't really know why). :(

GPUImagePicture: How to resize, zoom in and out, or move around the background image?

Using the sample image below: how can I resize, zoom in and out, or move around the background image with GPUImagePicture, while at the same time still having the live camera?

        // The picture is only used for two-image blend filters
        UIImage *inputImage = [UIImage imageNamed:@"WID-small.jpg"];
        sourcePicture = [[GPUImagePicture alloc] initWithImage:inputImage smoothlyScaleOutput:YES];
        [sourcePicture addTarget:filter];
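One hedged approach: insert a GPUImageTransformFilter between the picture and the blend filter, and adjust its transform to zoom or pan the background, re-processing the picture after each change. Names beyond the quoted snippet are assumptions.

```objc
// Route the background picture through a transform filter before blending.
GPUImageTransformFilter *backgroundTransform = [[GPUImageTransformFilter alloc] init];
[sourcePicture addTarget:backgroundTransform];
[backgroundTransform addTarget:filter];

// e.g. zoom in 2x and shift the background; re-process after each change.
[backgroundTransform setAffineTransform:
    CGAffineTransformTranslate(CGAffineTransformMakeScale(2.0, 2.0), 0.25, 0.0)];
[sourcePicture processImage];
```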

Emboss filter to add

Just a very simple emboss filter to add:

----- GPUImageEmbossFilter.h -----

#import "GPUImageFilter.h"

@interface GPUImageEmbossFilter : GPUImageFilter

@end

----- GPUImageEmbossFilter.m -----

#import "GPUImageEmbossFilter.h"


NSString *const kGPUImageEmbossFragmentShaderString = SHADER_STRING
(
 varying highp vec2 textureCoordinate;

 uniform sampler2D inputImageTexture;

 void main()
 {    
     mediump vec2 pixel = vec2(1.0 / 480.0, 1.0 / 320.0);

     mediump vec2 texCoord = textureCoordinate;

     mediump vec4 color;
     color.rgb = vec3(0.5);
     color -= texture2D(inputImageTexture, texCoord - pixel) * 5.0;
     color += texture2D(inputImageTexture, texCoord + pixel) * 5.0;

     color.rgb = vec3((color.r + color.g + color.b) / 3.0);
     gl_FragColor = vec4(color.rgb, 1);
 }
);


@implementation GPUImageEmbossFilter

- (id)init;
{
    if (!(self = [super initWithFragmentShaderFromString:kGPUImageEmbossFragmentShaderString]))
    {
        return nil;
    }

    return self;
}

@end

Add an asset writer output type

We need a way to record to a movie, so I'll add an asset writer that has an internal GCD queue managed by a dispatch semaphore to automatically drop frames at the recording end as appropriate, without jamming up the rest of the pipeline.
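The frame-dropping mechanism described above might look roughly like this (a sketch; the queue and semaphore names are assumptions):

```objc
// A semaphore with a count of 1 guards the encode queue; frames arriving
// while a previous frame is still being written are simply dropped, so the
// rest of the pipeline never blocks on the writer.
dispatch_queue_t movieWritingQueue =
    dispatch_queue_create("com.example.movieWriting", NULL);
dispatch_semaphore_t frameWritingSemaphore = dispatch_semaphore_create(1);

// Called for every incoming frame:
if (dispatch_semaphore_wait(frameWritingSemaphore, DISPATCH_TIME_NOW) != 0)
{
    return; // still busy with the previous frame; drop this one
}
dispatch_async(movieWritingQueue, ^{
    // [assetWriterVideoInput appendSampleBuffer:...];
    dispatch_semaphore_signal(frameWritingSemaphore);
});
```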

GPUImageStillCamera capture are not in high resolution

I've run the sample application SimplePhotoFilter and checked the captured image resolution by downloading my phone's data directory with the Xcode Organizer.
The output image has a resolution of 640x852 pixels, which according to this thread is my iPhone's video resolution for the photo preset, not the video preset (http://stackoverflow.com/questions/4893664/applying-effect-to-iphone-camera-preview-video-using-opengl).

I'm using an iPhone 4 device running iOS 5.1, which from my understanding wouldn't be able to process the high-res image returned from AVCaptureStillImageOutput, since one of its dimensions is bigger than 2048 px.
To address this, I was planning to resize the image on an iPhone 4 to 2048x1529 px (3.1 MP) from the original resolution of 2592x1936 (5 MP).
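The resize arithmetic described above can be sketched as a small helper (hypothetical name; assumes a 2048 px maximum texture dimension on the iPhone 4):

```objc
// Fit a size within the GPU's maximum texture dimension, preserving aspect ratio.
static CGSize GPUFitSize(CGSize size, CGFloat maxDimension)
{
    CGFloat scale = MIN(maxDimension / size.width, maxDimension / size.height);
    if (scale >= 1.0) return size; // already small enough
    return CGSizeMake(round(size.width * scale), round(size.height * scale));
}
```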

GPUImageFilterPipeline process still image

Hi all, is it possible to process a still image using a filter pipeline? I'm trying, but I receive this error:
CGImageCreate: invalid image size: 0 x 0. By the way, I have no problem capturing the image output from a live preview of the camera.

Thanks in advance

ARC issue

Hi there,

I just pulled down a fresh clone of GPUImage and I get two ARC errors out of the box:
ARC forbids synthesizing a property of an Objective-C object with unspecified ownership or storage attribute

on lines 19 and 20 of GPUImageFilterGroup.m
@synthesize initialFilter = _initialFilter;
@synthesize terminalFilter = _terminalFilter;

Anyone have any idea how to get around this?

Cheers,
Brett
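A likely fix (an assumption based on the error message, not a confirmed patch): give the two properties explicit ownership in GPUImageFilterGroup.h so ARC can synthesize them, e.g.:

```objc
// Hypothetical declarations; the actual property types in the framework may differ.
@property(readwrite, nonatomic, strong) GPUImageOutput<GPUImageInput> *initialFilter;
@property(readwrite, nonatomic, strong) GPUImageOutput<GPUImageInput> *terminalFilter;
```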

GPUImageSobelEdgeDetectionFilter intensity attribute

Hi,

I'm trying to make a neon effect (a black image with colored edges) using the GPUImageSobelEdgeDetectionFilter. However, the intensity attribute is not showing up for me.

Is it the correct attribute, or should I do this a different way?

Thanks

Enabling Flash Mode

How do I enable the flash mode properties on the AVCaptureDevice?

When calling [filter imageFromCurrentlyProcessedOutput]; the flashMode is not applied when capturing an image.

Please provide some input to resolve this issue.

Thanks
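A hedged sketch: flash is configured on the AVCaptureDevice itself, inside a lock/unlock pair. `stillCamera` is assumed to be a GPUImageStillCamera.

```objc
// Enable the flash on the underlying capture device, if it supports one.
AVCaptureDevice *device = stillCamera.inputCamera;
if ([device hasFlash] && [device isFlashModeSupported:AVCaptureFlashModeOn])
{
    NSError *error = nil;
    if ([device lockForConfiguration:&error])
    {
        device.flashMode = AVCaptureFlashModeOn;
        [device unlockForConfiguration];
    }
}
```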

still image filter rotates portrait image

I noticed that if I use a photo taken in portrait on my 4S camera, the image gets rotated and squashed when processed; however, if the image is first scaled, it does not get broken. I spotted this because the small-scale image I show while values are being input from sliders does not get deformed, only the full-size image when saved.

It's an easy workaround to 'scale' the image first at its own size, so the problem is not urgent. I'm using this code to get past the problem when I do the final processing (brightness and contrast):

    - (UIImage *)resizeImage:(UIImage *)image
    {
        CGSize size = image.size;
        UIGraphicsBeginImageContext(size);
        [image drawInRect:CGRectMake(0, 0, size.width, size.height)];
        UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();

        return newImage;
    }

Overall I'm very happy with GPUImage; it's about 7 times faster than all the other image-processing solutions I have tried.
Keep up the good work. I'm happy to contribute if you need anything simple done (grunt work, docs, examples) once my app gets to version 1 in a few weeks.

Created an analogue to UIImageView's animatedImages functionality

It became necessary because iOS doesn't seem to support any video formats with an alpha channel... so I needed something to blast a stream of PNGs like an old-school projector. A few questions about this:

  1. Do you know of an easy way to make PVR files out of this?
  2. Is it even feasible to make a PVR file of ~450 PNGs at 40-50kb each?
  3. Are there other caching mechanisms that make more sense?
  4. I essentially used the approach you used in GPUImagePicture, but housed within a structure more akin to GPUImageMovie ... works great except for being kinda slow. I'm wondering if you have an opinion on whether it's a better idea to try to load all of the image data into the Core Video texture cache instead. I got pretty close to doing that this morning, but took the GPUImagePicture route instead for expediency's sake.

I'll fork the repo and send a pull request shortly... before I do though, one last little question -- since I'm using an accelerometer-based rotation affine transform on these things in my application, I frequently encounter the scenario in which part of the image should be off-screen. I encountered that auto-squish-to-view-bounds behavior, which I kind of circumvented by continually recreating the image's framebuffer to accommodate the bounding rectangle of the rotated image... but damn, it's slow. Any thoughts on that? Is there an easier way to allow things to go off-screen, become clipped, and still maintain their normal scale?

Accessing the current frame

Hi there,
Great library! Very impressive. I have more of a question than an issue: how can I access the current frame and store it in a UIImage? I'm used to being able to do this in:

  - (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection { }

but I'm not sure how to grab it with the filters applied in this method.

Is there any way to do this?

Cheers,
Brett
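A hedged answer sketch: rather than the AVFoundation delegate, the filtered frame can be read back from the last filter in the chain (the `filter` variable name is an assumption):

```objc
// Capture the most recently processed frame from a filter as a UIImage.
UIImage *currentFrame = [filter imageFromCurrentlyProcessedOutput];
```

This is the same readback call that appears elsewhere in these reports; it stalls the GPU pipeline, so it is better suited to occasional captures than per-frame use.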

Capturing UIImage with GPUImageSketchFilter produces black and white image

Hey Brad - I'm trying out a few of your filters and I'm having some trouble with the GPUImageSketchFilter. I've modified the SimpleImageFilter example and replaced the default sepia filter with a sketch filter. I immediately capture a UIImage using:

UIImage *image = [sketchFilter imageFromCurrentlyProcessedOutput];

and write it to the photos library using:

ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
[library writeImageToSavedPhotosAlbum:[image CGImage] metadata:nil completionBlock:nil];

It's a fairly basic example but I noticed that the image written is simply a black and white image. The preview of the image in the app properly has the "sketch" effect applied to it. I noticed this issue in a different app I was working on and wanted a simple example to demonstrate it.

Is saving an image this way not intended behavior? Should I be doing something different here to write out the filtered image?

What happens with addTarget and removeTarget?

I add a filter as a target of the video camera, and add a GPUImageView as the filter's target. Then, when I want to switch to another filter, I remove the first filter from the video camera's targets, add the new filter as the video camera's target, and add the GPUImageView as the new filter's target. That works fine.

But when I want to remove all filters, to show the original unfiltered image, I remove the filter and add the GPUImageView directly as a target of the video camera, and then the screen becomes black.

What is happening?

Issue with MovieWriter and FilterGroup filters

Using GPUImageAdaptiveThresholdFilter or GPUImageUnsharpMaskFilter with the SimpleVideoFileFilter example does not work. It appears to be an issue between filters descending from GPUImageFilterGroup and GPUImageMovieWriter.

Portrait Image Sizing Issue

I am having an issue with captured images. At first, when an image was captured with the camera and processed through a filter, the image would rotate. This was fixed by using imageFromCurrentlyProcessedOutputWithOrientation:, passing [image imageOrientation] as the argument. The reason is that when you are holding the camera in portrait, the image orientation is set to Up, and Up on the iPhone is actually when you are holding it sideways, which makes the image landscape rather than portrait.

Anyway, the new issue is that after the image is processed with the filters, it gets resized. I printed out the width and height in imageFromCurrentlyProcessedOutputWithOrientation: in GPUImageFilter.m, for cgImageFromBytes with CGImageGetWidth and CGImageGetHeight, and also for finalImage with filterImage.size.width and height. The variable cgImageFromBytes has the correct width and height, passed via sizeOfFBO; however, finalImage has the width and height reversed.

Any ideas?

Memory warning while saving photos in camera roll

I'm using GPUImageFilterPipeline, and when I use ALAssetsLibrary to save a photo to the camera roll, it succeeds, but I receive a memory warning, and if I take another still photo, it crashes.

I use 2 filters (transform and rotation) and add 3 more (vignette, contrast, exposure) with GPUImageFilterPipeline. To take a still photo I call [GPUImageStillCamera capturePhotoProcessedUpToFilter:] with [[GPUImageFilterPipeline filters] lastObject] as the argument.

Filter Showcase Crash

In the Filter Showcase example, it crashes the second time a filter is chosen, at GLProgram.m line 159.

GPUImageView and UIScrollView Freeze.

I'm using a GPUImageView and a UIScrollView as children of a UIView. Sometimes when I scroll the scroll view, the render freezes on the GPUImageView. I saw some posts about NSRunLoop and CADisplayLink fixes, but I think none of them are used by the framework. Any ideas? Thanks.

Advice on implementing JFA Voronoi

I'd like to try to implement the algorithm here: http://unitzeroone.com/labs/jfavoronoi/ to create Voronoi diagrams. One thing I need is a double-buffer ping/pong structure to iterate the flood-fill algorithm. I'm wondering if my best strategy is to build that into a subclass of GPUImageFilter, or if some combination of GPUImageTexture objects would be better? Any advice?
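For reference, the ping/pong idea in raw OpenGL ES looks roughly like this (a sketch; buffer creation and drawing are elided, and all names are assumptions):

```objc
// Two FBO/texture pairs: read from one while rendering into the other,
// swapping roles each pass of the jump-flood iteration.
GLuint fbo[2], tex[2];       // assumed to be created and sized elsewhere
int src = 0, dst = 1;

for (int pass = 0; pass < numberOfPasses; pass++)
{
    glBindFramebuffer(GL_FRAMEBUFFER, fbo[dst]);
    glBindTexture(GL_TEXTURE_2D, tex[src]);
    // ... set the JFA step-size uniform, draw a full-screen quad ...

    // Swap read/write targets for the next iteration.
    int tmp = src; src = dst; dst = tmp;
}
```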

GPUImageTransformFilter Scaling Issue

I am trying to resize my video stream, so I have implemented the GPUImageTransformFilter:

filter = [[GPUImageTransformFilter alloc] init];
[(GPUImageTransformFilter *)filter setAffineTransform:CGAffineTransformMakeScale(2,2)];

This effectively 'zooms' the image which is great, but I would like the content mode of the display to remain the same. So the final image displayed in the GPUImageVideoCamera output would ideally be the same image (as without the transform filter), only at half the resolution.

Is this possible?

I tried adding:
[imageView setContentMode:UIViewContentModeScaleAspectFit];

but no luck.

Does my question make sense?

The reason I would like to do this, is that I would like to make 3 simultaneous video sizes (full size 640x480, then 320x240, then 160x120).

Many thanks!

Append/merge multiple video files into one final output video?

Hi, I've been experimenting with this great framework (thanks Brad/All for the amazing work!), & have recording with various filters saving to output files working.

I now need to merge a number of the already recorded videos into a final single composition. Is this possible with the current api?

As a test, I've tried to edit the SimpleVideoFileFilter example to play 2 video files (pre-added to the project) one after the other, but I can't find a way to hook into the end of the first one so I can start the second.

Does anyone have any pointers/ideas on how I could go about doing/adding this functionality?

Thanks
Phil
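One hedged direction for the merge itself (standard AVFoundation, not a GPUImage API): concatenate the recorded clips with an AVMutableComposition, export it, and then run the exported file through the filter pipeline. `recordedClips` is an assumed NSArray of AVAssets.

```objc
// Append each recorded clip's video track onto one composition track.
AVMutableComposition *composition = [AVMutableComposition composition];
AVMutableCompositionTrack *videoTrack =
    [composition addMutableTrackWithMediaType:AVMediaTypeVideo
                             preferredTrackID:kCMPersistentTrackID_Invalid];

CMTime cursor = kCMTimeZero;
for (AVAsset *clip in recordedClips)
{
    AVAssetTrack *track = [[clip tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
    [videoTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, clip.duration)
                        ofTrack:track
                         atTime:cursor
                          error:NULL];
    cursor = CMTimeAdd(cursor, clip.duration);
}
// The composition can then be exported with AVAssetExportSession, and the
// exported file fed to GPUImageMovie by URL.
```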

Custom Filter (CustomFilter.fsh) crashes in FilterShowcase

Hi there,

I have looked around the forum but have not found anything related so I will go ahead and post a problem I encountered.

I am trying to use a Custom Filter through shaders. In order to do so, I was exploring the source code present in
the FilterShowcase example to learn how to do it.

However, the custom filter crashes with the following log.

Has anybody run into this problem?

Thanks in advance for any help or pointers.

2012-04-16 17:18:55.627 FilterShowcase[3718:707] Shader compile log:
ERROR: 0:8: Use of undeclared identifier 'fractionalWidthOfPixel'
ERROR: 0:10: Use of undeclared identifier 'sampleDivisor'
ERROR: 0:11: Use of undeclared identifier 'samplePos'
2012-04-16 17:18:55.628 FilterShowcase[3718:707] Failed to compile fragment shader
2012-04-16 17:18:55.630 FilterShowcase[3718:707] Program link log: ERROR: One or more attached shaders not successfully compiled
2012-04-16 17:18:55.633 FilterShowcase[3718:707] Fragment shader compile log: (null)
2012-04-16 17:18:55.634 FilterShowcase[3718:707] Vertex shader compile log: (null)
2012-04-16 17:18:55.643 FilterShowcase[3718:707] *** Assertion failure in -[GPUImageFilter initWithVertexShaderFromString:fragmentShaderFromString:], /Users/mcabral/Dropbox/projects/iphone_new/DrawingTest/BradLarson-GPUImage-7461db7/framework/Source/GPUImageFilter.m:64
2012-04-16 17:18:55.659 FilterShowcase[3718:707] *** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'Filter shader link failed'
*** First throw call stack:
(0x378b188f 0x35db7259 0x378b1789 0x380b53a3 0x8bb9 0x8dcf 0x8ebf 0x5bdb 0x2fc7 0x33f96e33 0x33fa3391 0x33fa3201 0x33fa30e7 0x33fa28b3 0x33fa26ab 0x33f96ca7 0x33f9697d 0x8937 0x3400fae5 0x340897ab 0x380e7933 0x37885a33 0x37885699 0x3788426f 0x378074a5 0x3780736d 0x32714439 0x33f8be7d 0x2aa9 0x2544)
terminate called throwing an exception

Larger images need better interpolation when scaling down

Right now, when passing a large image into a filter that will process it for output to a much smaller destination, the filter will just sample from locations corresponding to pixels in the final image. This leads to sharp discontinuities, rather than a smoother scaledown, so a better way of scaling these images to provide smoother interpolation needs to be used.
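One standard fix (a sketch, named plainly as mipmapping rather than anything the framework currently does): generate mipmaps for the source texture and sample with trilinear filtering, so large downscales average over the skipped texels instead of point-sampling them. `sourceTexture` is an assumed handle.

```objc
// Enable trilinear mipmapped minification on the input texture (OpenGL ES 2.0).
glBindTexture(GL_TEXTURE_2D, sourceTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glGenerateMipmap(GL_TEXTURE_2D);
```

Note that glGenerateMipmap requires power-of-two texture dimensions on many ES 2.0 devices, which is its own constraint for camera-sized frames.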

Add a tiling mechanism to prevent iPhone 4 crashes on photo processing

Currently, the maximum size for any image that can be processed is limited by the device's GPU maximum texture size. For all devices older than the iPad 2, this is 2048x2048 pixels. For the iPad 2, Retina iPad, and the iPhone 4S, this is 4096x4096.

This presents a particular problem for the iPhone 4, where the new GPUImageStillCamera can capture still photos of a higher resolution than the device's GPU can support.

In order to support filtering larger images, a tiling mechanism will be needed to break these images into smaller chunks to be processed and then stitched together at a final stage.

Sound?

Hi,

I am working on a video recording project at the moment and I found your framework to be the best one out there.

I know that this framework deals with the image aspect only, but as a result every recording is a silent film.

After some digging around, I noticed that you commented out "AVAssetWriterInput *assetWriterAudioIn;" in the GPUImageMovieWriter class. So is this something you plan to support in the future? If not, can you point me in the right direction?

I am assuming the best way is to share the same "assetWriter" with the "assetWriterVideoInput", since capturing the video and audio separately and merging them back together seems a little bit too much.

Thank you again for sharing this framework and also in advance for any further guidance .

Best,
Pondd

rotation and RGBImagePicture

A UIImage which is processed by a filter seems to be affected by device orientation.
I blocked the view's orientation changes, so the view itself doesn't change with orientation, but RGBImagePicture (or the filter) seems to process the orientation change by itself.
How can I block this kind of standalone orientation handling?
