
sdl-gpu's People

Contributors

ademuri, albertvaka, cosmo-ray, diegoace, ephemer, grimfang4, helios-vmg, jdearborn, jesta88, kirilledelman, masonwheeler, mp4, mutability, nuk510, ramirez7, randomerrormessage, robloach, seanballais


sdl-gpu's Issues

MSVC 2015 defines snprintf

To compile under Visual Studio 2015, the
#ifdef _MSC_VER
guards in SDL_gpu_matrix.c#L25 and renderer_GL_common.inl#L39 have to be replaced by
#if defined(_MSC_VER) && (_MSC_VER < 1900)
because MSVC 2015 (version 1900) now provides snprintf by itself.

Originally reported by @chisquare130 over at wesnoth/wesnoth
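
For reference, a sketch of the shape of the guarded fallback after the change (the replacement name below is hypothetical; the actual macro in SDL_gpu may differ):

/* "my_vc_snprintf" is a placeholder standing in for whatever fallback the
   library actually defines for old MSVC versions. */
#if defined(_MSC_VER) && (_MSC_VER < 1900)
    /* Pre-2015 MSVC lacks a conforming snprintf, so map it to a fallback. */
    #define snprintf my_vc_snprintf
#endif
/* With _MSC_VER >= 1900, the compiler's own snprintf is used untouched. */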

Any full documentation or files to use sdl gpu to build a game framework

I built a game engine using sdl-gpu and I'm trying to grow it into something bigger (like a game framework) with more capabilities. The problem is that I often find myself having to edit sdl_gpu and fix issues, which does not go well without full documentation. I know about the reference at http://dinomage.com/reference/SDL_gpu/index.html, but it doesn't help much; it says nothing about the inner implementation of the functions I'm dealing with. Where can I find files, or any other kind of information, that would help here? I'm sure the people who worked on sdl_gpu shared some explanations with one another; can I get access to that?

GPU_BlitRect doesn't align correctly

GPU_BlitRect(image, NULL, target, NULL) doesn't work as expected: the top-left corner of the image is not mapped to the top-left corner of the target. Instead you get a weird effect where the center of scaling is somewhere to the bottom-right.

Unscaled (image dimensions == target dimensions):

(screenshot: unscaled result)

Scaling 1760x960 image to 886x522 target:

(screenshot: scaled result)

Code in this case is simply:

        GPU_BlitRect( render_buffer, NULL, back_buffer, NULL );

(back_buffer is the window framebuffer target; render_buffer is a fixed-size GPU_Image that is the main render target; I call GPU_SetWindowResolution for back_buffer when a window-resized event is received)
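
As a workaround sketch (my assumption about usage, not a confirmed fix), passing explicit rectangles instead of NULL should pin both top-left corners to (0, 0):

/* Hedged workaround: spell out the full source and destination rectangles
   instead of passing NULL, so the mapping is explicit. */
GPU_Rect src = { 0, 0, (float)render_buffer->w, (float)render_buffer->h };
GPU_Rect dst = { 0, 0, (float)back_buffer->w,   (float)back_buffer->h };
GPU_BlitRect( render_buffer, &src, back_buffer, &dst );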

Blit with linear filtering bleeds adjacent pixels from outside the source region

Simplest way to explain this is with a demo:

(animated demo attached)

Source: https://gist.github.com/mutability/82eea1c8f0b092cd7710a74922caa001

Left side is the source GPU_Image, 16x16, scaled up so you can see it.

Right side is the results of GPU_BlitRect with a source rectangle that covers the red square only:

  • Top: GPU_FILTER_NEAREST
  • Middle: GPU_FILTER_LINEAR
  • Bottom: GPU_FILTER_LINEAR with snap disabled and adding in a half-pixel correction

As I understand it, the problem is that the texture coordinates that BlitRect computes lie on the texel boundaries, not texel centers. Probably GPU_Blit* should do the necessary corrections itself rather than requiring the caller to juggle this.
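
A sketch of the half-pixel correction mentioned for the bottom row, where image and screen stand for the 16x16 source image and the window target from the demo, and snap is assumed to be disabled on the image (the exact correction SDL_gpu would need internally may differ):

/* Placeholder values standing in for the red square's bounds in the 16x16
   source image and an arbitrary on-screen destination rectangle. */
GPU_Rect red = { 4.0f, 4.0f, 8.0f, 8.0f };
GPU_Rect dst = { 0.0f, 0.0f, 128.0f, 128.0f };

/* Disable snapping, then inset the source rect by half a texel on each side
   so sampling happens at texel centers instead of texel boundaries. */
GPU_SetSnapMode(image, GPU_SNAP_NONE);
GPU_Rect src = { red.x + 0.5f, red.y + 0.5f, red.w - 1.0f, red.h - 1.0f };
GPU_BlitRect(image, &src, screen, &dst);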

GPU_GetContextTarget() always returns null

I'm using SDL 1.2.15 on Windows with the latest sdl_gpu and trying to run the init-demo sample. I get an exception when applyTargetCamera() is called because GPU_GetContextTarget() always returns null. I can't find any code that is actually initialising the context (I see some code that handles this if using SDL 2 on line 1368 of renderer_GL_common.inl, but I need to stay with SDL 1.2)

Does sdl_gpu still work with SDL 1.2?

GPU_Circle and GPU_Arc draw buggy circles/arcs

Code:

#include "SDL.h"
#include "SDL_gpu.h"

int main(int argc, char *argv[]) {
    GPU_Target *screen = GPU_Init(800, 600, SDL_WINDOW_RESIZABLE);

    bool quit = false;
    while (!quit) {
        SDL_Event e;
        while (SDL_PollEvent(&e)) {
            switch (e.type) {
            case SDL_QUIT:
                quit = true;
                break;
            }
        }

        GPU_Clear(screen);

        SDL_Color color = { 255, 0, 0, 255 };
        GPU_SetLineThickness(50.0);
        GPU_Circle(screen, 250, 250, 200, color);

        GPU_Flip(screen);
        SDL_Delay(16);
    }

    return 0;
}


Output: https://i.imgur.com/Ykfdkxn.png

Original issue reported on code.google.com by [email protected] on 10 Dec 2014 at 8:07

crash on sdl2_gpu when loading images

I was compiling my game engine and got a crash when loading an image. After much trial and error, I'm sure something is wrong with the internal code of the library (the sdl2_gpu MinGW release).
It works as expected on Android, but not on Windows with your prebuilt release. I tried sdl_gpu version 1 and it worked fine, but I can't switch back now. Even your space tutorial example crashes with that release, exactly when loading the image, yet it doesn't crash when compiled against sdl_gpu version 1. I'm afraid I will have to download Visual Studio to use its release and see if that is stable. Can you please tell me what is wrong, or offer a new release?

when stbi is found, but stbi_write isn't, the include path for bundled stbi is missing

Here's a rather hacky fix. When only one of the two is missing, the system-wide one that was found probably shouldn't be used either, but I don't know enough CMake to kick the system-wide stbi out when stbi_write isn't there.

diff --git a/CMakeLists.txt b/CMakeLists.txt
index be0fe38..5eadd1f 100644
--- a/CMakeLists.txt
+++ b/CMakeLists.txt
@@ -235,11 +235,10 @@ if(NOT GLEW_FOUND)
   include_directories(src/externals/glew/GL)
 endif(NOT GLEW_FOUND)

-if(NOT STBI_FOUND)
+if(NOT STBI_FOUND OR NOT STBI_WRITE_FOUND)
   include_directories(src/externals/stb_image)
   add_definitions("-DSTBI_FAILURE_USERMSG")
-endif(NOT STBI_FOUND)
-
+endif(NOT STBI_FOUND OR NOT STBI_WRITE_FOUND)

 add_definitions("-Wall -std=c99 -pedantic")

Unable to initialize a renderer on an external window

SDL2 allows you to call SDL_CreateWindowFrom() on an existing GUI object's window handle, and then create a renderer on it. This technique doesn't appear to migrate well to SDL_GPU.

  • Create a Windows form
  • Create a control on it and obtain its HWND
  • Create an SDL_Window on that HWND with SDL_CreateWindowFrom.
  • Call SDL_GetWindowID on this window and GPU_SetInitWindow on the result
  • Call GPU_Init

Expected: A working renderer
Observed: GPU_Init fails with "Could not initialize."

A bit of digging reveals that GLEW could not retrieve the version string, because SDL_GPU failed to initialize the first GL context before calling glewInit. After doing this manually on my end, GPU_Init now throws an access violation in the call to applyTargetCamera because GPU_GetContextTarget returns null.

As near as I can tell, this is returning null because earlier on, in the call to MakeCurrent, target->context->context is null and so the required initialization never takes place. I haven't been able to determine yet what's missing that's causing target->context->context to be null at this point.
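
A minimal sketch of the reproduction sequence described above (the HWND acquisition is GUI-toolkit code and is represented by a placeholder here):

/* hwnd is assumed to be the HWND of an existing control on a Windows form. */
void* hwnd = NULL; /* obtained from the GUI toolkit in the real application */

SDL_Window* window = SDL_CreateWindowFrom(hwnd);
GPU_SetInitWindow(SDL_GetWindowID(window));

/* Expected to return a working render target; observed to fail as described. */
GPU_Target* screen = GPU_Init(800, 600, GPU_DEFAULT_INIT_FLAGS);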

Request for fallback software renderer

This computer is one of the few laptops that the FSF have come even close to recommending. Unfortunately, it doesn't have 3D acceleration hardware. If a laptop like this doesn't have a Mesa implementation, games that transition from SDL 1.2 to sdl_gpu would have their system requirements raised to the point where computers like the Yeeloong 8101B can't even run them. At the very least, could you add a fallback software renderer that is superior to SDL2's downgraded renderer?

Original issue reported on code.google.com by [email protected] on 20 Aug 2014 at 8:24

Build in Windows Error

Hello,

I am doing the following process:

  1. Cloning sdl-gpu git repo to my Desktop using cmd.exe (command prompt) with admin access.
  2. Running: cmake -G "MinGW Makefiles"
  3. Running: mingw32-make

I get some warnings and link messages like:

  1. "cast to pointer from integer of different size"
  2. "CMakeFiles\SDL_gpu_shared.dir/objects.a(renderer_OpenGL_4.c.obj):renderer_OpenGL_4.c:(.text+0xd947): undefined reference to `SDL_free'"

Finally, I get some errors like:

  1. "collect2.exe: error: ld returned 1 exit status"
  2. "src/libSDL2_gpu.dll failed"
  3. "mingw32-make[2]: *** [src/libSDL2_gpu.dll] Error 1"
  4. "mingw32-make[1]: *** [src/CMakeFiles/SDL_gpu_shared.dir/all] Error 2"

At the end of it all, "src/libSDL2_gpu.dll.a" is created but "libSDL2_gpu.dll" is not.

My questions are as follows:

  1. How can I properly build sdl-gpu on Windows?
  2. Are the errors and warnings I am getting normal?
  3. Don't I need the actual .dll after building my project with sdl-gpu, or is the .a file enough?

Thanks so much for any help you can provide.

Drawing circle goes wrong when drawing to huge coordinates

It looks like the algorithm you are using to draw a circle doesn't work as expected when drawing at huge coordinates like 100,000,000 (brought on screen by tracing with a GPU_Camera). So either the game world has to stay small, or the engine has to change units (which doesn't help, because the drawing coordinates passed to GPU_CircleX() end up the same in the end). When drawing a circle at such coordinates, many of the circle's vertices are lost (say 40% of them), so it ends up looking like a polygon.
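
For what it's worth, the symptom is consistent with single-precision float precision loss (my assumption about the cause, not something stated in the report): above 2^24, roughly 16.7 million, adjacent float values are several units apart, so nearby circle vertices collapse onto the same positions. A tiny illustration:

#include <stdio.h>

int main(void) {
    float big = 100000000.0f;   /* 1e8: adjacent floats here are 8 apart */
    float shifted = big + 3.0f; /* the +3 is absorbed by rounding */
    printf("%d\n", shifted == big);  /* prints 1 */
    return 0;
}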

Inaccurate rectangle (and other shapes) rendering

What steps will reproduce the problem?
1. compile and run attached program

What is the expected output? What do you see instead?

Expected output: for line thickness greater than 1, when drawing a rectangle, it should look like it was drawn with a single continuous line. Instead it looks like four separate lines with missing rectangle corners. See attached image.

What version of the product are you using? On what operating system?

SDL2_gpu, SDL2 2.0.1+ (trunk), archlinux (current)


Original issue reported on code.google.com by [email protected] on 25 Nov 2013 at 8:18


UpdateImage is very slow at updating a small region from an incompatible SDL_Surface

If UpdateImage needs to convert the format of the source pixels, it converts the entire source surface in copySurfaceIfNeeded.

My use case involves copying thousands of individual 32x32 regions from a very large (512x6000 or larger) source surface to build an atlas image at startup. With the current code, that ends up converting the whole 512x6000 surface thousands of times, which is intolerably slow.
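
A sketch of a possible caller-side workaround (my assumption, untested): copy just the needed region into a small scratch surface first, so only those pixels get converted, then update the image from the scratch surface.

/* Hypothetical helper: update a small region of 'atlas' at (dst_x, dst_y)
   from the region 'src_rect' of the huge 'source' surface, converting only
   that region instead of the whole surface. */
void update_region(GPU_Image* atlas, int dst_x, int dst_y,
                   SDL_Surface* source, SDL_Rect src_rect)
{
    /* Small scratch surface in a known-good format for the image. */
    SDL_Surface* scratch = SDL_CreateRGBSurfaceWithFormat(
        0, src_rect.w, src_rect.h, 32, SDL_PIXELFORMAT_RGBA32);

    SDL_BlitSurface(source, &src_rect, scratch, NULL);  /* converts on copy */

    GPU_Rect image_rect = { (float)dst_x, (float)dst_y,
                            (float)src_rect.w, (float)src_rect.h };
    GPU_UpdateImage(atlas, &image_rect, scratch, NULL);

    SDL_FreeSurface(scratch);
}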

Run Shader on Blits from a GPU_Image Tileset

Hi,

I have been successful in doing two things with sdl-gpu so far:

  1. Use a GPU_Image tileset with GPU_BlitScale() to draw out 16x16 pixel tiles to create the level of my game.
  2. Use GPU_LoadShader(), GPU_LinkShaders(), glGenVertexArrays(), glVertexAttribPointer(), etc. in order to create and modify shader triangles within my game. I am able to pass in vertex data and utilize attributes, uniforms, position, color, etc. to modify the shaders during rendering.

One question is as follows:

  1. What is a good method to apply/run a shader (from point 2 above) on a 16x16 blit (from point 1 above)? In other words, how do I target only specific 16x16 blit tiles to be affected by a particular shader?

A second question is as follows:

  1. What is a good method to have a shader apply across multiple 16x16 blits as a whole? For example, where a light would diffuse across an area of 20x20 tiles, not just each tile individually. Would each tile get its own shader class? How would multiple tiles share 1 shader?

Thanks so much!
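
A minimal sketch of one way to do the per-tile shader targeting asked about in the first question, assuming a program object already created with GPU_LoadShader/GPU_LinkShaders (the attribute names match SDL_gpu's default shaders, and "u_intensity" is a placeholder uniform):

/* tile_shader and block are assumed to come from GPU_LinkShaders() and
   GPU_LoadShaderBlock(); tile_src is the 16x16 source rect of one tile. */
void draw_shaded_tile(GPU_Image* tileset, GPU_Target* screen,
                      Uint32 tile_shader, GPU_ShaderBlock* block,
                      GPU_Rect tile_src, float x, float y)
{
    /* Switch the shader on only around the blits it should affect;
       everything blitted outside this pair uses the default pipeline. */
    GPU_ActivateShaderProgram(tile_shader, block);
    GPU_SetUniformf(GPU_GetUniformLocation(tile_shader, "u_intensity"), 1.0f);
    GPU_BlitScale(tileset, &tile_src, screen, x, y, 4.0f, 4.0f);
    GPU_DeactivateShaderProgram();
}

For the second question, the same idea applies at a coarser granularity: a lighting effect spanning a 20x20 block of tiles would more naturally be one shader pass over a render target covering that whole area, rather than one shader per tile.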

Access Violation / Segmentation Fault when freeing "screen" GPU_Target alias

As the title says, every time I try to free a GPU_Target created with GPU_CreateAliasTarget from the "screen" GPU_Target, the application crashes. I'm a Ruby programmer, so my knowledge of C is limited, but I managed to make a C test application using an alias, and debugging it revealed that the crash happens somewhere in the FreeTarget function (wherever that is).
It's possible I screwed something up when building the DLL, since I'm pretty new to all of this, but this is the only problem I've had so far.
Just in case it's relevant, the DLL was built twice, using mingw32-make and makefiles created with:

  1. cmake-gui-3.8.0-rc1-win32-x86 with "MinGW Makefiles",
  2. the cmake built into the CLion IDE, which apparently uses "CodeBlocks - MinGW Makefiles".

Don't know if it matters, but in both cases SDL_gpu was linked against SDL 2.0.5.
Oh, almost forgot: I tried the SDL2_gpu-0.10.0-mingw32.zip release and there was no such problem with it.
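
A minimal repro sketch of the sequence described (my reconstruction from the report, not the reporter's exact code):

#include "SDL.h"
#include "SDL_gpu.h"

int main(int argc, char* argv[]) {
    GPU_Target* screen = GPU_Init(800, 600, GPU_DEFAULT_INIT_FLAGS);
    if (screen == NULL)
        return 1;

    /* Make an alias of the window target, then free the alias. */
    GPU_Target* alias = GPU_CreateAliasTarget(screen);
    GPU_FreeTarget(alias);   /* reported to crash here */

    GPU_Quit();
    return 0;
}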

CMake SDL_gpu_USE_SYSTEM_GLEW does not work

I get multiple-definition errors because it's still trying to build its own GLEW in statically.

// Many warnings like this
C:\sdl-gpu-master\src\externals\glew\glew.c:3461:11: warning: '__GLEW_SGIX_reference_plane' redeclared without dllimport attribute: previous dllimport ignored [-Wattributes]

// A couple errors like this
C:/glew-2.0.0/lib/libglew32.dll.a(d003195.o):(.text+0x0): multiple definition of `glewInit@0'
CMakeFiles\3d-demo.dir/objects.a(glew.c.obj):glew.c:(.text+0x27b8a): first defined here

Using wrong GPU adapter

Hi!
My laptop has one integrated Intel HD 4600 GPU and one NVidia GTX850M. The application ends up using the Intel GPU, which is slow (but saves power), and I would like the user to be able to choose which GPU to use.
I just wanted to report this issue.

Luckily I found a solution. With this code the NVidia driver chooses the most powerful GPU:

#include <windows.h> // for the DWORD type

// Enable NVIDIA Optimus: exporting this symbol makes the NVIDIA driver
// pick the discrete GPU instead of the integrated one.
extern "C" {
__declspec(dllexport) DWORD NvOptimusEnablement = 0x00000001;
}

I did some benchmarking on my laptop (screen: 800x600, image: 64x64):
For Intel HD: ~10000 blits/sec @ 60 FPS
For NVidia: ~50000 blits/sec @ 60 FPS (Temp: GPU:~75 Celsius, CPU: ~80 Celsius)

I wish there were a way to list all adapters and choose which one to use. That would be a good thing to expose in an options menu for selecting the desired adapter.

GPU_CopyImageFromTarget produces upside-down image

If you pass a GPU_Target to GPU_CopyImageFromTarget that is created with GPU_LoadTarget, the resulting GPU_Image will be an upside-down version of the original GPU_Image passed to GPU_LoadTarget.

I believe this is caused because GPU_CopyImageFromTarget calls CopySurfaceFromTarget which calls getRawTargetData which flips the data vertically.

Could GPU_CopyImageFromTarget detect that the GPU_Target was created from a GPU_Image and just return a copy of that GPU_Image instead?

This code shows the glitch:

GPU_Target* screen = GPU_Init(400, 200, GPU_DEFAULT_INIT_FLAGS);

// Render a triangle into an off-screen image via a loaded target.
GPU_Image* image = GPU_CreateImage(200, 200, GPU_FORMAT_RGBA);
GPU_Target* imageTarget = GPU_LoadTarget(image);
GPU_TriFilled(imageTarget, 0, 0, 200, 0, 100, 200, GPU_MakeColor(0, 0, 255, 255));
GPU_Image* targetCopy = GPU_CopyImageFromTarget(imageTarget);

// Left: the original image. Right: the copy, which comes out upside-down.
GPU_Blit(image, NULL, screen, 100, 100);
GPU_Blit(targetCopy, NULL, screen, 300, 100);

GPU_Flip(screen);

SDL_Haptic and Fullscreen question

Hi Jon!
I have a question.

I'm initializing the sdl part like this:

if (SDL_Init(SDL_INIT_AUDIO | SDL_INIT_VIDEO | SDL_INIT_JOYSTICK |
             SDL_INIT_GAMECONTROLLER | SDL_INIT_HAPTIC) < 0) {
    printf("initialize failed SDL_Error: %s\n", SDL_GetError());
}
atexit(SDL_Quit);

I need to do it this way so I have an SDL window handle to use with:

SDL_SetWindowFullscreen(main_window, SDL_FALSE);
SDL_SetWindowSize(main_window, SCREEN_WIDTH, SCREEN_HEIGHT);
SDL_SetWindowPosition(main_window, SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED);

That's only because GPU_ToggleFullscreen changes the position of the window every time it switches from windowed to fullscreen (and I want it centered every time).

The problem is that every time I exit, an assertion comes up when closing the Haptic subsystem (because it is already closed?).

Have you encountered something like this while running/testing? Any recommendations?

I did not want to post this on the SDL forums; I think it's more relevant to sdl-gpu. Anyway, see ya round.

Original issue reported on code.google.com by [email protected] on 23 Dec 2014 at 8:22

Basic Triangle Not Drawing

Hi,

I am trying to render a simple triangle to the screen using the following tutorial code with sdl-gpu:

Tutorial Source Code

My code base uses sdl-gpu and I have got the 3D spinning triangles to work. However, I can't get the basic code above to work.

I am using the following paradigm:

  1. Initialize GLEW.
  2. Create Vertex Array Object.
  3. Create a Vertex Buffer Object and copy the vertex data to it.
  4. Bind Buffer and Buffer Data.
  5. Create and compile the vertex shader.
  6. Create and compile the fragment shader.
  7. Link the vertex and fragment shader into a shader program.
  8. Specify the layout of the vertex data with pointer.
  9. Use glDrawArrays() in a while loop.

My main "while" loop uses "GPU_Clear(this->_renderer);" and "GPU_Flip(this->_renderer);" wrapping the glDrawArrays() listed above.

Is there something I am not doing with sdl-gpu to get the triangle above to render?
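
One thing worth trying (a sketch of a common pattern for mixing raw GL with SDL_gpu, using functions mentioned elsewhere in these issues; not a confirmed fix): flush SDL_gpu's batched state before the raw GL draw and reset it afterwards, inside the Clear/Flip pair. Here renderer, shaderProgram, and vao stand for the target and GL objects from the tutorial setup:

GPU_Clear(renderer);

// Let SDL_gpu submit any pending batched geometry and release its GL state
// assumptions before GL is touched directly.
GPU_FlushBlitBuffer();

glUseProgram(shaderProgram);
glBindVertexArray(vao);
glDrawArrays(GL_TRIANGLES, 0, 3);
glBindVertexArray(0);
glUseProgram(0);

// Hand the GL state back to SDL_gpu before using its API again.
GPU_ResetRendererState();

GPU_Flip(renderer);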

Camera and matrix transforms don't seem to mix intuitively

I want to get the current projection matrix and store it in a variable so that I can reuse it later. I can try this:

float * matrix;
GPU_PushMatrix();
matrix = GPU_GetProjection();
...
// here I want to set the projection matrix back to "matrix"

How do I do that? In other words, I'm looking for an equivalent of glLoadMatrixf(matrix).
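
A sketch of one way this could work, assuming GPU_GetProjection() returns a pointer to the 16 floats at the top of the projection stack and that GPU_MatrixCopy copies a 4x4 matrix (my reading of the matrix API, not a confirmed answer):

#include <string.h>

float saved[16];

/* Snapshot the current projection matrix. */
memcpy(saved, GPU_GetProjection(), sizeof(saved));

/* ... other rendering that changes the projection ... */

/* Restore it later; roughly the equivalent of glLoadMatrixf(saved). */
GPU_MatrixCopy(GPU_GetProjection(), saved);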

YUV format support is incomplete

Hi,

How do I replace SDL_UpdateYUVTexture() with sdl-gpu functions?

With GPU_UpdateImageBytes() I can only copy the Y plane's pixels to the GPU_Image, so I only see a grayscale image/stream, no color.


  while (1) {
    GPU_Clear(screen);
    for (int i = 0; i < 4; ++i) {
      fread(yPlane[i], 1, pixel_w * pixel_h, fp[i]);
      fread(uPlane[i], 1, pixel_w * pixel_h / 4, fp[i]);
      fread(vPlane[i], 1, pixel_w * pixel_h / 4, fp[i]);

      GPU_UpdateImageBytes(texture, &rect[i], yPlane[i], pixel_w);
      GPU_Blit(texture, NULL, screen, pixel_w / 2, pixel_h / 2);
    }

    GPU_Flip(screen);
    SDL_Delay(33);
  }

//SDL2 version 
 while (1) {
    SDL_RenderClear(renderer);
    for (int i = 0; i < 4; ++i) {
      fread(yPlane[i], 1, pixel_w * pixel_h, fp[i]);
      fread(uPlane[i], 1, pixel_w * pixel_h / 4, fp[i]);
      fread(vPlane[i], 1, pixel_w * pixel_h / 4, fp[i]);

      SDL_UpdateYUVTexture(texture, NULL, yPlane[i], pixel_w, uPlane[i],
                           uvPitch, vPlane[i], uvPitch);
      SDL_RenderCopy(renderer, texture, NULL, &(rect[i]));
    }

    SDL_RenderPresent(renderer);
    SDL_Delay(33);
 }
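
There doesn't appear to be a direct YUV upload path, so one workaround sketch (my assumption, not an official API): upload the Y, U, and V planes as three single-channel GPU_Images and combine them in a fragment shader, binding the extra planes with GPU_SetShaderImage. Here yuv_shader and block are assumed to come from GPU_LinkShaders()/GPU_LoadShaderBlock(), and the shader is assumed to sample uniforms named "tex_u" and "tex_v" and do the YUV-to-RGB math:

/* Three planes as separate images (GPU_FORMAT_LUMINANCE assumed suitable). */
GPU_Image* y_img = GPU_CreateImage(pixel_w,     pixel_h,     GPU_FORMAT_LUMINANCE);
GPU_Image* u_img = GPU_CreateImage(pixel_w / 2, pixel_h / 2, GPU_FORMAT_LUMINANCE);
GPU_Image* v_img = GPU_CreateImage(pixel_w / 2, pixel_h / 2, GPU_FORMAT_LUMINANCE);

/* Per frame: upload each plane with its own pitch. */
GPU_UpdateImageBytes(y_img, NULL, yPlane[i], pixel_w);
GPU_UpdateImageBytes(u_img, NULL, uPlane[i], pixel_w / 2);
GPU_UpdateImageBytes(v_img, NULL, vPlane[i], pixel_w / 2);

/* Bind the chroma planes to extra texture units and draw the luma plane
   through the conversion shader. */
GPU_ActivateShaderProgram(yuv_shader, &block);
GPU_SetShaderImage(u_img, GPU_GetUniformLocation(yuv_shader, "tex_u"), 1);
GPU_SetShaderImage(v_img, GPU_GetUniformLocation(yuv_shader, "tex_v"), 2);
GPU_Blit(y_img, NULL, screen, pixel_w / 2.0f, pixel_h / 2.0f);
GPU_DeactivateShaderProgram();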

Snapping is performed before scaling which produces flickering

In tests/pixel-perfect/main.c I added
camera.zoom = 0.5
and switched filtering to NEAREST by pressing 'f' (but the effect is observed without this as well).

When I move the rectangles by 1.0 they flicker, i.e.:
when y=0: the 0th, 2nd, and 4th pixel rows of the image are drawn
when y=1: the 1st, 3rd, and 5th rows are drawn
when y=2: the 0th, 2nd, and 4th rows are drawn and the image is moved down one pixel row

Am I right that snapping is applied before scaling? I think it would be better for pixel-art games to apply snapping to the resulting vertices, i.e.:
when y=0: the 0th, 2nd, and 4th rows are drawn
when y=1: the 0th, 2nd, and 4th rows are drawn
when y=2: the 0th, 2nd, and 4th rows are drawn and the image is moved down one pixel row
etc.

Update:

                float k = 1.0 / camera.zoom;
                float y_ = floorf((y + image->h/2) / k) * k;
                float x_ = floorf((x + image->w/2) / k) * k;
                GPU_Blit(image, NULL, screen, x_, y_);
                //GPU_Blit(image, NULL, screen, floorf(x + image->w/2), floorf(y + image->h/2));

I've tried disabling snap and rounding the values manually and this fixed the flickering. Should this be a separate snapping mode? For example, Oxygine has OXYGINE_NO_SUBPIXEL_RENDERING, Cocos also has a similar mode.
http://oxygine.org/doc/api/_renderer_8h_source.html

Differentiate naming of static and shared libraries

I'm building my project for 'Release' in VS2015, using the libs generated from building SDL_gpu_shared for 'Release'. With the latest SDL_gpu commit, building my project for 'Release' produces a lot of unresolved-external errors for OpenGL functions; I must link opengl32.lib for my project to build. This was not the case with the May 14th commit 7c3d8fa, where I did not need to link opengl32.lib in my project.
I'm not sure if this is desired behavior, but I wanted to let you know of the change.

GPU_BLEND_NORMAL problems with partial transparency

Not sure if this is a bug, a poor default, or intended behavior.

For reasons I don't entirely understand, blitting images with partial transparency and GPU_BLEND_NORMAL to an RGB target produces all-or-nothing transparency: the only pixels that make it to the target are those with an alpha close to 1.0; other pixels are just dropped. This is perhaps because the target doesn't have an alpha channel?

Calling GPU_SetBlendFunction( image, GPU_FUNC_SRC_ALPHA, GPU_FUNC_ONE_MINUS_SRC_ALPHA, GPU_FUNC_ONE, GPU_FUNC_ZERO ) yields better behavior (partial transparency)

Should GPU_BLEND_NORMAL set something more like those values?
If not, it'd be handy to have another preset for the RGBA->RGB "normal" case.

Compilation errors with VS2015

When I compile the latest SDL_gpu with Visual Studio 2015, I get multiple syntax and undeclared-identifier errors.

Here is the full build log (in French): logs.txt

Problems with size in destination sprites

Hello.
I'm trying to migrate my rendering system from the default SDL2 renderer to SDL_gpu, but I'm confused about how this works.
In my previous rendering system I was using two rectangles: a source rectangle to specify the portion of the texture, and a destination rectangle for the on-screen position and size (in pixels). The sizes of src and dst may differ.

e.g. a source rect (x, y, w, h) with values of (0, 0, 256, 128) and a dst rect with values of (0, 0, 128, 64)

Now I have two GPU_Rects, src and dst, but due to the OpenGL unit conversion this is not working: the sprites come out too big for the screen.
GPU_Ortho gives me strange results (and some images, like the background, need to fill the screen).
I tried GPU_BlitScale too, but I don't know how to find the stretch factors W and H (I'm pretty new to OpenGL).
So I'm wondering if there is a function similar to SDL_RenderCopy that accepts a destination rect to blit to.
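
For what it's worth, GPU_BlitRect looks like the closest analogue to SDL_RenderCopy here (a sketch based on its source/destination rect parameters; image and screen stand for the loaded GPU_Image and the GPU_Target):

/* Draw the (0,0,256,128) region of the texture into a 128x64 area at the
   top-left of the screen, SDL_RenderCopy-style. */
GPU_Rect src = { 0, 0, 256, 128 };
GPU_Rect dst = { 0, 0, 128,  64 };
GPU_BlitRect(image, &src, screen, &dst);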

GPU_BlitRectX Incorrect Flip

I believe the recent commit fixing the pivot point messed up the result of using GPU_FlipEnum in GPU_BlitRectX.
I think the code that adjusts for the flip in GPU_BlitRectX should be changed to something like this:

if(flip_direction & GPU_FLIP_HORIZONTAL)
{
    scale_x = -scale_x;
    dx += dw;
    pivot_x = w - pivot_x;
}
if(flip_direction & GPU_FLIP_VERTICAL)
{
    scale_y = -scale_y;
    dy += dh;
    pivot_y = h - pivot_y;
}

This behaves like using SDL_RendererFlip in SDL_RenderCopyEx.

Segmentation fault when accessing the windowID (Ubuntu)

I get a segmentation fault on Ubuntu 16.04 at this line (it works on Windows 7):
SDL_Window *win = SDL_GetWindowFromID(window->context->windowID);

Here is the code for the test :

#include <iostream>
#include "gl_core_3_2.h"

#include "SDL.h"
#include "SDL_gpu.h"

int main() {
    GPU_Target* window = GPU_InitRenderer(GPU_RENDERER_OPENGL_3, 800, 600, GPU_DEFAULT_INIT_FLAGS);

    if (window == NULL) {
        GPU_LogError("Initialization Error: Could not create a OpenGL 3.2 renderer.\n");
        return 0;
    }

    SDL_Window *win = SDL_GetWindowFromID(window->context->windowID);
    return 0;
}

Access GPU_Image/Texture Handle

I am trying to access the handle/ID of a texture through GPU_Image. My code looks like the following:

GPU_Image *textureImage = graphics.loadImage("content/particles/flare.bmp");
GLuint Texture = ((GPU_IMAGE_DATA*)textureImage->data)->handle;

I get build errors stating that GPU_IMAGE_DATA is an undeclared identifier and that textureImage expects a ')'. I am using Visual Studio 2015 with C++.

My questions are as follows:

  • What is the best way to access the handle/ID of an image?
  • What did I do wrong in my code?

Thanks so much for the help.

Cygwin support

I'm trying to build the library on Cygwin:

mkdir build-cygwin
cd build-cygwin
cmake -G "Unix Makefiles" -DCMAKE_INSTALL_PREFIX="$(PREFIX)" -DSDL_gpu_DEFAULT_BUILD_DEMOS=OFF ..
make

This fails with errors about the CALLBACK macro not being defined. It's defined in the bundled GLEW/gl.h only if both the _WIN32 and __CYGWIN__ macros are present, so I added this switch to the cmake command line:

 -DCMAKE_C_FLAGS="-D_WIN32"

The library now compiles, but linking step fails:

[  9%] Linking C shared library cygSDL2_gpu.dll
cd /cygdrive/c/Users/User/.../sdl-gpu/build-cygwin/src && /usr/bin/cmake.exe -E cmake_link_script CMakeFiles/SDL_gpu_shared.dir/link.txt --verbose=1
/usr/bin/cc  -D_WIN32   -shared -Wl,--enable-auto-import -o cygSDL2_gpu.dll -Wl,--out-implib,libSDL2_gpu.dll.a -Wl,--major-image-version,0,--minor-image-version,0 CMakeFiles/SDL_gpu_shared.dir/externals/glew/glew.c.o 
[skipped]
CMakeFiles/SDL_gpu_shared.dir/externals/stb_image_write/stb_image_write.c.o /cygdrive/c/Users/User/.../libroot/cygwin/lib/libSDL2.dll.a -Wl,-Bstatic -lm -Wl,-Bdynamic -lGL 
CMakeFiles/SDL_gpu_shared.dir/externals/glew/glew.c.o:glew.c:(.text+0x4ff): undefined reference to `__imp_wglGetProcAddress'
CMakeFiles/SDL_gpu_shared.dir/externals/glew/glew.c.o:glew.c:(.text+0x4ff): relocation truncated to fit: R_X86_64_PC32 against undefined symbol `__imp_wglGetProcAddress'
CMakeFiles/SDL_gpu_shared.dir/externals/glew/glew.c.o:glew.c:(.text+0x537): undefined reference to `__imp_wglGetProcAddress'
CMakeFiles/SDL_gpu_shared.dir/externals/glew/glew.c.o:glew.c:(.text+0x537): relocation truncated to fit: R_X86_64_PC32 against undefined symbol `__imp_wglGetProcAddress'

I've tried adding -lopengl32 to the linker flags, but this didn't change anything. Any pointers are appreciated! I'm trying to link against native OpenGL, not Cygwin's X11 OpenGL.

Multiple Window problems

SDL2 has excellent support for multiple windows, but SDL_GPU is a bit schizophrenic on the subject. On one hand, it appears to assume that only one window will exist:

  • Library initialization is conflated with window creation
  • After Init runs, GPU_GetInitWindow() is guaranteed to return a valid window ID, which will interfere with attempts to Init a second window.
  • There is no DestroyWindow API distinct from GPU_Quit

But on the other hand, we have GPU_MakeCurrent, which is only useful if you have more than one window.

Rendering to Framebuffers?

Hi, I'm trying to follow the tutorial over here https://open.gl/framebuffers to achieve some post processing in combination with sdl-gpu. I am just rendering to a framebuffer connected to a texture, and then trying to render the texture to a quad. So I call glBindFramebuffer to switch to the offscreen framebuffer.

I can't get my quad with the framebuffer texture to show up when I switch back to the on-screen framebuffer. I was just wondering if this is a limitation of sdl-gpu before I continue.

To provide more context, I am simply trying to render an entire scene, including shader output, at 320x240, and then scale it up 3x with nearest-neighbor sampling for a retro-style game. Maybe there is an easier way to accomplish this.
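
If the end goal is just the 320x240 scene scaled up 3x, a sketch of a way to stay entirely within SDL_gpu's own API (my suggestion, assuming GPU_LoadTarget provides the render-to-texture target):

/* Off-screen scene target at the low native resolution. */
GPU_Image* scene = GPU_CreateImage(320, 240, GPU_FORMAT_RGBA);
GPU_SetImageFilter(scene, GPU_FILTER_NEAREST);      /* crisp retro scaling */
GPU_Target* scene_target = GPU_LoadTarget(scene);

/* Each frame: draw the whole scene (including shader output) off-screen... */
GPU_Clear(scene_target);
/* ... blits and shader passes targeting scene_target go here ... */

/* ...then scale it up 3x onto the window (GPU_BlitScale positions by center). */
GPU_Clear(screen);
GPU_BlitScale(scene, NULL, screen, screen->w / 2.0f, screen->h / 2.0f, 3.0f, 3.0f);
GPU_Flip(screen);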

Rendering appears broken on ARM/Linux

I'm trying to get sdl_gpu working using this tutorial. However, I can't get an image to show up. I get a window, but the contents are black.

I tried doing GPU_ClearRGBA(window, 255, 255, 255, 255) to see if I could get anything to show up, and the window is green (I expect that it should be white).

I'm using SDL 2.0.5 that I built myself. This is on an Odroid XU4 running Ubuntu 16.04 (which supports OpenGL ES 3, but not desktop OpenGL).

Any idea what's wrong here? I'm not sure how to debug this.

Documentation for GPU_Flip

I am in the process of updating a project from SDL2 to SDL_gpu and I encountered some strange behavior: the contents of my GPU_Images are drawn to the window, even though they are supposed to be off-screen drawables.
After digging through the source code of SDL_gpu, I believe this is caused by the GPU_Flip function.

I used GPU_Flip to replace all occurrences of SDL_RenderPresent, as suggested in the "Conversion" section of your README. When I used SDL_RenderPresent on my off-screen drawables, they were not drawn to the window.

If GPU_Flip is intended to draw the content of a GPU_Image to the screen, please document that behavior so others won't make the same mistake I did.

Fixed Function Pipeline

Is it possible to use the fixed-function pipeline with SDL_gpu? Any simple examples of this?

Best way for a heavy shader use in SDL_Gpu?

I'm planning to make a game that uses multiple shaders. Currently I'm using GPU_Targets for this and it works pretty well, but I feel I'm using too many copies and blit calls (I'm copying from one GPU_Target to another with GPU_Blit), and this may keep growing. Is this the proper way? I see people using OpenGL FBOs for this kind of effect, but I don't know if SDL_gpu uses them under the hood.

Here is an example scene with a few shaders, made by combining multiple render-target copies:
(screenshot: example scene)

Any help would be appreciated.

VBOs unavailable when running over RDP

When running the app over RDP, sdl_gpu chooses OpenGL_1 renderer, but crashes on this line:

    #ifdef SDL_GPU_USE_BUFFER_PIPELINE
        // Create vertex array container and buffer

        glGenBuffers(2, cdata->blit_VBO); // SEGFAULT

glGenBuffers is NULL. Are there any fallbacks I can try enabling, or are VBOs required?

CMake issues demos/tools/tests

On my Arch Linux box the .so created is actually called SDL2_gpu, but the CMake config links the demos/tools/tests against SDL_gpu, which obviously isn't found:
[ 16%] Linking C shared library libSDL2_gpu.so
[ 18%] Linking C executable 3d-demo
/usr/bin/ld: cannot find -lSDL_gpu

Additionally, the path to the .so isn't added to the linker flags for the demo/tool/test makefiles, so the linker still can't find it unless you add -L${SDL_gpu_SOURCE_DIR}/src to the respective TEST_LIBS/DEMO_LIBS in their CMakeLists.txt.

Memory Leak in GPU_CreateRenderer_OpenGL_4

I'm getting a memory leak from this function GPU_CreateRenderer_OpenGL_4.
It looks like renderer->impl is not freed in any of the GPU_FreeRenderer functions.

Adding:

if (renderer->impl)
    SDL_free(renderer->impl);

to the renderer's FreeRenderer path solved the leak.

What else would be good for v0.12.0?

There are already a lot of good changes and new features in 0.11.0 that will be released as 0.12.0, but here are some more possibilities:

  • Direct VBO support
  • GLES 3 renderer
  • GL 4 renderer

SDL_GPU w/ SPARK Particle Engine

Hi,

I am trying to get my SDL_GPU implementation working with the SPARK particle engine demo code as follows:

Demo Code

I am essentially using the following paradigm:

Initialization of OpenGL/SDL_GPU/SPARK:

  • GPU_Init(); (from SDL_GPU)
  • loadTexture(); (from SPARK demo code) -> This uses SDL_LoadBMP();, glGenTextures(), glBindTexture(), glTexImage2D(), etc.
  • glClearColor(); (SPARK/OpenGL)
  • glMatrixMode(); (SPARK/OpenGL)
  • glLoadIdentity(); (SPARK/OpenGL)
  • SPK::GL::GLQuadRenderer::create();
  • SPK::SphericEmitter::create();
  • etc.

Render Loop:

  • GPU_Clear(); (SDL_GPU)
  • glClear(); (SPARK/OpenGL)
  • glMatrixMode(); (SPARK/OpenGL)
  • glTranslatef(); (SPARK/OpenGL)
  • glRotatef(); (SPARK/OpenGL)
  • glBegin(); (SPARK/OpenGL)
  • glVertex3f() x4 for quad (SPARK/OpenGL)
  • drawBoundingBox(); for particle system (SPARK/OpenGL)
  • renderParticles(); (Spark/OpenGL)
  • glEnd(); (SPARK/OpenGL)
  • GPU_Flip(); (SDL_GPU)

No matter what I try, I can never get anything (even a basic quad) to draw to the screen. I have tried the following:

  • Wrap render loop with GPU_FlushBlitBuffer(); and GPU_ResetRendererState();.
  • Use GPU_MatrixMode(), GPU_PushMatrix(), GPU_LoadIdentity(), and GPU_PopMatrix() in the render loop.
  • Use my own SDL_GL_SwapWindow() instead of GPU_Flip().
  • Dozens of other small tests that didn't work.

I have even tried to simplify the paradigm where I am only trying to just render a basic quad doing the following:

Initialization of OpenGL/SDL_GPU:

  • GPU_Init(); (from SDL_GPU)
  • glClearColor(); (SPARK/OpenGL)
  • glMatrixMode(); (SPARK/OpenGL)
  • glLoadIdentity(); (SPARK/OpenGL)

Render Loop:

  • GPU_Clear(); (SDL_GPU)
  • glClear(); (SPARK/OpenGL)
  • glMatrixMode(); (SPARK/OpenGL)
  • glBegin(); (SPARK/OpenGL)
  • glVertex3f() x4 for quad (SPARK/OpenGL)
  • glEnd(); (SPARK/OpenGL)
  • GPU_Flip(); (SDL_GPU)

The above paradigm should be one of the minimal ways to draw an OpenGL quad to the screen. However, I am not able to get the quad to render to the screen at all.

My questions are as follows:

  • Does SDL_GPU work with glBegin(), glEnd(), glVertex3f()?
  • Does SDL_GPU work with SDL_LoadBMP(), glGenTextures(), glBindTexture(), glTexImage2D(), etc. or do I need to convert this texture to a GPU_Image?
  • Should SDL_GPU be working with at least the basic paradigm above or is there an incompatibility that you can see?
  • I am using a replacement for gluPerspective() as seen here. Could this or an improper glTranslatef() or glRotatef() be causing nothing to be drawn to the screen?
  • Based on what you can see with SPARK, is there any way to get SDL_GPU to work with SPARK or is there something I am missing that makes them incompatible?
  • Do you have any ideas on why I can't get either of the paradigms above to draw anything to the screen?

I have successfully got SDL_GPU to work with everything so far (e.g. sprites, tilesets, shaders, etc.) and am very much looking forward to getting it working with a great particle engine such as SPARK.

Thanks so much for any help you can provide!

Does not compile in Visual Studio after July 6th commit

The commit on May 14th, 7c3d8fa, works.
The first commit on July 6th, 4837977, does not compile, along with none of the following commits.

I am experiencing this using Visual Studio 14 on a Windows 10 64-bit machine, building for Debug or Release for Win32.
I follow the same procedure for building the two different versions.
I configure with cmake. Set SDL2 lib, SDL2main lib, and SDL2 include locations. Configure again. Generate. Open SDL_gpu.sln. Build SDL_gpu project.
It then succeeds for the May commit but fails for the July commit.
The errors mostly consist of undefined-identifier errors for all of the GL_, GPU_, and SDL_ symbols.
Please let me know if I can be of more help.
