academysoftwarefoundation / openfx: OpenFX effects API. License: Other.
I notice that there is no LICENSE or COPYING file in the repository describing under which open source license this code is being distributed. This makes it unclear under what terms an open source developer can modify and redistribute the code, if that is permitted at all. Clarifying the license would be very helpful.
Thanks!!
Please read the contribution guidelines first. (Use the standard change tag and a feature/PROPOSAL-NAME branch.)
Can someone make a pull request against this for the OpenCL suite proposal? It's fine to mark it WIP so we understand it's for reference review.
We have been waiting a while for this, to be able to abstract the GPU API and decide what we do with Vulkan, Metal, etc. It should be OpenCL 3.0 compliant too now :)
Certain hosts (like those that support multi-GPU on Windows GTX/RTX cards) cannot support the OpenGL suite; another API is necessary to allow that.
The preliminary suite has been reviewed by 3 members.
A new header file would be added.
ALL
We should probably change the copyright in the header files now that OpenFX is officially an Academy Software Foundation project. I'm happy to do that (using a script). We are using the BSD 3-Clause License.
I assume we should use a modern SPDX-License-Identifier comment block; perhaps something like this:
// Copyright (c) Contributors to the OpenFX project.
// SPDX-License-Identifier: BSD-3-Clause
// https://github.com/AcademySoftwareFoundation/OpenFx
Is that sufficient or should we add the license text itself, a copyright year, or anything else? Is the copyright attribution correct? (I got it from OpenShadingLanguage and OpenColorIO's header is similar.)
By the way, I notice a few //-style comments in the headers, although we mostly avoid them. I presume we are OK with this now, since // comments were added to ANSI C in 1999. Let me know if you have any problem with that.
Fix some doc typos in ofxClipPreferences.rst
As a professional standard, doc should be free of typos.
Several typos exist on this page.
Doc only, no code change.
Docs should be rebuilt to test that this is done correctly.
All
Users often want to create a LUT from their image pipeline.
DaVinci Resolve achieves this by piping a LUT image through the node graph. They've defined an extension, OfxImageEffectPropNoSpatialAwareness, that allows the plugin to specify whether or not it wants to participate in the LUT generation process. This approach is smart, but I think there could be better logic.
It seems like there should be three options for communicating this to the host:
It might also be useful to provide helper code for the host for generating a LUT image in-memory to pipe through their image pipeline.
Users of hosts want the ability to generate LUTs. Hosts should have some built-in ability to do so. Plugins should be able to opt-in or opt-out.
Purely additive.
We'd need to add documentation, yes!
Plugins and Hosts!
/** @brief String used to label the OpenGL half float (16 bit floating point) sample format */
Should we make the CPU branch support kOfxBitDepthHalf in v1.4?
If so, this comment would need to be fixed.
There is now a VFX Platform 2023 version of the ci-base aswfdocker build container available:
It could be added to the CI build matrix:
openfx/.github/workflows/build.yml
Line 37 in 95a1179
Also, since OpenFX is now part of ASWF, are there required build prerequisites which would justify a specific ci-openfx build container, or could it just be rolled into ci-base, or do you currently have everything you need?
openfx-master-NatronGitHub\openfx-master\Examples
Hi everyone:
I built this in Visual Studio 2019 successfully. Basic.sln generates a DLL; however, I have no idea where or how this DLL gets called.
I noticed that there is a basicPlugin, but it isn't mentioned anywhere else.
Can anyone generously give me an example?
In https://openfx.readthedocs.io/en/master/Reference/suites/ofxMemorySuiteReference.html:
Use this suite for ordinary memory management functions, where you would normally use malloc/free or new/delete on ordinary objects.
It is not described why this suite should be used in the first place and why you cannot simply use malloc/free/new/delete as normal. Is it mandatory to use this suite for allocation? Can I not use standard allocators? Will I have poor performance? Will it crash on some hosts?
I can see that the support library in some cases uses std::string with the default allocator for some data, for example.
So what's the idea here? I'm very confused.
This feature is deprecated in C++17 and removed in C++20.
https://en.cppreference.com/w/cpp/language/except_spec
I have a patch to remove it, just want to inquire if this is a welcome change :)
Thanks.
It was mentioned that the typedefs in ofxDrawSuite.h cause spurious compiler warnings in some compilers and should be removed.
I wrote a plugin with OFX for HitFilm and use it as an effect on a layer. When I remove this plugin from the layer's effect stack, the memory I allocated is not released. Can anyone tell me how I can detect that the plugin was removed from the layer's effect stack? Or is there another way to solve this problem?
Is it really "user_data", or should it be something like kOfxPropUserData, which is not defined yet?
In ofxOpenGLRender.h, the text "OfxImageEffectSuiteV2::clipGetImage function, with a few minor changes" refers to OfxImageEffectSuiteV2, but there is no OfxImageEffectSuiteV2 (at least not yet).
A drawing utility suite to abstract away required use of OpenGL.
With OpenGL "going away" and new GPU and UI layers being used, we should provide an abstraction layer so plugins can draw on-screen controls or custom parameter UIs without requiring OpenGL.
Demarcation, that's the problem!
This suite is initially optional for both host and plugin use, and mostly embodied in a new header. If a host provides the suite, the plugin should make use of it, otherwise it is free to fall back to OpenGL. However, it should be considered as a required feature in future OFX versions, since OpenGL is slowly being deprecated.
One new POD type should be added to ofxCore.h.
New documentation will have to be integrated.
Plugin writers will benefit as they will no longer be required to use OpenGL for their interacts.
Hosts will benefit as they will not be required to support OpenGL at all.
The shipped examples fail to build because the linuxSymbols file is not present.
Will fix this as part of the DrawSuite comments PR
We need a spatial format model separate from project size. A PR (#61) was already done before this issue was filed.
We have an issue of spatial representation across different hosts that this would address.
Right now we have no way to set the output size of a generator other than the default project size, and the render region plus project size is not complete or that useful for many hosts.
Initial spatial-format proposal from http://openeffects.org/standard_changes/spatial-formats-are-broken-replace-with-new-properties-and-a-new-action, without the discussion thread.
Regions of definition are well-defined concepts within OFX; the 'format' of a clip is not. For example, a clip may be HD, 4K, SD, etc., while the RoD need not be congruent with the 'format' at all. Furthermore, effects have no way of changing the spatial format. For example, an effect that upscales images from SD to 4K would want to flag the change in format as well as the RoD; otherwise it will be interpreted as a zoom.
The 'format' nomenclature is taken from Nuke.
(Pierre: adding this thread from the Nuke dev forum as reference: http://community.thefoundry.co.uk/discussion/topic.aspx?f=191&t=104638&show=openfx%2cgenerator)
However, as is, it's incomplete, so I relabelled the PR as WIP. We have roughly three basic host "contexts" to address at once, and some hosts might not have the same power in different parts of their app in this regard. An "editing" app might enforce strong conforming per timeline/comp/sequence, and perhaps be limited to providing a distinction between timeline size and clip size. Some hosts might not be able to grow the image definition past, say, a blur, but should maintain the spatial reference; otherwise a point positioner could drift relative to the image. Some hosts might be a layer stack, with a distinction between layer size and comp size (the final crop). In any case, the render-region optimization is something that can change every frame, but for cross-compatibility we need a stable spatial format. Some exotic compositing systems might have no concept of project space at all: the spatial format is carried separately across the graph (with a rule for when two different formats are merged), and what we have as project size should really just be a default.
Initial Solution to refine
Currently we use the clip properties kOfxImageEffectPropProjectExtent, kOfxImageEffectPropProjectSize and kOfxImageEffectPropProjectOffset to represent the format. This should be simplified to:
kOfxImageEffectImageFormatResolution, a 2D integer indicating the resolution in pixels of the output image; this is always full-res with no render scale applied.
kOfxImageEffectImageFormatAspectRatio, the pixel aspect ratio as per normal.
These are present on all input clips.
We have a new action to compute the format of the output clip, kOfxImageEffectActionGetImageFormat, which would ask the effect to set the appropriate properties on the outArgs.
What changes to the docs are needed for this change? TBA
All
Currently there is no way to update a list of choices except by appending new ones to the end of the list. This is because OfxParamTypeChoice simply remembers which integer index was selected by the user.
For UX reasons, plugin developers sometimes need to provide alphabetically sorted (or otherwise curated) lists of choices for a user to pick through. Unfortunately OfxParamTypeChoice doesn't provide a solution to this particular problem.
The new OfxParamTypeStrChoice should have two N-string arrays:
kOfxParamPropChoiceEnum: a list of strings that could be some sort of non-user-facing internal identifier.
kOfxParamPropChoiceOption: a corresponding list of strings (same size as kOfxParamPropChoiceEnum) of "pretty" user-facing display names.
This allows the plugin developer to have constant implementation-detail enum identifier strings while having a pretty display name that they can change between releases for UI/UX reasons.
DaVinci Resolve already defines an extension:
/** @brief String to identify a param as a Single string-valued, 'one-of-many' parameter */
#define kOfxParamTypeStrChoice "OfxParamTypeStrChoice"
/** @brief Set an enumeration string in a choice parameter.
- Type - UTF8 C string X N
- Property Set - plugin parameter descriptor (read/write) and instance (read/write)
- Default - the property is empty with no options set.
This property contains the set of enumeration strings corresponding to the options that will be presented to a user from a choice parameter. See @ref ParametersChoice for more details.
*/
#define kOfxParamPropChoiceEnum "OfxParamPropChoiceEnum"
/** @brief Indicates if the host supports animation of string choice params.
- Type - int X 1
- Property Set - host descriptor (read only)
- Valid Values - 0 or 1
*/
#define kOfxParamHostPropSupportsStrChoiceAnimation "OfxParamHostPropSupportsStrChoiceAnimation"
Fills a UI gap for plugin developers.
Plugin developers need to update their plugins with the latest and greatest features, which can include adding new entries to a choice param. Currently they are stuck appending the new entry to the end of the choice param (and cannot remove an old entry at all).
Purely additive.
A new param type, probably OfxParamTypeStrChoice since it's already used by Resolve.
Everyone, I hope!
This proposal aims to add metadata read/write capability to plugins.
Some plugins want to be able to get the file path and frame number of the media that sourced the current clip and/or frame.
Other plugins may want to query information about the lens used, or the stack of previous effects applied. The ability to add metadata to the output may also be useful.
Currently there is no way to query anything about where the input images to an effect came from, such as the name or path of the source clip.
This is a new feature, embodied as a new set of properties. It does not affect any existing suites or properties.
This feature is entirely optional. A host that does not implement it will simply appear to have no metadata to an effect. There is no requirement for an effect to make use of it at all.
The documentation for this feature could be embodied entirely in the header file additions.
Plugin writers should benefit the most from this change.
We discussed for a while supporting interpolation types for parameters.
As we are also discussing effects interchange, it would be good to support a keyframing representation.
This excludes boolean, and perhaps menu, which we assume use constant interpolation (the spec should state this precisely so that one does not animate menus); maybe string too?
We don't have a model for curve functions. I put Smooth below; I presume we need two things: a form of cubic type (much like in glTF), and some form of monotonic curve representation for time-based parameters (e.g. in retiming you don't want the curve to make you go backwards, which means going from A to B should never go outside the min and max). From experience, most applications use a form of Bezier for the Smooth type. Other types can be defined, but apps are not expected to support them. It's been suggested that an app that does not support "smooth" interpolation could perhaps fall back to a hybrid: linear interpolation to position keyframes, plus a baking of per-frame values.
Add kOfxParamPropInterpType.
The suite would likely need to be bumped to V2?
Currently the CI builds package the includes, libs, and examples into a zip file manually. We could use conan create for this, or better yet set up CMake's packaging to package the build into a releasable form, and then call that from CI.
This will be an important component of our release process.
Add a new plugin property that hosts can set to suggest that effects should premultiply their output.
The standard currently has premult properties for input clips; however there is no way for the host to communicate to a generator plugin, or a plugin taking only an opaque or RGB input, whether it should premultiply its output or not.
This effect property allows an effect generating alpha to know whether to output premultiplied or straight alpha. If the input is ImageOpaque, or the effect's context is Generator, it can't use its input(s) to know this.
The value is only advisory; the effect may or may not check this property and may in any case create output clips of any premultiplication state.
Effects using this property should still tag their outputs with the correct premultiplication state.
The property is set on the image effect's effect instance.
This property, when set by the host, should be set during the following calls:
kOfxImageEffectActionBeginSequenceRender
kOfxImageEffectActionRender
kOfxImageEffectActionEndSequenceRender
kOfxImagePreMultiplied
kOfxImageUnPreMultiplied
see pull request #62: Desired Premult
Standard version: 1.5 (should be set as the 1.5 milestone by @garyo)
Subcommittee
Dennis Adams, Phil Barrett (their GitHub user accounts, if any, should be set as assignees by @garyo)
There is little to no information about the colour space (tone curve and RGB chromaticities) in which image sample values are to be interpreted.
Historically there has been some assumption that the tone curve is probably linear, with a nominal black point at 0.0 and nominal white point at 1.0, but this is not specified and is often not the case.
Depending on the host, unmodified images presented to plugins may be linear, or "video" (typically Rec.1886, or a simple gamma curve), or "log" (e.g. Cineon log, or a modern log curve such as ARRI LogC), or "HDR" (e.g. Rec.2020) or any number of other curves. The primaries may be Rec.1886, or a larger gamut such as Rec.2020 or ARRI WideGamut. Plugins have no way of knowing what the numbers in the image represent. Similarly hosts have no way of knowing in which colour space(s) (if any) a plugin would like its inputs, nor the colour space of its output image.
Some plugins won't care what space the images are in. Some however need to know in order to make sense of the data. A lens flare, for example, needs to behave very differently (a) when the input data is linear with highlights well above 1.0 (b) when the input data is log-like with highlights preserved but much lower in the range, (c) when the input data is in a display-referred colour space where highlights have been rolled off to fit under 1.0.
It may be that some plugins and/or hosts prefer to work with non-RGB primaries, e.g. X'Y'Z'. A mechanism to specify colour spaces could support this.
At first glance Pierre suggests:
Define strings to identify well-defined tone curves, and well-defined sets of primary chromaticities. This is necessarily vague, since without defining tone curves using formulae we can't offer general support. Chromaticities could be identified precisely as three (x,y) coordinates, but that might be too much?
Add kOfxImageClipPropToneCurve and kOfxImageClipPropPrimaries properties to clip property sets to identify the colour space of the clip, if known
Add kOfxImagePropToneCurve and kOfxImagePropPrimaries properties to image property sets to identify the colour space of the data, if known
Hosts set these properties where they know the colour space of input clips
Define a protocol so plugins can request that input clips are converted by the host into a specific colour space
Define a protocol so plugins can identify the colour space of their output image (during setup time, before rendering)
Great proposal, well thought-out.
Long ago at an OFXA meeting someone (Gary @ GenArts?) brought up adding a color space property for images and my first comment was "no way it could work because there are so many and no host and plug-in could understand all of them" so the request got refined to "we'd at least like to know the primaries and if the tone curve is log-like (log or gamma with neutral gray near-ish to 0.5) or linear (neutral gray around 0.18)" which is really quite reasonable. Today I'd even expand "linear-like" to be SDR (mostly within 0.0 to 1.0) or HDR (can go much above 1.0 for HDR). You could get more specific (Gamma value, Cineon, S-Log, PQ, REDgamma4) using a secondary property for those host/plug-ins that really know. Note that hosts that let the user apply arbitrary LUTs really don't know what color space they are processing plug-ins in. That covers the tone curve side of things.

Then add a "primaries" property, which could be "rec709", "rec2020", "dcip3", "acesap1", "sgamut3cine", "DRAGONcolor", etc. Again, perhaps more than most hosts and plug-ins could agree upon. Would they be better as CIE xy coordinates or something much simpler (a kind of S,M,L,XL range like "SD", "HD", "UHD", "ACES")? What exactly would plug-ins be doing with the primaries indication?

We probably also need to think about color space for color-based parameters (which most hosts expose as color pickers): traditionally they have been sRGB, but maybe they need some kind of color space handling as well (or at least be understood to be in the same color space as the images?).
In addition to a property indicating what color space images are in, we’d like a way for a plug-in to indicate that it can or can’t handle HDR (not sure if this should be opt-in or opt-out). Then the host can do range reduction (typically by converting to a log curve) to bring things into SDR range. For example, we have an HSL Adjust plug-in that hates HDR. Another example is a Lift/Gamma/Gain color corrector – they are meant to be run on images with a log-like tone curve. If all we had was an image property saying these were HDR then the plug-in could apply a log curve before and inversely after processing, but the host really knows more about what to do than a plug-in. A specific example: if the host is doing ACES it would convert to ACESproxy.
In any case, Sony is interested in this extension.
Some more notes: in general here we only care to have an approximation of gamma, for example linear, sRGB/gamma 2.2, or log. So we would care a bit, but not that much.
The exception is our colour matching tools, which would benefit a lot if the host could automatically bring two clips into the same basis (for example matching an H.264 clip to S-Log).
Color parameters would benefit from an additional hint so the values are in the right space (also thinking about hosts that display a colour picker).
Color space properties could perhaps be a parameter populated by the host, a bit like the Nuke spatial-format parameter in the Spatial Format discussion. This way a host can populate it with what it supports, instead of plugins having to understand what it means.
I would think that maybe we need a suite with abstracted conversion methods, e.g. gamma 1.0 (linear) to some ColorFormat (for example Rec.2020, selected in a colour space property menu). This way an effect can ask for linear inputs and pass its result through a host-supplied conversion, so everything stays in the same colour space on the host side and the plugin does not need to know about every possible colour format; it just calls toColorFormat.
Regarding the Color discussion item just before this meeting (and I'll stay out of this one, as we have FilmLight, Assimilate, Blackmagic and Sony on this list, and the last two even make cameras): as a reminder, this started with Phil asking "my host is natively log; in what colour space do I give you images?"
Also, thanks to Alexandre, thinking about metadata: the latest from the parallel OFX metadata discussion is that clip preferences properties are basically clip-associated (or clip-indexed), time-invariant metadata. Clip preferences properties happen to be the clip metadata that are defined (given a name) in the API. So this becomes a question of how to associate additional colour metadata with a clip, with the particularity that this implies a way to request that clips be converted to a specific colour space. That is a special case (like the distinction we have between clips and parameters) that does not fit the metadata suite model, which I think means it needs to be its own suite, where the first purpose of the Color suite is to pass back the wanted colour space so it affects clipGetImage.
So a colour-managed image/clip could have an associated "ColorSpace metadata collection" (metadata added to the base clip properties). By "collection" I mean it can be described by a set of values without callbacks. Once you see the world like that, there could be only one clip property in the clip preferences: ColorManaged or RAW (where RAW means ColorManaged = false). The clip preferences action would then be the place to set the active ColorSpace metadata collection (even if it is not a clip preferences property per se). And unlike other types of metadata, I think only the host can populate the supported ColorSpace metadata collections, as it would probably be a bad idea for hosts to depend on one effect to convert images for another effect.
Something Gary reminded us of earlier: this has to work with clipLoadTexture, not just clipGetImage. And Pierre (me) adds: clipGetImagePlane in the multi-plane suite should also follow this, on a sub-image basis (e.g. reflection.r, reflection.g, reflection.b is of type colour, so it should follow the same logic, while a depth image plane would not, or if you like, would be tagged as gamma 1.0).
In Phil's proposal (http://openeffects.org/standard_changes/colour-space-handling ) :
Define a protocol so plugins can request that input clips are converted by the host into a specific colour space
Define a protocol so plugins can identify the colour space of their output image (during setup time, before rendering)
Now, the word "colourspace" is ambiguous to start with: colour space as in LAB, YUV, XYZ, RGB; colour space as in "colour profile" (sRGB, Adobe RGB, Rec.2020); or colour space as in log versus linear (and by extension linear with gamma, perhaps even with level handles for black and white points).
So before we go too deep here the obvious:
There is not much point in adding a CMYK colour space. Reductio ad absurdum: if I just told the host "give me all the colour spaces you support as options and I will let the user decide", it is not likely to be very productive.
Most of the time an effect will not care whether an image is in gamma 2.2, sRGB, Rec.709, etc. You have to be a pretty specialized plugin to worry about these distinctions, but one might, so it's good to support them (yes, there can be effects that need to understand colour gamut).
Being in the gamma 2.2 ballpark versus linear (darker) versus log (flat), however, starts to matter to a lot more effects. It matters more to effects than to users, in the sense that the user might be looking at the log file through an sRGB viewer and so does not even see the image as flat. The presence of white and black points mostly matters when the consequence is remapping, e.g. 64-941 to 0-1023, behind my back; at that point it has become a conversion-to-log-like effect.
It follows that we are not discussing colour correction here, but a time-invariant property set that affects the clip pixels we will receive. Yes?
In that sense we sort of have variations of lift/gamma/gain (e.g. CDL, or black point / white point / gamma)... and we typically have device-dependent curves with strongly defined chromaticities/white points, and maybe LUTs. I might have the breakdown wrong, but let me try. The goal of the list is to define what type of colour space is needed for what, not to say there is the equivalent of four supported ColorSpace metadata collections.
1. Do we need to start this discussion with: is linear the reference space of OFX? Is linear gamma = 1.0 (what ColorManaged = false means), with black point at 0.0 and white point at 1.0? Isn't it fair to say that most effects, if they care at all, only care about gamma space? So the first collection to support may be gamma space, and it has one value: gamma. Not lift/gamma/gain, just GammaSpace. In this mode the effect is ignorant of black and white points (64 and 941 on a 10-bit basis in the example above); if I select this mode, I don't expect the host to stretch 64-941 to 0-1023. Similarly, if I say ColorManaged = false, I mean it: if I draw a vertical greyscale ramp of 0 to 1023 in an image that is 1024 pixels high, the next effect will get 0-1023 (as opposed to getting a colour transform behind my back because the host thinks the image is in some gamma 2.2 colour space and assumed that, because I said ColorManaged = false, I must be linear and need to be converted).
2. This does not address two basic things: hosts that are natively log (e.g. FilmLight) and would want to support that, and, to give another example, Sony Vegas, where one can work in 32-bit video levels or 0 to 1.0. I am not sure these two things can be combined in a Cineon/DPX-like model (except perhaps log to log, log to gamma space, gamma space to log, and gamma space to gamma space?). This adds black and white point support to the colour image metadata. It could also have more descriptors (so be like lift/gamma/gain or CDL: the same operators in a different order, hence not commutative?). It's not my job to untangle this, but obviously we are still in territory where all of it can be described by a small set of parameters. This does not have to be one ColorSpace metadata collection; I am just defining an example category of colour space. Note that I am throwing ASC CDL and log into the same sentence (a reference colour correction versus a technical descriptor of the image). In the regular metadata-suite scenario there could be a CDL entry, but it would not ask the host to modify how images get to the effect; it is just metadata.
3. Then there is a whole space of nuances (what I personally call colour profiles). Now we are in the domain of specifics like Adobe RGB, sRGB, Rec.601, Rec.709, etc. Assuming all of this is RGB(A), shouldn't this be a mode with an associated option menu? Unless you are writing the NTSC colour-safe plugin sort of thing, it's not usually needed; we have reached a level of specialized usage. This already becomes complicated: does a plugin need to collect a list from the host and filter out the ones it knows?
4. And then we can have an even denser level. This might include not working in RGB(A), where the chromaticities, tonal curves / "roll-off" and so on become critical. This is above my pay grade :) -- ACES etc.
Additional Notes:
Unlike other "metadata" types, because there is an implied conversion in getImage, I can't imagine anyone other than the host populating this. So this tells us that it maybe does not fit the metadata suite; it is its own suite.
Under that hypothesis, such suite would do the following work:
- At description time indicate (hand-shake) support for ColorSpace suite
- Pre-Render associate a particular ColorSpace collection by name to a clip (default clip property is RAW==NULL or hosts populate a Project preference?). There is only one clip property, color managed or not and there is ColorSuite passing back ClipHandle with the ColorSpace MetaData Collection set?
- As for implementation, would an host populate the Effects Controls somehow with an option menu for input clips I said are color manageable? Or it the effect responsible for that and then once we introduced color profiles and actual color spaces does this assume the effect can consult a "dictionary" of host supported ColorSpace/Profile Image MetaData Collections? So then a need to walk through all ColorSpace available.
Does effect need to create parameters and slave them to a ColorSpace MetaData set?
- If I create a color parameter and host provides a color picker as a bonus do I always get RGB values internally from my color parameter and are they transformed by host or we need the pixel level equivalent getPixel instead of getImage sort of thing? That is what value do I get if my input is linear and my output is sRGB.
*: A sidecar question, as I never used that feature: is the Parametric Parameter a place to store LUTs?
*: We are just discussing image colors here, but there might be all kinds of metadata related to color. For example, someone might figure out a nice way to forward me the exposure entry from EXIF data. But then it's not an image conversion/transform, it's just something like a parameter (often something you could capture and write as a parameter in the effect). So it does not fit here.
Thanks for the comments! Unfortunately it looks like I won't be able to join the discussion this evening. But I'd like to respond to some of the comments that have been made so far.
Pierre says "colourspace is ambiguous". I tried to be unambiguous; I shall try again. By "colour space" I specifically mean
a defined tone curve (explicitly or implicitly as two formulae to convert to and from linear)
and ideally three defined primary chromaticities (x,y)
and possibly a defined white point (x,y), e.g. D65
The tone curve definition encodes the black and white points by defining the mapping to/from linear 0.0 and 1.0. This includes legal-ranging if that is in effect, by squashing and offsetting.
The tone curve definition is independent of the bit depth of integer image formats, and expresses values scaled so that the maximum code value (255, 1023, 65535 etc) is treated as 1.0.
I specifically did not use the word "gamma" since that is just one way of defining one class of tone curves, and is often inaccurately used.
As an example of a complete definition, here is a Baselight colour space which has the ST 2084 (a.k.a. Dolby PQ) tone curve, Rec.2020 primaries and a D65 white point. We define it as
Tone curve formulae (ST 2084):
float convertToLinear(float val)
{
float f = max(pow(abs(val), 1.0/78.84375), 0.8359375);
return sign(val)*pow((f-0.8359375)/(18.8515625-18.6875*f),6.277394636);
}
float convertFromLinear(float val)
{
float f = pow(abs(val), 1.0/6.277394636);
return sign(val)*pow((18.8515625*f + 0.8359375)/(1.0 + 18.6875*f), 78.84375);
}
Primaries (Rec.2020): (0.708,0.292), (0.170,0.797), (0.131,0.046)
White point (D65): (0.3127,0.3290)
I am not suggesting that OFX need necessarily go this far, and I am certainly not suggesting that we define OFX constants for a large number of supported tone curves, but this should give an indication of the depth of information that is available from some hosts.
This proposal does not include any mention of colour correction descriptions (e.g. CDL, lift/gamma/gain etc), and I would strongly argue that it should not. The proposal is just about defining the colour space of image data passing through OFX plugins.
"Do we need to start this discussion by [asking] Is Linear the reference space of OFX?" - yes I think that would be a great starting question. Baselight assumes this and converts images to linear for OFX, but I would far rather this was made explicit, and/or plugins could have a way to request this behaviour.
"Host native LOG (e.g. FilmLight)" - Baselight is not "natively log". Baselight has a generalised colour space handling mechanism which means that images being processed can be in any defined colour space. That's why we have a deep understanding of this domain and why I have raised this proposal.
Defining the colour space of colour parameters is definitely an issue, and indeed it's something we're only now starting to address in our applications.
Defining the handling of the alpha channel in an RGBA image would be good. I suggest that the only sensible path is to say it is always linear with 0=transparent and 1=opaque.
Yes this needs to apply to OpenGL textures (and any future extensions to OpenCL, CUDA etc as well) as well as to CPU images.
I'm no expert on this, but there are very well-defined ways to specify color space now, thanks to OpenColorIO profiles (for example). It seems to me we could piggyback on a standard open profile spec. Input clips would be tagged with their color profile, and the host would specify the desired output profile. Yes, this is very detailed, but anything less isn't really useful IMHO.
I'm agnostic about whether the spec should provide negotiation between host and plugin for color space. On one hand, if the plugin is colorspace-aware, it can (and should) just convert from/to the host's declared space. On the other hand, if the host is already converting from its format to OFX format, converting colorspace at the same time is more efficient. If the hosts convert, plugins are easier to write ;-) But performance (minimizing conversions) is important.
Here's notes from last meeting -
-> Some “simple” color space enumeration clip properties could be immediately useful (Log-like [e.g., S-Log, ACEScc], Gamma-like [e.g., Rec.709, Rec.2020], linear 0-1 [needed?], linear HDR [e.g., ACES]).
-> Some “detailed” color space clip properties could be useful down the road (named gamut [e.g., “Rec.709”, “Rec.2020”, “ACEScc”], named curve [e.g., “Rec.709 gamma”, “HLG”, “PQ”]). Perhaps even primaries coordinates, with an optional white point. Hard to pass curve definition math, though.
-> It would be useful for a plug-in to indicate which simple color spaces it can work in, with the host converting if possible (e.g., a Lift/Gamma/Gain color corrector prefers a Log-like or Gamma-like curve and cannot work in linear). Alternatively, this could be indicated in getClipImage like some other clip properties are. In either case, it is optional and the host does not have to do this conversion.
-> Can a plug-in change the color space of an output clip? Many hosts would not be able to deal with this; can we table for now?
(Pierre and Fabien add): there is also the topic of the color picker, and the relation of color parameters to it.
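As a sketch of the "plug-in indicates which simple color spaces it can work in" idea from the notes above: none of the identifiers below exist in the OFX API; they are hypothetical stand-ins for the simple enumeration being discussed.

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical simple color space identifiers -- NOT part of the OFX API. */
#define kColorSpaceLogLike   "LogLike"    /* e.g. S-Log, ACEScc */
#define kColorSpaceGammaLike "GammaLike"  /* e.g. Rec.709, Rec.2020 */
#define kColorSpaceLinearHDR "LinearHDR"  /* e.g. ACES */

/* A Lift/Gamma/Gain style effect prefers log-like, falls back to gamma-like,
   and cannot work in linear. Returns NULL when nothing usable is offered;
   per the notes, the host is never obliged to convert. */
const char *pickWorkingSpace(const char **offered, int n)
{
    static const char *const prefs[] = { kColorSpaceLogLike, kColorSpaceGammaLike };
    for (size_t p = 0; p < sizeof prefs / sizeof prefs[0]; ++p)
        for (int i = 0; i < n; ++i)
            if (strcmp(offered[i], prefs[p]) == 0)
                return prefs[p];
    return NULL;
}
```

The point of the sketch is only the negotiation shape: an ordered preference list on the plug-in side, matched against whatever the host enumerates.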
Still discussed on https://groups.google.com/forum/#!topic/ofxa-members/Gye7kLeHdhY
Solution unknown
There is a typo in ofxParam.h: the doc comments below all say "value for the ::kOfxParamDoubleTypeAngle property", but kOfxParamDoubleTypeAngle is itself one of the values, not the property:
/** @brief value for the ::kOfxParamDoubleTypeAngle property, indicating the parameter is to be interpreted as an angle. See \ref ParameterPropertiesDoubleTypes. */
#define kOfxParamDoubleTypeAngle "OfxParamDoubleTypeAngle"
/** @brief value for the ::kOfxParamDoubleTypeAngle property, indicating the parameter is to be interpreted as a time. See \ref ParameterPropertiesDoubleTypes. */
#define kOfxParamDoubleTypeTime "OfxParamDoubleTypeTime"
/** @brief value for the ::kOfxParamDoubleTypeAngle property, indicating the parameter is to be interpreted as an absolute time from the start of the effect. See \ref ParameterPropertiesDoubleTypes. */
#define kOfxParamDoubleTypeAbsoluteTime "OfxParamDoubleTypeAbsoluteTime"
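Since these three constants are values of the kOfxParamPropDoubleType property, presumably that is the property all three comments were meant to reference; the corrected comments would read something like:

```c
/** @brief value for the ::kOfxParamPropDoubleType property, indicating the parameter is to be interpreted as an angle. See \ref ParameterPropertiesDoubleTypes. */
#define kOfxParamDoubleTypeAngle "OfxParamDoubleTypeAngle"

/** @brief value for the ::kOfxParamPropDoubleType property, indicating the parameter is to be interpreted as a time. See \ref ParameterPropertiesDoubleTypes. */
#define kOfxParamDoubleTypeTime "OfxParamDoubleTypeTime"

/** @brief value for the ::kOfxParamPropDoubleType property, indicating the parameter is to be interpreted as an absolute time from the start of the effect. See \ref ParameterPropertiesDoubleTypes. */
#define kOfxParamDoubleTypeAbsoluteTime "OfxParamDoubleTypeAbsoluteTime"
```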
http://openeffects.org/standard_changes/modifying-plugin-properties-after-description
states that a plugin instance can change the Enable state in Instance Changed with the following precaution advised for plugins that do so:
"
When setting to true we also do this (assuming it's not set):
// we support all the OpenGL bit depths
gPropHost->propSetString(effectProps, kOfxOpenGLPropPixelDepth, 0, kOfxBitDepthByte);
gPropHost->propSetString(effectProps, kOfxOpenGLPropPixelDepth, 1, kOfxBitDepthShort);
gPropHost->propSetString(effectProps, kOfxOpenGLPropPixelDepth, 2, kOfxBitDepthFloat);
gPropHost->propSetString(effectProps, kOfxOpenGLPropPixelDepth, 3, kOfxBitDepthHalf );
"
How should I compile the OpenFX example plugin for DaVinci Resolve?
I'm compiling the following source code:
https://github.com/ofxa/openfx/tree/master/Documentation/sources/Guide/Code
I'm using Developer Command Prompt for VS2015
and the INCLUDE=<repo_path>\include
environment variable. It compiles, but it is not loaded by DaVinci Resolve. I already tried with other plugin binaries and they work (I didn't compile them; I downloaded them).
I want to develop my own plugin, so I need to learn how to compile a simple plugin for a video editing application like DaVinci Resolve.
Am I missing something?
Standard version: 1.4.1 (Should be set as 1.4.1 milestone by @garyo)
Subcommittee
Pierre Jasmin @revisionfx (Their Github user account (if any) should be set to assignees by @garyo)
Todo by @revisionfx
typedef struct OfxTimeLineSuiteV2 {
OfxStatus (*getProjectTime)(void *instance, double EffectTime, double *ProjectTime);
// converts local effect time to what the host displays in its user interface; could be a large number if the host project starts at 12:00:00:00
OfxStatus (*getEffectTrimPoints)(void *instance, double *InPoint, double *OutPoint);
// for example, in an NLE this refers to the in and out point handles of the video track on which the effect is applied; this is in the effect's local time. This is different than the frame range and 0 to Duration.
OfxStatus (*gotoEffectTime)(void *instance, double *time); // this is in the effect's local time; if one asks to go to time -5000, it might not be defined
// because this is not widely supported, it is an example of wanting to check whether the function pointer is NULL as a means of testing support
} OfxTimeLineSuiteV2;
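Since the last comment suggests probing support by NULL-checking the function pointer, a minimal plugin-side sketch might look like this (the suite is the hypothetical proposal above, re-declared here with stand-in types for self-containment):

```c
#include <stddef.h>

typedef int OfxStatus;  /* stand-in for the real typedef in ofxCore.h */

/* The proposed (not-yet-standard) V2 suite from above. */
typedef struct OfxTimeLineSuiteV2 {
    OfxStatus (*getProjectTime)(void *instance, double effectTime, double *projectTime);
    OfxStatus (*getEffectTrimPoints)(void *instance, double *inPoint, double *outPoint);
    OfxStatus (*gotoEffectTime)(void *instance, double *time);
} OfxTimeLineSuiteV2;

/* A plugin treats a missing suite or a NULL entry as "not supported". */
int hostSupportsGotoEffectTime(const OfxTimeLineSuiteV2 *suite)
{
    return suite != NULL && suite->gotoEffectTime != NULL;
}
```

The same NULL-check pattern would apply to any other optional entry point added to a versioned suite.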
For simplification, assume we have an effect time which is normalized to 0 to Duration. That effect lives in a project timeline (Project here is whatever embeds the effect; in one host it could be called a comp, for example). And that effect has source clips.
For context: the effect space is also normalized in terms of frame time (+1 is the next frame in effect space, and +0.5 is the next field when kOfxImageEffectPropFrameStep applies). We have a specifier for that, kOfxImageEffectPropFrameStep (read-only for effects), and a parallel Frame Range mechanism in the API as well.
getEffectTrimPoints (returns, for example, the in and out points in an NLE, distinct from the 0 to Duration valid frames): if the user changes those, it should produce an instance-changed prop reason. Of course, actions like a razor cut, or what happens in a pre-comp or nested sequence, can reset what the effect's first frame is.
getProjectTime: theoretically the double-typed OFX time parameter is supposed to allow us to let the user see this - I would like to display time (in frames at least) in an info non-animated slider or something, so the user sees time in the same unit as the host displays current time. One would pass effect time and get project time:
getProjectTimelineStartTimeInFrames -- provides an offset in frames (our effect-local time is normalized so that 0 is the first frame), so maybe this is implicitly mapped via the effect-to-timeline conversion and NOT NEEDED.
Question 1: Then what is getTimeBounds... is there a use case for a global (to the effect) time bound (e.g. needing to know the project time bounds)?
In other words: is getEffectTrimPoints == getTimeBounds?
Question 2: If we have getProjectTime, I don't think we need getTime, as we can pass the effect's current time.
This will ensure that the headers, libs, and examples all build properly with and without GPU support.
Background:
Silhouette (and probably other hosts) starts "time" at frame 0, but has a setting to display time with a frame number offset.
kOfxImageEffectPropFrameRange isn't enough to convey this to plugins, so if a plugin has a custom browser that can display frame time, it doesn't know what this time offset should be.
My proposal is to add a new clip property: kOfxImageClipPropDisplayOffset
Type: double X 1
Property Set - an image clip instance (read only)
Normally this would be zero, but on hosts that support it, it would be the human-readable frame offset to be added to time values so frame numbers will match the host's user interface.
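A sketch of how a plug-in's custom frame browser would use the proposed property (the property name is the one proposed above, not yet in the API; fetching its value through the property suite is elided and the offset is passed in directly):

```c
/* Proposed, not-yet-standard property name from the proposal above. */
#define kOfxImageClipPropDisplayOffset "OfxImageClipPropDisplayOffset"

/* A plugin's custom browser labels its internal frame time with the host's
   numbering by adding the clip's display offset (0.0 when unsupported,
   which leaves the numbers unchanged). */
double displayFrameNumber(double internalFrame, double displayOffset)
{
    return internalFrame + displayOffset;
}
```

For a Silhouette-style host whose project starts at frame 0 but displays frame 1001, the offset would be 1001.0.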
Please ask questions ("how do I...?") on the forum, not by filing issues.
Read the contribution guidelines first.
Hi,
Thanks for the help in advance!
I am aware that there is a "fetchClip()" method that grabs a clip. Is there a similar function that fetches each frame of the clip, one at a time, so the frames can be processed sequentially within a clip?
Thanks a lot and have a great day!
Please read the contribution guidelines first.
standard change
tagfeature/PROPOSAL-NAME
branch)
We don't explicitly define the API version as a string anywhere. We do have GetAPIVersion, but it's not clear it's being properly updated or referred to.
Although the API version should not be a condition for compliance, it is good practice, when adding features to the API, to mark the version in which they were added, with context.
This adds an explicit version string. The suggestion is to add "1.5" in ofxCore and a top-level text file with the sub-version, e.g. 1.5.5, for each API update. The sub-version would be used to update the wiki entry in parallel with the issue/PR number. This way a developer can see what was added since they last downloaded master and understand at a glance what matters to them.
Provides a way to source the API version in a non-ambiguous manner.
When you google it, the version displayed would be automatically accurate.
All
For oldOfx.h, over time...
Perhaps we should do ofxOld_v1.3.h and ofxOld_v1.2.h,
with an ofxOld.h that just includes them.
ofxOld_v1.2.h would be just the YUV stuff,
documented as safer to ignore (comment out) when building against 1.4, but not 1.3, because of old hosts and plug-ins.
Hello,
I don't understand something about OpenFX.
What's the difference between:
http://en.wikipedia.org/wiki/OpenFX / http://www.openfx.org/
And:
http://openfx.sourceforge.net/ / http://natron.inria.fr/561/
While the first says it only supports Win32, the others seem to say they support all OSes.
Thank You.
Please read the contribution guidelines first.
standard change
tagfeature/PROPOSAL-NAME
branch)
These properties allow a host to pass information about the OCIO config, display settings and clip colourspace to a plug-in.
Various hosts use OCIO to manage colour. This is useful in two scenarios:
New optional feature. If the properties are not set by the host, plug-ins can apply sensible defaults as they do now.
The properties require simple documentation and an example would be beneficial
Plug-ins are the main beneficiary as they will work more predictably in a host that supports OCIO.
Please read the contribution guidelines first.
standard change
tagfeature/PROPOSAL-NAME
branch (branch is optional but keeps things...)
What problem does this proposed change solve? Be specific, concise, and clear.
Is this a new feature (no compatibility impact), a change to an existing function suite (version
the suite to avoid compatibility issues), a change to an existing property, or a documentation
change?
How will hosts and plugins negotiate use of this change? Show how it works when a host implements
it but not plugin and vice versa.
Who will benefit from this proposed change? Plug-ins, hosts, or both? Specific types of hosts?
Please read the contribution guidelines first.
standard change
tagfeature/PROPOSAL-NAME
branch)
The spec currently specifies that plugins should be installed and found in two subfolders for MacOS:
ARCHITECTURE is the specific operating system architecture the plug-in was built for, these are currently…
MacOS - for Apple Macintosh OS X 32 bit and/or universal binaries
MacOS-x86-64 - for Apple Macintosh OS X, specifically on intel x86 CPUs running AMD’s 64 bit extensions. 64 bit host applications should check this first, and if it doesn’t exist or is empty, fall back to “MacOS” looking for a universal binary.
This is now out of date, since Macs have shipped with "arm64" CPUs for several years now. This text was written when MacOS was moving from PowerPC to Intel CPUs, a long time ago. "Universal binaries" now commonly include 64-bit Intel and 64-bit Arm binaries, not PowerPC.
This standard change proposes to deprecate the x86-64 folder, which few if any host applications even search anymore. All MacOS plugins should be in MacOS, and preferably should be universal binaries (whatever that means at any given point in time). There will not be a specific arm64 folder.
This is acceptable since MacOS supports multi-architecture binaries, so per-architecture subdirectories are not needed for a host application to find the proper-architecture plugin. This proposal would prohibit a plugin from shipping separate Intel and arm64 binaries (since the plugin .ofx file must always be named to match the top-level bundle name), but it should not be difficult for any plugin to merge multiple architectures into a single binary on MacOS.
What about Windows and Linux? They do not support universal binaries. Each .ofx file is for one architecture, so a host must check the proper subdir. Windows and Linux are already starting to ship arm64-based OS versions; however, no native arm64 OpenFX hosts are shipping on those OSes yet. To support this, we propose to add the following text to the documentation:
Future Thoughts:
When Windows and/or Linux support alternative processor architectures such as arm64, hosts should look in appropriately-named subdirs for the proper .ofx plugin file. On Windows with arm64, Win-arm64 should be used (vs. the current Win32 and Win64, which are Intel-specific). On Linux, hosts should look in the subdir named by Linux-${uname -m}, which for arm64 should be Linux-aarch64. Using uname -m rather than a hard-coded list allows for any future architectures.
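As an illustration, a host could implement the Linux-${uname -m} rule without shelling out, via the POSIX uname() call (the function name here is illustrative, not from the spec):

```c
#include <stdio.h>
#include <sys/utsname.h>   /* POSIX uname() */

/* Build the per-architecture plugin subdirectory name: "Linux-" followed by
   the machine field of uname(), e.g. "Linux-x86_64" or "Linux-aarch64". */
int linuxArchSubdir(char *buf, size_t bufsize)
{
    struct utsname u;
    if (uname(&u) != 0)
        return -1;                          /* buf is left untouched on failure */
    snprintf(buf, bufsize, "Linux-%s", u.machine);
    return 0;
}
```

Because the architecture string comes from the running kernel, the same code keeps working for any future architecture without a hard-coded list.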
The current MacOS file location MacOS-x86_64 is out of date since the introduction of Arm-based ("Apple Silicon") Macs.
Proposes a back-compatible change to the spec to allow for current architectures, and prepare for future ones.
No host or plugin impact, unless a plugin is installing in MacOS-x86_64. Any such plugin should migrate to using MacOS and shipping all relevant binaries in a single universal binary.
If a plugin expects to be found in the wrong folder, the host will not find it and the plugin will not show up. A host could indicate in its log file that it did not load plugins found in an unexpected folder such as MacOS-x86_64.
What changes to the docs are needed for this change? The change is all in the doc (the spec).
All stakeholders shipping plugins on MacOS, and anyone who wants to ship on alternative architectures in the future, will benefit from this change.
The plugin I developed can't be detected by HitFilm Pro 13. I'm sure the compiled .ofx file is in the proper place and in the proper bundle format. Can anyone tell me how to solve this problem?
https://github.com/AcademySoftwareFoundation/openfx/tree/main/include
still has the old logo files;
they need to be swapped for the new ones.
Perhaps the example makefiles should use these as well.
Using the build.sh script, or building with conan and cmake, fails on Windows on the second build due to a bug in Conan. This is now fixed for the next v2 conan release; see conan-io/conan#15215
This issue is to track that bugfix and when it is released, update the CI builds and docs to reflect the need for that conan version.
We should publish our conan recipe to conancenter so users can use it easily in a conan/cmake project.
With an OverlayInteract instance in a plugin, the overridden OFX::OverlayInteract::draw method will not be called after loading a saved project in DaVinci Resolve, while it will be called in a newly created project.
I had the issue with my own plugin, but as a reference I decided to check the provided basic sample in the C++ Support folder, and the behavior was the same there.
Support/Plugins/Basic
Please read the contribution guidelines first.
standard change
tagfeature/PROPOSAL-NAME
branch
What problem does this proposed change solve? Be specific, concise, and clear.
Who will benefit from this proposed change? Plug-ins, hosts, or both? Specific types of hosts?
I'm trying to go through the documentation to start learning OpenFX. Unfortunately, the directory structure of the sources seems to have been changed without a corresponding update to the documentation. In particular, running make in openfx/Documentation/sources/Guide/Code fails because of the include search paths. (They're an extra two parent directories up.)
Additionally, on this page: https://openfx.readthedocs.io/en/doc/Guide/ofxExample1_Basics.html#basicexample there is a link on the word 'there' in the following text:
An example plugin will be used to illustrate how all the machinery works, and its source can be found in the C++ file there.
That link points to https://github.com/ofxa/openfx/blob/master/Guide/Code/Example1/basics.cpp, which is a 404. I believe it should point to https://github.com/ofxa/openfx/blob/master/Documentation/sources/Guide/Code/Example1/basics.cpp. I think there are similarly incorrect links on other documentation pages.
If we change the type for:
Do we need to rename it to something like #define kOfxParamPropInteractSizeD "OfxParamPropInteractSizeD"
to be safe, and move kOfxParamPropInteractSize to ofxOld?
Please read the contribution guidelines first.
standard change
tagfeature/PROPOSAL-NAME
branch)
This proposal adds a new C function OfxSetHost(const OfxHost *host) to the plugin API; if the plugin exports that symbol, a conforming host should call it as the very first call, before OfxGetNumberOfPlugins() and OfxGetPlugin(n). That way the plugin can examine the passed-in host struct, which the host should fully fill in, and make decisions about which plugins it wants to show on that host.
The standard way to not expose a plugin to a certain host is to avoid returning it in OfxGetNumberOfPlugins() and then OfxGetPlugin(n). However, at that point, early in the startup of the plugin, it can't reliably know what the host is. In some cases plugins have resorted to looking at argv[0] or other hacks.
Plugins have no reliable way during startup to know what host is calling them. If there's an effect within the plugin (or even all of them) that doesn't work on a particular host, the plugin can't avoid exposing that effect to the user. There's no other way later on to hide plugins either; the accepted method is to not enumerate the hidden ones during OfxGetNumberOfPlugins() and then OfxGetPlugin(n) -- which means the plugin has to know what host it's on before those calls.
If a plugin doesn't implement this call, the host should not try to call it; it has to check for the symbol being defined as an extern C-linkage symbol, like OfxGetNumberOfPlugins. Unlike the latter, though, which is always expected to be present, hosts need to check for OfxSetHost.
If a host doesn't implement it, the plugin won't be able to use this info to select per-host behavior. A plugin can tell by the fact that OfxSetHost has not been called by the time the host calls OfxGetNumberOfPlugins.
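A plugin-side sketch of the mechanism is below. The OfxHost struct is reduced to a single illustrative field, and "com.example.badhost" is a made-up host identifier; a real plugin would look up the host name via the property set in the full OfxHost from ofxCore.h.

```c
#include <stddef.h>
#include <string.h>

typedef int OfxStatus;               /* stand-ins for the real ofxCore.h types */
#define kOfxStatOK 0
typedef struct OfxHost { const char *hostLabel; } OfxHost;  /* simplified */

/* Host-owned pointer: valid from OfxSetHost() through the last OfxGetPlugin();
   the plugin must not free it or cache it beyond that. */
static const OfxHost *gHost = NULL;

OfxStatus OfxSetHost(const OfxHost *host)
{
    gHost = host;
    return kOfxStatOK;               /* kOfxStatFailed would abort loading */
}

int OfxGetNumberOfPlugins(void)
{
    /* gHost == NULL means the host never called OfxSetHost (it pre-dates
       the proposal), so we cannot filter and expose everything. */
    if (gHost && gHost->hostLabel
        && strcmp(gHost->hostLabel, "com.example.badhost") == 0)
        return 1;                    /* hide the effect known to fail there */
    return 2;
}
```

The host-side counterpart is simply a symbol lookup (e.g. dlsym/GetProcAddress) before enumeration: call OfxSetHost only if the symbol exists.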
Both plugins and hosts will benefit from this: plugins can avoid exposing effects that don't work on a host, and hosts don't have to field bug reports about extraneous or incorrect effects from plugins.
This has been implemented in #53 by OFX member @revisionfx in [which plugins?].
It has been tested and approved by member Paul Miller of Silhouette.
Peter Loveday of member Blackmagic Design also approves and added support in Fusion.
There was discussion on the OFX mailing list on what the name should be; OfxSetHost was agreed.
There was discussion about the lifetime of the passed-in host pointer; it was agreed that the host owns it (so the plugin should not free it) and that it should remain valid at least through the last call to OfxGetPlugin(), but the pointer should not be cached by the plugin beyond that.
There was discussion about the return value; it was agreed that if the plugin returns kOfxStatFailed then the host should stop loading this plugin entirely.
There was discussion about an alternative idea, OfxGetNumberOfPluginsV2(OfxHost *host), but that was rejected in favor of this proposal.
Please read the contribution guidelines first.
standard change
tagfeature/PROPOSAL-NAME
branch)
The latest version as of today is 1.4 (released in 2015, 7 years ago). Maybe it's time to release and bump the version number.
There's a fork of the library by the Natron project (https://github.com/NatronGitHub/openfx) which is 1201 commits ahead of this original branch (not sure if those commits are specific to Natron). Looking at the commits and the discussion in the issues, there clearly is a lot of work done and ongoing. It would be a good idea to consolidate those changes and release a new version.
In the absence of continuous releases, clients aren't able to make use of the hard work added by the contributors. Also, the project appears dormant, and clients keep facing issues with little to no help.
Is this a new feature (no compatibility impact), a change to an existing function suite (version
the suite to avoid compatibility issues), a change to an existing property, or a documentation
change?
How will hosts and plugins negotiate use of this change? Show how it works when a host implements
it but not plugin and vice versa.
What changes to the docs are needed for this change?
Who will benefit from this proposed change? Plug-ins, hosts, or both? Specific types of hosts?