khronosgroup / anari-docs — ANARI Documentation
License: Other
The primitive sampler samples an `Array1D` at unnormalized integer
coordinates based on the `primitiveId` surface attribute, shifted by
non-negative `offset`.
.Parameters accepted by all `primitive` samplers.
[cols="<,<,<2",options="header,unbreakable"]
|===================================================================================================
| Name | Type | Description
| array | `ARRAY1D` | backing array of the sampler
|===================================================================================================
The `array` size must be at most <<Devices, device limit `geometryMaxIndex`>>.
For me, it was not clear that this is basically a geometry-independent per-primitive attribute.
What is meant by unnormalized integer coordinates?
The `offset` parameter is missing from the table. The type should be `ARRAY1D` of `FLOAT32` / `FLOAT32_VEC2` / `FLOAT32_VEC3` / `FLOAT32_VEC4`.
After discussing how the colormap sampler idea could be implemented entirely by `image1D` samplers (where color + opacity are in a single image array), and given that there are no control points in the `transferFunction1D` volume, I think it makes sense to combine the color + opacity arrays into a single `FLOAT32_VEC4` array. This simplifies applications by letting them use a single color map representation to color map both surfaces and volumes.
Now that we have multiple implementations (with each utilizing different underlying graphics/compute APIs), I think there's a strong case for strided arrays to be their own extension -- I think they are overly generic right now, with more downsides than benefits. There is an obvious use case for them and they ought to exist, but I want to make the case that doing so in an explicit extension makes their existence and usage much more clear to both application developers and implementors.
Shared arrays exist for exactly one purpose: reducing memory overhead by avoiding copies. This comes up when dealing with arrays in the following two contexts:
Both of these use cases to avoid copies are simultaneously real and niche. They are real in the sense that OSPRay has been used to render volumes that are simply too large to fit into system memory twice, and applications that can drive more than one rendering engine at the same time are quite common (ex: Blender, ParaView, etc.).
The first context is very niche in the sense that the overwhelming majority of 3D applications (and therefore end users) do not render volumes or surfaces which are that large. In fact, no mainstream engine that isn't a purpose-built CPU renderer can deal with spatial fields that are >50% of system memory. If you are the rare user that is faced with rendering such a scene, you must use a rendering engine capable of doing it at all -- this is a very non-portable problem that has very intentional requirements for the device to be used to solve it.
The reality is that as more implementations have come online, it further cements the notion that implementations either a) hard ignore this feature by throwing an error, or b) will surprisingly copy the data anyway, which is exactly the opposite of what reaching for a shared array is intended to solve.
As a side note -- I think "convenience" is a non-factor in having strided arrays in the base array APIs, given that the alternative is essentially a single std::transform():
T *inPtr = (T *)getPointerToAppArray();
U *outPtr = (U *)anariMapArray(device, array);
std::transform(inPtr, inPtr + size, outPtr, [](const T &inValue) {
  return inValue.elementOfInterest;
});
anariUnmapArray(device, array);
This actually makes copies more obvious and makes reaching for strided arrays an intentional act: sparse arrays are always worse except when copying the array is, in fact, not possible given the data + HW. On top of that, this then empowers the app developer to make a choice on whether this is a multi-device sharing scenario (i.e. we want to avoid privatized copies in each live device), or whether it's a single device where we benefit from using device-allocated memory (through a managed array or direct array parameter). All this together points a user toward the generally good advice that dense data is always better than sparse data, so when you reach for a sparse data array it's because you have no other choice and thus need reliable behavior by the implementation to actually use the array in-place.
The solution I propose is to add the extensions KHR_SPARSE_ARRAY1D/2D/3D and remove strides from the current array functions. The new extensions then enable the following functions:
ANARISparseArray1D anariNewSparseArray1D(ANARIDevice,
const void *appMemory,
ANARIDataType elementType,
uint64_t numElements,
uint64_t byteStride);
ANARISparseArray2D anariNewSparseArray2D(ANARIDevice,
const void *appMemory,
ANARIDataType elementType,
uint64_t numElements1,
uint64_t numElements2,
uint64_t byteStride1,
uint64_t byteStride2);
ANARISparseArray3D anariNewSparseArray3D(ANARIDevice,
const void *appMemory,
ANARIDataType elementType,
uint64_t numElements1,
uint64_t numElements2,
uint64_t numElements3,
uint64_t byteStride1,
uint64_t byteStride2,
uint64_t byteStride3);
When the extension isn't present, these functions would generate an error and return the invalid handle.
Note that these functions remove the deleter callback as this function is to be used in very precarious circumstances that are only made more complicated when handling the "ownership transfer" variant of shared arrays.
Furthermore, I think it's worth discussing using unique array handles (subtypes of the generic ANARIArray1/2/3D) so other extensions can talk about where sparse arrays are permitted -- it's also overly generic to just say "you can sparse all the things", when that's not practically true (even in OSPRay).
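For context, when a sparse/strided array cannot be used in place, the densification an application (or implementation) must perform is a simple strided gather. A minimal sketch, where the interleaved `Vertex` struct and the `gatherStrided` helper are hypothetical names for illustration:

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical interleaved app-side vertex; only `position` would be
 * referenced by a sparse array with byteStride == sizeof(Vertex). */
typedef struct {
  float position[3];
  float normal[3];
} Vertex;

/* Gather a strided member into a dense buffer -- this copy is exactly
 * what sparse arrays exist to avoid when the data + HW permit it. */
static void gatherStrided(const void *base, size_t byteStride, size_t count,
    size_t elemSize, void *out)
{
  const char *src = (const char *)base;
  char *dst = (char *)out;
  for (size_t i = 0; i < count; i++)
    memcpy(dst + i * elemSize, src + i * byteStride, elemSize);
}
```

A device lacking the extension would either reject the sparse array or fall back to such a gather internally, which is the surprising copy described above.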
[cols="<,<,<,<2",options="header,unbreakable"]
|===================================================================================================
| Name | Type | Default | Description
| inAttribute | `STRING` | `attribute0` | surface attribute used as texture coordinate
| outTransform | `FLOAT32_MAT4` | ((1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)) | transform applied to coordinates before sampling
|===================================================================================================
The sampler accepts any attribute, not just texture coordinates. This confused me quite a bit, especially since you actually cannot use it as texture coordinates in another sampler.
This proposal is to add an API call to expose the complete list of extensions implemented by a back-end device. This proposal would add the following to the spec:
const char **anariGetDeviceExtensions(ANARIDevice d)
with a description of what the call is expected to do. Also, the proposal would add roughly two lines of code to the SDK as shown below:
extern "C" const char **anariGetDeviceExtensions(ANARIDevice d)
{
return deviceRef(d).getDeviceExtensions();
}
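Assuming the returned list is NULL-terminated (as string lists elsewhere in these proposals are), client code could scan it with a small helper. The `hasExtension` helper and the `mockExtensions` list below are illustrative stand-ins, not part of the proposal:

```c
#include <string.h>

/* Scan a NULL-terminated list such as the one anariGetDeviceExtensions()
 * is proposed to return. */
static int hasExtension(const char **extensions, const char *name)
{
  for (; extensions && *extensions; extensions++) {
    if (strcmp(*extensions, name) == 0)
      return 1;
  }
  return 0;
}

/* Mock list standing in for anariGetDeviceExtensions(device). */
static const char *mockExtensions[] = {
    "ANARI_KHR_GEOMETRY_TRIANGLE", "ANARI_KHR_CAMERA_PERSPECTIVE", NULL};
```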
I noticed some typos and missing information in the ANARI 1.0 provisional specification document:

- The ANARIDataType ANARI_SPATIAL_FIELD should be included in the list of allowed data types for anariGetObjectSubtypes.
- It is not clear how anariGetObjectParameters is meant to be called for objects which have no subtype (e.g. Group, Instance, World, Surface). (Passing NULL as the objectSubtype is presumably the intention, but it's not explained in the text.)
- There is no way to know what element type an ANARI_ARRAY{1,2,3}D is expected to hold based on anariGetObjectParameter without reading the source code or documentation for the library you're using. Perhaps a future version of the spec could allow for this, for example, something like ANARI_ARRAY1D_OF(ANARI_LIGHT).

Edit: Found a couple others:
- ANARI_FLOAT32_MAT4x4 is used (e.g. Section 6.2.1 inTransform and outTransform) but this type isn't defined in Section 3.5 Table 1 (FLOAT32_MAT4 is, but not MAT4x4).
- It is unclear what ANARI_FLOAT32_MAT3x4 is supposed to be. The current description is "three by four 32 bit float matrix in column major order" but a better description might be "three rows by four columns 32 bit float matrix in column major order." I realized that my confusion was because the description says it's "column major order" (in terms of the data) but the dimensions are still listed in row major order.

Let me start with a simple example: say we are creating visualization software for city traffic and we want to render the city and all the cars in it. Each car is composed of a chassis and four wheels. The chassis is in the local coordinates of the car; the wheels are represented by one model, instanced four times relative to the car's local coordinate system. And we want to instantiate the car 1,000 times in 10 colors, so 10,000 instances in total.
In ANARI, we would create two geometries: one for the chassis, another for the wheel. This is perfect and without issues.
Then, we would create 10 surfaces, assign ten different colors to them, and set their geometries to the chassis. For the wheels, we would create one surface, assign the wheel as geometry and a black material. I do not fully like such lightweight objects as surfaces that just associate geometry with material, but let's see later if there are any alternatives. Anyway, so far we are OK.
Now, to instantiate one car, we need to create an Anari::Instance, which carries a single transformation. We create it and assign the car transformation. We cannot associate it with the chassis surface directly; we have to create a new Group and an Array of Surfaces and set the array's single element to the chassis surface. So, to instantiate the chassis, we had to allocate four objects: Instance, Group, Array and Surface. Luckily, the Surface will be shared among the 1,000 cars, so in our example it is only three; having unique textures for each car would result in four objects.
But what about the wheels? They cannot be placed below the chassis instance. For this, we would need some kind of hierarchy of instances, or multi-level instancing as you call it in #2. Currently, we would need to create an Anari::Instance for each wheel and assign it the composition of the car and wheel transformations. I guess your team is already aware of the problem.
However, multi-level instancing would, unfortunately, not save the day. If all four wheels had a constant transformation relative to the car's local coordinate system, it would work. But if we would like to change the front wheel transformation when turning left or right, slightly rotating the front wheels to follow the path of the car, it would not work: some of the 1,000 cars would be turning left, some right, and some going straight, all having different transformations of their wheels.
OpenGL Performer (the famous real-time rendering library from Silicon Graphics, now a little historical) tackled the problem by introducing shared instancing and cloned instancing. With shared instancing, all rear wheel transformations can be shared among all cars, as these wheels are fixed (let's not make the wheels spin with the speed of the car, for simplicity). But all front wheels need to use cloned instancing so that each car has its own transformation for the front wheels, allowing them to turn left and right.
Now, how do we apply this principle to ANARI? (This is not about saying that the current ANARI design is not good. Rather, it is an opportunity to give arguments for why the current design is the best one, while at the same time looking for other, possibly even better, ways that nobody thought about before.)
Another thought was: why have all of Instance+Group+Array+Surface just to instantiate one Geometry? I asked myself whether an instance of a Geometry is not just Geometry+Material+Transformation. Why not have a single object holding all three components? Something named, possibly, Drawable, Instance or Geode (GEOmetry noDE), containing references to Geometry+Material+Transformation. You might argue that we need to instantiate Volumes and Lights also, but the Instance class might be made to handle all three mentioned types. You might also argue that the Instance+Group+Array+Surface solution provides the ability to instantiate many Surfaces, Volumes and Lights using just one Instance. But this lacks the flexibility of instancing our four wheels in one instance of a car. It also requires all Geometries to share the same local coordinate system; basically, they must fit, and all they can differ in is material. If we would like this new Instance (Geometry+Material+Transformation) solution to handle more Geometries with different Materials, we might allow it to reference an Array of Geometries and Materials, or even better, an array of Geometry+Material+Transformation. Optionally, we might call it MultiInstance. Surely, we can analyze many alternatives and look for the best fit. Or we can analyze why the current ANARI design is the best one.
I propose we change the signature of anariMapFrame from
const void* anariMapFrame(ANARIDevice device, ANARIFrame frame, const char* channel);
to
const void* anariMapFrame(ANARIDevice device, ANARIFrame frame, const char* channel,
uint32_t *size, ANARIDataType *type);
which would return the channel dimensions (uint32_t[2]) and the data type of the returned pointer by writing to the size and type pointers if they are not NULL.
This way the returned data can be correctly used and its layout verified without external knowledge of what parameters were set on the frame object. This particularly comes up in tooling like tracing instrumentation which would otherwise have to externally track the parameters. This gets additionally complicated when extensions add more channels whose associated enabling parameters need to be intrinsically known to the tooling.
This is to my knowledge the only part of the "core" API requiring knowledge of object parameters to write defined C(++) code. By comparison the array API doesn't require inspection of object parameters since the dimensions and types are passed as function arguments during array creation.
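To sketch the contract, here is a toy stand-in for the proposed signature (the `mapFrameSketch` stub, its fixed 4x4 buffer, and the enum value are invented for illustration; a real device returns its actual framebuffer and layout):

```c
#include <stddef.h>
#include <stdint.h>

typedef int ANARIDataType;   /* stand-in for the real typedef */
#define ANARI_FLOAT32_VEC4 1 /* stand-in value, not the real enum */

static float g_pixels[4 * 4 * 4]; /* toy 4x4 RGBA32F framebuffer */

/* Toy stand-in for the proposed anariMapFrame() behavior: writes the
 * channel dimensions and element type only when the pointers are non-NULL,
 * then returns the mapped channel memory. */
static const void *mapFrameSketch(
    const char *channel, uint32_t *size, ANARIDataType *type)
{
  (void)channel;
  if (size) {
    size[0] = 4;
    size[1] = 4;
  }
  if (type)
    *type = ANARI_FLOAT32_VEC4;
  return g_pixels;
}
```

With this shape, tooling can compute the mapped buffer's extent and element layout purely from the call's out-parameters, with no knowledge of frame parameters.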
When going through chapter 3 of the spec, I came across the following issues/suggestions:
Looking forward to a high-quality ANARI API release!
When a scalar volume value is associated with multiple materials/boundaries a 1D transfer function is unable to render them in isolation. This arises a lot in medical datasets, like the one below, and when trying to isolate different layers in an ICF capsule dataset. There are other limitations to 1D transfer functions that can be found with a quick Google search so I won’t list them here.
APIs like VTK (see VTK Volume Property) currently support 2D transfer functions and knowing what renderers support 2D TFs will be important for volume rendering these types of datasets.
EXAMPLE: Volume rendering the head dataset (Data/headsq/quarter) with VTK using OpenGL (vtkOpenGLGPUVolumeRayCastMapper) with a 2D transfer function (linear, gradient) vs. VTK using ANARI (vtkAnariVolumeMapper) with a 1D transfer function (linear).
Rasterization algorithms require the choice of near and far planes for their projections. These can be derived from bounds or set heuristically but there are situations in which one wants them to be explicitly set.
This extension would add "nearPlane" and "farPlane" parameters of type FLOAT32, with no default value, to camera objects. In their absence the device is still expected to derive these values.
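As a sketch of the fallback behavior, a device might derive the planes from a bounding sphere of the scene. The `deriveClipPlanes` heuristic below is illustrative, not mandated by the proposal:

```c
/* Derive near/far planes from the camera's distance to the center of a
 * bounding sphere of radius `radius` -- a simple fallback heuristic a
 * device could use when "nearPlane"/"farPlane" are not set. */
static void deriveClipPlanes(
    float camDist, float radius, float *nearPlane, float *farPlane)
{
  float n = camDist - radius;
  if (n < 1e-3f)
    n = 1e-3f; /* keep the near plane positive for the projection */
  *nearPlane = n;
  *farPlane = camDist + radius;
}
```

Explicit "nearPlane"/"farPlane" parameters would simply override whatever heuristic the device uses.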
Analogous to the parameter info query
const void* anariGetParameterInfo(ANARILibrary library, const char* deviceSubtype,
const char* objectSubtype, ANARIDataType objectType,
const char* parameterName, ANARIDataType parameterType,
const char* infoName, ANARIDataType infoType);
should there also be an object info query function?
const void* anariGetObjectInfo(ANARILibrary library, const char* deviceSubtype,
const char* objectSubtype, ANARIDataType objectType,
const char* infoName, ANARIDataType infoType);
The main purpose would be to expose a "description" field for now. Looking forward, this could serve as a tool to provide information for renderer selection.
See #67
This attempts to address some issues with library based queries.
The design of the library based query API predates the current feature definition and feature query mechanism. Our current concept of features means it should be sufficient to query the features from the library to make decisions about device suitability. Most of the queries for object types, object info, parameters and parameter info can safely be device based. This even encourages correct usage of the feature concept since object and parameter existence should not be used as a feature selection mechanism anymore.
This moves most of these queries to be based off a device handle instead of a pair of library and device name. To maintain the ability to query features before instantiating a device, the function
const char **anariGetDeviceFeatures(ANARILibrary, const char *deviceSubtype)
is added to allow non-device based queries of the feature list.
Framebuffer channels are identified by strings but are neither parameters nor object types. Which means currently there is no way to enumerate them.
I suggest we expand the table "Info of objects for introspection." by an entry:
Name | Type | Required | Description
---|---|---|---
channel | STRING_LIST | for FRAME | list of supported channel names*
*may need to be enabled via a parameter to be mappable.
There should be no need to query the types of the channels, since anariMapFrame already returns that. In case a channel supports multiple types, that is exposed through the associated parameter query.
Open questions:
Need to update the transform types from FLOAT32_MAT3x4 to FLOAT32_MAT4x3 to correctly reflect how they are being stored in the ANARI SDK.
Also, the spec should explicitly state that ANARI uses a row-major layout for matrices, which is different from APIs like OpenGL that use a column-major layout. That way it's clear that matrices used with OpenGL must be transposed when they are used in ANARI.
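The required conversion is a plain transpose. A minimal sketch for a 4x4 matrix stored as float[16]:

```c
/* Convert between column-major (OpenGL) and row-major (ANARI) storage of
 * a 4x4 matrix; transposition is its own inverse, so the same function
 * converts in either direction. */
static void transpose4x4(const float in[16], float out[16])
{
  for (int r = 0; r < 4; r++)
    for (int c = 0; c < 4; c++)
      out[r * 4 + c] = in[c * 4 + r];
}
```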
Currently the SpatialField `filter` parameter only supports `nearest` and `linear` filtering. This issue is to add `cubic` to the list of filtering values.
The change would look like:
Name | Type | Default | Description
---|---|---|---
filter | STRING | linear | filter used for reconstructing the field; possible values: nearest, linear, cubic
Feature queries are not a new concept or API; they just use introspection or properties (which are explained in the previous sections).
The anariNewArray*() functions currently take void * pointers, but really should take const void *. This was encountered in the SDK C++ bindings, which will just do a const_cast<>() for now. However, this is indeed a spec defect.
This issue serves as a discussion point on whether ANARI aims to support high-throughput rendering and, if yes, whether we need additional APIs or specification text. One example of high-throughput rendering is real-time rendering on GPUs: to achieve it, multiple frames are in flight at any given time, i.e., multiple renderFrame calls with the same Frame object, and double/triple buffering of the framebuffer and associated scene data (a pipeline) is needed.
Some string parameters in the API are effectively enums with only a finite set of meaningful (legal?) values.
The filter mode on the sampler objects, for example, only allows "nearest" and "linear". The list of defined parameter info strings allows querying the default value but not the other options.
I propose the list should be extended to include
| values | `STRING_LIST` | No | Zero terminated list of accepted values
The addition of the required ANARI_STRING_LIST type is also discussed in #21 (comment).
We said that a transformation only applies to vectors, not to scalars / lengths (this needs to be stated explicitly in the spec):

- `height` of the orthographic camera is indeed in world units, regardless of a transform
- `radius` of a point light with KHR_AREA_LIGHTS is likewise not affected
- `direction` of a camera or `edge1` of a quad light are transformed
- `radius` of a sphere geometry will be transformed (even non-uniformly, resulting in an ellipsoid)

In reviewing #3, I believe a bug in the text was found surrounding commits. Specifically, I believe the statement "Calling anariCommit while an object is participating in a rendering operation is undefined behavior" (seen here) is incorrect. Commits only express that a group of parameters ought now to be used by the object in the next frame: there isn't a hard consequence of a commit that limits the application in any way. Implementations which are limited by needing to defer the "action of a commit" have made a private implementation choice/limitation; this is not something the application should be concerned about.
While not common, it has come up in practice that direct array parameters may want to allow writing sparse data arrays internal to the 3D rendering engine. This specifically came up in the Cycles implementation where VEC3 data is 16-byte aligned -- thus a staging buffer is required for the current formulation of direct parameter arrays.
The remedy is quite simple -- have a stride out-parameter that implementations use to calculate the offset into the memory for each element. This would look like:
ANARIGeometry geom = anariNewGeometry(device, "triangle");
size_t outStride = 0;
char *vertices = (char *)anariMapParameterArray1D(
    device, geom, "vertex.position", ANARI_FLOAT32_VEC3, numVertices, &outStride);
for (int i = 0; i < numVertices; i++) {
  float3 *out = (float3 *)(vertices + i * outStride);
  *out = inVertices[i];
}
anariUnmapParameterArray(device, geom, "vertex.position");
// ...
Some applications can benefit from using a subset of an already allocated ANARI array to minimize internal state change overhead. This proposal seeks to add an extension that uses parameters to control the active sub-region of array elements to be used.
Currently array size is only controllable at array object construction -- in order to change the size of the array data used by an object, an entirely new array object must be constructed. This all but guarantees that new array allocation will occur in most implementations, especially implementations that offload to memory spaces other than the host system's memory (i.e. GPUs). Being able to reuse an existing allocation can be advantageous.
The proposal is to add the ANARI_KHR_ARRAY1D_REGION core extension, which adds the following parameter to ANARIArray1D:
Name | Type | Default | Description
---|---|---|---
region | UINT64_BOX1 | [0, *capacity*) | first (inclusive) and last (exclusive) elements to be used by parent objects
Array capacity is established on construction -- this is defined as the maximum number of elements the array can contain for the lifetime of the array object. The region bounds are then clamped to the range [0, capacity], and warnings should be emitted by the debug layer if region.upper is larger than capacity. It is also undefined behavior for region.lower >= region.upper, meaning implementations should be able to rely on 1) the array containing at least one element in each dimension, and 2) region.lower < region.upper.
UINT64 box types also need to be added to the spec if this proposal is accepted into the standard.
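The clamping rules above could be sketched as follows, where `UInt64Box1` and `clampRegion` are illustrative stand-ins for the proposed UINT64_BOX1 type and an implementation's validation step:

```c
#include <stdint.h>

/* Stand-in for the proposed UINT64_BOX1 type. */
typedef struct {
  uint64_t lower, upper;
} UInt64Box1;

/* Clamp a requested region to [0, capacity] per the proposed rules.
 * Returns 1 if the clamped region is non-empty (lower < upper); a debug
 * layer would warn when upper exceeds capacity, and an empty region is
 * undefined behavior per the proposal text. */
static int clampRegion(UInt64Box1 *r, uint64_t capacity)
{
  if (r->upper > capacity)
    r->upper = capacity;
  if (r->lower > capacity)
    r->lower = capacity;
  return r->lower < r->upper;
}
```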
Changes:

- updated the begin/end parameters to instead use a single region parameter
- renamed ANARI_KHR_ARRAY_DYNAMIC_REGION to ANARI_KHR_ARRAY1D_REGION
The PR adding ANARI_KHR_ARRAY1D_REGION missed the WG consensus decision (on 5/4) to use UINT64_REGION types instead of UINT64_BOX types.
VisRTX currently implements a colormap sampler subtype which has proven to be useful in scivis applications (specifically VTK-m). It essentially applies the same value-to-color mapping operation that transferFunction1D applies to volume data, but on geometry attributes instead.
This issue proposes the following sampler extension text be added to the spec (roughly speaking, wording to be improved):
The KHR_SAMPLER_COLOR_MAP extension indicates that an additional sampler subtype is available: "colorMap". This sampler takes the first channel of the input attribute and applies a transfer function (normalization + color) to it. This sampler has the following parameters in addition to the base ANARI sampler parameters:
Name | Type | Default | Description
---|---|---|---
color | ARRAY1D of Color | | array to map sampled and clamped field values to color
valueRange | FLOAT32_BOX1 | [0,1] | input attribute values are clamped to this range
This sampler follows the same rules for the color parameter as found on the "transferFunction1D" volume subtype. Values from this sampler are output as (r, g, b, 1) to the input of the material where the sampler is used, where r, g, and b come from the values in color respectively.
The spec should define how opacity values on volumes are interpreted and how they behave under transformation.
Typically the transmittance
Currently the spec only defines the "scivis" volume type that uses opacity as obtained from its transfer function. Is the "opacity" parameter just
When transforming the volume via an instance transform: for the "scivis" volume this would make sense, while this could be different for a more physical volume type.
Rasterization algorithms rely on mipmaps to avoid aliasing artifacts when sampling textures. For many use cases these can be automatically generated by downsampling the original image. However this approach is not always appropriate. For example cutout transparency maps or tiled texture atlases require special treatment.
There should be a way to explicitly specify mipmaps and/or give hints to automatic mipmap generation.
The lowest impact option on the API would probably be to allow image samplers to accept an array of array objects for their "image"
parameter where the individual arrays represent the mipmap levels. The downside to this is that the sampler needs to own the combined mipmap texture object and using the same texture with for example a different "inAttribute"
or filter mode would require duplicating the entire sampler including any internal data it owns.
Another approach could be to attach the mipmap levels to the array object itself in a similar manner. The top-level array would have a parameter "mipmapLevels" that is set to an array of arrays representing the additional levels [1:N].
Independent of these explicit mipmap specification options arrays could have hinting parameters for automatic generation to cover some common cases by for example indicating that the array represents cutout opacity data or contains tiles of a certain size.
Proposal for an optional feature (extension) ANARI_KHR_ID_BUFFERS, adding new Frame channels for various IDs to facilitate picking/selection use cases. Implemented in OSPRay v2.10.
New parameters on Frame:
- `primitive`, of type DATA_TYPE, possible value UINT32 (and others like UINT64, UINT16?), enabling the primitive ID channel holding the primitive index of the first hit
- `object`, of type DATA_TYPE, possible value UINT32 (and others like UINT64, UINT16?), enabling the object ID channel holding the Surface/Volume `id` (if specified) or the index in the Group of the first hit
- `instance`, of type DATA_TYPE, possible value UINT32 (and others like UINT64, UINT16?), enabling the instance ID channel holding the user-defined Instance `id` (if specified) or the instance index of the first hit

Applications can optionally set UINT32 `id` parameters on Surface, Volume and Instance, defaulting to -1u (0xffffffff), which are written to the object or instance channel, respectively. The value -1u is special, indicating that the implicit, automatically generated ID is used (see above).
Caveat: if no user `id`s are used, then there is no way to differentiate between hit volumes and surfaces.
If nothing is hit, then -1u is written to the channels (which is a way to differentiate between a hit Surface within an Instance and one directly at the World, where the instance channel has -1u).
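Under those rules, picking code can classify a readback from the instance/object channels as in this sketch (`classifyPick`, `ANARI_NO_ID`, and the return codes are illustrative, not part of the proposal):

```c
#include <stdint.h>

#define ANARI_NO_ID 0xFFFFFFFFu /* the -1u sentinel from the proposal */

/* Classify a pixel read back from the proposed ID channels:
 * 0 = background (nothing hit),
 * 1 = Surface/Volume placed directly at the World (instance is -1u),
 * 2 = hit inside an Instance. */
static int classifyPick(uint32_t instanceId, uint32_t objectId)
{
  if (instanceId == ANARI_NO_ID && objectId == ANARI_NO_ID)
    return 0;
  if (instanceId == ANARI_NO_ID)
    return 1;
  return 2;
}
```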
This issue proposes a new API for injecting array data into objects: anariMapParameterArray[1/2/3]D() and anariUnmapParameterArray().
The idea is to allow a map/unmap operation on an array directly on an object parameter -- doing so without an independent handle. For example, injecting vertex position data on a triangle geometry would look like:
ANARIGeometry geom = anariNewGeometry(device, "triangle");
float3 *vertices = (float3 *)anariMapParameterArray1D(device, geom, "vertex.position", ANARI_FLOAT32_VEC3, numVertices);
// if the array doesn't exist because an extension isn't implemented, such as
// ANARI_KHR_GEOMETRY_TRIANGLE, it may return NULL
if (vertices) {
fillVertexData(vertices);
}
anariUnmapParameterArray(device, geom, "vertex.position");
// ...
The near-term motivation is that some rendering engines (Cycles, to be specific) are not flexible with their memory allocation model and rely on their own abstractions for managing memory. While ANARI has plenty of great memory abstractions with lots of flexibility, there currently isn't a "fast path" for apps to inject data straight into the lowest-level location in memory that an implementation will use. This means that even managed arrays can still result in internal copies taking place, based on the underlying implementation's design limitations.
Long term I still think this API has value, as it guarantees that the array on the object in question cannot exist anywhere else, meaning this API gives the best performance for arrays which are not shared between multiple objects. For example, VisRTX would use this to allow injecting straight into a CUDA texture for image samplers.
There are plenty of reasons to keep all our other handle-based arrays, so this is proposed purely as an addition.
Practically all of the examples in the SDK would end up being convertible to this form, as we basically have no examples where a single array gets used in more than one place, nor needs to be memory shared with the application. Those use cases certainly exist and should be demonstrated, but I think direct array mapping is actually the default use case -- not the exception.
In order to unify all feature testing into a single API call, this proposal is to add feature names for all core features so there is a single list of all feature names that can be queried by the application. This has the added benefit of providing value to applications which use pre-compliant ANARI devices that may not have all core features implemented. This proposal adds the following to the specification:
- anariDeviceImplements(ANARIDevice, const char *) remains unchanged (just for this proposal; future forthcoming proposals are not excluded)
- Core feature names use the prefix ANARI_KHR_, and vendors should not use this prefix for vendor extensions.

The proposed core feature names:
ANARI_KHR_GEOMETRY_TRIANGLE
ANARI_KHR_GEOMETRY_QUAD
ANARI_KHR_GEOMETRY_SPHERE
ANARI_KHR_GEOMETRY_CONE
ANARI_KHR_GEOMETRY_CYLINDER
ANARI_KHR_GEOMETRY_CURVE
ANARI_KHR_SPATIAL_FIELD_STRUCTURED_REGULAR
ANARI_KHR_LIGHT_DIRECTIONAL
ANARI_KHR_LIGHT_POINT
ANARI_KHR_LIGHT_SPOT
ANARI_KHR_LIGHT_RING
ANARI_KHR_LIGHT_QUAD
ANARI_KHR_LIGHT_HDRI
ANARI_KHR_AREA_LIGHTS
ANARI_KHR_CAMERA_PERSPECTIVE
ANARI_KHR_CAMERA_ORTHOGRAPHIC
ANARI_KHR_CAMERA_OMNIDIRECTIONAL
ANARI_KHR_VOLUME_SCIVIS
ANARI_KHR_MATERIAL_MATTE
ANARI_KHR_MATERIAL_TRANSPARENT_MATTE
ANARI_KHR_SAMPLER_IMAGE_1D
ANARI_KHR_SAMPLER_IMAGE_2D
ANARI_KHR_SAMPLER_IMAGE_3D
ANARI_KHR_SAMPLER_PRIMITIVE_ID
ANARI_KHR_SAMPLER_TRANSFORM
ANARI_KHR_FRAME_COMPLETION_CALLBACK
ANARI_KHR_DEVICE_SYNCHRONIZATION
ANARI_KHR_TRANSFORMATION_MOTION_BLUR
ANARI_KHR_FRAME_AUXILIARY_BUFFERS
ANARI_KHR_STOCHASTIC_RENDERING
The main function of instance objects is to associate a group with a transform. However extensions in the specification (motion blur) specify alternative parameters to set the transform and change the behavior of the instance. This also creates a precedent that extensions for further transform types (skeletal animation, array instancing, etc.) would have to follow. The result is a "fat object" type whose behavior is governed by a precedence hierarchy of parameters with overlapping or even conflicting meaning.
Instances should be explicitly subtyped instead of having an implicit subtype based on parameter precedence. This would also be consistent with similar design decisions we made for geometries.
The changes to the spec would be:
ANARIInstance anariNewInstance(ANARIDevice device);
changes to
ANARIInstance anariNewInstance(ANARIDevice device, const char* type);
New instance subtypes are introduced:
- "matrix": the current default type, repackaged in an extension KHR_INSTANCE_MATRIX
- "motionMatrix": encapsulating motion.transform in KHR_INSTANCE_MOTION_MATRIX (split from KHR_MOTION_BLUR_TRANSFORMATION_INSTANCE)
- "motionSRT": encapsulating motion.scale/rotation/translation in KHR_INSTANCE_MOTION_SRT (split from KHR_MOTION_BLUR_TRANSFORMATION_INSTANCE)

I think this is currently undefined behavior. The call is not needed (because the array is managed and owned by ANARI). Is it then an error to create a managed array with the callback set (appMemory = NULL but ANARIMemoryDeleter != NULL)?
The callbacks ANARIStatusCallback and ANARIFrameCompletionCallback both take userdata pointers that can be set via parameters on the device and frame objects respectively. Since anariSetParameter takes the pointer as const void* and by value, the passed-in pointer is by API definition also const. This should be reflected in the callbacks. Since the user is in control of the callback and the passed-in memory, they can const_cast it away again if required.
From 11/9/22 WG meeting: split KHR_TRANSFORMATION_MOTION_BLUR into:
- KHR_MOTION_BLUR_GLOBAL_SHUTTER (shutter time on camera)
- KHR_MOTION_BLUR_ROLLING_SHUTTER (adds rolling duration and direction on camera)
- KHR_MOTION_BLUR_TRANSFORMATION_CAMERA (xforms on camera)
- KHR_MOTION_BLUR_TRANSFORMATION_INSTANCE (xforms on inst)
- KHR_MOTION_BLUR_DEFORMATION (vertex positions everywhere) or KHR_MOTION_BLUR_DEFORMATION_[GEOMSUBTYPE]
The definition of anariCommit() is based entirely on transitioning an object's set parameters to being "active" on the next frame. This name change would make the purpose of the function much clearer, and directly associates it with anariSetParameter() and anariUnsetParameter().
Rendering algorithms may need to make decisions based on object/scene bounds. For rasterization APIs, near and far planes need to be selected; shadow mapping requires setting up a shadow projection encompassing all shadow casters; etc.
On the other hand, scenes may contain large objects such as ground planes whose size does not necessarily correspond to the area of interest in the scene. In these situations it may be preferable to set the bounds explicitly instead of having them implicitly derived.
This extension would add a parameter "bounds" of type FLOAT32_BOX3 to at least the world object, but potentially also to instance, group, geometry, and spatial field objects.
Dear Anari designers,
I am working in CAD industry development and research, particularly in designing rendering backends for CAD systems, so ANARI is of interest to me. I went through the provisional specification and I have some thoughts that I will split into a couple of issues in ANARI-Docs. My first point concerns performance and the low overhead of the ANARI API.
Rationale: ANARI probably uses a trampoline to redirect ANARI API calls to the appropriate library. Moreover, it uses strings for parameter names, property names, subtype names, channels, and so on. Although the API is very nicely and flexibly designed, these might become performance issues in some use cases.
Example from CAD: Some users do not hesitate to create 100 million single-triangle objects, plus one. A very efficient ANARI library might render these in real time on high-end GPUs. However, the user might select all of them except one and change one of their attributes, say color or visibility. That would result in 100 million calls through the ANARI API, which challenges both ANARI and the library to handle things efficiently without unnecessary overhead. On the library side, the change can be as simple as setting one variable; however, ANARI probably imposes the trampoline overhead plus passing parameters as strings. A trivial implementation of parameter-string processing would go through the list of parameter strings and do strcmp() on each of them. That means linear complexity and quite an overhead when done 100 million times.
Idea for consideration - trampoline overhead: We can remove the trampoline in the same way as Vulkan: provide means to get the function pointers of a particular library and allow the client to call the function pointers directly. This might be implemented as an ANARI performance extension.
Idea for consideration - strings for parameter names, property names, subtype names, channels, and so on: We could provide a second function for setting parameters that takes an enum or int instead of a string. The enum types can be provided for all standard ANARI parameters, properties, etc., while also being queryable through the ANARI introspection API. Having parameters and properties as small-value ints or enums would allow all libraries to use them as direct indices into data arrays, or at least use std::array or vector to quickly translate them to a particular data offset or function call. This would eliminate the long string lookup that might be noticeable for 100 million calls.
Did I overlook something, or do you have ideas of your own in the direction of a high-performance, low-overhead ANARI API? Do my ideas seem reasonable, or do you prefer another approach?
Follow-up of #62 (comment)
Somewhat related to #57 we should probably consider standardizing a way to define clip planes eventually.
| Name | Type | Description |
|---|---|---|
| clipPlanes | `ARRAY1D` of `FLOAT32_VEC4` | Each VEC4 (A, B, C, D) defines a clip equation `A*x + B*y + C*z + D = 0`. Elements where the left side is positive are visible while elements on the negative side are clipped. |
Additionally the device or renderer would have an integer property "maxClipPlanes" in case there is a limit.
Since both scene-level clipping and object-level clipping can make sense, this parameter could exist almost anywhere in the scene hierarchy as well as on the renderer. These could either be separate features (KHR_RENDERER_CLIP_PLANES, KHR_SURFACE_CLIP_PLANES, etc.) or some a priori definition that they exist on, say, the world, surface, and volume?
This proposal seeks to move the omnidirectional camera and stereo rendering features to be core extensions. This is because they are niche features that implementations are likely to ignore if they don't already exist as an engine feature. Furthermore, the existence of these features may imply supported use cases, such as VR, that engines may not be interested in. Moving them to core extensions keeps a standardized interface while making them ultimately optional for implementers. The proposed extension names are ANARI_KHR_CAMERA_OMNIDIRECTIONAL and ANARI_KHR_CAMERA_STEREO_RENDERING, respectively.