
sig-graphics-audio's Introduction

O3DE (Open 3D Engine)

O3DE (Open 3D Engine) is an open-source, real-time, multi-platform 3D engine that enables developers and content creators to build AAA games, cinema-quality 3D worlds, and high-fidelity simulations without any fees or commercial obligations.

Contribute

For information about contributing to Open 3D Engine, visit https://o3de.org/docs/contributing/.

Download and Install

This repository uses Git LFS for storing large binary files.

Verify you have Git LFS installed by running the following command to print the version number.

git lfs --version 

If Git LFS is not installed, download and run the installer from: https://git-lfs.github.com/.

Install Git LFS hooks

git lfs install

Clone the repository

git clone https://github.com/o3de/o3de.git

Building the Engine

Build requirements and redistributables

For the latest details and system requirements, refer to System Requirements in the documentation.

Windows

Optional

  • Wwise audio SDK
    • For the latest version requirements and setup instructions, refer to the Wwise Audio Engine Gem reference in the documentation.

Quick start engine setup

To set up a project-centric source engine, complete the following steps. For other build options, refer to Setting up O3DE from GitHub in the documentation.

  1. Create a writable folder to cache downloadable third-party packages. You can also use this to store other redistributable SDKs.

  2. Install the following redistributables:

    • Visual Studio and VC++ redistributable can be installed to any location.
    • CMake can be installed to any location, as long as it's available in the system path.
  3. Configure the engine source into a solution using this command line, replacing <your build path>, <your source path>, and <3rdParty package path> with the paths you've created:

    cmake -B <your build path> -S <your source path> -G "Visual Studio 16" -DLY_3RDPARTY_PATH=<3rdParty package path>
    

    Example:

    cmake -B C:\o3de\build\windows -S C:\o3de -G "Visual Studio 16" -DLY_3RDPARTY_PATH=C:\o3de-packages
    

    Note: Do not use trailing slashes for the <3rdParty package path>.

  4. Alternatively, you can do this through the CMake GUI:

    1. Start cmake-gui.exe.
    2. Select the local path of the repo under "Where is the source code".
    3. Select a path where to build binaries under "Where to build the binaries".
    4. Click Add Entry and add a cache entry for the <3rdParty package path> folder you created, using the following values:
      1. Name: LY_3RDPARTY_PATH
      2. Type: STRING
      3. Value: <3rdParty package path>
    5. Click Configure.
    6. Wait for the key values to populate. Update or add any additional fields that are needed for your project.
    7. Click Generate.
  5. Register the engine with this command:

    scripts\o3de.bat register --this-engine
    
  6. The configuration of the solution is complete. You are now ready to create a project and build the engine.

For more details on the steps above, refer to Setting up O3DE from GitHub in the documentation.

Setting up new projects and building the engine

  1. From the O3DE repo folder, set up a new project using the o3de create-project command.

    scripts\o3de.bat create-project --project-path <your new project path>
    
  2. Configure a solution for your project.

    cmake -B <your project build path> -S <your new project source path> -G "Visual Studio 16"
    

    Example:

    cmake -B C:\my-project\build\windows -S C:\my-project -G "Visual Studio 16"
    

    Note: Do not use trailing slashes for the <3rdParty cache path>.

  3. Build the project, Asset Processor, and Editor to binaries by running this command inside your project:

    cmake --build <your project build path> --target <New Project Name>.GameLauncher Editor --config profile -- /m
    

    Note: Your project name used in the build target is the same as the directory name of your project.

The build will take some time to compile; when it finishes, binaries will be available in the project build path you've specified, under bin/profile.

For a complete tutorial on project configuration, see Creating Projects Using the Command Line Interface in the documentation.

Code Contributors

This project exists thanks to all the people who contribute.

License

For terms please see the LICENSE*.TXT files at the root of this distribution.

sig-graphics-audio's People

Contributors

amzn-tommy, antonmic, broganab, galibzon, jeremyong-az, jhmueller-huawei, lijianwen13, moudgils, neilgitgud, obwando, rgba16f, santorac, smurly, vincent6767, yangfei103


sig-graphics-audio's Issues

Nominations for new chair

What: Jon B (discord: JonB [Amazon] / github: wintermute-motherbrain) is stepping down as chair of sig-graphics-audio.
When: New nominations are being held as of 2022-04-07 and will continue until 2022-04-20 (the next sig-graphics-audio monthly)

Please reply to this ticket with a nomination. If you wish to +1 (vote) for someone that has been nominated, please use the thumbs up emoji.

Maintainer Nomination: @invertednormal

Nomination Details

Why this person is being nominated

  • Has a history of many contributions to O3DE
  • Has been a Reviewer since June
  • Reviewed 12 PRs in July, 21 PRs in August, so far 10 in September
  • Changed far more than 200 lines of code (18 commits in July, 5 in August, thousands of lines of code total)

Outcome (For SIG Facilitator)

Voting Results

Proposed SIG-Graphics-Audio meeting agenda for 2022-07-06

Meeting Details

The SIG-Graphics-Audio Meetings repo contains the history of past calls, including a link to the agenda, recording, notes, and resources.

SIG Updates

See previous meeting notes here: #50

Meeting Agenda

  • @mekhontsev will present their work on ShaderCanvas.
  • Discuss #51 in the next meeting (just dropped at the start, too long to go through) and let @gadams3 fill us in on the details
  • Discuss possible solutions to make the RT code aware of materials. Proposals include custom shaders that act as a drop-in for a material and can be sampled by the raytracing code, proxy geometry with vertex colors, or rendering out albedo textures that can be sampled (which, however, implies diffuse-only).
  • Pull in @dmcdiarmid-ly for a discussion on hybrid rendering. It is currently unclear when the RT acceleration structures are built. Does the user need to have a GI component in the scene, or what triggers this?

Outcomes from Discussion topics

Discuss outcomes from agenda

Action Items

Open Discussion Items

List any additional items below!

Proposed SIG-Graphics-Audio meeting agenda for October-19-22

Meeting Details

The SIG-Graphics-Audio Meetings repo contains the history of past calls, including a link to the agenda, recording, notes, and resources.

SIG Updates

What happened since the last meeting?

Meeting Agenda

Discuss agenda from proposed topics

Outcomes from Discussion topics

Discuss outcomes from agenda

Action Items

Create actionable items from proposed topics

Open Discussion Items

List any additional items below!

.gltf support

Summary:

Support for .gltf files would give developers the choice of multiple file formats that work nearly the same. For the data the O3DE engine uses from the file, .fbx and .gltf are very nearly equivalent.

What is the relevance of this feature?

Support for .gltf is important to me because maintaining .fbx support in an open-source project is always going to be time consuming, especially if the .fbx format is updated in the future. .gltf, on the other hand, is open, and assimp already has great support for it, so many of the features one would need already work, at least at a visual level, when imported.

What are the advantages of the feature?

The advantages of using .gltf over .fbx are that .gltf is faster than .fbx for loading and unloading, and the difference grows considerably if you use the binary version of .gltf.

What are the disadvantages of the feature?

One disadvantage is that it is hard to scale an exported .gltf file, since most applications I have tried do not allow scaling the exported asset.

How will this be implemented or integrated into the O3DE environment?

The great thing about this feature is that it can already be integrated by using assimp.

Are there any alternatives to this feature?

The only other alternative I know of is .fbx, and support for it in open-source software is rough around the edges; for O3DE it is getting better daily, so in the future it should also become a reliable option.

Are there any open questions?

Which platforms, if not all, would this be able to support?

For more information on this topic go to the issue page where it all started: o3de/o3de#3058

Proposed RFC Feature Material Canvas

Summary:

Material Canvas (MC) will empower users and eliminate pain points associated with hand authoring material types, materials, and shaders. It will offer intuitive, easy-to-use, node-based, visual scripting interfaces that are consistent with other node-based tools in O3DE. Through MC, users will be able to define modular, material- and shader-centric nodes, create graphs by dragging nodes onto a 2D grid, and make connections between node input and output slots to control the flow of logic and data. Graphs will be automatically transformed into functional shader, material type, and material assets that otherwise require hand authoring. Workflows like this have been the standard for years in many content-creation tools and game engines. They are used for creating shaders, materials, particle effects, visual effects, animation sequences, and game logic, and have many other applications.


What is the relevance of this feature?

O3DE and the Atom rendering engine provide many flexible, data driven, but complex systems like scenes, shaders, materials, passes, and image processing that bridge between high level rendering features and low level render hardware interfaces. Most of the data for controlling these systems is produced by manually editing JSON text files. The exception is that shader code is written in a custom, HLSL-derived, language called AZSL, that can be cross-compiled to different target platforms. Hand authoring any of this data can be daunting and prone to error. Users must learn the syntax, structures, properties, data types, supported values, and how all the pieces fit together. This requires familiarity with the corresponding systems, reviewing C++ code, dissecting or copying examples, and reading whatever user and technical documentation is available.

In addition to that, the data cannot be validated, previewed, or used in the engine until it has been saved, compiled by the Asset Processor (AP), and loaded by the systems that use it. The turnaround time, or "compile time", for this varies based on the AP queue, asset type, asset size, asset complexity, asset dependencies, asset builder performance, AZSL compiler, and other factors. The amount of time it takes to process an asset directly impacts the ability to view it in the engine and make iterations based on the results. This delay may not be as noticeable or impactful for some assets but is exacerbated for shaders and materials in relation to the number of other assets that are automatically reprocessed whenever these low-level dependencies change.

MC can resolve many, but not all, of these issues by hiding most of the data and complexity with visual editing, fast feedback, automatic validation, and live previews instead of studying and editing multiple JSON or AZSL files. This will reduce the learning curve, barrier to entry, and potential for human error.

Example Shader and Material Type Assets

https://github.com/o3de/o3de/tree/development/Gems/Atom/Feature/Common/Assets/Materials/Types

Feature design description:

O3DE already includes multiple node-based tools. Script Canvas is widely used for creating game logic and is being extended for other purposes. Landscape Canvas is used for managing connections between networks of entity and component references for systems like gradients, dynamic vegetation, and terrain. These tools are both built on top of the Graph Canvas (GC) gem that provides UI and systems for managing, rendering, and interacting with node graphs. Landscape Canvas also uses the Graph Model (GM) gem, which attempts to hide some of the complexity of GC’s numerous buses by adapting it to a more conventional set of object-oriented classes.

O3DE and Atom also include a dedicated Material Editor (ME) and other related features to create, customize, and manage material assets for use in the engine. Within ME, material source files, which are also represented in JSON, can be created, opened, and edited in a tabbed, multi-document/view workspace. ME contains a customizable viewport that displays a real-time preview of materials and any changes made to them. The viewport has options for selecting different models and lighting presets to show effects of the material on different surfaces in different lighting conditions. It informs users about external changes, gives them the options to apply those external changes to open materials, supports undo and redo, has its own asset browser filtered to the most relevant asset types, is fully automatable and extensible via Python and PyQt, and has several other features.

All of these tools are established, continuously maintained, hardened, extended, and have gone through many revisions and reviews with stakeholders. 

MC will assimilate the systems, features, UI, UX, and workflows developed for these tools, operating in an environment similar to ME. The most obvious UI differences from ME are that the main documents and views center around the graph, the viewport is a dockable window, and the controls from the other node-based tools like the node palette, mini map, and bookmarks are integrated. Other dialogs or wizards may be developed as part of MC to help extend the library of node definitions and configure other data that informs the code generation process.

By taking advantage of these existing systems, MC will give users a familiar experience, environment, and workflows for authoring shaders and materials using node graphs. All shader and material related assets will be generated from the graph as changes are made to it.

An incredibly basic example would involve dragging in an output node representing a standard PBR lighting model, dragging in a color variable node (internally it is a float4 with UI to visualize the color), setting it to red, and assigning it to the base color of the output node.


A more detailed description of workflow can be found in Appendix A.

Data Driven Nodes

In most cases, MC nodes will be defined using JSON, with settings for include files and lines of AZSL code that the generator will use to assemble the final shader program. Slot names, not to be confused with display names, on the node must match variable names in the AZSL so the generator can perform substitutions. The node AZSL will generally include calls to user defined functions, intrinsic functions, operators, etc. However, because the generator is primarily performing text substitutions and insertions, node AZSL can potentially contain anything that can be inserted inline in the generated code that does not break it. If segments of the generated code can be applied in different locations, the same system can be used for vertex shader inputs, the material SRG, struct, class, and function definitions, declarations, and function calls.

Nodes representing material properties or shader inputs might have other properties that can be edited in a node inspector, like enumeration values, ranges, group names, etc. These nodes will map directly to members in the material SRG and properties in the material type and material. It might be possible to automatically determine grouping and order from the organization in the graph.

There will be at least one variable node type for every supported material value data type. This would allow for variations of one data type surfaced in different ways, like colors versus vectors. Alternative options may be considered to simplify variable management, like the variable manager from Script Canvas.

Output Nodes (Lighting Model)

The output node surfaces the parameters affecting the lower-level lighting model shader implementation and contains the rest of the data needed for generation. It lists all the include files supporting the shading/lighting model, describes the organization of files, which passes are affected, input/output structures for each shader used in a pass, and other data needed to fill out ShaderSourceData and MaterialTypeSourceData structures.

Nodes With Multiple Outputs

GC and the DynamicNodeConfig support an arbitrary number of output slots. Variable, function, and other nodes need to support multiple output slots to access different members of complex types like structures and vectors. There must be a clean option for data driving this without extra code and ideally without node functions for every output. This should be able to append the member name to whatever is returned by the node function. AZSL also supports classes with member functions. It might be an option for the code generator to output complete classes with accessor and mutator functions that input and output slots map to.

Condition and Control Nodes

Branching can be implemented using nodes that resemble logic gates. The nodes should be able to select and output one of multiple inputs based on the value of an input flag.

Code Generation

The code generation process mostly involves performing inline insertions of include statements and shader code samples into the final AZSL file, performing search and replace string operations along the way. There are C++ data structures representing all of the other files that need to be produced. They will be populated with data from all of these supporting files.
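
As a rough illustration of this substitution step, the sketch below (plain C++, all names hypothetical) replaces slot names in a node's code snippet with the variable names generated for the graph and then pastes the accumulated lines at a marker token in the template text. It is a sketch of the idea, not the actual generator.

#include <map>
#include <string>
#include <vector>

// Replace every occurrence of 'from' with 'to' in 'text'.
void ReplaceAll(std::string& text, const std::string& from, const std::string& to)
{
    for (size_t pos = text.find(from); pos != std::string::npos; pos = text.find(from, pos + to.size()))
    {
        text.replace(pos, from.size(), to);
    }
}

// Expand one node's snippet, e.g. "outValue = float4(inX, inY, inZ, inW);", using a
// map of slot names to the unique variable names generated for this graph. A real
// implementation would match whole identifiers rather than raw substrings.
std::string ExpandNodeSnippet(std::string snippet, const std::map<std::string, std::string>& slotToVariable)
{
    for (const auto& [slotName, variableName] : slotToVariable)
    {
        ReplaceAll(snippet, slotName, variableName);
    }
    return snippet;
}

// Insert the accumulated code lines at a marker token inside the template AZSL text.
void InsertAtMarker(std::string& templateText, const std::string& marker, const std::vector<std::string>& lines)
{
    std::string block;
    for (const auto& line : lines)
    {
        block += line + "\n";
    }
    ReplaceAll(templateText, marker, block);
}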

Multiple options are being considered for the layout of the data used to drive code generation.

Validation

GC will automatically validate that data types match between node connections. Additional validation can be done to assess the completeness and correctness of the graph. The tool could check for and prevent cyclic dependencies, report general errors, report shader compilation errors, and allow users to view generated code.
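
One way to implement the cyclic dependency check mentioned above is a depth-first search over the node connections. The sketch below uses plain C++ with placeholder types; it is not the Graph Canvas API.

#include <unordered_map>
#include <unordered_set>
#include <vector>

using NodeId = int;
// Maps each node ID to the node IDs that consume its outputs.
using AdjacencyList = std::unordered_map<NodeId, std::vector<NodeId>>;

// Depth-first search that reports true if a cycle is reachable from 'node'.
bool HasCycleFrom(NodeId node, const AdjacencyList& graph,
                  std::unordered_set<NodeId>& visited, std::unordered_set<NodeId>& inStack)
{
    visited.insert(node);
    inStack.insert(node);
    if (auto it = graph.find(node); it != graph.end())
    {
        for (NodeId next : it->second)
        {
            if (inStack.count(next) || (!visited.count(next) && HasCycleFrom(next, graph, visited, inStack)))
            {
                return true;
            }
        }
    }
    inStack.erase(node);
    return false;
}

// Returns true if any connection path in the graph loops back on itself.
bool GraphHasCycle(const AdjacencyList& graph)
{
    std::unordered_set<NodeId> visited;
    std::unordered_set<NodeId> inStack;
    for (const auto& [node, outputs] : graph)
    {
        if (!visited.count(node) && HasCycleFrom(node, graph, visited, inStack))
        {
            return true;
        }
    }
    return false;
}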

Technical design description:

Requirements

  • MC will be built on top of established frameworks like AZ Framework, AZ Tools Framework, Atom Tools Framework (AtomTF), GC, and GM.
  • MC nodes representing variables, functions, inputs, outputs, and other supporting data sources must be data driven instead of hardcoded wherever possible, simple data structures, serialized in JSON, and thoroughly reflected to edit and behavior contexts to support editing, automation, and testing.
  • MC must include basic reflected property editors for all supporting data sources except for AZSL.
  • Graph traversal and data processing implementation for generating output data must be decoupled from the graph representation but directly accessible and executable on demand from within MC. This will allow the graph to be evaluated in different ways by different systems.
  • MC will implement a data driven dynamic node management system that allows creating and customizing GC and GM nodes without additional C++ code.
  • MC DynamicNodeConfig must be able to reference external data sources like AZSLI include files, AZSL code that may include markup, ShaderSourceData definitions, information about pass configuration, vertex and pixel shader input and output structures.
  • MC will populate the node library and UI by enumerating and loading node configuration files stored on disk in the project.
  • Any systems developed for MC that have a high likelihood of being needed elsewhere should automatically go to AtomTF.

Work Completed

https://github.com/o3de/o3de/pulls?page=1&q=is%3Apr+author%3Agadams3+is%3Aclosed+created%3A%3E2021-12-01
https://github.com/o3de/o3de/tree/development/Gems/Atom/Tools

The development time to create MC and other proposed tools will be greatly accelerated because a lot of the groundwork has already been completed. There was an extensive series of incremental refactors to the Material Editor (https://github.com/o3de/o3de/tree/development/Gems/Atom/Tools/MaterialEditor). More than 70 files' worth of code and features were made generic, configurable, and migrated to AtomTF (https://github.com/o3de/o3de/tree/development/Gems/Atom/Tools/AtomToolsFramework).

Shader Management Console (SMC) (https://github.com/o3de/o3de/tree/development/Gems/Atom/Tools/ShaderManagementConsole) is a related tool that was originally created by duplicating the entire codebase from a much older version of ME. It is used for generating and managing shader variant lists containing many permutations of shader options that drive compilation of optimized versions of shaders. This refactor also involved cleaning up, optimizing, deleting a lot of boilerplate code, and bringing SMC up to standard and in synchronization with ME. Both of these projects now only contain a handful of files and will automatically benefit from improvements and fixes going forward.

MC prototype project (https://github.com/o3de/o3de/tree/development/Gems/Atom/Tools/MaterialCanvas) was created after this work was completed. It has a small footprint of mostly application-specific code. GC and GM are integrated and operational. This required deep diving into Script Canvas and both gems to learn and adapt them to work in a different environment. The prototype also implements initial, experimental support and test data for interactive, data driven nodes. It builds, runs, has a small library of test nodes created from node configuration files, and behaves exactly as a user would expect from something described as a cross pollination of Script Canvas and ME.

Much of the basic workflow is operational but the prototype is not complete.

High Level Development Plan

This is a high level breakdown of tasks to be completed. Some of the work is order independent and parallelizable, especially the work in later milestones to create a comprehensive node library for making useful graphs. 

Milestone 1: New tools can be built on foundation of ME (Complete)
  • Refactor ME and SMC, moving all extracted features and systems to AtomTF in order to support development of MC and other tools requiring similar features and workflows. 
  • Update all systems and buses to support multiple instances to unblock potential integration into the main editor.
  • Update the viewport to support multiple instances within the same process.
  • Implement support for multiple document types with multiple views within the same tool.
Milestone 2: MC prototype project running with GC, GM, data driven nodes (Complete)
  • Create initial MC project and bare bones application built on top of the refactored ME framework.
  • Make the necessary changes to support for integrating GC and GM into MC.
  • Implement document and view classes that adapt GC and GM into the document and view systems.
  • Develop prototype system for data driven dynamic nodes that can work with GC and GM.
  • Create test data to validate that dynamic node system and graph serialization successfully operate in the prototype.
Milestone 3: MC supports basic code generation (In Progress)
  • Create this document.
  • Design for material canvas template system using existing data structures and AZSL with markup.
  • Design for general purpose mechanism to inject code generation data into DynamicNodeConfig.
  • Implement support for include files, shader functions, variables, and supporting data in the DynamicNodeConfig.
  • Implement support for serializing AZStd::any material value types in DynamicNodeConfig and the graph serializer.
    • There are a couple of large blocks for loading and storing data types within an AZStd::any.
      • This code was copied from the material component material assignment serializer but it does not support image assets.
      • The code should be moved to a common location, or the JSON serializer could add native support for AZStd::any and AZStd::variant.
  • Implement serialize, edit, and behavior context reflection for the required types.
    • Primarily includes DynamicNodeConfig, ShaderSourceData, MaterialTypeSourceData.
  • Create test data for testing and validating code generation.
    • Scalar and vector variable nodes.
    • A handful of function nodes for operating on scalar and vector variables.
    • A basic output lighting model node with supporting shaders and configuration files.
    • Test nodes to test and validate the generation process.
    • Test graphs that use these nodes
  • Implement code generation for variable, function, and lighting model output nodes.
    • This will be implemented in C++ inside the MC project.
      • It might be possible to implement this using a Python script if the necessary data is reflected. The main advantages of driving this with script are leaving the process open for extension and keeping it decoupled from the compiled code, so that the strategy for generating code can change or the same executable can be used for other applications.
Milestone 4: MC has testing and extended tooling
  • Add missing support to the MC document class
    • Capturing graph data for undo redo
    • Exposing selected node data and other configuration so that it is surfaced in the document inspector
    • Add functions to the document request bus to support automation and testing
  • Add RPE/DPE based dialogs for creating and editing node configurations, ShaderSourceData, templates
    • This will require edit and behavior context reflection of all of the types for which we plan to provide in editor tools.
    • The RPE will eventually be replaced with the DPE. When that happens, these tools will need to be revisited. There should be adapters in place for serialize and edit context reflected data types to work with the DPE.
  • Work with QA to create a test plan and tests
    • This will require changes to the Python test harness that was recently updated for ME.
      • It will need to be further generalized to work with MC and other standalone tools like SMC and the soon to be proposed pass editor.
    • Because MC will use AtomTF, GC, and GM, automation and testing will immediately be able to use the existing behavior context bound buses and data types, as proven with Landscape Canvas (LC).
    • Some of the tests and cases that were written for ME can also be generalized to run against other tools built on the framework
Milestone 5: MC has extended node library
  • Once the end to end workflow is validated and the node configuration format is hardened, it will be time to populate the node library.
    • This is where the most support will be needed.
    • The effort will be large but completely parallelizable because everything is data driven, automatically discovered and registered.
    • It might be worthwhile to investigate writing a Python script to scrape the data from documentation or use the shader compiler to generate nodes for intrinsic functions.
    • The node library can be extended with new nodes incrementally and indefinitely.
      • This also means that the stock node library must be moderated but that would be done through GH and PRs.

DynamicNodeConfig

These are the very simple but critical data structures currently used for configuring dynamic nodes, the literal building blocks of MC. This is not complete, but the data structures are serializable and produce interactive nodes. Strings are intentionally used and suitable for most fields except for the slot values. Those will need to be switched to AZStd::any or another dynamic data type so the data can be assigned and serialized. It should not take more than a day to finalize these structures, fill out the contexts, and create a property editor that supports adding and removing slots, with the ability to select from the available types. Using AZStd::any will probably require another custom JSON serializer. Requiring a custom serializer will make it more difficult to maintain and extend with new or different data types.

https://github.com/o3de/o3de/tree/development/Gems/Atom/Tools/AtomToolsFramework/Code/Source/DynamicNode

namespace AtomToolsFramework
{
    using DynamicNodeSettingsMap = AZStd::unordered_map<AZStd::string, AZStd::string>;

    //! Contains all of the settings for an individual input or output slot on a DynamicNode
    struct DynamicNodeSlotConfig final
    {
        AZ_CLASS_ALLOCATOR(DynamicNodeSlotConfig, AZ::SystemAllocator, 0);
        AZ_RTTI(DynamicNodeSlotConfig, "{F2C95A99-41FD-4077-B9A7-B0BF8F76C2CE}");
        static void Reflect(AZ::ReflectContext* context);

        DynamicNodeSlotConfig(
            const AZStd::string& name,
            const AZStd::string& displayName,
            const AZStd::string& description,
            const AZStd::any& defaultValue,
            const AZStd::vector<AZStd::string>& supportedDataTypes,
            const DynamicNodeSettingsMap& settings);
        DynamicNodeSlotConfig() = default;
        ~DynamicNodeSlotConfig() = default;

        //! Unique name or ID of a slot
        AZStd::string m_name = "Unnamed";
        //! Name displayed next to a slot in the node UI
        AZStd::string m_displayName = "Unnamed";
        //! Longer description display for tooltips and other UI
        AZStd::string m_description;
        //! The default value associated with a slot
        AZStd::any m_defaultValue;
        //! Names of all supported data types that a slot can connect to
        AZStd::vector<AZStd::string> m_supportedDataTypes;
        //! Container of generic or application specific settings for a slot
        DynamicNodeSettingsMap m_settings;
    };
} // namespace AtomToolsFramework
namespace AtomToolsFramework
{
    //! Structure used to data drive appearance and other settings for dynamic graph model nodes.
    struct DynamicNodeConfig final
    {
        AZ_CLASS_ALLOCATOR(DynamicNodeConfig, AZ::SystemAllocator, 0);
        AZ_RTTI(DynamicNodeConfig, "{D43A2D1A-B67F-4144-99AF-72EA606CA026}");
        static void Reflect(AZ::ReflectContext* context);

        DynamicNodeConfig(
            const AZStd::string& category,
            const AZStd::string& title,
            const AZStd::string& subTitle,
            const DynamicNodeSettingsMap& settings,
            const AZStd::vector<DynamicNodeSlotConfig>& inputSlots,
            const AZStd::vector<DynamicNodeSlotConfig>& outputSlots,
            const AZStd::vector<DynamicNodeSlotConfig>& propertySlots);
        DynamicNodeConfig() = default;
        ~DynamicNodeConfig() = default;

        //! Save all the configuration settings to a JSON file at the specified path
        //! @param path Absolute or aliased path where the configuration will be saved
        //! @returns True if the operation succeeded, otherwise false
        bool Save(const AZStd::string& path) const;

        //! Load all of the configuration settings from JSON file at the specified path
        //! @param path Absolute or aliased path from where the configuration will be loaded
        //! @returns True if the operation succeeded, otherwise false
        bool Load(const AZStd::string& path);

        //! The category will be used by the DynamicNodeManager to sort and group node palette tree items
        AZStd::string m_category;
        //! Title will be displayed at the top of every DynamicNode in the graph view 
        AZStd::string m_title = "Unnamed";
        //! Subtitle will be displayed below the main title of every DynamicNode 
        AZStd::string m_subTitle;
        //! Settings is a container of key value string pairs that can be used for any custom or application specific data
        DynamicNodeSettingsMap m_settings;
        //! Input slots is a container of DynamicNodeSlotConfig for all inputs into a node 
        AZStd::vector<DynamicNodeSlotConfig> m_inputSlots;
        //! Output slots is a container of DynamicNodeSlotConfig for all outputs from a node 
        AZStd::vector<DynamicNodeSlotConfig> m_outputSlots;
        //! Property slots is a container of DynamicNodeSlotConfig for property widgets that appear directly on the node 
        AZStd::vector<DynamicNodeSlotConfig> m_propertySlots;
    };
} // namespace AtomToolsFramework

DynamicNodeConfig Sample Data

{
    "Type": "JsonSerialization",
    "Version": 1,
    "ClassName": "DynamicNodeConfig",
    "ClassData": {
        "category": "Math Operations",
        "title": "Combine",
        "inputSlots": [
            {
                "name": "inX",
                "displayName": "X",
                "description": "X",
                "supportedDataTypes": [
                    "float"
                ],
                "defaultValue": {
                    "$type": "float",
                    "Value": 0.0
                }
            },
            {
                "name": "inY",
                "displayName": "Y",
                "description": "Y",
                "supportedDataTypes": [
                    "float"
                ],
                "defaultValue": {
                    "$type": "float",
                    "Value": 0.0
                }
            },
            {
                "name": "inZ",
                "displayName": "Z",
                "description": "Z",
                "supportedDataTypes": [
                    "float"
                ],
                "defaultValue": {
                    "$type": "float",
                    "Value": 0.0
                }
            },
            {
                "name": "inW",
                "displayName": "W",
                "description": "W",
                "supportedDataTypes": [
                    "float"
                ],
                "defaultValue": {
                    "$type": "float",
                    "Value": 0.0
                }
            }
        ],
        "outputSlots": [
            {
                "name": "outValue",
                "displayName": "Value",
                "description": "Value",
                "supportedDataTypes": [
                    "float4"
                ]
            }
        ]
    }
}

High Level Code Generation Logic

  • The process will begin whenever the graph is opened, modified, or saved. This eliminates the need for an export button as part of the iteration and preview process.
  • A request will be queued to generate the data, possibly throttled by some minimum interval to not flood the AP with requests (a minimal throttling sketch follows this list).
  • The generator collects all of the nodes from the graph and sorts them in execution and dependency order.
  • Data from all of the variable nodes will be used to populate the material SRG and property groups in MaterialTypeSourceData structures.
    • Ideally, all of the variable names in the SRG and material type can be automatically deduced from node names.
  • Data will be loaded from the lighting model output node template files and updated with details specific to the graph being processed.
    • Template files represent ShaderSourceData, MaterialTypeSourceData, AZSL, and other files that guide or need to be exported by the generation process.
  • Template files can be deserialized into their corresponding classes.
    • Template files have a filename prefix that will be replaced with the graph name when generated data is saved.
  • Template files have an extension suffix to prevent them from being recognized and processed by the AP.
  • Template files contain substitutable tokens or markers for inserting different types of generated data.
  • AZSL code will be generated for each input slot on the lighting model output node. The generator will traverse the graph, collecting a list of all of the unique include files from each node, inserting the include files at a designated location in the AZSL file, replacing symbols for variable names in function node text and pasting it into the shader in the designated location.
    • More advanced cases need additional consideration.
      • Generating AZSL functions, structures, or classes for each node could possibly make the generation process easier and the code more readable.
      • Then only class instances would need unique names.
      • Having multiple inputs and outputs could also be handled with member functions.
  • Once all of the structures have been filled out and buffers filled with shader code, everything will be saved in the same folder as the graph or a relative, intermediate asset folder.
  • The AP will recognize and process the generated assets.
  • The viewport will listen for ready and reload notifications for the generated material asset and apply it to the model. If the generated assets always resolve to the same asset IDs, then material hot-reloading will automatically handle this. As described above, the turnaround time for previewing assets that need to be built by the AP can vary but options exist to improve responsiveness if it proves to be an issue.
  • ME uses the same classes that the material asset builder does to create preview assets in memory without using the AP. Updates to material properties are applied directly to the displayed material instance. This approach may not be practical for all asset types.
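
As referenced in the throttling bullet above, the plain C++ sketch below shows one way regeneration requests could be rate limited. The class name and timing policy are illustrative assumptions, not the actual implementation.

#include <chrono>
#include <functional>

// Tracks when generation last ran and skips requests that arrive too quickly,
// remembering that a flush is still needed (for example, on save or on a timer tick).
class GenerationThrottle
{
public:
    explicit GenerationThrottle(std::chrono::milliseconds minimumInterval)
        : m_minimumInterval(minimumInterval)
    {
    }

    // Call on every graph change; runs the generator only if enough time has passed.
    void RequestGeneration(const std::function<void()>& generate)
    {
        const auto now = std::chrono::steady_clock::now();
        if (now - m_lastRun >= m_minimumInterval)
        {
            m_lastRun = now;
            m_pending = false;
            generate();
        }
        else
        {
            m_pending = true;
        }
    }

    // True if a change arrived during the cooldown and still needs to be generated.
    bool HasPendingRequest() const { return m_pending; }

private:
    std::chrono::milliseconds m_minimumInterval;
    std::chrono::steady_clock::time_point m_lastRun{};
    bool m_pending = false;
};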

What are the advantages of this feature?

MC will excite, empower, and accelerate users that want to create content and develop new features for O3DE and their projects.

All of the tools built on the same framework will benefit from any enhancements and fixes made in the process of developing MC.

MC can be the foundation for future rendering centric, node-based tools built on the same framework. Hypothetical but very realistic and reasonable examples include editors for particles, passes, post effects, image processing, procedural generation, configuring asset pipelines, and generating other kinds of scripts. The developers of those tools could start from a template, customize their viewport, set up details for the document types and node configuration, and define logic for transforming the graphs into whatever the target data would be. In an ideal scenario, a lot of the hard coding and transformation will move to data or script, making one tool support many types of graphs. 

What are the disadvantages of this feature?

There are no disadvantages to MC except for the time that it takes to develop and support it.

How will this be implemented or integrated into the O3DE environment? 

MC will be integrated into O3DE initially as a standalone executable exactly like and in a parallel location to ME and SMC under https://github.com/o3de/o3de/tree/development/Gems/Atom/Tools . It will be developed using the same underlying framework, UI controls, UX, and workflows built for those tools. It will also incorporate GC and GM, which were used to build Script Canvas and Landscape Canvas.

Multiple options will be considered to address existing concerns over startup times for these executables. Startup times are primarily impacted by initializing unrelated gems that do unnecessary work in an isolated environment, connecting to the AP, and enumerating information from the asset catalog multiple times to populate controls that list selectable assets.

Some investigation and experimentation to improve startup times and integrate more seamlessly into the O3DE editor has already been done as part of exploration for another RFC proposal and prototype.

Options include:

  • Review and refine the list of gems that should not auto load with the tools. This initially resulted in major improvements but dependencies may have changed. The tool does not control other modules with long load times. Those modules can be identified and optimized, which will also help launch times for the main editor.
  • Restructure AtomTF based application project configurations, modules, and components so that they can build as both standalone executables and O3DE editor extensions. This was not feasible prior to a massive refactoring effort that was done earlier this year. ME was developed as a dedicated, standalone tool for users who wanted to focus on content creation and integration into workflows specific to materials without opening up the rest of the tools. At the time there were also other engineering and product related reasons for this request. Sandboxing the tool and its systems in their own process ensured that all systems could be developed and tested modularly and independently. However, because some engine systems and the asset processor are required, launch times were impacted. Changes from the aforementioned refactor can enable optional integration of multiple tools with multiple documents and views into the main editor with some effort.
  • Launch the standalone executables as child processes of the O3DE editor when it launches, initially hiding their UI, just like the AP. This will make them available immediately upon request and ensure that they shut down with the editor when it closes. It might also be possible to pass a window ID through the command line so that compatible executables can mount inside of the editor UI.
  • Consolidate the number of times project assets are enumerated and cache the results for the duration of execution
  • Move away from using azasset and AnyAsset. They are a general purpose file extension and asset type for feeding JSON files through the AP and compiling them into binary streams without registering individual concrete asset types. These assets are not identifiable based on their asset type or extension. They must be loaded and require casting to identify the object. Systems that use this type often have hard coded asset references. ME used it as a convenience mechanism for lighting and model presets. ME also used a double file extension to identify them without opening them. That is still an unnecessary string compare while enumerating thousands of files. There are also usability concerns using the same file extension for different types.

Are there any alternatives to this feature?

While there might be alternatives, MC is in high demand. Internal and external customers and developers have been anticipating this workflow for years. Having familiar, empowering, accessible tools included with the engine is a necessity to improve the user experience and adoption.

Material X

Material X is an open source, open standard, cross platform, C++ SDK for exchanging and representing materials and shaders in a node graph described by a simple XML format. It’s integrated into multiple, commercial content creation tools and other projects but is not standard or fully supported across all of them. It does not currently provide any tooling and requires additional work to adapt it to the needs of Atom. There are also questions about its suitability for real time rendering requirements. There may still be opportunities for integration or converting between Atom and Material X data.

Simplification and Optimization 

Regardless of MC, evaluate all systems, data formats, APIs, and look for opportunities to simplify and optimize. One shining example of something like this was the refactor of MaterialTypeSourceData structure to support “remixable material types”. Prior to that, the organization of the data structures and file format was disjointed and unintuitive for navigating between property sets and property definitions. ME originally needed to improvise by stitching all of the data together in the order that it was supposed to appear in the UI. There was also a massive amount of overlap and repetition between material types that used the same sets of properties or even multiple layers of the same repeated property data.

All of that was significantly improved because the organization of the data now translates directly to how it’s presented in the editor. Support was also added for including external references to eliminate thousands of lines of repetition in stock material types. This was a major win for hand authoring material types. 

Reflection for Editing and Automation

Most serializable, Atom data structures have not been fully reflected with edit and behavior context bindings.

Edit context reflection exposes details of a class or other structure so that it can be inspected and edited in a reflected property editor. Developers can add display names for variables, detailed descriptions for classes and members, attributes for validating values, specifying options, ranges, and the type of widget used to edit each property. After setting that up, creating a property editor is as simple as instantiating a widget, setting a few parameters, passing in the object pointer and data type, and making it visible.
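
For example, the common O3DE reflection pattern looks roughly like the sketch below, shown here for DynamicNodeSlotConfig. The chosen fields, display names, and UI handlers are assumptions for illustration, not the actual implementation.

#include <AzCore/Serialization/SerializeContext.h>
#include <AzCore/Serialization/EditContext.h>

namespace AtomToolsFramework
{
    void DynamicNodeSlotConfig::Reflect(AZ::ReflectContext* context)
    {
        if (auto serializeContext = azrtti_cast<AZ::SerializeContext*>(context))
        {
            // Serialize context: makes the fields visible to the JSON serializer.
            // m_defaultValue (AZStd::any) is omitted here; as noted earlier it may
            // need a custom serializer.
            serializeContext->Class<DynamicNodeSlotConfig>()
                ->Version(1)
                ->Field("name", &DynamicNodeSlotConfig::m_name)
                ->Field("displayName", &DynamicNodeSlotConfig::m_displayName)
                ->Field("description", &DynamicNodeSlotConfig::m_description)
                ->Field("supportedDataTypes", &DynamicNodeSlotConfig::m_supportedDataTypes)
                ->Field("settings", &DynamicNodeSlotConfig::m_settings);

            // Edit context: adds display names, descriptions, and widget hints so a
            // reflected property editor can build UI for the structure automatically.
            if (auto editContext = serializeContext->GetEditContext())
            {
                editContext->Class<DynamicNodeSlotConfig>(
                    "DynamicNodeSlotConfig", "Settings for an individual slot on a dynamic node")
                    ->ClassElement(AZ::Edit::ClassElements::EditorData, "")
                    ->DataElement(AZ::Edit::UIHandlers::Default, &DynamicNodeSlotConfig::m_name,
                        "Name", "Unique name or ID of a slot")
                    ->DataElement(AZ::Edit::UIHandlers::Default, &DynamicNodeSlotConfig::m_displayName,
                        "Display Name", "Name displayed next to a slot in the node UI")
                    ->DataElement(AZ::Edit::UIHandlers::MultiLineEdit, &DynamicNodeSlotConfig::m_description,
                        "Description", "Longer description for tooltips and other UI");
            }
        }
    }
} // namespace AtomToolsFramework

With reflection like this in place, pointing a reflected property editor widget at a DynamicNodeSlotConfig instance and its type is enough to produce an editable UI, which is the behavior the paragraph above describes.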

Behavior context reflection exposes classes, data structures, functions, buses, types, and values to Lua, Python, and Script Canvas.

Filling out all of the contexts will enable users that are familiar with the supported scripting languages to build tools around the data without touching C++ code or worrying about the format. This is the exact same process used for components, extremely simple to set up if the structure is well organized and does not have any unsupported data types, and absolutely imperative for the sake of tooling, automation, and extensibility. MC will require some types, like ShaderSourceData, to be fully reflected.

MaterialTypeSourceData reflection and property editor

MaterialTypeSourceData reorganization made the layout much easier to digest for users and tools. It's a complex class, but getting it completely reflected to the edit context with controls that auto populate options would enable using the standard reflected property editor to create material types. ME inspector properties and UI are generated by transforming the data in this structure into something that works with the reflected property editor.

Without the reflection, a non node based editor for MaterialTypeSourceData could be created based off of the material document class. Users would be able to assemble a material type by adding groups, properties, etc and configuring all of the data. The UI would be very similar to the entity inspector. Something similar was described here o3de/o3de#2598 but the user was asking to be able to add and remove groups, opting into and out of material type features, instead of expanding and collapsing them in a complex material type. That sounds intuitive and would free up screen real estate in ME, compared to the original enable button proposal.

Retire AnyAsset and azasset

Though not specific to materials and shaders, we should retire AnyAsset and azasset and instead register and use explicit asset types. It is difficult to identify what the file represents at first glance. At runtime, there is also a performance cost associated with using AnyAsset because the asset type cannot be determined without loading it, attempting to cast, comparing the file extension, or matching another naming convention.

Expose APIs from individual asset builders

One thing to consider is the possibility that the AP shouldn't be used if instantaneous feedback is required and other options are available. If systems that provide custom asset types and builders expose APIs for working with the source data formats and building the target asset formats then custom tools can use them directly. For example, APIs have been exposed from the image processing gem that are used in multiple places for loading and manipulating images in source data formats. Many of the Atom asset types implement “builder” classes so that their product assets can be constructed in code. Other systems could follow this pattern and expose APIs and builder classes. Script bindings could also be added to all of them. This may not be a practical option in all cases depending on the complexity of the asset and builder.

Education and Evangelization

Create a series of user guides, technical documents, video tutorials, and sample content that continuously educate developers and users about how to use these systems, work with their data, and any improvements.

How will users learn this feature?

While user guides and technical documentation will help, learning MC should not require deep study or much effort depending on what users want to do. It will probably be more beneficial to provide samples and video tutorials.

Users that want to build shaders and material types from pre-existing nodes, templates, and examples should only need to open the application and start creating immediately, with minimal documentation.

Users that want to create custom nodes will require some familiarity with writing AZSL and concepts in the engine. MC will eventually have in-editor dialogs to configure all of the node inputs and outputs, required include files, and AZSL.

Users that want to create more extensive, lower-level shading code, such as templates for completely new lighting models, will require more in-depth knowledge about Atom and AZSL. Teaching these things is beyond the scope of MC, but there should be supporting documentation for how to do them in O3DE. MC can include dialogs for creating everything except the AZSL code definition for the lighting model or other general purpose shader code. Creating extensive, lower-level shader code is probably not impossible in the same environment but might be impractical and unmanageable with extremely complex graphs.

Users that want to automate or extend MC using Python or C++ might need examples and documentation about the available function, class, and bus bindings. Users may also need experience or general documentation about event buses, organization of the Python code, writing Python and C++ code, all of which are beyond the scope of MC.

Are there any open questions?

What is not supported by Material Canvas?

MC does not generate anything that cannot already be created by hand. MC is just providing tools and transforming one representation of the data into another, based on user- and developer-sourced data sets that get stitched together and fill out data structures.

MC does not change the implementation or behavior of existing systems. Other initiatives can be developed in parallel that may improve performance, features, organization, and representation of materials and shaders. If data formats or the shading language change then the nodes and generator might need updating. The graphs should continue to work as long as node slots and connections don’t change.

MC does not improve asset processing times. Iteration and preview times can be improved by optimizing asset queue prioritization, material and shader related asset builders, asset dependencies, and the shader compiler. Application startup times can also be improved by optimizing the time it takes to launch and connect to the AP.

MC does not improve performance of shaders, materials, or rendering in general. In fact, assets generated by MC may initially perform worse. While some level of deduplication and optimization can be done as part of converting the graph to generated data, the data will not be much better organized or optimized than what’s defined in the graph and each node. Generated data will likely not be more performant than something optimized by someone familiar with the shading language, target hardware, compiler, and performance tradeoffs. Providing the ability to export the generated files would allow them to be copied and manually optimized if needed. A lot of that responsibility will fall onto developers adding nodes and the compiler. Ease of use will hopefully lead to wider adoption but also open the floodgates to complex use cases that highlight opportunities for optimizing the rest of the engine. User created performance degradations can be mitigated with performance analysis tools.

Where should generated data be saved?

Users may or may not want to pollute their project or source control branches with generated assets. This makes sense with transient product assets. However, other source types will need references to the generated files. Otherwise users will not be able to create new materials based on them. Therefore, it seems like a hard requirement that the generated files can be output to source folders. We have Python scripts and other tools that are capable of generating source material from other data and they are also checked into source control.

An option would be for the source material data to support referencing either material types or material graphs that generate material types. That would require material graphs to exist, be processed, and function independently of MC. This also does not cover references to the other generated data types. It would require major changes to many tools and systems, like ME, the material component, and material preview rendering, that need access to MaterialTypeSourceData. For those reasons, this is not recommended or within scope.

What features that exist today does this not cover?

This does not yet include support for material functors. Material functors or Lua scripts mutate the material in different ways as values change. They are used within the editor to update property metadata for visibility, read-only state, value ranges, and other data. The options for functors are the same as the options for nodes. The template can come with functor script files that can be updated with string substitutions for whatever variables they need to affect. We could also provide a dialog that allows the user to set the rules for functors and output the Lua files from that. We could implement functor nodes, triggered by different conditions, that use the same generation process to also emit Lua code. The use cases and support would be much simpler than Script Canvas.

Initially, there will not be support for conditional or control flow nodes. Having support for function nodes or implementing nodes using shader code will allow users to implement their own behavior. However, if that functionality needs to be represented in the graph itself, options can be explored. It is preferred that we avoid use of the execution slot system provided by GC nodes because it requires manually directing the flow of logic with additional connections. Execution order can be determined by tracing dependencies between input and output slots on nodes in the graph. 
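
As a rough sketch of determining execution order by tracing slot dependencies, the plain C++ below applies Kahn's algorithm to a placeholder dependency map; the types are illustrative, not the Graph Model API.

#include <deque>
#include <unordered_map>
#include <vector>

using NodeId = int;

// 'dependencies' maps a node to the nodes whose outputs feed its inputs.
std::vector<NodeId> SortByDependencies(const std::unordered_map<NodeId, std::vector<NodeId>>& dependencies)
{
    std::unordered_map<NodeId, int> remaining;                  // unresolved input dependencies per node
    std::unordered_map<NodeId, std::vector<NodeId>> consumers;  // reverse edges
    for (const auto& [node, inputs] : dependencies)
    {
        remaining.try_emplace(node, 0);
        for (NodeId input : inputs)
        {
            remaining.try_emplace(input, 0);
            ++remaining[node];
            consumers[input].push_back(node);
        }
    }

    std::deque<NodeId> ready;
    for (const auto& [node, count] : remaining)
    {
        if (count == 0)
        {
            ready.push_back(node); // pure inputs (e.g. variable nodes) go first
        }
    }

    std::vector<NodeId> order;
    while (!ready.empty())
    {
        NodeId node = ready.front();
        ready.pop_front();
        order.push_back(node);
        for (NodeId consumer : consumers[node])
        {
            if (--remaining[consumer] == 0)
            {
                ready.push_back(consumer);
            }
        }
    }
    // If order.size() < remaining.size(), the graph contains a cycle.
    return order;
}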

What are other applications for this feature?

Appendix A - Workflow

  • Users open MC to an empty workspace, surrounded by docked widgets for working with graphs, assets, and properties.
  • Users see the node palette populated with all of the nodes defined in their project and active gems.
    • Users can create custom nodes types by adding node configuration JSON and supporting files to gems or projects.
    • Users can also create custom nodes using dialogs or other workflows to minimize editing text files.
  • Users create or open graphs using the file menu or shortcuts.
  • Users create new graphs by selecting from a library of existing graphs or templates, which they can also extend.
  • Users see open documents represented by tabs at the top of the main window.
  • Users see each open document tab contains a graph view in the workspace.
  • Users add nodes to the graph view by dragging them from the node palette or using the context menu from within the graph view.
  • Users see a node block appear with a title, description, input/output connection slots.
  • Users see thumbnails or previews of colors, images, and other state on nodes that support it.
  • Users see each node block is color coded and styled to distinguish between different types like variable, function, group, and output nodes.
  • Users configure property values for inputs on each node and other property values in a node inspector.
  • Users repeat this process to create additional nodes.
  • Users click and drag to make connections between input and output slots.
  • Users connect output slots from variable and function nodes to input slots on the output node.
  • Users see the effects of their changes visualized and applied to a model in the viewport.
  • Users find generated files saved in the same location as the graph or another designated location.
  • Users save the graph and can reopen it in the same state.
  • Workflow involving other windows and features provided from GC or ME can be found elsewhere.

Appendix B - Dynamic Node Configuration and Shader Generation Data Notes

Dynamic nodes and their configuration data drive most aspects of graphs in Material Canvas (MC). A dynamic node configuration specifies all of the data for the appearance of a node: the name, description, category, etc. It also provides information about the layout, data types, and default values for all of the slots on a node and what they can connect to in a graph.

Dynamic nodes must be able to provide other details about the objects or code that they represent. Those details will feed into the code generation process in MC.

This should be achievable by including a basic table inside the configuration. Each entry in the table will provide settings for include files, code snippets, or other information that will be assembled into the final output.

If shader code, or some other complex string, is included inline, the data will need to be escaped. Having a tool that allows entering the text without escape characters would be beneficial; reading and writing the escape characters could then be left to the JSON serializer. If the embedded code is long or complicated, then it's probably better to put it in an include file.

All of this data would be used to fill out sections of the templates described here: Material Canvas Lighting Model Output Node and Templates.

Examples

"settings": {
    "definition" : ["optionally define functions, structures, classes, preprocessor statements, specific to this node that get implanted in a common, global space in lieu of an include file"],
    "dependency" : ["path to another file that’s required for generation, like template files"],
    "include" : ["path to an include file to be inserted into AZSL code"],
    "initialize" : ["optionally declare/initialize variables before they are used"],
    "invoke" : ["code snippet that will be modified and injected into the generated code"]
}

Appendix C - Lighting Model Output Node and Templates Notes

Summary

Instead of a new class to represent the template, my recommendation is to provide folders containing the dynamic node configuration file along with partially filled out versions of the files to be generated. The filenames and content will contain markup or tokens that can be replaced by the generator. The template files will be used similarly to what the engine does with project, gem, component, and other templates.

The lighting model output node configuration will store a list of all of the included files and any other dependencies. Other than including the dependencies, and maybe a unique extension, this node can probably be treated like any other function node.

Requirements

The node folder contains stubbed out versions of all of the files that will be generated. These files would:

  • Have a special extension suffix so they don't get picked up by the AP
  • Be in the exact same format as the final data so they can be loaded, saved, and possibly edited using existing classes
  • Contain markup or some form of token that the generator will recognize and substitute with generated shader code, SRG definitions, and include file lists

Advantages

  • No new classes are required
  • No conversion between classes
  • Less redundant data
  • Clear outline of generated data
  • Setting up the edit context for these classes should enable editing them with minimal UI
  • New lighting model nodes can be created from code or a simple tool
  • Existing material types can be adapted into lighting model nodes more easily than if new classes were required

Examples

Example Folder Contents
An example node folder, resembling but not representing Base PBR, that contains the node and template files.

Gems/Atom/Feature/Assets/Materials/GraphNodes/YourNodeName would contain:
  • YourNodeName_LightingModel.materialcanvasnode
  • MaterialGraphName.materialtype.template
  • MaterialGraphName.material.template
  • MaterialGraphName_ForwardPass.shader.template
  • MaterialGraphName_ForwardPass.azsl.template
  • MaterialGraphName_Common.azsli.template
  • MaterialGraphName_ForwardPass.shadervariantlist
  • MaterialGraphName_LowEndForward.azsl.template
  • MaterialGraphName_LowEndForward.shader.template
  • MaterialGraphName_LowEndForward.shadervariantlist.template

The generator will load these files using the existing classes or streams to replace all of the tokens with the generated data for include files, shader code, and property lists.

  • MaterialGraphName will be replaced with the name of the graph document.
  • The template extension will be removed when these files are resaved in the output folder.
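
As a rough sketch, a generator might expand one of these template files as shown below. The MaterialGraphName token and the .template suffix follow the examples in this section; the function names, the use of std::filesystem, and treating each generated-section marker as a single replaceable token are assumptions for illustration.

#include <filesystem>
#include <fstream>
#include <map>
#include <sstream>
#include <string>

namespace fs = std::filesystem;

// Replace every occurrence of token in text with value.
std::string ReplaceAll(std::string text, const std::string& token, const std::string& value)
{
    for (size_t pos = text.find(token); pos != std::string::npos; pos = text.find(token, pos + value.size()))
    {
        text.replace(pos, token.size(), value);
    }
    return text;
}

void ExpandTemplate(const fs::path& templatePath,
                    const fs::path& outputFolder,
                    const std::string& graphName,
                    const std::map<std::string, std::string>& generatedSections)
{
    // Load the template text as-is.
    std::ifstream in(templatePath);
    std::stringstream buffer;
    buffer << in.rdbuf();
    std::string content = buffer.str();

    // Substitute the graph name and each generated-section marker.
    content = ReplaceAll(content, "MaterialGraphName", graphName);
    for (const auto& [marker, generatedText] : generatedSections)
    {
        content = ReplaceAll(content, marker, generatedText);
    }

    // Drop the ".template" extension and substitute the graph name in the file name.
    const std::string outputName = ReplaceAll(templatePath.stem().string(), "MaterialGraphName", graphName);
    std::ofstream out(outputFolder / outputName);
    out << content;
}
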
Example File Contents

MaterialGraphName.materialtype.template

{
    "description": "MaterialGraphName Material Type Template.",
    "shaders": [
        {
            "file": "./MaterialGraphName_ForwardPass.shader",
            "tag": "ForwardPass_EDS"
        },
        {
            "file": "./MaterialGraphName_LowEndForward.shader",
            "tag": "LowEndForward_EDS"
        },
        {
            "file": "@gemroot:Atom_Feature_Common@/Assets/Materials/Types/Shaders/Shadow/Shadowmap.shader",
            "tag": "Shadowmap"
        },
        {
            "file": "@gemroot:Atom_Feature_Common@/Assets/Materials/Types/Shaders/Depth/DepthPass.shader",
            "tag": "DepthPass"
        },
        {
            "file": "@gemroot:Atom_Feature_Common@/Assets/Materials/Types/Shaders/MotionVector/MeshMotionVector.shader",
            "tag": "MeshMotionVector"
        }
    ],
    "functors": [
    ],
    "uvNameMap": {
        "UV0": "Tiled",
        "UV1": "Unwrapped"
    }
}

MaterialGraphName_ForwardPass.shader.template

{
    "Source" : "./MaterialGraphName_ForwardPass.azsl",
 
    "DepthStencilState" :
    {
        "Depth" :
        {
            "Enable" : true,
            "CompareFunc" : "GreaterEqual"
        },
        "Stencil" :
        {
            "Enable" : true,
            "ReadMask" : "0x00",
            "WriteMask" : "0xFF",
            "FrontFace" :
            {
                "Func" : "Always",
                "DepthFailOp" : "Keep",
                "FailOp" : "Keep",
                "PassOp" : "Replace"
            },
            "BackFace" :
            {
                "Func" : "Always",
                "DepthFailOp" : "Keep",
                "FailOp" : "Keep",
                "PassOp" : "Replace"
            }
        }
    },
 
    "ProgramSettings":
    {
      "EntryPoints":
      [
        {
          "name": "MaterialGraphName_ForwardPassVS",
          "type": "Vertex"
        },
        {
          "name": "MaterialGraphName_ForwardPassPS_EDS",
          "type": "Fragment"
        }
      ]
    },
 
    "DrawList" : "forward"
}

MaterialGraphName_Common.azsli.template

/*
 * Copyright (c) Contributors to the Open 3D Engine Project.
 * For complete copyright and license terms please see the LICENSE at the root of this distribution.
 *
 * SPDX-License-Identifier: Apache-2.0 OR MIT
 *
 */
 
#pragma once
 
#include <Atom/Features/SrgSemantics.azsli>
#include <viewsrg.srgi>
#include <Atom/RPI/ShaderResourceGroups/DefaultDrawSrg.azsli>
#include <Atom/Features/PBR/LightingOptions.azsli>
#include <Atom/Features/PBR/AlphaUtils.azsli>
 
// <GENERATED_COMMON_INCLUDES
//  GENERATED_COMMON_INCLUDES>
 
ShaderResourceGroup MaterialSrg : SRG_PerMaterial
{
    // Auto-generate material SRG fields from MaterialGraphName
    // <GENERATED_MATERIAL_SRG
    //  GENERATED_MATERIAL_SRG>
}
 
// <GENERATED_CLASSES
//  GENERATED_CLASSES>
 
// <GENERATED_FUNCTIONS
//  GENERATED_FUNCTIONS>

MaterialGraphName_ForwardPass.azsl.template

/*
 * Copyright (c) Contributors to the Open 3D Engine Project.
 * For complete copyright and license terms please see the LICENSE at the root of this distribution.
 *
 * SPDX-License-Identifier: Apache-2.0 OR MIT
 *
 */
 
#include "Atom/Features/ShaderQualityOptions.azsli"
 
#include "MaterialGraphName_Common.azsli"
 
// SRGs
#include <Atom/Features/PBR/DefaultObjectSrg.azsli>
#include <Atom/Features/PBR/ForwardPassSrg.azsli>
 
// Pass Output
#include <Atom/Features/PBR/ForwardPassOutput.azsli>
 
// Utility
#include <Atom/Features/ColorManagement/TransformColor.azsli>
 
// Custom Surface & Lighting
#include <Atom/Features/PBR/Lighting/BaseLighting.azsli>
 
// Decals
#include <Atom/Features/PBR/Decals.azsli>
 
// ---------- Vertex Shader ----------
 
struct VSInput
{
    // Base fields (required by the template azsli file)...
    float3 m_position : POSITION;
    float3 m_normal : NORMAL;
    float4 m_tangent : TANGENT;
    float3 m_bitangent : BITANGENT;
  
    // Extended fields (only referenced in this azsl file)...
    float2 m_uv0 : UV0;
    float2 m_uv1 : UV1;
};
 
struct VSOutput
{
    // Base fields (required by the template azsli file)...
    // "centroid" is needed for SV_Depth to compile
    precise linear centroid float4 m_position : SV_Position;
    float3 m_normal: NORMAL;
    float3 m_tangent : TANGENT;
    float3 m_bitangent : BITANGENT;
    float3 m_worldPosition : UV0;
    float3 m_shadowCoords[ViewSrg::MaxCascadeCount] : UV3;
 
    // Extended fields (only referenced in this azsl file)...
    float2 m_uv[UvSetCount] : UV1;
};
 
#include <Atom/Features/Vertex/VertexHelper.azsli>
 
// One possibility for generating code for each input on the lighting model node would be to fill in stub functions for every input.
// Using this pattern would allow calling these functions from the main AZSL functions without the graph generated data.
// Vertex and pixel shader input and output structures might also need representation in the graph and to be generated.
// The inputs are going to be needed for anything interesting that varies with vertex and pixel attributes.
// VS/PS I/O might need to be represented in the graph and the structures generated from it like material SRG.
float3 GetBaseColorInput(VSOutput IN)
{
    float3 result = float3(1.0, 1.0, 1.0);
    // <GENERATED_VSINPUT_FUNCTION_BODY BaseColor
    //  GENERATED_VSINPUT_FUNCTION_BODY>
    return result;
}
 
float GetMetallicInput(VSOutput IN)
{
    float result = 1.0;
    // <GENERATED_VSINPUT_FUNCTION_BODY Metallic
    //  GENERATED_VSINPUT_FUNCTION_BODY>
    return result;
}
 
float GetSpecularF0Input(VSOutput IN)
{
    float result = 1.0;
    // <GENERATED_VSINPUT_FUNCTION_BODY SpecularF0
    //  GENERATED_VSINPUT_FUNCTION_BODY>
    return result;
}
 
float3 GetNormalInput(VSOutput IN)
{
    float3 result = float3(0.0, 1.0, 0.0);
    // <GENERATED_VSINPUT_FUNCTION_BODY Normal
    //  GENERATED_VSINPUT_FUNCTION_BODY>
    return result;
}
 
float GetRoughnessInput(VSOutput IN)
{
    float result = 1.0;
    // <GENERATED_VSINPUT_FUNCTION_BODY Roughness
    //  GENERATED_VSINPUT_FUNCTION_BODY>
    return result;
}
 
VSOutput MaterialGraphName_ForwardPassVS(VSInput IN)
{
    VSOutput OUT;
 
    float3 worldPosition = mul(ObjectSrg::GetWorldMatrix(), float4(IN.m_position, 1.0)).xyz;
 
    // By design, only UV0 is allowed to apply transforms.
    OUT.m_uv[0] = mul(MaterialSrg::m_uvMatrix, float3(IN.m_uv0, 1.0)).xy;
    OUT.m_uv[1] = IN.m_uv1;
 
    // No parallax in BasePBR, so do shadow coordinate calculations in vertex shader
    bool skipShadowCoords = false;
 
    VertexHelper(IN, OUT, worldPosition, skipShadowCoords);
 
    return OUT;
}
 
 
// ---------- Pixel Shader ----------
 
PbrLightingOutput ForwardPassPS_Common(VSOutput IN, bool isFrontFace)
{
    // ------- Tangents & Bitangents -------
    float3 tangents[UvSetCount] = { IN.m_tangent.xyz, IN.m_tangent.xyz };
    float3 bitangents[UvSetCount] = { IN.m_bitangent.xyz, IN.m_bitangent.xyz };
 
    if (o_normal_useTexture)
    {
        PrepareGeneratedTangent(IN.m_normal, IN.m_worldPosition, isFrontFace, IN.m_uv, UvSetCount, tangents, bitangents);
    }
     
    Surface surface;
    surface.position = IN.m_worldPosition.xyz;
 
    surface.normal = GetNormalInput(IN);
    float3 baseColor = GetBaseColorInput(IN);
    float metallic = GetMetallicInput(IN);
    float specularF0Factor = GetSpecularF0Input(IN);
 
    surface.SetAlbedoAndSpecularF0(baseColor, specularF0Factor, metallic);
 
    surface.roughnessLinear = GetRoughnessInput(IN);
    surface.CalculateRoughnessA();
 
    // ------- Lighting Data -------
 
    LightingData lightingData;
 
    // Light iterator
    lightingData.tileIterator.Init(IN.m_position, PassSrg::m_lightListRemapped, PassSrg::m_tileLightData);
    lightingData.Init(surface.position, surface.normal, surface.roughnessLinear);
     
    // Directional light shadow coordinates
    lightingData.shadowCoords = IN.m_shadowCoords;
     
    // Diffuse and Specular response (used in IBL calculations)
    lightingData.specularResponse = FresnelSchlickWithRoughness(lightingData.NdotV, surface.specularF0, surface.roughnessLinear);
    lightingData.diffuseResponse = float3(1.0, 1.0, 1.0) - lightingData.specularResponse;
 
    // ------- Multiscatter -------
 
    lightingData.CalculateMultiscatterCompensation(surface.specularF0, o_specularF0_enableMultiScatterCompensation);
 
    // ------- Lighting Calculation -------
 
    // Apply Decals
    ApplyDecals(lightingData.tileIterator, surface);
 
    // Apply Direct Lighting
    ApplyDirectLighting(surface, lightingData);
 
    // Apply Image Based Lighting (IBL)
    ApplyIBL(surface, lightingData);
 
    // Finalize Lighting
    lightingData.FinalizeLighting();
 
    float alpha = 1.0f;
    PbrLightingOutput lightingOutput = GetPbrLightingOutput(surface, lightingData, alpha);
 
    // Disable subsurface scattering
    lightingOutput.m_diffuseColor.w = -1;
 
    // Debug output for opaque objects
    DebugModifyOutput(lightingOutput.m_diffuseColor, lightingOutput.m_specularColor, lightingOutput.m_albedo, lightingOutput.m_specularF0,
                      surface.normal, tangents[MaterialSrg::m_normalMapUvIndex], bitangents[MaterialSrg::m_normalMapUvIndex],
                      surface.baseColor, surface.albedo, surface.roughnessLinear, surface.metallic);
 
    return lightingOutput;
}
 
 
ForwardPassOutput MaterialGraphName_ForwardPassPS_EDS(VSOutput IN, bool isFrontFace : SV_IsFrontFace)
{
    ForwardPassOutput OUT;
    PbrLightingOutput lightingOutput = ForwardPassPS_Common(IN, isFrontFace);
 
#ifdef UNIFIED_FORWARD_OUTPUT
    OUT.m_color.rgb = lightingOutput.m_diffuseColor.rgb + lightingOutput.m_specularColor.rgb;
    OUT.m_color.a = lightingOutput.m_diffuseColor.a;
#else
    OUT.m_diffuseColor = lightingOutput.m_diffuseColor;
    OUT.m_specularColor = lightingOutput.m_specularColor;
    OUT.m_specularF0 = lightingOutput.m_specularF0;
    OUT.m_albedo = lightingOutput.m_albedo;
    OUT.m_normal = lightingOutput.m_normal;
#endif
    return OUT;
}

Material Graph Template Alternative

This approach does not require any new classes to store the data or systems to convert it to the final form. Theoretically, all of the template assets should build as-is if the template file extension is removed, no illegal characters are present in the filenames, and the markers in the shader code are commented out.

We can also include ready-to-use graph templates that would be Material Canvas graph documents preconfigured with the output node and possibly other nodes.

Reviewer/Maintainer Nomination: gadams3


name: SIG Reviewer/Maintainer Nomination Template
about: Nominate yourself or someone else to become a SIG reviewer or maintainer
title: 'SIG Reviewer/Maintainer Nomination'
labels: 'needs-triage,needs-sig'


Nomination Guidelines

Reviewer Nomination Requirements
6+ contributions successfully submitted to O3DE
100+ lines of code changed across all contributions submitted to O3DE
2+ O3DE Reviewers or Maintainers that support promotion from Contributor to Reviewer
Requirements to retain the Reviewer role: 4+ Pull Requests reviewed per month

Maintainer Nomination Requirements
Has been a Reviewer for 2+ months
8+ reviewed Pull Requests in the previous 2 months
200+ lines of code changed across all reviewed Pull Requests
2+ O3DE Maintainers that support the promotion from Reviewer to Maintainer
Requirements to retain the Reviewer role: 4+ Pull Requests reviewed per month


Reviewer/Maintainer Nomination

Fill out the template below including nominee GitHub user name, desired role and personal GitHub profile

I would like to nominate: @gadams3, to become a Reviewer+Maintainer on behalf of @o3de/sig-content @o3de/sig-graphics-audio. I verify that they have fulfilled the prerequisites for this role.

Reviewers & Maintainers that support this nomination should comment in this issue.

Proposed SIG-Graphics-Audio meeting agenda for 2022-06-29

Meeting Details

The SIG-Graphics-Audio Meetings repo contains the history of past calls, including a link to the agenda, recording, notes, and resources.

SIG Updates

See previous meeting notes here: #49

Meeting Agenda

Discuss the Roadmap #44

Outcomes from Discussion topics

Discuss outcomes from agenda

Action Items

  • Discuss #51 in the next meeting (it was dropped just at the start and is too long to go through now) and let @gadams3 fill us in on the details
  • Pull in Doug for a discussion on hybrid rendering. It is currently unclear when the RT acceleration structures are built. Does the user need to have a GI component in the scene, or what triggers this?
  • Discuss possible solutions to make the RT code aware of materials. Proposals include custom shaders that act as a drop-in for a material and can be sampled by the raytracing code, proxy geometry with vertex colors, or rendering out albedo textures that can be sampled (which, however, implies diffuse-only).

Open Discussion Items

List any additional items below!

Proposed SIG-Graphics-Audio meeting agenda for June-15-22

Meeting Details

The SIG-Graphics-Audio Meetings repo contains the history of past calls, including a link to the agenda, recording, notes, and resources.

SIG Updates

What happened since the last meeting?

Meeting Agenda

  1. Discuss Mitsuba vs LuxCore, and decide what tool will be used as Ground Truth Provider.

Outcomes from Discussion topics

Action Items

  • Discuss with legal team and O3DE-TSC on viability to use Mitsuba as ground truth given its GPL license.
  • Discuss MaterialCanvas work with hushaoping [Huawei], as he has a team willing to develop a similar Material Editor tool.

Open Discussion Items

List any additional items below!

Add @invertednormal as a Maintainer

  • Has been a Reviewer since June
  • Reviewed 12 PRs in July, 21 PRs in August, so far 10 in September
  • Changed far more than 200 lines of code (18 commits in July, 5 in August, thousands of lines of code total)

Proposed SIG-Graphics-Audio meeting agenda for 2021-12-15

Meeting Details

  • Date/Time: December 15, 2021 @ 6:00pm UTC / 10:00 am PST
  • Location: Discord SIG-Graphics-Audio Voice Room
  • Moderator: JonB (wintermute-motherbrain)
  • Note Taker: JonB (wintermute-motherbrain)

The SIG-Graphics-Audio Meetings repo contains the history of past calls, including a link to the agenda, recording, notes, and resources.

SIG Updates

  • The following RFCs were approved:

  • The decision to add new labels for sig-graphics-audio was approved

  • The sig decided to continue discussion of this RFC. The RFC for ShaderCanvas will have impact on this so discussion will continue once the ShaderCanvas RFC is ready.

Meeting Agenda

Discuss agenda from proposed topics

Outcomes from Discussion topics

Discuss outcomes from agenda

Action Items

Create actionable items from proposed topics

Open Discussion Items

Proposed SIG-Presentation meeting agenda for 2021-08-18

Meeting Details

The SIG-Presentation Meetings repo contains the history of past calls, including a link to the agenda, recording, notes, and resources.

SIG Updates

What happened since the last meeting?

Meeting Agenda

  • Discuss the renaming of sig-presentation
  • Review open RFCs

Outcomes from Discussion topics

Discuss outcomes from agenda

Action Items

Create actionable items from proposed topics

Open Discussion Items

List any additional items below!

SIG-Graphics-Audio 2022 Roadmap

This issue will track any Roadmap items that SIG-Graphics-Audio will contribute to the O3DE Roadmap for 2022.

Please comment under this issue with any other roadmap items for sig-graphics-audio.

Proposed SIG-Graphics-Audio meeting agenda for 2022-01-19

Meeting Details

  • Date/Time: January 19, 2022 @ 6:00pm UTC / 10:00 am PST
  • Location: Discord SIG-Graphics-Audio Voice Room
  • Moderator: JonB (wintermute-motherbrain)
  • Note Taker: JonB (wintermute-motherbrain)

The SIG-Graphics-Audio Meetings repo contains the history of past calls, including a link to the agenda, recording, notes, and resources.

SIG Updates

  • No new updates since the holiday break

Meeting Agenda

Discuss agenda from proposed topics

Feature Grid Review:
#26

Outcomes from Discussion topics

Discuss outcomes from agenda

Action Items

Create actionable items from proposed topics

Open Discussion Items

Proposed RFC Suggestion: Renaming sig-presentation

Summary:

Some folks have expressed that the name "sig-presentation" does not obviously indicate that graphics and audio fall under the SIG. This RFC is to solicit feedback on whether sig-presentation should be renamed and, if so, what the new name should be.

What is the motivation for this suggestion?

Why is this important?

We want graphics and audio questions to flow through sig-presentation and not to other SIGs. This requires discoverability to be straightforward for folks new to O3DE.

What should the outcome be if this suggestion is implemented?

We either keep the current name or a new name is chosen.

Next Steps

Provide comments to this RFC indicating if you feel the name should be changed and if so, propose a new name. We will then bring this to the next monthly meeting agenda.

Proposed SIG-Graphics-Audio meeting agenda for July-20-22

Meeting Details

The SIG-Graphics-Audio Meetings repo contains the history of past calls, including a link to the agenda, recording, notes, and resources.

SIG Updates

What happened since the last meeting?

Meeting Agenda

  • Material Canvas update
  • #58

Outcomes from Discussion topics

Discuss outcomes from agenda

Action Items

Create actionable items from proposed topics

Open Discussion Items

List any additional items below!

RFC for Presentation Charter

Please read and provide feedback on our charter in this thread:

https://github.com/o3de/sig-presentation/blob/main/governance/SIG%20Presentation%20Charter.md

SIG Charter

This charter adheres to the Roles and Organization Management specified in .
Team information may be found in the <readme.md>

Overview of SIG

Two concise lines explaining what this SIG does with bullet points of the major responsibilities

  • Responsibility

Goals

  • Major goals that SIG seeks to generally achieve

Scope

Rendering:
Publish GPU/Video hardware driver compatibility, supported versions, and requirements per platform.

Design and manage Material authoring, workflow, and editor.

DCC scripting interface to renderer - Interaction with 3rd party tools (maya/houdini/substance/etc)

Publish pipeline API and specification for user generated script integration support to manipulate asset pipelines.

Audio

Design and maintain architecture of engine audio API subsystem for integration.
Publish audio hardware driver compatibility, supported versions, and requirements per platform.
Design, produce specification, and maintain client side authoring component into 3rd party audio subsystem.

AR/VR

Design and maintain common AR & VR Architecture API for engine integration
Design and maintain head tracking, object anchoring, and user interaction API to normalize and scale engine world space to AR & VR coordinate space.
Design and maintain VR stereoscopic rendering support.
Publish and maintain AR/VR device, SDK, and driver support.
Design and implement interface driver support

In scope

Design and manage features of the shader language and toolchain (AZSL/HLSL - AZSLc compiler)
Design and maintain Rendering Pipeline Interface (RPI) (application interface) pass system
Design and maintain Rendering Hardware Interface (RHI) (low-level interface) to add or support new platforms (Vulkan/RT, Metal, DirectX 12, DXR/Raytracing)

Cross-cutting Processes

Publish and maintain data specification of renderer mesh, material, shader and texture formats for external file format groups. (example gltf to array of vertex points for rendering)

Design and maintain code for Mesh, material, shader, and texture builders from file format parser output to renderer asset format.

Support rendering of blend shapes from animation system, but not responsible for authoring of blend shapes.

Provide support, design, and guidance for GPU related subsystem support for 3rd party features.

Out of Scope

Not responsible for the ingestion or export system for assets.
Not responsible for the builder queue system in AP.
Not responsible for maintaining audio driver and hardware subsystem compatibility.
Not responsible for 3rd party authoring tool support, but may advise and refer to 3rd party.

SIG Links and lists:

  • Joining this SIG
  • Slack/Discord
  • Mailing list
  • Issues/PRs
  • Meeting agenda & Notes

Roles and Organization Management

SIG Docs adheres to the standards for roles and organization management as specified by . This SIG opts in to updates and modifications to

Individual Contributors

Additional information not found in the sig-governance related to contributors.

Maintainers

Additional information not found in the sig-governance related to contributors

Additional responsibilities of Chairs

Additional information not found in the sig-governance related to SIG Chairs

Subproject Creation

Additional information not found in the sig-governance related to subproject creation

Deviations from sig-governance

Explicit Deviations from the sig-governance

Proposed SIG-Graphics-Audio meeting agenda for May-18-22

Meeting Details

The SIG-Graphics-Audio Meetings repo contains the history of past calls, including a link to the agenda, recording, notes, and resources.

SIG Updates

What happened since the last meeting?

Meeting Agenda

  1. Discuss new meeting format results. Poll here: #41

Outcomes from Discussion topics

  • APPROVED: #39

Action Items

  • Discuss with legal team and O3DE-TSC on viability to use Mitsuba as ground truth given its GPL license.

Open Discussion Items

List any additional items below!

Proposed SIG-Presentation meeting agenda for 2021-06-16

Meeting Details

  • Date/Time: June 16, 2021 @ 5:00 pm UTC / 10:00 am PDT
  • Location: Link will be posted in the #sig-presentation voice channel on Discord shortly before the call.
  • Moderator: Jonathan Boldiga
  • Note Taker

The SIG-Presentation repo contains the history of past calls, including a link to the agenda, recording, notes, and resources.

Introductions

  • Facilitator: Jonathan Boldiga, Amazon, SDM
  • Participants (alphabetically): <Name, Company, Team Role>

Meeting Agenda

Discuss agenda from proposed topics

  • Charter RFC and feedback process
  • SIG Scope Summary review
  • SIG Meeting Cadence
  • Community suggested topics
  • Open topics (if time)

Outcomes from Discussion topics

Discuss outcomes from agenda

Action Items

Create actionable items from proposed topics

Open Discussions

List of things discussed that remain open

Postponed due to time

List any agenda topics missed/not completed

Proposed SIG-Graphics-Audio meeting agenda for September-21-22

Meeting Details

The SIG-Graphics-Audio Meetings repo contains the history of past calls, including a link to the agenda, recording, notes, and resources.

SIG Updates

What happened since the last meeting?

Meeting Agenda

Discuss agenda from proposed topics

Outcomes from Discussion topics

Discuss outcomes from agenda

Action Items

Create actionable items from proposed topics

Open Discussion Items

List any additional items below!

Proposed SIG-Graphics-Audio meeting agenda for 2022-04-20

Meeting Details

  • Date/Time: April 20, 2022 @ 6:00pm UTC / 10:00 am PST
  • Location: Discord SIG-Graphics-Audio Voice Room
  • Moderator: JonB (wintermute-motherbrain)
  • Note Taker: JonB (wintermute-motherbrain)

The SIG-Graphics-Audio Meetings repo contains the history of past calls, including a link to the agenda, recording, notes, and resources.

SIG Updates

Meeting Agenda

SIG Chair nominations - #34

Outcomes from Discussion topics

Discuss outcomes from agenda

Action Items

Create actionable items from proposed topics

Open Discussion Items

List any additional items below!

Proposed SIG-Graphics-Audio meeting agenda for 2021-11-17

Meeting Details

  • Date/Time: November 17, 2021 @ 6:00pm UTC / 10:00 am PST
  • Location: Discord SIG-Graphics-Audio Voice Room
  • Moderator: JonB (wintermute-motherbrain)
  • Note Taker: JonB (wintermute-motherbrain)

The SIG-Graphics-Audio Meetings repo contains the history of past calls, including a link to the agenda, recording, notes, and resources.

SIG Updates

What happened since the last meeting?

Meeting Agenda

Outcomes from Discussion topics

  • The following RFCs were approved:

  • The decision to add new labels for sig-graphics-audio was approved

  • The sig decided to continue discussion of this RFC. The RFC for ShaderCanvas will have impact on this so discussion will continue once the ShaderCanvas RFC is ready.

Discuss outcomes from agenda

Action Items

Create actionable items from proposed topics

Open Discussion Items

  • This RFC is still open for discussion

List any additional items below!

Maintainer Nomination: @jromnoa

Nomination Details

Why this person is being nominated

  • Has many contributions to Atom automation for o3de.
  • Has many contributions to AtomTest and AtomSampleViewer for automation.
  • Faster automation merges mean more development runs get the automation changes faster, rather than waiting for a merge from someone else.
  • Changed far more than 200 lines of code.
  • Has been a reviewer for over 2 months (mainly on automation PRs).
  • Participated in at LEAST 15 PRs.

Outcome (For SIG Facilitator)

Voting Results

Proposed RFC Suggestion - Separate Transparent and Parallax into separate shaders

Currently Transparent and Parallax are part of the standard PBR shader. We would like to separate them out into separate shaders for clarity, optimization purposes, etc.
This can be done in one of two ways:
A) Limit the separation to azsl shaders and have Standard PBR material pick the correct shader. Standard PBR material would stay the same.
B) Remove transparent and parallax from the StandardPBR material and make two new material types for transparent and parallax. This would be more explicit; the intention would be clear when the user picks the material type. The downside is it might mess up existing materials that people have.

Proposed RFC Feature: Material Canvas Shader generation

This text covers file and data organization, including connections between generated documents and extensions to the existing approach. The solution is based on the Material Canvas introduced in the Material Canvas feature RFC and is considered a starting point for further discussion (a pre-RFC in terms of the Feature template).

Helpful information:

Summary:

The shader generation logic creates all necessary infrastructure files in two cases:

  • *.azsl, *.shader, and *.materialtype files are created on the "save"/"save as" operation, with the same name as the canvas file, for further use in the Material Editor
  • *.azsl, *.shader, *.materialtype, and *.material temporary files are created for preview

All of the files are created from template files using substitutors and the node configuration's shader extensions. Subsequent asset processing is unchanged and works as-is.

Feature design description:

Material type based node

O3DE has several base material types, such as StandardPBR and BasePBR, and these material types can be treated as a kind of Output node in MC; in other words, a material-type-based node with input slots only. That kind of node uses already implemented AZSL shaders and shader descriptor files, and represents all of the input properties as slots.

Technically, it can be implemented as a generated materialtype file that copies "propertyLayout", "shaders", "functors", and "uvNameMap" from the base. This file is going to be used for further extensions, including a generated shader/AZSL file pair and additional properties retrieved from the graph's unset slot list.

In addition, the base material type could be chosen on canvas creation (as we do in the Material Editor), with the ability to choose using properties. After that, the output node will be added to the canvas as an Output node.

materialcanvasnode.azasset extension

In the current solution, the node is described in a *.azasset file, and the structure looks approximately like this:

{
    ...
    "ClassData": {
        "category": "Constants",
        "title": "Constant Color",
        "propertySlots": [
            {
                "name": "...",
                "supportedDataTypes": [],
                ...
            }
        ],
        "outputSlots": [
            {
                "name": "...",
                "supportedDataTypes": [],
                ...
            }
        ],
        "inputSlots": [
            {
                "name": "...",
                "supportedDataTypes": [],
                ...
            }
        ],
    }
}

and this file is extended as follows:

  1. Introducing the ClassData.inputSlots.templateSubstitutor field

Contains the substitutor name that is used during code generation as a substitution point in the node's shader template file, which is discussed later. Simply put: the slot's name.

  2. Introducing the ClassData.azslTemplate field

The name of the node's shader template file containing AZSL code with substitutors.

  3. Introducing the ClassData.baseMaterial field

Contains an alternative input slot description based on an existing *.materialtype (for example, a PBR base) and enumerates the properties that propagate as slots. 'Propagating properties' in this context means viewable and accessible in the graph.

"baseMaterial": {
    "type": "BasePBR.materialtype",
    "propagatingSlots": [
        {
            "ref": "baseCololor.color",
            "displayName": "Diffuse color",
            "templateSubstitutor": "DIFFUSE_COLOR"
        }
    ]
}
Substitutors

The current approach assumes hardcoded substitutor names plus dynamically described unique names in the configuration:

  • %INCLUDES% - the place for the shader includes required by nodes
  • %SHADER_PARAMS% - contains incoming shader parameters, typically bound in the material type property layout
  • %FUNCTION_DEFINITIONS% - contains function definitions generated by the connected nodes existing in the graph
  • %SOURCE_CODE% - the place for additional generated code in the main shader functions; can be additional variable definitions and function calls
  • %NODE_FUNC% - the node's main function; the returned result is forwarded to the connected node's function
  • %IN_TYPE1% .. %IN_TYPEN% - slot-dependent type, can be used for multiple implementations
  • %IN_SIZE1% .. %IN_SIZEN% - slot-dependent array size, used for slots representing arrays
  • the slot's ClassData.inputSlots.templateSubstitutor - the name of a slot's data

and probably more in the future.

AZSL template

The node's AZSL shader code contains substitutors. There are two examples below: the first is more verbose and describes the whole shader template; the second is a simple node template.

It's better to show real examples. The first is connected with the material-based node, also known as the output node. It's a whole shader skeleton:

#include <Atom/Features/PBR/DefaultObjectSrg.azsli>
#include <viewsrg.srgi>

%INCLUDES%


ShaderResourceGroup ShaderParams: SRG_PerMaterial
{
    %SHADER_PARAMS%
}

struct VertexInput
{
    float3 m_position   : POSITION;
    // ...
};

struct VertexShaderOutput
{
    // ...   
};

%FUNCTION_DEFINITIONS%

VertexShaderOutput MainVS(VertexInput IN)
{
    VertexShaderOutput OUT;
    // Some code ...
    return OUT;
}

ForwardPassOutput MainPS(VertexShaderOutput IN)
{
    %SOURCE_CODE%

    ForwardPassOutput OUT;

    OUT.m_diffuseColor      = %DIFFUSE_COLOR%;
    OUT.m_albedo            = float4(0.0, 0.0, 0.0, 1.0);
    
    return OUT;
}

The second is just a building block that adds its code to the skeleton. Here is a color combiner node template:

#include <viewsrg.srgi>

float3 %foo%(float r, float g, float b)
{
    return float3(r, g, b);
}

void %NODE_FUNC%(%IN_TYPE1% r, %IN_TYPE2% g, %IN_TYPE3% b, %IN_TYPE4% a, out float4 result)
{
    result = float4(%foo%(r, g, b).xyz, a);
}
AZSL template variations

In this example, the result type depends on the first argument's type:

void %NODE_FUNC%(%IN_TYPE1% lhs, %IN_TYPE1% rhs, out %IN_TYPE1% result)
{
    result = lhs + rhs;
}

This example demonstrates an array argument type:

void %NODE_FUNC%(%IN_TYPE1% arr[%IN_SIZE1%], out %IN_TYPE1% result)
{
    result = 0;
    for (int i = 0; i < %IN_SIZE1%; ++i)
    {
        result = result + arr[i];
    }
}

As the last examples show, a shader node function can support multiple types and has to be generated several times, once for every unique connected slot type. That pushes us toward a file parsing process, which we want to avoid. Alternatively, we can introduce special layout constructs that can be found easily, as in the example below (a sketch of this per-type expansion follows the example).

@GenericFunctionBegin

void %NODE_FUNC%(%IN_TYPE1% lhs, %IN_TYPE1% rhs, out %IN_TYPE1% result)
{
    result = lhs + rhs;
}

@GenericFunctionEnd
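
A minimal sketch of that per-type expansion, assuming the block body between the markers has already been extracted and the connected slot types are known. The %NODE_FUNC% and %IN_TYPE1% tokens follow this proposal; the function names and everything else here are hypothetical.

#include <cstdio>
#include <string>
#include <vector>

// Replace every occurrence of token in text with value.
std::string ReplaceAll(std::string text, const std::string& token, const std::string& value)
{
    for (size_t pos = text.find(token); pos != std::string::npos; pos = text.find(token, pos + value.size()))
    {
        text.replace(pos, token.size(), value);
    }
    return text;
}

// Emit one instance of the generic function body for every unique connected slot type.
std::string ExpandGenericFunction(const std::string& genericBody, const std::vector<std::string>& connectedTypes)
{
    std::string expanded;
    for (const std::string& type : connectedTypes)
    {
        std::string instance = ReplaceAll(genericBody, "%NODE_FUNC%", "NodeFunc_Add_" + type);
        expanded += ReplaceAll(instance, "%IN_TYPE1%", type) + "\n";
    }
    return expanded;
}

int main()
{
    const std::string genericBody =
        "void %NODE_FUNC%(%IN_TYPE1% lhs, %IN_TYPE1% rhs, out %IN_TYPE1% result)\n"
        "{\n"
        "    result = lhs + rhs;\n"
        "}\n";

    // The same node connected once with float3 slots and once with float4 slots.
    std::printf("%s", ExpandGenericFunction(genericBody, { "float3", "float4" }).c_str());
    return 0;
}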

Generation

The graph is topologically sorted with the output (material-based) node as the root. During the traversal, the generator creates unique property and function names and performs substitutions using the slots' connections. All unset slots become properties in the property layout.
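
A rough sketch of that traversal, assuming the nodes are already topologically sorted (dependencies first) and each node carries its AZSL template and the list of unset input slots. The GraphNode fields, the float-only property type, and the naming scheme are simplifications for illustration.

#include <cstdio>
#include <string>
#include <vector>

struct GraphNode
{
    int id = 0;
    std::string azslTemplate;            // node template containing %NODE_FUNC%
    std::vector<std::string> unsetSlots; // input slots with no connection
};

int main()
{
    // Nodes in dependency order; the output (material-based) node would come last.
    const std::vector<GraphNode> sortedNodes = {
        { 1,
          "void %NODE_FUNC%(float r, float g, float b, out float3 result)\n"
          "{\n    result = float3(r, g, b);\n}\n",
          { "r", "g", "b" } },
    };

    std::string functionDefinitions; // fills %FUNCTION_DEFINITIONS%
    std::string shaderParams;        // fills %SHADER_PARAMS%
    for (const GraphNode& node : sortedNodes)
    {
        // Give each node instance a unique function name so duplicates do not collide.
        const std::string uniqueName = "NodeFunc_" + std::to_string(node.id);
        std::string functionCode = node.azslTemplate;
        // The token occurs once in this template; a real generator would replace all occurrences.
        functionCode.replace(functionCode.find("%NODE_FUNC%"), std::string("%NODE_FUNC%").size(), uniqueName);
        functionDefinitions += functionCode + "\n";

        // Every unset input slot is promoted to a uniquely named material property / SRG field.
        for (const std::string& slot : node.unsetSlots)
        {
            shaderParams += "float m_node" + std::to_string(node.id) + "_" + slot + ";\n";
        }
    }

    std::printf("%%SHADER_PARAMS%%:\n%s\n%%FUNCTION_DEFINITIONS%%:\n%s", shaderParams.c_str(), functionDefinitions.c_str());
    return 0;
}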

What are the advantages of the feature?

  • Solution can be smoothly embedded into the existing infrastructure
  • It generates not only the shader, but the ready-to-use material type

What are the disadvantages of the feature?

  • A user can create the material type and work with it manually, and that manual work can be overwritten by regeneration

Are there any open questions?

  • Pipeline settings that are described in the *.shader file have not been mentioned anywhere. Where is the best place to locate them in MC?
  • The newly generated shader has to be added to the materialtype shaders section, but its position in the ordering is not yet clear.
  • The behavior context has been mentioned in RFC 51. How are we going to use it there? Runtime configuration?
  • Should we do an explicit 'public editable material properties' setup? In the proposed solution, we use the 'expose unset slot properties' strategy.
  • Do we need a special 'Export material' menu item that saves the shader/azsl/materialtype/material files, detaching them from the Material Canvas file in the Asset Processor pipeline?
  • Currently we connect the node with the AZSL shader template. Is there a need to connect it per slot?

Reviewer Nomination: tjmgd


name: SIG Reviewer/Maintainer Nomination Template
about: Nominate yourself or someone else to become a SIG reviewer or maintainer
title: 'SIG Reviewer/Maintainer Nomination'
labels: 'needs-triage,needs-sig'


Nomination Guidelines

Reviewer Nomination Requirements
6+ contributions successfully submitted to O3DE
100+ lines of code changed across all contributions submitted to O3DE
2+ O3DE Reviewers or Maintainers that support promotion from Contributor to Reviewer
Requirements to retain the Reviewer role: 4+ Pull Requests reviewed per month

Maintainer Nomination Requirements
Has been a Reviewer for 2+ months
8+ reviewed Pull Requests in the previous 2 months
200+ lines of code changed across all reviewed Pull Requests
2+ O3DE Maintainers that support the promotion from Reviewer to Maintainer
Requirements to retain the Reviewer role: 4+ Pull Requests reviewed per month


Reviewer/Maintainer Nomination

Fill out the template below including nominee GitHub user name, desired role and personal GitHub profile

I would like to nominate: tjmgd, to become a Reviewer on behalf of sig-graphics-audio. I verify that they have fulfilled the prerequisites for this role.

Reviewers & Maintainers that support this nomination should comment in this issue.

Reviewer Nomination: Thomas Poulet @bindless-chicken

Nomination Guidelines

Reviewer Nomination Requirements
6+ contributions successfully submitted to O3DE
100+ lines of code changed across all contributions submitted to O3DE
2+ O3DE Reviewers or Maintainers that support promotion from Contributor to Reviewer
Requirements to retain the Reviewer role: 4+ Pull Requests reviewed per month

Maintainer Nomination Requirements
Has been a Reviewer for 2+ months
8+ reviewed Pull Requests in the previous 2 months
200+ lines of code changed across all reviewed Pull Requests
2+ O3DE Maintainers that support the promotion from Reviewer to Maintainer
Requirements to retain the Reviewer role: 4+ Pull Requests reviewed per month

Reviewer/Maintainer Nomination

I would like to nominate: @Bindless-Chicken, to become a Reviewer on behalf of sig-graphics-audio. The requirements will be met with the next accepted PR.

Reviewers & Maintainers that support this nomination should comment in this issue.

Proposed SIG-Graphics-Audio meeting agenda for August-17-22

Meeting Details

The SIG-Graphics-Audio Meetings repo contains the history of past calls, including a link to the agenda, recording, notes, and resources.

SIG Updates

What happened since the last meeting?

Meeting Agenda

Discuss agenda from proposed topics

Outcomes from Discussion topics

Discuss outcomes from agenda

Action Items

Create actionable items from proposed topics

Open Discussion Items

List any additional items below!

Proposed RFC Feature: Water Gem/Infinite Ocean

Originally in the Lumberyard engine there was an "Infinite Ocean" gem that could be piggy-backed on a game project making it possible to surround your terrain with water. The gem was very useful and provided a quick way to add water volume in the game environment. It is also one of the most important aspects of our game development for our studio.

We feel that continuing with O3DE should bring back those features where we left off in Lumberyard. However, in speaking to different representatives from the AWS/O3DE development team, we were advised to create an RFC with a proposal to bring back this feature, which we feel is necessary.

The features we had in mind surrounding the Infinite Ocean/Water Gem are:

  • Advanced rendering of waves and foam using a complex water shader driven by heightmap, animated normal map and foam created where the water volume intersects the terrain.
  • Inside the water volume, the shader provides a post-processing effect of rays of light that are always directed towards the main light source of the map/level.
  • Below its surface, the water gem/infinite ocean provides a light diffraction effect and blurriness. Additionally, the same effect also distorts the environment image in line with how the waves are moving (left and right).

  • The ability, through a variety of easily accessible properties, to steer the amount, size, and speed of the waves.
  • The ability through the Material Editor to bring up specific water-volume properties (see screenshot below).

The old Lumberyard documentation is located here.

The old "Water" gem was located for example under, C:\Amazon\Lumberyard\1.28.0.0\dev\Gems\Water and is a code-based gem that has some integration to the editor,

I made an attempt to convert it over on my own, but it refers to "OceanConstants.h", "OceanEnvironmentBus.h", and "I3DEngine.h". While the first two are a non-issue to bring over, "I3DEngine.h" in turn refers to "CryThreadSafeRendererContainer.h", which has been removed by the Core team as Atom replaces modules in the engine. However, looking at the old code, it is just basic functionality, and someone with Atom engine knowledge should be able to handle this core-engine class conversion to suit Atom's needs.

Underwater Effects Video:
Watch the video

Proposed SIG-Presentation meeting agenda for 2021-07-21

Meeting Details

The SIG-Presentation Meetings repo contains the history of past calls, including a link to the agenda, recording, notes, and resources.

SIG Updates

What happened since the last meeting?

Meeting Agenda

Discuss agenda from proposed topics

Outcomes from Discussion topics

Discuss outcomes from agenda

Action Items

Create actionable items from proposed topics

Open Discussion Items

List any additional items below!

Maintainer Nomination: HogJonny-AMZN

I would like to nominate: HogJonny-AMZN, to become a Maintainer on behalf of sig-graphics-audio. I verify that they have fulfilled the prerequisites for this role.

Reviewers & Maintainers that support this nomination should comment in this issue.

Proposed RFC Suggestion: Ground truth provider

Motivation

Currently, visual fidelity in O3DE is driven by existing state of the art implementations, but to grow O3DE to full maturity and utilize it as a platform to lead research and foster innovative development in the industry, we will need a way to validate our results against established ground truth solutions.

The current situation requires tedious manual processing and precise mirroring of the setup in external packages to reproduce the result. During this process, small errors can be introduced, generating false negatives or positives in the results.

O3DE

Cornell box rendered in O3DE with diffuse direct illumination enabled

Mitsuba

The same Cornell box rendered using Mitsuba with one light bounce

Overview

The system proposed here will allow a seamless integration of an offline renderer into O3DE to allow quick and easy comparison of the rendered visuals.

Theory of operation

Scenes are created directly in O3DE and are exported from within the engine to the (offline) ground truth provider. The system is designed to operate with minimal user input and provide repeatable results to be used as ground truth comparison points to validate visual results.

The ground truth provider: Mitsuba

The system proposed here is designed around Mitsuba. Mitsuba is a well-established open source rendering system (https://github.com/mitsuba-renderer/mitsuba), which makes it a prime candidate for our purpose.

Mitsuba is a research-oriented rendering system in the style of PBRT, from which it derives much inspiration.

In comparison to other open source renderers, Mitsuba places a strong emphasis on experimental rendering techniques, such as path-based formulations of Metropolis Light Transport and volumetric modeling approaches. Thus, it may be of genuine interest to those who would like to experiment with such techniques that haven't yet found their way into mainstream renderers, and it also provides a solid foundation for research in this domain.

In many cases, physically-based rendering packages force the user to model scenes with the underlying algorithm (specifically: its convergence behavior) in mind. For instance, glass windows are routinely replaced with light portals […]. One focus of Mitsuba will be to develop path-space light transport algorithms, which handle such cases more gracefully.

The O3DE scene exporter

The system is designed to function with minimal user input and process scenes automatically.

It will present itself as a scene-level component in which the user can set up the export properties, such as the camera setup and export path.

For the few Atom components where an automatic conversion cannot be achieved, specialized components could be implemented to provide the required properties.

Discussion points

User defined materials

Most existing materials can be trivially represented by a model already available in Mitsuba, but representing user defined materials can be quite complex. Evaluating the shader could be done by transpiling the AZSL into its C++ counterpart, or by forcing the user to provide a custom Mitsuba implementation for their material.

LookDev library

In many cases, testing a scene side by side is too broad of a comparison. A specialized LookDev testing environment, which can be exported to validate material properties independently of other effects in a scene, should exist so users can test, for instance, lights, GI bounce behavior, materials, or camera properties (maybe even post-processes such as DoF) independently.

Built-in metrics

A range of different metrics exist to validate results from comparisons, a simple one being PSNR. However, the aim of an algorithm might be to be biased on purpose while retaining the highest perceptual quality. Showing several of these indicators can give a user an overview of whether an algorithm in O3DE has different strengths or weaknesses depending on the context in which it is measured.
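
For reference, PSNR is computed from the mean squared error (MSE) between the reference image and the test image, where MAX is the maximum possible pixel value:

$$ \mathrm{PSNR} = 10 \, \log_{10}\left(\frac{\mathrm{MAX}^2}{\mathrm{MSE}}\right) $$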

Proposed SIG-Graphics-Audio meeting agenda for 2022-02-16

Meeting Details

  • Date/Time: February 16, 2022 @ 6:00pm UTC / 10:00 am PST
  • Location: Discord SIG-Graphics-Audio Voice Room
  • Moderator: JonB (wintermute-motherbrain)
  • Note Taker: JonB (wintermute-motherbrain)

The SIG-Graphics-Audio Meetings repo contains the history of past calls, including a link to the agenda, recording, notes, and resources.

SIG Updates

  • Decoupling materials from core lighting
  • Gemification
  • AtomsampleViewer PreCheckin Wizard #29

Meeting Agenda

Discuss agenda from proposed topics

Reviewer Nomination: #28

Outcomes from Discussion topics

Action Items

  • Coding standards should be linked to reviewer/maintainer nominations

Open Discussion Items

Proposed RFC Feature - Remixable Material Types

Summary:

Re-structure the material type file format to allow large portions of the file to be reused in multiple material types. To achieve this, we need to formalize the concept of a material property group, allow property groups to contain other property groups, and allow property groups to be defined in their own JSON files separate from the material type.

(Note that "Remixable Material Types" is not necessarily the official name of a particular feature set. It's a working-title to describe this body of work.)

What is the relevance of this feature?

O3DE ships with several material types including StandardPBR (1197 lines), StandardMultilayerPBR (3100 lines), EnhancedPBR (1679 lines), and Skin (1098 lines). These include roughly 1000 lines of JSON that are repeated 6 times. If we could factor out this duplication, it would reduce the total lines from 7092 to about 2500. More importantly, it will allow users to define new custom material types without further duplicating these files, and more easily receive fixes and improvements that are made to the core files.

Feature design description:

The end goal is to use the new JSON "$import" feature (see o3de/sig-core#14) to factor out the common parts of various .materialtype files. The common parts of these files are mostly related to the property layout, and that information is scattered across several sections of the .materialtype file, so we first need to reorganize the file to co-locate related property definition data. Also, there are some related updates that would naturally coincide with this change, including nested property groups and flattening the .material file format. All of this is described in detail below, listed in roughly the order in which it needs to be implemented.

Note that all these changes can and should be made in a way that preserves backward compatibility with existing files.

Comprehensive Property Groups

Currently, the material property configuration is spread throughout the material type file, separated by configuration type. All property groups are in one section of the file, individual property definitions are in another section, and functors that process these properties are in another. This will make it difficult to factor out information about a subgroup of material properties.

Current .materialtype Format (click to expand)
{
    "propertyLayout": {
        "groups": [
            {
                "id": "baseColor",
                "displayName": "Base Color",
                "description": "Properties for configuring the surface reflected color for dielectrics or reflectance values for metals."
            },
            {
                "id": "metallic",
                "displayName": "Metallic",
                "description": "Properties for configuring whether the surface is metallic or not."
            },
			...
        ],
        "properties": {
            "baseColor": [
                {
                    "id": "color",
                    "displayName": "Color",
                    "description": "Color is displayed as sRGB but the values are stored as linear color.",
                    "type": "Color",
                    "defaultValue": [ 1.0, 1.0, 1.0 ],
                    "connection": {
                        "type": "ShaderInput",
                        "id": "m_baseColor"
                    }
                },
                {
                    "id": "factor",
                    "displayName": "Factor",
                    "description": "Strength factor for scaling the base color values. Zero (0.0) is black, white (1.0) is full color.",
                    "type": "Float",
                    "defaultValue": 1.0,
                    "min": 0.0,
                    "max": 1.0,
                    "connection": {
                        "type": "ShaderInput",
                        "id": "m_baseColorFactor"
                    }
                },
                {
                    "id": "textureMap",
                    "displayName": "Texture Map",
                    "description": "Base color texture map",
                    "type": "Image",
                    "connection": {
                        "type": "ShaderInput",
                        "id": "m_baseColorMap"
                    }
                },
                {
                    "id": "useTexture",
                    "displayName": "Use Texture",
                    "description": "Whether to use the texture map.",
                    "type": "Bool",
                    "defaultValue": true
                },
                ...
            ],
            "metallic": [
                {
                    "id": "factor",
                    "displayName": "Factor",
                    "description": "This value is linear, black is non-metal and white means raw metal.",
                    "type": "Float",
                    "defaultValue": 0.0,
                    "min": 0.0,
                    "max": 1.0,
                    "connection": {
                        "type": "ShaderInput",
                        "id": "m_metallicFactor"
                    }
                },
                ...
            ],
            ...
        }
    },
    "shaders": [
        ...
    ],
    "functors": [
        ...
        {
            "type": "UseTexture",
            "args": {
                "textureProperty": "baseColor.textureMap",
                "useTextureProperty": "baseColor.useTexture",
                "shaderOption": "o_baseColor_useTexture"
            }
        },
        {
            "type": "UseTexture",
            "args": {
                "textureProperty": "metallic.textureMap",
                "useTextureProperty": "metallic.useTexture",
                "shaderOption": "o_metallic_useTexture"
            }
        },
        ...
    ]
}

Instead, we need to combine all configuration for related properties into one section. This will eventually allow them to be factored out together. In the example below, there is now one property group for all Base Color properties. It also includes the UseTexture functor, which can reference the properties assuming a local scope like just "textureMap" instead of "baseColor.textureMap".

.materialtype Format Rearranged Into Property Groups (click to expand)
{
    "propertyLayout": {
        "propertyGroups": [
            {
                "id":  "baseColor",
                "displayName": "Base Color",
                "description": "Properties for configuring the surface reflected color for dielectrics or reflectance values for metals."
                "properties": [
                    {
                        "id": "color",
                        "displayName": "Color",
                        "description": "Color is displayed as sRGB but the values are stored as linear color.",
                        "type": "Color",
                        "defaultValue": [ 1.0, 1.0, 1.0 ],
                        "connection": {
                            "type": "ShaderInput",
                            "id": "m_baseColor"
                        }
                    },
                    {
                        "id": "factor",
                        "displayName": "Factor",
                        "description": "Strength factor for scaling the base color values. Zero (0.0) is black, white (1.0) is full color.",
                        "type": "Float",
                        "defaultValue": 1.0,
                        "min": 0.0,
                        "max": 1.0,
                        "connection": {
                            "type": "ShaderInput",
                            "id": "m_baseColorFactor"
                        }
                    },
                    {
                        "id": "textureMap",
                        "displayName": "Texture Map",
                        "description": "Base color texture map",
                        "type": "Image",
                        "connection": {
                            "type": "ShaderInput",
                            "id": "m_baseColorMap"
                        }
                    },
                    {
                        "id": "useTexture",
                        "displayName": "Use Texture",
                        "description": "Whether to use the texture map.",
                        "type": "Bool",
                        "defaultValue": true
                    },
                    ...
                ],
                "functors": [
                    {
                        "type": "UseTexture",
                        "args": {
                            "textureProperty": "textureMap",
                            "useTextureProperty": "useTexture",
                            "shaderOption": "o_baseColor_useTexture"
                        }
                    }
                ]
            },
            {
                "id":  "metallic",
                "displayName": "Metallic",
                "description": "Properties for configuring whether the surface is metallic or not.",
                "properties": [
                    {
                        "id": "factor",
                        "displayName": "Factor",
                        "description": "This value is linear, black is non-metal and white means raw metal.",
                        "type": "Float",
                        "defaultValue": 0.0,
                        "min": 0.0,
                        "max": 1.0,
                        "connection": {
                            "type": "ShaderInput",
                            "id": "m_metallicFactor"
                        }
                    },
                    ...
                ],
                "functors": [
                    {
                        "type": "UseTexture",
                        "args": {
                            "textureProperty": "textureMap",
                            "useTextureProperty": "useTexture",
                            "shaderOption": "o_metallic_useTexture"
                        }
                    }
                ]
            },
            ...
        ]
    },
    "shaders": [
        ...
    ],
    "functors": [
       ...
    ]
}

Nested Property Groups

The StandardMultilayerPBR material type is essentially three layers of standard PBR properties that get blended together. Each of these layers is identical and should eventually be factored out. In the end, we want the same property group definition to be used once in StandardPBR and used three times, once for each of the layers in StandardMultilayerPBR. This is difficult with the current file format because it only has two levels in the property hierarchy: property groups and properties. So we name each group like "layer1_baseColor", "layer2_baseColor", etc., and each of these has the same standard base color properties. Similarly, each of these properties connects to a ShaderResourceGroup (SRG) field using a layer number prefix like "m_layer1_m_baseColorFactor", "m_layer2_m_baseColorFactor", etc.

Multilayer Material Type Using Current Format (click to expand)
{
    "propertyLayout": {
        "groups": [
            ...
            {
                "id": "layer1_baseColor",
                "displayName": "Layer 1: Base Color",
                "description": "Properties for configuring the surface reflected color for dielectrics or reflectance values for metals."
            },
            {
                "id": "layer2_baseColor",
                "displayName": "Layer 2: Base Color",
                "description": "Properties for configuring the surface reflected color for dielectrics or reflectance values for metals."
            },
            {
                "id": "layer3_baseColor",
                "displayName": "Layer 3: Base Color",
                "description": "Properties for configuring the surface reflected color for dielectrics or reflectance values for metals."
            },
            ...
        ],
        "properties": {
            "layer1_baseColor": [
                {
                    "id": "color",
                    "displayName": "Color",
                    "description": "Color is displayed as sRGB but the values are stored as linear color.",
                    "type": "Color",
                    "defaultValue": [ 1.0, 1.0, 1.0 ],
                    "connection": {
                        "type": "ShaderInput",
                        "id": "m_layer1_m_baseColor"
                    }
                },
                {
                    "id": "factor",
                    "displayName": "Factor",
                    "description": "Strength factor for scaling the base color values. Zero (0.0) is black, white (1.0) is full color.",
                    "type": "Float",
                    "defaultValue": 1.0,
                    "min": 0.0,
                    "max": 1.0,
                    "connection": {
                        "type": "ShaderInput",
                        "id": "m_layer1_m_baseColorFactor"
                    }
                },
                ...
            ],
            "layer2_baseColor": [
                {
                    "id": "color",
                    "displayName": "Color",
                    "description": "Color is displayed as sRGB but the values are stored as linear color.",
                    "type": "Color",
                    "defaultValue": [ 1.0, 1.0, 1.0 ],
                    "connection": {
                        "type": "ShaderInput",
                        "id": "m_layer2_m_baseColor"
                    }
                },
                {
                    "id": "factor",
                    "displayName": "Factor",
                    "description": "Strength factor for scaling the base color values. Zero (0.0) is black, white (1.0) is full color.",
                    "type": "Float",
                    "defaultValue": 1.0,
                    "min": 0.0,
                    "max": 1.0,
                    "connection": {
                        "type": "ShaderInput",
                        "id": "m_layer2_m_baseColorFactor"
                    }
                },
                ...
            ],
            ...
        }
    },
    "shaders": [ 
        ...
    ],
    "functors": [
        ...
        {
            "type": "UseTexture",
            "args": {
                "textureProperty": "layer1_baseColor.textureMap",
                "useTextureProperty": "layer1_baseColor.useTexture",
                "shaderOption": "o_layer1_o_baseColor_useTexture"
            }
        },
        {
            "type": "UseTexture",
            "args": {
                "textureProperty": "layer2_baseColor.textureMap",
                "useTextureProperty": "layer2_baseColor.useTexture",
                "shaderOption": "o_layer2_o_baseColor_useTexture"
            }
        },
        ...
    ]
}

In order to remove the "layerN_" prefixes from the property group names, we will add support for nested property groups. Each property group can contain any number of property subgroups. Thus layer1 will be a property group that contains property groups for baseColor, metallic, etc., and each of those will contain their relevant properties.

To remove the "layerN_" prefixes from the property connections, each property group can specify a prefix that will be automatically attached to the SRG names or shader option names within each property definition. So in the example below, the layer1.baseColor.factor property specifies a ShaderInput name "m_baseColorFactor"; because the layer has shaderInputPrefix "m_layer1_", the property will be connected to the SRG field called "m_layer1_m_baseColorFactor".

Multilayer Material Type Using Nested Property Groups (click to expand)
{
    "propertyLayout": {
        "propertyGroups": [
            {
                "id":  "layer1",
                "shaderInputsPrefix": "m_layer1_",
                "shaderOptionsPrefix": "o_layer1_",
                "displayName":  "Layer 1",
                "description":  "Material properties for the first layer, to be blended with other layers.",
                "propertyGroups": [
                    {
                        "id":  "baseColor",
                        "displayName":  "Base Color",
                        "description":  "Properties for configuring the surface reflected color for dielectrics or reflectance values for metals.",
                        "properties": [
                            {
                                "id": "color",
                                "displayName": "Color",
                                "description": "Color is displayed as sRGB but the values are stored as linear color.",
                                "type": "Color",
                                "defaultValue": [ 1.0, 1.0, 1.0 ],
                                "connection": {
                                    "type": "ShaderInput",
                                    "id": "m_baseColor"
                                }
                            },
                            {
                                "id": "factor",
                                "displayName": "Factor",
                                "description": "Strength factor for scaling the base color values. Zero (0.0) is black, white (1.0) is full color.",
                                "type": "Float",
                                "defaultValue": 1.0,
                                "min": 0.0,
                                "max": 1.0,
                                "connection": {
                                    "type": "ShaderInput",
                                    "id": "m_baseColorFactor"
                                }
                            },
                            {
                                "id": "textureMap",
                                "displayName": "Texture Map",
                                "description": "Base color texture map",
                                "type": "Image",
                                "connection": {
                                    "type": "ShaderInput",
                                    "id": "m_baseColorMap"
                                }
                            },
                            {
                                "id": "useTexture",
                                "displayName": "Use Texture",
                                "description": "Whether to use the texture map.",
                                "type": "Bool",
                                "defaultValue": true
                            },
                            ...
                        ],
                        "functors": [
                            {
                                "type": "UseTexture",
                                "args": {
                                    "textureProperty": "textureMap",
                                    "useTextureProperty": "useTexture",
                                    "shaderOption": "m_useTexture"
                                }
                            }
                        ]
                    },
                    ...
                ]
            },
            {
                "id":  "layer2",
                "shaderInputsPrefix": "m_layer2",
                "shaderOptionsPrefix": "o_layer2",
                "displayName":  "Layer 2",
                "description":  "Material properties for the second layer, to be blended with other layers.",
                "propertyGroups": [
                    ////////////////////////////////////////////////////////
                    // This section will be identical to the layer1 propertyGroups above
                ]
            },
            ...
        ]
    },
    "shaders": [
        ...
    ],
    "functors": [
        ...
    ]
}

Flatten the .material file format

Currently, the .material file specifies property values in a two-level hierarchy of property groups and property values, with the group name and property name being specified separately. By flattening this into a single layer of full property names (i.e. "groupName.propertyName") we can support an arbitrarily deep nesting of property groups. (Note that this is easier than just expanding the nesting in the current format, because of the way the json serialization system works. It also makes it easier for developers to search .material files for specific properties by their full name).

The old format will still be supported for backward compatibility, but any new files created by the Material Editor will use the new format.

Current .material File Format (click to expand)
{
    "materialType": "Materials\\Types\\StandardPBR.materialtype",
    "properties": {
        "baseColor": {
            "textureMap": "Textures/Default/default_basecolor.tif"
        },
        "normal": {
            "textureMap": "Textures/Default/default_normal.tif"
        },
        "roughness": {
            "textureMap": "Textures/Default/default_roughness.tif"
        }
    }
}
Flattened .material File Format (click to expand)
{
    "materialType": "Materials\\Types\\StandardPBR.materialtype",
    "properties": {
        "baseColor.textureMap": "Textures/Default/default_basecolor.tif",
        "normal.textureMap": "Textures/Default/default_normal.tif"
        "roughness.textureMap": "Textures/Default/default_roughness.tif"
    }
}

Factor Out Core Material Types

Finally, we can update StandardPBR, EnhancedPBR, StandardMultilayerPBR, and Skin material types to share common property definitions. Some minor restructuring of properties might be needed to create exact alignment between the property groups. Then the common portions can be moved to separate json files using the $import feature of the json serialization system.
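
For illustration, a shared property group could live in its own .json file and be pulled into each material type that needs it. The sketch below is only an assumption about how this might look; the file names are hypothetical and the exact placement and syntax of the $import directive will be settled during implementation.

// BaseColorPropertyGroup.json (hypothetical shared file)
{
    "id": "baseColor",
    "displayName": "Base Color",
    "description": "Properties for configuring the surface reflected color for dielectrics or reflectance values for metals.",
    "properties": [
        ...
    ],
    "functors": [
        ...
    ]
}

// A material type referencing the shared file (sketch)
{
    "propertyLayout": {
        "propertyGroups": [
            { "$import": "BaseColorPropertyGroup.json" },
            { "$import": "MetallicPropertyGroup.json" },
            ...
        ]
    },
    "shaders": [
        ...
    ]
}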

Some portions of this refactoring could be done in phases as the above features come online.

Technical design description:

Much of the work described here is already in progress on a branch. We have made significant updates to the MaterialTypeSourceData class and added a clean API for accessing property groups (all the data was publicly accessible before). The work in progress looks something like this...

class MaterialTypeSourceData
{
...
    PropertyGroup* AddPropertyGroup(AZStd::string_view propertyGroupId);
    PropertyDefinition* AddProperty(AZStd::string_view propertyId);
    
    const PropertyGroup* FindPropertyGroup(AZStd::string_view propertyGroupId) const;
    const PropertyDefinition* FindProperty(AZStd::string_view propertyId) const;
    
    bool EnumeratePropertyGroups(const EnumeratePropertyGroupsCallback& callback) const; // Run a callback function for each property group
    bool EnumerateProperties(const EnumeratePropertiesCallback& callback) const;     // Run a callback function for each property
    
...
}

The structure of the PropertyGroup class naturally supports nesting by giving each PropertyGroup a list of PropertyGroups.

class PropertyGroup
{
...
    PropertyDefinition* AddProperty(AZStd::string_view name);
    PropertyGroup* AddPropertyGroup(AZStd::string_view name);
    
    const AZStd::string& GetName() const;
    const AZStd::string& GetDisplayName() const;
    const AZStd::string& GetDescription() const;
    const PropertyList& GetProperties() const;
    const AZStd::vector<AZStd::unique_ptr<PropertyGroup>>& GetPropertyGroups() const;
    
private:
    AZStd::vector<AZStd::unique_ptr<PropertyDefinition>> m_properties;
    AZStd::vector<AZStd::unique_ptr<PropertyGroup>> m_propertyGroups;
...
}
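
As a usage sketch of this in-progress API (assuming that dotted ids address nested groups, as suggested by the accessors above; exact signatures and ownership semantics may change):

// Build a nested property layout with the work-in-progress API shown above.
MaterialTypeSourceData materialType;

// Create a top-level "layer1" group and a "baseColor" group nested inside it.
PropertyGroup* layer1 = materialType.AddPropertyGroup("layer1");
PropertyGroup* baseColor = layer1->AddPropertyGroup("baseColor");

// Add a property to the nested group; its full id becomes "layer1.baseColor.factor".
PropertyDefinition* factor = baseColor->AddProperty("factor");

// The same property can later be looked up through the full, dot-separated id.
const PropertyDefinition* found = materialType.FindProperty("layer1.baseColor.factor");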

FAQ

What are the advantages of the feature?

  • Reduced duplication
  • Reduced maintenance overhead
  • Reduced opportunity for type-specific bugs
  • Ensures consistency for the users across multiple material types
  • You can fix or improve core material types in one place. Other material types can automatically benefit from the improvements.

What are the disadvantages of the feature?

  • The implementation of MaterialTypeSourceData has to maintain old code paths for backward compatibility.
  • Sharing property layouts could make it a bit more difficult to make certain customizations; you might need to inline some shared data before modifying it.

How will this be implemented or integrated into the O3DE environment?

These changes will be transparent to most users. Anyone who has made custom .materialtype files might want to update them to use the new format or import common property groups. This isn't required though, as the current format will continue to be supported.

Are there any alternatives to this feature?

  • Instead of using the JSON $import feature, we could formalize the Property Group even more, with a bespoke reference in the .materialtype to a specialized PropertyGroup asset. This may be desirable in the future if we someday bundle shader code together with a property group.

How will users learn this feature?

After completing this work, we will provide updated documentation that shows how to author .materialtype files. We can also provide a document that shows how to convert existing .materialtype files to the new format.

Are there any open questions?

  • How soon do we really need this?
  • Are there many customers who are already making custom material types?

Out Of Scope:

The following related features could be pursued after implementing the "Remixable Material Types" concept, but they are considered out of scope for this RFC and would need their own respective RFCs.

Property Group Version Auto Updates

We are currently planning a feature that will automatically update .material files when a .materialtype file changes. For example, if a property "baseColor.map" is renamed to "baseColor.texture", the .materialtype can provide a procedure for updating any older .material file that still uses the old name, thus ensuring backward compatibility. It would be useful to do this at the property-group level as well, so that any material type using a property group will inherit the upgrade procedure for that property group.
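
As a hypothetical sketch only (the keys shown are assumptions, and the actual schema would be defined by that feature rather than this RFC), a rename procedure could travel with the shared property group definition so that every material type importing the group inherits it:

{
    "id": "baseColor",
    "displayName": "Base Color",
    "version": 2,
    "versionUpdates": [
        {
            "toVersion": 2,
            "actions": [
                { "op": "rename", "from": "map", "to": "texture" }
            ]
        }
    ],
    "properties": [
        ...
    ]
}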

This is quite a bit more complex than supporting version updates for the .materialtype. At the material type level, the main downstream data is the .material file, which is relatively easy to update. But since the .materialtype consumes the property group data, any renames at the property group level will have to be applied to functors within the material type. These functors are often implemented by Lua scripts which reference properties by name. Applying renames or other transformations inside a Lua script would be prohibitively complex. We will have to change our Lua script API to not directly reference properties, but rather expose data slots, and those slots could be connected to properties from inside the .materialtype file. Then it would be straightforward to provide mechanisms for editing those connections into and out of the Lua functor scripts.

This will likely be out of scope for the initial version of remixable material types. For now, users can write conversion procedures at the top .materialtype level, and will have to manually update those procedures if any of the imported property groups are changed.

Material Multiple Inheritance

In the existing system, materials can inherit property values from any other material that shares the same material type. So in the case of StandardMultilayerPBR, these materials can only inherit from other StandardMultilayerPBR materials. But this material type functions as three layers of StandardPBR that get blended together. It would be reasonable to allow StandardMultilayerPBR materials to inherit from three different StandardPBR material files, one as the parent for each of its three layers. This would allow game teams to maintain a large library of StandardPBR materials and combine those into multilayer materials, and any changes made to the StandardPBR materials would be automatically applied to any multilayer materials where they are used.

This feature is somewhat complex, probably deserves its own RFC, and is therefore out of scope for the initial version of remixable material types. It will be important for artist workflows that use multilayer materials extensively, and should be considered a high priority follow-up, especially if there are game teams with this need.

Proposed RFC Suggestion: Unit Tests and Code Coverage

Summary:

The objective of unit testing is the robust, early detection of regressions, allowing correction before issues integrate with other systems. If we allow regressions to persist, consumers of our code may inadvertently code to them, degrading the quality of the product.

What is the relevance of this feature?

Automation supports higher confidence in the quality of our product. Code coverage is a measurable proxy for confidence in code quality. While code coverage does not equate to code quality, the level of coverage correlates with perceived confidence in quality. Our Atom editor unit test code coverage is below 42%, with some individual DLLs below 12%; the common industry objective is greater than 80%. All software has defects; given that we are not covering our code, it is possible that defects exist in the uncovered areas.

Increased unit test code coverage

Unit tests are designed to be quick to run and can provide rapid verification of changes by any contributor to O3DE, both as a local development verification and as a gated AR element. This represents the lowest cost solution in both implementation and ongoing maintenance. Emphasis should be placed on both increasing coverage and designing meaningful tests.
Investigating regressions found by unit tests is a much easier process than investigating a regression failure in full-stack integration tests which load the Editor. Unit tests tell you exactly where the failure occurred and which expectation was not met. This makes the return on investment from unit tests much higher than that of costly-to-implement full-stack integration tests. We still need integration tests; however, where possible we should have unit tests.

How to accomplish increased unit test coverage:

  • Unit test creation training hosted as a SIG special meeting with a curriculum created by SIG-Testing and experienced developers
  • Create O3DE community documentation covering how to write unit tests and the expectations for including them with contributions
  • When fixing a defect, the fix pull request should include a unit test that would detect a regression of the defect, if possible (see the sketch after this list).
  • Developers should critically review pull requests for inclusion of unit tests.
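
As a minimal illustration of the kind of regression test a defect fix could carry, the sketch below uses GoogleTest-style macros (the test framework O3DE's AzTest wraps); the function under test and its values are hypothetical stand-ins.

#include <gtest/gtest.h>

#include <algorithm>

// Hypothetical function under test, standing in for the code that was fixed.
static float ClampPropertyValue(float value, float minValue, float maxValue)
{
    return std::min(std::max(value, minValue), maxValue);
}

// Regression tests: pin the boundary behavior so any future change that breaks
// clamping is caught immediately by the unit test run.
TEST(MaterialPropertyClampTest, ValueAboveMax_ReturnsMax)
{
    EXPECT_FLOAT_EQ(1.0f, ClampPropertyValue(1.5f, 0.0f, 1.0f));
}

TEST(MaterialPropertyClampTest, ValueBelowMin_ReturnsMin)
{
    EXPECT_FLOAT_EQ(0.0f, ClampPropertyValue(-0.5f, 0.0f, 1.0f));
}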

Code coverage collection

Code coverage is a helpful indication of confidence in quality; however, unit tests do not tell you whether your code functions correctly from an end-user perspective, only whether the units of code behave as asserted. It is possible to have 95% coverage and still have a significant bug count once units of code integrate with other systems. Conversely, with low coverage numbers we cannot have confidence that the units of code are correct beyond compilation; the quality state is unknown.
Code coverage obtained manually is a critical source of gap analysis to identify where to add tests. Using line coverage information, you can identify areas that would benefit from additional unit tests.
This proposal does not stipulate a minimum bar for coverage. Some areas of code are not practical to unit test since they would require mocking significant 3rd party dependencies or have other complexities that don't easily lend themselves to unit tests.

How to accomplish code coverage collection:

  • Developers should consider installing the OpenCppCoverage Visual Studio plugin and checking line coverage to identify opportunities to add unit tests and cover uncovered code (a command-line sketch follows below).
  • Developers should include information about code coverage in pull request descriptions.
  • Pull requests that add code should demonstrably increase code coverage or contribute to coverage of the included code.

https://marketplace.visualstudio.com/items?itemName=OpenCppCoverage.OpenCppCoveragePlugin
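
The standalone command-line tool can also generate a coverage report for a test executable. The paths below are placeholders, and the exact flags should be verified against the OpenCppCoverage documentation:

OpenCppCoverage.exe --sources <path to the source folders to measure> --export_type html:<output folder> -- <path to the test executable>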

Maintainer Nomination: hershey5045



Nomination Guidelines

Reviewer Nomination Requirements
6+ contributions successfully submitted to O3DE
100+ lines of code changed across all contributions submitted to O3DE
2+ O3DE Reviewers or Maintainers that support promotion from Contributor to Reviewer
Requirements to retain the Reviewer role: 4+ Pull Requests reviewed per month

Maintainer Nomination Requirements
Has been a Reviewer for 2+ months
8+ reviewed Pull Requests in the previous 2 months
200+ lines of code changed across all reviewed Pull Requests
2+ O3DE Maintainers that support the promotion from Reviewer to Maintainer
Requirements to retain the Maintainer role: 4+ Pull Requests reviewed per month


Reviewer/Maintainer Nomination

Fill out the template below including nominee GitHub user name, desired role and personal GitHub profile

I would like to nominate: hershey5045, to become a Maintainer on behalf of sig-graphics-audio. I verify that they have fulfilled the prerequisites for this role.

Reviewers & Maintainers that support this nomination should comment in this issue.

Proposed SIG-Graphics-Audio meeting agenda for 2022-06-22

Meeting Details

The SIG-Graphics-Audio Meetings repo contains the history of past calls, including a link to the agenda, recording, notes, and resources.

SIG Updates

  • See previous meeting notes here: #43
  • Can we close meeting agenda issues?

Meeting Agenda

Outcomes from Discussion topics

Action Items

  • Discuss MaterialCanvas work with hushaoping[huawei] as he has a team willing to develop a similar Material Editor tool.

Open Discussion Items

List any additional items below!

Proposed RFC OpenXR

OpenXR

Summary:

OpenXR is an open, royalty-free API standard from Khronos, providing engines with native access to a range of XR devices. It provides access to the following API concepts, and we will need to add support for them in a modular fashion within O3DE:

  • XrSpace: A representation of 3D space
  • XrInstance: A representation of the OpenXR runtime
  • XrSystemId: A representation of the devices
  • XrActions: A representation of user inputs
  • XrSession: A representation of the session between the app and the user
  • XrSwapChain: A representation of the XR swapchain

What is the relevance of this feature?

This feature will allow O3DE to work with VR (virtual reality), AR (augmented reality), and MR (mixed reality). Initially this work will only focus on OpenXR with Vulkan and DX12, but it can be expanded to include other XR backends. As a start, this document will mostly focus on the Quest 2 as a testing platform.

Goals
This document aims to provide a framework for setting up XR related functionality that will need to work with Atom (the O3DE renderer) in an abstract, modular manner. As part of setting up this framework we want to adhere to the following goals.

  • Clean XR API accessible to all gems that require XR functionality.
  • Iterative development
  • Modularity across multiple XR backends
  • Debugging support
  • Profiling tools
  • Supported device - Quest 2. Eventually any device that supports OpenXR with Vulkan and DX12 should work.

Technical design description:

Proposed Framework

To provide a high-level understanding, three diagrams are attached, related to:

  • Gem structure
  • UML diagram of interaction for XR and Atom gems
  • XR render pipeline

Gem structure
[Diagram: Gem structure]

The XR gem is responsible for implementing any interface (we could have more than one) needed by other gems. The interface will provide access to XR related data as well as functions to update the data as needed. In this document we will mostly be focusing on the rendering interface. This gem will hold common objects which will be extended by the backend gems like OpenXrVk, OpenXrDx12, etc. This will allow the XR gem to contain any common functionality that exists across all XR backends. The XR gem will have no idea which backend is running under the hood; to achieve that, it will use the Factory pattern.

At a high level the design will be set up such that only XR gems will include Atom gems and not vice versa. The RPI will provide an XR specific rendering interface which is then implemented by the XR gems. This interface will be specified within XRSystemInterface and will live within the RPI gem. This way Atom will not need to have a dependency on the XR tech stack and should still work if the concrete implementation for this interface is not provided. With this design, if the binaries related to XR gems are modified it will have no impact on binaries for Atom gems. This header file can be set up as a HEADERONLY module called RPI.XR.Interface within RPI's cmake file. The XRSystem class within the XR gem will extend from this interface and provide the implementation as described below.

For this document we are only focusing on the Vulkan implementation via the OpenXrVk gem, but later on it will be easy to add support for DX12 via an OpenXrDx12 gem, or even other backends for more devices if needed.

UML diagram for XR related functionality
The color of each class indicates which gem it will live in.
[Diagram: UML of XR related functionality]

  • Blue = XR gem
  • Orange = OpenXrVk gem
  • Green = RPI gem
  • Purple = RHI gem
  • Pink = RHI::Vk gem

XRSystemInterface - Interface related to XR rendering; it lives within RPI. If in the future a different gem (other than the XR gem discussed here) wants to provide an implementation for this interface, it can do so.

XRSystem (Singleton) - The class implementing all the interface functionality. More details with code are provided further in this document. This class will hold all the XR objects, and each XR object will be extended and backed by a class that will live in another gem. For example, XR::Instance will be extended by OpenXrVk::Instance. Each XR object will also inherit from AZStd::intrusive_refcount in order to attain refcount support. The XR version of the objects will contain all the common code plus any validation code related to that object. Further down I have added code with comments to better explain the purpose of each object and how it will interact with other objects.

OpenXrVk::SystemComponent - This will act as a way to register OpenXrVk as a Factory at runtime, and we can then use XR::FactoryManagerSystemComponent to pick which factory to use (based on which RHI is picked by Atom). This will ensure we do not pick an incorrect factory for a platform. For example, we do not want to register OpenXrVk on Mac. It will provide a way to create objects that are from the OpenXrVk namespace (see the registration sketch after these definitions).

RHI::XR::XXXDescriptors - The definitions of these descriptors will live on the RHI side but they will be populated by the XR gems. This will ensure that RHI gems have no dependency on XR gems. The XR gem will include RHI gem headers and the OpenXrVk gem will include RHI::Vk gem headers. That way OpenXrVk gem code can cast a RHI::XXXDescriptor object to RHI::Vulkan::XXXDescriptor and populate it accordingly.
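
A registration sketch for the factory wiring described above (the names follow the pseudocode in this document; the component boilerplate and exact lifecycle functions are assumptions and are abbreviated here):

namespace OpenXrVk
{
    // Sketch: the OpenXrVk system component registers a concrete factory with the
    // XR gem when it activates and removes it when it deactivates. XR code then
    // creates objects through XR::Factory::Get() without knowing which backend is used.
    class SystemComponent
        : public AZ::Component
        , public XR::Factory
    {
    public:
        void Activate() override { XR::Factory::Register(this); }
        void Deactivate() override { XR::Factory::Unregister(this); }

        // One of the factory overrides: return the OpenXrVk-backed instance object.
        AZStd::intrusive_ptr<XR::Instance> CreateXRInstance() override
        {
            return Instance::Create();
        }

        // ... the remaining Create* overrides follow the same pattern
    };
}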

XR Render Pipeline (No view instancing)
[Diagram: XR render pipeline]

Above is an example of what a render pipeline for a frame may look like initially. It is small, yet big enough to cover most of the use cases we may encounter for a scene on a VR device. It has support for skinning, shadows, direct lighting, PBR shading, sky box, transparent objects (i.e. particles), tonemapping, and UI. Since the pipeline will be data driven, it should be easy to modify or create new ones for a game's specific needs.

BeginXRViewPass/EndXRViewPass - These passes are set up so that RHI can call into XR (as part of the execution phase of the framegraph) and synchronize XR swapchain images. Part of this synchronization would contain calls like xrAcquireSwapchainImage, xrWaitSwapchainImage and xrReleaseSwapchainImage for multiple views.

CopyToXRSwapChain - This pass will be used to copy the final image to the XR swapchain image for a given view.

Initially we will duplicate all the CPU Pass work for multiple views as that will be easy to add and will not require core changes to any of the current rendering features. Further down the line we can look into adding MultiView/View instancing support whereby we just have one pipeline and it will be able to do the work for multiple views and write out to multiple render target textures. This will require much bigger changes within RPI and RHI space and hence should be considered as a separate feature to be added later on.

XRSystemInterface (pseudocode)
As part of writing this document it was easier for me to just write pseudocode in order to explain how all the code within the XR gems will look. I have added comments within the pseudocode to provide more context.

Once the XR gem is activated it will initialize the XRSystem and then register itself with the RPI gem. The initialization may involve checking some criteria which may evolve later on. As a start it can just be as simple as checking a command line parameter (-xr=openxr). We should add an enum to the XRSystemInterface header which will allow us to capture the result of all XRSystem calls. The backend XR gems implementing the XR interface will return this ResultCode for most of the calls and allow the XR gem to take action based on the return code.

namespace RPI::XR
{
    enum class ResultCode : uint32_t
    {
        // The operation succeeded.
        Success = 0,
        
        // The operation failed with an unknown error.
        Fail,
       
        // The operation failed because the feature is unimplemented on the particular platform.
        Unimplemented,
       
        // The operation failed due to invalid arguments.
        InvalidArgument,
    }
}

XR::XRSystemInterface pseudo code - Here is a possible starting interface used by the XR gem. This is mostly based on OpenXR functionality.

namespace RPI::XR
{
    class XRSystemInterface
    {
        static XRSystemInterface* Get();

        XRSystemInterface() = default;
        virtual ~XRSystemInterface() = default;
        
        // Creates the XR::Instance which is responsible for managing 
        // XrInstance (amongst other things) for OpenXR backend
        // Also initializes the XR::Device
        virtual ResultCode InitializeSystem() = 0;
        
        // Create a Session and other basic session-level initialization.
        virtual XR::ResultCode InitializeSession() = 0;
        
        // Start of the frame related XR work
        virtual void BeginFrame() = 0;
        
        // End of the frame related XR work
        virtual void EndFrame() = 0;
        
        // Start of the XR view related work
        virtual void BeginXRView() = 0;
        
        // End of the XR view related work
        virtual void EndXRView() = 0;

        // Manage session lifecycle to track if RenderFrame should be called.
        virtual bool IsSessionRunning() const = 0;
    
        // Create a Swapchain which will be responsible for managing
        // multiple XR swapchains and multiple swapchain images within it
        virtual void CreateSwapchain() = 0;

        // This will allow the XR gem to provide device related data to RHI
        virtual RHI::XR::DeviceDescriptor* GetDeviceDescriptor() = 0;

        // Provide access to instance specific data to RHI
        virtual RHI::XR::InstanceDescriptor* GetInstanceDescriptor() = 0;

        // Provide Swapchain specific data to RHI
        virtual RHI::XR::SwapChainImageDescriptor* GetSwapChainImageDescriptor(uint32_t swapchainIndex) = 0;

        // Provide access to Graphics Binding specific data that RHI can populate
        virtual RHI::XR::GraphicsBindingDescriptor* GetGraphicsBindingDescriptor() = 0;
    };
}

XR::XRSystem pseudo code - This will be implementing the Interface described above. It is a singleton so it can be accessed by other gems for anything related to XR.


    // XRSystem will be the singleton that will act as a frontend to everything XR
    // related. It will have API to allow creation of XR specific objects and 
    // provide access to it's data. This class will also inherit from  SystemTickBus
    // in order to tick xr input
    class XRSystem: public XRSystemInterface
            , public AZ::SystemTickBus::Handler
    {
    public:
    
        virtual ~XRSystem() = default;
   
        // Accessor functions for RHI objects that are populated by backend XR gems
        // This will allow the XR gem to provide device related data to RHI
        RHI::XR::DeviceDescriptor* GetDeviceDescriptor() override
        {
            return m_deviceDesc.get();
        }
        
        // Provide access to instance specific data to RHI
        RHI::XR::InstanceDescriptor* GetInstanceDescriptor() override
        {
            return m_instanceDesc.get();
        }
        
        // Provide Swapchain specific data to RHI
        RHI::XR::SwapChainImageDescriptor* GetSwapChainImageDescriptor(uint32_t swapchainIndex) override
        {
            return m_swapchainDesc.get();
        }
        
        // Provide access to Graphics Binding specific data that RHI can populate
        RHI::XR::GraphicsBindingDescriptor* GetGraphicsBindingDescriptor() override
        {
            return m_gbDesc.get();
        }
        
        // Access supported Layers and extension names
        const AZStd::vector<AZStd::string>& GetXRLayerNames(){..}
        const AZStd::vector<AZStd::string>& GetXRExtensionNames(){..}
         
        // Create XR instance object and initialize it            
        ResultCode InitInstance() 
        {
            m_instance = Factory::Get()->CreateXRInstance();

            if(m_instance)
            {
                return m_instance->InitInstanceInternal();
            }
            return ResultCode::Fail;
        }
        
        // Create XR device object and initialize it 
        ResultCode InitDevice() 
        {
            m_device = Factory::Get()->CreateXRDevice();

            //Get a list of XR compatible devices
            AZStd::vector<AZStd::intrusive_ptr<PhysicalDevice>> physicalDeviceList =
                  Factory::Get()->EnumerateDeviceList();
            
            //Code to pick the correct device. 
            //For now we can just pick the first device in the list
            
            if(m_device)
            {
                return m_device->InitDeviceInternal();
            }
            return ResultCode::Fail;
        }
        
        // Initialize XR instance and device 
        ResultCode InitializeSystem() override
        {
            ResultCode instResult = InitInstance();
            if(instResult != ResultCode::Success)
            {
               AZ_Assert(false, "XR Instance creation failed");
               return instResult;
            }
            
            ResultCode deviceResult = InitDevice();
            if(deviceResult != ResultCode::Success)
            {
               AZ_Assert(false, "XR device creation failed");
               return deviceResult;
            }
            return ResultCode::Success;
        }
        
        // Initialize a XR session 
        ResultCode InitializeSession(AZStd::intrusive_ptr<GraphicsBinding> graphicsBinding) override
        {
            m_session = Factory::Get()->CreateXRSession();
            
            if(m_session)
            {
                Session::SessionDescriptor sessionDesc;
                m_gbDesc = Factory::Get()->CreateGraphicsBindingDescriptor();
                sessionDesc.m_graphicsBinding = RPISystem::Get()->PopulateGraphicsBinding(m_gbDesc);
                ResultCode sessionResult = m_session->Init(sessionDesc);
                AZ_Assert(sessionResult==ResultCode::Success, "Session init failed");
                
                m_input = Factory::Get()->CreateXRInput();
                return m_input->InitializeActions(); 
            } 
            return ResultCode::Fail;  
        }
   
        // Manage session lifecycle to track if RenderFrame should be called.
        bool IsSessionRunning() const override
        {
            return m_session->IsSessionRunning();
        }
   
        // Create a Swapchain which will be responsible for managing
        // multiple XR swapchains and multiple swapchain images within it
        ResultCode CreateSwapchain() override
        {
            m_swapChain = Factory::Get()->CreateSwapchain();
            
            if(m_swapChain)
            {
                ResultCode swapchainCreationResult = m_swapChain->Init();
                AZ_Assert(swapchainCreationResult == ResultCode::Success, "Swapchain init failed");
                return swapchainCreationResult;
            }
            return ResultCode::Fail; 
        }
        
        // Indicate start of a frame
        void BeginFrame() override
        {
            ..
        }
        
        // Indicate end of a frame
        virtual void EndFrame() override
        {
            ..
        }
        
        // Indicate start of a XR view to help with synchronizing XR swapchain
        virtual void BeginXRView() override
        {
            ..
        }
        
        // Indicate end of a XR view to help with synchronizing XR swapchain
        virtual void EndXRView() override
        {
            ..
        }
        
    private:
        ResultCode InitInstance();
        
        //System Tick to poll input data
        void OnSystemTick() override
        {
            m_input->PollEvents();
            if (m_exitRenderLoop) 
            {
                return;
            }

            if (IsSessionRunning()) 
            {
                m_input->PollActions();
            }
        }
        
        AZStd::intrusive_ptr<Instance> m_instance;
        AZStd::intrusive_ptr<Device> m_device;
        AZStd::intrusive_ptr<Session> m_session;
        AZStd::intrusive_ptr<Input> m_input;
        AZStd::intrusive_ptr<SwapChain> m_swapChain;
        bool m_requestRestart = false;
        bool m_exitRenderLoop = false;
        AZStd::intrusive_ptr<RHI::XR::DeviceDescriptor> m_deviceDesc;
        AZStd::intrusive_ptr<RHI::XR::InstanceDescriptor> m_instanceDesc;
        AZStd::intrusive_ptr<RHI::XR::SwapChainDescriptor> m_swapchainDesc;
        AZStd::intrusive_ptr<RHI::XR::GraphicsBindingDescriptor> m_gbDesc;
    };
    
    XRSystemInterface* XRSystemInterface::Get()
    {
         return Interface<XRSystemInterface>::Get();
    }
}

Since XRSystem inherits from AZ::SystemTickBus::Handler it will be able to use OnSystemTick to poll input as shown in the pseudocode above.

XR::Factory pseudo code
As explained above, we will have XR objects which are implemented by OpenXR gems like OpenXrVk. In order to help get this working we can set up an XR factory that is extended by the SystemComponent within OpenXrVk.

The factory will act as an interface for creating XR objects. It will be a singleton and be accessed by a call to XR::Factory::Get(). The OpenXrVk::SystemComponent will be responsible for creating OpenXrVk objects by calling the static function Create that resides in all the OpenXrVk objects.

namespace XR
{
    //! Interface responsible for creating all the XR objects which are 
    //! internally backed by concrete objects
    class Factory
    {
    public:
        Factory();
        virtual ~Factory() = default;

         AZ_DISABLE_COPY_MOVE(Factory);
      
         //! Registers the global factory instance.
         static void Register(Factory* instance);

         //! Unregisters the global factory instance.
         static void Unregister(Factory* instance);

         //! Access the global factory instance.
         static Factory& Get();
         
         //Create XR::Instance object
         virtual AZStd::intrusive_ptr<Instance> CreateXRInstance() = 0;
         
         //Create XR::Device object
         virtual AZStd::intrusive_ptr<Device> CreateXRDevice() = 0;
         
         //Return a list of XR::PhysicalDevice 
         virtual AZStd::vector<AZStd::intrusive_ptr<PhysicalDevice>> EnumerateDeviceList() = 0;
         
         //Create XR::Session object
         virtual AZStd::intrusive_ptr<Session> CreateXRSession() = 0;
         
         //Create XR::Input object
         virtual AZStd::intrusive_ptr<Input> CreateXRInput() = 0;
         
         //Create XR::SwapChain object
         virtual AZStd::intrusive_ptr<SwapChain> CreateSwapchain() = 0; 
         
         //Create XR::ViewSwapChain object
         virtual AZStd::intrusive_ptr<ViewSwapChain> CreateViewSwapchain() = 0;  
         
         //Create RHI::XR::GraphicsBindingDescriptor that will contain
         //renderer information needed to start a session
         virtual AZStd::intrusive_ptr<RHI::XR::GraphicsBindingDescriptor> CreateGraphicsBindingDescriptor() = 0;  
    }
}

Below are all the XR objects which will define higher level common functionality. XR objects have no idea which gem will be implementing the XR backend. These objects should hopefully encapsulate most of the XR specific data, and the API around these objects would be subject to change in the future. Most of these objects are self explanatory and do not require further clarification around why they are needed. For deeper insight, look at the code for the OpenXrVk version of these objects.

namespace XR
{
    // XR::Instance class. It will be responsible for collecting all the data like 
    // form factor, physical device etc that will be needed to initialize an instance
    class Instance
    {
        class InstanceDescriptor
        {
            //Form Factor enum
            //XR::PhysicalDevice* physicalDevice
        };
        
        virtual XR::ResultCode InitInstanceInternal() = 0; 
    }
    
    // This class will be responsible for iterating over all the compatible physical
    // devices and picking one that will be used for the app
    class PhysicalDevice
    {
        struct PhysicalDeviceDescriptor
        {
            AZStd::string m_description;
            uint32_t m_deviceId = 0;
            //Other data related to device
        };
        PhysicalDeviceDescriptor m_descriptor;
    }
    
    // This class will be responsible for creating XR::Device instance 
    // which will then be passed to the renderer to be used as needed. 
    class Device
    {
        struct DeviceDescriptor
        {
            //XR::PhysicalDevice* physicalDevice
        };
        
        virtual XR::ResultCode InitDeviceInternal(DeviceDescriptor descriptor) = 0;
        DeviceDescriptor m_descriptor;
    }
    
    // This class will be responsible for creating XR::Session and 
    // all the code around managing the session state
    class Session
    {
    public:    
        struct SessionDescriptor
        {
            // Graphics Binding will contain renderer related data to start an XR session
            GraphicsBinding* m_graphicsBinding;
        };
        
        ResultCode Init(SessionDescriptor sessionDesc)
        {
            return InitSessionInternal(sessionDesc);
        }
        
        bool IsSessionRunning() const 
        {
            return m_sessionRunning;
        }
        
        virtual bool IsSessionFocused() const = 0;
        virtual ResultCode InitSessionInternal(SessionDescriptor descriptor) = 0;
    private:
        
        SessionDescriptor m_descriptor;
        bool m_sessionRunning = false;
    }
   
    // This class will be responsible for creating XR::Input
    // which manage event queue or poll actions
    class Input
    {
    public:    
        struct InputDescriptor
        {
            Session* m_session;
        };
    
         ResultCode Init(InputDescriptor descriptor)
         {
            m_session = descriptor.m_session;
            return InitInternal();
         }
         
         virtual void PollActions() = 0;         
         virtual ResultCode InitInternal() = 0;
    private:
         AZStd::intrusive_ptr<Session> m_session;
    }
    
    // This class will be responsible for creating multiple XR::SwapChain::ViewSwapchains
    // (one per view). Each XR::SwapChain::ViewSwapchain will then be responsible 
    // for managing and synchronizing multiple swapchain images 
    class SwapChain
    {
    public:
    
         class Image
         {
            struct ImageDescriptor
            {
                uint16_t m_width;
                uint16_t m_height;
                uint16_t m_arraySize;
            }
            ImageDescriptor m_descriptor;
         }
         class ViewSwapChain
         {
              //! All the images associated with this ViewSwapChain
              AZStd::vector<AZStd::intrusive_ptr<Image>> m_images;
              
              //! The current image index.
              uint32_t m_currentImageIndex = 0;
         }    
         
         //! Returns the view swapchain related to the index
         ViewSwapChain* GetViewSwapChain(const uint32_t swapchainIndex) const;

         //! Returns the image associated with the provided image 
         //! index and view swapchain index
         Image* GetImage(uint32_t imageIndex, uint32_t swapchainIndex) const;
         
         ResultCode Init()
         {
            return InitInternal();
         }
         
         virtual ResultCode InitInternal() = 0;
    private:
        
          AZStd::vector<AZStd::intrusive_ptr<ViewSwapChain>> m_viewSwapchains;    
    }
    
    // This class will be responsible for managing XR Space
    class Space
    {
    public:
        virtual ResultCode InitInternal() = 0;
    }
    
}

OpenXRVk::XX pseudo code - This code will be part of the OpenXrVk gem. Below is an example of one possible backend implementation for XR functionality.

namespace OpenXrVk
{
    // Class that will help manage XrInstance
    class Instance : public XR::Instance
    {
    
    public:
        static AZStd::intrusive_ptr<Instance> Create();
        XR::ResultCode InitInstanceInternal() override
        {
            ....
            // xrCreateInstance(m_xrInstance);
            // xrGetSystem(m_systemId)
            // vkCreateInstance(m_instance)
            ...
        }
        
        
    private:
        XrInstance m_xrInstance{ XR_NULL_HANDLE };
        AZStd::vector<XrApiLayerProperties> m_layers;
        AZStd::vector<XrExtensionProperties> m_extensions;
        XrFormFactor m_formFactor{ XR_FORM_FACTOR_HEAD_MOUNTED_DISPLAY };
        XrSystemId m_systemId{ XR_NULL_SYSTEM_ID };
        VkInstance m_instance = VK_NULL_HANDLE;
    }
    
    
    // Class that will help manage VkPhysicalDevice
    class PhysicalDevice: public XR::PhysicalDevice
    {
    
    public:
        static AZStd::intrusive_ptr<PhysicalDevice> Create();
        XR::ResultCode InitInstanceInternal() override
        {
            ....
            //xrGetVulkanGraphicsDeviceKHR
            ...
        }
    private:
        VkPhysicalDevice m_physicalDevice;  
    }
    
    
    // Class that will help manage VkDevice
    class Device: public XR::Device
    {
    
    public:
        static AZStd::intrusive_ptr<Device> Create();
        XR::ResultCode InitDeviceInternal(XR::PhysicalDevice& physicalDevice) override
        {
            ....
            // Create Vulkan Device
            
            ...
        }
    private:
        VkDevice m_nativeDevice;
        
    }
   
    
    // Class that will help manage XrSession
    class Session: public XR::Session
    {   
     public: 
        static AZStd::intrusive_ptr<Session> Create();  
        XR::ResultCode InitSessionInternal(SessionDescriptor descriptor) override
        {
            ..
            //AZStd::intrusive_ptr<GraphicsBinding> gBinding = static_cast<GraphicsBinding>(descriptor.m_graphicsBinding);
            //xrCreateSession(..m_session,gBinding,..)
            ..
        }
        void LogReferenceSpaces()
        {
            //..xrEnumerateReferenceSpaces/
        }
        void HandleSessionStateChangedEvent(const XrEventDataSessionStateChanged& stateChangedEvent, bool* exitRenderLoop, bool* requestRestart)
        {
            //Handle Session state changes
        }
        
        XrSession GetSession()
        {
            return m_session;
        }
        
        bool IsSessionFocused() const override
        {
            return m_sessionState == XR_SESSION_STATE_FOCUSED;
        }
        
        ResultCode InitInternal()
        {
            //Init specific code
        }
     private:
        
        XrSession m_session{ XR_NULL_HANDLE };
        // Application's current lifecycle state according to the runtime
        XrSessionState m_sessionState{XR_SESSION_STATE_UNKNOWN};
        XrFrameState m_frameState{ XR_TYPE_FRAME_STATE };
    }
    
    // Class that will help manage XrSpace's'
    class Space: public XR::Space
    {
    public:
        static AZStd::intrusive_ptr<Space> Create(); 
    
        XrSpaceLocation GetSpace(XrSpace space)
        {
          .. 
          //xrLocateSpace
          ..
        }
        ResultCode InitInternal()
        {
            //Init specific code
        }
    private:
        
        
        XrSpace m_baseSpace{ XR_NULL_HANDLE };
    }
    
    // Class that will help manage XrSwapchain
    class SwapChain: public XR::SwapChain
    {
    public:
        static AZStd::intrusive_ptr<SwapChain> Create();
        
        class Image : public XR::SwapChain::Image
        {
        public:
            static AZStd::intrusive_ptr<Image> Create();
        private:
            VkImage m_image;
            XrSwapchainImageBaseHeader* m_swapChainImageHeader;    
        }
        
        class ViewSwapChain : public XR::SwapChain::ViewSwapChain
        {
        public:
            static AZStd::intrusive_ptr<ViewSwapChain> Create();
            ResultCode Init(XrSwapchain handle, uint32_t width, uint32_t height);
        private:
            XrSwapchain m_handle;
            int32_t m_width;
            int32_t m_height;      
        }
        
        ResultCode InitInternal() override
        {
             ..
             // xrEnumerateViewConfigurationViews
             ..
             for (int i = 0; i < views; i++)
             {
                xrCreateSwapchain
                AZStd::intrusive_ptr<ViewSwapChain> vSwapChain = Factory::Get()->CreateViewSwapchain();
                
                if(vSwapChain)
                {
                    xrCreateSwapchain(.., xrSwapchainHandle, .)
                    vSwapChain->Init(xrSwapchainHandle,..);
                    m_viewSwapchains.push_back(vSwapChain);
                }
             }
             ..
        }
    private:
        AZStd::vector<XrViewConfigurationView> m_configViews;
        AZStd::vector<XrView> m_views;
        int64_t m_colorSwapchainFormat{ -1 };
    }
    
    // Class that will help manage XrActionSet/XrAction
    class Input: public XR::Input
    {
    public:   
        static AZStd::intrusive_ptr<Input> Create();
        
        ResultCode Init() override
        {
            InitializeActions();
        }
        
        void InitializeActions() override
        {
            ..
            // Code to populate m_input
            // xrCreateActionSet
            // xrCreateAction
            // xrCreateActionSpace
            // xrAttachSessionActionSets
        }
        
        void PollActions() override
        {
            ..
            // xrSyncActions
            ..
            
        }
        
        void PollEvents() override
        {
            ..
            // .. m_session->HandleSessionStateChangedEvent
            ..
        }
   
       
    private:
       struct InputState
        {
            XrActionSet actionSet{ XR_NULL_HANDLE };
            XrAction grabAction{ XR_NULL_HANDLE };
            XrAction poseAction{ XR_NULL_HANDLE };
            XrAction vibrateAction{ XR_NULL_HANDLE };
            XrAction quitAction{ XR_NULL_HANDLE };
           AZStd::array<XrPath, Side::COUNT> handSubactionPath;
            AZStd::array<XrSpace, Side::COUNT> handSpace;
            AZStd::array<float, Side::COUNT> handScale = { { 1.0f, 1.0f } };
            AZStd::array<XrBool32, Side::COUNT> handActive;
        };
        InputState m_input;
    };
}

We will also need to add support for proper validation logging. This will allow us to better debug error codes across the XR API. For example, a function like this will be needed to log errors whenever an XR call returns a non-successful result code.


bool IsSuccess(XrResult result)
{
    if (result != XR_SUCCESS)
    {
        AZ_Error("XR", false, "ERROR: XR API method failed: %s", GetResultString(result));
        return false;
    }
    return true;
}
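
As a minimal usage sketch of how calls would be wrapped (only xrBeginSession and the IsSuccess helper above come from the source; the variable names and ResultCode value are illustrative assumptions):

// Hedged usage sketch: wrap an OpenXR call so a failure is logged once and
// converted into the engine's result handling instead of being silently ignored.
// m_session and sessionBeginInfo are assumed to be set up by the surrounding class.
XrResult result = xrBeginSession(m_session, &sessionBeginInfo);
if (!IsSuccess(result))
{
    // Assumption: the surrounding function reports failure via an engine ResultCode.
    return ResultCode::Fail;
}
return ResultCode::Success;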

RPI
The following work will need to be done at the RPI level:

  • The RPI core gem will be responsible for initializing the XRSystem. Once the device is created, it will also work with the RHI backend to populate the GraphicsBindingDescriptor that is used to create an XR session. The pseudocode for this work looks like this:
class RPISystem
{
    void RegisterXRSystem(XRSystemInterface* xrSystem)
    {
        m_xrSystem = xrSystem;
    }

    void Initialize()
    {
        ..
        ..
        //RHI::Vulkan::XR::DeviceDescriptor* deviceDesc = m_xrSystem->GetDeviceDescriptor();
        //pass the device descriptor to RHI for device creation
        ..
        
        //Session creation
        if (m_xrSystem)
        {
            RHI::XR::GraphicsBindingDescriptor* graphicsBindingDesc = m_xrSystem->GetGraphicsBindingDescriptor();
            m_rhiSystem->PopulateGraphicsBinding(graphicsBindingDesc);
            m_xrSystem->InitializeSession(graphicsBindingDesc);
        }
        ..
        
    }
    
    AZStd::intrusive_ptr<XRSystem> m_xrSystem;
};

  • We will also need to set up a special XR swapchain pass that is able to call BeginXRView/EndXRView during the framegraph execution phase (a rough sketch follows this list).
  • A separate XR pipeline will need to be created and duplicated for the left and right eyes.
  • ViewSRG and View C++ code will need to be modified to support multiple views. FOV/orientation data will need to be queried via XRSystem and passed to the view SRG. We could add View data to the bindless heap and index into it from the shader; that way, view indices can just be added as root constants.
  • Swapchain specific RHI image views will need to be created with the Image object extracted from XR.
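
A rough sketch of such a pass follows. All of the type and function names below are hypothetical stand-ins, not existing Atom or XR gem APIs; the point is only how the framegraph execution of one view gets bracketed by BeginXRView/EndXRView.

#include <cstdint>

// Hypothetical interface the XR gem would expose to the render pipeline.
struct IXRSession
{
    virtual ~IXRSession() = default;
    virtual bool IsSessionFocused() const = 0;
    virtual void BeginXRView(uint32_t viewIndex) = 0;   // e.g. 0 = left eye, 1 = right eye
    virtual void EndXRView(uint32_t viewIndex) = 0;
};

// Sketch of a swapchain pass that brackets its framegraph execution with BeginXRView/EndXRView.
class XRSwapChainPass
{
public:
    XRSwapChainPass(IXRSession* session, uint32_t viewIndex)
        : m_session(session), m_viewIndex(viewIndex) {}

    void Execute()
    {
        if (!m_session || !m_session->IsSessionFocused())
        {
            return; // Skip XR work when the session is not active.
        }
        m_session->BeginXRView(m_viewIndex);
        // ... record the draw work for this view here ...
        m_session->EndXRView(m_viewIndex);
    }

private:
    IXRSession* m_session = nullptr;
    uint32_t m_viewIndex = 0;
};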

RHI

  • FrameScheduler - Modify RHISystem::FrameUpdate to check whether FrameScheduler::BeginFrame succeeded. If it did not, skip calling Compile or Execute on the FrameScheduler (i.e. the framegraph). A sketch of this check follows this list.
  • Add an API to accept a Device from the XR gem.
  • Add an API to generate an RHI::ImageView from the XR::Image passed in from the RPI.
    • We could modify the RHI::Image API to skip creation of the native object and have it accept the one from XR::Image.
    • RHI::ImageView would then just be created as normal. Any issues related to this will be caught in Step 1 of the development plan below.
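
A minimal sketch of the FrameScheduler check mentioned in the first bullet above. The types here are simplified stand-ins for the real RHI classes, not the actual Atom API:

// Simplified stand-ins; only the control flow matters for this sketch.
enum class ResultCode { Success, Fail };

struct FrameScheduler
{
    ResultCode BeginFrame() { return ResultCode::Success; } // The real version may fail, e.g. when the XR runtime says the frame should be skipped.
    void Compile() {}
    void Execute() {}
};

void FrameUpdate(FrameScheduler& frameScheduler)
{
    // Only compile and execute the framegraph when BeginFrame succeeded.
    if (frameScheduler.BeginFrame() == ResultCode::Success)
    {
        frameScheduler.Compile();
        frameScheduler.Execute();
    }
}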

Debugging

  • PC support - To facilitate debugging while working on XR, we should still maintain a window and swapchain on PC. The idea is that when someone removes the VR device, the app should automatically switch to the PC path by using the PC swapchain instead of the XR swapchain. The app can check every frame whether the XR session is still active: while it is, we use the XR swapchains, but as soon as the device is removed from the head the RPI should detect this and switch to the PC swapchain, making the transition seamless (a rough sketch follows this list). The PC swapchain support could be added behind a macro so we don't ship this code as part of an Oculus app.
  • DX12 vs Vulkan - We should consider adding XR support to both DX12 and Vulkan instead of just Vulkan. Given that DX12 is more widely used, it is usually more stable and more performant, and hence it would provide stable builds for day-to-day XR development work. Having said that, it will add to the required capacity, which is a concern at the moment, so we can consider dropping DX12 initially.
  • Having builds across DX12, Vulkan, and Android will also help isolate issues that are RHI-backend specific vs platform specific. For development purposes, PC builds will be the most used in this case.
  • Oculus provides RenderDoc support, so we should be able to do GPU captures natively on device to help debug GPU-specific rendering artifacts.
  • Add support for debug text specific to VR rendering.
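
A per-frame sketch of the PC fallback described in the first bullet above. The enum, function, and macro names are illustrative assumptions, not existing engine APIs:

// Illustrative sketch of the per-frame swapchain selection.
enum class PresentTarget { XRSwapChain, PCSwapChain };

PresentTarget SelectPresentTarget(bool xrSessionActive)
{
    // Checked once per frame: render to the XR swapchain while the headset is active,
    // and fall back to the regular PC swapchain as soon as the device is removed.
#if defined(ENABLE_PC_SWAPCHAIN_FALLBACK) // Hypothetical macro so the PC path can be stripped from device builds.
    return xrSessionActive ? PresentTarget::XRSwapChain : PresentTarget::PCSwapChain;
#else
    (void)xrSessionActive;
    return PresentTarget::XRSwapChain;
#endif
}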

Profiling

Performance

1. Multi View / View instancing support
As part of the performance work, the first big change we should make is to implement “Multi View” support within Atom. This will essentially allow us to avoid duplicating the CPU work related to all the passes for the left and right eyes. We should be able to add support that will allow the passes to write out to texture arrays. This will be a fairly significant change to Atom.
DX12 - https://microsoft.github.io/DirectX-Specs/d3d/ViewInstancing.html
VK - https://www.khronos.org/registry/vulkan/specs/1.3-extensions/man/html/VK_KHR_multiview.html
Metal - https://developer.apple.com/documentation/metal/render_passes/rendering_to_multiple_viewports_in_a_draw_command?language=objc

XR Render Pipeline (MultiView support)
FinalXRRenderPipeline
High level RHI changes

  • PSO API changes to support view instance declaration (a rough sketch follows this list).
  • CommandList API changes to support the view instance mask.
  • Shader changes to support SV_ViewID. This will impact shaders for any pass that processes view-specific data.
    • AZSLc support for SV_ViewID.
  • Investigate a tiered approach to view instancing.
  • This will require deeper prototyping in order to better understand all the requirements.
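
A minimal sketch of what the RHI-level abstraction for the first two bullets could look like. These are hypothetical additions, not current Atom RHI types; the backends would map them to D3D12 view instancing or VK_KHR_multiview respectively:

#include <cstdint>

// Hypothetical addition to a pipeline state descriptor: how many views a draw is instanced across.
struct PipelineStateDescriptorForDraw
{
    // 2 for stereo rendering (left/right eye).
    uint32_t m_viewInstanceCount = 1;
};

// Hypothetical CommandList addition: select which of the declared views a draw actually renders to.
class CommandList
{
public:
    void SetViewInstanceMask(uint32_t mask) { m_viewInstanceMask = mask; }

private:
    uint32_t m_viewInstanceMask = 0x1; // Bit N enables view N; 0x3 renders to both eyes.
};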

High level RPI

  • A new pipeline, as shown above, which will be much more performant.
  • This will require changes to all the passes and to how the render target for each pass that deals with view data is set up.

2. VRS (Foveated rendering) support - This will require a separate doc on VRS abstraction

3. Mobile-specific optimizations (via O3DE scalability work) - We can use scalability to disable features within the XR pipeline or lower per-feature configuration settings to help yield better performance. For example:

  • Lower Rendering scale
  • Half Float support in the forward pass.
    • This can be achieved by introducing a new type "float_auto" and using AZSLc to emit float or half in the final HLSL code based on the platform
  • Faster Shadow filtering
  • Reduced Shadow sample count
  • Only allow directional shadows
  • Baked shadows?
  • Baked lighting (Direct)?
  • Disable Area lights
  • XR-specific pipeline optimizations - This can be a variant of the low-end render pipeline. Deeper analysis of performance using native GPU tools.
  • Experiment with cheaper BRDF (optimized for mobile) - (Ref -> siggraph2015 Optimizing PBR)


What are the advantages of the feature?

  1. Framework to add support for AR/VR/MR
  2. Initially only supported on Quest 2

What are the disadvantages of the feature?

  1. More complexity across all the gems within Atom space
  2. Higher maintenance burden as we are essentially increasing our number of supported platforms
  3. A different XR render pipeline means more work to support it

How will this be implemented or integrated into the O3DE environment?

Recommended Development Plan in iterative steps

  1. RHI OpenXR sample with Vk renderer (using Oculus link)
    1. Add support for XR and OpenXrVk gems.
    2. RHI OpenXR sample within AtomSampleViewer
      1. XR initialization support
      2. Set up a simple XR pipeline (in C++) that renders a cube per XR space
      3. Extract view data per view and create a framegraph that will execute a simple XR pipeline twice. It could look like BeginXRView(Left eye)→RenderCube→EndXRView(Left eye)->BeginXRView(Right eye)→RenderCube→EndXRView(Right eye)
      4. Extract Positional data per space from XR gem and feed that to the RenderCube pass
  2. RHI sample support on Android (i.e. Quest 2)
    1. Add support for the sample to run on Android so we do not need to use Oculus Link.
  3. DX12 support (based on capacity). Drop this if needed.
  4. RPI support on PC
    1. Add the proper XR pipeline using the pass data (described in the doc above)
    2. Set up usage of this pipeline when XR is enabled
    3. Set up multi-view support within shaders and code
  5. RPI support on Android
  6. Multi View support on RHI sample (VK only at first)
  7. Multi View support integrated within RPI
  8. Profile and optimize as described under the Performance section

Are there any alternatives to this feature?

No alternatives at the moment

How will users learn this feature?

  1. Initially, this feature will only be added as support within the engine core.
  2. Eventually we will add editor-specific components (for example, XRCameraComponent) that will allow placing XR-specific content within a level.

Open questions?

  1. Should we add support for DX12 at the same time as Vulkan, or leave that for later? Adding DX12 earlier will help harden the XRSystemInterface API.
  2. What is our perf target this year and what kind of content will this perf target be attached to?

Proposed SIG-Graphics-Audio meeting agenda for 2022-03-16

Meeting Details

  • Date/Time: March 16, 2022 @ 6:00pm UTC / 10:00 am PST
  • Location: Discord SIG-Graphics-Audio Voice Room
  • Moderator: JonB (wintermute-motherbrain)
  • Note Taker: JonB (wintermute-motherbrain)

The SIG-Graphics-Audio Meetings repo contains the history of past calls, including a link to the agenda, recording, notes, and resources.

SIG Updates

Meeting Agenda

Outcomes from Discussion topics

Discuss outcomes from agenda

Action Items

Create actionable items from proposed topics

Open Discussion Items

List any additional items below!

SIG-Graphics-Audio Chair/Co-Chair Nominations 12/1 - 12/8 -- Elections 12/8 - 12/15

SIG chair / co-chair elections for 2022

Since the inception of O3DE, each SIG chair has been staffed as an interim position. It's time to hold some official elections, following some of the proposed guidance but with our own process due to the holiday season and in order to expedite the elections into next year.

The chair / co-chair roles

The chair and co-chair serve equivalent roles in the governance of the SIG and are only differentiated by title in that the highest vote-getter is the chair and the second-highest is the co-chair. The chair and co-chair are expected to govern together in an effective way and split their responsibilities to make sure that the SIG operates smoothly and has the availability of a chairperson at any time.

Unless distinctly required, the term "chairperson" refers to either/both of the chair and co-chair. If a chair or co-chair is required to perform a specific responsibility for the SIG they will always be addressed by their official role title.

In particular, if both chairpersons would be unavailable during a period of time, the chair is considered to be an on-call position during this period. As the higher vote-getter they theoretically represent more of the community and should perform in that capacity under extenuating circumstances. This means that if there is an emergency requiring immediate action from the SIG, the chair will be called to perform a responsibility.

Responsibilities

  • Schedule and proctor regular SIG meetings on a cadence to be determined by the SIG.
  • Serve as a source of authority (and ideally wisdom) with regards to O3DE SIG area of discipline. Chairpersons are the ultimate arbiters of many standards, processes, and practices.
  • Participate in the SIG Discord channel and on the GitHub Discussion forums.
  • Serve as a representative of the broader O3DE community to all other SIGs, partners, the governing board, and the Linux Foundation.
  • Represent the SIG to O3DE partners, the governing board, and the Linux Foundation.
  • Coordinate with partners and the Linux Foundation regarding official community events.
  • Represent (or select/elect representatives) to maintain relationships with all other SIGs as well as the marketing committee.
  • Serve as an arbiter in SIG-related disputes.
  • Coordinate releases with SIG Release.
  • Assist contributors in finding resources and setting up official project or task infrastructure monitored/conducted by the SIG.
  • Long-term planning and strategy for the course of the SIG area of discipline for O3DE.
  • Maintain a release roadmap for the O3DE SIG area of discipline.

Additionally, at this stage of the project, the SIG chairpersons are expected to act in the Maintainer role for review and merge purposes only, due to the lack of infrastructure and available reviewer/maintainer pool.

... And potentially more. Again, this is an early stage of the project and chair responsibilities have been determined more or less ad-hoc as new requirements and situations arise. In particular the community half of this SIG has been very lacking due to no infrastructural support, and a chairperson will ideally bring some of these skills.

Nomination

Nominations may be made either by a community member or by self-nomination. A nominee may withdraw from the election at any time, for any reason, until the election starts on 12/8.

Nomination requirements

For this election, nominees are required to have at minimum two merged submissions to http://github.com/o3de/o3de (must be accepted by 2022-01-31). This is to justify any temporary promotion to Maintainer as required by this term as chairperson. Submissions may be in-flight as of the nomination deadline (2021-12-08 12PM PT), but the nominee must meet the 2-merge requirement by the end of the election or they will be removed from the results.

Any elected chairperson who does not currently meet the Maintainer status will be required to work with contributors from the SIG to produce an appropriate number of accepted submissions by January 31, 2022 or they will be removed and another election will be held.

The only other nomination requirement is that the nominee agrees to be able to perform their required duties and has the availability to do so, taking into account the fact that another chairperson will always be available as a point of contact.

How to nominate

Nominations will be accepted for 1 week from 2021-12-01 12:00PM PT to 2021-12-08 12:00PM PT.
Nominate somebody (including yourself) by responding to this issue with:

  • A statement that the nominee should be nominated for a chair position in the specific SIG holding its election. Nominees are required to provide a statement that they understand the responsibilities and requirements of the role, and promise to faithfully fulfill them and follow all contributor requirements for O3DE.
  • The name under which the nominee should be addressed. Nominees are allowed to contact the election proctor to have this name changed.
  • The GitHub username of the nominee (self-nominations need not include this; it's on your post.)
  • Nominee's Discord username (sorry, but you must be an active Discord user if you are a chairperson.)

Election process

The election will be conducted for one week, from 2021-12-08 12:00PM PT to 2021-12-15 12:00PM PT, and held through an online poll. Votes will be anonymous and anyone invested in the direction of O3DE and the SIG holding the election may vote. If you choose to vote, we ask that you be familiar with the nominees.

If there is a current interim chair, they will announce the results in the Discord sig channel as well as the SIG O3DE mailing list no later than 2021-12-17 1:00PM PT. If there is no interim chair, the executive director will announce the results utilizing the same communication channels. At that time if there is a dispute over the result or concern over vote tampering, voting information will be made public to the extent that it can be exported from the polling system and the SIG will conduct an independent audit under the guidance of a higher governing body in the foundation.

The elected chairpersons will begin serving their term on 2022-01-01 at 12AM PT. Tentatively SIG chairs will be elected on a yearly basis. If you have concerns about wanting to replace chairs earlier, please discuss in the request for feedback on Governance.

Proposed RFC Feature - AtomSampleViewer Pre-Checkin Wizard

Summary:

AtomSampleViewer's automated screenshot tests are Atom's most exhaustive safety net against bugs and regressions. We want to formalize a process for developers to run these tests before every render-related merge and provide tools that help streamline this process.

What is the relevance of this feature?

There are a number of problems around testing the renderer that we would like to address with improved process and tools. Ultimately these problems stem from the fact that testing render results is hard to automate because of hardware and driver differences (and gremlins). We have to keep loose thresholds to avoid false positives, but this increases the risk of false negatives. And so we need humans involved in the testing process on a regular basis, using loose thresholds as a first line of defense and then using human inspection to identify false negatives.

  • We need a report that can be pasted into a PR description to provide assurance the developer thoroughly exercised the required ASV test cases.
  • We need regular human verification of screenshot results, to avoid false negatives.
  • We need the process to be simple enough that developers don't hate the checkin process. Stretch goal: can we make developers love the checkin process?
  • We should not rely on BeyondCompare, because not everyone has it.

Feature design description:

Let's create a "wizard" that walks the developer through a series of checks, helps them verify the most important screenshots, and ends with a brief report of the testing that was completed for inclusion in the PR.

Instead of running the automation normally, they will go to the Automation menu and pick "Run pre-commit wizard...". This will run the _fulltestsuite_ script first, then run through a series of screens for the user to interact with...

image

On the first screen we'll see a summary of the steps that will take place.

(in case you are wondering, the wizard image is Public Domain https://www.wpclipart.com/cartoon/mythology/wizard/wizard_angry.png.html )

image

After running the full test suite, the wizard will show a summary of the auto-test results. You can click a "See Details" button to open the normal Script Results dialog that we are all used to seeing.

image

Next, the user will be required to inspect and respond to a series of manual screenshot verifications. I'm not sure of the exact heuristic we'll use for determining which screenshots to present and how many; it's something we can iterate on. Here are some options I have in mind (a rough sketch combining the first three follows the list)...

  1. Sort the results from largest to smallest diff score.
  2. Continue presenting screenshots until the user reports "no visual difference".
  3. Continue presenting screenshots until some minimum diff score is reached, regardless of user response.
  4. Any other ideas?
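
As a rough sketch of how options 1-3 could combine (the result struct, threshold, and callback below are illustrative assumptions, not existing AtomSampleViewer code):

#include <algorithm>
#include <functional>
#include <string>
#include <vector>

// Illustrative per-screenshot result; ASV's real data model may differ.
struct ScreenshotResult
{
    std::string m_name;
    float m_diffScore = 0.0f; // Higher means more different from the baseline.
};

// Options 1-3 combined: sort by diff score, keep presenting until the user reports
// "no visual difference", but never stop before the diff score drops below a minimum.
// 'presentToUser' returns true if the user still sees a difference.
void RunScreenshotInspection(
    std::vector<ScreenshotResult> results,
    float minDiffScore,
    const std::function<bool(const ScreenshotResult&)>& presentToUser)
{
    std::sort(results.begin(), results.end(),
        [](const ScreenshotResult& a, const ScreenshotResult& b) { return a.m_diffScore > b.m_diffScore; });

    bool userStillSeesDifference = true;
    for (const ScreenshotResult& result : results)
    {
        if (!userStillSeesDifference && result.m_diffScore < minDiffScore)
        {
            break; // The user stopped seeing differences and the remaining scores are small.
        }
        userStillSeesDifference = presentToUser(result);
    }
}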

The screenshot evaluation screen will have two image swatches. The first will highlight significantly different pixels in red. The second will automatically flip back and forth between the expected baseline screenshot and the captured screenshot. The user can turn off the automatic flipping as needed, but auto-flip is the default. The user can drag and mousewheel to pan and zoom both swatches in sync.

The user must select one of the available options: "I don't see any difference", "I see a benign difference", "I see a difference that's probably benign", "This looks like a problem".

For each screenshot, we should provide a description of what's important in determining whether certain differences are benign. This will require us to update all our scripts, something we can do gradually. For example, for MaterialHotReloadTest I would say "It's important that the pattern and color match, as well as which words are shown at the bottom." So if there are differences in aliasing or sampling, the user should be able to read this description and pick "I see a benign difference".

We'll also have a button to quickly export artifacts that show the diff mask and the expected/actual images, so the results can be easily shared. Note this could take several forms: export an animated GIF where the expected/actual images flip back and forth, export an animated APNG (since GIF is limited to 256 colors), or just a still image that lines up the expected/actual/diff next to each other. The exact set of artifacts is something we can iterate on.

ScreenshotInspection

On the final screen, we'll show a summary of the auto test, a list of any auto failures, a summary of the interactive screenshot inspection, and a list of any issues that were reported. The "Copy to Clipboard" function is what we'll normally use, and paste the output into the PR description. We should also have buttons for re-opening any of the prior screenshot inspection screens, so the user can re-inspect and export anything they missed.

image

What are the advantages of the feature?

  • By requiring ASV results in every PR, we will catch issues sooner.
  • We can feel comfortable loosening the tolerances to ensure we don't have false positives, without worrying about introducing false negatives because of the human element. This will allow us to use more of these screenshots as gating test cases in the AR system.
  • Visually inspecting screenshots will be faster since diffs can be viewed in-tool.
  • We can better maintain our screenshot golden images because developers will notice drift in the results, like when small benign changes occur.
  • It will be easier to communicate about screenshot result issues, using the export artifacts feature.
  • By interactively walking the developer through the test procedure, it reduces the burden of onboarding new developers.

What are the disadvantages of the feature?

We have to trust developers, who might have differing opinions or experience about what counts as a benign difference, to inspect the diffs.

How will this be implemented or integrated into the O3DE environment?

These changes will be made in AtomSampleViewer, which lives in its own GitHub repo (https://github.com/o3de/o3de-atom-sampleviewer). Developers must have both the O3DE and AtomSampleViewer repos cloned locally. This is no different from the current operating environment for Atom developers.

Are there any alternatives to this feature?

We could implement something similar in a python tool that runs external to AtomSampleViewer, doing analysis on artifacts captured by AtomSampleViewer after-the-fact. This has the advantage of being able to run ASV on one platform (like mobile) and analyze the results on another platform (developer computer). It might also be easier to develop the UI in PySide Qt rather than ImGui. Disadvantage is that the user might have to jump around more between tools, or it could take longer to develop a streamlined integrated experience. Since we already have built-in systems in place for screenshot analysis, it would probably be easier to just extend those systems with these new features.

How will users learn this feature?

We will need to communicate the general process to the community through a wiki page or official documentation to point them in the right direction. But once they have figured out how to run ASV and open the wizard, then the wizard should guide them the rest of the way.

Are there any open questions?

  1. What are the best ways to export screenshot artifacts? Ideally we want something that the user can drop into the description of a PR on GitHub, or into Jira or on a wiki, and see an animation that flips back and forth between the expected and actual screenshots.
    1. Gif is the most portable but only supports 256 colors. This may be enough in some cases, but in others you might not be able to clearly see what the difference is.
    2. APNG is an unofficial extension to PNG. It supports 24bit color, and it is supported on the major web browsers and in GitHub. It is not supported in Jira or Confluence. It is not widely supported in image editing programs.
    3. WebP has wider support than APNG, but it is not supported on GitHub PRs.
    4. We could look into exporting some video format, but most of those are patent protected and not compatible with open source projects.
    5. We could output a single image that shows the screenshots side-by-side, also side-by-side with a heatmap that shows where the differences are. It just isn't ideal; it can be hard to really see the difference, even with a heatmap.
    6. We could output the two screenshots separately and expect the audience to load them in some tool where they can flip back and forth themselves. This is simple to implement but clunky for the audience.
    7. It would be nice if GitHub had some built-in still image comparison feature on PR descriptions. As an alternative, maybe image comparison could be done with some client-side JS plugin.
  2. What (if any) platform requirements should we impose? Historically at Amazon we expected tests to be run on DX12 and Vulkan. Some changes could impact other platforms, but we haven't required them before. Some contributors might be working on Linux and not have access to Windows. My recommendation is that we do not impose a blanket platform requirement; reviewers should address it on a case-by-case basis and use their own judgement.

Proposed SIG-Graphics-Audio meeting agenda for 2022-07-13

Meeting Details

The SIG-Graphics-Audio Meetings repo contains the history of past calls, including a link to the agenda, recording, notes, and resources.

SIG Updates

See previous meeting notes here: #54

Meeting Agenda

  • Discuss the roadmap items for sig-graphics-audio
  • Continue the discussion on pipeline-independent materials #54 (comment)
  • Discuss possible solutions to make the RT code aware of materials. Proposals include custom shaders that act as a drop-in for a material and can be sampled by the raytracing code, proxy geometry with vertex colors, or rendering out albedo textures that can be sampled (which, however, implies diffuse-only).
  • Pull in @dmcdiarmid-ly for a discussion on hybrid rendering. It is currently unclear when the RT acceleration structures are built. Does the user need to have a GI component in the scene, or what triggers this?

Outcomes from Discussion topics

Discuss outcomes from agenda

Action Items

Open Discussion Items

List any additional items below!

Maintainer Nomination: rgba16f

Nomination Details

  • Nominee: rgba16f
  • Role: Maintainer

Why this person is being nominated

  • Has a history of many contributions to O3DE
  • Has been a Reviewer since June
  • Participated in 140 PRs since June
  • Thousands of lines of code changed in o3de.

Outcome (For SIG Facilitator)

Voting Results

Maintainer Nomination: amzn-tommy


Nomination Guidelines

Maintainer Nomination Requirements:

  • Has been a Reviewer for 2+ months
  • 8+ reviewed Pull Requests in the previous 2 months
  • 200+ lines of code changed across all reviewed Pull Requests
  • 2+ O3DE Maintainers that support the promotion from Reviewer to Maintainer
  • Requirement to retain the Reviewer role: 4+ Pull Requests reviewed per month


I would like to nominate myself, @amzn-tommy, to become a Maintainer on behalf of sig-graphics-audio. I verify that I have fulfilled the prerequisites for this role.

Reviewers & Maintainers that support this nomination should comment in this issue.

Proposed SIG-Graphics-Audio meeting agenda for 2021-09-15

Meeting Details

  • Date/Time: September 15, 2021 @ 5:00pm UTC / 10:00am PDT
  • Location: Discord SIG-Graphics-Audio Voice Room
  • Moderator: JonB (wintermute-motherbrain)
  • Note Taker: JonB (wintermute-motherbrain)

The SIG-Graphics-Audio Meetings repo contains the history of past calls, including a link to the agenda, recording, notes, and resources.

SIG Updates

  • We are now known as sig-graphics-audio!

Meeting Agenda

  • Good first issues
  • Unit tests and code coverage for Atom (RFC)

Outcomes from Discussion topics

Discuss outcomes from agenda

Action Items

  • The sig decided that the RFC for unit test coverage needs more work and potentially needs to be decided at the steering committee level. @smurly will continue to iterate on the RFC and solicit feedback.
  • RFC o3de/o3de#3385 was approved and the feature has been added to the backlog. Additional information has been requested from the original author to gather more requirements.
  • @jeremyong-az presented his RFC for the material system. Initial response was positive but still warrants discussion. The group decided to discuss the RFC after finishing triage at the next triage session (9/22).

Open Discussion Items

  • RFC for the material system is still being openly discussed.
  • This RFC has still not been discussed.

List any additional items below!

SIG-Graphics-Audio Feature Grid

sig-graphics-audio Feature Grid

Graphics Rendering Subsystem

Placeholder

Subsystem Detail
Rendering
Directx12
Vulkan
Metal
UberRender
Nanite-ish
Raytracing
Directx12
Vulkan
Metal
Anti-Aliasing
MSAA
SMAA
TAA
Shaders
HLSL
AZSL
OLDZSL
Global Illumination
Forward+
Deferred
Hybrid
Lighting
Directional
Point
Global Sky
Area
Forward shading
Diffuse Probe Grid
Reflection Probes
Subsurface Scattering
Exposure / Eye Adaptation
HDRI Skybox
Decal
PostFX
Tonemapping
Color Grading
Post-Process Volumes
Bloom
Deferred Fog
Depth of Field
SSAO

SIG-graphics-audio 11/30 release notes

Please fill in any info related to the items below for the 11/30 release notes. Note this has a due date of Friday, Nov 12th, 2021:

  • Features
  • Bug fixes (GHI list if possible)
  • Deprecations
  • Known issues

Proposed SIG-Graphics-Audio meeting agenda for 2021-10-20

Meeting Details

  • Date/Time: October 20, 2021 @ 5:00pm UTC / 10:00 am PDT
  • Location: Discord SIG-Graphics-Audio Voice Room
  • Moderator: JonB (wintermute-motherbrain)
  • Note Taker: JonB (wintermute-motherbrain)

The SIG-Graphics-Audio Meetings repo contains the history of past calls, including a link to the agenda, recording, notes, and resources.

SIG Updates

What happened since the last meeting?

  • Approved this RFC
  • Pushed back and requested more information/details for this RFC
  • Discussed this RFC, discussion is still ongoing.

Meeting Agenda

  • Discuss the new guidelines for promotion to Reviewer and Maintainer in O3DE.
  • Review nominations for Reviewer/Maintainer.
  • Continue discussion for the material shader RFC

Outcomes from Discussion topics

Discuss outcomes from agenda

Action Items

Create actionable items from proposed topics

Open Discussion Items

  • This RFC is still open for discussion

List any additional items below!

Atom Render viewport for animation editor

Summary:

Replace the OpenGLRenderViewport in EMotionFX with the Atom Render Viewport.

What is the relevance of this feature?

Since EMotionFX released its beta in LY 1.10, it has been using OpenGL to render actors in the animation editor. This has many disadvantages, including:

  1. Performance. We are still using outdated OpenGL rendering technology and creating an actor format specifically for OpenGL's needs.

  2. Different rendering results in the animation editor vs in game.

  3. An inconsistent workflow (compared to other O3DE viewports) when navigating the OpenGL render viewport.

  4. OpenGL is no longer supported on Mac. Currently EMotionFX's OpenGL render viewport is the only renderer that's not integrated with Atom.

Integrating the Atom viewport solves all of the problems above and gives customers a much better experience in the animation editor. We plan to leverage some of the technology we have built for the UI editor and the Material Editor (i.e. the Atom Viewport Context, Render Viewport Widget, and Camera(?) work) to speed up development.

Technical design description:

Here is a simplified version of the class diagram to help understand the current design of the OpenGL render viewport and render plugin in EMotionFX.

currentDesign

As you can see, it would be a significant amount of work to replace MCore::Camera with AZ::Camera and MCore::Manipulator with AZ::Manipulator while continuing to support all the OpenGL rendering and functions in parallel with the Atom renderer. The good thing is we don't have to, since the goal is to completely remove the OpenGL dependency and replace it with Atom. Our plan is to use the existing RenderViewportWidget (which is already integrated with AZ::Camera, controllers, and TickBus, and is already working for the Material Editor), and build on it from the ground up by adding the functionality needed by EMotionFX. It should be kept entirely separate from the existing renderPlugin.

New design for AtomRenderPlugin
Here is a simplified version of the class diagram for the AtomRenderViewportWidget

atomRenderView

In this new diagram, you will notice a few things have changed. First, compared to the first graph, we got rid of a lot of rendering-specific classes and functions. This is because those functions are now handled by RenderViewportWidget. Second, usage of MCore::Camera and MCore::Manipulator is completely gone, replaced by AZ::Camera and AzToolsFramework::Manipulator. In addition, we also got rid of the layout setting and multiple viewWidgets for the new renderPlugin, because it is not a core feature and does not see much usage. For now, the Atom render plugin will only have one active viewport.

Update the ui menu options

UI option name / Decision:

  • Select - Keep.
  • Manipulator - Keep. This option toggles between translation, rotation, and scale manipulators.
  • Layout - Remove. The layout option offers split layouts and there isn't much usage of that anymore.
  • View Options:
    • Solid, Lighting, Backface Culling, Grid - Keep. Atom should already have options to toggle these on and off.
    • Bounding Box, Skeleton, Joint Names, Joint Orientation, Bind Pose, Colliders - Keep. All of these options should be replaceable by auxGeom rendering from Atom.
    • Wireframe, Vertex Normals, Face Normals, Tangents (including bitangents) - Keep. These options will require software skinning of the actor. We would like to get rid of CPU skinning as it is terribly slow and inefficient. It would be great if Atom can provide support to render these debug options on the GPU later in the pipeline; then here we could just enable the rendering options. The fallback plan is to still use software skinning and use auxGeom to render these options.
    • Gradient Background - Remove. Not essential. The Material Editor supports different skyboxes; if needed we could introduce that option instead.
    • Motion Extraction - Keep. The MCore render util uses a simple trajectory path class to render this and it should be easy to port over. The rendering function just combines some triangle and line rendering, which should be easily replaced with auxGeom under the hood.
  • Camera Options:
    • Perspective, Front, Back, Top, Bottom, Left, Right - Keep. Checked with the Atom team and editor team that orthographic cameras are supported.
    • Reset Camera - Keep. Also, there seems to be a bug where resetting the camera places it at a different distance than clicking a camera position option (left, right, top, ..). Make sure that problem is addressed as well.
    • Show Selected, Show Entire Scene - Remove. Not essential. You would almost never need multiple characters in the animation editor, so that makes the option redundant.
    • Follow Character - Keep.
  • Right Click:
    • Reset Transform, Open Actor, Recent Actor, Open Workspace, Recent Workspace, Transform, Camera Options - Leaning Keep. These are some of the essential UI options. Some might argue these are the same options that can be found in the file menus and could be redundant. This could be an optional, lower-priority task.
    • Unselect/Hide/Unhide Actor Instances, Clone/Remove Selected Actor Instances, Merge Actor - Leaning Remove. These options are not used very often, as we don't recommend having multiple actors open in the animation editor. To me it just adds a bit of confusion.

Execution plan:

Here is a brief execution plan showing the order in which the tasks should be worked on and which of the tasks could be parallelized between multiple SDEs. The openGLRenderPlugin and emotionfx::Mesh code removal will be discussed after the 1st release and could be done in parallel as well.

executionplan

What are the advantages of the feature?

It will allow us to completely get rid of the OpenGL dependency. It will give us performance advantages by using the Atom renderer and align us more closely with other editor tools. The EMotionFX::Mesh format can also be deleted, which would result in better performance at tool time.

What are the disadvantages of the feature?

EMotionFX becomes further coupled with the Atom renderer.

How will this be implemented or integrated into the O3DE environment?

EMotionFX already depends on Atom and its mesh format. Users can simply open the Atom render viewport window instead of the OpenGL window in the EMotionFX editor.

Are there any alternatives to this feature?

  1. Keep the OpenGL render viewport.

  2. If there is an alternative to the Atom renderer, that is also a possibility by nature. But I haven't heard of any alternative, and I'm sure no one wants to go back to using the Cry renderer.

How will users learn this feature?

The Atom render viewport is designed to be a less complicated version of the OpenGL render viewport, and the interface will largely remain the same.

Are there any open questions?

  • Materials still won't be rendered in the EMotionFX editor. This is because EMotionFX doesn't support the entity/component system yet and there isn't any place to do material authoring. But if we want to support rendering materials in the future, this would be a necessary step.

Poll: sig-graphics-audio weekly meeting format - Open until 2022-05-18

What: Change the format of the weekly discord meeting. Instead of using the meeting to assign jira issues, the meeting should be used to discuss technical topics, more like a regular standup meeting. In addition, an administrative meeting will be held on Tuesdays to do assignment of tasks and issues.

When: The poll starts on 2022-04-28 and will continue until 2022-05-18 (the next sig-graphics-audio monthly)

There are several options listed below, as comments. Add the emoji (👍 ) to the options of your preference.
