carter-urdf-example_index.md
# Omniverse URDF Importer

The URDF Importer Extension is used to import URDF representations of robots. Unified Robot Description Format (URDF) is an XML format for representing a robot model in ROS.

### Getting Started

1. Clone the GitHub repo to your local machine.
2. Open a command prompt and navigate to the root of your cloned repo.
3. Run `build.bat` to bootstrap your dev environment and build the example extensions.
4. Run `_build\{platform}\release\omni.importer.urdf.app.bat` to start the Kit application.
5. From the menu, select `Isaac Utils->URDF Importer` to launch the UI for the URDF Importer extension.

This extension is enabled by default. If it is ever disabled, it can be re-enabled from the Extension Manager by searching for `omni.importer.urdf`.

**Note:** On Linux, replace `.bat` with `.sh` in the instructions above.

### Conventions

Special characters in link or joint names are not supported and are replaced with an underscore. If the replacement causes a name to start with an underscore, the letter `a` is prepended. It is recommended to make these name changes in the URDF directly. See the Convention References documentation for a complete list of Isaac Sim conventions.

### User Interface

- **Information Panel**: This panel has useful information about this extension.
- **Import Options Panel**: This panel has utility functions for testing the gains being set for the Articulation. See Import Options below for full details.
- **Import Panel**: This panel holds the source path, destination path, and import button.

### Import Options

- **Merge Fixed Joints**: Consolidate links that are connected by fixed joints, so that an articulation is only applied to joints that move.
- **Fix Base Link**: When checked, the robot will have its base fixed where it is placed in world coordinates.
- **Import Inertia Tensor**: Check to load inertia from the URDF directly. If the URDF does not specify an inertia tensor, identity is used, scaled by the scaling factor. If unchecked, PhysX computes it automatically.
- **Stage Units Per Meter**: Kit's default length unit is centimeters. Here you can set the scaling factor to match the unit used in your URDF. Currently, the URDF importer only supports uniform global scaling. Applying different scaling for different axes and specific mesh parts (i.e. using the `scale` parameter under the URDF mesh label) will be available in future releases. If you have a `scale` parameter in your URDF, you may need to manually adjust the other values in the URDF so that all parameters are in the same unit.
- **Link Density**: If a link does not have a given mass, this density (in kg/m^3) is used to compute mass based on link volume. A value of 0.0 tells the physics engine to compute the density automatically as well.
- **Joint Drive Type**: Default joint drive type. Values can be `None`, `Position`, or `Velocity`.
- **Joint Drive Strength**: The drive strength is the joint stiffness for position-driven joints, or the damping for velocity-driven joints.
- **Joint Position Drive Damping**: If the drive type is set to position, this is the default damping value used.
- **Clear Stage**: When checked, clears the stage before loading the new URDF; otherwise, loads it on the currently open stage at position `(0,0,0)`.
- **Convex Decomposition**: If checked, the collision object will be made a set of convex meshes to better match the visual asset. Otherwise, a single convex hull is used.
- **Self Collision**: Enables self collision between adjacent links. It may cause instability if the collision meshes intersect at a joint.
- **Create Physics Scene**: Creates a default physics scene on the stage. Because this physics scene is created outside of the robot asset, it won't be loaded into other scenes composed with the robot asset.
- **Output Directory**: The destination of the imported asset. The importer creates a folder structure with the robot asset and all textures used for its rendering. You must have write access to this directory.

### Note:

- It is recommended to set Self Collision to false unless you are certain that links on the robot are not self colliding.
- You must have write access to the output directory used for import. It defaults to the current open stage; change this as necessary.

### Known Issue:

If more than one asset in the URDF contains the same material name, only one material will be created, even if the material parameters differ. For example, if two meshes have materials named "material", one blue and one red, both meshes will end up either red or blue. This also applies to textured materials.

## Robot Properties

There might be many properties you want to tune on your robot. These properties can be spread across many different Schemas and APIs. The general steps for getting and setting a parameter are:

1. Find which API the parameter is under. The most common ones can be found in the Pixar USD API.
2. Get the prim handle that the API is applied to. For example, Articulation and Drive APIs are applied to joints, and Mass APIs are applied to rigid bodies.
3. Get the handle to the API. From there on, you can Get or Set the attributes associated with that API.
For example, if we want to set the wheel's drive velocity and the actuators' stiffness, we need to find the DriveAPI:

```python
# get handle to the Drive API for both wheels
left_wheel_drive = UsdPhysics.DriveAPI.Get(stage.GetPrimAtPath("/carter/chassis_link/left_wheel"), "angular")
right_wheel_drive = UsdPhysics.DriveAPI.Get(stage.GetPrimAtPath("/carter/chassis_link/right_wheel"), "angular")

# Set the velocity drive target in degrees/second
left_wheel_drive.GetTargetVelocityAttr().Set(150)
right_wheel_drive.GetTargetVelocityAttr().Set(150)

# Set the drive damping, which controls the strength of the velocity drive
left_wheel_drive.GetDampingAttr().Set(15000)
right_wheel_drive.GetDampingAttr().Set(15000)

# Set the drive stiffness, which controls the strength of the position drive
# In this case because we want to do velocity control this should be set to zero
left_wheel_drive.GetStiffnessAttr().Set(0)
right_wheel_drive.GetStiffnessAttr().Set(0)
```

Alternatively, you can use the Omniverse Commands Tool to change a value in the UI and get the associated Omniverse command that changes the property.

**Note:**

- The drive stiffness parameter should be set when using position control on a joint drive.
- The drive damping parameter should be set when using velocity control on a joint drive.
- Setting both stiffness and damping on a drive will result in both targets being applied; this can be useful in position control to reduce vibrations.

## Examples

The following examples showcase how to best use this extension:

- **Carter Example:** `Isaac Examples > Import Robot > Carter URDF`
- **Franka Example:** `Isaac Examples > Import Robot > Franka URDF`
- **Kaya Example:** `Isaac Examples > Import Robot > Kaya URDF`
- **UR10 Example:** `Isaac Examples > Import Robot > UR10 URDF`

**Note:** For these examples, please wait for the materials to load. You can track progress in the bottom-right corner of the UI.
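The stiffness and damping guidance above follows from the PD control law that a joint drive applies: the drive force is `stiffness * (target_position - position) + damping * (target_velocity - velocity)`. A minimal numeric sketch of that law (the `drive_force` helper and the sample values are illustrative, not part of the importer API):

```python
def drive_force(stiffness, damping, target_pos, target_vel, pos, vel):
    # PD control law applied by a joint drive:
    # force = stiffness * (target_pos - pos) + damping * (target_vel - vel)
    return stiffness * (target_pos - pos) + damping * (target_vel - vel)

# Velocity control, as in the wheel example above: stiffness is zero,
# so only the damping term pushes the joint toward the target velocity.
vel_force = drive_force(stiffness=0, damping=15000,
                        target_pos=0, target_vel=150, pos=0, vel=100)
print(vel_force)  # 15000 * (150 - 100) = 750000

# Position control: stiffness pulls toward the target position, and a small
# damping term (with target_vel = 0) damps out vibration.
pos_force = drive_force(stiffness=1e5, damping=1e3,
                        target_pos=1.0, target_vel=0.0, pos=0.75, vel=0.5)
print(pos_force)  # 1e5 * 0.25 + 1e3 * (-0.5) = 24500.0
```

This is why a pure velocity drive zeroes the stiffness, and why adding a little damping to a position drive reduces oscillation.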
## Carter URDF Example

To run the Example:

1. Go to the top menu bar and click `Isaac Examples > Import Robots > Carter URDF`.
2. Press the `Load Robot` button to import the URDF into the stage, add a ground plane, add a light, and add a physics scene.
3. Press the `Configure Drives` button to configure the joint drives and allow the rear pivot to spin freely.
4. Press the `Open Source Code` button to view the source code. The source code illustrates how to import and integrate the robot using the Python API.
5. Press the `PLAY` button to begin simulating.
6. Press the `Move to Pose` button to make the robot drive forward.

## Franka URDF Example

To run the Example:

1. Go to the top menu bar and click `Isaac Examples > Import Robots > Franka URDF`.
2. Press the `Load Robot` button to import the URDF into the stage, add a ground plane, add a light, and add a physics scene.
3. Press the `Configure Drives` button to configure the joint drives. This sets each drive's stiffness and damping value.
4. Press the `Open Source Code` button to view the source code. The source code illustrates how to import and integrate the robot using the Python API.
5. Press the `PLAY` button to begin simulating.
6. Press the `Move to Pose` button to make the robot move to a home/rest position.

## Kaya URDF Example

To run the Example:

1. Go to the top menu bar and click `Isaac Examples > Import Robots > Kaya URDF`.
2. Press the `Load Robot` button to import the URDF into the stage, add a ground plane, add a light, and add a physics scene.
3. Press the `Configure Drives` button to configure the joint drives. This sets the drive stiffness and damping value of each wheel and sets all of its rollers as freely rotating.
4. Press the `Open Source Code` button to view the source code. The source code illustrates how to import and integrate the robot using the Python API.
5. Press the `PLAY` button to begin simulating.
6. Press the `Move to Pose` button to make the robot rotate in place.

## UR10 URDF Example

To run the Example:

1. Go to the top menu bar and click `Isaac Examples > Import Robots > UR10 URDF`.
2. Press the `Load Robot` button to import the URDF into the stage, add a ground plane, add a light, and add a physics scene.
3. Press the `Configure Drives` button to configure the joint drives. This sets each drive's stiffness and damping value.
4. Press the `Open Source Code` button to view the source code. The source code illustrates how to import and integrate the robot using the Python API.
5. Press the `PLAY` button to begin simulating.
6. Press the `Move to Pose` button to make the robot move to a home/rest position.

## Extension Documentation

- URDF Import Extension [omni.importer.urdf]
  - Usage
  - High Level Code Overview
  - Limitations
  - Changelog
  - Contributing to the URDF Importer Extension
Categories.md
# Node Categories

An OmniGraph node can have one or more categories associated with it, giving the UI a method of presenting large lists of nodes or node types in a more organized manner.

## Category Specification In .ogn File

For now the node categories are all specified through the .ogn file, so the node will inherit all of the categories that were associated with its node type. There are three ways you can specify a node type category in a .ogn file: using a predefined category, creating a new category definition inline, or referencing an external file that has shared category definitions.

### Predefined Categories

This is the simplest, and recommended, method of specifying categories. There is a single .ogn keyword to add to the file to associate categories with the node type. It can take three different forms. The first is a simple string with a single category in it:

```json
{
    "MyNodeWithOneCategory": {
        "version": 1,
        "categories": "function",
        "description": "Empty node with one category"
    }
}
```

**Warning:** The list of categories is intentionally fixed. Using a category name that is not known to the system will result in a parsing failure and the node will not be generated. See below for methods of expanding the list of available categories.

The second is a comma-separated list within that string, which specifies more than one category:

```json
{
    "MyNodeWithTwoCategories": {
        "version": 1,
        "categories": "function,time",
        "description": "Empty node with two categories"
    }
}
```

The last also specifies more than one category, this time in a list format:

```json
{
    "MyNodeWithAListOfTwoCategories": {
        "version": 1,
        "categories": ["function", "time"],
        "description": "Empty node with a list of two categories"
    }
}
```

The predefined list is contained within a configuration file. Later you will see how to add your own category definitions.
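All three forms above reduce to the same flat list of category names. The following sketch (a hypothetical helper, not the actual OmniGraph parser) shows that reduction:

```python
def normalize_categories(value):
    # Reduce the .ogn "categories" keyword to a flat list of names.
    # Accepts a single string, a comma-separated string, or a list;
    # list entries may also be {name: description} dictionaries
    # (the inline-definition form described later in this document).
    if isinstance(value, str):
        return [name.strip() for name in value.split(",")]
    names = []
    for item in value:
        if isinstance(item, str):
            names.extend(name.strip() for name in item.split(","))
        else:
            names.extend(item.keys())
    return names

print(normalize_categories("function"))            # ['function']
print(normalize_categories("function,time"))       # ['function', 'time']
print(normalize_categories(["function", "time"]))  # ['function', 'time']
```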
The predefined configuration file looks like this:

```json
{
    "categoryDefinitions": {
        "$description": [
            "This file contains the category information that will tell OGN what the acceptable values for",
            "categories are in the .ogn file under the 'categories' keyword along with descriptions of each of",
            "the categories. Categories have an optional namespace to indicate subcategories, e.g. 'math:vector'",
            "indicates math nodes dealing exclusively with vectors.",
            "",
            "The contents of this file will always denote legal categories. The list can be extended by creating",
            "another file with the same format and adding it under the 'categoryDefinitions' keyword in .ogn, or by",
            "inserting an individual category definition in the 'category' keyword in .ogn. See the docs for details."
        ],
        "bundle": "Nodes dealing with Bundles",
        "constants": "Nodes which provide a constant value",
        "debug": "Development assist nodes for debugging",
        "event": "Action Graph nodes which are event sources for the graph",
        "examples": "Nodes which are used in documentation examples",
        "fileIO": "Nodes relating to data stored in the file system",
        "flowControl": "Nodes dealing with evaluation order of the graph",
        "function": "Nodes implementing a general function",
        "geometry:analysis": "Nodes dealing with the analysis of geometry representations",
        "geometry:deformer": "Nodes that deform geometry in space",
        "geometry:generator": "Nodes that generate different kinds of geometry",
        "geometry": "Nodes dealing with the manipulation of geometry",
        "graph:action": "Nodes specifically relating to action graphs",
        "graph:onDemand": "Nodes specifically relating to on-demand graphs",
        "graph:postRender": "Nodes specifically relating to post-render graphs",
        "graph:preRender": "Nodes specifically relating to pre-render graphs",
        "graph:simulation": "Nodes specifically relating to simulation graphs",
        "graph": "Nodes dealing with the graph to which they belong",
        "input:gamepad": "Nodes dealing with gamepad input",
        "input:keyboard": "Nodes dealing with keyboard input",
        "input:mouse": "Nodes dealing with mouse input",
        "input": "Nodes dealing with external input sources",
        "internal:test": "Nodes used solely for internal testing",
        "internal:tutorial": "Nodes used solely for illustrating a node writing technique",
        "internal": "Nodes not meant for general use",
        "material:shader": "Nodes dealing with shader information",
        "material:texture": "Nodes dealing with texture information",
        "material": "Nodes dealing with general materials",
        "math:array": "Nodes dealing with operations on arrays",
        "math:casts": "Nodes casting values from one type to another compatible type",
        "math:condition": "Nodes implementing mathematical logic functions",
        "math:constant": "Nodes with a constant value",
        "math:conversion": "Nodes converting between different types of data",
        "math:matrix": "Nodes dealing with matrix math",
        "math:operator": "Nodes implementing a mathematical operator",
        "math:vector": "Nodes dealing with vector math",
        "math": "General math values and functions",
        "rendering": "Nodes dealing with rendering components",
        "sceneGraph:camera": "Nodes dealing with the scene graph directly",
        "sceneGraph": "Nodes dealing with the scene graph directly",
        "script": "Nodes dealing with custom scripts",
        "sound": "Nodes dealing with sound",
        "time": "Nodes dealing with time values",
        "tutorials": "Nodes which are used in a documentation tutorial",
        "ui": "Nodes that create UI widgets and UI containers",
        "variables": "Nodes dealing with variables on the graph",
        "variants": "Nodes dealing with USD variants",
        "viewport": "Nodes dealing with viewport"
    }
}
```

**Note:** You might have noticed some categories contain a colon as a separator. This is a convention that allows splitting of a single category into subcategories. Some UI may choose to use this information to provide more fine-grained filtering and organizing features.

### Inline Category Definition

On occasion you may find that your node does not fit into any of the predefined categories, or you may wish to add extra categories that are specific to your project. One way to do this is to define a new category directly within the .ogn file. The way you define a new category is to use a `name:description` category dictionary rather than a simple string. For example, you could replace a single string directly:

```json
{
    "MyNodeWithOneCustomCategory": {
        "version": 1,
        "categories": { "light": "Nodes implementing lights for rendering" },
        "description": "Empty node with one custom category"
    }
}
```

You can add more than one category by adding more dictionary entries:

```json
{
    "MyNodeWithTwoCustomCategories": {
        "version": 1,
        "categories": {
            "light": "Nodes implementing lights for rendering",
            "night": "Nodes implementing all aspects of nighttime"
        },
        "description": "Empty node with two custom categories"
    }
}
```

You can also mix custom categories with predefined categories using the list form:

```json
{
    "MyNodeWithMixedCategories": {
        "version": 1,
        "categories": [
            "rendering",
            {
                "light": "Nodes implementing lights for rendering",
                "night": "Nodes implementing all aspects of nighttime"
            }
        ],
        "description": "Empty node with mixed categories"
    }
}
```

### Shared Category Definition File

While adding a category definition directly within a file is convenient, it is not all that useful as you either only have one node type per category, or you have to duplicate the category definitions in every .ogn file. A better approach is to put all of your extension's, or project's, categories into a single configuration file and add it to the build.

The configuration file is a .json file containing a single dictionary entry with the keyword `categoryDefinitions`. The entries are `name:description` pairs, where `name` is the name of the category that can be used in the .ogn file and `description` is a short description of the function of node types within that category.

Here is the file that would implement the above two custom categories:

```json
{
    "categoryDefinitions": {
        "$description": "These categories are applied to nodes in MyProject",
        "light": "Nodes implementing lights for rendering",
        "night": "Nodes implementing all aspects of nighttime"
    }
}
```

**Tip:** As with a regular .ogn file, any keyword beginning with a `$` will be ignored and can be used for documentation.

If your extension is building within Kit then you can install your configuration into the build by adding this line to your `premake5.lua` file:

```lua
install_ogn_configuration_file("MyProjectCategories.json")
```

This allows you to reference the new category list directly from your .ogn file, expanding the list of available categories using the .ogn keyword `categoryDefinitions`:

```json
{
    "MyNodeWithOneCustomCategory": {
        "version": 1,
        "categoryDefinitions": "MyProjectCategories.json",
        "categories": "light",
        "description": "Empty node with one custom category"
    }
}
```

Here, the predefined category list has been expanded to include those defined in your custom category configuration file, allowing use of the new category name without an explicit definition.

If your extension is independent, either using the Kit SDK to build or not having a build component at all, you can instead reference either the absolute path of your configuration file, or the path relative to the directory in which your .ogn resides.
As with categories, the definition references can be a single value or a list of values:

```json
{
    "MyNodeWithOneCustomCategory": {
        "version": 1,
        "categoryDefinitions": ["myConfigDirectory/MyProjectCategories.json", "C:/Shared/Categories.json"],
        "categories": "light",
        "description": "Empty node with one custom category"
    }
}
```

### Category Access

Access to the existing categories in C++ is done through the `omni::graph::core::INodeCategories_abi` class. In Python you use the binding to that interface in `omni.graph.core.INodeCategories_abi`.
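As a summary of the mechanisms above, the following sketch (hypothetical helpers, not the Kit implementation) merges shared `categoryDefinitions` files into one name-to-description registry and splits namespaced names such as `math:vector` into category and subcategory:

```python
import json

def load_category_definitions(paths):
    # Merge one or more categoryDefinitions .json files into a single
    # name -> description dictionary. Keys beginning with "$" are
    # documentation entries and are skipped, as described above.
    merged = {}
    for path in paths:
        with open(path) as f:
            definitions = json.load(f)["categoryDefinitions"]
        merged.update({name: desc for name, desc in definitions.items()
                       if not name.startswith("$")})
    return merged

def split_subcategory(name):
    # Split a namespaced category such as "math:vector" into
    # (category, subcategory); subcategory is None when there is no colon.
    category, _, sub = name.partition(":")
    return category, sub or None

print(split_subcategory("math:vector"))  # ('math', 'vector')
print(split_subcategory("math"))         # ('math', None)
```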
changes.md
## Changed

- OVCC-1497: `tsc_clock` was moved from the `carb::time` namespace to `carb::clock`, though it is also available from within the `time` namespace for historical consistency. The `time` namespace was conflicting in some cases with the `time()` function.
- OM-122864: Update freetype to version 2.13.2.
- OVCC-1529 / OM-122864: Update FLAC to version 1.4.3.
- OVCC-1525: `carb.crashreporter-breakpad.plugin`: added default settings for `/crashreporter/url`, `/crashreporter/product`, and `/crashreporter/version` so that crash reports can always be uploaded even in apps that have not been fully configured.

## Fixed

- OVCC-1497: `tsc_clock::sample()` now acts as a compiler barrier to prevent compiler reordering of sampling.
- OVCC-1513: Address UB in `omni::string`. Replace the union with a struct to avoid accessing an inactive member.
- OVCC-1497: Significantly improved performance of `carb::thread::mutex` (now ~70% faster than `std::mutex`) and `carb::tasking::Mutex` (about ~70% faster than the previous implementation).
- OM-123447: Add module docstrings for carb.settings, carb.tokens, omni.kit.app, and omni.ext.
- OM-121493: Python: Using `carb.profiler` decorators and profiling will now work properly with `async` functions.

## Added

- OVCC-1472: Added `CARB_PLUGIN_INTERFACE_EX`, which allows interfaces to have a default version that is distinct from the latest version. This allows modules to opt in to interface changes rather than always getting the latest.
- OMPE-1332: `omni.ext`: apply ext dict filters for remote extensions and include them when publishing.
- OVCC-1504: `carb.audio-forge.plugin`: added a `playFilePaused` command to the `example.audio.playback` app. This plays a sound on a voice that is initially paused. It is unpaused after a few seconds and then plays to completion. The `--input` option is used to provide the path to the sound to play. The `--interactive` option is optionally used to print the play cursor position during playback. Various other common options such as `--format`, `--frame-rate`, `--decoded`, `--memory-stream`, `--disk-stream`, etc. will also affect the behavior of this new command.
- OVCC-1524: `carb.crashreporter-breakpad.plugin`: added a new volatile metadata type that allows a file blob in memory to be written to file. The crash reporter will then include the written file in the crash report. This is useful as a way of including arbitrarily large metadata values in the crash report as files instead of directly as metadata values. Each metadata value itself has a limit of 32KB on the OmniCrashes side, and OmniCrashes will currently discard any metadata values beyond that limit; writing them as files provides a way to work around the limit.
- OM-123294: `omni.structuredlog.plugin`: added a setting that can be used to enable 'anonymous data' mode. This forces all user and login information to be cleared out and all consent levels approved.

## 167.0

### Added

- OM-123459: `omni.structuredlog.plugin`: added the `omni::structuredlog::IStructuredLogControl2` interface. This new interface contains the `emitProcessLifetimeExitEvent()` function to force the process lifetime exit event to be emitted early and with a custom reason. This is only intended to be used at process shutdown time.

### Changed

- OM-122864: Upgrade Python dependencies to fix security issues.
- OM-117408: `omni.ext`: print/log registry sync status.
- OVCC-1523: `framework`: Made assertion failures also log to all of the currently configured standard logger destinations. This includes the main log file, standard streams, and the debug console (on Windows at least). These messages will only be explicitly written by the `IAssert` interface to the standard streams and debug console destinations if the standard logger has not already written to those destinations as well.
### Fixed

- OVCC-1504: `carb.audio-forge.plugin`: fixed a bug in `carb.audio` that caused voices that were started in a paused state to potentially accumulate position information from previous voices. This was only an issue if the previous voice did not play to completion of its sound and following uses of the same bus were started in a paused state.
- OM-123140: `omni.ext`: fix Kit SDK extensions exclusion when generating version lock.

## 166.0

### Added

- OVCC-1506: `carb.tasking.plugin`: Added `/carb.tasking.plugin/suppressTaskException` setting for debugging task exceptions.
- OVCC-1509: `carb.tasking.plugin`: Added `/carb.tasking.plugin/debugAlwaysContextSwitch` setting, a debug/test mode that greatly increases the number of context switches to shake out issues.

### Changed

- OM-121332: `omni.kit.app.plugin`: `/app/fastShutdown` now defaults to true.
- OVCC-1503: Increased the buffer given to `omniGetModuleExports` from 1k to 4k.
- OVCC-1512: `carb::cpp::bit_cast` will be `constexpr` if the compiler supports it, even prior to C++20.
- OVCC-1512: The bit operations library (`carb/Bit.h`) will fall back to C++20 definitions (i.e. from `<bit>`) if C++20 is available.
- OM-116009: Flush prints to `stdout` after loading the extensions.

### Fixed

- OVCC-1510: `carb.input.plugin`: Fixed a regression with hashability of device types.
- OVCC-1511: Fixed compilation issues when built with TSAN (`-fsanitize=thread`) on newer versions of GCC.
- OVCC-1512: Fixed compilation issues when building with older versions of MSVC 2017.
- OM-122114: Updated `omni.bind` testing dependencies.
- OVCC-1494: Linux: Logging to an ANSI console now issues an explicit reset to fix issues with text coloring.
## 165.0

### Added

- OM-120782: `omni.ext`: add deprecation warning param for extensions.
- OM-120782: `omni.ext`: make deprecation warning a developer-only warning.
- OVCC-1428: Carbonite now packages `tools/premake-deps.lua`, a library of Premake functions to build with various Carbonite dependencies.

### Changed

- OM-120985: updated to zlib 1.3.1.
- OM-120985: `omni.telemetry.transmitter`: updated to zlib 1.3.1 and libcurl v8.6.0 to get a fix for CVE-2024-0853.

### Fixed

- OM-120831: Fix `CARB_PROFILE_FRAME` crash on Kit shutdown (emitting event after Tracy shutdown).
- OVCC-1502: Carbonite headers should now compile under C++20.
- OVCC-1500: `carb.tasking.plugin`: Fixed a rare race that could manifest when very short tasks return a value via a `Future`.
- OM-120313: `carb.crashreporter-breakpad.plugin`: launched the user story GUI child process with its working directory set to the location that the parent (i.e. crashing) process loaded its own `carb.dll`/`libcarb.so` library from. This allows the child process to find the carb module on startup since it is a static dependency of the tool.
- OMPRW-707: Cache more acquire-interface calls for omni.kit.app and omni.ext.
- OM-120581: `carb.crashreporter-breakpad.plugin`: added some thread safety fixes around accessing the metadata and file tables in the crash reporter and improved some logging output around the crash report upload.
- OVCC-1499: carb.input Python bindings: Fixed an issue with `get_modifier_flags` and `get_global_modifier_flags` that would not properly distinguish between empty arrays and `None`. Also added several functions that were available in the `carb::input::IInput` API but not available in the Python bindings.
- OVCC-1493: omni.app.plugin: `/app/fastShutdown` will no longer hijack the exit code.
- OM-120581: Build: took initial steps to get Carbonite building with the GCC 11 toolchain:
  - fixed an issue in `omni.bind` that hardcoded the paths to the include folders for GCC 7.
  - fixed some new warnings that have shown up with GCC 11.
  - fixed a change in the `cpuid.h` header under GCC 11's headers that added a `__cpuidex()` function. This conflicted with a version that was explicitly implemented in `omni.platforminfo.plugin`.
  - fixed some deprecations in GCC 11's libraries.
  - called `__gcov_dump()` instead of `__gcov_flush()` since the latter has been both deprecated and removed in GCC 11.
  - added passthrough replacements for the `__*_finite()` math functions since they were removed in a more recent GCC/glibc. This could also be fixed later by rebuilding `forgeaudio` under a newer GCC version as well.

## 164.0

### Added

- OVCC-1488: Added `carb::this_thread::spinTryWaitWithBackoff()`, similar to `spinWaitWithBackoff()` but will give up in high-contention cases, informing the caller that waiting in the kernel is likely a good idea.
- OVCC-1492: carb.scripting-python.plugin and Python bindings: Added support for Python 3.11.

### Changed

- OM-119706: omni.telemetry.transmitter: improved some logging in the telemetry transmitter.
- OM-119706: omni.telemetry.transmitter: changed the default transmission limits for each event processing protocol to match the limits for each default endpoint.
- OVCC-1441: framework: changed the `carb::StartupFrameworkDesc` struct to include a `carb::PluginLoadingDesc` member. This allows apps to programmatically control which plugins are loaded during framework init instead of having to specify it either on the command line or in the `/PluginsToLoad` setting in a config file for the app.

### Fixed

- OVCC-1491: Linux: Improved `carb::this_thread::getId()` performance by about 97% in most cases.
- OVCC-1488: aarch64: Fixed a potential rare hang that could occur in the futex system, affecting all synchronization primitives. As a result of the fix, aarch64 futex operations are much faster, with a contended test executing nearly 90% faster.
- OVCC-1488: Linux: Improved `carb::thread::mutex` and `carb::thread::recursive_mutex` substantially by fixing a performance regression: in the contended case they are ~98% faster, executing in a mere 2% of the previous time. In contended tests they are about ~54% faster than `std::mutex` and `std::recursive_mutex`, respectively.
- OM-119240: `omni.ext`: Fix non-deterministic import order when using fast_importer.
- OVCC-1473: Windows: Console applications will no longer hang or crash when CTRL+C is pressed to end the application. Note: Applications that desire different CTRL+C behavior should install their own handler after initializing the Carbonite framework.
- OVCC-1478: `carb.tasking` with `omni.job`: Fixed a rare hang that could occur when calling `ITasking::reloadFiberEvents()`.
- OM-119706: `omni.telemetry.transmitter`: reworked the transmitter's retry policy for failed endpoints. A failed transmission will now be retried a configurable number of times before removing the endpoint for the session. This fixes a potential situation where the transmitter could effectively hang under certain configurations.
- OM-87381: Fixed issues where `carb.crashreporter-breakpad` would crash inside the crash handler when `NtCreateThreadEx` was used to inject a thread to crash the process.

## 163.0

### Added

- OVCC-1480: `kit-kernel`: added the `OMNI_ENABLE_CRASHREPORTER_ON_FAST_SHUTDOWN` environment variable to allow the crash reporter to remain enabled during a fast shutdown in Kit. Set this environment variable to `1` to keep the crash reporter enabled. Set to any other value to use the default behavior of disabling the crash reporter during a fast shutdown. If the environment variable is not present, the default behavior is to leave the crash reporter enabled when running under TeamCity or GitLab, and to disable it on fast shutdown otherwise.
- OVCC-1295: Added documentation for `carb::tokens::ITokens` and Python bindings.
### Changed

- OM-111557: Reworked Python exception handling to call `sys.excepthook` instead of just logging.
- OVCC-1481: `carb.tasking`: Implemented an optimization that can execute tasks within a `TaskGroup` or `Counter` when waiting on the `TaskGroup` or `Counter`, which allows tasks to resolve more quickly. Improves performance of the ‘skynet:TaskGroup’ test by about 28% on Windows and makes it runnable on Linux.

### Fixed

- OVCC-1474: Fixed a rare crash that could occur in `carb.tasking`.
- OVCC-1475: Fixed a hang in `carb.crashreporter-breakpad` if a crash occurred in the logging system.
- OVCC-1476: Linux: Reduced calls to `getenv()`, as it cannot be called safely if any other thread may also be modifying the environment, due to deficiencies in GLIBC.
- OVCC-1474: Fixed a rare crash that could occur in `carb.tasking` in certain `applyRange` subtasks.
- OVCC-1478: Fixed a rare crash that could occur in `carb.tasking` if the main thread calls `executeMainTasks` and happens to context switch while another thread is calling `reloadFiberEvents`.

## 162.0

### Added

- OVCC-1434: Added `carb::tasking::Delegate<>`, which is the same as `carb::delegate::Delegate<>`, but is tasking-aware.
- OVCC-411: Added `omni::vector<>`, an ABI-safe implementation of `std::vector<>`. This class adheres to the C++ standard vector except that `Allocator` is not a template parameter; `omni::vector<>` always uses `carb::Allocator<>`, which uses Carbonite’s `carb::allocate` and `carb::deallocate` functions (and requires `carb.dll` or `libcarb.so`).
- OVCC-1467: `omni.telemetry.transmitter`: output the transmitter version and log file location(s) to its log during startup.

### Changed

- OM-117489: `omni.ext.plugin`: target.kit now takes the patch part of the kit version into account as well.
- OVCC-1352: `carb.tasking` has received various performance improvements:
  - `applyRange`/`parallelFor` are significantly faster; the `skynet` test runs 21x faster on Windows and 7x faster on Linux due to algorithmic changes. The new algorithm is better able to sense system overload and adapt.
  - Waking threads has been found to take about 10 µs (microseconds) on Linux and 50 µs on Windows, which is very slow. The changes try to lessen the impact of waking threads by preferring deferred wake (chain reaction) and keeping threads active while the system has work available.
  - Algorithms and task queues have changed to utilize multiple lanes to reduce contention.
  - Pinning, while still not recommended, is much more efficient and will no longer log warnings.
- `omni.kit.app`, `omni.ext`: Replaced most std map/set containers with the carb RH versions.
- OVCC-1054: `carb.tokens`: Warning/error logs will now include the entire token stack with the issue. Token names and values longer than 256 characters will be truncated.
- OVCC-1440: Windows: The `omni.kit.app`/`kit.exe` CTRL+C behavior now matches Linux: a quit is posted to the app and it will shut down gracefully on the next frame.
- OVCC-1470: `carb::container::LocklessStack` uses a common algorithm between Windows and Linux and is once again lock-free on Linux. All operations on Linux are now ~99% faster in the uncontended case and ~95% faster in the contended case. Windows operations are ~20% faster in the uncontended case and ~10% faster in the contended case.
- OVCC-1468: On Windows, Carbonite executables and libraries now implement Control Flow Guard, except for `carb.tasking.plugin` due to performance concerns.
- OM-118678: `omni.telemetry.transmitter`: updated to libcurl v8.5.0 to get a fix for CVE-2023-46218.

### Fixed

- OVCC-1235: Linux/GCC: Warnings in public includes from `-Wconversion` and `-Wno-float-conversion` have been fixed.
- OVCC-1352: Fixed issues with `CARB_PROFILE_BEGIN` and `CARB_PROFILE_END` not respecting profiler channels properly.
- OVCC-1464/OVCC-1420: Further fix to a deadlock that could occur if Python bindings are loaded simultaneously with Carbonite plugins in separate threads. This resolves an issue on old versions of GLIBC that have problems with internal locking.
- OM-108121: Fixed test for keyboard-modifier-down transitioning after up.
- OM-118059: `omni.telemetry.transmitter`: fixed a bug that prevented the transmitter from automatically pulling down the latest schema ID for use on the non-OVE open endpoint.
- OVCC-1069: Fixed an issue where the carb.dictionary Python bindings did not handle `bool` correctly.
- OVCC-1450: Improved tracy profiler backtracing performance and fixed symbol resolution for dynamically loaded shared libraries on Linux.
- OVCC-1469: Fixed a race condition that could lead to a hang when `carb.tasking` was used with `omni.job` as the underlying thread pool.

## 161.0

- OVCC-1431: For Linux x86-64, Carbonite now publishes `carb_sdk+tsan` and `carb_sdk+plugins+tsan` packages that have Thread Sanitizer enabled (`-fsanitize=thread`), allowing those modules to report to Thread Sanitizer. Suppressions are located in the `include/tsan-suppressions.txt` file.
- OM-114023: `omni.kit.app`: Reworked `--vulkan` and added the `app/vulkan` setting to control Vulkan.

### Fixed

- OVCC-1164: Fixed ScratchBuffer move and copy constructors to correctly initialize data.
- OVCC-1444: `omni.ext.plugin`: fixed an extension linking bug when using a non-English OS language on Windows.
- OVCC-1420: Fixed a deadlock that could occur if Python bindings are loaded simultaneously with Carbonite plugins in separate threads.
- OVCC-1427: Log consumers and Loggers will no longer be called recursively. If a Logger does something that would recursively log, other Loggers will still receive the log message but the offending Logger will be ignored.
- OVCC-1427: Logs that occur on a thread with either the Framework or PluginManager (OVCC-948) mutexes locked are now deferred until both of these mutexes are unlocked, to prevent unsafe recursive calling into the Framework.

## 160.0

- OM-113541: Linux: `omni.kit.app.gcov.plugin` was added, and `kit-gcov` uses this to ensure that the fast shutdown code path also calls `__gcov_flush`.
- OVCC-1438: Added `runningInContainer` crash metadata, indicating if the process is running inside a container.
- OVCC-1263: `omni.kit.app.plugin`: added the `OMNI_TRACK_SETTINGS` environment variable to allow all changes to a given list of settings to be reported as warning messages. If a debugger is attached, a software breakpoint will also be triggered for each change to the tracked setting. Multiple settings may be monitored by separating their paths with a comma (‘,’), pipe (‘|’), colon (‘:’), or semicolon (‘;’).

### Fixed

- OVCC-1432: `carb.dictionary.plugin`: Added a missing mutex in `subscribeToNodeChangeEventsImpl` that otherwise led to data corruption.
- OVCC-1397: `carb.input.plugin`: Fixed an issue from 159.0 that caused action mapping hooks to not work properly.
- OM-115009: Fixed `carbReallocate` not freeing memory when using mimalloc.
- OVCC-1436: Fixed missing quotes for Windows in `omni.structuredlog.lua`.
- OVCC-1442: `carb.crashreporter-breakpad.plugin`: prevented upload retry attempts for crash reports that previously failed with most 4xx HTTP status codes. These crash report files will remain on disk locally but will never be uploaded again automatically. The user can modify the crash report’s metadata manually to allow it to be tried again, or delete the report.
- OVCC-1442: `carb.crashreporter-breakpad.plugin`: fixed an issue that could cause the crash reporter to become disabled if its interface was acquired early in Kit startup but the settings registry wasn’t present yet.
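The `OMNI_TRACK_SETTINGS` entry above (OVCC-1263) accepts setting paths separated by any of comma, pipe, colon, or semicolon. A hypothetical helper sketching that multi-separator parsing — not the actual `omni.kit.app.plugin` code:

```cpp
#include <string>
#include <vector>

// Split an OMNI_TRACK_SETTINGS-style value into individual setting paths.
// Any of ',', '|', ':' or ';' acts as a separator; empty segments are dropped.
std::vector<std::string> splitTrackedSettings(const std::string& value)
{
    const std::string separators = ",|:;";
    std::vector<std::string> paths;
    std::string current;
    for (char c : value)
    {
        if (separators.find(c) != std::string::npos)
        {
            if (!current.empty())
                paths.push_back(current);
            current.clear();
        }
        else
            current += c;
    }
    if (!current.empty())
        paths.push_back(current);
    return paths;
}
```

Setting paths use ‘/’ internally, so none of the four separator characters can appear inside a path.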
## 159.0

### Added

- OVCC-1209: `omni.platforminfo.plugin`: added a function to `omni::platforminfo::IOsInfo2` to retrieve a machine ID that can be used as an anonymous user ID in telemetry.
- OVCC-1209: set the structured logging user ID to an anonymous machine ID if `/structuredLog/anonymousUserIdMode` is set to “machine”.
- OMFP-3389: `kit-kernel`: scanned the command line on startup for extension and integration test commands. When found, `kit` puts `omni.structuredlog.plugin` into ‘test’ mode automatically so that structured log events generated during test runs do not interfere with production telemetry analysis.
- OVCC-1415: `carb.crashreporter-breakpad.plugin`: added a check for the `CARB_DISABLE_ABORT_HANDLER` environment variable before installing the SIGABRT handler on Windows for plugins and executables that are statically linked to the Windows CRT. If this environment variable is set to “1”, the SIGABRT handler will not be installed. If it is undefined or set to any other value, the SIGABRT handler will be installed by default. This is a temporary workaround to allow the Nsight debugger to still work with newer Omniverse apps that include this Carbonite functionality.
- OVCC-1422: `carb.crashreporter-breakpad.plugin`: added a check for the `OMNI_CRASHREPORTER_CRASHREPORTBASEURL` environment variable to override the ‘/crashreporter/crashReportBaseUrl’ setting.
- OVCC-1286: Fixed a bug where omni.structuredlog would generate a pure Python module with invalid `send_event()` functions when the event properties had characters other than `[a-zA-Z_]`.
- OM-113393: `omni.ext.plugin`: added package yanking support.
- OVCC-1414: Added additional Python binding documentation.
- OVCC-1411: The environment variable `CARB_USE_SYSTEM_ALLOC`, if defined and set to a value other than `0` at the time of the first request, will cause `carb::allocate`, `carb::deallocate`, and `carb::reallocate` to use the system default heap instead of using Mimalloc.
- OVCC-1418: `carb.crashreporter-breakpad.plugin`: added the `/crashreporter/metadataToEmit` setting. This is expected to be an array of regular expression strings that identify the names of crash metadata values that should also be emitted as telemetry events any time they are modified. A single event will be emitted when any metadata value is added or modified. If the metadata is set again to its current value, no event will be emitted.
- OM-114420: `omni.ext.plugin`: added support for `filter:setting` to be able to change extension dependencies based on a setting.
- OM-113920: `omni.ext.plugin`: updated packman link code to sync closely with changes in packman 7.10.1, which fixes issues creating symbolic links/junctions.
- OM-114145: `omni.ext.plugin`: added support for different version ranges (`~`, `>`, `<`, `=`, `^` operators).
- OVCC-1358: Try to allow structuredlog.sh/structuredlog.bat to find packman from the CWD.
- OVCC-1358: Added `repo structuredlog` to launch structuredlog from repo tools. This avoids issues around having structuredlog find packman and other resources.

### Changed

- OVCC-1204: **POSSIBLY BREAKING CHANGE** Now that Packman 7.10 can support aliases through the `<filter ... as="" />` tag, the previous change for CC-1204 has been undone and Carbonite’s `target-deps.packman.xml` chain no longer includes a `python` dependency. All Python dependencies are now tagged with their version, such as `python-3.10`.
- OVCC-1368: `Framework::releaseInterface` will now allow plugins to unload that are only referenced by themselves.
- OVCC-1397: `carb.input.plugin` has been made thread-safe. The `IInput` interface and associated `carb::input` namespace have also been documented.

### Fixed

- OVCC-1407: Python: Fixed an issue where creating a `carb.dictionary` with an empty value (i.e. `{ "payload": {} }`) would produce an empty dictionary (without the `"payload"`).
- OMFP-3353: `omni.ext.plugin`: Fixed `core.reloadable=False` not propagating correctly to all dependencies.
- OVCC-1368: `carb::getCachedInterface` will work properly if a plugin restarts, even if a module fails to unload.
- OMFP-3389: `kit-kernel`: fixed the ‘normal’ shutdown path for `kit` so that it properly shuts down `omni.structuredlog.plugin`. Previously this path was force unloading the plugin without notifying it of the unload.
- OM-113843: `carb.crashreporter-breakpad.plugin`: fixed an issue with the recent crash reporter changes that caused the Nsight system profiler to stop working with Carbonite based apps on Windows. The issue was tracked down to the new SIGABRT handler using a `thread_local` global variable.
- OVCC-1423: `omni.bind` now correctly produces a Python binding to `__init__()` that allows for casting between interfaces.
- OVCC-1416: Fixed the temp folder not being removed on fast shutdown.
- OVCC-1380: Linux: Fixed a crash that could occur when `carb.crashreporter-breakpad.plugin` is shut down when running on older versions of GLIBC, such as on CentOS-7.
- OVCC-1371: `include/carb/thread/RecursiveSharedMutex.h`: Fixed a compilation error if `CARB_ASSERT_ENABLED` was forced on.

## 158.0

### Added

- OVCC-1254: `carb.settings.plugin`: Added `carb::settings::ScopedWrite` and `carb::settings::ScopedRead` RAII lock classes.
- OVCC-1408: Added `include/carb/time/Util.h`, which has platform-independent versions of the time utility functions `asctime_r()`, `ctime_r()`, `gmtime_r()` and `localtime_r()`.
- OMFP-901: Linux: `kit-gcov` was added. This is a `kit` binary that is compiled with `gcov` support and calls `__gcov_flush()` before exiting. This can be used for downstream projects that want to collect code coverage information.
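The platform-independent time helpers described for OVCC-1408 paper over the fact that Windows lacks `gmtime_r()` and instead offers `gmtime_s()` with swapped arguments. A hypothetical sketch of such a wrapper, assuming nothing about the actual `carb/time/Util.h` contents:

```cpp
#include <ctime>

// Thread-safe, platform-independent gmtime_r()-style wrapper.
// Returns `result` on success, nullptr on failure.
inline std::tm* portable_gmtime_r(const std::time_t* timep, std::tm* result)
{
#if defined(_WIN32)
    // Windows: gmtime_s() has the same thread-safe intent but swaps the
    // argument order and returns an error code instead of a pointer.
    return gmtime_s(result, timep) == 0 ? result : nullptr;
#else
    // POSIX: gmtime_r() writes into the caller-provided buffer.
    return gmtime_r(timep, result);
#endif
}
```

Unlike plain `gmtime()`, which returns a pointer to shared static storage, the caller owns the `std::tm` buffer, so concurrent calls cannot trample each other's results.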
### Changed

### Fixed

- OMFP-2683: Upgraded Python packages to fix BDSA-2022-3544 (CVE-2022-46908).
- OVCC-1406: Fixed the handling of assertion failures so they can now generate a crash report if not ignored by the user (in debug builds on Windows). This allows the process lifetime ‘crash’ structured logging event to still be emitted in cases of an assertion failure.
- OVCC-1254: `carb.settings.plugin`: Documented `ISettings` and fixed some thread safety issues.
- OVCC-1408: Linux: locations that were using the non-thread-safe `gmtime()` and `localtime()` have been changed to use `gmtime_r()` and `localtime_r()` respectively. The biggest issue was logging timestamps, but it also affected `carb.crashreporter-breakpad.plugin` timestamps and `carb.profiler-cpu.plugin` file names.
- OVCC-1410: `carb.tasking.plugin`: Fixed a very rare case where `ITasking::addSubTask` would never return.
- OVCC-1399: Windows: Fixed an issue where `_exit()` was attempting to shut down the Carbonite Framework even though atexit callbacks were not run. NOTE: This solution requires Carbonite executables to either dynamically link the CRT, or to start the framework from the executable with either `carb::acquireFrameworkAndRegisterBuiltins()`, `OMNI_CORE_START()`, or `OMNI_CORE_INIT()`.

## 157.0

### Added

- `carb.profiler-tracy.plugin`:
  - Added tracy profiler plugin option `/plugins/carb.profiler-tracy.plugin/memoryTraceStackCaptureDepth` to capture callstacks on memory event operations in order to benefit from tracy views where it uses callstack info for grouping [defaults to 0].
  - Added tracy profiler plugin option `/plugins/carb.profiler-tracy.plugin/instantEventsAsMessages` to inject “instant” events via tracy profiler messages instead of the current “fake zones” [defaults to false].
  - Added tracy profiler plugin option `/plugins/carb.profiler-tracy.plugin/skipEventsOnShutdown` to skip injecting profile events after tracy plugin shutdown has been requested [defaults to false].

### Changed

### Fixed

- OVCC-1392: Logging performance with `/log/async` on has been greatly improved.
- OVCC-1393: Fixed an issue where `/log/async` on at shutdown could cause the last few log messages to be skipped.
- OMFP-2562: `omni.telemetry.transmitter`: updated to `libcurl` version 8.4.0 and `openssl` version 3.0.11 to address CVE-2023-38545.
- OMFP-1262: Fixed a crash on Linux when passing nullptr as the text when setting the clipboard.
- OM-112381: `omni.ext.plugin`: Fixed `exact=true` to match exactly one version.
- OMFP-2353 / OMFP-2356: `omni.kit.app`: (Linux only) Fixed a crash that could occur sometimes when `SIGTERM` or `SIGINT` (CTRL+C) was received.
- OVCC-1315: Visual Studio: Fixed compilation warning 4668 in `include/carb/Defines.h`.
- OVCC-1371: `include/carb/thread/SharedMutex.h`: Fixed a compilation error if `CARB_ASSERT_ENABLED` was forced on.
- OVCC-1398 / OMFP-2908: `carb.crashreporter-breakpad.plugin`: Heap usage and `fork()` could cause hanging inside the crash handler. These have been eliminated in the crash handler.

## 156.0

### Added

- OVCC-1372: Added the `carb::logging::StandardLogger2` sub-interface that is accessible from `carb::logging::ILogging`. This interface has functions that can be used to override the log level for the current thread only.
- OVCC-1372: Added `carb::logging::ScopedLevelThreadOverride`, a RAII class that can be used to override the log level for a given `StandardLogger2` while in scope.
- OVCC-1325: `carb.events.plugin`: `carb::IEvent` now has `attachObject()` and `retrieveObject()` functions that can be used to attach descendants of `carb::IObject` to an event.
- OMFP-1450: `omni.telemetry.transmitter`: added support to the transmitter to pull down the latest Kratos schema IDs for use on open endpoints.
  The transmitter now attempts to download the latest schema IDs file from the same base URLs as the schemas packages are downloaded from. It then reads the appropriate schema ID for the current run.

### Changed

- OVCC-1369: `carb.crashreporter-breakpad.plugin` now has a companion tool called `crashreport.gui`. If shipped along with `carb.crashreporter-breakpad.plugin`, it will be invoked in the event of a crash so the user can describe what they were doing when the crash occurred.
- OVCC-1372: **BREAKING CHANGE** `carb::logging::StandardLogger` is deprecated; `carb::logging::StandardLogger2` inherits all of its functionality as a pure virtual interface. The old `ILogging::getDefaultLogger()`, `ILogging::createStandardLogger()`, and `ILogging::destroyStandardLogger()` functions have gained an `Old` suffix, breaking the API, but not the ABI. The new `ILogging::getDefaultLogger()` and `ILogging::createStandardLogger()` functions work with `StandardLogger2`. `StandardLogger2` is ref-counted and can be destroyed with `release()`.
- **BREAKING CHANGE** The `carb::logging::ScopedFilePause` class has moved to `carb/logging/LoggingUtils.h`.

### Fixed

- OVCC-1373: Code that uses `carb/profile/Profile.h` macros will now compile if profiling is disabled at compile time by setting `CARB_PROFILING=0`.
- OVCC-1379: `carb.crashreporter-breakpad.plugin`: fixed the abort/termination handlers for all plugins that are statically linked to the Windows CRT.
- OVCC-1381: `carb.events.plugin`: Fixed slow `IEventStream` and subscription creation times when a name is not provided. The generated name will be very simple unless the `/plugins/carb.events.plugin/nameFromCaller` setting key is `true`, in which case symbol lookup will take place to determine a more descriptive generated name.
- OVCC-1367: `omni.structuredlog.plugin`: fixed how the early `privacy.toml` fields are loaded to avoid a bad user ID from being used.
  When the user ID value in the `privacy.toml` file is present and points to a non-existent environment variable, this would previously incorrectly insert the full `$env{}` tag as the user name. This fixes that behavior so that even the early loading of the file handles resolving environment variables too.

- OM-111167: fixed an explicit python3.dll load from Python.
- OMFP-1450: `omni.bind`: fixed a crash in the `omni.bind` tool due to a diff of C++ code being passed to `f""` while writing out an error message for a `--fail-on-write` failure.
- OMFP-1977: Fixed a performance regression on Windows with logging.
- OVCC-1387: `omni.structuredlog.plugin`: fixed some bad early loading of consent settings in standalone mode. This does not affect Kit or Carbonite based uses of structured logging, however, since the consent settings are loaded again through `ISettings` later during startup.

## 155.0

### Added

- OVCC-1363: Added type hints for omni.bind Python code.
- OVCC-1166: omni.bind’s input/output filenames now correctly resolve `%{cfg.buildcfg}` and `%{config}` in `premake5.lua`.
- OVCC-1364: `omni.telemetry.transmitter`: added support to the telemetry transmitter to add specific extra fields to each message instead of having to add them only at the time each message is produced. The ‘/telemetry/extraFieldsToAdd’ setting controls which extra fields from ‘/structuredLog/extraFields/’ will be added to each message. The ‘/telemetry/replaceExtraFields’ setting controls whether any existing extra fields will be replaced with the new values (true) or just be left as-is (false).
- OMFP-583: `omni.crashreporter-breakpad.plugin`: added the crash report metadata file to compressed crash report files.
- OM-108908: `carb.windowing`: Exposed monitor functions to Python.
- OM-108791: Added support for default values for arguments passed through ONI / omni.bind.

### Changed

- OVCC-1375: cleaned up `omni.bind.util.Lazy`; replaced it with Python’s `functools.cache`.
- OVCC-1377: Tweaked `CARB_HARDWARE_PAUSE()` to be more correct on x86_64 with non-Microsoft compilers, and on aarch64.

### Fixed

- OVCC-1374: A built-in plugin that was released before the framework was shut down could cause a crash during framework shutdown.
- OVCC-1376: An assertion could happen in debug builds if `Framework::tryAcquireInterfaceFromLibrary` or `Framework::loadPlugin` was called prior to `IFileSystem` being instantiated. This would only happen in certain “lightweight” instantiations of the Carbonite framework, as `OMNI_CORE_START()` or `acquireFrameworkAndRegisterBuiltins()` would not exhibit this problem.
- OM-109545: fixed `omni::ext::getExtensionPath` not to crash when it can’t find an extension.

## 154.0

### Added

- OVATUEQI-35: Added functionality to enable extra security checks when opening an archive based on the trust level of the registry from where the extension originates. By default, registries are trusted.
- OM-101243: `carb.crashreporter-breakpad.plugin`: added a helper tool called `hang.crasher` for Windows that is used to intentionally crash a Carbonite based app that has hung. This is intended to be used in CI/CD with unit test child processes that run beyond their expected timeout.
- OM-108121: `carb.input.plugin`: API to query the modifier state of input devices.

### Changed

- OM-108121: `carb.input.plugin`: Added the ability to query the number of keys pressed.
- OVCC-1347: `carb::cpp::countl_one()` and `carb::cpp::countr_one()` have been implemented as C++14-compatible equivalents to `std::countl_one()` and `std::countr_one()` respectively.
- OVCC-1347: All non-`constexpr` functions in the `carb/cpp/Bit.h` library now have `constexpr` extensions with a `_constexpr` suffix.
  Within the `carb::cpp` namespace the new functions are: `popcount_constexpr`, `countl_zero_constexpr`, `countr_zero_constexpr`, `countl_one_constexpr`, and `countr_one_constexpr`.

- OVCC-1347: Added `carb::UseCarbAllocatorAligned`, which allows overriding `new` and `delete` on a per-class basis while specifying an overriding alignment.
- OVCC-1340: Linux/Mac: Futex performance was further improved, especially contended wake-up situations where no waiters are present.
- OVCC-1347: Added `carb::thread::AtomicBackoff`, a helper class for providing a back-off in spin-wait loops.
- OVCC-1347: `carb::Allocator<>` now has an optional `Align` parameter that is used as an alignment hint.
- Several minor performance improvements for `carb.tasking.plugin`:
  - The futex system has received the same improvements from OVCC-1340.
  - Spurious wake-ups of co-routines and threads that wait on tasking objects have been eliminated.
  - Allocations are cacheline-aligned to reduce false-sharing.
  - Linux: Thread IDs are now cached, as the syscall to obtain them was very slow.
  - Linux: Handle slab allocations are now larger to reduce the total number of slow `mmap` syscalls.

### Fixed

- OM-99583: bumped python to 3.10.13+nv2 (CVE).

## 153.0

### Added

- OM-93636: added documentation on how to configure an app in a container to support telemetry.
- OVCC-1344: Added `IWeakObject` and `WeakPtr`. Interfaces implementing `IWeakObject` will support non-owning references. `WeakPtr` is similar in functionality to `std::weak_ptr`. Implementations are encouraged to use `ImplementsWeak` to add support for weak pointers in their implementations.
Consumers of weak pointers must add the following to their `premake5.lua` projects to enable weak pointer support on OSX:

```lua
filter { "system:macosx" }
linkoptions { "-Wl,-U,_omniWeakObjectGetOrCreateControlBlock" }
linkoptions { "-Wl,-U,_omniWeakObjectControlBlockOp" }
```

- OVCC-1350: string_view: Added a `std::string` to `carb::string_view` implicit conversion.

### Changed

- OVCC-1340: Linux/Mac: Futex performance was improved, which has a knock-on effect for all synchronization primitives. Four-byte futexes now use the ParkingLot as opposed to going directly to the system futex, which saves ~97% on no-op cases. Waking 8 threads has improved about 55%, and the threads wake about 20% faster due to less contention.
- OVCC-1340: `carb::thread::futex`: the `wake`, `wake_one`, and `wake_all` functions are deprecated in favor of respective replacement functions: `notify`, `notify_one`, and `notify_all`. These new names better match the standard and are more distinct from “wait”.
- OVCC-1341: `carb::allocate`, `carb::deallocate`, and `carb::reallocate`...
  - Performance improvements: Allocations in `carb.tasking`, `carb.dictionary.plugin`, and `carb.events.plugin` are now based on Microsoft’s Mimalloc allocator for improved performance, especially in multi-threaded contentious environments. On Windows, single-threaded alloc/dealloc is 54% faster; contended alloc/dealloc is 219% faster. On Linux (GLIBC 2.35), single-threaded alloc/dealloc is 23% faster; contended alloc/dealloc is 864% faster.
- OVCC-1348: Fixed a performance regression in the Linux `carb.profiler-cpu.plugin`.
- OVCC-1355: `carb.crashreporter-breakpad.plugin`: Python tracebacks are now reported with full filenames.
### Fixed

- OM-106256: fixed the supported targets check on registry 2.0.
- OM-100519: `omni.ext.plugin`: fixed a crash on shutdown when an extension was removed.
- OM-106907: fixed an issue where not all extension summaries were included in registry v2.

## 152.0

### Added

- OVCC-1284: `omni.structuredlog.plugin`: added an interface to allow extra fields to be added to each structured log message. Extra fields may be provided programmatically with the `omni::structuredlog::IStructuredLogExtraFields` interface or by adding a key/value pair to the ‘/structuredLog/extraFields/’ settings branch on framework startup.
- OVCC-1323: `carb.crashreporter-breakpad.plugin`: improved some logging and metadata from the crash reporter. This includes:
  - printed an info message indicating that the crash reporter successfully started up.
  - printed info messages any time the crash reporter or upload setting is toggled.
  - renamed some existing CI metadata keys to be more clear about what they are collecting.
  - added several more GitLab and TC specific metadata values.
  - added a metadata value that indicates whether TC or GitLab is being used to run the crashing job.
  - tried to collect the app name and version from multiple settings.
  - added a GMT timestamp indicating when the crash occurred.
- OM-93636: `omni.telemetry.transmitter`: added a new message processing protocol (called ‘defaultWithList’) to the telemetry transmitter. This will batch up events in a JSON array and deliver them to the endpoint as a JSON batch object.
- OVCC-1320: Added `carb/time/TscClock.h`, a CPU time-stamp counter sampling “clock” for super-high-performance profiling and timing purposes.
- OVCC-1320: Added `carb/cpp/Numeric.h` with an implementation of `std::gcd`: `carb::cpp::gcd`.
- OVCC-1320: `HandleDatabase` has changed to using a `LocklessQueue` for its free-list instead of `LocklessStack`, as this proved to perform better due to less contention, especially on Linux.
- OVCC-1343: Added a `hash` field to `carb::datasource::ItemInfo`.
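The `carb::cpp::gcd` entry above (OVCC-1320) backports `std::gcd` (a C++17 addition) for C++14 builds. A hypothetical C++14-compatible sketch — not the Carbonite source — with a sample use alongside `TscClock.h`-style timing, where reducing a ticks-to-nanoseconds ratio avoids overflow:

```cpp
#include <cassert>

// C++14-friendly gcd: a single-return recursive constexpr Euclidean algorithm.
constexpr long long gcd_sketch(long long a, long long b)
{
    return b == 0 ? (a < 0 ? -a : a) : gcd_sketch(b, a % b);
}

// Compile-time checks: reduce a hypothetical 2.4 GHz TSC frequency against
// 1e9 ns/s, giving the exact ratio 12 ticks per 5 nanoseconds.
static_assert(gcd_sketch(12, 18) == 6, "gcd(12, 18) == 6");
static_assert(gcd_sketch(2400000000LL, 1000000000LL) == 200000000LL,
              "reduce the 2.4 GHz / 1e9 ns ratio");
```

The single-expression body keeps it valid under C++14's restricted `constexpr` rules; C++17's `std::gcd` can be written with ordinary loops.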
### Changed

- OVCC-1316: unified the parsing of the command line options in kit-kernel so that parameters for other options are parsed consistently in all cases. All options now support providing their parameter either after an equals sign in the same command line argument or in the following argument.
- OM-93335: `omni.kit.app`: reverted the “python” token change to include the Python version in the path.
- OVCC-1324/OM-96947: The `carb.tasking` setting `debugTaskBacktrace` is now off by default in release builds.
- OVCC-1320: Greatly improved `carb.tasking` handle allocation speeds. Created a Fibers-vs-Threads document with history and current performance times. The maximum number of fibers was increased to support the new “skynet” benchmark unit test.

### Fixed

- OVCC-1294: Fixed a performance issue with using `omni.log` from Python.
- OVCC-1313: Fixed a crash that could occur with `carb::settings::appendToStringArray()`.
- OVCC-1310: `carb.crashreporter-breakpad.plugin`: Fixed an issue where a deadlock could occur when a crash occurred.
- OM-103625: `omni.ext`: extension packing does not preserve file symlinks on Linux.
- OVCC-1318: `carb.crashreporter-breakpad.plugin`: Fixed an issue where a crash in multiple threads simultaneously would likely cause the process to exit without writing crash information.
- OVCC-1298: Worked around issues with `carb::thread::recursive_shared_mutex` when used with Link-Time Code Generation.
- OM-100518: Load and Release hooks that are not unregistered when a module is unloaded will now log an error instead of causing a crash when called. Note that this is a workaround in an attempt to diagnose libraries that are not unregistering Load and Release hooks.

## 151.0

### Added

- `omni.ext.plugin`: added support for the registry v2 with incremental index loading.
- OVCC-414: Added the `omni::expected<T, E>` and `omni::unexpected<E>` class templates, which are ABI-stable implementations of their `std` counterparts.
  These monads are useful in representing potential error conditions when exceptions are not desired or not enabled. The API for these types is compatible with the `std` version, with a few minor changes:
  - `error_type` for both class templates is allowed to be `void` to match parity with other languages with result monads (e.g.: Rust allows `Result<T, ()>`).
  - The `omni` implementation is less `constexpr`-friendly than the `std` implementation, mainly due to requiring C++14 compatibility.
  - Cases where the C++ Standard leaves things as implementation-defined will generally result in program termination (e.g.: calling `expected.error()` when `expected.has_value()` will hit an assertion).
- OVCC-1301, OVCC-1303, OVCC-1306: Created gdb pretty-printers for `carb::extras::HandleDatabase`, `carb::RString` and several `carb.tasking` types. Additionally, commands were created for `task list` (lists all `carb.tasking` tasks), `task bt <task>` (gives the current backtrace of a `carb.tasking` task), etc.

### Changed

- OVCC-414: Changed the C++17 utility type backports (e.g.: `carb::cpp17::in_place_t`) to alias their `std` counterparts when C++17 is enabled instead of being distinct types. This has the potential to break APIs where there is an overload for both the `carb` and `std` versions, which can be fixed by deleting one of them. Keep the `carb::cpp17` overload if you need support for pre-C++17; keep the `std` overload if you only support C++17 and beyond.
- OVCC-1293: `omni.structuredlog`: changed the `omni.structuredlog` tool to include the event name in generated struct and enum names. This allows the same field name for an object to be used across multiple events within a schema.
- OVCC-1264: The `carb::cpp17` and `carb::cpp20` namespaces have been merged into `carb::cpp`. Likewise, the `carb/cpp17` and `carb/cpp20` include directories have been merged into `carb/cpp`.
  For backwards compatibility, the previous `carb::cpp17` and `carb::cpp20` directories have files (marked as deprecated) that pull the `carb::cpp` symbols into their respective namespaces (`carb::cpp17` or `carb::cpp20`).
- OVCC-1311: `carb::extras::HandleDatabase::makeScopedRef` (the `const` variant) now returns a `ConstHandleRef` which can only be used for const access to mapped items.

### Fixed

- OVCC-1288: Carbonite uses a new `doctest` package that fixes a build issue on glibc 2.34+ where `SIGSTKSZ` is not a constant.
- OVCC-1311: Fixed a compilation error in `carb::extras::HandleDatabase::makeScopedRef` (the `const` variant).
- OVCC-1166: omni.bind’s include paths now correctly resolve `%{cfg.buildcfg}` in `premake5.lua`.

## 150.0

### Added

- OVCC-1281: Added `OMNI_ATTR("nodiscard")` to omni.bind.
- OM-99352: `omni.structuredlog.plugin`: added the `/structuredLog/emitPayloadOnly` setting to allow the structured logging system to skip adding the CloudEvents wrapper to each event on output. This is useful for apps that want to use the structured logging system for a purpose other than telemetry, and allows the schema for each event to be directly used for full validation of each event instead of just defining the layout of the “data” field.
- Added `omni.ext.get_all_sys_paths` and `omni.ext.get_fast_importer_sys_paths` to get all sys paths and fast importer sys paths respectively.
- `omni.ext.plugin`: log when extensions are requested to be enabled or disabled.
- OM-96953: `omni.ext.plugin`: the Kit-kernel extension manager now outputs structured log events any time an extension is loaded or shut down. The extension’s name and ID are included in both events. On the extension startup event, the startup time in milliseconds is also included.

### Changed

- OM-97668: `omni.ext.plugin`: the FS watcher will now ignore changes to `extension.gen.toml` so as not to reload extensions when they are being installed by parallel kit instances.
- OVCC-1186: omni.bind: out-of-date messages are no longer considered warnings.
- OM-99926: `omni.ext.plugin`: defaulted the `/app/extensions/fsWatcherEnabled` setting to `false` when running in a container. When running outside of a container this setting still defaults to `true`.
- OVCC-1273: `carb.crashreporter-breakpad.plugin`: removed the missing-metadata check before uploading a batch of old crash reports; missing metadata is instead checked for each individual crash report before uploading.
- OM-93335: **Runtime-breaking change**: renamed `carb_scripting_project` from “scripting-python-3.7m” to “scripting-python-3.7”.
  - The runtime Python version is now determined by the `carb.scripting-python.plugin` settings in `kit-core.json`.
  - The Python prebuild link is now named python-3.10, python-3.9, and python-3.7.

### Fixed

- OVCC-1268: Plugin startup functions `carbOnPluginStartup` and `carbOnPluginStartupEx` will no longer assert on debug builds when called from within a `carb.tasking` task and the calling thread changes.
- OVCC-1265: `carb.filesystem`
- **OM-99795**: `omni.ext.plugin`: fixed a hang when `set_extension_enabled_immediate` is called from Python while `parallelPullEnabled` is enabled.
- **OVCC-1285**: `carb.crashreporter-breakpad.plugin` changes:
  - The python traceback file is now named `$crashid.py.txt` by default.
  - Metadata is written out before gathering volatile metadata in case of a double-fault.
  - Attempts to upload previous crashes will now include all available files (such as python traceback files).
  - Failing to load metadata will produce `MetadataLoadFailed` metadata for the upload.
- **OVCC-1291**: `carb.tasking.plugin`: Fixed a rare edge-case crash that was introduced by OVCC-1268.

### Added

- **OM-93952**: `omni.kit.app`: (Linux only) support for `/app/preload`, a key which can be used to relaunch with `LD_PRELOAD` prefixed with the value of this key.
- **OVCC-1258**: `omni::string_view` is available as an alias for `carb::cpp17::string_view`, and related typedefs.
- **OVCC-1261**: `carb.crashreporter-breakpad.plugin`: Additional “PythonBacktraceStatus” metadata is now recorded for the status of gathering the python backtrace during a crash.
- **OVCC-1200**: `carb.crashreporter-breakpad.plugin`: added environment variables to override selected settings for the crash reporter. These are only intended to be used in debugging situations. The following environment variables have been added:
  - `OMNI_CRASHREPORTER_URL` will override the value of the `/crashreporter/url` setting.
  - `OMNI_CRASHREPORTER_ENABLED` will override the value of the `/crashreporter/enabled` setting.
  - `OMNI_CRASHREPORTER_SKIPOLDDUMPUPLOAD` will override the value of the `/crashreporter/skipOldDumpUpload` setting.
  - `OMNI_CRASHREPORTER_PRESERVEDUMP` will override the value of the `/crashreporter/preserveDump` setting.
  - `OMNI_CRASHREPORTER_DEBUGGERATTACHTIMEOUTMS` will override the value of the `/crashreporter/debuggerAttachTimeoutMs` setting.
- **OM-84354**: `omni::extras::OmniConfig` has been added to replace omni-config-cpp. This new class has similar functionality but uses standard Carbonite utilities, which improves some things such as Unicode support. This new class requires carb.dictionary.serializer-toml.plugin for full functionality, but it will work without the framework.
- **OVCC-1277**: `omni.structuredlog.plugin`: added the `/structuredLog/needLogHeaders` setting to control whether header JSON objects will be added to each written log file.
- **OVCC-1279**: `omni.telemetry.transmitter`: added support for accepting a `file:///` URI in the `/telemetry/endpoint` and `/telemetry/transmitter/<index>/endpoint` settings. This file URI can also point to `file:///dev/stdout` or `file:///dev/stderr` to write the output to stdout or stderr respectively.
When a file URI is used as the endpoint, the event data will not be sent to another server but instead just written to a local log file. This is useful for cloud or farm setups where another log collecting system will be run to collect and aggregate data before sending elsewhere.

- OVCC-1258: `carb::cpp17::basic_string_view` and associated typedefs (i.e. `string_view`) are now considered ABI- and interop-safe and may be used across ABI boundaries. These classes have been checked against the C++ standard and improved with additions made up to C++23.
- OVCC-1259: `carb::cpp20::span` (and `omni::span`) now support limited ranges: that is, classes such as `std::vector` that have a `data()` method and a `size()` method (with other requirements; see the documentation) can be used to construct a `span`.
- OM-96952: `carb.profiler-cpu.plugin`, `carb.profiler-tracy.plugin`, and `carb.profiler-nvtx.plugin` will ignore floating-point value records of NaN and Infinity.
- OVCC-1269: Fixed a bug in `omni::string` where certain operations (such as appending) that exceeded the small string optimization size would result in a string that was not null terminated.
- OVCC-1276: Fixed `omni::string` constructor and `assign` which accept `(pointer, size)` erroneously throwing when receiving `(nullptr, 0)`. This is now legal.
- OM-95894: Added clipboard support for virtual windows.

## 148.0

### Added

- OVCC-1257: `omni.bind`: Support `externalincludedirs`.
- OVCC-1240: `carb.crashreporter-breakpad.plugin`: Added wide-string volatile metadata support.
- OVCC-1248: `omni.telemetry.transmitter`: Added the crash reporter to the telemetry transmitter tool.
- OVCC-68: Added `carb::extras::withFormatV()` and `carb::extras::withFormatNV()` which will format a string as by `std::vsnprintf()` and call a Callable with the formatted string (the `N` variant also passes the length). This utility removes the need to allocate space on the stack or the heap to format the string, as the function does it.
- OVCC-68: Added macros `CARB_FORMATTED`, `CARB_FORMATTED_SIZE`, `CARB_FORMATTED_N`, and `CARB_FORMATTED_N_SIZE` which initialize the `va_list` machinery for a varargs function, and call `carb::extras::withFormatV()` or `carb::extras::withFormatNV()` (for the `N` variants) to format the string.
- OM-95249: `omni.kit.app.plugin`: Added a setting to clear user config extension version selections.
- OVCC-1255: Carbonite packages now include `tools/gdb-syms/gdb-syms.py`, a Python script that can be used with the GDB command `source /path/to/gdb-syms.py` which will attempt to download symbols from the Omniverse symbol server.
- OVCC-1250: `carb.scripting-python.plugin` is now available for Python 3.8 and 3.9, in addition to the previously available 3.7 and 3.10.

### Changed

- OVCC-1241: removed the `/privacy/externalBuild` setting from being able to override internal telemetry related behavior.
- OVCC-1237: Updated Carbonite, `omni.bind`, and `omni.structuredlog` to Python 3.10.11.
- OVCC-68: `carb.profiler-tracy.plugin` and `carb.profiler-nvtx.plugin` have been refactored to share more common code and to improve performance.
- `omni.ext.plugin`: downgraded “Extension with the same id is already registered” to info level (was a warning).
- OM-95781: `omni.ext.plugin`: deprecated and cleaned up stripping level support in the registry.

### Fixed

- OVCC-1251: Worked around slow extension unloading code in omni.ext.
- OVCC-1242: `carb.filesystem` (Windows only) Fixed an issue introduced in 147.0 where file timestamp calculations could vary between `getFileInfo()`, `getFileModTime()`, `getModTime()`, and the `DirectoryItemInfo` passed to the callback for `forEachDirectoryItem()`.
- OVCC-1240: `carb.crashreporter-breakpad.plugin`: Fixed an issue that could cause a stack overflow on Linux while in the crash handler, resulting in lost crash information.
- OVCC-1245: Fixed a compilation issue that could happen in some cases when `omni/Function.h` was included.
Also fixed a few missing include guards.
- OVCC-1247: `carb.filesystem` (Windows only) Fixed an assertion failure that can occur if an enumerated file is deleted during the enumeration.
- OVCC-1248: `omni.telemetry.transmitter`: disabled assertion dialogs in the telemetry transmitter.

## 147.0

### Added

- OVCC-1228: Added `CARB_ASSERT_INTEROP_SAFE` that checks types for trivially-copyable and standard layout, which are required for interop safety.
- OVCC-1232: `carb.crashreporter-breakpad.plugin`: Added `/crashreporter/pythonTracebackArgs` (defaults to `dump --nonblocking --pid $pid`) that can be used to override options to `py-spy`.
- OM-92726: Added the `OMNI_KIT_ALLOW_ROOT=1` environment variable as an alternative to `--allow-root` for the kit executable.
- OVCC-1221: Added some helper functions to `carb::ErrorApi`.
- OVCC-1222: Added documentation around `omni.kit.app`, `omni.ext.plugin` and many other items in the `omni::kit`, `omni::ext` and `omni::extras` namespaces.

### Changed

- OVCC-1222: **POSSIBLY BREAKING CHANGE**: Fixed a spelling error in the `prerelease` members of `omni::ext::Version` and `omni::extras::SemanticVersion`.
- OVCC-1221: `carb.filesystem`: Eliminated all info, warning and error logs. Instead, error states are set through `carb::ErrorApi`. Documentation indicates what error states are set by which functions.
- OVCC-1216 / OM-90179: The Carbonite framework no longer spams “pluginA is already a dependency of pluginB” when plugins are repeatedly acquired. However, if a plugin is acquired a significant number of times, a performance warning will appear recommending use of `getCachedInterface()`.

### Fixed

- OM-86228: `carb.launcher.plugin`: fixed support for launching detached child processes. Previously a zombie process was left on Linux that needed to be externally cleaned up. A detached child process now launches as an orphaned grandchild process that is automatically cleaned up by the terminal session or initd.
- **OM-92725**: Fixed.
- **OM-92726**: Fixed a regression so that only the first positional argument to the kit executable is used as an app file.
- **OVCC-1221**: `carb.filesystem`: `getModTime()` was fixed to match `getFileModTime()`, and `getCreateTime()` was fixed to match `getFileCreateTime()`.
- **OVCC-1229**: `omni.telemetry.transmitter`: added the missing `libjemalloc.so*` files to the `telemetry_transmitter` package. The same files were also added to the `carb_sdk` and `carb_unittests` packages.
- **OVCC-1229**: `omni.telemetry.transmitter`: fixed an issue with `omni::extras::UniqueApp` that could cause guard files to fail to be created if the requested directories do not exist. This now makes sure to create all directories in the given path when creating guard lock files.
- **OVCC-1213**: `carb.tasking.plugin`: Fixed a rare race condition that could lead to a hang or crash if `ITasking::reloadFiberEvents()` is called from multiple threads simultaneously.
- **OM-93618**: Linux: Removed `jemalloc` as it could cause some crashes.

## 146.0

### Added

- **OM-89427**: Linux: Executables now use `jemalloc` for better heap performance.
- **OM-89273**: `omni.structuredlog.plugin`: added support for getting privacy settings in `privacy.toml` from environment variables instead of just fixed strings. The settings in this TOML file can now have values like `$env{<envvar_name>}` to specify that their value should come from an environment variable.
- **CC-1206**: `carb.crashreporter-breakpad.plugin`: added a “[crash]” tag to each log message for the handling of a current crash and a “[previous crash]” tag to each log message when uploading old crash reports so they can be easily and explicitly differentiated in a log.
- **OVCC-1214**: `carb::delegate::RefFromDelegate<>` and `RefFromDelegate_t<>` were created in order to specify the type of a `DelegateRef` from a `Delegate`.
- **CC-1202**: Implemented `carb::cpp20::span<>` which is a C++14+ implementation of `std::span` that is mostly compatible with the C++20 version, and also ABI safe. It also implements the C++23 requirement that it be trivially copyable and therefore is also interop-safe. This class is also available as `omni::span<>` and has MSVC visualizers.
- **CC-1193**: `carb.crashreporter-breakpad.plugin`: added the process launch command line to the crash report metadata. Any paths in the command line will be scrubbed of usernames.
- **CC-1193**: `carb.crashreporter-breakpad.plugin`: added the current working directory to the crash report metadata. Any usernames will be scrubbed from expected path components.
- **CC-1193**: `carb.crashreporter-breakpad.plugin`: optionally added the environment block to the crash report metadata. Any detectable usernames will be scrubbed from the variable values. This is enabled with the boolean setting `/crashreporter/includeEnvironmentAsMetadata`, which defaults to `false`.
- **CC-1138**: Added `carb::ErrorApi`, the low-level unified layer for propagating errors across modules and programming languages. High level (C++ and Python) utilities that are easier to use will be coming in a future release (soon).
- **OVCC-1201**: `carb.crashreporter-breakpad.plugin` at crash time will run `py-spy` (if present) to capture python traceback information about the crashing process, which is then uploaded as a file to the crash server.
- **OM-74259**: `omni.kit.app`: add `IApp::restart` API to restart the application.
- **OM-54868**: `omni.ext.plugin` adds the `uninstallExtension` API.

### Changed

- **POSSIBLY BREAKING CHANGE**: CC-1199: `omni.bind` no longer requires an empty “Dummy.cpp” file on Windows; it is no longer packaged with the `carb_sdk` and `carb_sdk+plugins` packages. On Linux, an empty file is still required but this file has been renamed to `Empty.cpp`.
- OM-89273: changed the behavior of `carb::extras::EnvironmentVariableParser` so that if no prefix string is given, all environment variables get stored as normal environment variables instead of pathwise overrides.
- `carb.profiler-tracy.plugin`: Updated to use Tracy v0.9.1. **NOTE**: Viewers will need to update to the same binary version in order to accept and view captures.
- CC-1207: `omni.telemetry.transmitter`: the default transmission protocol of the transmitter has now changed such that the `data` field of each event is not converted to a string value before sending to the endpoint server. A new protocol mode called `dataWithStringify` has been added to retain the previous behavior. This can be set using the `/telemetry/eventProtocol` or `/telemetry/transmitter/<index>/eventProtocol` settings.
- CC-1211: Windows: Error messages for failing to load plugins or extensions have been improved when it is suspected that the failure is due to a dependent library. Improved documentation for `carb::extras::loadLibrary()`.
- Call `ITasking::reloadFiberEvents` in the startup functions of `carb.profiler-cpu.plugin`, `carb.profiler-nvtx.plugin`, and `carb.profiler-tracy.plugin` so they still work correctly when loaded on demand, as opposed to only at app startup.

### Fixed

- OM-90424: Fixed the `config` token set in `omni.kit.app` incorrectly resolving to release for debug builds on Linux.
- OM-90358: `carb.crashreporter-breakpad.plugin`: avoided calling `abort()` in the termination handler since it can cause up to two extra unnecessary crash reports to be generated and uploaded.
- CC-1210: `omni.structuredlog.plugin`: made retrieving the privacy settings safer, especially if a value in the given privacy file has the wrong inferred type (i.e.: a boolean instead of a string).
- `carb.profiler-mux.plugin`: Fixed a race condition when appending new profilers while another thread iterates over them.
- OM-91369: `Empty.cpp` is now packaged along with omni.bind for MacOSX packages.
- OVCC-1214: Fixed an issue with `carb::delegate::DelegateRef<>` where, upon destruction, it would unbind all elements from the referenced `carb::delegate::Delegate<>`.
- OVCC-1212: Load hooks are no longer called with the Framework mutex locked in order to prevent deadlocks where another thread may be waiting on the Framework mutex, but a load hook is waiting on that other thread. However, load hooks must still complete before any attempts to acquire that specific interface return.
- CC-1212: `carb.crashreporter-breakpad.plugin` now uses cached interfaces instead of acquiring interfaces when settings change. This can help prevent deadlocks involving the Framework mutex.
- omni.ext: The cache path is now created even when creating extension folders is disabled.
- omni.ext: Removed the progress bar from extension packing/unpacking code (speedup).
- OVCC-1219: `carb.crashreporter-breakpad.plugin`: all metadata and extra files key names will now be sanitized regardless of whether they come through a command line setting, config file, direct change to the settings registry, or one of the `carb::crashreporter::addCrashMetadata()` or `carb::crashreporter::addCrashExtraFile()` helper functions.

## 145.0

### Added

- CC-1176: `carb.tasking.plugin`: Added `ITasking::reloadFiberEvents()`, a safe method to reload `IFiberEvents` interfaces.
- CC-1137: `carb.crashreporter-breakpad.plugin`: allowed the crash reporter to enable its configuration functionality once the `carb.settings.plugin` plugin (or another plugin that provides the `carb::settings::ISettings` interface) has been loaded. The `carb::settings::ISettings` interface may not be available when the crash reporter plugin is loaded; this can happen if the crash reporter plugin is loaded before the `carb.settings.plugin` plugin.

### Changed

- CC-1198: On Windows, `kit.exe` is now known as “NVIDIA Omniverse Kit” instead of “NVIDIA Omniverse Carbonite SDK”.
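The cached-interface pattern mentioned in CC-1212 above (and recommended by the OVCC-1216 performance warning) can be sketched generically: instead of performing an expensive interface acquisition on every use, the first call stores the result in an atomic pointer that later calls read cheaply. This is an illustrative sketch only; `ISettings`, `acquireInterfaceSlow`, and `getCachedSettings` are hypothetical stand-ins, not the Carbonite API.

```cpp
#include <atomic>

// Hypothetical stand-in for an expensive Framework lookup; in Carbonite the
// real call would be something like tryAcquireInterface<T>().
struct ISettings { int dummy; };

static int g_acquireCalls = 0; // counts how often the slow path runs

ISettings* acquireInterfaceSlow() {
    ++g_acquireCalls;
    static ISettings instance{}; // pretend this lives in a plugin
    return &instance;
}

// Sketch of the "cached interface" pattern: the slow acquisition runs once;
// later callers only read a cached atomic pointer.
ISettings* getCachedSettings() {
    static std::atomic<ISettings*> cached{nullptr};
    ISettings* p = cached.load(std::memory_order_acquire);
    if (!p) {
        p = acquireInterfaceSlow();
        cached.store(p, std::memory_order_release);
    }
    return p;
}
```

The pointer is cached for the lifetime of the process; a real framework additionally has to invalidate the cache when the providing plugin unloads, which is part of why the real API is framework-managed.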
### Fixed - OM-90036: Incorrect ternary logic when an optional `classDocstring` parameter is passed to `carb::defineInterfaceClass`. - CC-1204: The Carbonite dependency files now have an “alias” of the `python-3.10` dependency as `python`. This is temporary until Packman allows aliases. ### Added - OM-81978: New `carb.profiler-mux.plugin` that can be used to forward profiler events to multiple other loaded implementations of `carb::profiler::IProfiler`. - Add start_time and thread_id to Python get_profile_events, and add Instant and Frame events to monitor. - CC-1158: `carb.events.plugin`: - `IEventStream` objects can be created with a name. If a name is not provided, a name is generated based on the caller of the creation function. The name can be retrieved via the `getName` function. - `ISubscription` objects now generate a name based on the module that contains the `onEvent` function if a name is not provided. The name can be retrieved via the `getName` function. - Profiling of event notification now is more explicit and provides the name of the `IEventStream` and each `ISubscription` as it is called, along with the `IEvent::type` field (which unfortunately is numeric as the string name is lost to hashing). - CC-1161: added methods of determining whether a given plugin or module is a debug or release build without needing to load up and interrogate the library. On Windows, the string “(Debug)” will show up in the “Product Name” field of the module’s property window (“Details” tab). On Linux, the `g_carbIsDebugConfig` symbol will be present in debug modules and `g_carbIsReleaseConfig` symbol will be present in release modules. - CC-1153: Added kit-kernel plugins `omni.kit.app.plugin` and `omni.ext.plugin` and launcher executable `kit`. - OM-88365: Added optional `classDocstring` parameter to `carb::defineInterfaceClass`. - CC-1188: `omni.platforminfo.plugin`: added support for detecting and handling processes running under compatibility mode on Windows 10 and up. 
`omni::platforminfo::IOsInfo` will now report as close to the actual OS version as it can, even when running under compatibility mode.

### Changed

- CC-1144: Telemetry: updated the default transmitter schemas packages download URLs.
- CC-1159: `carb.tasking.plugin`: A task name can now be passed as a `Tracker` to make task naming easier.
- CC-1149: StructuredLog: now fails early during prebuild if the Carbonite package path passed to `setup_omni_structuredlog()` either doesn’t exist or doesn’t contain the tool at the expected subfolder. Previously this would only fail at build time if the tool was missing.
- `carb.dictionary.plugin`: New verbose printout when un-subscribing a subscription, listing the remaining active subscriptions.
- Standardized internally on `detail` for an implementation-specific namespace, as opposed to `details`.

### Fixed

- CC-1174: Fixed a situation whereby the Framework would reject a default plugin even though it supported the requested version.
- CC-1173: Fixed a compile issue on Clang where `include/carb/delegate/Delegate.h` did not `#include <memory>`.
- CC-1160: Framework: fixed framework startup so that it can succeed even if no plugins are present.
- CC-1177 / OM-86573: `carb.profiler-cpu.plugin`: Fixed a stack overflow crash that could occur if a frame contained too many dynamic strings.
- OM-85428: Build: made the command line flag generation in `omni.bind.bat` files deterministic to avoid unintentional rebuilding of projects.
- CC-1190: `carb.dictionary.plugin`: Fixed a potential assert (debug) or bad access (release) that could occur if multiple threads were iterating an item array.

## 143.0

### Added

- Added `CARB_FILE_DEPRECATED` to warn that a deprecated file has been included. The warning can be suppressed by defining `CARB_IGNORE_REMOVEFILE_WARNINGS`.
- CC-1139: Added `CARB_NODISCARD_TYPE`, `CARB_NODISCARD_MSG`, and `CARB_NODISCARD_TYPE_MSG` to supplement the existing `CARB_NODISCARD` macro for decorating types, decorating functions with a custom message, and decorating types with a custom message, respectively. These fall back to their most reasonable approximation if the compiler does not support them.
- CC-1117: Added a `carb.tasking.plugin` debug visualizer to `carb.natvis` for Visual Studio 2019+ that displays an aggregate view of task counts by task function (see `[task counts by function]` in the CarbTaskingDebug visualizer).
- CC-1118: Prerequisites of a `carb.tasking.plugin` task are now visible in the tasks listed under the `[task database]` member of the CarbTaskingDebug visualizer (called `[prerequisite]`).
- OM-66287: StructuredLog: added an optional process lifetime heartbeat event that can be emitted at regular intervals from `omni.structuredlog.plugin`. These heartbeat events are disabled by default but can be controlled using the `omni::structuredlog::IStructuredLogSettings2` interface (specifically the `setHeartbeatPeriod()` method).
- CC-1151: Windows crash dumps will include unloaded module info, which will aid debugging when a crash occurs in an unloaded module (such as when a callback is called but the module has been unloaded).
- CC-51: `carb.tasking` now provides `carb::tasking::ITasking::nameTask()` which allows naming a task. The task name shows up in debug information and is retrievable with `getTaskDebugInfo()`.
- `include/carb/memory/Utils.h` now has `carb::memory::protectedMemmove()` and `carb::memory::protectedStrncpy()`, which are implementations of `memmove()` and `strncpy()` (respectively) that fail gracefully instead of crashing if a memory access violation occurs.
- **CC-1141**: StructuredLog: allowed all event output to be written to stdout or stderr using the `/structuredLog/defaultLogName` setting and a value of either `/dev/stdout` or `/dev/stderr`.
When used, all events will be redirected to the specified stream regardless of per-event or per-schema settings.

### Removed

- The following public items were determined to be internal to Carbonite only and have been removed:
  - `omni::log::configureLogChannelFilterList()`
  - `omni::log::registerLogChannelFilterList()`
  - `omni::log::ILogChannelFilterList_abi`
  - `omni::log::ILogChannelFilterList`
  - `omni::log::ILogChannelFilterListUpdateConsumer_abi`
  - `omni::log::ILogChannelFilterListUpdateConsumer`
  - `omni::log::WildcardLogChannelFilter`
  - Associated Python bindings for the above

### Changed

- **DEPRECATED**: The following include files have been marked as deprecated and may be removed in the future:
  - `LogChannelFilterUtils.h`
  - `WildcardLogChannelFilter.h`
- CC-1113: `carb.dictionary.plugin` keeps child keys in the same order as they are created via `createItem`, `update`, or `duplicateItem`. The `IDictionary` interface was changed to version 1.1 to reflect this.
- CC-1113: `carb.dictionary.serializer-json.plugin` and `carb.dictionary.serializer-toml.plugin` were changed to SAX-style parsing, so that keys encountered are pushed into `carb.dictionary.plugin` in the order specified in the data.
- CC-1117: `carb.tasking.plugin` now considers the `debugTaskBacktrace` setting to be on by default.
- All non-generated public include files now use relative paths.
- CC-1147: Amended the coding standard on thread-safety, internal code, and other practices.

### Fixed

- OM-71761: updated to OpenSSL 1.1.1t, which fixes CVE-2023-0286 for omni.telemetry.transmitter.
- OM-80517: Linux: Fixed some issues that could lead to deadlocks when running on CentOS 7 (glibc 2.17).
- CC-1113: Log filters (specified in settings under `/log/channels`) are now processed in order, provided that the underlying `IDictionary`, `ISettings`, and `ISerializer` interface(s) respect order.
Note that the Carbonite-provided `carb.dictionary.plugin`, `carb.dictionary.serializer-json.plugin`, `carb.dictionary.serializer-toml.plugin`, and `carb.settings.plugin` all respect order. The logging system now automatically monitors the `/log/channels` settings key and processes changes as they occur.
- Fixed and improved several visualizers for Carbonite types listed in `carb.natvis` (for Visual Studio 2019+). Carbonite types and members that have visualizer support are now tagged with `CARB_VIZ`.
- OM-79835: omni.bind: Avoid touching .gen.h files when contents haven’t changed.
- CC-1154: Fixed issues with public include files compiling on GCC 8.

## 142.0

### Added

- OM-81922: updated to repo_build 0.29.4 to preserve symlinks when copying in premake.
- OM-59371: Added the requested feature to stop all voices: `IAudioPlayback::stopAllVoices`.
- CC-1087: New carb.audio-forge setting: `/audio/allowThreadTermination`. Forge will attempt to terminate its own threads if they’re detected to be hanging as a result of watchdog timers expiring. Terminating threads is not safe and may cause a crash/hang later on. Setting this to `false` will switch the behavior to abort when the watchdogs expire. This is intended to be set to `false` during tests to avoid a terminated thread resulting in a deferred crash.
- CC-241: Added `omni::function`, an ABI safe drop-in replacement for `std::function`. Support for `omni::function` callbacks in ONI will be added in CC-1131.
- Added a `carb.natvis` visualizer for `omni::string`.

### Changed

- CC-1104: `carb::IFileSystem::subscribeToChangeEvents()` on Mac OS now uses the `FSEvents` backend. This should have a lower performance overhead, and modify events are no longer dependent on filesystem timestamps. The timing and ordering of events may change as a result. This also limits the maximum number of file subscriptions to ~512.
- OM-80477: allowed the structured log session ID to be retrieved without a consent check.
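The ordering guarantee described in CC-1113 above (dictionary child keys kept in creation order, with updates not reordering existing keys) can be sketched with a minimal insertion-ordered map built on a vector of key/value pairs. This is an illustrative sketch, not the real `IDictionary` API; `OrderedDict`, `createItem`, and `keys` are hypothetical names.

```cpp
#include <string>
#include <utility>
#include <vector>

// Minimal sketch of an insertion-ordered dictionary: new keys append to the
// end, and updating an existing key keeps its original position.
class OrderedDict {
public:
    void createItem(const std::string& key, int value) {
        for (auto& kv : items_) {
            if (kv.first == key) {
                kv.second = value; // update in place, order unchanged
                return;
            }
        }
        items_.emplace_back(key, value); // new key: append, preserving order
    }

    std::vector<std::string> keys() const {
        std::vector<std::string> out;
        for (const auto& kv : items_)
            out.push_back(kv.first);
        return out;
    }

private:
    std::vector<std::pair<std::string, int>> items_;
};
```

A vector of pairs trades O(n) key lookup for exact iteration order; a production implementation would typically pair the ordered list with a hash index so both lookup and ordered traversal stay cheap.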
### Fixed

- OM-81171: Fixed several issues stemming from shutdown when started by Python.
- CC-1116: removed a hard link dependency on X11 in `omni.platforminfo.plugin`. The required functionality is now dynamically loaded from that library instead.
- CC-1126: Implemented a workaround for shutdown crashes on Linux where `exit()` is called before the Carbonite framework is shut down. The Framework now attempts to register late `atexit()` handlers and shut itself down in the event of an unexpected `exit()`.
- OM-81172: Linux: Fixed an issue with using the Carbonite allocation functions in a module that was not explicitly linking against `libcarb.so` that could lead to crashes if the symbol was not found.
- CC-1112 and CC-1120: The GCC warning `noexcept-type` was producing false positives on the `carb::cpp17::invoke` family of functions when compiling in C++14 mode. This adds the macros `CARB_DETAIL_PUSH_IGNORE_NOEXCEPT_TYPE` and `CARB_DETAIL_POP_IGNORE_NOEXCEPT_TYPE` to assist in disabling this warning where appropriate, and fixes the `invoke` family of functions when using `noexcept`-specified function pointers.

## 141.0

### Added

- CC-1078: Carbonite, through a new macro `CARB_PLUGIN_IMPL_EX`, now has the ability for a plugin to support multiple versions for a given interface. The Framework also has the ability to serve multiple versions of itself, allowing for future change without breaking already-built plugins. See the Carbonite Interface Walkthrough documentation for more details.

### Changed

- CC-1078: Framework version has been changed to 0.6. The previous Framework version 0.5 is still supported, but plugins built after this version will not be compatible with earlier versions of Carbonite.
- CC-1107: Attempting to load a Carbonite plugin with the same name as an already loaded plugin will now produce a warning log (instead of error) and will now return `carb::LoadPluginResult::eAlreadyLoaded` instead of `carb::LoadPluginResult::eFailed`.
- CC-1107: Attempting to load an already-loaded ONI plugin with `carb::Framework::loadPlugin()` will now return `carb::LoadPluginResult::eAlreadyLoaded` and will no longer log that the dynamic library remains loaded after an unload request.

### Fixed

- CC-1098: Fixed passing the errorlevel through to the build system when called Python routines return an error.
- CC-1103: Fixed an issue with packaging on Mac that was excluding python bindings.
- CC-1086: Removed use of Carbonite logging in the Breakpad crash reporter. Logging here could cause a deadlock while crashing if a thread was logging and held the lock while crashing (noticed in OM-78013).
- CC-1108: A Linux system that cannot create any more inotify instances when `carb::filesystem::IFileSystem::subscribeToChangeEvents` is called will now fail gracefully with a warning message instead of crashing.
- CC-1110: Fixed undefined reference linker errors that could occur in C++14 with header-only utilities on GCC, including `Path.h`, `RobinHoodImpl.h`, and `Utf8Parser.h`, due to `static constexpr` members within classes.

## 140.0

### Added

### Changed

- CC-1088: updated to `repo_build` 0.28.12 to generate `compile_commands.json` on MacOS.
- CC-1085: bumped all Conan dependencies for updated metadata.
- For `carb.crashreporter-breakpad.plugin`:
  - CC-1065: flattened the folder structure in zip files for crash reports. All filenames in the flattened zip archive are also ensured to be unique.
  - CC-1065: added a manifest to crash report zip files that lists each file’s original location and upload key name.
  - CC-1073: prevented preserved crash dumps from being re-uploaded.
- OM-18948: Detect incorrect usage of `ObjectPtr` at compile time rather than at runtime. Note that this may cause hand-written ONI Python bindings to need rewriting (i.e. use `omni::python::detail::PyObjectPtr` rather than `omni::core::ObjectPtr`).
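The CC-1065 entries above describe flattening a crash report's folder structure into a zip archive while keeping entry names unique and recording each file's original location in a manifest. A minimal sketch of that approach, assuming hypothetical names (`flatten`, `baseName`; this is not the Breakpad plugin's code):

```cpp
#include <map>
#include <string>
#include <vector>

// Result of flattening: archive entry names plus a manifest mapping each
// entry back to its original path (the idea behind CC-1065's manifest).
struct Flattened {
    std::vector<std::string> entries;
    std::map<std::string, std::string> manifest; // entry name -> original path
};

static std::string baseName(const std::string& path) {
    auto pos = path.find_last_of("/\\");
    return pos == std::string::npos ? path : path.substr(pos + 1);
}

// Strip directories from every path and disambiguate duplicate base names
// with a numeric prefix so all archive entries stay unique.
Flattened flatten(const std::vector<std::string>& paths) {
    Flattened out;
    std::map<std::string, int> seen;
    for (const auto& p : paths) {
        std::string name = baseName(p);
        int n = seen[name]++; // 0 for the first occurrence of this base name
        if (n > 0)
            name = std::to_string(n) + "_" + name;
        out.entries.push_back(name);
        out.manifest[name] = p; // manifest remembers the original location
    }
    return out;
}
```

Writing the manifest itself into the archive (as CC-1065 does) then lets tooling reconstruct where each flattened file originally lived.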
### Fixed

- CC-1068: Fixed `carb.datasource-file.plugin`’s `readData` and `readDataSync` functions not working properly on Mac.
- CC-1077: Fixed an issue with `carb::extras::Path` where `replaceExtension` would crash in Linux on startup in debug builds.
- CC-1065: fixed some potential zip file corruption in crash reports generated by `carb.crashreporter-breakpad.plugin` related to storing 0 byte files and missing files.
- CC-1055/CC-1089: Updated to Python 3.7.15, 3.8.15, 3.9.15, 3.10.8 and zlib 1.2.13-1 to fix security issues.
- CC-1072: Fixed two issues when fetching environment variables on Windows. Fetching a zero-sized value will no longer read uninitialized stack data. Fetching a value larger than 256 wide characters is no longer subject to a race condition if the environment variable changes between the size query and data fetching.
- CC-1099: fixed the detection of the Windows 11 OS display name in `omni.platforminfo.plugin` on machines that were upgraded from Windows 10.

## 139.0

### Added

- CC-1064: added options to the `omni.structuredlog` tool to allow it to skip the code formatting step.
  - added the `--skip-structuredlog-format` option to the `build.{sh|bat}` scripts and the `premake5` tool.
  - added the `skip_format` boolean argument to the `omni_structuredlog_schema()` project function.
  - added the `--skip-format` option to the `tools/omni.structuredlog/omni.structuredlog.py` script.
- OM-43302: Extended support for setting and querying the maximized/minimized/restored state of a glfw window.
- CC-1060: Added more verbose logging to `carb::Framework::releaseInterface()` to log which plugins still have references to an interface.
- CC-615: Preliminary VulkanSDK support for MacOS.

### Changed

- Corrected the spelling of `omni::core::GetModuleDependenciesFn`. While this is a public symbol it should not affect ABI, and its use is generally confined to macros whose names did not change.
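The CC-1072 race above (the variable changing between the "query required size" call and the "copy data" call of a Windows-style API) is classically fixed with a grow-and-retry loop. A sketch under assumed names; `mockGetEnv` stands in for an API like `GetEnvironmentVariableW`, which returns the required size when the buffer is too small and the copied length otherwise:

```cpp
#include <cwchar>
#include <string>
#include <vector>

// Mock of a "query size, then copy" API whose underlying value can change
// between calls (the race CC-1072 describes). Illustrative names only.
static std::wstring g_mockValue = L"hello";

std::size_t mockGetEnv(wchar_t* buf, std::size_t bufLen) {
    if (buf == nullptr || bufLen <= g_mockValue.size())
        return g_mockValue.size() + 1; // required size incl. terminator
    std::wmemcpy(buf, g_mockValue.c_str(), g_mockValue.size() + 1);
    return g_mockValue.size();         // characters copied (no terminator)
}

// Retry loop: keep growing the buffer until one single fetch fits entirely,
// so a concurrent change between size query and copy cannot truncate data.
std::wstring fetchEnvSafe() {
    std::vector<wchar_t> buf(16);
    for (;;) {
        std::size_t r = mockGetEnv(buf.data(), buf.size());
        if (r < buf.size())
            return std::wstring(buf.data(), r); // value fit: done
        buf.resize(r + 1);                      // too small: grow and retry
    }
}
```

The key property is that the size query and the copy are the same call on each iteration, so the result is always internally consistent even if the value changed since a previous iteration.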
### Fixed

- CC-1057: Fixed some constructors in `carb::ObjectPtr` and `carb::container::BufferedObject` to not be `explicit`, as these could lead to compilation warnings/errors in C++17.
- OM-74155: Fixed an issue on Linux where walking a directory on XFS filesystems would sometimes not properly identify the type of a file/directory.
- CC-1060: Fixed an issue where releasing interfaces from a script binding would not actually release the interface.
- CC-1012: Fixed an issue where ONI modules did not properly initialize the Carbonite profiler.

## 138.0

### Added

- OM-58581: The new carb.audio bindings are now available in python versions prior to 3.10.

### Changed

- CC-629: Updated to glfw 3.3.8 and removed the dynamic glfw libraries.

### Fixed

- CC-1031: Fixed an issue where outputting a structured logging event to a standard stream would unintentionally close the stream. This fix also ensures that the stream is flushed before continuing with the next message.
- CC-1046: Fixed an issue where `carb::Framework` would not be `nullptr`-padded, so functions added to the Framework later would be garbage data instead of `nullptr`.
- CC-1046: Fixed an issue where older Carbonite releases (pre v135.0) would crash when compiled against newer headers (v135.0 and later).
- CC-1052: `carb::container::LocklessStack` on Linux is no longer lockless, as registering multiple signal handlers in multiple dynamic libraries became untenable.

## 137.0

### Added

- CC-448: `carb.variant.plugin` now supports a `VariantMap` type: an associative container with `Variant` keys and values that can itself be passed as a `Variant`.

### Changed

- CC-1027: `CARB_EXPORTS` (note the S) is no longer required to be defined in order for `CARB_EXPORT` to export a function. This was the cause of much pain and lost hours. Now, `CARB_EXPORT` always exports a function, which is the typical need for functions like `carbOnPluginStartup()`. The rare case of needing dynamic import/export is now handled by a new macro, `CARB_DYNAMICLINK`, which will export if `CARB_EXPORTS` is defined before `carb/Defines.h` is included, but otherwise imports (the default).
- CC-949: Updated some packages to pick up several CVE fixes:
  - Updated to zlib 1.2.13+nv2.
  - Updated to OpenSSL 1.1.1q+nv7.
  - Updated to libcurl 7.85.0+nv3.
- CC-1034: Changed the code generated by `omni.structuredlog` so that it now passes references to objects instead of pointers to the various 'send event' functions. This allows for easier inlined calls to emit messages instead of having to declare local variables for the object parameters. This also included generating default, basic, and copy constructors for each generated object struct, as well as an assignment operator, to make it easier to construct objects inline in the event helper macros.
- CC-1031: Added support for writing structured log messages to stdout and stderr. The `f*FlagOutputToStderr`, `f*FlagOutputToStdout`, and `f*FlagSkipLog` schema and event flags have been added to allow structured log events to be output to the stdout and stderr streams, either in addition to the normal log or excluding the output to the normal log.

### Fixed

- CC-1029: If functions were added to an interface without changing the version, newer code that conditionally checked for the functions when running against an older version of the plugin (with a compatible version but without the functions) would see uninitialized memory instead of `nullptr`. This has been fixed: the Framework now keeps a null region large enough for 32 functions following any interface memory, so that reading beyond the end of it will show null function pointers for new functions.
- CC-1034: Fixed a code generation bug in the `omni.structuredlog` tool. If a property in an object property specified in a schema contained a character that is not valid in a C++ symbol, invalid C++ code would be generated. This bug did not occur, however, on property names outside of objects.
- CC-47: Fixed potential deadlocks that could occur if multiple threads were acquiring interfaces at the same time.
- CC-1039: To mitigate an issue that could potentially cause a dangling pointer crash in `deregisterLoggingForClient()` if ONI was stopped and restarted without unloading all plugins, stopping ONI (i.e., with `OMNI_CORE_STOP()`) will now unload all plugins.
- **CC-1023**: The `no_acquire` keyword tells `omni.bind` that the returned pointer has not had `acquire()` called on it and that `ObjectPtr` should not be used in the API layer. Additionally, methods whose names end with `WithoutAcquire` will also use raw pointers rather than `ObjectPtr` for the returned pointer. `no_acquire`/`WithoutAcquire` are dangerous but useful in hot code paths where the cost of atomically incrementing the reference counts degrades performance.
- **CC-1025**: `omni.bind` now supports the `ref` attribute on both return types and parameters. `ref` converts pointers in the ABI layer to references in the API layer.
- **CC-1021**: `omni.bind` now supports the `throw_result` attribute on methods. When used, this attribute will throw an exception in the API layer when the ABI layer returns a bad `Result` error code. This attribute can be combined with the `return` attribute (on a parameter) to return a value in the API layer instead of the ABI layer's `Result`.
- **CC-1024**: `build` now supports building all flavors of Python bindings. Before, the user had to build each binding individually:

  ```bat
  build -t bindings\carb.python-3.9 && build -t bindings\carb.python-3.10 && ...
  ```

  Now:

  ```bat
  build -t bindings\carb.python
  ```

- Added many `omni::core::Result` codes.
- `ObjectParam` can now be used as a boolean.

### Changed

- **OM-70174**: Updated to the latest `concurrentqueue` package to fix some include issues in Kit and Carbonite.
### Fixed

- **CC-889**: Fixed `carb::filesystem::IFileSystem::unsubscribeToChangeEvents()` so that it is possible to unsubscribe from within the subscription callback.
- `omni.bind` no longer produces optimization warnings.
- `OMNI_ASSERT` messages now include the filename.
- Fixed ambiguous pointer-to-`ObjectParam` casts.
- Removed troublesome usage of `OMNI_DECLARE_INTERFACE` in `ILogChannelFilter.h`.
- Functions/methods in `IObject.h` now have correct `noexcept` specifiers.

### Added

- **CC-883**: The `carb.tokens.plugin` plugin now supports environment variable replacement via tokens named `${env.<name>}`.
- **CC-63**: Profiler channels have been added! Profiler channels can be configured at runtime via the `ISettings` system and can be used in place of masks anywhere. Use `CARB_PROFILE_DECLARE_CHANNEL` to declare a channel, and then it can be turned on/off (even at runtime) with `/profiler/channels/<name>/enabled`. Internally, Carbonite's profiling has changed to use channels with the following names (corresponding to plugins):
  - carb.assets
  - carb.eventdispatcher
  - carb.events
  - carb.scripting-python
  - carb.tasking
  - carb.windowing-glfw

### Changed

- CC-1015: Silenced a warning message in `carb.audio` that could occur while enumerating capture devices on a system that does not have a microphone. Also downgraded several error messages around device enumeration to warnings.

### Fixed

- CC-1011: Fixed an issue where `carb::cpp20::countl_zero()` and `carb.tasking.plugin` would not run properly on pre-Haswell Intel CPUs.

## 134.0

### Added

- OM-58581: Added bindings to carb.audio to support modifying carb.audio.Voice parameters, as well as the ability to play sounds from python. This functionality is not available with versions of python prior to 3.10.
- CC-1010: Added support for compressing crash dumps and crash report extra files for upload on all platforms. The crash report files will all be compressed into a single standard .zip file, and only that .zip file will be uploaded. This feature is enabled with the `/crashreporter/compressDumpFiles` setting. Note that the server side must also support receiving zipped crash reports before this can be enabled.
- CC-206: `ITasking` can now be used with `omni.experimental.job.plugin` providing the underlying thread pool. This is enabled by setting `/plugins/carb.tasking.plugin/useOmniJob` (default: `false`) to `true`. This allows other systems to share worker threads by using `omni.experimental.job.plugin`.

### Changed

- OM-57542: Updated the ImGui version and added the necessary package references. Added a premake5 section for building the SimpleGui plugin on MacOS. Changed SimpleGui's renderer to support 32-bit indices.

### Fixed

- CC-249: Fixed a comparison bug so that CC-249 is really fixed.
- OM-58581: Fixed an out-of-bounds read in a `CARB_ASSERT()` statement in carb.audio.
- CC-1006: Fixed the quick shutdown process so that it properly shuts down structured logging.
- CC-860: Added support for uploading extra files with Carbonite crash reports. New files are added to the upload with `carb::crashreporter::addExtraCrashFile()`. Files are only uploaded if a crash occurs.
- CC-1009: Fixed `carb::container::LocklessStack` on Linux to try to avoid crashes when modules that use `LocklessStack` are unloaded in an order that is not the reverse of load order.

## 133.0

### Added

- OM-62703: Added `CARB_PLATFORM_NAME`, containing the name of the current platform, and `CARB_ARCH_NAME`, containing the name of the current architecture.
- OM-62703: Added some new flags to `carb::extras::loadLibrary()` to expose some extra library loading behavior on Linux.
- CC-879: Added a new version of `carb.scripting-python.plugin` that supports python 3.10. These are in subdirectories of the binaries directory labeled `scripting-python-[version]`. Loading multiple python versions in a single process is dangerous, so it is not possible to load multiple of these plugins simultaneously, and it is not recommended to try to unload one and load another.
- OM-63213: Added `omni::platforminfo::IMemoryInfo::getProcessPeakMemoryUsage()` and `carb::extras::getPeakProcessMemoryUsage()`.
- OM-63213: Added `omni::platforminfo::IOsInfo::getKernelVersion()`.
- `carb::dictionary::ISerializer` has an updated `createDictionaryFromStringBuffer()` function with more options that can produce better performance. The ABI supports the previous function, but new compilation will use the new function.

### Changed

- **BREAKING CHANGE** CC-879: carb.scripting-python.plugin was moved into the scripting-python-3.7m subdirectory of the binaries directory.
- CC-891: carb.tasking.plugin is now a task-stealing scheduler that scales better with more cores. Other changes:
  - The watchdog functionality has been split from the timer thread into its own thread.
  - Instead of one (potentially giant) thread pool, threads are divided into thread-groups of ideally four threads.
  - When a task is added, a thread-group is chosen and assigned the task. If the thread-group has too much work, other thread-groups are woken and will try to steal tasks.
  - Resuming tasks now respects priority order.
  - Resuming a task that is pinned to a specific thread is more efficient.

### Fixed

- OM-62819: Fixed a performance issue in carb.profiler-cpu.plugin that was making it unusable in certain cases.
- OM-62523: Addressed some docstrings that caused errors when consumed by Sphinx.
- CC-879: Fixed a bug where IScripting::getObjectAsString() was not acquiring the GIL.
- CC-948: Fixed a hang that could occur if a log listener called back into the framework while the framework was loading a plugin.
- CC-988: Fixed an issue where a deadlock could occur if a log listener attempted to acquire an interface.
- CC-988: Fixed a compilation warning that could occur in TaskingHelpers.h.
- OM-63659: Fixed an issue with carb.tasking where a stuck condition would be improperly detected and emergency threads started unnecessarily.
- CC-996: Improved the performance of carb.dictionary.serializer-json.plugin and carb.dictionary.plugin in some cases, especially with large JSON files.

## 132.0

### Added

- CC-868: Added support for bindings for python 3.10 with pybind11 2.7. The pybind11 internals ABI has changed, so if you want to add pybind11 bindings on top of the Carbonite bindings, you will need to upgrade to an ABI-compatible pybind (such as 2.7.1-08fa9f60).

### Changed

- **BREAKING CHANGE** The CARB_PROFILE_FRAME() macro no longer increments IProfileMonitor frames for last-frame events. This is because CARB_PROFILE_FRAME() now supports multiple frame-sets (i.e., main thread and GPU thread). Instead, please call IProfileMonitor::markFrameEnd() to increment a frame for the Profile Monitor.
- CC-868: The include path to pybind11 needs to be added on the build command line when building python bindings with Carbonite's binding utility headers. These previously required an include path pointing into the target-deps directory.
- CC-868: Some workarounds for -Wundef errors in pybind11 2.3 have been removed because they break pybind11 2.7. The suggested fix is to disable -Wundef until you can upgrade to 2.7.
- CC-902: Built-in plugins (ILogging, IAssert, IThreadUtil, IFileSystem) are now always installed the first time the framework is acquired, and are not unloaded via Framework::unloadAllPlugins(). They are only unloaded when the framework is released.
- CC-918: Removed the carb.ecs plugin, along with all of its headers, tests, and projects. This plugin was not used in any projects and has long since been superseded by other projects.
- CC-919: Removed support for hot reloading C++ plugins. This feature was untested and unused and had become bit-rotted.

### Fixed

- CC-907/OM-60787: Fixed an issue that could cause plugins to unload in an incorrect order.
- CC-902: Calling `carb::acquireFrameworkAndRegisterBuiltins()` multiple times is now safe and will no longer result in error logs.
- CC-880: `carb.profiler-cpu.plugin`: `CARB_PROFILE_FRAME()` now outputs an event to the chrome-tracing file to mark the frame end. When a subsequent Tracy PR is merged, Tracy's import-chrome tool will be able to translate these events into frame markers.
- CC-913: Fixed a bug in `carb::extras::HandleDatabase` that could lead to rare crashes in `carb.tasking`.
- CC-915: Fixed an issue with `carb.profiler-tracy.plugin` where the Tracy viewer would get confused by too many source locations.
- CC-954: Fixed a problem with the RobinHood container hashing that was causing compilation issues on clang 14.

### Added

- CC-864: Added a lock profiling interface, which is currently only used by `carb.profiler-tracy.plugin`.
- CC-848: Several performance improvements were made to `carb.profiler-cpu.plugin`.
- CC-862: Completed the `invoke`-family functions and meta queries (`invoke_r`, `is_invocable`, `is_invocable_r`, `is_nothrow_invocable`, `is_nothrow_invocable_r`) and added the requisite utility meta query `is_nothrow_convertible`. These behave according to the rules in the current (as of August 2022) C++23 Standard Draft, but reside in the `carb::cpp17` namespace, as that is when the functionality was introduced to the C++ Standard.
- CC-372: Added a setting `log/forceAnsiColor` that allows the user to force the logger to use ANSI escape codes to color the console. Useful for CI/CD settings.
- CC-853: `carb.tasking.plugin` has a function that will bind trackers to a required object: `ITasking::bindTrackers()`.

### Changed

- `carb.profiler-cpu.plugin` no longer exports Chrome Tracing Flow events by default. They can be enabled with the setting key `/plugins/carb.profiler-cpu.plugin/flow` (boolean, default: `false`).
- CC-848: `carb.profiler-cpu.plugin` now emits additional profiler zones for itself (profiling the profiler) if `carb::profiler::kCaptureMaskProfiler` is set in the capture mask.
- CC-848: Some of the `carb.tasking.plugin` profile zones now use static names instead of dynamic names to save time capturing these profiler zones. (Example: "Running fiber 123" is now just "Running fiber".)
- Upgraded `carb.profiler-tracy.plugin` to Tracy 0.8.2.
- CC-658: Removed `carb::thread::getCurrentProcessor()`. The only reasonable use of this function was in tests for thread pinning utilities, so the decision was made to remove it.

### Fixed

- CC-854: Fixed locking that could lead to a performance issue with `carb.settings` and `carb.dictionary`.
- CC-64: Performance is improved when `carb.profiler-cpu.plugin` is loaded but not started or has a capture mask of `0`.
- CC-855: Fixed a rare crash that could occur if `carb.profiler-cpu.plugin` was started and quickly shut down.
- CC-886: Added the missing `ChangeEventType.CREATED` type to the carb.settings python bindings.
- CC-862: Fixed `carb::cpp17::invoke_result` to be SFINAE-safe. Previously, when the `F(Args...)` signature was not invocable, it was an error in the immediate context.
- CC-857: Fixed an issue with carb.profiler-tracy.plugin where `CARB_PROFILE_END()` would still pay attention to the mask and could potentially cause profile zones to become out of sync. Applied the same fix from CC-603 to the Tracy profiler.
- CC-881: Fixed an issue with carb.profiler-tracy.plugin where profiler zone source locations could be duplicated.
- CC-537: Fixed omni.structuredlog python code generation.

## 130.0

### Changed

- CC-684: Failing to close a file will now report the file that failed to close and the underlying error.
- CC-817: Non-quick shutdown returns to two phases: the first phase shuts down all plugins, followed by unloading all plugins.
- CC-847: (Windows only) As a performance improvement, OutputDebugString is no longer called by default for log messages unless a debugger is attached. Calling StandardLogger::setDebugConsoleOutput(true) will force debug output even when a debugger is not attached.

### Fixed

- CC-817: Fixed an issue where interfaces were released before a plugin was shut down. In some cases (such as carb.tasking.plugin) this could result in the interface being re-instantiated while shutting down.
- Fixed verbose log spam in carb.settings.plugin ("node type mismatch" logs when a key did not exist).
- Fixed verbose log spam in carb.audio-forge.plugin ("{context = xxx}" logs when updating a context).

## 129.0

### Added

- Warning logs are now emitted when a plugin unload is requested but the module is not unloaded.
- Support for limiting the zone depth of a profile capture taken using carb.profiler-cpu.plugin. The maximum zone depth can be set via the new /plugins/carb.profiler-cpu.plugin/maxZoneDepth setting.

### Changed

- The CARB_BINDINGS() and CARB_BINDINGS_EX() macros take an additional optional parameter that is the string of the script language that the binding is for. If unspecified, "python" is assumed.
- CC-808: carb::quickReleaseFrameworkAndTerminate() no longer runs carbOnPluginShutdown() for plugins that do not provide a carbOnPluginQuickShutdown(); only the carbOnPluginQuickShutdown() function is called, if provided. This makes quick shutdown significantly faster, but it is now highly recommended that all plugins that have files open for writing flush and close those files.

### Fixed

- CC-779: Python script bindings are now considered dependencies of carb.scripting-python.plugin, and the framework now attempts to unload carb.scripting-python.plugin before dependent interfaces to improve shutdown stability.
- Fixed an issue with carb::extras::getLibraryHandleByFilename() that would leak shared object references on Linux.
- Fixed an issue with carb.scripting-python.plugin leaking a reference to the python shared object on Linux.
- Fixed an issue with omni.structuredlog.plugin where it would not be properly reset if the module failed to unload.
- CC-781: Fixed an issue with carb::Delegate where parameters to Call() were improperly handled.
- Fixed an issue where `std::move` was not correctly forwarded to the first receiver when multiple receivers were bound.
- Fixed an issue where interfaces could be acquired (and plugins loaded) during a call to `Framework::unloadAllPlugins`.
- Fixed an issue where shutting down the framework did not clear some state, causing inconsistencies and issues when the framework was later re-initialized.
- Fixed a warning at shutdown from `carb.profiler-cpu.plugin` about "Failed to remove release hook".
- Fixed a rare thread-safety crash in `omni.structuredlog.plugin`.
- CC-809: Fixed the auto-detection of MP3 files with the 'error protection' bit either enabled or disabled. Previously, MP3 files with error protection enabled were failing to load.
- CC-668: Got `carb.launcher.plugin` building and passing all tests on MacOS. Some tests had to be disabled because MacOS doesn't support the related functionality.

## 128.0

### Added

- CC-587: Basic support for Mac OS was added.
  - Carbonite binaries for Mac OS are not available yet.
  - `CARB_PLATFORM_MACOS` has been added for detection of Mac OS.
  - `CARB_POSIX` was added. This is set to `_POSIX_VERSION` on systems that are mostly compliant with POSIX, such as Linux and Mac OS, and set to 0 on other systems, such as Windows.
  - `CARB_MACOS_UNIMPLEMENTED()` has been added to mark code paths that are not supported yet on Mac OS. If you want to use Carbonite extras headers on Mac OS, you can grep for this symbol to verify whether the extras header is supported on Mac OS yet.
- OM-55766:
  - `OMNI_GENERATED_API`: helper macro to access the generated API.
  - `OMNI_USE_FROM_GENERATED_API`: helper macro to bring in member functions from the generated API when function overloads are provided.

### Changed

- CC-705: Reworked the telemetry transmitter so that it now stores the processed and validated message strings instead of the JSON documents themselves in the event queue. This significantly reduces memory usage for the transmitter.
- CC-745: Unsuccessfully loading a plugin optionally (with `tryAcquireInterface`) will no longer produce any error logs.
- CC-743: Fixed a performance issue with `carb::tolower` and `carb::toupper` on Linux.
- CC-716: `carb::extras::convertUtf8ToWide()` and `carb::extras::convertWideToUtf8()` no longer use the deprecated C++ `wstring_convert` library and instead use our Utf8Parser library. This affects the failure conditions of these functions.

### Fixed

- CC-595: Use canonical paths in `omni::core::TypeFactoryImpl`. Prior to this commit, `omni::core::TypeFactoryImpl` would directly use the `moduleName` arguments provided to the `createType_abi`, `registerInterfaceImplementationsFromModule_abi`, and `unregisterInterfaceImplementationsFromModule_abi` functions. This caused an issue because `carb::FrameworkImpl::loadPlugins` would call `registerInterfaceImplementationsFromModule_abi` with the canonical path of the module, and future calls to `createType` with a non-canonical version of the path to the same module would then reload the module, because the module name (which is the path) would not match. This commit fixes that by making the three API methods above convert the `moduleName` argument to a canonical path for consistency.
- CC-747: Resolved a shutdown order issue with
- **CC-751**: Upgraded openssl in omni.telemetry.transmitter to 1.1.1q to address CVE-2022-2068.
- **CC-772**: Resolved an issue with plugin unload order not respecting dependencies correctly.
- **CC-622**: Fixed `carb::extras::Path::getParentPath()` when resolving parent paths with only one directory level.

### Added

- **CC-582**: `carb::cpp20::atomic<>` now supports waiting on non-primitive (but `is_always_lock_free`) types.
- **OM-53429**: Added settings and support to the telemetry transmitter to be able to filter out messages that came from host apps run in different modes (i.e., 'dev' vs 'production' vs 'test'). The `/telemetry/transmitter/messageMatchMode` setting is used to control this behavior. By default, all messages are allowed to pass through.
- **CC-510**: Adds `omni::experimental::IUrl`, an RFC 3986 compliant URL implementation. It is currently experimental.
- **OM-45102**: Added separate functions for trimming strings in a UTF-8 friendly way or by using the current C locale, plus a helper function for finding the last Unicode symbol in a UTF-8 string.
- **CC-714**: Added `carb::thread::hardware_concurrency()`, which is similar to `std::thread::hardware_concurrency()` but takes docker cgroups into account if running from within a docker container.
- **CC-705**: Added support for batching events when sending to NVDF backends. These will now send as many events as possible in a single HTTP request instead of sending each message individually.
- **CC-716**: `convertUtf32ToUtf8()` now has flags which allow you to return U+FFFD on codepoint conversion failure.

### Changed

- **CC-632**: `carb.tasking.plugin`: Evaluated how calling `ITasking::applyRange` recursively works, especially with non-uniform workloads. In some cases, performance improvements of up to 67% were seen in tests with Kit.
- **OM-53429**: Added a telemetry mode tag to each structured log message's "source" field if running in "dev" or "test" mode.
- **CC-386**: `omni.bind` now outputs a comprehensive diff alongside the error message when using the `--fail-on-write` option. This should help debugging CI errors by eliminating the "What's different?" question that would otherwise require a re-run in the environment the build failed in.
- **CC-705**: Changed the telemetry transmitter to limit the number of events processed at any given time to help reduce the amount of memory used. Added the `/telemetry/transmitter/<index>/queueLimit` option to control the maximum number of events that can be processed at once. This new option defaults to 10000 events; in testing, this roughly corresponds to ~500-1000 MB of memory usage.
- **CC-714**: `carb.tasking.plugin` and `omni.job` now use `carb::thread::hardware_concurrency()` for default thread counts.
- **CC-560**: Deprecated the old `kSerializerOption*` flags and renamed them to `fSerializerOption*` to better comply with the coding standard. Uses of the old flags will have to be changed in any Carbonite-based apps.

### Fixed

- **CC-603**: Fixed an issue where repeated profiler runs caused an incorrect profile from the second run onward. This was caused by zone ends unintentionally matching partial zone begins from the previous run. Please note that this has been fixed for the CPU profiler ONLY (`/plugins/carb.profiler-cpu`).
- **OM-34980**: Fixed an issue where a command line array argument of the form `[a, b]` would not replace the whole previous array.
- **CC-628**: Fixed a few more issues with `min`/`max` in public headers, and added a compilation test to prevent future uses.
- **CC-625**: `carb::framework::getPluginCount` returns only the count of carb plugins loaded. The change for CC-383 had made `carb::framework::getPluginCount` return the count of all plugins.
- OM-53428: Fixed the behavior of `carb::framework::getPluginCount` to only return the count of carb plugins. Previously, it would return the total number of carb and ONI plugins, but `carb::framework::getPlugins` would only return carb plugins, leading to empty entries in the `get_plugins()` python function. This change restores the previous behavior.
- OM-53429: Fixed an issue in `omni.telemetry.transmitter` where only the first transmitter would control whether the log directory was processed and whether events were uploaded. This has now been changed so that any transmitter that needs or can accept data will allow the logs to be processed and events to be uploaded.
- CC-697: Fixed a use-after-move issue in `carb::extras::EnvironmentVariable`.
- CC-705: Fixed the broken launch guard in the `omni.telemetry.transmitter` app.
- CC-705: Fixed the transmitter so that it no longer has the potential to resend events when multiple transmitter endpoints are out of sync.
- CC-716: `convertUtf32ToUtf8()` is now implemented identically across all platforms.

## 126.0

### Added

- CC-505: `omni.telemetry.transmitter` now has the capability to send data to multiple endpoints at once. To do this, just make the settings key `/telemetry/transmitter` an array of objects; each object will transmit to a separate telemetry endpoint.
  - The new setting `/telemetry/transmitter/retryLimit` has been added to allow the transmitter to handle offline telemetry endpoints more gracefully.
- CC-505: `omni.telemetry.transmitter` can specify an array of telemetry endpoints with the settings key `/telemetry/transmitter/endpoint` to provide fallback URLs to use if your main endpoint goes down.
- CC-454, OM-28642: `omni.bind` diagnostic reporting has been redone for a better experience when reporting issues to the user. Multiple errors can now be reported by a single run of the program, as opposed to stopping at the first issue (some errors are still considered fatal and will halt execution immediately). Diagnostic messages are highlighted based on their severity if the output is a TTY.
- CC-533: `carb::tasking::ITasking::applyRangeBatch` was added as a more tunable alternative to `applyRange` that reduces the function call overhead of `applyRange`.
- CC-447: `carb.variant.plugin` now supports `RString` / `RStringU` / `RStringKey` / `RStringUKey` as contained types.

### Changed

- CC-505: The following omni.telemetry.transmitter settings have been updated to work better with the multiple endpoint system. The old settings will continue to work as long as the `/telemetry/transmitter` settings key is not set; if you use the new settings keys, you must upgrade all of your keys to the new system.
  - `/telemetry/resendEvents` => `/telemetry/transmitter/resendEvents`
  - `/telemetry/transmissionLimit` => `/telemetry/transmitter/transmissionLimit`
  - `/telemetry/endpoint` => `/telemetry/transmitter/endpoint`
  - `/telemetry/schemasUrl` => `/telemetry/transmitter/schemasUrl`
  - `/telemetry/authenticate` => `/telemetry/transmitter/authenticate`
  - `/telemetry/authTokenUrl` => `/telemetry/transmitter/authTokenUrl`
  - `/telemetry/authTokenKeyName` => `/telemetry/transmitter/authTokenKeyName`
  - `/telemetry/authTokenExpiryName` => `/telemetry/transmitter/authTokenExpiryName`
  - `/telemetry/eventProtocol` => `/telemetry/transmitter/eventProtocol`
  - `/telemetry/seekTagName` => `/telemetry/transmitter/seekTagName`
  - `/telemetry/authTokenType` => `/telemetry/transmitter/authTokenType`
  - `/telemetry/oldEventThreshold` => `/telemetry/transmitter/oldEventThreshold`
  - `/telemetry/ignoreOldEvents` => `/telemetry/transmitter/ignoreOldEvents`
  - `/telemetry/pseudonymizeOldEvents` => `/telemetry/transmitter/pseudonymizeOldEvents`

  The following settings have been deprecated. These settings will continue to work as-is, but they are no longer supported when using the `/telemetry/transmitter` settings key. To continue using this functionality, specify the directory or file in `/telemetry/transmitter/schemasUrl` with a `file://` prefix.
  - `/telemetry/schemaFile`
  - `/telemetry/schemasDirectory`

### Fixed

- CC-514: The `g_carbProfiler` global variable is now automatically set to `nullptr` when the profiler is unloaded and/or the framework is released. Once `carb::profiler::registerProfilerForClient()` is called, a load hook is installed that will cause the `g_carbProfiler` global variable to be set for a module once the profiler module loads. Previously this global variable could be left dangling, potentially leading to crashes. A plugin or application need only be rebuilt to pick up this fix.
- CC-585: Fixed an issue with `carb::hashPair()` where it would erroneously always return 0.
- CC-593: Carbonite has changed its coding standard to not use the keywords `min` or `max` in public headers and changed all existing places that previously used these keywords. This is an effort to prevent compilation issues when Windows.h is included without defining `NOMINMAX`. `::carb_min()` and `::carb_max()` have been added in `include/carb/Defines.h` as alternatives to `std::min` and `std::max`.
- CC-597: Build fix in `omni::core::PluginManager`.

## 125.0

### Added

- CC-258: ONI Python bindings now support `omni::string` by conversion to the Python-native `str`.
- CC-520: Added python support for several `IProfiler` functions that were missing.
- CC-506: Added "Robin Hood" open-addressing hash containers to `carb::container`: `RHUnorderedMap`, `RHUnorderedSet`, `RHUnorderedMultimap`, and `RHUnorderedMultiset`. While similar to the `std` unordered containers, they are not drop-in replacements, but should be usable in many cases.
- CC-497: Added support for the new NVDF protocol for omni.telemetry.transmitter. Use the settings key `/telemetry/eventProtocol` to configure this.
- CC-497: Added an option to specify the name of the seek tag in omni.telemetry.transmitter. Use the settings key `/telemetry/seekTagName` to configure this.
- CC-507: Removed some log messages in `ILauncher` that could lead to a deadlock in the child process.
- CC-440: Added overloads to allow `carb::extras::Guid` to be used as a key in `std::unordered_map`.
- CC-440: Added `IAudioDevice::getDeviceCapsByGuid()` and `IAudioDevice::getDeviceNameByGuid()`.
- CC-440: Added example.audio.device to demonstrate usage of the `IAudioDevice` interface.
- CC-513: Added a walkthrough document and related example app and plugin to demonstrate how to create and use a Carbonite interface.
- CC-383: Added the ability for ONI plugins to declare dependencies, and for Carbonite and ONI plugins to depend on each other. `omni::bind` now generates an `OMNI_PLUGIN_INTERFACE` macro that defines the `getInterfaceDesc()` function, the same as Carbonite interfaces. This allows declaring dependencies to work the same between carb/omni. Omni modules can now export the function `getModuleDependencies` to declare their dependencies. Carbonite and ONI plugin dependencies are tracked together, so unload order can be guaranteed across frameworks.

### Changed

- CC-520: For `carb.profiler-cpu.plugin`, the `CARB_PROFILE_FRAME` macro now ignores the `mask` parameter by default. This is an effort to make frame processing consistent no matter what capture mask is specified. The previous behavior (where the `mask` parameter is considered) can be restored by setting config key `/plugins/carb.profiler-cpu.plugin/ignoreFrameMask` to false.
- CC-497: Increased the omni.structuredlog log header size to 1024 when new log files are generated. Older versions of omni.telemetry.transmitter should continue to work with these longer headers.
- CC-440: The audio example binaries have been renamed to better indicate which submodule they demonstrate.

### Fixed

- CC-516: Upgraded to OpenSSL 1.1.1o to fix multiple CVEs.
- CC-520: Fixed a race condition in `carb.profiler-cpu.plugin` where it was possible to immediately query `IProfileMonitor::getLastProfileEvents` after `CARB_PROFILE_FRAME` but not receive the previous frame’s information (instead it was still the frame-before-previous).

## 124.0

### Added

- **CC-498**: Added more debugging log output to ILauncher when a new child process fails to launch (as error or warning level messages) and when a new process handle is first created (as info level messages). Only launch descriptor values that differ from their defaults will be output.
- **CC-291**: Added documentation for `/audio/nullBackend/CaptureTestMode`.
- **CC-291**: Added `/audio/nullBackend/ReportsOverruns` for testing overrun detection in `IAudioCapture`.

### Changed

- **CC-478**: On Windows, the memory functions (`carb::allocate`, `carb::deallocate`, and `carb::reallocate`) no longer require `carb.lib` to be linked in order to build; instead, by default they now find the underlying `carbReallocate` function from `carb.dll` at runtime. However, sometimes it is desirable to have `carb.dll` loaded implicitly. This is now accomplished by defining `CARB_REQUIRE_LINKED=1` before including `include/carb/Memory.h` and will also require linking against `carb.lib`.
- **CC-500**: Changed the exit codes returned from `carb::launcher::ILauncher::waitProcessExit()` and from `carb::launcher::ILauncher::getProcessExitCode()` on Linux in the case of a crashed child process so that they return the same value that a shell would report in `$?`. This does not in any way affect the exit codes returned in the case of child processes that exit normally. It is left as an exercise for the host app to detect and handle the event of a crashed child process. In general, Linux processes will return a value that is 128 plus the number of the signal (ie: SIGSEGV, SIGABRT, etc) that killed the process.
On Windows a crashed process will generally have an exit code that starts with 0xc0000000 (though this is not true in the case of `raise()`, `abort()`, or a failed `assert()` call, which all typically set an exit code of `3`).
- **CC-291**: Renamed some device backend settings in carb.audio-forge to make it easier for users to find settings.
  - `/audio/oldWindowsBackend` was renamed to `/audio/WASAPI/legacyBackend`.
  - `/audio/nullDeviceCount` was renamed to `/audio/nullBackend/DeviceCount`.
  - `/audio/nullBackendIsFunctional` was renamed to `/audio/nullBackend/IsFunctional`.
  - `/audio/nullBackendCaptureTestMode` was renamed to `/audio/nullBackend/CaptureTestMode`.

### Fixed

- **CC-486**: Improved performance of `carb.tasking.plugin`: fewer system calls to wake a thread, and fiber stack reset (if `/plugins/carb.tasking.plugin/resetFiberStack` is `true`) now only occurs in a single thread when no other tasks are available to run.
- `CARB_PROFILE_EVENT()` macro.
- DRIVE-4086: Fix for reconciling an external caller’s held zoneId with the new/replayed zoneId created on fiber switch.
- CC-473: Allow for plugins to depend on interfaces they provide.
- OM-36366: Fixed a bug in `carb.audio` that would cause a muted sound to not be muted if the `carb::audio::fPlaybackModeFadeIn` flag was also used.
- OM-48634: Add GPU timestamp injection interface to `IProfiler`.
- OM-48808: Reworded some output in the crash reporter about the size and readability of the crash dump file to make it more clear whether the dump file was deleted due to a successful upload or not.
- CC-470: The `Framework` now supports “load hooks”: a load hook is a callback that is called when an interface becomes available from a plugin load. This can be used to weakly couple plugins, such as if a Profiler plugin doesn’t normally load `carb.tasking.plugin`, but takes action when it is otherwise loaded.
- OM-49026: Improved the `carb.crashreporter-breakpad.plugin` startup by deferring symbol loading until a crash actually occurs.
- CC-485: Improved performance of `carb.tasking.plugin` on Windows by setting the system timer to the highest resolution available.
- CC-485: Further improved performance of `carb.profiler-cpu.plugin` when used with `carb.tasking.plugin` by more quickly recording task switches.
- CC-479: Upgraded to FLAC-1.3.4. This fixes CVE-2021-0561.
- CC-449: `IAudioCapture` on Windows no longer uses DSound. It now uses the same capture system as on Linux, with a device backend built on Windows Audio Services. Functionality on Windows should not change substantially; the only major difference is that the new system has improved overrun detection. Additionally, this means the `null` device backend will now work with `IAudioCapture` on Windows.
- OM-48865: Fixed a potential infinite loop in ProfilerNvtx ‘endEx’.
- OM-47219: A better fix for the degenerative performance issue when `carb.profiler-cpu.plugin` was used with `carb.tasking.plugin` and `/plugins/carb.profiler-cpu.plugin/fibersAsThreads` was `true`.
- CC-422: `carb::Framework` will no longer print a warning message when a plugin acquires an interface that is not listed as a dependency if that interface is provided by the plugin that is loading it.
- CC-475: `omni.experimental.job.plugin` no longer links with X11.
- CC-472: Fixed an issue that could cause a hang if multiple threads in different modules were using `carb::RString` at the same time.
- CC-483: Worked around a few places in the logging system that could deadlock via interaction with internal locks in GLIBC 2.17, especially when logging in a thread while a different thread loads a python binding .so which registers a log listener in a static initializer.
- CC-483: Fixed a potential hang that could occur when using `carb::getCachedInterface` simultaneously from multiple threads.
## New Features

- Added a user-defined literal `_v` to the `carb::variant_literals` namespace that can be appended to a literal to create a `Variant` from it (e.g. `123_v`, `"Hello"_v`, etc.).
- CC-359: `carb.assets.plugin` is now documented and now sends (via `carb.eventdispatcher.plugin`, if available) events `Asset.BeginLoading` when loading (or re-loading) starts for an asset, and `Asset.EndLoading` when loading finishes. See `carb::assets::IAssets::loadAsset()` in `include/carb/assets/IAssets.h` for more information.
- CC-441: Added a specialization of `std::hash` for `carb::cpp17::variant`.
- CC-376: `omni.bind` generated Python bindings now support keyword parameters with names based on their C++ parameter name (in `snake_case`). This can be overridden with `OMNI_ATTR("py_name=something_else")`.

## Changed

- **BREAKING CHANGE**: The types `Id`, `Pool`, and `Snapshot` within the `carb::assets` namespace have been converted to strong types (`carb::Strong`) instead of fake pointer types. As such, assigning them to `0` or `nullptr` will now cause a compile error. Instead use their default constructed or special invalid types: `kInvalidAssetId`, `kInvalidPool`, and `kInvalidSnapshot`, respectively. For `printf`-style formatting, instead of using `%p`, use the format specifier for the underlying type and call the `.get()` function. This is to better facilitate types passing through the Variant system.
- **BREAKING CHANGE**: The rarely-used `carb::assets::CreateContextFn` loader function must now return a `carb::assets::LoadContext*` instead of a `void*`; the rarely-used symbol `carb::assets::OnDependancyChangedFn` has had its spelling corrected to `carb::assets::OnDependencyChangedFn`.
- CC-275: Updated the Windows audio playback backend to an improved implementation. If you run into playback issues, you can restore the old backend with this settings key: `/audio/oldWindowsBackend`.
- OM-39028: Improved `carb::dictionary` conversion error reporting.
- CC-420: On Linux, the `carbReallocate` function is now weakly linked. The memory functions that use it (`carb::allocate`, `carb::deallocate`, and `carb::reallocate`) understand this and gracefully fail when the `carbReallocate` function is not available. This allows users that only incidentally use these functions to avoid linking against `libcarb.so`. This case is common for plugins that might use an `omni::string`.

## 121.0

### Added

- CC-289: Allowed `omni.structuredlog.plugin` to be used in a ‘standalone’ mode. This plugin can be used in a non-Carbonite app without the need for other dependent libraries (including ‘carb.dll/so’). When being used in standalone mode, the `omni/structuredlog/StructuredLogStandalone.h` header should be used to pull in the supported dependencies. The structured log library is expected to be present in the same directory as the main executable. The `example.structuredlog.dynamic` example app demonstrates how the library can be used in standalone mode.
- CC-396: Added string conversion helpers to Utf8Parser.h, which allow easy conversions between UTF-8 and UTF-16/UTF-32.
- CC-377: Added `carb.eventdispatcher.plugin`, which is a replacement for `carb.events.plugin` for immediate events (`carb.events.plugin` is more of a message queue). Additional features:
  - Allows observers to filter which events they receive.
  - Doesn’t use `carb.dictionary.plugin`; instead it uses a new extensible variant system in `carb.variant.plugin` which is faster than dictionary and supports the same types.
  - More clearly defined behavior of recursive event dispatching when adding/removing observers.
  - Lack of ref-counting prevents crashes like OM-43254 from unreleased events.
- CC-421: Carbonite Python logging will now convert any Python object to a string and log it.

### Fixed

- CC-401: Updated Python and OpenSSL dependencies to fix a security vulnerability.
- CC-396: Fixed some security issues in `carb::extras::Utf8Parser` around invalid characters.

## 120.0

### Added

- CC-363: Added support for specifying other types of authentication tokens in `omni.telemetry.transmitter`. This now supports long-lived API keys as tokens through the use of the ‘/telemetry/authTokenType’ setting and specifying a local file through ‘/telemetry/authTokenUrl’ instead of a URL. These two options can also be used to specify a long-lived API key that is downloaded from a custom URL or read from a file. The ‘/telemetry/authTokenKeyName’ and ‘/telemetry/authTokenExpiryName’ settings can also be used to specify different ways to parse the token out of a JSON blob from all sources. By default, the token type is detected based on whether it is retrieved from a URL or file.
- CC-199: Added the `IJob` interface. `IJob` is an experimental ONI interface that provides an abstraction for a foreign job system.

## Fixed Issues

- OM-45612: Changed an error message from `carb.profiler` to a warning since it could potentially be written out frequently and flood the telemetry data lake if it is redirected as a telemetry event.
- CC-364: Fixed improper use of `memory_order_relaxed` in `carb::tasking::TaskGroup` and improper use of `volatile`.
- CC-355: Fixed some minor rendering issues in `IAudioUtils::drawWaveform()`. Golden image tests that use this functionality will likely need their images regenerated.
- CC-362: Fixed the `flags` parameter validity checks in `carb::audio::createSoundFromFile()` and `carb::audio::createSoundFromBlob()`.
- CC-362: `IAudioData::getFormat()` has had its documentation corrected.
- CC-362: Fixed the confusing behavior of `SoundData.get_format()` on streaming sounds in the `carb.audio` python bindings.

## Added Features

- OM-44734: Added a ‘retry count’ to each crash dump’s metadata.
If a crash dump fails to upload multiple times, it will eventually be deleted and considered corrupt instead of letting failing crash dump files pile up locally and attempting to upload on each new run. The retry count for deleting failed dumps is controlled with the setting ‘/crashreporter/retryCount’. This is an integer value that defines the maximum number of times to try an upload of any given crash dump file. This setting defaults to 10 tries. It can be set to a lower value so that on the next run, any local dumps that have already been tried and failed that many times will be deleted if they fail again.
- CC-329: Added python bindings for `IAudioData`.

## Fixed Issues

- CC-353: Fixed a build break in GCC 9.3.0 caused by `__attribute__(())` syntax changes.
- CC-374: Fixed memory orders and contention issues that could arise on high CPU counts, especially on AArch64 machines.

## Added Features

- CC-316: Added the `audio/pulseAudio/enumerateMonitors` setting to allow PulseAudio’s loopback (AKA monitor) devices to be enumerated as audio capture devices.
- CC-331: Added `carbGetSdkVersion()` and `include/carb/SdkVersion.h` to allow access to the SDK version string. A helper macro was also added to allow verification of whether the loaded Carbonite framework version matches the header files that are being used in a host app.
- OM-44740: Added the Carbonite SDK version as metadata in the crash reporter.

## Changed Features

- CC-301: Changed all of the Carbonite framework globals (ie: `g_carb*`) so that they are weakly linked. This is to resolve an issue related to pulling in `include/carb/extras/` headers from pure ONI modules. When this happened, the linker would often add an undefined reference to one of the framework globals because something in the header called `CARB_ASSERT()`, `CARB_LOG_*()`, etc.
- OM-44182: Changed the ‘quick shutdown’ process to include unloading the omni.core plugins as well as the Carbonite plugins.
- CC-304: `carb::delegate::Delegate` now allows itself to be destructed while in a callback.

## Fixed Issues

- OM-43654: Evaluated all Carbonite Python bindings and released the GIL (global interpreter lock) wherever appropriate.
- **carb.profiler-cpu.plugin** when `/plugins/carb.profiler-cpu.plugin/recordSourceInfo` was enabled (the default).
- **OM-43254**: `carb::memory::PooledAllocator` behaves correctly if the underlying allocator throws during `allocate()`.
- **OM-43254**: Resolved an issue with `carb.dictionary.plugin` where a `std::bad_alloc` exception could be raised during operations that could create a new `dictionary::Item`. This is now considered a fatal condition instead of allowing an exception to cross the ABI boundary.
- **OM-43898**: Thread safety improvements to fix a rare crash and a few other issues in carb.audio.

## 117.0

### Added

- **CC-298**: `omni::string` now supports `printf`-style formatting through additional constructors and functions: `assign_printf`, `append_printf`, `insert_printf`, and `replace_printf`. `_vprintf` versions of these functions also exist that accept a `va_list`.
- **CC-253**: Adds overloads to `omni::string` for `std::string`, `carb::cpp17::string_view`, and `std::string_view`. These additional overloads make it easier to use `omni::string` with `std::string`.
- **CC-274**: Implemented `carb::container::IntrusiveUnorderedMultimap`. See the documentation for more details and example usage.
- **CC-193**: Adds `CARB_TOOLCHAIN_CLANG` definition to be `1` when a Clang-infrastructure tool is running (`0` otherwise).

### Changed

- **CC-266**: Both the `omni.bind` and `omni.structuredlog` tools now support the `--fail-on-write` option that causes them to fail their operation if a change to a generated file needs to be written to disk. This is used in the CI build scripts to ensure that MRs that make changes to generated code always include the latest version.
- **OM-43576**: Removed the `MiniDumpWithDataSegs` flag from the crash dumps on Windows. This skips adding the global data segments of each module to the crash dump, which can make some debugging tasks more difficult. However, if needed, that dump flag can be added back by using the ‘/crashreporter/dumpFlags=WithDataSegs’ setting.
- **CC-247**: Warn, instead of fail, when a user asks for 0.x and we have an implementation 0.y (where y > x). Previously, we errored out when x != y since, according to semver, 0.* versions should not be considered to be backwards compatible (but they could be).

### Fixed

- **CC-295**: Fixed compilation errors in `omni::string` when exceptions are disabled.
- **CC-303 / OM-43576**: Now that `MiniDumpWithDataSegs` has been removed from crash dumps, `RString` values cannot be resolved in minidumps. CC-303 resolves that by including additional debugging information for `RString` in crash dumps. Also establishes better testing practices for future `RString` version changes.
- CC-306: Fixed a compiler error when using `operator+(carb::extras::Path, const char*)` in the `omni` namespace. That operator was ambiguous following the additional `operator+` overloads added by CC-253. Added additional `operator+` overloads for `carb::extras::Path` to remove the ambiguity.
- OM-43783: Fixed an issue in `carb::cpp20::counting_semaphore` and `binary_semaphore` where threads would busy wait.
- OM-43783: Fixed an issue in `include/carb/Framework.h` that would cause issues if `free()` was `#define`’d (i.e. for memory debugging).
- OM-43783: Cleaned up some Intellisense warnings in `include/carb/cpp17/Variant.h`.
- OM-43783: Fixed some ambiguous operator errors and namespace issues with `include/omni/String.inl` and `include/carb/cpp17/StringView.h` that could occur when building external projects.
- OM-43783: Issues around `std::vsnprintf()` that are worked around in `include/carb/extras/StringSafe.h` also now apply to `std::snprintf()`.
- CC-193: Changes GNUC compiler-specific warning macros (those with names like `CARB_IGNOREWARNING_GNUC`) to be available when `CARB_COMPILER_MSC` is `1` and a Clang-infrastructure tool is being run (`CARB_TOOLCHAIN_CLANG` is `1`). This now correctly suppresses GNUC-specific warnings being emitted on the Windows platform from Clang.
- OM-43819: Allowed the `omni.structuredlog.plugin` module’s ‘shutdown’ flag to be cleared each time the plugin is reloaded, even if the previous unload didn’t actually remove it from memory. This prevents some crashes and potential loss of functionality on Linux when dlclose() silently leaves the library loaded.
- Reverted CC-5: Ability for crashreporter to upload files (attachments) to S3. The original change was causing issues on Linux related to the static linking of OpenSSL in `carb.crashreporter-breakpad.plugin` and conflicting with the static linking of OpenSSL in Python 3.7. A future Carbonite build will reinstate this functionality.

## 116.0

### Added

- CC-183: Added `std::string` and `carb::cpp17::string_view` overloads to `IDictionary::makeAtPath()`.
- CC-82: Added optional support for zipping crash dump files before uploading them. The zip implementation supports 4+GB files and can process files without allocating memory at crash time. The zip files are typically ~10% the size of the original file in practice, so crash dump upload time should be greatly reduced once supported on the server side.
- CC-252: Added DLL boundary safe `allocate`, `deallocate`, and `reallocate` top level functions to Carbonite. These functions all use an internal reallocation function within carb.dll/libcarb.so, making them safe to use from different modules. `carb::Framework::internalRealloc`, `carb::Framework::allocate`, `carb::Framework::free`, and `carb::Framework::reallocate` are now deprecated in favor of these new top level functions.
- CC-263: Added `IFileSystem::makeCanonicalPathEx2()`.
This version takes an extra `flags` parameter that allows some behaviour to be controlled. By default on Windows, this version assumes that the caller has already checked if the given file already exists. The `carb::filesystem::fCanonicalFlagCheckExists` flag has been added to get back the same behaviour as `IFileSystem::makeCanonicalPathEx()`.
- CC-244: Added a framework for simple geolocation to the telemetry transmitter. This will be used to allow the transmitter to exit early on startup if its current country matches a country code in a restricted regions list (specified in the ‘/telemetry/restrictedRegions’ setting). This is currently disabled due to the need for an account to use various geolocation APIs. This may be enabled at some point in the future. By default, the transmitter is allowed to run in all regions.
- CC-246: Added `omni::detail::PointerIterator`, an iterator class that wraps a possibly-cv-qualified `T*` without changing the fundamental semantics of pointers. This is meant to be used on container types with contiguous storage (such as `vector` and `string`), where returning a pointer directly from iterator functions (`begin()` and `end()`) would be inappropriate.
- CC-218: Added flags and settings to the telemetry transmitter to control how old events are processed if they are encountered in a telemetry log.
- CC-8: Added `omni::string`, an ABI safe replacement for `std::string` that also offers DLL-boundary safe allocation.
- CC-290: Added `carb::extras::getLibraryHandleFromFilename()` to get a library handle without forcing a load (ie: only retrieve the handle if the library is already loaded in the process).

## Changed

- CC-262: Improved some logging in the telemetry transmitter. It will now output the path to the log directory it will be scanning and whether it is able to get a valid authentication token (if needed).
- CC-281: Changed the way the `omni.structuredlog.plugin` plugin loads the privacy settings on startup. In addition to the previous method of loading the state into the ISettings dictionaries, it now also loads and caches the privacy settings state internally on plugin load. - CC-290: Added a `flags` parameter to `carb::extras::loadLibrary()` to allow a module name to be constructed internally from a base filename instead of having to explicitly call `carb::extras::createLibraryNameForModule()` first. - CC-290: Changed `carb::extras::createLibraryNameForModule()` to allow path components to also be included in the base library name instead of only supporting the base name itself. ## Fixed - OM-42834: Fixed an issue on x86-64 Linux where `carb.windowing-glfw.plugin` required libGLX but did not link against it, potentially causing crashes if libGLX.so was unloaded before it. - OM-28142: `omni.bind` now only updates copyright in .gen.h files if the code contents of the file change. - CC-266: `omni.structuredlog` now only updates the copyright year in .gen.h files if other code changes also occurred. - CC-268: `carb.events` now keeps subscriptions at a given `Order` in the same order as when they were registered. - CC-278: Resolved an issue where `carb.profiler-cpu.plugin` could potentially access memory from a DLL/SO after it had been unloaded. - CC-279: Resolved a rare issue with `carb.tasking` where a task could be assigned to an emergency thread which wouldn’t execute the task and could leave the task in a stuck state. - CC-293: Fixed an issue where `carb.tasking` could crash shortly after unloading if emergency threads had been started. - CC-217: Evaluated and fixed several places that were incorrectly using `std::memory_order_relaxed`. - CC-292: Fixed an issue in `carb.datasource-file.plugin` where a crash could rarely occur when the plugin was shut down and unloaded. 
## 115.0

### Changed

- CC-276: Instead of using `getFileStat` to determine the size of an opened file, use `fseek`/`ftell` on Linux and `GetFileSizeEx` on Windows. `getFileStat` retrieves costly information, such as the time converted to the local timezone, which showed up in profiling.
- CC-276: Use `DeleteFileW` to delete a file on Windows instead of `SHFileOperationW`, for performance reasons.

### Added

- CC-222: ILauncher: Added `ILauncher::waitForStreamEnd()` to wait for the `stdout` or `stderr` streams from a child process to end completely before doing other operations on the process handle. This allows a caller to ensure all the data from the child process has been received before destroying the process handle or waiting for the child process to exit. Since the reads from these streams happen asynchronously, it was not previously possible to ensure all data had been received without waiting for a fixed period of time after the child process exits.
- CC-222: ILauncher: added a single extra read callback call for `stdout` or `stderr` streams when the stream ends. The end of the stream is signalled by delivering a callback with a zero byte count.
- CC-248: Exposed all of the minidump ‘type’ flags to the settings under `/crashreporter/dumpFlags`. These can either be specified as a single hex value for all the flags to use (assuming the user knows what they are doing), or with `MiniDump*` flag names separated by comma (‘,’), colon (‘:’), bar (‘|’), or whitespace. There should be no whitespace between flags when specified on the command line. The ‘MiniDump’ prefix on each flag name may be omitted if desired. This defaults to an empty string (ie: no extra flags). The flags specified here may either override the default flags or be added to them depending on the value of `/crashreporter/overrideDefaultDumpFlags`. This option is ignored on Linux.
### Changed

- CC-222: ILauncher: documented some behavior on Linux where destroying a process handle before the child process exits could result in the child process terminating. This occurs if a read callback has been registered for the child process, then the child process tries to write to the stream (ie: `stdout` or `stderr`) after the parent process has destroyed the handle. This occurs because the Linux kernel’s default behavior on trying to write to a broken pipe or socket is to raise a SIGPIPE signal. This cannot be worked around in a general case without the child process knowing to ignore SIGPIPE.

### Fixed

- CC-220: Fixed a crash that could occur in `carb.dll` on Windows in `carb::filesystem::FileSystemWatcherWindows`, especially when watching paths that were mapped to network shares.
- CC-5: Added the ability for the crash reporter to upload files (attachments) to S3. By default the `.dmp` and `.dmp.toml` files are uploaded, but other files can be specified in the settings (either statically in a config file or dynamically via the settings interface). By default only internal NVIDIA crashes are uploaded to the carb-telemetry bucket. A future update will include a web interface and backtrace.io link to the data.
- CC-242: Fixed an issue on Windows that would cause DLLs to fail to load if their pathname exceeded 260 characters.

## 114.0

### Added

- Functions for string trimming
- `overwriteOriginalWithArrayHandling` function in `DictionaryUtils.h` that handles overwriting of arrays during dictionary merge
- CC-192: Added the `/audio/nullDeviceCount` setting to allow the number of null audio devices to be configured for testing purposes.
- CC-192: Added the `/audio/nullBackendIsFunctional` setting to allow the null backend to simulate broken audio devices for testing purposes.
- Some additional flags were added to `carb::extras::SharedMemory`.
- A `carb::extras::SharedMemory::open()` overload was added to support opening an existing region by name.
- `carb::this_process::getId()` and `carb::this_process::getIdCached()` were added to `carb/process/Util.h`; `carb::this_thread::getProcessId()` and `carb::this_thread::getProcessIdCached()` are now deprecated.
- `carb::this_process::getUniqueId()` was added to generate a process-specific unique ID for the uptime of the machine, since PIDs can be reused.
- `carb::memory::testReadable()` was added to `carb/memory/Util.h` to test if a memory word can be read from an address without crashing.
- CC-182: `carb::Framework` now has memory management functions that enable cross-plugin memory blocks: `allocate`, `free`, and `reallocate`. Memory allocated with these functions by one plugin can be passed to and freed by a different plugin or the executable.
- CC-185: Added task debugging functions to `carb.tasking.plugin`: `ITasking::getTaskDebugInfo()` and `ITasking::walkTaskDebugInfo()`. These functions allow retrieving runtime debug information about tasks.
- CC-235: The `carb.assets.plugin` system will now provide more debug information about orphaned assets when asset types are unregistered and the system is shut down.

### Changed

- CC-190: Allowed the `/telemetry/schemasUrl` setting to be treated as either an array of URL strings or just a single URL string. If multiple URLs are provided, they will be tried in order until a schemas package successfully downloads. This provides a way for ‘backup’ URLs to be supplied for the telemetry transmitter.
- `carb::extras::CmdLineParser` now trims whitespace characters from keys and values.
- OM-34980: Helper functions for processing command line arguments handle array and JSON values.
- CC-192: carb.audio previously had two `null` audio backends that could be chosen in different configurations (there was no obvious way to do this). Only one of these `null` backends can now be used. Audio devices under the `null` backend may appear to have changed as a result of this.
- CC-192: `IAudioPlayback::setOutput()` now checks for invalid flags, so it will now fail if non-zero flags are passed in which contain neither `fOutputFlagDevice` nor `fOutputFlagStreamer`.
- How `carb::extras::SharedMemory::createOrOpen()` uses the `size` parameter when opening an existing shared memory region has changed to be safer. On Linux the shared memory region grows to accommodate a larger size; on Windows growth is not supported, so requesting a larger size will fail. Previously the `size` parameter was ignored, which could lead to invalid memory accesses.
- On Linux, a global semaphore named `/carbonite-sharedmemory` is used to synchronize the `carb::extras::SharedMemory` system across multiple processes. If a process crashed or was killed with this semaphore acquired, all apps that use `carb::extras::SharedMemory` (or derivative systems, such as `RString` and `carb.dictionary`) would hang when attempting to acquire the semaphore. Now, a warning log (or print to `stderr`) is emitted after 5 seconds of waiting on the semaphore.
- CC-58: Previously, `Framework::unloadAllPlugins()` would terminate all plugins (calling `carbOnPluginShutdown`) and then on a subsequent pass unload the plugin (call `FreeLibrary()` or `dlclose()`). Now, the plugin is unloaded immediately after termination.

### Fixed

- Replaced usage of `kUpdateItemOverwriteOriginal` with `overwriteOriginalWithArrayHandling` in code that handles configuration processing.
- **CC-192:** Capture device capabilities queried through `IAudioDevice::getDeviceCaps()` and `IAudioCapture::getDeviceCaps()` will now report `fDeviceFlagConnected` on working devices.
- **CC-192:** Audio device capabilities queried through `IAudioDevice::getDeviceName` will no longer report `fDeviceFlagConnected`, since that function does not test for device connectivity/functionality.
- CC-192: When using the `null` audio device backend on Linux, `IAudioPlayback` and `IAudioDevice` in some configurations could report different playback devices. This should no longer occur.
- CC-181: On Linux, shared memory regions used for `carb::RString` could be left over from an incomplete previous run of a Carbonite application, causing crashes when `RString` was initialized. This has been fixed in a backwards-compatible way so that the application validates the shared memory.
- Fixed an issue with `extras::isTestEnvironment()` leaking module references on Linux.
- Fixed an issue with `IFileSystem::getExecutablePath()` and `IFileSystem::getExecutableDirectoryPath()` not initializing in a thread-safe manner.
- Fixed an incorrect error message that was reported when using `ILauncher` on Windows.
- CC-227: omni.structuredlog timestamp checking has been fixed, so it won't regenerate files during every build.
- CC-213: Fixed carb.audio returning `AudioResult::eInvalidParameter` and `AudioResult::eDeviceDisconnected` when `AudioResult::eDeviceLost` should have been returned.

## 113.0

### Fixed

- **BREAKING CHANGE:** The misspelled `carb::Format::B5_G6_R5_UNROM` was corrected to `carb::Format::B5_G6_R5_UNORM`.
- CC-1: `carb::RStringKey` and `carb::RStringUKey` now satisfy `std::is_standard_layout<>` for better ABI safety. The binary layout of these classes did not change, so they remain backwards compatible with older Carbonite binaries.
- CC-184: Fixed an issue where running a Carbonite executable as root on Linux could potentially cause Carbonite executables run by other users to crash on startup when either the `SharedMemory` or `RString` systems were in use.
- Fixed a shared memory leak on Linux when the `RString` system was in use and shutdown occurred with `carb::quickReleaseFrameworkAndTerminate()`.
- OM-39751: `IAudioCapture::getSoundFormat()` will no longer report its format as `eDefault`, and it will no longer report a channel mask of 0.
- CC-164: Fixed an issue where `carb::profiler::ProfileEvent` objects within a frame provided by `carb::profiler::IProfileMonitor` would not be sorted correctly.
- Fixed `IAudioPlayback` leaking `Streamer` objects when the `Context` is destroyed.

### Added

- Documentation for our release and versioning process, *docs/Releasing.html|rst*.
- Implemented `carb::cpp20::bit_cast` and `carb::cpp20::endian` in *include/carb/cpp20/Bit.h*.
- *carb/tasking/TaskingHelpers.h* now contains additional macros, such as `CARB_ASYNC` and `CARB_MAYBE_ASYNC` to signal that a function is called from a task, and `CARB_ASSERT_ASYNC` to assert that a function is running in task context.
- OM-41046: Added `waitForClose()` to `carb::audio::OutputStreamer`, so that it is possible to verify when the output file is no longer being written to after disconnecting the streamer via `IAudioPlayback::setOutput()`.
- OM-41046: Exposed `fOutputFlagAllowNoStreamers` in `IAudioPlayback` to allow clients to easily have an audio context with no underlying audio device. This is mainly intended to simplify testing.
- CC-176: Added support for fixed length string buffers in structs to `omni.bind`.
- Implemented *carb.simplegui.plugin*, a wrapper around Dear ImGui with a built-in Vulkan raster renderer that significantly simplifies the work required to get a usable GUI for example projects. This also removes Carbonite's dependency on the *carb_gfx_plugins*.
- OM-39664: Added the `omni::platforminfo::IDisplayInfo` interface for collecting information on all the displays attached to the system and their supported display modes.

### Changed

- `carb::dictionary::getStringArray()` now has its parameters marked as `const`.
- CC-163: Changed the `omni.structuredlog` tool so that it only writes the output files if they have changed versus the code that has been generated, or if the output file doesn't exist yet. If the file already exists and is identical to the generated code, the output is ignored. This avoids touching timestamps for unmodified generated files and avoids a possible write permission error if another project with a missing dependency is currently reading from the generated file.
- OM-39751: Added `/audio/maxDefaultChannels` to limit the number of audio channels that are used when opening an audio device with a default speaker mode. This was an issue when opening the "null" device in ALSA, which reported the maximum possible number of audio channels. Opening an `IAudioPlayback` context with more than 16 channels requires some additional setup, so this should not be automatic.
- OM-35496: Added some extra omni.structuredlog bindings to python. As a result, omni.structuredlog now ships an `__init__.py` and a binary instead of only shipping a binary.
- CC-23: Applicable *carb.tasking.plugin* settings are now dynamic and will be reloaded automatically when they change.

### 112.53 @joshuakr

#### Changed

##### carb.profiler-cpu.plugin Improvements

- The `IProfiler` interface now supports Instant events via the macro `CARB_PROFILE_EVENT()`. Instant events are drawn on the timeline with zero duration. Instant events may not be supported by profilers other than *profiler-cpu*.
- The `IProfiler` interface now supports Flow events via the macros `CARB_PROFILE_FLOW_BEGIN()` and `CARB_PROFILE_FLOW_END()`. Flow events draw as an arrow between two zones and can begin and end in different threads. Flow events may not be supported by profilers other than *profiler-cpu*.
- When using carb.tasking.plugin and profiler-cpu, an Instant event is automatically emitted when a task is queued.
  A Flow event is automatically drawn from where the task was queued to where the task begins executing. When the setting key `/plugins/carb.profiler-cpu.plugin/fibersAsThreads` is false, Flow events are automatically drawn to connect all zones executing the task as it moves between the various task threads.
- For profiler zones, the Category field is now automatically set to the name of the library containing the zone.
- The resulting (possibly compressed) json output from carb.profiler-cpu.plugin is more compact.
- The "begin" event (via `CARB_PROFILE_BEGIN()` or `CARB_PROFILE_ZONE()`) is now slightly faster.
- The "frame complete" event (via `CARB_PROFILE_FRAME()`) is now slightly faster.
- Fixed a few possible crashes that could occur if a library had been unloaded when unprocessed profile events referenced it.

### 112.52 @jshentu

#### Added

- OM-40643: Added `createCursor()` to `IWindowing`, allowing a custom image to be supplied and used as the cursor shape.

### 112.51 @cdannemiller

#### Changed

- Removed trailing newlines from all `CARB_LOG` messages, as Carbonite automatically adds a newline to each log message.

### 112.50 @jroback

#### Added

- OM-36988: Added `carb::extras::Uuid`, a class for UUIDv4 unique identifiers.

### 112.49 @jfeather

#### Fixed

- OM-39751: Removed an optimization for an edge case in carb.audio-forge that prevented certain callbacks from being sent.

### 112.48 @saxonp

#### Added

- OM-22490: Include the deps folder in the Carbonite package.

### 112.47 @evanbeurden

#### Fixed

- OM-40193: Deferred creation of the standard logger log file until a message is actually written to it. This fixes the behaviour of a log file being truncated by a secondary concurrent launch of an app trying to write to the same log file, even if the new process did not write any log messages.
- OM-40193: Fixed the behaviour of `carb::logging::StandardLogger::setFileConfiguration()` in the case of disabling logging to file when the log file has not been opened yet.
  Previously the log filename was not being cleared as documented.
- OM-40193: Allowed `carb::filesystem::IFileSystem::closeFile()` to fail gracefully if passed `nullptr`. This makes the function friendlier to use and matches the expected common OS-level behaviour of similar functions such as `fclose()` and `CloseHandle()`.
- OM-40193: Enabled much more of the omni.telemetry.transmitter app's logging by default. This allows it to leave a log file that can be reliably used for post mortem analysis of its behaviour.
- OM-40193: Updated the OpenSSL version used in the omni.telemetry.transmitter app to fix some issues on CentOS 7.

## 112.46 @hfannar

## 112.45 @evanbeurden / @joshuakr

### Fixed

- OM-40256: Fixed a potential race condition calling the `IEventListener::onEvent()` functions for subscribers to an event stream. It was previously possible for the subscription to be removed at the same time the object was being used to call the `onEvent()` function.
- OM-40212: Fixed an issue where serialized events sent from different threads could be received out-of-order despite the serialization. Added documentation for `carb.events`.

## 112.44 @joshuakr

### Added

- OM-39253: For `carb::profiler::IProfiler`, `CARB_PROFILE_BEGIN()` now returns a value that can be passed to `CARB_PROFILE_END()` for validation.
- OM-33647: `carb.profiler-cpu.plugin` now outputs source information if available. This can be disabled with the setting `/plugins/carb.profiler-cpu.plugin/recordSourceInfo` (default: true).
- Documentation for header files in `include/carb/profiler/`.

### Fixed

- OM-40102: Fixed a `carb.tasking` issue where a timed wait could cause `ITasking::suspendTask()` to not work correctly.

## 112.43 @jfeather

### Fixed

- OM-40135: Converting bytes to frames with a variable-bitrate format in the audio utilities in `AudioUtils.h` will no longer result in a divide-by-zero. Note that this conversion is still an error.
- OM-40135: Setting `AudioImageDesc::lengthType` to `UnitType::eBytes` should now function correctly instead of hitting a divide-by-zero.
- OM-40135: Fixed a bug where sounds would fail to encode due to libvorbis setting `bitrate_nominal` to a negative number.

## 112.42 @evanbeurden

### Added

- OM-39664: Added a concept of 'volatile' metadata to the crash reporter. These are metadata values that change frequently (i.e. free RAM, frame rate, etc.) and should only be collected in the event a crash does actually occur.
- OM-39664: Added the `/crashreporter/debuggerAttachTimeoutMs` config option to provide a wait period after a crash occurs in which a debugger can attach before the crash upload and metadata processing actually occur.
- OM-39664: Added RAM, swap, and VM space information as metadata in the crash report.

## 112.41

### Fixed

- OM-40023: Added an error message when `remove()` fails in `IFileSystem::removeFile()` to match the Windows behavior.

## 112.40

### Fixed

- OM-39974: Fixed a memory corruption crash that could occur when `IInput::clearActionMappings` was called in a multi-threaded environment.

## 112.39

### Fixed

- Updated the following INFO log from carb.audio: `[carb.audio.context] the bus count would not`

## 112.38

### Fixed

- `carb::audio::IAudioUtils::drawWaveform()` will no longer encounter errors when the sound offset is specified in a unit other than frames.

## 112.37

### Changed

#### Enable C++17 on codebase

- OM-34896: Enabled C++17. By default, Carbonite now compiles with C++17. Linux toolchains stay the same as before; MSVC has been moved to VS2019. All existing binaries remain 100% compatible, including python bindings. Projects using Carbonite can continue using C++14; Carbonite remains backwards compatible with C++14.

## 112.36

### Fixed

- Fixed a bug in ILauncher on Linux where destroying a process handle could incorrectly close a file descriptor it didn't own.
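The 112.36 fix above hinges on tracking file-descriptor ownership explicitly rather than closing descriptors unconditionally. A minimal sketch of that pattern (the class name and structure are illustrative, not ILauncher's actual implementation, and it assumes a POSIX system):

```cpp
#include <unistd.h>

// Hypothetical holder demonstrating the ownership rule behind the
// 112.36 ILauncher fix: only close a descriptor this object owns.
class FdHolderSketch
{
public:
    FdHolderSketch(int fd, bool owns) : m_fd(fd), m_owns(owns) {}
    ~FdHolderSketch()
    {
        if (m_owns && m_fd >= 0)
            ::close(m_fd); // a non-owning holder leaves the fd open
    }
    int get() const { return m_fd; }

private:
    int m_fd;
    bool m_owns;
};
```

Destroying a non-owning holder leaves the descriptor usable by whichever object actually owns it, which is exactly what the fix restores.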
## 112.35

### Changed

#### Drastically improved performance of carb.profiler-cpu.plugin

- OM-33716: Changed `carb.profiler-cpu.plugin` to use per-thread queues instead of a global ringbuffer. This greatly improved performance by reducing contention: about 99.5% faster in highly contended cases.
- Switched to using Ryu and {fmt} for conversion of numeric values to string in the profiler for an additional speed improvement. Adjusted compression settings so that a generated .gz profile is about 2% larger but much faster to produce.
- OM-39381: Fixed an issue where `CARB_PROFILE_FUNCTION` incorrectly generated the function name.

## 112.34

### Fixed

- OM-39573: Fixed an issue where dictionary keys that ended with `_0` could drop the `_0`.

## 112.33

### Fixed

- OM-38834: `IFileSystem::readFileLine()` now consumes the line ending even if there is no room for it in the buffer provided. See the documentation for `IFileSystem::readFileLine()` for more information.

### Changed

- Keys for `carb.dictionary.plugin` now internally use `carb::RString`. This has no effect on the plugin interface, ABI, or API, but should result in higher performance and less (transient and long-term) memory usage.
- **Compile-breaking change**: `carb/rstring/RString.h` has moved to `carb/RString.h`.

## 112.31

### Fixed

- OM-38790: Fixed memory corruption and invalid memory accesses that could occur in the `RString` system on Linux.
- Improved stability around `carb::extras::SharedMemory` and `carb.launcher`.

### Added

- Added a `CARB_RETRY_EINTR` macro for GCC on Linux to automatically retry operations when `EINTR` is reported.
- All Carbonite python bindings are now built for Python 3.9 in addition to the previous platforms.

## 112.30

### Fixed

- Improved the retrieval of available physical RAM in `carb::extras::getPhysicalMemory()`.

## 112.29

### Added

- Added the `omni.platforminfo.plugin` plugin. This provides interfaces to access the CPU, memory, and OS information for the calling process.
- Added python bindings for the `omni.platforminfo.plugin` plugin.

## 112.28

### Fixed

#### Structured Logging

- Fixed a thread race condition that could lead to a crash.

## 112.27

### Fixed

#### carb.scripting-python.plugin

- Using the global context from `getGlobalContext()` no longer causes a stack overflow.

### Changed

#### carb.scripting-python.plugin

- Enabled UTF-8 mode for Python scripting by default. It can be turned off with the `/plugins/carb.scripting-python.plugin/pythonFlags/Py_UTF8Mode` setting.

## 112.26

### Reverted

- The changes from 112.19 have been reverted pending further testing.

## 112.25

### Fixed

#### carb.input.plugin

- OM-37121: Allow `filterBufferedEvents()` to be called multiple times before `distributeBufferedEvents()`.

## 112.24

### Fixed

#### carb.assets.plugin

- OM-38102: The carb.assets system would consider successful but zero-byte loads from a datasource as failures. This is no longer the case: zero-byte successful loads are now treated as success and the asset will proceed with the loading process.
- `carb::assets::IBlobAsset` now supports `nullptr`/zero-byte data and has been changed to version 1.0.

## 112.23

### Changed

#### carb.crashreporter-breakpad.plugin

- OM-38102: Metadata is now stored as a separate file alongside each crash dump, so that if crash uploads must be deferred until a subsequent run, the proper metadata can still be uploaded.
- The uptime of the application (measured as the time that `carb.crashreporter-breakpad.plugin` has been loaded) is now automatically included in the metadata as `UptimeSeconds`.
- The UUID of the dump is now included in the metadata as `DumpId`.
- Crash metadata is included in the human-readable text file that is produced alongside the crash dump.

## 112.22

### Changed

- Fixed the `carb.log_*` python functions to provide more source info (file, line number, module, etc.) to the logging system.
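The zero-byte-load change in 112.24 above boils down to separating "the load failed" from "the load succeeded and produced no data". A minimal sketch of that distinction (the types are illustrative, not the carb.assets API):

```cpp
#include <vector>

// Illustrative result type: success is tracked independently of the
// payload size, so a zero-byte payload is no longer mistaken for a
// failed load (the 112.24 behaviour).
struct LoadResultSketch
{
    bool succeeded;
    std::vector<unsigned char> data;
};

inline bool loadFailedSketch(const LoadResultSketch& r)
{
    return !r.succeeded; // zero-byte data alone is not a failure
}
```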
## 112.21

### Fixed

- OM-37197: When `carb.crashreporter-breakpad.plugin` detects a crash and writes out the callstack in the .txt file, it now attempts to discern source file and line information (via `addr2line`) and include it in the report.
- Resolved an erroneous log message "Failed to create or open a semaphore" that would be logged when using `RString` in multiple modules.

## 112.20

### Removed

- Removed the `omni/extras/InterruptableSleep.h` header and related classes in favor of using `carb::cpp20::binary_semaphore` directly instead.

## 112.19

### Added (Reverted in 112.26)

- Functions for string trimming.
- OM-34980: Helper functions for processing command line arguments handle array and JSON values.

### Changed (Reverted in 112.26)

- `carb::extras::CmdLineParser` now trims whitespace characters from keys and values.

## 112.18

### Fixed

- OM-37431: Fixed an internal carb.tasking assumption that could lead to `CARB_CHECK` violations.
- OM-37902: Fixed a race condition with certain timed waits that could lead to asserts and test failures.

## 112.17

### Fixed

- OM-37870: Fixed an issue that was causing `carb::tasking::Future` returned from `ITasking::addTask` (and variants) to be signaled before a task was complete, causing spurious test failures. Added documentation.

## 112.16

### Added

- Added the omni.telemetry.transmitter tool to its own package that is published with the other Carbonite packages. This was done for easier deployment of the tool in non-Carbonite projects that want to use it.

## 112.15

### Added

- OM-31214: Added a function `carb::quickReleaseFrameworkAndTerminate()` which will attempt to shut down all plugins, flush `stdout` and `stderr`, and terminate the app via `TerminateProcess()` on Windows or `_exit()` on Linux.

### Fixed

- carb.memory has been disabled by default for Debug builds (it was already disabled by default for Release builds) as it could cause shutdown crashes.
- On Windows, if `main()` exits and the Carbonite framework has not been shut down, the framework now attempts to shut itself down before all threads are terminated.
- A workaround was implemented for using carb.tasking with ASan and UBSan: false errors could previously be reported due to fiber stack memory being reused. If ASan is detected, the memory for thread stacks is now `mmap`'d to `PROT_NONE` instead of unmapped, leaving the address space non-reusable but preventing the errors.

## 112.14

### Fixed

- OM-37286: Fixed possible undefined behavior with `carb::tasking::Trackers` storing an `initializer_list`.
- Fixed a compile issue with `include/carb/assets/IAssets.h` on GCC 10.

## 112.13

### Fixed

- OM-34654: `carb.dictionary.serializer-json` could not handle serializing or parsing float values such as `inf` or `nan`, which could cause corrupted JSON files from failed serialization. This has been fixed to serialize those values as strings so they can be serialized/deserialized properly.

## 112.12

### Changed

## 112.11

### Changed

- Changed `carb.crashreporter-breakpad` to determine the absolute path of the crash dump directory during configuration rather than in the crash handling code path, to prevent possible errors while crashing.

## 112.10

### Changed

- Changed command line setting parsing to also parse hexadecimal and octal values.

## 112.9

### Fixed

- Changed command line setting parsing to also parse hexadecimal and octal values.

## 112.8

### Fixed

- Made the handling of finding the system temp directory more robust on Linux and improved some error messages from `carb::filesystem::IFileSystem::makeTempDirectory()`.

## 112.7

### Fixed

- Command line settings are now parsed as 64-bit integers and doubles, rather than 32-bit integers and floats. This prevents clamping of values larger than the maximum values of 32-bit integers or floats.

## 112.6

### Changed

- Changed `carb.crashreporter-breakpad` to print the absolute path of the crash dump file.
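The command-line parsing entries in 112.10 and 112.7 above describe wider integer parsing with base auto-detection. A sketch of the idea using only the standard library (the helper name is hypothetical; Carbonite's actual parser is not shown here): `strtoll` with base 0 accepts decimal, `0x`-prefixed hexadecimal, and `0`-prefixed octal input, and its 64-bit result avoids clamping large values.

```cpp
#include <cstdint>
#include <cstdlib>
#include <string>

// Hypothetical helper: parse a setting value as a 64-bit integer with
// automatic base detection (decimal, 0x... hex, 0... octal).
inline std::int64_t parseSettingIntSketch(const std::string& text)
{
    return std::strtoll(text.c_str(), nullptr, 0); // base 0 = auto-detect
}
```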
## 112.5

### Changed

- `carb.crashreporter` is now disabled by default on Linux.

## 112.4

### Changed

- Added an `omni::core::ImplementsCast` that only implements the `cast_abi` function.

## 112.3

### Fixed

#### CrashReporter-Breakpad changes

- OM-34471: Reduced the amount of redundant Verbose log output.
- OM-36335: Asynchronous uploading of crash dumps was holding a mutex that could block the main thread. This mutex is no longer held while the upload is pending, allowing the main thread to proceed.

## 112.2

### Changed

- Changed the code generator for structured logging to use `repo_format`.

## 112.1

### Fixed

- Resolved an issue with counted `RString` objects from 112.0 where the length was not always taken into account.

## 112.0

### Changed

#### RString changes

- The API has a minor change: `isNone()` is replaced by `isEmpty()`, and a default-constructed `RString` will refer to an empty string (`c_str()` returns `""`). Similarly, `eRString::RS_None` is replaced by `eRString::Empty`.
- `RString` and variant classes now have constructors that accept a counted string (length provided) and a `std::string`. As such, `RString` now supports strings with embedded NUL (`'\0'`) characters.

#### CrashReporter-Breakpad changes

- Windows minidumps now include data segment information (global variables).
- `RString` data is included in Windows minidumps.
- Settings key `/crashreporter/preserveDump` (default: false) can be used to have a dump file remain even after it has been uploaded to the server.
- Several log messages when writing a crash dump and uploading were changed from Verbose to Warn to be more visible.

### Fixed

- On Linux, processes launched via `ILauncher` with `fLaunchFlagNoStdOut` and/or `fLaunchFlagNoStdErr` would close the file handles for `stdout` and `stderr`, allowing them to be reused. This could lead to undefined behavior.
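The counted-string constructors described under 112.0 above are what make embedded NUL characters possible: a length-based copy does not stop at the first `'\0'`. A standard-library sketch of the distinction (this is not RString's implementation):

```cpp
#include <cstddef>
#include <string>

// Length-based construction preserves embedded NULs, which a plain
// C-string constructor would truncate at the first '\0'.
inline std::string countedCopySketch(const char* data, std::size_t length)
{
    return std::string(data, length);
}
```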
### 111.21

#### Added

- Added `BitScanForward`, `BitScanReverse`, and `PopCount` functions to `carb::math`.

### 111.20

#### Changed

- Optimized read access of the IDictionary interface.
- OM-36102: `carb.crashreporter-breakpad` now prints the crash dump file location at error level when a crash occurs.

### 111.18

#### Fixed

- `omni::core::cast` would not compile if used with `IObject` when the implementation uses multiple inheritance. Added a unit test for this case.

### 111.17

#### Fixed

- OM-13024 and OM-25736: Interfaces (that may be non-trivial) were always memcpy'd when instantiated. Now, by means of a new plugin function `carbOnPluginRegisterEx2` (automatically generated by `CARB_PLUGIN_IMPL`), plugin interfaces are constructed in place without memcpy. This requires the plugin to be recompiled against Carbonite 111.17. The Carbonite Framework version did not change and remains version 0.5. Any plugins compiled against Carbonite Framework 0.5 will continue to work (but will perform the memcpy, which is unsafe if the interface struct is not trivial), and Carbonite Framework 0.5 from Carbonite 111.16 and previous will continue to load newer plugins.

## 111.16

### Fixed

- **OM-34946**: Fixed a C++17 compile error with pybind11 and BindingsPythonUtils.h.

## 111.15

### Fixed

- **OM-35664**: Fixed an issue with plugin unload ordering where dependencies could be unloaded too early. This was rare and generally affected plugins which had a stated dependency in `CARB_PLUGIN_IMPL_DEPS` but did not acquire the dependency until `carbOnPluginShutdown()` was called.

## 111.14

### Fixed

- **OM-35375**: It was possible for some `ISettings` setter functions, such as `setString`, to call callbacks while an internal lock was held. This could lead to deadlock situations if another thread was querying the settings.
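The OM-35375 entry above (111.14) describes the classic hazard of invoking callbacks while holding an internal lock. A common fix, sketched here with illustrative types (this is not the ISettings implementation), is to snapshot the callback list under the lock and invoke the copies after releasing it:

```cpp
#include <functional>
#include <mutex>
#include <vector>

// Sketch of the deadlock-avoidance pattern implied by the OM-35375 fix:
// copy the callback list while holding the lock, then invoke the copies
// after releasing it, so a re-entrant callback cannot deadlock.
class CallbackListSketch
{
public:
    void add(std::function<void()> fn)
    {
        std::lock_guard<std::mutex> hold(m_mutex);
        m_callbacks.push_back(std::move(fn));
    }

    void notify()
    {
        std::vector<std::function<void()>> copy;
        {
            std::lock_guard<std::mutex> hold(m_mutex);
            copy = m_callbacks; // snapshot under the lock
        }
        for (auto& fn : copy) // invoke without the lock held
            fn();
    }

private:
    std::mutex m_mutex;
    std::vector<std::function<void()>> m_callbacks;
};
```

With this pattern, a callback that re-enters the object (for example, to register another callback) blocks only briefly on the lock instead of deadlocking.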
## 111.13

### Fixed

- Renamed template vars in `TypeTraits.h` to avoid conflicting with `termios.h`.

### Changed

- Moved the telemetry transmitter helper functions in `include/omni/structuredlog/Telemetry.h` from the `omni::structuredlog` namespace to `omni::telemetry`.
- Implied 'test' mode in the telemetry transmitter when the `/telemetry/schemaFile` or `/telemetry/schemasDirectory` options are used.

## 111.12

### Fixed

- Fixed the `omni.structuredlog` tool to only disable MSVC's 'fast up-to-date' check for projects that make use of the tool. Projects that do not use the tool will still skip the build check if they are up to date.

### Changed

- `omni::structuredlog::launchTransmitter()` will now automatically pass a number of settings from the host process to the child process, so that launching the transmitter is easier.

## 111.11

### Changed

- Updated to repo_format 0.6.4.

## 111.10

### Fixed

- Fixed an issue with `carb.profiler-tracy.plugin` where throwing a C++ exception could cause issues after the plugin was unloaded.

### Added

- Added a helper function to `carb::launcher::ArgCollector` to collect all the settings in a given branch and add them as arguments that can be passed to a child process.

## 111.9

### Changed

- Upgraded to repo_format 0.6.0 and ran the latest repo_format on all code. This results in a large change to copyright ranges in the headers of source files only.

## 111.8

### Changed

## 111.7

### Fixed

- Fixed potential memory corruption that could occur within `carb.tasking`, especially with `ITasking::applyRange()`.

### Changed

- `omni.telemetry.transmitter` now requires `--/telemetry/allowRoot=true` to be able to run as root. This was done to prevent users from running the transmitter with `sudo` and causing permission issues with the lock files.
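The 111.10 `ArgCollector` helper above collects a settings branch into arguments for a child process. A sketch of the general idea (the data model and function are hypothetical; the `--/path=value` argument form follows the convention visible elsewhere in this changelog, e.g. `--/telemetry/allowRoot=true`):

```cpp
#include <map>
#include <string>
#include <vector>

// Hypothetical collector: each key under a settings branch becomes a
// `--/path=value` argument that a child process can parse back into
// its own settings tree.
inline std::vector<std::string> collectSettingsArgsSketch(
    const std::map<std::string, std::string>& branch)
{
    std::vector<std::string> args;
    for (const auto& kv : branch)
        args.push_back("--" + kv.first + "=" + kv.second);
    return args;
}
```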
## 111.6

### Fixed

#### Shutdown/unload improvements

- Version 110.0 introduced a tweak to unload ordering that would attempt to discover "implicit dependencies" created by plugins acquiring interfaces through `tryAcquireInterface`. There were issues with this implementation and it was disabled in version 111.3.
- A corrected unload order is now present in Carbonite. If plugin `Foo` acquires an interface from plugin `Bar`, `Foo` must now be unloaded before `Bar`. The exception to this rule is a circular reference, in which case the unload order is the reverse of the load order.

### Added

- Added documentation for the various structured log settings to the Telemetry Overview docs.
- Added documentation for the various telemetry transmitter settings to the Telemetry Overview docs.
- Added `/structuredlog/logDirectory` to allow the output directory of structured logging to be specified by Settings.

## 111.5

### Added

#### Structured Log

- Documented the wildcard helper functions in `include/omni/str/Wildcard.h`.
- Added settings keys to allow structured log schemas to be enabled or disabled from config files or the command line.
- The following new settings paths have been added to disable schemas and events. Each key under these paths will enable or disable one or more schemas or events (wildcards are allowed):
  - `/structuredLog/state/schemas`: enable or disable zero or more schemas when they are first registered.
  - `/structuredLog/state/events`: enable or disable zero or more events when they are first registered.
- The following new settings keys have been added to disable schemas and events. Each of these keys is expected to be an array of strings indicating the schema or event name and its enable or disable state:
  - `/structuredLog/schemaStates`: enables or disables zero or more matching schemas when they are first registered.
  - `/structuredLog/eventStates`: enables or disables zero or more matching events when they are first registered.
- For more information, see `include/omni/structuredlog/StructuredLogSettingsUtils.h` or consult the generated documentation package.
- Added `/structuredlog/logDirectory` to allow the output directory of structured logging to be specified by Settings.

#### Utilities

- Added `carb::this_thread::getProcessId()` and `carb::this_thread::getThreadId()`.

## 111.4

### Fixed

- Fixed a hang that could occur rarely with `carb::getCachedInterface<>` on the AArch64 platform.

## 111.3

### Changed

- The "Improvements to Carbonite Shutdown" from version 110.0 have been revoked pending review. An issue was discovered that would cause improper plugin unload order on shutdown.

## 111.2

### Added

#### String interning (registered fast-comparison strings)

- A registered string class `RString` has been added in `carb/rstring/RString.h`.
- The `RString` class is case-sensitive, but has a companion class `RStringU` that is "un-cased" (i.e. case-insensitive).
- The `RString` and variant classes can be (in)equality checked in O(1) time and non-lexicographically sorted in O(1) time.
- All registered strings are shared within an application's memory space across multiple modules and can be used prior to `main()` without needing to instantiate the Carbonite framework.
- Carbonite registered strings are similar to Unreal's `FName`, `boost::flyweight<std::string>`, and USD's `TfToken`.

## 111.1

### Added

- `/telemetry/schemasDirectory` and `/telemetry/schemaFile` have been added to omni.telemetry.transmitter. These allow debug builds to load their telemetry schemas from disk instead of requiring an HTTP download.
- `/telemetry/authentication` has also been added to the transmitter to allow test cases where authentication can't be used.

### Fixed

- Fixed a shutdown crash that could occur if a cached interface was destroyed after the Carbonite framework had been unloaded.

## 111.0

### Removed

- Removed unused, untested lua bindings.
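The O(1) equality described for `RString` in 111.2 above comes from string interning: each unique string is registered once and represented by an integer, so comparison never touches characters. A minimal single-threaded sketch of the technique (the data structures are illustrative, not Carbonite's implementation, and a real registry would need cross-module sharing and thread safety):

```cpp
#include <cstdint>
#include <string>
#include <unordered_map>
#include <vector>

// Toy interned string: equality compares integer IDs in O(1).
class InternedStringSketch
{
public:
    explicit InternedStringSketch(const std::string& s) : m_id(internIt(s)) {}

    bool operator==(const InternedStringSketch& o) const { return m_id == o.m_id; } // O(1)
    const std::string& c_str() const { return strings()[m_id]; }
    std::uint32_t id() const { return m_id; }

private:
    static std::vector<std::string>& strings()
    {
        static std::vector<std::string> s;
        return s;
    }
    static std::uint32_t internIt(const std::string& s)
    {
        static std::unordered_map<std::string, std::uint32_t> registry;
        auto it = registry.find(s);
        if (it != registry.end())
            return it->second; // already registered: reuse the ID
        std::uint32_t id = static_cast<std::uint32_t>(strings().size());
        strings().push_back(s);
        registry.emplace(s, id);
        return id;
    }
    std::uint32_t m_id;
};
```

Sorting by `id()` gives the O(1), non-lexicographic ordering the entry mentions: consistent within a run, but unrelated to alphabetical order.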
## 110.1

### Fixed

- Fixed an issue with `carb.scripting-python.plugin` where `IScripting::executeScript()` and `IScripting::executeScriptWithArgs()` would crash if the script had incorrect Python grammar.

## 110.0

### Changed

#### Improvements to Carbonite Shutdown

- If a plugin creates a dependency with `tryAcquireInterface()`, that dependency could be unloaded before the plugin is unloaded, leading to potential crashes at shutdown. Such a dependency is now taken into account during plugin shutdown and unloading, so that plugins are unloaded before their dependencies.
- The `getCachedInterface()` helper function no longer tries to re-acquire the interface if the interface has been released. The `resetCachedInterface()` function will evict the cached interface and reset so that the next call to `getCachedInterface()` will attempt to re-acquire the interface. Releasing the entire framework, however, does reset the cached state so that it can be re-acquired.
- Cached interfaces are now evicted immediately prior to calling `carbOnPluginShutdown` for a plugin. This causes `getCachedInterface()` to return `nullptr` instead of an interface that is in the process of destructing.

### 109.0

#### Changed

- `carb::extras::EnvironmentVariable` is now an RAII class.

### 108.14

#### Added

- `IAudioDevice::getBackend()` has been added to allow users to check which audio backend they're running under, to make specific decisions based on which audio system is running. This is mainly intended to be used to detect whether ALSA is in use, which makes some operations (such as device enumeration) costly.

### 108.13

#### Changed

- omni.telemetry.transmitter switched from environment variables to ISettings. This allows users to configure omni.telemetry.transmitter with the config.toml file or via command line arguments. `OMNI_TELEMETRY_ENDPOINT` has been replaced with `/telemetry/endpoint`. `OMNI_TELEMETRY_SCHEMAS_URL` has been replaced with `/telemetry/schemasUrl`.
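The unload-ordering rule described under 110.0 above (and corrected in 111.6) is essentially a reverse topological sort: a plugin that acquired an interface from another must be unloaded first, with circular references needing a fallback. A small sketch of such an ordering pass (the data model is illustrative, not the Framework's; the cycle fallback here is arbitrary, whereas 111.6 specifies reverse load order):

```cpp
#include <map>
#include <set>
#include <string>
#include <vector>

// deps maps each loaded plugin (non-empty name) to the plugins it
// acquired interfaces from. Repeatedly unload any plugin that no
// still-loaded plugin depends on; dependencies come out last.
inline std::vector<std::string> unloadOrderSketch(
    const std::map<std::string, std::set<std::string>>& deps)
{
    std::vector<std::string> order;
    std::set<std::string> loaded;
    for (const auto& kv : deps)
        loaded.insert(kv.first);
    while (!loaded.empty())
    {
        std::string next;
        for (const auto& name : loaded)
        {
            bool neededByOther = false;
            for (const auto& other : loaded)
                if (other != name && deps.at(other).count(name))
                    neededByOther = true;
            if (!neededByOther)
            {
                next = name;
                break;
            }
        }
        if (next.empty())
            next = *loaded.begin(); // circular reference: break the cycle
        order.push_back(next);
        loaded.erase(next);
    }
    return order;
}
```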
### 108.12

#### Changed

- omni.telemetry.transmitter no longer uses python.
- `omni::structuredlog::launchTransmitter()` has had the arguments for its python environment removed.

### 108.11

#### Changed

- The carb.tasking.plugin heuristic that guides `carb::tasking::ITasking::applyRange` has been adjusted to allow for more parallelism, especially with fewer than 512 items.

### 108.10

#### Fixed

- Callstacks reported by `carb.crashreporter-breakpad.plugin` during crash time now include offsets next to the function names in order to be more accurate.

### 108.9

#### Changed

- carb.profiler-cpu is now statically linked to zlib on Linux. This was the existing behavior on Windows.
- Public include files should now build cleanly with the /W4 warning level on MSVC and -Wall on GCC.

#### Added

- If the `/plugins/carb.tasking.plugin/debugTaskBacktrace` key is true and a task enters the Waiting state, the backtrace is now captured and stored at `carb::tasking::TaskBundle::m_backtrace` for a given task. The entire database of tasks is available at `carb.tasking.plugin.dll!carb::tasking::Scheduler::s_scheduler->m_taskHandleDb` and can be added to the watch window.
- `carb::tasking::ScopedTracking` now exists to be able to start and end tracking on `carb::tasking::Trackers` without having to add a dummy task (OM-33470).

### 108.8

#### Fixed

- Fixed the `omni.structuredlog` tool when it is called from an external project (i.e. outside of Carbonite). Previously this script would generate prebuild commands that would not work properly under MSVC; they would however build correctly when run through `build.{bat|sh}`. The tool now writes out a full build script for MSVC prebuild commands and writes out a similar set of commands for use on Linux.

### 108.7

#### Changed

##### carb.tasking.plugin Changes

- An optimization where a task that waits on another task may attempt to execute the dependent task immediately, regardless of priority.
- Dependency helper classes `carb::tasking::Any` and `carb::tasking::All` can now take iterators as input.
- Fixed a small, rare memory leak at shutdown.

## 108.6
### Added
#### Additional features for carb.tasking.plugin
- "Task Storage" has now been exposed in the `ITasking` interface (similar to thread-local storage, but for tasks).
- Full support for `Promise` and `Future` types.

#### Support for specifying multiple schemas to be built in a single project
- Multiple schemas may now be specified as an array in a single call to `omni_structuredlog_schema()`. A single schema may still be specified by passing an object containing the parameters instead of a full array.

## 108.5
### Added
- Added support for anonymized and pseudonymized events in the telemetry transmitter.

### Fixed
- Fixed some structured log schemas that would not always rebuild properly under MSVC. This was caused by a limitation of MSVC projects generated under premake; the problem did not exist on Linux.

## 108.4
### Changed
- Class names in generated structured log headers have changed to include the version in their name. This will require changes to structured log calls that use enum or object types, and to code that uses the generated structured log Python bindings.

### Fixed
- The `OMNI_FORCE_SYMBOL_LINK` macro now evaluates correctly, so redefinition errors should no longer occur. It should now be possible to include multiple versions of a structured log schema without encountering build errors.

## 108.3
### Fixed
- Added a more helpful error message when `setup_omni_structuredlog()` is not called in a premake script that uses `omni_structuredlog_schema()`.

## 108.2
### Changed
- Added `isWindowFloating` and `setWindowFloating` to `carb::windowing::IWindowing`, and implemented them for `carb.windowing-glfw.plugin`.

## 108.1
### Changed
- Upgraded to the newest version of repo_format, which includes clang-format v12.
- Removed the `--clang-format` and `--clang-format-style` options from the omni.bind Python script; these are replaced with `--format-module`, which gives the user more flexibility in choosing how, and whether, to format the code.

### Breaking Changes
- Users are required to upgrade to repo_format 0.5.4 or later.

## 108.0
### Changed
- `ILogMessageConsumer_abi::onMessage_abi()` had its prototype changed to add the TID, PID and timestamp, so that this data is available when using asynchronous logging.

## 107.3
### Fixed
- Fixed a rare debug assert in `ITasking`.
- Fixed an issue where a `carb::tasking::Semaphore` could assert on destruction.

### Changed
- omni.structuredlog.lua has replaced the questionable `carb_path` global with `setup_omni_structuredlog()` to specify the Carbonite path.

## 107.2
### Changed
#### IAssets
- The `IAssets` interface has changed to better support `ITasking` and to deprecate `carb::tasking::Counter`.
- The `IAssets::loadAsset()` functions no longer take a `Counter` object. Instead, they take a `carb::tasking::Tracker` helper object similar to the `ITasking::addTask` functions.
- The `IAssets::loadAssetEx` function is deprecated and directs the caller to use the safer `loadAsset` wrapper functions.
- The `LoadAssetFn` and `CreateContextFn` callbacks no longer pass a `Counter`, and expect data to be returned rather than passed through an output argument.

## 107.1
### Fixed
#### Python uses static OpenSSL again
- 107.0 inadvertently used a Python package that had switched to dynamically linking libssl.so and libcrypto.so, so dependent projects would encounter linking issues. 107.1 switches back to a Python package that links these dependencies statically.

## 107.0
### Changed
#### ITasking 2.0
The `ITasking` interface has undergone extensive improvements that are (mostly) backwards-compatible with existing code.
- The `Counter` type is deprecated.
The interface remains for the time being but will now produce deprecation warnings for manual `Counter` manipulation. Instead, `TaskGroup` is a more efficient means of grouping tasks together.
- The `Any` and `All` helper objects for `RequiredObject`, used by `ITasking::addSubTask`, are now nestable.
- Tasks can now be canceled before they have started executing. See `ITasking::tryCancelTask()`.
- The `ITasking::addTaskIn` and `ITasking::addTaskAt` functions now accept multiple tracker objects.
- The `ITasking` functions `yieldUntilCounter` and `waitForTask` have been combined into a generic `wait()` function. The `wait()` function can also use the `Any` and `All` helper objects.
- Unhandled exceptions thrown from a task now cause the task to be treated as canceled.
- A fiber-aware futex system has been exposed to allow for generic waiting.
- `PinGuard` and wrapper classes such as `CounterWrapper` no longer need an `ITasking*` pointer passed to them, nor do they cache the pointer. These classes now rely on the `carb::getCachedInterface` helper function.

#### IAssets
The `IAssets` interface has changed to better support `ITasking` and to deprecate `carb::tasking::Counter`.
- The `IAssets::loadAsset()` functions no longer take a `Counter` object. Instead, they take a `carb::tasking::Tracker` helper object similar to the `ITasking::addTask` functions. This allows specifying multiple variant types of trackers for asset loading (including the now-deprecated `Counter` objects).
- The `IAssets::loadAssetEx()` function is deprecated and directs the caller to use the safer `loadAsset` wrapper functions.
- The `LoadAssetFn` and `CreateContextFn` callbacks no longer pass a `Counter`, and expect data to be returned rather than passed through an output argument. Since these functions are called in a task context as a coroutine, they are free to use any of the fiber-safe waiting methods provided by `ITasking`.
#### HandleDatabase
- `HandleDatabase` has a new function `handleWasValid()` to check whether a handle was previously valid but has since been released.
- Added an `addRef()` function that accepts a valid `TrueType*`.

## 106.7
### Changed
- `carb.crashreporter-breakpad.plugin` was updated to use the Google Breakpad chrome_90 release.

## 106.6
### Changed
- Renamed the `OmniCoreStartArgs::padding` member to `OmniCoreStartArgs::flags` and added flags to control the behavior of some of the built-in object overrides more explicitly, specifically to allow the `ILog` and `IStructuredLog` objects to be disabled without having to create a dummy stub implementation for each. This member was intentionally unused before and did not change size, so the name and usage change should not cause any breakage.

## 106.5
### Changed
- Moved documentation-related docs into the `repo_docs` repo.

## 106.4
### Changed
- `carb.scripting-python.plugin`: `executeScript()` and `executeScriptWithArgs()` now perform just-in-time script compilation when called for the first time. Previously the script was compiled each time the functions were called.

## 106.3
### Fixed
- Fixed a crash that would occur in an app started from Python if `carb.scripting-python.plugin` was initialized but the GIL had been released by the calling thread.

## 106.2
### Fixed
- The `carb.tasking` plugin now discards all pending tasks when released; only the tasks currently running are allowed to finish. Note that this can potentially cause leaks; if it is important to clean up rather than abandon tasks, use a `carb::tasking::Counter` or other means to ensure that all necessary tasks have completed before releasing carb.tasking.plugin.

## 106.1
### Added
- Added `omniGetBuiltInWithoutAcquire()` to carb.dll/libcarb.so, which will become the new entry point for accessing interfaces.
The existing functions `omniGetLogWithoutAcquire()`, `omniGetTypeFactoryWithoutAcquire()`, and `omniGetStructuredLogWithoutAcquire()` are now deprecated and will be removed in a future version.

## 106.0
### Broke
- Removed the `ICaching` interface. The interface had no known usage in the wild.

### Fixed
- Restored `omniGetTelemetryWithoutAcquire`, which was removed between 105.0 and 105.1.

## 105.2
### Changes
#### Improvements to carb.tasking's parallel_for()/applyRange()
The `parallel_for()` and `applyRange()` functions in `ITasking` should see a 5-10% improvement in certain situations.

#### Improvements to LocklessQueue and LocklessStack
The `InputIterator`-style `push()` functions on these containers now accept `InputIterator` types that dereference into both `T*` and `T&` types, to allow for different situations.

### Fixed
- Using `carb.tasking`'s `applyRange()` function should no longer trip the stuck-detection heuristic, fixing the occasional "carb.tasking is likely stuck" messages.
- Fixed an issue that could sometimes cause a crash at shutdown in carb.profiler-cpu and carb.dictionary.

## 105.1
### Fixed
- `.pyd` files that call `OMNI_PYTHON_GLOBALS` can now access `carb::Framework` from within the bindings.

## 105.0
### Removed
- The `mirror` tool dependency has been removed along with its associated tests.
- `carb.windowing-glfw` now only supports X11/GLX on Linux. The Wayland/EGL backend has been removed.

## 104.0
### Broke
#### ITokens 0.1 -> 1.0
- `ITokens::calculateDestinationBufferSize` now accepts a pointer to store an error code (`ResolveResult`) from the calculation. This is an ABI-breaking change.
- `ITokens::calculateDestinationBufferSize` and `ITokens::resolveString` now accept special flags (`ResolveFlags`) that allow modifying the token resolution process. This is an ABI-breaking change.
#### All Interfaces Bumped to at least 1.0
- `IObject` 0.1 -> 1.0
- `IAssert` 0.2 -> 1.0
- `IAssetsBlob` 0.1 -> 1.0
- `IAudioCapture` 0.4 -> 1.0
- `IAudioData` 0.6 -> 1.0
- `IAudioDevice` 0.1 -> 1.0
- `IAudioGroup` 0.1 -> 1.0
- `IAudioPlayback` 0.5 -> 1.0
- `IAudioUtils` 0.3 -> 1.0
- `ICaching` 0.1 -> 1.0
- `IDataSource` 0.3 -> 1.0
- `IDictionary` 0.8 -> 1.0
- `IEcs` 0.1 -> 1.0
- `IEvents` 0.2 -> 1.0
- `IFileSystem` 0.1 -> 1.0
- `IMemoryTracker` 0.1 -> 1.0
- `IProfileMonitor` 0.3 -> 1.0
- `ISettings` 0.6 -> 1.0
- `IFiberEvents` 0.1 -> 1.0
- `IThreadPool` 0.1 -> 1.0
- `IThreadUtil` 0.1 -> 1.0
- `ITypeInfo` 0.1 -> 1.0
- `IGLContext` 0.1 -> 1.0

This change was made to avoid potentially unnecessary breaking changes (due to semantic versioning's major-version-0 rule) in the future.

## 103.1
### Fixed
#### RingBuffer improvements
The `carb/container/RingBuffer` object had a problem with high contention. The RingBuffer now performs much better in a high-contention environment.

#### carb.profiler-cpu improvements
Because it is based on the RingBuffer, `carb.profiler-cpu.plugin` also suffered from debilitating performance problems, which have been resolved with the RingBuffer improvements.

#### Resolved potential Linux hang in carb::thread::futex
The futex system had a rare hang that could occur in certain situations where a `wait()` would not be woken by a `notify()` due to a race condition. This also affected code using `carb::cpp20::atomic<>`, which is based on `carb::thread::futex`.

#### PooledAllocator no longer constrained by "buckets"
The template parameter for the number of "buckets" used by the `PooledAllocator` has been removed. `PooledAllocator` will now happily allocate memory until the underlying allocator runs out of memory.
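The removal of the fixed bucket count means the pool simply grows on demand. As a rough sketch of the general pooled-allocation idea in standard C++ (this is not Carbonite's `PooledAllocator` implementation; the class and member names here are invented for illustration), a free-list pool that keeps growing until the underlying allocator fails might look like:

```cpp
#include <cstddef>
#include <new>
#include <vector>

// Minimal sketch of a growable pooled allocator: freed blocks go onto a
// free list; when the list is empty we take a fresh block from the
// upstream allocator instead of being capped by a fixed bucket count.
class SimplePool {
public:
    explicit SimplePool(std::size_t blockSize)
        : m_blockSize(blockSize < sizeof(void*) ? sizeof(void*) : blockSize) {}

    ~SimplePool() {
        for (void* block : m_blocks)
            ::operator delete(block);
    }

    void* allocate() {
        if (m_free) {                              // reuse a freed block
            void* p = m_free;
            m_free = *static_cast<void**>(p);      // pop the free list
            return p;
        }
        void* p = ::operator new(m_blockSize);     // grow on demand
        m_blocks.push_back(p);
        return p;
    }

    void deallocate(void* p) {
        *static_cast<void**>(p) = m_free;          // push onto the free list
        m_free = p;
    }

private:
    std::size_t m_blockSize;
    void* m_free = nullptr;      // singly linked free list threaded through blocks
    std::vector<void*> m_blocks; // owned allocations, released in the destructor
};
```

The free list is threaded through the freed blocks themselves, which is why the block size is clamped to at least `sizeof(void*)`.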
## 103.0
### Fixed
#### getCachedInterface() now re-acquires when framework/plugin unloaded
If an interface is released (or the entire framework is released), `carb::getCachedInterface()` will now detect this and attempt to re-acquire the interface (or return `nullptr` if it is no longer available). This fixes issues where plugins may often be released and re-acquired. NOTE: Do not use `static` when calling `carb::getCachedInterface()`.
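As a simplified analogy of the behavior described above (standard C++ only; this is not the Carbonite implementation, and the names `getCached`, `resetCache`, and `IFace` are invented for illustration), a cached lookup that can be evicted and then re-acquired on the next call might be structured like this:

```cpp
#include <atomic>

// Simplified analogy of the cached-interface pattern: the first call
// acquires and caches a pointer; resetCache() evicts the cache so the
// next call re-acquires instead of returning a stale pointer.
struct IFace { int value; };

static IFace g_real{42};
static std::atomic<IFace*> g_cache{nullptr};
static bool g_available = true; // stands in for "plugin is still loaded"

IFace* getCached() {
    IFace* p = g_cache.load(std::memory_order_acquire);
    if (p)
        return p;                            // fast path: already cached
    p = g_available ? &g_real : nullptr;     // "acquire" the interface
    if (p)
        g_cache.store(p, std::memory_order_release);
    return p;                                // nullptr if no longer available
}

void resetCache() {
    // Evict the cached pointer; the next getCached() call re-acquires.
    g_cache.store(nullptr, std::memory_order_release);
}
```

This also illustrates the NOTE above: caching the result in a local `static` would bypass the eviction and keep handing out the stale pointer.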
#### IDictionary::getPreferredArrayType
The first element is now used as the baseline type to convert to, instead of trying to convert everything to `eBool` by default. Previously, an array of [0.0, 0.0, 0.0] would erroneously convert to [False, False, False].

### Fixed
#### IAssets delay reloading fix
Fixes two issues introduced with 102.11:
- `IAssets::yieldForAssetLoaded` could spuriously wait if an asset reload was in progress.
- Unregistering an AssetType would wait for pending assets to reload, even if the reload delay was very long. Asset reloads are now abandoned immediately upon unregistering an AssetType.

#### ITasking crash fix
Fixes a rare crash with the use of `carb::tasking::Any` and `carb::tasking::All` introduced in 102.10.

## 102.12
### Fixed
#### ITasking fix for "carb.tasking is likely stuck" during applyRange()
During long-running parallel tasks in `applyRange()`, the carb.tasking stuck-checking feature could determine that the system was stuck even when it was not. `applyRange()` now contributes to the heuristic, which should reduce false-positive stuck warnings.
## 102.11
### Changes
#### IAssets version 1.0
`IAssets` has been promoted to version 1.0 with a few additional changes. `AssetTypeParams` has been created as a parameter for `registerAssetType()` to contain all of the configuration information for Asset Types; this struct should start as `AssetTypeParams::getDefault()`, after which any parameter changes can be made. The `AssetTypeParams` struct has a new `reloadDelayMs` option (default: 100 ms). For Asset Types that automatically reload, this waits for `reloadDelayMs` to elapse from the last datasource change notification, in an effort to prevent multiple reloads for multiple changes in rapid succession, as can occur while an asset is being written.

## 102.10
### Changes
#### ITasking::addSubTask() and ITasking::addThrottledSubTask() support lists of required Counter objects
In some cases it is desirable to wait until any or all of a group of Counter objects become complete before starting a sub-task. To that end, `addSubTask()` and `addThrottledSubTask()` now accept an `initializer_list` of Counters, passed via the helper objects `carb::tasking::All` or `carb::tasking::Any`. With `carb::tasking::All`, the sub-task waits until all of the passed Counters (e.g. `counter1` and `counter2`) are complete before starting; `carb::tasking::Any` is specified similarly but waits until at least one of the passed Counters is complete.

#### ITasking::addTask() and variants support lists of notify Counter objects
In some cases it is desirable for multiple Counter objects to be incremented before a task starts and decremented when the task completes. Previously, only one Counter could be passed to the `addTask()` variant functions, and the other Counter objects would need to be incremented and decremented manually.
Now, `addTask()` variants (except for `addTaskAt()` and `addTaskIn()`) allow passing an `initializer_list` of Counters. Each non-nullptr Counter passed is incremented before the `addTask()` variant function returns and decremented upon task completion.

## 102.9
### Added
#### New feature: CARB_STRONGTYPE
`CARB_STRONGTYPE` now exists in *carb/Strong.h* to declare a strong type. The `using` and `typedef` keywords do not actually create a new type; they just create aliases for an existing type. `CARB_STRONGTYPE` declares a structure that acts like the referenced type but is strongly typed and requires explicit assignment.

#### New feature: carb::delegate::Delegate
`Delegate` implements a `std::function`-like loose-coupling system, but allows multiple bindings and thread-safe access. Additionally, it attempts to be as safe as possible, allowing bound callbacks to safely unbind themselves (or other callbacks) while a callback is executing. See *carb/delegate/Delegate.h* for more information.

## 102.8
### Fixed
#### Performance improved in ITasking::applyRange()
A degenerate case was identified that could cause `ITasking::applyRange()` to perform poorly under very short workloads. This has been corrected. In some cases, small workloads may still perform worse than a straight `for()` loop due to the overhead of dispatching tasks to other threads and the subsequent cache effects. Performance testing of `applyRange()` call sites is advised.
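To see where that dispatch overhead comes from, here is a minimal stand-in for an applyRange-style helper in standard C++. The function name and signature are invented for illustration; the real `ITasking::applyRange()` dispatches onto the task system rather than raw threads.

```cpp
#include <cstddef>
#include <functional>
#include <thread>
#include <vector>

// Split the index range [0, count) across worker threads and invoke
// fn(index) for each element. Creating and joining the threads is the
// fixed dispatch cost that can dominate very small workloads.
void applyRangeSketch(std::size_t count,
                      const std::function<void(std::size_t)>& fn,
                      unsigned workers = std::thread::hardware_concurrency()) {
    if (workers == 0)
        workers = 1;
    std::vector<std::thread> pool;
    std::size_t chunk = (count + workers - 1) / workers; // ceil division
    for (unsigned w = 0; w < workers; ++w) {
        std::size_t begin = w * chunk;
        std::size_t end = begin + chunk < count ? begin + chunk : count;
        if (begin >= end)
            break; // more workers than chunks; remaining threads not needed
        pool.emplace_back([begin, end, &fn] {
            for (std::size_t i = begin; i < end; ++i)
                fn(i);
        });
    }
    for (auto& t : pool)
        t.join();
}
```

Because the thread setup is a fixed cost regardless of `count`, a plain `for()` loop can win for tiny ranges, which matches the advice to performance-test each call site.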
#### Added

##### ITasking::parallelFor() implementation

`ITasking::parallelFor()` now exists as a wrapper for `ITasking::applyRange()`. These new functions allow specifying the classic parameters to `for` loops: begin, end, and optional step values.

### 102.7

#### Changes

##### ITypeFactory now calls onModuleCanUnload and onModuleUnload

Previously, `~ITypeFactory` would simply call `dlclose` / `FreeLibrary` to shut down each loaded plugin. This patch now attempts to call each plugin's `onModuleCanUnload`, and if it returns `true`, `onModuleUnload` is called followed by `dlclose` / `FreeLibrary`. If a plugin's `onModuleCanUnload` returns `false`, it may be called again after other plugins are unloaded. If `onModuleUnload` cannot be safely called, either because `onModuleCanUnload` returns `false` or it is not defined, `~ITypeFactory` will call `dlclose` / `FreeLibrary` on the loaded plugin. This is new behavior and may expose bugs in previously existing `onModuleCanUnload` / `onModuleUnload` implementations.

### 102.6

#### Changes

##### Debugging mode for waiting tasks

`ITasking` now supports a new boolean setting: **/plugins/carb.tasking.plugin/debugWaitingTasks** (default=false). This setting parks all waiting tasks in a dedicated thread so that they will show up in debuggers. For Visual Studio, the Parallel Stacks and Threads windows will now have threads named "carb.task wait" and `__WAITING_TASK__()` will be in the call stack. For GDB, the threads will appear with the name "carb.task wait" and will show up in the "info threads" list. Note that this debugging feature can create hundreds of transient threads and may negatively affect performance, but can be useful for debugging hangs.

##### Debugging mode for Task and Counter creation backtrace

`ITasking` now supports a new boolean setting: **/plugins/carb.tasking.plugin/debugTaskBacktrace** (default=false).
This setting will capture a backtrace when `ITasking::addTask()` (or variants) is called. This backtrace is available to view in the debugger as a local variable `DebugTaskCreationCallstack` in the `carb::tasking::TaskBundle::execute` function whenever a task is being executed. Similarly, a backtrace will be captured for Counter objects created with `ITasking::createCounter` and `ITasking::createCounterWithTarget`. This backtrace is available as a local variable in the `Counter::wait` function called `DebugCounterCreationCallstack`. At `carb.tasking.plugin` shutdown time, the creation backtrace of any leaked Counter objects is logged as a warning.

### 102.5

#### Fixed

- Fixed a crash that could occur sometimes in **carb.tasking.plugin**.

### 102.4

#### Fixed

- **Carbonite SDK + Plugins package no longer has licensing errors.**

### 102.3

#### Changes

- **ITasking will attempt to detect a "stuck" condition and get "unstuck"**
  - It is possible to get `ITasking` into a situation where it is stuck: a task uniquely locks a resource and then waits in a fiber-safe way, while other tasks attempt to lock the same resource but don't wait in a fiber-safe way, until all task threads are blocked waiting on the resource. At this point `ITasking` is stuck, because when the task that holds the lock becomes ready to run, no threads are available to run it. The new emergency-unstick system will detect this and start emergency threads to process ready tasks, attempting to un-stick the system.
  - The stuck-check time can be configured using the `ISettings` key `/plugins/carb.tasking.plugin/stuckCheckSeconds` (default: 1; set to 0 to disable).

### 102.2

#### Fixed

- **IAssets::unregisterAssetType() now waits until all assets are actually unloaded**
  - In prior versions, performing several `IAssets::unloadAsset`, `IAssets::unloadAssets` or `IAssets::releaseSnapshot` calls could start unloading data in the background.
A call to `IAssets::unregisterAssetType` after this point could potentially destroy the asset type, but would not wait for the background unload tasks to complete. This could cause issues at shutdown, where other systems would shut down expecting that, after unregistering the type, their assets would no longer be used. This fix causes `IAssets::unregisterAssetType` to wait for any pending loads/unloads to complete before returning.

- **Fixes an issue where IAssets could stop updating an asset**
  - If an asset was loaded twice and one of the assets was unloaded, the system would ignore changes to the underlying data and no longer update the still-loaded asset. This has been resolved.

### 102.1

#### Fixed

- **Task priority changes to ITasking**
  - `ITasking` now correctly respects task priority when resuming tasks that have slept, waited or suspended.
- **ITasking compile fixes**
  - `ITasking::addTask()` and variants would fail to compile in situations where they were capturing additional parameters that would be passed to the `Callable` parameter. These compile issues have been fixed.

### 102.0

#### Broke

- **Normalized Mouse Coordinates in IInput**
  - The `IInput` interface version has been bumped from 0.5 to 1.0. This is to support mouse coordinates being returned in either normalized or pixel coordinates.
  - `IInput::getMouseCoords` was renamed to `IInput::getMouseCoordsPixel`.
  - Added `getMouseCoordsNormalized`.

##### Changes in MouseEvent Methods

- **JavaScript**: `MouseEvent.get_mouse_coords` was renamed to `MouseEvent.get_mouse_coords_pixel`.
- **Python**: `MouseEvent.get_mouse_coords` was renamed to `MouseEvent.get_mouse_coords_pixel`.
- **Python**: Added `MouseEvent.get_mouse_coords_normalized`.

##### carb.imaging.plugin.dll Has Moved to the rendering Repo

`IImaging` has moved to the rendering repo.

- Kit is incompatible with older Carbonite builds that still have `carb.imaging.plugin.dll`.
- `carb.imaging.plugin.dll` users outside kit/rendering must: - Pull `rtx_plugins` instead of Carbonite. - Add a header include path from `rtx_plugins/include`. - Add `plugins/carb_gfx` to your search library path. - kit/rendering users can no longer rely on `carb.imaging.plugin.dll` being built ahead of time and must make it a build prerequisite by including it in their `dependson`. Transition discussion can be found here.
changetracking.md
# Change Tracking in USDRT

USDRT offers an option for tracking changes to Fabric scene data with the RtChangeTracker class, which is available for users in both C++ and Python. Like the rest of the USDRT Scenegraph API, RtChangeTracker aims to leverage the performance gains of Fabric while remaining accessible for USD-centric developers.

## Background

### Change Tracking in USD

USD provides native support for change tracking through a notification subsystem. It works by notifying interested objects when an event has occurred somewhere in the application. Notifications are defined by users by extending the `TfNotice` base class. The user can populate these notifications with relevant information about the triggering event. Listener classes register interest in specific notice classes. When a notice-triggering function is invoked, the subsystem delivers a notice to all interested listener objects.

Because the subsystem synchronously invokes each listener from the sender's thread, performance is highly dependent on the listeners. The original thread is blocked until all listeners resolve. Avoiding this potential block is one of the main goals of change tracking in USDRT. More details about the TfNotice subsystem can be found in Pixar's USD documentation.

### Change Tracking in Fabric

Fabric provides a non-blocking subsystem to track changes to prims and attributes in Fabric buckets. Instead of using notifications, Fabric changes are tracked by ID and polled at whatever frequency the developer requires. Multiple listeners can be registered to a stage, and tracking is enabled by attribute name per listener. Fabric provides methods to manage which attributes are currently being tracked, as well as methods to query and clear changes on a listener.

Below is a usage example that enables change tracking on a listener for the position attribute, and then queries and iterates through the changes after they occur.
**C++**

```cpp
ListenerId listener = iSimStageWithHistory->createListener();
stage.attributeEnableChangeTracking(positionToken, listener);

// Changes are made to position attributes on the stage

ChangedPrimBucketList changedBuckets = stage.getChanges(listener);
for (size_t i = 0; i != changedBuckets.size(); i++)
{
    BucketChanges changes = changedBuckets.getChanges(i);
    gsl::span<const Path> paths = changes.pathArray;
    for (size_t j = 0; j != changes.attrChangedIndices.size(); j++)
    {
        // ...process each changed-attribute entry here
    }
}
```

Change tracking in Fabric is fast and non-blocking. However, the underlying struct (BucketChanges) that changes are stored and returned in relies on some understanding of Fabric's underlying bucket-based structure. For developers who are used to working with the standard USD API, this may not be as intuitive.
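The key difference from TfNotice is the pull model: nothing runs on the writer's thread, and consumers poll accumulated changes whenever convenient. A minimal pure-Python sketch of that poll-based pattern (illustrative only — this is not the Fabric or USDRT API):

```python
class PollingTracker:
    """Accumulates change records; consumers poll and clear at their own rate."""

    def __init__(self):
        self._tracked = set()
        self._changes = []  # (path, attr) records since the last clear

    def track_attribute(self, attr):
        self._tracked.add(attr)

    def record_write(self, path, attr):
        # Called by the writer; just appends a record -- no listener code
        # runs here, so the writer is never blocked by consumers.
        if attr in self._tracked:
            self._changes.append((path, attr))

    def has_changes(self):
        return bool(self._changes)

    def get_changes(self):
        return list(self._changes)

    def clear_changes(self):
        self._changes.clear()

tracker = PollingTracker()
tracker.track_attribute("color")
tracker.record_write("/DistantLight", "color")      # tracked -> recorded
tracker.record_write("/DistantLight", "intensity")  # untracked -> ignored
```

The consumer decides when to call `get_changes()`/`clear_changes()`, which is the non-blocking property the section above describes.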
The RtChangeTracker class closely mirrors Fabric's functionality for managing the tracked attributes:

- `TrackAttribute(TfToken attrName)`
- `StopTrackingAttribute(TfToken attrName)`
- `PauseTracking()`
- `ResumeTracking()`
- `IsChangeTrackingPaused()`
- `IsTrackingAttribute(TfToken attrName)`
- `GetTrackedAttributes()`

When a change occurs on a tracked attribute, the change gets added to a persistent stack of changes. These changes accumulate over time in the change tracker. A user can query for the presence of any changes as needed, and clear any that have accumulated to "reset" the stack:

- `HasChanges()`
- `ClearChanges()`

The RtChangeTracker class adds functionality atop Fabric's change tracking that lets users query specific information from the stack of changes by prim and attribute:

- `GetAllChangedPrims()`
- `GetAllChangedAttributes()`
- `PrimChanged(UsdPrim prim)` & `PrimChanged(SdfPath primPath)`
- `AttributeChanged(UsdAttribute attr)` & `AttributeChanged(SdfPath primPath)`

Currently, RtChangeTracker supports tracking changes to attribute values in a scene. Future work includes adding support for tracking the creation and deletion of prims, which is currently supported in Fabric.

### Usage Examples

**C++**

```cpp
#include <usdrt/scenegraph/usd/rt/changeTracker.h>

using namespace usdrt;

UsdStageRefPtr stage = UsdStage::Open("./data/usd/tests/cornell.usda");
UsdPrim prim = stage->GetPrimAtPath(SdfPath("/DistantLight"));

RtChangeTracker tracker(stage);
tracker.TrackAttribute("color");

UsdAttribute color = prim.GetAttribute("color");
```
class-list_Overview.md
# Overview

This extension wraps RTX Raycast Query to provide a simpler raycast interface into the stage.

## Class List

- **IRaycastQuery**: Interface class for Raycast Query operations. This class represents the interface for performing Raycast Query operations in the current scene. It is thread-safe and can be called from any thread. There are two sets of functions to do raycast queries:
  - The combination of `add_raycast_sequence` / `remove_raycast_sequence`, `submit_ray_to_raycast_sequence`, and `get_latest_result_from_raycast_sequence` as a way to cast multiple rays into the scene and get results by polling.
  - `submit_raycast_query` as a single call that casts one Ray into the scene and returns the hit result via callback.
- **Result**: An enumeration class called "Result" which represents the possible error codes for a raycast query system.
- **RayQueryResult**: Alias of the struct rtx::raytracing::RaycastQueryResult; represents the result of a raycast query, including whether the query is valid, the hit position, the normal, and so on.
- **Ray**: Alias of the struct rtx::raytracing::RaycastQueryRay; defines a ray that is used as input for a raycast query.

## Example Usage

```python
import omni.kit.raycast.query
import omni.usd
from pxr import Gf, UsdGeom

# get raycast interface
raycast = omni.kit.raycast.query.acquire_raycast_query_interface()

# set up a cube for test
stage = omni.usd.get_context().get_stage()
CUBE_PATH = "/Cube"
cube = UsdGeom.Cube.Define(stage, CUBE_PATH)
UsdGeom.XformCommonAPI(cube.GetPrim()).SetTranslate(Gf.Vec3d(123.45, 0, 0))

# generate a ray
ray = omni.kit.raycast.query.Ray((1000, 0, 0), (-1, 0, 0))

def callback(ray, result):
    if result.valid:
        # Got the raycast result in the callback
        print(Gf.Vec3d(*result.hit_position))
        print(result.hit_t)
        print(Gf.Vec3d(*result.normal))
        print(result.get_target_usd_path())

raycast.submit_raycast_query(ray, callback)
```
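For intuition about what the query above returns: the default `UsdGeom.Cube` has size 2 (extent -1 to 1 per axis), so translated to x=123.45 it spans x in [122.45, 124.45], and the ray from (1000, 0, 0) along (-1, 0, 0) first hits x = 124.45. That hit can be reproduced with plain ray/box math; the standalone slab-method sketch below is illustrative only (the extension performs the actual query on the RTX side):

```python
def ray_aabb_hit(origin, direction, box_min, box_max):
    """Slab-method ray/AABB intersection; returns (hit_t, hit_position) or None."""
    t_near, t_far = float("-inf"), float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if d == 0.0:
            if o < lo or o > hi:
                return None  # parallel to this slab and outside it
        else:
            t0, t1 = (lo - o) / d, (hi - o) / d
            t_near = max(t_near, min(t0, t1))
            t_far = min(t_far, max(t0, t1))
    if t_near > t_far or t_far < 0.0:
        return None  # slabs do not overlap, or the box is behind the ray
    t = t_near if t_near >= 0.0 else t_far
    return t, tuple(o + t * d for o, d in zip(origin, direction))

# The cube from the example: extent -1..1 per axis, translated to x = 123.45.
hit = ray_aabb_hit((1000, 0, 0), (-1, 0, 0), (122.45, -1, -1), (124.45, 1, 1))
```

With these inputs the sketch reports a hit at t ≈ 875.55 and position ≈ (124.45, 0, 0), matching what `result.hit_position` / `result.hit_t` would describe for this scene.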
classes.md
# Classes

- **omni::avreality::rain::IPuddleBaker**: Bakes puddles into dynamic textures.
- **omni::avreality::rain::IWetnessController**: Controller for scene level wetness parameters.
- **omni::avreality::rain::PuddleBaker**: Bakes puddles into dynamic textures.
- **omni::avreality::rain::WetnessController**: Controller for scene level wetness parameters.
classomni_1_1avreality_1_1rain_1_1_puddle_baker.md
# omni::avreality::rain::PuddleBaker

Defined in [omni/avreality/rain/PuddleBaker.h](#puddle-baker-8h)

## Functions

- [PuddleBaker()=delete](#classomni_1_1avreality_1_1rain_1_1_puddle_baker_1a5471be17f6dcaa1d8b80fd9e419cf5cb): Creates a new PuddleBaker.
- [assignShadersAccumulationMapTextureNames(gsl::span< std::tuple< const pxr::SdfPath, const std::string > > shaderPathsAndTextureNames, omni::usd::UsdContext *usdContext=nullptr)](#classomni_1_1avreality_1_1rain_1_1_puddle_baker_1a6d928e2c37ef74e4bd5feeec453432cb)
- [bake(std::string textureName, carb::Uint2 textureDims, carb::Float2 regionMin, carb::Float2 regionMax, gsl::span< const carb::Float2 > puddlesPositions, gsl::span< const float > puddlesRadii, gsl::span< const float > puddlesDepths)](#classomni_1_1avreality_1_1rain_1_1_puddle_baker_1a9b667a91c8f838194bb16760530c6fd6)
- [bake(std::string textureName, carb::Float2 regionMin, carb::Float2 regionMax, gsl::span< carb::Float2 > puddlesPositions, gsl::span< float > puddlesRadii, gsl::span< float > puddlesDepths)](#classomni_1_1avreality_1_1rain_1_1_puddle_baker_1ad2f231448a08e002e2b83a7c34981a38)

## Public Functions

- PuddleBaker()=delete: Creates a new PuddleBaker.
- assignShadersAccumulationMapTextureNames(gsl::span< std::tuple< const pxr::SdfPath, const std::string > > shaderPathsAndTextureNames, omni::usd::UsdContext *usdContext=nullptr)
- bake(std::string textureName, carb::Uint2 textureDims, carb::Float2 regionMin, carb::Float2 regionMax, gsl::span< const carb::Float2 > puddlesPositions, gsl::span< const float > puddlesRadii, gsl::span< const float > puddlesDepths)
- bake(std::string textureName, carb::Float2 regionMin, carb::Float2 regionMax, gsl::span< carb::Float2 > puddlesPositions, gsl::span< float > puddlesRadii, gsl::span< float > puddlesDepths)
### Public Static Functions

```cpp
static inline void bake(
    std::string textureName,
    carb::Uint2 textureDims,
    carb::Float2 regionMin,
    carb::Float2 regionMax,
    gsl::span<const carb::Float2> puddlesPositions,
    gsl::span<const float> puddlesRadii,
    gsl::span<const float> puddlesDepths
)
```

Bake puddles to the dynamic texture `textureName`.

> **Warning**
> The sizes of puddlesPositions, puddlesRadii and puddlesDepths must be the same.

#### Parameters

- **textureName** – The name of the dynamic texture to bake into.
- **textureDims** – The dimensions of the generated texture.
- **regionMin** – The 2D world space minimum of the region the texture maps onto.
- **regionMax** – The 2D world space maximum of the region the texture maps onto.
- **puddlesPositions** – The 2D positions of the puddles.
- **puddlesRadii** – The radii of the puddles.
- **puddlesDepths** – The depths of the puddles.

```cpp
static inline void bake(
    std::string textureName,
    carb::Float2 regionMin,
    carb::Float2 regionMax,
    gsl::span<carb::Float2> puddlesPositions,
    gsl::span<float> puddlesRadii,
    gsl::span<float> puddlesDepths
)
```

Bake puddles to the dynamic texture `textureName`.

> **Note**
> The generated texture size defaults to 1024x1024.

> **Warning**
> The sizes of puddlesPositions, puddlesRadii and puddlesDepths must be the same.

#### Parameters

- **textureName** – The name of the dynamic texture to bake into.
- **regionMin** – The 2D world space minimum of the region the texture maps onto.
- **regionMax** – The 2D world space maximum of the region the texture maps onto.
- **puddlesPositions** – The 2D positions of the puddles.
- **puddlesRadii** – The radii of the puddles.
- **puddlesDepths** – The depths of the puddles.

### assignShadersAccumulationMapTextureNames

#### Parameters

- **usdContext** – The Usd context holding the prims to update.
- **shaderPathsAndTextureNames** – The shaders to assign the dynamic textures to, provided as a list of tuples of shader path and associated texture name.
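As a mental model of what a bake like this computes, each texel is mapped into the [regionMin, regionMax] world rectangle and accumulates depth from any puddle whose radius covers it. The pure-Python sketch below is a hypothetical illustration of that idea — it is not the library's actual algorithm, and the falloff shape is an assumption; the names mirror the parameters above:

```python
def bake_puddles(tex_dims, region_min, region_max, positions, radii, depths):
    """Return a tex_dims[1] x tex_dims[0] grid of accumulated puddle depth."""
    w, h = tex_dims
    grid = [[0.0] * w for _ in range(h)]
    for ty in range(h):
        for tx in range(w):
            # Map the texel center into the world-space region.
            x = region_min[0] + (tx + 0.5) / w * (region_max[0] - region_min[0])
            y = region_min[1] + (ty + 0.5) / h * (region_max[1] - region_min[1])
            for (px, py), r, d in zip(positions, radii, depths):
                dist2 = (x - px) ** 2 + (y - py) ** 2
                if dist2 <= r * r:
                    # Assumed smooth falloff toward the puddle's rim.
                    grid[ty][tx] += d * (1.0 - dist2 / (r * r))
    return grid

# One puddle of radius 2 and depth 1 centered in a 10m x 10m region.
tex = bake_puddles((16, 16), (-5.0, -5.0), (5.0, 5.0), [(0.0, 0.0)], [2.0], [1.0])
```

Texels near the region center accumulate depth close to 1, while texels outside the puddle radius stay at 0 — the same relationship the `puddlesPositions` / `puddlesRadii` / `puddlesDepths` parameters express.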
classomni_1_1avreality_1_1rain_1_1_wetness_controller.md
# omni::avreality::rain::WetnessController

Defined in [omni/avreality/rain/WetnessController.h](#)

## Functions

- **WetnessController(omni::usd::UsdContext *usdContext=nullptr)**: Create a new scene wetness controller.
- **applyGlobalPorosity(float porosity)**: Apply porosity value `porosity` to all SimPBR primitives supporting porosity.
- **applyGlobalPorosityScale(float porosityScale)**: Apply porosityScale value `porosityScale` to all SimPBR primitives supporting porosity.
- **applyGlobalWaterAccumulation(float waterAccumulation)**: Apply water accumulation value `waterAccumulation` to all SimPBR primitives supporting accumulation.
- **applyGlobalWaterAccumulationScale(float accumulationScale)**: Apply accumulationScale value `accumulationScale` to all SimPBR primitives supporting accumulation.
- **applyGlobalWaterAlbedo(carb::ColorRgb waterAlbedo)**: Apply albedo value `waterAlbedo` to all SimPBR primitives supporting accumulation.
- **applyGlobalWaterTransparency(float waterTransparency)**: Apply water transparency value `waterTransparency` to all SimPBR primitives supporting accumulation.
- **applyGlobalWetness(float wetness)**: Apply wetness value `wetness` to all SimPBR primitives supporting wetness.
- **applyGlobalWetnessState(bool state)**: Set wetness state to `state` to all SimPBR primitives supporting wetness.
- **loadShadersParameters(omni::usd::UsdContext *usdContext=nullptr)**: Load shader parameters default values for USD shaders when those values are not authored.
- **~WetnessController()**: Destroys this WetnessController.

### omni::avreality::rain::WetnessController

Controller for scene level wetness parameters.

#### Public Functions

- WetnessController(omni::usd::UsdContext *usdContext=nullptr): Create a new scene wetness controller.
- ~WetnessController(): Destroys this WetnessController.
- void applyGlobalWetnessState(bool state): Set wetness state to `state` to all SimPBR primitives supporting wetness.
### omni::avreality::rain::WetnessController::applyGlobalWetness(float wetness)

- **Description**: Apply wetness value `wetness` to all SimPBR primitives supporting wetness.

### omni::avreality::rain::WetnessController::applyGlobalPorosity(float porosity)

- **Description**: Apply porosity value `porosity` to all SimPBR primitives supporting porosity.

### omni::avreality::rain::WetnessController::applyGlobalPorosityScale(float porosityScale)

- **Description**: Apply porosityScale value `porosityScale` to all SimPBR primitives supporting porosity.

### omni::avreality::rain::WetnessController::applyGlobalWaterAlbedo(carb::ColorRgb waterAlbedo)

- **Description**: Apply albedo value `waterAlbedo` to all SimPBR primitives supporting accumulation.

### Public Static Functions

#### loadShadersParameters(omni::usd::UsdContext *usdContext = nullptr)

- **Description**: Load shader parameters default values for USD shaders when those values are not authored.

> **Warning**
> Unstable API, subject to change.

#### applyGlobalWaterTransparency(float waterTransparency)

- **Description**: Apply water transparency value `waterTransparency` to all SimPBR primitives supporting accumulation.

#### applyGlobalWaterAccumulation(float waterAccumulation)

- **Description**: Apply water accumulation value `waterAccumulation` to all SimPBR primitives supporting accumulation.

#### applyGlobalWaterAccumulationScale(float accumulationScale)

- **Description**: Apply accumulationScale value `accumulationScale` to all SimPBR primitives supporting accumulation.
class_structomni_1_1avreality_1_1rain_1_1_i_puddle_baker.md
# omni::avreality::rain::IPuddleBaker

Defined in omni/avreality/rain/IPuddleBaker.h

## Variables

- **assignShadersAccumulationMapTextureNames**: Assign dynamic textures to the water accumulation entry in the provided shader list.
- **bake**: Bake puddles to a dynamic texture.

## Class Definition

```cpp
class IPuddleBaker
```

Bakes puddles into dynamic textures.

> **Warning**
> Low level ABI compatible interface. Common usage will be through the `PuddleBaker` class.

### Public Members

- **bake**

```cpp
void bake(
    const char* textureName,
    carb::Uint2 textureDims,
    carb::Float2 regionMin,
    carb::Float2 regionMax,
    std::size_t puddleCount,
    const carb::Float2* puddlesPositions,
    const float* puddlesRadii,
    const float* puddlesDepths
)
```

Bake puddles to the dynamic texture `textureName`.

> **Warning**
> The sizes of puddlesPositions, puddlesRadii and puddlesDepths must match the size given as `puddleCount`.

- **Param textureName**: The name of the dynamic texture to bake into.
- **Param textureDims**: The dimensions of the generated texture.
- **Param regionMin**: The 2D world space minimum of the region the texture maps onto.
- **Param regionMax**: The 2D world space maximum of the region the texture maps onto.
- **Param puddleCount**: The number of puddles to bake.
- **Param puddlesPositions**: A pointer to the 2D positions of the puddles.
- **Param puddlesRadii**: A pointer to the radii of the puddles.
- **Param puddlesDepths**: A pointer to the depths of the puddles.

- **assignShadersAccumulationMapTextureNames**

  `assignShadersAccumulationMapTextureNames(gsl::span< const std::tuple< const char *, const char * > > shaderPathsAndTextureNames, const char * usdContextName)`

  Assign dynamic textures to the water accumulation entry in the provided shader list.
class_structomni_1_1avreality_1_1rain_1_1_i_wetness_controller.md
# omni::avreality::rain::IWetnessController

Defined in [omni/avreality/rain/IWetnessController.h](#i-wetness-controller-8h)

## Variables

- **applyGlobalPorosity**: Apply porosity value `porosity` to all SimPBR primitives supporting wetness.
- **applyGlobalPorosityScale**: Apply porosity scale value `porosityScale` to all SimPBR primitives supporting wetness.
- **applyGlobalWaterAccumulation**: Apply water accumulation value `accumulation` to all SimPBR primitives supporting wetness.
- **applyGlobalWaterAccumulationScale**: Apply water accumulationScale value `waterAccumulationScale` to all SimPBR primitives supporting wetness.
- **applyGlobalWaterAlbedo**: Apply water albedo value `waterAlbedo` to all SimPBR primitives supporting wetness.
- **applyGlobalWaterTransparency**: Apply water transparency `waterTransparency` to all SimPBR primitives supporting wetness.
- **applyGlobalWetness**: Apply wetness value `wetness` to all SimPBR primitives supporting wetness.
- **applyGlobalWetnessState**: Apply wetness state to `state` to all SimPBR primitives supporting wetness.
- **loadShadersParameters**: Load shader parameters default values to the session layer.

## IWetnessController

Controller for scene level wetness parameters.

> **Warning**
> Low level ABI compatible interface. Common usage will be through the WetnessController class.

### Public Members

#### applyGlobalWetnessState

Apply wetness state to `state` to all SimPBR primitives supporting wetness.

#### applyGlobalWetness

Apply wetness value `wetness` to all SimPBR primitives supporting wetness.

#### applyGlobalPorosity

```cpp
void applyGlobalPorosity(float porosity, const char *usdContextName);
```

Apply porosity value `porosity` to all SimPBR primitives supporting porosity.
#### applyGlobalPorosityScale

```cpp
void applyGlobalPorosityScale(float porosityScale, const char *usdContextName);
```

Apply porosity scale value `porosityScale` to all SimPBR primitives supporting wetness.

#### applyGlobalWaterAlbedo

```cpp
void applyGlobalWaterAlbedo(carb::ColorRgb waterAlbedo, const char *usdContextName);
```

Apply water albedo value `waterAlbedo` to all SimPBR primitives supporting wetness.

#### applyGlobalWaterTransparency

```cpp
void applyGlobalWaterTransparency(float waterTransparency, const char *usdContextName);
```

Apply water transparency value `waterTransparency` to all SimPBR primitives supporting wetness.

#### applyGlobalWaterAccumulation

Apply water accumulation value `accumulation` to all SimPBR primitives supporting wetness.

#### applyGlobalWaterAccumulationScale

Apply water accumulationScale value `waterAccumulationScale` to all SimPBR primitives supporting wetness.

#### loadShadersParameters

`loadShadersParameters(const char *usdContextName)`

Load shader parameters default values to the session layer.
cleaning-up_ext_serialization.md
# Serialization (NvBlastExtSerialization)

## Introduction

This extension defines the Nv::Blast::ExtSerialization class, a modular serialization manager which can be extended to handle data types from different Blast modules (such as low-level and Tk). An ExtSerialization manager is created using the global function NvBlastExtSerializationCreate:

```cpp
ExtSerialization* ser = NvBlastExtSerializationCreate();
```

ExtSerialization is capable of loading sets of serializers for different data types and encodings. The NvBlastExtSerialization extension comes with a set of low-level data serializers, with types enumerated in the header **NvBlastExtLlSerialization.h**. **The low-level serializers are automatically loaded into an ExtSerialization when it is created.** To load serializers for ExtTk assets, you must also load the extension [BlastTk Serialization (NvBlastExtTkSerialization)](ext_tkserialization.html). See the documentation for that module.

Each serializer is capable of reading (and writing, if it is not read-only) a single data type in a single encoding (format). Some serializers are read-only, in order to read legacy formats. The encodings available are enumerated in ExtSerialization::EncodingID. They are currently:

- CapnProtoBinary - Uses Cap'n Proto's binary serialization format
- Raw - For low-level NvBlastAsset and NvBlastFamily types, this is simply a memory copy.

## Serialization (writing)

To serialize an object, the serialization manager's write encoding ID must be set to the desired value. By default it is set to EncodingID::CapnProtoBinary, as this is the only encoding which supports writing for all object types (at the present time). When other encodings become available, use ExtSerialization::setSerializationEncoding to set the write encoding to the desired type.

Each serialization module defines the object types it can serialize.
ExtSerialization defines the low-level types in **NvBlastExtLlSerialization.h**:

- LlObjectTypeID::Asset - An NvBlastAsset
- LlObjectTypeID::Family - An NvBlastFamily

To serialize an object, for example an NvBlastAsset, use ExtSerialization::serializeIntoBuffer as follows:

```cpp
const NvBlastAsset* asset = ... // Given pointer to an NvBlastAsset
void* buffer;
uint64_t size = ser->serializeIntoBuffer(buffer, asset, LlObjectTypeID::Asset);
```

If successful, the data is written into a buffer allocated using the NvBlastGlobals allocator, written to the "buffer" parameter, and the size of the buffer written is the return value of the function. If the function returns 0, then serialization was unsuccessful.

Notice that the second function parameter is actually a void*, so it requires the last parameter to tell it what object it is serializing. A utility wrapper function is given in **NvBlastExtLlSerialization.h** which performs the same operation with an NvBlastAsset, so one could equivalently use

```cpp
void* buffer;
uint64_t size = NvBlastExtSerializationSerializeAssetIntoBuffer(buffer, *ser, asset);
```

A corresponding function also exists for NvBlastFamily, as well as other data types supported by other serialization extensions.

This buffer may be written to disk, memory, networked, etc. Since the memory for the buffer is allocated using the allocator in NvBlastGlobals, it may be freed using the same allocator:

```cpp
NVBLAST_FREE(buffer);
```

## Using a Buffer Provider

If you wish to provide the serialization buffer by some means other than the NvBlastGlobals allocator, you may set a "buffer provider" in the serialization manager. A buffer provider is simply a callback that requests a buffer from the user of the necessary size. The user implements the interface ExtSerialization::BufferProvider, and passes a pointer to an instance of one to the serialization manager using ExtSerialization::setBufferProvider.
For example:

```cpp
std::vector<char> growableBuffer;

class MyBufferProvider : public Nv::Blast::ExtSerialization::BufferProvider
{
public:
    MyBufferProvider(std::vector<char>& growableBuffer) : m_growableBuffer(growableBuffer) {}

    virtual void* requestBuffer(size_t size) override
    {
        if (m_growableBuffer.size() < size)
        {
            m_growableBuffer.resize(size);
        }
        return m_growableBuffer.data();
    }

private:
    std::vector<char>& m_growableBuffer;
} myBufferProvider(growableBuffer);

ser->setBufferProvider(&myBufferProvider);
```

Passing NULL to setBufferProvider returns the serialization manager to its default behavior of using the NvBlastGlobals allocator.

## Deserialization (reading)

To deserialize an object, use the ExtSerialization::deserializeFromBuffer method. If you know the type of object in the buffer, you may directly cast the returned pointer to one of that type. For example, if the buffer contains an NvBlastAsset, use:

```cpp
const void* buffer = ... // A given buffer, may be read from disk, memory, etc.
const uint64_t size = ... // The buffer's size in bytes

NvBlastAsset* asset = static_cast<NvBlastAsset*>(ser->deserializeFromBuffer(buffer, size));
```

This returns a valid pointer if deserialization was successful, or NULL otherwise. If no serializer is loaded which can handle the object type in the stream in its given encoding, it will fail and return NULL.
Again, the memory for the asset is allocated using NvBlastGlobals, so that the asset may be released using

```cpp
NVBLAST_FREE(asset);
```

## Detecting the Object Type in a Buffer

If you don't know the object type in the buffer, you may use the last (optional) argument of deserializeFromBuffer to return the type:

```cpp
uint32_t objTypeID;
void* obj = ser->deserializeFromBuffer(buffer, size, &objTypeID);
NVBLAST_CHECK_ERROR(obj != nullptr, "Object could not be read from buffer.", return);
switch (objTypeID)
{
case LlObjectTypeID::Asset:
    handleAssetLoad(static_cast<NvBlastAsset*>(obj));
    break;
case LlObjectTypeID::Family:
    handleFamilyLoad(static_cast<NvBlastFamily*>(obj));
    break;
default:
    NVBLAST_LOG_ERROR("Unknown object type");
}
```

## Peeking at and Skipping Buffer Data

If a buffer contains multiple objects, you may peek at the buffer to get object information, including object type, encoding, and data size, and skip to the next object in the buffer (whether or not you have chosen to read the current object). For example:

```cpp
const void* buffer = ... // The input buffer
uint64_t size = ... // The input buffer size

while (size)
{
    uint64_t objTypeID;
    if (!ser->peekHeader(&objTypeID, NULL, NULL, buffer, size)) // Only reading the object type ID; may pass in NULL for the other header value pointers
    {
        break; // Read error, stop
    }
    if (objectShouldBeLoaded(objTypeID)) // Some function to determine whether or not we want this object
    {
        void* obj = ser->deserializeFromBuffer(buffer, size);
        // Handle loaded object ...
    }
    // Jump to next object:
    buffer = ser->skipObject(size, buffer); // Updates size as well
}
```

## Cleaning Up

When finished with the serialization manager, it may be released using its release() method:

```cpp
ser->release();
```
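The peek-then-skip loop shown above generalizes to any stream of length-prefixed records. Below is a pure-Python sketch of the same pattern over a hypothetical header layout (type id + payload size) — this is not Blast's actual wire format, only an illustration of the walk:

```python
import struct

HEADER = struct.Struct("<QQ")  # (objTypeID, payloadSize) -- hypothetical layout

def write_object(buf, obj_type_id, payload):
    """Append one header-prefixed object to the buffer."""
    buf += HEADER.pack(obj_type_id, len(payload)) + payload
    return buf

def peek_header(buf, offset):
    """Read the header without consuming the object (like peekHeader)."""
    obj_type_id, size = HEADER.unpack_from(buf, offset)
    return obj_type_id, size

def skip_object(buf, offset):
    """Advance past the current object (like skipObject)."""
    _, size = HEADER.unpack_from(buf, offset)
    return offset + HEADER.size + size

buf = bytearray()
buf = write_object(buf, 1, b"asset-bytes")
buf = write_object(buf, 2, b"family")

# Walk the buffer, deserializing only the object types we care about.
offset, loaded = 0, []
while offset < len(buf):
    obj_type_id, size = peek_header(buf, offset)
    if obj_type_id == 2:  # only load "family" objects, skip the rest
        start = offset + HEADER.size
        loaded.append(bytes(buf[start:start + size]))
    offset = skip_object(buf, offset)
```

The essential property is the same as in the C++ loop: the header can be inspected cheaply, and objects you don't want are skipped without being deserialized.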
cli.md
# Omni Asset Validator (CLI)

## Command Line Interface

Utility for USD validation to ensure layers run smoothly across all Omniverse products. Validation is based on the USD ComplianceChecker (i.e. the same backend as the usdchecker command-line tool), and has been extended with additional rules as follows:

- Additional “Basic” rules applicable in the broader USD ecosystem.
- Omniverse-centric rules that ensure layer files work well with all Omniverse applications & connectors.
- Configurable end-user rules that can be specific to individual company and/or team workflows. Note this level of configuration requires manipulating PYTHONPATH prior to launching this tool.

## Syntax

Use the following syntax to run the asset validator:

```text
usage: omni_asset_validator [-h] [-d 0|1] [-c CATEGORY] [-r RULE] [-e] [-f] [-p PREDICATE] [URI]
```

## Positional arguments

### URI

A single Omniverse Asset.

> Note: This can be a file URI or folder/container URI.

(default: None)

## Options

### -h, --help

Show this help message and exit.

### -d 0|1, --defaultRules 0|1

Flag to use the default-enabled validation rules. Opt out of this behavior to gain finer control over the rules using the --category and --rule flags. The default configuration includes:

- ByteAlignmentChecker
- CompressionChecker
- MissingReferenceChecker
- StageMetadataChecker
- TextureChecker
- PrimEncapsulationChecker
- NormalMapTextureChecker
- KindChecker

### -c CATEGORY, --category CATEGORY

Categories to enable, regardless of the --defaultRules flag. Valid categories are:

- Basic
- ARKit
- Omni:NamingConventions
- Omni:Layout
- Omni:Material
- Usd:Performance
- Usd:Schema

### -r RULE, --rule RULE

Rules to enable, regardless of the --defaultRules flag. Valid rules include:

- ByteAlignmentChecker
- CompressionChecker
- MissingReferenceChecker
- StageMetadataChecker
- TextureChecker
- PrimEncapsulationChecker
- NormalMapTextureChecker
- KindChecker
- ExtentsChecker
- TypeChecker
- ARKitLayerChecker
- ARKitPrimTypeChecker
- ARKitShaderChecker
- ARKitMaterialBindingChecker
- ARKitFileExtensionChecker
- ARKitPackageEncapsulationChecker
- ARKitRootLayerChecker
- OmniInvalidCharacterChecker
- OmniDefaultPrimChecker
- OmniOrphanedPrimChecker
- OmniMaterialPathChecker
- UsdAsciiPerformanceChecker
- UsdLuxSchemaChecker
- UsdGeomSubsetChecker
- UsdMaterialBindingApi
- UsdDanglingMaterialBinding

(default: [])

### -e, --explain

Rather than running the validator, provide descriptions for each configured rule. (default: False)

### -f, --fix

If this is selected, apply fixes.

### -p PREDICATE, --predicate PREDICATE

Report and fix only issues that match this predicate. Currently:

- IsFailure
- IsWarning
- IsError
- HasRootLayer

See [Asset Validator](../../index.html) for more details.

### Command Line Interface using USD Composer

#### Getting USD Composer installation path

Open Omniverse Launcher. On Library / USD Composer, beside the `Launch` button, click the burger menu to view the settings. On Settings you can see the path of the USD Composer installation. Add it as an environment variable.

In Windows:

```bash
set INSTALL_DIR=#See above
set KIT_PATH=%INSTALL_DIR%\kit
```

In Linux:

```bash
export INSTALL_DIR=#See above
export KIT_PATH=${INSTALL_DIR}/kit
```

#### Getting Asset Validation Core path

Open the Extension Manager in USD Composer. In Window / Extensions, select the `omni.asset_validator.core` extension. On the extension information, click on the path icon. Add it as an environment variable.

In Windows:

```bash
set VALIDATION_PATH=#See above
```

In Linux:

```bash
export VALIDATION_PATH=#See above
```

#### Examples

##### Calling the help command

Windows:

```bash
%KIT_PATH% --enable omni.asset_validator.core --exec "%VALIDATION_PATH%\scripts\omni_asset_validator.py --help"
```

Linux:

```bash
${KIT_PATH} --enable omni.asset_validator.core --exec "${VALIDATION_PATH}/scripts/omni_asset_validator.py --help"
```

##### Validating a file

Windows:

```bash
%KIT_PATH% --enable omni.asset_validator.core --exec "%VALIDATION_PATH%\scripts\omni_asset_validator.py %VALIDATION_PATH%\scripts\test\asset.usda"
```

Linux:

```bash
${KIT_PATH} --enable omni.asset_validator.core --exec "${VALIDATION_PATH}/scripts/omni_asset_validator.py ${VALIDATION_PATH}/scripts/test/asset.usda"
```

##### Validating a folder, recursively

Windows:

```bash
%KIT_PATH% --enable omni.asset_validator.core --exec "%VALIDATION_PATH%\scripts\omni_asset_validator.py %VALIDATION_PATH%\scripts\test\"
```

Linux:

```bash
${KIT_PATH} --enable omni.asset_validator.core --exec "${VALIDATION_PATH}/scripts/omni_asset_validator.py ${VALIDATION_PATH}/scripts/test/"
```

##### Apply fixes on a file

Windows:

```bash
%KIT_PATH% --enable omni.asset_validator.core --exec "%VALIDATION_PATH%\scripts\omni_asset_validator.py --fix %VALIDATION_PATH%\scripts\test\asset.usda"
```

Linux:

```bash
${KIT_PATH} --enable omni.asset_validator.core --exec "${VALIDATION_PATH}/scripts/omni_asset_validator.py --fix ${VALIDATION_PATH}/scripts/test/asset.usda"
```

##### Apply fixes on a folder, specific category

Windows:

```bash
%KIT_PATH% --enable omni.asset_validator.core --exec "%VALIDATION_PATH%\scripts\omni_asset_validator.py --fix --category Usd:Schema %VALIDATION_PATH%\scripts\test\"
```

Linux:

```bash
${KIT_PATH} --enable omni.asset_validator.core --exec "${VALIDATION_PATH}/scripts/omni_asset_validator.py --fix --category Usd:Schema ${VALIDATION_PATH}/scripts/test/"
```

##### Apply fixes on a folder, multiple categories

Windows:

```bash
%KIT_PATH% --enable omni.asset_validator.core --exec "%VALIDATION_PATH%\scripts\omni_asset_validator.py --fix --category Usd:Schema --category Basic %VALIDATION_PATH%\scripts\test\"
```

Linux:

```bash
${KIT_PATH} --enable omni.asset_validator.core --exec "${VALIDATION_PATH}/scripts/omni_asset_validator.py --fix --category Usd:Schema --category Basic ${VALIDATION_PATH}/scripts/test/"
```

##### Apply predicates, single file

Windows:

```bash
%KIT_PATH% --enable omni.asset_validator.core --exec "%VALIDATION_PATH%\scripts\omni_asset_validator.py --predicate HasRootLayer %VALIDATION_PATH%\scripts\test\asset.usda"
```

Linux:

```bash
${KIT_PATH} --enable omni.asset_validator.core --exec "${VALIDATION_PATH}/scripts/omni_asset_validator.py --predicate HasRootLayer ${VALIDATION_PATH}/scripts/test/asset.usda"
```
client_library_api.md
# Omniverse Client Library API

## Class Hierarchy

- struct OmniClientAclEntry
- struct OmniClientAuthDeviceFlowParams
- struct OmniClientBookmark
- struct OmniClientBranchAndCheckpoint
- struct OmniClientContent
- struct OmniClientCredentials
- struct OmniClientListEntry
- struct OmniClientLiveUpdateInfo
- struct OmniClientRetryBehavior
- struct OmniClientServerInfo
- struct OmniClientUrl
- struct OmniClientWriteFileExInfo
- enum OmniClientAccessFlags
- enum OmniClientCacheBypassStatus
- enum OmniClientChannelEvent
- enum OmniClientConnectionStatus
- enum OmniClientCopyBehavior
- enum OmniClientFileStatus
- enum OmniClientItemFlags
- enum OmniClientListEvent
- enum OmniClientListIncludeOption
- enum OmniClientLiveUpdateType
- enum OmniClientLogLevel

## File Hierarchy

- OmniClient.h
- OmniClientAbi.h
- OmniClientVersion.h

## Classes and Structs

- **OmniClientAclEntry**: ACL Entry.
- **OmniClientAuthDeviceFlowParams**: This struct contains data provided to the “Device Flow” authentication callback.
- **OmniClientBookmark**: A bookmark.
- **OmniClientBranchAndCheckpoint**: Branch & Checkpoint.
- **OmniClientContent**: Content.
- **OmniClientCredentials**: Credentials to sign in with.
- **OmniClientListEntry**: List Entry.
- **OmniClientLiveUpdateInfo**: This holds information about a live update that was queued.
- **OmniClientRetryBehavior**: Parameters to control retry behavior.
- **OmniClientServerInfo**: Server Info.
- **OmniClientUrl**: A URL broken into the component pieces.
- **OmniClientWriteFileExInfo**: This holds extra info provided by omniClientWriteFileEx.

## Enums

- **OmniClientAccessFlags**: Access flags.
- **OmniClientCacheBypassStatus**: Cache Bypass Status. If enabled, the cache is being bypassed to work around a misbehaving cache.
- **OmniClientChannelEvent**: Channel Event.
- **OmniClientConnectionStatus**: Connection Status.
- **OmniClientCopyBehavior**: Copy Behavior.
- **OmniClientFileStatus**: File Status.
- **OmniClientItemFlags**: Item flags.
- **OmniClientListEvent**: List Subscribe Event.
- **OmniClientListIncludeOption**: Stat/List Include Options.
- **OmniClientListSubscribeEvent**: List Subscribe Event.
- **OmniClientLiveUpdateStatus**: Live Update Status.
- **OmniClientLiveUpdateType**: Live Update Type.
- **OmniClientLogLevel**: Log Level.
- **OmniClientResult**: The primary result code returned by the asynchronous functions.

## Functions

- **omniClientAddBookmark**: Add a URL to the list of bookmarks.
- **omniClientAddDefaultSearchPath**: Add a default search path to the list of search paths used by resolve.
- **omniClientAddUserToGroup**: Add user to a group.
- **omniClientAllocContent**: Allocate a content buffer with the specified size.
- **omniClientAuthenticationCancel**: Call this to cancel the current authentication process.
- **omniClientBreakUrl**: Break a URL into components.
- **omniClientBreakUrlReference**: Break a URL into components.
- **omniClientBypassListCache**: Bypass the internal cache for list requests.
- **omniClientCombineUrls**: This combines a URL with an explicit base URL.
- **omniClientCombineUrls2**: This combines a URL with an explicit base URL.
- **omniClientCombineWithBaseUrl**: This calls `omniClientCombineUrls` with the URL on the top of the stack.
- **omniClientCombineWithBaseUrl2**: This calls `omniClientCombineUrls` with the URL on the top of the stack.
- **omniClientConfigFreeString**: This is an internal function intended for unit tests.
- **omniClientConfigGetString**: This is an internal function intended for unit tests.
- **omniClientConfigReload**: This is an internal function intended for unit tests.
- **omniClientConfigSetInt**: This is an internal function intended for unit tests.
- **omniClientCopy**: Copy a thing from ‘srcUrl’ to ‘dstUrl’.
- **omniClientCopyContent**: Copy a content buffer.
- **omniClientCreateCheckpoint**: Create a checkpoint for a given URL (which can include a branch, otherwise assume the default branch).
- **omniClientCreateFolder**: Create a folder.
- **omniClientCreateGroup**: Create a group on server.
- **omniClientCreateWithHash**: Create a new file with the hash known upfront. This can be used to avoid additional uploads of an asset that is already on the server.
- **omniClientDelete**: Delete something (file, folder, mount, live object, channel, etc.).
- **omniClientFreeBranchAndCheckpoint**: Free the structure returned from omniClientGetBranchAndCheckpointFromQuery.
- **omniClientFreeContent**: Free an allocated content buffer.
- **omniClientFreeUrl**: Free the URL structure allocated by omniClientBreakUrlReference or omniClientBreakUrl.
- **omniClientGetAcls**: Retrieve the ACLs for an item.
- **omniClientGetBaseUrl**: Returns the top of the base URL stack.
- **omniClientGetBranchAndCheckpointFromQuery**: Breaks a query string into the branch/checkpoint parameters.
- **omniClientGetCacheBypassStatusString**: Retrieve a human readable string for a cache bypass status.
- **omniClientGetConnectionStatusString**: Retrieve a human readable string for a connection status.
- **omniClientGetDefaultSearchPaths**: Retrieve the current list of default search paths.
- **omniClientGetFileStatusString**: Retrieve a human readable string for a file status.
- **omniClientGetGroups**: Returns a list of all groups registered with the server.
- **omniClientGetGroupUsers**: Returns a list of users associated with a group.
- **omniClientGetLocalFile**: Get a local file name for the URL.
- **omniClientGetLogLevelChar**: Retrieve a single character to represent a log level.
- **omniClientGetLogLevelString**: Retrieve a human readable string for a log level.
- **omniClientGetOmniHubVersion**: Check the version of OmniHub.
- **omniClientGetReactor**: Get access to the reactor.
- **omniClientGetResultString**: Retrieve a human readable string from a result.
- **omniClientGetServerInfo**: Retrieve information about the server for a specified URL.
- **omniClientGetUserGroups**: Returns all groups a user belongs to.
- **omniClientGetUsers**: Returns all users registered with the server.
- **omniClientGetVersionString**: Returns a human-readable version string.
- **omniClientInitialize**: Perform some one-time initialization.
- **omniClientJoinChannel**: Start listening to a channel.
- **omniClientKvCacheGet**: Retrieve a value/content from the KvCache which has been stored before by `omniClientKvCacheSet`. Still experimental, interface might change.
- **omniClientKvCacheSet**: Store a value/content in the KvCache using a context/key pair as address. Still experimental, interface might change.
- **omniClientKvCacheStat**: Check if a key exists in the KV cache, and optionally determine the size of the data. See `omniClientKvCacheSet`. Still experimental, interface might change.
- **omniClientList**: Retrieve contents of a folder. This function is equivalent to omniClientList2 with eOmniClientListIncludeOption_DefaultNotDeleted.
- **omniClientList2**: Retrieve contents of a folder.
- **omniClientListBookmarks**: Register a callback to receive the list of bookmarks.
- **omniClientListCheckpoints**: Returns a list of checkpoints for a URL.
- **omniClientListSubscribe**: Subscribe to change notifications for a URL. This function is equivalent to omniClientListSubscribe2 with eOmniClientListIncludeOption_DefaultNotDeleted.
- **omniClientListSubscribe2**: Subscribe to change notifications for a URL.
- **omniClientLiveConfigureJitterReduction**: Set parameters that control jitter reduction.
- **omniClientLiveCreate**: Create a live object.
- **omniClientLiveGetLatestServerTime**: Returns the server timestamp of the most recently received message (0 if no messages have been received).
- **omniClientLiveProcess**: Call this to send live updates to the server and process live updates received from the server.
- **omniClientLiveProcessUpTo**: Same as `omniClientLiveProcess`.
- **omniClientLiveRead**: Read a live object and set up a subscription to be notified of new updates to that object.
- **omniClientLiveRead2**: This is the same as `omniClientLiveRead` except you don’t need to call omniClientLiveProcess.
- **omniClientLiveRegisterProcessUpdatesCallback**: Register a callback to be notified that we are about to begin processing live updates.
- **omniClientLiveRegisterQueuedCallback2**: Register a function to be called any time there’s an update in the queue that needs to be processed.
- **omniClientLiveSetQueuedCallback**: Set a function to be called any time there’s an update in the queue that needs to be processed.
- **omniClientLiveUpdate**: Update a live object.
- **omniClientLiveUpdate2**: This is the same as `omniClientLiveUpdate` except you don’t need to call omniClientLiveProcess.
- **omniClientLiveWaitForPendingUpdates**: Call this to wait for all pending live updates to complete.
- **omniClientLock**: Lock a file so no other clients can modify it.
- **omniClientMakeFileUrl**: This creates a “file:” URL from a path.
- **omniClientMakePrintable**: This makes a URL safe for printing in a UI or to a console window.
- **omniClientMakeQueryFromBranchAndCheckpoint**: This creates a query string from the parameters provided.
- **omniClientMakeRelativeUrl**: This makes “otherUrl” relative to “baseUrl”.
- **omniClientMakeUrl**: This creates a URL from the pieces provided.
- **omniClientMove**: Move a thing from ‘srcUrl’ to ‘dstUrl’.
- **omniClientMoveContent**: Attempt to take ownership of a content buffer.
- **omniClientNormalizeUrl**: This normalizes a URL by parsing it then recomposing it.
- **omniClientObliterate**: Obliterate a path.
- **omniClientPopBaseUrl**: Pop a base URL from the context stack.
- **omniClientPushBaseUrl**: Push a base URL for relative URLs to resolve against.
- **omniClientReadFile**: Read the entire file.
- **omniClientReconnect**: Attempt to reconnect, even if the previous connection attempt failed.
- **omniClientReferenceContent**: Reference an existing content buffer.
- **omniClientRefreshAuthToken**: This refreshes the auth token for a given URL.
- **omniClientRegisterAuthCallback**: Register a callback to provide authentication credentials.
- **omniClientRegisterAuthDeviceFlowCallback**: Register a function to be called when authenticating using “Device Flow”.
- **omniClientRegisterCacheBypassStatusCallback**: Register a callback to receive cache bypass status updates.
- **omniClientRegisterConnectionStatusCallback**: Register a callback to receive connection status updates.
- **omniClientRegisterFileStatusCallback**: Register a callback to receive file transfer updates.
- **omniClientRemoveBookmark**: Remove a URL from the list of bookmarks.
- **omniClientRemoveDefaultSearchPath**: Remove a default search path from the list of search paths used by resolve.
- **omniClientRemoveGroup**: Remove group from server.
- **omniClientRemoveUserFromGroup**: Remove user from a group.
- **omniClientRenameGroup**: Rename group on server.
- **omniClientResolve**: Resolve operates similarly to stat with one major difference.
- **omniClientResolveSubscribe**: Resolve an item, and subscribe to future changes.
- **omniClientSendMessage**: Send a message to a channel.
- **omniClientSetAcls**: Set ACLs for an item.
- **omniClientSetAlias**: Redirect a URL to a different location.
- **omniClientSetAuthenticationMessageBoxCallback**: Set a callback which is called instead of showing the “Please sign in using your browser” dialog.
- **omniClientSetAzureSASToken**: Set Azure SAS token for a blob container.
- **omniClientSetCacheBypassStatus**: Set the cache bypass status. The function will not call the callback registered with omniClientRegisterCacheBypassStatusCallback.
- **omniClientSetLogCallback**: Set a log callback function.
- **omniClientSetLogLevel**: Set the log level.
- **omniClientSetProductInfo**: Sets product information that’s sent to Nucleus when connecting.
- **omniClientSetRetries**: Configure retry behavior.
- **omniClientSetS3Configuration**: Set S3 configuration info for a given URL.
- **omniClientShutdown**: Terminate all connections and free everything.
- **omniClientSignOut**: Immediately disconnect from the server specified by this URL.
- **omniClientStat**: Retrieve information about a single item. This function is equivalent to omniClientStat2 with eOmniClientListIncludeOption_DefaultNotDeleted.
- **omniClientStat2**: Retrieve information about a single item.
- **omniClientStatSubscribe**: Retrieve information about a single item, and subscribe to future changes. This function is equivalent to omniClientStatSubscribe2 with eOmniClientListIncludeOption_DefaultNotDeleted.
- **omniClientStatSubscribe2**: Retrieve information about a single item, and subscribe to future changes.
- **omniClientStop**: Stop an active request.
- **omniClientTraceStart**: Start tracing using carb::tracer.
- **omniClientTraceStop**: Stop tracing using carb::tracer.
- **omniClientUndelete**: Restore a path.
- **omniClientUnlock**: Unlock a file so other clients can modify it.
- **omniClientUnregisterCallback**: Unregister a callback.
- **omniClientWait**: Wait for a request to complete.
- **omniClientWaitFor**: Wait for a request to complete, but with a timeout.
- **omniClientWriteFile**: Create a new file, overwriting if it already exists.
- **omniClientWriteFileEx**: Create a new file, overwriting if it already exists.

## Variables

- **kInvalidRequestId**: This is returned if you call an asynchronous function after calling `omniClientShutdown`.
- **kOmniClientVersion**: The version of this library. You can pass it to `omniClientInitialize` to verify that the dll which is loaded matches the header file you compiled against.

## Macros

- **BIT**: Macro to help define bit fields.
- **OMNICLIENT_ABI**
- **OMNICLIENT_BUILD_STRING**: This is the full build string that is also returned by `omniClientGetVersionString`.
- **OMNICLIENT_CALLBACK_NOEXCEPT**
- **OMNICLIENT_DEFAULT**
- **OMNICLIENT_DEPRECATED**
- **OMNICLIENT_EXPORT**
- **OMNICLIENT_EXPORT_C**
- **OMNICLIENT_EXPORT_CPP**
- **OMNICLIENT_NOEXCEPT**
- **OMNICLIENT_VERSION_BUILD**: This is unused, and is always 0.
- **OMNICLIENT_VERSION_MAJOR**: Major version number. This will not change unless there is a major non-backwards-compatible change.
- **OMNICLIENT_VERSION_MINOR**: Minor version number. This changes with every release.
- **OMNICLIENT_VERSION_PATCH**: Patch number. This will normally be 0, but can change if a fix is backported to a previous release.

## Typedefs

- **OmniClientAddUserToGroupCallback**: This is called with the result of `omniClientAddUserToGroup`.
- **OmniClientAuthCallback**: This allows you to provide credentials used to sign in to a server.
- **OmniClientAuthDeviceFlowCallback**: This is called when connecting to a server using “Device Flow” authentication.
- **OmniClientAuthenticationMessageBoxCallback**: This is called when the library needs to continue authentication in a web browser.
- **OmniClientBookmarkCallback**: This is called with the list of bookmarks.
- **OmniClientCacheBypassStatusCallback**: This is called any time any cache status changes.
- **OmniClientConnectionStatusCallback**: This is called any time any connection status changes.
- **OmniClientCopyCallback**: This is called with the result of `omniClientCopy`.
- **OmniClientCreateCheckpointCallback**: This is called with the result of `omniClientCreateCheckpoint`.
- **OmniClientCreateFolderCallback**: This is called with the result of `omniClientCreateFolder`.
- **OmniClientCreateGroupCallback**: This is called with the result of `omniClientCreateGroup`.
- **OmniClientCreateWithHashCallback**: This is called with the result of `omniClientCreateWithHash`.
- **OmniClientDeleteCallback**: This is called with the result of `omniClientDelete`.
- **OmniClientFileStatusCallback**: This is called any time any file status changes.
- **OmniClientGetAclsCallback**: This is called with the result of `omniClientGetAcls`.
- **OmniClientGetGroupsCallback**: This is called with the result of `omniClientGetGroups`.
- **OmniClientGetGroupUsersCallback**: This is called with the result of `omniClientGetGroupUsers`.
- **OmniClientGetLocalFileCallback**: This is called with the result of `omniClientGetLocalFile`.
- **OmniClientGetOmniHubVersionCallback**: Called with the result of `omniClientGetOmniHubVersion`.
- **OmniClientGetServerInfoCallback**: This is called with the results of `omniClientGetServerInfo`.
- **OmniClientGetUserGroupsCallback**: This is called with the result of `omniClientGetUserGroups`.
- **OmniClientGetUsersCallback**: This is called with the result of `omniClientGetUsers`.
- **OmniClientJoinChannelCallback**: This is called with the result of `omniClientJoinChannel`.
- **OmniClientKvCacheGetCallback**: Called with the result of `omniClientKvCacheGet`. result will be eOmniClientResult_Ok if content is a valid pointer, and eOmniClientResult_ErrorNotFound if the key doesn’t exist. Other error codes indicate connection errors to OmniHub. The content’s memory can be acquired by `omniClientMoveContent` or copied out, as the library will free the memory once the callback returns if it is not moved.
- **OmniClientKvCacheSetCallback**: Called with the result of `omniClientKvCacheSet`.
- **OmniClientKvCacheStatCallback**: Called with the result of `omniClientKvCacheStat`.
- **OmniClientListCallback**: This is called with the results of `omniClientList` and `omniClientListSubscribe`.
- **OmniClientListCheckpointsCallback**: This is called with the result of `omniClientListCheckpoints`.
- **OmniClientListSubscribeCallback**: This is called any time an item you’ve subscribed to with `omniClientListSubscribe` changes.
- **OmniClientLiveCreateCallback**: Called with the result of `omniClientLiveCreate`.
- **OmniClientLiveProcessUpdatesCallback**: This is called any time `omniClientLiveProcess`, `omniClientLiveProcessUpTo`, or `omniClientLiveWaitForPendingUpdates` is called.
- **OmniClientLiveQueuedCallback**: This is called any time we receive a live update from the network.
- **OmniClientLiveQueuedCallback2**: This is called any time we receive a live update from the network.
- **OmniClientLiveReadCallback**: Called with the result of `omniClientLiveRead`.
- **OmniClientLiveUpdateCallback**: Called with the result of `omniClientLiveUpdate`.
- **OmniClientLockCallback**: This is called with the result of `omniClientLock`.
- **OmniClientLogCallback**: This is called from a background thread any time the library wants to print a message to the log.
- **OmniClientMoveCallback**: This is called with the result of `omniClientMove`.
- **OmniClientObliterateCallback**: This is called with the result of `omniClientObliterate`.
- **OmniClientReadFileCallback**: This is called with the result of `omniClientReadFile`.
- **OmniClientRefreshAuthTokenCallback**: This is called with the results of `omniClientRefreshAuthToken`.
- **OmniClientRemoveGroupCallback**: This is called with the result of `omniClientRemoveGroup`.
- **OmniClientRemoveUserFromGroupCallback**: This is called with the result of `omniClientRemoveUserFromGroup`.
- **OmniClientRenameGroupCallback**: This is called with the result of `omniClientRenameGroup`.
- **OmniClientRequestId**: Request Id returned from all the asynchronous functions.
- **OmniClientResolveCallback**: This is called with the result of `omniClientResolve` or `omniClientResolveSubscribe`.
- **OmniClientResolveSubscribeCallback**: This is called any time an item you’ve subscribed to with `omniClientResolveSubscribe` changes.
- **OmniClientSendMessageCallback**: This is called with the result of `omniClientSendMessage`.
- **OmniClientSetAclsCallback**: This is called with the result of `omniClientSetAcls`.
- **OmniClientStatCallback**: This is called with the results of `omniClientStat` or `omniClientStatSubscribe`.
- **OmniClientStatSubscribeCallback**: This is called any time an item you’ve subscribed to with `omniClientStatSubscribe` changes.
- **OmniClientUndeleteCallback**: This is called with the result of `omniClientUndelete`.
- **OmniClientUnlockCallback**: This is called with the result of `omniClientUnlock`.
- **OmniClientWriteFileCallback**: This is called with the result of `omniClientWriteFile`.
- **OmniClientWriteFileExCallback**: This is called with the result of `omniClientWriteFileEx`.
clone-the-kit-app-template-github-repository_developer_setup.md
# Project Setup

## Visual Studio Code

Download and install Visual Studio Code. The standard installation works for this tutorial.

## Clone the kit-app-template GitHub Repository

Use a preferred method to download the repo. Here is how to clone kit-app-template from within Visual Studio Code:

1. Open VSCode.
2. Open the command palette using `Ctrl + Shift + P`.
3. In the palette prompt, enter `gitcl`, then select the `Git: Clone` command.
4. Paste `https://github.com/NVIDIA-Omniverse/kit-app-template` into the repository URL, then select Clone from URL.
5. Select (or create) the local directory into which you want to clone the project.
6. Once it has finished cloning, it will ask if you want to open the cloned repository; select `Open`.
7. Set the terminal to use `Command Prompt` so the syntax in the [Command Cheat-Sheet](commands.html) is supported.

![VS Code Terminal](_images/vs_code_terminal.png)

Once the project has been downloaded, make sure it’s open in VSCode.

Note: VSCode may recommend installing VSCode Extensions such as the Python Extension. VSCode Extensions are not the same as Kit SDK Extensions. Feel free to install those for VSCode to improve developer workflows.

## Project Overview

Before changing or adding anything, let’s review the starting point. This kit-app-template project is a barebones starting point, and additional files will be added as tools are used. Here is an outline of the core project:

| Directory Item | Purpose |
|----------------|---------|
| docs | Source files for building documentation. |
| source | Source files for Applications, Services, and Extensions. |
| tools | Tools and configurations for making builds and packages. |
| .editorconfig | [EditorConfig](https://editorconfig.org/) file. |

## Verify Project Starting Point

Let’s make sure the core functionality works before creating new solutions.

1. Open a terminal in VSCode and run a **build** - see the `build` command in the [Command Cheat-Sheet](commands.html). Internet access to NVIDIA repositories is required, as dependencies are downloaded as part of the build process. Subsequent builds will be faster, as dependencies no longer need to be downloaded.

2. Notice the additional directories that are created in the root directory after the build has completed:
   - **_build**: Debug and release builds of apps. Packages. Built docs.
   - **_compiler**: Solution files.
   - **_repo**: Links to installed `repo` tools.

   **Important**: The directories named with an underscore are safe to delete. They are generated by `repo` and `build` commands.

3. Start an app included in the project.
   - Windows: `.\_build\windows-x86_64\release\my_name.my_app.bat`
   - Linux: `./_build/linux-x86_64/release/my_name.my_app.sh`

4. An Application should launch - presenting a viewport, content browser, and a few other panels. This is a basic functional USD viewer Application within the Kit SDK itself.

If there were errors causing the apps not to run, then please start over - making sure not to make any changes prior to this section.

Now you are ready to either continue with the tutorial or develop Applications and Extensions on your own.
CODING.md
# Coding Style Guidelines

This document covers style guidelines for the various programming languages used in the Carbonite codebase.

## C/C++ Coding Conventions

This covers the basic coding conventions and guidelines for all C/C++ code that is submitted to this repository.

- It’s expected that you will not love every convention that we’ve adopted.
- These conventions establish a modern and hybrid C/C++14 style.
- Please keep in mind that it’s impossible to make everybody happy all the time.
- Instead, appreciate the consistency that these guidelines will bring to our code and thus improve the readability for others.
- Coding guidelines that can be enforced by clang-format will be applied to the code.

This project heavily embraces a plugin architecture. Please consult [Architectural Overview](docs/Architecture.html) for more information.

### Repository

The project should maintain a well-structured layout where source code, tools, samples, and any other folders needed are separated, well organized, and maintained. The convention has been adopted to group all generated files into top-level folders that are prefixed with an underscore; this makes them stand out from the source-controlled folders and files while also allowing them to be easily cleaned out from local storage (Ex. `rm -r _*`).

This is the layout of the **Carbonite** project repository:

| Item | Description |
| --- | --- |
| .vscode | Visual Studio Code configuration files. |
| _build | Build target outputs (generated). |
| _compiler | Compiler scripts, IDE projects (generated). |
| deps | External dependency configuration files. |
| docs | Carbonite documentation. |
| include/carb | Public includes for consumers of Carbonite SDK. |
| tools | Small tools or boot-strappers for the project. |
| source | All source code for project. |
| source/bindings | Script bindings for Carbonite SDK. |
| source/examples | Examples of using Carbonite. |
| source/framework | Carbonite framework implementation. |
| source/tests | Source code for tests of Carbonite. |
| source/tools | Source code for tools built with Carbonite. |
| source/plugins | Carbonite plugin implementations. |
| source/plugins/carb.assets | The carb.assets.plugin implementation. |
| source/plugins/carb.graphics-direct3d | The implementation of the carb.graphics interface for Direct3D12. |
| source/plugins/carb.graphics-vulkan | The implementation of the carb.graphics interface for Vulkan. |
| .clang-format | Configuration for running clang-format on the source code. |
| .editorconfig | Maintains editor and IDE style conformance, Ex. Tabs/Spaces. |
| .flake8 | Configuration for additional coding style conformance. |
| .gitattributes | Governs repository attributes for the git repository. |
| .gitignore | Governs which files to ignore in the git repository. |
| build.bat | Build script to build debug and release targets on Windows. |
| build.sh | Build script to build debug and release targets on Linux. |
| CODING.md | These coding guidelines. |
| format_code.bat | Run this to format code on Windows before submitting to the repository. |
| format_code.sh | Run this to format code on Linux before submitting to the repository. |
| prebuild.bat | Run this to generate Visual Studio solution files on Windows into the _compiler folder. |
| premake5.lua | Script for configuration of all build output targets using premake. |
| setup.sh | Run-once setup script that installs Linux platform dependencies. |
| README.md | The summary of any project information you should read first. |

One important rule captured in the above folder structure is that public headers are stored under the `include/carb` folder, but implementation files and private headers are stored under the `source` folders.

### Include

There are four rules to be followed when writing include statements correctly for Carbonite:
1. Do not include `Windows.h` in header files, as it is monolithic and pollutes the global environment for Windows. Instead, a much slimmer CarbWindows.h exists to declare only what is needed by Carbonite. If additional Windows constructs are desired, add them to CarbWindows.h. There are instructions in that file for how to handle typedefs, enums, structs, and functions. `Windows.h` should still be included in compilation units (**cpp** and **c** files); CarbWindows.h exists solely to provide a minimal list of Windows declarations for header files. Example from a file in `include/carb/extras`:

   ```cpp
   #include "../Defines.h"

   #if CARB_PLATFORM_WINDOWS
   #    include "../CarbWindows.h"
   #endif
   ```

2. Public headers (located under `include/carb`) referencing each other always use the path-relative include format:

   ```cpp
   #include "../Defines.h"
   #include "../container/LocklessQueue.h"
   #include "IAudioGroup.h"
   ```

3. Includes of files that are not local to Carbonite (or are pulled in via a package) use the search-path format. Carbonite source files (under `source/`) may also use the search-path format for Carbonite public headers (under `include/carb/`):

   ```cpp
   #include <carb/graphics/Graphics.h> // via packman package
   #include <doctest/doctest.h>
   ```

4. All other includes local to Carbonite use the path-relative include format:

   ```cpp
   #include "MyHeader.h"
   #include "../ParentHeader.h"
   ```

   In the example above, `MyHeader.h` is next to the source file and `ParentHeader.h` is one level above. It is important to note that these relative includes are not allowed to cross package boundaries. If parts are shipped as separate packages, the includes must use the angle-bracket search-path format in item 3 when referring to headers from other packages.

We do also have rules about the ordering of includes, but all of these are enforced by format_code.{bat|sh}, so there is no need to memorize them. They are captured here for completeness:
1. The matching header include for a cpp file is first, if it exists, in a separate group of one file. This is to ensure self-sufficiency.
2. carb/Defines.h is its own group of one file, to ensure that it is included before other includes.
3. Other local includes are in the third group, alphabetically sorted.
4. Search-path includes to Carbonite are in the fourth group (`#include <carb/*>`), alphabetically sorted.
5. Other 3rd-party includes are in the fifth group (`#include <*/*>`), alphabetically sorted.
6. System includes are in the sixth and final group, alphabetically sorted.

Here is an example from `AllocationGroup.cpp` (it doesn’t have the fifth group):

```cpp
#include "AllocationGroup.h"

#include <carb/Defines.h>

#include "StackEntry.h"

#include <carb/logging/Log.h>

#if CARB_PLATFORM_LINUX
#    include <signal.h>
#endif
```

Two things are worth noting about the automatic grouping and reordering that we do with the format_code script. If you need to associate a comment with an include, put the comment on the same line as the include statement; otherwise clang-format will not move the chunk of code. Like this:

```cpp
#include <stdlib.h> // this is needed for size_t on Linux
```

Secondly, if include order is important for some files, just put `// clang-format off` and `// clang-format on` around those lines.

## Files

- Header files should have the extension `.h`, since this is least surprising.
- Source files should have the extension `.cpp`, since this is least surprising.
  - `.cc` is typically used for UNIX only and is not recommended.
- Header files must include the preprocessor directive to only include a header file once:

  ```cpp
  #pragma once
  ```

- Source files should include the associated header in the first line of code after the commented license banner.
- All files must end in a blank line.
- Header and source files should be named with `PascalCase` according to their type names and placed in their appropriate namespaced folder paths, which are in **lowercase**.
- A file that doesn’t represent a type name should nevertheless start with uppercase and be written in **PascalCase**, Ex. `carb/Defines.h`.

| Type | Path |
| --- | --- |
| carb::assets::IAssets | ./include/carb/assets/IAssets.h |
| carb::audio::IAudioPlayback | ./include/carb/audio/IAudioPlayback.h |
| carb::settings::ISettings | ./include/carb/settings/ISettings.h |

This allows for inclusion of headers that match code casing while creating a unique include path:

```cpp
#include <carb/assets/IAssets.h>
#include <carb/audio/IAudioPlayback.h>
#include <carb/settings/ISettings.h>
```

- In an effort to reduce difficulty downstream, all public header files (i.e. those under the *include* directory) must not use any identifier named `min` or `max`. This is an effort to coexist with `#include <Windows.h>` where `NOMINMAX` has not been specified:
  - Instead, `include/carb/Defines.h` has global symbols `carb_min()` and `carb_max()` that may be used in similar fashion to `std::min` and `std::max`.
  - For rare locations where it is necessary to use `min` and `max` (i.e. to use `std::numeric_limits<>::max()`, for instance), please use the following construct:

    ```cpp
    #pragma push_macro("max") // or "min"
    #undef max // or min
    /* use the max or min symbol as you normally would */
    #pragma pop_macro("max") // or "min"
    ```

- Similarly, the identifier `interface` can be problematic since `Windows.h` defines it as `class`. Use of identifiers named `interface` should be replaced by something else, such as `iface`, to avoid conflicts. Note that if unity builds are being used, it may also be necessary to avoid using both `interface` and `min`/`max` as identifiers even in .cpp files.

## Namespaces

Before we dive into usage of namespaces, it’s important to establish what namespaces were originally intended for. They were added to prevent name collisions. Instead of each group prefixing all their names with a unique identifier, they could now scope their work within a unique namespace.
The benefit of this was that implementers could write their implementations within the namespace and did therefore not have to prefix that code with the namespace. However, when adding this feature a few other features were also added and that is where things took a turn for the worse. Outside parties can alias the namespace, i.e. give it a different name when using it. This causes confusion because now a namespace is known by multiple names. Outside parties can hoist the namespace, effectively removing the protection. Hoisting can also be used within a user created namespace to introduce another structure and names for 3rd party namespaces, for an even higher level of confusion. Finally, the namespaces were designed to support nesting of namespaces. It didn’t take long for programmers to [run away with this feature for organization](https://punchlet.wordpress.com/2011/06/18/letter-the-sixth-belatedly/). Nested namespaces stem from a desire to hierarchically organize a library but this is at best a convenience for the implementer and a nuisance for the consumer. Why should our consumers have to learn about our internal organization hierarchy? It should be noted here that the aliasing of namespaces and hoisting of namespaces are often coping mechanisms for consumers trapped in deeply nested namespaces. So, essentially the C++ committee created both the disease and the palliative care. What consumers really need is a namespace to protect their code from clashing with code in external libraries that they have no control over. Notice the word **a** in there. Consumers don’t need nested levels of namespaces in these libraries - one is quite enough for this purpose. This also means that a namespace should preferably map to a team or project, since such governing bodies can easily resolve naming conflicts within their namespace when they arise. 
With the above in mind we have developed the following rules: - The C++ namespace should be project and/or team based and easily associated with the project. - Ex. The **Carbonite** project namespace is `carb::` and is managed by the Carbonite team - This avoids collisions with other external and internal NVIDIA project namespaces. - We do **not** use a top level `nvidia::` namespace because there is no central governance for this namespace, additionally this would lead to a level of nesting that benefits no one. - Namespaces are all lowercase. - This distinguishes them from classes which is important because the usage is sometimes similar. - This encourages short namespace names, preferably a single word; reduces chances of users hoisting them. - Demands less attention when reading, which is precisely what we want. We want people to use them for protection but not hamper code readability. - Exposed namespaces are no more than two levels deep. - One level deep is sufficient to avoid collisions since by definition the top level namespace is always managed by a governing body (team or project) - A second level is permitted for organization; we accept that in larger systems one level of organization is justifiable (in addition to the top level name-clash preventing namespace). Related plugin interfaces and type headers are often grouped together in a namespace. - Other NVIDIA projects can make plugins and manage namespace and naming within. These rules don’t really apply because we don’t have governance for such projects. However, we recommend that these rules be followed. For a single plugin a top level namespace will typically suffice. For a collection of plugins a single top level namespace may still suffice, but breaking it down into two levels is permitted by these guidelines. - We don’t add indentation for code inside namespaces. - This conserves maximum space for indentation inside code. 
- We don’t add comments for documenting the closing of structs or definitions, but it’s OK for namespaces, because they often span many pages and there is no indentation to help.

## Name Prefixing and Casing

The following table outlines the name prefixing and casing used:

| Construct | Prefixing / Casing |
| --- | --- |
| class, struct, enum class and typedef | PascalCase |
| constants | kCamelCase |
| enum class values | eCamelCase |
| functions | camelCase |
| private/protected functions | _camelCase |
| exported C plugin functions | carbCamelCase |
| public member variables | camelCase |
| private/protected member variables | m_camelCase |
| private/protected static member variables | s_camelCase |
| global - static variable at file or project scope | g_camelCase |
| local variables | camelCase |

When a name includes an abbreviation or acronym that is commonly written entirely in uppercase, you must still follow the casing rules laid out above. For instance:

```cpp
void* gpuBuffer;          // not GPUBuffer
struct HtmlPage;          // not HTMLPage
struct UiElement;         // not UIElement
using namespace carb::io; // namespaces are always lowercase
```

## Naming Guidelines

- All names must be written in **US English**.

  ```cpp
  std::string fileName; // NOT: dateiName
  uint32_t color;       // NOT: colour
  ```

- The following names cannot be used according to the C++ standard:
  - names that are already keywords;
  - names with a double underscore anywhere are reserved;
  - names that begin with an underscore followed by an uppercase letter are reserved;
  - names that begin with an underscore are reserved in the global namespace.
- Method names must always begin with a verb.
  - This avoids confusion about what a method actually does.

  ```cpp
  myVector.getLength();
  myObject.applyForce(x, y, z);
  myObject.isDynamic();
  texture.getFormat();
  ```

- The terms get/set or is/set (**bool**) should be used where an attribute is accessed directly.
- This indicates there is no significant computation overhead and only access. ```cpp employee.getName(); employee.setName("Jensen Huang"); light.isEnabled(); light.setEnabled(true); ``` - Use stateful names for all boolean variables. (Ex bool enabled, bool m_initialized, bool g_cached) and leave questions for methods (Ex. isXxxx() and hasXxxx()) ```cpp bool isEnabled() const; void setEnabled(bool enabled); void doSomething() { bool initialized = m_coolSystem.isInitialized(); ... } ``` - Please consult the antonym list if naming symmetric functions. - Avoid redundancy in naming methods and functions. - The name of the object is implicit, and must be avoided in method names. ```cpp line.getLength(); // NOT: line.getLineLength(); ``` - Function names must indicate when a method does significant work. ```cpp float waveHeight = wave.computeHeight(); // NOT: wave.getHeight(); ``` - Avoid public method, arguments and member names that are likely to have been defined in the preprocessor. - When in doubt, use another name or prefix it. ```cpp size_t malloc; // BAD size_t bufferMalloc; // GOOD ``` ```cpp int min, max; // BAD int boundsMin, boundsMax; // GOOD ``` - Avoid conjunctions and sentences in names as much as possible. - Use `Count` at the end of a name for the number of items. ```cpp size_t numberOfShaders; // BAD size_t shaderCount; // GOOD VkBool32 skipIfDataIsCached; // BAD VkBool32 skipCachedData; // GOOD ``` ## Internal code For public header files, a `detail` namespace should be used to declare implementation as private and subject to change, as well as signal to external users that the functions, types, etc. in the `detail` namespace should not be called. Within a translation unit (.cpp file), use an anonymous namespace to prevent external linkage or naming conflicts within a module: ```cpp namespace // anonymous { struct OnlyForMe { }; } ``` In general, prefer anonymous namespaces over `static`. 
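The `detail`-namespace pattern described above can be sketched as follows; the namespace and function names here are invented for illustration and are not part of the Carbonite SDK:

```cpp
#include <cstddef>

namespace example // stand-in for a project namespace such as carb
{
namespace detail // implementation detail: external users should not call this
{
inline int sumImpl(const int* values, size_t count)
{
    int total = 0;
    for (size_t i = 0; i < count; ++i)
        total += values[i];
    return total;
}
} // namespace detail

// Public entry point: the address first, followed by a size_t counter,
// matching the function rules in this document.
inline int sum(const int* values, size_t count)
{
    return detail::sumImpl(values, count);
}
} // namespace example
```

Consumers call `example::sum()`; the `detail` namespace signals that `sumImpl()` is private and subject to change.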
## Deprecation and Retirement

As part of the goal to minimize major version changes, interface functions may be deprecated and retired through the **Deprecation and Retirement Guidelines** section of the Architectural Overview.

## Shader naming

HLSL shaders must have the following naming patterns to properly work with our compiler and the slangc.py script:

- **HLSL shader naming:**
  - if it contains multiple entry points or stages: [shader name].hlsl
  - if it contains a single entry point and stage: [shader name].[stage].hlsl
- **Compiled shader naming:** [shader name].[entry point name].[stage].dxil/dxbc/spv[.h]
- Do not add extra dots to the names, or they will be ignored. You may use underscores instead.

```cpp
basic_raytracing.hlsl                // Input: DXIL library with multiple entry points
basic_raytracing.chs.closesthit.dxil // Output: entry point: chs, stage: closesthit shader
color.pixel.hlsl                     // Input: a pixel shader
color.main.pixel.dxbc                // Output: entry point: main, stage: pixel shader
```

## Rules for class

- Classes that should not be inherited from should be declared as `final`.
- Each access modifier appears no more than once in a class, in the order: `public`, `protected`, `private`.
- All `public` member variables live at the start of the class. They have no prefix. If they are accessed in a member function, that access must be prefixed with `this->` for improved readability and reduced head-scratching.
- All `private` member variables live at the end of the class. They are prefixed with `m_`. They should be accessed directly in member functions; adding `this->` to access members in the same class is unnecessary.
- Constructors and the destructor are the first methods in a class after `public` member variables, unless they are private scoped, in which case they are the first `private` methods.
- The implementations in cpp should appear in the order which they are declared in the class. - Avoid `inline` implementations unless trivial and needed for optimization. - Use the `override` specifier on all overridden virtual methods. Also, every member function should have at most one of these specifiers: `virtual`, `override`, or `final`. - Do not override pure-virtual method with another pure-virtual method. - Here is a typical class layout: ```cpp #pragma once namespace carb { namespace ui { /** * Defines a user interface widget. */ class Widget { public: Widget(); ~Widget(); const char* getName() const; void setName(const char* name); bool isEnabled() const; bool setEnabled(bool enabled); private: char* m_name; bool m_enabled; }; } } ``` ## Rules for struct - We make a clear distinction between structs and classes. - We do not permit any member functions on structs. Those we make classes. - If you must initialize a member of the struct then use C++14 static initializers for this, but don’t do this for basic types like a Float3 struct because default construction/initialization is not free. - No additional scoping is needed on struct variables. - Not everything needs to be a class object with logic. - Sometimes it’s better to separate the data type from the functionality and structs are a great vehicle for this. - For instance, vector math types follow this convention. - Allows keeping vector math functionality internalized rather than imposing it on users. - Here is a typical struct with plain-old-data (pod): ```cpp struct Float3 { float x; float y; float z; }; // check this out (structs are awesome): Float3 pointA = {0}; Float3 pointB = {1, 0, 0}; ``` ## Rules for function - When declaring a function that accepts a pointer to a memory area and a counter or size for the area we should place them in a fixed order: the address first, followed by the counter. Additionally, `size_t` must be used as the type for the counter. 
```cpp void readData(const char* buffer, size_t bufferSize); ``` ```cpp void setNames(const char* names, size_t nameCount); void updateTag(const char* tag, size_t tagLength); ``` ## Rules for enum class and bit flags * We use `enum class` over `enum` to support namespaced values that do not collide. * Keep their names as simple and short-and-sweet as possible. * If you have an enum class as a subclass, then it should be declared inside the class directly before the constructor and destructor. * Here is a typical enum class definition: ```cpp class Camera { public: enum class Projection { ePerspective, eOrthographic }; Camera(); ~Camera(); }; ``` * The values are accessed like this: * `EnumName::eSomeValue` * Note that any sequential or non-sequential enumeration is acceptable - the only rule is that the type should never be able to hold the value of more than one enumeration literal at any time. An example of a type that violates this rule is a bit mask. Those should not be represented by an enum. Instead use constant integers (constexpr) and group them by a prefix. Also, in a `cpp` file you want them to also be `static`. Below we show an example of a bit mask and bit flags in Carbonite: ```cpp namespace carb { namespace graphics { constexpr uint32_t kColorMaskRed = 0x00000001; // static constexpr in .cpp constexpr uint32_t kColorMaskGreen = 0x00000002; constexpr uint32_t kColorMaskBlue = 0x00000004; constexpr uint32_t kColorMaskAlpha = 0x00000008; } namespace input { /** * Type used as an identifier for all subscriptions. */ typedef uint32_t SubscriptionId; /** * Defines possible press states. 
*/ typedef uint32_t ButtonFlags; constexpr uint32_t kButtonFlagNone = 0; constexpr uint32_t kButtonFlagTransitionUp = 1; constexpr uint32_t kButtonFlagStateUp = (1 << 1); constexpr uint32_t kButtonFlagTransitionDown = (1 << 2); constexpr uint32_t kButtonFlagStateDown = (1 << 3); } } ``` ## Rules for Pre-processors and Macros * It’s recommended to place preprocessor definitions in the source files instead of makefiles/compiler/project files. - Try to reduce the use of `#define` (e.g. for constants and small macro functions), and prefer `constexpr` values or functions when possible. - Definitions in the public global namespace must be prefixed with the namespace in uppercase: - Indent macros that are embedded within one another. - All `#define`s should be set to 0, 1 or some other value. Accessing an undefined macro in Carbonite is an error. - All checks for Carbonite macros should use `#if` and not `#ifdef` or `#if defined()` - Macros that are defined for all of Carbonite should be placed in carb/Defines.h - Transient macros that are only needed inside of a header file should be `#undef`ed at the end of the header file. - `CARB_POSIX` is set to `_POSIX_VERSION` on platforms that are mostly POSIX conformant, such as Linux and MacOS. `CARB_POSIX` is set to `0` on other platforms. Functions used in these blocks should be verified to actually follow the POSIX standard, rather than being common but non-standard (e.g. `ptrace`). Non-standard calls inside `CARB_POSIX` blocks should be wrapped in a nested platform check, such as `CARB_PLATFORM_LINUX`. - When adding `#if` pre-processor blocks to support multiple platforms, the block must end with an `#else` clause containing the `CARB_UNSUPPORTED_PLATFORM()` macro. An exception to this is when the `#else` block uses entirely C++ standard code; this sometimes happens in the case of platform-specific optimizations. 
This logic also applies to `#if` directives nested in an `#if` block for a standard, such as `CARB_POSIX`, where the `#else` block follows that platform. In other words, you may not make assumptions about what features future platforms may have, aside from what’s in the C++ standard; all platform-specific code must have the associated platform specifically stated.
- Macros that do not have universal appeal (i.e. are only intended to be used within a single header file) shall be prefixed with `CARBLOCAL_` and `#undef`’d at the end of the file.

## Porting to new platforms

This is the process to port Carbonite to new platforms with minimal disruption to development work. This is the process being used to port Carbonite to Mac OS.

- The initial commit to master is the minimal code to get the new platform to build. Code paths that cannot be shared with another platform will have a crash macro to mark where they are (the Mac OS port has `CARB_MACOS_UNIMPLEMENTED()`).

## CI Builds and Testing

- **Initial Commit**: Code should build and pass all tests on the current platform.
- **Crash Macros**: Each crash macro should have a comment with an associated ticket for fixing it. After this point, CI builds should be enabled.
- **Code Added After Initial Commit**: Code should still build for the new platform, but new code can use the crash macro if needed.
- **CI Testing**: CI testing is enabled on a subset of the tests once the framework is able to run on the new platform.
- **Full Support**: Once there are no remaining crash macros and all tests are enabled on CI, the new platform will be considered fully supported.

## Commenting - Header Files

- **Avoid Errors**: Avoid spelling and grammatical errors.
- **Consider Audience**: Assume customers will read comments. Err on the side of caution.
  - Cautionary tale: to ‘nuke’ poor implementation code is a fairly idiomatic usage for US coders. It can be highly offensive elsewhere.
- **License Banner**: Each source file should start with a comment banner for the license.
  - This should be strictly the first thing in the file.
- **Doxygen Format**: Header comments use doxygen format. We are not too sticky on doxygen formatting policy.
- **Document Public Elements**: All public functions and variables must be documented.
- **Detail Level**: The level of detail for the comment is based on the complexity of the API.
- **Clarity**: Most important is that comments are simple and have clarity on how to use the API.
- **@brief and @details**: `@brief` can be dropped and is automatically assumed on the first line of the comment. `@details` is dropped and is automatically assumed following the brief line.
- **@param and @return**: `@param` and `@return` follow, with a line of space after the summary brief or details.

```cpp
/**
 * Tests whether this bounding box intersects the specified bounding box.
 *
 * You would add any specific details that may be needed here. This is
 * only necessary if there is complexity to the user of the function.
 *
 * @param box The bounding box to test intersection with.
 * @return true if the specified bounding box intersects this bounding box;
 *         false otherwise.
 */
bool intersects(const BoundingBox& box) const;
```

- **Overridden Functions**: Overridden functions can simply refer to the base class comments.

  ```cpp
  class Bar : public Foo
  {
  protected:
      /**
       * @see Foo::render
       */
      void render(float elapsedTime) override;
  };
  ```

## Commenting - Source Files

- **Clean Code**: Clean, simple code is the best form of commenting.
- **Duplicate Comments**: Do not add comments above function definitions in .cpp files if they are already in the header.
- **Implementation Details**: Comment necessary non-obvious implementation details, not the API.
- **Line Comments**: Only use // line comments, on the line above the code you plan to comment.
- **Block Comments**: Avoid /* */ block comments inside implementation code (.cpp). This prevents others from easily doing their own block comments when testing, debugging, etc.
- **Identifier Comments**: Avoid explicitly referring to identifiers in comments, since that’s an easy way to make your comment outdated when an identifier is renamed.

## License

- **License Notice**: The following must be included at the start of every header and source file:

  ```cpp
  // Copyright (c) 2020 NVIDIA CORPORATION. All rights reserved.
  //
  // NVIDIA CORPORATION and its licensors retain all intellectual property
  // and proprietary rights in and to this software, related documentation
  // and any modifications thereto. Any use, reproduction, disclosure or
  // distribution of this software and related documentation without an express
  // license agreement from NVIDIA CORPORATION is strictly prohibited.
  //
  ```

## Formatting Code

- **Editor Settings**: You should set your Editor/IDE to follow the formatting guidelines.
- **.editorconfig**: This repository uses .editorconfig; take advantage of it.
- **Line Length**: Keep all code less than 120 characters per line.
- **Code Style**: We use a `.clang-format` file with clang-format (via `repo_format`) to keep our code auto-formatted.
- In some rare cases where code is manually formatted in a pleasing fashion, auto-formatting can be suspended with a comment block: ```cpp // clang-format off ... Manually formatted code // clang-format on ``` - Indentation - Insert **4 spaces** for each tab. We’ve gone back and forth on this but ultimately GitLab ruined our affair with tabs since it relies on default browser behavior for displaying tabs. Most browsers, including Chrome, are set to display each tab as 8 spaces. This made the code out of alignment when viewed in GitLab, where we perform our code reviews. That was the straw that broke the camel’s back. - The repository includes .editorconfig which automatically configures this setting for VisualStudio and many other popular editors. In most cases you won’t have to do a thing to comply with this rule. - Line Spacing - One line of space between function declarations in source and header. - One line after each class scope section in header. - Function call spacing: - No space before bracket. - No space just inside brackets. - One space after each comma separating parameters. ```cpp serializer->writeFloat("range", range, kLightRange); ``` - Conditional statement spacing: - One space after conditional keywords. - No space just inside the brackets. - One space separating commas, colons and condition comparison operators. ```cpp if (enumName.compare("carb::scenerendering::Light::Type") == 0) { switch (static_cast<Light::Type>(value)) { case Light::Type::eDirectional: return "eDirectional"; ... } } ``` - Don’t align blocks of variables or trailing comments to match spacing causing unnecessary code changes when new variables are introduced: ```cpp class Foo { ... private: bool m_very; // Formatting float3 m_annoying; // generates ray m_nooNoo; // spurious uint32_t m_dirtyBits; // diffs. 
};
```

- Align the indentation of parameters when wrapping lines to match the initial bracket:

  ```cpp
  Matrix::Matrix(float m11, float m12, float m13, float m14,
                 float m21, float m22, float m23, float m24,
                 float m31, float m32, float m33, float m34,
                 float m41, float m42, float m43, float m44)
  ```

  ```cpp
  return sqrt((point.x - sphere.center.x) * (point.x - sphere.center.x) +
              (point.y - sphere.center.y) * (point.y - sphere.center.y) +
              (point.z - sphere.center.z) * (point.z - sphere.center.z));
  ```

- Use a line of space within .cpp implementation functions to help organize blocks of code:

  ```cpp
  // Lookup device surface extensions
  ...
  ...

  // Create the platform surface connection
  ...
  ...
  ...
  ```

### Indentation

Indent the next line after all braces { }. Move code after braces { } to the next line. Always indent the next line of any condition statement line.

```cpp
if (box.isEmpty())
{
    return;
}
```

```cpp
for (size_t i = 0; i < count; ++i)
{
    if (distance(sphere, points[i]) > sphere.radius)
    {
        return false;
    }
}
```

**Never** leave conditional code statements on the same line as the condition test:

```cpp
if (box.isEmpty()) return;
```

### C++14 and Beyond Recommendations

Carbonite supports a minimum of C++14; therefore, public include files should not use any features from later standards. Carbonite includes some implementations of C++17 and later features that will build with C++14. These are located in directories that name the standard, such as `include/carb/cpp17`.

### Pointers and Smart Pointers

Use raw C/C++ pointers in the public interface (Plugin ABI). In other cases, prefer to use `std::unique_ptr` or `std::shared_ptr` to signal ownership, rather than using raw pointers. Use `std::shared_ptr` only when sharing is required. Any `delete` or `delete[]` call appearing in the code is a red flag and needs a good reason.

### Casts

- Casting between numeric types (integer types, float, etc.) or pointer-to/from-numeric ( `size_t(ptr)` ) may use C-style or functional-style casts (i.e.
`ptrdiff_t(val)`) for brevity. One may still use `static_cast` if desired.
- Except as mentioned above, avoid using C-style casts wherever possible. Note that `const_cast` can also be used to add/remove the `volatile` qualifier.
- For non-numeric types, prefer the explicit C++ named casts (`static_cast`, `const_cast`, `reinterpret_cast`) over C-style or functional casts. This allows the compiler to catch some errors.
- Using `dynamic_cast` requires RTTI (Run-Time Type Information), is slow, and happens at runtime. Avoid using `dynamic_cast` unless necessary.
- Use the narrowest cast possible. Only use `reinterpret_cast` if it is unavoidable. Note that `static_cast` can be used for `void*`:

```cpp
void* userData = ...;
MyClass* c = static_cast<MyClass*>(userData);
```

### Containers

- You are free to use the STL containers, but you can never allow them to cross the ABI boundary. That is, you cannot create them inside one plugin and have another plugin take over the object and be responsible for freeing it via the default C++ means. Instead, you must hide the STL object within an opaque data structure and expose create/destroy functions. If you violate this rule, you are forced to link the C++ runtime dynamically and the ABI breaks down. See the Architecture documentation and ABI Compatibility for more details.

### Characters and Strings

- All strings internally and in interfaces are of the same type: 8-bit `char`. This type should always be expected to hold a UTF-8 encoded string. This means that the first 7 bits map directly to ASCII and above that we have escaped multi-byte sequences. Please read Unicode to learn how to interact with OS and third-party APIs that cannot consume UTF-8 directly. If you need to enter a text string in code that contains characters outside 7-bit ASCII, then you must also read Unicode.
- For ABI-safe strings, you can use `omni::string`, a string class similar to `std::string`.
- You are free to use `std::string` inside modules, but we cannot expose STL string types in public interfaces (a violation of the Plugin ABI). Instead, use (`const`) `char` pointers. This does require some thought on lifetime management. Usually the character array can be associated with an object and the lifetime is directly tied to that object.
- Even though you can use STL strings and functionality inside your implementation, please consider first if what you want to do is easily achievable with the C runtime character functions. These are considerably faster and often lead to fewer lines of code. Additionally, the STL string functions can raise exceptions on non-terminal errors, and Carbonite plugins are built without exception support, so it will most likely just crash.

### Auto

- Avoid the use of `auto` where it will make code more difficult to read for developers who do not have an in-depth knowledge of the codebase. Reading someone else's code is harder than writing your own code, so code should be optimized for readability.
- `auto` should be used for generic code, such as templates and macros, where the type will differ based on invocation.
- `auto` may optionally be used for overly verbose types that have a standard usage, such as iterators.
- `auto` may optionally be used for types where the definition makes the type obvious, such as `auto a = std::unique_ptr<Q>(new (std::nothrow) Q)` or `auto&& lambda = [](Spline* s) -> Spline* { return s->reticulate(); }`.
- `auto` may optionally be used for trailing return types, such as `auto MyClass::MyFunction() -> decltype(myPrivateFunction()) { return myPrivateFunction(); }`.
- To avoid typing out types with overly verbose template arguments, it is preferable to define a new type with the `using` keyword rather than using `auto`.
  For types with a very broad scope, it is generally beneficial for readability to give a type a name that reflects its usage.
- Avoid having `auto` variables initialized from methods of other `auto` variables, since this makes the code much harder to follow.
- If you find yourself using tools to resolve the type of an `auto` variable, that variable should not be declared as `auto`.
- Be careful about accidental copy-by-value when you meant copy-by-reference.
- Understand the difference between `auto` and `auto&`.

### Lambdas

- Lambdas are acceptable where they make sense. Focus use around anonymity.
- Avoid overuse, but they are fine for standard algorithms (e.g. `std::sort`, etc.).
- For large lambdas, avoid the capture-alls `[=]` and `[&]`, and prefer explicit capture (by reference or by value, as needed).
- For asynchronous lambdas, such as those passed to `ITasking` functions, `std::thread`, or `std::async`, make sure to capture local variables by value instead of by reference or pointer, as they will have gone out of scope.

### Range-based loops

- They're great, use them.
- They don't necessarily have to be combined with `auto`.
- They are often more readable.
- For complex objects, make sure to use a reference (`&`) or forwarding reference (`&&`) if possible to avoid copying (especially when combined with `auto`).
- Use accurate variable naming, as this is more important than choosing between using `auto` or the type name:

```cpp
// BAD: Suggests `dev` might be of type Device even though it's a device ID.
// `auto` without reference means that a large object could be copied.
for (auto dev : devices)

// GOOD: More obvious that the iterator is a device index.
for (int dev : devices)

// BETTER: Even more obvious that the iterator is a device index.
for (int devId : devices)

// BEST: Blatantly obvious that the iterator and container are device indexes.
for (int devId : deviceIndexes)
// or
for (auto&& devId : deviceIndexes)
```

### Integer types

- We prefer to use the standard integer types as defined in the C++ standard.

### nullptr

- Use `nullptr` for any pointer types instead of `0` or `NULL`.

### friend

- Avoid using `friend` unless absolutely needed, to restrict access to inter-class interop only.
- It easily leads to difficult-to-untangle inter-dependencies that are hard to maintain.

### Use of anonymous namespaces

- Prefer anonymous namespaces to `static` free functions in `.cpp` files (`static` should be omitted).

### Templated functions

- Internal linkage is implied for non-specialized `template`d functions and for member functions defined inside a class declaration. You can additionally declare them `inline` to give the compiler a hint for inlining.
- Neither internal linkage nor `inline` is implied for fully specialized `template`d functions, and thus those follow the rules of non-`template`d functions (see below).

### static

- Declare non-interface non-member functions as `static` in `.cpp` files (or even better, include them in anonymous namespaces). `template`d free functions (specialized or not) in `.cpp` files also follow this rule.
- Declare non-interface non-member functions as `static inline` in `.cpp` files (or `inline` in anonymous namespaces) if you want to give the compiler a hint for inlining.
- Avoid `static` non-member functions in includes, as they will cause code to appear multiple times in different translation units.

### inline

- Declare non-interface non-member functions as `inline` in include files. Fully-specialized `template`d free functions also need to be declared `inline` (as neither `inline` nor internal linkage is implied).
- Avoid non-`static` `inline` non-member functions in `.cpp` files, as they can hide potential bugs (a different function with the same signature might get silently merged at link time).

### static_assert

- Use `static_assert` liberally, as it is a compile-time check and can be used to check assumptions at compile time. Failing the check will cause a compile error. Providing an expression that cannot be evaluated at compile time will also produce a compile error. It can be used within global, namespace, and block scopes, as well as within class declarations and function bodies.
- `static_assert` should be used to future-proof code.
- `static_assert` can be used to purposefully break code that must be maintained when assumptions change (an example of this would be to break code dependent on `enum` values when that `enum` changes).
- `static_assert` can also be used to verify that alignment and `sizeof(type)` match assumptions.
- `static_assert` can be used with C++ type traits (i.e. `std::is_standard_layout`, etc.) to notify future engineers of broken assumptions.
### Constant Strings

Suggested way of declaring string constants:

```cpp
// in a .h file
constexpr char mystring[] = "constant string";

// in a .cpp file
static constexpr char mystring[] = "constant string";

class A
{
    // inside a class:
    static constexpr const char* mystring = "constant string";
    // ^^^ do not use static constexpr members of which an address is required within a class before C++17, otherwise
    // link errors will occur.
};
```

> **NOTE**
> Prior to C++17, the use of `static constexpr` as a member within a `struct`/`class` may cause link problems if the
> address is taken of the member, unless the definition of the member is contained within a translation unit. This is
> not possible for header-only classes. Therefore, avoid using `static constexpr` members when the address is required
> of the member (i.e. passed via pointer or reference). Avoid using `static constexpr` string or character array members.
> With C++17, it is possible to declare static members as `inline`, and `inline` is implied for `static constexpr` members.
> However, Carbonite supports a minimum of C++14.

### Higher-level Concepts

#### Internal and Experimental Code

Functions, types, etc. that are inside of a `detail`/`details` namespace, or contain `detail[s]` or `internal` (in any capitalization) as part of their name, **should not be called by code that is not packaged with and built at the same time**. These functions, types, etc. should be considered private and may change at any time, therefore their existence should not be relied upon.

Similarly, any functions, types, etc. marked as **experimental** should be considered as such: the ABI may not be entirely stable and is subject to change in the future.

Carbonite uses the `detail` namespace (not `details`) to contain private/internal code. The `internal` name may also be used for function and type names.

#### Thread Safety

Writing thread-safe code is **very difficult**.
Introducing asynchronicity and threads means that code will execute non-deterministically and have potentially exponential variations.

- Use `std::atomic` types sparingly and with caution! These atomic types can ensure low-level synchronization, but also lead to a false sense of thread safety. Race conditions are still quite possible with `std::atomic`. Consider higher-level synchronization primitives instead.
- Avoid explicitly specifying `std::memory_order` on uses of `std::atomic` functions, except as described below. The default memory order is also the safest: `std::memory_order_seq_cst` (sequentially consistent). Specifying an incorrect memory order, especially on weakly-ordered machines such as ARM, can lead to unexpected and extremely-difficult-to-track-down errors.
  - The `std::memory_order_seq_cst` memory order can always be specified explicitly to indicate where sequential consistency is required.
  - For performance-intensive code, at least one (but ideally two) Senior or Principal engineers must sign off on more weakly-ordered uses of specified memory orders.
  - All explicit uses of memory order should be commented.
- `volatile` is **not** a synchronization primitive! It makes no guarantees about atomicity, visibility, or ordering. Do not use `volatile` as a synchronization primitive! Much more control and all of the necessary guarantees are given by `std::atomic`.
- Avoid global and function-local `static` variables which may be modified by several threads.
- Avoid busy-waiting (spinning while waiting for a condition). This includes loops that call `std::this_thread::yield()` or sleep for a brief period of time. Properly architected synchronization code will block in the operating system while waiting for a condition to be met.
- Many containers and library functions are **not** thread-safe. Be sure to check the documentation, and assume that everything is not thread-safe unless explicitly stated.
- Carbonite includes two containers that are not only thread-safe but are generally high-performance and wait-free: `carb::container::LocklessQueue` and `carb::container::LocklessStack`.
- Use the right synchronization primitive for the job:
  - `std::call_once` executes a callable exactly once, even if called concurrently from several threads. All threads wait until the callable completes execution.
  - So-called "magic" `static` variables (function-local static initialization) are guaranteed to be thread-safe as of C++11. That is, they execute in the same manner as `std::call_once`, ensuring that construction happens exactly once and all threads wait until the construction is finished.
    - Keep in mind that only the static initialization is guaranteed to be thread-safe. The initializer can be a function return value, in which case the function is called in a thread-safe manner.
  - A **Mutex** is one of the most common synchronization primitives for **mutual exclusion** and can be used to protect memory reads and writes in critical sections of code. See `carb::thread::mutex` and `carb::thread::recursive_mutex`. Since only one thread may have a mutex locked, all other threads must stop and wait in order to gain exclusive access.
  - A **Shared Mutex** (sometimes called a read/write mutex) is similar to a Mutex but can be accessed in either of two modes: shared (read) mode or exclusive (write) mode. A thread which has exclusive access causes all other threads to wait. A thread which has shared access allows other threads to also obtain shared access, but causes any threads seeking exclusive access to wait.
    See `carb::thread::shared_mutex` and `carb::thread::recursive_shared_mutex`.
  - A **Condition Variable** can be used to signal threads: threads wait until a condition is true, at which point they are signaled. Condition Variables work in concert with a Mutex. See `std::condition_variable` or `std::condition_variable_any`.
  - A **Semaphore** is a thread-safe counter that controls access to limited resources. See `carb::cpp::binary_semaphore` and `carb::cpp::counting_semaphore`.
  - A **Latch** is a one-shot gate that opens once a certain number of threads wait at the gate. See `carb::cpp::latch`.
  - A **Barrier** is similar to a Latch, but operates in phases as opposed to being one-shot. See `carb::cpp::barrier`.
  - A **Future** and a **Promise** create a thread-safe one-way synchronization channel for passing results of asynchronous operations. See `std::future` and `std::promise`. Note that `carb.tasking` has its own versions that are fiber-aware: `carb::tasking::Promise` and `carb::tasking::Future`.
  - A **Spin Lock** is a primitive similar to a Mutex that waits by busy-waiting, refusing to give up the thread's assigned CPU under the guise of resuming as quickly as possible. Spin Locks are not recommended for use in user-level code (kernel or driver code only) and are generally less performant than a Mutex due to increased contention.
- When using `carb.tasking`'s `carb::tasking::ITasking` interface to launch tasks, the tasks should use the synchronization primitives provided by the `ITasking` interface (i.e. `carb::tasking::Mutex`, `carb::tasking::Semaphore`, etc.).
- Avoid conditions that may cause a deadlock. A deadlock occurs when one thread has locked Mutex A followed by Mutex B, while another thread has locked Mutex B and wants to lock Mutex A. Each thread owns a resource desired by the other thread and neither will give it up. Therefore, both threads are stuck.
  A simple way to solve this problem is to always lock Mutexes in the same order (always A followed by B), but this problem can be much more complicated. `std::lock` (C++11) and `std::scoped_lock` (C++17) provide the ability to lock multiple locks with deadlock-avoidance heuristics.
- Make use of tools that can help visualize and diagnose threading issues:
  - Visual Studio has several tools, such as **Parallel Stacks** and the **Parallel Watch** window.
  - On Linux, Valgrind and Thread Sanitizer may be of use.
  - The `carb.tasking.plugin` has a robust debug visualizer for Visual Studio.
- When writing C++ functions called from Python, release the Global Interpreter Lock (GIL) as soon as possible.
  - When using Pybind, this can be accomplished through the `py::gil_scoped_release` RAII class.
- Consider using `carb.tasking` and thinking in terms of tasks or co-routines. See the main article here.

### Testing

Main article: Testing

### Assertions

Compile-time assertions (using `static_assert`) should be preferred. Carbonite offers three kinds of runtime assertions:

- `CARB_ASSERT` should be used for non-performance-intensive code and code that is commonly run in debug. It compiles to a no-op in optimized builds.
- `CARB_CHECK` is similar to `CARB_ASSERT` but also occurs in optimized builds.
- `CARB_FATAL_UNLESS` performs a similar check to `CARB_CHECK` and `CARB_ASSERT`, but calls `std::terminate()` after notifying the assertion handler.

### Callbacks

Carbonite often runs in a multi-threaded environment, so clear documentation of (and conformance to) how callbacks operate is required.

- Basic Callback Hygiene in Carbonite is as follows:
  - Callback un-registration may occur from within the callback.
  - Un-registration of a callback must ensure that the callback will never be called again, and that any calls to the callback in other threads are complete.
  - Holding locks while calling a callback is strongly discouraged, and generally will require that they be recursive locks, as a callback function may re-enter the system.

### Exceptions

Exceptions may not cross the ABI boundary of a Carbonite plugin, because that would require all users to dynamically link to the same C++ runtime as your plugin to operate safely. Functions in a Carbonite interface should be marked `noexcept`, as they are not allowed to throw exceptions. If an exception is not handled in a function marked `noexcept`, `std::terminate` will be called, which will prevent the exception from escaping the ABI boundary.

It is also helpful to mark internal functions as `noexcept` when they're known not to throw exceptions (especially if you are building with exceptions enabled).

Callback functions passed into Carbonite interfaces should be marked as `noexcept`. Callback types cannot be marked as `noexcept` until C++17, so this cannot be enforced by the compiler. Other entry points into a Carbonite plugin, such as `carbOnPluginStartup()`, must be marked as `noexcept` as well.

The behavior of `noexcept` described above will only occur in code built with exceptions enabled. Code must be built with exceptions enabled unless exceptions will not occur under any circumstances. When using libraries that can throw exceptions (for example, the STL throws exceptions on GCC even when building with `-fno-exceptions`), ensure that your exceptions either are handled gracefully or cause the process to exit before the exception can cross the ABI boundary. See the section on error handling for guidelines on how to choose between these two options.

Python bindings all need to be built against the same shared C++ runtime, because pybind11 passes C++ objects in a manner that is not ABI-safe (this is why they are distributed as headers). Python will also catch exceptions, so exceptions aren't fatal when they're thrown in a Python binding.
Because of this, exceptions are acceptable to use in Python bindings as a method of error handling. Because pybind11 can throw exceptions, callbacks into Python must call through `callPythonCodeSafe()` or wrap the callback with `wrapPythonCallback()` (this also ensures the Python GIL is locked).

### Error handling

Errors that can realistically happen under normal circumstances should always be handled. For example, in almost all cases, you should check whether a file opened successfully before trying to read from it.

Errors that won't realistically happen, or are difficult to recover from (like failed allocation of a 64-byte struct), don't need to be handled. You must, however, ensure that the application will terminate in a predictable manner if such an error occurs. A failed `CARB_FATAL_UNLESS` statement is a good way to terminate the application in a reliable way. Allowing an exception to reach the end of a `noexcept` function is another way to terminate the application in a predictable manner.

Large allocations, or allocations where the size is potentially unbounded (e.g. if the size has been specified by code calling into your plugin), should be considered cases where a memory allocation failure could realistically occur. This should be handled if it is possible; for example, decoding audio or a texture can easily fail for many reasons, so an allocation failure can be reasonably handled. A more complex case, like allocating a stack for a fiber, may be unrealistic to handle, so crashing is acceptable.

### Logging

Log messages should be descriptive enough that the reader does not need to be looking at the code that printed them to understand them. For example, a log message that prints "7" instead of "calculated volume 7" is not acceptable.

Strings that are printed in log messages should be wrapped in some form of delimiter, such as `'%s'`, so that it is obvious in log messages if the string was empty.
Delimiters may be omitted if the printed string is a compile-time constant string, or the printed string is already guaranteed to have its own delimiters.

```cpp
CARB_LOG_WARN("failed to make '%s' relative to '%s'", baseName, path);
```

Unexpected errors from system library functions should always be logged, preferably as an error. Some examples of unexpected errors would be: memory allocation failure, failing to read from a file for a reason other than reaching the end of the file, or `GetModuleHandle(nullptr)` failing. It is important to log these because this type of thing failing silently can lead to bugs that are very difficult to track down. If a crash handler is bound, immediately crashing after the failure is an acceptable way to log the crash. `CARB_FATAL_UNLESS` is also a good way to terminate an application while logging what the error condition was.

Please use portable formatting strings when you print the values of expressions or variables. The format string is composed of zero or more directives: ordinary characters (not `%`), which are copied unchanged to the output stream, and conversion specifications, each of which results in fetching zero or more subsequent arguments. Each conversion specification is introduced by the character `%` and ends with a **conversion specifier**. In between there may be zero or more **flag characters**, an optional minimum **field width**, an optional **precision**, and an optional **size modifier**.

| flag characters | description |
|-----------------|-------------|
| `#` | The value should be converted in "alternative form". For `o` conversions, the first character of the output string is made zero (by prefixing a `0` if it was not zero already). For `x` and `X` conversions, a nonzero result is prefixed by the string `0x` (or `0X`). For `f` and `e` conversions, the result will always contain a decimal point, even if no digits follow it. For `g` conversions, trailing zeros are not removed from the result. |
| `0` | The value should be zero padded. If the `0` and `-` flags both appear, the `0` flag is ignored. If a precision is given with a numeric conversion (`d`, `u`, `o`, `x`, `X`), the `0` flag is ignored. |
| `-` | The converted value is to be left adjusted on the field boundary. The converted value is padded on the right with blanks, rather than on the left with blanks or zeros. |
| (space) | A blank should be left before a positive number (or empty string) produced by a signed conversion. |
| `+` | A sign (`+` or `-`) should be placed before a number produced by a signed conversion. |

| size modifier | description |
|---------------|-------------|
| `hh` | A following integer conversion corresponds to a `signed char` or `unsigned char` argument. |
| `h` | A following integer conversion corresponds to a `short int` or `unsigned short int` argument. |
| `l` | A following integer conversion corresponds to a `long int` or `unsigned long int` argument. |
| `ll` | A following integer conversion corresponds to a `long long int` or `unsigned long long int` argument. |
| `j` | A following integer conversion corresponds to an `intmax_t` or `uintmax_t` argument. |
| `z` | A following integer conversion corresponds to a `size_t` or `ssize_t` argument. |
| `t` | A following integer conversion corresponds to a `ptrdiff_t` argument. |

| conversion specifier | description |
|----------------------|-------------|
| `d` | The integer argument is converted to signed decimal notation. |
| `u` | The integer argument is converted to unsigned decimal notation. |
| `o` | The integer argument is converted to unsigned octal notation. |

| Macro | Description |
|-------|-------------|
| `PRId16`, `PRIu16`, `PRIo16`, `PRIx16`, `PRIX16` | The integer argument is converted in `d`, `u`, `o`, `x`, `X` notation correspondingly and has `int16_t` or `uint16_t` type. |
| `PRId32`, `PRIu32`, `PRIo32`, `PRIx32`, `PRIX32` | The integer argument is converted in `d`, `u`, `o`, `x`, `X` notation correspondingly and has `int32_t` or `uint32_t` type. |
| `PRId64`, `PRIu64`, `PRIo64`, `PRIx64`, `PRIX64` | The integer argument is converted in `d`, `u`, `o`, `x`, `X` notation correspondingly and has `int64_t` or `uint64_t` type. |

Example:

```cpp
int x = 2, y = 3;
unsigned long long z = 25ULL;
size_t s = sizeof(z);
ptrdiff_t d = &y - &x;
uint32_t r = 32;

CARB_LOG_WARN("x = %d, y = %d, z = %llu", x, y, z);
CARB_LOG_INFO("sizeof(z) = %zu", s);
CARB_LOG_DEBUG("&y - &x = %td", d);
CARB_LOG_INFO("r = %" PRIu32, r);
```

Please note that Windows-family OSes, contrary to the Unix family, use **fixed-size** types in their API to provide binary compatibility without providing any OS sources. Please use the portable macros to make your code portable across different hardware platforms and compilers.

| Windows type | compatible fixed-size type | portable format string |
|--------------|----------------------------|------------------------|
| `BYTE` | `uint8_t` | `"%" PRId8`, `"%" PRIu8`, `"%" PRIo8`, `"%" PRIx8`, `"%" PRIX8` |
| `WORD` | `uint16_t` | `"%" PRId16`, `"%" PRIu16`, `"%" PRIo16`, `"%" PRIx16`, `"%" PRIX16` |
| `DWORD` | `uint32_t` | `"%" PRId32`, `"%" PRIu32`, `"%" PRIo32`, `"%" PRIx32`, `"%" PRIX32` |
| `QWORD` | `uint64_t` | `"%" PRId64`, `"%" PRIu64`, `"%" PRIo64`, `"%" PRIx64`, `"%" PRIX64` |

Example:

```cpp
DWORD rc = GetLastError();
if (rc != ERROR_SUCCESS)
{
    CARB_LOG_ERROR("Operation failed with error code %#" PRIx32, rc);
    return rc;
}
```

## Debugging Functionality

When adding code that is to run or exist only in debug builds, it should be wrapped in an `#if CARB_DEBUG` block. This symbol is defined for all C++ translation units on all platforms and is set to `1` and `0` for the debug and release configurations respectively in the *carb/Defines.h* file. Thus, this header file must be included before checking the value of `CARB_DEBUG`. The `CARB_DEBUG` macro should be preferred over other macros such as `NDEBUG`, `_DEBUG`, etc.
The preferred method of enabling or disabling debug code that is purely internal to Carbonite is to check `CARB_DEBUG`. Do not check `CARB_DEBUG` with `#ifdef` or `#if defined()`, as it is defined in both release and debug builds.

## Batch Coding Conventions

Please consult David Sullins's guide when writing Windows batch files.

## Bash Coding Conventions

- Bash scripts should be run through shellcheck and pass with 0 warnings (excluding spurious warnings that occur due to edge cases). shellcheck can save you from a wide variety of common bash bugs and typos. For example:

```bash
In scratch/bad.sh line 2:
rm -rf /usr /share/directory/to/delete
       ^-- SC2114: Warning: deletes a system directory. Use 'rm --' to disable this message.
```

- Bash scripts should run with `set -e` and `set -o pipefail` to immediately exit when an unhandled command error occurs. You can explicitly ignore a command failure by appending `|| true`.
- Bash scripts should be run with `set -u` to avoid unexpected issues when variables are unexpectedly unset. A classic example where this is useful is a command such as `rm -rf "$DIRECTORY"/*`; if `DIRECTORY` were unexpectedly undefined, `set -u` would terminate the script instead of destroying your system. If you still want to expand a potentially undefined variable, you can use a default substitution value `${POSSIBLY_DEFINED-$DEFAULT_VALUE}`. If `$POSSIBLY_DEFINED` is defined, it will expand to that value; if not, it will expand to `$DEFAULT_VALUE`. The default value can be empty (`${POSSIBLY_DEFINED-}`), which gives behavior identical to the default variable expansion in bash without `set -u`. You can also use `:-` instead of `-` (e.g. `${POSSIBLY_DEFINED:-}`) and empty variables will be treated the same as undefined variables.
- **Ensure that variables are not empty before using them.** For a stronger guarantee that a command such as `rm -rf "$DIRECTORY"/*` will not be dangerous, expand the variable like this: `rm -rf "${DIRECTORY:?}"/*`, which will terminate the script if the variable evaluates to an empty string.
- **Use arrays to avoid word splitting.** A classic example is something like:

```bash
rm $BUILDDIR/*.o
```

  This will not work on paths with spaces, and shellcheck will warn about this. You can instead use an array so that each file will be passed as a separate argument:

```bash
FILES=("$BUILDDIR"/*.o)
rm "${FILES[@]}"
```

- **Set nullglob mode when using wildcards:**

```bash
FILES=(*.c)        # if there are no .c files, "*.c" will be in the array

shopt -s nullglob  # set nullglob mode
FILES=(*.c)        # if there are no .c files, the array will be empty
shopt -u nullglob  # unset nullglob mode - things will break if you forget this
```

  Alternatively, use `failglob` to have the command fail out if the glob doesn't match anything.
- Bash scripts should use the following shebang:

```bash
#!/usr/bin/env bash
```

  This is somewhat more portable than:

```bash
#!/bin/bash
```
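Putting the recommendations above together, a minimal script skeleton might look like the following. The `BUILDDIR` path and the `.o` cleanup task are illustrative only:

```shell
#!/usr/bin/env bash
# Fail fast on command errors, unset variables, and pipeline failures.
set -eu
set -o pipefail

# Illustrative directory; ${BUILDDIR:?} aborts the script if it is ever
# empty, so the rm below can never expand to a bare "rm -rf /*".
BUILDDIR="_build/obj"
mkdir -p "${BUILDDIR:?}"

# Create a couple of files to clean up, purely for demonstration.
printf 'x\n' > "$BUILDDIR/a.o"
printf 'y\n' > "$BUILDDIR/b.o"

# Arrays avoid word splitting: each matched file becomes one argument,
# and nullglob makes a non-matching glob expand to nothing.
shopt -s nullglob
FILES=("$BUILDDIR"/*.o)
shopt -u nullglob

if [ "${#FILES[@]}" -gt 0 ]; then
    rm -- "${FILES[@]}"
fi
echo "removed ${#FILES[@]} object files"
# prints: removed 2 object files
```

Running the script under `shellcheck` should produce no warnings, which is a quick sanity check that the quoting and expansions above are safe.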
commands.md
# Commands Like other extensions, the OmniGraph extensions expose undoable functionality through some basic commands. A lot of the functionality of the commands can be accessed from the `og.Controller` object, described above. OmniGraph has created a shortcut to allow more natural expression of command execution. The raw method of executing a command is something like this: ```python import omni.graph.core as og graph = og.get_graph_by_path("/World/PushGraph") omni.kit.commands.execute("CreateNode", graph=graph, node_path="/World/PushGraph/TestSingleton", node_type="omni.graph.examples.python.TestSingleton", create_usd=True) ``` The abbreviated method, using the constructed `cmds` object looks like this: ```python import omni.graph.core as og graph = og.get_graph_by_path("/World/PushGraph") og.cmds.CreateNode(graph=graph, node_path="/World/PushGraph/TestSingleton", node_type="omni.graph.examples.python.TestSingleton", create_usd=True) ``` However for most operations you would use the controller class, which is a single line: ```python import omni.graph.core as og og.Controller.edit("/World/PushGraph", { og.Controller.Keys.CREATE_NODES: ("TestSingleton", "omni.graph.examples.python.TestSingleton") }) ``` ### Tip Running the Python command `help(omni.graph.core.cmds)` in the script editor will give you a description of all available commands.
compatibility_index.md
# NVIDIA RTX Remix

## Introduction

RTX Remix is a modding platform for remastering a catalog of fixed-function DirectX 8 and 9 games with cutting-edge graphics. With NVIDIA RTX Remix, experienced modders can upgrade textures with AI, easily replace game assets with high-fidelity assets built with physically based (PBR) materials, and inject RTX ray tracing, DLSS, and Reflex technologies into the game. It’s like giving your old games a makeover with gorgeous, modern-looking graphical mods.

Remix consists of two components. The first is the RTX Remix Application (also known as the Toolkit), which is used for creating lights, revamping textures with AI, and adding remastered assets into a game scene that you’ve made with your favorite DCC tool. The second is the RTX Remix Runtime, which helps you capture classic game scenes to bring into the RTX Remix Application to begin your mod. The runtime is also responsible for making your mod “work” when a gamer is playing it: in real time, it replaces any old asset with the remastered assets you’ve added to the game scene, and relights the game with path tracing at playback.

With the release of the RTX Remix Application in Open Beta, the full power of RTX Remix is now in the hands of modders to make next-level RTX mods.

## How Does It Work

You don’t need to be a computer expert to use RTX Remix. It does most of the hard work for you. But it helps to know a bit about how it works.

RTX Remix has two main parts: the runtime, which attaches to the game while it is being played, and the toolkit, which is used to edit assets for the game offline (without needing to have the game running).

The runtime itself has two components, the Remix Bridge and the Renderer. The Bridge is like a middleman. It sits next to the game and listens to what the game wants to do.
It then sends this information to another program called NvRemixBridge.exe, a special program that allows the original game’s renderer to operate in 64-bit, letting the game use more of the system’s memory than is available in 32-bit (which most classic games are limited to). Because of this, we can use ray tracing to render high-resolution textures and meshes.

The Bridge acts as the messenger: it sends all the game instructions to another part called the RTX Remix Renderer. This Renderer is a super powerful graphics engine. It takes all the things the game wants to draw, like characters and objects, but does so using a powerful real-time path-tracing engine.

The renderer also knows how to swap out the old game assets with new and improved things from an RTX Remix mod that you put in a special folder. It keeps track of what’s what using special codes (hash IDs), so it knows what to change in the game as you play.

Finally, using the RTX Remix Toolkit, you are able to easily make and add new game objects, materials, and lights. And since it’s built on the NVIDIA Omniverse ecosystem, you’ll have lots of cool tools to make your game look even better.

## Requirements

### Technical Requirements

RTX Remix and its mods are built to run on RTX-powered machines. For ideal performance, we recommend using a GeForce RTX™ 4070 or higher. For the latest drivers, visit NVIDIA Driver Downloads. For Quadro, select ‘Quadro New Feature Driver (QNF)’.
## System Requirements

| Level | Operating System | CPU | CPU Cores | RAM | GPU | VRAM | Disk |
|-------|------------------|-----|-----------|-----|-----|------|------|
| Min | Windows 10/11 | Intel i7 or AMD Ryzen 7 | 4 | 16 GB | GeForce RTX 3060 Ti | 8 GB | 512 GB SSD |
| Rec | Windows 10/11 | Intel i7 or AMD Ryzen 7 | 8 | 32 GB | GeForce RTX 4070 | 12 GB | 512 GB M.2 SSD |

## Recommendations

We recommend that you review the Omniverse Technical Requirement Documentation for further details on what is required to use Applications within the Omniverse Platform.

## Requirements For Modders

- Windows 10 or 11
- NVIDIA Omniverse

## RTX Remix Runtime Requirements for Developers

- Windows 10 or 11
- Visual Studio (VS 2019 or newer)
- Windows SDK and emulator (10.0.19041.0 or newer)
- Meson (v0.61.4 or newer)
  - Please note that v1.2.0 does not work (missing library)
  - Follow these instructions on how to install, and reboot the PC before proceeding
- Vulkan SDK (1.3.211.0 or newer)
  - Please note that you may need to uninstall the previous SDK if you have an old version
- Python (version 3.9 or newer)

## Compatibility

The RTX Remix Runtime primarily targets DirectX 8 and 9 games with a fixed function pipeline for compatibility. Injecting the Remix runtime into other content is unlikely to work. It is important to state that even amongst DX8/9 games with fixed function pipelines, there is diversity in how they utilize certain shader techniques or handle rendering. As a result, there are crashes and unexpected rendering scenarios that require improvements to the RTX Remix Runtime for content to work perfectly. It is our goal to work in parallel with the community to identify these errors and improve the runtime to widen compatibility with as many DX8 and 9 fixed function games as possible.

As Remix development continues, we will be adding revisions to the RTX Remix Runtime that will expand compatibility for more and more titles.
Some of those solutions will be code contributions submitted by our talented developer community, which we will receive on our GitHub as pull requests and integrate into the main RTX Remix Runtime. RTX Remix is a first-of-its-kind modding platform for reimagining a diverse set of classic games with the same workflow, but it’s going to take some investigation and work to achieve that broad compatibility.

### Defining Compatibility

Games are ‘compatible’ if the majority of their draw calls can be intercepted by Remix. That doesn’t mean there won’t currently be crashes or other bugs that prevent a specific game from launching. If the game crashes but the content is compatible, then fixing the crash means the game can be remastered. If the game’s content isn’t compatible, then fixing the crash won’t really achieve anything.

This also doesn’t mean that everything in the game will be Remix compatible - often specific effects will either need to be replaced using the existing replacements flow, or will need some kind of custom support added to the runtime.

### Fixed Function Pipelines

Remix functions by intercepting the data the game sends to the GPU, recreating the game’s scene based on that data, and then path tracing that recreated scene. With a fixed function graphics pipeline, the game is just sending textures and meshes to the GPU, using standardized data formats. It’s reasonable (though not easy) to recreate a scene from this standardized data.

Part of why RTX Remix targets DX8 and 9 titles with fixed function pipelines is because later games utilize shader graphics pipelines, where the game can send the data in any format, and the color of a given surface isn’t determined until it is actually drawn on the screen. This makes it very difficult for RTX Remix to recreate the scene - which, amongst other problems, causes the game to be incompatible.
The transition from 100% fixed function to 100% shader was gradual - most early DirectX 9.0 games only used shaders for particularly tricky cases, while later DirectX 9.0 games (like most made with 9.0c) may not use the fixed function pipeline at all. Applying Remix to a game using a mix of techniques will likely result in the fixed function objects showing up, and the shader-dependent objects either looking wrong or not showing up at all.

## Vertex Shader Capture

We have some experimental code to handle very simple vertex shaders, which will enable some objects that would otherwise fail. Currently, though, this is very limited. See the ‘Vertex Shader Capture’ option in ‘Game Setup -> Parameters’.

## DirectX Versions

Remix functions as a DirectX 9 replacer, and by itself cannot interact with OpenGL or DirectX 7, 8, etc. However, there exist various wrapper libraries which can translate from early OpenGL or DirectX 8 to fixed function DirectX 9. While multiple translation layers introduce even more opportunities for bugs, these have been used effectively to get Remix working with several games that are not DirectX 9. We are not currently aware of any wrapper libraries for DirectX 7 to fixed function DirectX 9, but in theory such a wrapper could be created to extend RTX Remix compatibility further.

## ModDB Compatibility Table

ModDB’s community has banded together to make modding with RTX Remix even easier. You can visit the ModDB website and see a community-maintained compatibility table, which indicates every game the mod community has found to currently work with RTX Remix. It also specifies the last RTX Remix runtime that was tested with any given game, and provides config files (called “rtx.conf” files) that make any compatible game work with RTX Remix out of the box. Take a look, and be sure to contribute and update the table if you make any discoveries of your own.
## Rules of Thumb

The following quick checks can help you quickly narrow down how likely a game is to be compatible, even before you try to run RTX Remix.

### Publish Date

The best “at a glance” way to guess if a game is compatible is to look at the publish date. Games released between 2000 and 2005 are most likely to work. Games after 2010 are almost certainly not going to work (unless they are modified to support fixed function pipelines).

### Graphics API Version

DirectX 8 and DirectX 9.0 games will probably be fixed function, and thus feasible. DirectX 9.0c games are usually mostly shader based, so they probably won’t work.

### Supported GPU

The NVIDIA GeForce 2 graphics card was the last card to be fixed function only, so if the game could run on that card, it’s probably fixed function. Note that many games supported fixed function when they were released, but removed that support in later updates.

### Testing the Content

It’s actually possible to tell dxvk to dump out any shaders used by the game by adding these settings to your environment variables:

```bash
DXVK_SHADER_DUMP_PATH=/some/path
DXVK_LOG_LEVEL=debug
```

If that dumps out a few shaders, then the content may mostly be Remix compatible. If it dumps out a lot of shaders, then the game probably won’t be workable.

### So Is My Game Content Being Processed by Remix?

Here is an alternate, more definitive way to check whether Remix is processing the game’s content:

1. Open the developer menu
2. Click Enable Debug View
3. Change the dropdown below it to Geometry Hash

If it looks anything like the image below, then the content is probably remixable. If objects have a stable color, those objects are probably replaceable (once the tool comes out). If a mesh’s color changes when you’re in that view, that means the mesh won’t be reliably replaceable using the current settings - though there may be workarounds with different configurations. If nothing changes, the game’s content isn’t going through Remix at all.
Try lowering the graphics settings as far as they will go, playing with the shader model, or whatever other tricks you can to try to force the game into a fixed function fallback mode.

Regarding the geometry hash mode above: dynamic meshes are expected to change color every frame - things like particle effects and maybe animated meshes. Animated meshes may flicker, depending on how the game does skinning:

- Software animation (skinning applied on the CPU) - this will flicker
- Hardware animation (skinning applied on the GPU) - this should be stable

Some games support both based on some config value, so you may be able to force a game into hardware animation. Remix still can’t actually replace an animated mesh, but that’s relatively straightforward to do if the mesh is GPU skinned; it is on our roadmap to address in the future. We have ideas to also enable CPU skinned meshes, but that’s going to be a big experiment. It is a more speculative feature, and we will be investigating it sometime in the future.

### Why are Shaders Hard to Path Trace?

NOTE: This is simplified and meant for someone with no knowledge of computer graphics.

**What is a fixed function pipeline?**

Imagine you’re making a little shoebox diorama, and you want the background to look like a brick wall. So you print out a picture of a brick wall and glue it on the back of the shoebox. Simple, easy, works great. This is basically what fixed function does - surface + texture, slap it on screen, done.

**What is a shader?**

What if you want to make it fancier? What if you wanted more artistic freedom to change the back of your box? Well, you could slap a tablet back there, and just display whatever you want on the screen. You could even write a little program that detects where someone is looking at the box from, and changes what is on the tablet’s screen based on the viewing angle.
This is basically what shaders do - they get passed a bunch of arbitrary data from the app, are told the camera position, and are asked what color a tiny piece of an object is supposed to be. Until the pixel shader runs for that tiny piece of that object, for that specific camera position, that object doesn’t actually have a color assigned to it. The shader has to compute what color it should be. It also doesn’t actually output the raw color - it includes lighting and whatever else the game is doing.

That just describes pixel shaders though. Vertex shaders let that tablet change shape however it wants… and I think the metaphor starts to fall apart at this point.

**So why are shaders a problem?**

First off, shaders don’t require a standardized description of the scene (positions of surfaces, cameras, lights, etc). Remix needs that information to reconstruct the scene for path tracing, and there’s no standard way to extract that information that works across every game. It can be done on a per game basis, but it’s a decent chunk of work for each game.

Secondly, we need to know the color (and other material properties) of every surface - without any lighting or shading interfering. With pixel shaders, there’s no straightforward way to get that - even if we could run the shader for every surface, it wouldn’t be outputting the raw color data we need. This may be solvable with automatic shader processing, or letting modders write new ray-hit shaders to replace the functionality of the game’s pixel shaders, but we’ll need to do more experimentation to know what approach will actually work.

Thirdly, there are the vertex shaders - but fortunately, we’ve already got an experimental solution that handles most vertex shaders.

Once Remix is more stable and fleshed out, it may be possible to remaster shader based games. I’ve seen the modding community succeed at equally complicated projects, so I’m not going to rule that out.
But I don’t think it’s worth even starting those projects yet - we should focus on the games that are actually a good fit first, build out and stabilize the tech for those, and get some remasters out the door.
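The contrast described above can be caricatured in a few lines of code. This is a toy illustration only (no real graphics API; the fog math and field names are invented): a fixed-function draw call is inspectable data, while a shader-based draw call hides the final color inside an arbitrary program.

```python
# A fixed-function draw call is plain data: mesh + texture, in a standard
# layout, so an interceptor like Remix can read exactly what is being drawn.
fixed_function_draw = {
    "mesh": [(0, 0, 0), (1, 0, 0), (0, 1, 0)],
    "texture": "brick_wall.png",
}
print(fixed_function_draw["texture"])
# -> brick_wall.png

# A shader-based draw call instead carries an opaque program: the final color
# of a surface does not exist until the function runs, and the result already
# has game-specific effects (here, a made-up fog term) baked in.
def pixel_shader(surface_pos, camera_pos, game_specific_data):
    fog = abs(surface_pos[2] - camera_pos[2]) * game_specific_data["fog_density"]
    return (0.8 - fog, 0.2, 0.2)

print(pixel_shader((0, 0, 5), (0, 0, 0), {"fog_density": 0.01}))
```

There is no standard way to recover the raw “brick wall” color from the second form, which is exactly the problem described above.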
CompoundNodes.md
# Compound Nodes

Compound nodes are nodes whose execution is defined by the evaluation of an `OmniGraph`. Compound nodes can be used to encapsulate complex functionality into a single node, or to create a node that can be reused. The existing release of OmniGraph supports Compound Subgraphs, which allow the user to collapse a subnetwork of nodes into a separate `OmniGraph` that is represented by a node in the owning graph.

## USD Representation of Compound Nodes

Similar to non-compound nodes, Compound Nodes are represented in USD using a prim with the schema type `OmniGraphSchema.OmniGraphNode`. A Compound Node also contains attributes and a node type. When a Compound Node represents a Compound Subgraph, the node type is fixed to `omni.graph.nodes.CompoundSubgraph`.

For the prim to be recognized by OmniGraph as a compound, a USD schemaAPI is applied: `OmniGraphSchema.OmniGraphCompoundNodeAPI`. This allows the compound node to inherit an attribute representing the compound node type (currently only “subgraph” is supported), and a USD relationship attribute (`omni:graph:compoundGraph`) that references the location of the graph prim on the stage that represents the definition of the Compound Node’s execution.

The prim representing the graph is a standard OmniGraph prim, inserted as a child of the Compound Node. Connections are made between the Compound Node and the nodes in the Compound Graph. The attributes of the Compound Node generally act as a passthrough, moving data between the owning graph and the Compound Graph.
```usd
def OmniGraphNode "compound" (
    prepend apiSchemas = ["OmniGraphCompoundNodeAPI"]
)
{
    custom token inputs:a
    custom token inputs:b
    token node:type = "omni.graph.nodes.CompoundSubgraph"
    int node:typeVersion = 1
    rel omni:graph:compoundGraph = </World/PushGraph/compound/Subgraph>
    token omni:graph:compoundType = "subgraph"
    custom token outputs:sum
    prepend token outputs:sum.connect = </World/PushGraph/compound/Subgraph/add.outputs:sum>

    def OmniGraph "Subgraph"
    {
        def OmniGraphNode "add"
        {
            custom token inputs:a
            prepend token inputs:a.connect = </World/PushGraph/compound.inputs:a>
            custom token inputs:b
            prepend token inputs:b.connect = </World/PushGraph/compound.inputs:b>
            token node:type = "omni.graph.nodes.Add"
            int node:typeVersion = 2
            custom token outputs:sum
        }
    }
}
```

## Using the og.Controller Class to Create Compound Nodes in Scripts

### Creating Compound Nodes

The `og.Controller` class can be used to create Compound Nodes in scripts; it provides a set of methods useful for defining OmniGraphs in Python. The most straightforward way to create a compound node using the `og.Controller.edit` function is to define the Compound Subgraph when defining the Compound Node, using a recursive definition, as demonstrated in the following sample:

```python
import omni.graph.core as og

keys = og.Controller.Keys
controller = og.Controller()
(graph, nodes, _, name_to_object_map) = controller.edit(
    "/World/MyGraph1",
    {
        keys.CREATE_NODES: [
            (
                "CompoundNode",  # Creates the compound node
                {
                    # Defines the subgraph of the compound node
                    keys.CREATE_NODES: [
                        ("Constant1", "omni.graph.nodes.ConstantDouble"),
                        ("Constant2", "omni.graph.nodes.ConstantDouble"),
                        ("Add", "omni.graph.nodes.Add"),
                    ],
                    keys.CONNECT: [
                        ("Constant1.inputs:value", "Add.inputs:a"),
                        ("Constant2.inputs:value", "Add.inputs:b"),
                    ],
                },
            ),
        ],
    },
)
```

This will generate a graph that contains a Compound Node containing a Subgraph with two Constant nodes and an Add node.
Note that in the returned values from the `edit` function, the list represented by the `nodes` variable will only contain the Compound Node, and not the nodes defined inside the Compound Subgraph. In other words, the list of returned nodes only contains nodes in the top-level graph. However, the nodes inside the Compound Subgraph can be accessed via the returned `name_to_object_map` dictionary using the defined names of each of the nodes in the Subgraph. For example, to access the `Constant1` node, use `name_to_object_map["Constant1"]`. This does mean that when creating complex graph structures, all node names need to be unique, even if they are defined in different Compound Subgraphs.

### Promoting Compound Node Attributes

In the example above, the generated compound node has neither input nor output attributes. In practice, a Compound Node will use data from, and produce data for, adjacent nodes in the graph. This is accomplished using the `og.Controller.Keys.PROMOTE_ATTRIBUTES` key. The following example demonstrates how to promote attributes:

```python
keys = og.Controller.Keys
controller = og.Controller()
(graph, nodes, _, name_to_object_map) = controller.edit(
    "/World/MyGraph2",
    {
        keys.CREATE_NODES: [
            (
                "CompoundNode",
                {
                    keys.CREATE_NODES: [
                        ("Add", "omni.graph.nodes.Add"),
                    ],
                    keys.PROMOTE_ATTRIBUTES: [
                        ("Add.inputs:a", "inputs:one"),
                        ("Add.inputs:b", "inputs:two"),
                        ("Add.outputs:sum", "outputs:result"),
                        ("Add.outputs:sum", "outputs:alt_result"),
                    ],
                },
            ),
            ("Constant1", "omni.graph.nodes.ConstantDouble"),
            ("Constant2", "omni.graph.nodes.ConstantDouble"),
            ("Consumer", "omni.graph.nodes.Add"),
        ],
        keys.CONNECT: [
            # Connect to the promoted attributes
            ("Constant1.inputs:value", "CompoundNode.inputs:one"),
            ("Constant2.inputs:value", "CompoundNode.inputs:two"),
            ("CompoundNode.outputs:result", "Consumer.inputs:a"),
            ("CompoundNode.outputs:alt_result", "Consumer.inputs:b"),
        ],
    },
)
```

While the sample above does not demonstrate a particularly useful Compound Node, it does demonstrate a few important concepts about promotion. When promoting an attribute, the source attribute and the compound attribute name are supplied as a tuple. The source attribute is a path to an attribute in the Compound Subgraph. The second element of the tuple specifies the promoted attribute name in the compound; note that the name does not have to match the source attribute path. The promoted attribute can then be accessed in the owning graph using the name specified in the promotion, in the same manner as one would use any other node attribute. If required, an attribute can be promoted multiple times, as demonstrated by the promotion of **Add.outputs:sum**.

Attribute promotion can also be accomplished using the `og.NodeController.promote_attribute` function.

The inputs or outputs prefix is optional in the compound attribute name. The port type of a promoted attribute is determined solely by the port type of the source attribute, and not by the prefix of the compound attribute name.
This is demonstrated in the following example:

```python
keys = og.Controller.Keys
controller = og.Controller()
(graph, nodes, _, name_to_object_map) = controller.edit(
    "/World/MyGraph3",
    {
        keys.CREATE_NODES: [
            (
                "CompoundNode",
                {
                    keys.CREATE_NODES: [
                        ("Add", "omni.graph.nodes.Add"),
                        ("Constant1", "omni.graph.nodes.ConstantDouble"),
                    ],
                    keys.PROMOTE_ATTRIBUTES: [
                        ("Add.inputs:a", "one"),               # Promoted as inputs:one
                        ("Add.inputs:b", "outputs:two"),       # Promoted as inputs:outputs:two
                        ("Add.outputs:sum", "inputs:result"),  # Promoted as outputs:inputs:result
                        ("Constant1.inputs:value", "const"),   # Promoted as outputs:const
                    ],
                },
            ),
        ],
    },
)
```

Here, the appropriate prefix is prepended to each of the promoted attribute names. In the case of the constant nodes, whose input attributes are marked as **output-only**, the promoted attributes become output attributes on the Compound Node.
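The prefixing rule described above can be sketched as a small function. This is an illustration of the naming behavior only, not the actual OmniGraph implementation, and it deliberately ignores the output-only special case for constant nodes:

```python
def promoted_name(source_attr: str, compound_name: str) -> str:
    """Return the promoted attribute name for a given source attribute."""
    attr = source_attr.split(".")[-1]
    # The port type comes solely from the source attribute's port...
    port = "inputs" if attr.startswith("inputs:") else "outputs"
    # ...and if the supplied name already carries the correct prefix it is kept
    # as-is; otherwise the port prefix is prepended to whatever name was given.
    if compound_name.startswith(port + ":"):
        return compound_name
    return f"{port}:{compound_name}"

print(promoted_name("Add.inputs:a", "one"))               # -> inputs:one
print(promoted_name("Add.inputs:b", "outputs:two"))       # -> inputs:outputs:two
print(promoted_name("Add.outputs:sum", "inputs:result"))  # -> outputs:inputs:result
```

Note how a mismatched prefix is simply absorbed into the name, reproducing the `inputs:outputs:two` result from the example above.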
configuration-options_index.md
# Omniverse MJCF Importer

The MJCF Importer Extension is used to import MuJoCo representations of scenes. MuJoCo Modeling XML File (MJCF) is an XML format for representing a scene in the MuJoCo simulator.

## Getting Started

1. Clone the GitHub repo to your local machine.
2. Open a command prompt and navigate to the root of your cloned repo.
3. Run `build.bat` to bootstrap your dev environment and build the example extensions.
4. Run `_build\{platform}\release\omni.importer.mjcf.app.bat` to start the Kit application.
5. From the menu, select `Isaac Utils->MJCF Importer` to launch the UI for the MJCF Importer extension.

This extension is enabled by default. If it is ever disabled, it can be re-enabled from the Extension Manager by searching for `omni.importer.mjcf`.

**Note:** On Linux, replace `.bat` with `.sh` in the instructions above.

## Conventions

Special characters in link or joint names are not supported and will be replaced with an underscore. If a name starts with an underscore as a result of this replacement, an “a” is prepended. It is recommended to make these name changes in the MJCF file directly.

See the [Convention References](https://docs.omniverse.nvidia.com/app_isaacsim/app_isaacsim/reference_conventions.html#isaac-sim-conventions) documentation for a complete list of `Isaac Sim` conventions.

## User Interface

## Configuration Options

- **Fix Base Link**: When checked, every world body will have its base fixed where it is placed in world coordinates.
- **Import Inertia Tensor**: Check to load inertia from the MJCF directly. If the MJCF does not specify an inertia tensor, identity will be used and scaled by the scaling factor. If unchecked, PhysX will compute it automatically.
- **Stage Units Per Meter**: The default length unit is meters. Here you can set the scaling factor to match the unit used in your MJCF.
- **Link Density**: If a link does not have a given mass, this density (in kg/m^3) is used to compute mass based on link volume. A value of 0.0 can be used to tell the physics engine to compute density automatically as well.
- **Clean Stage**: When checked, cleans the stage before loading the new MJCF; otherwise, loads it into the currently open stage at position `(0,0,0)`.
- **Self Collision**: Enables self collision between adjacent links. It may cause instability if the collision meshes are intersecting at the joint.
- **Create Physics Scene**: Creates a default physics scene on the stage. Because this physics scene is created outside of the scene asset, it will not be loaded into other scenes composed with the robot asset.

**Note**: It is recommended to set Self Collision to false unless you are certain that links on the robot are not self colliding.

### Robot Properties

There might be many properties you want to tune on your robot. These properties can be spread across many different Schemas and APIs. The general steps for getting and setting a parameter are:

1. Find which API the parameter is under. Most common ones can be found in the Pixar USD API.
2. Get the prim handle that the API is applied to. For example, Articulation and Drive APIs are applied to joints, and MassAPIs are applied to rigid bodies.
3. Get the handle to the API. From there on, you can Get or Set the attributes associated with that API.
For example, if we want to set the wheel’s drive velocity and the actuators’ stiffness, we need to find the DriveAPI:

```python
# get handle to the Drive API for both wheels
left_wheel_drive = UsdPhysics.DriveAPI.Get(stage.GetPrimAtPath("/carter/chassis_link/left_wheel"), "angular")
right_wheel_drive = UsdPhysics.DriveAPI.Get(stage.GetPrimAtPath("/carter/chassis_link/right_wheel"), "angular")

# Set the velocity drive target in degrees/second
left_wheel_drive.GetTargetVelocityAttr().Set(150)
right_wheel_drive.GetTargetVelocityAttr().Set(150)

# Set the drive damping, which controls the strength of the velocity drive
left_wheel_drive.GetDampingAttr().Set(15000)
right_wheel_drive.GetDampingAttr().Set(15000)

# Set the drive stiffness, which controls the strength of the position drive
# In this case, because we want to do velocity control, this should be set to zero
left_wheel_drive.GetStiffnessAttr().Set(0)
right_wheel_drive.GetStiffnessAttr().Set(0)
```

**Note**:
- The drive stiffness parameter should be set when using position control on a joint drive.
- The drive damping parameter should be set when using velocity control on a joint drive.
- Setting both stiffness and damping on a drive will result in both targets being applied; this can be useful in position control to reduce vibrations.

## Extension Documentation

- MJCF Importer Extension [omni.importer.mjcf]
  - Usage
  - High Level Code Overview
  - Changelog
  - Contributing to the MJCF Importer Extension
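The note above about combining stiffness and damping follows from the PhysX-style drive force law, which applies a position error term and a velocity error term together. A plain-Python sketch (the gains and targets are illustrative, and this is not an actual PhysX call):

```python
# force = stiffness * (target_pos - pos) + damping * (target_vel - vel)
def drive_force(stiffness, damping, target_pos, pos, target_vel, vel):
    return stiffness * (target_pos - pos) + damping * (target_vel - vel)

# Pure velocity control (stiffness = 0): only the velocity error matters.
print(drive_force(0.0, 15000.0, 0.0, 1.23, 150.0, 100.0))
# -> 750000.0

# Position control with some damping added to reduce vibrations: both terms apply.
print(drive_force(100000.0, 2000.0, 0.5, 0.4, 0.0, 0.1))
```

Setting stiffness to zero, as in the wheel example above, is what turns the drive into a pure velocity controller.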
configuration_overview.md
# Scene Optimizer Service

The Scene Optimizer Service uses the [Scene Optimizer Extension](https://docs.omniverse.nvidia.com/prod_extensions/prod_extensions/ext_scene-optimizer.html) (omni.kit.services.scene.optimizer) to optimize USD files to improve performance.

Example use cases for the service:

- **Automated UV generation for CAD models**
- **Optimizing scenes for runtime interactivity**
- **Optimizing scenes for memory efficiency**
- **Point cloud partitioning**

The service takes a USD file from a Nucleus location, applies a predefined configuration file with the desired optimization operations, and creates a new optimized USD file in the same Nucleus directory.

## Configuration

The service is primarily configured via a JSON file which describes the operation stack that should be executed on the scene.

### Generating Config Files

The file can be written by hand, but the easier way is to use the Scene Optimizer Kit extension to generate it. The optimization steps can be defined in the Scene Optimizer UI, then saved to a JSON file. See details on how to generate the JSON file from the [Scene Optimizer Kit extension](https://docs.omniverse.nvidia.com/prod_extensions/prod_extensions/ext_scene-optimizer/user-manual.html).

### Example JSON Configuration

This sample JSON will run the following operations:

- optimize materials
- deduplicate
- deduplicate geometry
- instanceable reference
- prune leaf xforms
configuring-assets-and-adding-them-to-a-stage_Overview.md
# SimReady Explorer Developer Guide

## Overview

SimReady Assets are the building blocks of industrial virtual worlds. They are built on top of the Universal Scene Description (USD) platform and have accurate physical properties, behaviors, and connected data streams. They consist of multiple files such as USD layers, material description files (.mdl), textures, thumbnails, etc.

The SimReady Explorer extension allows for working with libraries of SimReady Assets, by enabling users to:

- Find assets by words matched against tags and asset names
- Configure asset behavior, appearance, etc. in the browser, before the assets are assembled into a scene
- Assemble assets into virtual worlds through Omniverse applications such as USD Composer, DriveSim, Isaac Sim, etc.

Through the SimReady Explorer API, developers can:

- Search for assets by matching search words against tags and asset names
- Configure various aspects of assets, as defined by the asset class
- Add the configured assets to a stage

## Finding assets

SimReady Assets have a name, tags, and labels. The labels are derived from the Wikidata database, and as such are backed by QCodes. Note that the labels and the QCode of an asset are also part of its list of tags. Both tags and labels can consist of multiple, space-separated words.

Finding SimReady Assets is like querying a database. Each search term in a list is matched against the asset’s name and tags. Partial matches of a search term in names, tags, and labels are also returned. For example, the search term “wood” would match names such as “recycledwoodpallete” and “crestwood_sofa”.

See the find_assets() API for details on how to programmatically search for assets.

## Configuring assets and adding them to a stage

The find_assets() API returns a list of SimreadyAsset objects, which can be added to the current stage with the desired behaviors enabled. The behaviors supported currently are the USD variant sets exposed by the asset.
When adding assets to a stage, they can be inserted at a given location, can be parented to another prim, or can be added at or even replace a list of prims. See the add_asset_to_stage() and add_asset_to_stage_using_prims() APIs for details on how to programmatically add assets to the current stage.

# SimReady Explorer API Tutorial

The following code illustrates how to find assets, add them to the current scene, and even replace some of them with other assets. The code below can be executed from the Script Editor of any Omniverse application.

```python
import asyncio
from typing import List, Tuple

import omni.kit.app
import omni.simready.explorer as sre
import omni.usd
from pxr import Gf, Sdf, Usd


async def main():
    # 1. Find all residential wooden chair assets.
    # We use multiple search terms; some will be matched in the tags, others in the asset names
    assets = await sre.find_assets(search_words=["residential", "chair", "wood"])
    print(f"Found {len(assets)} chairs")

    # 2. Prepare to configure the assets.
    # All SimReady Assets have a Physics behavior, which is implemented as a
    # variantset named PhysicsVariant. To enable rigid body physics on an asset,
    # this variantset needs to be set to "RigidBody".
    variants = {"PhysicsVariant": "RigidBody"}

    # 3. Add all assets found in step (1) to the current stage as a payload
    added_prim_paths: List[Sdf.Path] = []
    for i, asset in enumerate(assets):
        pos = -200 + 200 * i
        res, prim_path = sre.add_asset_to_stage(
            asset.main_url, position=Gf.Vec3d(pos, 0, -pos), variants=variants, payload=True
        )
        if res:
            print(f"Added '{prim_path}' from '{asset.main_url}'")
            added_prim_paths.append(prim_path)

    # 4. Find an ottoman
    assets = await sre.find_assets(search_words=["ottoman"])
    print(f"Found {len(assets)} ottomans")

    # 5. Replace the first chair with an ottoman
    if assets and added_prim_paths:
        usd_context: omni.usd.UsdContext = omni.usd.get_context()
        stage: Usd.Stage = usd_context.get_stage() if usd_context else None
        await omni.kit.app.get_app().next_update_async()
        res, prim_path = sre.add_asset_to_stage_using_prims(
            usd_context,
            stage,
            assets[0].main_url,
            variants=variants,
            replace_prims=True,
            prim_paths=[added_prim_paths[0]],
        )
        if res:
            print(f"Replaced assets '{added_prim_paths[0]}' with '{prim_path}' from '{assets[0].main_url}'")


asyncio.ensure_future(main())
```

## Future enhancements

The SimReady Explorer API will be extended in the near future to allow defining custom asset classes with specific behaviors.
# Omni Asset Validator (Core)

Validates assets against Omniverse specific rules to ensure they run smoothly across all Omniverse products. It includes the following components:

- A **rule interface** and **registration mechanism** that can be called from external python modules.
- An **engine** that runs the rules on a given `Usd.Stage`, layer file, or recursively searches an OV folder for layer files.
- An **issue fixing** interface for applying automated fixes if/when individual rules provide suggestions.

> Note
> The `IssueFixer` API is still a work in progress. Currently no rules provide the necessary suggestions to fix issues.

## Validation Rules by Category

Several categories of **validation rules** are defined in this core module. These include:

- The **Basic** rules from Pixar (e.g. the default `usdchecker` rules).
- The **ARKit** rules from Apple (also available via `usdchecker`).
  - These rules are disabled by default in this system, as they are in `usdchecker`. Use Carbonite Settings or the ValidationEngine API to re-enable them.
- A few NVIDIA developed rules that we plan to contribute back to the **Basic** set.
- Some Omniverse specific rules that apply to all Omniverse apps. These will be available under several **Omni:** prefixed categories.

## Writing your own Rules

Any external client code can define new rules and register them with the system. Simply add `omni.asset_validator.core` as a dependency of your tool (e.g. your Kit extension or other python module), derive a new class from `BaseRuleChecker`, and use the `ValidationRulesRegistry` to categorize your rule with the system. See the Core Python API for thorough details and example code. We even provide a small Testing API to ease unittesting your new rules against known USD layer files.

# Important

Put extra thought into your category name, your class name, your `GetDescription()` implementation, and the messages in any errors, warnings, or failures that your rule generates at runtime.
These are the user-facing portions of your rule, and many users will appreciate natural language over engineering semantics.

# Running the Validation Engine

Validation can be run synchronously (blocking) via `ValidationEngine.validate()`, or asynchronously via either `ValidationEngine.validate_async()` or `ValidationEngine.validate_with_callbacks()`. Currently validation within an individual layer file or `Usd.Stage` is synchronous. This may become asynchronous in the future if it is merited.

Validation Issues are captured in a `Results` container. Issues vary in severity (Error, Failure, Warning) and will provide detailed messages explaining the problem. Optionally, they may also provide detail on where the issue occurred in the `Usd.Stage` and a suggestion (callable python code) for how it can be fixed automatically.

# Fixing Issues automatically

Once validation Results have been obtained, they can be displayed for a user as plain text, but we also provide an automatic `IssueFixer` for some Issues. It is up to each individual rule to define the suggested fix via a python callable. See the Core Python API for more details.

# Configuring Rules with Carbonite Settings

As with many Omniverse tools, `omni.asset_validator.core` is configurable at runtime using Carbonite settings. The following settings can be used to customize which rules are enabled/disabled for a particular app, company, or team.

## Settings

- `enabledCategories` / `disabledCategories` are lists of glob style patterns matched against registered categories. Categories can be force-enabled using an exact match (no wildcards) in `enabledCategories`.
- `enabledRules` / `disabledRules` are lists of glob style patterns matched against class names of registered rules. Rules can be force-enabled using an exact match (no wildcards) in `enabledRules`.

> Tip: These settings only affect a default-constructed ValidationEngine.
Using the Python API, client code may further configure a ValidationEngine using `enableRule()`. In such cases, the rules may not even be registered with the ValidationRulesRegistry.

# API and Changelog

We provide a thorough public API for the core validation framework and a minimal public testing API to assist clients in authoring new rules.

- Core Python API
- Testing API
- Rules
- Changelog
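One way to model the enable/disable resolution described in the Settings section above is with glob matching plus an exact-match override. This is assumed behavior for illustration only, not the actual `omni.asset_validator.core` implementation:

```python
# Illustrative model: disabled glob patterns switch a category off, but an
# exact (wildcard-free) entry in enabledCategories force-enables it again.

import fnmatch
from typing import List


def is_category_enabled(category: str, enabled: List[str], disabled: List[str]) -> bool:
    # An exact match (no wildcards) in the enabled list force-enables the category.
    exact_enabled = [p for p in enabled if not any(c in p for c in "*?[")]
    if category in exact_enabled:
        return True
    # Otherwise, any matching disabled pattern turns the category off.
    if any(fnmatch.fnmatchcase(category, p) for p in disabled):
        return False
    return True


# "Omni:*" categories disabled wholesale, but one force-enabled by exact name
assert not is_category_enabled("Omni:Layout", [], ["Omni:*"])
assert is_category_enabled("Omni:Layout", ["Omni:Layout"], ["Omni:*"])
assert is_category_enabled("Basic", [], ["Omni:*"])
```

The same pattern would apply to `enabledRules` / `disabledRules`, with rule class names in place of category names.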
# Configuring the `omni.kit.telemetry` Extension

## Overview

The `omni.kit.telemetry` extension is responsible for a few major tasks. These largely occur in the background and require no direct interaction from the rest of the app. All of this behavior occurs during the startup of the extension automatically. The major tasks that occur during extension startup are:

- Launch the telemetry transmitter app. This app is shipped with the extension and is responsible for parsing, validating, and transmitting all structured log messages produced by the app. Only the specific messages that have been approved and validated will be transmitted. More on this below.
- Collect system information and emit structured log messages and crash reporter metadata values for it. The collected system information includes CPU, memory, OS, GPU, and display information. Only information about the capabilities of the system is collected, never any user specific information.
- Emit various startup events. This includes events that identify the run environment being used (ie: cloud/enterprise/individual, cloud node/cluster name, etc), the name and version of the app, the various session IDs (ie: telemetry, launcher, cloud, etc), and the point at which the app is ready for the user to interact with it.
- Provide interfaces that allow some limited access to information about the session. The `omni::telemetry::ITelemetry` and `omni::telemetry::ITelemetry2` interfaces can be used to access this information. These interfaces are read-only for the most part.

Once the extension has successfully started up, it is generally not interacted with again for the duration of the app's session.

## The Telemetry Transmitter

The telemetry transmitter is a separate app that is bundled with the `omni.kit.telemetry` extension. It is launched during the extension's startup. For the most part the configuration of the transmitter is automatic.
However, its configuration can be affected by passing specific settings to the Kit based app itself. In general, any settings under the `/telemetry/` settings branch will be passed directly on to the transmitter when it is launched. There are some settings that may be slightly adjusted or added to, however, depending on the launch mode. The transmitter process will also inherit any settings under the `/log/` (with a few exceptions) and `/structuredLog/extraFields/` settings branches.

In almost all cases, the transmitter process will be unique in the system. At any given time, only a single instance of the transmitter process will be running. If another instance of the transmitter is launched while one is already running, the new instance will immediately exit. This single instance of the transmitter will however handle events produced by all Kit based apps, even if multiple apps are running simultaneously. This limitation can be overcome by specifying a new launch guard name with the `/telemetry/launchGuardName` setting, but this is not recommended without also including additional configuration changes for the transmitter, such as the log folder to be scanned. Having multiple transmitters running simultaneously could result in duplicate messages being sent and more contention on accessing log files.

When the transmitter is successfully launched, it will keep track of how many Kit based apps have attempted to launch it. The transmitter will continue to run until all Kit based apps that tried to launch it have exited. This is true regardless of how each Kit based app exits - whether through a normal exit, crashing, or being terminated by the user. The only cases where the transmitter will exit early are if it detects that another instance is already running, or if it detects that the user has not given any consent to transmit any data. In the latter case, the transmitter exits because it has no job to perform without user consent.
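The single-instance and lifetime rules above can be sketched as a reference-counted guard. This is a simplified in-process model for illustration; the real transmitter uses an OS-level launch guard (named by `/telemetry/launchGuardName`), not a Python object:

```python
# Simplified model of the transmitter lifetime rules: the first app launch
# starts the transmitter, later launches only bump a reference count, and
# the transmitter stays alive until every launching app has gone away.


class TransmitterGuard:
    def __init__(self) -> None:
        self.running = False
        self.refcount = 0

    def app_launched(self) -> bool:
        """Return True only when this launch actually started a transmitter."""
        self.refcount += 1
        if self.running:
            return False  # a second instance exits immediately
        self.running = True
        return True

    def app_exited(self) -> None:
        self.refcount -= 1
        if self.refcount == 0:
            self.running = False  # transmitter shuts down with the last app


guard = TransmitterGuard()
assert guard.app_launched() is True   # first app starts the transmitter
assert guard.app_launched() is False  # second launch is a no-op
guard.app_exited()
assert guard.running                  # one app still alive -> keep running
guard.app_exited()
assert not guard.running              # last app gone -> transmitter exits
```

Because the count only reaches zero when every launching app has exited, the model also covers crashed or terminated apps, provided the real guard mechanism detects their exit.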
When the transmitter is run with authentication enabled (ie: the `/telemetry/transmitter/0/authenticate=true` or `/telemetry/authenticate=true` settings), it requires a way to deliver the authentication token to it. This is usually provided by downloading a JSON file from a certain configurable URL. The authentication token may arrive with an expiry time. The transmitter will request a renewed authentication token only once the expiry time has passed. The authentication token is never stored locally in a file by the transmitter. If the transmitter is unable to acquire an authentication token for any reason (ie: URL not available, attempt to download the token failed or was rejected, etc), that endpoint in the transmitter will simply pause its event processing queue until a valid authentication token can be acquired.

When the transmitter starts up, it performs the following checks:

- Reads the current privacy consent settings for the user. These settings are found in the `privacy.toml` file that the Kit based app loaded on startup. By default this file is located in `~/.nvidia-omniverse/config/privacy.toml` but can be relocated for a session using the `/structuredLog/privacySettingsFile` setting.
- Loads its configuration settings and builds all the requested transmission profiles. The same set of parsed, validated events can be sent to multiple endpoints if the transmitter is configured to do so.
- Downloads the appropriate approved schemas package for the current telemetry mode. Each schema in the package is then loaded and validated. Information about each event in each schema is then stored internally.
- Parses out the extra fields passed to it. Each of the named extra fields will be added to each validated message before it is transmitted.
- In newer versions of the transmitter (v0.5.0 and later), the list of current schema IDs is downloaded and parsed if running in 'open endpoint' mode (ie: authentication is off and the `schemaid` extra field is passed on to it). This is used to set the latest value for the `schemaid` field.
- Outputs its startup settings to its log file. Depending on how the Kit based app is launched, this log file defaults to either `${kit}/logs/` or `~/.nvidia-omniverse/logs/`. The default name for the log file is `omni.telemetry.transmitter.log`.

While the transmitter is running, it repeatedly performs the following operations:

- Scans the log directory for new structured log messages. If no new messages are found, the transmitter will sleep for one minute (by default) before trying again.
- All new messages that are found are then validated against the set of loaded events. Any message that fails validation (ie: not formatted correctly or its event type isn't present in the approved events list) will simply be dropped and not transmitted.
- Sends the set of new approved, validated events to each of the requested endpoints. The transmitter will remove any endpoint that repeatedly fails to be contacted but continue doing its job for all other endpoints. If all endpoints are removed, the transmitter will simply exit.
- Updates the progress tags for each endpoint in each log file to indicate how far into the log file it has successfully processed and transmitted. If the transmitter exits and the log files persist, the next run will simply pick up where it left off.
- Checks whether the transmitter should exit. This can occur if all of the launching Kit based apps have exited or if all endpoints have been removed due to them being unreachable.

## Anonymous Data Mode

An anonymous data mode is also supported for Omniverse telemetry. This guarantees that all user information is cleared out, if loaded, very early on startup.
Enabling this also enables open endpoint usage, and sets the transmitter to 'production' mode. All consent levels will also be enabled once a random user ID is chosen for the session. This mode is enabled using the `/telemetry/enableAnonymousData` setting (boolean). For more information, please see the [Anonymous Data Mode documentation](#).

## Configuration Options Available to the `omni.kit.telemetry` Extension

The `omni.kit.telemetry` extension will do its best to automatically detect the mode that it should run in. However, sometimes an app can be run in a setting where the correct mode cannot be accurately detected. In these cases the extension will just fall back to its default mode. The current mode can be explicitly chosen using the `/telemetry/mode` setting. However, some choices of mode (ie: 'test') may not function properly without the correct build of the extension and transmitter. The extension can run in the following modes:

- `Production`: Only transmits events that are approved for public users. Internal-only events will only be emitted to local log files and will not be transmitted anywhere. The default transmission endpoint is Kratos (public). This is the default mode.
- `Developer`: Transmits events that are approved for both public users and internal users. The default transmission endpoints are Kratos (public) and NVDF (internal only).
- `Test`: Sends only locally defined test events. This mode is typically only used for early iterative testing purposes during development. This mode in the transmitter allows locally defined schemas to be provided. The default transmission endpoints are Kratos (public) and NVDF (internal only).

The extension also detects the 'run environment' it is in as best it can. This detection cannot be overridden by a setting.
The current run environment can be retrieved with the `omni::telemetry::ITelemetry2::getRunEnvironment()` function (C++) or the `omni.telemetry.ITelemetry2().run_environment` property (python). The following run environments are detected and supported: - **Individual**: This is the default mode. This launches the transmitter in its default mode as well (ie: `production` unless otherwise specified). If consent is given, all generated and approved telemetry events will be sent to both Kratos (public) and NVDF (internal only). This mode requires that the user be logged into the Omniverse Launcher app since it provides the authentication information that the public data endpoint requires. If the Omniverse Launcher is not running, data transmission will just be paused until the Launcher app is running. This mode is chosen only if no others are detected. This run environment is typically picked for individual users who install their Omniverse apps through the desktop Omniverse Launcher app. This run environment is referred to as “OVI”. - **Cloud**: This launches the transmitter in ‘cloud’ mode. In this mode the final output from the transmitter is not sent anywhere, but rather written to a local file on disk. The intent is that another log consumer service will monitor for changes on this log file and consume events as they become available. This allows more control over which data is ingested and how that data is ingested. This run environment is typically launched through the Omniverse Cloud cockpit web portal and is referred to as “OVC”. - **Enterprise**: This launches the transmitter in ‘enterprise’ mode. In this mode, data is sent to an open endpoint data collector. No authentication is needed in this mode. The data coming in does however get validated before storing. This run environment is typically detected when using the Omniverse Enterprise Launcher app to install or launch the Kit based app. This run environment is referred to as “OVE”. 
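The run-environment behavior described above can be summarized as a small dispatch table. The endpoint names are taken from the text; the table itself is a hand-built illustration, not the extension's actual logic:

```python
# Hand-built summary of where transmitted data goes per run environment,
# per the descriptions above (illustrative only).

from typing import List


def endpoints_for(run_environment: str) -> List[str]:
    table = {
        # OVI: public Kratos endpoint plus the internal-only NVDF endpoint
        "Individual": ["Kratos (public)", "NVDF (internal only)"],
        # OVC: output is written to a local log file for a log consumer service
        "Cloud": ["local log file"],
        # OVE: an open endpoint collector, no authentication required
        "Enterprise": ["open endpoint collector"],
    }
    return table[run_environment]


assert endpoints_for("Cloud") == ["local log file"]
assert "Kratos (public)" in endpoints_for("Individual")
assert endpoints_for("Enterprise") == ["open endpoint collector"]
```

In a real session the run environment is detected automatically and read back via `ITelemetry2::getRunEnvironment()` rather than passed in by the caller.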
Many of the structured logging and telemetry settings that come from the Carbonite components of the telemetry system also affect how the `omni.kit.telemetry` extension starts up. Some of the more useful settings that affect this are listed below. Other settings listed in the above Carbonite documentation can be referred to for additional information. The following settings can control the startup behavior of the `omni.kit.telemetry` extension, the transmitter launch, and structured logging for the app: - Settings used for configuring the transmitter to use an open endpoint: - `/structuredLog/privacySettingsFile`: Sets the location of the privacy settings TOML file. This setting should only be used when configuring an app in a container to use a special privacy settings file instead of the default one. The default location and name for this file is `~/.nvidia-omniverse/config/privacy.toml`. This setting is undefined by default. - `/telemetry/openTestEndpointUrl`: Sets the URL to use as the test mode open endpoint URL for the transmitter. This value is specified in the extension’s configuration files and may override anything given on the command line or other global config files. - `/telemetry/openEndpointUrl`: Sets the URL to use as the dev or production mode open endpoint URL for the transmitter. This value is specified in the extension’s configuration files and may override anything given on the command line or other global config files. - `/telemetry/enterpriseOpenTestEndpointUrl`: Sets the URL to use as the test mode open endpoint URL for OVE for the transmitter. This value is specified in the extension’s configuration files and may override anything given on the command line or other global config files. - `/telemetry/enterpriseOpenEndpointUrl`: Sets the URL to use as the dev or production mode open endpoint URL for OVE for the transmitter. 
This value is specified in the extension’s configuration files and may override anything given on the command line or other global config files. - `/telemetry/useOpenEndpoint`: Boolean value to explicitly launch the transmitter in ‘open endpoint’ mode. This will configure the transmitter to set its endpoint to the Kratos open endpoint URL for the current telemetry mode and run environment. In most cases this setting and ensuring that the privacy settings are provided by the user are enough to successfully launch the transmitter in open endpoint mode. This defaults to `false`. - `/telemetry/enableAnonymousData`: Boolean value to override several other telemetry, privacy, and endpoint settings. This will clear out all user information settings (both in the settings registry and cached in the running process), choose a random user ID for the session, enable all consent levels, enable `/telemetry/useOpenEndpoint`, and clear out `/telemetry/mode` so that only production mode events can be used. 1. **Extension Startup**: - `omni.kit.telemetry`: This is the identifier for the telemetry extension when it starts up. 2. **Logging Control Settings**: - `/telemetry/log/level`: Sets the logging level for the transmitter, defaulting to `warning`. Useful for debugging without affecting the app's default logging level. - `/telemetry/log/file`: Sets the log output filename for the transmitter, defaulting to `omni.telemetry.transmitter.log` in the structured log directory (`~/.nvidia-omniverse/logs/`). Can be overridden in 'portable' mode. - Any other `/log/` settings, except `/log/enableStandardStreamOutput`, `/log/file`, and `/log/level`, are inherited by the transmitter. - `/structuredLog/extraFields/`: Settings under this branch are passed to the transmitter unmodified. - `/telemetry/`: Settings under this branch are passed to the transmitter unmodified. - `/structuredLog/privacySettingsFile`: Passed to the transmitter if specified. 
Transmitter may override these settings if a `privacy.toml` file is detected.
   - `/structuredLog/logDirectory`: Passed to the transmitter if explicitly given.
   - `/telemetry/testLogFile`: Specifies the path to a special log file for additional transmitter information, defaulting to disabled.
3. **Telemetry Destination Control Settings**:
   - `/telemetry/enableNVDF`: Controls whether the NVDF endpoint is added during launch in OVI environments, enabled by default.
   - `/telemetry/nvdfTestEndpoint`: Specifies whether the 'test' or 'production' NVDF endpoint is used when `/telemetry/enableNVDF` is enabled, defaulting to `false`.
   - `/telemetry/endpoint`: Overrides the default public endpoint, ignored in OVE and open endpoint environments, defaulting to an empty string.
   - `/telemetry/cloudLogEndpoint`: Allows overriding the default endpoint for OVC, expected as a local file URI, defaulting to `file:///${omni_logs}/kit.transmitter.out.log`. The server name must be set to `localhost` or left blank.

## Settings to control the extension's startup behavior

- `/exts/omni.kit.telemetry/skipDeferredStartup`: Boolean setting to allow the extension's various startup tasks to be run serially instead of in parallel. This is often needed for unit test purposes to guarantee that all of the startup tasks have completed before the extension's tests start to run. If enabled, this will cause the extension to take slightly longer to start up (should still be less than 2s in almost all cases). This defaults to `false`.

## Settings specific to the OVC run environment

- `/cloud/cluster`: String setting that specifies the name of the cluster the session will run on. This is expected to be provided by the OVC app launch system. This defaults to an empty string.
- `/cloud/node`: String setting that specifies the name of the node that the session will run on. This is expected to be provided by the OVC app launch system. This defaults to an empty string.
- `/telemetry/extraFieldsToAdd`: String setting that specifies which of the extra fields under `/structuredLog/extraFields/` that are inherited by the telemetry transmitter on launch should be automatically added to each message by the transmitter. This is expected to be a comma separated list of key names in the `/structuredLog/extraFields/` settings branch.
Note that there should not be any whitespace in this setting’s value otherwise some platforms such as Windows could parse it incorrectly. Any keys names in this list that do not exist as extra fields passed to the transmitter will simply be ignored. This defaults to an empty string. Note that if this setting is present and contains the `schemaid` field name, the transmitter will automatically retrieve and add the correct schema ID value to each message that is sent. This automatic behavior also requires the `/telemetry/runEnvironment` setting however to correctly determine which schema ID to use. - `/telemetry/runEnvironment`: String setting that specifies the run environment that the `omni.kit.telemetry` extension has detected. This is automatically passed on to the telemetry transmitter when running in open-endpoint mode. ## Crash Reporter Metadata The `omni.kit.telemetry` extension will set or modify several crash reporter metadata values during its startup. Unless otherwise noted, each of these metadata values will be created by the extension. The following metadata values are managed by this extension: - `environmentName`: This metadata value is originally set by Kit-kernel, but can be modified by `omni.kit.telemetry` if it is left at the value `default`. In this case, its value will be replaced by the current detected run environment - one of `Individual`, `Enterprise`, or `Cloud`. - `runEnvironment`: Contains the current detected run environment - one of `Individual`, `Enterprise`, or `Cloud`. This value is added to explicitly include the run environment name even in cases where `environmentName` is set to something else by Kit-kernel. - `externalBuild`: Set to `true` if the current Kit app is being run by an external (ie: public) user or has not been detected as an internal-only session. Set to `false` if an internal user or session has been detected. 
- `launcherSessionId`: If the OVI launcher app is currently running in the system, this value is set to the session ID for the launcher. - `cloudPodSessionId`: If in the OVC run environment, this will contain the cloud session ID. - `cpuName`: The friendly name of the system’s main CPU. - `cpuId`: The internal ID of the system’s main CPU. - `cpuVendor`: The name of the system’s main CPU vendor. - `osName`: The friendly name of the operating system. - `osDistro`: The distribution name of the operating system. - `osVersion`: The detailed version number or code of the operating system. - `primaryDisplayRes`: The resolution of the system’s primary display (if any). - `desktopSize`: The size of the entire system desktop for the current user. - `desktopOrigin`: The top-left origin point of the desktop window. On some systems this may just be (0, 0), but others such as Windows allow for negative origin points. - `displayCount`: The number of attached displays (if any). - `displayRes_<n>`: The current resolution in pixels of the n-th display. - `gpu_<n>`: The name of the n-th GPU attached to the system. - `gpuVRAM_<n>`: The amount of video memory the n-th GPU attached to the system has. - `gpuDriver_<n>`: The active driver version for the n-th GPU attached to the system.
# Configuration

Kit comes with a very rich and flexible configuration system based on Carbonite settings. Settings are a runtime representation of typical configuration formats (like json, toml, xml), and are basically a nested dictionary of values.

## Quick Start

When you run the kit executable, it doesn't load any kit app file:

```
> kit.exe
```

That will start kit and exit, without enabling any extensions or applying any configuration, except for the built-in config: `kit-core.json`.

> To see all flags call
> ```
> > kit.exe -h
> ```

To see the default kit settings pass `--/app/printConfig=true`:

```
> kit.exe --/app/printConfig=true
```

That will print all settings. This syntax `--/` is used to apply settings from the command line. Any setting can be modified in this way. You may notice that the config it printed includes `app/printConfig`. You can try adding your own settings on the command line and observing them in the printed config to prove to yourself that it works as expected.

Another useful flag to learn early is `-v` to enable info logging or `-vv` to enable verbose logging. There are settings to control logging more precisely, but this is an easy way to get more logging in the console and debug the startup routine.

```
> kit.exe -v
```

To make kit do something, let's enable some extensions:

```
> kit.exe --enable omni.kit.window.script_editor
```

That enables the script editor extension. You may also notice that it enabled a few extensions that it depends on. You can stack multiple `--enable` flags to enable more extensions. You can also add more folders to search for extensions with `--ext-folder`:

```
> kit.exe --enable omni.kit.window.script_editor --ext-folder ./exts --enable foo.bar
```

That enables you to create e.g. `exts/foo.bar/extension.toml` and start hacking on your own extension right away.

Flags like `--enable`, `--ext-folder` and many others are just shorthand for commonly-used settings.
For example, they just append to the `/app/exts/enabled` and `/app/exts/folders` arrays respectively.

### Application Config

Settings can also be applied by passing a configuration file as a positional argument to Kit:

```
> kit.exe my.toml
```

This kind of config file becomes the "Application config". It receives special treatment from Kit:

1. The config name becomes the application name.
2. Separate data, documents and cache folders are created for the application.
3. The folder where this config is located becomes the application path.

This allows you to build separate applications with their own data and behavior.

### Kit File

A Kit file is the recommended way to configure applications.

```
> kit.exe my.kit
```

Kit files are single-file extensions (basically renamed `extension.toml` files). Only the `[settings]` part of them is applied to settings (as with any extension). Here is an example:

```toml
[package]
title = "My Script Editor App"
version = "0.1.0"
keywords = ["app"]

[dependencies]
"omni.kit.window.script_editor" = {}

[settings]
foo.bar = "123"
exts."omni.kit.window.script_editor".windowOpenByDefault = true
```

As with any extension, it can be named, versioned and even published to the registry. It defines dependencies in the same format to pull in additional extensions. Notice that the setting `windowOpenByDefault` of the script editor extension is being overridden. Any extension can define its own settings; a guideline is to put them in the `extension.toml` file of the extension. Check the `extension.toml` file for `omni.kit.window.script_editor`. Another guideline is to use the root `exts` namespace with the name of the extension next.

The goal of the .kit file is to bridge the gap between settings and extensions and have one file that the user can click to run a Kit-based application (e.g. if `.kit` file extensions are associated with `kit.exe` in the OS).

### System Configs

You can create system wide configuration files to override any setting.
There are a few places to put them:

1. `${shared_documents}/user.toml` - To override settings of any kit application in the shared documents folder, typically in (on Windows): `C:\Users\[username]\Documents\Kit\shared\user.toml`
2. `${app_documents}/user.toml` - To override settings of a particular application in the application documents folder, typically in: `C:\Users\[username]\Documents\Kit\apps\[app_name]\user.toml`
3. `<app.kit file>\<0 or more levels above>\deps\user.toml` - To override settings of any kit application locally, near the application .kit file. Only in portable mode.
4. `<app.kit file>\<0 or more levels above>\deps\[app_name]\user.toml` - To override settings of a particular application locally, near the application .kit file. Only in portable mode.
5. `${shared_program_data}/kit.config.toml` - To override settings of any kit application in the shared program data, typically in (on Windows): `%PROGRAMDATA%/NVIDIA Corporation/Kit/kit.config.toml`
6. `${app_program_data}/kit.config.toml` - To override settings of a particular application in the application program data, typically in (on Windows): `%PROGRAMDATA%/NVIDIA Corporation/Kit/[app name]/kit.config.toml`

To find the path of these folders on your system, you can run Kit with info logging enabled and look for the `Applied configs:` and `Non-existent configs:` messages at the beginning. Also, look for the `Tokens:` list in the log. For more info: Tokens.

## Special Keys

### Appending Arrays

When configs are merged, one value can override another. Sometimes we want to append values to arrays instead of overriding them. For this, use the special `++` key. For example, to add additional extension folders to the `/app/exts/folders` setting, you can write:

```toml
[app.exts]
folders."++" = ["c:/temp"]
```

You can put that, for instance, in the `user.toml` described above to add more extension folder search paths.
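The append-on-merge behavior of the `++` key can be sketched as a small merge function. This models the documented semantics only; it is not Carbonite's actual merge code:

```python
# Sketch of the merge rule described above: when two configs are merged,
# a plain key overrides, but a "++" key appends to the existing array.

def merge(base: dict, override: dict) -> dict:
    out = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and "++" in value:
            # Special "++" key: append to the existing array instead of replacing it.
            out[key] = list(out.get(key, [])) + list(value["++"])
        else:
            out[key] = value
    return out


base = {"folders": ["./exts"], "title": "kit"}
user = {"folders": {"++": ["c:/temp"]}, "title": "my app"}
merged = merge(base, user)
assert merged["folders"] == ["./exts", "c:/temp"]  # appended, not replaced
assert merged["title"] == "my app"                 # plain keys override
```

Without the `++` key, writing `folders = ["c:/temp"]` in `user.toml` would replace the whole array; with it, the new entries are appended to whatever the app already configured.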
### Importing Other Configs

You can use the `@import@` key to import other config files at that location:

```toml
[foo]
"@import@" = ["./some.toml"]
```

That imports the config `some.toml` under the key `foo`. The `./` syntax implies a relative path, resolved against the folder of the importing config file.

## Portable Mode

A regular kit-based app installation sets and uses system-wide data, cache and logs folders. It also reads the global Omniverse config in a known system-specific location. To know which folders are being used, look at tokens like `${data}`, `${cache}`, `${logs}`. They can be found at the beginning of each log file.

Kit-based apps can also run in a portable mode, using a specified folder as the root for all of those folders. This is useful for developers. Local builds run in portable mode by default. There are a few different ways to run kit in portable mode:

### Cmd Args

Pass `--portable` to run kit in portable mode and optionally pass `--portable-root [path]` to specify the location of the portable root.

### Portable Configs (Markers)

Kit looks for the following configs that force it to run in portable mode. It reads the content of the file if it finds one, and treats it as a path. If the path is relative, it is relative to the folder of that config. The priority of search is:

1. App portable config near the experience, e.g. `foo.portable` near `foo.kit` when run with: `kit.exe foo.kit`
2. Kit portable config near the experience, e.g. `kit.portable` near `foo.kit` when run with: `kit.exe foo.kit`
3. Kit portable config near `kit.exe`, e.g. `kit.portable` near `kit.exe`

## Changing Settings With Command Line

Any setting can be changed via the command line using the `--/` prefix:

```
> kit.exe --/[path/to/setting]=[value]
```

The path to the setting is separated by `/` and prefixed by `--/`.
For example, if the required option is `ignoreUnsavedOnExit` as shown in the printed JSON configuration:

```json
<ISettings root>: {
    "app": {
        "hangDetector": {
            "enabled": false,
            "timeout": 120
        },
        "file": {
            "ignoreUnsavedOnExit": false,
            ...
        },
        ...
    },
    ...
}
```

To change the value of `ignoreUnsavedOnExit` to `true`, add `--/app/file/ignoreUnsavedOnExit=true` to the command line:

```
> kit.exe --/app/file/ignoreUnsavedOnExit=true
```

To specify a boolean value, the `true` and `false` strings must be used.

### Note

1. The values are case-insensitive; using `--/some/path/to/parameter=false` or `--/some/path/to/parameter=FaLsE` produces the same result.
2. If you need to set the string value `"true"` or `"false"`, escape it with double quotes: `--/some/path/to/text_parameter=\"false\"`
3. It is also possible to use `--/some/path/to/parameter=0` or `--/some/path/to/parameter=1` to set a setting to `false` or `true` correspondingly. In this case the actual value in the settings will be an integer, but functions working with settings will correctly convert it to a boolean.

Setting a numeric or string value is straightforward:

```
> kit.exe --/some/number=7 --/another/number=1.5 --/some/string=test
```

If you need to set a string value that can be parsed as a number or a boolean - or if the string value contains whitespace - use double quotes to escape it:

```
> kit.exe --/sets/string/value=\"7\" --/sets/string/with/whitespaces=\"string with spaces\"
```

### Note

Do not forget to escape the quotes so that the OS doesn't remove them.

## Changing an array value with command line

To set an array value you can:

1. Specify individual array elements by adding their index in the array at the end of the path to the value. For example,

   ```
   > kit.exe --/some/array/1=17
   ```

   will change

   ```json
   ...
   "some": {
       "array" : [1, 2, 3],
   },
   ...
   ```

   into

   ```json
   ...
   "some": {
       "array" : [1, 17, 3],
   },
   ...
   ```
2. Assign the whole array at once. For example,

   ```
   > kit.exe --/some/array=[8,11]
   ```

   replaces

   ```json
   {
       "some": {
           "array" : [1, 2, 3],
       },
   }
   ```

   with

   ```json
   {
       "some": {
           "array" : [8, 11],
       },
   }
   ```

   Note: You can use whitespace in the square brackets (`[val0, val1, val2]`) if you escape the whole expression with double quotes, to prevent the OS from separating it into several command line arguments:

   ```
   > kit.exe --/some/array="[ 8, 11]"
   ```

3. It is also possible to assign a proper JSON as a parameter value:

   ```
   > kit.exe --/my/json/param={"num":1,"str":"test","arr":[1,2,3],"obj":{"info":42}}
   ```

   results in

   ```json
   {
       "my": {
           "json" : {
               "param" : {
                   "num": 1,
                   "str": "test",
                   "arr": [ 1, 2, 3 ],
                   "obj": { "info": 42 }
               }
           }
       },
   }
   ```

## Passing Command Line Arguments to Extensions

Kit ignores all command line arguments after `--`. It also writes those into the `/app/cmdLineUnprocessedArgs` setting. Extensions can use this setting to access them and process them as they wish.
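The value-parsing rules above (case-insensitive booleans, quoted strings staying strings, numbers, arrays, JSON objects) can be sketched in plain Python. `parse_setting_arg` is a hypothetical helper for illustration only, not Kit's actual parser:

```python
import json


def parse_setting_arg(arg: str):
    """Split a --/path/to/setting=value argument into (path parts, value),
    roughly following the rules described above."""
    assert arg.startswith("--/")
    path, _, raw = arg[3:].partition("=")
    if raw.lower() in ("true", "false"):
        # booleans are case-insensitive
        value = raw.lower() == "true"
    elif len(raw) >= 2 and raw.startswith('"') and raw.endswith('"'):
        # an explicitly quoted value stays a string, e.g. "7"
        value = raw[1:-1]
    else:
        try:
            # numbers, arrays like [8,11], and JSON objects
            value = json.loads(raw)
        except json.JSONDecodeError:
            # plain unquoted string
            value = raw
    return path.split("/"), value


print(parse_setting_arg("--/app/file/ignoreUnsavedOnExit=TrUe"))
print(parse_setting_arg('--/some/string/value="7"'))
print(parse_setting_arg("--/some/array=[8,11]"))
```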
## Code Examples

### Get Setting

```python
# Settings/Get Setting
import carb.settings

settings = carb.settings.get_settings()

# get a string
print(settings.get("/log/file"))

# get an array (tuple)
print(settings.get("/app/exts/folders"))

# get an array element syntax:
print(settings.get("/app/exts/folders/0"))

# get a whole dictionary
exts = settings.get("/app/exts")
print(exts)
print(exts["folders"])

# get `None` if doesn't exist
print(settings.get("/app/DOES_NOT_EXIST_1111"))
```

### Set Setting

```python
# Settings/Set Setting
import carb.settings

settings = carb.settings.get_settings()

# set different types into different keys
# guideline: each extension puts settings in /ext/[ext name]/ and lists them in extension.toml for discoverability
settings.set("/exts/your.ext.name/test/value_int", 23)
settings.set("/exts/your.ext.name/test/value_float", 502.45)
settings.set("/exts/your.ext.name/test/value_bool", False)
settings.set("/exts/your.ext.name/test/value_str", "summer")
settings.set("/exts/your.ext.name/test/value_array", [9, 13, 17, 21])
settings.set("/exts/your.ext.name/test/value_dict", {"a": 2, "b": "winter"})

# print all:
print(settings.get("/exts/your.ext.name/test"))
```

### Set Persistent Setting

```python
# Settings/Set Persistent Setting
import carb.settings

settings = carb.settings.get_settings()

# all settings stored under "/persistent" are saved between sessions
# run this snippet again after restarting the app to see that the value is still there:
key = "/persistent/exts/your.ext.name/test/value"
print("{}: {}".format(key, settings.get(key)))
settings.set(key, "string from previous session")

# Below is a setting with the location of the file where persistent settings are stored.
# To reset settings: delete it or run kit with `--reset-user`
print("persistent settings are stored in: {}".format(settings.get("/app/userConfigPath")))
```

### Subscribe To Setting Changes

```python
import carb.settings
import omni.kit.app

settings = carb.settings.get_settings()


def on_change(value, change_type: carb.settings.ChangeEventType):
    print(value, change_type)


# subscribe to value changes; the returned object is a subscription holder. To unsubscribe - destroy it.
subscription1 = omni.kit.app.SettingChangeSubscription("/exts/your.ext.name/test/test/value", on_change)

settings.set("/exts/your.ext.name/test/test/value", 23)
settings.set("/exts/your.ext.name/test/test/value", "fall")
settings.set("/exts/your.ext.name/test/test/value", None)
settings.set("/exts/your.ext.name/test/test/value", 89)

subscription1 = None  # no more notifications
settings.set("/exts/your.ext.name/test/test/value", 100)
```

### Kit Kernel Settings

#### `/app/enableStdoutOutput` (default: `true`)

Enable kernel standard output, e.g. when an extension starts.

#### `/app/disableCmdArgs` (default: `false`)

Disable processing of any command line arguments.

#### `/app/printConfig` (default: `false`)

Print all settings on startup.

#### `/app/settings/persistent` (default: `true`)

Enable saving persistent settings (`user.config.json`). Changed persistent settings (the `/persistent` namespace) are autosaved each frame.

#### `/app/settings/loadUserConfig` (default: `true`)

Enable loading persistent settings (`user.config.json`) on startup.

#### `/app/hangDetector/enabled` (default: `false`)

Enable the hang detector.

#### `/app/hangDetector/alwaysEnabled` (default: `false`)

If `true`, ignore the `/app/hangDetector/disableReasons` settings and keep the hang detector always enabled. Normally it is disabled during startup, and extensions can choose to disable it.

#### `/app/hangDetector/timeout` (default: `120`)

Hang detector timeout to trigger (in seconds).
#### `/app/quitAfter` (default: `-1`)

Automatically quit the app after X frames (if X is positive).

#### `/app/quitAfterMs` (default: `-1.0`)

Automatically quit the app after X milliseconds (if X is positive).

#### `/app/fastShutdown` (default: `false`)

Do not perform the full extension shutdown flow. Instead, only let subscribers handle the `IApp` shutdown event and terminate.

#### `/app/python/logSysStdOutput` (default: `true`)

Intercept and log all Python standard output in the carb logger (info level).
# Containers

Container is the base class for grouping items. It's possible to add children to the container with Python's `with` statement. It's not possible to reparent items; instead, it's necessary to remove the item and recreate a similar item under another parent.

## Transform

Transform is the container that propagates affine transformations to its children. It has properties to scale the items to screen space and orient the items to the current camera.

```python
import math

# imports assumed by this snippet (standard omni.ui conventions)
from omni.ui import color as cl
from omni.ui import scene as sc

line_count = 36
for i in range(line_count):
    weight = i / line_count
    angle = 2.0 * math.pi * weight
    # translation matrix
    move = sc.Matrix44.get_translation_matrix(
        8 * (weight - 0.5), 0.5 * math.sin(angle), 0)
    # rotation matrix
    rotate = sc.Matrix44.get_rotation_matrix(0, 0, angle)
    # the final transformation
    transform = move * rotate
    color = cl(weight, 1.0 - weight, 1.0)
    # create transform and put line to it
    with sc.Transform(transform=transform):
        sc.Line([0, 0, 0], [0.5, 0, 0], color=color)
```
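To see why `transform = move * rotate` rotates the line first and then translates it, here is a plain-Python sketch using explicit 4x4 matrices in place of `sc.Matrix44` (an assumption for illustration: row-major matrices applied to column vectors, so the rightmost factor acts first):

```python
import math


def translation(tx, ty, tz):
    """4x4 translation matrix (row-major)."""
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]


def rotation_z(angle):
    """4x4 rotation about the Z axis (row-major)."""
    c, s = math.cos(angle), math.sin(angle)
    return [[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]


def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]


def apply(m, p):
    """Apply a 4x4 matrix to a 3D point (homogeneous w = 1)."""
    v = [p[0], p[1], p[2], 1.0]
    return tuple(sum(m[i][k] * v[k] for k in range(4)) for i in range(3))


# move * rotate: the point is rotated first, then translated,
# so the line endpoint [0.5, 0, 0] sweeps a circle around the moved origin
transform = matmul(translation(2.0, 1.0, 0.0), rotation_z(math.pi / 2))
print(apply(transform, (0.5, 0.0, 0.0)))
```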
# Repo Overview

## Extensions

During the build phase, extensions are built (native) and staged (copied and linked) into the `_build/{platform}/{config}/exts` folder. A custom app (`.kit` file config) is used to enable those extensions.

In the end, each extension is a folder (or zip archive). You can write user code in Python only, C++ only, or both. Ultimately an extension archive can contain Python code, Python bindings (pyd/so files) and C++ plugins (dll/so). Each binary file is platform and configuration (debug/release) specific. For Python bindings naming we follow Python standards.

For more info refer to Kit documentation.

### example.python_ext

Example of a pure Python extension.

src: `source/extensions/example.python_ext`

### example.cpp_ext

Example of a native (C++ only) extension.

src: `source/extensions/example.cpp_ext`

### example.mixed_ext

Example of a mixed extension which has both C++ and Python code. They interact via Python bindings built and included with this extension.

src: `source/extensions/example.mixed_ext`

## Simple App

Example of an app which runs only those 3 extensions in Kit (plus test_runner for tests). All configs are in `source/apps`; they are linked during the build (stage phase).

src: `source/apps/omni.app.new_exts_demo_mini.kit`

```
_build\windows-x86_64\release\omni.app.new_exts_demo_mini.bat
```

## Running Kit from Python

It also includes an example of running Kit from Python, both default Kit and an app which runs only those 3 extensions in Kit.

```
_build\windows-x86_64\release\example.pythonapp.bat
```

That runs the default Python example; to see the list of examples:

```
_build\windows-x86_64\release\example.pythonapp.bat --help
```

Pass a different one as the first argument to run it.

## App that is deployed in the Launcher

Another app is included: `source/apps/omni.app.my_app.kit`. It demonstrates an app that ends up in Omniverse Launcher.
It has dependencies that come from the extension registry. Kit will automatically resolve and download missing extensions when started. But usually we download them at build time and package the final application with everything included, so it works offline.

That is done using the `repo precache_exts` tool. It runs after the build and starts **Kit** with a special set of flags to download all extensions. In the `[repo_precache_exts]` section of `repo.toml`, you can find the list of kit files it uses. It also locks the versions of all extensions, including implicit dependencies (2nd, 3rd, etc. order), and writes them back into the kit file. You can find the generated section at the end of the kit file. This version lock should then be committed. That provides reproducible builds and makes the kit file completely and immutably define the whole application. To regenerate the version lock, run `build -u`.

## Config files

- `premake5.lua` - all configuration for generating platform-specific build solutions. See the premake5 docs.
- `repo.toml` - configuration of all repo tools (build, package, format, etc.).

Notice `import_configs = ["${root}/_repo/deps/repo_kit_tools/kit-template/repo.toml"]` in `repo.toml`. That is a feature of `repo_man` to import another configuration. In practice it means that this `repo.toml` is merged later on top of the imported one. You can find many more shared settings in that file. The premake file also imports shared configuration with the `dofile("_repo/deps/repo_kit_tools/kit-template/premake5.lua")` line.

## CI / Teamcity

The Teamcity project runs on every commit. It builds both platforms, builds docs, and runs tests. Publishing is optional (click "Run" on the "publish" configuration). The Teamcity configuration is stored in the repo, in the `.teamcity` folder. All Teamcity entry points are in the `tools/ci` folder. It can also be easily copied in Teamcity when forking this project on GitLab.
# Omniverse Carbonite SDK ## Omniverse Carbonite SDK The Omniverse Carbonite SDK is the foundational software layer for Omniverse applications, microservices, tools, plugins, Connectors, and SDKs. The Omniverse Carbonite SDK provides the following high-level features: ### ABI Stable Interfaces The core of Carbonite provides tooling that enables developers to define and maintain ABI stable interfaces. We call these interfaces Omniverse Native Interfaces (ONI). These interfaces allow the creation of software components that are binary stable across compiler toolchains, OS revisions, and SDK releases. ### Plugins Carbonite provides a plugin system that allows the functionality of applications to be dynamically extended at runtime. ### Cross-Platform Abstractions In order to be useful on a variety of hardware and operating systems, Carbonite exposes ABI-stable interfaces which provide a uniform abstraction over low-level platform facilities. ### Inline Headers Carbonite contains a rich suite of well tested, efficient, cross-platform, general purpose inline headers. ### Diagnostics Universally useful diagnostic APIs for profiling, crash reporting, and telemetry are provided by Carbonite as first-class citizens. ## Building To build on Windows: ```shell build ``` To build on Linux: ```shell ./build.sh ``` ## Testing To run Carbonite’s unit tests on Windows: ```shell _build\windows-x86_64\release\test.unit.exe ``` ## License Carbonite is proprietary software of NVIDIA Corporation. License details can be found in the documentation. 
# Contents ## Top Level - Carbonite Plugins/Interfaces - Omniverse Native Interfaces - Deploying a Carbonite Application ## Components - Asserts - Audio - Crash Reporter - Function - Carbonite Input Plugin - Overview - Localization - Logging - Memory - Python Bindings - String - Tasking - Telemetry - Unicode ## Guides - ABI Compatibility - Building - Unity Builds - Testing - Packaging - Releasing - Using Valgrind - Carbonite Interface Walkthrough - Creating a New Omniverse Native Interface - Troubleshooting - Extending an Omniverse Native Interface Walkthrough - Using omni.bind ## Documenting - Documentation Guidelines - Restructured Text Guide - C++ Documentation Guide - Python Documentation Guide
# 3D Content Pack Installation For our Omniverse customers who need to run in a firewalled environment, here is how to configure the sample 3D content that comes with the Omniverse foundation applications. As there is currently a total of roughly 260GB of USD samples, materials and environments available, this document should help you identify the types of content packs that you need to provide to your internal Omniverse users to help them with their workflows. ## 3D Content Pack Download Process There are five steps for IT managers to follow to download and configure individual Omniverse content packs for firewalled environments and users. 1. **Identify**: The first step is to select which 3D content packs are required by your users. Given that users will often ask for content based on where it lives within certain Omniverse foundation app Browsers (e.g. “can I get all of the Base Materials in the Materials tab?”) this documentation organizes the downloadable packs by which Omniverse Browser they normally reside or which Omniverse Extension they relate to. See the following section on the various Omniverse browsers and extensions that include content. 2. **Download**: Once you’ve determined which content packs to download, the next step is to go to the Omniverse Enterprise Web Portal, and to click on the **Content** section to find all of the available archive files. When you find the pack that matches, click **Download**, and choose whether you’re downloading for a Windows or Linux workstation. The download will begin automatically for that pack. Given that many content packs are GBs in size, this process can take some time to complete. > Certain Omniverse foundation applications contain the same browsers, but the content available within them may be slightly different or reduced. Wherever possible, we have indicated which content packs in each browser are included with those apps. 3. **Unpack**: After each content pack is downloaded, you need to unzip it. 
We’ve tried to make the unpacking process as simple as possible. Each zip archive is configured to mirror the folder structure that exists on our AWS server, so all you have to do is create a top-level folder where you want ALL of your content to live, and then unpack the archives “as-is” into that root location. Doing so creates an exact copy of the NVIDIA folder structure normally available within every Nucleus install. By default, that top-level structure includes five (5) main folders (Assets / Demos / Environments / Materials / Samples):

![Content Structure](../_images/content_01.png)

Each of the downloadable content packs is set up to reflect these top-level folders, which should make organization of the assets themselves efficient and straightforward. For example, if you download the Base Materials pack, when you open the zip archive, you’ll see this:

![Base Materials Pack Structure](../_images/content_02.png)

By default, the content itself lives inside of the sub-folder and is called `package.zip`. If you decompress the entire archive, you’ll end up with a sub-folder, and when you open the `package.zip` file within it, you’ll see this:

![Decompressed Base Materials Pack](../_images/content_03.png)

At the root is a `/Materials` folder, which matches the online AWS configuration, and within it are the various sub-folders and files that make up the Base Materials MDL library. By unpacking the archive with the folder structure intact, you’ll ensure that your library matches the one that exists online.

> **Note**
>
> There are two additional files in each archive:
>
> - A PACKAGE-INFO.yaml file - this describes the package contents.
> - A PACKAGE-LICENSES folder with a text document pointing users to the Omniverse Terms of Use.
>
> Neither of these is required for your users to access the content packs, and they can be safely stored elsewhere or deleted upon the completion of unpacking each archive.

4. **Deploy**: In order to make the content visible within Omniverse for your users, you have a choice of how to deploy the content.

   - **Option 1**: Copy all of the content to a local hard disk drive location
   - **Option 2**: Copy all of the content to a shared Nucleus server behind your firewall

   Both options are straightforward: you simply need to transfer the entire content folder structure that you set up in Step 3 to a local physical hard drive location, or you can use Nucleus Navigator to copy that content folder to a shared internal Nucleus server that is accessible to your internal users.

5. **Update firewalled TOML files**: In order for users to see the local files instead of trying to “dial out” to get content via AWS, you need to add a set of known redirect paths to point those requests to your local hard disk or internal Enterprise Nucleus server. To do this, you must define the new content path root aliases within the `omniverse.toml` file stored in `~/.nvidia-omniverse/config` (Linux) or `%USERPROFILE%\.nvidia-omniverse\config` (Windows) for each user on the network.

If you have opted to place the content on a local hard disk drive location, add the following section in its entirety to the `omniverse.toml` file and replace the `<HardDrivePath>` variable with the folder you chose in step 4 to store all of the content packs.
```toml
[aliases]
"http://omniverse-content-production.s3.us-west-2.amazonaws.com" = "<HardDrivePath>"
"https://omniverse-content-production.s3.us-west-2.amazonaws.com" = "<HardDrivePath>"
"http://omniverse-content-production.s3-us-west-2.amazonaws.com" = "<HardDrivePath>"
"https://omniverse-content-production.s3-us-west-2.amazonaws.com" = "<HardDrivePath>"
"https://twinbru.s3.eu-west-1.amazonaws.com/omniverse" = "<HardDrivePath>"
```

As an example, if you have copied all of your content to the `C:\Temp\NVIDIA_Assets` folder on the local machine, the paths would look like this:

```toml
[aliases]
"http://omniverse-content-production.s3.us-west-2.amazonaws.com" = "C:\\Temp\\NVIDIA_Assets"
"https://omniverse-content-production.s3.us-west-2.amazonaws.com" = "C:\\Temp\\NVIDIA_Assets"
"http://omniverse-content-production.s3-us-west-2.amazonaws.com" = "C:\\Temp\\NVIDIA_Assets"
"https://omniverse-content-production.s3-us-west-2.amazonaws.com" = "C:\\Temp\\NVIDIA_Assets"
"https://twinbru.s3.eu-west-1.amazonaws.com/omniverse" = "C:\\Temp\\NVIDIA_Assets"
```

> **Note**
>
> Note the need for double backslashes (`\\`) in path names on Windows.

If you have opted to place the content on a shared Nucleus location, in the following section replace `<server_name>` with the actual name of your server (e.g. `<server_name>` is replaced with `localhost`).
```toml
[aliases]
"http://omniverse-content-production.s3.us-west-2.amazonaws.com" = "omniverse://<server_name>/<path>"
"https://omniverse-content-production.s3.us-west-2.amazonaws.com" = "omniverse://<server_name>/<path>"
"http://omniverse-content-production.s3-us-west-2.amazonaws.com" = "omniverse://<server_name>/<path>"
"https://omniverse-content-production.s3-us-west-2.amazonaws.com" = "omniverse://<server_name>/<path>"
"https://twinbru.s3.eu-west-1.amazonaws.com/omniverse" = "omniverse://<server_name>/<path>"
```

Once this process is complete, when a user launches their copy of an Omniverse foundation app, they should have direct access to the various content packs without the application trying to connect to the Internet.

# List of Browsers Accompanying Omniverse Foundation Applications

- **Core Content** (Strongly recommended that all users grab this)
- **NVIDIA Assets**
- **Environments**
- **Materials**
- **Showcases**
- **SimReady Explorer**
- **Examples**
- **Physics Demo Scenes**

Additionally, some content ships with specific Omniverse extensions, and if your users ask for any of these content packs by the extension they support, you can find them here:

- **AnimGraph**
- **Rendering**
- **Particles**
- **ActionGraph**
- **Warp**
- **Flow**

Some Omniverse foundation applications also include unique content packs as well. You can find them here:

- **Audio2Face**
- **IsaacSim**

# Core Omniverse App Content

When many Omniverse foundation applications start (USD Composer, USD Explorer, Code), they load a set of default scene templates, including the textured ground plane and lighting that comes up automatically. This pack should always be downloaded. It is very small but will help prevent errors in the console when an Omniverse application first starts in a firewalled environment.
This content pack includes all of the templates and is essential for firewalled environments.

- **Launcher Pack Name**: Default Scene Templates Pack
- **Included in Omniverse Apps**: USD Composer / USD Explorer / Code / USD Presenter
- **Pack**: `Scene_Templates_NVD@10010.zip`
- **Pack Size**: 24MB
- **Contents**: All of the scene templates that can be accessed from the **File->New from Stage Template** menu in Omniverse
- **Default Nucleus Location**: NVIDIA/Assets/Scenes/Templates

# Browsers Content

There are several different browsers where you can access and utilize content provided by NVIDIA. Some of these are visible by default (depending on which foundation Omniverse application you are running), while others are accessible via different menus inside of the applications. For each browser, here is a list of the content packs required.

## NVIDIA Assets Browser

There are 5 individual content pack downloads that encompass all of the visible content available within this browser.

- **Launcher Pack Name**: Commercial 3D Models Pack
- **Included in Omniverse Apps**: USD Composer / USD Explorer / Code
- **Pack**: `Commercial_NVD@10013.zip`
- **Pack Size**: 5.8GB
- **Contents**: Commercial furniture and entourage content
- **Default Nucleus Location**: NVIDIA/Assets/ArchVis/Commercial

> **Note**
>
> In order for the Commercial content within this pack to operate correctly, it needs to have the **Materials / Base Materials Pack** (Base_Materials_NVD@10012.zip) from the **Materials Browser** installed for the materials.
## Industrial 3D Models Pack

- **Launcher Pack Name**: Industrial 3D Models Pack
- **Included in Omniverse Apps**: USD Composer / USD Explorer / Code
- **Pack**: Industrial_NVD@10012.zip
- **Pack Size**: 1.8GB
- **Contents**: Industrial boxes/shelving and entourage content
- **Default Nucleus Location**: NVIDIA/Assets/ArchVis/Industrial

## Residential 3D Models Pack

- **Launcher Pack Name**: Residential 3D Models Pack
- **Included in Omniverse Apps**: USD Composer / Code
- **Pack**: Residential_NVD@10012.zip
- **Pack Size**: 22.5GB
- **Contents**: Residential furniture and entourage content
- **Default Nucleus Location**: NVIDIA/Assets/ArchVis/Residential

## Vegetation 3D Models Pack

- **Launcher Pack Name**: Vegetation 3D Models Pack
- **Included in Omniverse Apps**: USD Composer / Code
- **Pack**: Vegetation_NVD@10012.zip
- **Pack Size**: 2.7GB
- **Contents**: Selection of plants and tree content
- **Default Nucleus Location**: NVIDIA/Assets/Vegetation

## Warehouse 3D Models Pack

- **Launcher Pack Name**: Warehouse 3D Models Pack
- **Included in Omniverse Apps**: USD Composer / USD Explorer / Code
- **Pack**: Warehouse_NVD@10012.zip
- **Pack Size**: 18GB
- **Contents**: Digital Twin warehouse elements content
- **Default Nucleus Location**: NVIDIA/Assets/DigitalTwin/Assets/Warehouse

# Showcases Browser

In order for the content within this pack to operate correctly, it needs to have the **Examples Browser / Sample Scenes Pack** from the Examples Browser installed for the materials.
## Showcase Scenes 3D Models Pack

- **Launcher Pack Name**: Showcase Scenes 3D Models Pack
- **Included in Omniverse Apps**: USD Composer
- **Pack**: Showcases_Content_NVD@10010.zip
- **Pack Size**: 2.3GB
- **Contents**: Full warehouse and Ragnarok vehicle content
- **Default Nucleus Location**: NVIDIA/Samples/Showcases

# Materials Browser

There are 3 individual content pack downloads that encompass all of the visible content available through this browser. It is recommended that you download and install both the Base Materials and vMaterials 2 packs, as they are often used within other sample content.

## Base Materials Pack

- **Launcher Pack Name**: Base Materials Pack
- **Included in Omniverse Apps**: USD Composer / USD Explorer / Code
- **Pack**: Base_Materials_NVD@10012.zip
- **Pack Size**: 8.2GB
- **Contents**: Base Materials library
- **Default Nucleus Location**: NVIDIA/Materials/2023_1/Base

- **Launcher Pack Name**: VMaterials 2 Pack
- **Included in Omniverse Apps**: USD Composer / Code
- **Pack**: `vMaterials_2_2_1_NVD@20022.zip`
- **Pack Size**: 5.5GB
- **Contents**: vMaterials 2 library (v.
2.2.1 is the current release)
- **Default Nucleus Location**: NVIDIA/Materials/2023_1/vMaterials_2

- **Launcher Pack Name**: Automotive Materials Pack
- **Included in Omniverse Apps**: USD Composer / Code
- **Pack**: `Automotive_Materials_NVD@10010.zip`
- **Pack Size**: 21GB
- **Contents**: Automotive materials library
- **Default Nucleus Location**: NVIDIA/Materials/2023_1/Automotive

# Environments Browser

- **Launcher Pack Name**: Environments Skies Pack
- **Included in Omniverse Apps**: USD Composer / Code
- **Pack**: `Environments_NVD@10012.zip`
- **Pack Size**: 8.9GB
- **Contents**: HDRI skydomes and Dynamic sky environments
- **Default Nucleus Location**: NVIDIA/Environments/2023_1/DomeLights

- **Launcher Pack Name**: Environment Templates Pack
- **Included in Omniverse Apps**: USD Composer / Code
- **Pack**: `Environments_Templates_NVD@10010.zip`
- **Pack Size**: 16.0GB
- **Contents**: Templates that have been designed to assist with automotive presentations
- **Default Nucleus Location**: NVIDIA/Environments/2023_1/Templates

# SimReady Explorer

> **Note**
>
> There is some redundancy in files between packs that are shared across the entire library, but if you are only interested in a small subset of the content, you will still get all of the supporting materials and configuration files in any one downloaded content pack.
- **Launcher Pack Name**: SimReady Warehouse 01 3D Models Pack - **Included in Omniverse Apps**: USD Composer / Code - **Pack**: `SimReady_Warehouse_01_NVD@10010.zip` - **Pack Size**: 13.9GB - **Contents**: Warehouse elements (foot stools, ramps, shelving, pallets) - **Default Nucleus Location**: NVIDIA/Assets/simready_content - **Launcher Pack Name**: SimReady Warehouse 02 3D Models Pack - **Included in Omniverse Apps**: USD Composer / Code - **Pack**: `SimReady_Warehouse_02_NVD@10010.zip` - **Pack Size**: 20.5GB - **Contents**: Warehouse elements (pallets, racks, ramps) - **Default Nucleus Location**: NVIDIA/Assets/simready_content - **Launcher Pack Name**: SimReady Furniture & Misc 3D Models Pack - **Included in Omniverse Apps**: USD Composer / Code - **Pack**: `SimReady_Furniture_Misc_NVD@10010.zip` - **Pack Size**: 9.4GB - **Contents**: Assorted furniture and entourage elements (cones, chairs, sofas, utensils) - **Default Nucleus Location**: NVIDIA/Assets/simready_content - **Launcher Pack Name**: SimReady Containers & Shipping 01 3D Models Pack - **Included in Omniverse Apps**: USD Composer / Code - **Pack**: `SimReady_Containers_Shipping_01_NVD@10010.zip` - **Pack Size**: 21.4GB - **Contents**: Industrial elements (bins, boxes, cases, drums, buckets) - **Default Nucleus Location**: NVIDIA/Assets/simready_content - **Launcher Pack Name**: SimReady Containers & Shipping 02 3D Models Pack - **Included in Omniverse Apps**: USD Composer / Code - **Pack**: `SimReady_Containers_Shipping_02_NVD@10010.zip` - **Pack Size**: 20.6GB - **Contents**: Industrial elements (crates, jugs, IBC tank, bottles, etc.) 
- **Default Nucleus Location**: NVIDIA/Assets/simready_content

- **Launcher Pack Name**: AnimGraph Sample 3D Model Pack
- **Included in Omniverse Apps**: USD Composer
- **Pack**: `AnimGraph_NVD@10010.zip`
- **Pack Size**: 1.6GB
- **Contents**: This pack includes all of the character animation samples
- **Default Nucleus Location**: NVIDIA/Assets/AnimGraph

- **Launcher Pack Name**: Automotive Configurator 3D Models Pack
- **Included in Omniverse Apps**: USD Composer
- **Pack**: `Configurator_Content_NVD@10010.zip`
- **Pack Size**: 2.0GB
- **Contents**: This pack contains content for building automotive configurators
- **Default Nucleus Location**: NVIDIA/Assets/Configurator

- **Launcher Pack Name**: Sample Scenes 3D Models Pack
- **Included in Omniverse Apps**: USD Composer / Code
- **Pack**: `Sample_Scenes_NVD@10010.zip`
- **Pack Size**: 26.0GB
- **Contents**: High fidelity rendering scenes including the Astronaut, Marbles and the Old Attic datasets
- **Default Nucleus Location**: NVIDIA/Samples/Examples/2023_1/Rendering

**Note**: The **Sample_Scenes** pack is also needed for the **Showcases** content pack to work as expected.
- **Launcher Pack Name**: Particle Systems 3D Models Pack
- **Included in Omniverse Apps**: USD Composer
- **Pack**: `Particles_NVD@10010.zip`
- **Pack Size**: 159MB
- **Contents**: This pack includes all of the particle systems sample files
- **Default Nucleus Location**: NVIDIA/Assets/Particles

- **Launcher Pack Name**: Extensions Samples 3D Models Pack
- **Included in Omniverse Apps**: USD Composer / Code
- **Pack**: `Extensions_Samples_NVD@10010.zip`
- **Pack Size**: 900MB
- **Contents**: Contains sample data for Flow, Paint, Warp and ActionGraph extensions
- **Default Nucleus Location**: NVIDIA/Assets/Extensions/Samples

### Physics Demo Scenes Browser

- **Launcher Pack Name**: Physics Demo Scenes 3D Models Pack
- **Included in Omniverse Apps**: USD Composer / Code
- **Pack**: `Physics_Scenes_NVD@10010.zip`
- **Pack Size**: 5.5GB
- **Contents**: All of the current Physics sample scene files that can be loaded from the Demo Scenes tab
- **Default Nucleus Location**: Not in a public Nucleus folder

# Extensions Content

Extension content is mostly covered within the various Browsers inside of the foundation Omniverse applications. But if you're interested in a specific extension and the content that showcases it, here's a list of which downloadable pack contains that content.
## AnimGraph Samples

- **Launcher Pack Name**: AnimGraph Sample 3D Models Pack
- **Included in Omniverse Apps**: USD Composer / Code
- **Pack**: `AnimGraph_NVD@10010.zip`
- **Pack Size**: 1.6GB
- **Contents**: Character and motion sample data for use with AnimGraph
- **Default Nucleus Location**: NVIDIA/Assets/AnimGraph

## Rendering Samples

- **Launcher Pack Name**: Sample Scenes 3D Models Pack
- **Included in Omniverse Apps**: USD Composer / Code
- **Pack**: `Sample_Scenes_NVD@10010.zip`
- **Pack Size**: 26.0GB
- **Contents**: High fidelity rendering scenes including the Astronaut, Marbles and the Old Attic datasets
- **Default Nucleus Location**: NVIDIA/Samples/Examples/2023_1/Rendering

## Particle Systems Presets

- **Launcher Pack Name**: Particle Systems 3D Models Pack
- **Included in Omniverse Apps**: USD Composer
- **Pack**: `Particles_NVD@10010.zip`
- **Pack Size**: 159MB
- **Contents**: Particle systems presets
- **Default Nucleus Location**: NVIDIA/Samples/Examples/2023_1/Visual Scripting

### Note

The Ocean sample files are installed locally with the omni.ocean extension and can be found in the following Omniverse install location (USD Composer):

- **/Omniverse/Library/prod-create-2023.1.1/extscache/omni.ocean-0.4.1/data**

Any installed Omniverse foundation application that includes the omni.ocean extension will include these files, so you can substitute the library app path (e.g. `prod-create-2023.1.1`) with the one for the app installed on your machine.
## ActionGraph Samples

- **ActionGraph**: Available within the Examples Browser under the **Visual Scripting** header
- **Launcher Pack Name**: Sample Scenes 3D Models Pack
- **Included in Omniverse Apps**: USD Composer / Code
- **Pack**: `Sample_Scenes_NVD@10010.zip`
- **Pack Size**: 26.0GB
- **Contents**: Tutorial samples for OmniGraph
- **Default Nucleus Location**: NVIDIA/Samples/Examples/2023_01/Visual Scripting

## Warp Samples

- **Warp**: Available within the Examples Browser under the **Warp** header
- **Launcher Pack Name**: Extensions Samples 3D Models Pack
- **Included in Omniverse Apps**: USD Composer / Code
- **Pack**: `Extensions_Samples_NVD@10010.zip`
- **Pack Size**: 878MB
- **Contents**: Tutorial samples for the Warp extension
- **Default Nucleus Location**: NVIDIA/Samples/Examples/Warp

### Note

Some of the sample files are installed locally with the omni.warp extension and can be found in the following Omniverse install location (USD Composer):

- **/Omniverse/Library/extscache/omni.warp-0.8.2/data/scenes**

Any installed Omniverse foundation application that includes the omni.warp extension will include these files, so you can substitute the library app path (e.g. `prod-create-2023.1.1`) with the one for the app installed on your machine.
## Flow Presets

- **Flow**: Accessed through the **Window -> Simulation -> Flow Presets** menu
- **Launcher Pack Name**: Extensions Samples 3D Models Pack
- **Included in Omniverse Apps**: USD Composer
- **Pack**: `Extensions_Samples_NVD@10010.zip`
- **Pack Size**: 878MB
- **Contents**: Tutorial samples for the Flow simulation extension
- **Default Nucleus Location**: NVIDIA/Samples/Examples/Flow

## XR Samples

- **XR**: This content is accessed directly from within the Nucleus Content browser
- **Launcher Pack Name**: XR Samples 3D Models Pack
- **Included in Omniverse Apps**: USD Composer
- **Pack**: `XR_Content_NVD@10010.zip`
- **Pack Size**: 5.3GB
- **Contents**: Legacy Create XR templates and stages for working in XR environments
- **Default Nucleus Location**: NVIDIA/Assets/XR

## Core Demos

- **Core Demos**: This content is accessed directly from within the Nucleus Content browser
- **Launcher Pack Name**: Core Demo Samples 3D Models Pack
- **Included in Omniverse Apps**: USD Composer
- **Pack**: `Core_Demos_NVD@10010.zip`
- **Pack Size**: 8.9GB
- **Contents**: Contains multiple demo scenes for MFG, Cloudmaker, Connect and Warehouse Physics
- **Default Nucleus Location**: NVIDIA/Demos

# Foundation Apps Specific Content

## Audio2Face App

- **Launcher Pack Name**: Audio2Face Sample 3D Models Pack
- **Included in Omniverse Apps**: Audio2Face / USD Composer
- **Pack**: `A2F_Content_NVD@10010.zip`
- **Pack Size**: 6.2GB
- **Contents**: All of the core Audio2Face sample content that is available within the Example Browser in the Audio2Face app
- **Default Nucleus Location**: NVIDIA/Assets/Audio2Face

## IsaacSim

Isaac Sim content is versioned in folders inside of NVIDIA/Assets/Isaac on AWS, and the zip archives are configured to mimic this folder structure. Each version of Isaac Sim uses only its specific versioned folder. In the Launcher, the Isaac Sim assets are split into three content packs.
- **Launcher Pack Name**: Isaac Sim Assets Pack 1
- **Included in Omniverse Apps**: IsaacSim
- **Pack Size**: 19.9GB
- **Contents**: All of the core Isaac Sim sample content plus dependencies from the /NVIDIA folder. Split into 3 packs.
- **Default Nucleus Location**: NVIDIA/Assets/Isaac/2023.1.0

- **Launcher Pack Name**: Isaac Sim Assets Pack 2
- **Included in Omniverse Apps**: IsaacSim
- **Pack Size**: 27.8GB
- **Contents**: All of the core Isaac Sim sample content plus dependencies from the /NVIDIA folder. Split into 3 packs.
- **Default Nucleus Location**: NVIDIA/Assets/Isaac/2023.1.0

- **Launcher Pack Name**: Isaac Sim Assets Pack 3
- **Included in Omniverse Apps**: IsaacSim
- **Pack Size**: 28.4GB
- **Contents**: All of the core Isaac Sim sample content plus dependencies from the /NVIDIA folder. Split into 3 packs.
- **Default Nucleus Location**: NVIDIA/Assets/Isaac/2023.1.0

# Extra Content Packs

## Datacenter

- **Launcher Pack Name**: Datacenter 3D Models Pack
- **Included in Omniverse Apps**: USD Composer
- **Pack**: `Datacenter_NVD@10011.zip`
- **Pack Size**: 187MB
- **Contents**: Datacenter assets for creating digital twins
- **Default Nucleus Location**: NVIDIA/Assets/DigitalTwin/Assets/Datacenter

## AECXR

- **Launcher Pack Name**: AEC XR 3D Models Pack
- **Included in Omniverse Apps**: USD Composer
- **Pack**: `AEC_XR_NVD@10012.zip`
- **Pack Size**: 13MB
- **Contents**: Architectural AEC elements for XR testing
- **Default Nucleus Location**: No default location on Nucleus - only available through download
# Overview — Omniverse Kit 1.7.8 documentation

## Overview

The layer widget extension provides a widget for viewing and interacting with the USD layers in the local layer stack. By default, the widget displays the current layer prim hierarchy, with one column, namely the prim name.

### Functionality

#### Searching

In the search field, users can type in filter text for prim paths to search for prims with matching keywords.

#### Options Menu

In the options menu, users can toggle layer widget options on or off, or reset them.

#### Context Menu

Right clicking in the layer widget will display the layer widget context menu. Context includes the current USD stage context, prim selection, hovered prim, etc. For more details on context menu items, please refer to `omni.kit.widget.layer.ContextMenu`.

### Insert/Create/Remove sublayer

Clicking the "Insert Sublayer" button at the bottom of the widget, or the corresponding item in the context menu, opens a layer dialog that allows users to pick a new sublayer path. For the newly inserted sublayer, there are new buttons that allow users to save, mute, or lock the layer.
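The search behaviour described under *Searching* boils down to case-insensitive keyword matching against prim paths. The sketch below illustrates the idea in plain Python; the function is purely illustrative and is not part of the widget's API:

```python
def filter_prim_paths(paths, keyword):
    """Return the prim paths that contain the filter keyword (case-insensitive)."""
    keyword = keyword.lower()
    return [path for path in paths if keyword in path.lower()]


paths = ["/World/Car/Wheel_FL", "/World/Car/Wheel_FR", "/World/Ground"]
assert filter_prim_paths(paths, "wheel") == ["/World/Car/Wheel_FL", "/World/Car/Wheel_FR"]
```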
# Contributing to the URDF Importer Extension

## Did you find a bug?

- Check in the GitHub Issues if a report for your bug already exists.
- If the bug has not been reported yet, open a new Issue.
- Use a short and descriptive title which contains relevant keywords.
- Write a clear description of the bug.
- Document the environment including your operating system, compiler version, and hardware specifications.
- Add code samples and executable test cases with instructions for reproducing the bug.

## Did you find an issue in the documentation?

- Please create an Issue if you find a documentation issue.

## Did you write a bug fix?

- Open a new Pull Request with your bug fix.
- Write a description of the bug which is fixed by your patch, or link to related Issues.
- If your patch fixes, for example, Issue #33, write `Fixes #33`.
- Explain your solution in a few words.

## Did you write a cosmetic patch?

- Patches that are purely cosmetic will not be considered, and associated Pull Requests will be closed.
- Cosmetic patches are those which do not improve stability, performance, functionality, etc.
- Examples of cosmetic patches: code formatting, fixing whitespace.

## Do you have a question?

- Search the GitHub Discussions for your question.
- If nobody has asked your question before, feel free to open a new discussion.
- Once somebody shares a satisfying answer to your question, click "Mark as answer".
- GitHub Issues should only be used for bug reports.
- If you open an Issue with a question, we may convert it into a discussion.
# Kit C++ Extension Template

## Omniverse Kit C++ Extension Template

This project contains everything necessary to develop extensions that contain C++ code, along with a number of examples demonstrating best practices for creating them.

### What Are Extensions?

While an extension can consist of a single `extension.toml` file, most contain Python code, C++ code, or a mixture of both:

```
                                         Kit
     ____________________________________|____________________________________
            |                            |                           |
       Python Only                    C++ Only                     Mixed
(eg. omni.example.python.     (eg. omni.example.cpp.      (eg. omni.example.cpp.
       hello_world)                  hello_world)                  pybind)
```

Extensive documentation detailing what extensions are and how they work can be found [here](https://docs.omniverse.nvidia.com/py/kit/docs/guide/extensions.html).

### Getting Started

1. Clone the [GitHub repo](https://github.com/NVIDIA-Omniverse/kit-extension-template-cpp) to your local machine.
2. Open a command prompt and navigate to the root of your cloned repo.
3. Run `build.bat` to bootstrap your dev environment and build the example extensions.
4. Run `_build\{platform}\release\omni.app.example.extension_browser.bat` to open an example kit application.
   - Run `omni.app.example.viewport.bat` instead if you want the renderer and main viewport to be enabled.
   - Run `omni.app.kit.dev.bat` instead if you want the full kit developer experience to be enabled.
5. From the menu, select `Window->Extensions` to open the extension browser window.

# Debugging C++ Extensions

1. Run `build.bat` (if you haven't already) to generate the solution file.
2. Open `_compiler\vs2019\kit-extension-template-cpp.sln` using Visual Studio 2019.
3. Select `omni.app.example.extension_browser` as the startup project (if it isn't already).
   - Select `omni.app.example.viewport` instead if you want the renderer and main viewport to be enabled.
   - Select `omni.app.kit.dev` instead if you want the full kit developer experience to be enabled.
4.
Run/debug the example kit application, using the extension browser window to enable/disable extensions.

# Creating New C++ Extensions

1. Copy one of the existing extension examples to a new folder within the `source/extensions` folder.
   - The name of the new folder will be the name of your new extension.
   - The **omni** prefix is reserved for NVIDIA applications and extensions.
2. Update the fields in your new extension's `config/extension.toml` file as necessary.
3. Update your new extension's `premake5.lua` file as necessary.
4. Update your new extension's C++ code in the `plugins` folder as necessary.
5. Update your new extension's Python code in the `python` folder as necessary.
6. Update your new extension's Python bindings in the `bindings` folder as necessary.
7. Update your new extension's documentation in the `docs` folder as necessary.
8. Run `build.bat` to build your new extension.
9. Refer to the *Getting Started* section above to open the example kit application and extension browser window.
10. Enter the name of your new extension in the search bar at the top of the extension browser window to view it.

# Generating Documentation

1. Run `repo.bat docs` to generate the documentation for the repo, including all extensions it contains.
   - You can generate the documentation for a single extension by running `repo.bat docs -p {extension_name}`
2. Open `_build/docs/kit-extension-template-cpp/latest/index.html` to view the generated documentation.

# Publishing

Developers can publish publicly hosted extensions to the community extension registry using the following steps:

1. Tag the GitHub repository with the **[omniverse-kit-extension](https://github.com/topics/omniverse-kit-extension)** tag.
2. Create a [GitHub release](https://docs.github.com/en/repositories/releasing-projects-on-github/managing-releases-in-a-repository).
3. Upload the packaged extension archives, created with `./repo.bat package` on Windows or `./repo.sh package` on other platforms.
# Publishing Extensions

To publish an extension, you must package it and upload it to a GitHub release. This section describes the process of packaging an extension and the naming convention for the packaged extension archive.

## Packaging

Package your extension and upload the resulting archives to the GitHub release. You must rename each packaged extension archive to match the following convention:

- **Linux:** `{github-namespace}-{github-repository}-linux-x86_64-{github-release-tag}.zip`
- **Windows:** `{github-namespace}-{github-repository}-windows-x86_64-{github-release-tag}.zip`

For example, the v0.0.2 release of the extension has archives named `jshrake-nvidia-kit-community-release-test-linux-x86_64-v0.0.2.zip` and `jshrake-nvidia-kit-community-release-test-windows-x86_64-v0.0.2.zip` for Linux and Windows, respectively.

Our publishing pipeline runs nightly and discovers any publicly hosted GitHub repository with the `omniverse-kit-extension` tag. The published extensions should be visible in the community registry the day following the creation of a GitHub release.

Refer to the kit extension documentation for how to specify the Kit version compatibility for your extension. This ensures that the correct version of your extension is listed in any given Kit application.

# Contributing

The source code for this repository is provided as-is and we are not accepting outside contributions.

# Example Extensions

- omni.example.cpp.actions
- omni.example.cpp.commands
- omni.example.cpp.hello_world
- omni.example.cpp.omnigraph_node
- omni.example.cpp.pybind
- omni.example.cpp.ui_widget
- omni.example.cpp.usd
- omni.example.cpp.usd_physics
- omni.example.cpp.usdrt
- omni.example.python.hello_world
- omni.example.python.ui
- omni.example.python.usdrt
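The archive naming convention above is just a fixed template over the repository details. The following Python sketch (the helper function is ours, purely illustrative, and not part of any repo tooling) builds the expected archive name and reproduces the v0.0.2 example from the text:

```python
def packaged_archive_name(namespace: str, repository: str, platform: str, release_tag: str) -> str:
    """Build the archive name required by the publishing pipeline's naming convention."""
    return f"{namespace}-{repository}-{platform}-x86_64-{release_tag}.zip"


# Matches the example archive name given for the v0.0.2 release
assert packaged_archive_name("jshrake-nvidia", "kit-community-release-test", "linux", "v0.0.2") == (
    "jshrake-nvidia-kit-community-release-test-linux-x86_64-v0.0.2.zip"
)
```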
# Controller Class

The primary interface you can use for interacting with OmniGraph is the `omni.graph.core.Controller`. It is accessible through the main module like this:

```python
import omni.graph.core as og

keys = og.Controller.Keys  # Usually handy to keep this name short as it is used often
controller = og.Controller()
```

**Note**

For the examples below, the above initialization code will be assumed to be present.

## Structure

The `omni.graph.core.Controller` class is an amalgam of several other classes with specific subsets of functionality. It derives from each of them, so that all of their functionality can be accessed through a single class.

- `omni.graph.core.GraphController` handles operations that affect the structure of the graph
- `omni.graph.core.NodeController` handles operations that affect individual nodes
- `omni.graph.core.ObjectLookup` provides a generic set of interfaces for finding OmniGraph objects with a flexible set of inputs
- `omni.graph.core.DataView` lets you get and set attribute values

For the most part the controller functions can be accessed as either static class methods or as regular object methods:

```python
await og.Controller.evaluate()
await controller.evaluate()
```

The main difference between the two is that the object-based calls can maintain state information to simplify the arguments to future calls. Each of the base classes has its own constructor parameters. The ones accepted by the main controller constructor are as follows:

```python
"""Set up state information. You only need to create an instance of the Controller if you are going to
use the edit() function more than once, when it needs to remember the node mapping used for creation.

Args are passed on to the parent classes who have inits and interpreted by them as they see fit.

Args:
    graph_id: If specified then operations are performed on this graph_id unless it is overridden in a
        particular function call.
        See GraphController.create_graph() for the data types accepted for the graph description.
    path_to_object_map: Dictionary of relative paths mapped on to their full path after creation so
        that the edit_commands can use either full paths or short-forms to specify the nodes.
    update_usd: If specified then override whether to update the USD after the operations
        (default False)
    undoable: If True the operation is added to the undo queue, else it is done immediately and
        forgotten (default True)
    allow_exists_attr: If True then attribute creation operation won't fail when the attribute
        already exists on the node it was being added to (default False)
    allow_exists_node: If True then node creation operation won't fail when the node already exists
        in the scene graph (default False)
    allow_exists_prim: If True then prim creation operation won't fail when the prim already exists
        in the scene graph (default False)

Check the help information for GraphController.__init__(), NodeController.__init__(),
DataView.__init__(), and ObjectLookup.__init__() for details on what other constructor arguments
are accepted.
"""
```

## Controller Functions

In addition to the functions inherited from the other classes the `Controller` has some of its own functions.

### evaluate()/evaluate_sync()

These are sync and async versions of a function to evaluate a graph or list of graphs.

```python
# This version evaluates all graphs asynchronously
await og.Controller.evaluate()

# This version evaluates the graph defined by the controller object synchronously
controller.evaluate_sync()

# This version evaluates the single named graph synchronously
og.Controller.evaluate_sync(graph)
```

### edit()

The workhorse of the controller class is the `edit()` method. It provides simple access to a lot of the underlying functionality for manipulating the contents of OmniGraph. One of the most typical operations you will perform in a script is to create a predefined graph and populate it with connections and values.
Here is a comprehensive example that illustrates how to use all of the features in an edit call:

```python
# Keywords are shown for the edit arguments, however they are not required
(graph, nodes, prims, name_to_object_map) = og.Controller.edit(
    # First parameter is a graph reference, created if it doesn't already exist.
    # See omni.graph.core.GraphController.create_graph() for more options
    graph_id="/World/MyGraph",
    # Second parameter is a dictionary
    edit_commands={
        # Delete a node or list of nodes that already existed in the graph
        # See omni.graph.core.GraphController.delete_nodes() for more options
        keys.DELETE_NODES: ["MyNode"],
        # Create new nodes in the graph - the resulting og.Nodes are returned in the "nodes" part of the tuple
        # See omni.graph.core.GraphController.create_node() for more options
        keys.CREATE_NODES: [
            ("src", "omni.graph.tutorials.SimpleData"),
            ("dst", "omni.graph.tutorials.SimpleData"),
            # Create a new compound node in the graph. Note that the nodes created in the compound are not
            # returned in the "nodes" part of the tuple, but are inserted into name_to_object_map.
            ("compound", {
                keys.CREATE_NODES: [("subNode", "omni.graph.tutorials.SimpleData")],
                # Expose the compound node's subNode ports as part of the compound node
                # by specifying the attribute to promote and the name of the attribute on the
                # compound node
                keys.PROMOTE_ATTRIBUTES: [("subNode.inputs:a_float", "inputs:attr0")],
            }),
        ],
        # Create new (dynamic) attributes on some nodes.
        # See omni.graph.core.NodeController.create_attribute() for more options
        keys.CREATE_ATTRIBUTES: [
            ("src.inputs:dyn_float", "float"),
            ("dst.inputs:dyn_any", "any"),
        ],
        # Create new prims in the stage - the resulting Usd.Prims are returned in the "prims" part of the tuple
        # See omni.graph.core.GraphController.create_prims() for more options
        keys.CREATE_PRIMS: [
            ("Prim1", {"attrFloat": ("float", 2.0)}),
            ("Prim2", {"attrBool": ("bool", True)}),
        ],
        # Expose one of the prims to OmniGraph by creating a USD import node to read it as a bundle.
        # The resulting node is in the "nodes" part of the tuple, after any native OmniGraph node types.
        # See omni.graph.core.GraphController.expose_prims() for more options
        keys.EXPOSE_PRIMS: [(og.Controller.PrimExposureType.AS_BUNDLE, "Prim1", "Prim1Exposed")],
        # Connect a source output to a destination input to create a flow of data between the two nodes
        # See omni.graph.core.GraphController.connect() for more options
        keys.CONNECT: [("src.outputs:a_int", "dst.inputs:a_int")],
        # Disconnect an already existing connection.
        # See omni.graph.core.GraphController.disconnect() for more options
        keys.DISCONNECT: [
            ("/World/MyGraph/MyClassNode.outputs:a_bool", "/World/MyGraph/KnownNode.inputs:a_bool")
        ],
        # Define an attribute's value (inputs and state attributes only - outputs are computed)
        # See omni.graph.core.DataView.set() for more options
        keys.SET_VALUES: [("src.inputs:a_int", 5)],
        # Create graph-local variable values
        # See omni.graph.core.GraphController.create_variables() for more options
        keys.CREATE_VARIABLES: [("a_float_var", og.Type(og.BaseDataType.FLOAT)), ("a_bool_var", "bool")],
    },
    # Parameters from here down could also be saved as part of the object and reused repeatedly if you had
    # created a controller rather than calling edit() as a class method
    path_to_object_map=None,  # Saved object-to-path map, bootstraps any created as part of the call
    update_usd=True,  # Immediately echo the changes to the underlying USD
    undoable=True,  # If False then do not remember previous state - useful for writing tests
    allow_exists_node=False,  # If True then silently succeed requests to create already existing nodes
    allow_exists_prim=False,  # If True then silently succeed requests to create already existing prims
)
```

See the method documentation for more details on what types of arguments can be accepted for each of the parameters.

## ObjectLookup

This class contains the functions you will probably use the most. It provides an extremely flexible method for looking up OmniGraph objects from the information you have on hand. The specs it accepts as arguments can be seen in the class documentation. The subsection titles link to the actual method documentation, while the content consists of a simple example of how to use that method in a typical script.
Each subsection assumes a graph set up through the following initialization code:

```python
import omni.graph.core as og
import omni.usd
from pxr import OmniGraphSchema, Sdf

keys = og.Controller.Keys

# Note that when you extract the parameters this way it is important to have the trailing "," in the node
# and prim tuples so that Python doesn't try to interpret them as single objects.
(graph, (node,), (prim,), _) = og.Controller.edit(
    "/World/MyGraph",
    {
        keys.CREATE_NODES: ("MyNode", "omni.graph.test.TestAllDataTypes"),
        keys.CREATE_PRIMS: ("MyPrim", {"myFloat": ("float", 0)}),
        keys.CREATE_VARIABLES: ("MyVariable", "float"),
    },
)
assert prim.IsValid()
attribute = node.get_attribute("inputs:a_bool")
```

```python
relationship_attribute = node.get_attribute("inputs:a_target")
attribute_type = attribute.get_resolved_type()
node_type = node.get_node_type()
variable = graph.get_variables()[0]
stage = omni.usd.get_context().get_stage()
```

## Graph

```python
# Look up the graph directly from itself (to simplify code)
assert graph == og.Controller.graph(graph)

# Look up the graph by path
assert graph == og.Controller.graph("/World/MyGraph")

# Look up the graph by Usd Prim
graph_prim = stage.GetPrimAtPath("/World/MyGraph")
assert graph == og.Controller.graph(graph_prim)

# Look up by a Usd Schema object
graph_schema = OmniGraphSchema.OmniGraph(graph_prim)
assert graph == og.Controller.graph(graph_schema)

# Look up the graph by SdfPath
path = Sdf.Path("/World/MyGraph")
assert graph == og.Controller.graph(path)

# Look up a list of graphs by passing a list of any of the above
for new_graph in og.Controller.graph([graph, "/World/MyGraph", path]):
    assert graph == new_graph
```

## Node

```python
# Look up the node directly from itself (to simplify code)
assert node == og.Controller.node(node)

# Look up the node by path
assert node == og.Controller.node("/World/MyGraph/MyNode")

# Look up the node by partial path and graph.
# The graph parameter can be any of the ones supported by og.Controller.graph()
assert node == og.Controller.node(("MyNode", graph))

# Look up the node by SdfPath
node_path = Sdf.Path("/World/MyGraph/MyNode")
assert node == og.Controller.node(node_path)

# Look up the node from its underlying USD prim backing, if it exists
node_prim = stage.GetPrimAtPath("/World/MyGraph/MyNode")
assert node == og.Controller.node(node_prim)

# Look up a list of nodes by passing a list of any of the above
for new_node in og.Controller.node([node, "/World/MyGraph/MyNode", ("MyNode", graph), node_path, node_prim]):
    assert node == new_node
```

## attribute

`attribute()` finds an `og.Attribute` object.

```python
# Look up the attribute directly from itself (to simplify code)
assert attribute == og.Controller.attribute(attribute)

# Look up the attribute by path
assert attribute == og.Controller.attribute("/World/MyGraph/MyNode.inputs:a_bool")

# Look up the attribute by SdfPath
assert attribute == og.Controller.attribute(Sdf.Path("/World/MyGraph/MyNode.inputs:a_bool"))

# Look up the attribute by name and node
assert attribute == og.Controller.attribute(("inputs:a_bool", node))

# These can chain, so you can also look up the attribute by name and node, where node is further looked up by
# relative path and graph
assert attribute == og.Controller.attribute(("inputs:a_bool", ("MyNode", graph)))

# Look up the attribute by SdfPath
attr_path = Sdf.Path("/World/MyGraph/MyNode.inputs:a_bool")
assert attribute == og.Controller.attribute(attr_path)

# Look up the attribute through its Usd counterpart
stage = omni.usd.get_context().get_stage()
node_prim = stage.GetPrimAtPath("/World/MyGraph/MyNode")
usd_attribute = node_prim.GetAttribute("inputs:a_bool")
assert attribute == og.Controller.attribute(usd_attribute)

# Look up a list of attributes by passing a list of any of the above
for new_attribute in og.Controller.attribute(
    [
        attribute,
        "/World/MyGraph/MyNode.inputs:a_bool",
        Sdf.Path("/World/MyGraph/MyNode.inputs:a_bool"),
        ("inputs:a_bool", node),
        ("inputs:a_bool", ("MyNode", graph)),
        attr_path,
        usd_attribute,
    ]
):
    assert attribute == new_attribute
```

## attribute_type

`attribute_type()` finds an `og.Type` object.

```python
attribute_type = og.Type(og.BaseDataType.FLOAT, tuple_count=3, array_depth=1, role=og.AttributeRole.POSITION)

# Look up the attribute type by OGN type name
assert attribute_type == og.Controller.attribute_type("pointf[3][]")

# Look up the attribute type by SDF type name
assert attribute_type == og.Controller.attribute_type("point3f[]")

# Look up the attribute type directly from itself (to simplify code)
assert attribute_type == og.Controller.attribute_type(attribute_type)

# Look up the attribute type from the attribute with that type
point_attribute = og.Controller.attribute(("inputs:a_pointf_3_array", node))
assert attribute_type == og.Controller.attribute_type(point_attribute)

# Look up the attribute type from the attribute data whose attribute has that type (most commonly done with
# attributes that have extended types or attributes belonging to bundles)
point_attribute_data = point_attribute.get_attribute_data()
assert attribute_type == og.Controller.attribute_type(point_attribute_data)
```

## node_type

```python
node_type = node.get_node_type()

# Look up the node type directly from itself (to simplify code)
assert node_type == og.Controller.node_type(node_type)

# Look up the node type from the string that uniquely identifies it
assert node_type == og.Controller.node_type("omni.graph.test.TestAllDataTypes")

# Look up the node type from the node with that type
assert node_type == og.Controller.node_type(node)

# Look up the node type from the USD Prim backing a node of that type
assert node_type == og.Controller.node_type(node_prim)

# Look up a list of node types by passing a list of any of the above
for new_node_type in og.Controller.node_type(
    [
        node_type,
        "omni.graph.test.TestAllDataTypes",
        node,
        node_prim,
    ]
):
    assert node_type == new_node_type
```

## prim

```python
# Look up the prim directly from itself (to simplify code)
assert node_prim == og.Controller.prim(node_prim)

# Look up the prim from the prim path as a string
assert node_prim == og.Controller.prim("/World/MyGraph/MyNode")

# Look up the prim from the Sdf.Path pointing to the prim
assert node_prim == og.Controller.prim(node_path)

# Look up the prim from the OmniGraph node for which it is the backing
assert node_prim == og.Controller.prim(node)

# Look up the prim from the (node_path, graph) tuple defining the OmniGraph node for which it is the backing
assert node_prim == og.Controller.prim(("MyNode", graph))

# Look up the prim from an OmniGraph graph
assert graph_prim == og.Controller.prim(graph)

# Look up a list of prims by passing a list of any of the above
for new_prim in og.Controller.prim(
    [
        node_prim,
        "/World/MyGraph/MyNode",
        node_path,
        node,
        ("MyNode", graph),
    ]
):
    assert node_prim == new_prim
```

## usd_attribute

`usd_attribute()` finds a `Usd.Attribute` object.

```python
# USD attributes can be looked up with the same parameters as for looking up an OmniGraph attribute
# Look up the USD attribute directly from itself (to simplify code)
assert usd_attribute == og.Controller.usd_attribute(usd_attribute)

# Look up the USD attribute by path
assert usd_attribute == og.Controller.usd_attribute("/World/MyGraph/MyNode.inputs:a_bool")

# Look up the USD attribute by name and node
assert usd_attribute == og.Controller.usd_attribute(("inputs:a_bool", node))

# These can chain, so you can also look up the USD attribute by name and node, where node is further looked up by
# relative path and graph
assert usd_attribute == og.Controller.usd_attribute(("inputs:a_bool", ("MyNode", graph)))

# Look up the USD attribute by SdfPath
attr_path = Sdf.Path("/World/MyGraph/MyNode.inputs:a_bool")
assert usd_attribute == og.Controller.usd_attribute(attr_path)

# Look up the USD attribute through its OmniGraph counterpart
assert usd_attribute == og.Controller.usd_attribute(attribute)

# Look up a list of attributes by passing a list of any of the above
for new_usd_attribute in og.Controller.usd_attribute(
    [
        usd_attribute,
        "/World/MyGraph/MyNode.inputs:a_bool",
        ("inputs:a_bool", node),
        ("inputs:a_bool", ("MyNode", graph)),
        attr_path,
        attribute,
    ]
):
    assert usd_attribute == new_usd_attribute
```

## variable

`variable()` finds an `og.IVariable` object.

```python
# Look up the variable directly from itself (to simplify code)
assert variable == og.Controller.variable(variable)

# Look up the variable from a tuple with the variable name and the graph to which it belongs
assert variable == og.Controller.variable((graph, "MyVariable"))

# Look up the variable from a path string pointing directly to it
assert variable == og.Controller.variable(variable.source_path)

# Look up the variable from an Sdf.Path pointing directly to it
variable_path = Sdf.Path(variable.source_path)
assert variable == og.Controller.variable(variable_path)

# Look up a list of variables by passing a list of any of the above
for new_variable in og.Controller.variable(
    [
        variable,
        (graph, "MyVariable"),
        variable.source_path,
        variable_path,
    ]
):
    assert variable == new_variable
```

## Utilities

In addition to type lookups there are a few methods that provide utility functions related to these lookups.
- `attribute_path()`
- `node_path()`
- `prim_path()`
- `split_graph_from_node_path()`

```python
# Look up the path to an attribute given any of the types an attribute lookup recognizes
for attribute_spec in [
    attribute,
    "/World/MyGraph/MyNode.inputs:a_bool",
    ("inputs:a_bool", node),
    ("inputs:a_bool", ("MyNode", graph)),
    attr_path,
    usd_attribute,
]:
    assert attribute.get_path() == og.Controller.attribute_path(attribute_spec)

# Look up the path to a node given any of the types a node lookup recognizes
for node_spec in [node, "/World/MyGraph/MyNode", ("MyNode", graph), node_path, node_prim]:
    assert node.get_prim_path() == og.Controller.node_path(node_spec)

# Look up the path to a prim given any of the types a prim lookup recognizes
for prim_spec in [node_prim, "/World/MyGraph/MyNode", node_path, node, ("MyNode", graph)]:
    assert node_prim.GetPrimPath() == og.Controller.prim_path(prim_spec)

# Look up the path to a prim given any of the types a graph lookup recognizes
graph_path = graph.get_path_to_graph()
for graph_spec in [graph, graph_path, Sdf.Path(graph_path)]:
    assert graph_path == og.Controller.prim_path(graph_spec)

# Look up a list of paths to prims given a list of any of the types a prim lookup recognizes
for new_path in og.Controller.prim_path(
    [node_prim, "/World/MyGraph/MyNode", node_path, node, ("MyNode", graph)]
):
    assert node_prim.GetPrimPath() == new_path

# Separate the graph name from the node name in a full path to a node
assert (graph, "MyNode") == og.Controller.split_graph_from_node_path("/World/MyGraph/MyNode")

# Separate the graph name from the node name in an Sdf.Path to a node
assert (graph, "MyNode") == og.Controller.split_graph_from_node_path(node_path)
```

## GraphController

This class contains the functions that manipulate the structure of the graph, including creating a graph. The class documentation describes the details of what it can do.
The subsection titles link to the actual method documentation, while the content consists of a simple example of how to use that method in a typical script. Each subsection assumes a setup created through the following initialization code:

```python
import omni.graph.core as og
import omni.kit
from pxr import Sdf

node_type_name = "omni.graph.test.TestAllDataTypes"
node_type = og.Controller.node_type(node_type_name)
```

### __init__()

A few parameters can be shared among multiple calls to the controller if it is instantiated as an object rather than calling the functions as class methods.

```python
# Explanation of the non-default values in the constructor
controller = og.GraphController(
    update_usd=False,  # Only update Fabric when paths are added or removed, do not propagate to USD
    undoable=False,  # Do not save information on changes for later undo (most applicable to testing)
    # If a node specification in og.GraphController.create_node() exists then silently succeed instead of
    # raising an exception
    allow_exists_node=True,
    # If a prim specification in og.GraphController.create_prim() exists then silently succeed instead of
    # raising an exception
    allow_exists_prim=True,
)
assert controller is not None

# The default values are what is assumed when class methods are called. Where they apply to any of the
# functions they can also be passed to the class method functions to specify non-default values.
```

### create_graph()

Creates a new `og.Graph`, similar to the first parameter to the `og.Controller.edit()` function.

```python
# The simplest method of creating a graph just passes the desired prim path to it. This creates a graph
# using all of the default parameters.
graph = og.GraphController.create_graph("/World/MyGraph")
assert graph.is_valid()

# If you want to customize the type of graph then instead of passing just a path you can pass a dictionary
# of graph configuration values. See the developer documentation of omni.graph.core.GraphController.create_graph
# for details on what each parameter means
action_graph = og.GraphController.create_graph(
    {
        "graph_path": "/World/MyActionGraph",
        "node_name": "MyActionGraph",
        "evaluator_name": "execution",
        "is_global_graph": True,
        "backed_by_usd": True,
        "fc_backing_type": og.GraphBackingType.GRAPH_BACKING_TYPE_FABRIC_SHARED,
        "pipeline_stage": og.GraphPipelineStage.GRAPH_PIPELINE_STAGE_SIMULATION,
        "evaluation_mode": og.GraphEvaluationMode.GRAPH_EVALUATION_MODE_AUTOMATIC,
    }
)

# Also accepts the "update_usd" and "undoable" shared construction parameters
```

### create_node()

Creates a new `og.Node`. This performs the same function and takes the same parameters as the `og.Controller.edit()` keyword `og.Controller.Keys.CREATE_NODES`.

```python
# Creates a new node in an existing graph.
# The two mandatory parameters are node path and node type. The node path can be any of the types recognized
# by the omni.graph.core.ObjectLookup.node_path() method and the node type can be any of the types recognized
# by the omni.graph.core.ObjectLookup.node_type() method.
node_by_path = og.GraphController.create_node(
    node_id="/World/MyGraph/MyNode",
    node_type_id=node_type_name,
)
assert node_by_path.is_valid()

node_by_name = og.GraphController.create_node(
    node_id=("MyNodeByName", graph),
    node_type_id=node_type,
)
assert node_by_name.is_valid()

node_by_sdf_path = Sdf.Path("/World/MyGraph/MyNodeBySdf")
node_by_sdf = og.GraphController.create_node(node_id=node_by_sdf_path, node_type_id=node_by_name)
assert node_by_sdf.is_valid()

# Also accepts the "update_usd", "undoable", and "allow_exists_node" shared construction parameters
```

### create_prim()

Creates a new **Usd.Prim**. This performs the same function and takes the same parameters as the `og.Controller.edit()` keyword `og.Controller.Keys.CREATE_PRIMS`.
```python
# Creates a new prim on the USD stage
# You can just specify the prim path to get a default prim type with no attributes. The prim_path argument
# can accept any value accepted by omni.graph.core.ObjectLookup.prim_path
prim_empty = og.GraphController.create_prim(prim_path="/World/MyEmptyPrim")
assert prim_empty.IsValid()

# You can add a prim type if you want the prim to be a specific type, including schema types
prim_cube = og.GraphController.create_prim(prim_path=Sdf.Path("/World/MyCube"), prim_type="Cube")
assert prim_cube.IsValid()

# You can also populate the prim with some attributes and values using the attribute_values parameter, which
# accepts a dictionary of Name:(Type,Value). An attribute named "Name" will be created with type "Type"
# (specified in either USD or SDF type format), and initial value "Value" (specified in any format compatible
# with the Usd.Attribute.Set() function). The names do not have to conform to the usual OGN standards of
# starting with one of the "inputs", "outputs", or "state" namespaces, though they can. The "Type" value is
# restricted to the USD-native types, so things like "any", "bundle", and "execution" are not allowed.
prim_with_values = og.GraphController.create_prim(
    prim_path="/World/MyValuedPrim",
    attribute_values={
        "someFloat": ("float", 3.0),
        "inputs:float3": ("float3", [1.0, 2.0, 3.0]),
        "someFloat3": ("float[3]", [4.0, 5.0, 6.0]),
        "someColor": ("color3d", [0.5, 0.6, 0.2]),
    },
)
assert prim_with_values.IsValid()

# Also accepts the "undoable" and "allow_exists_prim" shared construction parameters
```

### create_variable()

Creates a new `og.IVariable`. This performs the same function and takes the same parameters as the `og.Controller.edit()` keyword `og.Controller.Keys.CREATE_VARIABLES`.

```python
# To construct a variable the graph must be specified, along with the name and type of variable.
# The variable type can only be an omni.graph.core.Type or a string representing one of those types.
float_variable = og.GraphController.create_variable(graph_id=graph, name="FloatVar", var_type="float")
assert float_variable.valid

color3_type = og.Type(og.BaseDataType.FLOAT, 3, role=og.AttributeRole.COLOR)
color3_variable = og.GraphController.create_variable(graph_id=graph, name="Color3Var", var_type=color3_type)
assert color3_variable.valid

# Also accepts the "undoable" shared construction parameter
```

### delete_node()

Deletes an existing `og.Node`. This performs the same function and takes the same parameters as the `og.Controller.edit()` keyword `og.Controller.Keys.DELETE_NODES`.

```python
# To delete a node you can pass in a node_id, as accepted by omni.graph.core.ObjectLookup.node, or a node
# name and a graph_id as accepted by omni.graph.core.ObjectLookup.graph.
og.GraphController.delete_node(node_by_sdf)

# The undo flag was the global default so this operation is undoable
omni.kit.undo.undo()

# HOWEVER, you must get the node reference back as it may have been altered by the undo
node_by_sdf = og.Controller.node(node_by_sdf_path)

# Try it a different way
og.GraphController.delete_node(node_id="MyNodeBySdf", graph_id=graph)

# If you do not know if the node exists or not you can choose to ignore that case and silently succeed
og.GraphController.delete_node(node_id=node_by_sdf_path, ignore_if_missing=True)

# Also accepts the "update_usd" and "undoable" shared construction parameters
```

### expose_prim()

Makes a **Usd.Prim** visible to OmniGraph through a read node. This performs the same function and takes the same parameters as the `og.Controller.edit()` keyword `og.Controller.Keys.EXPOSE_PRIMS`.

```python
# USD prims cannot be directly visible in OmniGraph so instead you must expose them through an import or
# export node. See the documentation of the function for details on the different ways you can expose a prim
# to OmniGraph.
# A prim is exposed as a new OmniGraph node of a given type where the exposure process creates the node and
# the necessary links to the underlying prim. The OmniGraph node can then be used as any others might.
# The prim_id can accept any type accepted by omni.graph.core.ObjectLookup.prim() and the node_path_id can
# accept any type accepted by omni.graph.core.ObjectLookup.node_path()
exposed_empty = og.GraphController.expose_prim(
    exposure_type=og.GraphController.PrimExposureType.AS_BUNDLE,
    prim_id="/World/MyEmptyPrim",
    node_path_id="/World/MyActionGraph/MyEmptyNode",
)
assert exposed_empty is not None

exposed_cube = og.GraphController.expose_prim(
    exposure_type=og.GraphController.PrimExposureType.AS_ATTRIBUTES,
    prim_id=prim_cube,
    node_path_id=("MyCubeNode", action_graph),
)
assert exposed_cube is not None

# Also accepts the "update_usd" and "undoable" shared construction parameters
```

### connect()

Connects two `og.Attributes` together. This performs the same function and takes the same parameters as the `og.Controller.edit()` keyword `og.Controller.Keys.CONNECT`.

```python
# Once you have more than one node in a graph you will want to connect them so that the results of one node's
# computation can be passed on to another for further computation - the true power of OmniGraph. The connection
# takes data from the attribute in "src_spec" and sends it to the attribute in "dst_spec". Both of those
# parameters can accept anything accepted by omni.graph.core.ObjectLookup.attribute
og.GraphController.connect(
    src_spec=("outputs:a_bool", ("MyNode", graph)),
    dst_spec="/World/MyGraph/MyNodeByName/outputs:a_bool",
)

# Also accepts the "update_usd" and "undoable" shared construction parameters
```

### disconnect()

Breaks an existing connection between two `og.Attributes`. This performs the same function and takes the same parameters as the `og.Controller.edit()` keyword `og.Controller.Keys.DISCONNECT`.
```python
# As part of wiring the nodes together you may also want to break connections, either to make a new connection
# elsewhere or just to leave the attributes unconnected. The disconnect method is a mirror of the connect
# method, taking the same parameters and breaking any existing connection between them. It is an error to try
# to disconnect two unconnected attributes.
og.GraphController.disconnect(
    src_spec=("outputs:a_bool", ("MyNode", graph)),
    dst_spec="/World/MyGraph/MyNodeByName/outputs:a_bool",
)
omni.kit.undo.undo()

# Also accepts the "update_usd" and "undoable" shared construction parameters
```

### disconnect_all()

Disconnects everything from an existing `og.Attribute`.

```python
# Sometimes you don't know or don't care what an attribute is connected to, you just want to remove all of its
# connections, both coming to and going from it. The single attribute_spec parameter tells which attribute is
# to be disconnected, accepting any value accepted by omni.graph.ObjectLookup.attribute
og.GraphController.disconnect_all(attribute_spec=("outputs:a_bool", ("MyNode", graph)))

# As this just disconnects "all", if an attribute is not connected to anything it will silently succeed
og.GraphController.disconnect_all(attribute_spec=("outputs:a_bool", ("MyNode", graph)))

# Also accepts the "update_usd" and "undoable" shared construction parameters
```

### set_variable_default_value()

Sets the default value of an `og.IVariable`.

```python
# After creation a graph variable will have zeroes as its default value. You may want to set some other default
# so that when the graph is instantiated a second time the defaults are non-zero. The variable_id parameter
# accepts anything accepted by omni.graph.core.ObjectLookup.variable() and the value must be a data type
# compatible with the type of the (already existing) variable

# For example you might have a color variable that you wish to initialize in all subsequent graphs to red
og.GraphController.set_variable_default_value(variable_id=(graph, "Color3Var"), value=(1.0, 0.0, 0.0))
```

### get_variable_default_value()

Gets the default value of an `og.IVariable`.

```python
# If you are using variables to configure your graphs you probably want to know what the default values are,
# especially if someone else created them. You can read the default for a given variable, where the variable_id
# parameter accepts anything accepted by omni.graph.core.ObjectLookup.variable().
color_default = og.GraphController.get_variable_default_value(variable_id=color3_variable)
assert color_default == (1.0, 0.0, 0.0)
```

## NodeController

This class contains the functions that manipulate the contents of a node. It only has a few functions. The class documentation outlines its areas of control.

The subsection titles link to the actual method documentation, while the content consists of a simple example of how to use that method in a typical script. Each subsection assumes a graph set up through the following initialization code:

```python
import omni.graph.core as og

keys = og.Controller.Keys
(_, (node,), _, _) = og.Controller.edit(
    "/World/MyGraph",
    {
        keys.CREATE_NODES: ("MyNode", "omni.graph.test.TestAllDataTypes"),
    },
)
assert node.is_valid()
```

### __init__()

A few parameters can be shared among multiple calls to the controller if it is instantiated as an object rather than calling the functions as class methods.
```python
# The NodeController constructor only recognizes one parameter
controller = og.NodeController(
    update_usd=False,  # Only update Fabric when attributes are added or removed, do not propagate to USD
)
assert controller is not None
```

### create_attribute()

Creates a new dynamic `og.Attribute`. This performs the same function and takes the same parameters as the `og.Controller.edit()` keyword `og.Controller.Keys.CREATE_ATTRIBUTES`.

```python
# Creating a dynamic attribute requires a description of the attribute. The mandatory pieces are the "node" on
# which it is to be created, accepting anything accepted by omni.graph.core.ObjectLookup.node, the name of the
# attribute not including the port namespace (i.e. without the "inputs:", "outputs:", or "state:" prefix,
# though you can leave it on if you prefer), and the type "attr_type" of the attribute, accepting anything
# accepted by omni.graph.core.ObjectLookup.attribute_type.

# The default here is to create an input attribute of type float
float_attr = og.NodeController.create_attribute(node, "theFloat", "float")
assert float_attr.is_valid()

# Using the namespace is okay, but redundant
double_attr = og.NodeController.create_attribute("/World/MyGraph/MyNode", "inputs:theDouble", "double")
assert double_attr.is_valid()

# Unless you want a non-default port type, in which case it will be extracted from the name
int_attr = og.NodeController.create_attribute(node, "outputs:theInt", og.Type(og.BaseDataType.INT))
assert int_attr.is_valid()

# ...or you can just specify the port explicitly and omit the namespace
int2_attr = og.NodeController.create_attribute(
    node, "theInt2", "int2", attr_port=og.AttributePortType.ATTRIBUTE_PORT_TYPE_OUTPUT
)
assert int2_attr.is_valid()

# The default will set an initial value on your attribute, though it is not remembered in the future
float_1_attr = og.NodeController.create_attribute(node, "the1Float", "float", attr_default=1.0)
assert float_1_attr.is_valid()

# Mismatching between an explicit namespace and a port type will result in a duplicated namespace so be careful
error_attr = og.NodeController.create_attribute(node, "outputs:theError", "float")
assert error_attr.get_path() == "/World/MyGraph/MyNode.inputs:outputs:theError"
assert error_attr.is_valid()

# Lastly the special "extended" types of attributes (any or union) can be explicitly specified through
# the "attr_extended_type" parameter. When this is anything other than the default then the "attr_type"
# parameter will be ignored in favor of the extended type definition, however it must still be a legal type.
# The simplest type of extended attribute is "any", whose value can be any legal type.
union_attr = og.NodeController.create_attribute(
    node, "theAny", attr_type="float", attr_extended_type=og.ExtendedAttributeType.EXTENDED_ATTR_TYPE_ANY
)
assert union_attr.is_valid()

# Note that with any extended type the "default" is invalid and will be ignored
any_other_attr = og.NodeController.create_attribute(
    node, "theOtherAny", "token", default=5, attr_extended_type=og.ExtendedAttributeType.EXTENDED_ATTR_TYPE_ANY
)
assert any_other_attr.is_valid()

# If you want a more restricted set of types you can instead use the extended union type. When specifying
# that type it will be a 2-tuple where the second value is a list of types accepted by the union. For example
# this attribute will accept either doubles or floats as value types. (See the documentation on extended
# attribute types for more information on how types are resolved.)
union_attr = og.NodeController.create_attribute(
    node,
    "theUnion",
    "token",
    attr_extended_type=(og.ExtendedAttributeType.EXTENDED_ATTR_TYPE_UNION, ["double", "float"]),
)
assert union_attr.is_valid()

# Also accepts the "undoable" shared construction parameter
```

### remove_attribute()

Removes an existing dynamic `og.Attribute` from a node.

```python
# Dynamic attributes are a powerful method of reconfiguring a node at runtime, and as such you will also want
# to remove them.
# The "attribute" parameter accepts anything accepted by omni.graph.core.ObjectLookup.attribute().
# (The "node" parameter, while still functional, is only there for historical reasons and can be ignored.)
og.NodeController.remove_attribute(error_attr)
og.NodeController.remove_attribute(("inputs:theUnion", node))

# Also accepts the "undoable" shared construction parameter
```

### safe_node_name()

Returns a node name based on an `og.NodeType` that is USD-safe, i.e. it replaces any characters that are not accepted by USD as part of a node name, such as a period or vertical bar.

```python
# This is a utility you can use to ensure a node name is safe for use in the USD backing prim. Normally
# OmniGraph will take care of this for you but if you wish to dynamically create nodes using USD names you can
# use this to confirm that your name is safe for use as a prim.
assert og.NodeController.safe_node_name("omni.graph.node.name") == "omni_graph_node_name"

# There is also an option to use a shortened name rather than replacing dots with underscores
assert og.NodeController.safe_node_name("omni.graph.node.name", abbreviated=True) == "name"
```

## DataView

This class contains the functions to get and set attribute values. It has a flexible `__init__` function that can optionally take an "attribute" parameter to specify either an `og.Attribute` or `og.AttributeData` to which the data operations will apply. The class documentation shows the available functionality.

The subsection titles link to the actual method documentation, while the content consists of a simple example of how to use that method in a typical script.
Each subsection assumes a graph set up through the following initialization code:

```python
import omni.graph.core as og

keys = og.Controller.Keys
(_, (node, any_node), _, _) = og.Controller.edit(
    "/World/MyGraph",
    {
        keys.CREATE_NODES: [
            ("MyNode", "omni.graph.test.TestAllDataTypes"),
            ("MyAnyNode", "omni.graph.tutorials.ExtendedTypes"),
        ],
        keys.SET_VALUES: [
            ("MyNode.inputs:a_int", 3),
        ],
    },
)
int_attr = og.Controller.attribute("inputs:a_int", node)
union_attr = og.Controller.attribute("inputs:floatOrToken", any_node)
float_array_attr = og.Controller.attribute("outputs:a_float_array", node)
double_array_attr = og.Controller.attribute("outputs:a_double_array", node)
```

### __init__()

A few parameters can be shared among multiple calls to the controller if it is instantiated as an object rather than calling the functions as class methods.

```python
# The DataView constructor can take a number of parameters, mostly useful if you intend to make repeated
# calls with the same configuration.

# The most common parameter is the attribute on which you will operate. The parameter accepts anything
# accepted by omni.graph.core.ObjectLookup.attribute()
per_attr_view = og.DataView(attribute=int_attr)
assert per_attr_view
# Subsequent calls to per_attr_view functions will always apply to "int_attr"

# You can also force USD and undo configurations, as other classes like omni.graph.core.NodeController do
do_now = og.DataView(update_usd=False, undoable=False)
assert do_now is not None

# To keep memory operations on a single device you can configure the DataView to always use the GPU
gpu_view = og.DataView(on_gpu=True, gpu_ptr_kind=og.PtrToPtrKind.CPU)

# You can retrieve the GPU pointer kind (i.e. where the memory pointing to GPU arrays lives)
assert gpu_view.gpu_ptr_kind == og.PtrToPtrKind.CPU

# And if you are working with an instanced graph you can isolate the DataView to a single instance. Also
# handy for looping through different instances.
instance_view = og.DataView(instance=1)
assert instance_view is not None
```

### get()

Fetches the current value of an attribute.

```python
# Reading the value of an attribute is the most common operation you'll want to use.
assert og.DataView.get(attribute=int_attr) == 3

# If you've already configured the attribute you don't need to specify it, and you can reuse it
assert per_attr_view.get() == 3
assert per_attr_view.get() == 3

# As a special case, when you have array attributes that you want to write on you can specify an array size
# when you get the reference with the "reserved_element_count" parameter
array_to_write = og.DataView.get(attribute=float_array_attr)
assert len(array_to_write) == 2
array_to_write = og.DataView.get(attribute=float_array_attr, reserved_element_count=5)
assert len(array_to_write) == 5

# Only valid on GPU array attributes is the "return_type" argument. Normally array values are returned in
# numpy wrappers, however you can get the data as raw pointers as well if you want to handle processing of
# the data yourself or cast it to some other library type. This also illustrates how you can use the
# pre-configured constructed GPU view to get specific attribute values on the GPU.
raw_array = gpu_view.get(attribute=double_array_attr, return_type=og.WrappedArrayType.RAW)

# The return value is omni.graph.core.DataWrapper, which describes its device-specific data and configuration
assert raw_array.gpu_ptr_kind == og.PtrToPtrKind.CPU

# Also accepts overrides to the global parameters "on_gpu", "gpu_ptr_kind", and "instance"
```

### get_array_size()

Fetches the number of elements in an array attribute. This is meant to be a quick read of the size only that avoids fetching the entire array, which can be quite large in some cases.

```python
# An array size may be set without actually allocating space for it. In the case of very large arrays this can
# be quite useful. The get_array_size function lets you find the number of elements that will be in the array
# if you request the data, either on GPU or CPU.
assert og.DataView.get_array_size(float_array_attr) == 5

# Also accepts overrides to the global parameter "instance"
```

### set()

Sets a new value for an attribute. This performs the same function and takes the same parameters as the `og.Controller.edit()` keyword `og.Controller.Keys.SET_VALUES`.

```python
# The counterpart to getting values is of course setting them. Normally through this interface you will be
# setting values on input attributes, or sometimes state attributes, relying on the generated database to
# provide the interface for setting output values as part of your node's compute function.

# The "attribute" parameter accepts anything accepted by omni.graph.core.ObjectLookup.attribute(), and the
# "value" parameter must be a legal value for the attribute type.
og.DataView.set(attribute=int_attr, value=5)
assert og.DataView.get(int_attr) == 5

# An optional "update_usd" argument does what you'd expect, preventing the update of the USD backing value
# for the attribute you just set.
await og.Controller.evaluate()
og.DataView.set(int_attr, value=10, update_usd=False)
usd_attribute = og.ObjectLookup.usd_attribute(int_attr)
assert usd_attribute.Get() != 10

# The values being set are flexible as well, with the ability to use a dictionary format so that you can set
# the type for any of the extended attributes.
og.DataView.set(union_attr, value={"type": "float", "value": 3.5})
assert og.DataView.get(union_attr) == 3.5

# Also accepts overrides to the global parameters "on_gpu", "gpu_ptr_kind", and "instance"
```

### force_usd_update()

A context manager that lets you temporarily bracket a bunch of calls to force them to update immediately to USD or not, regardless of the class or parameter values.
```python # Sometimes you are calling unknown code and you want to ensure USD updates are performed the way you want # them. The DataView class provides a method that returns a contextmanager for just such a purpose. with og.DataView.force_usd_update(False): int_view = og.DataView(int_attr) int_view.set(value=20) # The USD value does not update, even though normally the default is to update assert usd_attribute.Get() != 20 with og.DataView.force_usd_update(True): int_view = og.DataView(int_attr) int_view.set(value=30) # The USD value updates assert usd_attribute.Get() == 30 ```
# Conventions

## Naming Conventions

### File And Class Names

Class and file naming is `InterCaps` with the prefix `Ogn`. For example, *OgnMyNode.ogn*, *OgnYourNode.py*, *class OgnMyNode*, and *OgnHerNode.cpp*.

As with regular writing, active phrasing (verb-noun) is preferred over passive phrasing when a node name refers to its function.

| Weak | Preferred |
| --- | --- |
| OgnSumOfTwoValues | OgnAddTwoValues |
| OgnAttributeRemoval | OgnRemoveAttribute |
| OgnTextGenerator | OgnGenerateText |

> **Tip**
> Some of the automated processes use the `Ogn` prefix to more quickly recognize node files. Using it helps them work at peak efficiency.

If the node outputs multiple objects (bundles, prims, etc.) then reflect that in the name (i.e. GetPrimPath for a single path vs GetPrimPaths for an array of paths).

The following is a list of common actions/prefixes. These should be used whenever possible for consistency and readability:

| Action/Prefix | Purpose |
| --- | --- |
| Constant | A constant value (i.e. ConstantBool) |
| Read | Reading the value of a prim from fabric or USD (i.e. ReadPrimAttribute) |
| Write | Writing the value of a prim to fabric or USD (i.e. WritePrimAttribute) |
| Set | Setting the value of an object (i.e. SetPrimActive) |
| Get | Computing the value of an object (i.e. GetPrimPath) |
| Find | Search data for a result (i.e. FindPrims) |
| To | Converting from one value to another (i.e. ToString) |
| Insert | Inserting an object into a bundle/array (i.e. InsertPrim) |
| Extract | Extracting an object from a bundle/array (i.e. ExtractPrim) |
| Remove | Removing an object from a bundle/array (i.e. RemoveAttribute) |
| Build/Make | Constructing a value from one or more objects (i.e. BuildString) |
| Is | Query for object matching (i.e. IsPrimSelected) |
| Has | Query for whether an object exists on another object (i.e. HasAttr) |
| On | Event node; used for action graph specific nodes (i.e. OnTick) |

### Node Names

Although the only real restriction on node names is that they consist of alphanumeric or underscore characters, there are some conventions established to make it easier to work with nodes, both familiar and unfamiliar. A node will generally have two names - a unique name to identify it to OmniGraph, and a user-friendly name for viewing in the UI.

The name of the node should be generally related to the name of the class, to make them easy to find. The name will be made unique within the larger space by prepending the extension as a namespace. The name specified in the file need only be unique within its extension, and by convention is in uppercase CamelCase, also known as PascalCase. It's a good idea to keep your name related to the class name so that you can correlate the two easily.

> **Warning**
> You can override the prepending of the extension name if you want your node name to be something different, but if you do, you must be prepared to ensure its uniqueness. To override the name simply put it in a namespace by including a `.` character, e.g. `omni.deformer.Visualizer`. Note that the `omni.` prefix is reserved for NVIDIA developed extensions.

The user-friendly name can be anything, even an entire phrase, in Title Case. Although any name is acceptable, always keep in mind that your node name will appear in the user interface and its function should be immediately identifiable from its name. If you do not specify a user-friendly name then the unique name will be used in the user interface instead.
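As an illustration of the two names, a .ogn node description supplies the unique name as its top-level key (which is then namespaced by the extension at build time) and the user-facing name through a `uiName` entry. The fragment below is abbreviated and hypothetical; see the OGN format reference for the full set of keys.

```json
{
    "TupleData": {
        "version": 1,
        "description": "Tutorial node demonstrating tuple attributes.",
        "uiName": "Tutorial Node: Tuple Attributes"
    }
}
```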
Here are a few examples from the OmniGraph extensions: | Class Name | Name in the .ogn file | Unique Extended Name | User Facing Node Name | |---------------------|-----------------------|-------------------------------|--------------------------------| | OgnTutorialTupleData | TupleData | omni.graph.tutorials.TupleData | Tutorial Node: Tuple Attributes | | OgnVersionedDeformer | VersionedDeformer | omni.graph.examples.cpp.VersionedDeformer | Example Node: Versioned Deformer | | OgnAdd | Add | omni.graph.nodes.Add | Add | > **Attention** > The unique node name is restricted to the alphanumeric characters and underscore (`_`). Any other characters in the node name will cause the code generation to fail with an error message. ## Attribute Names As you will learn, every node has a set of attributes which describe the data it requires to perform its operations. The attributes also have naming conventions. The mandatory part is that attributes may only contain alphanumeric characters, underscore, and optional colon-separated namespacing. The preferred naming for attributes is camelCase and, as with nodes, both a unique name and a user-friendly name may be specified where the user-friendly name has no real restrictions. ```code inputs: ``` , ```code outputs: ``` , and ```code state: ``` ) so you can use this to have inputs and outputs with the same name since they will be in different namespaces. Here is an example of attribute names on a node that adds two values together: | Name in the .ogn file | Full Name | User Facing Name | |------------------------|-------------------|--------------------| | a | inputs:a | First Addend | | b | inputs:b | Second Addend | | sum | outputs:sum | Sum Of Inputs | ::: attention **Attention** The unique attribute name is restricted to the alphanumeric characters, underscore (`_`), and colon (`:`). Any other characters in the attribute name will cause the code generation to fail with an error message. 
:::

Suggest multiplicity in the name when using arrays or when `target` or `bundle` attributes are expected to have multiple entries (e.g. **prims** for multiple prims and **prim** for a single prim). The following is a list of common attribute names/suffixes that can be used when appropriate:

| Name/Suffix | Purpose |
|-------------|-------------------------------------------------------------------------|
| prim / target | `target` attribute that targets a single prim |
| prims / targets | `target` attribute that targets multiple prims |
| bundle | `bundle` attribute that contains non-prim data |
| primBundle | `bundle` attribute that contains a single prim |
| primsBundle | `bundle` attribute that contains multiple prims |
| pattern | `string` attribute that uses a search pattern |
| value | attribute name for constants and math functions inputs/outputs |
| name | `token` attribute used for specifying attribute names |
| A…Z | Dynamic attribute names |

::: tip
**Tip**
You will find that your node will be easier to use if you minimize the use of attribute namespaces. Namespaces have some implications in generated code due to the special meaning of the colon in C++, Python, and USD. For example, an output bundle you name `internal:color:bundle` will be accessed in code as `outputs_internal_color_bundle`.
:::

## Illegal Attribute Names

Attributes cannot use C++ or Python keywords as their name. The following attribute names are illegal and should be avoided regardless of the method of implementing your node.
| C++ | C++ | C++ | C++ | C++ | C++ |
|---------|----------|----|---------|--------|--------|
| alignas | char16_t | do | mutable | return | typeid |
| alignof | char32_t | double | namespace | short | typename |
| and | class | dynamic_cast | new | signed | union |
| and_eq | compl | else | noexcept | sizeof | unsigned |
| asm | concept | enum | not | static | using |
| atomic_cancel | const | explicit | not_eq | static_assert | virtual |
| atomic_commit | consteval | export | nullptr | static_cast | void |
| atomic_noexcept | constexpr | extern | operator | struct | volatile |
| auto | constinit | false | or | switch | wchar_t |
| bitand | const_cast | float | or_eq | synchronized | while |
| bitor | continue | for | private | template | xor |
| bool | co_await | friend | protected | this | xor_eq |
| break | co_return | goto | public | thread_local | |
| case | co_yield | if | reflexpr | throw | |
| catch | decltype | inline | register | true | |
| char | default | int | reinterpret_cast | try | |
| char8_t | delete | long | requires | typedef | |

| Python | Python | Python | Python | Python | Python |
|-----------|--------------|--------------|--------------|--------------|--------------|
| False | async | del | from | lambda | raise |
| None | await | elif | global | nonlocal | return |
| True | break | else | if | not | try |
| and | class | except | import | or | while |
| as | continue | finally | in | pass | with |
| assert | def | for | is | property | yield |

### Code Conventions

#### Errors & Warnings

OmniGraph has two ways to communicate to the user when the compute function of a node has failed. Either of these methods takes a formatted string that describes the failure.
| Python | C++ | Description |
|-----------|--------------|-----------------------------------------------------------------------------|
| db.log_warn | db.logWarning | Used when a compute encounters unusual data but can still provide an output. This should not trigger the compute to halt. Ex. Request to deform an empty mesh. |
| db.log_error | db.logError | Used when a compute encounters inconsistent, invalid or unexpected data and it cannot compute an output. This should trigger the compute to halt. Ex. Request to add two vectors with incompatible sizes. |

## Return Conditions

There are two types of compute functions that can be used in nodes: `compute` and `computeVectorized`.

Nodes using `compute` return a `bool`. The convention for these nodes is to return `true` if the function was successful and `false` otherwise.

```cpp
// The database type (here OgnMyNodeDatabase) is generated from the node's .ogn file.
static bool compute(OgnMyNodeDatabase& db)
{
    if (db.inputs.value().isValid())
    {
        db.outputs.value() = db.inputs.value();
        return true;
    }
    return false;
}
```

Nodes using `computeVectorized` return a `size_t` value that represents the number of successful instance computations.

```cpp
static size_t computeVectorized(OgnMyNodeDatabase& db, size_t count)
{
    size_t ret = 0;
    for (size_t idx = 0; idx < count; ++idx)
    {
        if (db.inputs.value().isValid())
        {
            db.outputs.value() = db.inputs.value();
            ++ret;
        }
        // Advance to the next instance whether or not this one computed successfully.
        db.moveToNextInstance();
    }
    return ret;
}
```

> **Warning**
> The return value of these functions is not currently used for anything, but this could change in the future.

## Data Conventions

### Prim Path Data

> **Tip**
>
> - Use `target` attributes when a node or bundle needs to consume a prim path.
> - Use the first path in the `target` array when accessing a single path.

Previously, OmniGraph offered several methods of consuming a prim path: bundles, path strings and path tokens. These methods have been replaced with a new `target` type.
- `target` attributes are backed by USD relationships, so paths are automatically repaired when prims are referenced or the namespace hierarchy changes.
- They are arrays, so consuming multiple paths is possible. When a single path is required, the first path in the array should be used.
- Output `target` attributes are possible, so nodes can construct a path array that is read by another node.

In the future, previous methods of consuming a path may be deprecated. If any nodes currently use these methods, they must be upgraded to use `target` attributes.

- To be able to select multiple targets, add the ogn metadata:

  ```
  allowMultiInputs: "1"
  ```

- To disallow incoming connections, add the ogn metadata:

  ```
  literalOnly: "1"
  ```

- Code should not use the USD API to read paths, otherwise incoming connections will be ignored.

### Prim Bundle Data

> **Tip**
>
> - All prim bundle attributes are containers of one or more child bundles.
> - Nodes referencing a single prim bundle access the first child bundle from the attribute.
> - When reading a prim bundle into a graph, users must use a ReadPrims node.
> - The `sourcePrimPath` and `sourcePrimType` attributes define a bundle as a prim bundle.
> - All data added to bundles should conform with USD standards for naming and types.

Bundles are a flexible data type that is designed to pass a large amount of attribute data through a single connection. **Prim bundles** are special bundles that represent a prim within OmniGraph. Moving forward, all prim bundle attributes in OmniGraph will act as containers for one or more child bundles containing the attributes for each prim (i.e. multiple prims in bundles) rather than have attributes on the root of the bundle (i.e. a single prim in a bundle). Deeper documentation about the multiple prim bundle structure can be found in the Bundles User Guide.

### Single Prim Bundle

There are cases where a node needs to access only a single prim bundle, such as extracting attributes from a bundle for a specific prim.
When referencing a single prim bundle, the convention is to access the first child bundle from the bundle attribute. This can be accomplished with `get_child_bundle(0)` in Python and `getChildBundle(0)` in C++. Full documentation about how to access child bundles can be found in the Bundle Python API Documentation or Bundle C++ API Documentation.

### Prim Bundle Attributes

When constructing a bundle with a ReadPrims node, several special attributes are added to each child prim bundle. The `sourcePrimPath` and `sourcePrimType` attributes are required and define a prim bundle, and `worldMatrix` is used to track the transformation of each prim. There are also optional attributes generated for bounding boxes and skeletal binding (when applicable). Users should refrain from altering these attributes manually unless it is absolutely necessary.

#### Required Attributes

| Attribute Name | Type | Purpose |
|----------------|--------|----------------------------------------------------------------------------------------------|
| sourcePrimPath | token | Path to the prim in the bundle (ex `/World`) |
| sourcePrimType | token | Type of the prim in the bundle (ex `Mesh`) |
| worldMatrix | matrix4d | The world space matrix of the prim |

#### Optional Attributes

| Attribute Name | Type | Purpose |
|----------------|----------|----------------------------------------------|
| bboxTransform | matrix4d | The transform of the bounding box of the prim. |
| bboxMinCorner | point3d | The lower corner of the bounding box in world space. |
| bboxMaxCorner | point3d | The upper corner of the bounding box in world space. |

## USD Prim Bundle Data

There is a set of standard attributes that users should use when constructing bundles for prims. These are consistent with USD naming and types. To allow for maximum compatibility through all nodes, these attributes should be the only ones used when referencing prims.
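The required-attribute convention above can be illustrated with a plain-Python sketch, where a dict stands in for a child bundle; this is not the omni.graph bundle API:

```python
# Plain-dict stand-in for a child bundle; not the omni.graph bundle API.
REQUIRED_PRIM_BUNDLE_ATTRS = ("sourcePrimPath", "sourcePrimType", "worldMatrix")

def is_prim_bundle(child_bundle: dict) -> bool:
    """A child bundle is a prim bundle when all required attributes are present."""
    return all(attr in child_bundle for attr in REQUIRED_PRIM_BUNDLE_ATTRS)

mesh_bundle = {
    "sourcePrimPath": "/World/Cube",
    "sourcePrimType": "Mesh",
    "worldMatrix": [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]],
}
```

A bundle missing `sourcePrimType` or `worldMatrix` would not qualify as a prim bundle under this check.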
## Imageable Attributes

Base attributes for all prims that may require rendering or visualization of some sort.

| Attribute Name | Type | Purpose |
|----------------|--------|-------------------------------------------------------------------------|
| purpose | token | Purpose is a classification of geometry into categories that can each be independently included or excluded from traversals of prims on a stage, such as rendering or bounding-box computation traversals. [“default”, “render”, “proxy”, “guide”] |
| visibility | token | Visibility is meant to be the simplest form of “pruning” visibility that is supported by most DCC apps. [“inherited”, “invisible”] |
| proxyPrim | target | The proxyPrim relationship allows us to link a prim whose purpose is “render” to its (single target) purpose=”proxy” prim. This is entirely optional, but can be useful in several scenarios. |

## Xformable Attributes

Base attributes for all transformable prims, which allows arbitrary sequences of component affine transformations to be encoded. These transformation attributes also allow a separate suffix (ex. `xformOp:translate:pivot`) so that multiple attributes of the same type can exist on a single prim.
| Attribute Name | Type | Purpose |
|----------------------|----------|-----------------|
| xformOp:translate | double3 | Translation |
| xformOp:translate:pivot | double3 | Pivot translation |
| xformOp:rotateX | double | Single X axis rotation in degrees |
| xformOp:rotateY | double | Single Y axis rotation in degrees |
| xformOp:rotateZ | double | Single Z axis rotation in degrees |
| xformOp:rotateXYZ | double3 | Euler Rotation in XYZ in degrees |
| xformOp:rotateXZY | double3 | Euler Rotation in XZY in degrees |
| xformOp:rotateYXZ | double3 | Euler Rotation in YXZ in degrees |
| xformOp:rotateYZX | double3 | Euler Rotation in YZX in degrees |
| xformOp:rotateZXY | double3 | Euler Rotation in ZXY in degrees |
| xformOp:rotateZYX | double3 | Euler Rotation in ZYX in degrees |
| xformOp:scale | double3 | Scale |
| xformOp:orient | quatd | Quaternion rotate |
| xformOp:transform | matrix4d | Transformation Matrix |
| xformOpOrder | token[] | Encodes the sequence of transformation operations in the order in which they should be pushed onto a transform stack while visiting a UsdStage’s prims in a graph traversal that will effect the desired positioning for this prim and its descendant prims. |

Kit will also automatically convert from Z-up to Y-up, or resolve a mismatch in units, using the `xformOp:rotateX:unitsResolve` and `xformOp:scale:unitsResolve` suffixes.

## Boundable Attributes

Boundable attributes introduce the ability for a prim to persistently cache a rectilinear, local-space extent.

| Attribute Name | Type | Purpose |
|----------------|------------|---------------------------------------------------------------------------------------------|
| extent | float3[] | Extent is a three dimensional range measuring the geometric extent of the authored gprim in its own local space (i.e.
its own transform not applied), without accounting for any shader-induced displacement. This space is aligned to world. |

## Geometric Prim Attributes

Base attributes for all geometric primitives.

| Attribute Name | Type | Purpose |
|----------------------|--------|---------------------------------------------------------------------------------------------|
| doubleSided | bool | Setting a gprim’s doubleSided attribute to true instructs all renderers to disable optimizations such as backface culling for the gprim, and attempt (not all renderers are able to do so, but the USD reference GL renderer always will) to provide forward-facing normals on each side of the surface for lighting calculations. |
| orientation | token | Orientation specifies whether the gprim’s surface normal should be computed using the right hand rule, or the left hand rule. Please see the USD documentation for a deeper explanation and generalization of orientation to composed scenes with transformation hierarchies. [“rightHanded”, “leftHanded”] |
| primvars:displayColor | color3f[] | A colorSet that can be used as a display or modeling color, even in the absence of any specified shader for a gprim. |
| primvars:displayOpacity | float[] | Companion to displayColor that specifies opacity, broken out as an independent attribute rather than an rgba color, both so that each can be independently overridden, and because shaders rarely consume rgba parameters. |

## Point Based Attributes

Base attributes for all prims that possess points, providing common attributes such as normals and velocities.

| Attribute Name | Type | Purpose |
|----------------|------------|---------------------------------------------------------------------------------------------|
| points | point3f[] | The primary geometry attribute for all PointBased prims.
Describes points in local space. |
| normals | normal3f[] | Provide an object-space orientation for individual points, which, depending on subclass, may define a surface, curve, or free points. |
| velocities | vector3f[] | If provided, ‘velocities’ should be used by renderers to compute positions between samples for the ‘points’ attribute, rather than interpolating between neighboring ‘points’ samples. Velocity is measured in position units per second, as per most simulation software. |
| accelerations | vector3f[] | If provided, ‘accelerations’ should be used with velocities to compute positions between samples for the ‘points’ attribute rather than interpolating between neighboring ‘points’ samples. Acceleration is measured in position units per second-squared. |

## Material Attributes

Attributes used for prims that include shading data (bindings, uvs, etc.).

| Attribute Name | Type | Purpose |
|----------------|------------|---------------------------------------------------------------------------------------------|
| material:binding | rel (target) | Relationship that targets the bound material. |
| primvars:st | texCoord2f[] | Standard UV set. |

## Mesh Attributes

Attributes used for mesh prims.

| Attribute Name | Type | Purpose |
|----------------|------------|---------------------------------------------------------------------------------------------|
| cornerIndices | int[] | The indices of points for which a corresponding sharpness value is specified in cornerSharpnesses (so the size of this array must match that of cornerSharpnesses). |
| cornerSharpnesses | float[] | The sharpness values associated with a corresponding set of points specified in cornerIndices (so the size of this array must match that of cornerIndices). Use the constant `SHARPNESS_INFINITE` for a perfectly sharp corner. |
| creaseIndices | int[] | The indices of points grouped into sets of successive pairs that identify edges to be creased.
The size of this array must be equal to the sum of all elements of the creaseLengths attribute. |
| creaseLengths | int[] | The length of this array specifies the number of creases (sets of adjacent sharpened edges) on the mesh. Each element gives the number of points of each crease, whose indices are successively laid out in the creaseIndices attribute. Since each crease must be at least one edge long, each element of this array must be at least two. |
| creaseSharpnesses | float[] | The per-crease or per-edge sharpness values for all creases. Since `creaseLengths` encodes the number of points in each crease, the number of elements in this array will be either len(creaseLengths) or the sum over all X of (creaseLengths[X] - 1). Note that while the RI spec allows each crease to have either a single sharpness or a value per-edge, USD will encode either a single sharpness per crease on a mesh, or sharpnesses for all edges making up the creases on a mesh. Use the constant `SHARPNESS_INFINITE` for a perfectly sharp crease. |
| faceVaryingLinearInterpolation | token | Specifies how elements of a primvar of interpolation type “faceVarying” are interpolated for subdivision surfaces. Interpolation can be as smooth as a “vertex” primvar or constrained to be linear at features specified by several options. [“none”, “cornersOnly”, “cornersPlus1”, “cornersPlus2”, “boundaries”, “all”] |
| faceVertexCounts | int[] | Provides the number of vertices in each face of the mesh, which is also the number of consecutive indices in faceVertexIndices that define the face. The length of this attribute is the number of faces in the mesh. If this attribute has more than one timeSample, the mesh is considered to be topologically varying. |
| faceVertexIndices | int[] | Flat list of the index (into the *points* attribute) of each vertex of each face in the mesh.
If this attribute has more than one timeSample, the mesh is considered to be topologically varying. |
| holeIndices | int[] | The indices of all faces that should be treated as holes, i.e. made invisible. This is traditionally a feature of subdivision surfaces and not generally applied to polygonal meshes. |
| interpolateBoundary | token | Specifies how subdivision is applied for faces adjacent to boundary edges and boundary points. [“none”, “edgeOnly”, “edgeAndCorner”] |
| subdivisionScheme | token | The subdivision scheme to be applied to the surface. [“catmullClark”, “loop”, “bilinear”, “none”] |
| triangleSubdivisionRule | token | Specifies an option to the subdivision rules for the Catmull-Clark scheme to try and improve undesirable artifacts when subdividing triangles. [“catmullClark”, “smooth”] |
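To make the USD `faceVertexCounts`/`faceVertexIndices` encoding concrete, here is a small self-contained sketch (plain Python, independent of USD) that splits the flat index list back into per-face tuples:

```python
def split_faces(face_vertex_counts, face_vertex_indices):
    """Recover per-face vertex index tuples from the flattened USD encoding."""
    faces, cursor = [], 0
    for count in face_vertex_counts:
        faces.append(tuple(face_vertex_indices[cursor:cursor + count]))
        cursor += count
    if cursor != len(face_vertex_indices):
        raise ValueError("faceVertexIndices length does not match faceVertexCounts")
    return faces

# A quad and a triangle sharing an edge: 4 + 3 = 7 indices in total.
quads_and_tris = split_faces([4, 3], [0, 1, 2, 3, 2, 1, 4])
# -> [(0, 1, 2, 3), (2, 1, 4)]
```

The length check mirrors the consistency requirement stated in the table: the sum of `faceVertexCounts` must equal the length of `faceVertexIndices`.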
# Converting Action Graph Nodes to IActionGraph

This document describes how to convert Action Graph nodes written using the legacy method of manipulating execution attributes to the currently recommended approach using `omni::graph::action::IActionGraph_abi`, introduced in kit-sdk 105.1.

## What Changed

The mechanism for writing and reading control information between nodes and the executor has changed. Nodes now use the API to do this communication instead of reading and writing execution attributes. While the `execution` attributes are still used to author the flow of execution of the graph, the *value* of these attributes is no longer relevant.

## OGN Tests

Since the values of execution attributes are not relevant, the [OGN tests](../dev/ogn/ogn_reference_guide.html#ogn-test-data) can not usually be used to validate node behavior; unit tests should be used instead.

## Add the extension dependency

The new API lives in `omni.graph.action`, so any extension that implements action graph nodes needs to depend on it:

```toml
[dependencies]
"omni.graph.action" = {}
```

## Include the API

The `omni::graph::action::IActionGraph_abi` header must be included in the node code, and a handle to the API itself is acquired.

```c++
#include <omni/graph/action/IActionGraph.h>

// Acquire the API
auto iActionGraph = omni::graph::action::getInterface();
```

```python
from omni.graph.action import get_interface

# Acquire the API
action_graph = get_interface()
```

## Enabling an Execution Output

All execution outputs are automatically disabled before compute is called. An output can be enabled using the API call.
```c++
db.outputs.opened() = kExecutionAttributeStateEnabled;
// Becomes:
iActionGraph->setExecutionEnabled(outputs::opened.token(), db.getInstanceIndex());
```

```python
db.outputs.opened = og.ExecutionState.ENABLED
# Becomes:
action_graph.set_execution_enabled("outputs:opened")
```

## Reading an Execution Input

This is not needed for most nodes, but the state of input attributes should be read with the API call.

```c++
bool resetIsActive = (db.inputs.reset() != kExecutionAttributeStateDisabled);
// Becomes:
bool resetIsActive = iActionGraph->getExecutionEnabled(inputs::reset.token(), db.getInstanceIndex());
```

```python
reset_is_active = (db.inputs.reset != db.ExecutionAttributeState.DISABLED)
# Becomes:
reset_is_active = action_graph.get_execution_enabled("inputs:reset")
```

## Starting and Ending Latent State

This is not needed for most nodes, but the latent state should be controlled via the API.

```c++
if (starting)
    db.outputs.finished() = kExecutionAttributeStateLatentPush;
else
{
    db.outputs.finished() = kExecutionAttributeStateLatentFinish;
}
// Becomes:
if (starting)
    iActionGraph->startLatentState(db.getInstanceIndex());
else
{
    iActionGraph->endLatentState(db.getInstanceIndex());
    iActionGraph->setExecutionEnabled(outputs::finished.token(), db.getInstanceIndex());
}
```

```python
if starting:
    db.outputs.finished = og.ExecutionAttributeState.LATENT_PUSH
else:
    db.outputs.finished = og.ExecutionAttributeState.LATENT_FINISH
# Becomes:
if starting:
    action_graph.start_latent_state()
else:
    action_graph.end_latent_state()
    action_graph.set_execution_enabled("outputs:finished")
```
# Overview — kit-omnigraph 1.109.1 documentation

## Overview

This extension contains the base functionality that is shared by editors of OmniGraph. It is based on `omni.kit.graph.editor.core` and provides base implementations which are intended to be specialized by editor windows for particular graph types.

### Coordinate Systems

There are three coordinate systems used in the graph editor code: screen, view and mouse.

Screen coordinates are the pixel coordinates of the display device, with (0, 0) in the upper left corner of the display.

View coordinates are the coordinates within the `GraphView` widget. Internally `GraphView` uses an `omni.ui.CanvasFrame` to display the nodes and connections. `CanvasFrame` allows its contents to be zoomed and panned, meaning that view coordinates are not simply an offset from screen coordinates. (If you see references to canvas coordinates, those are the same as view coordinates.) To convert screen coordinates to view coordinates, call `GraphView.screen_to_view()`. There is currently no method available for converting from view coordinates back to screen coordinates.

Mouse coordinates are the coordinates supplied by mouse events which occur over the `GraphView`. Normally these are the same as screen coordinates but may differ depending upon certain settings. `GraphView.mouse_to_screen()` and `GraphView.mouse_to_view()` can be used to convert mouse coordinates in a consistent manner, regardless of settings.
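The screen/view relationship can be illustrated with simple pan/zoom math. This is a sketch under the assumption of a uniform zoom factor and a pan offset; it is not the actual `GraphView` implementation:

```python
# Illustrative pan/zoom math -- not the actual GraphView implementation.
def screen_to_view(sx, sy, pan_x, pan_y, zoom):
    """Map a screen-space point into canvas (view) space."""
    return ((sx - pan_x) / zoom, (sy - pan_y) / zoom)

def view_to_screen(vx, vy, pan_x, pan_y, zoom):
    """Inverse mapping; the real API does not currently expose this direction."""
    return (vx * zoom + pan_x, vy * zoom + pan_y)
```

Under this model, zooming scales view coordinates and panning shifts them, which is why a view coordinate is not simply a screen coordinate plus a constant offset.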
# Hoops Converter

## Overview

The Hoops Converter extension enables conversion of many common CAD file formats to USD. USD Explorer includes the Hoops Converter extension, enabled by default.

## Supported CAD file formats

The following file formats are supported by Hoops Converter:

- CATIA V5 Files (`*.CATPart, *.CATProduct, *.CGR`)
- CATIA V6 Files (`*.3DXML`)
- IFC Files (`*.ifc, *.ifczip`)
- Siemens NX Files (`*.prt`)
- Parasolid Files (`*.xmt, *.x_t, *.x_b, *.xmt_txt`)
- SolidWorks Files (`*.sldprt, *.sldasm`)
- STL Files (`*.stl`)
- Autodesk Inventor Files (`*.IPT, *.IAM`)
- AutoCAD 3D Files (`*.DWG, *.DXF`)
- Creo - Pro/E Files (`*.ASM, *.PRT`)
- Revit Files (`*.RVT, *.RFA`)
- Solid Edge Files (`*.ASM, *.PAR, *.PWD, *.PSM`)
- Step/Iges (`*.STEP, *.IGES`)
- JT Files (`*.jt`)
- DGN (`*.DGN`)

::: note
**Note**
`*.fbx`, `*.obj`, `*.gltf`, `*.glb`, `*.lxo`, `*.md5`, `*.e57`, and `*.pts` are default file formats supported by the Asset Converter.
:::

::: note
**Note**
If professional tools such as Creo, Revit, or Alias are installed, we recommend using the corresponding connectors. The connectors provide more extensive conversion options.
:::

::: note
**Note**
Issues may be encountered when converting CAD assemblies from Nucleus. When converting assemblies with external references, we recommend using local files or Omniverse Drive.
:::

## Converter Options

This section covers the configuration options for converting Hoops file formats to USD.

## Related Extensions

These related extensions make up the Hoops Converter. This extension provides import tasks to them through their interfaces.

### Core Converter

- Hoops Core: `omni.kit.converter.hoops_core`

### Services

- CAD Converter Service: `omni.services.convert.cad`

### Utils

- Converter Common: `omni.kit.converter.common`
# Core Concepts

## Graph

The graph comprises two conceptual pieces - the [Authoring Graph](Glossary.html#term-Authoring-Graph) and the [Execution Graph](Glossary.html#term-Execution-Graph). The term is often used to refer to one or both of these graphs, though among users and casual developers it most commonly refers to the [Authoring Graph](Glossary.html#term-Authoring-Graph). One type of graph you might hear a lot about is the [Action Graph](Glossary.html#term-Action-Graph), in which you can build up behaviors that are triggered on some event. See the [Action Graph overview](concepts/ActionGraph.html#ogn-omni-graph-action-overview) for an introduction to how it can be used.

### Authoring Graph

The authoring graph is what a user sees when they are constructing a graph for evaluation. It has nodes that come from a node library, connections between them, and specific values on the attributes. It describes the topology and configuration of a graph that in turn describes a specific computation to be made.

In C++, the graph is accessed through the `omni::graph::core::IGraph` interface. In Python, the bindings are available in `omni.graph.core.Graph`.

### Execution Graph

The execution graph is what the system uses when it is evaluating the computation the authoring graph has described. It is free to take the description in the authoring graph and use it as-is by calling functions in the same graph structure, or to perform any optimization it sees fit to transform the authoring graph into a more efficient representation for execution. For example, it might take two successive nodes that double a number and change them into a single node that quadruples a number.

## Graph Context

The graph context contains things such as the current evaluation time and other information relevant to the current context of evaluation (hence the name). Thus, we can ask it for the values on a particular attribute (as it will need to take things such as time into consideration when determining the value).
In C++, the graph context is accessed through the `omni::graph::core::IGraphContext` interface. In Python, the bindings are available in `omni.graph.core.GraphContext`.

## Graph Type

The graph type, sometimes referred to as the evaluation type, indicates how the graph is to be executed. This includes the process of graph transformation that takes an **Authoring Graph** and creates from it an **Execution Graph** that follows the desired type of evaluation pattern. Examples of graph types include the **Action Graph**, **Push Graph**, and **Lazy Graph**.

## Graph Registry

This is where we register new node types (and unregister them, when our plugin is unloaded). The code generated through the descriptive .ogn format automatically handles interaction with the registry. Interaction with the registry is handled through the C++ ABI interface `omni::graph::core::IGraphRegistry` and in Python through the binding `omni.graph.core.GraphRegistry`.

## Node

The heart of any node graph system is, of course, the node. The most important exposed method on the node is one to get an attribute, which houses all of the data the node uses for computing. The most important method you implement is the `compute` method, which performs the node's computation algorithm.

In C++, the node is accessed through the `omni::graph::core::INode` interface. In Python, the bindings are available in `omni.graph.core.Node`.

## Node Type

In order to register a new type of node with the system, you fill the exposed `omni::graph::core::INodeType` interface with your own custom functions. To simplify this process, a descriptive format (.ogn files) has been created, which is described in the **OGN User Guide**. Each node type has a unique implementation of the `omni::graph::core::INodeType` interface. This can be used for both C++ and Python node type implementations.
The Python nodes use bindings and function forwarding to interface with the **C++ ABI** in `omni.graph.core.NodeType`.

## Attribute

An **Attribute** has a name and contains some data. This should be no surprise to anyone who has worked with graphs before. Attributes can be connected to other attributes on other nodes to form an evaluation network.

In C++, the attribute is accessed through the `omni::graph::core::IAttribute` interface. In Python, the bindings are available in `omni.graph.core.Attribute`.

## Attribute Data

While the **Attribute** defines the connection points on a node and its **Attribute Type** defines the type(s) of data the attribute will take on, the **Attribute Data** is the actual data of the attribute, usually stored in **Fabric**.

In C++, the attribute data is accessed through the `omni::graph::core::IAttributeData` interface. In Python, the bindings are available in `omni.graph.core.AttributeData`.

## Attribute Type

OmniGraph mostly relies on Fabric for data, and Fabric was designed mostly to mirror the data in USD, but in a more compute-friendly form. That said, we did not want to literally use the USD data types, as that creates unnecessary dependencies. Instead, we create data types that are binary compatible with the USD data types (so that in C++ they can be cast directly), but can be defined independently. Our types also capture some useful metadata, such as the role of the data. For example, a `float[3]` can be used both to describe a position as well as a normal. However, the way code would want to deal with the data is very different depending on which of the two roles it plays. Our types have a `role` field to capture this sort of metadata.

See more detailed documentation in [Attribute Type Definition](#omnigraph-attribute-type).

## Connections

If the [Node](#term-Node) is thought of as the vertex in the [Graph](#term-Graph), then the [Connection](#term-Connection)s are the edges.
They are a representation of a directed dependency between two specific [Attribute](#term-Attribute)s on [Node](#term-Node)s in the graph.

## Bundles

To address the limitations of regular attributes, we introduced the notion of the [Bundle](#term-Bundle). As the name suggests, this is a flexible bundle of data, similar to a [USD Prim](#term-USD-Prim). One can dynamically create any number of attributes inside the bundle and transport that data down the graph. This serves two important purposes. First, the system becomes more flexible - we are no longer limited to pre-declared data. Second, the system becomes more usable. Instead of many connections in the graph, we have just a single connection with all the necessary data that needs to be transported.

A brief introduction to bundles can be found in [Bundles](#omnigraph-dev-bundles) and the [Bundles User Guide](#omnigraph-bundle-user-guide).

## Fabric

The data model for OmniGraph is based on the Fabric data manager. Fabric is used as a common location for all data within the nodes in OmniGraph. This common data location allows for many efficiencies. Further, Fabric handles data synchronization between CPU data, GPU data, and USD data, offloading that task from OmniGraph.

Fabric is a cache of USD-compatible data in vectorized, compute-friendly form. OmniGraph leverages Fabric's data vectorization feature to help optimize its performance. Fabric also facilitates the movement of data between CPU and GPU, allowing data to migrate between the two in a transparent manner.

There are currently two kinds of Fabric caches. There is a single `SimStageWithHistory` cache in the whole system, and any number of `StageWithoutHistory` caches. When a graph is created, it must specify what kind of Fabric cache "backs" it. All the data for the graph will be stored in that cache.

## USD

Like the rest of Omniverse, OmniGraph interacts with and is highly compatible with USD. We use USD for persistence of the graph.
Moreover, since Fabric is essentially USD in a compute-efficient form, any transient compute data can be “hardened” to USD if we so choose. That said, while it is possible to do so, we recommend that node writers refrain from accessing USD data directly from the node: USD is essentially global data, and accessing it from the node would prevent the node from being scheduled in parallel, getting in the way of scaling and distribution of the overall system. ## Instancing Instancing is a templating technique that allows a graph to be created once and applied to multiple prims. It brings several benefits: - **For authoring**: The graph is created once but can be used multiple times without additional effort. - **For maintenance**: Any change made to this shared “template” is automatically applied to all “instances”. - **For runtime**: - Vectorized data: All the runtime data for “instances” of a given graph is allocated in contiguous arrays per attribute. This compact data organization provides data locality, which reduces cache misses and thus improves performance. - Compute “transposition”: Instead of iterating over each graph and executing all of its nodes, the framework can iterate over the graph nodes and execute all of their “instances”. This brings a huge benefit when there are a lot of “instances”. - Compute “factorization”: All “instances” can be provided to the compute at once, so the framework doesn’t even have to iterate over the “instances” anymore. - Vectorized compute: A node can decide to implement a vectorized compute for further optimization; this allows the use of SIMD instruction sets, for example, or similar types of optimizations. Note that this feature is currently only available for [Push Graph](#term-Push-Graph). # Auto-instancing Auto-instancing is a framework feature that automatically factorizes similar graphs as instances, in order to benefit from the runtime performance gains explained above. 
This feature sets up compute for all of those graphs at once, through the same execution pipeline as regular instances. It does not require any user action and is meant to bring the same level of performance as regular instancing. It is particularly helpful when replicating/referencing the same “Smart Asset” (an asset embedding its own logic/behavior as a graph) many times in a stage. # Graph Variables Variables are values associated with an instance of a graph that can be read or changed by the graph during its execution. Variables can be used to keep track of state within the graph between execution frames, or to supply individualized values for different instances of the same graph. The initial value of a variable is read from USD, and the runtime value is maintained in Fabric. Variable values are never written back to USD, so any changes to their value during execution are lost when the application completes. In C++ variable properties can be queried through the `omni::graph::core::IVariable` interface, with methods to create, remove, and find variables available on `omni::graph::core::IGraph`. In Python the bindings are available on the `omni.graph.core.IVariable` and `omni.graph.core.Graph` classes, respectively. Reading and writing variable runtime values in Fabric can be accomplished by accessing their corresponding Attribute Data, available through a Graph Context, in a similar manner to working with an Attribute. Reading and writing initial (a.k.a. default) values in Python can be accomplished with methods provided by the Controller class. # Compounds Compounds are a representation used to group related nodes together into a sub-network. This grouping of nodes can be done for organization purposes or to create a single definition to be reused multiple times, either within a single OmniGraph or across OmniGraphs. 
OmniGraph currently supports Compound Subgraphs, which are a collection of nodes that are represented by a single Compound Node in the parent graph. Compound Subgraphs share state, including variables, with the OmniGraph of the Compound Node.
# Carbonite Crash Reporter ## Overview The crash reporter is intended to catch and handle exceptions and signals that are produced at runtime by any app that loads it. On startup, if configured to do so, the crash reporter will install itself in the background and wait for an unhandled exception or signal to occur. This incurs no performance overhead, and for the most part the crash reporter plugin just sits idle until a crash actually occurs. The only exception is that it will monitor changes to the `/crashreporter/` branch in the settings registry (managed by the `carb::settings::ISettings` interface if present). The crash reporter plugin does not have any other dependent plugins. It will however make use of the `carb::settings::ISettings` interface if it is loaded in the process at the time that the crash reporter plugin is loaded. Any changes to the `/crashreporter/` settings branch will be monitored by the plugin and can change its configuration at runtime. See [Configuration Options](#crashreporter-settings-label) below for more information on the specific settings that can be used to control its behavior. Note that if an implementation of the `carb::settings::ISettings` interface is not available before the crash reporter plugin is loaded, the crash reporter will not be able to reconfigure itself automatically at runtime. In this case, the crash reporter will only use its default configuration (ie: no crash report uploads, write reports to the current working directory). If the `carb::settings::ISettings` interface becomes available at a later time, the crash reporter will automatically enable its configuration system. When this occurs, any configuration changes listed in the settings registry will be loaded and acted on immediately. The implementation of the crash reporter plugin referred to here is based on the Google Breakpad project. 
The specific plugin is called `carb.crashreporter-breakpad.plugin`. ## Common Crash Reporter Terms - **“Crash Report”**: A crash report is a collection of files, metadata, and process state that describes a single crash event. This collection may be compressed into a zip file or stored as loose files. The crash report may be stored locally or be uploaded to a crash report processing system for later analysis. - **“Crash Dump”**: This is a file included in a crash report that contains the process state at the time of a crash. This typically contains enough information to do a post-mortem analysis on the state of the crashing portion of the process, but does not always contain the complete state of the process. This file is often one of the largest components in a crash report. - **“Crash Log”**: The log file related to the crashing process. This can help a crash investigator determine what the crashing process may have been doing near the time of the crash or perhaps even contain a reason for the crash. This log file is not automatically included in a crash report. Since this log may contain either confidential information or personally identifiable information (ie: file names and paths, user names, etc), care must be taken when deciding to include a log file in a crash report. - **“Metadata”**: Each application can include its own custom data points with any crash report. These are referred to as metadata values. These can be set by the application at any time before a crash occurs. All registered metadata values will be included with a crash report when and if it is generated. See [Crash Handling and Metadata](#crashreporter-metadata-label) for more info on how to add metadata. Metadata comes in two flavors - static and volatile. A static metadata value is not expected to change during the lifetime of the process (or changes only rarely). For example, the application’s product name and version would be static metadata. 
See below for a description of volatile metadata. - **“Volatile Metadata”**: A volatile metadata value is one that changes frequently throughout the lifetime of the process. For example, the amount of system memory used at crash time would be considered volatile. Adding such a value as static metadata would potentially be very expensive at runtime. A volatile metadata value instead registers a callback function with the crash reporter. When and if a crash occurs, this callback is called so that the crash reporter can collect the most recent value of the metadata. - **“Extra Files”**: A crash report can contain zero or more custom extra files as well. It is up to the application to decide which files would be most interesting to collect at crash time. It is also the responsibility of the application to ensure that the files being collected do not contain any confidential or personally identifiable information. ## Setting Up the Crash Reporter When the Carbonite framework is initialized and configured, by default an attempt will be made to find and load an implementation of the `carb.crashreporter-*.plugin` plugin. This normally occurs after the initial set of plugins has been loaded, including the plugin that implements the `carb::settings::ISettings` interface. If a crash reporter implementation plugin is successfully loaded, it will be ‘registered’ by the Carbonite framework using a call to `carb::crashreporter::registerCrashReporterForClient()`. This will ensure the crash reporter’s main interface `carb::crashreporter::ICrashReporter` is loaded and available to all modules. The default behavior of loading the crash reporter plugin can be overridden using the flag `carb::StartupFrameworkDesc::disableCrashReporter` when starting the framework. If this is set to `true`, the search, load, and registration for the plugin will be skipped. In that case, it will be up to the host app to explicitly load and register its own crash reporter if its services are desired. 
Once the plugin is loaded and registered, the Carbonite framework will make an attempt to upload old crash report files if the `/app/uploadDumpsOnStartup` setting is `true` (this is also the default value). Note that this behavior can also be disabled at the crash reporter level using the setting `/crashreporter/skipOldDumpUpload`. This upload process will happen asynchronously in the background and will not affect the functionality of other tasks. If the process tries to exit early however, this background uploading could cause the exit of the process to be delayed until the current upload finishes (if any). There is not currently any way to cancel an upload that is in progress. Most host apps will not need to interact with the crash reporter very much after this point. The only functionality that may be useful for a host app is to provide the crash reporter with various bits of metadata about the process throughout its lifetime. Providing this metadata is discussed below in [Crash Handling and Metadata](#crashreporter-metadata-label). ## Configuring the Crash Reporter See [Configuration Options](#crashreporter-settings-label) for a full listing of available configuration settings. Once the crash reporter plugin has been loaded, it needs to be configured properly before any crash reports can be sent anywhere. Crash reports will always be generated locally if the crash reporter is loaded and enabled (with the `/crashreporter/enabled` setting). When the crash reporter is disabled, the operating system’s default crash handling will be used instead. To enable uploads of generated crash reports, the following conditions must be met: 1. The `/crashreporter/enabled` setting must be set to `true`. 2. The `/crashreporter/url` setting must be set to the URL to send the generated crash report to. 3. The `/crashreporter/product` setting must be set to the name of the product that crashed. 
4. The `/crashreporter/version` setting must be set to the version information of the crashing app. This value may not be empty, but is also not processed for any purpose except crash categorization and display to developers. 5. Either the `/privacy/performance` setting must be set to `true` or the `/crashreporter/devOnlyOverridePrivacyAndForceUpload` setting must be set to `true`. The former is always preferred, but should never be set explicitly in an app’s config file. This is a user consent setting and should only ever be set through explicit user choice. It should also never be overridden with `/crashreporter/devOnlyOverridePrivacyAndForceUpload` except for internal investigations. ### Configuration by Main Process Only the main process (ie: `kit-kernel`) should ever configure the crash reporter. This can either be done in startup config files, on the command line, or programmatically. A plugin or extension should never change this base configuration of the crash reporter. Plugins and extensions may however add crash report metadata and extra files (described below). ### Crash Report Upload Once the crash reporter has been configured in this way, an attempt will be made to upload each new crash report that is generated to the given URL. Should the upload fail, should another crash occur during the upload (the process is possibly unstable after a crash), or should the user terminate the process during the upload, an attempt to upload the report again will be made the next time any Carbonite based app starts up using the same crash dump folder. By default, up to 10 attempts will be made to upload any given crash report. Each attempt will be made on a separate run of a Carbonite app. If all attempts fail, the crash report will simply be deleted from the local directory. 
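Put together, a minimal upload-enabled configuration might look like the following sketch. This assumes a Carbonite-style TOML config file in which setting paths such as `/crashreporter/url` map to nested TOML keys; the endpoint URL, product name, and version below are placeholder values, not real NVIDIA endpoints:

```toml
# Hypothetical example values -- substitute your own product details and endpoint.
[crashreporter]
enabled = true
url = "https://crashreports.example.com/submit"   # assumed collection endpoint
product = "MyCarboniteApp"                        # placeholder product name
version = "1.2.3"                                 # placeholder version string
```

Each of these values could equivalently be passed on the command line (e.g. `--/crashreporter/product=MyCarboniteApp`), which is the style used elsewhere in this document.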
### Dump File Preservation After a crash report is successfully uploaded, it will be deleted from the local dump directory along with its metadata file. If the crash report files should remain even after a successful upload, the `/crashreporter/preserveDump` setting should be set to `true`. This option should really only be used for debugging purposes. Note that if a crash report is preserved and it has already been successfully uploaded, another attempt to upload it will not be made. ### Dump Directory By default, the crash report dump directory will be the app’s current working directory. This can be changed using the `/crashreporter/dumpDir` setting. Any relative or absolute path may be used here. The named directory must exist before any crash occurs. ## Compressed Crash Reports The `carb.crashreporter-breakpad.plugin` implementation includes support for creating zip compressed crash reports. Typically a crash dump file will compress down to ~10% of its original size, which can save a lot of time and bandwidth when uploading crash reports. Log files typically compress very well too. This feature is enabled with the `/crashreporter/compressDumpFiles` setting. When set to `true`, a zip compressed crash report will be produced instead. The crash report management system that NVIDIA provides does support accepting zipped crash report files. When enabled, all files that are to be included with the crash report will be included in a single zip archive and sent along with the crash metadata. In this case, the crash report’s file will have the extension `.dmp.zip` and its metadata file will have the extension `.dmp.zip.toml`. This feature is still in the beta stage, but is being used exclusively by some Carbonite based apps both internally and publicly. ## Crash Handling and Metadata When a crash does occur in the app, the crash reporter will catch it. 
Upon catching a crash, the crash reporter plugin will create a crash dump file and collect metadata from the running app. The format of the crash dump file will differ depending on the platform. On Windows, a minidump file compatible with Microsoft Visual Studio will be created. On Linux, a proprietary crash dump file will be created. This file is compatible with Microsoft minidump files, but will not necessarily contain information that Visual Studio can fully understand. This Linux crash dump file can be converted to a stripped-down Linux core dump file with the use of a helper tool from the Breakpad library (the tool is `utils/minidump-2-core` in the separately-distributed Google Breakpad packman package). A minidump or core dump file contains some portions of the state of the process at the time it crashed. This state includes the list of running threads, each thread’s CPU register state, portions of their stack memory, a list of loaded modules, and some selected memory blocks that were referenced on the various thread stacks. From this crash state information, some investigation can be done into what may have caused the crash. The dump files do not contain all of the process’s state information by default since that could be several gigabytes of data. The metadata for the crash will be collected from multiple sources both at crash time and as the program runs. The metadata is simply a set of key-value pairs specified by the host app. The metadata values may be any string, integer, floating point, or boolean value (arrays of these values are not currently supported) and are collected from these sources: - Any value specified in a call to `carb::crashreporter::addCrashMetadata()`. This is just a helper wrapper for adding metadata values through the `/crashreporter/data/` settings branch. This is the preferred method of adding constant metadata values. - Any key/value pairs written to the `/crashreporter/data/` branch of the settings registry. 
This registers a constant metadata key-value pair and is best used for values that do not change at all, or do not change frequently, throughout the app’s lifetime. These metadata values are collected and stored immediately. This method of adding metadata can be used on the command line or in a config file, for example, if the information is known at launch time. - Any ‘volatile’ metadata values specified with `carb::crashreporter::ICrashReporter::addVolatileMetadata()`. This registers a value to be collected at crash time through a callback function. This type of metadata is intended to be used for values that change frequently and would be too expensive to update immediately every time they change. The only value that is important is the last value at the time of a crash. For example, this is used internally to collect the process uptime and memory usage information at the time of a crash. Regardless of how a metadata value is added, its key name will always be sanitized to only contain characters that are friendly to database key names. This sanitization will involve replacing most symbol characters with an underscore (‘_’). All key names should only contain ASCII characters as well. Metadata values may contain any UTF-8 codepoints however. ### Python Tracebacks The crash reporter has the ability to run a utility when a crash is detected that will inspect the Python interpreter running in the process and produce a Python traceback of all Python threads. By default this utility is py-spy. The utility’s binary must be in the same directory as the crash reporter plugin, in the same directory as the application binary, or in the working directory of the process at configuration time. For this process to work correctly, the Python library must be an official distribution, or it must have symbols packaged along with it. Several configuration keys (containing `pythonTraceback`) can configure how the process runs. 
The metadata key `PythonTracebackStatus` will record the outcome of gathering the Python traceback. The Python traceback is uploaded as a separate text file. The process must return an exit code of 0 to be considered successful. ### User Story The crash reporter can also run a separate process in order to gather information from the user. This is known as the “user story”: the user’s telling of what they were doing when the crash occurred. This may provide some hints necessary to reproduce and diagnose the problem. Several configuration keys (containing `userStory`) can configure how the process runs. The metadata key `UserStoryStatus` will record the outcome of gathering the user story. By default, the user story process is `crashreport.gui`, a small and simple GUI application maintained by Carbonite. The checkbox will be auto-populated based on whether crash dumps are allowed to be uploaded (both configuration and privacy settings affect this). However, if the user submits a crash report, that acts as a privacy consent to send a report. If the user cancels, the crash is deleted (ignoring configuration to persist crashes). If the checkbox is not checked, the ‘Submit’ button is disabled and cannot be pressed. Either pressing the ‘Cancel’ button or pressing the ‘Submit’ button with the text box empty will cause a popup message to be displayed for the user to confirm the action. These can be suppressed by adding `--/app/window/confirmOnCancel=1` or `--/app/window/confirmOnEmptyRepro=1` respectively to the arguments in the `/crashreporter/userStoryArgs` configuration key (though the default arguments from that key must be part of your redefinition). The binary is expected to log the user story to `stdout` and produce an exit code of `0` to submit the crash, or `1` to cancel and delete the crash. Any other exit code (or a timeout) will report an error to the `UserStoryStatus` metadata field and proceed as if the user story binary was not run (i.e. 
uploading based on current configuration). The tool requires `--/app/crashId=<crash ID>` to be passed on the command line (with `<crash ID>` as the current crash ID) in order to fully start, otherwise a message is displayed that the application is not intended to be run manually. ### Adding Extra Files to a Crash Report By default a crash report is sent with just a crash dump file and a set of metadata key/value pairs. If necessary, extra files can be added to the crash report as well. This could include log files, data files, screenshots, etc. However, when adding extra files to a crash report, care must be taken to ensure that private data is not included. This is left up to the system adding the extra files to verify. Private data includes both personal information of users and potential intellectual property information. For example, this type of information is highly likely to unintentionally exist in log files in messages containing file paths. To add an extra file to a crash report, one of the following methods may be used: 1. Use the function `carb::crashreporter::addExtraCrashFile()` to add the new file path. This may be a relative or absolute path (though if a relative path is used, the current working directory for the process must not change afterward). This is the preferred method for adding a new file. 2. Add a new key and value to the `/crashreporter/files/` branch of the settings registry. This can be done in a config file or on the command line if the path to the file is known at the time. This can also be done programmatically if necessary. When extra files are included with the crash report, they will all be uploaded in the same POST request as the main crash dump file and metadata. These extra files will be included whether the crash report has been compressed or not. 
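For illustration, both custom metadata and extra files can be declared through the settings registry in a config file. The fragment below is a hedged sketch, assuming a Carbonite-style TOML config where setting paths map to nested TOML keys; the key names and file path are hypothetical:

```toml
# Hypothetical keys and paths, for illustration only.
[crashreporter.data]
buildVariant = "release"      # becomes a metadata key/value pair in the crash report

[crashreporter.files]
appLog = "logs/app.log"       # extra file uploaded alongside the crash dump
```

The same entries could instead be supplied programmatically through `carb::crashreporter::addCrashMetadata()` and `carb::crashreporter::addExtraCrashFile()` respectively.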
## Loading a Crash Dump to Investigate On Windows, a crash dump file can be opened by dragging it into Visual Studio and then selecting “Debug with native only” on the right-hand side of the window. This will attempt to load the state of the process at the time of the crash and search available symbol servers for symbols and code for the modules that were loaded at the time of the crash. The specific symbol and source servers that are needed to collect this information depend on the specific project being debugged. Once loaded, many of the features of the Visual Studio debugger will be available. Note that symbols and source code may or may not be available for every module depending on your access to such resources. Some restrictions in this mode are that you won’t be able to step through code or change the instruction pointer’s position. Also, global data may not be available depending on the contents of the crash dump file. If a particular crash is repeatable, the `/crashreporter/dumpFlags` setting can be used to collect more information in the crash dump file that is created. Note though that some of the flags that are available can make the crash dump very large. On Windows, the following dump flags are available: - `Normal`: only capture enough information for basic stack traces of each thread. - `WithDataSegs`: include the memory for the data sections of each module. This can make the dump file very large because it will include the global memory space for each loaded module. - `WithFullMemory`: include all of the process’ mapped memory in the dump file. This can cause the dump file to become very large. This will however result in the most debuggable dump file in the end. - `WithHandleData`: includes all of the OS level information about open handles in the process. - `FilterMemory`: attempts to filter out blocks of memory that are not strictly needed to generate a stack trace for any given thread. 
- `ScanMemory`: attempts to scan stack memory for values that may be pointers to interesting memory blocks to include in the dump file. This can result in a larger dump file if a lot of large blocks are included as a result of the scan. - `WithUnloadedModules`: attempts to include a list of modules that had been recently unloaded by the process. - `WithIndirectlyReferencedMemory`: includes blocks of memory that are referenced on the stack of each thread. This can result in a significantly larger dump file. - `FilterModulePaths`: filters out module paths that may include user names or other user related directories. This can avoid potential issues with personally identifying information (PII), but might result in some module information not being found while loading the dump file. - `WithProcessThreadData`: includes full process and thread information from the operating system. - `WithPrivateReadWriteMemory`: searches the process’s virtual memory space and includes all pages that have the `PAGE_READWRITE` protection. - `WithoutOptionalData`: attempts to remove memory blocks that may be specific to the user or is not strictly necessary to create a usable dump file. This does not guarantee that the dump file will be devoid of PII, just reduces the possibility of it. - `WithFullMemoryInfo`: includes information about the various memory regions in the process. This is simply the page allocation, protections, and state information, not the data in those memory regions itself. - `WithThreadInfo`: includes full thread state information. This includes thread context and stack memory. Depending on the number of threads and amount of stack space used, this can make the dump file larger. - `WithCodeSegs`: includes code segments from each module. Depending on the number and size of modules loaded, this can make the dump file much larger. - `WithoutAuxiliaryState`: disables the automatic collection of some extra memory blocks. 
- `WithFullAuxiliaryState`: includes memory and state from auxiliary data providers. This can cause the dump file to become much larger. - `WithPrivateWriteCopyMemory`: includes memory blocks that have the `PAGE_WRITECOPY` protection. This can make the dump file larger if a lot of large blocks exist. - `IgnoreInaccessibleMemory`: if the `WithFullMemory` flag is also used, this prevents the dump file generation from failing if an inaccessible region of memory is encountered. The unreadable pages will not be included in the dump file. - `WithTokenInformation`: includes security token information in the dump file. - `WithModuleHeaders`: includes the headers from each loaded module. - `FilterTriage`: adds filter triage related data (not clear exactly what this adds). - `WithAvxXStateContext`: includes the AVX state context for each thread (x86_64 only). - `WithIptTrace`: includes additional Intel Processor Trace information in the dump file. On Linux, the process for loading a crash dump file is not entirely defined yet. Depending on how in-depth the investigation needs to be, there are two currently known methods. Both require some tools from the `Breakpad` SDK. The following methods are suggested but not officially supported yet: - Use the `minidump-2-core` tool from `Breakpad` to convert the crash dump file to a standard Linux core dump file. Note that by default this tool will output the result to `stdout`, which can break some terminals. Instead the output should always be redirected to a file. This file can then be opened with GDB using the command `gdb <executable> --core <core_file>`. GDB may also need to be pointed to the various symbol files for the process. Please see the manual for GDB on how to find and load symbol files if needed. Carbonite also provides a `gdb-syms.py` Python script for GDB that will attempt to download symbols from the NVIDIA Omniverse symbol server. 
- Use the `minidump-stackwalk` tool to attempt to retrieve a stack backtrace for each thread listed in the crash dump file. This will produce a lot of output, so it is best to redirect it to a file. This can provide some basic information about where the crash occurred and can give at least an idea of a starting point for an investigation. The current crash report management system (called OmniCrashes) usually does a good job of extracting and displaying crash information for all platforms. This is an internal crash reporting system, however, and cannot be accessed publicly, though it is available as a deployable product for customers who need to run their own instance of OmniCrashes. ## Uploading Crash Reports NVIDIA provides a default URL to send crash reports to. At this location, crash dumps and metadata will be accepted via HTTP POST commands. The expected format of the POST is a multipart form that provides key/value pairs for each of the metadata items followed by the binary data for the crash dump file itself, followed by any additional files to be included with the crash report upload. The crash report files are processed at this location and stored for later investigation. This default location can always be overridden by using the `/crashreporter/url` setting. The new URL will still be expected to accept POSTed forms in the same format. Once a crash report is created locally on a machine, the default behavior (if enabled) is to attempt to upload the crash dump and its associated metadata to the current upload URL. There are multiple settings that can affect how and if the crash report upload will occur. See [Configuring the Crash Reporter](#crashreporter-configuring-label) and [Configuration Options](#crashreporter-settings-label) for more information on those specific settings. The upload is performed synchronously in the crashing thread. 
Once finished and if successful, the crash dump file and its metadata may be deleted locally (depending on the `/crashreporter/preserveDump` setting). If the upload is not successful for any reason, the crash dump and metadata files will be left locally to retry again later. By default, up to 10 attempts will be made for each crash report. Should the upload fail for any reason on the first attempt (ie: in the crashing process), an attempt to upload it again will be made the next time the app is run. The original upload could fail for many reasons, including network connection issues, another crash occurring while trying to do the original upload, or the server side rejecting the upload. When retrying an upload in future runs of the app, old crash dump files will be uploaded sequentially with their original metadata. Should a retry also fail, a counter in the metadata will be incremented. If an upload attempt fails too many times (see `/crashreporter/retryCount` below), the crash dump file and its metadata file will be deleted anyway. If a crash report is successfully uploaded during a retry and the `/crashreporter/preserveDump` setting is set to `true`, the crash report’s metadata will be modified to reflect that change so that another upload attempt is not made. ## Debugging Crashes Sometimes it is necessary to intentionally cause a crash multiple times in order to debug or triage it properly. For example, this might be done in order to try to determine crash reproduction steps. If the build being tested has the crash reporter properly configured, this could result in a lot of extra crash dumps being uploaded and a lot of unnecessary work and noise being generated (ie: crash notifications, emails, extra similar crashes being reported, etc). In cases like this, it may be desirable to either not upload the crash report at all or at least mark the new crash(es) as “being debugged”. 
This can be done in one of a few ways:

- Add a new metadata value to the crash report indicating that it is an intentional debugging step. This can be done, for example, with the command line option or config setting `--/crashreporter/data/debuggingCrash=1`. This is the preferred metadata key to use to indicate that crash debugging or triage is in progress.
- Disable crash reporter uploads for the app build while testing. The easiest way to do this is to simply remove the upload URL setting. This can be done with a command line option such as `--/crashreporter/url=""`. This overrides any settings stored in config files.
- If the crash report itself is not interesting during debugging, the crash reporter plugin itself can simply be disabled. This can be done with `--/crashreporter/enabled=false`.

For situations where a command line option is difficult or impossible, there are also some environment variables that can be used to override certain aspects of the crash reporter's behavior. Each of these environment variables has the same value requirements as the setting it overrides (ie: a boolean value is expected to be one of '0', '1', 'n', 'N', 'y', 'Y', 'f', 'F', 't', or 'T'). The environment variables and the settings they override are:

- `OMNI_CRASHREPORTER_URL` overrides the `/crashreporter/url` setting.
- `OMNI_CRASHREPORTER_ENABLED` overrides the `/crashreporter/enabled` setting.
- `OMNI_CRASHREPORTER_SKIPOLDDUMPUPLOAD` overrides the `/crashreporter/skipOldDumpUpload` setting.
- `OMNI_CRASHREPORTER_PRESERVEDUMP` overrides the `/crashreporter/preserveDump` setting.
- `OMNI_CRASHREPORTER_DEBUGGERATTACHTIMEOUTMS` overrides the `/crashreporter/debuggerAttachTimeoutMs` setting.
- `OMNI_CRASHREPORTER_CRASHREPORTBASEURL` overrides the `/crashreporter/crashReportBaseUrl` setting.
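The boolean value convention above ('0'/'1'/'n'/'y'/'f'/'t', either case) can be sketched as a small parser. The helper function below is hypothetical (the actual parsing lives inside the crash reporter plugin), and the fallback-to-default behavior for unrecognized values is an assumption:

```python
import os

_TRUE, _FALSE = {"1", "y", "t"}, {"0", "n", "f"}

def env_bool_override(env_name: str, default: bool) -> bool:
    """Hypothetical helper: read a crash-reporter boolean override from the
    environment, accepting the documented '0/1/n/N/y/Y/f/F/t/T' values."""
    raw = os.environ.get(env_name)
    if raw is None:
        return default  # no override present; keep the setting's value
    ch = raw.strip().lower()
    if ch in _TRUE:
        return True
    if ch in _FALSE:
        return False
    return default  # unrecognized value; assumed to fall back to the setting

os.environ["OMNI_CRASHREPORTER_ENABLED"] = "N"
enabled = env_bool_override("OMNI_CRASHREPORTER_ENABLED", True)
```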
It is highly recommended that these environment variable overrides only be used in situations where they are the only option. They should also be used in the most direct way possible to ensure that they do not unintentionally affect the system globally, but only the single intended run of the Carbonite based app. Especially on Windows, environment variables will remain set in the terminal they are set in. On Linux, if possible, new environment variables should be added to the start of the command line that launches the process being tested (ie: `OMNI_CRASHREPORTER_ENABLED=0 ./kit [<other_arguments>]`).

## Public Interfaces and Utilities

Instead of being configured programmatically through an interface, all of the crash reporter's configuration goes through the `carb::settings::ISettings` settings registry. Upon load, the crash reporter plugin starts monitoring for changes in the `/crashreporter/` branch of the settings registry. As soon as any value in that branch changes, the crash reporter is synchronously notified and updates its configuration.

While the crash reporter is intended to be a service that largely works on its own, there are still some operations a host app can perform on it. These are outlined in the documentation for the `carb::crashreporter::ICrashReporter` interface. These operations include starting a task that tries to upload old crash report files, registering callback functions for whenever a crash report upload completes, resolving addresses to symbols (for debug purposes only), and adding volatile metadata for the process.

There are also some utility helper functions in the `carb::crashreporter` namespace that can simplify some operations, such as adding new static metadata values or adding extra files to the crash report. The functions intended to be called directly from there are `carb::crashreporter::addCrashMetadata()` and `carb::crashreporter::addExtraCrashFile()`.
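The settings-driven configuration model can be illustrated with a small stand-in registry. This is not the Carbonite API (`carb::settings::ISettings` is a C++ interface); the toy class below only sketches the synchronous change-notification behavior described above:

```python
class SettingsBranch:
    """Toy stand-in for a settings registry branch that notifies
    subscribers synchronously when a value under it changes."""

    def __init__(self):
        self._values = {}
        self._subscribers = []

    def subscribe(self, fn):
        self._subscribers.append(fn)

    def set(self, key, value):
        self._values[key] = value
        for fn in self._subscribers:  # notification happens synchronously
            fn(key, value)

# The crash reporter conceptually subscribes to "/crashreporter/" and
# reconfigures itself as soon as any value in that branch changes.
config = {}
crashreporter_branch = SettingsBranch()
crashreporter_branch.subscribe(lambda k, v: config.update({k: v}))
crashreporter_branch.set("/crashreporter/enabled", False)
```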
Two related helper functions are:

1. `carb::crashreporter::addExtraCrashFile()`
2. `carb::crashreporter::isExtraCrashFileKeyUsed()`

### Configuration Options

The Carbonite crash reporter (`carb.crashreporter-breakpad.plugin`) has several configuration options that can be used to control its behavior. These are specified either in an app's config file or on the command line. The following settings keys are defined:

- `/crashreporter/url`: The URL to use when uploading crash report files. By default this is an empty string. The URL is expected to accept multipart form messages posted to it. Many Omniverse apps will be automatically configured to use the default upload URL of https://services.nvidia.com/submit using this setting. This can then be overridden on the command line or in a config file if needed. This setting is required in order for any uploads of crash reports to occur. This setting can be overridden with the environment variable `OMNI_CRASHREPORTER_URL`.
- `/crashreporter/product`: Sets the name of the product for which crash reports will be generated. This setting is required in order for any uploads of crash reports to occur. This becomes the product name that is included with the crash report's metadata. Without this metadata value set, the NVIDIA URL will reject the report files. This may be any string value, but should be descriptive enough of the name of the app that it can be distinguished from crash reports for other products. This defaults to an empty string.
- `/crashreporter/version`: Sets the version information for the app. This setting is required in order for any uploads of crash reports to occur. This becomes the version information that is included with the crash report's metadata. Without this metadata value set, the NVIDIA URL will reject the report files. This may be any string value, but should be descriptive enough of the version information of the crashing app that an investigation can be done on it.
This defaults to an empty string.
- `/crashreporter/dumpDir`: The full path to the location where crash dump and metadata files are written on the local machine. This is also the location from which old crash reports are uploaded (if they exist) on subsequent runs of the app. This directory must already exist; it will not be created by the crash reporter itself. By default this is the current working directory.
- `/crashreporter/enabled`: Sets whether the crash reporter is enabled. By default, the crash reporter is enabled on load of the plugin. This setting can change at any point during the process' lifetime and is acted on immediately by the crash reporter. When the crash reporter is disabled, its exception/signal catching hooks are removed. The plugin will remain loaded and functional, but no action will be taken if a crash does occur. When the crash reporter is enabled, the exception/signal catching hooks are installed again. This defaults to `true`.
- `/crashreporter/devOnlyOverridePrivacyAndForceUpload`: Sets whether crash report files should be uploaded after they are created. This can be used to override the user's performance consent setting for the purposes of uploading a crash report if needed. If this is `false`, the user's performance consent setting controls whether uploads are attempted. Note that this setting is effectively ignored if no upload URL has been set in `/crashreporter/url`. This defaults to `false`. This setting should _never_ be used in a released product; it is only intended for local debugging.
- `/crashreporter/skipOldDumpUpload`: Indicates whether attempts to upload old crash report files should be skipped. This is useful for situations such as test apps or launching child instances of an app so that they don't potentially end up blocking during shutdown due to an upload in progress. This defaults to `false`.
This setting can be overridden with the environment variable `OMNI_CRASHREPORTER_SKIPOLDDUMPUPLOAD`.
- `/crashreporter/log`: When enabled, indicates that a stack trace of the crashing thread should be written out to the app log. This will attempt to resolve the symbols on the call stack as best it can with the debugging information that is available. This defaults to `true`.
- `/crashreporter/preserveDump`: When enabled, indicates that crash report files that were successfully uploaded should not be deleted. This is useful in situations such as CI/CD so that any crash report files from a crashed process can be stored as job artifacts. This defaults to `false`. This setting can be overridden with the environment variable `OMNI_CRASHREPORTER_PRESERVEDUMP`.
- `/crashreporter/data/`: Settings branch that may contain zero or more crash metadata key/value pairs. Any non-array setting created under this settings branch will be captured as a metadata value for the process. These metadata values can be added at any point during runtime up until an actual crash occurs. These settings may also be provided on the command line or in config files if the metadata value is known at the time. A new metadata value can be added programmatically at runtime using the `carb::crashreporter::addCrashMetadata()` helper function. This defaults to an empty settings branch.

  **Note:** Care must be taken to ensure that no user or third party intellectual property information is included in a metadata value.

- `/crashreporter/…`: Settings branch that may contain zero or more extra files that should be included in crash reports. Each key/value pair found in this settings branch identifies a new file to be included. The key is expected to be a descriptor of why the file is included. The value is expected to be the relative or absolute path to the file to be included.
If a relative path is used, it is up to the app to guarantee that the current working directory that the path is relative to will not be modified between when it is added and when a crash report is generated. It is highly suggested that absolute paths always be given to avoid this. Settings in this branch may be added, removed, or modified at any point during runtime up until an actual crash occurs. These settings may also be provided on the command line or in config files if the file name and path are known ahead of time (regardless of whether the file exists at the time). Any listed files that do not exist or are inaccessible at crash time will be silently ignored. A new extra file setting can be added programmatically using the `carb::crashreporter::addExtraCrashFile()` helper function. This defaults to an empty settings branch.

  **Note:** Care must be taken to ensure that no user or third party intellectual property information is included in any extra file that is sent with a crash report.

- `/crashreporter/…`: Array setting specifying which metadata values should also be emitted as telemetry events. Each entry in the array is expected to be a regular expression describing a pattern to try to match each new metadata value against. Note that only new or modified metadata values are reported as telemetry events once a pattern is added to the array. To capture all new metadata key/value pairs, these patterns should be specified either in a config file or on the command line. Note that this will not capture any of the metadata values that are internally generated by the crash reporter, since most of those internal metadata values are only given values at crash time. The patterns in this array may be changed at any point during runtime; however, if a pattern changes, only new or modified metadata values after that point will be emitted as telemetry events.
Only a single telemetry event will be emitted for each new or modified metadata value. If a piece of metadata is set again to its current value, no new event will be emitted. This defaults to an empty array.
- `/crashreporter/…`: Windows only. Provides a timeout in milliseconds that, when exceeded, causes the upload to be considered failed. Due to a bug in `wininet`, this does not limit the actual amount of time that the upload takes. Typically this value does not need to be changed. This defaults to 7,200,000 ms (2 hours).
- `/crashreporter/debuggerAttachTimeoutMs`: Determines the time in milliseconds to wait for a debugger to attach after a crash occurs. If this is a non-zero value, the crash report processing and upload will proceed once a debugger successfully attaches to the process or the given timeout expires. This is useful when trying to debug post-crash functionality, since some debuggers don't let the original exception go completely unhandled to the point where the crash reporter is allowed to handle it (ie: if attached before the crash). This defaults to 0 ms, meaning the wait is disabled. This setting can be overridden with the environment variable `OMNI_CRASHREPORTER_DEBUGGERATTACHTIMEOUTMS`.
- `/crashreporter/dumpFlags`: Flags to control which data is written to the minidump file (on Windows). These can either be specified as a single hex value for all the flags to use (assuming the user knows what they are doing), or as `MiniDump*` flag names separated by comma (','), colon (':'), bar ('|'), or whitespace. There should be no whitespace between flags when specified on the command line. The 'MiniDump' prefix on each flag name may be omitted if desired. This defaults to an empty string (ie: no extra flags). The flags specified here may either override the default flags or be added to them depending on the value of `/crashreporter/overrideDefaultDumpFlags`. This setting is ignored on Linux. For more information on the flags and their values, look up `MiniDumpNormal` on MSDN or see the brief summary above at Loading a Crash Dump to Investigate.
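The dump-flags string format just described (a single hex value, or flag names separated by ',', ':', '|', or whitespace, with an optional 'MiniDump' prefix) can be sketched as a small parser. The flag table below is a truncated, illustrative subset of the `MINIDUMP_TYPE` values documented on MSDN, and the parser itself is an assumption about how such a string could be interpreted:

```python
import re

# Small illustrative subset of the MiniDump* flag values from MSDN.
MINIDUMP_FLAGS = {
    "Normal": 0x0,
    "WithDataSegs": 0x1,
    "WithFullMemory": 0x2,
    "WithHandleData": 0x4,
}

def parse_dump_flags(value: str, default: int = 0) -> int:
    """Parse a dump-flags string: either one hex value, or flag names
    separated by ',', ':', '|', or whitespace ('MiniDump' prefix optional)."""
    value = value.strip()
    if not value:
        return default
    try:
        return int(value, 16)  # a single hex value covers all flags at once
    except ValueError:
        pass
    flags = 0
    for name in re.split(r"[,:|\s]+", value):
        name = name.removeprefix("MiniDump")  # the prefix may be omitted
        flags |= MINIDUMP_FLAGS[name]
    return flags
```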
- `/crashreporter/overrideDefaultDumpFlags`: Indicates whether the crash dump flags specified in `/crashreporter/dumpFlags` should replace the default crash dump flags (when `true`) or simply be added to the default flags (when `false`). This defaults to `false`. This setting is ignored on Linux.
- `/crashreporter/compressDumpFiles`: Indicates whether crash report files should be compressed as zip files before uploading to the server. Compressed crash dump files are typically ~10% the size of the original, so upload time should be greatly reduced. This feature must be supported on the server side as well to be useful for upload. However, if this setting is enabled, the crash reports will still be compressed locally on disk and will occupy less space should the initial upload fail. This defaults to `false`.
- `/crashreporter/retryCount`: Determines the maximum number of times to try to upload any given crash report to the server. The number of times the upload has been retried for any given crash report is stored in its metadata. When the report files are first created, the retry count is set to 0. Each time the upload fails, the retry count is incremented by one. When the count reaches this limit (or goes over it if the limit changes from the default), the dump file and its metadata are deleted whether the final upload succeeds or not. This defaults to 10.
- `/crashreporter/crashReportBaseUrl`: The base URL used to print a message after a successful crash report upload. This message includes a URL that is pieced together from this base URL followed by the ID of the crash report that was just sent. Note that this generated URL is speculative and may not be valid for some small amount of time after the crash report has been sent. This setting is optional and defaults to an empty string. This setting should only be used in situations where it will not result in the URL for an internal resource being baked into a setting in builds that go out to public users.
This setting can be overridden by the environment variable `OMNI_CRASHREPORTER_CRASHREPORTBASEURL`.
- `/crashreporter/includeEnvironmentAsMetadata`: Determines whether the environment block should be included as crash report metadata. The environment block can be very large and potentially contain private information. When included, the environment block is scrubbed of detectable user names. This defaults to `false`.
- `/crashreporter/includePythonTraceback`: Attempts to gather a Python traceback using the py-spy tool, if available and packaged with the crash reporter. The output from this tool is a separate file that is uploaded along with the crash dump, configured by `/crashreporter/pythonTracebackFile`. This defaults to `true`.
- `/crashreporter/pythonTracebackBinary`: The binary to execute in order to capture a Python traceback. This allows the py-spy binary to exist under a different name if so desired. Defaults to `py-spy.exe` on Windows and `py-spy` on non-Windows. May include a relative (to the application working directory) or absolute path. If no directory is provided, the same directory as the crash reporter is checked first, followed by the application working directory.
- `/crashreporter/pythonTracebackArgs`: Arguments that are passed to `/crashreporter/pythonTracebackBinary`. By default this is `dump --full-filenames --nonblocking --pid $pid`, where `$pid` is a simple token that refers to the process ID of the crashing process. **NOTE:** py-spy will not work properly without at least these arguments, and overriding this key replaces them, so these arguments should likely be included in any override.
- `/crashreporter/pythonTracebackDir`: The directory where the Python traceback is stored. May be an absolute path, or relative to the working directory of the process at the time the crash reporter is loaded. May be changed dynamically. This defaults to an empty string, which means to use the same directory as `dumpDir`.
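The `$pid` token in the default `pythonTracebackArgs` value (and the `$crashid` token used for the traceback file name) are simple textual substitutions. The expansion function below is hypothetical and only illustrates the token behavior described in these settings:

```python
import os
import uuid

def expand_tokens(text, crash_id=None):
    """Hypothetical helper: expand the simple $pid / $crashid tokens
    described by the pythonTraceback* settings."""
    out = text.replace("$pid", str(os.getpid()))
    if crash_id is not None:
        out = out.replace("$crashid", crash_id)
    return out

# Default py-spy arguments with the crashing process' PID substituted in.
args = expand_tokens("dump --full-filenames --nonblocking --pid $pid")

# Default traceback file name with a Breakpad-style UUID substituted in.
name = expand_tokens("$crashid.py.txt", crash_id=str(uuid.uuid4()))
```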
- `/crashreporter/pythonTracebackName`: The name of the Python traceback file. Any path separators are ignored. The default value is `$crashid.py.txt`, where `$crashid` is a simple token that refers to the crash ID of the crashing process (the UUID generated by Breakpad at crash time). This file is created at crash time in the directory given by `pythonTracebackDir` if `includePythonTraceback` is `true`. If the file cannot be created at crash time, no Python traceback file will be included. The simple token `$pid` is also available, which is the process ID of the crashing process.
- `/crashreporter/pythonTracebackTimeoutMs`: The time to wait (in milliseconds) for the Python traceback process to run. If this time is exceeded, the process is terminated and no Python traceback will be available. This defaults to 60,000 (1 minute).
- `/crashreporter/gatherUserStory`: When a crash occurs, `carb.crashreporter-breakpad` has the ability to run a process that gathers information from the user, such as steps to reproduce the problem or what the user specifically did. This value can be set to `false` to prevent gathering this information. This defaults to `true`.
- `/crashreporter/userStoryBinary`: The binary to execute to gather the crash user story. This defaults to `crashreport.gui`.
- `/crashreporter/userStoryArgs`: Arguments passed to the binary when it is executed. This defaults to `--/app/crashId=$crashid --/app/allowUpload=$allowupload`.
- `/crashreporter/userStoryTimeoutMs`: The time to wait (in milliseconds) for the crash user story process to run. If this time is exceeded, the process is terminated and no crash user story will be available. This defaults to 600,000 (10 minutes).

## Internally Created Crash Metadata

The crash reporter plugin itself creates several crash metadata values on its own. Many of these metadata key names are considered 'reserved' as they are necessary for the functionality of the crash reporter and for categorizing crash reports once received.
There are also some other crash metadata values that are created internally by either the crash reporter or kit-kernel that should not be replaced or modified by other plugins, extensions, or configuration options. Both groups are described below:

### Reserved Metadata Keys

These key names are reserved by the crash reporter plugin. If an app attempts to set a metadata key under the `/crashreporter/data/` settings branch using one of these names, it will be ignored. Any Carbonite based app will get these metadata values.

- **ProductName**: Collected by the crash reporter plugin on startup. This value comes from either the `/crashreporter/product` or `/app/name` setting (in that order of priority).
- **Version**: Collected by the crash reporter plugin on startup. This value comes from either the `/crashreporter/version` or `/app/version` setting (in that order of priority).
- **comment**: Currently unused and unassigned, but still reserved as a metadata key name.
- **StartupTime**: The time at which the crash reporter plugin was initialized, expressed as the number of seconds since midnight GMT on January 1, 1970.
- **DumpId**: Collected at crash time by the crash reporter plugin. This is a UUID generated to ensure the crash report is unique. This ID is used to uniquely identify the crash report on any crash tracking system.
- **CarbSdkVersion**: Currently unused and unassigned, but still reserved as a metadata key name. This named value is currently expressed through the `carboniteSdkVersion` metadata value.
- **RetryCount**: The number of upload attempts made for a generated crash report. This defaults to 0 and is incremented with each failed attempt to upload the report. The default limit for this value is 10 attempts, but it can be modified with the `/crashreporter/retryCount` setting.
- **UploadSuccessful**: Set to '1' for a given crash report when it has been successfully uploaded. This value defaults to '0'. This value will always be '0' locally and on any crash report management system except in the case where the `/crashreporter/preserveDump` setting is enabled.
- **PythonTracebackStatus**: Set to a status message indicating whether gathering the Python stack trace was successful, or why it failed or was skipped.
- **CrashTime**: An RFC3339 GMT timestamp expressing when a given crash occurred.
- **UserStory**: The message entered into the 'user story' dialog by the user before uploading a crash report. This defaults to an empty string and will only be filled in if the `/crashreporter/gatherUserStory` setting is enabled and the user enters text.
- **UserStoryStatus**: Set to a status message indicating whether gathering the user story was successful, or why it failed or was cancelled.
- **LastUploadStatus**: Set to the HTTP status code of the last crash report upload attempt for a given report. This is only checked when retrying a previously failed upload attempt.

### Additional Metadata Collected at Crash Time

- **UptimeSeconds**: The total number of seconds that the process ran for. This value is written when the crash report is first generated.
- **telemetrySessionId**: The telemetry session ID for the process. This is used to link the crash report to all telemetry events for the session.
- **memoryStats**: System memory usage information at crash time. This includes available and total system RAM, swap file, and virtual memory (VM) amounts for the process.
- **workingDirectory**: The current working directory for the process at the time of the crash. This value will be scrubbed to not include any user names if found.
- **hangDetected**: Set to '1' if a hang is detected and that leads to an intentional crash.
- **crashingThread**: Set to the ID of the thread that was presumably detected as hung by the Kit hang detector. Currently this will always be the process' main thread.
- **crashingIntentionallyDueTo**: Set to a reason message for why an intentional crash occurred. This is only present if the crash was intentional and not the result of a program malfunction or bug.

### Metadata Collected on Crash Reporter Startup

- **commandLine**: The full command line that was used to launch the process. This will be scrubbed of user names before being added.
- **environment**: Lists all environment variables present at process launch time. This metadata value is disabled by default but can be enabled with the `/crashreporter/includeEnvironmentAsMetadata` setting. All environment variables will be scrubbed of user names before being added as metadata.
- **runningInContainer**: Boolean indicating whether the process is running in a container.

### Metadata Collected on Kit-Kernel Startup

- **environmentName**: The runtime environment the app is currently running in. This will either be the name of any detected CI/CD system (ie: TeamCity, GitLab, etc), or 'default' if none is detected. This may also be modified later by the `omni.kit.telemetry` extension to 'Individual', 'Enterprise', or 'Cloud' if it was previously set to 'default'.
- **appState**: Set to either 'startup', 'started', or 'shutdown' depending on which stage of the kit-kernel life cycle the process is currently in.
- **carboniteSdkVersion**: The version of the Carbonite SDK that is in use.
- **carboniteFrameworkVersion**: The version of the Carbonite framework that is in use.
- **appName**: The name of the app that is currently running.
- **appVersion**: The version of the app that is currently running.
- **portableMode**: Set to '1' if the `--portable` or `--portable-root` command line options are used, or if a portable root is implicitly set up in a local developer build.
Set to '0' otherwise.
- **email**: Set to the user's email address as reported in their `privacy.toml` file. Note that this value will not be present in `privacy.toml` for public users.
- **userId**: Set to the user's ID as reported in their `privacy.toml` file. Note that this metadata value will not be set for public users.

### CI/CD Related Metadata

These metadata values are detected on crash reporter startup from various environment variables that common CI/CD systems export for all of their jobs. Currently only TeamCity and GitLab are supported. Any Carbonite based app will get these metadata values.

- **crashingJobId**: The CI/CD specific identifier of the running job that crashed.
- **crashingJobName**: The name of the CI/CD job that crashed.
- **projectName**: The name of the project the crashing CI/CD job belongs to.
- **crashingJobUrl**: The URL to the status page for the crashing CI/CD job.
- **commitAuthor**: The author of the top commit for the crashing CI/CD job. This is only available for GitLab pipeline jobs.
- **commitHash**: The hash of the top commit for the crashing CI/CD job. This is only available for GitLab pipeline jobs.
- **commitTimestamp**: The timestamp of the top commit for the crashing CI/CD job. This is only available for GitLab pipeline jobs.
- **crashingPipelineUrl**: The URL to the status page for the pipeline that the crashing CI/CD job is running under. This is only available for GitLab pipeline jobs.
- **crashingPipelineName**: The name of the pipeline that the crashing CI/CD job is running under. This is only available for GitLab pipeline jobs.
- **crashingPipelineId**: The ID of the pipeline that the crashing CI/CD job is running under. This is only available for GitLab pipeline jobs.
- **ciEnvironment**: The name of the CI/CD environment that was detected.
- **teamcity_run**: Set to 'true' if the crash occurred during a job run on TeamCity.
- **teamcity_build_number**: The build number of the TeamCity job that crashed.
- **teamcity_project_name**: The name of the project that the crashing TeamCity job belongs to.
- **teamcity_buildconf_name**: The build configuration name of the project that the crashing TeamCity job belongs to.

### Metadata Added From Assertion Failures

These metadata values are only added when a `CARB_ASSERT()`, `CARB_CHECK()`, or `CARB_FATAL_UNLESS()` test fails. They provide information on why the process aborted due to a failed assertion that was not caught and continued by a debugger. Any Carbonite based app will get these metadata values.

- **assertionCausedCrash**: Set to 'true' if the crash was caused by a failed assertion. This value is not present otherwise.
- **assertionCount**: Set to the number of assertions that have failed in the process. In most situations this will just be '1'. However, it is possible to continue from a failed assertion under a debugger, so this could be larger than 1 if a developer continued from multiple assertions before finally crashing.
- **lastAssertionCondition**: The text of the assertion condition that failed. This will not include the values of any variables mentioned in the condition, however.
- **lastAssertionFile**: The name and path of the source file the assertion failed in.
- **lastAssertionFunc**: The name of the function the assertion failed in.
- **lastAssertionLine**: The source code line number of the failed assertion.

### Metadata Added When Extensions Load

These metadata values are added by the extension loading plugin in kit-kernel and will be present only in certain situations in Kit based apps.

- **extraExts**: Set to the list of extensions that the user has explicitly installed or toggled on or off during the session. This is done through the extension manager window in Kit apps. Extension names are removed from this list any time an extension is explicitly unloaded by the user at runtime.
- **autoloadExts**: Set to the list of extensions that the user has explicitly marked for 'auto-load' in the Kit extension manager window. This value will only be written out once on startup.
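The timestamp metadata formats used above (StartupTime as seconds since the Unix epoch, CrashTime as an RFC3339 GMT timestamp) can be produced as follows. The exact string formatting the plugin uses internally is an assumption; this only illustrates the two formats described:

```python
import time
from datetime import datetime, timezone

# StartupTime: seconds since midnight GMT, January 1, 1970 (the Unix epoch).
startup_time = int(time.time())

# CrashTime: an RFC3339 GMT timestamp, e.g. "2024-05-08T12:34:56Z".
crash_time = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
```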
# Overview — Omniverse Kit 1.4.2 documentation

Extension: omni.kit.material.library-1.4.2. Documentation generated: May 08, 2024.

## Overview

omni.kit.material.library is a library of Python functions for materials. It provides a simple interface to the hydra and neuraylib low level extensions for creating materials as prims, binding materials to "bindable" prims, material UI, material "Create" menus, material preferences, and much more.

### Examples

#### Create a material

```python
import omni.kit.commands
import omni.kit.material.library

# Create a Shader
omni.kit.commands.execute('CreateAndBindMdlMaterialFromLibrary',
    mdl_name='OmniPBR.mdl',
    mtl_name='OmniPBR')
```

#### Create a material and bind to /World/Plane, method 1

```python
import omni.kit.commands
import omni.kit.material.library
from pxr import Usd

prim_path = "/World/Plane"
prim_name = "testplane"

# Create a Plane Prim
omni.kit.commands.execute('CreatePrimWithDefaultXform',
    prim_type='Plane',
    prim_path=prim_path)

# Select the Plane
omni.kit.commands.execute('SelectPrims',
    old_selected_paths=[''],
    new_selected_paths=[prim_path],
    expand_in_stage=True)

# Create a Shader and bind it to the selected prims
omni.kit.commands.execute('CreateAndBindMdlMaterialFromLibrary',
    mdl_name='OmniPBR.mdl',
    mtl_name='OmniPBR',
    prim_name=prim_name,
    mtl_created_list=None,
    bind_selected_prims=True)
```

#### Create a material and bind to /World/Plane, method 2

```python
import omni.kit.commands
import omni.kit.material.library
from pxr import Usd, UsdShade

prim_path = "/World/Plane"
prim_name = "testplane"

# Create a Plane Prim
omni.kit.commands.execute('CreatePrimWithDefaultXform',
    prim_type='Plane',
    prim_path=prim_path)

# Select the Plane
omni.kit.commands.execute('SelectPrims',
    old_selected_paths=[''],
    new_selected_paths=[prim_path],
    expand_in_stage=True)

# Create a Shader, capturing the created material path
mtl_created_list = []
omni.kit.commands.execute('CreateAndBindMdlMaterialFromLibrary',
    mdl_name='OmniPBR.mdl',
    mtl_name='OmniPBR',
    mtl_created_list=mtl_created_list)

# Bind material to prim
omni.kit.commands.execute("BindMaterial",
    prim_path=[prim_path],
    material_path=mtl_created_list[0],
    strength=UsdShade.Tokens.weakerThanDescendants)
```

#### Create a material and modify the created material

This can be a problem, as just creating the material does not mean its `Usd.Attribute`s are immediately accessible. The created material's inputs/outputs first have to be "loaded" into the `Usd.Prim`, so to work around this, use the `on_created_fn` callback.

NOTE: Materials loaded via a .usd file will also not have their `Usd.Attribute`s immediately accessible. Selecting the prim triggers `omni.usd.get_context().load_mdl_parameters_for_prim_async()` to do this.

```python
import asyncio

import omni.kit.commands
import omni.kit.material.library
from pxr import Usd

# Created material callback
async def on_created(shader_prim: Usd.Prim):
    shader_prim.GetAttribute("inputs:diffuse_texture").Set("./test.png")

# Create a Shader
omni.kit.commands.execute('CreateAndBindMdlMaterialFromLibrary',
    mdl_name='OmniPBR.mdl',
    mtl_name='OmniPBR',
    on_created_fn=lambda p: asyncio.ensure_future(on_created(p)))
```

#### Get SubIds from material

```python
import asyncio
import carb
import omni.kit.material.library

async def get_subs_ids():
    mdl_path = carb.tokens.get_tokens_interface().resolve("${kit}/mdl/core/Base/OmniHairPresets.mdl")
    subid_list = await omni.kit.material.library.get_subidentifier_from_mdl(mdl_file=mdl_path)
    print(f"subid_list:{subid_list}")

asyncio.ensure_future(get_subs_ids())
```

#### Get SubIds from material using callback

```python
import asyncio
import carb
import omni.kit.material.library

def have_subids(id_list):
    print(f"id_list:{id_list}")

mdl_path = carb.tokens.get_tokens_interface().resolve("${kit}/mdl/core/Base/OmniHairPresets.mdl")
asyncio.ensure_future(omni.kit.material.library.get_subidentifier_from_mdl(mdl_file=mdl_path,
    on_complete_fn=have_subids))
```
4,192
create-an-extension_create_from_usd_explorer.md
# Develop a USD Explorer App

## Important

The Omniverse USD Explorer Application is composed of many curated Extensions from Kit SDK. These have been through quality control to ensure they work together in concert with the specific settings of the USD Explorer Application. In this tutorial, we encourage developers to explore using all Extensions available on the platform; however, it’s important to recognize the need to scrutinize any new Application based on the example. Adding existing Extensions, adding new Extensions, and changing settings are all reasons to do careful quality control of the new app if it is to be used for production.

## This section covers:

- **Customizing** the Application **through settings**.
- **Adding new functionality** by creating a new Python Extension.
- **Hot loading Python code** changes in the app.
- **Adding a menu item** to reveal a window.
- **Connecting VSCode** to the app for **debugging Python code**.
- **Writing tests** that simulate user interactions.
- How to **develop Applications for Omniverse Cloud**.

## Setup a New App

This project provides an `omni.usd_explorer.kit` file as an example of a feature-rich Application. Let’s duplicate the `omni.usd_explorer.kit` app and the associated `omni.usd_explorer.setup` Extension. You could also just rename the existing files - but duplicating them allows keeping an original around for reference.

### Duplicate Files

1. Duplicate `.\source\apps\omni.usd_explorer.kit`. Name the new file `my_company.usd_explorer.kit`.
2. Duplicate directory `.\source\extensions\omni.usd_explorer.setup`. Name the new directory `my_company.usd_explorer.setup`.
3. Rename the `omni` folder in `.\source\extensions\my_company.usd_explorer.setup\`.
1.
Replace `omni` with `my_company` in the following code blocks:
   - From `omni` to `my_company`
   - Replace `omni.usd_explorer.setup` within the **new files** with `my_company.usd_explorer.setup`
     - In VSCode, use `Edit` > `Replace in Files`
     - For each entry in `my_company.usd_explorer.kit` and entries in the `my_company.usd_explorer.setup` directory use the `Replace` button.
   - In `.\my_company.usd_explorer.setup\premake5.lua`, change `repo_build.prebuild_link { "omni", ext.target_dir.."/omni" }` to `repo_build.prebuild_link { "my_company", ext.target_dir.."/my_company" }`
2. Configure build tool to recognize the new Application.
   - Open `.\kit-app-template\premake5.lua`
   - Find the section `-- Apps:`
   - Add an entry for the new app: `define_app("my_company.usd_explorer")`. Optionally remove the entry `define_app("omni.usd_explorer")`.
3. Build & Verify
   - Run a build and verify that the new Application works by starting it:
     - Windows: `.\_build\windows-x86_64\release\my_company.usd_explorer.bat`
     - Linux: `./_build/linux-x86_64/release/my_company.usd_explorer.sh`
   - Note: The first time the Application launches it could take a while. In the shell, you’ll eventually see `RTX ready` - this is when the app is done initializing.
   - Important: If you accidentally missed a step and the build fails, or if there are errors finding the `my_company.usd_explorer.setup` Extension on startup:
     - Make any necessary changes in `.\source`
     - Remove the `_build` directory with the command `build -c` or delete the directory manually.
     - Run a new build.
4. Customize App via Settings
   - We’ve established that the `[settings]` section of a kit file allows a low-code approach to change the behavior of an Extension. But how do you know what settings are available to begin with? Let’s use the `Debug Settings` Extension and do some customizations of the app.

# Example: Title Bar (Windows only)

The title bar Extension is, at this writing, a Windows-only feature. It creates a custom title bar for the Application where the icon, font styles, title, etc. can be customized via settings.

## Explore Extension Settings

1. Run the app again.
2. Close the `WELCOME TO OMNIVERSE` window.
3. Search for `omni.kit.window.modifier.titlebar` within the `Debug Settings` window and open the `exts` section. Now you can explore the various settings for the Extension.
4. In `my_company.usd_explorer.kit`, search for `[settings.exts."omni.kit.window.modifier.titlebar"]`.

```toml
[settings.exts."omni.kit.window.modifier.titlebar"]
titleFormatString = " USD EXPLORER {verKey:/app/version,font_color=0x909090} {separator} {file, board=true}"
showFileFullPath = true
icon.file = "${my_company.usd_explorer.setup}/data/nvidia-omniverse-usd_explorer.ico"
icon.size = 18
icon.use_size = true
defaultFont.size = 18
defaultFont.color = 0xD0D0D0
...
```

1. Now you are able to change, for example, the title from `USD EXPLORER` to `MY COMPANY USD EXPLORER` within the `titleFormatString` setting.
2.
Observe how resources are referenced by the `icon.file` setting: `${my_company.usd_explorer.setup}` is the root directory of the given Extension and `/data/nvidia-omniverse-usd_explorer.ico` is the file being referenced from within that Extension. Feel free to change that icon but be sure to keep the same resolution. If a path needs to be relative to the app - the `.kit` file - then use `${app}`. 3. Run the app to see changes. # Example: Asset Browser (Windows & Linux) The [Asset Browser](https://docs.omniverse.nvidia.com/extensions/latest/ext_browser-extensions/asset-browser.html) Extension presents files from a list of locations - providing end users with intuitive access to content libraries. Customization of the location list can easily be done via a .kit file. ## Explore Extension Settings ## Run the App 1. Run the app again. 2. Close the `WELCOME TO OMNIVERSE` window. 3. Search for `omni.kit.browser.asset` within the `Debug Settings` window and open the `exts` section to see settings used by the Extension. Here we see that the `omni.kit.browser.asset.folders` setting provides a list of locations. ## Edit Extension Settings In `my_company.usd_explorer.kit`, find the settings line that starts with `"omni.kit.browser.asset".folders`. 
Add a local folder - or some Nucleus directory - that has some USD files in it - here’s an example adding a `My Company Assets` directory to the list: ### Windows: ```toml "omni.kit.browser.asset".folders = [ "http://omniverse-content-production.s3-us-west-2.amazonaws.com/Assets/Vegetation", "http://omniverse-content-production.s3-us-west-2.amazonaws.com/Assets/ArchVis/Commercial", "http://omniverse-content-production.s3-us-west-2.amazonaws.com/Assets/ArchVis/Industrial", "http://omniverse-content-production.s3-us-west-2.amazonaws.com/Assets/ArchVis/Residential", "C:/My Company Assets", ] ``` ### Linux: ```toml "omni.kit.browser.asset".folders = [ "http://omniverse-content-production.s3-us-west-2.amazonaws.com/Assets/Vegetation", "http://omniverse-content-production.s3-us-west-2.amazonaws.com/Assets/ArchVis/Commercial", "http://omniverse-content-production.s3-us-west-2.amazonaws.com/Assets/ArchVis/Industrial", "http://omniverse-content-production.s3-us-west-2.amazonaws.com/Assets/ArchVis/Residential", "/home/my_username/My Company Assets", ] ``` 4. Run the app and switch to the `Layout` mode. 5. Select the `NVIDIA Assets browser` - the folder you added is now listed on the left. Observe just how easy it was to change the behavior of the Asset Browser. Keep this in mind when you create Extensions of your own: expose configurable settings where appropriate. Think of the settings as part of the public API of the Extension. ## Create an Extension Adding functionality to Applications - beyond what is available in existing Extensions - is done by creating new Extensions. Here we’ll create a new Extension and use it in the app. 1. Create a new Extension using `repo template new` command (command cheat-sheet). - For `What do you want to add` choose `extension`. - For `Choose a template` choose `python-extension-window`. - Enter name `my_company.usd_explorer.tutorial`. - Leave version as `0.1.0`. 
- The new Extension is created in `.\source\extensions\my_company.usd_explorer.tutorial`.

The added Extension has this directory structure:

```
# Extension root directory
my_company.usd_explorer.tutorial
    config
        # `extension.toml` is the equivalent of an Application .kit file.
        # This is where package metadata, dependencies and settings are managed.
        extension.toml
    # The `data` folder contains resources. At this point it contains some images
    # used to display the Extension in the Extension Manager.
    data
        icon.png
        preview.png
    # The `docs` folder contains both the `CHANGELOG.md` and files for building docs.
    docs
        CHANGELOG.md
        Overview.md
        README.md
    my_company
        usd_explorer
            tutorial
                tests
                    # A sample of Python files for tests.
                    # These are configured to be used only when the Application runs
                    # in `test` mode.
                    __init__.py
                    test_window.py
                # These are the Python modules used by the Application.
                # The directory can have as many files as needed - and subdirectories.
                __init__.py
                python_ext.py
    # `premake5.lua` makes the build process recognize and build the Extension.
    # Without this file nothing inside the my_company.usd_explorer.tutorial directory
    # is included in the build.
    premake5.lua
```

Let’s add the Extension to the app:

1. In `my_company.usd_explorer.kit`, add `"my_company.usd_explorer.tutorial" = {}` in the `[dependencies]` section.
2. Do a build.
3. Run the app again.
4. Close the `WELCOME TO OMNIVERSE` window so the `My Window` can be seen.

Hot Loading Python Code Changes
===============================

1. If you closed the app, start it up again and make sure the added window is visible.
2. Open `python_ext.py` from `.\source\extensions\my_company.usd_explorer.tutorial\my_company\usd_explorer\tutorial`.
3. Change one of the button labels; for example, change `ui.Button("Add", clicked_fn=on_click)` to `ui.Button("Increase", clicked_fn=on_click)`.
4. Save the file and look at what happened to the button in the Application. It was updated.
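The hot-reload behavior above can be imitated in plain Python: re-import a module when its source file changes. Kit's own reloader is more involved; the helper below is only an illustrative sketch, not part of the omni API.

```python
import importlib.util
import sys
import tempfile
from pathlib import Path

# Load (or re-load) a module from a file path under a fixed name.
def load_module(path: Path, name: str):
    spec = importlib.util.spec_from_file_location(name, path)
    module = importlib.util.module_from_spec(spec)
    sys.modules[name] = module
    spec.loader.exec_module(module)
    return module

with tempfile.TemporaryDirectory() as tmp:
    src = Path(tmp) / "button_ext.py"
    src.write_text('LABEL = "Add"\n')
    mod = load_module(src, "button_ext")
    first = mod.LABEL
    src.write_text('LABEL = "Increase"\n')  # edit the source, as in step 3
    mod = load_module(src, "button_ext")    # reloading picks up the change
    print(first, "->", mod.LABEL)           # Add -> Increase
```

In the running app the reload is triggered automatically when the file is saved; the sketch makes the same source-change-then-reload cycle explicit.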
**NOTE:** When working with Python Extensions it is possible to leave the Application running when smaller changes are made to the source code. This immediate feedback can be rather useful when making iterative changes.

Show Window from a Menu
=======================

Change the contents of `python_ext.py` to the below. This will create a `My Company Window` omni.ui.MenuItem inside the already existing `Window` menu for showing the new window. Note that we’re renaming the window from `My Window` to `My Company Window`.

```python
import omni.ext
import omni.kit.ui
import omni.ui as ui

# Functions and vars are available to other Extensions as usual in Python: `example.python_ext.some_public_function(x)`
def some_public_function(x: int):
    print(f"[omni.hello.world] some_public_function was called with {x}")
    return x ** x

# Any class derived from `omni.ext.IExt` in top level module (defined in `python.modules` of `extension.toml`) will be
# instantiated when Extension gets enabled and `on_startup(ext_id)` will be called. Later when Extension gets disabled
# on_shutdown() is called.
class MyExtension(omni.ext.IExt):
    # ext_id is current Extension id. It can be used with Extension manager to query additional information, like where
    # this Extension is located on filesystem.
    def on_startup(self, ext_id):
        # Initialize some properties
        self._count = 0
        self._window = None
        self._menu = None
        # Create a menu item inside the already existing "Window" menu.
        editor_menu = omni.kit.ui.get_editor_menu()
        if editor_menu:
            self._menu = editor_menu.add_item("Window/My Company Window", self.show_window, toggle=True, value=False)

    def on_shutdown(self):
        self._window = None
        self._menu = None

    def show_window(self, menu_path: str, visible: bool):
        if visible:
            # Create window
            self._window = ui.Window("My Company Window", width=300, height=300)
            with self._window.frame:
                with ui.VStack():
                    label = ui.Label("")

                    def on_click():
                        self._count += 1
                        label.text = f"count: {self._count}"

                    def on_reset():
                        self._count = 0
                        label.text = "empty"

                    on_reset()

                    with ui.HStack():
                        ui.Button("Add", clicked_fn=on_click)
                        ui.Button("Reset", clicked_fn=on_reset)

            self._window.set_visibility_changed_fn(self._visibility_changed_fn)
        elif self._window:
            # Remove window
            self._window = None
            self._count = 0
            editor_menu = omni.kit.ui.get_editor_menu()
            if editor_menu:
                editor_menu.set_value("Window/My Company Window", visible)

    def _visibility_changed_fn(self, visible):
        editor_menu = omni.kit.ui.get_editor_menu()
        if editor_menu:
            # Toggle the checked state of the menu item
            editor_menu.set_value("Window/My Company Window", visible)
```

## Review

1. Run the Application again.
2. Close the `WELCOME TO OMNIVERSE` window.
3. Click the `Layout` tab.
4. Open the new window from `Window` > `My Company Window`.

## Debug Code

Let’s use the tutorial Extension to explore how to debug Python Extensions running in an App. Kit SDK provides `omni.kit.debug.vscode` that enables Visual Studio Code to attach to a Kit process. Let’s use this Extension and trigger a breakpoint when using the `Add` button in our window.

### Visual Studio Code Setup

The project includes a `.\.vscode\launch.json` file which is a configuration for connecting with the debugger. Note the `port` number `3000`: this is the default port used by `omni.kit.debug.vscode`.

```json
{
    // Use IntelliSense to learn about possible attributes.
    // Hover to view descriptions of existing attributes.
    // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Python: Remote Attach",
            "type": "python",
            ...
        }
    ]
}
```

# Application Setup

Add `"omni.kit.debug.vscode" = {}` to the `[dependencies]` section of the `my_company.usd_explorer.kit`.

# Attach & Debug

1. Run the `my_company.usd_explorer` app.
2. Note the `VS Code Link` window.
3. Return to VSCode.
   1. Click the `Run and Debug` button on the left toolbar.
   2. Select the `Python: Remote Attach` option - as named in the above `launch.json`: `"name": "Python: Remote Attach"`.
   3. Now click the button to run in debug mode.
4. Return to the `my_company.usd_explorer` app and note that the debugger is attached.
5. Open `.\my_company.usd_explorer.tutorial\my_company\usd_explorer\tutorial\python_ext.py` and set a breakpoint inside the `on_click()` method.
6. Bring up the `My Company Window` in the app and click on the `Add` button.

The breakpoint in VS Code should be triggered at this point. Observe that the Python file in VSCode is the file in the `_build` directory: you set the breakpoint on the source file and the breakpoint triggers within the build files.

Be sure to remove the `omni.kit.debug.vscode` dependency from the app when you are done debugging.

# Test

Now that we know the Extension works, we want to keep it that way. Kit SDK provides a framework for running tests that assert functionality as Applications and Extensions are improved with more and more functionality.

`my_company.usd_explorer.tutorial`’s `extension.toml` file has a section where additional dependencies can be added for when tests run.
```toml [[test]] # Extra dependencies only to be used during test run dependencies = [ "omni.kit.ui_test" # UI testing Extension ] ``` The `omni.kit.test` Extension which provides the foundation for running tests does not need to be added as a dependency because it is added to the executables by the build process. We added `omni.kit.ui_test` because it enables using the UI in tests. The Extension’s `test_window.py` module contains the tests: - It imports modules from within the Extension. - Function `test_hello_public_function` is an example of asserting a method on a module. - Function `test_window_button` is an example of including UI elements in a test. Buttons are found by their UI path and then button clicks are simulated. ## Run Test The `.\_build\windows-x86_64\release` directory contains a number of bat files for testing Extensions and Applications. To run the test of this specific Extension run the `tests-my_company.usd_explorer.tutorial.bat` file inside of a command line to see the output. ## Adjust the Test Note that the test fails because it can’t find the UI elements. That’s because we changed the UI behavior: the `My Company Window` window does not open automatically. We need to simulate a user opening the window in order for the UI elements to be found: Edit the `extension.toml`’s `[[test]]` section by adding `omni.app.setup` as a dependency. This allows the menu to appear just like it would when the Extension is part of an Application. ```toml [[test]] # Extra dependencies only to be used during test run dependencies = [ "omni.app.setup", "omni.kit.ui_test" # UI testing Extension ] ``` Also in the `[[test]]` section, comment out the `--no-window` line. This will allow you to see the UI when running the test. 
```toml
args = [
    "--/app/window/dpiScaleOverride=1.0",
    "--/app/window/scaleToMonitor=false",
    # "--no-window"
]
```

In `test_window.py`, add `await ui_test.menu_click("Window/My Company Window")` at the beginning of the `test_window_button` method. This simulates a user showing the window.

```python
async def test_window_button(self):
    # Simulate user clicking menuitem to show window
    await ui_test.menu_click("Window/My Company Window")
```

Continue adjusting `test_window.py` by changing the UI paths starting with `My Window/`: set to `My Company Window/` instead.

```python
# Find a label in our window
label = ui_test.find("My Company Window//Frame/**/Label[*]")

# Find buttons in our window
add_button = ui_test.find("My Company Window//Frame/**/Button[*].text=='Add'")
reset_button = ui_test.find("My Company Window//Frame/**/Button[*].text=='Reset'")
```

Run the test again. Notice how the menu is clicked - and the test is successful again.

As additional functionality is added to the Extension, more tests can be added to make sure manual QA can be kept to a minimum.

Test coverage can be reported as part of running the test by passing the `--coverage` argument to the executable: `tests-my_company.usd_explorer.tutorial.bat --coverage`. Note that `--coverage` should only be used for individual Extensions - not Applications. Read more about code coverage.

::: note
**Note**
Reference: omni.kit.test
Reference: omni.kit.ui_test
Reference: Python Test
Reference: C++ Test
:::

## Develop for Omniverse Cloud

If you are not familiar with Omniverse Cloud then you can read more here and revisit this section in the future.

In the above steps you worked with the `omni.usd_explorer.kit` file as a starting point. You may have noticed the neighboring `omni.usd_explorer.ovc.kit` file. The latter is a file to use when you plan for an app to run on Omniverse Cloud (OVC). Applications streamed from OVC are very similar to Applications that run on workstations.
There are some small but necessary differences, mostly in settings. Observe that `omni.usd_explorer` is a dependency inside `omni.usd_explorer.ovc.kit`. The “ovc” app is composed with the “base”:

```toml
[dependencies]
# the base App
"omni.usd_explorer" = {}
```

If you wanted to develop a `my_company.usd_explorer` app for OVC you would:

1. Duplicate the `omni.usd_explorer.ovc.kit` and name it `my_company.usd_explorer.ovc.kit`.
2. In `my_company.usd_explorer.ovc.kit`, change the dependency `"omni.usd_explorer" = {}` to `"my_company.usd_explorer" = {}`.
3. Change the `[settings.app.extensions]` section to:

```toml
[settings.app.extensions]
generateVersionLockExclude = ["my_company.usd_explorer"]
```

4. Add `define_app("my_company.usd_explorer.ovc")` in `.\premake5.lua`.

The developer workflow for creating an OVC app:

- Continue developing for the workstation. At the very least, you as a developer still need the ability to run the app locally to test functionality.
- Do most changes to dependencies and settings in the “base” kit file. Only make changes in the “ovc” kit file when the change is only relevant to running the app on OVC.
- The changes you make in the “base” kit file are automatically picked up by the “ovc” app.
- When you package the app, use fat packaging for OVC publishing. You can package both apps in a single package.

## Summary

Hopefully this tutorial has been beneficial thus far. We’ve covered everything from creating apps to configuring dependencies and settings, and how to debug and test code. At this point it’s all about iterating on functionality; however, let’s assume that has already been completed. Let’s fast-forward and imagine we want to give the app to end users. Please continue reading through the **Package App** and **Publish App** sections to learn how to do just that.
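The base/ovc layering described above can be sketched as a settings merge: the "ovc" kit file's tables layer on top of the base app's, with later values overriding and nested tables merging. Kit's real settings resolution is handled by the runtime; the helper below is only an illustration of the layering idea.

```python
# Deep-merge two settings trees: values from `override` win,
# nested dict tables are merged recursively, `base` is left untouched.
def merge_settings(base: dict, override: dict) -> dict:
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge_settings(merged[key], value)
        else:
            merged[key] = value
    return merged

base = {"app": {"window": {"title": "USD Explorer"}, "extensions": {}}}
ovc = {"app": {"extensions": {"generateVersionLockExclude": ["my_company.usd_explorer"]}}}

settings = merge_settings(base, ovc)
print(settings["app"]["window"]["title"])  # USD Explorer (inherited from the base)
```

This is why changes to the "base" kit file are automatically picked up by the "ovc" app: only keys the "ovc" file sets explicitly diverge from the base.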
23,698
create.md
# Create a Project Before development can begin, you must first create a new Project. There are many ways to create an Omniverse Project, and the method you choose depends on what you intend to build and how you prefer to work. Projects tend to align within distinct categories, yet there remains remarkable flexibility within each category to help you address the needs of the user. Here we will give instructions on how to begin the most common types of Projects using the Omniverse platform. ## Extensions Extensions are a common development path within Omniverse and they serve as the fundamental building blocks of Applications and Services. In effect, all user-facing elements in an Omniverse Application are created using Extensions. Extensions such as the Content Browser, Viewport and Stage elements are used by the Omniverse USD Composer and Omniverse USD Presenter Applications among many others. Extensions are persistent within Applications so long as they are configured as an application dependency or they are set to load in the Extension Manager. In creating Omniverse Extensions, multiple options are available: 1. One powerful and flexible technique involves cloning our template from Github, which can be discovered under the **Automated (Repo Tool)** tab above. 2. For a simpler route, we offer a method via the **UI (Extension Manager)** tab. This provides a straightforward UI workflow to create your extension in another application, such as Omniverse Code. ### Automated (Repo Tool) Requirements: - [Git](../../common/glossary-of-terms.html#term-Git) - [Command-Line](../../common/glossary-of-terms.html#term-Command-Line) Optional: - [VS Code](../../common/glossary-of-terms.html#term-VS-Code) A template for Omniverse Project development can be accessed via the following GitHub repository: - [Advanced Template Repository](https://github.com/NVIDIA-Omniverse/kit-app-template). 
Below we describe the procedure to create an Extension development Project using this template.

1. Fork and/or Clone the Kit App Template repository link into a local directory using [Git](../../common/glossary-of-terms.html#term-Git). The Windows/Linux command-line might resemble the following, from the chosen local directory:

```bash
git clone https://github.com/NVIDIA-Omniverse/kit-app-template.git
```

This command will generate a subfolder named kit-app-template with multiple files and folders designed to help you customize your Omniverse Project including sample Extension and Application files.

2. Navigate to the newly created ‘kit-app-template’:

```bash
cd kit-app-template
```

*optional* If you have VS Code installed, you can now open the Project template in VSCode:

```bash
code .
```

Once completed, you should have a folder structure which looks like this.

3. From either the integrated terminal in VSCode or from the command line, create a new Extension:

```bash
repo template new
```

This action will trigger a sequence of options. Make the following selections:

- **What do you want to add:** extension
- **Choose a template:** python-extension-window
- **Enter a name:** my_company.my_app.extension_name
- **Select a version, for instance:** 0.1.0

The newly created Extension is located at:

```
kit-app-template/source/extensions/my_company.my_app.extension_name
```

You have now created an Extension template and are ready to begin development.

**Additional information**

The ‘repo’ command is a useful tool within an Omniverse Project. This command is both configurable and customizable to suit your requirements. To review the tools accessible within any Project, enter the following command:

```bash
repo -h
```

More details on this tool can be found in the Repo Tools Documentation.
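The dotted extension name maps directly onto the template's on-disk layout: the extension lives under `source/extensions/<name>`, and each dot in the name becomes a nested Python package directory. A small sketch of that mapping (the helper itself is illustrative, not part of the repo tool):

```python
from pathlib import Path

# Map a dotted extension name to the key paths the template produces.
def extension_layout(name: str, root: str = "kit-app-template") -> dict:
    ext_dir = Path(root) / "source" / "extensions" / name
    return {
        "extension_dir": ext_dir,
        "config": ext_dir / "config" / "extension.toml",
        # my_company.my_app.extension_name -> my_company/my_app/extension_name
        "module_dir": ext_dir.joinpath(*name.split(".")),
    }

layout = extension_layout("my_company.my_app.extension_name")
print(layout["module_dir"])
```

Knowing this mapping helps when renaming an extension later: both the top-level directory and the nested package directories have to change together, as seen in the USD Explorer rename steps.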
If you prefer a comprehensive tutorial that guides you through Application and Extension development see the Kit App Template Tutorial. With the Extension Manager UI, you have the ability to quickly create an extension directly within an existing application. Step-by-step instructions on creating an extension this way are available under Getting Started with Extensions in the Kit Manual. You have now created an Extension template and are ready to begin development. **Apps** Omniverse Applications are simply a collection of Extensions. By configuring these collections, you can create high-performance, custom solutions suited to your organization’s needs without necessarily writing any code. NVIDIA develops and maintains a suite of Omniverse Applications to demonstrate what possible solutions you can create from these collections. An Omniverse Application is a .kit configuration file that instructs the Kit executable to load a predetermined set of Extensions. Either a `.bat` file (for Windows) or a `.sh` file (for Linux) is employed to launch your Application, by passing your Application .kit file to the Kit executable. Applications serve as an ideal option when you need tailor-made software with a unique layout and features. Omniverse Applications offer the flexibility to select from existing Extensions or create novel ones to add desired functionality. ## Application Template For inspiration, you can reference additional applications on the Omniverse Launcher. Download those which appeal to you and explore their installation folders and related .kit files. ### Requirements: - Git - Command-Line ### Optional: - VS Code A template for Omniverse Project development can be accessed via the following GitHub repository: - Advanced Template Repository. Below we describe the procedure to create an Extension development Project using this template. 1. Fork and/or Clone the Kit App Template repository link into a local directory using Git. 
The Windows/Linux command-line might resemble: ``` git clone https://github.com/NVIDIA-Omniverse/kit-app-template ``` This command will generate a subfolder named kit-app-template with multiple files and folders designed to help you customize your Omniverse Project including sample Extension and Application files. 2. Navigate to the newly created ‘kit-app-template’: ``` cd kit-app-template ``` *optional* If you have VS Code installed, you can now open the Project template in VSCode: ``` code . ``` Once completed, you should have a folder structure which looks like this. 3. Navigate to `/source/apps` folder and examine the sample .kit files found there. You have now created an Application template and are ready to begin development and learn to manipulate your .kit files. ### Additional information - If you prefer a comprehensive tutorial that guides you through Application and Extension development see the Kit App Template Tutorial. ## Connectors Omniverse Connectors serve as middleware, enabling communication between Omniverse and various software applications. They allow for the import and export of 3D assets, data, and models between different workflows and tools through the use of Universal Scene Description (OpenUSD) as the interchange format. Creating a Connector can be beneficial when connecting a third-party application to Omniverse, providing the ability to import, export, and synchronize 3D data via the USD format. A practical use case of this could involve generating a 3D model in Maya, then exporting it to an Omniverse Application like Omniverse USD Composer for rendering with our photo-realistic RTX or Path Tracing renderers. ### Connector Resources To equip you with an understanding of creating Omniverse Connectors, we’ve compiled some beneficial resources below: #### Video Overview Gain insight into Omniverse Connectors with this concise overview. This video helps you understand the basics of Connectors and the unique value they bring. 
- Omniverse Connectors overview

#### Documentation

Learn what you can create with Connectors by following our samples using OpenUSD and Omniverse Client Library APIs:

- Connect Sample
- Video Tutorial - Learn how to connect with the Omniverse platform by syncing data to it via OpenUSD. Establish a live sync session and get an OpenUSD 101 overview to get you started.
- Making a Connector for Omniverse

## Services

Omniverse offers a Services framework based on its foundational Kit SDK. This framework is designed to simplify the construction of Services that can leverage the capabilities of custom Extensions.

Developers can choose to run their Services in various settings, such as local machines, virtual machines, or in the cloud. The framework is flexible, allowing Services to be migrated to different infrastructures easily without changing the Service's code.

The framework promotes loose coupling, with its components serving as building blocks that foster scalability and resilience. These reusable components help accelerate the development of your tools and features for databases, logging, metrics collection, and progress monitoring.

Creating a Service project employs the same tools used for Extensions, with a similar development process. However, certain aspects, such as configuration and dependencies, are unique to Services. More information is available here.

### Automated (Repo Tool)

Requirements:

- Git
- Command-Line

Optional:

- VS Code

A template for Omniverse Project development can be accessed via the following GitHub repository:

- Advanced Template Repository.

Below we describe the procedure to create an Extension development Project using this template.

1. Fork and/or clone the Kit App Template repository link into a local directory using Git.
The Windows/Linux command line might resemble:

```bash
git clone https://github.com/NVIDIA-Omniverse/kit-app-template
```

This command will generate a subfolder named `kit-app-template` containing multiple files and folders designed to help you customize your Omniverse Project, including sample Extension and Application files.

2. Navigate to the newly created `kit-app-template` directory:

```bash
cd kit-app-template
```

*Optional:* If you have VS Code installed, you can now open the Project template in VS Code:

```bash
code .
```

Once completed, you should have a folder structure which looks like this.

3. From either the integrated terminal in VS Code or from the command line, create a new Extension:

```bash
repo template new
```

This action will trigger a sequence of options. Make the following selections:

- **What do you want to add:** extension
- **Choose a template:** python-extension-main
- **Enter a name:** my_company.my_app.extension_name
- **Select a version, for instance:** 0.1.0

The newly created Extension is located at:

```
kit-app-template/source/extensions/my_company.my_app.extension_name
```

You have now created an Extension template and are ready to begin development of your Service.

**Additional information**

- The `repo` command is a useful tool within an Omniverse Project. This command is both configurable and customizable to suit your requirements. To review the tools accessible within any Project, enter the following command:

```bash
repo -h
```

More details on this tool can be found in the Repo Tools Documentation.

- If you prefer a comprehensive tutorial that guides you through Application and Extension development, see the Kit App Template Tutorial.
- With the Extension Manager UI, you can quickly create an extension directly within an existing application. Step-by-step instructions on creating an extension this way are available under Getting Started with Extensions in the Kit Manual.
create_docs.md
# Document

This document that you are reading was built from files and tools inside the kit-app-template project. The `.md` and `.toml` files in the `.\docs` directory are the source for this webpage, along with some images in that directory. Using the same tools - with your own markdown files - you can create docs to help end users understand the Applications and Extensions you develop.

## Build Docs

To build documentation, simply use the `repo docs` command (command cheat-sheet). To see the built documentation, open `_build\docs\kit-app-template\latest\index.html`.

> **Note**
> Reference: Documentation System.

## Link to Docs

Once the docs are hosted, an Application can provide a button or menu to access them. Here's a sample of how to open a webpage:

```python
import webbrowser

webbrowser.open(url)
```
creating-and-registering-custom-actions_Overview.md
# Overview — Kit Extension Template C++ 1.0.1 documentation

## Overview

An example C++ extension that can be used as a reference/template for creating new extensions. Demonstrates how to create actions in C++ that can then be executed from either C++ or Python. See the omni.kit.actions.core extension for extensive documentation about actions themselves.

## C++ Usage Examples

### Defining Custom Actions

```cpp
using namespace omni::kit::actions::core;

class ExampleCustomAction : public Action
{
public:
    static carb::ObjectPtr<IAction> create(const char* extensionId, const char* actionId, const MetaData* metaData)
    {
        return carb::stealObject<IAction>(new ExampleCustomAction(extensionId, actionId, metaData));
    }

    ExampleCustomAction(const char* extensionId, const char* actionId, const MetaData* metaData)
        : Action(extensionId, actionId, metaData), m_executionCount(0)
    {
    }

    carb::variant::Variant execute(const carb::variant::Variant& args = {},
                                   const carb::dictionary::Item* kwargs = nullptr) override
    {
        ++m_executionCount;
        printf("Executing %s (execution count = %d).\n", getActionId(), m_executionCount);
        return carb::variant::Variant(m_executionCount);
    }

    void invalidate() override
    {
        resetExecutionCount();
    }

    uint32_t getExecutionCount() const
    {
        return m_executionCount;
    }

protected:
    void resetExecutionCount()
    {
        m_executionCount = 0;
    }

private:
    uint32_t m_executionCount = 0;
};
```

## Creating and Registering Custom Actions

```cpp
// Example of creating and registering a custom action from C++.
Action::MetaData metaData;
metaData.displayName = "Example Custom Action Display Name";
metaData.description = "Example Custom Action Description.";

carb::ObjectPtr<IAction> exampleCustomAction =
    ExampleCustomAction::create("omni.example.cpp.actions", "example_custom_action_id", &metaData);

carb::getCachedInterface<omni::kit::actions::core::IActionRegistry>()->registerAction(exampleCustomAction);
```

## Creating and Registering Lambda Actions

```cpp
auto actionRegistry = carb::getCachedInterface<omni::kit::actions::core::IActionRegistry>();

// Example of creating and registering a lambda action from C++.
omni::kit::actions::core::IAction::MetaData metaData;
metaData.displayName = "Example Lambda Action Display Name";
metaData.description = "Example Lambda Action Description.";

carb::ObjectPtr<IAction> exampleLambdaAction = omni::kit::actions::core::LambdaAction::create(
    "omni.example.cpp.actions", "example_lambda_action_id", &metaData,
    [this](const carb::variant::Variant& args = {}, const carb::dictionary::Item* kwargs = nullptr) {
        printf("Executing example_lambda_action_id.\n");
        return carb::variant::Variant();
    });

carb::getCachedInterface<omni::kit::actions::core::IActionRegistry>()->registerAction(exampleLambdaAction);
```

```cpp
// Example of creating and registering (at the same time) a lambda action from C++.
carb::getCachedInterface<omni::kit::actions::core::IActionRegistry>()->registerAction(
    "omni.example.cpp.actions", "example_lambda_action_id",
    [](const carb::variant::Variant& args = {}, const carb::dictionary::Item* kwargs = nullptr) {
        printf("Executing example_lambda_action_id.\n");
        return carb::variant::Variant();
    },
    "Example Lambda Action Display Name", "Example Lambda Action Description.");
```

## Discovering Actions

```cpp
auto registry = carb::getCachedInterface<omni::kit::actions::core::IActionRegistry>();

// Retrieve an action that has been registered using the registering extension id and the action id.
carb::ObjectPtr<IAction> action = registry->getAction("omni.example.cpp.actions", "example_custom_action_id");

// Retrieve all actions that have been registered by a specific extension id.
std::vector<carb::ObjectPtr<IAction>> extensionActions = registry->getAllActionsForExtension("example");

// Retrieve all actions that have been registered by any extension.
std::vector<carb::ObjectPtr<IAction>> allActions = registry->getAllActions();
```

## Deregistering Actions

```cpp
auto actionRegistry = carb::getCachedInterface<omni::kit::actions::core::IActionRegistry>();

// Deregister an action directly...
actionRegistry->deregisterAction(exampleCustomAction);

// or using the registering extension id and the action id...
actionRegistry->deregisterAction("omni.example.cpp.actions", "example_custom_action_id");

// or deregister all actions that were registered by an extension.
actionRegistry->deregisterAllActionsForExtension("omni.example.cpp.actions");
```

## Executing Actions

```cpp
auto actionRegistry = carb::getCachedInterface<omni::kit::actions::core::IActionRegistry>();

// Execute an action after retrieving it from the action registry.
auto action = actionRegistry->getAction("omni.example.cpp.actions", "example_custom_action_id");
action->execute();

// Execute an action indirectly (retrieves it internally).
actionRegistry->executeAction("omni.example.cpp.actions", "example_custom_action_id");

// Execute an action that was stored previously.
exampleCustomAction->execute();
```

Note: All of the above will find any actions that have been registered from either Python or C++, and you can interact with them without needing to know anything about where they were registered.
CreatingCppNodes.md
# Creating C++ Nodes ## Setting Up Your Extension This is a guide to writing a node in C++, including how to set up and structure everything you need in your extension so that the node can be delivered to others in a consistent way. You may already have some of the pieces in place - feel free to skip ahead to just the parts you will need. If you wish to create a Python node then see [Creating Python Nodes](#omnigraph-creating-python-nodes). The Omniverse applications rely on extensions to provide functionality in a modular way and the nodes you write will be integrated best if you follow the same model. You may choose to add much more to your extension. What is described here is the bare minimum required to make an extension containing a single C++ node integrate into a Kit-based application. Whether you are familiar with extension development in Kit or not, the best place to start is from one of the predefined template extensions. **Important** It is assumed you understand how to build C++ files in an extension, usually using the `premake5.lua` build file favored by the Kit extension environment. See the templates in the Kit [github C++ repo](https://github.com/NVIDIA-Omniverse/kit-extension-template-cpp) for more information. Once you have a build directory you can copy the template extension [omni.graph.template.cpp](#ext-omni-graph-template-cpp) into your `source/extensions` directory so that the build process can access it. See [that extension’s documentation](#ext-omni-graph-template-cpp) for a more thorough explanation of how to populate and build your C++ nodes once the extension is in place. **Note** If you will have a mixture of both C++ and Python files in your extension then you should instead use [this template that sets up an extension with both kinds of nodes](#ext-omni-graph-template-mixed). 
To see details of what capabilities the C++ node and its corresponding .ogn definition have look through some examples in the [OGN User Guide](#ogn-user-guide), or look at some nodes that have already been implemented in the node library reference. See in particular the [OGN Code Samples - C++](#ogn-code-samples-cpp) for examples of how to access different types of data within a node.
CreatingPythonNodes.md
# Creating Python Nodes This is a guide to writing a node in Python, including how to set up and structure everything you need in your extension so that the node can be delivered to others in a consistent way. You may already have some of the pieces in place - feel free to skip ahead to just the parts you will need. If you wish to create a C++ node then see Creating C++ Nodes. ## Setting Up Your Extension The Omniverse applications rely on extensions to provide functionality in a modular way and the nodes you write will be integrated best if you follow the same model. You may choose to add much more to your extension. What is described here is the bare minimum required to make an extension containing a single Python node integrate into a Kit-based application. Whether you are familiar with extension development in Kit or not, the best place to start is from one of the predefined template extensions. ### Note If you will have a mixture of both C++ and Python files in your extension then you should instead use this template that sets up an extension with both kinds of nodes. ## Python Nodes With A Build The typical method of creating OmniGraph Python nodes is to use a build process with a .ogn definition file to generate Python support code, documentation, and even automated tests for your node type. Once you have a build directory you can copy the template extension omni.graph.template.python into your source/extensions directory so that the build process can access it. See that extension’s documentation for a more thorough explanation of how to populate and build your Python nodes once the extension is in place. ## Python Nodes Without A Build If you want to get up and running faster without the overhead of a build you can build a much smaller extension. This type of extension can only contain Python files, documentation, data, and configuration files such as the mandatory config/extension.toml file. 
You can define a local extension in your `Documents/Kit/shared/exts` directory, which is automatically scanned by the extension manager. A template for an extension with no build process can be found in omni.graph.template.no_build; copy this directory into your `Documents/Kit/shared/exts` directory, and be sure to rename everything to match your own extension's requirements.

See that extension's documentation for a more thorough explanation of how to build your Python nodes once the extension is in place.

To see details of what capabilities the Python node and its corresponding .ogn definition have, look through some examples in the OGN User Guide, or look at some nodes that have already been implemented in the node library reference. See in particular the OGN Code Samples - Python for examples of how to access different types of data within a node.
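To give a flavor of the node logic these templates wrap, below is a minimal sketch of the compute-function pattern used by OmniGraph Python nodes. The node class and attribute names here are hypothetical, and the `db` database object (normally generated from your .ogn file by the build) is stubbed with a `SimpleNamespace` so the sketch reads and runs standalone:

```python
from types import SimpleNamespace

class OgnMultiplyBy2:
    """Hypothetical OmniGraph-style node: doubles its input attribute."""

    @staticmethod
    def compute(db) -> bool:
        # Generated database objects expose attributes as db.inputs.* / db.outputs.*
        db.outputs.result = db.inputs.value * 2.0
        return True  # True signals that the compute succeeded

# Stand-in for the generated database object, for illustration only.
db = SimpleNamespace(inputs=SimpleNamespace(value=21.0), outputs=SimpleNamespace())
OgnMultiplyBy2.compute(db)
print(db.outputs.result)  # → 42.0
```

In a real extension the class name, attribute names, and types all come from the .ogn definition, and the generated database handles type checking and data access for you.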
custom-protocol-commands.md
# Custom Protocol Commands — Omniverse Launcher latest documentation ## Custom Protocol Commands Launcher supports deep linking which allows using custom URLs to point to specific Launcher screens or run various Launcher commands. Deep linking is built on top of custom protocol URLs that start with `omniverse-launcher://`. Such links can be used by emails, websites or messages to redirect users back to Launcher, or can be used by system administrators to manage installed apps. This document describes the list of all available custom protocol commands for Launcher. ### Showing a Launcher Screen Launcher supports `omniverse-launcher://navigate` command to bring up the main window and open a specific screen there. The screen is specified with the `path` query parameter, for example: - News: omniverse-launcher://navigate?path=/news The list below defines all available screens supported by this command: - News: omniverse-launcher://navigate?path=/news - Library: omniverse-launcher://navigate?path=/library - Installed app in the library: omniverse-launcher://navigate?path=/library/:slug where `:slug` should be replaced with a unique application name. - Installed connectors: omniverse-launcher://navigate?path=/library/connectors/ - Exchange: omniverse-launcher://navigate?path=/exchange - Detailed app info: omniverse-launcher://navigate?path=/exchange/app/:slug where `:slug` should be replaced with a unique application name. - Detailed connector info: omniverse-launcher://navigate?path=/exchange/connector/:slug where `:slug` should be replaced with a unique connector name. - Nucleus: omniverse-launcher://navigate?path=/collaboration ### Installing Apps `omniverse-launcher://install` command can be used to start installing an application. This command requires two query arguments: #### Query Arguments - `slug` - the unique name of the installed app or connector. - `version` - the version of the app or connector to install. 
- (optional) - the version that needs to be installed. If not specified, then the latest version is installed.
- (optional) - defines whether Launcher should throw an error if the same component is already installed (true or false; true by default).

The IT Managed Launcher supports only the `path` argument, which must point to a zip archive downloaded from the enterprise portal.

### Uninstalling Apps

This command accepts the following query arguments:

- the unique name of the installed app or connector.
- the version that needs to be uninstalled.

### Launching Apps

This command allows users to start the specified application. The launch command will start the app with the specified *slug* and will use the version that is currently selected by the user. This command requires one query argument:

- the unique name of the installed app that must be launched.

Note: Users can change their current app versions in the library settings.

### Handling `omniverse://` Links

Launcher is also registered as the default handler for `omniverse://` links. The first time such a link is opened by the user, Launcher brings up a dialog to select an Omniverse application that should be used to open `omniverse://` links by default.

### Kiosk Mode

This command can be used to run Launcher in kiosk mode. In kiosk mode, Launcher is opened fullscreen on top of other applications. This feature is only available on Windows. To disable kiosk mode, use the `omniverse-launcher://kiosk?enabled=false` command.

### Track start and exit data of apps

This command can be used to register a launch event when an app has been started. It accepts two query arguments:

- **slug** [required] - the unique name of the app or connector that has been launched
- **version** [required] - the version of the app or connector that has been launched

The **omniverse-launcher://register-exit** command can be used to register an exit event when an app has been closed. It accepts two query arguments:

- **slug** [required] - the unique name of the app or connector that has been closed
- **version** [required] - the version of the app or connector that has been closed
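Deep links like the ones above are plain URLs, so they can be constructed programmatically before being handed to the operating system. The sketch below builds such links with `urllib.parse`; the slug and version values are hypothetical, and actually opening the link is left to a platform-specific call (for example `os.startfile` on Windows or `xdg-open` on Linux):

```python
from urllib.parse import urlencode

def build_launcher_url(command: str, **params) -> str:
    """Build an omniverse-launcher:// deep link for the given command."""
    query = urlencode(params)  # percent-encodes the query values
    return f"omniverse-launcher://{command}?{query}" if query else f"omniverse-launcher://{command}"

# Hypothetical slug/version values for illustration.
print(build_launcher_url("install", slug="my-app", version="1.2.3"))
# → omniverse-launcher://install?slug=my-app&version=1.2.3
print(build_launcher_url("navigate", path="/exchange"))
# → omniverse-launcher://navigate?path=%2Fexchange
```

Note that `urlencode` percent-escapes reserved characters in query values (`%2F` for `/`), whereas the examples in this document show the unescaped form; both are syntactically valid URLs.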
CustomizeTabs.md
# Welcome screen: Customize tabs Tabs can be specified in `[[settings.app.welcome.page]]` sections. Here are default tabs: ```toml [[settings.app.welcome.page]] text = "Open" icon = "${omni.kit.welcome.window}/icons/open_inactive.png" active_icon = "${omni.kit.welcome.window}/icons/open_active.png" extension_id = "omni.kit.welcome.open" order = 0 [[settings.app.welcome.page]] text = "What's New" icon = "${omni.kit.welcome.window}/icons/whats_new_inactive.png" active_icon = "${omni.kit.welcome.window}/icons/whats_new_active.png" extension_id = "omni.kit.welcome.whats_new" order = 10 [[settings.app.welcome.page]] text = "Learn" icon = "${omni.kit.welcome.window}/icons/learn_inactive.png" active_icon = "${omni.kit.welcome.window}/icons/learn_active.png" extension_id = "omni.kit.welcome.learn" order = 20 [[settings.app.welcome.page]] text = "About" icon = "${omni.kit.welcome.window}/icons/about_inactive.png" active_icon = "${omni.kit.welcome.window}/icons/about_active.png" extension_id = "omni.kit.welcome.about" order = 40 ``` - **text**: Text for tab to show in Welcome screen - **icon** and **active_icon**: Icon for tab (normal and selected) to show in Welcome screen, along with “text” - **extension_id**: Extension to load when tab selected - **order**: Order of this tab to show in Welcome screen.
customizing-the-dialog_Overview.md
# Overview — Omniverse Kit 1.1.12 documentation

## Overview

The file_importer extension provides a standardized dialog for importing files. It is a wrapper around the `FilePickerDialog`, but with reasonable defaults for common settings, so it's a higher-level entry point to that interface. Nevertheless, users still have the ability to customize some parts, but we've boiled them down to just the essential ones.

Why you should use this extension:

- Present a consistent file import experience across the app.
- Customize only the essential parts while inheriting sensible defaults elsewhere.
- Reduce boilerplate code.
- Inherit future improvements.
- Checkpoints fully supported if available on the server.

## Quickstart

You can pop up a dialog in just two steps. First, retrieve the extension.

```python
# Get the singleton extension.
file_importer = get_file_importer()
if not file_importer:
    return
```

Then, invoke its show_window method.

```python
file_importer.show_window(
    title="Import File",
    import_handler=self.import_handler,
    # filename_url="omniverse://ov-rc/NVIDIA/Samples/Marbles/Marbles_Assets.usd",
)
```

Note that the extension is a singleton, meaning there's only one instance of it throughout the app. Basically, we are assuming that you'd never open more than one instance of the dialog at any one time. The advantage is that we can channel any development through this single extension and all users will inherit the same changes.

## Customizing the Dialog

You can customize these parts of the dialog.

- Title - The title of the dialog.
- Collections - Which of these collections, ["bookmarks", "omniverse", "my-computer"], to display.
- Filename Url - Url of the file to import.
- Postfix options - Show only files of these content types.
- Extension options - Show only files with these filename extensions.
- Import label - Label for the import button.
- Import handler - User-provided callback to handle the import process.
Note that these settings are applied when you show the window. Therefore, each time it's displayed, the dialog can be tailored to the use case.

## Filter files by type

The user has the option to filter which files get shown in the list view. One challenge of working in Omniverse is that everything is a USD file. An expected use case is to show only files of a particular content type. To facilitate this workflow, we suggest adding a postfix to the filename, e.g. "file.animation.usd". The file bar contains a dropdown that lists the default postfix labels, so you can filter by these. You have the option to override this list.

You can also filter by filename extension. By default, we provide the option to show only USD files.

If you override either of the lists above, then you'll also need to provide a filter handler. The handler is called to decide whether or not to display a given file. The default handler is shown below as an example.

```python
import os

def default_filter_handler(filename: str, filter_postfix: str, filter_ext: str) -> bool:
    """
    Show only files whose names end with: *<postfix>.<ext>.

    Args:
        filename (str): The item's file name.
        filter_postfix (str): The filter postfix that the file name should match.
        filter_ext (str): The filter extension that the file name should match.

    Returns:
        True if the file should be shown in the dialog. Otherwise False.
    """
    if not filename:
        return True
    # Show only files whose names end with: *<postfix>.<ext>
    if filter_ext:
        # split comma separated string into a list:
        filter_exts = filter_ext.split(",") if isinstance(filter_ext, str) else filter_ext
        filter_exts = [x.replace(" ", "") for x in filter_exts]
        filter_exts = [x for x in filter_exts if x]
        # check if the file extension matches anything in the list:
        if not (
            "*.*" in filter_exts
            or any(filename.endswith(f.replace("*", "")) for f in filter_exts)
        ):
            # match failed:
            return False
    if filter_postfix:
        # strip extension and check postfix:
        filename = os.path.splitext(filename)[0]
        return filename.endswith(filter_postfix)
    return True
```

## Import options

A common need is to provide user options for the import process. You create the widget for accepting those inputs, then add it to the details pane of the dialog. Do this by subclassing from `ImportOptionsDelegate` and overriding the methods `ImportOptionsDelegate._build_ui_impl()` and (optionally) `ImportOptionsDelegate._destroy_impl()`.

```python
class MyImportOptionsDelegate(ImportOptionsDelegate):
    def __init__(self):
        super().__init__(build_fn=self._build_ui_impl, destroy_fn=self._destroy_impl)
        self._widget = None

    def _build_ui_impl(self):
        self._widget = ui.Frame()
        with self._widget:
            with ui.VStack():
                with ui.HStack(height=24, spacing=2, style={"background_color": 0xFF23211F}):
                    ui.Label("Prim Path", width=0)
                    ui.StringField().model = ui.SimpleStringModel()
                ui.Spacer(height=8)

    def _destroy_impl(self, _):
        if self._widget:
            self._widget.destroy()
        self._widget = None
```

Then provide the delegate to the file picker for display.

```python
self._import_options = MyImportOptionsDelegate()
file_importer.add_import_options_frame("Import Options", self._import_options)
```

## Import handler

Provide a handler for when the Import button is clicked. The handler should expect a list of `selections` made from the UI.
```python
def import_handler(self, filename: str, dirname: str, selections: List[str] = []):
    # NOTE: Get user inputs from self._import_options, if needed.
    print(f"> Import '{filename}' from '{dirname}' or selected files '{selections}'")
```

## Demo app

A complete demo, which includes the code snippets above, is included with this extension at Python.
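As a quick sanity check of the postfix/extension matching described above, the core of the default filter logic can be exercised standalone. This is a simplified re-implementation with no Kit dependencies, not the extension's actual code:

```python
import os

def matches_filter(filename: str, filter_postfix: str, filter_ext: str) -> bool:
    """Standalone re-implementation of the default filter logic shown above."""
    if not filename:
        return True
    if filter_ext:
        # accept comma-separated extension patterns such as "*.usd, *.usda"
        exts = [x.strip() for x in filter_ext.split(",") if x.strip()]
        if not ("*.*" in exts or any(filename.endswith(e.replace("*", "")) for e in exts)):
            return False
    if filter_postfix:
        # strip the extension, then check the content-type postfix
        stem = os.path.splitext(filename)[0]
        return stem.endswith(filter_postfix)
    return True

print(matches_filter("file.animation.usd", ".animation", "*.usd"))  # → True
print(matches_filter("file.material.usd", ".animation", "*.usd"))   # → False
print(matches_filter("file.animation.png", ".animation", "*.usd"))  # → False
```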
customizing-the-prompt_Overview.md
# Overview

The widget extension provides a simple prompt dialog. Users have the ability to customize its buttons.

## Quickstart

```python
prompt = Prompt("title", "message to user", on_closed_fn=lambda: print("prompt close"))
prompt.show()
```

## Customizing the Prompt

You can customize these parts of the Prompt:

- title: Text appearing in the titlebar of the window.
- text: Text of the question being posed to the user.
- ok_button_text: Text for the first button.
- cancel_button_text: Text for the last button.
- middle_button_text: Text for the middle button.
- middle_2_button_text: Text for the second middle button.
- ok_button_fn: Function executed when the first button is pressed.
- cancel_button_fn: Function executed when the last button is pressed.
- middle_button_fn: Function executed when the middle button is pressed.
- middle_2_button_fn: Function executed when the second middle button is pressed.
- modal: True if the window is modal, shutting down other UI until an answer is received.
- on_closed_fn: Function executed when the window is closed without hitting a button.
- shortcut_keys: Whether the prompt can be confirmed or hidden with shortcut keys like Enter or ESC.
- width: The specified width.
- height: The specified height.

## Example

```python
from omni.kit.widget.prompt import Prompt

folder_exist_popup = None

def on_confirm():
    print("overwrite the file")

def on_cancel():
    global folder_exist_popup
    folder_exist_popup.hide()
    folder_exist_popup = None

folder_exist_popup = Prompt(
    title="Overwrite",
    text="The file already exists, are you sure you want to overwrite it?",
    ok_button_text="Overwrite",
    cancel_button_text="Cancel",
    ok_button_fn=on_confirm,
    cancel_button_fn=on_cancel,
)
folder_exist_popup.show()
```
damage-application_structcarb_1_1blast_1_1_blast.md
# carb::blast::Blast

Defined in [Blast.h](#file-blast-h)

## structcarb::blast::Blast

Plugin interface for the omni.blast extension.

### Destructible Authoring Commands

```cpp
bool (*combinePrims)(
    const char** paths,
    size_t numPaths,
    float defaultContactThreshold,
    const carb::blast::DamageParameters* damageParameters,
    float defaultMaxContactImpulse);
```

**Main entry point to combine existing prims into a single destructible.**

**Param paths** - [in] Full USD paths to prims that should be combined.
**Param numPaths** - [in] How many prims are in the paths array.
**Param defaultContactThreshold** - [in] How hard the prim needs to be hit to register damage during simulation.
**Param damageParameters** - [in] See DamageParameters description.
**Param defaultMaxContactImpulse** - [in] How much force can be used to push other prims away during a collision. For kinematic prims only; used to allow heavy objects to continue moving through brittle destructible prims.
**Return** - true iff the prims were combined successfully.

### fracturePrims

```cpp
const char* (*fracturePrims)(
    const char** paths,
    size_t numPaths,
    const char* defaultInteriorMaterial,
    uint32_t numVoronoiSites,
    float defaultContactThreshold,
    DamageParameters* damageParameters);
```

Main entry point to fracture an existing prim.

**Param paths** - [in] Full USD path(s) to prim(s) that should be fractured. They need to all be part of the same destructible if there is more than one.
**Param numPaths** - [in] How many prims are in the paths array.
**Param defaultInteriorMaterial** - [in] Material to set on newly created interior faces. (Ignored when re-fracturing and an existing interior material is found.)
**Param numVoronoiSites** - [in] How many pieces to split the prim into.
**Param defaultContactThreshold** - [in] How hard the prim needs to be hit to register damage during simulation.
- **Param damageParameters** - [in] See [DamageParameters](structcarb_1_1blast_1_1_damage_parameters.html#structcarb_1_1blast_1_1_damage_parameters) description.
- **Param defaultMaxContactImpulse** - [in] How much force can be used to push other prims away during a collision. For kinematic prims only; used to allow heavy objects to continue moving through brittle destructible prims.
- **Param interiorUvScale** - [in] Scale to apply to the UV frame when mapping to interior face vertices.
- **Return** - the path to the new prim if the source prim was fractured successfully, nullptr otherwise.

Set the random number generator seed for fracture operations.

- **Param seed** - [in] The new seed.

Reset the [Blast](#structcarb_1_1blast_1_1_blast) data (partial or full hierarchy) starting at the given path. The destructible will be rebuilt with only appropriate data remaining.

- **Param path** - [in] The path to a chunk, instance, or base destructible prim.
- **Return** - true iff the operation could be performed on the prim at the given path.

### Function: createExternalAttachment

Modify a blast asset stored in the destructible at the given path so that support chunks which touch static geometry are bound to the world. All previous world bonds will be removed.

Returns true if the destructible's NvBlastAsset was modified, but note this is not "if and only if": if world bonds are removed and replaced with the exact same world bonds (e.g. the blast mesh was not moved since the last time this function was called), this function will still return true. Note also that if path == NULL, this function always returns true.

- **Param path** - [in] The USD path of the blast container.
- **Param defaultMaxContactImpulse** - [in] Controls how much force physics can use to stop bodies from penetrating.
- **Param relativePadding** - [in] A relative amount to grow chunk bounds when calculating world attachment.
- **Return** - true if the destructible's NvBlastAsset was modified (or if path == NULL).

### Function: removeExternalAttachment

Remove all external bonds from the given blast asset.

- **Param path** - [in] The USD path of the blast container.
- **Return** - true if the destructible's NvBlastAsset was modified (or if path == NULL).

### Function: recalculateBondAreas

Recalculates the areas of bonds. This may be used when a destructible is scaled.

- **Param path** - [in] Path to the chunk, instance, or base destructible prim.
- **Return** - true iff the operation was successful.

### Function: selectChildren

Finds all children of the chunks in the given paths, and sets kit's selection set to the paths of those children.

- **Param paths** - [in] Full USD path(s) to chunks.
- **Param numPaths** - [in] How many paths are in the paths array.
- **Return** - true iff the operation was successful.

### Function: selectParent

Finds all parents of the chunks in the given paths, and sets kit's selection set to the paths of those parents.

- **Param paths** - [in] Full USD path(s) to chunks.
- **Param numPaths** - [in] How many paths are in the paths array.
- **Return** - true iff the operation was successful.

### Function: selectSource

Finds all source meshes for chunks in the given paths, and sets kit's selection set to the paths of those meshes.

- **Param paths** - [in] Full USD path(s) to chunks.
- **Param numPaths** - [in] How many paths are in the paths array.
- **Return** - true iff the operation was successful.

### Function: setInteriorMaterial

Sets the material for the interior facets of the chunks at the given paths.

- **Param paths** - [in] Full USD path(s) to chunks.
- **Param numPaths** - [in] How many paths are in the paths array.
- **Param interiorMaterial** - [in] The material to set for the interior facets.

Gets the interior material of the meshes at the given paths.

- **Param paths** - [in] Full USD path(s) to chunks.
- **Param numPaths** - [in] How many paths are in the paths array.
- **Return** - the material path if all meshes found at the given paths have the same interior material. If more than one interior material is found among the meshes, the empty string ("") is returned. If no interior material is found, nullptr is returned.

Recalculates UV coordinates for the interior facets of chunk meshes based upon the UV scale factor given. If the path given is a chunk, UVs will be recalculated for the chunk's meshes. If the path is an instance or base prim, all chunk meshes will have their interior facets' UVs recalculated.

- **Param path** - [in] Path to the chunk, instance, or base destructible prim.
- **Param interiorUvScale** - [in] The scale to use to calculate UV coordinates. A value of 1 will cause the texture to map to a region in space roughly the size of the whole destructible's largest width.
- **Return** - true iff the operation was successful.

### Function: createDestructibleInstance

```cpp
void createDestructibleInstance(const char *path, const DamageParameters *damageParameters, float defaultContactThreshold, float defaultMaxContactImpulse)
```

Creates a destructible instance with default values from the given destructible base.

- **Param path** - [in] Path to the destructible base to instance.
- **Param damageParameters** - [in] The damage characteristics to assign to the instance (see DamageParameters).
- **Param defaultContactThreshold** - [in] Rigid body parameter to apply to actors generated by the instance. The minimum impulse required for a rigid body to generate a contact event, needed for impact damage.
- **Param defaultMaxContactImpulse** - [in] Rigid body parameter to apply to actors generated by the instance.
The maximum impulse that a contact constraint on a kinematic rigid body can impart on a colliding body.

### Function: setSimulationParams

```cpp
void setSimulationParams(int32_t maxNewActorsPerFrame)
```

Sets the maximum number of actors which will be generated by destruction each simulation frame.

- **Param maxNewActorsPerFrame** - [in] The maximum number of actors generated per frame.

### Function: createDamageEvent

```cpp
void createDamageEvent(const char *hitPrimPath, DamageEvent *damageEvents, size_t numDamageEvents);
```

Create a destruction event during simulation.

- **Param hitPrimPath** - [in] The full path to the prim to be damaged (may be a blast actor prim or its collision shape).
- **Param damageEvents** - [in] An array of `DamageEvent` structs describing the damage to be applied.
- **Param numDamageEvents** - [in] The size of the damageEvents array.

### Function: setExplodeViewRadius

```cpp
void setExplodeViewRadius(const char *path, float radius);
```

Set the cached explode view radius for the destructible prim associated with the given path. The prim must have DestructionSchemaDestructibleInstAPI applied. The instance will be rendered with its chunks pushed apart by the radius value.

- **Param path** - [in] Full USD path to a destructible instance.
- **Param radius** - [in] The distance to move apart the instance's rendered chunks.

Gives the cached explode view radius for the destructible instances associated with the given paths, if the cached value for all instances is the same.

- **Param paths** - [in] Array of USD paths to destructible instances.
- **Param numPaths** - [in] The length of the paths array.
- **Return** - the cached explode view radius for all valid destructible instances at the given paths, if that value is the same for all instances. If there is more than one radius found, this function returns -1.0f. If no valid instances are found, this function returns 0.0f.

Calculate the maximum depth for all chunks in the destructible prims associated with the given paths.
- **Param paths** - [in] Array of USD paths to destructible prims.
- **Param numPaths** - [in] The length of the paths array.
- **Return** - the maximum chunk depth for all destructibles associated with the given paths. Returns 0 if no destructibles are found.

### Function: getViewDepth

Calculates what the view depth should be, factoring in the internal override if set.

- **Param paths** - [in] Array of USD paths to destructible prims.
- **Param numPaths** - [in] The length of the paths array.
- **Return** - what the view depth should be.

### Function: setViewDepth

Set the view depth for explode view functionality.

- **Param paths** - [in] Array of USD paths to destructible prims.
- **Param numPaths** - [in] The length of the paths array.
- **Param depth** - [in] Either a string representation of the numerical depth value, or "Leaves" to view leaf chunks.

### Function: setDebugVisualizationInfo

Set the debug visualization mode and type. If mode != debugVisNone, an anonymous USD layer is created which overrides the render meshes for blast objects which are being visualized.

- **Param mode** - [in] Supported modes: "debugVisNone", "debugVisSelected", "debugVisAll"
- **Param type** - [in] Supported types: "debugVisSupportGraph", "debugVisMaxStressGraph", "debugVisCompressionGraph", "debugVisTensionGraph", "debugVisShearGraph", "debugVisBondPatches"
- **Return** - true iff a valid mode is selected.

### Debug Damage Functions

#### Set Debug Damage Params

Set parameters for the debug damage tool in kit. This is applied using Shift + B + (Left Mouse). A ray is cast from the camera position through the screen point of the mouse cursor and intersected with scene geometry. The intersection point is used to find nearby destructibles to damage.
- **Param amount** - [in] The base damage to be applied to each destructible, as an acceleration in m/s^2.
- **Param impulse** - [in] An impulse to apply to rigid bodies within the given radius, in kg*m/s. (This applies to non-destructible rigid bodies too.)
- **Param radius** - [in] The distance in meters from the ray hit point to search for rigid bodies to affect with this function.

#### Apply Debug Damage

Apply debug damage at the position given, in the direction given. The damage parameters set by setDebugDamageParams will be used.

- **Param worldPosition** - [in] The world position at which to apply debug damage.
- **Param worldDirection** - [in] The world direction of the applied damage.

### Notice Handler Functions

These can be called at any time to enable or disable notice handler monitoring. When enabled, use BlastUsdMonitorNoticeEvents to catch unbuffered Usd/Sdf commands. It will be automatically cleaned up on system shutdown if enabled.

- **blastUsdEnableNoticeHandlerMonitor()**
- **blastUsdDisableNoticeHandlerMonitor()**

### Destructible Path Utilities

These functions find destructible base or instance prims from any associated prim path.

#### getDestructibleBasePath

```cpp
const char* getDestructibleBasePath(const char* path)
```

- **Param path** - [in] Any path associated with a destructible base prim.
- **Return** - the destructible prim's path if found, or nullptr otherwise.

#### getDestructibleInstancePath

```cpp
const char* getDestructibleInstancePath(const char* path)
```

- **Param path** - [in] Any path associated with a destructible instance prim.
- **Return** - the destructible prim's path if found, or nullptr otherwise.

### Blast SDK Cache

This function pushes the Blast SDK data that is used during simulation back to USD so it can be saved and later restored in the same state. This is also the state that will be restored to when sim stops.
```cpp
void blastCachePushBinaryDataToUSD()
```

### Blast Stress

This function modifies settings used to drive stress calculations during simulation.

```cpp
bool blastStressUpdateSettings(const char* path, bool gravityEnabled, bool rotationEnabled,
                               float residualForceMultiplier, const StressSolverSettings& settings)
```

- **Param path** - [in] Any path associated with a destructible instance prim.
- **Param gravityEnabled** - [in] Controls whether gravity should be applied to stress simulation of the destructible instance.
- **Param rotationEnabled** - [in] Controls whether rotational acceleration should be applied to stress simulation of the destructible instance.
- **Param residualForceMultiplier** - [in] Multiplies the residual forces on bodies after connecting bonds break.
- **Param settings** - [in] Values used to control the stress solver.
- **Return** - true if stress settings were updated, false otherwise.
# Omniverse Data Collection & Use FAQ

NVIDIA Omniverse Enterprise is a simple-to-deploy, end-to-end collaboration and true-to-reality simulation platform that fundamentally transforms complex design workflows for organizations of any scale.

In order to improve the product, Omniverse software collects usage and performance behavior. When an enterprise manages its Omniverse deployment via the IT managed launcher, the IT admin is responsible for configuring the data collection setting. If consent is provided, data is collected in an aggregate manner at the enterprise account level. Individual user data is completely anonymized.

## Frequently Asked Questions

Q: What data is being collected and how is it used?

A: Omniverse collects usage data when you install and start interacting with our platform technologies. The data we collect and how we use it are as follows:

- Installation and configuration details, such as the version of the operating system and the applications installed. This information allows us to recognize usage trends and patterns.
- Identifiers, such as your unique NVIDIA Enterprise Account ID (org-name) and Session ID, which allow us to recognize software usage trends and patterns.
- Hardware details, such as CPU, GPU, and monitor information. This information allows us to optimize settings in order to provide the best performance.
- Product session and feature usage. This information allows us to understand the user journey and product interaction to further enhance workflows.
- Error and crash logs. This information allows us to improve performance and stability for troubleshooting and diagnostic purposes of our software.

Q: Does NVIDIA collect personal information such as email ID, name, etc.?

A: When an enterprise manages its Omniverse deployment via the IT managed launcher, the IT admin is responsible for configuring the data collection setting. If consent is provided, data is collected in an aggregate manner at the enterprise account level. Individual user data is completely anonymized.
Q: How can I change my data collection setting - opt in to data collection?

A: NVIDIA provides full flexibility for an enterprise to opt in to data collection. In the .config folder there is a privacy.toml file in which the setting can be set to "true". For detailed instructions, review the appropriate installation guide:

- Installation Guide for Windows
- Installation Guide for Linux

Q: How can I change my data collection setting - opt out of data collection?

A: NVIDIA provides full flexibility for an enterprise to opt out of data collection. In the .config folder there is a privacy.toml file in which the setting can be set to "false". For detailed instructions, review the appropriate installation guide:

- Installation Guide for Windows
- Installation Guide for Linux

Q: How can I request the data Omniverse Enterprise has collected?

A: If you are an Enterprise customer, please file a support ticket on the NVIDIA Enterprise Portal. If any data was collected, NVIDIA will provide all data collected for your organization within 30 days.

Q: How will Omniverse collect data in a scenario where my enterprise is firewalled with no Internet access?

A: No data will be collected in a firewalled scenario.
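As an illustration of what such a consent file can look like, a privacy.toml granting consent might read as follows. The key names here are an assumption for illustration only; verify them against the installation guide for your Omniverse version before deploying.

```toml
# privacy.toml -- illustrative sketch; confirm key names against your
# version's installation guide before relying on them.
[privacy]
performance = true      # performance metrics
personalization = true  # personalization data
usage = true            # session and feature-usage events
```

Setting a value to false opts the deployment out of that category of collection.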
# Overview

## Overview

Effectively testing OmniGraph requires access to nodes and scripts that are not necessary for the graph to operate correctly. In order to keep such unnecessary files out of the core extensions, yet still have nodes and files explicitly for testing, all of that functionality has been broken out into this extension.

## Dependencies

As the purpose of this extension is to provide testing facilities for all of OmniGraph, it has load dependencies on all of the `omni.graph.*` extensions. If any new ones are added they should be added to the dependencies in the file `config/extension.toml`.

## Data Files

Three types of data files are accessed in this extension:

1. Generic data files, created in other extensions for use by the user (e.g. compound node definitions)
2. Example files, created to illustrate how to use certain nodes but not intended for general use
3. Test files, used only for the purpose of loading to test certain features

The `data/` subdirectories in this extension contain the last of those three. The other files live in the lowest-level extension in which they are legal (e.g. if they contain a node from `omni.graph.nodes` then they will live in that extension). As this extension has dependencies on all of the OmniGraph extensions it has access to all of their data files as well.

## Node Files

Most nodes will come from other extensions. Some nodes are created explicitly for testing purposes. These appear in this extension and should not be used for any other purpose.

### Import Example

This simple example shows how the test files from the `omni.graph.examples.python` extension were imported and enabled in this extension.
The first step was to move the required files into the directory tree:

```
omni.graph.test/
└── python/
    └── tests/
        ├── test_omnigraph_simple.py
        └── data/
            ├── TestEventTrigger.usda
            └── TestExecutionConnections.usda
```

**Note:** The two .usda files contain only nodes from the `omni.graph.examples.python` extension and are solely used for test purposes. That is why they could be moved into the extension's test directory.

Next, the standard automatic test detection file was added at `omni.graph.test/python/tests/__init__.py`:

```python
"""There is no public API to this module."""

__all__ = []

scan_for_test_modules = True
"""The presence of this object causes the test runner to automatically scan the directory for unit test cases"""
```

Finally, the `config/extension.toml` had additions made to inform it of the dependency on the new extension:

```toml
[package]
version = "0.79.1"
title = "OmniGraph Regression Testing"
category = "Graph"
readme = "docs/README.md"
changelog = "docs/CHANGELOG.md"
description = "Contains test scripts and files used to test the OmniGraph extensions where the tests cannot live in a single extension."
keywords = ["kit", "omnigraph", "tests"]
python.import_mode = "ParallelThread"
preview_image = "data/preview.png"
icon = "data/icon.svg"
writeTarget.kit = true
support_level = "Enterprise"

# Main module for the Python interface
[[python.module]]
name = "omni.graph.test"

[[native.plugin]]
path = "bin/*.plugin"
recursive = false

# Watch the .ogn files for hot reloading (only works for Python files)
[fswatcher.patterns]
include = ["*.ogn", "*.py"]
exclude = ["Ogn*Database.py"]

# The bare minimum of dependencies required for bringing up the extension
[dependencies]
"omni.graph.core" = { version = "2.177.1" }
"omni.graph" = { version = "1.139.0" }
"omni.graph.tools" = { version = "1.77.0" }

[[test]]
timeout = 1800
# Other extensions that need to load in order for this one to work.
# This list deliberately omits omni.graph and omni.graph.tools to ensure that extensions that rely on recursive
# dependencies on OmniGraph work properly.
dependencies = [
    "omni.kit.pipapi",
    "omni.kit.ui_test",
    "omni.kit.usd.layers",
    "omni.graph.examples.cpp",
    "omni.graph.examples.python",
    "omni.graph.nodes",
    "omni.graph.tutorials",
    "omni.graph.action",
    "omni.graph.scriptnode",
    "omni.inspect",
    "omni.usd",
    "omni.kit.stage_template.core",
    "omni.kit.primitive.mesh",
]

stdoutFailPatterns.exclude = [
    # Exclude carb.events leak that only shows up locally
    "*[Error] [carb.events.plugin]*PooledAllocator*",
    # Exclude messages which say they should be ignored
    "*Ignore this error/warning*",
    "*Types: unknown and rel*",  # OM-86183
]

pythonTests.unreliable = [
    "*test_change_pipeline_stage*",  # OM-66115
    "*test_read_prim_attribute_nodes_in_non_instanced_lazy_graphs*",  # OM-120024
    "*test_action_compounds*",  # OM-120545
    "*test_recursive_graph_execution*",  # OM-120609
    "*test_read_prim_attribute_nodes_in_instanced_lazy_graphs*",  # OM-120675
    "*test_dirty_push_time_change*",  # OM-120536
    "*test_read_time_nodes_in_non_instanced_lazy_graphs*",  # OM-120536
    "*test_read_time_nodes_in_instanced_lazy_graphs*",  # OM-120536
    "*test_evaluator_type_changed_from_usd*",  # OM-120536
]

args = [
    "--no-window"
]

[documentation]
pages = [
    "docs/Overview.md",
    "docs/CHANGELOG.md",
]
```
# Data Types — Omniverse Kit 1.140.0 documentation ## Data Types The Python module `omni.graph.core.types` contains definitions for Python type annotations that correspond to all of the data types used by Omnigraph. The annotation can be used to check that data extracted from the OmniGraph Python APIs for retrieving attribute values have the correct types. This table shows the relationships between the attribute type as you might see it in a .ogn file, the corresponding Python type annotation to use in function and variable declarations, and the underlying data type that is returned from Python APIs that retrieve values from attributes with those corresponding OGN data types. ```markdown | .ogn Type Definition | Type annotation | Python Data Type | |----------------------|-----------------|------------------| | any | omni.graph.core.types.any | any | | bool | omni.graph.core.types.bool | bool | | bool[] | omni.graph.core.types.boolarray | numpy.ndarray(shape=(N,), dtype=numpy.bool) | | bundle | omni.graph.core.types.bundle | omni.graph.core.BundleContents | | colord[3] | omni.graph.core.types.color3d | numpy.ndarray(shape=(3,), dtype=numpy.float64) | | colord[3][] | omni.graph.core.types.color3darray | numpy.ndarray(shape=(N,3), dtype=numpy.float64) | | colord[4] | omni.graph.core.types.color4d | numpy.ndarray(shape=(4,), dtype=numpy.float64) | | colord[4][] | omni.graph.core.types.color4darray | numpy.ndarray(shape=(N,4), dtype=numpy.float64) | | colorf[3] | omni.graph.core.types.color3f | numpy.ndarray(shape=(3,), dtype=numpy.float32) | | colorf[3][] | omni.graph.core.types.color3farray | numpy.ndarray(shape=(N,3), dtype=numpy.float32) | ``` ```markdown Note: The above table represents the mapping between .ogn type definitions, their corresponding Python type annotations, and the actual Python data types used. This is crucial for ensuring the correct handling and interpretation of data within the Omnigraph system. 
| | | | | --- | --- | --- | | **numpy.ndarray(shape=(N,3), dtype=numpy.float32)** | | | | **colorf[4]** | omni.graph.core.types.color4f | numpy.ndarray(shape=(4,), dtype=numpy.float32) | | **colorf[4][]** | omni.graph.core.types.color4farray | numpy.ndarray(shape=(N,4), dtype=numpy.float32) | | **colorh[3]** | omni.graph.core.types.color3h | numpy.ndarray(shape=(3,), dtype=numpy.float16) | | **colorh[3][]** | omni.graph.core.types.color3harray | numpy.ndarray(shape=(N,3), dtype=numpy.float16) | | **colorh[4]** | omni.graph.core.types.color4h | numpy.ndarray(shape=(4,), dtype=numpy.float16) | | **colorh[4][]** | omni.graph.core.types.color4harray | numpy.ndarray(shape=(N,4), dtype=numpy.float16) | | **double** | omni.graph.core.types.double | float | | **double[]** | omni.graph.core.types.doublearray | numpy.ndarray(shape=(N,), dtype=numpy.float64) | | **double[2]** | omni.graph.core.types.double2 | numpy.ndarray(shape=(2,), dtype=numpy.float64) | | **double[2][]** | omni.graph.core.types.double2array | numpy.ndarray(shape=(N,2), dtype=numpy.float64) | | **double[3]** | omni.graph.core.types.double3 | numpy.ndarray(shape=(3,), dtype=numpy.float64) | | **double[3][]** | omni.graph.core.types.double3array | numpy.ndarray(shape=(N,3), dtype=numpy.float64) | | **double[4]** | omni.graph.core.types.double4 | numpy.ndarray(shape=(4,), dtype=numpy.float64) | | **double[4][]** | omni.graph.core.types.double4array | numpy.ndarray(shape=(N,4), dtype=numpy.float64) | | **execution** | omni.graph.core.types.execution | int | | **float** | omni.graph.core.types.float | float | | **float[]** | omni.graph.core.types.floatarray | numpy.ndarray(shape=(N,), dtype=numpy.float32) | | **float[2]** | omni.graph.core.types.float2 | numpy.ndarray(shape=(2,), dtype=numpy.float32) | | **float[2][]** | omni.graph.core.types.float2array | numpy.ndarray(shape=(N,2), dtype=numpy.float32) | | **float[3]** | omni.graph.core.types.float3 | numpy.ndarray(shape=(3,), dtype=numpy.float32) | | 
**float[3][]** | omni.graph.core.types.float3array | numpy.ndarray(shape=(N,3), dtype=numpy.float32) | | **float[4]** | omni.graph.core.types.float4 | numpy.ndarray(shape=(4,), dtype=numpy.float32) | | HTML Content | Markdown Content | |--------------|------------------| | float[4][] | `float[4][]` | | omni.graph.core.types.float4array | `omni.graph.core.types.float4array` | | numpy.ndarray(shape=(N,4), dtype=numpy.float32) | `numpy.ndarray(shape=(N,4), dtype=numpy.float32)` | | frame[4] | `frame[4]` | | omni.graph.core.types.frame4d | `omni.graph.core.types.frame4d` | | numpy.ndarray(shape=(4,4), dtype=numpy.float64) | `numpy.ndarray(shape=(4,4), dtype=numpy.float64)` | | frame[4][] | `frame[4][]` | | omni.graph.core.types.frame4darray | `omni.graph.core.types.frame4darray` | | numpy.ndarray(shape=(N,4,4), dtype=numpy.float64) | `numpy.ndarray(shape=(N,4,4), dtype=numpy.float64)` | | half | `half` | | omni.graph.core.types.half | `omni.graph.core.types.half` | | float | `float` | | half[] | `half[]` | | omni.graph.core.types.halfarray | `omni.graph.core.types.halfarray` | | numpy.ndarray(shape=(N,), dtype=numpy.float16) | `numpy.ndarray(shape=(N,), dtype=numpy.float16)` | | half[2] | `half[2]` | | omni.graph.core.types.half2 | `omni.graph.core.types.half2` | | numpy.ndarray(shape=(2,), dtype=numpy.float16) | `numpy.ndarray(shape=(2,), dtype=numpy.float16)` | | half[2][] | `half[2][]` | | omni.graph.core.types.half2array | `omni.graph.core.types.half2array` | | numpy.ndarray(shape=(N,2), dtype=numpy.float16) | `numpy.ndarray(shape=(N,2), dtype=numpy.float16)` | | half[3] | `half[3]` | | omni.graph.core.types.half3 | `omni.graph.core.types.half3` | | numpy.ndarray(shape=(3,), dtype=numpy.float16) | `numpy.ndarray(shape=(3,), dtype=numpy.float16)` | | half[3][] | `half[3][]` | | omni.graph.core.types.half3array | `omni.graph.core.types.half3array` | | numpy.ndarray(shape=(N,3), dtype=numpy.float16) | `numpy.ndarray(shape=(N,3), dtype=numpy.float16)` | | half[4] | 
`half[4]` | | omni.graph.core.types.half4 | `omni.graph.core.types.half4` | | numpy.ndarray(shape=(4,), dtype=numpy.float16) | `numpy.ndarray(shape=(4,), dtype=numpy.float16)` | | half[4][] | `half[4][]` | | omni.graph.core.types.half4array | `omni.graph.core.types.half4array` | | numpy.ndarray(shape=(N,4), dtype=numpy.float16) | `numpy.ndarray(shape=(N,4), dtype=numpy.float16)` | | int | `int` | | omni.graph.core.types.int | `omni.graph.core.types.int` | | int[] | `int[]` | | omni.graph.core.types.intarray | `omni.graph.core.types.intarray` | | numpy.ndarray(shape=(N,), dtype=numpy.int32) | `numpy.ndarray(shape=(N,), dtype=numpy.int32)` | | int[2] | `int[2]` | | omni.graph.core.types.int2 | `omni.graph.core.types.int2` | | numpy.ndarray(shape=(2,), dtype=numpy.int32) | `numpy.ndarray(shape=(2,), dtype=numpy.int32)` | | int[2][] | `int[2][]` | | omni.graph.core.types.int2array | `omni.graph.core.types.int2array` | | numpy.ndarray(shape=(N,2), dtype=numpy.int32) | `numpy.ndarray(shape=(N,2), dtype=numpy.int32)` | | int[3] | `int[3]` | | omni.graph.core.types.int3 | `omni.graph.core.types.int3` | | numpy.ndarray(shape=(3,), dtype=numpy.int32) | `numpy.ndarray(shape=(3,), dtype=numpy.int32)` | | int[3][] | `int[3][]` | | omni.graph.core.types.int3array | `omni.graph.core.types.int3array` | | numpy.ndarray(shape=(N,3), dtype=numpy.int32) | `numpy.ndarray(shape=(N,3), dtype=numpy.int32)` | | int[4] | `int[4]` | | omni.graph.core.types.int4 | `omni.graph.core.types.int4` | | numpy.ndarray(shape=(4,), dtype=numpy.int32) | `numpy.ndarray(shape=(4,), dtype=numpy.int32)` | | int[4][] | `int[4][]` | | omni.graph.core.types.int4array | `omni.graph.core.types.int4array` | | numpy.ndarray(shape=(N,4), dtype=numpy.int32) | `numpy.ndarray(shape=(N,4), dtype=numpy.int32)` | | int64 | `int64` | | omni.graph.core.types.int64 | `omni.graph.core.types.int64` | | int64[] | `int64[]` | | omni.graph.core.types.int64array | `omni.graph.core.types.int64array` | | numpy.ndarray(shape=(N,), 
dtype=numpy.int64) |
| matrixd[2] | omni.graph.core.types.matrix2d | numpy.ndarray(shape=(2,2), dtype=numpy.float64) |
| matrixd[2][] | omni.graph.core.types.matrix2darray | numpy.ndarray(shape=(N,2,2), dtype=numpy.float64) |
| matrixd[3] | omni.graph.core.types.matrix3d | numpy.ndarray(shape=(3,3), dtype=numpy.float64) |
| matrixd[3][] | omni.graph.core.types.matrix3darray | numpy.ndarray(shape=(N,3,3), dtype=numpy.float64) |
| matrixd[4] | omni.graph.core.types.matrix4d | numpy.ndarray(shape=(4,4), dtype=numpy.float64) |
| matrixd[4][] | omni.graph.core.types.matrix4darray | numpy.ndarray(shape=(N,4,4), dtype=numpy.float64) |
| normald[3] | omni.graph.core.types.normal3d | numpy.ndarray(shape=(3,), dtype=numpy.float64) |
| normald[3][] | omni.graph.core.types.normal3darray | numpy.ndarray(shape=(N,3), dtype=numpy.float64) |
| normalf[3] | omni.graph.core.types.normal3f | numpy.ndarray(shape=(3,), dtype=numpy.float32) |
| normalf[3][] | omni.graph.core.types.normal3farray | numpy.ndarray(shape=(N,3), dtype=numpy.float32) |
| normalh[3] | omni.graph.core.types.normal3h | numpy.ndarray(shape=(3,), dtype=numpy.float16) |
| normalh[3][] | omni.graph.core.types.normal3harray | numpy.ndarray(shape=(N,3), dtype=numpy.float16) |
| objectId | omni.graph.core.types.objectid | int |
| objectId[] | omni.graph.core.types.objectidarray | numpy.ndarray(shape=(N,), dtype=numpy.uint64) |
| path | omni.graph.core.types.path | list[usdrt::SdfPath] |
| pointd[3] | omni.graph.core.types.point3d | numpy.ndarray(shape=(3,), dtype=numpy.float64) |
| pointd[3][] | omni.graph.core.types.point3darray | numpy.ndarray(shape=(N,3), dtype=numpy.float64) |
| pointf[3] | omni.graph.core.types.point3f | numpy.ndarray(shape=(3,), dtype=numpy.float32) |
| pointf[3][] | omni.graph.core.types.point3farray | numpy.ndarray(shape=(N,3), dtype=numpy.float32) |
| pointh[3] | omni.graph.core.types.point3h | numpy.ndarray(shape=(3,), dtype=numpy.float16) |
| pointh[3][] | omni.graph.core.types.point3harray | numpy.ndarray(shape=(N,3), dtype=numpy.float16) |
| quatd[4] | omni.graph.core.types.quatd | numpy.ndarray(shape=(4,), dtype=numpy.float64) |
| quatd[4][] | omni.graph.core.types.quatdarray | numpy.ndarray(shape=(N,4), dtype=numpy.float64) |
| quatf[4] | omni.graph.core.types.quatf | numpy.ndarray(shape=(4,), dtype=numpy.float32) |
| quatf[4][] | omni.graph.core.types.quatfarray | numpy.ndarray(shape=(N,4), dtype=numpy.float32) |
| quath[4] | omni.graph.core.types.quath | numpy.ndarray(shape=(4,), dtype=numpy.float16) |
| quath[4][] | omni.graph.core.types.quatharray | numpy.ndarray(shape=(N,4), dtype=numpy.float16) |
| string | omni.graph.core.types.string | str |
| target | omni.graph.core.types.target | list[usdrt::SdfPath] |
| texcoordd[2] | omni.graph.core.types.texcoord2d | numpy.ndarray(shape=(2,), dtype=numpy.float64) |
| texcoordd[2][] | omni.graph.core.types.texcoord2darray | numpy.ndarray(shape=(N,2), dtype=numpy.float64) |
| texcoordd[3] | omni.graph.core.types.texcoord3d | numpy.ndarray(shape=(3,), dtype=numpy.float64) |
| texcoordd[3][] | omni.graph.core.types.texcoord3darray | numpy.ndarray(shape=(N,3), dtype=numpy.float64) |
| texcoordf[2] | omni.graph.core.types.texcoord2f | numpy.ndarray(shape=(2,), dtype=numpy.float32) |
| texcoordf[2][] | omni.graph.core.types.texcoord2farray | numpy.ndarray(shape=(N,2), dtype=numpy.float32) |
| texcoordf[3] | omni.graph.core.types.texcoord3f | numpy.ndarray(shape=(3,), dtype=numpy.float32) |
| texcoordf[3][] | omni.graph.core.types.texcoord3farray | numpy.ndarray(shape=(N,3), dtype=numpy.float32) |
| texcoordh[2] | omni.graph.core.types.texcoord2h | numpy.ndarray(shape=(2,), dtype=numpy.float16) |
| texcoordh[2][] | omni.graph.core.types.texcoord2harray | numpy.ndarray(shape=(N,2), dtype=numpy.float16) |
| texcoordh[3] | omni.graph.core.types.texcoord3h | numpy.ndarray(shape=(3,), dtype=numpy.float16) |
| texcoordh[3][] | omni.graph.core.types.texcoord3harray | numpy.ndarray(shape=(N,3), dtype=numpy.float16) |
| timecode | omni.graph.core.types.timecode | float |
| timecode[] | omni.graph.core.types.timecodearray | numpy.ndarray(shape=(N,), dtype=numpy.float64) |
| token | omni.graph.core.types.token | str |
| token[] | omni.graph.core.types.tokenarray | numpy.ndarray(shape=(N,), dtype=numpy.str) |
| uchar | omni.graph.core.types.uchar | int |
| uchar[] | omni.graph.core.types.uchararray | numpy.ndarray(shape=(N,), dtype=numpy.uint8) |
| uint | omni.graph.core.types.uint | int |
| uint[] | omni.graph.core.types.uintarray | numpy.ndarray(shape=(N,), dtype=numpy.uint32) |
| uint64 | omni.graph.core.types.uint64 | int |
| uint64[] | omni.graph.core.types.uint64array | numpy.ndarray(shape=(N,), dtype=numpy.uint64) |
| vectord[3] | omni.graph.core.types.vector3d | numpy.ndarray(shape=(3,), dtype=numpy.float64) |
| vectord[3][] | omni.graph.core.types.vector3darray | numpy.ndarray(shape=(N,3), dtype=numpy.float64) |
| vectorf[3] | omni.graph.core.types.vector3f | numpy.ndarray(shape=(3,), dtype=numpy.float32) |
| vectorf[3][] | omni.graph.core.types.vector3farray | numpy.ndarray(shape=(N,3), dtype=numpy.float32) |
| vectorh[3] | omni.graph.core.types.vector3h | numpy.ndarray(shape=(3,), dtype=numpy.float16) |
| vectorh[3][] | omni.graph.core.types.vector3harray | numpy.ndarray(shape=(N,3), dtype=numpy.float16) |
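The shape/dtype conventions in the table above can be verified with plain NumPy, independent of Omniverse (the variable names below are illustrative only):

```python
import numpy as np

# matrixd[4]: a single 4x4 double-precision matrix.
matrix = np.eye(4, dtype=np.float64)

# pointf[3][]: an array of N float32 3-component points (here N = 5).
points = np.zeros((5, 3), dtype=np.float32)

# quath[4][]: an array of N half-precision quaternions (here N = 2).
quats = np.zeros((2, 4), dtype=np.float16)

assert matrix.shape == (4, 4) and matrix.dtype == np.float64
assert points.shape == (5, 3) and points.dtype == np.float32
assert quats.shape == (2, 4) and quats.dtype == np.float16
```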
debug.md
# Debug a Build

Recognizing the critical role of debugging in development, Omniverse offers tools and automation to streamline and simplify debugging workflows. In combination with third-party tools, Omniverse accelerates bug and anomaly detection, aiming for steady increases in project stability throughout the development process.

Omniverse provides utilities for debugging via extensions, both for use within a given Application and in conjunction with third-party tools such as VS Code.

- **Console Extension**: Allows the user to see log output and input commands directly from the Application interface.
- **Visual Studio Code Link Extension**: Enables the connection of an Omniverse Application to VS Code's Python debugger.

**Additional Learning:**

- Video Tutorial - How to Debug Your Kit Extension with Omniverse Code App.
- Advanced Project Template Tutorial - Step-by-step instructions for debugging within the context of an Application development tutorial.
Debugging.md
# Debugging

When things are not behaving as expected, it is good to start by understanding the topology of the execution graph. As described in the [Graph Concepts](#ef-graph-concepts) section, the execution graph is built of many nested graphs. The framework allows you to visualize a flattened version of this graph.

```c++
std::ostringstream stream;
writeFlattenedAsGraphviz(test.g, stream);
```

Graph utilities will traverse the entire topology of the graph and write it out to a given stream in GraphViz format. Below is an interactive example of an execution graph. The SVG file was generated using an online editor.

[Graph Concepts]: GraphConcepts.html#ef-graph-concepts

The output is flattened, which means that all instantiated NodeGraphDefs are expanded in place. We use a small number of colors to help visually distinguish nodes that are in the same topology. It also helps identify when two expanded node graph definitions are references to the same definition in memory.
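`writeFlattenedAsGraphviz` is EF's C++ utility; the underlying idea — walk the topology and emit GraphViz `digraph` text — can be sketched in a few lines of Python over a toy adjacency mapping (not EF's actual API):

```python
def write_as_graphviz(graph):
    """Emit GraphViz digraph text for a node -> children adjacency dict."""
    lines = ["digraph G {"]
    for node, children in graph.items():
        for child in children:
            lines.append(f'  "{node}" -> "{child}";')
    lines.append("}")
    return "\n".join(lines)

# A tiny flattened graph: the root fans out to two nodes, A also feeds B.
dot = write_as_graphviz({"root": ["A", "B"], "A": ["B"]})
print(dot)
```

Pasting the resulting text into any GraphViz renderer (or an online editor, as mentioned above) produces the picture.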
declarative-syntax_Overview.md
# Overview

## Extension : omni.ui.scene-1.10.3

## Documentation Generated : May 08, 2024

### Overview

SceneUI helps build great-looking 3D manipulators and 3D helpers with as little code as possible. It provides shapes and controls for declaring the UI in 3D space.

### Declarative syntax

SceneUI uses declarative syntax, so it's possible to state what the manipulator should do. For example, you can write that you want an item list consisting of an image and lines. The code is simpler and easier to read than ever before.

```python
# Assumes the standard omni.ui aliases:
# from omni.ui import scene as sc
# from omni.ui import color as cl
scene_view = sc.SceneView(
    aspect_ratio_policy=sc.AspectRatioPolicy.PRESERVE_ASPECT_FIT,
    height=200
)
with scene_view.scene:
    sc.Line([-0.5, -0.5, 0], [-0.5, 0.5, 0], color=cl.red)
    sc.Line([-0.5, -0.5, 0], [0.5, -0.5, 0], color=cl.green)
    sc.Arc(0.5, color=cl.documentation_nvidia)
```

This declarative style applies to complex concepts like interaction with the mouse pointer. A gesture can be easily added to almost any item with a few lines of code. The system handles all of the steps needed to compute the intersection with the mouse pointer, and depth sorting if you click many items at runtime. With this easy input handling, your manipulator comes together very quickly.
default-prim-only-mode_Overview.md
# Overview

**Extension** : omni.kit.usd.collect-2.2.21

**Documentation Generated** : May 08, 2024

## Overview

`omni.kit.usd.collect` provides the core API for collecting a USD file with all of its dependencies, which may be scattered around different locations.

```python
from omni.kit.usd.collect import Collector

collector = Collector(usd_path, target_folder)
success, target_root_usd = await collector.collect()
```

This instantiates an `omni.kit.usd.collect.Collector` to collect the USD file from `usd_path` to the target location `target_folder` with default parameters. You can check `omni.kit.usd.collect.Collector.__init__()` for more ways to customize a Collector.

## Differences between Flat Collection and Non-Flat Collection

Collector supports organizing a final collection in two different folder structures: flat or non-flat. By default, the collector collects all assets with a non-flat structure, in which collected files are organized in the same folder structure as the source files. In flat mode, the folder structure is not kept and all dependencies are put into specified folders. You can also specify the policy for how to group textures (see `omni.kit.usd.collect.FlatCollectionTextureOptions` for more details). Currently there are 3 available options:

| Options | Description |
|---------|-------------|
| Group by MDL | Textures will be grouped by their parent MDL file name. |
| Group by USD | Textures will be grouped by their parent USD file name. |
| Flat | All textures will be collected under the same hierarchy under the "textures" folder. Note that textures may overwrite each other if they have the same names but belong to different assets/MDLs. |

## Default Prim Only Mode

Users can also enable "Default Prim Only" mode. In this mode, the collector will prune USD files according to the given policy (see the `Keyword Args` section of `omni.kit.usd.collect.Collector.__init__()`): all prims except the default prim will be removed, to speed up collection. If a USD file has no default prim set, the file is left untouched.

REMINDER: This is an advanced mode that may remove valid data from your stage. If you have references with an explicit prim set, and that prim is not the default prim of the referenced file, applying this mode to all USD layers may create stale references, since non-default prims will be removed.

## Limitations

There is no USDZ support currently, until Kit resolves the MDL loading issue inside USDZ files.
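The flat vs. non-flat layouts described earlier amount to two ways of mapping a source path to a target path. A toy sketch with plain path arithmetic (illustrative only — the real Collector resolves dependencies through USD):

```python
import posixpath

def target_path(src, src_root, dst_root, flat=False):
    """Map a collected dependency to its location under the target folder."""
    if flat:
        # Flat: drop the source folder structure; everything lands together.
        return posixpath.join(dst_root, posixpath.basename(src))
    # Non-flat (default): mirror the source layout under the target folder.
    return posixpath.join(dst_root, posixpath.relpath(src, src_root))

assert target_path("/proj/assets/tex/wood.png", "/proj", "/out") == "/out/assets/tex/wood.png"
assert target_path("/proj/assets/tex/wood.png", "/proj", "/out", flat=True) == "/out/wood.png"
```

The second case also illustrates the overwrite hazard noted in the texture-options table: two different `wood.png` files from different assets would collide in flat mode.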
defining-commands_Overview.md
# Overview

An example C++ extension that can be used as a reference/template for creating new extensions. Demonstrates how to create commands in C++ that can then be executed from either C++ or Python. See the omni.kit.commands extension for extensive documentation about commands themselves.

# C++ Usage Examples

## Defining Commands

```c++
using namespace omni::kit::commands;

class ExampleCppCommand : public Command
{
public:
    static carb::ObjectPtr<ICommand> create(const char* extensionId,
                                            const char* commandName,
                                            const carb::dictionary::Item* kwargs)
    {
        return carb::stealObject<ICommand>(new ExampleCppCommand(extensionId, commandName, kwargs));
    }

    static void populateKeywordArgs(carb::dictionary::Item* defaultKwargs,
                                    carb::dictionary::Item* optionalKwargs,
                                    carb::dictionary::Item* requiredKwargs)
    {
        if (carb::dictionary::IDictionary* iDictionary = carb::getCachedInterface<carb::dictionary::IDictionary>())
        {
            iDictionary->makeAtPath(defaultKwargs, "x", 9);
            iDictionary->makeAtPath(defaultKwargs, "y", -1);
        }
    }

    ExampleCppCommand(const char* extensionId, const char* commandName, const carb::dictionary::Item* kwargs)
        : Command(extensionId, commandName)
    {
        if (carb::dictionary::IDictionary* iDictionary = carb::getCachedInterface<carb::dictionary::IDictionary>())
        {
            m_x = iDictionary->get<int32_t>(kwargs, "x");
            m_y = iDictionary->get<int32_t>(kwargs, "y");
        }
    }

    void doCommand() override
    {
        printf("Executing command '%s' with params 'x=%d' and 'y=%d'.\n", getName(), m_x, m_y);
    }

    void undoCommand() override
    {
        printf("Undoing command '%s' with params 'x=%d' and 'y=%d'.\n", getName(), m_x, m_y);
    }

private:
    int32_t m_x = 0;
    int32_t m_y = 0;
};
```

### Registering Commands

```c++
auto commandBridge = carb::getCachedInterface<omni::kit::commands::ICommandBridge>();
commandBridge->registerCommand(
    "omni.example.cpp.commands", "ExampleCppCommand",
    ExampleCppCommand::create, ExampleCppCommand::populateKeywordArgs);
// Note that the command name (in this case "ExampleCppCommand") is arbitrary
// and does not need to match the C++ class.
```

### Executing Commands

```c++
auto commandBridge = carb::getCachedInterface<omni::kit::commands::ICommandBridge>();

// Create the kwargs dictionary.
auto iDictionary = carb::getCachedInterface<carb::dictionary::IDictionary>();
carb::dictionary::Item* kwargs = iDictionary->createItem(nullptr, "", carb::dictionary::ItemType::eDictionary);
iDictionary->makeIntAtPath(kwargs, "x", 7);
iDictionary->makeIntAtPath(kwargs, "y", 9);

// Execute the command using its name...
commandBridge->executeCommand("ExampleCppCommand", kwargs);
// or without the 'Command' postfix just like Python commands...
commandBridge->executeCommand("ExampleCpp", kwargs);
// or fully qualified if needed to disambiguate (works with or without the 'Command' postfix).
commandBridge->executeCommand("omni.example.cpp.commands", "ExampleCppCommand", kwargs);

// The C++ command can be executed from Python exactly like any Python command,
// and we can also execute Python commands from C++ in the same ways as above:
commandBridge->executeCommand("SomePythonCommand", kwargs); // etc.

// Destroy the kwargs dictionary.
iDictionary->destroyItem(kwargs);
```

## Undo/Redo/Repeat Commands

```c++
auto commandBridge = carb::getCachedInterface<omni::kit::commands::ICommandBridge>();

// It doesn't matter whether the command stack contains Python commands, C++ commands,
// or a mix of both, and the same stands for when undoing/redoing commands from Python.
commandBridge->undoCommand();
commandBridge->redoCommand();
commandBridge->repeatCommand();
```

## Deregistering Commands

```c++
auto commandBridge = carb::getCachedInterface<omni::kit::commands::ICommandBridge>();
commandBridge->deregisterCommand("omni.example.cpp.commands", "ExampleCppCommand");
```
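The execute/undo/redo calls above follow the classic command pattern. A minimal pure-Python sketch of the underlying stacks (not the Kit API — just the mechanism):

```python
class CommandStack:
    """Tracks executed commands so they can be undone and redone."""

    def __init__(self):
        self._done = []
        self._undone = []

    def execute(self, command):
        command.do()
        self._done.append(command)
        self._undone.clear()  # a new command invalidates the redo history

    def undo(self):
        command = self._done.pop()
        command.undo()
        self._undone.append(command)

    def redo(self):
        command = self._undone.pop()
        command.do()
        self._done.append(command)


class AppendCommand:
    """Toy command: append a value to a list, undo by removing it."""

    def __init__(self, target, value):
        self._target = target
        self._value = value

    def do(self):
        self._target.append(self._value)

    def undo(self):
        self._target.pop()


values = []
stack = CommandStack()
stack.execute(AppendCommand(values, 7))
stack.undo()
stack.redo()
assert values == [7]
```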
defining-custom-actions_Overview.md
# Overview — Kit Extension Template C++ 1.0.1 documentation

## Overview

An example C++ extension that can be used as a reference/template for creating new extensions. Demonstrates how to create actions in C++ that can then be executed from either C++ or Python. See the omni.kit.actions.core extension for extensive documentation about actions themselves.

## C++ Usage Examples

### Defining Custom Actions

```c++
using namespace omni::kit::actions::core;

class ExampleCustomAction : public Action
{
public:
    static carb::ObjectPtr<IAction> create(const char* extensionId, const char* actionId, const MetaData* metaData)
    {
        return carb::stealObject<IAction>(new ExampleCustomAction(extensionId, actionId, metaData));
    }

    ExampleCustomAction(const char* extensionId, const char* actionId, const MetaData* metaData)
        : Action(extensionId, actionId, metaData), m_executionCount(0)
    {
    }

    carb::variant::Variant execute(const carb::variant::Variant& args = {},
                                   const carb::dictionary::Item* kwargs = nullptr) override
    {
        // ... (action-specific work elided) ...
        return carb::variant::Variant();
    }

private:
    uint32_t m_executionCount;
};
```

### Registering Actions

```c++
// Register a previously created action from C++.
carb::getCachedInterface<omni::kit::actions::core::IActionRegistry>()->registerAction(exampleLambdaAction);

// Example of creating and registering (at the same time) a lambda action from C++.
carb::getCachedInterface<omni::kit::actions::core::IActionRegistry>()->registerAction(
    "omni.example.cpp.actions", "example_lambda_action_id",
    [](const carb::variant::Variant& args = {}, const carb::dictionary::Item* kwargs = nullptr) {
        printf("Executing example_lambda_action_id.\n");
        return carb::variant::Variant();
    },
    "Example Lambda Action Display Name", "Example Lambda Action Description.");
```

## Discovering Actions

```c++
auto registry = carb::getCachedInterface<omni::kit::actions::core::IActionRegistry>();

// Retrieve an action that has been registered, using the registering extension id and the action id.
carb::ObjectPtr<IAction> action = registry->getAction("omni.example.cpp.actions", "example_custom_action_id");

// Retrieve all actions that have been registered by a specific extension id.
std::vector<carb::ObjectPtr<IAction>> extensionActions = registry->getAllActionsForExtension("example");

// Retrieve all actions that have been registered by any extension.
std::vector<carb::ObjectPtr<IAction>> allActions = registry->getAllActions();
```

## Deregistering Actions

```c++
auto actionRegistry = carb::getCachedInterface<omni::kit::actions::core::IActionRegistry>();

// Deregister an action directly...
actionRegistry->deregisterAction(exampleCustomAction);

// or using the registering extension id and the action id...
actionRegistry->deregisterAction("omni.example.cpp.actions", "example_custom_action_id");

// or deregister all actions that were registered by an extension.
actionRegistry->deregisterAllActionsForExtension("omni.example.cpp.actions");
```

## Executing Actions

```c++
auto actionRegistry = carb::getCachedInterface<omni::kit::actions::core::IActionRegistry>();

// Execute an action after retrieving it from the action registry.
auto action = actionRegistry->getAction("omni.example.cpp.actions", "example_custom_action_id");
action->execute();

// Execute an action indirectly (retrieves it internally).
actionRegistry->executeAction("omni.example.cpp.actions", "example_custom_action_id");

// Execute an action that was stored previously.
exampleCustomAction->execute();
```

Note: All of the above will find any actions that have been registered from either Python or C++, and you can interact with them without needing to know anything about where they were registered.
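Conceptually, the registry above is a lookup table keyed by `(extension id, action id)`. A minimal pure-Python model (not the Kit API):

```python
class ActionRegistry:
    """Toy model of an action registry keyed by (extension id, action id)."""

    def __init__(self):
        self._actions = {}

    def register_action(self, extension_id, action_id, fn):
        self._actions[(extension_id, action_id)] = fn

    def get_action(self, extension_id, action_id):
        return self._actions.get((extension_id, action_id))

    def execute_action(self, extension_id, action_id, *args, **kwargs):
        return self._actions[(extension_id, action_id)](*args, **kwargs)

    def deregister_all_actions_for_extension(self, extension_id):
        self._actions = {key: fn for key, fn in self._actions.items() if key[0] != extension_id}


registry = ActionRegistry()
registry.register_action("my.ext", "greet", lambda name: f"hello {name}")
assert registry.execute_action("my.ext", "greet", "world") == "hello world"
registry.deregister_all_actions_for_extension("my.ext")
assert registry.get_action("my.ext", "greet") is None
```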
defining-pybind-module_Overview.md
# Overview

An example C++ extension that can be used as a reference/template for creating new extensions. Demonstrates how to reflect C++ code using pybind11 so that it can be called from Python code.

The IExampleBoundInterface located in `include/omni/example/cpp/pybind/IExampleBoundInterface.h` is:

- Implemented in `plugins/omni.example.cpp.pybind/ExamplePybindExtension.cpp`.
- Reflected in `bindings/python/omni.example.cpp.pybind/ExamplePybindBindings.cpp`.
- Accessed from Python in `python/tests/test_pybind_example.py` via `python/impl/example_pybind_extension.py`.

# C++ Usage Examples

## Defining Pybind Module

```c++
PYBIND11_MODULE(_example_pybind_bindings, m)
{
    using namespace omni::example::cpp::pybind;

    m.doc() = "pybind11 omni.example.cpp.pybind bindings";

    carb::defineInterfaceClass<IExampleBoundInterface>(
        m, "IExampleBoundInterface", "acquire_bound_interface", "release_bound_interface")
        .def("register_bound_object", &IExampleBoundInterface::registerBoundObject,
             R"(
             Register a bound object.

             Args:
                 object: The bound object to register.
             )",
             py::arg("object"))
        .def("deregister_bound_object", &IExampleBoundInterface::deregisterBoundObject,
             R"(
             Deregister a bound object.

             Args:
                 object: The bound object to deregister.
             )",
             py::arg("object"));
}
```

The resulting Python surface looks like the following stubs:

```python
def find_bound_object(id: str) -> IExampleBoundInterface:
    """
    Find a bound object.

    Args:
        id: Id of the bound object.

    Return:
        The bound object if it exists, an empty object otherwise.
    """
```

```python
class IExampleBoundObject:
    @property
    def id(self) -> str:
        """
        Get the id of this bound object.

        Return:
            The id of this bound object.
        """
```

```python
class PythonBoundObject(IExampleBoundObject):
    def __init__(self, id: str):
        """
        Create a bound object.

        Args:
            id: Id of the bound object.

        Return:
            The bound object that was created.
        """
        self.m_memberInt = 0
        self.m_memberBool = False

    @property
    def property_int(self) -> int:
        """
        Int property bound directly.
        """

    @property
    def property_bool(self) -> bool:
        """
        Bool property bound directly.
        """

    @property
    def property_string(self) -> str:
        """
        String property bound using accessors.
        """

    def multiply_int_property(self, value_to_multiply: int):
        """
        Bound function that accepts an argument.

        Args:
            value_to_multiply: The value to multiply by.
        """

    def toggle_bool_property(self) -> bool:
        """
        Bound function that returns a value.

        Return:
            The toggled bool value.
        """

    def append_string_property(self, value_to_append: str):
        """
        Bound function that appends to a string property.

        Args:
            value_to_append: The value to append to the string property.
        """
```

Bound function that accepts an argument and returns a value.

Args:
    value_to_append: The value to append.

Return:
    The new string value.
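From Python's point of view, the bound object above behaves like an ordinary class with properties. A pure-Python model of the same surface (not the generated binding — just an executable illustration of its documented semantics):

```python
class PythonBoundObjectModel:
    """Pure-Python stand-in mirroring the bound object's documented surface."""

    def __init__(self, id):
        self._id = id
        self.property_int = 0       # int property bound directly
        self.property_bool = False  # bool property bound directly
        self._string = ""

    @property
    def id(self):
        return self._id  # read-only, like the bound `id` property

    @property
    def property_string(self):
        return self._string  # bound via accessors in the C++ binding

    @property_string.setter
    def property_string(self, value):
        self._string = value

    def multiply_int_property(self, value_to_multiply):
        self.property_int *= value_to_multiply

    def toggle_bool_property(self):
        self.property_bool = not self.property_bool
        return self.property_bool

    def append_string_property(self, value_to_append):
        self.property_string = self.property_string + value_to_append


obj = PythonBoundObjectModel("example")
obj.property_int = 3
obj.multiply_int_property(4)
assert obj.property_int == 12
assert obj.toggle_bool_property() is True
obj.append_string_property("hi")
assert obj.property_string == "hi"
```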
DefinitionCreation.md
# Definition Creation

This is a practitioner's guide to using the Execution Framework. Before continuing, it is recommended you first review the Execution Framework Overview along with basic topics such as Graph Concepts, Pass Concepts, and Execution Concepts.

Definitions in the Execution Framework define the work each node represents. Definitions come in two forms: opaque definitions (implemented by NodeDef) and definitions described by a graph (i.e. NodeGraphDef). Each is critical to EF's operation. This article covers how to create both.

## Customizing NodeDef

NodeDef encapsulates opaque user code the Execution Framework cannot examine/optimize. Probably the best example of how we can customize NodeDef is by looking at how NodeDefLambda is implemented. The implementation is simple. At creation, the object is given a function pointer, which it stores. When INodeDef::execute() is called, the stored function is invoked.

### Implementation of NodeDefLambda

```cpp
class NodeDefLambda : public NodeDef
{
public:
    //! Templated constructor for wrapper class
    //!
    //! The given definition name must not be @c nullptr.
    //!
    //! The given invokable object must not be @c nullptr.
    //!
    //! The returned object will not be @c nullptr.
    //!
    //! @tparam Fn Invokable type (e.g. function, functor, lambda, etc) with the signature `Status(ExecutionTask&)`.
    //!
    //! @param definitionName Definition name is considered as a token that transformation passes can register against.
    //!
    //! @param fn Execute function body. Signature should be `Status(ExecutionTask&)`.
    //!
    //! @param schedInfo Fixed at runtime scheduling constraint.
    template <typename Fn>
    static omni::core::ObjectPtr<NodeDefLambda> create(
        const carb::cpp::string_view& definitionName, Fn&& fn, SchedulingInfo schedInfo) noexcept
    {
        OMNI_GRAPH_EXEC_ASSERT(definitionName.data());
        return omni::core::steal(new NodeDefLambda(definitionName, std::forward<Fn>(fn), schedInfo));
    }

protected:
    //! Templated and protected constructor for wrapper class.
    //!
    //! Use the `create` factory method to construct objects of this class.
    template <typename Fn>
    NodeDefLambda(const carb::cpp::string_view& definitionName, Fn&& fn, SchedulingInfo schedInfo) noexcept
        : NodeDef(definitionName), m_fn(std::move(fn)), m_schedulingInfo(schedInfo)
    {
    }

    //! @copydoc omni::graph::exec::unstable::IDef::execute_abi
    Status execute_abi(ExecutionTask* info) noexcept override
    {
        OMNI_GRAPH_EXEC_ASSERT(info);
        return m_fn(*info);
    }

    //! @copydoc omni::graph::exec::unstable::IDef::getSchedulingInfo_abi
    SchedulingInfo getSchedulingInfo_abi(const ExecutionTask* info) noexcept override
    {
        return m_schedulingInfo;
    }

private:
    std::function<Status(ExecutionTask&)> m_fn; //!< Execute function body
    SchedulingInfo m_schedulingInfo;            //!< Scheduling constraint
};
```

### Definition of a behavior tree

```cpp
//
//                              ┌─────────────────┐
//                              │                 │
//                              │    SEQUENCE     │
//                              │                 │
//                              └────────┬────────┘
//                                       │
//          ┌────────────────────────────┴─────────┬──────────────────────────┐
//          │                                      │                          │
// ┌────────▼────────┐                   ┌────────▼─────────┐        ┌────────▼────────┐
// │                 │                   │ ┌──────────────┐ │        │                 │
// │    SELECTOR     │                   │ │BtRunAndWinDef│ │        │    CELEBRATE    │
// │                 │                   │ └──────────────┘ │        │                 │
// └────────┬────────┘                   └──────────────────┘        └─────────────────┘
//          │
//          ├──────────────────────────────────────┐
//          │                                      │
// ┌────────▼────────┐                   ┌────────▼────────┐
// │                 │                   │                 │
// │  READY FOR RACE │                   │  TRAIN TO RUN   │
// │                 │                   │                 │
// └─────────────────┘                   └─────────────────┘

//! Nested behavior tree leveraging composability of EF to add training behavior to BtRunAndWinDef definition.
//!
//! We added a @p CELEBRATE node which together with the behavior @p SEQUENCE will require proper state propagation
//! from the nested @p BtRunAndWinDef definition.
class BtTrainRunAndWinDef : public NodeGraphDef
{
public:
    //! Factory method
    static omni::core::ObjectPtr<BtTrainRunAndWinDef> create(IGraphBuilder* builder)
    {
        auto def = omni::core::steal(new BtTrainRunAndWinDef(builder->getGraph(), "tests.def.BtTrainRunAndWinDef"));
        def->build(builder);
        return def;
    }

    // The definition owns its nodes
    using NodePtr = omni::core::ObjectPtr<Node>;
    NodePtr sequenceNode;
    NodePtr selectorNode;
    NodePtr readyNode;
    NodePtr trainNode;
    NodePtr runAndWinNode;
    NodePtr celebrateNode;

protected:
    //! Constructor
    BtTrainRunAndWinDef(IGraph* graph, const carb::cpp::string_view& definitionName) noexcept;

private:
    //! Connect the topology of already allocated nodes and populate the definition of the @p runAndWinNode node
    void build(IGraphBuilder* parentBuilder) noexcept
    {
        // Create the graph seen above using the builder. Only builder objects can modify the topology.
        auto builder{ GraphBuilder::create(parentBuilder, this) };
        builder->connect(getRoot(), sequenceNode);
        builder->connect(sequenceNode, selectorNode);
        builder->connect(sequenceNode, runAndWinNode);
        builder->connect(sequenceNode, celebrateNode);
        builder->connect(selectorNode, readyNode);
        builder->connect(selectorNode, trainNode);
        builder->setNodeGraphDef(runAndWinNode, BtRunAndWinDef::create(builder.get()));
    }
};
```

## Customizing NodeGraphDef

When we do not know the nodes at compile time, we are still responsible for maintaining the nodes' lifetime. We are also encouraged to reuse nodes between topology changes. In the example below, we create a definition that builds a graph where each node represents a runner. The number of runners is not known at compile time and is specified at runtime as an argument to the `build()` method.
During `build()`, each node is stored in a `std::vector` and a definition is attached to the node to define each runner's behavior.

```cpp
//              ┌────────────┐
//              │            │
//     ┌───────►│  Runner_1  │
//     │        │            │
//     │        └────────────┘
//     │        ┌────────────┐
//     │        │            │
//     ├───────►│    ...     │
//     │        │            │
//     │        └────────────┘
//     │        ┌────────────┐
//     │        │            │
//     └───────►│  Runner_N  │
//              │            │
//              └────────────┘

//! Definition for instantiating a given number of runners. Each runner shares the same @p NodeGraphDef
//! provided as a template parameter RunnerDef. The definition can be repopulated with reuse of nodes and definitions.
template <typename RunnerDef>
class BtRunnersDef : public NodeGraphDef
{
    using ThisClass = BtRunnersDef<RunnerDef>;

public:
    //! Factory method
    static omni::core::ObjectPtr<ThisClass> create(IGraph* graph)
    {
        return omni::core::steal(new ThisClass(graph, "tests.def.BtRunnersDef"));
    }

    //! Construct the graph topology, reusing already allocated runners as much as possible.
    //! All runners will share the same behavior tree instance.
    void build(IGraphBuilder* builder, uint32_t runnersCount)
    {
        if (runnersCount < m_all.size())
        {
            m_all.resize(runnersCount);
        }
        else if (runnersCount > m_all.size())
        {
            m_all.reserve(runnersCount);

            NodeGraphDefPtr def;
            if (m_all.empty())
            {
                def = RunnerDef::create(builder);
            }
            else
            {
                def = omni::core::borrow(m_all.front()->getNodeGraphDef());
            }

            for (uint64_t i = m_all.size(); i < runnersCount; i++)
            {
                std::string newNodeName = carb::fmt::format("Runner_{}", i);
                auto newNode = Node::create(getTopology(), def, newNodeName);
                m_all.emplace_back(newNode);
            }
        }

        INode* rootNode = getRoot();
        for (uint64_t i = 0; i < m_all.size(); i++)
        {
            builder->connect(rootNode, m_all[i].get());
        }
    }

    //! Acquire runner state in the given execution context at the given index. If it doesn't exist, a default one will be allocated.
    BtActorState* getRunnerState(IExecutionContext* context, uint32_t index);

protected:
    //! Initialize each runner state when the topology changes. Make goals for each runner different.
    void initializeState_abi(ExecutionTask* rootTask) noexcept override;

    //! Constructor
    BtRunnersDef(IGraph* graph, const carb::cpp::string_view& definitionName) noexcept
        : NodeGraphDef(graph, BtRunnersExecutor::create, definitionName)
    {
    }

private:
    using NodePtr = omni::core::ObjectPtr<Node>;
    std::vector<NodePtr> m_all; //!< Holds all runners used in the current topology.
};
```

## Next Steps

Readers are encouraged to examine `kit/source/extensions/omni.graph.exec/tests.cpp/graphs/TestBehaviorTree.cpp` to see the full implementation of behavior trees using EF.

Now that you have seen how to create definitions, make sure to consult the [Pass Creation](#ef-pass-creation) guide. If you haven't yet created a module for extending EF, consult the [Plugin Creation](#ef-plugin-creation) guide.
definitions.md
# Definitions

- **exact coverage**: the condition that a walk from any leaf chunk to its ancestor root chunk will always encounter exactly one support chunk
- **family**: the memory allocated when an asset is instanced into its initial set of actors, and all descendant actors formed from fracturing the initial set, recursively
- **root chunk**: a chunk with no parent
- **leaf chunk**: a chunk with no children
- **lower-support chunk**: a chunk that is either a support or subsupport chunk
- **subsupport chunk**: a chunk that is descended from a support chunk
- **supersupport chunk**: a chunk that is the ancestor of a support chunk
- **support chunk**: a chunk that is represented in the support graph
- **upper-support chunk**: a chunk that is either a support or supersupport chunk
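The *exact coverage* condition above can be checked mechanically. A toy sketch over a parent-pointer chunk hierarchy (the names and representation are illustrative only, not an actual API):

```python
def has_exact_coverage(parent, support):
    """True if every leaf-to-root walk crosses exactly one support chunk.

    parent: dict mapping each chunk to its parent (the root chunk maps to None).
    support: set of chunks designated as support chunks.
    """
    internal = {p for p in parent.values() if p is not None}
    leaves = set(parent) - internal  # leaf chunks: chunks with no children
    for leaf in leaves:
        crossings, node = 0, leaf
        while node is not None:  # walk from the leaf chunk up to the root chunk
            crossings += node in support
            node = parent[node]
        if crossings != 1:
            return False
    return True


chunks = {"root": None, "A": "root", "B": "root", "A1": "A", "A2": "A"}
assert has_exact_coverage(chunks, {"A", "B"})  # every walk crosses one support chunk
assert not has_exact_coverage(chunks, {"A"})   # the walk from leaf B crosses none
```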
demo-app_Overview.md
# Overview — Omniverse Kit 2.0.24 documentation

## Overview

A set of simple Popup Dialogs for passing user inputs. All of these dialogs subclass from the base PopupDialog, which provides OK and Cancel buttons. The user is able to re-label these buttons as well as associate callbacks that execute upon being clicked.

Why you should use the dialogs in this extension:

- Avoid duplicating UI code that you then have to maintain.
- Re-use dialogs that have standard look and feel to keep a consistent experience across the app.
- Inherit future improvements.

### Form Dialog

A form dialog can display a mixed set of input types.

Code for above:

```python
field_defs = [
    FormDialog.FieldDef("string", "String: ", ui.StringField, "default"),
    FormDialog.FieldDef("int", "Integer: ", ui.IntField, 1),
    FormDialog.FieldDef("float", "Float: ", ui.FloatField, 2.0),
    FormDialog.FieldDef(
        "tuple", "Tuple: ", lambda **kwargs: ui.MultiFloatField(column_count=3, h_spacing=2, **kwargs), None
    ),
    FormDialog.FieldDef("slider", "Slider: ", lambda **kwargs: ui.FloatSlider(min=0, max=10, **kwargs), 3.5),
    FormDialog.FieldDef("bool", "Boolean: ", ui.CheckBox, True),
]
dialog = FormDialog(
    title="Form Dialog",
    message="Please enter values for the following fields:",
    field_defs=field_defs,
    ok_handler=lambda dialog: print(f"Form accepted: '{dialog.get_values()}'"),
)
```

### Input Dialog

An input dialog allows one input field.

Code for above:

```python
dialog = InputDialog(
    title="String Input",
    message="Please enter a string value:",
    pre_label="LDAP Name: ",
    post_label="@nvidia.com",
    ok_handler=lambda dialog: print(f"Input accepted: '{dialog.get_value()}'"),
)
```

### Message Dialog

A message dialog is the simplest of all popup dialogs; it displays a confirmation message before executing some action.

Code for above:

```python
message = "Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua."
dialog = MessageDialog(
    title="Message",
    message=message,
    ok_handler=lambda dialog: print(f"Message acknowledged"),
)
```

### Options Dialog

An options dialog displays a set of checkboxes; the choices optionally belong to a radio group - meaning only one choice is active at a given time.

Code for above:

```python
field_defs = [
    OptionsDialog.FieldDef("hard", "Hard place", False),
    OptionsDialog.FieldDef("harder", "Harder place", True),
    OptionsDialog.FieldDef("hardest", "Hardest place", False),
]
dialog = OptionsDialog(
    title="Options Dialog",
    message="Please make your choice:",
    field_defs=field_defs,
    width=300,
    radio_group=True,
    ok_handler=lambda choice: print(f"Choice: '{dialog.get_choice()}'"),
)
```

### Options Menu

Similar to the options dialog, but displayed in menu form.

Code for above:

```python
field_defs = [
    OptionsMenu.FieldDef("audio", "Audio", None, False),
    OptionsMenu.FieldDef("materials", "Materials", None, True),
    OptionsMenu.FieldDef("scripts", "Scripts", None, False),
    OptionsMenu.FieldDef("textures", "Textures", None, False),
    OptionsMenu.FieldDef("usd", "USD", None, True),
]
menu = OptionsMenu(
    title="Options Menu",
    field_defs=field_defs,
    width=150,
    value_changed_fn=lambda dialog, name: print(f"Value for '{name}' changed to {dialog.get_value(name)}"),
)
```

A complete demo, which includes the code snippets above, is included with this extension at "scripts/demo_popup_dialog.py".
Deploying.md
# Deploying a Carbonite Application

Applications developed with the Carbonite SDK need to redistribute some of the SDK's components to function correctly. The Carbonite package for applications is `carb_sdk+plugins.${platform}` and is distributed via Packman. You can use the Packman search tool to find package versions. Generally speaking, there is little harm in redistributing *too many* files. If in doubt, redistribute it.

## Redistributable

The package contains a `_build/{platform}/{config}` directory where binary artifacts that must be redistributed are placed. Not all of these files will need to be redistributed. The following sections describe the requirements in more detail.

## Debug vs Release

The package contains both *debug* and *release* builds of binaries. If debugging Carbonite itself is not desired, your application can use the *release* binaries, even if the application itself is built as *debug*. This also tends to be faster at runtime, as the *debug* binaries are non-optimized and therefore less performant.

On Windows, the *debug* binaries may require debug runtime libraries. Carbonite is not licensed to distribute the Microsoft debug runtime files, so these files must be sourced elsewhere. A possible means of acquiring the debug Microsoft libraries is to install a version of Microsoft Visual Studio.

## Core Library

If you are using the [Carbonite Framework](carb/Framework.html#carb-framework) with plugins, or [Omniverse Native Interfaces](OmniverseNativeInterfaces.html), or the Carbonite memory management functions (i.e. `carb::allocate()`), you will need to package the core library along with your application. This is `carb.dll` (Windows), `libcarb.so` (Linux) or `libcarb.dylib` (Mac).

## Plugins

Only the plugins that your application uses (and their recursive dependencies) must be redistributed. For instance, few applications use `carb.simplegui.plugin` though it is among the largest Carbonite plugins.
It need not be redistributed with your application if it is not being used. However, keep in mind that there may be dependencies between plugins. For instance, `carb.settings.plugin` requires `carb.dictionary.plugin`. The provided `plugin.inspector` tool application can be used to examine these dependencies.

## Python

Carbonite provides a means of embedding Python through `carb.scripting-python.plugin`. Python 3.7 and 3.10 are both offered. These are meant to be singular: that is, only one version of Python may be loaded into an application. The contents of the `scripting-python-${version}` directory must be redistributed along with your application if you use embedded Python.

## Python Bindings

If you are using the Carbonite plugins through embedded Python, or are running as a Python application (i.e. started from Python), then you likely also want to include the relevant portions of the *bindings-python* directory.

Python uses dot-notation with the *import* directive to load Python code from directories and packages. *.py* files may exist loose in a directory to comprise a package, or be compiled into a library file (a *.pyd* file on Windows or a *.so* file on Linux/Mac). Carbonite Python bindings are generally compiled into a library file.

In some cases, Carbonite Python bindings are prefixed with an underscore (i.e. *_carb*) and a wrapper *__init__.py* file is used to import and augment the library package contents. In these cases, both the library with the underscore prefix and the *__init__.py* file must be redistributed in the same directory structure layout. Since Python interprets directories as package names, the directory structure is important. Therefore, it is important that the directory structure under *bindings-python* is replicated in your application distribution, and that the root of the bindings is added to the *PYTHONPATH* environment variable.
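The *PYTHONPATH* requirement above can also be satisfied at runtime. The sketch below shows two helpers one might write for this: computing the ABI tag of the running interpreter (matching the naming used by the binding files) and prepending the bindings root to the module search path. Both function names are illustrative, not part of the Carbonite SDK:

```python
import sys


def cp_tag():
    """ABI tag of the running interpreter, e.g. 'cp310' for Python 3.10.

    Useful for picking the matching binding library file."""
    return f"cp{sys.version_info.major}{sys.version_info.minor}"


def add_bindings_root(root):
    """Make 'import carb' / 'import omni.*' resolvable by prepending the
    bindings-python root; equivalent to putting it on PYTHONPATH before launch."""
    if root not in sys.path:
        sys.path.insert(0, root)
```

Modifying `sys.path` in-process is equivalent to setting *PYTHONPATH* in the environment, but only affects the current interpreter.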
Bindings are available for the following versions of Python, identified by the *cpXXX* number in their filename: 3.7 (*cp37*), 3.8 (*cp38*), 3.9 (*cp39*), 3.10 (*cp310*). Only the version that is in use must be redistributed.

### Core library bindings

The Core library bindings are prefixed with *_carb* and located in the *bindings-python/carb* directory. The *__init__.py* is also required to be redistributed along with the Core library bindings. The Core library bindings are required if you are redistributing any of the other bindings (or have your own bindings for additional plugins).

### Plugin bindings

Plugins that do not have an associated *__init__.py* file are located in either the *carb* or *omni* subdirectories of *bindings-python* and do not have an underscore prefix. Plugins that have an associated *__init__.py* are located in an additional subdirectory. That subdirectory must be redistributed along with the binding library for the desired version of Python, as well as the *__init__.py* file.

## Platform Specific

### Windows

Some plugins may require the Visual Studio 2019 Runtime Redistributable. The files therein are not distributed as part of the *carb_sdk+plugins* package and must be sourced separately. Typically the files required are located in the */X64/Microsoft.VC142.CRT* directory: *vcruntime140.dll* and *msvcp140.dll*. In some cases the Windows SDK runtime is required as well: *x64/ucrtbase.dll*.

If *carb.profiler-nvtx.plugin* is redistributed, the *nvToolsExt64_1.dll* file must also be present in the same directory.

### Linux

If *carb.profiler-nvtx.plugin* is redistributed, the *libnvToolsExt.so* file must also be present in the same directory.

## Telemetry Transmitter

If an application makes use of *omni.structuredlog.plugin* to gather telemetry data, the *omni.telemetry.transmitter* application can be used to send the gathered information to a server that collects this data.
The *omni.telemetry.transmitter* application changes less frequently and is distributed via Packman in a separate package: *telemetry_transmitter.${platform}*. The entire contents of *_build/${platform}/release* from within that package should be redistributed along with your application in a separate directory.

> **Warning**
> The *telemetry_transmitter* package contains copies of various plugins required by the transmitter. It is generally assumed that these are older versions than the plugins from the *carb_sdk+plugins* package, and they should be located in a separate directory and loaded only by the transmitter.

## Symbols

Symbols are not distributed along with any of the Carbonite packages. Instead they are stored at build time using the repo_symbolstore utility.
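The rule from the Plugins section, ship the plugins you use plus their recursive dependencies, amounts to a transitive closure over the plugin dependency graph. A sketch with a hypothetical dependency table (the real graph would come from inspecting the plugins, e.g. with `plugin.inspector`):

```python
def redistributable_set(used, deps):
    """Compute the set of plugins to ship: the ones used directly plus
    all recursive dependencies (a transitive closure).

    `deps` maps plugin name -> list of plugins it requires."""
    needed, stack = set(), list(used)
    while stack:
        plugin = stack.pop()
        if plugin in needed:
            continue  # already visited, avoids cycles
        needed.add(plugin)
        stack.extend(deps.get(plugin, []))
    return needed


# Hypothetical dependency table; only the settings->dictionary edge
# is stated in the text above.
deps = {
    "carb.settings.plugin": ["carb.dictionary.plugin"],
    "carb.dictionary.plugin": [],
}
```

For example, an application using only `carb.settings.plugin` would still need to ship `carb.dictionary.plugin` as well.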
depth-compositing_Overview.md
# Overview

## Introduction

The `omni.kit.scene_view.opengl` module provides an OpenGL drawing backend for `omni.ui.scene`. The usage is the same as `omni.ui.scene` for creating items; the only difference is in how the top-level `SceneView` object is created.

## How to create a simple OpenGLSceneView

### Python

1. Import the required packages:

```python
import omni.ui as ui
from omni.ui_scene import scene as sc
from omni.kit.scene_view.opengl import OpenGLSceneView
```

2. Create a simple model to deliver view and projection to the `omni.ui.scene.SceneView`:

```python
class SimpleModel(sc.AbstractManipulatorModel):
    def __init__(self, view=None, projection=None):
        super().__init__()
        self.__view = view or [
            0.7071067811865476, -0.4082482839677536, 0.5773502737830688, 0,
            2.7755575615628914e-17, 0.8164965874238355, 0.5773502600027396, 0,
            -0.7071067811865477, -0.40824828396775353, 0.5773502737830687, 0,
            5.246555321336316e-14, -0.0000097441642310514, -866.0254037844385, 1,
        ]
        self.__projection = projection or [
            2.911189413558437, 0, 0, 0,
            0, 2.911189413558437, 0, 0,
            0, 0, -1.00000020000002, -1,
            0, 0, -2.00000020000002, 0,
        ]

    def get_as_floats(self, item):
        """Called by SceneView to get projection and view matrices"""
        if item == self.get_item("projection"):
            return self.__projection
        if item == self.get_item("view"):
            return self.__view
```

3. Create an `omni.ui.Window` and an `OpenGLSceneView` in it:

```python
window = ui.Window("OpenGLSceneView", width=512, height=512)
with window.frame:
    gl_sceneview = OpenGLSceneView(SimpleModel())
    with gl_sceneview.scene:
        sc.Arc(250, axis=0, tesselation=64, color=[1, 0, 0, 1])
        sc.Arc(250, axis=1, tesselation=64, color=[0, 1, 0, 1])
        sc.Arc(250, axis=2, tesselation=64, color=[0, 0, 1, 1])
```

Depth Compositing
=================

Because the drawing is done with OpenGL, it is also possible to do in-Viewport drawing that is depth-composited.
This can be accomplished with the `ViewportOpenGLSceneView` class, which also handles setting up the Viewport to output a depth channel to clip against.

Python
------

```python
from omni.ui_scene import scene as sc
from omni.kit.scene_view.opengl import ViewportOpenGLSceneView
from omni.kit.viewport.utility import get_active_viewport_window


def build_fn(gl_scene_view):
    with gl_scene_view.scene:
        sc.Arc(45, axis=1, tesselation=64, color=[1, 0.9, 0.4, 1], wireframe=False, thickness=50)


# Use a static helper method to set everything up.
# Just provide a ViewportWindow, a unique identifier, and a callable function to build the scene.
ui_frame, gl_scene_view = ViewportOpenGLSceneView.create_in_viewport_window(
    get_active_viewport_window(), "demo.unique.identifier", build_fn
)
```

Further reading
===============

* [Omni UI Scene](https://docs.omniverse.nvidia.com/kit/docs/omni.ui.scene/latest)
design_manual.md
# WRAPP CLI usage

WRAPP provides a command line tool that helps with asset packaging and publishing operations for assets stored in Nucleus servers or file systems. It encourages a structured workflow for defining the content of an asset package, and provides methods to publish and consume those packages in a version-safe manner.

## Design

The WRAPP command line tool is a pure Nucleus client utilizing only publicly available APIs; it lives completely in the Nucleus user space. Thus, all operations performed are limited by the permissions granted to the user executing the script.

The tool itself offers a variety of commands that document themselves via the `--help` command line flag. To get a list of all commands, run:

```
wrapp --help
```

To get the help for a single command, run e.g.:

```
wrapp create --help
```

The commands are displayed in alphabetical order, but it is important to understand that the design is based on three layers of increasing abstraction over a pure file-system based workflow. Those layers are:

1. Files & Folders
2. Packages
3. Stages

We will present the commands in order from the lowest abstraction to the highest, because that makes it easier to understand how the later commands function; in day-to-day usage, however, mostly layers 2 and 3 will be used.

## Supported URLs

Wrapp in general accesses data through the Omniverse Client-Library and therefore supports URLs to Nucleus servers, S3 buckets, Azure containers/blobs and the local file system:

- **Nucleus servers**: Data on Nucleus servers can be accessed using "omniverse://…" URLs. Authentication will by default occur interactively; for more details please refer to the Nucleus documentation.
- **Azure**: Data on Azure can be accessed using "https://…..blob.core.windows.net" URLs. For more details on authentication and requirements on the Azure containers/blobs, please refer to the client-library documentation.
- **S3**: Data on S3 can be accessed using "http(s)://…cloudfront.net" or "http(s)://…amazonaws.com" URLs. For more details on authentication and requirements on the S3 buckets, please refer to the client-library documentation.
- **Local file system**: Data on the local file system can be accessed using "file://localhost/…." or "file:///…" URLs. Any URL or path that has no scheme is interpreted as a file path, so you can specify `file:local_folder` or `local_folder` to address a local directory.

Not all commands support all URL types for all parameters.

## Generic parameters

Most if not all commands support the following parameters:

- `--verbose`: Specify this to have more visibility on what is currently being processed.
- `--time`: Measure the wall clock time the command took to execute.
- `--stats`: Produce some statistics about the estimated number of roundtrips and file counts encountered. Note that many of these roundtrips may be cached and not actually executed; this is more informative in nature than a benchmark.

## Options

- `--jobs`: Specify the maximum number of parallel jobs executed. The default is 100. This can be useful to throttle load on the server while running bulk operations. Note that downloads are always capped at 10 (or use the OMNI_CONN_TRANSFER_MAX_CONN_COUNT setting to set this specifically).
- `--tagging-jobs`: Specify the maximum number of parallel jobs run on the tagging service. The default is 50.
- `--log-file`: Specify the name of the log file. The default name is `wrapp.log`.
- `--debug`: Turn on debug-level logging for the client library.
- `--json-logging`: Use this to produce a JSON structured log instead of a human-readable log.

## Authentication

By default, wrapp uses interactive authentication appropriate for the server and server version you are contacting. It might open a browser window to allow for single sign-on workflows. Successful connections will be cached, and no further authentication will be required for subsequent commands.
If this is not desired or not possible, as in headless programs, the `--auth` parameter is used to supply credentials. The credentials need to be in the form of a comma-separated triplet, consisting of:

1. The server URL. This needs to start with `omniverse://` and must match the server name as used in the URLs that target the server.
2. The username. This can be a regular username, or the special name `$omni-api-token` when the third item is an API token and not a password.
3. The password for that user, or the API token generated for a single sign-on user.

As an example, this is how to specify a wrapp command authenticating against a localhost workstation with the default username and password:

```
wrapp list-repo omniverse://localhost --auth omniverse://localhost,omniverse,omniverse
```

and this is how you would use an API token stored in an environment variable on Windows:

```
wrapp list-repo omniverse://staging.nvidia.com/staging_remote/beta_packages --auth omniverse://staging.nvidia.com,$omni-api-token,%STAGING_TOKEN%
```

On Linux, don't forget to escape the `$`.

## Running wrapp commands concurrently

If several wrapp commands are executed and awaited concurrently, it is strongly recommended to run them in one context created with the CommandContext.run_scheduler method.

## Layer 1 commands - Files & Folders and their metadata

### Catalog

The catalog command can be used to create a list of the files and folders in a specified subtree and store the result, together with explicit version information, in a catalog (aka manifest) file. To catalog the content of a specific subtree on your localhost Nucleus, with the assets at the path `NVIDIA/Assets/Skies/Cloudy/`, just run:

```
wrapp catalog omniverse://localhost/NVIDIA/Assets/Skies/Cloudy/ cloudy_skies_catalog.json --local-hash
```

Of course, replace localhost with the server name if the data is somewhere else.
The `--local-hash` option is required here because the data in the example is stored on a mount; it is also needed if the data is not checkpointed. In those cases no server-side hashes are available, so use `--local-hash` to calculate them on the fly, but note that the data needs to be downloaded to your local machine for this! The JSON file produced has now archived the files and their versions at the very moment the command was run. On a live server, running the command again might produce a different catalog when files are added, deleted, or updated in the meantime. To determine whether the version that was cataloged is still the same, we can use the `diff` command to compare two catalogs made at different points in time, or even at different copy locations of the same asset.

### Ignore rules, e.g. for thumbnails

The command supports ignore rules that are by default read from a file called `.wrappignore` in the current working directory. The name of the ignore file can also be specified with `--ignore-file myignorefile.txt`. For example, to ignore all thumbnail directories during the catalog operation and not include them in the package, create a file called `.wrappignore` in your current directory containing the line:

```
.thumbs
```

If tags need to be cataloged, copied, and diffed as well, specify the `--tags` parameter. This will do a second pass using the Omniverse tagging service and will archive the current state of tags, their namespaces and values in the catalog file:

```
wrapp catalog omniverse://example.nvidia.com/lib/props/vegetation vegetation_tagged.json --tags
```

# Creating a catalog from a file list

Should your asset be structured differently from a simple folder tree that is traversed recursively by the catalog operation, you can create and specify a file list in the form of a tab-separated URL list split into the base and the relative path.
As an example, this can be used to create a catalog of an asset structured differently:

```
omniverse://localhost/NVIDIA/Assets/Skies/Clear/\tevening_road_01_4k.hdr
omniverse://localhost/NVIDIA/Assets/Skies/Dynamic/\tCirrus.usd
```

If this is stored in a file called input_files.tsv (with a proper ASCII tab character instead of the `\t` placeholder), you can create the catalog of this asset with the `--file-list` parameter like this:

```
wrapp catalog input_files.tsv evening_road.json --local-hash --file-list
```

Both files will now be in the root directory of the package to be created, as only the relative part of the path is kept.

# Diff

The diff command compares two catalogs, and can be used to find out what has changed, or what the differences are between two copies of the same subtree. Assuming we have two catalogs of the same package from the same location at two different dates, we can just run:

```
wrapp diff vegetation_catalog_20230505.json vegetation_catalog_20230512.json --show
```

The `--show` option asks the command not only to report whether there is a diff (the exit code will be 1 if a diff is detected, 0 otherwise), but also to print out a list of items that are only in catalog 1 but not in 2, those that are only in 2 but not in 1, and a list of files that differ in their content.

# Get

Sometimes it can be handy to have a quick way of retrieving a single file or folder with a command line tool. This is what the get command was made for. To retrieve a single file onto your local disk, just do:

```
wrapp get omniverse://localhost/NVIDIA/Assets/Isaac/2022.1/Isaac/Materials/Isaac/nv_green.mdl
```

and the tool will download the file.
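The three result sets reported by `diff --show` can be modeled as set operations over two catalogs. The sketch below treats a catalog as a plain `{relative_path: content_hash}` map; real catalog files carry more metadata than just a hash:

```python
def diff_catalogs(a, b):
    """Compare two catalogs given as {relative_path: content_hash} maps.

    Returns (only_in_a, only_in_b, changed), mirroring the three lists
    that `wrapp diff --show` prints."""
    only_a = sorted(a.keys() - b.keys())      # deleted relative to b
    only_b = sorted(b.keys() - a.keys())      # added relative to a
    changed = sorted(p for p in a.keys() & b.keys() if a[p] != b[p])
    return only_a, only_b, changed
```

A non-empty result in any of the three lists corresponds to the exit code 1 ("a diff is detected") described above.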
# Cat

For viewing the content of a single text file, you can issue the cat command and wrapp will download the content and print it to stdout:

```
wrapp cat omniverse://localhost/NVIDIA/Assets/Isaac/2022.1/Isaac/Materials/Isaac/nv_green.mdl
```

# Freeze

The freeze command is used to freeze or archive a specific version into a new location. This is used to make sure a specific version can be reproducibly addressed at that location, e.g. to run a CI job on a specific version, or to create a reproducible version for QA testing and subsequent release.

The freeze command has two modes. The first mode takes a source subtree URL and creates a copy of the head version of the files at the source position. If both source and destination are on the same Nucleus server, the operation is efficient as no data has to be transferred; the files and folders at the new destination are effectively hard links to the same content, causing no data duplication. Note that the history is not copied, and the checkpoint numbers will not be the same as in the source. Here is a command to freeze the example vegetation package at the current head version into a new subtree on the same server:

```
wrapp freeze omniverse://example.nvidia.com/lib/props/vegetation omniverse://example.nvidia.com/archive/props/vegetation_20230505
```

The second mode of the command takes a catalog file as input and again a destination path as the second parameter, but needs the flag `--catalog`:

```
wrapp freeze vegetation_catalog_20230505.json omniverse://example.nvidia.com/archive/props/vegetation_20230505 --catalog
```

Note that while this allows you to defer the copy command to a later point and only catalog the files as a first step, there is no guarantee that the freeze operation will still be able to find all files listed in the catalog - they might have been moved away or obliterated.
So while creating the catalog first and freezing later is an optimization, be aware that the content referenced by the catalog file is not securely stored until frozen. One useful option is to specify a local file URL as the destination; this allows you to copy a specific cataloged version out to local disk, e.g. to run a CI job on it:

```
wrapp freeze vegetation_catalog_20230505.json file:/c:/build_jobs/20230505 --catalog
```

Freeze also supports the `.wrappignore` file like catalog, as well as the `--ignore-file` parameter. So even if files are part of the catalog, they can be ignored at the freeze stage by providing an ignore file.

To enable respecting tags during the freeze operation and to make sure they are copied as well, specify the flag `--copy-tags`. Note this has no effect when doing a copy within the same Nucleus server, as tags are always copied in that case anyhow.

# create-patch and apply-patch

The create-patch command uses a three-way comparison to produce a patch file that will merge one file tree into another, given their common ancestor. For example, assume we have a file tree at time point 1 and have created a catalog file for this file tree called "catalog_1.json". We make a copy of this state to a new location and use it from there. Now work continues in the original location, and we create a new catalog at time point 2 called "catalog_2.json". If we now want to update the copy of the file tree at our use location, and want to know whether it is safe to overwrite the tree or whether there are local modifications we want to be alerted about, we use the following steps:

1. First, catalog the target location as well; let's call this catalog_target.json.
2. Then, run the following command to produce the patch or delta file which contains the operations needed for the update:

```
wrapp create-patch catalog_target.json catalog_2.json catalog_1.json --patch update_target.json
```

3.
When the command produces the patch file successfully, this indicates that no local changes have been made and there are no conflicts. Then run the following command to apply the changes in update_target.json to the target:

```
wrapp apply-patch update_target.json
```

After this command, the file tree at the target location matches the file tree at the source at time point 2. This is the operation that is performed by the higher-level `update` command.

In case there are local changes to the target, two options are offered:

1. To ignore local changes and keep them, rather just adding new files and new versions where the target is unmodified, specify the `--ignore` parameter to the merge command.
2. To roll back and lose local changes in the target, specify the `--force` parameter to the merge command; this will produce a larger patch file that also contains the rollback commands.

# Layer 2 commands - Packages

So far we have only worked with subtrees, as in a versioned file system. This is very powerful and can be used for many use cases, but to have an easier workflow with less complex URLs and fewer possibilities for mistakes, we introduce a few conventions and new commands.

The concept of a `repository` is known from distributed versioning systems like git, and denotes a location where a repository or module is stored. We use the term repository to point at a directory on a Nucleus server which is used as an intermediate safe storage for the frozen/archived versions; consumers of these files use it as a copy source. The package directory is called `.packages`. Each folder in there represents a named package, and has sub-folders for the named versions of that package. No prescriptions are made for how packages or versions have to be named; they just have to be valid file and folder names.
An example package cache could look like this:

- /.packages
- /.packages/vegetation_pack
- /.packages/vegetation_pack/20230505
- /.packages/vegetation_pack/20230512
- /.packages/vegetation_pack/20230519
- /.packages/rocks_pack
- /.packages/rocks_pack/v1.0.0
- /.packages/rocks_pack/v1.1.0

Concretely, we introduce the new commands `new`, `create`, `install`, and `list-repo`. We allow both named and unnamed packages to be used. Unnamed packages are top level directories that are just consumers of packages produced elsewhere and have no package description file of their own. Named packages are all packages that have a file `.<package name>.wrapp`. You can create a named package by using the new command, or create a named package from an unnamed package during the create operation (which will leave the unnamed source package unnamed - but you can run new for a directory that already contains files!).

## New

The new command does not operate on any files or packages; rather, it is a shortcut to create a suitable `.<package>.wrapp` file to be used by subsequent install commands. For instance, when creating a new scenario and wanting to capture the asset packages used, it is useful to have such a file that will record the dependencies installed. As an example, just run:

```
wrapp new san_diego_scenario 1.0.0 omniverse://localhost/scenarios/san_diego
```

This will create a single file `.san_diego_scenario.wrapp` in the given location. You can display its contents with:

```
wrapp cat omniverse://localhost/scenarios/san_diego/.san_diego_scenario.wrapp
```

and it will look similar to this:

```json
{
    "format_version": "1",
    "name": "san_diego_scenario",
    "version": "1.0.0",
    "catalog": null,
    "remote": null,
    "source_url": "omniverse://localhost/scenarios/san_diego",
    "dependencies": null
}
```

## Create

The create command is a shorter form of freeze.
The destination directory for the freeze operation is always a package cache directory, which by default is on the same Nucleus server as the source data. To create a versioned package for reuse from our previous example, run:

```
wrapp create --package omniverse://localhost/scenarios/san_diego/.san_diego_scenario.wrapp
```

When you want to later create a new version of this package, just additionally specify the new version.

Alternatively, if you have not run new and there is no .wrapp file in the package directory, you can just specify the name and version directly. This will create a .wrapp file only in the package cache, not in the source of the package:

```shell
wrapp create vegetation_pack 20230505 omniverse://localhost/lib/props/vegetation
```

This will create a copy of the vegetation library in the default package cache at omniverse://localhost/.packages/vegetation_pack/20230505.

You can use the `--repo` option to specify a different downstream Nucleus server to receive the data, but note that this will first download the data and then upload it to the other server. For example, to create the package on a different Nucleus server that is used for staging tests, we could run:

```shell
wrapp create vegetation_pack 20230505 omniverse://localhost/lib/props/vegetation --repo omniverse://staging.nvidia.com
```

This will create a copy of the vegetation library in omniverse://staging.nvidia.com/.packages/vegetation_pack/20230505.

Additionally, this will create a wrapp file recording the package name, the version, and the source from which it was created. The name will be `.{package_name}.wrapp`. Running the new command to prepare a .wrapp file is optional; create will generate the file in case there is none yet.

Alternatively, packages can be created from previously generated catalogs as well.
For this, specify the filename of the catalog file instead of a source URL and add the `--catalog` option:

```shell
wrapp create vegetation_pack 20230505 --catalog vegetation.json --repo omniverse://staging.nvidia.com
```

## List-repo

With the concept of remotes, you can also list the packages available on any of them. Running:

```shell
wrapp list-repo omniverse://localhost
```

will give you the list of known packages with the versions present. For example, the output could be:

```shell
> wrapp list-repo omniverse://localhost
vegetation_pack: 20230505, 20230401
```

showing that one package is available, in two different versions.

## Install

These are still pure file-based operations, and if we copied a version of the asset library into a folder with a version name in it, all references to these files would need to be renamed, making it harder to update to a new version of that asset library from within USD. The idea here is to not reference the package archive directly from within the USD files and materials, but rather to create yet another copy as a subfolder of the scenario or stage, with that subfolder having no version in its path. This can most easily be achieved via the `install` command.

Assume the author of a `SanDiego` scenario stored at omniverse://localhost/scenarios/SanDiego wants to use the vegetation asset pack in a specific version. This can be done with the following command line:

```shell
wrapp install vegetation_pack 20230505 omniverse://localhost/scenarios/SanDiego/asset_packs
```

This will look for the package version in the server's .packages directory, and make a hard-linked copy in the specified subdirectory `asset_packs/`, from where the assets can be imported and used in the scenario scene.

The install command can also be used to update a package at the same location to a different version (it also allows downgrades). For that, just specify a different version number.
This command will check that the installed package is unmodified, else it will fail with conflicts (to override, just delete the package at the install location and run install again). To update the previously installed vegetation_pack to a newer version, just run:

```shell
wrapp install vegetation_pack 20230523 omniverse://staging.nvidia.com/scenarios/SanDiego/asset_packs
```

If you use more than one package, it can quickly get complicated to remember which package was installed from where. To help with this, wrapp introduces the concept of package files with dependencies. To create or update a dependency file, specify an additional parameter to the install command like this:

```shell
wrapp install vegetation_pack 20230523 omniverse://staging.nvidia.com/scenarios/SanDiego/asset_packs --package omniverse://staging.nvidia.com/scenarios/SanDiego/.sandiego.wrapp
```

This will create a file `.sandiego.wrapp` at the specified location.

If any of the files the install command needs to modify have been manually changed in the installation folder, the installation will fail with an appropriate error message, indicating that the file in the installation folder cannot be updated to match the file in the package folder. This is called a "conflict". The following examples constitute conflicts:

- The same file has been changed in both the installation folder and the package, but with different content.
- A new file has been added to both the installation folder and the package, but with different content.
- A file has been deleted from the package, but modified in the installation folder.

This conflict mechanism protects the user from losing any data or modifications in the installation folder. To update the installation folder in such a situation, the patch/apply mechanism can be used. In order to record the conflicts into a patch file, the failed installation can be rerun with an additional parameter specifying the name of the patch file to create.
This will apply all non-conflicting changes and record all conflicts in the patch file:

```shell
wrapp install vegetation_pack 20230925 omniverse://staging.nvidia.com/scenarios/SanDiego/asset_packs --patch install_conflicts.patch
```

The `install_conflicts.patch` file is a JSON file with the operations that would resolve/override the conflicts. Inspect it, edit or remove any operations that are not desired, and apply it with:

```shell
wrapp apply install_conflicts.patch
```

# Uninstall

Any package that has been installed can be uninstalled again. There are two modes of uninstallation: via the directory in which the package has been installed, or via pointing to the dependency file which was used to record the install operation. In the latter case, uninstall will also remove the dependency information recorded in that file.

Uninstall via directory:

```bash
wrapp uninstall vegetation_pack omniverse://staging.nvidia.com/scenarios/SanDiego/asset_packs
```

or via package file, with no need to specify the installation directory:

```bash
wrapp uninstall vegetation_pack --package omniverse://staging.nvidia.com/scenarios/SanDiego/dependencies.toml
```

# Mirror

When working with multiple servers, it might make sense to transfer created packages (or rather, specific versions of these) into the .packages folder on another server, so that install operations on that server are fast and don't need to specify the source server as a repository. This is what the mirror operation is built for - it will copy a package version from one server's .packages directory into another server's .packages directory. The simple form of the command is:

```bash
wrapp mirror vegetation_pack 20230523 --source-repo omniverse://dev.nvidia.com --destination-repo omniverse://staging.nvidia.com
```

It is also possible to resume an aborted transfer. This is implemented by cataloging the destination directory first and then calculating and applying a delta patch. Activate this behavior with the `--resume` parameter.
If the destination directory does not exist, this parameter does nothing and is ignored:

```bash
wrapp mirror vegetation_pack 20230523 --source-repo omniverse://dev.nvidia.com --destination-repo omniverse://staging.nvidia.com --resume
```

To accelerate the upload of subsequent versions, we can force a differential upload against an arbitrary version that has already been mirrored - just specify the template version as an additional parameter:

```bash
wrapp mirror vegetation_pack 20230623 --source-repo omniverse://dev.nvidia.com --destination-repo omniverse://staging.nvidia.com --template-version 20230523
```

This will first copy, on the target server, the version specified as the template version into the target folder. Then it will calculate a differential update, and only upload and delete the files that changed. This can be a big time saver when many files stayed the same between versions, but it will slow things down if the difference is actually large, because it has to do the additional copy on the destination server and catalog the result of that copy in the destination directory. [Optimization possible - we could rewrite the source catalog so the subsequent catalog is not required.]

# Export

Instead of directly copying a package from server to server using the mirror command, you can also have wrapp create a tar file with all contents of a package for a subsequent import operation. To export, just run

```bash
wrapp export vegetation_pack 20230623 --repo omniverse://dev.nvidia.com
```

This will download everything to your computer and produce an uncompressed tar file called `vegetation_pack.20230623.tar`. You can specify an alternative output file name or path with the `--output` option.

You can also specify a catalog to export using `export --catalog`, e.g.

```bash
wrapp export vegetation_pack 20230505 --catalog vegetation.json
```

This allows creating tar files and packages from arbitrary sources, e.g. data hosted on S3 or Azure.
If you plan on importing the data later using the `wrapp import` command, consider using the `--dedup` switch to avoid downloading and storing the same content several times in the tar file.

# Import

As you might have guessed, an exported package can also be imported again. To do that, run

```bash
wrapp import vegetation_pack.20230623.tar --repo omniverse://staging.nvidia.com
```

to import the package into the .packages folder on the specified receiving repository.
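To illustrate what a `--dedup` style export buys, here is a content-addressed packing sketch (illustrative only — the actual layout of wrapp's tar files is not documented here): identical file contents are hashed and stored once, with a manifest mapping archive names back to blobs.

```python
import hashlib
import io
import tarfile
import tempfile

def pack_dedup(files, out_path):
    """Pack {name: bytes} into a tar, storing each unique content blob once
    under its SHA-256 digest, plus a manifest mapping names to digests.
    Returns the number of unique blobs actually stored."""
    blobs, manifest = {}, {}
    for name, data in files.items():
        digest = hashlib.sha256(data).hexdigest()
        blobs[digest] = data              # duplicate contents collapse here
        manifest[name] = digest
    with tarfile.open(out_path, "w") as tar:
        for digest, data in blobs.items():
            info = tarfile.TarInfo(f"blobs/{digest}")
            info.size = len(data)
            tar.addfile(info, io.BytesIO(data))
        meta = "".join(f"{n} {d}\n" for n, d in manifest.items()).encode()
        info = tarfile.TarInfo("manifest.txt")
        info.size = len(meta)
        tar.addfile(info, io.BytesIO(meta))
    return len(blobs)

# Two of the three files share content, so only two blobs are written.
with tempfile.NamedTemporaryFile(suffix=".tar") as tmp:
    stored = pack_dedup({"a.usd": b"same", "b.usd": b"same", "c.usd": b"other"}, tmp.name)
assert stored == 2
```

On import, the manifest would be walked in reverse to materialize each name from its stored blob.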
# carb::blast::Blast

Defined in [Blast.h](file_Blast.h)

## Destructible Authoring Commands

```cpp
bool combinePrims(
    const char** paths,
    size_t numPaths,
    float defaultContactThreshold,
    const carb::blast::DamageParameters& damageParameters,
    float defaultMaxContactImpulse);
```

**Main entry point to combine existing prims into a single destructible.**

**Param paths** - **[in]** Full USD paths to prims that should be combined.

**Param numPaths** - **[in]** How many prims are in the paths array.

**Param defaultContactThreshold** - **[in]** How hard the prim needs to be hit to register damage during simulation.

**Param damageParameters** - **[in]** See DamageParameters description.

**Param defaultMaxContactImpulse** - **[in]** How much force can be used to push other prims away during a collision. For kinematic prims only; used to allow heavy objects to continue moving through brittle destructible prims.

**Return** - true iff the prims were combined successfully.

### fracturePrims(const char** paths, size_t numPaths, const char* defaultInteriorMaterial, uint32_t numVoronoiSites, float defaultContactThreshold, DamageParameters* damageParameters)

Main entry point to fracture an existing prim.

**Param paths** [in] Full USD path(s) to prim(s) that should be fractured. They need to all be part of the same destructible if there is more than one.

**Param numPaths** [in] How many prims are in the paths array.

**Param defaultInteriorMaterial** [in] Material to set on newly created interior faces. (Ignored when re-fracturing and an existing interior material is found.)

**Param numVoronoiSites** [in] How many pieces to split the prim into.

**Param defaultContactThreshold** [in] How hard the prim needs to be hit to register damage during simulation.

**Param damageParameters** [in] See DamageParameters description.

**Param defaultMaxContactImpulse** [in] How much force can be used to push other prims away during a collision.
For kinematic prims only; used to allow heavy objects to continue moving through brittle destructible prims.

**Param interiorUvScale** [in] Scale to apply to the UV frame when mapping to interior face vertices.

**Return** path to the new prim if the source prim was fractured successfully, nullptr otherwise.

Set the random number generator seed for fracture operations.

**Param seed** [in] The new seed.

Reset the Blast data (partial or full hierarchy) starting at the given path. The destructible will be rebuilt with only appropriate data remaining.

**Param path** [in] The path to reset.

- **Param path** - **[in]** The path to a chunk, instance, or base destructible prim.
- **Return** - true iff the operation could be performed on the prim at the given path.

- **Param path** - **[in]** The USD path of the blast container.
- **Param defaultMaxContactImpulse** - **[in]** Controls how much force physics can use to stop bodies from penetrating.
- **Param relativePadding** - **[in]** A relative amount to grow chunk bounds when calculating world attachment.
- **Return** - true if the destructible's NvBlastAsset was modified (or if path == NULL).

- **Param path** - **[in]** The USD path of the blast container.
- **Return** - true if the destructible's NvBlastAsset was modified (or if path == NULL).

Recalculates the areas of bonds. This may be used when a destructible is scaled.

**Param path** - **[in]** Path to the chunk, instance, or base destructible prim.

**Return** - true iff the operation was successful.

Finds all children of the chunks in the given paths, and sets kit's selection set to the paths of those children.

**Param paths** - **[in]** Full USD path(s) to chunks.

**Param numPaths** - **[in]** How many paths are in the paths array.

**Return** - true iff the operation was successful.

### Function: selectParent

Finds all parents of the chunks in the given paths, and sets kit's selection set to the paths of those parents.
**Parameters:**

- **paths [in]** - Full USD path(s) to chunks.
- **numPaths [in]** - How many paths are in the paths array.

**Return:**

- true iff the operation was successful.

### Function: selectSource

Finds all source meshes for chunks in the given paths, and sets kit's selection set to the paths of those meshes.

**Parameters:**

- **paths [in]** - Full USD path(s) to chunks.
- **numPaths [in]** - How many paths are in the paths array.

**Return:**

- true iff the operation was successful.

### Function: setInteriorMaterial

Sets the material for the interior facets of the chunks at the given paths.

**Parameters:**

- **paths [in]** - Full USD path(s) to chunks.
- **numPaths [in]** - How many paths are in the paths array.
- **interiorMaterial [in]** - Path to the prim holding the material to use for the interior facets.

**Return:**

- true iff the operation was successful.

### Get Interior Material

Gets the material for the interior facets of the chunks at the given paths.

**Parameters:**

- **paths [in]** - Full USD path(s) to chunks.
- **numPaths [in]** - How many paths are in the paths array.

**Return:**

- the material path if all meshes found at the given paths have the same interior material. If more than one interior material is found among the meshes, the empty string ("") is returned. If no interior material is found, nullptr is returned.

### Recalculate Interior UVs

Recalculates UV coordinates for the interior facets of chunk meshes based upon the UV scale factor given. If the path given is a chunk, UVs will be recalculated for the chunk's meshes. If the path is an instance or base prim, all chunk meshes will have their interior facets' UVs recalculated.

**Parameters:**

- **path [in]** - Path to the chunk, instance, or base destructible prim.
#### Param interiorUvScale - **[in]** the scale to use to calculate UV coordinates. A value of 1 will cause the texture to map to a region in space roughly the size of the whole destructible’s largest width. #### Return - true iff the operation was successful. ```cpp void createDestructibleInstance( const char *path, const DamageParameters *damageParameters, float defaultContactThreshold, float defaultMaxContactImpulse ) ``` Creates a destructible instance with default values from the given destructible base. **Parameters:** - **path [in]** - Path to the destructible base to instance. - **damageParameters [in]** - The damage characteristics to assign to the instance (see DamageParameters). - **defaultContactThreshold [in]** - Rigid body parameter to apply to actors generated by the instance. The minimum impulse required for a rigid body to generate a contact event, needed for impact damage. - **defaultMaxContactImpulse [in]** - Rigid body parameter to apply to actors generated by the instance. The maximum impulse that a contact constraint on a kinematic rigid body can impart on a colliding body. --- ```cpp void setSimulationParams( int32_t maxNewActorsPerFrame ) ``` Sets the maximum number of actors which will be generated by destruction each simulation frame. **Parameters:** - **maxNewActorsPerFrame [in]** - The maximum number of actors generated per frame. ```cpp void createDamageEvent(const char *hitPrimPath, DamageEvent *damageEvents, size_t numDamageEvents) ``` Create a destruction event during simulation. **Parameters:** - **hitPrimPath [in]** - The full path to the prim to be damaged (may be a blast actor prim or its collision shape). - **damageEvents [in]** - An array of DamageEvent structs describing the damage to be applied. - **numDamageEvents [in]** - The size of the damageEvents array. --- ```cpp void setExplodeViewRadius(const char *path, float radius) ``` Set the cached explode view radius for the destructible prim associated with the given path. 
**Parameters:**

- **path [in]** - Full USD path to a destructible instance.
- **radius [in]** - The distance to move apart the instance's rendered chunks.

Gives the cached explode view radius for the destructible instances associated with the given paths, if the cached value for all instances is the same.

**Param paths [in]** Array of USD paths to destructible instances.

**Param numPaths [in]** The length of the paths array.

**Return** The cached explode view radius for all valid destructible instances at the given paths, if that value is the same for all instances. If more than one radius is found, this function returns -1.0f. If no valid instances are found, this function returns 0.0f.

Calculate the maximum depth for all chunks in the destructible prims associated with the given paths.

**Param paths [in]** Array of USD paths to destructible prims.

**Param numPaths [in]** The length of the paths array.

**Return** the maximum chunk depth for all destructibles associated with the given paths. Returns 0 if no destructibles are found.

### Calculates what the view depth should be, factoring in the internal override if set.

- **Param paths [in]** - Array of USD paths to destructible prims.
- **Param numPaths [in]** - The length of the paths array.

### Set the view depth for explode view functionality.

- **Param paths [in]** - Array of USD paths to destructible prims.
- **Param numPaths [in]** - The length of the paths array.
- **Param depth [in]** - Either a string representation of the numerical depth value, or "Leaves" to view leaf chunks.

### Set debug visualization info.

- **Param mode [in]** - The debug visualization mode.
- **Param value [in]** - The value associated with the debug visualization mode.

### Set the debug visualization mode & type.

If mode != debugVisNone, an anonymous USD layer is created which overrides the render meshes for blast objects which are being visualized.
**Param mode** [in] Supported modes: "debugVisNone", "debugVisSelected", "debugVisAll"

**Param type** [in] Supported types: "debugVisSupportGraph", "debugVisMaxStressGraph", "debugVisCompressionGraph", "debugVisTensionGraph", "debugVisShearGraph", "debugVisBondPatches"

**Return** true iff a valid mode is selected.

### Debug Damage Functions

```cpp
void (*setDebugDamageParams)(float amount, float impulse, float radius)
```

Set parameters for the debug damage tool in kit. This is applied using Shift + B + (Left Mouse). A ray is cast from the camera position through the screen point of the mouse cursor and intersected with scene geometry. The intersection point is used to find nearby destructibles to damage.

**Param amount** [in] The base damage to be applied to each destructible, as an acceleration in m/s^2.

**Param impulse** [in] An impulse to apply to rigid bodies within the given radius, in kg*m/s. (This applies to non-destructible rigid bodies too.)

**Param radius** [in] The distance in meters from the ray hit point to search for rigid bodies to affect with this function.

```cpp
void (*applyDebugDamage)(const carb::Float3* worldPosition, const carb::Float3* worldDirection)
```

### Apply Debug Damage

Apply debug damage at the position given, in the direction
given. The damage parameters set by setDebugDamageParams will be used.

#### Parameters

- **Param worldPosition [in]** The world position at which to apply debug damage.
- **Param worldDirection [in]** The world direction of the applied damage.

### Notice Handler Functions

These can be called at any time to enable or disable notice handler monitoring. When enabled, use BlastUsdMonitorNoticeEvents to catch unbuffered Usd/Sdf commands. It will be automatically cleaned up on system shutdown if enabled.

#### Functions

- **blastUsdEnableNoticeHandlerMonitor()**
- **blastUsdDisableNoticeHandlerMonitor()**

### Destructible Path Utilities

These functions find destructible base or instance prims from any associated prim path.

#### Functions

- **getDestructibleBasePath(const char* path)**
  - **Param path [in]** Any path associated with a destructible base prim.
  - **Return** the destructible prim's path if found, or nullptr otherwise.

## getDestructibleInstancePath

```cpp
const char* (*getDestructibleInstancePath)(const char* path)
```

**Param path** - **[in]** Any path associated with a destructible instance prim.

**Return** - the destructible prim's path if found, or nullptr otherwise.

## Blast SDK Cache

This function pushes the Blast SDK data that is used during simulation back to USD so it can be saved and later restored in the same state. This is also the state that will be restored to when the simulation stops.

```cpp
void (*blastCachePushBinaryDataToUSD)()
```

## Blast Stress

This function modifies settings used to drive stress calculations during simulation.

**Param path** - **[in]** Any path associated with a destructible instance prim.

**Param gravityEnabled** - **[in]** Controls if gravity should be applied to stress simulation of the destructible instance.

**Param rotationEnabled** - **[in]** Controls if rotational acceleration should be applied to stress simulation of the destructible instance.
**Param residualForceMultiplier** - **[in]** Multiplies the residual forces on bodies after connecting bonds break.

**Param settings** - **[in]** Values used to control the stress solver.

**Return** - true if stress settings were updated, false otherwise.

```cpp
bool (*blastStressUpdateSettings)(
    const char* path,
    bool gravityEnabled,
    bool rotationEnabled,
    float residualForceMultiplier,
    const StressSolverSettings& settings)
```
# [201.1.0-dev.8] - 2024-05-06 ## Updated - OM-123014 - Converters now take absolute usd file output path as input. # [201.1.0-dev.7] - 2024-04-30 ## Updated - OM-122942 - Refactored to share code in omni.kit.converter.common # [201.1.0-dev.6] - 2024-04-30 ## Updated - OM-124420 - Renamed cad_core to hoops_core # [201.1.0-dev.5] - 2024-04-24 ## Updated - OM-116478 - Updated set_app_data to include client name / version # [201.1.0-dev.4] - 2024-04-22 ## Updated - OM-123014 - Run scene optimizer as post-conversion task # [201.1.0-dev.3] - 2024-04-19 ## Updated - OM-123008 - Remove converter name from method signature # [201.1.0-dev.2] - 2024-04-16 ## Updated - OM-123571 - Update extension.toml to lock extension to Kit SDK version being used # [201.1.0-dev.1] - 2024-04-09 ## Updated - OM-121673 - Update to 201.1.0, move connect-sdk and scene optimizer to omni.kit.converter.common # [201.0.0-dev.10] - 2024-03-04 ## Updated - OM-121276 - Update to kit-kernel 106.0 # [201.0.0-dev.9] - 2024-02-23 ## Updated - OM-121276 - Update to kit-kernel 106.0 # [201.0.0-dev.8] - 2024-02-12 ## Fixed - **OM-109219** - Fix USD output path of DGN converter # [201.0.0-dev.7] - 2024-02-09 ## Fixed - **OM-118646** - Use same kit-kernel version as Connect SDK # [201.0.0-dev.6] - 2024-01-31 ## Updated - **OM-118567** - Updated keywords for improving searchability for CAD Converters # [201.0.0-dev.5] - 2024-02-08 ## Updated - **OM-118646** - Update to DGN converter that uses Connect SDK # [201.0.0-dev.4] - 2024-02-06 ## Updated - **OM-109082** - Added error when no USD file was created # [201.0.0-dev.3] - 2024-01-12 ## Updated - **OMFP-118316** - Update Connect SDK to release 0.6.0 # [201.0.0-dev.2] - 2023-12-12 ## Fixed - **OMFP-116513** - fix etm - use explicit pre-release # [201.0.0-dev.1] - 2023-12-01 ## Fixed - **OM-115742** - etm-failure-fix and merge release to master # [200.1.1-rc.6] - 2023-12-11 ## Updated - **OM-116923**: Documentation for using the DGN Converter through the service 
extension.

# [200.1.1-rc.5] - 2023-11-28

## Fixed
- Hardcode `--ext-folder`.

# [200.1.1-rc.4] - 2023-11-28

## Fixed
- Update search path for `--ext-folder`.
- Add `--allow-root`.

# [200.1.1-rc.3] - 2023-11-28

## Fixed
- Typo in progress facility

# [200.1.1-rc.2] - 2023-11-22

## Changed
- **OMFP-3960** - Update dependency version in extension.toml

# [200.1.1-rc.1] - 2023-11-10

## Changed
- **OM-114631** - Setup DGN converter to run as a subprocess

# [200.1.1-rc.0] - 2023-11-13

## Changed
- OM-114367 - Updated CAD Converter deps. Bump all extension versions

# [0.1.9-rc.2] - 2023-09-06

## Changed
- Updated tests with dgn_core service
- Added default json file

# [0.1.7] - 2023-04-02

## Changed
- Update omni.kit.converter.cad_core
- Added response model

# [0.1.6] - 2023-02-24

## Changed
- Update omni.kit.converter.cad_core deps for flag (retry) + Bump version

# [0.1.5] - 2023-02-23

## Changed
- Update omni.kit.converter.cad_core deps for flag + Bump version

# [0.1.4] - 2023-02-18

## Changed
- set exact version to true

# [0.1.3] - 2023-02-18

## Changed
- Added version lock to omni.kit.converter.cad_core v0.1.0-alpha (headless)

# [0.1.2] - 2023-02-18

## Changed
- Added version lock to omni.kit.converter.cad_core v0.1.1-alpha (headless)

# [0.1.1] - 2023-02-18

## Changed
- Added version lock to omni.kit.converter.cad_core v0.1.0-alpha (headless)

# [0.1.0] - 2023-02-14

## Added
- Added initial version of the Extension.
# Develop a Project After creating a new Project, the development phase begins. In this phase, you configure and use an assortment of tools and extensions, along with automated documentation features to fit the needs of your project. ## Sidebar As a reminder, you can find additional documentation in the left-hand menu, such as: > - [Kit Manual](http://docs.omniverse.nvidia.com/kit/docs/kit-manual/latest/guide/kit_overview.html) for extensive information about programming using the Kit SDK. > - [Extensions](../../../../extensions/latest/index.html) for an extensive list of extensions you can include as dependencies in your project. Having followed the methods outlined in the [Create](../create/create.html) section, you’ve produced configuration files and established a folder setup. Now you will transform this set of default files to enable new functionality. This stage of Omniverse Project Development is undeniably the most in-depth, offering numerous paths to achieve desired outcomes as a developer. In this section, we’ll discuss tools and resources for project development, be it crafting an [Extension](../../common/glossary-of-terms.html#term-Extension), [Application](../../common/glossary-of-terms.html#term-Application), [Service](../../common/glossary-of-terms.html#term-Service), or [Connector](../../common/glossary-of-terms.html#term-Connector). ## Configure TOML Files Both Omniverse Applications and Extensions fundamentally rely on a configuration file in [TOML](../../common/glossary-of-terms.html#term-TOML) format. This file dictates dependencies and settings that the Kit SDK loads and executes. Through this mechanism, Applications can include Extensions, which may further depend on other Extensions, forming a dependency tree. For details on constructing this tree and the corresponding settings for each Extension, it’s essential to understand the specific configuration files. Applications utilize the .kit file, while Extensions are defined using .toml files. 
For more on each type of configuration file, please refer to the tabs above. ### Extension (extension.toml) Requirements: - Understanding [TOML](../../common/glossary-of-terms.html#term-TOML) file format. - Text Editor ([VS Code](../../common/glossary-of-terms.html#term-VS-Code) recommended) Extensions can contain many types of assets, such as images, python files, data files, C++ code/header files, documentation, and more. However, one thing all Extensions have in common is the **extension.toml** file. Extension.toml should be located in the `./config` folder of your project so that it can be found by various script tools. Here is an example extension.toml file that can be found in the Advanced Template Repository: ```toml [package] version = "1.0.0" title = "Simple UI Extension Template" description = "The simplest python extension example. Use it as a starting point for your extensions." # One of categories for UI. category = "Example" # Keywords for the extension keywords = ["kit", "example"] # Path (relative to the root) or content of readme markdown file for UI. readme = "docs/README.md" # Path (relative to the root) of changelog changelog = "docs/CHANGELOG.md" # URL of the extension source repository. repository = "https://github.com/NVIDIA-Omniverse/kit-project-template" # Icon to show in the extension manager icon = "data/icon.png" # Preview to show in the extension manager preview_image = "data/preview.png" [dependencies] "omni.kit.uiapp" = {} [[python.module]] name = "my.hello.world" ``` Here we will break this down… ```toml [package] version = "1.0.0" ``` This sets the version of your extension. It is critical that this version is set any time you produce a new release of your extension, as this version is most often used to differentiate releases of extensions in registries and databases. As a best practice, it is useful to maintain semantic versioning. It is also best practice to ensure that you document changes you have made to your code. 
See the Documentation section for more information.

```toml
title = "Simple UI Extension Template"
description = "The simplest python extension example. Use it as a starting point for your extensions."
category = "Example"
keywords = ["kit", "example"]
```

The `title` and `description` can be used in registries and publishing destinations to allow users more information on what your extension is used for. The `category` sets an overall filter for where this extension should appear in various UIs. The `keywords` property lists an array of searchable, filterable attributes for this extension.

```toml
[dependencies]
"omni.kit.uiapp" = {}
```

This section is critical to the development of all aspects of your project. The dependencies section in your toml files specifies which extensions are required. As a best practice, you should ensure that you use the smallest list of dependencies that still accomplishes your goals. When setting dependencies for an extension, ensure you only add extensions that are dependencies of that extension.

The brackets `{}` in the dependency line allow for parameters such as the following:

- `order=[ordernum]` allows you to define by signed integer which order the dependencies are loaded. Lower integers are loaded first. (e.g. `order=5000`)
- `version=["version ID"]` lets you specify which version of an extension is loaded. (e.g. `version="1.0.1"`)
- `exact=true` (default is false) - If set to true, the parser will use only an exact match for the version, not just a partial match.

The `[[python.module]]` section should contain one or more named python modules that are used by the extension. The name is expected to also match a folder structure within the extension path. In this example, the extension named `my.hello.world` would have the following path: `my/hello/world`.

These are the minimum required settings for extensions and apps.
We will discuss more settings later in the Dev Guide, and you can find plenty of examples of these configuration files in the Developer Reference sections of the menu.

### Application (.kit)

Requirements:

- Understanding TOML file format.
- Text Editor (VS Code recommended)

Applications are not much different than extensions. It is assumed that an application is the "root" of a dependency tree, and it often has settings related to the behavior of a particular workflow. Regardless, an App has the same TOML file configuration as extensions, but an App's TOML file is called a `.kit` file.

`.kit` files should be located in the `./source/apps` folder of your project so that they can be found by various script tools.

Here is an example kit file that provides some of the minimum settings you'll need. Additional settings and options can be found later:

```toml
[package]
version = "1.0.0"
title = "My Minimum App"
description = "A very simple app."

# One of categories for UI.
category = "Example"

# Keywords for the extension
keywords = ["kit", "example"]

# Path (relative to the root) or content of readme markdown file for UI.
readme = "docs/README.md"

# Path (relative to the root) of changelog
changelog = "docs/CHANGELOG.md"

# URL of the extension source repository.
repository = "https://github.com/NVIDIA-Omniverse/kit-project-template"

# Icon to show in the extension manager
icon = "data/icon.png"

# Preview to show in the extension manager
preview_image = "data/preview.png"

[dependencies]
"omni.kit.uiapp" = {}
```

Here we will break this down…

```toml
[package]
version = "1.0.0"
```

This sets the version of your extension or app. It is critical that this version is set any time you produce a new release, as this version is most often used to differentiate releases of extensions/apps in registries and databases. As a best practice, it is useful to maintain semantic versioning.
It is also best practice to document the changes you have made, so your docs show each version you've released.

The `title` and `description` can be used in registries and publishing destinations to allow users more information on what your app or extension is used for.

The `category` sets an overall filter for where this project should appear in various UIs.

The `keywords` property lists an array of searchable, filterable attributes for this extension.

```toml
[dependencies]
"omni.kit.uiapp" = {}
```

This section is critical to the development of all aspects of your project. The dependencies section in your toml files specifies which extensions are to be used by the app. As a best practice, you should ensure that you use the smallest list of dependencies that still accomplishes your goals. And, in extensions especially, you should only add dependencies which THAT extension requires.

The brackets `{}` in the dependency line allow for parameters such as the following:

- `order=[ordernum]` allows you to define by signed integer which order the dependencies are loaded. Lower integers are loaded first. (e.g. `order=5000`)
- `version=["version ID"]` lets you specify which version of an extension is loaded. (e.g. `version="1.0.1"`)
- `exact=true` (default is false) - If set to true, the parser will use only an exact match for the version, not just a partial match.

These are the minimum required settings for Apps. We will discuss more settings later in the Dev Guide, and you can find plenty of examples of these configuration files in the Developer Reference sections of the menu.

## Available Extensions

Virtually all user-facing elements in an Omniverse Application, such as Omniverse USD Composer or Omniverse USD Presenter, are created using Extensions. The very same extensions used in Omniverse Applications are also available to you for your own development.
The number of extensions provided by both the Community and NVIDIA is continually growing to support new features and use cases. However, a core set of extensions is provided alongside the Omniverse Kit SDK. These ensure basic functionality for your Extensions and Applications, including:

- Omniverse UI Framework: A UI toolkit for creating beautiful and flexible graphical user interfaces within extensions.
- Omni Kit Actions Core: A framework for creating, registering, and discovering programmable Actions in Omniverse.
- Omni Scene UI: Provides tooling to create great-looking 3d manipulators and 3d helpers with as little code as possible.
- And more.

A list of available Extensions can be found via API Search.

## Documentation

If you are developing your project using Repo Tools, you also have the ability to create documentation from source files to be included in your build. This powerful feature helps automate html-based documentation from human-readable .md files.

You can refer to the `repo docs -h` command to see more information on the docs tool and its parameters. By running

```
repo docs
```

you will generate a set of files in the `_build/docs/[project_name]/latest/` folder which represent the html version of your source documentation. The "home page" for your documentation will be the `index.html` file in that folder. You can find the latest information by reading the Omniverse Documentation System.

> **Note**
> You may find that when running `repo docs` you receive an error message instead of the build proceeding. If this is the case, it is likely that you are either using a project that does not contain the "docs" tool OR that your `repo.toml` file is not set up correctly. Please refer to the repo tools documentation linked above for more information.

## Additional Documentation

- Script Editor
- Code Samples
- Repo Tools
DeveloperReference.md
# Developer Reference

OmniGraph development can be done by users with a wide variety of programming proficiency. A basic familiarity with the Python scripting language is enough to get you started, and if you know how to create optimized CUDA code for high-throughput machine learning data analysis, we've got you covered there too.

You can start off with some basic [Naming Conventions](Conventions.html#omnigraph-naming-conventions) that let you easily recognize the various pieces of OmniGraph. While you are free to set up your extension in any way you wish, if you follow the [Directory Structure](DirectoryStructure.html#omnigraph-directory-structure) then some LUA utilities will help keep your `premake5.lua` file small.

## Working In Python

OmniGraph supports development of nodes implemented in Python, Commands that modify the graph in an undoable way, Python bindings to our C++ ABI, and a general Python scripting API. See the details in the [Python Nodes and Scripts](PythonScripting.html#omnigraph-python-scripting) page. See also the [OGN Code Samples - Python](ogn/ogn_code_samples_python.html#ogn-code-samples-py) for examples of how to access different types of data within a node.

## Working In C++

OmniGraph supports development of nodes implemented in C++, as well as an extensive ABI for accessing data at the low level. See also the [OGN Code Samples - C++](ogn/ogn_code_samples_cpp.html#ogn-code-samples-cpp) for examples of how to access different types of data within a node.

## Implementation Details

The architecture and some of the basic components of OmniGraph can be seen in the [OmniGraph Architecture](Architecture.html#omnigraph-architecture) description.

OmniGraph uses USD as its persistent storage for compute parameters and results. The details of how this USD data corresponds to OmniGraph data can be seen in the [OmniGraph and USD](Usd.html#omnigraph-and-usd) page.
All of the details regarding the .ogn format can be found in the [Node Generator](ogn/Overview.html#omnigraph-ogn) page.

# Action Graph

Action Graph is a type of OmniGraph with unique features that can be used in custom nodes. See Action Code Samples - C++ and Action Graph Code Samples - Python for code examples.

# Compound Nodes

See Compound Nodes for details about compound nodes; specifically, how they are represented in USD and how to work with them using Python.
dgn-converter-config-file-inputs_Overview.md
# Overview — omni.kit.converter.dgn_core 201.1.0-dev.8 documentation

## Overview

`omni.kit.converter.dgn_core` uses the ODA Kernel and Drawings SDKs to convert the DGN data format to USD. When this extension loads, it will register itself with the CAD Converter service (`omni.services.convert.cad`) if it is available.

The resulting USD file from the DGN Converter prepends the names of DGN levels to prims. This allows users to quickly search for geometry belonging to the converted levels they want.

## DGN Converter Config File Inputs

Conversion options are configured by supplying a JSON file. Below are the available configuration options.

### JSON Converter Settings

**Format**: "setting name" : default value

```json
"sConfigFilePath" : "C:/test/sample_config.json"
```

Configuration file path.

```json
"dSurfaceTolerance" : 0.2
```

Sets the maximum distance (surface tolerance) between the tessellated mesh and the source surface. The value is limited to the range [0, 1].

```json
"iTessLOD" : 2
```

Preset level of detail (LOD) values to provide to the ODA API for converting solids and surfaces into tessellated meshes.

- `0` = ExtraLow, SurfaceTolerance = 1.0
- `1` = Low, SurfaceTolerance = 0.1
- `2` = Medium, SurfaceTolerance = 0.01
- `3` = High, SurfaceTolerance = 0.001
- `4` = ExtraHigh, SurfaceTolerance = 0.0001

```json
"bOptimize" : true
```

Flag to invoke USD scene optimization.

```json
"bConvertHidden" : true
```

If true, convert hidden DGN elements but set them to invisible; otherwise, skip hidden elements.

```json
"bHideLevelsByList" : true
```

Flag to hide DGN levels by name.

```json
"hiddenLevels" : ["default", "customLayer1"]
```

Array of level names containing the names of the custom DGN levels to hide after conversion (when the flag above is true).

```json
"bImportAttributesByList" : true
```

Flag to export DGN custom properties and convert them to USD attributes.
```json
"attributes" : [
    { "name" : "myAttribute1", "converted_name" : "myAttribute1_foobar" },
    { "name" : "myAttribute2", "converted_name" : "myAttribute2_foobar" }
],
```

Array of attribute objects that contain the name of the custom DGN property and the desired name for the converted USD attribute.

### Full `sample_config.json`:

```json
{
    "bOptimize" : true,
    "iTessLOD" : 2,
    "dSurfaceTolerance" : 0.2,
    "bConvertHidden" : true,
    "bHideLevelsByList" : true,
    "bImportAttributesByList" : true,
    "attributes" : [
        { "name" : "myAttribute1", "converted_name" : "myAttribute1_foobar" },
        { "name" : "myAttribute2", "converted_name" : "myAttribute2_foobar" }
    ],
    "hiddenLevels" : ["default", "customLayer1"]
}
```
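The types and ranges documented above can be checked before a conversion is launched. Below is a minimal sketch of such a check; the `validate_dgn_config` helper is ours for illustration and is not part of `omni.kit.converter.dgn_core`:

```python
# Validate a DGN converter config dict against the documented settings.
# `validate_dgn_config` is a hypothetical helper, not part of the extension's API.

# Surface tolerance implied by each iTessLOD preset (from the table above).
LOD_TOLERANCES = {0: 1.0, 1: 0.1, 2: 0.01, 3: 0.001, 4: 0.0001}

def validate_dgn_config(config: dict) -> list:
    """Return a list of human-readable problems; an empty list means the config looks valid."""
    problems = []
    tol = config.get("dSurfaceTolerance", 0.2)
    if not (0.0 <= tol <= 1.0):
        problems.append(f"dSurfaceTolerance {tol} outside [0, 1]")
    lod = config.get("iTessLOD", 2)
    if lod not in LOD_TOLERANCES:
        problems.append(f"iTessLOD {lod} not one of {sorted(LOD_TOLERANCES)}")
    for key in ("bOptimize", "bConvertHidden", "bHideLevelsByList", "bImportAttributesByList"):
        if key in config and not isinstance(config[key], bool):
            problems.append(f"{key} must be a boolean")
    if config.get("bHideLevelsByList") and not config.get("hiddenLevels"):
        problems.append("bHideLevelsByList is true but hiddenLevels is empty")
    return problems

sample = {
    "dSurfaceTolerance": 0.2,
    "iTessLOD": 2,
    "bHideLevelsByList": True,
    "hiddenLevels": ["default", "customLayer1"],
}
assert validate_dgn_config(sample) == []
```

A check like this is cheap insurance: an out-of-range `dSurfaceTolerance` or unknown `iTessLOD` is easier to diagnose here than in converter output.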
dgn-converter_Overview.md
# DGN Converter

## Overview

The DGN Converter extension enables conversion of DGN files to USD. USD Explorer includes the DGN Converter extension, enabled by default.

## Supported CAD file formats

The following file formats are supported by DGN Converter:

- DGN (`*.DGN`)

> Note: The file formats *.fbx, *.obj, *.gltf, *.glb, *.lxo, *.md5, *.e57 and *.pts are supported by Asset Converter and are also available by default.

> Note: If expert tools such as Creo, Revit or Alias are installed, we recommend using the corresponding connectors. These provide more extensive options for conversion.

> Note: CAD Assemblies may not work when converting files from Nucleus. When converting assemblies with external references, we recommend either working with local files or using Omniverse Drive.

## Converter Options

This section covers options for configuring conversions of DGN file formats to USD.

### Surface Tolerance

This is the maximum distance between the tessellated mesh and the original solid/surface; refer to the Open Design Alliance's documentation for details. The more precise the value (e.g., 0.00001), the more triangles are generated for the mesh. A field is provided when selecting a DGN file. The minimum and maximum values are 0 and 1. If a value of 0 is provided, the surface tolerance of an object is calculated as the diagonal of its extents multiplied by 0.025.

## Related Extensions

These related extensions make up the DGN Converter. This extension provides import tasks to them through their interfaces. The DGN Core extension is launched and provided configuration options through a subprocess, to avoid library conflicts with those loaded by the other converters.

### Core Converter

- DGN Core: `omni.kit.converter.dgn_core:Overview`

### Services

- CAD Converter Service: `omni.services.convert.cad:Overview`

### Utils

- Converter Common: `omni.kit.converter.common:Overview`
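The zero-tolerance fallback described under Surface Tolerance above can be sketched as follows. The helper name is ours, not the extension's API, and the clamping to [0, 1] reflects the documented field limits:

```python
import math

def effective_surface_tolerance(requested: float, extents_min, extents_max) -> float:
    """Sketch of the documented behavior: a requested tolerance of 0 falls back to
    2.5% of the diagonal of the object's extents; otherwise the value is kept
    within the documented [0, 1] range."""
    if requested == 0.0:
        # Diagonal of the axis-aligned extents box.
        diagonal = math.dist(extents_min, extents_max)
        return diagonal * 0.025
    return min(max(requested, 0.0), 1.0)
```

For example, an object with extents from (0, 0, 0) to (3, 4, 0) has a diagonal of 5, so a requested tolerance of 0 yields an effective tolerance of 0.125.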
dictionary_settings.md
# Dictionaries and Settings

Settings is a generalized subsystem designed to provide a simple-to-use interface to Kit's various subsystems, which can be automated, enumerated, serialized and so on. It is accessible from both C++ and scripting bindings such as Python.

`carb.settings` is a Python namespace (and, coincidentally, a C++ plugin name) for the Settings subsystem. Settings uses `carb.dictionary` under the hood, and is effectively a singleton dictionary with a specialized API to streamline access.

`carb.dictionary` is the Dictionary subsystem, which provides functionality for working with the data structure known as a dictionary, associative array, map, and so on.

## Dictionaries

For a low-level description of the design and general principles, please refer to the Carbonite documentation for the `carb.dictionary` interfaces.

## Settings

As mentioned above, the settings subsystem uses `carb.dictionary` under the hood; to learn more about the low-level design and general principles, please refer to the Carbonite documentation.

On a higher level, there are several important principles and guidelines for using the settings infrastructure, and best practices for using settings within Omniverse Kit.

### Default values

Default values need to be set for settings at the initialization stage of the plugin, and in the extension configuration file. A rule of thumb is that no setting should be read when there is no value for it. As always, there are exceptions to this rule, but in the vast majority of cases, settings should be read only after the setting owner sets a default value for that particular setting.

### Notifications

To ensure optimal performance, it is recommended to use notifications instead of directly polling for settings, to avoid the cost of accessing the settings backend when the value hasn't changed.
**DON'T**: This is an example of polling in a tight loop, and it is **not recommended** to do things this way:

```c++
while (m_settings->get<bool>("/snippet/app/isRunning"))
{
    doStuff();
    // Stop the loop via settings change
    m_settings->set("/snippet/app/isRunning", false);
}
```

**DO**: Instead, use the notification APIs, and the available helpers that simplify the notification subscription code, to reduce the overhead significantly:

```c++
carb::settings::ThreadSafeLocalCache<bool> valueTracker;
valueTracker.startTracking("/snippet/app/isRunning");
```

```c++
while (valueTracker.get())
{
    doStuff();
    // Stop the loop via settings change
    m_settings->set("/snippet/app/isRunning", false);
}
valueTracker.stopTracking();
```

With a bool value, getting and setting the value is cheap, but for more complicated types (e.g. strings), the helper can mark and clear dirty state instead. In case a helper is not sufficient for the task at hand, it is always possible to use settings APIs such as `subscribeToNodeChangeEvents` / `subscribeToTreeChangeEvents` and `unsubscribeToChangeEvents` to achieve what's needed with greater flexibility.

## Settings structure

Settings are intended to be easily tweakable, serializable and human readable. One of the use cases is automatic UI creation from a settings snapshot to help users view and tweak settings at run time.

**DO**: Use simple and readable settings like:

```
/app/rendering/enabled
```

**DON'T**: Use internal settings that don't make sense to anyone outside the core developer group, things like:

```c++
/component/modelArray/0=23463214
/component/modelArray/1=54636715
/component/modelArray/2=23543205
...
/component/modelArray/100=66587434
```

## Reacting to and consuming settings

Ideally, settings should be monitored for changes, and plugins/extensions should react to the changes accordingly.
But exceptions are possible, and in those cases the settings changes should still be monitored, and the user should be warned that the change in the setting is not going to affect the behavior of that particular system.

## Combining API and settings

Often, there are at least two ways to modify behavior: via a designated API function call, or via changing the corresponding setting. The question is how to reconcile the two approaches. One way to address this problem: API functions should only change settings, and the core logic should track settings changes and react to them. Never change the core logic value directly when a corresponding setting value is present. By adding this small detour through the settings subsystem from API calls, you can make sure that the value stored in the core logic and the corresponding setting value are never out of sync.
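The guideline above — API calls write to settings, core logic reacts to change notifications — can be illustrated with a small self-contained sketch. The `Settings` class here is a stand-in dictionary written for this example; the real `carb.settings` API differs:

```python
# Minimal stand-in for a settings subsystem with change notifications.
# This illustrates the pattern only; it is not the carb.settings API.
class Settings:
    def __init__(self):
        self._values = {}
        self._subscribers = {}  # path -> list of callbacks

    def set(self, path, value):
        old = self._values.get(path)
        self._values[path] = value
        if old != value:  # notify only on actual change
            for callback in self._subscribers.get(path, []):
                callback(value)

    def get(self, path, default=None):
        return self._values.get(path, default)

    def subscribe(self, path, callback):
        self._subscribers.setdefault(path, []).append(callback)


class Renderer:
    """Core logic: never flips its own flag directly; it tracks the setting."""
    def __init__(self, settings):
        self.enabled = False
        settings.subscribe("/app/rendering/enabled", self._on_changed)

    def _on_changed(self, value):
        self.enabled = bool(value)


settings = Settings()
renderer = Renderer(settings)

def api_set_rendering_enabled(enabled: bool):
    """Public API call: it only changes the setting; the core logic reacts."""
    settings.set("/app/rendering/enabled", enabled)

api_set_rendering_enabled(True)
assert renderer.enabled is True
```

Because the API function never touches `renderer.enabled` directly, the setting and the core-logic value cannot drift apart, whichever of the two entry points a caller uses.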
directories.md
# Directories - [8fa04669143f4cb0](#dir-0612118555fae677dced868d63781571) - [8fa04669143f4cb0/_build](#dir-8a2f7be843a233509bb1bc1ed4f4bc15) - [8fa04669143f4cb0/_build/target-deps](#dir-deb636f69a2bc2a7bceab692952225ef) - [8fa04669143f4cb0/_build/target-deps/hoops_exchange_cad_converter_release](#dir-1eb887b6b7b0977ac20ac43bf8332669) - [8fa04669143f4cb0/_build/target-deps/hoops_exchange_cad_converter_release/hoops_exchange_cad_converter](#dir-ec03f3037ed32d3c17f34b738179950d) - [8fa04669143f4cb0/_build/target-deps/hoops_exchange_cad_converter_release/hoops_exchange_cad_converter/include](#dir-36e3145d041d1326ad902f318aa968b8) - [8fa04669143f4cb0/_build/target-deps/hoops_exchange_cad_converter_release/hoops_exchange_cad_converter/include/hoops_reader](#dir-94d92ab70a1a4d71afe8628ac725ddc2)
DirectoryStructure.md
# Directory Structure

It is advantageous to consider nodes as a separate type of thing and structure your directories to make them easier to find. While it's not required in order to make the build work, it's recommended in order to keep the location of files consistent.

The standard Kit extension layout has these directories by default:

```
omni.my.feature/
    bindings/       Files related to Python bindings of your C++
    config/         extension.toml configuration file
    docs/           index.rst explaining your extension
    plugins/        C++ code used by your extension
    python/
        __init__.py
        extension.py = Imports of your bindings and commands, and a omni.ext.IExt object for startup/shutdown
    scripts/        Python code used by your extension
```

The contents of your `__init__.py` file should expose the parts of your Python code that you wish to make public, including some boilerplate to register your extension and its nodes. For example, if you have two scripts for general use in a `utility.py` file then your `__init__.py` file might look like this:

```python
"""Public interface for my.extension"""
from . import extension
from . import ogn
from .scripts.utility import my_first_useful_script
from .scripts.utility import my_second_useful_script
```

The C++ node files (OgnSomeNode.ogn and OgnSomeNode.cpp) will live in a top-level `nodes/` directory and the Python ones (OgnSomePythonNode.ogn and OgnSomePythonNode.py) go into a `python/nodes/` subdirectory:

```
omni.my.feature/
    bindings/
    config/
    docs/
    nodes/
        OgnSomeNode.ogn
        OgnSomeNode.cpp
    plugins/
    python/
        nodes/
            OgnSomePythonNode.ogn
            OgnSomePythonNode.py
```

If your extension has a large number of nodes you might also consider adding extra subdirectories to keep them together:

```
omni.my.feature/
    ...
    nodes/
        math/
            OgnMathSomeNode.ogn
            OgnMathSomeNode.cpp
        physics/
            OgnPhysicsSomeNode.ogn
            OgnPhysicsSomeNode.cpp
        utility/
            OgnUtilitySomeNode.ogn
            OgnUtilitySomeNode.cpp
    ...
```

**Tip**: Although any directory structure can be used, using this particular structure lets you take advantage of the predefined build project settings for OmniGraph nodes, and makes it easier to find files in both familiar and unfamiliar extensions.
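The `__init__.py` re-export pattern described above can be exercised with a small self-contained sketch. The package and function names here are placeholders invented for the example, not real extension modules:

```python
# Build a tiny package on disk mimicking the layout above, then import it to
# show that the names re-exported in __init__.py become the public interface.
import importlib
import sys
import tempfile
from pathlib import Path

root = Path(tempfile.mkdtemp())
pkg = root / "my_feature"          # stand-in for omni.my.feature
(pkg / "scripts").mkdir(parents=True)
(pkg / "scripts" / "__init__.py").write_text("")
(pkg / "scripts" / "utility.py").write_text(
    "def my_first_useful_script():\n    return 'first'\n"
)
# The package __init__.py re-exports the public script, as in the docs above.
(pkg / "__init__.py").write_text(
    '"""Public interface for my_feature"""\n'
    "from .scripts.utility import my_first_useful_script\n"
)

sys.path.insert(0, str(root))
my_feature = importlib.import_module("my_feature")

# Callers use the re-exported name without knowing the internal layout.
assert my_feature.my_first_useful_script() == "first"
```

The point of the pattern is that callers depend only on the package's top-level names, so the internal `scripts/` layout can be reorganized without breaking them.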
dir_8fa04669143f4cb0__build_target-deps.md
# 8fa04669143f4cb0/_build/target-deps ## Directories - [hoops_exchange_cad_converter_release](dir_8fa04669143f4cb0__build_target-deps_hoops_exchange_cad_converter_release.html#dir-1eb887b6b7b0977ac20ac43bf8332669)
dir_8fa04669143f4cb0__build_target-deps_hoops_exchange_cad_converter_release.md
# 8fa04669143f4cb0/_build/target-deps/hoops_exchange_cad_converter_release ## Directories - [hoops_exchange_cad_converter](dir_8fa04669143f4cb0__build_target-deps_hoops_exchange_cad_converter_release_hoops_exchange_cad_converter.html#dir-ec03f3037ed32d3c17f34b738179950d)
documentation_index.md
# Omniverse USD Resolver

This is a USD plugin that allows for working with files in Omniverse.

## Documentation

The latest documentation can be found at

## Getting

You can get the latest build from Packman. There are separate packages for each USD flavor, Python version, and platform. They are all named:

omni_usd_resolver.{usd_flavor}.{python_flavor}.{platform}

usd_flavor is one of:

- nv-20_08
- nv-21_11
- nv-22_05
- nv-22_11
- pxr-20_08
- pxr-21_08
- pxr-21_11
- 3dsmax-21_11
- 3dsmax-22_11
- 3dsmax-23_11
- maya-21_11
- maya-22_11
- maya-23_11
- (see generate_redist_deps.py for the full list)

python_flavor is one of:

- nopy
- py37
- py38
- py39
- py310

platform is one of:

- windows-x86_64
- linux-x86_64
- linux-aarch64

All packages use the same versioning scheme:

```
{major}.{minor}.{patch}
```

## USD & Client Library

The package includes `redist.packman.xml`, which points to the versions of USD and the Omniverse Client Library that this plugin was built against. You can include it in your own packman.xml file like this:

```xml
<project toolsVersion="5.0">
  <import path="../_build/target-deps/omni_usd_resolver/deps/redist.packman.xml" />
  <dependency name="usd_debug" linkPath="../_build/target-deps/usd/debug" />
  <dependency name="usd_release" linkPath="../_build/target-deps/usd/release" />
  <dependency name="omni_client_library" linkPath="../_build/target-deps/omni_client_library" />
</project>
```

# Initializing

You must either copy the omni_usd_resolver plugin to the default USD plugin location, or register the plugin location at application startup using `PXR_NS::PlugRegistry::GetInstance().RegisterPlugins`. Be sure to package both the library (.dll or .so) and the "plugInfo.json" file, and keep the folder structure the same for the "plugInfo.json" file. It should look like this:

- omni_usd_resolver.dll or omni_usd_resolver.so
- usd/omniverse/resources/plugInfo.json

If you use `RegisterPlugins`, provide it the path to the "resources" folder.
Otherwise, you can copy the entire 'debug' or 'release' folder into the standard USD folder structure.

# Live Mode

In order to send/receive updates you must:

1. `#include <OmniClient.h>` (from the client library)
2. Create or open a ".live" file on an Omniverse server
3. Call `omniClientLiveProcess();` periodically

For "frame based" applications, you can safely just call `omniClientLiveProcess` inside your main loop. For event-based applications, you can register a callback function using `omniClientLiveSetQueuedCallback` to receive a notification that an update is queued and ready to be processed. In either case, make sure that nothing (i.e., no other thread) is using the USD library when you call `omniClientLiveProcess`, because it will modify the layers, and that is not thread safe.

# Contents

- [C API](_build/docs/usd_resolver/latest/usd_resolver_api.html)
- [Python API](docs/python.html)
- [Changes](docs/changes.html)

## Technical

- [Technical Overview](docs/technical-overview.html)
- [OmniUsdResolver Overview](docs/resolver.html)
- [OmniUsdResolver Details](docs/resolver-details.html)
- [OmniUsdWrapperFileFormat Overview](docs/wrapper-file-format.html)
- [OmniUsdLiveFileFormat Overview](docs/live-layers.html)
- [OmniUsdLiveFileFormat (Multi-threaded) Overview](docs/live-layers-multithread.html)
- [Live Layer Details](docs/live-layers-details.html)
- [Live Layer Wire Format](docs/live-layers-wire-format.html)
- [Live Layer Data](docs/live-layers-data.html)
- [Client Library Live Functions](docs/omni-client-live.html)
documentation_Overview.md
# Overview

This extension is the gold standard for an extension that contains only OmniGraph Python nodes, without a build process to create the generated OmniGraph files. They will be generated at run time when the extension is enabled.

## The Files

To use this template, first copy the entire directory into a location that is visible to the extension manager, such as `Documents/Kit/shared/exts`. You will end up with this directory structure. The highlighted lines should be renamed to match your extension, or removed if you do not want to use them.

```text
omni.graph.template.no_build/
    config/
        extension.toml
    data/
        icon.svg
        preview.png
    docs/
        CHANGELOG.md
        Overview.md
        README.md
        directory.txt
    ogn/
        nodes.json
    omni/
        graph/
            template/
                no_build/
                    __init__.py
                    _impl/
                        __init__.py
                        extension.py
                    nodes/
                        OgnTemplateNodeNoBuildPy.ogn
                        OgnTemplateNodeNoBuildPy.py
                    tests/
                        __init__.py
                        test_api.py
                        test_omni_graph_template_no_build.py
```

By convention, the Python files are structured in a directory tree that matches a namespace corresponding to the extension name, in this case `omni/graph/template/no_build/`, which corresponds to the extension name *omni.graph.template.no_build*. You'll want to modify this to match your own extension's name.

The file `ogn/nodes.json` is written by hand here; it is usually a byproduct of the build process. It contains a JSON list of all nodes implemented in this extension, with the description, version, extension owner, and implementation language for each node. It is used in the extension window as a preview of the nodes in the extension, so it is a good idea to provide this file with your extension, though it is not mandatory.

The convention of keeping the implementation details of a module in the `_impl/` subdirectory makes it clear to the user that they should not directly access anything in that directory, only what is exposed in the `__init__.py`.
## The Configuration

Every extension requires a `config/extension.toml` file with metadata describing the extension to the extension management system. Below is the annotated version of this file; change the values described in the comments to match your own extension.

```toml
# Main extension description values
[package]
# The current extension version number - uses [Semantic Versioning](https://semver.org/spec/v2.0.0.html)
version = "2.3.1"
# The title of the extension that will appear in the extension window
# Longer description of the extension
# Authors/owners of the extension - usually an email by convention
# Category under which the extension will be organized
# Location of the main README file describing the extension for extension developers
# Location of the main CHANGELOG file describing the modifications made to the extension during development
# Location of the repository in which the extension's source can be found
# Keywords to help identify the extension when searching
# Image that shows up in the preview pane of the extension window
# Image that shows up in the navigation pane of the extension window - can be a .png, .jpg, or .svg
# Specifying this ensures that the extension is always published for the matching version of the Kit SDK
# Specify the minimum level for support

# Main module for the Python interface. This is how the module will be imported.
[[python.module]]
name = "omni.graph.template.no_build"

# Watch the .ogn files for hot reloading. Only useful during development as after delivery files cannot be changed.
[fswatcher.patterns]
include = ["*.ogn", "*.py"]
exclude = ["Ogn*Database.py"]

# Other extensions that need to load in order for this one to work
[dependencies]
"omni.graph" = {}        # For basic functionality and node registration
"omni.graph.tools" = {}  # For node type code generation

# Main pages published as part of documentation. (Only if you build and publish your documentation.)
[documentation]
pages = [
    "docs/Overview.md",
    "docs/CHANGELOG.md",
]

# Some extensions are only needed when writing tests, including those automatically generated from a .ogn file.
# Having special test-only dependencies lets you avoid introducing a dependency on the test environment when only
# using the functionality.
[[test]]
dependencies = [
    "omni.kit.test"  # Brings in the Kit testing framework
]
```

Everything in the `docs/` subdirectory is considered documentation for the extension.

- **README.md** The contents of this file appear in the extension manager window, so you will want to customize it. The location of this file is configured in the `extension.toml` file as the **readme** value.
- **CHANGELOG.md** It is good practice to keep track of changes to your extension so that users know what is available. The location of this file is configured in the `extension.toml` file as the **changelog** value.
- **Overview.md** This file is mainly used when building documentation and can be deleted when not running a build process.
- **directory.txt** This file can be deleted as it is specific to these instructions.

## The Node Type Definitions

You define a new node type using two files, examples of which are in the `nodes/` subdirectory. Tailor the definition of your node types for your computations. Start with the OmniGraph User Guide for information on how to configure your own definitions.

## Tests

While completely optional, it's always a good idea to add a few tests for your node to ensure that it works as you intend and continues to work when you make changes to it. The sample tests in the `tests/` subdirectory show how you can integrate with the Kit testing framework to easily run tests on nodes built from your node type definition.

That's all there is to creating a simple node type!
You can now open your app, enable the new extension, and your sample node type will be available to use within OmniGraph. > **Note** > Although development is faster without a build process you are sacrificing discoverability of your node type. There will be no automated test or documentation generation, and your node types will not be visible in the extension manager. They will, however, still be visible in the OmniGraph editor windows. There will also be a small one-time performance price as the node type definitions will be generated the first time your extension is enabled.
DocumentingPython.md
# Documenting

This guide is for developers who write API documentation. To build the documentation run:

```bash
repo docs
```

in the repo and you will find the output under `_build/docs/carbonite/latest/`.

## Documenting Python API

The best way to document our Python API is to do so directly in the code. That way it's always extracted from a location where it's closest to the actual code and most likely to be correct.

We have two scenarios to consider:

- Python code
- C++ code that is exposed to Python

For both of these cases we need to write our documentation in the Python Docstring format (see [PEP 257](https://www.python.org/dev/peps/pep-0257/) for background). In a perfect world we would be able to use exactly the same approach, regardless of whether the Python API was written in Python or coming from C++ code that is exposing Python bindings via pybind11. Our world is unfortunately not perfect here, but it's quite close; most of the approach is the same - we will highlight when a different approach is required for the two cases of Python code and C++ code exposed to Python.

Instead of using the older and more cumbersome reStructuredText Docstring specification, we have adopted the more streamlined [Google Python Style Docstring](http://google.github.io/styleguide/pyguide.html#38-comments-and-docstrings) format. This is how you would document an API function in Python:

```python
from typing import Optional

def answer_question(question: str) -> Optional[str]:
    """This function can answer some questions.

    It currently only answers a limited set of questions so don't expect it to know everything.

    Args:
        question: The question passed to the function, trailing question mark is not necessary and
            casing is not important.

    Returns:
        The answer to the question or ``None`` if it doesn't know the answer.
    """
    if question.lower().startswith("what is the answer to life, universe, and everything"):
        return str(42)
    else:
        return None
```

After running the documentation generation system we will get this as the output (assuming the above was in a module named carb). There are a few things you will notice:

1. We use the [Python type hints](https://docs.python.org/3/library/typing.html) (introduced in Python 3.5) in the function signature so we don't need to write any of that information in the docstring. An additional benefit of this approach is that many Python IDEs can utilize this information and perform type checking when programming against the API. Notice that we always do `from typing import ...` so we never have to prefix with the `typing` namespace when referring to `List`, `Union`, etc.

- **Using Docstrings**: Docstrings are essentially comments that describe what a function, class, or method does. They are written in triple quotes (`'''` or `"""`) and are placed at the beginning of the code block. For example, in Python, you might see something like this:

```python
def function_name(arg1, arg2):
    '''
    This is a docstring. It explains what the function does.
    '''
    # function body
```

Docstrings can be accessed using the `__doc__` attribute of the function, class, or method.

- **Using `reStructuredText`**: `reStructuredText` (reST) is a lightweight markup language used for documentation in the Python community. It is used to write the documentation for Python libraries and is also used in Sphinx, a documentation generator. Here's an example of how you might use reST in a docstring:

```python
def function_name(arg1, arg2):
    '''
    :param arg1: This is the first argument.
    :type arg1: int
    :param arg2: This is the second argument.
    :type arg2: str
    :returns: This function returns a tuple.
    :rtype: tuple
    '''
    # function body
```

This format allows for detailed documentation of function parameters and return values.
- **Using `Google Style`**: Google style docstrings are a specific format for writing docstrings that is popular in the Python community. They are similar to reST but have a more structured format. Here's an example:

```python
def function_name(arg1, arg2):
    '''
    This is a function that does something.

    Args:
        arg1 (int): The first argument.
        arg2 (str): The second argument.

    Returns:
        tuple: A tuple of results.
    '''
    # function body
```

Google style docstrings are often used with the `Sphinx` documentation generator and the `Napoleon` extension, which converts them into reST.

- **Using `numpy Style`**: Numpy style docstrings are another popular format for writing docstrings in the Python community. They are similar to Google style but have a different structure. Here's an example:

```python
def function_name(arg1, arg2):
    '''
    This is a function that does something.

    Parameters
    ----------
    arg1 : int
        The first argument.
    arg2 : str
        The second argument.

    Returns
    -------
    tuple
        A tuple of results.
    '''
    # function body
```

Numpy style docstrings are often used with the `Sphinx` documentation generator and the `Numpydoc` extension, which converts them into reST.

For module attributes, we write either:

```python
"""...

    module_variable (Optional[str]): This is important ...
"""
```

or

```python
module_variable = None
"""Optional[str]: This is important ..."""
```

But we **don't** write:

```python
from typing import Optional

module_variable: Optional[str] = None
"""This is important ..."""
```

This is because the last form (which was introduced in Python 3.6) is still poorly supported by tools - including our documentation system. It also doesn't work with Python bindings generated from C++ code using pybind11.

For instructions on how to document classes, exceptions, etc. please consult the Sphinx Napoleon Extension Guide.
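Since docstrings written in any of these styles are ordinary `__doc__` attributes, they can be inspected programmatically, which is exactly what the documentation generator does. A minimal runnable example (the function is a toy stand-in for illustration):

```python
import inspect

def answer_question(question: str):
    """This function can answer some questions.

    Args:
        question: The question passed to the function.

    Returns:
        The answer to the question or ``None`` if it doesn't know the answer.
    """
    return str(42) if "life" in question.lower() else None

# inspect.getdoc reads the same __doc__ attribute the generator uses,
# and strips the uniform indentation just like the documentation tooling.
doc = inspect.getdoc(answer_question)
assert doc.startswith("This function can answer some questions.")
assert "Args:" in doc and "Returns:" in doc
```

This is also a handy way to sanity-check docstring structure in unit tests before Sphinx ever runs.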
documenting_exts.md
# Documenting Extensions

This guide is for developers who write API documentation. To build the documentation, run:

```shell
repo.{sh|bat} docs
```

Add the `-o` flag to automatically open the resulting docs in the browser. If multiple projects of documentation are generated, each one will be opened.

Add the `--project` flag to generate the docs for only the specified project. Documentation generation can be slow for some modules, so this may be important for reducing iteration time when testing your docs. e.g.:

```shell
repo.bat docs --project kit-sdk
repo.bat docs --project omni.ui
```

Add the `-v` / `-vv` flags to repo docs invocations for additional debug information, particularly for low-level Sphinx events.

### Note

You must have successfully completed a debug build of the repo before you can build the docs for **Python**. This is due to the documentation being extracted from the `.pyd` and `.py` files in the `_build` folder. Run `build --debug-only` from the root of the repo if you haven't done this already.

After running `repo docs` in the repo, you will find the project-specific output under `_build/docs/{project}/latest`. The generated `index.html` is what the `-o` flag will launch in the browser if specified.

### Warning

Sphinx warnings will result in a non-zero exit code for repo docs, and will therefore fail a CI build. This means that it is important to maintain docstrings with the correct syntax (as described below) over the lifetime of a project.

## Documenting Python API

The best way to document our Python API is to do so directly in the code. That way it's always extracted from a location where it's closest to the actual code and most likely to be correct.

We have two scenarios to consider:

- Python code
- C++ code that is exposed to Python

For both of these cases we need to write our documentation in the Python Docstring format (see [PEP 257](https://peps.python.org/pep-0257/) for background).
Our world is unfortunately not perfect here, but it's quite close; most of the approach is the same. We will highlight when a different approach is required for the two cases of Python code and C++ code exposed to Python.

Instead of using the older and more cumbersome reStructuredText Docstring specification, we have adopted the more streamlined Google Python Style Docstring format. This is how you would document an API function in Python:

```python
from typing import Optional


def answer_question(question: str) -> Optional[str]:
    """This function can answer some questions.

    It currently only answers a limited set of questions so don't expect it to know everything.

    Args:
        question: The question passed to the function, trailing question mark is not necessary and
            casing is not important.

    Returns:
        The answer to the question or ``None`` if it doesn't know the answer.

    """
    if question.lower().startswith("what is the answer to life, universe, and everything"):
        return str(42)
    else:
        return None
```

After running the documentation generation system we will get this as the output (assuming the above was in a module named `carb`):

There are a few things you will notice:

1. We use the Python type hints (introduced in Python 3.5) in the function signature, so we don't need to write any of that information in the docstring. An additional benefit of this approach is that many Python IDEs can utilize this information and perform type checking when programming against the API. Notice that we always do `from typing import ...` so that we never have to prefix with the `typing` namespace when referring to `List`, `Union`, `Dict`, and friends. This is the common approach in the Python community.
2. The high-level structure is essentially in four parts:
   - A one-liner describing the function (without details or corner cases), referred to by Sphinx as the "brief summary".
   - A paragraph that gives more detail on the function behavior (if necessary).
   - An `Args:` section (if the function takes arguments; note that `self` is not considered an argument).
   - A `Returns:` section (if the function returns something other than `None`).

Before we discuss the other bits to document (modules and module attributes), let's examine how we would document the very same function if it was written in C++ and exposed to Python using pybind11:

```cpp
m.def("answer_question", &answerQuestion, py::arg("question"),
      R"(
    This function can answer some questions.

    It currently only answers a limited set of questions so don't expect it to know everything.

    Args:
        question: The question passed to the function, trailing question mark is not necessary and
            casing is not important.

    Returns:
        The answer to the question or empty string if it doesn't know the answer.
)");
```

The outcome is identical to what we saw from the Python source code, except that we cannot optionally return a string in C++. The same docstring syntax rules must be obeyed because they will be propagated through the bindings. We want to draw your attention to the following:

1. pybind11 generates the type information for you, based on the C++ types. The `py::arg` object must be used to get properly named arguments into the function signature (see the pybind11 documentation); otherwise you just get `arg0` and so forth in the documentation.
2. Indentation and whitespace are key when writing docstrings. The documentation system is clever enough to remove uniform indentation. That is, as long as all the lines have the same amount of padding, that padding will be ignored and not passed on to the reStructuredText processor. Fortunately, clang-format leaves this funky formatting alone, respecting the raw string qualifier. Sphinx warnings caused by non-uniform whitespace can be opaque (such as referring to nested blocks being ended without newlines, etc.).

Let's now turn our attention to how we document modules and their attributes.
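The uniform-indentation stripping described in point 2 mirrors what Python's own standard library does for docstrings. A quick sketch with `textwrap.dedent` (the docs pipeline's exact behavior may differ slightly):

```python
import textwrap

# A docstring body with uniform 4-space padding, as in the pybind11 listing above.
raw = (
    "    This function can answer some questions.\n"
    "\n"
    "    Args:\n"
    "        question: The question passed to the function.\n"
)

# The common 4-space prefix is stripped; relative indentation is preserved,
# so the reStructuredText processor sees the intended block structure.
clean = textwrap.dedent(raw)
print(clean)
```

If one line used tabs or a different amount of padding, the common prefix would shrink and the relative indentation would be wrong, which is the usual source of the opaque Sphinx warnings mentioned above.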
We should of course only document modules that are part of our API (not internal helper modules) and only public attributes. Below is a detailed example:

```python
"""Example of Google style docstrings for module.

This module demonstrates documentation as specified by the `Google Python
Style Guide`. Docstrings may extend over multiple lines. Sections are created
with a section header and a colon followed by a block of indented text.

Example:
    Examples can be given using either the ``Example`` or ``Examples``
    sections. Sections support any reStructuredText formatting, including
    literal blocks::

        $ python example.py

Section breaks are created by resuming unindented text. Section breaks
are also implicitly created anytime a new section starts.

Attributes:
    module_level_variable1 (int): Module level variables may be documented in
        either the ``Attributes`` section of the module docstring, or in an
        inline docstring immediately following the variable.

        Either form is acceptable, but the two should not be mixed. Choose
        one convention to document module level variables and be consistent
        with it.

    module_level_variable2 (Optional[str]): Use objects from typing, such as
        Optional, to annotate the type properly.

    module_level_variable4 (Optional[File]): We can resolve type references
        to other objects that are built as part of the documentation. This
        will link to `carb.filesystem.File`.

Todo:
    * For module TODOs if you want them
    * These can be useful if you want to communicate any shortcomings in the
      module we plan to address

.. _Google Python Style Guide:
   http://google.github.io/styleguide/pyguide.html

"""

module_level_variable1 = 12345

module_level_variable3 = 98765
"""int: Module level variable documented inline.

The type hint should be specified on the first line, separated by a colon from
the text.

This approach may be preferable since it keeps the documentation closer to the
code and the default assignment is shown.
A downside is that the variable will get alphabetically sorted among functions
in the module so won't have the same cohesion as the approach above."""

module_level_variable2 = None

module_level_variable4 = None
```

This is what the documentation would look like:

As we have mentioned, we should not mix the `Attributes:` style of documentation with inline documentation of attributes. Notice how `module_level_variable3` appears in a separate block from all the other attributes that were documented. It is even after the `Todo` section. Choose one approach for your module and stick to it. There are valid reasons to pick one style above the other, but don't cross the streams!

As before, we use type hints from `typing`, but we don't use the typing syntax to attach them. We write:

```python
"""...

Attributes:
    module_variable (Optional[str]): This is important ...

"""
```

or

```python
module_variable = None
"""Optional[str]: This is important ..."""
```

But we **don't** write:

```python
from typing import Optional

module_variable: Optional[str] = None
"""This is important ..."""
```

This is because the last form (which was introduced in Python 3.6) is still poorly supported by tools, including our documentation system. It also doesn't work with Python bindings generated from C++ code using pybind11.

For instructions on how to document classes, exceptions, etc. please consult the Sphinx Napoleon Extension Guide.

## Adding Extensions to the automatic-introspection documentation system

It used to be necessary to maintain a `./docs/index.rst` to write out automodule/autoclass/etc. directives, as well as to include hand-written documentation about your extensions. In order to facilitate rapid deployment of high-quality documentation out-of-the-box, a new system has been implemented.

> **Warning**
> If your extension's modules cannot be imported at documentation-generation time, they cannot be documented correctly by this system.
> Check the logs for warnings/errors about any failures to import, and any errors propagated.

In the Kit `repo.toml`, the `[repo_docs.projects."kit-sdk"]` section is responsible for targeting the old system, and the `[repo_docs.kit]` section is responsible for targeting the new.

Opt your extension in to the new system by:

1. Adding the extension to the list of extensions.
2. In `./source/extensions/{ext_name}/docs/`, add or write an `Overview.md` if none exists. Users will land here first.
3. In `./source/extensions/{ext_name}/config/extension.toml`, add all markdown files - except `README.md` - to an entry per the example below.

## Documentation Configuration

To configure the documentation, you need to add any extension dependencies that your documentation depends on, such as links or Sphinx ref-targets. This syntax follows the repo_docs tool's intersphinx syntax. The `deps` are a list of lists, where the inner list contains the name of the target intersphinx project, followed by the path to the folder containing that project's `objects.inv` file. HTTP links to websites that host their `objects.inv` file online, like Python's, will work as well, if discoverable at docs build time. Apart from web paths, this will only work for projects inside of the kit repo for now.

```toml
[documentation]
deps = [
    ["kit-sdk", "_build/docs/kit-sdk/latest"],
]
pages = [
    "docs/Overview.md",
    "docs/CHANGELOG.md",
]
```

The first item in the list will be treated as the "main page" for the documentation, and a user will land there first. Changelogs are automatically bumped to the last entry regardless of their position in the list.

## Dealing with Sphinx Warnings

The introspection system ends up introducing many more objects to Sphinx than previously, and in a much more structured way.
It is therefore extremely common to come across many as-yet-undiscovered Sphinx warnings when migrating to this new system. Here are some strategies for dealing with them.

### MyST-parser warnings

These are common as we migrate away from the RecommonMark/m2r2 markdown Sphinx extensions and towards MyST-parser, which is more extensible and stringent. Common issues include:

1. Header-level warnings. MyST does not tolerate jumping from h1 directly to h3 without first passing through h2, for example.
2. Links which fail to match a reference. MyST will flag these to be fixed (consider it a QC check that your links are not broken).
3. Code block syntax - if the language of a code block cannot be automatically determined, a highlighting-failure warning may be emitted. Specify the language directly after the first backticks.
4. General markdown syntax - RecommonMark/m2r2 were more forgiving of syntax failures. MyST can raise warnings where they would not previously.

### Docstring syntax warnings

The biggest issue with the Sphinx `autodoc` extension's module introspection is that it is difficult to control which members to inspect, and doubly so when recursing or when imported members are being inspected. Therefore, it is **strongly advised** that your Python modules define `__all__`, which controls which objects are imported when `from module import *` syntax is used. The same advice applies to Python modules acting as bindings for C++ modules.

`__all__` is respected by multiple stages of the documentation generation process (introspection, autosummary stub generation, etc.). This has several notable effects:

1. Items that your module imports will not be considered when determining the items to be documented. This speeds up documentation generation.
2. Prevents unnecessary or unwanted autosummary stubs from being generated and included in your docs.
3. Optimizes the import time of your module when star-imports are used in other modules.
4. Unclutters imported namespaces for easier debugging.
5. Reduces "duplicate object" Sphinx warnings, because the number of imported targets with the same name is reduced to one.

Other common sources of docstring syntax warnings:

1. Indentation/whitespace mismatches in docstrings.
2. Improper usage or lack of newlines where required, e.g. for an indented block.

### C++ docstring issues

As a boon to users of the new system, and because default bindings-generated initialization docstrings typically make heavy use of asterisks and backticks, these are automatically escaped at docstring-parse time.

Please note that the `pybind11_builtins.pybind11_object` base class is automatically hidden from class pages.
# Edit Hotkey

## Edit Hotkey

Selecting or hovering the mouse over a row will show an edit control. Then:

- Click to show the edit bar.
- Press a key to change the key binding.
- Click to change the trigger option: On Press or On Release.
- Click to save changes (both the key binding and the trigger option) and exit.
- Click to exit without changes.
# Graph Concepts

This article covers core graph concepts found in EF. Readers are encouraged to review the [Execution Framework Overview](Overview.html#ef-framework) before diving into this article.

![The Execution Framework pipeline. This article covers concepts found in the Execution Graph (IR).](_images/ef-graph-concepts.png)

The core data structure Execution Framework (EF) uses to describe execution is a *graph of graphs*. Each *graph* contains a [root node](#ef-root-node). The root node can connect to zero or many downstream [nodes](#ef-nodes) via directed [edges](#ef-edges). Nodes represent work to be executed. Edges represent ordering dependencies between nodes.

![A simple graph.](_images/ef-simple.svg)

The work each node represents is encapsulated in a [definition](#ef-definition). Each node in the graph may have a pointer to a definition. There are two categories of definitions: [opaque](#ef-opaque-definition) and [graph](#ef-graph-definition). An *opaque definition* is a work implementation hidden from the framework. An example would be a function pointer. The second type of definition is another *graph*. Allowing a node's work definition to be yet another graph is why we say EF's core execution data structure is a *graph of graphs*. The top-level container of the *graph of graphs* is called the [execution graph](#ef-execution-graph). The graphs to which individual nodes point are called [graph definitions](#ef-graph-definition) or simply [graphs](#ef-graph-definition).

The following sections dive into each of the topics above with the goal of providing the reader with a general understanding of each of the core concepts in EF's *graph of graphs*.

## Nodes

Nodes in a [graph](#ef-graph-definition) represent work to be executed. The actual work to be performed is stored in a [definition](#ef-definition), to which a node points. Nodes can have both parent and child nodes. This relationship between parent and child defines an ordering dependency.
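The parent/child ordering dependency can be sketched with a toy model (plain Python, illustrative only; this is not the EF `INode` API):

```python
class Node:
    """Toy stand-in for a graph node: ordering edges stored as raw references."""

    def __init__(self, name):
        self.name = name
        self.parents = []   # nodes that must execute before this one
        self.children = []  # nodes that execute after this one

    def connect_to(self, child):
        """Create a directed edge self -> child (an ordering dependency)."""
        self.children.append(child)
        child.parents.append(self)


root = Node("root")
a, b = Node("a"), Node("b")
root.connect_to(a)  # root is ordered before a
a.connect_to(b)     # a is ordered before b
```

The edges here carry no data of their own; as in EF, they are just references from parent to child (and back) that a traversal can follow to honor the ordering.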
The interface for interacting with nodes is [INode](api/classomni_1_1graph_1_1exec_1_1unstable_1_1INode.html#_CPPv4N4omni5graph4exec8unstable5INodeE). EF contains [NodeT](api/classomni_1_1graph_1_1exec_1_1unstable_1_1NodeT.html#_CPPv4IDpEN4omni5graph4exec8unstable5NodeTE)/`Node`, an implementation of `INode` for instantiation when constructing graph definitions. Each node is logically contained within a single graph definition (i.e. `INodeGraphDef`).

## Edges

Edges represent ordering between nodes in a graph definition. Edges are represented in EF with simple raw pointers between nodes. These pointers can be accessed with `INode::getParents()` to list the nodes that come before a node, and `INode::getChildren()` to list the nodes that come after it.

## Definitions

Definitions define the work each node represents. Definitions can be opaque, meaning EF has no visibility into the actual work being performed. Opaque definitions implement the `INodeDef` interface. Helper classes, like `NodeDefLambda`, exist to easily wrap chunks of code into an opaque definition.

Definitions can also be defined with a graph, making the definition transparent. The transparency of graph definitions enables EF to perform many optimizations, such as:

- Executing nodes in the graph in parallel
- Optimizing the graph for the current hardware environment
- Reordering/deferring execution of nodes to minimize lock contention

Many of these optimizations are enabled by writing custom passes and executors. See Pass Creation and Executor Creation for details.

Graph definitions are defined by the `INodeGraphDef` interface. During graph construction, it is common for `IPass` authors to instantiate custom graph definitions to bridge EF with the authoring layer. The `NodeGraphDef` class is designed to help implement these custom definitions.

Definition instances are not unique to each node. Definitions are designed to be shared between multiple nodes.
This means two different `INode` instances are free to point to the same definition instance. This not only saves space, it also decreases graph construction time.

Above we see the graph from Figure 8, but now with pointers to definitions (dashed lines). Notice how definitions are shared between nodes. Furthermore, notice that nodes in graph definitions can point to other graph definitions.

Both `INodeDef` and `NodeGraphDef` are designed to help implement these custom definitions. `INodeDef` (i.e. opaque definitions) and `INodeGraphDef` (i.e. graph definitions) inherit from the `IDef` interface. All user definitions must implement either `INodeDef` or `INodeGraphDef`.

Definitions are attached to nodes and can be accessed with `INode::getDef()`. Note, a node is not required to have a definition. In fact, each graph's root node will not have an attached definition.

### Execution Graph

The top-level container for execution is the *execution graph*. The execution graph is special. It is the only entity, other than a node, that can contain a definition. In particular, the execution graph always contains a single graph definition. It is this graph definition that is the actual *graph of graphs*. The execution graph does not contain nodes; rather, it is the execution graph's definition that contains nodes.

In addition to containing the top-level graph definition, the execution graph's other jobs are to track:

- Whether the graph is currently being constructed
- Gross changes to the topologies in the execution graph. See invalidation for details.

The execution graph is defined by the `IGraph` interface. EF contains the `Graph` implementation of `IGraph` for applications to instantiate.

### Topology

Each graph definition owns a *topology* object. Each topology object is owned by a single graph definition.
The topology object has several tasks:

- Owns and provides access to the root node
- Assigns each node in the graph definition a unique index
- Handles and tracks invalidation of the topology (via stamps)

Topology is defined by the `ITopology` interface and accessed via `INodeGraphDef::getTopology()`.

### Root Nodes

Each graph definition contains a topology which owns a *root node*. The root node is where traversal in a graph definition starts. Only descendants of the root node will be traversed. The root node is accessed via the graph definition's topology.

Root nodes are special in that they do not have an attached definition, though a graph definition's executor may assign special meaning to the root node. Root nodes are defined by the `INode` interface, just like any other node.

Each graph definition (technically the graph definition's topology) has a root node. This means there are many root nodes in EF (i.e. EF is a graph of graphs).

## Next Steps

In this article, an overview of graph concepts was provided. To learn how these concepts are utilized during graph construction, move on to Pass Concepts.
# Execution Concepts

This article covers core execution concepts. Readers are encouraged to review the [Execution Framework Overview](#ef-framework), [Graph Concepts](#ef-graph-concepts), and [Pass Concepts](#ef-pass-concepts) before diving into this article.

Execution Framework (i.e. EF) contains many classes with an `execute()` method. `IExecutionContext`, `IExecutor`, `ExecutionTask`, `INodeDef`, and `INodeGraphDef` are a subset of the classes with said method. With so many classes, understanding how execution works can be daunting. The purpose of this article is to step through how execution works in EF and illustrate some of its abilities. We start by introducing the concepts involved in execution. Once complete, we'll dive into the details of how they are used together to perform execution.

## Nodes

`INode` is the main structural component used to build the graph's topology. `INode` stores edges to parents (i.e. predecessors) and children (i.e. successors). These edges set an ordering between nodes. In addition to defining the execution graph's topology, `INode` also defines the execution logic of the graph. Each `INode` has an `execute()` method that is called during the execution of the graph.

`INode` stores one of two definitions: `INodeDef` or `INodeGraphDef`. These definitions define the actual computation to be performed when the node is executed. See [Graph Concepts](#ef-graph-concepts) for more details on nodes and how they fit into the EF picture.

## Opaque Definitions

`INodeDef` is one of the two definition classes that can be attached to an `INode` (note the difference in the spelling of `INodeDef` and `INode`). Definitions contain the logic of the computation to be performed when the `INode` is executed. `INodeDef` defines an *opaque* computation: logic contained within the definition that EF is unable to examine and optimize.

## Graph Definitions

`INodeGraphDef` is one of the two definition classes that can be attached to an `INode`.
`INodeGraphDef` should not be confused with `IGraph`, which is the top-level container that stores the entire structure of the graph (i.e. the execution graph).

Definitions contain the logic of the computation to be performed when the `INode` is executed. Unlike `INodeDef`, which defines opaque computational logic that EF cannot examine (and thereby optimize), `INodeGraphDef` defines its computation by embedding a subgraph. This subgraph contains `INode` objects to define the subgraph's structure (like any other EF graph). Each of these nodes can point to either an `INodeDef` or yet another `INodeGraphDef` (again, like any other EF graph).

The ability to define an `INodeGraphDef` which contains nodes that point to additional `INodeGraphDef` objects is where EF gets its **composability** power. This is why it is said that EF is a "graph of graphs". Adding new implementations of `INodeGraphDef` is common when extending EF with new graph types. See Definition Creation for details.

## Executors and Schedulers

Executors traverse a graph definition, generating tasks for each node *visited*. One of the core concepts of EF is that each graph definition can specify the executor that should be used to execute the subgraph it defines. This allows each graph definition to control a host of strategies for how its subgraph is executed:

- Whether a node should be scheduled
- How a node should be scheduled (e.g. parallel, deferred, serially, isolated, etc.)
- Where nodes are scheduled (e.g. GPU, CPU core, machine)
- The amount of work to be scheduled (i.e. how many tasks should be generated)

Executors and schedulers work together to produce, schedule, and execute tasks on behalf of the node. Executors determine which nodes should be visited and generate appropriate work (i.e. tasks). Schedulers collect tasks, possibly concurrently from many executor objects, and map the tasks to hardware resources for execution.

Executors are described by the `IExecutor` interface.
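The executor/scheduler division of labor can be sketched with a toy model (plain Python, illustrative only; names and structure are not the EF API). The "executor" decides *which* nodes are ready to visit and emits a task per node; the "scheduler" here is just a callable that receives the tasks:

```python
# Toy dependency graph: root -> {a, b} -> c (a diamond).
graph = {"root": ["a", "b"], "a": ["c"], "b": ["c"], "c": []}
parents = {"root": [], "a": ["root"], "b": ["root"], "c": ["a", "b"]}


def executor(start, schedule):
    """Visit nodes in dependency order, handing one task per node to `schedule`."""
    done, frontier = set(), [start]
    while frontier:
        node = frontier.pop(0)
        if node != "root":        # the root carries no work of its own
            schedule(node)        # "generate a task" for this node
        done.add(node)
        for child in graph[node]:
            # Only visit a child once every one of its parents has executed.
            if child not in done and all(p in done for p in parents[child]):
                frontier.append(child)


ran = []
executor("root", ran.append)      # a trivial serial "scheduler"
```

A real scheduler could instead run the tasks in parallel, defer them, or place them on specific hardware; the executor's visiting logic would not need to change.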
Most users defining their own executor will inherit from the `Executor` template, which is an implementation of `IExecutor`. `Executor` is a powerful template allowing users to easily control the strategies above. See `Executor`'s documentation for a more in-depth explanation of what's possible with EF's executors.

## ExecutionPaths

The `ExecutionPath` class is an efficient utility class used to store the *execution path* of an `INode`. Since a graph definition may be pointed to/shared by multiple nodes, nodes within a graph definition can be at multiple "paths". Consider node *k* below:

Figure 16: A flattened execution graph. Graph definitions can be shared amongst multiple nodes (e.g. *X*). As a result, nodes must be identified with a path rather than their pointer value. Execution paths provide context as to which "instance" of a node is being executed. Above, the yellow arrow is pointing to */f/p/k*. However, since *X* is a shared definition, another valid path for *k* is */e/k*.

Above, the graph definition *X* is shared by nodes *e* and *p*. The execution path for *k* is either */f/p/k* (the yellow arrow) or */e/k*. Figure 16 demonstrates that when associating data with a node, you should not use the node's pointer value. Rather, use an `ExecutionPath`. The same holds true for definitions.

## Execution Contexts / Execution State

`INodeDef` and `INodeGraphDef` are stateless entities in EF. Likewise, other than connectivity information, `INode` is also stateless. That begs the question: "If my computation needs state, where is it stored?" The answer is in the `IExecutionContext`.
`IExecutionContext` is a limited key/value store where each key is an `ExecutionPath` and the value is an application-defined subclass of the `IExecutionStateInfo` interface. `IExecutionContext` allows the graph structure to be decoupled from the computational state. As a consequence, the execution graph can be executed in parallel, each execution with its own `IExecutionContext`. In fact, `IExecutionContext::execute()` is the launching point of all computation (more on this below).

`IExecutionContext` is meant to store data that lives across multiple executions of the execution graph. This is in contrast to the state data traversals and executors store, which are transient in nature.

`IExecutionContext` is implemented by EF's `ExecutionContext` template. `IExecutionContext` is an important entity during execution, as it serves as the data store for EF's stateless graph of graphs. This article only touches on execution contexts. Readers should consult `IExecutionContext`'s documentation for a better understanding of how to use `IExecutionContext`.

## Execution Tasks

`ExecutionTask` is a utility class that describes a task to be potentially executed on behalf of an `INode` in a given `IExecutionContext`. `ExecutionTask` stores three key pieces of information: the node to be executed, the path to the node, and the execution context.

## Execution in Practice

With the overview of the different pieces of EF execution out of the way, we can now focus on how the pieces fit together.

As mentioned above, EF utilizes a *graph of graphs* to define computation and execution order. The structure of these graphs is constructed with `INode` objects, while the computational logic each `INode` encapsulates is delegated to either `INodeDef` or `INodeGraphDef`. The top-level structure that contains the entire graph is the `IGraph` object (e.g. the execution graph). The `IGraph` object simply contains a single `INodeGraphDef` object. It is this top-level `INodeGraphDef` that defines the *graph of graphs*.
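The path-keyed state store described in the Execution Contexts section above can be sketched as follows (plain Python with hypothetical names, not the EF API). The key point is that state is keyed by execution path, so the two instances of node *k* from Figure 16 get independent state even though they share one definition:

```python
class ToyExecutionContext:
    """Toy key/value store: state is keyed by execution path, not node pointer."""

    def __init__(self):
        self._state = {}  # path string -> per-instance state dict

    def get_state(self, path):
        """Return (creating on first use) the state for this node *instance*."""
        return self._state.setdefault(path, {"run_count": 0})


ctx = ToyExecutionContext()
# Node "k" lives in a shared graph definition, so it has two valid paths.
ctx.get_state("/f/p/k")["run_count"] += 1
ctx.get_state("/e/k")["run_count"] += 1
ctx.get_state("/e/k")["run_count"] += 1
```

Because the context owns all of the state, the same graph structure could be executed concurrently with a second context without the two executions interfering.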
After a concrete implementation of `IGraph` has been constructed and populated, computation starts by constructing a concrete subclass of `IExecutionContext` and calling `IExecutionContext::execute()`:

### Listing 1

Pattern seen in most uses of EF to execute the execution graph. Create the graph, populate the graph, execute the graph with a context.

```cpp
auto graph{ Graph::create("myGraph") };

// populate graph <not shown>

MyExecutionState state;
auto context{ MyExecutionContext::create(graph, state) };

Status result = context->execute();
```

`IExecutionContext::execute()` will initialize the context (if needed) and then pass itself and the `IGraph` to `IExecutionCurrentThread::executeGraph()`, which is in charge of creating an `ExecutionTask` to execute the `IGraph`'s top-level definition. `IExecutionCurrentThread` additionally keeps track of which `ExecutionTask`/`IGraph`/`INode`/`IExecutionContext`/`IExecutor` is running on the current thread (see `getCurrentTask()` and `getCurrentExecutor()`).

`IExecutionCurrentThread::executeGraph()` is special in that it accounts for the odd nature of the top-level `INodeGraphDef`. The top-level `INodeGraphDef` is the only such `INodeGraphDef` that isn't pointed to by a node, and as such special logic must be written to handle this edge case. For all other definitions (and what the remainder of this article covers), execution starts with `ExecutionTask::execute(IExecutor&)`, which calls `IExecutionCurrentThread::execute()`:

### Listing 2

Signature of the method used for initiating node execution.

```cpp
Status ExecutionCurrentThread::execute_abi(ExecutionTask* task, IExecutor* executor) noexcept
```

Here, the given `task`'s `ExecutionTask::getNode()` points to the node whose definition we wish to execute. The given `executor` is the executor of the `INodeGraphDef` that owns the node we wish to execute and has created the `ExecutionTask` (i.e. `task`) to execute the node.

There are three cases `IExecutionCurrentThread::execute()` must handle:
1. If the node points to an **opaque definition**
2. If the node does not point to a definition
3. If the node points to a **graph definition**

### Executing an Opaque Definition

The first case, an opaque definition, is handled as follows:

#### Listing 3

How nodes with an opaque definition are executed.

```cpp
auto node = task->getNode();
auto nodeDef = node->getNodeDef();
if (nodeDef)
{
    ScopedExecutionTask setCurrentExecution(task, executor);

    // important to update task status before calling into continueExecute since it may look at it
    task->setExecutionStatus(nodeDef->execute(*task));

    // the task has had a chance to execute. it may have succeeded, failed, been deferred, etc. it's up to the
    // user defined IExecutor::continueExecute to determine the status of the task and react appropriately.
    return executor->continueExecute(*task);
}
```

The listing above is straightforward: call `INodeDef::execute()` followed by `IExecutor::continueExecute()`.

### Executing an Empty Definition

The second case is also straightforward:

#### Listing 4

How nodes without a definition are executed.

```cpp
// empty node...we didn't fail, so just continue execution
ScopedExecutionTask setCurrentExecution(task, executor);

// important to update task status before calling into continueExecute since it may look at it
task->setExecutionStatus(Status::eSuccess);

// the task has had a chance to execute. it may have succeeded, failed, been deferred, etc. it's up to the
// user defined IExecutor::continueExecute to determine the status of the task and react appropriately.
return executor->continueExecute(*task);
```

### Executing a Graph Definition

The third case, a graph definition, is a bit more complex:

#### Listing 5

How nodes with a graph definition are executed.
```cpp
exec::unstable::ExecutionPath pathToInstancingNode{ task->getUpstreamPath(), task->getNode() };
ExecutionTask rootTask{ task->getContext(), nodeGraphDef->getRoot(), pathToInstancingNode };

ScopedExecutionTask setCurrentExecution(&rootTask, executor);

auto status = nodeGraphDef->preExecute(*task);
if (status == Status::eSuccess)
{
    status = nodeGraphDef->execute(*task);
    if (status == Status::eSuccess)
    {
        status = nodeGraphDef->postExecute(*task);
    }
}

if (status == Status::eSkip)
{
    // we skipped execution, so record this as success
    status = Status::eSuccess;
}

// important to update task status before calling into continueExecute since it may look at it
task->setExecutionStatus(status);

// the task has had a chance to execute. it may have succeeded, failed, been deferred, etc. it's up to the
// user defined IExecutor::continueExecute to determine the status of the task and react appropriately.
return executor->continueExecute(*task);
```

To execute the node's graph definition, we start by creating a new task that will execute the graph definition's root node (i.e. `rootTask`). This task is given to the graph definition's `INodeGraphDef::preExecute(ExecutionTask*)`, `INodeGraphDef::execute(ExecutionTask*)`, and `INodeGraphDef::postExecute(ExecutionTask*)`. The meanings of pre- and post-execute are up to the user.

### Creating the Graph Definition's Executor

`INodeGraphDef::execute(ExecutionTask*)`'s job is clear: *execute the node*. `INodeGraphDef` implementations based on EF's `NodeGraphDef` class handle execution by instantiating the graph definition's executor and telling it to execute the given node (i.e. `info->getNode()` below):

#### Listing 6

`NodeGraphDef`'s `INodeGraphDef::execute(ExecutionTask*)` implementation instantiates the graph definition's preferred executor and executes the given node.
```cpp
omni::core::ObjectPtr<IExecutor> executor;
if (m_executorFactory)
{
    executor = m_executorFactory(m_topology, *info);
}
else
{
    executor = ExecutorFallback::create(m_topology, *info);
}

return executor->execute(); // execute the node specified by info->getNode()
```

## Starting Execution

In [Listing 5](#ef-listing-execution-current-thread-nodegraphdef), we saw the node to execute was the node's root. The root node does not have an associated definition, though some executors may assign special meaning when executing it.

How `IExecutor::execute()` performs execution is up to the executor. As an example of what's possible, let's look at the `Executor` template's execute method:

#### Listing 7: The `Executor` template's execute method.

```cpp
//! Main execution method. Called once by each node instantiating same graph definition.
Status execute_abi() noexcept override
{
    // We can bypass all subsequent processing if the node associated with the task starting
    // this execution has no children. Note that we return an eSuccess status because nothing
    // invalid has occurred (e.g., we tried to execute an empty NodeGraphDef); we were asked to
    // compute nothing, and so we computed nothing successfully (no-op)!
    if (m_task.getNode()->getChildren().empty())
    {
        return Status::eSuccess | m_task.getExecutionStatus();
    }

    (void)continueExecute_abi(&m_task);

    // Give a chance for the scheduler to complete the execution of potentially parallel work which should complete
    // within current execution. All background tasks will continue past this point.
    // Scheduler is responsible for collecting the execution status for everything that this executor generated.
    return m_scheduler.getStatus() | m_schedulerBypass;
}
```

The `Executor` template ignores the root node and calls `IExecutor::continueExecute()`. `IExecutor::continueExecute()`'s job is to continue execution. What it means to "continue execution" is up to the executor.
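Notice the bitwise-or in `return m_scheduler.getStatus() | m_schedulerBypass;` above: statuses from different branches of work are *merged*, not overwritten. A minimal standalone sketch of that idea is below — the value names mirror EF's `Status`, but the flag layout and helpers here are illustrative assumptions, not EF's actual header:

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical stand-in for a mergeable status: each outcome is a bit flag so
// results from parallel branches can be combined with operator| (a failure or
// deferral in any branch survives the merge).
enum class Status : uint32_t
{
    eUnknown  = 0,      // no task reported anything yet
    eSuccess  = 1 << 0, // at least one task completed successfully
    eDeferred = 1 << 1, // at least one task will finish outside this frame
    eFailure  = 1 << 2, // at least one task failed
};

constexpr Status operator|(Status a, Status b)
{
    return static_cast<Status>(static_cast<uint32_t>(a) | static_cast<uint32_t>(b));
}

constexpr bool hasFlag(Status s, Status flag)
{
    return (static_cast<uint32_t>(s) & static_cast<uint32_t>(flag)) != 0;
}
```

With this encoding, merging a successful branch with a deferred branch yields a status that still remembers the deferral, which is exactly the property the executor relies on when it combines the scheduler's status with the bypass status.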
After the call to `Executor::continueExecute(const ExecutionTask&)` the scheduler's `getStatus()` is called. This is a blocking call that will wait for any work generated during `Executor::continueExecute(const ExecutionTask&)` to report a status (e.g. `Status::eSuccess`, `Status::eDeferred`, etc).

## Visiting Nodes and Generating Work

Let us assume we're using the `ExecutorFallback` executor. In Figure 16, if node */f/n* is the node that just executed, calling `IExecutor::continueExecute()` will visit */f/p* (via `ExecutionVisit`), notice that */f/p*'s parents have all executed, create a task to execute */f/p*, and give the task to the scheduler. This behavior of `ExecutorFallback` can be seen in the following listing:

#### Listing 8: The `ExecutorFallback`'s strategy for visiting nodes in `IExecutor::continueExecute()`.

```cpp
//! Graph traversal visit strategy.
//!
//! Will generate a new task when all upstream nodes have been executed.
struct ExecutionVisit
{
    //! Called when the traversal wants to visit a node. This method determines what to do with the node (e.g. schedule
    //! it, defer it, etc).
    template <typename ExecutorInfo>
    static Status tryVisit(ExecutorInfo info) noexcept
    {
        auto& nodeData = info.getNodeData();
        if (info.currentTask.getExecutionStatus() == Status::eDeferred)
        {
            nodeData.hasDeferredUpstream = true; // we only set to true...doesn't matter which thread does it first
        }

        std::size_t requiredCount = info.nextNode->getParents().size() - info.nextNode->getCycleParentCount();
        if ((requiredCount == 0) || (++nodeData.visitCount == requiredCount))
        {
            if (!nodeData.hasDeferredUpstream)
            {
                // spawning a task within executor doesn't change the upstream path. just reference the same one.
                ExecutionTask newTask(info.getContext(), info.nextNode, info.getUpstreamPath());
                return info.schedule(std::move(newTask));
            }
            else
                return Status::eDeferred;
        }
        return Status::eUnknown;
    }
};
```

The scheduler uses the `SchedulingStrategy` given to the executor to determine how to schedule the task. The strategy may decide to skip scheduling and execute the task immediately. Likewise, the strategy may tell the scheduler to run the task in parallel with other tasks (see [SchedulingInfo](api/enum_namespaceomni_1_1graph_1_1exec_1_1unstable_1a36b9c08e72889b8029dd280279104760.html#_CPPv4N4omni5graph4exec8unstable14SchedulingInfoE) for details). We can see an example of this decision making in the listing below:

```cpp
Status ret = Status::eUnknown;
SchedulingInfo schedInfo = getSchedulingInfo(newTask);
if (schedInfo != SchedulingInfo::eSchedulerBypass)
{
    // this task will finish before we exit executor...just capture as reference to avoid unnecessary cost
    ret = m_scheduler.schedule([executor = this, task = std::move(newTask)]() mutable -> Status
    {
        return task.execute(executor);
    }, schedInfo);
}
else // bypass the scheduler...no need for extra scheduling overhead
{
    m_schedulerBypass |= newTask.execute(this);
}
return ret;
```

Regardless of the scheduling strategy for the task, [ExecutionTask::execute(IExecutor&)](api/classomni_1_1graph_1_1exec_1_1unstable_1_1ExecutionTask.html#_CPPv4N4omni5graph4exec8unstable13ExecutionTask7executeEN4omni4core11ObjectParamI9IExecutorEE) is called.

## Ending Execution

In Listing 3, Listing 4, and Listing 5, we see they all end the same way: once the node has been executed, tell the executor to continue execution of the current graph definition by calling `IExecutor::continueExecute()`. As covered above, what "continue execution" means is defined by the executor, but a common approach is to visit the children of the node that was just executed. Once there are no more children to visit, the stack starts to unwind and the task is complete.
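The parent-counting gate at the heart of `ExecutionVisit` — release a node for scheduling only when the visit that arrives is the one completing its parent set — can be sketched independently of EF. All names below are hypothetical; EF's real logic lives in `ExecutionVisit::tryVisit()`:

```cpp
#include <cassert>
#include <cstddef>
#include <string>
#include <unordered_map>

// Illustrative stand-in for the visit-count gating: each edge into a node
// bumps a counter, and exactly one visitor -- the one supplying the final
// missing parent -- is told to schedule the node.
struct VisitGate
{
    std::unordered_map<std::string, std::size_t> visitCount;

    // Returns true when this visit completes the node's parent set,
    // i.e. the caller should now schedule the node for execution.
    bool tryVisit(const std::string& node, std::size_t requiredParents)
    {
        if (requiredParents == 0)
            return true; // nothing upstream to wait for
        return ++visitCount[node] == requiredParents;
    }
};
```

For a diamond graph `a -> {b, c} -> d`, the visits coming from `b` and `c` both reach `d`, but only the second one returns `true`, so `d` is scheduled exactly once even when both visits happen on different threads (the real implementation uses an atomic counter for this reason).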
## Generating Dynamic Work

Above, we saw how `ExecutorFallback` traverses from parent to child, generating a task per node once its parents have executed. That doesn't have to be the case though. An executor is free to generate many tasks per node. In fact, an executor can generate a task, and that task can generate additional tasks using `IExecutor::schedule(ScheduleFunction&&, SchedulingInfo)`.

## Deferred Execution

In Listing 8 you'll find references to "deferred" status (e.g. `Status::eDeferred`). Deferred execution refers to tasks that have been designated to finish outside of the current execution frame (i.e. outside of the call to `IExecutor::execute()`).

## Next Steps

In this article, an overview of graph execution was provided. For an in-depth guide to building your own executors, consult the Executor Creation guide.

This article concludes the EF concepts journey. Further your EF education by consulting one of the tutorials in the *Guides* section of the manual or explore more in-depth topics in the *Advanced* section.
ef-framework_Overview.md
# Execution Framework Overview

The Omniverse ecosystem enjoys a bevy of software components (e.g. PhysX, RTX, USD, OmniGraph, etc). These software components can be assembled together to form domain specific applications and services. One of the powerful concepts of the Omniverse ecosystem is that the assembly of these components is not limited to compile time. Rather, users are able to assemble these components on-the-fly to create tailor-made tools, services, and experiences.

With this great power come challenges. In particular, many of these software components are siloed and monolithic. Left on their own, they can starve other components of hardware resources and introduce non-deterministic behavior into the system. Often, the only way to integrate these components was with a "don't call me, I'll call you" model.

For such a dynamic environment to be viable, an intermediary must be present to guide these different components in a composable way. The **Execution Framework** is this intermediary.

The Omniverse Execution Framework's job is to orchestrate, at runtime, computation across different software components and logical application stages by decoupling the description of the compute from execution.

## Architecture Pillars

The Execution Framework (i.e. EF) has three main architecture pillars.

### Decoupled architecture

The first pillar is decoupling the authoring format from the computation back end. Multiple authoring front ends are able to populate EF's intermediate representation (IR). EF calls this intermediate representation the execution graph. Once populated by the front end, the execution graph is transformed and refined, taking into account the available hardware resources. By decoupling the authoring front end from the computation back end, developers are able to assemble software components without worrying about multiple hardware configurations.
Furthermore, the decoupling allows EF to optimize the computation for the current execution environment (e.g. HyperScale).

### Extendable architecture

The second pillar is extensibility. Extensibility allows developers to augment and extend EF's capabilities without changes to the core library. Graph transformations, traversals, execution behavior, computation logic, and scheduling are examples of EF features that can be extended by developers.

### Composable architecture

The third pillar of EF is **composability**. Composability is the principle of constructing novel building blocks out of existing smaller building blocks. Once constructed, these novel building blocks can be used to build yet other larger building blocks. In EF, these building blocks are nodes (i.e. `Node`). Nodes store two important pieces of information. The first piece they store is connectivity information to other nodes (i.e. topology edges). Second, they store the **computation definition**. Computation definitions in EF are defined by the `NodeDef` and `NodeGraphDef` classes. `NodeDef` defines opaque computation while `NodeGraphDef` contains an entirely new graph. It is via `NodeGraphDef` that EF derives its composability power.

The big picture of what EF is trying to do is simple: take all of the software components that wish to run, generate nodes/graphs for the computation each component wants to perform, add edges between the different software components' nodes/graphs to define execution order, and then optimize the graph for the current execution environment. Once the **execution graph** is constructed, an **executor** traverses the graph (in parallel when possible) making sure each software component gets its chance to compute.

## Practical Examples

Let's take a look at how Omniverse USD Composer, built with Omniverse Kit, handles the update of the USD stage. Kit maintains a list of extensions (i.e.
software components) that either the developer or user has requested to be loaded. These extensions register callbacks into Kit to be executed at fixed points in Kit's update loop. Using an empty scene, and USD Composer's default extensions, the populated execution graph looks like this:

*Figure: USD Composer's execution graph used to update the USD stage.*

Notice in the figure above that each node in the graph is represented as an opaque node, except for the OmniGraph (OG) front end. The OmniGraph node further refines the compute definition by expressing its update pipeline with *pre-simulation*, *simulation*, and *post-simulation* stages.

Below, we illustrate an example of a graph authored in OG that runs during the simulation stage of the OG pipeline. This example runs as part of Omniverse Kit, with a limited number of extensions loaded to increase the readability of the graph and to illustrate the dynamic aspect of the execution graph population.

*Figure: An example of the OmniGraph definition.*

Generating more fine-grained execution definitions allows OG to scale performance with available CPU resources.
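This nesting — pipeline-stage nodes whose definitions are themselves graphs — can be modeled with a small sketch. The types and names below are illustrative stand-ins, not EF's real `Node`/`NodeDef`/`NodeGraphDef` classes:

```cpp
#include <cassert>
#include <cstddef>
#include <memory>
#include <string>
#include <vector>

// Toy model of EF's composability: a definition is either opaque (a leaf
// chunk of work) or a nested graph of further definitions, so pipelines can
// be expanded level by level.
struct Def
{
    std::string name;
    std::vector<std::shared_ptr<Def>> children; // empty => opaque definition

    bool isOpaque() const { return children.empty(); }
};

// Count the opaque (leaf) definitions reachable from a definition -- the
// actual units of work a flattened execution graph would run.
std::size_t countOpaqueDefs(const Def& def)
{
    if (def.isOpaque())
        return 1;
    std::size_t total = 0;
    for (const auto& child : def.children)
        total += countOpaqueDefs(*child);
    return total;
}
```

In this toy model, an OG-style pipeline is just a `Def` whose children are the stage nodes, and a stage with actual graphs in it carries further children; recursing through the nesting enumerates the real work regardless of how deeply definitions are composed.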
The final example in this overview focuses on execution pipelines in Omniverse Kit. Leveraging all of the architecture pillars, we can start customizing per-application (and/or per-scene) execution pipelines. There is no longer a need to base execution ordering only on a fixed number or keep runtime components siloed. In the picture below, as a proof-of-concept, we define at runtime a new custom execution pipeline. This new pipeline runs before the "legacy" one ordered by a magic number and introduces fixed and variable update times. Extending the ability of OG to choose the pipeline stage in which it runs, we are able to place it anywhere in this new custom pipeline. Any other runtime component can do the same thing and leverage the EF architecture to orchestrate executions in their application.

*Figure: The customizable execution pipeline in Kit - POC.*

## Next Steps

Above we provided a brief overview of EF's philosophy and capabilities. Readers are encouraged to continue learning about EF by first reviewing Graph Concepts.
ef-graph-traversal-guide_GraphTraversalGuide.md
# Graph Traversal Guide This is a practitioner’s guide to using the Execution Framework. Before continuing, it is recommended you first review the [Execution Framework Overview](#ef-framework) along with basic topics such as [Graphs Concepts](#ef-graph-concepts), [Pass Concepts](#ef-pass-concepts), and [Execution Concepts](#ef-execution-concepts). **Graph traversal** – the systematic visitation of nodes within the IR – is an integral part of EF. EF contains several built-in traversal functions: - `traverseDepthFirst()` traverses a graph in depth-first order. - `traverseBreadthFirst()` traverses a graph in breadth-first order. - `traverseDepthFirstAsync()` traverses a graph in depth-first order, potentially initiating asynchronous work before visiting the next node. - `traverseBreadthFirstAsync()` traverses a graph in breadth-first order, potentially initiating asynchronous work before visiting the next node. The following sections examine a few code examples demonstrating how one can explore EF graphs in a customized manner using the available APIs. ## Getting Started with Writing Graph Traversals In order to further elucidate the concepts embedded in these examples, some of the traversals will be applied to the following sample IR graph \(G_1\) in order to see what the corresponding output would look like for a concrete case: Figure 19: An example IR graph \(G_1\). Note that each node’s downstream edges are ordered alphabetically with respect to their connected children nodes, e.g. for node \(a\), its first, second, and third edges are \(\{a,b\}\), \(\{a,c\}\), and \(\{a,d\}\), respectively. Also note that the below examples are all assumed to reside within the omni::graph::exec::unstable namespace. ### Print all Node Names Listing 26 shows how one can print out all top-level node names present in a given IR graph in serial DFS ordering using the VisitFirst policy. 
Here the term top-level refers to nodes that lie directly in the top-level execution graph definition; any nodes not contained in the execution graph's NodeGraphDef (implying that they are contained within other nodes' NodeGraphDefs) will not have their names printed with the below code-block.

Listing 26: Serial DFS using the VisitFirst strategy to print all top-level visited node names.

```cpp
std::vector<INode*> nodes;
traverseDepthFirst<VisitFirst>(myGraph->getRoot(), [&nodes](auto info, INode* prev, INode* curr) {
    std::cout << curr->getName() << std::endl;
    nodes.emplace_back(curr);
    info.continueVisit(curr);
});
```

If we applied the above code-block to \(G_1\), we would get the following ordered list of visited node names:

\[b \rightarrow e \rightarrow g \rightarrow c \rightarrow f \rightarrow d\]

Note that the root node \(a\) is ignored; since we started our visitations at \(a\), `prev` would point to \(a\) during the very first traversal step, and since we aren't printing `prev`, \(a\) doesn't show up in the output.

### Print all Node Traversal Paths **Recursively**

Listing 27 shows how one can recursively print the traversal paths (list of upstream nodes that were visited prior to reaching the current node) of all nodes present in a given IR graph in serial DFS ordering using the VisitFirst strategy; this will include all nodes that lie within other non-execution graph definitions (i.e. inside other nodes' NodeGraphDefs that are nested inside the execution graph definition), hence the need for recursion. The resultant list of nodes can be referred to as the member nodes of the flattened IR.
Listing 27: Serial DFS using the VisitFirst strategy to recursively print the traversal paths of all visited nodes.

```cpp
auto traversalFn = [](INodeGraphDef* nodeGraphDef,
                      INode* topLevelGraphRoot,
                      std::vector<INode*>& currentTraversalPath,
                      std::vector<std::pair<INode*, std::vector<INode*>>>& nodeTraversalPaths,
                      auto& recursionFn) -> void {
    traverseDepthFirst<VisitFirst>(
        nodeGraphDef->getRoot(),
        [nodeGraphDef, topLevelGraphRoot, &currentTraversalPath, &nodeTraversalPaths, &recursionFn](
            auto info, INode* prev, INode* curr) {
            // Remove node elements from the current path until we get back to a common
            // branching point for the current node.
            if (prev == topLevelGraphRoot)
            {
                currentTraversalPath.clear();
            }
            else if (!prev->isRoot())
            {
                while (!currentTraversalPath.empty() &&
                       currentTraversalPath.back()->getName() != prev->getName())
                {
                    currentTraversalPath.pop_back();
                }
            }

            // Add the node to the current traversal path. If the previous node was also a
            // graph root node, add it as well.
            if (prev->isRoot())
            {
                currentTraversalPath.emplace_back(prev);
            }
            currentTraversalPath.emplace_back(curr);

            // Store the current node's corresponding traversal path.
            nodeTraversalPaths.emplace_back(
                std::piecewise_construct, std::forward_as_tuple(curr), std::forward_as_tuple(currentTraversalPath));

            // Continue the traversal.
            INodeGraphDef* currNodeGraphDef = curr->getNodeGraphDef();
            if (currNodeGraphDef)
            {
                recursionFn(
                    currNodeGraphDef, topLevelGraphRoot, currentTraversalPath, nodeTraversalPaths, recursionFn);
            }
            info.continueVisit(curr);
        });
};

std::vector<INode*> currentTraversalPath;
std::vector<std::pair<INode*, std::vector<INode*>>> nodeTraversalPaths;
traversalFn(myGraph->getNodeGraphDef(), myGraph->getNodeGraphDef()->getRoot(), currentTraversalPath,
            nodeTraversalPaths, traversalFn);

// Print the results. Note that nodeTraversalPaths will be ordered in a serial, DFS,
// VisitFirst-like manner (since we continue traversal along the first edge).
```
```cpp
for (const std::pair<INode*, std::vector<INode*>>& namePathPair : nodeTraversalPaths)
{
    // Print the node's name.
    std::cout << namePathPair.first->getName() << ": ";

    // Print the node's traversal path.
    for (INode* const pathElement : namePathPair.second)
    {
        std::cout << pathElement->getName() << "/";
    }
    std::cout << std::endl;
}
```

Applying this logic to \(G_1\), the list of node traversal paths (paired with their names as well for further clarity, and ordered based on when each node was visited) would look something like this:

1. \(b: a/b\)
2. \(e: a/b/e\)
3. \(i: a/b/e/h/i\)
4. \(j: a/b/e/h/i/j\)
5. \(g: a/b/e/g\)
6. \(c: a/c\)
7. \(f: a/c/f\)
8. \(l: a/c/f/k/l\)
9. \(m: a/c/f/k/l/m\)
10. \(i: a/c/f/k/l/m/h/i\)
11. \(j: a/c/f/k/l/m/h/i/j\)
12. \(d: a/c/f/d\)

> **Note:** EF typically uses a more space-efficient path representation called the ExecutionPath when discussing nodal paths; the above example prints the explicit traversal path to highlight how the graph is crawled through.

### Print all Edges **Recursively**

Listing 28 uses the VisitAll strategy to *recursively* print all edges in the inputted graph in serial BFS ordering. Note that the choice of BFS is arbitrary (other search algorithms could have been chosen to still print all top-level edges, albeit in a different order); only the selection of VisitAll matters, since it enables us to actually explore all of the edges. Also note that traversal continues along the first discovered edge (similar to the VisitFirst policy).

Listing 28: Serial BFS using the VisitAll strategy to recursively print all edges in the inputted graph.
```cpp std::vector<std::pair<INode*, INode*>> edges; auto traversalFn = [&edges](INodeGraphDef* nodeGraphDef, auto& recursionFn) -> void { traverseBreadthFirst<VisitAll>(nodeGraphDef->getRoot(), [&edges, nodeGraphDef, &recursionFn](auto info, INode* prev, INode* curr) { std::cout << "{" << prev->getName() << ", " << curr->getName() << "}" << std::endl; edges.emplace_back(prev, curr); if (info.isFirstVisit()) { INodeGraphDef* currNodeGraphDef = curr->getNodeGraphDef(); if (currNodeGraphDef) { recursionFn(currNodeGraphDef, recursionFn); } info.continueVisit(curr); } }); }; traversalFn(myGraph->getNodeGraphDef(), traversalFn); ``` Running this traversal on \(G_1\) would produce the following list of edges (in the order that they are visited): \[ \begin{split} &\set{a,b} \rightarrow \set{a,c} \rightarrow \set{a,d} \rightarrow \set{b,e} \rightarrow \set{a/b/e/h,a/b/e/h/i} \rightarrow \set{a/b/e/h/i,a/b/e/h/i/j} \rightarrow \set{a/b/e/h/i/j,a/b/e/h/i} \rightarrow \set{c,f} \rightarrow \set{k,l} \\ &\rightarrow \set{l,m} \rightarrow \set{a/c/f/k/l/m/h,a/c/f/k/l/m/h/i} \rightarrow \set{a/c/f/k/l/m/h/i,a/c/f/k/l/m/h/i/j} \rightarrow \set{a/c/f/k/l/m/h/i/j,a/c/f/k/l/m/h/i} \\ &\rightarrow \set{d,f} \rightarrow \set{e,g} \rightarrow \set{f,d} \rightarrow \set{f,g} \end{split} \] Note that for node instances which share the same definition (e.g. \(i\), \(j\), etc.), we’ve used their full traversal path for clarity’s sake. ### Print all Node Names **Recursively** in **Topological Order** Listing 29 highlights how one can *recursively* print out all node names in *topological order* using the `VisitLast` strategy, meaning that no node will be visited until all of its parents have been visited. Note that any traversal, whether it be a serial DFS, serial BFS, parallel DFS, parallel BFS, or something else entirely, can be considered topological as long as it employs the `VisitLast` strategy; this example has opted to utilize a serial DFS approach. 
Listing 29: Serial DFS using the VisitLast strategy to recursively print all visited node names in topological order.

```cpp
std::vector<INode*> nodes;
auto traversalFn = [&nodes](INodeGraphDef* nodeGraphDef, auto& recursionFn) -> void {
    traverseDepthFirst<VisitLast>(nodeGraphDef->getRoot(),
        [&nodes, nodeGraphDef, &recursionFn](auto info, INode* prev, INode* curr) {
            std::cout << curr->getName() << std::endl;
            nodes.emplace_back(curr);
            INodeGraphDef* currNodeGraphDef = curr->getNodeGraphDef();
            if (currNodeGraphDef)
            {
                recursionFn(currNodeGraphDef, recursionFn);
            }
            info.continueVisit(curr);
        });
};
traversalFn(myGraph->getNodeGraphDef(), traversalFn);
```

In the case of \(G_1\), we would obtain the following ordered node name list:

\[b \rightarrow e \rightarrow a/b/e/h/i \rightarrow a/b/e/h/i/j \rightarrow c \rightarrow f \rightarrow l \rightarrow m \rightarrow a/c/f/k/l/m/h/i \rightarrow a/c/f/k/l/m/h/i/j \rightarrow d \rightarrow g\]

### Using Custom `NodeUserData`

Listing 30 showcases how one can pass custom node data into the traversal methods to tackle problems that would otherwise be much more inconvenient (or downright impossible) to solve if the API were missing that flexibility. In this case we are using the `SCC_NodeData` struct to store per-node information that is necessary for implementing Tarjan's algorithm for strongly connected components; this is what ultimately allows us to create the global graph transformation pass responsible for detecting cycles in the graph.

```cpp
class PassStronglyConnectedComponents : public Implements<IGlobalPass>
{
public:
    static omni::core::ObjectPtr<PassStronglyConnectedComponents> create(
        omni::core::ObjectParam<exec::unstable::IGraphBuilder> builder)
    {
        return omni::core::steal(new PassStronglyConnectedComponents(builder.get()));
    }

protected:
    PassStronglyConnectedComponents(IGraphBuilder*)
    {
    }

    void run_abi(IGraphBuilder* builder) noexcept override
    {
        _detectCycles(builder, builder->getTopology());
    }

private:
    void _detectCycles(IGraphBuilder* builder, ITopology* topology)
    {
    }
};
```

The body of `_detectCycles()` is shown below.
```cpp
{
    struct SCC_NodeData
    {
        size_t index{0};
        size_t lowLink{0};
        uint32_t cycleParentCount{0};
        bool onStack{false};
    };

    size_t globalIndex = 0;
    std::stack<INode*> globalStack;

    traverseDepthFirst<VisitAll, SCC_NodeData>(
        topology->getRoot(),
        [this, builder, &globalIndex, &globalStack](auto info, INode* prev, INode* curr) {
            auto pushStack = [&globalStack](INode* node, SCC_NodeData& data) {
                data.onStack = true;
                globalStack.push(node);
            };
            auto popStack = [builder, &info, &globalStack]() {
                auto* top = globalStack.top();
                globalStack.pop();
                auto& userData = info.userData(top);
                userData.onStack = false;
                auto node = exec::unstable::cast<exec::unstable::IGraphBuilderNode>(top);
                node->setCycleParentCount(userData.cycleParentCount);
                return top;
            };

            auto& userData = info.userData(curr);
            auto& userDataPrev = info.userData(prev);
            if (info.isFirstVisit())
            {
                userData.index = userData.lowLink = globalIndex++;
                pushStack(curr, userData);
                info.continueVisit(curr);
                userDataPrev.lowLink = std::min(userDataPrev.lowLink, userData.lowLink);
            }
        });
}
```

```cpp
if (userData.lowLink == userData.index)
{
    auto* top = popStack();
    if (top != curr)
    {
        while (top != curr)
        {
            top = popStack();
        }
    }
}

auto nodeGraph = curr->getNodeGraphDef();
if (nodeGraph)
{
    this->_detectCycles(builder, nodeGraph->getTopology());
}
```

```cpp
if (!userData.onStack)
{
    userData.index = counter;
    userData.lowLink = counter;
    counter++;
    pushStack(curr);
    userData.onStack = true;
    for (auto& edge : curr->getEdges())
    {
        auto* dst = edge.getDst();
        if (dst->getUserData().index == -1)
        {
            depthFirstSearch(dst, builder);
            userData.lowLink = std::min(userData.lowLink, dst->getUserData().lowLink);
        }
        else if (userData.onStack)
        {
            userData.lowLink = std::min(userData.lowLink, dst->getUserData().index);
        }
    }
    if (userData.lowLink == userData.index)
    {
        auto* top = popStack();
        if (top != curr)
        {
            while (top != curr)
            {
                top = popStack();
            }
        }
    }
}
else if (userData.onStack)
{
    userDataPrev.lowLink = std::min(userDataPrev.lowLink, userData.index);
    userData.cycleParentCount++;
}
```

### Next Steps

To learn more about graph traversals in the context of EF, see [Graph Traversal In-Depth](#ef-graph-traversal-advanced).
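For reference, the classic form of Tarjan's algorithm that Listing 30 adapts to EF's interfaces can be written standalone, without any EF types. This sketch operates on a plain adjacency list rather than `INode`/`ITopology`:

```cpp
#include <algorithm>
#include <cassert>
#include <functional>
#include <vector>

// Standalone Tarjan's strongly connected components. Nodes are 0..n-1 and
// adj[v] lists v's children. Returns one vector of node ids per component;
// any component with more than one node contains a cycle.
std::vector<std::vector<int>> tarjanScc(const std::vector<std::vector<int>>& adj)
{
    const int n = static_cast<int>(adj.size());
    std::vector<int> index(n, -1), lowLink(n, 0);
    std::vector<bool> onStack(n, false);
    std::vector<int> stack;
    std::vector<std::vector<int>> components;
    int counter = 0;

    std::function<void(int)> dfs = [&](int v) {
        index[v] = lowLink[v] = counter++;
        stack.push_back(v);
        onStack[v] = true;
        for (int w : adj[v])
        {
            if (index[w] == -1) // tree edge: recurse, then pull up the low-link
            {
                dfs(w);
                lowLink[v] = std::min(lowLink[v], lowLink[w]);
            }
            else if (onStack[w]) // back edge into the current DFS stack
            {
                lowLink[v] = std::min(lowLink[v], index[w]);
            }
        }
        if (lowLink[v] == index[v]) // v is the root of a component: pop it off
        {
            std::vector<int> component;
            int w;
            do
            {
                w = stack.back();
                stack.pop_back();
                onStack[w] = false;
                component.push_back(w);
            } while (w != v);
            components.push_back(component);
        }
    };

    for (int v = 0; v < n; ++v)
        if (index[v] == -1)
            dfs(v);
    return components;
}
```

The EF version in Listing 30 follows the same structure: `info.userData()` plays the role of the `index`/`lowLink`/`onStack` arrays, `info.isFirstVisit()` distinguishes tree edges from back edges, and the popped components are used to mark cycle-parent counts on the builder nodes.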
ef-pass-concepts_PassConcepts.md
# Pass Concepts

This article covers core concepts found in EF's passes/graph transformations. Readers are encouraged to review both the [Execution Framework Overview](Overview.html#ef-framework) and [Graph Concepts](GraphConcepts.html#ef-graph-concepts) before diving into this article.

Now that we understand the underlying structure of an execution graph, let's dive into the graph transformations to see how the population and partitioning of `NodeGraphDef` is done to achieve the final topology.

## Pass Pipeline

`PassPipeline` is the main orchestrator of graph construction. It composes the final topology of the graph, leveraging passes from `PassRegistry`. It is possible to register different passes, some only known to the pass pipeline. To build the graph, the pipeline will instantiate a `GraphBuilder` for each visited definition and give the builder to each of the passes selected to run on the definition. Pass instances are not reused, i.e. each time a pass is selected to run, it will be allocated, run, and immediately destroyed. Definitions can be shared by multiple nodes, and care is taken by `PassPipeline` to only process a definition once per topology.

Passes are grouped by `PassType`, with each having specific responsibilities and permissions. To learn more about pass types, consult the documentation for `IPopulatePass`, `IPartitionPass`, and `IGlobalPass`.

## Populate and Partitioning Passes

Graph construction typically starts by running populate passes (i.e. `IPopulatePass`) over each node in the graph. If the topology was altered during this step, the pipeline will run partitioning passes (i.e. `IPartitionPass`) on the graph. If partitioning generated a new `Node` or `NodeGraphDef`, the pipeline again runs the population passes on the new entities. Partitioning runs only once on the graph, which means there won't be a second partitioning pass over the topology if the second run of the population passes altered it.
This is because population only alters definitions one level deeper than the currently processed topology.

## Global Passes

Once the entire topology of the execution graph is processed by population and partitioning passes (potentially in a threaded manner), the pipeline will give a chance to global passes (i.e. `IGlobalPass`) to run. Because global passes have such a broad impact on both the graph and transformation performance, their use is discouraged.

## Graph Builders

When passes operate to create or alter the topology of a graph, they rely on `GraphBuilder` to perform topology modification. Under the hood, the builder implementation will leverage a private `IGraphBuilderNode` interface. Relying directly on the `IGraphBuilderNode` interface is strongly discouraged.

## Transformation Algorithm

The following pseudo-code represents the overall graph transformation procedure. For simplicity, it illustrates serial execution, but in Omniverse Kit, the pipeline processes nodes concurrently.

```text
PROC PopulatePass(context, nodeGraphDef)
    graphBuilder <- create new instance for given nodeGraphDef
    FOR node IN nodes in topology in DFS order from root
        CALL PopulateNode(node)
    IF graphBuilder recorded modifications to the construction stamp
        CALL PartitionPass()

PROC PartitionPass(context, nodeGraphDef)
    graphBuilder <- create new instance for given nodeGraphDef
    partitionPassInstances <- allocate and store pass instances that successfully initialize for nodeGraphDef
    FOR node IN nodes in topology in DFS order from root
        FOR initializedPass IN partitionPassInstances
            initializedPass.run(node)
    FOR initializedPass IN partitionPassInstances
        initializedPass.commit(graphBuilder)
    FOR newNodes IN graphBuilder
        CALL PopulateNode(newNodes)

PROC GlobalPass(context, nodeGraphDef)
    FOR global pass from registry
        passInstance <- allocate new instance
        CALL passInstance.run()

PROC PopulateNode(node)
    IF node has registered populate pass
        populatePassInstance <- allocate new instance
        populatePassInstance.run()
```
```text
    ELSE IF node has NodeGraphDef definition and populate pass exists for it
        populatePassInstance <- allocate new instance
        populatePassInstance.run()
    IF node has NodeGraphDef
        CALL PopulatePass(context, node.getNodeGraphDef())

PROC GraphTransformations(context, nodeGraphDef)
    IF nodeGraphDef needs construction
        CALL PopulatePass
    CALL GlobalPass
```

*Figure: An example of a constructed execution graph.*

Graph transformation starts with a basic pipeline defined at the top level `NodeGraphDef`.

*Figure 11: Basic execution pipeline with custom and legacy pipeline stages.*

While traversing the top-level definition, `StageUpdateDef` is created by a populate pass registered for the `kit.legacyPipeline` node.

*Figure 12: Legacy pipeline with loaded nodes from StageUpdate.*

The `PopulatePass` procedure from our pseudo-code is now recursively called to expand the definition of any node represented as part of `kit.def.legacyPipeline`. In the example we are exploring, we have several OmniGraph population passes registered. The first one created the execution pipeline for OmniGraph.

*Figure 13: Expanded OmniGraph definition containing nodes representing its pipeline stages: Pre-Simulation -> Simulation -> Post-Simulation.*

OmniGraph registers populate passes for each pipeline stage it created. These passes populate each pipeline stage's node with a generic graph definition if the pipeline stage contains nodes in OG. In this example, an action graph is in the simulation pipeline stage. Both the pre-simulation and post-simulation stages are empty.

*Figure 14: OG's populate passes create EF nodes for each OG graph in each OG pipeline stage.*
Here we see the Simulation stage contains an Action Graph. Finally, population runs on `og.def.graph_execution` and expands the `NodeGraphDef` to a custom one with an `Executor` responsible for both generating and scheduling work.

## Graph Transformations

Fully populated execution graph after all graph transformations. Here we see the */World/ActionGraph* node has been populated with a definition that describes the OmniGraph Action Graph.

## Next Steps

In this article, an overview of graph transformations/graph construction was provided. For an in-depth guide to building your own passes, consult the Pass Creation documentation.
ef-plugin-creation_PluginCreation.md
# Plugin Creation

This is a practitioner's guide to using the Execution Framework. Before continuing, it is recommended you first review the Execution Framework Overview along with basic topics such as Graph Concepts, Pass Concepts, and Execution Concepts.

The Execution Framework is a graph of graphs. EF allows users, with their own code, to:

- Build the graph
- Optimize the graph
- Define how/when nodes in the graph are executed
- Provide chunks of code to execute in the graph
- Customize how graph data is stored
- Define custom schedulers to dispatch the graph's tasks

The primary method used to extend EF's functionality is to subclass from EF's implementations of its core interfaces: Node, NodeDef, NodeGraphDef, ExecutionContext, Executor, PopulatePass, PartitionPass, etc. A reasonable question is, "How are these custom user implementations instantiated by EF?" In short:

- `ExecutionContext` objects are usually instantiated by the application.
- `Node` objects are usually instantiated by implementations of `NodeGraphDef`.
- `Executor` objects are instantiated by implementations of `NodeGraphDef`.
- `NodeGraphDef` objects are usually instantiated by passes (e.g. `PopulatePass`).
- `NodeDef` objects are usually instantiated by passes (e.g. `PopulatePass`).
- Passes are instantiated by `PassPipeline`, which uses a global registry of available passes.
- `PassPipeline` is usually instantiated by the application.

Visually:

Above, we see there are two objects the application will instantiate: `PassPipeline` and `ExecutionContext`. The implementations instantiated here will be application specific. The creation of all other entities can be tied back to passes. As mentioned above, passes are instantiated by the application's `PassPipeline`, which accesses a global registry of available passes. This global registry, available via the global `getPassRegistry()` function, can be populated by user plugins.
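This registration pattern (a global registry that plugin code populates as a side effect of being loaded) can be modeled in a few lines of Python. This is an illustrative sketch; names such as `register_populate_pass` are made up for the example, not EF's API.

```python
# Illustrative model of a global pass registry populated at plugin-load time.
# The helper names here are hypothetical, not EF's real API.
_pass_registry = []

def register_populate_pass(pass_factory, name_to_match):
    """Model of a registration macro: record a pass factory plus
    the node name it should be applied to."""
    _pass_registry.append((name_to_match, pass_factory))

# a "plugin" registers its pass when it is loaded (imported)
class GreetPass:
    def run(self, node):
        print("hi")

register_populate_pass(GreetPass, "ef.example.greet")

# later, the pass pipeline instantiates passes from the registry
def instantiate_passes_for(node_name):
    return [factory() for match, factory in _pass_registry if match == node_name]

passes = instantiate_passes_for("ef.example.greet")
print(len(passes))  # one pass instance was created for the matching node name
```

The key property modeled here is that the registry is global: the pipeline never needs to know which plugin contributed a pass, only that loading the plugin populated the registry.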
In this article, we do not cover application-level customization, such as `PassPipeline` and `ExecutionContext`, since such customizations are rare when using the Kit SDK (Kit already does this for you). We will cover how users can create their own plugins to define their own passes, and thereby their own nodes, definitions, and executors.

Omniverse has two methods to define plugins: *Carbonite Plugins* and *Omniverse Modules*.

## Creating an Omniverse Module

The minimum needed to implement an Omniverse module can be found in the *omni.kit.exec.example-omni* extension.

### Listing 10 Example of defining an Omniverse Module using the Kit SDK.

```c++
#include "OmniExamplePass.h"

#include <omni/core/Omni.h>
#include <omni/core/ModuleInfo.h>
#include <omni/graph/exec/unstable/PassRegistry.h>
#include <omni/kit/exec/core/unstable/Module.h>

// we need the name in a couple of places so we define it once here
#define MODULE_NAME "omni.kit.exec.example-omni.plugin"

// this is required by omniverse modules
OMNI_MODULE_GLOBALS(
    MODULE_NAME,                         // name of the module
    "Example Execution Framework Module" // description of the module
);

// this registers the OmniExamplePass population pass. any time a node named "ef.example.greet" is seen, this pass will
// attach a definition to the node that will print out "hi".
//
// this macro can be called from any .cpp file in the DLL, but must be called at global scope.
OMNI_GRAPH_EXEC_REGISTER_POPULATE_PASS(OmniExamplePass, "ef.example.greet");

namespace
{

omni::core::Result onLoad(const omni::core::InterfaceImplementation** out, uint32_t* outCount)
{
    // this method can be used to register default implementations for objects. for example, omni.kit.exec.core uses
    // this method to register its singletons: IExecutionControllerFactory, IExecutionGraphSettings, ITbbSchedulerState,
    // etc.
    //
    // this function is not used in this example.
    return omni::core::kResultSuccess;
}

// called once the DLL is loaded
void onStarted()
{
    // this macro must be called by any DLL providing EF functionality (e.g. passes). it will register any passes found
    // in the module with EF.
    OMNI_KIT_EXEC_CORE_ON_MODULE_STARTED(
        MODULE_NAME,
        []()
        {
            // this optional function is called when any EF module is unloaded. the purpose of this function is to
            // remove references to any objects that may potentially be unloaded.
        });
}

// tells the framework that this module can be unloaded
bool onCanUnload()
{
    return true;
}

// called when the DLL is about to be unloaded
void onUnload()
{
    // if OMNI_KIT_EXEC_CORE_ON_MODULE_STARTED() is called, this macro must also be called. it will inform EF that the
    // DLL is about to be unloaded. additionally this macro will unregister any passes registered by the DLL.
    OMNI_KIT_EXEC_CORE_ON_MODULE_UNLOAD();
}

} // end of anonymous namespace

// main entry point called by the carbonite framework.
OMNI_MODULE_API omni::core::Result omniModuleGetExports(omni::core::ModuleExports* exports)
{
    OMNI_MODULE_SET_EXPORTS(exports);
    OMNI_MODULE_ON_MODULE_LOAD(exports, onLoad);
    OMNI_MODULE_ON_MODULE_STARTED(exports, onStarted);
    OMNI_MODULE_ON_MODULE_CAN_UNLOAD(exports, onCanUnload);
    OMNI_MODULE_ON_MODULE_UNLOAD(exports, onUnload);
    return omni::core::kResultSuccess;
}
```

Building the DLL is build-system dependent, but when using the Kit SDK, the snippet from `source/extensions/omni.kit.exec.example-omni/premake5.lua` should do the job:

### Listing 11 Example of building an Omniverse Module using the Kit SDK.

The **omni.kit.exec.example-omni** extension is a fully functioning extension found at `source/extensions/omni.kit.exec.example-omni/`. It includes much more than what is presented above, for example, how to create tests for your EF extension. It is a suitable starting point for your own EF extension.
## Creating a Carbonite Plugin

The minimum needed to implement a Carbonite plugin can be found in the **omni.kit.exec.example-carb** extension:

### Listing 12 Example of defining a Carbonite plugin using the Kit SDK.

```c++
#define CARB_EXPORTS // must be defined (folks often forget this)

#include "CarbExamplePass.h"

#include <carb/PluginUtils.h>
#include <omni/graph/exec/unstable/PassRegistry.h>
#include <omni/kit/exec/core/unstable/Module.h>

// we need the name in a couple of places so we define it once here
#define MODULE_NAME "omni.kit.exec.example-carb.plugin"

// CARB_PLUGIN_IMPL must be called with an interface. this is an example interface.
//
// if your plugin does not publish any interfaces, consider using Omniverse Modules rather than a Carbonite Plugin.
struct IExampleInterface
{
    CARB_PLUGIN_INTERFACE("omni::graph::exec::example::IExampleInterface", 1, 0)
};

void fillInterface(IExampleInterface& iface)
{
    // used to populate your interface
}

// required. describes the plugin to the carbonite framework.
const struct carb::PluginImplDesc kPluginImpl = {
    MODULE_NAME, "Example Execution Framework Plugin", "NVIDIA",
    carb::PluginHotReload::eDisabled, "dev"
};

// call CARB_PLUGIN_IMPL_DEPS if your plugin has static dependencies. this plugin does not.
CARB_PLUGIN_IMPL_NO_DEPS();

// required. describes the carbonite interfaces this plugin provides
CARB_PLUGIN_IMPL(
    kPluginImpl,
    IExampleInterface // add any carbonite interfaces here
)

// this registers the CarbExamplePass population pass. any time a node named "ef.example.greet" is seen, this pass will
// attach a definition to the node that will print out "hi".
//
// this macro can be called from any .cpp file in the DLL, but must be called at global scope.
OMNI_GRAPH_EXEC_REGISTER_POPULATE_PASS(CarbExamplePass, "ef.example.greet");

// called once the DLL is loaded
CARB_EXPORT bool carbOnPluginStartupEx()
{
    // this macro must be called by any DLL providing EF functionality (e.g. passes). it will register any passes found
    // in the module with EF.
    OMNI_KIT_EXEC_CORE_ON_MODULE_STARTED(
        MODULE_NAME,
        []()
        {
            // this optional function is called when any EF module is unloaded. the purpose of this function is to
            // remove references to any objects that may potentially be unloaded.
        });
    return true;
}

// called right before the DLL will be unloaded
CARB_EXPORT void carbOnPluginShutdown()
{
    // if OMNI_KIT_EXEC_CORE_ON_MODULE_STARTED() is called, this macro must also be called. it will inform EF that the
    // DLL is about to be unloaded. additionally this macro will unregister any passes registered by the DLL.
    OMNI_KIT_EXEC_CORE_ON_MODULE_UNLOAD();
}
```

Building the DLL is build-system dependent, but when using the Kit SDK, the following snippet from `source/extensions/omni.kit.exec.example-carb/premake5.lua` should do the job:

```lua
-- start the omnigraph/omni.kit.exec.example-carb project..
project_ext(ext, { generate_ext_project=false })

-- target: omnigraph/omni.kit.exec.example-carb/omni.kit.exec.example-carb.plugin
--
-- builds the c++ code
project_ext_plugin(ext, ext.id..".plugin")
    add_files("/impl", "plugin") -- add plugins directory to files to be built
    exceptionhandling "On" -- api layer is allowed to throw exceptions (abi is not)
    rtti "Off" -- shouldn't be needed since we're using oni
```

## Deciding on Which Approach to Take

When implementing new EF functionality, it is recommended to use Omniverse modules. Omniverse modules work well with EF's ONI based interfaces. Additionally, if you plan on providing your own ONI interfaces that encapsulate global state that needs to be accessed across many DLLs, Omniverse modules allow you to register interfaces via `omni::core::ITypeFactory`.
See *omni.kit.exec.core* for an example.

If you are extending an existing Carbonite plugin with EF functionality (e.g. *omni.graph.core*), using the existing Carbonite plugin is the path of least resistance. By taking this approach, your new EF implementation will be able to access implementation details of existing functionality located in the same plugin.

## Avoiding Crashes at Exit

> Note
> This section covers a crash-on-exit problem often seen when using the Kit SDK. The solution provided is not implemented in the core EF library; rather, it is implemented in the *omni.kit.exec.core* extension, which bridges EF with Kit. Both the problem and solution are presented here, in the core EF docs, to help users of EF outside of the Kit SDK understand potential edge cases with EF integration.

Applications based on the Kit SDK will shut down each extension/plugin/module before exit. This can lead to unexpected crashes when DLLs depend upon each other. This coupling of functionality between DLLs is often the case in EF.

As an example, consider the *omni.graph.action* extension, which provides definitions and passes to implement OmniGraph's Action Graph extension. The *omni.graph.action* extension depends upon *omni.graph.core*, which in turn depends upon *omni.kit.exec.core*, which depends upon the core EF extension (*omni.graph.exec*). When the application starts, this dependency information is used to load *omni.graph.exec* first, followed by *omni.kit.exec.core* second, then *omni.graph.core*, and finally *omni.graph.action*. During shutdown, the extensions are unloaded in reverse order.

```mermaid
flowchart LR
    oge[omni.graph.exec] -- Provides PassRegistry To --> okec[omni.kit.exec.core]
    okec -- Provides ExecutionController To --> ogc[omni.graph.core]
    ogc -- Provides OG To --> oga[omni.graph.action]
    oga -. Stores Data In .--> ogc
    ogc -. Stores Data In .--> okec
```

Safely unloading extensions is no easy task.
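The load/unload ordering described above can be sketched with a tiny dependency walk. The extension names come from the text; the resolver itself is illustrative, not Kit's extension manager.

```python
# Illustrative sketch of Kit's extension load/unload ordering, using the
# dependency chain described in the text (not the real extension manager).
deps = {
    "omni.graph.exec": [],
    "omni.kit.exec.core": ["omni.graph.exec"],
    "omni.graph.core": ["omni.kit.exec.core"],
    "omni.graph.action": ["omni.graph.core"],
}

def load_order(target):
    """Load dependencies first (depth-first), then the extension itself."""
    order = []

    def visit(ext):
        for dep in deps[ext]:
            visit(dep)
        if ext not in order:
            order.append(ext)

    visit(target)
    return order

loads = load_order("omni.graph.action")
unloads = list(reversed(loads))  # shutdown unloads in reverse order

print(loads)    # core EF extension loads first
print(unloads)  # omni.graph.action unloads first
```

The crash scenario arises because the dotted "Stores Data In" edges run against this unload order: an already-unloaded extension's code can still be referenced by a not-yet-unloaded one.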
Explicit extension dependencies are depicted with solid lines. Implicit reference counting dependencies are depicted with dotted lines.

During shutdown, *omni.graph.action* will unload without issue. However, when unloading *omni.graph.core* you're likely to see a crash when OmniGraph destructs its internal objects. This is because OmniGraph stores an `ObjectPtr` to each EF definition it creates. This isn't a bug, as it allows OmniGraph to quickly and precisely invalidate parts of EF's execution graph. However, during shutdown, definitions provided by *omni.graph.action* will crash, because attempting to invoke their destructors will call into unloaded code.

EF's solution to this problem is `OMNI_KIT_EXEC_CORE_ON_MODULE_STARTED()`. This macro's second argument is a callback invoked when any EF module is unloaded, giving each extension a chance to release references to objects whose code may be about to be unloaded.

## Next Steps

Above, we covered the creation of plugins to extend EF's functionality. Readers are encouraged to move on to either

- Definition Creation
- Pass Creation
- Executor Creation

to begin implementing new graphs.
embedded_kit_python.md
# Embedded Python

## Hello Python

Run `> kit.exe --exec your_script.py` to run your script using **Kit** Python.

## Using system Python

When the Python interpreter is initialized, system-defined environment variables (like `PYTHONHOME`, `PYTHONPATH`) are ignored. Instead, the following setting is used for python home:

- `/plugins/carb.scripting-python.plugin/pythonHome` instead of [PYTHONHOME](https://docs.python.org/3.7/using/cmdline.html?highlight=pythonhome#envvar-PYTHONHOME)

> **Note**
> You can find default values for this setting in the `kit-core.json` file.

To use a system-level Python installation, override `PYTHONHOME`, e.g.: `--/plugins/carb.scripting-python.plugin/pythonHome="C:\Users\bob\AppData\Local\Programs\Python\Python310"`.

Changing `PYTHONHOME` won't change the loaded Python library. This is platform specific, but for instance on Windows, **Kit** is linked with `python.dll` and loads the one that is in the package using standard dll search rules. However, the standard library, `site-packages`, and everything else will be used from the specified python path.

## Add extra search paths

To add extra search paths (to `sys.path`), the `/app/python/extraPaths` setting can be used. For example:

```
> kit.exe --/app/python/extraPaths/0="C:/temp"
```

or in a kit file:

```toml
[settings]
app.python.extraPaths = ["C:/temp"]
```

To summarize, these are all the methods to extend `sys.path`:

- Create a new extension with `[python.module]` definitions (recommended).
- Explicitly in python code: `sys.path.append(...)`
- The `/app/python/extraPaths` setting.
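All of these methods ultimately extend `sys.path`, which Python's import system searches in order. The effect can be demonstrated with plain Python, independent of Kit (the module name and directory below are made up for illustration):

```python
import os
import sys
import tempfile

# Create a throwaway directory containing a module, then make it importable by
# appending the directory to sys.path -- the same effect the
# /app/python/extraPaths setting achieves inside Kit.
extra_dir = tempfile.mkdtemp()
with open(os.path.join(extra_dir, "my_extra_module.py"), "w") as f:
    f.write("VALUE = 42\n")

sys.path.append(extra_dir)

import my_extra_module  # now discoverable by the import system
print(my_extra_module.VALUE)  # → 42
```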
## Other Python Configuration Tweaks

Most python configuration variables can be changed using the following settings:

| config variable | python flag documentation |
| --- | --- |
| `/plugins/carb.scripting-python.plugin/Py_VerboseFlag` | Py_VerboseFlag |
| `/plugins/carb.scripting-python.plugin/Py_QuietFlag` | Py_QuietFlag |
| `/plugins/carb.scripting-python.plugin/Py_NoSiteFlag` | Py_NoSiteFlag |
| `/plugins/carb.scripting-python.plugin/Py_IgnoreEnvironmentFlag` | Py_IgnoreEnvironmentFlag |
| `/plugins/carb.scripting-python.plugin/Py_NoUserSiteDirectory` | Py_NoUserSiteDirectory |
| `/plugins/carb.scripting-python.plugin/Py_UnbufferedStdioFlag` | Py_UnbufferedStdioFlag |
| `/plugins/carb.scripting-python.plugin/Py_IsolatedFlag` | Py_IsolatedFlag |

## Using `numpy`, `Pillow` etc.

**Kit** comes with the `omni.kit.pip_archive` extension which has a few popular Python modules bundled into it. Have a look inside of it on the filesystem. After this extension is started you can freely do `import numpy`. Declare a dependency on this extension in your extension, or enable it by any other means, to use any of them. E.g.: run `> kit.exe --enable omni.kit.pip_archive --exec use_numpy.py` to run your script that can import and use `numpy`.

## Using Anaconda

As a starting point, change the `PYTHONHOME` setting described above to point to an Anaconda environment: `--/plugins/carb.scripting-python.plugin/pythonHome="C:/Users/bob/anaconda3/envs/py37"`. It is known to work for some packages and fail for others, on a case-by-case basis.

## Using other packages from pip

For most Python packages (installed with any package manager or locally developed) it is enough to add them to the search path (`sys.path`). That makes them discoverable by the python import system. Any of the methods described above can be used for that.
Alternatively, **Kit** has the `omni.kit.pipapi` extension to install modules from the `pip` package manager at runtime. It will check whether the package is available, and if not, will try to pip install it and cache it. Example of usage: `omni.kit.pipapi.install("some_package")`. After that call, import the installed package. Enabling the `omni.kit.pipapi` extension will allow specification of pip dependencies by extensions loaded after it. Refer to the `omni.kit.pipapi` doc.

At build-time, any Python module can be packaged into any extension, including packages from pip. That can be done using other Python installations or kit Python. This is the recommended way, so that when an extension is downloaded and installed, it is ready to use. There is also no requirement for connectivity to public registries, and no runtime cost during installation.

## Why do some native Python modules not work in **Kit**?

It is common for something that works out of the box as-installed from *pip* or *Anaconda* not to work in **Kit**. Or vice versa, the **Kit** Python module doesn't load outside of **Kit**. For pure Python modules (only `*.py` files), finding the root cause might be a matter of following import errors. However, when it involves loading native Python modules (`*.pyd` files on Windows and `*.so` files on Linux), errors are often not really helpful.

Native Python modules are just regular OS shared libraries, with a special **C API** that Python looks for. They also are often implicitly linked with other libraries. When loaded, they might not be able to find other libraries, or be in conflict with already loaded libraries. Those issues can be debugged as any other library loading issue, specific to the OS. Some examples are:

- Exploring `PATH` / `LD_LIBRARY_PATH` env vars.
- Exploring libraries that are already loaded by the process.
- Using tools like Dependency Walker.
- Trying to isolate the issue, by loading in a simpler or more similar environment.

**Kit** doesn't do anything special in this regard, and can be treated as just another instance of Python, with a potentially different set of loaded modules.

## Running **Kit** from Python

Normally the `kit.exe` process starts and loads an embedded Python library. **Kit** provides Python bindings to its core runtime components. This allows you to start Python, and then start **Kit** from that Python. It is an experimental feature, and not used often. An example can be found within the **Kit** package: `example.pythonapp.bat`.

Differences from running normally:

- A different Python library file is used (different `python.dll`).
- There may be some GIL implications, because the call stack is different.
- Allows explicit control over the update loop.
enabling-the-extension_overview.md
# Overview

## Overview

Viewport Next is a preview of the next generation of Kit's Viewport. It was designed to be as light as possible, providing a way to isolate features and compose them as needed to create unique experiences. This documentation will walk through a few simple examples using this technology, as well as how it can be used in tandem with the `omni.ui.scene` framework.

## What is a Viewport

Exactly what a viewport is can be a bit ill-defined and dependent on what you are trying to accomplish, so it's best to define some terms up front and explain what this documentation is targeting. At a very high level, a Viewport is a way for a user to visualize (and often interact with) a Renderer's output of a scene.

When you create a "Viewport Next" instance via Kit's Window menu, you are actually creating a hierarchy of objects. The three objects of interest in this hierarchy are:

1. The `ViewportWindow`, which we will be re-implementing as `StagePreviewWindow`.
2. The `ViewportWidget`, one of which we will be instantiating.
3. The `ViewportTexture`, which is created and owned by the `ViewportWidget`.

While we will be using (or re-implementing) all three of those objects, this documentation is primarily targeted towards understanding the `ViewportWidget` and its usage in the `omni.kit.viewport.stage_preview`. After creating a Window and instance of a `ViewportWidget`, we will finally add a camera manipulator built with `omni.ui.scene` to interact with the `Usd.Stage`, as well as control aspects of the Renderer's output to the underlying `ViewportTexture`.

Even though the `ViewportWidget` is our main focus, it is good to understand the backing `ViewportTexture` is independent of the `ViewportWidget`, and that a texture's resolution may not necessarily match the size of the `ViewportWidget` it is contained in. This is particularly important for world-space queries or other advanced usage.
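Because the texture resolution can differ from the widget size, mapping a position inside the widget into the rendered texture requires scaling. A minimal, Kit-independent sketch of that mapping (the function name and sizes are made up for illustration, not the Kit API):

```python
def widget_to_texture(pos, widget_size, texture_resolution):
    """Map a pixel position inside the widget to the corresponding pixel in the
    backing texture, which may have a different resolution."""
    wx, wy = pos
    ww, wh = widget_size
    tw, th = texture_resolution
    # normalize to 0..1 inside the widget, then scale to the texture
    return (wx / ww * tw, wy / wh * th)

# widget displayed at 1280x720, but the texture rendered at only 640x480:
# the widget's center must land on the texture's center, not on pixel (640, 360)
print(widget_to_texture((640, 360), (1280, 720), (640, 480)))  # → (320.0, 240.0)
```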
## Enabling the Extension

To enable the extension and open a "Viewport Next" window, go to the "Extensions" tab and enable the "Viewport Window" extension (`omni.kit.viewport.window`).

## Simplest example

The `omni.kit.viewport.stage_preview` adds additional features that may make a first read of the code a bit harder. So before stepping through that example, let's take a moment to reduce it to an even simpler case where we create a single Window and add only a Viewport which is tied to the default `UsdContext` and `Usd.Stage`. We won't be able to interact with the Viewport other than through Python, but because we are associated with the default `UsdContext`: any changes in the `Usd.Stage` (from navigation or editing in another Viewport or adding a `Usd.Prim` from the Create menu) will be reflected in our new view.

```python
import omni.ui
from omni.kit.widget.viewport import ViewportWidget

viewport_window = omni.ui.Window('SimpleViewport', width=1280, height=720+20)  # Add 20 for the title-bar
with viewport_window.frame:
    viewport_widget = ViewportWidget(resolution = (1280, 720))

# Control of the ViewportTexture happens through the object held in the viewport_api property
viewport_api = viewport_widget.viewport_api

# We can reduce the resolution of the render easily
viewport_api.resolution = (640, 480)

# We can also switch to a different camera if we know the path to one that exists
viewport_api.camera_path = '/World/Camera'

# And inspect
print(viewport_api.projection)
print(viewport_api.transform)

# Don't forget to destroy the objects when done with them
# viewport_widget.destroy()
# viewport_window.destroy()
# viewport_window, viewport_widget = None, None
```
enterprise-install.md
# Enterprise Install Guide ## Licensing Need walkthrough steps on setting up your Omniverse Enterprise account and getting your licenses in order? Review the Omniverse Enterprise Licensing Quick Start Guide for more information. ## Enterprise Nucleus Server The following documentation is available to help you properly plan, deploy, and configure an Enterprise Nucleus Server: - Hardware Sizing Guide - Information on server sizing for your environment - Planning Your Installation - Best practices, requirements, and prerequisites - Installing an Enterprise Nucleus Server - An easy step-by-step guide for successful installation ## Launcher Deployment Options The Omniverse Launcher is available in two versions: the Workstation Launcher and the IT Managed Launcher. Omniverse Enterprise customers may choose either version depending on their deployment preference. - The Workstation Launcher offers a complete experience and does not require IT management for application installation or updates. The Omniverse Workstation Launcher requires network connectivity and an NVIDIA account. - The IT Managed Launcher is designed to be used in an air-gapped or tightly controlled environment, and does not require network connectivity or an NVIDIA account. Installation and updates of Omniverse applications are managed by the IT administrator for end users. Both the Workstation Launcher and the IT Managed Launcher are available from the NVIDIA Licensing Portal. ## Virtual Workstation Deployments Kit based apps (including USD Composer, USD Presenter, etc.) can be run in a virtualized environment using NVIDIA’s vGPU products. The Virtual Deployment Guide provides an overview of how to set up a vGPU environment capable of hosting Omniverse. Additionally, Omniverse Virtual Workstations can be run in a Cloud Service Provider (CSP) using the how-to guides here.
ErrorHandling.md
# Error Handling

This document outlines how the Execution Framework (i.e. EF) handles errors. EF errors fit into one of the following broad categories:

- Memory allocation errors.
- Invalid pointers passed to the API.
- Unmet API preconditions.
- Failure to build the execution graph.
- Failure to execute.
- Failure to retrieve a node's data.

Most APIs in EF are expected to never fail and as such do not return a result indicating success or failure. The general approach taken by EF is to terminate the program when unrecoverable errors or programmer errors are detected. For errors generated by plugins (i.e. developer authored executors and passes), it is up to the developer to report errors via either the integration layer (e.g. `omni.kit.exec.core`) or authoring layer (e.g. `omni.graph.core`). The following sections explore the topics above in-depth.

## Memory Allocation Errors

EF allocates memory on the heap during both graph construction and execution. The size of each allocation is generally small (less than 1KB). Because of the small size of each allocation, if an allocation fails, EF considers the system's memory to be exhausted and no reasonable action can be taken to free memory. The system is in a bad state, and as such, EF terminates the application. This termination happens in two ways:

- When allocations via `new` fail, an exception is thrown. Since functions in EF are marked `noexcept`, an uncaught exception triggers `std::unexpected()`, which by default calls `std::terminate()`.
- When allocations via `std::malloc()` or `carb::allocate()` fail, the bad allocation is detected and the application terminated via `OMNI_GRAPH_EXEC_FATAL_UNLESS()`.

## Invalid Pointers

EF is a low-level API designed with speed in mind. As such, EF spends little time validating and reporting bad input to its API. The expectation is that the developer is providing valid input. When invalid input is provided, EF immediately terminates the application.
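To make the fail-fast idea concrete outside C++, here is a small Python model. It is illustrative only, not the EF macros: a check helper terminates the program the moment invalid input is detected, instead of returning an error code the caller might ignore.

```python
import sys

def fatal_unless(condition, message):
    """Model of a fail-fast precondition check: stop the program immediately on
    failure rather than returning an error the caller might ignore."""
    if not condition:
        print(f"fatal: {message}", file=sys.stderr)
        raise SystemExit(1)  # stand-in for terminating the process

def print_name(node):
    # precondition: node must not be None
    fatal_unless(node is not None, "node must not be None")
    print(node["name"])

print_name({"name": "kit.legacyPipeline"})  # valid input, prints the name

try:
    print_name(None)  # invalid input: the program would terminate here
except SystemExit as e:
    exit_code = e.code

print(exit_code)  # → 1
```

The benefit modeled here is the same one described below: a bad argument produces an immediate, loud failure at the call site, rather than a corrupted state discovered much later.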
While seemingly harsh, this "fail-fast" approach has several benefits:

- Developers often neglect to handle errors returned from APIs. This neglect can lead to the application being in an unexpected state and generate hard to find bugs.
- By failing-fast and terminating the application, API misuse is captured by Omniverse's Carbonite Crash Reporter. During local development, the crash reporter immediately reports the stack trace of any API misuse. During testing, the reporter logs the API misuse and generates telemetry. This telemetry can be aggregated and examined to find API misuse across Omniverse's suite of products before said products ship to customers.

To implement this fail-fast strategy, EF primarily uses two macros: `OMNI_GRAPH_EXEC_ASSERT()` and `OMNI_GRAPH_EXEC_FATAL_UNLESS_ARG()`.

`OMNI_GRAPH_EXEC_ASSERT()` is used to validate that a supplied pointer is not `nullptr`. Its use is preferred when the pointer will be dereferenced by the function before it returns. The reason for this is two-fold:

1. `OMNI_GRAPH_EXEC_ASSERT()` checks the given pointer only in debug builds. This means there is no performance penalty in release builds.
2. Since the pointer will be used by the function performing the check, in release builds a crash will be generated (and reported) due to dereferencing the null pointer.

The latter point suggests `OMNI_GRAPH_EXEC_ASSERT()` is not strictly needed. While true, `OMNI_GRAPH_EXEC_ASSERT()` serves as "code as documentation" and provides a helpful message when the check fails. Following you can see an example of when it is appropriate to use `OMNI_GRAPH_EXEC_ASSERT()`.

```c++
void printName(INode* node) noexcept
{
    OMNI_GRAPH_EXEC_ASSERT(node); // prints a useful message in debug builds if node is nullptr

    // if node is nullptr, a crash will be triggered and reported in the release build.
    //
    // prefer using OMNI_GRAPH_EXEC_ASSERT() to check if an input parameter is nullptr when
    // the pointer is immediately used by the function. this means you'll get a helpful message in
    // debug builds and an easy to debug crash in release builds.
    std::cout << node->getName() << std::endl;
}
```

The next macro used is `OMNI_GRAPH_EXEC_FATAL_UNLESS_ARG()`. EF prefers using this macro when the input pointer is not immediately used, but rather stored for later use. `OMNI_GRAPH_EXEC_FATAL_UNLESS_ARG()` has the benefit of performing the `nullptr` check in both debug and release builds. By checking the pointer in both build flavors, we avoid hard to debug situations where the stored pointer is later used and unexpectedly `nullptr`. When encountering such a situation, questions such as "Was the pointer passed `nullptr`?" or "Was the stored pointer corrupted due to an overrun?" are reasonable. Checking for `nullptr` when the pointer is stored helps answer questions like these much easier.

Below, you can see an example use of `OMNI_GRAPH_EXEC_FATAL_UNLESS_ARG()`.

```c++
void MyObject::setDef(IDef* def) noexcept
{
    // prints a useful message in both release and debug builds if def is nullptr
    OMNI_GRAPH_EXEC_FATAL_UNLESS_ARG(def);

    // here we store def for later use. by checking if def is nullptr above, we can quickly
    // debug why m_def is nullptr when later used.
    m_def = def;
}
```

## Unmet Preconditions

To avoid the generation of hard to investigate bugs, EF lists expected preconditions for each part of its API and terminates the program if any of these preconditions are not met. Preconditions that are not `nullptr` checks are usually checked with the `OMNI_GRAPH_EXEC_FATAL_UNLESS()` macro. This macro performs the precondition check in both release and debug builds.
An example of one of these checks follows:

```cpp
PassTypeRegistryEntry getPassAt_abi(uint64_t index) noexcept override
{
    OMNI_GRAPH_EXEC_FATAL_UNLESS(index < passes.size());
    return {
        passes[index].id,
        passes[index].name.c_str(),
        passes[index].factory.get(),
        &(passes[index].nameToMatch),
        passes[index].priority
    };
}
```

For hot code paths, `OMNI_GRAPH_EXEC_ASSERT()` can be used instead to eliminate the performance cost of these checks in release builds.

## Failure to Build the Execution Graph

Graph construction is handled by user plugins via passes. The main method in these passes is the `run()` method (e.g. `IPopulatePass::run()`). `run()` does not report errors; it is up to the implementor of `run()` to handle and report them. How a developer handles errors is their choice. They may choose to flag to the integration layer that the graph should not be executed, to populate the graph with “pass-through” nodes, or to report the error via an authoring-level API or an integration-layer API. The main message here is that EF assumes graph construction will succeed; if it does not, it is up to the developer to handle and report the failure during construction and ensure the program is in a defined state.

## Failure During Graph Execution

Failures are expected during graph execution. For example, it is reasonable to assume that a node that makes an I/O request may periodically fail. EF’s execution APIs are designed to flag that a task failed, but that’s it. EF does not contain APIs to describe the failure or even associate a failure with nodes or definitions. EF’s execution APIs generally return a `Status` object, which is a bit-field of possible execution outcomes. When using the default `ExecutorFallback`, nodes downstream of a failing node are still executed and their resulting `Status` values are OR’d together. The end result is a single `Status` object.
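The bit-field behavior of a `Status` object can be illustrated with a small sketch. This is illustrative only: the flag names and values below are hypothetical, not EF's actual `Status` definition.

```python
import enum


class Status(enum.IntFlag):
    """Illustrative execution-status bit-field (names and values are hypothetical)."""
    NONE = 0
    SUCCESS = 1 << 0
    FAILURE = 1 << 1
    DEFERRED = 1 << 2


def combine(statuses):
    """OR together the outcomes of several tasks, as the fallback executor is
    described as doing for downstream nodes."""
    result = Status.NONE
    for s in statuses:
        result |= s
    return result


overall = combine([Status.SUCCESS, Status.FAILURE, Status.SUCCESS])
print(bool(overall & Status.FAILURE))  # True: a single failing task is visible in the result
```

Because the outcomes are OR'd rather than overwritten, a single failed task remains visible in the combined result even when every other task succeeded.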
## Failure to Retrieve Node Data

Node data needed by the graph during construction and execution is stored in `IExecutionContext`. This context allows each instance of a node to store arbitrary data based on the node’s `path` and a user-defined key. The data is accessed with the `IExecutionContext::getNodeData()` method, which returns a pointer to the data.

The pointer returned by this method may be `nullptr`. Here we run into a design decision: does `nullptr` mean the data was never set, or does it mean the data was set, but set to `nullptr`? EF is designed to allow for the latter scenario. A returned `nullptr` means the data was explicitly set to `nullptr`. In order to handle the case where the data was never set, `IExecutionContext::getNodeData()` returns an `omni::expected`. `omni::expected` contains either the “expected” value or an “unexpected” value. For `IExecutionContext::getNodeData()`, it contains the value of the pointer set by the user or an `omni::core::Result` with a value of `omni::core::kResultNotFound`.

An example of valid usage of this API is as follows:

```c++
auto data = OMNI_GRAPH_EXEC_GET_NODE_DATA_AS(
    task->getContext(),         // pointer to either IExecutionContext or IExecutionStateInfo
    GraphContextCacheOverride,  // the type of the data to retrieve
    task->getUpstreamPath(),    // node path
    tokens::kInstanceContext    // key to use as a lookup in the node's key/value datastore
);

if (data)
{
    GraphContextCacheOverride* item = data.value();
    // ...
}
else
{
    omni::core::Result badResult = data.error(); // e.g. kResultNotFound (see docs)
    // ...
}
```

An alternative usage of the API can be seen here:

```c++
auto data = OMNI_GRAPH_EXEC_GET_NODE_DATA_AS(
    task->getContext(),         // pointer to either IExecutionContext or IExecutionStateInfo
    GraphContextCacheOverride,  // the type of the data to retrieve
    task->getUpstreamPath(),    // node path
    tokens::kInstanceContext    // key to use as a lookup in the node's key/value datastore
).data();                       // will throw an exception if the result is unexpected
```

Above, by not checking whether the `omni::expected` holds an unexpected value, `omni::expected` will throw an exception. This exception can be caught by the developer. If the exception is not caught, it will eventually reach an ABI boundary, call `std::unexpected()`, and terminate the program. Such a strategy is useful when the missing node data represents an unexpected state in the program.

## Exceptions

EF does not use exceptions to report errors; rather, it uses the error-reporting strategies outlined above. This fact introduces two questions developers may ask:

- Can I use exceptions in my EF plugin?
- What happens if I throw an exception and don’t catch it?

Developers are free to use exceptions in their plugins. However, if an exception crosses an ABI boundary (i.e., escapes a function postfixed with `_abi`), the following will happen:

- The C++ runtime will invoke `std::unexpected()`, which by default calls `std::terminate()`.
- In Omniverse applications, `std::terminate()` has been set to be handled by Omniverse’s Carbonite Crash Reporter. The reporter will generate a `.dmp` file for later inspection, print out a stack trace, upload the `.dmp` to Omniverse’s crash aggregation system, and produce telemetry describing the context of the crash.

In short, developers should feel free to use exceptions. If an exception can be handled, it should be caught and appropriate cleanup actions performed.
If an exception represents an undefined state, it can be ignored so that it is reported by the crash reporting system, which will terminate the ill-defined application.
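As a rough analogy (not the real API), the value-or-error behavior of `omni::expected` used by `getNodeData()` above — distinguishing "never set" from "explicitly set to null", with `.data()` throwing on an unexpected result — can be sketched in Python. All names here are illustrative stand-ins.

```python
class Expected:
    """Minimal value-or-error container, loosely mirroring omni::expected.

    Illustrative sketch only; the real omni::expected is a C++ type with a
    different interface.
    """

    def __init__(self, value=None, error=None):
        self._value = value
        self._error = error

    def __bool__(self):
        # truthy when the expected value is present (even if that value is None)
        return self._error is None

    def value(self):
        return self._value

    def error(self):
        return self._error

    def data(self):
        """Return the value, or raise if the result is unexpected."""
        if self._error is not None:
            raise RuntimeError(f"unexpected result: {self._error}")
        return self._value


kResultNotFound = "kResultNotFound"  # stand-in for omni::core::kResultNotFound


def get_node_data(store, key):
    """Sketch of the getNodeData() contract: a missing key is an unexpected
    kResultNotFound, while a key explicitly set to None is a valid result."""
    if key not in store:
        return Expected(error=kResultNotFound)
    return Expected(value=store[key])


store = {"/node": None}
print(bool(get_node_data(store, "/node")))    # True: set, even though the value is None
print(get_node_data(store, "/other").error()) # kResultNotFound
```

The key point the sketch captures is that the "expected" branch can legitimately carry a null value, so emptiness must be signaled out-of-band via the error channel.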
# Event streams

## API/design overview

The singleton `IEvents` interface is used to create `IEventStream` objects. Whenever an event is pushed into an event stream, the **immediate** callbacks are triggered, and the event stream stores the event in its internal event queue. Events can then be popped from the queue one by one, or all at once (also called pumping the stream), at which point the **deferred** callbacks are triggered. The event stream owner typically controls where this pumping happens.

Event consumers can subscribe to both immediate (push) and deferred (pop) callbacks. Subscription functions return an `ISubscription` object, which usually unsubscribes automatically upon destruction. Callbacks are wrapped in the `IEventListener` class, which allows context to be bound to the subscription. When triggered, the callback receives an `IEvent` parameter describing the event that triggered it. `IEvent` contains the event type, the sender id, and a custom payload, which is stored as a `carb.dictionary` item.

## Recommended usage

The events subsystem is flexible, and there are several recommendations intended to help with the most frequent use-cases, as well as to clarify specific parts of the events logic.

### Deferred callbacks

As opposed to immediate callback invocation, the recommended way of using event streams is through the deferred callback mechanism, unless immediate callbacks are absolutely necessary. When an event is pushed into an event stream, it is fairly common that the subsequent immediate callback is not a safe place to modify, or even read, related data outside the event payload. To avoid corruption, it is recommended to use deferred callbacks, which are triggered at a place the event stream owner has deemed safe.
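The push/pop split described above can be sketched with a toy stream. This is purely illustrative — the real `IEventStream` lives in `carb.events` and has a richer interface — but it shows why deferred delivery gives the owner control over *when* callbacks run.

```python
class ToyEventStream:
    """Illustrative event stream: immediate callbacks fire inside push();
    deferred callbacks fire only when the owner pump()s the queue."""

    def __init__(self):
        self._queue = []
        self._on_push = []  # immediate subscribers
        self._on_pop = []   # deferred subscribers

    def subscribe_to_push(self, fn):
        self._on_push.append(fn)

    def subscribe_to_pop(self, fn):
        self._on_pop.append(fn)

    def push(self, event):
        for fn in self._on_push:  # immediate: possibly an unsafe time to touch shared state
            fn(event)
        self._queue.append(event)

    def pump(self):
        events, self._queue = self._queue, []
        for event in events:      # deferred: the owner chose a safe point to deliver
            for fn in self._on_pop:
                fn(event)


stream = ToyEventStream()
seen = []
stream.subscribe_to_pop(seen.append)
stream.push("resize")
# nothing delivered yet; the deferred callback runs only when the owner pumps
stream.pump()
print(seen)  # ['resize']
```

Note that a push subscriber would have run during `push()` itself, in the middle of whatever operation produced the event — exactly the situation the recommendation above warns about.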
### Event types

Each event carries an event type, which is set when the event is pushed into the stream and can be specified when a consumer subscribes to an event stream. This can be used to narrow down the number of callback invocations, which is especially important when listening to an event stream from a scripting language.

It is recommended to use string hashes as event types, as this helps avoid managing event type allocation when multiple sources can push events into the same stream. In C++, use `CARB_EVENTS_TYPE_FROM_STR`, which provides a 64-bit FNV-1a hash computed at compile time, or its run-time counterpart, `carb::events::typeFromString`. In Python, `carb.events.type_from_string` can be used.

An important event stream design choice: either create multiple event streams, each serving a fairly limited number of event types, or create one single event stream serving many different event types. The latter approach is more akin to an event bus with many producers and consumers. Event buses are useful when designing a system that is easily extendable.

### Transient subscriptions

If you want to implement a deferred action triggered by some event, instead of subscribing to the event on startup and then checking the action queue on each callback trigger, consider using a transient subscription. This approach involves subscribing to the event stream only once you have a specific action you want to execute in a deferred manner. When the subscription callback is triggered, you execute the action and immediately unsubscribe, so you don’t introduce an empty callback that ticks unconditionally each time the event happens. A transient subscription can also include a simple counter, so that you execute your code only on the Nth event, not necessarily the next one.
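The transient-subscription pattern, including the optional counter, might look like the sketch below. The stream here is a pure-Python stand-in; with the real API you would hold the `ISubscription` returned by `create_subscription_to_pop` and drop it (or call its unsubscribe method) inside the callback.

```python
class ToyStream:
    """Minimal stand-in for an event stream that supports unsubscribing."""

    def __init__(self):
        self._subs = []

    def subscribe(self, fn):
        self._subs.append(fn)
        return lambda: self._subs.remove(fn)  # "unsubscribe" handle

    def pump(self, event):
        for fn in list(self._subs):  # copy: callbacks may unsubscribe mid-pump
            fn(event)


def run_on_nth_event(stream, action, n=1):
    """Subscribe only once a concrete action exists; unsubscribe after it runs."""
    state = {"count": 0}

    def on_event(event):
        state["count"] += 1
        if state["count"] >= n:  # execute only on the Nth event
            action(event)
            unsubscribe()        # drop the callback immediately afterwards

    unsubscribe = stream.subscribe(on_event)


stream = ToyStream()
fired = []
run_on_nth_event(stream, fired.append, n=2)
stream.pump("tick-1")  # ignored (count == 1)
stream.pump("tick-2")  # action runs, then unsubscribes
stream.pump("tick-3")  # no subscriber left to tick
print(fired)  # ['tick-2']
```

The benefit is visible in the third pump: once the action has run, no callback remains to be invoked on every subsequent event.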
## Code examples

### Subscribe to Shutdown Events

```python
# App/Subscribe to Shutdown Events

import carb.events
import omni.kit.app

# Stream where app sends shutdown events
shutdown_stream = omni.kit.app.get_app().get_shutdown_event_stream()

def on_event(e: carb.events.IEvent):
    if e.type == omni.kit.app.POST_QUIT_EVENT_TYPE:
        print("We are about to shutdown")

sub = shutdown_stream.create_subscription_to_pop(on_event, name="name of the subscriber for debugging", order=0)
```

### Subscribe to Update Events

```python
# App/Subscribe to Update Events

import carb.events
import omni.kit.app

update_stream = omni.kit.app.get_app().get_update_event_stream()

def on_update(e: carb.events.IEvent):
    print(f"Update: {e.payload['dt']}")

sub = update_stream.create_subscription_to_pop(on_update, name="My Subscription Name")
```

### Create custom event

```python
# App/Create Custom Event

import carb.events
import omni.kit.app

# Event type is a unique integer id. Create it from a string by hashing, using the helper function.
# [ext name].[event name] is a recommended naming convention:
MY_CUSTOM_EVENT = carb.events.type_from_string("omni.my.extension.MY_CUSTOM_EVENT")

# App provides a common event bus. It is an event queue which is popped every update (frame).
bus = omni.kit.app.get_app().get_message_bus_event_stream()

def on_event(e):
    print(e.type, e.type == MY_CUSTOM_EVENT, e.payload)

# Subscribe to the bus. Keep subscription objects (sub1, sub2) alive for the subscription to work.

# Push subscription is called immediately when the event is pushed
sub1 = bus.create_subscription_to_push_by_type(MY_CUSTOM_EVENT, on_event)

# Pop subscription is called on the next update
sub2 = bus.create_subscription_to_pop_by_type(MY_CUSTOM_EVENT, on_event)

# Push an event to the bus with a custom payload
bus.push(MY_CUSTOM_EVENT, payload={"data": 2, "x": "y"})
```
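For reference, the 64-bit FNV-1a hash that `CARB_EVENTS_TYPE_FROM_STR` and `carb.events.type_from_string` are described as computing can be sketched as follows. This is the standard FNV-1a algorithm with its usual 64-bit parameters, not Carbonite's actual implementation, so the resulting values are not guaranteed to match Carbonite's.

```python
def fnv1a_64(s: str) -> int:
    """Standard 64-bit FNV-1a hash over the UTF-8 bytes of s."""
    h = 0xCBF29CE484222325                     # FNV-1a 64-bit offset basis
    for byte in s.encode("utf-8"):
        h ^= byte
        h = (h * 0x100000001B3) % (1 << 64)    # FNV 64-bit prime, wrapped to 64 bits
    return h


MY_CUSTOM_EVENT = fnv1a_64("omni.my.extension.MY_CUSTOM_EVENT")
print(hex(fnv1a_64("")))  # 0xcbf29ce484222325 (the offset basis)
```

Because the hash is a pure function of the string, producers and consumers in different plugins can agree on an event type without any central registry of integer ids — which is exactly why string hashes are recommended above.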
# Example: Multiple Projects in a Repo This is an example of a nested documentation project. This project was defined as follows in `repo.toml`: ```toml [repo_docs.projects.nested-project] # example-begin version_selector_enabled version_selector_enabled = false # example-end version_selector_enabled name_in_nav_bar_enabled = true enhanced_search_enabled = false # example-begin solr-search # enable the use of solr search solr_search_enabled = true solr_search_site = "https://docs.nvidia.com" solr_search_path = "/cuda" # example-end solr-search # example-begin temporary-links temporary_links = [ { source = "../repo_docs-link-example", link_path = "tmp" } ] # example-end temporary-links # docs_root should be redefined per-project docs_root = "examples/nested-project" # most keys can be redefined. if a key is not redefined, it inherits the key's value # from the root [repo_docs] table. name = "Example: Nested Project" # we want to link back to repo_docs from this build so we add it as a dependency deps = [ [ "repo_docs", "_build/docs/repo_docs/latest" ], ] ``` See [Defining Multiple Projects](../../repo_docs/0.51.4/docs/Projects.html#multiple-projects-overview) for more information on defining, building, and publishing sub-projects.
# Example: Project with Extra Builds This is an example of a project (i.e. “project-with-extra-builds” in `repo.toml`) that defines multiple builds. The project defines two builds: - public - internal ```toml # this defines the "public" build [repo_docs.projects.project-with-extra-builds] docs_root = "examples/project-with-extra-builds" name = "Example: Project with Extra Builds" # we don't want "internal-only.rst" in the public build sphinx_exclude_patterns = [ "internal-only.rst", "tools" ] # we want to link back to repo_docs from this build so we add it as a dependency deps = [ ["repo_docs", "_build/docs/repo_docs/latest"], ] # this defines the "internal" build [repo_docs.projects.project-with-extra-builds.builds.internal] # settings are inherited from the "public" build, but can be redefined as we # do with 'name' here: name = "Example: Project with Extra Builds (Internal)" # reset the exclude patterns so that "internal-only.rst" isn't excluded sphinx_exclude_patterns = [ "tools" ] ``` Above, “public” does not need to be specified because it is considered the default build. Snippets of documentation can be conditionally included based on the build. Consider the following example: ```rst .. ifconfig:: build_name in ('internal') .. note:: This text will only appear in the "internal" build of the documentation. .. ifconfig:: build_name in ('public') .. note:: This text will only appear in the "public" build of the documentation. ``` The snippet above produces the following note in this build of the documentation: > **Note** > This text will only appear in the “public” build of the documentation. For more information on defining multiple builds, see [Multiple Builds](#).
# Examples

## Simplified submenu creation with build_submenu_dict

This creates a dictionary of lists from the `name` paths in `MenuItemDescription`, expanding each path and creating (multiple, if required) `sub_menu` lists. The last item on the path is assumed not to be a sub-menu item.

```python
menu_dict = omni.kit.menu.utils.build_submenu_dict([
    MenuItemDescription(name="File/Open"),
    MenuItemDescription(name="Edit/Select/Select by kind/Group"),
    MenuItemDescription(name="Window/Viewport/Viewport 1"),
    MenuItemDescription(name="Help/About"),
])
```

## Using add_menu_items

```python
for group in menu_dict:
    omni.kit.menu.utils.add_menu_items(menu_dict[group], group)
```

## Using remove_menu_items

```python
for group in menu_dict:
    omni.kit.menu.utils.remove_menu_items(menu_dict[group], group)
```

## Another example: adding a menu with a submenu for your extension

```python
from omni.kit.menu.utils import MenuItemDescription
import carb.input

def on_startup(self, ext_id):
    self._file_menu_list = [
        MenuItemDescription(
            name="Sub Menu Example",
        )
    ]
```

```python
import carb
import asyncio
import omni.ext
import omni.ui as ui
import omni.kit.menu.utils
from omni.kit.menu.utils import MenuItemDescription
from .window import ExampleWindow


class TestMenu(omni.ext.IExt):
    """The entry point for Example Extension"""

    WINDOW_NAME = "Example"
    MENU_DESCRIPTION = "Example Window"
    MENU_GROUP = "TEST"

    def on_startup(self):
        print(f"[{self.__class__.__name__}] on_startup")
        ui.Workspace.set_show_window_fn(TestMenu.WINDOW_NAME, lambda v: self.show_window(None, v))
        self._menu_entry = [MenuItemDescription(
            name=TestMenu.MENU_DESCRIPTION,
            ticked=True,                 # menu item is ticked
            ticked_fn=self._is_visible,  # called when the menu needs the state of the ticked item
            onclick_fn=self._toggle_window
        )]
        omni.kit.menu.utils.add_menu_items(self._menu_entry, name=TestMenu.MENU_GROUP)
        ui.Workspace.show_window(TestMenu.WINDOW_NAME)

    def on_shutdown(self):
        print(f"[{self.__class__.__name__}] on_shutdown")
        omni.kit.menu.utils.remove_menu_items(self._menu_entry, name=TestMenu.MENU_GROUP)
        self._menu_entry = None
        ui.Workspace.set_show_window_fn(TestMenu.WINDOW_NAME, None)
        if self._window:
            self._window.destroy()
            self._window = None

    async def _destroy_window_async(self):
        print(f"[{self.__class__.__name__}] _destroy_window_async")
        # wait one frame, this is due to the one frame defer
        # in Window::_moveToMainOSWindow()
        await omni.kit.app.get_app().next_update_async()
        if self._window:
            self._window.destroy()
            self._window = None

    def _is_visible(self) -> bool:
        print(f"[{self.__class__.__name__}] _is_visible returning {False if self._window is None else self._window.visible}")
        return False if self._window is None else self._window.visible

    def _show(self):
        print(f"[{self.__class__.__name__}] _show")
        if self._window is None:
            self.show_window(None, True)
        if self._window and not self._window.visible:
            self.show_window(None, True)

    def _hide(self):
        print(f"[{self.__class__.__name__}] _hide")
        if self._window is not None:
            self.show_window(None, False)

    def _toggle_window(self):
        print(f"[{self.__class__.__name__}] _toggle_window")
        if self._is_visible():
            self._hide()
        else:
            self._show()

    def _visiblity_changed_fn(self, visible):
        print(f"[{self.__class__.__name__}] _visiblity_changed_fn")
        if not visible:
            # Destroy the window, since we are creating a new window
            # in show_window
            asyncio.ensure_future(self._destroy_window_async())
        # this only tags the test menu to update when the menu is opening, so it
        # doesn't matter that it is called before the window has been destroyed
        omni.kit.menu.utils.refresh_menu_items(TestMenu.MENU_GROUP)

    def show_window(self, menu, value):
        print(f"[{self.__class__.__name__}] show_window menu:{menu} value:{value}")
        if value:
            self._window = ExampleWindow()
            self._window.set_visibility_changed_listener(self._visiblity_changed_fn)
        elif self._window:
            self._window.visible = False
```

## Window class

```python
import omni.ui as ui


class ExampleWindow(ui.Window):
    """The Example window"""

    def __init__(self, usd_context_name: str = ""):
        print(f"[{self.__class__.__name__}] __init__")
        super().__init__("Example Window", width=300, height=300)
        self._visiblity_changed_listener = None
        self.set_visibility_changed_fn(self._visibility_changed_fn)

    def destroy(self):
        """
        Called by extension before destroying this object. It doesn't happen automatically.
        Without this hot reloading doesn't work.
        """
        print(f"[{self.__class__.__name__}] destroy")
        self._visiblity_changed_listener = None
        super().destroy()

    def _visibility_changed_fn(self, visible):
        print(f"[{self.__class__.__name__}] _visibility_changed_fn visible:{visible}")
        if self._visiblity_changed_listener:
            self._visiblity_changed_listener(visible)

    def set_visibility_changed_listener(self, listener):
        print(f"[{self.__class__.__name__}] set_visibility_changed_listener listener:{listener}")
        self._visiblity_changed_listener = listener
```
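Conceptually, `build_submenu_dict` at the top of this page takes slash-separated `name` paths and groups them by their first component. The grouping idea (not the real implementation, which returns `MenuItemDescription` objects with nested `sub_menu` lists rather than plain strings) can be sketched as:

```python
def group_menu_paths(paths):
    """Group 'File/Open'-style paths by their top-level menu name.

    Illustrative only: the real build_submenu_dict expands the remainder of each
    path into nested sub_menu lists instead of keeping it as a string.
    """
    groups = {}
    for path in paths:
        top, _, rest = path.partition("/")
        # a path with no '/' is treated as a leaf item in its own group
        groups.setdefault(top, []).append(rest or top)
    return groups


menu_dict = group_menu_paths([
    "File/Open",
    "Edit/Select/Select by kind/Group",
    "Window/Viewport/Viewport 1",
    "Help/About",
])
print(sorted(menu_dict))  # ['Edit', 'File', 'Help', 'Window']
```

This mirrors the usage shown earlier, where each top-level key of the returned dictionary is passed as the group name to `add_menu_items` and `remove_menu_items`.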
# HelloPythonExtension

## HelloPythonExtension

```
class example.python_ext.HelloPythonExtension
```

Bases: `omni.ext._extensions.IExt`

### Methods

| Method | Description |
| ------ | ----------- |
| `on_shutdown()` | |
| `on_startup(ext_id)` | |

```
def __init__(self: omni.ext._extensions.IExt) -> None
```
# example.python_ext ## Submodules Summary: | Module | Description | |--------|-------------| | example.python_ext.python_ext | No submodule docstring provided | ## Classes Summary: | Class | Description | |-------|-------------| | HelloPythonExtension | | ## Functions Summary: | Function | Description | |----------|-------------| | some_public_function | |