immutableoctet / glare
Glare: Open Source Game Engine written in Modern C++
License: MIT License
We should try building the reflection source files together, since they're a significant build-time bottleneck. For some reason, MSVC doesn't want to build them in parallel anyway, so combining them shouldn't cost us any build scalability, either.
Current kinematic resolution has a tendency to stop an entity dead in its tracks. Something like the following might help make an applied correction less oppressive toward the entity's intended motion:
abs(dot(approach_direction, surface_forward)) * penetration_amount * surface_forward
Haven't tested this, or looked into it any deeper than brainstorming.
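As a rough sketch of the idea above (untested, like the note says): scale the pushed-out correction by how directly the entity approached the surface, so glancing contacts aren't stopped dead. The `vec3` type and all names here are hypothetical stand-ins for the engine's own math types, and both direction vectors are assumed normalized:

```cpp
#include <cmath>

// Hypothetical minimal vector type; the engine's own math types would be used instead.
struct vec3 { float x, y, z; };

inline vec3 operator*(const vec3& v, float s) { return { v.x * s, v.y * s, v.z * s }; }
inline float dot(const vec3& a, const vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Head-on approaches (|dot| near 1) receive the full correction;
// glancing approaches (|dot| near 0) are barely corrected at all.
inline vec3 kinematic_correction(const vec3& approach_dir, const vec3& surface_forward, float penetration)
{
    return surface_forward * (std::fabs(dot(approach_dir, surface_forward)) * penetration);
}
```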
Need more context on these and why we should/shouldn't use them. Currently have rigid bodies working as expected, but would be interested in the pros/cons of using these for player objects, etc.
From what I remember, this is one of the last things needed for the input system(s). Implementation of this would likely be done through the input devices themselves (configured via JSON profiles), rather than through the high-level system.
This would be for both ES and C++ threads. On the C++ side, we could support arbitrary function objects and coroutines/fibers.
Part of proposed level editor functionality; select, highlight and manipulate entities.
The `ResourceManager` type currently holds `shared_ptr` instances, meaning that anything cached stays allocated even after nothing references that resource. There are pros and cons to this, but I think it makes more sense to allow resources to be deallocated naturally, rather than waiting until the `ResourceManager` instance goes out of scope.
To achieve this, we will need to look at any data structures tied into the existing caching mechanisms, as well as make the obvious switch to weak pointers for storage and strong/shared pointers for return values/queries.
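A minimal sketch of that storage change, using a hypothetical `Resource` type and cache (not the engine's actual `ResourceManager` interface): the cache stores `weak_ptr` entries and queries return a `shared_ptr`, reloading only when the cached entry has expired:

```cpp
#include <memory>
#include <string>
#include <unordered_map>

struct Resource { std::string path; };

// Hypothetical cache: weak for storage, strong/shared for the return value.
class ResourceCache
{
    public:
        std::shared_ptr<Resource> get(const std::string& path)
        {
            if (auto it = cache_.find(path); it != cache_.end())
            {
                if (auto existing = it->second.lock())
                {
                    return existing; // Still alive; reuse the cached instance.
                }
            }

            // Expired (or never loaded); (re)load and cache a weak reference.
            auto loaded = std::make_shared<Resource>(Resource { path });
            cache_[path] = loaded;

            return loaded;
        }

    private:
        std::unordered_map<std::string, std::weak_ptr<Resource>> cache_;
};
```

With this layout, a resource is deallocated as soon as the last outside `shared_ptr` is released; the cache entry simply expires in place.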
The idea would be that an event could be received only by the targeted entity. We would do this by filtering so that only the Entity Threads of the targeted entity have their yield operations triggered. This could be achieved using either a `MetaAny` or a templated wrapper type to hold the event. We could then listen for those targeted events in the Entity System, dispatching accordingly.
This would be useful for messages between entities, and could be extended further to networking concepts like messages to/from a specific player.
This doesn't need to be directly tied into the engine, since input handling is basically its own customization point.
Mainly for simple images/textures; font rendering can be done later.
Animation slices indicate start and end frames of animations.
These slices should be loaded via JSON through character/asset description files.
To be handled at the application level, but interpreted at the game engine level.
The game engine would then be able to interpret these messages/events in order to produce its own device-agnostic input events, etc.
We currently require that all states be declared during the archetype parsing phase. This feature would allow importing additional states to the entity-descriptor if they're referenced as possible transition states in one of the regularly imported state definitions.
I am currently undecided on which approach would be most effective, but I'm leaning towards an "imports" section for each state.
We currently integrate many of the lower-level asset processing routines into the `graphics` submodule, rather than separating this into its own portion of the codebase (e.g. the `engine` module).
Keyboard devices appear to be triggering as expected, but if a mouse or gamepad button is held, the event only fires once.
This is distinct from the high-level (`engine`) system that covers pressed vs. held vs. released. In the high-level system's case, this bug still affects the end result, since the held/down events for that system are driven by these device-level events.
For scenarios like moving platforms, etc. -- essentially, this would parent/un-parent an object to a platform or other moving geometry, allowing the object to be influenced implicitly, rather than via collision resolution/casting procedures.
Rework the low-level rendering functionality in `graphics::Context` to build a data-driven command queue that can be sorted and manipulated for performance and ordering purposes.
This would tie well into the 2D pipeline we intend to implement.
Entities should be constructed primarily through composition. As an extension of this idea, we could standardize a means of composing entities through a data format (e.g. JSON). We could then have a JSON file that describes how to create a Camera, Player, etc. -- rather than having specific free functions to do so.
We could have a Player object's JSON file perform each step of behavior/component attachment with this.
This would be a lot simpler than having each object type described in code.
We could then use our current object creation mechanisms through `stage` to point to these entity description (JSON) files, so that we can instantiate many of them in a structured way.
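As a sketch, such an entity description file might look something like this. Every key and component name below is made up for illustration; the actual schema would be decided when implementing this:

```json
{
    "archetype": "player",
    "components":
    {
        "TransformComponent": { "position": [0.0, 0.0, 0.0] },
        "MotionComponent": { "gravity": true }
    },
    "default_state": "Idle"
}
```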
The idea behind Zones is that they'd be descriptions of collision shapes (i.e. volumes) that don't have solid behavior, but instead are used to check for specific conditions within their bounds.
With Zones, you could have a sphere or cube volume that waits for a player to enter, then triggers an event to be interpreted by some other system.
This concept could power death planes, events within a game world, cutscene triggers, camera switching, etc.
The `app::input` submodule has become much larger than `app` itself. I think it makes sense to move it into its own namespace/module.
Implement an interface that allows for variable-like semantics. Something like this:

```cpp
state = "State Name"_hs;

if (state == "Some Other State"_hs) { /* ... */ }

switch (state)
{
    case "A"_hs:
        // ...
        break;
}
```
Something similar can be done for the previous state as well, possibly sharing a common base class:

```cpp
if (prev_state != "Old State"_hs) { /* ... */ }
```
Performance is significantly worse in debug builds in general, but enabling animation seems to slow things to a crawl. (e.g. ~80 fps in debug vs. ~1000 fps in release)
Scene serialization to disk. This would be a similar format to what we currently have for custom-built stages/scenes.
Build times are definitely a bottleneck, and they'll continue to get slower as the project grows.
I've experimented with enabling multi-processor build options in MSVC, which seems to have a significant impact on build times, but we'll definitely want to look at using precompiled headers (PCH) later in the project's development, as things become more stable / stay the same.
We currently use direct pointers to the transformation component, since it's faster than a copy. We could still avoid copies with a `patch`-based approach.
One side effect of our current approach is that we handle transform-change events manually.
Currently using MSVC's project system, but we'll definitely be looking at Visual Studio's CMake integration as time goes on.
Would be a feature of a proposed level/stage editor.
These would be control blocks in EntityScript that would change how often the thread is stepped.
I'm currently looking at the following options:
`fixed` -- The thread is stepped at a fixed rate per second (e.g. 60 Hz).
`realtime` -- The thread is stepped on each realtime update (i.e. each frame).
These would function similarly to `multi`.
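A `fixed` control block would likely boil down to the classic fixed-timestep accumulator, while `realtime` steps once per frame. A minimal sketch of the `fixed` policy, with all names hypothetical:

```cpp
#include <cstdint>

// Hypothetical fixed-rate stepping policy: accumulates real frame time and
// reports how many fixed-size steps the thread should take this frame.
struct FixedStepper
{
    double rate_hz;
    double accumulator = 0.0;

    std::uint32_t advance(double frame_dt_seconds)
    {
        accumulator += frame_dt_seconds;

        const double step = 1.0 / rate_hz;
        std::uint32_t steps = 0;

        // Consume whole steps; any remainder carries over to the next frame.
        while (accumulator >= step)
        {
            accumulator -= step;
            ++steps;
        }

        return steps;
    }
};
```

A `realtime` thread would bypass this entirely, stepping unconditionally once per update.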
This was a holdover from when I was indecisive about using the STL as much as I have. At this point, there's no reason to have `memory::ref<>` and co. wrap smart pointers. It's best just to use the standard types themselves.
We currently compile a single `reflection` .cpp source file.
As the codebase gets larger, this will become problematic, since changing anything this source file references requires a rebuild.
We will need to determine if there are any side effects to changing each module's `reflection.hpp` file into a `reflection.cpp`, allowing for parallel and incremental builds.
Currently only factory objects are cached. Because entity descriptors (found in factories) are processed from archetypes, they may share data.
Note: This may be a hard optimization to achieve, since component redefinition and modification can happen at any point during the construction phase of a factory / entity descriptor.
Integrates with #8
Unified (device-agnostic) input system, including events for:
Simple idea: Resulting animation frames should be altered if a skinned object collides with solid geometry.
Expansion of #9 -- We currently only support first/closest hit.
Models that cast shadows are rendered in their default T-pose, rather than in a specific animation.
Functions would be subroutines that can be added to a script via `merge` composition.
The syntax would look something like this:

```
function f(x, y:int, z:string)
    return ((x + y):string + ", " + z)
end
```
We currently assume filesystem access when processing archetypes. This means we always expect imports and state references to be the proper method of entity-descriptor construction. In a theoretical scenario where we would want to embed all logic data of an entity in a single file, this would not be sufficient.
Take for example an entity description sent to a game client from a server. In this example, we would need to send multiple files from the server in order to construct a single entity factory and descriptor. This becomes wildly inefficient compared to a single (possibly space-optimized) JSON object or string stored in memory.
An entity (with a `MotionComponent` attached) landing on the ground should align to the normal of the slope it collided with.
Threads terminate fine during an entity's state change, but no events seem to be generated. This is presumably caused by direct usage of the `ThreadComponent` type (or a similar interface), rather than going through a relevant command.
This has been on my mind for a while, but I only more recently started diving into C++20 coroutines. We could rework our current state-machine-styled Assimp loading functionality into synchronous coroutines via `co_await` and a scheduler.
Thought about this for the zone system; `std::vector` is a bit overkill for a small list of target entities, and using a variant to specialize for single-entity targets seems like unneeded complexity.
This is currently possible with the entity generation that happens when loading models, including their skeletal nodes, model groups, etc.
There isn't currently an easy way to describe e.g. a small sphere collider on a bone of a player's model. The idea here would be to use a JSON format linked to either the asset itself, or to the object/entity type via its creation routines.
A simple bool, or maybe a set of bit flags, as a per-entity component. We could then use this to trigger on-activate and on-deactivate events. Maybe even have states, so it would be: inactive, activating, activated, deactivating. Maybe have timers for this?
You could have things like doors that are closed when deactivated, open when activated, and opening/closing in the other states.
Name ideas:
ActivationComponent
Activateable
Trigger/Triggerable
Things like buttons, pressure plates or zones could activate these.
Thought: what about entities in the abstract sense that only perform some function when activated; i.e. a behavior tied to a component? Maybe even have this tied to the player, etc. (a child entity, but with no transform, etc.) -- we could even have a death-trigger entity tied to a player.
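The four-state idea above with a timer could be sketched like this (the `ActivationComponent` name is one of the candidates listed; the timer/transition fields are assumptions for illustration):

```cpp
// Hypothetical sketch of the proposed activation states.
enum class ActivationState { Inactive, Activating, Activated, Deactivating };

struct ActivationComponent
{
    ActivationState state = ActivationState::Inactive;

    float timer = 0.0f;
    float transition_time = 0.5f; // e.g. how long a door takes to open/close.

    // A button, pressure plate or zone would call these:
    void activate()
    {
        if (state == ActivationState::Inactive)
        {
            state = ActivationState::Activating;
            timer = 0.0f;
        }
    }

    void deactivate()
    {
        if (state == ActivationState::Activated)
        {
            state = ActivationState::Deactivating;
            timer = 0.0f;
        }
    }

    // Called once per update; completes a transition when the timer elapses.
    // (Event dispatch for on-activate/on-deactivate would hook in here.)
    void update(float dt)
    {
        if (state == ActivationState::Activating || state == ActivationState::Deactivating)
        {
            timer += dt;

            if (timer >= transition_time)
            {
                state = (state == ActivationState::Activating)
                    ? ActivationState::Activated
                    : ActivationState::Inactive;
            }
        }
    }
};
```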
We should have straightforward routines for container deserialization (vector and map types). We currently have support for `engine::load`, which lets us handle individual objects via `MetaTypeDescriptor`, but we do not have a simplified interface for containers.