Input processing from engine version 1.47.0


I have a question about engine input processing.

The order in which input is judged for entities with “Use Input” seems to differ between engine 1.47.0 (and later) and earlier versions.

- Input is now accepted by the entity in the back instead of the entity in front.
- Both entities have “Use Input” enabled.
- The front entity is prepared as a template and instantiated from the template’s AssetId.

Did it change when the engine was updated?

Please help me.

Hi! Yes, we’ve made several improvements to the logic for choosing which element takes precedence for input. For elements under 2D screens, the scene hierarchy dictates precedence, while for 3D screens, the element closer to the camera takes precedence.
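To illustrate that rule, here is a hedged sketch in plain JavaScript. It is not the engine’s internal code; the function and field names (`screenSpace`, `hierarchyIndex`, `position`) are made up for the example:

```javascript
// Illustrative sketch of the precedence rule described above (not engine code).
// 2D screen-space elements: later in the hierarchy = rendered later = in front
// = higher input priority. 3D elements: closer to the camera = higher priority.
function sortByInputPriority(elements, cameraPos) {
    return elements.slice().sort((a, b) => {
        if (a.screenSpace && b.screenSpace) {
            // 2D: higher hierarchy index first
            return b.hierarchyIndex - a.hierarchyIndex;
        }
        // 3D: smaller distance to the camera first
        return distance(a.position, cameraPos) - distance(b.position, cameraPos);
    });
}

function distance(p, q) {
    const dx = p[0] - q[0], dy = p[1] - q[1], dz = p[2] - q[2];
    return Math.sqrt(dx * dx + dy * dy + dz * dz);
}
```

The first element of the sorted array is the one that would see the input first.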

Can you explain your current scene setup and the behaviour you are seeing?


Just curious, when you say scene hierarchy dictates precedence, does it mean the higher it is in the hierarchy, the higher the priority?


The background is the “Title screen”, and the “Loading screen”, instantiated from a template, is placed on top of it. The camera that moves this screen is owned by the “Loading screen”.

The “Loading screen” has a black background with “Enabled” turned off.

When there is input on the “Title screen”, the black background of the “Loading screen” is displayed, and the idea is that “Use Input” on the black background then blocks input to the “Title screen”.

The Entity configuration of the “Title screen” is
–“Button” “Element (Image) (+ UseInput)”
–“Element (Image)”

The Entity configuration of the “Loading screen” is
–“Element (Group)”
–“Element (Group)” ← Enabled OFF
–“Collision (Box)” “Element (Group) (+ UseInput)”
–“Element (Image)”

I hope this explanation comes across clearly. Thank you.

@LeXXik For 2D elements on the same layer, it’s the opposite: the lower in the hierarchy, the higher the priority. Basically, the input precedence follows the rendering: ‘lower’ elements in the hierarchy are rendered after the ones above them, so they appear to be in front. See the setup below: the 2nd child will take both rendering and input priority:


@1129 The colour of the Image Element does not affect its input. If you want the black background not to take any input, you need to disable the useInput property. It also looks like the ordering of the Entities is not correct: you need to place the buttons below the background elements (in the hierarchy). See my examples above for reference. Also, make sure all the elements are on the UI layer and under a 2D Screen entity.
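To make the blocking behaviour concrete, here is a minimal, engine-agnostic hit-test sketch in plain JavaScript (`enabled`, `useInput`, and `rect` are illustrative stand-ins for the real component properties, not engine API): a front overlay only consumes a click while it is both enabled and accepting input.

```javascript
// Illustrative sketch (not engine code): walk elements front-to-back and
// let the first enabled, input-accepting element under the point consume
// the click. Elements behind it never see the input.
function hitTest(elementsFrontToBack, point) {
    for (const el of elementsFrontToBack) {
        if (!el.enabled || !el.useInput) continue; // ignored entirely
        if (contains(el.rect, point)) return el;   // input consumed here
    }
    return null;
}

function contains(rect, p) {
    return p.x >= rect.x && p.x <= rect.x + rect.w &&
           p.y >= rect.y && p.y <= rect.y + rect.h;
}
```

In this model, a disabled full-screen black background lets clicks fall through to the buttons behind it; enabling it (with useInput on) blocks them.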

Sorry, my previous message may not have been translated correctly.

I want the black background of the “Loading screen” to block input processing to the “Title screen”.

The black background of the “Loading screen” has “Use Input” enabled.

However, even when “Enabled” on the “Loading screen” is turned on, the “Title screen” still accepts input.

If I understand @jpaulo’s explanation correctly, you have to place the “Loading screen” lower in the hierarchy.

Very confusing.

This is because under a UI screen on the UI layer, elements are rendered in hierarchy order, i.e. the first child of the screen is rendered, then the second, third, etc. Therefore the last element under the screen is rendered last, in front of every element that was rendered before it.

It makes sense to process the input for the elements in the order of what the user sees, which means going in reverse hierarchy order.

Taking this example

The 2nd button is rendered last as it’s lower in the hierarchy. If you clicked where the two buttons overlap, you would want the 2nd button to get input first so that you can stop propagation before input is processed for the 1st button.
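That dispatch order can be sketched as follows (illustrative plain JavaScript, not engine code): events go to children in reverse hierarchy order, and a handler can stop propagation to the elements behind it.

```javascript
// Illustrative sketch (not engine code): dispatch a click to children in
// reverse hierarchy order. The last child (rendered in front) handles the
// event first and may call stopPropagation() to hide it from the rest.
function dispatchClick(childrenInHierarchyOrder, log) {
    let stopped = false;
    const event = { stopPropagation: () => { stopped = true; } };
    for (const child of [...childrenInHierarchyOrder].reverse()) {
        if (stopped) break;
        child.onClick(event);
        log.push(child.name); // record which handlers actually ran
    }
}
```

With two overlapping buttons where the 2nd one stops propagation, only the 2nd button’s handler runs.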

I’m not sure how input is processed with multiple screens :thinking: I know the rendering order is not well defined, to the point where I generally recommend using a single screen where possible.

And what if you did this process the opposite way? Then it would be more understandable for the regular user: higher in the hierarchy means higher priority and therefore rendered on top. The same goes for layers; I keep forgetting what needs to be on top/bottom.

I’m curious about what the advantage could be with using multiple screens.

Then there are other costs and considerations. Looking at the button example above, and at the 1st child: using the bottom-up method, it would be confusing/difficult to have the text be in front of the background of the button and still be able to move it around the hierarchy as a group.

We are very unlikely to change the order, as it is ‘standard’ across other engines as well.

You could have multiple templates that can be added/removed. It would make grouping different dialogues, such as pop-up dialogue boxes, easier to manage.

You may want different UI to scale differently from each other.

Ah, good point! You got me.

Actually, I don’t think it’s that bad to have the text as parent.

Going down that rabbit hole, what if you wanted more elements to be grouped under a button background? Would they have to be parents as well? How would they be aligned/anchored to the background image so that they are always in the top-left corner?

It starts getting complicated and messy very quickly, hence the one-parent, many-children pattern works quite well for this.

On closer inspection, the “Screen Space” option on the “Loading screen”’s Screen component was not turned on.

As a result, 2D and 3D screens were mixed, and it didn’t work as I expected.

When both were set to 2D (screen space), the expected behavior was achieved.
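For anyone running into the same mix-up, here is an engine-agnostic way to check for it (the property name `screenSpace` mirrors the Screen component’s “Screen Space” option, but this sketch is illustrative, not engine API):

```javascript
// Illustrative sketch (not engine code): warn when 2D (screen-space) and
// 3D screens are mixed, since input precedence is then resolved by two
// different rules (hierarchy order for 2D, camera distance for 3D).
function mixedScreenModes(screens) {
    const has2d = screens.some(s => s.screenSpace);
    const has3d = screens.some(s => !s.screenSpace);
    return has2d && has3d;
}
```

If this returns true for your screens, making them all screen-space (as above) restores the predictable, hierarchy-based input order.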

I apologize for the fuss.
Thank you.

@1129 Would you be able to create a public test project of the layout to show the issue you are running into please?

You convinced me. Thanks for your substantiation. I always like to understand why some choices are made.

And just for my own clarity, does the same apply to layers: is the lowest layer rendered last and therefore visible above the others?

I think @1129 already solved the problem. One of the screens was not enabled correctly, causing it to not work as expected.

This is where it gets muddy. The current system does not take the rendering layers into account, as different elements can be on different layers under the same UI screen. If we did that, the code would get very complex.

We are making the assumption that most users will have all elements under a UI screen on the same layer, as that’s generally what people do.

But is it not needed, given that you can control it in the hierarchy right now? Does the same apply to 3D objects?