Hi! Yes, we’ve made several improvements to the logic for choosing which element takes precedence for input. For elements under 2D screens, the scene hierarchy dictates precedence, while for 3D screens, the element closer to the camera takes precedence.
Can you explain what your current scene setup and behaviour is?
@LeXXik For 2D elements on the same layer, it’s the opposite: the lower in the hierarchy, the higher the input priority. Basically, input precedence follows the rendering: ‘lower’ elements in the hierarchy are rendered after the ones above them, so they appear to be in front. See the setup below; the 2nd child will take both rendering and input priority:
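To make the relationship concrete, here is a plain-JS sketch (illustrative only, not engine code) of how the same hierarchy order produces both the render order and the reversed input order:

```javascript
// Two elements under a 2D screen, listed in hierarchy order.
const children = ['1st child', '2nd child'];

// Rendering walks the hierarchy top to bottom, so later children
// draw on top of earlier ones.
const renderOrder = [...children];

// Input is processed in reverse, so the element drawn on top
// (the 2nd child) is asked first.
const inputOrder = [...children].reverse();

console.log(renderOrder); // → ['1st child', '2nd child']
console.log(inputOrder);  // → ['2nd child', '1st child']
```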
@1129 The colour of the Image Element does not affect its input. If you don’t want the black background to take any input, disable its useInput property. It also looks like the ordering of the Entities is not correct: the buttons need to be below the background elements in the hierarchy. See my examples above for reference. Also, make sure all the elements are on the UI layer and under a 2D Screen entity.
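In a script you can set the element component’s useInput property (e.g. `entity.element.useInput = false;`). Here is a plain-JS sketch of the effect this has, modelling the behaviour rather than the actual engine internals (element names are made up):

```javascript
// Elements in hierarchy order; only the background opts out of input.
const elements = [
  { name: 'BlackBackground', useInput: false }, // background ignores clicks
  { name: 'Button1',         useInput: true },
  { name: 'Button2',         useInput: true },
];

// Only elements with useInput enabled are considered for input,
// checked in reverse hierarchy order (front-most first).
const inputCandidates = elements
  .filter((e) => e.useInput)
  .reverse();

console.log(inputCandidates.map((e) => e.name)); // → ['Button2', 'Button1']
```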
This is because under a UI screen on the UI layer, elements are rendered in hierarchy order, i.e. the first child of the screen is rendered, then the second, third, etc. Therefore the last element under the screen is rendered last, in front of every element that was rendered before it.
It makes sense to process input for elements in the order the user sees them, which means going in reverse hierarchy order.
The 2nd button is rendered last as it’s lower in the hierarchy. If you clicked where the two buttons overlap, you would want the 2nd button to get input first so that you can stop propagation before input is processed for the 1st button.
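Here is a plain-JS sketch of that dispatch loop with propagation stopping. It models the behaviour described above; the function and event shape are illustrative, not actual engine code:

```javascript
// Walk elements front-most (last rendered) to back-most, letting a
// handler stop propagation so elements behind it never see the click.
function dispatchClick(elementsInHierarchyOrder, hitTest) {
  const received = [];
  let stopped = false;
  const event = { stopPropagation: () => { stopped = true; } };

  for (const el of [...elementsInHierarchyOrder].reverse()) {
    if (stopped) break;
    if (hitTest(el)) {
      received.push(el.name);
      el.onClick(event);
    }
  }
  return received;
}

const button1 = { name: 'Button1', onClick: (e) => e.stopPropagation() };
const button2 = { name: 'Button2', onClick: (e) => e.stopPropagation() };

// Both buttons overlap the click point: only the front-most
// (2nd child) handles it before propagation stops.
console.log(dispatchClick([button1, button2], () => true)); // → ['Button2']
```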
And what if you did this process the opposite way? Then it would be more understandable for the regular user: higher in the hierarchy means higher priority and therefore rendered on top. The same goes for layers; I keep forgetting what needs to be on top/bottom.
I’m curious what the advantage of using multiple screens could be.
Then there are other costs and considerations. Look at the button example above, specifically the 1st child. Using the bottom-up method, it would be confusing/difficult to have the text be in front of the background for the button and still be able to move it around the hierarchy as a group.
We are very unlikely to change the order, as it is ‘standard’ across other engines as well.
Going down that rabbit hole, what if you wanted more elements to be grouped under a button background? Would they have to be parents as well? How would they be aligned/anchored to the background image so that they are always in the top-left corner?
It starts getting complicated and messy very quickly, hence the one-parent-and-many-children pattern works quite well for this.
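With that pattern, children stay pinned to a corner of their parent via anchoring. Here is a plain-JS sketch of the point-anchoring math (illustrative only, not engine code): a child anchored to the parent’s top-left corner stays there no matter how the parent is resized:

```javascript
// anchor is a normalized [x, y] point in the parent's rect
// (x:0, y:0 = bottom-left; x:0, y:1 = top-left); offset is in pixels.
function anchoredPosition(parentSize, anchor, offset) {
  return {
    x: parentSize.width * anchor.x + offset.x,
    y: parentSize.height * anchor.y + offset.y,
  };
}

// Child pinned 10px in from the top-left, for two background sizes:
console.log(anchoredPosition({ width: 200, height: 100 }, { x: 0, y: 1 }, { x: 10, y: -10 }));
// → { x: 10, y: 90 }
console.log(anchoredPosition({ width: 400, height: 300 }, { x: 0, y: 1 }, { x: 10, y: -10 }));
// → { x: 10, y: 290 }
```

In the engine itself you’d get this via the element’s anchor and pivot properties rather than hand-rolled math; the point is just that the child’s position is defined relative to the parent, so the group moves and resizes together.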
This is where it gets muddy. The current system does not take the rendering layers into account, as different elements can be on different layers under the same UI screen. If we did that, the code would get very complex.
We are making the assumption that most users will have all elements under a UI screen on the same layer, as that’s generally what people do.