[SOLVED] 2D multiplane animation with alpha raycasting

Hi Guys,

I’m very new to PlayCanvas and fairly new to games engines in general, so if this is a ridiculous question, I apologise in advance.

I have a collection of layers, each a PNG image with alpha, all the same size. I have layered them as separate plane entities within a 2D screen entity, applied a separate material to each with the texture loaded as the emissive and opacity maps, and it all looks exactly as it should.

I now need to work out which image is hit at any particular spot on the screen.

Since the built-in raycasting doesn’t seem suited to this, I was assuming I could capture the point on the screen, convert it to world coordinates, and check the opacity map of each layer at the equivalent point, front to back, until I find something opaque; at that point I know which layer I have hit and can carry on.

This seems like a reasonable plan: I have managed to capture the mouse event, convert it to world coordinates, and work out the bounding box of the layers (which are all the same size). What I cannot work out is how to calculate (or read) the correct pixel of the texture map to check the opacity at that point on that layer.

Can anyone help? Or am I being foolish and missing the obvious way you’re supposed to do this sort of thing?


You should be able to get the sprite asset ID from the element component to get the texture from the registry and access the source data: https://developer.playcanvas.com/en/api/pc.Texture.html#getSource
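In theory, the lookup itself would look something like this. This is an untested sketch: `worldToTexel` and the `bounds` object are made-up names, and it assumes an axis-aligned plane with a known bounding box and tight RGBA8 pixel data with rows running top to bottom:

```javascript
// Hypothetical helper: map a world-space hit point on a plane to the
// byte index of that pixel's alpha value in an RGBA8 buffer.
function worldToTexel(hitX, hitY, bounds, texWidth, texHeight) {
    // Normalise the hit point to 0..1 across the plane (UV coordinates)
    var u = (hitX - bounds.minX) / (bounds.maxX - bounds.minX);
    var v = (hitY - bounds.minY) / (bounds.maxY - bounds.minY);

    // Image data runs top-to-bottom; world Y usually runs bottom-to-top,
    // hence the (1 - v) flip. Clamp so a hit on the far edge stays in range.
    var px = Math.min(texWidth - 1, Math.floor(u * texWidth));
    var py = Math.min(texHeight - 1, Math.floor((1 - v) * texHeight));

    // RGBA8 data has 4 bytes per pixel; the alpha byte is at offset 3
    return (py * texWidth + px) * 4 + 3;
}
```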

I haven’t done this myself so this is just theory. Good luck :slight_smile:

getSource() - how did I miss that! :slight_smile:

So, that seems to make sense, and I have successfully got texture data. However, I am now slightly confused again: getSource() returns a Uint8Array buffer of “pixels” of length 1,048,576. My textures are 1024 x 1024, which does mean there are 1,048,576 pixels in the image, but the images are 8-bit-per-channel RGBA, so surely there should be four times as many bytes as are actually being returned?
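To put numbers on it, the arithmetic I mean is:

```javascript
// Byte counts for a 1024 x 1024 texture
var width = 1024;
var height = 1024;
var pixelCount = width * height;   // 1,048,576 pixels
var rgba8Bytes = pixelCount * 4;   // 4,194,304 bytes expected for uncompressed 8-bit RGBA
// ...yet the returned buffer works out to exactly 1 byte per pixel:
var bytesPerPixelSeen = 1048576 / pixelCount;
```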

I poked around to see if I was only being returned a single set of R, G, B or A values, but I can’t find any evidence of that, or make much sense of the data being returned. Does anyone have experience at that level who can shed any light?

8-bit colour sounds like 2 bits per channel, which seems rather low :/.

Might be worth trying with a texture or several textures that are a single colour?

For anyone who finds this later… this is to do with the compression settings when you import the asset. If you load a raster PNG and leave the compression alone, getSource() returns an HTMLImageElement. If you opt to compress the image, getSource() returns a binary data buffer, which appears to have the same number of Uint8 elements as there are pixels in the image, but in what format, I’m not sure.

As I only need to map my image at start-up, I’m going to draw it into a canvas, read the pixel values from there, and build a map I can use for speed.
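Roughly what I mean, as a sketch: `buildAlphaMap` and `alphaAt` are my own names, and it assumes getSource() has handed back an HTMLImageElement for the uncompressed PNG:

```javascript
// Draw the source image into an off-screen canvas once at start-up,
// then keep the raw RGBA data around for fast lookups.
function buildAlphaMap(image) {
    var canvas = document.createElement('canvas');
    canvas.width = image.width;
    canvas.height = image.height;
    var ctx = canvas.getContext('2d');
    ctx.drawImage(image, 0, 0);
    // getImageData returns { width, height, data }, where data is a
    // Uint8ClampedArray of RGBA bytes, 4 per pixel, rows from the top
    return ctx.getImageData(0, 0, image.width, image.height);
}

// Read the alpha byte for pixel (px, py) from an ImageData-like object
function alphaAt(imageData, px, py) {
    return imageData.data[(py * imageData.width + px) * 4 + 3];
}
```

The nice part is that getImageData always gives you 4 bytes per pixel in RGBA order, regardless of how the engine stores the texture internally, which side-steps the compressed-buffer question above.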

I have managed to make this work for the present, but I’m not really satisfied with the solution. I think it should be possible to do something much nicer and more integrated, so I’ll come back to this when I have time.

Thanks for the assistance in the meantime @yaustar.

Would it be possible to share the project or a simplified version of it? I would be interested in looking at your approach and maybe playing around with it a bit :stuck_out_tongue:

I’m busy prototyping at the moment, which I can’t share, but once I get out from under it, I’ll see what I can do.
