You could use one canvas element per layer, or even keep a single canvas visible and render the others off-screen, copying each into a texture that is then drawn in the main canvas (as sketched below).
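As a rough illustration, here is what the off-screen variant could look like with plain browser APIs. This is a minimal sketch, not a full renderer; the element id, layer contents, and sizes are made up for the example:

```ts
// Visible canvas rendered with WebGL.
const mainCanvas = document.querySelector<HTMLCanvasElement>("#main")!;
const gl = mainCanvas.getContext("webgl")!;

// Off-screen canvas used as a "layer"; never attached to the DOM.
const layer = document.createElement("canvas");
layer.width = mainCanvas.width;
layer.height = mainCanvas.height;
const ctx2d = layer.getContext("2d")!;
ctx2d.font = "24px sans-serif";
ctx2d.fillText("UI layer", 16, 32);

// Copy the off-screen canvas into a texture whenever it changes.
const texture = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, texture);
// Canvas sizes are usually not powers of two, so disable mipmapping/repeat.
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, layer);
// ...then sample `texture` on a full-screen quad in the main render pass.
```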
But there are drawbacks with this approach:
- Performance consideration - driving more than one canvas is expensive, especially on mobile.
- Download consideration - you will have to download both engines, plus duplicated resources if they are used in both contexts. The same duplication also increases VRAM usage.
- Code management - you will have two APIs to deal with and a more complex codebase; in particular, you will have to design a "bridging" mechanism so one context can interact with the other. You could separate logic from rendering, in a sort of Model-View pattern (see the sketch after this list), but that seems like a lot of effort for minimal benefit.
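For the "bridging" point above, a minimal sketch of such a Model-View-style bridge: logic publishes events, and each rendering context subscribes, so neither context calls into the other directly. All type and class names here are hypothetical:

```ts
// Events the game logic emits; the renderers never talk to each other.
type GameEvent =
  | { kind: "score"; value: number }
  | { kind: "playerMoved"; x: number; y: number };

class Bridge {
  private listeners: Array<(e: GameEvent) => void> = [];
  subscribe(fn: (e: GameEvent) => void): void {
    this.listeners.push(fn);
  }
  publish(e: GameEvent): void {
    for (const fn of this.listeners) fn(e);
  }
}

const bridge = new Bridge();

// The WebGL context only reacts to world-state events...
bridge.subscribe((e) => {
  if (e.kind === "playerMoved") {
    /* update the 3D scene */
  }
});

// ...while the 2D overlay context only reacts to HUD events.
bridge.subscribe((e) => {
  if (e.kind === "score") {
    /* redraw the HUD text */
  }
});

bridge.publish({ kind: "score", value: 10 });
```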
There is actually a 2D API in development right now, and it can be enabled for individual users if the need arises. It has font rendering (not bitmaps; something closer to vector fonts), image rendering, and a whole screen-element system.
Additionally, we will be working towards exposing an Editor API in the future, so developers can extend the editor and automate their workflows, but there is no ETA on that.