Scene sizes & additive loading for an open world. Best practices?

Hi,

We are having some discussions about our scene-size ‘issues’ and how to proceed, as we do not want to corrupt our main scene again.

We want to continue building on our open world.

Right now we have the following flow:

  • Splash-screen → lightweight Main Menu
  • Main Menu (downloads all assets) → Additive loading of scenes (see the sketch below) → Main Menu destroyed → Gameplay
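
Roughly, the additive load step is the standard loadSceneHierarchy flow. A simplified sketch (the scene name here is just an example, not our real one):

```javascript
// Simplified sketch of our additive load, triggered from the Main Menu.
// 'World_Chunk_01' is an example scene name.
const item = this.app.scenes.find('World_Chunk_01');
this.app.scenes.loadSceneHierarchy(item, (err, root) => {
    if (err) {
        console.error(err);
        return;
    }
    // 'root' is the root entity of the additively loaded hierarchy.
    // Once all chunks are in, the Main Menu entities get destroyed.
});
```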

According to the .json file sizes in our build, we have the following:

  • Main game scene : 5.6MB
  • Second scene: 1.2MB
  • Third scene: 1.1MB

We understand that the max scene size is 16MB.

This raises the following questions:

1: The main scene that gave us trouble before is now approx. 6MB. Can we safely merge the 3 scenes again?
Or does the PlayCanvas editor use some form of encoding (UTF-16, for example, which would double the file size)?

2: The current ‘workflow’ makes it very rough to design a large open world. Do we just accept this?
Or are there creative ways to work with multiple scenes? Ideally we want to see multiple scenes simultaneously to create smooth world transitions.

3: As I understand it, the additive load of a scene happens in one frame. This causes frame drops/stuttering when loading.
Preferably we want this to be as smooth as possible.

  • e.g. Player enters the game, rest of the world loads in the background without stutters.
    One solution I can think of is to enable entities over several frames once the scene is loaded. However, I think this would worsen the workflow.

I hope this isn’t too much for one post.

Thanks in advance!
Nick

Hi @Nickneem,

I don’t know much about the scene file size and encoding, let’s see if @yaustar knows more about that.

Regarding open world design, I’d say if you are talking about thousands of objects then most likely you will have to use some kind of ECS (entity component system, not to be confused with PlayCanvas entities), since using regular entities in big numbers comes with a big performance overhead at some point.

Ideally you will be using some system based on HW instancing for rendering, so most of your models are fed directly to the GPU. Similarly for other systems like physics (rigid bodies). This isn’t something PlayCanvas supports out of the box, so it can involve a lot of custom coding and sometimes patching the engine.

If your numbers are in the hundreds and you are OK using regular entities, then using separate scenes can work. To avoid stuttering, make sure that the resources for each scene (models, materials, textures, shaders) have been loaded in advance, and avoid doing heavy processing in the scripts' initialize method.
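
As a rough sketch of preloading (the tag name and event are assumptions, not anything from your project), you could kick off asset loading ahead of time and only load the next scene's hierarchy once everything reports ready:

```javascript
// Hedged sketch: preload a tagged group of assets before additively
// loading the next scene. The tag 'chunk-02' is an assumption.
const assets = this.app.assets.findByTag('chunk-02');
let remaining = assets.length;

const onAllReady = () => {
    // Everything is in memory; loading the scene hierarchy now
    // shouldn't trigger downloads mid-gameplay.
    this.app.fire('chunk-02:ready');
};

if (remaining === 0) {
    onAllReady();
} else {
    assets.forEach((asset) => {
        asset.ready(() => {
            if (--remaining === 0) onAllReady();
        });
        this.app.assets.load(asset);
    });
}
```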

You may get some stutters from collider shapes (trimeshes) getting created, but if all levels share the same objects these colliders are cached by default.

There isn’t, though, a system in the editor to see all scenes at the same time; you are only able to load a single scene at any time. But if you are determined, it’s doable to code your own editor extension using the editor API to load and view multiple scenes (that will work only for viewing, you will still be editing a single scene, the base selected one).

Hope that helps!


This is a tough one tbh.

I can’t remember what the issue was last time but I believe the scene data is UTF-8. The size of the JSON in the Editor may be different compared to a build. It would be best to look at the project archive rather than the published build.

It depends on how you have structured your world. If it’s a chunking system where chunks in the world are loaded and unloaded depending on where the player is, then yes, you are generally restricted to how much you can have in one scene at a time.

An alternative method that could work (this is theory) is to have the chunks as templates instead of scenes.

That way, you can potentially add a few chunks into the Editor scene at a time to work on a few at once and delete them when done.
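
At runtime the same template chunks could be instantiated on demand. A minimal sketch, assuming a template asset named 'Chunk_03':

```javascript
// Minimal sketch: spawn a chunk from a template asset at runtime.
// 'Chunk_03' is an assumed asset name.
const templateAsset = this.app.assets.find('Chunk_03', 'template');
templateAsset.ready(() => {
    const chunk = templateAsset.resource.instantiate();
    this.app.root.addChild(chunk);
});
this.app.assets.load(templateAsset);
```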

The Editor API can be used to add buttons/tools UI to make this workflow easier.

When a scene is loaded, an entity is cloned, or a template instance is created, it does the following in a single frame:

  • create all the entities
  • create all the components
  • call initialize on all the components and scripts
  • call postInitialize on all the components and scripts
  • call update with dt of 0 on all the components and scripts

So you need to work out, through profiling, where your bottleneck is to mitigate this.

Chances are that you will need to create the entities over several frames/times. So perhaps load the larger terrain first, then work down to the individual items, etc.
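
One hedged way of spreading the work out (script name, attribute and the 'children saved disabled' convention are all assumptions): keep the chunk's child entities disabled and enable a handful per frame from a script:

```javascript
// Hedged sketch: enable a chunk's entities a few at a time so component
// and script initialisation doesn't all land in a single frame.
// Assumes the chunk root's children were saved disabled.
var ChunkStreamer = pc.createScript('chunkStreamer');

ChunkStreamer.attributes.add('perFrame', { type: 'number', default: 5 });

ChunkStreamer.prototype.initialize = function () {
    this.queue = this.entity.children.slice();
};

ChunkStreamer.prototype.update = function (dt) {
    for (let i = 0; i < this.perFrame && this.queue.length > 0; i++) {
        this.queue.shift().enabled = true;
    }
};
```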

Also consider how much you are doing in the initialize and update functions of the scripts: is there anything expensive there?

Basically, this is going to be a difficult problem to solve with no out of the box solution.


Thanks! @Leonidas & @yaustar

This is helpful.

We currently have a couple of solutions to deal with performance, a custom-built proximity toggler for entities and physics objects for example. But we will definitely have an internal discussion about HW instancing and ECS.

I see. Right now we load in most of the environment by instancing a Draco GLB and creating mesh collision straight after (roughly as in the sketch below). Sounds like we want to pre-cache colliders, not sure if that is possible.
In the editor we use FBXs to design the world, which are excluded from the build. This, however, increases the scene size in the editor, I think?
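
For reference, that load-and-collide step looks roughly like this (simplified sketch; the URL and the single-render assumption are examples, not our actual setup):

```javascript
// Simplified sketch: instantiate a GLB container at runtime and add a
// mesh collider right after; the trimesh creation is where the stutter
// tends to show up. 'assets/chunk.glb' is an example URL.
this.app.assets.loadFromUrl('assets/chunk.glb', 'container', (err, asset) => {
    if (err) {
        console.error(err);
        return;
    }

    const entity = asset.resource.instantiateRenderEntity();
    this.app.root.addChild(entity);

    // One mesh collider for the whole chunk, built from its first render asset.
    entity.addComponent('collision', {
        type: 'mesh',
        renderAsset: asset.resource.renders[0]
    });
    entity.addComponent('rigidbody', { type: 'static' });
});
```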


We ran into a scene corruption bug because our scene was too big (17MB, while 16MB is the limit). But thankfully we have been smooth sailing so far.

The archived project seems to combine all scenes into one .json file. It is kind of rough to extract the exact scene data to check the size. The result I got was 5.6MB, which is almost exactly the same as the published build.
It would be great if we could see the scene size in the editor :innocent:.

This is insightful and will require some further investigation.
Thanks!

Yeah, we had a couple of ideas here about working around the limit etc., but it would be useful to have this number somewhere.
