Lightmapping, Ambient Occlusion, Image Based Lighting - added User Manual Docs

We’ve added User Manual docs to cover a few topics that can help with architecture and product visualization, as well as lightmapping for games.

Lightmapping describes how best to unwrap models for global UVs, and how to get good colors for lightmaps.

Ambient Occlusion shows how important the effect is and describes how to render it.

Image Based Lighting describes how to render or use environment maps for illumination, which can lead to very realistic results.

Here is a project that uses all three techniques, with a published demo here.
PlayCanvas Lightmapping

Hi Max,

This looks really good.

OK, so as I understand it, you’re using VRay and 3DS Max to bake the lighting onto the textures before importing the model, and your model is a single big model.

What is good about this is that the model looks very realistic.

I see there is lightmap generation and baked lighting available in PlayCanvas. Could your demo be done with PlayCanvas baked lighting rather than the lighting done by VRay? My guess is it would not be as good as VRay? I mean, clearly the VRay developers put a lot of effort into this sort of thing, and you’re building a game engine, not a rendering engine.

I would “like” to be able to do the following.

  1. Be able to programmatically generate a room from a set of walls and have it render just as nicely; that is, to generate the static model programmatically without a human using a program such as 3DS Max and VRay, i.e. to avoid 3DS Max and VRay entirely.

  2. Be able to add objects to the scene and have them move around. I imagine I can already do this with dynamic lighting on objects in this scene, where the lighting is applied to the objects that move. So I don’t see this as difficult, except that the lighting on the objects will not be as realistic. I just played around with doing this in your scene and it’s acceptable.

  3. As a workaround for 1, if that is too hard, I could load multiple models for different rooms and join them together at runtime. I could prepare 500 different room models with different wall widths and lengths and load the right one at runtime.
    It would be nice to be able to do constructive solid geometry, for example to cut a hole in a wall with a rectangle so I could fit a doorway, then load a doorway model and attach it. Then I could have this scene, a doorway, and a new room with lighting as good as this, and allow the player to walk between them. I would then only need to load a few static models joined together.
    Clearly I don’t need all the CSG operations; I just need to be able to cut out a shape.
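The procedural room idea in 1 can be sketched without the engine at all: given a list of 2D corner points, generate one wall quad per edge. This is purely illustrative; the function and parameter names are made up, and the vertex data would still need to be fed into the engine (e.g. a pc.Mesh) to render.

```javascript
// Sketch: generate wall quads from a 2D floor-plan outline.
// Each wall is the segment between consecutive corner points,
// extruded upward to `height`. Plain data, no engine required.
function buildWalls(corners, height) {
  const walls = [];
  for (let i = 0; i < corners.length; i++) {
    const a = corners[i];
    const b = corners[(i + 1) % corners.length]; // wrap to close the room
    walls.push({
      // Four corners of the wall quad: bottom edge a->b, then top edge b->a.
      positions: [
        [a.x, 0, a.z], [b.x, 0, b.z],
        [b.x, height, b.z], [a.x, height, a.z]
      ],
      // Two triangles per quad.
      indices: [0, 1, 2, 0, 2, 3]
    });
  }
  return walls;
}

// A 4m x 3m room with 2.5m high walls:
const room = buildWalls(
  [{ x: 0, z: 0 }, { x: 4, z: 0 }, { x: 4, z: 3 }, { x: 0, z: 3 }],
  2.5
);
```

Cutting doorways would then be a matter of splitting a wall quad into several smaller quads around the hole, which is the simple special case of CSG described above.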


Hi, just for info, I tried to make a realistic scene using models and PlayCanvas only, without pre-computed lighting.

It’s kind of… good, but not good enough…

I like yours much more.


Runtime lightmaps are meant for other scenarios, and currently only replicate dynamic lights; they won’t do any global illumination.

In your case, external lightmap rendering won’t be a suitable approach; you will have to look into other options.
Pre-baking many variations does not feel like a scalable approach.

Regarding placing walls and cutting through them: you need to implement your own procedural geometry generation with some rules. There are plenty of docs and examples on the web, and it is more of a logic challenge than an engine one.

For good lighting, you could approximate some illumination, but you will need to look into some very specialized techniques for your case.
For smaller objects, probes would help them look like they are properly placed in the environment rather than detached.

You have posted a video, but a link to the project would be far more useful for telling what can be changed and what looks weird, in order to get good results.


Thanks, very informative, and I need to do some more learning. What you have produced is already amazing when you consider that it looks almost like a ray-traced image.


Hi Max,

This covers more of what I explored regarding lightmaps and AO, plus a thought-dump and a suggestion.

I looked into Marmoset Toolbag and Blender for baked lightmaps. Marmoset Toolbag’s baking seems to be about converting high-poly objects into low-poly objects with good materials, so that the low-poly object looks good even though it has less detail. It also doesn’t produce a lightmap; it produces Ambient Occlusion. It’s mostly about making models look great, not about a game map with baked lighting. I don’t see anywhere to bake lightmaps.

Blender, however, is able to produce Ambient Occlusion and Shadow. I have not yet tried to bring Shadow into PlayCanvas as a lightmap; I guess Shadow is equivalent to a PlayCanvas lightmap, and I will attempt it later today.

I followed this tutorial, which produces baked shadows and textures.

I’m primarily a programmer (Java/Scala preference), so from a programmer’s perspective I can see that Blender can be scripted and that Blender files can be created programmatically. It then occurs to me that better baking of lightmaps could be done by writing a program that passes the PlayCanvas 3D data through Blender: create a Blender file, start Blender with a script, bake the lightmap, and then use the output images as the lightmap data.

Such a program would have positives and negatives.

  1. Positive: makes realistic lightmaps and ambient occlusion from PlayCanvas models and lighting, and outputs them as images. The layout of the model would then determine the lighting in a realistic way, and you could get images like yours above.
  2. Negative: slow, as it takes time and CPU/GPU to produce; expensive in compute time and therefore unattractive on the server side due to cost. A GPU solution on a powerful card could be faster, but at a cost.
  3. Negative: slow renders are boring for the user to wait for.
  4. Negative: not many users are asking for this, I assume.
  5. Could be made faster by only allowing certain objects to participate in the 3D data for the bake; the user should minimize the number of objects involved. This could be a “High detail lightmap bake” feature: if you check it, you get a high detail lightmap.
  6. Positive: quite spectacular in a game/model, and it allows the model to be created programmatically and the lightmap to be generated at runtime. Sadly it takes some time to generate, but it would be nice at runtime.

I notice that Planner5d, which is used for interior design, uses Blender on their server side somehow. I looked at the JSON of their model being returned, and it comes down marked as “Blender 2.7 Exporter”. I found this out by using the Chrome developer tools to inspect the returned data.

So it’s not particularly unusual to use Blender server-side in a scripted way. Although they don’t do realistic PBR rendering in the browser (yet), they ask users to pay for renderings, which I guess they use Blender for, along with storage of models. You could imagine that realistic PBR in the browser could be done if their data was passed through your system.

Well, since PlayCanvas is all scripted, I could do it myself, right? I don’t see any barrier to that; it’s a sequence: get the 3D data (available at runtime when the model runs), send it to my server, convert it to a Blender file, pass it through Blender to generate lightmaps, save the lightmap images, and have the server deliver the lightmap images to the PlayCanvas client. Done. That could be an open source project I could try to make if I can find time in the near future. Certainly, in the simple case of just a few cubic objects, it’s not complicated.
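The Blender step of that sequence can be launched headless from a Node server. A minimal sketch, assuming Blender is installed on the server and `bake.py` is a hypothetical Blender Python script that sets up the bake and saves the lightmap images:

```javascript
// Sketch: build the command line for a headless Blender bake.
// `--background` runs Blender without a UI and loads the scene file;
// `--python` executes the given script inside that Blender session.
function blenderBakeCommand(blendFile, bakeScript) {
  return [
    'blender',
    '--background', blendFile,
    '--python', bakeScript
  ];
}

const cmd = blenderBakeCommand('room.blend', 'bake.py');
// It could then be run with something like:
//   require('child_process').spawn(cmd[0], cmd.slice(1));
```

The file names are placeholders; the real pipeline would first write `room.blend` (or a scene description that `bake.py` imports) from the PlayCanvas hierarchy data.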


Hi Philip. Just to clarify: you are talking about “runtime” generation of lightmaps for scenes, on demand while the application is being run by a user?

I will try to classify the workflows of where and how lightmaps are generated:

  1. Offline Lightmapping: what is common in industry today. Artists make their scenes in a modeling tool, set up lights, and leave it to render on their machine, or get results faster with render farms, as in the movie industry. This is what is used in the architecture visualization industry as well. There are plenty of great applications, and the User Manual describes this exact approach. This approach is decentralized and happens during the production stage of application development, not at runtime. The major benefit is that it is common practice and people can use their tool of choice.

  2. Offline Engine Lightmapping: this is what most modern multi-purpose engines do. They provide the technology to perform offline rendering in a tool on the user’s machine; it takes time, but produces good results. The results are more efficient for the nature of realtime engine rendering, and often contain extra data, for example pre-computed radiosity data for near-realtime GI, directional data for specularity, and a few others. Another major benefit is that you don’t need to replicate the scene in a modeling tool, so the workflow is much better for artists. It still takes time, and it is a pre-production, decentralized workflow.

  3. Runtime Engine Lightmapping: an attempt to provide functionality to render lightmaps on demand, when the user is actually running the application. The challenge here is that rendering lightmaps is costly, and rendering realistic lightmaps with GI and the other techniques that make them look realistic is today only really possible with ray tracing. But ray tracing is very expensive, especially in a browser. So making this work fast, on demand, with the quality that offline renderers achieve in minutes and sometimes hours, is a challenge that is not easy to solve.

Then there is the cloud. You could add the cloud to #2 and #3. The problem with #3 is that you would need to handle an enormous number of users, and it is much easier to do the work once at development time and then serve pre-computed textures.

So #2 on the cloud is an idea we’ve been thinking about. We even have expertise in rendering some decent illumination results (GI, skylight, and other effects), and it would be able to produce directional data to enable simple specularity with lightmaps.
But it is actually a complex task, regardless of whether you use a proprietary solution or something like Blender in the cloud. Using Blender is the quick solution. In fact, you could even make an Editor plugin that communicates with your rendering back-end, providing hierarchy data as well as asset data; it would build a scene, render the lightmaps, and send them back so the Editor could patch the assets with that data.

Don’t forget, this is very computation heavy, and servers are not free; this kind of service has very high operating costs, which have to be justified from a business standpoint as well.

But it is great to see what is possible on the Web today, and how the cloud can empower it all :slight_smile:

Hi Max,

Thanks for the info; this is new to me. My motivation comes mainly from the following.

I want to generate interior scenes at runtime using PlayCanvas, where the JavaScript code determines where the walls, doors, etc. are located at the startup of the scene. The layout of walls, ceiling, floor, and doors WILL NOT change after startup, but objects in the scene such as a bed, table, or chair may move around at runtime. The customer wants to be able to set the layout positions of some objects, such as a bed or chair (basically furniture positions), to see how the interior of the scene would look.

Then I also want it to look realistic. This is just a demo, to show some people that this could be done for interior design. It’s clear that with the PBR materials in PlayCanvas it can look quite realistic, but the lighting in your demo at the top of this thread makes it look even more realistic; the key realism comes from the window lighting on floors and walls and the ambient lighting on floors, walls, ceiling, and window.

Since I want the layout of everything to be determined at runtime when the scene is first created, with only minor changes such as the addition of a bed later, this falls into category 3, Runtime Engine Lightmapping. However, I only need the baked lightmapping to be computed once, since the walls and doors do not change position after the scene is created, and I don’t need to include interior objects such as beds, chairs, and tables in this highly detailed lighting.

So I see the lighting being computed as follows.

  1. The scene is created from JavaScript code using PlayCanvas for the walls, ceiling, door, etc. Baked lighting is done at this time to produce the lightmap. This might take some time.
  2. Objects are added to the scene with runtime lightmapping.
  3. The user can move objects around.

The scene from step 1 may be re-used, so it should be cached/stored; the room may be re-used for different customers.

It is acceptable to me that only the interior walls, ceiling, windows, and floors have their lighting done realistically using baked lightmapping, while interior objects such as furniture, which don’t need such high quality lighting, can use your normal runtime lighting method. I think that is also how you would want it for a game map, except that the main difference here is that I want the game map determined at startup time from JavaScript.

For example, I placed chairs inside your demo and added an outside light to “fake” the lighting on the chairs to match the baked lighting. The only problem is that there are no shadows for the two chairs, but that seems acceptable in this adjusted demo.

Thanks, Philip

I came across this: impressive 3D walkthroughs of interiors in the browser.

Your case is not an easy one for any rendering engine, especially for real-time rendering that can be sent to users and will simply run without needing a high-end gaming PC.

This is all static content; nothing is dynamic here, and moving furniture would not work. This demo uses the same approach as #1, offline lightmapping, and the location of everything is pre-defined.

If you have the resources and experts in the field, you could go with #2 but “on demand” in the cloud; this will involve high development and operational costs.

Although there might be solutions here; as with all graphics, you need to think outside the box.
There are techniques to combine lightmaps with real-time lighting or with another lightmap. This would require implementing lightmap masking: you would need to render lightmaps into their own grouped textures that can be individually masked, then have matching realtime baked textures, properly blended to avoid the double-shadow problem.
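To illustrate the double-shadow problem on a single texel, here is a sketch in plain JavaScript of the kind of per-pixel math a shader would do. The min-based combine is just one simple blending rule for illustration, not necessarily what the engine implements; values are normalized light intensities in [0, 1]:

```javascript
// Naive combine: if a baked lightmap and a realtime shadow both darken
// the same texel, multiplying them darkens it twice.
function naiveBlend(base, bakedShadow, realtimeShadow) {
  return base * bakedShadow * realtimeShadow;
}

// One simple fix: treat the two occlusion terms as masks and take the
// stronger one, so a texel is only shadowed once.
function maskedBlend(base, bakedShadow, realtimeShadow) {
  return base * Math.min(bakedShadow, realtimeShadow);
}

// The same 50% shadow arriving from both sources:
const naive = naiveBlend(1.0, 0.5, 0.5);   // 0.25, double-darkened
const masked = maskedBlend(1.0, 0.5, 0.5); // 0.5, shadowed once
```

Proper masking, as described above, would decide per texel which lightmap group contributes, rather than applying this globally.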

Then, in order to make the furniture look good in different places, you would need probes. They allow ambient color, diffuse lighting, and specularity to look correct relative to the environment. Probes are something we will be working on in the near future.

The task you want to take on is complicated, and you do need to respect the technology and its capabilities. That doesn’t mean it is impossible, but it will require a lot of development work on your side to solve such specific requirements.


OK, I’m looking forward to your probes (“Probes are something we will be working on in the near future”).

I read a tweet about that recently and they look useful.

The tweet you referenced implements GI using precomputed light fields, which is different from what “probes” generally refers to.

Check out the first answer here:

What we would use probes for is to shade dynamic objects so they look as if the environment affects them nicely. Dynamic objects are the hardest to shade with respect to the environment they are in, unless you do a full light simulation, which of course is very expensive.


Hi Max,

In relation to the above, do you have any intention of adding real-time ray tracing (path tracing) to PlayCanvas in the future? I mean for computing the lighting and drawing it onto surface textures.


Realtime ray tracing is still a very advanced technology, expensive in terms of performance, that is used by virtually no commercial game or viz app even on high-end native platforms. For the web, I doubt we will see anything in the coming years, unless there are hardware breakthroughs regarding ray tracing and those capabilities get exposed in the form of a Web API.

Hi Max,

I played around with the idea and experimented with doing realistic lighting on surfaces in PlayCanvas.

Here I made a short demo which renders the surface textures at runtime using a path tracer I pulled from ShaderToy. It takes the positions of object surfaces, sends them to the shader via JavaScript, renders the textures, and maps each texture back onto its surface.
I used tags to mark each surface.

To play: here. To edit: here.
You need to spin the mouse wheel to zoom in and rotate the view to see inside.

After playing around with this idea I had the following thoughts.

  1. It’s better to use bi-directional path tracing rather than plain path tracing (I only used path tracing): especially if the light source is small, it’s hard for plain path tracing to hit the light source.
  2. This could scale out if the textures are all computed on the client side and each client computes just one surface texture at a time, then saves the texture back to the server as a cache. Each client would, for each surface n, check whether the server has a cached version; if not, compute the surface texture for a period of time, send the completed texture to the server, and then look for another texture the server does not have cached.
  3. It could be a nice post-bake solution for interior scenes. It eliminates the complexity of baking but yields realism. It’s quite complex to bake inside a baker such as Blender, but it would be nice if the bake happened automatically in PlayCanvas.
  4. It’s going to be slow with too many surfaces in play; some optimization method is needed.
  5. The Monte Carlo method can be improved.
  6. I understand there are many other ways to add realism…
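As one concrete example of the Monte Carlo improvement in 5, cosine-weighted hemisphere sampling concentrates rays where the cosine term of the rendering equation is largest, which reduces variance compared with uniform sampling. A minimal sketch, with the surface normal fixed to +Y for simplicity:

```javascript
// Sketch: cosine-weighted hemisphere sampling (Malley's method).
// Sample a point on the unit disk, then project it up onto the
// hemisphere. `rand` is any source of uniform numbers in [0, 1).
function cosineSampleHemisphere(rand) {
  const u1 = rand();
  const u2 = rand();
  const r = Math.sqrt(u1);       // radius on the unit disk
  const phi = 2 * Math.PI * u2;  // angle on the disk
  return {
    x: r * Math.cos(phi),
    y: Math.sqrt(1 - u1),        // lift the disk point onto the hemisphere
    z: r * Math.sin(phi)
  };
}

const d = cosineSampleHemisphere(Math.random);
// d is a unit vector with d.y >= 0, i.e. above the surface
```

In a shader-based tracer like the ShaderToy one above, the same math would run in GLSL per bounce; this JavaScript form is just to show the technique.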

Also, I came across this.



Hi @Philip_Andrew,

Did you find a way of baking the lightmaps for the complete scene using Java- or C#-style languages?

I am also interested in baking programmatically.