State management, immutability and functional programming

Like many, I suspect, I have mostly used JavaScript for web development. Especially web apps using frameworks like Angular and React. There seems to have been a big push in recent years towards functional programming principles. Stuff like pure functions, avoiding or pushing away side-effects, immutability and state management that allows all that.

Is it just me or does game development using an engine like PlayCanvas make it pretty darn impossible to carry over this kind of paradigm? For example, most of the methods on the Vec3 class actually mutate the Vec3 in place rather than returning a new Vec3. Is there a performance-related reason for that?

I find it harder to write code that way. Most of my recent JavaScript code has been written using ‘const’ almost exclusively. Now that I’m fiddling with PlayCanvas, I find myself using ‘let’ all too often and it makes me nervous. I stopped counting the times where I ended up changing the value of entity.rigidbody.velocity when all I wanted to do was use that value for some computation. It leads to some pretty frustrating bugs, and funny character behavior.
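For instance, something like this hypothetical snippet (simplified from the kind of code I've been writing):

var velocity = this.entity.rigidbody.velocity; // a reference, not a copy
velocity.scale(0.5); // scale() mutates the vector in place instead of returning
                     // a new Vec3, so I've just changed the value I only meant to read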

What are your thoughts on the matter?

Yes. It is centred around memory management and reducing the need to call the GC. Most math libraries have similar functions where they mutate the instance the function is called on. That said, it’s not too hard to extend the math library with ‘static’ functions to give the behaviour you want.
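For example, a quick sketch of what such a helper could look like (scaleToNew is a made-up name, not part of the engine API):

// Hypothetical static-style helper: returns a new Vec3 and leaves the input untouched.
pc.Vec3.scaleToNew = function (v, scalar) {
    return new pc.Vec3(v.x * scalar, v.y * scalar, v.z * scalar);
};

var v = new pc.Vec3(1, 2, 3);
var scaled = pc.Vec3.scaleToNew(v, 2); // scaled is (2, 4, 6), v is still (1, 2, 3)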


I figured it was something like that. I guess I’ll get used to it. Just gonna have to use .clone() where needed. Still, it’s quite a shift; I had gotten used to almost never mutating stuff. Now it’s like mutation galore. It almost makes me feel… dirty, somehow. :smile:

Being an Angular/TypeScript programmer myself, initially I was feeling the same. I was missing the classes and OOP candy I was used to.

But! When time came to fiddle with performance tuning and memory management I saw that low level programming has its place in the chase of the millisecond.

Indexed arrays versus properties and objects, understanding where you can get away with references versus copying/cloning, pre-instancing as much as possible, memory allocation/unloading, constant profiling, etc.
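To give a rough sense of the first point, here is a hypothetical sketch (not engine code) of positions kept in a flat, pre-allocated indexed array instead of an array of objects:

// Hypothetical example: one Float32Array holds all positions, so nothing new
// is allocated per frame and the GC stays quiet.
var COUNT = 1000;
var positions = new Float32Array(COUNT * 3);

function integrate(dt, vx, vy, vz) {
    for (var i = 0; i < COUNT; i++) {
        positions[i * 3 + 0] += vx * dt;
        positions[i * 3 + 1] += vy * dt;
        positions[i * 3 + 2] += vz * dt;
    }
}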

All in all that extra care makes you a better programmer.


I can see how this would be true. But I’m wondering if we are not sacrificing readability and maintainability on the altar of premature optimization. I don’t know enough about game programming to make such a statement. It could be that instantiating a new Vec3 every time is such a big performance drain that every game, even the simplest ones, would be affected by it, and so vector libraries have all made the choice to optimize right out of the gate.

I usually start by writing code that is easy to reason about, understand and test, then go down to the nuts and bolts if there is a need to optimize. It has rarely been necessary, but then again, I wasn’t dealing with complex graphics engines.

That is a fair argument. And I love to listen and learn from both sides.

Truly though, garbage collection is something that always needs special care, and it isn’t specific to JavaScript/PlayCanvas.

If you are coming from a Unity background, you will be aware of the endless list of tips and tricks to keep the dreaded garbage collector from kicking in.

Knowledge of that is a key point in making your game run with reasonable performance.


But you are right, having a high-level system help in managing all that is something that all frameworks and programming paradigms should aim for.


One trick we use all the time is to make a few `pc.Vec3`s ahead of time as temporary vectors and use them when we calculate stuff so we don’t need to clone vectors. If you start cloning vectors all the time you are going to run into performance issues with GC.

These vectors can be shared across every instance of a script, e.g.:

var Script = pc.createScript('foo');

// Scratch vectors, created once and shared by all instances of the script
Script.vecA = new pc.Vec3();
Script.vecB = new pc.Vec3();

Script.prototype.update = function (dt) {
    // Reuse the scratch vectors instead of allocating new ones each frame
    var velocity = Script.vecA;
    velocity.set(4, 5, 6);

    var pos = this.entity.getPosition();
    var newPos = Script.vecB;
    newPos.copy(pos).add(velocity.scale(dt));
    this.entity.setPosition(newPos);
};

Cool. So, let’s say I have this code inside my script update function:

var velocity = this.entity.rigidbody.velocity.clone();

What you are saying is that I should be doing this instead:

Script.velocity = new pc.Vec3();
Script.prototype.update = function(dt) {
  var velocity = Script.velocity.copy(this.entity.rigidbody.velocity);
}

Essentially, using this method, once you have assigned Script.velocity to var velocity, velocity should never be reassigned again.

Am I getting this right?

I think so (if I’m getting what you’re saying correctly). Basically you only create one vector in your example there so you can just reuse that instead of calling clone every time which allocates memory.


Would you normally prefer this way of doing things over implementing some kind of object pooling? I’m thinking that making a vector object pool would allow for cleaner code that’s easier to reason about, all the while taking care of the GC problem. What are your thoughts on that?

You can do pools as well, as long as you take care of releasing vectors back to the pool once they are no longer needed. Personally I think just keeping a few scratch vectors around for temporary math operations is simpler.
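For reference, a minimal sketch of what such a pool could look like (hypothetical, not engine code):

// Hypothetical minimal Vec3 pool: acquire() hands out a vector, release()
// returns it so it can be reused instead of being garbage collected.
var Vec3Pool = {
    _free: [],
    acquire: function () {
        return this._free.length ? this._free.pop() : new pc.Vec3();
    },
    release: function (v) {
        this._free.push(v);
    }
};

// Usage inside an update function:
// var velocity = Vec3Pool.acquire().copy(this.entity.rigidbody.velocity);
// ... do math with velocity ...
// Vec3Pool.release(velocity);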
