Enhanced Asset API


#1

Over the weekend we rolled out our Enhanced Asset API. Most of these changes are to the Engine API. The changes will affect you if you’re doing advanced asset and resource loading, such as changing scenes at runtime.

More details are in our blog post here:

http://blog.playcanvas.com/enhanced-asset-api/

Feel free to ask any questions about the new API.


#2

I used to load assets using loadFromUrl(url, type).then({}).
I would have a list of assets, and as each one was loaded, the program would progress on to the next one (url = toLoad[loadcount], and then in the promise loadcount += 1). All well and good. (About 10 lines from the bottom.)

With the new API, I use:
loadFromUrl(url, type, callback)

This works correctly the first time, but the second time it does not run the callback.
I’ll see if I can make a sample file.


#3

Do you mean second time loading the same asset?


#4

No, loading different assets.
I have a list and associative array:

toLoad = ['model1.json', 'model2.json']
loaded = {}

and then run:

loadcount = 0
function loadObj(){
  if (loadcount < toLoad.length){
    app.assets.loadFromUrl(toLoad[loadcount], finishedLoading);
  }
}
function finishedLoading(asset){
  loaded[toLoad[loadcount]] = asset.id;
  loadcount += 1
  setTimeout(loadObj, 0.1); //load the next object
}

So the callback gets run with different assets.

As I said, I’ll try to make an example file, but I don’t have time at all this week, so probably only on the weekend.


#5

This is the code I’ve tested for loading. It works OK and will load multiple assets in parallel.

        var urls = [
            "../assets/statue/Statue_1.json",
            "../assets/Playbot/Playbot.json",
        ];
        var count = 0;
        for (var i = 0; i < urls.length; i++) {
            app.assets.loadFromUrl(urls[i], "model", function (err, asset) {
                count++;
                if (count === urls.length) {
                    done();
                }
            });
        }

You might need to check your code. The function should be: app.assets.loadFromUrl(url, type, callback); and the callback is passed (err, asset) where err is null if no error occurred.
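To make the fix concrete, here is a sketch of the sequential pattern from post #4 with both issues corrected: the type argument is passed, and the callback takes (err, asset). The loadFromUrl below is a stand-in so the snippet runs on its own; in a real project you would call app.assets.loadFromUrl with the same shape.

```javascript
// Stand-in for app.assets.loadFromUrl(url, type, callback) so this
// sketch is self-contained; the real engine call has the same shape.
function loadFromUrl(url, type, callback) {
    setTimeout(function () {
        callback(null, { id: url + "-id", type: type });
    }, 0);
}

var toLoad = ["model1.json", "model2.json"];
var loaded = {};
var loadcount = 0;

function loadObj(done) {
    if (loadcount < toLoad.length) {
        // The type argument and the (err, asset) callback signature
        // were both missing in the original snippet.
        loadFromUrl(toLoad[loadcount], "model", function (err, asset) {
            if (err) {
                console.error("Failed to load " + toLoad[loadcount] + ": " + err);
                return;
            }
            loaded[toLoad[loadcount]] = asset.id;
            loadcount += 1;
            loadObj(done); // load the next object
        });
    } else {
        done(loaded); // all assets loaded
    }
}
```

Called as loadObj(function (loaded) { /* show the scene, hide the loading bar */ }), this keeps the one-at-a-time ordering, which also makes a loading bar straightforward (progress is loadcount / toLoad.length).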


#6

I’m just wondering why the change to callbacks in the loading API? In Node.js most people want to promisify callback functions to avoid the “pyramid of doom” indentation when taking multiple steps. Also, having a decent promise library allows for Promise.all() (Q.all()), which is very useful when performing multiple parallel steps.
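For readers unfamiliar with the pattern being described, here is a minimal sketch: wrap a callback-style loader in a promise once, and parallel loading flattens into Promise.all instead of nested callbacks. The loadFromUrl here is a stand-in, not the engine call.

```javascript
// Stand-in async loader with the Node-style (err, result) callback shape.
function loadFromUrl(url, type, callback) {
    setTimeout(function () {
        callback(null, { url: url, type: type });
    }, 0);
}

// Wrap the callback API in a promise once...
function loadAsset(url, type) {
    return new Promise(function (resolve, reject) {
        loadFromUrl(url, type, function (err, asset) {
            if (err) reject(err);
            else resolve(asset);
        });
    });
}

// ...then several parallel loads become one flat chain instead of a pyramid.
Promise.all([
    loadAsset("model1.json", "model"),
    loadAsset("model2.json", "model")
]).then(function (assets) {
    console.log("loaded " + assets.length + " assets");
});
```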


#7

Well, I updated to the latest PlayCanvas. I like the changes to the asset API, and it adds some functionality that either I didn’t know about or that is new (the whole asset.find thing). I’ll have to see if I can get my loading bar back, though; block-loading them in parallel like you do is quite ‘hidden’ from the user.

Interestingly, some change since I last updated also broke the physics until I moved app.start() before adding collision/rigidbody components.

@whydoidoit:
I prefer callbacks. They’re familiar to me as a programmer. Promises? Not so much. Never heard of them until I started playcanvas/javascript.


#8

Yeah I know, was that way for me too when I started Node.js programming a year ago.

Promises are a key part of the new JavaScript spec, though, and are implemented in most browsers. Their reason for existing is exactly this case: the pyramid effect of multiple nested callbacks is generally frowned upon, as well as being hard to read and debug, and promises are considered the modern answer to async programming.

See this for one discussion: http://www.telerik.com/blogs/what-is-the-point-of-promises

I guess I’ll be promisifying the asset loading API :slight_smile:


#9

I like promises, which is why I created the previous resource loader using promises. But there are a number of problems with promises in the context of a library and an open-source project.

  1. It introduces a dependency on another project. In our case RSVP.js.

  2. It made debugging harder than it should be. When a loading error occurred it was difficult to work out where the loading request was made. Promise libraries swallow all exceptions so we had to work around that and re-throw errors which made debugging more confusing. The current callback system should give you a straightforward callstack all the way back to the load() call.

  3. I was concerned about memory use and performance with promises. They tend to lead to lots of nested closures, and it wasn’t completely clear to me that we weren’t keeping references to things that we didn’t need.

  4. It is possible to promise-ify a callback API in your application, if required and the above issues are not problems for you. Converting a promise API into callbacks, on the other hand, would still have the issues above.
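Point 4 can be done with a small generic helper. This is a hypothetical sketch (not engine code) of a promisify function that works for any fn(args..., callback(err, result)); it is demonstrated here with a tiny stand-in rather than the asset registry.

```javascript
// Generic helper: turn fn(args..., callback(err, result)) into a
// promise-returning function. A sketch of the approach, not a library.
function promisify(fn, context) {
    return function () {
        var args = Array.prototype.slice.call(arguments);
        return new Promise(function (resolve, reject) {
            // Append the Node-style callback that settles the promise.
            args.push(function (err, result) {
                if (err) reject(err);
                else resolve(result);
            });
            fn.apply(context, args);
        });
    };
}

// Demonstrated with a tiny callback-style function:
function double(n, callback) {
    setTimeout(function () { callback(null, n * 2); }, 0);
}
var doubleAsync = promisify(double);
```

Applied to the asset API, the same helper would wrap app.assets.loadFromUrl bound to app.assets, giving back a promise-returning loader for your own code without the engine taking a dependency on a promise library.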


#10

I personally think that promises do not solve the real challenges introduced by event-driven programming.
There was huge hype about them two years ago, and then it settled down.

What they attempt to solve: preventing the pyramid in code. That’s pretty much it. What they actually introduce:

  1. Passing data through is ugly and very inefficient (a single object with many keys…).
  2. In many places you need data from a few callbacks back, and you end up building the same pyramid anyway, or passing it around in a large object.
  3. Error handling and debugging become awful. You end up writing loads of .fail handlers where you wouldn’t want to in the first place.
  4. Code becomes longer than it should be.

A solution to event-driven programming is coming in ES6 and improved in ES7, using generators (yield), allowing developers to write async code in a synchronous manner and handle all debugging in a very common and intuitive flow.
Unfortunately we need some time before it comes to all browsers, and it won’t be polyfillable :frowning:
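To illustrate the generator approach being described, here is a minimal sketch of a runner that drives a function* which yields promises, so async code reads top to bottom and exceptions surface like in synchronous code. It is a toy version of what libraries such as Q and Bluebird provide, not a complete implementation.

```javascript
// Minimal generator runner: resumes the generator with each resolved
// value and feeds rejections back in via gen.throw so try/catch works.
function run(genFn) {
    var gen = genFn();
    return new Promise(function (resolve, reject) {
        function step(input, isError) {
            var next;
            try {
                next = isError ? gen.throw(input) : gen.next(input);
            } catch (e) {
                return reject(e); // uncaught inside the generator
            }
            if (next.done) return resolve(next.value);
            Promise.resolve(next.value).then(
                function (v) { step(v, false); },
                function (e) { step(e, true); }
            );
        }
        step(undefined, false);
    });
}

// Async code written in a synchronous style:
function* main() {
    var a = yield Promise.resolve(2);
    var b = yield Promise.resolve(3);
    return a + b;
}
```

run(main) returns a promise for the generator’s return value, so sequential async steps read as plain assignments instead of nested callbacks.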

Nothing stops you from wrapping classic callbacks into promises for yourself; it would probably be a ~50-line JS file that does it for you. But I don’t think that, as a product, we should enforce paradigms that have lovers and haters; rather, we should stay neutral and give both parties options on those terms.


#11

Agreed that yield and the whole ability to do coroutines is great; I’m already doing a lot of that server side in Node.js, io.js etc., and it’s much more attractive. Of course that would give us coroutines in PlayCanvas, which is a good thing. Exception handling with generators is also very nice. Generators work extremely well when you yield a promise or a plain value, and the built-in support in Q and Bluebird for handling these in function* routines is superb.

To your points:

  1. I never use objects with multiple keys personally; I’ve never found the need. Closures cover the cases where you really must do things in sequence without consuming the previously returned value, and I find these rare, leading to point 2:

  2. Using .all/.when/.spread or whatever your promise library provides gives a way of getting multiple results to the .then, and leads to more parallel code (with callbacks, developers rarely run two at the same time, even when both are async and potentially parallel).

  3. Promise libraries’ long stack traces and Chrome’s ability to debug async code remove all of this for me. In fact, my error handling ends up being only a few .fails.

  4. Comparing code that my developers have built before and after switching to promises, the promise version is shorter, more parallel and more legible.
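Point 2 above in miniature: both operations start at the same time, and a single .then receives both results in the order of the input array, where the callback version would typically nest the second operation inside the first one’s callback. A self-contained sketch with a stand-in delay helper:

```javascript
// Stand-in async operation: resolves with `value` after `ms` milliseconds.
function delay(ms, value) {
    return new Promise(function (resolve) {
        setTimeout(function () { resolve(value); }, ms);
    });
}

// Both "loads" run concurrently; one .then sees both results.
var both = Promise.all([
    delay(10, "model"),
    delay(5, "texture")
]).then(function (results) {
    var model = results[0];   // order matches the input array,
    var texture = results[1]; // not the order of completion
    return model + "+" + texture;
});
```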

I believe I mentioned that I would be promisifying the asset loader, it’ll probably be 3 lines of code if I use Bluebird :slight_smile: