This article is sort of rambly but I agree with it.
WebGL, not CSS3 transforms, is the best way to talk directly to the GPU. Any experienced developer would have to be pretty insane to try to supplant a 20-year-old community effort (OpenGL) at efficient GPU communication, especially one that has deeply involved all the GPU vendors.
NOTE: My company has bet big on WebGL and Three.JS, so I am biased in this regard: http://clara.io
I would argue that the DOM is broken for current requirements, so their approach is actually quite sensible: circumvent it. What I would have liked to see is a proper, UX-designer-led update: a hardware-accelerated UI markup language that is neither 3D nor HTML.
This project takes a good step in that direction, in the same vein that Bootstrap and others have.
Lots of programming and design hours are wasted because the DOM is intended for Documents and not user interfaces.
True, but I think it's crucial to be able to embed these Documents efficiently in the user interface, with all the text interaction people are used to, and vice versa. Perhaps that's the future of webapps.
What I don't understand is why we don't just skip the DOM. And by that I mean, currently people are using virtual DOMs like that provided by React.js however this still has to access the DOM when it wants to render, but couldn't we rewrite a DOM/CSS renderer with WebGL (or canvas)?
Or perhaps just skip that step and come up with something nicer for specific use cases.
Why aren't we leaving all of this float/clear stuff behind us?
I can only speak from my early teen years looking at computer-game UIs. It's way harder to make a UI work in "native" ways than it appears at first sight, especially if you do very manual rendering instead of relying on float/clear stuff. There are reasons many people are moving more and more toward embedding HTML controls into native UIs. One example: your own renderer will have to handle text selection, cursor movement, and mouse interaction in exactly the way the surrounding platform does, including things like "I'd expect Ctrl+A to jump to the beginning of this textbox".
It's an open question whether apps are more like documents, or more like raster (canvas), vector (SVG), or 3D (WebGL) graphics.
I could argue that in most cases the former is the right answer. Calculating DOM layout is (well, I'm just guessing here) no more resource-intensive than calculating word wrapping and text justification on a retina-quality canvas.
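To make that comparison concrete, here is a minimal sketch of the greedy word-wrap step a canvas-based renderer would have to reimplement itself. The `measure` parameter is a hypothetical stand-in for the browser's `ctx.measureText(text).width`, injected so the logic can run outside a browser:

```javascript
// Minimal greedy word-wrap, the kind of layout step a canvas renderer
// must handle itself. `measure` is a hypothetical injection point
// standing in for canvasContext.measureText(text).width.
function wrapText(text, maxWidth, measure) {
  const words = text.split(/\s+/);
  const lines = [];
  let line = "";
  for (const word of words) {
    const candidate = line ? line + " " + word : word;
    if (measure(candidate) <= maxWidth || !line) {
      line = candidate; // word fits (or line is empty: force the word in)
    } else {
      lines.push(line); // line is full, start a new one
      line = word;
    }
  }
  if (line) lines.push(line);
  return lines;
}
```

With a toy measure function of one unit per character and a width of 9, `wrapText("the quick brown fox", 9, s => s.length)` yields `["the quick", "brown fox"]`. And this sketch ignores justification, hyphenation, bidi text, and font fallback, which is rather the point.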
> couldn't we rewrite a DOM/CSS renderer with WebGL (or canvas)?
Of course we can. :) But writing full UIs is tons of work.
Why are there no OpenGL UI libraries in wide use on desktops? The same argument should apply, to a degree. Or is it just that HTML5 is way less efficient than GDI+, Swing, etc.?
There are "OpenGL UI libraries" :P It's called OS X.
The main reason why we haven't seen more is because OpenGL up until now has been largely in the sphere of gaming, where standardized UIs aren't a priority.
That being said, even in gaming there are a large number of libraries available that help you implement UIs.
The OpenGL UI libraries I've seen on Windows are pretty horrible/limited.
Those in games are usually based on Scaleform, which is also relatively limited in terms of built-in functionality, although it allows wide freedom of design.
I am perplexed. I, for one, am running a project that would have absolutely embraced famo.us as part of the UX. There has been lots of smoke and mirrors, with a few emails dotted in to ensure we didn't think the development team had died.
I smell fear here, whether it is investor fear or the team's fear of losing IP and control when (or if) they open-source the project.
Whatever the issues, they are harming the credibility of the platform. I was an evangelist to begin with: a library built in JS that can silence the compiled-language crowd would be quite a feat.
Now I am more of a skeptic. If this is how long it takes to release the codebase, imagine what updates will be like. I know Steve has been trying to appease people in the newsletter, but all it sounds like to me now is a flash of arrogance over owning a toy that no one else can play with, in an "I know you need this, but hang on" kind of way. I know it is not meant that way; that would be silly.
WebGL is maturing, and if they don't push what they have, then they will lose their traction for sure.
Hopefully they will just release what they have soon and be done with it.
Assuming you're right, that fear is about money. But if the fear is about money, why don't they just let every developer test it early for a fee? Sum that up and you end up with a great way to fuel your startup without causing premature investor friction.
Here's a number:
$5 × (100,000 × ¼) = $125,000 in free capital
What that means for their customers (early adopters) is this:
$5 + EULA + (a freemium tier of sorts in the future, for loyalty) = WIN
TBH, I thought the same. The decision to willingly lock out so many potential developers is very bad, or just ignorant. Personally I value their startup highly as a replacement for <Enter-Mobile-Framework-Here>, but time is not everything; sometimes you lose a lot of customers because of the bad first impression you made at the beginning.
VCs were betting on the long shot that famo.us delivers and actually works as a layer/graphics engine for non-native apps (a $100m+ play, for sure). We all saw those bets and the founder's certainty as a guarantee that this technology would actually live up to the hype, when I think we should look at this one more the way the VCs did: a 1-in-20 shot that changes everything if it happens, rather than expecting it to happen and factoring it into our plans.
Hate to say all this because I'm building a browser limit pushing project myself that could have used famo.us...
I first saw them demo this at the October 2012 HTML5 Developer's Conference.
We attended their last two SF "meetups" in December 2013 where they promised they would release the code right then (or the day after).
Still no release, and no further mailings. This is effectively vaporware in my view, and they can't dangle this carrot in front of developers indefinitely without us getting fed up.
Generally, this happens through simplifying assumptions. For example, let's say you often need to check whether an element matches a specific CSS selector. That's easy with `elem.matchesSelector(selector)`, but the browser has to assume the selector could be something complex like `#content li > ul input[type="number"]` just as easily as it could be `#phone`. Plus it has to parse the selector every time (or maintain a complex cache). If you as the developer know the selector is always an ID, you can just check `elem.id === "phone"` in JavaScript, and that will whip the pants off the C++ code that has to deal with an arbitrary selector.
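As a sketch of that fast path (the helper name and the ID-only regex are my own illustration, not a DOM API):

```javascript
// Hypothetical fast-path dispatch: when the selector is a plain ID
// like "#phone", compare elem.id directly instead of invoking the
// general-purpose selector machinery.
function matchesFast(elem, selector) {
  const idOnly = /^#([\w-]+)$/.exec(selector); // matches "#phone", not "#a > b"
  if (idOnly) return elem.id === idOnly[1];    // cheap string compare
  return elem.matches(selector);               // arbitrary-selector fallback
}
```

The point is not this particular helper but the pattern: the application author knows an invariant (the selector is always an ID) that the generic engine cannot assume.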
I'd like to see what assumptions famo.us has made and whether they could have just been addressed at the CSS level. An example of that would be things like overlapping elements with rounded corners and drop shadows, which are very expensive for the rendering engine. So perhaps it was a lack of understanding of CSS jank factors that led them to decide a total circumvention was in order? We'd really need to see a side-by-side of the two and understand where the conventional solution was bogging down.
My suspicion is that if you simplify your UI so that it doesn't need the massive amount of automatic layout code that HTML has, you can write optimized code for it. Thus, even doing it in JavaScript, because it is a much smaller subset, you can make it faster.
The question is what subset you choose to implement, and it will necessarily not be correct for all cases. Thus what would happen is that this becomes one of many specialized/simplified pure-JavaScript DOMs.
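To illustrate, here is one plausible (entirely hypothetical) shape of such a subset: if every node may only translate relative to its parent (no floats, no wrapping, no margins), then layout collapses to a single tree walk:

```javascript
// A toy "layout subset": every node carries only a (dx, dy) offset
// relative to its parent. Resolving absolute positions is then one
// recursive tree walk; this restriction is what lets a small pure-JS
// engine stay fast where a general layout engine cannot.
function resolvePositions(node, parentX = 0, parentY = 0, out = []) {
  const x = parentX + node.dx;
  const y = parentY + node.dy;
  out.push({ id: node.id, x, y });
  for (const child of node.children || []) {
    resolvePositions(child, x, y, out);
  }
  return out;
}
```

For a tree `{ id: "root", dx: 10, dy: 10, children: [{ id: "a", dx: 5, dy: 0 }] }`, this yields root at (10, 10) and "a" at (15, 10). Anything the subset excludes (say, text that wraps) either gets bolted on later or is simply unsupported, which is the trade-off the comment above describes.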
The slowness problem when making a hybrid app (at least on iOS) is entirely caused by the JavaScript engine, since the webview doesn't use Safari's Nitro, so I don't see how rendering the page that way would make it faster...
Unless they are purely talking about websites, in which case I don't see why they would even bring up native/hybrid apps...
I've profiled a lot of web pages/apps, and this hasn't usually been true in my experience. Occasionally JavaScript is the culprit, but that is more often due to really bad O(n²) algorithms and script that forces layout. In contrast, the rendering engine combined with complex CSS is often the cause of "janky" page behavior. Do you have some profiler runs that show a JavaScript bottleneck in a webview that doesn't exist when using Nitro or some other browser?
Ugh, this resonates with me completely. Every time I see these guys I think: OK, I can use GreenSock's TweenMax and get every single bit of the performance these dudes are getting out of their crazily hyped render engine. It's just totally, completely silly IMO; but that said, I'm also willing to be proved wrong. (It also makes me think of Mr.doob's periodic table, which he put out after the very first demo using the Three.js CSS3 renderer that he wrote. How many lines of code was it, 10 or 25? Both clocked in at 60fps.)
If you've tried to use GreenSock on a mobile device, you'd see that it stutters quite a bit depending on the number and size of elements. I've tried all the famo.us demos on basic mobile phones and for the most part they all perform well in terms of fps. That is their main advantage, I think.
I am not familiar with client-side JavaScript. Could anyone explain what famo.us is doing and whether it has merit? Their demo looked less impressive than the WebGL ones.
Something like direct CSS3 transforms combined with their own UI library? Personally, I think they are going to get steamrolled by WebGL and by native apps; or, more accurately, they have already been steamrolled by these two technologies.
Mainly hyping their vaporware product, as far as I can tell. Ok, they've made a 3D-ish periodic table with questionable UX? Great, anything actually useful?
TL;DR the solution to HTML5 performance is to not use HTML5.
They use JavaScript instead and get great results, but that's hardly fixing the problems of HTML5 the way the article made it out to be. HTML remains slow; they are just putting forward the idea that JavaScript apps may be as fast as native apps.
Approaches to updating the DOM seem to be getting a lot smarter - tools like ractive.js are smart enough to make the smallest (fastest) changes to the DOM rather than the traditional deleting and recreating large chunks of the tree.
Edit: React also has a virtual DOM for high-performance updates.
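The core idea can be sketched in a few lines (this is a toy illustration of minimal-change diffing, not ractive.js's or React's actual algorithm):

```javascript
// Diff two flat lists of text nodes and emit only the operations
// needed to turn the old list into the new one, instead of deleting
// and recreating the whole subtree.
function diffChildren(oldList, newList) {
  const ops = [];
  const len = Math.max(oldList.length, newList.length);
  for (let i = 0; i < len; i++) {
    if (i >= oldList.length) {
      ops.push({ op: "insert", index: i, text: newList[i] });
    } else if (i >= newList.length) {
      ops.push({ op: "remove", index: i });
    } else if (oldList[i] !== newList[i]) {
      ops.push({ op: "update", index: i, text: newList[i] });
    }
    // equal nodes produce no op at all, which is the whole win
  }
  return ops;
}
```

Here `diffChildren(["a", "b", "c"], ["a", "x"])` produces just an update at index 1 and a remove at index 2; the real libraries add keyed matching and tree recursion on top of this idea, then batch-apply the resulting ops to the live DOM.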