Carmack reveals the challenges of mobile VR game development (gamasutra.com)
100 points by mariuz on Sept 5, 2014 | 43 comments


  Okay, there’s the normal hell of moving to a new platform -- 
  and I gotta say, Android was more hell to move to than most 
  consoles I’ve adopted.
Note that official Doom platforms include the Atari Jaguar (three CPUs! 2 MB of RAM! Can't execute code stored in system RAM!), Sega 32X, SNES, PS1 (no hardware support for floating-point math!) (http://www.gamasutra.com/view/feature/4111/dirty_coding_tric...), Sega Saturn, GBA...

I don't know how many of those ports Carmack personally worked on (I think he did the SNES and Jaguar ports, at least), but it's safe to say he's got experience with weird consoles. It's significant that he calls out Android for being so bad, multiple times.


"he calls out Android for being so bad, multiple times" - but only in the context he's using it in. See my quote from the article: he's not saying Android is bad, just that it's necessarily "no fun" if you're going to support diverse hardware.

Supporting that kind of hardware diversity without Android's SDK would be worse than "no fun" - it would be practically impossible.

"Carmack: [..] Android was more hell to move to than most consoles [..] Just because of the way Google has to position things across a diverse hardware spectrum, and because [..] they’d still rather everyone worked in Java. And that’s a defensible position, but it’s certainly not what you want to be doing on a resource-constrained VR system."


I think he's mostly talking about the Android NDK (Native Development Kit), which is required for C/C++ development and makes use of Java's JNI (Java Native Interface). On the Java side everything is nice and shiny, but the NDK is notorious for being hard to set up (getting debugging to work, in particular, is a nightmare), it has a funky non-standard build system, and instead of an executable, a shared library (.so) must be created, which is then embedded into a Java application. Also, for all somewhat high-level features, the native code must call back into Java, because most of the Android APIs are not mapped to C. All this is about as far removed from a standard GCC-style cross-compiling toolchain as possible, and a lot of work to integrate into an existing engine (and every couple of NDK versions something in their build system changes and the engine's build-system integration must be fixed up).


"Funky non-standard build system"?!

While you do have the ndk-build script, the NDK is just GCC (or Clang - both are included) preconfigured for cross-compiling. You can easily use ANY C/C++ build system with it: just set the CC, CXX, LD, etc. environment variables to point to the GCC build in the NDK folders. That way you can compile practically anything from Linux that uses only the available libc calls - not least ffmpeg, openssl, curl, whatever you need. The JNI rules are exactly the same as for any desktop Java app that uses native code. There is nothing special or "funky" in the NDK - it's all standard stuff you know from Linux, if you've ever used Java.

I'm noticing an incredible number of issues people have (at least in my sample, helping them out in Freenode/#android-dev) because they don't understand this simple fact that the NDK is just GCC, don't understand ANYTHING about how C/C++ compilation works, or aren't willing to learn the basics of how JNI works.
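A minimal sketch of what this comment describes - pointing a plain build at the NDK's prebuilt cross-compiler by setting the usual environment variables. The paths and version numbers here are illustrative only; they vary by NDK release, host OS, and target architecture:

```shell
# Hypothetical layout - adjust NDK_ROOT, toolchain version, API level
# and target arch to match your actual NDK installation.
NDK_ROOT=$HOME/android-ndk
TOOLCHAIN=$NDK_ROOT/toolchains/arm-linux-androideabi-4.8/prebuilt/linux-x86_64
SYSROOT=$NDK_ROOT/platforms/android-19/arch-arm

export CC=$TOOLCHAIN/bin/arm-linux-androideabi-gcc
export CXX=$TOOLCHAIN/bin/arm-linux-androideabi-g++
export LD=$TOOLCHAIN/bin/arm-linux-androideabi-ld
export CFLAGS="--sysroot=$SYSROOT"

# Any standard build system now cross-compiles as usual, e.g.:
#   ./configure --host=arm-linux-androideabi && make
```

This is the standard autoconf/Makefile cross-compiling convention, which is the commenter's point: nothing Android-specific beyond the paths.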


The compiler tools are fine; the problems are: (1) ndk-build scripts instead of Makefiles, (2) C++ headers not in standard locations, and (3) the Java build stuff that is still needed to build the final APK (ant, maven, gradle, or whatever it is at the moment).

All of this together qualifies as a 'funky non-standard build-system' in my book ;)


I might have not explained clearly: you can use standard Makefiles and scripts without issue.


Get downvoted to hell, still! We don't need that knowledge!!


I think he was referring to the build scripts and ecosystem, not the compiler.


I dunno, I think his criticism of Java as the official language on a platform that's intended to support gaming is legit. Java can be used for gaming, obviously, but it's hardly ideal.


Agree with this - he has ranted about how he doesn't like Java before:

http://armadilloaerospace.com/n.x/johnc/Recent%20Updates


Well, obviously he's talking about it in the context of VR. Printing "Hello World" to the screen is trivial on any platform, doing realtime 3d graphics is something else entirely.


Doing any native development (C, C++) on Android is a hassle, because all of the android lifecycle events, os calls, etc. are through Java (JNI). That sounds like exactly the hassle he's talking about.

There are some applications that don't require native development (if you don't care about speed or portability) but this isn't one of them.


> because all of the android lifecycle events, os calls, etc. are through Java (JNI).

NativeActivity: http://developer.android.com/reference/android/app/NativeAct...

No need for any Java/JNI whatsoever.


Which just shows your NDK experience.

Outside of frame buffer, OpenGL, sensors and sound APIs, everything else is only available via JNI.

Even the Play Games API only got a C++ wrapper after lots of pressure from the gaming community. At first, Google just posted a DevBytes video how to build a basic JNI wrapper for it.


No, there's also controller input, touch, asset manager, and a handful of other stuff. <insert snide personal attack here>

If you are writing a game, everything you need is available through NativeActivity without JNI.


Have you ever bothered to look at the code for the NativeActivity?

NativeActivity is a Java Activity that calls into C code via JNI. That C code then uses UNIX IPC (a pipe) to send said events back and forth to the C/C++ code running in a separate thread.

Every time Android OS calls into the Activity there is JNI involved.

This only gets worse, if the developer plans to do a portable application, and not just a plain game.

Just because you don't write the JNI code yourself, it doesn't mean it isn't there.


"I don't know how many of those ports Carmack personally worked with [...]"

Although it's not Doom, he personally ported Wolfenstein 3D to the iPhone. You can read his write-up here [1]. Wayback Machine, FTW... it's no longer on id Software's site, and I couldn't locate it in Google's cache, either.

"Rather than having a big confrontation over the issue, I told them to just send the project to me and I would do it myself."

[1] http://web.archive.org/web/20110808014701/http://www.idsoftw...


"At this point one of the most experienced programmers in the team, one who had survived many years of development in the "good old days," decided to take matters into his own hands. He called me into his office, and we set out upon what I imagined would be another exhausting session of freeing up memory.

Instead, he brought up a source file and pointed to this line:

  static char buffer[1024 * 1024 * 2];

"See this?" he said. And then deleted it with a single keystroke. Done!

He probably saw the horror in my eyes, so he explained to me that he had put aside those two megabytes of memory early in the development cycle. He knew from experience that it was always impossible to cut content down to memory budgets, and that many projects had come close to failing because of it. So now, as a regular practice, he always put aside a nice block of memory to free up when it's really needed."


Budget allocation is really the thing to do - either this way on the code side, or with a dummy sound file, dummy texture, or dummy big model on the art side.

It's been done this way, and it'll be done in the future.


>> I emailed Bobby Prince, the original composer, to see if he had any high quality versions still around, but he didn't get back with me.

How do you ignore an email from johnc??


Those composerz are always high, or drunk -- didn't you know?


The thing is, consoles historically let you get close to the metal, which means if there's a problem it's likely to be either in the hardware or in your code. Since the 360/PS3 era the amount of software between the game and the hardware has exploded, and on smartphones it's in a whole other league again. When you have a problem on Android, establishing who or what caused it, especially on firmware that is still under development, is nightmarish because of how many things are going on. The inconsistency between the philosophies of different components is what makes this task truly evil.

That said, the problems of handling screen size diversity and just getting your C++ OpenGL code with touch input wired up and working are greatly exaggerated by many.


I'm pretty sure he mentioned at QuakeCon how much of a pleasure it was developing for the iPhone/iPad. Can't find the video, but I think it was QuakeCon 2013.


Here is a hard one:

On all Apple devices only one GPU family is used - PowerVR - and only one texture compression format for it: PVRTC.

On Android devices many different GPUs are used, and you have to ship at least TWO different formats: one DXT-like (ETC?) and one PVRTC.

This can double the size of your Android game compared even to PC.


Everything Carmack says is absolutely correct. You will never win an argument on technical details with him. He knows this stuff front and back and inside out.

But he is way overstating the problem of latency. It is certainly a problem, and lower latency is always better, but it's an engineering problem, one that will eventually be solved. We have the necessary hardware to do this right now; the problem is that the current state of the industry puts hardware and software developers way too far apart, and puts startup programmers and mathematicians too far apart, too.

Of far more concern is the UX issue. NO ONE has any clue how to design intuitive UI metaphors for VR yet. Oh sure, first person shooters are obvious. But what about everything that isn't a game? Speaking from experience here [1], it is a pain in the ass to be constantly taking the headset off to switch between apps, to fiddle with the phone, to have to take it out of the headset rig to use the touch screen. What is needed first and foremost is a VR window manager.

Carmack is an excellent mathematician who understands how hardware works almost intuitively. He is not a designer; just look at Doom 3. And he does all of his work in the dark. If we were talking about web technology, people would dismiss a project for being so closed and inaccessible. We've come to expect open source for web tech, but for some reason we still hold the view that the default state of game tech is closed source. If VR is going to have the adoption rates necessary for a healthy app ecosystem, it needs the low barrier to entry that open source provides for developers. It needs parallel, competing framework development. I don't want Oculus and Samsung to be the only arbiters of how to design VR code. I don't want users with $500 burning a hole in their pocket to be the only users who can experience VR. We can't even get people to pay $3 for a smartphone app now; adding extra hardware cost is just going to make that worse.

We need creative people hacking away at the problem in their bedrooms and garages. Carmack is pursuing a model that requires startup capital just to get to play around. I'd rather reserve that level of effort for real content, so we can eventually have real content.

[1] I'm doing this open source and I'm interested in collaborations. Please check it out and consider contributing: https://github.com/capnmidnight/VR


You've obviously never worked on VR. Latency is the only problem, and far more important than UX. Why do you think we still have no consumer-grade VR equipment, even though all the technical problems and fancy UX solutions were solved 20 years ago? Latency.

With a bit too much latency in VR your users will not only lose immersion, they will start vomiting and suing the maker. Something like Google Glass was technically possible for >20 years, but nobody dared to sell it. For 20 years all the VR industry cared about were the various UX tricks (speech, motion, position, floating windows, tiles, ...), but latency - i.e. the hardware - still brought it to a halt. Even with special GL hardware.

VR window managers were among the biggest projects to come out of Xerox PARC. Everybody had tons of ideas back then about how to design intuitive UI metaphors for VR. Creative people will certainly play around with nifty interfaces, as they did during the first VR wave, but we still need the cheap, responsive, non-overheating hardware first.


"Start vomiting and suing the maker" is the type of overstating the problem that I'm specifically referring to.


> VR window managers were amongst the biggest projects out of Xerox Parc

Do you know where I could read more about those?


If it wouldn't be too expensive, they could put the GPU core (or CPU, whichever is smaller/cheaper) and extra batteries into the headset casing, which would be designed with a heatsink.


The Gear HMD is so cheap because it can dump a lot of development cost on to the Note 4. If you had to do another Note 4 version just for the HMD, then it'd probably put the price point at $1000+


How cheap is it? Have they announced a price yet?


They haven't, actually. But it's an accessory for the Note 4: a plastic headset with a couple of lenses and a touchpad. The point is that if it were a complete product, the dev costs wouldn't be amortized over a million high-end phones.


Looks like some site is taking preorders for 150 GBP? It's not dirt cheap, considering the Oculus DK2 costs 210 GBP.


[deleted]



...which is why I didn't include the cost of the phone in the 150 GBP.


Any thoughts (especially from Carmack) on bringing iOS to this state-of-the-art mobile VR? Google Cardboard should be a huge wake-up call on the subject, being a gloriously cheap & effective solution easily duplicated IF a respected standard is developed.


Though I guess it isn't a dealbreaker, iPhones currently don't have the low persistence OLED displays used in this Samsung device. The resolution is also lower, but I suppose that isn't likely to remain an issue after the Apple announcement.


Oculus Rift DK1 & DK2 Owner here

Why are we bringing VR to mobile when it still has great challenges on the PC?

"The ability to put it on your head and walk around" ? Into walls? Why?


For one, a lot more people have powerful mobile devices than they have powerful desktops/laptops.

Second, price-wise, you can sell the VR device a lot cheaper if you rely on the user's existing phone to power it, making it more affordable.

So it looks like it's going to be the other way around: a more basic mobile version comes first and finances the more tech-savvy (and more demanding) desktop offering.


Oh, how I wish Carmack and Oculus hadn't sold out to a company like Facebook. I really don't think they needed the money that fast, yet it seems like they sold to the first company that offered them $2 billion, even though there are few companies that would've been worse than Facebook. I still believe Marc Andreessen, the Oculus majority holder and Facebook board member, forced the decision more than anything else.

The rest of the team feels like they were duped into the deal, and then fed some BS that they then told everyone else, too, about how much they "needed" the deal and how great Facebook is going to be for them.


I can think of a lot worse companies than Facebook they could have been picked up by. Google, for one. Then Oculus as we know it would have been absorbed, and we'd never have heard anything from the people involved again.

Am I the only person on Hacker News that doesn't have this pathological hate of FB?


Literally nothing in this article has anything to do with Facebook. Do you just post this on any thread related to the Oculus?


Do you have any arguments, or are you just saying you personally don't like the deal?



