This is tantalising, and makes me want to work out what the problem is pre-Blackhat. Looking at JAR signing in general (which is what Android packages use), I see a few possible flaws:
* The zip format doesn't structurally guarantee uniqueness of names in file entries. If the APK signature verification chooses the first matching file entry for a given name, and unpacking chooses the last then you're screwed in the way described.
* The JAR signing scheme signs a file containing hashes of file name/data hash pairs. However, there seems to be no part of the verification steps (in the JAR specification) where extra files not mentioned in the signed data cause signature rejection. This seems like a bad idea.
From the description, though, it sounds like a key management problem. Anyway, this talk is definitely on my Blackhat schedule!
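The duplicate-name behaviour from the first bullet is easy to reproduce with an ordinary zip library. A minimal Python sketch (entry name and contents are made up; it models a verifier that walks the entry list in order while extraction resolves by name):

```python
import io
import zipfile

# Build a zip with two entries sharing one name. The format permits this;
# Python's zipfile merely warns about the duplicate and writes both.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("classes.dex", b"original, signed code")
    z.writestr("classes.dex", b"malicious replacement")

with zipfile.ZipFile(buf) as z:
    # A verifier that iterates over entries in order hashes the first copy...
    first = next(i for i in z.infolist() if i.filename == "classes.dex")
    verified = z.open(first).read()
    # ...while name-based extraction resolves to the last copy written.
    extracted = z.read("classes.dex")

print(verified)   # the first (signed) entry
print(extracted)  # the second (attacker-controlled) entry
```

If the signature checker and the installer disagree like this, the signed bytes and the installed bytes diverge, which is exactly the failure mode described above.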
You should really dig into this more, regardless of Bluebox's work.
I was approached about buying two different Android 0days related to APK signing about 3 months ago, so definitely some issues to be found. The seller wanted unrealistic terms, so I never got full details.
Maybe I'm missing something, but I don't see how this security bug could be used against a majority of people. Can someone explain to me how this exploit could be used against people?
To me, it seems like someone would have to side-load an application. Anything coming from the Play store should be safe?
I sit on an open wi-fi network that you're connected to, intercept Play Store traffic, and spoof an app update for something like Gmail, injecting my malicious app download URL. My version might look exactly like the Gmail app, except it also sends your emails to me.
Then I go to paypal, forgot password, get email, gain account access, ruin your life.
Not sure how plausible any of this is, I'd have to be a really dedicated hacker to set all that up.
But it's all possible.
Well, only to the extent that the PKI model is trustworthy. If somebody can perform a MITM attack against you, you are wholly reliant on them not being able to generate a certificate for the domain you're looking at. It's better, but there are 650+ different companies who can sign certificates for all domains worldwide.
My thoughts exactly. DNS spoofing on open wi-fi is shockingly easy. Most people just "trust" certificates without looking at the details too.
If I got a cert for facebook1234.com (just an example) and set up my server to grab credentials then send you to the real FB, nobody would notice.
HTTPS is only good when you know what you're connecting to.
It doesn't require that at all. You could simply strip the SSL and present an HTTP version of the page, and most people would be none the wiser. Or you could redirect to e.g. www.facebook23.com and have them log in there, with a nice little padlock. At no point would they get a certificate warning. They wouldn't get a certificate warning if you had compromised one of those 650+ CAs either, unless the site in question used certificate pinning (almost no sites currently do).
but as far as I can see (skimming) -- there's nothing there about standard software doing pinning -- it's all about when you're deploying (part of) the client as well?
True.. especially because users regularly get conditioned to this sort of thing. As an example:
London Underground have partnered with a variety of commercial providers to offer free wifi at Underground and Overground stations. At least one of those providers forcibly redirects all traffic to plain http to show some "welcome" page, and won't let you use ssl at all.
So users might get surprised once, see that the page they finally get to is an official-looking page, and then just shrug and continue.
(which in this specific situation also creates the perfect setup for MITM'ing people without even having to go through the trouble of a cert: a portable base station, a higher-powered antenna, a little rig faking their setup with a copy of their page, and voila, you have a bunch of commuters happily tolerating surfing without SSL to get their free wifi while you intercept all their traffic)
If someone is able to do a MitM attack with a valid cert, I'd guess they have the capabilities of doing the attack upstream of the wifi. So having the wifi open or not won't matter much.
I think that's an unreasonable assumption. It's far more likely that a CA is compromised than an ISP.
Also, don't underestimate the power of e.g. sslstrip. Most users enter "google.com", not "https://google.com". If you're not careful, somebody will just remove all the SSL links on your pages, or lead you to another domain with a valid certificate, via DNS or HTTP redirects.
"It's far more likely that a CA is compromised than an ISP."
I wonder if that's true?
At a high enough stakes game - one would have to assume that state level actors have already trivially compromised both - the NSA clearly has pretty much 100% compliance from Verizon, and I don't think it's going out on a limb to assume they've got similar compliance from at least one of (and possibly all of) the US-based CAs. Egypt has been seen using fraudulent SSL certs at the state level, and you'd be foolish to assume there isn't equivalent CA access available to any government that has a root CA authority under its jurisdiction.
But that's kind of a moot point: if the NSA is targeting me individually, I have to assume they'll gain access to pretty much everything. Even if I strongly encrypt everything (and don't ever make a mistake doing so), many of the intended recipients of my communication are in jurisdictions that wouldn't be able to resist the NSA's demands to reveal the unencrypted contents. (If they can ground a head of state's private plane in various European countries, there are probably very few places where they can't "lean on" someone strongly enough to make it a not-very-difficult question about whether to give up my personal data.) "Lesser" state level actors - GCHQ, or ASIO here in .au for example - might not have quite such god-like global power, but I'm under no delusion about the privacy protection I've got against the "feeble compared to the NSA" local government intelligence organisation of whatever country I happen to be in, if it takes a personal interest in me.
At the attack levels lower down though - carders, identity thieves, the generic "internet fraud" level attacker - I suspect ISPs are significantly more likely to be compromised than CAs - or at least the important part of the CA infrastructure that holds the root signing keys. I'd guess typical ISP infrastructure is not as well secured as typical CA root keys - and that zerodays, unpatched known vulnerabilities, and rogue/disgruntled/underpaid sysadmins in small (and perhaps even large) ISPs represent a much higher risk than non-state-level attacks via stolen CA keys.
But if I want to attack you with an ISP compromise, I have to compromise your ISP, regardless of how well-secured it is. If I want to attack you with a CA compromise, on the other hand, I get to choose the absolute weakest-secured of the 650+ CAs there are.
> So my question is: is HTTPS over open wi-fi trustworthy?
As trustworthy as it is otherwise, so generally yes.
Since Android started supporting arbitrary VPNs (as of ICS), I use OpenVPN to home on my tablet (as I always did with my netbook); that way I should be no more unsafe, even for unencrypted protocols, than if I were sat at my desk. It does add latency of course, which in less than ideal conditions (an iffy 3G/GPRS connection) affects throughput too (on top of the small amount of packet-wrapping overhead in the VPN protocol itself), but I've not found that to be noticeable for my use pattern.
I'd be curious to see what information is transferred by Android phones that is not through SSL. Given all the apps out there that have access to your personal data and could be transmitting who knows what through non-SSL requests, there are probably enough ways to mess with someone.
Imagine using a pineapple http://wifipineapple.com/ and sitting in a coffee shop with your laptop out, messing with people's insecure traffic! The ramifications could be quite potent, given that an insecure app might allow insertion of basically any content into an HTTP request.
This is why you secure your damn wifi. Even if the password and user are on the wall, the traffic is still encrypted!
If you are in a coffee shop, you too will likely have the passphrase to the WiFi, and then you can still intercept the traffic of unsuspecting users via ARP poisoning and whatnot.
So just because it's WPA2 doesn't make it magically safe from tampering.
And usually a coffee shop that has WPA2 will still have admin/admin as their router credentials; if you are lucky they run Linux, and from there you have a root prompt (#) and can use iptables to divert traffic to your device as you wish.
Fortunately there are a few projects out there that will let you wrap up a pretty decent RADIUS + WPA2-Enterprise setup -- but even with that ease of use, the number of things that can go wrong and take down the network skyrockets, and few coffee shops etc. will be deploying it (too much hassle/too much time/too little awareness).
I believe there is ample opportunity to sell a "secure wifi box" with some kind of fanless linux/*bsd-box with a (more) secure access point in it than what most ISPs currently deliver. Throw in a caching, ad-blocking proxy... (Alternative business plan: give the boxes away, sell ads based on location -- (re)placing ads in web content...).
Not that I'm aware of. I've man-in-the-middled a few of them in the past couple months. On my phone, Twitter and Foursquare are the only ones that I've seen that actually do certificate pinning.
As I understand Google policy, they allow a user-supplied CA to overrule certificate pinning, to allow for the common use case of an enterprise-supplied certificate rewriting at the firewall. So unless you're using one of the magic wildcard sub-CA certs issued by a default CA, I suspect you're seeing the Google apps allowing an exception to pinning.
If so, that's pretty cool. Happen to know if foursquare / twitter have anything in place that lets them handle the enterprise case w/o letting me mitm them?
That implies that the play store isn't connecting over a secure connection and is actually spoofable right? Which would be a security blog post in itself.
Pfff. I could get mugged walking down the street too, but I still carry my wallet and keys around. And yes, I have been the target of some robbery attempts in the past.
I own a lot more valuable possessions than I'd walk through a sketchy area of a city late at night carrying - but that doesn't mean I won't carry my wallet/phone with me all the time.
I'm happy enough using open wifi to read the news or check train times. I kinda rely on gmail "getting SSL right" so that if my mail checks while I'm doing that I'm still OK. I wouldn't log in to my internet banking or PayPal on open wifi. I know I'm leaving myself open to "forgetting" and accidentally using an open wifi connection for a sensitive transaction/login, but for me the probability of me forgetting multiplied by the probability of any particular open wifi being malicious is "low enough" that I'm happy to accept the consequences.
Be aware the open wifi has risks. Take suitable precautions. Use them if it's useful - but modify your behaviour to minimise the risks. But if you just want to check the news or the weather or the bus schedule, someone saying "never connect to open wifi" is probably not giving or taking into account appropriate context.
Maybe a better comparison would be with having my pocket picked. And yes, I do know if the digital equivalent takes place, unless the attacker has been so thorough as to also subvert my router and modem admin consoles, in which case they probably didn't need to wait for me to wander through an open wi-fi network.
Your scenario doesn't work at all given that such traffic is encrypted -- you can't simply inject into it. If someone breaks that, we have much bigger problems.
However, to your ending comment, some time back I asked whether passwords actually gave you security from other people who know the same password (e.g. a coffee shop that has a big sign telling you the WPA password). While I have zero knowledge to confirm or deny this, someone in the know claimed that no, given that the wifi password is used for the initial key exchange, it offers only superficial interception/monitoring protection against someone malicious who knows the same password. Take that with a grain of salt, but I never could find other resources on that.
I just checked - the Play store downloads aren't encrypted (the client hits an HTTP URL that 301-redirects to another HTTP URL) and all of these were fully visible in an HTTP proxy. There may be some additional hash-checking going on though, I can't say for sure (the actual listings of apps and their details are indeed encrypted). Hopefully they're not relying 100% on the APK signature check for downloads.
There is additional hash checking indeed. Google Play will fail to install an APK you MitM to the app.
Edit: the hash checking is done in addition to the APK signature verification, i.e. you can not replace the APK with a different APK signed with the same developer key.
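If it works that way, the check itself is simple. A hedged sketch (the function name and the choice of SHA-256 are my assumptions, not Google's actual scheme):

```python
import hashlib

def verify_download(apk_bytes: bytes, expected_sha256: str) -> bool:
    # The expected hash travels over the encrypted metadata channel, while the
    # APK itself may arrive over plain HTTP. Pinning the exact bytes this way
    # defeats a MitM swap even if the substitute APK carries a "valid" signature.
    return hashlib.sha256(apk_bytes).hexdigest() == expected_sha256
```

The key point is that the comparison binds the download to data the attacker cannot tamper with, rather than trusting anything inside the downloaded file.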
Indeed. That was the first paragraph of my comment.
The second responded to the comment about never using open wifi networks, where I took open to mean unencrypted. My comment is that open and encrypted networks are equivalent (or so I have been told) if the person who wants to capture your traffic knows the network password.
Which loops back to the first point, that you should only use encrypted services on networks you don't have full faith in.
True, 'tis only an example. Some rooted phones might be using third-party apps not on the Play Store that don't go through SSL. Not really sure how that works, I haven't rooted my phone.
Also, I am assuming apps can make standard HTTP requests if they want to other external services, so an app that has access to your sensitive data might transmit it to home base non-securely.
Hopefully this never actually happens, but given all the junior-made apps out there, some security holes like that probably exist.
Make no mistake, the article is dripping in FUD. It's a real flaw, and it's interesting, but I'm surprised blackhat was convinced to let it be the subject of a talk. The blog post contains no useful information and is entirely self-serving to shamelessly build hype.
Often I see people installing leaked updates for apps like Gmail which are shared through Dropbox, etc. These would normally not install because they don't have matched signatures, unless something like this vulnerability was used.
Android allows apps to install apps (how the amazon store works) and if the new app appears signed with the device vendor's key then I'm guessing it wouldn't be an "untrusted source".
Apparently the Play store has hosted malware apps in the past...
My understanding is that only apps that are present in some form in flash (even if subsequently updated) are allowed to install without the device asking the user explicitly for confirmation. Compare the Play store's process (the store app prompts before download, and can install from the web without a prompt) with that of the Amazon store on non-Amazon devices (the device prompts after download).
android.permission.INSTALL_PACKAGES controls which of the in-flash applications may install stuff: it's not available to non-built-in applications at all, and the Amazon store doesn't have (or need) it.
That would be my assumption as well, but there are third-party application stores, like Amazon or the custom application repositories of carriers/manufacturers.
Presumably it is easy for any individual app store to filter for apps with duplicate cryptographic signatures but different content. But if someone could find an app and a spoofable signer that is present on one store but not another, they could submit an altered app under a spoofed name on the store that lacks the app. Spoofing privileged signers would hopefully be difficult on the Play store, but might be possible elsewhere.
If you can get a user to download an apk from a non-market source (like your website), this should let you install as an update to an existing app. The user wouldn't even see the permissions page.
It allows for a man-in-the-middle attack when downloading new or updating existing applications, for example via a Wi-Fi honeypot or a compromised public access point.
If you can control any app you could do things like send texts to paid services, harvest bank passwords, send email death threats against the president...
Is it my imagination or is 5 months a very short disclosure window for a vulnerability that affects Androids since Donut?
I think about how manufacturers drag their feet on normal updates and can't imagine what heaven and earth movement would be required to patch this industry wide.
Then again, maybe the attack surface for this is small enough that it's manageable.
Well, he's got to create some publicity for the Blackhat presentation. Though I only bought it in March, I know my Android phone has downloaded several system updates since then. It isn't clear to me from TFA precisely how the signature validation process was broken, and I'm not familiar enough with Android architecture to be sure this is possible, but it seems like the validation for un-hacked devices should have been fixed by now.
Those are for bugs that are "under attack", where it's important that those being attacked have the information necessary. There's no evidence that this is being exploited in the wild (at this point).
"Under attack" allows people to distort reality as much as they want for their own benefit. Google sure looked like a tough cookie by disclosing bugs in 7 days and now it gets a pass.
If I'm reading this correctly, this hack would potentially allow a standard app developer to create an app that has elevated permissions and thereby be able to access and transmit any data on the phone.
The story also says that this hack could be used to send text messages and other communications. In the wrong hands this could be a devastating financial and social exploit.
It gives you the ability to modify the app without changing its cryptographic signature. If such a problem existed in the standard PC world, it would essentially give you the ability to modify a binary without changing its hash.
This is a major blow to an essential part of the Android security system: core functionality is broken, the consequences can be massive, not to mention it will never be fixed on old devices.
In the PC world, Authenticode on executables does not really offer that much security: any malware can be signed, and you normally don't verify the signature of applications.
And with Android: just because APKs could be forged, what exactly is the attack vector? If sideloading is not enabled, and the Play store uses HTTPS, how would such a forged APK with a stolen signature get placed on your device?
Could other apps modify the APK of another app? Doesn't each app have its own Linux userid, and aren't there access restrictions? How would some random game go and write into the APK of an app with high privileges in order to inject code?
If that were possible, there would already be DoS-like attacks: one game destroying the APK of a competing game, etc.
The thing that makes this dangerous is the "system" certificate for core apps. If you hijack the traffic for an update to such an app (and OEMs have a ton of such apps), you can inject code before it's installed under "system", and that's that.
No, you can't actually go poking into other apps' apks but how many people would press "update" if they see the package manager's "Installing Gallery update, no permissions required" dialog?
Not everybody can download from the Play Store because of ridiculous device and country restrictions. If you are from the US or Europe, you're mostly fine but from other countries you often have to download the APKs from non-official websites and I guess that's where there could be a problem.
It's worth noting that (for a while), the Play Store / Play Services has offered to verify even sideloaded apps. I notice this regularly when I install a new build of Firefox Aurora. It's never rejected an update, but I believe it is looking for exactly this sort of thing. So as long as you have Google Play Services in any form, you should be covered.
The more interesting question is when / how will Amazon (and other reputable app stores) deal with the issue.
Why? I installed most of my apps from the Play Store; if some apps have country restrictions, it's because they have a good reason to do so (range of service or support).
Where I live, many apps are not available and it's not clear at all why. Usually I just end up searching for the apk, install it and it works just fine.
I don't get why root access is so much worse than any other problem:
1. Apps like Skype already allow themselves access to so much sensitive and private information and things like the Motorola spyware uncovered recently (https://news.ycombinator.com/item?id=5973282 ) are so bad that I find the extra evilness possible with root access not so significant. What amount of additional harm would it really be? Intercepting network traffic? Better hidden rootkit that even hides from the few users who have jailbroken their phone?
2. The Linux kernel regularly has security bugs, and we know that Android phone manufacturers don't update devices timely, or at all. Wouldn't every Android phone have at least one exploit for the kernel itself at any given point in time? Where are the apps that just use this to gain root access? Or has Google hardened the kernel well enough that there are no known exploits by which an APK with native code doing syscalls can increase its privileges?
So, it seems, that there is some "special, less strict" way to install packages for "trusted vendors", because it is much less probable to find a major flaw in a standard jarsigner + zipalign procedure. If so, it is just another idiotic "management decision".
Yes, precisely my point. Hence the phrase "rooting without rooting".
If it's possible to use this vulnerability to arbitrarily rewrite the contents of any or all of the APKs loaded onto a given phone, then this flaw allows for the ability to engage in a key behavior that makes rooting Android popular: disabling all of the unwanted bloatware that device manufacturers usually prevent you from deleting.
But how did it get there? I'm pretty sure the device would have to be rooted, and the user would need to have run some code written by the attacker and allowed it to run with superuser privileges (or haphazardly followed some malicious tutorial online, e.g. "follow these steps to install new themes!" or "use this tool to unlock your phone!").
You install an app from the Play store that is very normal, it only requests Internet permissions.
Then after a week or so, it wakes up and downloads its payload, with which it can modify itself, transforming itself to seemingly be the Gmail app you so often use, while moving the real Gmail away. Then you continue using and updating "gmail" like you normally would, except, you know, it's now using a proxy and no SSL.