While I love the concept of Cloudflare and know and like many of the people behind it, I'm no longer promoting them. Cloudflare royally messed up and then messed up their response as well. I can forgive the former, but I have a hard time with the latter. Also, unless your site is high traffic or gets hit with DDoS attacks, I don't see any need for it. Just optimize your site and you'll be fine even without a CDN.
While he's talking about Cloudbleed, that affair does raise questions as to their reliability. A spur-of-the-moment decision like that doesn't give much confidence in their integrity. It's fully understandable of course (the site they cut off was run by Nazis), but it was still a decision to terminate service with no warning. You're probably safe from such things if you're not a Nazi, but it's not 100% safe.
Is it weird that for a certain category of tech company that I intuit would learn from their mistakes, I trust them more _because_ they've had one catastrophic failure? And just sort of assume that many of the ones who haven't are still riding on a wave of blissful ignorance? Maybe that's nonsense, but it's still a thought that occupies my mind.
I would totally agree with you had they not blown their chance to handle this responsibly. The finger pointing and downplaying of what had happened totally destroyed my will to give them the benefit of the doubt in that respect.
I think you should at least take a good look at how they handled that one catastrophic failure, and also consider whether it was the result of an unfortunate accident or a momentary lapse, rather than just the first of many failures-waiting-to-happen that actually happened.
If you have a global audience, using a CDN can make a lot of sense. Most content will be static and having that part close to the user enhances the experience a lot.
The one part I disagree with is the section on Cloudflare. I agree that it's better than nothing, but you do have to trust them. I'm not sure that's wise. Not because I expect them to deliberately misuse the access, but because it's an increase in attack surface.
It's kind of funny that we have Google pushing to force everything to be HTTPS, and in response everyone adopting a service provider that MITMs them through shared proxies written in a memory unsafe language, and doesn't require a certificate or even HTTPS from the origin site (so that third parties can still do MITM attacks), etc.
It's still an improvement over an `http` scheme site in an absolute sense -- at least it protects customers from their own ISP or unencrypted wifi -- but it also hides the insecurity from the user. Oh, well.
Maybe what we really need is strict liability for data breaches. A few companies getting successfully sued for $100 per user account after a breach would actually start to change the culture around security.
I see CF as a step backwards at this point. I'm sure they have learned their lesson but their response was absolutely terrible and that makes me wonder about their leadership. As long as that doesn't change there is a good chance that there will be a repeat at some point and so I'm not comfortable with placing that much trust in them.
Fortunately I don't need them, if you are in a position where a CDN is a must then that is a decision you're going to have to live with (or find another one than CF).
It's really not my area of expertise, but it seems to me that today you could use subresource integrity to serve everything static from a CDN without having to trust it a lot.
The real problem is the other stuff they do, like defense against DOS attacks.
It's not about subresource protection, it's about them essentially man-in-the-middling each and every connection to your website, which defeats (IMO) the whole purpose of using HTTPS in the first place. What's the point if it isn't end-to-end?
TLS between users and a proxy protects users from lots of different attacks (including the lovely WPA stuff). It's useful, and the alternative is frequently "no TLS at all", not DIY.
Yes end-to-end is better. But you're still going to have to trust infrastructure providers along the way, whether it's at the proxy level or they can just read your disks.
Also I pretty much agree with everything you said about CF ...
You are usually going to have to trust some third parties, e.g. your datacenter provider (though even then, I'm a believer in locked cages etc.). But there's a difference between trusting a named third party and trusting the public internet between CF and your hosting.
Anyone who wants to attack you will try to intercept/MITM close to you. Unless they have a lot of resources, they will have a hard time finding the datastream between CF servers and origin.
A typical scenario is surfing in public wifi (cafe, airport). Someone could identify a victim and MITM the connection. If the connection is secure to CF, they'll be mostly out of luck.
Right, my suggestion was that you would not give them a certificate for your domain example.com, you would just give them a certificate for staticresources.example.com, and you would never load anything from that subdomain without subresource integrity checks (or load anything whose contents or mere existence leaked important information, or...)
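For what it's worth, generating the integrity value for an asset is a one-liner with openssl (a sketch; app.js is a placeholder file name):

$ openssl dgst -sha384 -binary app.js | openssl base64 -A

You then put the result into integrity="sha384-..." (plus crossorigin="anonymous") on the script or link tag pointing at the static-resources subdomain, and the browser will refuse the resource if the CDN serves anything else.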
Cynical answer: The padlock, the "https://" in the address, etc. A phony sense of "security".
CF's comfort with playing man-in-the-middle should not surprise anyone familiar with the company's origins in examining the content of web traffic. Remember Project Honeypot?
Also their free cert terminates at their servers. If you're in e.g. GCE or AWS, that means part of your trip is unencrypted between their servers and yours, unless you pay for their FullSSL product.
Full SSL is free on Cloudflare. It's on all plans and we'll either work with the certificate you have on the origin (e.g. one from a paid CA or Let's Encrypt) or we'll give you one for free (https://blog.cloudflare.com/cloudflare-ca-encryption-origin/)
We looked it up a few months ago and came to the opposite conclusion. Perhaps it's bundled into a larger CloudFlare plan, but on the free tier, I don't think so. I'm having trouble finding pricing for it on their site though, so I may be incorrect! But why even offer the FlexibleSSL option in that case?
The flexible SSL option is just much easier to deploy. It tacks on TLS without any server reconfiguration. Whether or not you want that is another thing, but it certainly has a market.
Cloudbleed, security of MITM… all that stuff is NOT why I dislike CloudFlare.
It's the centralization of the internet! If everyone except Google/Netflix/Facebook/… is routed through CloudFlare, the internet would be even more centralized than ever :(
" [..] we've known it's coming for quite a while now [..]"
Who is "we" ? A lot of people are about to start calling random tech support numbers asking them why their internet is not secure. However, I think the UI changes coming in are a vastly improved solution to mitm than the cheap hack that is HSTS.
I'm also about 90% sure that in two years, most people will have at least one custom CA cert from a job, ISP, or other as-yet-undetermined need to inspect content. I also expect it to become commonplace to ignore certificate errors, as the number of new HTTPS sites also increases the amount of faulty TLS setups.
And what's really annoying is that HTTPS doesn't really affect user security much. It mainly just affects privacy. Most people are not hacked by a man in the middle. They're hacked by a person accessing a database, or running an authentic looking website, or exploiting a bug. So while a lot of headaches will be caused by adopting HTTPS everywhere, people won't necessarily be any safer.
> And what's really annoying is that HTTPS doesn't really affect user security much. It mainly just affects privacy. Most people are not hacked by a man in the middle. They're hacked by a person accessing a database, or running an authentic looking website, or exploiting a bug. So while a lot of headaches will be caused by adopting HTTPS everywhere, people won't necessarily be any safer.
Maybe I misunderstood, but it sounds like you're criticizing HTTPS for its success. People only rarely get hacked using man-in-the-middle attacks, because popular sites are already using HTTPS. If they didn't, I'm sure they would be MITM-hacked all the time.
MITM is a lot more work for a lot less payoff. Say you found a small ISP that allows DNS cache poisoning. Once you succeed you get X users over a day or two. It's still a fraction of users of a small ISP. With a botnet, they can collect hundreds of thousands to millions of users, and all they have to do is own one site. Or they can send spam all day and not have to own anything.
Of course MITM is a concern, it's just not the biggest concern, IMO.
Ah, I wasn't really thinking about DNS cache poisoning. I was thinking about someone going to a public place (a school, a cafe, an airport), setting up a deceptively named Wi-Fi hotspot on their smartphone, and intercepting all non-HTTPS traffic that's going through.
Maybe this is not a lucrative opportunity for someone who also has the skills to gather a botnet that consists of millions of computers. But this attack requires minimal skills. If Gmail didn't use HTTPS, there would be an easy-to-use Gmail hacking app. If Facebook didn't use HTTPS, there would be an easy-to-use Facebook hacking app. The risk of getting caught is small. And by going to the right place, there's a reasonable chance of targeting a particular person, which many would find appealing. I think that the only reason attacks like this aren't more common is that most of the high-value attack targets are already using HTTPS.
+1 for LetsEncrypt. The web agency I work for standardized on this for all new builds and existing clients when their existing cert expires. Zero to HTTPS in less than five minutes.
Funny timing: I was speaking earlier today with a startup CTO who was annoyed with various HTTPS problems. Put simply, businesses don't want to deal with this stuff. That means they'll pay for it, and that's an opportunity for someone.
Offer this "happy path" as a service and companies will pay for it right now.
Everyone is offering this happy path. And people do pay for it, but the market price for "a Lets Encrypt cert on my site" is $0. It's mostly just a great way to get people started on a more valuable service.
Lets Encrypt works fine for personal sites, etc. but it absolutely crushes the use case of being a large scale service needing to support thousands of individual sites, each with their own SSL cert.
Heroku, Hubspot, Shopify, etc. have all implemented LetsEncrypt, and UX-wise it's now pretty much just the default.
If that is painful, you should have automated it. I don't even know how many certs I have ATM (maybe 20 or 30 or so), because it's so easy to add new domains. I described my setup here: https://blog.bethselamin.de/posts/how-i-run-certbot.html
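For anyone who hasn't set that up yet, the core of it is small; a minimal sketch (domain names, webroot path and email address are placeholders, and the exact flags depend on your certbot version and web server):

$ certbot certonly --webroot -w /var/www/example -d example.com -d www.example.com -m admin@example.com --agree-tos -n

Add a cron entry or systemd timer running "certbot renew --quiet" and renewal is hands-off (many distro packages already ship one).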
Nice article. At my company, $9-per-year SSL through Namecheap means we go with that for the support, which has come in handy. For non-critical things and my personal stuff I am on letsencrypt. If you are looking to go beyond this article, use a free SSL scanner such as https://www.htbridge.com/ssl/ which will dig into cipher suites, which come into play with HIPAA and PCI.
Yeah, for $9 a year they are getting the "service" of having to go through the whole process manually again when their cert expires: making sure to manually create a signing request and keeping keys secure.
For $0 a year with letsencrypt you can have it autorenew and not need any support.
Also, what support would an HTTPS provider offer other than "here are the commands to generate a CSR"?
As much as I appreciate https for sensitive information, I feel that for many things, it also functions as a way to lock down the standard protocols of the web and lock users in to proprietary software and services. No longer can you spy on the requests that that piece of closed source software is making while it phones home. No longer can you rescue and reuse IoT devices whose manufacturers have shut down their servers, or refuse to provide updates or promised functionality. Now all communications are inscrutable binary streams that can't be examined or improved, especially from devices that don't allow you to upload a replacement certificate authority.
Those should never be exposed beyond your local network. Any access to them should be vetted and encrypted by something more heavyweight before it hits the WAN.
The problem is that browsers are deprecating features on non-HTTPS pages, so those devices can't use the features without HTTPS, but there is no good way for them to get HTTPS.
Well, they shouldn't be accessed via HTTP if you're not on your own network. So you can just create a proxy that has an HTTPS cert and proxies to HTTP. That also lets you add authentication, log access, etc.
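As a crude sketch of that idea, socat can terminate TLS and forward to the device's plain-HTTP port (proxy.pem/proxy.key and the 192.168.1.50 address are placeholders you'd provision yourself):

$ socat OPENSSL-LISTEN:8443,reuseaddr,fork,cert=proxy.pem,key=proxy.key,verify=0 TCP:192.168.1.50:80

In practice you'd more likely use nginx or similar as the proxy, so you can also layer on the authentication and access logging mentioned above.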
You can probably automate this, actually. There are several domains you can get a subdomain under at afraid.org (or, better yet, the router manufacturer / NAS manufacturer / camera company etc. should be providing a subdomained address per device) and then get issued a cert for. (Letsencrypt uses the public suffix list to remove rate limiting for some of the afraid.org domains mentioned before, and any company interested should get their catch-all domain added to that list.)
Then it just comes down to doing the same redirects / HSTS policy for insecure connections, but I believe they would have to be done at the root domain level, again going back to manufacturers providing them.
The letsencrypt part itself is the easy part, the hard bit is getting the domain name to register under. All the good open source router firmware today already supports Dynamic DNS.
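(If the device can't answer an HTTP challenge from the public internet, the DNS challenge works too; a sketch, with device.example.com standing in for whatever name the dynamic DNS setup hands out:

$ certbot certonly --manual --preferred-challenges dns -d device.example.com

You then publish the TXT record it asks for; for many DNS providers there are plugins that automate that step.)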
All that being said, every Internet of Things device made is a security nightmare and none of the manufacturers give a damn, so they will never see a reasonable expectation of security or do anything like this. They cannot even produce a common OS amongst themselves to keep their firmware up to date and free of security vulnerabilities.
I'm hoping in the future it'll be easier (in a secure fashion) to implement self-signed certs for those sorts of things that only have to serve pages on a local network.
Wow, that's an ingenious solution! It's such a shame that this requires jumping through so many hoops (and becoming a Registration Authority... Not sure how difficult or pricy that would be).
And I never heard of Signal K. That is an awesome project, and definitely could benefit from end-to-end encryption for transporting marine data.
Yeah, becoming a CA is kind of a PITA. I think a "Sub-domain CA" would be a kind of happy middle ground, as it wouldn't need to be trusted as much.
Also, the Plex approach requires some DNS trickery, which works well when you're online, but on a boat in the middle of an ocean, it might be harder... We need a way to secure local-only communications, with local-only names (like .local). While we're at it, mDNS support in browsers would be nice, too.
The whole idea of HTTPS is that a given domain name has a single, globally-consistent meaning, that a user can trust that a given name will be the same thing anywhere. I don't think .local should be able to override that, that would be a large-scale phishing incident waiting to happen.
Setting up an internal CA really isn't a lot of hoops - any nontrivial tech company does it, anyone running a Mumble or OpenVPN server does it. (I don't know about the requirements for having one signed by the public infrastructure though)
The actual key generation and cert signing is easy (I did it once, before LetsEncrypt). It's getting your root cert trusted by others and not getting hacked that's hard. I'd rather avoid it if I can.
If you create your own CA it's not trusted by your browser. Intermediate chains have nothing to do with it, because at the end of the chain there is no trust.
The browsers have a list of CAs they trust. Let's call them the root CAs. You normally won't get an HTTPS cert signed directly by a root CA; it's normally done by an intermediate CA. Now, your browser doesn't know that intermediate, but it's been signed by a root CA, so your browser eventually trusts it. You get your server to also serve the intermediate CA certs, to save a round trip for requesting them. In the end, because of the chain of trust, your browser trusts your cert because it's eventually signed by a trusted root CA.
Now if you create your own CA, it will need to be installed in users' browsers to be trusted. Serving intermediate certificates doesn't help, because the browser can't establish a link of trust back to a root it knows.
> Now if you create your own CA, it will need to be installed in users' browsers to be trusted. Serving intermediate certificates doesn't help, because the browser can't establish a link of trust back to a root it knows.
If you create your own root CA, it will need to be installed in the user's browsers to be trusted. If you create your own intermediate CA, then you only need your CA's certificate to be signed by an existing root CA.
I've never found a good, single document that covers self-signed certificates well. Googling for guidance results in a lot of different ways to do it, and if you don't find one that does exactly what you want, trying to combine that one with a different one that does the things missing from the first might not work because they are using different approaches.
I did manage to glean enough to make a pair of simple scripts that seemed to work for my needs, but have no idea if they are actually right, or if putting together things from different guides messed things up.
Here's the script I use for my root certificate (makeroot.sh):
#!/bin/bash
NAME=${1:?Must specify name for root}
CN=${CN:-My Little Certificate Authority}
O=${O:-My Home Network}
C=${C:-US}
DAYS=${DAYS:-365}
PW=${PW:--aes256}
SUBJ="/CN=$CN/O=$O/C=$C"
openssl genrsa $PW -out $NAME.key 2048
openssl req -x509 -new -subj "$SUBJ" -nodes -key $NAME.key -sha256 -days $DAYS -out $NAME.pem
So suppose I'm setting up a private certificate authority for my iot devices. I would do:
$ ./makeroot.sh iot
That will make iot.key and iot.pem.
To make a certificate signed with the iot certificate I use this script (makecert.sh):
#!/bin/bash
NAME=${1:?Must specify name for cert}
ROOT=${2:?Must specify name for root}
CA="-CA $ROOT.pem -CAkey $ROOT.key -CAcreateserial"
O=${O:-My Home Network}
C=${C:-US}
DAYS=${DAYS:-365}
SUBJ="/CN=$NAME/O=$O/C=$C"
openssl genrsa -out $NAME.key 2048
openssl req -new -key $NAME.key -subj "$SUBJ" -out $NAME.csr
openssl x509 -req -in $NAME.csr $CA -out $NAME.crt -days $DAYS -sha256
rm $NAME.csr
E.g., to make a key for an iot doorbell:
$ ./makecert.sh doorbell iot
That makes doorbell.crt and doorbell.key.
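One sanity check worth doing at this point (plain openssl, nothing specific to these scripts) is to confirm the new cert actually chains back to the root; it should print "doorbell.crt: OK":

$ openssl verify -CAfile iot.pem doorbell.crt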
That worked well until the need arose to make an SNI certificate. I had expected that it would just be a matter of slightly tweaking makecert.sh, add a few more arguments.
But all I could find in the way of examples took a different approach, where most of the information is passed in a config file, and some of the guides I read indicated that you cannot pass the names on the command line to openssl when doing SNI. Anyway, this is the script I came up with (makesni.sh):
#!/bin/bash
function makeconf {
CN=$1
DIR='$dir'   # keep the literal string "$dir": openssl expands it from the dir= setting in the generated config
shift
cat > tmp.conf <<HERE
[ req ]
distinguished_name = dn
req_extensions = req_ext
unique_subject = no
prompt = no
[ ca ]
default_ca = tzs_ca
[ tzs_ca ]
dir = ca-files
private_key = $DIR/iot.key
certificate = $DIR/iot.pem
new_certs_dir = $DIR
database = $DIR/index.txt # database index file.
unique_subject = no
default_md = sha256 # use public key default MD
serial = $DIR/iot.srl # The current serial number
email_in_dn = tzs@mouse-potato.com
default_days = 365
[ dn ]
CN = $CN
O = My Home Network
C = US
[ policy_anything ]
countryName = optional
stateOrProvinceName = optional
localityName = optional
organizationName = optional
organizationalUnitName = optional
commonName = supplied
emailAddress = optional
[req_ext]
HERE
if [ -n "$1" ]
then
cat >> tmp.conf <<HERE
subjectAltName = @alts
[ alts ]
DNS.1 = $1
HERE
shift
fi
POS=2
while [ -n "$1" ]
do
echo "DNS.$POS = $1" >> tmp.conf
shift
POS=$((POS + 1))
done
}
NAME=${1:?Must specify name for cert}
makeconf "$@"   # pass the host names through as separate arguments
openssl genrsa -out out/$NAME.key 2048
openssl req -new -key out/$NAME.key -config tmp.conf -out out/$NAME.csr
openssl ca -policy policy_anything -out out/$NAME.crt -config tmp.conf -extensions req_ext -infiles out/$NAME.csr
rm out/$NAME.csr tmp.conf
This assumes a directory, ca-files, that contains iot.key and iot.pem, and also an empty file named index.txt. It also assumes an out directory exists.
To make an SNI certificate for a device that has three hosts, named fridge, freezer, and icemaker:
$ ./makesni.sh fridge freezer icemaker
That will make fridge.crt and fridge.key in the out directory; the cert will have fridge as the CN and freezer and icemaker as subject alternative names.
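To double-check that the alternative names actually made it into the cert:

$ openssl x509 -in out/fridge.crt -noout -text | grep -A1 'Subject Alternative Name'

(One caveat, not specific to this script: modern browsers ignore the CN entirely, so if the fridge name itself needs to be reachable over HTTPS, it should probably be listed as a subject alternative name as well.)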
OK, so now I can put doorbell.crt and doorbell.key on the doorbell device, and fridge.crt and fridge.key on the device that has the fridge, freezer, and icemaker sites, and all is well, right?
Well...I also need to give my browsers iot.pem so they will recognize those other certificates. I'd also like curl and Perl and Python to recognize them, so that I can write scripts that do fancy iot things.
I didn't find any good guide to installing the CA certificate. From a bunch of Googling I came up with:
• On Debian Linux, copy iot.pem to /usr/local/share/ca-certificates, and change the extension to .crt.
Run update-ca-certificates.
That should make it available to Perl, Python, and curl.
For Chrome and Firefox, find their certificate management dialogs and use them to add the certificate.
• On OS X, double-click iot.pem. That should open Keychain Access. Let it import the certificate. Set it to be trusted for SSL. That will make it work in Chrome and Safari.
For Firefox, same as Linux.
For Perl, do one of:
1. Set environment variable PERL_LWP_SSL_CA_PATH or HTTPS_CA_DIR to point to a directory containing iot.pem, or
2. Set the environment variable PERL_LWP_SSL_CA_FILE or HTTPS_CA_FILE to point to iot.pem.
The first requires that the directory contain symbolic links of the form hash.N, where hash is a hash of the certificate and N is a sequence number. You need two of these, because there apparently is an old hash and a new hash. Run 'openssl x509 -subject_hash -noout -in iot.pem' to get the new hash, and 'openssl x509 -subject_hash_old -noout -in iot.pem' to get the old hash. Or just go to the cert directory and run 'c_rehash .' and 'c_rehash -old .' and those will make all the symlinks for you.
For Python3, similar to Perl but with SSL_CERT_DIR and SSL_CERT_FILE as the environment variable. Same hash symlinks as Perl (although I think you only need the new hash).
For curl, "--cacert iot.pem" or "--capath <dir>" where <dir> is a directory with the certificate. Same symlink considerations as Python3. Or set CURL_CA_BUNDLE env variable to point to a bundle that contains iot.pem. A bundle is just a bunch of certificate fills concatenated.
I wrote a similar amateur script years ago which has required updates from time to time to reflect new developments in the ongoing SSL saga.
This was started before anyone cared about SNI.
If the use case for the CA is a home network, I prefer using a local SSL-enabled proxy for connecting to both remote and local SSL-enabled endpoints. This lets me specify a short list of acceptable ciphers. It also permits me to use legacy, non SNI-enabled SSL/TLS clients. The remote and local hosts are "backends" that go in the proxy config file. I control DNS via local root or use HOSTS to redirect to the proxy.
If I was running multiple local SSL enabled devices/servers from the same local IP on a home network, then I believe I could just put these in the config file as backends listening on high, unprivileged ports. I do not believe I would need SNI because I could filter requests based on domain name and/or the filepath in the url.
The ultimate solution IMO would be an sslwrap-type utility that has been revised to use more modern encryption, i.e. NaCl. Any application could then use NaCl instead of SSL/TLS, without having to modify the application. The NaCl author has mentioned several times that he has written such a utility and even said he will release it at some point.
IMO, that would be a big step in freeing us from the ills of SSL/TLS and the 3rd party CA system. Users would get easy-to-use, high quality, high speed cryptography without having to learn all of the SSL/TLS complexity, not to mention having to keep up with incessant bugs and security updates.
If you shed all legacy compatibility and narrow your configuration down to modern AEAD ciphersuites and an ECDH handshake, you can get TLS to a point asymptotically as secure as the best secure transport you'd achieve with NaCL and no other crypto primitives. Which leaves you to wonder why you'd bother doing the NaCL thing at all --- which is probably why not many people use custom secure transports.
Most (not all) of what's gone wrong with TLS has gone wrong in things that had already been outmoded for a very long time, and long since replaced with better constructions in later versions of the protocol.
One of the things that perplexes me as a naive user about TLS is the ridiculous number of cipher choices. How do you explain it? Some people still need them? Impossible/impractical to cut TLS down to size? People who comment intelligently about crypto seem to agree that complexity is the enemy of security, but does TLS even try to reduce complexity?
There is so much accumulated cruft to SSL/TLS and the implemented x509 certificate scheme that to me, as a noob who knows nothing about crypto i.e. the average user, the easier path is to scrap all that ("shed all legacy" junk) and focus on learning a few things that are both flexible and known to be useful. I believe NaCl fits this role.
I have some basic UNIX-like utilities built with NaCl, one for each function (crypto_box, crypto_secretbox, etc.). Minimal non-djb code. In the interests of experimentation and learning, I prefer using small, separate utilities versus applications that do multiple jobs.
I would be interested in an "expert" opinion on my experiments with these utilities but crypto is such a devisive topic. Though they may be initiated as sensible, honest questions, online discussions quickly turn toward dogma supporting status quo, mindless memes, subtle insults and are sometimes derailed into the realm of absurdity. I would post an example of the usage for comment, but I am not interested in being chastised by the HN peanut gallery. It is just intellectual experimentation, nothing more.
IMO, as a general principle not limited to crypto, there is nothing wrong with "custom" anything if it has passed the same QA tests as the "mass-produced" version. Sometimes in fact custom is higher quality than mass-produced. Popularity does not always signify higher quality. Perhaps nowhere is this more evident than in the world of software.
IMO, a "naclwrap" program would be very useful, even in a "TLS world". Like stunnel but for nacl. Whether "everyone" would use it is an interesting question but ultimately not something I care about.
There is no question that TLS is loaded up with cruft and dangerous legacy goo.
But, again, you can remove 80-90% of it if you don't care about compat. People run into a cognitive block when they think about TLS because they assume TLS means "browser compatible". But no browser speaks a NaCL transport today, so that's out the window.
Without the requirement to support browsers, you can:
* Allow only an ECDH handshake.
* Allow only the Chapoly ciphersuite, in TLS 1.2's AEAD format.
* Eliminate CAs and do an SSH-style key-continuity scheme.
This TLS "subprotocol" already exists and is already supported by most of the TLS libraries, all of which have been audited. It's supported by middleboxes and monitoring tools so it can be deployed operationally. Every mainstream programming language has bindings to it. Meanwhile, NaCL doesn't actually provide a transport protocol or even the security semantics of a secure transport; that's work you'll have to do de novo, and you will generate bugs doing it.
So what's the advantage to scrapping TLS and redoing it with NaCL?
I agree. Especially with the part about CAs and a more SSH-like approach.
But unlike you I have little interest in "browsers". This is why I would want a "naclwrap" utility but you might not see any point. CurveCP is the experimental transport protocol. I do not need it in a browser because I am not interested in browsers. I like experimenting outside of browsers and the "web" so I have no reason to resist CurveCP.
What is the advantage of scrapping TLS? If you mean for everyone, maybe there is none. Why not allow both TLS and NaCl to coexist? Why does TLS have to be "redone" with NaCl?
For me personally, the advantage of "scrapping TLS" is that I get to ignore all the cruft and complexity that I have to sift through to get to the proper "subprotocol" within TLS. Too much work.
I am not the one who will write "naclwrap" if it ever is released. The person who wrote it does not introduce bugs and security issues. He is not like the people who work on TLS and most developers in general. He is careful.
In any case, I am not tasked with persuading anyone else to scrap TLS. I am simply a user who 1. likes NaCl, 2. prefers the idea of per packet encryption to the notion of encrypted "tunnels" and 3. is apt to complain about TLS only because like other web users I am forced to use it whether I want to or not.
For someone focused on influencing developers who write programs for other people, shaping web standards or at least very interested in where they might be heading, your comments are poignant. But I am just a user. I write trivial programs for myself. I am not futilely trying to tell developers what to do, shape standards nor am I very interested in where things are heading, except to the extent I can minimize the computer usage-related annoyances I must endure.
Compared to NaCl, for someone who wants to learn how cryptography libraries work, "TLS" is too much of a moving target. It is unfinished software that may never be finished. Too many versions of too many libraries supporting too many ciphers by too many developers. Too many knobs and switches.
As for OpenSSL, not everyone is on 1.1 yet. There is no ChaCha20-Poly1305 in 1.0.x. So while some bits of NaCl have been adopted into the TLS suite, it is only a subset of web servers that are supporting the so-called "subprotocol". And still no Ed25519, even though it has been used in OpenSSH for some time now.
All these factors make TLS undesirable for me. Too much complexity compared to NaCl, IMO. For others, TLS may be the right choice.
> I do not believe I would need SNI because I could filter requests based on domain name and/or the filepath in the url.
The point of SNI is to send the domain name as part of the TLS handshake, because the URL is only sent after the secure connection is established (after the cert is sent).
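(If you want to see SNI in action, openssl's test client lets you set the name sent in the handshake explicitly; example.com is just a placeholder:

$ openssl s_client -connect example.com:443 -servername example.com </dev/null 2>/dev/null | openssl x509 -noout -subject

The server picks which certificate to present based on that -servername value, before any HTTP request exists.)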
The point of SNI, i.e. the reason for its existence, is to allow more than one SSL-enabled website to use the same shared IP address. Prior to SNI each SSL-enabled website required its own dedicated IP address. On a home network, is shortage of RFC1918 IP addresses really a problem?
The appropriate way for users to defend themselves is to simply install https-everywhere and check "Block all unencrypted requests".
This avoids sslstrip, requires no redirect magic and no HSTS.
Although somebody should really patch it to just display big fat warnings because it is somewhat annoying to turn it on and off all the time.
Yep: HSTS only applies to your site, while upgrade-insecure-requests applies to every resource your site loads, even on third-party domains. Meanwhile, upgrade-insecure-requests does not replace HSTS, because it doesn't help secure links from offsite or direct entry, which HSTS solves, especially with preloading. Monitoring CSP reports and actually fixing the bugs they surface would help cover browsers that don't support upgrade-insecure-requests.
Thanks for the clarification. I did not realize that `upgrade-insecure-requests` applies cross origin. If you do not load any insecure content is setting HSTS and `block-all-mixed-content` the best strategy?
As pointed out by Microsoft earlier today, MDN is one of the best resources on this sort of thing. Here they write:
> The upgrade-insecure-requests directive is evaluated before block-all-mixed-content and if the former is set, the latter is effectively a no-op. It is recommended to set one directive or the other – not both.
Yes they are similar, and you have to watch which one you set -- but you can also achieve a similar effect at a more granular level using CSP as also indicated in MDN. These rules are equivalent to saying `default-src https:` in the CSP rule.
In fact, the best option is individual CSP directives which can get more granular than the `https:` scheme alone, because you can then specify which trusted third-party domains (if any) are allowed to load resources on your pages and conditions (like nonces) for running script tags, data URIs, etc. After all, your secure third-party resources could still have servers compromised and they might then send malicious assets over SSL to your unsuspecting users' browsers.
CSP, if trusted enough to set it to block instead of just report (though you can run both modes at the same time), is one of the best defence-in-depth ways to protect your page from attack, right up there with HttpOnly and Secure flags on cookies. https://developer.mozilla.org/en-US/docs/Web/HTTP/Cookies#Se...
If you're looking for checklists, have a look at https://wiki.mozilla.org/Security/Guidelines/Web_Security and https://blog.appcanary.com/2017/http-security-headers.html though remember no checklist is going to deliver bulletproof security on its own (you'll have to inspect your app and environment for flaws, implement monitoring tools, etc.), and blindly implementing security headers or features without knowing what they do can obviously break your app. (Again, monitoring your app can help.)
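One low-effort complement to those checklists is just watching what your pages actually send back (a sketch; example.com is a placeholder):

$ curl -sI https://example.com/ | grep -iE 'strict-transport-security|content-security-policy'

That, plus an external scanner like the one mentioned earlier in the thread, catches a lot of "we thought that header was set" surprises.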