Get a decent high-level architecture and a good, consistent database design, and you don't write anywhere near as much application code. Start hacking about, using one field for two purposes or adding "special cases", and everything starts to get messy. Each of these one-off cases means adding more code at the application level, increasing overall complexity. Repeat enough times and you will have coded a big ball of mud.
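A minimal sketch of the kind of mess this describes (the "contact" column and both functions are hypothetical, made up for illustration): one column overloaded to hold either an email address or a phone number, so the type check leaks into every piece of application code that touches it.

```python
# Hypothetical schema smell: a single "contact" field holds either an
# email address or a phone number. Every consumer now needs a special case.

def normalize_contact(contact: str) -> str:
    # The "which kind is it?" branch appears here...
    if "@" in contact:
        return contact.strip().lower()                      # treat as email
    return "".join(c for c in contact if c.isdigit())       # treat as phone

def send_notification(contact: str, message: str) -> str:
    # ...and again here, and in every other function that reads the field.
    if "@" in contact:
        return f"emailing {contact}: {message}"
    return f"texting {contact}: {message}"
```

With separate email and phone columns, each of these branches disappears and the distinction lives in one place (the schema) instead of being re-derived all over the application.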
Instead, people argue about the number of characters per line, prefixing variable names with something, and other such trivialities. (These things do help readability, but overall I think they are quite minor compared to the database design / overall architecture - assuming you are writing a database-backed application.)
>Start hacking about using one field for two purposes or having "special cases" and everything starts to get messy.
That advice is not as easy to put into practice as you make it sound. For instance, using one field for two purposes is often done to avoid special cases.
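A small sketch of that counterpoint (the `Record` class and field names are hypothetical): a nullable `deleted_at` timestamp serving two purposes - soft-delete flag *and* deletion time - actually removes a special case, because a separate boolean flag could fall out of sync with the timestamp.

```python
# Hypothetical counterpoint: one field, two purposes, fewer special cases.
# deleted_at is None => live; set => deleted (and records when).
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class Record:
    name: str
    deleted_at: Optional[datetime] = None

    @property
    def is_deleted(self) -> bool:
        # Derived, not stored: no second flag to keep consistent.
        return self.deleted_at is not None

r = Record("example")
assert not r.is_deleted
r.deleted_at = datetime.now(timezone.utc)
assert r.is_deleted
```

With a separate `is_deleted` boolean you would have to handle the inconsistent states ("flag set but no timestamp", and vice versa) - exactly the kind of special case the overloaded field avoids.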
I think the eternal problem of software development is that both being more abstract and being more specific come with a cost, and the middle ground keeps shifting as requirements change.
There's relevance in talking about both micro- and macroscopic guidelines. Both are important.
Very rarely can someone "read" an entire code base "with one look" and deduce its issues. You do, at some point, have to get into the weeds. Managing that experience is what articles like these are about.
More people talk about code without knowing actual code than people who know what they mean when they say "source code". Software is not only source code; it's many things: business rules, GUI, database, network, programming language, etc. It seems logical that most of the talk is macroscopic compared to the coder's own microscopic view. Hopefully, Hacker News is here to help ;)
I always read both sides of the story: the top part (database design, overall architecture), which is basically the abstract principle, and the bottom part, the precise drawing that actually executes as a program. Reviewing truly unknown code in an unknown function in an unknown file is very rare. You always come from the design/architecture point of view and dive into more details. The translation of the big picture into code has its own practice, just like the high-level design does.
"Actually reviewing only unknown code from in an unknown file in an unknown function is very rare."
I am not sure I am understanding you correctly, but every programming job where you weren't the original author involves looking at unknown code and trying to work out what the hell it is doing.