I agree with the various other commenters on the blog post - treating the wrapper function as an unacceptable amount of indirection is insane premature optimisation at the expense of clarity.
Not to mention the wrapper can be inlined in C99, or would be automatically inlined with -O3 in gcc, not that it matters one iota in this case.
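For illustration, a minimal sketch of what such a wrapper might look like (the name `my_tmpfile` is hypothetical, just following the article's discussion); with `static inline`, any modern compiler collapses it into a direct call:

```c
#include <stdio.h>

/* Hypothetical wrapper around tmpfile(). Once the compiler inlines it,
 * the indirection costs nothing; the real cost is creating the file. */
static inline FILE *my_tmpfile(void)
{
    return tmpfile();
}
```

The wrapper exists only to give you a single place to substitute a platform-specific implementation later.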
The other glaring bug would seem to be Windows' implementation for POSIX tmpfile() requiring admin access. But I guess that's not too surprising.
Indeed. What's a wrapper function's overhead compared to opening a file, which subsequently will be written to?
Frankly, I wish this sort of blog post wouldn't be upvoted here. It's one thing when someone asks a basic question on StackOverflow. Even experienced programmers can learn with the answers. But when someone writes an "authoritative" blog post that misses the basics, it's more noise than signal.
"What to Submit On-Topic: Anything that good hackers would find interesting." [1]
I agree about this being an insane premature optimization, but the C:\ triggering line continuation inside a comment? That's fucking fascinating.
I don't know - that's actually very surprising to me considering each user has their own temporary folder.
I can't imagine what reason could exist for this functionality except possibly to provide backwards compatibility with some popular apps that have been poorly written.
that's actually very surprising to me considering each user has their own temporary folder
To extrapolate, I found it unsurprising not because it's either Windows or POSIX individually, but because it's both together.
It's been many years since I was a Windows developer, but I remember back then that the opinion of the POSIX implementation (XP era, not NT era) was "there be dragons" and to avoid it.
Because the proper Windows way is to specify where to create the temporary file; for example the developer can choose between the user-specific 'temp' folder, or the 'temporary internet files' folder, or the folder returned by GetTempPath(). Some old programs rely on the temp folder being shared between applications; others don't. And then you run into access issues - sometimes on Vista you don't have write access to whatever GetTempPath() returns and you'll have to find something else, like the user dir returned by SHGetSpecialFolderLocation or any of its predecessors or new versions, and tack "\Temp" onto that yourself.
So, to get back to your question, the POSIX compatibility layer in Windows is very old, and back then all the above didn't exist and they had to make a choice on how to implement the function. Back then (this decision may have been made as early as the late 1980s), using C:\ wasn't all that stupid an idea; either way, the point was that the file would be deleted when the file handle was closed anyway, so it's not like C:\ would be polluted with all sorts of files.
Of course those assumptions changed in the past 20 years, but you can't just change the behavior of your core OS functions because you don't know how it will break old applications. If you're interested in Windows development, The Old New Thing blog goes into detail on issues like this.
I've actually encountered this, but I can't say it stumped me at all; my syntax highlighting made it immediately apparent what would have happened had I tried to build.
A coworker of mine also ran into a similar issue (not realizing something was commented out, hence the odd behaviour) with an #if 0 ... #endif block. Vim highlights that correctly too, but his emacs setup did not (no idea how stock it was).
"#if 0" is not how you declare comments in C. If you want your dead code to show up as a comment, try M-x comment-region. It adds /* */ around each line, being careful to ensure the ability to reverse the operation. I know vim doesn't have good commenting functionality in the core, so this is probably a common vim user idiom. But it's still not correct.
"That has a performance impact on all platforms..."
I mean, tmpfile() is about to make AT LEAST one system call, access the disk, and do a bunch of stuff he has no control over whatsoever, yet he's worried about one extra CALL instruction. Talk about premature optimization (and incompetence, if you will).
The comment from SteveB in the linked-to thread was interesting:
Most editors i have used highlight the next line in green as a comment,
however i have come across one that took me over a year to solve, that is
remarkably similar.
//why does this next line not compile??/
x = 0;
because the ??/ operator is a trigraph meaning \
That has never been highlighted correctly in any editor I have used.
You know how the usual wisdom is that you can never remove anything from a public interface? I think compiler writers should test that by removing trigraphs. Silently at first, to avoid panic, then announce that a mistake has been fixed where some groups of three characters were lexed improperly into a completely unrelated single character. Never mention that the mistake was in the standard.
I seriously doubt anyone would care if you did it that way.
(Modern versions of gcc (4.5.2 at least) ignore trigraphs by default but issue warnings saying they ignored one and giving the command line option to honor them. But gcc has long honored the dumber parts of the standard more in the breach than the observance.)
Are EBCDIC machines such a common development platform that trigraphs should be default on instead of default off with the option to enable?
Though I suspect trigraphs being enabled is a very rare source of bugs to begin with, it seems likely that it is still more common than the scenario of active development on platforms that don't support the normal characters (eg the actual development is done on the platform rather than cross-compiled).
Compiling with gcc's -Wall option, which implies -Wcomment, would spit out "warning: multi-line comment". Unless you have good reason to do otherwise, always use -Wall. I recommend -Wextra and -Werror as well.
Also -ansi and -pedantic if you're aiming at the C89 standard for some reason, it's great to look in the gcc manpage and see all the little flags that are around. I don't often like -Werror since it can interrupt my flow. If I'm testing runtime correctness, and something like "suggest parentheses around assignment used as truth value" pops up when that was intentional, I don't want to make the context switch to go back and fix it and recompile before context switching again back to test whatever runtime correctness I was aiming for. I'll fix it later. Philosophically, -Werror assumes I'm the type of person who ignores warnings completely for the duration of a project, and will stop you from ignoring them in the first place even if there may be reasons to; I don't think most people are that way.
I wish they were standard GCC behaviour. Sometimes when I rapidly make a toy program, I forget something important, like including stdio.h, and without warnings I won't know until the program crashes.
"That has a performance impact on all platforms: the direct call to tmpfile() is now an indirect, which defeats optimization, and well, everything."
This couldn't be any more wrong. First of all, even if there were a performance impact, it would be meaningless compared to the time it takes to perform the system calls required for tmpfile().
Second, the compiler will inline the function call if it can. You can use a regular "inline" function and if you're not convinced it's good enough, then use __attribute__((always_inline)) or equivalent. Even if the damn function is not an inline function, it will get inlined if it's in the same translation unit as where it is called (look at the asm output of gcc -O3). In the (not so far) future we'll be able to enjoy link time optimizations and won't have to care about translation units w.r.t. inlining optimizations.
Even without inlining, it will be subject to tail call elimination with any sane compiler: my_tmpfile will just branch to _imp_tmpfile instead of returning.
This is a good article because it shows the author's thought process. That means we can see what he did wrong and learn to think differently.
The first thing that sticks out as problematic is the assertion that a level of indirection around tmpfile is going to be slow. There are two ways we can determine if this is true; Try It And See, or read the output of the compiler to see what code it generates. Let's try the second one.
We'll make a file called slow.c with the contents:
int f(void){
    return 42;
}
int g(void){
    return f();
}
Then we can see what code gcc generates with the -S option:
$ gcc -S slow.c
and looking at slow.s. Here's the important part of g (with some annotations removed):
g:
.LFB1:
    pushq   %rbp
    movq    %rsp, %rbp
    call    f
    popq    %rbp
    ret
So, we're calling f here. But you don't compile your production code with -O0, so let's try -O3. Here's f:
f:
.LFB0:
    movl    $42, %eax
    ret
And here's g:
g:
.LFB1:
    movl    $42, %eax
    ret
Notice how we're not calling f anymore; we just get f's body right in our function. So there's no speed cost to writing a function that only calls another function. This is the simplest possible case for an optimizer to optimize, and of course it does.
If you want to be explicit, you can always inline g. Then, at g's call site, we see:
main:
.LFB2:
    pushq   %rbp
    movq    %rsp, %rbp
    call    f
    movl    $0, %eax
    popq    %rbp
    ret
Which is almost the same thing as what -O3 produced, except g's body is copied into its caller instead of f's body being copied into g. (And this is at -O0.)
Another case where the thought process goes awry is the guess that there is a preprocessor bug. It's easy to verify: run gcc with -E and see what you get. Here's our test case:
#define foo f
int f(void){
    return 42;
}
int g(void){
    return foo();
}
Preprocess:
$ gcc -E buggy.c
int f(void){
    return 42;
}
int g(void){
    return f();
}
And look. No bug!
So let's move on to the final problem, the weird comment. We'll again see what our preprocessor output looks like on a minimal test case:
#define FILE int
#ifdef _WIN32
# define tmpfile w32_tmpfile
#endif
FILE *tmpfile(void) { return (FILE *) 0; }
FILE *w32_tmpfile(void) { return (FILE *) 0; }
void test(void){
    FILE *g;
    if (code == 200) { // Downloading whole file
        /* Write new file (plus allow reading once we finish) */
        // FIXME Win32 native version fails here because Microsoft's version of tmpfile() creates the file in C:\
        g = fname ? fopen(fname, "w+") : tmpfile();
    }
}
We end up with:
int *tmpfile(void) { return (int *) 0; }
int *w32_tmpfile(void) { return (int *) 0; }
void test(void){
    int *g;
    if (code == 200) {
    }
}
Which looks like our code has suspiciously gone missing. Let's try removing the comments:
int *tmpfile(void) { return (int *) 0; }
int *w32_tmpfile(void) { return (int *) 0; }
void test(void){
    int *g;
    if (code == 200) {
        g = fname ? fopen(fname, "w+") : tmpfile();
    }
}
And hey, our code is back. Must be something to do with the comments.
Anyway, the C toolchain is very simple, so let it show you its work when you suspect the compiler isn't doing what you want. Read and understand its output; don't rely on a tool that does something you couldn't do yourself.
Something I mentioned in that code completion article the other day was "don't rely on your editor to be your brain". This got a lot of downvotes, but this is exactly the problem the author has here. His editor didn't interpret the source code in the same way that the compiler did, and that's what matters.
Learn the language you are using so that you can spot bugs! Don't think something isn't a comment just because your editor doesn't show it in italic orange! Suspect an editor bug, not a compiler bug.
And finally, if you had used correct grammar in your comments, this wouldn't have happened:
/* We use our own tmpfile implementation here, since Windows' writes directly to C:/. */
Alternatively, use a debugger. This issue would have become obvious in seconds if he had stepped through it with a debugger. Of course, if you only have gdb as a debugger, that's easier said than done - I too would rather stab myself in the eye with a hot poker than use gdb again. Proper tools make issues like this much less painful to solve.
What was so painful about it that you had to highlight gdb specifically? I'm just starting out with it; so far it seems a lot more flexible than its visual counterparts.
Agreed. I've spent a lot of time in gdb and have never had any problems with it.
I think he probably wants some GUI where the watches automatically update, or where -O3'd code magically executes as though the compiler didn't do any optimizations on it.
The difficulty of seeing the surrounding code you're stepping through. Its cumbersome way of working (having to type commands to inspect variables rather than just pointing at them). Its steep learning curve.
And all of that for no benefit - I spent hours learning vim too, but at least there I got flexibility and a supremely powerful editor in return; there is nothing gdb can do that the Visual Studio debugger can't. And there you just set a breakpoint and press run, while with gdb it takes you a day just to get integration with your editor set up the first time. I'm not sure what you mean by 'flexible' - a debugger is by its very nature interactive, so the typical advantages of text-based tools don't apply. But of course YMMV etc.
It depends on your mental model and how comfortable you are with the commands. It's similar to the disconnect between navigating a file system on the command line and through a GUI.
gdb can "list" and "where", which gives you the lexical and dynamic contexts respectively.
Here come the downvotes... but if you don't know the code running in the debugger well enough to remember the details, well, you've found your bug. Read and understand your code very carefully before you waste your time single-stepping through it.
(This is the biggest problem I see with developers that use IDEs -- they forget that they have a brain, and that your brain is always going to be faster than clicking shit in Eclipse.)
Oh great, another internet toughman boasting about how all those IDE-using pussies are just stupid and how Real Programming should be left to Real Programmers (cue story of Mel here somewhere). *rollseyes*
When you have to type commands to see what you would otherwise see immediately, how is that not a usability deficiency? GDB fans just have Stockholm syndrome.
And if you think there is anyone on this world who has a complete mental model of each line of code in a 10k-line project, let alone a 100k or 1m one, you're insane. I'm not talking about debugging a 15-line function. But even then, single-stepping through a function of that size doesn't even cost extra time when you use proper tools (and not that POS gdb), so 'waste your time' doesn't even apply there.
Debuggers have a bad rep with some crowds because of the damage gdb did - people think that that's how debugger work and can only work. Navel-gazing at its worst.
Oh great, another internet toughman boasting about how all those IDE-using pussies are just stupid and how Real Programming should be left to Real Programmers (cue story of Mel here somewhere). *rollseyes*
If you want to talk like this, go back to Reddit. I'm not an Internet toughman, I'm describing the motivations behind the interface of gdb. Turns out that just because someone disagrees with you doesn't mean they're dumber than you.
In this case, I think it's worth listening to me because I've used pretty much every development tool imaginable to write hundreds of thousands of lines of code. I may know what I'm talking about! Shocking!
When you have to type commands to see what you would otherwise see immediately, how is that not a usability deficiency? GDB fans just have Stockholm syndrome.
Information overload is just as bad as too little information. GDB lets you see what you need to see when you need to see it. It doesn't guess what you want it to do; you tell it. You're the brain and it's the computer.
And, it hooks into your text editor if you want that little => next to the line of code GDB is on. I've personally never found this to be useful. (As I mention in a comment above, it's rare for modern compilers to produce code that runs top-to-bottom exactly as you've typed it anyway. The source code is an explanation of the sort of steps you want the computer to take, but it's not an exact model of what the computer's going to do. Saying "we're now running this line of code" can be very misleading.)
And if you think there is anyone on this world who has a complete mental model of each line of code in a 10k-line project, let alone a 100k or 1m one, you're insane. I'm not talking about debugging a 15-line function.
This is just plain wrong. The debugger is the wrong tool for analyzing a bug that persists over 10k lines of code. The key to debugging is to reduce the scope of the problem, and the way to do that is by not tightly coupling large chunks of code. If you have a bug that requires a mental model of 10k-1m lines of code, unit tests and refactoring are the tools you need to be using.
But, with that in mind, 10k lines is not that much to keep in one's memory. I maintain cperl-mode.el, which is 10k lines of elisp that I did not write. I haven't touched it for months, but I still know where the important parts are and how they interact. Reading is one of the most important skills a programmer can have. If you can't remember what you've read, you need to slow down. Lack of understanding causes bugs.
single-stepping through a function of that size doesn't even cost extra time when you use proper tools (and not that POS gdb), so 'waste your time' doesn't even apply there
Yes it does. Single-stepping requires a round-trip between the computer and your brain for every line of code. When your IDE jumps you to a new file because you stepped into a function, your brain has to context-switch and read the surrounding code. All in all, it's a slow and complicated operation compared to the normal process of running your unit tests and scanning the logs if there is a failure. That case is fast because most parts are done by the computer; it just gives you a few pieces of information that you can use to analyze, tweak, and re-run. And you can drink some coffee while the computer does most of the work!
A debugger is for a very special case where unit tests cannot find bugs, or the effort to write these test would be too high. A good example is examining a core dump from a production process. You load it up in your debugger and can inspect the program state in great detail. You see that the current call frame is 0x626f6f66. That looks like the text "foob", which seems like it might have come from some text your program was working with, "the foobar chronicles". But you're using bounds-checking for your string operations, so why the bug? Use the debugger to inspect some local in-memory state, like the "len" parameter to strlcpy. It's 2938473873! Must be an integer overflow somewhere.
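As a sanity check of the hex-to-text trick above: on a little-endian machine, the bytes of "foob" read back as exactly the word 0x626f6f66 seen in the corrupted frame. A small sketch (the function name is mine, just for illustration):

```c
#include <stdint.h>
#include <string.h>

/* Reinterpret the first four bytes of a string as a 32-bit word.
 * With 'f'=0x66, 'o'=0x6f, 'b'=0x62 and little-endian byte order,
 * "foob" comes out as 0x626f6f66 - the value from the corrupted frame. */
static uint32_t bytes_as_word(const char *s)
{
    uint32_t v;
    memcpy(&v, s, sizeof v);
    return v;
}
```

Recognizing ASCII in a register or stack slot like this is a classic clue that string data has overwritten something it shouldn't have.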
The key here is that everything we did was interactive and required human thought at each step. We didn't know what the problem was, so we asked the debugger for some data. Then we thought about it for a while, came up with a theory to test, and asked the debugger for more information to prove or disprove our theory. After repeating this a few times, we came to the conclusion that we are doing an addition wrong in the foo function, and that's making the program segfault.
With that in mind, we switch back to our text editor, write a test for the length-calculation math, and then fix the math. We shouldn't need a trip to the debugger for a while, because we use the debugger to collect information in odd cases, not to watch our program run normally.
So anyway, I guess my hangup with the graphical IDE-based debuggers is "why". Why are you writing code that needs to be single-stepped regularly? Why is your codebase so messy that you can't understand it? Why is your time so worthless to you that you are doing the computer's work for it, instead of the other way around?
A debugger is a tool to use for investigating weird things, not something you should bust out everyday.
Debuggers have a bad rep with some crowds because of the damage gdb did - people think that that's how debugger work and can only work. Navel-gazing at its worst.
A lot of insults, but not a lot of information. If you're so sure that Visual Studio's debugger has solved all of programming's problems, why not share a concrete example of how you use it? Your post adds nothing to the conversation and makes people wish your computer would punch you in the face.
I'm not going to argue further since Mithaldu has already made the points I'd make and in a much more coherent way than I would anyway, but I have to give it to you that
"Your post adds nothing to the conversation and makes people wish your computer would punch you in the face."
made me chuckle, and I will be adding this to my list of favorite internet insults.
> I'm not an Internet toughman, I'm describing the motivations behind the interface of gdb.
Actually, yes, you did that, but you still are one.
For the plain and simple reason that you repeatedly stooped to insults while making your points instead of letting them stand on their own. You should not be surprised people think little of you if you cannot teach without belittling people.
It's funny how both sides in any argument or disagreement think one side is belittling the other by doing nothing more than assuming the other side doesn't understand something or another (this includes you just now btw). Certes pointing out ad hominem counts as ad hominem every single time too.
Most certainly not. The factual divide is one issue and his politeness another. The posts of jrockway are laced with flat out insults that are unnecessary and have nothing to do with any assumptions about understanding.
Let me point one out:
> Your post [...] makes people wish your computer would punch you in the face.
Surely there was insult and ad hominem in his post (not arguing the fact), but I believe it was in reply to the insult and ad hominem from the post he was replying to. Calling someone an internet toughman counts as ad hominem just as much as calling someone computerfacepunch.
Although, the more I think about it, your point was probably: no need to stoop to the same level.
> they forget that they have a brain
> where -O3'd code magically executes as though the compiler didn't do any optimizations on it.
> And finally, if you use correct grammar in your comments
These have varying degrees of subtlety, but they are definitely aimed digs.
There was a definite precedent that warranted the toughman comment and made it seem like an accurate description to me even when i was actively trying to be nice.
Edit: I understand your point and think about it fairly often, i.e. when programmers in one language attack another and the attacked group goes on to point out how they refrain from attacks. I just do not think i have any ash to sprinkle on my shoulder in this case.
It's not a dig. I do think you're dumb if you don't like gdb. Naturally, my opinion comes out in my writing, which I think is my Creative License. You should realize that anything you read is biased. You should read the post for content, mix it with its credibility, and then reach a decision.
In the end, I think that my experience programming adds more credibility than my love for snide comments subtracts. That doesn't make me a toughguy, that makes my writing interesting to read.
And for a more factual reply: You're making a lot of wrong assumptions.
Ahead: I will be talking about a Perl debugger.
> Information overload is just as bad as too little information.
Here you're making the assumption that such a GUI debugger would show ALL OF THE THINGS ALL THE TIME. In reality, however, they can be pretty damn intelligent. My weapon of choice inspects the lexical scope around the current position and displays the variables in it, so there is a window of things the user is very likely to care about whenever the debugger stops. Then, to top it off: the debugger does not actually show all the contents of complex variables at once, but instead lets the user show/hide/page their contents and remembers what state they were in when last looked at.
Information overload is a real problem, but technology has advanced, and as long as people don't try to create actual intelligence, agents that take the work out of certain oft-repeated tasks are viable and useful.
> Single-stepping requires a round-trip between the computer and your brain for every line of code.
Here you're making the assumption that one has to step through every single line of code to get to the place one wants to look at. In reality, breakpoints are employed instead - sometimes even conditional breakpoints, to avoid having to s-s-s-s over a point that's executed repeatedly with data one doesn't care about. Additionally there are nifty tools like "step out" or "step over" that make it easier to get to interesting points without going through entire irrelevant functions. And lastly, breakpoints are not static; they can easily be set or disabled at runtime, making exploration easy.
> And, it hooks into your text editor if you want that little => next to the line of code GDB is on. I've personally never found this to be useful.
Here you're making the assumption that all the code is in one file most of the time. I do not think I need to expand on that. Also, keep in mind that debuggers are often more than clever enough to realize that per-line debugging is stupid and per-statement debugging much more useful. As such, the little arrow can, for example, be really useful in showing you that your code keeps stepping on a certain line where you expected it to proceed.
> The key to debugging is to reduce the scope of the problem
When one starts using third-party libraries the debugger is stupidly useful in showing exactly where inside that library the input data is operated on, so it can actually be the best tool to reducing the scope.
> unit tests unit tests unit tests
You like them a lot. To the point where you seem to ignore that they're entirely synthetic and can mislead a lot about what actually happens when code collides with real data. Unit tests are not a debugging tool. They are self-checking documentation of your code.
Ok, I can't be arsed to write more.
Please try to understand that there are many perspectives and that insulting people will only lead to them hardening their perspectives.
The automatic inlining only happens if the inner function is defined in the same file as the outer function, or in a file that is included in it. If the inner function is called in many places, the normal way in C is to put it in just one C file, and have a declaration of it in a header file that is included where the function is called. Then it will NOT be inlined, unless the linker does link-time inlining, or you do some sort of global compile by concatenating all source files before compiling.
You have to put the whole definition of the inner function in the header file. Also it must be static or inline, otherwise the linker will complain about duplicate functions.
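A minimal sketch of the header-file pattern being described (the function and its name are made up for illustration): the whole definition lives in the header, marked static inline, so every translation unit that includes it can inline the call without the linker complaining about duplicate symbols:

```c
/* util.h -- hypothetical header. The full definition must appear here;
 * "static inline" gives each translation unit its own private copy,
 * so the linker never sees conflicting external definitions. */
static inline int clamp(int x, int lo, int hi)
{
    if (x < lo) return lo;
    if (x > hi) return hi;
    return x;
}
```

Any .c file that includes this header can call clamp() and the compiler is free to inline it, exactly as if the definition were local to that file.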
Martin Fowler in "Refactoring" supported one of my prejudices, which is that "comments are bugs". So we could have nipped this bug in the beginning by not commenting here.
Then there is the question of why are we even thinking about the optimization issues about calling tmpfile? Are we doing billions of these calls? Or more on the order of tens of them. But in either case, wouldn't it be evident prior to any optimization or measuring effort that the time taken up by the OS to actually create the file would swamp any gain done by this optimization?
One might then argue that the bugs are 1) using comments 2) attempting an optimization that simple pre-calculation shows no real benefit.
emacs unfortunately fails on the trigraph sequence "??/" and of course the line immediately following doesn't get executed. Having AST-based syntax highlighting would of course fix the issue. Most editors, afaik, use hacked-together regexes which break down under various conditions.
Use spaces between arithmetic operators. You've already got them around the = sign. It looks much better, and prevents this problem:
int i = 10;
int *p = &i;
int j = 100 / i; // j==10 right? right
int k = 100 / *p; // k==10 right? right
Also, any decent syntax highlighter would have caught the latter problem. The problem in the blog post was much more subtle, because most (all?) syntax highlighters won't catch it.
I don't agree here. != is a primitive, not a negative condition in the usual sense. Putting things this way around is natural, where the usual action appears first and the exceptional action appears later.
I'm not particularly disagreeing with the point you are making - but would it really be asking too much for people to provide explanations of why one form is preferred over another?
The else block in this case is executed when "NULL != fname" is false. That's an unnecessary double negative and takes a nonzero amount of mental effort to unscramble. The example I gave reverses this. This would also be acceptable:
if(fname) {
    g = fopen(fname, "w+");
} else {
    g = tmpfile();
}
(I also don't regard either statement as an "exceptional condition", they both seem equally likely to me.)
Thanks - I've started getting a bit grumpy about coding rules that come without any explanation of why you should do something (simply saying something is "best practise" often being the worst "explanation" of all).
That has merit. Personally I usually find myself doing something more like this:
if (argc != 3) {
    printf("Wrong number of arguments");
    // throw an exception or return NULL or something
}
if (!fname) {
    printf("File name cannot be null");
    // throw an exception or return NULL or something
}
/* Do stuff */
The important thing, to my mind, is the absence of an else block.
Yeah, that is a pattern I occasionally use but I prefer what I have written above as the control flow is more explicit. A missing return (or even one which is quite deliberately omitted) is much harder to spot.
I guess it would, it's just not something I've run into.
The only time I've ever seen backslashes used in C or C++ for line continuation is in huge multi-line macros with lots of variables that have been broken up for readability.
It never occurred to me it could be used to do multi line comments but there you are.
It's marginally more likely that your C tutor hadn't noticed that //-style comments are now C-style too, and imagined that Visual Studio permits it as some kind of non-standard extension.
(Correctly arguing he is right, then reasonably rebutting this argument, is an exercise for the reader.)
This feature has only been with us as part of the standard since 1999, which in geological terms is merely the blink of an eye. I imagine this is why so many people seem not to know about it.
Unlikely, probably just because C++ style comments aren't valid in the C89 standard. I assume Visual Studio, like gcc, defaults to compiling with "c89+", their own flavor of the c89 standards with sane extensions like C++ comments and not needing to declare variables before all statements. My C class forced us to always compile gcc with "-Wall -Wextra -pedantic -ansi", and I forget the "cl" and Borland arguments. (Fortunately Makefiles were usually provided.)
I mean, if this is a reason not to use C++-style comments, it seems like a very weak one among the many reasons not to use C++; I don't think that was intended.
Oh right, that would be it: this was a mechatronic engineering course and when we first learned C they taught it in Visual Studio. It sounds like your C class was better :) I wish they'd taught us initially to do the compiling/linking etc. ourselves. I guess they wanted to just teach coding and leave the rest as an exercise for us to learn independently (the irony of course being that learning all that other stuff seems to me to be vastly more complex and involved, and in need of formal educational structure, than writing code, but anyway ... )
So I guess most of the C compilers for the target microcontrollers we were working with in later courses would have used the C89 standard
What I do is I turn on my compilers warnings for C++ comments in C code, then use C++ comments with all my TODO comments. That way I know if there are any outstanding TODOs when I go to build.
warning: C++ style comments are not allowed in ISO C90 [enabled by default]
Might be worth noting at this point, since I'm already halfway through the GCC manpage again, that -Wcomment seems to be exactly for the sort of situation the author of the article found himself in.
It depends on what you write. If your goal isn't portability, using // has nothing to do with author's unawareness of the semantics of the line continuation character in C (that is, escaped line end character). It's a C feature since forever, and not knowing about it would bite him sooner or later even without the use of C++ comments.
It isn't a bug, it is a feature. C wasn't designed to abstract away the reality of pointers, which are the basic building blocks of assembly. If you don't like pointers, then don't use C. All programming languages have their time and place (even something like whitespace).
The advantage of not abstracting pointers is that it makes low-level programming much easier. Instead of having to write your operating system kernel in assembly, you can write it in a portable language that is much easier to read!
Every language which does not support pointers and pointer arithmetic by default has a more complicated way of doing it. Look at Python, for example. When you want to deal with arrays for interfacing with C code or image processing or just for plain performance gains, you have to deal with Numpy arrays which are even more difficult than pointers (esp. when it comes to C code wrappers).
Haskell gets this right. You never write regular Haskell code with pointers but when you need to do some low level stuff, there is a native pointer type and I/O operations peek and poke.
The pass by value semantics of C are well suited for the low level language that C is.
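To illustrate the pass-by-value point: a function argument is a copy, so mutating it inside the callee does nothing, and you pass a pointer when you actually want the callee to modify the caller's variable (the function names here are just for the sketch):

```c
/* Pass by value: the callee mutates its own copy; the caller's
 * variable is untouched. */
static void inc_copy(int x)
{
    x = x + 1;
}

/* Pass a pointer when the callee should modify the caller's variable. */
static void inc_ptr(int *x)
{
    *x = *x + 1;
}
```

This is the same mechanism that makes C suitable for low-level work: nothing is hidden, and the cost of every call is visible in its signature.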