Hacker News

A function could be compiled assuming the passed argument is always 2. All of the code for other values is just left out.

As long as the compiled code has a check for values that are not 2, this works great. Without that guard it isn't correct, though.

At least, that's my interpretation.
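A minimal sketch of that idea in Python (the names are made up, and a real JIT would emit machine code rather than call a fallback function):

```python
def squared_generic(x):
    # the full, unspecialised implementation
    return x * x

def squared_speculated(x):
    # Speculatively compiled version: all code for values other than 2
    # is left out, but a guard checks the assumption still holds.
    if x != 2:
        return squared_generic(x)  # "deoptimise" to the generic path
    return 4                       # specialised result, constant-folded

print(squared_speculated(2))  # 4
print(squared_speculated(5))  # 25
```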



Maybe I'm getting tripped up in the terminology here, but to me this case is still a JIT jittin': you look at runtime data and decide it's worth it to crank out a special-case optimization for the input of 2. You produce that optimization, which is sound, along with a check to make sure it is applied only in the special case. You get to defer other optimization. The advantage here still seems to come from the runtimeness of things rather than from being clever about soundness, and there's really no guessing about soundness. So perhaps that's not it.


> to me this case is still a JIT jittin'

I think the point is that some JITs never do this kind of optimisation - they just produce the same code an AOT compiler would, but at runtime. Such as the .NET JIT.


I don't think that's the point the comment I'm replying to is making, or at least, it's not the point I'm asking about.

Edit: Your example in the other comment about the locks is the sort of thing I'm asking about. There, an optimization is made which is sound under some specific conditions and then unmade when those conditions change.
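Lock elision itself is hard to show in Python, but the make-then-unmake pattern can be modelled with a toy inline cache (all names hypothetical): a fast path is installed under an assumption, guarded, and thrown away the moment the assumption breaks.

```python
class SpeculativeAdd:
    """Caches a fast path under the assumption that only ints are seen,
    and discards it as soon as that assumption stops holding."""
    def __init__(self):
        self.fast = None

    def __call__(self, a, b):
        if self.fast is not None:
            if isinstance(a, int) and isinstance(b, int):  # guard
                return self.fast(a, b)
            self.fast = None            # conditions changed: unmake the optimisation
        result = a + b                  # generic path
        if isinstance(a, int) and isinstance(b, int):
            self.fast = int.__add__     # install the specialised path
        return result

add = SpeculativeAdd()
print(add(1, 2))      # 3, installs the fast path
print(add("a", "b"))  # ab, deoptimises
```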


I think that is indeed the point pron was making, or at least similar. You can't actually ignore soundness, but JVMs sometimes go farther than I'd expect. (Example: don't check for null, just handle SIGBUS if null is "very rare")
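Python can't trap hardware faults, but the same "skip the check, trap the rare failure" shape can be sketched with exceptions standing in for the signal handler (names made up):

```python
def name_length_checked(obj):
    # pessimistic version: pays for an explicit check on every call
    if obj is None:
        return 0
    return len(obj.name)

def name_length_trapping(obj):
    # optimistic version: no check on the hot path; the rare None case
    # is handled by catching the fault, like the JVM catching the signal
    try:
        return len(obj.name)
    except AttributeError:
        return 0

class User:
    name = "alice"

print(name_length_trapping(User()))  # 5
print(name_length_trapping(None))    # 0
```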


Yes, on re-reading the thread again it might be entirely (or almost entirely) about language. As in, it's really something along the lines of 'the power of the JIT approach comes from runtime information and dynamism, but you can also be "just" a JIT without making use of any of that'. And I'm getting stuck on 'secret weapon [...] soundness' and imagining some unfathomable-to-mortals ninja something.


I guess you need to update your knowledge regarding the several .NET JITs in use.


Maybe I'm only familiar with "the main one" and mono... Are there other .NET VMs?

If I recall correctly, it will do constant folding, but it won't speculate that a certain parameter is essentially constant at runtime when it wasn't constant at compile time.

An easy example is a config loaded from a file as the server boots but never changes for the lifetime of the process. That won't constant fold without speculation.
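A hypothetical sketch of that speculation: the config value is loaded at startup, a "specialised" function is generated with the value baked in as a constant, and a guard (here a version counter, invented for the example) falls back to the generic path if the config is ever reloaded.

```python
config = {"batch_size": 64}  # loaded from a file as the server boots
config_version = 0           # bumped whenever the config is reloaded

def make_specialised():
    baked = config["batch_size"]   # fold the runtime constant into the code
    seen_version = config_version
    def total_items(batches):
        if config_version != seen_version:         # guard: speculation invalid
            return batches * config["batch_size"]  # generic path
        return batches * baked                     # uses the folded constant
    return total_items

total_items = make_specialised()
print(total_items(10))  # 640
```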


There is the old style JIT, RyuJIT introduced with .NET Framework 4.6, MDIL, .NET Native, Mono, .NET CF, IL2CPP, and the research ones from Singularity and Midori.

So while it is hard to state what each AOT/JIT compiler is capable of, naturally they aren't all 100% the same.


Ah! I should read more about the world. I was recalling from a conversation I had with a .NET engineer at JVMLS last year.


As I cautioned in another comment

> (or did, last time I checked)

Do implementations of .NET JITs now do speculative optimisations or dynamic compilation? They didn't see the need for it for about 15 years.


15 years ago there wasn't RyuJIT, which replaced the JIT you learned from, nor MDIL (Windows 8/8.1), .NET Native (UWP), IL2CPP (Unity), or the research ones from Singularity and Midori.

As for the need for it, they have been trying to make C# more relevant for the kinds of workloads C++ serves, and to place among the first spots at TechEmpower.

So .NET has been getting Modula-3-like low-level handling of value types within a GC environment, and RyuJIT is now tiered, supports SIMD, and does some automatic vectorization.

.NET Framework 4.6 got the first version of what is the .NET way of doing AppCDS.

There are a couple of blog posts regarding RyuJIT improvements with each release after its introduction.


So which of these implementations does speculation? I remember when RyuJIT came out it still wasn't speculative - has that now changed?

If you read the blog posts, they always talk about speculation being something they may try in the future. I've not seen anything where they say they went ahead and implemented it.


Here is some information.

Background JIT overview, which is a kind of PGO for the .NET Framework:

https://msdn.microsoft.com/en-us/magazine/mt683795.aspx

And I think this is in line with what you are discussing:

https://github.com/dotnet/coreclr/pull/21270

I also agree that many things remain to be done in line with what Graal is capable of.


Yeah, I think this is the relevant bit: https://github.com/dotnet/coreclr/blob/master/Documentation/...

Seems like they started trying speculative optimizations about six months ago. Speculative optimizations are not only the foundation of Graal but also of C2, BTW.



