Ask HN: DDD was a great debugger – what would a modern equivalent look like?
41 points by manux81 8 hours ago | 45 comments
I’ve always thought that DDD was a surprisingly good debugger for its time.

It made program execution feel visible: stacks, data, and control flow were all there at once. You could really “see” what the program was doing.

At the same time, it’s clearly a product of a different era:

– single-process

– mostly synchronous code

– no real notion of concurrency or async

– dated UI and interaction model

Today we debug very different systems: multithreaded code, async runtimes, long-running services, distributed components.

Yet most debuggers still feel conceptually close to GDB + stepping, just wrapped in a nicer UI.

I’m curious how others think about this:

– what ideas from DDD (or similar old tools) are still valuable?

– what would a “modern DDD” need to handle today’s software?

– do you think interactive debugging is still the right abstraction at all?

I’m asking mostly from a design perspective — I’ve been experimenting with some debugger ideas myself, but I’m much more interested in hearing how experienced engineers see this problem today.





Maybe have a look at RadDbg [0]; as I understand it, the project was recently bought by Epic Games. The goal is to make a performant debugger, and from what I understand the target audience is mostly game developers, but you can obviously use it for other programs as well. You can see a talk by the core developer about his vision here: https://www.youtube.com/watch?v=_9_bK_WjuYY

Sadly it's Windows-only for now, but they have plans to port it to other platforms.

- [0]: https://github.com/EpicGamesExt/raddebugger


The readme talks about plans for Linux support, but I'm guessing that's no longer on the table after the Epic acquisition? Sweeney is the single most publicly anti-Linux CEO I'm aware of.

AFAIK, RAD was acquired by Epic before Ryan began working on the Linux port

It usually takes some time for an acquisition to result in a significant cultural shift in the acquired company, but it always happens in the end.

Do you expect an IAP peddler to support free software?

Yes actually: plenty of companies don't care where the money comes from, they're happy as long as there's money. Unity, the other big ad- and IAP-peddling game engine company, has pretty good Linux support.

It's weird for a company to explicitly say, "if you use this one operating system you can go F yourself, we don't want your money". (Note: this is not the same as saying "we only officially support Windows at this time, sorry". There's seething hatred in Sweeney's words.)


I suppose Apple bribed him to support Macs. Who's going to bribe him to support Linux, RedHat? Canonical?

> Do you expect an IAP peddler to support free software?

Valve does.


Valve doesn't own RAD though

I always point to the Metal API debugger in Xcode for what I'd want a CPU debugger to look like: visualizing dependencies and dataflows with an actual graphical UI, letting me view data as something other than hex dumps (e.g. as an image or 3D mesh), etc... etc...

Of course the CPU side usually lacks semantics to automatically create data visualizations (while modern 3D APIs have enough context to figure out data dependencies and data formats), and that would be the "interesting" part to solve - e.g. how to tunnel richer debug information from the programming language to the debugger.

Also there's a middle ground of directly adding a realtime runtime debugging UI to applications via something like Dear Imgui (https://github.com/ocornut/imgui/) which is at least extremely popular in game development - and in this case it's trivial to provide the additional context since you basically develop the debugging system alongside the application.

PS: I'd also like a time slider that "just works", e.g. travelling back to a previous state, taking snapshots, and exploring different "state forks". And of course, while at it, live editing / hot code reloading, so that there's no difference between a development and a debugging session; both merge into the same workflow.


This is slightly off topic, but do you know of a graphics debugger on macOS which supports OpenGL? ...or a graphics debugger on Linux which supports Wayland?

We used to have the Apple OpenGL Profiler etc on macOS, but all of Apple's tools these days are only focused on Metal. We used to have RenderDoc on Linux, but it got left behind by the Wayland transition and doesn't work anymore. So I'm kinda lacking anything to debug OpenGL at the moment...


I don't know, sorry. For debugging GL I usually switch to Windows with RenderDoc (or some current NVIDIA tool - does Nsight still exist?).

RenderDoc on Linux appears to work fine for X11 apps via XWayland btw (just tested here via the sokol-samples with the sokol-gfx GL backend: https://github.com/floooh/sokol-samples). Just another reason not to have a native Wayland backend but instead rely on the XWayland shim, I guess...


I haven't touched it in a while, but Visual Studio's debugger (the standalone IDE, not VS Code) was very cool.

Also, rr is impressive in theory, although it never worked on the codebases I worked on.


FoundationDB's approach - look up their testing framework.

I've worked at a company that, for all intents and purposes, had the same thing: single-threaded & multi-process everything (i.e. process per core), asserts in prod (like why tf would you not), absurdly detailed in-memory ring-buffer binary logs & good tooling to access them, plus normal logs (journalctl), telemetry, graphing, etc.

So basically - it's about making your software debuggable and resilient in the first place. These two kind of go hand-in-hand, and absolutely don't have to cost you performance. They might even add performance, actually :P
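
To make the ring-buffer idea concrete, here is a minimal sketch in C (all names and sizes are made up for illustration, not taken from any particular company's implementation): a small always-on in-memory log that costs almost nothing at runtime and can be read back from a debugger or a core dump when something goes wrong.

    /* Minimal always-on in-memory ring-buffer log (illustrative sketch).
       Single-writer, matching a one-thread-per-process design; a
       multithreaded version would need per-thread rings or atomics. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <time.h>

    #define RING_SLOTS 4096          /* power of two, so we can mask */
    #define RING_MSG   48

    typedef struct {
        uint64_t ts_ns;              /* monotonic timestamp */
        uint32_t line;               /* source line of the event */
        char     msg[RING_MSG];      /* short fixed-size payload */
    } ring_entry;

    static ring_entry g_ring[RING_SLOTS];
    static uint64_t   g_ring_head;   /* ever-increasing write index */

    static void ring_log(uint32_t line, const char *msg)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);

        ring_entry *e = &g_ring[g_ring_head++ & (RING_SLOTS - 1)];
        e->ts_ns = (uint64_t)ts.tv_sec * 1000000000ull + ts.tv_nsec;
        e->line  = line;
        strncpy(e->msg, msg, RING_MSG - 1);
        e->msg[RING_MSG - 1] = '\0';
    }

    #define RING_LOG(msg) ring_log(__LINE__, (msg))

    int main(void)
    {
        RING_LOG("service started");
        RING_LOG("handled request");
        /* After a crash, the last RING_SLOTS events are sitting in the
           core dump; a small tool or debugger script can walk g_ring. */
        printf("logged %llu events\n", (unsigned long long)g_ring_head);
        return 0;
    }

The point is exactly the one above: the observability is built into the program itself, so there is nothing to reproduce under a debugger later.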


May I recommend this episode of The Stand Up podcast with Ryan Fleury, the driving force behind the RAD Debugger, as the guest.

https://www.youtube.com/watch?v=O-3gEsfEm0g

Casey also makes a good point here on why printf-debugging is still extremely popular.


My background is in teaching C programming at the university level, and DDD was very helpful there, although it was not very comfortable to use. For years I looked for a replacement; I finally found Seer and have been very happy with it.

https://github.com/epasveer/seer

Interactive debugging is definitely useful when teaching, though teaching is obviously a different context. That said, Seer is not an educational tool, and I believe it holds up in other cases as well.


Have you also tried KDbg, and if so, what's the reason for picking Seer over KDbg?

I remember trying it, but I couldn't use it; I don't remember exactly why anymore. The KDE dependencies might have been a problem for having the students install it on their own computers, but I also remember not being able to use the app properly. I tried to test it again just now, but couldn't find a binary download and couldn't compile it myself.

Pernosco

Blows everything else out of the water.

https://pernos.co/ (I'm not affiliated with them in any way, just a happy customer)


Seems like IDA Pro does the same but 10x better

It doesn't sound like you have even used it so "seems like" is doing a lot of heavy lifting here.

printf("Got here, x=%u"\n", x);

I'm not too offended by this answer. We all reach for it before we seriously think about the debugger. But debugging should be treated as a specialist skill that's almost as complex as programming, and just as empowering. There are two ways I can think of right away in which mastery of debuggers can enrich us.

The first is that it gives you unparalleled insight into the real stuff behind the scenes. You'll never stop learning new things about the machine with a debugger. But at a minimum, it will make you a much better programmer. With the newfound context, those pesky, convoluted programming guidelines will finally start to make sense.

The second is that print is an option only for a program you have the source code for. A debugger gives you observability and even control over practically any program, even one already in flight or one that's on a different machine altogether. Granted, it's hard to debug a binary program. But in most cases on Linux or BSD, that's only because the source code and the debugging symbols are too large to ship with the software. Most distros and BSDs actually make them available on demand using the debuginfod software. It's a powerful tool in the hands of anyone who wishes to tinker with it. But even without it, Linux gamers are known to ship coredumps to the developers when games crash. Debugging is the doorway to an entirely different world.
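
To make the second point concrete, attaching to a program that is already running needs nothing but the kernel's debugging interface; tools like GDB (and DDD on top of it) are built on that. A minimal sketch using ptrace (Linux/x86-64 only, error handling mostly omitted, and you need suitable ptrace permissions):

    /* Attach to a running process, peek at its registers, detach. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/ptrace.h>
    #include <sys/types.h>
    #include <sys/user.h>
    #include <sys/wait.h>

    int main(int argc, char **argv)
    {
        if (argc != 2) {
            fprintf(stderr, "usage: %s <pid>\n", argv[0]);
            return 1;
        }
        pid_t pid = (pid_t)atoi(argv[1]);

        if (ptrace(PTRACE_ATTACH, pid, NULL, NULL) == -1) {
            perror("PTRACE_ATTACH");
            return 1;
        }
        waitpid(pid, NULL, 0);                    /* wait for the target to stop */

        struct user_regs_struct regs;
        ptrace(PTRACE_GETREGS, pid, NULL, &regs); /* snapshot its registers */
        printf("pid %d stopped at rip=%llx rsp=%llx\n",
               pid, (unsigned long long)regs.rip, (unsigned long long)regs.rsp);

        ptrace(PTRACE_DETACH, pid, NULL, NULL);   /* let it keep running */
        return 0;
    }

The hard parts of a real debugger (symbols, source mapping, expression evaluation, UI) all live above this layer, which is where debuginfod and the distro debug packages come in.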


“The most effective debugging tool is still careful thought, coupled with judiciously placed print statements.”

- Brian Kernighan


That's the holy Unix justification for self-flagellation via deficient tooling and you're sticking to it.

Although that was true at the time, it was before the creation of modern omniscient debuggers like Pernosco (<https://pernos.co/>).

Very convenient to use LLMs for the "Please add debug fprintf(stderr, {print var x y here})", and then the "please comment out the debug fprintfs".

Something like this maybe:

https://whitebox.systems/

Doesn't seem to meet all your desired features though.


Yes, that’s a good example — thanks for the link. Tools like this seem very strong at visualizing and exploring state, but they still tend to stay fairly close to the traditional “pause and inspect” model. What I keep struggling with is understanding how a particular state came to be — especially with concurrency or events that happened much earlier. That gap between state visualization and causality feels hard to bridge, and I’m not sure what the right abstraction should be yet.

Sounds like you want a time travel debugger, e.g. rr.

Sophisticated live debuggers are great when you can use them but you have to be able to reproduce the bug under the debugger. Particularly in distributed systems, the hardest bugs aren't reproducible at all and there are multiple levels of difficulty below that before you get to ones that can be reliably reproduced under a live debugger, which are usually relatively easy. Not being able to use your most powerful tools on your hardest problems rather reduces their value. (Time travel debuggers do record/replay, which expands the set of problems you can use them on, but you still need to get the behaviour to happen while it's being recorded.)


This doesn't sound like a particularly difficult problem for some scenarios.

It's definitely convoluted when it comes to memory obtained from the stack, but for heap allocations a debugger could trace the returns of the allocator APIs, use those as the starting point of a piece of data's lifetime, then trace any access to that address and gather high-level info about the reader/writer.

Global variables should also be fairly trivial, as you just need to track memory accesses to their addresses.

(Of course, further work is required to actually apply this.)

For variables on the stack, or in registers, though, you'll probably need heuristics that account for reuse of memory/variables, and maybe a strong association with the thread this is happening in (for both the thread's allocated stack and the thread context), etc.
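
For the heap half of this, the "trace the returns of the allocator APIs" step can even be sketched without a debugger, with an LD_PRELOAD interposer that records where each allocation was born. A rough illustrative sketch in C (not a full lifetime tracker; tracing later reads and writes of the address is where watchpoints or binary instrumentation would have to come in):

    /* malloc interposer: log each allocation's address, size, and call site.
       Build: gcc -shared -fPIC -o trace_alloc.so trace_alloc.c -ldl
       Run:   LD_PRELOAD=./trace_alloc.so ./your_program
       Note:  a production interposer needs re-entrancy guards; this is
              only meant to show where a data lifetime "begins". */
    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <stddef.h>
    #include <stdio.h>
    #include <unistd.h>

    static void *(*real_malloc)(size_t);

    void *malloc(size_t size)
    {
        if (!real_malloc)   /* look up the real allocator once */
            real_malloc = (void *(*)(size_t))dlsym(RTLD_NEXT, "malloc");

        void *p = real_malloc(size);

        /* __builtin_return_address(0) is the call site: the starting point
           of this data's lifetime, which could later be correlated with
           accesses to [p, p + size). */
        char buf[128];
        int n = snprintf(buf, sizeof buf, "alloc %p size %zu from %p\n",
                         p, size, __builtin_return_address(0));
        if (n > 0)
            write(2, buf, (size_t)n);
        return p;
    }

That covers the "beginning point"; the access-tracing side (hardware watchpoints, or tools in the Valgrind/DynamoRIO family) is where it gets expensive.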


Sounds like you want time travel debugging [1]. Then you can just run forwards and backwards as needed and look at the full evolution of state and causality. You usually want an integrated history visualization tool to make the most of that, since the amount of state you are looking at is truly immense; identifying the single wrong store 17 billion instructions ago can be a pain without it.

[1] https://en.wikipedia.org/wiki/Time_travel_debugging


Here's another one

https://scrutinydebugger.com/

It's for embedded systems though, which is where I come from. In embedded we have this concept called instruction trace, where every instruction executed on the target gets sent over to the host. The host can reconstruct part of what's been going on in the target system. But there's usually so much data that I've always assumed a live view is kind of impractical, and I've only used it for offline debugging. Maybe that's not a correct assumption, though. I would love to see better observability in embedded systems.


I think for Unix-likes, a good old TUI-based debugger like Turbo Debugger would be very useful.

Check out Binary Ninja if you haven’t. Especially if you have large binaries!

Plan 9's acid.

What are you using as a Rust debugger?

I am pretty happy with the debugger from Visual Studio.

Linaro (née Allinea) DDT?

A modern equivalent, working on Linux with a remote connection from Visual Studio Code, is LLDB.

It takes some effort to configure, but it beats "printf" (i.e. logging) in the end.


Radare and friends, maybe. It certainly has some graph-like views, like DDD had, where you can see everything graphically.

Most RE tools today will integrate a debugger (or talk to gdb).


Domain driven design?

I assume it's this one: https://www.gnu.org/software/ddd/

I used it back in uni, in '98, and it really helped me understand debuggers. After it, even using gdb made sense.


I was so confused. Why is domain driven design especially good for debugging? I guess context is bound within the models... And then all the other comments were just talking about debugging tools. Glad I was not the only one.

This is a legitimate question but this was clearly generated using an LLM.

To add something constructive, this demo represents an amazing ideal of what debugging could be: https://www.youtube.com/watch?v=72y2EC5fkcE



