
Debuggers and Operating Systems (Keno Fischer Quora Session)

Keno Fischer

Keno Fischer, Julia Computing Co-Founder and Chief Technology Officer (Tools), participated in a Quora Session from March 18-23. One of Keno's responses was also featured in Forbes.

How could the functionality of compilers, debuggers, and operating systems be improved?

I will leave out compilers, because I've spent some time talking about them in other answers (with respect to machine learning in particular), but let me spend some time on the other two.

I think debugging is probably the most under-appreciated and under-developed part of systems software development, despite being one of the most important. Debuggers are one of my favorite topics to gripe about, so I will list just a few complaints.

  1. The quality of debug information is horrible, particularly for optimized code. This is a result of debug info being considered a "best effort" kind of thing, where "best" in this case means "the absolute minimum we can get away with without inciting a revolt." It doesn't have to be this way. These days, we generally have both the compiler and the original source code that produced the binary available. Even if it were prohibitive to store the debug information ahead of time, we could easily recompute it by rerunning the compiler (computers are deterministic machines, after all). Instead we're left to try to reconstruct what the program was doing by reading assembly and poking at memory, like a detective figuring out what's going on in the kitchen by positioning themselves in the sewer. (A small sketch of what this looks like in practice follows this list.)

  2. The debugging experience is horrible. Suppose I've just spent twenty minutes reproducing a bug and I'm just about to find out what went wrong in the debugger. Then I fat-finger a debugger command and, oops, all my state is gone and I get to spend another twenty minutes reproducing it (even worse is when you used the right command, but the debugger, for one reason or another, misses the correct target and just keeps running the program anyway). This might seem like something fundamental, since after all time runs forward, but it actually isn't. Time travel is quite real in the debugging world, and it's amazing. The rr project (https://github.com/mozilla/rr) does this for arbitrary Linux programs, and there are similar approaches in other contexts. It's probably the single most powerful debugging technology I know, but nobody invests in it. (A sketch of a typical rr session also follows this list.)

  3. Debuggers don't make use of compute power. My regular development machine has 40 cores, but the debugger still needs me to make all the decisions. Ideally I'd just point at some memory, registers, or variable values, tell the debugger they look wrong, and have it go off and run a bunch of simulations, SMT solves, or whatever else it takes to figure out exactly what conditions would cause such a thing to happen. Right now I basically do that manually with memory watchpoints and careful examination of the code. There's a huge, fertile research field here, particularly when combined with something like rr.
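To make the first complaint concrete, here is a minimal, hypothetical C example (the file name and the numbers are made up) of the kind of code where optimized builds lose debug information. Built with something like cc -O2 -g, stepping through compute() in a debugger such as gdb frequently reports locals like total as <optimized out>; the exact behavior depends on the compiler and version, but the underlying cause is the same: the optimizer keeps values in registers or folds them away, and the emitted debug info doesn't say where they live at every instruction.

    /* optout.c -- a hypothetical example of how optimization degrades the
     * debugging experience. Build with:  cc -O2 -g optout.c -o optout
     * Stepping through compute() in gdb, locals like total and scaled are
     * frequently reported as <optimized out>: the optimizer keeps them in
     * registers or folds them away, and the debug info the compiler chose
     * to emit does not describe where the value lives at every point. */
    #include <stdio.h>

    static int compute(int n) {
        int total = 0;               /* often shown as <optimized out> at -O2 */
        for (int i = 0; i < n; i++)
            total += i * i;
        int scaled = total * 3;      /* likewise often invisible to the debugger */
        return scaled;
    }

    int main(int argc, char **argv) {
        (void)argv;
        /* derive n from argc so the whole computation isn't constant-folded */
        printf("%d\n", compute(argc * 10));
        return 0;
    }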
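And to make the time-travel point concrete, here is a minimal sketch of an rr session, assuming a Linux machine with rr installed. The program, its bug, and the file name are hypothetical, but the commands in the comment (rr record, rr replay, gdb watchpoints, reverse-continue) are real. It also shows the manual watchpoint workflow from the third complaint, except that with a recording you get to run it backwards.

    /* stomp.c -- a hypothetical "who overwrote my field?" bug used to sketch
     * a record-and-replay session. The loop in main() runs one element past
     * the end of counts[] and corrupts the adjacent limit field; the failure
     * is only detected later, far from the bad write.
     *
     * A typical session on Linux with rr installed might look like:
     *
     *   rr record ./stomp        # record one failing run
     *   rr replay                # replay it deterministically, as often as needed
     *   (rr) break abort         # stop where the corruption is detected
     *   (rr) continue
     *   (rr) frame 1             # move up to main's frame, where s is in scope
     *   (rr) watch -l s.limit    # hardware watchpoint on the corrupted field
     *   (rr) reverse-continue    # run backwards to the instruction that wrote it
     *
     * Fat-fingering a command just means typing "rr replay" again, not
     * spending another twenty minutes reproducing the bug. */
    #include <stdlib.h>

    struct state {
        int counts[4];
        int limit;                   /* stomped by the off-by-one write below */
    };

    static void update(struct state *s, int i, int v) {
        s->counts[i] = v;            /* bug: main() passes i == 4 */
    }

    int main(void) {
        struct state s = { {0, 0, 0, 0}, 100 };
        for (int i = 0; i <= 4; i++) /* off-by-one: i == 4 writes s.limit */
            update(&s, i, i * 10);
        if (s.limit != 100)
            abort();                 /* corruption detected far from the write */
        return 0;
    }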

I'm less actively disappointed in operating systems, but I do think there are some very interesting potential avenues for next-generation operating systems.

  1. Many-device applications. To some extent we have this with the Web, but to me there seems to be very little reason that each of my devices is a separate execution domain. I should be able to start on my computer at home, open an application, continue where I left off on my phone during my commute, and then arrive at the office, plug my phone into my workstation, and use the nearby TV as the monitor (with rendering done on the TV to keep down latency). All the while, I want a unified file system with all my files, state synchronized between devices (e.g. the chat message I half typed out), and ideally I'd like not to rely on the cloud for any of this to work. What would the right operating system and APIs for a system that allowed this look like?

  2. Security. This has been a holy grail for a long time, but it does seem like we're approaching a world where it is feasible to write real-world operating systems in a formally verified (or at least memory-safe) language. The number of security vulnerabilities in mainstream operating systems is quite frankly horrifying for a system whose primary job is, in part, to provide security isolation between processes.

  3. Syscalls without domain transitions. Every time we ask the operating system to perform some work for a user-space process, we pay significant overhead just to transition from user space to kernel space. Can we come up with better alternatives, either through more fine-grained security domains in hardware (the Mill people have some interesting ideas here) or through software techniques (e.g. by distributing applications as some sort of intermediate representation and validating or enforcing the requisite security properties)? (A rough microbenchmark of this overhead follows the list.)

  4. Support for reversible debugging. And there you thought we were done with debuggers ;) As I said in the debugging list, rr currently works only on Linux (and Intel hardware), and even that only barely. Linux could do a lot better at supporting tools like rr, and other operating systems could make it possible at all. It's hard to overstate the utility.
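As a rough illustration of the transition cost mentioned in the third point, here is a small microbenchmark sketch (the file name and loop count are arbitrary). It is not a rigorous benchmark (no warm-up, no CPU pinning), and results vary with hardware, kernel version, and speculative-execution mitigations, but it contrasts a call that genuinely enters the kernel with one that stays entirely in user space.

    /* syscall_cost.c -- a rough sketch of the user/kernel transition overhead.
     * It times getpid() issued via syscall(), which forces a real kernel
     * entry, against an ordinary function call that stays in user space.
     * Not a rigorous benchmark; numbers vary with CPU, kernel version, and
     * mitigations. */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    static long long now_ns(void) {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec * 1000000000LL + ts.tv_nsec;
    }

    static long plain_function(void) { return 42; }

    int main(void) {
        const int iters = 1000000;
        volatile long sink = 0;           /* keeps the loops from being optimized away */

        long long t0 = now_ns();
        for (int i = 0; i < iters; i++)
            sink += syscall(SYS_getpid);  /* real syscall: user -> kernel -> user */
        long long t1 = now_ns();
        for (int i = 0; i < iters; i++)
            sink += plain_function();     /* stays entirely in user space */
        long long t2 = now_ns();

        printf("syscall:       %.1f ns/call\n", (double)(t1 - t0) / iters);
        printf("function call: %.1f ns/call\n", (double)(t2 - t1) / iters);
        return 0;
    }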
