I do not use a debugger

I learned to program with BASIC back when I was twelve. I would write elaborate programs and run them. Invariably, they would surprise me by failing to do what I expected. I would struggle for a time, but I’d eventually give up and just accept that whatever “bugs” I had created were there to stay.

It would take me a decade to learn how to produce reliable and useful software. To this day, I still get angry with people who take it for granted that software should do what they expect without fail.

In any case, I eventually graduated to something better: Turbo Pascal. Turbo Pascal was a great programming language coupled with a fantastic programming environment that is comparable, in many ways, to modern integrated development environments (IDEs). Yet it is three decades old. It had something impressive: you could use a “debugger”. What this means is that you could run through the program, line by line, watching what happened to variables. You could set breakpoints where the program would halt and give you control.

At the time, I thought that programming with a debugger was the future.

Decades later, I program in various languages: C, JavaScript, Go, Java, C++, Python… and I almost never use a debugger. I use fancy tools, and I certainly do use tools that are called debuggers (like gdb), but I almost never step through my programs line-by-line watching variable values. I almost never set breakpoints. I say “almost” because there are cases where a debugger is the right tool, mostly on simple or quick-and-dirty projects, or in contexts where my brain is overwhelmed because I do not fully master the language or the code. This being said, I do not recall the last time I used a debugger as a debugger to step through the code. I have a vague recollection of doing so to debug a dirty piece of JavaScript.

I am not alone. In five minutes, I was able to find several famous programmers who took positions against debuggers or who reported barely using them.

I should make it clear that I do not think that there is one objective truth regarding tools. It is true that our tools shape us, but there is a complex set of interactions between how you work, what you do, who you work with, what other tools you are using and so forth. Whatever works for you might be best.

However, the fact that Linus Torvalds, who is in charge of a critical piece of our infrastructure made of 15 million lines of code (the Linux kernel), does not use a debugger tells us something about debuggers.

Anyhow, why did I stop using debuggers?

Debuggers were conceived in an era when we worked on moderately small projects, with simple processors (no threads, no out-of-order execution), simple compilers, relatively small problems, and no formal testing.

For what I do, I feel that debuggers do not scale. There is only so much time in life. You either write code, or you do something else, like running line-by-line through your code. Doing “something else” means (1) rethinking your code so that it is easier to maintain or less buggy, or (2) adding smarter tests so that, in the future, bugs are readily identified effortlessly. Investing your time in this manner makes your code better in a lasting manner… whereas debugging your code line-by-line fixes one tiny problem without improving your process or your future diagnostics. The larger and more complex the project gets, the less useful the debugger gets. Will your debugger scale to hundreds of processors and terabytes of data, with trillions of closely related instructions? I’d rather not take the risk.

My ultimate goal when working on a difficult project is that when problems arise, as they always do, it should require almost no effort to pinpoint and fix the problem. Relying on a debugger as your first line of defense can be a bad investment; you should always try to improve the code first.

Rob Pike (one of the authors of the Go language) once came to a similar conclusion:

If you dive into the bug, you tend to fix the local issue in the code, but if you think about the bug first, how the bug came to be, you often find and correct a higher-level problem in the code that will improve the design and prevent further bugs.

I don’t want to be misunderstood, however. We need to use tools, better tools… so that we can program ever more sophisticated software. However, running through the code line-by-line checking the values of your variables is no way to scale up in complexity and it encourages the wrong kind of designs.

41 thoughts on “I do not use a debugger”

  1. I’ve always felt like I missed the boat on the value of debuggers. A coworker once showed me how “cool” it was that he had a remote debugger running on a server stepping through code and looking at the state of things… it occurred to me that the tool was needed BECAUSE the code was a mess (too many “smart”/hierarchical/mutable objects interacting with each other).

    I do see the value of using a debugger to understand code you don’t own… but there is always the idea that you ended up using a debugger because you really don’t understand the model and state transitions the program can go through. If that is the case, time should be spent figuring out the model and state transitions in a top down way… rather than iterating on figuring out where the model and/or transitions are broken (in a bottom up way) by figuring out the appropriate places to put breakpoints. (But I guess this is an art in and of itself)

    1. time should be spent figuring out the model and state transitions in a top down way

      Excellent! Just put a breakpoint in your main() method, and step into each relevant method, seeing what’s going on.

  2. A debugger is a very useful tool. I think people who are not using them are people who have caught bad habits while working in environments where there was no good debugger. When you work with Visual Studio with its marvelous debugger, you can find and resolve bugs faster. Of course, you can always think harder about the problem and review your code, but when you’re tired, the debugger is a nice helper.

    1. I think people who are not using them are people who have caught bad habits while working in environments where there was no good debugger.

      Good debuggers are hardly new. Turbo Pascal had a fantastic debugger three decades ago, and even then it wasn’t a particularly innovative feature. A debugger is a very widely available tool and it has been so for many decades. IDEs (with debuggers) are also widely available and have been for a very long time… E.g., Visual Studio, Eclipse, IntelliJ, KDevelop and so forth.

      There are a few small things that have improved with respect to debuggers… such as backward execution, remote cross-platform debugging, pretty-printing STL data structures… but overall, debuggers have not changed very much.

      IDEs have gotten quite a bit nicer and smarter… Do IDEs make you more productive? That’s another debate.

      Of course, you can always think harder about the problem and review your code, but when you’re tired, the debugger is a nice helper.

      I agree that using tools to make yourself more effective, especially when you are cognitively impaired in some way, is very important. But in most cases there are better tools than a debugger.

  3. I also shy away from using the debugger; the main reason for me is to avoid the overhead of switching contexts, i.e. one minute you’re in your familiar IDE editing your program, and the next you’re in a different environment with new windows (the debugger).

    It’s like watching 2 movies at the same time.

    It’s much less of a burden for my tiny brain to just insert some printf()s where my code is suspect and watch that output.

  4. My two cents is that for me assert and print are by far the most useful debugging tools, in a wide range of programming paradigms.

    Well-designed asserts are helpful for both catching logic errors and documenting the code. In fact, asserts explicitly highlight assumptions about the algorithm state that could otherwise go unnoticed (a short sketch follows at the end of this comment).

    As Daniel Lemire correctly points out, one should not focus one’s attention on where the code is wrong, but on why the code is wrong, and asserts are ideal for this purpose.

    As regards print statements, they are the poor man’s debugger, but somehow I have learned to cope with the fact that I’m a poor man, without the energy to master a highly polished modern debugger…
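
    A minimal sketch in C of the kind of assert I mean (the binary search function is hypothetical; the point is that the precondition is written down and checked rather than left implicit):

        #include <assert.h>
        #include <stddef.h>
        #include <stdio.h>

        /* The assert documents the assumption that the input array is sorted. */
        static int is_sorted(const int *a, size_t n) {
            for (size_t i = 1; i < n; i++)
                if (a[i - 1] > a[i]) return 0;
            return 1;
        }

        static int find(const int *a, size_t n, int key) {
            assert(is_sorted(a, n)); /* assumption made explicit and checked */
            size_t lo = 0, hi = n;
            while (lo < hi) {
                size_t mid = lo + (hi - lo) / 2;
                if (a[mid] < key) lo = mid + 1;
                else hi = mid;
            }
            return (lo < n && a[lo] == key) ? (int)lo : -1;
        }

        int main(void) {
            int a[] = {1, 3, 5, 7};
            printf("%d\n", find(a, 4, 5)); /* prints 2 */
            return 0;
        }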

    1. I also try to write as many good assertions as I can in my code. There’s also the new-ish thing in reliability: property-based testing (à la QuickCheck). I like to think of property-based testing as somewhere in the nebulous space between the conventional use of assertions and full-blown formal verification. Doing property-based testing well involves thinking carefully about the logical properties that your code relies on and adheres to, which I think is usually a good investment.
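
      A hand-rolled sketch of the idea in C (no QuickCheck-style library, just random inputs plus a property that must hold for all of them; here the property is that qsort’s output is non-decreasing):

          #include <assert.h>
          #include <stdio.h>
          #include <stdlib.h>

          static int cmp_int(const void *a, const void *b) {
              int x = *(const int *)a, y = *(const int *)b;
              return (x > y) - (x < y);
          }

          int main(void) {
              srand(42); /* fixed seed keeps any failure reproducible */
              for (int trial = 0; trial < 1000; trial++) {
                  int a[100];
                  int n = rand() % 100;
                  for (int i = 0; i < n; i++) a[i] = rand() % 1000 - 500;
                  qsort(a, (size_t)n, sizeof a[0], cmp_int);
                  /* property: for every random input, the output is sorted */
                  for (int i = 1; i < n; i++) assert(a[i - 1] <= a[i]);
              }
              printf("1000 random cases passed\n");
              return 0;
          }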

  5. I use frameworks like Spring Boot, and with dependency injection it’s sometimes hard to figure out why I get a result that I did not expect. So I use the debugger to get a feeling for what is called, how that thing has been wired together, and where the part that actually produces my result is. That helps me.

  6. When showing disregard for debuggers, people usually make very serious assumptions.

    1. Debug printfs are better than a debugger. In reality, it’s easier to set up a debug printf inside a debugger than inline. So the distinction “debug printf vs. debugger” is a false one.

    2. It’s better to know and understand your code. For sure it is, but frequently you just cannot. Imagine code of poor quality, written by strangers as a quick, hacky proof of concept and developed for the next ten years in the same manner. You don’t have infinite time to build a mental model of the code, and finding the flow is hard in itself, so a debugger greatly speeds up the process by simply showing the call stack and the state of local variables. Or imagine an implementation of a simple graph algorithm – without drawing it on a piece of paper it’s hard to grasp what’s going on, even if it is short, well structured, with comments and unit tests and functional tests. It’s easier to use a debugger with conditional breakpoints to figure out what to write with pen and paper.

    That’s the same argument mathematicians made with regard to mathematical proofs: I don’t need any tools other than those from primary school to do maths. And then came the four color theorem proof.

    3. It’s better to have tests. But what if there’s a bug without a test for it? In the case of the Linux kernel it’s relatively easy to use the sheer power of your mind, especially when you’re an authoritarian who has seen all the changes. Try to do the same with the UI code of Chromium, with hundreds of committers and frequent serious redesigns.

    What if the tests are unstable? What if the tests started failing for two different, at first glance unconnected, reasons, and one of the reasons was introduced into the code a few months ago in a place you’re not familiar with?

    Again, it’s false dichotomy.

    4. I own the code, thus I have the obligation to understand it now and for the future. But what if you have to understand the minified JavaScript of a random webpage without contacting its authors?

    5. In some cases some people don’t need debuggers, thus debuggers are superfluous. Those people are famous, so they have to be right. Typical demagogy.

    6. Editor and debugger are distinct environments. Well, no, an IDE contains both an editor and a debugger.

    7. I dislike Microsoft so I don’t consider Visual Studio debugger as a decent tool just because.

    8. I mainly program in languages whose debuggers are inferior, or on operating systems on which debuggers are inferior, thus all debuggers are unusable.

    1. It seems that a lot of your argument is that using tools can make you more productive. I agree with that.

      Some comments…

      Tools are not neutral. Some systems discourage good practices like clean maintainable code and systematic testing. Getting people to use tools to cope with crappy code instead of having them fix the crappy code is not a net win.

      It’s better to have tests. But what if there’s a bug without a test for it?

      Then write one. I sure hope you are not trying to fix bugs without systematic testing. It is not 1985 anymore.

      I won’t blame you for using a debugger… but I will certainly blame you for working without systematic testing.

      Try to do the same with the UI code of Chromium, with hundreds of committers and frequent serious redesigns.

      I sure hope that Chromium is not held together by people running the code line-by-line in a debugger.

      Those people are famous, so they have to be right.

      That’s not my argument. My argument is that if these highly productive programmers do well without debuggers… then it tells us something. Short answer: debuggers are not required to be highly productive. Note that it does not tell us that using a debugger prevents you from being productive. Donald Knuth, for example, uses debuggers.

      I dislike Microsoft so I don’t consider Visual Studio debugger as a decent tool just because.

      I am unclear why Microsoft keeps popping up in this thread. I don’t think I mentioned Microsoft in my post. It is irrelevant.

      I mainly program in languages whose debuggers are inferior, or on operating systems on which debuggers are inferior, thus all debuggers are unusable.

      I am not sure what these languages or systems are. JavaScript used to have poor support for debuggers, maybe that’s what you have in mind, but that has changed in recent years. It is fairly easy to use a debugger with JavaScript today.

  7. I would like to know how those people find memory leaks in a monstrous piece of very bad code that has been maintained by other people for 10 years (and of course you’re unable to use precious things like valgrind and others).

    I make limited use of debuggers, but there are cases where they are just useful.

  8. The article argues correctly but is wrongly labeled. It is in almost all cases correct not to use a debugger… to single-step through code. But there are plenty of other things debuggers do. The most important one for me may be the ability to add printfs into an executable while it is running or even after it has failed. In the case of hard-to-trigger bugs, that may save crucial time that is otherwise spent on reproducing the failure. I also often use the debugger as a disassembler of the important parts, to check whether the compiler actually agreed with me that some abstraction was zero-overhead or that some optimization might be a good idea. (A rough sketch of both uses follows at the end of this comment.)

    That said, most languages come with debuggers worse than those of Turbo Pascal, so spending time to learn to use one may be wasted effort nonetheless.
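
    Roughly what those two uses look like in gdb (the process id, file, line and symbol names below are made up for illustration): dprintf plants a printf in the program without recompiling, and disassemble shows what the compiler actually emitted for a given function.

        (gdb) attach 12345
        (gdb) dprintf parser.c:214,"state=%d depth=%d\n",state,depth
        (gdb) continue
        (gdb) disassemble compute_sum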

  9. I feel bad for your students. I agree that most simple bugs can be detected just by thinking about what happens, what should happen, and how the code looks. And I also agree that there are cases where it is really hard to use a debugger.

    But: all in all, not using a debugger and saying it is useless is one of the dumbest things I’ve heard in a long time. I’ve seen people “not using a debugger” – and if they knew how to use a debugger properly they could save so much time.

    And for the println/assert guys:
    – asserts are just a poor man’s unit test. You wouldn’t need them if you had proper unit tests.
    – using logs to monitor a running system is fine. But using println to find a bug on your local system is the most inefficient way to do it. And if you teach your students such nonsense they will have a pretty hard time in their professional life.

    1. Debuggers tend to become decreasingly useful, especially if you have high-volume, high-throughput systems that are full of async code. Debuggers are pretty useless here, since the debugger state usually doesn’t match up with reality.

      We tend to have a lot of error checking and asserts in code, since they tell us we have a real problem. We use debuggers, but mainly as a tool to catch certain asserts (which don’t always directly show the root cause), and as a means to see state at a certain point, without having to dig through a logfile that spits out 4000 lines of debug info for every request.

      It’s also pretty difficult to unit test bugs that happen to pop up very infrequently, and only occur in certain situations, when certain messages arrive in a certain order. I’ve had asserts catch more than the unit tests.

    2. In my (limited) experience, unit testing and asserts lie on almost orthogonal dimensions: you cannot substitute one for the other. Suppose you are developing a CUDA kernel, say for some linear algebra algorithm: a few well designed asserts can catch bugs that are very hard to even notice by unit testing. On the contrary, without unit testing, broken but “formally correct” portions of code do not trigger any assertion failure…

  10. I agree with what you say. I tend to use debuggers as glorified print statements. With a debugger, I don’t need to write a print statement for every variable I need to watch; all variable values are available when a breakpoint is hit.

    One important piece of information that a debugger offers is the stack trace, which is very useful, especially when debugging event-based software. Stack traces are hard to obtain with simple printfs.
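
    That said, in C on glibc you can get an approximate stack trace programmatically with backtrace(), which combines well with printf-style logging. A minimal sketch (link with -rdynamic so the symbol names are readable):

        #include <execinfo.h>
        #include <stdio.h>

        /* glibc-specific: print the current call stack to stderr. */
        static void print_trace(void) {
            void *frames[32];
            int n = backtrace(frames, 32);
            backtrace_symbols_fd(frames, n, 2); /* 2 = stderr */
        }

        static void inner(void) { print_trace(); }
        static void outer(void) { inner(); }

        int main(void) {
            outer();
            return 0;
        }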

  11. I also stopped using a debugger, over 15 years ago. The reason for this is that debuggers are not that useful if the program being debugged is multi-threaded. Instead, logs are useful. And once you start down the path of logging, there’s almost no need for the debugger. Plus, logs are universal, but your favorite debugger feature might be available only in gdb and not in the Windows or embedded-system debugger.

    I’ve worked on a few bigger projects which have the printf()s embedded all the time in what could be called ‘permanent instrumentation’. In these projects it was possible to build a debug version of the program which contained all printf()s for all levels of verbosity. And you could also build a non-debug version which had most of the printf()s stripped out. The debug versions also supported special function entry and exit printf()s in order to format the resulting log hierarchically, showing something similar to the stack trace but for the entire run-time call tree, i.e. you can see the last assert in the log, see the printf()s before it, see the called function, and all the other called functions before it.

    As a side note, I’ve also developed techniques in the past to automatically instrument C functions with entry and exit printf()s. It’s a useful technique to comprehend a new source base faster. For example, I tried it on WebKit which was about 500 MB of source code and just too big to look through! I also have a prototype somewhere for automatically instrumenting projects without changing one line of source code or make file. I remember it worked pretty well on the NGINX and Tor code bases. It worked by pretending to be the compiler (e.g. CC=secretly_instrument make) and then during the compile of the debug version it would secretly generate assembly output, and change that assembly output so that each function had a shadow function and called through a vector. This was a bit clunky and only worked for Intel CPUs, but was an easy way to generate the run-time entry and exit printf()s without doing any work.

    Anyway, once the permanent instrumentation is available then there are special benefits, especially in a team environment. For example, if a unit test stops working or becomes flaky, just compare the debug log of the last working version with the debug log of the failing version. 9 times out of 10 it’s completely obvious what the problem is after doing the comparison, and this is generally much faster than firing up the debugger and single stepping etc. And if log info is missing then just add more instrumentation. And for new developers joining a team, the permanent instrumentation gives them a new way of comprehending the code base in addition to reading the source code; they can read the human-friendly debug logs and see which function calls which other function, the main flow, etc. When you start programming like this you soon start to realize that the permanent instrumentation mostly just replaces comments, but is way more useful than comments.

    It’s sucky, however, when you get used to this technique and then try to use it in higher-level languages. Why? Because the more permanent instrumentation is added to a code base, the slower it generally runs. In C it’s possible to work around this issue in the non-debug version by using pre-processor macros to drop, at compile time, the printf()s whose verbosity is too high (see the sketch at the end of this comment). In this way the non-debug version of the program is not impacted at run-time by constantly executing the equivalent of if($current_verbosity > $this_verbosity){ printf(…); }. Also, the physical size of the source code is not impacted… which can have an effect on performance too. However, most scripting languages do not have an equivalent mechanism, and who wants to use the C pre-processor with PHP? Not me 🙂 Plus it’s difficult to retro-fit an existing source base with the C pre-processor.

    So what can you do? You can use Perl, which has a feature called ‘constant folding’ built in. That means if you write if(MYCONSTANT){} and MYCONSTANT is zero then the entire if() statement never gets compiled. In this way you can add as much permanent instrumentation to a Perl script as desired, and if you switch off logging then the Perl script will run as fast as if it had no logging; there are no if() clauses getting constantly executed under the covers.

    However, recently I wanted to add permanent instrumentation to PHP. So what to do in this case? PHP doesn’t have any equivalent to constant folding, so the more if(verbosity){printf()} statements I add, the slower the PHP scripts run, not to mention that the run-time byte code gets fatter 🙁 In the end I created a simple FUSE file system which duplicates one source tree into another one called the debug tree, e.g. ./my-source/… and ./my-source-debug/… . If a program loads a PHP script from the debug source tree then under the covers the FUSE program loads the source code from the other source tree, but before delivering the source code, it ‘uncomments’ permanent instrumentation lines. In this way the permanent instrumentation lives in the source code as regular comments and is therefore guaranteed not to cause performance issues at run-time for the non-debug version of the code.

    Using no debugger, permanent instrumentation, unit tests, and enforcing 100% code coverage, it’s possible to create programs with remarkably few bugs in them. I recommend this technique.
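
    A minimal sketch of the compile-time verbosity gating described above (the macro names are made up; the real projects each had their own conventions). Build the debug version with -DLOG_LEVEL=2; the default non-debug build sets the level to 0, and the calls compile to nothing:

        #include <stdio.h>

        #ifndef LOG_LEVEL
        #define LOG_LEVEL 0              /* non-debug build: all logging vanishes */
        #endif

        #if LOG_LEVEL >= 1
        #define LOG1(...) fprintf(stderr, __VA_ARGS__)
        #else
        #define LOG1(...) ((void)0)      /* compiles to nothing */
        #endif

        #if LOG_LEVEL >= 2
        #define LOG2(...) fprintf(stderr, __VA_ARGS__)
        #else
        #define LOG2(...) ((void)0)
        #endif

        static int add(int a, int b) {
            LOG1("add: enter a=%d b=%d\n", a, b);  /* entry instrumentation */
            int r = a + b;
            LOG2("add: result=%d\n", r);           /* higher-verbosity detail */
            LOG1("add: exit\n");                   /* exit instrumentation */
            return r;
        }

        int main(void) {
            printf("%d\n", add(2, 3));
            return 0;
        }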

  12. I think the new trend is to write good unit tests with high coverage, plus having good logging statements in case something goes wrong and you want to find out where the problem occurred. The logs are the new standard in big companies for figuring out where the problem happened. This is especially useful in today’s big server software where you cannot really step through the code.

  13. I once watched a pro using the Visual Age for Java (that was 2001…) debugger… wow, just wow! I think you could use the debugger in VA to **write** code which was then loaded into the VM and executed. So basically you could step into your empty function, see what you have and then write the next line and step one step below and so on.

    I agree with the “think about the design of your code” stuff and using unittests and logging. But regarding prints vs debugger, if you know how to use a debugger, you get a much better overview and are much faster than (iteratively) adding print statements. Especially if you use unittests together with a debugger: you get the bug nicely isolated and can then run through the internals. Usually, when I come to a hard to debug problem, I’m often annoyed that I started with print statements and didn’t go the debugger route directly.

    [Note: I have just used interpreter-based languages like Java, Python, and R, not C, and I use only the fancy GUI-based debuggers in graphical IDEs]

  14. Debuggers are not an alternative to good code design, it’s not about “using a debugger” OR “writing good quality code”. You can (and should) do both. Tools are here to save time and help you find issues you didn’t even think of, they are no substitute to best practices.

    You are talking of line-by-line interactive debugging, but that’s only a small part of the picture. Modern debuggers can:
    – be started non-interactively, without GUI, and generate bug reports
    – can rely on data provided by version control
    – can be fully integrated within continuous integration tools
    – etc.

    In my company, we use debuggers. That doesn’t mean we waste our time with tools or botch our applications. Quite the opposite. We use them every day without even noticing because we have seamlessly integrated them in our development workflow.

    Also, keep in mind that all developers are not experts. Some people (e.g. students) are beginners and debugging tools can really help them understand what they are doing.

  15. @Patrick

    Elsewhere on this blog, I have repeatedly promoted the use of better tools. I am definitely not a fan of working on “raw” code as if compilers had just been invented and we had nothing else to go by.

    I think that my blog post is clear on what I mean by “debugger” which is interactive line-by-line execution. I specifically say that I use tools that are called debuggers. But I do not use them as debuggers.

  16. I think your article stands on a semantic issue only. What is a debugger, other than a tool that displays values at certain points in your program? The step-by-step line execution is not what defines it. Even when software scales so much that it is impossible to step through it line by line, as in the case of parallel programming, you still use a debugger to check states at a certain point. As such, using print statements or placing a breakpoint and then checking what values are at that point seems to be the same thing.

    But maybe you are on to something. I would find value in a hybrid approach that just appends bits of code in certain places of a program. With this system, pausing the execution and then displaying some user interface would just be a piece of code you appended, but not part of the source code. If you want to log the values in some part of your program, you can add logging code in this way. Food for thought.

    1. I think your article stands on a semantic issue only.

      I don’t think it is only semantics. I used to debug my programs with breakpoints and line-by-line execution (15 to 30 years ago). I think it is quite common still.

      I think that my use of the terminology is common and clear from my post. Here is what Linus wrote:

      I don’t like debuggers. Never have, probably never will. I use gdb all the time, but I tend to use it not as a debugger, but as a disassembler on steroids that you can program.

      As such, using print statements or placing a breakpoint and then checking what values are at that point seems to be the same thing.

      Exactly. That is, if you define “using a debugger” broadly enough, then everyone uses a debugger and the statement is void of meaning.

  17. First of all, I totally agree that rethinking code and adding tests/assertions are much more productive in the long run than debugging.

    The thing I want to clarify is what exactly makes “stepping line-by-line through code” bad. I think there are three big problems: you need to know beforehand where to place an initial breakpoint, the debugger shows variable values/call stack only for a single point in the program lifetime, and it takes time to get to this point.

    “Print statements” are better as they create a log showing program state in multiple points and it is faster than stepping through code manually.

    Personally, I rarely use print statements either as there are tools that can show program execution details without manual instrumentation. (Like my Runtime Flow tool for .NET and other similar offerings for Java.)

  18. I am pretty much in the “all-the-above” camp. I use assertions. I build tests. I think carefully about what I write. And I use debuggers.

    But not all the code I use is mine, or as carefully done. I do not know all the possible state transitions.

    In a prior project, for my code to work, all the other code incorporated in the project had to work. This included code from other developers (who were not as careful), and boatloads of third-party libraries. (This was Java code, and the group had unfortunately chosen to use the Spring framework – which is full of unexpected behaviors.)

    Using a debugger (in the Eclipse Java IDE) was an enormous productivity boost. When running tests (quite a large set towards the end of the project), an error occurs. Perhaps there is a stack trace (often but not always useful). Using log entries to narrow down the scope of the error, place a breakpoint. (Ideally on a path only taken in the error case. If possible, introduce such a path.) Re-start the test(s).

    Sometime later – possibly minutes or hours – the breakpoint is hit, and the IDE is showing the point of error (or near). Often inspection reveals the error, update the code, and Eclipse IDE can do in-place replacement of the code. Resume the test.

    When you are using long running tests, and failures are somewhat non-deterministic, this approach is an enormous boost.

    That said, I used a debugger much less often on the last project (Python code with OpenStack), and could primarily rely on logging. Depends on the problem.

    1. I agree that there are cases where using a debugger with breakpoints is the right thing to do. I also agree that IDEs like Eclipse can be great, sometimes. What you describe does not sound like fun however:

      Sometime later – possibly minutes or hours – the breakpoint is hit, and the IDE is showing the point of error (or near). Often inspection reveals the error, update the code, and Eclipse IDE can do in-place replacement of the code. Resume the test.

      I think that people like Rob Pike are telling us that there ought to be a better way.

  19. If you work alone, maybe;
    but working in any company you will have code that has existed for a long time, and you will even forget your own code; a debugger helps you pinpoint the problem much faster.
    You need to write good code and know where the problem is, but it’s an extra tool that can help, and you’re as good as your tools.

    1. you’re as good as your tools

      I do believe that there is a lot of truth in this.

      you will have code that has existed for a long time, and you will even forget your own code

      Linus Torvalds does not use a debugger. Do you have code that is qualitatively larger, older and more complex than the Linux kernel?

      If you work alone, maybe

      There are at least 300 developers involved in any new version of the Linux kernel. How big is your team?

  20. Agree with the points others have raised. Even as someone with a bit of a reputation among my colleagues for spotting bugs via inspection during code review (and also for writing understandable code myself), I still find a debugger so valuable and time-saving on occasion that I tend to avoid environments where running a debugger is impossible.

    The central problem is, no matter how good your own designs and code are, developers are not islands anymore. In any project of a reasonable size, your own code is only a tiny fraction of what’s happening. The rest of it is your coworkers, the language’s core libraries, and numerous third-party libraries of varying quality – both in terms of correctness and how easy the code is to read. The former you can’t change, and for the latter, you might submit a fix for a third-party lib, but you’re probably not going to invest in cleaning up the architecture. There’s just no substitute for stepping through JRE classes when something really unexpected is happening.

  21. Interestingly, those people you cite probably hardly ever work on other people’s code (or other people’s libraries).

    For the rest of us, debuggers are excellent tools for figuring out how the legacy software that we’re maintaining works, or to report bugs in third party libraries.

  22. Linus’ coding practices are not to be commended, especially after the recent Git security vulnerabilities, which were completely predictable given the software development process that is typical in the Unix/C orbit. There’s a certain steampunk, almost Luddite, approach to software in Unix, and the debuggers are pretty bad.

    The reason Microsoft keeps coming up here is that Visual Studio’s C++ debugger is widely regarded as the best in the world. (I’d expect the same to be the case for their C# debugger, but they have less competition there.)

    Putting together some recent threads, here and elsewhere, I think there’s a real deficit in the academic CS community, a stunning lack of awareness of large swaths of present-day computing. There’s little awareness of Microsoft tools, Windows, and related file systems, languages, and so forth. This is compromising the quality of CS research – the community is far too homogeneous and conformist with this whole Unix and C thing. And it leads to things like advice to not use debuggers, based only on this steampunk Unix/C paradigm. I feel like this monoculture slows progress in computing. Separately, software security is in a devastating state right now, and we need much more powerful tools to deal with it than C and its printf flintstones. We need much richer debuggers with all sorts of modeling methods.

    1. Linus’ coding practices are not to be commended, especially after the recent Git security vulnerabilities, which were completely predictable given the software development process that is typical in the Unix/C orbit.

      There are more computers running the Linux kernel than computers running any other operating system. Linux is the foundation of our Internet…

      Putting together some recent threads, here and elsewhere, I think there’s a real deficit in the academic CS community, a stunning lack of awareness of large swaths of present day computing. There’s little awareness of Microsoft tools, Windows, and related file systems, languages, and so forth.

      I think that the vast majority of CS academics are Windows users. The vast majority of CS courses are prepared on Windows for Windows users. Probably, in second place, you find Apple technology, followed, far below, by Linux technology. Though I use Linux machines (as servers), I haven’t used Linux as my main machine in a decade or so. I know a few exceptional CS professors who use Linux as their primary machine… but they are few.

      It is rather in industry that you find the most massive Linux adoption… at key corporations like Google, Amazon and so forth. Academics are not behind the sustained Linux popularity. In fact, if you go to a random CS school and pick a random CS professor, chances are that he won’t be able to tell you much about modern Linux development tools. Most academics have had nothing to do with cloud computing and have never set up a container, assuming they know what it is.

  23. Great post! As an engineer, I always thought that using prints to debug code was an amateur thing. I hardly use debuggers, only when I cannot figure out the problem by investigating the prints.

    Greetings from Brazil.

  24. Thank you!

    Just stumbled upon the sentence “step-debugging is one of the key skills for any developer” somewhere and wanted to see whether there is anyone else but me who is missing this “key skill” and so arrived at your post.

    My history is quite similar to yours (BASIC at school back in the 70s, Turbo Pascal, then C++ and now mostly Java).

    Very much as you said, I actually stopped using a debugger when I (or rather my code) started to get seriously multi-threaded. For debugging I use assertions, sometimes only “soft assertions”, i.e. simply error-logging when some expected condition does not hold, and logging/tracing. And yes, I frequently read and work on code produced by others, in particular when taking over after someone has left the company. To understand others’ code, ‘grep’ and ‘etags’ are much more helpful tools for me than a debugger.

    Stepping, breakpoints and watches usually don’t provide more information than logging statements do, but logging statements provide a history of the program execution and are still there when some error occurs at later stages of the program’s life cycle, i.e. in production. When talking about logging/tracing I refer to configurable logging tools, not printing to stdout (like log4j/slf4j rather than System.out.println in the context of Java).

    Actually, I once was in a situation where even logging didn’t help, because the error disappeared whenever even the least logging statement was inserted – obviously a tight race condition. In this case there was no way other than debugging in my mind, and it actually took me two days of mentally stepping through the code to find the bug.

    Currently I only use a debugger when debugging Lisp (Elisp) code, but then, exactly as you analyzed, it is a single-threaded situation with small debugging objects (code units with at most a few functions).

  25. I disagree. One says that a debugger doesn’t scale, but logging (print is a primitive form of logging) does. Now, your 15 million lines of code would be only 10 million without the extensive logging you had to do for lack of a debugger. Plus, searching for something in the agglomeration of messages turns out not to be scalable either.

    To me, a debugger is a great tool, providing the capability of inspecting at once, at a given moment, all the local variables, the stack, or the members of a class.

    In my career I’ve debugged C, C++, -O3 C++, Java, Python and assembler. If an IDE doesn’t have a debugger, it does not qualify for my attention. Not knowing how to use a debugger (as well as a profiler) is a red flag.

    1. One says that a debugger doesn’t scale, but logging (print is a primitive form of logging) does. Now, your 15 million lines of code would be only 10 million without the extensive logging you had to do for lack of a debugger.

      Must debuggers and logging be our primary tools to debug problems?

  26. 15 million lines of code for a kernel sounds like the most mismanaged project in the Solar System. I wouldn’t give a cent for that code, the binaries, nor 100 hours of the one managing such nonsense.
