The software industry is probably the most dynamic and innovative of all industries. However, many people also try to convince us to adopt new ideas despite their dubious practical value.
So what are the ideas that stick… ideas that are genuinely good and important?
Here is my current list:
- Structured programming;
- Unix and its corresponding philosophy;
- Database transactions;
- The “relational database”;
- The graphical user interface;
- Software testing;
- The most basic data structures (the heap, the hash table, and trees) and a handful of basic algorithms such as Quicksort;
- Public-key encryption and cryptographic hashing;
- (new:) High-level programming and typing;
- (new:) Version control.
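To make one list item concrete: Quicksort fits in a few lines. This is a teaching sketch (out-of-place, first element as pivot), not the tuned in-place variant real libraries ship:

```python
def quicksort(xs):
    # Recursive, out-of-place quicksort: partition around a pivot,
    # then sort each side. O(n log n) on average.
    if len(xs) <= 1:
        return list(xs)
    pivot, rest = xs[0], xs[1:]
    left = [x for x in rest if x < pivot]
    right = [x for x in rest if x >= pivot]
    return quicksort(left) + [pivot] + quicksort(right)

print(quicksort([3, 1, 4, 1, 5, 9, 2, 6]))  # → [1, 1, 2, 3, 4, 5, 6, 9]
```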
Let me put it this way: if you were to meet a master of software programming, what are you absolutely sure he will recommend to a kid who wants to become a programmer?
Am I missing anything important?
98 thoughts on “What are the genuinely useful ideas in programming?”
Rather than recite your current list, the Master should use the kid’s intuition to help the kid realize the significance of those items. Von Neumann considered compiling clerical work beneath his prized machines; compilers are significant today. By the time the kid gains enough insight to become a Master, the kid’s version of your list is unlikely to overlap much with yours, since some items are still undergoing refinement.
Types (whether static or dynamic) cannot be ignored.
First-class functions, lexical scoping.
Universality (subsumes compilers, interpreters, virtual machines).
I’d second the suggestions of garbage collection, version control, and compilers (including JIT compilers). And, if you like those ideas, I’d consider adding debuggers and profilers, since, without those tools, a programmer is crippled.
But there is some question of what your goal is. There are concepts that are important in computer science (such as algorithms and complexity analysis) that are less important to programmers (who might be more concerned with productivity and tools that enhance it).
The Liskov Substitution Principle.
Separation of concerns.
I’d say design patterns and anti-patterns.
The IP suite in networking and regular expressions could both be seen as components of Unix, but they may be worth calling out separately.
Is GC really still controversial? I thought at this point the consensus had settled long ago on “garbage collection is great to have in almost all cases, except for systems that must be extremely performant or predictable, or embedded systems”.
We could perhaps add to the list a few ideas borrowed from functional programming: first-class functions (perhaps subsumed under “structured programming”), closures, and recursion.
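A closure, for instance, is just a function that carries the environment it was created in. A minimal Python sketch:

```python
def make_counter():
    count = 0
    def counter():
        nonlocal count  # the inner function closes over `count`
        count += 1
        return count
    return counter

c = make_counter()
print(c(), c(), c())  # → 1 2 3
```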
You also forgot “typing”. Whether it be strong or weak, static or dynamic, declared or inferred – the languages in use now all have *some* form of typing. We’re a long way from assembler (“everything is bits! Hurray!”) or macro-substitution languages (“everything is a string! Hurray!”).
High-level languages (and therefore compilers/interpreters). And by high-level I mean “not machine code”.
I’d like to add 1 more:
Getting familiar with good debugging tools. Don’t just rely on logs if there are good debuggers for step-by-step troubleshooting. If no such tool exists for your application (e.g., doing MPI programming on a supercomputer), try to create a test environment for quick verification before submitting the jobs.
One more: getting familiar with one or two scripting languages (Python/Perl are fine too).
This is important when evaluating performance with different parameters. Design your programs to accept parameters, and write a script to run through all of them automatically.
Basically, let computers do the routine jobs for human beings.
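That workflow might look something like this sketch (the `./solver` program and its flags are made up for illustration):

```python
import itertools

# Hypothetical sweep: build one command line per parameter combination,
# then hand them to the shell or the job scheduler.
sizes = [100, 1000, 10000]
threads = [1, 2, 4]
commands = [
    ["./solver", "--size", str(n), "--threads", str(t)]
    for n, t in itertools.product(sizes, threads)
]
for cmd in commands:
    print(" ".join(cmd))  # or subprocess.run(cmd, check=True)
```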
I can’t believe you missed object oriented programming. Other essentials:
– asymptotic complexity
– finite state machines
– Gödel’s Theorem
Object-oriented programming is mentioned in my post. I use object-oriented programming daily, but I also think it has been overhyped.
Can you elaborate on how Gödel’s Theorem is useful for programmers?
I definitely mean “programming” and not “computer science”. That’s why I did not include complexity analysis.
Ideas that are useful to computer science would be a distinct question.
API docs, though there’s a difference between using API docs and writing them.
Benchmarking perhaps? I wouldn’t call complexity analysis necessary in many cases, but simple benchmarks are useful.
Database concepts as a cluster. In a sense, a database is just arrays and hash tables, potentially much larger than memory, and persisted. Transactions add another layer of value. Any given use may not need all the attributes of a database while still being recognizable as database usage.
Data in algorithmically smart storage? Need a more concise description.
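Python’s built-in sqlite3 module shows both layers, keyed storage plus transactions, in a few lines; here the `with conn:` block either applies both updates or neither:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 0)")
conn.commit()

# A transfer is atomic: either both updates happen or neither does.
try:
    with conn:  # the `with` block commits on success, rolls back on error
        conn.execute("UPDATE accounts SET balance = balance - 30 WHERE name = 'alice'")
        conn.execute("UPDATE accounts SET balance = balance + 30 WHERE name = 'bob'")
except sqlite3.Error:
    pass  # on failure, both updates are rolled back together

print(dict(conn.execute("SELECT name, balance FROM accounts")))  # → {'alice': 70, 'bob': 30}
```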
Gödel: over the years, I’ve repeatedly bumped into programmers and managers who have undertaken efforts to build code to solve undecidable problems. Not quite grasping decidability, they assume that adding a new programming team or spending another month will solve a problem that cannot be computed. Knowing that there are undecidable problems as well as intractable ones is invaluable.
Also drop “graphical” from user interface design. This is an entire area of expertise, not just the 2D bits on a screen.
If I added anything to the list it would simply be ‘abstraction’.
Something about concurrency primitives: locks, critical sections, atomic interlocked instructions, etc.
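The canonical motivating example is a shared counter: the read-modify-write must happen as one unit. A minimal sketch with Python’s threading.Lock:

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        with lock:  # critical section: read-modify-write as one unit
            counter += 1

threads = [threading.Thread(target=increment, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # → 40000 (without the lock, the result can come up short)
```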
Code is read many times, but written only once.
The importance of writing good comments with your code.
Amazing that nobody else mentioned this yet.
Understanding how the target hardware works (high-level is fantastic, but does not replace understanding the basement).
I would add “Open source philosophy” too.
And regarding OOP, the most important thing is encapsulation, then encapsulation, then encapsulation 🙂; the other stuff is overhyped.
Also, regex is a nice example of declarative programming.
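For example, a regex states *what* to match and lets the engine decide *how*:

```python
import re

# Declarative: describe the shape of an ISO date, not the loop that scans for it.
dates = re.findall(r"\b\d{4}-\d{2}-\d{2}\b", "released 2011-05-12, patched 2011-06-03")
print(dates)  # → ['2011-05-12', '2011-06-03']
```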
Inspired by this post, I wrote up a quick list of things I feel every software engineer should aim to know:
This is a bit off from your original question, but the motivation is similar.
A few things I’d add specifically to your list above:
– modularity, elegant simplicity (particularly in interface design)
– debugging & test amplification via assertions and other internal self checks
– concurrency, dependence, happens-before, races, and related pieces
– design patterns, and their evolution as encoding of best practice at the current time
– adversarial thinking (for both security and debugging)
Using the Wizard book to paraphrase some of the other comments here…
+ Primitive Expressions
+ Means of Combination
+ Means of Abstraction
Maybe, also, the notion that programs can themselves be treated as data.
I might be tempted to generalize “Database transactions” to just “transactions”, but that’s a minor point.
More importantly, I don’t see anything that mentions time/interactivity in your list. This is the big thing that’s missing from foundational models, like Turing machines and the lambda calculus. Software is about more than functions (in the mathematical sense); it is also about interaction with an environment. As a bullet-point summary of these ideas, I nominate “event handling”.
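A minimal sketch of the idea: callbacks registered against event names, fired when the environment emits the event:

```python
# Minimal event dispatcher: handlers are registered per event name
# and invoked when the environment emits that event.
handlers = {}

def on(event, callback):
    handlers.setdefault(event, []).append(callback)

def emit(event, payload):
    for callback in handlers.get(event, []):
        callback(payload)

log = []
on("click", lambda pos: log.append(f"clicked at {pos}"))
on("click", lambda pos: log.append("second handler"))
emit("click", (10, 20))
print(log)  # → ['clicked at (10, 20)', 'second handler']
```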
Elegance (as in an elegant theory), and clarity. Two distinct but strongly related notions. Elegance lies in finding the simplest workable solution. This argues *against* unneeded abstractions (the bane of the Java world) and overdone encapsulation.
Clarity in expression means a reader of your code, coming along later, can readily follow what the code does. Clarity of expression requires clarity in how you think about the problem. If your thinking is unclear, your code will be unclear, and likely contain bugs.
Note that this argues somewhat against comments. Clearly written code does not, for the most part, require comments; some exceptionally tricky sections require almost an accompanying essay.
I think this list is not defensible. I, for example, may argue that metaprogramming and the object-oriented approach are essential, but other people would critique me and say we are better off using functional languages (this is just an example).
It is really hard to define what basic is.
I think it all keeps changing. What was basic yesterday is not basic today. You can create a list by a majority voting, but the results would depend a lot on who votes (which country, etc).
My own list may not be defensible, I will grant you that… but the question still makes sense.
There are things that are universally accepted, such as (principled) testing or the use of compilers.
When something is still up to debate, maybe it is an indication that we don’t know if it is really a good idea.
Write less code. It helps tremendously in all aspects, but most of all in quality control and testing.
Mirroring & Simulation of a Machine Mind Collective
Self adaptive software as well as…
Quality of Service (Resource Management)
Activity Based Metering & Costing
I didn’t see anyone mention regular expressions.
Documentation is also important. At the very least knowing how to use tools to automatically generate documentation from code is essential in a modern professional environment.
Murphy’s Law, which arguably derives from Gödel’s theorem. And so does the general principle that things eventually break, including encryption, and that security only works up to a certain point.
It is very important for programmers to know the limitations of what they’re trying to put together, and take them into account from the beginning.
The keys to good programming are a firm understanding of lambda calculus and Symbolic logic (including boolean simplification.) If you understand that, then you understand what led us to make computers in the first place. The rest is all syntax and a set of good/bad practices to either follow or avoid.
I’ve taught kids in various age ranges to program, and really the first thing to do is demonstrate to them that they can make the computer do something without a lot of frustration. Get them past that first couple of exercises of hello world, basic input/output and conditional logic and they can do a lot of things they didn’t think was possible. For this I recommend Scratch and Inform 7.
For older kids that have already made the jump to abstract thinking (basically high-school or above with some pre-algebra or better) then learning to program by example is usually sufficient. The concepts sink in as they put them to use. In the professional world we seek out examples to illustrate concepts all the time, even for stuff we’re already familiar with. Learning how to seek out good examples is a valuable skill.
I know at this point you’re probably scoffing that I’ve said very little of computer science theory. That’s because in real-world programming you rarely use 75% of it unless you’re really in a pickle, or unless you do very hard-core stuff. Most programmers, especially web programmers, spend most of their time writing simple conditional logic and concatenating strings. It sucks, but it’s true.
It is great to have these tech skills down, but the greatest asset is understanding the business of the customer. Meet the customer’s need and they will love you, even if you need to ask a tech guru for help putting the application or system together. I found in my 20+ year career that this one skill opened more doors than knowledge of library XYZ, language ABC.
I think your list is very much aimed at the 90%; you’ve missed critical low-level things like memory layouts or even assembly. A lot of us are still down in the ditches. You also missed security (buffer overflows, SQL injection, etc.).
I would argue cryptography is next to useless; unless you’re dedicated to learning it (and that’s a harsh mistress) then all you really need to know about it is the concept of a public & private key.
Knowing how quicksort works isn’t that helpful; you should instead know about templates (or the equivalent in other languages) and use prebuilt quicksort implementations. Also, Unix and its design philosophy are great for working on Unix systems, but Windows and mobile are both more important by the numbers.
I don’t think you can possibly create this list. I think a more helpful list would be metaprogramming techniques, like modularization, separation, etc.
FSM or Turing Machine. Big O notation.
I would second abstraction: from the original abstractions, the MACRO Assembler language, to the control structure abstractions that arrived later (while, for…to, until) and the functional and object oriented abstractions. Database systems in abstraction are applications of set theory.
The buffer overflow. *The* most useful concept in understanding how to get a computer to do something beyond what it was intended to do… which opens oh so many doors, especially in one’s mind.
That different hardware architectures exist. You can point out Harvard vs Von Neumann, you can point out endian-ness of integers, but especially point out how important it gets when working with one piece of code for multiple devices.
Floats and fixed point variables. Every coder should know why you don’t store dollars and cents as a floating point type. Depending on how the hardware represents it, adding a dime to a wallet might not add up to the right number every time and for all values that might start in the wallet. Understanding Typing is fine, but knowing why you need to know about Typing objects can be better, I think.
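The classic demonstration, with the usual integer-cents fix:

```python
# Binary floating point cannot represent 0.10 exactly, so dimes drift:
total = 0.0
for _ in range(10):
    total += 0.10
print(total == 1.0)  # → False
print(total)         # → 0.9999999999999999

# Store cents as integers (or use decimal.Decimal) instead:
cents = sum(10 for _ in range(10))
print(cents == 100)  # → True
```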
Operating System function calls and the associated mutex requirements. Teaching Unix and Posix in general might be useful, but you can throw in how writing to a single drive could block a read, and how to use a mutex if they ever get to the threaded code level. Knowing about threads versus processes could be thrown in with this too. Not a how-to, but the basic concepts.
I’d almost add pointers, but that depends on the language. I just reviewed a friend’s code; he treated pointers as ints half the time, and reset them to 0 instead of null in C++ (architecture issues again!). And while he knew the difference between stack and heap variables in C-type languages, he didn’t know why to check that new() didn’t return null! A fairly basic check of the C++ documentation would tell you why that is needed. But since many languages don’t use pointers at all, it might not be a necessity to learn them beyond how the CPU and OS use them; they often do use pointers even if the language obscures that away.
And I guess that means the biggest point should be “Know your language”. Don’t assume that a built-in function or keyword functions the way the intro text or demo code describes, look at the specifications and get to know them well. You don’t have to do this for every language you pick up, but doing it for one will help you spot the occasional caveats when things misbehave in a new-to-you language.
I’d say “structure in programming”, as a generalisation encompassing structured programming, modularity, and so on. It’s easy to get carried away here (polymorphism, trying to fit everything in hierarchies) but the principles and an overview of the various tools available is useful.
I’d like to add (in similar fashion to above, a more general version of) “line coding”. XML (and json, and numerous others) is an attempt at “appliance-izing” this, but time and again we see that without a background in why the thing is useful people still manage to break things, with the added benefit of massive overhead.
GUIs were a similar movement, with people genuinely believing that if only everyone used them, rampant productivity increase would ensue. Turns out, doesn’t quite work that way. And it isn’t just because a good GUI is very hard to design. You can fsck up CLIs into frustrating unusability just the same. Knowing what is appropriate, when, is useful.
In the same vein I wouldn’t say “some algorithms”, I’d say “grounding in algorithms analysis”, big-Oh notation, and so on. The basic algorithms are included, of course.
If you’re serious about this programming thing, there’s a lot of theory to give some structure to the field of pouring thought into instructions for a computer. Not everybody needs to be a toolmaker, but every craftsman needs to at least have seen a good selection of different tools and their uses, and to learn when and how to pick appropriate ones for the problem at hand. In context, programming languages are but one class of tool.
Realising this is probably the first thing one should do, and thus that needs to go up top.
Another vote for some concept of modularity. Not necessarily OO, but certainly modules, static data and functions, libraries and APIs to define the entry points to those libraries.
You could probably argue that all of those are implicit either in the UNIX design philosophy or else in structured programming, but I still think the basic idea is important enough to warrant a bullet point of its own.
The incremental, constraint-based computation found in spreadsheets (sometimes called reactive programming). It may often be overlooked in the form it takes in spreadsheets, but it’s very powerful when used to develop dynamic user interfaces that automatically update in response to state change.
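A toy, pull-based version of that idea (real reactive systems push changes and track dependencies, but the flavor is the same):

```python
# Toy spreadsheet: formula cells recompute whenever get() is called,
# so a change to an input cell automatically flows downstream.
class Cell:
    def __init__(self, value=None, formula=None):
        self.value = value
        self.formula = formula  # a zero-argument callable, if any

    def get(self):
        return self.formula() if self.formula else self.value

a = Cell(2)
b = Cell(3)
total = Cell(formula=lambda: a.get() + b.get())

print(total.get())  # → 5
a.value = 10        # change an input…
print(total.get())  # → 13  …and the dependent cell reflects it
```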
1. Using a book to progress.
2. Asking someone, anyone to help you.
3. Explaining to the boss that s**t takes time.
4. Taking a break.
That’s about it.
I second the idea expressed (implicitly) above that every programmer should know at least some very basic facts from theoretical computer science, say
about the existence of undecidable problems and basic facts about complexity classes.
Otherwise one always risks wasting a lot of time trying to accomplish impossible things.
The open client-server architecture.
Email, DNS, DHCP, web (including all the http based systems), databases, and so on.
– Continuous integration (Even if you work alone, it can help you find bugs you would have missed at commit time, e.g. when the build file needs to be updated after adding a new file. It also lets you know when you forgot to commit a file, as compilation fails on the CI machine while it works locally. Also, if you release binaries, it lets you create and distribute those easily.)
– Bug/Task tracking tool, if you have any users.
– Writing a version history for each public release describing major changes, and also adding a version number to each release. It helps a lot with bug fixing to know what version your users are running.
– Refactoring tools of your IDE (e.g. Eclipse can do a lot of work for you, if you just know how to use it).
– Use some kind of formatter to keep the style of your code in order, instead of requiring you to do it manually. Setup your IDE to do it automatically.
– If you need to make big changes and you are not sure how everything fits together, just write the code to do it. Then revert all the changes you made. Now you know much more, e.g. what small changes you could make first to make the big change possible. Repeat writing and reverting until you have done everything in small parts.
– Review others’ code and try to get people to review yours. If possible, do some pair programming with different people. Sometimes it sucks, but at least once in a while you can learn or teach something new.
I would almost certainly recommend some good books. For example if they were wanting to code C++ then I’d recommend they read ‘Effective C++’ cover to cover before writing another line of code. Then write some code and read it again.
A nice list. I would like to add ‘basic hardware knowledge’ myself, so that you understand what are the bottlenecks; what has to be efficient, and what does not really matter. Know the size of the cache of your machine.
And ‘add comment’, which is more a rule than a concept, but still, it is golden.
Things like OO/non-OO, XML, etc. are all choices, not necessities. They depend on the type of program, not on computer programming in general.
Divide and conquer. Algebra.
Communication protocols, including client-server as a class of examples.
Virtualization hasn’t really been mentioned, either.
Atomicity is also only alluded to.
I’d like to see mention, somehow, of the notion of a program creating a program, both as a compiler does, and also as a preprocessor (e.g., macro processor) does.
(Just saw your blog mentioned on /. – perhaps this discussion is “over”.)
In my days in commercial Windows application development, a very rough knowledge of complexity theory was a big thing that separated people who would write terrible code with massive performance and responsiveness problems from those who would write vaguely efficient code that ran an order of magnitude faster.
It isn’t that you need to know O(N) notation, but knowing that nesting three loops, each going over every data point in an array or table, will get very slow with any large number of data points can save people from a lot of mistakes; it is a very common mistake among beginner programmers.
The second, related thing is having an idea of where ‘hidden’ loops may be: things like indexOf, searching, or whatever. In theory people learn this on a CS degree, but in practice, when you get into someone else’s framework or library, there are often less obvious hidden loops and searches. Knowing that is more of a craft knowledge tied to specific technologies, but you can certainly learn the symptoms, and get better at guessing (or looking up in the source) what is causing them inside the library.
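The point is easy to demonstrate even without a framework: the membership test `n in haystack` on a list is itself a hidden loop:

```python
haystack = list(range(2000))
needles = list(range(0, 2000, 2))

def count_slow():
    # `n in haystack` on a list is a hidden loop: O(len(haystack)) per lookup.
    return sum(1 for n in needles if n in haystack)

def count_fast():
    # Build a set once; each lookup is then O(1) on average.
    present = set(haystack)
    return sum(1 for n in needles if n in present)

print(count_slow(), count_fast())  # → 1000 1000 (same answer, very different cost)
```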
Object-oriented programming. Separation of concerns, keeping knowledge in one place, the ability to think about the problem conceptually.
Maybe it’s something you learn by doing it wrong. One of my programs as an intern was terrible. I learned the hard way the maintenance headache of using switch statements all over the place where polymorphism would have kept the contract in one place and clear and the Things (they were shapes in a Finite Element Analysis system) would encapsulate their own knowledge of how they work. Sad thing – I see code like this in production software written by professional teams today.
Other areas I’ve found – maybe not so essential:
Too few people understand concurrent programming – yet with more and more massive multi-user systems (web sites) we need it more.
Simple structures and algorithms, including recursion and depth versus breadth first approaches. Basics of computers really.
Network programming, not specific to sockets / streams , but how communication networks changed the very fiber of algorithms.
Adding to Rodrigob’s post: basic computer architecture. How the CPU, memory, addressing, the program counter, registers, the stack, branching, arithmetic, pointers, etc. work. It should include assembly programming.
It’s the language of computer languages. Without it you’re playing chess by just moving pieces randomly.
I’ve seen lack of this knowledge cause so many problems. “Why can’t I start a process, create a file, and write to it in a tight loop to check if it exists?”
As has already been said, abstraction and encapsulation. But these concepts are hard to explain to novice developers unless they understand the central point of all programming: we use high-level languages not to talk to the computer, but to talk to the next programmer who has to read/maintain our code.
So concrete examples
+ Constants – use them, understand why, and name them correctly
+ Naming conventions for variables, procedures and functions
+ Site standards – understand them and deviate only with many, many explicit comments to explain why.
The halting problem/Godel’s incompleteness theorem. In particular, an appreciation of rigor but humility about its limits. For example, understanding that you can never know that you’ve found the last bug in code that deals with problems of a certain (common) degree of complexity.
@Leonid and @Daniel
I think the point is that your criteria for inclusion are a matter of opinion, so it really devalues the list when you imply that they are not. Saying things like “must be universally accepted” isn’t a criterion.
There will always be contradictory opinions because opinions don’t have to be based on fact. Maybe people don’t like your list because you used an ugly font.
So you can stop pretending to be striving for some unattainable goal and just agree with @Mark that no formal list can be both consistent and complete.
(Yes, I’m old.)
Invariants: any time you can identify some quantity (or combination of variables, etc) that should remain constant throughout your program, it is a Good Thing
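Asserting such an invariant turns the observation into an executable check, for example conservation of money in a toy transfer:

```python
def transfer(accounts, src, dst, amount):
    # Invariant: the total money in the system never changes.
    total_before = sum(accounts.values())
    accounts[src] -= amount
    accounts[dst] += amount
    assert sum(accounts.values()) == total_before, "money created or destroyed!"

accounts = {"alice": 100, "bob": 50}
transfer(accounts, "alice", "bob", 30)
print(accounts)  # → {'alice': 70, 'bob': 80}
```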
On a much more basic level, the hexadecimal and binary number systems. It is shocking at how many university graduates don’t even know what a bit or a nibble is (in the context of computers).
Integer & floating point arithmetic.
Memory and addressing, file systems, virtual memory, MMFs.
Multi-threaded programming, synchronization primitives, atomicity, interrupts, reentrancy.
@Mark @Leonid @Keith
I agree that no list can be exhaustive and correct. But I am not hoping to build a list that has these attributes.
Note that my initial list was not meant to be either exhaustive or correct.
The idea that code itself is a sort of data, which can be manipulated by other code. This concept underlies the very idea of “compilers” and rule-based systems.
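Python makes this tangible: source code is a string you can parse into a data structure, inspect, and then execute:

```python
import ast

# Code as data: parse source into a tree, look at it, then run it.
source = "result = 2 ** 10"
tree = ast.parse(source)
print(type(tree.body[0]).__name__)  # → Assign  (the program as a data structure)

namespace = {}
exec(compile(tree, "<generated>", "exec"), namespace)
print(namespace["result"])  # → 1024
```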
The systems approach is big…recognizing that your work doesn’t exist in isolation. Big enough? Maybe, but there’s a lot of good stuff here already. I’m just saying…
I would suggest adding:
Big-O notation. This is largely what teaching algorithms is about as it provides a great way to look at it, and it gets to the heart of some of the complexity analysis mentioned above.
Profiling. Almost any application will run into performance issues at some point. In some ways, that is the definition of a non-trivial application. You generally can’t make a living just writing trivial applications.
Recursion. Either as a useful tool or a trap to avoid. Essential.
Concurrency. Cellphones these days are coming with multi-core CPUs. Unless you go down to 16-bit micro-controllers, odds are you’re going to be in an environment with at least threading support. Knowing how to determine if workloads are parallelisable and how to split up said workloads is part of addressing issues such as performance issues. It also separates “I can handle 10 clients simultaneously” from “I can design the next Amazon.com”. This also gets into issues of database transactions above. If you understand parallelism and your dataset you can make much better use of the transactional database.
I am against adding:
Regular expressions. They may be useful for a large number of problems, but those are within certain problem domains. If you are doing any kind of bit-banging (sockets, graphics, file-systems) there’s a good chance you’ll never need to use them at all. I wouldn’t qualify it as a “basic” requirement.
Network programming. Very important for a lot of things. But not required. If you understand Unix pipes, you have the basics and can go from there. OTOH, if you are doing, say, computer graphics or something you could easily avoid it.
Buffer overflows. This is really language-dependent. Really important if you are writing in C/C++. Not so much if you are writing in Python or Perl, for example. As a systems programmer, I have to be acutely aware of this, but a large amount of programming isn’t systems programming.
1) Learning to code better by reading other peoples’ code.
2) Mathematics: propositional logic, set theory, probability, optimization, error/reliability estimation, big O estimation of run-time/space, etc.
3) Runtime error detection and recovery.
I agree with: comments, pointers, open source, version control and backups, macros, the *philosophy* of object orientation.
Something very important that software developers should know is how to listen to their customers or marketplace and understand the exact problems to be solved, so that software does well what it needs to do.
Understanding deeply what is needed to be done comes first, everything else will follow from this.
Anything David Parnas says is important.
I’d include linked lists in the basic data structures. There’s a beauty in the algorithms used to traverse it and add/remove items.
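Agreed; the traversal and insertion both fit in a few lines:

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def to_list(head):
    # Traverse: follow `next` pointers until we fall off the end.
    out = []
    while head:
        out.append(head.value)
        head = head.next
    return out

def prepend(head, value):
    # Insertion at the front is O(1): no shifting, just one new link.
    return Node(value, head)

head = None
for v in [3, 2, 1]:
    head = prepend(head, v)
print(to_list(head))  # → [1, 2, 3]
```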
May be related to structured programming, but I think it deserves an entry of its own: reusable code (think functions/procedures/macros).
I second (or third) the notion of shared open source software as the greatest of achievements. The software Library of Alexandria will never be able to burn down.
How about events / event-based programming? You should have at least a vague notion of what callbacks and/or bindings do and how they work.
Usability and user experience.
“Don’t Repeat Yourself” and encapsulation.
I think that the DRY principle is likely the single most important concept in programming.
The most important of all IMHO: Input Validation 🙂
This is a “soft skill” rather than a technical skill, but I would recommend communication skills. Unless you’re operating in a vacuum, at some point you’re going to need to explain your code to someone else or negotiate for resources (time, money, hardware, QA time, etc.) The more skilled you are with written and/or verbal communication the easier those explanations or negotiations will be.
Some of the people to whom you may need to explain are: a fellow developer that you’re asking to help you improve the code, a tester who just reported a bug that isn’t a bug, a documentation writer who needs to know what they should explain to the user, your boss who needs to explain to the higher-ups why you need more time/resources, or even yourself after being away from the code for six months (or six weeks.)
To steal from a very smart man whose name I no longer remember: “Software is code, data and documentation maintained in a specific configuration.” Explicitly, code and data have to exist, while implicitly they must be version controlled and documented to be valuable to users. The most hidden, or most obvious, part of this statement is the critical need for testing. Would one configure and document failure on purpose, that is?
When one considers that 80% of all software projects fail due to schedule and/or cost, we see the obvious need for innovation to reduce the cost of software. Many of the items in the list and suggested items are tools in a toolbox. A good example is UNIX. UNIX is not a final destination when you consider the age of programming; there are many other operating systems. Hardware, software and firmware are just tools. Pick the best tool for the job.
Perhaps design patterns and standards are just as important? Perhaps having a tool to capture requirements and understanding their testability is critical going forward? How can we build extendable systems that are easy to interface with, so we don’t lose the investment made years before? I believe your project here is very important in finding our way to the future of programming, whether people will be able to do more in less time or some programming is automated. It is good to question.
Network protocols that guarantee specific characteristics (ordering, delivery, speed, fault tolerance, etc).
Commenting, commenting, commenting.
Not just echoing each line of code with an identical line of prose, but explaining /why/ the code needs to be done this way. Especially if you are writing anything that is abstract and/or which interacts with some other piece of code in a different file, you need to explain that interaction.
It doesn’t matter if you are coding in a team, or by yourself. I have heard several times that, “Any software that you have written that you have not looked at for more than six months might as well have been written by someone else.” It has proven true several times in my experience.
If you leave the company, or get hit by a bus, somebody else is going to have to understand your code. And even if that is not the case, you may be asked to re-visit your code at a later date, and why would you want to re-learn your own code from scratch?
* functional/stateless programming
* immutable data structures
* [distributed] version control
* code comments
* automated testing
Design By Contract. API documents go some of the way to that.
How to read code – I cannot overstate the value that came out of my first stint as a maintenance programmer.
1. To read and remember the code flow without the help of an IDE like Eclipse. The best way to practice this is to use editors like Vim and Emacs, although those can act like an IDE.
2. To debug the code without using debuggers. Avoiding debuggers will force you to rethink the design and code flow.
Virtual memory; very useful concept.
Virtualization (software VM or hardware virtualization).
Packet networks, routing, switching.
Domain-specific languages.
Formal grammars, parser generators.
Code generation techniques.
Agree with several of the comments, particularly around understanding what a compiler (or interpreter depending on your language) is actually doing. You don’t need to be able to actually write a compiler (though that’s fun), but to understand how your code gets mapped to machine-level instructions is crucial to being able to debug (yes, sometimes compilers create bugs and you’re totally screwed trying to find them if you don’t realize that).
Any work estimate of the form “between x and y days/months/years” means “at least y days/months/years”.
Test Driven Development.
Meet the customer’s need!!!
I totally agree this is of utmost importance!
Most of us are application programmers.
By this I mean people who make programs to do something useful for the customer.
The rest is technical knowledge, most of which is found relatively easily on the Web.
And basic knowledge of Boolean logic,
and basic control structures.
It may sound horrible, but it is true.
This is most important for a successful USEFUL program.
– Continuous integration: Jenkins, TeamCity…
– Technical architecture solutions (hardware servers, containers…) and data flow
– Java and web frameworks
hope this would help 😉