The journey to becoming an intelligent software developer is quite involved, while the journey to becoming an unintelligent software developer seems quite simple. That's true of nearly every pursuit, of course, but people generally don't try to become paid chess professionals after playing chess for a few months, whereas people do regularly get well-paying jobs with the hoity-toity title "Software Engineer" after writing code for a few months.
Across decades of software development, I keep seeing people fall into the same mental traps that stop them from advancing. I want to talk about one of the things that causes the separation between novices and experts: why many people remain novices for many years, why many people who believe they're experts are really still novices, and how novices can get unstuck.
The naive outsider view of software development is the user's view. The user has desires and needs that the user cares about. These are all expressed with "I" statements.
The user understands that their actions must necessarily somehow translate into internal instructions and, therefore, that computers can, in some mysterious way, be told what to do. They may even understand that special languages are involved. The available understanding, like all understanding, hinges on what can be observed ("when I move my mouse, the arrow on the screen also moves commensurately"). What actually happens inside the computer may as well be the present-day embodiment of the old expression "sufficiently advanced technology is indistinguishable from magic".
Now, every insider started life as an outsider, so clearly there is a way to bridge the gap from user desires to machine instructions. But the "how" of writing software encompasses two unequally transparent and unequally important aspects, and the difference between them is astronomical. It's extremely common for people to get stuck at the threshold, with a foot inside the house but the door not open far enough for the rest of the body to pass through: they learn only the simpler and less important of the two parts and never even consider that the other one exists, let alone that it matters far, far more.
First, there's "what" we write to tell the computer to do something. "What" here means a specific bit of code--a specific line, function call, variable assignment, etc. Together these pieces form the syntax of the chosen programming language. This part is the obvious part, because anyone at all can point at a line of code and say "This is a line of code. This bit here means that we're going to add these two numbers together, and this bit here shows the result of that addition on the screen." And the outsider can look at that line of code and say "Ah, and if I write this line of code then I will make the computer do that?" and they will have learned something about instructing computers, akin to a translation dictionary and phrasebook that shows the foreign-language learner how to ask where the bathroom is or how much a pound of strawberries costs.
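To make the "what" concrete, here's roughly the kind of line that description points at (a trivial, invented example; the numbers mean nothing):

```python
total = 3 + 4   # this bit adds these two numbers together
print(total)    # this bit shows the result of that addition on the screen
```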
But in addition to the "what" there's also the much less obvious and much more difficult to learn "why" we tell the computer to do exactly that thing in exactly that place and time. "Why" here means the conscious decisions and planning and ideas about the intrinsic physical nature of the computer system itself that lead to writing a particular line in a particular place. It's not just the user's story, though; it's where the user's story intersects with the computer's story. This part remains tantalizingly out of reach long after discovery of the translation phrasebook, for three reasons:
First, furthering our analogy, the translation dictionary and phrasebook does not instruct on the likely outcomes of insulting someone's children. Nor does it explore any of the variety of circumstances where otherwise-less-likely outcomes may occur. It is clear that an understanding of context is vital to the safe and proper use of language, but that understanding is quite difficult to explain and is thoroughly disconnected from the mechanical process of composing grammatical sentences.
Second, you can explore the difficulty of this problem for yourself. You know that insulting someone's children will likely lead to certain consequences. Try explaining in your head, to an imaginary understudy who knows absolutely nothing of social norms, why the likely outcomes you have in mind are likely and what alternate circumstances might exist that would lead to different outcomes. And then when you have satisfied your imaginary understudy with an answer to why, ask yourself whether the understudy yet knows the why of that answer. Like maybe you've settled on explaining that some quirk of human biological evolution has made it so that parents' brains rewire themselves to feel such and such a way under such and such circumstances because of something to do with maximizing the potential for propagation of their genetic line. Filling in the blanks appropriately, it certainly seems like a reasonable answer. Ok, but why do they do that? Parents of small children (or former small children) tend to be well acquainted with this problem. The infinite loop of "why" never ends, and it's tempting to give up, throw our hands in the air, and say only "because".
Third, a large amount of the rapid instruction out there apparently just skips over the problem. Boot camps, crash courses, internships: none of them have the time to address it. Maybe they don't even know about it or think about it. Maybe they don't think it matters.
But this is the fundamental core of what programming is really all about, because the short version of the answer to "why did you write that line of code in that place" is that actions in a known context have predictable consequences. Those consequences create new contexts, actions in those new contexts have new consequences which create newer contexts still, and so on. In order to build a program, one has to clairvoyantly predict a connected path through innumerable, infinitely branching contexts and consequences, one that ultimately both accomplishes success and avoids failure.
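A minimal sketch of that chaining (the scenario and every value here are invented):

```python
# Each line is only safe because of the context the previous lines created,
# and each line's consequence becomes the context for the line after it.
prices = [4.00, 2.50, 3.25]     # consequence: a non-empty list of numbers now exists
total = sum(prices)             # valid only in a context where the items are numbers
average = total / len(prices)   # valid only in a context where the list is non-empty;
                                # an empty list upstream becomes a crash here downstream
print(f"average price: {average:.2f}")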
In short, you cannot learn the "why" just by seeing the "what". In fact, often the opposite happens: people learn entirely wrong lessons by looking at what someone did in a particular place and believing that it's good, or generalizable, or both, just because it was done. It's easy to say that surely the author must have had a reason, but it's almost as certain that their reasons were in some way unsound and their reasoning in some way myopic.
Note: While you cannot learn the "why" by inspection, you can get to a point in your understanding of the world of programming where you are able to know at least some of it. I often look at broken code and know, without being told, exactly what it was supposed to do that is different from what it does do. This skill is very useful for fixing bugs. How do I know it? I run every bit of the code in my head the way the computer does, quickly, from start to finish. I reason about the interplay between all of the thousands of contexts and consequences, and I look for the consequence that leads to an action that doesn't match its context.
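As an invented illustration of the kind of read I mean (this isn't any real code; the bug is a classic misplaced return):

```python
def find_longest(words):
    longest = ""
    for word in words:
        if len(word) > len(longest):
            longest = word
        return longest  # runs during the first iteration, cutting the loop short

print(find_longest(["a", "bcd", "ef"]))  # prints "a", not "bcd"
```

Running it in your head, the surrounding context (the name, the loop, the comparison) tells you what it was supposed to do; the consequence of that early return tells you where it diverges.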
I recall a story about an IRC chatbot built by someone in my social group a long time ago as a silly joke to do one seemingly innocuous thing: any time someone said the phrase "I can't believe", it would respond with "Oh, I can believe it." And someone would say something like "I can't believe they discontinued my favorite snack," and the bot would say "Oh, I can believe it," and everyone would have a good laugh. But every once in a while, someone would say something like "I can't believe it's been a year since my dad died," and then the response wouldn't be funny anymore. In fact it became harmful. It's clear that what we build and how we build it have consequences, and that the core problem here wasn't a technical one. The person who made the bot didn't fail in code; they failed in reasoning because they neglected the importance of predicting negative possibilities.
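The story doesn't include the bot's actual code, but the logic was presumably as trivial as this reconstruction of my own:

```python
def respond(message):
    # Fires on every occurrence of the phrase, regardless of what follows it.
    if "i can't believe" in message.lower():
        return "Oh, I can believe it."
    return None

print(respond("I can't believe they discontinued my favorite snack"))  # funny
print(respond("I can't believe it's been a year since my dad died"))   # harmful
```

The "what" is one trivial conditional; the failure lived entirely in the contexts the author never imagined.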
And I think this is where junior programmers get stuck. A novice developer looks at a line of code, says "I don't know why this was written, but I can tell what it does", and then assesses only whether it does what it says it does. But of course it does what it says it does! It's code! It can't do anything other than what it says it does, because the computer doesn't have the agency to say "no thanks, I think I'll go for a swim instead." It's never useful to sleepwalk through reading code like this, without trying to divine from context why the code was written as it was and, by extension, whether it does what it's supposed to in a reasonable way. The junior developer must banish and outlaw from their mind all instances of the "I know what this means, so it's probably right" mode of thinking.
I think the tendency comes from thinking in small ways about small problems, when the advanced developer needs to think in big ways about all problems. Whether the code does what it says it does matters far less than whether it does the right thing safely and reliably. A line of code is not just a line of code; it is affected by, and also affects, everything else in its causal sequence. This is not just a function: this function is called somewhere else. How is it called? Why is it called? Does it even need to exist? Would the code be easier to run step-by-step inside your head if it were inlined? Do these things look similar? If two things look very similar, how similar are they? Are they in fact the same thing entirely, in slightly different clothing? How can this break? If it breaks here, what happens? How can we make sure that whatever happens is good? And so on. You have to rev your brain into high gear, not let it coast through busy intersections in neutral.
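A small invented example of the gap between those two questions: the function below does exactly what it says, and only its callers determine whether that's the right thing.

```python
def sanitize(name):
    """Trim whitespace and lowercase. Does exactly what it says."""
    return name.strip().lower()

# In one context, normalizing free-form user input: reasonable.
print(sanitize("  Alice "))        # "alice"

# In another context, applied to a case-sensitive key: a quiet disaster.
print(sanitize("Report_Q3.PDF"))   # "report_q3.pdf" no longer matches the stored name
```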
Getting to the point where obvious-in-hindsight consequences are instead obvious-in-foresight requires practicing intentional consideration of the contextual consequences of all actions. I'm very fond of the peanut butter (and optionally jelly) sandwich exercise for this. Maybe that's because I'm a sucker for nostalgia and it was the first thing we ever did in my first computer science class in school, but, rose-tinted glasses aside, I think it's actually the perfect exercise, and I get sad when I hear that someone struggling to improve as a software developer has never personally done it.
Briefly, the exercise involves an understudy writing a set of instructions for assembling a peanut butter (and optionally jelly) sandwich, and then an instructor maliciously interpreting the given instructions as literally as possible using real bread, peanut butter, knives, etc., usually with humorously messy results. The common premise of the exercise is to teach students the essentiality of literalism: the computer can only do exactly what you instruct, never guess or intuit your intent. But I think, far more importantly, it also teaches both the value and the process of predicting and discovering the consequences of actions given a series of linked contexts. Teaching literalism itself is cute and fun, but that just prepares students for their first introduction to programming languages, which have miserly vocabularies. Immediately upon encountering any programming language, the lesson of literalism is no longer useful. Of course you must be literal. You have no other choice given the command syntax at your disposal. Literalism is an uninteresting and unimportant attribute of programming. No, the true value of the exercise is that the student must try to predict exactly what will happen next, and, based on that, exactly what will happen next, and, based on that, etc. They must try to anticipate consequences (good and bad) that cascade at every step. And because in this exercise they have no example code to copy, they must continuously expand their knowledge of the underlying context by trying something and seeing what happens.
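As a playful sketch (the steps and their outcomes here are invented), the real lesson looks something like this: each instruction executes in the context the previous instructions actually produced, not the context the author imagined.

```python
steps = [
    ("put peanut butter on the bread", "the sealed jar now sits on top of the sealed bag"),
    ("spread it with the knife",       "the knife scrapes uselessly across the jar's glass"),
    ("put the two slices together",    "the untouched loaf gets mashed flat"),
]
for instruction, consequence in steps:
    print(f"{instruction} -> {consequence}")
```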
Too often people enter software development without learning the lesson that the code itself is not important, that the tools are not important, that the frameworks and the technology are not important, and that programming comes from the branching journey of intentional decisions and not mere syntax.