
Jayesh Lalwani's answer to Since programming can be self-taught, why not major in something other than computer science? - Quora
Since programming can be self-taught, why not major in something other than computer science?

This... this exact assumption behind the question is what holds people in the software industry back: the idea that computer science is just programming is a myth. Calling a computer scientist/software engineer a programmer is like calling a medical scientist a microscope looker, or calling a diplomat a meeting attendee, or calling a surgeon a meat cutter, or calling an architect a drafter. Your job is not defined by the physical actions that you do as part of the job. Your job is defined by the goals that you are trying to achieve.

If you have just graduated with an undergraduate/graduate degree in a field related to computing, get it in your head: I'm not a programmer. Programming is what I do. I'm not a programmer. A lot of entry-level (and dare I say even mid- to senior-level) developers get into a rut because they don't get this in their head. They expect clear requirements, clear deadlines, and a clear design, and they expect that their job is to take a set of clear requirements/designs and convert it into code. Then, they sit around expecting someone else to generate clear requirements and a clear design, and mope about when they don't have them. They expect that handling ambiguity is someone else's job. Why? "I'm a programmer. I program. Tell me what to program. Because I'm a programmer, and I want to program all day."

No. Wait, let me think about it... NO! You are not a programmer. Programming is what you do. Sure, the best way to program is to resolve all ambiguities before you sit down to code. However, that doesn't mean that it's someone else's job. You are a computer scientist/software engineer. It is you who should be resolving the ambiguities. It is you who should be digging and digging till you understand the business need. It is you who should be figuring out the best way to meet the business need. You are an ambiguity resolver. Programs are what you produce as part of the ambiguity-resolution process. Indeed, in an agile world, programs are used as a tool to resolve business ambiguities. When the business need is ambiguous, you implement it one way, try it out, see whether it works, and if it doesn't, you implement it another way. The programs that you produce are trying to achieve a goal that goes beyond the program itself. Your job is to achieve that goal, not to program.

But wait! I am begging the question here, aren't I? I am making an assumption that a person trained in computing is the best person to pursue these higher goals. After all, if you are making an application for doctors, isn't a doctor the best person to say what the application should do? Or in other words, aren't domain experts the best people to resolve ambiguities? That is what this question boils down to. After all, they know the domain better than professional technologists.

The thing is, it's been tried before. Before we had professional technologists, computing was seen as a tool to make jobs easier. Computers were thought of as really great calculators. And just as physicists/engineers/scientists would be trained in the use of calculators, they would be trained in the use of computers, which involved learning how to program. There were no professional programmers. Programming was what people of other professions did. There were several problems with this:

1) Non professional programmers create crap programs
Just about anyone with moderate intelligence can program. Only a few people can program well. Writing correct, maintainable, scalable code is hard. Doing it in an efficient manner is even harder. Doing it in a sustainable manner with a team of programmers is harder still. Eventually, what happened is that physicists/engineers/scientists started having dedicated programmers, and then they realized that they had these people who program 100% of the time but had been trained to do something else. Early "programmers" like Turing and von Neumann were mathematicians who started building concepts around computing. Later, people like Dennis Ritchie, who had degrees in physics and mathematics, converted those concepts into tools, essentially creating an engineering field out of it. As the needs of computer programs became more complex, and the tools became more sophisticated, there was a growing need for people trained on those tools and problems. This led to universities creating programs to train people on those tools and problems.

2) Humans are creatures of ambiguity. Computers are creatures of specification
At the end of the day, humans are creatures of ambiguity. Even the most intelligent people communicate in ways that leave a lot of scope for misunderstanding. Just because you are a world-renowned scientist doesn't mean you are good at resolving ambiguities. Most of the time we don't even realize it. Once we get our point across to other humans, we don't need to.
Computers need everything spelled out for them. They are creatures of exact specifications. Someone has to resolve the ambiguities in the process of creating computer programs.
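To make that concrete (a small Python sketch of my own, not from the original answer): even a requirement as innocent-sounding as "round prices to the nearest cent" hides ambiguities that a human never thinks to state, but that a computer forces someone to resolve.

```python
# "Round prices to the nearest cent" sounds unambiguous to a human,
# but the machine forces two decisions: which rounding rule, and
# which number representation.
from decimal import Decimal, ROUND_HALF_UP

price = 2.675

# Python's built-in round() operates on binary floats, where 2.675
# cannot be represented exactly (it's stored as 2.67499999...):
print(round(price, 2))  # 2.67 -- likely not what the business meant

# Resolving the ambiguity explicitly: a decimal representation and a
# round-half-up rule (what most people mean by "nearest cent"):
print(Decimal("2.675").quantize(Decimal("0.01"), rounding=ROUND_HALF_UP))  # 2.68
```

The domain expert knows what the business wants; the technologist is the one who knows that these two lines can disagree, and why.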
The question is who is better at it, the technologist or the domain expert. I know a lot of "programmers" would have the domain experts do it. These same programmers also complain about how crappy the requirements are without doing anything about it. At the end of the day, the person who knows the best way for a computer to do things is the technologist. A technologist who understands the domain creates a more elegant solution. You need to have an understanding of how things work behind the scenes. It's not just about how to write a program. To have an elegant solution, you need to know how compilers work, how operating systems work, how databases work, how disks spin, and how bytes float around the network. Also, having the technologist understand the domain allows the technologist to plan for the future, which allows him/her to write maintainable code more cheaply.

Of course, the technologist needs help from the domain expert. However, it's the technologist's responsibility. It's the "programmer's" responsibility to have clear requirements: not the BA's, not the manager's, not the architect's... the programmer's responsibility. Throwing the requirements ball over the wall leads to bad, unmaintainable code.

The task of requirements gathering has to be a joint effort between the technologist and the domain expert, and the technologist cannot divorce him/herself from the messy parts by calling him/herself a "programmer".

3) Not everyone can program
Generally speaking, physicists make good programmers. Mathematicians make good programmers. Financial analysts don't make good programmers. Artists don't make good programmers. Most people have difficulty understanding computers. A long time ago, computers started up with a prompt, and you had to specify what you wanted the computer to do by writing a program. Most people couldn't use those computers. The only people who used them were people who could program (as a job or as a hobby). The history of personal computing (and UX design in general) has been a series of evolutionary steps that allow the layperson to be divorced from the inner workings of the computer.

In a nutshell: programmers program. Computer engineers/scientists program better than programmers because they do it in a manner that reduces long-term costs. It's better for a computer engineer/scientist to learn the domain than for the domain expert to learn technology, because that leads to more elegant solutions.
programming  career  cs 
july 2015 by kme