by Abdul Wasay Bhatt
Hey, fellas, how's it been? I hope it's great. Because if it wasn't, then all you've got to do is stick with the flow of this post, keep your geek mode on, and BOOM, you are in for a treat!
OK, so I took you guys on a two-post journey of exploring the aromatic outer space and diving into the depths of surreal dreams, but what was my next post going to be about? Honestly, I didn't know either. I could have decided to quit writing altogether. But then something just clicked in my mind while I was doing my MATLAB coding assignment late one afternoon. I tried searching for it on the internet, but all I found was scattered data and no satisfying answer. Thus, I decided that I would be the one answering this question for any computer programmer out there, beginner or expert, who has been (or will be) continuously writing codes, programs, and algorithms, but doesn't really know why it is even necessary.
But what's a code? :
|Yes, I can code!|
The saddest part of our century is that we have provided literally everyone with computers, laptops, smartphones, and tablets, but more than half of these users do not know what goes into making these devices do our bidding; what exactly happens when we click our mice and track-pads or tap the screens of our iPhones and iPads?
Yes, if you guessed it, you're more than just right: this is where computer programming, or coding, comes into play. Anyone who has programmed will have seen how the many computer languages are structured. A program is a series of lines of instructions, usually built from one or two common-language words (e.g., English) acting as a command, with the rest of the instruction made up of variables, numbers, etc., spelling out what is to be done. I am talking about relatively high-level languages here, of course: the languages which are more comprehensible to humans and easier to code in. Leaving generic definitions aside, coding is just a way of describing to a computer how to perform a task. It is a lot like penning down the steps for performing a task, much like the instructions given in a user manual.
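To see what "instructions in a user manual" looks like as actual code, here is a minimal Python sketch (an illustrative example of mine, not from any particular assignment): each line is one step the computer follows in order.

```python
# A tiny "recipe" written as code: each line is one instruction the
# computer follows in order, just like steps in a user manual.

def area_of_rectangle(width, height):
    """Multiply two numbers and hand back the result."""
    area = width * height   # step 1: compute the product
    return area             # step 2: report the answer back

print(area_of_rectangle(3, 4))  # prints 12
```

Notice the mix the post describes: an English-like command (`return`, `print`) plus variables and numbers telling the computer exactly what to do.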
The world of binary numbers, bits and bytes:
We enter a set of instructions and the computer does it. Just like that. Well, actually, it isn't "just like that" when you look more closely. There are some other factors involved which are invisible to the human eye. Basically, our computer systems work a bit differently from many naturally occurring phenomena: they are digital. By digital, I mean that there are just two states in which our computers are capable of functioning: ON (1) and OFF (0), also represented as HIGH and LOW respectively. Each of these states is called a "bit" in the computer world, and a "byte" is a set of eight bits. A bit, in any circuit system, corresponds to one or more switches. Together, bits and bytes make up the computer data that is stored or transmitted through the operation of these switches.
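You can actually peek at these bits yourself. As a small sketch, here is how Python reveals the one byte (eight bits) behind a stored character:

```python
# Every character we store is really a pattern of ONs (1) and OFFs (0).
# Here we look at the single byte behind the letter "A".

letter = "A"
code_point = ord(letter)                 # the number the computer stores: 65
bit_pattern = format(code_point, "08b")  # that number as 8 bits, i.e. one byte
print(code_point, bit_pattern)           # prints: 65 01000001
```

Those eight 1s and 0s are exactly the ON/OFF switch settings described above.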
The main reason computers can only operate in two states is electricity. In any electric circuit, at any instant, the current is either flowing (1, or ON) or disconnected (0, or OFF). This can be explained with the example of a light bulb connected to a power source through a circuit breaker. When the circuit is complete and the breaker isn't tripped, the state is 1 in binary; when the circuit is broken, the light bulb does not glow, and that condition is called 0 in binary. There cannot be a third state for digital computers because there is no logical state between ON and OFF. That is also the answer to why it is very close to impossible to make a computer that operates on a base other than 2. It is by no means implied here that computers were built because of the base-2 system; rather, the base-2 system was adopted because it fits most easily with a circuit technology restricted to just two states.
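To make the base-2 idea concrete, here is a small sketch of how any ordinary number breaks down into a string of ON/OFF states (a textbook repeated-division method, written by me for illustration):

```python
def to_binary(n):
    """Convert a non-negative integer to its base-2 string by
    repeatedly halving it and recording ON (1) or OFF (0)."""
    if n == 0:
        return "0"
    bits = []
    while n > 0:
        bits.append(str(n % 2))  # is the lowest "switch" ON or OFF?
        n //= 2                  # move on to the next switch
    return "".join(reversed(bits))

print(to_binary(13))  # prints 1101, i.e. 8 + 4 + 0 + 1
```

Every decimal number we type ultimately gets stored as such a pattern of switch states.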
So, why do we need to code? :
Now we know the power of binary numbers, bits, and bytes in the computer world. Next comes arranging these bits and bytes so that we can get the computer to perform certain tasks, like multiplying numbers. For that, scientists came up with sequences of 1s and 0s to control the flow of current to the switches within the computer and carry out everyday computation, hence the names "computer" (for the computing machine) and "code" (for the sequence that operates the switches). The earliest recorded computer program dates back to the early 1840s. This is how "machine language", the raw sequences of 1s and 0s, came into being. For humans, machine language became tedious to work with because of the difficulty of handling millions of 1s and 0s. Thus more human-friendly, high-level computer languages were born, e.g., FORTRAN, Pascal, C, C++, Java, etc. (assembly language sits a step closer to the machine, but even it replaced raw 1s and 0s with readable mnemonics). None of these languages was perfect, which kept the search going for that one language easy enough to code in, and so today we have a great number of programming languages. These high-level languages need a program called a "compiler" to translate them into low-level machine language, which ultimately drives the computer.
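You can even watch this translation happen. Python technically compiles to bytecode for a virtual machine rather than directly to machine language, but its built-in `dis` module still gives a taste of the lower-level instructions hiding beneath a high-level line of code:

```python
# Peek beneath a high-level function: Python's own compiler turns it
# into low-level instructions before anything runs.
import dis

def add(a, b):
    return a + b

dis.dis(add)  # prints instructions such as LOAD_FAST for each operand
```

Compilers for languages like C go one step further and emit the actual 1s and 0s the processor's switches respond to.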
To conclude, let me sum up the whole journey. We looked at binary numbers and their importance, at codes, and at the need to write them. The true nature of computer programming is to draw the best of the driving logic out of the programmer and help make the world of technology a better one. This is exactly why we need computer programming.
Abdul Wasay Bhatt is a Mechatronics Engineering student from Pakistan. If you have any queries or you want to personally contact him, you can do so by clicking the URL given above.
Also, do not forget to like, share and comment about this post below.