1957: FORTRAN Delivered to Westinghouse-Bettis for the First Time on an IBM 704
FORTRAN, which stands for FORmula TRANslation, is credited as the oldest high-level programming language still in use in a variety of forms today.
Its origins date back to 1953, when John Backus of IBM submitted a proposal for an alternative to assembly-language programming. Beyond simplifying the process of programming, the project produced one of the first compilers, which meant that programs could be written in a notation independent of any particular machine; before this, each machine had to be programmed using its own specific assembly-language instructions.
FORTRAN was geared towards scientific and mathematical applications, used for everything from weather prediction to fluid dynamics. It was delivered on the IBM 704 for the first time in April 1957 at Westinghouse-Bettis, where it would be used for nuclear reactor design.
The IBM team that created FORTRAN, led by John Backus. Image courtesy of IBM.
FORTRAN's biggest achievement was making programming more accessible: users no longer had to learn a different, complicated assembly language for each machine. FORTRAN could be learned once and then used across systems. Its more "natural" syntax made programming more intuitive, and it sped up development by reducing the amount of code needed by roughly a factor of 20. Keep in mind that this was back when programs were still written on punch cards.
It was quickly adopted across multiple industries and rapidly became a standard. There are many who consider the invention of FORTRAN to be one of the greatest milestones in computer science history.
1959: LISP Unveiled by John McCarthy
LISP, which stands for LISt Processor, is one of the oldest high-level programming languages still used today. In particular, it has long been popular in Artificial Intelligence research. It was developed by John McCarthy as a practical notation for recursive functions over symbolic expressions, intended as a more usable alternative to Turing-machine formalisms for describing algorithms.
In 1960, McCarthy published “Recursive Functions of Symbolic Expressions and Their Computation by Machine, Part I”, which describes LISP’s functions and expressions, as well as its implementation on the IBM 704.
John McCarthy who first described LISP. Image courtesy of Stanford News.
The original paper uses a notation called "m-expressions", which is written with square brackets. In practice, however, the "s-expression" form using parentheses came to be used far more widely, e.g., car[cons[A;B]] vs. (car (cons A B)).
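The cons and car operators above build and take apart pairs, the basic building blocks of LISP lists. As an aside, the names car and cdr come from the IBM 704's "address" and "decrement" register fields. Here is a minimal sketch of those primitives modeled in Python; the tuple representation is an illustrative assumption, not how the 704 implementation worked:

```python
# Minimal model of LISP's cons/car/cdr primitives using Python tuples.
# The names mirror McCarthy's paper; the representation is illustrative only.

def cons(a, b):
    """Build a pair (a 'cons cell') from two values."""
    return (a, b)

def car(pair):
    """Return the first element of a cons cell."""
    return pair[0]

def cdr(pair):
    """Return the second element of a cons cell."""
    return pair[1]

# (car (cons A B)) evaluates to A:
print(car(cons("A", "B")))  # prints A
```

Lists in LISP are then just chains of cons cells, with the car holding an element and the cdr pointing to the rest of the list.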
Since then, many dialects of LISP have been developed. Beginning in the 1980s, an effort to unify and standardize the language produced "Common Lisp", which became an ANSI standard in 1994.
1965: Intel Co-Founder Gordon Moore Describes What Would Become "Moore's Law"
Gordon Moore, co-founder of Intel, first described what would become known as "Moore's Law" in his 1965 publication "Cramming More Components onto Integrated Circuits". In it, he laid out his predictions for the future of integrated electronics: more powerful computers organized in novel ways, such as distributed memory, "automatic controls for automobiles", and overall lower costs for faster, more powerful computers.
Gordon Moore. Image courtesy of Intel.
Most famously, Moore observed that the number of components on an integrated circuit had roughly doubled every year since the IC's invention in 1958. In 1965, he predicted that this trend would continue at the same pace for at least another decade, through 1975. He noted that beyond that time frame it was not certain whether the trend would hold, possibly slowing down or speeding up.
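The scale of that prediction is easy to put into numbers. A short sketch of the underlying exponential (the doubling periods below are illustrative; Moore's 1965 paper observed roughly one doubling per year, while the commonly quoted modern formulation is one every two years):

```python
# Illustrative arithmetic behind Moore's observation: compounded doubling.
# The chosen doubling periods are assumptions for demonstration only.

def growth_factor(years, doubling_period_years):
    """How many times the component count multiplies over a time span."""
    return 2 ** (years / doubling_period_years)

# Over the decade Moore looked ahead (1965 -> 1975):
print(growth_factor(10, 1))  # doubling every year -> 1024.0x
print(growth_factor(10, 2))  # doubling every two years -> 32.0x
```

Either way, the compounding is dramatic: a decade of annual doubling implies a thousandfold increase in components per chip.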
For many decades, "Moore's Law" was held up as the standard for how integrated circuit technology would progress. While we appear to be approaching physical limits, efforts continue to shrink transistors and increase their density. At the same time, high-performance computing, parallel processing, and similar techniques are being used to keep improving performance while lowering costs and energy requirements.
1992: Windows 3.1 Is Released and Becomes Ubiquitous on the IBM Personal Computer
In March 1992, Microsoft began an advertising campaign to promote its newest operating system: Windows 3.1. The OS shipped on floppy disks or, later, on the cutting-edge storage medium of the day, the CD-ROM. The minimum system requirements for Windows 3.1 were:
- Intel 80386 processor (although Windows 3.1 was a 16-bit OS, it offered 32-bit disk access when running in 386 Enhanced Mode)
- 2MB RAM
- 8MB of HDD space
- 3.5” or 5.25” floppy disk drive
- VGA video
- MS-DOS 3.1 or later
- An IBM compatible system equipped with a mouse and keyboard
The new OS would come with a slew of innovative new features that we take for granted today:
- TrueType Font System: This system gave font developers greater precision and control over how their fonts were displayed, right down to the pixel. This made Windows 3.1 an ideal platform for publishers and font developers, and was a viable competitor to Adobe’s Type 1 fonts.
- Backward Compatibility: Windows 3.1 was designed to be backward compatible with previous versions of Windows, including Windows 3.0.
- The Registry: The registry made its first appearance in Windows 3.1, providing a central, low-level store where customized operating-system settings could be changed. The registry is still a feature of the latest Windows operating systems.
- Drag and Drop: Icons representing files or programs could be dragged and dropped in Windows 3.1 for the first time. Even better, dropping an icon onto another appropriate icon could trigger an action, such as dropping a document file onto the printer icon to print that document.
Probably the most important aspect of Windows 3.1 was its ubiquity: it became the standard operating system on IBM-compatible personal computers, pre-installed and distributed widely, making it one of the most commonly used operating systems of its era.
Check in again next month for another roundup!