IT Security History and Architecture Part 3 of 6

Monday, August 16, 2010

Dr. Steve Belovich


This is the third installment of a six-part series on IT Security History and Architecture (Part One)(Part Two).

3.0 A Quick History of Computer and O/S Technology

3.1 The 1950s

In the 1950s, IBM dominated the landscape and hardware ISAs (Instruction Set Architectures) were changing constantly, which meant that the “operating software” (the precursor to the operating system) had to be redesigned for each new machine.

The concept of an operating system (O/S) was introduced and it was usually an “add-on” because the profits came from hardware sales. The O/S handled single-user and/or batch operations and provided very simple file systems and related file services.

The need for security did not exist because there was no remote access, and physical security of the building and the computer hardware equated to IT system security. Physical access meant that you were authorized – simple and effective.

3.2 The 1960s and 1970s

In the 1960s and early 1970s, computing hardware was moving from “one-at-a-time” construction to automated production. Operating system (O/S) concepts were evolving, and microprogramming (first described by Maurice Wilkes in 1951) was commercialized by IBM in 1964 with its System/360 series of machines (a descendant of that architecture later flew on the Space Shuttle).

OS/360 was introduced at nearly the same time. The IBM System/370 followed in 1970 and had the first interruptible multi-cycle instructions.

So-called “mini-computers” (e.g., the PDP-8, PDP-12, and the PDP-11 series) were introduced by DEC in the mid-1960s and early 1970s. These machines fit into 19-inch racks, were air-cooled, and could run on 208V (three-phase), 220V (split-phase) or 120V (single-phase) power.

They used 7400-series TTL logic (SSI/MSI chips) and could be mass-produced. (I actually own a few of these machines, which I repaired as a grad student over 25 years ago.)

During this time, O/S technology made the leap from single-user/batch to multi-user and time-sharing.

This leap – and it was a big one – meant that hardware & software mechanisms had to be invented to provide protection so that no user's program could “escape from its playpen” and interfere with the operation of other users' programs or the system itself.

Thus the concept of security, in the form of memory protection, was introduced at both the hardware and software levels. The concept of memory management was also introduced, creating “virtual memory” that could be mapped onto different regions of physical memory “on demand”.

3.3 Programmers Are Born

These concepts allowed the logical (and later the physical) segregation of programming from hardware design. People writing the instructions (the “code” or the “software”) for the computer did not need to know exactly what the machine really did or how it really did it.

Thus, “programmers” were created who could do their job without being hardware engineers, and the field of “Computer Science” was born. This field originated out of electrical engineering and mathematics, but its modern incarnation has forgotten large portions of those disciplines.

Further, the relatively small installed base of machines allowed for a lot of experimentation so that good ideas could be brought to market and bad ideas were quickly buried. Multi-user protection mechanisms improved, memory management got “smart” and operating system services expanded greatly.

3.4 Compilers Get Smart Because Hardware is Smarter

Compilers also got smart, with improved optimization techniques that took advantage of the tremendous advances in hardware technology. Those hardware advances include multi-level set-associative cache RAM, pipelined CPUs, instruction pre-fetching, scoreboarding, “eager” branch execution, multi-port I/O, and the migration of more functionality into the firmware and/or the hardware to free the O/S from the details of disk and tape management.

In the late 1970s, the crippling limitation of address space was aggressively addressed by DEC and IBM, which expanded addresses to 32 bits (the VAX architecture) and 44 bits (the ESA architecture), respectively.

Mass-market 64-bit architectures arrived in 1992 with the introduction of the DEC Alpha 21064 microprocessor; others, including Intel and IBM, followed. Compilers lagged but eventually caught up with new, larger data types – including 64-bit integers and 128-bit floating-point numbers – and expanded virtual-address-space management.

3.5 Installed Base Grows – Creating Dependency

Meanwhile, the installed base of computers was exploding at a phenomenal rate. Further, businesses were becoming totally dependent upon these machines and became less tolerant of shutdowns for any reason – including new hardware installation and upgrades.

Software applications were being written with little organization or thought given to the question, “What do we do in a year or two, when we have to expand our capability?” No one really thought that through because it was not budgeted – very scary.

3.6 Installed Base Impedes Technological Advancement

What this meant was that expansion in fundamental computing technology (e.g., the introduction of newer and better Instruction Set Architectures or ISAs) actually slowed down because the sheer size of the installed software and hardware base severely limited new experimentation and discovery.

Although fundamentally better hardware and software designs could be brought to market quickly, the market simply could not absorb them. Shutdowns for any reason became intolerable, whether for maintenance or for a completely new machine and set of software applications.

So, the computer industry continually improved the hardware & software, but the Instruction Set Architectures (ISAs) were largely preserved. The sheer size of the installed software base also prevented rapid change, no matter how “good” that change might have been for the industry.

3.7 The Economics of A New ISA & O/S Does Not Compute

The economics of the installed base have prevented major innovations to ISAs during the past twenty-five years. This “slow-down” was a business necessity to preserve existing operations for customers while still selling them new hardware and software.

It's a real tough sell when you have to tell your customers to throw everything out and buy all new and different stuff – especially when you sold them the stuff that you're now telling them to toss out!

The ISA is essentially the interface between hardware and software and thus had to remain static for upward compatibility purposes. There has been little innovation in this critical area for twenty-five years.

There are too many economic barriers preventing the creation and deployment of the key hardware components required for a secure system, such as support for multi-mode instruction set execution and duplicate register sets.

Any fundamentally new O/S would require the purchase and deployment of an entire set of new apps – which economically simply could not happen. To avoid alienating the customer base, changes had to be made slowly (if at all) to preserve existing architectures.

That meant – and continues to mean – uncomfortable trade-offs between capability and what could be sold. Experimentation and invention in this critical area cannot proceed economically because there is too much already built on top of what is currently deployed. In short, while some evolution is still happening, revolution has almost ceased.

Much more to come, stay tuned......
