
Computer Architecture and Organization: From Software to Hardware

Manoj Franklin

University of Maryland, College Park

© Manoj Franklin 2007

 

Preface

Introduction

 

Welcome! Bienvenidos! Bienvenue! Benvenuto! This book provides a fresh introduction to computer architecture and organization. The subject of computer architecture dates back to the early periods of computer development, although the term was coined more recently.

Over the years many introductory books have been written on this fascinating subject, as the subject underwent many changes due to technological (hardware) and application (software) changes. Today computer architecture is a topic of great importance to computer science, computer engineering, and electrical engineering. It bridges the yawning gap between high-level language programming in computer science and VLSI design in electrical engineering. The spheres of influence exercised by computer architecture have expanded significantly in recent years. A fresh introduction to the subject is therefore essential for modern computer users, programmers, and designers.

 

This book is for students of computer science, computer engineering, electrical engineering, and any others who are interested in learning the fundamentals of computer architecture in a structured manner. It contains core material that is essential to students in all of these disciplines. It is designed for use in a computer architecture or computer organization course typically offered at the undergraduate level by computer science, computer engineering, electrical engineering, or information systems departments. On successful completion of this book you will have a clear understanding of the foundational principles of computer architecture. Many of you may have taken a course in high-level language programming and in digital logic before using this book. We assume most readers will have some familiarity with computers, and perhaps have even done some programming in a high-level language. We also assume that readers have had exposure to preliminary digital logic design.

 

This book will extend that knowledge to the core areas of computer architecture, namely assembly-level architecture, instruction set architecture, and microarchitecture. The WordReference dictionary defines computer architecture as "the structure and organization of a computer's hardware or system software." Dictionary.com defines it as "the art of assembling logical elements into a computing device; the specification of the relation between parts of a computer system." Computer architecture deals with the way in which the elements of a computer relate to each other. It is concerned with all aspects of the design and operation of the computer system. It extends upward into software as a system's architecture is intimately tied to the operating system and other system software. It is almost impossible to design a good operating system without knowing the underlying architecture of the systems where the operating system will run. Similarly, the compiler requires an even more intimate knowledge of the architecture. It is important to understand the general principles behind the design of computers, and to see how those principles are put into practice in real computers. The goal of this book is to provide a complete discussion of the fundamental concepts, along with an extensive set of examples that reinforce these concepts. A few detailed examples are also given for the students to have a better appreciation of real-life intricacies. These examples are presented in a manner that does not distract the student from the fundamental concepts. Clearly, we cannot cover every single aspect of computer architecture in an introductory book. Our goal is to cover the fundamentals and to lay a good foundation upon which motivated students can easily build later. For each topic, we use the following test to decide if it should get included in the text: is the topic foundational? If the answer is positive, we include the topic.

 

Almost every aspect of computer architecture is replete with trade-offs, involving characteristics such as programmability, software compatibility, portability, speed, cost, power consumption, die size, and reliability. For general-purpose computers, one trade-off drives the most important choices the computer architect must make: speed versus cost. For laptops and embedded systems, the important considerations are size and power consumption. For space applications and other mission-critical applications, reliability and power consumption are of primary concern. Among these considerations, we highlight programmability, performance, cost, and power consumption throughout the text, as they are fundamental factors affecting how a computer is designed. However, this coverage is somewhat qualitative, and not intended to be quantitative in nature. Extensive coverage of quantitative analysis is traded off in favor of qualitative explanation of issues. Students will have plenty of opportunity to study quantitative analysis in a graduate-level computer architecture course. Additional emphasis is also placed on how various parts of the system are related to real-world demand and technology constraints. Performance and functionality are key to the utility of a computer system. Perhaps one of the most important reasons for studying computer architecture is to learn how to extract the best performance from a computer. As an assembly language programmer, for instance, you need to understand how to use the system's functionality most effectively. Specifically, you must understand its architecture so that you will be able to exploit that architecture during programming.

 

Coverage of Software and Hardware

Computer architecture/organization is a discipline with many facets, ranging from translation of high-level language programs through design of instruction set architecture and microarchitecture to the logic-level design of computers. Some of these facets have more of a software luster whereas others have more of a hardware luster. We believe that a good introduction to the discipline should give a broad overview of all the facets and their interrelationships, leaving a non-specialist with a decent perspective on computer architecture, and providing an undergraduate student with a solid foundation upon which related and advanced subjects can be built. Traditional introductory textbooks focusing only on software topics or on hardware topics do not fulfill these objectives.

 

Our presentation is unique in that we cover both software and hardware concepts. These include high-level language, assembly language programming, systems programming, instruction set architecture design, microarchitecture design, system design, and digital logic design.

There are four legs that form the foundation of computer architecture: assembly-level architecture, instruction set architecture, microarchitecture, and logic-level architecture.

This book is uniquely concerned with all four legs. Starting from the assembly-level architecture, we carry out the design of the important portions of a computer system all the way to the lower hardware levels, considering plausible alternatives at each level.

Structured Approach

In an effort to systematically cover all of these fundamental topics, the material has been organized in a structured manner, from the high-level architecture to the logic-level architecture. Our coverage begins with a high-level language programmer's view of expressing algorithms in an HLL such as C and moves towards the less abstract levels. Although there are a few textbooks that start from the digital logic level and work their way towards the more abstract levels, in our view the fundamental issues of computer architecture/organization are best learned starting with the software levels, with which most of the students are already familiar. Moreover, it is easier to appreciate why a level is designed in a particular manner if the student knows what the design is supposed to implement.

This structured approach from abstract software levels to less abstract software levels to abstract hardware levels to less abstract hardware levels is faithfully followed throughout the book. We make exceptions only in a few places where such a deviation tends to improve clarity. For example, while discussing ISA (instruction set architecture) design options in Chapter 5, we allude to hardware issues such as pipelining and multiple-issue, which influence ISA design.

For each architecture level we answer the following fundamental questions: What is the nature of the machine at this level? What are the ways in which its building blocks interact? How does the machine interact with the outside world? How is programming done at this level? How is a higher-level program translated/interpreted for controlling the machine at this level? We are confident that after you have mastered these fundamental concepts, building upon them will be quite straightforward.

 

Example Instruction Set

As an important goal of this book is to lay a good foundation for the general subject of computer architecture, we have refrained from focusing on a single architecture in our discussion of the fundamental concepts. Thus, when presenting concepts at each architecture level, great care is taken to keep the discussion general, without tailoring to a specific architecture.

For instance, when discussing the assembly language architecture, we discuss a register-based approach as well as a stack-based approach. When discussing virtual memory, we discuss a paging-based approach as well as a segmentation-based approach. In other words, at each stage of the design, we discuss alternative approaches, and the associated trade-offs. While one alternative may seem better today, technological innovations may tip the scale towards another in the future.

For ease of learning, the discussion of concepts is peppered with suitable examples. We have found that students learn the different levels and their inter-relationships better when there is continuity among many of the examples used in different parts of the book. For this purpose, we have used the standard MIPS assembly language [ref] and the standard MIPS Lite instruction set architecture [ref], a subset of the MIPS-I ISA [ref]. We use MIPS because it is very simple and has had commercial success, both in general-purpose computing and in embedded systems. The MIPS architecture had its beginnings in 1984, and was first implemented in 1985. By the late 1980s, the architecture had been adopted by several workstation and server companies, including Digital Equipment Corporation and Silicon Graphics. Now MIPS processors are widely used in Sony and Nintendo game machines, palmtops, laser printers, Cisco routers, and SGI high-performance graphics engines.

More importantly, some popular texts on Advanced Computer Architecture use the MIPS architecture. The use of the MIPS instruction set in this introductory book will therefore provide good continuity for those students wishing to pursue higher studies in Computer Science or Engineering.

On rare occasions, I have changed some terminology, not to protect the innocent but simply to make it easier to understand.

Organization and Usage of the Book

This book is organized to meet the needs of several potential audiences. It can serve as an undergraduate text, as well as a professional reference for engineers and members of the technical community who find themselves frequently dealing with computing. The book uses a structured approach, and is intended to be read sequentially. Each chapter builds upon the previous ones. Certain sections contain somewhat advanced technical material, and can be skipped by the reader without loss in continuity. These sections are marked with an asterisk. We recommend, however, that even those sections be skimmed, at least to get a superficial idea of their contents.

Each chapter is followed by a "Concluding Remarks" section and an "Exercises" section.

The exercises are particularly important. They help the reader master the material by integrating a number of different concepts. The book also includes many real-world examples, both historical and current, in each chapter. Instead of presenting real-world examples in isolation, such examples are included while presenting the major concepts.

This book is organized into 9 chapters, which are grouped into 3 parts. The first part provides an overview of the subject. The second part covers the software levels, and the third part covers the hardware levels. The coverage of the software levels is not intended to make the readers proficient in programming at these levels, but rather to help them understand what each level does, how programs at the immediately higher level are converted to this level, and how to design this level in a better way.

A layered approach is used to cover the topics. Each new layer builds upon the previous material to add depth and understanding to the reader's knowledge.

Chapter 1 provides an overview of .... It opens with a discussion of the expanding role of computers, and the trends in technology and software applications. It briefly introduces

..... Chapter 2 ... Chapter 3 ..... Most of the material in Chapter 3 should be familiar to readers with a background in computer programming, and they can probably browse through this chapter. Starting with Chapter 4, the material deals with the core issues in computer architecture. Chapter 4 ..... Chapter 5 ..... Chapter 6 .....

The book can be tailored for use in software-centric as well as hardware-centric courses.

For instance, skipping the last chapter (or the last 3 chapters) makes the book suitable for a software-centric course, and skipping chapter 2 makes it suitable for a hardware-centric course.

"If you are planning for a year, sow rice;

if you are planning for a decade, plant trees;

if you are planning for a lifetime, educate people."

- Chinese Proverb

"Therefore, since brevity is the soul of wit, And tediousness the limbs and outward flourishes, I will be brief."

- William Shakespeare, Hamlet

 

Chapter 1: Introduction

 

Let the wise listen and add to their learning, and let the discerning get guidance.

- Proverbs 1:5

We begin this book with a broad overview of digital computers. This chapter serves as a context for the remainder of this book. It begins by examining the nature of the computing process. It then discusses the fundamental aspects of digital computers, and moves on to recent trends in desktop computer systems. Finally, it introduces the concept of the computer as a hierarchical system. The major levels of this hierarchical view are introduced. The remainder of the book is organized in terms of these levels.

"The computer is by all odds the most extraordinary of the technological clothing ever devised by man, since it is an extension of our central nervous system. Beside it the wheel is a mere hula hoop...."

- Marshall McLuhan, War and Peace in the Global Village

Born only a few decades ago, digital computer technology, in concert with telecommunication technology, has ushered us into the information age and is exerting a profound influence on almost every facet of our daily lives1. Most of us spend a substantial time every day in front of a computer (most of it on the internet or on some games!). The rest of the time, we are on the cell phone or some other electronic device with one or more computers embedded within. On a more serious note, we are well aware of the critical role played by computers in flying modern aircraft and spacecraft; in keeping track of large databases such as airline reservations and bank accounts; in telecommunications applications such as routing and controlling millions of telephone calls over the entire world; and in controlling power stations and hazardous chemical plants.

Companies and governmental agencies are virtually crippled when their computer systems go down, and a growing number of sophisticated medical procedures are completely dependent on computers. Biologists are using computers for performing extremely complex computations and simulations. Computer designers are using them extensively for developing tomorrow's faster and denser computer chips. Publishers use them for typesetting, graphical picture processing, and desktop publishing. The writing of this book itself has benefitted substantially from desktop publishing software, especially LaTeX. Thus, computers have taken away many of our boring chores, and have replaced them with addictions such as chatting, browsing, and computerized music.

1 This too shall pass.

 

What exactly is a computer? A computer science definition would be as follows: a computer is a programmable symbol-processing machine that accepts input symbols, processes them according to a sequence of instructions called a computer program, and produces the resulting output symbols. The input symbols as well as the output symbols can represent numbers, characters, pictures, sound, or other kinds of data such as chess pieces. The most striking property of the computer is that it is programmable, making it a truly general-purpose machine. The user can change the program or the input data according to specific requirements. Depending on the software run, the end user "sees" a different machine; the computer user's view thus depends on the program being run on the computer at any given instant. Suppose a computer is executing a chess program. As far as the computer user is concerned, at that instant the computer is a chess player because it behaves exactly as if it were an electronic chess player2. Because of the ability to execute different programs, a computer is a truly general-purpose machine. The same computer can thus perform a variety of information-processing tasks that range over a wide spectrum of applications: for example, as a word processor, a calculator, or a video game by executing different programs on it; a multitasking computer can even simultaneously perform different tasks.

The computer's ability to perform a wide variety of tasks at very high speeds and with high degrees of accuracy is what makes it so ubiquitous.

"The computer is only a fast idiot, it has no imagination; it cannot originate action. It is, and will remain, only a tool to man."

- American Library Association's reaction to the UNIVAC computer exhibit at the 1964 New York World's Fair

2 In the late 1990s, a computer made by IBM called Deep Blue even defeated the then World Chess Champion Garry Kasparov. It is interesting to note, however, that if the rules of chess are changed even slightly (for example, by allowing the king to move two steps at a time), then current computers will have a difficult time, unless they are reprogrammed or reconstructed by humans. In contrast, even an amateur human player will be able to comprehend the new rules in a short time and play a reasonably good game under the new rules!

 

1.1 Computing and Computers

The notion of computing (or problem solving) is much more fundamental than the notion of a computer, and predates the invention of computers by thousands of years. In fact, computing has been an integral aspect of human life and civilization throughout history.

Over the centuries, mathematicians developed algorithms for solving a wide variety of mathematical problems. Scientists and engineers used these algorithms to obtain solutions for specific problems, both practical and recreational. And, we have been computing ever since we entered kindergarten, using fingers, followed later by paper and pencil. We have been adding, subtracting, multiplying, dividing, computing lengths, areas, volumes and many other things. In all these computations, we follow some definite, unambiguous set of rules that have been established. For instance, once the rules for calculating the area of a complex shape have been established (divide it into non-overlapping basic shapes and add up the areas of those shapes), we can calculate the area of any complex shape.
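As a small illustration of the rule just stated, the C sketch below computes the area of a composite shape by splitting it into non-overlapping basic shapes and adding up their areas. The particular shapes, dimensions, and function names are our own illustrative choices, not taken from the text.

    /* Area of a composite shape: split it into basic shapes and add them up. */
    #include <stdio.h>

    static double rect_area(double w, double h)     { return w * h; }
    static double triangle_area(double b, double h) { return 0.5 * b * h; }

    int main(void) {
        /* An L-shaped plot viewed as a 4 x 3 rectangle plus a right triangle. */
        double total = rect_area(4.0, 3.0) + triangle_area(2.0, 3.0);
        printf("Area of composite shape: %.1f\n", total);  /* 12.0 + 3.0 = 15.0 */
        return 0;
    }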

A typical modern-day computing problem is much more complex, but works on the same fundamental principles. Consider a metropolitan traffic control center where traffic video images from multiple cameras are being fed, and a human operator looks at the images and makes various traffic control decisions. Imagine automating this process, and letting a computer do the merging of the images and the making of those decisions! How should we go about designing such a computer system?

 

1.1.1 The Problem-Solving Process

Finding a solution to a problem, irrespective of whether or not we use a computer, involves two important phases, as illustrated in Figure 1.1:

- Algorithm development

- Algorithm execution

We shall take a detailed look at these two phases.

 

1.1.1.1 Algorithm Development

The first phase of computing involves the development of a solution algorithm or a step-by-step procedure that describes how to solve the problem. When we explicitly write down the rules (or instructions) for solving a given computation problem, we call it an algorithm.

An example algorithm is the procedure for finding the solution of a quadratic equation.

Informally speaking, many of the recipes, procedures, and methods in everyday life are algorithms.
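To make the quadratic-equation example concrete, here is a minimal C sketch of that algorithm for real roots; the coefficient values and variable names are arbitrary illustrative choices.

    /* Solve a*x^2 + b*x + c = 0 for real roots, step by step. */
    #include <stdio.h>
    #include <math.h>

    int main(void) {
        double a = 1.0, b = -5.0, c = 6.0;          /* example coefficients   */
        double d = b * b - 4.0 * a * c;             /* step 1: discriminant   */

        if (d < 0.0) {
            printf("No real roots\n");              /* step 2: check its sign */
        } else {
            double r1 = (-b + sqrt(d)) / (2.0 * a); /* step 3: apply formula  */
            double r2 = (-b - sqrt(d)) / (2.0 * a);
            printf("Roots: %g and %g\n", r1, r2);   /* prints 3 and 2         */
        }
        return 0;
    }

Each numbered comment corresponds to one step of the written algorithm; how coarse or fine those steps should be is taken up next.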

What should be the granularity of the steps in an algorithm? This depends on the sophistication of the person or machine who will execute it, and can vary significantly from one algorithm to another; a step can be as complex as finding the solution of a sub-problem, or it can be as simple as an addition/subtraction operation. Interestingly, an addition step itself can be viewed as a problem to be solved, for which a solution algorithm can be developed in terms of 1-bit addition with carry-ins and carry-outs. It should also be noted that one may occasionally tailor an algorithm to a specific set of input data, in which case it is not very general.
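The granularity point can be made concrete in code: the sketch below breaks a single "add two numbers" step into 1-bit additions with carry-ins and carry-outs, exactly the decomposition mentioned above. The 4-bit width, the operand values, and the variable names are illustrative assumptions.

    /* Add two 4-bit numbers one bit at a time, propagating the carry. */
    #include <stdio.h>

    int main(void) {
        unsigned a = 0xB, b = 0x6;    /* 1011 + 0110 */
        unsigned sum = 0, carry = 0;

        for (int i = 0; i < 4; i++) {
            unsigned ai = (a >> i) & 1, bi = (b >> i) & 1;
            unsigned s  = ai ^ bi ^ carry;                          /* sum bit   */
            carry       = (ai & bi) | (ai & carry) | (bi & carry);  /* carry-out */
            sum        |= s << i;
        }
        printf("sum bits = %u, carry-out = %u\n", sum, carry);  /* 1 and 1, i.e. 10001 = 17 */
        return 0;
    }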

Algorithm development has always been done with human brain power, and in all likelihood will continue like that for years to come! Algorithm development has been recorded as early as 1800 B.C., when Babylonian mathematicians at the time of Hammurabi developed rules for solving many types of equations [4]. The word "algorithm" itself was derived from the last name of al-Khwārizmī, a 9th century Persian mathematician whose textbook on arithmetic had a significant influence for more than 500 years.

 

1.1.1.2 Algorithm Execution

Algorithm execution, the second phase of the problem-solving process, means applying a solution algorithm to a particular set of input values, so as to obtain the solution of the problem for that set of input values. Algorithm development and execution phases are generally done one after the other; once an algorithm has been developed, it may be executed any number of times with different sets of data without further modifications.

However, it is possible to do both these phases concurrently, in a lock-step manner! This typically happens when the same person performs both phases, and is attempting to solve a problem for the first time.

The actions involved in algorithm execution can be broken down into two parts, as illustrated in Figure 1.2:

- Sequencing through the algorithm steps: This part involves selecting from the algorithm the next step to be executed.

 

- Executing the next step of the algorithm, as determined by the sequencing part.

[Figure 1.2: Algorithm execution. The sequencing part determines the next step from the algorithm; the execution part executes that step on the input data to produce the output data (results).]

 

For hundreds of years, people relied mainly on human brain power for performing both of these parts. As centuries went by (and the gene pool deteriorated), a variety of computing aids were invented to aid human brains in executing the individual steps of solution algorithms.

The Chinese abacus and the Japanese soroban were two of the earliest documented aids used for doing the arithmetic calculations specified in algorithm steps. The slide rule was a more sophisticated computing aid invented in the early 1600s by William Oughtred, an English clergyman; it helped to perform a variety of computation operations including multiplication and division. Later examples of computing aids included Pascaline, the mechanical adder built in 1642 by the French mathematician Blaise Pascal (to assist his father in adding long columns of numbers in the tax accounting office) and the stepped wheel machine of Gottfried Wilhelm Leibniz in 1672 (which could perform multiplication and division in addition to addition and subtraction).

 

As problems increased in complexity, the number of steps required to solve them also increased accordingly. Several mechanical and electrical calculators were commercially produced in the 19th century to speed up specific computation steps. The time taken by a calculator to perform a computation step was on the order of a few milliseconds, in contrast to the several seconds or minutes taken by a person to perform the same step. It is important to note that even after the introduction of calculators, the sequencing part of algorithm execution was still done by people, who punched in the numbers and the operations. It is also important to note that the granularity of the steps in an algorithm is related to the capabilities and sophistication of the calculating aids used. Thus, a typical calculator lets us specify algorithm steps such as multiplication and square root, for instance, whereas an abacus can perform only more primitive computation steps.

 

1.1.2 Automating Algorithm Execution with Computers

We saw that calculators and other computing aids allowed an algorithm's computation steps to be executed much faster than what was possible without any computing aids. However, the algorithm execution phase still consumed a significant amount of time for the following reasons: (i) the sequencing process was still manual, and (ii) the execution of each computation step involved manual inputting of data into the calculating aid. Both of these limitations can be overcome if the sequencing process is automated by means of an appropriate machine, and the data to be processed is stored in the machine itself. This is the basic idea behind computers.

"Stripped of its interfaces, a bare computer boils down to little more than a pocket calculator that can push its own buttons and remember what it has done."

- Arnold Penzias, Ideas and Information

One of the earliest attempts to automate algorithm execution was that of Charles Babbage, a 19th century mathematics professor. He developed a mechanical computing machine called the Difference Engine. This computer was designed to execute only a single algorithm, the method of (finite) differences using polynomials. Although this algorithm used only addition and subtraction operations, it permitted many complex and useful functions to be calculated. (Chapter 1 of [2] provides a good description of the use of this algorithm in calculating different functions.) The Difference Engine performed the sequencing process automatically, in addition to performing the operation specified in each computation step. This is a major advantage because it allows the algorithm execution phase to be performed at machine speeds, rather than at the speed with which it can be done manually. One limitation of executing a single algorithm, however, is that only a few problems can be solved by a single algorithm; such a computing machine is therefore not useful for general-purpose computing.

 

After a few years, Babbage envisioned the Analytical Engine, another massive brass, steam-powered, mechanical (digital) computing machine. The radical shift that it introduced was to have the machine accept an arbitrary solution algorithm (in punched card format), and execute the algorithm by itself. This approach allows arbitrary algorithms to be executed at the speed of the machine, making the machine a general-purpose computer.

The radical idea embodied in the Analytical Engine was the recognition that a machine could be \programmed" to perform a long sequence of arithmetic and decision operations without human intervention.

"What if a calculating engine could not only foresee but could act on that foresight?"

- Charles Babbage, November 1834

 

The Analytical Engine served as a blueprint for the first real programmable computer, which came into existence a century later3. The basic organization proposed by Babbage is given in Figure 1.3. The main parts are the mill, the store, the printer and card punch, the operation cards, and the variable cards. The instructions were given to the machine on punched cards, and the input data was supplied through the variable cards. Punched cards had recently been invented by Jacquard for controlling weaving looms. Augusta Ada, Countess of Lovelace and a mathematician, was one of the few people who fully understood Babbage's vision. She helped Babbage in designing the Analytical Engine's instruction set, and in describing, analyzing, and publicizing his ideas. In an article published in 1843, she predicted that such a machine might be used to compose complex music, to produce graphics, and would be put to both practical and scientific use. She also created a plan for how the engine might calculate a mathematical sequence known as Bernoulli numbers.

This plan is now regarded as the first "computer program," and Ada is credited as the first computer programmer.

3 Primitive forms of "programmable" machines had existed centuries earlier, dating back to Al-Jazari's musical automata in the 13th century and even to Heron's mobile automaton in the 1st century.

 

"The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform."

"Supposing, for instance, that the fundamental relations of pitched sounds in the science of harmony and of musical composition were susceptible of such expression and adaptations, the engine might compose elaborate and scientific pieces of music of any degree of complexity or extent."

Automated algorithm execution has two side-effects that we need to keep in mind. First, it forces the algorithm development and algorithm execution phases to happen one after the other. It also implies that the algorithm must allow for the occurrence of all possible inputs. Hence computer algorithms are seldom developed to solve just a single instance of a problem; rather they are developed to handle different sets of input values. Thus, in moving from the manual approach to the automated approach, we are forced to sacrifice the versatility inherent in the concurrent development and execution of an algorithm. The big gain, however, is in the speed and storage capabilities offered by the computer machine.

Another side-effect of automated algorithm execution is that for a machine to follow an algorithm, the algorithm must be represented in a formal and detailed manner: the less sophisticated the follower, the more detailed the algorithm needs to be! Detailed algorithms written for computers are called computer programs. By definition, a computer program is an expression of an algorithm in a computer programming language, which is a precise language that can be made understandable to a computer. Because of the extensive efforts involved in developing a computer program to make it suitable for execution in a computer, the program itself is often developed with a view to solve a range of related problems rather than a single problem. For instance, it may not be profitable to develop a computer program to process a single type of bank transaction; instead, it is profitable to develop the program with the ability to process different types of transactions.

 

In spite of these minor side-effects, the idea of using computers to perform automated algorithm execution has been found to have tremendous potential. First of all, once an algorithm has been manually developed to solve a particular problem, computers can be used to execute the algorithm at very high speeds. This makes it possible to execute long-running algorithms that require billions of operations, which previously could never be executed in a reasonable period of time4. In fact, the lion's share of computer development took place in the 1930s and 1940s, mostly in response to computation problems that arose in the World War II effort, such as ballistic computations and code-breaking. The ability to execute complex algorithms in real-time is the leading reason for the acceptance of computers in many embedded applications, such as automobiles and aircraft. Secondly, the same computing machine can be used to execute different algorithms at different times, making it a truly general-purpose computing machine. Thirdly, computers are immune to emotional and physical factors such as distraction and fatigue, and can provide accurate and reliable results almost all the time5. Finally, embedded applications often involve working in hazardous environments where humans cannot go, and computers are good candidates for use in such environments.

 

At this stage, it is instructive to contrast the computing machine with other types of machines such as clocks, which predate the computer by hundreds of years. Such machines are constructed to perform a specific sequence of internal actions to solve a specific problem.

For instance, the hands of a clock go around at fixed speeds; this is in fact a mechanical implementation of an algorithm to keep track of time. A digital clock keeps track of time using a quartz crystal and digital circuitry. Such machines can only do the one thing they are constructed to do. A computing machine, on the other hand, is general-purpose in that it can perform a large variety of widely differing functions, based on the algorithm that it is operating upon at any given time. Because the algorithm can be changed, different functions can be implemented by acquiring a single hardware system and then developing different algorithms to perform different functions in the hardware. Thus, by executing a computer program for keeping track of time, a computer can implement a clock!

4 Interestingly, even now, at a time when computers have become faster by several orders of magnitude, there are prodigies like Shakuntala Devi [] who have demonstrated superiority over computers in performing certain kinds of complex calculations!

5 We should mention that computers are indeed susceptible to some environmental factors such as electrical noise and high temperatures. Modern computers use error-correcting codes and other fault tolerance measures to combat the effect of electrical noise and other environmental effects.

 


This feature is the crucial difference between general-purpose computing machines and special-purpose machines that are geared to perform specific functions.

We have described some of the landmark computers in history. Besides the few computers mentioned here, there are many other precursors to the modern computer. Extensive coverage of these computers can be found in the IEEE Annals of the History of Computing, now in its 28th volume [ref].

1.2 The Digital Computer

"Computers in the future may weigh no more than 1.5 tons."

- Popular Mechanics: Forecasting Advance of Science, 1949

 

We saw how computers play a major role in executing algorithms or programs to obtain solutions for problems. Solving a problem involves manipulating information of one kind or other. In order to process information, any computer, mechanical or electrical, must internally represent information by some means. Some of the early computers were analog computers, in that they represented information by physical quantities that can take values from a continuum, rather than by numbers or bit patterns that represent such quantities.

Physical quantities can change their values by an arbitrarily small amount; examples are the rotational positions of gears in mechanical computers, and voltages in electrical computers.

Analog quantities represent data in a continuous form that closely resembles the information they represent. The electrical signals on a telephone line, for instance, are analog-data representations of the original voices. Instead of doing arithmetic or logical operations, an analog computer uses the physical characteristics of its data to determine solutions. For instance, addition could be done just by using a circuit whose output voltage is the sum of its input voltages.

Analog computers were a natural outcome of the desire to directly model the smoothly varying properties of physical systems. By making use of different properties of physical quantities, analog computers can often avoid time-consuming arithmetic and logical operations.

Although analog computers can nicely represent smoothly changing values and make use of their properties, they suffer from the difficulty in measuring physical quantities precisely, and the difficulty in storing them precisely due to changes in temperature, humidity, and so on. The subtle errors introduced to the stored values due to such noise are difficult to detect, let alone correct.

The 20th century saw the emergence of digital computers, which eventually replaced analog computers in the general-purpose computing domain. Digital computers represent and manipulate information using discrete elements called symbols. A major advantage of using symbols to represent information is resilience to error. Even if a symbol gets distorted, it can still be recognized, as long as the distortion does not cause it to appear like another symbol. This is the basis behind error-correcting features used to combat the effects of electrical noise in digital systems. Representing information in digital format has a side effect, however. As we can only have a limited number of bits, only a finite number of values can be uniquely represented. This means that some of the values can be represented with a high degree of precision, whereas the remaining ones will need to be approximated.
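The approximation issue is easy to observe in practice. The short C sketch below, our own example rather than one from the text, shows that the value 0.1 has no exact binary representation, so small rounding errors creep in and accumulate.

    /* 0.1 cannot be represented exactly with a finite number of bits. */
    #include <stdio.h>

    int main(void) {
        float tenth = 0.1f;                       /* stored only approximately  */
        double sum = 0.0;
        for (int i = 0; i < 10; i++)
            sum += 0.1;                           /* the tiny error accumulates */

        printf("float 0.1 is stored as %.10f\n", tenth);
        printf("0.1 added ten times   = %.17f\n", sum);   /* not exactly 1.0 */
        return 0;
    }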

Electronic versions of the digital computer are typically built out of a large collection of electronic switches, and use distinct voltage states (or current states) to represent different symbols. Each switch can be in one of two positions, on or off; designing a digital computer will therefore be a lot simpler if it is restricted to handling just two symbols.

So most digital computers use only two symbols in their alphabet and are binary systems, although we can design computers and other digital circuits that handle multiple symbols with multiple-valued logic. The two symbols of the computer alphabet are usually represented as 0 and 1; each symbol is called a binary digit or a bit. Computers often need to represent different kinds of information, such as instructions, integers, floating-point numbers, and characters. Whatever the type of information, digital computers represent it by concatenations of bits called bit patterns, just like representing information in English by concatenating English letters and punctuation marks. The finite number of English letters and punctuation marks does not impose an inherent limit on how much we can communicate in English; similarly the two symbols of the computer alphabet do not place any inherent limits on what can be communicated to the digital computer. Notice, however, that information in the computer language won't be as cryptic as in English, just like information in English is not as cryptic as in Chinese (which has far more symbols).
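The following sketch makes the bit-pattern idea concrete: one and the same 32-bit pattern is printed as an integer, as a floating-point number, and as characters, depending purely on how the program chooses to interpret it. The particular pattern is an arbitrary illustrative value.

    /* One bit pattern, three interpretations. */
    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>

    int main(void) {
        uint32_t bits = 0x00216948;              /* one 32-bit pattern            */

        float as_float;
        memcpy(&as_float, &bits, sizeof bits);   /* reinterpret as floating point */

        char as_chars[5] = {0};
        memcpy(as_chars, &bits, sizeof bits);    /* reinterpret as character data */

        printf("as an integer: %u\n", bits);                     /* 2189640       */
        printf("as a float:    %g\n", as_float);                 /* a tiny number */
        printf("as characters: %s (byte order is machine-dependent)\n", as_chars);
        return 0;
    }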

"Even the most sophisticated computer is really only a large, well-organized volume of bits."

- David Harel, Algorithmics: The Spirit of Computing

By virtue of their speed and other nice properties, these electronic versions completely replaced mechanical and electromechanical versions. At present, the default meaning of the term "computer" is a general-purpose automatic electronic digital computer.

 

1.2.1 Representing Programs in a Digital Computer: The Stored Program Concept

We saw that a computer solves a problem by executing a program with the appropriate set of input data. How is this program conveyed to the computer from the external world? And, how is it represented within the computer? In the ENIAC system developed at the University of Pennsylvania in the early 1940s, for instance, the program was a function of how its electrical circuits were wired, i.e., the program was a function of the physical arrangement of the cables in the system. The steps to be executed were specified by the connections within the hardware unit. Every time a different program needed to be executed, the system had to be rewired. Conveying a new program to the hardware sometimes took several weeks! Other early computers used plug boards, punched paper tape, or some other external means to represent programs. Developing a new program involved re-wiring a plug board, for instance.

And, loading a program meant physically plugging in a patch board or running a paper tape through a reader.

A marked change occurred in the mid-1940s when it was found that programs could be represented inside computers in the same manner as data, i.e., by symbols or bit patterns.

This permits programs to be stored and transferred like data. This concept is called the stored program concept, and was first described in a landmark paper by Burks, Goldstine, and von Neumann in 1946 [1]. In a digital computer implementing the stored program concept, a program will be a collection of bit patterns. When programs are represented and stored as bit patterns, a new program can be conveyed to the hardware very easily.

Moreover, several programs can be simultaneously stored in the computer's memory. This makes it easy not only to execute different programs one after the other, but also to switch from one program to another and then back to the first, without any hardware modification.

Stored program computers are truly "general-purpose," as they can be easily adapted to do different types of computational and information storage tasks. For instance, a computer can instantaneously switch from being a word processor to a telecommunications terminal, a game machine, or a musical instrument! Right from its inception, the stored program concept was found to be such a good idea that it has been the basis for virtually every general-purpose computer designed since then. In fact it has become so much a part of the modern computer's functioning that it is not even mentioned as a feature!

 

In a stored program computer, the program being executed can even manipulate another program as if it were data: for example, load it into the computer's memory from a storage device, copy it from one part of memory to another, and store it back on a storage device.

Altering a program becomes as easy as modifying the contents of a portion of the computer's memory. The ability to manipulate stored programs as data gave rise to compilers and assemblers that take programs as input and translate them into other languages.

The advent of compilers and assemblers has introduced several additional steps in solving problems using modern digital computers. Figure 1.3 depicts the steps involved in solving a problem using today's computers. First, an algorithm, or step-by-step procedure, is developed to solve the problem. Then this algorithm is expressed as a program in a high-level programming language by considering the syntax rules and semantic rules of the programming language. Some of the common high-level languages are C, FORTRAN, C++, Java, and Visual Basic.

 

The source program in the high-level language is translated into an executable program (in the language of the machine) using programs such as compilers, assemblers, and linkers.

During this compilation process, syntax errors are detected, and are then corrected by the programmer; the program is then re-compiled. Once all syntax errors are corrected, the compiler produces the executable program. The executable program can be executed with a set of input values on the computer to obtain the results. Semantic errors manifest as run-time errors, and are corrected by the programmer.
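As a small illustration of the two error classes, consider the C fragment below; the file name and compiler command mentioned in its comments are typical examples rather than anything prescribed by the text.

    /* Syntax errors versus semantic errors. */
    #include <stdio.h>

    int main(void) {
        int count = 10;
        int total = 0;

        /* A syntax error, e.g. dropping the semicolon after "int count = 10",
         * would be reported when this source file is compiled
         * (for instance with: cc average.c -o average).                       */

        /* A semantic error shows up only at run time: if count were 0 below,
         * the division would fail even though the program compiles cleanly.   */
        int average = total / count;

        printf("average = %d\n", average);
        return 0;
    }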

 

1.2.2 Basic Software Organization

 

As discussed in the previous section, today's computers use the stored program concept.

Accordingly the software consists of symbols or bit patterns that can be stored in storage devices such as CD-ROMs, hard disks, and floppy disks. A program consists of two parts, instructions and data, both of which are represented by bit patterns. The instructions indicate specific operations to be performed on individual data items. The data items can be numeric or non-numeric.

 

It is possible to write stand-alone programs that can utilize and manage all of the system resources, so as to perform the required task. This is commonly done in controllers and embedded computers, which typically store a single program in a ROM (read-only memory), and run the same program forever. In the mid and higher end of the computer spectrum, starting with some embedded computers, a dichotomy is usually practiced, however, for a variety of reasons. Instead of writing stand-alone programs that have the ability to access and control all of the hardware resources, the access and control of many of the hardware resources (typically I/O devices) are regulated through a supervisory program called the operating system. When a program needs to access a regulated hardware resource, it requests the operating system, which then provides the requested service if it is a legitimate request. This dichotomy has led to the development of two major kinds of software, user programs and kernel programs, as shown in Figure 1.4.
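A minimal sketch of that dichotomy in C appears below: the user program never touches the disk hardware directly, but asks the operating system (here through the standard C library, which issues the underlying system calls) to open and read a file. The file name is a hypothetical example.

    /* A user program requesting I/O service from the operating system. */
    #include <stdio.h>

    int main(void) {
        FILE *f = fopen("report.txt", "r");   /* request: the OS opens the file  */
        if (f == NULL) {
            perror("fopen");                  /* the OS refused, or no such file */
            return 1;
        }

        int c;
        while ((c = fgetc(f)) != EOF)         /* each read is serviced by the OS */
            putchar(c);

        fclose(f);                            /* release the regulated resource  */
        return 0;
    }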

The operating system is one of the most important pieces of software to go into a modern computer system. It provides other programs a uniform software interface to the hardware resources.

 

[Figure 1.5: Basic software organization in a digital computer: user programs (application software) and kernel programs (operating system).]

In addition to providing a standard interface to system resources, in multitasking environments, the operating system enables multiple user programs to share the hardware resources in an orderly fashion6. This sharing increases overall system performance, and ensures security and privacy for the individual programs. To do this sharing in a safe and efficient manner, the operating system is the software that is "closest to the hardware". All other programs use the OS as an interface to shared resources and as a means to support sharing among concurrently executing programs.

 

A hardware timer periodically interrupts the running program, allowing the processor to run the operating system. The operating system then decides which of the simultaneously active application programs should be run next on the processor; it makes this decision with a view to minimizing wasted processor time. Peripheral devices also interrupt the running program, at which times the operating system intervenes and services the devices.

 

For similar reasons, memory management and exception handling functions are also typically included in the operating system. Memory management involves supporting a large virtual memory address space with a much smaller physical memory, and also sharing the available physical memory among the simultaneously active programs. Exception handling involves dealing with situations that cause unexpected events such as arithmetic overflow and divide by zero.
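The sketch below gives a user-level glimpse of exception handling on a POSIX-style system, where the operating system turns a divide-by-zero trap into a signal that the program may catch. The details, and even whether the trap is delivered this way, vary by system, so treat this as an illustrative assumption rather than a portable recipe.

    /* Catching an arithmetic exception delivered by the operating system. */
    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>

    static void on_fpe(int sig) {
        (void)sig;
        fprintf(stderr, "caught arithmetic exception, exiting\n");
        exit(1);
    }

    int main(void) {
        signal(SIGFPE, on_fpe);          /* ask the OS to deliver SIGFPE to us */

        volatile int zero = 0;           /* volatile keeps the compiler honest */
        int result = 42 / zero;          /* integer divide by zero traps here  */

        printf("result = %d\n", result); /* normally never reached             */
        return 0;
    }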

 

1.2.3 Basic Hardware Organization

Even the most complex software, with its excellent abstraction and generality features, is only like the mental picture an artist has before creating a masterpiece. By itself it does not solve any problem. For it to be productive, it must be eventually executed on suitable hardware with proper data, just like the artist executing his/her mental picture on a suitable canvas with proper paint. The hardware is thus an integral part of any computer system.

"You'll never plow a field by turning it over in your mind."

- An Irish Proverb

While nearly every class of computer hardware has its own unique features, from a functional point of view most computers share the same basic organization.

6 Even in multitasking computers, hardware diagnostic programs are often run entirely by themselves, with no intervention from the operating system.

 

This organization consists of three functionally independent parts: the CPU (central processing unit), the memory unit, and the input/output unit. The actions performed by the computer are controlled and coordinated by the program that is currently being executed by the CPU. The input/output unit is a collection of diverse devices that enable the computer to communicate with the outside world. Standard input/output devices include the keyboard, the mouse, the monitor, and so on. Programs and data are brought into the computer from the external world using the input devices and their controllers. The input unit's function is to accept information from human users or other machines, through devices such as the keyboard, the mouse, the modem, and the actuators. The results of computations are sent to the outside world through the output unit. Some of the input/output devices are storage devices, such as hard disks, CD-ROMs, and tapes, which can store information for an indefinite period of time.

 

When the program and data are ready to be used, they are copied into the memory unit from either the external environment or a storage device. The memory unit stores two types of information: (i) the instructions of the program being executed, and (ii) the data for the program being executed. The CPU executes a memory-resident program by reading the program instructions and data from the memory. The execution of the program is carried out by the CPU's control unit, which reads each instruction in the program, decodes the instruction, and causes it to be executed in the data path. The control unit is the brain of the system, and behaves like a puppeteer who pulls the right strings to make the puppets behave exactly as needed.
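To make the control unit's read-decode-execute cycle concrete, here is a toy software model of that loop for an imaginary accumulator machine; the instruction encoding and opcode names are invented purely for illustration and do not correspond to any real processor discussed in the book.

    /* A toy fetch-decode-execute loop for an imaginary accumulator machine. */
    #include <stdio.h>

    enum { OP_LOAD = 1, OP_ADD = 2, OP_PRINT = 3, OP_HALT = 4 };

    int main(void) {
        /* The "memory unit": a stored program, one instruction per word,
         * with the opcode in the high byte and the operand in the low byte. */
        unsigned short memory[] = {
            (OP_LOAD  << 8) | 7,     /* acc = 7       */
            (OP_ADD   << 8) | 5,     /* acc = acc + 5 */
            (OP_PRINT << 8),         /* output acc    */
            (OP_HALT  << 8)
        };

        unsigned pc = 0;             /* program counter */
        int acc = 0;                 /* accumulator     */

        for (;;) {
            unsigned short inst = memory[pc++];     /* fetch  */
            unsigned op  = inst >> 8;               /* decode */
            unsigned arg = inst & 0xFF;

            switch (op) {                           /* execute */
            case OP_LOAD:  acc = (int)arg;       break;
            case OP_ADD:   acc += (int)arg;      break;
            case OP_PRINT: printf("%d\n", acc);  break;
            case OP_HALT:  return 0;
            }
        }
    }

The program prints 12; a real control unit performs the same fetch, decode, and execute steps in hardware, driving the data path instead of a switch statement.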

 

It is interesting to compare and contrast this organization with that of a human being's information processing system, which among other things, involves the brain. The main similarity lies in the way information is input and output. Like the digital computer, the human information processing system obtains its inputs through its input unit (the sense organs), and provides its outputs through its output unit by way of speech and various motions.

 

The dissimilarity lies both in the organization of the remaining parts and in the way information is stored and processed. All of the information storage and processing happens in a single unit, the brain. Again, the brain stores information not as 0s and 1s in memory elements, but instead by means of its internal connectivity. Information processing is done in the brain in a massively parallel manner. This is in contrast to how information processing is done in a digital computer, where the information is stored in different memory units from where small pieces are brought into the CPU and processed7.

 

1.2.4 Software versus Hardware

Software consists of abstract ideas, algorithms, and their computer representations, namely programs. Hardware, in contrast, consists of tangible objects such as integrated circuits, printed circuit boards, cables, power supplies, memories, and printers. Software and hardware aspects are intimately tied together, and to achieve a good understanding of computer systems, it is important to study both, especially how they integrate with each other.

Therefore, the initial portions of this book deal with software and programming, and the latter portions deal with the hardware components. This introductory chapter introduces a number of software and hardware concepts, and gives a broad overview of the fundamental aspects of both topics. More detailed discussions follow in subsequent chapters.

The boundary between the software and the hardware is of particular interest to systems programmers and compiler developers. In the very first computers, this boundary, the instruction set architecture, was quite clear; the hardware presented the programmer with an abstract model that took instructions from a serial program one at a time and executed them in the order in which they appear in the program. Over time, however, this boundary blurred considerably, as more and more hardware features were exposed to the software, and hardware design itself came to involve software programming techniques. Nowadays, it is often difficult to tell software and hardware apart, especially at the boundary between them. In fact, a central theme of this book is: Hardware and software are logically equivalent.

 

Any operation performed by software can also be built directly into the hardware. Embedded systems, which are more specialized than their general-purpose counterpart, tend to do more through hardware than through software. In general, new functionality is first introduced in software, as it is likely to undergo many changes. As the functionality becomes more standard and is less likely to change, it is migrated to hardware.

"Hardware is petrified software."

- ????

Of course, the reverse is also true: any instruction executed by the hardware can also be simulated in software. Suppose an end user is using a computer to play a video game. It is possible to construct an electronic circuit to directly handle video games, but this is seldom done. Instead a video game program is executed to simulate a video game. The decision to put certain functions in hardware and others in software is based on such factors as cost, speed, reliability, and frequency of expected changes. These decisions change with trends in technology and computer usage.

7 Research is under way to develop computers made from quantum circuits, and even biological circuits. In the next decade, we may very well have computers made with such "hardware", and working with different computation models.

 

1.2.5 Computer Platforms

Classification is fundamental to understanding anything that comes in a wide variety. Automobiles can be classified according to manufacturer, body style, pickup, and size. Students often classify university faculty based on their teaching style, sense of humor, and strictness of grading. They classify textbooks according to price, contents, and ease of understanding.

Likewise, computers come in various sizes, speeds, and prices, from small-scale to large scale.

Table 1.1 gives a rough categorization of today's computers. This categorization is somewhat idealized. Within each category, there is wide variability in features and cost; in practice the boundary between two adjacent categories is also somewhat blurred. The approximate price figures in the table are only intended to show order of magnitude differences between different categories. All computers are functionally similar, irrespective of where they line up in the spectrum. The general principles of computer architecture and organization are the same for the entire computer spectrum, from workstations to multiprocessors and distributed computer systems.

At one end of the spectrum we have disposable computers like the ones used in greeting cards, inexpensive watches, and other similar applications. These are quite inexpensive because they use a single chip with small amounts of memory, and are produced in large quantities. Then there are a wide variety of embedded computers, used in applications such as automobiles and home appliances. The entertainment PCs are computer systems that are optimized for games, personal communications, and video playback. They typically have high-quality graphics, video, and audio so as to support high clarity and realism.

Desktop computers and laptop computers are typically intended for a single user to run applications such as word processing, web browsing, and receiving/sending email. These computers come with different features and costs. In the immediately higher category, we have servers. A server is a high-performance computer that serves as a gateway in a computer network. At the other end of the spectrum, we have the supercomputers, which are used for applications involving very complex calculations, such as weather prediction and nuclear explosion modeling. The lower end of the spectrum often provides the best price/performance ratio, and the decision on which system to purchase is often dictated by such issues as software and object code compatibility.

 

1.3 A Modern Computer System

As discussed earlier, computers come in various sizes and kinds. Among these, perhaps the most commonly seen, and the one that comes to mind most vividly when one thinks of a computer, is the desktop computer. Desktop computers are designed to be truly general-purpose. For this reason, we provide a detailed description of a typical desktop computer system in this section. In fact, many of the issues discussed here are applicable to all members of the computer spectrum.

 

1.3.1 Hardware

A desktop computer system has a system unit, which is the case or box that houses the motherboard, other printed circuit boards, the storage devices, and the power supply. The system unit is generally designed so that it can be easily opened to add or replace modules. The different components in the system unit are typically connected together using a bus, which is a set of wires for transferring electrical signals. Each printed circuit board houses a number of chips, some of which are soldered and the rest plugged into the board. The latter arrangement permits the user to upgrade the computer components.

Circuits etched into the boards act like wires, providing a path for transporting data from one chip to another.

 

Processor: The processor, also called the central processing unit (CPU), is perhaps the most important part of a computer. It carries out the execution of the instructions of a program.

 

Chip Sets: The chipsets provide hardware interfaces for the processor to interact with other devices, such as DRAM and graphics cards.

Motherboard: The motherboard is the main printed circuit board, and holds the computer's processor chip(s), ROM chips, RAM chips, and several other key electronic components.

The processor is an important part of a computer, and can be a single chip or a collection of chips. ROM chips typically contain a small set of programs that start the computer, run system diagnostics, and control low-level input and output activities. These programs are collectively called the BIOS (basic input output system) in PCs. The instructions in the ROM chips are permanent, and the only way to modify them is to reprogram the ROM chips. RAM chips are volatile and hold programs and data that are temporary in nature.

A battery powered real-time clock chip keeps track of the current date and time. The motherboard also typically contains expansion slots, which are sockets into which expansion cards such as video card, sound card, and internal modem, can be plugged in. An expansion card has a card edge connector with metal contacts, which when plugged into an expansion slot socket, connect the circuitry on the card to the circuitry on the motherboard. The number of expansion slots in the motherboard determines its expandability.

 

Storage Devices: The commonly used storage devices are floppy disk drives, hard disk drives, CD-ROM drives, and ZIP drives. A floppy disk drive is a device that reads and writes data on floppy disks. A typical floppy disk drive uses 3.5-inch floppy disks, each of which can store up to 1.44 MB. A hard disk drive can store billions of bytes on a non-removable disk platter. A CD-ROM drive is a storage device that uses laser technology to read data from a CD-ROM. The storage devices are typically mounted in the system unit.

The ones involving removable media such as the floppy disk drive, the CD-ROM drive, and the ZIP drive are mounted on the front side of the system unit, and the hard disk drives are typically mounted inside the system unit.

 

Input/Output Devices: Two of the commonly used input devices in a desktop computer are the keyboard and the mouse. A computer keyboard looks similar to that of a typewriter, with the addition of number keys, as well as several additional keys that control computer-specific tasks. The mouse is useful for manipulating objects depicted on the screen. Another commonly used input device is the microphone. The primary output device in a desktop computer is the monitor, a display device that forms an image by converting electrical signals from the computer into points of colored light on the screen. Its functioning is very similar to that of a television picture tube, but it has a much higher resolution so that a user sitting at close quarters can clearly see computer-generated data such as text and images. Other frequently used output devices are the printer and the speakers.

 

Device Controllers: Each device (keyboard, mouse, printer, monitor, etc.) requires special controller circuitry for transferring data from the processor and memory to the device, and vice versa. A device controller is designed either as a chip placed on the motherboard or as a printed circuit board plugged into an expansion slot of the motherboard. The peripheral devices are connected to their respective controllers in the system unit using special cables that attach to sockets called expansion ports. The ports are located on the back of the system unit and provide connections through holes in the case. Parallel ports transfer several bits simultaneously and are commonly used to connect printers to the computer. Serial ports transfer a single bit at a time, and are commonly used to connect mice and communication equipment to the computer. Device controllers are very complex. Each logical command from the processor must typically be decomposed into a long sequence of low-level commands to trigger the actions to be performed by the device and to supervise the progress of the operation by testing the device's status. For instance, to read a word from a disk, the disk controller generates a sequence of commands to move the read/write arm of the disk to the correct track, await the rotational delay until the correct sector passes under the read/write arm, transfer the word, and check for a number of possible error conditions. A sound card contains circuitry to convert digital signals from the computer into sounds that play through speakers or headphones connected to the expansion ports of the card. A modem card connects the computer to the telephone system so as to transport data from one computer to another over phone lines. A network card, on the other hand, provides the circuitry to connect a computer to other computers on a local area network.

 

1.3.2 Software

A desktop computer typically comes with pre-installed software. This software can be divided into two categories: application software and systems software.

Application Software: Application programs are designed to satisfy end-user needs by operating on input data to perform a given job, for example, to prepare a report, update a master payroll file, or print customer bills. Application software may be packaged or custom. Packaged software consists of programs pre-written by professional programmers, and is typically offered for sale on a floppy disk or CD-ROM. Custom software consists of programs written for a highly specialized task.

Systems Software: Systems software enables the application software to interact with the computer, and helps the computer manage its internal and external resources. Systems software is required to run application software; however, the converse is not true. Systems software can be classified into three types: utility programs, language translators, and the operating system. Utility programs are generally used to support, enhance, or expand the development of application programs. Examples include editors and programs for merging files.

 

A language translator or compiler is a software program that translates a program written in a high-level language such as C into machine language, which the hardware can directly execute. Thus a compiler provides the end user with the capability to write programs in a high-level language.

 

Operating System: The operating system is a major component of the systems software.

Desktop operating systems allocate and control the use of all hardware resources: the processor, the main memory, and the peripheral devices. They also add a variety of new features, above and beyond what the hardware provides. The shell provides the end user with a more "capable" machine, in that the computer system provides a direct means of specifying commands by typing them on a keyboard. The GUI (graphical user interface) goes one step further by providing the user with a graphical view of the desktop, and letting the user enter commands by clicking on icons. The multitasking feature of the OS provides the user with the capability to run multiple tasks "concurrently". The file system of the OS provides the user with a structured way of storing and accessing "permanent" information.

The operating system is thus an important part of most computer systems because it exerts a major influence on the overall function and performance of the entire computer. Normally, the OS is implemented in software, but there is no theoretical reason why it could not be implemented in hardware!

Device Driver (Software Driver): Most application programs need to access input/output devices and storage devices such as disks, terminals, and printers. Allowing these programs to perform the low-level IO activity required to directly control an input/output device is not desirable for a variety of reasons. First, most application programmers would find it extremely difficult to carry out the intricate actions required to directly control an IO device. Second, inappropriate accesses to the IO devices by amateur or malicious programmers can wreak plenty of havoc. The standard solution adopted in computer systems is therefore to provide a more abstract interface to the application programmer, and let an interface program perform the required low-level IO activity. This interface program is called a device driver or software driver. Each device requires specific device driver software, because each device has its own specific commands whereas an application program uses generic commands. The device driver receives generic commands from the application program and converts them into the specialized commands for the device, and vice versa.
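
To make this concrete, here is a minimal sketch, in C++, of the translation idea; the command names, disk geometry, and request format are all hypothetical and are not taken from any real driver interface.

    #include <cstdint>
    #include <iostream>
    #include <vector>

    // Hypothetical low-level commands understood by a disk controller.
    enum class DiskCmd { SeekTrack, WaitSector, TransferWord, CheckStatus };

    struct LowLevelOp {
        DiskCmd cmd;
        std::uint32_t arg;   // track number, sector number, or word index
    };

    // Generic request an application (or the OS file system) might issue.
    struct ReadBlockRequest {
        std::uint32_t block;          // logical block number
        std::uint32_t wordsPerBlock;
    };

    // The "device driver": converts one generic request into the specific
    // sequence of controller commands. The geometry values are made up.
    std::vector<LowLevelOp> translate(const ReadBlockRequest& req) {
        const std::uint32_t sectorsPerTrack = 63;   // hypothetical geometry
        std::vector<LowLevelOp> ops;
        ops.push_back({DiskCmd::SeekTrack,  req.block / sectorsPerTrack});
        ops.push_back({DiskCmd::WaitSector, req.block % sectorsPerTrack});
        for (std::uint32_t w = 0; w < req.wordsPerBlock; ++w)
            ops.push_back({DiskCmd::TransferWord, w});
        ops.push_back({DiskCmd::CheckStatus, 0});
        return ops;
    }

    int main() {
        ReadBlockRequest req{12345, 4};   // read 4 words from logical block 12345
        for (const LowLevelOp& op : translate(req))
            std::cout << static_cast<int>(op.cmd) << " arg=" << op.arg << '\n';
    }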

 

1.3.3 Starting the Computer System: The Boot Process

Now that you have a good understanding of the role of an operating system in a modern computer, it is interesting to learn how the operating system is activated each time a computer is turned on. When a computer is turned off, the data in the registers and memory are lost. Thus when the computer is turned on, the OS program is not residing in main memory, and needs to be brought into main memory from a storage device such as a diskette or hard disk. In modern computers, this copying is done by executing a program called the bootstrap program, or boot program for short. How can the computer execute this copy program if the memory contains no useful contents? To solve this dilemma, a portion of the memory is implemented using non-volatile memory devices such as read-only memory (ROM). This memory contains the boot program. When the computer is turned on, it starts executing instructions from the starting address of the boot program.

 

The boot program contains code to perform diagnostic tests of crucial system components and to load the operating system from a disk into main memory. This bootstrap loader may be comprehensive enough to copy the nucleus of the operating system into memory. Or it may first load a more comprehensive loader that, in turn, installs the nucleus in memory.

Once loaded, the OS remains in main memory until the computer is turned off.

To copy the OS from a disk drive to the RAM, the computer needs to know how the disk has been formatted, i.e., the number of tracks and sectors and the size of each sector. If information about the hard disk were stored in the ROM, then replacing the hard disk would become a difficult proposition, because the computer would be unable to access the new hard disk using information about the old disk. Therefore, a computer must have a semi-permanent medium for storing boot information, such as the number of hard disk drive cylinders and sectors. For this purpose, it uses CMOS (complementary metal oxide semiconductor) memory, which requires very little power to retain its contents and can therefore be powered by a battery. The battery helps the CMOS memory retain vital information about the computer system configuration, even when the computer is turned off. When the computer system configuration changes, the information stored in the CMOS memory must be updated, either by the user or by the plug-and-play feature.

 

1.3.4 Computer Network

Until now we have mostly discussed stand-alone computers, which are not connected to any computer network. Most of today's desktop computers are instead connected to a network, and therefore it is useful to have a brief introduction to this topic. A computer network is a collection of computers and other devices that communicate to share data, hardware, and software. Each device on a network is called a node. A network that is located within a relatively limited area such as a building or campus is called a local area network or LAN, and a network that covers a large geographical area is called a wide area network or WAN. The former is typically found in medium-sized and large businesses, educational institutions, and government offices. Different types of networks provide different services, use different technology, and require users to follow different procedures. Popular network types include Ethernet, Token Ring, ARCnet, FDDI, and ATM.


 

A computer connected to a network can still use all of its local resources, such as its hard drive, software, data files, and printer. In addition, it has access to network resources, which typically include network servers and network printers. A network server can serve as a file server, an application server, or both. A file server serves as a common repository for storing program files and data files that need to be accessible from multiple workstations (client nodes) on the network. When an individual client node sends a request to the file server, it supplies the stored information to the client node. Thus, when the user of a client workstation attempts to execute a program, the client's OS sends a request to the file server to get a copy of the executable program. Once the server sends the program, it is copied into the memory of the client workstation, and the program is executed on the client. The file server can also supply data files to clients in a similar manner. An application server, on the other hand, runs application software on request from other computers, and forwards the results to the requesting client.

 

In order to connect a computer to a network, a network interface card (NIC) is required. This interface card sends data from the workstation out over the network and collects incoming data for the workstation. The NIC for a desktop computer can be plugged into one of the expansion slots in the motherboard. The NIC for a laptop computer is usually a PCMCIA card. Different types of networks require different types of NICs.

A computer network requires a network operating system to control the flow of data, maintain security, and keep track of user accounts. Commonly used operating systems such as UNIX, Windows XP, and Windows Vista already include networking capability. There are also software packages, such as Novell NetWare, that are designed exclusively for use as a network operating system. A network operating system usually has two components: server software and client software. The server software is installed on the server workstation; it has features to control file access from the server hard drive, manage the print queue, and track network user data such as userids and passwords. The client software is installed on the local hard drive of each client workstation; it is essentially a device driver for the NIC. When the client workstation boots up, the network client software is activated and establishes the connection between the client and the other devices on the network.

 

1.4 Trends in Computing

Computer systems have undergone dramatic changes since their inception a few decades ago. It is difficult to say whether it is the hardware that drives the software or if it is the other way around. Both are intimately tied to each other; trends in one do affect the other and vice versa. We shall first discuss trends in hardware technology.

1.4.1 Hardware Technology Trends

Ever since transistors began to be integrated on a large scale, producing LSI and VLSI (Very Large Scale Integration) circuits, there have been non-stop efforts to continually reduce transistor size. Over the last three decades, the feature size has decreased by nearly a factor of 100, resulting in smaller and smaller transistors. This steady decrease in transistor sizes, coupled with occasional increases in die sizes, has resulted in more and more transistors being integrated on a single chip.

In 2008, Intel released the first processor chip to integrate more than 2 billion transistors, the quad-core Tukwila. As of 2008, it was also the largest microprocessor made, with a die size of 21.5 × 32.5 mm².

 

Below we highlight some of the main trends we see in hardware technology today:

- Clock speed: Clock speed had been steadily increasing over the last several decades; however, the current trend hints more at saturation. In 2007, IBM released the dual-core Power6 processor, which operates at an astonishing 4.7 GHz clock.

 

- Low-power systems: In the late 1990s, as the number of transistors as well as the clock speed steadily increased, power consumption, and especially power density, began to increase at an alarming rate. High power densities translate to higher temperatures, necessitating expensive cooling technologies. Today, power consumption has become a first-order design constraint, and power-aware hardware designs are commonplace.

- Large memory systems: Memory size has always been increasing steadily, mirroring the downward trend in price per bit. However, memory access time increases with size, necessitating the use of cache memories to reduce average memory latencies. Nowadays, it is commonplace to see multi-level cache organizations in general-purpose computers.

 

- Multi-core processors: The current trend is to incorporate multiple processor cores on the same die. These cores execute multiple threads of execution in parallel.

 

- Embedded systems: Although embedded systems have been around for a while, their popularity has never been as high as it is today. Cell phones, automobile controls, computer game machines (you name it!) have all become remarkably sophisticated, thanks to advances in embedded system technology.

 

Some of these trends become clear when we look at the microprocessors developed over the last four decades for desktop systems by one of the major processor manufacturers, Intel®. Table 1.2 succinctly lists various features of these processors.

 

1.4.2 Software Technology Trends

As processors and memory became smaller and faster, providing the potential for significant boosts in performance, application programs and operating systems strived to offset that benefit by becoming bigger and more sluggish. The time to boot up a computer, for instance, has remained steady, if not increased, over the years. This does not mean that software technology has made no progress. On the contrary, there have been tremendous improvements in software application development. The driving philosophy in software technology development has also been performance and efficiency, but that of the programmer!

Below we highlight a few of the current trends in software applications:

- Application Nature:
  - Multimedia
  - Graphics
  - Bioinformatics
  - Web-based
- Programming Methodology and Interface:
  - Object-oriented programming: Java, C#
  - Visual programming: Scripting, HTML
  - Multi-threading
  - Application Programming Interface (API): POSIX, Windows API
- Operating System:
  - Linux


  - Windows Vista
  - Mac OS X
- Security:

 

A good piece of software is not just one that works correctly.

 

1.6 Hardware Design Issues

Of the two phases in the life of a program, its development and its execution, hardware designers are concerned with the execution phase. As we will see in Section 1.8, hardware design is carried out in various stages, at different levels of abstraction. When designing hardware, the factors that stand at the forefront are performance, power consumption, size, and price; well-designed hardware structures are those that have adequate performance and long battery life (if running off a battery), and are compact and affordable. Other issues that become important, depending on the computing environment, are binary compatibility, reliability, and security.

 

1.6.1 Performance

The speed with which computer systems execute programs has always been a key parameter in hardware design. We can think of two different metrics when measuring the speed of a computer system: response time and throughput. Whereas response time refers to how long the computer takes to do an activity, throughput refers to how much the computer does in unit time. Response time is measured as the time elapsed from the initiation of some activity until its completion. A frequently used response time metric is program execution time, which specifies the time the computer takes to execute the program once.

The execution time of a program, ET, can be expressed as the product of three quantities: (i) the number of instructions executed, or instruction count (IC); (ii) the average number of clock cycles required to execute an instruction, or cycles per instruction (CPI); and (iii) the duration of a clock cycle, or cycle time (CT). Thus,

ET = IC × CPI × CT
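
As a quick worked example with made-up numbers, the short program below evaluates this formula for a program that executes 2 billion instructions at an average CPI of 1.5 on a 2 GHz clock (cycle time 0.5 ns):

    #include <iostream>

    int main() {
        // Hypothetical figures for illustration only.
        double IC  = 2e9;      // instructions executed
        double CPI = 1.5;      // average clock cycles per instruction
        double CT  = 0.5e-9;   // cycle time in seconds (2 GHz clock)

        double ET = IC * CPI * CT;   // ET = IC x CPI x CT
        std::cout << "Execution time = " << ET << " seconds\n";   // prints 1.5
    }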

Although this simple formula seems to provide a good handle on program execution time, and therefore on computer performance, the reality is not so simple! The instruction count of a program may vary depending on the data values supplied as input to the program. And the cycles per instruction obtained may vary depending on what other programs are simultaneously active in the system. [Footnote 8: Nowadays, almost all computer systems execute multiple programs at the same time.] Finally, we have computer systems that dynamically adjust the clock cycle time (dynamic frequency scaling) in order to reduce power consumption.

When reporting program execution time, the standard practice for dealing with the first problem is to measure the execution time with a standard set of input data. The second and third problems are avoided by running only one benchmark program at a time and by not exercising dynamic frequency scaling. Throughput, the other metric of performance, specifies the number of programs, jobs, or transactions the computer system completes per unit time. If the system completes C programs during an observation period of T seconds, its throughput X is measured as C/T programs per second. For processors, a more commonly used throughput measure is the number of instructions executed per clock cycle, referred to as the IPC (instructions per cycle).
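
The sketch below, again with made-up numbers, computes the two throughput measures just defined: system throughput X = C/T, and processor IPC, which is simply the reciprocal of CPI.

    #include <iostream>

    int main() {
        // System-level throughput: C programs completed in T seconds.
        double C = 120.0, T = 60.0;                // hypothetical observation
        std::cout << "X   = " << C / T << " programs/second\n";   // prints 2

        // Processor-level throughput: instructions executed per clock cycle.
        double instructions = 3e9, cycles = 4e9;   // hypothetical counts
        double IPC = instructions / cycles;        // = 1 / CPI
        std::cout << "IPC = " << IPC << "\n";      // prints 0.75
    }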

Although throughput and response time are related, improving the throughput of a computer system does not necessarily result in reduced response time. For instance, the throughput of a computer system improves when we incorporate additional processing cores and use these cores for executing independent tasks, but that does not decrease the execution time of any single program. On the other hand, replacing a processor with a faster one would invariably decrease program execution time as well as improve throughput.

 

1.6.2 Power Consumption

After performance, power consumption is perhaps the biggest design issue occupying the hearts and minds of computer designers. In fact, in some application domains, power consumption has edged out performance as the most important design factor. Why is power such an important issue? Because it directly translates into heat production. Most integrated circuits will fail to work correctly if the temperature rises more than a few degrees beyond their normal operating range.

 

Again, the designer's goal is to reduce the power consumed during program execution, as program development can be carried out on a different system where power consumption may not be a burning issue.

Power consumption has two components: dynamic and static. Dynamic power relates to power consumed when there is switching activity (or change of state) in a system, whereas static power relates to power consumed even when there is no switching activity in the system. Dynamic power is directly proportional to the extent of switching activity in the system and the clock frequency of operation. It is also proportional to the capacitance in the circuits and wires, and to the square of the supply voltage of the circuits.
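
This proportionality is commonly summarized as P_dynamic ≈ α · C · V² · f, where α is the switching activity factor, C the switched capacitance, V the supply voltage, and f the clock frequency. The illustrative sketch below (all constants made up) shows why lowering the supply voltage is so attractive: halving both V and f cuts dynamic power by a factor of eight.

    #include <iostream>

    // Dynamic power model: P = alpha * C * V^2 * f (all values illustrative).
    double dynamicPower(double alpha, double cap, double voltage, double freq) {
        return alpha * cap * voltage * voltage * freq;
    }

    int main() {
        double alpha = 0.2, cap = 1e-9;                    // activity factor, farads
        double p1 = dynamicPower(alpha, cap, 1.2, 3e9);    // 1.2 V, 3 GHz
        double p2 = dynamicPower(alpha, cap, 0.6, 1.5e9);  // half the voltage and frequency
        std::cout << "P1 = " << p1 << " W, P2 = " << p2
                  << " W, ratio = " << p1 / p2 << '\n';    // ratio = 8
    }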

Static power consumption occurs due to leakage currents in the system. With continued scaling of transistor technology (reduction in transistor sizes), static power consumption is becoming comparable to dynamic power consumption. Static power consumption is also related to the supply voltage. Therefore, to develop low-power systems, computer hardware designers strive to reduce the supply voltage of the circuits as well as the amount of hardware used to perform the required functionality.

 

Price is an important factor that makes or breaks the success of any computer system. Between two comparable computer systems, all other things being equal, price will be the deciding factor. The major factors affecting the price are design cost, manufacturing cost, and profit margin, all of which may be impacted by the sales volume. In general, price increases exponentially with the complexity of the system. Therefore, it is imperative to keep hardware complexity in check.

Size is an important design consideration, especially for laptops and embedded systems.

 

From the above discussion, it is apparent that designing computer hardware is a complex process in which one has to focus on several factors at the same time. Often, focusing on one factor comes at the expense of others. For instance, attempting to improve performance by using substantial amounts of hardware generally results in higher power consumption as well as increased size and price. A good design will attempt to achieve good performance without an increase in hardware complexity, thereby conserving power and reducing size and price as well.

 

1.7 Theoretical Underpinnings

1.7.1 Computability and the Turing Machine

The early days of computers saw them solving only problems of a numerical nature; soon they began to process many other kinds of information. A question that begs an answer is: What kinds of problems can a computer solve? The answer, as per computer science theory, is that given enough memory and time, a computer can solve all problems for which a finite solution algorithm exists. One of the computer pioneers who defined and formalized computation was the British mathematician Alan Turing. While a graduate student at Princeton University in 1936, Turing published a seminal paper titled "On Computable Numbers, with an Application to the Entscheidungsproblem," which laid a theoretical foundation for modern computer science. In that paper he envisioned a theoretical machine, which later became known as a Turing machine, that could read instructions from a punched paper tape and perform all the critical operations of a computer. One of Turing's remarkable achievements was to prove that a universal Turing machine (a simulator of Turing machines) can perform every reasonable computation [?]. If given a description of a particular Turing machine TM, the universal Turing machine simulates all the operations performed by TM.

It can do anything that any real computer can do, and therefore serves as an abstract model of all general-purpose computers. Turing's paper also established the limits of computer science by mathematically demonstrating that some problems do not lend themselves to algorithmic representations, and therefore cannot be solved by any kind of computer.
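
To give a feel for how simple the underlying machine is, here is a minimal sketch of a Turing-machine simulator; the toy transition table flips every bit on the tape and halts at the first blank. The encoding and names are ours, chosen only for illustration.

    #include <iostream>
    #include <map>
    #include <string>
    #include <utility>

    // A transition: (state, symbol) -> (next state, symbol to write, head move).
    struct Action { std::string next; char write; int move; };   // move: +1 = right

    int main() {
        // Toy machine: in state "scan", flip 0 <-> 1 and move right; halt on blank '_'.
        std::map<std::pair<std::string, char>, Action> delta = {
            {{"scan", '0'}, {"scan", '1', +1}},
            {{"scan", '1'}, {"scan", '0', +1}},
            {{"scan", '_'}, {"halt", '_',  0}},
        };

        std::string tape = "1011__";   // input followed by blanks
        std::size_t head = 0;
        std::string state = "scan";

        while (state != "halt") {
            Action a = delta.at({state, tape[head]});   // look up the transition
            tape[head] = a.write;                       // write the new symbol
            head += a.move;                             // move the head
            state = a.next;                             // enter the next state
        }
        std::cout << "Final tape: " << tape << '\n';    // prints 0100__
    }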

 

1.7.2 Limitations of Computers

"Computers are useless. They can only give you answers."

- Pablo Picasso (1881-1973)

For the computer to solve a problem, it is imperative to first develop a solution algorithm, or step-by-step procedure, for the problem. Although a general-purpose computer can be used to solve a wide variety of problems by executing appropriate algorithms, there are certain classes of problems that cannot be solved using a computer, either in principle or in practice! Such problems may be grouped into three categories:

- Undecidable or non-computable problems
- Unsolvable problems
- Intractable problems

[Figure: classification of well-defined problems, with labels: computable, non-computable, partially computable, tractable, NP-complete, unsolvable.]

 

Undecidable Problems: Turing and Alonzo Church demonstrated a set of undecidable problems in 1936. One of these is what has become known as the Turing machine halting problem, which states that no algorithm exists to determine whether an arbitrary Turing machine with arbitrary input data will ever halt once it has started working. A practical implication of this result is that, given a sufficiently complex computer program with loops, it is impossible to determine whether under certain inputs the program will ever halt. Since the 1930s, a number of other problems have been proven to be undecidable. It will never be possible, in a logically consistent system, to build a computer, however powerful, that by manipulating symbols can solve these undecidable problems in a finite number of steps!

 

Unsolvable Problems: This category includes well-defined problems that have not been proved to be undecidable, but for which no finite algorithm has yet been developed. An example is Goldbach's conjecture, formulated by the 18th century mathematician Christian Goldbach. The conjecture states that every even integer greater than 2 is the sum of exactly two prime numbers. Although this conjecture has been verified for a large number of even integers, it has not yet been proved to be true for every even integer, nor has any finite algorithm been developed to prove this conjecture. An algorithm that examines all even integers is not finite, and therefore will not terminate. An unsolved problem may eventually be solved or proved to be undecidable.
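
Verification for particular values, as opposed to proof, is easy to mechanize. The sketch below checks the conjecture for every even number up to a small limit; it obviously says nothing about the infinitely many even numbers beyond that limit.

    #include <iostream>

    // Return true if n is prime; simple trial division suffices at this scale.
    bool isPrime(int n) {
        if (n < 2) return false;
        for (int d = 2; d * d <= n; ++d)
            if (n % d == 0) return false;
        return true;
    }

    int main() {
        const int limit = 10000;
        for (int even = 4; even <= limit; even += 2) {
            bool found = false;
            for (int p = 2; p <= even / 2 && !found; ++p)
                if (isPrime(p) && isPrime(even - p)) found = true;
            if (!found) {                       // never triggers for any known value
                std::cout << even << " violates the conjecture!\n";
                return 1;
            }
        }
        std::cout << "Goldbach's conjecture holds for all even numbers up to "
                  << limit << '\n';
        return 0;
    }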

 

Intractable Problems: This category includes problems that have a finite solution algorithm, but executing the best available algorithm requires unreasonable amounts of time, computer memory, and/or cost. In general, this is the case when the complexity of the best available algorithm grows exponentially with the size of the problem. An example of an intractable problem is the traveling salesman problem. The objective in this problem is to find a minimum-distance tour through a given set of cities. The best available solution algorithms for this problem are exponential in n, where n is the number of cities in the tour. This means that the execution time of the algorithm increases exponentially as n increases.

For reasonably large values of n, executing such an algorithm becomes infeasible. Many problems that occur in real life are closely related to the traveling salesman problem; two common examples are the scheduling of airline flights, and the routing of wires in a VLSI chip. An intractable problem becomes more tractable with technological advances that make it feasible to design faster computers. Algorithm developers often tackle intractable problems by devising approximate or inexact solution algorithms. These approximate algorithms often involve the use of various heuristics, and are near-optimal most of the time. Simulated annealing is an example of such an algorithm. Recent research seems to indicate that quantum computing has the potential to solve many of the intractable problems more efficiently.
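
A brute-force enumeration (not the best available algorithm, but the simplest) makes the growth plain: with n cities there are (n-1)! distinct tours from a fixed starting city. The sketch below, using a small made-up distance matrix, examines all of them; adding even a handful of cities makes this approach hopeless.

    #include <algorithm>
    #include <iostream>
    #include <limits>
    #include <numeric>
    #include <vector>

    int main() {
        // Hypothetical symmetric distance matrix for 5 cities (city 0 is the start).
        const std::vector<std::vector<int>> dist = {
            { 0, 2, 9, 10, 7},
            { 2, 0, 6,  4, 3},
            { 9, 6, 0,  8, 5},
            {10, 4, 8,  0, 6},
            { 7, 3, 5,  6, 0},
        };
        const int n = static_cast<int>(dist.size());

        std::vector<int> order(n - 1);
        std::iota(order.begin(), order.end(), 1);       // visit cities 1..n-1

        long long toursExamined = 0;
        int best = std::numeric_limits<int>::max();
        do {                                            // enumerate all (n-1)! tours
            int len = dist[0][order.front()];
            for (int i = 0; i + 1 < n - 1; ++i) len += dist[order[i]][order[i + 1]];
            len += dist[order.back()][0];               // return to the start city
            best = std::min(best, len);
            ++toursExamined;
        } while (std::next_permutation(order.begin(), order.end()));

        std::cout << "Examined " << toursExamined << " tours; shortest = " << best << '\n';
    }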

 

1.8 Virtual Machines: The Abstraction Tower

 

If we look at the computer as a physicist would, we will see that a digital computer executes an algorithm by the controlled movement of electrons through silicon substrates and metal wires. A complete description of a computer could be given in terms of all of its silicon substrates, impurity dopings, wire connections, and their properties. Such a view, although very precise, is too detailed even for computer hardware designers, let alone programmers. Hardly any programs would have been written in all these years if programmers were given such a specification!

 

Like many other machines built today, computers are incredibly complex. The functions involved in developing programs and in designing the hardware to execute them are so diverse and complex that it is difficult for a user, programmer, or designer to have mastery of all of them. A practical technique for dealing with complexity in everyday life is abstraction. An automobile driver, for instance, need not be concerned with the details of how exactly the automobile engine works. This is possible because the driver works with a high-level abstract view of the car that encapsulates the essentials of what is required for driving the vehicle. The car mechanic, on the other hand, has a more detailed view of the machine. In a similar manner, abstraction is used to deal with the complexity of computers. That is, computer software and hardware can be viewed as a series of architectural abstractions or virtual machines. Different users see different (virtual) machines depending on the level at which they use the computer. For instance, a high-level language programmer sees a virtual machine that is capable of executing statements specified in a high-level language. An assembly language programmer, on the other hand, sees a different machine, with registers and memory locations, that can execute instructions specified in an assembly language. Thus, the study of a computer system is filled with abstractions. There is yet another advantage to viewing the computer at several abstraction levels. Programs developed for a particular abstraction level can be executed on different platforms that implement the same abstract machine but differ in speed, cost, and power consumption.

 

The user who interacts with a computer system at a particular abstraction level has a view of what its capabilities are at this level, and this view results directly from the functions that the computer can perform at this level. Conceptually, each architectural abstraction is a set of rules that describes the logical function of a computer as observable by a program running on that abstract machine. The architecture does not specify the details of exactly how its functions will be performed; it only specifies the architecture's functionality.

Implementation issues related to the functionality are left to the lower-level machines. An abstraction is a representation that hides details so that one can focus on a few concepts at a time.

Formal abstractions have a well-defined syntax and semantics. Hence, they provide a way of conveying information about a system in a consistent way that can be interpreted unambiguously. This abstraction (or specification) is like a contract: it defines how the system behaves. The abstraction defines how the outside world interacts with the system. The implementation, on the other hand, defines how the system is built, as seen from the inside. The entire point of defining each architectural abstraction is to insulate programmers of that level from those details. The instruction set architecture, for instance, provides a level of abstraction that allows the same (machine language) program to be run on a family of computers having different implementations (i.e., microarchitectures).

"There are many paths to the top of the mountain, but the view is always the same."

- Chinese proverb

To make comprehension easier, we have organized the computer's virtual machines along a single dimension as an abstraction tower, with one machine "above" the other. Each virtual machine except the one at the lowest level is implemented by the virtual machine below it. This approach is called hierarchical abstraction. By viewing the computer as a hierarchy of abstractions, it becomes easier to master the complexity of computers and to design computer systems in a systematic, organized way.

Appropriate interfaces are used to specify the interaction between different abstractions.

 

This implementation is done by translating or interpreting the steps or instructions of one level using instructions or facilities from the lower level. A particular computer designer or user needs to be familiar only with the level at which he/she is using the computer.

For instance, a programmer writing a C program can assume that the program will be executed on a virtual machine that directly executes C programs. The programmer need not be concerned about how the virtual machine implements C's semantics. Similarly, in a multitasked computer system, each active program sees a separate virtual machine, although physically there may be only a single computer! Some of the machine levels themselves can be viewed as a collection of multiple abstractions. One such breakdown occurs in the assembly-level language machine where we further break it into User mode and Kernel mode.

 

For each machine level, we can write programs specific to that level to control that machine. The solid blocks in the figure depict the people, software, or hardware that transform a program for one machine level into a program for the machine level below it. For the sake of clarity and to put things in proper perspective, the figure also includes a few levels (at the top) that are currently implemented by humans. It is important to note that these abstract machines are somewhat different from the virtual machines seen by end users when they run different programs on a computer system. For instance, when you run a MIPS assembler program on a host computer system, you do not "see" that host as a MIPS assembly-level machine or a MIPS ISA-level machine. Instead, your view of the host is simply that of a machine capable of taking a MIPS assembly language program as input, and churning out an equivalent MIPS machine language program as output! You can even run that assembler program without knowing anything about the MIPS assembly language or machine language. The person who wrote the MIPS assembler, on the other hand, does see the MIPS assembly-level architecture as well as the MIPS ISA.

 

Finally, the MIPS assembler program may have been originally written in an HLL, for example C, in which case its developer also sees an abstract C machine that takes C statements as commands!

 

The Use of a Language for an Abstraction: Each virtual machine level provides an abstraction that is suitable for the computer user/designer working at that level. In order to make use of a particular abstraction, it must be possible to specify commands/instructions to that virtual machine. Without a well-defined language, it becomes difficult to specify a program of reasonable size and complexity. Most of the abstraction levels therefore provide a separate language to enable the user at that level to specify the actions to be performed by the virtual machine. The language specifies what data can be named by a program at that level, what operations can be performed on the named data, and what ordering exists among the operations. The language must be rich enough to capture the intricacies of the corresponding virtual machine. When describing each of the abstraction levels, we will show how the language for that level captures the essence of that level, and how it serves as a vehicle to represent the commands specified by the user of that level.

"The limits of my language mean the limits of my world."

- Ludwig Wittgenstein

 

Translators, Interpreters, Emulators, and Simulators: An important aspect of the layered treatment of computers is that, as already mentioned, the commands specified at each abstraction level need to be converted to commands specific to the immediately lower level. Such a transformation makes the computer behave as a different machine than the one for which the original program was written. This transformation process can be done by translation or interpretation. In computer parlance, the term translation indicates taking a static program or routine, and producing a functionally equivalent static program, usually at a lower level. A static program is one that is not being executed currently. Thus, translation of a loop involves translating each command of the loop exactly once. The term interpretation, on the other hand, indicates taking individual steps of a dynamic program and producing an equivalent sequence of steps, usually at a lower level.

 

A dynamic program is one that is in the process of being executed. Thus, interpretation of a loop involves interpreting each command of the loop multiple times, depending on the number of times the loop gets executed. Because of this dynamic nature, an interpreter essentially makes one machine (the host machine) appear as another (the target machine). A translator is almost always implemented in software, whereas an interpreter is implemented in software or hardware. A software interpreter is often called a simulator, and a hardware interpreter is often called an emulator. Of course, it is possible to build interpreters that use a combination of software and hardware techniques. A simulator is often used to illustrate the working of a virtual machine. It is also used to allow programs compiled for one machine to execute on another machine. For instance, a simulator can execute programs written for an older machine on a newer machine.
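
The distinction is easiest to see with a toy example. The sketch below interprets a small made-up "target" language of accumulator commands one step at a time on the host machine; a translator, by contrast, would emit an equivalent host-level program once and never look at the source again.

    #include <iostream>
    #include <sstream>
    #include <string>
    #include <vector>

    // A toy "target machine" program: one accumulator, three commands.
    //   ADD k   -- add constant k to the accumulator
    //   SUB k   -- subtract constant k
    //   PRINT   -- print the accumulator
    int main() {
        const std::vector<std::string> program = {
            "ADD 5", "ADD 7", "SUB 2", "PRINT"
        };

        long long acc = 0;                          // the simulated machine state
        for (const std::string& line : program) {   // interpret each step as it is reached
            std::istringstream in(line);
            std::string op;
            long long k = 0;
            in >> op >> k;
            if      (op == "ADD")   acc += k;
            else if (op == "SUB")   acc -= k;
            else if (op == "PRINT") std::cout << acc << '\n';   // prints 10
        }
    }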

 

Advantages of using interpretation include (i) the ability to execute the (source) program on different platforms, without additional compilation steps, and (ii) the ease of carrying out interactive debugging. The main disadvantage is performance.

1.8.1 Problem Definition and Modeling Level Architecture

"At the highest level, the description is greatly chunked, and takes on a completely different feel, despite the fact that many of the same concepts appear on the lowest and highest levels."

- Douglas R. Hofstadter, in Gödel, Escher, Bach: An Eternal Golden Braid

 

The highest abstraction level that we can think of is the level at which problem definition and modeling are done. At this level, we can view the computer system as a machine that can solve well-defined computer problems. We loosely define a well-defined computer problem as one that can be represented and manipulated inside a computer. The problem modeler, therefore, takes complex real-life problems and precisely formulates them so that they can be solved on the computer. This process involves representing the real-life problem's data by some form of data that can be manipulated by a computer.

 

This process is called abstraction or modeling: creating the right model for the problem so as to make it possible to eventually develop an appropriate algorithm to solve it. Notice that in this context, modeling often implies simplification, the replacement of a complex and detailed real-world situation by a comprehensible model within which we can solve a problem. That is, the model captures the essence of the problem, "abstracting away" the details whose effect on the problem's solution is nil or minimal. Almost any branch of mathematics or science may be utilized in the modeling process.

 

Problems that are numerical in nature are typically modeled by mathematical concepts such as simultaneous linear equations (e.g., finding currents in electrical circuits, or finding stresses in frames made of connected beams) and differential equations (e.g., predicting population growth, or predicting the rate at which chemicals will react). Several problems can be modeled as graph-theoretical problems. Symbol and text processing problems can be modeled by character strings and formal grammars. Once a problem is formalized, the algorithm developer at the lower level can look for solutions in terms of a precise model and determine whether an algorithm already exists to solve that problem. Even if there is no known solution, knowledge of the model's properties might aid in developing a solution algorithm.

 

1.8.2 Algorithm-Level Architecture

The architectural abstraction below the problem definition level is the algorithm-level architecture. At this level, we see the computer system as a machine that is capable of executing algorithms. An algorithm, as we saw earlier, is a step-by-step procedure that can be carried out mechanically so as to get a specific output from a specific input. A key feature of computer algorithms is that the steps are precisely defined so that they can be executed by a machine.

In other words, it describes a process so unambiguously that it becomes mechanical, in the sense that it does not require much intelligence, and can be performed by rote or a machine.

 

Computer scientists also require that an algorithm be finite, meaning that (i) the number of steps must be finite so that it terminates eventually, and (ii) each step must require only finite time and computational resources. The basic actions involved in a computer algorithm are:

- Specify data values (using abstract data types)
- Perform calculations and assign data values
- Test data values and select alternative courses of action, including repetitions
- Terminate the algorithm

 

The algorithm-level architecture supports abstract data types and abstract data structures; the algorithm designer formulates suitable abstract data structures and develops an algorithm that operates on the data structures so as to solve the problem. Providing abstract data types enables the algorithm designer to develop more general algorithms that can be used for different applications involving different data types. This is often called algorithm abstraction. For instance, a sorting algorithm that has been developed without specifying the data types being sorted, can be programmed to sort a set of integers or a set of characters. Similarly, when considering a data structure, such as an array, it is often more productive to ignore certain details, such as the exact bounds of its indices. This is often called data abstraction.

 

Algorithm Efficiency: Computer theorists are mainly concerned with discovering the most efficient algorithms for a given class of problems. An algorithm's efficiency relates its resource usage, such as execution time or memory consumption, to the size of its input data, n. The efficiency is stated using the "Big O" notation. For example, if an algorithm takes 4n² - 2n + 2 steps to solve a problem of size n, we say that the number of steps is O(n²). Programmers use their knowledge of well-established algorithms and their respective complexities to choose algorithms that are best suited to the circumstances. Examples of such algorithms are quicksort for sorting data (which has an O(n log n) average running time) and binary search for searching through sorted data (which takes O(log n) time).
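
As an illustration of the second example, here is a minimal binary search over sorted data; each probe halves the remaining range, which is what gives the logarithmic bound.

    #include <iostream>
    #include <vector>

    // Return the index of key in the sorted vector, or -1 if it is absent.
    int binarySearch(const std::vector<int>& sorted, int key) {
        int lo = 0, hi = static_cast<int>(sorted.size()) - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;     // halve the search range each iteration
            if (sorted[mid] == key) return mid;
            if (sorted[mid] < key)  lo = mid + 1;
            else                    hi = mid - 1;
        }
        return -1;
    }

    int main() {
        std::vector<int> data = {2, 5, 8, 12, 16, 23, 38, 56, 72, 91};
        std::cout << binarySearch(data, 23) << '\n';   // prints 5
        std::cout << binarySearch(data, 7)  << '\n';   // prints -1
    }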

 

Algorithms can be specified in different ways. Two common methods are pseudo code descriptions and flowchart diagrams. A pseudo code description uses English, mathematical notations, and a limited set of special commands to describe the actions of an algorithm. A flowchart diagram provides the same information graphically, using diagrams with a finite set of symbols in the place of the more elegant features of the pseudo code. A computer cannot directly understand either pseudo code or flowcharts, and so algorithm descriptions are translated to computer language programs, most often by human programmers. Thus, a computer program is an embodiment of an algorithm; strictly speaking, an algorithm is a mental concept that exists independently of any representation.

 

1.8.2.1 Computation Models

Another important facet of an algorithm-level architecture is the computational model it supports. A computational model conceptualizes computation in a particular way by specifying the kinds of primitives, relationships, and events that can be described in an algorithm. A computational model will generally have the following features:

- Primitives: These represent the simplest objects that can be expressed in the model. Examples of primitives found in most computation models are constants and variables.

- Methods of combination: These specify how the primitives can be combined with one another to obtain compound expressions.

- Methods of abstraction: These specify how compound objects can be named and manipulated as units.

 

The computational model determines the kind of computations that can be specified by an algorithm. For example, if we consider a geometric computational model that supports only ruler and compass construction primitives, then we can specify algorithms (rules) for bisecting a line segment, bisecting an angle, and other similar problems. We cannot, however, specify an algorithm to trisect an angle. For solving this problem, we require additional primitives such as a protractor. For arithmetic computation we can use models incorporating different primitives such as an abacus, a slide rule, or even a calculator. With each of these computation models, the type of arithmetic problems that can be solved is different. The algorithms for solving a specific problem would also be different.

 

Algorithm development is always done for a specific algorithm-level architecture having an underlying computational model. Three basic computational models are currently in use, some of them being more popular than the others: imperative, functional, and logic.

These models of computation are equivalent in the sense that, in principle, any problem that has a solution in one model is solvable in every one of the other models also.

Imperative Model: The imperative model of computation is based on the execution of a sequence of instructions that modify storage called state. The basic concept is the notion of a machine state (comprising variables in the high-level architecture, or registers and memory locations in the assembly-level architecture). Program development consists of specifying a sequence of state changes to arrive at the solution. An imperative program would therefore consist of a sequence of statements or instructions and side-effect-prone functions; the execution of each statement (or instruction) would cause the machine to change the value of one or more elements of its state, thereby taking the machine to a new state. A side-effect-prone function is one whose execution can result in a change in the machine state. Historically, the imperative model has been the most widely used model; most computer programmers start their programming career with this computational model.

It is the closest to modeling the computer hardware. This tends to make it the most efficient model in terms of execution speed. Commonly used programming languages such as C, C++, FORTRAN, and COBOL are based on this computational model.
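
A few lines of C++ make the "sequence of state changes" view concrete: each assignment below moves the machine from one state to the next.

    #include <iostream>

    int main() {
        // State: the variables sum and i. Each statement changes that state.
        int sum = 0;
        for (int i = 1; i <= 5; ++i) {
            sum = sum + i;          // side effect: the state variable sum is updated
        }
        std::cout << sum << '\n';   // prints 15
    }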

 

Applicative (Functional) Model: The functional model has its foundation in mathematical logic. In this model, computing is based on recursive function theory (RFT), an alternative (and equivalent) model of effective computability. As with the Turing machine model, RFT can express anything that is computable. Two of the prominent computer scientists who pioneered this computational model are Stephen Kleene and Alonzo Church. The functional model consists of a set of values, functions, and the application of side-effect-free functions. A side-effect-free function is one in which the entire result of the computation is produced by the return value(s) of the function. Side-effect-free functions can only access explicit input parameters; there are no global variables in a fully functional model. And, in the purest functional models, there are no assignment statements either. Functions may be named and may be composed with other functions. Functions can take other functions as arguments and return functions as results. Programs consist of definitions of functions, and computations are applications of functions to values. A classic example of a programming language built on this model is LISP.
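
By contrast, the same computation as in the imperative sketch above can be written in a side-effect-free style as a pure function whose entire result is its return value. The sketch below uses C++ rather than LISP, but the idea carries over.

    #include <iostream>

    // A side-effect-free (pure) function: its result depends only on its argument,
    // and it modifies no external state.
    int sumTo(int n) {
        return n == 0 ? 0 : n + sumTo(n - 1);
    }

    int main() {
        std::cout << sumTo(5) << '\n';   // prints 15
    }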

 

Rule-based (Logic) Model: The logic model of computation is a formalization of the logical reasoning process. It is based on relations and logical inference. An algorithm in this model involves a collection of rules, in no particular order. Each rule consists of an enabling condition and an action. The execution order is determined by the order in which the enabling conditions are satisfied. The logic model is related to relational databases and expert systems. A programming language designed with this model in mind is Prolog.

 

Computational Model Extensions: Apart from the three popular computational models described above, many other computational models have been proposed. Many extensions have also been proposed to computational models to improve programmer efficiency or hardware efficiency. Two important such extensions are the object-oriented programming model and the concurrent programming model.

- Object-Oriented Model: In this model, an algorithm consists of a set of objects that compute by exchanging messages. Each object is bound up with a value and a set of operations that determine the messages to which it can respond. Functions are thus designed to operate on objects. Objects are organized hierarchically; that is, complex objects are designed as extensions of simple objects, and the complex object "inherits" the properties of the simple object. (A small sketch appears after this list.) The object-oriented model may be implemented within any of the other computational models. Imperative programming languages that use the object-oriented approach include C++ and Java.

 

- Concurrent Model: In this model, an algorithm consists of multiple processes or tasks that may exchange information. The computations may occur concurrently or in any order. The model is primarily concerned with methods for synchronization and communication between processes. The concurrent model may also be implemented within any of the other computational models. Concurrency in the imperative model can be viewed as a generalization of control. Concurrency is particularly attractive within the functional and logic models, as sub-expression evaluation and inferences may then be performed concurrently. Hardware description languages (HDLs) such as Verilog and VHDL use the concurrency model, as they model hardware components, which tend to operate concurrently.
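
Returning to the object-oriented extension described above, here is a minimal sketch in which a derived class inherits an operation from a simple base class and specializes another; the class names are ours, chosen only for illustration.

    #include <iostream>
    #include <string>
    #include <utility>

    // A simple object: a value (its name) bundled with the operations it responds to.
    class Shape {
    public:
        explicit Shape(std::string name) : name_(std::move(name)) {}
        virtual ~Shape() = default;
        virtual double area() const { return 0.0; }
        void describe() const {                       // inherited by all derived objects
            std::cout << name_ << " has area " << area() << '\n';
        }
    private:
        std::string name_;
    };

    // A complex object built as an extension of the simple one: it inherits
    // describe() and overrides area().
    class Circle : public Shape {
    public:
        explicit Circle(double r) : Shape("circle"), r_(r) {}
        double area() const override { return 3.14159265 * r_ * r_; }
    private:
        double r_;
    };

    int main() {
        Circle c(2.0);
        c.describe();    // prints: circle has area 12.5664
    }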

 

1.8.3 High-Level Architecture

The abstraction level below the algorithm-level architecture is the high-level architecture. This is the highest level that we study in this book, and is defined by different high-level languages, such as C, C++, FORTRAN, Java, LISP, Prolog, and Visual Basic. This level is used by application programmers and systems programmers who take algorithms and formally express them in a high-level language. HLL programmers who develop their own algorithms often perform both these steps concurrently. That is, the algorithm development is done side by side with HLL program development.

 

To the HLL programmer the computer is a machine that can directly accept programs written in a high-level language that uses alphabets as well as symbols like +,
