Programming Language Concepts Using C and C++/Control Level Structure

In this chapter, we discuss implicit and explicit control structures: structures that affect the sequence in which the computer performs the set of operations that make up the program, at the expression level, the statement level, and the subprogram level.

This sequence will sometimes be implicitly specified by the rules of the language, while at times it will be imposed by the programmer explicitly by means of some construct found in the programming language. The former is referred to as an implicit control structure, whereas the latter is called an explicit control structure.

Control Over Operations
Control at the expression level is concerned with the control over operations, the order of operations to be performed in computing the value of an expression.

Expressions
An expression is a formula for computing a value, and it is represented as a formalized sequence of operators, operands, and parentheses. An expression results in a computed value that resides in a temporary storage location until it is used. So, an expression has an implicit type that is derived from its subexpressions.

An expression is written as a sequence of operators, operands, and parentheses. An operator is a primitive function represented by a single symbol or a string of two symbols. An operand, which represents a data value and is actually a means of accessing a value, may be a constant, a variable name, a function reference, an array reference, or even another expression.

Programming languages provide implicit control over expressions by means of precedence and associativity rules. The programmer can impose explicit control through the use of parentheses.
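As a quick C/C++ illustration, precedence determines the implicit grouping, while parentheses override it:

```cpp
#include <cassert>

int implicit_order() {
    // * has higher precedence than +, so this groups as 2 + (3 * 4).
    return 2 + 3 * 4;
}

int explicit_order() {
    // Parentheses impose the grouping explicitly: (2 + 3) * 4.
    return (2 + 3) * 4;
}
```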

An operator may be classified on the basis of the number of operands it takes.


 * Unary: It takes only one operand. Examples are the ++ (increment) and -- (decrement) operators in C-based programming languages.
 * Binary: The operator takes two operands. Typical examples are the fundamental arithmetic operators +, -, *, and /.
 * Ternary: The operator takes three operands. In C-based languages, there is only one such operator: the conditional expression operator (?:).
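In C/C++, for instance, the three arities look like this (a minimal sketch):

```cpp
#include <cassert>

int arity_demo() {
    int a = 5, b = 2;
    int neg = -a;               // unary: one operand
    int sum = a + b;            // binary: two operands
    int max = (a > b) ? a : b;  // ternary: the conditional expression ?:
    return neg + sum + max;     // -5 + 7 + 5
}
```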

Representation of Expressions
There are a variety of ways to represent expressions. These are:


 * 1) Functional form: Functional form represents all operations as functions; the operands are arguments to these functions. It may also be called applicative form or ordinary prefix form. Using this representation, a * (b + c), for example, would be expressed as *(a, +(b, c)).
 * 2) Cambridge Polish: Being the type of functional form employed in LISP, the parentheses surround the operator and its associated operands. According to this representation, a * (b + c) is expressed as (* a (+ b c)).
 * 3) Infix: This is the traditional way of representing expressions, where the operator appears between the operands, as in a * (b + c).
 * 4) Prefix: In prefix notation, the operator appears before the operands. It is also referred to as the Polish notation. Using this notation, a * (b + c) is expressed as * a + b c.
 * 5) Postfix: In postfix notation, operands appear before the operator. It is also known as the Reverse Polish notation. a * (b + c) is expressed as a b c + *.

Note that the prefix and postfix notations are parenthesis-free. That is because we do not need to impose the order of evaluation explicitly; the sequence of operators and operands alone is sufficient to indicate the order of operator application.
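The parenthesis-free property shows up in how easily postfix is evaluated with a stack. The sketch below (hypothetical helper, single-digit operands only for brevity) handles +, -, *, and /:

```cpp
#include <cassert>
#include <cctype>
#include <stack>
#include <string>

// Evaluate a postfix expression over single-digit operands, e.g. "23+4*".
int eval_postfix(const std::string& expr) {
    std::stack<int> s;
    for (char c : expr) {
        if (std::isdigit(static_cast<unsigned char>(c))) {
            s.push(c - '0');              // operand: push its value
        } else {
            int rhs = s.top(); s.pop();   // operator: pop two operands,
            int lhs = s.top(); s.pop();   // apply it, and push the result
            switch (c) {
                case '+': s.push(lhs + rhs); break;
                case '-': s.push(lhs - rhs); break;
                case '*': s.push(lhs * rhs); break;
                case '/': s.push(lhs / rhs); break;
            }
        }
    }
    return s.top();
}
```

No parentheses are ever consulted; the left-to-right order of symbols alone drives the computation.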

Control Over Statements
Control at the statement level is concerned with the ordering of the statements that make up the program: a composition that may be accomplished sequentially, by selection among alternatives, or by iteration.

Simple Sequence
Simple sequence, or sequential flow, is the default control flow at the statement level. Execution of the statements follows their order in the program source. Often, simple sequence alone is insufficient for expressing the necessary computation.

Selection
Selective composition is employed when we wish to choose among two or more alternative statements (or groups of statements). Typical examples of selective composition are:


 * if: An if statement provides for the conditional execution of either a statement or a statement block based on whether a specified expression is true. Generally, it takes the form if (expression) statement, optionally followed by an else part.


 * Some programming languages, such as Scheme and Perl, offer a negated version of the if statement: unless.


 * case: Deeply nested if-else statements can often be correct syntactically and yet not express the intended logic of the programmer. Unintended else-if matchings, for example, are more likely to pass unnoticed. Modifications to the statements are also much harder to get right. That’s why, as an alternative method of choosing among a set of mutually exclusive choices, most programming languages provide a case statement.


 * One point to keep in mind: many programming languages constrain the type of the selector expression to integral values. That is, a fragment whose selector is, say, a floating-point or string expression may be rejected.
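A C/C++ switch with an integral (here, char) selector; replacing the selector with a double, for instance, would be rejected by the compiler:

```cpp
#include <cassert>

// Map a grade letter to its minimum score; the selector must be integral.
int min_score(char grade) {
    switch (grade) {          // switch (someDouble) would not compile
        case 'A': return 90;
        case 'B': return 80;
        case 'C': return 70;
        default:  return 0;
    }
}
```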

Iteration
Iterative composition is employed when part of a program is to be executed zero or more times. There are four basic iteration constructs.

Indexed Loop
Also known as the deterministic loop, this structure is used when we know the number of times the loop will be executed. An example is the for loop in Pascal.

Note that the for statement in C-based languages is a much more powerful, general construct and should be treated as a test-before loop. In addition to implementing nondeterministic loops, it can easily simulate deterministic ones.
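For instance, a Pascal-style counted loop such as `for i := 1 to 10 do sum := sum + i` (an illustrative fragment) can be written in C/C++ as:

```cpp
#include <cassert>

// A deterministic loop simulated with C's general for statement:
// sum the integers from 1 to 10.
int sum_to_ten() {
    int sum = 0;
    for (int i = 1; i <= 10; ++i)
        sum += i;
    return sum;
}
```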

There is a variation on the for loop that iterates over a list of values. It takes each value in the list, instead of a number from a number range, and executes one or more commands on it.
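C++ (since C++11) provides such a value-list loop as the range-based for; a sketch:

```cpp
#include <cassert>
#include <vector>

// Iterate over a list of values rather than a numeric range.
int sum_values() {
    std::vector<int> values = {4, 8, 15, 16};
    int sum = 0;
    for (int v : values)   // v takes each value in the list in turn
        sum += v;
    return sum;
}
```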

Test-Before Loop
The test-before loop is one of the two nondeterministic loops; the condition is checked at the beginning of the loop, which means that the loop body may never get executed. A typical example is the while statement in ALGOL-based programming languages.
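In C/C++ the test-before loop is the while statement; note that with an initially false condition the body never runs:

```cpp
#include <cassert>

// Count the decimal digits of a non-negative number with a test-before loop.
int digit_count(int n) {
    int count = 0;
    while (n > 0) {   // condition checked before each iteration
        n /= 10;
        ++count;
    }
    return count;     // for n == 0 the body never executes
}
```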

The general form of the for loop in C-based languages is for (expr1; expr2; expr3) statement. expr1 is evaluated once before the loop starts; it serves to make initializations. expr2 is evaluated before each iteration, and as soon as it evaluates to zero (or false in safer programming languages like Java and C#), execution will continue at the line following the for loop. expr3 is evaluated at the end of each iteration.

Any of these expressions can be omitted, although the semicolons must remain. If the first expression is omitted, the for loop simply does not perform any initializations. If the third expression is omitted, there won't be any side effects taking place at the end of the iterations. As for the absence of the second expression, it will be taken to be true. That is, for (;;) is an infinite loop, equivalent to while (1) in C and C++, and to while (true) in Java and C#.
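The omitted-expression case can be sketched as follows; both loops below are equivalent infinite loops that we cut short with break:

```cpp
#include <cassert>

// for (;;) with the middle expression omitted behaves like while (true).
int count_with_for(int limit) {
    int i = 0;
    for (;;) {            // second expression absent: taken to be true
        if (i >= limit) break;
        ++i;
    }
    return i;
}

int count_with_while(int limit) {
    int i = 0;
    while (true) {        // the equivalent test-before form
        if (i >= limit) break;
        ++i;
    }
    return i;
}
```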

For more on the equivalence of for and while loops, see the Goto Statement section.

Test-After Loop
In the case of a test-after loop, the condition is checked at the end of the loop. So, the loop body is executed at least once. Typical examples are the repeat-until statement in Pascal-based programming languages and the do-while statement in C-based programming languages.
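The C/C++ form is do-while; the body runs before the first test, so it executes at least once:

```cpp
#include <cassert>

// Even for n == 0 the body runs once, so the count is at least 1.
int digit_count_after(int n) {
    int count = 0;
    do {
        n /= 10;
        ++count;
    } while (n > 0);   // condition checked at the end of the loop
    return count;
}
```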

Unconditional Loop
In addition to these three constructs, some programming languages provide an unconditional looping structure. This structure is typically used together with an exit statement. An example of this is the LOOP-END structure in Modula-3.
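C and C++ have no dedicated LOOP-END, but the idiom is commonly simulated with an unconditional loop plus break (a sketch):

```cpp
#include <cassert>

// Simulate LOOP ... EXIT ... END: loop unconditionally, leave via break.
int first_power_of_two_above(int n) {
    int p = 1;
    for (;;) {             // unconditional loop
        if (p > n) break;  // the "exit" statement
        p *= 2;
    }
    return p;
}
```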

Goto Statement
It has been shown that if constructs for selection and iteration are available, then any program can be written without the use of the goto statement. But it doesn’t mean that it is entirely useless. Different forms of iteration can also be written in terms of others. Does it mean that they are useless? No!

It is granted that goto is a rather low-level control structure, and it had better be avoided where possible. But there may be places where it turns out to be the best choice. For instance, what can we do when we have to exit a loop prematurely? Some programming languages, such as C, C++, Java, and Modula-3, as seen from the example of the unconditional looping structure, provide control structures like break and exit; but there are programming languages that do not. The only answer in such cases is either to complicate the test expression of the loop or to use a goto with its target as the statement following the end of the loop. So, the corollary is: avoid its use, but don't ever say that it is entirely useless.
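A classic case where goto is arguably the cleanest choice in C and C++ is breaking out of nested loops, since break leaves only the innermost one:

```cpp
#include <cassert>

// Search a 2-D table; goto exits both loops as soon as a match is found.
bool contains(const int grid[2][3], int target) {
    bool found = false;
    for (int r = 0; r < 2; ++r)
        for (int c = 0; c < 3; ++c)
            if (grid[r][c] == target) {
                found = true;
                goto done;    // target: the statement after both loops
            }
done:
    return found;
}
```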

Control Over Subprograms
Control at the subprogram level is concerned with subprogram invocation and the relationship between the calling module and the called module. What follows is a list of possible ways the caller and the callee are related.

Simple Call
In simple call, the caller exerts total control over the callee. That is, there is a master/slave relationship between them.



When a subprogram is invoked, control of execution branches to the entry point of the subprogram, while the address of the next instruction, the one that will be processed after completion of the subprogram, is saved. When the exit point of the subprogram is encountered during execution, control returns to the caller and to the instruction corresponding to the address saved at the point of call.

The constraints inherent in this sort of relationship between the caller and the callee are:


 * 1) A subprogram cannot call itself.
 * 2) The subprogram is invoked by means of an explicit call statement within the sequence of statements that make up the calling program.
 * 3) Only a single subprogram has control of execution at any one time. Thus, two subprograms cannot be invoked to execute concurrently.
 * 4) And, of course, the calling program has total control over the subordinate subprogram, as a master to slave.

Recursion
When we remove the first constraint in the above list we get recursion. A separate activation record (frame) is created for each invocation of the recursive subprogram. Considering the cost of a call instruction and space used in the run-time stack, it is fair to say that a recursive solution is likely to be less efficient than its iterative counterpart with regard to both time and space considerations. However, for the sake of fairness, it should be stressed that the cost is due to the call instruction, not recursion itself.
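The classic comparison (factorial) in C/C++; each recursive call pushes a fresh activation record, while the iterative version reuses a single frame:

```cpp
#include <cassert>

// Recursive version: one activation record per pending multiplication.
long factorial_rec(int n) {
    return (n <= 1) ? 1 : n * factorial_rec(n - 1);
}

// Iterative counterpart: a single frame, constant stack space.
long factorial_iter(int n) {
    long result = 1;
    for (int i = 2; i <= n; ++i)
        result *= i;
    return result;
}
```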

In functional programming languages, recursion, a looping structure in disguise, is preferred over iteration. To avoid the performance penalty and the run-time stack overflow due to function calls, Scheme, a functional programming language, dictates that tail-recursive calls be transformed to goto statements by the compiler. For the same reasons, many compilers (not languages!) include tail-recursion removal in their bag of tricks.
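A tail-recursive form carries the partial result in an accumulator, so the recursive call is the last action; this is the shape a tail-call-optimizing compiler can turn into a jump (a sketch):

```cpp
#include <cassert>

// Tail-recursive: nothing remains to be done after the recursive call,
// so the call can be compiled as a jump back to the function's start.
long factorial_tail(int n, long acc = 1) {
    if (n <= 1) return acc;
    return factorial_tail(n - 1, acc * n);
}
```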

Some programming languages require a subprogram to be explicitly declared recursive. Examples are PL/I and FORTRAN 90. Pre-90 versions of FORTRAN and COBOL do not even allow recursive subprogram definitions. The reason why such a crucial problem-solving tool is ruled out is not that the designers of these programming languages could not foresee the use of recursion. These programming languages, when they came out, had to compete with assembly and machine code. To make this goal more attainable, their designers decided to make all data entities static, which automatically excluded the possibility of recursion.

Implicit Calls
When we lift the requirement that subprograms must be called explicitly, we have the possibility of subprograms being called outside the control of the programmer. This happens in two ways:


 * 1) Exception Handling: An exception is an event that occurs unexpectedly, infrequently, and at random intervals. Examples are divide by zero, subscript out of range, and end-of-file. An exception handler is a subprogram, written by the programmer, that interfaces with the operating system and is invoked only when a specified “exceptional condition” is encountered.
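In C++, the handler is written as a catch block and is entered only when a matching exception is thrown, with no explicit call anywhere in the code:

```cpp
#include <cassert>
#include <stdexcept>

// The handler runs only when the "exceptional condition" actually occurs.
int safe_divide(int a, int b) {
    try {
        if (b == 0)
            throw std::runtime_error("divide by zero");
        return a / b;
    } catch (const std::runtime_error&) {
        return 0;   // handler: invoked implicitly, not by an explicit call
    }
}
```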


 * 2) Scheduled Call: A scheduled call is typical of the type of subprogram control used in event-oriented simulation programs. Control over some subprograms is managed by a timing mechanism, a subprogram that may be built into the programming language or coded by the programmer. For example, suppose subprogram A "calls", that is, pre-schedules, subprogram B with a particular activation time. An activation record is created for subprogram B and inserted into a queue maintained in ascending order by activation time. The timing routine periodically examines the first record in this queue; when its activation time matches the current simulated time, the record is removed from the queue, placed on the run-time stack, and the subprogram is invoked. Subprograms may also be scheduled to occur not at an explicit time, but "as soon as possible." This might occur when the subprogram depends on the availability of a particular resource or the completion of another subprogram.
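The scheduling machinery described above can be sketched in C++ with a queue ordered by activation time (hypothetical names; integer ids stand in for subprograms):

```cpp
#include <cassert>
#include <functional>
#include <queue>
#include <vector>

// A scheduled "call": a subprogram (here just an id) with an activation time.
struct Event {
    int time;
    int id;
    bool operator>(const Event& other) const { return time > other.time; }
};

// Drain the queue in ascending activation-time order, as a timing routine
// would, returning the ids in the order the subprograms are invoked.
std::vector<int> run_schedule(const std::vector<Event>& scheduled) {
    std::priority_queue<Event, std::vector<Event>, std::greater<Event>> q(
        scheduled.begin(), scheduled.end());
    std::vector<int> invoked;
    while (!q.empty()) {
        invoked.push_back(q.top().id);  // "invoke" the earliest subprogram
        q.pop();
    }
    return invoked;
}
```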

Parallel Processing
Parallel processing, also called concurrent processing, refers to the concurrent execution of two or more subprograms in order to solve a specific problem. Parallelism may be real, that is, executing simultaneously in real time on multiple processors, or virtual, simulated parallelism on a single central processor.



In designing programs with parallel control, a programmer must consider some important problems, including synchronization, critical sections, and deadlock.

Coroutines
If we remove the requirement of the caller having total control over the callee, we get mutual control. Subprograms exerting mutual control over each other are called coroutines. Coroutines can each call the other. But, different from an ordinary call, a call by B to A actually transfers control to the point in subprogram A where A last called B. A different keyword, such as resume, may be used in place of call.
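True coroutines need language support, but the resume-where-you-left-off behavior can be sketched in C/C++ with a saved resume point (a well-known idiom, heavily simplified here):

```cpp
#include <cassert>

// Sketch of a coroutine-style generator: each call resumes right after
// the point where the previous call "yielded".
int next_value() {
    static int state = 0;  // saved resume point
    static int i = 0;      // loop counter survives across calls
    switch (state) {
    case 0:
        for (i = 1; i <= 3; ++i) {
            state = 1;
            return i;      // yield i; the next call resumes at case 1
    case 1:;               // resume point inside the loop body
        }
    }
    state = 0;             // sequence exhausted; reset for reuse
    return -1;
}
```

Each call picks up in the middle of the loop rather than at the top of the function, which is exactly the control transfer a coroutine provides.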



The way different processes in an operating system are related to each other is pretty much the same. When a process gives up the processor, another process gets executed on it. Once that process completes its time slice, it yields the processor, possibly back to the previous process. That process does not start execution from its first line but rather from where it left off.