Overview

There are two primary aspects to a program:
  1. A collection of algorithms (that is, the programmed instructions to solve a particular task).
  2. A collection of data against which the algorithms are run to provide each unique solution.
These two primary program aspects, algorithms and data, have remained invariant throughout the short history of computing. What has evolved is the relationship between them. This relationship is spoken of as a programming paradigm.

C is a fairly simple language. All it really offers is macros, pointers, structs, arrays and functions. No matter what the problem is, the solution will always boil down to macros, pointers, structs, arrays and functions. Not so in C++. These things are still there, but so are class, reference, inheritance, polymorphism, template, generic algorithm, exception, namespace and more. The design space is much richer in C++ than it is in C: there are just a lot more options to consider.

C++ is not purely an object-oriented language but a multi-paradigm programming language with a bias towards systems programming (inherited from its parent, C). There are many ways to employ C++. The benefit is that each problem can be given a solution in the paradigm best suited to it; in practice, no one paradigm represents the best solution to every problem. The drawback is that supporting many paradigms makes for a larger and more complicated language.

C is a procedural language. A function is a procedure. C++ can be used simply as an improved C. Procedural programming is associated with a design technique known as top-down design.

In top-down design, a problem is associated with a procedure. Consider, for example, the problem of producing a schedule for a manufacturing task such as building an automobile. The problem is labeled MainProblem and assigned to the procedure main. Because MainProblem is too complex to solve straightforwardly in main, it is decomposed into subproblems such as building the chassis, building the engine, building the drivetrain, assembling the already built components and inspecting the components and their assembly. Each subproblem is then assigned to a subprocedure, which is a function that main invokes.

The subproblems may be further decomposed. This process of top-down, functional decomposition continues until a subproblem is straightforward enough that the corresponding subprocedure can solve it. Top-down design has the appeal of being intuitive and orderly. Many hard problems continue to be solved using this design technique. Yet the technique has drawbacks, especially with respect to what is known euphemistically as software maintenance, which deals with the testing, debugging and upgrading of software systems.

Experienced programmers know that the most difficult task is not writing a program in the first place, but rather changing it afterwards because the program is flawed ("infected by bugs"), the program's requirements change, the program needs to execute more efficiently, and so on.

Suppose that MainProblem changes so that main must pass an additional argument to each of the subprocedures that it invokes, that each of these must in turn pass the argument to its subprocedures, and so on until the change has rippled throughout the entire hierarchy. This phenomenon is known as cascading changes: a change in a procedure such as main cascades or ripples down to its subprocedures and to their subprocedures until it impacts much if not all of the decomposition hierarchy. The threat of cascading changes can take the fun out of maintenance programming!


ANSI C89 is almost a subset of C++, but since C99, C and C++ have each branched out in their own direction. Why C++?

From years of software development, it was learnt that a major defect of the data-structure problem-solving paradigm is the broad scope and visibility that the key data structures have with respect to the surrounding software system. A programming language should therefore let the programmer restrict that scope and visibility.

This requirement results in the concept of an object.


Consider the challenge that faces modern software products such as word processors, databases, spreadsheets and the like. These products undergo rapid development cycles in which a new and perhaps significantly different version replaces an older version. How can users survive such changes? For one thing, the products try to maintain a consistent interface. Consider the word processor. From one version to the next, it still supports commands that allow the user to Open a new document, Save a document, Print a document, Copy a document, Format a document in some special way, and the like.

The word processor's interface is the functionality made available to the user through commands such as the ones just listed. The interface is said to be public to underscore that it is visible to the user. What is typically not public in a modern software product is the underlying implementation. The implementation is said to be private to underscore that it is not visible to the user.

A goal of modern software design is thus to keep a product's public interface as constant as possible so that users remain fluent in the product even as its private implementation improves or otherwise changes.


Object-oriented programming is based on a client/server model of computing. This model explains the emphasis placed on information hiding in object-oriented programming. Consider a String class with a private implementation that supports a public interface consisting of methods to create and destroy strings, concatenate them, search them for characters and so on. The String class is a provider of services for string processing: a server of String objects that supply string-processing functionality. An application such as a C++ program that uses the String class is a client. A client requests services from a String object by invoking one of its methods, which is characterized as sending a message to the object. For example, the code segment

String s1;
s1.set("Hello World!");
int n = s1.getLen();
first creates a String object s1. The code segment then sends a message to s1 to request s1 set its value to "Hello World!". Finally, the code segment requests that s1 return its length. This code segment occurs in a program that acts as a client of the String object s1, a server. Note that a message is passed to a server such as s1 by invoking one of its methods.

A good server provides services with a minimum of effort on the client's part. In particular, the client should not be required to know how the server provides the services. The server's implementation details should be hidden from the client. The client should need to know only the server's interface, which typically consists of methods that the client can invoke. Such methods send messages to the server, which are requests for services. The server may send data back to the client, perform actions that the client requests and so on. Good servers practice information hiding so that clients find them easy to use.


C/C++ jargon
  1. Undefined behaviour

    Behavior, such as might arise upon use of an erroneous program construct or erroneous data, for which this International Standard imposes no requirements. Undefined behavior may also be expected when this International Standard omits the description of any explicit definition of behavior.
    [Note: permissible undefined behavior ranges from ignoring the situation completely with unpredictable results, to behaving during translation or program execution in a documented manner characteristic of the environment (with or without the issuance of a diagnostic message), to terminating a translation or execution (with the issuance of a diagnostic message). Many erroneous program constructs do not engender undefined behavior; they are required to be diagnosed.]

    It basically means that the standards describe code as resulting in undefined behaviour when they place no limit on what happens when that code is compiled and executed (no standard behaviour is defined).
    /* first example: dereferencing a null pointer */
      int *x = NULL;
      *x = 42;

    /* second example: writing past the end of a five-byte buffer */
      char x[5];
      strcpy(x, "More than five bytes, including terminating zero");
    

    Programs that use undefined behavior are said to be nonportable. Although the program may execute correctly for the current compiler, there is no guarantee that the same program, compiled under a different compiler or a subsequent release of the current compiler, will continue to run correctly.

    In practice, compilers are typically silent when code does this, and what often happens is an abnormal program termination (e.g. a core dump under Unix, a general protection fault under Windows). However, such a program crash is only one possibility. In theory, your computer could even give you an electric shock when it executes this code (it is perhaps unfortunate that electric shocks rarely result from undefined behaviour, as undefined behaviour is one of the largest causes of unexplained software bugs).

    Why do the standards allow such things? The above examples are constructed so that it is obvious what is happening. However, in complex code where variables are shared or passed between functions, undefined behaviour is technically very difficult to detect (and in some cases, where data may be read from files at run time, impossible).

    A compiler (or run time environment) that could detect all instances of undefined behaviour, in general, would be both expensive and run slowly. Programmers tend to insist on things like inexpensive compilers, fast compile times, minimal memory footprints, and fast execution of their code. Detecting all cases of undefined behaviour would compromise those things.

    Most real-world examples of undefined behaviour encountered in practice are some sort of problem with pointers.

  2. Unspecified behaviour

    Behavior, for a well-formed program construct and correct data, that depends on the implementation. The implementation is not required to document which behavior occurs.
    [Note: usually, the range of possible behaviors is delineated by this International Standard.]

    Unspecified behaviour is like undefined behaviour, except that the standards impose some constraints on what is allowed to happen. However, those constraints do not result in only one possible action. An obvious example is the call f(g(), h(), i());

    In this example, the standard requires that g(), h() and i() will be called and their return values then passed to f(). However, the standard does not specify the order in which g(), h() and i() will be called. Some compilers will call g() first and i() last. Some compilers do it in reverse order. In theory, a really smart compiler could do all three calls concurrently, by executing each on a different CPU.

    The reason for such freedom to compiler writers is to allow performance optimisations on a range of target operating systems and hardware. In other words, allow the compiler writer to decide the best order to do things.

    If one cares about the order in which g(), h() and i() are called, one can do

    int gret, hret, iret; 
    gret = g(); 
    hret = h(); 
    iret = i(); 
    f(gret, hret, iret); 
    
    as the compiler is required not to reorder the function calls in this case. The code does not rely on any form of unspecified behaviour to work correctly.

  3. Implementation defined behaviour

    Behavior, for a well-formed program construct and correct data, that depends on the implementation and that each implementation shall document.

    Common examples of implementation defined behaviour include the bit layout of integer and floating point types, and how operations like addition are implemented. These are things the implementation (i.e. the compiler and libraries) is required to define, but different implementations are allowed to define them differently.

    sizeof(int) is an example of an implementation defined value. With older 16 bit compilers, sizeof(int) was often 2. With modern 32-bit compilers, sizeof(int) is often 4.
