
In this story we start from the known definition of expressive power and extend its scope to the semantics of programming languages, which have numerous ways to express the specific meanings of the terms of a language (eight formalisms are listed below). We suggest that (nearly) all of these semantic principles can be combined in atomistic semantics, which is a novel hybrid construction: a high-abstraction way to simulate source code.

What is Expressive power

In computer science, expressive power is a feature of a language which refers to:

  • what can be said in the language (at all)
  • how concisely it can be said.

Formal discussions mostly use the term in the first sense, leaving conciseness as a less important factor. This is the case in areas of mathematics that deal with the exact description of languages and their meaning, such as formal language theory, mathematical logic and process algebra.

In informal discussions, the term often refers to the second sense, or to both. This is often the case when discussing programming languages. Efforts have been made to formalize these informal uses of the term [2].

The notion of expressive power is always relative to a particular kind of thing that the language in question can describe, and the term is normally used when comparing languages that describe the same kind of things, or at least comparable kinds of things.

The design of languages and formalisms involves a trade-off between expressive power and analyzability. The more a formalism can express, the harder it becomes to understand what instances of the formalism say. Decision problems become harder to answer or completely undecidable.

Formal language theory mostly studies formalisms to describe sets of strings, such as context-free grammars and regular expressions. Each instance of a formalism, e.g. each grammar and each regular expression, describes a particular set of strings. In this context, the expressive power of a formalism is the set of sets of strings its instances describe, and comparing expressive power is a matter of comparing these sets.

An important yardstick for describing the relative expressive power of formalisms in this area is the Chomsky hierarchy.
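
As a standard textbook illustration of this yardstick: the set of strings $L = \{a^n b^n \mid n \ge 0\}$ is described by the context-free grammar $S \rightarrow aSb \mid \varepsilon$ but by no regular expression, so context-free grammars are strictly more expressive than regular expressions.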

Expressive power is one quality attribute of programming language semantics

There are several ways to describe the expressive power of each language term (e.g. a clause).

1) Axiomatic semantics

  • Axiomatic semantics define the meaning of a command in a program by describing its effect on assertions about the program state. The assertions are logical statements – predicates with variables, where the variables define the state of the program.

2) Algebraic semantics

  • In logic, algebraic semantics is a formal semantics based on algebras.

3) Operational semantics

  • Operational semantics describes the meaning of a program through the computation steps (state transitions of an abstract machine) that its execution induces.

4) Denotational semantics

  • Denotational semantics is concerned with finding mathematical objects called domains that represent what programs do. For example, programs (or program phrases) might be represented by partial functions, by Actor event diagram scenarios, or by games between the environment and the system.
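
For instance (a standard denotational equation, given here only as an illustration), an assignment can be denoted by a function between states:

 $[\![x := E]\!](\sigma) = \sigma[x \mapsto [\![E]\!](\sigma)]$

where $\sigma$ is a state mapping variables to values.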

5) Action semantics

  • Action semantics is an approach that tries to modularize denotational semantics, splitting the formalization process into two layers (macro- and microsemantics) and predefining three semantic entities (actions, data and yielders) to simplify the specification.

6) WP-semantics / Predicate transformer semantics

  • Predicate transformer semantics is an extension of Floyd-Hoare Logic invented by Dijkstra and extended and refined by other researchers. It was first introduced in Dijkstra’s paper “Guarded commands, nondeterminacy and formal derivation of programs”. It is a method for defining the semantics of an imperative programming language by assigning to each command in the language a corresponding predicate transformer. A predicate transformer is a total function mapping between two predicates on the state space of a program.
  • The canonical predicate transformer of sequential imperative programming is the so-called “weakest precondition” wp(S, R). Here S denotes a list of commands and R denotes a predicate on the state space, called the “postcondition”. The result of applying this function gives the weakest precondition for S terminating with R true. An example is the following definition of the assignment statement:

 $wp(x := E,\ R) = R^x_E$

This gives a predicate that is a copy of R with every free occurrence of x replaced by E.
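
A routine worked instance of this definition: $wp(x := x + 1,\ x > 5) = (x + 1 > 5) = (x > 4)$, i.e. the assignment terminates with $x > 5$ exactly when it is started in a state where $x > 4$.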

  • Refinement calculus is a formalized approach to stepwise refinement for program construction. The required behaviour of the final executable program is specified as an abstract and perhaps non-executable “program”, which is then refined by a series of correctness-preserving transformations into an efficiently executable program.

7) Hoare’s axiomatic semantics, i.e. Hoare logic

  • Hoare logic (also known as Floyd–Hoare logic or Hoare rules) is a formal system with a set of logical rules for reasoning rigorously about the correctness of computer programs. It was proposed in 1969 by the British computer scientist and logician C. A. R. Hoare, and subsequently refined by Hoare and other researchers. The original ideas were seeded by the work of Robert Floyd, who had published a similar system for flowcharts.
  • The central feature of Hoare logic is the Hoare triple. A triple describes how the execution of a piece of code changes the state of the computation. A Hoare triple is of the form
 $\{P\}\; C\; \{Q\}$
where P and Q are assertions and C is a command. P is called the precondition and Q the postcondition: if the precondition is met, the command establishes the postcondition. Assertions are formulas in predicate logic.
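
A simple standard instance: the triple $\{x = 1\}\; x := x + 1\; \{x = 2\}$ is valid, because executing the assignment in any state satisfying the precondition $x = 1$ yields a state satisfying the postcondition $x = 2$.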

8) Atomistic semantics

Atomistic semantics is a hybrid construction whose expressive power comes from Prolog, a logic programming language based on first-order predicate logic. Its type system is rich enough to express any axiomatic feature. The other side of this semantics is object-oriented, which makes it possible to define algebraic equations, formulas or any dependencies using object handles and their static counterparts, tree-based Prolog references, as the base.
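
To make the construction concrete, below is a minimal sketch, assuming a Prolog representation; the predicate names (atom_element/3, link/3, depends/2) are hypothetical illustrations, not the actual notation of the atomistic model:

  % A minimal, hypothetical sketch of atomistic elements in Prolog.
  % Each source-code term becomes one atom: a handle, a type, and a
  % command describing its semantics.
  atom_element(h1, method,  def(main, [])).
  atom_element(h2, creator, new('ServerSocket', [port])).
  atom_element(h3, var,     def(port, integer)).

  % Static dependencies between atoms: tree-based references that act
  % as the static counterparts of object handles.
  link(h1, h2, invokes).
  link(h2, h3, uses).

  % Any dependency can then be expressed and queried with first-order
  % predicate logic.
  depends(A, B) :- link(A, B, _).
  depends(A, B) :- link(A, C, _), depends(C, B).

Here the query ?- depends(h1, h3). succeeds by chaining the tree-based references, which is the kind of derivation logic programming gives for free.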

There is no research comparing atomistic semantics with denotational semantics, but there are many comparisons between Lisp and Prolog. They are rather comparable in expressing complicated models and functions, even though Lisp is more generic in expressing meta-features. The latter is not important when expressing features of source code (if macros and precompilers are ignored).

Operational semantics is the hardest one to cover, because it works on two levels (small-step and big-step descriptions).

Below is our summary as a picture.

Expressive power of a formalism

If two formalisms have equal power, then it is possible to express the same kinds of methods (working practices from the user’s point of view) using the tools of the corresponding formalisms. In program comprehension (PC) the user has multiple needs: to define hypotheses, to learn structures, to navigate, to understand behavior, to prove correctness, and to simulate and evaluate specific expressions. For these many different purposes of program comprehension, atomistic semantics is a remarkable suggestion.


Logic is the study of reasoning. Logic is used in most intellectual activity, but is studied primarily in the disciplines of philosophy, mathematics, and computer science. Logic examines general forms which arguments may take, which forms are valid, and which are fallacies. It is one kind of critical thinking. In philosophy, the study of logic falls in the area of epistemology: how do we know what we know. In mathematics, it is the study of valid inferences within some formal language.

Symbolic logic is the study of symbolic abstractions that capture the formal features of logical inference. It is the area of mathematics which studies the purely formal properties of strings of symbols. The interest in this area springs from two sources. First, the symbols used in symbolic logic can be seen as representing the words used in philosophical logic. Second, the rules for manipulating symbols found in symbolic logic can be implemented on a computing machine.

Symbolic logic is often divided into two branches, propositional logic and predicate logic.

Using Symbolic Logic in Symbolic Analysis

We use propositional logic to express the clauses of the Symbolic language, which have been translated from Java and C++. We use predicate logic for executing the necessary traversal and simulation features of the symbolic atomistic model.
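
As a rough sketch (assuming hypothetical predicate names, chosen only to illustrate the two roles): a translated statement can be stored as a ground, proposition-like fact, while the traversal features are written with predicate logic over variables:

  % Hypothetical sketch: a Java assignment "port = DEFAULT_PORT;" as a
  % ground fact of the model (propositional in character: no variables).
  clause_element(c1, assign(port, 'DEFAULT_PORT')).
  clause_element(c2, invoke(main, init)).

  % Traversal uses predicate logic: variables range over the model,
  % e.g. to find every clause that writes to a given variable.
  writes_to(Var, Id) :- clause_element(Id, assign(Var, _)).

  % ?- writes_to(port, Id).   binds Id = c1.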

First-Order Logic

First-order logic is a formal logic used in mathematics, philosophy, linguistics, and computer science. It goes by many names, including: first-order predicate calculus, the lower predicate calculus, and predicate logic. First-order logic is distinguished from propositional logic by its use of quantifiers; each interpretation of first-order logic includes a domain of discourse over which the quantifiers range.

There are many deductive systems for first-order logic that are sound (only deriving correct results) and complete (able to derive any logically valid implication). Although the logical consequence relation is only semidecidable, much progress has been made in automated theorem proving in first-order logic. First-order logic also satisfies several metalogical theorems that make it amenable to analysis in proof theory, such as the Löwenheim–Skolem theorem and the compactness theorem.

Higher-order logic

In mathematics and logic, a higher-order logic is distinguished from first-order logic in a number of ways. One of these is the type of variables appearing in quantifications; in first-order logic, roughly speaking, it is forbidden to quantify over predicates. See second-order logic for systems in which this is permitted. Another way in which higher-order logic differs from first-order logic is in the constructions allowed in the underlying type theory. A higher-order predicate is a predicate that takes one or more other predicates as arguments. In general, a higher-order predicate of order n takes one or more predicates of order n − 1 as arguments, where n > 1. A similar remark holds for higher-order functions.
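
A standard example of the difference is the induction principle, which quantifies over a predicate P and is therefore expressible in second-order but not in first-order logic:

 $\forall P\,\big(P(0) \land \forall n\,(P(n) \rightarrow P(n+1)) \rightarrow \forall n\,P(n)\big)$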

Higher-order logic, abbreviated as HOL, is also commonly used to mean higher-order simple predicate logic; that is, the underlying type theory is simple, not polymorphic or dependent.[1]

Higher-order logics are more expressive, but their properties, in particular with respect to model theory, make them less well-behaved for many applications. By a result of Gödel, classical higher-order logic does not admit a (recursively axiomatized) sound and complete proof calculus; however, such a proof calculus does exist which is sound and complete with respect to Henkin models.

Using Different Orders of Logic in Symbolic Analysis

The original clauses captured from the code are first-order logic. The side effects created by the original clauses are second-order logic. A side-effect element stacks the information created by the original clause together with its arguments. For example, the side-effect predicate created(A, B) means that symbol A created symbol B.
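
A minimal sketch of such side-effect facts in Prolog (created/2 is from the text above; the chaining rule creation_chain/2 is our hypothetical addition):

  % Side effects recorded during simulation: created(A, B) means that
  % symbol A created symbol B.
  created(main, server).
  created(server, serverSocket).

  % Hypothetical rule: trace the creation history by chaining the
  % side-effect facts.
  creation_chain(A, B) :- created(A, B).
  creation_chain(A, B) :- created(A, C), creation_chain(C, B).

  % ?- creation_chain(main, serverSocket).   succeeds.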

However, some ambiguities arise when object-oriented code is simulated freely, from an arbitrary location. These ambiguities increase the order of logic when they are not resolved by the user. However, under certain conditions the user can work as a Decider, selecting the most relevant information for the ambiguous values, types and conditions.

Output is Program Comprehension Specific Information

The different orders of logic define certain graphs connecting the relevant symbols through propositional logic, predicate logic and higher-order relations. Some parts of that information are architecture-specific (module dependencies), some are behavior-oriented (side effects) and some are static.


Pragmatics is a subfield of linguistics which studies the ways in which context contributes to meaning. Pragmatics encompasses many approaches to language behavior in philosophy, sociology, and linguistics.

It studies how the transmission of meaning depends not only on the linguistic knowledge (e.g. grammar, lexicon etc.) of the speaker and listener, but also on the context of the utterance, knowledge about the status of those involved, the inferred intent of the speaker, and so on.

A technology step from linguistics to the source code area

We apply these principles of linguistic pragmatics to software analysis and corrective maintenance. There is no speech interface involved, but there are the same problems: ambiguities, different kinds of contexts, and the gap between the code and the corresponding behavior. Simulation techniques are needed for removing the gap. For resolving ambiguities, the user is needed to select the context during partial simulation.

It is useful to seek common multidisciplinary approaches for one’s own research, because there are many traditional taxonomies and evaluation methods ready to be used.

One of the most appreciated pragmatists is Charles Sanders Peirce (see the Charles Sanders Peirce bibliography for more information about his remarkably productive life).

As an input for KnowledgeWare there is the output tape produced by the Symbolic Abstract Machine, SAM. The user can specify and check whether the selected features of the outputs are as assumed, by using hypotheses and theorems.

Because of its computational power, the computer should be used systematically as far as possible in order to make maintenance systematic and reliable. In addition to improved performance, computer support can improve communication between persons, because every member of a team can see exactly the same models and sequences, and can subsequently discuss the problems and assumptions for solving very complex problems, which almost without exception require laborious operations at the skill, rule, knowledge, and meta levels. FIGURE shows the overall principle of the methodology.

The task is transformed into problem recognition, which gives the focused elements in the model, and into problem formulation, which gives a logical notation describing the sequences and triples that are critical and need to be checked. Checking is done by using the atomistic model through queries that activate simulation runs. For these runs, the user does model checking either manually or by using the theorem prover, which gives a proof. The actions concerning the model are directed at the levels specified by Rasmussen (1983), and the information is classified according to the principles of the information ladder.
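
As an illustration only (a sketch with hypothetical names, not the tool’s actual query language), such a check could be phrased as a Prolog query over a simulated output tape:

  % Hypothetical sketch: on the output tape (a list of visited symbols),
  % check that a start element is eventually followed by the focused one.
  holds_on_tape(Tape, reaches(Start, Focus)) :-
      append(_, [Start | Rest], Tape),
      member(Focus, Rest).

  % ?- holds_on_tape([main, init, serverSocket],
  %                  reaches(main, serverSocket)).   succeeds.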

FIGURE: Resulting model for KnowledgeWare and MaintenanceWare

With the help of the actions of the information ladder at the different levels, the user can communicate with the computer in both directions.

Let’s describe a situation where a TCP/IP server, implemented as a class Server, does not start. All the necessary initial knowledge pertaining to the TCP/IP server is that the communication is done with the object ServerSocket. The user can employ the initial knowledge about the implemented modifications, the history of the software and earlier errors in many different ways, in order to locate the problem as efficiently as possible.

A general rule for isolating the problem using symbolic analysis is as follows:

  • At the knowledge level the user creates a hypothesis about a relation between the symptom and its possible cause. If the function does not start at all, the user may assume that the problem is either a state problem or a control-flow problem between the main method and the focus.
  • The user confirms the hypothesis that the problem can only occur when the program is started from the main method (skill action).
  • At the rule level the user validates the paths and, at the skill level, the invocations.
  • At the skill level the user validates all the conditions and their relevance. The user inspects the control paths that are close to the target and at the end of the output tape, because it may be assumed that the problem is in one of the last conditions of the simulation (unless a totally erroneous logic path has been selected).

In this particular case the process can be divided into iterative phases as follows:

1. Problem Recognition:

  • The failure is located in an application that is responsible for TCP/IP communication. The process and code for creating a ServerSocket object should be found there.

2. Problem Formulation:

  • It is found that there is only a main method in the application. Thus, it is selected as the start element. The formulation is then to investigate the flow between it (main) and the target element, the ServerSocket instantiation.
  • Locating the ServerSocket is a search activity that requires skill knowledge to identify the possible invocations (skill action).
  • The object ServerSocket is the selected target (problem object), the focused element. All possible input paths leading to it are found (knowledge action); a sketch of such a path query follows below.
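
A sketch of such a path query in Prolog (all names below are hypothetical; invokes/2 stands for the invocation links captured in the model):

  % Hypothetical invocation links extracted from the model.
  invokes(main, startServer).
  invokes(startServer, newServerSocket).

  % Enumerate all input paths from the start element to the focused one.
  path(A, B, [A, B]) :- invokes(A, B).
  path(A, B, [A | P]) :- invokes(A, C), path(C, B, P).

  % ?- path(main, newServerSocket, P).
  % P = [main, startServer, newServerSocket].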

3. Problem Analysis:

  • When the start and the target are fixed, a complete query can be made from main to the creation of the ServerSocket object. This is simulated and the output tape is created.
  • If an object is not created, then the problem should be in the selected sequence.
  • The assignments are checked (the variable port, etc.).
  • The user finds that the only things that can influence the variable port are the command line and the variable DEFAULT_PORT.
  • The value of DEFAULT_PORT is checked.
  • In this particular case, a problem is found in the side-effect element concerning the value of the variable DEFAULT_PORT; a small sketch of this value resolution follows below.
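
In Prolog terms, the last steps above could look like the following sketch (all facts are hypothetical; the bad value is invented purely for illustration):

  % Hypothetical recorded assignments from the simulation.
  assigned(port, from('DEFAULT_PORT')).
  assigned('DEFAULT_PORT', value(-1)).   % the suspicious side effect

  % Resolve the value that finally lies behind a variable.
  resolves_to(Var, V) :- assigned(Var, value(V)).
  resolves_to(Var, V) :- assigned(Var, from(Src)), resolves_to(Src, V).

  % ?- resolves_to(port, V).   binds V = -1, exposing the bad value.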

4. Change Specification:

  • The modification should be done in the area of the variable port. It can be written in several ways. The best way can be evaluated by means of impact analysis, using forward-chaining theorems on the current model in order to minimize the number of impacts.

5. Unit Testing:

  • After the modification, the same code area with the new code is inspected again. The correction can be read from the simulation output tape without going through problem formulation or preliminary analysis.
  • In those cases where the modified elements form complex logic with their environment, a theorem prover can be used to show that the newer version uses valid control paths. Inspecting side effects is not necessary if there are no changes in the references. In regression testing, instead, all test cases are exhaustively checked.

The results from a proving process vary depending on the problem type and the eagerness. In familiarization, in the read-to-recall process (Burkhardt et al., 2002), the user gets a number of chunks including information about program relations (see TABLE). If the task is a troubleshooting problem, the user loads the personal memory as economically as possible without learning unnecessary features. The output from the proving process is then a number of matches, where contradictions have an important role. Each contradiction that cannot be explained by other matches or other contradictions is a possible error.

When planning changes, each possible place to be modified can be considered a contradiction against the current program model, because there is a mismatch between the impacts of the planned modification and the current behavior. As shown in the TABLE, a proving method can be seen as a state transition table whose state variables are the initial/current knowledge, the proof result and the type of the PC effort. These lead step by step to conclusions, i.e. to the next steps of either familiarization or troubleshooting.

This section describes a computer-aided process, where the maintenance task is performed iteratively, in a systematic way, based on user actions and computations of the computer. It has the following features:

  • An iterative, gradually deepening process (Deming, 2000) from concepts via ProgramConcepts to ProgramContexts.
  • The user has total control over the simulation in order to select all the relevant program flows, e.g. the ProgramContexts to be analyzed.
  • Each relevant ProgramContext is simulated to produce an output tape.

TABLE     Making conclusions based on proof results.

| Type of current PC phase | Initial knowledge | Proof result | Conclusions, remarks and possible decisions |
|---|---|---|---|
| Familiarization | Low | Finding relevant plans | Chunking relevant places |
| Familiarization | Perfect | No contradictions | As above, no conclusions |
| Familiarization | Perfect | Contradiction | A conflict: either an error or an exception. It starts a troubleshooting phase |
| Troubleshooting | Low | No contradictions | Find relevant places. Use a familiarization phase to increase eagerness |
| Troubleshooting | Moderate | Contradiction | Find more information by using familiarization if needed |
| Troubleshooting | Moderate | No contradictions | Continue, skip to the next subtask |
| Troubleshooting | High | No contradictions | Continue proving |
| Troubleshooting | High | A contradiction | Conclusion: isolate the problem, fix the bug |

Software maintenance is a process for updating software, starting from specified tasks and implemented as a maintenance process (MaintenanceWare), in order to meet the selected quality and reliability (error-freeness) requirements for the software.

A change request (CR) is a major driver for maintenance in typical situations. For changing code, the following is a typical process:

  1. The CR is formulated (MaintenanceWare).
  2. The necessary information about the CR is collected (KnowledgeWare).
  3. The problem is recognized either by using created models (ModelWare) or by finding the patterns from code (grammar-related search).
  4. The problem area is analyzed (often simulated by SimulationWare) in order to validate the most relevant impacts and possible changes.
  5. The changes are made.
  6. The software is checked again, using new tests and older regression tests.

The main data flow starts (rather often) from code (GrammarWare). From it, models are created (ModelWare) in order to create knowledge (KnowledgeWare), where simulation is needed in order to understand control flows, data flows and object-oriented features. This simulation/examination process (SimulationWare) can be carried out manually or by tools (Maintenance ToolWare). The process shows the developer the most critical parts for changes, the change candidates (CC).

Maintenance ToolWare is a technology space focusing on tools for maintenance tasks.

MaintenanceWare: technology focusing on maintenance tasks.

Erkki Laitila, PhD (2008), computer engineer (1977)