You are currently browsing the category archive for the ‘Towards Practical Use and Usecases’ category.

There is an attractive presentation about how knowledge is represented by people in various roles, such as practitioners, software people, and scientists: How to tell stuff to the computer. It describes a triangle (see below), whose corners are practical domain knowledge (lower left corner), software artifacts (top corner) and science (lower right corner). The picture places some technologies inside the triangle. The most important things in the triangle are, in the writer's opinion, the steps along the lines connecting the corners.

Triangle of Knowledge Representation (KR, see http://www.lisperati.com).

In his/her conclusion the writer forecasts that there will be a revolution in the future, caused by description logics (see http://www.lisperati.com/tellstuff/conclusion.html). I warmly agree with that conclusion, because logic has a very strong role in the framework of symbolic analysis.

Still, it is difficult to see where the beef is in the description logic here: http://www.lisperati.com/tellstuff/dl.html; the text contains traditional monolithic Lisp. However, the idea of the title, Marriage of Logic and Objects, is a very good vision. I have had the same goal in the architecture of AHO hybrid objects. Furthermore, there is a solid contact surface between the semantic web and symbolic analysis (see more).

Symbolic and holistic approach for estimating knowledge produced by the software

The triangle (above) is useful as a basis for illustrating software development and its knowledge representation, too. In the lower triangle (see below) I have named the corners, respectively: domain knowledge, program source, and information system (IS) pragmatics caused by the software.

Software Knowledge Representation (SKR). (Laitila 2010)

The last corner is not science as in the triangle above; instead it stands for all attempts to understand the software and its value as an empirical product. The last corner is thus an attempt to get empirical and practical research information from the implemented software. It is a broad approach with two sides:

  1. a problem-specific approach supported by reverse engineering, and
  2. a holistic approach in order to evaluate the whole.

There are some essential roles in the figure. All essential information is thought to be stored in an imagined megamodel (specification, resource information, sprints, tests etc.).

The three lines are:

  1. The left line describes software development, from domain knowledge to code.
  2. The line from the top to the lower right corner is symbolic analysis, containing the technology spaces GrammarWare, ModelWare, SimulationWare and KnowledgeWare. For practical purposes there is a problem reasoning technology (PRT) close to the right corner.
  3. The bottom line is a problem, because there is no direct support for estimating how well a system satisfies all possible user needs, but there are some technologies for creating end-user services so that they can be mapped into code and remain visible in the system. SOA, aspects, the Zachman architecture and metrics are some means for that purpose.



Automated theorem proving (ATP) or automated deduction, currently the most well-developed subfield of automated reasoning (AR), is the proving of mathematical theorems by a computer program.

Depending on the underlying logic, the problem of deciding the validity of a formula varies from trivial to impossible. For the frequent case of propositional logic, the problem is decidable but co-NP-complete, and hence only exponential-time algorithms are believed to exist for general proof tasks. For first-order predicate calculus with no (“proper”) axioms, Gödel's completeness theorem states that the theorems (provable statements) are exactly the logically valid well-formed formulas, so identifying valid formulas is recursively enumerable: given unbounded resources, any valid formula can eventually be proven.

However, invalid formulas (those that are not entailed by a given theory) cannot always be recognized.

A simpler, but related, problem is proof verification, where an existing proof for a theorem is certified valid. For this, it is generally required that each individual proof step can be verified by a primitive recursive function or program, and hence the problem is always decidable.
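To see why locally checkable steps make verification decidable, here is a minimal proof checker sketched in standard Prolog (e.g. SWI-Prolog). The predicate names valid_proof/2 and justified/2 are my own illustrative inventions, and modus ponens is the only inference rule covered:

% valid_proof(+Premises, +Steps): succeeds if every step in the list
% is already known or follows by modus ponens from earlier lines.
valid_proof(_, []).
valid_proof(Known, [F | Rest]) :-
    justified(Known, F),
    valid_proof([F | Known], Rest).

% A line is justified if it is known, or if some known implication
% implies_(A, F) has a known antecedent A.
justified(Known, F) :- member(F, Known).
justified(Known, F) :- member(implies_(A, F), Known), member(A, Known).

% ?- valid_proof([s_(p), implies_(s_(p), s_(q))], [s_(p), s_(q)]).
% true: every step is verified by a terminating local check.

Because each step needs only membership tests over a finite list of earlier lines, the whole verification always terminates.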

Some techniques for theorem proving


How to make it as easy as possible?

A lean theorem prover is an automated theorem prover implemented in a minimum amount of code. Lean provers are generally implemented in Prolog, and make proficient use of the backtracking engine and logic variables of that language. Lean provers can be as small as a few hundred bytes of source code.

The following is a small Visual Prolog domain declaration defining the propositional expressions for such a prover:

propexp =
    not_(propexp);                  % negation
    or_(propexp, propexp);          % disjunction
    and_(propexp, propexp);         % conjunction
    implies_(propexp, propexp);     % implication
    s_(symbol);                     % a named propositional symbol
    a; b; c.                        % three constant atoms

The point is that this definition of a theorem can be converted into an atomistic model, so that each domain term is transformed into a symbolic element.

Proving the theorem then means running the top element of the theorem. The principle is exactly the same as in source code simulation (see SimulationWare).

Once the variables a, b and c are captured from the original code, this approach will enable proving source code sequences written in Java or similar languages.
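As a sketch of what running the top element could look like, here is a minimal evaluator in standard Prolog (e.g. SWI-Prolog). It mirrors the propexp domains above, but the predicate names (eval/3, tautology/1 and the helpers) are illustrative inventions, and validity is decided by brute-force truth-table enumeration rather than by a genuinely lean tableau prover:

negate(true, false).
negate(false, true).
or(true, _, true).
or(false, V, V).
and(false, _, false).
and(true, V, V).

% eval(+Formula, +Assignment, -Value): run a propexp term under a
% list of Var = Bool bindings; this is the "execution" of the model.
eval(s_(X), Asg, V)          :- member(X = V, Asg).
eval(not_(F), Asg, V)        :- eval(F, Asg, V0), negate(V0, V).
eval(or_(F, G), Asg, V)      :- eval(F, Asg, V1), eval(G, Asg, V2), or(V1, V2, V).
eval(and_(F, G), Asg, V)     :- eval(F, Asg, V1), eval(G, Asg, V2), and(V1, V2, V).
eval(implies_(F, G), Asg, V) :- eval(or_(not_(F), G), Asg, V).

% vars/2 collects the propositional variables of a formula.
vars(s_(X), [X]).
vars(not_(F), Vs)        :- vars(F, Vs).
vars(or_(F, G), Vs)      :- vars(F, V1), vars(G, V2), append(V1, V2, Vs).
vars(and_(F, G), Vs)     :- vars(F, V1), vars(G, V2), append(V1, V2, Vs).
vars(implies_(F, G), Vs) :- vars(F, V1), vars(G, V2), append(V1, V2, Vs).

% assign/2 enumerates all truth assignments on backtracking.
assign([], []).
assign([X | Xs], [X = V | Asg]) :- member(V, [true, false]), assign(Xs, Asg).

% tautology(+F): F is valid if no assignment makes it false.
tautology(F) :-
    vars(F, Vs0), sort(Vs0, Vs),
    \+ ( assign(Vs, Asg), eval(F, Asg, false) ).

% ?- tautology(implies_(and_(s_(a), s_(b)), s_(a))).   % true

Binding a, b and c to values captured from the analyzed program would then replace the blind enumeration in assign/2 with the program's actual state.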


Symbolic Analysis: From Theory to Practice

Hello, reader!

During a long period of working as a software engineer I have found that the number of complex cognitive tasks is rapidly increasing in our society. Software is becoming public, which opens challenging new possibilities for active persons to use other programmers' work. Furthermore, our society is becoming more and more dependent on information systems, which makes our life riskier and more uncertain.

Mission: Unified Theory and Framework for Mastering Symbols

Atomistic Information Model for Cognitive Tasks

During my study (2004-2008) I found that it is possible to create an object-oriented hybrid software construction which allows presenting symbols as they are in our mind. For me it was a nice discovery, and perhaps it will be for you, too! Thanks to this remarkably simple architecture it is possible to make sophisticated models, which can formulate mathematical expressions, source code, traffic, theorem provers and any implementation that has a formal basis.

Executable Model, which Helps in Understanding the Information

The information model alone is not enough. Thanks to the virtues of the formal model, it is possible to execute, simulate and traverse the model in a way that resembles our thinking. We illustrate all possible thinking strategies for any symbol (like a car, a computer or a SocketServer object in our software) as an automaton. The contents of the automaton differ for different purposes, such as data analysis, control-flow analysis etc.
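As a very rough sketch of the idea, a per-symbol automaton could be encoded in Prolog with one transition table per analysis purpose. All state and purpose names below are hypothetical; none of them come from the actual tool:

% Hypothetical transition tables, one per analysis purpose.
next(control_flow, created, invoked).
next(control_flow, invoked, returned).
next(data_flow,    defined, used).

% traverse(+Purpose, +State, -Path): walk the automaton for one
% purpose, collecting the visited states.
traverse(Purpose, S, [S])        :- \+ next(Purpose, S, _).
traverse(Purpose, S, [S | Path]) :- next(Purpose, S, S1), traverse(Purpose, S1, Path).

% ?- traverse(control_flow, created, P).
% P = [created, invoked, returned]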

Unfortunately, we cannot remove the Chinese room argument or the symbol grounding problem, but our executable model helps the user stay synchronized with the tool's activities in order to understand those relations that are formal and explicit. Referring to programming language models, the user can select from the possible alternative ambiguous situations those branches and contexts that are most relevant for him/her.

The Practice: Problem Isolation and Bug Detection

We introduce a unique problem-solving framework for symbolic analysis as a practical application. It contains the most useful source code analysis methods in order to help the user navigate the software and its features when hunting bugs in the code. The user interface is query based, but the tool allows the user to follow every detail of the simulation (understanding) process: an extremely high granularity, which cannot be provided by static and dynamic analyses.

You are welcome to read more in the other topics of this blog. The information is categorized using technology spaces.

Erkki Laitila, PhD, SwMaster Ltd

erkki.laitila at swmaster.fi

Understanding side effects is one of the most complex and challenging aspects of program comprehension.

In computer science, a function, expression, method, class or package is said to have a side effect if, in addition to producing a value, it also modifies some state or has an observable interaction with calling functions or the outside world. For example, a function might modify a global or a static variable, modify one of its arguments, raise an exception, write data to a display or file, read data, or call other side-effecting functions. In the presence of side effects, a program's behavior depends on history; that is, the order of evaluation matters. Because understanding an effectful program requires thinking about all possible histories, side effects often make a program harder to understand.
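Even Prolog, the implementation language used elsewhere on this blog, shows this history dependence as soon as the dynamic database is used. The following small example (the predicate names are made up for illustration) produces a different observable result on every call:

:- dynamic hits/1.
hits(0).

% record_hit/0 has two side effects: it updates the hits/1 fact in
% the dynamic database and writes to the output stream. Its visible
% behavior therefore depends on how many times it was called before.
record_hit :-
    retract(hits(N)),
    N1 is N + 1,
    assertz(hits(N1)),
    format("hits so far: ~w~n", [N1]).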

Side effects are essential to enable a program to interact with the outside world (people, file systems, other computers on networks). But the degree to which side effects are used depends on the programming paradigm. Imperative programming is known for uncontrolled, promiscuous use of side effects. Thus Java, C++ and other imperative languages are problematic with regard to the topic of this post.

How side effects can be captured by the analysis paradigms

  • Static analysis cannot handle side effects at all.
  • In dynamic analysis, selected clauses can be stored in a trace or an output.
  • For symbolic analysis (Laitila), every activity for every symbol can be traced and stored, if needed.

The output of symbolic analysis is the most accurate trace

If symbol S activates symbol S1 in order to evaluate whether it is true or not, a side-effect symbol SE1 can be created every time S1 is evaluated. Hence, after execution the output trace is S • S1 • SE1.
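A minimal sketch of this trace mechanism in standard Prolog, assuming a hypothetical two-symbol model (activates/2 and evaluates_to/2 are invented stand-ins for the real symbolic model):

:- dynamic trace_event/1.

% Hypothetical model: symbol s activates s1, which evaluates to true.
activates(s, s1).
evaluates_to(s1, true).

% run/1 records the activated symbol, every symbol it activates, and
% a side-effect symbol se(Symbol, Value) for each evaluation.
run(S) :-
    assertz(trace_event(S)),
    forall(activates(S, S1),
           ( assertz(trace_event(S1)),
             evaluates_to(S1, V),
             assertz(trace_event(se(S1, V))) )).

% ?- run(s), findall(E, trace_event(E), Trace).
% Trace = [s, s1, se(s1, true)]     % i.e. S • S1 • SE1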

That if-condition was only one small example, but by extending the same principle to any statement and any evaluation, it is possible to get detailed information about complex algorithms and separate threads, almost any typical clause of the code. However, there are some restrictions on whether it is possible to evaluate a statement (clause) or not. See more about computability conditions.

Summary about side effects

Symbolic analysis is the most accurate paradigm for saving evaluation results: all computations, or selectively any category of them.

By using side-effect elements the user can verify specific parts of the program, piece by piece, provided that he/she can formulate the necessary input preconditions (prerequisites) for the code. In some cases verifying program parts manually is still rather challenging. In those cases it is recommended to load the output(s) of the symbolic analysis into any commercial or free graph tool, e.g. a PERT graph tool, a visualization tool, Excel or similar. Such generic tools are helpful in finding anomalies or general rules based on the symbols and side effects.

Symbolic analysis is a hybrid methodology: it combines the features of static analysis with simulation capabilities, i.e. the execution facilities that are normally provided by dynamic analysis. However, symbolic analysis often suffers from missing information: it cannot have access to databases, libraries, proxies, servers etc.

Therefore, the main use of the methodology is to connect symbols with each other and, by means of these links, to explain the relations between symbols.
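For example, given hypothetical call links between symbols (the facts below are invented, not output of the actual tool), explaining the relation between two symbols reduces to computing the transitive closure of the links in Prolog:

% Hypothetical symbol links extracted from a program.
calls(main, init).
calls(init, openSocket).
calls(main, loop).

% related/2 explains the relation between two symbols as a chain of
% links: the transitive closure of calls/2.
related(A, B) :- calls(A, B).
related(A, B) :- calls(A, X), related(X, B).

% ?- related(main, openSocket).   % true: main reaches openSocket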

Its main uses are:

  • Familiarization: getting a preview of the methods and classes that work with each other.
  • Troubleshooting: verifying paths and calculations.

Erkki Laitila, PhD (2008), computer engineer (1977)