
There is an attractive presentation about presenting knowledge from the viewpoints of various roles such as practitioners, software people, and scientists: How to tell stuff to the computer. It describes a triangle (see below) whose corners are practical domain knowledge (lower left corner), software artifacts (top corner), and science (lower right corner). The picture proposes some technologies inside the triangle. The most important things in the triangle are, in the writer's opinion, the steps along the lines connecting the corners.

Triangle of Knowledge Representation (KR, see http://www.lisperati.com).

In the conclusion the writer forecasts a future revolution caused by description logics (see http://www.lisperati.com/tellstuff/conclusion.html). I warmly agree with that conclusion, because logic has a very strong role in the framework of symbolic analysis.

However, it is difficult to see where the beef is in the description logic here: http://www.lisperati.com/tellstuff/dl.html; the text contains traditional monolithic Lisp. Still, the idea of the title, Marriage of Logic and Objects, is a very good vision. I have had the same goal in the architecture of AHO hybrid objects. Furthermore, there is a solid contact surface between the semantic web and symbolic analysis (see more).

Symbolic and holistic approach for estimating knowledge produced by the software

The triangle (above) is useful as a basis for illustrating software development and its knowledge representation, too. In the triangle below I have named the corners respectively: domain knowledge, source of the program, and information system (IS) pragmatics caused by the software.

Software Knowledge Representation (SKR). (Laitila 2010)

The last corner is not science as in the triangle above; instead, it stands for all attempts to understand the software and its value as an empirical product. The last corner is thus an attempt to obtain empirical and practical research information from the implemented software. It is a broad approach with two sides:

  1. a problem-specific approach supported by reverse engineering, and
  2. a holistic approach in order to evaluate the whole.

There are some essential roles in the figure. All essential information is thought to be stored in an imagined megamodel (specification, resource information, sprints, tests, etc.).

The three lines are:

  1. The left line describes software development, from domain knowledge to code.
  2. The line from the top to the lower right corner is symbolic analysis, containing the technology spaces GrammarWare, ModelWare, SimulationWare, and KnowledgeWare. For practical purposes there is a problem reasoning technology (PRT) close to the right corner.
  3. The bottom line is a problem, because there is no direct support for estimating how a system satisfies all possible user needs, but there are some technologies for creating end-user services so that they can be mapped into code and remain visible in the system. SOA, aspects, the Zachman architecture, and metrics are some means for that purpose.

Some links:


Man and His Symbols is a famous book edited by Carl Jung. He was one of the great doctors of all time and one of the great thinkers of his century. His object was always to help men and women to know themselves, so that by self-knowledge and thoughtful self-use they could lead full, rich, and happy lives.

The book has the following parts:

  • Part 1: Approaching the Unconscious
  • Part 2: Ancient Myths and Modern Man, by Joseph L. Henderson (The Eternal Symbols)
  • Part 3: The Process of Individuation, by M.-L. von Franz (The Pattern of Psychic Growth)
  • Part 4: Symbolism in the Visual Arts, by Aniela Jaffé
  • Part 5: Symbolism in Individual Analysis, by Jolande Jacobi

The book is a great work describing symbols and symbolism that cannot be completely explained. It is psychology.

Dreams, feelings, and our attitudes towards different kinds of icons, like Coca-Cola, Nokia, the Bible, or sudden death, are strongly personal. Many features of our behavior depend on our temperament or on personal characteristics inherited at birth.

Symbolism and symbolic analysis

Symbolic analysis, presented in this blog, is the opposite of Jung's work. In it, symbols are assumed to be either formal or known. If some symbol is not known and not formal, it can be skipped if it is not relevant. If it is relevant and not known, there should be a learning process in order to make it familiar to the user. We cannot run a systematic learning process for dreams and the other typical phenomena described in Jung's book. However, the book is a pleasure to read with its great figures and photos. And it describes the real, unfamiliar life of everybody.

There are two kinds of symbols: those of psychology and those of formal notations, as in computer science.

Links:

Cognition is the research term for “the process of thought“. Usage of the term varies in different disciplines; for example in psychology and cognitive science, it usually refers to an information processing view of an individual’s psychological functions. Other interpretations of the meaning of cognition link it to the development of concepts; individual minds, groups, and organizations.

Cognitive space uses the analogy of location in two, three or higher dimensional space to describe and categorize thoughts, memories and ideas. Each individual has his/her cognitive space, resulting in a unique categorization of their ideas. The dimensions of this cognitive space depend on information, training and finally on a person’s awareness. All this depends globally on the cultural setting.

http://cybergeo.revues.org/index194.html

When understanding software we need certain types of cognitive spaces. In the book Symbolic Analysis for PC the spaces are named:

  1. The most abstract space is Concept (an unlimited definition of a thing).
  2. The next, more concrete space is Context (a situation).
  3. The next, more concrete space is Architecture slice (a high-level module with its dependencies and attributes).
  4. The most concrete space is Slice, which refers to symbols (a part of the program).

A typical piece of software (a slice) can be modeled upwards in a person's mind using these cognitive spaces. It has a certain meaning to be understood (concept), it has some ways to be used (context), and it can have some extensions towards the architecture implementation. It is the responsibility of the developer and maintainer to capture this information, but tools can be useful in shortening that time.
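To make the four spaces concrete, below is a small sketch of them as a containment hierarchy. This is my own illustration, not code from the book, and all class and field names are hypothetical.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Concept:
        """Most abstract space: an open-ended definition of a thing."""
        name: str
        definition: str

    @dataclass
    class Context:
        """A situation in which the concept is used."""
        concept: Concept
        situation: str

    @dataclass
    class ArchitectureSlice:
        """A high-level module with its dependencies and attributes."""
        context: Context
        module: str
        dependencies: List[str] = field(default_factory=list)

    @dataclass
    class Slice:
        """The most concrete space: a part of the program, referring to symbols."""
        architecture: ArchitectureSlice
        symbols: List[str] = field(default_factory=list)

    # Modeling a piece of code upwards, from its symbols to its concept:
    concept = Concept("sorting", "arranging items into order")
    context = Context(concept, "sorting customer records before reporting")
    arch = ArchitectureSlice(context, "report_generator", ["database", "formatter"])
    code_slice = Slice(arch, ["sort_records", "compare_by_name"])

The point of the sketch is only that each concrete slice can be traced upwards to a context and a concept, which is exactly the direction of understanding described above.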

An analogy between topological understanding (see figure) and software understanding is clear.

Intelligence is an umbrella term describing a property of the mind including related abilities, such as the capacities for abstract thought, reasoning, planning, problem solving, communication, and learning. Problem solving is the most promising of these areas from the point of view of symbolic analysis.

There is much research on what intelligence is and how to define it (see more).

J. P. Guilford is one of these researchers. He explored the scope of the adult intellect by providing the concept of intelligence with a strong, comprehensive theoretical backing. The Structure-of-Intellect model (SI model) was designed as a cross-classification system, with the intersections in the model providing the basis for abilities, similar to the periodic table in chemistry. The three-dimensional, cube-shaped model includes five content categories (the way in which information is presented on a test: visual, auditory, symbolic, semantic, and behavioral), six operation categories (what is done on a test: evaluation, convergent production, divergent production, memory retention, memory recording, and cognition), and six product categories (the form in which information is processed on a test: units, classes, relations, systems, transformations, and implications). The intersection of three categories provides a frame of reference for generating one or more new hypothetical factors of intelligence.

Mapping Guilford’s cube to Symbolic Analysis

An interesting idea is to map Guilford's cube to symbolic analysis and the AHO objects. In the atomistic model every atom is a symbol and has formal contents. Operations on atoms allow studying their impacts and transformations automatically (this is the AI approach).
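Below is a minimal sketch of that idea: an atom as a symbol with formal contents, plus operations for inspecting its impacts and transformations. The names and the trivial command format are hypothetical; this is not the actual AHO implementation.

    # Hypothetical sketch of an atom: a symbol with formal contents and
    # operations that inspect its impacts and transformations.
    class Atom:
        def __init__(self, name, contents):
            self.name = name          # the symbol
            self.contents = contents  # formal contents, e.g. a parsed command
            self.links = []           # links to other atoms

        def run(self):
            """Simulate the atom and return its side effects (impacts)."""
            kind, payload = self.contents
            if kind == "assign":
                variable, value = payload
                return {variable: value}
            return {}

        def transformations(self):
            """List the atoms reachable from this one."""
            return [a.name for a in self.links]

    a1 = Atom("x_init", ("assign", ("x", 0)))
    a2 = Atom("x_inc", ("assign", ("x", 1)))
    a1.links.append(a2)
    print(a1.run())               # {'x': 0}
    print(a1.transformations())   # ['x_inc']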

There is an article coming on that topic.

Some links:

A cognitive architecture is a blueprint for intelligent agents. It proposes (artificial) computational processes that act like certain cognitive systems, most often like a person, or that act intelligently under some definition. Cognitive architectures form a subset of general agent architectures. The term 'architecture' implies an approach that attempts to model not only behavior, but also structural properties of the modelled system. These need not be physical properties: they can be properties of virtual machines implemented in physical machines (e.g. brains or computers).

Common to researchers on cognitive architectures is the belief that understanding (human, animal or machine) cognitive processes means being able to implement them in a working system, though opinions differ as to what form such a system can have: some researchers assume that it will necessarily be a computational system, whereas others argue for alternative models such as dynamical systems.

Cognitive architectures can be symbolic, connectionist, or hybrid. Some cognitive architectures or models are based on a set of generic rules, as, e.g., the Information Processing Language (e.g. Soar, based on the unified theory of cognition, or similarly ACT). Many of these architectures are based on the mind-is-like-a-computer analogy. In contrast, subsymbolic processing specifies no such rules a priori and relies on emergent properties of processing units (e.g. nodes). Hybrid architectures combine both types of processing (such as CLARION).

Connectionism is a set of approaches in the fields of artificial intelligence, cognitive psychology, cognitive science, neuroscience and philosophy of mind, that models mental or behavioral phenomena as the emergent processes of interconnected networks of simple units. There are many forms of connectionism, but the most common forms use neural network models.

The central connectionist principle is that mental phenomena can be described by interconnected networks of simple and often uniform units.  Neural networks are by far the most commonly used connectionist model today, but there are some more generic approaches, too.

Is the Symbolic atomistic model (SAM) a cognitive architecture?

Hybrid Cognitive Architecture (Laitila)


The SAM principle, presented in this blog, is a hybrid cognitive architecture, which contains both the symbolic and the connectionist approach. There is a paper published about it at the Conference on Hybrid Artificial Intelligence Systems (Burgos, Spain, 2008).

Some links:

A Turing machine is a theoretical device that manipulates symbols contained on a strip of tape. Despite its simplicity, a Turing machine can be adapted to simulate the logic of any computer algorithm, and is particularly useful in explaining the functions of a CPU inside of a computer.

A Turing machine that is able to simulate any other Turing machine is called a Universal Turing machine (UTM, or simply a universal machine).

Many misunderstandings of the Turing machine

There are several wrong and misleading conceptions concerning the Turing machine (TM), as argued by Andrew Wells [1].

The correct concept includes the facts that a TM should have a finite control automaton, a register or similar to save its status, an ability to read and write, and a tape or similar as a memory. People often define the TM too narrowly, although its origin describes a person making computations.
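To make these components concrete, here is a minimal Turing machine sketch containing exactly those parts: a finite control table, a state register, and a tape that can be read and written. The rule set is an invented toy example, not anything from Wells.

    # Minimal Turing machine: a control table (finite automaton), a state
    # register, and a readable/writable tape.
    def run_tm(tape, rules, state="start", halt="halt", blank="_"):
        cells = dict(enumerate(tape))     # sparse tape: position -> symbol
        pos = 0
        while state != halt:
            symbol = cells.get(pos, blank)
            state, write, move = rules[(state, symbol)]
            cells[pos] = write
            pos += 1 if move == "R" else -1
        return "".join(cells[i] for i in sorted(cells))

    # Toy rule set: flip every bit, then halt at the first blank.
    rules = {
        ("start", "0"): ("start", "1", "R"),
        ("start", "1"): ("start", "0", "R"),
        ("start", "_"): ("halt",  "_", "R"),
    }
    print(run_tm("0110", rules))  # prints 1001_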

Cognitive approach for Turing machine

From the cognitive approach, the Turing machine is a concept that models a human making computations. It is then a simple problem-solving framework with built-in rules, which model the finite control automaton [2].

There is much information relating to cognitive architectures, the symbolic paradigm, and how our mind works, as well as criticism.

Symbolic analysis

Symbolic analysis (SAM) is a framework that has been built on automata defined by the corresponding symbols. Together, the symbols and their original semantics (command facts) build cognitive models from the source code. For more information, see KnowledgeWare.

Some links:

  • [1] A. Wells: Rethinking Cognitive Computation (Palgrave).
  • [2] H. Putnam: Mind, Language and Reality.
  • [3] J. Fodor: The Mind Doesn't Work That Way.

Truth can have a variety of meanings, from the state of being the case, being in accord with a particular fact or reality, being in accord with the body of real things, events, actuality, or fidelity to an original or to a standard. In archaic usage it could be fidelity, constancy or sincerity in action, character, and utterance.

There are differing claims on such questions as what constitutes truth; what things are truthbearers capable of being true or false; how to define and identify truth; the roles that revealed and acquired knowledge play; and whether truth is subjective, relative, objective, or absolute. This post presents some topics on how symbolic analysis covers some attributes of truth and truth theories.

Correspondence theory of truth

For a truth to correspond, it must first be proved by evidence or an individual's valid opinion, which has a similar meaning or context. This type of theory posits a relationship between thoughts or statements on the one hand, and things or objects on the other. It is a traditional model which goes back at least to some of the classical Greek philosophers such as Socrates, Plato, and Aristotle.

Coherence theory of truth

For coherence theories in general, truth requires a proper fit of elements within a whole system. Very often, though, coherence is taken to imply something more than simple logical consistency; often there is a demand that the propositions in a coherent system lend mutual inferential support to each other. Some variants of coherence theory are claimed to characterize the essential and intrinsic properties of formal systems in logic and mathematics. However, formal reasoners are content to contemplate axiomatically independent and sometimes mutually contradictory systems side by side, for example, the various alternative geometries.

Consensus theory

Consensus theory holds that truth is whatever is agreed upon, or in some versions, might come to be agreed upon, by some specified group. Such a group might include all human beings, or a subset thereof consisting of more than one person.

Pragmatic theory

The three most influential forms of the pragmatic theory of truth were introduced around the turn of the 20th century by Charles Sanders Peirce, William James, and John Dewey. Although there are wide differences in viewpoint among these and other proponents of pragmatic theory, they hold in common that truth is verified and confirmed by the results of putting one’s concepts into practice.

Peirce defines truth as follows: “Truth is that concordance of an abstract statement with the ideal limit towards which endless investigation would tend to bring scientific belief, which concordance the abstract statement may possess by virtue of the confession of its inaccuracy and one-sidedness, and this confession is an essential ingredient of truth.”

Peirce emphasizes that ideas of approximation, incompleteness, and partiality, what he describes elsewhere as fallibilism and “reference to the future”, are essential to a proper conception of truth. Although Peirce uses words like concordance and correspondence to describe one aspect of the pragmatic sign relation, he is also quite explicit in saying that definitions of truth based on mere correspondence are no more than nominal definitions, which he accords a lower status than real definitions.

Semantic theory of truth

The semantic theory of truth has as its general case for a given language:

‘P’ is true if and only if P

where ‘P’ is a reference to the sentence (the sentence’s name), and P is just the sentence itself.

Alfred Tarski developed the theory for formal languages (such as formal logic). Here he restricted it in this way: no language could contain its own truth predicate, that is, the expression is true could only apply to sentences in some other language. The latter he called an object language, the language being talked about. (It may, in turn, have a truth predicate that can be applied to sentences in still another language.) The reason for his restriction was that languages that contain their own truth predicate will contain paradoxical sentences like the Liar: This sentence is not true. See The Liar paradox. As a result Tarski held that the semantic theory could not be applied to any natural language, such as English, because they contain their own truth predicates. Donald Davidson used it as the foundation of his truth-conditional semantics and linked it to radical interpretation in a form of coherentism.

Truth in logic

Logic is concerned with the patterns in reason that can help tell us if a proposition is true or not. However, logic does not deal with truth in the absolute sense, as for instance a metaphysician does. Logicians use formal languages to express the truths which they are concerned with, and as such there is only truth under some interpretation or truth within some logical system.

A logical truth (also called an analytic truth or a necessary truth) is a statement which is true in all possible worlds or under all possible interpretations, as contrasted with a fact (also called a synthetic claim or a contingency), which is only true in this world as it has historically unfolded. A proposition such as "If p and q, then p." is considered to be a logical truth because it is true in virtue of the meaning of its symbols and words and not because of any facts of any particular world. Such propositions could not be untrue.
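As a small illustration of "truth under all possible interpretations" (my own example, not from the sources above), such a statement can be checked mechanically by enumerating every interpretation of p and q:

    from itertools import product

    # A formula is a logical truth if it holds under every interpretation.
    def is_tautology(formula):
        return all(formula(p, q) for p, q in product([True, False], repeat=2))

    print(is_tautology(lambda p, q: (not (p and q)) or p))  # True: "if p and q, then p"
    print(is_tautology(lambda p, q: p and q))                # False: merely contingent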

How does Symbolic Analysis cover the theories (presented above)?

Symbolic analysis has been created based on Prolog (predicate logic) so that each symbol of the logic model refers to the corresponding term of a program, reverse engineered from an application. Therefore, at the lowest level, each term in the atomistic symbolic model is a proposition, typical of truth in logic. Each term has been implemented as an AHO artefact, a symbolic element having an excellent correspondence to the code. The links between these elements are coherent, and a symbolic language describes the semantics of each term (semantic theory of truth).

In simulation (aho:run()), AHO elements produce results which are the side effects caused by each element. They mimic the computations of the program. Hence, it is possible for the user to compare the side effects with his/her hypotheses at a higher level of logic. Whether those assumptions differ or agree, this information has pragmatic value from the program comprehension point of view. For possible contradictions, reasons should be found; they are possible bugs/errors in the code.
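The following rough sketch mirrors that workflow in Python. The real implementation is Prolog-based (aho:run()), so all names here are hypothetical and serve only to show the comparison of simulated side effects against the user's hypotheses.

    # Hypothetical sketch: run symbolic elements, collect their side effects,
    # and compare them against the user's hypotheses about the program.
    def simulate(elements):
        effects = {}
        for element in elements:
            effects.update(element())      # each element returns its side effects
        return effects

    def contradictions(effects, hypotheses):
        """Variables whose simulated value contradicts the user's hypothesis."""
        return {v: (effects.get(v), expected)
                for v, expected in hypotheses.items()
                if effects.get(v) != expected}

    elements = [lambda: {"x": 1}, lambda: {"y": 2}]
    hypotheses = {"x": 1, "y": 3}          # the user's assumptions
    print(contradictions(simulate(elements), hypotheses))  # {'y': (2, 3)} -> possible bug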

Some links:

Computational semiotics is an interdisciplinary field that applies, conducts, and draws on research in logic, mathematics, the theory and practice of computation, formal and natural language studies, the cognitive sciences generally, and semiotics proper. A common theme of this work is the adoption of a sign-theoretic perspective on issues of artificial intelligence and knowledge representation. Many of its applications lie in the field of computer-human interaction (CHI) and the fundamental devices of recognition (work at IASE in California).

Computational semiotics is the branch that deals with the study and application of logic and computation to formal and natural languages in terms of cognition and signs.

One part of this field, known as algebraic semiotics, combines aspects of algebraic specification and social semiotics, and has been applied to user interface design and to the representation of mathematical proofs.

Computational Semiotics by Gudwin

Fig below (http://www.dca.fee.unicamp.br/~gudwin/compsemio/Image24.gif):

Singularities captured from real world.

Gudwin describes knowledge as knowledge units (see figure above).

Computational Semiotics vs Symbol-Driven Engineering

SDE expresses phenomena using symbols. There are interpretations between symbols, expressed in predicates.

The classification of knowledge units by Gudwin is below (the origin is from Charles Peirce):

Classification of knowledge units.

First, source code fits well into the branch of the tree that starts from the node rhematic. Second, when code is considered as sequences or through traces, the dicent approach is relevant. Third, when some features of the code or its assumed behavior are considered, the argumentative approach is relevant.

Symbolic analysis is then a tuple (rhematic, dicent, argumentative).

Mastering these three branches of the knowledge tree makes it possible to master source code.
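As an informal sketch (the function names are my own, invented for this post), the three branches can be read as three views of the same program:

    # Hypothetical sketch: the three branches as three views of one program.
    def rhematic_view(source):
        """Rhematic: the individual symbols (terms) of the code."""
        return source.split()

    def dicent_view(trace):
        """Dicent: the code considered as a sequence, e.g. an execution trace."""
        return list(enumerate(trace))

    def argumentative_view(trace, claim):
        """Argumentative: reasoning about the assumed behavior of the code."""
        return claim(trace)

    source = "x = 1 ; y = x + 1"
    trace = ["x=1", "y=2"]
    print(rhematic_view(source))                                # the symbols
    print(dicent_view(trace))                                   # the sequence
    print(argumentative_view(trace, lambda t: t[-1] == "y=2"))  # True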

Some links:

A concept is a cognitive unit of meaning: an abstract idea or a mental symbol, sometimes defined as a "unit of knowledge," built from other units which act as a concept's characteristics. A concept is typically associated with a corresponding representation in a language or symbology, such as a word.

The meaning of “concept” is explored in mainstream cognitive science, metaphysics, and philosophy of mind.

Concepts considering programs

Concepts that model future code or existing code are mental models whose purpose is the content of the concept. Typical program concepts include some contexts, which limit their use. The implementation of a program concept is either a model or program code, i.e. slices.

Some links:

Context is a very common concept in research. It is an essential word in KnowledgeWare for separating different slices and their results from each other.

In Wikipedia there are 16 different uses for Context. Most of them can be used in computer science, too.

It is therefore relevant to study whether the different definitions of it conflict or not. However, explaining all of them in this blog post would require too much space. Therefore, I summarize most of the definitions very briefly:

  • Context is a more generic term than a use case in software.
  • The definition of a context-sensitive or context-free language and the generic use of context match (no problems).
  • Context in computing matches the general purpose of context.

In summary, the uses of the word context are rather compatible with each other. Because of that, it is possible to use context widely in science to express all situations that influence the research approach: side effect, use case, aspect, scenario, context-sensitive feature, etc. These word pairs can have many uses in many disciplines, but that is not a problem. On the contrary, this very handy word helps other people to understand their cases when we emphasize a specific word using a generic definition of context.

Context is then a valuable word to be used in troubleshooting, in cause-effect analysis (root cause analysis), and in understanding state machines and reactive systems, or for expressing the context awareness of a mobile device, etc.

Context-free art can be expressed by, and written with, simple graphical software.

Erkki Laitila, PhD (2008) computer engineer (1977)