Saturday, January 11, 2014

Ontological Engineering

Knowledge Representation - Ontological Engineering
Some of the abstract knowledge representation mechanisms are the following:
Simple relational knowledge
The simplest way of storing facts is to use a relational method where each fact about a set
of objects is set out systematically in columns. This representation gives little opportunity
for inference, but it can be used as the knowledge basis for inference engines.
We can ask things like:
• Who is dead?
• Who plays jazz, trumpet, etc.?
This sort of representation is popular in database systems.
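For instance, a minimal sketch of such a relational table in Python (the names and facts are purely illustrative):

```python
# Simple relational knowledge: each fact about a set of objects is
# laid out systematically in columns (here, one dict per row).
FACTS = [
    {"name": "Alice", "instrument": "trumpet", "genre": "jazz", "alive": False},
    {"name": "Bob", "instrument": "piano", "genre": "jazz", "alive": True},
]

def who(predicate):
    """Answer a query by scanning the table -- no inference, just lookup."""
    return [row["name"] for row in FACTS if predicate(row)]

print(who(lambda r: not r["alive"]))                # "Who is dead?"
print(who(lambda r: r["instrument"] == "trumpet"))  # "Who plays trumpet?"
```

Note that every answer must already be stored explicitly in the table; nothing new is inferred.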
Inheritable knowledge
Relational knowledge is made up of objects consisting of
• attributes
• corresponding associated values.
We extend the base further by allowing inference mechanisms:
• Property inheritance
o elements inherit values from being members of a class.
o data must be organised into a hierarchy of classes.
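A minimal sketch of property inheritance, assuming an illustrative Dog/Mammal/Animal hierarchy (the class names and attributes are made up for the example):

```python
# Property inheritance: elements inherit attribute values from being
# members of a class, so the data is organised into a class hierarchy.
HIERARCHY = {"Dog": "Mammal", "Mammal": "Animal"}            # class -> superclass
PROPERTIES = {
    "Animal": {"alive": True},
    "Mammal": {"legs": 4},
    "Dog": {"barks": True},
}
INSTANCES = {"fido": "Dog"}                                  # instance -> class

def lookup(instance, attr):
    """Walk up the class hierarchy until some class supplies the attribute."""
    cls = INSTANCES[instance]
    while cls is not None:
        if attr in PROPERTIES.get(cls, {}):
            return PROPERTIES[cls][attr]
        cls = HIERARCHY.get(cls)
    raise KeyError(attr)

print(lookup("fido", "legs"))   # inherited from Mammal
```

The inference here is exactly the hierarchy walk: `fido` has no `legs` value of its own, but inherits one by being a Dog, which is a Mammal.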


Inferential Efficiency
-- the ability to direct the inferential mechanisms in the most productive
directions by storing appropriate guides;
Acquisitional Efficiency
-- the ability to acquire new knowledge using automatic methods wherever
possible rather than relying on human intervention.

Ontological Engineering
A simple planning agent is very similar to a problem-solving agent: it constructs plans that achieve its goals, and then executes them. The limitations of the problem-solving approach motivate the design of planning systems.

To solve a planning problem using a state-space search approach we would let the:
• initial state = initial situation
• goal-test predicate = goal state description
• successor function computed from the set of operators
• once a goal is found, the solution plan is the sequence of operators on the
path from the start node to the goal node

In search, operators are used simply to generate successor states; we cannot look
"inside" an operator to see how it is defined. The goal-test predicate is likewise
used as a "black box" to test whether a state is a goal or not. The search cannot
use properties of how a goal is defined in order to reason about finding a path to
that goal. Hence this approach is all algorithm, and weak on representation.

Planning is considered different from problem solving because of the differences in
the way the two represent states, goals and actions, and in the way they construct
action sequences.

Remember the search-based problem solver had four basic elements:
• Representation of actions: actions are represented by programs that generate
successor state descriptions.
• Representation of state: every state description is complete. This is because a
complete description of the initial state is given, and actions are represented by
programs that create complete state descriptions.
• Representation of goals: a problem-solving agent has only information about its
goal, in terms of a goal test and a heuristic function.
• Representation of plans: in problem solving, the solution is a sequence of actions.
For a simple problem such as "Get a quart of milk and a bunch of bananas and a
variable-speed cordless drill", a problem-solving formulation needs to specify:
Initial State: the agent is at home without any of the objects that it wants.
Operator Set: everything the agent can do.
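The formulation above can be sketched as a plain breadth-first state-space search over this shopping domain. The operator names and state flags below are assumptions made for the example, not part of any standard formulation:

```python
from collections import deque

# State-space formulation: the initial state is the empty set of flags
# ("at home, owns nothing"), the goal test checks the shopping list, and
# the successor function is computed from the set of operators.
OPERATORS = {
    "go_to_shop":  lambda s: s | {"at_shop"} if "at_shop" not in s else None,
    "buy_milk":    lambda s: s | {"milk"} if "at_shop" in s and "milk" not in s else None,
    "buy_bananas": lambda s: s | {"bananas"} if "at_shop" in s and "bananas" not in s else None,
    "buy_drill":   lambda s: s | {"drill"} if "at_shop" in s and "drill" not in s else None,
}

def plan(initial, goal_test):
    """Breadth-first search: once a goal is found, the solution plan is
    the sequence of operators on the path from start node to goal node."""
    frontier = deque([(initial, [])])
    seen = {initial}
    while frontier:
        state, path = frontier.popleft()
        if goal_test(state):
            return path
        for name, op in OPERATORS.items():
            nxt = op(state)                 # operator used as a black box
            if nxt is not None and nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [name]))
    return None

print(plan(frozenset(), lambda s: {"milk", "bananas", "drill"} <= s))
```

Note how the search never looks inside an operator or the goal test; it only generates successors and checks states, which is exactly the representational weakness discussed above.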

Action
Represent knowledge as formal logic:
All dogs have tails: ∀x dog(x) → hasatail(x)
Advantages:
• A set of strict rules.
o Can be used to derive more facts.
o Truths of new statements can be verified.
o Guaranteed correctness.
• Many inference procedures are available to implement the standard rules of logic.
• Popular in AI systems, e.g. automated theorem proving.
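A minimal sketch of deriving new facts from such a rule by forward chaining. The predicate and constant names are illustrative, and only single-premise rules over unary predicates are handled:

```python
# Knowledge as formal logic: the rule "all dogs have tails",
# dog(x) -> hasatail(x), is applied by forward chaining to derive new
# facts whose correctness is then guaranteed by the rule used.
FACTS = {("dog", "fido")}
RULES = [("dog", "hasatail")]   # premise predicate -> conclusion predicate

def forward_chain(facts, rules):
    """Repeatedly apply every rule to every matching fact until no new
    fact can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            for pred, arg in list(derived):
                if pred == premise and (conclusion, arg) not in derived:
                    derived.add((conclusion, arg))
                    changed = True
    return derived

print(forward_chain(FACTS, RULES))
```

Unlike the relational table earlier, `hasatail(fido)` was never stored; it is a new, guaranteed-correct fact derived by a strict rule.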

Categories and Objects

Basic idea:
• Knowledge encoded in some procedures
o small programs that know how to do specific things, how to proceed.
o e.g. a parser in a natural language understanding system has the
knowledge that a noun phrase may contain articles, adjectives and
nouns; this is represented by calls to routines that know how to
process articles, adjectives and nouns.
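A toy version of that parser idea, with tiny hand-picked word lists (purely illustrative):

```python
# Procedural representation: the knowledge that a noun phrase is
# [article] adjective* noun lives in the routines themselves, not in
# any declarative rule base.
ARTICLES = {"the", "a"}
ADJECTIVES = {"big", "red"}
NOUNS = {"dog", "ball"}

def parse_np(words):
    """Accept an optional article, any adjectives, then exactly one noun."""
    i = 0
    if i < len(words) and words[i] in ARTICLES:        # routine for articles
        i += 1
    while i < len(words) and words[i] in ADJECTIVES:   # routine for adjectives
        i += 1
    # routine for nouns: the remaining word must be a single noun
    return i < len(words) and words[i] in NOUNS and i == len(words) - 1

print(parse_np(["the", "big", "red", "dog"]))
```

Changing what counts as a noun phrase means editing the procedure, which illustrates the modularity cost discussed under the disadvantages below.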
Advantages:
• Heuristic or domain-specific knowledge can be represented.
• Extended logical inferences, such as default reasoning, are facilitated.
• Side effects of actions may be modelled, and some rules may become false
over time; keeping track of this in large systems may be tricky.
Disadvantages:
• Completeness -- not all cases may be represented.
• Consistency -- not all deductions may be correct.
e.g. if we know that Fred is a bird we might deduce that Fred can fly;
later we might discover that Fred is an emu.
• Modularity is sacrificed. Changes in knowledge base might have far-reaching
effects.
• Cumbersome control information.
A knowledge representation system should possess the following properties.
Representational Adequacy
-- the ability to represent the required knowledge;
Inferential Adequacy
-- the ability to manipulate the knowledge represented to produce new knowledge
corresponding to that inferred from the original;

Simulation
The interpreter controls the application of the rules, given the working memory, thus controlling the system's activity. It is based on a cycle of activity sometimes known as a recognise-act cycle.
The system first checks to find all the rules whose conditions hold, given the current state of working memory. It then selects one and performs the actions in the action part of the rule. (The selection of a rule to fire is based on fixed strategies, known as conflict resolution strategies.)
The actions will result in a new working memory, and the cycle begins again. This cycle will be repeated until either no rules fire, or some specified goal state is satisfied.
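The recognise-act cycle can be sketched as follows. The rules are illustrative, and a fixed "first match" conflict-resolution strategy is assumed:

```python
# A production rule is (conditions, additions, removals) over working
# memory, which is just a set of symbols.
RULES = [
    ({"kettle_full", "kettle_off"}, {"kettle_on"}, {"kettle_off"}),
    ({"kettle_on"}, {"water_boiled"}, {"kettle_on"}),
]

def run(memory, goal):
    """Recognise-act cycle: repeat until no rule fires or the goal holds."""
    memory = set(memory)
    while goal not in memory:
        # recognise: find all rules whose conditions hold and whose
        # additions are not already in working memory
        matches = [r for r in RULES if r[0] <= memory and not r[1] <= memory]
        if not matches:
            break
        conditions, add, remove = matches[0]   # conflict resolution: first match
        memory = (memory - remove) | add       # act: produce new working memory
    return memory

print(run({"kettle_full", "kettle_off"}, "water_boiled"))
```

Each pass through the loop is one recognise-act cycle: the interpreter, not the rules themselves, controls the system's activity.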

Mental Objects and Mental Events


