1
Artificial Intelligence Lecture 2: Knowledge Representation I
  • Faculty of Mathematical Sciences
  • 4th / 5th IT
  • Elmuntasir Abdallah Hag Eltom

2
Lecture Objectives Part I: Chapter 2
  • Look at some of the arguments against strong AI
    (the belief that a computer is capable of having
    mental states).
  • Look at the prevalence of Artificial Intelligence
    today and explain why it has become such a vital
    area of study.
  • Look at the extent to which the Artificial
    Intelligence community has been successful so far
    in achieving the goals that were believed to be
    possible decades ago. In particular, we will look
    at whether the computer HAL in the science
    fiction film 2001: A Space Odyssey is a
    possibility with today's technologies.

3
Lecture Objectives Part II: Chapter 3
  • Discuss representations. The reason for this is
    that in order for a computer to solve a problem
    that relates to the real world, it first needs
    some way to represent the real world internally.
    In dealing with that internal representation, the
    computer is then able to solve problems.
  • Introduce a number of representations, such as
    semantic nets, goal trees, and search trees.
  • Explain why these representations provide such a
    powerful way to solve a wide range of problems.
  • Introduce frames and the way in which inheritance
    can be used to provide a powerful
    representational system.

4
"The limits of my language mean the limits of my world." - Ludwig Wittgenstein
The Chinese Room
  • The American philosopher John Searle has argued
    strongly against the proponents of strong AI who
    believe that a computer that behaves sufficiently
    intelligently could in fact be intelligent and
    have consciousness, or mental states, in much the
    same way that a human does.
  • One example of this is that it is possible using
    data structures called scripts to produce a
    system that can be given a story (for example, a
    story about a man having dinner in a restaurant)
    and then answer questions (some of which involve
    a degree of subtlety) about the story. Proponents
    of strong AI would claim that systems that can
    extend this ability to deal with arbitrary
    stories and other problems would be intelligent.

5
"The limits of my language mean the limits of my world." - Ludwig Wittgenstein
The Chinese Room
  • Searle's Chinese Room experiment was based on
    this idea and is described as follows:
  • An English-speaking human is placed inside a
    room. This human does not speak any language
    other than English and in particular has no
    ability to read, speak, or understand Chinese.
  • Inside the room with the human are a set of
    cards, upon which are printed Chinese symbols,
    and a set of instructions that are written in
    English.
  • A story, in Chinese, is fed into the room through
    a slot, along with a set of questions about the
    story.

6
"The limits of my language mean the limits of my world." - Ludwig Wittgenstein
The Chinese Room
  • By following the instructions that he has, the
    human is able to construct answers to the
    questions from the cards with Chinese symbols and
    pass them back out through the slot to the
    questioner.
  • If the system were set up properly, the answers
    to the questions would be sufficient that the
    questioner would believe that the room (or the
    person inside the room) truly understood the
    story, the questions, and the answers it gave.

7
"The limits of my language mean the limits of my world." - Ludwig Wittgenstein
The Chinese Room
  • Searle's argument is now a simple one.
  • The man in the room does not understand Chinese.
    The pieces of card do not understand Chinese. The
    room itself does not understand Chinese, and yet
    the system as a whole is able to exhibit
    properties that lead an observer to believe that
    the system (or some part of it) does understand
    Chinese.
  • In other words, running a computer program that
    behaves in an intelligent way does not
    necessarily produce understanding, consciousness,
    or real intelligence.

8
"The limits of my language mean the limits of my world." - Ludwig Wittgenstein
The Chinese Room
  • This argument clearly contrasts with Turing's
    view that a computer system that could fool a
    human into thinking that it, too, was human would
    actually be intelligent.
  • One response to Searle's Chinese Room argument,
    the Systems Reply, claims that although the human
    in the room does not understand Chinese, the room
    itself does. In other words, the combination of
    the room, the human, the cards with Chinese
    characters, and the instructions forms a system
    that in some sense is capable of understanding
    Chinese stories. There have been a great number
    of other objections to Searle's argument, and the
    debate continues.
  • Find other arguments like the Chinese Room.

9
Human Brain as a Computer
  • The Halting Problem and Gödel's incompleteness
    theorem tell us that there are some functions
    that a computer cannot be programmed to compute,
    and as a result, it would seem to be impossible
    to program a computer to perform all the
    computations needed for real consciousness. This
    is a difficult argument, and one potential
    response to it is to claim that the human brain
    is in fact a computer, and that although it must
    also be limited by the Halting Problem, it is
    still capable of intelligence.

10
Human Brain as a Computer
  • The field of neural networks is based on the
    claim that the human brain is a computer.
  • By combining the processing power of individual
    neurons, we are able to produce artificial neural
    networks that are capable of solving extremely
    complex problems, such as recognizing faces.
  • Proponents of strong AI might argue that such
    successes are steps along the way to producing an
    electronic human being.
  • Objectors would point out that this is simply a
    way to solve one small set of problems: not only
    does it not solve the whole range of problems
    that humans are capable of, but it also does not
    in any way exhibit anything approaching
    consciousness.
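  • A minimal sketch of the neuron-like units that such
    networks combine (the weights, bias, and inputs below
    are illustrative assumptions, not values from the
    lecture):

    # A single artificial neuron: a weighted sum of inputs passed
    # through a step activation. Networks of many such units underlie
    # tasks like the face-recognition example mentioned above.
    def neuron(inputs, weights, bias):
        activation = sum(x * w for x, w in zip(inputs, weights)) + bias
        return 1 if activation > 0 else 0

    # Illustrative call with made-up numbers
    print(neuron([0.5, 0.2], weights=[0.8, -0.4], bias=0.1))  # -> 1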

11
HAL: Fantasy or Reality?
  • One of the main characters in the film 2001: A
    Space Odyssey is HAL, a Heuristically programmed
    ALgorithmic computer. In the film, HAL behaves,
    speaks, and interacts with humans in much the
    same way that a human would. In fact, this
    humanity is taken to extremes by the fact that
    HAL eventually goes mad.
  • In the film, HAL played chess, worked out what
    people were saying by reading their lips, and
    engaged in conversation with other humans.
  • How many of these tasks are computers capable of
    today? Games, Natural Language Processing,
    Machine Vision
  • Finally, the likelihood of a computer becoming
    insane is a rather remote one, although it is of
    course possible that a malfunction of some kind
    could cause a computer to exhibit properties not
    unlike insanity!

12
Fantasy or Reality?
  • Artificial Intelligence has been widely
    represented in other films. The Steven Spielberg
    film AI: Artificial Intelligence is a good
    example. In this film, a couple buy a robotic boy
    to replace their lost son. The audience's
    sympathies are with the boy, who feels emotions
    and is clearly as intelligent as (if not more
    intelligent than) a human being. This is strong
    AI, and while it may be the ultimate goal of some
    Artificial Intelligence research, even the most
    optimistic proponents of strong AI would agree
    that it is not likely to be achieved in the next
    century.

13
AI in the 21st Century
  • Artificial Intelligence is all around us.
  • Fuzzy logic, for example, is widely used in
    washing machines, cars, and elevator control
    mechanisms. (Note that no one would claim that as
    a result those machines were intelligent, or
    anything like it! They are simply using
    techniques that enable them to behave in a more
    intelligent way than a simpler control mechanism
    would allow.)
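  • As a rough sketch of the kind of fuzzy decision such a
    controller makes, the example below maps a sensor reading
    to a degree of "dirtiness" between 0 and 1 (the variable,
    thresholds, and wash-time formula are illustrative
    assumptions):

    # Fuzzy membership: instead of a hard dirty / not-dirty decision,
    # the controller works with a degree of membership and scales its
    # output accordingly.
    def membership_dirty(turbidity, low=10.0, high=80.0):
        if turbidity <= low:
            return 0.0
        if turbidity >= high:
            return 1.0
        return (turbidity - low) / (high - low)

    wash_minutes = 30 + 60 * membership_dirty(45.0)  # 45.0 is a made-up reading
    print(round(wash_minutes))  # -> 60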

14
AI in the 21st Century
  • Intelligent agents are widely used. For example,
    there are agents that help us to solve problems
    while using our computers and agents that
    traverse the Internet, helping us to find
    documents that might be of interest. Robots, the
    physical embodiment of agents, are also becoming
    more widely used. Robots are used to explore the
    oceans and other worlds, being able to travel in
    environments inhospitable to humans. It is still
    not the case, as was once predicted, that robots
    are widely used by households, for example, to
    carry shopping items or to play with children,
    although the AIBO robotic dog produced by Sony
    and other similar toys are a step in this
    direction.

15
Part I: Chapter 2 Summary
  • The Chinese Room argument is a thought experiment
    devised by John Searle and designed to refute
    strong AI.
  • The computer HAL, as described in the film
    2001: A Space Odyssey, is not strictly possible
    using today's technology, but many of its
    capabilities are not entirely unrealistic today.
  • The computer program Deep Blue beat world chess
    champion Garry Kasparov in a six-game chess match
    in 1997. This feat has not been repeated, and it
    does not yet represent the end of human supremacy
    at this game.
  • Artificial Intelligence is all around us and is
    widely used in industry, computer games, cars,
    and other devices, as well as being a valuable
    tool used in many computer software programs.

16
Part II: Knowledge Representation
  • If, for a given problem, we have a means of
    checking a proposed solution, then we can solve
    the problem by testing all possible answers. But
    this always takes much too long to be of
    practical interest. Any device that can reduce
    this search may be of value.
  • -Marvin Minsky, Steps Toward Artificial
    Intelligence
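  • A minimal sketch of the exhaustive generate-and-test
    strategy Minsky describes (the candidate space and the
    checking function are illustrative assumptions):

    # If we can check a proposed solution, we can in principle solve
    # the problem by testing every candidate. Minsky's point is that
    # this is usually far too slow, so anything that prunes the search
    # is valuable.
    def solve_by_enumeration(candidates, is_solution):
        for candidate in candidates:
            if is_solution(candidate):
                return candidate
        return None

    # Toy example: find a number whose square is 144.
    print(solve_by_enumeration(range(1_000_000), lambda n: n * n == 144))  # -> 12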

17
Part II: Knowledge Representation
  • The way in which the computer represents a
    problem, the variables it uses, and the operators
    it applies to those variables can make the
    difference between an efficient algorithm and an
    algorithm that doesn't work at all. This is true
    of all Artificial Intelligence problems, and as
    we will see in the following, it is vital for
    search.
  • Example: the contact lens problem.

18
Contact lens problem
  • Imagine that you are looking for a contact lens
    that you dropped on a football field. You will
    probably use some knowledge about where you were
    on the field to help you look for it. If you
    spent time in only half of the field, you do not
    need to waste time looking in the other half.

19
Contact lens problem
  • Now let us suppose that you are having a computer
    search the field for the contact lens, and let us
    further suppose that the computer has access to
    an omniscient oracle that will answer questions
    about the field and can accurately identify
    whether the contact lens is in a particular spot.
  • Now we must choose a representation for the
    computer to use so that it can formulate the
    correct questions to ask.

20
Contact lens problem: Representation 1
  • One representation might be to have the computer
    divide the field into four equal squares and ask
    the oracle for each square, "Is the lens in this
    square?"
  • This will identify the location on the field of
    the lens but will not really be very helpful to
    you because you will still have a large area to
    search once you find which quarter of the field
    the lens is in.

21
Contact lens problem: Representation 2
  • Another representation might be for the computer
    to have a grid containing a representation of
    every atom contained in the field. For each atom,
    the computer could ask its oracle, "Is the lens
    in contact with this atom?"
  • This would give a very accurate answer indeed,
    but would be an extremely inefficient way of
    finding the lens. Even an extremely powerful
    computer would take a very long time indeed to
    locate the lens.

22
Contact lens problem: Representation 3
  • Perhaps a better representation would be to
    divide the field up into a grid where each square
    is one foot by one foot and to eliminate all the
    squares from the grid that you know are nowhere
    near where you were when you lost the lens. This
    representation would be much more helpful.
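  • A minimal sketch of this representation: a one-foot
    grid, an oracle stub, and pruning of the squares we
    know we never visited (the field size, lens location,
    and "visited" test are illustrative assumptions):

    # Representation 3: the field as a grid of 1 ft x 1 ft squares.
    # Squares outside the area we know we walked through are eliminated
    # before the oracle is ever asked about them.
    FIELD_W, FIELD_H = 360, 160                 # illustrative field size in feet

    def visited(x, y):
        return x < FIELD_W // 2                 # assume we only crossed the left half

    def oracle_has_lens(x, y):
        return (x, y) == (57, 23)               # stand-in for the omniscient oracle

    candidates = [(x, y) for x in range(FIELD_W) for y in range(FIELD_H)
                  if visited(x, y)]             # pruning halves the search space
    lens = next(sq for sq in candidates if oracle_has_lens(*sq))
    print(lens)  # -> (57, 23)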

23
  • In fact, the representations we have described
    for the contact lens problem are all really the
    same representation, but at different levels of
    granularity.
  • The more difficult problem is to determine the
    data structure that will be used to represent the
    problem we are exploring.
  • A wide range of representations is used in
    Artificial Intelligence.

24
  • When applying Artificial Intelligence to search
    problems, a useful, efficient, and meaningful
    representation is essential. In other words, the
    representation should be such that the computer
    does not waste too much time on pointless
    computations; it should be such that the
    representation really does relate to the problem
    that is being solved; and it should provide a
    means by which the computer can actually solve
    the problem.

25
Semantic Nets
  • A semantic net is a graph consisting of nodes
    that are connected by edges.
  • The nodes represent objects.
  • The links between nodes represent relationships
    between those objects.
  • The links are usually labeled to indicate the
    nature of the relationship.
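  • A minimal sketch of a semantic net stored as a set of
    labelled, directed edges (the particular objects and
    relations are illustrative; Fido and Fang are the
    instances discussed on the following slides):

    # A semantic net as (from_node, relation, to_node) triples.
    edges = [
        ("Fido", "is_a", "Dog"),
        ("Fang", "is_a", "Cat"),
        ("Fido", "chases", "Fang"),   # directed: says nothing about Fang chasing Fido
    ]

    def related(subject, relation):
        """Return every node reachable from subject via the given relation."""
        return [dst for src, rel, dst in edges if src == subject and rel == relation]

    print(related("Fido", "chases"))  # -> ['Fang']
    print(related("Fang", "chases"))  # -> []  (not represented in the net)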

26
Semantic Nets
(Diagram: a semantic net showing instances, including Fido and Fang)
27
Semantic Nets
  • The links are arrows, meaning that they have a
    direction. In this way, the diagram tells us that
    Fido chases Fang, and not the other way around.
    It may be that Fang does chase Fido as well, but
    this information is not presented in this
    diagram.
  • Semantic nets do have limitations, such as the
    inability to represent negations ("Fido is not a
    cat"). This kind of fact can be expressed easily
    in first-order predicate logic and can also be
    managed by rule-based systems.
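  • In first-order predicate logic, for example, the
    negation above is a single negated literal (the
    predicate name is an illustrative choice):

    ¬Cat(Fido)

    A basic semantic net, by contrast, has no link type
    that asserts the absence of a relationship.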