CITS4404 Artificial Intelligence and Adaptive Systems
Transcript and Presenter's Notes
1
Introduction
CITS4404 Artificial Intelligence and Adaptive
Systems
2
What is Artificial Intelligence and Adaptive
Systems?
  • Pseudonyms
  • Computational Intelligence
  • Natural Computation
  • Nature-Inspired Computing
  • Soft Computing
  • Adaptive Systems
  • Artificial Intelligence(?)

3
A potted history of computing
4
First there was the switch
5
and something called a computer
http://www.lib.jmu.edu/special/jmuphotopages/stbu.aspx
6
Then there was the thermionic valve (an
automated or controlled switch)
7
and all sorts of things became possible!
Colossus Mark II
8
Some even dreamed of a home computer!
Source: http://cleantechnica.com/category/recycling/
9
Then came the transistor (another switch)
http://ees.wikispaces.com/Historiadeltransistor
10
which got smaller
11
and smaller, and smaller...
12
...but it was still just a lot of switches!
13
The switches could do a lot of cool things...
  • They loved logic!
  • very small logic gates (and lots of them!) on a
    tiny chip
  • From logic you could make adders, multipliers,
    decoders, storage devices, ... (see the adder
    sketch after this list)
  • This meant you could do lots of arithmetic really
    fast
  • so we called them computers
  • The switches could also be used to control
    things, like what sums to do
  • then you could decide what to do next depending
    on the answer
  • And the switches could repeat it as many times as
    you wanted
  • the switches could keep working while you played
    golf!
  • But it was very laborious setting up all that
    arithmetic by hand
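
To make the adders-from-logic-gates point concrete, here is a minimal sketch (an illustrative example, not taken from the slides) of a ripple-carry adder built purely from AND, OR, and XOR operations:

```python
# Illustrative sketch: an adder built only from logic-gate operations.
def half_adder(a, b):
    """Add two bits: returns (sum, carry)."""
    return a ^ b, a & b                     # XOR gives the sum bit, AND gives the carry

def full_adder(a, b, carry_in):
    """Add two bits plus an incoming carry: returns (sum, carry_out)."""
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, carry_in)
    return s2, c1 | c2                      # OR combines the two possible carries

def ripple_add(x_bits, y_bits):
    """Add two equal-length bit lists, least significant bit first."""
    carry, out = 0, []
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out + [carry]

# 3 + 5 = 8: [1, 1, 0] + [1, 0, 1] -> [0, 0, 0, 1] (LSB first)
print(ripple_add([1, 1, 0], [1, 0, 1]))
```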

14
So along came programming
  • In theory you could control the switches, but
    telling them what to do in binary was a lot of
    hard work, and very error-prone
  • So we came up with mnemonics which were clearer
    to humans, and the switches decoded them into
    control signals
  • we called this assembly language
  • ADD A, STO B, JMP C, ... (a toy assembler sketch follows below)

blog.wired.com
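
As a rough illustration of what an assembler does with mnemonics like those above, here is a minimal Python sketch; the opcode and register encodings are invented for the example and do not belong to any real machine:

```python
# Toy assembler: translate mnemonics into (invented) binary instruction words.
OPCODES = {"ADD": 0b0001, "STO": 0b0010, "JMP": 0b0011}    # made-up encoding
REGISTERS = {"A": 0b00, "B": 0b01, "C": 0b10}

def assemble(line):
    """Turn e.g. 'ADD A' into a 6-bit word: 4 opcode bits then 2 operand bits."""
    mnemonic, operand = line.split()
    return (OPCODES[mnemonic] << 2) | REGISTERS[operand]

for line in ["ADD A", "STO B", "JMP C"]:
    print(f"{line:6s} -> {assemble(line):06b}")
```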
15
  • We found we used lots of patterns of these
    mnemonics over and over again, and specifying
    every little step was still time-consuming and
    difficult
  • Why not write human-like statements and get the
    switches to compile them into mnemonics (and
    hence into binary) for you?
  • This abstraction away from switch language,
    along with bigger and better storage devices,
    allowed us to program the switches for a whole
    heap of stuff
  • calculations, graphics, data storage and
    retrieval, communications networking, music,
    video, ...

16
But ultimately they were still just binary
switches...
  • ...and we don't live in a binary world!
  • Not everything is true or false
  • Not everything is black or white
  • how many tall people in this room? (a fuzzy membership sketch follows this list)
  • who thinks the Dockers are a great football team?
  • raise your hand when the following bar becomes
    red...
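
A minimal sketch of the "tall" example: instead of a binary yes/no threshold, each height gets a degree of membership between 0 and 1. The 160 cm and 190 cm anchor points are assumptions chosen for illustration, not values from the slides.

```python
# Fuzzy membership: how "tall" is a given height, on a scale from 0.0 to 1.0?
def tall(height_cm, low=160.0, high=190.0):
    """Degree to which a person of this height counts as tall."""
    if height_cm <= low:
        return 0.0
    if height_cm >= high:
        return 1.0
    return (height_cm - low) / (high - low)    # linear ramp between the anchors

for h in (155, 170, 180, 195):
    print(f"{h} cm -> tall to degree {tall(h):.2f}")
```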

17
Where switches run into trouble
  • Traditional switch programs run into difficulty
    with
  • uncertain information
  • missing information
  • fuzzy concepts or categories
  • noisy/erroneous information
  • the analogue world
  • visual-spatial, temporal change, inexact,
    adaptive, abstract

18
Yet humans and other animals do amazingly well
  • Huge complexity
  • extraction of important features from masses of
    information
  • Abstraction from detail
  • dealing with uncertainty, inexactness, noise,
    incompleteness

19
So how do we get computers to do these things?
20
Programming from an alternative perspective
  • Nature as proof of concept
  • how does nature do it?
  • what can we learn? copy? mimic?

21
Natural Computation (as defined on
csse.uwa.edu.au)
Nature is a remarkable problem solver. From
multi-resistant super-bugs to ever more inventive
humans, the mechanisms of construction,
adaptation, learning, and co-operative behaviour
found in nature have solved a vast array of
problems in very challenging environments.
Natural Computation is the field of computing
that draws from the successes of natural systems
to develop new ways of solving computational
problems in the real world. The sources of
inspiration for natural computation systems come
from a number of closely related areas. Four of
the main areas are outlined below.
22
Evolution: designing the basics
Genes and chromosomes contain the recipe for
nature's designs. Evolution - through the
immense power of natural selection acting over
geological time - provides a mechanism for
reusing good designs and improving
performance. Evolutionary techniques - such as
genetic algorithms, evolutionary algorithms,
evolutionary computation, and differential
evolution - have been used widely to solve a huge
range of problems in design and optimisation.
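
As an illustration of the evolutionary techniques named above, here is a minimal genetic algorithm sketch in Python. The problem (evolving a bit string towards all ones) and every parameter value are assumptions chosen for brevity; this is not one of the course projects.

```python
# Minimal genetic algorithm: selection, crossover and mutation on bit strings.
import random

TARGET_LEN, POP_SIZE, GENERATIONS, MUTATION_RATE = 20, 30, 100, 0.05

def fitness(bits):
    return sum(bits)                            # more ones = fitter

def crossover(mum, dad):
    cut = random.randrange(1, TARGET_LEN)       # single-point crossover
    return mum[:cut] + dad[cut:]

def mutate(bits):
    return [b ^ (random.random() < MUTATION_RATE) for b in bits]   # occasional bit flips

population = [[random.randint(0, 1) for _ in range(TARGET_LEN)]
              for _ in range(POP_SIZE)]

for gen in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)  # fittest first
    if fitness(population[0]) == TARGET_LEN:
        break                                   # a perfect individual has evolved
    parents = population[:POP_SIZE // 2]        # keep the fitter half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print(f"generation {gen}: best fitness {fitness(best)} / {TARGET_LEN}")
```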
23
Projects we have conducted locally include:
  • Designing (networks of) comminution equipment for the mining industry.
  • Designing torpedo tracks for the weapons industry.
  • Evolving tactics for artificial soccer players.
  • Solving sports scheduling problems.
  • Solving 2D and 3D cutting and packing problems.
  • Debugging logical errors in Java programs.
  • Evolving neural networks (neuro-evolution).
  • and many others
24
Development: connecting the components
While genes encode the building blocks for an
organism, the way the organism develops depends
upon both its internal and external
environments. Your brain, for example,
continues to develop its hardware until at
least your 20s, and there is evidence to suggest
that it retains its plasticity for much
longer. The way that an organism develops using
its genetic recipe and the factors that control
it are still not well understood, and of our four
examples this has had the least transfer to
practical systems.
25
Projects we have conducted locally include:
  • Developing topographic maps between the retina and optic tectum.
26
Learning: training the software
While an organism is born with and develops
many raw capabilities, it is through interaction
with the environment that it learns to use these
capabilities. You have all seen this in your own
experience, from a small child learning not to
touch a hot stove, to the many hours of training
that lead to the expertise of a chess grandmaster
or a surgeon. In the early days of computing it
was often stated that computers could never
exhibit intelligence because they could only
carry out tasks that their programmers had
anticipated and pre-programmed solutions for.
The field of machine learning has shown that
programs are able to develop competencies far
greater than those of their programmers, and this
is one of the most exciting areas for the future.
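
To make "learning through interaction" concrete, here is a minimal sketch of tabular Q-learning on an assumed toy problem: a five-cell corridor with a reward in the rightmost cell. The environment and all parameter values are invented for the example.

```python
# Minimal Q-learning: learn, purely from trial and error, which way to move.
import random

N_STATES = 5                        # cells 0..4; the reward sits in cell 4
ACTIONS = (-1, +1)                  # move left or move right
ALPHA, GAMMA, EPISODES = 0.5, 0.9, 200
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Apply an action, clamping at the walls; reward 1 only for reaching the goal."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

for _ in range(EPISODES):
    state = 0
    while state != N_STATES - 1:
        action = random.choice(ACTIONS)          # explore with a purely random policy
        nxt, reward = step(state, action)
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        # Off-policy update: learn the value of acting greedily from here on.
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt

# The learned greedy policy: +1 (move right) in every non-goal cell.
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)})
```

Nobody tells the program that the answer is "always move right"; that competence emerges from the reward signal alone.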
27
Projects we have conducted locally include:
  • Learning to play games (Poker, Go, Pac-Man, many others).
  • Fraud detection.
  • Learning to identify intention in natural language dialogue.
  • Learning to tag parts of speech using case-based reasoning.
  • Applying computational learning theory (CLT) to planner induction.
  • Reinforcement learning in high-dimensional state spaces.
  • Learning to compose music.
  • Reinforcement learning in small mobile robots.
28
Co-operation: the whole exceeds the sum of its parts
The success of many species relies not just on
each individual's capabilities, but rather on the
co-operative behaviour or complex social
interactions of groups. This allows problems to
be solved that could not be solved by any single
individual. Well-known examples include ants,
bees, many predators, and, of course,
humans. The ideas behind social co-operation
have led to algorithms such as particle swarm
optimisation, ant colony optimisation, and
learning classifier systems.
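
As a sketch of the co-operative idea behind particle swarm optimisation, here is a minimal PSO in Python. The objective (minimising x^2 + y^2) and the coefficient values are assumptions for illustration only.

```python
# Minimal particle swarm optimisation: particles share their best-known positions.
import random

def f(pos):
    return sum(v * v for v in pos)              # objective: distance from the origin

DIM, N_PARTICLES, ITERATIONS = 2, 20, 100
W, C1, C2 = 0.7, 1.5, 1.5                       # inertia, cognitive and social weights

positions = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(N_PARTICLES)]
velocities = [[0.0] * DIM for _ in range(N_PARTICLES)]
personal_best = [p[:] for p in positions]       # each particle's own best position
global_best = min(personal_best, key=f)         # the swarm's shared best position

for _ in range(ITERATIONS):
    for i in range(N_PARTICLES):
        for d in range(DIM):
            r1, r2 = random.random(), random.random()
            velocities[i][d] = (W * velocities[i][d]
                                + C1 * r1 * (personal_best[i][d] - positions[i][d])
                                + C2 * r2 * (global_best[d] - positions[i][d]))
            positions[i][d] += velocities[i][d]
        if f(positions[i]) < f(personal_best[i]):
            personal_best[i] = positions[i][:]
            if f(personal_best[i]) < f(global_best):
                global_best = personal_best[i][:]

print("best position found:", global_best, "value:", f(global_best))
```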
29
Projects we have conducted locally include:
  • Using PSO and LCSs to derive complex game strategies.
  • Using PSO to optimise weights in recurrent neural networks.
  • Investigating the convergence properties of PSO algorithms.
  • Using LCSs to evolve rule sets for classification problems.
30
So what is Computational Intelligence?
  • Is CI defined entirely by analogy with the
    natural world, or can we be more precise?
  • The key properties of Computational Intelligence
    are
  • Identifying simple mechanisms to produce good
    solutions, rather than complex mechanisms to
    produce an optimal solution.
  • Exploiting heuristics and simple rule sets with
    complex emergent behavior.
  • Adaptation to the environment: CI processes will
    incrementally improve their performance with
    exposure to a given environment.
  • Using approximate (fuzzy) measures in evaluating
    the outputs of processes.
  • The key technologies (EAs, NNs, ACO, PSO) all
    exhibit these qualities, and fuzzy logic is used
    to design controllers that are tolerant of the
    approximate nature of these methods.

31
What CI is not: Artificial Intelligence
  • Classical AI is the endeavor of making a machine
    appear intelligent. For example, the Turing Test
    is assessed by the perception of an external
    agent. The internal mechanism that mimics
    intelligence is not as significant as the
    perception of intelligence.
  • Consider a famous success of AI: Deep Blue
    beating Kasparov in 1997. The system that beat
    Kasparov was highly optimized, with an extensive
    knowledge base. There is nothing emergent or
    fuzzy about Deep Blue.
  • CI is a specific subset of AI, but while AI
    focuses on the outcome/appearance, CI focuses
    on the mechanism.

32
What CI is not: Cognitive Science
  • Cognitive science is the study of how
    intelligence is actually manifested, rather than
    simply the perception of intelligence. It
    combines neuroscience, psychology, linguistics,
    philosophy, and anthropology, as well as AI and
    CI, to suggest how the human mind actually
    works.
  • While cognitive science also sits at the
    intersection of biological models and AI, its
    focus is on how the mind works, rather than how
    we might exploit these models to solve
    computational problems.
  • CI often abstracts the complexities away from
    cognitive science to find the simple underlying
    mechanisms that might be exploited.

33
What CI is not: Machine Learning
  • Machine learning is the study of mechanisms that
    allow a machine to infer complex patterns and
    behaviors from empirical data. Machine learning
    is used extensively in image and speech
    recognition, as well as in data-mining
    applications.
  • While similar in nature to many CI techniques,
    the main difference is that the core
    representation of knowledge in machine learning
    is statistical, whereas the core representation
    of knowledge in computational intelligence is
    approximate in nature (fuzzy).
  • Examples of machine learning include Bayesian
    networks and Markov models.
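
To illustrate the statistical representation of knowledge described above, here is a minimal sketch (an assumed toy example) that estimates a first-order Markov model from a short sequence of weather observations:

```python
# Toy Markov model: estimate transition probabilities from observed data.
from collections import Counter, defaultdict

observations = ["sunny", "sunny", "rainy", "rainy", "sunny", "sunny", "sunny", "rainy"]

# Count how often each state is followed by each other state.
transitions = defaultdict(Counter)
for today, tomorrow in zip(observations, observations[1:]):
    transitions[today][tomorrow] += 1

# Convert the counts into conditional probabilities P(next | current).
model = {state: {nxt: n / sum(counts.values()) for nxt, n in counts.items()}
         for state, counts in transitions.items()}

print(model)                                           # the learned transition table
print(max(model["sunny"], key=model["sunny"].get))     # most likely day after a sunny one
```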

34
What CI is not: Ontological Engineering
  • Ontology is the study of that which exists. In
    computer science terms, ontologies are semantic
    networks that assign meaning to concepts.
    Popular AI often delves into complex
    understandings of concepts (such as
    self-awareness), and the semantic web intends to
    devise a markup language for ontologies to allow
    agents to infer the meaning of complex data.
  • CI does not intend to learn ontologies or derive
    meanings; more often it seeks to optimize a
    behavior or function in a given ontology.

35
Key Technologies
  • Evolutionary computation
  • Populations of solutions competing/co-operating
    to improve over time
  • Neural networks
  • Modelling the connectivity of the brain (a one-neuron sketch follows this list)
  • Fuzzy logic
  • Modelling with partial truth and probabilistic
    logic
  • Far more details next week!
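
As a small taste of the neural network idea ahead of next week, here is a minimal sketch of a single artificial neuron; the weights are hand-picked for illustration so that it behaves roughly like a logical OR.

```python
# A single artificial neuron: weighted sum of inputs pushed through a sigmoid.
import math

def neuron(inputs, weights, bias):
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))     # squash into the range (0, 1)

# Hand-chosen weights so the neuron acts roughly like OR on two binary inputs.
for pair in ([0, 0], [1, 0], [0, 1], [1, 1]):
    print(pair, "->", round(neuron(pair, [5.0, 5.0], -2.5), 3))
```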