1
MACHINE LEARNING
  • Fatemeh Saremi
  • Mina Razaghpour

2
Overview
  • What is Learning?
  • What is Machine Learning?
  • Why should Machines Learn?
  • How Machines Learn?
  • Specific Machine Learning Methods
  • Solving the Traveling Salesman Problem with an
    Ant Colony
  • Summary
  • References

3
What is Learning?
  • Learning is any change in a system that allows it
    to perform better the second time on repetition
    of the same task or on another task drawn from
    the same population.
  • One part of learning is acquiring knowledge and
    new information;
  • the other part is problem-solving.

4
What is Machine Learning?
  • The goal of machine learning is to build computer
    systems that can adapt and learn from their
    experience.
  • Machine Learning algorithms discover the
    relationships between the variables of a system
    (input, output and hidden) from direct samples of
    the system.

(Diagram: a system with input, output, and hidden
variables, observed through direct samples.)
5
Why Should Machines Learn?
  • We expect machines to learn from their mistakes.
  • An Intelligence that didn't learn would not be
    much of an Intelligence.
  • Machine Learning is a prerequisite for any mature
    programme of Artificial Intelligence.

6
How Machines Learn?
  • Machine Learning typically follows three phases:
  • Training
  • A training set of examples of correct behavior
    is analyzed and some representation of the newly
    learnt knowledge is stored, often in the form of
    rules.

7
How Machines Learn? (cont.)
  • Validation
  • The rules are checked and, if necessary,
    additional training is given. Sometimes
    additional test data are used; alternatively, a
    human expert or some other automatic
    knowledge-based component may validate the
    rules. The role of the tester is often called
    the critic.
  • Application
  • The rules are used in responding to some new
    situation.

8
How Machines Learn? (cont.)
(Diagram: the training set and existing knowledge
feed Training; the resulting new knowledge is
checked in Validation against test data by the
critic; the validated knowledge is then used in
Application to produce a response to a new
situation.)
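As a concrete illustration of these three phases, here is a minimal Python sketch; the rule representation (a lookup table from situations to responses) and all function names are illustrative assumptions, not something prescribed by the slides.

```python
# Minimal sketch of the Training / Validation / Application cycle.
# The "rules" here are just a lookup table; real systems store richer knowledge.

def train(training_set):
    """Training: analyze examples of correct behavior and store the learnt rules."""
    rules = {}
    for situation, correct_response in training_set:
        rules[situation] = correct_response
    return rules

def validate(rules, test_data):
    """Validation: the critic checks the rules against test data and, if
    necessary, additional training is given on the failing cases."""
    failures = [(s, r) for s, r in test_data if rules.get(s) != r]
    if failures:
        rules.update(train(failures))
    return rules

def apply_rules(rules, new_situation, default=None):
    """Application: respond to a new situation using the validated rules."""
    return rules.get(new_situation, default)
```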
9
Specific Machine Learning Methods
10
Learning by Memorizing
  • The simplest way of learning
  • Storing examples of correct behavior
  • An example: Samuel's checkers-playing program

11
Checkers
  • Uses the minimax method.
  • When a complete search is impossible, a static
    evaluation function is used.
  • At the end of each turn, the computed values for
    each state are recorded.
  • When a state visited in previous games is
    reached, the search stops and the stored value
    is used (see the sketch below).
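A minimal sketch of this rote-learning idea, assuming a generic game interface (`successors`, `evaluate`, hashable states); it illustrates storing and reusing state values, not Samuel's actual program.

```python
# Rote learning in game search: values computed for board states are stored, and
# a state already seen in a previous game is looked up instead of searched again.

stored_values = {}   # board state -> value remembered across games

def search(state, depth, maximizing, evaluate, successors):
    if state in stored_values:             # state visited in a previous game:
        return stored_values[state]        # stop the search, use the stored value
    moves = successors(state)
    if depth == 0 or not moves:
        return evaluate(state)             # static evaluation at the search frontier
    values = [search(s, depth - 1, not maximizing, evaluate, successors)
              for s in moves]
    value = max(values) if maximizing else min(values)
    stored_values[state] = value           # record the computed value for this state
    return value
```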

12
Learning by Memorizing (cont.)
  • It is too simple, and it is not sufficient for
    complicated problems.
  • We also need
  • Organized information storage
  • Generalization
  • Direction
  • So in this method learning is similar to problem
    solving, but its success depends on a proper
    structure for the knowledge base.

13
Learning by Adjusting Parameters
  • Determining the parameters
  • Assigning an initial weight to each parameter
  • Modifying the weights as the program goes on
  • In Checkers:
  • 16 parameters for each state
  • f = c1·t1 + c2·t2 + ... + c16·t16
  • When should a coefficient be modified, and to
    what degree? (see the sketch below)
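A minimal sketch of such a parameter-adjusting evaluator; the simple error-driven update shown here is an illustrative assumption, not the exact correction procedure used in the checkers program.

```python
# Linear evaluation function f = c1*t1 + c2*t2 + ... + c16*t16, where the t_i
# are board features and the c_i are the adjustable weights.

def evaluate(features, coeffs):
    return sum(c * t for c, t in zip(coeffs, features))

def adjust(coeffs, features, target, rate=0.01):
    """Nudge each weight in proportion to its feature and the observed error."""
    error = target - evaluate(features, coeffs)
    return [c + rate * error * t for c, t in zip(coeffs, features)]

coeffs = [0.0] * 16   # initial weights for the 16 parameters
# After a game: coeffs = adjust(coeffs, features, outcome) for the positions played.
```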

14
Learning by Adjusting Parameters (cont.)
  • So in this method learning is similar to other
    problem-solving methods, and it depends on
    search algorithms.

15
Learning by Exploration
  • This kind of program explores domains, looking
    for interesting patterns and generalizations.
  • A remarkable example is AM, developed by Doug
    Lenat.
  • AM works in the domain of elementary mathematics.
  • It maintains a large, growing database of
    concepts, such as set and function in the
    mathematics domain.

16
Learning by Exploration (cont.)
  • The program maintains an agenda of tasks, kept
    in decreasing order of interestingness.
  • Its basic control cycle is to select the first
    task from the agenda, work on it (which may add
    new tasks), and repeat (see the sketch below).
  • Working on a task is done by rules called
    heuristics.
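A minimal sketch of this control cycle; tasks are represented here as dicts with an "interest" score and heuristics as plain functions, which are simplifying assumptions (AM's actual heuristics were far richer).

```python
import heapq
import itertools

# Agenda kept in decreasing order of interestingness (a max-heap built on
# Python's min-heap by negating the score; a counter breaks ties).

def explore(initial_tasks, heuristics, max_steps=1000):
    tie = itertools.count()
    agenda = []
    for task in initial_tasks:
        heapq.heappush(agenda, (-task["interest"], next(tie), task))
    while agenda and max_steps > 0:
        max_steps -= 1
        _, _, task = heapq.heappop(agenda)   # select the first (most interesting) task
        for rule in heuristics:              # work on it with the heuristic rules,
            for new in rule(task):           # which may add new tasks to the agenda
                heapq.heappush(agenda, (-new["interest"], next(tie), new))
```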

17
Learning by Exploration (cont.)
  • Another example is Eurisko, also developed by
    Doug Lenat.
  • Eurisko works in a variety of domains, including
    three-dimensional VLSI circuits and the design of
    battle fleets for a space warfare game.
  • Eurisko is more complex than AM and was designed
    to overcome some of AM's flaws, but both programs
    operate similarly.

18
Ant Colony System: A Learning Approach to TSP
19
Ant Colony Algorithms
  • Inspired by ants' natural behavior
  • Ants can find the shortest path between two
    points.
  • However, they can't see! So how?

20
Finding the shortest path
  • Ants choose paths according to the amount of
    pheromone.
  • Pheromone accumulates faster on the shorter path.
  • After some time, all of the ants choose the
    shorter path.

21
Natural behavior of ants
22
ACS for Traveling Salesman Problem
  • A set of simple agents, called ants
  • Each edge has a desirability measure called
    pheromone
  • Ants search in parallel for good solutions to
    the TSP
  • Ants cooperate through pheromone-mediated
    communication

23
Algorithm
  • Initialize: randomly place the ants in cities
  • Each ant constructs a tour iteratively
  • It chooses the next city using
  • A greedy heuristic: the nearest city
  • Past experience: the edge with the highest
    pheromone level

24
Updating Pheromone Level
  • Global updating: at the end of each round, the
    best solution gets extra pheromone
  • Local updating: applied while the ants build
    their tours

25
Algorithm
  Loop
    randomly place m artificial ants on n cities
    For city = 1 to n
      For ant = 1 to m
        select the next city probabilistically,
          according to exploration and exploitation
        apply the local updating rule
      End For
    End For
    apply the global updating rule using the best ant
  Until End_condition
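A compact Python sketch of this loop, following the ACS rules described on the surrounding slides; the parameter values (beta, q0, rho, alpha) and the pheromone initialization are illustrative assumptions.

```python
import random

def ant_colony_tsp(dist, n_ants=10, n_iters=100, beta=2.0, q0=0.9, rho=0.1, alpha=0.1):
    """Sketch of Ant Colony System for the TSP; dist[i][j] is the distance i -> j."""
    n = len(dist)
    avg = sum(sum(row) for row in dist) / (n * n - n)      # average edge length
    tau0 = 1.0 / (n * avg)                                 # rough initial pheromone
    tau = [[tau0] * n for _ in range(n)]
    best_tour, best_len = None, float("inf")

    def attractiveness(r, s):                              # pheromone * closeness^beta
        return tau[r][s] * (1.0 / dist[r][s]) ** beta

    for _ in range(n_iters):
        tours = []
        for _ in range(n_ants):
            start = random.randrange(n)                    # randomly place the ant
            tour, unvisited = [start], set(range(n)) - {start}
            while unvisited:
                r, cands = tour[-1], list(unvisited)
                if random.random() < q0:                   # exploitation: best edge
                    s = max(cands, key=lambda u: attractiveness(r, u))
                else:                                      # biased exploration
                    weights = [attractiveness(r, u) for u in cands]
                    s = random.choices(cands, weights=weights)[0]
                tau[r][s] = tau[s][r] = (1 - rho) * tau[r][s] + rho * tau0  # local update
                tour.append(s)
                unvisited.remove(s)
            tours.append(tour)
        for tour in tours:                                 # track the best tour so far
            length = sum(dist[tour[i - 1]][tour[i]] for i in range(n))
            if length < best_len:
                best_tour, best_len = tour, length
        for i in range(n):                                 # global update: best tour only
            r, s = best_tour[i - 1], best_tour[i]
            tau[r][s] = tau[s][r] = (1 - alpha) * tau[r][s] + alpha / best_len
    return best_tour, best_len
```

For example, with a small symmetric distance matrix:

```python
dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 8],
        [10, 4, 8, 0]]
print(ant_colony_tsp(dist))   # prints the best tour found and its length
```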

26
Transition function
With probability q0: exploitation
With probability (1 - q0): exploration
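The formula image from this slide is not in the transcript; for reference, the state transition rule of ACS as given in the cited Dorigo and Gambardella (1997) paper has the following form, where tau(r,u) is the pheromone on edge (r,u), eta(r,u) is the inverse of its length, beta weights the heuristic, J_k(r) is the set of cities ant k may still visit from r, and q is a uniform random number:

```latex
s =
\begin{cases}
  \arg\max_{u \in J_k(r)} \; \tau(r,u)\,[\eta(r,u)]^{\beta} & \text{if } q \le q_0 \text{ (exploitation)} \\
  S & \text{otherwise (exploration)}
\end{cases}
\qquad
p_k(r,s) = \frac{\tau(r,s)\,[\eta(r,s)]^{\beta}}{\sum_{u \in J_k(r)} \tau(r,u)\,[\eta(r,u)]^{\beta}}
```

Here S is a city drawn at random according to the probability distribution p_k.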
27
A simple TSP example
(Diagram: five cities A, B, C, D, E with pairwise
distances, e.g. dAB = 100, dBC = 60, dDE = 150.)
28
Iteration 1
29
How to build next sub-solution?
30
Iteration 2
31
Iteration 3
32
Iteration 4
33
Iteration 5
34
Path and Pheromone Evaluation
L1 = 300
L2 = 450
L3 = 260
L4 = 280
L5 = 420
35
Global Pheromone Updating
  • Only the ant that generated the best tour is
    allowed to globally update the amount of
    pheromone on its tour edges.
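The update formula itself did not survive in the transcript; in the cited Dorigo and Gambardella (1997) paper the global rule, applied only to the edges of the globally best tour of length L_gb, is:

```latex
\tau(r,s) \leftarrow (1-\alpha)\,\tau(r,s) + \alpha\,(L_{gb})^{-1}
```

where alpha (0 < alpha < 1) is the pheromone decay parameter.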

36
Local Pheromone Updating
  • If edge (r, s) is visited by an ant, its
    pheromone level is updated by the local rule
    shown below.
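In the cited Dorigo and Gambardella (1997) paper, the local rule applied to a visited edge is:

```latex
\tau(r,s) \leftarrow (1-\rho)\,\tau(r,s) + \rho\,\tau_0
```

where rho is a parameter and tau_0 is the initial pheromone level.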

37
Effect of the Local Rule
The local update rule makes the pheromone level on
an edge diminish.
Visited edges become less and less attractive as
they are visited by the various ants.
This favors exploration of not-yet-visited edges.
It helps shuffle the cities, so that cities visited
early in one ant's tour are visited later in
another ant's tour.
38
Enhancements to ACS
  • The algorithm can be performed in parallel, so
    its order (time complexity) is independent of
    the number of ants.
  • For each problem size, a particular set of
    values for the number of ants and the other
    parameters leads to the best results.

39
Comparison of results with some well-known
algorithms. Each cell gives the best tour length
found, with the number of tours needed to find it
in parentheses.

Problem (cities)     ACS              GA                EP               SA               Optimum
Oliver30 (30)        420 (830)        421 (3,200)       420 (40,000)     424 (24,617)     420
Eil50 (50)           425 (1,830)      428 (25,000)      426 (100,000)    443 (68,512)     425
Eil75 (75)           535 (3,480)      545 (80,000)      542 (325,000)    580 (173,250)    535
KroA100 (100)        21,282 (4,820)   21,761 (103,000)  N/A              N/A              21,282
40
SUMMARY
  • The goal of machine learning is to build
    computer systems that can adapt and learn from
    their experience.
  • An Intelligence that didn't learn would not be
    much of an Intelligence.
  • Machine Learning typically follows three phases
  • Training
  • Validation
  • Application
  • Specific Machine Learning Methods
  • Learning by Memorizing
  • Learning by Adjusting Parameters
  • Learning by Exploration
  • Ant colony algorithms
  • an efficient, nature-inspired learning algorithm
    for the TSP

41
References
  • J. Finlay and A. Dix, An Introduction to
    Artificial Intelligence, 1997.
  • R.S. Michalski, J.G. Carbonell, and T.M.
    Mitchell, Machine Learning, 1983.
  • E. Charniak and D. McDermott, Introduction to
    Artificial Intelligence, 1985.
  • M. Fahimi, Artificial Intelligence, 2002.
  • M. Dorigo and L. Gambardella, "A Cooperative
    Learning Approach to the Traveling Salesman
    Problem," IEEE Transactions, 1997.
  • L. Gambardella and M. Dorigo, "Ant Colonies for
    the Traveling Salesman Problem," 1997.
  • V. Maniezzo, L. Gambardella, and F. de Luigi,
    "Ant Colony Optimization," 2001.

42
QUESTIONS???
  • Thanks for your attention!