Announcements - PowerPoint PPT Presentation

Provided by: dlaw3
Learn more at: http://www.cs.loyola.edu

Transcript and Presenter's Notes

Title: Announcements


1
Announcements
  • Project 1 is due Tuesday, October 16
  • Send me the name of your konane bot
  • Midterm is Thursday, October 18
  • Bring one 8.5x11 piece of paper with anything
    you want written on it
  • Book Review is due Thursday, October 25
  • Andrew's Current Event
  • Volunteer for Tuesday?

2
Introduction to Machine Learning
  • Lecture 14

3
Effects of Programs that Learn
  • Application areas
  • Learning from medical records which treatments
    are most effective for new diseases
  • Houses learning from experience to optimize
    energy costs based on usage patterns of their
    occupants
  • Personal software assistants learning the
    evolving interests of users in order to highlight
    especially relevant stories from the online
    morning newspaper

4
Effective Applications of Learning
  • Speech recognition
  • outperform all other approaches that have been
    attempted to date
  • Data mining
  • Learning algorithms being used to discover
    valuable knowledge from large commercial
    databases
  • detect fraudulent use of credit cards
  • Play Games
  • Play backgammon at levels approaching the
    performance of human world champions

5
Learning Programs
  • A computer program is said to learn from
    experience E with respect to some class of tasks
    T and performance measure P, if its performance
    at tasks in T, as measured by P, improves with
    experience E
  • Examples
  • A checkers learning problem
  • Task T: playing checkers
  • Performance measure P: percent of games won
    against opponents
  • Training experience E: playing practice games
    against itself
  • Handwriting recognition learning problem
  • Task T: recognizing and classifying handwritten
    words within images
  • Performance measure P: percent of words correctly
    classified
  • Training experience E: a database of handwritten
    words with given classifications

6
Designing a Learning System
  • Consider designing a program to learn to play
    checkers, with the goal of entering it in the
    world checkers tournament
  • Requires the following design choices
  • Choosing Training Experience
  • Choosing the Target Function
  • Choosing the Representation of the Target
    Function
  • Choosing the Function Approximation Algorithm

7
Choosing the Training Experience (1)
  • Will the training experience provide direct or
    indirect feedback?
  • Direct feedback: the system learns from examples
    of individual checkers board states and the
    correct move for each
  • Indirect feedback: move sequences and final
    outcomes of various games played
  • Credit assignment problem: the value of early
    states must be inferred from the outcome
  • Degree to which the learner controls the sequence
    of training examples
  • Teacher selects informative boards and gives
    correct move
  • Learner proposes board states that it finds
    particularly confusing. Teacher provides correct
    moves
  • Learner controls board states and (indirect)
    training classifications

8
Choosing the Training Experience (2)
  • How well the training experience represents the
    distribution of examples over which the final
    system performance P will be measured
  • If training the checkers program consists only of
    experiences played against itself, it may never
    encounter crucial board states that are likely to
    be played by the human checkers champion
  • Most theory of machine learning rests on the
    assumption that the distribution of training
    examples is identical to the distribution of test
    examples

9
Partial Design of Checkers Learning Program
  • A checkers learning problem
  • Task T: playing checkers
  • Performance measure P: percent of games won in
    the world tournament
  • Training experience E: games played against
    itself
  • Remaining choices
  • The exact type of knowledge to be learned
  • A representation for this target knowledge
  • A learning mechanism

10
Choosing the Target Function (1)
  • Assume that you can determine legal moves
  • Program needs to learn the best move from among
    legal moves
  • Defines large search space known a priori
  • Target function ChooseMove : B → M
  • ChooseMove is difficult to learn given indirect
    training
  • Alternative target function
  • An evaluation function that assigns a numerical
    score to any given board state
  • V : B → ℝ (where ℝ is the set of real numbers)
  • V(b) for an arbitrary board state b in B
  • if b is a final board state that is won, then
    V(b) = 100
  • if b is a final board state that is lost, then
    V(b) = -100
  • if b is a final board state that is drawn, then
    V(b) = 0
  • if b is not a final state, then V(b) = V(b'),
    where b' is the best final board state that can
    be achieved starting from b and playing optimally
    until the end of the game
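The recursive definition above can be made concrete on a toy game. The following sketch uses a take-1-or-2-stones game (whoever takes the last stone wins) as a hypothetical stand-in for checkers, since full checkers rules are out of scope here:

```python
# A minimal, runnable illustration of the recursive definition of V,
# using a toy take-1-or-2-stones game in place of checkers: whoever
# takes the last stone wins, and this game has no draws.

def V(stones, our_turn=True):
    """V of a state under optimal play: +100 if the best reachable
    final state is a win for the learner, -100 if a loss."""
    if stones == 0:
        # The player who just moved took the last stone and won; if it
        # is now "our turn", the opponent moved last, so we lost.
        return -100 if our_turn else 100
    moves = [V(stones - k, not our_turn) for k in (1, 2) if k <= stones]
    # Optimal play: maximize on our turns, the opponent minimizes.
    return max(moves) if our_turn else min(moves)
```

As the slide notes for checkers, this definition is exact but nonoperational for any nontrivial game: it requires searching all the way to the end of the game.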

11
Choosing the Target Function (2)
  • V(b) gives a recursive definition for board state
    b
  • Not usable because it is not efficient to
    compute, except in the first three trivial cases
  • a nonoperational definition
  • Goal of learning is to discover an operational
    description of V
  • Learning the target function is often called
    function approximation
  • The learned approximation is referred to as V̂

12
Choosing a Representation for the Target Function
  • Choice of representation involves trade-offs
  • Pick a very expressive representation to allow
    close approximation to the ideal target function
    V
  • The more expressive the representation, the more
    training data required to choose among
    alternative hypotheses
  • Use a linear combination of the following board
    features
  • x1: the number of black pieces on the board
  • x2: the number of red pieces on the board
  • x3: the number of black kings on the board
  • x4: the number of red kings on the board
  • x5: the number of black pieces threatened by red
    (i.e. which can be captured on red's next turn)
  • x6: the number of red pieces threatened by black
  • V̂(b) = w0 + w1x1 + w2x2 + w3x3 + w4x4 + w5x5 + w6x6,
    where w0 through w6 are weights to be learned
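Evaluating this linear representation is a one-liner. A minimal sketch, where the weight values are illustrative rather than learned:

```python
# A sketch of evaluating the linear representation
# V̂(b) = w0 + w1*x1 + ... + w6*x6 over the six board features.

def v_hat(features, weights):
    """features: [x1, ..., x6]; weights: [w0, w1, ..., w6]."""
    w0, ws = weights[0], weights[1:]
    return w0 + sum(w * x for w, x in zip(ws, features))

# Opening position: 12 black and 12 red pieces, no kings, no threats.
board_features = [12, 12, 0, 0, 0, 0]
weights = [0.0, 1.0, -1.0, 2.0, -2.0, -0.5, 0.5]  # illustrative values
score = v_hat(board_features, weights)  # 12 - 12 = 0.0: an even position
```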

13
Partial Design of Checkers Learning Program
  • A checkers learning problem
  • Task T: playing checkers
  • Performance measure P: percent of games won in
    the world tournament
  • Training experience E: games played against
    itself
  • Target function V : Board → ℝ
  • Target function representation
    V̂(b) = w0 + w1x1 + w2x2 + w3x3 + w4x4 + w5x5 + w6x6

14
Choosing a Function Approximation Algorithm
  • To learn V̂ we require a set of training
    examples, each describing a board state b and a
    training value Vtrain(b)
  • Ordered pair ⟨b, Vtrain(b)⟩

15
Estimating Training Values
  • Need to assign specific scores to intermediate
    board states
  • Approximate the training value of an intermediate
    board state b using the learner's current
    approximation applied to the board state
    following b
  • Vtrain(b) ← V̂(Successor(b)), where Successor(b)
    is the next board state at which it is again the
    program's turn to move
  • Simple and successful approach
  • More accurate for states closer to end states
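This estimation step can be sketched in a few lines. The function below is an illustrative helper, not from the slides; it indexes states by position in the game rather than by board value:

```python
# Sketch of the estimate Vtrain(b) <- V̂(Successor(b)): each
# intermediate board (at the learner's turns) is scored with the
# current approximation of the board that follows it, while the final
# state gets the game's known outcome.

def estimate_training_values(states, v_hat, final_outcome):
    """states: board states at the learner's successive turns in one
    finished game; final_outcome: +100, -100, or 0 for that game.
    Returns {turn_index: Vtrain} for each state."""
    vtrain = {}
    for i, b in enumerate(states):
        if i == len(states) - 1:
            vtrain[i] = final_outcome         # terminal value known exactly
        else:
            vtrain[i] = v_hat(states[i + 1])  # current estimate of successor
    return vtrain
```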

16
Adjusting the Weights
  • Choose the weights wi to best fit the set of
    training examples
  • Minimize the squared error E between the training
    values and the values predicted by the hypothesis
  • E = Σ (Vtrain(b) − V̂(b))², summed over the
    training examples ⟨b, Vtrain(b)⟩
  • Require an algorithm that
  • will incrementally refine weights as new training
    examples become available
  • will be robust to errors in these estimated
    training values
  • Least Mean Squares (LMS) is one such algorithm

17
LMS Weight Update Rule
  • For each training example ⟨b, Vtrain(b)⟩
  • Use the current weights to calculate V̂(b)
  • For each weight wi, update it as
    wi ← wi + η (Vtrain(b) − V̂(b)) xi
  • where η is a small constant (e.g. 0.1) that
    moderates the size of the weight update
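The LMS update rule above can be sketched as a single function; the training example used at the end is illustrative:

```python
# One LMS step: compute the current estimate V̂(b), then nudge each
# weight by eta * (Vtrain(b) - V̂(b)) * x_i.

def lms_update(weights, features, vtrain, eta=0.1):
    """weights[0] is w0, paired with an implicit constant feature x0 = 1."""
    xs = [1.0] + list(features)        # prepend the constant feature
    v_hat = sum(w * x for w, x in zip(weights, xs))
    error = vtrain - v_hat             # Vtrain(b) - V̂(b)
    return [w + eta * error * x for w, x in zip(weights, xs)]

# One step from zero weights on a single illustrative example:
w = lms_update([0.0, 0.0, 0.0], [1.0, 0.0], vtrain=100.0)
# each weight whose feature is nonzero moves to reduce the error;
# repeating the update shrinks the error further.
```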

18
Final Design
The final design connects four modules in a loop:
  • Experiment Generator: takes the current
    hypothesis and outputs a new problem (an initial
    game board)
  • Performance System: plays the game and outputs a
    solution trace (the game history)
  • Critic: takes the solution trace and outputs
    training examples
  • Generalizer: takes the training examples and
    outputs a new hypothesis, closing the loop
19
Summary of Design Choices
  • Determine type of training experience: games
    against experts, games against itself, or a
    table of correct moves (chosen: games against
    itself)
  • Determine target function: Board → value or
    Board → move (chosen: Board → value)
  • Determine representation of learned function:
    polynomial, artificial neural network, or linear
    function of six features (chosen: linear
    function of six features)
  • Determine learning algorithm: gradient descent
    or linear programming (chosen: gradient descent)
  • Complete design
20
Training Classification Problems
  • Many learning problems involve classifying inputs
    into a discrete set of possible categories.
  • Learning is only possible if there is a
    relationship between the data and the
    classifications.
  • Training involves providing the system with data
    which has been manually classified.
  • Learning systems use the training data to learn
    to classify unseen data.

21
Rote Learning
  • A very simple learning method.
  • Simply involves memorizing the classifications of
    the training data.
  • Can only classify previously seen data; unseen
    data cannot be classified by a rote learner.
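A rote learner fits in a few lines of code; a minimal sketch (the class name and interface are illustrative):

```python
# A rote learner: memorize the training classifications in a dict and
# decline to classify anything not seen before.

class RoteLearner:
    def __init__(self):
        self.memory = {}

    def train(self, examples):
        # examples: iterable of (input, classification) pairs
        for x, label in examples:
            self.memory[x] = label

    def classify(self, x):
        # Memorized label for seen inputs; None for unseen ones,
        # since a rote learner cannot generalize.
        return self.memory.get(x)
```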

22
Concept Learning
  • Concept learning involves determining a mapping
    from a set of input variables to a Boolean value.
  • Such methods are known as inductive learning
    methods.
  • If a function can be found which maps training
    data to correct classifications, then it will
    also work well for unseen data (hopefully!)
  • This process is known as generalization.

23
Example Learning Task
  • Learn the "days on which my friend Aldo enjoys
    his favorite water sport"

24
Hypotheses
  • A hypothesis is a vector of constraints for each
    attribute
  • indicate by a "?" that any value is acceptable
    for this attribute
  • specify a single required value for the attribute
  • indication by a "Ø" that no value is acceptable
  • If some instance x satisfies all the constraints
    of hypothesis h, then h classifies x as a
    positive example (h(x) 1)
  • Example hypothesis for EnjoySport

25
EnjoySport concept learning task
  • Given
  • Instances X: possible days, each described by the
    attributes
  • Sky (with possible values Sunny, Cloudy, and
    Rainy)
  • AirTemp (with values Warm and Cold)
  • Humidity (with values Normal and High)
  • Wind (with values Strong and Weak)
  • Water (with values Warm and Cool), and
  • Forecast (with values Same and Change)
  • Hypotheses H: each hypothesis is described by a
    conjunction of constraints on the attributes.
    The constraints may be "?", "Ø", or a specific
    value
  • Target concept c: EnjoySport : X → {0, 1}
  • Training examples D: positive or negative
    examples of the target function
  • Determine
  • A hypothesis h in H such that h(x) = c(x) for all
    x in X

26
Inductive Learning Hypothesis
  • Ideally, determine a hypothesis h identical to
    the target concept c over the entire set of
    instances X
  • but the only information available about c is its
    values over the training examples
  • Inductive learning at best guarantees that the
    output hypothesis fits the target concept over
    the training data
  • Fundamental assumption of inductive learning
  • Inductive Learning Hypothesis Any hypothesis
    found to approximate the target function well
    over a sufficiently large set of training
    examples will also approximate the target
    function well over other unobserved examples

27
Concept Learning As Search
  • Search through a large space of hypotheses
    implicitly defined by the hypothesis
    representation
  • Find the hypothesis that best fits the training
    examples
  • How big is the hypothesis space?
  • In EnjoySport there are six attributes: Sky has 3
    values, and the rest have 2
  • How many distinct instances?
  • How many hypotheses?
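The two counting questions can be answered directly. The sketch below uses the standard analysis for this hypothesis language: any hypothesis containing "Ø" classifies every instance negative, so all such hypotheses are semantically equivalent.

```python
# Counting instances and hypotheses for EnjoySport, where Sky has 3
# possible values and the other five attributes have 2 each.

values_per_attribute = [3, 2, 2, 2, 2, 2]

# Distinct instances: one concrete value per attribute.
instances = 1
for v in values_per_attribute:
    instances *= v                # 3 * 2**5 = 96

# Syntactically distinct hypotheses: each attribute may additionally
# be "?" (any value) or "Ø" (no value).
syntactic = 1
for v in values_per_attribute:
    syntactic *= v + 2            # 5 * 4**5 = 5120

# Semantically distinct hypotheses: all hypotheses containing "Ø"
# collapse into one all-negative hypothesis; the rest use "?" or a
# concrete value per attribute.
semantic = 1
for v in values_per_attribute:
    semantic *= v + 1
semantic += 1                     # 4 * 3**5 + 1 = 973
```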

28
General to Specific Ordering
  • This hypothesis is the most general hypothesis.
    It represents the idea that every day is a
    positive example
  • hg = ⟨?, ?, ?, ?, ?, ?⟩
  • The following hypothesis is the most specific
    hypothesis it says that no day is a positive
    example
  • hs = ⟨Ø, Ø, Ø, Ø, Ø, Ø⟩
  • We can define a partial order over the set of
    hypotheses
  • h1 >g h2
  • This states that h1 is more general than h2
  • Let h1 = ⟨Sunny, ?, ?, ?, ?, ?⟩
  • Let h2 = ⟨Sunny, ?, ?, Strong, ?, ?⟩
  • Given hypothesis hj and hk, hj is
    more_general_than_or_equal_to hk if and only if
    any instance that satisfies hk also satisfies hj
  • One learning method is to determine the most
    specific hypothesis that matches all the training
    data.
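For conjunctions of attribute constraints, the more_general_than_or_equal_to test reduces to an attribute-wise check; a sketch (the function name and tuple encoding are illustrative):

```python
# hj >=g hk iff every instance satisfying hk also satisfies hj.
# Attribute-wise: each constraint in hj must admit at least everything
# the corresponding constraint in hk admits.

def more_general_or_equal(hj, hk):
    return all(
        cj == "?"        # "?" admits any value
        or ck == "Ø"     # "Ø" admits nothing, so anything covers it
        or cj == ck      # identical specific constraints
        for cj, ck in zip(hj, hk)
    )

h1 = ("Sunny", "?", "?", "?", "?", "?")
h2 = ("Sunny", "?", "?", "Strong", "?", "?")
# more_general_or_equal(h1, h2) holds; the reverse does not.
```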

29
Partial Ordering
The slide shows a diagram relating instances X to
hypotheses H, ordered from specific to general:
  • x1 = ⟨Sunny, Warm, High, Strong, Cool, Same⟩
  • x2 = ⟨Sunny, Warm, High, Light, Warm, Same⟩
  • h1 = ⟨Sunny, ?, ?, Strong, ?, ?⟩
  • h2 = ⟨Sunny, ?, ?, ?, ?, ?⟩
  • h3 = ⟨Sunny, ?, ?, ?, Cool, ?⟩
  • h2 is more general than both h1 and h3
30
Find-S: Finding a Maximally Specific Hypothesis
  • Initialize h to the most specific hypothesis in H
  • For each positive training instance x
  • For each attribute constraint ai in h
  • If the constraint ai is satisfied by x, then do
    nothing
  • Else replace ai in h by the next more general
    constraint that is satisfied by x
  • Output hypothesis h
  • Begin: h ← ⟨Ø, Ø, Ø, Ø, Ø, Ø⟩
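The algorithm above can be sketched in a few lines for EnjoySport-style data. The training examples at the end are illustrative, not taken from the slides:

```python
# Runnable sketch of Find-S: start from the most specific hypothesis
# and generalize it just enough to cover each positive example.

def find_s(examples):
    """examples: list of (instance, is_positive). Returns the maximally
    specific hypothesis consistent with the positive examples."""
    h = ["Ø"] * len(examples[0][0])     # most specific hypothesis
    for x, positive in examples:
        if not positive:
            continue                    # Find-S ignores negative examples
        for i, xi in enumerate(x):
            if h[i] == "Ø":
                h[i] = xi               # first positive example: copy values
            elif h[i] != xi:
                h[i] = "?"              # generalize to cover this value
    return h

data = [
    (("Sunny", "Warm", "Normal", "Strong", "Warm", "Same"), True),
    (("Sunny", "Warm", "High", "Strong", "Warm", "Same"), True),
    (("Rainy", "Cold", "High", "Strong", "Warm", "Change"), False),
    (("Sunny", "Warm", "High", "Strong", "Cool", "Change"), True),
]
hypothesis = find_s(data)
# hypothesis == ["Sunny", "Warm", "?", "Strong", "?", "?"]
```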