Shyh-Kang Jeng - PowerPoint PPT Presentation

Transcript and Presenter's Notes

Title: Shyh-Kang Jeng


1
Introduction
  • Shyh-Kang Jeng
  • Department of Electrical Engineering/
  • Graduate Institute of Communication Engineering
  • National Taiwan University

2
Course Website
http://cc.ee.ntu.edu.tw/skjeng/IntelligentAgent2001Spring.htm
3
Contents
  • Directories, Courses, Research, and Applications
    of Intelligent Agents
  • Features of Intelligent Agents
  • Agents as Assistants
  • Mobile Agents
  • Multi-Agent Cooperation
  • Agent Roles
  • Eager Assistants
  • Guides
  • Memory Aids
  • Filters/Critics
  • Matchmakers
  • Agents for Buying/Selling
  • Future and Implications

4
Directories, Applications, and Research on
Intelligent Agents
5
A Search for Intelligent Agents via Yahoo!
6
UMBC Agent Web: http://agents.umbc.edu/
7
BottomDollar: http://bottomdollar.shopnow.com/
8
Personal Travel Assistant by Zeus: http://www.labs.bt.com/projects/agents.com/
9
SpiderHunter.com: http://www.spiderhunter.com/
10
Agent Research at IBM: http://www.research.ibm.com/iagents/
11
Agent Studies in NTUEE
  • Professor Sheng-De Wang
  • Professor Chin-Laung Lei
  • Professor Ming-Syan Chen

12
Agent Course Offered by Professor Jane Hsu
http://hugo.csie.ntu.edu.tw/yjhsu/course/u1760/
13
Agent Research at NCCU: http://www.cs.nccu.edu.tw/jong/agent/agent.html
14
MIT Media Lab Software Agents Group: http://agents.www.media.mit.edu/groups/agents
15
Software Agent Projects in MIT Media Lab SA Group: http://agents.www.media.mit.edu/groups/projects
16
Pattie Maes: http://pattie.www.media.mit.edu/people/pattie/
17
Maes's 1994 CACM Paper
  • Agents that Reduce Work and Information Overload
  • http://hugo.csie.ntu.edu.tw/yjhsu/course/u1760/papers/Maes-CACM94/CACM-94_p1.html

18
Maes's CHI 1997 Talk
  • CHI'97 Software Agents Tutorial
  • http://pattie.www.media.mit.edu/people/pattie/CHI97/

19
Features of Intelligent Agents
20
What is an Agent?
  • A computational system which
  • Is long-lived
  • Has goals, sensors, and effectors
  • Decides autonomously which actions to take in the
    current situation to maximize progress toward its
    (time-varying) goals

21
Agents
22
Common Issues Studied
  • Action selection by one agent
  • Knowledge representation and inference by one
    agent
  • Agent learning, adapting
  • Communication, collaboration among agents

23
Types of Agents
  • Autonomous robots
  • Synthetic characters
  • Expert assistants
  • Software agents, knowbots, softbots

24
What is a Software Agent?
  • Particular type of agent, inhabiting computers
    and networks, assisting users with computer-based
    tasks

25
How is an Agent Different from Other Software?
  • Personalized, customized
  • Proactive, takes initiative
  • Long-lived, autonomous
  • Adaptive

26
Agents as Assistants
27
Change of Metaphor for HCI
Old: direct manipulation
New: indirect management
28
Direct Manipulation
  • Task for which it is designed
  • Closed, static, relatively small and structured
    information world
  • Method used
  • Visualize the objects
  • Actions on objects in interface correspond to
    actions on real objects
  • Nothing happens unless the user makes it happen

29
Indirect Management/Agents
  • Task for which it is designed
  • Open, dynamic, vast and unstructured information
    world
  • Method used
  • User delegates to agents that know the user's
    interests, habits, preferences
  • Agents make suggestions and/or act on behalf of
    user
  • Lots of things happen all the time (even when
    user is not active)

30
Software Agent ≠ Expert System
  • Naïve user
  • Agents → average users
  • Expert systems → expert users
  • Common task
  • Agents → common tasks
  • Expert systems → high-level tasks

31
Agents vs. Expert Systems
  • Personalized
  • Agents → different actions
  • Expert systems → same actions
  • Active, autonomous
  • Agents → act on their own
  • Expert systems → passively answer
  • Adaptive
  • Agents → learn and change
  • Expert systems → remain fixed

32
Types of Software Agents
  • Nature of task performed
  • User facing vs. background task
  • Nature and source of intelligence
  • How is it built? Who programs it?
  • Mobility/location
  • Where does it reside? Can it move?
  • Role fulfilled
  • What task does it help the user with?

33
Nature of Task Performed
  • User agents
  • Assist user; know interests/preferences/habits;
    may act on user's behalf
  • e.g., personal news editor, personal e-shopper,
    personal web guide
  • Service agents
  • Perform more general tasks in the background
  • e.g., web indexing, info retrieval, phone network
    load balancing

34
Nature of Intelligence
  • User programmed
  • Person provides rules and criteria directly
  • Simplest
  • Not very smart
  • Relies on user's programming skill
  • Commercially available

35
Example of User-Programmed Agents: My Yahoo!
http://my.yahoo.com/?myHome
36
User Programmed Agents
37
AI Engineered System
  • Created by traditional, knowledge-based AI
    techniques
  • Very complex
  • Smart
  • Programmed by a knowledge engineer
  • Not commercially available yet

38
Knowledge-Based Agents
39
Nature of Intelligence
  • Learning: agents program themselves
  • Patterns in the user's actions and among users are
    detected and exploited
  • Medium complexity
  • Smart in key areas (where user concentrates)
  • Beginning to be commercially available

40
Learning from the User
[Diagram: the agent observes and imitates the user,
collaborates with the user, and interacts with the same
application the user interacts with.]
41
Learning from Other Agents
[Diagram: two user-application-agent triples; the agents
of the two users communicate with each other.]
42
Agent Maxims
43
Agent Maxims
44
Example of Learning Agent: Maxims
  • (1) Learning from the user
  • Learns rules for sorting, forwarding, archiving
  • Correlates attributes of messages and situation →
    actions
  • Evolves with user; however, can take time to be
    useful (patterns must be learned from many
    examples)

45
Maxims Learning Agent (cont.)
  • (2) Also learns from peers
  • Agent knows other agents exist
  • Agents are registered at a public bulletin board
  • Agents can make use of peers' experience
  • Model: if other users are like me, my agent
    should act like theirs (as a default)

46
Learning from Peers: Peer-Peer Model
  • Agent can present unknown situations to peers and
    ask "What would you do in this situation?"
  • Use the most trusted recommendation
  • Compare and combine the recommendations
  • Peers are like situational experts for my agent.
    From the user's point of view, his agent acts
    like an expert.

47
Pros/Cons of Approaches
  • User-programmed agent
  • Simple (+)
  • Customized (+)
  • Users do not recognize opportunity for an agent
    (-)
  • Users do not like to program (-)
  • Agent does not adapt (-)
  • Agent has no common sense (-)

48
Pros/Cons of Approaches
  • AI-Engineered Agent
  • Sophisticated, knowledge-based (+)
  • Agent ready to go from start (+)
  • Not customized (-)
  • Expensive solution (-)
  • Agent does not adapt (-)

49
Pros/Cons of Approaches
  • Learning Agent
  • Agent adapts (+)
  • Customized (+)
  • Manageable complexity (+)
  • Agent takes time to learn/relearn (-)
  • Agent only automates pre-existing patterns (-)
  • Agent has no common sense (-)

50
Which Approach is Best?
  • Combination of 3 approaches
  • Give agent access to background knowledge which
    is available and general
  • Allow user to program the agent, especially when
    the agent is new or drastic changes occur in the
    user's behavior
  • Agent learns to adapt and suggest changes

51
Interface Agent ↔ User
  • Understanding: Does the user understand the
    agent? Can the user trust the agent?
  • Control: How does the user control the agent?
  • Distraction: How to minimize distraction?
  • Ease of use: How expert does the user have to be?
  • Personification: How to represent the agent to
    the user?

52
Issue 1: Understanding
  • Problem: Agent-user collaboration is only
    successful if the user can understand and trust
    the agent
  • How do we give users insight into agent states
    and functioning?
  • How does the person learn all that an agent can
    do?

53
Issue 1: Understanding
  • Solution
  • Make the user model available to the user
  • Give continuous feedback to the user about the
    agent's state, actions, and learning
  • Agent behavior cannot be too complicated

54
Issue 2: Control
  • Problem: Users must be able to turn over control
    of tasks to agents which act autonomously, but
    users must not feel out of control
  • How do we allow agents to do work but not be too
    independent of the user?
  • How do we accommodate different users wanting
    different amounts of control?

55
Issue 2: Control
  • Solution
  • Allow variable degrees of autonomy
  • Allow user to decide on level of autonomy
  • Allow user programming of the agent
  • Always allow user to bypass agent

56
Issue 3: Distraction
  • Problem: Autonomous agents should keep the user
    informed and interrupt if necessary
  • How can users control the level of interaction
    they want from agents?
  • When is an issue/event important enough that the
    agent is allowed to interrupt?
  • How can agent actions be made known to users
    without unnecessary interruption?

57
Issue 3: Distraction
  • Solution
  • Gradually decrease number of interruptions
  • Allow user to program situations that require
    interruption
  • Give feedback about agent behavior without
    requiring the user's full attention

58
Issue 4: Ease of Use
  • Problem: Agents should be employed for tasks
    users cannot or do not want to concern themselves
    with. If using the agent is too complex, users
    will not use it.
  • How do we enable users to instruct agents without
    requiring programming?
  • How do we enable agents to fit unobtrusively into
    users' task environments?

59
Issue 4: Ease of Use
  • Solution
  • Avoid making user learn a new language
  • Use language of application to communicate
    between agent and user

60
Issue 5: Personification
  • Problem: Personification is a natural process.
    Agents are often personified to remind the user
    that a process is at work taking action on their
    behalf.
  • How do we personify without misleading people
    into thinking the computer is intelligent?
  • How can we take advantage of personification
    tendencies?

61
Issue 5: Personification
  • Solution: ???
  • Jury is still out on this one
  • Pros: Laurel, Nass, ...
  • Cons: Shneiderman, Lanier, Norman, ...
  • Experiments: Koda, King, Walker, ...

62
Personified Agent Research
  • Issues
  • Animation
  • Facial expressions
  • Gestures
  • Natural language/speech I/O

63
Mobile Agents
64
Location of Agents
  • Stationary in client
  • Stationary in server
  • Mobile: client → server → server → ...

65
Location of Agents (cont.)
  • In server versus in client
  • Easy/hard to locate other agents
  • Close to/far from the data operating on
  • Central failure point/more fault-tolerant
  • Less privacy/safer
  • Higher load on servers/more scalable
  • Potential trust problems running someone else's
    code on your machine (witness recent Java
    problems)

66
NTUEE XMAS
67
Mobility of Agents
  • NOT a necessary characteristic for some program
    to be an agent
  • Why move agents around?
  • Reduce network traffic
  • Share load among machines
  • Go to the data if the data can't come to you
  • User may have only infrequent connection to
    network

68
Mobility of Agents (cont.)
  • Examples of languages supporting mobility
  • Telescript: agents go to centralized servers,
    conduct business, and return to users'
    distributed locations with results
  • Java: agents (applets) move to distributed web
    browsers, bringing data and program to execute
    locally
  • Tcl/Tk: executing scripts remotely

69
Multi-Agent Cooperation
70
Interface Agent ↔ Other Agent
  • Problems to be solved
  • Finding other agents
  • Common language, common ontology
  • Negotiation and commitment methods
  • Modeling other agents
  • Identification/authentication

71
Common Language, Common Ontology
  • Homogeneous agents → no problem
  • Heterogeneous agents → use a standard like KQML,
    etc.

72
Negotiation and Commitment Methods
  • Approaches
  • Theoretical, but less applicable (e.g.,
    Rosenschein and Zlotkin, MIT press book)
  • Practical, but less general

73
Modeling Other Agents
  • Logical approach
  • Beliefs, desires, intentions (e.g., Shoham's work
    at Stanford)
  • Pragmatic approach
  • Agents keep a trust level of other agents for a
    set of problems (e.g., Maes et al.'s work at MIT)

74
Agent Roles
75
Roles for Software Agents
  • Eager assistant
  • Guide
  • Memory aid
  • Filter/Critic
  • Matchmaker/Referral-giver
  • Buyers/Sellers (on your behalf)

76
Agents as Eager Assistants
77
Agents as Eager Assistants
  • Eager (Cypher, Apple Computer)
  • Calendar Agent (Kozierok, MIT Media Lab)
  • Note Taking (Schlimmer, Univ. Washington)
  • Email Agent (Metral, MIT Media Lab)
  • Meeting Scheduling (Dent & Mitchell, CMU)
  • Open Sesame (Charles River Assoc.)
  • Microsoft Office 97

78
Maxims Email Agent
  • Agent observes user actions, learns patterns
  • Memory-based reasoning associates features of
    situation with action sequences

79
Eager Assistant Agent
[Diagram: a memory of examples (situation1 → action1,
situation2 → action2, ..., situationN → actionN); given a
new situation, the agent outputs a predicted action and a
confidence level.]
80
Details of One Example
  • Situation
  • Type: new message received
  • Sender: nicholas@media.mit.edu
  • Date: 3/10/95 16:04
  • Topic: meet to discuss funding
  • Receiver: pattie@media.mit.edu
  • Body: <>
  • Keywords: <>
  • Action taken
  • Message read first out of 20
  • Message read first time

81
Prediction and Confidence Level Computation
  • Prediction
  • Compute k nearest situations and distances d_s
  • Compute score for every action: S = Σ 1/d_s
  • Pick action with highest score
  • Confidence level
  • Higher if d_s smaller
  • Higher if less disagreement
  • Higher if more examples in memory
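The prediction step above can be sketched as follows. This is an illustrative Python version, not the Maxims implementation: it assumes situations are encoded as equal-length numeric feature vectors, and the confidence formula (agreement weighted by memory size) is a plausible stand-in for the three factors the slide lists.

```python
import math
from collections import defaultdict

def predict_action(memory, new_situation, k=5, eps=1e-6):
    """Memory-based reasoning sketch: find the k nearest stored
    situations, then score each candidate action as S = sum of 1/d_s
    over its supporting neighbors. `memory` is a list of
    (feature_vector, action) pairs; the numeric encoding is an
    assumption (Maxims compared symbolic message features)."""
    # k nearest situations and their distances
    nearest = sorted(
        ((math.dist(features, new_situation), action)
         for features, action in memory))[:k]
    # score for every action: S = sum of 1/d_s
    scores = defaultdict(float)
    for d, action in nearest:
        scores[action] += 1.0 / (d + eps)
    best = max(scores, key=scores.get)
    # crude confidence: higher when neighbors agree and
    # memory holds many examples
    agreement = scores[best] / sum(scores.values())
    confidence = agreement * min(1.0, len(memory) / 100.0)
    return best, confidence
```

With three remembered email situations and a new one close to the "read" examples, the agent predicts "read" with a confidence that grows as more examples accumulate.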

82
Using the Prediction
  • Agent operates directly on confidence level for
    every action that can be automated

[Diagram: a confidence scale from 0 to 1 with two marks on
it, the tell-me threshold below the do-it threshold.]
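The two-threshold scale can be expressed directly in code. A minimal sketch; the threshold values here are illustrative, since the slides present them as user-settable marks on the 0-1 scale rather than fixed numbers:

```python
def decide(confidence, do_it=0.8, tell_me=0.3):
    """Map a confidence level in [0, 1] onto the two-threshold
    scale: above the do-it threshold the agent acts autonomously;
    between the thresholds it only suggests; below the tell-me
    threshold it stays silent (or consults peer agents)."""
    if confidence >= do_it:
        return "do it autonomously"
    if confidence >= tell_me:
        return "suggest to user"
    return "stay silent / consult peer agents"
```

Letting the user move the two thresholds is one way to realize the "variable degrees of autonomy" solution from the control issue above.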
83
Maxims Agent States
84
Meeting Scheduling Agent
85
Results and Confidence Level
86
Multi-agent Collaboration
  • If agent's confidence < tell-me threshold
  • Agent contacts other agents via bulletin board
  • Agent sends details of situation
  • Some agents respond with Pi and Ci
  • Agent computes score for every action: S = Σ Ci·Ti,s
  • Agent picks action with highest score
  • Agent updates trust levels Ti,s
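The trust-weighted vote and the trust update can be sketched as below. This is an illustrative reading of the slide's S = Σ Ci·Ti,s; the data shapes, default trust of 0.5, and the fixed-step trust update are assumptions, not details from the slides.

```python
from collections import defaultdict

def peer_vote(responses, trust):
    """Score every proposed action as S = sum(C_i * T_i) over the
    peers proposing it, where C_i is the peer's confidence and T_i
    the trust kept for that peer. `responses` maps peer id ->
    (predicted_action, confidence); `trust` maps peer id -> [0, 1]."""
    scores = defaultdict(float)
    for peer, (action, conf) in responses.items():
        scores[action] += conf * trust.get(peer, 0.5)
    return max(scores, key=scores.get)

def update_trust(trust, responses, actual_action, step=0.1):
    """After the correct action is known, nudge trust up for peers
    that predicted it and down for peers that did not (the step size
    is illustrative)."""
    for peer, (action, _) in responses.items():
        delta = step if action == actual_action else -step
        trust[peer] = min(1.0, max(0.0, trust.get(peer, 0.5) + delta))
    return trust
```

A highly trusted peer with a confident answer thus outweighs several distrusted ones, which matches the "use the most trusted recommendation" idea from the peer-peer model above.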

87
Agent Interface
  • Caricature faces convey the state of the agent:
    alert, thinking, working, suggestion, no clue,
    etc.
  • Agent produces a report of tasks automated
  • User can look at the agent's memory and instruct
    it to forget (whole classes of) examples

88
Explanation
  • User can ask "why"
  • Agent cites similar examples and relevant
    features
  • Or asks agents of other users

89
Discussion (cont.)
  • Limitations
  • Agent should have access to all features that may
    be relevant
  • Applications need to be scriptable/recordable
  • Application needs to be used frequently, and
    every user's behavior is different

90
Eager Assistants Dimensions
  • Does agent take initiative to suggest a rule?
  • How much generalization can agent make?
  • Do actions need to happen in sequence? Or can
    agent detect similarities across situations
    happening at very different times?
  • Does it remember the rule/macro in between
    sessions?
  • Does agent communicate about the rule it has
    detected?
  • How much background knowledge is used?

91
Agents as Guides
92
Agents as Guides
  • Guide 3.0 (Oren, Apple Computer)
  • Letizia (Lieberman, MIT Media Lab)
  • WebWatcher
  • Syskill & Webert

93
Letizia: http://lieber.www.media.mit.edu/people/lieber/Lieberary/Letizia/Letizia.html/
94
Letizia
  • Agent for assisting web browsing
  • Human browsing is usually depth-first; the agent
    can augment this by doing a breadth-first browse
  • When you look at a page, the agent
  • Retrieves nearby links
  • Analyzes pages for prominent keywords
  • Pre-fetches interesting docs to a 2nd window

95
Feature-Based Filtering: Analyzing Documents for
Relevant Keywords
  • SMART algorithm (Salton, Cornell, '69)
  • Remove stop words
  • Stem other words
  • Compute ratio = (frequency of word in this
    document) / (average frequency of word in all
    documents)
  • Pick n words with highest ratios as the
    representation of the document
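The four steps above can be sketched as a toy pipeline. This is not the SMART system itself: the stop list and the suffix-stripping stemmer are drastically simplified stand-ins (a real system would use something like the Porter stemmer and a full stop list).

```python
import re
from collections import Counter

STOP_WORDS = {"the", "a", "an", "of", "to", "and", "in", "is", "for"}

def stem(word):
    # toy suffix stripper standing in for a real stemmer
    for suffix in ("ing", "ed", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def top_keywords(document, corpus, n=5):
    """Rank words by (frequency in this document) /
    (average frequency across all documents) after stop-word
    removal and stemming; keep the n highest-ratio words."""
    def counts(text):
        words = [stem(w) for w in re.findall(r"[a-z]+", text.lower())]
        return Counter(w for w in words if w not in STOP_WORDS)
    doc = counts(document)
    avg = Counter()
    for d in corpus:
        avg.update(counts(d))
    ratios = {w: c / (avg[w] / len(corpus))
              for w, c in doc.items() if avg[w]}
    return sorted(ratios, key=ratios.get, reverse=True)[:n]
```

Words that are frequent in this document but rare elsewhere get the highest ratios, which is exactly why they serve as the document's representation.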

96
Feature-Based Filtering: Updating User Profile
Based on One More Datapoint (Document)
  • If user likes the document, then increase weights
    of the document's relevant keywords in the
    profile
  • If user dislikes the document, then decrease
    weights of the document's relevant keywords in
    the profile

97
Feature-Based Filtering: Filtering Based on User
Profile
  • Extract relevant keywords of document
  • Compare those with keywords in the user's profile
  • Compute score of document: S = Σ weight × ratio
    of occurrence, over all keywords in the profile
  • If score is above threshold, then present to user
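The profile-update and scoring rules above fit in a few lines. An illustrative sketch: the dictionary shapes and the fixed update step are assumptions, not from the slides.

```python
def score_document(doc_keywords, profile):
    """S = sum(profile weight x ratio of occurrence) over the
    document's relevant keywords that also appear in the profile.
    `doc_keywords` maps keyword -> occurrence ratio; `profile`
    maps keyword -> learned weight."""
    return sum(profile[w] * r
               for w, r in doc_keywords.items() if w in profile)

def update_profile(profile, doc_keywords, liked, step=0.1):
    """Move profile weights toward (liked) or away from (disliked)
    the document's relevant keywords; the step size is illustrative."""
    for w in doc_keywords:
        profile[w] = profile.get(w, 0.0) + (step if liked else -step)
    return profile
```

Presenting the document only when `score_document` exceeds a threshold completes the filter loop described above.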

98
Agents as Memory Aids
99
Agents as Memory Aids
  • Forget-me-not (Lamming, RXRC)
  • Remembrance agent (Rhodes & Starner, MIT Media
    Lab)

100
Remembrance Agents: http://www.media.mit.edu/rhodes/Rememberance-distribution/
101
Remembrance Agent
  • Agent accesses indexed files/email; runs in an
    Emacs buffer observing keystrokes
  • Words are matched to index terms. Agent makes
    three kinds of suggestions:
  • 1. Long-Term: best match on entire buffer
  • 2. Near-Term: best match on most recent paragraphs
  • 3. Immediate: best match on last sentence
  • Indexing of files/email and current document is
    based on SMART

102
Agents as Filters/Critics
103
Agents as Filters/Critics
  • NewT (Sheth, MIT Media Lab)
  • Fishwrap (Bender, MIT Media Lab)
  • Pointcast, Individual, Surfbot, Netattache,
    Webcompass
  • HOMR (Metral, MIT Media Lab)
  • Webhound (Lashkari, MIT Media Lab)
  • Firefly
  • GroupLens (Resnick, MIT Sloan)
  • Videos@bellcore

104
Agent NewT
105
Technology Underlying Critics: Feature-Based
Filtering (FBF)
  • Analyze items user likes/dislikes; result: most
    important features/values of items
  • e.g., use SMART to compute vector of most
    important keywords and weights
  • Use ML techniques to generalize from examples;
    result: more general description of types of
    items liked by user
  • Present new items matching user's tastes
  • Update user model continuously

106
Complementary Technology Underlying Critics:
Automated Collaborative Filtering (ACF)
[Table: a users × items ratings matrix (User-1 ... User-n
across, Item-1 ... Item-m down). Each cell v_ij holds user
j's rating of item i, e.g. on a 1-7 scale; many cells are
empty, since each user rates only some items. ACF predicts
the empty cells from the filled ones.]
107
ACF: Predicting a Rating
  • Distance between users x and y:
  • d(x, y) = (1/L) · sqrt( Σ_i (v_ix − v_iy)² )
  • Predicted value for user x for item a:
  • V_ax = (1/k) · Σ_j W_j · V_aj
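The two formulas above can be sketched as follows. An illustrative reading: ratings are stored as item → rating dictionaries, and the neighbor weights W_j are taken as uniform (W_j = 1), which reduces the prediction to an average over the k nearest users; real ACF systems typically weight neighbors by similarity.

```python
import math

def user_distance(x, y):
    """d = (1/L) * sqrt(sum over the L commonly rated items of
    (v_ix - v_iy)^2). `x` and `y` map item -> rating."""
    common = [i for i in x if i in y]
    if not common:
        return float("inf")
    s = sum((x[i] - y[i]) ** 2 for i in common)
    return math.sqrt(s) / len(common)

def predict_rating(target, others, item, k=3):
    """Predicted rating for `item`: average rating among the k
    users nearest to `target` who have rated it (uniform W_j)."""
    rated = [u for u in others if item in u]
    rated.sort(key=lambda u: user_distance(target, u))
    nearest = rated[:k]
    if not nearest:
        return None
    return sum(u[item] for u in nearest) / len(nearest)
```

Because the prediction uses only other people's ratings, this needs no analysis of the item's content, which is the point of the FBF-vs-ACF comparison that follows.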

108
Typical Results of Agent Ringo
109
Collaborative vs. Feature Based
  • FBF techniques attempt analysis of object
    content; ACF does not
  • ACF can be applied to areas like music where
    content is not easily analyzed

110
Comparison (cont.)
  • FBF techniques raise "why" questions; ACF dodges
    this bullet
  • Assumes that comparable people like comparable
    things for comparable reasons
  • FBF techniques require potentially unbounded
    real-world knowledge; ACF uses the knowledge of
    other people
  • Agent is actually built with no AI at all!

111
Some Other Neat Features
  • User-added Content
  • Database extensible by users
  • Artist dossiers
  • Personal reviews (signed/unsigned)

112
Neat Features (cont.)
  • Systems like this improve by themselves
  • At start (HOMR data)
  • 20 users, 575 items
  • After 8 months
  • >20,000 users, >15,000 artists/albums
  • Qualitative and quantitative results support our
    contention of improved results

113
Neat Features (cont.)
  • Ability to ask questions of agent about self in
    relation to community
  • How mainstream am I?
  • Would I like album X?
  • Recommend album similar to (i.e., liked by people
    who also like) album Y?
  • Highest rated artists/albums?

114
Neat Features (cont.)
  • Community building (Firefly)
  • Messaging with neighbors
  • Chatting
  • Given a neighbor, what do we have in common?
  • Leave picture of self, summary of my taste for
    others to see

115
Agents as Matchmakers
116
Agents as Matchmakers
  • Yenta (Foner, MIT Media Lab)
  • AT&T work

117
Yenta: http://foner.www.media.mit.edu/people/foner/yenta-brief.html/
118
Yenta
  • System for connecting/introducing people based on
    interests and activities
  • Matchmaking (1-to-1)
  • Coalition-building (n-to-n)
  • Technical goals
  • Distributed: avoid single points of failure and
    bottlenecks
  • Use cryptography and careful design to protect
    user privacy

119
Yenta Applications
  • Internet
  • Intranet
  • Professional network

120
Yenta Methods
  • Distributed, peer-to-peer communication, no
    central site
  • Each user runs her own copy of the agent
  • Agents distributed across the entire Internet
  • Long-lived, local agents, with significant
    permanent state
  • Extensive use of cryptography

121
Yenta Algorithm
  • Scan personal info for keywords; correlate with
    others' info. Caveats:
  • Don't leak contents anywhere
  • Don't assume trustworthiness
  • Requires many techniques. Example:
  • Use referrals to hill-climb into appropriate
    clumps of other users
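The referral hill-climbing idea can be sketched as below. This is an assumption-laden illustration, not Yenta's actual protocol: profiles are reduced to keyword sets compared by Jaccard overlap, agents are (name, profile) pairs, and the referral graph is a plain dictionary.

```python
def interest_similarity(a, b):
    """Keyword-set overlap (Jaccard), a stand-in for Yenta's
    correlation of interest profiles."""
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def hill_climb(my_profile, start_agent, neighbors_of, steps=10):
    """Repeatedly move to the referred neighbor whose profile best
    matches mine, stopping at a local maximum. `neighbors_of` maps
    an agent name to the (name, profile) peers it can refer to."""
    current = start_agent
    for _ in range(steps):
        candidates = neighbors_of.get(current[0], [])
        if not candidates:
            break
        best = max(candidates,
                   key=lambda a: interest_similarity(my_profile, a[1]))
        if (interest_similarity(my_profile, best[1])
                <= interest_similarity(my_profile, current[1])):
            break
        current = best
    return current
```

Each hop asks only the current peer for referrals, so no central site ever sees the full profile, consistent with the caveats above.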

122
Agents for Buying/Selling
123
Agents for Buying/Selling
  • BargainFinder (Krulwich, Andersen Consulting)
  • Fido (Goodman, Continuum Software)
  • ShopBot (Etzioni & Weld, UW)
  • Kasbah (Chavez, MIT Media Lab)
  • Bazaar (Guttman, MIT Media Lab)

124
Kasbah: http://agents.www.media.mit.edu/groups/agents/projects/
125
Kasbah
  • Kasbah: an intelligent classified ad.
  • The ad is an agent. Given key data, it
  • Searches out compatible agents in a centralized
    marketplace (sellers look for buyers/bidders,
    and vice versa).
  • Conducts business for users.
  • Completes the transaction, if authorized.

126
Kasbah Example
  • Sell: Macintosh IIci
  • Deadline: October 10th, 1996
  • Start price: $900.00; min. price: $700.00
  • Description: 4-year-old Mac IIci, 14" screen,
    16 MB memory, 120 MB hard disk
  • Method: Dutch auction
  • Location: local
  • Level of autonomy: check before transaction
  • Reporting method: once a day
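A selling agent with these parameters needs a price-decay strategy. The sketch below assumes a linear drop from the start price to the minimum price by the deadline; the linear schedule is an illustrative choice (Kasbah let users pick among decay functions), not a detail from the slides.

```python
from datetime import date

def asking_price(today, start_date, deadline, start_price, min_price):
    """Lower the asking price from start_price toward min_price as
    the deadline approaches, linearly in elapsed days; never drop
    below min_price."""
    total = (deadline - start_date).days
    elapsed = (today - start_date).days
    if total <= 0 or elapsed >= total:
        return min_price
    frac = max(0, elapsed) / total
    return round(start_price - frac * (start_price - min_price), 2)
```

For the example ad (listed around September 10th, 1996, a hypothetical listing date), the agent would ask $800.00 halfway to the deadline and $700.00 on October 10th.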

127
Future and Implications
128
The Future of Agent Technology
  • Markets for/of agents
  • Ecologies of agents

129
Future Direction Markets for Agents
  • Necessary evolution if we want to keep any
    bandwidth to ourselves
  • Agents paying other agents for services
  • Deceitful agents, honest agents
  • Specialized agents built by different vendors
  • Agents advertising services for/to other agents

130
Future Direction Ecologies of Agents
  • Natural selection and evolution
  • Tierra (Tom Ray, ATR)
  • Parasites
  • Special niches
  • Meta-Crawler (Etzioni, U. Washington)
  • Symbiosis

131
Implications of Software Agents
  • Economic
  • Social
  • Political
  • Legal

132
Benefits to Consumer
  • Personalized help with boring or time-consuming
    tasks when and how you want it
  • Finding people with similar interests, habits
  • Ability to concentrate on key tasks/data
  • Ability to deal with larger quantities of
    information

133
Benefits to Markets/Producers
  • Continuous and detailed user feedback about
    software/products
  • Market/produce products with much narrower
    consumer base, because agents help with
    matchmaking
  • Targeted marketing
  • More active/engaged consumers

134
Speculations Effects of Agents on Markets
  • Middlemen no longer needed (e.g., distributors)
  • More producers of goods/publishers
  • Smaller niche markets are viable
  • Easier to do comparison shopping
  • Back to bartering??

135
Open Questions
  • Privacy issues are key!
  • Personification: good or bad?
  • Should agent augment bad habits or enforce better
    ones upon the user?
  • Avoid running amok, local minima, etc.
  • Effects on society, economy, etc.
