Title: Shyh-Kang Jeng
1. Introduction
- Shyh-Kang Jeng
- Department of Electrical Engineering/
- Graduate Institute of Communication Engineering
- National Taiwan University
2. Course Website
http://cc.ee.ntu.edu.tw/skjeng/IntelligentAgent2001Spring.htm
3. Contents
- Directories, Courses, Research and Applications of Intelligent Agents
- Features of Intelligent Agents
- Agents as Assistants
- Mobile Agents
- Multi-Agent Cooperation
- Agent Roles
- Eager Assistants
- Guides
- Memory Aids
- Filters/Critics
- Matchmakers
- Agents for Buying/Selling
- Future and Implications
4. Directories, Applications and Research of Intelligent Agents
5. A Search for Intelligent Agents via Yahoo!
6. UMBC Agent Web: http://agents.umbc.edu/
7. BottomDollar: http://bottomdollar.shopnow.com/
8. Personal Travel Assistant by Zeus: http://www.labs.bt.com/projects/agents.com/
9. SpiderHunter.com: http://www.spiderhunter.com/
10. Agent Research at IBM: http://www.research.ibm.com/iagents/
11. Agent Studies in NTUEE
- Professor Sheng-De Wang
- Professor Chin-Laung Lei
- Professor Ming-Syan Chen
12. Agent Course Offered by Professor Jane Hsu
http://hugo.csie.ntu.edu.tw/yjhsu/course/u1760/
13. Agent Research at NCCU: http://www.cs.nccu.edu.tw/jong/agent/agent.html
14. MIT Media Lab Software Agents Group: http://agents.www.media.mit.edu/groups/agents
15. Software Agent Projects in MIT Media Lab SA Group: http://agents.www.media.mit.edu/groups/projects
16. Pattie Maes: http://pattie.www.media.mit.edu/people/pattie/
17. Maes's 1994 CACM Paper
- Agents that Reduce Work and Information Overload
- http://hugo.csie.ntu.edu.tw/yjhsu/course/u1760/papers/Maes-CACM94/CACM-94_p1.html
18. Maes's CHI 1997 Talk
- CHI97 Software Agents Tutorial
- http://pattie.www.media.mit.edu/people/pattie/CHI97/
19. Features of Intelligent Agents
20. What is an Agent?
- A computational system which
- Is long-lived
- Has goals, sensors, and effectors
- Decides autonomously which actions to take in the
current situation to maximize progress toward its
(time-varying) goals
21. Agents
22. Common Issues Studied
- Action selection by one agent
- Knowledge representation and inference by one agent
- Agent learning, adapting
- Communication, collaboration among agents
23. Types of Agents
- Autonomous robots
- Synthetic characters
- Expert assistants
- Software agents, knowbots, softbots
24. What is a Software Agent?
- Particular type of agent, inhabiting computers
and networks, assisting users with computer-based
tasks
25. How is an Agent Different from Other Software?
- Personalized, customized
- Proactive, takes initiative
- Long-lived, autonomous
- Adaptive
26. Agents as Assistants
27. Change of Metaphor for HCI
Old: direct manipulation
New: indirect management
28. Direct Manipulation
- Task for which it is designed
- Closed, static, relatively small and structured information world
- Method used
- Visualize the objects
- Actions on objects in interface correspond to actions on real objects
- Nothing happens unless the user makes it happen
29. Indirect Management/Agents
- Task for which it is designed
- Open, dynamic, vast and unstructured information world
- Method used
- User delegates to agents that know the user's interests, habits, preferences
- Agents make suggestions and/or act on behalf of the user
- Lots of things happen all the time (even when the user is not active)
30. Software Agent ≠ Expert System
- Naïve user
- Agents → average users
- Expert systems → expert users
- Common task
- Agents → common tasks
- Expert systems → high-level tasks
31. Agents vs. Expert Systems
- Personalized
- Agents → different actions
- Expert systems → same actions
- Active, autonomous
- Agents → act on their own
- Expert systems → passively answer
- Adaptive
- Agents → learn and change
- Expert systems → remain fixed
32. Types of Software Agents
- Nature of task performed
- User facing vs. background task
- Nature and source of intelligence
- How is it built? Who programs it?
- Mobility/location
- Where does it reside? Can it move?
- Role fulfilled
- What task does it help the user with?
33. Nature of Task Performed
- User agents
- Assist user, know interests/preferences/habits, may act on the user's behalf
- e.g., personal news editor, personal e-shopper, personal web guide
- Service agents
- Perform more general tasks in the background
- e.g., web indexing, info retrieval, phone network load balancing
34. Nature of Intelligence
- User programmed
- Person provides rules and criteria directly
- Simplest
- Not very smart
- Relies on user's programming skill
- Commercially available
35. Example of User-Programmed Agents: My Yahoo! http://my.yahoo.com/?myHome
36. User-Programmed Agents
37. AI-Engineered System
- Created by traditional, knowledge-based AI techniques
- Very complex
- Smart
- Programmed by a knowledge engineer
- Not commercially available yet
38. Knowledge-Based Agents
39. Nature of Intelligence
- Learning: agents program themselves
- Patterns in the user's actions and among users are detected and exploited
- Medium complexity
- Smart in key areas (where the user concentrates)
- Beginning to be commercially available
40. Learning from the User
[Diagram: the user interacts with an application; the agent observes and imitates the user, collaborates with the user, and also interacts with the application]
41. Learning from Other Agents
[Diagram: two user-application-agent triads; the agents of different users communicate with each other]
42. Agent Maxims
43. Agent Maxims
44. Example of Learning Agent: Maxims
- (1) Learning from the user
- Learns rules for sorting, forwarding, archiving
- Correlates attributes of messages and situation → actions
- Evolves with the user; however, it can take time to become useful (patterns must be learned from many examples)
45. Maxims Learning Agent (cont.)
- (2) Also learns from peers
- Agent knows other agents exist
- Agents registered at a public bulletin board
- Agents can make use of peers' experience
- Model: if other users are like me, my agent should act like theirs (as a default)
46. Learning from Peers: Peer-to-Peer Model
- Agent can present unknown situations to peers and ask: "What should you do in this situation?"
- Use the most trusted recommendation
- Compare and combine the recommendations
- Peers are like situational experts for my agent. From the user's point of view, his agent acts like an expert.
47. Pros/Cons of Approaches
- User-programmed agent
- Simple (+)
- Customized (+)
- Users do not recognize opportunity for an agent (-)
- Users do not like to program (-)
- Agent does not adapt (-)
- Agent has no common sense (-)
48. Pros/Cons of Approaches
- AI-engineered agent
- Sophisticated, knowledge-based (+)
- Agent ready to go from the start (+)
- Not customized (-)
- Expensive solution (-)
- Agent does not adapt (-)
49. Pros/Cons of Approaches
- Learning agent
- Agent adapts (+)
- Customized (+)
- Manageable complexity (+)
- Agent takes time to learn/relearn (-)
- Agent only automates pre-existing patterns (-)
- Agent has no common sense (-)
50. Which Approach is Best?
- Combination of the 3 approaches
- Give agent access to background knowledge which is available and general
- Allow user to program the agent, especially when the agent is new or drastic changes occur in the user's behavior
- Agent learns to adapt and suggests changes
51. Interface Agent ↔ User
- Understanding: Does the user understand the agent? Can the user trust the agent?
- Control: How does the user control the agent?
- Distraction: How to minimize distraction?
- Ease of use: How expert does the user have to be?
- Personification: How to represent the agent to the user?
52. Issue 1: Understanding
- Problem: Agent-user collaboration is only successful if the user can understand and trust the agent
- How do we give users insight into agent states and functioning?
- How does the person learn all that an agent can do?
53. Issue 1: Understanding
- Solution
- Make the user model available to the user
- Give continuous feedback to the user about the agent's state, actions, and learning
- Agent behavior cannot be too complicated
54. Issue 2: Control
- Problem: Users must be able to turn over control of tasks to agents, which act autonomously, but users must not feel out of control
- How do we allow agents to do work but not be too independent of the user?
- How do we accommodate different users wanting different amounts of control?
55. Issue 2: Control
- Solution
- Allow variable degrees of autonomy
- Allow user to decide on level of autonomy
- Allow user programming of the agent
- Always allow user to bypass agent
56. Issue 3: Distraction
- Problem: Autonomous agents should keep the user informed and interrupt if necessary
- How can users control the level of interaction they want from agents?
- When is an issue/event important enough that the agent is allowed to interrupt?
- How can agent actions be made known to users without unnecessary interruption?
57. Issue 3: Distraction
- Solution
- Gradually decrease the number of interruptions
- Allow the user to program situations that require interruption
- Give feedback about agent behavior without requiring the user's full attention
58. Issue 4: Ease of Use
- Problem: Agents should be employed for tasks users cannot or do not want to concern themselves with. If using the agent is too complex, users will not use it.
- How do we enable users to instruct agents without requiring programming?
- How do we enable agents to fit unobtrusively into users' task environments?
59. Issue 4: Ease of Use
- Solution
- Avoid making user learn a new language
- Use language of application to communicate
between agent and user
60. Issue 5: Personification
- Problem: Personification is a natural process. Agents are often personified to remind the user that a process is at work taking action on their behalf.
- How do we personify without misleading people into thinking the computer is intelligent?
- How can we take advantage of personification tendencies?
61. Issue 5: Personification
- Solution: ???
- The jury is still out on this one
- Pros: Laurel, Nass, ...
- Cons: Shneiderman, Lanier, Norman, ...
- Experiments: Koda, King, Walker, ...
62. Personified Agent Research
- Issues
- Animation
- Facial expressions
- Gestures
- Natural language/speech I/O
63. Mobile Agents
64. Location of Agents
- Stationary in client
- Stationary in server
- Mobile: client → server → server → ...
65. Location of Agents (cont.)
- In server versus in client
- Easy/hard to locate other agents
- Close to/far from the data operated on
- Central failure point / more fault-tolerant
- Less privacy / safer
- Higher load on servers / more scalable
- Potential trust problems running someone's code on your machine (witness recent Java problems)
66. NTUEE XMAS
67. Mobility of Agents
- NOT a necessary characteristic for some program to be an agent
- Why move agents around?
- Reduce network traffic
- Share load among machines
- Go to the data if the data can't come to you
- User may have only infrequent connection to the network
68. Mobility of Agents (cont.)
- Examples of languages supporting mobility
- Telescript: agents go to centralized servers, conduct business, and return to users' distributed locations with results
- Java: agents (applets) move to distributed web browsers, bringing data and program to execute locally
- Tcl/Tk: executing scripts remotely
69. Multi-Agent Cooperation
70. Interface Agent ↔ Other Agent
- Problems to be solved
- Finding other agents
- Common language, common ontology
- Negotiation and commitment methods
- Modeling other agents
- Identification/authentication
71. Common Language, Common Ontology
- Homogeneous agents → no problem
- Heterogeneous agents → use standards like KQML, etc.
72. Negotiation and Commitment Methods
- Approaches
- Theoretical, but less applicable (e.g., Rosenschein and Zlotkin, MIT Press book)
- Practical, but less general
73. Modeling Other Agents
- Logical approach
- Beliefs, desires, intentions (e.g., Shoham's work at Stanford)
- Pragmatic approach
- Agents keep a trust level for other agents on a set of problems (e.g., the work of Maes et al. at MIT)
74. Agent Roles
75. Roles for Software Agents
- Eager assistant
- Guide
- Memory aid
- Filter/Critic
- Matchmaker/Referral-giver
- Buyers/Sellers (on your behalf)
76. Agents as Eager Assistants
77. Agents as Eager Assistants
- Eager (Cypher, Apple Computer)
- Calendar Agent (Kozierok, MIT Media Lab)
- Note Taking (Schlimmer, Univ. Washington)
- Email Agent (Metral, MIT Media Lab)
- Meeting Scheduling (Dent & Mitchell, CMU)
- Open Sesame (Charles River Assoc.)
- Microsoft Office 97
78. Maxims Email Agent
- Agent observes user actions, learns patterns
- Memory-based reasoning: associates features of the situation with action sequences
79. Eager Assistant Agent
[Diagram: a memory of examples (Situation 1 → Action 1, Situation 2 → Action 2, ..., Situation N → Action N); a new situation is matched against this memory to produce a predicted action and a confidence level]
80. Details of One Example
- Situation
- Type: new message received
- Sender: nicholas@media.mit.edu
- Date: 3/10/95 16:04
- Topic: meet to discuss funding
- Receiver: pattie@media.mit.edu
- Body: <>
- Keywords: <>
- ...
- Action taken
- Message read first out of 20
- Message read first time
81. Prediction and Confidence-Level Computation
- Prediction
- Compute the k nearest situations and their distances d_s
- Compute a score for every action: S = Σ 1/d_s
- Pick the action with the highest score
- Confidence level
- Higher if the d_s are smaller
- Higher if there is less disagreement
- Higher if there are more examples in memory
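The prediction step above can be sketched in a few lines of Python. The neighbour scoring follows the slide's S = Σ 1/d_s rule; the confidence formula is only an illustrative guess that combines the slide's three factors (closeness, agreement, memory size), not the actual Maxims formula, and all names here are invented for the sketch.

```python
from collections import defaultdict

def predict_action(memory, new_situation, distance, k=5):
    # memory: list of (situation, action) pairs the agent has observed.
    # Find the k nearest stored situations under the given metric.
    neighbours = sorted(memory, key=lambda ex: distance(ex[0], new_situation))[:k]
    # Score every action by S = sum of 1/d over neighbours that led to it.
    scores = defaultdict(float)
    for situation, action in neighbours:
        d = distance(situation, new_situation)
        scores[action] += 1.0 / (d + 1e-9)  # guard against d == 0
    best = max(scores, key=scores.get)
    # Illustrative confidence: higher when neighbours agree and when
    # memory holds many examples (the exact formula is an assumption).
    agreement = scores[best] / sum(scores.values())
    confidence = agreement * min(1.0, len(memory) / 100.0)
    return best, confidence
```

A toy distance over feature tuples (e.g., Manhattan distance) is enough to exercise it; a real Maxims-style agent would weight message features instead.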
82. Using the Prediction
- Agent operates directly on the confidence level for every action that can be automated
[Diagram: confidence scale from 0 to 1, with a tell-me threshold below a do-it threshold]
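The two thresholds map a confidence value onto one of three behaviours. A minimal sketch, with threshold values that are illustrative defaults rather than numbers from the talk:

```python
def decide(confidence, do_it=0.8, tell_me=0.3):
    # Map a confidence level onto the slide's two thresholds.
    # The numeric defaults are assumptions, not from the talk.
    if confidence >= do_it:
        return "automate"   # above do-it: act autonomously
    if confidence >= tell_me:
        return "suggest"    # between the thresholds: make a suggestion
    return "nothing"        # below tell-me: stay quiet (or ask peers)
```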
83. Maxims Agent States
84. Meeting Scheduling Agent
85. Results and Confidence Level
86. Multi-Agent Collaboration
- If the agent's confidence < tell-me threshold
- Agent contacts other agents via a bulletin board
- Agent sends details of the situation
- Some agents respond with prediction P_i and confidence C_i
- Agent computes a score for every action: S = Σ_i C_i T_i,s
- Agent picks the action with the highest score
- Agent updates the trust levels T_i,s
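The trust-weighted vote above can be sketched as follows. The S = Σ C_i·T_i scoring follows the slide; the trust-update rule (nudging trust toward 1 or 0 by a learning rate) is an illustrative assumption, since the slide does not give the actual formula:

```python
from collections import defaultdict

def peer_vote(responses, trust):
    # responses: list of (agent_id, predicted_action, confidence C_i).
    # trust: dict agent_id -> trust level T_i kept for that peer.
    # Score each action by S = sum(C_i * T_i), as on the slide.
    scores = defaultdict(float)
    for agent_id, action, c_i in responses:
        scores[action] += c_i * trust.get(agent_id, 0.5)
    return max(scores, key=scores.get)

def update_trust(trust, responses, outcome, rate=0.1):
    # Illustrative rule: move trust toward 1 for peers whose advice
    # matched the eventual outcome, toward 0 otherwise.
    for agent_id, action, _ in responses:
        target = 1.0 if action == outcome else 0.0
        old = trust.get(agent_id, 0.5)
        trust[agent_id] = old + rate * (target - old)
    return trust
```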
87. Agent Interface
- Caricature faces convey the state of the agent: alert, thinking, working, suggestion, no clue, etc.
- Agent produces a report of tasks automated
- User can look at the agent's memory and instruct it to forget (whole classes of) examples
88. Explanation
- User can ask why
- Agent cites similar examples and relevant features
- Or asks the agents of other users
89. Discussion (cont.)
- Limitations
- Agent should have access to all features that may be relevant
- Applications need to be scriptable and recordable
- Application needs to be used frequently, and every user's behavior is different
90. Eager Assistants: Dimensions
- Does the agent take initiative to suggest a rule?
- How much generalization can the agent make?
- Do actions need to happen in sequence? Or can the agent detect similarities across situations happening at very different times?
- Does it remember the rule/macro in between sessions?
- Does the agent communicate about the rule it has detected?
- How much background knowledge is used?
91. Agents as Guides
92. Agents as Guides
- Guide 3.0 (Oren, Apple Computer)
- Letizia (Lieberman, MIT Media Lab)
- WebWatcher
- Syskill & Webert
93. Letizia: http://lieber.www.media.mit.edu/people/lieber/Lieberary/Letizia/Letizia.html
94. Letizia
- Agent for assisting web browsing
- Human browsing is usually depth-first. The agent can augment this by doing a breadth-first browse
- When you look at a page, the agent
- Retrieves nearby links
- Analyzes pages for prominent keywords
- Pre-fetches interesting docs to a 2nd window
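The breadth-first exploration can be sketched over a toy link graph. Everything here (the graph structure, the interest list, the crude keyword count standing in for Letizia's page analysis) is an assumption for illustration; the real system fetches and analyzes live web pages:

```python
from collections import deque

def letizia_prefetch(graph, start, interests, limit=3):
    # graph: url -> (page text, [linked urls]); a toy stand-in for
    # real web fetches.
    seen, queue, scored = {start}, deque(graph[start][1]), []
    while queue:                      # breadth-first over nearby links
        url = queue.popleft()
        if url in seen or url not in graph:
            continue
        seen.add(url)
        text, links = graph[url]
        # Crude keyword count standing in for keyword analysis.
        score = sum(1 for word in interests if word in text.lower())
        scored.append((score, url))
        queue.extend(links)
    scored.sort(reverse=True)
    return [url for score, url in scored[:limit] if score > 0]
```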
95. Feature-Based Filtering: Analyzing Documents for Relevant Keywords
- SMART algorithm (Salton, Cornell, '69)
- Remove stop words
- Stem other words
- Compute ratio: (frequency of word in this document) / (average frequency of word in all documents)
- Pick the n words with the highest ratios as the representation of the document
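The ratio heuristic above can be sketched directly. This is a simplified illustration, not the full SMART system: the stop-word list is a tiny sample, stemming is omitted, and the document is assumed to be one of the corpus documents (so every word has a nonzero average):

```python
from collections import Counter

STOP_WORDS = {"the", "a", "an", "of", "to", "and", "in", "is", "from"}

def word_freqs(text):
    # Stop-word removal as on the slide; stemming omitted for brevity.
    return Counter(w for w in text.lower().split() if w not in STOP_WORDS)

def keywords(doc, corpus, n=5):
    # Rank words by (frequency in this document) /
    # (average frequency of the word across all documents).
    here = word_freqs(doc)
    totals = Counter()
    for d in corpus:
        totals.update(word_freqs(d))
    ratios = {w: c / (totals[w] / len(corpus)) for w, c in here.items()}
    return [w for w, _ in sorted(ratios.items(), key=lambda kv: -kv[1])[:n]]
```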
96. Feature-Based Filtering: Updating the User Profile Based on One More Datapoint (Document)
- If the user likes the document, increase the weights of the document's relevant keywords in the profile
- If the user dislikes the document, decrease the weights of the document's relevant keywords in the profile
97. Feature-Based Filtering: Filtering Based on the User Profile
- Extract relevant keywords of the document
- Compare them with the keywords in the user's profile
- Compute the score of the document: S = Σ weight × ratio of occurrence, over all keywords in the profile
- If the score is above a threshold, present the document to the user
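The profile update and the scoring formula fit together in a few lines. The update step size below is an illustrative assumption; the score is the slide's S = Σ weight × ratio over shared keywords:

```python
def update_profile(profile, doc_keywords, liked, step=1.0):
    # doc_keywords: keyword -> ratio of occurrence for this document.
    # Raise or lower the profile weights of the document's keywords
    # according to the user's verdict (step size is an assumption).
    delta = step if liked else -step
    for word, ratio in doc_keywords.items():
        profile[word] = profile.get(word, 0.0) + delta * ratio
    return profile

def doc_score(profile, doc_keywords):
    # S = sum(weight * ratio of occurrence) over the keywords the
    # document shares with the profile.
    return sum(profile.get(w, 0.0) * r for w, r in doc_keywords.items())
```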
98. Agents as Memory Aids
99. Agents as Memory Aids
- Forget-Me-Not (Lamming, RXRC)
- Remembrance Agent (Rhodes & Starner, MIT Media Lab)
100. Remembrance Agents: http://www.media.mit.edu/rhodes/Rememberance-distribution/
101. Remembrance Agent
- Agent accesses indexed files/email; runs in an Emacs buffer, observing keystrokes
- Words are matched to index terms. The agent makes three kinds of suggestions:
- 1. Long-term: best match on the entire buffer
- 2. Near-term: best match on the most recent paragraphs
- 3. Immediate: best match on the last sentence
- Indexing of files/email and the current document is based on SMART
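The three suggestion scopes can be sketched with plain word overlap standing in for the SMART indexing the slide mentions (a deliberate simplification; the splitting heuristics below are assumptions too):

```python
import re

def word_set(text):
    return set(re.findall(r"[a-z]+", text.lower()))

def best_match(notes, text):
    # Pick the indexed note with the largest word overlap — a crude
    # stand-in for SMART-based retrieval.
    return max(notes, key=lambda note: len(word_set(note) & word_set(text)))

def suggestions(notes, buffer_text):
    # The slide's three scopes: whole buffer (long-term), last
    # paragraph (near-term), last sentence (immediate).
    paragraphs = buffer_text.split("\n\n")
    sentences = buffer_text.replace("\n", " ").split(". ")
    return {
        "long-term": best_match(notes, buffer_text),
        "near-term": best_match(notes, paragraphs[-1]),
        "immediate": best_match(notes, sentences[-1]),
    }
```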
102. Agents as Filters/Critics
103. Agents as Filters/Critics
- NewT (Sheth, MIT Media Lab)
- Fishwrap (Bender, MIT Media Lab)
- PointCast, Individual, Surfbot, NetAttache, WebCompass
- HOMR (Metral, MIT Media Lab)
- Webhound (Lashkari, MIT Media Lab)
- Firefly
- GroupLens (Resnick, MIT Sloan)
- Videos@bellcore
104. Agent NewT
105. Technology Underlying Critics: Feature-Based Filtering (FBF)
- Analyze items the user likes/dislikes; result: the most important features/values of the item
- e.g., use SMART to compute a vector of the most important keywords and weights
- Use ML techniques to generalize from examples; result: a more general description of the types of items liked by the user
- Present new items matching the user's tastes
- Update the user model continuously
106. Complementary Technology Underlying Critics: Automated Collaborative Filtering (ACF)
[Diagram: a user-item rating matrix with columns User-1 ... User-n and rows Item-1 ... Item-m; entries are numeric ratings v_ij, many of them missing]
107. ACF: Predicting a Rating
- Distance between users x and y:
- d(x, y) = (1/L) sqrt( Σ_i (v_ix - v_iy)^2 ), over the L items rated by both
- Predicted value for user x for item a:
- V_ax = (1/k) Σ_j W_j V_aj, over the k nearest users j
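The two formulas transcribe almost directly into code; ratings are modeled here as dicts from item to value, which is an assumption of the sketch:

```python
import math

def user_distance(x, y):
    # d(x, y) = (1/L) * sqrt(sum_i (v_ix - v_iy)^2) over the L items
    # rated by both users; x and y are dicts item -> rating.
    common = set(x) & set(y)
    if not common:
        return float("inf")
    return math.sqrt(sum((x[i] - y[i]) ** 2 for i in common)) / len(common)

def predict_rating(neighbour_ratings, weights):
    # V_ax = (1/k) * sum_j W_j * V_aj over the k neighbours j who
    # rated item a (weights W_j would come from neighbour similarity).
    k = len(neighbour_ratings)
    return sum(w * v for w, v in zip(weights, neighbour_ratings)) / k
```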
108. Typical Results of Agent Ringo
109. Collaborative vs. Feature-Based
- FBF techniques attempt analysis of object content; ACF does not
- ACF can be applied to areas like music where content is not easily analyzed
110. Comparison (cont.)
- FBF techniques raise "why" questions; ACF dodges this bullet
- ACF assumes that comparable people like comparable things for comparable reasons
- FBF techniques require potentially unbounded real-world knowledge; ACF uses the knowledge of other people
- The agent is actually built with no AI at all!
111. Some Other Neat Features
- User-added content
- Database extensible by users
- Artist dossiers
- Personal reviews (signed/unsigned)
112. Neat Features (cont.)
- Systems like this improve by themselves
- At start (HOMR data)
- 20 users, 575 items
- After 8 months
- >20,000 users, >15,000 artists/albums
- Qualitative and quantitative results support our contention of improved results
113. Neat Features (cont.)
- Ability to ask questions of the agent about oneself in relation to the community
- How mainstream am I?
- Would I like album X?
- Recommend an album similar to (i.e., liked by people who also like) album Y
- Highest-rated artists/albums?
114. Neat Features (cont.)
- Community building (Firefly)
- Messaging with neighbors
- Chatting
- Given a neighbor, what do we have in common?
- Leave a picture of yourself and a summary of your taste for others to see
115. Agents as Matchmakers
116. Agents as Matchmakers
- Yenta (Foner, MIT Media Lab)
- AT&T work
117. Yenta: http://foner.www.media.mit.edu/people/foner/yenta-brief.html
118. Yenta
- System for connecting/introducing people based on interests and activities
- Matchmaking (1-to-1)
- Coalition building (n-to-n)
- Technical goals
- Distributed: avoid single failure points/bottlenecks
- Use cryptography and careful design to protect user privacy
119. Yenta Applications
- Internet
- Intranet
- Professional network
120. Yenta: Methods
- Distributed, peer-to-peer communication; no central site
- Each user runs her own copy of the agent
- Agents distributed across the entire Internet
- Long-lived, local agents with significant permanent state
- Extensive use of cryptography
121. Yenta: Algorithm
- Scan personal info for keywords; correlate with others' info. Caveats:
- Don't leak contents anywhere
- Don't assume trustworthiness
- Requires many techniques. Example:
- Use referrals to hill-climb into appropriate clumps of other users
122. Agents for Buying/Selling
123. Agents for Buying/Selling
- BargainFinder (Krulwich, Andersen Consulting)
- Fido (Goodman, Continuum Software)
- ShopBot (Etzioni & Weld, UW)
- Kasbah (Chavez, MIT Media Lab)
- Bazaar (Guttman, MIT Media Lab)
124. Kasbah: http://agents.www.media.mit.edu/groups/agents/projects/
125. Kasbah
- Kasbah: an intelligent classified ad
- The ad is an agent. Given key data, it
- Searches out compatible agents in a centralized marketplace (sellers look for buyers/bidders and vice versa)
- Conducts business for users
- Completes the transaction, if authorized
126. Kasbah Example
- Sell: Macintosh IIci
- Deadline: October 10th, 1996
- Start price: 900.00; min. price: 700.00
- Description: 4-year-old Mac IIci, 14" screen, 16 MB memory, 120 MB hard disc
- Method: Dutch auction
- Location: local
- Level of autonomy: check before transaction
- Reporting method: once a day
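One plausible reading of the "Dutch auction" method in this example is a price that decays from the start price to the minimum price by the deadline. The linear schedule and the listing date used in the sketch below are assumptions; the slide specifies neither:

```python
from datetime import date

def asking_price(start_price, min_price, listed, deadline, today):
    # Linear decay from start_price down to min_price by the deadline.
    # Both the linear schedule and the listing date are assumptions.
    total = (deadline - listed).days
    elapsed = min(max((today - listed).days, 0), total)
    return start_price - (start_price - min_price) * elapsed / total
```

With the example's numbers (start 900.00, minimum 700.00, deadline October 10, 1996) and an assumed listing date one month earlier, the agent's asking price falls smoothly as the deadline approaches.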
127. Future and Implications
128. The Future of Agent Technology
- Markets for/of agents
- Ecologies of agents
129. Future Direction: Markets for Agents
- A necessary evolution if we want to keep any bandwidth to ourselves
- Agents paying other agents for services
- Deceitful agents, honest agents
- Specialized agents built by different vendors
- Agents advertising services for/to other agents
130. Future Direction: Ecologies of Agents
- Natural selection, evolution
- Tierra (Tom Ray, ATR)
- Parasites
- Special niches
- MetaCrawler (Etzioni, U. Washington)
- Symbiosis
131. Implications of Software Agents
- Economic
- Social
- Political
- Legal
132. Benefits to Consumers
- Personalized help with boring or time-consuming tasks, when and how you want it
- Finding people with similar interests, habits
- Ability to concentrate on key tasks/data
- Ability to deal with larger quantities of information
133. Benefits to Markets/Producers
- Continuous and detailed user feedback about software/products
- Market/produce products with a much narrower consumer base, because agents help with matchmaking
- Targeted marketing
- More active/engaged consumers
134. Speculations: Effects of Agents on Markets
- Middlemen no longer needed (i.e., distributors)
- More producers of goods/publishers
- Smaller niche markets are viable
- Easier to do comparison shopping
- Back to bartering??
135. Open Questions
- Privacy issues are key!
- Personification: good or bad?
- Should the agent augment bad habits or enforce better ones upon the user?
- Avoid running amok, local minima, etc.
- Effects on society, economy, etc.