Transcript and Presenter's Notes

Title: Reputation


1
  • Reputation
  • Prakash Kolan
  • Liqin Zhang
  • Venkatesh Kancherla

2
Introduction
  • Internet
  • No longer just a medium for non-commercial,
    informal information exchange between scientists
    and universities [13]
  • It has become a public network also used to
    support commercial transactions
  • Unclear what will happen when this extremely open
    network is used in the new context of commerce
  • Likely that the introduction of money will be the
    motivation for criminal activities previously
    considered uninteresting.

3
Introduction
  • Expansion of the Internet
  • People and services are called upon to interact
    with independent parties in application areas
    like e-commerce, knowledge sharing, game playing
    etc.
  • Anyone is free to add whatever components
    (hardware and software) he/she wishes
  • No central authority keeps track of who is using
    it and how
  • An electronic market with a centralized verifying
    authority that checks and certifies (human and
    electronic) participants would be a very non-open
    solution

4
Introduction
  • Parties are autonomous and potentially subject to
    different administrative and legal domains
  • Important that
  • Decentralized and open mechanisms exist that
    allow participants in a market to know something
    about other participants
  • Each participant should be able to identify
    trustworthy correspondents with whom to interact
    and untrustworthy correspondents with whom to
    avoid interaction, without having to rely on an
    external central authority

5
Need for Reputation
  • We need Reputation because
  • The Internet, as an open system, is more like a
    big city than a small village [13]
  • It is possible to act in almost any way without
    anyone being able to stop it
  • There is a large amount of fraud, con men doing
    business, and harmful content floating around
  • In a big city you can't know who you are dealing
    with when you meet for the first time

6
Need for Reputation
  • We need Reputation because
  • How is it possible to negotiate, cooperate and
    communicate online if there is no way of formally
    knowing the intentions of the other participants?

7
Defining Reputation
  • According to the Oxford dictionary, reputation is
    the 'common or general estimate of a person with
    respect to character or other qualities' [5]
  • Reputation refers to a perception that an agent
    has of another's intentions and norms [15]
  • An entity's reputation is some notion or report
    of its propensity to fulfill the trust placed in
    it (during a particular situation); its
    reputation is created through feedback from
    individuals who have previously interacted with
    the entity [16]

8
Defining Reputation
  • Reputation, a distributed knowledge phenomenon,
    lives in time. When people interact with one
    another over time, the history of their past
    interactions informs others about their abilities
    and dispositions [17]
  • Reputation systems are complex social systems
    that continually collect, aggregate, and
    distribute feedback about a person, an
    organization, a scholarly work, or some other
    entity, based on the assessments of others from
    their interactions or experiences with the
    entity [17]

9
Reputation Systems
  • Types of Reputation Systems - Dimensions of
    Classification
  • Amount of effort required of users to generate
    reputations.
  • Explicit action by the users, such as giving
    ratings and scores
  • Users' behavior, such as return rates
  • Ease of understanding by the user
  • Ease of implementation for developers
  • Degree of personal relevance of ratings to users
  • Personal relevance is the degree to which ratings
    take the user's likes and dislikes into account,
    or the extent to which recommendations are
    tailored to the individual user

10
Reputation Systems
  • Reputation systems can be grouped according to
    the nature of information they give about the
    object of interest and how the rating is
    generated.
  • Ranking Systems
  • Rating Systems
  • Collaborative Filtering Systems
  • Peer Based Reputation systems
  • Implicit Peer Based Reputation systems
  • Explicit Peer Based Reputation systems

11
Reputation Systems
  • Ranking Systems
  • Use quantifiable measures of users' behavior
    (implicit information) to generate a rating
  • Example ranking systems: high-score lists, length
    of membership, frequency of visits, replies, etc.
  • Easy to implement and interpret and are most
    suited for goal oriented activities
  • These reputation systems typically only provide
    information about what kind of pattern users
    follow, and reveal little or no personally
    relevant information.

12
Reputation Systems
  • Rating Systems
  • Use explicit evaluations given by users.
  • These evaluations are used to generate a weighted
    average for each object of interest.
  • Ratings are global, meaning that all users
    looking at the same object of interest will see
    the same score.
  • Although they provide more personally relevant
    information than ranking systems, they treat the
    population as a single homogeneous group.

13
Reputation Systems
  • Collaborative Rating Systems
  • These systems weight explicit or implicit
    evaluations by how much the rater and the user
    have concurred on other items
  • More sophisticated than rating systems, capturing
    significant amounts of personally relevant
    information (users' likes and dislikes)
  • Most expensive to build, populate, and maintain,
    as well as the most complicated for users to
    understand

14
Reputation Systems
  • Peer Based Reputation Systems
  • Based on recommendations from peers such as
    friends and family
  • Peer-based recommendations (or social-network-based
    reputation systems), whether given explicitly or
    inferred by observing peer behavior, are a
    significant influence on everyday decision-making
  • The social context provided by 'friend of a
    friend' recommendations should be especially
    important in socially oriented situations
  • The more social the situation, the more important
    peer based information is.

15
Reputation Systems
  • Implicit Peer Based Reputation Systems
  • These systems track the behavior of a user's
    friends, generating ratings from this data.
  • Such systems observe what a user's friends do
    (e.g., with whom they interact, what they look
    at, what they buy), and make recommendations
    accordingly.
  • The advantage of these systems is that they
    provide information that is very socially
    relevant and tailored to the individual.
  • Potential drawbacks are the implementation costs,
    privacy concerns, and that such ratings might be
    difficult for users to understand

16
Reputation Systems
  • Explicit Peer Based Reputation Systems
  • These systems rely on the evaluations given by a
    user's friends
  • Users select a group of friends or trusted
    raters, and the evaluations made by this group
    are used to generate composite ratings.
  • These systems weight or filter ratings based on
    whom the user knows and chooses to trust.
  • Ratings are highly relevant and tailored to the
    user.
  • Drawbacks include implementation costs and
    difficulty in understanding

17
Reputation Systems
18
Notions of Reputation
Reputation can be viewed as a global or
personalized quantity [15]
Reputation Typology
19
Notions of Reputation
  • Individual vs. Group reputation

An individual's reputation is a function of the
cumulative ratings given to that individual by others
A firm's (group) reputation can be modeled as the
average of its members' individual reputations
20
Notions of Reputation
  • Direct vs. Indirect (Individual) reputation

Direct: reputation estimates by an evaluator based
on direct experiences (seen or experienced by the
evaluating agent first hand)
Indirect: reputation estimates based on second-hand
evidence (such as word of mouth).
21
Notions of Reputation
  • Direct Reputation

Reputation based on actual encounter with the
reputed agent
Reputation based on the evaluator's rating of the
reputed agent
22
Notions of Reputation
  • Indirect Reputation

Reputation based on the prior belief regarding
the reputed agent
Reputation for the reputed agent based on the
group he belongs to
Reputation garnered from different evaluating
agents for the reputed agent
23
Requirements
24
Challenges in Eliciting feedback
  • The first is that people may not bother to
    provide feedback at all. For example, when a
    trade is completed successfully at eBay, there is
    little incentive to spend another few minutes
    filling out a form
  • People could be paid for providing feedback
  • Second, it is especially difficult to elicit
    negative feedback. For example, at eBay it is
    common practice to negotiate first before
    resorting to negative feedback. Therefore, only
    really bad performances are reported.

25
Challenges in Eliciting feedback
  • The third difficulty is assuring honest reports.
  • One party could blackmail another, that is,
    threaten to post negative feedback unrelated to
    actual performance.
  • At the other extreme, in order to accumulate
    positive feedback a group of people might
    collaborate and rate each other positively,
    artificially inflating their reputations.

26
Challenges in Distributing feedback
  • The first is name changes. At many sites, people
    choose a pseudonym when they register. If they
    register again, they can choose another
    pseudonym, effectively erasing prior feedback.
  • Two methods to avoid name changes
  • Game-theoretic analysis
  • Another alternative is to prevent name changes,
    either by using real names or by preventing
    people from acquiring multiple pseudonyms, a
    technique called 'once-in-a-lifetime' pseudonyms

27
Challenges in Distributing feedback
  • A second difficulty in distributing feedback
    stems from lack of portability between systems.
  • Amazon.com initially allowed users to import
    their ratings from eBay. eBay protested
    vigorously, claiming that their user ratings were
    proprietary. Ultimately Amazon discontinued its
    rating-import service.
  • Efforts are underway to construct a more
    universal framework. For example,
    virtualfeedback.com provides a rating service for
    users across different systems, but it has yet to
    gain wide public acceptance.

28
Challenges in Distributing feedback
  • Finally, there is also a potential difficulty in
    aggregating and displaying feedback so that it is
    truly useful in influencing future decisions
    about whom to trust.
  • eBay displays the net feedback (positives minus
    negatives). Other sites such as Amazon display an
    average.

29
Context and location awareness
  • Another important consideration is the
    context and location awareness, as many of the
    applications are sensitive to the context or the
    location of the transactions.
  • For example, the functionality of the transaction
    is an important context to be incorporated into
    the trust metric. Amazon.com may be trustworthy
    on selling books but not on providing medical
    devices.

30
Different methods
  • Basic models
  • Reputation models in peer-to-peer networks
  • Reputation models in social networks

31
Rating systems
  • Reputation is taken to be a function of the
    cumulative positive or negative ratings for a
    seller or buyer
  • Rating model
  • Uniform context environment: rating heard from
    one agent
  • Multiple context environment: ratings from
    multiple agents
  • Centrality-based rating: based on the in/out
    degree of a node
  • Preference-based rating: considers the preferences
    of each member when selecting reputable members
  • Bayesian estimate rating: computes reputation
    from recommendations in different contexts

32
Basic models
  • Computational model
  • Based on how many deeds are exchanged
  • Collaborative model
  • Based on recommendations from people with similar
    tastes

33
Computational model [2]
  • If Reputation increase, trust increase
  • If trust increase, reciprocity increase
  • If reciprocity increase, reputation increase

(Diagram: reciprocity, the mutual exchange of deeds,
together with trust, reputation, and net benefit,
forms a reinforcing cycle; a sketch of this loop
follows below)
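A minimal sketch of this feedback loop in Python, assuming simple additive
updates with an arbitrary step size; the function and variable names are
illustrative and are not taken from the model in [2].

```python
# Illustrative sketch of the reputation -> trust -> reciprocity cycle.
# The linear update rule and the 0.1 step size are assumptions made for
# illustration; they are not the actual equations of the model in [2].

def step(reputation, trust, reciprocity, rate=0.1):
    trust += rate * reputation        # higher reputation raises trust
    reciprocity += rate * trust       # higher trust raises reciprocity (mutual deeds)
    reputation += rate * reciprocity  # reciprocated deeds raise reputation
    return reputation, trust, reciprocity

r, t, c = 0.5, 0.1, 0.1
for _ in range(5):
    r, t, c = step(r, t, c)
    print(f"reputation={r:.2f} trust={t:.2f} reciprocity={c:.2f}")
```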
34
A Collaborative reputation mechanism
  • Collaborative filtering
  • To detect patterns among opinions of different
    users
  • Make recommendation based on rating of people
    with similar taste
  • Fake ratings
  • 1. Rating more than once
  • 2. Using fake identities
  • Solution: ratings from people with high reputation
    in the network are weighted more (sketched below)

35
Reputation models in peer-to-peer networks [11]
  • P2P network
  • peers cooperate to perform a critical function
    in a decentralized manner
  • Peers are both consumers and providers of
    resources
  • Peers can access each other directly
  • Allow peers to represent and update their trust
    in other peers in open networks for sharing files

36
Models in peer-to-peer networks
  • Based on recommendation from other peers
  • Combine with Bayesian network
  • Based on global trust value

37
Method 1: Reputation based on recommendations [11]

38
  • Recommendations from different kinds of peers
    carry different weights
  • The weights of references are updated over time
  • The final reputation and trust are computed using
    a Bayesian network
  • This supports reputation on different aspects of
    a peer's capability

39
Method 2: Based on global trust values (the
EigenTrust algorithm [12])
  • Decreases the number of downloads of inauthentic
    files in a peer-to-peer file-sharing network by
    assigning each peer a unique global trust value
  • A distributed and secure method to compute global
    trust values, based on power iteration
  • Peers use these global trust values to choose the
    peers from whom they download and with whom they
    share files

40
Reputation in Peer-to-Peer Networks
  • Limited Reputation Sharing in P2P Systems [14]
  • Techniques for collecting reputation information
    that use only limited or no information sharing
    between nodes
  • Effect of limited reputation information sharing
    in a peer-to-peer system on
  • Efficiency
  • Load distribution and balancing
  • Message traffic

41
Reputation models in Social networks [3][10]
  • Social network
  • A representation of the relationships existing
    within a community
  • Each node provides both services and referrals
    for services to other nodes

42
Importance of the nodes
  • Proposal 1: all nodes are equally important
  • Proposal 2: some nodes are more important than
    others
  • Referrals from pivot nodes A, B, C, D, E are more
    important than referrals from nodes that are only
    in a local network
  • You may trust a referral from a friend more than
    one from a stranger
  • You may also need to consider your own preferences
    regarding the referral

43
Models in social network
  • Reputation-extracting model
  • Ranks the reputation of each node in the network
    based on its location
  • Social ReGreT model
  • Based on information collected from three
    dimensions

44
Reputation models in Social networks
  • Extracting Reputation in Multi-agent Systems [8]
  • Feedback after interactions between agents
  • Also considers the position of an agent in the
    social network
  • Node ranking: creating a ranking of the reputation
    ratings of community members
  • Based on the in-degree and out-degree of a node
    (like PageRank; see the sketch below)
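A minimal PageRank-style sketch of such a node ranking over a referral graph;
the damping factor and iteration count are the usual PageRank defaults, and
the code illustrates the general idea rather than the exact NodeRanking
algorithm of [8].

```python
# PageRank-style ranking over a directed referral graph: a node referred to by
# well-ranked nodes gets a high rank itself. Rank mass of dangling nodes is
# ignored for simplicity. Generic sketch, not the NodeRanking algorithm of [8].

def node_rank(edges, nodes, damping=0.85, iters=50):
    """edges: list of (referrer, referred) pairs; returns dict node -> rank."""
    out_deg = {n: 0 for n in nodes}
    for src, _ in edges:
        out_deg[src] += 1
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {n: (1.0 - damping) / len(nodes) for n in nodes}
        for src, dst in edges:
            new[dst] += damping * rank[src] / out_deg[src]
        rank = new
    return rank

print(node_rank([("a", "b"), ("c", "b"), ("b", "a")], ["a", "b", "c"]))
```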

45
Reputation models in Social Networks
  • Social ReGreT [5]
  • Analyzes social relations
  • To identify valuable features in e-commerce
  • Aims to address the problem of referrers' false,
    biased, or incomplete information
  • Based on three dimensions of reputation
  • If only direct interaction information is used:
    the individual dimension (single)
  • If information from others is also used: the
    social dimension (multiple)
  • Three dimensions
  • Witness reputation: from pivot agents
  • Neighborhood reputation
  • System reputation: a default reputation value
    based on the role played by the target agent

46
Metrics
  • The algorithm used to calculate an agent's
    reputation is the metric of the reputation
    system.
  • The strength of a metric is measured by its
    resistance against different threat models, i.e.,
    different types of hostile agents.

47
Formal Model
  • The model provides an abstract view of a
    reputation system that allows the comparison of
    the core metrics of different reputation systems.
  • According to the definition of reputation, a
    transaction between two peers is the basis of a
    rating. An agent cannot rate another agent without
    having had a transaction with it.

48
Formal Model
  • A is the set of agents.
  • C is the context of a transaction.
  • C = T × V,
    where T = {0, 1, ..., t_now} is the set of
    times and V is the set of transaction values
  • E is the set of all encounters between different
    agents that have happened until now.

49
Formal Model
  • An encounter contains information about the
    participating peers and the context
  • A rating is a mapping from a target agent a ∈ A
    and an encounter e ∈ E to the set of all possible
    ratings Q
  • In the simplest case Q is a small set of possible
    values, e.g., Q_eBay = {-1, 0, 1}

50
Formal Model
  • E_a represents the subset of all encounters in
    which a has participated and received a rating
  • The set of all encounters between a and b with a
    valid rating for a, and the subset of the most
    recent encounters between a and other agents, are
    defined analogously.

51
Formal Model
  • The reputation of an agent a ∈ A is defined by
    the function r: A × T → R.
  • In short, r(a) = r(a, t_now)
  • A complete metric M is defined as
    M = (p, r, Q, R, r_0) (see the sketch below)
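A minimal data-structure sketch of these sets in Python, assuming an
eBay-style rating set Q = {-1, 0, 1}; the class and field names are
illustrative, not notation from the paper.

```python
# Sketch of the formal model: agents A, encounters E with context C = T x V,
# and a rating function p: A x E -> Q. Class and field names are assumptions.

from dataclasses import dataclass

Q = {-1, 0, 1}  # possible rating values (eBay-style)

@dataclass(frozen=True)
class Encounter:
    buyer: str     # participating agents
    seller: str
    time: int      # t in T = {0, 1, ..., t_now}
    value: float   # transaction value v in V

ratings = {}  # maps (target agent a, encounter e) -> rating in Q, i.e. p: A x E -> Q

e = Encounter("alice", "bob", time=3, value=25.0)
ratings[("bob", e)] = 1  # alice rates bob +1 for encounter e
assert ratings[("bob", e)] in Q
```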

52
Metrics in Reputation Systems
  • Accumulative Systems
  • Average Systems
  • Blurred Systems
  • OnlyLast Systems
  • EigenTrust System

53
Accumulative Systems
  • If a system accumulates all given ratings to get
    the overall reputation of an agent we call it an
    accumulative system.
  • Example: the eBay system
  • Possible ratings are p: A × E → {-1, 0, 1}.
  • The basic idea of these metrics is that the more
    often an agent behaves well, the more confident
    others can be that this agent is an honest one.

54
Accumulative Systems
  • The reputation of an agent a belongs to A
  • computes with

  • (ebay)
  • No transaction values and multiple ratings
  • The reputation in value system is given by
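A minimal sketch of both variants, assuming the value system simply multiplies
each rating by its transaction value; the exact formulas from the paper are
not reproduced on the slide.

```python
# Accumulative (eBay-style) metric: reputation is the plain sum of all
# ratings in {-1, 0, 1} the agent has received. The value variant below,
# which weights each rating by the transaction value, is an assumed form.

def accumulative(ratings):
    """ratings: list of ints in {-1, 0, 1} received by the agent."""
    return sum(ratings)

def accumulative_value(ratings_with_values):
    """ratings_with_values: list of (rating, transaction_value) pairs."""
    return sum(r * v for r, v in ratings_with_values)

print(accumulative([1, 1, -1, 0]))                   # 1
print(accumulative_value([(1, 100.0), (-1, 5.0)]))   # 95.0
```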

55
Average Systems
  • This kind of reputation system computes the
    reputation for an agent as the average of all
    ratings the agent has received
  • The idea of this metric is that agents behave
    the same way for most of their lifetime. Unusual
    ratings have only little weight in the
    computation of the final reputation
  • The simulated systems use
  • p: A × E → {-1, 0, 1}

56
Average system
  • The reputation of an agent a ∈ A in the Average
    system, without considering multiple ratings and
    transaction values, is the arithmetic mean of all
    ratings a has received
  • In the Average-value system, the ratings are
    additionally weighted by transaction value
    (sketched below)
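A minimal sketch of the Average metric and an assumed value-weighted variant;
the precise formulas shown on the original slides are not reproduced here.

```python
# Average metric: reputation is the mean of all received ratings, so a single
# unusual rating has little effect. The value variant (an assumed form) weights
# each rating by its transaction value before averaging.

def average(ratings):
    return sum(ratings) / len(ratings) if ratings else 0.0

def average_value(ratings_with_values):
    total_value = sum(v for _, v in ratings_with_values)
    if total_value == 0:
        return 0.0
    return sum(r * v for r, v in ratings_with_values) / total_value

print(average([1, 1, 1, -1]))                   # 0.5
print(average_value([(1, 100.0), (-1, 5.0)]))   # about 0.90
```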

57
Blurred Systems
  • These reputation systems compute a weighted sum
    of all ratings.
  • The older a rating is, the less it influences the
    current reputation
  • Possible ratings are p: A × E → {-1, 0, 1}

58
Blurred System
  • The reputation of an agent a ∈ A, without
    considering transaction values, is the sum of all
    ratings weighted by a factor that decreases with
    the age of the rating
  • With consideration of transaction values, each
    term is additionally weighted by the transaction
    value (see the sketch below)
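A minimal sketch of the Blurred metric, assuming a linear decay of the weight
with the age of a rating and an arbitrary time window; the concrete weighting
function used in the paper is not given on the slide.

```python
# Blurred metric: a weighted sum of all ratings where the weight decays with
# the age of the rating. The linear decay and the window parameter below are
# assumptions for illustration only.

def blurred(ratings_with_times, t_now, window=20):
    """ratings_with_times: list of (rating, time) pairs."""
    total = 0.0
    for rating, t in ratings_with_times:
        age = t_now - t
        weight = max(0.0, 1.0 - age / window)  # newer ratings get weight near 1
        total += weight * rating
    return total

print(blurred([(1, 1), (1, 15), (-1, 20)], t_now=20))
```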

59
OnlyLast System
  • This system considers the most recent rating of
    an agent
  • Ratings are p: A × E → {-1, 0, 1}
  • Here we expect an agent to behave like he did
    last time, no matter what he did before.

60
OnlyLast System
  • Without considering transaction values, the
    reputation of an agent a ∈ A in the OnlyLast
    system is simply the rating of a's most recent
    encounter
  • With consideration of the transaction value, the
    OnlyLastValue system weights this most recent
    rating by the transaction value (sketched below)
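A minimal sketch of both variants, assuming the OnlyLastValue system
multiplies the most recent rating by its transaction value; names and data
layout are illustrative.

```python
# OnlyLast metric: only the most recent rating counts. The OnlyLastValue
# variant (assumed form) weights that rating by its transaction value.

def only_last(ratings_with_times):
    """ratings_with_times: list of (rating, time) pairs."""
    if not ratings_with_times:
        return 0
    rating, _ = max(ratings_with_times, key=lambda rt: rt[1])
    return rating

def only_last_value(ratings_with_times_values):
    """list of (rating, time, transaction_value) triples."""
    if not ratings_with_times_values:
        return 0.0
    rating, _, value = max(ratings_with_times_values, key=lambda rtv: rtv[1])
    return rating * value

print(only_last([(1, 2), (-1, 7)]))                     # -1
print(only_last_value([(1, 2, 50.0), (-1, 7, 10.0)]))   # -10.0
```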

61
EigenTrust System
  • In this metric the computed reputation depends on
    the ratings, the reputation of the raters, the
    transaction context (e.g. transaction value), and
    some community properties
  • Ratings are p: A × E → {-1, 1}
  • First we have to build a reputation matrix M,
    where m_ij contains the standardized sum of
    ratings from agent i for agent j (see the sketch
    below)
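A minimal power-iteration sketch of the EigenTrust idea [12], assuming
negative rating sums are clipped to zero, a uniform initial trust vector, and
a fixed iteration count; the full algorithm (pre-trusted peers, distributed
computation) is not reproduced here.

```python
# EigenTrust-style sketch: row-normalize the local trust values and compute
# the principal eigenvector by power iteration, t_{k+1} = C^T t_k.
# Clipping negatives, the uniform start vector, and the fixed number of
# iterations are simplifying assumptions.

import numpy as np

def eigentrust(local_ratings, iters=50):
    """local_ratings[i][j]: summed ratings agent i gave agent j (may be negative)."""
    s = np.maximum(np.asarray(local_ratings, dtype=float), 0.0)  # drop negative sums
    row_sums = s.sum(axis=1, keepdims=True)
    n = s.shape[0]
    # Row-normalize; agents who rated nobody trust everyone uniformly.
    c = np.where(row_sums > 0, s / np.where(row_sums == 0, 1.0, row_sums), 1.0 / n)
    t = np.full(n, 1.0 / n)  # uniform initial (pre-)trust
    for _ in range(iters):
        t = c.T @ t
    return t

print(eigentrust([[0, 3, 1], [2, 0, 0], [4, 1, 0]]))
```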

62
EigenTrust System
63
(No Transcript)
64
New Metric
  • We can combine different metrics to compensate
    for the individual weaknesses.
  • Both Average and OnlyLast systems can be
    understood as summing up the previous ratings of
    an agent using different weights.
  • The Blurred system is somewhere in between, but
    could not handle disturbing agents.

65
New Metric
  • Thus we can interpolate between the Average and
    the OnlyLast system by weighting the ratings not
    linearly, as in the Blurred system, but
    quadratically, so that recent ratings have more
    influence on the reputation.
  • The resulting metric M = (p, r) uses
  • p: A × E → {-1, 0, 1}
    (a sketch follows below)
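A minimal sketch of the proposed BlurredSquared weighting, reusing the assumed
linear-decay window from the Blurred sketch above but squaring it so that
recent ratings dominate; the exact formula from the paper is not reproduced on
the slide.

```python
# BlurredSquared metric: like the Blurred system, but the age weight is
# squared, so recent ratings have much more influence. The concrete weight
# formula and window parameter are assumptions for illustration.

def blurred_squared(ratings_with_times, t_now, window=20):
    """ratings_with_times: list of (rating, time) pairs."""
    total = 0.0
    for rating, t in ratings_with_times:
        age = t_now - t
        weight = max(0.0, 1.0 - age / window) ** 2  # quadratic decay with age
        total += weight * rating
    return total

print(blurred_squared([(1, 1), (1, 15), (-1, 20)], t_now=20))
```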

66
New Metric
  • We call this metric a BlurredSquared system
  • This system is invulnerable to disturbing, evil,
    and selfish agents. It resists malicious agents up
    to an amount of 60%.

67
Conclusions
  • Reputation is very important in electronic
    communities
  • Reputation has different notions, such as the
    general estimate of a person, or the perception
    that an agent has of another's intentions and
    norms
  • Reputation systems can be grouped according to
    the nature of the information they give about the
    object of interest and how the rating is
    generated; four types of reputation systems were
    discussed

68
Conclusions
  • Reputation can be classified into individual and
    group reputation; individual reputation can be
    further classified
  • The challenges for reputation include sparse
    feedback, negative feedback, dishonest feedback
    (e.g., name changes), and context and location
    awareness
  • An agent can be honest, malicious, evil, or
    selfish
  • Seven metrics with benchmarks were discussed

69
Conclusions Comparison methods
  • Basic models
  • Computational model
  • Based on how many deeds are exchanged
  • Can be used in P2P and social networks
  • Doesn't consider references/recommendations or
    the weight of deeds
  • Collaborative model
  • Based on recommendations from people with similar
    tastes
  • Recommendations are weighted based on the
    referrer's reputation, which avoids fake
    recommendations
  • Doesn't consider the location of the referrer

70
Conclusions Comparison methods
  • In P2P networks,
  • Bayesian network model
  • Based on information collected from friends
  • Peers share recommendations
  • It allows developing different trust values for
    different aspects of a peer's capability
  • Overall trust needs to combine all aspects
  • Doesn't consider location

71
Conclusions Comparison methods
  • In social networks
  • Can consider the position of an agent; pivot
    agents are more important than other agents
  • NodeRanking
  • Ranks reputation in the social network based on
    position
  • Used to find the pivots
  • Social ReGreT model
  • Considers three dimensions
  • Witness: pivot nodes
  • Neighborhood: recommendations
  • System: default value

72
Conclusions
  • Reputation computation needs to consider
    recommendations from friends, the position of the
    referrer, and the weight of the referrer
  • 'Friends' may refer to one's neighborhood, the
    group of people with similar tastes, or people
    one trusts
  • Weighting referrers can help avoid fake
    recommendations
  • No model considers all of these factors

73
References
  • [1] Computational Models of Trust and Reputation:
    Agents, Evolutionary Games, and Social Networks,
    www.cdm.csail.mit.edu/ftp/lmui/computational%20models%20of%20trust%20and%20reputation.pdf
  • [2] A Computational Model of Trust and Reputation,
    http://csdl2.computer.org/comp/proceedings/hicss/2002/1435/07/14350188.pdf
  • [3] Trust and Reputation Management in a
    Small-World Network, Proceedings of the Fourth
    International Conference on MultiAgent Systems
    (ICMAS-2000), 2000
  • [4] How Social Structure Improves Distributed
    Reputation Systems,
    http://www.ipd.uka.de/nimis/publications/ap2pc04.pdf
  • [5] Social ReGreT, a reputation model based on
    social relations, ACM SIGecom Exchanges, Volume 3,
    Issue 1, Winter 2002, Pages 44-56
  • [6] Detecting deception in reputation management,
    Proceedings of the Second International Joint
    Conference on Autonomous Agents and Multiagent
    Systems, 2003

74
References
  • [7] Finding others online: reputation systems for
    social online spaces, Proceedings of the SIGCHI
    Conference on Human Factors in Computing Systems:
    Changing Our World, Changing Ourselves, 2002,
    Pages 447-454
  • [8] J. Pujol, R. Sanguesa, and J. Delgado,
    Extracting reputation in multi-agent systems by
    means of social network topology, Proceedings of
    the First International Joint Conference on
    Autonomous Agents and Multiagent Systems, pages
    467-474, 2002
  • [9] J. Sabater and C. Sierra, Reputation and
    social network analysis in multi-agent systems,
    Proceedings of the First International Joint
    Conference on Autonomous Agents and Multiagent
    Systems, pages 475-482, 2002
  • [10] Trust evaluation through relationship
    analysis, Proceedings of the Fourth International
    Joint Conference on Autonomous Agents and
    Multiagent Systems, pages 1005-1011, 2005
  • [11] Trust and Reputation model in peer-to-peer
    networks,
    www.cs.usask.ca/grads/yaw181/publications/120_wang_y.pdf

75
References
  • [12] S. D. Kamvar, M. T. Schlosser, and H.
    Garcia-Molina, The EigenTrust algorithm for
    reputation management in P2P networks, Proceedings
    of the Twelfth International World Wide Web
    Conference, 2003
  • [13] Lars Rasmusson and Sverker Jansson, Simulated
    social control for secure Internet commerce, New
    Security Paradigms '96, September 1996
  • [14] S. Marti and H. Garcia-Molina, Limited
    Reputation Sharing in P2P Systems, ACM Conference
    on Electronic Commerce (EC'04)
  • [15] Lik Mui, Computational Models of Trust and
    Reputation: Agents, Evolutionary Games, and Social
    Networks, Ph.D. dissertation, Massachusetts
    Institute of Technology
  • [16] J. Goecks and E. D. Mynatt (2002), Enabling
    privacy management in ubiquitous computing
    environments through trust and reputation systems,
    Workshop on Privacy in Digital Environments:
    Empowering Users, Proceedings of CSCW 2002

76
References
  • [17] G. L. Rein, Reputation Information Systems:
    A Reference Model, Proceedings of the 38th Hawaii
    International Conference on System Sciences, 2005