A trust metric assigns an exact amount of trust to each identity. Thus it usually measures how much the community as a whole trusts that identity. (In contrast, a WebOfTrust system determines for each user the amount of trust they can place in someone.)
What is trust? When talking about trust metrics or webs, "trust" might be something as basic as knowing who an individual is, or it might include other factors, like ability in some technical field. Within any trust-based authentication system, "trust" has a specific technical meaning, which may differ from its everyday usage.
One important property of many trust metrics is a degree of transitivity. That is, if you trust a person, then you implicitly have some level of trust in anyone whom that person trusts. (The level of trust may fall off with increased distance.) Therefore, whatever the specific notion of trust is in such a metric, it should include a confidence in the individual's ability to rank other individuals.
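One common (though by no means the only) way to model trust falling off with distance is to multiply edge weights in [0, 1] along the certification chain. The numbers below are invented for illustration; the page does not prescribe this model.

```python
# Minimal sketch (hypothetical model): transitive trust that decays with
# path length by multiplying per-edge trust values in [0, 1].
def path_trust(edge_trusts):
    """Trust in the endpoint of a chain of certifications."""
    t = 1.0
    for e in edge_trusts:
        t *= e
    return t

# I trust Alice 0.9; Alice trusts Bob 0.8; Bob trusts Carol 0.5.
print(round(path_trust([0.9]), 3))            # 0.9  (direct)
print(round(path_trust([0.9, 0.8]), 3))       # 0.72 (two hops)
print(round(path_trust([0.9, 0.8, 0.5]), 3))  # 0.36 (three hops)
```

Note that under this model the chain is only as strong as its product, so a single weak rater anywhere on the path sharply limits the trust that can flow through it.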
What is a metric? The term metric indicates a system of measurement that operates within a WikiPedia:Metric_space, though unfortunately not all quantitative comparison methods meet all metric-space requirements.
Quantification (reducing a measurement to a number) is the easiest property of a metric space to notice and to implement, but a measurement lacking the other metric-space properties will violate people's expectations, which are grounded in intuition and experience with everyday metrics such as spatial distance.
Formally, a metric measures a kind of distance between two points. A short distance would correspond to high trust between two people, so that their points of view (their distances to any other point) should agree; but unlike distance, trust need not be symmetric. In this sense of a two-argument function, a trust metric represents a subjective measurement of trust, as in the WebOfTrust concept. In the PublicKeyInfrastructure world, by contrast, the term denotes a more absolute measure, obtained by fixing some group of reference.
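The asymmetry point can be made concrete: if we define a "trust distance" between two people, that function typically fails the symmetry axiom of a metric space. The table and names below are invented for illustration.

```python
# Sketch: a toy "trust distance" table, d[(a, b)] = 1 - (trust a places in b).
# It fails the symmetry axiom d(a, b) == d(b, a), so it is not a true metric.
d = {
    ("alice", "alice"): 0.0, ("bob", "bob"): 0.0,
    ("alice", "bob"): 0.2,   # Alice trusts Bob a great deal
    ("bob", "alice"): 0.9,   # Bob barely trusts Alice
}

def is_symmetric(d):
    """Check the symmetry axiom on every pair present in both directions."""
    return all(d[(a, b)] == d[(b, a)] for (a, b) in d if (b, a) in d)

print(is_symmetric(d))  # False: "trust distance" violates symmetry
```

This is exactly why calling such systems "metrics" is a loose usage: quantification is satisfied, but the metric-space axioms generally are not.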
In brief: AdvoGato is a resource for free software developers. Its notion of trust is based on a user's experience in and contributions to the free software community. Each user can certify any other user at one of three trust levels, thus creating a weighted, directed graph of certifications. Trust flows through this graph from user to user, starting at a "seed" of well-known individuals likely to be trusted by most of the community.
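AdvoGato's real metric is based on network flow with per-node capacities; the following is only a toy breadth-first sketch of the seed idea, in which trust starts at the seed and halves at each hop, loosely mimicking capacities that shrink with distance. All names and the threshold are invented.

```python
from collections import deque

# Toy sketch of seed-based trust flow (NOT Advogato's actual max-flow
# metric): trust starts at the seed and halves at each certification hop;
# a user is accepted if the trust reaching them stays above a threshold.
def accepted(certs, seed, threshold=0.2):
    """certs: dict mapping user -> list of users they certify."""
    trust = {s: 1.0 for s in seed}
    queue = deque(seed)
    while queue:
        u = queue.popleft()
        for v in certs.get(u, []):
            t = trust[u] / 2.0          # trust decays with distance
            if t > trust.get(v, 0.0):   # keep the best trust seen so far
                trust[v] = t
                queue.append(v)
    return {u for u, t in trust.items() if t >= threshold}

certs = {"seed": ["alice"], "alice": ["bob"],
         "bob": ["carol"], "carol": ["dave"]}
print(sorted(accepted(certs, ["seed"])))  # ['alice', 'bob', 'seed']
```

Carol and Dave sit too many hops from the seed (trust 0.125 and 0.0625, below the 0.2 threshold), so they are not accepted until someone closer to the seed certifies them.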
See also the SlashDot "karma" system.
One important property of most trust metrics is a degree of transitivity.
True, but friendship is not transitive. There's an important distinction that isn't necessarily understood by people implementing trust metrics.
Certainly. But the purpose of a trust metric is not to measure friendship. Its purpose is to create a secure authentication system based on trust between individuals. Metrics based on a graph-flow model allow the work of certifying users to be distributed through the community. Just remember that such systems' notions of "trust" include confidence in users' ability to rate others. Distribute your own trust accordingly.
In some situations trust isn't transitive at all. For example, suppose I authenticate as valid any signature which comes within reach, even if faked. Then you may be confident that I am me but not that other people are who I say they are. The two notions are separate and it is surely a mistake to conflate them.
The same applies at the second derivative. Someone may be good at authenticating other people, but poor at judging whether other people in turn are good at authenticating. -- DaveHarris
It is possible to separate trust in identity and competence. I believe that the WebOfTrust system used in PrettyGoodPrivacy has a confidence level that allows one to trust a key without trusting other keys it has signed.
Also, trust isn't one dimensional. I trust my mechanic to fix my car, but not to fix my teeth. I trust individuals not to steal my car, but not society as a whole (innocent until proven guilty). You know, trust in God, but lock your car door. -- SunirShah
Therefore, it's important to remember that in this context, "trust" becomes a technical term with a specific meaning. For each TrustMetric, one must ask: What does "trust" mean within this system? It may differ from the meaning of "trust" in everyday English, just as the meaning of "confidence" is different for a statistician than for the same statistician when drinking at a bar with friends.
The original meaning of "trust metric" is from the PublicKeyInfrastructure world. The literature is littered with papers proposing one trust metric or another. None of them work very well. In fact, building a PublicKeyInfrastructure with any kind of robustness or trustworthiness is a very hard problem, perhaps even destined to be a FailedDream.
Two other concepts related to the classical sense of "trust metric" may be relevant: MattBlaze?'s PolicyMaker? system for evaluating trust assertions in a distributed setting, and the WebOfTrust from PrettyGoodPrivacy. The latter is not set up for any kind of automatic evaluation, incidentally, which confuses people, because almost everything else is.
When I created AdvoGato (in part to test out the spiffy new trust metrics I had designed for my original goal of building a better PKI), I adopted the phrase "trust metric" without modification, even though what it actually measures is quite different. In the case of AdvoGato, it is used simply to determine membership in a community, in this case the community of FreeSoftware developers.
The fact that I'm seeing it applied to other areas such as SlashDot's moderation is rather interesting from a memetics point of view :)
See Wiki:TrustMetric for more discussion.
There are a few types of attacks on a TrustMetric:
In one, the nodes of the graph--the people--act in bad faith. Indeed, a TrustMetric is designed to handle this attack explicitly. Any proof of correctness must deal with the three classes of nodes: good, confused, and bad.
The root set may be either good or confused, but not bad. Essentially, one must show that nodes that are bad are not trusted. This must also include a notion of time, as good users may become bad users over time. That is why trust should be a DynamicValue.
Another attack comes from trust links made in bad faith (or lack of faith). We don't want users who are bad to somehow create trust by creating a dense web of trust links. No matter how large a group of attackers, and no matter how much they trust each other, they should only gain trust from good nodes.
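This bottleneck property is the heart of flow-based metrics like AdvoGato's: however densely the attackers certify one another, the trust flowing into their group is bounded by the capacity of the edges from good nodes into the group. The sketch below demonstrates this with a tiny Edmonds-Karp max flow on unit-capacity certification edges; the graph and node names are invented.

```python
from collections import deque

# Sketch of the bottleneck argument: the trust an attacker group can
# capture is bounded by the cut of good->attacker edges, no matter how
# densely the attackers link each other. Unit capacity per certification.
def max_flow(edges, source, sink):
    cap, nodes = {}, set()
    for u, v in edges:
        cap[(u, v)] = cap.get((u, v), 0) + 1
        cap.setdefault((v, u), 0)       # residual (reverse) edge
        nodes |= {u, v}
    total = 0
    while True:
        # BFS for an augmenting path with residual capacity
        parent, q = {source: None}, deque([source])
        while q and sink not in parent:
            u = q.popleft()
            for v in nodes:
                if cap.get((u, v), 0) > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if sink not in parent:
            return total
        # push the bottleneck amount along the path found
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        push = min(cap[e] for e in path)
        for u, v in path:
            cap[(u, v)] -= push
            cap[(v, u)] += push
        total += push

good = [("seed", "a"), ("a", "x")]      # one certification into the clique
clique = [("x", "y"), ("y", "x"), ("x", "z"),
          ("z", "x"), ("y", "z"), ("z", "y")]
sinks = [(n, "SINK") for n in ("x", "y", "z")]
print(max_flow(good + sinks, "seed", "SINK"))           # 1
print(max_flow(good + clique + sinks, "seed", "SINK"))  # still 1
```

Adding six internal attacker edges changes nothing: the flow into the attacker group remains limited to the single edge a good node granted them.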
Consider also the in-between case of the repeater node. Whatever rating you give the repeater node, it gives you in return. The argument may be, "Of course I'm perfectly trustable. If you can see that, I trust you. If you can't, I don't trust you." Either way, if trusted users lend trust to the repeater, the repeater can relay that trust to bad nodes. Thus, a repeater is confused.
Other interesting cases would be the random rater, the complete-mistrust rater, and the complete-trust rater.
Good nodes in the trust metric may be compromised by attackers. For a real example, a few trusted users on AdvoGato had weak passwords, which were compromised by spammers. The spammers then used those trusted accounts to assign trust to a bad local web they had created with their spam accounts.
If we assume that a node's trustability depends in part on its ability to accurately assess the trust of others, and we have negative trust, we can flow that negative trust backward along the links, from bad nodes up toward the confused nodes, using the same kind of algorithm used to flow positive trust forward from the confused nodes to the bad nodes. In this way, negative trust can be considered a kind of "antitrust," like antimatter: if many people jointly consider someone an attacker, we downrate the quality of trust of any node that trusts that attacker. However, if we keep propagating the antitrust, then since a confused node must necessarily be linked to the root set, the root set may bizarrely end up with a trust rating less than 1.0.
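One way to keep the backward flow from reaching the root at full strength is to damp the penalty at each hop and floor the result. The sketch below is entirely hypothetical; the penalty, damping factor, and floor are invented parameters, not part of any published metric.

```python
# Hypothetical sketch of "antitrust" flowing backward: certifiers of a
# flagged bad node lose some trust, and the penalty is damped at each hop
# so it fades before it reaches the seed/root. All numbers are invented.
def backpropagate_antitrust(certs, trust, bad,
                            penalty=0.5, damping=0.5, floor=0.1):
    reverse = {}                         # who certified whom, inverted
    for u, vs in certs.items():
        for v in vs:
            reverse.setdefault(v, []).append(u)
    frontier = {b: penalty for b in bad}
    while frontier:
        nxt = {}
        for node, p in frontier.items():
            for certifier in reverse.get(node, []):
                trust[certifier] = max(floor, trust[certifier] - p)
                if p * damping > 0.01:   # cutoff guarantees termination
                    nxt[certifier] = max(nxt.get(certifier, 0), p * damping)
        frontier = nxt
    return trust

certs = {"root": ["a"], "a": ["b"], "b": ["spammer"]}
trust = {"root": 1.0, "a": 0.8, "b": 0.6, "spammer": 0.4}
out = backpropagate_antitrust(certs, trust, ["spammer"])
print({u: round(t, 3) for u, t in out.items()})
```

Here the spammer's direct certifier takes the full floored penalty, while the root loses only an eighth of a point: damping keeps the root set near 1.0, addressing the oddity noted above, though it still does not stay at exactly 1.0.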
If the root entity...
1. Trusts other people to assess the trustworthiness of third parties.
2. Trusts other people to recognize that a third party is an attacker (and hence untrustworthy).
...and then a third party, say "X", is called trustworthy by some of the #1s above, and an attacker by some of the #2s above, then someone is clearly confused. It could be that the root is confused.
The root knows that either someone is lying that "X" is attacking, or someone is lying that "X" wouldn't attack. Positive evidence of an attack rules, so you assume that everyone who said that "X" wouldn't attack must be wrong. Then, therefore, everyone who said that those people are a good judge of other people's willingness to attack must have been wrong, which will, of course, propagate all the way back to the root.
Presumably the second form of attack, not discussed above, concerns forging online identities.
Part of the definition of "trustworthy" would have to be that the person keeps the token that identifies themselves from being stolen/copied.
The Trustlet wiki aims to list all possible trust metrics before analyzing them: http://trustlet.org/wiki/AnalyzedTrustMetrics
Is there any way to create a similar sort of trust metric without any notion of a fixed "root"? For example, say you have a network of traders in a collective barter system, and you "trust" people not to rip each other off--but say the root node or some of the "seed" nodes get power-happy and become untrustworthy. How would you create such a system that would allow for defending against attacks that arise when previously good nodes become bad nodes? Think of it like the cells in the body--the immune system attacks ones that become cancerous...