DistributedTrust

1. Trust in Systems

Chopra and Wallace (2003) identify four domains in which trust decisions are often made in information systems.

In these contexts, the natural question "Whom do you trust?" can be answered in various ways depending on the system: another person, a group, an organization, or a computer system (Chopra and Wallace, 2003). The last case is of critical interest in this paper. The first two types of systems describe trust relationships between machine and machine. These trust relationships are distinct from human-to-human or human-machine relationships.

Common descriptions of trust in machine-to-machine systems, also known as distributed systems, are not about fostering trust or building relationships, but about maintaining as much control over the system as if it were not distributed. Before distributed (or component-based) systems, designers had full control over their systems: if a fault was detected, it was within their power to resolve it. In a distributed system, one must cede control over parts of the system to other designers with whom one may have no power-over relationship, or indeed no relationship at all. If their part of the system faults, it may cause a total system failure, say if one filter in a pipe-and-filter design fails to pass control on to the next filter. Consequently, these environments are often presumptively distrustful, or only marginally trustful, and the predominant themes are that trust is about security, privacy, and reliability (e.g. Camp, 2001; Grandison and Sloman, 2000).
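
To make that failure mode concrete, here is a minimal sketch in Python; the filter names and the injected fault are invented for illustration, not drawn from any particular system. One faulty filter in the chain prevents control from ever reaching the filters downstream of it.

    # Minimal sketch of the pipe-and-filter failure mode described above.
    # Each filter receives data and passes its output on to the next;
    # if any one filter faults, control never reaches the rest.

    def parse(data):
        return data.split(",")

    def validate(fields):
        # A fault in a third-party filter we do not control...
        raise RuntimeError("third-party filter fault")

    def store(fields):
        print("stored", fields)

    pipeline = [parse, validate, store]

    def run(pipeline, data):
        for filt in pipeline:
            data = filt(data)  # one faulty filter halts everything downstream
        return data

    try:
        run(pipeline, "a,b,c")
    except RuntimeError as err:
        # Total system failure: 'store' never ran.
        print("pipeline failed:", err)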

1.1. Trust vs. security

Trust is a higher-level concept than security. Security has no meaning until one decides whom to grant access or authorization, and that decision of whom to grant access falls under trust. Conversely, trust is implemented through security mechanisms, as Grandison and Sloman (2000) summarize.
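
As a rough illustration of this layering (the principal names and whitelist below are hypothetical), the trust decision of whom to authorize can be kept separate from the security mechanism that enforces it:

    # Sketch: trust is the policy decision (whom to authorize);
    # security is the mechanism that enforces that decision.

    # Trust layer: a policy decision made outside the mechanism.
    TRUSTED_PRINCIPALS = {"alice", "bob"}  # hypothetical whitelist

    def is_trusted(principal: str) -> bool:
        return principal in TRUSTED_PRINCIPALS

    # Security layer: enforcement is meaningless until the trust
    # decision above has been made.
    def access_resource(principal: str, resource: str) -> str:
        if not is_trusted(principal):
            raise PermissionError(f"{principal} is not authorized")
        return f"{principal} granted access to {resource}"

    print(access_resource("alice", "/reports"))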

1.2. Trust vs. privacy and integrity

If others have trusted you to protect their information, or you have sensitive information of your own you wish to protect, it is important to ensure that the information remains contained. However, in a distributed system we often have to pass this information on to third parties outside our control. In response, many people use forms of social recourse (as described above) to guarantee fealty.

Finally, when we delegate work to parts of the system not under our control, we implicitly trust anyone or anything those parts trust. If we give up sensitive information (e.g. credit card numbers or personal details), we trust that the other party will not divulge it. If they pass that information on as part of the work, we trust that they have secured those relationships as well. Hence, transitive trust is central to delegation systems.
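
A small sketch can make the transitivity concrete; the delegation graph below is invented for illustration. Everyone reachable through our delegates ends up implicitly trusted:

    from collections import deque

    # Hypothetical delegation graph: each party lists whom it trusts
    # with the sensitive data it handles.
    trusts = {
        "us":                ["merchant"],
        "merchant":          ["payment_processor", "shipping_partner"],
        "payment_processor": ["card_network"],
        "shipping_partner":  [],
        "card_network":      [],
    }

    def implicitly_trusted(start: str) -> set:
        """Everyone we end up trusting once delegation is transitive."""
        seen, queue = set(), deque(trusts.get(start, []))
        while queue:
            party = queue.popleft()
            if party not in seen:
                seen.add(party)
                queue.extend(trusts.get(party, []))
        return seen

    # Handing our credit card number to the merchant implicitly
    # trusts everyone the merchant's delegates trust, too.
    print(implicitly_trusted("us"))
    # e.g. {'merchant', 'payment_processor', 'shipping_partner', 'card_network'}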

1.2.1. Reliability

In practice, one cannot guarantee the reliability of another component in the system. Abdul-Rahman and Hailes (1997) lament how confusion around the word 'trust' leads to the misconception that 'trusted' means nothing can go wrong. In practice, one can only control whom one does business with, again using reputation systems as with privacy. Another approach, taken by the Trusted Computing Platform Alliance (2001), is to totally control what can or cannot run on a machine by securing the full computing platform under a centralized authority. However, Abdul-Rahman and Hailes criticize this approach as naive: the TCPA cannot guarantee that code will not crash, only punish those who write faulty code. A better solution is to secure system capabilities, particularly operating system capabilities, so that unknown applications have only limited access to the resources of the host machine. A trust management model can then sandbox mobile code by controlling which system capabilities are exposed to the program (Blaze, Feigenbaum, Ioannidis, and Keromytis, 1999).
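
The following sketch illustrates the capability approach, loosely in the spirit of trust-management systems such as that of Blaze et al. (1999); the capability names and policy table are invented for illustration:

    # Sketch: sandbox mobile code by exposing only the system
    # capabilities the trust management policy grants it.

    FULL_CAPABILITIES = {
        "read_file":   lambda path: f"contents of {path}",
        "write_file":  lambda path, data: f"wrote {len(data)} bytes to {path}",
        "open_socket": lambda host: f"socket to {host}",
    }

    # Trust management policy: unknown code gets a limited capability set.
    POLICY = {
        "trusted": {"read_file", "write_file", "open_socket"},
        "unknown": {"read_file"},
    }

    def sandbox(trust_level: str) -> dict:
        """Expose only the capabilities the policy grants this code."""
        granted = POLICY.get(trust_level, set())
        return {name: fn for name, fn in FULL_CAPABILITIES.items()
                if name in granted}

    caps = sandbox("unknown")
    print(caps["read_file"]("/tmp/data"))  # allowed
    print("open_socket" in caps)           # False: capability never exposed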

1.3. Problem types

Grandison and Sloman (2000) identify the general types of problems typically considered when designing for trust; these are important to distinguish because each problem type requires a different type of trust.

1.4. Design strategies

Overall, we can summarize a number of common design strategies by describing the major design tensions.

These design trade-offs underpin all machine-machine descriptions of trust and control in distributed systems. More complex trade-offs arise in machine-human and human-machine-human interactions.

References

See WhatIsTrust.


Discussion
