1. Trust in Systems
Chopra and Wallace (2003) offer four domains where trust decisions are often made in information systems:
- Information. Can we trust the information we obtain? To their discussion we could add a further concern: do we trust others to hold our information (i.e. with respect to privacy and integrity)?
- Information systems. Are the computing systems we rely upon trustworthy? Will they fail?
- E-commerce. Can we trust buyers / sellers?
- Online relationships. Can we trust the people we form relationships with in chat rooms?
In these contexts, the natural question "Whom do you trust?" can be answered differently depending on the system: another person, a group, an organization, or a computer system (Chopra and Wallace, 2003). The last case is of critical interest in this paper. The first two domains above describe trust relationships between machine and machine, and these trust relationships are distinct from human-to-human or human-machine relationships.
Common descriptions of trust in machine-to-machine systems, also known as distributed systems, are not about fostering trust or building relationships, but rather about maintaining as much control over the system as if it were not distributed. Before distributed (or component-based) systems, system designers had full control over their systems: if a fault was detected, it was within their power to resolve it. In a distributed system, one must give up control over parts of the system to other designers with whom one may have no power-over relationship, or indeed no relationship whatsoever. If their part of the system faults, it may cause a total system failure, say if one of the filters in a pipe-and-filter design fails to pass control on to the next filter. Consequently, these environments are often presumptively distrustful, or only marginally trustful, and the predominant themes are that trust is about security, privacy, and reliability (e.g. Camp, 2001; Grandison and Sloman, 2000).
1.1. Trust vs. security
Trust is a higher-level concept than security. Security has no meaning without deciding whom to grant access or authorization, and the decision of whom to grant access falls under trust. Conversely, trust is implemented through security mechanisms. As Grandison and Sloman (2000) summarize:
- Authorization. Trust often results in authorization over some resources under the system's control or ownership, such as running a native process or having access to the database. Security controls access.
- Authentication. Ensuring people are who they say they are is essential for trust, as you need to know whom you are trusting. Identification does not have to be by name; anonymous authorization can be implemented using capabilities or certificates (a capability-based sketch follows this list). Rather, when granting trust to some individual, it is necessary to validate that the individual deserves trust.
- Certification. A common mechanism to identify individuals is through certification, such as the Medical Association identifying capable doctors. Certificates need to be cryptographically secure to prevent forgery.
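To make the distinction between naming and authentication concrete, here is a minimal sketch of anonymous, capability-based authorization in Python. The token format, the shared-key HMAC scheme, and all names are illustrative assumptions, not a mechanism prescribed by Grandison and Sloman; a deployed system would more likely use public-key certificates.

```python
import hmac
import hashlib

# Assumption: a shared secret held by the issuing authority.
ISSUER_KEY = b"issuer-secret-key"

def issue_capability(resource: str, rights: str) -> str:
    """Issue a bearer capability naming a resource and rights, but no person."""
    payload = f"{resource}:{rights}"
    tag = hmac.new(ISSUER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}:{tag}"

def authorize(token: str, resource: str, right: str) -> bool:
    """Grant access if the capability is genuine and covers the request.
    The bearer is never identified by name."""
    try:
        res, rights, tag = token.rsplit(":", 2)
    except ValueError:
        return False
    expected = hmac.new(ISSUER_KEY, f"{res}:{rights}".encode(),
                        hashlib.sha256).hexdigest()
    return (hmac.compare_digest(tag, expected)
            and res == resource
            and right in rights.split(","))

# Whoever holds this token may read the database, anonymously.
token = issue_capability("database", "read")
assert authorize(token, "database", "read")
assert not authorize(token, "database", "write")
```

The point is that the security mechanism (the cryptographic check) only enforces a trust decision made earlier, when the capability was issued.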
1.2. Trust vs. privacy and integrity
If others have placed trust in you to protect their information, or you have sensitive information you wish to protect, it is important to ensure that information remains contained. However, in a distributed system we often have to pass this information on to third parties outside of our control. In response, many people use forms of social recourse (as described above) to guarantee fealty.
- Authentication. Again, knowing exactly whom one is dealing with will help in redressing grievances later.
- Reputation. Tracking reputation in a distributed system (e.g. Yu and Singh, 2002) allows one to punish others socially (e.g. Guha, Kumar, Raghavan, and Tomkins, 2004) through the propagation of negative recommendations (distrust).
Finally, when we delegate work to other parts of the system not under our control, we implicitly trust anyone or anything those parts trust. If we give up sensitive information (e.g. credit card numbers or personal information), we trust that the other party will not divulge that information. If they pass on that information as part of the work, we trust that they have secured those relationships as well. Hence, transitive trust is very important in delegation systems.
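As a rough illustration of how transitive trust and negative recommendations can interact, the sketch below discounts trust along each hop of a recommendation chain and refuses to take recommendations from distrusted parties. It is a hypothetical toy model, not the scheme of Yu and Singh (2002) or Guha et al. (2004).

```python
# Hypothetical trust graph: trust[a][b] in [-1.0, 1.0];
# negative values represent propagated distrust.
trust = {
    "alice":   {"broker": 0.9},
    "broker":  {"shop": 0.8, "spammer": -1.0},
    "shop":    {"payment_gateway": 0.7},
    "spammer": {"shop": 1.0},
}

def derived_trust(source, target, discount=0.9, visited=frozenset()):
    """Best discounted trust from source to target over recommendation chains.
    Recommendations from distrusted intermediaries are ignored."""
    direct = trust.get(source, {}).get(target)
    if direct is not None:
        return direct
    best = 0.0
    for middle, score in trust.get(source, {}).items():
        if score <= 0 or middle in visited:
            continue  # never consult a distrusted or already-visited party
        via = derived_trust(middle, target, discount, visited | {source})
        best = max(best, discount * score * via)
    return best

# Alice reaches the payment gateway only through parties she (transitively)
# trusts; the spammer's glowing recommendation of the shop is never used.
print(derived_trust("alice", "payment_gateway"))  # ~0.41 after two discounted hops
```

In a delegation setting, the same idea says: before handing over a credit card number, evaluate not just the delegate but the discounted trust placed in whomever the delegate will involve.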
1.2.1. Reliability
In practice, one cannot guarantee the reliability of another component in the system. Abdul-Rahman and Hailes (1997) lament how the confusion around the word 'trust' leads to the misconception that trusted means nothing can go wrong. In practice, one can only control with whom one does business, again using reputation systems as with privacy. Another approach is that of the Trusted Computing Platform Alliance (2001), which has decided to totally control what can or cannot run on a machine by securing the full computing platform under a centralized authority. However, Abdul-Rahman and Hailes have criticized this approach as naive, since the TCPA cannot guarantee code will not crash; it can only punish those who write faulty code. A better solution is to secure system capabilities, particularly operating system capabilities, so that unknown applications have only limited access to the resources of the host machine. Then a trust management model allows one to sandbox the mobile code by controlling what system capabilities will be exposed to the program (Blaze, Feigenbaum, Ioannidis, and Keromytis, 1999).
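The following sketch shows the sandboxing idea in spirit: a trust-management decision yields a short list of system capabilities that are exposed to an unknown program, and everything else is withheld by default. The capability names and the policy table are invented for illustration; this is not the PolicyMaker/KeyNote interface of Blaze et al.

```python
from typing import Callable, Dict, Set

# Capabilities the host could expose; the names are assumptions.
def read_tmp(path: str) -> str: ...
def open_socket(host: str, port: int) -> None: ...
def spawn_process(cmd: str) -> None: ...

ALL_CAPABILITIES: Dict[str, Callable] = {
    "read_tmp": read_tmp,
    "open_socket": open_socket,
    "spawn_process": spawn_process,
}

def policy(trust_level: str) -> Set[str]:
    """Map a (hypothetical) trust decision onto exposed capabilities."""
    return {
        "unknown":           set(),            # deny everything by default
        "partially_trusted": {"read_tmp"},
        "trusted":           {"read_tmp", "open_socket"},
    }.get(trust_level, set())

def sandbox(mobile_code: Callable, trust_level: str):
    """Run mobile code, handing it only the capabilities the policy allows."""
    exposed = {name: fn for name, fn in ALL_CAPABILITIES.items()
               if name in policy(trust_level)}
    return mobile_code(exposed)

# Usage: an unknown applet sees no capabilities; a partially trusted one sees one.
print(sandbox(lambda caps: sorted(caps), "unknown"))            # []
print(sandbox(lambda caps: sorted(caps), "partially_trusted"))  # ['read_tmp']
```

Even if the mobile code crashes, the damage is bounded by the capabilities the policy chose to expose.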
1.3. Problem types
Grandison and Sloman (2000) have identified the general types of problems that are typically considered when designing for trust. These are important to distinguish since each problem type requires a different type of trust.
- Resource control. One grants trust over resources that one owns or controls, which, as mentioned above, could mean running native code, granting access to a database, or any other resource. Historically, this has been the main preoccupation in the systems literature (Abrams and Joyce, 1995), where the answer has focused on access control mechanisms. Common solutions are operating system passwords, firewalls, and access control lists (a minimal sketch follows this list). GoodFaith and other ethical concerns are very important in these situations.
- Delegation. Trusting someone else to make decisions on one's behalf, perhaps even giving them resources under one's control. Work is delegated to subordinates, often giving them resources along with a customer account. Control is also delegated through control flow, for example by using third-party web services, although this often requires no transfer of resources. In these domains trust is embedded in the decision to do business with the other party, and the main preoccupation is competence. Delegation often involves transitive trust, as one must trust the other party and whomever they trust. For issues like privacy, this can be important.
- Infrastructure. As mentioned above, the Trusted Computing Platform Alliance (2001) has created the notion of trusting the infrastructure upon which the information systems rest.
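As a concrete, deliberately simplified picture of the access-control answer to resource control, the sketch below keeps an access control list mapping each resource to the principals and rights granted on it. The principals and resources are made up for illustration.

```python
# A minimal access control list: resource -> principal -> set of rights.
ACL = {
    "customer_db": {"alice": {"read", "write"}, "bob": {"read"}},
    "native_exec": {"alice": {"execute"}},
}

def check_access(principal: str, resource: str, right: str) -> bool:
    """Return True only if the ACL explicitly grants the right."""
    return right in ACL.get(resource, {}).get(principal, set())

assert check_access("bob", "customer_db", "read")
assert not check_access("bob", "customer_db", "write")          # not granted
assert not check_access("mallory", "native_exec", "execute")    # unknown principal
```

The ACL records the outcome of a trust decision; it says nothing about how that decision was reached, which is exactly the gap trust models try to fill.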
1.4. Design strategies
Overall, we can summarize a number of common design strategies by describing the major design tensions.
- Judgment vs. recommendation. One decides to trust someone, as described at length above, either by a personal judgment of them and the situation, or based on the recommendation of someone else. While many systems trust recommendations completely, in normal human practice trust is evaluated even when a recommendation is given. Computer systems can also do this, and research such as TrustManagement? is moving in this direction. However, this is not easy, since distributed systems lack the rich contextual interpersonal cues that humans possess in social encounters. As such, in practice, the only option open to networked agents is to use recommendations (Abdul-Rahman & Hailes, 1997a).
- Centralization vs. decentralization. One can centralize authority, say with government institutions, or decentralize authority across the whole social network. The two most striking examples of this are X.509 (centralized) and PGP (decentralized). Centralization makes global policy decisions and administration easier, and it strengthens verification of the relative trustworthiness of recommendations, even if a central authority's credibility may be diluted if it certifies too many parties (Abdul-Rahman & Hailes, 1997b). However, centralization is most attractive to those who hold the power. The Trusted Computing Platform Alliance is mostly a Microsoft initiative (Mundie, de Vries, and Corwine, 2002), since it will help them secure their market dominance. Indeed, this grates with many people, and many would concur with the statement, "With decentralization, each rational agent will be allowed to take responsibility for its own fate. This is a basic human right." (Abdul-Rahman & Hailes, 1997a, p.51)
- All-or-nothing vs. levels of trust. Camp (2003) criticizes today's Internet for having a mostly all-or-nothing model of trust. That is, once authenticated, a person or machine is fully authorized on the system. This could be called the "firewall model": once the fire is let behind the wall, the wall is useless. Further, Camp laments that firewalls encourage power users to tunnel around the total access restrictions. A better approach is to spread out levels of trust so that people do not feel compelled to circumvent all the security just to do something small. The best approach to do this, to date, is trust management systems. Consider again, however, the singular example of running native code. With a gradual trust model, one can grudgingly grant only the necessary capabilities to applications rather than giving them full access simply for running under the user account (Blaze, Feigenbaum, Ioannidis, and Keromytis, 1999); a small sketch of graded authorization follows this list.
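To contrast the firewall model with graded trust, here is a small sketch in which every operation requires a minimum trust level, so authenticating at a low level does not confer full authorization. The levels, thresholds, and operation names are invented for illustration.

```python
# Hypothetical trust levels and the minimum level each operation requires.
LEVELS = {"anonymous": 0, "authenticated": 1, "vouched_for": 2, "administrator": 3}

REQUIRED = {
    "read_public_page": 0,
    "post_comment":     1,
    "run_native_code":  2,
    "change_policy":    3,
}

def allowed(trust_level: str, operation: str) -> bool:
    """Graded authorization: compare the caller's level with the operation's bar."""
    bar = REQUIRED.get(operation, max(REQUIRED.values()))
    return LEVELS.get(trust_level, 0) >= bar

# Authentication grants something, but not everything, unlike the all-or-nothing
# model in which passing the firewall confers full authorization.
assert allowed("authenticated", "post_comment")
assert not allowed("authenticated", "run_native_code")
```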
These design trade-offs underpin all machine-machine descriptions of trust and control in distributed systems. More complex trade-offs dealing with machine-human and human-machine-human interactions lie beyond the scope of this discussion.
References
See WhatIsTrust.