
How Can Platforms Be Said To Be "Trusted"?

As noted, there are different aspects to trust. The TCPA definition of trust is that something is trusted "if it always behaves in the expected manner for the intended purpose." A similar approach is adopted in the third part of the ISO/IEC 15408 standard: "A trusted component, operation, or process is one whose behaviour is predictable under almost any operating condition and which is highly resistant to subversion by application software, viruses, and a given level of physical interference."

In this section I'll give a description of the TCPA trust mechanisms and argue that categorizing trust in terms of behavioral and social components helps in understanding how Trusted Platforms enhance trust.

The way in which you (as a local user or third party) know whether a platform can be trusted is as follows:

  • When you want to trust that platform for some particular purpose, you ask for measurements (called integrity metrics) about the platform, digitally signed by the trusted component on that platform. You then compare these integrity metrics with expected values that represent software that you would trust sufficiently to interact with the platform for whatever purpose you have in mind. (The actual measurement values that are compared could differ according to the particular intended use of the platform.)

  • If the measured values are the same as the expected values, you can safely interact with the platform for the desired purpose. Anomalous integrity metrics indicate that the platform is not operating as expected, and you'll need to judge whether to proceed with the interaction based on this information.
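The comparison step above can be sketched in a few lines of Python. This is purely illustrative: the metric names, the expected values, and the `platform_is_trustworthy` helper are assumptions for the example, not part of any TCPA interface, and the signature check on the reported metrics is omitted.

```python
import hashlib
import hmac

# Expected integrity metrics for a software stack you trust, e.g. digests
# of the BIOS, boot loader, and OS kernel images you have vetted.
# (Illustrative values only.)
EXPECTED_METRICS = {
    "bios":   hashlib.sha1(b"vetted BIOS image").hexdigest(),
    "loader": hashlib.sha1(b"vetted boot loader").hexdigest(),
    "kernel": hashlib.sha1(b"vetted OS kernel").hexdigest(),
}

def platform_is_trustworthy(reported_metrics: dict) -> bool:
    """Trust the platform only if every reported metric matches the
    expected value for the intended purpose of the interaction."""
    return all(
        hmac.compare_digest(reported_metrics.get(name, ""), expected)
        for name, expected in EXPECTED_METRICS.items()
    )

# A platform reporting the expected software stack passes the check...
assert platform_is_trustworthy(dict(EXPECTED_METRICS))

# ...while one reporting an anomalous kernel measurement does not.
tampered = dict(EXPECTED_METRICS,
                kernel=hashlib.sha1(b"modified kernel").hexdigest())
assert not platform_is_trustworthy(tampered)
```

In practice the expected values would differ per intended use, as noted above, and the reported metrics would arrive digitally signed by the platform's trusted component.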

In order to trust a computer platform, it's necessary to use both behavioral and social elements of trust; mechanisms provide information about the behavior of a platform, but you'll only trust that information if you trust the people who vouch for the mechanisms themselves, as well as for the expected value of such information.

Behavioral Components

A TCPA Trusted Platform provides trust mechanisms that reliably generate, store, and report measurements of its software environment. These trust mechanisms dynamically collect and provide evidence of the platform's post-boot behavioral history.

There are two minimal roots of trust for these mechanisms:

  • The root of trust for measurement (RTM) starts the measurement process.

  • The other root of trust stores the results of the measurement processes as they happen, in such a way that measurements cannot be "undone." It cryptographically reports the current measured values and prevents the release of a secret if the current measured values don't match the values stored with that secret. This second root of trust is implemented as a hardware chip rather like an internal smart card chip, called the Trusted Platform Module (TPM). The TPM is protected by being tamper-resistant, so that what goes on inside the chip cannot be tampered with by the platform, by the user, or by a third party. The TPM is something that's trusted by everyone.
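The way measurements "cannot be undone" can be sketched as a hash chain. The following is an illustrative model, not TPM code: each new measurement is folded into a register by hashing it together with the register's previous contents (the TPM 1.x design TCPA specified uses SHA-1 and 20-byte registers).

```python
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    """New register value = SHA-1(old register value || digest of measurement)."""
    return hashlib.sha1(pcr + hashlib.sha1(measurement).digest()).digest()

pcr = b"\x00" * 20                       # register starts at zero on reset
for stage in (b"BIOS", b"boot loader", b"OS kernel"):
    pcr = extend(pcr, stage)             # each boot stage is folded in, in order

# Because the hash is one-way, software running later cannot choose a
# measurement that rewinds the register to an earlier value; it can only
# extend the chain, so the register reflects everything measured so far.
```

The final register value thus commits to the whole measured boot sequence, which is what lets the TPM report it, or gate the release of a secret on it, as described above.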

This evidence of behavior is the behavioral component of trust, since it provides the means of knowing whether a platform can be trusted.
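The other TPM behavior mentioned above, refusing to release a secret unless the current measured values match those stored with it, can be sketched the same way. The `seal`/`unseal` names and the tuple representation are assumptions for illustration; inside a real TPM the sealed data lives in protected storage.

```python
import hashlib
import hmac

def seal(secret: bytes, pcr_at_seal_time: bytes) -> tuple:
    """Bind a secret to the platform state recorded when it was stored."""
    return (secret, pcr_at_seal_time)

def unseal(sealed: tuple, current_pcr: bytes) -> bytes:
    """Release the secret only if the current state matches the sealed state."""
    secret, required_pcr = sealed
    if not hmac.compare_digest(required_pcr, current_pcr):
        raise PermissionError("platform state does not match sealed state")
    return secret

trusted_pcr = hashlib.sha1(b"trusted boot sequence").digest()
blob = seal(b"disk encryption key", trusted_pcr)

# Unsealing succeeds only while the platform is in the recorded state.
assert unseal(blob, trusted_pcr) == b"disk encryption key"
```

If the platform later boots different software, its measurement register holds a different value and unsealing fails, so the secret never reaches an unexpected software environment.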

Social Components

The social component of trust relates to trustworthiness in a social sense: people agree that the trusted item is bona fide, is capable of behaving properly, and will do the right things.

Social trust in a Trusted Platform is an expression of confidence in behavioral trust, because it's an assurance about the implementation and operation of that Trusted Platform. Such information provides the means of knowing whether a platform should be trusted. Trusted Platforms use social trust to provide confidence in the integrity collection and reporting mechanisms mentioned above via delegation of the analysis of such mechanisms. They also use social trust to provide confidence that particular values of integrity metrics published by another organization or individual indicate that the platform can safely be used for a particular purpose.

A specific TPM relies on social trust: you can inspect its platform certificate, which is a trustable assertion by the company that made it. Other elements of a Trusted Platform also have certificates that vouch for its design, asserting that a specific TPM was incorporated into the Trusted Platform, that the design of the RTM and TPM meets the TCPA specification, and so on.

In summary, social trust underpins why a Trusted Platform can be said to be "trusted": third parties are prepared to endorse the platform because they've assessed it and are willing to state that if measurements of the platform's integrity have certain values, it can be trusted for particular purposes. Whether you're a local or a remote user, as long as you trust the judgment of those third parties, you'll trust the platform to behave in a trustworthy and predictable manner once it proves its identity and its measurements match the expected values.

In this section I've given a brief introduction to the trust mechanisms provided by TCPA. Unfortunately, these trust mechanisms are inherently complex and can't be operated directly by individual (human) users; they require cryptographic operations and detailed comparisons. As a consequence, you always need a computing engine to challenge a Trusted Platform, even when that Trusted Platform is right next to you. Challenging a remote Trusted Platform is a straightforward use of the trust mechanisms described above, so long as you trust that your challenging device will analyze the integrity of the platform in a trustworthy manner and convey its findings to you faithfully. But what if you can't trust it? And what about checking a platform in front of you that you want to use but don't necessarily trust?
