How should a Trusted Platform be implemented? Which parts of a Trusted Platform should be protected via hardware instantiation, and which should not? TCPA avoids the issue, saying only that sensitive data must be held in "shielded locations" whenever it can be viewed or manipulated, and that only "trusted capabilities" can access data in such shielded locations. This avoidance is deliberate, because TCPA doesn't want to prescribe any particular solution to the problem.
Of course, it's best to sidestep the issue and do as much as possible with functions that don't need to be trusted. (Normal software processes can do untrusted processing.) This approach obviously cannot help with primitives such as digital signing, but it is advantageous when implementing more complex functions. (TCPA used this approach in its protocol to obtain "attestation identities" for a Trusted Platform. Both the start of the protocol [sending data to a CA] and the end of that protocol [receiving data from a CA] were split into two separate functions. In each case, one function does all the processing that must be trusted, and the other function does the remaining processing that doesn't have to be trusted.)
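The split described above can be sketched in software. This is only an illustration under invented assumptions: the names `TrustedCapability` and `build_request` are hypothetical, and a keyed MAC stands in for whatever real primitive the trusted capability would provide. The point is the division of labor: the trusted function touches the secret and does the minimum work, while ordinary untrusted code does the bulky encoding and transport processing.

```python
import hashlib
import hmac
import json

class TrustedCapability:
    """Hypothetical trusted function: runs inside the protected
    boundary, and its secret never leaves it."""
    def __init__(self, secret: bytes):
        self._secret = secret  # shielded data

    def attest(self, digest: bytes) -> bytes:
        # The only operation that must be trusted:
        # authenticate a fixed-size digest with the secret.
        return hmac.new(self._secret, digest, hashlib.sha256).digest()

def build_request(payload: dict, trusted: TrustedCapability) -> dict:
    """Hypothetical untrusted helper: does all the remaining
    processing (serialization, hashing, framing) outside the
    protected boundary."""
    blob = json.dumps(payload, sort_keys=True).encode()
    digest = hashlib.sha256(blob).digest()  # untrusted pre-processing
    return {"payload": payload, "proof": trusted.attest(digest).hex()}

tc = TrustedCapability(b"device-secret")
req = build_request({"id": "platform-42"}, tc)
```

Only `attest` would need a shielded location here; everything in `build_request` could run as a normal software process.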
Once unnecessary functions have been eliminated, the remaining trusted functions form the root-of-trust and must be protected somehow. Protection mechanisms should be as straightforward as possible; the more convoluted a mechanism is, the harder it is to be confident that it will do what it's intended to do. And "less hardware is more," because hardware cost is the irreducible cost of a platform, even though functions and data in dedicated hardware are easier to protect from software and physical attack. So it's difficult but critical (both commercially and for security) to make a good implementation choice for a trusted function. An obvious choice is to put the entire root-of-trust in a self-contained microcontroller or customized device, so that secrets never appear on device pins or on circuit-board traces. Such devices can be considered derivatives of smart cards, which use proven techniques to resist chip peeling and other attacks. Essentially the same device can be used on many different platform types, benefiting from increased manufacturing volume and hence a reduced cost per device.
On the other hand, it would be nice to avoid the extra cost of an additional chip if at all possible. A normal computer platform (a PC, for example) has a distributed computing engine, in the sense that the components that constitute the engine are usually physically distributed on the motherboard. That distributed computing engine can execute trusted functions if:
1. Secrets are revealed only to trusted functions.
2. Trusted functions cannot be subverted or arbitrarily created.
3. The computer platform provides sufficient protection against physical attack on trusted functions.
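A minimal sketch of what requirements 1 and 2 might look like, with all names invented for illustration: secrets are gated behind a fixed set of trusted functions, and that set cannot be extended at runtime. (On a real platform this gating would be enforced by hardware, not by a Python class.)

```python
class SecretStore:
    """Hypothetical shielded location: releases secrets only to
    callers registered as trusted functions."""
    def __init__(self, trusted_functions):
        # The trusted set is fixed at construction time, so new
        # trusted functions cannot be arbitrarily created (req. 2).
        self._trusted = frozenset(trusted_functions)
        self._secrets = {}

    def put(self, name, value):
        self._secrets[name] = value

    def get(self, name, caller):
        # Secrets are revealed only to trusted functions (req. 1).
        if caller not in self._trusted:
            raise PermissionError("untrusted caller")
        return self._secrets[name]

def sign_with_secret(store):
    """Hypothetical trusted function: identifies itself as the caller."""
    return store.get("signing-key", sign_with_secret)

store = SecretStore({sign_with_secret})
store.put("signing-key", b"\x01\x02")
```

Any function outside the frozen trusted set gets a `PermissionError` instead of the secret; requirement 3, physical protection, has no software analogue and is addressed separately below.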
Some types of existing processors are built with physical support for more than one level of processing privilege, whereby the processor's hardware stops lower-level instructions from accessing resources available to the higher level. A higher privilege level could be used to execute trusted functions while a lower privilege level executes normal functions. Then normal software processes cannot interfere with trusted processes and their secrets. This approach satisfies requirements 1 and 2 above. Unfortunately, requirement 3 calls for further measures. Anyone with physical access to the motherboard, for example, has access to whatever signals appear on device pins, or on traces between devices. Unless such access can be prevented (by locked enclosures or rooms, or human guards), sensitive information passing between devices must be encrypted to prevent its revelation while in transit. This means that every device dealing with secrets must be able to encrypt and decrypt signals. The problem then becomes one of distributing secrets among devices, so that the source and destination of encrypted secrets can be identified.
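The closing idea, encrypting secrets in transit between devices and tagging them so each endpoint can identify the other, might be sketched as follows. This is a toy under stated assumptions: the two devices already share a key (the very distribution problem the text identifies), the hash-based keystream is not a real cipher, and all names and framing are invented.

```python
import hashlib
import hmac
import os

def keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    """Toy keystream derived from a hash; stands in for a real cipher."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def send(shared_key: bytes, src: bytes, dst: bytes, secret: bytes):
    """Encrypt a secret for the bus and tag it with its endpoints."""
    nonce = os.urandom(16)
    ks = keystream(shared_key, nonce, len(secret))
    ct = bytes(a ^ b for a, b in zip(secret, ks))
    header = src + b"->" + dst + nonce
    # The MAC binds the ciphertext to the claimed source and destination.
    tag = hmac.new(shared_key, header + ct, hashlib.sha256).digest()
    return header, ct, tag

def recv(shared_key: bytes, header: bytes, ct: bytes, tag: bytes) -> bytes:
    """Check the endpoint tag, then decrypt."""
    expected = hmac.new(shared_key, header + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("forged or corrupted message")
    nonce = header[-16:]
    ks = keystream(shared_key, nonce, len(ct))
    return bytes(a ^ b for a, b in zip(ct, ks))

k = os.urandom(32)
msg = send(k, b"CPU", b"TPM", b"platform secret")
```

Even in this toy, everything hinges on `shared_key` having been distributed to exactly the right pair of devices beforehand, which is the residual problem the section ends on.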