It's worth noting that there are already several portable assembly-like languages. One of the first was P-Code, used by Pascal compilers, which was a stack-based virtual architecture that was intended to be easy to translate to native code. A more widespread example is JVM bytecode—inspired by Smalltalk bytecode—which is used by Java and other languages that run in a JVM. Android provides another, in the form of Dalvik.
It is therefore very important that it be impossible, or as close to impossible as is actually feasible, for the code to do anything malicious, such as crash your computer (or even just the web browser) or leak private information to the Internet.
In this setting, garbage collection was vital: a memory-safety bug in untrusted code could give it access not just to the browser but to the entire system.
Now, however, it is much less important. Systems such as FreeBSD's Capsicum, SELinux, or the Darwin sandbox subsystem show that modern operating systems are perfectly capable of running totally untrusted machine code and strictly restricting what it can do. On FreeBSD, for example, it takes under a dozen lines of code to set up an environment that can do nothing other than talk to a specific remote server and read and write files in a temporary directory.
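Those dozen lines might look something like the following sketch, which uses the real Capsicum calls (cap_enter, cap_rights_init, cap_rights_limit) but placeholder values for the server address and directory path. The idea is to acquire every resource the program will ever need up front, then enter capability mode, after which no new global resources (no open, socket, or connect) can be obtained:

```c
/* Sketch of a Capsicum sandbox on FreeBSD (compile with a FreeBSD toolchain).
 * The server address and directory path below are placeholders. */
#include <sys/capsicum.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <fcntl.h>
#include <unistd.h>
#include <err.h>

int
main(void)
{
	/* Connect to the one remote server we are allowed to talk to. */
	int sock = socket(AF_INET, SOCK_STREAM, 0);
	struct sockaddr_in sa = { .sin_family = AF_INET,
	                          .sin_port = htons(443) };
	inet_pton(AF_INET, "203.0.113.1", &sa.sin_addr); /* placeholder */
	if (connect(sock, (struct sockaddr *)&sa, sizeof(sa)) != 0)
		err(1, "connect");

	/* Open the temporary directory; files are later opened with
	 * openat() relative to this descriptor. */
	int tmpdir = open("/tmp/sandbox", O_DIRECTORY);

	/* Enter capability mode: from here on the process can operate
	 * only on descriptors it already holds. */
	if (cap_enter() != 0)
		err(1, "cap_enter");

	/* Further restrict the socket to plain read/write. */
	cap_rights_t rights;
	cap_rights_init(&rights, CAP_READ, CAP_WRITE);
	cap_rights_limit(sock, &rights);

	/* ... untrusted work happens here ... */
	return (0);
}
```

After cap_enter, even a complete compromise of the process yields only the socket and the directory descriptor, which is precisely the point of the argument above.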
In the modern world, the isolation features of a virtual execution environment are far less important. If anything, they are a liability: the virtual machine dramatically increases the size of the trusted computing base. Compare the number of privilege-elevation vulnerabilities in any modern kernel (including Windows) with the number of sandbox-escape vulnerabilities in something like Flash or the JVM over the past year, and the contrast is stark.