Introducing the Confidential Compute Architecture

Over the last few years, security, and hardware security breaches in particular, have been at the forefront of the news, with vulnerabilities such as Spectre, Meltdown, and their many sibling side-channel attacks showing that there's a fundamental need to rethink how we approach security. One way Arm wants to address this overarching issue is to re-architect how secure applications work, with the introduction of the Arm Confidential Compute Architecture (CCA).

Before continuing, I want to note that today's disclosures are merely high-level explanations of how the new CCA operates; Arm says more details on how exactly the new security mechanism works will be unveiled later this summer.

The goal of the CCA is to move on from the current software stack, in which applications running on a device have to inherently trust the operating system and the hypervisor beneath them. The traditional security model is built around the fact that more privileged tiers of software are allowed to, and are able to, see into the execution of lower tiers, which becomes a problem when the OS or the hypervisor is compromised in any way.

CCA introduces the new concept of dynamically created “realms”, which can be viewed as secured, containerised execution environments that are completely opaque to the OS and hypervisor. The hypervisor would still exist, but would be solely responsible for scheduling and resource allocation. The realms would instead be managed by a new entity called the “realm manager”, a new piece of code roughly one tenth the size of a hypervisor.

Applications within a realm would be able to “attest” the realm manager in order to determine that it can be trusted, something that isn't possible with, say, a traditional hypervisor.

Arm didn't go into more detail about what exactly creates this separation between the realms and the non-secure world of the OS and hypervisor, but it did sound like hardware-backed address spaces which cannot interact with each other.

The advantage of using realms is that it vastly reduces the chain of trust of a given application running on a device, with the OS becoming largely transparent to security issues. Mission-critical applications that require supervisory controls would be able to run on any device, as opposed to today's situation where corporations or businesses require the use of dedicated devices with authorised software stacks.

Not new to v9, but rather introduced with v8.5, MTE (Memory Tagging Extension) is aimed at helping with two of the most persistent security issues in the world's software. Buffer overflows and use-after-free bugs have been part of software design for the past 50 years, and it can take years for them to be identified or resolved. MTE is aimed at helping identify such issues by tagging pointers upon allocation and checking them upon use.

74 Comments


  • JoeDuarte - Wednesday, March 31, 2021 - link

    This is not true. Most developers have never used any SIMD and don't plan to. Some of them don't even know what SIMD is. You're severely overestimating its importance. Software developers are generally lazy and produce lots of underperforming and poorly optimized code.

    Given that Arm introduced SVE several years ago, and no one has even implemented it in a processor that you can buy, I don't know why you think Arm's noises about SVE2 matter. It won't matter. They're so fragmented that they can't even get consistent implementation of the latest versions of v8, like v8.3/4/5.

    Apple doesn't even want developers to optimize at that level, to use assembly or intrinsics, so they make it hard to even know what instructions are supported in their Arm CPUs. They want everyone to use terrible languages like Swift. On Android, there's so much fragmentation that you can't count on support for later versions of v8.x.

    SVE2 would matter on servers if and when Arm servers become a thing, a real thing, like a you can buy one from Supermicro kind of thing. They would need to be common, with accessible hardware. Developers will need access to the chips, either on their desks or in the cloud. It would need to be reliable access – the cloud generally isn't reliable that way, as there have been cases where AWS dropped people down to instances running on pre-Haswell CPUs, which broke developers' code using AVX2 instructions...

    You can't develop for SVE2 without access to hardware that supports it. Right now that hardware does not exist. Arm v9 isn't going to yield any hardware that supports SVE2 for a year or longer, and it might be four years or so before it's easily accessed, or longer, possibly never. By the time it's readily available, so many other variables will have changed in the market dynamic between Arm, AMD, and Intel that your claim doesn't work.
  • Ppietra - Friday, April 2, 2021 - link

A lot of developers might not even know what SIMD is, but I would argue that a lot of apps actually end up using SIMD simply because many APIs to the system make use of NEON.
  • Krysto - Tuesday, March 30, 2021 - link

    MTE will likely end up more of a short-term solution, as all such solutions are.

    If Arm was serious about actually getting rid of the majority of memory bugs, they would have announced first-class support for the Rust programming language.
  • SarahKerrigan - Tuesday, March 30, 2021 - link

    https://developer.arm.com/solutions/internet-of-th...

    Rust has been well-supported on ARM for a while.
  • Wilco1 - Wednesday, March 31, 2021 - link

    Many languages have claimed to solve all computing problems, but none did as well as C/C++. Why would Rust be any better than Java, C#, D, Swift, Go etc?

    Also you're forgetting that compilers and runtimes will still have bugs. 100% memory safe is only achievable using ROM.
  • kgardas - Wednesday, March 31, 2021 - link

Because of all the languages mentioned, Rust is not a GC-based language and has the highest chance of being adopted for systems programming. See the Rust additions to the Linux kernel, see MS's praise for Rust, etc. Generally speaking, Rust is more type-safe/memory-safe than C, and good old C is really old enough to be replaced completely.
  • Wilco1 - Wednesday, March 31, 2021 - link

    Ditching GC is good but it doesn't solve the fundamental problem. I once worked on a new OS written in a C# variant, and it was riddled with constructs that switch off type checking and GC in order to get actual work done. So in the end it didn't gain safety while still suffering from all the extra overheads of using a "safe" language.

    So I remain sceptical that yet another new language can solve anything - it's hard to remain safe while messing about with low level registers, stacks, pointers, heaps etc. Low-level programming is something some people can do really well and others can never seem to master.
  • mdriftmeyer - Thursday, April 1, 2021 - link

We're not talking C89 but C17, and most OS codebases are already adopting those modern features. C2x has an awful lot of work being finalized for it.

    http://www.open-std.org/jtc1/sc22/wg14/www/wg14_do...

    Good old C isn't old anymore.

    And no, Linus, Apple, Microsoft aren't ditching C/C++ in their Kernels for Rust.
  • melgross - Saturday, April 10, 2021 - link

    Rust will be safer until the hacking community is interested enough to find all of the bugs and poor thinking that undoubtedly exists in Rust, as it has in every language over the decades that was declared safe.
  • JoeDuarte - Wednesday, March 31, 2021 - link

    Are you aware of formal verification? There are formally verified OSes now, like seL4.

There's also the CHERI CPU project, which Arm is involved in.

And formally verified compilers like INRIA's CompCert.

    We need to junk C and C++ and replace them with serious programming languages that are far more intuitive and sane, as well as being memory safe. Rust is terrible from a syntax and learning standpoint, and much better languages are possible. The software industry is appallingly lazy.
