oneAPI: Intel’s Solution to Software

Having the hardware is all well and good, but the other (and perhaps more important) angle is software. Intel is keen to point out that before this new oneAPI initiative, it had more than 200 tools and projects across the company to do with software development. oneAPI is meant to bring all of those tools and projects under one roof, and provide a single entry point for developers whether they are programming for CPU, GPU, AI, or FPGA.

The slogan ‘no transistor left behind’ is going to be an important part of Intel’s ethos here. It’s a nice slogan, even if it comes across as a bit of a gimmick. It should also be noted that the slogan is missing a key word: ‘no Intel transistor left behind’. oneAPI won’t help you as much with non-Intel hardware.

This sounds almost too good to be true. There is no way that a single entry point can be all things to all developers, and Intel knows this. The point of oneAPI is more about unifying the software stack, so that high-level programmers can do what they do regardless of the hardware underneath, while low-level programmers who want to target specific hardware and apply micro-optimizations at the lowest level can do that too.

Everything for oneAPI is going to be driven through the oneAPI stack. At the bottom of the stack is the hardware, and at the top of the stack is the user workload – in between there are five areas which Intel is going to address.

The underlying area that covers the rest is system programming. This includes scheduler management, peer-to-peer communications, and device and memory management, as well as trace and debug tools, the last of which also appear in their own context further up the stack.

For direct programming languages, Intel is leaning heavily on its ‘Data Parallel C++’ standard, or DPC++. This is going to be the main language it encourages people to use if they want code that is portable across all the different types of hardware that oneAPI is going to cover. DPC++ is in essence a mix of C++ and SYCL, with Intel in charge of where that goes.

But not everyone is going to want to rewrite their code in a new programming paradigm. To that end, Intel is also working on a Fortran with OpenMP compiler, a standard C++ with OpenMP compiler, and a Python distribution that also works with the rest of oneAPI.

For anyone with a commonly encountered workload, Intel is going to direct you to its library of libraries. Most users will have heard of these before, such as the Intel Math Kernel Library (MKL) or the MPI libraries. What Intel is doing here is refactoring its most popular libraries specifically for oneAPI, so all the hooks needed for the various hardware targets are present and accounted for. It’s worth noting that these libraries, like their non-oneAPI counterparts, are likely to be sold on a licensing model.

One big element of oneAPI is going to be its migration tools. Intel has made a big deal about wanting to support translation of CUDA code to Intel hardware. If that sounds familiar, it’s because Raja Koduri already tried to do that with HIP at AMD. The HIP tool works well in some cases, although in almost all instances it still requires adjustments to the code to get something written in CUDA working on AMD. When we asked Raja what he had learned from previous conversion tools and what makes things different at Intel, he said that the issue arises when code written for a wide vector machine gets moved to a narrower vector machine, which was AMD’s main problem. With Xe, its variable vector width means that oneAPI shouldn’t have as many issues translating CUDA to Xe in that respect. Time will tell, for obvious reasons. If Intel wants to be big in HPC, this is the one trick it will need to execute on.

The final internal pillar of oneAPI is the analysis and debug tools. Popular products like VTune and Trace Analyzer will be getting the oneAPI overhaul so they can integrate more easily with a variety of hardware and code paths.

At the Intel HPC Developer Conference, Intel announced that the first version of the oneAPI public beta is now available. Interested parties can start to use it, and Intel is keen for feedback.

The other angle to Intel’s oneAPI strategy is supporting it with the company’s DevCloud platform. This gives users access to oneAPI tools without needing to own the hardware or install the software. Intel stated that it aims to provide a wide variety of hardware on DevCloud, so that potential customers who are interested in specific hardware but unsure what works best for them can try it out before making a purchasing decision. DevCloud with the oneAPI beta is also now available.

Comments

  • peevee - Monday, December 30, 2019 - link

    "Xe contains two fundamental units: SIMT and SIMD. In essence, SIMD (single instruction, multiple data) is CPU like and can be performed on single elements with multiple data sources, while SIMT (single instruction, multiple threads) involves using the same instructions on blocks of data"

    That phrase makes absolutely no sense. "CPU-like" SIMD executes the same instruction on multiple data elements, not on "single elements".
  • peevee - Monday, December 30, 2019 - link

    What the H Lenovo, a Chinese company, is doing developing a critical tool for top-secret projects within DoE?
  • henryiv - Thursday, January 2, 2020 - link

    Thanks for the great article. DPC++ stands for data-parallel c++ btw (which is basically SYCL implementation of Intel).
  • Deicidium369 - Wednesday, January 27, 2021 - link

    Xe HP was shown with 4 tiles and 42 TFLOPS so each tile = 10.5 TFLOPS at FP32 or half of that for FP64. Assuming FP64 is the most likely

    Xe HPC has 16 Tiles x 5.25 TFLOPS per tile = 84 TFLOPS per Xe HPC. There are 6 Xe HPC per sled = 504 TFLOPS per sled or roughly 0.5 PFLOPS - so ~2000 sleds needed for 1 ExaFLOP FP64.

    2000 sleds - 20 sleds per rack = 100 racks at FP64

    230 Petabytes of storage at the densest config 1U = 1PB so 230 1U 1PB - 230 U = less than 6 racks...

    Even if using 2.5" would not need more than 20 racks for storage

    So 100 rack cabinets of Compute + 20 rack cabinets to reach 1 ExaFLOP and 230PB - Networking could be 1-2 racks - not sure the water cooling components are in standalone racks or not. So 122 Cabinets + ??? for cooling.
