Moore's Lobby Podcast

Quantum Computing: Sci-Fi Technology Requires Real-World Engineering

Episode #71 / 55:52 / February 13, 2024 by Daniel Bogdanoff
Episode Sponsor: Mouser Electronics

As CTO of IBM Quantum, Oliver Dial leads the development of the world’s most advanced quantum computers. He discusses the challenges of building quantum ICs, operating them at cryogenic temperatures, the future of quantum computing, and more.

A decade after demonstrating the first entanglement of semiconducting spin quantum bits, or qubits, Oliver Dial and IBM Quantum are developing the ICs, cryogenic systems, error mitigation techniques, and software tools that will identify solutions to problems beyond the scope of classical computers. Recently, the IBM Quantum team announced the Heron 133-qubit and Condor 1,121-qubit quantum processors, and Dial joins us to talk about a subject that he loves.



IBM's Heron 133-qubit quantum processor. Image used courtesy of IBM Quantum


The highlights of this conversation between Dial and our Moore’s Lobby host, Daniel Bogdanoff, include:

  • A comparison of quantum and classical (Turing-equivalent) computing systems.
  • Temperatures down to 0.01 kelvin (brrr!) and noise temperatures of 30,000 kelvin (hotter than the sun, but not really).
  • An audio symphony of quantum circuits running computations from around the world.
  • Qubits are probably much bigger than you would expect.
  • Why packaging engineers are the unsung heroes of the semiconductor and quantum industries.
  • Semiconductor engineers telling quantum engineers, “you guys are doing these all wrong.”
  • The technology advance in the newer Heron processor that Dial is most excited about.


A Big Thank You to Our Sponsors!


Meet Oliver Dial

Oliver Dial has performed pioneering research on semiconductor singlet-triplet qubits and quantum Hall experiments. His experience in quantum physics helped shape the development of IBM's 20-qubit quantum processor, which, at its release, was the world's most advanced quantum computer. Today, he continues to lead IBM Quantum’s efforts to scale superconducting quantum processors.



Oliver led the standardization of how quantum experiments are run and recorded. This formed the foundation of IBM's quantum backend code. He is acknowledged as one of IBM's, and the world’s, leading experts on quantum hardware.

Oliver has a BS in Physics from Caltech and a PhD in Physics from MIT. As a postdoctoral fellow at Harvard, he demonstrated the first entanglement of semiconductor spin qubits.



The following transcript has been edited for clarity.

All About Circuits’ Daniel Bogdanoff: From EETech Media and All About Circuits, this is Moore's Lobby. I'm your host, Daniel Bogdanoff.

Today in the Lobby, I'm joined by Dr. Oliver Dial, the CTO of IBM Quantum. He's well known for his work on coherence, gate fidelity, and increasing yield in large superconducting quantum processors. He shaped the development of IBM's 20-qubit quantum processor and has led work to standardize how quantum experiments are run and recorded, ultimately forming the basis of IBM's quantum backend code.

This episode is adapted from an Industry Tech Days keynote. I guess a technically poor metaphor is this is a superposition of keynote and podcast episodes, but let's get going.

Dr. Oliver Dial, thanks for being here. I'm really excited for this.

Oliver Dial: I'm delighted to be here. Thanks for the opportunity.


Entry Into the Quantum Industry

DB: I'm curious, how did you first get into science and physics and technology?

Dial: Well, I guess for me, computers were just always these really amazing things. The way you could program them, and they would go off and do things. But what was really exciting to me, mostly when I was in high school, was the fact that computers could control things outside of themselves. You could hook one up to a motor, and it could move something. You could hook one up to a camera, and it could take pictures.

I guess all the computers have cameras now, but at the time, that was something special. And so I was really excited about sort of how computers could look at and manipulate the world around them.

Then when I went to college, I went into physics, kind of thinking that was the coolest science I knew of, kind of how the universe works. But my interest in how computers interact with the world kind of pulled me back into more electrically oriented physics experiments. It pulled me into condensed matter physics, which is the study of semiconductors and metals and the electronic properties of complicated materials. And so then it kind of all came together in these really big computer controlled experiments that were very complicated, lots of measurements, lots of things going on.

And then, ultimately, that turned into me studying quantum computing.

DB: It almost sounds like before quantum computing was a thing, you were setting yourself up to be really good at quantum computing research.

Dial: It was sort of an accident. But I can certainly tell you that when I look at the team around me, we have a lot of people who are really interested in the intersection between technology and science and the interplay between those things.

DB: Can you talk me through your academic career? Because it's quite the CV.

Dial: That wasn’t done on purpose, I promise you! So, for my undergraduate, I went to Caltech, which, sorry, if you want to study physics, that is definitely the place to go. But while I was there, I got kind of wrapped up in semiconductor fabrication, both from some undergraduate research projects as well as some opportunities working with some lab groups there.

I actually worked with a guy, Axel Scherer, whose specialty is nanofabrication. But he described himself as kind of running the machine shop for the nanoscale world. He just wanted to make things.

And so then, from there, I went on to MIT, where I was studying, as I said, condensed matter physics. I was studying something called the quantum Hall effect, which is really cool. It's what happens when you have electrons in really high purity, high mobility structures, kind of like in a field effect transistor channel, but also in extremely strong magnetic fields. The magnetic fields are so strong that the electrons go in circles that are small compared to the distance between the electrons. So it's a really fascinating piece of physics, and also completely useless in the real world.

DB: <Laughs>

Dial: As I was wrapping up my Ph.D., though, I had kind of this background of experience with electrons and really cold structures and high magnetic fields and also microwave manipulation of things inside of cryostats, equipment that brings electronic devices down to temperatures just above absolute zero.

And so, at the time, there were a set of really cool experiments that were getting done in something called quantum dots, which are little, teeny tiny transistors where you trap just exactly one electron inside of them, and looking at what happens when you manipulate that last electron that was left. And so it kind of felt like a natural extension of what I've been doing.

But at the same time, I kind of unwittingly brought myself into quantum computing. Because those exact same quantum dots you can relabel as a spin qubit, and it becomes a fundamental piece of a quantum computer.

So that transition to quantum dots actually happened at Harvard, which was great for me at the time, because it meant I didn't have to move. I left the house, and instead of turning left to go to MIT, I turned right to go to Harvard. And I spent a couple of years doing that.

Then finally, I came to IBM to study superconducting qubits, and that's where I've been for the last ten years.


Quantum Computing Is Not Faster Classical Computing

DB: So, let's dig into the basics of quantum computing. For those who aren't familiar, how do you describe a quantum computer to people who are trying to understand it?

Dial: Well, I think it's a lot easier to talk about what a quantum computer can do than to talk about the details of how it works. The first thing a lot of people ask me is, “Is quantum computing the next classical computer? In 2030, is my MacBook Pro going to have a quantum processor?” And I think the answer is probably not.

The reality is that every computer people have ever made, from around World War II (you know, Alan Turing figuring out how to do computation) up until today, is equivalent in some sense. A problem that's possible to do on one of these computers is possible to do on any of them. In fact, it's called Turing equivalence. Something that's hard for one of these computers is hard for all of them.

So the neat thing about quantum computers is that they are not Turing equivalent to your classical computer. There are problems which are easy for a quantum computer, that are hard for a classical computer. And so despite the fact that there's a lot of overhead associated with building a quantum computer (that everything, all the ones we know how to build today, involve some really fancy physics and technology, and they're big, and they're power-hungry) if you're interested in solving one of those problems, the quantum computer can do it exponentially more quickly.

And so there are certain problems, like simulating chemistry and simulating materials, where mankind can just never hope to solve them with a classical computer, even if we made it as big as the universe, whereas with a moderately sized quantum computer, these things become possible. So it's not a replacement for your classical computer. What it is is more like a coprocessor. It's an accelerator for these types of problems. And I think it's going to be an essential part of any supercomputer that we build in the near future, actually.

DB: Yeah, that's one of the areas I see people get really confused, is it's like, oh, it's just another type of computer that's more powerful, and it's like, no, you just got to come at it from a totally different angle.

Dial: Yeah, it's not a step along the road, it's a branching. There are classical computers and there are quantum computers, and each is the right thing to use in its own problem domain.

Describing IBM’s Superconducting Transmon Qubits

DB: And I understand that there are a number of different types of qubit architectures and approaches that people are taking. Can you talk me through IBM's qubit architecture and philosophy?

Dial: Yeah, absolutely. So the essence of a quantum computer is you need to be able to take a little bit of information and store it in a quantum state. And there's a couple of key things about that quantum state.

One of the big ones is superposition. For a classical bit, it could be one, or it could be zero. But it's definitely one of those two things. Hopefully, unless your computer is broken.

For a quantum computer, we could be in a superposition. We could be in one plus zero or one minus zero. You may have heard of the spooky quantum mechanical thing that can be in more than one place at the same time; this is the spooky quantum mechanical bit that can be in more than one state at the same time.

Now, you may have heard about wave function collapse. The idea is that if I have something in two places at the same time, if I look at it, then it collapses into one state or the other. We actually have the same problem with qubits. If you look at the state of one of our qubits, it's not in that superposition anymore. So it becomes one or zero. And your computation is totally ruined.

So the thing that all the different types of qubits people build have in common is that it's possible to isolate them exquisitely well from the rest of the universe. You need to keep everything in the universe from looking at your computation while it's ongoing. There are a couple of different approaches for that.

At IBM, we use something called a superconducting transmon qubit. Which sounds very, very fancy. But it's really just an LC oscillator. There's a capacitor that's a piece of superconducting metal on a chip. And there is an inductor that's provided by kind of a weird circuit element called a Josephson junction. But you can really think of it just as a slightly nonlinear inductor.

And then the one state of our qubit is when we put exactly one photon into that LC oscillator. And the zero state is when we put zero photons into it. And because we've made everything out of superconductors and out of dissipationless materials, these LC oscillators have pretty amazing Qs. The best devices we have available to outside users have Qs of about 30 million.

So if you could imagine using that in a filter design. It's pretty cool.

DB: <Chuckles> 30 million?!

Dial: Yeah, 30 million. And that's, by the way, not what's in our commercial product. In the research lab, we do a little bit better than that.

So you have these little superconducting LC oscillators. Each one of those is a qubit. We keep them at ridiculously low temperatures, about 10 millikelvin. That's 0.01 kelvin. For reference, deep space is about three kelvin. So this is orders of magnitude colder than deep space.

And the reason why we do that is, if there's a thermal photon, microwave radiation—everything glows at microwave wavelengths at room temperature—that comes in and interacts with our qubit, it can actually mess with the state. And so by cooling it down so cold, we protect it from the rest of the universe.

Then finally, we design all this stuff to work at microwave frequencies. We typically make these LC oscillators to have resonances at around 5 GHz. The reason for this is we can benefit from everything that's been developed for telecommunications. We can use all of that stuff to build good microwave sources, to build good isolators, to build good amplifiers. And then we manipulate these devices by sending microwaves into them and measuring microwaves coming back out.

So, that’s kind of a whirlwind tour.


Quantum Entanglement and Quantum Logic Gates

DB: So, speaking of sending microwave pulses in and reading them out, can you talk me through the theory of entanglement and, on a more practical level, what that looks like for you?

Dial: Absolutely. So, on the theory side: we talked a little bit about superposition. The qubit can be in one or in zero, right? Or it can be in a superposition of those two things. That alone isn't enough to get ourselves the kind of exponential advantage that you get out of a quantum computer.

But what does get you that advantage is if I bring in a second qubit, I now have four states available. They could both be in zero, they could both be in one, or it could be 10 or 01, or it could be a superposition of all four of those states. So, by adding a second qubit, instead of having two numbers to describe the state (one to describe the amplitude and one to describe the phase), I now need four complex numbers to describe the state of the system. And the fact that I can be in those correlated states, like 00+11 or 00-11, is what we call entanglement. Because if I somehow, magically, knew the state of one of the qubits, it would also tell me the state of the other one, even though each of them individually is in an indeterminate state.

As you add more and more qubits, that causes the number of numbers that I need to describe the system to grow exponentially. And that's really where the power of these quantum computers comes from. So that's like the theory of entanglement.
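Dial's bookkeeping is easy to make concrete. The sketch below (an editorial illustration using NumPy, not IBM's software) builds an n-qubit state vector with the tensor product and shows that each added qubit doubles the number of complex amplitudes a classical simulator must track:

```python
import numpy as np

def n_qubit_zero_state(n):
    """Return the |00...0> state of n qubits as a dense vector of 2**n amplitudes."""
    state = np.array([1.0, 0.0], dtype=complex)  # single-qubit |0>
    for _ in range(n - 1):
        # Each tensor product with another |0> doubles the vector length.
        state = np.kron(state, np.array([1.0, 0.0], dtype=complex))
    return state

# Each added qubit doubles the number of complex amplitudes: 2, 4, 8, ...
for n in (1, 2, 3, 10):
    print(n, "qubits ->", len(n_qubit_zero_state(n)), "amplitudes")
```

At 30 qubits the vector already holds about a billion amplitudes, which is why a system of roughly 100 quantum objects is beyond any classical simulation.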

Now, in practice, to get entanglement, what I need to be able to do is the equivalent of a Boolean operation that has two bits of input, like an AND or an XOR. It turns out the rules of quantum mechanics don't allow you to destroy information, though. So AND and OR are off limits, because with AND, I can't recover what the two input bits were. But something that looks a little bit like an XOR is absolutely fine.

The main one we use is something called a controlled NOT gate, where if we have one qubit in the ground state, we do nothing to the other qubit. If it's in the excited state, we flip the other qubit. You can see that it doesn't destroy information. So if I do this twice, I come back to the same place I started. The same as kind of doing a bitwise XOR between things.

So, to make a Bell state—to make this entanglement—an easy way to do it is I prepare one qubit in a superposition of zero and one, and I do a CNOT on the other one. So if the first qubit was in the ground state, the zero state, the other qubit stays in the zero state. If it was in the one state, the other qubit goes into the one state. And so I've made this entanglement.
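Dial's recipe—superpose one qubit, then apply a controlled NOT—can be sketched with plain NumPy matrices (a toy state-vector illustration, not IBM's actual stack):

```python
import numpy as np

# Single-qubit |0> and the Hadamard gate, which makes (|0> + |1>)/sqrt(2).
ket0 = np.array([1, 0], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

# CNOT on two qubits (control = first qubit): flips the target iff the control is 1.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

# Put the first qubit in superposition, then entangle with the CNOT.
plus_zero = np.kron(H @ ket0, ket0)   # (|00> + |10>)/sqrt(2)
bell = CNOT @ plus_zero               # (|00> + |11>)/sqrt(2), a Bell state
print(np.round(bell, 3))

# Doing the CNOT twice brings you back where you started, like a bitwise XOR.
assert np.allclose(CNOT @ (CNOT @ plus_zero), plus_zero)
```

The final vector has equal amplitude on 00 and 11 and none on 01 or 10: exactly the correlated state Dial describes.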

Now, the kind of magical thing here is my gate somehow has to do that without my ever finding out what the state of either qubit was. Because, again, we have to do this without looking at the system. So I can't just measure one qubit and flip the other.

For our qubits, we currently use a gate called cross resonance to do this. And it's actually kind of cool. Our qubits are LC resonators that have frequencies associated with them, right? One might be 5 GHz, and one might be 5.1 GHz. So if I drive a microwave signal at that 5.1 GHz frequency, but I send it into the 5 GHz qubit, it picks up a little bit of an amplitude and phase shift depending on the state of that 5 GHz qubit. And so it'll do a slightly different thing to that 5.1 GHz qubit.

And so if I really carefully calibrate the amplitude and the phase of the signal and actually some correction signals I apply to that other qubit, then I can get it to flip. Flip, to me, means adding a photon to that 5.1 GHz qubit only if the 5 GHz qubit was in its ground state.

So, at some high level, it looks a lot like a kind of a quadrature amplitude modulation scheme, where instead of talking across a communication link, now I'm talking to qubits. I'm sending in signals with the right amplitude and the right phase to manipulate the quantum system in exactly the way that I want.

Now, if it sounds too easy, you have to remember we're putting thousands of these qubits on one chip millimeters apart. And so we have an enormous problem just engineering this chip to have the right properties—to have low enough microwave crosstalk that we can really send these signals only to the qubit that we want to, and so that the capacitance matrix that couples all these qubits together is exactly what we wanted it to be, what we designed it to be—to get these gates to work. And so there's a huge microwave engineering challenge around building the chips to allow you to do this in the first place.

DB: As a test gear aficionado, I've been in the same room as the control systems, and it's just racks of SMA cables, and it's a sight to behold, for sure.

Dial: You need to come to our labs again. That's changed, actually.

DB: Okay, good to know!

Dial: Yeah. Because ultimately, we're trying to build these systems bigger and bigger at a higher scale. And part of that means reducing the component count. And so those racks and racks of SMA cables are gone.

These days, we have gauge connectors going to the fridges, and our control systems are more compact and more integrated. And it's just part of the work that we're doing as we try to bring these devices kind of out of the physics lab and into the data center to bring down the price, bring up the reliability, increase the quality, the scale, and the speed of our hardware.


Quantum Noise Sources and Error Mitigation

DB: Speaking of quality and scale, I understand noise is a big issue for qubits and quantum systems in general. Can you talk me through the challenges of noise, and maybe give a high-level picture of error correction? I think we'll get into that more in a little bit, but just paint the picture for us.

Dial: Yeah, absolutely. So, physicists use the word noise really weirdly. Let me warn you to start with that. Noise is sort of a catch-all that we use for all the bad things that can possibly happen.

We've actually already talked about one a little bit—microwave photons, just because everything glows at microwave wavelengths. If one comes in, it interacts with the qubit. It can measure the state of it. It can destroy that superposition. We call that dephasing.

And you can kind of understand that: as the microwave photon comes in, because our qubits are ridiculously nonlinear, it causes a little phase shift in the oscillation of the LC oscillator. So you can think of the qubit as a clock that starts to run a little bit fast or a little bit slow.

The second thing we've actually alluded to as well, we talked a little bit about the quality factor of the qubits. Think of that as a Q of 30 million. That simply says if I put a photon into my LC resonator (remember that photon meant it was in the one state), how long does that photon hang around? A Q of 30 million for us corresponds to about a 300 microsecond time period that that photon hangs around for.
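The link between quality factor and lifetime is a one-line calculation. As a back-of-the-envelope sketch (an editorial illustration; the ~5 GHz qubit frequency is the figure quoted earlier in the interview, and the standard relation for a resonator is T1 = Q/ω = Q/(2πf)):

```python
import math

Q = 30e6   # quality factor quoted for the best devices
f = 5e9    # assumed qubit frequency in Hz, from earlier in the interview

# Photon lifetime of a resonator: T1 = Q / (2 * pi * f), in seconds.
t1 = Q / (2 * math.pi * f)
print(f"T1 ~ {t1 * 1e6:.0f} microseconds")  # -> T1 ~ 955 microseconds
```

With these assumed numbers the lifetime lands near a millisecond; the ~300 microsecond figure Dial quotes implies a somewhat different frequency or Q convention, but the scaling is the point: higher Q means the photon, and hence the quantum state, survives longer.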

If the photon decays away, either because it radiates out into the environment or because we have some microwave-lossy material in the vicinity of our qubits, obviously, that destroys the quantum state as well. So when you're doing these computations, it's sort of a race between how fast you can manipulate the qubits and run all these operations (we talked about running CNOTs with microwave gates, and there are other microwaves you apply to do different things) and the information leaking away from things like decoherence or from the photon radiating away.

Ultimately, that ratio (kind of how fast you can go to how long things last) sets an error rate that determines how complicated of a computation you can run on these machines before you start getting the wrong answer. And we talk about that kind of generically as quality to remind ourselves that we can do better by going faster, or we can do better by making the qubits better. Or, in the best of all possible worlds, we do better by doing both.

Now, there is a limit to how far we can go. There are some things that are just really hard to change about life. Materials have microwave loss. We've made a lot of progress on that, but it's still there.

But also, the control signals that we use aren't perfect. And so sometimes when we try to, say, do that CNOT gate, we'll get it a little bit wrong. And so instead of getting a 180-degree rotation from zero to one, we'll get a 179-degree rotation. That brings in a third type of noise, which is control noise from our electronics.

And, hey, the electronics we use are amazing! The communication industry has been working on this for decades and decades, and there is just a limit to how accurate we can ever get that to be. So in the face of these kinds of hard limits on our error rates, on our noise, there are a couple of approaches you can use to try to get a perfect answer out of a quantum computer anyway.

One of them is a series of techniques that we call error mitigation. Mitigation, meaning you don't really solve the errors, you just take care of their effects. And in error mitigation, what we do is we very carefully measure the types of errors that one of our quantum processors makes, and then we give ourselves a way to correct for those errors in post-processing.

It turns out that you can do that if you're willing to run the same problem not once, but millions, tens of millions, or hundreds of millions of times, with just very slight variations in how you run it each time, so you can take out the systematic bias from the errors. And so that's kind of one approach. It's an approach that we used recently in our utility paper. I don't know if you're aware of that work, or have you seen it?

DB: Why don't you talk me through it?

Dial: So, in that paper, they were simulating—actually, going back to my roots in condensed matter systems, which I love—a condensed matter system that's hard to simulate on a classical computer. It's 100 spins interacting in a magnetic field. And you're going to hear that 100 number a lot here. The reason is it's sort of a magic number: if you have a system of about 100 quantum objects, it's safely big enough that there is no way any classical computer is ever going to simulate it in detail. So we kind of like to say inside the team, if you're not using at least 100 qubits, you're not doing quantum computing. That's sort of the threshold.

DB: <Laughs> That's a nice position to be able to take, right?

Dial: Yeah, we go up to 1000 now. So, just saying. Anyway, they were simulating about 100 spins in a magnetic field, and using these error mitigation techniques, they were able to get accurate estimates as to exactly how those spins would evolve as a function of time.

Which is a problem that only a physicist could love. But the really cool thing is, it's a problem that you can't exactly simulate on a classical computer. So they used a technique called probabilistic error amplification, which is one of these error mitigation techniques. They ran the problem on our quantum processor, and then, because they knew exactly how much noise there was, they added twice that much noise, three times that much noise, and four times that much noise. Then they could extrapolate back to exactly what the answer would have been in the absence of any noise. So it's pretty cool. It's solving this exactly.
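The extrapolation step Dial describes can be sketched in a few lines (a toy with made-up measurement numbers, not IBM's probabilistic error amplification machinery): measure the same observable at deliberately amplified noise levels, fit a simple model, and extrapolate to zero noise.

```python
import numpy as np

# Noise amplification factors (1x = the hardware's native noise level) and the
# expectation values hypothetically measured at each level.
factors = np.array([1.0, 2.0, 3.0, 4.0])
measured = np.array([0.90, 0.80, 0.70, 0.60])  # degrades linearly in this toy

# Fit a low-order polynomial in the noise factor, then evaluate at zero noise.
coeffs = np.polyfit(factors, measured, deg=1)
zero_noise_estimate = np.polyval(coeffs, 0.0)

print(f"zero-noise extrapolation: {zero_noise_estimate:.2f}")  # -> 1.00
```

In this toy the noise-free answer (1.0) is never measured directly; it is recovered purely from how the answer degrades as noise is dialed up, which is the essence of the technique.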

But the really amazing thing to me is that within a month of that paper coming out (there are classical approximate ways of describing this system), there were half a dozen, maybe a dozen, papers using different approximate techniques to simulate this exact same system and giving answers that agree with what we got out of the quantum computer.

And so, to me, that's just really cool, because it means we're able to confirm that the simulation of the system was giving us the right answer. So we're just beginning to get our fingertips on the place where the quantum computer is doing things that classical computers never can. And as we go a little bit further, we start to move into problems where there aren't these approximate classical solutions available. That's when we'll really start to see these machines take off.


Scale, Quality, and Speed: 3 Key Factors For Quantum Computing

DB: So let's talk qubits. Everyone likes to throw around their number of qubits on their quantum computer. At what point do you feel like there's a viable quantity or is quantity not the thing we should be talking about?

Dial: Well, as I said before, we usually talk about scale, quality, and speed. Scale is the number of qubits. If you don't have 100 qubits, you're in a domain where there is a very good chance a classical computer can simulate what you're doing. You really should not be using a quantum computer at that point.

We talked about use classical computers for classical problems, quantum computers for quantum problems. You need to focus on the quantum problems.

Quality. These are the error rates that we were talking about at the beginning. The higher that quality is, the more operations you can run on those qubits before you get a bad answer. And so that tells you how complicated of a problem you can tackle. That's what really gets you out of the regime where these classical approximate solvers work. And so we need to improve that as well.

We have a goal next year for something we call the 100 by 100 challenge, which is to demonstrate a circuit on a device that is 100 qubits wide. So it uses 100 qubits, and it runs 100 operations deep. We think with error mitigation, that's going to be just barely possible. And we're pretty sure that that's going to bring us into this regime of quantum advantage, where we're solving useful problems on a quantum computer that we can't yet solve on a classical computer.

Then finally, speed. I mean, for a tech guy, speed should be pretty obvious, but it's worth noting that error mitigation lets us trade off speed for scale and quality, because it involves running extra circuits. And so it's really essential to making our systems give useful answers in the near future.

DB: We'll be back with Oliver in a moment, but first, a word from the sponsor for Industry Tech Days’ keynotes, Mouser Electronics. As your design fulfillment distributor, Mouser is your authorized source for millions of components from thousands of leading brands. Come discover, design, develop. And now back to Oliver Dial, IBM Quantum's CTO.

Why the Weird Numbers of Bits in Quantum Processors?

DB: So, I was looking at the number of qubits in your systems: Eagle with 127, Osprey with 433. As a classically trained electrical engineer, I don't have a numbering scheme that gets me to 127. Can you talk me through where those numbers are coming from?

Dial: Yeah, it starts to seem like numerology at some point. So, where these come from is actually something that goes a little bit beyond quantum error mitigation to something called quantum error correction, which is the idea that instead of storing a qubit of information in one physical qubit, you could take that same information and spread it amongst many qubits. And then, using parity checks between those qubits, you could detect errors, much the same way we use parity checks to detect errors in memory and communications all the time.
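The classical analogue Dial invokes is easy to sketch (quantum parity checks must avoid measuring the data qubits directly, but the bookkeeping is similar): spread one bit across three, then use pairwise parity checks to locate a single flip.

```python
def encode(bit):
    """Three-bit repetition code: one logical bit stored in three physical bits."""
    return [bit, bit, bit]

def syndrome(bits):
    """Two parity checks; together they locate any single bit flip."""
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

def correct(bits):
    """Use the syndrome to identify and undo a single flipped bit, if any."""
    s = syndrome(bits)
    flipped = {(1, 0): 0, (1, 1): 1, (0, 1): 2}.get(s)  # which bit to fix
    if flipped is not None:
        bits[flipped] ^= 1
    return bits

word = encode(1)        # [1, 1, 1]
word[2] ^= 1            # a single error: [1, 1, 0]
print(correct(word))    # -> [1, 1, 1]
```

Note that the parity checks never reveal the logical bit itself, only where the disagreement is; the quantum versions are built around the same principle.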

These checks are a little bit complicated because you have to figure out how to do them without looking at the qubits (going back to the rule that the universe can't look at your qubits). But there's a whole bunch of error correcting codes that people have developed. Our processors in particular, and this is where a lot of these numbers come from, implement particular error correcting codes on what we call heavy-hex lattices. These are lattices where the qubits are connected in a hexagonal pattern, but with an extra qubit, like a bead on a string, on each edge between hexagons. You can look at one of the pictures of our processors if you want that to make a bit more sense.

But these error correcting codes have particular distances (related to the number of errors they can handle), and they need various really weird numbers of qubits to work. So, for example, the 27-qubit Falcon processors implemented a distance-three error correcting code. Now, we don't actually think that heavy-hex codes are very useful because they have a pretty high overhead. The fact that you need 27 qubits in that Falcon processor to, in principle, store just one logical qubit sort of gives you a hint of how big that overhead is.

On the other hand, that 127-qubit Eagle processor is a distance-five version of the same code, so it can handle more errors. We think the overhead of these codes is too high to be practical in real life, though, and in practice, we don't actually use that feature of our processors. We focus instead on error mitigation.

But a really exciting advance this year has been in what we call good LDPC codes, which are a much more efficient category of error correcting codes. We think we can actually use about ten times fewer physical qubits to encode each logical qubit. So this isn't a technology for this year or next year; it's not the technology we're going to see our first instances of quantum advantage with. But I think it's something that we're definitely going to see by the end of the decade.

And to go back to your number question, the favorite code of our team, at least right now, is something we're internally calling the gross code, because it involves 144, or one gross, of qubits. It certainly caused some heads to turn when people started walking down the hallway talking about, “Ooh, what's that gross code?”

DB: <Laughs> Only a group of physicists would get so excited about numbers like that.


What Really Happens When You Run a Quantum Computing Program?

DB: I've played a bit with IBM's online quantum computer interface. If I hit run on something I've built, what's happening?

Dial: Okay, that's at least a slightly more bounded question. So you've hit run on some quantum circuit. The first thing that will happen relates to what we talked about with the CNOT gate and the way we run a signal from one qubit to another. We don't have wires that connect every qubit on our chip to every other qubit, but there are wires that connect individual pairs of qubits. That leads to what we call the device topology.

So if qubit a and b are connected by a wire, you can run a gate between them. But if a and c are not connected, you can't. And so there is a piece of code called the transpiler that takes that into account. And so if you ask for a gate between qubits a and c, first it'll swap a and b, then it'll run it between b and c, then it'll swap a and b back again.
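To make the routing idea concrete, here is a minimal sketch, not IBM's actual transpiler (which uses far more sophisticated passes): find a path between the two qubits on the coupling map, swap the state along it, apply the gate on a connected pair, then undo the swaps. All function and variable names here are illustrative.

```python
from collections import deque

def shortest_path(coupling, src, dst):
    """BFS over the device's qubit-connectivity graph."""
    adj = {}
    for a, b in coupling:
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    prev = {src: None}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        if u == dst:
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in adj.get(u, []):
            if v not in prev:
                prev[v] = u
                queue.append(v)
    raise ValueError("qubits are not connected on this topology")

def route_cx(coupling, a, c):
    """Emit (gate, q1, q2) ops realizing CX(a, c) using only coupled pairs."""
    path = shortest_path(coupling, a, c)
    # Swap a's state along the path until it sits next to c...
    swaps = [("swap", path[i], path[i + 1]) for i in range(len(path) - 2)]
    # ...run the gate on a physically connected pair, then undo the swaps.
    return swaps + [("cx", path[-2], path[-1])] + swaps[::-1]

# Linear topology a-b-c (qubits 0-1-2), gate requested between 0 and 2:
ops = route_cx([(0, 1), (1, 2)], 0, 2)
# ops = [("swap", 0, 1), ("cx", 1, 2), ("swap", 0, 1)]
```

A real transpiler also minimizes the total number of inserted swaps across the whole circuit, which is where the register-allocation analogy Dial mentions comes in.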

You can think of this as kind of a register allocation problem, but on a 2D lattice. So that actually happens on your computer. After that, it gets passed off to a second piece of software called the Qiskit Runtime that lives on our computer.

That piece of software is responsible for applying techniques like error mitigation to your circuit. It's also responsible for taking that circuit, which is an abstract series of steps like run a CNOT here, run an X gate here, run a Z gate there, and turning it into what we call a scheduled circuit. This is where it figures out exactly how many nanoseconds each one of those steps is going to take, and works through the problem of making sure that we don't try to do two different things to the same qubit at the same time.

That then gets passed on to what we call our quantum engine. And that does what is, for me as a physicist, the most fun part of this: it converts these sort of abstract mathematical objects, these gates, into microwave pulses that are actually going to get sent down to the device. It looks at a calibration database that has all the right phases and amplitudes and frequencies to use, and it turns that into a series of programs for what we call AWGs, or arbitrary waveform generators. These are machines that can emit relatively arbitrary microwave pulses within some range of frequencies and powers.

Those then get transmitted to those arbitrary waveform generators, which are those racks and racks of equipment that you were talking about earlier. And then at some point, we trigger them all, we hit go, and they start firing off the sequence of pulses that go down a series of cables into our dilution refrigerator, down into this milli-kelvin environment.

Now, the AWGs are noisy—they have Johnson noise and they have shot noise. In physicist terms, we like to say they're about 30,000 degrees kelvin, so hotter than the surface of the sun. That's just in terms of their noise temperature; they're not actually that hot.

DB: I’ve never heard noise described that way!

Dial: Yeah, physicists. But it's kind of a handy term to use for us, because it turns out that you can then talk about attenuation. If I take a signal and I attenuate its power by a factor of ten (so ten dB of attenuation), I reduce that noise temperature by a factor of ten. So from 30,000 degrees kelvin down to 3,000 degrees kelvin.
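In the simplified picture Dial uses here, each 10 dB of attenuation divides the source's noise temperature by ten. A tiny sketch of that arithmetic (this deliberately ignores the thermal noise the attenuators themselves contribute, which real fridge designs must account for; the 55 dB figure below is just an illustrative total, not an IBM spec):

```python
def noise_temp_after_attenuation(t_noise_kelvin, attenuation_db):
    """Simplified model: attenuating power by a factor A = 10**(dB/10)
    also divides the source's noise temperature by A.
    Ignores the noise the attenuator itself adds."""
    return t_noise_kelvin / 10 ** (attenuation_db / 10)

# 30,000 K room-temperature electronics; with ~55 dB of total attenuation
# spread down the fridge, the noise temperature lands near the ~0.1 K
# scale of the quantum device.
t_at_device = noise_temp_after_attenuation(30_000, 55)
```

Each factor of ten (10 dB) takes 30,000 K to 3,000 K, then 300 K, and so on, which is why the attenuation is staged to match the temperature stages of the fridge.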

So as we go into the fridge, going from room temperature down to this ten milli-kelvin stage, we have attenuation that's matched to the temperature changes. And what that's doing is attenuating the signal and bringing down that noise temperature, until, finally, when it reaches the quantum device, it's at a noise temperature of about 100 milli-kelvin (0.1 degrees kelvin), which is cold enough that it's not going to mess up the quantum state of our device.

And so there's an entire infrastructure around getting that attenuation to work correctly. If those attenuators heat up, obviously, we’ve kind of lost the game. But also, refrigerating at 10 milli-kelvin is expensive. If you've seen your air conditioning bill at home, now imagine if you were trying to keep your home just a little bit above the temperature of absolute zero!

And so we take insulation very, very seriously in these systems. We need to make sure that these wires don't conduct too much heat into the fridge.

Now, then, these microwaves go down and interact with our quantum device; and some little itty-bitty, tiny signals come out that are the signs of the quantum state that we want to measure at the end of our computation. We need to get those back out of the fridge.

So that goes through a chain of amplifiers. The very bottom amplifier is what we call a quantum-limited amplifier, which, given the context of the conversation, you can probably guess is an amplifier that works as well as the laws of quantum mechanics allow it to. Which is pretty cool.

That goes up through a series of more conventional microwave amplifiers until it finally comes back and gets digitized. We figure out what those signals said the qubits did. We put that actually into a JSON file. I'm sorry! <laughs> And then we send that to you so that you get the results of your circuit.

But to you, from your viewpoint, you just go to your Python notebook and hit shift-enter, and this entire lab of microwave equipment and signal sources and everything reconfigures itself to your whim.

Now, one thing I kind of missed is that with our oldest generation of electronics, there is actually a relay in the AWG that was used to blank the output when you're reprogramming it. And so whenever somebody was running a circuit, which is, by the way, nonstop on these systems, you'd walk into the lab, you just hear, tick, tick, tick, tick, tick, tick. This clacking noise as all these AWGs turned on and off and on and off as people around the world were running circuits on our quantum devices. Unfortunately, our new AWGs are silent, so we don't get to hear that anymore. But it used to be fun.

DB: Yeah. A little symphony of quantum circuits running. That's very fun.


ICs for Quantum Computing

DB: One of the things that I love about quantum computing is people think about the chandelier, right? But really, the fascinating physics and science are happening at the very bottom of it, on a little IC. Can you talk me through those chips: the IC technologies, fabrication techniques, feature sizes, stack-ups, and all that?

Dial: Absolutely. So the first thing you asked about was feature size. And that actually turns out to be a little counterintuitive, because the entire semiconductor industry has been going smaller, smaller, smaller, smaller for so long.

But as we mentioned, one of the real boogeymen for our devices is microwave loss. Anything that can cause the microwave photon that stores our qubit state to get absorbed is, for us, a source of error.

Now, it turns out one of our big sources of loss is interfaces. Our qubits, as I said before, are basically capacitors right on the surface of the chip. And that means they're exposed in a way that a transistor in a CMOS process never is. These things actually live at the boundary between the chip and the air.

And there's all kinds of horrible stuff there: there's absorbed water, there's possibly processing residue, there's oxides. All of these things are sources of microwave loss, and we really hate them. We can't get rid of them completely, though, because we have to make this chip on the planet Earth in a real place. And so there's always going to be some crud there.

So what we can do, though, is take advantage of the fact that all of these are thin films, and they have a thickness that's always the same. If we make our qubit ten times bigger, we're storing the electrical energy in the qubit in a ten times larger volume, and the fraction of the volume occupied by these horrible films goes down. That drives us to make our qubits as big as possible before they start to turn into great antennas and radiate.
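The scaling argument can be made concrete with a toy surface-participation estimate (illustrative numbers only, not IBM's actual loss model): with a lossy surface film of fixed thickness on a qubit of linear size d, the fraction of the field energy stored in the film scales roughly like thickness over size, so a ten-times-bigger qubit sees roughly ten times less surface loss.

```python
def surface_participation(film_thickness_nm, qubit_size_um):
    """Toy estimate: fraction of the electric-field energy stored in a
    surface film of fixed thickness scales like thickness / size.
    Real participation-ratio calculations use full EM simulations."""
    return (film_thickness_nm * 1e-9) / (qubit_size_um * 1e-6)

small = surface_participation(3, 50)    # 3 nm film, 50 um qubit
big = surface_participation(3, 500)     # same film, 500 um (~0.5 mm) qubit
# Making the qubit 10x larger cuts the surface-loss fraction 10x.
```

This is the trade-off Dial describes: grow the qubit until the surface-loss gain is outweighed by the qubit becoming an efficient radiating antenna.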

In our devices, the qubits are about half a millimeter on a side. Now, to me, that number is just shocking in two senses. One is, this hardly sounds like microelectronics anymore. But also, we're talking about all these weird quantum mechanical things, superposition and entanglement, and we're doing this with objects that are so big you can see them with your eyes; obviously not when they're in the fridge operating, but when they're outside of the fridge. And that is like the antithesis of what I usually think of when I think about quantum mechanics. In fact, this was a huge physics result from the mid-90s, this macroscopic quantum entanglement that becomes possible in these devices.


The Signal Wiring Solution

Dial: Now, the wiring that connects them is much more aggressive, but it's still micron scale. And the main reason there is it keeps the processing cheap. It keeps the processing very reliable. Our yield is quite high in that wiring, and it simply doesn't benefit us any to shrink it much smaller than that, when we have these absolutely big honking qubits everywhere.

Now, one of the big problems in building these qubits is protecting them. We talked about thermal protection, but we also need to protect them from each other. They're great big microwave antennas, and they're on a chip where, if they could all communicate with each other, we wouldn't be able to control the system anymore.

And so, actually, we've gone through a series of packaging generations with different stack-ups for our chips over the years. Currently, we're on what we call our generation three packaging, which came out with our Eagle processors. That was, to me, a really exciting thing, because it let us really cleanly separate the classical from the quantum parts of our chips.

And in fact, I actually have a little picture of it that we can put up for you if you like. And the main thing to realize about this is it's a very simple 3D-integrated stack up.

We have one wafer that has the qubits on it—they live on the bottom side of that wafer. There is absolutely minimal processing done on that wafer, because the less that we do to it, the better the qubits are going to work.

There are through-substrate vias that go up above those qubits and connect to a ground plane. That lets us make a Faraday cage around each qubit, protecting it from the electromagnetic environment and protecting the qubits from each other.

That's then flip-chip bonded onto a carrier that has everything classical about our devices. It has the readout resonators, which are basically microwave resonators that we can use to measure the state of the qubit.

But then it also has through substrate vias that go down to what we call multi-level wiring, which looks a lot like CMOS backend-of-line wiring. It's just alternating levels of metal and dielectric that we can use to deliver signals into the center of the chip. That sounds really simple. CMOS has been doing this forever.

For us that was really important, because imagine this 2D lattice of qubits. You need to be able to get control signals to the one in the middle without touching any of the ones on the way in. Before we had that multilevel wiring down there, we had to play funny games with jogging signals up and down, back and forth between these chips, and just sort of hope they didn't talk too much to the qubits they passed by on the way. This gave us a really clean way of separating out the control wiring from the readout. And it's really what's let us scale.

If you look at the years leading up to that packaging, we went 16 qubits, 20 qubits, 27 qubits, 60 qubits. If you look since then, we've gone 100 qubits, 400 qubits, 1,000 qubits. This has really been what's given us a way to scale without worrying too much about the engineering of how many of these guys we're putting onto a chip.

DB: I've said it before, and I'll say it again, I really think packaging engineers are some of the unsung heroes of the semiconductor world because they make it all work.

Dial: Absolutely. Well, I mean, the packaging problems on quantum devices are really hard, because ultimately we require one control signal per qubit, one microwave line per qubit. And so just the I/O density into and out of these chips is one of the biggest struggles that we face on a day-to-day basis.

If you find a photo of our Osprey processor online, and there are plenty of them out there, the chip itself is about this big, but the board that routes all the signals into it is almost twelve inches on a side. So we're using almost ten times as much space for I/O as we are for the device itself.

DB: It's backwards. That's funny.

Dial: We're working on it.


The Future of Quantum Scaling

DB: So, how big can these go? You mentioned you can physically see these qubits, but I have to imagine as you get to 1000 or more, you're maybe size limited.

Dial: Yeah, well, I mean, the most fundamental limitation is that we actually build all of our quantum devices here at Yorktown Heights, and it's a 200-millimeter line. So if we wanted to get really silly, we could get these chips up to 200 mm in diameter. No one really wants to do that, though. There are yield problems associated with that.

DB: Yeah, yield for sure.

Dial: You don't want to be the guy that drops that on the ground. It just has to go <imitates sound of wafer shattering>. So really we want to limit the size of the chips to a moderate number of qubits for yield and to a moderate size because it makes processing easier and it makes manufacturing easier.

And so one of the things that we're really interested in the next couple of years, as we continue our technology development, is developing what we call modular technologies. Right now, we have to have all the qubits on one chip. If there's anything quantum mechanical going on, it happens within that chip. And once you go off the chip, things are entirely classical.

We're really interested in developing quantum mechanical technologies for coupling two chips together. We're pursuing two different variants of that. One is what we call an M-coupler. Sorry. We like acronyms. Stands for modular, but it's really just a chip-to-chip connection. So you can think of this now as giving us the ability to build multi-chip modules so that we could have each chip be smaller and have a smaller number of qubits on it. We can test it independently, and then we can assemble many of those into a larger processor.

The other one that we're interested in is what we call L-couplers. L for long. Not very creative. And there we really want to be able to take the quantum signal, put it onto a superconducting cable, an actual wire that you can pick up and bend with your hands, and run it to another processor. The reason why we want to do that is the I/O density problems.

Remember, we said there's one microwave line for every qubit on one of these chips. And so if you think about this quantum chip that's surrounded by an entire bundle of cables 500 centimeters in diameter, there's a point at which you just run out of space. And so these L-couplers give us a way to move one processor over to a different cryostat with an additional 500-centimeter-diameter bundle of cables.

So we think with these two technologies, we're going to be able to build modular systems that let us scale this technology indefinitely. Now, that's not to say that we're not also working on putting more qubits onto each individual chip, because that, of course, is where our cost-effectiveness comes in.

But that's a more conventional semiconductor scaling problem that people in this building have been dealing with for decades, and we're going to continue to make progress on it in the background.

DB: If you have modular systems or even in different fridges, do you still have to entangle all of those together? I imagine that would be a challenge.

Dial: Yeah, so it turns out there are tricks that you can use to use quantum processors in parallel, even if they're not connected quantum mechanically. The most straightforward, as we talked about, is error mitigation. You can use multiple processors to run error mitigation in parallel.

But you can also use something called entanglement forging and circuit knitting to kind of fake out the rules of quantum mechanics and make it think it's one processor, even though it's two. There's an overhead associated with that. It's exponential, kind of, in how many wires you've got between the processors. But it does work, and it's an area where we have very active research.

Actually, at our summit this year, we intend to have a series of kind of cool demos around how you can use that to use multiple processors that are connected together classically, but not quantum mechanically. But if you can get this quantum mechanical link working, and there's enough academic literature in this area that we're pretty sure we can, it allows a much more efficient computation to be shared between two quantum processors. That's really the direction that we want to go in.

DB: Okay. Modularity seems like the future for something like this, if it's doable, for sure.

Dial: Yeah. I mean, right now, essentially, every time we come out with a new processor, we very nearly throw out everything we did the year before. We just come up with an entirely new design with a new number of qubits, new packaging, new wiring, new everything. And so I think modularity is going to give us a much more effective way to continue to turn the crank in the future.

DB: When traditional semiconductor engineers and industry comes in and talks to your group, what surprises them most about what you're working on?

Dial: Well, actually, the first thing is probably just the size of the chips and the fact that everything is so huge. In fact, a lot of the time we hear something along the lines of, “You guys are doing this all wrong!” But you have to realize we're taking these incredibly sensitive systems and trying to do computations with them, and right now, it's worth almost anything to protect them more.

Another thing that surprises them is this I/O density bottleneck. And actually, it's worth noting that with our newest processor, the one that we're going to be releasing to clients this year, that I/O density problem has gotten worse.

With our Eagle processors, I talked a little bit about cross-resonance and driving one qubit through another as a way of running gates. That turns out to have a yield problem associated with it, which is actually the third thing we hear about from the traditional process engineers, that our yield is awful. The yield problem comes from the fact that the qubits have frequencies, and we can't control those perfectly. And so if you have two qubits that are sitting next to each other at the exact same frequency, that cross-resonance gate stops working, and so the device becomes defective.

We have essentially never been able to make a 100-qubit device where there's not been at least one of those frequency collisions. And so, in the harshest possible sense, our defect-free yield of our Eagle processors is zero. Not to say that they're not incredibly useful. We've done these utility demonstrations with them. There are lots of ways of working around it. But as a hardware guy, I want perfect, not zero.

And so the new processor that we're delivering this year is called Heron, and it features a new thing. It has an extra knob that actually lets us turn on and off coupling between the qubits. Because we can turn that coupling on and off, all these frequency collision problems go away.

And so we think that with these Heron processors, we're not only going to see a vast improvement in our gate fidelities (because the ability to turn the coupling on and off lets us do the gates faster) but we're going to see a much, much higher yield of kind of defect-free regions of our devices.

So I actually think this Heron processor…since that Eagle stack-up came out in 2019…is the next big technology that I'm personally the most excited about, because it's going to be such a huge step in the quality of our devices. So that's kind of what has everyone, at least in my neck of the woods, on tenterhooks right now, seeing this processor come to life.


Working at Extreme Cryogenic Temperatures

DB: So you mentioned earlier that this is really just almost absurdly cold. What surprises people or surprised you as you were learning about this and starting to work with it? Working at these temperatures, what is surprising or extra challenging?

Dial: Well, actually working at these temperatures has changed a lot, even just over the past ten years. When I first started working in cryogenic physics, all these experiments were cooled by liquid helium. And so literally, a truck would drive up to your university with a big dewar. Think of it as like a big double-walled flask full of liquid helium.

And, once a day, you'd hook a tube up between that and your experiment, and you'd transfer (siphon, actually) a whole bunch of liquid helium into your experiment. It didn't matter: day, night, Saturday, Sunday, Christmas, New Year's, you're in the lab filling this thing over and over again. It's a little bit like getting water from a well, and it's expensive and labor intensive.

I got a pretty exciting case of frostbite once when I was a little bit sloppy with one of these transfers, but it works. It also turns out liquid helium is a pretty expensive thing. You don't want to just be boiling it off in your lab in great big steamy clouds as you're running your experiment, and it's getting harder to source as demand goes up.

So one of the really nice things that's come along is something called pulse tube coolers. And they're really neat. They are closed-cycle coolers. They still use helium, but they don't send it out into the air anymore. It just cycles it back and forth and back and forth, kind of like the refrigerant in your refrigerator.

But the really neat thing about them, compared to the technologies that had been popular before, is there are no cold moving parts. And so these things just work forever. It's really changed the way we do our research to where you can walk up to one of these fridges and push a button and just kind of go home for the weekend, and when you come back, it'll be cold. So it's just a really neat piece of technology.

It also gives you this really distinctive chirping noise as it pushes that helium in and out of the fridge. And so if you walk into one of our labs, it's just this kind of chirp, chirp, chirp, chirp, all sort of syncopated, a little bit of dancing going on. It's kind of a fun noise once you get used to it.

So that pulse tube cooler gets you down to four degrees kelvin. At that temperature, the only things left that are liquid at atmospheric pressure are helium four and helium three. Everything else is frozen: air is frozen, hydrogen is frozen, oxygen is frozen, nitrogen is frozen.

And so, obviously, these are then very good vacuum systems, because even if there's a tiny leak, the first cold surface the gas hits, it sticks to, and you're done. The system pumps itself. You've got to rough it out beforehand, because if there's too much air inside, it can frost things together and make a mess of your experiment. But it's super important for getting the insulation that you need for the later stages.

Now, once you get rid of all that air that would normally be conducting the heat, what's left is radiation, like infrared radiation. And so when you open up one of our cryostats, what you see looks like a Russian doll. There's a metal can inside a metal can inside a metal can. And to reduce the heat load on it from light coming into it, they are all the shiniest gold that you can imagine. And so these are just absolutely like works of art sitting in the lab. They're beautiful to look at.

Once you go below four degrees kelvin, there are fewer options for refrigeration. At one degree kelvin—so one degree above absolute zero—that's about where helium four, the most common type of helium, stops boiling. And so to get down to one degree kelvin, what you can actually do is boil helium four into a vacuum; you vacuum pump on it and get down there. You can also get there by boiling helium three, which is a less common isotope of helium.

So if you can't refrigerate by boiling, you might say, well, how on earth do you get down to 0.1 kelvin? And this is actually a problem that was solved decades ago. The way that you do it is you effectively boil helium three into helium four. That's why these things are called dilution refrigerators, because you can think of it as diluting helium three with helium four. And that entropy of mixing gives you the cooling power.

So there's a closed cycle of helium three that goes into and out of and into and out of this fridge that provides us that last stage of cooling. From a technological viewpoint, this is a little bit frightening, because it turns out that helium three is a limited resource. The natural abundance is essentially zero. And so most of what we have in the world comes from nuclear power and from nuclear weapons production. We don't want to encourage more of the latter, for sure.

Figuring out how to run these systems so that they use less power per qubit, so that we don't need as big of a dilution refrigerator to cool these systems is another really core research area for us—that kind of going from low-power electronics to just ridiculously tiny power electronics.

Our current budget per qubit is about one nanowatt at that 0.1 kelvin, and we're trying to push that down further.
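That nanowatt budget is what sets how far a single fridge can scale. A back-of-the-envelope sketch (the cooling-power figure below is an assumed, plausible order of magnitude for a large dilution refrigerator's mixing chamber, not an IBM spec):

```python
def max_qubits(cooling_power_watts, budget_per_qubit_watts=1e-9):
    """How many qubits fit inside a fridge's cooling budget,
    assuming a fixed per-qubit power dissipation."""
    return round(cooling_power_watts / budget_per_qubit_watts)

# Assume ~20 microwatts of cooling power at the coldest stage:
n = max_qubits(20e-6)   # 20,000 qubits at 1 nW each
```

Halving the per-qubit budget doubles that count for the same fridge, which is why pushing toward "ridiculously tiny power electronics" matters as much as adding qubits.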

DB: Goodness gracious. Nanowatts, okay. Yeah, heat is tricky!


Applications of Quantum Computing

DB: As we are getting near the end of our time, what applications of quantum computing are most exciting for you personally?

Dial: Well, for me, one of the most exciting ones goes kind of back to my roots—these problems in condensed matter physics. These are problems in highly correlated systems like superconductors, where the interactions between the electrons in the material form these really exotic states of matter that nobody really knows how to model perfectly. And there's the potential to just simulate these exactly and learn a ton about them using these systems.

They're also really attractive systems to talk about because our devices are 2D lattices right now. And typically in these condensed matter systems, the interactions are sort of 2D lattices as well. So it's something that maps extremely efficiently to our quantum computers, and it's an area where we think we might have the advantage the soonest.

Now for societal impact, simulating chemistry is, for me, without a doubt, the place I'm most excited about, because that's just something that touches every aspect of our life. And I know a lot of people say, okay, it's great that you're building a quantum computer, but that's never going to matter to me because I don't do that. I kind of feel like, well, okay, but you might feel the same way about classical supercomputers, and yet I guarantee you pick up the phone and say, “Hey, Siri,” from time to time. You never think about what it took to train Siri to be able to answer that. And actually, Siri is trying to answer me now.

DB: So many people watching just had their phones go off. <laughs>

Dial: Yes, I apologize for that, everyone!

So, by the same token, you may never personally use a quantum computer. Although, as you found out, you actually can. That's kind of fun. But by being able to simulate problems in material science and being able to simulate problems in chemistry, even if you never touch the quantum computer, you're going to be benefiting from the results that come out of it as far as giving us better tools to combat all the really serious problems that mankind faces today.

DB: Yeah, you might take medicine someday where the molecule was designed through quantum computing research.

Dial: I hope they work a little bit better than they do today before we go there, but, yes, absolutely.

DB: Fair enough!


Lightning Round

DB: As we wrap up each of these, we'd like to do a lightning round. Some quick questions, quick answers.

Quantum is one of those terms that sci-fi writers love to just attach to some gobbledygook phrase. Do you have a favorite or least favorite example of misusing the word quantum, or properly using the word quantum, in a pop culture movie, book, or that sort of thing?

Dial: I know it's totally irreverent, but the first thing that came to mind there was Quantum of Solace. And the funny thing is, it's actually being used correctly there, right? It's a teeny tiny bit of solace, which is the exact opposite of what you see so often. Like quantum leap, meaning a huge leap. No, that's wrong!

DB: The opposite. Like the smallest possible leap, yeah.

Dial: Exactly! We gave them the very smallest amount of solace that could be imagined.

DB: Okay, so we like Quantum of Solace. We don't like Quantum Leap.

Dial: Yeah. Or maybe I'm just saying I like James Bond. I don't know.

DB: For hardware or software folks that are interested in getting into quantum, what sort of things should they read up on or look into? And what sort of career path could they start going down to get into this in the future?

Dial: It's funny that there are so many different problems in so many different areas we're facing; it's really hard to say something that's wrong. You mentioned how much we should all honor the packaging engineers. I mean, we have an enormous packaging effort within our team, and I was actually just talking to someone about his early career on our packaging team. He got started in electroplating, turned from electroplating metal parts to electroplating bump bonds, and now he's core to our packaging team.

We have huge software development problems. For the circuits that you were running on your laptop: a couple of hundred gates, no problem. As we look forward to an error-corrected future, we're talking about circuits that involve 10^10 to 10^11 gates. It's actually a pretty big classical problem to compile and optimize that. There's just a huge amount of work at the top end of the software stack to be done.

We have a lot of conventional CMOS designers on our team because although we didn't talk about it much, today, we build our own control electronics. It's just the only way for us to get the scale-up and the price down and the power down at the same time.

And physicists of all walks of life. We have people on the team that started in trapped ions, trapping single atoms with lasers. We have me; I came from spin qubits, the competing technology. We have folks who studied the basic physics of superconductivity. Really, we benefit from having this huge breadth of backgrounds on the team.

But the main thing I would say is, if you think you might be interested in it, there's just a ton of material available online you can use to learn about it. There are weekly YouTube videos where different practitioners in the field talk about their technologies. And of course, just to plug it, we do have our IBM Quantum lab where you can go try all of this stuff out quickly and easily.

DB: Fantastic. Well, this has been great. Thank you so much for your time.

Dial: Thank you so much for having me. It's always delightful for me to talk about what I love.

DB: And thank you to Mouser Electronics for sponsoring this episode. As a side note, I personally feel like no quantum computing interview is complete until somebody says, “Schrödinger's cat.” And we didn't say it during that interview. So here we go: “Schrödinger's cat.”

If you feel like that was an unnecessary interjection, or you just really liked today's podcast episode, please come tell us on the Moore's Lobby social media pages and share this episode with a friend. I'm Daniel Bogdanoff. Thank you for joining me today in the Lobby.

We'll be back in two weeks with Jack Kang, SVP of Business Development and Customer Experience at SiFive.