Researchers at the Department of Energy's Lawrence Berkeley National Laboratory have created the world's first one-nanometer transistor gate.

It's no secret that a great deal of modern research has been invested in shrinking electronic components. But just how small these new transistors are may surprise even industry researchers.

Our modern world of electronics has been achieved due to one remarkable trend: the constant quest to reduce the size of metal oxide semiconductor field effect transistors—MOSFETs.

This component is used to amplify and switch electronic signals and is essentially the foundation of most integrated circuits. Over the last fifty years, MOSFETs have shrunk from several micrometers down to just 20 nanometers, a reduction of more than two orders of magnitude.

Unfortunately, conventional silicon electronics are approaching a fundamental size limit. As silicon gate lengths shrink toward five nanometers, quantum effects cause unpredictable behavior that keeps transistors from functioning reliably. The chief culprit is quantum tunneling: an electron can pass straight through a barrier, and it becomes dramatically more likely to do so as the barrier gets thinner.
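That exponential sensitivity to barrier thickness can be sketched with a textbook WKB estimate. The numbers below are illustrative assumptions only (a 1 eV rectangular barrier and the free-electron mass; real device barriers and effective masses differ), but they show why a gate that works at 7 nm can leak badly at 1 nm:

```python
import math

# Rough WKB estimate of tunneling probability through a rectangular
# barrier: T ~ exp(-2 * k * w), with decay constant k = sqrt(2*m*U) / hbar.
HBAR = 1.0545718e-34   # reduced Planck constant, J*s
M_E = 9.1093837e-31    # free-electron mass, kg (illustrative assumption)
EV = 1.6021766e-19     # 1 electron-volt in joules

def tunneling_probability(width_nm: float, barrier_ev: float = 1.0) -> float:
    """Order-of-magnitude tunneling probability for a barrier of the given width."""
    k = math.sqrt(2.0 * M_E * barrier_ev * EV) / HBAR  # decay constant, 1/m
    return math.exp(-2.0 * k * width_nm * 1e-9)

for w in (7.0, 5.0, 3.0, 1.0):
    print(f"{w:.0f} nm barrier -> T ~ {tunneling_probability(w):.2g}")
```

Shrinking the barrier from 5 nm to 1 nm raises the tunneling probability by many orders of magnitude, which is why leakage, not lithography, is the hard wall for silicon.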

Last year, when 7nm gate lengths were achieved, companies such as Intel announced their intention to research alternative materials to replace silicon. They have now been beaten to the next milestone by DOE researchers.

 

A visualization of quantum tunneling. Image courtesy of IEEE Spectrum.

 

Developing the World's Smallest Transistor Gate

Earlier this month, a research team led by Ali Javey at Lawrence Berkeley National Laboratory created the first working transistor with a one-nanometer gate. The breakthrough came from building a two-dimensional MOSFET out of molybdenum disulfide (an alternative to silicon) and using a single-walled carbon nanotube as the gate in place of the usual metal. Carbon nanotubes have been the subject of intensive research for years and recently outperformed silicon in transistors.

The material, MoS2—which has already found use in multiple applications across the industry—neatly sidesteps silicon's fundamental limitation. Electrons moving through MoS2 behave as if they have a higher effective mass than electrons in silicon. They therefore move more slowly, which makes them much easier for the gate to control.

 

A representation of the molybdenum disulfide channel and 1-nanometer carbon nanotube gate. Image credit: Sujay Desai/UC Berkeley

 

In larger transistors, silicon is actually preferable to MoS2 as a channel material, because electrons encounter less resistance as they pass through it. Below about five nanometers, however, a silicon channel can no longer stop electrons from tunneling through to the drain, so the gate loses the ability to switch the transistor off.

The gate itself also required a rethink of materials. Knowing that traditional lithography could not pattern features at this scale, the research team instead turned to carbon nanotubes, whose roughly one-nanometer diameter provided the gate dimension directly.

Measurements on a working device showed that the new 1nm gate was capable of controlling electron flow. Including the wiring, the entire device is larger than 1nm, but the circuit is altogether functional.

 

A Realistic View of Nano-Sized Transistors

Amidst the hype of creating the world's first functioning 1nm gate, the researchers have cautioned people not to expect the new transistor in commercial devices anytime soon.

The paper's lead author, Ali Javey, describes this 1nm gate as "a proof of concept." The fabrication process will need to be repeated many times over before it can be optimized for efficiency; after that, the device will need further development and integration onto chips. All of this will likely take years.

In the meantime, however, this step is extraordinarily important because it breaks the five-nanometer limit that silicon places upon transistors.

With this new advancement, Javey says, we can still apply Moore's Law and look to the future of tech with optimism.

The original research paper was published in the AAAS journal Science.

 

Comments


  • ronsoy2 2016-11-18

    I wonder at what point a single cosmic ray disintegration will cause the transistor to erroneously conduct and crash the computer!

    • jgruszynski 2016-11-20

      This is pretty well established knowledge.  And also a routine issue.  We long ago reached that point!

A cosmic ray hit will create a mass of hole-electron pairs in a cylinder of charge roughly 10 µm long and 250 nm-2 µm in diameter.  This is larger than any current minimum-geometry device.  The injected charge is in the range of 10fC-10nC.  When you consider most MOS junction capacitances are in the 1fF-50fF range, clearly a single hit already wreaks havoc and has done so for a good 30 years.

      This is the reason why you MUST already have error correction if data reliability matters to you.  Quite often consumer applications can survive without this - a bit flip in a JPG image isn’t really noticeable.  Also near the Earth’s surface, the atmosphere absorbs most cosmic rays so the probability of a hit is substantially reduced.  However, in space, this is a major issue you have to design remediation for.  This includes anything above LEO and definitely interplanetary space.
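      A quick back-of-the-envelope check of those numbers (dV = Q / C, using the charge and capacitance ranges quoted in the comment above as rough, illustrative assumptions):

```python
# A particle strike injecting charge Q onto a circuit node with
# capacitance C shifts the node voltage by dV = Q / C.
# Q and C ranges are those quoted in the comment: 10 fC-10 nC injected,
# 1 fF-50 fF junction capacitance. Treat them as rough assumptions.
def glitch_volts(charge_coulombs: float, capacitance_farads: float) -> float:
    """Voltage disturbance from dumping a charge onto a capacitive node."""
    return charge_coulombs / capacitance_farads

# Mildest case: smallest quoted charge onto the largest quoted capacitance.
mild = glitch_volts(10e-15, 50e-15)   # 10 fC onto 50 fF -> 0.2 V
# Harsher case: a 10 pC hit on a tiny 1 fF node nominally gives 10,000 V;
# in practice the node is simply slammed to a supply rail and the bit is lost.
harsh = glitch_volts(10e-12, 1e-15)
print(f"mild: {mild} V, harsh: {harsh:.0f} V")
```

      Even the mildest case is a sizable fraction of a ~1 V logic swing, which is why error correction is mandatory when reliability matters.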

  • LiverDye 2016-11-18

    Great explanation, great article. Thanks!


  • bn880 2016-11-18

    What a bunch of horseshit, “One-Nanometer Transistor Keeps Moore’s Law Relevant Another Year” Which year, 2025?  :p

    • jgruszynski 2016-11-20

That’s my issue as well.  If you had something “100% ready to go and proven to be compatible with existing processes” then you might have a shot at 10 years time.  Instead you typically need 20-40 years if it’s 100% new and untested, and merely some “lab experiment” level of proof/evidence.  If there’s an IEEE paper on it and nothing else, then the time frame is 20 years.  This is, unfortunately, in the latter category.  I work with most of the major semiconductor companies and consortia and this is NOT in their pipeline/radar right now.

This is exactly what we who work in process design and technology development have been warning about for nearly 20 years now - things do NOT happen instantaneously and you must keep the pipeline filled (at nontrivial cost).

In fact the “meme” that “technology is speeding up” is 100% false - it hasn’t sped up at all.  Instead what happened is that 20-40 years ago, the exponential economic growth at the time allowed the 20-40 year pipeline to be filled exponentially more, which meant that an apparent exponential increase in new inventions appeared out the other side.  The problem is that pipeline has been drained over the last 40 years.  R&D investment by corporations and governments has mostly declined.  There are no Bell Labs-like centers anymore that focus on pure R&D - most have been converted to applied R&D, required to demonstrate a product-specific, profit-generating result within 5 years of any initially funded research direction (I know this because I used to work at HP and this is what HP Labs was altered to 20 years ago).  This assures that the pipeline empties.