Intel's Tri-Gate FinFETs Lead // TSMC Skips 22nm, Leapfrogs to 20nm Instead in 2012
By David Lammers Monday, May 16th, 2011
Experts: Intel's Tri-Gate Not Easy to Match
Intel Corp. may have as much as a five-year lead in bringing FinFETs into widescale production, experts said. A dual-epi process, and close control over other steps, represent manufacturing challenges which may prevent other companies from quickly following Intel’s lead, they said.
Chenming Hu, who led a University of California at Berkeley team that proposed a workable finFET a dozen years ago, said, “The main point is that Intel is taking FinFETs into production. Intel deserves a lot of respect, because they continue to lead the industry on a two-year cycle of scaling.”
Hu said the May 4 announcement “further bolsters their reputation as a company with a can-do attitude. And it shows that if an organization invests sufficiently, they can make a very good return on those investments. Initially, these transitions are going to be very difficult, but if the right amount of time, money, and people are invested, they can get it done.”
“I remain steadfast in my comments about both FinFETs and UTB-SOI going to manufacturing. I expect both to go into production. The very large companies, such as Intel and TSMC, will have the resources to go to FinFETs. Some other companies may go to UTB-SOI. ST Microelectronics is probably the closest to using UTB-SOI,” he said.
“FinFETs may be more versatile in performance and power. On the other hand, FinFETs take a lot more development resources, in terms of the manufacturing control, the layouts, and the libraries,” Hu said.
Making a finFET is challenging. “The sense that I have gotten is that the equipment industry was not much more in the know than the rest of the world, which tells me that Intel really didn’t have to do that much in terms of new equipment. If the interface with the design team is close, and the resources are large enough, the lure of finFETs is that they can be scaled. But it does take investments. UTB-SOI does not take as much technology development investment,” Hu said.
Hu said the fin must be very thin, about equal to the gate length, in order to accomplish the goal of suppressing the leakage current. To keep the fin thickness exactly the same across the wafer requires that the process be very well controlled.
“To scale the finFETs, the industry will need to make the fin thinner and thinner. Back in our 1999 paper, we theorized that it can be scaled to 10 nm, but now I believe we can go beyond that,” Hu said.
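Hu's rule of thumb, that the fin must be about as thin as the gate length to suppress leakage, can be sketched as a quick check. This is a minimal sketch; the exact thickness-to-gate-length ratio varies by device type, and the numbers used here are illustrative assumptions:

```python
def fin_controls_leakage(fin_thickness_nm: float, gate_length_nm: float,
                         ratio: float = 1.0) -> bool:
    """Rule of thumb from the FinFET literature: the fin must be no thicker
    than roughly the gate length (ratio ~1) for the gate to keep
    electrostatic control of the channel and suppress leakage current."""
    return fin_thickness_nm <= ratio * gate_length_nm

# An assumed 8nm fin at a 22nm gate length passes; a 30nm fin would not.
print(fin_controls_leakage(8, 22))   # True
print(fin_controls_leakage(30, 22))  # False
```

Keeping that inequality satisfied across an entire wafer is exactly the process-control burden Hu describes: as the gate length scales down, so must the fin thickness.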
Thompson Sees Five-Year Lead
Scott Thompson, a professor at the University of Florida, said, “developing a complex technology like tri-gate requires significant investments in silicon resources and manpower — development teams of perhaps more than 1,000 people.”
The challenges are so complex that Intel probably ran hundreds of thousands of wafers to solve the issues. Tri-gate is “at least an order of magnitude more complex than strained silicon at the 90nm generation or high-k metal gate at 45nm. That is why it took Intel eight years to implement and why I don’t think anyone else will have it in the market for more than five years,” said Thompson, who earlier worked as a technology program manager at Intel’s technology manufacturing group.
Tri-Gate has quite a number of very innovative elements, he said, the most critical of which are “not in production in any foundry today.” The complexity resides in developing a true “gate-last” stack with dielectric and metal deposited last with atomic layer deposition (ALD) tools.
“The integration of the p+ SiGe S/D (with >50% Ge) and n+ Si S/D (doped >1e19) requires a very sophisticated process, materials and exotic recipes. The fin silicon density present during growth appears to be very low and it appears challenging to get uniform epi on the fin’s etched sidewalls,” he added.
To have low contact resistance to the fins, epitaxy must be grown on the source and drain of the fins — otherwise the drive current and performance would suffer. “Based on Intel’s performance claims, it can be concluded for pFETs, a highly in situ doped boron p+ SiGe is grown on the fin for p-type transistors. For the nFET, silicon or doped silicon epitaxy needs to be grown on the source/drains.”
To grow the dual selective epitaxial films on pFETs and nFETs with low defects on the fins “is a difficult, complex and expensive process,” he said. To manufacture the dual epi, Thompson said he believes Intel “implemented many restrictive design rules on the layout of the fins. These new design rules will prevent reuse of legacy IP. None of these issues are a problem for Intel’s targeted market: high-performance and high-margin CPUs. But the economic trade-offs are different for the SOC world, where many different type of transistors are offered,” Thompson said.
Thompson said the fins on the 22nm tri-gate appear to be relatively short, and over the next two nodes Intel will likely try to make them taller, which provides more area. “But taller fins will introduce additional capacitance, with a higher overlap capacitance between the fin and contact, and a larger gate capacitance. Variation along the fin height dimension, on top of the standard line edge roughness, will increase Vt variation, which is critical at 22nm and below.”
“Intel did not publish their AVT, but others have for a bulk finFET and reported a value of 2 mV·µm, which is higher than desired. The physics behind this is that it is very difficult to dope the fins, and random dopant effects (i.e. some fins get 50 dopant atoms and others get 100) cause identically drawn transistors to have different threshold voltages,” Thompson said.
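The AVT figure Thompson cites is a Pelgrom mismatch coefficient: the random Vt spread scales as A_VT divided by the square root of gate area, which is why small fins are hit hardest. A minimal sketch, with device dimensions that are illustrative assumptions rather than Intel's actual geometry:

```python
import math

def sigma_vt_mV(a_vt_mV_um: float, width_um: float, length_um: float) -> float:
    """Pelgrom's law: sigma(Vt) = A_VT / sqrt(W * L),
    with A_VT in mV*um and W, L in um."""
    return a_vt_mV_um / math.sqrt(width_um * length_um)

# With the reported A_VT of 2 mV*um and an assumed small device
# (W = 0.1 um effective, L = 0.025 um), the 1-sigma Vt spread is large:
print(round(sigma_vt_mV(2.0, 0.1, 0.025), 1))  # 40.0 (mV)
```

A 1-sigma spread of tens of millivolts on a sub-volt supply illustrates why Thompson calls the reported value “higher than desired” for Vt-sensitive circuits such as SRAM.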
Manufacturing perfect fins over billions or trillions of transistors is “quite a challenge,” Thompson said, though Intel’s advantage is that Intel’s fabs run a single process, with equipment and settings that are kept constant.
“The tri-gate structure requires very complex elements that are difficult to control and reproduce with high yield. The manufacturing flow has unique advantages for high-end processors, but it does have problems supporting several key features needed for SOCs: multiple Vt’s, and thin and thick oxides in support of analog. There is still quite some work to do to use finFETs to manufacture SoCs like Apple’s A5 or Nvidia’s Tegra,” he said.
Intel’s development costs are difficult to compute, Thompson said, adding that “a realistic estimate could be upwards of $2 billion. Significant investments are needed to fund the required development for numerous new modules: new STI/fin patterning, mid-section etch modules, metrology all on fins, SiGe etch and deposition processes, new gate stack, and inline metrology modules.”
The cost of a new fab to produce 22nm wafers is high, in the range of $4-5B, which may be behind Intel’s decision to raise its capital spending plan to about $10 billion this year, Thompson said.
Applied Sees ‘Bold Step’
Klaus Schuegraf, the chief technology officer at the Silicon Systems Group at Applied Materials, said “I think we first should recognize Prof. Chenming Hu and the device group at Berkeley for leading the vision in 1999. This is part of a decade-long quest into how to meet the ultimate capability of a transistor, with respect to its ability to turn on in a very abrupt fashion.”
Intel’s move at the 22nm generation “is a bold step, and it took courage by Intel to make it a reality. With FinFETs we are in new territory,” Schuegraf said.
FinFETs require that the fin be “very vertical,” which presents challenges on a few fronts. Etching a structure at a near-90-degree angle, with no taper, is essential. “That etched structure is actually the channel for the device. If it is slightly off-axis, the mobility of the transistor will be less than ideal.”
Lithography must be able to make a very narrow fin, below the scanner’s resolution limit. “In this case, the fin is in the same situation as the gate. Both are below the photo limits, so two layers have to be patterned below those limits.”
How to shrink the CD at such a tight pitch, and how to etch the films to get a reliable shrink below the photo limit and do it repeatably, is just one of many challenges.
Working at the Dan Maydan Center, Applied’s technologists have been “working very diligently on CD uniformity on etch products, very precise control on etch processes. The need for this is universal, not just for finFETs but for many other structures” including flash and DRAMs, Schuegraf said.
About four years ago, Intel Corp. committed to the tri-gate transistor architecture for its 22nm technology and set to work on the manufacturing and design challenges, said Kaizad Mistry, the 22nm program manager at Intel Corp.’s technology manufacturing group in Hillsboro, Ore.
Intel calls its finFET design a tri-gate transistor because current flows along the two sides and the top of the fin.
“We had long discussions, both within the technology group and with our design partners at Intel. The tri-gate technology is more challenging, and it did make design execution different, because the transistor width is now discretized. From that debate emerged a consensus on both sides that the improved performance characteristics were worth the effort,” Mistry said.
“The biggest challenge with the tri-gate technology,” Mistry said, “is to have a robust manufacturing process, to pattern the fins with the required fidelity of the fin width and height, and do it for billions of transistors.”
Extracting the full performance benefits requires dealing with the series resistance and parasitic capacitance issues, which he said were “secondary challenges.”
“The principal difficulty is maintaining the integrity of the fin,” he said. While more double patterning is required for the critical layers, Intel was still able to use immersion 193nm scanners. With no radically different techniques or equipment required, making the tri-gate required improvements in “conventional control,” a realization that bolstered Intel’s confidence several years ago when it committed to a tri-gate architecture.
Current flows on both sides and the top of the tri-gate device. Source: Intel Corp.
Controlling the width of the fin is important to limiting the short-channel effect (SCE). The width — and to some extent the doping — also is critical to setting the threshold voltage (Vt), and to fully deplete carriers from the depletion layer.
“The fin has to be at the right width to get that fully depleted behavior,” Mistry said in a telephone interview following the May 4 rollout of the 22nm platform.
There is a tradeoff for both the fin width and fin height. A skinnier fin provides the fully depleted behavior, as well as controlling the short channel effect, but at the cost of resistance. As Mistry explained, “Too wide, and we don’t get nice fully depleted behavior. Too narrow, and there is too much series resistance.”
There is a similar tradeoff for the fin height. A taller fin delivers more drive current, but at a higher gate capacitance. “The tradeoff is there, but it depends on the type of circuit, whether it has more interconnect load or more transistor load.”
The vertical structure supports a higher density, as the fins can be placed as close together as the lithography permits. And there is more transistor area with a vertical device than a planar design.
The effective transistor width equals twice the fin height plus the fin width: 2H + W. Unlike planar transistors, which can be made with a variable width, the effective width is the same for every fin – a “discretized” W. Intel will use up to six fins, traversed by a single gate, in circuits requiring high drive current.
“With more fins, we get more drive current,” Mistry said. And with a bigger area, there is more drive current. “We have to deal with the series resistance, and we don’t get more series resistance if we add fins in parallel. In a planar regime, we would draw a wider transistor, but with the tri-gate, the transistor is now discretized. Conceptually it is not different, we just have to deal in increments of one fin.”
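The discretized-width arithmetic Mistry describes can be sketched directly. The fin dimensions below are illustrative assumptions, not Intel's published 22nm numbers:

```python
def effective_width_nm(fin_height_nm: int, fin_width_nm: int,
                       num_fins: int = 1) -> int:
    """Tri-gate effective width: current flows on both sidewalls and the
    top, so each fin contributes 2H + W, and fins in parallel add drive
    current in whole-fin increments (the "discretized" W)."""
    return num_fins * (2 * fin_height_nm + fin_width_nm)

# Assumed fin: 34nm tall, 8nm wide -> each fin contributes 76nm of width.
print(effective_width_nm(34, 8))              # 76
# At Intel's stated maximum of six fins under one gate:
print(effective_width_nm(34, 8, num_fins=6))  # 456
```

A design tool targeting, say, 400nm of width cannot draw it exactly; it must round to the nearest whole number of fins, which is the "just different" mapping Mistry refers to later in the article.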
The fully depleted tri-gate supports a steeper subthreshold swing than a planar bulk transistor. In partially depleted planar bulk transistors, as the gate tries to turn the inversion layer off, the silicon substrate has a fixed influence on the inversion layer: the “body effect.”
In a fully depleted transistor, on the other hand, the substrate effect on the channel is turned off. For the tri-gate transistor, the effect of the silicon substrate is “completely shielded,” Mistry said, removing the body effect, and allowing the sub-threshold slope to be steeper.
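The link between shielding the substrate and a steeper slope can be made concrete with the standard subthreshold-swing expression, SS = (kT/q) ln(10) (1 + Cdep/Cox): removing the body effect drives the Cdep/Cox term toward zero. The capacitance ratios below are assumed values for illustration:

```python
import math

def subthreshold_swing_mV_per_dec(c_dep_over_c_ox: float,
                                  temp_K: float = 300.0) -> float:
    """SS = (kT/q) * ln(10) * (1 + Cdep/Cox); ~60 mV/decade is the
    room-temperature floor when the depletion term vanishes."""
    kT_over_q_mV = 0.08617 * temp_K  # Boltzmann constant in meV/K
    return kT_over_q_mV * math.log(10) * (1 + c_dep_over_c_ox)

# Planar bulk device with an assumed Cdep/Cox of 0.4, versus a fully
# depleted tri-gate where the substrate is shielded (ratio ~0):
print(round(subthreshold_swing_mV_per_dec(0.4)))  # 83 mV/decade
print(round(subthreshold_swing_mV_per_dec(0.0)))  # 60 mV/decade
```

A steeper (smaller) swing means the transistor turns off over fewer millivolts of gate voltage, which is what lets the threshold voltage, and hence the supply, come down.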
In a fully depleted transistor, the width of the depletion region is less than the thickness of the silicon. While the width of the depletion region does depend on the doping level, Mistry said the Vt is less dependent on doping. “The silicon is not completely undoped, but it is much more lightly doped. Fewer dopants improves the performance, because there is less ion impurity scattering,” he said.
Dopant atoms fixed in place in the lattice become ions that provide free electrons or holes, and ion impurity scattering reduces the mobility or velocity of those electrons or holes. “Having fewer dopant atoms does improve the transistor’s performance, particularly at low voltages,” he said.
With fewer dopants, there is a marked improvement to variations in the threshold voltage, the Vt mismatch. With a steeper sub-threshold slope and improved Vt mismatch, the threshold voltage can be lower, supporting a lower operating voltage.
The minimum voltage, or Vmin, at which a circuit can operate reliably is largely dependent on the Vt mismatch. The improved Vt variability allows circuits which must retain data — including cache memory, register files, latches, and others — to operate at a lower Vmin than Intel’s planar transistors.
A tri-gate transistor can have a lower operating voltage.
“We estimate we can drop the operating voltage by 100 to 150 millivolts, closer to 150,” Mistry said. “Depending on the type of circuit, we can operate anywhere from 100-150-200 millivolts better than planar.”
An operating voltage drop of 100 millivolts, combined with the transistor size shrink, means that the active power for a given logical function is cut by half or more while running at the same frequency. “That is a pretty big deal,” Mistry said, adding that this advantage helped drive Intel towards the tri-gate design during its pathfinding stage.
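The "cut by half" arithmetic follows from dynamic power scaling, P ∝ C·V²·f, at fixed frequency. A rough sketch; the 1.0 V starting supply and the ~0.7x capacitance scaling from the size shrink are assumptions for illustration, not Intel figures:

```python
def active_power_ratio(v_old: float, v_new: float,
                       cap_scale: float = 0.7) -> float:
    """Dynamic power P = C * V^2 * f. At the same frequency, the new-to-old
    power ratio is the capacitance scale times the squared voltage ratio."""
    return cap_scale * (v_new / v_old) ** 2

# Dropping 150 mV from an assumed 1.0 V supply, with an assumed ~0.7x
# capacitance from the shrink, roughly halves active power:
print(round(active_power_ratio(1.0, 0.85), 2))  # 0.51
```

The quadratic dependence on voltage is why even a 100-150 mV reduction, modest in absolute terms, dominates the power savings.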
The advantages in density, performance, and power led Intel’s design group to take on the design challenges. “Being an IDM is an advantage for us in that respect. Whenever the design tools need to change, we are able to partner very quickly to create the tools, and with the designers themselves.”
Asked if designing with a tri-gate was more complicated than with a planar transistor, Mistry said, “It is just different. I wouldn’t say it is more complex.” Earlier, the design tools created the optimum W for power and delay. That tool “now has to map a discrete width for a fin. That is not more complicated, just different.”
Executives described the challenges and opportunities facing the semiconductor industry at the Semico Summit, held in early May in Phoenix.
Gregg Bartlett, GlobalFoundries: The Economics of Innovation
The convergence of mobility, communication and computing has produced multifunctional end applications that are placing huge demands on semiconductor manufacturers. These new devices require low power, high performance, and a lot of advanced manufacturing capacity at a low cost.
At the 2011 Semico Summit, Gregg Bartlett, Senior Vice President of Technology and Research and Development at GlobalFoundries, talked about the economics of innovation, highlighting the daunting economic and technology challenges of bringing products to market. Just a few of the major costs include the following:
$1-2 billion in leading edge process technology development,
3-4 years of development,
$40-$50 million in chip design costs,
$250 million for design enablement such as libraries and IP,
$5-$7 billion for an advanced 300mm fab.
Today’s market is a high-stakes game. It’s no wonder that the industry has embraced a collaborative environment at all levels.
Bartlett stated that the 20nm node signals an inflection point in the development of technology. The process will be tightly coupled with the end market application needs. GlobalFoundries is already working on 20nm and is planning on offering different techniques and options which will allow the customer to optimize the final solution. But that also means early stage engagements to define the needs.
GlobalFoundries is leveraging the consortia approach in its development of EUV. The first tool will be installed in its new Malta fab in the second half of 2012. Bartlett stated that double patterning has actually lowered the barrier to entry for EUV. Foundries such as GlobalFoundries offer higher product diversity, which means there will be some very low-volume products. The cost of double patterning, including the amortized cost of the mask, could be prohibitive, forcing EUV into the picture.
It’s now becoming evident that the economics of innovation are influenced by the limitations of existing options. Those with foresight already see the limitations of 300mm.
By Joanne Itow, Managing Director, Manufacturing, Semico Research
Ganesh Moorthy, Microchip Technology: The Invisible Computers in Our Lives
Ganesh Moorthy, Chief Operating Officer of Microchip Technology, examined how much embedded computing permeates our lives. But he also pointed out how much more opportunity there is for microcontrollers.
Mr. Moorthy showed how several applications have evolved from very simple solutions to solutions that utilize sensors and intelligence. This has enabled products that are adaptable, more secure, simpler to use, more energy efficient, and more. Among these are developments in automotive, lighting, thermostats and appliances. There are new applications for microcontrollers providing support management in personal computing, data centers, handsets, asset tracking and management, and personal medical equipment. Embedded computing is also found throughout the smart power grid.
Mr. Moorthy cited several innovation enablers.
More integrated features, lower cost
Higher performance, lower power
Wired and Wireless connectivity
High quality, low cost graphics
Touch – buttons, sliders, screens
Energy Efficiency building blocks
Chip vendors need to invest in customer support. There are more software engineers than hardware engineers involved in the development of MCUs. A chip vendor cannot just produce silicon, it must also help system designers with tools and expertise. Today’s applications are just the tip of the iceberg, according to Mr. Moorthy. There are many more innovations yet to come.
By Tony Massimini, Chief of Technology, Semico Research
Bob Krysiak, STMicroelectronics: Doing Well by Doing Good
STMicroelectronics presented its view on shaping the semiconductor future. Bob Krysiak, Executive VP and GM of the Americas Region, spoke on how ST and the semiconductor industry are “doing well by doing good.”
Mr. Krysiak pointed out the demographic changes that are occurring. There is increasing world population with most of this growth in non-Western countries. By 2050 there will be nearly 10 billion people, an increase of 3 billion over today. In addition we have an aging population. This puts pressure on many resources.
The theme of his presentation, “doing well by doing good,” presents the internet and connectivity as key elements in addressing these issues. He noted that the internet and connectivity have become the plumbing of our world and industry. There are a growing number of online users, many in China.
We will depend more on the internet and connectivity for increases in productivity and security. Human productivity will depend more on mobility and wireless. Banking will be transformed by this, but then security becomes more important. This will lead to growth in brand authentication, protection and trusted platform security.
With an aging population countries need to deal with healthcare management. Mr. Krysiak pointed out that in developed countries 12-18% of GDP is for healthcare spending. Much of this is for chronic disease management, such as diabetes. Remote monitoring and wellness are the next big explosion of content. Connectivity will play a major role in this. Semiconductor technology, including MEMS, offers more affordable solutions, with greater reliability and precision.
The Smart Grid applies IT and networking expertise to deliver energy efficiently. This includes smart meters, photovoltaics, electric vehicles and Home Area Networks (HAN) working together for energy efficiency by balancing supply and demand. Network security plays an important role.
The semiconductor industry offers the intelligent control and high performance analog technology to make all of this happen.
By Tony Massimini, Chief of Technology, Semico Research
Tom Dietrich, Freescale Semiconductor: Sensors Changing the Way We Do Business
Freescale’s Senior VP and GM of the RF, Analog & Sensor Group, Tom Dietrich, described Freescale’s vision of a sensor-based future.
Over the next few years Freescale sees the world changing, and Freescale intends to lead that change by focusing on four growth markets (Automotive, Networking, Industrial, and Consumer) while leveraging three growth trends: The Net Effect, Health & Safety, and Going Green.
For the consumer market, we can see how sensors are changing the way we interact with our electronics just by looking at the iPhone and its top-ranking apps. Games now rely on the touchscreen; some rely on tilting the phone, others respond to shaking. Add networking, and we have cloud computing. In Japan, for example, a good use of sensors in cell phones would be an earthquake app that combines data from everyone’s phones at a central hub, where the data can be analyzed to predict more accurately when and where the next earthquake will occur. And considering that seismologists are warning of another magnitude-8 quake, this is a sensor feature that could save lives.
Another consumer feature Tom discussed was augmented reality for games. For example, with sensors, a gamer at home may compete with the pros on the course, playing against the pros’ real-time moves in the game.
In the automotive industry, Tom discussed how sensors, namely radar, will help create cooperative highways, where cars proactively monitor other cars’ locations in order to stop accidents before they occur. It is another life-saving feature changing the way we interact with hardware.
Even the healthcare industry benefits from sensors, with in-home monitoring becoming more widely available, allowing doctors and nurses to monitor a patient’s health and quickly react to changes.
While all these ideas are exciting to the average consumer, for Freescale the puzzle is how to add more capability to sensors while continuing to rely on minimal power. And it looks like they have done it.
By Michell Prunty, Consumer Analyst, Semico Research
Paolo Gargini, Intel Corp.: Technology Takes Time
Paolo Gargini—Intel Fellow, Technology and Manufacturing Group and Director of Technology Strategy for Intel—highlighted the time gap between when an idea is formed, to when the science, technology and engineering are able to make that idea a reality. The incubation time for an idea to become real has shortened from several hundred years for satellites, to 12-15 years now for many ideas.
The driving technology in the semiconductor industry to date has been the ability to scale CMOS transistors. The Nanoelectronics Research Initiative (NRI) is a consortium begun by Semiconductor Industry Association member companies to run a university-based research program to determine what will come next after the limits of CMOS scaling have been reached. The National Institute of Standards and Technology (NIST) joined as a full participant in 2007. NRI’s goal is to have a demonstrable solution by 2020. The solution is supposed to show benefits in power, performance, density and/or cost in order to continue the cost and performance gains from traditional scaling. There are four main branches of the NRI-NIST program: Western Institute of Nanoelectronics (WIN) headed by UCLA, the Institute for Nanoelectronics Discovery and Exploration (INDEX) headed by SUNY-Albany, the SouthWest Academy for Nanoelectronics (SWAN) headed by UT-Austin, and the Midwest Institute for Nanoelectronics Discovery (MIND).
Science, technology and engineering companies have been working together to invent the next new product that we all can’t live without. The semiconductor industry has relied on Moore’s Law to set a sustainable pace for the past 40 years. As chips have integrated more functions, become more dense with transistors, and become available in large quantities, multiple end-product waves have been able to occur. New technologies are being developed by groups such as NRI which promise to continue the pace of new chip introductions we have experienced so far. Problems occur when the chips can’t meet the products’ required functionality, but that’s when other similar products can be repurposed in order to drive the eventual success of the end product.
By Adrienne Downey, Director of Technology Research, Semico Research
Danny Biran, Altera: Device Boundaries Blur
Danny Biran, Senior VP of Marketing at Altera, discussed new opportunities as the boundaries between semiconductor logic device types become blurred.
According to Mr. Biran, the boundary between FPGAs, ASICs, ASSPs and CPUs (MPUs, MCUs and DSPs) has until recently been extremely well defined. FPGAs were customer-programmable standard products; the programming was developed for and owned by the customer. ASICs used a standard cell design methodology; the design was owned by the customer. ASSPs were standard high-volume products developed by the semiconductor vendor for sale to multiple customers. MPUs, MCUs and DSPs were standard products, but the software needed to implement an application was developed by the customer. Now, the boundaries between those categories are becoming blurred.
Various semiconductor vendors are offering FPGAs with an on-board MPU, ASICs that include an FPGA block or ASSPs with multiple processing cores. Altera’s Stratix V FPGA is an example of this trend. It combines high speed transceivers, hard IP, soft IP, logic blocks, memory arrays and advanced DSP blocks on one IC.
There are several factors driving this trend, including increasing levels of integration, the high cost of developing leading edge ASICs and the availability of IP (Intellectual Property). Another factor is the availability of integration tools, such as Altera’s Qsys system integration tool.
There is more to the blurring of boundaries between device types than just the availability of advanced ICs. There are a wide variety of system integration tools, intellectual property blocks, floating point IP libraries and other tools available to today’s design engineer. This requires a fundamental change in the way that system engineers approach their tasks.
Mr. Biran made the point that system companies should not be trapped into thinking about design solutions in terms of IC categories: FPGAs, ASICs, ASSPs or processors. Instead, they should think about the combination of technologies that provides the best solution. This may require reorganization or the acquisition of new skills. For example, the emphasis might shift from standard cell design skills to programming skills. The optimum solution for a part of the design might be an FPGA. The optimum solution for another part of the design might be an MPU or a DSP. The significant change is that both of these, the FPGA and the processor, or other device types, can now be integrated onto one IC.
This requires another change in thinking. In the past, a system company might begin coordinating with vendors relatively late in the design cycle. Now, with the boundaries blurring, a system company can achieve a better solution by consulting with a vendor from the very beginning of the design cycle.
As we all know, the number of transistors per IC is increasing, in accordance with Moore’s Law. This is making it possible to combine several functionalities on one IC. In fact, according to Mr. Biran, the record for the number of transistors on an IC is held by an FPGA, not a microprocessor. This can lead to better design solutions, but only if system companies recognize the trend and alter their design concepts to take advantage of the possibilities.
By Morry Marshall, VP Strategic Technologies, Semico Research
Sandeep Vij, MIPS Technologies: Consumer Devices Demand More Memory
Sandeep Vij, President and CEO of MIPS Technologies, made some very interesting observations regarding consumer electronics applications and their use of memory resources.
We all know that the feature sets and functionality of consumer devices have been increasing over the last 3-4 years. Users are requiring OEMs to deliver ever-increasing amounts of functionality: HD-quality video, video downloads, touch screens, multiple HD cameras, personal video conferencing and multiple types of integrated sensors. Future requirements will include, but are not limited to, medical sensors, hundreds if not thousands of apps running on the devices, 3D HD video, and more.
These new levels of functionality must be fulfilled by placing higher levels of complexity into these silicon solutions to provide the right feature sets consumers desire. All this takes an increasing amount of resources to deliver the right user experience.
MIPS is the second-largest CPU IP vendor after ARM, and it was one of the first companies to see what these new levels of functionality demand in terms of the compute and system resources that must be placed into the system. The message here is that as the demand for compute power increases, so too must the resources that service the new performance levels.
In the case of Consumer devices, this is prompting a move to 64-bit CPU cores geared to deliver much higher performance to meet the much higher levels of complexity in these devices. However, that is not the end of the story since memory is one of the primary system resources that computing power requires to function efficiently. The reality is now that memory densities must also increase, moving beyond 1GB and approaching 4GB and even 8GB in some cases. This will put added emphasis on embedded memory IP vendors and even discrete memory vendors to reduce power consumption to the lowest possible levels while still providing the right mix of density and performance to the CPU elements in the system.
It is Semico’s view that, even though system OEMs will not like to hear the news that memory densities are going to increase in next generation Consumer devices, it is probably an inescapable conclusion that they must do so if they are to provide the right level of functionality to meet consumer requirements.
It is our belief MIPS is very prescient in pointing out this trend and is demonstrating a leadership role in creating solutions to deliver the right mix of compute performance and functionality to meet the challenge of these next generation applications. Their move to introduce 64-bit, multi-threading and multicore CPU cores is in answer to the market needs that are just emerging and provides ample evidence that MIPS is one of the premier CPU IP companies in the world today.
By Rich Wawrzyniak, Senior Analyst, Semico Research
Moshe Gavrielov, Xilinx: FPGAs a Fast Moving Sector
The CEO of Xilinx Corporation, Moshe Gavrielov, delivered a presentation on developments in the Programmable Logic market.
Gavrielov made several points: Today, FPGA companies are usually the first to move to doing designs on cutting edge process geometries.
Large ASIC and SoC development costs have reduced the willingness of Venture Capitalists to invest in new, start-up companies, since those companies need ever larger investments just to deliver their first products to market, cutting into the VCs’ returns.
It is possible today to acquire a 20M gate, ARM-based SoC with multiple, high bandwidth SerDes channels that incorporates $2.00 to $7.00 worth of Mixed Signal functionality: all for around $15.00 in volume. That part just happens to be an FPGA from Xilinx. In essence, a SoC with a programmable fabric built around it.
This last point is perhaps the most surprising in that, for the first time, an FPGA company would refer to one of its products as a SoC with programmability and not as a programmable logic part with some SoC-like functions included. In Semico’s opinion this has been prompted by the unrelenting march of many end markets towards requiring higher levels of complexity and the demands of the users of products aimed at those markets for better, richer user experiences.
As the old adage says, “it takes more to do more.”
The FPGA market, and specifically Xilinx, has taken this to heart in the creation of new families of products that deliver much higher capability at the 28nm process node with only a minimal increase in cost structure over previous generations.
Semico believes the main message to be taken from all this evolution is that FPGAs are leading the way towards delivering new levels of functionality at reasonable price points and thus opening the door for a host of new applications to surface in the markets. This is something sorely needed as development costs approach and exceed $100M for complex, first time efforts at the 28nm node and below.
The semiconductor industry is built on three basic premises: the development of amazing technology, the ability to turn that amazing technology into products we all want and can afford, and the creation of even more new applications based on the amazing technology, all of which drive continued development of the amazing technology we all crave. Xilinx, and others, have met the challenges of these premises and are delivering great, timely products that are creating new demand and enabling new applications over the mid-term.
By Rich Wawrzyniak
Len Perham, MoSys: The GigaChip Interface
Len Perham, CEO of MoSys, Inc., discussed looming problems in the processing of Internet traffic and offered a solution.
According to Mr. Perham, Internet traffic will increase exponentially over the next three years, driven by applications such as video streaming, IPTV, P2P, cloud computing, social networking and VoIP + video. Today’s traffic routing methods will not be able to keep up with that growth, and memory is the bottleneck. The problem is that today’s 40Gbps and 100Gbps packet processor line cards address memory on parallel connections, which will not be adequate at faster speeds beyond 100Gbps. Routing data at those speeds will require a serial connection to the memory, not a parallel connection.
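The pin-count argument for serial memory interfaces can be sketched with rough numbers. The per-pin and per-lane rates below are illustrative assumptions for this sketch, not figures from MoSys:

```python
# Why serial links help at very high line rates: a per-pin bandwidth
# sketch. Rates are illustrative assumptions; actual interfaces vary.
parallel_pin_rate = 1.6e9   # bits/s per pin on a DDR-class parallel bus (assumed)
serdes_lane_rate = 10e9     # bits/s per differential SerDes lane, i.e. 2 pins (assumed)

target = 100e9              # 100Gbps of processor-to-memory traffic

pins_parallel = target / parallel_pin_rate      # ~63 pins
pins_serial = 2 * target / serdes_lane_rate     # 20 pins

print(f"parallel: ~{pins_parallel:.0f} pins, serial: {pins_serial:.0f} pins")
```

Under these assumptions the serial interface needs roughly a third of the pins, which is the kind of headroom the article argues parallel connections run out of beyond 100Gbps.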
MoSys has developed the GigaChip™ Interface, which is now an open standard supported by the GigaChip Alliance. The GigaChip Interface is a short-reach, low-power serial interface, which enables highly efficient, high-bandwidth, low-latency performance. It provides a fundamental performance breakthrough similar to the breakthrough achieved by DDR (Double Data Rate) DRAM. The GigaChip Interface, using differential SerDes technology, is the next breakthrough in network processor to memory connections. It allows a multiple-processor network processor to address multi-bank, multi-partitioned memory, so that each processor has access to memory without waiting.
Mr. Perham also discussed the advantages offered by 1T-SRAM®, a memory architecture originally invented by MoSys. 1T-SRAM has approximately the same latency as the standard 6T SRAM cell generally used in today’s high-speed applications, but because it has one transistor per cell as opposed to six for standard SRAMs, the resulting memory area is approximately one third that of standard SRAM. The MoSys Bandwidth Engine® IC roadmap anticipates a BE-3, utilizing the 1T-SRAM architecture, which will have a memory capacity of 1Gb and an access speed of 7.4Gbps.
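The area claim is worth a quick back-of-the-envelope check. A naive transistor count alone would suggest one sixth the area, not one third; the sketch below assumes, hypothetically, that the 1T cell's storage capacitor and refresh circuitry roughly double the naive cell size:

```python
# Back-of-the-envelope check of the 1T-SRAM area claim. The overhead
# factor is a hypothetical assumption, not a MoSys figure; real cell
# areas depend on layout rules and periphery.
transistors_6t = 6   # transistors per bit, conventional SRAM cell
transistors_1t = 1   # transistors per bit, 1T-SRAM cell

naive_ratio = transistors_1t / transistors_6t   # ~0.17, i.e. one sixth

# Assume capacitor + refresh overhead roughly doubles the naive cell.
overhead = 2.0
stated_ratio = naive_ratio * overhead           # ~0.33, the "one third" cited

print(f"naive: {naive_ratio:.2f}, with assumed overhead: {stated_ratio:.2f}")
```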
Memory access has been a continuing problem for network processors over the past several years as Web traffic has increased, requiring ever faster processing speed. Various schemes have been used to speed up access times on parallel interfaces. Now, parallel access appears to have run out of steam. Serial access, using the GigaChip standard, shows promise as a solution going forward. The 1T-SRAM may have found a home in this application, an application that needs the 1T-SRAM’s unique combination of access speed, high density and low power.
By Morry Marshall
Joe Sawicki, Mentor Graphics: Dual Paths Down the Cost Curve: Scaling and 3D
Joe Sawicki, VP and GM of the Design-to-Silicon Division for Mentor Graphics, joined us at the Semico Summit on Tuesday to discuss scaling and the conversion to 3D. He focused on a motto of “Willful Optimism” for the future.
Moore’s Law has been a cornerstone of our industry for 40 years, and a trend the speakers at the 2011 Summit were discussing was “More than Moore,” the idea that we are moving away from density toward integration. Joe Sawicki addressed this idea by discussing how scaling can only get us so far in advancing our speed and storage capabilities. By 2026, he said, if we hold to Moore’s Law, we’ll be able to store half a year’s worth of movies on our phones.
In the future, Mentor Graphics believes we may be seeing the “e-Cube,” where we’ll have cubes of semiconductors instead of a die.
The transition to 3D raises cost and thermal issues, regardless of the advantages. As a stepping stone, the industry can obtain many of the advantages of 3D by using 2.5D, a cost-effective way to bridge into the next generation.
For Mentor Graphics, the question becomes how ICs will be created in the future to continue the advances we’ve seen over the last 40 years under Moore’s Law. The company has some interesting ideas on how to get there, but standards will be needed to realize them.
Four years after making a shift to high-k dielectrics, the world’s largest semiconductor company has done it again, this time saying “bye bye” to the planar transistor design. Intel Corp. said Wednesday (May 4) it has succeeded in what senior fellow Mark Bohr called “a radical redesign” of the basic transistor structure at the 22nm node, moving to a tri-gate 3D structure that he said will support sharply lower operating voltages.
“The real advantage of going off into the third dimension is lower voltages, lower leakage,” Bohr said, adding that Intel estimates the wafer-level cost adder to be only 2-3 percent higher than for a planar transistor.
Bohr, along with Intel senior vice president Bill Holt and Intel Architecture general manager Dadi Perlmutter, demonstrated working servers, desktops and notebooks based on the upcoming “Ivy Bridge” line of MPUs based on the 22nm technology, expected to be in consumers’ hands early next year.
Bohr noted that all of the transistors on Intel’s 22nm products will use the tri-gate structure, casting aside earlier speculation that Intel might adopt a hybrid planar/finFET approach. (SemiMD reported on March 22 that Intel would go to a FinFET technology at the 22nm node.) And he said transistor performance could be varied by using six fins for some devices and only two on others, for example. The SRAM transistors would use a somewhat modified version of the tri-gate than the logic transistors, he added.
Bohr said Intel will again deploy a tri-gate design at the 14nm node, possibly with a taller fin to increase performance further. By going from a nominal 22nm design rule to 14nm, Intel may achieve a higher-than-normal increase in transistor density compared with previous technology transitions.
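The density arithmetic behind that "higher-than-normal increase" can be sketched from the node names alone. This is a rough model; actual density depends on drawn pitches, not node labels:

```python
# Rough density arithmetic for the 22nm -> 14nm jump (a sketch only;
# node names are marketing labels, not measured dimensions).
node_now, node_next = 22.0, 14.0

# Classic full-node scaling: ~0.7x linear shrink, doubling density.
classic_density_gain = (1 / 0.7) ** 2                 # ~2.04x

# Taking the node names at face value, 22 -> 14 is a ~0.64x linear
# shrink, which squares to a larger density gain than usual.
nominal_density_gain = (node_now / node_next) ** 2    # ~2.47x

print(f"classic ~{classic_density_gain:.2f}x, nominal ~{nominal_density_gain:.2f}x")
```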
Bohr said Intel decided early on that for its 22nm technology it would need to move beyond the planar channel that has served the chip industry for the past 50 years. By wrapping the gate around a conducting channel that includes the top and two sides of the tall, narrow fin, Bohr said Intel is able to gain better control of the channel, drive more current, and sharply reduce power consumption.
Compared with Intel’s best 32nm planar transistors, the 22nm “3D” device achieves 37 percent higher performance at 0.7V operation, a Vdd that is largely impractical for planar structures. Later in his presentation, Bohr said the tri-gate structure would deliver roughly the same gate delay at a supply voltage 0.2V lower, or 0.8V. At the same voltage as a planar transistor, the tri-gate will deliver “dramatic performance gains,” Bohr said.
“A 22nm tri-gate can deliver the same performance as 32nm planar, but at 0.8V instead of 1.0V, providing more than a 50 percent active power reduction,” Bohr said.
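A first-order check of that claim: dynamic power scales roughly as CV²f, so the voltage drop alone accounts for a 36 percent reduction, implying the remainder of Intel's greater-than-50-percent figure comes from other factors such as reduced capacitance. The model below is the standard textbook approximation, not Intel's own data:

```python
# First-order dynamic power check: P_active ~ C * V^2 * f.
# Voltages are the article's; the model is the usual approximation.
v_planar, v_trigate = 1.0, 0.8

power_ratio_v_only = (v_trigate / v_planar) ** 2   # 0.64
reduction_v_only = 1 - power_ratio_v_only          # 36% from V^2 alone

# The quoted ">50% active power reduction" therefore implies further
# gains beyond the V^2 term (e.g., lower switched capacitance).
print(f"V^2 term alone: {reduction_v_only:.0%} active power reduction")
```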
Bohr gave the assembled reporters a short tutorial on transistor device physics, saying that with planar devices a weak voltage on the substrate layer prevented the optimum sub-threshold voltage swing. With fully depleted silicon-on-insulator (SOI) devices, Bohr said the main challenge was wafer cost. With the ultra thin body SOI technology which STMicroelectronics and others are promoting, Bohr said bringing the buried oxide layer much closer to the silicon surface requires close wafer-manufacturing controls. “Those extremely thin (silicon and BOX layer) SOI wafers are available, but they are very expensive, and pretty hard to get,” Bohr said. Intel’s estimate is that using a thin-layered SOI wafer would add 10 percent to the finished wafer cost. (The SOI camp argues that fewer isolation and other steps translate into fewer masks, equalizing the wafer cost adder.)
Source: Intel Corp.
Perlmutter said Intel can operate its products at lower voltages and get “way lower power, while still getting to half the transistor size” compared with the 32nm planar technology. Intel has struggled to get its Atom line of SoC products adopted, partly because of lower power budgets for the ARM-based mobile SoCs. Perlmutter said Intel would move more quickly than in the past, bringing the Atom-based products on to the 22nm manufacturing platform fairly quickly compared with the 45nm and 32nm generations.
The first products using the tri-gate transistors will be a dual-core Ivy Bridge design. But he said the mobile products supported by the Atom core need the tri-gate technology. “It is extremely needed. It delivers more ‘power-full’ performance and more ‘power-less’ power consumption,” he quipped.
Dan Hutcheson, CEO of market research firm VLSI Research Inc, said the shift to a tri-gate structure is “truly a historic event,” coming some 50-plus years after Bob Noyce, Jack Kilby and others developed the first planar transistors. “This is like delivering the advantages of two nodes in one,” Hutcheson said.
Dean Freeman, an analyst at Gartner Inc., said “this is a big coup for Intel.”
The early shift to a tri-gate architecture “does give Intel a crack at the mobile device market, as the power consumption is very good. The performance capability should blow ARM devices out of the water.”
TSMC, which manufactures many of the ARM-based SoCs for the mobile chipset vendors, will “follow Intel’s lead to finFETs, but likely at 14nm. I don’t think IBM will hit finFETs until 14 as well,” Freeman said.
There are several manufacturing challenges with finFETs, the Gartner analyst said. The lithography tools must be able to print the feature size with the required alignment. “It was thought that EUV was going to be needed for this but it is possible that the overlay ASML and Nikon are achieving now will allow them to be successful,” Freeman said.
Sidewall doping is another challenge. “While the plasma immersion does a fairly good job there have been concerns regarding uniform doping density on the side wall of the source and drain,” he said.
Sidewall roughness — always an issue at leading edge design rules — can be a particularly tough issue for finFETs during etching of the poly or silicon line, he said.
Intel said its 22nm Trigate design performs well at low voltage operation.
The 22nd IEEE/SEMI Advanced Semiconductor Manufacturing Conference (ASMC 2011) will feature a panel discussion on “Models for Successful Partnerships in Semiconductor Manufacturing” April 17 in Saratoga Springs, N.Y.
The panel session will highlight the vital role of partnerships in furthering semiconductor manufacturing innovation and advancements.
Panelists will examine how to collaborate across the semiconductor development and manufacturing supply chain. The panel includes:
Dr. Walid Ali, Advanced Technology Investment Company (ATIC)
Olivier Demolliens, CEA-Leti
Prof. Michael M. Fancher, College of Nanoscale Science and Engineering (CNSE)
Market research firm IHS iSuppli said 2010 worldwide semiconductor industry revenues finally broke through the $300 billion mark, rising 32.1 percent to $304.1 billion, up from $230.2 billion the previous year.
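The growth figure checks out against the two revenue numbers:

```python
# Consistency check on the IHS iSuppli figures quoted above.
rev_2010 = 304.1   # $B, 2010 worldwide semiconductor revenue
rev_2009 = 230.2   # $B, 2009

growth = rev_2010 / rev_2009 - 1
print(f"growth: {growth:.1%}")   # matches the reported 32.1 percent
```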
The final IHS iSuppli 2010 semiconductor revenue ranking noted that Samsung Electronics Co. Ltd.’s revenues increased 59.1 percent last year, as DRAM sales expanded by 75 percent and NAND flash grew by 38.6 percent.
“Continuing its steady rise in the semiconductor industry, Samsung in 2010 came closer to challenging Intel Corp.’s chip market leadership than any company had in more than a decade,” the IHS iSuppli report said.
With a 9.2 percent share of global chip revenues, up from 7.6 percent in 2009, Samsung is only 4.1 percentage points behind Intel. “The rise of Samsung is one of the biggest stories of the last decade in the worldwide semiconductor market,” said IHS analyst Dale Ford. “When experts discuss competition for Intel, they almost always focus on Advanced Micro Devices Inc. (AMD). While it is true that AMD is Intel’s major competitor in the MPU market, Samsung is the primary rival of Intel for overall semiconductor market share. Although they are mainly indirect competitors in the marketplace, Intel and Samsung have been ranked No. 1 and No. 2, respectively, for a number of years.”
In 2001 Intel’s market share at 14.9 percent was more than three times that of Samsung at 3.9 percent; Samsung ranked fifth then. Since that time, Intel’s market share has ranged between 11.9 percent and 14.8 percent. Meanwhile, Samsung has seen its revenues grow by 355 percent.
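The share arithmetic in the paragraphs above can be made explicit. Percentages are as reported by IHS iSuppli; no additional figures are assumed:

```python
# Market-share arithmetic from the reported IHS iSuppli percentages.
samsung_2010_share = 9.2   # percent of global chip revenue
gap_to_intel = 4.1         # percentage points behind Intel

intel_2010_share = samsung_2010_share + gap_to_intel   # 13.3%

# 2001: Intel 14.9% vs Samsung 3.9% -> "more than three times".
intel_2001_share, samsung_2001_share = 14.9, 3.9
ratio_2001 = intel_2001_share / samsung_2001_share     # ~3.8x

print(f"Intel 2010 share: {intel_2010_share:.1f}%, 2001 ratio: {ratio_2001:.1f}x")
```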
Micron Technology, Hynix Semiconductor, and Elpida Memory expanded their shares of the total market by 1.1, 0.7 and 0.4 percentage points, respectively. For Micron, the combination of strong memory market growth and its acquisition of Numonyx propelled the company up five places into the Top 10, to No. 8. Hynix and Elpida achieved revenue expansion of 66.2 percent and 63.3 percent, respectively — the largest increases among Top 20 semiconductor companies based entirely on organic growth. As a result, Elpida jumped four spots from No. 15 in 2009 to No. 11 in 2010, while Hynix advanced one place to No. 6.
Renesas Electronics Corp. went up in the rankings from No. 9 in 2009 to No. 5 in 2010 due to the merger of Renesas Technology and NEC Electronics. The two companies, which had combined revenues in 2009 of $9.5 billion, grew 24.7 percent, less than the overall market, to $11.9 billion in 2010.
Maxim Integrated Products jumped six places to No. 24, Marvell Technology Group jumped five places to No. 18, and Broadcom moved into the Top 10 for the first time. Qualcomm, with growth of 12.4 percent, slipped from No. 6 to No. 9.
“In a notable reversal from historical trends, fabless semiconductor suppliers underperformed the overall semiconductor market,” Ford said. Fabless semiconductor suppliers as a group achieved revenue growth of only 26 percent in 2010. However, seven fabless companies ranked among the Top 25 suppliers in 2010, up from six in 2009. The seven were Qualcomm, Broadcom, AMD, Marvell Technology Group, MediaTek, Nvidia, and Xilinx.
Intel Corp. is developing a heterogeneous CMOS tri-gate solution that could be ready “seven or eight years down the road,” Intel fellow Paolo Gargini said during a presentation at the SEMI Industry Strategy Symposium (ISS) Europe event, held in Grenoble, France.
“It makes sense to look at germanium again” for the PFET, with a III-V indium gallium arsenide (InGaAs) compound in the NFET channel, he said. By using chemical vapor deposition (CVD), germanium and InGaAs could be deposited locally on a silicon substrate. “It takes about five years to do, but with germanium we could get a saturation current that is 2X better at the same leakage than silicon, with a smaller supply voltage. The next step is to bring in the III-V’s,” Gargini said. He added that the heterogeneous Ge/InGaAs combination is one of several options that Intel is considering.
Intel has been researching InGaAs quantum well FETs, with an IEDM 2010 presentation on the work by Marko Radosavljevic and colleagues. The IEDM paper includes details about a high-k dielectric applied to the III-V gate stack.
Gargini cautioned that single-crystal III-V channel materials “still have a lot of defects, so it may be seven or eight years down the road before we can make it workable. But a tri-gate with III-Vs is a real structure, it is not just a PowerPoint implementation.”
Gargini said Intel’s work on a FinFET structure – which Intel calls a TriGate transistor – would carry over into the heterogeneous Ge/III-V generation of technology. “It could come in by 2020 if we can make it manufacturable. We know it will work,” he said. During the SEMI ISS Europe presentation he called on Europe’s equipment and materials manufacturers to work with Intel on bringing the heterogeneous technology to fruition.
A Ge/III-V implementation exhibits better DIBL (drain induced barrier lowering), a key challenge at future CMOS nodes. “There is a reduced influence of the drain on lowering the voltage on the source,” he said. In the IEDM 2010 paper, Radosavljevic wrote that compared to the planar high-k InGaAs QWFET with similar Tox, the non-planar, multigate InGaAs QWFET has a better enhancement-mode Vt and significantly improved electrostatics due to better gate control.
At the SEMI ISS Europe event, Gargini was asked if CMOS is nearing the end of scaling. He said scaling “is not a beauty contest. We will squeeze the existing technologies to the limit. We will get the III-V technology ready, and then the manufacturing will be ready two to four years later.” He noted that heterogeneous Ge/III-V transistors have moved from the International Technology Roadmap for Semiconductors (ITRS) emerging technologies committee to the PIDS committee. Gargini is chairman of the ITRS general committee.
Further out, Gargini said he sees some promise in tunneling transistors, or TFETs. He described how the early work of Nobel Prize winner Leo Esaki in TFETs could provide a path toward band tunneling transistors, operating at 0.3 Volts. And the devices could be optimized for low leakage, providing one order of magnitude faster speeds and two orders lower leakage.
Fig. 1: Evolution of InGaAs QWFET from planar to non-planar, multi-gate architecture:
(a) Planar Schottky gate QWFET with source/drain comprised of n++ InGaAs cap, thick upper barriers and Si–doping.
(b) Planar QWFET structure similar to (a) except the Schottky gate is replaced by high-K/metal gate stack.
(c) Planar high-K QWFET similar to (b) except the thick upper barriers and Si-doping are removed in the S/D area [this work].
(d) Non-planar, multi-gate high-K QWFET, with the transistor channel being in the shape of a “fin”, and ultra-scaled drain-gate and source-gate separations (LSIDE) [this work].
Eliminating the thick upper barriers and Si-doping in (c) and (d) while using n++ InGaAs cap as the carrier supply enables S/D contact area scaling with low resistance. (Source: IEDM 2010 presentation)
To know what’s happening in lithography for high-volume manufacturing (HVM) the SPIE Advanced Lithography conference and exhibit is the place to be. Presentations at the Nikon Precision and KLA-Tencor customer events the day before provide vital context. On Day 0 of the conference, Intel fellow Sam Sivakumar said EUV is too late for Intel’s 14nm node, and may be too late for the development of 10nm generation design-rules (DR). EUV could still be on time for single-exposure work in 16/14nm nodes at foundries and memory fabs, and many companies may use EUV to cut grid-lines made using 193i tools.
Before the industry can get to high-volume manufacturing (HVM) of commercial ICs in a fab, companies need to lock-in processes in pilot production. Before that, they must have design-rules (DR) set. Intel leads the world in the race to the smallest features, with the 32nm node in HVM, 22nm node now in pilot production, and 14nm node design rules already set for HVM in 2013. An Intel 10nm node in HVM would follow in 2015.
Sam Sivakumar (source: Intel)
At Nikon’s LithoVision workshop, Sam Sivakumar, Intel fellow (figure), explained that the DR set for 14nm used 193nm-immersion double-patterning (193i-DP), and that for the 10nm node—featuring 20nm actual line width and 40nm pitch—design rules will be frozen early in 2013. “So production EUV tools will be delivered too late to meet the need to develop DR for the 10nm node, though Intel remains committed to going into production with EUV,” said Sivakumar. Regarding the possibility of re-insertion, it is possible but only after surmounting a barrier. “We’d need to reset the DR for EUV,” elaborated Sivakumar, “because it doesn’t make sense to use DR developed for 193i with EUV. The key point is that DR flexibility needs to be built in, so that we can smoothly insert EUV and derive maximum benefit.”
[UPDATE 3/1: In an evening panel session, Sivakumar asserted Intel's official position as, “Our primary plan is to use EUV for 10nm, but we need ArF double-patterning as a backup.” Intel will have a pre-production 3100 tool this year, and likely will want a production 3300 whenever available. “When going to immersion, it took us well over a year to be able to get the defectivity levels down to that of dry. I'm hoping that all the learnings that will come from the 3100 will map to the 3300.”]
[UPDATE 3/2: In an exclusive followup meeting with SemiMD, Sivakumar explained that Intel has long planned to do 14nm node pilot using EUV, and should be on schedule with the shipment of the 3100 to meet plans. Many of the learnings to be found during pilot, such as SMO-dependencies, should map to HVM so there is confidence that the technology will be capable of ramping into 10nm node production.]
Discussing lithographic CoO for 22nm node patterning, Hidetami Yaegashi of TEL showed data indicating that the cost of 193i double-patterning (193i-DP) should be lower than that of EUV running at 150 wph, and perhaps only 50% that of EUV running at 60 wph.
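The shape of that comparison can be reproduced with a toy throughput-driven cost model. All dollar figures below are hypothetical assumptions for illustration, not TEL's data; real CoO models also include depreciation schedules, masks, resist, and uptime:

```python
# Toy cost-of-ownership model: cost per wafer pass is the hourly tool
# cost divided by throughput. Hourly costs are hypothetical.
def cost_per_wafer_pass(tool_cost_per_hour, wph):
    """Cost of one exposure pass at a given wafers-per-hour rate."""
    return tool_cost_per_hour / wph

euv_cost_per_hour = 30_000.0    # assumed: EUV tools cost far more to run
arf_i_cost_per_hour = 12_000.0  # assumed: mature 193i tool

litho_193i_dp = 2 * cost_per_wafer_pass(arf_i_cost_per_hour, 150)  # two passes
euv_at_150 = cost_per_wafer_pass(euv_cost_per_hour, 150)
euv_at_60 = cost_per_wafer_pass(euv_cost_per_hour, 60)

# Qualitative ordering matches the claim above: DP undercuts EUV at
# 150 wph, and is a fraction of EUV at 60 wph.
print(f"193i-DP ${litho_193i_dp:.0f} < EUV@150 ${euv_at_150:.0f} < EUV@60 ${euv_at_60:.0f}")
```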
Any delays in EUV seem to be due to the tough science and engineering challenges associated with sources and resists, while stepper OEMs have been meeting their development commitments.
Part of the main body of an ASML NXE3100 "pre-production" EUV lithography tool being installed at IMEC (source: IMEC)
Leading off Day 1, Luc Van den hove, IMEC president and CEO, announced in his plenary keynote that ASML started shipping the new NXE:3100 “pre-production” EUV stepper/scanner to IMEC last week, using 20 trucks. “The body is being installed even as we speak here,” (figure) announced Van den hove. Source and resist improvements could get us to 60 wph in another year, but all bets are off the table for when ASML could double that. He also said that flare in the new tool is only ~4%, compared to 8-10% for the “alpha-demo-tool” first installed.
Next in the morning was the plenary keynote by Shang-yi Chiang, senior vice president of R&D at TSMC, who said “most people believe that Moore’s Law is nearing the end…whether we can extend Moore’s Law into the next decade is in your hands.” The eventual limit will be economic, not technical. “Within transistor and interconnect technology developments we do not see any roadblocks,” he explained. “So the lithography cost is the single greatest factor which may limit our ability to extend Moore’s Law into the next decade.” TSMC’s CoO modeling indicates that 100-150wph EUV should cost less than 193i-DP for 14nm nodes and beyond.
Franklin Kalk, Toppan Photomasks VP, met this afternoon with SemiMD to discuss the many known challenges with lithography for the 22nm node and beyond. Kalk sees pragmatic evolution of optical technologies continuing. Metaphorically speaking, we’re not about to crash into the ground. “I feel like we’re going to land and everything will be OK,” reassured Kalk. “But we may need reverse thrusters to not run off the runway, and we may land on one wheel and bounce a bit.”
After years of debate and development, air gaps are finally seeing commercial introduction in Flash interconnects. An IEDM 2010 paper presented by Kirk Prall of Micron Technology and Krishna Parat of Intel described the interconnect technology used for the companies’ 25nm multi-level-cell 64 Gbit NAND, which includes air-gaps in low-k dielectric materials (figure).
Intel/Micron 25nm node NAND Flash structures using air-gaps in a) the wordline direction to reduce floating-gate interference by 25%, and b) the bitline direction to reduce capacitance by 30%. (source: IEDM2010 S05P02)
The disclosure follows details first shown by Intel at the International Interconnect Technology Conference (IITC 2010) on the reliability of air-gaps for electrical insulation in nano-scale devices. While other companies have shown tests of air-gaps, this is the first time that a commercial chip has been designed using air-gaps. Prior Flash chips from many manufacturers have had air-gaps, but seemingly only as anticipated accidents. Philips (now NXP) and IBM have reported on air-gaps for logic chips, but thus far without product commitments.
To reduce the dielectric constant (k) in shrinking integrated circuits (IC), there once was a roadmap for new materials to be deployed with ever lower k at each node. However, integration challenges in practice limited new materials to essentially two moves over the last 15 years: from SiO2 (k~4) to SiOF (k~3.5) and then to SiOCH (k~3). Because solid materials with k<3 have generally not met integration requirements for mechanical stability, much effort was expended to try to add pores (with k~1) to SiOCH to make a porous-low-k (PLK) dielectric with bulk k value proportional to the percent of air incorporated. PLK with porosity up to ~10% allows for k~2.7, and such a film can be integrated with minimal extra work (perhaps just a UV stabilization anneal step) compared to pure SiOCH. However, adding porosity >10% mandates the use of extra barrier/cap layers that almost always combine to produce new failure mechanisms, and the extra layers add capacitance to the whole dielectric stack such that the “effective-k” (keff) tends to end up back at ~2.7 after integration.
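The porosity-to-k relationship described above can be illustrated with a simple linear rule of mixtures. This is a first-order sketch; real effective-medium models (e.g., Bruggeman) and the UV stabilization step shift the numbers somewhat:

```python
# Linear rule-of-mixtures sketch of porous-low-k (PLK) dielectrics:
# bulk k is the volume-weighted average of pore (air) and matrix k.
k_sioch = 3.0   # dense SiOCH, per the values cited above
k_air = 1.0     # pores

def bulk_k(porosity):
    """Volume-weighted average permittivity at a given pore fraction."""
    return porosity * k_air + (1 - porosity) * k_sioch

# ~10% porosity gives k~2.8 under this linear model, close to the
# k~2.7 the article cites; ~15% porosity hits 2.7 exactly here.
print(f"k at 10%: {bulk_k(0.10):.2f}, k at 15%: {bulk_k(0.15):.2f}")
```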
Cross-sectional schematic of CVD filling the space between two lines where a) first the deposition is relatively conformal, then b) a “bread-loaf” profile at top corners grows, such that c) the top “pinches-off” to form an additive air-gap. (source: BetaSights)
At an abstract conceptual level, an air-gap in a dielectric may be considered as a limiting case of PLK, where there is merely one large pore designed into the center of the structure. This is different from “air-bridges” where all solid and liquid dielectric is removed from interconnect structures. Some solid SiOCH dielectric remains around the air-gaps to provide mechanical strength and chemical barrier.
At IEDM 2010, Hynix R&D Division researcher Sungjoo Hong discussed the challenges of continued scaling of NAND Flash technology based on the floating gate (FG) architecture. Scaling has created word-line (WL) to WL spacings such that cross-coupling effects now decrease programming speed. Hong stated that air-gap technology can minimize cross-coupling, but it is necessary to control the integrated process for precise uniformity.
For over twenty years, air-gaps have been seen as defects in dielectrics deposited between metal lines. This editor once ran the application lab for Watkins-Johnson (W-J) atmospheric pressure chemical-vapor deposition (APCVD) systems, and learned that any CVD tool can be tuned to produce air-gaps in between lines of equal spacing. As the line sidewalls get coated the cross-section of the coating starts to look like a “bread-loaf” on each side until the top sides “pinch-off” to form an “air-gap” (figure). If you are working with subtractive metal patterning like that for aluminum or tungsten then additive air-gaps can come free with the dielectric deposition.
If you are working with additive metal like copper then additive air-gaps cost an extra etch and deposition step. Of course in either case, the “gap” is really an elongated bubble, and the “air” inside is a combination of the ambient inside the CVD chamber along with trace vapors from the dielectric material.
ChipWorks (the IC reverse-engineering experts in Ottawa) has cross-sectioned commercial memory chips for many years, and has often observed dielectric voids in NAND Flash structures. “We have seen voids in between the wordlines of NAND Flash chips, in structures that appear to be not too different from what has been regular in memory,” said ChipWorks’ senior technology advisor Dick James. “We’ve seen voids at ~50nm half-poly-pitch in NAND chips. Even at ~90nm there have been variable voids.”
SEM cross-section of the wordlines in a Samsung 90nm branded NAND Flash chip, showing air-gaps formed between some of the lines as beneficial accidents of processing. Micron and Toshiba NAND Flash chips show similar accidental air-gaps appearing at the 90nm node and in all smaller chips. (source: ChipWorks)
However, the voids seen so far in commercially available NAND chips appear to be incidental, since they vary in size from gate to gate and sometimes disappear entirely (figure). Since ChipWorks has SEM cross-sections of chips from Micron, Samsung, and Toshiba all showing sporadic air-gaps, and since none of these companies had declared air-gaps as intentional design elements, it appears likely they truly were anticipated accidents. Anticipated, since the design and manufacturing must allow for air-gaps to be present, yet accidental, since the air-gaps may not appear in any one place. Generally speaking, the repeating structures of memory chips make such anticipated accidental integration possible, while random logic structures don’t allow for such accidents.
Subtractive air-gaps: staying between the lines
To control where air-gaps form outside of periodically structured arrays, another lithography step may be needed to align an etch mask, as shown by Philips and IBM. During their IEDM presentation, Micron and Intel did not detail the air-gap integration processes used for the WL and bitline (BL) dielectrics. However, since the two cross-sections appear differently in the IEDM paper, they probably used different processes. The WL direction looks similar to the WL seen by ChipWorks in previous NAND structures, so was probably formed additively. Presuming the designers are given forbidden pitches to avoid in a layer, the air-gaps could truly come free without even the need for a “non-critical” mask to block out inconvenient areas.
Air-gaps may soon appear in logic structures as well. Since it appears likely that the 22/20nm logic node will rely upon 1D grid layouts, such severely restricted design rules (RDR) will already include forbidden pitches. Consequently, pinch-off additive air-gaps could easily be tuned into CVD dielectric processes. If subtractive air-gap flows are needed, the array patterns still allow for relatively easier patterning. Both Intel and IBM are now in top-secret pilot production at this node, but within the year we should learn whether air-gaps are only for memory or whether they will be the mainstream low-k dielectric solution for all future ICs.
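The appeal of air-gaps as a low-k solution comes down to simple dielectric physics: air has a relative permittivity near 1, versus roughly 2.5-3.0 for typical low-k films. A first-order sketch of the line-to-line coupling reduction, using a parallel-plate approximation with purely illustrative geometry and k values (none of these numbers come from the chips discussed above):

```python
# Hedged sketch: estimate how much line-to-line coupling capacitance drops
# when a low-k dielectric (k ~ 3.0) between adjacent metal lines is replaced
# by an air-gap (k ~ 1.0). Parallel-plate approximation; geometry is made up
# for illustration and ignores fringing fields and partial gap fill.
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def line_to_line_capacitance(k, height_m, length_m, spacing_m):
    """C = k * eps0 * A / d for two facing line sidewalls."""
    return k * EPS0 * (height_m * length_m) / spacing_m

# Two 50nm-tall, 1um-long lines spaced 25nm apart (illustrative only)
c_lowk = line_to_line_capacitance(3.0, 50e-9, 1e-6, 25e-9)
c_air = line_to_line_capacitance(1.0, 50e-9, 1e-6, 25e-9)

print(f"low-k: {c_lowk:.3e} F, air: {c_air:.3e} F")
print(f"coupling reduction: {(1 - c_air / c_lowk):.0%}")
```

In this idealized model the reduction is just 1 - k_air/k_lowk, about two-thirds; a real air-gap only partially displaces the dielectric, so the practical gain is smaller.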
In a bid to offer the most advanced fabrication process technology among contract semiconductor manufacturers, Taiwan Semiconductor Manufacturing Company has decided to skip development of a 22nm manufacturing process and move straight to 20nm process technology, with risk production starting in the second half of 2012 and volume manufacturing following in 2013.
The technology will be based on a planar process with enhanced high-K metal gate (HKMG), novel strained silicon, and low-resistance copper ultra-low-K interconnects. The technical rationale behind the move is based on the capability of innovative patterning technology and layout design methodologies required at these advanced technology nodes.
During his address to nearly 1,500 TSMC customers and third-party alliance partners, Dr. Shang-yi Chiang, TSMC senior vice president of research and development, said that the move to 20nm delivers superior gate density and a better chip performance-to-cost ratio than a 22nm process technology, making it a more viable platform for advanced technology designers. He also announced that TSMC expects to enter 20nm risk production in the second half of 2012. Dr. Chiang also indicated that the company has demonstrated record-setting feasibility of other transistor structures, such as FinFETs and high-mobility devices.
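The gate-density argument behind skipping 22nm can be sanity-checked with the usual first-order rule that transistor density scales with the inverse square of a node's linear dimension. Node names are partly marketing labels, so treat these ratios as rough illustrations rather than TSMC's actual figures:

```python
# Hedged sketch: first-order density gain from a node shrink, assuming
# density scales as the inverse square of the nominal linear feature size.
# Node names only loosely track real dimensions, so this is illustrative.
def density_gain(old_nm, new_nm):
    """Approximate transistor-density multiplier for a node transition."""
    return (old_nm / new_nm) ** 2

print(f"28nm -> 22nm: {density_gain(28, 22):.2f}x")  # ~1.62x
print(f"28nm -> 20nm: {density_gain(28, 20):.2f}x")  # ~1.96x
```

By this crude measure, a direct jump to 20nm captures nearly a full 2x density step from 28nm, whereas a 22nm stop would deliver only about 1.6x for the same retooling effort.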
"We have reached a point in advanced technology development where we need to be actively concerned about the ROI of advanced technology. We also need to broaden our thinking beyond the process technology barriers that are inherent in every new node. Collaborative and co-optimized innovation is required to overcome the technological and economic challenges,” said Dr. Chiang.
TSMC recently decided to cancel development of its 32nm manufacturing process and develop 28nm HKMG fabrication technology instead. Even though the move is projected to improve the company’s competitive position in 2011, the decision comes after the company failed to deliver sufficient production yields with its 40nm process technology, which was itself developed after TSMC decided to skip a 45nm production process.
They are a total mess today, so they are talking about the future. It is much easier.
What's interesting is that by the time this is a reality, with production in H1 2013, IBM's group will be at 18nm, the half node of 22nm, where 'Moore's Law will' supposedly 'die' according to many. TSMC WILL NOT OFFER AN 18NM (or any half) NODE. They will likely jump to some odd half-node of 16nm, like 14nm, just like they did with 40nm and now 20nm. It will likely not be quick, as lots of things need to change below 18nm. They WILL, at some point, be stuck on 20nm while GF is at 18nm by going this route. Return on investment, yes, by splitting the difference and using one node, but at the cost of process leadership.
While I always applaud the cutthroat attitude TSMC has taken on nodes, I sure as hell hope they plan it out better than they did with 40/32nm instead of 45nm. So far, there looks to be little reason it shouldn't work, as sub-18nm is where things get trickier and need a massive overhaul, ie like HKMG for 40/32nm that they didn't use.
In short: 20nm lets TSMC look good against the competition on 22nm in the same time frame. The competition won't be on 22nm as long as TSMC is on 20nm. Who gets the next process out after that (16/15/14nm), and how and when they do it, is the thing to watch. TSMC coming out with a smaller node first, and then focusing on that task, does not guarantee they will accomplish it faster or better.
It is logical to assume that transition from 28nm to 22nm process technology will make less economic sense than transition from 40nm to 28nm. However, jumping directly from 28nm to 20nm essentially kills 16nm fabrication process.
Basically, TSMC is aiming at very aggressive element sizes and essentially leaves itself without the ability to test certain new materials under less extreme conditions (which may result in low yields, etc.).
So, we can draw a rather simple conclusion: half-node process technologies are going to disappear in the mid-term, as both Globalfoundries and TSMC are aiming at the smallest transistor sizes, and there seems to be no substantial economic benefit to further shrinking existing technologies. In fact, 40nm is already effectively a full-node process, since 45nm essentially does not exist.
There is a big question of how they plan to diversify process technologies in terms of high-performance, low-power, etc.
[Posted by: Anton | Date: 04/15/10 07:26:30 AM]
The transistors on computer chips — whether for PCs or smartphones — have been designed in essentially the same way since 1959, when Robert Noyce, Intel’s co-founder, and Jack Kilby of Texas Instruments independently invented the first integrated circuits that became the basic building block of electronic devices in the information age.
These early transistors were built on a flat surface. But like a real estate developer building skyscrapers to get more rentable space from a plot of land, Intel is now building up. When the space between the billions of tiny electronic switches on the flat surface of a computer chip is measured in the width of just dozens of atoms, designers needed the third dimension to find more room.
The company has already begun making its microprocessors using a new 3-D transistor design, called a Finfet (for fin field-effect transistor), which is based around a remarkably small pillar, or fin, of silicon that rises above the surface of the chip. Intel, based in Santa Clara, Calif., plans to enter general production based on the new technology some time later this year.
Although the company did not give technical details about its new process in its Wednesday announcement, it said that it expected to be able to make chips that run as much as 37 percent faster in low-voltage applications, and that it would be able to cut power consumption by as much as 50 percent.
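A power saving of that magnitude is consistent with the standard dynamic-power relation P ~ C * V^2 * f: because power depends on the square of supply voltage, a modest voltage reduction yields a large power cut at fixed frequency. The numbers below are illustrative assumptions, not Intel's published data:

```python
# Hedged sketch: the dynamic (switching) power model P = C * V^2 * f,
# normalized to a baseline of 1.0. Shows how a ~30% supply-voltage drop
# roughly halves power at the same clock. Values are illustrative only.
def dynamic_power(cap, volts, freq):
    """Classic CMOS switching-power estimate."""
    return cap * volts ** 2 * freq

p_base = dynamic_power(1.0, 1.0, 1.0)  # normalized baseline
p_low = dynamic_power(1.0, 0.7, 1.0)   # same C and f, 0.7x supply voltage

print(f"power ratio: {p_low / p_base:.2f}")  # 0.49, i.e. ~50% lower
```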
Intel currently uses a photolithographic process to make a chip in which the smallest feature is just 32 nanometers, a level of microscopic manufacture that was reached in 2009. (By comparison, a human red blood cell is 7,500 nanometers in width and a strand of DNA is 2.5 nanometers.) “Intel is on track for 22-nanometer manufacturing later this year,” said Mark T. Bohr, an Intel senior fellow and the scientist who has overseen the effort to develop the next generation of smaller transistors.
The company’s engineers said that they now felt confident that they would be able to solve the challenges of making chips through at least the 10-nanometer generation, which is likely to happen in 2015.
The timing of the announcement Wednesday is significant, Dr. Bohr said, because it is evidence that the world’s largest chip maker is not slipping from the pace of doubling the number of transistors that can be etched onto a sliver of silicon every two years, a phenomenon known as Moore’s Law. Although not a law of physics, the 1965 observation by Intel’s co-founder, Gordon Moore, has defined the speed of innovation for much of the world’s economy. It has also set the computing industry apart from other types of manufacturing because it has continued to improve at an accelerating rate, offering greater computing power and lower cost at regular intervals.
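The compounding described above is easy to quantify: a doubling every two years multiplies transistor counts 32-fold in a decade. A minimal sketch of that arithmetic (the starting count is an arbitrary illustration):

```python
# Hedged sketch of the Moore's Law cadence described above: transistor
# count doubling every two years. The starting count is arbitrary.
def transistors(start_count, years, doubling_period_years=2):
    """Project a transistor count forward under periodic doubling."""
    return start_count * 2 ** (years / doubling_period_years)

# From 1 billion transistors, ten years of two-year doublings:
print(f"{transistors(1e9, 10):.1e}")  # 3.2e+10, a 32x increase
```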
However, despite its promise and the company’s bold claims, Intel’s 3-D transistor is still a controversial technology within the chip industry. Indeed, a number of the company’s competitors say they believe that Intel is taking what could be a disastrous multibillion-dollar gamble on an unproved technology.
There has been industry speculation that Finfet technology will give Intel a clear speed advantage, but possibly less control over power consumption than alternative approaches.
By opting for a technology that emphasizes speed over low power, Intel faces the possibility that it could win the technology battle and yet lose the more important battle in the marketplace. The scope of Intel’s gamble is underscored by the fact that while the company dominates in the markets for data center computers, desktops and laptops, it has largely been locked out of the tablet and smartphone markets, which are growing far more quickly than the traditional PC industry.
Those devices use ultra-low-powered chips to conserve battery power and reduce overheating. Apple, for example, uses Intel’s microprocessors for its desktops and laptops, but for the iPhone and iPad it has chosen to use a rival low-power design, built by others, that Apple originally helped pioneer in the late 1980s.
Industry executives and analysts have said that Intel is likely to have a lead of a full generation over its rivals in the shift to 3-D transistors. For example, T.S.M.C., the Taiwan-based chip maker, has said that it does not plan to deploy Finfet transistor technology for another two years.
Other companies, like ST Microelectronics, are wagering that an alternative technology based on placing a remarkably thin insulating layer below traditional transistors will chart a safer course toward the next generation of chip manufacturing. They believe that the insulation approach will excel in low-power applications, and that could be a crucial advantage in consumer-oriented markets where a vast majority of popular products are both hand-held and battery-powered.
“Silicon-on-insulator could be a win in terms of power efficiency,” said David Lammers, the editor in chief of Semiconductor Manufacturing and Design Community, a Web site. “From what I am hearing from the S.O.I. camp, there is a consensus and concession that Finfets are faster. That’s the way you want to go for leading-edge performance.”
In a factory tour here last week, Intel used a scanning electron microscope to display a computer chip made using the new 22-nanometer manufacturing process. Viewed at a magnification of more than 100,000 times, the silicon fins are clearly visible as a series of walls projecting above a flat surface.
It is possible to make transistors out of one or a number of the tiny fins to build switches that have different characteristics, such as faster switching speeds or extremely low power. Looking at the chip under less magnification, it is possible to see the wiring design, which appears much like a street map displaying millions of intersections.
Despite the impressive display, Intel’s executives acknowledge the challenge the company is facing in trying to catch up in the new consumer markets that so far have eluded it.
“The ecosystem right now is not aligned in our favor,” said Andy D. Bryant, Intel’s chief administrative officer, who now runs the company’s technology and manufacturing group. “It has to be good enough for the ecosystem to take notice and say, ‘We better pay attention to those guys.’ ”
This article has been revised to reflect the following correction:
Correction: May 4, 2011
An earlier version of this article misspelled the dateline as Hillsborough.