Top 10 Technology Trends in the Global Semiconductor Industry in 2022

FREE-SKY (HK) ELECTRONICS CO.,LIMITED / 04-04 20:44

The chip shortage that erupted globally in the fall of 2020 persisted throughout 2021 with no sign of easing. The semiconductor industry has been expanding capacity while aggressively upgrading processes to increase output. Meanwhile, COVID-19 continues to mutate, and the ongoing pandemic keeps affecting the entire semiconductor industry. The habits of telecommuting, online meetings, and online education have accelerated the digital transformation of many industries and, in turn, driven technology upgrades in network communications, AI, storage, and cloud services.

1 3nm process mass production

In cutting-edge manufacturing processes, Samsung's foundry business repositioned 4LPE as a full process node in 2020, meaning the 4nm process will be the focus of Samsung's promotion for some time to come. In October 2021, TSMC's announcements made it fairly clear that the N3 process would be slightly delayed, so 2022 may become the year of the 4nm process, and the iPhone 14 has almost no hope of catching the 3nm process.

What is fairly clear is that although the first chips on TSMC's N3 process will probably not arrive until the first quarter of 2023, N3 mass production is set for the fourth quarter of 2022.

We believe Samsung's 3nm GAA may arrive a little later than TSMC's N3. Samsung made the switch to GAA-structure transistors the focus of its 3nm node, but it, too, has failed to advance on schedule, and based on Samsung's public data so far, its earliest 3nm process may face greater technical uncertainty.

As for Intel 3, it has no chance of catching the 2022 bus. TSMC's N3 will maintain its dominant market position and, for now, holds a significant lead over its two rivals. But tapping the brakes on N3 plants a hidden risk for the coming 2nm era.

On the one hand, Intel 20A is expected to arrive in the first half of 2024 and Intel 18A in the second half of 2025, and Intel's determination to regain technology leadership at these two nodes is quite strong. On the other hand, Samsung is expected to mass-produce its 2nm process in the second half of 2025 using its third generation of GAA-structure transistors; even though its 3nm process is unlikely to win a dominant market position, the technology will provide strong support for its 2nm process. All of this adds uncertainty to the coming 2nm market competition.

2 DDR5 standard memory enters mass production and commercial use

On July 15, 2020, to address the performance and power challenges facing applications ranging from client systems to high-performance servers, the JEDEC Solid State Technology Association officially released the final specification for the next-generation mainstream memory standard, DDR5 SDRAM (JESD79-5), kicking off a new era for computer memory technology worldwide. JEDEC describes DDR5 as a "revolutionary" memory architecture and believes its arrival marks the industry's imminent transition to DDR5 server dual in-line memory modules (DIMMs).


According to market research firm Omdia, market demand for DDR5 began to emerge in 2020, and DDR5 will account for 10 percent of the overall DRAM market by 2022, expanding to 43 percent by 2024. In 2023, DDR5 will be widely adopted in mainstream markets such as cell phones, notebooks and PCs, with shipments significantly exceeding DDR4, completing a rapid transition between the two technologies.

Processor performance has been growing much faster than memory bandwidth, and that gap is the fundamental driving force behind DDR5's launch. But unlike previous generations, which focused primarily on reducing power consumption and targeted PCs first, the industry generally believes DDR5 will follow DDR4's lead and be adopted first in data centers.

The most eye-catching aspect of DDR5 is that it is even faster than the already "super fast" DDR4. Compared with DDR4's maximum transfer rate of 3.2Gbps at a 1.6GHz clock frequency, DDR5 reaches a maximum transfer rate of 6.4Gbps while lowering the supply voltage from DDR4's 1.2V to 1.1V, further improving memory energy efficiency.
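
As a rough illustration of what the doubled transfer rate means for peak bandwidth, the sketch below multiplies the per-pin data rate cited above by the width of a standard 64-bit channel. It is a back-of-the-envelope estimate only and ignores real-world factors such as DDR5's split 32-bit subchannels and burst behavior.

```python
# Peak theoretical bandwidth of a single 64-bit memory channel,
# derived from the per-pin transfer rates cited in the text.

BUS_WIDTH_BYTES = 8  # 64-bit channel = 8 bytes per transfer

for name, gigatransfers_per_s in [("DDR4-3200", 3.2), ("DDR5-6400", 6.4)]:
    peak_gb_s = gigatransfers_per_s * BUS_WIDTH_BYTES
    print(f"{name}: {peak_gb_s:.1f} GB/s peak per 64-bit channel")

# DDR4-3200: 25.6 GB/s peak per 64-bit channel
# DDR5-6400: 51.2 GB/s peak per 64-bit channel
```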

Currently, global storage giants such as Samsung, SK Hynix, and Micron have announced their respective mass production and commercial timelines for DDR5 products. However, DDR5 will not come to market overnight and will require strong support from the ecosystem, including system and chip service providers, channel vendors, cloud service providers, and original equipment manufacturers.

3 DPU market continues to grow and explode

The DPU moniker became popular near the end of 2020. We believe two market moves made the term take off: first, NVIDIA's acquisition of the Israeli company Mellanox and its promotion of the "DPU" name the following year; and second, the startup Fungible's heavy promotion of the DPU name in the same year.

The D in DPU stands for data: the SmartNIC was elevated almost overnight into a "data processing unit," and dozens of DPU startups sprang up in a short period of time.

The DPU is essentially an evolution of the SmartNIC, but the fervor around DPUs makes it clear that data centers have a strong appetite for dedicated processors for data-path work, as well as for further consolidation and standardization of the form factor.


In the early years of data centers, there was a term called the "data center tax": servers shipped with many-core CPUs, but some of those cores were "eaten" by default before the actual business workloads ran, because processor resources were needed for virtual networking, security, storage, virtualization, and other infrastructure work. As these tasks became increasingly complex, the DPU emerged. Just as there are GPUs for graphics computing and NPUs for AI computing, the DPU is a product of this era's rise of domain-specific computing.

In general, the work of a DPU includes: first, offloading OVS, storage, security services, and other tasks that originally ran on the CPU; second, handling hypervisor management to implement isolation and virtualization; and third, further accelerating cross-node data processing in various ways.
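
A minimal back-of-the-envelope sketch of the "data center tax" described above: it estimates how many CPU cores infrastructure tasks consume across a server fleet and how many are returned to business workloads if those tasks are offloaded to a DPU. The fleet size, core counts, and percentages are illustrative assumptions, not measurements.

```python
# Illustrative "data center tax" model with assumed numbers (not measurements).
SERVERS = 1000
CORES_PER_SERVER = 64
# Assumed fraction of cores consumed by infrastructure work
# (virtual networking, storage, security, virtualization).
INFRA_TAX = 0.25
# Assumed fraction of that infrastructure work a DPU can offload.
OFFLOAD_RATIO = 0.8

taxed_cores = SERVERS * CORES_PER_SERVER * INFRA_TAX
freed_cores = taxed_cores * OFFLOAD_RATIO

print(f"Cores consumed by infrastructure tasks: {taxed_cores:,.0f}")
print(f"Cores returned to business workloads:   {freed_cores:,.0f}")
# With these assumptions, 16,000 cores are taxed and 12,800 are freed.
```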

It is easy to see why the DPU is becoming standard equipment in the data center. In terms of concrete implementations, however, different DPUs should not be judged on the same stage, because they play different roles. For example, although Intel's IPU is also a DPU, its responsibilities and emphasis differ from NVIDIA's DPU. The DPU market may therefore become segmented, and the fact that data-center system companies are developing their own, better-adapted DPUs adds further uncertainty to the market.

4 Compute-in-memory overcomes the "storage wall" and "power wall"

The concept of processing in-memory (PIM) can be traced back to the 1970s, but it was long held back by the complexity of chip design, manufacturing costs, and the lack of killer big-data applications to drive it.

With advances in chip manufacturing processes and the growth of artificial intelligence (AI) applications in recent years, processors have become ever more powerful and faster, with ever larger memory capacities. Faced with a flood of data, slow data movement and the high energy cost of moving it have become computational bottlenecks. When data is fetched from memory outside the processing unit, the transfer often takes hundreds or thousands of times longer than the computation itself, and data movement accounts for roughly 60% to 90% of the energy of the whole process, which is very inefficient.

On the other hand, Moore's Law approaching its limits and the von Neumann architecture hitting the storage wall can no longer meet this era's demand for growing computing power. Non-von Neumann architectures that attempt to break through the "storage wall" and "power wall" include low-voltage sub-threshold digital logic ASICs, neuromorphic computing, and analog computing, of which compute-in-memory is the most direct and efficient.

Compute-in-memory is a new computing architecture that performs two- and three-dimensional matrix multiplication directly in memory rather than optimizing traditional logic compute units. In theory it eliminates the latency and power of data transfer, makes AI computation hundreds of times more efficient, and reduces cost, so it is particularly well suited to neural networks.
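
As a conceptual illustration of why compute-in-memory suits neural networks, the sketch below models an analog crossbar: weights are stored as conductances that stay in place, and a matrix-vector multiply happens where the data lives, so no weight matrix is moved to a separate compute unit. This is a simplified numerical model with assumed values, not any vendor's design.

```python
import numpy as np

# Conceptual model of an analog compute-in-memory crossbar.
# Weights are programmed once as conductances G (they never leave the array);
# an input vector is applied as voltages V, and Kirchhoff's current law sums
# the per-cell currents on each column: I = G^T @ V. The matrix-vector product
# is computed "where the data lives," with no weight movement.

rng = np.random.default_rng(0)
G = rng.uniform(0.0, 1.0, size=(128, 64))   # assumed conductance matrix (weights)
V = rng.uniform(0.0, 0.2, size=128)         # assumed input voltages (activations)

I_columns = G.T @ V                         # column currents = in-array MAC result

# A conventional architecture would first copy G from memory to the compute
# unit; here the only traffic is the input vector in and the column currents out.
print(I_columns.shape)   # (64,) -> one accumulated value per column
```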

5 5G construction focuses on standalone networking and millimeter wave

With fiber-like speeds, ultra-low latency, and massive network capacity, 5G is expected to have an impact comparable to electricity, revolutionizing every industry.

As a powerful complement to the Sub-6GHz bands, 5G millimeter wave has several outstanding advantages, such as abundant high-frequency bandwidth, a natural fit with beamforming, and ultra-low latency, which favor industries such as the industrial internet, AR/VR, cloud gaming, and real-time computing. At the same time, millimeter wave supports dense deployments, high-precision positioning, and high device integration, which helps miniaturize base stations and terminals.

According to the GSMA's "The Value of Millimeter Wave Adoption" report, 5G millimeter wave is expected to create $565 billion in global GDP and generate $152 billion in tax revenue by 2035, accounting for 25% of the total value created by 5G.
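
The 25% share quoted above also implies a figure for the total value created by 5G. The one-liner below does only the arithmetic implied by the cited numbers.

```python
# Simple arithmetic implied by the GSMA figures cited above:
# if $565B is 25% of the total value created by 5G by 2035,
# the implied total is $565B / 0.25.
mmwave_gdp_billion = 565
share_of_5g_value = 0.25

implied_total = mmwave_gdp_billion / share_of_5g_value
print(f"Implied total 5G value by 2035: ${implied_total:,.0f}B (about $2.26 trillion)")
```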

Currently, 186 operators in 48 countries are planning to develop 5G in the 26-28 GHz, 37-40 GHz and 47-48 GHz millimeter wave spectrum; 134 operators in 23 countries hold licenses for millimeter wave deployments, with North America, Europe and Asia accounting for 75 percent of all spectrum deployments. Of these, 26-28 GHz is the millimeter wave band that has been deployed and licensed the most, with the 37-40 GHz band following closely behind.

But not every application scenario needs millimeter wave coverage. In July 2021, China's Ministry of Industry and Information Technology moved to deepen 5G applications in scenarios such as the industrial internet, ports, electricity, and agriculture. These scenarios place stringent requirements on bandwidth and latency, which is where millimeter wave has an advantage.

6 EDA tools begin to use AI to design chips

Today's smartphones, connected cars, IoT devices, and other terminals place higher demands on SoC PPA (power, performance, area). Faced with chip designs containing hundreds of millions of transistors, as well as new packaging directions such as heterogeneous integration, system-in-package, and chiplets, engineers will face far more serious challenges without the help of machine learning (ML) and artificial intelligence.

To move AI-assisted design from concept to the real world, whether by applying AI algorithms inside EDA tools to make chip design "AI inside," or by focusing on how EDA tools can help engineers design AI chips efficiently, "AI outside," both the EDA industry and academia have begun to act. At the national strategic level, the U.S. Defense Advanced Research Projects Agency (DARPA) has even made Intelligent Design of Electronic Assets (IDEA) a flagship program, focusing on breakthroughs in optimization algorithms, sub-7nm chip design support, routing and device automation, and other key technical challenges.

In fact, AI for chip design is not new. Google has used AI techniques in the design of its TPU chips; Samsung has integrated AI into its chip design flow, reportedly exceeding the PPA previously achievable; and NVIDIA is using AI algorithms to optimize the design of 5nm and 3nm chips.

In general, the back end of chip design (physical implementation), especially placement and routing, which consumes a huge share of engineering manpower, is where AI matters most. Fast modeling, circuit simulation, and improving VLSI QoR are also directions for applying AI in EDA. For now, AI's strength lies in large-scale computation, comparison and extraction, or enhancing specific functions, while the "0 to 1" creative and decision-making stages still require human engineers. In any case, AI looks like the end state of EDA's evolution and the key to improving chip design efficiency over the next few years.
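
To make the placement-and-routing optimization loop concrete, here is a toy simulated-annealing placer that minimizes total half-perimeter wirelength (HPWL) on a small grid. It is a classical baseline, not any EDA vendor's AI flow; ML-assisted placers target the same kind of objective but learn to steer the search instead of relying purely on random moves. The grid size, cell count, and nets are made up for illustration.

```python
import math
import random

# Toy placement problem: put cells on a grid to minimize the total
# half-perimeter wirelength (HPWL) of the nets connecting them.

random.seed(0)
GRID = 16
CELLS = 40
NETS = [random.sample(range(CELLS), k=random.randint(2, 4)) for _ in range(60)]

def hpwl(pos):
    """Sum of net bounding-box half-perimeters (the usual wirelength proxy)."""
    total = 0
    for net in NETS:
        xs = [pos[c][0] for c in net]
        ys = [pos[c][1] for c in net]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

# Random initial placement (cell overlap is not modeled in this toy).
pos = {c: (random.randrange(GRID), random.randrange(GRID)) for c in range(CELLS)}
cost = hpwl(pos)
temp = 10.0

for step in range(20000):
    c = random.randrange(CELLS)
    old = pos[c]
    pos[c] = (random.randrange(GRID), random.randrange(GRID))  # propose a move
    new_cost = hpwl(pos)
    delta = new_cost - cost
    if delta <= 0 or random.random() < math.exp(-delta / temp):
        cost = new_cost            # accept the move
    else:
        pos[c] = old               # reject and roll back
    temp *= 0.9997                 # cool down

print("final total HPWL:", cost)
```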

7 Matter will promote the unification of IoT and smart home connectivity standards

The Connectivity Standards Alliance (formerly the Zigbee Alliance) and smart home manufacturers such as Amazon, Apple, and Google have developed Matter, based on the original Project Connected Home over IP (CHIP). Matter is a standardized interconnection protocol designed to enable interoperability and compatibility between IoT devices from different manufacturers using different wireless connectivity standards, bringing consumers a better installation and operation experience and simplifying IoT device development for manufacturers and developers.

Matter serves as the application layer that unifies devices operating with various IP protocols and interconnection standards, supporting them to communicate across platforms. The Matter protocol currently supports three underlying communication protocols - Ethernet, Wi-Fi, and Thread - and also unifies the use of low-power Bluetooth (BLE) as a pairing method. It is an architecture that runs on top of existing protocols and will support more protocols in the future, including Zigbee and Z-Wave.
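
A minimal sketch of how the layering described above might be modeled in code: Matter as the common application layer, an IP transport underneath, and BLE reserved for commissioning. The types and device entries are illustrative assumptions only; they are not the Matter SDK API.

```python
from dataclasses import dataclass
from enum import Enum, auto

# Illustrative model of the stack described above: Matter is the application
# layer, the operational transport is an IP network (Ethernet, Wi-Fi, or
# Thread), and BLE is used only for commissioning (pairing).

class Transport(Enum):
    ETHERNET = auto()
    WIFI = auto()
    THREAD = auto()

@dataclass
class MatterDevice:
    name: str
    transport: Transport                 # operational IP transport
    commissioning: str = "BLE"           # pairing method unified by Matter

    def can_talk_to(self, other: "MatterDevice") -> bool:
        # Devices interoperate at the Matter application layer even when
        # their underlying transports differ, as long as both are IP-reachable.
        return True

bulb = MatterDevice("light bulb", Transport.THREAD)
speaker = MatterDevice("smart speaker", Transport.WIFI)
print(bulb.can_talk_to(speaker))   # True: same application layer, different transports
```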

The Matter standard is already backed by internet giants (Amazon, Apple, and Google), chip vendors (Silicon Labs, NXP, and Espressif), IoT and smart home device makers (IKEA, Huawei, and OPPO), and smart home platforms (Tuya and Wulian), and is expected to spread rapidly worldwide from 2022 onward, becoming the unified interconnection standard for IoT and smart homes.

8 RISC-V processors enter high-performance computing applications

RISC-V, which originated at UC Berkeley more than 10 years ago, has become a mainstream instruction set architecture (ISA), but its applications have largely been limited to embedded systems and microcontrollers (MCUs), especially the IoT market. Can this open-source, royalty-free architecture challenge x86 and Arm in high-performance computing (HPC)? Everyone from chip giants and fabless startups to processor core IP developers is trying to bring RISC-V into high-performance applications such as AI, 5G, and servers.


SiFive's Performance series is its highest-performance RISC-V core family, designed for networking, edge computing, autonomous machines, 5G base stations, and virtual/augmented reality. The latest P550 core implements the RISC-V RV64GBC ISA with a 13-stage, triple-issue, out-of-order pipeline; a quad-core cluster has 4MB of L3 cache and runs at 2.4 GHz. The P550 scores 8.65/GHz on SPECint2006. Compared with the Arm Cortex-A75, it delivers higher SPECint2006 and SPECfp2006 integer/floating-point benchmark performance in a much smaller area; a quad-core P550 cluster occupies roughly the same area as a single Cortex-A75.
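
The per-GHz rating quoted above can be turned into a rough absolute score by multiplying by the clock frequency, assuming the figure scales linearly with frequency (real, memory-bound workloads will not scale quite that cleanly).

```python
# Implied absolute SPECint2006 score from the per-GHz figure and clock cited above.
score_per_ghz = 8.65
clock_ghz = 2.4

estimated_score = score_per_ghz * clock_ghz
print(f"Estimated SPECint2006 per P550 core at {clock_ghz} GHz: {estimated_score:.1f}")
# ~20.8, assuming linear scaling with frequency.
```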

Intel will use the P550 cores in its 7nm Horse Creek platform, and by combining Intel interface IPs such as DDR and PCIe with SiFive's highest performance processors, Horse Creek will provide valuable and scalable development tools for high-end RISC-V applications.

Esperanto, a Silicon Valley IC design startup, has introduced the ET-SoC-1, an AI accelerator chip with more than 1,000 integrated RISC-V cores, designed for AI inference in data centers. Built on TSMC's 7nm process and integrating 24 billion transistors, the ET-SoC-1 includes 1,088 ET-Minion 64-bit in-order RISC-V cores (each with its own vector/tensor unit), four high-performance ET-Maxion 64-bit out-of-order RISC-V cores, and over 160MB of on-chip SRAM, plus external memory interfaces for LPDDR4x DRAM and eMMC flash and general-purpose I/O including PCIe x8 Gen4. The chip's peak ML inference performance is 100-200 TOPS at an operating power below 20W.
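
The performance and power figures above imply an energy-efficiency range, computed below; since the power is stated as "less than 20W," these are lower-bound estimates.

```python
# Energy efficiency implied by the figures above: peak TOPS divided by power.
peak_tops_range = (100, 200)
power_w = 20  # stated as "less than 20W", so these are lower-bound estimates

for tops in peak_tops_range:
    print(f"{tops} TOPS / {power_w} W = at least {tops / power_w:.0f} TOPS/W")
# 100 TOPS -> >=5 TOPS/W, 200 TOPS -> >=10 TOPS/W
```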

9 Advanced packaging technology becomes the "new Moore's Law"

Over the past few decades, Moore's Law has been a beacon guiding the development of the semiconductor industry. However, because of physical limits and manufacturing costs, as advanced process technology reaches 5nm, 3nm, or even 2nm, extracting more economic value purely through transistor scaling is becoming less and less effective.

From a market perspective, the growth of data and computing over the past decade has exceeded the sum of the previous forty years. Cloud computing, big data analytics, artificial intelligence, AI inference, mobile computing, and even autonomous vehicles all require massive amounts of computation. To solve the computing-power growth problem, in addition to continuing to improve density through CMOS scaling, it is important to combine hardware built on different processes and architectures, with different instruction sets and different functions.

Thus, an IC technology development path that is no longer a straight line, together with the market's demand for innovative solutions, has pushed packaging, and especially advanced packaging, to the forefront of innovation.

The latest research data show that the advanced packaging market will grow at a CAGR of approximately 7.9% from 2020 to 2026 and will exceed $42 billion in revenue by 2025, far outpacing the roughly 2.2% growth expected for the traditional packaging market. Among the platforms, 2.5D/3D stacked ICs, embedded die (ED), and fan-out (FO) packaging are the fastest growing, with CAGRs of 21%, 18%, and 16%, respectively.
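
For a sense of what these growth rates mean cumulatively, the sketch below compounds each quoted CAGR over the six-year 2020-2026 window; it uses only the percentages cited above.

```python
# What the CAGRs quoted above compound to over the 2020-2026 window (6 years).
def growth_multiple(cagr: float, years: int) -> float:
    """Total growth multiple for a compound annual growth rate."""
    return (1 + cagr) ** years

for label, cagr in [("advanced packaging overall", 0.079),
                    ("2.5D/3D stacked IC", 0.21),
                    ("embedded die", 0.18),
                    ("fan-out", 0.16),
                    ("traditional packaging", 0.022)]:
    print(f"{label:28s}: x{growth_multiple(cagr, 6):.2f} over 2020-2026")
# e.g. 7.9% CAGR -> ~1.58x, 21% CAGR -> ~3.14x, 2.2% CAGR -> ~1.14x
```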

Currently, OSATs, foundries, IDMs, fabless companies, EDA tool vendors, and others are joining the race for the advanced packaging market and spending heavily. In general, for the foreseeable future, 2.5D/3D packaging will form the core of "advanced packaging," and improving interconnect density and adopting chiplet designs will be the two technical paths driving its development. Realizing the full value of advanced packaging will require synergy across the whole industry chain.

10 Automotive domain controllers and the automotive brain

The entire automotive electrical and electronic (E/E) architecture is shifting from the traditional distributed architecture to a DCU-based domain-centralized architecture and, further, to a zonal architecture based on DCU fusion.

At present, automotive E/E architectures mainly follow a three-domain control structure: intelligent cockpit, intelligent computing, and intelligent driving. After 2030, as autonomous driving technology routes mature, the high-performance autonomous driving chip is expected to be further integrated with the cockpit main control chip into a central computing chip, improving computing efficiency and reducing cost through integration.

This means the car now needs a very powerful "brain": one that acts as the hardware hub while also providing enough computing power to meet the new hardware and software demands generated during this transformation.

In fact, for autonomous driving system development, the industry generally agrees that the progressive route from L2+ assisted driving to L4/L5 autonomous driving is the most feasible path. This requires the central computing platform to offer excellent scalability, support the smooth evolution of system development, meet the differing computing-power and power-consumption requirements of each level of autonomy, and improve development efficiency for partners such as OEMs.

Of course, the car-brain chip cannot chase peak computing power alone; it must strike a comprehensive balance. Information security, functional safety, heterogeneous architectures, processing of different data types, thermal management, and more must all be considered. And since the "software-defined vehicle" has become an industry consensus, designs also need to reserve enough headroom to cope with evolving vehicle architectures and AI algorithms.

There is no doubt that the car of the future will be an intelligent electromechanical system, and integrating existing subsystems as much as possible will become a trend. Once the hardware development bottleneck is broken, the software behind an excellent user experience will become an important selling point of the car.

