1. | EXECUTIVE SUMMARY
1.1. | What is high performance computing (HPC)? |
1.2. | HPC architectures |
1.3. | On-premises, cloud, and hybrid HPC infrastructures: shift towards cloud & hybrid |
1.4. | Development towards exascale supercomputers and beyond
1.5. | Distribution of TOP500 supercomputers (June 2024) |
1.6. | Exascale supercomputers around the world |
1.7. | The HPC manufacturer landscape |
1.8. | Supercomputing in China and the US-China trade war (I): no more contribution to the TOP500 list |
1.9. | Supercomputing in China and the US-China trade war (II): onshoring chip supply |
1.10. | State of the art exascale supercomputers: Aurora, Frontier and El Capitan |
1.11. | El Capitan: the top US HPC in November 2024 |
1.12. | Overview of players in HPC |
1.13. | CPUs for HPC and AI workloads introduction |
1.14. | CPU introduction |
1.15. | Core Architecture in a CPU |
1.16. | Key CPU Requirements for HPC and AI Workloads (1) |
1.17. | Key CPU Requirements for HPC and AI Workloads (2) |
1.18. | HPC CPU Market Trends by Company in the TOP500 Supercomputers |
1.19. | HPC Market Trends by Company in the TOP500 Supercomputers |
1.20. | CPU and GPU Architecture Comparison |
1.21. | High Performance GPU Architecture Breakdown |
1.22. | Highlights from Key GPU Players |
1.23. | Trends in Data Center GPUs |
1.24. | 2.5D and 3D semiconductor packaging technologies key to HPC and AI |
1.25. | Key metrics for advanced semiconductor packaging performance |
1.26. | Overview of trends in HPC chip integration |
1.27. | Advanced semiconductor packaging path for HPC |
1.28. | DDR memory dominates CPUs whereas HBM is key to GPU performance |
1.29. | Forecast: Yearly Unit Sales and Market Size of High Bandwidth Memory (HBM) |
1.30. | Forecasts: Memory and Storage for Servers |
1.31. | Memory bottlenecks are driving segmentation of memory hierarchy |
1.32. | Data movement through storage tiers in clusters for AI workloads |
1.33. | TRL of storage technologies for HPC Datacenter and Enterprise |
1.34. | TDP Trend: Historic Data and Forecast Data - CPU |
1.35. | Cooling Methods Overview for High Performance Computers |
1.36. | Different Cooling at the Chip Level
1.37. | Yearly Revenue Forecast By Cooling Method: 2022-2035 |
1.38. | Forecasts: HPC Hardware Market Size |
2. | INTRODUCTION |
2.1. | HPC - overview |
2.1.1. | The growing demand for data computing power |
2.1.2. | Integrated circuits explained |
2.1.3. | The need for specialized chips |
2.1.4. | Fundamentals of abundant data computing systems
2.1.5. | Memory bandwidth deficit |
2.1.6. | Four key areas of growth for abundant data computing systems
2.1.7. | Key parameters for transistor device scaling |
2.1.8. | Evolution of transistor device architectures |
2.1.9. | Scaling technology roadmap overview |
2.1.10. | Process nodes |
2.1.11. | Semiconductor foundries and their roadmap |
2.1.12. | Roadmap for advanced nodes |
2.1.13. | The rise of advanced semiconductor packaging and its challenges |
2.2. | Data centers |
2.2.1. | Introduction to Data Centers |
2.2.2. | Data Center Equipment - Top Level Overview |
2.2.3. | Data Center Server Rack and Server Structure |
2.2.4. | Data Center Switch Topology - Three Layer and Spine-Leaf Architecture |
2.2.5. | K-ary Fat Tree Topology |
2.2.6. | TDP Increases Over Time - GPU |
2.2.7. | Server Boards TDP Increases - Moore's Law |
2.3. | What is HPC? |
2.3.1. | High performance computing (HPC) |
2.3.2. | Types of High Performance Computing (HPC) Architecture |
2.3.3. | HPC architectures |
2.3.4. | Supercomputers and mainframes for HPC |
2.3.5. | On-premises, cloud and hybrid HPC infrastructures: shift towards cloud & hybrid |
2.3.6. | Fundamentals of abundant data computing systems
2.3.7. | Development towards exascale supercomputers and beyond
2.3.8. | Distribution of TOP500 supercomputers (June 2024) |
2.3.9. | Limitations of the TOP500 list |
2.4. | Chip technologies for HPC |
2.4.1. | The rise and challenges of LLMs
2.4.2. | What is an AI chip? |
2.4.3. | AI acceleration |
2.4.4. | Why AI acceleration is needed |
2.4.5. | The AI chip landscape - overview |
2.4.6. | State-of-the-art high-end AI and HPC chips |
2.4.7. | Evolution of AI compute system architecture |
2.4.8. | NVIDIA and AMD solutions for next-gen AI |
2.4.9. | The next step forward to improve system bandwidth |
2.5. | Benchmarking HPC |
2.5.1. | HPL |
2.5.2. | HPC benchmarking methods overview |
2.5.3. | Comparison of Energy Efficiency and Performance of Supercomputers |
2.5.4. | HPL-AI |
2.5.5. | MLPerf by MLCommons |
2.5.6. | MLPerf - HPC |
2.5.7. | MLPerf Inference - Datacenter
2.5.8. | Adoption of Accelerators/Co-Processors in Supercomputers is Growing but Far from Universal |
2.5.9. | Growth in Accelerator Adoption within the TOP500 is Highest in the Vendor Segment |
2.6. | HPC Installation Case Studies |
2.6.1. | Exascale supercomputers around the world |
2.6.2. | The HPC manufacturer landscape |
2.6.3. | Lenovo's global reach is unique |
2.6.4. | The MareNostrum 5's dual-purpose architecture |
2.6.5. | Fujitsu pioneered ARM architectures in HPC |
2.6.6. | The Fugaku HPC at Riken |
2.6.7. | Supercomputing in China and the US-China trade war (I): no more contribution to the TOP500 list |
2.6.8. | Supercomputing in China and the US-China trade war (II): onshoring chip supply |
2.6.9. | Tianhe-3/Xingyi: China's most powerful HPC |
2.6.10. | HPE is a major force in the global supercomputing market |
2.6.11. | Overview of the node architecture on the Frontier supercomputer |
2.6.12. | State of the art exascale supercomputers: Aurora, Frontier and El Capitan |
2.6.13. | El Capitan: the top US HPC in November 2024 |
2.6.14. | Overview of players in HPC |
3. | FORECASTS |
3.1. | Forecast Methodology |
3.2. | Forecasts: HPC Server Unit Sales |
3.3. | Forecast: Unit Sales of CPUs and GPUs for HPC |
3.4. | Forecast: Market Size of CPUs and GPUs for HPC |
3.5. | Forecast: Yearly Unit Sales and Market Size of High Bandwidth Memory (HBM) |
3.6. | Forecasts: Memory and Storage for Servers |
3.7. | Forecasts: HPC Hardware Market Size |
3.8. | Yearly Revenue Forecast By Cooling Method: 2022-2035 |
4. | CPUS FOR HPC AND AI |
4.1. | Overview |
4.1.1. | CPUs for HPC and AI Workloads Introduction |
4.1.2. | CPU Introduction |
4.1.3. | Players: CPUs for Server Processors |
4.1.4. | Core Architecture in a CPU |
4.1.5. | The Instruction Cycle |
4.2. | Key Requirements for CPUs in HPC and AI Workloads and Technology Landscape |
4.2.1. | Key CPU Requirements for HPC and AI Workloads (1) |
4.2.2. | Key CPU Requirements for HPC and AI Workloads (2) |
4.2.3. | Intel CPUs found in Server Processors: Current Landscape |
4.2.4. | Intel CPUs found in Server Processors: Technology Overview |
4.2.5. | Intel CPUs get Built-In AI Accelerators with Intel Advanced Matrix Extensions |
4.2.6. | AMD CPUs found in Server Processors: Current Landscape |
4.2.7. | AMD CPUs found in Server Processors: Technology Overview |
4.2.8. | AMD uses 3D V-Cache Technology for 1152MB L3 Cache |
4.2.9. | AVX-512 Vector Extensions for x86-64 Instruction Set |
4.2.10. | IBM Power CPUs found in Server Processors: Current Landscape |
4.2.11. | Arm Holdings Licenses Core Designs with its RISC-Based Instruction Sets
4.2.12. | Arm-based A64FX Fujitsu Processors for Supercomputers
4.2.13. | Chinese Sunway Processors Perform Well Despite Trade Restrictions
4.3. | CPU Market Trends |
4.3.1. | Intel Xeon and AMD EPYC Uptake in TOP500 Supercomputers |
4.3.2. | HPC Market Trends by Company in the TOP500 Supercomputers |
4.3.3. | Processor Comparison: TDP, Price and Core Count (1) |
4.3.4. | Processor Comparison: TDP, Price and Core Count (2) |
4.3.5. | RISC Adoption Considerations and Specialised GPUs
5. | GPUS AND OTHER AI ACCELERATORS |
5.1. | GPUs Background |
5.1.1. | Importance of GPU Software Stacks |
5.1.2. | High Performance GPU Architecture Breakdown |
5.1.3. | CPU and GPU Architecture Comparison |
5.1.4. | Integrated Heterogeneous Systems |
5.1.5. | Highlights from Key GPU Players |
5.2. | GPU Case Studies |
5.2.1. | NVIDIA: CUDA and Tensor Cores
5.2.2. | NVIDIA: Blackwell GPU |
5.2.3. | NVIDIA Rack-Scale Solutions for Datacenters |
5.2.4. | AMD: CDNA 3 Architecture and Compute Units for GPU Compute |
5.2.5. | AMD: AMD Instinct MI300A Data Center APUs are Heterogeneous Systems
5.2.6. | Intel: Intel GPU Max and the Xe-HPC Architecture |
5.2.7. | Intel: Falcon Shores Integrates HPC and AI GPU with AI Accelerator Tech |
5.2.8. | Precision Performance for HPC and AI Workloads |
5.2.9. | Benchmark |
5.3. | GPU Market Insights |
5.3.1. | HPC Accelerator Market Trends by Company in the TOP500 Supercomputers (1)
5.3.2. | HPC Accelerator Market Trends by Company in the TOP500 Supercomputers (2)
5.3.3. | Datacenter GPUs Use Cases and Business Models |
5.4. | Trends and Solutions In GPU Technology |
5.4.1. | Trends in Data Center GPUs |
5.4.2. | Transistor Technologies Advance for High-Performance GPUs |
5.4.3. | Challenges in Transistor Scaling |
5.4.4. | Overcoming the Reticle Limit for Die Scaling |
5.4.5. | Case Study: Cerebras Overcomes Die Scaling using Single-Chip Wafers |
5.4.6. | Tachyum Builds from the Ground Up to Deliver Exceptional Performance
5.4.7. | Tachyum: Barriers to Entry for Startups |
5.5. | Application-Specific Hardware for Data Centers |
5.5.1. | Application-Specific Hardware Provides Solution for Slowing Moore's Law in GPUs for HPC and AI |
5.5.2. | AI Accelerators |
5.5.3. | AI chip types |
5.5.4. | Accelerators in servers |
5.5.5. | Main Players for AI Accelerators |
5.6. | Summary: GPUs and Accelerators |
5.6.1. | GPU and AI Accelerators Summary |
6. | MEMORY (RAM) FOR HPC AND AI |
6.1. | Overview |
6.1.1. | Hierarchy of computing memory |
6.1.2. | Memory bottlenecks for HPC/AI workloads and processor under-utilization |
6.1.3. | Types of DRAM and Comparison of HBM with DDR |
6.1.4. | Memory technology in TOP500 supercomputers processors and accelerators |
6.1.5. | HBM vs DDR for computing - market trend |
6.1.6. | Demand outstrips supply for HBM in 2024
6.2. | High Bandwidth Memory (HBM) |
6.2.1. | High bandwidth memory (HBM) and comparison with other DRAM technologies |
6.2.2. | HBM (High Bandwidth Memory) packaging |
6.2.3. | HBM packaging transition to hybrid bonding |
6.2.4. | Benchmark of HBM performance utilizing µbump and hybrid bonding
6.2.5. | SK Hynix has started volume production of 12-layer HBM3E |
6.2.6. | Micron released 24GB HBM3E for NVIDIA H200 and is sampling 36GB HBM3E |
6.2.7. | Samsung expects production of HBM3E 36GB within 2024 |
6.2.8. | Overview of current HBM stacking technologies by 3 main players |
6.2.9. | Evolution of HBM generations and transition to HBM4 |
6.2.10. | Benchmarking of HBM technologies in the market from key players (1) |
6.2.11. | Benchmarking of HBM technologies in the market from key players (2) |
6.2.12. | Examples of CPUs and accelerators using HBM |
6.2.13. | Intel's CPU Max series for HPC workloads has HBM and optional DDR |
6.2.14. | AMD CDNA 3 APU architecture with unified HBM memory for HPC |
6.2.15. | Three main approaches to package HBM and GPU |
6.2.16. | Drawbacks of High Bandwidth Memory (HBM) |
6.3. | DDR Memory |
6.3.1. | Developments in double data rate (DDR) memory |
6.3.2. | DDR5 memory in AMD's 4th Gen EPYC processors for HPC workloads |
6.3.3. | DDR5 MRDIMM increases capacity and bandwidth for high CPU core counts |
6.3.4. | NVIDIA's Grace CPU uses LPDDR5X memory to lower power consumption |
6.3.5. | GDDR7 announced by major players targeting HPC and AI applications |
6.3.6. | Comparison of GDDR6 and GDDR7 modules |
6.4. | Memory Expansion |
6.4.1. | Samsung's CMM-D memory expansion for AI and datacenter server applications |
6.4.2. | Micron's CXL memory expansion modules for storage tiering in datacenters |
6.5. | Emerging memory technologies |
6.5.1. | Shortfalls of DRAM are driving research into next-generation memory solutions
6.5.2. | Phase change memory and phase change materials for 3DXP |
6.5.3. | Selector-only memory (SOM) for storage class memory solutions
6.5.4. | Magnetic Random Access Memory (MRAM) can boost energy-efficiency |
6.5.5. | Resistive Random Access Memory (ReRAM) |
6.6. | Market and Outlook |
6.6.1. | DDR memory dominates CPUs whereas HBM is key to GPU performance |
6.6.2. | Memory bottlenecks are driving segmentation of memory hierarchy |
6.6.3. | Memory and Storage: market status and outlook |
7. | STORAGE FOR HPC |
7.1. | Overview |
7.1.1. | Flash storage is the leading storage technology for HPC and AI applications |
7.1.2. | HPC and AI require large-scale and high-performance data storage |
7.1.3. | Storage requirements vary depending on the AI workload
7.1.4. | Data movement through storage tiers in clusters for AI workloads |
7.2. | NAND Flash SSD Technologies for Datacenter and Enterprise |
7.2.1. | NAND Flash memory uses floating gates or charge traps to store data |
7.2.2. | Data center and enterprise SSD form factors transitioning towards EDSFF |
7.2.3. | SK Hynix - NAND technology development |
7.2.4. | SK Hynix: overcoming stacking limitations to increase capacity using 4D2.0 |
7.2.5. | Examples of SK Hynix NAND Flash storage chips for AI and data centers |
7.2.6. | Solidigm (SK Hynix subsidiary) offers SSDs previously manufactured by Intel |
7.2.7. | Micron's 9550 SSDs are designed for AI-critical workloads with PCIe Gen5 |
7.2.8. | Micron has a range of SSDs for applications in datacenters and AI |
7.2.9. | Example of SSD configurations and solutions for AI and HPC workloads |
7.2.10. | KIOXIA uses BiCS 3D FLASH™ Technology to increase storage density
7.2.11. | KIOXIA offers a range of datacenter and enterprise SSD solutions |
7.2.12. | Samsung's 9th generation V-NAND technology is targeting AI applications |
7.2.13. | Intel's latency-sensitive Optane SSD solutions were designed for server caching
7.3. | Technology Comparison and Trends |
7.3.1. | Increasing SSD capacity through emerging lower cost QLC NAND |
7.3.2. | QLC affords higher capacity at a lower cost per bit but with performance deficits |
7.3.3. | Step change in sequential read bandwidth with each generation of PCIe |
7.3.4. | Evolution of PCIe generations in the SSD market |
7.3.5. | SSD performance for different datacenter and enterprise applications
7.3.6. | SSDs for storage class memory bridging gap to volatile memory |
7.3.7. | Vertical stacking is unlikely to be the only factor in increasing storage density |
7.3.8. | Energy efficiency of SSD storage technologies for datacenter and enterprise |
7.3.9. | Examples of SSDs for AI workloads |
7.4. | Storage servers |
7.4.1. | HPE's Cray ClusterStor E1000 for HPC applications |
7.4.2. | Storage capacity of Cray ClusterStor E1000 is scalable according to configuration |
7.4.3. | NetApp offers all-flash storage systems for datacenters and enterprises |
7.4.4. | Examples of other all-flash scalable storage systems
7.5. | Players and Outlook |
7.5.1. | Examples of players in HPC storage |
7.5.2. | TRL of storage technologies for HPC Datacenter and Enterprise |
8. | ADVANCED SEMICONDUCTOR PACKAGING TO ENABLE HPC |
8.1. | Overview of Advanced Semiconductor Packaging Technologies |
8.1.1. | Progression from 1D to 3D semiconductor packaging |
8.1.2. | Key metrics for advanced semiconductor packaging performance |
8.1.3. | Four key factors of advanced semiconductor packaging |
8.1.4. | Overview of interconnection techniques in semiconductor packaging
8.1.5. | Overview of 2.5D packaging structure |
8.1.6. | 2.5D advanced semiconductor packaging technology portfolio |
8.1.7. | Evolution of bumping technologies |
8.1.8. | 3D advanced semiconductor packaging technology portfolio |
8.2. | How can advanced semiconductor packaging address these challenges? |
8.2.1. | Why traditional Moore's Law scaling cannot meet the growing needs for HPC |
8.2.2. | Key Factors affecting HPC datacenter performance |
8.2.3. | Advanced semiconductor packaging path for HPC |
8.2.4. | HPC chip integration trends - overview
8.2.5. | HPC chip integration trends - explanation
8.2.6. | Enhancing Energy Efficiency Through Hybrid Bonding Pitch Scaling |
8.2.7. | What is critical in packaging for AI & HPC |
8.2.8. | Future system-in-package architecture |
8.3. | Case studies |
8.3.1. | AMD Instinct MI300 |
8.3.2. | AMD Instinct MI300 is being used for exascale supercomputing |
8.3.3. | 3D chip stacking using hybrid bonding - AMD |
8.3.4. | Advanced semiconductor packaging for Intel's latest Xeon server CPU (1)
8.3.5. | Xilinx FPGA packaging |
8.3.6. | Future packaging trends for chiplet server CPUs
8.3.7. | High-end commercial chips based on advanced semiconductor packaging technology (1) |
8.3.8. | High-end commercial chips based on advanced semiconductor packaging technology (2) |
9. | NETWORKING AND INTERCONNECT IN HPC |
9.1. | Overview |
9.1.1. | Introduction: networks in HPC |
9.1.2. | Switches: Key components in a modern data center |
9.1.3. | Why does HPC require high-performance transceivers? |
9.1.4. | Current AI system architecture |
9.1.5. | L1 backside compute architecture with Cu systems |
9.1.6. | Number of Cu wires in current AI system interconnects
9.1.7. | Limitations in current copper systems in AI |
9.1.8. | What is an Optical Engine (OE)?
9.1.9. | Integrated Photonic Transceivers |
9.1.10. | NVIDIA's connectivity choices: Copper vs. optical for high-bandwidth systems
9.1.11. | Pluggable optics vs. CPO
9.1.12. | L1 backside compute architecture with optical interconnect: Co-Packaged Optics (CPO) |
9.1.13. | Key CPO applications: Network switch and computing optical I/O |
9.1.14. | Design decisions for CPO compared to Pluggables |
9.1.15. | Power efficiency comparison: CPO vs. Pluggable optics vs. copper interconnects |
9.1.16. | Latency benchmark of 60 cm data transmission technologies
9.1.17. | Future AI architecture predicted by IDTechEx |
9.2. | Optical on-device interconnects |
9.2.1. | On-device interconnects |
9.2.2. | Overcoming the von Neumann bottleneck |
9.2.3. | Electrical Interconnects Case Study: NVIDIA Grace Hopper for AI |
9.2.4. | Why improve on-device interconnects? |
9.2.5. | Improved Interconnects for Switches |
9.2.6. | Case Study: Ayar Labs TeraPHY
9.2.7. | Case Study: Lightmatter's 'Passage' PIC-based Interconnect |
9.3. | Interconnect standards in HPC |
9.3.1. | The role of interconnect families in HPC |
9.3.2. | Lanes in interconnect |
9.3.3. | Comparing roadmaps |
9.3.4. | NVIDIA and the resurgence in InfiniBand interconnects |
9.3.5. | InfiniBand is more dominant when considering new-build HPCs |
9.3.6. | Intel, Cornelis Networks and Omni-Path |
9.3.7. | Cornelis Networks and the new Omni-Path CN5000 standard |
9.3.8. | Gigabit Ethernet: the open standard for HPC interconnect
9.3.9. | HPE Cray's Slingshot interconnect |
9.4. | Summary: Networking in HPC |
9.4.1. | Summary: networking for HPC |
10. | THERMAL MANAGEMENT FOR HPC |
10.1. | Increasing TDP Drives More Efficient Thermal Management |
10.2. | Historic TDP Data - GPU
10.3. | TDP Trend: Historic Data and Forecast Data - CPU |
10.4. | Data Center Cooling Classification |
10.5. | Cooling Methods Overview |
10.6. | Different Cooling at the Chip Level
10.7. | Cooling Technology Comparison |
10.8. | Cold Plate/Direct-to-Chip Cooling - Standalone Cold Plate
10.9. | Liquid Cooling Cold Plates |
10.10. | Single- and Two-Phase Cold Plate: Challenges
10.11. | Single-Phase and Two-Phase Immersion - Overview (1) |
10.12. | Single- and Two-Phase Immersion: Challenges
10.13. | Coolant Comparison |
10.14. | Coolant Comparison - PFAS Regulations |
10.15. | Coolant Distribution Units (CDU) |
10.16. | Heat Transfer - Thermal Interface Materials (TIMs) (1) |
10.17. | Heat Transfer - Thermal Interface Materials (TIMs) (2) |
11. | COMPANY PROFILES |
11.1. | Accelsius — Two-Phase Direct-to-Chip Cooling |
11.2. | ACCRETECH (Grinding Tool) |
11.3. | AEPONYX |
11.4. | Amazon AWS Data Center |
11.5. | Amkor — Advanced Semiconductor Packaging |
11.6. | Arieca (2024) |
11.7. | Arieca (2020) |
11.8. | ASE — Advanced Semiconductor Packaging |
11.9. | Asperitas Immersed Computing |
11.10. | Calyos: Data Center Applications |
11.11. | Camtek |
11.12. | CEA-Leti (Advanced Semiconductor Packaging) |
11.13. | Ciena |
11.14. | Coherent: Photonic Integrated Circuit-Based Transceivers |
11.15. | Comet Yxlon |
11.16. | DELO: Semiconductor Packaging |
11.17. | EFFECT Photonics |
11.18. | Engineered Fluids |
11.19. | EU Chips Act — SemiWise |
11.20. | EVG (3D Hybrid Bonding Tool)
11.21. | GlobalFoundries |
11.22. | Green Revolution Cooling (GRC) |
11.23. | Henkel: microTIM and data centers |
11.24. | JCET Group |
11.25. | JSR Corporation |
11.26. | Kioxia |
11.27. | Lightelligence |
11.28. | Lightmatter |
11.29. | LioniX |
11.30. | LIPAC |
11.31. | LiquidCool Solutions — Chassis-Based Immersion Cooling |
11.32. | LiSAT |
11.33. | LPKF |
11.34. | Lumiphase |
11.35. | Micron |
11.36. | Mitsui Mining & Smelting (Advanced Semiconductor Packaging) |
11.37. | Nano-Join |
11.38. | NanoWired |
11.39. | NeoFan |
11.40. | Neurok Thermocon Inc |
11.41. | Parker Lord: Dispensable Gap Fillers |
11.42. | Resonac (RDL Insulation Layer) |
11.43. | Resonac Holdings |
11.44. | Samsung Electronics (Memory) |
11.45. | SK Hynix |
11.46. | Solidigm |
11.47. | Sumitomo Chemical Co., Ltd.
11.48. | Taybo (Shanghai) Environmental Technology Co., Ltd.
11.49. | TDK |
11.50. | TOK |
11.51. | TSMC (Advanced Semiconductor Packaging) |
11.52. | Tyson |
11.53. | Vertiv Holdings - Data Center Liquid Cooling |
11.54. | Vitron (Through-Glass Via Manufacturing) — An LPKF Trademark
11.55. | ZutaCore |