# Cerebras Systems Inc. | S-1 | Filed 2026-04-17

Table of Contents

  
 As filed with the U.S. Securities and Exchange Commission on April 17, 2026.
 Registration No. 333-
  


 UNITED STATES
 SECURITIES AND EXCHANGE COMMISSION
 Washington, D.C. 20549
  


 FORM S-1
 REGISTRATION STATEMENT
 UNDER
 THE SECURITIES ACT OF 1933
  


 Cerebras Systems Inc.
 (Exact name of registrant as specified in its charter)
  

| Delaware | 3674 | 81-2256092 |
| --- | --- | --- |
| (State or other jurisdiction of incorporation or organization) | (Primary Standard Industrial Classification Code Number) | (I.R.S. Employer Identification Number) |

 1237 E. Arques Avenue
 Sunnyvale, California 94085
 (650) 933-4980
 (Address, including zip code, and telephone number, including area code, of registrant’s principal executive offices)
  


 Andrew D. Feldman
 Chief Executive Officer and President
 1237 E. Arques Avenue
 Sunnyvale, California 94085
 (650) 933-4980
 (Name, address, including zip code, and telephone number, including area code, of agent for service)
  


 Copies to:
  

> Tad J. Freese, Sarah B. Axtell, Zuzanna V. Gruca, Latham & Watkins LLP, 140 Scott Drive, Menlo Park, California 94025, (650) 328-4600
>
> Shirley X. Li, Christopher Ing, Cerebras Systems Inc., 1237 E. Arques Avenue, Sunnyvale, California 94085, (650) 933-4980
>
> Alan F. Denenberg, Elizabeth W. LeBow, Davis Polk & Wardwell LLP, 900 Middlefield Road, Redwood City, California 94063, (650) 752-2000

 Approximate date of commencement of proposed sale to the public: As soon as practicable after the effective date of this registration statement.
 If any of the securities being registered on this Form are to be offered on a delayed or continuous basis pursuant to Rule 415 under the Securities Act of 1933, check the following
 box. ☐
 If this Form is filed to register additional securities for an offering pursuant to Rule 462(b) under the Securities Act, please check the following box and list the Securities Act
 registration statement number of the earlier effective registration statement for the same offering. ☐
 If this Form is a post-effective amendment filed pursuant to Rule 462(c) under the Securities Act, check the following box and list the Securities Act registration statement number
 of the earlier effective registration statement for the same offering. ☐
 If this Form is a post-effective amendment filed pursuant to Rule 462(d) under the Securities Act, check the following box and list the Securities Act registration statement number
 of the earlier effective registration statement for the same offering. ☐
 Indicate by check mark whether the registrant is a large accelerated filer, an accelerated filer, a non-accelerated filer, a smaller reporting company or an emerging growth company.
 See the definitions of “large accelerated filer,” “accelerated filer,” “smaller reporting company” and “emerging growth company” in Rule 12b-2 of the Exchange Act.
  

| Large accelerated filer | ☐ | Accelerated filer | ☐ |
| --- | --- | --- | --- |
| Non-accelerated filer | ☒ | Smaller reporting company | ☐ |
|  |  | Emerging growth company | ☒ |

 If an emerging growth company, indicate by check mark if the registrant has elected not to use the extended transition period for complying with any new or revised financial
 accounting standards provided pursuant to Section 7(a)(2)(B) of the Securities Act. ☐
  


 The Registrant hereby amends this registration statement on such date or dates as may be necessary to delay its effective date until the registrant shall file a further
 amendment which specifically states that this registration statement shall thereafter become effective in accordance with Section 8(a) of the Securities Act of 1933, or until the
 registration statement shall become effective on such date as the Securities and Exchange Commission, acting pursuant to said Section 8(a), may determine.
  


  


  
 The information in this preliminary prospectus is not complete and may be changed. We may not sell these securities until the registration statement filed with the
 Securities and Exchange Commission is effective. This preliminary prospectus is not an offer to sell these securities and we are not soliciting offers to buy these securities
 in any jurisdiction where the offer or sale is not permitted.
 PRELIMINARY PROSPECTUS (Subject to Completion)
 Issued                , 2026
                Shares
    ![cerebraslogoa.jpg](assets/cerebraslogoa.jpg)


 Cerebras Systems Inc.
 Class A Common Stock
  


 Cerebras Systems Inc. is offering                 shares of its Class A common stock. This is our initial public offering and no public market currently exists for
 shares of our Class A common stock. We anticipate that the initial public offering price per share of our Class A common stock will be between $           and
 $           .
  


 We have applied to list our Class A common stock on the Nasdaq Global Select Market under the symbol “CBRS,” and this offering is contingent upon the
 listing of our Class A common stock on the Nasdaq Global Select Market.
  


 Following completion of this offering, we will have three classes of authorized common stock: Class A common stock, Class B common stock, and Class N
 common stock. The rights of the holders of Class A common stock, Class B common stock, and Class N common stock are identical, except with respect to
 voting and conversion rights. Each share of Class A common stock is entitled to one vote. Each share of Class B common stock is entitled to 20 votes and is
 convertible at any time into one share of Class A common stock. Each share of Class N common stock is non-voting and is convertible into one share of
 Class A common stock. Outstanding shares of Class B common stock will represent approximately           % of the voting power of our outstanding capital stock
 immediately following this offering. See the section titled “Description of Capital Stock” for additional information.
 We are an “emerging growth company” as defined under the U.S. federal securities laws and, as such, may elect to comply with certain reduced public
 company reporting requirements for this and future filings.
  


 Investing in our Class A common stock involves risks. See the section titled “Risk Factors” beginning on page 22 to read about factors you should consider
 before deciding to invest in our Class A common stock.
  


 PRICE $               A SHARE
  


  

|  | Price to Public | Underwriting Discounts and Commissions(1) | Proceeds to Cerebras |
| --- | --- | --- | --- |
| Per Share | $ | $ | $ |
| Total | $ | $ | $ |

 _______________
 (1) See the section titled “Underwriters” for a description of the compensation payable to the underwriters.
 At our request, the underwriters have reserved up to           % of the shares of Class A common stock offered by this prospectus for sale at the initial public
 offering price through a directed share program to certain persons identified by our management and certain long-tenured employees, which may include
 parties with whom we have a business relationship and friends and family of management and such employees. See the section titled “Underwriters—Directed
 Share Program” for additional information.
 We will grant the underwriters the right to purchase up to an additional                 shares of our Class A common stock from us to cover over-allotments, if any,
 at the initial public offering price less the underwriting discount.
 The Securities and Exchange Commission and state securities regulators have not approved or disapproved these securities or determined if this prospectus is
 truthful or complete. Any representation to the contrary is a criminal offense.
 The underwriters expect to deliver the shares against payment on                , 2026.
  


  

| MORGAN STANLEY | CITIGROUP | BARCLAYS | UBS INVESTMENT BANK |
| --- | --- | --- | --- |

  

| MIZUHO | TD COWEN |
| --- | --- |

  

| NEEDHAM & COMPANY | CRAIG-HALLUM | WEDBUSH SECURITIES | ROSENBLATT | ACADEMY SECURITIES |
| --- | --- | --- | --- | --- |

 Prospectus dated                , 2026

  

  

  
![coverart1aa.jpg](assets/coverart1aa.jpg)

![coverart2ea.jpg](assets/coverart2ea.jpg)

![cover3ba.jpg](assets/cover3ba.jpg)

![coverart4b.jpg](assets/coverart4b.jpg)

![coverart5a.jpg](assets/coverart5a.jpg)


  

  

## Table of Contents

|  | Page |
| --- | --- |
| Founders Letter | iii |
| Glossary of Certain Terms | vii |
| Prospectus Summary | 1 |
| Risk Factors | 22 |
| Special Note Regarding Forward-Looking Statements | 78 |
| Market and Industry Data | 80 |
| Use of Proceeds | 81 |
| Dividend Policy | 82 |
| Capitalization | 83 |
| Dilution | 87 |
| Management’s Discussion and Analysis of Financial Condition and Results of Operations | 90 |
| Business | 111 |
| Management | 146 |
| Executive and Director Compensation | 154 |
| Certain Relationships and Related Party Transactions | 168 |
| Principal Stockholders | 173 |
| Description of Capital Stock | 178 |
| Shares Eligible for Future Sale | 187 |
| Material U.S. Federal Income Tax Consequences to Non-U.S. Holders | 194 |
| Underwriters | 198 |
| Legal Matters | 211 |
| Change in Independent Accountant | 211 |
| Experts | 211 |
| Where You Can Find Additional Information | 212 |
| Index to Consolidated Financial Statements | F-1 |

  


 Through and including                , 2026 (the 25th day after the date of this prospectus), all dealers that
 buy, sell, or trade shares of our Class A common stock, whether or not participating in this offering, may be
 required to deliver a prospectus. This delivery requirement is in addition to the obligation of dealers to
 deliver a prospectus when acting as underwriters and with respect to their unsold allotments or subscriptions.
 As used in this prospectus, unless the context otherwise requires, references to “Cerebras Systems,” “Cerebras,”
 the “company,” “we,” “us,” “our,” and similar terms refer to Cerebras Systems Inc. and, where appropriate, its
 subsidiaries, taken as a whole.
 “Cerebras,” “Cerebras Systems,” the Cerebras logos, and other trade names, trademarks, or service marks of
 Cerebras appearing in this prospectus are the property of Cerebras Systems Inc. Other trade names, trademarks, or
 service marks appearing in this prospectus are the property of their respective holders. Solely for convenience, trade
 names, trademarks, and service marks referred to in this prospectus appear without the ®, ™, and ℠ symbols, but
 those references are not intended to indicate, in any way, that we will not assert, to the fullest extent under applicable
 law, our rights or that the applicable owner will not assert its rights, to these trade names, trademarks, and service
 marks.
 Numerical figures included in this prospectus have been subject to rounding adjustments. Accordingly,
 numerical figures shown as totals in various tables may not be arithmetic aggregations of the figures that precede
 them.
 References to www.cerebras.ai in this prospectus are inactive textual references only, and the information
 contained on, or that can be accessed through, our website does not constitute part of this prospectus.
 We have not, and the underwriters have not, authorized anyone to provide you any information or to make any
 representations other than those contained in this prospectus or in any free writing prospectus prepared by or on
 behalf of us or to which we have referred you. Neither we nor the underwriters take responsibility for, or provide
 any assurance as to the reliability of, any other information others may give you. This prospectus is an offer to sell
 only the shares offered hereby, and only under circumstances and in jurisdictions where it is lawful to do so. We are
 not, and the underwriters are not, making an offer to sell these securities in any jurisdiction where the offer or sale is
 not permitted. The information contained in this prospectus is accurate only as of the date of this prospectus,


  

  regardless of the time of delivery of this prospectus or any sale of the shares of our Class A common stock. Our
 business, financial condition, results of operations, and prospects may have changed since that date.
 For investors outside the United States: We have not, and the underwriters have not, done anything that would
 permit this offering or the possession or distribution of this prospectus or any free writing prospectus in connection
 with this offering in any jurisdiction where action for that purpose is required, other than in the United States.
 Persons outside the United States who come into possession of this prospectus must inform themselves about, and
 observe any restrictions relating to, the offering of the shares of our Class A common stock and the distribution of
 this prospectus outside the United States. See the section titled “Underwriters” for additional information.


  

## Founders Letter

  
 In 2015, we saw AI on the horizon and knew it would consume vast amounts of compute.
 AI was a new and unusual workload. And, for computer architects, new workloads create opportunities by forcing
 tectonic market shifts.
 The founders made two fundamental bets.
 The first bet: that existing general-purpose processors would not be sufficient, and that what has always been true
 throughout the history of compute would also be true for AI – that transformative compute workloads require
 purpose-built silicon. This is what PCs did for x86, graphics did for GPUs, and mobile did for ARM.
 The second bet: that modifying existing compute architectures would not realize AI’s potential. We would need to
 build a new computer architecture from first principles, optimized in every way for AI.
 Both bets were contrarian. And both turned out to be right.
 Bigger is Better, Many Times Better
 At a computational level, graphics is a parallelism-bound problem and mobile is a power-bound problem. But AI is a
 communication-bound problem.
 The faster compute communicates with memory, and the faster compute communicates with other compute, the
 faster and smarter the AI, and the better the user experience.
 The enemy of speed is communication latency. And since communication is thousands of times faster on-chip than
 across chips, the best way to reduce latency is to keep communication on-chip.
 Our answer: build the largest commercial chip in the history of the computer industry. We used the entire wafer for
 one chip: a technique called wafer-scale integration.
 Wafer-scale integration allowed us to bring together quantities of compute and memory never before assembled on a
 single commercial chip and deliver AI at previously unimaginable speeds. We could avoid the latency and the
 power-draw induced by the traditional approach of chopping up the AI problem and spreading it across lots of little
 chips.
 It was a logical approach in principle, but it was also a daunting challenge in practice. Wafer-scale integration was
 one of the holy grails of computer architecture. Every previous effort to commercialize it had failed.
 We had to prove it was possible: to design it, fabricate it, yield it, power it, and run production workloads on it. This
 was a complex, multi-dimensional technical challenge: one that cut across chip design, system design, high
 performance software, and AI algorithms.
 The Grind
 When we set out to do it, nobody knew how to make it work.
 Nobody knew how to yield a chip 58 times larger than the leading GPU. Nobody knew how to deliver power to a
 chip the size of a dinner plate without melting the motherboard. Nobody knew how to package such a big chip
 without cracking it. Nobody knew how to cool a chip of this size, with air or water, without the coolant getting warm
 before it reached the other side. Nobody knew, and we didn’t know.


  Many pointed out challenges they said were impossible to solve. But ironically, those weren’t the hardest challenges
 we had to solve. The hardest challenges were ones nobody had ever seen, because no one had ever progressed that
 far.
 Fundamental invention is profoundly difficult. We failed for a long time.
 But we believe in fearless engineering combined with relentless drive. It’s not just our culture. It is who we are. We
 attacked the unknown with a disciplined methodology: first principles analysis, experimentation, failure, failure
 analysis. Rinse and repeat.
 We believe that enduring moats are built by solving hard technical problems with novel solutions. Wafer-scale
 integration is one such moat that took a decade of invention and engineering to achieve.
 The Wrong Time
 We delivered our first systems in 2020 and the second generation in 2022. We had built something extraordinary,
 but the market wasn’t ready. A few visionary customers in supercompute and life sciences saw the potential, but to
 most of the world, the benefits were not immediately apparent.
 AI was nascent. It was raw and unproven. Training was time-consuming, a black art, and the domain of a select few.
 GPUs were not yet the bottleneck. And our solutions struggled to find a home.
 Meeting the Moment
 Change arrived with ChatGPT. Suddenly everyone was talking about AI, and entrepreneurs were extending the art
 of the possible. By early 2025, AI was smart enough to be valuable, and people were using AI everywhere.
 Inference usage exploded.
 Suddenly, everyone remembered the lesson from Google search: speed produces more satisfied and more frequent
 users, while even tiny delays significantly reduce user satisfaction, search frequency, and search revenue.
 The market had shifted. Everyone realized that fast AI is more useful than slow AI. By the end of 2025, it was clear:
 fast inference was powering the highest-value workloads. Fast inference was making engineers more productive
 because fast AI coding agents could write code, edit it, test it, and get a product to market more quickly. Fast AI
 made lawyers, analysts, bankers, doctors, and researchers more productive than their counterparts waiting on slow
 insights.
 And so the flywheel started: as fast AI produced better answers in less time, users would do more with it, stay
 longer, and run higher-value workloads. Cerebras’s original vision to accelerate AI with custom silicon, systems,
 and software finally met a market that urgently needed—and demanded—the fast AI our innovations make possible.
 Our speed separates us from other AI infrastructure players.
 For many workloads, Cerebras is up to 15 times faster than leading GPU-based solutions as benchmarked on leading
 open-source models. In some more exotic workloads, we have been more than 1,000 times faster. When customers
 experienced that speed for themselves, they knew they had to have it; we won new customers, and existing
 customers reordered, then reordered again.
 Smarter models combined with fast inference make AI more productive. And since tokens are how AI converts
 compute into intelligence, token consumption is growing exponentially. And because Cerebras generates tokens
 faster, we believe we are extraordinarily well-positioned to win in this market.


  Selected by the Leaders
 In January 2026, we announced a multi-year deal with OpenAI valued at more than $20 billion. OpenAI
 has agreed to deploy 750 megawatts of Cerebras’s high-speed AI compute, and OpenAI and Cerebras have agreed to
 co-design future models for future Cerebras hardware.
 In March 2026, we started a multi-year partnership with AWS to bring fast inference to an even bigger scale
 through global distribution. This is planned to give every startup, AI native, and enterprise company easy access to
 Cerebras’s blisteringly fast inference.
 We firmly believe that once you go fast, you can never go back. How much would we need to pay you to go back to
 slow internet?
 The explosion in AI usage profoundly changed the market. Cerebras was ready: meeting the changing needs of our
 customers and quickly adapting our business model. We made it easy for our customers to consume AI compute. We
 were among the first semiconductor companies to have a cloud business. We deliver hardware on-premises to
 customers concerned about data security and sovereignty. And we reach other customers through the cloud – both
 the Cerebras cloud and soon the AWS cloud. This allows us to reach customers who want to rent compute by the
 month or year, or to pay by the token, and provides Cerebras with an attractive mix of recurring revenue as well as lumpier
 hardware revenue.
 The Future
 As we look forward, we will continue to engage in fearless engineering aimed at solving the hardest technical
 problems – the ones that create fundamental differentiation, customer value, and durable moats.
 We believe that shareholder value comes from doing the things customers want, but others cannot do. We will focus
 relentlessly on these items, make bold decisions, and weigh tradeoffs in favor of long-term value creation.
 We’re proud of our accomplishments, and we have inventions underway that build on and extend our advantages.
 The fundamental building blocks in computer architecture are calculation (cores), the storage of results (memory),
 and the movement of results to where they are needed (communication). Wafer-scale integration has provided a
 platform for more cores than any previous commercialized processor. It allows us to use memory in ways that are
 foreclosed to traditionally sized processors. Our inventions are blurring the lines that have traditionally forced
 tradeoffs between memory capacity and speed. And the massive size of our processor serves as a foundation for
 pioneering communication techniques that are only available to wafer-scale solutions.
 Smarter models and faster inference are transforming entire sectors of the economy. The way we write software has
 changed forever. But we believe we have yet to fully contemplate the growing landscape for autonomous vehicles,
 robotics, and other latency-sensitive applications. These applications will continue to push fast inference demand,
 present novel challenges, and take the industry in new directions.
 The AI market is moving extraordinarily quickly, and we are well-positioned to anticipate its twists and turns.
 Our insights are now informed by a decade of pioneering invention, unique expertise in technologies that underpin
 fast AI, learning from our customers, and deploying some of the largest AI clusters in the world. Our partnerships
 provide not only visibility into the future, but also a chance to co-design hardware with those creating it.
 We invite you to join us on this extraordinary journey through a technological revolution more profound than any
 that has come before.
 Nothing excites us more than the future of AI.


  We are grateful to our families for their patience, our team for their passion and hard work, our investors for their
 support and encouragement, and our customers and partners for their trust. To all those who have been with us
 through the first chapter in Cerebras’s life, we say thank you.
 And it is with great pride we set off on the next chapter.
 Andrew, Gary, Sean, Michael, and JP


  

## Glossary of Certain Terms

  
 The following are abbreviations, acronyms, and definitions of certain terms used in this prospectus:

- “AI” stands for artificial intelligence. AI includes GenAI, machine learning, and other artificial intelligence tools, systems, products, and related technologies.
- “API” stands for Application Programming Interface. An API is a set of rules, protocols, and tools that allow different software applications to communicate and interact with each other.
- “Chassis” means the metal frame that supports and houses the components of an electronic device, including the circuits that connect the components.
- “CPU” stands for Central Processing Unit. A CPU is the brain of a computer, responsible for executing instructions and carrying out computations. It is a complex IC that fetches, decodes, and executes instructions, typically from main memory under the control of software programs.
- “Customers” refers to our end customers. When the context requires, we may use “end customers,” which include hyperscalers, foundation model labs, AI-native and digital-native businesses, Fortune 500 companies, and Sovereign AI initiatives. When used in our audited consolidated financial statements included elsewhere in this prospectus, “customers” means parties we directly invoice for products or services.
- “GenAI” stands for generative AI. GenAI is a type of AI technology that can produce various types of content, including text, imagery, audio, and synthetic data.
- “GPU” stands for Graphics Processing Unit. A GPU is a specialized IC with a high degree of parallelism used to accelerate the rendering of complex graphics onto a screen. Due to their ability to perform numerous computations simultaneously, GPUs outperform CPUs on certain tasks and are used for scientific computing and accelerating AI workloads, such as training and inference workloads of large language models.
- “HBM” stands for High Bandwidth Memory. HBM is a type of computer memory designed to provide high bandwidth and low latency for GPUs, other AI accelerators, and CPUs. HBM is significantly faster and more expensive than traditional DRAM memory and is typically integrated within the IC package.
- “Hyperscalers” means large technology companies that offer highly scalable cloud computing services utilizing extensive data centers. They offer a wide range of dynamically provisioned services, including computing infrastructure, software platforms, and, increasingly, AI model training and inferencing. These services are available on an as-needed basis, managed and scaled via software by the users.
- “IC” stands for Integrated Circuit. An IC is a miniaturized electronic circuit that combines multiple transistor components and other elements into a single small package. ICs are the fundamental building blocks of modern electronics, and they are used in a wide variety of applications, including computers, servers, networking equipment, smartphones, automobiles, and medical devices.
- “Inference” means the process of using a trained machine learning model to make predictions or decisions based on new data. It involves applying the patterns and knowledge the model learned during training to analyze and interpret new, unseen inputs.
- “IT” stands for information technology.
- “LLM” stands for Large Language Model. LLMs are a class of artificial intelligence models that are trained on vast amounts of text data to understand, interpret, and generate human-like language.
- “Node,” in the context of chip manufacturing, is used as shorthand for “process node,” which refers to specific semiconductor manufacturing processes corresponding to different circuit generations and architectures, for example, 14 nanometer and 5 nanometer nodes.
- “Rack” means an open-frame cabinet of standard dimensions used to organize and house servers, networking equipment, power supplies, and other IT hardware. A data center typically houses thousands of racks interconnected by networking switches, typically using the Ethernet protocol.
- “Sovereign AI” refers to AI systems that are developed, controlled, and managed by a particular nation or established in furtherance of such nation’s public interests.
- “SRAM” stands for Static Random-Access Memory. SRAM is a type of memory that stores data within transistors so long as power is being supplied. Compared to DRAM (Dynamic Random-Access Memory), another common type of RAM used in computers, SRAM is faster and consumes less power during active use. However, it is more expensive and takes up more space than DRAM due to its complex architecture. SRAM is often used on-chip in processors for cache memory because of its speed and efficiency, providing quick access to frequently used data.
- “Tape-out” is the final phase of the chip design process for integrated circuits, where the completed design is released to manufacturing.
- “Training” refers to the process of teaching an artificial intelligence model to make accurate predictions or decisions by feeding it large amounts of data and adjusting its internal parameters based on identified patterns. During training, the AI model uses algorithms to learn from the input data, iteratively refining its accuracy by adapting its behavior to minimize errors.
- “Wafer” means a thin slice of a semiconductor material, typically made of silicon, upon which integrated circuits are fabricated. Wafers serve as the foundation for the production of electronic components, including microchips and microprocessors.

  