
Prospectus Summary


This summary highlights selected information contained elsewhere in this prospectus. This summary does not contain all of the information that you should consider before deciding to invest in our Class A common stock. You should read this entire prospectus carefully, including the sections titled “Risk Factors,” “Special Note Regarding Forward-Looking Statements,” “Management’s Discussion and Analysis of Financial Condition and Results of Operations,” and “Business,” and our consolidated financial statements and related notes included elsewhere in this prospectus before making an investment decision.

Overview

We are building the fastest AI infrastructure in the world. In AI, speed is critical to win. Speed improves user engagement, expands product capabilities, can lower operating costs, and opens new markets. It shortens iteration cycles for engineers, researchers, and professionals across industries, allowing them to be more productive. Speed unlocks new applications and new industries.

In technology, “speed unlocking value” is a pattern that has repeated itself over the past 30 years. Faster solutions are used more often and for more demanding tasks. For example, the speed of broadband transformed the internet from static pages into real-time applications, enabling new products and industries. Similarly, in search, Google showed that even short delays in delivering answers significantly reduced usage and engagement.

AI repeats this pattern. As AI has moved from novelty to necessity, AI work has grown more demanding, and speed has become a bottleneck. Faster AI does more work in less time, providing better answers sooner.

Our solutions are built for speed. Cerebras Inference delivers answers up to 15 times faster than leading GPU-based solutions as benchmarked on leading open-source models. Similarly, many customers have achieved more than 10 times faster training time-to-solution compared to leading GPU systems of the same generation.
These performance breakthroughs are the result of our core innovation: the world’s first and only commercialized wafer-scale processor. Called the Wafer-Scale Engine (“WSE”), our processor is 58 times larger than NVIDIA’s B200 chip and has 2,625 times more memory bandwidth than NVIDIA’s B200 package, which contains two individual chips. To build the WSE, we solved the 75-year-old compute industry problem of wafer-scale integration to produce, yield, power, and cool a chip of this size. This size is what enables our incredible AI speeds. By bringing massive compute and memory onto a single piece of silicon and integrating it into a purpose-built system and software stack, we deliver exceptional AI speed for customers on premises and via the cloud.

Our strategic partners and customers include hyperscalers, foundation model labs, AI-native and digital-native businesses, enterprises, and Sovereign AI initiatives. OpenAI, the world’s leading foundation model lab, selected us to be its fast inference solution. With Cerebras, OpenAI’s Codex-Spark users turn ideas into working software in seconds. Amazon Web Services (“AWS”), the world’s leading hyperscale cloud, has signed a binding term sheet with us to become the first hyperscaler to deploy Cerebras in its own data centers, providing massive distribution to a broad base of enterprise customers.

Our customers use Cerebras solutions to run applications that demand speed, scale, and intelligence. This work includes training and serving large frontier models with near-instant responses, processing massive datasets in real time, and generating full-stack applications in a single step. Once customers adopt fast inference, user expectations for interactivity rise, and engineering teams shift from latency optimizations to other work, making it difficult to return to slower inference.

We deliver our solutions to customers in several different ways.
Organizations that require full data and infrastructure control can purchase Cerebras AI supercomputers for on-premises deployments. Customers seeking cloud flexibility can access Cerebras compute through consumption-based models on Cerebras Cloud or through partner clouds. For example, our high-speed inference services are available through partners, including AWS Marketplace, Microsoft Marketplace, IBM watsonx Model Gateway, Vercel AI Gateway, OpenRouter, and Hugging Face, enabling seamless adoption within existing workflows.


Our ability to deliver differentiated performance has made us a strategic partner to many of our largest customers. Beyond providing compute infrastructure, we provide AI services to our customers to co-develop solutions to address their most complex challenges, from training state-of-the-art models to optimizing deployments for each application’s needs. These partnerships have expanded over time; notably, our top ten customers by year-to-date revenue through December 31, 2025 increased their aggregate spend with us by approximately 80% within 12 months of their initial purchase, often including contracts for co-development.

AI is one of the fastest growing technologies in history. We believe that our high-speed AI solutions give us a meaningful competitive advantage in this market. We believe that further adoption of AI, accelerated by increased penetration, more frequent usage, and more complex applications, will continue to rapidly expand the market. According to IDC, investments in AI solutions and services are projected to yield a global cumulative impact of $22.3 trillion by 2030, representing approximately 3.7% of the global gross domestic product (“GDP”). The combined market for AI training infrastructure and our addressable market within AI inference is estimated to be $251 billion in 2025 and is expected to grow to $672 billion by 2029—a 28% CAGR, according to Bloomberg Intelligence. This estimate indicates that AI inference will grow more than twice as fast as AI training infrastructure through 2029. With the fastest inference platform on the market, as benchmarked by Artificial Analysis, and a proven track record in large-scale training, we believe we are well-positioned to capture growth across both parts of the AI infrastructure market.

Our growth reflects the broader acceleration of AI adoption.
Our revenue increased from $24.6 million in 2022 to $78.7 million in 2023 and to $290.3 million in 2024, representing a more than tenfold increase over three years. Our revenue increased to $510.0 million in 2025, representing year-over-year growth of 76%. We earned net income of $237.8 million in 2025 and incurred a net loss of $481.6 million in 2024. We incurred non-GAAP net loss of $75.7 million in 2025 and $21.8 million in 2024, after excluding the impact of stock-based compensation expense and change in fair value (extinguishment) of forward contract liability from our GAAP net income (loss). For more information and for a reconciliation of non-GAAP net loss to net income (loss), see the section titled “Management’s Discussion and Analysis of Financial Condition and Results of Operations—Non-GAAP Financial Measures.”

Industry Background

AI is the Next Technological Shift

Over the past 50 years, the compute industry has undergone a series of secular shifts, each of which expanded access to compute and transformed global productivity. We believe AI represents the next major technological shift—one with the potential to exceed the transformational impact of prior cycles.

According to Pew Research Center, as of June 2025, around 62% of U.S. adults interacted with AI at least several times a week, with 31% doing so almost constantly (at least several times a day), and one-third of U.S. adults under 30 saying they interacted with AI several times a day. Additionally, the Digital Education Council found in 2024 that 86% of higher-education students used AI. According to a McKinsey survey in 2025, the share of respondents saying their organizations are using AI in at least one business function has increased since their research last year: 88% reported regular AI use in at least one business function in 2025 compared with 78% a year ago.
In the third quarter of 2025, Gallup reported daily use of AI in the workplace had more than doubled in the past 12 months, with 10% of U.S. employees reporting they used AI in their daily roles. The strong rate of AI adoption is driven by the simple fact that AI has transitioned from novelty to necessity and is now used across consumer and enterprise domains. Individuals and organizations rely on AI to solve problems, build products, accelerate research, improve patient outcomes, enhance decision-making, streamline operations, enable innovation, and deliver personalized experiences. The rise of AI depends on massive computational resources. This is where Cerebras fits in.


Inference is Driving AI Compute Demand as Frontier AI Models Grow More Capable

AI is composed of two stages: training and inference. Training is the process of creating and teaching the AI model; inference is the process of using the model to generate responses. Today, AI has entered a new era centered on inference. New techniques have emerged that make models smarter as they are being used. This approach—called “inference-time compute” or “test-time compute”—has become the dominant mode of inference.

Instead of depending primarily on the trained model for accuracy, today’s frontier models—such as OpenAI’s GPT-5.4, Anthropic’s Claude Opus 4.7, and Google’s Gemini 3.1 Pro—perform substantial computation during inference to simulate reasoning. These models effectively “think through” the problem: planning steps, checking their own work, and refining responses before delivering a final, higher-quality result. These additional steps use substantially more compute during inference, while producing more accurate answers.

These reasoning capabilities have fundamentally changed how people use AI. Inference is no longer limited to answering questions; modern AI applications now perform actions on behalf of their users. They can directly book travel itineraries, code full web applications from scratch, help customers apply for mortgages, automatically analyze legal contracts for discrepancies, process insurance claims, and more. As a result, demand for AI inference has surged alongside the adoption of these smarter reasoning models that leverage more inference-time compute.

Ultimately, inference compute demand is driven by the compounding effect of three forces: the number of users, the frequency of use, and the compute per use. Each of these forces is growing at an extraordinary rate, producing a geometric expansion of demand for inference and its underlying compute. Reasoning during inference delivers smarter AI responses but requires significantly more compute.
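The multiplicative relationship among these three forces can be sketched with a few lines of arithmetic; every figure below is a hypothetical assumption chosen for illustration, not company or market data.

```python
# Illustrative sketch of the three compounding inference-demand forces named
# above. All figures are hypothetical assumptions, not company or market data.

def inference_demand(users, uses_per_user, compute_per_use):
    """Total inference compute demand is the product of the three forces."""
    return users * uses_per_user * compute_per_use

# Baseline period vs. a later period in which each force has merely doubled.
base = inference_demand(users=1_000_000, uses_per_user=10, compute_per_use=1.0)
later = inference_demand(users=2_000_000, uses_per_user=20, compute_per_use=2.0)

print(later / base)  # 8.0 -- three independent doublings compound to 8x demand
```

Because the forces multiply rather than add, even modest growth in each one produces the geometric expansion described above.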
As models become more capable, users rely on them for increasingly ambitious tasks, further driving compute needs. Today’s workloads—including video generation, deep research, and long-form analysis—can require many orders of magnitude more compute than answering basic questions.

Reasoning Makes Inference Speed a Necessity

Speed enables reasoning models to deliver more accurate answers faster, reducing the frustration created by forcing customers to wait for answers. Complex tasks (harder problems) are more valuable to solve, but they require the reasoning system to go through a longer sequence of steps. This amplifies the benefit of speed and the penalty for being slow. Speed enables more accurate answers to harder problems in less time. Speed expands the range of tasks that AI can address, thereby broadening its addressable market.

Fast Inference Enables the Next Generation of AI Workloads, With Coding as a Clear Early Signal

As AI uses more compute to tackle increasingly complex problems, a fundamental challenge emerges: everyone wants a better response for complicated requests, but nobody wants to wait to get a response. We are solving this problem. Cerebras Inference delivers answers up to 15 times faster than leading GPU-based solutions as benchmarked on leading open-source models. This speed advantage enables our solutions to deliver real-time performance for the most advanced reasoning models, enabling complex tasks to be completed more accurately and quickly.


These dynamics are already visible in the market. Three fast-growing categories—software development, deep research systems, and voice applications—illustrate the importance of speed. For these and many other similar applications, inference speed is a necessity.

•AI-powered software development provides a clear early signal. Coding with AI is interactive and sensitive to delay. Delay impairs a developer’s train of thought, and as a result, developers are more likely to abandon tools that slow them down. AI can now write code. It reasons over large codebases and then uses the multi-step process previously described to generate, modify, and run code. Inference speed has become a primary determinant of adoption. Products such as Cursor, Claude Code, Codex, Windsurf, and GitHub Copilot act as autonomous collaborators—planning, editing, and validating code across repositories in response to natural-language instructions from developers. These systems perform complex, multi-step tasks, including continuous reasoning and long-context memory. Fast inference is the only way to avoid frustrating wait times. AI-native coding products barely existed in 2023, yet they collectively generated billions in ARR in 2025 and continue to accelerate. For example, AI coding applications like Lovable and Cursor are among the fastest growing developer tools in history. AI coding agents have become central to how software is written. Anthropic’s Claude Code is already at a reported annual revenue run rate of $2.5 billion as of February 2026; Claude Code’s creator said in January 2026 that he writes 100% of his code with AI. In addition, professional developers report that 42% of code is now AI-generated or assisted, according to a survey conducted by SonarSource in October 2025. In droves, software engineers are shifting from writing code to supervising fleets of AI coding agents. Faster inference means more productive engineers.
Coding demonstrates a fundamental pattern in reasoning systems: wherever AI involves continuous interaction, multi-step reasoning, and sensitivity to response time, speed determines utility. Those same conditions are present across a growing set of AI applications.

•Deep research systems apply similar reasoning to knowledge work, performing multi-step retrieval and synthesis across large datasets to deliver structured insights in real time. Platforms such as AlphaSense rely on real-time inference to sift through a higher volume of documents to help analysts and enterprises find answers faster.

•Voice applications include conversational agents, avatars, and digital twins from companies like Meta, Tavus, and OpenCall. Real-time performance is critical for voice: sub-second latency makes interactions feel natural and gives these systems time to call tools or retrieve data mid-conversation for richer, contextual responses.

Together, we believe these applications lead the next phase of AI adoption: systems that think, act, and interact continuously, driving sustained demand for faster and more efficient compute infrastructure. In this environment, speed directly shapes usage. Long wait times limit real-time applications, stunt the diffusion of AI capabilities, and can inhibit new markets and applications. As a result, slow systems lose users, limit capability, and stall innovation, while faster systems are used more often and for more demanding workloads.

We believe speed is a defining advantage in modern AI. Reasoning is intelligence, and intelligence compounds with speed. We believe the ability to deliver fast, scalable reasoning will not only define the next decade of technology, but also shape the future of how people work, create, and interact.

Our Solution

We are building the fastest commercial AI infrastructure in the world. Our AI supercomputers are purpose-built to make AI fast.
Our full-stack hardware and software platform is designed to complete AI tasks significantly faster and more efficiently than comparable GPU-based solutions, whether deployed on premises, through the Cerebras Cloud, or via partner clouds.


  1. Hardware Platform

At the core of our solution is the Cerebras WSE, the largest and fastest AI processor ever brought to market in high volumes. The WSE combines 900,000 compute cores, 44 gigabytes of on-chip memory, and 21 petabytes per second of memory bandwidth on the largest commercial chip ever built. The WSE-3 is 58 times larger than NVIDIA’s B200 chip. The WSE has 19 times more transistors, 250 times more on-chip memory, and 2,625 times more memory bandwidth than NVIDIA’s B200 package, which contains two individual chips. Each WSE is housed inside a Cerebras CS-3 system, our fully integrated AI compute system that includes advanced cooling, power delivery, and interconnect technology. Multiple CS-3 systems connect to form Cerebras AI supercomputers deployed on premises in customer data centers and in the cloud.

  2. Co-designed Software Platform

Our software platform makes wafer-scale computing simple to use. It spans the full AI life cycle—from model programming and compilation, to training and inference, to cluster orchestration.

•Cerebras Compiler compiles PyTorch models directly to the WSE, eliminating the need for CUDA or distributed programming and providing an easy-to-use developer experience.

•Cerebras Inference Serving Stack delivers ultra-low-latency inference with industry-standard APIs for production use.

•Cerebras Cluster Manager orchestrates multiple CS-3 systems into one logical AI supercomputer, handling scheduling, telemetry, and health monitoring at scale.

Because every layer is co-designed with our hardware, customers can scale training and inference across frontier-size models without rewriting code or managing distributed infrastructure.

  3. Flexible Deployment Models

Our technology is designed to be delivered in the form that best accelerates a customer’s AI roadmap. Our platform is designed for flexibility—meeting organizations where they are, and scaling with them as their ambitions grow.

•Cerebras Cloud: Provides high-performance AI compute through a simple API, allowing customers to serve open-source, fine-tuned, or proprietary models with production-grade reliability.

•Partner Clouds: Offer seamless access to Cerebras systems through leading cloud providers including AWS Marketplace, Microsoft Marketplace, IBM watsonx Model Gateway, Vercel AI Gateway, OpenRouter, and Hugging Face, extending our reach across the global AI ecosystem.

•On-Premises Deployments: Deliver fully integrated AI supercomputers and install them directly in customer environments, giving enterprises, Sovereign AI initiatives, national laboratories, and defense organizations complete control over data, performance, and operations. We also operate and manage large clusters of AI supercomputers for some of our customers.

•Hybrid Deployments: Enable customers to move fluidly between on-premises and cloud environments through a unified software stack, maintaining consistent performance and workflows as they scale.

Customers choose the consumption model that fits their needs—buying inference by the token, running training workloads by the week or month, reserving dedicated capacity for long-term production deployments, or purchasing on-premises infrastructure.
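As a hedged sketch of what token-based consumption through a cloud API could look like, the request below targets a generic OpenAI-compatible chat-completions endpoint; the URL, model identifier, and environment variable are illustrative placeholders, not documented values from this prospectus.

```python
# Hypothetical sketch of consuming inference through a cloud API. The endpoint
# URL, model identifier, and API-key variable are placeholders, not documented
# values.
import json
import os
import urllib.request

payload = {
    "model": "llama-3.3-70b",  # assumed open-source model identifier
    "messages": [{"role": "user", "content": "Summarize wafer-scale integration."}],
    "max_tokens": 256,         # token-based billing caps the cost per request
}

request = urllib.request.Request(
    "https://api.example-cloud.invalid/v1/chat/completions",  # placeholder URL
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {os.environ.get('CLOUD_API_KEY', 'demo-key')}",
    },
)

# urllib.request.urlopen(request) would submit the call; it is omitted so the
# sketch stays runnable without network access or credentials.
print(request.get_method())  # POST -- urllib infers POST when data is set
```

A request shaped this way is what "buying inference by the token" typically looks like in practice: each call carries its own token budget and is billed on usage.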


  4. AI Model Services

Our AI experts accelerate customers’ ability to take AI applications from concept to production. With deep experience training and deploying frontier-scale models across modalities, our team helps customers select model architectures, prepare large-scale training data, and train and fine-tune models for production. We also design optimized deployments for customers—training draft or speculative decoding models and tuning configurations to balance latency, throughput, and cost for each application.

What This Means for Customers

Our customers, which include hyperscalers, foundation model labs, AI-native and digital-native businesses, enterprises, and leaders of Sovereign AI initiatives, complete tasks dramatically faster than on GPU-based systems. Faster reasoning improves user experience, increases engagement, accelerates iteration, and enables new classes of AI applications. This speed advantage compounds in production environments, where reduced latency and shorter training cycles have meaningful business impact.

Key Customer Benefits

Through our full-stack AI offerings, we deliver tangible improvements across four key dimensions that define AI value in the real world: speed, quality, cost, and simplicity.

  1. Speed: Real-Time Reasoning Unlocks New Benefits From AI

Our systems achieve dramatically faster inference than GPU clusters, enabling applications such as real-time coding agents, nearly instant deep research, and digital twins that were previously impractical or impossible. Customers describe the leap in inference speed as akin to going from dial-up to broadband—an advancement that redefines what AI can do. New classes of products that customers have built and use daily with Cerebras include:

•Real-time coding agents: Copilots that read, write, and debug code nearly instantly—turning AI into an interactive programming partner.

•Nearly instant deep research agents: Systems that analyze thousands of documents in seconds, accelerating market, scientific, and policy research.

•Digital twins: Lifelike AI personas that think, speak, and react in real time. With Cerebras, avatars respond without awkward delays and carry conversations that are more natural and interactive.

  2. Quality: More Accurate Responses Faster

On GPUs, latency forces a tradeoff between speed and intelligence. Developers often have to limit the accuracy of a response in order to have it delivered in a reasonable amount of time. Our offerings are designed to remove this tradeoff. Our inference speed allows developers to use substantially more reasoning tokens while maintaining the same end-to-end task completion time. We turn quality from a limitation into a feature; customers can now serve some of the largest models at full strength, in nearly real time.
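The tradeoff removal described above reduces to simple arithmetic: at a fixed latency budget, a faster system can spend far more reasoning tokens. Both token rates below are hypothetical assumptions chosen for illustration, not measured benchmarks.

```python
# Illustrative arithmetic for the speed/quality tradeoff described above.
# Both token rates are hypothetical assumptions, not measured benchmarks.
fast_tokens_per_second = 2_000   # assumed high-speed inference rate
slow_tokens_per_second = 150     # assumed conventional inference rate
latency_budget_seconds = 3       # fixed end-to-end response-time target

# Reasoning tokens that fit inside the same latency budget.
fast_budget = fast_tokens_per_second * latency_budget_seconds
slow_budget = slow_tokens_per_second * latency_budget_seconds

print(fast_budget, slow_budget)  # 6000 450 -- ~13x more reasoning at equal latency
```

Under these assumed rates, the faster system can "think" through roughly 13 times as many reasoning tokens without the user waiting any longer, which is the mechanism by which speed converts into answer quality.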

  3. Cost: Higher Performance at Lower Power

Moving data from one chip to another is one of the most power-intensive parts of AI compute. And power is the largest contributor to operating expenses in AI compute. Our wafer-scale architecture keeps data on-chip, reducing data movement significantly, which in turn reduces power consumption. It also eliminates layers of costly and complex networking equipment. By way of comparison, moving a bit of data on the WSE-3 consumes a fraction of the energy required to move the same bit of data over GPU interconnects. Because our performance advantages stem from fundamental architectural efficiency, we expect these benefits to endure across future generations that continue to build on our wafer-scale technology.

  4. Simplicity: One Platform; No Distributed Programming; Easy to Train and Deploy Models

We eliminate the complexity of distributed programming across GPU clusters, which is one of the most challenging aspects of AI deployment. Even extremely large models run without code changes, and scale automatically and seamlessly across clusters of Cerebras systems. Because training, fine-tuning, and inference all occur on a unified platform, customers avoid the operational overhead of moving between different compute environments, enabling inference, fine-tuning, and training from scratch on the same cluster. Cerebras Compiler’s PyTorch integration makes model customization and compilation simple, the Inference Serving Stack enables deployment of frontier-sized models in minutes, and our AI experts support customers throughout the model life cycle to accelerate results. Cerebras’s deployment platform also allows customers to run models that were not trained on Cerebras hardware and still achieve exceptional inference performance.

Our Technology

Wafer-Scale Integration: The Foundation

Cerebras started with a simple question: How could a new class of processors be designed with the singular goal of solving the compute challenges presented by AI? Beginning with a clean slate, how could we avoid the trade-offs made for graphics and other workloads to ensure that every transistor, every single part of the processor, was optimized for the requirements of AI?

Our answer is wafer-scale integration. Wafer-scale integration enabled us to use a vastly faster memory and to avoid the switches, routers, and associated complexity necessary to link together thousands of GPUs. SRAM is the fastest memory to date, but existing industry players could not use large amounts of it because they could not fit it on their chips.
By building a chip 58 times larger than NVIDIA’s B200 chip, we can maximize fast, on-chip SRAM and get the benefits of both worlds: (1) significantly more memory capacity, because we built such a big chip, and (2) the massive bandwidth provided by SRAM. Wafer scale enables us to deliver a solution with 2,625 times more memory bandwidth than NVIDIA’s B200 package, which is how we are able to deliver inference at extremely fast speeds.

The second fundamental advantage provided by wafer-scale integration is that it kept the wafer intact. Instead of building a wafer, cutting it into dozens of small GPUs, and using expensive, power-hungry switches and complex cables to wire them back together, our solution consists of one processor that is the size of an entire silicon wafer. This eliminates much of the networking stack required to build a GPU solution, along with its cost, managerial complexity, and power draw. Our wafer-scale solution unifies compute, memory, and communications on the same piece of silicon, eliminating the data-movement bottlenecks that slow GPU systems.

The Underpinnings of Wafer-Scale Integration

We solved a problem that flummoxed the compute industry for its entire history: how to build chips the size of full silicon wafers. The advantages of size were well known, but no company had ever brought a wafer-scale solution to market.


To make wafer-scale commercially viable, we invented and productized two foundational semiconductor technologies:

•Multi-die interconnect: Traditionally, die—regions of silicon containing an integrated circuit—are individually stamped onto a silicon wafer and then cut up (“diced”) into small, separate chips. Prior to Cerebras, the largest known chip was about 840 mm². We invented technology to interconnect these otherwise independent die together at the wafer level, at the semiconductor fabrication plant. The inter-die connectivity uses a proprietary cross-reticle connection that is integrated into our overall fabrication process. This allowed us to use existing processes to do something we believe had never been done before—namely, deliver a wafer that communicates across the entire 46,225 mm² of silicon and therefore is a single massive processor.

•Fault-tolerant architecture: A primary factor in the commercial viability of a semiconductor is its yield. Flaws are present in wafers, and large chips have a higher probability of hitting such a flaw. Traditionally, chips with flaws have been thrown out or “down binned,” that is, sold as a less capable part. Thus, using traditional techniques, larger chips have lower yield and are therefore more expensive. We designed our architecture to absorb and route around defects using redundant building blocks—similar to a hyperscale data center, but on the wafer. Flaws are designed to be recognized, shut down, and routed around, and redundant building blocks are used to re-form a logically functional whole. This approach had previously been used in memory manufacturing to achieve near-perfect yield but, to our knowledge, had not been used to build processors prior to Cerebras.

These innovations made wafer-scale computing commercially viable for the first time in semiconductor history.

The Cerebras Chip, System, and Software

Cerebras delivers a full-stack AI infrastructure solution. It contains innovations at each layer.
At the base is the Cerebras WSE, our wafer-scale processor. Each WSE is integrated into a CS-3 system with advanced power delivery, cooling, and system management. Multiple CS-3 systems link together to form Cerebras AI supercomputers that are deployed in data centers around the world. Lightweight management and orchestration software operate these systems as one logical computer, while our training and inference platforms make it simple to run large models at scale. Because each layer is designed with the others in mind, the platform delivers consistent performance, reduced infrastructure complexity, and faster time to deployment and results.

  1. The Chip: Cerebras Wafer-Scale Engine

At the heart of our platform is the Cerebras WSE, the world’s largest and fastest commercialized AI processor. A single WSE replaces an entire cluster of GPUs by combining 900,000 compute cores and 44 gigabytes of on-chip memory on one piece of silicon, with 21 petabytes per second of on-chip memory bandwidth. The WSE-3 is 58 times larger than NVIDIA’s B200 chip. The WSE-3 also has 19 times more transistors, 250 times more on-chip memory, and 2,625 times more memory bandwidth than NVIDIA’s B200 package, which contains two individual chips. We believe our architecture solves for memory bandwidth, which is a primary bottleneck in modern AI. By keeping compute and memory on a single chip, WSE-3 eliminates the off-chip data transfers that dominate GPU latency and power consumption. As a result, our systems are faster, simpler to program, and more power-efficient than GPUs on AI tasks.
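The 2,625x bandwidth figure above can be sanity-checked with back-of-the-envelope arithmetic; the roughly 8 terabytes per second used for the two-die GPU package is an assumption inferred from the stated ratio, not a figure given in this prospectus.

```python
# Back-of-the-envelope check of the memory-bandwidth ratio quoted above.
# The GPU package figure is an assumption implied by the stated 2,625x ratio.
wse3_bandwidth = 21e15        # bytes/second: 21 petabytes per second (from the text)
gpu_package_bandwidth = 8e12  # bytes/second: ~8 terabytes per second (assumed)

print(wse3_bandwidth / gpu_package_bandwidth)  # 2625.0, matching the quoted ratio
```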

  2. CS-3: System Innovation for Wafer-Scale Compute

The WSE-3 is deployed inside the CS-3 system, a data center-ready appliance engineered to support wafer-scale operation and integrate seamlessly into enterprise and Sovereign AI environments. The CS-3 provides the power delivery, cooling, networking, and system management required to operate a wafer-scale processor reliably and at scale. Multiple CS-3 systems can be connected to form Cerebras AI supercomputers, which function as a single logical computer for large-scale training and inference.

  3. Cerebras Software: Making Wafer-Scale Simple

Our software platform extends our hardware advantage by making wafer-scale computing simple to use and highly efficient. Our software spans the full AI life cycle—from programming and compiling models, to training and inference, to orchestration across large clusters. Each layer is co-designed with our hardware to deliver maximum performance with minimal developer effort.

•Model Programming and Compilation. Our Cerebras Compiler (CSoft) makes it simple to run large language models on our systems. CSoft is core to our solution and provides intuitive usability for developers. CSoft eliminates the need for low-level programming in CUDA or other hardware-specific languages. For both training and inference, our CSoft platform enables developers to easily represent and map large language models onto the Cerebras Wafer-Scale Engine using familiar frameworks such as PyTorch. CSoft allows machine-learning users to accelerate training and inference on models of any size, scaled across any configuration of the Cerebras AI supercomputer.

•Inference Serving Stack. Our Cerebras Inference Serving Stack manages model hosting, scaling, and request routing across Cerebras systems and clusters. It provides real-time observability and load balancing, enabling ultra-low-latency inference for production workloads. Customers can serve both open-source and proprietary models through standard APIs, including industry-standard endpoints, with consistent performance across on-premises and cloud deployments.

•Orchestration and Life Cycle Management. Our Cerebras Cluster Manager orchestration software unifies multiple CS-3 systems into a single logical computer, managing scheduling, telemetry, and health monitoring. Built-in observability of all hardware and software components is designed to ensure reliability and high utilization across on-premises and cloud environments.
This orchestration layer also allows customers to switch seamlessly between training and inference on the same systems.

Together, these components form a unified software platform that integrates seamlessly with our hardware to deliver a complete, end-to-end AI computing system that can be deployed on customer premises or in the cloud. Because our software and hardware are co-designed, customers can train and/or deploy frontier-scale models with consistent and simple workflows—without rewriting code or managing distributed infrastructure.

Technology and Roadmap

Wafer-scale integration is not a single achievement—it is a collection of technologies and processes with a multi-generation roadmap. Each successive WSE generation (from 16 nanometer to 7 nanometer and now to 5 nanometer) has delivered substantial improvements in performance, memory bandwidth, efficiency, yield, and manufacturability, without requiring changes to how developers program or deploy models.

Our roadmap builds on the advantages of wafer-scale integration. We intend to invest heavily in research and development to continue to expand on-chip memory and memory bandwidth, improve interconnect density, and leverage advancements in process technology to increase transistor counts and reduce power in future WSE generations. As a result, we expect that future generations of WSEs will have faster compute, and more and faster memory and communication onto and off of the wafer. The same architectural foundation also supports long-term extensibility across emerging AI workloads. As models grow in size, increase in reasoning depth, and shift toward real-time, multi-step interactions, they place even greater emphasis on memory bandwidth and locality—all areas where wafer-scale architectures possess inherent, structural advantages.


We believe wafer-scale computing positions us as a leader in AI infrastructure, providing a long-term technology roadmap designed to scale with the requirements of modern and future AI systems.

Competitive Strengths

1.Our culture of fearless engineering has enabled us to do pioneering engineering work; we are the only company ever to deliver a wafer-scale processor to market. Our culture of fearless engineering enables us to solve problems that others failed to solve or were afraid to tackle.

2.We have durable advantages rooted in our unique silicon architecture. We believe wafer-scale integration is a fundamental advantage in AI compute, enabling large amounts of high-speed memory and hundreds of thousands of compute cores to reside close together on the same piece of silicon. We have now delivered three generations of wafer-scale processors at the 16, 7, and 5 nanometer nodes.

3.We are an end-to-end systems company. From inception, we co-designed our wafer-scale engine, our CS systems, and our software stack for optimal AI performance. We were among the first in the AI community to deliver water cooling to the processor, enabling us to run colder and extend our processors’ lifetime. The co-design of processor, system, and software is a meaningful competitive advantage.

4.We are building the fastest inference infrastructure in the world. On Cerebras infrastructure, AI responses are up to 15 times faster than leading GPU-based solutions as benchmarked on leading open-source models. Speed is customer experience. Speed changes the way companies design their experiences.

5.We are serving some of the largest and most demanding customers in the AI market. We are engaged with customers such as OpenAI, the world’s leading foundation model lab, and AWS, the world’s leading hyperscale cloud, who have stringent requirements for performance, scale, and reliability. We offer a full-stack hardware and software platform that can be optimized for each customer’s workloads and paired with AI services in order to deploy and operate high-capacity, production-grade systems without requiring customers to manage complex infrastructure.

6.We operate at massive scale with more than 100 exaflops of deployed compute. In collaboration with our partners, we have trained some of the largest models in the industry, gaining unique experience and providing rare insight.

Risk Factors Summary

Our business is subject to a number of risks and uncertainties of which you should be aware before making a decision to invest in our Class A common stock. These risks are more fully described in the section titled “Risk Factors.” These risks include, among others, the following:

•We may not sustain our growth rate, and we may not be able to manage future growth effectively.

•We have a history of generating net losses, and if we are unable to achieve adequate revenue growth while our expenses increase, we may not achieve and maintain profitability in the future.

•We have a limited operating history at our current scale, and we may have difficulty evaluating our current business and accurately predicting our future revenue for the purpose of appropriately budgeting and adjusting our expenses.

•A substantial portion of our revenue has been, and is expected to continue to be, driven by a limited number of customers. A reduction in demand from, or a material adverse development in our relationship with, any of our significant customers, including OpenAI, G42, MBZUAI, and AWS, or our failure to meet our obligations under the MRA with OpenAI, would harm our business, financial condition, results of operations, and prospects.


•Our revenue historically has been derived from sales of our hardware systems. We are in the early stages of delivering our cloud-based offerings, the market for which is new and evolving rapidly, and which require significant data center capacity and capital investments, for which we expect to require significant additional capital. There is no assurance that we will be able to sustain revenues from these efforts.

•Our cloud-based offerings are subject to certain risks and challenges. Unfavorable or uncertain conditions in the training or inference cloud market, as well as for AI infrastructure, may cause fluctuations in our results of operations.

•The broader adoption, use, and commercialization of AI technology, and the continued rapid pace of developments in the AI field, are inherently uncertain. If we are unable to expand the application of our products, keep up with evolving AI technology requirements, or if the new products we develop and introduce into the market are not successful, our business, financial condition, results of operations, and prospects may be harmed.

•The market for AI computing solutions is competitive, evolving, and requires scale, and if we do not compete effectively, our business, financial condition, results of operations, and prospects may be harmed.

•We depend on third-party suppliers, including certain sole sources, and substantially all of our manufacturing services and components are procured on a purchase order basis without capacity or volume commitments, which may harm our ability to compete with larger companies, meet customer demand, satisfy customer contracts, or bring products to market, and may harm our reputation, business, financial condition, results of operations, and prospects.

•Our supply chain is long, complex, and global, with many interdependencies. Any significant fluctuations of supply and demand or disruption to our supply chain may harm our ability to manufacture and deliver our products to our customers.

•Our business and our products and services are subject to various governmental regulations, and compliance with these regulations may cause us to incur significant expense. If we fail to comply with applicable regulations, we could be subject to civil or criminal penalties.

•Our offerings are subject to U.S. export controls and may be exported outside the United States only with the required export license or through a license exception. We cannot guarantee that we will be successful in obtaining all required licenses in the future. If we are unable to obtain licenses to export our products, our business, financial condition, results of operations, and prospects may be harmed.

•We identified material weaknesses in our internal control over financial reporting. If we are unable to remediate these material weaknesses, or if we identify additional material weaknesses in the future or otherwise fail to maintain an effective system of internal controls, we may not be able to accurately or timely report our financial condition or results of operations, which may adversely affect investor confidence in us and, as a result, the value of our Class A common stock.

•The multi-class structure of our capital stock as contained in our amended and restated certificate of incorporation has the effect of concentrating voting control with those stockholders who held our securities prior to this offering, including our executive officers, employees, and directors and their affiliates, and limiting your ability to influence corporate matters, which could adversely affect the price of our Class A common stock.

•No public market for our common stock currently exists, and an active, liquid market may not develop or be sustained following this offering.


Corporate Information

We were incorporated in April 2016 as a Delaware corporation. Our principal executive offices are located at 1237 E. Arques Avenue, Sunnyvale, California 94085, and our telephone number is (650) 933-4980. Our website address is www.cerebras.ai. Information contained on, or that can be accessed through, our website does not constitute part of this prospectus, and the inclusion of our website address in this prospectus is an inactive textual reference only.

Implications of Being an Emerging Growth Company

We are an emerging growth company as defined in the Jumpstart Our Business Startups Act of 2012 (the “JOBS Act”). We will remain an emerging growth company until the earliest of: (i) the last day of the fiscal year following the fifth anniversary of the completion of this offering; (ii) the last day of the fiscal year in which we have total annual gross revenue of at least $1.235 billion; (iii) the last day of the fiscal year in which we are deemed to be a “large accelerated filer” as defined in Rule 12b-2 under the Securities Exchange Act of 1934, as amended (the “Exchange Act”), which would occur if the market value of our Class A common stock held by non-affiliates exceeded $700.0 million as of the last business day of the second fiscal quarter of such year; or (iv) the date on which we have issued more than $1.0 billion in non-convertible debt securities during the prior three-year period. An emerging growth company may take advantage of specified reduced reporting requirements and is relieved of certain other significant requirements that are otherwise generally applicable to public companies.
As an emerging growth company:

•we will present in this prospectus only two years of audited annual financial statements, plus any required unaudited condensed consolidated financial statements, and related management’s discussion and analysis of financial condition and results of operations;

•we will avail ourselves of the exemption from the requirement to obtain an attestation and report from our independent registered public accounting firm on the assessment of our internal control over financial reporting pursuant to the Sarbanes-Oxley Act of 2002;

•we will provide less extensive disclosure about our executive compensation arrangements; and

•we will not require stockholder non-binding advisory votes on executive compensation or golden parachute arrangements.

In addition, the JOBS Act provides that an emerging growth company can take advantage of an extended transition period for complying with new or revised accounting standards. This provision allows an emerging growth company to delay the adoption of some accounting standards until those standards would otherwise apply to private companies. We have elected to use the extended transition period for any new or revised accounting standards until the date that we are no longer an emerging growth company or affirmatively and irrevocably opt out of the extended transition period. As a result, our financial statements may not be comparable to companies that comply with new or revised accounting pronouncements as of public company effective dates.


THE OFFERING

Class A common stock offered by us ...............................                 shares.

Over-allotment option to purchase additional shares of Class A common stock from us.....................................                 shares.

Class A common stock to be outstanding immediately after this offering ...........................................................                 shares (or                 shares if the underwriters exercise their over-allotment option in full).

Class B common stock to be outstanding immediately after this offering ...........................................................                 shares.

Class N common stock to be outstanding immediately after this offering ........................................................... None.

Total Class A common stock, Class B common stock, and Class N common stock to be outstanding after this offering ...................................................................                 shares (or                 shares if the underwriters exercise their over-allotment option in full).

Use of proceeds ................................................................. We estimate that we will receive net proceeds from this offering of approximately $                (or $                if the underwriters exercise their over-allotment option in full), based upon the assumed initial public offering price of $           per share of Class A common stock, which is the midpoint of the estimated price range set forth on the cover page of this prospectus, and after deducting estimated underwriting discounts and commissions and estimated offering expenses payable by us. The principal purposes of this offering are to obtain additional capital to fund our operations, create a public market for our Class A common stock, facilitate our future access to the public equity markets, and increase awareness of our company among potential partners. We currently intend to use the net proceeds from this offering, together with our existing cash, cash equivalents, and investments, for general corporate purposes, including working capital, operating expenses, and capital expenditures. We may also use a portion of the net proceeds to in-license, acquire, or invest in complementary technologies, assets, businesses, or intellectual property. We periodically evaluate strategic opportunities; however, we have no current commitments to enter into any such acquisitions or make any such investments. We intend to use approximately $                of the net proceeds to satisfy tax withholding and remittance obligations related to the RSU Net Settlement (as defined below) for restricted stock units (“RSUs”) that will vest in connection with this offering. We will have broad discretion in the way that we use the net proceeds of this offering. See the section titled “Use of Proceeds” for additional information.


Voting rights ..................................................................... We will have three classes of common stock: Class A common stock, Class B common stock, and Class N common stock. Each share of Class A common stock is entitled to one vote per share, each share of Class B common stock is entitled to 20 votes per share and is convertible at any time into one share of Class A common stock, and each share of Class N common stock is non-voting and is convertible into one share of Class A common stock. Holders of Class A common stock and Class B common stock will generally vote together as a single class, unless otherwise required by law or our amended and restated certificate of incorporation that will be in effect immediately prior to the completion of this offering. Once this offering is completed (and without giving effect to any shares that may be purchased in this offering or pursuant to our directed share program), the holders of our outstanding Class B common stock will hold approximately           % of our outstanding shares and control approximately           % of the voting power of our outstanding shares, and our executive officers, directors, and stockholders holding more than 5% of our outstanding capital stock, together with their affiliates, will beneficially own, in the aggregate, approximately           % of our outstanding shares and control approximately           % of the voting power of our outstanding shares. The holders of our outstanding Class B common stock will have the ability to control the outcome of matters submitted to our stockholders for approval, including the election of our directors and the approval of any change in control transaction. See the sections titled “Principal Stockholders” and “Description of Capital Stock” for additional information.

Directed share program ..................................................... At our request, the underwriters have reserved up to           % of the shares of Class A common stock offered by this prospectus, for sale at the initial public offering price through a directed share program to certain persons identified by our management and certain long-tenured employees, which may include parties with whom we have a business relationship and friends and family of management and such employees. Any reserved shares of Class A common stock that are not so purchased will be offered by the underwriters to the general public on the same terms as the other shares of our Class A common stock offered by this prospectus. See the section titled “Underwriters—Directed Share Program” for additional information. If purchased by these persons, these shares will not be subject to lock-up restrictions, except to the extent that the purchasers of such shares are otherwise subject to lock-up agreements as a result of their relationships with us. The number of shares of Class A common stock available for sale to the general public will be reduced by the number of reserved shares sold pursuant to this program.

Risk factors ....................................................................... See the section titled “Risk Factors” and other information included in this prospectus for a discussion of factors you should carefully consider before deciding whether to invest in our Class A common stock.


Proposed Nasdaq Global Select Market trading symbol ..................................................................... “CBRS”

In this prospectus, the number of shares of our common stock to be outstanding after this offering is based on no shares of our Class A common stock,                 shares of our Class B common stock, and no shares of our Class N common stock outstanding as of December 31, 2025, after giving effect to the Preferred Stock Conversion, the Common Stock Reclassification, and the RSU Net Settlement (each as defined below), and excludes:

•28,361,707 shares of our Class B common stock issuable upon the exercise of outstanding stock options as of December 31, 2025, with a weighted-average exercise price of $4.97 per share;

•                shares of our Class B common stock issuable upon the vesting and settlement of RSUs subject to service-based and liquidity-based vesting conditions outstanding as of December 31, 2025, for which the service-based vesting condition was not yet satisfied as of December 31, 2025 and for which the liquidity-based vesting condition will be satisfied in connection with this offering, after giving effect to the RSU Net Settlement;

•                shares of Class B common stock issuable upon the vesting and settlement of RSUs subject to service-based and liquidity-based vesting conditions granted after December 31, 2025, for which the service-based vesting condition was not yet satisfied as of December 31, 2025 and for which the liquidity-based vesting condition will be satisfied in connection with this offering, after giving effect to the RSU Net Settlement;

•9,000,000 shares of Class B common stock issuable upon the vesting and settlement of RSUs subject to market-based vesting conditions (“PRSUs”) granted after December 31, 2025, for which the market-based vesting condition was not yet satisfied as of December 31, 2025 (see the section titled “Executive and Director Compensation—Narrative to Summary Compensation Table—Equity-Based Compensation—2026 Founder PRSU Awards” for additional information);

•33,445,026 shares of our Class N common stock issuable upon the exercise of a warrant outstanding as of December 31, 2025, with an exercise price of $0.00001 per share (the “OpenAI Warrant”), subject to satisfaction of vesting conditions (see the section titled “Capitalization—Vesting of Shares Underlying the OpenAI Warrant” for additional information);

•2,696,678 shares of our Class N common stock issuable upon the exercise of a warrant authorized after December 31, 2025, with an exercise price of $100.00 per share, subject to satisfaction of vesting conditions;

•3,682,000 shares of our Class N common stock issued after December 31, 2025;

•                shares of our Class A common stock reserved for future issuance under our 2026 Incentive Award Plan (the “2026 Plan”), which will become effective on the day immediately prior to the date of effectiveness of the registration statement of which this prospectus forms a part, including                 new shares and the number of shares (i) that remain available for grant of future awards under our 2016 Equity Incentive Plan (as amended, the “2016 Plan”) at the time the 2026 Plan becomes effective, which shares will cease to be available for issuance under the 2016 Plan at such time and (ii) underlying outstanding stock-based compensation awards granted under the 2016 Plan (such awards outstanding under such plan, the “Prior Plan Awards”) that expire, or are cancelled, forfeited, reacquired, or withheld; and

•                shares of our Class A common stock reserved for future issuance under our 2026 Employee Stock Purchase Plan (the “ESPP”), which will become effective on the day immediately prior to the date of effectiveness of the registration statement of which this prospectus forms a part.


The 2026 Plan and the ESPP also provide for automatic annual increases in the number of shares reserved thereunder. See the section titled “Executive and Director Compensation—Equity Compensation Plans” for additional information.

Except as otherwise indicated, all information in this prospectus assumes or gives effect to:

•the adoption, filing, and effectiveness of our amended and restated certificate of incorporation and the adoption of our amended and restated bylaws, each of which will occur immediately prior to the completion of this offering;

•the automatic conversion of all outstanding shares of our redeemable convertible preferred stock into an aggregate of 124,652,775 shares of our newly created Class B common stock, which will occur prior to the completion of this offering (the “Preferred Stock Conversion”);

•the reclassification of our outstanding Class A common stock into a newly created Class B common stock and the authorization of a new Class A common stock, which will occur prior to the completion of this offering (the “Common Stock Reclassification”);

•the net issuance of                shares of our Class B common stock issuable upon the vesting and settlement of RSUs subject to service-based and liquidity-based vesting conditions outstanding as of                , 2026, for which the service-based vesting condition was satisfied as of                , 2026 and for which the liquidity-based vesting condition will be satisfied in connection with this offering, after giving effect to the withholding of an estimated                 shares to satisfy estimated tax withholding and remittance obligations (based on an assumed           % tax withholding rate) (the “RSU Net Settlement”);

•no repurchase of outstanding shares of our capital stock after December 31, 2025;

•no exercise of outstanding stock options or warrants or settlement of outstanding RSUs after December 31, 2025, except for the RSU Net Settlement; and

•no exercise of the underwriters’ over-allotment option to purchase additional shares from us.


Summary Consolidated Financial Data

The following tables set forth our summary consolidated financial data. The summary consolidated statements of operations data for the years ended December 31, 2025 and 2024 have been derived from our audited consolidated financial statements included elsewhere in this prospectus. Our historical results are not necessarily indicative of results that may be expected in the future. You should read the following summary consolidated financial data in conjunction with the section titled “Management’s Discussion and Analysis of Financial Condition and Results of Operations” and our consolidated financial statements and related notes included elsewhere in this prospectus. The summary consolidated financial data in this section are not intended to replace, and are qualified in their entirety by, the consolidated financial statements and related notes.


Year Ended December 31,
(in thousands, except per share amounts)

| Consolidated Statement of Operations: | 2025 | 2024 |
| --- | --- | --- |
| Revenue |  |  |
| Hardware | $358,440 | $211,965 |
| Cloud and other services | 151,551 | 78,287 |
| Total revenue | 509,991 | 290,252 |
| Cost of revenue |  |  |
| Hardware | 204,746 | 137,310 |
| Cloud and other services | 106,174 | 30,204 |
| Total cost of revenue | 310,920 | 167,514 |
| Gross profit | 199,071 | 122,738 |
| Operating expenses |  |  |
| Research and development | 243,319 | 158,234 |
| Sales and marketing | 70,645 | 20,980 |
| General and administrative | 30,969 | 44,962 |
| Total operating expenses | 344,933 | 224,176 |
| Loss from operations | (145,862) | (101,438) |
| Other income (expense), net | 390,746 | (378,237) |
| Income (loss) before income taxes | 244,884 | (479,675) |
| Income tax expense | 7,057 | 1,927 |
| Net income (loss) | $237,827 | $(481,602) |
| Less: Net income attributable to participating securities | 149,952 | — |
| Less: Deemed dividend on issuance of Series F-1 redeemable convertible preferred stock | — | 3,182 |
| Net income (loss) attributable to common shareholders | $87,875 | $(484,784) |
| Net income (loss) per share attributable to common shareholders: |  |  |
| Basic | $1.64 | $(9.90) |
| Diluted | $1.38 | $(9.90) |
| Weighted average shares used in per share computation: |  |  |
| Basic | 53,616 | 48,972 |
| Diluted | 171,821 | 48,972 |
| Pro forma net loss per share attributable to common stockholders, basic and diluted |  |  |
| Pro forma weighted-average shares used in calculating pro forma net loss per share attributable to common stockholders, basic and diluted |  |  |
| Other Financial Information: |  |  |
| Non-GAAP operating loss | $(96,095) | $(42,874) |
| Non-GAAP net loss | $(75,742) | $(21,774) |
| Net cash provided by (used in) operating activities | $(10,050) | $451,978 |


_______________
(1)Includes stock-based compensation expense as follows:

Year Ended December 31,
(in thousands)

|  | 2025 | 2024 |
| --- | --- | --- |
| Cost of revenue | $827 | $921 |
| Research and development | 32,154 | 41,397 |
| Sales and marketing | 9,950 | 8,723 |
| General and administrative | 6,836 | 7,523 |
| Total stock-based compensation expense | $49,767 | $58,564 |

Stock-based compensation expense included $14.1 million and $30.7 million for the years ended December 31, 2025 and 2024, respectively, related to secondary transactions in each period. See Note 14 to our audited consolidated financial statements included elsewhere in this prospectus for additional details on the secondary transactions.

(2)See Note 7 to our audited consolidated financial statements included elsewhere in this prospectus for an explanation of the method used to calculate our basic and diluted net income (loss) per share and the weighted-average number of shares used in the computation of per share amounts.

(3)The pro forma weighted-average shares used in computing pro forma net loss per share gives effect to (i) the Preferred Stock Conversion, (ii) the Common Stock Reclassification, and (iii) the RSU Net Settlement. The pro forma net loss used to calculate pro forma net loss per share reflects stock-based compensation expense of approximately $          that we will recognize upon the completion of this offering related to RSUs subject to service-based and liquidity-based vesting conditions for which the service-based vesting condition was satisfied as of December 31, 2025 and for which the liquidity-based vesting condition will be satisfied in connection with this offering.

(4)See “Non-GAAP Operating Loss” below for additional information and for a reconciliation of Non-GAAP operating loss to loss from operations, the most directly comparable financial measure calculated and presented in accordance with U.S. generally accepted accounting principles (“GAAP”).

(5)See “Non-GAAP Net Loss” below for additional information and for a reconciliation of Non-GAAP net loss to net loss, the most directly comparable financial measure calculated and presented in accordance with GAAP.

                                                                          As of December 31, 2025
                                                                 Actual       Pro Forma     Pro Forma As Adjusted
                                                                               (in thousands)

Consolidated Balance Sheet Data:
Cash and cash equivalents ............................................................................    $701,706    $             $
Working capital ......................................................................................     824,106
Total assets .........................................................................................   2,326,037
Total liabilities ....................................................................................     971,344
Redeemable convertible preferred stock ...............................................................   1,933,348
Stockholders’ deficit ................................................................................   $(578,655)

_______________
(1) The pro forma column above gives effect to (i) the filing and effectiveness of our amended and restated certificate of incorporation, which will occur immediately prior to the completion of this offering; (ii) the Preferred Stock Conversion; (iii) the Common Stock Reclassification; (iv) the RSU Net Settlement; (v) the increase in accrued expenses and other current liabilities and an equivalent decrease in additional paid-in capital of $          in connection with the estimated tax withholding and remittance obligations related to the RSU Net Settlement; and (vi) stock-based compensation expense of approximately $          that we will recognize upon the completion of this offering related to RSUs subject to service-based and liquidity-based vesting conditions for which the service-based vesting condition was satisfied as of December 31, 2025 and for which the liquidity-based vesting condition will be satisfied in connection with this offering.



(2) The pro forma as adjusted column above gives further effect to (i) the pro forma adjustments set forth above; (ii) the issuance and sale of                    shares of Class A common stock by us in this offering at an assumed initial public offering price of $          per share, which is the midpoint of the estimated price range set forth on the cover page of this prospectus, after deducting estimated underwriting discounts and commissions and estimated offering expenses payable by us; and (iii) the use of a portion of the net proceeds from this offering to satisfy the estimated tax withholding and remittance obligations related to the RSU Net Settlement.
(3) Each $1.00 increase or decrease in the assumed initial public offering price of $          per share, which is the midpoint of the estimated price range set forth on the cover page of this prospectus, would increase or decrease, as applicable, each of cash and cash equivalents, working capital, total assets, and stockholders’ deficit by $         , assuming that the number of shares of Class A common stock offered by us, as set forth on the cover page of this prospectus, remains the same, and after deducting estimated underwriting discounts and commissions and estimated offering expenses payable by us. Similarly, each increase or decrease of 1.0 million shares in the number of shares of Class A common stock offered by us would increase or decrease, as applicable, each of cash and cash equivalents, working capital, total assets, and stockholders’ deficit by $         , assuming the assumed initial public offering price of $          per share remains the same, and after deducting estimated underwriting discounts and commissions and estimated offering expenses payable by us. In addition, each 1.0% increase or decrease in the assumed tax withholding rate would increase or decrease, as applicable, the amount of estimated tax withholding and remittance obligations related to the RSU Net Settlement by $         . Pro forma adjustments in the footnotes above and the related information in the consolidated balance sheet data are illustrative only and will be adjusted based on the actual initial public offering price and other terms of this offering determined at pricing, the actual tax withholding rate, as well as the actual amount of RSUs settled in connection with this offering (including after accounting for forfeitures prior to the settlement date).
(4) Working capital is defined as total current assets less total current liabilities. See our unaudited interim consolidated financial statements and the related notes thereto included elsewhere in this prospectus for further details regarding our current assets and current liabilities.

Non-GAAP Financial Measures

We use certain non-GAAP financial measures to supplement the performance measures in our consolidated financial statements, which are presented in accordance with GAAP. These non-GAAP financial measures include non-GAAP operating loss and non-GAAP net loss. We use these non-GAAP financial measures for financial and operational decision-making and as a means to assist us in evaluating period-to-period comparisons. By excluding certain items that may not be indicative of our recurring core operating results, we believe that non-GAAP operating loss and non-GAAP net loss provide meaningful supplemental information regarding our performance. Accordingly, we believe these non-GAAP financial measures are useful to investors and others because they allow for additional information with respect to financial measures used by management in its financial and operational decision-making and they may be used by our institutional investors and the analyst community to help them analyze the health of our business. However, there are a number of limitations related to the use of non-GAAP financial measures, and these non-GAAP measures should be considered in addition to, not as a substitute for or in isolation from, our financial results prepared in accordance with GAAP. Other companies, including companies in our industry, may calculate these non-GAAP financial measures differently or not at all, which reduces their usefulness as comparative measures.

Non-GAAP Operating Loss

We define non-GAAP operating loss as operating loss presented in accordance with GAAP, adjusted to exclude stock-based compensation expenses.
We have presented non-GAAP operating loss because we consider non-GAAP operating loss to be a useful metric for investors and other users of our financial information in evaluating our operating performance because it excludes the impact of stock-based compensation, a non-cash charge that can vary from period to period as such variations are unrelated to our core operating performance. This metric also provides investors and other users of our financial information with an additional tool to compare business performance across companies and periods, while eliminating the effects of items that may vary for different companies for reasons unrelated to core operating performance.



A reconciliation of our GAAP operating loss, the most directly comparable GAAP financial measure, to non-GAAP operating loss is presented below:

                                                                                   Year Ended December 31,
                                                                                     2025          2024
                                                                                       (in thousands)
GAAP operating loss ........................................................     $(145,862)   $(101,438)
Add: Stock-based compensation expense ......................................         49,767       58,564
Non-GAAP operating loss ....................................................      $(96,095)    $(42,874)
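The reconciliation above is a single add-back of a non-cash charge. As a minimal sketch of the arithmetic (figures in thousands, taken from the table above; the function name is ours for illustration, not a term used in this prospectus):

```python
def non_gaap_operating_loss(gaap_operating_loss: int, stock_based_comp: int) -> int:
    """Add back non-cash stock-based compensation expense to GAAP operating loss."""
    return gaap_operating_loss + stock_based_comp

# Figures in thousands, from the reconciliation table above.
fy2025 = non_gaap_operating_loss(-145_862, 49_767)  # -> -96_095, i.e. $(96,095)
fy2024 = non_gaap_operating_loss(-101_438, 58_564)  # -> -42_874, i.e. $(42,874)
```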

Non-GAAP Net Loss

We monitor non-GAAP net loss for planning and performance measurement purposes. We define non-GAAP net loss as net loss reported on our consolidated statements of operations, excluding the impact of stock-based compensation expenses and change in fair value (extinguishment) of forward contract liability. We have presented non-GAAP net loss because we believe that the exclusion of these charges allows for a more relevant comparison of our results of operations to other companies in our industry and facilitates period-to-period comparisons as it eliminates the effect of certain factors unrelated to our overall operating performance. Our calculation of non-GAAP net loss does not currently include the tax effects of the stock-based compensation expense adjustment because such tax effects have not been material to date.

A reconciliation of our GAAP net loss, the most directly comparable GAAP financial measure, to our non-GAAP net loss is presented below:

                                                                                   Year Ended December 31,
                                                                                     2025          2024
                                                                                       (in thousands)
GAAP net income (loss) .....................................................       $237,827    $(481,602)
Add: Stock-based compensation expense ......................................         49,767        58,564
Add: Change in fair value (extinguishment) of forward contract liability ...      (363,336)       401,264
Non-GAAP net loss ..........................................................      $(75,742)     $(21,774)
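This reconciliation combines two adjustments, one of which (the forward contract liability change) flips sign between periods. A minimal sketch of the arithmetic (figures in thousands, taken from the table above; the function name is ours for illustration):

```python
def non_gaap_net_loss(gaap_net_income_loss: int,
                      stock_based_comp: int,
                      fwd_contract_change: int) -> int:
    """Exclude stock-based compensation expense and the change in fair value
    (extinguishment) of the forward contract liability from GAAP net income (loss)."""
    return gaap_net_income_loss + stock_based_comp + fwd_contract_change

# Figures in thousands, from the reconciliation table above.
fy2025 = non_gaap_net_loss(237_827, 49_767, -363_336)   # -> -75_742, i.e. $(75,742)
fy2024 = non_gaap_net_loss(-481_602, 58_564, 401_264)   # -> -21_774, i.e. $(21,774)
```

Note that 2025 GAAP net income is positive ($237.8 million), yet non-GAAP net loss is negative, because the large favorable change in the forward contract liability is excluded.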

_______________
(1) Non-GAAP net loss does not include the tax effects of the stock-based compensation expense adjustment because such tax effects were not material during the periods presented.

