BUSINESS

Overview

We are building the fastest AI infrastructure in the world. In AI, speed is critical to win. Speed improves user engagement, expands product capabilities, can lower operating costs, and opens new markets. It shortens iteration cycles for engineers, researchers, and professionals across industries, allowing them to be more productive. Speed unlocks new applications and new industries.

In technology, “speed unlocking value” is a pattern that has repeated itself over the past 30 years. Faster solutions are used more often and for more demanding tasks. For example, the speed of broadband transformed the internet from static pages into real-time applications, enabling new products and industries. Similarly, in search, Google showed that even short delays in delivering answers significantly reduced usage and engagement. AI repeats this pattern. As AI has moved from novelty to necessity, AI work has grown more demanding, and speed has become a bottleneck. Faster AI does more work in less time, providing better answers sooner.

Our solutions are built for speed. Cerebras Inference delivers answers up to 15 times faster than leading GPU-based solutions as benchmarked on leading open-source models. Similarly, many customers have achieved more than 10 times faster training time-to-solution compared to leading GPU systems of the same generation. These performance breakthroughs are the result of our core innovation: the world’s first and only commercialized wafer-scale processor. Called the Wafer-Scale Engine (“WSE”), our processor is 58 times larger than NVIDIA’s B200 chip and has 2,625 times more memory bandwidth than NVIDIA’s B200 package, which contains two individual chips. To build the WSE, we solved the 75-year-old compute industry problem of wafer-scale integration to produce, yield, power, and cool a chip of this size. This size is what enables our incredible AI speeds.
By bringing massive compute and memory onto a single piece of silicon and integrating it into a purpose-built system and software stack, we deliver exceptional AI speed for customers on premises and via the cloud.

Our strategic partners and customers include hyperscalers, foundation model labs, AI-native and digital-native businesses, enterprises, and Sovereign AI initiatives. OpenAI, the world’s leading foundation model lab, selected us to be its fast inference solution. With Cerebras, OpenAI’s Codex-Spark users turn ideas into working software in seconds. This partnership is an example of tight hardware-software co-design with a leading frontier model lab. AWS, the world’s leading hyperscale cloud, has signed a binding term sheet with us to become the first hyperscaler to deploy Cerebras in its own data centers, providing massive distribution to a broad base of enterprise customers.

Our customers use Cerebras solutions to run applications that demand speed, scale, and intelligence. This work includes training and serving large frontier models with near-instant responses, processing massive datasets in real time, and generating full-stack applications in a single step. Once customers adopt fast inference, user expectations for interactivity rise, and engineering teams shift from latency optimizations to other work, making it difficult to return to slower inference.

We deliver our solutions to customers in several different ways. Organizations that require full data and infrastructure control can purchase Cerebras AI supercomputers for on-premises deployments. Customers seeking cloud flexibility can access Cerebras compute through consumption-based models on Cerebras Cloud or through partner clouds. For example, our high-speed inference services are available through partners, including AWS Marketplace, Microsoft Marketplace, IBM watsonx Model Gateway, Vercel AI Gateway, OpenRouter, and Hugging Face, enabling seamless adoption within existing workflows.
Our ability to deliver differentiated performance has made us a strategic partner to many of our largest customers. Beyond providing compute infrastructure, we provide AI services to our customers to co-develop solutions to address their most complex challenges, from training state-of-the-art models to optimizing deployments for each application’s needs. These partnerships have expanded over time; notably, our top ten customers by year-to-date revenue through December 31, 2025 increased their aggregate spend with us by approximately 80% within 12 months of their initial purchase, often including contracts for co-development.

AI is one of the fastest growing technologies in history. We believe that our high-speed AI solutions give us a meaningful competitive advantage in this market. We believe that further adoption of AI, accelerated by increased penetration, more frequent usage, and more complex applications, will continue to rapidly expand the market. According to IDC, investments in AI solutions and services are projected to yield a global cumulative impact of $22.3 trillion by 2030, representing approximately 3.7% of global GDP. The combined market for AI training infrastructure and our addressable market within AI inference is estimated to be $251 billion in 2025 and is expected to grow to $672 billion by 2029—a 28% CAGR, according to Bloomberg Intelligence. This estimate indicates that AI inference will grow more than twice as fast as AI training infrastructure through 2029. With the fastest inference platform on the market, as benchmarked by Artificial Analysis, and a proven track record in large-scale training, we believe we are well-positioned to capture growth across both parts of the AI infrastructure market.

Our growth reflects the broader acceleration of AI adoption. Our revenue increased from $24.6 million in 2022 to $78.7 million in 2023 and to $290.3 million in 2024, representing a more than tenfold increase over three years. Our revenue increased to $510.0 million in 2025, representing year-over-year growth of 76%. We earned net income of $237.8 million in 2025 and incurred a net loss of $481.6 million in 2024. Our gross margin was 12%, 33%, 42%, and 39% in 2022, 2023, 2024, and 2025, respectively.
We incurred non-GAAP net loss of $75.7 million in 2025 and $21.8 million in 2024, after excluding the impact of stock-based compensation expense and change in fair value (extinguishment) of forward contract liability from our GAAP net income (loss). For more information and for a reconciliation of non-GAAP net loss to net income (loss), see the section titled “Management’s Discussion and Analysis of Financial Condition and Results of Operations—Non-GAAP Financial Measures.”

Industry Background

AI is the Next Technological Shift

Over the past 50 years, the compute industry has undergone a series of secular shifts, each of which expanded access to compute and transformed global productivity. In the 1990s, the Internet reshaped how people worked, communicated, transacted, and learned, catalyzing new industries and business models. In the 2000s and 2010s, the proliferation of mobile devices and the emergence of cloud computing delivered unprecedented flexibility, scale, and reach, supporting millions of new digital products and experiences. We believe AI represents the next major technological shift—one with the potential to exceed the transformational impact of prior cycles.

In comparison to previous technology shifts, the adoption of AI is astonishing. Its market penetration has occurred multiple times faster than the PC and the cloud. ChatGPT reached 100 million users in less than 2.5
months, more than twenty times faster than Facebook. As of September 2025, ChatGPT reported 700 million weekly active users. !busienss1ba.jpg
According to Pew Research Center, as of June 2025, around 62% of U.S. adults interacted with AI at least several times a week, with 31% doing so almost constantly (at least several times a day), and one-third of U.S. adults under 30 saying they interacted with AI several times a day. Additionally, the Digital Education Council found in 2024 that 86% of higher-education students used AI. According to a McKinsey survey in 2025, the share of respondents saying their organizations are using AI in at least one business function has increased since their research last year: 88% reported regular AI use in at least one business function in 2025 compared with 78% a year ago. In the third quarter of 2025, Gallup reported daily use of AI in the workplace had more than doubled in the past 12 months, with 10% of U.S. employees reporting they used AI in their daily roles. The strong rate of AI adoption is driven by the simple fact that AI has transitioned from novelty to necessity and is now used across consumer and enterprise domains. Individuals and organizations rely on AI to solve problems, build products, accelerate research, improve patient outcomes, enhance decision-making, streamline operations, enable innovation, and deliver personalized experiences.
The rise of AI depends on massive computational resources. This is where Cerebras fits in. !cerebras-drsx1219b.jpg
Inference is Driving AI Compute Demand as Frontier AI Models Grow More Capable

AI is composed of two stages: training and inference. Training is the process of creating and teaching the AI model; inference is the process of using the model to generate responses. Early progress in AI came primarily from training larger models. Larger models, which used more compute during training, improved AI’s accuracy. In this training-centric era, inference was straightforward and required little computation; it simply generated answers from a trained model in a single step.

Today, AI has entered a new era centered on inference. New techniques have emerged that make models smarter as they are being used. This approach—called “inference-time compute” or “test-time compute”—has become the dominant mode of inference. !cerebras-drsx1219c.jpg
Instead of depending primarily on the trained model for accuracy, today’s frontier models—such as OpenAI’s GPT-5.4, Anthropic’s Claude Opus 4.7, and Google’s Gemini 3.1 Pro—perform substantial computation during inference to simulate reasoning. These models effectively “think through” the problem: planning steps, checking their own work, and refining responses before delivering a final, higher-quality result. These additional steps use substantially more compute during inference, while producing more accurate answers. !business4ba.jpg
These reasoning capabilities have fundamentally changed how people use AI. Inference is no longer limited to answering questions; modern AI applications now perform actions on behalf of their users. They can directly book travel itineraries, code full web applications from scratch, help customers apply for mortgages, automatically analyze legal contracts for discrepancies, process insurance claims, and more. As a result, demand for AI inference has surged alongside the adoption of these smarter reasoning models that leverage more inference-time compute.
Ultimately, inference compute demand is driven by the compounding effect of three forces: the number of users, the frequency of use, and the compute per use. Each of these forces is growing at an extraordinary rate, producing a geometric expansion of demand for inference and its underlying compute. !business5da.jpg
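The multiplicative nature of these three forces can be illustrated with a short sketch. The baseline and growth factors below are hypothetical round numbers chosen only to show the arithmetic, not projections:

```python
# Inference compute demand is the product of three forces:
# users x frequency of use x compute per use.
# All values below are illustrative assumptions, not forecasts.
users, freq, compute_per_use = 1.0, 1.0, 1.0   # normalized baseline
g_users, g_freq, g_compute = 1.5, 1.3, 2.0     # hypothetical annual growth factors

for year in range(1, 4):
    users *= g_users
    freq *= g_freq
    compute_per_use *= g_compute
    demand = users * freq * compute_per_use
    print(f"year {year}: {demand:.1f}x baseline inference compute")
```

Because the forces multiply rather than add, even modest growth in each one compounds into geometric growth in total demand: in this sketch, demand grows 3.9x per year even though no single force grows more than 2x.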
Reasoning during inference delivers smarter AI responses but requires significantly more compute. As models become more capable, users rely on them for increasingly ambitious tasks, further driving compute needs. Today’s workloads—including video generation, deep research, and long-form analysis—can require many orders of magnitude more compute than answering basic questions.

Reasoning Makes Inference Speed a Necessity

Speed enables reasoning models to deliver more accurate answers faster, reducing the frustration created by forcing customers to wait for answers. Reasoning changes the shape of inference. Reasoning systems do not complete tasks in a single request-and-response step. They execute a sequence of sequential and dependent steps—such as planning, refinement, and verification—until the task is completed. Each step consumes compute and contributes to total completion time. Slower execution compounds across steps, so the task takes much longer to complete; faster execution at each step shortens the overall time to answer.

Complex tasks (harder problems) are more valuable to solve, but they require the reasoning system to go through a longer sequence of steps. This amplifies the benefit of speed and the penalty for being slow. Speed enables more accurate answers to harder problems in less time. Speed expands the range of tasks that AI can address, thereby
broadening its addressable market. Conversely, slow AI produces longer wait times, making many applications impractical to deploy. !business6da.jpg
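Because the steps in a reasoning chain are strictly sequential, per-step latency adds directly into total completion time, and a uniform per-step speedup cuts completion time by the same factor. The step counts and latencies below are hypothetical, chosen only to illustrate why longer chains amplify the value of speed:

```python
# Total time for a strictly sequential reasoning chain:
# each step depends on the previous one, so latencies add.
# Step counts and per-step latencies are illustrative assumptions.

def completion_time(step_latency_s: float, n_steps: int) -> float:
    """Seconds to finish a chain of dependent reasoning steps."""
    return step_latency_s * n_steps

simple_task = completion_time(2.0, 5)           # short chain: 5 steps at 2 s each
hard_task = completion_time(2.0, 50)            # harder problem: a 50-step chain
hard_task_fast = completion_time(2.0 / 15, 50)  # same chain with 15x faster steps

# The harder the task, the more absolute time a per-step speedup saves.
print(simple_task, hard_task, hard_task_fast)
```

In this sketch the 50-step task drops from 100 seconds to under 7, while the 5-step task was never painful to begin with: the speedup matters most exactly where the work is most valuable.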
Speed enables AI to address more complex, higher-value tasks. This, in turn, brings new users to AI, who use AI more frequently and to solve more complex problems. And herein is the flywheel: more users, more frequent use, and more complex use cases all increase AI compute usage.

Fast Inference Enables the Next Generation of AI Workloads, With Coding as a Clear Early Signal

As AI uses more compute to tackle increasingly complex problems, a fundamental challenge emerges: everyone wants a better response for complicated requests, but nobody wants to wait to get a response. We are solving this problem. Cerebras Inference delivers answers up to 15 times faster than leading GPU-based solutions as benchmarked on leading open-source models. This speed advantage enables our solutions to deliver real-time performance for the most advanced reasoning models, allowing complex tasks to be completed more accurately and quickly.
As discussed, fast and accurate results delight users, drive engagement, and unlock new classes of applications and business opportunities. Faster AI compute produces answers in less time, which drives more frequent usage, new types of applications, and therefore greater compute demand. These dynamics are already visible in the market. Three fast-growing categories—software development, deep research systems, and voice applications—illustrate the importance of speed. For these and many other similar applications, inference speed is a necessity.

•AI-powered software development provides a clear early signal. Coding with AI is interactive and sensitive to delay. Delay impairs a developer’s train of thought, and as a result, developers are more likely to abandon tools that slow them down. AI can now write code. It reasons over large codebases and then uses the multi-step process previously described to generate, modify, and run code. Inference speed has become a primary determinant of adoption. Products such as Cursor, Claude Code, Codex, Windsurf, and GitHub Copilot act as autonomous collaborators—planning, editing, and validating code across repositories in response to natural-language instructions from developers. These systems perform complex, multi-step tasks requiring continuous reasoning and long-context memory. Fast inference is the only way to avoid frustrating wait times. AI-native coding products barely existed in 2023, yet they collectively generated billions in ARR in 2025 and continue to accelerate. For example, AI coding applications like Lovable and Cursor are among the fastest growing developer tools in history. AI coding agents have become central to how software is written. Anthropic’s Claude Code is already at a reported annual revenue run rate of $2.5 billion as of February 2026; Claude Code’s creator said in January 2026 that he writes 100% of his code with AI.
In addition, professional developers report that 42% of code is now AI-generated or assisted, according to a survey conducted by SonarSource in October 2025. By droves, software engineers are shifting from writing code to supervising fleets of AI coding agents. Faster inference means more productive engineers.

Coding demonstrates a fundamental pattern in reasoning systems: wherever AI involves continuous interaction, multi-step reasoning, and sensitivity to response time, speed determines utility. Those same conditions are present across a growing set of AI applications.

•Deep research systems apply similar reasoning to knowledge work, performing multi-step retrieval and synthesis across large datasets to deliver structured insights in real time. Platforms such as AlphaSense rely on real-time inference to sift through a higher volume of documents to help analysts and enterprises find answers faster.

•Voice applications include conversational agents, avatars, and digital twins from companies like Meta, Tavus, and OpenCall. Real-time performance is critical for voice: sub-second latency makes interactions feel natural and gives these systems time to call tools or retrieve data mid-conversation for richer, contextual responses.

Together, we believe these applications lead the way in the next phase of AI adoption: systems that think, act, and interact continuously, driving sustained demand for faster and more efficient compute infrastructure. In this environment, speed directly shapes usage. Long wait times limit real-time applications, stunt the diffusion of AI capabilities, and can inhibit new markets and applications. As a result, slow systems lose users, limit capability, and stall innovation, while faster systems are used more often and for more demanding workloads. We believe speed is a defining advantage in modern AI. Reasoning is intelligence, and intelligence compounds with speed.
We believe the ability to deliver fast, scalable reasoning will define not only the next decade of technology, but also shape the future of how people work, create, and interact.
Our Market Opportunity

We address a large and rapidly growing market for AI infrastructure. According to Dell’Oro Group, worldwide data center infrastructure capital expenditures are expected to grow from $679 billion in 2025 to $1.7 trillion by 2030, representing a 21% CAGR. AI infrastructure increasingly dominates global IT spending.

Training

The AI training infrastructure market is expected to grow from approximately $185 billion in 2025 to $380 billion by 2029, a 20% CAGR, according to Bloomberg Intelligence. This market is characterized by large-scale capital buildouts as hyperscalers, foundation model labs, enterprises, and Sovereign AI initiatives invest in developing foundation models and fine-tuning capabilities. We have demonstrated strong success in this market, most notably through the systems we have deployed and the models we have trained for G42, MBZUAI, GlaxoSmithKline, Sandia National Laboratory, the U.S. Department of Defense, and other training customers.

Inference

Based on Bloomberg Intelligence data, our addressable market within the AI inference market is expected to grow from approximately $66 billion in 2025 to $292 billion by 2029, a 45% CAGR. The AI inference market scales with the number of AI users, a number we expect to converge with the global internet user base over time. Inference compute can be accessed at the hardware level through on-premises deployments and at the cloud/API level, measured in tokens served. We serve both through Cerebras AI supercomputers, which are deployed directly in customer data centers, and Cerebras Inference Cloud, which addresses the token-based API market. The token-based market is expanding rapidly. In October 2025, Google reported Gemini was serving 1.3 quadrillion tokens per month—a market that was effectively zero before the launch of ChatGPT in late 2022. Cerebras Inference Cloud directly serves this market.
Because the AI compute we provide is general purpose, we serve a wide range of models used across verticals—consumer applications, code generation, enterprise AI, and more. The same infrastructure that powers a chat application can power a financial model or a coding agent. The combined market for AI training infrastructure and our addressable market within AI inference is estimated to be $251 billion in 2025 and is expected to grow to $672 billion by 2029—a 28% CAGR, according to Bloomberg Intelligence. This estimate indicates that AI inference will grow more than twice as fast as AI training infrastructure through 2029, and we expect AI inference to represent an increasing share of total AI infrastructure demand as deployed models scale to serve global user bases. With the fastest inference platform on the market, as benchmarked by Artificial Analysis, and a proven track record in large-scale training, we believe we are well-positioned to capture growth across both markets.

Our Solution

We are building the fastest commercial AI infrastructure in the world. Our AI supercomputers are purpose-built to make AI fast. They are built for the latency-sensitive, reasoning workloads that define modern AI. Our full-stack hardware and software platform is designed to complete AI tasks significantly faster and more efficiently than comparable GPU-based solutions, whether deployed on premises, through the Cerebras Cloud, or via partner clouds.
At the core of our solution is the Cerebras WSE, the largest and fastest AI processor ever brought to market in high volumes. The WSE combines 900,000 compute cores, 44 gigabytes of on-chip memory, and 21 petabytes per second of memory bandwidth on the largest commercial chip ever built. The WSE-3 is 58 times larger than NVIDIA’s B200 chip. The WSE has 19 times more transistors, 250 times more on-chip memory, and 2,625 times more memory bandwidth than NVIDIA’s B200 package, which contains two individual chips.
Each WSE is housed inside a Cerebras CS-3 system, our fully integrated AI compute system that includes advanced cooling, power delivery, and interconnect technology. Multiple CS-3 systems connect to form Cerebras AI supercomputers deployed on premises in customer data centers and in the cloud.
*cerebras-drsx12191a.jpg*
Our software platform makes wafer-scale computing simple to use. It spans the full AI life cycle—from model programming and compilation, to training and inference, to cluster orchestration.

•Cerebras Compiler compiles PyTorch models directly to the WSE, eliminating the need for CUDA or distributed programming and providing an easy-to-use developer experience.

•Cerebras Inference Serving Stack delivers ultra-low-latency inference with industry-standard APIs for production use.

•Cerebras Cluster Manager orchestrates multiple CS-3 systems into one logical AI supercomputer, handling scheduling, telemetry, and health monitoring at scale.

Because every layer is co-designed with our hardware, customers can scale training and inference across frontier-size models without rewriting code or managing distributed infrastructure.
Our technology is designed to be delivered in the form that best accelerates a customer’s AI roadmap. Our platform is designed for flexibility—meeting organizations where they are, and scaling with them as their ambitions grow.

•Cerebras Cloud: Provides high-performance AI compute through a simple API, allowing customers to serve open-source, fine-tuned, or proprietary models with production-grade reliability.

•Partner Clouds: Offer seamless access to Cerebras systems through leading cloud providers including AWS Marketplace, Microsoft Marketplace, IBM watsonx Model Gateway, Vercel AI Gateway, OpenRouter, and Hugging Face, extending our reach across the global AI ecosystem.

•On-Premises Deployments: Deliver fully integrated AI supercomputers and install them directly in customer environments, giving enterprises, Sovereign AI initiatives, national laboratories, and defense organizations complete control over data, performance, and operations. We also operate and manage large clusters of AI supercomputers for some of our customers.

•Hybrid Deployments: Enable customers to move fluidly between on-premises and cloud environments through a unified software stack, maintaining consistent performance and workflows as they scale.
Customers choose the consumption model that fits their needs—buying inference by the token, running training workloads by the week or month, reserving dedicated capacity for long-term production deployments, or purchasing on-premises infrastructure. !business15ca.jpg
Our AI experts accelerate customers’ ability to take AI applications from concept to production. With deep experience training and deploying frontier-scale models across modalities, our team helps customers select model architectures, prepare large-scale training data, and train and fine-tune models for production. We also design optimized deployments for customers—training draft or speculative decoding models and tuning configurations to balance latency, throughput, and cost for each application. We excel at turning AI ambition into business results. By augmenting customer teams with advanced AI expertise, we help customers design, build, and deploy custom models that often outperform the existing state of the art, giving customers a meaningful competitive advantage.

Together, our hardware platform, unified software, and AI model services form an integrated platform that becomes increasingly valuable over time. As customers build models, workflows, and applications on Cerebras, the platform can become deeply embedded in their AI development and operations, leading to durable relationships.

What This Means for Customers

Our customers, which include hyperscalers, foundation model labs, AI-native and digital-native businesses, enterprises, and leaders of Sovereign AI initiatives, complete tasks dramatically faster than on GPU-based systems. Faster reasoning improves user experience, increases engagement, accelerates iteration, and enables new classes of AI applications. This speed advantage compounds in production environments, where reduced latency and shorter training cycles have meaningful business impact.

Key Customer Benefits

Through our full-stack AI offerings, we deliver tangible improvements across four key dimensions that define AI value in the real world: speed, quality, cost, and simplicity.
Our systems achieve dramatically faster inference than GPU clusters, enabling applications such as real-time coding agents, nearly instant deep research, and digital twins that were previously impractical or impossible. Customers describe the leap in inference speed as akin to going from dial-up to broadband—an advancement that redefines what AI can do. New classes of products that customers have built and use daily with Cerebras include:

•Real-time coding agents: Copilots that read, write, and debug code nearly instantly—turning AI into an interactive programming partner.

•Nearly instant deep research agents: Systems that analyze thousands of documents in seconds, accelerating market, scientific, and policy research.

•Digital twins: Lifelike AI personas that think, speak, and react in real time. With Cerebras, avatars respond without awkward delays and carry conversations that are more natural and interactive.
On GPUs, latency forces a tradeoff between speed and intelligence. Developers often have to limit the accuracy of a response in order to have it delivered in a reasonable amount of time. Our offerings are designed to remove this tradeoff. Customers run long-context, multi-step reasoning models interactively, delivering higher-quality results without comparable delays. Our inference speed allows developers to use substantially more reasoning tokens while maintaining the same end-to-end task completion time. We turn quality from a limitation into a feature; customers can now serve some of the largest models at full strength, in nearly real time.
Moving data from one chip to another is one of the most power-intensive parts of AI compute. And power is the largest contributor to operating expenses in AI compute. Our wafer-scale architecture keeps data on-chip, reducing data movement significantly, which in turn reduces power consumption. It also eliminates layers of costly and complex networking equipment. By way of comparison, moving a bit of data on the WSE-3 consumes a fraction of the energy required to move the same bit of data over GPU interconnects. Because our performance advantages stem from fundamental architectural efficiency, we expect these benefits to endure across future generations that continue to build on our wafer-scale technology.
We eliminate the complexity of distributed programming across GPU clusters, which is one of the most challenging aspects of AI deployment. Even extremely large models run without code changes, and scale automatically and seamlessly across clusters of Cerebras systems. Because training, fine-tuning, and inference all occur on a unified platform, customers avoid the operational overhead of moving between different compute environments, enabling inference, fine-tuning, and training from scratch on the same cluster. Cerebras Compiler’s PyTorch integration makes model customization and compilation simple, the Inference Serving Stack enables deployment of frontier-sized models in minutes, and our AI experts support customers throughout the model life cycle to accelerate results. Cerebras’s deployment platform also allows customers to run models that were not trained on Cerebras hardware and still achieve exceptional inference performance.
Factors Preventing GPUs From Being Faster at AI

AI inference speed is limited by how fast data moves between memory and compute; this rate is called memory bandwidth. When a large language model generates a response, it predicts one word (token) at a time. Each token generated requires a large amount of data—all of the model weights—to be moved from memory to compute. Because each token depends on the previous one, this work cannot be parallelized, making AI inference speed fundamentally limited by memory bandwidth. !business16aa.jpg
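This bandwidth limit can be made concrete with a back-of-envelope sketch: if every generated token requires streaming all model weights from memory, then tokens per second cannot exceed memory bandwidth divided by the size of the weights. The model size (a hypothetical 70-billion-parameter model in 16-bit weights) and the 8 TB/s off-chip figure below are illustrative assumptions; the 21 PB/s figure is the on-chip bandwidth discussed elsewhere in this section:

```python
# Bandwidth-limited ceiling on single-stream token generation:
# tokens/sec <= memory bandwidth / bytes of model weights.
# Model size and the off-chip bandwidth figure are illustrative assumptions.

def max_tokens_per_sec(bandwidth_bytes_per_sec: float,
                       n_params: float,
                       bytes_per_param: float = 2.0) -> float:
    """Upper bound on decode speed when weights must stream once per token."""
    weight_bytes = n_params * bytes_per_param
    return bandwidth_bytes_per_sec / weight_bytes

# Hypothetical 70B-parameter model in 16-bit weights (140 GB of weights):
offchip = max_tokens_per_sec(8e12, 70e9)    # ~8 TB/s off-chip memory (assumed)
onchip = max_tokens_per_sec(21e15, 70e9)    # 21 PB/s on-chip memory bandwidth
print(f"off-chip ceiling: {offchip:.0f} tok/s, on-chip ceiling: {onchip:.0f} tok/s")
```

The ratio of the two ceilings equals the ratio of the two bandwidths, which is why memory bandwidth, rather than raw compute, sets the speed limit for this workload.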
GPUs were designed for graphics workloads. Graphics can tolerate slower memory movement because the highly parallelizable workload allows the GPU to keep many compute cores busy while more data is moved over, masking the memory latency. These workload characteristics made high-capacity off-chip memory, placed far away from the compute processor, a strong architectural choice for graphics. But the tradeoff of off-chip memory is speed. Off-chip memory connects to the compute processors through a narrow data “pipe” with low memory bandwidth. This was the right tradeoff for graphics, but it creates a critical limitation for AI speed, where data movement is the bottleneck. The result is a GPU “memory wall” for AI, where a GPU’s memory bandwidth cannot keep up with compute for AI workloads, and thereby limits the speed with which AI answers can be generated. These are not software issues. They are the physical limits of the memory + GPU architecture. In order to be fast, we believe AI requires a fundamentally different architecture that solves the memory wall. Such architecture must provide vastly more memory bandwidth to enable data to move more quickly between memory and compute, which can thereby accelerate the generation of AI responses.
Our Technology

Wafer-Scale Integration: The Foundation

Cerebras started with a simple question: How could a new class of processors be designed with the singular goal of solving the compute challenges presented by AI? Beginning with a clean slate, how could we avoid the trade-offs made for graphics and other workloads to ensure that every transistor, every single part of the processor, was optimized for the requirements of AI?

Our answer is wafer-scale integration. Wafer-scale integration enabled us to use vastly faster memory and to avoid the switches, routers, and associated complexity necessary to link together thousands of GPUs. SRAM is the fastest memory to date. But existing industry players could not use as much SRAM because they could not fit it on their chips. By building a chip 58 times larger than NVIDIA’s B200 chip, we maximize fast, on-chip SRAM and get the benefits of both worlds: (1) significantly more memory capacity, because we built such a big chip, and (2) the massive bandwidth provided by SRAM. Wafer scale enables us to deliver a solution with 2,625 times more memory bandwidth than NVIDIA’s B200 package, which is how we are able to deliver inference at extremely fast speeds.

The second fundamental advantage provided by wafer-scale integration is that it keeps the wafer intact. Instead of building a wafer, cutting it into dozens of small GPUs, and using expensive, power-hungry switches and complex cables to wire them back together, our solution consists of one processor that is the size of an entire silicon wafer. This reduces the need for, cost of, managerial complexity of, and power drawn by much of the networking stack required to build a GPU solution. Our wafer-scale solution unifies compute, memory, and communications on the same piece of silicon, eliminating the data-movement bottlenecks that slow GPU systems.
The Underpinnings of Wafer-Scale Integration We solved a problem that flummoxed the compute industry for its entire history: how to build chips the size of full silicon wafers. The advantages of size were well known. But no company had ever brought a wafer-scale solution to market. To make wafer-scale commercially viable, we invented and productized two foundational semiconductor technologies: •Multi-die interconnect: Traditionally, die—regions of silicon containing an integrated circuit—are individually stamped onto a silicon wafer and then cut up (“diced”) into small, separate chips. Prior to Cerebras, the largest known chip was about 840 mm². We invented technology to interconnect these otherwise independent die at the wafer level, at the semiconductor fabrication plant. The inter-die connectivity uses a proprietary cross-reticle connection that is integrated into our overall fabrication process. This allowed us to use existing processes to do something we believe had never been done before—namely, deliver a wafer that communicates across the entire 46,225 mm² of silicon and therefore is a single massive processor. •Fault-tolerant architecture: A primary factor in the commercial viability of a semiconductor is its yield. Flaws are present in wafers, and large chips have a higher probability of containing such a flaw. Traditionally, chips with flaws have been thrown out or “down-binned,” that is, sold as a less capable part. Thus, using traditional techniques, larger chips have lower yield and are therefore more expensive. We designed our architecture to absorb and route around defects using redundant building blocks—similar to a hyperscale data center, but on the wafer. Flaws are designed to be recognized, shut down, and routed around. Redundant building blocks are used to re-form a logically functional whole. This approach had been
previously used in memory manufacturing to achieve near-perfect yield but, to our knowledge, had not been used to build processors prior to Cerebras. These innovations made wafer-scale computing commercially viable for the first time in semiconductor history.
*business17ga.jpg*
The Cerebras Chip, System, and Software Cerebras delivers a full-stack AI infrastructure solution. It contains innovations at each layer. At the base is the Cerebras WSE, our wafer-scale processor. Each WSE is integrated into a CS-3 system with advanced power delivery, cooling, and system management. Multiple CS-3 systems link together to form Cerebras AI supercomputers that are deployed in data centers around the world. Lightweight management and orchestration software operate these systems as one logical computer, while our training and inference platforms make it simple to run large models at scale. Because each layer is designed with the
others in mind, the platform delivers consistent performance, reduced infrastructure complexity, and faster time to deployment and results.
At the heart of our platform is the Cerebras WSE, the world’s largest and fastest commercialized AI processor. A single WSE replaces an entire cluster of GPUs by combining 900,000 compute cores and 44 gigabytes of on-chip memory on one piece of silicon, with 21 petabytes per second of on-chip memory bandwidth. The WSE-3 is 58 times larger than NVIDIA’s B200 chip. The WSE-3 also has 19 times more transistors, 250 times more on-chip memory, and 2,625 times more memory bandwidth than NVIDIA’s B200 package, which contains two individual chips. We believe our architecture solves for memory bandwidth, which is a primary bottleneck in modern AI. By keeping compute and memory on a single chip, WSE-3 eliminates the off-chip data transfers that dominate GPU latency and power consumption. As a result, our systems are faster, simpler to program, and more power-efficient than GPUs on AI tasks. Fast inference depends on memory bandwidth. Below, we show the traditional GPU architecture with HBM, a type of off-chip DRAM, and a GPU. For the GPU to generate a single word based on an inference prompt for a 70 billion parameter model, it must move more than 140 gigabytes of data from memory to compute. That is roughly the equivalent of 100 hour-long HD movies. This is to generate a single word. And this must be done again and again for each word in sequence.
*business4aa.jpg*
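As a quick sanity check of the figures above, the arithmetic works out as follows. Both constants below are assumptions for illustration: the text does not specify the weight precision or the size of an HD movie.

```python
# Back-of-envelope check of the data volume quoted above. The weight
# precision (16-bit) and the per-movie size (~1.4 GB per hour of HD video)
# are illustrative assumptions, not figures stated in the text.
params = 70e9            # 70 billion parameters
bytes_per_param = 2      # assumed 16-bit (2-byte) weights
bytes_per_token = params * bytes_per_param   # bytes read per generated word
gb_per_token = bytes_per_token / 1e9         # -> 140.0 GB
movies = bytes_per_token / 1.4e9             # -> ~100 hour-long HD movies
```

Under these assumptions the result matches the text: about 140 GB, or roughly 100 hour-long HD movies, moved per generated word.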
The speed of generating a response is limited by the rate at which data can move from memory to compute. In the figure below, we show the underpinning of our performance advantage. We have a 2,625 times larger pipe
between memory and compute. More data can move through our pipes, meaning we generate “words” (tokens) much more quickly.
*business5ba.jpg*
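When generation is bandwidth-bound, the achievable token rate scales directly with the width of that pipe. A minimal sketch, using an illustrative (not measured) baseline bandwidth:

```python
# If every generated token requires moving the full model from memory to
# compute, memory bandwidth caps the token rate. The baseline bandwidth
# below is an illustrative assumption, not a vendor specification.

def max_tokens_per_sec(bandwidth_bytes_per_s, bytes_per_token):
    """Upper bound on serial token generation when data movement is the bottleneck."""
    return bandwidth_bytes_per_s / bytes_per_token

bytes_per_token = 140e9                                   # 70B params x 2 bytes (assumption)
baseline = max_tokens_per_sec(8e12, bytes_per_token)      # hypothetical off-chip memory pipe
wafer = max_tokens_per_sec(2625 * 8e12, bytes_per_token)  # a 2,625x wider pipe
# At the same per-token data volume, the token-rate ceiling tracks the
# bandwidth ratio one-for-one.
```

Whatever the absolute baseline, a 2,625-times-wider pipe raises the ceiling on tokens per second by the same 2,625-times factor.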
The WSE-3 is deployed inside the CS-3 system, a data center-ready appliance engineered to support wafer-scale operation and integrate seamlessly into enterprise and Sovereign AI environments. The CS-3 provides the power delivery, cooling, networking, and system management required to operate a wafer-scale processor reliably and at scale. Multiple CS-3 systems can be connected to form Cerebras AI supercomputers, which function as a single logical computer for large-scale training and inference.
*businessart2ea.jpg*
Our software platform extends our hardware advantage by making wafer-scale computing simple to use and highly efficient. Our software spans the full AI life cycle—from programming and compiling models, to training and inference, to orchestration across large clusters. Each layer is co-designed with our hardware to deliver maximum performance with minimal developer effort.

Model Programming and Compilation. Our Cerebras Compiler (CSoft) makes it simple to run large language models on our systems. CSoft is core to our solution and provides intuitive usability for developers. CSoft eliminates the need for low-level programming in CUDA or other hardware-specific languages. For both training and inference, our CSoft platform enables developers to easily represent and map large language models onto the Cerebras Wafer-Scale Engine using familiar frameworks such as PyTorch. Starting from a user’s PyTorch model, the CSoft graph compiler automatically maps model operations to the WSE, creating an optimized executable without user-level intervention. CSoft allows machine-learning users to accelerate training and inference on models of any size, scaled across any configuration of the Cerebras AI supercomputer, just by changing one number in a configuration file, simulating a single-device programming experience without the complexities of distributed programming. This drastically reduces operational overhead and shortens developer iteration time, accelerating business impact.

Inference Serving Stack. Our Cerebras Inference Serving Stack manages model hosting, scaling, and request routing across Cerebras systems and clusters. It provides real-time observability and load balancing, enabling ultra-low-latency inference for production workloads. Customers can serve both open-source and proprietary models through standard APIs, including industry-standard endpoints, with consistent performance across on-premises and cloud deployments.

Orchestration and Life Cycle Management.
Our Cerebras Cluster Manager orchestration software unifies multiple CS-3 systems into a single logical computer, managing scheduling, telemetry, and health monitoring. Built-in observability of all hardware and software components is designed to ensure reliability and high utilization across on-premises and cloud environments. This orchestration layer also allows customers to switch seamlessly between training and inference on the same systems. With simple commands, CS-3 systems can be reconfigured from large-scale model training to real-time inference, driving utilization and shortening deployment cycles. Together, these components form a unified software platform that integrates seamlessly with our hardware to deliver a complete, end-to-end AI computing system that can be deployed on customer premises or in the cloud.
Because our software and hardware are co-designed, customers can train and/or deploy frontier-scale models with consistent and simple workflows—without rewriting code or managing distributed infrastructure.
*business1ba.jpg*
Technology and Roadmap Wafer-scale integration is not a single achievement—it is a collection of technologies and processes with a multi-generation roadmap. Each successive WSE generation (from 16 nanometer to 7 nanometer and now to 5 nanometer) has delivered substantial improvements in performance, memory bandwidth, efficiency, yield, and manufacturability, without requiring changes to how developers program or deploy models. Competing approaches—such as multi-die packages and chiplet-based designs—remain constrained by the physics of small chips and limited off-chip memory bandwidth. Even with advances in packaging technology, these architectures cannot match the bandwidth, locality, or simplicity of computation that results from keeping compute and memory together on a single piece of silicon. Our roadmap builds on the advantages of wafer-scale integration. We intend to invest heavily in research and development to continue to expand on-chip memory and memory bandwidth, improve interconnect density, and leverage advancements in process technology to increase transistor counts and reduce power in future WSE generations. As a result, we expect that future generations of WSEs will have faster compute, and more and faster memory and communication onto and off of the wafer. Because the WSE presents itself as a single programmable device, these improvements compound naturally in both performance and simplicity, without introducing the complexity of massive distributed compute clusters of GPU solutions. The same architectural foundation also supports long-term extensibility across emerging AI workloads. As models grow in size, increase in reasoning depth, and shift toward real-time, multi-step interactions, they place even greater emphasis on memory bandwidth and locality—all areas where wafer-scale architectures possess inherent, structural advantages.
Our roadmap includes development of a disaggregated inference-serving solution. Inference disaggregation is a technique that separates AI inference into two stages: prompt processing, or “prefill,” and output generation, or “decode.” These two stages have different computational characteristics. Prefill is natively parallel and requires very little memory bandwidth. Decode, on the other hand, is inherently serial and memory bandwidth intensive. Decode is typically the bottleneck: it dominates total inference time and defines the speed of the user experience. Cerebras’s wafer-scale engine would be the fastest at both prefill and decode, but in relative terms, it is much faster at decode. Our wafer-scale architecture and ultra-high memory bandwidth deliver faster output token generation where speed matters most. Disaggregated inference would allow Cerebras to operate alongside other architectures, serving as the high-performance engine for decode while other systems handle prefill. We believe wafer-scale computing positions us as a leader in AI infrastructure, providing a long-term technology roadmap designed to scale with the requirements of modern and future AI systems.

Competitive Strengths

1. Our culture of fearless engineering has enabled us to do pioneering engineering work; we are the only company ever to deliver a wafer-scale processor to market. Our culture of fearless engineering enables us to solve problems that others failed to solve or were afraid to tackle. As a result, we have solved problems that had remained unsolved for the entire 75-year history of the compute industry, namely wafer-scale integration. A culture of fearless engineering is a foundation for our continued innovation.

2. We have durable advantages rooted in our unique silicon architecture.
We believe wafer-scale integration is a fundamental advantage in AI compute, enabling large amounts of high-speed memory and hundreds of thousands of compute cores to reside close together on the same piece of silicon. We have now delivered three generations of wafer-scale processors at the 16, 7, and 5 nanometer nodes. We believe these will be the foundation of our future generations of silicon.

3. We are an end-to-end systems company. From inception, we co-designed our wafer-scale engine, our CS systems, and our software stack for optimal AI performance. We were among the first in the AI community to bring water cooling to the processor, enabling us to run colder and extend our processors’ lifetime. The co-design of processor, system, and software is a meaningful competitive advantage.

4. We are building the fastest inference infrastructure in the world. On Cerebras infrastructure, AI responses are up to 15 times faster than leading GPU-based solutions as benchmarked on leading open-source models. Third-party benchmarker Artificial Analysis wrote in August 2024, “Cerebras Inference is achieving the fastest speeds we have ever benchmarked on Artificial Analysis.” Speed is customer experience. It enables more accurate answers in less time. It enables applications that require real-time interaction such as coding agents, research agents, and voice interfaces. Speed changes the way companies design their experiences; it changes team structures and behaviors; it changes expectations and the perception of what is possible, which can make returning to slower speeds more painful.

5. We are serving some of the largest and most demanding customers in the AI market. We are engaged with customers such as OpenAI, the world’s leading foundation model lab, and AWS, the world’s leading hyperscale cloud, who have stringent requirements for performance, scale, and reliability.
We offer a full-stack hardware and software platform that can be optimized for each customer’s workloads and paired with AI services in order to deploy and operate high-capacity, production-grade systems without requiring customers to manage complex infrastructure.

6. We operate at massive scale with more than 100 exaflops of deployed compute. In collaboration with our partners, we have trained some of the largest models in the industry, gaining unique experience and providing rare insight. We license space in six data centers in North America, providing geographic
redundancy and regional deployment options for customers with data residency or network time requirements.
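The prefill/decode split described in the roadmap above can be made concrete with a toy latency model. All throughput figures below are invented for illustration and do not describe any actual system.

```python
# Toy model of disaggregated inference: prefill processes the prompt in
# parallel, decode emits output tokens one at a time. The throughput
# numbers are illustrative assumptions, not measurements of any system.

def request_latency(prompt_tokens, output_tokens, prefill_tps, decode_tps):
    """Total time = parallel prefill over the prompt + serial decode of the output."""
    return prompt_tokens / prefill_tps + output_tokens / decode_tps

base = request_latency(2000, 500, prefill_tps=50_000, decode_tps=100)
fast_decode = request_latency(2000, 500, prefill_tps=50_000, decode_tps=1_000)
# base: 0.04 s of prefill + 5.0 s of decode -- decode dominates.
# A 10x faster decode engine cuts the total from ~5.04 s to ~0.54 s,
# while a 10x faster prefill alone would save only ~0.036 s.
```

This asymmetry is why, in a disaggregated design, routing the decode stage to the highest-bandwidth engine yields the largest end-to-end speedup.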
*business2f.jpg*
*business3d.jpg*
*business4d.jpg*
Our Business Model: Make Buying Easy by Reaching Customers Where They Are The AI market is one of the fastest-growing technology sectors in history. Within this rapidly evolving landscape, we engage customers through a combination of direct sales and an ecosystem of strategic partners. Our sales organization, together with our partners’ sales teams, delivers our high-performance AI solutions through multiple consumption models: (i) on premises, (ii) through our own Cerebras Cloud, (iii) via partners’ clouds, or (iv) through hybrid combinations of these approaches. This flexible delivery model allows customers to adopt our technology in the manner that best aligns with their procurement preferences, operational requirements, and infrastructure strategies. Our product portfolio spans on-premises AI supercomputing systems, cloud-based compute for training and inference, and forward-deployed AI services to help customers accelerate the creation and deployment of AI capabilities. On-Premises Solutions Cerebras AI supercomputers support both model training and inference and are deployed directly within a customer’s environment. This deployment model is well suited for customers with regulated and high-security environments that require full control over data, infrastructure, and system behavior. Our on-premises customers include large enterprises, national laboratories, the U.S. Department of Defense, and Sovereign AI initiatives. Commercial Model for On-Premises Deployments On-premises customers procure our AI supercomputers through a traditional purchase-order process with payment received upon delivery or acceptance. Each system combines tightly integrated hardware and software, and the purchase includes a separate renewable software subscription for continuous updates and upgrades, generating a
recurring revenue stream. On-premises deployments are often paired with our forward-deployed AI services, in which we assist customers with data preparation, model architecture design, training management, inference optimization, and, in select cases, ongoing system operations. Cloud Solutions We also provide access to our high-performance compute through Cerebras Cloud and through our partner cloud platforms, which include AWS Marketplace, Microsoft Marketplace, IBM watsonx Model Gateway, Vercel AI Gateway, OpenRouter, and Hugging Face. These offerings enable customers to utilize the full capabilities of our AI supercomputers without incurring the capital expenditures associated with building or maintaining on-premises infrastructure, and without the operational complexity of assembling and managing training or inference software stacks. Provisioning is highly streamlined, allowing customers to begin using our cloud resources within minutes. Cerebras Cloud serves a broad spectrum of users—from individual developers to some of the world’s largest enterprises. Customers run open-source, fine-tuned, and proprietary models for both training and inference workloads. Across all use cases, our cloud offerings provide access to ultra-high-performance AI compute. Customers procure cloud capacity from us and our cloud partners through two primary models: Dedicated Capacity and On-Demand. Dedicated Cloud Capacity Customers can contract for dedicated AI compute capacity for training or inference over defined terms. These contracts are generally structured as take-or-pay commitments, under which customers pay for dedicated compute capacity irrespective of utilization. Dedicated capacity provides assured availability and is well suited for production deployments and large-scale workloads.
Customers in this model include leading hyperscalers, foundation model labs, AI-native and digital-native businesses, enterprises, and Sovereign AI initiatives operating open-source, fine-tuned, or proprietary models. Dedicated capacity contracts also include access to tailored workload telemetry that enables customers to optimize performance on our systems. This deep integration supports long-term engagement and increases platform stickiness. On-Demand Cloud Capacity For customers with variable or unpredictable workload requirements, we offer a consumption-based “pay-as-you-go” option. In this model, customers either purchase tokens—which represent units of compute—as they consume them, or pre-purchase token bundles and draw down their balance as workloads run. Enterprise customers are billed monthly, while individual developers access the service through a self-service portal. The on-demand model allows customers to scale elastically and is particularly effective for dynamic inference workloads. Historically, many customers have begun with on-demand usage and transitioned to dedicated capacity as their workloads expand. Customers and Go-to-Market Strategy Our customers include many of the world’s leading AI organizations. These span frontier model developers, hyperscalers, and AI-native companies, as well as enterprises, research institutions, and national laboratories. Across these segments, customers rely on our solutions to accelerate model development and to deploy AI capabilities at production scale. We go to market through a combination of strategic partnerships, direct sales, channel partnerships, and product-led expansion.
Strategic Partnerships We partner with frontier model labs and hyperscalers to co-develop and deploy AI systems at scale alongside some of the most influential players in the AI ecosystem. OpenAI. We signed the MRA with OpenAI on December 24, 2025. On January 23, 2026, we began delivering capacity to OpenAI, and on February 12, 2026, OpenAI’s Codex-Spark model, powered by Cerebras infrastructure, was made available to the public. Spark is OpenAI’s model designed for real-time coding. Using Cerebras, OpenAI’s customers can translate ideas into working software in seconds, enabling developers to create software at the speed of thought. OpenAI has committed to purchase 750 MW of Cerebras inference compute capacity over the next three years. Our partnership with OpenAI also allows for collaboration and co-design across both frontier model development and hardware architecture. This hardware-software co-development enables OpenAI to design models built for our hardware architecture and Cerebras to evolve hardware design in response to the needs of upcoming frontier model architectures. This creates a continuous feedback loop that can help our systems prepare for the next generation of AI, establishing a structural advantage for Cerebras. AWS. We signed a binding term sheet with Amazon Web Services for AWS to become the first hyperscaler to deploy Cerebras systems in its data centers. Deployment in AWS data centers will require us to meet strict standards for performance, scale, and reliability. Pursuant to the term sheet, we will create a co-designed, disaggregated inference-serving solution that will integrate AWS Trainium3 chips with Cerebras CS-3 systems, connected via high-bandwidth networking, to partition inference workloads across Trainium3 and CS-3. Each system will perform the type of computation at which it most excels.
The approach is expected to deliver 5 times more token throughput in the same hardware footprint, at up to 15 times faster speeds compared to leading GPU-based solutions as benchmarked on leading open-source models. Direct Sales, Channel Partnerships, and Product-Led Expansion Direct Sales. We employ a targeted named-account strategy built on deep technical and commercial engagement. Dedicated account teams work closely with customer executives and their engineering leadership to identify business-critical workloads and then integrate, optimize, and scale deployments of Cerebras systems. This hands-on engagement builds operational trust and frequently results in the expansion of initial projects into multi-system or multi-year commitments. Alongside our direct sales force, we maintain a dedicated team of AI experts. This team provides customers with access to leading AI expertise, so that they are positioned to leverage our technology effectively. Channel and Technology Partnerships. To broaden our market reach, we leverage a diversified network of channel and technology partners. Cerebras solutions are accessible through AWS Marketplace, Microsoft Marketplace, IBM watsonx Model Gateway, Vercel AI Gateway, OpenRouter, and Hugging Face, allowing developers and enterprises to incorporate Cerebras performance seamlessly into existing workflows and deployment environments. Product-Led Growth. Our product-led growth motion introduces developers, startups, and emerging AI organizations to Cerebras through our self-serve inference platform and API. These early interactions often seed future named-account relationships. In addition, partnerships with cloud providers, system integrators, and software platforms that embed Cerebras capabilities into established workflows further expand access and reinforce our enterprise sales motion.
We further extend our presence through integrations with widely used open-source development environments—including Visual Studio Code, Cline, RooCode, and OpenCode—embedding our technology, powered by the
Cerebras Cloud, directly where developers build and iterate. These channels enhance visibility within software-developer communities, foster product-led adoption of our self-serve offerings, and extend the reach of our platform beyond traditional enterprise sales.
*customerspotlight1ca.jpg*
*customerspotlight2e.jpg*
*customerspotlight4f.jpg*
*customerspotlight5g.jpg*
Sales and Marketing Our sales and marketing strategy centers on deep market understanding and customer-centric product development. We leverage our extensive market knowledge, proven track record in delivering large-scale compute solutions, and close customer collaborations to optimize our product roadmap. This is designed to ensure our solutions consistently deliver significant value to our customers. We focus our sales and marketing efforts on industry leaders, specifically large enterprises domestically and abroad with rich data assets. Our customers seek to combine their rich proprietary data with Cerebras’s industry-leading compute and AI expertise to build a durable competitive advantage. Among our customers, word of success travels quickly, and as a result, it is very important to our future that we maintain strong and collaborative relationships and that we invest in the success of our customers. We utilize master purchase agreements, purchase orders, and statements of work to define work scope, price, quantities, delivery terms, warranties, and software subscriptions. We predominantly sell our solutions directly to customers via on-premises hardware or via the cloud, based on a dedicated capacity or consumption-based model. Research and Development We are committed to relentless innovation in both hardware and software to address the rapidly evolving computational needs of AI. We dedicate significant resources to ongoing research and development. We invest heavily in attracting and retaining a global team of highly skilled engineers across dedicated facilities in the United States, Canada, and India. This unwavering commitment to innovation fuels our growth and positions us as a leader in the AI landscape. Manufacturing and Suppliers We operate a fabless manufacturing model, strategically partnering with industry leaders for the production of our AI compute systems, which include ICs, boards, and systems.
Our core manufacturing partners include TSMC, a leading semiconductor foundry, which fabricates our cutting-edge WSEs, and Advanced Semiconductor Engineering (“ASE”), which handles specialized processes, including the deposition of redistribution layers. We manage final wafer packaging, assembly, and testing in our Sunnyvale, California facility. We also use a small number of third parties to manufacture subassemblies and critical components such as printed circuit boards, I/O subsystems, cooling assemblies, and power delivery modules. The manufacturing process is subject to extensive testing and verification. Our supply chain is designed for flexibility and quality as we plan to ramp up production to meet the growing global demand for our AI compute systems. Simultaneously, we are committed to rigorous quality control throughout the manufacturing process to confirm reliability in even the most demanding environments at our customer facilities. Our contract manufacturing partners perform system assembly and extensive testing, and we have verification protocols in place at every stage, including post-assembly. Final system-level burn-in and test is conducted by Cerebras. Our quality processes include high production test coverage, full product traceability, and extensive post-assembly burn-in. We employ a dedicated quality team that continuously monitors feedback during manufacturing and after deployment. This data-driven approach allows us to improve our product quality and reliability, and enables us to meet the stringent demands of our customers worldwide. Intellectual Property Protecting our intellectual property and proprietary technology, including our AI products and solutions, is an important aspect of our business.
We rely on a combination of intellectual property rights, including patent, trademark, trade secret, and other related laws in the United States and internationally as well as confidentiality procedures and contractual provisions to protect, maintain, and enforce our proprietary technology, intellectual
property rights, and brand. Our intellectual property portfolio includes patents, trademarks, proprietary software, and trade secrets. As of March 31, 2026, we owned 96 issued patents and 50 pending patent applications globally. Of these, 50 are issued U.S. patents and 47 are pending U.S. patent applications. Our issued patents and pending patent applications generally relate to the design and fabrication of large-scale (e.g., wafer-scale) processors; the assembly, packaging, and cooling of processors; and hardware and software architectures for accelerated deep learning and inference. The expiration dates of the U.S.-issued patents are between 2038 and 2041, not taking into account any applicable patent term extensions. We routinely review our development efforts to assess the existence and patentability of new inventions. We have a policy of requiring employees and consultants to execute confidentiality agreements upon the commencement of an employment or consulting relationship with us. Our employee and independent contractor agreements also require relevant employees and independent contractors to assign to us all rights to any inventions made or conceived during their employment or engagement with us. In addition, we typically require individuals and entities with whom we discuss potential business relationships to sign non-disclosure agreements that contain customary confidentiality provisions. Competition We offer a purpose-built AI compute platform. Our hardware primarily competes against solutions from NVIDIA Corporation, Advanced Micro Devices, Inc., and Intel Corporation, as well as AI accelerators developed by hyperscalers and private companies. We also compete against full-service cloud service providers such as Amazon.com, Inc. (AWS), Microsoft Corporation (Azure), Alphabet Inc. (Google Cloud Platform), and Oracle Corporation, as well as AI-optimized specialized clouds such as CoreWeave, Inc. and other neo-clouds.
We believe that our ability to remain competitive will depend on how well we are able to anticipate the features and functions that customers will require and whether we are able to deliver consistent volumes of our products and services at acceptable levels of quality and at competitive prices. We expect competition to increase from both existing competitors and new market entrants with products that may be lower priced than ours or may provide better performance or additional features not provided by our products and services. In addition, it is possible that new competitors or alliances among competitors could emerge and acquire significant market share. Some of our competitors have greater marketing, financial, distribution, and manufacturing resources than we do and may be better able to adapt to customer or technological changes. We expect an increasingly competitive environment in the future. Human Capital As of December 31, 2025, we had 708 employees, including 426 in the United States, with the remainder located internationally, including in Canada and India. We maintain a full-time workforce and supplement our workforce with contractors and consultants. To our knowledge, none of our employees are represented by a labor union or party to a collective bargaining agreement. We consider our relationships with our employees to be good. Our human capital resources objectives include, as applicable, identifying, recruiting, retaining, incentivizing, and integrating our existing and new employees. The principal purposes of our equity incentive plans are to attract, retain, and reward personnel through the granting of stock-based compensation awards in order to motivate these individuals to perform to the best of their abilities, enabling us to achieve our objectives.
Facilities

Our corporate headquarters is located in Sunnyvale, California, where we lease approximately 68,000 square feet pursuant to a lease agreement that expires in November 2027, subject to the terms thereof. We lease additional facilities in Canada and India for research and development.
Table of Contents
We also enter into agreements for offsite colocation facilities to house and operate our AI supercomputers. We enter into these agreements for our own corporate purposes as well as on behalf of our customers. Currently, these data center facilities are in California, Oklahoma, and Canada. We believe that our facilities are suitable to meet our current needs. We intend to expand our facilities or add new facilities as we grow, and we believe that suitable additional or alternative spaces will be available on commercially reasonable terms, if required.

Government Regulations

We are subject to many U.S. federal and state laws, rules, and regulations, as well as laws, rules, and regulations imposed by various non-U.S. governmental authorities, including those related to AI, intellectual property, tax, import and export requirements, anti-corruption, economic and trade sanctions, national security and foreign investment, foreign exchange controls and cash repatriation restrictions, data privacy and security requirements, competition, advertising, employment, product regulations, environment, health, and safety requirements, and consumer laws. These laws and regulations are complex, are constantly evolving, and may be interpreted, applied, created, or amended in a manner that could harm our business.

The import and export of our offerings are subject to laws and regulations, including international treaties, U.S. and various non-U.S. export controls and sanctions laws, customs regulations, and other trade rules. The scope, nature, and severity of such controls vary widely across different countries and may change frequently over time. Such laws, rules, and regulations may delay the introduction of some of our offerings or impact our competitiveness by restricting our ability to do business in certain countries, territories, or jurisdictions or with certain parties (including certain governments). U.S.
export restrictions also require us to obtain licenses from the U.S. Department of Commerce to allow the export or transfer of our offerings, and there can be no assurance that export permissions will be granted. These restrictive governmental actions and any similar measures that may be imposed on U.S. companies by other governments could limit our ability to conduct business globally. See the section titled “Risk Factors” for additional information regarding risks we face related to government regulation.

Legal Proceedings

From time to time, we may be subject to legal proceedings, claims, and investigations in the ordinary course of business. We are not presently a party to any litigation the outcome of which, we believe, if determined adversely to us, would individually or in the aggregate have a material adverse effect on us. We cannot predict the results of any such proceedings, claims, or investigations, and regardless of the outcome, the existence of such matters may have a material adverse impact on us due to diversion of management time and attention as well as the financial costs related to resolving such matters.
MANAGEMENT

Executive Officers and Directors

The following table sets forth information regarding our executive officers and directors as of April 17, 2026:
Name / Age / Position(s)
Executive Officers and Employee Director:
Andrew D. Feldman / 56 / Chief Executive Officer, President, and Director
Robert Komin / 63 / Chief Financial Officer
Sean Lie / 46 / Chief Technology Officer
Dhiraj Mallick / 54 / Chief Operating Officer
Non-Employee Directors:
Paul Auvil(1)(3) / 62 / Director
Elena Donio(1)(2) / 56 / Director
Lior Susan(2)(3) / 42 / Director
Steve Vassallo(2)(3) / 54 / Director
Eric Vishria(1) / 46 / Director
_______________
(1) Member of the audit committee.
(2) Member of the compensation committee.
(3) Member of the nominating and corporate governance committee.

Executive Officers and Employee Director

Andrew D. Feldman is one of our co-founders and has served as our Chief Executive Officer and President and as a member of our board of directors since April 2016. From February 2012 to June 2014, Mr. Feldman served as Corporate Vice President and General Manager at Advanced Micro Devices, Inc. (“AMD”), a semiconductor company. From November 2007 to February 2012, Mr. Feldman served as Chief Executive Officer at SeaMicro, a dense microserver company acquired by AMD. From August 2003 to December 2006, Mr. Feldman served as Vice President, Marketing and Product Management at Force10 Networks, Inc., a computer networking company acquired by Dell, Inc. From March 2000 to August 2003, Mr. Feldman served as Vice President, Corporate Marketing and Corporate Development at Riverstone Networks Inc., a networking switching hardware company. Mr. Feldman holds an M.B.A. from Stanford University and a B.A. in Economics and Political Science from Stanford University. We believe Mr. Feldman is qualified to serve as a member of our board of directors because of the perspective and experience he brings as our co-founder and Chief Executive Officer. See “—Involvement in Certain Legal Proceedings” for certain details regarding historical legal proceedings involving Mr. Feldman.

Robert Komin has served as our Chief Financial Officer and Treasurer since March 2024. Mr. Komin previously served as Chief Financial Officer of Sunrun Inc. (“Sunrun”), a residential solar and storage company, from March 2015 to May 2020, and then continued as a consultant until January 2021. From September 2013 to January 2015, Mr. Komin served as Chief Financial Officer at Flurry, Inc., a mobile analytics and advertising company. From August 2012 to August 2013, Mr.
Komin served as Chief Financial Officer at Ticketfly, Inc., a music ticketing and marketing services provider. From January 2010 to July 2012, Mr. Komin served as Chief Operating Officer and Chief Financial Officer at Linden Research, Inc., a creator of virtual digital entertainment and cybercurrency. Mr. Komin previously served as a member of the board of directors and chairman of the audit committee of Bird Global Inc., a micromobility company, from June 2021 to April 2024. Mr. Komin holds an
M.B.A. from Harvard Business School and a B.S. in Accounting and General Science from the University of Oregon.

Sean Lie is one of our co-founders and has served in various roles since 2016, including most recently as our Chief Technology Officer since April 2022. From April 2012 to June 2015, Mr. Lie served as Chief Architect, Data Center Server Solutions at AMD. From March 2008 to March 2012, Mr. Lie served as Lead Hardware Architect of the IO virtualization fabric ASIC at SeaMicro. From July 2004 to February 2008, Mr. Lie worked as a microprocessor architect at AMD on the advanced architecture team. Mr. Lie holds an M.Eng. and a B.S. in Electrical Engineering and Computer Science from the Massachusetts Institute of Technology.

Dhiraj Mallick has served as our Chief Operating Officer since September 2023. From June 2018 to September 2023, Mr. Mallick served as our Senior Vice President of Engineering and Operations. From November 2015 to May 2018, Mr. Mallick served as Vice President of Innovation, Pathfinding and Architecture, Data Center Group at Intel Corporation, a multinational technology company. From April 2012 to August 2015, Mr. Mallick served as General Manager and Corporate Vice President at AMD. Since January 2020, Mr. Mallick has served on the Global Advisory Group of the Global Semiconductor Alliance, a semiconductor and technology industry organization. Mr. Mallick holds an M.S. in Electrical Engineering from Stanford University and a B.S. in Electrical Engineering from the University of Rochester.

Non-Employee Directors

Paul Auvil has served as a member of our board of directors since July 2024. Mr. Auvil previously served as Chief Financial Officer of Proofpoint, Inc., an enterprise security company, from March 2007 to February 2023. From September 2006 to March 2007, Mr. Auvil was an entrepreneur-in-residence with Benchmark, a venture capital firm. From August 2002 to July 2006, Mr.
Auvil served as Chief Financial Officer at VMware, Inc., a cloud-computing and virtualization company. From April 1998 to January 2002, Mr. Auvil served as Chief Financial Officer at Vitria Technology, Inc., an eBusiness platform company. Mr. Auvil held various executive positions at VLSI Technology, Inc., a semiconductor and circuit manufacturing company, from August 1988 to March 1998, including serving as the Vice President of the Internet and Secure Products Division. Mr. Auvil has served as a member of the boards of directors of Modern Treasury Corp., a fintech company, since October 2024, Chainalysis Inc., a blockchain data platform, since November 2024, and Elastic N.V., a platform for search-powered solutions, since October 2023. Mr. Auvil previously served as a member of the boards of directors of 1Life Healthcare, Inc. (doing business as One Medical), a primary care organization acquired by Amazon.com, Inc., from September 2019 to February 2023, Quantum Corporation, a data storage company, from August 2007 to November 2017, Marin Software Incorporated, a cloud-based advertisement management platform company, from October 2009 to April 2017, and OpenTV Corp., a provider of interactive television software and services, from January 2010 to April
B.E. in Electrical Engineering from Dartmouth College. We believe Mr. Auvil is qualified to serve as a member of our board of directors because of his extensive experience in the technology industry and as an executive and a member of the boards of directors of technology companies.

Elena Donio has served as a member of our board of directors since April 2026. Ms. Donio previously served in various roles at Twilio Inc., a cloud-based communications platform, including as President, Data and Applications from February 2023 to December 2023 and as President, Revenue from May 2022 to February 2023. Prior to Twilio, Ms. Donio served as Chief Executive Officer of Axiom Global, Inc., a provider of technology-enabled legal services, from November 2016 to July 2020, and as a member of its board of directors from November 2016 to January 2021. From January 1998 to November 2016, Ms. Donio served in various roles at Concur Technologies, Inc., a business travel and expense management software company acquired by SAP SE in 2014, including as President, Executive Vice President and General Manager of Worldwide Small and Mid-Sized Businesses, and Vice President of Sales and Marketing. Ms. Donio previously served as a member of the board of directors of Twilio from February 2016 to May 2022. Ms. Donio serves as a member of the boards of directors of several private
companies, including Databricks, Inc., Plaid Inc., and Benchling, Inc. Ms. Donio holds a B.A. in Economics from the University of California, San Diego. We believe Ms. Donio is qualified to serve as a member of our board of directors because of her expertise and experience working with and for technology companies.

Lior Susan has served as a member of our board of directors since April 2023. Since January 2015, Mr. Susan has served as Founder and Managing Partner of Eclipse, an investment firm. Mr. Susan is a co-founder of Bright Machines, Inc., a software company, and has served as its Executive Chairman since January 2018 and as its Chief Executive Officer from January 2018 to May 2018, from December 2021 to December 2022, and from August 2023 to the present. From June 2012 to January 2015, Mr. Susan served as Founder and General Partner at LabIX, the hardware investment platform of Flextronics International Ltd., an end-to-end supply chain solutions company. Mr. Susan served as an Advisor at Intucell Ltd., a self-optimizing network software company, from April 2008 until it was sold to Cisco in 2012. Mr. Susan has served as a member of the board of directors of Owlet, Inc., a health technology company, since March 2015, and has served as the chairman of its board of directors since July 2021. He also serves as a member of the boards of directors of several private companies, including Augury, Inc., Bright Machines, Inc., Cybertoka Ltd., Datapelago, Inc., Dutch Pet, Inc., Prime Minute, Inc., Skyryse, Inc., The Heart Company, Inc., and Ursa Major Technologies, Inc. Mr. Susan previously served as a member of the board of directors of Lucira Health, Inc., a medical diagnostics company, from August 2020 to December 2022. Mr. Susan is a former member of an elite Special Forces unit in the Israel Defense Forces. We believe Mr.
Susan is qualified to serve as a member of our board of directors because of his expertise and experience working with and investing in technology companies.

Steve Vassallo has served as a member of our board of directors since May 2016. Since October 2007, Mr. Vassallo has served as a general partner and in various other roles at Foundation Capital, a venture capital firm. From September 2004 to September 2006, Mr. Vassallo served as Vice President of Product and Engineering at Ning Interactive Inc., a social platform. From May 1999 to September 2002, Mr. Vassallo served as director of engineering at Immersion Corporation, a haptic technology company. Mr. Vassallo previously served as a member of the board of directors of Sunrun from May 2008 to June 2019. Mr. Vassallo also serves as a member of the boards of directors of several private companies. Mr. Vassallo holds an M.B.A. from Stanford University, an M.S. in Electromechanical Engineering from Stanford University, and a B.S. in Mechanical Engineering from Worcester Polytechnic Institute. We believe Mr. Vassallo is qualified to serve on our board of directors because of his extensive experience in the technology industry.

Eric Vishria has served as a member of our board of directors since May 2016. Since July 2014, Mr. Vishria has served as a General Partner of Benchmark, a venture capital firm. From August 2013 to August 2014, Mr. Vishria served as Vice President, Digital Magazines and Verticals at Yahoo! Inc., a web services provider. From November 2008 to August 2013, Mr. Vishria served as co-founder and Chief Executive Officer of RockMelt, Inc., a social media web browser. He has served as a member of the board of directors of Confluent, Inc., a data solutions company, since September 2014. Mr. Vishria previously served as a member of the board of directors of Amplitude, Inc., a digital optimization company, from December 2014 to June 2025.
He also serves as a member of the boards of directors of several private companies. Mr. Vishria holds a B.S. in Mathematical and Computational Science from Stanford University. We believe Mr. Vishria is qualified to serve on our board of directors because of his extensive experience as a venture capital investor and a member of the boards of directors of other technology companies.

Involvement in Certain Legal Proceedings

Andrew D. Feldman was previously one of six named defendants in the action SEC v. Pereira, No. 3:06-cv-06384-CRB (N.D. Cal.), in which the SEC alleged, among other things, that Mr. Feldman, as Vice President,
Corporate Marketing and Corporate Development of Riverstone Networks, Inc., negotiated, reviewed, approved, or was otherwise aware of sales transactions in 2001 and 2002 that were improperly accounted for by Riverstone and aided and abetted Riverstone in violating U.S. securities laws. Without admitting or denying the allegations of the complaint, Mr. Feldman settled the claims against him in 2008 by entering into an agreement with the SEC permanently restraining and enjoining Mr. Feldman from violating federal securities laws and requiring Mr. Feldman to pay $289,507 plus interest. In connection with the same alleged facts, Mr. Feldman also pled guilty in December 2007 to one count of circumventing accounting controls of an issuer in violation of 15 U.S.C. sections 78m(b)(5) and 78ff, and was sentenced to three years of probation and fined $5,000 in connection with an action brought by the U.S. Department of Justice captioned USA v. Feldman, No. 3:07-cr-07-00731-0001-CRB (N.D. Cal.).

Family Relationships

There are no family relationships among any of our executive officers or directors.

Board Structure and Composition

Director Independence

Our board of directors currently consists of six members. Our board of directors has determined that all of our directors, other than Mr. Feldman, qualify as independent directors in accordance with the Nasdaq Listing Rules. Mr. Feldman is not considered independent by virtue of his position as an executive officer of the company. Under the Nasdaq Listing Rules, the definition of independence includes a series of objective tests, such as that the director is not, and has not been for at least three years, one of our employees and that neither the director nor any of his or her family members has engaged in various types of business dealings with us.
In addition, as required by the Nasdaq Listing Rules, our board of directors has made a subjective determination as to each independent director that no relationship exists that, in the opinion of our board of directors, would interfere with the exercise of independent judgment in carrying out the responsibilities of a director. In making these determinations, our board of directors reviewed and discussed information provided by the directors and us with regard to each director’s relationships as they may relate to us and our management.

Classified Board of Directors

In accordance with our amended and restated certificate of incorporation, which will be effective immediately prior to the completion of this offering, our board of directors will be divided into three classes with staggered three-year terms. At each annual meeting of stockholders, the successors to directors whose terms then expire will be elected to serve from the time of election and qualification until the third annual meeting following their election. Our directors will be divided among the three classes as follows:

•The Class I directors will be Andrew Feldman and Paul Auvil, and their terms will expire at the annual meeting of stockholders to be held in 2027;
•The Class II directors will be Steve Vassallo and Eric Vishria, and their terms will expire at the annual meeting of stockholders to be held in 2028; and
•The Class III directors will be Elena Donio and Lior Susan, and their terms will expire at the annual meeting of stockholders to be held in 2029.

We expect that any additional directorships resulting from an increase in the number of directors will be distributed among the three classes so that, as nearly as possible, each class will consist of one-third of the directors. The division of our board of directors into three classes with staggered three-year terms may delay or prevent a change of our management or a change in control.
Leadership Structure of the Board of Directors

Our amended and restated bylaws and corporate governance guidelines to be adopted immediately following the effectiveness of the registration statement of which this prospectus forms a part will provide our board of directors with flexibility to combine or separate the positions of chairperson of the board of directors and Chief Executive Officer and to implement a lead director in accordance with its determination regarding which structure would be in the best interests of our company. Our board of directors currently believes that our existing leadership structure, under which our chief executive officer, Mr. Feldman, serves as chairman of our board of directors, is effective. Our board of directors will continue to periodically review our leadership structure and may make such changes in the future as it deems appropriate. Our board of directors has appointed Eric Vishria to serve as lead independent director. As lead independent director, Mr. Vishria will preside at all meetings of the board of directors at which the chairman of the board of directors is not present, including executive sessions, and perform such additional responsibilities as set forth in our corporate governance guidelines.

Voting Arrangements

The election of the members of our board of directors is currently governed by our amended and restated voting agreement that we entered into with certain holders of our capital stock and the related provisions of our current amended and restated certificate of incorporation. Pursuant to our amended and restated voting agreement and current amended and restated certificate of incorporation, Mr. Feldman was elected by certain holders of our common stock, voting together as a single class, and Messrs. Susan, Vassallo, and Vishria were elected by the holders of our Series A redeemable convertible preferred stock.
Our amended and restated voting agreement will terminate and the provisions of our current amended and restated certificate of incorporation by which our directors were elected will be amended and restated in connection with this offering. After this offering, the number of directors will be fixed by our board of directors, subject to the terms of our amended and restated certificate of incorporation and amended and restated bylaws that will become effective immediately prior to the completion of this offering. Each of our current directors will continue to serve as a director until the election and qualification of his or her successor, or until his or her earlier death, resignation, or removal.

Role of Board in Risk Oversight Process

Risk assessment and oversight are an integral part of our governance and management processes. Our board of directors encourages management to promote a culture that incorporates risk management into our corporate strategy and day-to-day business operations. Management discusses strategic and operational risks at regular management meetings, and conducts specific strategic planning and review sessions during the year that include a focused discussion and analysis of the risks facing us. Throughout the year, senior management reviews these risks with the board of directors at regular board meetings as part of management presentations that focus on particular business functions, operations, or strategies, and presents the steps taken by management to mitigate or eliminate such risks. Our board of directors does not have a standing risk management committee, but rather administers this oversight function directly through our board of directors as a whole, as well as through various standing committees of our board of directors that address risks inherent in their respective areas of oversight.
While our board of directors is responsible for monitoring and assessing strategic risk exposure, our audit committee is responsible for overseeing our major financial and cybersecurity risk exposures and the steps our management has taken to monitor and control these exposures. The audit committee also approves or disapproves any related person transactions. Our nominating and corporate governance committee monitors the effectiveness of our corporate governance guidelines. Our compensation committee assesses and monitors whether any of our compensation policies and programs has the
potential to encourage excessive risk-taking. The risk oversight process also includes receiving regular reports from our committees and members of senior management to enable our board of directors to understand our risk identification, risk management, and risk mitigation strategies with respect to areas of potential material risk, including operations, finance, legal, regulatory, cybersecurity, strategic, and reputational risk.

Board Committees

Effective as of the date the registration statement of which this prospectus forms a part is declared effective by the SEC, our board of directors will have three standing committees: an audit committee; a compensation committee; and a nominating and corporate governance committee. Each committee is governed by a charter that will be available on our website following completion of this offering. Members serve on these committees until their resignation or until otherwise determined by our board of directors.

Audit Committee

Effective as of the date the registration statement of which this prospectus forms a part is declared effective by the SEC, the members of our audit committee will consist of Paul Auvil, Elena Donio, and Eric Vishria. Mr. Auvil will be the chairperson of our audit committee. The composition of our audit committee meets the requirements for independence under the current Nasdaq Listing Rules and Rule 10A-3 of the Exchange Act. Each member of our audit committee is financially literate. In addition, our board of directors has determined that Mr. Auvil is an “audit committee financial expert” within the meaning of the SEC rules. This designation does not impose on such directors any duties, obligations, or liabilities that are greater than are generally imposed on members of our audit committee and our board of directors.
Our audit committee is directly responsible for, among other things:

•appointing, retaining, compensating, and overseeing the work of our independent registered public accounting firm;
•assessing the independence and performance of the independent registered public accounting firm;
•reviewing with our independent registered public accounting firm the scope and results of the firm’s annual audit of our financial statements;
•overseeing the financial reporting process and discussing with management and our independent registered public accounting firm the financial statements that we will file with the SEC;
•pre-approving all audit and permissible non-audit services to be performed by our independent registered public accounting firm;
•reviewing policies and practices related to risk assessment and management;
•reviewing our accounting and financial reporting policies and practices and accounting controls, as well as compliance with legal and regulatory requirements;
•reviewing cybersecurity matters;
•reviewing, overseeing, approving, or disapproving any related-person transactions;
•reviewing with our management the scope and results of management’s evaluation of our disclosure controls and procedures and management’s assessment of our internal control over financial reporting, including the related certifications to be included in the periodic reports we will file with the SEC; and
•establishing procedures for the confidential anonymous submission of concerns regarding questionable accounting, internal controls, or auditing matters, or other ethics or compliance issues.
Compensation Committee

Effective as of the date the registration statement of which this prospectus forms a part is declared effective by the SEC, the members of our compensation committee will consist of Elena Donio, Lior Susan, and Steve Vassallo. Ms. Donio will be the chairperson of our compensation committee. Each of Ms. Donio and Messrs. Susan and Vassallo is a non-employee director, as defined by Rule 16b-3 promulgated under the Exchange Act, and meets the requirements for independence under the current Nasdaq Listing Rules.

Our compensation committee is responsible for, among other things:

•reviewing and approving the compensation of our executive officers, including reviewing and approving corporate goals and objectives with respect to compensation;
•acting as an administrator of our equity incentive plans;
•reviewing and approving, or making recommendations to our board of directors with respect to, incentive compensation and equity plans;
•reviewing and recommending that our board of directors approve the compensation for our non-employee directors; and
•establishing and reviewing general policies relating to compensation and benefits of our employees.

Nominating and Corporate Governance Committee

Effective as of the date the registration statement of which this prospectus forms a part is declared effective by the SEC, the members of our nominating and corporate governance committee will consist of Paul Auvil, Lior Susan, and Steve Vassallo. Mr. Vassallo will be the chairperson of our nominating and corporate governance committee. Each of Messrs. Auvil, Susan, and Vassallo meets the requirements for independence under the current Nasdaq Listing Rules.
Our nominating and corporate governance committee is responsible for, among other things:

•identifying and recommending candidates for membership on our board of directors, including the consideration of nominees submitted by stockholders, and on each of the board’s committees;
•reviewing and recommending our corporate governance guidelines and policies;
•reviewing proposed waivers of the code of business conduct and ethics for directors and executive officers;
•overseeing the process of evaluating the performance of our board of directors; and
•assisting our board of directors on corporate governance matters.

Code of Business Conduct and Ethics

In connection with this offering, our board of directors will adopt a code of business conduct and ethics that applies to all of our employees, officers, and directors, including our Chief Executive Officer, Chief Financial Officer, and other executive and senior financial officers. Upon completion of this offering, the full text of our code of business conduct and ethics will be posted on the investor relations section of our website. We intend to disclose future amendments to our code of business conduct and ethics, or any waivers of such code, on our website or in public filings.

Indemnification and Insurance

We maintain directors’ and officers’ liability insurance. Our amended and restated certificate of incorporation and amended and restated bylaws will include provisions limiting the liability of directors and officers and
indemnifying them under certain circumstances. We have entered or will enter into indemnification agreements with each of our directors and officers to provide our directors and officers with additional indemnification and related rights. See the section titled “Description of Capital Stock—Limitations on Liability and Indemnification Matters” for additional information.

Compensation Committee Interlocks and Insider Participation

None of the members of our board of directors who will serve on our compensation committee upon the effectiveness of the registration statement of which this prospectus forms a part is or has been an officer or employee of our company. None of our executive officers currently serves, or in the past fiscal year has served, as a member of a compensation committee (or if no committee performs that function, the board of directors) of any other entity that has an executive officer serving as a member of our board of directors.
EXECUTIVE AND DIRECTOR COMPENSATION

Executive Compensation

The following is a discussion and analysis of compensation arrangements of our named executive officers (“NEOs”). This discussion contains forward-looking statements that are based on our current plans, considerations, expectations, and determinations regarding future compensation programs. Actual compensation programs that we adopt may differ materially from currently planned programs as summarized in this discussion. As an “emerging growth company” as defined in the JOBS Act, we are not required to include a Compensation Discussion and Analysis section and have elected to comply with the scaled disclosure requirements applicable to emerging growth companies.

We seek to ensure that the total compensation paid to our executive officers is reasonable and competitive. Compensation of our executive officers is structured around the achievement of individual performance and near-term corporate targets as well as long-term business objectives.

Our NEOs for the year ended December 31, 2025 were:

•Andrew D. Feldman, our Chief Executive Officer and President;
•Sean Lie, our Chief Technology Officer; and
•Dhiraj Mallick, our Chief Operating Officer.

Summary Compensation Table

The following table sets forth total compensation paid to our NEOs for 2025.
| Name and Principal Position | Year | Salary ($) | Bonus ($) | Stock Awards ($)(1) | Option Awards ($) | Non-Equity Incentive Plan Compensation ($)(2) | All Other Compensation ($) | Total ($) |
|---|---|---|---|---|---|---|---|---|
| Andrew D. Feldman, Chief Executive Officer | 2025 | 450,000 | — | 10,816,000 | — | 487,500 | — | 11,753,500 |
| Sean Lie, Chief Technology Officer | 2025 | 400,000 | — | 10,816,000 | — | 350,000 | — | 11,566,000 |
| Dhiraj Mallick, Chief Operating Officer | 2025 | 400,000 | — | — | — | 290,000 | — | 690,000 |
_______________
(1) Amounts shown represent the grant date fair value of RSUs granted during 2025, as calculated in accordance with ASC Topic 718. See “Stock-Based Compensation” in Note 14 to our audited consolidated financial statements included elsewhere in this prospectus for the assumptions used in calculating this amount.
(2) Amounts shown represent annual performance-based cash bonuses earned for 2025.

Narrative to Summary Compensation Table

Annual Base Salaries

We pay each of our NEOs a base salary to compensate them for services rendered to our company. The base salary payable to each NEO is intended to provide a fixed component of compensation reflecting the executive’s skill set, experience, role, and responsibilities. The compensation committee of our board of directors established the annual base salary of each NEO for 2025 as follows: Mr. Feldman, $450,000; Mr. Lie, $400,000; and Mr. Mallick, $400,000. In February 2026, our
compensation committee increased the 2026 base salaries of Messrs. Feldman, Lie, and Mallick to $600,000, $500,000, and $450,000, respectively. Our board of directors and compensation committee may adjust base salaries from time to time in their discretion.

Annual Bonuses

We maintain an annual performance-based cash bonus program in which each of our NEOs participated in 2025, which provides a
target bonus opportunity for each NEO, with the NEO’s earnings based 50% on the achievement of certain corporate financial objectives established by our compensation committee and 50% on individual performance. The target bonus opportunity established for each NEO for 2025 was as follows: Mr. Feldman, $250,000; Mr. Lie, $200,000; and Mr. Mallick, $200,000. In January 2026, our compensation committee determined achievement of the corporate financial objectives under our annual performance-based cash bonus program (which were achieved at 200% of target) and individual performance levels, and approved the following bonuses for our NEOs based on 2025 performance: Mr. Feldman, $487,500; Mr. Lie, $350,000; and Mr. Mallick, $290,000. For 2026, our compensation committee adopted a similar annual performance-based cash bonus program. Additionally, in February 2026, our compensation committee increased the annual target bonus opportunities of Messrs. Feldman and Lie to $600,000 and $375,000, respectively. Our board of directors and compensation committee may adjust annual target bonus opportunities or award discretionary bonuses from time to time.

Equity-Based Compensation

We have granted stock options and RSUs to our NEOs to attract and retain them, as well as to align their interests with the interests of our stockholders. Our stock options are generally exercisable prior to vesting (with any unvested shares subject to repurchase at the original exercise price upon any termination of service) and generally vest over one, three, or four years, subject to continued service to the company. Our RSUs generally require satisfaction of both a service-based vesting condition, which is generally satisfied over a one- to four-year period, and a liquidity-based vesting condition, which will be satisfied upon completion of this offering.

2025 Equity Awards

In February 2025, we granted each of Mr. Feldman and Mr.
Lie awards of 400,000 RSUs, which vest subject to satisfaction of service-based and liquidity-based vesting conditions. The service-based condition is satisfied as to 1/48th of the total number of RSUs on each monthly anniversary of January 5, 2025, subject to the executive continuing to provide services through the applicable date. The liquidity-based vesting condition will be satisfied upon the completion of this offering.

2026 Founder Equity Awards

In connection with this offering, our compensation committee and board of directors worked closely with Compensia, the compensation committee’s independent compensation consultant, to design a comprehensive, integrated equity compensation package for our co-founder NEOs, Mr. Feldman and Mr. Lie. This package consists of three complementary components: make-whole time-based RSU awards, annual time-based RSU awards, and large performance-based RSU (“PRSU”) awards. These components are designed to work together to align the interests of our co-founder executives with those of our stockholders and to drive long-term stockholder value. In designing this equity compensation package, our compensation committee and board of directors engaged Compensia to perform a comprehensive study of market practices among our peers, including overall equity
holdings of Mr. Feldman and Mr. Lie in comparison to similarly situated executives at peer companies, and the size and structure of equity awards granted to similarly situated co-founder executives. After considerable deliberation, in February 2026, upon the recommendation of our compensation committee, our board of directors granted the equity awards described below.

First Component: Founder Make-Whole RSU Awards

After reviewing market data on similarly situated co-founder executives at peer companies and the relative equity holdings of each of Mr. Feldman and Mr. Lie, our compensation committee and board of directors determined to grant each executive an award of RSUs intended to bring their existing equity stake in the Company in line with the top 10% of similarly situated executives at our peers. Accordingly, our board of directors granted Mr. Feldman an award of 500,000 RSUs and Mr. Lie an award of 312,500 RSUs. Each award vests subject to satisfaction of service-based and liquidity-based vesting conditions. The service-based condition is satisfied as to 1/60th of the total number of RSUs on each monthly anniversary of January 5, 2026, subject to the executive’s continued employment as the Company’s Chief Executive Officer (for Mr. Feldman) or Chief Technology Officer or higher (for Mr. Lie) (in either case, “Qualified Service”) through the applicable date. The liquidity-based vesting condition will be satisfied upon the completion of this offering, subject to the executive’s Qualified Service through such completion.

Second Component: Founder Annual RSU Awards

After reviewing market data on similarly situated co-founder executives at peer companies, our compensation committee and board of directors determined to grant each of Mr. Feldman and Mr. Lie an award of RSUs intended to constitute an annual refresh grant that rewards the executive for performance over the prior year while ensuring continued retention. Accordingly, our board of directors granted Mr.
Feldman an award of 243,902 RSUs and Mr. Lie an award of 182,926 RSUs. Each award vests subject to satisfaction of service-based and liquidity-based vesting conditions. The service-based condition is satisfied as to 1/48th of the total number of RSUs on each monthly anniversary of January 5, 2026, subject to the executive’s Qualified Service through the applicable date. The liquidity-based vesting condition will be satisfied upon the completion of this offering, subject to the executive’s Qualified Service through such completion.

Third Component: Founder PRSU Awards

After reviewing market data on similarly situated co-founder executives at peer companies that underwent an initial public offering of their common stock, our compensation committee and board of directors determined to grant each of Mr. Feldman and Mr. Lie an award of PRSUs intended to provide a meaningful incentive to drive long-term value to our stockholders. Accordingly, our board of directors granted Mr. Feldman an award of 5,700,000 PRSUs and Mr. Lie an award of 3,300,000 PRSUs. These PRSU awards significantly align Mr. Feldman’s and Mr. Lie’s compensation with our stockholders’ interests by requiring sustained achievement of market capitalization targets. The size of the awards was determined after consideration of Mr. Feldman’s and Mr. Lie’s current equity holdings, after giving effect to the 2026 make-whole RSUs and 2026 annual RSUs described above, and similar equity awards granted to founders of publicly traded companies serving in executive positions. Each PRSU represents the right to receive one share of Class B common stock upon vesting. The PRSUs are eligible to vest starting six months following completion of this offering in three separate tranches in the event the market capitalization hurdles in the table below are achieved, subject to the executive’s Qualified Service through the vesting date.
| Tranche | Market Capitalization Hurdle | # of PRSUs Vesting (Feldman) | # of PRSUs Vesting (Lie) |
|---|---|---|---|
| 1 | $75 billion | 1,900,000 | 1,100,000 |
| 2 | $150 billion | 1,900,000 | 1,100,000 |
| 3 | $250 billion | 1,900,000 | 1,100,000 |
For purposes of determining whether a market capitalization hurdle has been achieved, the 90-trading-day trailing average of the product of (i) the closing trading price of our Class A common stock on the applicable trading day multiplied by (ii) the number of outstanding shares of Class A common stock on such trading day must equal or exceed the applicable market capitalization hurdle. In the event of a change in control of the Company, subject to the executive’s Qualified Service through such change in control, the per share amount to be paid in connection with the change in control will be used to determine final market capitalization hurdle achievement (using linear interpolation if such amount falls between two market capitalization hurdles). In the event of a significant merger or acquisition by the Company pursuant to which Company capital stock is used as full or partial consideration, the market capitalization hurdles shall be appropriately and proportionately adjusted upwards to reflect the capital stock issued in such merger or acquisition. Except as otherwise determined by the compensation committee, in the event of a termination of Mr. Feldman’s or Mr. Lie’s Qualified Service, all then-unvested PRSUs held by such executive will automatically be forfeited. Additionally, any PRSUs that remain unvested as of the ninth anniversary of the completion of this offering will automatically be forfeited.

Other 2026 Equity Awards

In February 2026, we granted Mr. Mallick an award of 150,000 RSUs, which vests subject to satisfaction of service-based and liquidity-based vesting conditions. The service-based condition is satisfied as to 1/24th of the total number of RSUs on each monthly anniversary of August 5, 2026, subject to him continuing to provide services through the applicable date. The liquidity-based vesting condition will be satisfied upon the completion of this offering.
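As a purely hypothetical illustration of the PRSU market capitalization hurdle test described earlier in this section, the sketch below computes the 90-trading-day trailing-average market capitalization and counts the hurdles it clears. All prices and share counts here are invented for illustration; they are not company data.

```python
# Hypothetical sketch of the PRSU market capitalization hurdle test.
# Inputs (prices, share counts) are invented; only the hurdle levels and
# per-tranche PRSU counts come from the disclosure above.

HURDLES = [75e9, 150e9, 250e9]  # Tranche 1, 2, 3 hurdles, in dollars
FELDMAN_PER_TRANCHE = 1_900_000
LIE_PER_TRANCHE = 1_100_000
WINDOW = 90  # trading days in the trailing average

def trailing_avg_market_cap(closing_prices, shares_outstanding):
    """90-trading-day trailing average of (closing price x shares outstanding)."""
    caps = [p * s for p, s in zip(closing_prices, shares_outstanding)]
    if len(caps) < WINDOW:
        return None  # not enough trading history yet
    return sum(caps[-WINDOW:]) / WINDOW

def tranches_achieved(avg_cap):
    """Number of hurdles met by the trailing-average market capitalization."""
    if avg_cap is None:
        return 0
    return sum(1 for hurdle in HURDLES if avg_cap >= hurdle)

# Example: a flat $90/share closing price and 1.0 billion Class A shares
# outstanding for 90 consecutive trading days.
prices = [90.0] * WINDOW
shares = [1_000_000_000] * WINDOW
avg = trailing_avg_market_cap(prices, shares)  # $90 billion average
n = tranches_achieved(avg)                     # clears only the $75B hurdle
print(n, n * FELDMAN_PER_TRANCHE, n * LIE_PER_TRANCHE)
```

Note that this sketch ignores the change-in-control interpolation and the merger-adjustment provisions described above; it models only the ordinary trailing-average test.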
New Equity Plan

In connection with this offering, we intend to adopt a 2026 Incentive Award Plan (the “2026 Plan”) in order to facilitate the grant of cash and equity incentives to employees (including our NEOs), directors, and consultants of our company and certain of our affiliates and to enable us to obtain and retain the services of these individuals, which is essential to our long-term success. We expect that the 2026 Plan will become effective on the day immediately prior to the date of effectiveness of the registration statement of which this prospectus forms a part, subject to approval of such plan by our stockholders. For additional information about the 2026 Plan, see the section titled “Equity Compensation Plans” below.

Other Elements of Compensation

Retirement Savings and Health and Welfare Benefits

We currently maintain a 401(k) retirement savings plan for our employees, including our NEOs, who satisfy certain eligibility requirements. Our NEOs are eligible to participate in the 401(k) plan on the same terms as other full-time employees. The U.S. Internal Revenue Code of 1986, as amended (the “Code”), allows eligible employees to defer a portion of their compensation, within prescribed limits, on a pre-tax basis through contributions to the 401(k) plan. We believe that providing a vehicle for tax-deferred retirement savings through our 401(k) plan adds to the overall desirability of our executive compensation package and further incentivizes our employees, including our NEOs, in accordance with our compensation policies. All of our full-time employees, including our NEOs, are eligible to participate in our health and welfare plans, including medical, dental, and vision benefits; medical and dependent care flexible spending accounts; short-term and long-term disability insurance; and life and accidental death and dismemberment insurance.
Perquisites and Other Personal Benefits

We provide perquisites and other personal benefits to our NEOs when we believe it is necessary to attract or retain the NEO. None of our NEOs received any perquisites during 2025.
Outstanding Equity Awards at 2025 Year End

The following table lists all outstanding equity awards held by our NEOs as of December 31, 2025.
| Name | Vesting Commencement Date | Number of Securities Underlying Unexercised Options (#) Exercisable | Number of Securities Underlying Unexercised Options (#) Unexercisable | Option Exercise Price ($) | Option Expiration Date | Number of Shares or Units of Stock That Have Not Vested (#) | Market Value of Shares or Units of Stock That Have Not Vested ($) |
|---|---|---|---|---|---|---|---|
| Andrew D. Feldman | 2/15/2019 | 1,150,000 | — | 2.40 | 5/13/2029 | — | — |
| | 3/1/2022 | 562,500 | 37,500 | 2.72 | 12/7/2030 | — | — |
| | 1/1/2023 | 145,833 | 4,167 | 7.89 | 1/11/2032 | — | — |
| | 1/1/2023 | 109,375 | 40,625 | 5.02 | 2/13/2033 | — | — |
| | 2/14/2023 | — | — | — | — | 23,438 | 1,922,385 |
| | 1/1/2024 | 383,333 | 416,667 | 5.48 | 2/6/2034 | — | — |
| | 1/5/2025 | — | — | — | — | 400,000 | 32,808,000 |
| Sean Lie | 2/15/2019 | 350,000 | — | 2.40 | 5/13/2029 | — | — |
| | 3/1/2021 | 175,000 | — | 2.72 | 12/7/2030 | — | — |
| | 1/1/2022 | 97,916 | 2,084 | 7.89 | 1/11/2032 | — | — |
| | 1/1/2023 | 109,375 | 40,625 | 5.02 | 2/13/2033 | — | — |
| | 2/14/2023 | — | — | — | — | 21,094 | 1,730,130 |
| | 1/1/2024 | 191,666 | 208,334 | 5.48 | 2/6/2034 | — | — |
| | 1/5/2025 | — | — | — | — | 400,000 | 32,808,000 |
| Dhiraj Mallick | 6/28/2018 | 367,370 | — | 0.98 | 7/16/2028 | — | — |
| | 6/28/2021 | 200,000 | — | 2.72 | 7/6/2030 | — | — |
| | 6/15/2021 | 100,000 | — | 2.89 | 3/14/2031 | — | — |
| | 10/1/2021 | — | — | — | — | 325,000 | 26,656,500 |
| | 8/23/2022 | 250,000 | 50,000 | 6.47 | 8/22/2032 | — | — |
| | 1/1/2023 | 145,833 | 54,167 | 5.02 | 2/13/2033 | — | — |
| | 8/1/2023 | 35,000 | — | 5.02 | 7/31/2033 | — | — |
| | 8/1/2024 | 35,000 | — | 5.02 | 7/31/2033 | — | — |
| | 8/1/2025 | 11,666 | 23,334 | 5.02 | 7/31/2033 | — | — |
| | 8/1/2023 | — | — | — | — | 700,000 | 57,414,000 |
______________
(1) Each option is exercisable as to all shares underlying the option, with any shares purchased upon exercise subject to the same vesting conditions applicable to the option. In the event of any termination of employment, unvested shares may be repurchased by us for the exercise price of the related option. The portion of the option included under “Number of Securities Underlying Unexercised Options (#) Unexercisable” represents the unvested portion of the option, notwithstanding that it is fully exercisable.
(2) Amount reported is calculated by multiplying $82.02, which our board of directors determined equaled the fair market value of our Class A common stock as of December 31, 2025, by the number of unvested shares comprising or underlying the stock award.
(3) The option vests as to 1/48th of the shares underlying the option on each monthly anniversary of the vesting commencement date, subject to the executive continuing to provide services to us through the applicable vesting date.
(4) The option vests as to 1/36th of the shares underlying the option on each monthly anniversary of the vesting commencement date, subject to the executive continuing to provide services to us through the applicable vesting date.
(5) The RSUs vest upon the completion of this offering, subject to the executive continuing to provide services to us through such completion.
(6) The RSUs vest on the date both service-based and liquidity-based vesting conditions are satisfied. The service-based vesting condition is satisfied as to 1/48th of the total number of RSUs on each monthly anniversary of the vesting commencement date, subject to the executive continuing to provide services through the applicable date. The liquidity-based vesting condition will be satisfied upon the completion of this offering.
(7) The option vests as to 1/12th of the shares underlying the option on each monthly anniversary of the vesting commencement date, subject to the executive continuing to provide services to us through the applicable vesting date.
(8) The RSUs vest on the date both service-based and liquidity-based vesting conditions are satisfied. The service-based vesting condition is satisfied as to 1/36th of the total number of RSUs on each monthly anniversary of the vesting commencement date, subject to the executive continuing to provide services through the applicable date. The liquidity-based vesting condition will be satisfied upon the completion of this offering.

Executive Compensation Arrangements

Offer Letters

We are party to continued employment offer letters with Mr. Feldman and Mr. Lie, and an amended and restated offer letter with Mr. Mallick. Each offer letter provides for an initial base salary, target bonus opportunity, eligibility for employee benefits, and eligibility for future equity awards. The continued employment offer letters for Mr. Feldman and Mr.
Lie provide that, commencing in 2027 and for so long as they remain employed by us, they will each be eligible to receive an annual equity award, (i) with the size and terms of such awards to be determined by our compensation committee and approved by our board of directors each year, (ii) with such awards to be based on a market assessment of the value of equity awards granted to chief executive officers (for Mr. Feldman) or chief technology officers (for Mr. Lie) in our compensation peer group for the applicable year, and (iii) with the terms and conditions of such awards to be no less favorable than those of the annual equity awards granted to the other executive officers in our compensation peer group for the applicable year, other than with respect to the mix between performance-based and time-based equity awards.

Change in Control and Severance Arrangements

Each NEO participates in our Executive Change in Control and Severance Plan (the “Severance Plan”), and their benefits under the Severance Plan are described below. Payments and benefits under the Severance Plan are contingent on the applicable executive timely providing us with (and not revoking) a general release of claims. If we terminate an NEO’s employment without “cause” or the NEO resigns for “good reason” (in each case, as defined in the Severance Plan) other than during the period beginning three months before and ending 12 months after a change in control (the “CIC Protection Period”), then, in addition to any accrued obligations, the executive will be entitled to the following: (i) an amount equal to 15 months (for Mr. Feldman and Mr. Lie) or 12 months (for Mr. Mallick) of the executive’s then-current base salary, (ii) payment or reimbursement of COBRA premiums for up to 12 months following the termination date, and (iii) for Mr. Feldman and Mr. Lie only, six months of additional vesting for the executive’s outstanding equity awards that vest solely based on continued service to the Company.
If we terminate an NEO’s employment without “cause” or the NEO resigns for “good reason” during the CIC Protection Period, then, in addition to any accrued obligations and in lieu of the benefits described in the paragraph above, the NEO will be entitled to the following: (i) an amount equal to 18 months (for Mr. Feldman and Mr. Lie) or 12 months (for Mr. Mallick) of the executive’s then-current base salary, (ii) an amount equal to 1.5 times (for Mr. Feldman and Mr. Lie) or 1.0 times (for Mr. Mallick) the executive’s target annual bonus for the year of termination, (iii) payment or reimbursement of COBRA premiums for up to 12 months following the termination date, and (iv) full vesting acceleration of the executive’s outstanding equity awards that vest based solely on continued service to the Company.
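The cash portions of the two severance scenarios above reduce to simple formulas. The sketch below illustrates them under stated assumptions: the salary and target bonus figures are hypothetical, and COBRA reimbursement and equity acceleration are not modeled.

```python
# Hedged sketch of the Severance Plan cash formulas described above.
# Salary/bonus inputs are hypothetical; COBRA and equity vesting are omitted.

def non_cic_cash_severance(base_salary, months):
    """Outside the CIC Protection Period: a number of months of base salary
    (15 months for Messrs. Feldman and Lie, 12 months for Mr. Mallick)."""
    return base_salary * months / 12

def cic_cash_severance(base_salary, months, target_bonus, bonus_multiple):
    """During the CIC Protection Period: months of base salary plus a
    multiple of the target annual bonus (18 months / 1.5x for Messrs.
    Feldman and Lie; 12 months / 1.0x for Mr. Mallick)."""
    return base_salary * months / 12 + target_bonus * bonus_multiple

# Hypothetical executive with a $500,000 salary and a $300,000 target bonus,
# using the Feldman/Lie multipliers:
print(non_cic_cash_severance(500_000, 15))            # 625000.0
print(cic_cash_severance(500_000, 18, 300_000, 1.5))  # 1200000.0
```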
Equity Compensation Plans

The following summarizes the material terms of the long-term incentive compensation plan and employee stock purchase plan in which our NEOs will be eligible to participate following this offering and our existing equity plans, under which we have previously made periodic grants of equity and equity-based awards to our NEOs and other employees.

2026 Incentive Award Plan

We have adopted the 2026 Plan, which will become effective on the day immediately prior to the date of effectiveness of the registration statement of which this prospectus forms a part. The principal purpose of the 2026 Plan is to attract, retain, and motivate selected employees, directors, and consultants through the granting of stock-based compensation awards and cash-based performance bonus awards. The material terms of the 2026 Plan are summarized below.

Share Reserve. Under the 2026 Plan, shares of our Class A common stock will be initially reserved for issuance pursuant to a variety of stock-based compensation awards, including stock options, stock appreciation rights (“SARs”), restricted stock awards, RSU awards, and other stock-based awards.
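The annual “evergreen” share-reserve increase described in the next paragraph, that is, the lesser of (a) 5% of all outstanding common stock plus shares issuable under penny warrants and (b) a smaller number determined by the board, can be sketched as follows. The share counts in the example are hypothetical, not company figures.

```python
# Hypothetical sketch of the 2026 Plan annual "evergreen" increase:
# the lesser of 5% of (all outstanding common stock + shares issuable on
# warrants with an exercise price of $0.01 or less) and any smaller number
# set by the board. Share counts below are invented for illustration.

def annual_evergreen_increase(common_outstanding, penny_warrant_shares,
                              board_cap=None):
    # 5% = 1/20; integer division keeps the share count whole.
    five_pct = (common_outstanding + penny_warrant_shares) // 20
    if board_cap is None:
        return five_pct
    return min(five_pct, board_cap)

# 200M common shares plus 10M penny-warrant shares: 5% of 210M = 10.5M
print(annual_evergreen_increase(200_000_000, 10_000_000))             # 10500000
# Board elects a smaller increase of 4M shares for that year
print(annual_evergreen_increase(200_000_000, 10_000_000, 4_000_000))  # 4000000
```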
The number of shares initially reserved for issuance or transfer pursuant to awards under the 2026 Plan will be increased by (i) the number of shares represented by awards outstanding under the 2016 Plan that become available for issuance under the counting provisions described below following the effective date and (ii) an annual increase on the first day of each fiscal year beginning in 2027 and ending in 2036, equal to the lesser of (A) 5% of the sum of (1) all shares of all classes of our common stock and (2) the number of shares issuable upon the exercise of warrants to purchase shares of our common stock with an exercise price per share of $0.01 or less, in each case, outstanding on the last day of the immediately preceding fiscal year, and (B) such smaller number of shares of stock as determined by our board of directors; provided, however, that no more than shares may be issued upon the exercise of incentive stock options. The following counting provisions will be in effect for the share reserve under the 2026 Plan:

•to the extent that an award (including an award granted under the 2016 Plan (a “Prior Plan Award”)) terminates, expires, or lapses for any reason or an award is settled in cash without the delivery of shares, any shares subject to the award at such time will be available for future grants under the 2026 Plan;
•to the extent shares are tendered or withheld to satisfy the grant, exercise price, or tax withholding obligation with respect to any award under the 2026 Plan or Prior Plan Award, such tendered or withheld shares will be available for future grants under the 2026 Plan;
•to the extent shares subject to stock appreciation rights are not issued in connection with the stock settlement of stock appreciation rights on exercise thereof, such shares will be available for future grants under the 2026 Plan;
•to the extent that shares of our Class A common stock are repurchased by us prior to vesting so that shares are returned to us, such shares will be available for future grants under the 2026 Plan;
•the payment of dividend equivalents in cash in conjunction with any outstanding awards or Prior Plan Awards will not be counted against the shares available for issuance under the 2026 Plan; and
•to the extent permitted by applicable law or any exchange rule, shares issued in assumption of, or in substitution for, any outstanding awards of any entity acquired in any form of combination by us or any of our subsidiaries will not be counted against the shares available for issuance under the 2026 Plan.

In addition, the sum of the grant date fair value of all equity-based awards and the maximum that may become payable pursuant to all cash-based awards to any individual for services as a non-employee director during any
calendar year may not exceed $4,000,000 for a non-employee director’s first year of service as a non-employee director, and $1,000,000 for each year thereafter. Our board of directors may make exceptions to this limit for individual non-employee directors in extraordinary circumstances, as our board of directors may determine in its discretion, provided that the non-employee director receiving such additional compensation may not participate in the decision to award such compensation or in other contemporaneous compensation decisions involving non-employee directors.

Administration. The compensation committee of our board of directors is expected to administer the 2026 Plan unless our board of directors assumes authority for administration. The compensation committee must consist of at least three members of our board of directors, each of whom is intended to qualify as a “non-employee director” for purposes of Rule 16b-3 under the Exchange Act and as an “independent director” within the meaning of the rules of the applicable stock exchange or other principal securities market on which shares of our Class A common stock are traded. The 2026 Plan provides that our board of directors or compensation committee may delegate its authority to grant awards to employees other than executive officers and certain senior executives of the company to a committee consisting of one or more members of our board of directors or one or more of our officers, other than awards made to our non-employee directors, which must be approved by our full board of directors. Subject to the terms and conditions of the 2026 Plan, the administrator has the authority to select the persons to whom awards are to be made, to determine the number of shares to be subject to awards and the terms and conditions of awards, and to make all other determinations and to take all other actions necessary or advisable for the administration of the 2026 Plan.
The administrator is also authorized to adopt, amend, or rescind rules relating to administration of the 2026 Plan. Our board of directors may at any time remove the compensation committee as the administrator and revest in itself the authority to administer the 2026 Plan. The full board of directors will administer the 2026 Plan with respect to awards to non-employee directors.

Eligibility. Awards under the 2026 Plan may be granted to individuals who are then our officers, employees, or consultants or are the officers, employees, or consultants of certain of our subsidiaries. Such awards also may be granted to our directors. Only employees of our company or certain of our subsidiaries may be granted incentive stock options.

Awards. The 2026 Plan provides that the administrator may grant or issue stock options, SARs, restricted stock, RSUs, other stock- or cash-based awards and dividend equivalents, or any combination thereof. Each award will be set forth in a separate agreement with the person receiving the award and will indicate the type, terms, and conditions of the award.

•Nonstatutory Stock Options (“NSOs”) will provide for the right to purchase shares of our Class A common stock at a specified price, which may not be less than fair market value on the date of grant, and usually will become exercisable (at the discretion of the administrator) in one or more installments after the grant date, subject to the participant’s continued employment or service with us and/or subject to the satisfaction of corporate performance targets and individual performance targets established by the administrator. NSOs may be granted for any term specified by the administrator that does not exceed ten years.
•Incentive Stock Options (“ISOs”) will be designed in a manner intended to comply with the provisions of Section 422 of the Code and will be subject to specified restrictions contained in the Code.
Among such restrictions, ISOs must have an exercise price of not less than the fair market value of a share of Class A common stock on the date of grant, may only be granted to employees, and must not be exercisable after a period of ten years measured from the date of grant. In the case of an ISO granted to an individual who owns (or is deemed to own) at least 10% of the total combined voting power of all classes of our capital stock, the 2026 Plan provides that the exercise price must be at least 110% of the fair market value of a share of Class A common stock on the date of grant and the ISO must not be exercisable after a period of five years measured from the date of grant.
•Restricted Stock may be granted to any eligible individual and made subject to such restrictions as may be determined by the administrator. Restricted stock typically may be forfeited for no consideration or
repurchased by us at the original purchase price if the conditions or restrictions on vesting are not met. In general, restricted stock may not be sold or otherwise transferred until restrictions are removed or expire. Purchasers of restricted stock, unlike recipients of options, will have voting rights and will have the right to receive dividends, if any, prior to the time when the restrictions lapse; however, extraordinary dividends will generally be placed in escrow and will not be released until restrictions are removed or expire.
•RSUs may be awarded to any eligible individual, typically without payment of consideration, but subject to vesting conditions based on continued employment or service or on performance criteria established by the administrator. Like restricted stock, RSUs may not be sold, or otherwise transferred or hypothecated, until vesting conditions are removed or expire. Unlike restricted stock, stock underlying RSUs will not be issued until the RSUs have vested, and recipients of RSUs generally will have no voting or dividend rights prior to the time when vesting conditions are satisfied.
•SARs may be granted in connection with stock options or other awards, or separately. SARs granted in connection with stock options or other awards typically will provide for payments to the holder based upon increases in the price of our Class A common stock over a set exercise price. The exercise price of any SAR granted under the 2026 Plan must be at least 100% of the fair market value of a share of our Class A common stock on the date of grant. SARs under the 2026 Plan will be settled in cash or shares of our Class A common stock, or in a combination of both, at the election of the administrator.
•Other Stock- or Cash-Based Awards are awards of cash, fully vested shares of our Class A common stock, and other awards valued wholly or partially by referring to, or otherwise based on, shares of our Class A common stock.
Other stock- or cash-based awards may be granted to participants and may also be available as a payment form in the settlement of other awards, as standalone payments, and as payment in lieu of base salary, bonus, fees, or other cash compensation otherwise payable to any individual who is eligible to receive awards. The administrator will determine the terms and conditions of other stock- or cash-based awards, which may include vesting conditions based on continued service, performance, and/or other conditions.
•Dividend Equivalents represent the right to receive the equivalent value of dividends paid on shares of our Class A common stock and may be granted alone or in tandem with awards other than stock options or SARs. Dividend equivalents are credited as of dividend payment dates during the period between a specified date and the date such award terminates or expires, as determined by the administrator. In addition, dividend equivalents with respect to shares covered by a performance award will only be paid to the participant at the same time or times, and to the same extent, that the vesting conditions, if any, are subsequently satisfied and the performance award vests with respect to such shares.

Any award may be granted as a performance award, meaning that the award will be subject to vesting and/or payment based on the attainment of specified performance goals.

Change in Control. In the event of a change in control, unless the administrator elects to terminate an award in exchange for cash, rights, or other property, or cause an award to accelerate in full prior to the change in control, such award will continue in effect or be assumed or substituted by the acquirer, provided that any performance-based portion of the award will be subject to the terms and conditions of the applicable award agreement.
In the event the acquirer refuses to assume or replace awards granted under the 2026 Plan, then prior to the completion of such transaction, such awards will be subject to accelerated vesting such that 100% of such awards will become vested and exercisable or payable, as applicable. The administrator may also make appropriate adjustments to awards under the 2026 Plan and is authorized to provide for the acceleration, cash-out, termination, assumption, substitution, or conversion of such awards in the event of a change in control or certain other unusual or nonrecurring events or transactions.
Adjustments of Awards. In the event of any stock dividend or other distribution, stock split, reverse stock split, reorganization, combination or exchange of shares, merger, consolidation, split-up, spin-off, recapitalization, repurchase, or any other corporate event affecting the number of outstanding shares of our Class A common stock or
the share price of our Class A common stock that would require adjustments to the 2026 Plan or any awards under the 2026 Plan in order to prevent the dilution or enlargement of the potential benefits intended to be made available thereunder, the administrator will make appropriate, proportionate adjustments to: (i) the aggregate number and type of shares subject to the 2026 Plan; (ii) the number and kind of shares subject to outstanding awards and the terms and conditions of outstanding awards (including, without limitation, any applicable performance targets or criteria with respect to such awards); and (iii) the grant or exercise price per share of any outstanding awards under the 2026 Plan.
Amendment and Termination. The administrator may terminate, amend, or modify the 2026 Plan at any time and from time to time. However, we must generally obtain stockholder approval to the extent required by applicable law, rule, or regulation (including any applicable stock exchange rule). Notwithstanding the foregoing, an option may be amended to reduce the per share exercise price below the per share exercise price of such option on the grant date, and options may be granted in exchange for, or in connection with, the cancellation or surrender of options having a higher per share exercise price, without receiving additional stockholder approval. No ISOs may be granted pursuant to the 2026 Plan after the tenth anniversary of the effective date of the 2026 Plan, and no additional annual share increases to the 2026 Plan’s aggregate share limit will occur from and after such anniversary. Any award that is outstanding on the termination date of the 2026 Plan will remain in force according to the terms of the 2026 Plan and the applicable award agreement.
2016 Equity Incentive Plan
We currently maintain the 2016 Plan, which became effective on May 5, 2016 upon its adoption by our board of directors and approval of our stockholders.
Following this offering and in connection with the effectiveness of our 2026 Plan, the 2016 Plan will terminate and no further awards will be granted under the 2016 Plan. However, all outstanding awards will continue to be governed by their existing terms.
Administration. Our board of directors, the compensation committee, or another committee thereof appointed by our board of directors, has the authority to administer the 2016 Plan and the awards granted under it. The administrator has the authority to select the service providers to whom awards will be granted under the 2016 Plan, the number of shares to be subject to those awards under the 2016 Plan, and the terms and conditions of the awards granted. In addition, the administrator has the authority to construe and interpret the 2016 Plan, to adopt rules relating to the 2016 Plan, and to exercise such other powers that it deems necessary and desirable to promote the best interests of the company and that are consistent with the terms of the 2016 Plan.
Share Reserve. We have reserved an aggregate of shares of our common stock for issuance under the 2016 Plan. As of December 31, 2025, after giving effect to the Common Stock Reclassification and the RSU Net Settlement, options to purchase a total of shares of our Class B common stock were outstanding, shares of restricted stock acquired upon exercise of options prior to vesting were outstanding, RSUs covering shares of our Class B common stock were outstanding, and shares remained available for future grants.
Awards. The 2016 Plan provides that the administrator may grant or issue options (including ISOs and NSOs), restricted stock, RSUs, and SARs to employees, directors, and consultants, provided that only employees may be granted ISOs.
•Stock Options. The 2016 Plan provides for the grant of ISOs or NSOs. ISOs may be granted only to employees. NSOs may be granted to employees, directors, or consultants.
The exercise price of ISOs granted to employees who at the time of grant own stock representing more than 10% of the voting power of all classes of our common stock may not be less than 110% of the fair market value per share of our common stock on the date of grant, and the exercise price of ISOs granted to any other employees may not be less than 100% of the fair market value per share of our common stock on the date of grant. The exercise price of NSOs to employees, directors, or consultants may not be less than 100% of the fair market value per share of our common stock on the date of grant.
•Restricted Stock. The 2016 Plan provides for the grant of restricted stock. Each share of restricted stock that is accepted will be governed by a restricted stock purchase agreement, which will detail the restrictions on transferability, risk of forfeiture, and other restrictions the administrator approves. In general, restricted stock acquired upon exercise of a stock purchase right may not be sold, transferred, pledged, hypothecated, margined, or otherwise encumbered until restrictions are removed or expire. Holders of restricted stock, unlike recipients of stock options, will have voting rights and will have the right to receive dividends, if any, prior to the time when the restrictions lapse.
•RSUs. The 2016 Plan provides for the grant of RSUs. Each RSU represents the unfunded, unsecured right to receive a share of our common stock or an amount of cash or other consideration equal to the fair market value of a share of our common stock. The terms of each award of RSUs are set forth in an RSU agreement.
•SARs. The 2016 Plan provides for the grant of SARs. SARs may be settled in cash or shares (which may consist of restricted stock or RSUs, or a combination thereof) having a value equal to the product of (i) the excess of the fair market value of a share on the date of exercise over the exercise price and (ii) the number of shares with respect to which the SARs are being exercised. All grants of SARs will be evidenced by an award agreement.
Adjustments of Awards.
In the event of any dividend or other distribution, reorganization, merger, consolidation, combination, repurchase, liquidation, dissolution, or sale, transfer, exchange, or other disposition of substantially all of our assets, or exchange of shares or other similar corporate transaction or event, the administrator will make adjustments to the number and class of shares available for issuance under the 2016 Plan and the number, class, and price of shares subject to outstanding awards, in order to prevent dilution or enlargement of benefits.
Change in Control. In the event of a change in control, any outstanding awards granted under the 2016 Plan will be subject to the agreement evidencing the change in control. The successor or acquiring entity may elect for such outstanding awards to be assumed or substituted. Otherwise, in the event of a merger or change in control, the agreement evidencing the transaction may provide broadly for the treatment of each outstanding award, including providing for awards to terminate or accelerate, or for awards to terminate in exchange for cash or other property.
Amendment and Termination. Our board of directors may amend or terminate the 2016 Plan or any portion thereof at any time. However, no amendment may impair the rights of a holder of an outstanding option grant without the holder’s consent, and any action by our board of directors to increase the number of shares subject to the plan or extend the term of the plan is subject to the approval of our stockholders. Additionally, an amendment of the plan is subject to the approval of our stockholders where such approval is required by applicable law.
2026 Employee Stock Purchase Plan
We have adopted the ESPP, which will become effective on the day immediately prior to the date of effectiveness of the registration statement of which this prospectus forms a part. The ESPP is designed to allow our eligible employees to purchase shares of our Class A common stock, at periodic intervals, with their accumulated payroll deductions. The ESPP is intended to qualify under Section 423 of the Code. The material terms of the ESPP are summarized below.
Administration. Subject to the terms and conditions of the ESPP, our compensation committee will administer the ESPP. Our compensation committee may delegate administrative tasks under the ESPP to an agent and/or employees to assist in the administration of the ESPP. The administrator will have the discretionary authority to administer and interpret the ESPP. Interpretations and constructions by the administrator of any provision of the ESPP or of any rights thereunder will be conclusive and binding on all persons. We will bear all expenses and liabilities incurred by the ESPP administrator.
Share Reserve. The maximum number of shares of our Class A common stock that will be authorized for sale under the ESPP is equal to the sum of (i) shares of Class A common stock and (ii) an annual increase on the first day of each fiscal year beginning in 2027 and ending in 2036, equal to the lesser of (A) 1% of the sum of (1) all shares of all classes of our common stock and (2) the number of shares issuable upon the exercise of warrants to purchase shares of our common stock with an exercise price per share of $0.01 or less, in each case, outstanding on the last day of the immediately preceding fiscal year and (B) such number of shares of Class A common stock as determined by our board of directors; provided, however, that no more than shares of our Class A common stock may be issued under the ESPP. The shares reserved for issuance under the ESPP may be authorized but unissued shares or reacquired shares.
Eligibility. Employees eligible to participate in the ESPP for a given offering period generally include employees who are employed by us or one of our subsidiaries on the first day of the offering period. Our employees (and, if applicable, any employees of our subsidiaries) who customarily work less than five months in a calendar year or are customarily scheduled to work less than 20 hours per week will not be eligible to participate in the ESPP. Finally, an employee who owns (or is deemed to own through attribution) 5% or more of the combined voting power or value of all our classes of stock or of one of our subsidiaries will not be allowed to participate in the ESPP.
Participation. Employees will enroll under the ESPP by completing a payroll deduction form permitting the deduction from their compensation of at least 1% of their base compensation but not more than 15% of their base compensation.
Such payroll deductions may be expressed as either a whole number percentage or a fixed dollar amount, and the accumulated deductions will be applied to the purchase of shares on each purchase date. However, a participant may not purchase more than 10,000 shares in each offering period and may not accrue the right to purchase shares of Class A common stock at a rate that exceeds $25,000 in fair market value of shares of our Class A common stock (determined at the time the option is granted) for each calendar year the option is outstanding (as determined in accordance with Section 423 of the Code). The ESPP administrator has the authority to change these limitations for any particular offering period.
Offering. Under the ESPP, participants are offered the option to purchase shares of our Class A common stock at a discount during a series of successive offering periods, the duration and timing of which will be determined by the ESPP administrator. However, in no event may an offering period be longer than 27 months in length. The first offering period is currently expected to commence on the date of effectiveness of the registration statement of which this prospectus forms a part and end in February 2027. The option purchase price will be the lower of 85% of the closing trading price per share of our Class A common stock as of the first date of an offering period in which a participant is enrolled or 85% of the closing trading price per share as of the purchase date, which will occur on the last day of each purchase period within an offering period. Unless a participant has previously cancelled his or her participation in the ESPP before the purchase date, the participant will be deemed to have exercised his or her option in full as of each purchase date. Upon exercise, the participant will purchase the number of whole shares that his or her accumulated payroll deductions will buy at the option purchase price, subject to the participation limitations listed above.
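The purchase mechanics described above reduce to a short calculation. The sketch below is illustrative only (it is not the plan document): the inputs are hypothetical, and the per-period share cap is applied in a simplified way.

```python
def espp_purchase(offering_start_price: float,
                  purchase_date_price: float,
                  accumulated_deductions: float,
                  per_period_share_cap: int = 10_000) -> int:
    """Simplified sketch of the ESPP purchase formula described above.

    The price paid is 85% of the lower of the closing price on the first
    day of the offering period and the closing price on the purchase date;
    accumulated deductions buy whole shares only, subject to the
    per-offering-period share cap.
    """
    price = 0.85 * min(offering_start_price, purchase_date_price)
    shares = int(accumulated_deductions // price)  # whole shares only
    return min(shares, per_period_share_cap)

# Hypothetical example: the stock fell from $40.00 to $30.00 during the
# period, so $5,100 of deductions buy at 85% of $30.00 = $25.50 per share.
print(espp_purchase(40.00, 30.00, 5_100.00))  # 200 shares
```

Note that the "lower of" structure means participants benefit whether the stock rises or falls during the offering period; the 85% discount is always applied to the more favorable of the two reference prices.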
A participant may cancel his or her payroll deduction authorization at any time prior to the end of the offering period. Upon cancellation, the participant will have the option to either (i) receive a refund of the participant’s account balance in cash without interest or (ii) exercise the participant’s option for the current offering period for the maximum number of shares of Class A common stock on the applicable purchase date, with the remaining account balance refunded in cash without interest. Following at least one payroll deduction, a participant may also decrease (but not increase) his or her payroll deduction authorization once during any offering period. If a participant wants to increase or decrease the rate of payroll withholding, he or she may do so effective for the next offering period by submitting a new form before the offering period for which such change is to be effective. A participant may not assign, transfer, pledge or otherwise dispose of (other than by will or the laws of descent and distribution) payroll deductions credited to a participant’s account or any rights to exercise an option or to
receive shares of our Class A common stock under the ESPP, and during a participant’s lifetime, options in the ESPP shall be exercisable only by such participant. Any such attempt at assignment, transfer, pledge, or other disposition will not be given effect.
Adjustments Upon Changes in Recapitalization, Dissolution, Liquidation, Merger, or Asset Sale. In the event of any increase or decrease in the number of issued shares of our Class A common stock resulting from a stock split, reverse stock split, stock dividend, combination, or reclassification of the Class A common stock, or any other increase or decrease in the number of shares of Class A common stock effected without receipt of consideration by us, we will proportionately adjust the aggregate number of shares of our Class A common stock offered under the ESPP, the number and price of shares which any participant has elected to purchase under the ESPP, and the maximum number of shares which a participant may elect to purchase in any single offering period. If there is a proposal to dissolve or liquidate us, then the ESPP will terminate immediately prior to the consummation of such proposed dissolution or liquidation, and any offering period then in progress will be shortened by setting a new purchase date to take place before the date of our dissolution or liquidation. We will notify each participant of such change in writing prior to the new exercise date. If we undergo a merger with or into another corporation or sell all or substantially all of our assets, each outstanding option will be assumed, or an equivalent option substituted, by the successor corporation or the parent or subsidiary of the successor corporation. If the successor corporation refuses to assume the outstanding options or substitute equivalent options, then any offering period then in progress will be shortened by setting a new purchase date to take place before the date of our proposed sale or merger.
We will notify each participant of such change in writing prior to the new exercise date.
Amendment and Termination. Our board of directors may amend, suspend, or terminate the ESPP at any time. However, the board of directors may not amend the ESPP without obtaining stockholder approval within 12 months before or after such amendment to the extent required by applicable laws.
Director Compensation
For the year ended December 31, 2025, we did not have a formalized non-employee director compensation program, and none of our non-employee directors was paid cash compensation or granted an option or stock award in connection with the non-employee director’s service to us during 2025. As of December 31, 2025, Paul Auvil held an option to purchase 215,000 shares of our Class A common stock, Glenda Dorchak held an option to purchase 215,000 shares of our Class A common stock, and Thomas Lantzsch held 13,918 RSUs. In connection with Ms. Dorchak stepping down from our board of directors in April 2026, the vesting of her option was partially accelerated such that a total of 120,000 shares were deemed vested. Prior to Mr. Lantzsch stepping down from our board of directors in April 2026, he was granted an award of 5,000 fully vested RSUs, and the service- and liquidity-based vesting conditions on his 13,918 RSUs were fully accelerated. We have adopted a non-employee director compensation program (the “Director Compensation Program”) that, effective upon the completion of this offering, provides for annual retainers for board and committee service and the automatic grant of initial and annual equity awards. Under the Director Compensation Program, our non-employee directors will receive cash compensation, paid quarterly in arrears, as follows:
•Each non-employee director (other than the non-employee chairperson of our board of directors or the lead independent director) will receive a cash retainer in the amount of $60,000 per year.
•The non-employee chairperson of our board of directors or the lead independent director will receive a cash retainer in the amount of $70,000 per year.
•The chairperson of the audit committee will receive a cash retainer in the amount of $25,000 per year for such chairperson’s service on the audit committee. Each non-chairperson member of the audit committee will receive a cash retainer in the amount of $12,500 per year for such member’s service on the audit committee.
•The chairperson of the compensation committee will receive a cash retainer in the amount of $20,000 per year for such chairperson’s service on the compensation committee. Each non-chairperson member of the compensation committee will receive a cash retainer in the amount of $10,000 per year for such member’s service on the compensation committee.
•The chairperson of the nominating and corporate governance committee will receive a cash retainer in the amount of $10,000 per year for such chairperson’s service on the nominating and corporate governance committee. Each non-chairperson member of the nominating and corporate governance committee will receive a cash retainer in the amount of $5,000 per year for such member’s service on the nominating and corporate governance committee.
At its discretion, our board of directors or compensation committee may allow non-employee directors to elect to convert all or a portion of their cash retainer into a number of RSUs granted under the 2026 Plan, which will be fully vested on the date of grant. Under the Director Compensation Program, each non-employee director on the date the non-employee director is appointed to our board of directors will automatically be granted that number of RSUs (the “Initial Grant”) under the 2026 Plan determined by dividing $3,000,000 by the average closing trading price of a share of our Class A common stock over the most recent 30 trading days as of the date of grant. The Initial Grant will vest in substantially equal annual installments over three years, subject to continued service on our board of directors through the applicable vesting date.
In addition, on the date of each annual meeting of our stockholders, each non-employee director who will continue to serve as a non-employee director immediately following such annual meeting will automatically be granted that number of RSUs (the “Annual Grant”) under the 2026 Plan determined by dividing $250,000 by the average closing trading price of a share of our Class A common stock over the most recent 30 trading days as of the date of grant. The Annual Grant will vest in full on the earlier of (i) the first anniversary of the grant date and (ii) immediately prior to the annual meeting of our stockholders following the date of grant, subject to continued service on our board of directors through the applicable vesting date. At its discretion, our board of directors or compensation committee may allow non-employee directors to elect to defer the settlement of RSUs granted to them under the Director Compensation Program. Pursuant to the Director Compensation Program, upon a change in control transaction, all outstanding equity awards held by our non-employee directors will vest in full.
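The grant-sizing formula above is a simple division. The following sketch is illustrative only: the $50.00 trailing-average price is a hypothetical input, and rounding down to a whole RSU is an assumption (the program's actual rounding convention is not stated here).

```python
def director_grant_rsus(grant_value: float, avg_30day_close: float) -> int:
    """Number of RSUs granted: the fixed grant value divided by the
    average closing price over the most recent 30 trading days,
    rounded down to whole units (rounding convention assumed)."""
    return int(grant_value // avg_30day_close)

# Hypothetical $50.00 average closing price over the trailing 30 trading days:
initial = director_grant_rsus(3_000_000, 50.00)  # Initial Grant
annual = director_grant_rsus(250_000, 50.00)     # Annual Grant
print(initial, annual)  # 60000 5000
```

Sizing the grant off a 30-trading-day average, rather than a single day's close, dampens the effect of short-term price swings on the number of RSUs a new director receives.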
CERTAIN RELATIONSHIPS AND RELATED PARTY TRANSACTIONS
The following includes a summary of transactions since January 1, 2023, and any currently proposed transactions, to which we were or are to be a participant, in which (i) the amount involved exceeded or will exceed $120,000 and (ii) any of our directors, executive officers, or holders of more than 5% of our capital stock, or any affiliate or member of the immediate family of the foregoing persons or entities, had or will have a direct or indirect material interest, other than compensation and other arrangements that are described under the section titled “Executive and Director Compensation.” We believe the terms obtained or consideration that we paid or received, as applicable, in connection with the transactions described below were comparable to the terms available or the amounts that we would pay or receive, as applicable, in arm’s-length transactions.
Redeemable Convertible Preferred Stock Financings
Series F-1 Redeemable Convertible Preferred Stock Financing
In May 2024, we entered into a Series F-1 redeemable convertible preferred stock purchase agreement, as subsequently amended and restated in August 2025, with various investors pursuant to which we agreed to issue and sell 5,798,089 shares of our Series F-1 redeemable convertible preferred stock at a purchase price of $14.66 per share, for aggregate gross proceeds of approximately $85.0 million. In July and August 2024, various investors purchased an aggregate of 2,728,512 shares of our Series F-1 redeemable convertible preferred stock for an aggregate purchase price of approximately $40.0 million. In September 2024, Alpha Wave Ventures II, LP, an existing stockholder, purchased 3,069,577 shares of our Series F-1 redeemable convertible preferred stock for an aggregate purchase price of approximately $45.0 million.
Entities affiliated with Alpha Wave collectively beneficially own more than 5% of our outstanding capital stock following the purchase of our Series F-1 redeemable convertible preferred stock. See the section titled “Principal Stockholders” for additional information.
Series G Redeemable Convertible Preferred Stock Financing
In September 2025, we entered into a Series G redeemable convertible preferred stock purchase agreement with various investors pursuant to which we issued and sold an aggregate of 30,359,557 shares of our Series G redeemable convertible preferred stock at a purchase price of $36.2324 per share, for an aggregate purchase price of approximately $1.1 billion in multiple closings through October 2025. The table below sets forth the number of shares of our Series G redeemable convertible preferred stock purchased by holders of more than 5% of our capital stock and their affiliated entities. None of our directors or executive officers purchased shares of Series G redeemable convertible preferred stock.
Name / Shares of Series G Redeemable Convertible Preferred Stock / Aggregate Purchase Price
Entities affiliated with Alpha Wave / 1,241,982 / $44,999,989
Entities affiliated with Benchmark / 689,990 / $24,999,994
Entities affiliated with Fidelity / 19,319,724 / $699,999,968
_______________
(1)See the section titled “Principal Stockholders” for additional information regarding these stockholders and their equity holdings.
(2)Entities affiliated with Alpha Wave collectively beneficially own more than 5% of our outstanding capital stock.
(3)Entities affiliated with Benchmark collectively beneficially own more than 5% of our outstanding capital stock. Eric Vishria, a member of our board of directors, is a General Partner of Benchmark.
(4)Entities affiliated with Fidelity collectively beneficially own more than 5% of our outstanding capital stock following the purchase of our Series G redeemable convertible preferred stock.
Series H Redeemable Convertible Preferred Stock Financing
In January 2026, we entered into a Series H redeemable convertible preferred stock purchase agreement with various investors pursuant to which we issued and sold an aggregate of 11,394,059 shares of our Series H redeemable convertible preferred stock at a purchase price of $89.0156 per share, for an aggregate purchase price of approximately $1.0 billion in multiple closings through February 2026. The table below sets forth the number of shares of our Series H redeemable convertible preferred stock purchased by holders of more than 5% of our capital stock and their affiliated entities. None of our directors or executive officers purchased shares of Series H redeemable convertible preferred stock.
Name / Shares of Series H Redeemable Convertible Preferred Stock / Aggregate Purchase Price
Entities affiliated with Alpha Wave / 1,123,398 / $99,999,947
Entities affiliated with Benchmark / 2,527,646 / $224,999,925
Entities affiliated with Fidelity / 1,123,398 / $99,999,947
_______________
(1)See the section titled “Principal Stockholders” for additional information regarding these stockholders and their equity holdings.
(2)Entities affiliated with Alpha Wave collectively beneficially own more than 5% of our outstanding capital stock.
(3)Entities affiliated with Benchmark collectively beneficially own more than 5% of our outstanding capital stock. Eric Vishria, a member of our board of directors, is a General Partner of Benchmark.
(4)Entities affiliated with Fidelity collectively beneficially own more than 5% of our outstanding capital stock.
Tender Offer
In December 2025, we completed a tender offer for shares of our outstanding Class B common stock from certain of our employees and purchased an aggregate of 2,156,765 shares of our outstanding Class B common stock at a purchase price of $36.2324 per share, for an aggregate gross purchase price of $78.1 million (the “2025 Tender Offer”). We repurchased an aggregate of 27,599 shares of our Class B common stock from Robert Komin, our Chief Financial Officer, and an aggregate of 27,599 shares of our Class B common stock from Dhiraj Mallick, our Chief Operating Officer, in the 2025 Tender Offer, for an aggregate gross purchase price of $1.0 million each.
OpenAI Relationship
Master Relationship Agreement
In December 2025, we entered into the MRA with OpenAI, under which OpenAI agreed to purchase the Committed Capacity and related services. We expect to deploy the Committed Capacity in tranches during 2026 through 2028, with each tranche of deployed capacity having a term of three or four years that is extendable by OpenAI to a maximum of five years in total. In addition to the Committed Capacity, which we are contractually obligated to deliver and OpenAI is contractually obligated to purchase, OpenAI has the option to purchase the Additional Capacity for deployment in tranches by the end of 2030.
In January 2026, OpenAI advanced to us the $1.0 billion Working Capital Loan to accelerate our engineering development, manufacturing scale-up, and data center expansion. The Working Capital Loan can be repaid in cash or through the delivery of compute capacity or the purchase of hardware or other services under the MRA. The Working Capital Loan is subject to a secured promissory note with a maturity date of no later than December 31, 2032.
OpenAI Warrant
In December 2025, we issued the OpenAI Warrant to OpenAI in connection with the execution of the MRA. Pursuant to the OpenAI Warrant, OpenAI has the right to purchase up to 33,445,026 shares of our Class N common stock at an exercise price of $0.00001 per share. The OpenAI Warrant expires on the earlier of December 24, 2035 and five business days following the first date on which there are no binding capacity purchase commitments or contractually obligated current or future payments under the MRA. The shares of Class N common stock underlying the OpenAI Warrant vest and become exercisable upon the occurrence of certain events, as set forth below:
•4,459,337 shares vested in January 2026 upon our receipt of the Working Capital Loan;
•5,574,171 shares will vest upon the earlier of (i) the first date that our market capitalization exceeds $40 billion, measured by the product of (a) the number of shares of common stock outstanding (on an as-converted basis for each authorized class or series of our common stock), multiplied by (b) the 30-day volume-weighted average closing price per share of our Class A common stock on Nasdaq, and (ii) receipt by us of certain fee payments from OpenAI under the MRA; and
•23,411,518 shares in the aggregate will vest in multiple tranches on certain committed delivery dates of compute capacity pursuant to the MRA, including committed delivery dates to be mutually agreed upon for the Additional Capacity (if any).
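The market-capitalization trigger in the second vesting tranche above is a simple product of two quantities. The sketch below is illustrative only; the share count and 30-day VWAP are hypothetical inputs.

```python
def market_cap_trigger_met(shares_outstanding_as_converted: int,
                           vwap_30day: float,
                           threshold: float = 40e9) -> bool:
    """Market-capitalization test described above: as-converted shares
    outstanding multiplied by the 30-day volume-weighted average closing
    price, compared against the $40 billion threshold."""
    return shares_outstanding_as_converted * vwap_30day > threshold

# Hypothetical: 800 million as-converted shares at a $55.00 30-day VWAP
# gives a $44.0 billion market capitalization, above the $40 billion bar.
print(market_cap_trigger_met(800_000_000, 55.00))  # True
```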
The OpenAI Warrant will only fully vest if OpenAI exercises all options to purchase Additional Capacity under the MRA, such that a total of 2 GW of AI inference compute capacity and related services is purchased by OpenAI.
Registration Rights
Amended and Restated Investors’ Rights Agreement
We are party to an amended and restated investors’ rights agreement which provides, among other things, that certain holders of our capital stock, including entities affiliated with Alpha Wave, Benchmark, Eclipse, Fidelity, and Foundation Capital, each of which holds more than 5% of our outstanding capital stock, have the right to demand that we file a registration statement. This agreement also provides that such parties and others, including Andrew D. Feldman, our Chief Executive Officer, President, and a member of our board of directors, and Sean Lie, our Chief Technology Officer, have the right to request that their shares of our capital stock be included on a registration statement that we are otherwise filing. See the section titled “Description of Capital Stock—Registration Rights—Amended and Restated Investors’ Rights Agreement” for additional information regarding these registration rights.
OpenAI Registration Rights Agreement
In December 2025, we entered into a registration rights agreement with OpenAI in connection with the OpenAI Warrant, pursuant to which, among other things, OpenAI has the right to demand that we file a registration statement to register for resale the shares of Class N common stock issued or issuable to OpenAI upon the exercise of the OpenAI Warrant. See the section titled “Description of Capital Stock—Registration Rights—OpenAI Registration Rights Agreement” for additional information regarding these registration rights.
Right of First Refusal

Pursuant to our equity compensation plans and certain agreements with our stockholders, including a right of first refusal and co-sale agreement with certain holders of our capital stock, including entities affiliated with Alpha Wave, Benchmark, Eclipse, Fidelity, and Foundation Capital, each of which holds more than 5% of our outstanding capital stock, and Messrs. Feldman, Lie, and Mallick, we or our assignees have a right to purchase shares of our capital stock which certain stockholders propose to sell to other parties. This right under the right of first refusal and co-sale agreement will terminate upon the effectiveness of the registration statement of which this prospectus forms a part. Since January 1, 2023, we have waived our right of first refusal in connection with secondary sales of shares of our capital stock, including sales by certain of our executive officers.

Voting Agreement

We are party to an amended and restated voting agreement under which certain holders of our capital stock, including entities affiliated with Alpha Wave, Benchmark, Eclipse, Fidelity, and Foundation Capital, each of which holds more than 5% of our outstanding capital stock, and Messrs. Feldman, Lie, and Mallick have agreed as to the manner in which they will vote their shares of our capital stock on certain matters, including with respect to the election of directors. Upon the effectiveness of the registration statement of which this prospectus forms a part, the voting agreement will terminate and none of our stockholders will have any special rights regarding the election or designation of members of our board of directors.
Directed Share Program

At our request, the underwriters have reserved up to     % of the shares of Class A common stock offered by this prospectus for sale at the initial public offering price through a directed share program to certain persons identified by our management and certain long-tenured employees, which may include parties with whom we have a business relationship and friends and family of management and such employees. See the section titled “Underwriters—Directed Share Program” for additional information.

Other Transactions

We have entered into offer letter agreements with certain of our executive officers that, among other things, provide for certain compensatory and change in control benefits. For a description of these agreements with our named executive officers, see the section titled “Executive and Director Compensation—Executive Compensation Arrangements.” We have also granted stock options, RSUs, and restricted stock to our executive officers and directors. For a description of these equity awards, see the section titled “Executive and Director Compensation—Outstanding Equity Awards at 2025 Year End.”

Director and Officer Indemnification

We have entered into indemnification agreements with certain of our current executive officers and directors, and intend to enter into new indemnification agreements with each of our current executive officers and directors before the completion of this offering. Our amended and restated certificate of incorporation also provides that, to the fullest extent permitted by law, we will indemnify any officer or director of our company against all damages, claims, and liabilities arising out of the fact that the person is or was our officer or director, or served any other enterprise at our request as an officer or director. Amending this provision will not reduce our indemnification obligations relating to actions taken before an amendment.
Related Person Transaction Policy

We have a written related person transaction policy, to be effective upon the completion of this offering, that applies to our executive officers, directors, director nominees, holders of more than 5% of any class of our voting securities, and any member of the immediate family of, and any entity affiliated with, any of the foregoing persons. Such persons will not be permitted to enter into a related person transaction with us without the prior consent of our audit committee, or other independent members of our board of directors in the event it is inappropriate for our audit committee to review such transaction due to a conflict of interest.

Any request for us to enter into a transaction with an executive officer, director, director nominee, principal stockholder, or any of their immediate family members or affiliates, in which the amount involved exceeds $120,000 must first be presented to our audit committee for review, consideration, and approval. In approving or rejecting any such proposal, our audit committee will consider the facts and circumstances available and deemed relevant to it, including, but not limited to, the commercial reasonableness of the terms of the transaction and the materiality and character of the related person’s direct or indirect interest in the transaction. All of the transactions described in this section occurred prior to the adoption of this policy.