BUSINESS

Our Mission

We believe AI is the most transformative technology of our generation. Our mission is to accelerate AI by making it faster, easier to use, and more energy efficient, making AI accessible around the world.

Company Overview

Cerebras is an AI company. We design processors for AI training and inference. We build AI systems to power, cool, and feed the processors data. We develop software to link these systems together into industry-leading supercomputers that are simple to use, even for the most complicated AI work, using familiar ML frameworks like PyTorch. Customers use our supercomputers to train industry-leading models. We use these supercomputers to run inference at speeds unobtainable on alternative commercial technologies. We deliver these AI capabilities to our customers on premise and via the cloud.

AI compute comprises training and inference. For training, many of our customers have achieved over 10 times faster training time-to-solution compared to leading 8-way GPU systems of the same generation and have produced their own state-of-the-art models. For inference, we deliver over 10 times faster output generation speeds than GPU-based solutions from top CSPs, as benchmarked on leading open-source models. This enables real-time interactivity for AI applications and the development of smarter, more capable AI agents.

The Cerebras solution requires less infrastructure, is simpler to use, and consumes less power than leading GPU architectures. It enables faster development and eliminates the complex distributed compute work required when using thousands of GPUs. Cerebras democratizes AI, enabling organizations with less in-house AI or distributed computing expertise to leverage the full potential of AI.

The rise of AI presents a unique set of compute challenges.
Unlike other computational workloads, both training and inference require a huge number of relatively simple calculations, the results of which necessitate constant movement to and from memory, and to and from millions or tens of millions of compute cores. This traditionally demands hundreds or thousands of chips, and puts tremendous pressure on memory, memory bandwidth, and the communication fabric linking them all together.

Cerebras started with a simple question: How can we design a processor purpose-built to meet these exact challenges? If we were to start with a clean sheet, how would we avoid carrying forward the tradeoffs made for graphics and other workloads, and ensure that every transistor is optimized for the specific challenges presented by AI?

Our answer is wafer-scale integration. Cerebras solved a problem that was open for the entire 75-year history of the computer industry: building chips the size of full silicon wafers. The third-generation Cerebras Wafer-Scale Engine (the “WSE-3”) is the largest chip ever sold. It is 57 times larger than the leading commercially available GPU. It has 52 times more compute cores, 880 times more on-chip memory (44 gigabytes), and 7,000 times more memory bandwidth (21 petabytes per second). The sheer size of the wafer-scale chip allows us to keep more work on-silicon and minimize the time-consuming, power-hungry movement of data. This enables Cerebras customers to solve problems in less time and using less power.

Our AI compute platform combines processors, systems, software, and AI expert services to deliver massive acceleration on even the largest, most capable AI models. It substantially reduces training times and inference latencies, while reducing programming complexity.

Our business model is designed to meet the needs of our customers. Organizations seeking control over their data and AI compute infrastructure can purchase Cerebras AI supercomputers for on-premise deployment.
Those that want the flexibility of a cloud-based platform can purchase Cerebras high-performance AI compute via a consumption-based model through the Cerebras Cloud, or via our partner’s cloud. We offer customers the flexibility to choose the solution that best aligns with their budgetary, security, and scalability requirements, and some customers choose to use both options simultaneously.


We have established a growing set of customer engagements spanning CSPs, leading enterprises, Sovereign AI programs, national laboratories, research institutions, and other innovators at the forefront of AI. While a substantial portion of our current business is supported by one primary customer, we are actively seeking to expand our reach and diversify our customer base. We collaborate with our customers to harness the power of AI to tackle their most significant challenges and drive breakthroughs across industries.

Bloomberg Intelligence estimates that the AI market will grow to $1.3 trillion by 2032. Consumer and enterprise models like Google’s Gemini, Meta’s Llama, and OpenAI’s ChatGPT have driven demand for AI infrastructure training and inference solutions, powering AI applications such as specialized assistants, agents, and services. We believe that our AI compute platform addresses a large and growing AI hardware and software opportunity across training and inference, as well as software and expert services. We believe that further adoption of AI, accelerated by the advent of GenAI, and the widespread integration of AI into business processes, will rapidly expand our total addressable market (“TAM”) from an estimated $131 billion in 2024 to $453 billion by 2027, a compounded annual growth rate (“CAGR”) of 51%.

We have experienced rapid growth, with revenue of $78.7 million and $24.6 million for the years ended December 31, 2023 and 2022, respectively, representing year-over-year growth of 220%. During the six months ended June 30, 2024 and 2023, we generated $136.4 million and $8.7 million in revenue, respectively. Since our inception, we have incurred operating losses and negative cash flows to develop, market, and expand our product portfolio and to continue our research and development activities.
Our net loss for the years ended December 31, 2023 and 2022 was $127.2 million and $177.7 million, respectively, representing a year-over-year reduction of 28%. Our net loss for the six months ended June 30, 2024 and 2023 was $66.6 million and $77.8 million, respectively, representing a year-over-year reduction of 14%.

Industry Background

Over the past 40 years, the computer industry has followed a clear pattern: as major new computational workloads with distinct characteristics emerged, so too have new compute architectures. For example, the general-purpose needs of personal computing led to the x86 CPU. The low-energy needs of mobile devices resulted in the widespread adoption of ARM. Advancements in graphics demanded greater rendering parallelism, resulting in the creation of the GPU. With each new compute paradigm, technologists first attempted to adapt existing compute architectures to these workloads. But in each case, a new purpose-built architecture was ultimately needed to unlock the potential of the new paradigm. We believe this pattern is continuing with the rise of AI – the next major compute workload, with its own unique computational demands.

In 2023, IDC estimated that the worldwide economic impact of GenAI would be close to $10 trillion by the end of 2033. This growth has been accelerated by the emergence of GenAI, a new class of powerful AI models that can create new content and reason across broad domains and multiple data types. These breakthrough capabilities translate to tremendous potential value creation and have driven rapid GenAI adoption. For example, ChatGPT took only five days to reach one million users (a feat that took Instagram 10 weeks and Twitter two years), and similarly, Meta’s open-source Llama 2 model attracted hundreds of thousands of AI developers within days of its release.
The Computational Demands of Training and Inference

The explosion of GenAI in the past 18 months has been likened to the “iPhone” moment, marking a pivotal shift in the industry. GenAI is already consuming compute resources at an unprecedented rate, faster than any other workload in history. As more powerful models are created and new use cases for GenAI are brought to market, we believe that demand for powerful and efficient AI compute solutions will continue to grow rapidly. The recent growth in the U.S. data center market supports this belief. Over the course of 2023, demand for U.S. third-party data centers grew by nearly 50%, with AI driving significant growth. Major companies like Microsoft, Google, and Amazon are driving 500+ MW deals, whereas in 2015, data center deals were commonly in the 5 MW range.


Both training and inference demand immense compute, each with unique compute, memory, and memory bandwidth requirements. They represent two stages in the continuous lifecycle of AI models. Once trained, a model is “served” in production and used for inference. As part of this cycle, models in production are continuously being optimized to use fewer compute resources, and while that is happening, new and more powerful models are being trained, leading to the eventual obsolescence of the previous model—starting the cycle over.

*business1a.jpg*

Model Optimization Is Important, but As You Optimize, Models Continue to Evolve

For training, the compute required is a function of a model’s size (number of parameters) and the amount of data used (number of tokens). The most capable GenAI models today have trillions of parameters and are trained on trillions of tokens. These models demonstrate superior accuracy and capabilities compared to small models due to their greater capacity to discern nuanced patterns from the data. As the industry has pushed to achieve greater AI capabilities, the size of GenAI models has grown, and we expect this trend to continue.

The increase in model and data sizes has led to a dramatic surge in computational demand. As shown below, requirements to train GenAI models have grown 40,000-fold in just five years. Today, training for GenAI requires enormous GPU clusters, sophisticated engineering teams, and months of time for a single run. A training run can cost over a hundred million dollars, and improving the model and keeping it fresh with new data requires additional fine-tuning and regular re-training.
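The relationship between model size, data size, and training compute can be made concrete with a widely used scaling-laws rule of thumb, roughly six floating point operations per parameter per training token. This approximation and the example model sizes below are illustrative assumptions, not figures from this document:

```python
# Rule-of-thumb training-compute estimate: FLOPs ~= 6 * parameters * tokens.
# The 6*N*D factor is a common scaling-laws approximation; real runs vary
# with architecture, precision, and training recipe.

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training FLOPs for a dense transformer."""
    return 6.0 * params * tokens

# A hypothetical 1-trillion-parameter model trained on 10 trillion tokens:
flops = training_flops(1e12, 1e13)
print(f"{flops:.1e} total FLOPs")  # 6.0e+25
```

Doubling either the parameter count or the token count doubles the compute, which is why model and data growth together produce the multiplicative surge described above.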



*business2a.jpg*

Illustration of Growing Compute Requirements Over Time (ExaFLOPS to Train LLMs)

For inference, the required compute is a function of model size, user demand, and time spent on inference. Larger models use more computational resources, and each user request also increases the compute need. Dedicating more compute resources to inference also generally results in better output quality, as more compute enables deeper chains of thought and reasoning. These factors contribute to significant and ongoing operational costs. We expect the demand for inference to grow, especially as larger and more capable models become more widely adopted.

There is currently a direct tradeoff between model capability and responsiveness of user experience because the largest and most capable models demand more inference compute and therefore run more slowly during inference. As inference speeds improve, larger GenAI models can be deployed in a responsive manner, expanding their use in real-time applications and agent-based systems. Faster inference can enable multiple inference requests to be made to the same or different GenAI models, with each request building upon the results of the previous request, all within the time it previously took to execute a single inference request.

Advancements in model reasoning capabilities are equally important. For instance, OpenAI’s o1 model, released in September 2024, uses chain of thought—a reasoning process that leverages multiple intermediate inference steps—to solve complex coding, math, and science problems that were unsolvable by earlier models. We expect that increasing compute resources used for inference by orders of magnitude to enable deeper chains of thought will continue to yield significant improvements in output quality for problems that require complex reasoning. These reasoning improvements enable models to tackle more sophisticated tasks, broadening their range of use cases.
We believe that as both inference speed and reasoning capabilities advance, GenAI models will support more demanding applications, thereby expanding the market opportunity for inference. As of June 2024, we estimate that 40% of the AI data center market is attributable to inference, and we expect this to grow as more AI applications and products are brought to market.

Existing Compute Architectures Are Fundamentally Limited for GenAI Training and Inference

GPUs, though better than CPUs for AI workloads, face fundamental limitations when processing the unique characteristics of large GenAI models. In graphics, the calculation for each pixel is often independent of every other pixel. This means that for large graphics workloads, many GPUs can easily execute on separate parts of the rendering problem, without a need for high levels of interdependent communication.

The AI workload is different. GenAI models are complex, interconnected compute graphs that require the constant communication of intermediate calculations to train. This requires a high amount of data movement to and from memory and across cores. Similarly, during inference, these models generate outputs sequentially – each dependent on the previous output – requiring the full model to be constantly moved in and out of memory to produce successive outputs, and again requiring massive data movement between cores and memory. GPUs face inherent scalability and complexity challenges when faced with the distinct, communication-heavy requirements of GenAI workloads.

For Training – Individual GPUs Are Too Small, and Scaling to Many GPUs is Highly Inefficient

Large GenAI models far exceed the memory and processing limits of a single GPU. For example, OpenAI has said that training GPT-3 required ~3.14 x 10^23 floating point operations; a single NVIDIA H100 running at peak theoretical performance would need more than eight years to complete that training. Recent models like GPT-4 and Gemini are over 10 times larger in parameter size than GPT-3. Consequently, training a large GenAI model on GPUs in a tractable amount of time requires breaking up the model and calculations, and distributing the pieces across hundreds or thousands of GPUs. These hundreds or thousands of GPUs then need to constantly communicate with each other across a network, creating extreme communication bottlenecks and power inefficiencies.

This distributed compute problem also creates a high level of complexity for developers, who are responsible for partitioning and coordinating the compute, memory, and communication across GPUs, so that they can work together in a complex choreography. This is an ongoing cost and slows down time-to-solution, as the delicate balance of bottlenecks needs to be reconfigured every time the ML developer wants to change the model architecture, model size, or run on a different number of GPUs. For many organizations, distributed programming is one of the highest barriers to entry for leveraging GenAI.
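The single-GPU training-time claim above can be sanity-checked with simple arithmetic. The GPT-3 FLOP count is from the text; the H100 peak throughput used here (~989 TFLOP/s dense FP16) is an assumed spec, so the result is an order-of-magnitude estimate only:

```python
# Back-of-the-envelope check: total training FLOPs divided by peak
# throughput gives the minimum wall-clock time on one device.

GPT3_FLOPS = 3.14e23           # total training FLOPs (per the text)
H100_PEAK = 989e12             # assumed peak FLOP/s for one H100 (FP16 dense)
SECONDS_PER_YEAR = 365 * 24 * 3600

years = GPT3_FLOPS / H100_PEAK / SECONDS_PER_YEAR
print(f"~{years:.1f} years at peak theoretical throughput")  # about 10 years
```

Since no real workload sustains peak throughput, the practical figure would be even larger, which is consistent with the "more than eight years" stated above.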

*business3a.jpg*

Large GenAI Models Must Be Divided and Coordinated Across Thousands of GPUs, Leading to Communication Bottlenecks and Developer Complexity

For Inference – GPU Efficiency is Low and Limited by Memory Bandwidth

During generative inference, the full model must be run for each word that is generated. Since large models exceed on-chip GPU memory, this requires frequent data movement to and from off-chip memory. GPUs have relatively low memory bandwidth, meaning that the rate at which they can move information from off-chip HBM to on-chip SRAM, where the computation is done, is severely limited. This leads to low performance as GPU cores sit idle while waiting for data – they can run at less than 5% utilization on interactive generative inference tasks. Low utilization and limited memory bandwidth impact the responsiveness and throughput of GPU-based systems and hinder real-time applications for larger models. This can limit the capability and adoption of emerging inference applications which are especially latency-sensitive, such as code generation and multi-turn AI agents that need to string together multiple calls to different GenAI models. This inefficiency also necessitates larger GPU deployments and dramatically drives up the cost of inference.

GPU companies have tried to address these challenges, but the issues of small core count, memory size, and memory bandwidth are fundamental hardware limitations that persist. Interconnect technologies like InfiniBand, PCIe, and NVLink are limited by their physical interfaces, and moving data across them is thousands of times slower and more power-hungry than keeping and moving the data on silicon. Software libraries intended to simplify distributed computing still require developers to manage complex parallelism strategies and extensive codebases. Realizing the physical challenges of repurposing small chips for a large compute problem, the GPU industry has recently announced new multi-chip packaging techniques, but these also yield only minimal expansions of GPU chip size.

Accelerating GenAI requires a dedicated compute solution, designed for the unique requirements of GenAI, that can deliver faster training times, real-time inference speeds, and simple developer workflows, at reasonable cost.

Our Solution

We believe Cerebras has built the world’s fastest commercially available AI training and inference solution.
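The memory-bandwidth bound on generative inference can be sketched with a simple roofline estimate: each output token requires streaming roughly all model weights past the compute units, so bandwidth divided by model size caps single-stream decode speed. The model size and bandwidth figures below are illustrative assumptions, not vendor specifications:

```python
# Roofline-style upper bound on decode speed when weight movement dominates.

def max_tokens_per_sec(params: float, bytes_per_param: float,
                       bandwidth_bps: float) -> float:
    """Bandwidth-limited ceiling on single-stream tokens per second."""
    model_bytes = params * bytes_per_param
    return bandwidth_bps / model_bytes

# Hypothetical 70B-parameter model in 16-bit weights on ~3.35 TB/s of HBM:
tps = max_tokens_per_sec(70e9, 2, 3.35e12)
print(f"~{tps:.0f} tokens/s upper bound per user stream")
```

Even at 100% utilization the ceiling is a few dozen tokens per second for a large model on one device, which is why keeping weights in fast on-chip memory (or aggregating far more bandwidth) is the lever for real-time inference.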
Our dedicated AI hardware and software platform is powered by the Cerebras Wafer-Scale Engine – a processor the size of an entire silicon wafer that brings more on-chip compute, memory, and bandwidth resources together than any other commercially available processor in the semiconductor industry. A single WSE replaces a cluster of GPUs, reducing the time-consuming, power-hungry movement of data, removing the need for complex distributed programming, and providing exceptional compute speed. Compared to the leading 8-way GPU system of the same generation, many of our customers have achieved over 10 times faster training time-to-solution. Our inference offering is over 10 times faster than GPU-based solutions from top CSPs, as benchmarked on leading open-source models. A single CS-3 system delivers three times more compute per watt than the leading 8-way GPU system.

Our solution consists of the following elements:

Cerebras Wafer-Scale Engine (WSE). At the heart of our solution is the Cerebras Wafer-Scale Engine, the largest chip ever sold. Our third generation WSE, the WSE-3, is 57 times larger than the leading commercially available GPU (NVIDIA H100) and has 52 times more compute cores, totaling 900,000. It features 880 times more on-chip memory (44 gigabytes SRAM) and 7,000 times more memory bandwidth (21 petabytes per second) than the leading commercially available GPU. Developing the WSE required overcoming decades-old industry challenges, including inter-die connectivity, yield optimization, efficient packaging, power management, and advanced cooling.



*business4ba.jpg*

Cerebras Wafer-Scale Engine 3 Versus NVIDIA H100 Size Comparison (image is illustrative; scale is approximate)

*business5a.jpg*

Cerebras Wafer-Scale Engine 3 Versus NVIDIA H100 Capability Advantage

The immense scale of the WSE delivers significant acceleration, efficiency, and simplicity for AI training and inference.


For Training. Each Cerebras WSE has enough compute and on-chip memory to run even the largest, multi-trillion parameter GenAI models on a single chip, thus avoiding the complexities of chip-to-chip data movement. This is unlike GPUs, which require users to fragment large models across multiple processors and deal with complex inter-GPU communication. To further speed up training time-to-solution, Cerebras users can simply add more WSEs to the problem. Since each WSE can fit the whole model, multiple WSEs can accelerate model training simply by having each chip independently work on a subset of the training data. Because the model is never split across the WSEs, no complex model distribution or carefully orchestrated communication is needed. The elegance of this architecture is designed to allow users to effortlessly increase training speeds with near-linear performance scaling as more WSEs are added.

For Inference. Wafer-scale integration keeps all critical data on-chip and close to compute cores, resulting in 7,000 times more memory bandwidth than the leading GPU solution. This allows the WSE-3 to deliver over 10 times lower latency for real-time GenAI inference, which means an inference response that takes 10 seconds to generate on GPU-based platforms from top CSPs takes only one second to complete using the Cerebras solution. The WSE-3 can accomplish this speedup at vastly lower power consumption, as retrieving one bit of data from on-chip SRAM on 5nm silicon requires only 1% of the energy needed to do the same from off-chip HBM.
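The data-parallel scheme described above, where every device holds a full model copy and works on its own slice of the data, can be sketched in a few lines. For equal-sized shards, averaging per-shard gradients reproduces the full-batch gradient exactly; the tiny linear model and data here are illustrative stand-ins, not Cerebras internals:

```python
# Data parallelism sketch: shard gradients average to the full-batch gradient.

def grad(w: float, shard) -> float:
    """d/dw of mean squared error for the model y ~ w*x over one data shard."""
    return sum(2 * (w * x - y) * x for x, y in shard) / len(shard)

data = [(1.0, 2.0), (2.0, 4.1), (3.0, 5.9), (4.0, 8.2)]
w = 0.5

full = grad(w, data)                            # one device, whole batch
shards = [data[:2], data[2:]]                   # one equal shard per "device"
averaged = sum(grad(w, s) for s in shards) / len(shards)

assert abs(full - averaged) < 1e-9              # same update either way
```

Because each device's computation is independent until the averaging step, adding devices shortens each step roughly in proportion, which is the source of the near-linear scaling described above.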

*business6ba.jpg*

Cerebras Inference Is the Fastest Inference Solution on Llama 3.1 8B



*business7ba.jpg*

Cerebras Inference Is the Fastest Inference Solution on Llama 3.1 70B

Cerebras System (CS). The CS AI computer system houses the WSE and delivers innovative power and cooling to the chip. Our third generation CS (“CS-3”) delivers three times more compute per unit power than the leading 8-way GPU system (NVIDIA DGX H100). This compact AI powerhouse is designed to easily integrate into standard data centers – it occupies less than half a standard rack (16RU/27 inches tall), and connects into the network via standards-based 100G Ethernet.

*business8a.jpg*

The CS-3 System, with Innovative Power and Cooling, is Designed to Fit in a Standard Data Center Rack


Cerebras AI Supercomputer. The Cerebras AI Supercomputer is designed to streamline scaling up to 2,048 CS-3 systems for maximum AI acceleration, with more efficiency and simplicity than scaling up to large GPU clusters. Unlike with GPUs, where users must break up and distribute their model across many chips, thereby introducing the need to communicate across those compute elements, each WSE can run the model in its entirety. In turn, a Cerebras AI Supercomputer only needs to spin up another copy of the full model on each additional CS system to process the training dataset more quickly. This enables a near-linear performance increase as CS systems are added to a problem, takes only seconds to configure, and does not incur the overhead or complexity of heavy inter-chip, inter-system communication.

Our scalable execution model is designed to simplify the development workflow for large GenAI training. We designed our AI Supercomputer to enable users to elastically scale workload computing resources up to 256 exaFLOPS (2,048 CS-3 systems) just by changing a single number in their code, allowing users to program as if for a single powerful device. Training a GPT-3 sized model on Cerebras, for example, uses 97% fewer lines of code compared to on clusters of GPUs, greatly accelerating the speed of AI model development for larger-scale models. The combined power and scaling simplicity of the Cerebras AI Supercomputer provides industry-leading training and inference speeds for even the largest and most complex GenAI models, while obviating the need to invest thousands of programmer hours into distributed programming work.

*business9a.jpg*

Zero-Complexity Scaling on Cerebras Versus GPUs: Training a GPT-3 Sized Model With Cerebras Requires 97% Fewer Lines of Code Compared to on Clusters of GPUs
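The "change a single number" scaling model described above might look something like the following sketch. The configuration key and model name are hypothetical placeholders for illustration, not Cerebras's actual interface:

```python
# Hypothetical sketch of single-parameter cluster scaling. The key names
# below are illustrative, not the actual CSoft configuration schema.

config = {
    "model": "gpt-style-70b",   # placeholder model name
    "num_systems": 1,           # change this one number to scale, e.g. 1 -> 16
}

def scaled_throughput(base_tokens_per_sec: float, systems: int) -> float:
    """Idealized near-linear scaling: throughput grows with system count."""
    return base_tokens_per_sec * systems

print(scaled_throughput(1.0, config["num_systems"]))
```

The point of the design is that scaling is a configuration change rather than a rewrite: because each system runs the whole model, no partitioning or communication code changes when the number goes from 1 to 16.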



*business10a.jpg*

A Cerebras Supercomputer Deployment at One of Our Colocation Facilities

Cerebras Software Platform (CSoft). Our proprietary software platform, CSoft, is core to our solution and provides intuitive usability and improved developer productivity. CSoft seamlessly integrates with industry-standard ML frameworks like PyTorch and with popular software development tools, allowing developers to easily migrate to the Cerebras platform. CSoft automatically saves model outputs in a standard “checkpoint” format, enabling users to resume training or inference on model work started on other hardware platforms. This checkpoint compatibility extends to open-source models available in the popular HuggingFace repository. CSoft eliminates the need for low-level programming in CUDA, or other hardware-specific languages. Starting from a user’s PyTorch model, the CSoft graph compiler automatically maps model operations to the WSE, creating an optimized executable without user-level intervention.

CSoft is co-designed with the wafer-scale hardware architecture to provide programming efficiency and simplicity. It allows ML users to accelerate training and inference on models of any size, scaled across any configuration of the Cerebras AI Supercomputer, just by changing one number in a configuration file, simulating a single-device programming experience without the complexities of distributed programming. This drastically reduces operational overhead and speeds up developer iteration time and business impact.

Cerebras Inference Serving Stack. The Cerebras Inference Stack is built on top of CSoft and is designed to allow customers to easily deploy even the largest GenAI models at industry-leading inference speeds. With the Cerebras Inference API, developers are able to easily point their applications to popular or custom models, just by changing their API key.
This simple API, which closely matches the interface design of the popular OpenAI API, is designed to facilitate rapid developer adoption and ease of use. We believe there will be stickiness to the Cerebras platform as customers experience industry-leading inference performance for their interactive applications. Our serving software automatically handles system-level optimizations for our inference solution and is designed to enable low latency and high cost effectiveness.
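Calling an OpenAI-compatible endpoint like the one described above amounts to changing a base URL and API key. The sketch below builds such a request with the standard library only; the endpoint URL, key, and model name are placeholders, not Cerebras's actual values, and no network call is made here:

```python
# Sketch of a chat-completions request against an OpenAI-compatible API.
# BASE_URL, API_KEY, and the model name are illustrative placeholders.
import json
import urllib.request

BASE_URL = "https://api.example-inference.com/v1"   # placeholder endpoint
API_KEY = "YOUR_API_KEY"                            # placeholder credential

payload = {
    "model": "llama3.1-8b",                         # placeholder model name
    "messages": [{"role": "user", "content": "Hello!"}],
}

req = urllib.request.Request(
    f"{BASE_URL}/chat/completions",
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
)
# urllib.request.urlopen(req) would send the request; omitted here.
```

Because the request shape follows the familiar chat-completions convention, an application written against one such provider can be pointed at another by swapping the URL and key, which is the migration story described above.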


AI Model Services. Our AI model services further amplify speed to solution. Our team of AI practitioners helps customers design research experiments, train models, and optimize processes designed to achieve the fastest time-to-solution. These services complement our advanced hardware and software platform, providing an end-to-end solution for rapid and efficient AI development and deployment. We believe we are one of a select number of companies in the world that has trained high-quality, large GenAI models on massively parallel compute clusters. We have contributed state-of-the-art models into the open-source community and have published widely on AI methodology and practice. Our team’s work spans model architectures, parameter sizes, and data types – including text, time series data, code, biological and other sequence data, electronic healthcare records, medical imagery, radio frequency, and radar data. Cerebras AI researchers have trained LLMs in English, Arabic, Spanish, Japanese, and Catalan.

We excel at helping customers translate AI potential to business impact. Our AI experts guide customers in developing custom AI strategies, designing and building AI models, and applying cutting-edge AI techniques to achieve high-quality results, thereby delivering downstream business impact. We augment our customers’ existing teams with specialized AI capabilities, and the models we build together routinely beat the existing state of the art, providing customers with a distinct competitive advantage.

Key customer benefits include:

Enables over 10 times faster training time-to-solution compared to leading 8-way GPU systems of the same generation, as reported by many of our customers. This dramatically accelerates AI model time-to-solution, enabling businesses to test ideas faster, iterate more quickly, and bring next-generation GenAI-powered products and services to market, faster and more cost effectively.
Delivers over 10 times faster GenAI inference compared to GPU-based solutions from top CSPs, as benchmarked on leading open-source models. The WSE’s massive on-chip SRAM capacity keeps the vast majority of memory-to-compute communication on-silicon, thereby avoiding the memory bandwidth bottleneck faced by GPU-based solutions. Our resulting ultra-low latency delivers industry-leading inference speeds and real-time responses back to users, even on large, cutting-edge GenAI models. Ten times more speed compared to GPU-based solutions from top CSPs also allows developers to make ten times more inference calls in the same amount of time. This supports a new level of model capability delivered by techniques like multi-step inference and agentic AI flows, which leverage more inference calls to produce higher reasoning capability for more complex tasks in domains such as coding, math, and science applications.

End-to-end solution. Cerebras offers a unified platform purpose-built to accelerate the fundamental compute characteristics of both AI training and inference. While many other emerging AI chip companies have chosen to specialize in only one phase to allow for optimization tradeoffs across the axes of compute, memory, memory bandwidth, and simplicity of use, Cerebras excels across these key dimensions, made possible by wafer-scale integration. This allows customers to swiftly transition from training to fine-tuning to deploying high-quality GenAI models on the same platform, eliminating the need to invest in and maintain separate hardware infrastructure.

Zero distribution complexity. The biggest challenge in training on GPU clusters is the complexity of distributed programming across multiple devices and systems. This often demands weeks or months of optimization by large engineering teams and results in limited performance scaling. Using Cerebras, users can effortlessly run a GenAI model of any size.
It takes no additional code to achieve automatic near-linear performance scaling across the nodes of a Cerebras Supercomputer, unlike the 20,000+ lines of distributed programming code required to scale large models across similar memory and compute resources on large GPU clusters.

Low migration cost. Our proprietary CSoft platform integrates seamlessly with familiar ML frameworks and tools, like PyTorch, eliminating the need for AI teams to learn new languages or adapt to new development environments and easing the transition from other hardware platforms.

Power and operational efficiency. Cerebras outperforms GPU systems in power efficiency due to both hardware and architectural advantages. Wafer-scale integration allows the majority of AI workload communication to remain on-chip, significantly reducing data movement distance and power consumption; moving a bit of data on the WSE-3 takes less than 1% of the energy needed to do the same over off-chip GPU interconnects. This drives significant operational cost savings (the CS-3 draws one-third of the power of the leading 8-way GPU system) and streamlined management for organizations deploying AI at scale.

Expert-led model training and AI integration services. We offer expert-led foundation model training, fine-tuning, and retrieval-augmented generation services to customers. Our team provides guidance on cutting-edge AI methods that work on top of our hardware, assisting customers to derive the maximum value from their AI investment.

Examples of Customer and Industry Impact

We have an expanding customer base that includes leading enterprises, Sovereign AI initiatives, cloud service providers, government agencies, and research institutions at the forefront of AI and at the intersection of AI and HPC. These customers leverage Cerebras to tackle complex AI challenges and achieve business outcomes previously unattainable even with the most advanced GPU systems. Our customers find fundamental value in the simplicity of the Cerebras solution. It unlocks breakthrough business and scientific use cases by removing limitations in development time, programming complexity, and runtime speed. For example, a leading pharmaceutical company trained an epigenomic language model using Cerebras and noted that the training speedup afforded by the Cerebras system enabled them to explore architecture variations, tokenization schemes, and hyperparameter settings in a way that would have been prohibitively time- and resource-intensive on a typical GPU cluster. As we continue to expand our go-to-market capabilities and more customers recognize the benefits of our solution, we expect our pipeline to grow.
The tangible benefits of our products are evident in the strong gains in performance and time-to-value experienced by our customers across various industries: A global AI technology group (G42) reduced AI model convergence time for a 13 billion parameter Arabic language model from 68 days on a GPU cluster to just four days using Cerebras, improving training time by 17 times while also achieving higher model quality. They furthered this work by subsequently training a state-of-the-art bilingual Arabic-English 30 billion parameter model on 1.6 trillion tokens, now being served on the Microsoft Azure AI Model Catalog. A leading pharmaceutical company reduced training time on an epigenomic language model from 24 days to just 2.5 days using Cerebras, achieving a 90% reduction in time-to-solution. This resulted in a new model that was the first of its kind to combine both DNA data and epigenetic state data of chromosomes for 127 different cell types. This model exceeded the state-of-the-art on multiple benchmarks and is being used by the customer to facilitate further study of gene regulation and function. A major U.S. healthcare provider leveraged Cerebras AI services and compute to develop and train an industry-leading medical model in eight weeks, beating open-source medical model benchmarks, and upskilling their in-house AI team through partnership with our AI expertise. A U.S. national lab trained a genome-scale foundation model, predicting the evolution of emergent variants of SARS-CoV-2 virus, 19 times more quickly on a single Cerebras system compared to the leading 8-GPU system of the time (152 times faster than a single GPU). The same national lab trained an AI model using Cerebras that could not converge on any number of GPUs. This model won the 2022 Gordon Bell Special Prize for High Performance Computing-Based COVID-19 Research. Another U.S. 
national lab showed 179 times faster performance on a molecular dynamics simulation using a single Cerebras system, compared to the entirety of Oak Ridge National Laboratory’s Frontier, the largest


supercomputer in the world, with more than 37,000 GPUs and reportedly costing approximately $600 million to build. A U.S. government agency is leveraging our real-time inference capabilities to create the world’s first, large-scale, virtual radio frequency simulation environment for developing, training, and testing advanced radio frequency systems. Our Business Model We use a combination of direct sales and partnerships to address the rapidly expanding AI market. We offer both on-premise solutions and cloud-based solutions to provide maximum flexibility to our customers. We offer a collection of services from data center deployment to AI expert services and AI Supercomputer operation and management, to provide our customers with the support they need to train, deploy, and accelerate GenAI time-to-value. On Premise. Our direct-to-customer sales force sells our AI Supercomputers to leading organizations who seek maximum control over their data and their AI infrastructure, fulfilling their needs for high-performance AI compute on premise. For on-premise use, we offer deployment services, as well as a subscription to an ongoing stream of software updates and upgrades. Our on-premise AI Supercomputers support both training and inference. They can be configured either for both workloads, or to be further optimized for only one, depending on our customers’ needs. Cloud-Based Computing Services. We sell Cerebras solutions via our cloud offering as well as via the Condor Galaxy Cloud owned by Group 42 Holding Ltd (together with its affiliates, “G42”), our partner CSP. Our cloud solutions provide customers fast and flexible access to our powerful AI acceleration hardware, with payment based on time or work (number of models). This offering gives our customers the ability to train LLMs with extraordinary speed and deploy them for inference at ultra-low latencies, all without the complexity or time needed to build and manage on-premise infrastructure. 
Cerebras Inference Cloud. Our real-time inference solution is also available via a dedicated inference cloud service. Leveraging our Cerebras Inference Serving Stack, this cloud API offering allows developers to directly point their applications towards efficient and reliable model serving endpoints. On Cerebras Inference Cloud, we host both popular open-source models and proprietary customer models. For customers who do not need direct compute access and are not interested in managing their own inference serving software stack, our inference cloud offering is the quickest and simplest way to leverage our fast model inference services. We provide a combination of these offerings to customers who may benefit from leveraging both on-premise and cloud-based options—for example, enabling them to quickly use the cloud for their largest training jobs, while enjoying the cost efficiencies of on-premise infrastructure for their baseline AI work. This flexibility allows customers to choose the solution that best aligns with their budgetary, security, and scalability requirements. Additionally, customers can train models on-premise and then leverage our inference cloud for production, benefiting from flexible serving resources that can adapt to fluctuating demand. This end-to-end solution allows seamless integration from training to production, serving the entire lifecycle of a customer’s AI needs. We also provide professional services to assist customers throughout the AI workflow. From developing strategy, designing and building models, to deploying and maintaining the final models either on-premise or in the cloud, we help customers achieve optimal outcomes. Our Market Opportunity We participate in a large and growing AI market. Enterprises, research organizations, and governments alike are developing AI initiatives to rapidly evolve and drive efficiencies across their entities. 
As estimated by McKinsey in 2023, GenAI use cases may add up to $4.4 trillion to the global economy on an annual basis. Our full suite of AI computing solutions addresses use cases for training, inference, software, and expert services. We estimate the TAM


for our AI computing platform to be approximately $131 billion in 2024, growing to $453 billion in 2027, a 51% CAGR. This TAM comprises the following core markets: AI Compute for Training. Demand for GenAI solutions has grown significantly across virtually all use cases. In a recent Gartner, Inc. poll of more than 1,400 executive leaders, 45% reported that they are in piloting mode with GenAI, and another 10% have put GenAI solutions into production (Source: Gartner®). Significant capital is being dedicated to evaluating GenAI solutions and the impact they could have on everything from productivity and efficiency to customer support. As businesses continue to evaluate and deploy solutions, we believe the market for training new models will continue to grow. Based on market estimates in Bloomberg Intelligence research, our estimate of the TAM for AI Training Infrastructure is $72 billion in 2024, growing to $192 billion in 2027, a 39% CAGR. This segment of the market includes AI Infrastructure-as-a-Service. AI Compute for Inference. While GenAI training is essential to developing models that are powerful and accurate, we believe inference is the next phase of the ongoing wave of AI disruption. As more enterprises develop models and start to deploy their models in applications at scale, the need for high-performance, efficient inference is becoming critical to fully realize the commercial potential of ML. The demand for inference compute is driven by the number of users and the frequency with which inference is used. We believe that the inference opportunity is enormous, as the market is in the early phases of its adoption cycle. Our estimate of the TAM for AI Inference is $43 billion in 2024, growing at an estimated 63% CAGR to $186 billion in 2027. Software and Expert Services. Based on market estimates in Bloomberg Intelligence research, our estimate of the TAM for GenAI Software and Services is $16 billion in 2024, growing to $75 billion in 2027, a 67% CAGR.
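The quoted CAGRs follow directly from the 2024 and 2027 endpoints over a three-year span. A quick check in Python (the dollar figures are the TAM estimates stated above):

```python
def cagr(start, end, years):
    """Compound annual growth rate implied by start and end values."""
    return (end / start) ** (1 / years) - 1

# TAM endpoints (in $B) quoted in the text, 2024 -> 2027 (3 years)
segments = {
    "Training":            (72, 192),   # text states ~39% CAGR
    "Inference":           (43, 186),   # text states ~63% CAGR
    "Software & Services": (16, 75),    # text states ~67% CAGR
    "Total":               (131, 453),  # text states ~51% CAGR
}

for name, (start, end) in segments.items():
    print(f"{name}: {cagr(start, end, 3):.0%}")
```

Each computed rate rounds to the percentage quoted in the text.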
We believe we are at the very early stages of a large and fast-growing market opportunity. As adoption of AI continues to accelerate, we expect numerous new applications will be identified, and we believe our solutions are well-positioned to capitalize on the wave of disruption in the coming years. Customers and Ecosystem Our customers include leading enterprises, Sovereign AI initiatives, cloud service providers, government agencies, and research institutions at the forefront of AI and at the intersection of AI and HPC. The majority of our revenue for 2023 was from the sale and deployment of our AI systems, but we expect significant traction in onboarding new customers onto our cloud offering. The scalability and flexibility of our solutions enable customers to leverage both hardware and cloud solutions concurrently, allowing them to adapt their approach as their needs grow. Our customers also leverage our AI model services to achieve state-of-the-art AI results. We sell our products directly to our customers and, in some cases, rely on fulfillment partners for deployment and logistical purposes. Additionally, we share our innovations and research on open-source platforms like HuggingFace, where our published models and datasets have garnered significant traction. An example is our open-source BTLM Model, which was downloaded over one million times. This broad engagement keeps us at the cutting edge of AI technology and fosters a collaborative relationship with the ML community. Competitive Strengths We engineer our purpose-built AI systems to be faster and more capable than any other solution available in the market today. We believe our design is capable of meeting today’s needs and is scalable to address tomorrow’s challenges. Our competitive strengths include: The world’s first and only wafer-scale chip in the market.
Achieving wafer-scale success has been a goal of the semiconductor industry for decades, and we are the first company to solve the fundamental technical challenges required to productize a wafer-scale chip, including inter-die connectivity, yield, packaging, power, and cooling. Our wafer-scale chip architecture combines a massive amount of fast, on-chip memory, directly


next to vast compute resources on the same piece of silicon. It is capable of running the largest GenAI models on a single wafer-scale chip instead of tens, hundreds, or thousands of competitor devices. This eliminates the need for distributed computing while running large GenAI models, enabling AI developers to use up to 97% less code when working with large models on our platform than on clusters of GPUs and greatly accelerating the speed of AI model development for larger-scale models. The details of the hardware are completely abstracted away from the AI developer, allowing them to focus on model development and deployment, which they are empowered to do faster than on any other system. Full system solution that is easy to deploy and efficient to operate. Built with our wafer-scale chip at its core, we design full systems that seamlessly fit into data center racks. Unlike many other players in this industry, we design both the chip and the system around it. By designing the processor and the system together, we are able to address thermal and power delivery challenges by optimizing the full system with our proprietary power delivery and cooling technology. This allows us to keep the system operating efficiently, optimizing the energy consumption and underlying operating costs for our customers. Comprehensive software suite that leads to ease of adoption and shortens time to deployment. Developed over eight years by our world-class team, the Cerebras Software platform allows for seamless programmability of AI models through our integration with PyTorch, the industry-standard ML framework for AI developers. Our tools allow users to bring models that have been trained on other hardware onto our platform for training or for inference. Likewise, users can train models on our platform and deploy those models for inference elsewhere. This increases ease of adoption and shortens time-to-deployment. Our AI platform addresses both training and inferencing markets.
We are strongly positioned to provide an AI platform that is powerful, efficient, and easy to deploy for both training and inferencing workloads. For training, our AI platform enables customers to effortlessly and swiftly use the most advanced GenAI models available on the market. It allows any single user to effectively develop and refine models with multi-billion to trillion parameters, without the need for specialized software frameworks or help from distributed computing experts. For inferencing, given our advantages in memory capacity, our AI platform delivers industry-lowest latencies and high generation throughput. This helps our customers unlock cutting-edge performance for ultra-low-latency use cases leveraging GenAI, such as real-time, interactive user applications and intelligent AI agents that often require processing data across multiple GenAI models and APIs. Scalable architecture. We have developed our solutions to support models and data sets of large and varying sizes. Our current generation CS-3 is designed to support models with up to 24 trillion parameters, much larger than even today’s state-of-the-art GenAI models. Our platform is designed to seamlessly scale from 1 to 2,048 CS-3 systems, forming an AI supercomputer that is even further differentiated by our proprietary interconnect and memory technology. This enables customers to seamlessly increase compute resources from small-scale experiments to large-scale deployments. Our future-ready design enables us to support the most demanding AI applications and is designed to ensure that our customers’ investments remain forward-compatible as new advancements in AI emerge. We expect this benefit to be crucial for long-term customer satisfaction and retention. AI model services help customers translate AI potential to business impact.
We provide customers with AI model services to help them develop solutions that are customized to meet their needs and help them realize the full value of their AI investments. These services include model selection, data preparation, training, and solutions integration. World-class AI talent with a proven track record of innovation and execution. We have assembled a world-class team of industry leaders in integrated circuit design, processor architecture, power delivery, cooling, system engineering, and software. Over the last five years, we have introduced three generations of our WSE, each time achieving two times the performance of its predecessor, and bringing new IC, power, and cooling technologies to bear. This track record of innovation and execution demonstrates our singular focus on solving the critical performance challenges of AI compute and systematically improving our product. Our research and development organization represented over 80% of our total headcount as of December 31, 2023, with approximately three-quarters of research and development headcount consisting of software engineers.


Growth Strategies We believe we are positioned for sustained growth in the rapidly expanding market for AI acceleration solutions. We have designed our focused strategies to drive continued success and establish ourselves as a long-term leader: Increase sales to our existing customers. We have established a strong land-and-expand track record with our existing customer base, which comprises leading enterprises and research institutions looking to harness the potential of AI. We intend to deepen these relationships by expanding our product and service offerings tailored to their evolving needs. Our strategy focuses on demonstrating the value proposition of our solutions through initial engagements, cross-selling complementary products, and identifying new use cases within existing customers. Our deep technical expertise and customer-focused professional services consistently deliver value beyond initial expectations. This success helps drive increased investment from existing clients. Expand our customer base. We believe a core component of our growth strategy centers on expanding our diverse customer base across industries and applications. We plan to aggressively pursue opportunities in relevant sectors such as healthcare, pharmaceutical, biotechnology, government, financial services, sovereign, and energy, to name a few, where our AI acceleration capabilities can address critical computational bottlenecks. We will seek to drive this expansion by focused sales and marketing initiatives, highlighting the transformative potential of our technology with targeted use cases. We intend to leverage our existing success stories and strategic partnerships to both bolster our credibility within new markets and establish key channels for customer acquisition. For example, we entered into a partnership with a world-leading European AI company to create sovereign and secure AI solutions for governments and enterprises resulting in a first-time buy of over $4 million. 
Additionally, we will continue investing in the development of tools and resources that streamline the onboarding experience for new customers to enable seamless adoption of our solution. Further penetrate the rapidly growing inference market. We recently launched our inference solution. A large opportunity for our growth is to make our inference solution widely available. The immense amount of memory bandwidth and capacity on our chip allows us to deliver significantly lower generation latency and higher generation throughput than GPUs. By making API-based inferencing available through our cloud offering, we could significantly accelerate our adoption in inference use cases. Based on the inference TAM of $43 billion in 2024, growing at an estimated 63% CAGR to $186 billion in 2027, we believe our expansion into inference will be a significant growth driver. Benefit from opportunities in large adjacent AI and compute-intensive markets. We intend to leverage our differentiated solution to address ever-evolving computing challenges in emerging end markets and applications. We are actively enabling applications in fields like life sciences, materials science, and financial modeling, where our cutting-edge AI computing solutions can unlock new discoveries and solve complex problems. Geographically, our strategy includes deepening our investment in partnerships with large Sovereign AI initiatives and new markets to accelerate AI adoption. For instance, AI’s economic value in the Middle East’s Gulf Cooperation Council countries is projected to be as much as $150 billion, or approximately 9% of such countries’ combined gross domestic product, according to a McKinsey report issued in 2023. We believe our strategic partnership with G42 positions our solutions at the forefront of innovation and expansion in these emerging AI markets. Accelerate our existing product roadmap as well as develop new products for emerging use cases.
We believe continued technological innovation is a cornerstone of our future growth. Our substantial investment in research and development fuels advancements to further differentiate our wafer-scale technology and develop our software to optimize performance, accessibility, and ease of use of our platform. Our close partnerships with leading AI researchers, industry pioneers, and the open-source community grant us early insights into cutting-edge market developments. Moreover, as AI infrastructure requirements scale, we expect emerging use cases to require new products with added functionalities to solve data, networking, and memory bottlenecks. With our continued focus on innovation, we intend to develop and introduce new products and form factors that will enable us to service a larger portion of our market opportunity.


Advance product adoption by proliferating cloud deployment of our AI solutions. We intend to accelerate our growth by expanding access to our revolutionary AI systems through cloud deployment. We expect this strategy to make our unique, high-performance computing capabilities available to a significantly broader customer base. Cloud solutions reduce the upfront capital investment required for customers, enabling more rapid experimentation and wider adoption of our technology. By offering our technology as a cloud service, we can streamline workflows for data scientists and machine learning engineers, allowing them to focus on innovation instead of infrastructure management. Our Technology We believe we have built the fastest commercially available AI training and inference solution in the world. Our solution combines our industry-leading wafer-scale engine with a supercomputer system architecture and a co-designed software suite. Every component of our platform is specifically designed to meet the computational needs of AI workloads. We combine massive computational resources, fast on-chip memory, and high-bandwidth interconnect to simplify and accelerate AI training and inference. Our product offerings are available through a unified platform that is both easy to use and simple to deploy. Our platform’s exceptional performance is driven by a portfolio of fundamental technological innovations that resolve challenges in the computing industry that had been open for decades. WSE-3 – The World’s Largest, Most Powerful Commercially Available Chip Throughout the history of computing, the drive towards larger chip sizes with more integrated components has consistently produced significant performance and efficiency improvements. This progression underpins Moore’s Law, which predicts that the number of transistors in an integrated circuit will double every two years.
However, the unprecedented demand for AI computing has underscored the need for improvements that go beyond the traditional cadence of compute advancements. To meet this demand, we have designed the Cerebras Wafer-Scale Engine – the largest chip ever sold, which has broken past Moore’s Law. With four trillion transistors, the WSE-3 achieves a milestone that Moore’s Law did not predict would occur until 2034. This leap marks a significant advancement in chip design and manufacturing, setting new standards for computational power and efficiency.
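The 2034 figure can be sanity-checked: at a doubling every two years, the time to grow from a baseline transistor count to four trillion is 2 · log2(target / baseline) years. The baseline below (a leading GPU die of roughly 80 billion transistors, circa 2022) is an assumption chosen for illustration, not a figure from the text; under that assumption the projection lands in the early-to-mid 2030s, in the neighborhood of the quoted date.

```python
import math

def years_to_reach(target, baseline, doubling_period=2.0):
    """Years for a transistor count to grow from baseline to target
    at a Moore's Law cadence (doubling every `doubling_period` years)."""
    return doubling_period * math.log2(target / baseline)

# Assumed baseline (not from the text): ~80-billion-transistor GPU die, 2022.
baseline_year = 2022
baseline_transistors = 80e9
wse3_transistors = 4e12          # four trillion, per the text

dt = years_to_reach(wse3_transistors, baseline_transistors)
print(f"Moore's Law would reach 4T transistors around {baseline_year + dt:.0f}")
```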


!business11a.jpg


“Moore’s Law” Over the Past 40 Years The white line shows the Moore’s Law trend of the largest processor chips in the industry. The orange line shows the new trajectory created by the Cerebras wafer-scale technology. Wafer-scale integration is a unique capability. In the entire history of the computer industry, only Cerebras has delivered a wafer-scale product to market. To achieve wafer-scale integration, we invented and productized key chip design technologies: •Multi-die interconnect. Traditionally, small die—regions of silicon containing an integrated circuit—are replicated independently on a silicon wafer and then cut up (“diced”) into small, separate chips. We have developed the technology to connect these otherwise independent die together at the wafer level, at the semiconductor fabrication plant. The inter-die connectivity uses a special cross-reticle connection that is integrated into the overall fabrication process. •Fault-tolerant architecture. A primary factor in the commercial viability of a semiconductor is the yield. The fabrication process introduces tiny defects – and chips with these defects are discarded and considered yield loss. Recognizing that large chips are more prone to defects than small chips, we invented new techniques designed to withstand defects, rather than seek to avoid them. Combining architectural innovations with insights from adjacent industries, we used a combination of redundant compute cores and redundant routing to address the yield problems. The wafer behaves like a hyper-scale data center, using a “fail in place” mechanism to handle the inevitable failures at scale. Flaws are designed to be recognized, shut down, and routed around. Redundant cores are used to re-form a logically functional whole. The enormous scale of our wafer-scale engine provides four fundamental technical advantages for AI: 1.Massive, tightly-coupled compute. All modern, high-quality GenAI models require too much compute to run on a single GPU. 
Our wafer-scale processor is purpose-built for these AI workloads and tightly couples a massive amount of compute resources (900,000 cores, 52 times more than the leading GPU currently in the market) onto the same piece of silicon. By integrating the equivalent compute of a cluster of GPUs onto a single chip, the WSE can train or run inference on large AI workloads without breaking up the problem across a large number of small devices. Keeping so much compute in one place also allows the WSE to


achieve significantly higher performance and power efficiency, compared to GPU clusters that require heavy communication across hundreds or thousands of small chips. Put simply, the WSE avoids the bandwidth, latency, and power tax of distributing communication-heavy AI computation across chips. !business12b.jpg


Comparison of Cerebras Wafer-Scale Integration Versus Traditional GPU Cluster Integration 2.Fast, efficient interconnect. For both inference and training, AI requires extraordinary amounts of data movement. On GPU clusters, most of the data movement occurs between individual GPU chips using traditional I/O interconnects such as NVLink or InfiniBand. On the Cerebras wafer-scale chip, most of the data movement is performed locally using the on-chip fabric that achieves nearly 30,000 times higher interconnect bandwidth and is over 100 times more power-efficient per bit of data moved, compared to leading GPU interconnects. This architecture achieves industry-leading interconnect performance and efficiency compared to current GPU servers. This directly translates into faster acceleration for both training and inference, as it minimizes communication across slow, physical networking interfaces. 3.Outsized on-chip memory and memory bandwidth. AI workloads use memory to store the model state (parameters) and active data samples running through the model (activations). Fast access to this data is critical to the performance of both training and inference. The WSE uses a unique memory-in-compute architecture, which integrates memory and processing cores on the same chip, minimizing the distance that data must travel between memory and compute. Memory-in-compute architectures provide a step-change improvement in bandwidth compared to traditional external-memory architectures. This directly results in dramatically reduced latency and power consumption. Traditionally, memory-in-compute architectures were restricted to only small-scale problems because the tight integration of memory and compute was constrained by the small size of single chips. With wafer-scale integration, we are able to scale to enormous on-chip memory (SRAM) capacity (44 GB) on a single chip, while achieving 7,000 times more memory bandwidth (21 petabytes per second) compared to the leading GPU.


!business13a.jpg


WSE-3 Achieves 7,000 Times More Memory Bandwidth Compared to NVIDIA H100 4.Native hardware acceleration for sparsity (zeros in data). AI models are designed to be over-parameterized, with more parameters than required to produce the result. The goal of training a neural network is to find the important subset of the parameters to produce the best result. Therefore, AI models can contain a large number of zero-valued parameters, which are less important and do not impact the result. These zero-valued parameters are considered “sparse” and can mathematically be skipped when computing the model, since multiply-by-zero is always zero. The WSE-3 architecture has the memory bandwidth and built-in fine-grained dataflow hardware mechanisms to skip multiplies by zero, only performing work for the non-zero important parameters, and thereby accelerating performance. We believe the WSE-3 is the only commercially available hardware that can accelerate every sparsity pattern because our hardware supports fully unstructured sparsity. This sparsity acceleration uses less power by skipping unnecessary compute and can provide a significant additional performance advantage both for training and inference. !business14a.jpg
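The headline multiples cited throughout this section can be cross-checked against published specifications for a leading data-center GPU. The GPU figures below (die area, core count, on-chip SRAM, and memory bandwidth) are assumed, commonly cited values and do not come from the text, so the computed ratios only approximately reproduce the claimed multiples:

```python
# WSE-3 figures are from the text; the GPU figures are assumed, commonly
# cited specs for a leading ~814 mm^2 data-center GPU (not from the text).
wse3 = {"die_mm2": 46_225, "cores": 900_000, "sram_gb": 44, "mem_bw_pbs": 21}
gpu  = {"die_mm2": 814, "cores": 16_896, "sram_gb": 0.05, "mem_bw_pbs": 0.003}

claims = [("die_mm2", 57), ("cores", 52), ("sram_gb", 880), ("mem_bw_pbs", 7_000)]
for key, claimed in claims:
    ratio = wse3[key] / gpu[key]
    print(f"{key}: {ratio:,.0f}x (text claims ~{claimed}x)")
```

The die-size, memory, and bandwidth ratios match the text's figures under these assumptions; the core-count ratio lands within a few percent, reflecting the choice of GPU baseline.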


Sparsity Within Neural Networks. Dense networks can be made sparse by removing the less important parameters (connections between nodes) and retaining only the important information. We believe Cerebras has the only hardware architecture on the market capable of accelerating all forms of sparsity, including unstructured sparsity.
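The multiply-by-zero skipping described above can be illustrated in miniature. The sketch below is a plain-Python analogy, not Cerebras software: a dot product that performs work only for non-zero weights, the same arithmetic shortcut the WSE-3 dataflow hardware exploits at fine granularity.

```python
def sparse_dot(weights, activations):
    """Dot product that skips zero-valued weights entirely, mirroring
    (in software) hardware that never schedules multiply-by-zero work."""
    total = 0.0
    multiplies = 0
    for w, x in zip(weights, activations):
        if w != 0.0:          # sparse weights contribute nothing; skip them
            total += w * x
            multiplies += 1
    return total, multiplies

weights     = [0.5, 0.0, 0.0, 2.0, 0.0, 0.0, 0.0, 1.0]
activations = [1.0, 3.0, 4.0, 0.5, 2.0, 6.0, 1.0, 2.0]

result, work = sparse_dot(weights, activations)
print(result, work)   # 3.5, with only 3 of 8 multiplies performed
```

With 5 of 8 weights zero, the result is identical to the dense computation while 62.5% of the multiplies are skipped, which is the source of both the speedup and the power savings described above.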


CS-3 System Powered by the WSE-3 – Innovative Power and Cooling for Our Wafer-Scale Chip The CS-3 system houses the WSE-3 and is engineered to tackle the unique thermal and power challenges of wafer-scale integration. To support the wafer-scale chip and make it easy to deploy in modern data centers, we have developed and productized key system and packaging technologies: •Thermal expansion-tolerant packaging. The physical connection between the chip and the surrounding infrastructure must be tolerant to the physical stresses caused by thermal expansion over a wide temperature range. At wafer-scale, the thermal expansion stress is significantly higher than in traditional chip packages because of the size of the chip. We have developed a unique packaging technique using flexible connections to compensate for the high degree of thermal stresses, designed to enable reliable, high performance across all workload scenarios. •Wafer-scale power and cooling. Since the wafer-scale chip is significantly larger in size than traditional chips, it requires innovative methods of power and cooling that cannot be satisfied by typical methods designed for smaller chips. To address this challenge, we have developed a novel, perpendicular power and water-cooling delivery system designed to provide uniform power and cooling to the entire surface of the wafer, maximizing power efficiency and thermal management. Additionally, the CS-3 provides all the physical infrastructure, interfaces, and management to easily integrate into a standard data center. The CS-3 is a 16RU chassis with high-availability, redundant, and hot-swappable power supplies and fans. The system uses high-speed, standards-based 100G Ethernet connections to communicate with the cluster and the rest of the data center. We have also developed an advanced system manager that constantly monitors and adjusts the unique wafer-scale operating conditions and provides system administrator interfaces. !business15a.jpg


Cerebras CS-3 System. The left diagram shows the wafer packaging, called the “engine block,” which provides direct power and cooling to the wafer. The right diagram shows the entire CS-3 system, which houses a single WSE-3 and engine block, and provides the physical infrastructure to integrate into a standard data center. The Cerebras AI Supercomputer – Purpose Built for Scaling Training and Inference Performance While a single CS-3 already has the equivalent performance of a GPU cluster, there is often a need to accelerate even further by using multiple CS-3 systems. Our specialized wafer-scale AI Supercomputer architecture is designed to scale near-linearly across up to 2,048 CS-3 systems, and to bring exceptional acceleration for training and inference


on even the largest AI models, without distribution complexity. Through co-design with the WSE-3 chip and CS-3 system, Cerebras has developed and productized key cluster-scaling technologies:

•Training parameter storage. The cluster uses a device called MemoryX to centrally store model parameters during training, enabling seamless model scaling. Parameters are streamed to the CS-3 system for training computations, allowing even a single CS-3 to train the largest models, limited only by MemoryX storage capacity. This separation of storage from compute allows the model size to scale independent of compute capacity. In the CS-3 wafer-scale cluster, SKUs support models ranging from 30 billion to 24 trillion parameters.

•Data-parallel training interconnect. The cluster features a specialized interconnect, SwarmX, enabling data-parallel scaling for training large models. SwarmX includes active devices for weight broadcast and gradient reduction, simplifying operations compared to traditional GPU clusters. In the CS-3 wafer-scale cluster, SwarmX uses 400G and 800G links, designed to support up to 2,048 CS-3 systems in a single cluster.

•Inference runtime processing. The AI Supercomputer has proprietary software that handles inference runtime operations. Model parameters are stored in on-chip memory for ultra-low latency during inference. The inference runtime subsystem provides low-latency data input to each CS-3, coordinates their operations, and interfaces with external inference systems. It also manages numerous simultaneous inference streams while maintaining ultra-low latency in the CS-3 wafer-scale cluster.
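The first two bullets describe a weight-streaming, data-parallel flow: parameters are broadcast from a central store to every system, each system computes gradients on its own data shard, and the gradients are reduced on the way back before a single central update. As a rough, plain-Python illustration of that data flow only (the function and variable names are ours, and the 1-D model is a toy; the actual MemoryX and SwarmX implementations are proprietary hardware and software):

```python
# Toy sketch of weight-streaming, data-parallel training. A central
# parameter store (the role MemoryX plays in the text) broadcasts
# weights to N workers (the CS-3 systems); per-worker gradients are
# reduced (SwarmX's role) and applied once, centrally.
# All names and the 1-D least-squares model are illustrative only.

def loss_grad(w, batch):
    # Gradient of mean((w*x - y)^2) with respect to w.
    return sum(2 * (w * x - y) * x for x, y in batch) / len(batch)

def train_step(w, shards, lr=0.01):
    # 1. Broadcast: every worker receives the same current weights.
    worker_weights = [w for _ in shards]
    # 2. Compute: each worker produces gradients on its own data shard.
    grads = [loss_grad(wi, s) for wi, s in zip(worker_weights, shards)]
    # 3. Reduce: gradients are averaged across workers.
    g = sum(grads) / len(grads)
    # 4. Update: the central store applies the update once.
    return w - lr * g

# Data generated from y = 3*x, split across 4 "systems".
data = [(x, 3.0 * x) for x in range(1, 9)]
shards = [data[i::4] for i in range(4)]

w = 0.0
for _ in range(50):
    w = train_step(w, shards)
print(round(w, 3))  # → 3.0
```

Because the store holds the full parameter set while each worker only ever sees the streamed weights and its own shard, model size in this scheme is bounded by the store's capacity rather than by per-worker memory, which is the separation of storage from compute the bullet describes.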


The Cerebras Wafer-Scale Cluster. The single cluster architecture has resources to support training (shown on the left) and inference (shown on the right). These resources are assigned to CS-3 systems (shown in the middle) such that any CS-3 system can be used either for training or inference.

CSoft Compiler Co-designed With the Hardware to Deliver Easy-To-Use Performance

The Cerebras Software platform (CSoft) is foundational to the usability and user productivity advantages of the Cerebras solution. CSoft allows ML users to program for Cerebras Supercomputer clusters as simply as they would for a single computer, without any distributed computing complexity. This enables rapid iteration and allows users


to focus purely on ML experimentation, model quality, and business value creation. To achieve these goals, we developed and productized key software and AI technologies:

•PyTorch tracing and integration. PyTorch is the industry-standard ML framework because it is easy to use and abstracts away the hardware. PyTorch allows users to focus on the ML algorithm, programming in the high-level Python language while avoiding low-level languages such as CUDA. CSoft is tightly integrated with PyTorch so that programs written in PyTorch can be extracted and then mapped to the Cerebras architecture efficiently. The PyTorch integration uses an advanced tracing technique that analyzes the AI model prior to executing it, to identify all the model components to be globally optimized. This tracing technique is designed to be applicable to other ML frameworks that gain popularity in the future, allowing us to adapt as the needs of the ML community change.

•Performance-optimizing graph compiler. Once the AI model has been extracted from PyTorch, the CSoft Graph Compiler compiles the model for the Cerebras hardware and automatically generates the machine code that runs on each core of each WSE-3. The compile process uses a series of proprietary steps to transform the high-level program into efficient, low-level machine code. To find an optimal mapping, we developed a series of advanced optimization algorithms using constraint solving and simulated annealing to produce high performance out of the box on the CS-3 hardware.

•AI model and workflow libraries. On top of the PyTorch and compiler foundation, the remaining critical enabler of state-of-the-art AI is the set of ML algorithmic techniques needed to create and run the model for training or inference.
We have developed and validated state-of-the-art ML techniques that have led to many research publications and releases of best-in-class models, including GPT model scaling laws, long context architectures, multi-modal image and text models, and hyper-parameter tuning recipes. The list of techniques continues to grow as we further our applied ML research and learnings from customer engagements. We have distilled these techniques into a library of AI models, layer abstractions, and workflows that enable users to directly achieve state-of-the-art ML results, without having to build the model or learn the techniques from scratch. This library is called the Cerebras Model Zoo, an open-source repository of reference model implementations using the latest ML techniques, data preprocessing tools, and training recipes. Model Zoo includes a wide range of GenAI model examples supporting a diverse set of downstream applications and provides users with an easy foundation to build upon that already leverages state-of-the-art advancements and best practices.


The CSoft Software Stack. User programs are extracted from industry-standard PyTorch, optimized, and lowered to execute on the CS-3 systems with high performance, without further user intervention.
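The trace-then-compile pattern described above (analyze the whole model ahead of execution, then lower the complete graph) can be illustrated generically. The following plain-Python sketch, with illustrative names of our own choosing and no relation to CSoft internals, records operations into a graph via proxy objects and then executes that graph on a toy "backend":

```python
# Minimal ahead-of-time tracing sketch: a Proxy records each operation
# into a graph instead of computing it, so a "compiler" can see the
# whole program before anything runs. Names are illustrative only.

class Proxy:
    def __init__(self, graph, name):
        self.graph, self.name = graph, name

    def _record(self, op, other):
        out = f"t{len(self.graph)}"
        rhs = other.name if isinstance(other, Proxy) else other
        self.graph.append((out, op, self.name, rhs))
        return Proxy(self.graph, out)

    def __mul__(self, other): return self._record("mul", other)
    def __add__(self, other): return self._record("add", other)

def trace(fn, *input_names):
    """Run fn on Proxy inputs; return the recorded op graph."""
    graph = []
    result = fn(*[Proxy(graph, n) for n in input_names])
    return graph, result.name

def execute(graph, out_name, **inputs):
    """A toy 'backend' that runs the traced graph on concrete values."""
    ops = {"mul": lambda a, b: a * b, "add": lambda a, b: a + b}
    env = dict(inputs)
    for out, op, lhs, rhs in graph:
        b = env[rhs] if isinstance(rhs, str) else rhs
        env[out] = ops[op](env[lhs], b)
    return env[out_name]

# A "model": y = x*w + b. Tracing sees the whole computation up front.
def model(x, w, b):
    return x * w + b

graph, out = trace(model, "x", "w", "b")
print(graph)  # [('t0', 'mul', 'x', 'w'), ('t1', 'add', 't0', 'b')]
print(execute(graph, out, x=2, w=3, b=1))  # 7
```

Because the full graph is available before execution, a real compiler can apply global optimizations (operator fusion, placement, scheduling) across the entire model rather than one operation at a time; this is the same general idea behind tracing tools such as torch.fx.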


Extraordinary Performance, Efficiency, and Ease of Use – Fundamental Technology Innovation Produces Industry-Leading User Value

The above Cerebras technology advancements combine to provide direct customer benefit across key areas, solving fundamental challenges faced by GPU clusters:

1. High performance training: over 10 times faster training time-to-solution than leading 8-way GPU systems of the same generation (as reported by many of our customers), designed to scale to 256 exaFLOPS.
2. High performance inference: over 10 times faster output generation speeds than GPU-based solutions from top CSPs.
3. High power efficiency: three times higher performance per watt than the leading 8-way GPU system.
4. Easy to program, with low developer switching cost.

Sales and Marketing

Our sales and marketing strategy centers on deep market understanding and customer-centric product development. We leverage our extensive market knowledge, proven track record in delivering large-scale compute solutions, and close customer collaborations to optimize our product roadmap. This is designed to ensure our solutions consistently deliver significant value to our customers. Alongside a focused sales force, we maintain a dedicated Field ML team. This team provides customers with access to leading AI expertise, ensuring they are positioned to leverage our technology effectively. Field ML teams are supported by Applied ML teams, product applications engineers, marketing, and business development/strategy teams, fostering a comprehensive customer support structure. We focus our sales and marketing efforts on industry leaders, specifically large enterprises domestically and abroad with rich data assets. Our customers seek to leverage their rich proprietary data and combine it with Cerebras’ industry-leading compute and AI expertise to build a durable competitive advantage.
Among our customers, word of success travels quickly, and as a result, it is very important to our future that we maintain strong and collaborative relationships and that we invest behind the largest and most successful of our customers. We utilize master purchase agreements, purchase orders, and statements of work to define work scope, price, quantities, delivery terms, warranties, and software subscriptions. We predominantly sell our solutions directly to customers, either as on-premise hardware or via the cloud on a consumption-based model.

Research and Development

We are committed to relentless innovation in both hardware and software to address the rapidly evolving computational needs of GenAI. In the hardware domain, our research and development efforts focus on chip and system design. We spearheaded the development of wafer-scale integration, resulting in three generations of industry-leading processors at the 16nm, 7nm, and 5nm process nodes. While the idea that bigger silicon would lead to computational efficiencies was novel in 2016, the rest of the industry has now moved in this direction. Cerebras’ innovations in wafer-scale technology helped solve a technical problem that had been open in the processor industry for 75 years. Core members of our founding and early engineering team are world leaders in chip design and took big risks to help drive the computational industry forward. Our rigorous design methodology utilizes cutting-edge simulation tools and partnerships with Electronic Design Automation providers to create high-quality processors and accelerate development cycles.


In the software domain, we develop specialized compilers that translate AI models for optimal performance on our hardware, maximizing performance and efficiency. Our researchers are leaders in sparsity techniques, a critical approach for efficient AI workloads, as evidenced by our publications in this field. These techniques are represented in new and advanced models that we make available to our customers. We dedicate significant resources to ongoing research and development. We invest heavily in attracting and retaining a global team of highly skilled engineers across dedicated facilities in the United States, Canada, and India. This unwavering commitment to innovation fuels our growth and positions us as a leader in the GenAI landscape.

Manufacturing and Suppliers

We operate a fabless manufacturing model, strategically partnering with industry leaders for the production of our AI compute systems, which include ICs, boards, and systems. Our core manufacturing partners include TSMC, a leading semiconductor foundry, which fabricates our cutting-edge WSEs, and Advanced Semiconductor Engineering (“ASE”), which handles specialized processes, including the deposition of redistribution layers; we manage final wafer packaging, assembly, and testing in our Sunnyvale, California facility. We also use a small number of third parties to manufacture subassemblies and critical components such as printed circuit boards, I/O subsystems, cooling assemblies, and power delivery modules. The manufacturing process is subject to extensive testing and verification. Our supply chain is designed for flexibility and quality as we plan to ramp up production to meet the growing global demand for our AI compute systems. Simultaneously, we are committed to rigorous quality control throughout the manufacturing process to confirm reliability in even the most demanding environments at our customer facilities.
Our contract manufacturing partners perform system assembly, and extensive testing and verification protocols are in place at every stage, including post-assembly. Final system-level burn-in and test is conducted by Cerebras. Our quality processes include high production test coverage, full product traceability, and extensive post-assembly burn-in. We employ a dedicated quality team that continuously monitors feedback during manufacturing and after deployment. This data-driven approach allows us to improve our product quality and reliability and helps us meet the stringent demands of our customers worldwide.

Intellectual Property

Protecting our intellectual property and proprietary technology, including our AI products and solutions, is an important aspect of our business. We rely on a combination of intellectual property rights, including patent, trademark, trade secret, and other related laws in the United States and internationally, as well as confidentiality procedures and contractual provisions, to protect, maintain, and enforce our proprietary technology, intellectual property rights, and brand. Our intellectual property portfolio includes patents, trademarks, proprietary software, and trade secrets. As of June 30, 2024, we owned 85 issued patents and 15 pending patent applications globally. Of these, 44 are issued U.S. patents and nine are pending U.S. patent applications. Our issued patents and pending patent applications generally relate to the design and fabrication of wafer-scale processors; the assembly, packaging, and cooling of wafer-scale processors; and hardware and software architectures for accelerated deep learning. The expiration dates of the U.S.-issued patents are between 2038 and 2042, not taking into account any applicable patent term extensions. We routinely review our development efforts to assess the existence and patentability of new inventions.
We have a policy of requiring employees and consultants to execute confidentiality agreements upon the commencement of an employment or consulting relationship with us. Our employee and independent contractor agreements also require relevant employees and independent contractors to assign to us all rights to any inventions made or conceived during their employment or engagement with us. In addition, we typically require individuals and


entities with which we discuss potential business relationships to sign non-disclosure agreements that contain customary confidentiality provisions.

Competition

We offer a purpose-built AI compute platform. Our CS product family primarily competes against solutions from NVIDIA Corporation, Advanced Micro Devices, Inc., Intel Corporation, Microsoft Corp., and Alphabet Inc., among others, as well as internally developed custom application-specific integrated circuits and a variety of private companies, some of which are focused on inference-only offerings. We believe that our ability to remain competitive will depend on how well we are able to anticipate the features and functions that customers will require and whether we are able to deliver consistent volumes of our products at acceptable levels of quality and at competitive prices. We expect competition to increase from both existing competitors and new market entrants with products that may be lower priced than ours or may provide better performance or additional features not provided by our products. In addition, it is possible that new competitors or alliances among competitors could emerge and acquire significant market share. Some of our competitors have greater marketing, financial, distribution, and manufacturing resources than we do and may be better able to adapt to customer or technological changes. We expect an increasingly competitive environment in the future.

Human Capital

As of September 26, 2024, we had 401 employees, including 252 in the United States, with additional employees located internationally, including in Canada and India. We maintain a full-time workforce and supplement it with contractors and consultants. To our knowledge, none of our employees are represented by a labor union or party to a collective bargaining agreement. We consider our relationships with our employees to be good.
Our human capital resources objectives include, as applicable, identifying, recruiting, retaining, incentivizing, and integrating our existing and new employees. The principal purposes of our equity incentive plans are to attract, retain, and reward personnel through the granting of stock-based compensation awards in order to increase stockholder value and the success of our company by motivating such individuals to perform to the best of their abilities and achieve our objectives.

Facilities

Our corporate headquarters is located in Sunnyvale, California, where we lease approximately 68,000 square feet for office space, research and development, and testing, pursuant to a lease agreement that expires in November 2027, subject to the terms thereof. We lease additional facilities in San Diego, Canada, and India for research and development. We also enter into agreements for offsite colocation facilities to house and operate our AI supercomputers. We enter into these agreements for our own corporate purposes as well as on behalf of our customers. Currently, our data center facilities are in California, and we manage another data center facility in Texas on behalf of a customer. We believe that our facilities are suitable to meet our current needs. We intend to expand our facilities or add new facilities as we grow, and we believe that suitable additional or alternative spaces will be available on commercially reasonable terms, if required.

Government Regulations

We are subject to many U.S. federal and state laws, rules, and regulations, as well as laws, rules, and regulations imposed by various non-U.S.
governmental authorities, including those related to intellectual property, tax, import and export requirements, anti-corruption, economic and trade sanctions, national security and foreign investment, foreign exchange controls and cash repatriation restrictions, data privacy and security requirements, competition, advertising, employment, product regulations, environment, health, and safety requirements, and consumer laws.


These laws and regulations are complex, are constantly evolving, and may be interpreted, applied, created, or amended in a manner that could harm our business. The import and export of our products and technology are subject to laws and regulations, including international treaties, U.S. and various non-U.S. export controls and sanctions laws, customs regulations, and other trade rules. The scope, nature, and severity of such controls vary widely across different countries and may change frequently over time. Such laws, rules, and regulations may delay the introduction of some of our products or impact our competitiveness by restricting our ability to do business in certain countries, territories, or jurisdictions or with certain parties (including certain governments). U.S. export restrictions also require us to obtain licenses from the U.S. Department of Commerce to allow the export or transfer of our products (including our software and technology), and there can be no assurance that export permissions will be granted. See the section titled “Risk Factors” for additional information regarding risks we face related to government regulation.

Legal Proceedings

From time to time, we may be subject to legal proceedings, claims, and investigations in the ordinary course of business. We are not presently a party to any litigation of which the outcome, we believe, if determined adversely to us, would individually or taken together have a material adverse effect on us. We cannot predict the results of any such proceedings, claims, or investigations, and regardless of the potential outcomes, the existence thereof may have a material adverse impact on us due to diversion of management time and attention as well as the financial costs related to resolving such matters.



MANAGEMENT

Executive Officers and Directors

The following table sets forth information regarding our executive officers and directors as of September 30, 2024:

Name                         Age   Position(s)
Executive Officers and Employee Director:
Andrew D. Feldman .........   55   Chief Executive Officer, President, and Director
Robert Komin ..............   61   Chief Financial Officer
Dhiraj Mallick ............   52   Chief Operating Officer
Non-Employee Directors:
Paul Auvil(1)(3) ..........   61   Director
Glenda Dorchak(1)(2) ......   70   Director
Thomas Lantzsch(2) ........   64   Director
Lior Susan(3) .............   40   Director
Steve Vassallo(2)(3) ......   53   Director
Eric Vishria(1) ...........   45   Director

_______________
(1) Member of the audit committee.
(2) Member of the compensation committee.
(3) Member of the nominating and corporate governance committee.

Executive Officers and Employee Director

Andrew D. Feldman is one of our co-founders and has served as our Chief Executive Officer and President and as a member of our board of directors since April 2016. From February 2012 to June 2014, Mr. Feldman served as Corporate Vice President and General Manager at Advanced Micro Devices, Inc. (“AMD”), a semiconductor company. From November 2007 to February 2012, Mr. Feldman served as Chief Executive Officer at SeaMicro, a dense microserver company acquired by AMD. From August 2003 to December 2006, Mr. Feldman served as Vice President, Marketing and Product Management at Force10 Networks, Inc., a computer networking company acquired by Dell, Inc. From March 2000 to August 2003, Mr. Feldman served as Vice President, Corporate Marketing and Corporate Development at Riverstone Networks Inc., a networking switching hardware company. Mr. Feldman holds an M.B.A. from Stanford University and a B.A. in Economics and Political Science from Stanford University. We believe Mr. Feldman is qualified to serve as a member of our board of directors because of the perspective and experience he brings as our co-founder and Chief Executive Officer. See “—Involvement in Certain Legal Proceedings” for certain details regarding historical legal proceedings involving Mr. Feldman.

Robert Komin has served as our Chief Financial Officer and Treasurer since March 2024. Mr. Komin previously served as Chief Financial Officer of Sunrun Inc. (“Sunrun”), a residential solar and storage company, from March 2015 to May 2020, and then continued as a consultant until January 2021. From September 2013 to January 2015, Mr. Komin served as Chief Financial Officer at Flurry, Inc., a mobile analytics and advertising company. From August 2012 to August 2013, Mr.
Komin served as Chief Financial Officer at Ticketfly, Inc., a music ticketing and marketing services provider. From January 2010 to July 2012, Mr. Komin served as Chief Operating Officer and Chief Financial Officer at Linden Research, Inc., a creator of virtual digital entertainment and cybercurrency. Mr. Komin previously served as a member of the board of directors and audit committee of Bird


Global Inc., a micromobility company, from June 2021 to April 2024. Mr. Komin holds an M.B.A. from Harvard Business School and a B.S. in Accounting and General Science from the University of Oregon.

Dhiraj Mallick has served as our Chief Operating Officer since September 2023. From June 2018 to September 2023, Mr. Mallick served as our Senior Vice President of Engineering and Operations. From November 2015 to May 2018, Mr. Mallick served as Vice President of Innovation, Pathfinding and Architecture, Data Center Group at Intel Corporation, a multinational technology company. From April 2012 to August 2015, Mr. Mallick served as General Manager and Corporate Vice President at AMD. Since January 2020, Mr. Mallick has served on the Global Advisory Group of the Global Semiconductor Alliance, a semiconductor and technology industry organization. Mr. Mallick holds an M.S. in Electrical Engineering from Stanford University and a B.S. in Electrical Engineering from the University of Rochester.

Non-Employee Directors

Paul Auvil has served as a member of our board of directors since July 2024. Mr. Auvil previously served as Chief Financial Officer of Proofpoint, Inc., an enterprise security company, from March 2007 to February 2023. From September 2006 to March 2007, Mr. Auvil was an entrepreneur-in-residence with Benchmark, a venture capital firm. From August 2002 to July 2006, Mr. Auvil served as Chief Financial Officer at VMware, Inc., a cloud-computing and virtualization company. From April 1998 to January 2002, Mr. Auvil served as Chief Financial Officer at Vitria Technology, Inc., an eBusiness platform company. Mr. Auvil held various executive positions at VLSI Technology, Inc., a semiconductor and circuit manufacturing company, from August 1988 to March 1998, including serving as the Vice President of the Internet and Secure Products Division. Mr.
Auvil has served as a member of the board of directors of Elastic N.V., a platform for search-powered solutions, since October 2023. Mr. Auvil previously served as a member of the boards of directors of 1Life Healthcare, Inc. (doing business as One Medical), a primary care organization acquired by Amazon, Inc., from September 2019 to February 2023, Quantum Corporation, a data storage company, from August 2007 to November 2017, Marin Software Incorporated, a cloud-based advertisement management platform company, from October 2009 to April 2017, and OpenTV Corp., a provider of interactive television software and services, from January 2010 to April 2010. Mr. Auvil holds an M.M. from the Kellogg Graduate School of Management at Northwestern University and a B.E. in Electrical Engineering from Dartmouth College. We believe Mr. Auvil is qualified to serve as a member of our board of directors because of his extensive experience in the technology industry and as an executive and a member of the boards of directors of technology companies.

Glenda Dorchak has served as a member of our board of directors since July 2024. Ms. Dorchak previously served as Executive Vice President and General Manager of Global Business for Spansion Inc., a flash memory manufacturer, from April 2012 to June 2013. Ms. Dorchak served as Chief Executive Officer of VirtualLogix, Inc., a virtualization software solutions company, from January 2009 to October 2010, and as Chief Executive Officer of Intrinsyc Software, a software company, from July 2006 to November 2008. From March 2001 to July 2006, Ms. Dorchak served in various roles at Intel Corporation, including as Vice President and Chief Operating Officer, Intel Communications Group, Vice President and General Manager, Intel Broadband Products Group, and Vice President and General Manager, Intel Consumer Electronics Group. Ms.
Dorchak served as Chairman and Chief Executive Officer at Value America, an e-retailer, from September 1998 to November 2000. From July 1974 to September 1998, Ms. Dorchak held various management and executive positions at IBM Corporation, a semiconductor and circuit manufacturing company, including General Manager and Director of IBM Direct, and Director, General Business, of IBM Personal Systems Group North America. Ms. Dorchak has served as a member of the boards of directors of Wolfspeed, Inc., a provider of silicon carbide materials and semiconductor products, since January 2020, Globalfoundries Inc., a semiconductor contract manufacturing and design company, since June 2019, and Ansys Inc., an engineering simulation software company, since July 2018. She previously served as a member of the boards of directors of Viavi Solutions Inc., a provider of network test, monitoring, and assurance solutions, from November 2019 to October 2021, Mellanox Technologies, Ltd., a multinational supplier of computer networking products, from June 2009 to April 2020, Quantenna Communications, a communication device


company, from June 2018 to June 2019, and Energy Focus Inc., a developer of energy-efficient LED lighting systems and controls, from July 2015 to February 2019. We believe Ms. Dorchak is qualified to serve as a member of our board of directors because of her experience in our industry and as a member of the boards of directors of public companies.

Thomas Lantzsch has served as a member of our board of directors since September 2024. Mr. Lantzsch previously served as Senior Vice President and General Manager, Internet of Things, of Intel Corporation from January 2017 to January 2023. From December 2006 to November 2016, Mr. Lantzsch served as Executive Vice President of Strategy and Corporate Development of Arm Inc., a semiconductor company. Mr. Lantzsch has served as a member of the board of directors of Canatu Oyj, a carbon nanomaterial developer company, since November 2023. Mr. Lantzsch holds an M.S. in Finance from the Naveen Jindal School of Management at the University of Texas at Dallas and a B.S. in Electrical Engineering from Michigan State University. We believe Mr. Lantzsch is qualified to serve as a member of our board of directors because of his expertise and experience working with and for technology and semiconductor companies.

Lior Susan has served as a member of our board of directors since April 2023. Since January 2015, Mr. Susan has served as Founder and Managing Partner of Eclipse Ventures, a venture capital firm. Mr. Susan is a co-founder of Bright Machines, Inc., a software company, and has served as its Executive Chairman since January 2018 and as its Chief Executive Officer from January 2018 to May 2018, from December 2021 to December 2022, and from August 2023 to the present. From June 2012 to January 2015, Mr. Susan served as Founder and General Partner at LabIX, the hardware investment platform of Flextronics International Ltd., an end-to-end supply chain solutions company. Mr.
Susan served as an Advisor at Intucell Ltd., a self-optimizing network software company, from April 2008 until it was sold to Cisco in 2012. Mr. Susan has served as a member of the board of directors of Owlet, Inc., a health technology company, since March 2015, and has served as the chairman of its board of directors since July 2021. He also serves as a member of the boards of directors of several private companies, including Augury, Inc., Bright Machines, Inc., Chord Commerce, Inc., Cybertoka Ltd., Datapelago, Inc., Datorios, Inc., Dutch Pet, Inc., Flex Logix Technologies, Inc., InsidePacket, Ltd., Senser, Ltd., and Skyryse, Inc. Mr. Susan is a former member of an elite Special Forces unit in the Israel Defense Forces. We believe Mr. Susan is qualified to serve as a member of our board of directors because of his expertise and experience working with and investing in technology companies.

Steve Vassallo has served as a member of our board of directors since May 2016. Since October 2007, Mr. Vassallo has served as a general partner and in various other roles at Foundation Capital, a venture capital firm. From September 2004 to September 2006, Mr. Vassallo served as Vice President of Product and Engineering at Ning Interactive Inc., a social platform. From May 1999 to September 2002, Mr. Vassallo served as director of engineering at Immersion Corporation, a haptic technology company. Mr. Vassallo previously served as a member of the board of directors of Sunrun from May 2008 to June 2019. Mr. Vassallo also serves as a member of the boards of directors of several private companies. Mr. Vassallo holds an M.B.A. from Stanford University, an M.S. in Electromechanical Engineering from Stanford University, and a B.S. in Mechanical Engineering from Worcester Polytechnic Institute. We believe Mr. Vassallo is qualified to serve on our board of directors because of his extensive experience in the technology industry.
Eric Vishria has served as a member of our board of directors since May 2016. Since July 2014, Mr. Vishria has served as a General Partner of Benchmark, a venture capital firm. From August 2013 to August 2014, Mr. Vishria served as Vice President, Digital Magazines and Verticals at Yahoo! Inc., a web services provider. From November 2008 to August 2013, Mr. Vishria served as co-founder and Chief Executive Officer of RockMelt, Inc., a social media web browser. He has served as a member of the boards of directors of Amplitude, Inc., a digital optimization company, since December 2014, and Confluent, Inc., a data solutions company, since September 2014.


He also serves as a member of the boards of directors of several private companies. Mr. Vishria holds a B.S. in Mathematical and Computational Science from Stanford University. We believe Mr. Vishria is qualified to serve on our board of directors because of his extensive experience as a venture capital investor and a member of the boards of directors of other technology companies. Involvement in Certain Legal Proceedings Andrew D. Feldman was previously one of six named defendants in the action SEC v. Pereira, No. 3:06-cv-06384-CRB (N.D. Cal.), in which the SEC alleged, among other things, that Mr. Feldman, as Vice President, Corporate Marketing and Corporate Development of Riverstone Networks, Inc., negotiated, reviewed, approved, or was otherwise aware of sales transactions in 2001 and 2002 that were improperly accounted for by Riverstone and aided and abetted Riverstone in violating U.S. securities laws. Without admitting or denying the allegations of the complaint, Mr. Feldman settled the claims against him in 2008 by entering into an agreement with the SEC permanently restraining and enjoining Mr. Feldman from violating federal securities laws and requiring Mr. Feldman to pay $289,507 plus interest. In connection with the same alleged facts, Mr. Feldman also pled guilty in December 2007 to one count of circumventing accounting controls of an issuer in violation of 15 U.S.C. sections 78m(b)(5) and 78ff, and was sentenced to three years of probation and fined $5,000 in connection with an action brought by the U.S. Department of Justice captioned USA v. Feldman, No. 3:07-cr-07-00731-0001-CRB (N.D. Cal.). Family Relationships There are no family relationships among any of our executive officers or directors. Board Structure and Composition Director Independence Our board of directors currently consists of seven members. Our board of directors has determined that all of our directors, other than Mr. 
Feldman, qualify as independent directors in accordance with the Nasdaq Listing Rules. Mr. Feldman is not considered independent by virtue of his position as an executive officer of the company. Under the Nasdaq Listing Rules, the definition of independence includes a series of objective tests, such as that the director is not, and has not been for at least three years, one of our employees and that neither the director nor any of his or her family members has engaged in various types of business dealings with us. In addition, as required by the Nasdaq Listing Rules, our board of directors has made a subjective determination as to each independent director that no relationship exists that, in the opinion of our board of directors, would interfere with the exercise of independent judgment in carrying out the responsibilities of a director. In making these determinations, our board of directors reviewed and discussed information provided by the directors and us with regard to each director’s relationships as they may relate to us and our management. Classified Board of Directors In accordance with our amended and restated certificate of incorporation, which will be effective immediately prior to the completion of this offering, our board of directors will be divided into three classes with staggered three-year terms. At each annual general meeting of stockholders, the successors to directors whose terms then expire will be elected to serve from the time of election and qualification until the third annual meeting following their election. Our directors will be divided among the three classes as follows: •The Class I directors will be Andrew D. Feldman, Paul Auvil, and Eric Vishria, and their terms will expire at the annual meeting of stockholders to be held in 2025; •The Class II directors will be Glenda Dorchak and Steve Vassallo, and their terms will expire at the annual meeting of stockholders to be held in 2026; and


•The Class III directors will be Thomas Lantzsch and Lior Susan, and their terms will expire at the annual meeting of stockholders to be held in 2027. We expect that any additional directorships resulting from an increase in the number of directors will be distributed among the three classes so that, as nearly as possible, each class will consist of one-third of the directors. The division of our board of directors into three classes with staggered three-year terms may delay or prevent a change of our management or a change in control. Leadership Structure of the Board of Directors Our amended and restated bylaws and corporate governance guidelines to be adopted immediately following the effectiveness of the registration statement of which this prospectus forms a part will provide our board of directors with flexibility to combine or separate the positions of chairperson of the board of directors and Chief Executive Officer and to implement a lead director in accordance with its determination regarding which structure would be in the best interests of our company. Our board of directors currently believes that our existing leadership structure, under which our chief executive officer, Mr. Feldman, serves as chairman of our board of directors, is effective. Our board of directors will continue to periodically review our leadership structure and may make such changes in the future as it deems appropriate. Our board of directors has elected Eric Vishria to serve as lead independent director. As lead independent director, Mr. Vishria will preside at all meetings of the board of directors at which the chairman of the board of directors is not present, including executive sessions, and perform such additional responsibilities as set forth in our corporate governance guidelines. 
Voting Arrangements The election of the members of our board of directors is currently governed by our amended and restated voting agreement that we entered into with certain holders of our capital stock and the related provisions of our current amended and restated certificate of incorporation. Pursuant to our amended and restated voting agreement and current amended and restated certificate of incorporation, Mr. Feldman was elected by certain holders of our common stock, voting together as a single class, and Messrs. Susan, Vassallo, and Vishria were elected by the holders of our Series A redeemable convertible preferred stock. Our amended and restated voting agreement will terminate and the provisions of our current amended and restated certificate of incorporation by which our directors were elected will be amended and restated in connection with this offering. After this offering, the number of directors will be fixed by our board of directors, subject to the terms of our amended and restated certificate of incorporation and amended and restated bylaws that will become effective immediately prior to the completion of this offering. Each of our current directors will continue to serve as a director until the election and qualification of his or her successor, or until his or her earlier death, resignation, or removal. Role of Board in Risk Oversight Process Risk assessment and oversight are an integral part of our governance and management processes. Our board of directors encourages management to promote a culture that incorporates risk management into our corporate strategy and day-to-day business operations. Management discusses strategic and operational risks at regular management meetings, and conducts specific strategic planning and review sessions during the year that include a focused discussion and analysis of the risks facing us. 
Throughout the year, senior management reviews these risks with the board of directors at regular board meetings as part of management presentations that focus on particular business functions, operations or strategies, and presents the steps taken by management to mitigate or eliminate such risks.


Our board of directors does not have a standing risk management committee, but rather administers this oversight function directly through our board of directors as a whole, as well as through various standing committees of our board of directors that address risks inherent in their respective areas of oversight. While our board of directors is responsible for monitoring and assessing strategic risk exposure, our audit committee is responsible for overseeing our major financial and cybersecurity risk exposures and the steps our management has taken to monitor and control these exposures. The audit committee also approves or disapproves any related person transactions. Our nominating and corporate governance committee monitors the effectiveness of our corporate governance guidelines. Our compensation committee assesses and monitors whether any of our compensation policies and programs has the potential to encourage excessive risk-taking. The risk oversight process also includes receiving regular reports from our committees and members of senior management to enable our board of directors to understand our risk identification, risk management, and risk mitigation strategies with respect to areas of potential material risk, including operations, finance, legal, regulatory, cybersecurity, strategic and reputational risk. Board Committees Effective as of the date the registration statement of which this prospectus forms a part is declared effective by the SEC, our board of directors will have three standing committees: an audit committee; a compensation committee; and a nominating and corporate governance committee. Each committee is governed by a charter that will be available on our website following completion of this offering. Members serve on these committees until their resignation or until otherwise determined by our board of directors. 
Audit Committee Effective as of the date the registration statement of which this prospectus forms a part is declared effective by the SEC, the members of our audit committee will consist of Paul Auvil, Glenda Dorchak, and Eric Vishria. Mr. Auvil will be the chairperson of our audit committee. The composition of our audit committee meets the requirements for independence under the current Nasdaq Listing Rules and Rule 10A-3 of the Exchange Act. Each member of our audit committee is financially literate. In addition, our board of directors has determined that Mr. Auvil is an “audit committee financial expert” within the meaning of the SEC rules. This designation does not impose on such directors any duties, obligations, or liabilities that are greater than are generally imposed on members of our audit committee and our board of directors. Our audit committee is directly responsible for, among other things: •appointing, retaining, compensating, and overseeing the work of our independent registered public accounting firm; •assessing the independence and performance of the independent registered public accounting firm; •reviewing with our independent registered public accounting firm the scope and results of the firm’s annual audit of our financial statements; •overseeing the financial reporting process and discussing with management and our independent registered public accounting firm the financial statements that we will file with the SEC; •pre-approving all audit and permissible non-audit services to be performed by our independent registered public accounting firm; •reviewing policies and practices related to risk assessment and management; •reviewing our accounting and financial reporting policies and practices and accounting controls, as well as compliance with legal and regulatory requirements; •reviewing cybersecurity matters;


•reviewing, overseeing, approving, or disapproving any related-person transactions; •reviewing with our management the scope and results of management’s evaluation of our disclosure controls and procedures and management’s assessment of our internal control over financial reporting, including the related certifications to be included in the periodic reports we will file with the SEC; and •establishing procedures for the confidential anonymous submission of concerns regarding questionable accounting, internal controls, or auditing matters, or other ethics or compliance issues. Compensation Committee Effective as of the date the registration statement of which this prospectus forms a part is declared effective by the SEC, the members of our compensation committee will consist of Glenda Dorchak, Thomas Lantzsch, and Steve Vassallo. Ms. Dorchak will be the chairperson of our compensation committee. Each of Ms. Dorchak and Messrs. Lantzsch and Vassallo is a non-employee director, as defined by Rule 16b-3 promulgated under the Exchange Act and meets the requirements for independence under the current Nasdaq Listing Rules. Our compensation committee is responsible for, among other things: •reviewing and approving the compensation of our executive officers, including reviewing and approving corporate goals and objectives with respect to compensation; •authority to act as an administrator of our equity incentive plans; •reviewing and approving, or making recommendations to our board of directors with respect to, incentive compensation and equity plans; •reviewing and recommending that our board of directors approve the compensation for our non-employee directors; and •establishing and reviewing general policies relating to compensation and benefits of our employees. 
Nominating and Corporate Governance Committee Effective as of the date the registration statement of which this prospectus forms a part is declared effective by the SEC, the members of our nominating and corporate governance committee will consist of Steve Vassallo, Paul Auvil, and Lior Susan. Mr. Vassallo will be the chairperson of our nominating and corporate governance committee. Messrs. Vassallo, Auvil, and Susan meet the requirements for independence under the current Nasdaq Listing Rules. Our nominating and corporate governance committee is responsible for, among other things: •identifying and recommending candidates for membership on our board of directors, including the consideration of nominees submitted by stockholders, and on each of the board’s committees; •reviewing and recommending our corporate governance guidelines and policies; •reviewing proposed waivers of the code of business conduct and ethics for directors and executive officers; •overseeing the process of evaluating the performance of our board of directors; and •assisting our board of directors on corporate governance matters. Code of Business Conduct and Ethics In connection with this offering, our board of directors will adopt a code of business conduct and ethics that applies to all of our employees, officers, and directors, including our Chief Executive Officer, Chief Financial


Officer, and other executive and senior financial officers. Upon completion of this offering, the full text of our code of business conduct and ethics will be posted on the investor relations section of our website. We intend to disclose future amendments to our code of business conduct and ethics, or any waivers of such code, on our website or in public filings. Indemnification and Insurance We maintain directors’ and officers’ liability insurance. Our amended and restated certificate of incorporation and amended and restated bylaws will include provisions limiting the liability of directors and officers and indemnifying them under certain circumstances. We have entered or will enter into indemnification agreements with each of our directors and officers to provide our directors and officers with additional indemnification and related rights. See the section titled “Description of Capital Stock—Limitations on Liability and Indemnification Matters” for additional information. Compensation Committee Interlocks and Insider Participation None of the members of our board of directors who will serve on our compensation committee upon the effectiveness of the registration statement of which this prospectus forms a part is or has been an officer or employee of our company. None of our executive officers currently serves, or in the past fiscal year has served, as a member of a compensation committee (or if no committee performs that function, the board of directors) of any other entity that has an executive officer serving as a member of our board of directors.



EXECUTIVE AND DIRECTOR COMPENSATION Executive Compensation The following is a discussion and analysis of compensation arrangements of our named executive officers (“NEOs”). This discussion contains forward-looking statements that are based on our current plans, considerations, expectations and determinations regarding future compensation programs. Actual compensation programs that we adopt may differ materially from currently planned programs as summarized in this discussion. As an “emerging growth company” as defined in the JOBS Act, we are not required to include a Compensation Discussion and Analysis section and have elected to comply with the scaled disclosure requirements applicable to emerging growth companies. We seek to ensure that the total compensation paid to our executive officers is reasonable and competitive. Compensation of our executive officers is structured around the achievement of individual performance and near-term corporate targets as well as long-term business objectives. Our NEOs for the year ended December 31, 2023 were: •Andrew D. Feldman, our Chief Executive Officer and President; •Dhiraj Mallick, our Chief Operating Officer; and •Anthony E. Maslowski, our former Chief Financial Officer. Mr. Maslowski ceased serving as our Chief Financial Officer on March 15, 2024. Summary Compensation Table The following table sets forth total compensation paid to our NEOs for 2023.

Name and Principal Position                            Year    Salary ($)    Bonus ($)    Stock Awards ($)(1)    Option Awards ($)(1)    Non-Equity Incentive Plan Compensation ($)(2)    All Other Compensation ($)    Total ($)
Andrew D. Feldman, Chief Executive Officer             2023       400,000            —                117,659                 444,000                                          342,500                             —    1,304,159
Dhiraj Mallick, Chief Operating Officer                2023       385,417            —              3,514,000                 900,350                                          187,500                             —    4,987,267
Anthony E. Maslowski, Former Chief Financial Officer   2023       350,000            —                      —                 592,000                                          100,000                             —    1,042,000

_______________
(1) Amounts shown represent the grant date fair value of stock options and RSUs granted during 2023 as calculated in accordance with ASC Topic 718. See “Stock-Based Compensation” in Note 12 to our audited consolidated financial statements included elsewhere in this prospectus for the assumptions used in calculating this amount.
(2) Amounts shown represent performance-based cash bonuses earned based on the achievement of a certain corporate financial objective during 2023.


Narrative to Summary Compensation Table Annual Base Salaries We pay each of our NEOs a base salary to compensate them for services rendered to our company. The base salary payable to each NEO is intended to provide a fixed component of compensation reflecting the executive’s skill set, experience, role, and responsibilities. The compensation committee of our board of directors established the annual base salary of each NEO for 2023 as follows: Mr. Feldman, $400,000; Mr. Mallick, $375,000; and Mr. Maslowski, $350,000. In August 2023, our compensation committee increased Mr. Mallick’s annual base salary to $400,000, and in February 2024, our compensation committee increased the annual base salary of Mr. Feldman to $425,000. Our board of directors and compensation committee may adjust base salaries from time to time in their discretion. Annual Bonuses We maintain an annual performance-based cash bonus program in which each of our NEOs participated in 2023. Under our annual performance-based cash bonus program, our compensation committee has established a target bonus opportunity for each NEO, with the amount earned based on the achievement of a certain corporate financial objective established by our compensation committee. The target bonus opportunity established for each NEO for 2023 was as follows: Mr. Feldman, $250,000; Mr. Mallick, $175,000; and Mr. Maslowski, $100,000. In August 2023, Mr. Mallick’s annual target bonus opportunity was increased to $200,000. In January 2024, our compensation committee determined the corporate financial objective was achieved under our annual performance-based cash bonus program and approved the following bonuses for our NEOs based on 2023 performance: Mr. Feldman, $342,500; Mr. Mallick, $187,500; and Mr. Maslowski, $100,000. The annual bonus approved for Mr. Feldman included a $92,500 adjustment above the amount earned for achievement of the corporate financial objective, based on our compensation committee’s overall assessment of Mr. 
Feldman’s performance during 2023. For 2024, our compensation committee adopted a similar annual performance-based cash bonus program. Our board of directors and compensation committee may adjust annual target bonus opportunities or award discretionary bonuses from time to time. Equity-Based Compensation We have granted stock options and RSUs to our NEOs to attract and retain them, as well as to align their interests with the interests of our stockholders. Our stock options are generally exercisable prior to vesting (with any unvested shares subject to repurchase at the original exercise price upon any termination of service) and generally vest over one, three or four years, subject to continued service to the company. Our RSUs generally require satisfaction of both a service-based vesting condition, which is generally satisfied over a four-year period, and a liquidity-based vesting condition, which will be satisfied upon completion of this offering. In February 2023, we granted Mr. Feldman an option to purchase 150,000 shares of our Class A common stock and 23,438 RSUs. The RSUs were granted in satisfaction of Mr. Feldman’s 2022 performance-based bonus and will fully vest upon completion of this offering or, if earlier, the date of a change in control. The option was immediately exercisable and vests in substantially equal monthly installments over four years from January 1, 2023, subject to Mr. Feldman’s continued service. Vesting of the option is accelerated in full in the event Mr. Feldman’s employment is terminated by us without cause or by Mr. Feldman for good reason during the period commencing three months prior to a change in control and ending 12 months following the closing of the change in control. The option has a term of 10 years, subject to earlier termination upon a termination of employment, and an exercise price per share of $5.02.


In February 2023, we granted each of Mr. Mallick and Mr. Maslowski an option to purchase 200,000 shares of our Class A common stock. Each option was immediately exercisable and vests in substantially equal monthly installments over four years from January 1, 2023, subject to the executive’s continued service. Vesting of each option is accelerated in full in the event the executive’s employment is terminated by us without cause or by the executive for good reason during the period commencing three months prior to a change in control and ending 12 months following the closing of the change in control. Each option has a term of 10 years, subject to earlier termination upon a termination of employment, and an exercise price per share of $5.02. In August 2023, we granted Mr. Mallick three additional options, each providing Mr. Mallick the right to purchase 35,000 shares of our Class A common stock for an exercise price of $5.02 per share. Each option was immediately exercisable and vests in 12 substantially equal monthly installments beginning August 1, 2023, August 1, 2024, and August 1, 2025, respectively, such that each option will be fully vested on the first anniversary of the applicable vesting start date. For each option, Mr. Mallick may choose between keeping the option or giving up the option in return for a cash bonus of $300,000. If the bonus is chosen, the applicable option will terminate. Otherwise, each option has a term of 10 years, subject to earlier termination upon a termination of employment. In connection with this offering, we intend to adopt a 2024 Incentive Award Plan, referred to below as the 2024 Plan, in order to facilitate the grant of cash and equity incentives to employees (including our NEOs), directors, and consultants of our company and certain of our affiliates and to enable us to obtain and retain services of these individuals, which is essential to our long-term success. 
We expect that the 2024 Plan will become effective on the business day immediately prior to the date of effectiveness of the registration statement of which this prospectus forms a part, subject to approval of such plan by our stockholders. For additional information about the 2024 Plan, see the section titled “Equity Compensation Plans” below. Other Elements of Compensation Retirement Savings and Health and Welfare Benefits We currently maintain a 401(k) retirement savings plan for our employees, including our NEOs, who satisfy certain eligibility requirements. Our NEOs are eligible to participate in the 401(k) plan on the same terms as other full-time employees. The U.S. Internal Revenue Code of 1986, as amended (the “Code”) allows eligible employees to defer a portion of their compensation, within prescribed limits, on a pre-tax basis through contributions to the 401(k) plan. We believe that providing a vehicle for tax-deferred retirement savings through our 401(k) plan adds to the overall desirability of our executive compensation package and further incentivizes our employees, including our NEOs, in accordance with our compensation policies. All of our full-time employees, including our NEOs, are eligible to participate in our health and welfare plans, including medical, dental, and vision benefits; medical and dependent care flexible spending accounts; short-term and long-term disability insurance; and life and accidental death and dismemberment insurance. Perquisites and Other Personal Benefits We provide perquisites and other personal benefits to our NEOs when we believe it is necessary to attract or retain the NEO. None of our NEOs received any perquisites during 2023.


Outstanding Equity Awards at Year End The following table lists all outstanding equity awards held by our NEOs as of December 31, 2023.

                                         Option Awards(1)(2)                                                        Stock Awards(3)
Name                   Vesting           Number of Securities    Number of Securities    Option      Option        Number of Shares or    Market Value of Shares
                       Commencement      Underlying Unexercised  Underlying Unexercised  Exercise    Expiration    Units of Stock That    or Units of Stock That
                       Date              Options (#)             Options (#)             Price ($)   Date          Have Not Vested (#)    Have Not Vested ($)(3)
                                         Exercisable             Unexercisable
Andrew D. Feldman      2/15/2019         1,150,000                                       2.40        5/13/2029
                       3/1/2022 (4)      262,500                 337,500                 2.72        12/7/2030
                       1/1/2023 (4)      45,833                  104,167                 7.89        1/11/2032
                       1/1/2023 (4)      34,375                  115,625                 5.02        2/13/2033
                       2/14/2023 (5)                                                                               23,438                 128,440
Dhiraj Mallick         6/28/2018         469,410                                         0.98        7/16/2028
                       6/28/2021 (6)     166,666                 33,334                  2.72        7/6/2030
                       6/15/2021 (4)     62,500                  37,500                  2.89        3/14/2031
                       10/1/2021 (7)                                                                               325,000                1,781,000
                       8/23/2022 (4)     100,000                 200,000                 6.47        8/22/2032
                       1/1/2023 (4)      45,833                  154,167                 5.02        2/13/2033
                       8/1/2023 (8)      11,666                  23,334                  5.02        7/31/2033
                       8/1/2024 (8)                              35,000                  5.02        7/31/2033
                       8/1/2025 (8)                              35,000                  5.02        7/31/2033
                       8/1/2023 (9)                                                                                700,000                3,836,000
Anthony E. Maslowski   1/28/2020 (10)                                                                              14,867                 81,471
                       1/1/2023 (6)      30,555                  69,445                  7.89        1/11/2032
                       1/1/2023 (4)      45,833                  154,167                 5.02        2/13/2033
_______________
(1) Each option is exercisable as to all shares underlying the option, with any shares purchased upon exercise subject to the same vesting conditions applicable to the option. In the event of any termination of employment, unvested shares may be repurchased by us for the exercise price of the related option. The portion of the option included under the “Number of Securities Underlying Unexercised Options Unexercisable” column represents the unvested portion of the option notwithstanding that it is fully exercisable.
(2) Except as otherwise provided, each option vests at a rate of 25% of the shares underlying the option on the first anniversary of the vesting commencement date and as to 1/48th of the shares underlying the option on each monthly anniversary of the vesting commencement date thereafter, in each case, subject to the executive continuing to provide services to us through the applicable vesting date.
(3) Amounts reported are calculated by multiplying $5.48, which our board of directors determined equaled the fair market value of our Class A common stock as of December 31, 2023, by the number of unvested shares comprising or underlying the stock award.
(4) The option vests as to 1/48th of the shares underlying the option on each monthly anniversary of the vesting commencement date, subject to the executive continuing to provide services to us through the applicable vesting date.
(5) The RSUs vest upon the completion of this offering, subject to the executive continuing to provide services to us through such completion.
(6) The option vests as to 1/36th of the shares underlying the option on each monthly anniversary of the vesting commencement date, subject to the executive continuing to provide services to us through the applicable vesting date.
(7) Each RSU vests on the date both service-based and liquidity-based vesting conditions are satisfied.
The service-based vesting condition is satisfied as to 1/48th of the total number of RSUs on each monthly anniversary of the 141


vesting commencement date, subject to the executive continuing to provide services through the applicable date. The liquidity-based vesting condition will be satisfied upon the completion of this offering. (8)The option vests as to 1/12th of the shares underlying the option on each monthly anniversary of the vesting commencement date, subject to the executive continuing to provide services to us through the applicable vesting date. (9)Each RSU vests on the date both service-based and liquidity-based vesting conditions are satisfied. The service-based vesting condition is satisfied as to 1/36th of the total number of RSUs on each monthly anniversary of the vesting commencement date, subject to the executive continuing to provide services through the applicable date. The liquidity-based vesting condition will be satisfied upon the completion of this offering. (10)Constitutes restricted stock acquired upon exercise of an option prior to vesting, which are subject to repurchase at the original exercise price upon any termination of service. The restricted stock vests in full on January 28, 2024, subject to the executive continuing to provide services through such date. Executive Compensation Arrangements We are party to offer letters with each of our NEOs, other than Mr. Feldman. Each offer letter provides for an initial base salary, target bonus opportunity, two initial stock option grants with accelerated vesting of the two option grants in the event of certain terminations of employment in connection with a change in control, and participation in our benefits plans. Mr. Maslowski began serving as our Chief Financial Officer in January 2020 and ceased serving as our Chief Financial Officer on March 15, 2024. In February 2024, we entered into a separation agreement with Mr. Maslowski after a commitment to an extended tenure was not reached. Under the separation agreement, in exchange for a general release of claims against us and our affiliates, we paid Mr. 
Maslowski $225,000, which constituted six months of his base salary at the rate in effect as of his date of termination and 50% of his target bonus opportunity. The separation agreement also provided Mr. Maslowski with access to COBRA coverage for 36 months from his separation date, and (i) for the first 24 months of such period, the premiums will be paid for by us and (ii) for the remaining 12 months of such period, Mr. Maslowski will be responsible for payment of the full premiums. The separation agreement also provided Mr. Maslowski with the full vesting acceleration of each of his outstanding stock options and shares of restricted stock, up to 12 months to exercise the stock option granted to him in February 2023, and up to 18 months to exercise the stock option granted to him in January 2022. Equity Compensation Plans The following summarizes the material terms of the long-term incentive compensation plan and employee stock purchase plan in which our NEOs will be eligible to participate following this offering and our existing equity plans, under which we have previously made periodic grants of equity and equity-based awards to our NEOs and other employees. 2024 Incentive Award Plan We intend to adopt the 2024 Plan, which will become effective on the business day immediately prior to the date of effectiveness of the registration statement of which this prospectus forms a part. The principal purpose of the 2024 Plan is to attract, retain and motivate selected employees, directors, and consultants through the granting of stock-based compensation awards and cash-based performance bonus awards. The material terms of the 2024 Plan, as it is currently contemplated, are summarized below. Share Reserve. 
Under the 2024 Plan,                 shares of our Class A common stock will be initially reserved for issuance pursuant to a variety of stock-based compensation awards, including stock options, stock appreciation rights (“SARs”), restricted stock awards, RSU awards, and other stock-based awards. The number of shares initially reserved for issuance or transfer pursuant to awards under the 2024 Plan will be increased by (i) the number of shares represented by awards outstanding under the 2016 Plan that become available for issuance under the counting provisions described below following the effective date and (ii) an annual increase on the first day of each fiscal year beginning in 2025 and ending in 2034, equal to the lesser of (A) 5% of the shares of our Class A common stock outstanding (on an as-converted basis) on the last day of the immediately preceding fiscal year and (B) such smaller number of shares of stock as determined by our board of directors; provided, however, that no more than                 shares of stock may be issued upon the exercise of incentive stock options.

The following counting provisions will be in effect for the share reserve under the 2024 Plan:

•to the extent that an award (including an award granted under the 2016 Plan (a “Prior Plan Award”)) terminates, expires, or lapses for any reason, or an award is settled in cash without the delivery of shares, any shares subject to the award at such time will be available for future grants under the 2024 Plan;
•to the extent shares are tendered or withheld to satisfy the grant, exercise price, or tax withholding obligation with respect to any award under the 2024 Plan or Prior Plan Award, such tendered or withheld shares will be available for future grants under the 2024 Plan;
•to the extent shares subject to stock appreciation rights are not issued in connection with the stock settlement of stock appreciation rights on exercise thereof, such shares will be available for future grants under the 2024 Plan;
•to the extent that shares of our Class A common stock are repurchased by us prior to vesting so that shares are returned to us, such shares will be available for future grants under the 2024 Plan;
•the payment of dividend equivalents in cash in conjunction with any outstanding awards or Prior Plan Awards will not be counted against the shares available for issuance under the 2024 Plan; and
•to the extent permitted by applicable law or any exchange rule, shares issued in assumption of, or in substitution for, any outstanding awards of any entity acquired in any form of combination by us or any of our subsidiaries will not be counted against the shares available for issuance under the 2024 Plan.
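Purely as an illustration (not part of the plan terms), the annual evergreen increase described above is the lesser of two quantities. The share counts below are hypothetical, since the prospectus leaves the actual reserve figures blank:

```python
def annual_evergreen_increase(outstanding_shares, board_determined_cap=None):
    """2024 Plan evergreen sketch: the lesser of (A) 5% of Class A shares
    outstanding (as converted) on the last day of the prior fiscal year
    and (B) a smaller number, if any, set by the board of directors."""
    five_percent = outstanding_shares * 5 // 100
    if board_determined_cap is None:
        return five_percent
    return min(five_percent, board_determined_cap)

# Hypothetical figures: 200,000,000 shares outstanding.
print(annual_evergreen_increase(200_000_000))             # 10,000,000 (5% of outstanding)
print(annual_evergreen_increase(200_000_000, 8_000_000))  # 8,000,000 (board-set smaller number)
```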
In addition, the sum of the grant date fair value of all equity-based awards and the maximum amount that may become payable pursuant to all cash-based awards to any individual for services as a non-employee director during any calendar year may not exceed $               .

Administration.

The compensation committee of our board of directors is expected to administer the 2024 Plan unless our board of directors assumes authority for administration. The compensation committee must consist of at least three members of our board of directors, each of whom is intended to qualify as a “non-employee director” for purposes of Rule 16b-3 under the Exchange Act and as an “independent director” within the meaning of the rules of the applicable stock exchange or other principal securities market on which shares of our Class A common stock are traded. The 2024 Plan provides that our board of directors or compensation committee may delegate its authority to grant awards to employees other than executive officers and certain senior executives of the company to a committee consisting of one or more members of our board of directors or one or more of our officers, other than awards made to our non-employee directors, which must be approved by our full board of directors.

Subject to the terms and conditions of the 2024 Plan, the administrator has the authority to select the persons to whom awards are to be made, to determine the number of shares to be subject to awards and the terms and conditions of awards, and to make all other determinations and take all other actions necessary or advisable for the administration of the 2024 Plan. The administrator is also authorized to adopt, amend, or rescind rules relating to administration of the 2024 Plan. Our board of directors may at any time remove the compensation committee as the administrator and revest in itself the authority to administer the 2024 Plan.
The full board of directors will administer the 2024 Plan with respect to awards to non-employee directors.

Eligibility.

Awards under the 2024 Plan may be granted to individuals who are then our officers, employees, or consultants or are the officers, employees, or consultants of certain of our subsidiaries. Such awards also may be granted to our directors. Only employees of our company or certain of our subsidiaries may be granted incentive stock options.


Awards.

The 2024 Plan provides that the administrator may grant or issue stock options, SARs, restricted stock, RSUs, other stock- or cash-based awards and dividend equivalents, or any combination thereof. Each award will be set forth in a separate agreement with the person receiving the award and will indicate the type, terms, and conditions of the award.

•Nonstatutory Stock Options (“NSOs”) will provide for the right to purchase shares of our Class A common stock at a specified price, which may not be less than fair market value on the date of grant, and usually will become exercisable (at the discretion of the administrator) in one or more installments after the grant date, subject to the participant’s continued employment or service with us and/or subject to the satisfaction of corporate performance targets and individual performance targets established by the administrator. NSOs may be granted for any term specified by the administrator that does not exceed ten years.
•Incentive Stock Options (“ISOs”) will be designed in a manner intended to comply with the provisions of Section 422 of the Code and will be subject to specified restrictions contained in the Code. Among such restrictions, ISOs must have an exercise price of not less than the fair market value of a share of Class A common stock on the date of grant, may only be granted to employees, and must not be exercisable after a period of ten years measured from the date of grant. In the case of an ISO granted to an individual who owns (or is deemed to own) at least 10% of the total combined voting power of all classes of our capital stock, the 2024 Plan provides that the exercise price must be at least 110% of the fair market value of a share of Class A common stock on the date of grant and the ISO must not be exercisable after a period of five years measured from the date of grant.
•Restricted Stock may be granted to any eligible individual and made subject to such restrictions as may be determined by the administrator. Restricted stock typically may be forfeited for no consideration or repurchased by us at the original purchase price if the conditions or restrictions on vesting are not met. In general, restricted stock may not be sold or otherwise transferred until restrictions are removed or expire. Purchasers of restricted stock, unlike recipients of options, will have voting rights and will have the right to receive dividends, if any, prior to the time when the restrictions lapse; however, extraordinary dividends will generally be placed in escrow and will not be released until restrictions are removed or expire.
•RSUs may be awarded to any eligible individual, typically without payment of consideration, but subject to vesting conditions based on continued employment or service or on performance criteria established by the administrator. Like restricted stock, RSUs may not be sold, or otherwise transferred or hypothecated, until vesting conditions are removed or expire. Unlike restricted stock, stock underlying RSUs will not be issued until the RSUs have vested, and recipients of RSUs generally will have no voting or dividend rights prior to the time when vesting conditions are satisfied.
•SARs may be granted in connection with stock options or other awards, or separately. SARs granted in connection with stock options or other awards typically will provide for payments to the holder based upon increases in the price of our Class A common stock over a set exercise price. The exercise price of any SAR granted under the 2024 Plan must be at least 100% of the fair market value of a share of our Class A common stock on the date of grant. SARs under the 2024 Plan will be settled in cash or shares of our Class A common stock, or in a combination of both, at the election of the administrator.
•Other Stock- or Cash-Based Awards are awards of cash, fully vested shares of our Class A common stock, and other awards valued wholly or partially by referring to, or otherwise based on, shares of our Class A common stock. Other stock- or cash-based awards may be granted to participants and may also be available as a payment form in the settlement of other awards, as standalone payments, and as payment in lieu of base salary, bonus, fees, or other cash compensation otherwise payable to any individual who is eligible to receive awards. The administrator will determine the terms and conditions of other stock- or cash-based awards, which may include vesting conditions based on continued service, performance, and/or other conditions.


•Dividend Equivalents represent the right to receive the equivalent value of dividends paid on shares of our Class A common stock and may be granted alone or in tandem with awards other than stock options or SARs. Dividend equivalents are credited as of dividend payment dates during the period between a specified date and the date such award terminates or expires, as determined by the administrator. In addition, dividend equivalents with respect to shares covered by a performance award will only be paid to the participant at the same time or times, and to the same extent, that the vesting conditions, if any, are subsequently satisfied and the performance award vests with respect to such shares.

Any award may be granted as a performance award, meaning that the award will be subject to vesting and/or payment based on the attainment of specified performance goals.

Change in Control.

In the event of a change in control, unless the administrator elects to terminate an award in exchange for cash, rights, or other property, or to cause an award to accelerate in full prior to the change in control, such award will continue in effect or be assumed or substituted by the acquirer, provided that any performance-based portion of the award will be subject to the terms and conditions of the applicable award agreement. In the event the acquirer refuses to assume or replace awards granted, prior to the completion of such transaction, awards issued under the 2024 Plan will be subject to accelerated vesting such that 100% of such awards will become vested and exercisable or payable, as applicable. The administrator may also make appropriate adjustments to awards under the 2024 Plan and is authorized to provide for the acceleration, cash-out, termination, assumption, substitution, or conversion of such awards in the event of a change in control or certain other unusual or nonrecurring events or transactions.

Adjustments of Awards.
In the event of any stock dividend or other distribution, stock split, reverse stock split, reorganization, combination or exchange of shares, merger, consolidation, split-up, spin-off, recapitalization, repurchase, or any other corporate event affecting the number of outstanding shares of our Class A common stock or the share price of our Class A common stock that would require adjustments to the 2024 Plan or any awards under the 2024 Plan in order to prevent the dilution or enlargement of the potential benefits intended to be made available thereunder, the administrator will make appropriate, proportionate adjustments to: (i) the aggregate number and type of shares subject to the 2024 Plan; (ii) the number and kind of shares subject to outstanding awards and the terms and conditions of outstanding awards (including, without limitation, any applicable performance targets or criteria with respect to such awards); and (iii) the grant or exercise price per share of any outstanding awards under the 2024 Plan.

Amendment and Termination.

The administrator may terminate, amend, or modify the 2024 Plan at any time and from time to time. However, we must generally obtain stockholder approval to the extent required by applicable law, rule, or regulation (including any applicable stock exchange rule). Notwithstanding the foregoing, an option may be amended to reduce the per share exercise price below the per share exercise price of such option on the grant date, and options may be granted in exchange for, or in connection with, the cancellation or surrender of options having a higher per share exercise price, without receiving additional stockholder approval. No ISOs may be granted pursuant to the 2024 Plan after the tenth anniversary of the effective date of the 2024 Plan, and no additional annual share increases to the 2024 Plan’s aggregate share limit will occur from and after such anniversary.
Any award that is outstanding on the termination date of the 2024 Plan will remain in force according to the terms of the 2024 Plan and the applicable award agreement.

2016 Equity Incentive Plan

We currently maintain the 2016 Plan, which became effective on May 5, 2016 upon its adoption by our board of directors and approval by our stockholders. Following this offering and in connection with the effectiveness of our 2024 Plan, the 2016 Plan will terminate and no further awards will be granted under the 2016 Plan. However, all outstanding awards will continue to be governed by their existing terms.

Administration.

Our board of directors, the compensation committee, or another committee thereof appointed by our board of directors has the authority to administer the 2016 Plan and the awards granted under it. The administrator has the authority to select the service providers to whom awards will be granted under the 2016 Plan, the number of shares to be subject to those awards, and the terms and conditions of the awards granted. In addition, the administrator has the authority to construe and interpret the 2016 Plan, to adopt rules relating to the 2016 Plan, and to exercise such other powers as it deems necessary and desirable to promote the best interests of the company and that are consistent with the terms of the 2016 Plan.

Share Reserve.

We have reserved an aggregate of 65,711,838 shares of our Class A common stock for issuance under the 2016 Plan. As of June 30, 2024, after giving effect to the Option Exercise and the RSU Net Settlement, options to purchase a total of                 shares of our Class A common stock were outstanding,                 shares of restricted stock acquired upon exercise of options prior to vesting were outstanding, RSUs covering                 shares of our Class A common stock were outstanding, and                 shares remained available for future grants.

Awards.

The 2016 Plan provides that the administrator may grant or issue options, including ISOs and NSOs, restricted stock, and RSUs to employees, directors, and consultants, provided that only employees may be granted ISOs.

•Stock Options. The 2016 Plan provides for the grant of ISOs or NSOs. ISOs may be granted only to employees. NSOs may be granted to employees, directors, or consultants. The exercise price of ISOs granted to employees who at the time of grant own stock representing more than 10% of the voting power of all classes of our capital stock may not be less than 110% of the fair market value per share of our Class A common stock on the date of grant, and the exercise price of ISOs granted to any other employees may not be less than 100% of the fair market value per share of our Class A common stock on the date of grant.
The exercise price of NSOs granted to employees, directors, or consultants may not be less than 100% of the fair market value per share of our Class A common stock on the date of grant.
•Restricted Stock. The 2016 Plan provides for the grant of restricted stock. Each share of restricted stock that is accepted will be governed by a restricted stock purchase agreement, which will detail the restrictions on transferability, risk of forfeiture, and other restrictions the administrator approves. In general, restricted stock acquired upon exercise of a stock purchase right may not be sold, transferred, pledged, hypothecated, margined, or otherwise encumbered until restrictions are removed or expire. Holders of restricted stock, unlike recipients of stock options, will have voting rights and will have the right to receive dividends, if any, prior to the time when the restrictions lapse.
•RSUs. The 2016 Plan provides for the grant of RSUs. Each RSU represents the unfunded, unsecured right to receive a share of our Class A common stock or an amount of cash or other consideration equal to the fair market value of a share of our Class A common stock. The terms of each award of RSUs are set forth in an RSU agreement.
•SARs. The 2016 Plan provides for the grant of SARs. SARs may be settled in cash or shares (which may consist of restricted stock or RSUs, or a combination thereof) having a value equal to the product of (i) the excess of the fair market value on the date of exercise over the exercise price and (ii) the number of shares with respect to which the SARs are being exercised. All grants of SARs will be evidenced by an award agreement.

Adjustments of Awards.
In the event of any dividend or other distribution, reorganization, merger, consolidation, combination, repurchase, liquidation, dissolution, or sale, transfer, exchange, or other disposition of substantially all of our assets, or exchange of shares or other similar corporate transaction or event, the administrator will make adjustments to the number and class of shares available for issuance under the 2016 Plan and the number, class, and price of shares subject to outstanding awards, in order to prevent dilution or enlargement of benefits.

Change in Control.

In the event of a change in control, any outstanding awards granted under the 2016 Plan shall be subject to the agreement evidencing the change in control. The successor or acquiring entity may elect for such outstanding awards to be assumed or substituted. Otherwise, in the event of a merger or change in control, the change in control agreement has broad discretion to determine the treatment of each outstanding award, including providing for awards to terminate or accelerate or for awards to terminate in exchange for cash or other property.

Amendment and Termination.

Our board of directors may amend or terminate the 2016 Plan or any portion thereof at any time. However, no amendment may impair the rights of a holder of an outstanding option grant without the holder’s consent, and any action by our board of directors to increase the number of shares subject to the plan or extend the term of the plan is subject to the approval of our stockholders. Additionally, an amendment of the plan shall be subject to the approval of our stockholders where such approval is required by applicable law. Following this offering and in connection with the effectiveness of our 2024 Plan, the 2016 Plan will terminate and no further awards will be granted under the 2016 Plan.

2024 Employee Stock Purchase Plan

We intend to adopt the ESPP, which will become effective on the business day immediately prior to the date of effectiveness of the registration statement of which this prospectus forms a part. The ESPP is designed to allow our eligible employees to purchase shares of our Class A common stock, at periodic intervals, with their accumulated payroll deductions. The ESPP is intended to qualify under Section 423 of the Code. The material terms of the ESPP, as it is currently contemplated, are summarized below.

Administration.

Subject to the terms and conditions of the ESPP, our compensation committee will administer the ESPP. Our compensation committee can delegate administrative tasks under the ESPP to an agent and/or employees to assist in the administration of the ESPP. The administrator will have the discretionary authority to administer and interpret the ESPP.
Interpretations and constructions by the administrator of any provision of the ESPP or of any rights thereunder will be conclusive and binding on all persons. We will bear all expenses and liabilities incurred by the ESPP administrator.

Share Reserve.

The maximum number of shares of our Class A common stock that will be authorized for sale under the ESPP is equal to the sum of (i)                 shares of Class A common stock and (ii) an annual increase on the first day of each fiscal year beginning in 2025 and ending in 2034, equal to the lesser of (A) 1% of the shares of Class A common stock outstanding (on an as-converted basis) on the last day of the immediately preceding fiscal year and (B) such number of shares of Class A common stock as determined by our board of directors; provided, however, that no more than                 shares of our Class A common stock may be issued under the ESPP. The shares reserved for issuance under the ESPP may be authorized but unissued shares or reacquired shares.

Eligibility.

Employees eligible to participate in the ESPP for a given offering period generally include employees who are employed by us or one of our subsidiaries on the first day of the offering period. Our employees (and, if applicable, any employees of our subsidiaries) who customarily work less than five months in a calendar year or are customarily scheduled to work less than 20 hours per week will not be eligible to participate in the ESPP. Finally, an employee who owns (or is deemed to own through attribution) 5% or more of the combined voting power or value of all our classes of stock or of one of our subsidiaries will not be allowed to participate in the ESPP.

Participation.

Employees will enroll in the ESPP by completing a payroll deduction form permitting the deduction of at least 1% of their compensation but not more than 15% of their base compensation.
Such payroll deductions may be expressed as either a whole number percentage or a fixed dollar amount, and the accumulated deductions will be applied to the purchase of shares on each purchase date. However, a participant may not purchase more than 100,000 shares in each offering period and may not accrue the right to purchase shares of Class A common stock at a rate that exceeds $25,000 in fair market value of shares of our Class A common stock (determined at the time the option is granted) for each calendar year the option is outstanding (as determined in accordance with Section 423 of the Code). The ESPP administrator has the authority to change these limitations for any subsequent offering period.


Offering.

Under the ESPP, participants are offered the option to purchase shares of our Class A common stock at a discount during a series of successive offering periods, the duration and timing of which will be determined by the ESPP administrator. However, in no event may an offering period be longer than 27 months. The option purchase price will be the lower of 85% of the closing trading price per share of our Class A common stock on the first trading date of an offering period in which a participant is enrolled or 85% of the closing trading price per share on the purchase date, which will occur on the last trading day of each purchase period within an offering period.

Unless a participant has previously cancelled his or her participation in the ESPP before the purchase date, the participant will be deemed to have exercised his or her option in full as of each purchase date. Upon exercise, the participant will purchase the number of whole shares that his or her accumulated payroll deductions will buy at the option purchase price, subject to the participation limitations listed above.

A participant may cancel his or her payroll deduction authorization at any time prior to the end of the offering period. Upon cancellation, the participant will have the option to either (i) receive a refund of the participant’s account balance in cash without interest or (ii) exercise the participant’s option for the current offering period for the maximum number of shares of Class A common stock on the applicable purchase date, with the remaining account balance refunded in cash without interest. Following at least one payroll deduction, a participant may also decrease (but not increase) his or her payroll deduction authorization once during any offering period.
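As a sketch only, the purchase mechanics described above can be worked through numerically. The 85% discount, 100,000-share cap, and whole-share purchase come from the ESPP terms; the dollar figures are hypothetical:

```python
def espp_purchase(accumulated_deductions, price_at_start, price_at_purchase,
                  share_cap=100_000):
    """ESPP purchase sketch: the purchase price is 85% of the lower of the
    offering-start and purchase-date closing prices; accumulated payroll
    deductions buy whole shares only, subject to the per-offering share cap."""
    purchase_price = 0.85 * min(price_at_start, price_at_purchase)
    shares = min(int(accumulated_deductions // purchase_price), share_cap)
    return purchase_price, shares

# Hypothetical: $5,000 withheld; stock at $20.00 at the start of the offering
# period and $24.00 on the purchase date.
price, shares = espp_purchase(5_000.00, 20.00, 24.00)
print(price, shares)  # price is 85% of the $20.00 start price; 294 whole shares
```

Note the lookback: because the start-of-period price is lower than the purchase-date price here, the participant buys at 85% of $20.00 rather than 85% of $24.00.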
If a participant wants to increase or decrease the rate of payroll withholding, he or she may do so effective for the next offering period by submitting a new form before the offering period for which such change is to be effective. A participant may not assign, transfer, pledge, or otherwise dispose of (other than by will or the laws of descent and distribution) payroll deductions credited to the participant’s account or any rights to exercise an option or to receive shares of our Class A common stock under the ESPP, and during a participant’s lifetime, options in the ESPP shall be exercisable only by such participant. Any such attempt at assignment, transfer, pledge, or other disposition will not be given effect.

Adjustments upon Recapitalization, Dissolution, Liquidation, Merger, or Asset Sale.

In the event of any increase or decrease in the number of issued shares of our Class A common stock resulting from a stock split, reverse stock split, stock dividend, combination, or reclassification of the Class A common stock, or any other increase or decrease in the number of shares of Class A common stock effected without receipt of consideration by us, we will proportionately adjust the aggregate number of shares of our Class A common stock offered under the ESPP, the number and price of shares which any participant has elected to purchase under the ESPP, and the maximum number of shares which a participant may elect to purchase in any single offering period.

If there is a proposal to dissolve or liquidate us, then the ESPP will terminate immediately prior to the consummation of such proposed dissolution or liquidation, and any offering period then in progress will be shortened by setting a new purchase date to take place before the date of our dissolution or liquidation. We will notify each participant of such change in writing prior to the new exercise date.
If we undergo a merger with or into another corporation or sell all or substantially all of our assets, each outstanding option will be assumed, or an equivalent option substituted, by the successor corporation or the parent or subsidiary of the successor corporation. If the successor corporation refuses to assume the outstanding options or substitute equivalent options, then any offering period then in progress will be shortened by setting a new purchase date to take place before the date of our proposed sale or merger. We will notify each participant of such change in writing prior to the new exercise date.

Amendment and Termination.

Our board of directors may amend, suspend, or terminate the ESPP at any time. However, the board of directors may not amend the ESPP without obtaining stockholder approval within 12 months before or after such amendment to the extent such approval is required by applicable laws.


Director Compensation

For the year ended December 31, 2023, we did not have a formalized non-employee director compensation program, and none of our non-employee directors was paid cash compensation or granted an option or stock award in connection with the non-employee director’s service to us during 2023. In connection with this offering, we intend to adopt a non-employee director compensation program that will provide for annual retainers for board and committee service and the automatic grant of initial and annual equity awards. Under our non-employee director compensation program (the “Director Compensation Program”) that will become effective upon the completion of this offering, our non-employee directors will receive cash compensation, paid quarterly in arrears, as follows:
•Each non-employee director will receive a cash retainer in the amount of $                per year.
•The non-employee chair of our board of directors will receive an additional cash retainer in the amount of $                per year.
•The chair of the Audit Committee will receive a cash retainer in the amount of $                per year for such chairperson’s service on the Audit Committee. Each non-chairperson member of the Audit Committee will receive a cash retainer in the amount of $                per year for such member’s service on the Audit Committee.
•The chair of the Compensation Committee will receive a cash retainer in the amount of $                per year for such chairperson’s service on the Compensation Committee. Each non-chairperson member of the Compensation Committee will receive a cash retainer in the amount of $                per year for such member’s service on the Compensation Committee.
•The chair of the Nominating and Corporate Governance Committee will receive a cash retainer in the amount of $                per year for such chairperson’s service on the Nominating and Corporate Governance Committee. Each non-chairperson member of the Nominating and Corporate Governance Committee will receive a cash retainer in the amount of $                per year for such member’s service on the Nominating and Corporate Governance Committee.

Under the Director Compensation Program, each non-employee director on the date the non-employee director is appointed to our board of directors will automatically be granted that number of RSUs (the “Initial Grant”) under the 2024 Plan determined by dividing $                by the closing trading price of a share of our Class A common stock on the date of grant. The Initial Grant will vest in substantially equal quarterly installments over three years, subject to continued service through the applicable vesting date. In addition, on the date of each annual meeting of our stockholders, each non-employee director who will continue to serve as a non-employee director immediately following such annual meeting will automatically be granted that number of RSUs (the “Annual Grant”) under the 2024 Plan determined by dividing $                by the closing trading price of a share of our Class A common stock on the date of grant. The Annual Grant will vest in full on the earlier of (i) the first anniversary of the grant date and (ii) immediately prior to the annual meeting of our stockholders following the date of grant, subject to continued service through the applicable vesting date. Pursuant to the Director Compensation Program, upon a change in control transaction, all outstanding equity awards held by our non-employee directors will vest in full.
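The grant sizing and vesting mechanics described above reduce to simple arithmetic. The sketch below is illustrative only: the grant dollar values are blank in the prospectus, so the figures used here are hypothetical placeholders, and rounding down fractional RSUs is an assumed convention the prospectus does not specify.

```python
from math import floor

def grant_rsus(grant_value_usd: float, closing_price_usd: float) -> int:
    # RSU count = grant dollar value divided by the closing trading price
    # of a share of Class A common stock on the grant date.
    # Rounding down is an assumption; the prospectus does not say how
    # fractional RSUs are handled.
    return floor(grant_value_usd / closing_price_usd)

def initial_grant_vesting(total_rsus: int) -> list[int]:
    # The Initial Grant vests in substantially equal quarterly
    # installments over three years (12 tranches). Placing any remainder
    # from uneven division in the final tranche is an assumption.
    quarters = 12
    base = total_rsus // quarters
    tranches = [base] * quarters
    tranches[-1] += total_rsus - base * quarters
    return tranches

# Hypothetical example: a $200,000 Initial Grant at a $25.00 closing price.
rsus = grant_rsus(200_000, 25.00)       # 8,000 RSUs
schedule = initial_grant_vesting(rsus)  # 12 quarterly tranches summing to 8,000
```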



CERTAIN RELATIONSHIPS AND RELATED PARTY TRANSACTIONS

The following includes a summary of transactions since January 1, 2021 and any currently proposed transactions, to which we were or are to be a participant, in which (i) the amount involved exceeded or will exceed $120,000; and (ii) any of our directors, executive officers, or holders of more than 5% of our capital stock, or any affiliate or member of the immediate family of the foregoing persons or entities, had or will have a direct or indirect material interest, other than compensation and other arrangements that are described under the section titled “Executive and Director Compensation.” We believe the terms obtained or consideration that we paid or received, as applicable, in connection with the transactions described below were comparable to terms available or the amounts that we would pay or receive, as applicable, in arm’s-length transactions.

Redeemable Convertible Preferred Stock Financings

Series F Redeemable Convertible Preferred Stock Financing

In October 2021, we entered into a Series F redeemable convertible preferred stock purchase agreement with various investors pursuant to which we issued and sold an aggregate of 9,168,419 shares of our Series F redeemable convertible preferred stock at a purchase price of $27.7448 per share, for an aggregate purchase price of $254.4 million in multiple closings through December 2021. The table below sets forth the number of shares of our Series F redeemable convertible preferred stock purchased by holders of more than 5% of our capital stock and their affiliated entities. None of our directors or executive officers purchased shares of Series F redeemable convertible preferred stock.

Name(1)                                         Shares of Series F Redeemable      Aggregate
                                                Convertible Preferred Stock        Purchase Price
Entities affiliated with Altimeter(2)           360,427                            $ 9,999,975
Entities affiliated with Coatue(3)              360,427                            $ 9,999,975
Entities affiliated with Eclipse Ventures(4)    9,010                              $   249,981

_______________
(1)See the section titled “Principal and Selling Stockholders” for additional information regarding these stockholders and their equity holdings.
(2)Entities affiliated with Altimeter collectively beneficially own more than 5% of our outstanding capital stock. Brad Gerstner, Chief Executive Officer of Altimeter Capital Management, was a member of our board of directors at the time of the Series F redeemable convertible preferred stock financing.
(3)Entities affiliated with Coatue collectively beneficially own more than 5% of our outstanding capital stock.
(4)Entities affiliated with Eclipse Ventures collectively beneficially own more than 5% of our outstanding capital stock. Pierre Lamond, a Partner Emeritus at Eclipse Ventures, was a member of our board of directors at the time of the Series F redeemable convertible preferred stock financing. Lior Susan, a current member of our board of directors, is the Founder and Managing Partner of Eclipse Ventures.

Series F-1 and Series F-2 Redeemable Convertible Preferred Stock Financing

In May 2024, we entered into a Series F-1 redeemable convertible preferred stock purchase agreement (as subsequently amended and restated in September 2024, the “Preferred Stock Purchase Agreement”) with various investors pursuant to which we agreed to issue and sell 5,798,089 shares of our Series F-1 redeemable convertible preferred stock and 22,851,296 shares of our Series F-2 redeemable convertible preferred stock, each at a purchase price of $14.66 per share. In July and August 2024, various investors purchased an aggregate of 2,728,512 shares of our Series F-1 redeemable convertible preferred stock for an aggregate purchase price of $40.0 million.


In September 2024, Alpha Wave Ventures II, LP, an existing stockholder, purchased the remaining 3,069,577 shares of our Series F-1 redeemable convertible preferred stock for an aggregate purchase price of $45.0 million. Entities affiliated with Alpha Wave collectively beneficially own more than 5% of our outstanding capital stock following the purchase of our Series F-1 redeemable convertible preferred stock. See the section titled “Principal and Selling Stockholders” for additional information.

G42

We amended and restated the Preferred Stock Purchase Agreement in September 2024. Pursuant to the Preferred Stock Purchase Agreement, an entity affiliated with Group 42 Holding Ltd (together with its affiliates, “G42”) agreed to purchase an aggregate of 22,851,296 shares of our non-voting Series F-2 redeemable convertible preferred stock (or, if purchased following the completion of this offering, shares of our non-voting Class N common stock) for an aggregate purchase price of $335.0 million (the “G42 Primary Purchase”). G42 has committed to purchase these shares by April 15, 2025. Each share of our Class N common stock is convertible at any time at the option of the holder into one share of our Class A common stock. G42 has agreed to not convert its shares of Class N common stock into shares of Class A common stock before July 31, 2025. G42 will beneficially own more than 5% of our outstanding capital stock following the completion of the G42 Primary Purchase. See the section titled “Principal and Selling Stockholders” for additional information.
In addition, pursuant to the Preferred Stock Purchase Agreement, if G42 or certain third parties at the direction of G42 (who may be affiliated or unaffiliated and under a commercial agreement with G42) purchase more than $500.0 million in one purchase order, and less than $5.0 billion in the aggregate, of high-performance computing clusters from us (the “G42 Option Threshold”), we will grant G42 the option to purchase additional shares of our Series F-2 redeemable convertible preferred stock (or, if such option is granted or exercised following the completion of this offering, shares of our Class N common stock), subject to the terms thereof (the “G42 Option”). Shares may be issued pursuant to the G42 Option in one or more closings. The $1.43 billion of products and services that G42 has committed to purchase pursuant to the G42 May 2024 Agreement (as defined below) does not count toward the G42 Option Threshold. The G42 Option expires if the G42 Option Threshold is not achieved by December 31, 2025. The G42 Option may be subject to approval by the Committee on Foreign Investment in the United States (“CFIUS”) or the expiration or early termination of applicable waiting periods under the Hart-Scott-Rodino Antitrust Improvements Act of 1976, as amended. 
The maximum number of shares that may be purchased pursuant to the G42 Option will be the quotient of (i) the total aggregate purchase price for the G42 Option, which will equal 10% of the value of the relevant purchase order(s), divided by (ii) (A) if the G42 Option is granted and exercised prior to the completion of this offering, a price per share that is 17.5% below the price per share of our then most recent arm’s-length sale of our redeemable convertible preferred stock (excluding our Series F-1 and Series F-2 redeemable convertible preferred stock), or (B) if the G42 Option is granted or exercised following the completion of this offering, a price per share that is 17.5% below the average closing price per share of our Class A common stock over the 30-day period prior to the G42 Option Threshold being met. Assuming the G42 Option Threshold is satisfied after this offering, the minimum and maximum number of shares that could be purchased pursuant to the G42 Option, assuming the average closing price per share of our common stock over the 30-day period prior to the G42 Option Threshold being met is $           per share, which is the midpoint of the estimated price range set forth on the cover page of this prospectus, would be                 shares of common stock (assuming the G42 Option Threshold is satisfied by product sales of $500.0 million in one purchase order) and                 shares of common stock (assuming the G42 Option Threshold is satisfied by product sales of $5.0 billion in the aggregate), respectively. If the average closing price per share of our common stock over the 30-day period prior to the G42 Option Threshold being met is greater or less than $           per share, the number of shares that could be purchased pursuant to the G42 Option would decrease or increase, respectively.
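The share-count formula above reduces to simple arithmetic: an aggregate purchase price equal to 10% of the qualifying purchase order value, divided by a price 17.5% below the relevant reference price. The sketch below illustrates only the post-offering pricing mechanism; the order value and 30-day average closing price are hypothetical placeholders, and rounding down to whole shares is an assumption.

```python
def g42_option_shares(order_value_usd: float, avg_close_usd: float) -> int:
    # Aggregate purchase price for the option = 10% of the value of the
    # qualifying purchase order(s).
    aggregate_purchase_price = 0.10 * order_value_usd
    # Per-share price = 17.5% below the average closing price of the
    # Class A common stock over the 30 days before the threshold is met.
    discounted_price = avg_close_usd * (1 - 0.175)
    # Rounding down to whole shares is an assumed convention.
    return int(aggregate_purchase_price // discounted_price)

# Hypothetical: a $500.0 million order and a $20.00 average closing price.
shares = g42_option_shares(500_000_000, 20.00)
```

A higher average closing price yields fewer shares and a lower price yields more, consistent with the inverse relationship described in the paragraph above.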
Prior to amending and restating the Preferred Stock Purchase Agreement, we and G42 submitted a joint voluntary notice filing to CFIUS for review of G42’s purchase of voting securities pursuant to the Preferred Stock Purchase Agreement. We later amended the Preferred Stock Purchase Agreement to provide that the shares to be purchased by G42 will be non-voting securities. We believe that CFIUS does not have jurisdiction over G42’s purchase of our non-voting securities in this case. Consequently, in September 2024, we submitted a request to CFIUS to withdraw the parties’ notice. CFIUS is considering our withdrawal request but has not yet approved our request or otherwise approved G42’s purchase of shares, and there is no guarantee that CFIUS will approve the request or G42’s purchase of securities. If CFIUS indicates that any of the purchases of securities by G42 under the Preferred Stock Purchase Agreement remain subject to its jurisdiction, and in the course of reviewing the investment, CFIUS or the President of the United States takes any action that prevents implementation of the transactions contemplated by the Preferred Stock Purchase Agreement, we and G42 have agreed to use reasonable best efforts to agree in good faith on a suitable economic solution that reflects the economic structure (i.e., pay-ins and pay-outs) to which G42 would have otherwise been entitled had G42 been able to purchase shares as contemplated by the Preferred Stock Purchase Agreement (the “Alternative Economic Position”). The parties have not discussed, or agreed upon, what form such a solution might take and may not agree on that solution. The Alternative Economic Position terminates upon the completion of the G42 Primary Purchase, which is to be no later than April 15, 2025.

G42 Relationship

We have developed a strategic relationship with G42 as a partner, customer, and investor. In September 2023, we entered into a framework agreement with G42 (the “Framework Agreement”). Pursuant to the Framework Agreement, we have agreed to supply goods to G42 as specified in purchase orders issued to us from time to time by G42.
Under the Framework Agreement, we are entitled to receive an aggregate of approximately $389.0 million upon performance of purchase orders: (i) for our high-performance computing systems, including installation and support services, and a subscription for software updates, dated September 13, 2023; (ii) for our high-performance computing systems, including installation and support services, and a subscription for software updates, dated October 7, 2023; (iii) for our high-performance computing systems, including installation and support services, and a subscription for software updates, dated December 29, 2023 and amended and restated effective January 24, 2024; and (iv) for our high-performance computing systems, including installation and support services, and a subscription for software updates, dated July 10, 2024.

In September 2023, we entered into a master services agreement with G42 (the “Master Services Agreement”). Pursuant to the Master Services Agreement, we have agreed to provide services to G42 as specified in statements of work issued to us from time to time by G42. Under the Master Services Agreement, we are entitled to receive an aggregate of approximately $88.8 million upon performance of statements of work: (i) for power, space, communication, operation, and management of certain of our high-performance computing systems purchased from us by G42, dated September 13, 2023; (ii) for power, space, communication, operation, and management of certain of our high-performance computing systems purchased from us by G42, dated February 23, 2024, and amended and restated effective July 10, 2024; and (iii) for operation and management of certain of our high-performance computing systems purchased from us by G42, dated July 10, 2024.
In April 2024, we entered into a letter of award with G42 (the “G42 April 2024 Agreement”) pursuant to which G42, or a third party nominee affiliated with G42, intends to issue purchase orders for high-performance computing products and services for a minimum value of $300 million. Pursuant to the G42 April 2024 Agreement, we received a prepayment of $300 million from G42 in May 2024 to be used for payments to third-party vendors to manufacture high-performance computing infrastructure. If the purchase orders are not issued by G42, any portion of the prepayment not paid by us to our third-party vendors will be payable to G42 on demand and our rights to inventory purchased with the prepayment will transfer to G42.

In May 2024, we entered into an agreement with G42 (the “G42 May 2024 Agreement”) pursuant to which we agreed to certain pricing commitments with G42 through the end of 2025, and G42 agreed that it will purchase, or will cause a third party that may be affiliated or unaffiliated and under a commercial agreement with G42, to purchase, our high-performance computing systems, installation, and support services in an aggregate amount of approximately $1.43 billion by completing the prepayment of such amount before February 28, 2025 and executing binding purchase orders totaling such amount. See the section titled “Management’s Discussion & Analysis—G42 Relationship” for additional information.

Stock Repurchase and Sale

In August 2022, we (i) repurchased an aggregate of 599,880 shares of our outstanding common stock at a purchase price of $16.7525 per share, for an aggregate purchase price of $10.0 million, from certain of our stockholders, including 69,468 shares from Andrew D. Feldman, our Chief Executive Officer, President, and director, for a purchase price of $1.2 million and (ii) issued and sold to Eclipse SPV XIII, L.P., which together with its affiliated entities beneficially owns more than 5% of our outstanding capital stock, an aggregate of 599,880 shares of our common stock at a purchase price of $16.7525 per share, for gross proceeds of $10.0 million. Pierre Lamond, a Partner Emeritus at Eclipse Ventures, was a member of our board of directors. Lior Susan, a current member of our board of directors, is the Founder and Managing Partner of Eclipse Ventures.

Registration Rights

We are party to an amended and restated investors’ rights agreement which provides, among other things, that certain holders of our capital stock, including entities affiliated with Alpha Wave, Altimeter, Benchmark, Coatue, Eclipse Ventures, and Foundation Capital, each of which hold more than 5% of our outstanding capital stock, have the right to demand that we file a registration statement. This agreement also provides that such parties, along with Mr. Feldman and others, have the right to request that their shares of our capital stock be included on a registration statement that we are otherwise filing. We have agreed to give G42 identical registration rights pursuant to the Preferred Stock Purchase Agreement.
See the section titled “Description of Capital Stock—Registration Rights” for additional information regarding these registration rights.

Right of First Refusal

Pursuant to our equity compensation plans and certain agreements with our stockholders, including a right of first refusal and co-sale agreement with certain holders of our capital stock, including entities affiliated with Alpha Wave, Altimeter, Benchmark, Coatue, Eclipse Ventures, and Foundation Capital, each of which hold more than 5% of our outstanding capital stock, and Mr. Feldman, we or our assignees have a right to purchase shares of our capital stock which certain stockholders propose to sell to other parties. This right under the right of first refusal and co-sale agreement will terminate upon the effectiveness of the registration statement of which this prospectus forms a part. Since January 1, 2021, we have waived our right of first refusal in connection with secondary sales of shares of our capital stock, including sales by certain of our executive officers.

Voting Agreement

We are party to an amended and restated voting agreement under which certain holders of our capital stock, including entities affiliated with Alpha Wave, Altimeter, Benchmark, Coatue, Eclipse Ventures, and Foundation Capital, each of which hold more than 5% of our outstanding capital stock, and Mr. Feldman have agreed as to the manner in which they will vote their shares of our capital stock on certain matters, including with respect to the election of directors. Upon the effectiveness of the registration statement of which this prospectus forms a part, the voting agreement will terminate and none of our stockholders will have any special rights regarding the election or designation of members of our board of directors.

Other Transactions

We have entered into offer letter agreements with certain of our executive officers that, among other things, provide for certain compensatory and change in control benefits.
For a description of these agreements with our named executive officers, see the subsection titled “Executive and Director Compensation—Executive Compensation Arrangements.” We have also granted stock options, RSUs, and restricted stock to our executive officers. For a description of these equity awards, see the subsection titled “Executive and Director Compensation—Outstanding Equity Awards at Year End.”

Director and Officer Indemnification

We have entered into indemnification agreements with certain of our current executive officers and directors, and intend to enter into new indemnification agreements with each of our current executive officers and directors before the completion of this offering. Our amended and restated certificate of incorporation also provides that, to the fullest extent permitted by law, we will indemnify any officer or director of our company against all damages, claims, and liabilities arising out of the fact that the person is or was our officer or director, or served any other enterprise at our request as an officer or director. Amending this provision will not reduce our indemnification obligations relating to actions taken before an amendment.

Related Person Transaction Policy

We have a written related person transaction policy, to be effective upon the completion of this offering, that applies to our executive officers, directors, director nominees, holders of more than 5% of any class of our voting securities and any member of the immediate family of, and any entity affiliated with, any of the foregoing persons. Such persons will not be permitted to enter into a related person transaction with us without the prior consent of our audit committee, or other independent members of our board of directors in the event it is inappropriate for our audit committee to review such transaction due to a conflict of interest.
Any request for us to enter into a transaction with an executive officer, director, director nominee, principal stockholder, or any of their immediate family members or affiliates, in which the amount involved exceeds $120,000 must first be presented to our audit committee for review, consideration, and approval. In approving or rejecting any such proposal, our audit committee will consider the relevant facts and circumstances available and deemed relevant to our audit committee, including, but not limited to, the commercial reasonableness of the terms of the transaction and the materiality and character of the related person’s direct or indirect interest in the transaction. All of the transactions described in this section occurred prior to the adoption of this policy.



PRINCIPAL AND SELLING STOCKHOLDERS

The following table contains information about the beneficial ownership of our common stock as of September 15, 2024, (i) immediately prior to the completion of this offering and (ii) as adjusted to reflect the sale of shares of our common stock offered by this prospectus, assuming no exercise of the underwriters’ over-allotment option to purchase additional shares from us, by:
•each of our directors;
•each of our named executive officers;
•all directors and executive officers as a group;
•each of the selling stockholders; and
•each person, or group of persons, known to us who beneficially owns more than 5% of our capital stock.

We have based percentage ownership of our common stock before this offering on                 shares of our Class A common stock and no shares of our Class N common stock outstanding, in each case, as of September 15, 2024, and assume the occurrence of each of the filing and effectiveness of our amended and restated certificate of incorporation, which will be in effect immediately prior to the completion of this offering, the Preferred Stock Conversion, the Option Exercise, and the RSU Net Settlement, in each case as if it had occurred as of September 15, 2024, but do not give effect to any voting proxies that will expire in connection with this offering. The exact number of shares of our Class A common stock that will be withheld from a stockholder in connection with the RSU Net Settlement will differ based on the stockholder’s personal tax rates. The percentage ownership of our common stock after this offering also assumes the foregoing and the issuance and sale of                 shares of Class A common stock by us in this offering, and assumes no exercise of the underwriters’ over-allotment option.
In accordance with the rules of the SEC, beneficial ownership includes voting or investment power with respect to securities and includes the shares issuable pursuant to stock options that are exercisable within 60 days of September 15, 2024 or issuable pursuant to RSUs which are subject to vesting and settlement conditions expected to occur within 60 days of September 15, 2024 (including those for which the liquidity-based vesting condition will be satisfied in connection with this offering). Shares issuable pursuant to stock options are deemed outstanding for computing the percentage of the person holding such options but are not outstanding for computing the percentage of any other person. For further information regarding material transactions between us and certain of our stockholders, see the section titled “Certain Relationships and Related Party Transactions.”

Unless otherwise indicated, the address for each listed stockholder is: c/o Cerebras Systems Inc., 1237 E. Arques Avenue, Sunnyvale, California 94085. Except as indicated in the footnotes to the following table or pursuant to applicable community property laws, we believe, based on information furnished to us, that each stockholder named in the table has sole voting and investment power with respect to the shares set forth opposite such stockholder’s name. The term “G42 Funds” refers to Mozn Holding RSC Ltd. and Expansion Project Technologies Holding 8 SPV RSC Ltd.


                                                                           Shares of Class A Common Stock
Name of Beneficial Owner                                                   Beneficially Owned Before this Offering
Named Executive Officers and Directors:
Andrew D. Feldman(1)                                                       10,605,921
Dhiraj Mallick(2)                                                          1,960,244
Anthony E. Maslowski(3)                                                    682,445
Paul Auvil(4)                                                              215,000
Glenda Dorchak(5)                                                          215,000
Thomas Lantzsch                                                            —
Lior Susan(6)                                                              13,840,909
Eric Vishria(7)                                                            14,385,347
Steve Vassallo(8)                                                          15,302,343
All current executive officers and directors as a group (9 persons)(9)     57,524,764
Other 5% or Greater Stockholders:
Entities affiliated with Alpha Wave(10)                                    9,209,509
Entities affiliated with Altimeter(11)                                     7,041,117
Entities affiliated with Benchmark(12)                                     14,385,347
Entities affiliated with Coatue(13)                                        7,483,724
Entities affiliated with Eclipse Ventures(14)                              13,840,909
Entities affiliated with Foundation Capital(15)                            15,302,343
Other Stockholders:
G42 Funds(16)                                                              1,441,711
Selling Stockholders:
All selling stockholders who beneficially own, in the aggregate, less than 1% of our common stock



_______________ *Represents beneficial ownership of less than 1%. (1)Represents (i) 7,732,483 shares of Class A common stock; (ii) 2,850,000 shares underlying options to purchase shares of Class A common stock that are exercisable within 60 days of September 15, 2024; and (iii) 23,438 shares issuable upon settlement of RSUs that will have satisfied the service-based and liquidity-based vesting conditions in connection with this offering, before giving effect to the RSU Net Settlement. (2)Represents (i) 145,688 shares of Class A common stock; (ii) 1,272,370 shares underlying options to purchase shares of Class A common stock that are exercisable within 60 days of September 15, 2024; (iii) 489,756 shares issuable upon settlement of RSUs that will have satisfied the service-based and liquidity-based vesting conditions in connection with this offering, before giving effect to the RSU Net Settlement; and (iv) an additional 52,430 shares that may be acquired upon the settlement of outstanding RSUs within 60 days of September 15, 2024. (3)Represents (i) 568,083 shares of Class A common stock; (ii) 362 shares of Class A common stock held by the Anthony E. Maslowski Trust; (iii) 7,000 shares of Class A common stock held by Toobe Manka Cheng, as custodian under the California Uniform Transfers to Minors Act (“CUTMA”) for Mr. Maslowski’s minor child; (iv) 7,000 shares of Class A common stock held by Toobe Manka Cheng, as custodian under CUTMA for Mr. Maslowski’s minor child; and (v) 100,000 shares underlying options to purchase shares of Class A common stock that are exercisable within 60 days of September 15, 2024. Mr. Maslowski is our former Chief Financial Officer and served through March 15, 2024. (4)Represents 215,000 shares of Class A common stock, of which 210,521 shares are subject to repurchase within 60 days of September 15, 2024. 
(5) Represents 215,000 shares underlying options to purchase shares of Class A common stock that are exercisable within 60 days of September 15, 2024.
(6) See footnote (14) for shares held by the entities affiliated with Eclipse Ventures. Mr. Susan, the Founder and Managing Partner of Eclipse Ventures, is a member of our board of directors.
(7) See footnote (12) for shares held by the entities affiliated with Benchmark. Mr. Vishria, a general partner of Benchmark, is a member of our board of directors.
(8) See footnote (15) for shares held by the entities affiliated with Foundation Capital. Mr. Vassallo, a general partner of Foundation Capital, is a member of our board of directors.
(9) Represents (i) 52,603,522 shares of Class A common stock beneficially owned by our current executive officers and directors as a group, of which 1,192,273 shares are subject to repurchase within 60 days of September 15, 2024; (ii) 4,355,618 shares underlying options to purchase shares of Class A common stock that are exercisable within 60 days of September 15, 2024; (iii) 513,194 shares issuable upon settlement of RSUs that will have satisfied the service-based and liquidity-based vesting conditions in connection with this offering, before giving effect to the RSU Net Settlement; and (iv) an additional 52,430 shares that may be acquired upon the settlement of outstanding RSUs within 60 days of September 15, 2024.
(10) Represents (i) 5,502,465 shares of Class A common stock held by Alpha Wave Ventures II, LP (“Alpha Wave Ventures”); (ii) 1,364,263 shares of Class A common stock held by Alpha Wave Holdings, LP (“Alpha Wave Holdings”); and (iii) 2,342,781 shares of Class A common stock held by Falcon Q LP (“Falcon Q,” and together with Alpha Wave Ventures and Alpha Wave Holdings, “Alpha Wave”). Alpha Wave Ventures GP, Ltd (“Alpha Wave Ventures GP”) is the general partner of Alpha Wave Ventures and may be deemed to exercise voting and dispositive control over the shares held by Alpha Wave Ventures. Alpha Wave Ventures GP is a joint venture between Alpha Wave Global, LP (“Alpha Wave Global”) and Lunate Capital Holding RSC LTD (“Lunate”). Lunate is majority owned by Chimera Investment LLC (“Chimera”). Chimera is controlled by its board of directors. The managing partners of Lunate Capital Limited, a wholly owned investment manager subsidiary of Lunate, manage the investment activities of Lunate. Richard Gerson is the Chairman and Chief Investment Officer of Alpha Wave Global. Alpha Wave Global is the Investment Manager for Alpha Wave Holdings and Falcon Q. Mr. Gerson therefore may be deemed to exercise voting and dispositive control over the shares held by the entities affiliated with Alpha Wave. The address for all entities affiliated with Alpha Wave is c/o Alpha Wave Global, LP, 667 Madison Ave, 19th Floor, New York, New York 10065. The address for Lunate is Unit No. 1, Floor 8, 9, 10, 11, 12, Al Maryah Tower, Abu Dhabi Global Market Square, Al Maryah Island, Abu Dhabi, United Arab Emirates.


(11) Represents (i) 1,548,390 shares of Class A common stock held by Altimeter Growth Partners Fund III, L.P. (“Altimeter Growth Fund III”); (ii) 1,637,116 shares of Class A common stock held by Altimeter Growth Partners Fund IV, L.P. (“Altimeter Growth Fund IV”); (iii) 3,389,587 shares of Class A common stock held by Altimeter Partners Fund, L.P. (“Altimeter Partners Fund”); and (iv) 466,024 shares of Class A common stock held by Altimeter Private Partners Fund II, L.P. (“Altimeter Private Partners Fund,” and together with Altimeter Growth Fund III, Altimeter Growth Fund IV, and Altimeter Partners Fund, each an “Altimeter Fund” and collectively, “Altimeter”). Brad Gerstner is a managing member of the general partner of each of the Altimeter Funds and may be deemed to have shared voting, investment, and dispositive power with respect to the shares held by these entities. The address for all entities affiliated with Altimeter is One International Place, Suite 4610, Boston, Massachusetts 02110.
(12) Represents 14,385,347 shares of Class A common stock held by Benchmark Capital Partners VIII, L.P. (“BCP VIII”), for itself and as nominee for Benchmark Founders’ Fund VIII, L.P. (“BFF VIII”) and Benchmark Founders’ Fund VIII-B, L.P. (“BFF VIII-B”). Benchmark Capital Management Co. VIII, L.L.C. (“BCMC VIII”) is the general partner of each of BCP VIII, BFF VIII, and BFF VIII-B and may be deemed to have sole voting and investment power with respect to such shares. Mr. Vishria, a member of our board of directors, Matthew R. Cohler, Peter H. Fenton, J. William Gurley, An-Yen Hu, Mitchell H. Lasky, Chetan Puttagunta, and Sarah E. Tavel are the managing members of BCMC VIII. The address for all entities affiliated with Benchmark is 2965 Woodside Road, Woodside, California 94062.
(13) Represents (i) 272,852 shares of Class A common stock held by Coatue CT 61 LLC (“Coatue CT 61”) and (ii) 7,210,872 shares of Class A common stock held by Coatue Private Fund II LP (“Coatue Private Fund,” and together with Coatue CT 61, “Coatue”). Each Coatue entity has designated Coatue Management, L.L.C. to serve as its respective investment manager. Philippe Laffont serves as the control person of Coatue Management, L.L.C. Voting and dispositive decisions with respect to the shares held by the Coatue entities are made by Coatue Management, L.L.C.
(14) Represents (i) 800,358 shares of Class A common stock held by Eclipse Continuity Fund I, L.P. (“Eclipse Continuity Fund”); (ii) 6,548,466 shares of Class A common stock held by Eclipse SPV II, L.P. (“Eclipse SPV II”); (iii) 599,880 shares of Class A common stock held by Eclipse SPV XIII, L.P. (“Eclipse SPV XIII”); and (iv) 5,892,205 shares of Class A common stock held by Eclipse Ventures Fund I, L.P. (“Eclipse Ventures Fund,” and together with Eclipse Continuity Fund, Eclipse SPV II, and Eclipse SPV XIII, “Eclipse Ventures Entities”). Mr. Susan is the sole managing member of the general partner of each of the Eclipse Ventures Entities and may be deemed to have voting, investment, and dispositive power with respect to the shares held by such entities. The address for the Eclipse Ventures Entities is 514 High Street, Palo Alto, California 94301.
(15) Represents (i) 1,091,411 shares of Class A common stock held by Foundation Capital Leadership Fund II, L.P. (“Foundation Leadership Fund”); (ii) 299,627 shares of Class A common stock held by Foundation Capital VIII Principals Fund, LLC (“Foundation Capital VIII Principals”); and (iii) 13,911,305 shares of Class A common stock held by Foundation Capital VIII, L.P. (“Foundation Capital VIII,” and together with Foundation Leadership Fund and Foundation Capital VIII Principals, “Foundation Capital”). Foundation Capital Management Co. VIII, L.L.C. is the General Partner of Foundation Capital VIII and the Manager of Foundation Capital VIII Principals and has sole voting and investment power. Ashu Garg, Paul R. Holland, Charles P. Moldow, and Steven P. Vassallo are the Managers of Foundation Capital Management Co. VIII, L.L.C. and share such powers. Foundation Capital Management Co. LF II, L.L.C. is the General Partner of Foundation Capital Leadership Fund and has sole voting and investment power. Ashu Garg, Charles P. Moldow, and Steven P. Vassallo are the Managers of Foundation Capital Management Co. LF II, L.L.C. and share such powers. The address for all entities affiliated with Foundation Capital is 550 High Street, 3rd Floor, Palo Alto, California 94301.
(16) Represents 1,441,711 shares of Class A common stock held by Mozn Holding RSC Limited (“Mozn”). Does not include 22,851,296 shares of Class N common stock that Expansion Project Technologies Holding 8 SPV RSC Ltd. (“EPTH”) has agreed to purchase by April 15, 2025 pursuant to the Preferred Stock Purchase Agreement. Each share of Class N common stock is non-voting and is convertible into one share of Class A common stock. EPTH has agreed to not convert its shares of Class N common stock until on or after July 31, 2025. Mozn’s registered office is 2475Register08, 24, Al Sila Tower, Abu Dhabi Global Market Square, Abu Dhabi, Al Maryah Island, United Arab Emirates, and EPTH’s registered office is 8th floor, 8, Al Khatem Tower, Adgm Square, Al Maryah Island, Abu Dhabi, United Arab Emirates. HH Sheikh Tahnoon Bin Zayed S. Al-Nahyan is the ultimate beneficial owner of the shares held (or to be held) by each of Mozn and EPTH and has sole dispositive and voting power over such shares.