DigitalOcean Holdings Inc
NYSE:DOCN
Earnings Call Analysis
Q3-2024 Analysis
DigitalOcean Holdings Inc
In the third quarter of 2024, DigitalOcean reported revenue of $198.5 million, up 12% year-over-year. Annual run-rate revenue (ARR) also grew 12%, to $798.3 million, reflecting steady growth in a competitive landscape. Notably, the company added $17 million of new ARR during the quarter, driven mainly by its largest customers, known as builders and scalers, whose revenue grew a remarkable 15%; these cohorts play a vital role, constituting 88% of total revenue.
Interest in DigitalOcean's artificial intelligence and machine learning (AI/ML) platform soared, with ARR growth of nearly 200% year-over-year. This surge reflects the company's successful integration of AI capabilities, with substantial contributions both from newly onboarded customers and from existing clients expanding their use of DigitalOcean services. Despite these AI/ML gains, overall revenue growth faced headwinds from challenging year-over-year comparisons tied to earlier pricing adjustments and from regional revenue fluctuations.
The financials show a healthy adjusted EBITDA of $87 million for Q3, a 14% increase from the previous year, and an adjusted EBITDA margin of 44%, demonstrating effective cost management. In terms of profitability, diluted net income per share rose 65% to $0.33, while non-GAAP diluted net income per share rose 18% year-over-year to $0.52. Adjusted free cash flow totaled $26 million, or 13% of revenue, providing flexibility for further investment in AI and growth initiatives.
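The margin figures above can be checked directly from the stated dollar amounts; a minimal sketch using only numbers quoted in the text:

```python
# Sanity check of the Q3 margins quoted above, from the stated dollar figures.
revenue = 198.5   # $M, Q3 revenue
ebitda = 87.0     # $M, adjusted EBITDA
fcf = 26.0        # $M, adjusted free cash flow

assert round(ebitda / revenue * 100) == 44   # adjusted EBITDA margin, as stated
assert round(fcf / revenue * 100) == 13      # free cash flow margin, as stated
```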
DigitalOcean reported a net dollar retention (NDR) rate of 97%, maintaining stable customer retention despite a challenging pricing landscape. NDR remains below historical levels, however, leaving room for improvement. The company plans to leverage product innovation to improve customer engagement and retention, driving future growth as it works to bring NDR back above 100% over time.
Encouragingly, DigitalOcean raised its full-year revenue guidance by $5 million at the low end and $2 million at the high end, projecting full-year revenue of $775 million to $777 million. For Q4 specifically, expected revenue of $199 million to $201 million implies approximately 11% growth year-over-year. The company also raised its full-year adjusted EBITDA margin expectation to 40%-41%, with Q4 adjusted EBITDA margin forecast at 34%-38%.
The company has emphasized shifting its focus towards servicing larger customers and those utilizing AI services. Recent product innovations include the introduction of new features tailored to meet the needs of scalers, potentially enhancing their cloud adoption rates and driving long-term commitment contracts. DigitalOcean acknowledges this strategic pivot as essential to capturing a greater market share, particularly as companies migrate from hyperscalers to their platform.
While not providing explicit guidance for 2025, DigitalOcean anticipates entering that year with baseline growth in the low to mid-teens. This outlook is supported by consistent improvements in customer retention, with expectations for NDR to rise as further innovations are rolled out. The company also pointed to continuing contributions from its AI/ML initiatives, which are poised to help sustain overall growth as the digital landscape evolves.
Thank you for standing by, and welcome to the DigitalOcean Third Quarter 2024 Earnings Conference Call. [Operator Instructions] I'd now like to turn the call over to Melanie Strate, Head of Investor Relations. You may begin.
Thank you, and good morning. Thank you all for joining us today to review DigitalOcean's Third Quarter 2024 financial results. Joining me on the call today are Paddy Srinivasan, our Chief Executive Officer; and Matt Steinfort, our Chief Financial Officer. After our prepared remarks, we will open the call up to a question-and-answer session.
Before we begin, let me remind you that certain statements made on the call today may be considered forward-looking statements, which reflect management's best judgment based on currently available information. I refer specifically to the discussion of our expectations and beliefs regarding our financial outlook for the fourth quarter and full year 2024, as well as our business goals and outlook. Our actual results may differ materially from those projected in these forward-looking statements. I direct your attention to the risk factors contained in our filings with the Securities and Exchange Commission and those referenced in today's press release that is posted on our website.
DigitalOcean expressly disclaims any obligation or undertaking to release publicly any updates or revisions to any forward-looking statements made today. Additionally, non-GAAP financial measures will be discussed on this conference call, and reconciliations to the most directly comparable GAAP financial measures are available in today's press release as well as in our investor presentation that outlines the financial discussion on today's call. A webcast of today's call is also available on the IR section of our website.
And with that, I will turn the call over to Paddy.
Thank you, Melanie. Good morning, everyone, and thank you for joining us today as we review our third quarter 2024 results. DigitalOcean had a successful third quarter, continuing to deliver progress on our key metrics and executing on the initiatives we laid out earlier in the year, further establishing ourselves as the simplest scalable cloud.
In my remarks today, I will briefly highlight our third quarter results, share tangible examples of how our increased pace of innovation is benefiting our customers, discuss the continued momentum we are seeing with our AIML platform and give an update on our strategic partnerships and engagement with the developer ecosystem. First, I would like to briefly recap our third quarter 2024 financial results.
Revenue growth remained steady in the third quarter at 12% year-over-year with solid performance in core cloud and continued growth in AI despite lapping difficult comps from our managed hosting price increase in April 2023 and from the Paperspace acquisition in July 2023.
We continue to see momentum in demand for our AIML products, where Q3 ARR again grew close to 200% year-over-year. In addition, we saw revenue growth contributions from new customers and steady growth from our core business as we continue to enhance our customer success and go-to-market motions. Having delivered strong results through the first 3 quarters, we are increasing the lower end of our full year revenue guide by $5 million and the top end by $2 million.
We continue to focus the majority of our product innovation and go-to-market investments on our builders and scalers, who drive 88% of our total revenue and are growing 15% year-over-year ahead of our overall 12% revenue growth. We also delivered strong adjusted EBITDA margins at 44% and have maintained our full year free cash flow margin guidance as we continue to manage costs effectively while still investing to accelerate product innovation in cloud and AI.
Matt will walk you through more details on our financial results and guidance later in this call. Let me start by giving you an update on our core cloud computing platform.
In Q3, we continued our increased product velocity, specifically focused on the needs of our largest and fastest-growing customer cohort, the 17,000-plus scalers that drive 58% of our total revenue, and that grew 19% year-over-year in the quarter.
In Q3, we released 42 new product features in total, which is almost double what we delivered in the previous quarter. We are accelerating features that will benefit our existing and potential scalers that are on other hyperscaler clouds today. Let me now provide a few highlights from these efforts that are specifically focused on the needs of these larger workloads.
We announced the early availability of virtual private cloud peering, or VPC peering for short, which gives customers the ability to connect 2 different VPCs on the DigitalOcean platform within a data center or between different data centers. Through VPC peering, customers can create strong data isolation and privacy via direct and secure networking between resources that doesn't expose traffic to the public Internet.
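The reachability model described here can be sketched in a few lines: resources are privately routable only within a VPC or across an explicit peering. This is an illustrative model, not the DigitalOcean API; all names are hypothetical.

```python
# Toy model of VPC peering reachability: traffic stays off the public
# Internet only within one VPC or across an explicitly created peering.

class Vpc:
    def __init__(self, name: str) -> None:
        self.name = name
        self.peered: set[str] = set()

def peer(a: Vpc, b: Vpc) -> None:
    """Create a bidirectional peering between two VPCs."""
    a.peered.add(b.name)
    b.peered.add(a.name)

def privately_reachable(src: Vpc, dst: Vpc) -> bool:
    """Same VPC, or peered VPCs (possibly in different data centers)."""
    return src.name == dst.name or dst.name in src.peered

nyc = Vpc("vpc-nyc3")
ams = Vpc("vpc-ams3")
assert not privately_reachable(nyc, ams)  # isolated by default
peer(nyc, ams)
assert privately_reachable(nyc, ams)      # peered across data centers
```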
Our global load balancer, or GLB, is now generally available for all of our customers. GLB offers global traffic distribution based on geographical proximity of the end user, enabling lower latency services, dynamic multiregional traffic fail-over, enabling more service availability for our customers' applications, data center prioritization, edge caching, and automatic scaling of the load balancers. We are thrilled to be able to roll it out to all of our customers, particularly to our scaler customers with existing multinational deployments that will benefit directly from this new product.
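The two GLB behaviors called out above, geographic-proximity routing and multiregional failover, can be illustrated with a toy routing function. This is our sketch under simplified assumptions (distance by latitude only), not DigitalOcean's implementation.

```python
# Illustrative GLB routing: send each user to the nearest healthy region,
# failing over to the next-closest region when one becomes unavailable.

REGIONS = {"nyc3": 40.7, "ams3": 52.4, "sgp1": 1.35}  # latitude only, for brevity

def route(user_lat: float, healthy: set[str]) -> str:
    """Pick the healthy region geographically closest to the user."""
    candidates = {r: abs(lat - user_lat) for r, lat in REGIONS.items() if r in healthy}
    return min(candidates, key=candidates.get)

assert route(48.8, {"nyc3", "ams3", "sgp1"}) == "ams3"  # Paris user -> Amsterdam
assert route(48.8, {"nyc3", "sgp1"}) == "nyc3"          # ams3 down -> fail over
```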
During the third quarter, we progressed daily backups from early availability to general availability, giving our customers the additional flexibility to manage backups at a daily and weekly cadence. This enables increased protection for our customers' workloads. With daily backups, we automatically retain the 7 most recent backup copies. This was an explicit need given the large volume and growth of data we are seeing on our platform, with our Spaces object storage footprint growing 50% year-over-year.
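The retention rule just described, keep only the 7 most recent copies, amounts to a simple pruning step; a minimal sketch for illustration:

```python
# Sketch of the stated retention policy: retain the 7 most recent backups.

RETAIN = 7

def prune(backups: list[str]) -> list[str]:
    """Given backups ordered oldest-first, keep the RETAIN most recent."""
    return backups[-RETAIN:]

history = [f"backup-day-{d}" for d in range(1, 11)]  # 10 daily backups
kept = prune(history)
assert len(kept) == 7
assert kept[0] == "backup-day-4" and kept[-1] == "backup-day-10"
```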
We're also launching larger droplet configurations, including 48 vCPU memory-optimized and storage-optimized droplets, 60 vCPU CPU-optimized and general-purpose droplets, and larger 7-terabyte and 10-terabyte disk-density variant droplets. These large droplet configurations are particularly relevant to our scaler customers, who can quickly scale up workloads that require more CPU, memory or storage rather than horizontally scaling out with multiple nodes.
In September, we announced Kubernetes log forwarding, which gives Kubernetes customers centralized log management, simplifying the monitoring and troubleshooting of their applications on the DigitalOcean platform. This was built with simplicity in mind.
With just a few clicks from the Kubernetes settings panel, customers can easily forward cluster event logs from Kubernetes directly to DigitalOcean Managed OpenSearch for further analysis. We also enhanced application security for our cloud-based managed hosting product by introducing a new malware protection solution, which saw 3,650 net activations within the first week.
To date, we have seen near-zero false positive and false negative rates from our malware protection. This malware protection capability is now one of the fastest-growing revenue-generating products we have seen on our managed hosting platform. All these innovations are not only helping us meet the needs of our large customers, but also helping us move customers with these larger workloads from purely usage-based to committed contracts. For example, an existing cybersecurity customer of ours, Sibu, a leader in threat intelligence, signed a multiyear 7-figure commitment this quarter.
The decision to continue leveraging DigitalOcean and sign a multiyear deal was driven by the release of our new large premium CPU-optimized droplets, which help customers run computationally heavy workloads. Sibu is a petabyte-scale company, and after several weeks of diligence, they chose DigitalOcean for this new workload due to our scalability, coverage and cost efficiency.
Another great example is Traject Data, who signed a multiyear commitment for a broad portfolio of DigitalOcean services, including over 500 droplets, Managed MongoDB, Spaces, backups and volumes. Traject Data requires robust, scalable and reliable infrastructure to power their real-time, clean and bulk-processed data insights, serving domains including marketing, retail and analytics. They use the DigitalOcean platform to host their APIs and manage vast amounts of search engine results page and e-commerce data to deliver critical insights to their customers. These product innovations and enhanced customer engagement are also helping customers migrate workloads to DigitalOcean from the hyperscalers.
One specific example is PCAP, a leading ride-sharing and logistics company based in Latin America, operating in Mexico, Brazil, Peru and Colombia; they moved all of their workloads from various clouds to DigitalOcean in the third quarter. They migrated to DO due to the simplicity of our products, our transparent and simple pricing model and strong support from our customer-facing teams.
Another example is No Bid, a customer specializing in optimizing ad revenue for online publishers through real-time bidding technology. Upon technical validation of the DO platform's scale, they moved most of their large-scale production applications from a hyperscaler to the DO platform, reinforcing our opportunity to increase our share of wallet with our scaler customers. Next, let me provide some updates on the AI/ML side.
Our AI strategy reflects our belief that the AI market will evolve in a similar fashion to other major technology transformations, with initial progress and monetization at the infrastructure layer eventually being eclipsed by the opportunities and value creation at the platform and application layers. Like others in the market today, we are actively participating in the infrastructure layer.
But we are also innovating rapidly in the platform and application pillars to make it easy for our customers to use Gen AI at scale without requiring deep AIML expertise. This is where we see our differentiation as our customers seek to consume AI through platforms and agents rather than building everything themselves using raw GPU infrastructure.
At the infrastructure layer, we made GPU droplets accelerated by NVIDIA H100 Tensor Core GPUs generally available to all of our customers. Now all DigitalOcean customers can leverage on-demand and fractional access to GPUs, which is a critical step in achieving our overarching mission of democratizing AI for all customers.
In Q3, we also announced the early availability of NVIDIA H100 Tensor Core GPU worker nodes on the DigitalOcean Kubernetes platform, or DOKS for short, providing customers with a managed experience with GPU nodes ready with NVIDIA drivers, the NVLink fabric manager and the NVIDIA container toolkit. Customers can take advantage of the NVIDIA GPU Operator and the NVIDIA Mellanox network operator to install a comprehensive suite of tools required for production deployment. Both GPU droplets and the H100 GPU nodes on DOKS are examples of how we are innovating even at the infrastructure layer, making it simpler for customers.
Let me give you an example. Calian Exchange is a paytech company that specializes in providing enterprise blockchain-based solutions for bank payments, and they're leveraging DigitalOcean's H100 infrastructure to accelerate the processing of high-volume financial transactions with advanced computational power. They use machine learning models to detect fraud in real time, assess risk and ensure that payments are processed securely and quickly. The GPU infrastructure allows them to process more transactions while maintaining low latency and improving the overall user experience for both banks and end customers.
Next, at the platform layer. This quarter, we launched the early availability of our new Gen AI platform to select customers so that we can iterate with them, shape the product and make it easy for them to build Gen AI applications that deliver real business value.
Users of this product will be able to combine their data with the power of foundational models to create personalized agents to integrate with their applications in just a few minutes. Customers can leverage our platform to create AI applications with foundational models, agent routing, knowledge bases and retrieval-augmented generation, or RAG.
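The RAG flow described here, retrieve the most relevant knowledge-base entry for a query and hand it to a model as context, can be sketched with a toy retriever. The scoring (word overlap) and the stand-in "answer" step are our simplifications; real systems use embeddings and a foundation model.

```python
# Toy retrieval-augmented generation: rank knowledge-base documents by
# overlap with the query, then use the best match as model context.

KNOWLEDGE_BASE = [
    "Droplets are resizable virtual machines.",
    "Spaces is an S3-compatible object storage service.",
    "Managed MongoDB handles backups and failover for you.",
]

def retrieve(query: str) -> str:
    """Pick the document sharing the most words with the query."""
    q = set(query.lower().split())
    return max(KNOWLEDGE_BASE, key=lambda doc: len(q & set(doc.lower().split())))

def answer(query: str) -> str:
    context = retrieve(query)
    # A real agent would prompt a foundation model with this retrieved context.
    return f"[context: {context}]"

assert "object storage" in answer("what is spaces object storage")
```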
This is a key step towards our software-centric AI strategy, which is aimed at enabling customers to drive business value from AI in a friction-free manner.
An example of a customer that is already leveraging our Gen AI platform is Autonoma Cloud, a plant digitization company that offers a platform for manufacturing plants and machine manufacturers. Autonoma Cloud creates and manages large volumes of documentation and data for each of their customers' plants and individual machines, and they were looking to create AI agents that understand user-specific contexts and retrieve answers and machine-specific data for their queries.
With DO's new Gen AI platform, [indiscernible] built an interactive experience with their custom data that reduces the cognitive overhead for users. It is very important to note that these companies are not just doing internal proofs of concept or R&D projects; they are now starting to leverage our AI/ML products to build AI into their own products and deliver real business value to their customers, without requiring deep expertise in AI, machine learning, data science or data engineering.
Finally, let me talk about the third pillar of our AI strategy, the application, or agentic, layer. As I just discussed, our customers are using our Gen AI platform to create their own AI-driven agents. In addition, we are also innovating on this front by further simplifying cloud computing using AI and automating workflows that were previously done by humans.
One of the frequent pain points for our customers is debugging their cloud applications when something goes wrong because one, it is a very complex set of technical tasks. And number two, they typically don't have specialized site reliability engineers, or SREs, available in their staff to perform these complex tasks. So we set out to mitigate this pain point for our customers using Gen AI by building a new AI agent to perform some of these tasks that are typically done by human SREs.
We are using this AI SRE agent, both internally on our systems and externally by integrating it with our cloud-based products. Let me explain.
Internally, we are using the AI SRE agent to help our human SREs troubleshoot ongoing technical incidents on the DO cloud platform. Based on our initial internal data, the AI SRE agent is reducing the time it takes to identify root causes by almost 35%, leveraging AI to quickly process an enormous amount of log data from disparate systems to pinpoint root causes and make next-step decisions, including recommending fixes for underlying problems.
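The kind of log triage described here, sifting a large volume of logs from disparate systems to surface a likely root cause, can be illustrated with a simple heuristic: rank services by error spikes. This heuristic is our illustration, not DigitalOcean's agent.

```python
# Toy triage: given log lines of the form "service LEVEL message",
# surface the service with the most ERROR lines as the likely root cause.
from collections import Counter

def likely_root_cause(log_lines: list[str]) -> str:
    """Count ERROR lines per service and return the worst offender."""
    errors = Counter(
        line.split()[0] for line in log_lines if " ERROR " in f" {line} "
    )
    return errors.most_common(1)[0][0]

logs = [
    "api INFO request served",
    "db ERROR connection pool exhausted",
    "db ERROR replica lag exceeded",
    "cache WARN eviction rate high",
    "db ERROR connection pool exhausted",
]
assert likely_root_cause(logs) == "db"
```

A real agent would add log deduplication, cross-system correlation and an LLM summarization step on top of a ranking like this.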
Externally, we integrated this AI SRE agent into our cloud-based product, which hosts hundreds of thousands of mission-critical websites. Today, when issues happen on customers' sites and applications, they have to work with support engineers to debug the root cause and then apply a fix. This is true not just for the DigitalOcean platform but across all managed hosting platforms.
This can be a time-consuming process during which their business and even their websites can be affected, if not taken offline. Our new AI SRE agent jumps into action upon detection of any performance degradation due to common issues like aggressive bot crawlers, denial-of-service attacks and so forth, investigating, gathering insights and providing recommendations in real time on how to fix these issues, thereby reducing time to resolution significantly. Our testing results are very encouraging, and we have just started working with a few customers in early availability mode.
Rounding out our AI strategy, we opened up a new front door by launching a strategic partnership with Hugging Face in Q3. Hugging Face is the leading open source and open science platform that helps users build, deploy and train machine learning models. As a result of this partnership, DigitalOcean now offers model inferencing through 1-click deployable models on GPU droplets, allowing users to quickly and easily deploy the most popular third-party models with the simplicity of GPU droplets and optimal performance accelerated by NVIDIA H100 Tensor Core GPUs.
This offering simplifies the deployment complexity of the most popular open source AI/ML models, as DigitalOcean has natively integrated and optimized these models for GPU droplets, enabling fast deployment and superior performance. The Hugging Face partnership will make it easier for the more than [ 1 ] million Hugging Face users to discover and use the DigitalOcean platform.
In Q3, we also announced a new partnership with Netlify, a leading Web development platform to enable customers to seamlessly connect their Netlify applications to DigitalOcean managed MongoDB, offering developers all the right tools to build and scale their applications without the complexities of managing infrastructure. These announcements, in addition to the various other partnerships we already have in flight, highlight our efforts to augment our durable product-led growth mode with additional channels, including new front doors through partnerships with leading players in our ecosystem that will also help shape and improve our product offerings.
I'm also excited to highlight the material progress we are making with our renewed engagement with the developer community. In October, we hosted the 11th edition of Hacktoberfest, which has evolved from an internal hackathon event at DigitalOcean into one of the largest and premier open source community events.
This year, over 65,000 developers from 172 countries participated in more than 115 community-run events and contributed to 15,000 open source projects. Beyond Hacktoberfest, we also hosted more than 10 DigitalOcean meetups for the developer and AI/ML community and participated in a number of industry conferences. This broad-based community engagement reinforces DigitalOcean's ongoing commitment to our developer ecosystem.
In closing, I am encouraged by the progress on product innovation and customer engagement, particularly as it is helping our builder and scaler customers continue to grow on our platform as their businesses expand. We're also making great strides towards our software-centric AI vision by rapidly shipping products in each of the 3 layers: infrastructure, platform and applications. We're starting to see the green shoots from these investments in the form of customer wins, including cloud migrations from the hyperscalers, multiyear commitment contracts and real-world deployment of AI using the DO AI platform.
We will continue to focus on our largest and fastest-growing customer cohorts as we seek to accelerate growth in the quarters to come.
Before I turn the call over to Matt, I'm very excited to share that we will be hosting an Investor Day in New York City, and we are currently targeting late March or early calendar Q2 2025, in which we will share more on our long-term strategy, including more detail on our progress and metrics as well as a view of our long-term financial outlook.
I will now hand the call over to Matt Steinfort, our CFO, who will now provide some additional details on our financial results and our outlook for Q4 2024. Thank you.
Thanks, Paddy. Good morning, everyone, and thanks for joining us today. As Paddy just covered, we had a very successful Q3, both executing on key initiatives and delivering solid financial performance.
In Q3, we continued to see increased momentum from our AIML platform and steady growth across our core business while consistently delivering attractive adjusted EBITDA and adjusted free cash flow margins.
Revenue in the third quarter was $198.5 million, up 12% year-over-year. Annual run rate revenue or ARR in the third quarter was $798.3 million, also up 12% year-over-year. We added $17 million of ARR in the quarter. Most notably, builders and scalers, which are our largest customers, together grew 15% year-over-year.
Contributing to our overall growth was healthy incremental revenue from new customers and increased momentum from our AIML platform, which saw significant growth, again growing close to 200% year-over-year on an ARR basis.
Overall growth was partially muted by our managed hosting platform, as we are lapping difficult comps related to the April 2023 managed hosting price increase and a temporary surge of managed hosting revenue in Asia in late 2023. Our Q3 net dollar retention rate was steady at 97%.
As in prior quarters, we continued to see consistent but below-historical net expansion levels, while our churn levels have remained low for well over a year. We will continue efforts to improve growth and NDR, including executing our product road map and working to layer on additional go-to-market motions to complement our durable product-led growth engine.
Turning to the P&L. Gross margin for the quarter was 60%, which was 100 basis points lower than the prior quarter and consistent with the prior year. We were able to maintain healthy gross margins while continuing our investment in AI infrastructure, due to the success of our ongoing cost optimization efforts.
Adjusted EBITDA was $87 million, an increase of 14% year-over-year. Adjusted EBITDA margin was 44% in the quarter, approximately 200 basis points higher than the prior quarter. This increase quarter-over-quarter was primarily driven again by our ongoing operating cost discipline. Diluted net income per share was $0.33, a 65% increase year-over-year, and non-GAAP diluted net income per share was $0.52, an 18% increase year-over-year. This is directly a result of our ability to increase per share profitability by continuing to drive operating leverage while mitigating dilution through share buybacks.
Finally, Q3 adjusted free cash flow was $26 million, or 13% of revenue. This is lower than the prior quarter by approximately 600 basis points due to the timing of capital expense payments, as we continue to make investments to capitalize on the AI opportunity and fuel future growth. As a reminder, quarterly free cash flow margin will vary given the timing of capital spend and other working capital impacts.
The lower free cash flow in Q3 does not change our expected full year free cash flow margin. Turning to our customer metrics. The number of builders and scalers on our platform, those that spend more than $50 per month, was approximately 163,000, representing an increase of 6% year-over-year. The revenue growth associated with builders and scalers was 15% year-over-year, ahead of our overall revenue growth rate of 12%.
The number of builders and scalers on our platform, which together represent 88% of our total revenue, increased by 2,260 quarter-over-quarter. The continued growth of our larger spending cohorts is a direct result of our focused product development, much of which is driven by direct customer feedback, and the customer success and go-to-market investments that are concentrated on these builders and scalers.
Our overall revenue mix continued to shift more towards our higher spend and higher growth customers, and we saw total ARPU increase 11% year-over-year to $102.51. Our balance sheet remains very strong as we ended the quarter with $440 million of cash and cash equivalents. We also continue to execute against our share repurchase program with $11 million of repurchases in the quarter, bringing total share repurchases to $29.9 million through the first 3 quarters of the year.
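The cohort definition from the customer metrics above, builders and scalers are customers spending more than $50 per month, reduces to a simple threshold rule. A minimal sketch; the label for the sub-$50 cohort is our assumption, not a name from the call:

```python
# Cohort rule stated on the call: builders and scalers spend > $50/month.
# The "other" label for smaller accounts is a hypothetical placeholder.

def cohort(monthly_spend: float) -> str:
    return "builder_or_scaler" if monthly_spend > 50 else "other"

assert cohort(102.51) == "builder_or_scaler"  # the average ARPU reported
assert cohort(12.0) == "other"
```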
With our healthy cash position and ongoing free cash flow generation, we are well positioned to continue to balance investment in organic growth with share repurchases, while moving towards our 2.5x to 3x net leverage target and maintaining appropriate flexibility to address our 2026 convert at the appropriate time.
Moving on to guidance. Based on our performance year-to-date, we are increasing the bottom end of our full year 2024 revenue guide by $5 million and the top end by $2 million, projecting revenue to be in the range of $775 million to $777 million, a $3.5 million increase in the midpoint of our guidance range, which will represent year-over-year growth of approximately 12%.
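The guidance arithmetic above can be verified directly: raising the low end of the range by $5 million and the high end by $2 million moves the midpoint by the average of the two increases.

```python
# Check of the stated guidance math: new range $775M-$777M, after raising
# the low end by $5M and the high end by $2M.

new_low, new_high = 775, 777                 # $M, new full-year revenue guide
old_low, old_high = new_low - 5, new_high - 2
midpoint_increase = (new_low + new_high) / 2 - (old_low + old_high) / 2
assert midpoint_increase == 3.5              # matches the $3.5M stated on the call
```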
This full year guide implies Q4 revenue to be in the range of $199 million to $201 million, representing approximately 11% year-over-year growth at the midpoint of our guidance range. While we are not yet going to provide 2025 revenue guidance, we expect to enter 2025 with baseline growth in the low to mid-teens.
As demonstrated throughout 2024, we remain committed to driving continued operating leverage in our core DigitalOcean platform. Given our solid performance throughout the first 3 quarters of the year, we are raising our adjusted EBITDA margin guidance for the full year to be in the range of 40% to 41%. This full year adjusted EBITDA guide implies Q4 adjusted EBITDA margins in the range of 34% to 38%. For the full year, we expect non-GAAP diluted earnings per share to be between $1.70 and $1.75. This implies Q4 non-GAAP diluted earnings per share of $0.27 to $0.32, based on approximately 103 million to 104 million weighted average fully diluted shares outstanding.
Turning to adjusted free cash flow. We expect adjusted free cash flow margins for the full year to be in the range of 15% to 17%, consistent with what we guided in the prior quarter. While free cash flow margin will continue to vary quarter-to-quarter, we anticipate remaining in a similar 15% to 17% range on a rolling average quarterly basis in 2025 as we continue to accelerate the pace of product innovation and make disciplined investments to expand our emerging AI capabilities.
That concludes our prepared remarks, and we will now open up the call to Q&A.
[Operator Instructions] Your first question comes from the line of Raimo Lenschow from Barclays.
Perfect. Paddy, there is a lot of product innovation that you discussed. Can you talk a little bit about how we should think about those new product innovations and how they feed into the installed base, in terms of the uptake and the timing there? Because the financial number Matt and we look at, the NRR, is stable at 97%, and there seems to be a little bit of a disconnect. Can you maybe talk to the timing there, et cetera? And I have 1 follow-up for Matt.
Thank you, Raimo, for the question. Yes, we are seeing a lot of product innovation across the board, both in the core cloud and in AI. That's why I spent so much time explaining all the things we are shipping, especially for our scalers, enabling them to run larger workloads on DigitalOcean.
As you know from a timing and sequence point of view, there's no magical answer that we can provide, which translates our product innovation to adoption and hence, impact on our financial performance.
But we have to do this to enable our customers to move many of the larger workloads that they are currently running in other clouds, and to make it super compelling for them to run those workloads on the DO platform. And as I did just a few minutes ago, we will get into a habit of sharing some very concrete examples of customers that are starting to do that.
So the examples I gave, we are now starting to sign customers in multiyear contracts with commitments on our platform. We are also starting to see a steady dose of migrations coming from other clouds, especially the hyperscalers.
So we have to ensure that we have patience in terms of building these capabilities. We are starting to see the green shoots in terms of customer adoption and the translation of that into leading indicators. And I have no question that if we keep doing this for a handful of quarters, we are going to start seeing the translation into lagging indicators as well, including some of the ones that you just mentioned, Raimo.
So in terms of the NDR, I think Matt alluded to the fact that the NDR of the core business is trending a little bit ahead of what we are reporting on a blended basis. So it gives us enough reason to believe that what we are doing is starting to be appreciated by our customers.
And as you know, this takes time for the adoption to happen. I have to keep reminding ourselves that we have 638,000 plus customers. So it takes time for the propagation to happen across the board with our customer base.
Okay. Perfect. And then 1 for you, Matt. If you look at EBITDA, that's the one where you're outperforming quite a bit at the moment. Can you talk a little bit about how you achieve that? How sustainable is the progression there, especially as you think about more services coming on stream, which you'll probably want to support more, and obviously more AI services coming as well?
Thanks, Raimo. Yes, I'd say from a cost standpoint, Q3 was definitely a good quarter from an EBITDA margin perspective. As we brought on our new executives, we had talked about -- implicit in our guide for the full year -- making sure that we had enough room to invest to enable them to really improve the pace of innovation and layer on additional go-to-market motion.
But at the same time, we were evaluating, okay, what cost do we have now that we just aren't earning a return on? And can we clean those out before the team gets going with the new expenses? So we, I think, did a really good job of optimizing for that. And we also made some decisions to make sure that we were appropriately pacing the increases to see if we're getting a return on the investments as we did it.
So I think it was just disciplined kind of cost management in Q3. And as you saw from the guide in Q4, we are expecting to ramp our [indiscernible] heading into next year.
I don't think it's going to be a meaningful kind of change in the overall expense level. We feel pretty good about the kind of the trailing margin profile that we have and being able to continue that into next year.
Our next question comes from the line of Mike Cikos from Needham.
I think the first would go to Matt, just coming off of Raimo's question there. If I look at the EBITDA guide that we have today, the 34% to 38% margin guide for Q4 is the widest range that I think we've had in recent memory. I just wanted to get a little bit more granular there: what needs to go wrong or right, or what are you guys weighing, that gets you to the 34% margin versus the 38% margin in the December quarter?
Yes, it's a good question, Mike. I think part of it is, as we've been ramping the spend -- particularly the R&D spend -- we're evaluating surge resources, using contractors to accelerate a handful of things on the product road map, and the timing of that. Again, I view that as a relatively lumpy potential investment, and the timing of being able to get that spun up, fully staffed and moving -- whether that hits in Q1 or it hits in Q4 -- I think that's really what's causing the range.
Again, I think on a go-forward basis, we don't anticipate a material change in the overall kind of R&D as a percent of revenue, but we are advancing the expense. So in any 1 quarter, it may be a little bit lumpier. But again, over a longer period of time, we don't -- we think we grow into that, and some of that is surge resource.
Terrific. And just another follow-up. I know that you guys aren't providing explicit guidance for 2025 here. I do appreciate the qualitative commentary. Just wanted to see what gives you the confidence to kind of put that out there for the baseline growth? And how should we be thinking about what it takes for DigitalOcean to be entering the year with that kind of baseline growth that you had commented on?
Yes. I think it's very similar to what we described at the beginning of this year, right? What can you count on? Well, you can count on the growth in the self-serve funnel, and we're doing a little bit better than that year-to-date, versus what we had outlined at the beginning of the year.
We've got the managed hosting business, which is kind of returning to growth after lapping some difficult comps. We've got AI/ML, which we had said would contribute around 3 points of growth, and it's a little bit ahead of that for the year. And then NDR -- while it's frustrating that we've had to print a bunch of 97s in a row, as Paddy said, the core DO NDR is actually ahead of that. We've got a little bit of a headwind from managed hosting that's going to be in place through, call it, the first quarter of next year.
And so if you take all those together, we've moved up a couple of points from the baseline growth that we had described coming into the year. And none of it is on the back of macro improvements. It's steady improvement and continuing to deploy products that our customers need. And so as we look at that trajectory, we feel comfortable with the pace of growth that we're at right now, and we hope to keep improving every month going into next year and beyond, to eventually get NDR to be above 100%.
So I'd say we're just making sure that folks understand that we feel pretty comfortable with the baseline growth that we're delivering.
Our next question comes from the line of James Fish from Piper Sandler.
Paddy, for you. You guys are seeing adoption -- are you seeing adoption of the GPU droplet with more of the builders and scalers or more net new customers? And how should we think about the mix between on demand versus your contracts and what you guys are seeing around supply availability with GPUs?
Yes, great. Thank you, Jim, for the question. So in terms of the adoption, we are seeing adoption across the board. A lot of new customers, which we absolutely love, are kicking the tires and, as I explained on the call, building real-world applications on our GPU infrastructure, both droplets as well as our higher-level managed services.
I would say, as between on-demand versus contract, we see more contracts when the customer is deploying live workloads, whether it is training or inferencing. Sometimes these contracts are fairly short term, but some are longer term. And on-demand is typically for experimentation, which is what we would have guessed when we started this journey. But that's where things are.
And from an on-demand point of view, we're also seeing very nice uptake and interest in our Gen AI platform. So companies that don't have a deep bench of AI/ML skill sets have a very easy time using our Gen AI platform, standing something up very quickly, often in a matter of minutes, just to prove to themselves that there's value in integrating Gen AI into their platform. So that's what we are seeing broadly from an adoption point of view.
And from a supply chain standpoint -- yes, on the supply chain, we don't see the same kind of headwinds that we had seen coming into the year. We've got orders out for the next generations of the technology, the H-200 is coming, and we're keeping an eye on Blackwell and the timing of that.
It's certainly not so easy that you can get it in a week or 2 from ordering, but I'd say the supply chain is open enough that we've been able to get the equipment in the timeframes we need it. And again, with the build-out of our Atlanta data center coming online around the beginning of next year, we're in good shape from a logistics and scheduling standpoint.
Got it. And then, Matt, for you, circling back to the 97% expansion rate. The AI side of things turned organic this quarter, and on my math, it's probably adding about 1 to 2 points to NRR. It looks like net new ARR for AI was up around $10 million. So what's going on with that core business, specifically what you're starting to mention around Cloudways, where obviously the price increase is lapping? Why is that business weaker than what you guys were anticipating? And how should we think about the mix of Cloudways hosting on DigitalOcean versus other cloud platforms?
That's a great question and good clarification. The AI products are not in NDR. So making sure that everybody understands that. The revenue from the AI products are not in net dollar retention. And that's -- it's clear in the definitions that we have.
A lot of the AI revenue, if you think about it, is project-based, right? Someone's training something, somebody's coming in and experimenting. It's not yet at the point where people are running large-scale inference workloads, where you could say, "Oh, well, the revenue that you get from that inference workload should be bigger next year than it was this year because they have a lot more customers." If someone comes in and trains a model for a month or 2, then turns it off and goes and focuses on inferencing, the revenue is going to be lumpy.
And so at this point -- and we could reevaluate this going forward -- AI is not reflected in NDR, so it does not contribute to the improvement in NDR. What we have seen is steady improvement in the core cloud business, which we said is tracking above the reported NDR. Cloudways, which historically -- literally until we lapped the price increase -- was always a positive contribution to NDR, has been a headwind to NDR since April and will continue to be, probably until next April, because of the vagaries of the trailing NDR calculation.
But for both Cloudways, the managed hosting business, and the core DO business, we expect to be able to get those back above 100, and we're certainly working aggressively to accomplish that. We can't tell you exactly when that's going to happen, but we're very encouraged by the green shoots that we're seeing in both businesses on that improvement in NDR.
Our next question comes from the line of Gabriela Borges from Goldman Sachs.
Matt, I wanted to follow up on some of your comments for 2025, and more specifically on how we should think about the seasonality of the business, given some of the moving pieces we've had this year versus last. So any comments on seasonality? I'm noticing that the size of the beat this quarter was about 1% versus 2% last quarter. Any nuances we should be aware of in terms of why the size of the beat was smaller this quarter?
I don't think there's any seasonality in the business that would reflect that. I think, again, we've been very focused on the full year and providing guidance that's appropriate and reflective of that, and that matters more to us than the size of the quarterly beats, right? So we're more focused from an annual standpoint.
But I think that as we look at the business going into next year, again, going back to my earlier comments, we're very encouraged by the growth that we're seeing and improving growth versus what we thought with the self-serve funnel and feel confident about that.
The managed hosting business is coming back from, again, some difficult comps. The AI business is likely ahead of where we had expected. And the last lever for us, which would give us the confidence to increase our outlook on revenue, is that NDR just needs to come up -- and it's steadily, if stubbornly, moving up.
And so I don't think there's anything seasonal that would suggest we would be more or less on an individual quarter.
Got it. Okay. And then the follow-up is for Paddy. Given the pace of change that we're seeing in the AI services market, maybe you could walk us through what are 1 or 2 of the areas where you feel like you've learned the most over the last 3 months as it relates to your AI services strategy, particularly around your LLM-as-a-service offerings and the platform offerings. How do you think you can differentiate versus something like a SageMaker or a Bedrock?
Thank you, Gabriela. Great question. So in terms of what we have learned over the last 90 days, we have learned a lot. As you can see, we have also shipped a lot, and in preparation for that, I think we have learned quite a bit at all 3 layers of our platform.
I would say, for me, personally, the biggest learning has been that our customers, which are typically companies that don't have a tremendous bench of deep machine learning, data scientists or data engineering skill sets, they look at the AI platform almost in an inverted fashion.
What I mean by that is everyone -- us included, the whole market -- looks at it infrastructure first, then platform, and then finally applications. Our customers actually look at it top down. They ask: okay, what applications or agents can I leverage today from Gen AI that make my app more productive, save my customers money, or deliver some innovation that was not possible so far? That realization -- that we need to innovate more rapidly on the platform and application layers -- is why we accelerated some of our Gen AI platform capabilities, and we have already seen a customer push that into production, which is amazing.
And part 2 of your question is what makes our Gen AI platform stand out against something like a SageMaker or Bedrock. As you know, we have very deep expertise in both SageMaker and Bedrock at DigitalOcean today. Some of the technologies you mentioned are phenomenal. They're very broad and very powerful if you have a broad set of skills available to take all of that and build something fantastic for a very complex use case.
For our customers -- and the customer that I talked about during the prepared remarks specifically tested a variety of different Gen AI platforms -- they picked us primarily because of how easy it was for them to get started: to inject their own custom data, to build the RAG pipeline, to create a knowledge base, and finally to create a chatbot where they could project exactly how much it would cost and develop a business model that would be friendly to their customers. All of these things individually are fairly complex.
But when you add up these different steps to build a productionized application, it just balloons in complexity. We have tried to measure every click it takes and simplify the journey for our customers. I think that's how we established ourselves as a credible cloud provider, and that's what we are doing to establish and differentiate ourselves in Gen AI.
And we should also not forget that there's a lot of differentiation we are pushing even at the agent layer. As I explained, we just came out with our first agent. We are working with customers in early availability mode so we can learn and innovate on that faster. The combination of the platform and applications gives us the ability to make things that are super scalable, but at the same time an order of magnitude simpler to use compared to the other alternative platforms that are available.
Our next question comes from the line of Jeff Hickey from UBS.
The first 1 I wanted to ask: it's very helpful that you detailed that AI is not included in the NDR metric. But with some of the existing AI customers you've had for a few quarters that maybe do have some workloads already in production, do you have any sense of how they're expanding their spend over time, maybe even just on a quarterly basis? Or do you typically see those customers launch a workload and then keep that spend at sort of a stable level from there?
We've seen good traction with a number of our early AI customers that have come in and experimented on the platform. They may have started with a small cluster and then expanded their use of the platform. So if the question is, when we land customers, do we see them grow, or do we see a big rotation of customers in and out -- we actually see fairly healthy expansion from customers once they come in.
But again, back to my earlier comment, it's: okay, I'm training a model, I need 8 nodes; now I'm going to do something else, I need more. It's not the same dynamic, because they're still evaluating, still going through the testing phases. But we've seen very good traction growing the initial customers that we've had on the AI platform.
And, Matt, 1 thing I will add to that: it's interesting to note that our AI customers are also very similar to our core cloud customers, in the sense that most of them, if not all of them, are ISVs -- independent software vendors -- or digital-native application providers. They are building solutions on our AI platforms, whether it is infrastructure or Gen AI, to create software solutions for their customers.
So as they grow and expand, they will -- they are expanding their footprint, to Matt's point, on our platform. So that's a very interesting thing for us to notice versus a customer that is coming to build a solution for their internal use.
Got it. That's really helpful. And then 1 quick follow-up. You mentioned earlier that supply has gotten better relative to the beginning of the year for AI investments. Just curious, with the October 1 launch of H-100 instances being broadly available, are you supply constrained at all right now as we're in the fourth quarter? Or are you able to meet all the demand you currently have?
We've ordered enough. And we talked about this on the last earnings call: because we have the ability to see the demand and plan out the capacity, we've been able to get enough capacity to meet the demand as we've gone, which is a very good sign because we don't have those supply constraints.
So again, because we're not spending hundreds of millions of dollars on GPUs, we can get the quantities that we need to meet our requirements. And when you have something like the GPU droplet, which is more on-demand and less committed contract, you have to see what the utilization is and then plan your purchases based on that capacity utilization.
And we've been able to manage that effectively. So it hasn't been a drag or a constraint to us.
Our next question comes from the line of Josh Baer from Morgan Stanley.
One for Paddy. I guess just thinking about the 42 new product features -- more than in the prior period -- and, I think, calling out some of them as features that hyperscaler customers are generally looking for, moving customers to committed contracts, even migrating workloads from hyperscalers to DO. In the past, the story was more about DigitalOcean's simplicity of platform, better support and lower pricing, and maybe a little bit less about getting into the competitive dynamic with the hyperscalers.
Just wondering, is the right takeaway that there is a little bit of a shift in focus, either upmarket or a little bit of expansion outside the simplest start-ups and SMBs, just to be a little bit more competitive in the market? Is there a strategy shift there?
Yes. Thank you, Josh. Great question, as always. The shift is essentially following our customers' lead, honestly. As I made a point of during the prepared statements, we are not abandoning or taking our eye off the DigitalOcean developer community. We continue to nurture it. In fact, we are doing a lot more with the developer ecosystem this year compared to the recent past.
But at the same time, we do recognize that we have 17,000 Scalers+ customers who on average spend more than $25,000 with us. That's big, and that's 58% of our revenue. And if you add Scalers, that's almost 88% of our revenue, and these cohorts are growing much faster than our blended average growth rate. So we have a unique opportunity to follow their lead and make sure that we are delivering capabilities that will enable them to run or expand their footprint on DigitalOcean.
We are increasingly in a multi-cloud world, even for smaller customers like the ones we target. And there is an opportunity for us to keep expanding our share of wallet with these companies. The examples that I shared are just a starting point for what we believe is our fair share of this enormous market. And if we keep doing what we are doing now, which is adding compelling feature sets that enable our Scaler customers to expand their footprint with us, I think there's a lot of value to be created for our customers on our platform.
Very clear. If I could follow up with 1 for Matt. On some of the factors you called out -- the managed hosting tough comp, the pricing increases, the AI influx of revenue, even some of the M&A -- I get how those could be impacting the net dollar retention rate or the year-over-year growth. But if I'm just looking at quarter-over-quarter net new ARR, you added $32 million last quarter and $17 million this quarter. Anything to call out as far as that difference on a quarter-over-quarter basis?
Yes, that's a good point, Josh. The big difference was availability: we brought on a ton of AI capacity in Q2, which we had pent-up demand for. So we got a material bump in ARR last quarter.
If you look at it, we were at $17 million or $18 million in the quarter before that, then we jumped to $30 million, and now we're back to $17 million. I'd say last quarter was more of an anomaly than this quarter. Clearly, we're looking to add more incremental ARR going forward, but most of that change was the result of a surge in AI capacity last quarter.
Our next question comes from the line of Pinjalim Bora from JPMorgan.
Great. Just 1 question for me. The baseline growth outlook that you shared entering 2025 seems pretty positive. It calls for an acceleration from the exit growth rate this year, so I want to understand that a little bit more. Are you seeing signals, based on customer conversations, that projects will accelerate next year in the core business? Does that assume 100% NDR as you go into Q1? And how should we think about AI?
Pinja, I didn't hear the last part of that question. I heard the first part, but let me answer and then you can maybe come back with the AI question.
We're seeing a lot of green shoots around, as you said, all the product traction that we're getting, and some of our larger customers even being willing to commit to long-term contracts and commitment contracts, which isn't something that the company has done extensively in the past.
And the core NDR is improving steadily. We're not assuming that it gets to 100 by Q1; that's not implicit in the comments that we made regarding next year. I mean, we're going to work aggressively to get it to 100, but we can deliver the growth rates that we talked about because we're effectively delivering that now, right, at 12% growth with an NDR that's only 97. And we expect both the managed hosting NDR and the core cloud NDR to improve as we head into next year.
And we'll continue to get very positive growth contributions from our AI capabilities. We said earlier this year that we thought we'd get 3 points of overall growth from AI, and we'll end a little bit ahead of that this year. So that's also positive and encouraging as we think about what the baseline growth is heading into next year.
Understood. One quick follow-up. The multiyear commitments is definitely interesting. Are you leaning in on any way to drive those commitments? Is that largely coming from customers? Are you putting in processes to kind of enable those discussions?
Yes. I can take that. So we are -- at this point, Pinjalim, we're just letting it happen organically. So we don't have any pronounced, established go-to-market motion. We're not pushing it on our customers. We're just letting it organically happen.
The most important thing for us is to learn: learn the patterns, learn what kind of technologies we need to build, learn the migration process itself, and things like that. Going into next year, we will, of course, look into packaging it a little bit, productizing it, and also expanding our third-party ecosystem that can help orchestrate some of these things. So there's a lot of work to be done to scale it.
But right now, we are focused on nailing it and understanding exactly what it takes to be successful.
And that is all the time we have for questions. This concludes today's conference call. Thank you for your participation. You may now disconnect.