Earnings Call Analysis
Summary
Q1-2024
Applied Digital confirmed its expectations for FY 2024, projecting revenues of $385 million to $405 million and adjusted EBITDA of $195 million to $205 million. The company is seeing robust demand and is targeting contracting roughly 70% of capacity on 7- to 10-year terms to kick off data center builds in North Dakota and Utah. Site construction, financed at roughly 80% loan-to-cost, is adapting to the high-density designs necessary for AI workloads, incorporating liquid cooling to support up to 150 kW per rack. The company has also redesigned its Ellendale facility for increased density and networking efficiency, with the new design centered on liquid cooling infrastructure to meet higher power demands.
Good morning, and welcome to Applied Digital's Fiscal First Quarter 2024 Conference Call. My name is Robin, I'll be your operator for today.
Before this call, Applied Digital issued its financial results for the fiscal first quarter ended August 31, 2023, in a press release, a copy of which will be furnished in a report on Form 8-K filed with the SEC and will be available in the Investor Relations section of the company's website.
Joining us on today's call are Applied Digital's Chairman and CEO, Wes Cummins; and CFO, David Rench. Following their remarks, we will open the call for questions. Before we begin, Alex Kovtun from Gateway Group will make a brief introductory statement. Mr. Kovtun, please proceed.
Great. Thank you, operator. Good morning, everyone, and welcome to Applied Digital's Fiscal First Quarter 2024 Conference Call. Before management begins their formal remarks, we would like to remind everyone that some statements we're making today may be considered forward-looking statements under securities laws and involve a number of risks and uncertainties. As a result, we caution that there are a number of factors, many of which are beyond our control, which could cause actual results and events to differ materially from those described in the forward-looking statements.
For more detailed risks, uncertainties and assumptions relating to our forward-looking statements, please see the disclosures in our earnings release and public filings made with the Securities and Exchange Commission. We disclaim any obligation or undertaking to update forward-looking statements to reflect circumstances or events that occur after the date the forward-looking statements are made, except as required by law.
We will also discuss non-GAAP financial metrics and encourage you to read our disclosures and the reconciliation tables to applicable GAAP measures in our earnings release carefully as you consider these metrics. We refer you to our filings with the Securities and Exchange Commission for detailed disclosures and descriptions of our business as well as uncertainties and other variable circumstances, including, but not limited to, risks and uncertainties identified under the caption Risk Factors in our quarterly report on Form 10-Q. You may obtain Applied Digital's Securities and Exchange Commission filings for free by visiting the SEC website at www.sec.gov.
I would also like to remind everyone that this call is being recorded and will be made available for replay via a link available in the Investor Relations section of Applied Digital's website.
Now I will turn the call over to Applied Digital's Chairman and CEO, Wes Cummins. Wes?
Thanks, Alex, and good morning, everyone. Thank you for joining our fiscal first quarter 2024 conference call. I want to start by thanking our employees for their ongoing hard work and service in advancing our mission of providing digital infrastructure solutions to the rapidly growing high-performance computing industry.
Before turning the call over to our CFO, David Rench, for a detailed review of our financial results, I'd like to briefly discuss some recent developments across our business.
Let's start with our existing blockchain hosting operations. We aim to have all 3 of our blockchain hosting facilities fully online shortly with high reliability and performance for our customers. Our 100-megawatt Jamestown facility continues to perform as expected and operated at full capacity with consistent uptime throughout the quarter. This marks the fourth consecutive quarter in which the Jamestown facility has operated at full capacity.
Our 180-megawatt Ellendale facility in North Dakota was fully energized and became fully operational during the first quarter of fiscal year 2024 and contributed to our results this quarter. The facility is fully online and operating with consistent uptime during the second quarter, bringing our total hosting capacity to 280 megawatts across our North Dakota facilities, all of which is contracted out to customers on multiyear terms.
In September, we entered into a facility extension agreement with Oncor for the transmission and metering of power to our Garden City, Texas facility. With this in place, metering and telemetry equipment owned by the power provider will be installed and once complete, the site will be energized. This installation is expected to be completed by October 23rd. Once our Garden City facility becomes fully energized, we will have approximately 500 megawatts of hosting capacity across our 3 facilities.
We expect our 3 sites to produce around $300 million in revenue and $100 million in EBITDA on an annualized basis. Having all 3 facilities operational with high uptime will provide us with consistent cash flow, supporting our capital requirements for the build-out of our HPC data centers and the purchase of GPUs to service our AI cloud customers.
Let's move to our AI cloud services, which launched this calendar year to provide accelerated computing power for AI applications. Our AI cloud service continues to ramp up as we make further progress in supporting our existing contracts and pursue additional opportunities in our pipeline. In July, we activated the first cluster of GPUs for Character.AI, and since then we have made meaningful progress, receiving our second cluster of GPUs in September with the expectation of receiving additional GPUs this month.
Since our last earnings announcement, we have added two additional AI cloud customers. Both customers have an established user base and are growing quickly. These customer agreements have a similar structure to our first two. They also include significant prepayments to fund a large portion of the capital requirements for purchasing the GPUs.
This brings our total annual contract value of AI Cloud services contracts at full capacity to approximately $378 million. In addition to substantial prepayments we received from customers, we are using vendor financing and actively exploring other tailored financing options to support the capital requirements for the 34,000 GPUs we have on order to support our cloud service. We remain on track for delivery of the majority of these GPUs by April of next year.
Our established partnerships with leading OEMs like Supermicro, Hewlett Packard Enterprise and Dell, combined with our recent elite partner status in NVIDIA's partner network, provide us with visibility into the delivery timeline, ensuring timely receipt of these GPUs. As previously mentioned, we will initially provide this service from our 9-megawatt HPC Jamestown facility along with third-party colocation space as we continue to execute on the development of our dedicated next-gen HPC data centers.
The pipeline of opportunities for our AI cloud service business remains robust. We look forward to capitalizing further on this opportunity and providing further updates on our newly signed customers going forward.
Lastly, let me provide a quick update on our purpose-built HPC data centers. We have 300 megawatts of capacity in development and have begun initial groundwork for our Ellendale facility. In order to proceed with the construction of these facilities and obtain the necessary financing, we are in the process of securing a credit-rated anchor tenant. We have been actively engaged in ongoing discussions with several potential anchor tenants for our Ellendale and Utah facilities.
Similar to our new customers for our AI cloud service, we will provide more information once available. We aim to secure an anchor tenant customer for each facility and have these facilities fully energized and operational within the next 24 months. With that, I now turn the call over to our CFO, David Rench, to walk you through our financials and provide an update on guidance before providing my closing remarks. David?
Thanks, Wes, and good morning, everyone. Revenues for the fiscal first quarter of 2024 were $36.3 million compared to $6.9 million for the fiscal first quarter of 2023. The increase in hosting revenues was driven by an increase in online capacity due to the Ellendale, North Dakota site being operational and by revenue from the company's first AI cloud service contract, which began during the three months ended August 31, 2023.
Cost of revenues for the fiscal first quarter of 2024 was $24.4 million compared to $6.1 million for the fiscal first quarter of 2023. The increase in costs was attributable to higher energy costs incurred to generate hosting revenues, depreciation and amortization expense, and personnel expenses for employees working directly on our Jamestown and Ellendale hosting facilities.
Operating expenses for the fiscal first quarter of 2024 were $17.1 million compared to $5 million in the prior year comparable period. The increase was primarily due to personnel-related costs as a result of the increase in head count as well as an increase in depreciation. Net loss for the fiscal first quarter of 2024 was $9.6 million or a loss of $0.10 per basic and diluted share based on a weighted average share count during the quarter of approximately 100.5 million.
This compares to a net loss of $4.7 million or a loss of $0.05 per basic and diluted share in the fiscal first quarter of 2023 based on a weighted average share count during the quarter of approximately 93.1 million.
Adjusted net income, a non-GAAP measure, for the fiscal first quarter of 2024 was $0.1 million, or adjusted net income per basic and diluted share of less than $0.01, based on a weighted average share count during the quarter of approximately 100.5 million. This compares to an adjusted net loss, a non-GAAP measure, of $3.4 million, or a loss of $0.04 per basic and diluted share, for the fiscal first quarter of 2023, based on a weighted average share count during the quarter of approximately 93.1 million.
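As a quick sanity check, here is a minimal sketch of the per-share arithmetic implied by the amounts and weighted average share counts cited above; the inputs are the figures stated on the call, and the only assumption is rounding to the nearest cent.

```python
# Sanity check of the per-share figures from the amounts and share counts stated above.
# Rounding to the nearest cent reproduces the reported per-share values.

figures = [
    ("Q1 FY2024 net loss", -9.6e6, 100.5e6),            # ~ $(0.10) per share
    ("Q1 FY2023 net loss", -4.7e6, 93.1e6),             # ~ $(0.05) per share
    ("Q1 FY2024 adjusted net income", 0.1e6, 100.5e6),  # less than $0.01 per share
    ("Q1 FY2023 adjusted net loss", -3.4e6, 93.1e6),    # ~ $(0.04) per share
]

for label, amount, weighted_avg_shares in figures:
    per_share = amount / weighted_avg_shares
    print(f"{label}: {per_share:+.2f} per basic and diluted share")
```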
Adjusted EBITDA, a non-GAAP measure for the fiscal first quarter of 2024 was $10 million compared to an adjusted EBITDA loss for the fiscal first quarter of 2023 of $1.9 million.
Lastly, on the balance sheet, we ended the fiscal first quarter with $31.2 million in cash, cash equivalents and restricted cash and $44 million in debt. During the fiscal first quarter of 2024, we received $39.5 million in customer payments due to the structure of our commercial arrangements with our customers that incorporate upfront deposits and prepayments.
In certain contracts, the prepayments are amortized back to the customer over the first year of their contract with no impact on revenue recognition, but the timing of the cash flow, with upfront cash to us, is a major benefit for the company in that it helps fund our CapEx needs as we build out our data centers.
Since the quarter closed, we have received an additional $15 million in customer prepayments and are expecting an additional $23 million this week.
Now turning to guidance for the full year fiscal 2024. We are reaffirming our expectations for revenue in the range of $385 million to $405 million and adjusted EBITDA in the range of $195 million to $205 million. Now I'll turn the call over to Wes for closing remarks.
Thank you, David. As we look ahead, we remain confident in our competitive advantages and differentiated capabilities to meet the sophisticated and demanding requirements for businesses and enterprises to run AI workloads and other emerging HPC applications. I remain optimistic about the future of Applied Digital as we solidify our leadership in next-generation digital infrastructure for both blockchain and non-blockchain HPC use cases during this era of digital transformation.
I'd like to thank all of our team members for their dedication in making Applied what it is today and our shareholders for trusting us and our mission and execution. We're now happy to take questions. Operator?
[Operator Instructions] Thank you. And our first question comes from the line of Rob Brown with Lake Street Capital.
Good morning. First question on your efforts to secure anchor customers in the AI business. I know you can't give too much detail, but maybe give a sense of the sizing of the center. Do they need to fill out a whole center before you kick off? And how does that scope out to make the decision to kick off a data center build?
Yes. I think we've talked about this in the past, and it remains the same: we think we need to contract roughly 70% of the capacity. It doesn't have to be one customer, but we think we need to contract that on reasonable, say, 7- to 10-year contracts with renewals to kick off the construction, and we're in the process of securing that.
But Rob, the way I think about these is kicking off roughly 100 megawatts in North Dakota shortly and then followed up with 100 in Utah and then come back kind of mid next year for the second 100 in North Dakota.
Okay. Great. And then on the ramp of the AI business, it depends on getting the GPUs delivered. How do you see that ramping of the GPUs you've ordered? How do you see those coming in, in terms of the ability to ramp up that business?
Yes. So we -- I think at this point, we're getting pretty good visibility on delivery schedules. It's gotten better since our last call. And as I mentioned in the prepared remarks that we received our second cluster in September. Expect to receive more this month and really expect to start to receive large volumes in November, December and January.
Our next question is from the line of George Sutton with Craig Hallum.
This is Adam on for George. Wes, starting with Garden City, could you provide a little more detail on the pace of energization once Oncor has installed their equipment?
Yes. Sure. So the date that we mentioned in the prepared remarks and in the press release is the expectation that everything will be ready to go for energization. We should start energizing that day. And then the ramp should be faster than we've seen at our North Dakota facilities where we were continuing to finish buildings and energize them.
It will still take weeks to energize, but it won't be the months that we've seen in both Jamestown and Ellendale, because construction is complete and the miners are racked and ready to turn on. And so it should go much faster. So at some point in mid- to late November it should be fully online from that late October start on the 23rd.
Great, one more follow-up for me. With respect to current construction efforts, are there any milestones that you need to hit before you face more adverse weather?
Yes. In North Dakota, we started on the groundwork there, and it's the same thing we did last year, which is it will be a little bit of a rush to get foundations poured so we can enclose the facility and work through the winter there. And so that's what we're doing right now.
Our next question is from the line of John Todaro with Needham & Company.
One here on the cloud compute business, just trying to understand -- first off, congrats on adding a few more contracts. But what is the capacity out there? Are there more contracts you can add now? Or are you looking to get those HPC sites on and anchor tenants there? Just wondering how much capacity there is and whether we can expect any more contracts to come online.
Yes. We have some more capacity, not a lot, until we start to bring our own facilities online. So John, if we walk through the math on the cloud compute side: roughly every 1,000 GPUs generates approximately $1.5 million of revenue per month. And we've talked before about getting to 26,000 online by April. And so that gets you to, call it, a $460 million-plus annual revenue business. And then from April on, we'll be looking to put deployments in our own facilities. So through that ramp to April, we have our Jamestown facility, and we've secured third-party colo to support that ramp, and then we're looking to put further deployments in our own facilities.
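To make that back-of-the-envelope math concrete, here is a minimal sketch of the annualized run rate implied by the figure quoted above, roughly $1.5 million of monthly revenue per 1,000 deployed GPUs; the linear scaling is an illustrative assumption, not additional guidance.

```python
# Annualized revenue implied by the rule of thumb quoted on the call:
# roughly $1.5M of revenue per month for every 1,000 deployed GPUs.
# Assumption (illustrative): revenue scales linearly with GPU count.

REVENUE_PER_1000_GPUS_PER_MONTH = 1.5e6  # dollars, as stated on the call

def implied_annual_revenue(gpu_count: int) -> float:
    """Implied annual revenue in dollars for a given number of deployed GPUs."""
    return (gpu_count / 1000) * REVENUE_PER_1000_GPUS_PER_MONTH * 12

if __name__ == "__main__":
    print(f"26,000 GPUs -> ~${implied_annual_revenue(26_000) / 1e6:,.0f}M per year")
    # Prints ~$468M per year, consistent with the "$460 million plus" figure cited.
```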
Got it. Okay. Great. That's helpful. And then I guess just on the timeline, too. So that's when you guys increased from 26,000 to 34,000 GPUs -- just any color from the suppliers that it seems certainly doable? Or what is the difficulty in getting that? I know you guys said you're on track, but any more color there would be helpful.
Yes. So those are what we expect to deploy post April of next year. And we've added those in because of, again, the pipeline of demand that we're seeing. But you should think about those being added after April. So the 26,000 that we talked about on the last call are being added through April, and then the additional 8,000 are being added after that. I would continue to think about it as 8,000 right now, but we'll see where the demand comes through. So if we need to expand that, we should have our own data center capacity coming online to support it.
Next question comes from the line of Lucas Pipes with B. Riley Securities.
My first question is on guidance. With the delays at Garden City, should we think of more of the EBITDA and revenue contribution as having shifted to AI cloud? Thank you very much for walking me through those changes.
Yes. Thanks, Lucas, for the question. So given what I just said about the AI cloud, you can walk through how that steps up throughout the year as GPUs are deployed, but it will be a fairly steep ramp, especially in our last 2 quarters as these come online. We got a delivery in September, and we'll get additional deliveries in October.
And then I think the deliveries really ramp for us in November, December and somewhat into January. So as you see those come online, you'll see a pretty steep revenue ramp on the cloud business through the remainder of the year. That should make up for the delay that we've seen in Garden City.
On the HPC side, can you walk us through the capital intensity, with much of the planning completed from what I understand? And in terms of financing, what are your current targets for debt to equity? Is it project financing that would cover the majority of the capital needs? I would appreciate your thoughts on that.
Yes, sure. So on the data center business, and we'll go through this in great detail on Thursday, I think a lot of people have seen that we redesigned the data center with the knowledge we have now. What we built in Jamestown, which was the initial build that we talked about, was single-level and horizontal, because that's the cheapest way to do it. And we have plenty of land in North Dakota, so the approach was just to build as far as you want horizontally.
But now that we know how these workloads work and the necessity of them being much higher density and closer to the network core, we redesigned our structure to a 3-story structure with a network core that runs through the middle, so that you can get much more density, specifically for training and then somewhat for inference. With that redesign, whereas we talked before about roughly $4 million to $4.5 million, you should now be thinking more like $6 million per megawatt to build those facilities.
We think, with the work that we have done, that we can get roughly an 80% loan-to-cost for construction. This looks much more like traditional data centers, so once we secure that tenant or tenants that are credit-rated, we can put the construction financing in place. And then there's an equity component, which we call site-level equity, and you can get financing partners on the equity component of that as well.
We've spent a lot of time talking to partners there. In the data center industry it's usually called equity financing; again, it's at a site level, but I think of it more as what we would see in the capital markets as mezzanine debt. So we're working through all those pieces of the capital structure to finalize and start the initial build in North Dakota. I guess technically, Lucas, it's not our initial build; it's the initial build of this design.
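For intuition, here is a minimal sketch of what those figures imply for a single site; the 100-megawatt size is illustrative, borrowed from the earlier comment about kicking off roughly 100 megawatts in North Dakota, and the split is simply $6 million per megawatt at 80% loan-to-cost with the remainder as site-level equity, not company guidance.

```python
# Illustrative capital-structure arithmetic from the figures discussed above:
# roughly $6M per megawatt to build, financed at roughly 80% loan-to-cost,
# with the remaining ~20% as the site-level equity slice.
# The 100 MW site size is an assumed example, not guidance.

def site_capital_split(megawatts: float, cost_per_mw: float = 6e6, loan_to_cost: float = 0.80):
    """Return (total_cost, construction_debt, site_level_equity) in dollars."""
    total_cost = megawatts * cost_per_mw
    construction_debt = total_cost * loan_to_cost
    site_level_equity = total_cost - construction_debt
    return total_cost, construction_debt, site_level_equity

if __name__ == "__main__":
    total, debt, equity = site_capital_split(100)
    print(f"100 MW: ~${total/1e6:.0f}M total, ~${debt/1e6:.0f}M construction debt, "
          f"~${equity/1e6:.0f}M site-level equity")
    # 100 MW -> ~$600M total, ~$480M construction debt, ~$120M equity funded at the site level.
```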
Understood. I'll try to squeeze one more in. In terms of the 34,000 GPUs today versus 26,000 GPUs previously, would those be related to the additional two customers you announced or -- are there other moving pieces there?
There are other moving pieces. It's really more related to the demand we continue to see. We haven't seen demand in this area slow down, and I think if you look at the capital raising in the industry, that hasn't slowed down either. There was a large deal announced with Anthropic, I think that was last week, with AWS. We're still seeing a robust funding environment for the companies that will be customers for us, and we see good demand, really strong demand, in our pipeline. And so it's really just a decision based on what we're seeing from a demand perspective.
Okay. Very helpful. I appreciate it. And to you and the team, best of luck.
Thanks, Lucas.
Our next question comes from the line of Darren Aftahi with ROTH MKM.
Wes, could you speak a little bit more to the cadence of the GPU orders? I'm just curious if you can talk month-to-month about September and October. Are those numbers getting bigger relative to the initial 1,000-unit order? And then I guess, why has the cadence kind of hockey-sticked up into November, December and January, per your comments? Is that just a function of backlog? And I guess, what's your confidence that that number isn't pushed out further into '24?
Yes. So it's a good question, Darren. What we saw in September was our second 1,024-GPU cluster being delivered. In October, we could see that doubled, and then it gets significantly bigger in November and December and into January. And it's just our order book that we put in back in the May time frame starting to be delivered; you get small pieces first. And then it's the schedule that we've been given as far as when deliveries will happen, really between now and the end of the calendar year.
So that's what gives us that confidence in the deliveries. Can things be pushed? I mean, it could always happen, I suppose, but maybe 3 weeks ago, or a little bit longer than that, we were given a firmer delivery schedule from our suppliers. So we feel pretty good about the deliveries for us. And I think it's just a matter of when we ordered those and when they're being shipped out, and the cadence of that ramping up over the next 3 months.
I don't know. I don't think it's for me to say whether there's better supply in the industry or not or if it's just related specifically to us that I don't know.
That's helpful. And then just one last one for me. On the anchor tenant for HPC, you have multiple tenants you're talking to? And I guess, in terms of slotting people in, how close are we on North Dakota vis-a-vis Utah? And then I assume your data center financing is probably going to be right behind that, based on your prior comments. So if you could just walk through it -- I'm just trying to understand anchor tenant demand relative to the capacity you have and then the longer-term plan.
Sure. So we're seeing a lot of interest. We've been having these conversations for a few months now. And then at the end of the first week of September, we finalized our design, and so we've been in what I would call a formal marketing process from that point until now. We'll move that into an LOI stage and into a contracting stage over the next few weeks or a month or so; that's the expectation. But that's how it's come together: we really started a formal process in, call it, mid-September, and we hope to conclude that in the coming weeks.
But the interest is high, and we'll go through this in more detail on Thursday. We've seen the demand for this style of data center with this type of density from both sides, because we've been out getting colocation space for ourselves on the cloud side and talking to potential customers on the data center side as we build our own. We're seeing a huge amount of demand, and really demand for power that is available and capacity that can come online over the next 24 or 36 months.
Next question is from the line of Mike Grondahl with Northland Securities.
A couple of questions. The first is on potential anchor tenants. Would you say the potential anchor that you're going to announce near term is still sort of wide open, meaning there are still multiple -- five, six, seven -- big potential anchors out there you're talking to? Or have one or two made their way down the funnel and you're just finalizing who it might be out of a very small group?
So Mike, I would say that we're right in the middle of the two scenarios that you described. The funnel has gotten smaller, but we're not right at the end yet. Does that make sense?
Got it. Yes. Just trying to understand. And then on the prepayments, you mentioned $39.5 million in the August quarter, and then I think a $15 million prepayment and a $23 million prepayment you expect this week. Can you say whether the $15 million and the $23 million are from customers three and four? Or do those relate back to customers one and two?
It's both for those prepayments. The ones that we talked about that we've received already and then the one we expect to receive this week.
Got it. Got it. And then lastly, the 1,024 GPUs in August that you put to work, did they all go to customer one?
So those came in September, Mike. And they are for customer one.
Our next question is from the line of Kevin Dede with H.C. Wainwright.
Maybe you could just help me understand a little bit better how you're thinking about AI cloud versus AI hosting and how that might figure into your calculus, your construction calculus.
In what way, Kevin?
Well, I guess what I'm wondering is, when you go look for your anchor tenants, are you looking at them purely from a cloud customer perspective? Or are you looking at some perhaps from a hosting perspective, where they're bringing their own GPUs?
Yes, yes. So the anchor tenants are absolutely a hosting business for us, where they will bring their own equipment. It's just a data center hosting business for us. When you think about anchor tenants for these data centers, we've talked about the idea that 70% goes to the colocation hosting-style business and then we keep 30% for our own cloud capacity.
Then you said you're comfortable with the colocation that you need to support the 26,000 to 34,000 GPUs you have coming in. How do you figure on moving them from their colocation to your own facilities once those are ready?
So we won't ever move those installations. The way that I think about it from a cloud service perspective is that a lot of that colocation is in what I would call traditional cloud regions. And as we ramp up capacity in North Dakota, it's really being built for very large training clusters. So I think about the training portion of the business that's being deployed in smaller training clusters in these cloud regions moving over to the infrastructure that gets put in place in North Dakota, and then using the smaller clusters that we're building out now for inferencing. The way I think this market splits is that there will be training, batch inference, and some inferencing done in these large facilities like we're building in North Dakota.
And then a lot of the inference portion of the market will be more in what I would call traditional cloud regions. And so it works well for our cloud business over time to have that type of architecture of just being spread into more cloud regions for the inferencing portion of the business, while a lot of the training will move into North Dakota.
Okay. The redesign that you did in September, Wes, for your new Ellendale facility: did you have to rethink latency? Or did you have to be more concerned about power backup? Or do you still think you can operate under the conditions that you built for in Jamestown?
So it's not about latency or necessarily power backup. It's a redesign specifically for density. It's designed to basically take a network core, run it through multiple floors, and put all of the GPUs around it. So the design for the new North Dakota facility is really built around this magic number of, I call it, 30 meters: how far a rack can be away from the network core and still be in the same cluster, on the same spine for networking.
Technically, Kevin, you can go out to 50 meters from the network core, but you have to use single-mode transceivers and optics instead of multimode, so it gets more expensive and a little more difficult. So it's really designed around how many racks and how many servers we can get within 30 meters of the network core. And that's why it goes through multiple levels in the building instead of a single level. That's the primary piece of the redesign.
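Here is a minimal sketch that just encodes the distance rule described above; the 30- and 50-meter thresholds and the multimode versus single-mode distinction are the figures Wes cites, while the function itself is purely illustrative and not a description of Applied Digital's actual design tooling.

```python
# Encodes the cabling rule described above: racks within roughly 30 m of the
# network core can stay on the same spine with cheaper multimode optics; out to
# roughly 50 m, single-mode transceivers are needed (more expensive, a little
# more difficult); beyond that a rack falls outside the cluster. Thresholds are
# the figures quoted on the call; the classification itself is illustrative.

def optics_for_rack(distance_from_core_m: float) -> str:
    if distance_from_core_m <= 30:
        return "multimode optics, same spine (lowest cost)"
    if distance_from_core_m <= 50:
        return "single-mode optics, same spine (higher cost)"
    return "outside the cluster (would need another spine)"

if __name__ == "__main__":
    for distance in (10, 35, 60):
        print(f"{distance} m from core -> {optics_for_rack(distance)}")
```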
And what are the heat implications, though, if you've got multiple racks stacked, right, and heat's tendency to rise?
Yes. So this is a liquid-cooled facility. That's the other part of the redesign. The original design was air-cooled. You can effectively do air cooling, in our opinion, up to roughly 50 kW per rack, but as you start to go above that, you really need to move to a liquid cooling solution. And so the new facility is designed for liquid cooling and will go to 150 kW per rack. It still has the floor space to do it at 45 kW, which is what we're doing in Jamestown, but this will be built for liquid cooling.
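To illustrate why that density matters, here is a rough rack-count sketch using the per-rack figures mentioned above; the 10-megawatt IT load is an assumed example and cooling and distribution overhead are ignored, so this only shows how density changes the rack and floor-space requirement, not an actual facility layout.

```python
# Rough rack-count arithmetic for the per-rack densities discussed above.
# Assumptions (illustrative): a fixed 10 MW of IT load and no allowance for
# cooling or power-distribution overhead, so only the density effect is shown.

def racks_supported(it_load_mw: float, kw_per_rack: float) -> int:
    """Number of racks a given IT load can feed at a fixed per-rack density."""
    return int(it_load_mw * 1000 // kw_per_rack)

if __name__ == "__main__":
    for density_kw in (45, 50, 150):  # per-rack figures mentioned on the call
        print(f"{density_kw} kW/rack -> ~{racks_supported(10, density_kw)} racks per 10 MW of IT load")
    # Moving from ~50 kW to 150 kW per rack cuts the rack (and floor-space)
    # requirement by roughly two-thirds for the same IT load.
```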
[Operator Instructions] At this time, this concludes our question-and-answer session. I'd now like to turn the call back over to Wes Cummins.
Thank you, operator, and thanks, everyone, for joining our call. I look forward to speaking with you on Thursday at our Investor Day, which will be held in Midtown Manhattan. And again, thanks to all of our employees for the efforts they put in the last quarter and this quarter to date. I look forward to speaking with you on our next quarterly call.
Thank you for joining us today for Applied Digital's conference call. You may now disconnect.