Future of Power Podcast Episode 2: 
Unblocking AI’s Growth with Fault Managed Power

Transcript:

Steve Eaves (00:09.464)
Welcome to the Future of Power podcast, where we explore how fault managed power is revolutionizing energy distribution and driving a smarter, safer, and more sustainable future. As the inventors of fault managed power, VoltServer is at the forefront of this transformation. With FMP now part of the National Electrical Code, this breakthrough technology is entering the mainstream, unlocking new possibilities in power delivery.

In this podcast, we bring together industry experts, innovators, and thought leaders to discuss the challenges, opportunities, and game-changing potential of FMP. Join us as we have meaningful conversations, share insights, and explore what’s next in the future of power.

James Eaves
Okay, welcome back everyone to the Future of Power podcast. I'm James Eaves, Director of Building Electrification at VoltServer. And I'm joined again by Steve Eaves, our founder and CEO and the inventor of Fault Managed Power. How are you doing, Steve?

Great, thanks for hosting, James.

You’re welcome. I wasn’t supposed to, but I’m happy because I like doing this. How’s the weather in East Greenwich?

Steve Eaves (01:19.224)
Crappy. I feel like a wet cat in a freezer. Yeah, it's like 40 degrees out. May in New England.

Terrible. It’s a little warm here. It’s just like a little bit too warm. But you know, it’s it’s okay, because there’s a pool and I’m going to barbecue tonight. But you know, it could be two degrees lower, and that would be perfect.

Yeah, I know how you like to torture us when you’re in California.

Yeah. All right, let’s get going. Like I said, I wasn’t supposed to do this. I don’t know if you know Mike, but Rona is our chief revenue officer, and she’s sort of the expert in data centers. So I had to do a little prep, a little research to kind of catch up on this topic. But I found it really interesting.

and I was thinking, about a half an hour before we started, why is it so stimulating to learn about power distribution in data centers? So I'm just curious, it's a little test. I want to see if you guys find this a particularly interesting use case for power distribution, and why. And Mike, I'm going to introduce you in a second, I'm just holding the punch. What's interesting about it to you?

Steve Eaves (02:37.538)
Well, as a business, we chase watts of power. So from the business side, I just love chasing things that need a lot of power, because that's what we sell. But from a technology side, a data center is a really complicated facility. It involves power distribution, but also a lot of redundancy, resiliency,

control and monitoring. it’s a really nice business for fault managed power because fault managed power is a combination of power delivery and data. It’s a great fit for us.

How about you, Mike?

So I think it’s, you know, early on when we’re talking about applications, you know, for fault managed power, I think it’d be great if we can somehow adapt this to data centers, because all the load is like within the servers, right? At least the critical load. So, and what do UPSs do? They convert AC to DC, then back to AC and then transmit

AC and then eventually what did servers want DC cycle? Yeah, let’s just keep it at DC as much as we can. And not only that, but also the safety. There’s a lot of safety. You’re dealing with a lot of power, a lot of safety concerns you’re always dealing with as far as arc flash and keeping your fault currents low and whatnot. So the safety of fault managed power is very appealing for data centers. So it’s like a.

Mike Starego (04:19.288)
This is a great application. Steve and I have talked about it a lot. In fact, we set up dedicated meetings just to pore over how we'd do it. And then that evolved, as we'll get into, into the study we did last year. And now we're turning the page on that. No pun intended. Well, sort of. But we're turning the page on that because of the changes in the industry. Yeah.

Yeah, that’s it’s similar to me. I was I, of course, I came initially I came from an academic background as a researcher. And what I realized, like, and this is funny, because this is how me and Mike met. I was just telling Mike before we started the show that, you know, I was thinking about the first time we met was because of this. I was I was having this sense as like level of interest that I or I don’t know how explain it. But like this feeling I had was learning about it that reminded me of a time when we were

about five years ago when we were investigating power distribution in CEA, like in greenhouses and indoor farms. What I really liked about that was the interaction between plant science and electrical engineering; you're kind of designing and optimizing around both of those systems. But with data centers, the computer science is driving the design so hard that everyone,

all the professionals, has to coordinate really tightly, because everyone's getting pushed to their limit. The HVAC engineers, electrical engineers, architectural engineers, everyone's being challenged, because every aspect of the design is so constrained. And so you have to learn at least a little bit about

all those different disciplines to have a holistic understanding of what's going on with the power distribution. And I don't claim to have a deep understanding of all those areas, but you learn a little bit about each, and it's really exciting. Which is a good segue to introducing Mike, because Mike is an expert in this industry. So, Mike

James Eaves (06:35.79)
Starego is a principal engineer at Southland Industries. And for those who don't know Southland, Southland is one of the largest and most respected MEP firms in America, and they do a ton of data center design and optimization. A year ago today, Mike and Steve published a paper called A Comparison of Fault-Managed Power and Traditional AC Distribution in High-Performance Data Centers, and today I want to go back to that article and use it as a roadmap

for this discussion. But the first thing I want to do is just sort of level set. Something we talked about in the prep call was that one of the challenges with designing these data centers is that it's kind of a moving target, right? The technology is moving so quickly that it makes it challenging to design systems to

deliver power to these servers. Am I saying that correctly, and can you expand on it a bit?

Yeah, sure. And I wanted to bounce off your point about CEA, which is controlled environment agriculture, for those who don't know the acronym. That did get the juices flowing and the wheels turning in regards to the application of this for data centers, because the application in CEA is the lighting, which is a large amount of load in a small amount of space. And what do drivers want? DC,

ultimately, right? Because the driver is converting AC to DC, so that tunes you into the server application and how to do the same thing there. And some of those solutions are similar; say a year or several years ago, the solutions may have been similar because of the densities. But getting right into the meat of it, James, with the densities we're talking about: when we did the study last year, we were modeling around a 25 kW rack,

Mike Starego (08:38.572)
and at the time that was very reasonable. But now, with AI, those densities are getting higher and higher. So there are a lot of challenges using the traditional methods we use for data centers. Those methods have evolved over time as the rack densities increased, and the technology traditionally used for these rows of racks was keeping up, because we were

We’re still living within their means, if you will, like the busway we’re using above the racks was, you know, at one time, maybe 200, then involved at 400. Now we’re looking at multiple runs of, you know, 1200 potentially to reach these densities, which becomes more and more cumbersome. Now, you know, and Southland being primary primary, we’re designed build. we’re primarily a constructor. We’re on the engineering side. We’re we’re.

deep within the constructability of these data centers. Rather than just putting lines on paper, what's in the back of our minds is how we're actually going to construct it. And that's where we offer a better perspective on the solutions, because the physical fit and the safety of the installation, and the safety of the use for that matter, are paramount for us.

So for someone who hasn’t built one of these data centers, can you just kind of help us visualize the key components of delivering power inside the data hall to these server racks?

Sure, yeah. A lot of data centers, to your point, are large and have lots of power, as everybody knows. So they use a scheme called block redundant distribution. And what does that mean? Well, when you're buying, designing, and installing electrical equipment, you want to stay within the sizes that manufacturers can produce quickly. So you want to live within their rules as far as sizes.

Mike Starego (10:48.686)
Just to give you an example, you want to live in a switchgear size of maybe 3,000 to 4,000 amps, because it's something they're used to producing and they have all the UL listings for. So you want to stay within those units. Obviously you need much more than that. So how do you develop a large facility using those sizes? Well, you put together blocks, call them power blocks, and then you use them to create redundancy.

You can use them in two major fashions: either distributed redundancy or isolated redundancy. In distributed redundancy, I'll use the example of one data hall using five blocks of power. So five sets of, usually, transformers, generators, and switchgear, and that's your power block, and then you'll have a large UPS in there to pick up your critical load. You'll balance the load equally across five

power blocks, but the capacity of the data hall can be handled by four. So if you lose any piece, or one whole power block, the rest of them pick up the load just fine. That's called distributed redundant. Now in isolated redundant, you have an additional power block: your normal power blocks handle your load, but you have one that sits idle

until you have a failure of a component or an entire power block. That's the reserve block, or the catcher block as it's sometimes called, and it picks up the load that has failed. The equipment you need in that case, for the critical load anyway, is what's called static transfer switches, which transfer that load to the reserve power block in an instant, in a fraction of a cycle,

which is very fast. The servers don't perceive it as an outage at all because it happens so quickly. Now, that's a very expensive and technical piece of equipment, obviously, so that plays into the whole complexity argument. But that's a basic rundown. We could talk about many, many more details, but I hope that suffices as a good description of how you distribute power in an AC data hall.

James Eaves (13:13.166)
It does. But basically I want to ask: which one's better? Is there one? How do you decide which of those two designs you use?

Well, the distributed redundant distribution system is typically the more economical way to go, because those static switches are expensive, and because in isolated redundant you have that reserve block, which is the extra block. Now you may say, well, so does the distributed redundant, but there you don't have those static transfer switches, and you also don't have those extra feeders. In

distributed redundant, you’re utilizing everything that you’re providing. Maybe in the case I proposed, it was four fifths of a capacity. So that lessens the cost, but it also makes the math tricky. Meaning you have to have, you have to distribute a load in multiples of the blocks and it makes the math a little bit tricky so you don’t have overloads. So.

Using isolated redundant takes the burden off that math, if you will, so that you have a little more flexibility in how you lay out your data hall, as opposed to handcuffing it to multiples of four or five. Does that make sense, James? Yeah.
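To make the block math above concrete, here is a minimal sketch of the redundancy check being described, assuming the five-block distributed-redundant example (any four surviving blocks must carry the whole hall) and an isolated-redundant layout with one idle reserve block. The block sizes and hall load are illustrative numbers, not figures from the study.

    def distributed_redundant_ok(hall_load_kw, block_capacity_kw, num_blocks):
        # N+1 style: load is spread across all blocks, but any (num_blocks - 1)
        # surviving blocks must still be able to carry the entire hall.
        return hall_load_kw <= (num_blocks - 1) * block_capacity_kw

    def isolated_redundant_ok(hall_load_kw, block_capacity_kw, num_active_blocks):
        # Active blocks carry the load; one extra reserve ("catcher") block sits idle
        # behind static transfer switches and picks up whatever fails.
        return hall_load_kw <= num_active_blocks * block_capacity_kw

    hall_load_kw = 8000        # 8 MW data hall (illustrative)
    block_capacity_kw = 2500   # per power block (illustrative)

    # Distributed redundant: five blocks, any four must carry the hall (4/5 utilization).
    print(distributed_redundant_ok(hall_load_kw, block_capacity_kw, num_blocks=5))       # True

    # Isolated redundant: four active blocks plus one idle reserve (five blocks total).
    print(isolated_redundant_ok(hall_load_kw, block_capacity_kw, num_active_blocks=4))   # True

The "tricky math" Mike mentions is keeping the hall load balanced in multiples of the active blocks so that no single block overloads when one of its peers is lost.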

Yeah. Distributed redundant wins in terms of higher capital utilization, right? It's using less equipment. But with isolated redundant, you're basically paying for flexibility; you're adding more equipment to get increased flexibility.

Mike Starego (14:49.26)
Yeah, you could put it that way. There are some finer points to it, but that's the gist of the major advantages.

OK. All right, last question about that. I don't want to get too far down this path, but I find this really interesting. So say I'm a data center operator and I want to build in optionality, because it's hard for me to predict what my customers are going to ask me to install. Say I'm a colo, and right now we're designed for 20 kilowatt racks, but I'm worried that three years from now there'll be 50 kilowatt racks. Would that impact

which of those designs I choose? Would the isolated redundant system give me more flexibility to adapt more quickly or easily to unexpected changes in the market like that? Is that the type of flexibility it provides?

I think so. In the colo market, where there's uncertainty about your layouts and who you're serving, I think isolated redundant lends itself better, because it gives you that flexibility. If you know which client is going to occupy those halls and you have a good sense of their system architecture, like how many racks, at what capacity, and how many rows, then you can say, well, we could do this distributed redundant. But keep in mind,

because of the math involved with distributed redundant, once you get above about five blocks, we tend to sway towards isolated redundant just because it starts getting way too complex. The more blocks you have, like if you want your data hall to be eight, nine, ten megawatts, then you're looking at isolated redundant, because it just becomes too tricky to do the distributed redundant method.

James Eaves (16:38.894)
I get it. OK, so my next question: we're talking about the challenge of putting this equipment in, and it's expensive, I get it. Right now, what's driving the market is increases in peak power densities for these server racks.

I get this feeling as I'm learning about this market that that's a big problem, a big design challenge. But you just described a bunch of components to me, and my first instinct is, well, they're making a lot of money from these GPUs, right? What's the big deal of just running new busway and installing new PDUs? Why not just add more equipment and be done with it?

Well, the challenge is the increase in density of the racks. Yes, you're adding more stuff, but you're trying to get it to a finer point, if you will, physically. When we did our study, we did 25 kW per rack, and at the time we began the study that was the going rate. But now we have clients asking us to plan for 200 kW per rack. It's an immense difference.

When I mentioned the busway before: plug-in busway is a very common AC method for distributing to racks. And oftentimes, in fact I can't think of a time when we didn't, because you want that redundancy, you have two of these things above every row. Well, once you get a row of, say, ten racks at 200 kW, you're way beyond what you can feed with a single busway, because you just run out of amps, you run out of power.

So you have to use multiple runs of that busway. Now, there's a way you can do it from both ends, running a length from each end. But if you say, I want 200 kW and ten racks in a row, you can't do that anymore, because you're beyond even that 1,200 amp limitation for plug-in busway, the kind where you can plug in outlets and drop cords and whatnot,

Mike Starego (18:55.162)
or modular tap boxes that have either drop cords or outlets, which is what you'd ideally use. So then you need a solution for those ancillary racks, maybe a couple racks on the end, where you either provide a short run of busway or maybe even dedicated outlets, and that makes it inconsistent. So you can't really apply those traditional methods at those densities anymore. And then,

if you use multiple runs, you can only serve from two ends. So if you need more than two, you have to have some that are higher, overlapping, zigzagging, all that stuff. And they all need clearances; all those boxes need clearances so you can twist them in and utilize them in the future to plug in your outlets. Not to mention the NEC clearances: you need to be able to access them in case you have to go disconnect one in an emergency.

There’s a lot to it once you start getting at that, packing that much power in a tight space, things start running over each other and you can’t buy code and you can’t buy any reasonable safety.

But can’t you just like, this is a customer asking you for this, like 200 KW racks today, right? So can’t you just design around that anticipation, like just build a bigger building with more power distribution inside it.

Right, you can put more power distribution around it, but you've still got to get to that point where you distribute within that rack, that row they're used to. I get it. Yeah, and you're going to have some level of containment, so you have a row across from it too, so that you can run your AC. Now a lot are liquid-cooled, but there are still some air-cooled components. If you use liquid-cooled chips, you still have the rest of the server

Mike Starego (20:48.194)
that still requires air cooling. So you're still using that traditional containment, the hot aisle, cold aisle air distribution. Even if you have doors, you need room for that kind of airflow, even though you're cooling the chips with direct liquid. So you still have that traditional physical architecture of the rack. And even though

you could have all the land in the world around you to put your gens and your transformers and stuff, you've still got to converge on that traditional rack pod, if you will. Does that make sense?

I maybe, again, I’m not the data center expert, you’re gonna have to talk dumb to me or talk to me like I’m dumb. Like for instance, why can’t I just spread everything out more? Do you know what I mean? It seems like what you’re describing is a density problem. Like these buildings, I start running out of space, either in the vertical.

spaces above the racks or on the floor or really close to the racks themselves, right? that what you’re describing? Like I’m having space constraints.

Yeah. In regards to the IT infrastructure and why this stuff is close together, I'll tell you what, I'll let Steve answer that, because he had some really good insight. He knows a bit more about that than me as far as the IT infrastructure and why those distances between servers are so important. So Steve, if you want to explain that.

Steve Eaves (22:21.782)
Yeah, it has a lot to do with that. In the end, we're all serving IT customers; we're bringing all these services to them: power, data, cooling. And what they're doing is compute, right? There are reasons why GPUs have gotten so power dense. They're nominally at a thousand watts per chip,

per GPU. And you've got to ask, well, that seems like a lot of power. But the thing to remember is that they're incredibly efficient devices. If you had to take, whatever, a 1995 server and create the same compute as that single GPU, you'd be talking hundreds of servers. So they've packed

things in really dense because there's an economy of scale. Everything is close together. You can manage power flow really well, but more importantly you can manage data flow, because you can do direct memory access over short distances. And the other thing to remember is that when they're running these large learning models, they're built in clusters, so they're grouping together

GPUs in these models. And that really high-speed data transfer only works over short distances. If you look at the different protocols, like NVLink, it runs maybe 20 inches. So you can access memory in other GPUs over these short hops, and then you have to take another hop to go further. But there is a cost,

a processing cost, to take a hop. So there's a big drive to pack everything close together so they can run what are, in the end, a bunch of parallel processes. It's like a bunch of race cars: you're trying to launch them all at the exact same moment, and you want them to finish at exactly the same moment. And if you have to reach out into a different rack, then you're going upwards in the rack to a switch, like an Ethernet switch,

Steve Eaves (24:46.7)
and then jumping to a different rack, and there's incredible cost in terms of time to do that. It's glacial compared to what the GPUs are doing when they're just jumping to each other within the rack. So that's what drives it, that's what's making everything so dense. I sometimes look at it as real estate costs, right? If you draw a line from the center of a GPU out to the parking lot of the data center,

it goes from unbelievably expensive real estate to fairly reasonable real estate, and close to the GPU you just can't waste that space. That's why people are tucking things in so tightly.

So, I just want to make sure I understand this. No matter how much space I have, I can make a data center the size of a state, but the technology, the chip, is still going to constrain power density to be incredibly high. Because in order to meet those latency requirements they have,

all the components in that cluster have to be within 20 inches, or whatever you said. And so I always have to distribute thousands of watts to that tiny little space.

Yeah, they have what they call head and tail components in data transfer. I talked about those race cars on the track. If you ran a thousand race cars in parallel on that track, you spent a lot of money to set up that process and launch it. If you have one race car that has to go to another rack,

Steve Eaves (26:37.57)
you have 999 race cars sitting there idle, waiting for that one car to get back and finish. So there are penalties involved. If we spread things out and weren't efficient about tight spacing, it would cost that end customer, the IT customer that's paid to process data.
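A tiny sketch of the penalty Steve is describing with the race-car analogy: in a synchronous parallel job, every step finishes only when the slowest worker does, so one GPU forced to hop to another rack stalls all the others. The latency numbers here are invented purely for illustration.

    # Synchronous parallel step time is gated by the slowest worker (illustrative numbers).
    in_rack_step_ms = 10.0       # a worker that only talks to GPUs inside its own rack
    cross_rack_penalty_ms = 5.0  # extra time for one worker that must hop through a
                                 # rack-level switch to reach memory in another rack

    workers = [in_rack_step_ms] * 999 + [in_rack_step_ms + cross_rack_penalty_ms]

    step_time = max(workers)                       # everyone waits for the straggler
    idle_ms = sum(step_time - t for t in workers)  # GPU time wasted while waiting

    print(step_time)   # 15.0 ms per step instead of 10.0 ms
    print(idle_ms)     # 999 workers x 5 ms = 4995 ms of idle GPU time per step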

James Eaves (27:04.738)
That always makes me think of another point. We sometimes hear experts, really smart people in this industry, say things like vertical space doesn't matter, it's just floor space. But according to what you're saying, that just doesn't make sense. I can't productively use an infinite amount of space above my rack, because that chip is enforcing this

space constraint where I still have to have incredible amounts of equipment and power in a small location around that server rack, no matter how I design the data center. Do you guys agree with that?

Well, the vertical space, we've seen a higher need for that so you can serve the liquid cooling with that chilled water. You need space for that piping. So the vertical space has become very important as the density increases, because you need room for a larger busway, multiple runs of busway, chilled water pipes. And you want some separation between

those items too. If you have the capability to do so physically in your data center, maybe you put your piping under the floor and your electrical and data infrastructure up high, so that if you have a leak in those pipes, you're not causing damage to your equipment, either your distribution equipment or, God forbid, the servers themselves, which are

tremendously expensive. So you need height to make all that work in a small area, to stack your infrastructure.

Steve Eaves (28:58.284)
Yeah, I think there’s going to be a lot of friction between we’re like the back of house guys, right? I always think of these hotels in New York City. Imagine the real estate cost in Manhattan, right? And a hotel wants to fill that with rooms because they consider the end customer considers that the revenue producing component of their business. And there’s always going to be a lot of friction that, you know, the back office is, know, when we’re working in buildings.

There’s a whole different world where you walk through a little door and there’s all these hallways and elevators for people that are working in the hotel and all the facility equipment is back there. But I think what’s happening is there’s already a migration of higher racks because of that GPU thing I was talking about, how you have short reach on high speed communications.

So the natural thing is to try to push the rack up, and just like in Manhattan, there's going to be friction with the service-provider, back-of-house component of the business. You certainly don't want to be running a hotel and have the electrical and plumbing folks come in and say, hey, we need the top 20 floors for our stuff, right? There's going to be pressure between those two factions of the business.

Yeah. It makes me think of this concept in finance called option value. It's the real financial value of having choices, having the flexibility to make different decisions in the future. And another thing is that it's so difficult to predict. For instance, we started talking about the article, right? A year ago in the article, you guys were talking about predictions that

peak power densities would hit 20 kW, and now they've shot past that. So we obviously have a really hard time predicting what power densities are going to be, and it takes two or three years to build these data centers. And so just the value of managing that space really well, keeping as little equipment in that space as possible, so that

James Eaves (31:21.772)
you have the option, more optionality, to add taller racks if you need to, add more power distribution if you need to, add chiller lines if you need to, is incredibly valuable. And I wonder, Mike, are you hearing your customers articulate that? Are they asking for more vertical space in these designs?

Yeah, and Mike, to follow on to that question: how far do you have to lead the target? When you're talking about a design right now, how long is it from when folks come up with the idea to when that data center is operating? How far do you have to look into the future?

So what we see happening is our clients are having us design around a pretty big end game, if you will, because when we see their actual ramp up to the load, it takes them years to get there. And where we see that is when we're shown their strategy for buying power from the utility, because they're buying power in kW blocks. So they have what they call a ramp up.

We may be building a 50 or 100 megawatt data center, but it may take that customer, whether it's a single tenant or multiple tenants in a colo, four or five years to get to that load. So we're leading it off very large. And Steve, we were talking before about rack densities of 200 kW per rack, and we were lamenting that, well,

the server technology at the moment doesn't quite get there. So they're planning ahead: when this data center is done, they're anticipating the technology will get there, so that a 200 kW rack is a reality. And when you go to the symposiums and webinars, they're boasting much, much higher numbers in the coming years,

Mike Starego (33:29.974)
even higher densities within racks. So they're leading us, but they end up catching up to it pretty quickly. The ones we've built are already at capacity, and of course they're building more. The stuff we started years ago, built, and got populated has already caught up to itself. It didn't take very long, and you'd anticipate that's going to continue to happen. So that is a challenge, to chase that.

It’s like, far do you go with the density when you design it? Because to James’s point about the timeline, you can’t just build this overnight. We’re building them faster than anybody, believe me. And we’re quite proud of that. But still, it’s not overnight. It’s still even the fastest data center from the very inception to the final build is a couple years, even at the fastest pace.

So you gotta get it.

Right, lead times on equipment alone are a good chunk of that. Just, yeah.

Exactly. Yeah, like we’re seeing lead times for like if you’re not in line already, like say like you want a two and a half megawatt generator and you want to get in line, you’re you’re waiting a couple of years just for that generator. Wow. You know, now if you’re connected, you may be able to short shorten that. But, you know, if you I remember in a recent project, we were contemplating changing the KW of a generator, but we had with many of them.

Mike Starego (35:04.462)
And the vendor told us, you're already in line for this one; if you change it, you go back to the beginning of the line, and you won't get them until 2027. Wow. So that's an extreme case. Lead times have gotten a little bit better for a lot of equipment, but not by much. We were thinking they were going to escalate, but there are a lot more players now, because they see the opportunity. So

You’re not, you’re not subject to just being at the mercy of the typical big manufacturers for say switch gear. there’s people, lot of people now making custom switch gear and sourcing circuit breakers elsewhere. And, that competition is, is decreasing those lead times. They’re still significant. Like for switch gear, you’re looking, you know, you know, they’ll tell you 40 weeks, but they’re not including the submittal process in there.

they’re giving you the lead time to ship. So you got to add some shipping time. You got to add a submittal time, which could be one to two months, depending on how well that goes. So you’re looking at a year for switch gear and pretty much most of the other equipment at least. Now transformers too can be quite a hefty lift too, especially because the data centers buy a lot of common, lot of same sizes they need, right? You’re not buying

variety of pad-mounted oil-filled transformers in a range of sizes. You're looking at 2,500 kVA, 3,000 kVA, in those ranges, so you have a lot of people wanting the same product. So transformers could be 80 weeks.

Wow. Yeah, okay.

James Eaves (36:51.788)
I think I totally underestimated the value of speed to market. While you were talking, I was thinking about a stat I heard that these highest-performing GPUs become obsolete after four years. The technology and the customer demands are changing so fast that you feel like you have about four years to extract value from that purchase. But at the same time, when you're

planning to build one of these facilities, you have all that planning time, you have the lead times, you're making your best guess about what the densities will be, and you could get it wrong by the time you actually turn the power on. So it's such a risk. It seems like the fact that the chips are evolving so fast creates this enormous risk and cost from delays,

beyond just loss of revenue. Literally, you could open a facility where some proportion of it is obsolete by the time it opens. Is that too extreme, or is that realistic?

It’s realistic to a point, but you know, the folks that are building the data centers, you know, they’re not building one data center and watching it intently as you know, this is their one sole project. They have several in the works all at different stages. So, you know, you take those lessons learned and you see the shortcomings of the one that’s being built now as far as how it’s not keeping up and then you adapt the next one.

So they’re building multiple data centers at various different stages. So they can adapt on the next one. but you know, land isn’t infinite. It’s nice to say that we have all the land and you you have a lot of land to work with and they are moving to more areas where they have accessibility to larger, you know, swaths of land. But, you know, you can still adapt and make these things larger.

Mike Starego (38:57.486)
as you move further out, away from urban areas and, ideally, away from residential, right?

Yeah. Okay. This is such a fascinating topic, I could pick your brain for another hour. But because we're also a business, I'm going to shift over to the article; it's a good segue. So Steve, even though many of our millions of listeners might already know what fault managed power is, can you give a quick introduction

to what the technology is and a high-level description of what the value proposition is for facilities like this?

Okay, all right. So fault-managed power is a new class of electricity that is digital in format. It sends electricity in what we call a packet, a burst of electrical energy, that goes from a device called a transmitter, which would be out in the power block area, as Mike talked about, to a receiver,

which is inside the server rack; that's what folks would normally call a PDU, or a rack PDU, today. The other thing to talk about is that fault managed power, our brand name for it is Digital Electricity, is inherently safe. Even at hundreds of volts, you can touch it with your hands and not get hurt. It won't start a fire. But maybe even more importantly, in

Steve Eaves (40:41.546)
a digital environment, it’s qualified to run in data channels. So it uses the same installation practices as ethernet cable or communications cable. And it can run in the same pathways. So it’s an installation instead of overhead busway, you’re using data tray for digital electricity. And it’s a structured cable or a communication cable that can even have fiber optics as part of that cable.

So it converges, what we call digital convergence; it converges the pathways onto one layer. And then it's a point-to-point architecture. Mike talked about how in the power block area you have N+1 redundancy. It's not a 2N environment, which would mean two of everything; there's just a little bit more than you need for redundancy on the outer edges.

But with traditional AC distribution, when you enter the data hall, it's 2N. So you have two of everything: two of the busways that Mike talked about, and within the racks you'll have two rack PDUs to distribute power, so if you lose one, you're still running at full capacity. Fault-managed power systems, on the other hand, are what's called a point-to-point architecture.

It’s what a data person would be familiar with. Ethernet is a point to point architecture. You have a centralized hub that projects individual channels out to the load. so the data center architectures we’re working on with fault managed power are point to point. So if you have, a fault or a short circuit or somebody touches the wires in one of the server cabinets, well, for one thing, they don’t get hurt and nothing blows up.

But the other thing is it only takes out that one individual channel, and there are many channels inside that server rack. So you don't need two of everything. It's kind of like the outer edge: the architectures that we run could generally be called an N+1 architecture. So you might have four channels that go to the rack, four big channels, but you only need three.

James Eaves (43:04.652)
So what equipment do you eliminate? In that architecture you just described for fault-managed power, compared to the AC system, just list the main pieces of equipment that you eliminate.

Well, yeah, the blocks that we talked about before: the way power flows through those is you have a transformer that serves a switchboard, typically at 480 volts, and that's backed up by a generator that sits close by. That switchboard combines those two and acts as the transfer switch; it has the brains that switch between the two sources should one fail. And then that distributes to a large UPS.

And that UPS is designed to carry the critical load of that block, which is the servers. In some cases, if you're doing liquid cooling, you want it to back up your liquid distribution too, the pumps and what are called the CDUs, because you've got to have constant fluid flow; you can't even tolerate the 20 seconds it takes for a generator to come up. So after the UPS, that

critical load will have distribution. This is what we were talking about when Steve mentioned PDUs. So you go to another switchboard and distribute to PDUs, power distribution units. These are large floor-mounted pieces of equipment, the size of vending machines or larger, that have transformers in them, and they transform to a voltage of typically 415 volts, because your single-phase voltage at that

level, line to neutral, is 240 volts, and that's ideal for the servers. That's why you see folks use 415 volts as a standard; a large number of the data centers we do use that voltage to distribute to the server racks. So those PDUs will then serve that overhead plug-in busway that rides above the server racks in a 2N arrangement. And once you get to that point, as Steve mentioned,

Mike Starego (45:09.858)
that is a 2N configuration, because you want two buses fed from two different sources in case you lose a whole bus. And I think that's a very strong point for fault managed power. Just to use some numbers, I'll use 300 kW; say your busway serves 300 kW in a row. I know we're talking about larger densities, but let's use that number to keep the math simple. Well,

now you have to have distribution in that row for 600 kW, because you want to plan for: what if that bus fails and I need a whole other bus above that row? Well, if you have a point-to-point architecture, you could feed that row easily with four sources, because all you're doing is routing these small cables in cable tray. And with four sources, that means

you need infrastructure for 400 kW in lieu of 600. So you can only imagine the savings, right? For every row, you're taking out a third of your redundancy need just by doing that math. And adding that fourth source is physically impossible in a traditional scheme, because you can't put four busways above a single row of server racks; there's just not the physical space to do it.

You can put two, and that leaves you stuck with 2N instead of four to make three.
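Putting Mike's 300 kW row example into numbers: a traditional 2N busway scheme installs capacity for double the row, while a point-to-point N+1 scheme only adds one spare source. A minimal sketch of that arithmetic, using only the figures from the conversation:

    row_load_kw = 300

    # Traditional 2N: two full busways above the row, each sized for the whole row.
    two_n_installed_kw = 2 * row_load_kw                               # 600 kW of distribution

    # Point-to-point N+1: split the row across three sources and add one spare.
    active_sources = 3
    source_size_kw = row_load_kw // active_sources                     # 100 kW each
    n_plus_one_installed_kw = (active_sources + 1) * source_size_kw    # 4 x 100 kW = 400 kW

    print(two_n_installed_kw, n_plus_one_installed_kw)       # 600 400
    print(1 - n_plus_one_installed_kw / two_n_installed_kw)  # ~0.33: a third less redundant capacity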

Wow, that’s amazing. So just to reiterate, so you eliminate the generator in a generator switchboard, the UPS, the second switchboard for the PDU, I’m sorry, and then the PDUs.

Mike Starego (46:56.834)
Well, it wouldn’t be that much. You would still need the major equipment. You would still need the transformer generator and switchboard. You would still need those pieces. Once you get to the UPS, now that’s where things get different. So ideally how you would serve the fault-managed power transmitters that Steve described is with rack-mounted UPSs. So you have a row of racks that serves, and you could call it your fault-managed power

UPS, where you have a combination of rack-mounted UPSs that go with their buddies, right? Their buddies are the transmitters. Steve has adapted his product to match and scale with how these rack-mounted UPSs have grown and adapted over the years to get more powerful in smaller spaces. So you match those one for one.

And now you have a fault managed power UPS, which has the transmitters, the UPSs, and the batteries in a similar rack lineup to an AC UPS and battery rack. So in that case you're not sacrificing space; the space you've allocated for your AC UPS hasn't grown. And then what comes out of the transmitters are the cables;

the Class 4, or fault managed power, cables come out of those transmitters, and that's where the difference is. Those cables ride on cable tray directly to the, I'll call them rack-mounted PDUs, not the floor-mounted ones, but they're basically the receivers that the servers plug into. And that's fault managed power DC going to straight DC; what those receivers are doing is transforming that

monitored DC into straight DC. So you eliminate the switchboard downstream of the UPS, you're replacing the UPS with this more sophisticated UPS, and you're replacing everything downstream of it: you don't have that critical distribution panel, you don't have the static switches, you don't have the PDUs, you don't have the busway.

Mike Starego (49:15.736)
If you think of all those pieces, they all have connections that need to be maintained, because they're all failure points. So think of all the failure points, all those connections, all those IR windows that you need to keep an eye on to see if they're overheating. It's a lot of elimination of failure points, and it also greatly simplifies your maintenance. A big cost for data centers is maintaining all that sophisticated and expensive equipment, because outages are very, very bad for them,

obviously. So there's a lot of maintenance to ensure they continue to run properly: looking at those IR windows, making sure those connections aren't getting hot. Fault managed power eliminates all of that, because once you leave the transmitter, the other end is just the receiver, and that's your power source for your server.
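To keep track of what stays and what goes in the architecture Mike just walked through, here is a simple side-by-side of the two distribution chains as described in this conversation; it's a simplified summary, not an exhaustive bill of materials.

    # Simplified summary of the two distribution chains described above.
    ac_chain = [
        "utility transformer", "generator", "switchboard with transfer controls",
        "central UPS", "downstream switchboard / critical distribution panel",
        "static transfer switches", "floor-mounted PDUs with transformers",
        "overhead plug-in busway (2N)", "rack PDUs",
    ]

    fmp_chain = [
        "utility transformer", "generator", "switchboard with transfer controls",
        "rack-mounted UPSs with FMP transmitters", "Class 4 cable in cable tray",
        "rack-mounted receivers feeding DC to the servers",
    ]

    print(len(ac_chain), "vs", len(fmp_chain), "major elements in this simplified view")
    for item in ac_chain:
        if item not in fmp_chain:
            print("removed or replaced:", item)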

And how much vertical space? It sounds like, and correct me if I'm wrong, I can put the Digital Electricity, the Class 4 or fault managed power cable, in the same cable tray as the data cable. But would you do that in practice? Could you do that in practice?

You could. I think as these densities increase, you probably wouldn't, but they'd be close nearby. You want a little bit of space there, and there's a practical limit to your cable trays, because you don't want them to ever grow over four feet wide, because then you need sprinklers underneath them, right, per the sprinkler code. So you want to keep them

typically at three feet, because with cable tray you need to be able to reach over and get in there to add cables, replace cables, whatnot. So there's a practical width that you want to live within. And at the densities we're talking about, you're going to need a lot of cables, but you can still provide that amount of power with cable tray in lieu of busway. The situation

Mike Starego (51:27.146)
I described before, with multiple runs of busway, takes up a tremendous amount of space, especially when you have to introduce a third bus or extra distribution to pick up servers that fall beyond the standard means of what's available in those products. So having the cable tray get a little bit larger to add more fault managed power cables is a lot easier than adding

AC large-ampacity distribution equipment.

Did you guys come up with an estimate of how much vertical space you save? You obviously remove tons of equipment, and I'd be curious about the percentage reduction in equipment. I also want to know the percentage reduction in cost. But before I get there, did you estimate how much vertical space you save with this architecture, the Digital Electricity architecture?

Yeah, I don’t think it’s too early. mean, the paper will be out soon. I think we’re pretty confident that it’s coming out to about an 80 % reduction in cross sectional area. Because the thing the thing that we’re talking about is that, you know, it’s it’s very a lot of folks think that it’s all about reducing copper cross section. And, you know, there’s talk now about going up in voltage. Right. And that that helps you reduce the amount of copper.

you need to distribute the power: you need fewer amps, so you need less copper cross section. But the thing that makes conventional electrical distribution big is the safety aspect. Mike opened up with that, how safety is a big concern. And I'm not going to say that data centers aren't safe; they are. It's more a question of what you have to do to get to that level of safety. With conventional AC,

Steve Eaves (53:22.274)
you have a lot of stuff that you put around that copper. It's kind of like a wild animal in a cage, right? You have to keep it in its cage to stop it from hurting people or starting things on fire or arcing off, and that takes a lot of space. So the 80% reduction has something to do with the voltage, but we actually operate at similar voltages to the 415-volt

three-phase power that Mike's talking about. It's just that we don't need all that extra stuff: we don't have the disconnects and the switches and all the sheet metal that goes around busway, and the keep-out areas. That's where all the savings are.

That’s amazing though. 80 % is amazing.

Yeah, the safety is a massive advantage. Think about the components we discussed that we're replacing with fault managed power: where does that fault managed power run? In the data hall, because all the equipment upstream of it is located outside the data hall. So wouldn't it be great to tell a client, in this data hall, your racks are safe

from any electrical injury, and safe from fire. You don't need arc flash stickers, because, to Steve's point, you can touch this stuff. It doesn't tickle, but it doesn't injure you, it doesn't burn you, and it certainly won't kill you. You can't say the same for any of the AC components I described; they're very, very dangerous. That's why they're in big metal enclosures, and why you have

Mike Starego (55:05.358)
many safety precautions and stickers on them as to what the ramifications are if you want to work on them live. You have those arc flash stickers, and you're required to wear your PPE to even think about doing that stuff. But with fault managed power, number one, all you have is cable, right? There's really nothing to maintain. And even if something rubbed off, like the insulation rubbed off,

that channel just goes down instantly, within milliseconds. So there's really no risk of fire or shock. And you know when it goes down, because one of the other benefits I learned about fault managed power is that you're monitoring those channels constantly. At 500 hertz, you're monitoring those channels to make sure that the receiver is getting every bit from that transmitter.

You now have the most granular power management system you could ever want. For every channel you know the watts, you know every statistic about it in real time, and it's recordable. So you now have a very sophisticated and granular electrical power monitoring system.
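As a purely hypothetical sketch of what that per-channel telemetry could look like to a monitoring system (the field names, values, and roll-up below are invented for illustration and are not VoltServer's actual interface):

    from dataclasses import dataclass

    @dataclass
    class ChannelSample:
        # One reading from one FMP channel during a monitoring cycle (hypothetical fields).
        channel_id: str
        volts: float
        amps: float
        fault: bool = False

        @property
        def watts(self) -> float:
            return self.volts * self.amps

    def summarize(samples):
        # Roll one polling cycle into a rack-level view: total power, healthy channel
        # count, and any channels the transmitter has shut down.
        healthy = [s for s in samples if not s.fault]
        return {
            "total_kw": sum(s.watts for s in healthy) / 1000,
            "channels_up": len(healthy),
            "channels_faulted": [s.channel_id for s in samples if s.fault],
        }

    cycle = [
        ChannelSample("rack12-ch1", 336.0, 2.9),
        ChannelSample("rack12-ch2", 336.0, 3.1),
        ChannelSample("rack12-ch3", 0.0, 0.0, fault=True),  # channel dropped within milliseconds
    ]
    print(summarize(cycle))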

How about speed to market, which we were talking about? How much faster is it? Do you anticipate that it actually increases the speed at which you can build these facilities?

Yeah. I think the lead times for the equipment, and Steve, correct me if I'm wrong, are significantly better than some of the year-long, 40, 50, 60 weeks or higher we were talking about before. So I believe the speed to market just for the pieces is shorter. But also the labor: because this is a Class 4, an NEC Class 4

Mike Starego (57:06.464)
recognized installation, you can use an IT technician to install these components. Now, it's a skill set, no doubt, it's still very skilled labor, but it'll tend to be a cheaper form of labor, if you will. And it's a different level of cost, because you're routing cables in a cable tray. So the labor is going to be much,

much less on a per-hour basis. And the techniques involved: pulling wire in conduit and making torqued connections with 600-volt cable is a very skilled and time-consuming task, whereas laying a cable in a cable tray is a much simpler process.

Yeah. And what we see in our other projects is that often people will still use electricians, because they're in there anyway and they have excellent wire management, but the amount of labor it takes to distribute the power is so much less that the financial impact is negligible.

I think we ran that. Yeah, we ran the first study all on union labor, and it's fine. Many of our installations are done with licensed electricians, but it's just fewer hours. You can use a common crew, and your utilization of apprentice labor can be better distributed, for things like cable pulls.

So what was the cost impact? Are you willing to talk about the new paper's estimates today, or do you want to wait?

Steve Eaves (58:59.996)
we can’t. We’re not done.

Okay, okay, that’s too bad. Well people have to check in our websites to get the update

That’s the big finish, right?

Yeah

I guess, I mean, we exceeded 30 in the first analysis; it exceeded a 30% reduction in overall installed cost. And I guess what we'll find out this time is how it scales, because that was at 25 kilowatts per rack and this one will be at 200. The paper is a little strange because, as Mike pointed out,

Steve Eaves (59:42.902)
rack density is leading the available hardware. If you look at, say, a Dell or a Supermicro GPU server right now, they can't fill a 200 kilowatt rack; the density isn't there yet. So we've got to sort of fudge around some of that. But I guess we're going to find out how it scales to really high power densities versus 25 kilowatts, which actually is high power density.

It’s the new high.

Right. In the first go-around, a lot of the significant savings was in the labor.

All right, I’m sorry, Steve, I cut you off. Go ahead.

I was just going to point out that the other thing we've got to work out is the overhead busway distribution in the AC case. We don't really have a good solution yet for getting it to the racks. I guess you could go 20 feet high or whatever, but it's a little messy right now; we still have some work to do there.

Mike Starego (01:00:52.014)
Well, that's because you have a row that's too big, right? So you do have to have a tertiary level of distribution; you can't just feed it from both ends anymore. That's the tricky part: having a busway run over the other busway and then jog down and pick up those last two racks, unless you want to serve those stray racks, if you will, with just straight outlets. But that becomes a

non-ideal form of distribution for those racks because you have two different styles now.

Let’s wrap up with a prediction. we’re saying that using fault-managed power, you’re eliminating 80 % of the equipment that you would typically use for distribution inside a data hall. Whoa. Whoa. is that it?

Where’d you get that? I liked that joke. We said we had an 80 % cross-section reduction in. I don’t know. do. don’t know about. Yeah. Equipment. Well, depending on how you define equipment, right? We took out all that stuff that Mike’s talking about. I don’t know if we want to count it as weight or volume or whatever. There’s certainly a lot of weight and volume that’s taken out of the white space.

Okay, good.

James Eaves (01:02:00.918)
Space reduction.

Steve Eaves (01:02:19.426)
or the data hall.

Right. Yeah. And that’s at the densities we’re talking about. You know, that becomes significant because those PD use, you know, they they’ll sit along the perimeter. You have to find room for them. Like, you know, you can get slick and start putting mezzanines and stuff. But that that becomes tricky for maintenance. But they they take up significant space at these densities. And that would be much more useful as more rows of servers. Right. So, you know, that that space that you’re taking out

with those PDUs is significant. And also your static switches and whatnot that you're able to remove; those take up space too, because they need to be in conditioned spaces. Those are sophisticated pieces of equipment that require a decent level of conditioning. They can't just sit outside or in an unconditioned space; they have temperature limitations, like most equipment with processors in it.

Okay, so we’re getting rid of in terms of volume, tons of stuff, right? It’s way easier to install. It’s safer. It’s smarter. And at least for the 25 KW data center, a rack, it was less expensive. So my question is like, first, do you believe that, you know, it adds in some amount of time fault managed power is going to be the standard for

data for power distribution data centers. And if so, how long until it’s the standard.

Mike Starego (01:03:56.44)
Well, from my side, what I see is it's going to take a leap from an owner to do that, right? Especially at a large scale. I think how it will happen is through some pilot sites to show that it works, and then scale from there and show how it's scalable. And once that occurs and it's proven, then you'll get more owners

ready to take that leap. Because it would be a big ask: you're asking them to change their topology and take a big risk, and they're building these things so fast. In a colo environment, that would be the biggest challenge. A single-user data center might be more willing to take that risk, because they know their servers would adapt well to that direct DC from the receivers, which

you can’t assume is the case for a colo because a colo owner isn’t going to know specifically what equipment’s going into those racks. So you have to know first off if it’s going to take that DC. So it’s going to be a tricky environment in the colo environment, but a single user data center, I think those would probably be the ones that take the first leap.

Yeah, that makes sense. And yeah, we have some tiny, tiny pilots going on right now, so it's getting some mileage. But as Mike said, we don't have anything at scale yet, and it will take that. We're going to keep talking and promoting and using our relationships, and eventually we'll get there.

I'll add to that.

James Eaves (01:05:43.628)
Yeah, because today the technology exists and it's in the code. It's a better form of power distribution, so there's just some pathway it has to go through in order to be adopted in the data center space. But it's really hard to believe that it won't be.

Yeah, in our other markets we always started with the advantages, the mix of the value proposition: the total installed cost along with the other value it brings, like the data analytics that are inherent to the system, which we talked about. We know that we have to be really impactful. The 30-plus percent installed cost, the CapEx,

Mhm.

Steve Eaves (01:06:36.972)
has always been the formula we've used at VoltServer to get people to consider it, to consider taking some risk, and then we throw everything else in. That's how we get first adopters; that's how we've traditionally gotten them. As you know, this is a new market for us, we just started on this one, but we use a similar formula in our other market spaces. So maybe somebody listening to this podcast

Yeah, we’ll take the chance. But it’s great. have partners like Mike that’s helping us champion this and promote it and find, understand the new possibilities it creates. Mike, you’re so smart and you just know so much stuff. I would just love it if you were always on this podcast with us.

Yeah.

Mike Starego (01:07:29.09)
Yeah, happy to be here. I appreciate you having me join you today; it's been a lot of fun. But you know, the nice thing about fault managed power, just to build on that, is that if you have an existing data center that still has halls to go, portions to go, this technology could still be adopted there, because it's all downstream of the UPS.

In a lot of cases, when you have shell space, you don't have your UPS, your transformers, or your generators yet. And all that's the same; it takes up the same room in a fault managed power solution, because you still have those components. It's just that once you get to the UPS, that's where you adapt it, and then it's all downstream from there. So if you have one hall that has, say, five, six, seven power blocks,

then you just build out those blocks with fault managed power distribution. If you have a single tenant with equipment that accepts that DC power, which isn't a stretch at all, that data hall could be fault managed power, even if you have AC in all the other ones. So you don't have to commit to it for a whole building; you can commit to it for a portion. That's

a nice little feature, so maybe there's more willingness to take a little plunge in a smaller area once the owner's comfortable with the technology.

Yeah, I agree with that. I think that's what we've seen; how we've penetrated other markets is often through a hard retrofit, where you're trying to get more power into a tight location that's power-constrained for some reason. Like a data center that was built for eight kW racks, and suddenly they need to install a 30 kW rack and they just can't squeeze the power in there. If they try to do that work, they risk

James Eaves (01:09:39.351)
causing a disruption to service. It's much easier to basically snake a skinny little Digital Electricity cable through whatever pathways are available and deliver that power to the location. So I agree with you; I think an application like that will likely be the way we penetrate this market.

It doesn’t have to be all in.

Yeah. Anyway, so I think that’s a good place to wrap up. Mike, thanks for joining us. I just I really enjoyed speaking with you. I hope that you could join us on many more of these podcasts.

Well, thank you. Yeah, I really enjoyed the time here. I appreciate you guys having me on today. It was a pleasure to talk about it and I’m excited for the future that fault managed power could bring to the industry.

And if the audience wants to download the paper that Steve and Mike published last year, A Comparison of Fault-Managed Power and Traditional AC Distribution in High-Performance Data Centers, I think it was for a data center built around, what was it, 20 kW or 25 kW server racks? It's on both our websites, I think. And when can we expect your new paper around the 200 kW rack design?

Steve Eaves (01:10:55.414)
Is that three days?

Coming to your theater this summer.

All right, fair enough. Summer’s a good target.

All right, so we’ll post it on social media when it’s done sometime this summer. All right, thanks everyone and see you guys next time.

Thanks, James.

Mike Starego (01:11:13.858)
Thank you.
