[AUDIO LOGO]

[MUSIC PLAYING]

Thanks, everyone, for joining. My name is Robert Taylor. I'm the sector GM for industrial power design services. My team is responsible for doing custom reference designs specifically focused on data centers. I'm so happy to be joined today by these two lovely colleagues of mine. Please introduce yourselves.

Hey, good to see you again. I'm Kannan Soundarapandian. I'm a vice president here at TI, and I run the high-voltage products business unit. I'm responsible for pretty much anything 150 volts and higher from a silicon standpoint. Very happy to be here. Looking forward to the conversation.

Yeah, good to be with you guys. I'm Priya Thanigai. I'm vice president and business unit manager for the power switches business. And we build power-path protection and distribution products across multiple voltage rails. So I'm really excited to join you guys and talk about power and data centers.

What's happening there?

Yeah, so we really are just seeing explosive growth in data centers. And specifically, my job is to make sure I provide products that protect along the power path and are reliable and resilient all the way through. So our goal is to make sure that as our customers' needs scale from 12 volts to 48 volts to whatever comes beyond that, we have parts that serve that need.

And Kannan, I'm super excited to see some of your parts in the data center, hopefully in the near future.

Just to echo everything that Priya's been talking about, the most important thing we're looking at right now is to make sure that whatever we're investing in and doing covers, to your point, everything from 12 to 48 volts and whatever's beyond, because all of those things are happening at the same time. I've never really seen anything like this in the industry before, where things constantly get accelerated. Usually, when somebody is working on something, you'll see the usual manufacturing delays, R&D delays. None of that is actually happening in this space.

It's crazy.

Every single thing continues to keep getting pulled in. And the challenge that we've actually got, especially in the HVP BU, is first of all to move at that pace without relinquishing the reputation we have for creating reliable, quality products, right? So it's an interesting place to be. And yeah, you're right, lots of exciting stuff coming out. And this is the place to operate. So actually, I feel pretty--

So there's a lot of different things going on. There's AI, artificial intelligence. We have crypto, cloud computing, all of these different types of things happening in the data centers, happening with the different products that we offer. So in terms of power, it seems like nobody really cares about that part, right?

[CHUCKLES]

When I go home and talk to my kids, they pull up some AI-generated photo that they made, and they have no idea how much power it took to actually do that. These are some of the problems that we have to solve, right?

The critical applications of AI, how to make anime versions of yourself, for sure.

That's right.

Yeah.

So I got this app where I took a picture of my dog, and then I tell it what to do with my dog. And so my dog is playing softball on a beach near the Eiffel Tower. It's just wild.

This is why we're building all the power that we need for data centers, so we can see your dog on a beach.

It's incredible.

So Kannan, thinking of these types of things, right, we hear crazy things every day from customers, right? A typical server rack right now is anywhere between 150 and 200 kilowatts. Where do you see that going?

The number we're actually looking at right now, and it's not in existence yet, but we're thinking about megawatt racks, when-- that is insane. A megawatt, right, 1,000 kilowatts. That is absolutely insane.

So we can make more videos faster?

Pretty much. Yeah, every single time you make a dog do things, for example, you need one of those things built up and ready to go. The insanity of having a megawatt of power being consumed in the space of something that looks like a refrigerator, you cannot overstate it. The whole--

Do you see that the timelines for that are getting compressed? When you think about what it took to scale from, let's say, single digit kilowatts to tens of kilowatts versus tens of kilowatts to a megawatt, do you see that just collapsing in terms of how quickly it's needed?

We talk about hockey stick curves, right? That's exactly-- your point is exactly right. You start. You have this linear ramp. Everybody is chugging along very happy with this is what the world is going to look like. And this is the power expectation in so many years and all that. And then all of a sudden you have ChatGPT. You have LLMs. And when those things show up, that's when you hit the hockey stick. So at this point, that's where we are. It's no longer that old line where everybody knew how much power a data center needed five years into the future, 10 years into the future. Now, it's completely the wild west.

Yeah, I saw this stat. So if you do a normal Google search, it takes 0.3 watt-hours of electricity. If you do the same search with a large language model, with AI, it takes 10x the amount of power.
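
For a sense of what that per-search difference adds up to, here is a rough back-of-the-envelope sketch. The 0.3 watt-hour and 10x figures are the ones quoted above; the daily query volume is a purely illustrative assumption, not a measured number.

```python
# Back-of-the-envelope check on the figures quoted above.
SEARCH_WH = 0.3          # Wh per conventional search (quoted above)
LLM_WH = 10 * SEARCH_WH  # Wh per LLM-backed search (quoted 10x)
QUERIES_PER_DAY = 1e9    # hypothetical volume, for illustration only

def daily_energy_mwh(wh_per_query: float, queries: float) -> float:
    """Total daily energy in megawatt-hours."""
    return wh_per_query * queries / 1e6

print(f"Conventional: {daily_energy_mwh(SEARCH_WH, QUERIES_PER_DAY):,.0f} MWh/day")
print(f"LLM-backed:   {daily_energy_mwh(LLM_WH, QUERIES_PER_DAY):,.0f} MWh/day")
# The 10x per-query factor carries straight through to the aggregate:
# 300 MWh/day vs. 3,000 MWh/day at this assumed volume.
```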

Yeah.

10x the amount of power. It just seems--

And I mean--

Seems insane. So where do we stop, right? If you think about it, how are we going to power these things, right? There was a recent nuclear power plant that they just had to bring back online.

Three Mile Island, yeah.

--Three Mile Island in order to power a data center over there.

Yeah, this is real. And it will continue to get more efficient. There's no question about it. I think everybody has read about ChatGPT and then DeepSeek and all that stuff. But the reality is, AI and LLMs as a whole are the first time, I think, that we as a species have had any way to process noisy or improperly formulated information, right? You just throw data at this thing, and it does the work for you to make some sense out of it. The power of that cannot be overstated. That is why, even though it takes 10 times more for a ChatGPT or an LLM-based search, it matters so much more, because a lot of that heavy lifting is done for you.

Now, if you take companies like ours over here, you've got 20 years of digital data. But you know what? It's 20 years of digital data with maybe five generations of engineers, 10 generations of engineers having delivered it, generated it, and tabulated it. So there's no real way to make any sense of that. But now we do. That is actually a fundamental paradigm shift in how we can actually consume digital data. This is why this is such-- this is, first of all, growing at such a breakneck pace, because everybody understands the possibilities here. And the most important thing is this thing scales. By the very nature of it, it scales. It scales from the smallest thing, dog playing softball--

Yeah.

--all the way up to crunching through two decades of data that an $80 billion corporation can actually generate, right? So it's insane.

Do you guys think that the challenges, as we walk through this really, let's just say, explosive growth in the need for power, fundamentally change into something different? Because what I'm seeing is that now, more than ever, you hear the emphasis on power efficiency. You hear the emphasis on redundancy and zero downtime. And all of these challenges have always existed for a data center. But now we're looking at, for example, the backbone, the digital backbone of a hospital being a data center.

Right.

And that just shifts the paradigm, right? It's not just a banking institution or your favorite video that you want to put on social media. But we're talking about, really, I would say serious consequences if these very power-hungry data centers aren't designed right.

It's actually a wonderful topic to hit. If you think about it, the whole reason we have these new servers and this new infrastructure, AI/ML infrastructure, is simply because it does one thing much better than traditional CPU-based servers, right? And that is parallelism. So fundamentally, these things can crunch vast amounts of data in parallel at the same time. But there's a caveat to that. What ends up happening is, in a traditional data center, if one server goes down, you can actually build it and architect it so that there are three or four others that can carry that load and nothing happens.

In AI/ML data centers, there's this new term being thrown around by our customers. They call it a blast radius. If one device goes down, what ends up happening is there's actually an event that has to be restarted. Whatever work was running on that server, plus all these other servers, is hard to salvage. You can't do that anymore. So the need, especially when it comes to human-safety stuff, like a hospital back end like you're talking about, is for that to be minimized, minimized by orders of magnitude more than was the case before, right? There was always a need for reliability and quality in a data center before, but now it becomes very real for that reason. That's the reason we use terms like blast radius. It is a very colorful term, but it absolutely fits that particular event.

It makes a ton of sense.

Definitely nails it.

So if I look back to the different power challenges that we're seeing, everybody is talking about efficiency, power density, safety, protection. What are we hearing from customers, from other people in the field about how we're going to get there? What challenges are we seeing?

That's the funny thing. When you talk to somebody who's not soaking in this stuff, like possibly the three of us are, and you talk about power being a problem to solve, that we need to get higher efficiencies, they're all thinking about trees and greenery and forests being saved. The reality is, it is not that.

Well, it could also be that.

Yeah, sure. There's another element over here that I'm--

Just stop using AI searches and go back to Google.

It's not the only thing. Maybe that's what you mean.

No question, but the driving imperative behind why these things are accelerating so much-- I talked about the fact that you're hitting a hockey stick-- that doesn't happen because of anything other than a do-or-die moment. And that's where we are. If you're looking at the latest GPUs and ASICs, we're talking about each one of those sucking down about 2,000 amps of current. Now, if you're an electrical engineer, you know that that is a staggering amount of current. And all of that has to be shoved through very tiny pieces of copper. And that has to happen continuously and be safe and reliable over a lifetime of 15 years. This is a crazy problem to solve.

So your point is really that it becomes more of an existential crisis rather than a nice-to-have, an I-want-to-save-x-dollars-per-year-from-a-green-energy-scheme kind of thing. It's more, here's the stuff that we need to have in place, because every inch, every square millimeter of PCB footprint we cross to carry that power means that we're losing power along the way. And then we're trying to power these very, very power-hungry processors, which means that even to get it started up and going, the design from the get-go has to be perfect.

Even if you look at the amount of energy consumption that is predicted to be in data centers, our energy consumption is going to outstrip the amount of energy infrastructure that we're putting in place.

And that is the point.

It's like 2x the amount of energy that we're going to need versus what we're able to generate.

4.5%, that was the 2023 number. A little less than 4.5% of every watt generated on the planet was used up in a server somewhere.

--for servers.

That was in 2023.

It's 10% now, right?

'28, we're thinking about more than 10. It's about 12%, right?

It's definitely only going up.

And if you think about the fact that we're talking about a period of a few years, something that shifts the paradigm of how we as a species consume energy, it should give you a sense of the enormity of the change that is coming. We're talking about a few years in which you go up roughly three times, with that much more of all the energy on the planet now consumed by something that didn't exist a few years ago. This is insane if you think of it.

It's crazy. It's completely crazy.

Where that comes into focus-- and you asked a question before-- when you talk about AI and ML, everybody talks about dogs playing poker or whatever. But solving the power problem over there is, in my opinion at least, one of the most exciting problems out there. I feel personally pretty lucky to be back in the middle of trying to get this thing solved, because the reality is, every interesting problem is a good one if it's a problem of optimization, if you will.

And in this particular case, I'll just throw a few things out there. It is a reality that to be able to deliver larger and larger amounts of power, you have to go to higher and higher voltages, because you have to reduce the current, because there's an I-squared-R loss. These are fundamental concepts in physics that you cannot escape. So you have to go higher and higher in voltage.
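
The I-squared-R point can be written out directly. This is the standard relation, restating what is described here rather than adding anything new:

```latex
P = V I \quad\Rightarrow\quad I = \frac{P}{V},
\qquad
P_{\text{loss}} = I^{2} R = \left(\frac{P}{V}\right)^{2} R \;\propto\; \frac{1}{V^{2}}
\quad \text{(fixed delivered power } P \text{, fixed copper resistance } R\text{)}
```

So doubling the distribution voltage cuts conduction loss in the same copper by a factor of four, and moving from 48 volts toward the 800-volt class discussed below cuts it by roughly (800/48)^2, about 280 times.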

So I didn't bring my calculator with me, but I got my engineering notepad.

Well, there you go.

I got my engineering notepad here. So if we're talking about a megawatt, and most backplanes on the server rack right now are at 48 volts, right? So a megawatt at 48 volts, quick math in my head tells me that's about 20,000 amps. Is that right? 20,000 amps. And again, I don't have the exact number on the wire gauge, but carrying that amount of current on a busbar is just insane, right? So we definitely see this shift to going to higher voltages. We hear 800 plus [INAUDIBLE]--

The only point I'll make is it's not insane, it's impossible, because what ends up happening is you will cook the copper, and you will not have any kind of usable lifetime. So that's the reality. And when I say impossible, I'm not saying you can't let the thing run for a few hours or something. You can. But as a production system that's supposed to last some number of years, it's not viable. So that's the key thing, why there's so much excitement around creating these power structures to serve these needs in the AI/ML space: we will end up making it possible, not better, but possible.
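
A quick check of the mental math in this exchange, using the voltages mentioned in the conversation; this is only a sketch of the arithmetic, not a busbar design.

```python
# Current needed to deliver 1 MW at different distribution voltages.
P_RACK_W = 1_000_000  # 1 megawatt rack, as discussed

for bus_v in (48, 400, 800):
    amps = P_RACK_W / bus_v
    print(f"{bus_v:>4} V bus -> {amps:,.0f} A")

# 48 V  -> ~20,833 A, which matches the "about 20,000 amps" estimate.
# 800 V -> ~1,250 A, roughly 17x less current (and ~278x lower I^2*R loss
#          in the same copper), which is why the industry is pushing
#          distribution voltage up.
```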

So what we're seeing is the need-- and we'll get into why-- to remove the power supplies from the IT racks and move them into what a lot of people are referring to as a sidecar. And my team is responsible for designing those reference designs. And I feel kind of bad, because the reason they want to take the power supplies out is that we can't make them power dense enough to get the GPUs and CPUs close together. So basically, they're like, your power supplies are taking up too much space. They generate too much heat. So we've got to move them over here to this sidecar. So that's what my team is designing.

And honestly--

We're pretty excited about that.

We've been talking about moving the curve on power density for what, the last two, three decades? But when it comes to enterprise power, it's about keeping on top of how much we're scaling, even our process geometries, and how we stay on top of getting more and more power through a smaller and smaller area in thermally efficient ways. I think this will continue to be a challenge, for sure.

But even at 48 volts, you have some eFuse parts, right, that are very power dense. So you still get to play over in the IT rack. See? My team, we don't even get to play over there anymore. We've totally been kicked out by Kannan's GaN FETs. We're over there.

That's the benefit of being the insurance provider. You're always on every board.

Yeah, but the journey doesn't stop with the sidecar, right? You know this. The whole point of bringing a sidecar close to where you have that one-megawatt load, which is effectively what a server rack is going to look like, is that the challenge doesn't end. Again, there is a power path from when you plug in that server all the way down to the core. And that is--

So we like to refer to this as the grid to the gate.

Yeah.

So from the grid-- from the grid all the way to the gate inside that CPU.

I heard that yesterday.

Yeah, sounds like something you just made up right now. No, I'm just kidding.

It's kind of catchy.

Yeah, it is kind of catchy. But it's so true. I think it really encapsulates the exact problem, which is, how do you take-- first of all, the grid has to be able to supply it. But assuming it does, how do you take that energy every step of the way, losing as little as possible, all the way down to that last gate on your fancy GPU?

This is cool. So Kannan, you're going to like this. So I was talking to a customer, and they said that when they turn on their workloads, they have to stagger how many workloads they start at a given time so that the electric company generating the electricity can ramp up the amount of electricity going to the data center. Imagine this, right? R2's over here, and I'm doing some ChatGPT search.

And all of a sudden, all the lights in the neighborhood dim because this data center is taking so much power. So they said that in order to ramp from a 0 to 1 megawatt load, they have to stagger that over something like two minutes. And I'm not going to wait two minutes for my search results to come back. So we need to come up with these other technologies: BBUs, capacitor backup units, all of these types of things. So what are you seeing on that?

That's a fantastic question. And that is the whole point, because with the amount of load that these racks and these AI/ML infrastructure-based data centers we're building present, there simply isn't a way to build enough energy infrastructure to handle not only the constant load but also any kind of surge loads. That basically means you have to have some way to buffer that energy on premises and use that instead.
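
A rough sizing sketch of the buffering problem being described. The 0-to-1-megawatt step and the roughly two-minute utility ramp are from the conversation above; the linear-ramp model and the idea that the buffer simply supplies the difference are simplifying assumptions for illustration.

```python
# Rough sizing sketch for the local energy buffer described above.
# Scenario from the conversation: the load steps to 1 MW essentially
# instantly, but the utility can only ramp its supply up over ~2 minutes.
# Assumption (illustrative): the grid ramps linearly from 0 to full power
# over that window, and the buffer supplies the difference.
LOAD_W = 1_000_000      # 1 MW load step (from the discussion)
RAMP_S = 120            # ~2-minute grid ramp (from the discussion)

# Energy the buffer must supply = area between the flat 1 MW load line
# and the linear grid ramp = 1/2 * P * t for a linear ramp.
buffer_joules = 0.5 * LOAD_W * RAMP_S
buffer_kwh = buffer_joules / 3.6e6

print(f"Bridge energy: {buffer_joules/1e6:.0f} MJ (~{buffer_kwh:.1f} kWh)")
# ~60 MJ, or roughly 17 kWh, per 1 MW rack just to cover one ramp event,
# before adding margin for converter losses, depth-of-discharge limits,
# or back-to-back events.
```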

By the way, I can't buffer that inside the IT rack because we've already talked about I got to get all the power out of there.

Hence, the sidecar, right? So you've got to put these batteries or supercapacitors or some kind of energy storage element somewhere in the vicinity of the data center, so that when you have these loads that R2's talking about, you can, for a short period of time, supply that locally and then get back to that constant state. So that is an entirely different problem that we also have to solve, but it is now getting integrated into that power architecture, right, from your grid--

Yeah, and you guys can do the math on the inrush piece, right? As you scale the power-- and the most stressful time is power-up.

Yes.

The startup is the most stressful time. And then the more you scale-- I mean, I'm sure there are other stressful events, but I'm just saying, startup is one of the most stressful times. And we're talking about exponentially scaling the inrush and being able to handle it. And for me, that comes down to, how can I fit that in a really small 5-by-5 chip, right? My energy is going up 10x, but the size of my chip is getting smaller. And there's everything we need to build around it, whether it's packaging technology or even cooling our packages, to be able to absorb that inrush. But then you have to think about the larger scale: across the board, across the power supply, how we manage and stay fault tolerant in any of those startup situations, I think, becomes very--

Always comes back to the fault tolerance.

Well, that's my thing.

Protection.

That's my thing, so I'm going to keep talking about it.
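
To put some illustrative numbers on the startup and inrush point above: the bulk capacitance, path resistance, and current limit below are hypothetical values chosen only to show the scaling, not parameters of any specific eFuse or reference design.

```python
# Simplified illustration of the startup/inrush problem described above.
BUS_V = 48.0        # volts on the distribution bus
BULK_C = 0.05       # 50 mF of downstream bulk capacitance (assumed)
R_PATH = 0.005      # 5 mOhm of source + connector + trace resistance (assumed)

# Uncontrolled hot-plug: peak inrush is limited only by the path resistance.
i_peak_uncontrolled = BUS_V / R_PATH
print(f"Uncontrolled peak inrush: {i_peak_uncontrolled:,.0f} A")   # 9,600 A

# eFuse-style soft start: ramp the output with a controlled dV/dt so the
# capacitor charging current C*dV/dt stays at a programmed limit.
I_LIMIT = 20.0                          # programmed inrush limit, amps (assumed)
dv_dt = I_LIMIT / BULK_C                # required output slew rate, V/s
t_charge = BUS_V / dv_dt                # time to reach the bus voltage
print(f"Soft-start dV/dt: {dv_dt:.0f} V/s, charge time: {t_charge*1e3:.0f} ms")

# The stored energy 1/2*C*V^2 (~58 J here) is the same either way; the
# protection device's job is to spread it out in time and survive the
# dissipation, which is exactly what gets harder as racks scale up.
```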

But the larger point also is that this doesn't happen overnight. Understanding how to get to reliability on anything that we do-- I think because we've got a decade, decade plus-- I don't know, you may know the number better than I do-- but we have multiple decades of operating in marketplaces like the automotive market, where these needs for quality and reliability showed up way before now. And understanding it, and basically carrying a bag full of scars in terms of all the things that we know not to do, which I think is far more precious than knowing what to do, and the energy infrastructure market, for example, all the work that we've done to get UPSs and other things connected to the grid-- that experience is all table stakes at this point, right?

It's one of the exciting things, at least for me: the fact that you get to work with, and start with, a team that already understands these primary concepts and then apply them to basically what you're talking about. So a 5-by-5 piece of silicon, multiple tens of amps-- that is a very hard problem to solve. But we have solved it in other areas before. It's a question of bringing all that together and putting it together for this one area, which just so happens to be 10 times harder. So it's a good starting point, is my point. And that's actually pretty enjoyable to me--

Yeah.

--if I think of it. And honestly, downtime in a server context is just a different scale, right? You guys think of-- throw me a number-- a minute of downtime, how much do you think it costs these guys?

Oh, it's gotta be insane. If you're talking about-- imagine if the New York Stock Exchange--

Oh, yeah.

--went down.

Or your favorite online commercial retailer where people are putting in millions of transactions every minute.

Every minute.

It's thousands of dollars every minute, right? And you're talking hundreds of thousands of dollars every hour. And that's just the financial impact, right, not to talk about safety impact or security impact.

Yeah, if I'm impulse buying and I have to wait another minute because the server's down--

That is the most important thing when we think about redundancy.

--I may change my mind. I may change my mind.

You may never come back.

I'm at the checkout counter. That's it. You had me for that one minute. Now your server went down? Oh, never mind.

The interesting thing is we actually haven't solved that blast radius problem, right? We really haven't. Right now, it's something that the world is dealing with. Every time there is a failure, they just have to deal with it. They have to plan for it. They have to actually have capacity come online to be able to handle that. But the dream is, and actually, it's more than a dream. I think the way to actually create success as a power semiconductor or a power infrastructure provider into this market is to solve that problem. It's basically an automotive or an energy infrastructure-based quality need times 10. And that is basically what we're working on right now, in the sense of--

But are you thinking purely quality? Because I hear a lot about predictive maintenance and diagnostics, and how you pipe some of that data and information about the health of the system, which is now getting more and more complex. As the systems get more complex, it becomes more and more important to just watch and monitor and observe and be able to predict. So it goes down to the building block, which could be your GaN FET and controller or my eFuse that he likes to ding. They all have to be able to report back.

I'm just jealous.

Well, I have good parts.

Creating a new-- yeah, basically bringing the smarts away from just the GPU and the CPU also into all the other constituent little microcontrollers.

Even something like security, right?

Yes. Integral, everything.

You don't really think about security-- and I'm not talking about the guy standing out in front of the data center preventing people from coming inside.

Not that security, yeah.

If you go in and all of a sudden, now because we have digital power supplies, I can hack into that and shut the server down? Wow.

And that's actually happening. It's happening now.

Yeah.

You know of multiple instances.

Quite a few examples of that.

The point is, if you're thinking about it, you can damn well bet that other people are.

It's crazy. So in terms of powering the data centers, there are a lot of different challenges that we're running into: efficiency, density, high-voltage DC distribution, package innovations, integration. All of these types of things are challenges that we are solving every day.

Yeah. Actually, the most important thing about that is that a lot of the ways you can solve those issues you talk about are in direct conflict with each other. And that's where the biggest opportunities are. To keep it simple, for example, it's pretty well known that the most efficient way to deliver power to a place of consumption is by getting higher and higher voltages closer and closer to where it's being consumed. So basically, that is the reason we went from 12-volt distribution on the board to 48 volts now, to talking about even 400- to 800-volt energy coming directly into the servers themselves. That's--

What kind of semiconductors do we need to do that?

That's precisely the point. So the way to deliver power is to go higher in voltage. And one of the other ways in which you can process that power and deliver it efficiently is to switch it faster and harder, because that takes less magnetics, less area, and it generates less heat. But those two things are in direct conflict. To go faster in switching speed, you need to go lower in voltage. And to deliver power better, you need to go higher. So this is where you need newer materials. And that is why gallium nitride, GaN, is one example. Gallium-nitride-based FETs and switches are so important over here because they find that middle ground. This is what I meant by some of the best problems in the world to solve being ones of optimization.
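
One way to see the switch-faster side of that trade-off is the standard buck-converter inductor ripple relation, sketched below. The 48-volt-to-12-volt operating point, load current, and ripple target are illustrative assumptions, not a specific design.

```python
# Why faster switching shrinks the magnetics: the standard buck-converter
# ripple relation L = Vout*(1 - D)/(f_sw * dI). The 48 V -> 12 V operating
# point and the 30% ripple target are illustrative assumptions.
VIN, VOUT = 48.0, 12.0
IOUT = 50.0                 # amps, assumed load
RIPPLE = 0.3 * IOUT         # 30% peak-to-peak current ripple target
D = VOUT / VIN              # ideal buck duty cycle

for f_sw in (100e3, 200e3, 1e6):  # 100 kHz, 200 kHz, 1 MHz
    L = VOUT * (1 - D) / (f_sw * RIPPLE)
    print(f"f_sw = {f_sw/1e3:>6.0f} kHz -> L = {L*1e6:.2f} uH")

# Required inductance (a rough proxy for magnetic volume) scales as 1/f_sw:
# ~6 uH at 100 kHz drops to ~0.6 uH at 1 MHz for the same ripple. Getting
# there at high voltage without wrecking efficiency is what fast, low-loss
# switches like GaN are for.
```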

So one of the things that my team does and that we work on is solving system-level challenges. And we run into problems with these higher-voltage, higher-power transistors, whether it's silicon carbide or GaN, especially when we try to go to higher switching frequencies. You're talking about higher switching frequencies. We're talking up to a few hundred kilohertz, or higher?

The current state of creating a power converter is what you said. It's 100 kilohertz, maybe 200. But a new technology like GaN-- and honestly, the reason I keep talking about GaN and why I'm excited about it is that the best power switch in the world is quite simply the one that takes the least amount of energy to turn on and the least amount of energy to turn reliably off. And today, that is GaN. There isn't a better switch out there. So if you take that, and then you add to it the problem of being able to switch faster, taking it from 100 or 200 kilohertz all the way up to megahertz-type switching while staying at high voltage, there really is only one switch that does that in the under-20-kilowatt range, and that is GaN. That's why all our new GaN power block products, as well as the more integrated versions of these switches, are so important.
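
A small sketch of why per-cycle switching energy is the figure of merit once frequency goes up. The gate charge and switching energy values are hypothetical round numbers for illustration, not specifications of any particular FET.

```python
# Frequency-dependent losses are energy-per-cycle times f_sw. The charge
# and energy values below are hypothetical, chosen only to show the scaling.
QG_C = 6e-9        # gate charge, coulombs (assumed)
VDRV = 6.0         # gate-drive voltage, volts (assumed)
E_SW_J = 20e-6     # combined turn-on + turn-off energy per cycle, joules (assumed)

for f_sw in (100e3, 1e6):
    p_gate = QG_C * VDRV * f_sw          # gate-drive power
    p_sw = E_SW_J * f_sw                 # voltage/current overlap switching loss
    print(f"f_sw = {f_sw/1e3:>6.0f} kHz: gate {p_gate*1e3:.1f} mW, switching {p_sw:.1f} W")

# Both terms scale linearly with f_sw, so a 10x jump in frequency needs a
# roughly 10x reduction in per-cycle energy just to break even on loss --
# which is the case being made here for low-charge GaN switches.
```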

So what about the integration? Tell me about the integration. So there's drivers, there's protection. All of these different things help to make the system more reliable, right?

Yeah, and I'm sure you'll want to talk about the high-voltage measurements--

Well, if we make your switches reliable, then we don't even need her eFuses anymore.

Yeah, well-- yeah.

Let's talk about that a little bit too.

I doubt you're building any power board without having a fuse component associated with it, that's for sure. But speaking to the challenges, though, one of the things that we have to solve, when we think about predictive maintenance or monitoring, is how we get data back that we can pipe through the system to monitor its health. Usually, that's a lot of digital intelligence, right? You're thinking about some form of high-voltage metrology. You're thinking of your ADCs, your DACs, your black-box systems, PMBus protocols, et cetera.

So speaking of conflicts, right, you need a high amount of digital content, and it's going up generation upon generation. At the same time, you need to scale voltage and you need to scale power density, which means you need a better and better class of analog power delivery along with higher digital integration. And I think that's where products like the 48-volt eFuse really give you the best of both worlds: high-end digital metrology and the ability to scale your digital content without losing a beat on the pure RSP, right, the pure resistivity of the chip itself, so we can deliver max power output for that small square footage.
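
As a sketch of what piping that health data back can look like in practice, here is a minimal telemetry poll, assuming a PMBus-capable power device sitting on an SMBus/I2C segment. The bus number and device address are placeholders, and the command codes shown are the standard PMBus telemetry commands rather than any specific part's register map.

```python
# Minimal telemetry-poll sketch for a PMBus-capable power device.
# Values use the PMBus LINEAR11 format (5-bit exponent, 11-bit mantissa).
from smbus2 import SMBus

DEV_ADDR = 0x40          # placeholder 7-bit address
READ_VIN, READ_IOUT, READ_TEMP, STATUS_WORD = 0x88, 0x8C, 0x8D, 0x79

def decode_linear11(raw: int) -> float:
    """Decode a PMBus LINEAR11 word into a real value."""
    exp = (raw >> 11) & 0x1F
    mant = raw & 0x7FF
    if exp > 0x0F:        # sign-extend the 5-bit exponent
        exp -= 0x20
    if mant > 0x3FF:      # sign-extend the 11-bit mantissa
        mant -= 0x800
    return mant * (2.0 ** exp)

with SMBus(1) as bus:    # bus number is platform-specific
    vin = decode_linear11(bus.read_word_data(DEV_ADDR, READ_VIN))
    iout = decode_linear11(bus.read_word_data(DEV_ADDR, READ_IOUT))
    temp = decode_linear11(bus.read_word_data(DEV_ADDR, READ_TEMP))
    status = bus.read_word_data(DEV_ADDR, STATUS_WORD)
    print(f"VIN={vin:.2f} V  IOUT={iout:.2f} A  TEMP={temp:.1f} C  STATUS=0x{status:04X}")
    # A host-side monitor would log these, trend them, and flag drift or
    # raised status bits long before anything actually trips.
```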

I love that.

And that answer actually fits perfectly with the question you asked me, which is, OK, you're adding these drivers, you're adding these diagnostic capabilities to a power switch. But that is the point. And the good news over here is that we're now on the 10th generation of our latest and greatest BCD process. That allows us to do mixed-signal design in ways that are better than anything we've ever done before, which basically means high-density digital, which allows you to put a lot of intelligence into a very small area, along with analog capabilities. So you can do that level shift that you need to go from the intelligence of a microcontroller to being able to turn a switch on and off reliably, while making sure you have all the diagnostic information on that switch available to you.

So the ability for us at TI-- and this is, I think, one of the key reasons I'm excited about this portfolio-- is to put the best power switch in the world, which is an e-mode GaN, together with the best BCD process in the world today in the same package. So you get the best of both worlds, just to repeat what you said: the best power switch does one thing and one thing well, turns on and turns off, and the best possible intelligence, level-shift capability, and diagnostics to monitor it sit right next to it in the same package. This creates an ease-of-use experience for any customer using these devices that you can't beat. And we can take that and extend it. We can put more intelligence, more metrology, more of everything into the same chip and make it easy to use.

So we have 48-volt products. But now we see the industry moving to 400 volts. So what are we going to do to solve that problem?

I'll talk to that too. We've had a good run now, I'd say 10 to 15 years, of investing in technologies like isolation.

Sure.

Isolation is simply the ability for us to build a structure on a piece of silicon that is able to withstand up to 1,000 volts for 40 years. We can do that in a few millimeters now. In a little package like that, we have that capability. The minute you have that, it opens up a whole world. So now, to be able to sense a high-voltage line, to see how it's doing, to decide whether it's going to fail in the near future, you have that same component that allows you to sense off of a high-voltage rail while also doing all this other stuff. So integrating not just a driver or intelligence or diagnostics but also the ability to sense the world around you, no matter how noisy or dangerous it gets from a voltage standpoint, is absolutely a step up in capabilities.

I agree with you. And I think one of the things, Robert, that maybe is underrated, but that I think is equally important when you talk about challenges and how we solve them efficiently, is that when you have access to a set of tools or building blocks that can solve all of the problems he's talking about, you can find a way to put things together, because engineering is about putting things together, failing, trying again, and getting better, right?

Yeah.

And it starts with the readiness you have in the building blocks. And I think that's why having a portfolio, whether it's a GaN FET or an eFuse that can work at 48 volts, all just kind of comes together to help our customers build fast proofs of concept, to fail quickly and give us feedback on what that integration looks like, so we can move faster than the pace of the market, right?

That's also why things like the TOLL package that we're all very excited about and talking about are a fantastic example of what you're describing, which is the integration possibilities that we have. That package has the best switching element in the world, which is the GaN FET right now. And it has the best BCD process, which we use to deliver gate drivers, diagnostics, et cetera. And we can also build in high-voltage metrology. So there could be things like a high-voltage resistor, which we can use to go off and sample high-voltage lines. All of that allows us to pack more intelligence into a TOLL package, which was, to this point, just a boring discrete FET package.

That's just industry standard, right?

Yes, it's an industry-standard package, but now you have the best of both worlds: you have the intelligence you need, the ability to tell what the environment around you is doing, while at the same time not compromising on the good thermal benefits of a package like the TOLL. That's what makes it powerful-- having access to all these elements of IP that we've gotten pretty good at putting together, stuffing them all into the same package, and then making your life, as you said, easier when you're building these--

The engineers on my team really do appreciate products from both of you. In terms of being able to solve our customers' challenges, having good parts and having access to this technology really enables us to deliver world-class solutions, whether it's an 8-kilowatt AI power supply or a 12-kilowatt BBU backup power supply. These are all just examples of what we can do by having all of the different parts, all the way from, as we like to say, the grid to the gate. We've got all the parts to cover that.

That's the thing though.

So doing those reference designs and being able to share those with customers is just really rewarding, really, really powerful. I think we went through a lot of different topics here. And what was one thing-- if you could pick one thing, Kannan, that you want people to take away from this, what would it be?

Yeah, for me-- well, personally, if I start with the personal aspect of it, this is a lot of fun, because it is a problem that is unsolved, and it is a problem that has to be solved. And so it puts you in all kinds of a box, in a pressure-cooker situation, which I kind of enjoy.

Yeah.

I don't know why, but I do. The other aspect of it is also that that is what is driving this, right? What is driving this is an actual need. This is one of those paradigm shifts that come along very rarely. I've actually heard this said at a customer: the first energy revolution was in the UK. It was coal. It changed the world. The second one was oil. It was in the US. It changed the world. And the country that this customer happened to be in was intent on owning the next one.

They were saying the third energy revolution is going to be control: actual processing and power delivery, how you cook and send that energy, and how you share energy among the different things consuming it. That is the third revolution. And by God, we're going to own it. It was a different story, but I found it very interesting. And if you think of it, now that you have access to this energy, multiple types, from fossil fuels to solar to wind and all that, the trick is in using this energy, because it goes back to what you said: the need for energy is growing far faster than our ability to generate it, which means there's only one thing left to do. And that is to squeeze whatever the heck we have into being able to deliver more.

Right.

And whoever gets to do that, whoever solves that problem, wins that third revolution. That's why, if there's nothing else to take away, it is that that revolution is here. It is needed, and it was needed yesterday. And we're running to catch up.

And I love that. Priya?

I like that. Do I need to top that?

No.

I think for me, if I think about what is one thing that's super exciting as we get deeper and deeper into the engineering challenges for a data center, it's, like I talked about, seeing that really compressed timeline, where you see multiple inflection points within less than a decade. And you're having to move, honestly, faster than the speed of light. And you're always trying to stay one step ahead, right? I think that aspect of it is super cool.

And I think that being a part of this conversation is going to push me and the products that I make to always be on the edge of innovation, to always be on the edge of power density. I'm just going to be naturally forced to do that as I try to take some of these big, chunky scaling problems and break them down into my small chips, right? So I think that's very exciting for me. I love being on top of inflection points, and this definitely feels like a the-moment-is-now kind of situation. It's not too often you get to do that. So that's the most interesting point for me.

I love that. I look pretty good for my age. It's hard to believe that I've been--

That you do. That you do.

--doing this job, this type of work, for more than half of my life. But I would say that in that time, this period, similar to what Kannan said, is revolutionary, evolutionary. And I've never seen anything quite like what we're seeing in the data center. So just happy to be a part of it. Happy that Texas Instruments has all of the parts to go from the grid to the gate.

We got that.

That term is going to catch on.

Yeah, yeah, it is.

Fingers crossed.

Yeah.

[MUSIC PLAYING]