Consider the following: For every $1 spent on a GPU, roughly $1 needs to be spent on energy costs to run the GPU in a data center. So if Nvidia sells $50B in run-rate GPU revenue by the end of the year (a conservative estimate based on analyst forecasts), that implies approximately $100B in data center expenditures.
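A minimal sketch of that arithmetic, using the article's round numbers. The 50% margin used to bridge from the ~$100B of data center spend to the headline $200B figure is an illustrative assumption, not something stated in the excerpt above.

```python
# Back-of-envelope math from the excerpt above; all figures are the
# article's rough estimates, not hard data.

GPU_SPEND = 50e9              # ~$50B run-rate Nvidia GPU revenue (analyst forecasts)
ENERGY_PER_GPU_DOLLAR = 1.0   # ~$1 of energy cost per $1 of GPU spend

data_center_capex = GPU_SPEND * (1 + ENERGY_PER_GPU_DOLLAR)
print(f"Implied data center expenditure: ${data_center_capex / 1e9:.0f}B")  # ~$100B

# Assumed extension: if the owners of that infrastructure also need roughly
# a 50% margin to justify the investment (an assumption for illustration),
# the required lifetime revenue doubles, which is the order of magnitude
# behind the headline $200B question.
ASSUMED_MARGIN = 0.5
required_revenue = data_center_capex / (1 - ASSUMED_MARGIN)
print(f"Implied lifetime revenue needed: ${required_revenue / 1e9:.0f}B")  # ~$200B
```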
During historical technology cycles, overbuilding of infrastructure has often incinerated capital, while at the same time unleashing future innovation by bringing down the marginal cost of new product development.
The AI infrastructure build-out is happening. Infrastructure is not the problem anymore. Many foundation models are being developed; this is not the problem anymore, either. And the tooling in AI is pretty good today. So the $200B question is: What are you going to use all this infrastructure to do? How is it going to change people’s lives?
AI’s $200B Question (www.sequoiacap.com)
GPU capacity is getting overbuilt. Long-term, this is good for startups. Short-term, things could get messy. Follow the GPUs to find out why.