
Why NVIDIA's Growth Days May Be Over – Motley Fool



NVIDIA (NASDAQ: NVDA) has become one of the hottest companies in the technology industry. Its graphics processing units (GPUs) have made computing more efficient, and its stock has risen more than 500% over the past three years.

I've been a fan of NVIDIA for a long time. I visited its headquarters in 2015 and was excited to learn about deep learning. I have been a shareholder for years and have made quite a lot of money on the stock.

But in recent months, NVIDIA's stock has taken a beating, and analysts are divided about where its growth will come from. The company already dominates consumer gaming, so it is now turning to data centers as the key market for expanding its top line. Bulls point to NVIDIA's 58% revenue growth in data centers as evidence that it is making significant progress.

I remain much more skeptical. I agree that data centers are very important, especially those of the largest cloud providers, such as Amazon.com (NASDAQ: AMZN) with Amazon Web Services (AWS) and Alphabet (NASDAQ: GOOG) (NASDAQ: GOOGL) with the Google Cloud Platform.

But it is becoming increasingly clear to me that the GPU is not the right fit for data centers. I believe NVIDIA's future will look very different from its past, and that is not good news for investors.


Is there still a place for NVIDIA's graphics processors in data centers like this one? Image source: Getty Images.

Looking Into The Clouds

To set the scene, the massive data centers of the large cloud computing providers are the holy grail for hardware vendors. There are only a handful of these locations around the world, and each one handles an enormous amount of data for analysis. A win here could mean significant GPU sales for NVIDIA in the years ahead.

But the titans of the cloud have unique needs in their workloads, and those needs often do not align well with the strengths of graphics processors.

For example, Amazon is developing its own chips to make its Echo devices more responsive to voice commands. For this application, latency (the time it takes to understand and respond) is what matters most. Google is developing its own chips to reduce its data centers' energy consumption; there, the number of operations performed per watt of power is what matters. Neither of these applications requires image or video recognition.

The underlying code of software applications is also constantly changing, and machine learning algorithms are constantly being retrained. An algorithm might first learn English from the Oxford English Dictionary, but eventually be retrained to recognize that "killing that interview" is a good thing rather than an act of violence. Now multiply those shifting needs and that constantly changing code by the hundreds of thousands of other customers, each of which rents storage, processing, and "machine learning as a service" from cloud providers such as Amazon Web Services and the Google Cloud Platform. Things quickly start to get very complicated.

To its credit, NVIDIA has done its best to solve its customers' problems. It has used optimizers to match the logic of what customers are trying to achieve to the GPU hardware best able to achieve it. TensorRT is one example: it lets NVIDIA's GPUs be tuned to run specific data center applications efficiently.
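To make that concrete, here is a minimal sketch of what a TensorRT-style build step can look like in Python. The file names are placeholders and the exact API calls vary by TensorRT version, so treat it as an illustration of the workflow rather than production code: a model trained elsewhere is handed to NVIDIA's optimizer, which rebuilds it into an engine tuned for a specific GPU.

```python
import tensorrt as trt

# Illustrative sketch (API details vary by TensorRT version).
# "model.onnx" is a placeholder for a trained network exported from any framework.
logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
)
parser = trt.OnnxParser(network, logger)

with open("model.onnx", "rb") as f:
    parser.parse(f.read())  # translate the trained model into TensorRT's internal graph

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)  # allow reduced precision where the GPU supports it

# TensorRT fuses layers and selects GPU-specific kernels while building the engine.
engine = builder.build_serialized_network(network, config)
with open("model.plan", "wb") as f:
    f.write(engine)  # the serialized engine is loaded back at inference time
```

The key point for the argument here is that all of this tuning happens after the fact: the software is reshaped to fit a fixed piece of silicon.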

But the fit is never perfect.

Companies had to accept that their software would be bent, to some degree, to fit NVIDIA's existing hardware and its proprietary features. End users didn't know the underlying architecture, which was abstracted away by NVIDIA's optimizers. They just knew that GPUs handled their workloads much better than CPUs did. That is why we have seen such significant GPU adoption in recent years.

This is the problem for NVIDIA. GPUs are not a magic solution that can handle all of a data center's complexity. Inefficiencies are created every time an application is forced onto a GPU. And it gets harder every day.

Big companies with deep pockets are already developing their own integrated circuits to optimize for individual tasks. But even for those without billions of dollars and dedicated research teams, a solution is beginning to emerge.

What if there were chips that could be programmed, and then reprogrammed, to always match exactly what you want your software to do?

Image source: Getty Images

Enter the FPGA

Such chips exist, and they are called field-programmable gate arrays (FPGAs). Their logic can be changed again and again, which means they can adapt to changing software requirements.

That distinguishes them from instruction-based chips such as CPUs and GPUs, though the difference didn't matter much historically. Traditionally, computing was done on CPUs, which doubled in performance every 18 months or so. Then GPUs found a more efficient way to improve on the CPU.

But that point of differentiation really matters today. Artificial intelligence (AI) is a fickle beast, and the constant retraining of AI algorithms is hard on CPUs and GPUs. And just as with Alexa, latency becomes more and more important for any application that needs a split-second response. Inefficiencies become less tolerable.

The programmable nature of FPGAs can eliminate those inefficiencies entirely. By abstracting a layer between the software models and the hardware – a "model optimizer library," to the technical crowd – anyone using FPGAs can, in theory, run every program perfectly. FPGA-as-a-service could look like CUDA on steroids: the hardware is configured to match specific algorithms, rather than the algorithms being mapped onto whatever NVIDIA GPU is available. And FPGAs can be optimized and then re-optimized over time, which is convenient as code and logic change.
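For a sense of what "reprogrammable" means in practice, here is a hedged sketch of the host side of an FPGA-as-a-service job, using the OpenCL bindings that Xilinx-based cloud FPGAs typically expose. The bitstream file vadd.xclbin and the kernel name vadd are hypothetical; the point is that re-optimizing for new code means compiling and loading a different bitstream, while the host logic stays the same.

```python
import numpy as np
import pyopencl as cl

# Hypothetical host-side sketch: load a pre-built FPGA bitstream and run one kernel.
# "vadd.xclbin" and the kernel name "vadd" are placeholders for whatever algorithm
# has been compiled to the fabric; swapping algorithms means swapping bitstreams.
platform = next(p for p in cl.get_platforms() if "Xilinx" in p.name)
device = platform.get_devices()[0]
ctx = cl.Context([device])
queue = cl.CommandQueue(ctx)

with open("vadd.xclbin", "rb") as f:
    bitstream = f.read()
program = cl.Program(ctx, [device], [bitstream]).build()

a = np.arange(1024, dtype=np.float32)
b = np.arange(1024, dtype=np.float32)
out = np.empty_like(a)

mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, out.nbytes)

# FPGA kernels built with high-level synthesis are usually dispatched as a single work-item.
program.vadd(queue, (1,), (1,), a_buf, b_buf, out_buf, np.uint32(a.size))
cl.enqueue_copy(queue, out, out_buf)
queue.finish()
```

The contrast with the GPU example above is that here the hardware itself was rebuilt around the algorithm; the host code is just bookkeeping.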

FPGAs won't be the answer for everyone. Their up-front cost is higher than that of CPUs and GPUs. And they require a lot of programming time from experienced engineers.

But those factors don't apply to the cloud providers. They have access to the IT talent and can afford the premium price tag. The advantage they gain is lower overall power consumption in their data centers, which they can pass along as more competitive rates to their storage and processing customers.

This is the right market, and right now is the right time, for FPGAs. That is why the largest cloud providers, including AWS, Microsoft, Alibaba, and Baidu, are expanding rapidly around the world. I believe NVIDIA's growth rates in data centers will slow, and that its biggest days may be behind it.

John Mackey, CEO of Whole Foods Market, an Amazon subsidiary, is a member of The Motley Fool's board of directors. Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool's board of directors. Teresa Kersten, an employee of LinkedIn, a Microsoft subsidiary, is a member of The Motley Fool's board of directors. Simon Erickson owns shares of Amazon and Nvidia. The Motley Fool owns shares of and recommends Alphabet (A shares), Alphabet (C shares), Amazon, Baidu, and Nvidia. The Motley Fool owns shares of Microsoft. The Motley Fool has a disclosure policy.

