While Nvidia's recent 4-for-1 stock split makes its shares more affordable, it's what the company is doing technically that's truly dynamic.
Nvidia today announced the release of TensorRT 8, the latest version of its software development kit (SDK) designed for AI and machine learning inference.
The inference breakthrough is likely to make conversational AI smarter and more responsive in its interactions. TensorRT 8 is the eighth generation of the company's AI inference software, and it slashes inference time in half for language queries -- enabling developers to build the world's best-performing search engines, ad recommendations and chatbots and to offer them from the cloud to the edge.
At the Last Futurist we predict Nvidia is the next Big Tech company after Apple, Alphabet, Amazon, Microsoft, Facebook and Tesla (by market cap, not by actual technological influence). That would make it one of the Big Seven of America.
- TensorRT 8's optimizations deliver record-setting speed for language applications, running BERT-Large, one of the world's most widely used transformer-based models, in 1.2 milliseconds.
- Built for deploying AI models that can power search engines, ad recommendations, chatbots, and more, Nvidia claims that TensorRT 8 cuts inference time in half for language queries compared with the previous release of TensorRT (see the build sketch just after this list).
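To make the deployment story concrete, here is a minimal sketch of the typical TensorRT 8 workflow: parse a trained model exported to ONNX, then compile it into an optimized engine with the Python API. The file names, the BERT-Large input model and the FP16 choice are our illustrative assumptions, not details from Nvidia's announcement.

```python
# Minimal sketch of the TensorRT 8 build flow: ONNX model in,
# optimized engine out. Paths and the FP16 choice are illustrative.
import tensorrt as trt

ONNX_PATH = "bert_large.onnx"    # hypothetical ONNX export of the model
ENGINE_PATH = "bert_large.plan"  # where the compiled engine is saved

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)

# ONNX models require an explicit-batch network definition.
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open(ONNX_PATH, "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("failed to parse the ONNX model")

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)  # permit reduced-precision kernels

# Note: models with dynamic input shapes (common for BERT) also need
# an optimization profile; omitted here for brevity.
serialized_engine = builder.build_serialized_network(network, config)
with open(ENGINE_PATH, "wb") as f:
    f.write(serialized_engine)
```

The same builder config object is also where the sparsity and INT8 features discussed below are switched on.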
The company's revenue has roughly doubled during the pandemic, and its position at the intersection of gaming, data centers, crypto and the future of AI makes it one of the most significant companies of the 2020s.
Nvidia's products increasingly shape the developers of tomorrow and the pace of AI innovation. Over the past five years, more than 350,000 developers across 27,500 companies in wide-ranging areas, including healthcare, automotive, finance and retail, have downloaded TensorRT nearly 2.5 million times. If AI is becoming ubiquitous in business, Nvidia's influence on the future of AI is growing with it.
Sparsity and Quantization
TensorRT 8's breakthroughs in AI inference are made possible through two key features.
Sparsity is a new performance technique in NVIDIA Ampere architecture GPUs that increases efficiency, allowing developers to accelerate their neural networks by reducing the number of computational operations.
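Ampere's sparsity support is built around a 2:4 fine-grained structured pattern, where at most two of every four consecutive weights are nonzero. The toy sketch below illustrates what pruning a weight matrix to that pattern means; it is a conceptual illustration we wrote, not Nvidia's production pruning tooling.

```python
# Illustrative sketch: pruning a weight matrix to a 2:4
# structured-sparsity pattern (at most two nonzero values in every
# group of four). A conceptual toy, not Nvidia's tooling.
import numpy as np

def prune_2_of_4(weights: np.ndarray) -> np.ndarray:
    """Zero out the two smallest-magnitude weights in each group of 4."""
    flat = weights.reshape(-1, 4).copy()
    # Indices of the two smallest-magnitude entries per group of four.
    drop = np.argsort(np.abs(flat), axis=1)[:, :2]
    np.put_along_axis(flat, drop, 0.0, axis=1)
    return flat.reshape(weights.shape)

w = np.random.randn(8, 8).astype(np.float32)
w_sparse = prune_2_of_4(w)
assert (w_sparse.reshape(-1, 4) != 0).sum(axis=1).max() <= 2
```

In TensorRT 8 itself, developers opt into the sparse Tensor Core kernels with the SPARSE_WEIGHTS builder flag (config.set_flag(trt.BuilderFlag.SPARSE_WEIGHTS)) after pruning the model with Nvidia's sparsity tooling.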
Quantization-aware training enables developers to use trained models to run inference in INT8 precision without losing accuracy. This significantly reduces compute and storage overhead for efficient inference on Tensor Cores.
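To make the INT8 point concrete, here is a toy sketch of the symmetric quantize/dequantize arithmetic that quantization-aware training simulates during the forward pass, so the model learns to tolerate the reduced precision. The function names are ours; real QAT is done with a framework toolkit rather than by hand.

```python
# Toy sketch of symmetric per-tensor INT8 quantization, the arithmetic
# that quantization-aware training (QAT) simulates during training.
# Function names here are illustrative, not a real API.
import numpy as np

def quantize_int8(x: np.ndarray):
    scale = np.abs(x).max() / 127.0          # one scale per tensor
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

x = np.random.randn(4, 4).astype(np.float32)
q, s = quantize_int8(x)
x_hat = dequantize(q, s)
print("max quantization error:", np.abs(x - x_hat).max())
```

In TensorRT the INT8 kernels are enabled with the INT8 builder flag; with quantization-aware training, the scales learned during training travel with the model instead of being derived from a separate calibration pass.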
Nvidia is seeing broad industry adoption: industry leaders have embraced TensorRT for their deep learning inference applications in conversational AI and across a range of other fields. Meanwhile, it is still not clear whether Nvidia's acquisition of the U.K.'s Arm will be delayed.
Nvidia CEO Jensen Huang, the Taiwanese-American businessman known for his usual leather jacket, has more good news for developers: TensorRT 8 is now generally available and free of charge to members of the NVIDIA Developer program. The latest versions of plug-ins, parsers and samples are also available as open source from the TensorRT GitHub repository.
TensorRT essentially tunes a trained model, through techniques such as layer fusion and precision calibration, toward the best balance of small model size and high accuracy for the system it will run on. Nvidia claims that TensorRT-based apps perform up to 40 times faster than CPU-only platforms during inference. The specs on the improvements Nvidia is making do sound impressive.
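For readers curious what running one of these engines actually looks like, below is a rough sketch of loading a serialized TensorRT 8 engine and timing a single forward pass. The engine file name carries over from the earlier build sketch, pycuda is one common way to manage device memory, and static input shapes are assumed throughout.

```python
# Rough sketch: timing one inference pass with a serialized TensorRT
# engine. Assumes static input shapes; "bert_large.plan" is the
# illustrative engine file from the earlier build sketch.
import time
import numpy as np
import pycuda.autoinit  # noqa: F401  (creates a CUDA context)
import pycuda.driver as cuda
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
with open("bert_large.plan", "rb") as f:
    engine = trt.Runtime(logger).deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

# Allocate one device buffer per binding (inputs and outputs).
buffers = []
host_out = out_dev = None
for i in range(engine.num_bindings):
    shape = tuple(engine.get_binding_shape(i))
    dtype = trt.nptype(engine.get_binding_dtype(i))
    arr = np.zeros(shape, dtype=dtype)
    dev = cuda.mem_alloc(arr.nbytes)
    buffers.append(dev)
    if engine.binding_is_input(i):
        cuda.memcpy_htod(dev, arr)        # dummy input data
    else:
        host_out, out_dev = arr, dev      # remember the output buffer

start = time.perf_counter()
context.execute_v2([int(b) for b in buffers])
cuda.memcpy_dtoh(host_out, out_dev)
print(f"latency: {(time.perf_counter() - start) * 1e3:.2f} ms")
```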
Nvidia is Powering Faster AI at Lower Costs
There's an explosion of demand for increasingly sophisticated AI-enabled services like image and speech recognition, natural language processing, visual search, and personalized recommendations. At the same time, data sets are growing, networks are getting more complex, and latency requirements are tightening to meet user expectations.
NVIDIA® TensorRT is a programmable inference accelerator that delivers the performance, efficiency, and responsiveness critical to powering the next generation of AI products and services—in the cloud, in the data center, at the network's edge, and in vehicles.
What Nvidia has achieved in the last twenty years feels like a journey through computing itself. Its invention of the GPU in 1999 sparked the growth of the PC gaming market and redefined modern computer graphics, high performance computing and artificial intelligence. Now it's powering the future of AI, gaming, data centers and crypto in ways that open up new possibilities for how AI scales in business and across industries.