Nvidia Corp Chief Executive Jensen Huang on Tuesday unveiled new cloud computing services with Oracle Corp and others as the company rolled out an array of new artificial intelligence technologies based around its chips and software.
“The iPhone moment of AI has started,” Huang said in the virtual keynote address at GTC, the company’s annual conference for software developers, referring to how Apple Inc’s smartphone set a new standard for handheld devices.
Huang said Nvidia was also working with Microsoft Corp and Alphabet Inc to widen access to massive systems containing tens of thousands of chips, like those used to develop fast-rising technologies such as the chatbot ChatGPT.
Nvidia is also partnering with AT&T Inc to make dispatching trucks more efficient, collaborating with quantum computing researchers to speed software development, and working with industry giant Taiwan Semiconductor Manufacturing Co to speed up chip development, Huang added.
The company has partnered with a number of large image copyright holders to resolve the legal uncertainties around image-generation technologies. It also announced new chips that would help make running services similar to ChatGPT much cheaper, working with Google’s cloud unit.
Nvidia has come to dominate the field of selling chips used to develop generative AI technologies, which can answer questions with human-like text or generate fresh images based on a text prompt.
Those new technologies rely on the use of thousands of Nvidia chips at once to train the AI systems on huge troves of data. Microsoft, for example, built a system with more than 10,000 Nvidia chips for startup company OpenAI to use in developing technologies that underpin its wildly popular ChatGPT.
Nvidia released a new service called DGX Cloud that it said would give companies and software developers access to supercomputer power by logging on through a browser.
Nvidia said it would work with partners to host the service, starting with a deal with Oracle, which would offer a DGX-based supercomputer that can string together more than 32,000 of Nvidia’s chips at once.
A single “instance” of the cloud service – which consists of eight of Nvidia’s flagship A100 or H100 chips strung together with its custom networking technology – starts at $36,999 per month, Nvidia said.
Biotech firm Amgen Inc and software firm ServiceNow Inc have started using the service, Nvidia said.
Nvidia released a service called AI Foundations to help companies train their customized artificial intelligence models, enabling them to create products similar to ChatGPT or the Dall-E image creation system but fine-tuned using their own proprietary data.
The move is significant because the provenance of the data used to train AI models has become an area of legal dispute, with some artists and copyright holders saying their work was unfairly used to teach AI to create new content. That legal uncertainty has in turn made businesses cautious about using AI-generated content.
Huang also announced technology to speed up the design and manufacturing of semiconductors, working with ASML Holding, Synopsys Inc and TSMC to bring it to market.
Huang said TSMC will start readying the technology for production in June.
The boom in AI has helped drive Nvidia shares up 77% this year, compared with a rise of 11.5% in the Nasdaq Composite Index. With a market capitalization of $640 billion, Nvidia has become about five times more valuable than longtime rival Intel Corp.

News Source: Yahoo Finance