“PyTorch’s mission is to accelerate the path from research prototyping to production deployment. With the growing mobile machine learning ecosystem, this has never been more important than before,” a spokesperson told VentureBeat via email. “To help reduce the friction for mobile developers to create novel machine learning-based solutions, we introduce PyTorch Live: a tool to build, test, and (in the future) share on-device AI demos built on PyTorch.”
PyTorch, which Meta publicly released in January 2017, is an open-source machine learning library based on Torch, a scientific computing framework and scripting language built on the Lua programming language. While TensorFlow has been around slightly longer (since November 2015), PyTorch continues to see rapid uptake in the data science and developer community. It claimed one of the top spots among fast-growing open-source projects, according to GitHub’s 2018 Octoverse report, and Meta recently revealed that in 2019 the number of contributors on the platform grew more than 50% year-over-year to nearly 1,200.
PyTorch Live builds on PyTorch Mobile, a runtime that allows developers to go from training a model to deploying it while staying within the PyTorch ecosystem, and the React Native library for creating visual user interfaces. PyTorch Mobile powers the on-device inference for PyTorch Live.
PyTorch Mobile launched in October 2019, following the earlier release of Caffe2go, a mobile CPU- and GPU-optimized version of Meta’s Caffe2 machine learning framework. PyTorch Mobile runs models on its own lightweight runtime and was designed on the assumption that anything a developer wants to do on a mobile or edge device, the developer might also want to do on a server.
“For example, if you want to showcase a mobile app model that runs on Android and iOS, it would have taken days to configure the project and build the user interface. With PyTorch Live, it cuts the cost in half, and you don’t need to have Android and iOS developer experience,” Meta AI software engineer Roman Radle said in a prerecorded video shared with VentureBeat ahead of today’s announcement.
PyTorch Live ships with a command-line interface (CLI) and a data processing API. The CLI enables developers to set up a mobile development environment and bootstrap mobile app projects. As for the data processing API, it prepares and integrates custom models to be used with the PyTorch Live API, which can then be built into mobile AI-powered apps for Android and iOS.
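That workflow can be sketched roughly as follows. (The command names below follow the `torchlive-cli` npm package that PyTorch Live shipped with at launch; the project name `MyAIApp` is a placeholder, and exact subcommands and flags may differ by version.)

```shell
# Install the PyTorch Live CLI globally via npm
# (package name as published at launch; an assumption here).
npm install -g torchlive-cli

# Set up the mobile development environment
# (Android SDK, emulator, and related tooling).
torchlive-cli setup-dev

# Bootstrap a new React Native project wired up with PyTorch Mobile.
torchlive-cli init MyAIApp

# Build and run the demo app on a connected device or emulator.
cd MyAIApp
torchlive-cli run-android   # or: torchlive-cli run-ios
```

The CLI handles the Android and iOS project configuration that would otherwise have to be done by hand, which is the setup cost the PyTorch Live team says it aims to eliminate.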
In the future, Meta plans to enable the community to discover and share PyTorch models and demos through PyTorch Live, as well as provide a more customizable data processing API and support machine learning domains that work with audio and video data.
“This is our initial approach of making it easier for [developers] to build mobile apps and showcase machine learning models to the community,” Radle continued. “It’s also an opportunity to take this a step further by building a thriving community [of] researchers and mobile developers [who] share and utilize mobile models and engage in conversations with each other.”

News Source: VentureBeat