The new platform is an evolution of the Red Hat OpenShift data science platform, with a focus on enabling the production deployment of AI models.
“We’ve focused so much of our time and energy in the past 10 or 20 years building application platforms, and today it’s about bringing the data workloads together with the same platform that we use to produce applications and run applications,” Red Hat CTO Chris Wright said in a briefing with press and analysts.
“The challenges for enterprises to adopt AI/ML are huge.”
IBM is already using OpenShift AI
Wright noted that the reality for many enterprises is that data science experiments often fail, with fewer than half reaching production.
Red Hat’s goal with OpenShift AI is to have a collection of tools that provide the ability to do all of the training, serving and monitoring needed for AI, and in a way that will help more models reach production. It’s an approach and technology that Red Hat has already proven via its parent company IBM.
Wright commented that the cost and complexity of training large language models (LLMs) is, well, particularly large. When IBM started to build out its new watsonx foundation models — which were publicly announced earlier this month — it turned to Red Hat OpenShift.
“Our platform is the platform that IBM uses to build, train and manage their foundation models, just to show you the kind of scale and production capabilities that we have built into OpenShift AI,” Wright said.
The challenges of AI/ML deployments and Red Hat’s solution
Red Hat is building a series of enhanced capabilities into OpenShift AI. Among them are model performance capabilities. Wright said OpenShift AI will continue to improve data scientists’ ability to monitor the performance of a model deployed into production. Part of model performance is also watching for potential model drift and making sure that a model remains accurate.
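Red Hat didn’t share implementation details for drift detection in the briefing, but the underlying idea is to compare the inputs a deployed model sees in production against what it saw at training time. A minimal Python sketch, assuming a two-sample Kolmogorov-Smirnov test per feature; the data and function here are illustrative, not an OpenShift AI API:

```python
import numpy as np
from scipy.stats import ks_2samp

def feature_drifted(train_values: np.ndarray,
                    live_values: np.ndarray,
                    alpha: float = 0.05) -> bool:
    """Two-sample Kolmogorov-Smirnov test: a small p-value suggests the
    production distribution no longer matches the training distribution."""
    statistic, p_value = ks_2samp(train_values, live_values)
    return p_value < alpha

# Synthetic placeholder data: production inputs have shifted by 0.4.
rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, size=5_000)  # distribution seen at training time
live = rng.normal(0.4, 1.0, size=5_000)   # recent production inputs
print(feature_drifted(train, live))       # True -> investigate or retrain
```

A real monitoring setup would run checks like this on a schedule over rolling windows of production traffic and alert rather than print, but the gist is the same: drift is detected statistically, before accuracy visibly degrades.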
Deployment pipelines for AI/ML workloads are also critical. To that end, Red Hat OpenShift AI is enabling organizations to create repeatable approaches for model builds and deployment. There is also an effort to integrate custom runtimes for building AI/ML models.
“One of the things that we’ve discovered is that data science teams spend a disproportionate amount of their time just assembling their tools,” said Wright. “Of course, we can produce a set of tools, but it may not be the exact set of tools that an enterprise is looking for, so they may need to customize the runtime environment.”
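To make the “repeatable model build” idea concrete, here is a minimal sketch using the open-source Kubeflow Pipelines SDK (kfp), a common choice in this space and assumed here purely for illustration; the component names, base images and paths are hypothetical, not a documented OpenShift AI API:

```python
from kfp import dsl

@dsl.component(base_image="python:3.11")
def train(data_path: str) -> str:
    # Hypothetical training step: fit a model and write it out.
    model_path = "/tmp/model.joblib"
    return model_path

@dsl.component(base_image="python:3.11")
def deploy(model_path: str):
    # Hypothetical deploy step: hand the model to a serving runtime.
    print(f"deploying {model_path}")

@dsl.pipeline(name="repeatable-model-build")
def model_build_pipeline(data_path: str = "s3://example-bucket/train.csv"):
    # Each run executes the same steps in the same order, so builds
    # and deployments are reproducible rather than ad hoc.
    trained = train(data_path=data_path)
    deploy(model_path=trained.output)
```

The point of encoding the workflow this way is that retraining and redeployment become a parameterized, rerunnable artifact instead of a one-off sequence of notebook cells.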
What’s also needed to help AI/ML workloads reach production is the ability to integrate AI quality metrics. Wright noted that many data science experiments fail because they lack alignment with business outcomes.
When that happens, “it’s hard to measure your success,” said Wright. “So, making sure we can build metrics into that whole pipeline I think is really critical.”
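One way to “build metrics into that whole pipeline” is a promotion gate: the pipeline refuses to deploy a candidate model unless it clears thresholds tied to business outcomes. A hedged Python sketch; the metric names and thresholds below are invented for illustration:

```python
# Business-aligned thresholds a candidate model must clear before promotion.
# Both metrics here are hypothetical examples, not OpenShift AI features.
QUALITY_GATES = {
    "precision_at_top_decile": 0.80,  # e.g. quality of a fraud-review queue
    "expected_revenue_lift": 0.02,    # vs. the current production model
}

def passes_quality_gates(metrics: dict[str, float]) -> bool:
    """True only if every gated metric meets or beats its threshold."""
    return all(metrics.get(name, 0.0) >= threshold
               for name, threshold in QUALITY_GATES.items())

candidate = {"precision_at_top_decile": 0.84, "expected_revenue_lift": 0.013}
if not passes_quality_gates(candidate):
    raise SystemExit("candidate failed business-metric gates; not promoting")
```

Wiring a check like this into the deployment pipeline gives teams the measurable definition of success Wright describes, so an experiment that can’t demonstrate business value never silently reaches production.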