AI: OVHcloud’s ambitious roadmap

OVHcloud continues to develop AI & Machine Learning, its catalog of AI services launched in 2017 and available on its public cloud.

In June 2020, OVHcloud presented ML Serving. Then, in April 2021, the cloud provider launched AI Training.

ML Serving makes it possible to deploy machine learning and deep learning models to production in TensorFlow, ONNX and PMML formats. The service accepts models developed with the scikit-learn, Pandas, Keras, TensorFlow and PyTorch frameworks, as well as models exported from Hugging Face and fastai. OVHcloud also offers pre-trained models.

AI Training is a service for training models written with PyTorch, TensorFlow, MXNet or in Jupyter notebooks, run from Docker containers on Kubernetes.

To these two offers is added AI Notebooks. AI Notebooks, currently in beta, is a managed service of Jupyter or VS Code notebooks backed by CPU or GPU resources, like AI Training. According to the vendor’s public roadmap, it will be generally available within two months.

AI Apps, the breeding ground for an ecosystem

In addition, OVHcloud plans to launch AI Apps by the end of the year. Available in alpha since January 2022, AI Apps should make it possible to launch models and managed applications via Docker containers, with high availability supported through multi-node deployments. Here again, OVHcloud intends to offer these models from its catalog.

The supplier also intends to bring together startups and companies likely to enrich its catalog.

“We have to make AI usable and accessible,” says Alexis Gendronneau, Head of Data Products at OVHcloud.

The cloud provider is thus forging close relationships with Gladia, a startup that offers a model-inference platform accessible via API. Gladia.io was founded and is run by Jean-Louis Quéguiner, former technical director of the Big Data and AI branch at OVHcloud. OVH has also approached Picsell.ai, a startup that offers an MLOps platform dedicated to computer vision, and Lettria, a company that publishes a platform and APIs for developers of natural language processing projects. Other collaborations will follow.

“We work quite actively on the notion of ecosystem. The idea is to allow any startup to package its solution and deliver it quickly on OVH infrastructure, without having to ask existential questions about cost forecasting, so that there are no nasty surprises at the end, even for customers,” he adds.

Regarding the consumption of IT resources, each service has its own metrics, such as the number of vCores, the amount of RAM or the number of GPUs per cluster or per instance. All AI services are billed by the minute.
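As a rough illustration of per-minute billing, a job’s cost can be estimated from its duration and the resources it consumes. The unit rates below are placeholders invented for the example, not OVHcloud’s published prices.

```python
# Illustrative per-minute cost estimator. The unit prices are
# HYPOTHETICAL placeholders, not OVHcloud's actual rates.
RATE_PER_MINUTE = {
    "vcore": 0.0002,   # price per vCore per minute (invented)
    "ram_gb": 0.0001,  # price per GB of RAM per minute (invented)
    "gpu": 0.03,       # price per GPU per minute (invented)
}

def estimate_cost(minutes, vcores=0, ram_gb=0, gpus=0):
    """Estimate the cost of a job billed by the minute."""
    per_minute = (
        vcores * RATE_PER_MINUTE["vcore"]
        + ram_gb * RATE_PER_MINUTE["ram_gb"]
        + gpus * RATE_PER_MINUTE["gpu"]
    )
    return round(minutes * per_minute, 4)

# Example: a 90-minute training job on 1 GPU with 8 vCores and 32 GB of RAM.
print(estimate_cost(90, vcores=8, ram_gb=32, gpus=1))
```

The point of the sketch is the billing model, not the figures: with per-minute metering, cost scales linearly with both job duration and the resources reserved per instance.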

OVHcloud defends the transparency of its billing. However, as its documentation specifies, it must be kept in mind that the cost of storage is not fully included in its AI services.

Models are stored on OVHcloud Object Storage. Each instance has local storage in addition to storage space attached to Object Storage. With AI Training, users must choose whether models will be trained on CPUs or GPUs. With AI Apps, the same dilemma arises for inference.

Full tech stack within 18 months, despite ‘hardware dependency’

However, OVHcloud’s AI offer still needs to evolve to provide a complete technology stack. “In the field of AI, I’m going to venture to say that a technology stack must make it possible to build a model, to orchestrate it, to manage its life cycle with CI/CD pipelines, and to provide the building blocks for integrating it, that is to say, APIs for inference,” summarizes Alexis Gendronneau.

“For the first part, we have AI Training, which needs to be brought to life and developed. As far as orchestration and lifecycle management are concerned, I don’t think we have a choice: we have to offer a working solution within 12 to 18 months. It’s really doable,” he says.

There remains the transition of models to production, which hinges on inference.

“When it comes to inference, I think it will be difficult to provide viable solutions over time,” warns the manager. Here, OVHcloud would be dependent on equipment upgrades.

Today, OVHcloud has instances equipped with GPUs. The GPUs in question are Nvidia Tesla V100S cards, a beefed-up version of the V100, featuring 32 GB of HBM2 memory and passive cooling.

In the future, OVHcloud intends to adopt the new architectures designed by Nvidia. “We want to offer the most up-to-date hardware infrastructure possible. We are closely following Nvidia’s progress with its A100 and H100 systems while keeping in mind the issues of costs and environmental impacts,” said the manager.

The American cloud giants offer the possibility of performing inference tasks on FPGAs or ASICs. “It’s a track that we are also exploring,” says Alexis Gendronneau.

“We are not going to lie to each other: today in AI, there is still a strong dependence on hardware. Standards need to emerge,” he says. And this question of standardized infrastructure is far from settled. But OVHcloud does not want to shift its roadmap.

“Due to the existence of multiple PCIe-compatible chips, Nvidia’s initiatives, and so on, it is a very complex market. We are going to have to take sides, but we are not going to lock ourselves into one strategy: we are going to be forced to diversify the catalog,” he considers.

Therefore, the manager prefers not to commit to a “firm ETA” [ETA: Estimated Time of Arrival, editor’s note], but OVHcloud must “have something available within 18 months”.

AI: a market “far from settled”, according to OVHcloud

This dependence reinforces the desire to offer serverless services that abstract away the hardware architecture, in order to provide a “generic platform” for software vendors and customers.

“We want European startups and companies specializing in AI, which have very good ideas, not to have to migrate to other territories in order to be able to develop”, says Alexis Gendronneau.

At the same time, OVHcloud does not want to deploy its AI services only in its European data centers, but also in the United States and the APAC region. “You have to be able to offer solutions wherever customers are,” says the manager. Most of the services in the AI & Machine Learning catalog are already available from the Canadian cloud region.

Despite the ever-expanding artificial intelligence portfolios of the American providers, Alexis Gendronneau considers that OVH can hold its own.

“The market is far from settled. Customers are opportunistic, and that’s fine. We are far from being behind. Very often, we are the best answer when it comes to performance, efficiency or even user experience,” he argues.

According to our interlocutor, OVH’s AI solutions are not reserved for startups: public authorities, large groups and universities use AI Training, for example. However, Alexis Gendronneau is careful not to name these customers. To MagIT’s knowledge, Customs Bridge is one of the only ones to communicate about its use of the French provider’s AI services.
