R Model Operationalization on Azure - Part 9
Welcome to the ninth and final part of the series. In this section, I’ll demonstrate a CD pipeline for deploying our containerized R model to Azure Kubernetes Service.
Welcome to Part 8. Parts 8 and 9 of the series will focus on CI/CD for the second deployment option, which delivers a containerized request/response web service.
Welcome to Part 7 of the series, where we will talk about creating the container image for our R model inference code. In the previous section, we created all the necessary scripts for inference with our model.rds file, along with supporting tests.
Welcome to Part 6. Parts 6 through 9 of the series will focus on creating inference scripts for our model, executing them as web services, creating a container and Kubernetes service, and building the associated CI/CD for the second deployment option. Our goal here is to deploy a request/response web service for performing inference with the same R model we used in the Databricks batch inference portion of the series.
We’ve made it to part five of the series! Now, we’re going to focus on the release pipeline for batch inference, which involves deploying Azure Databricks & Data Factory resources. In the previous section, we completed the continuous integration portion of our CI/CD pipeline.
Welcome to part four of the R Model Operationalization on Azure series. In this section, we will focus on creating the CI (continuous integration) portion of the batch pipeline with Azure Databricks inside of Azure DevOps.
In part three of this series, we will create and configure an Azure Data Factory pipeline that can execute the inference notebook from the previous post in Azure Databricks.
Welcome to Part 2! Parts 2 through 5 of the series will cover Deployment Option 1, which is focused on batch processing flat files with Databricks and Azure Data Factory. We want to enable a scenario where our model can automatically perform inference on a flat file containing many records.
This is part one of a multipart series. I’ll be going over different options for operationalizing R models on Azure with Azure Databricks, Azure Data Factory, Docker containers (which can be run on Azure Container Instances or Azure Kubernetes Service) and CI/CD pipelines with Azure DevOps.
This is my first post on the website. Stay tuned for more posts!