```
# Plumber router with 2 endpoints, 4 filters, and 1 sub-router.
# Use `pr_run()` on this object to start the API.
├──[queryString]
├──[body]
├──[cookieParser]
├──[sharedSecret]
├──/logo
│  │ # Plumber static router serving from directory: /home/runner/work/_temp/Library/vetiver
├──/ping (GET)
└──/predict (POST)
```
To start a server using this object, pipe (%>%) it to pr_run(port = 8080), or the port of your choice, as in the sketch below.
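For example, a minimal sketch that builds the router and serves it, assuming your vetiver model is stored as v:

```r
library(plumber)
library(vetiver)

# create a Plumber router, add the vetiver API endpoints, and start serving
pr() %>%
  vetiver_api(v) %>%
  pr_run(port = 8080)
```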
In Python, create the API object from your vetiver model with VetiverAPI:

```python
from vetiver import VetiverAPI

app = VetiverAPI(v, check_ptype = True)
```
To start a server using this object, use app.run(port = 8080) or your port of choice.
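A minimal sketch, assuming the app object created above:

```python
# start the API locally on port 8080
app.run(port = 8080)
```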
You can interact with your vetiver API via automatically generated, detailed visual documentation.
FastAPI and Plumber APIs such as these can be hosted in a variety of ways. For RStudio Connect, you can deploy your versioned model with a single function, either vetiver_deploy_rsconnect() for R or vetiver.deploy_rsconnect() for Python. For more on these options, see the Connect documentation for using vetiver.
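For example, a sketch of deploying from R, assuming the board b and pin name used in this article (on Connect, pin names typically include the owner, like "user.name/cars_mpg"):

```r
library(vetiver)

# deploy the versioned model straight from your pins board
vetiver_deploy_rsconnect(b, "cars_mpg")
```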
For other hosting options, you can create a ready-to-go file for deployment.
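In R, vetiver_write_plumber() writes this file for you from your board and pin name:

```r
library(vetiver)

# write a plumber.R file that reads the pinned model and builds the API
vetiver_write_plumber(b, "cars_mpg")
```

The generated plumber.R looks like this: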
```r
# Generated by the vetiver package; edit with care

library(pins)
library(plumber)
library(rapidoc)
library(vetiver)

b <- board_folder(path = "pins-r")
v <- vetiver_pin_read(b, "cars_mpg", version = "20230123T183915Z-33c30")

#* @plumber
function(pr) {
  pr %>% vetiver_api(v)
}
```
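In Python, vetiver.write_app() plays the same role:

```python
import vetiver

# write an app.py file that reads the pinned model and builds the API
vetiver.write_app(b, "cars_mpg")
```

The generated app.py looks like this: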
```python
from vetiver import VetiverModel
import vetiver
import pins

b = pins.board_folder('pins-py', allow_pickle_read=True)
v = VetiverModel.from_pin(b, 'cars_mpg', version = '20230123T183918Z-e37a6')

vetiver_api = vetiver.VetiverAPI(v)
api = vetiver_api.app
```
In a real-world situation, you would see something like board_rsconnect() or board_s3() here instead of our temporary demo board.
Important
Notice that the deployment is strongly linked to a specific version of the pinned model; if you pin another version of the model after you deploy your model, your deployed model will not be affected.
Generate a Dockerfile
For deploying a vetiver API to infrastructure other than RStudio Connect, such as Google Cloud Run, AWS, or Azure, you likely will want to build a Docker container.
Note
You can use any pins board with Docker, like board_folder() or board_rsconnect(), as long as your Docker container can authenticate to your pins board.
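In R, vetiver_write_docker() writes the Dockerfile for you:

```r
library(vetiver)

# write a Dockerfile, plus a vetiver_renv.lock file capturing dependencies
vetiver_write_docker(v)
```

The result looks like this: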
```dockerfile
# Generated by the vetiver package; edit with care
FROM rocker/r-ver:4.2.2
ENV RENV_CONFIG_REPOS_OVERRIDE https://packagemanager.rstudio.com/cran/latest

RUN apt-get update -qq && apt-get install -y --no-install-recommends \
  libcurl4-openssl-dev \
  libicu-dev \
  libsodium-dev \
  libssl-dev \
  make \
  zlib1g-dev \
  && apt-get clean

COPY vetiver_renv.lock renv.lock
RUN Rscript -e "install.packages('renv')"
RUN Rscript -e "renv::restore()"

COPY plumber.R /opt/ml/plumber.R
EXPOSE 8000
ENTRYPOINT ["R", "-e", "pr <- plumber::plumb('/opt/ml/plumber.R'); pr$run(host = '0.0.0.0', port = 8000)"]
```
When you run vetiver_write_docker(), you generate two files: the Dockerfile itself and the vetiver_renv.lock file to capture your model dependencies.
In Python, use vetiver.write_docker():

```python
# write a Dockerfile for the app file created above (e.g. app_file = "app.py")
vetiver.write_docker(app_file)
```
```dockerfile
# Generated by the vetiver package; edit with care

# start with python base image
FROM python:3.8

# create directory in container for vetiver files
WORKDIR /vetiver

# copy and install requirements
COPY vetiver_requirements.txt /vetiver/requirements.txt
RUN pip install --no-cache-dir --upgrade -r /vetiver/requirements.txt

# copy app file
COPY app.py /vetiver/app/app.py

# expose port
EXPOSE 8080

# run vetiver API
CMD ["uvicorn", "app.app:api", "--host", "0.0.0.0", "--port", "8080"]
```
To build the Docker image, you need two files: the Dockerfile itself, generated via vetiver.write_docker(), and a requirements.txt file to capture your model dependencies. If you don't already have a requirements file for your project, vetiver.load_pkgs() will generate one for you, with the name vetiver_requirements.txt.
Tip
When you build such a Docker container with docker build, all the packages needed to make a prediction with your model are installed into the container.
When you run the Docker container, you can pass in environment variables (for authentication to your pins board, for example) with docker run --env-file .Renviron.
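For example, assuming the R Dockerfile above (the image name here is illustrative):

```bash
# build the image from the generated Dockerfile and lockfile
docker build -t my-vetiver-api .

# run it, passing pins board credentials as environment variables
docker run --env-file .Renviron -p 8000:8000 my-vetiver-api
```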
Predict from your model endpoint
A model deployed via vetiver can be treated as a special vetiver_endpoint() object.
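In R, create the endpoint object from the URL where your API is running:

```r
library(vetiver)

endpoint <- vetiver_endpoint("http://127.0.0.1:8080/predict")
endpoint
```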
```
── A model API endpoint for prediction:
http://127.0.0.1:8080/predict
```
The same in Python:

```python
from vetiver.server import predict, vetiver_endpoint

endpoint = vetiver_endpoint("http://127.0.0.1:8080/predict")
endpoint
```

```
'http://127.0.0.1:8080/predict'
```
If such a deployed model endpoint is running via one process (either remotely on a server or locally, perhaps via a background job in the RStudio IDE), you can make predictions with that deployed model and new data in another, separate process.¹
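For example, in R (new_car here stands in for a data frame of new observations matching the model's input prototype):

```r
# predict with the deployed model over HTTP
predict(endpoint, new_car)
```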
Being able to predict with a vetiver model endpoint takes advantage of the model’s input data prototype and other metadata that is stored with the model.
Footnotes
¹ Keep in mind that the R and Python models have different values for the decision tree hyperparameters.