How to Deploy a Roboflow (YOLOv8) Model to a Raspberry Pi (Part 8)

Building Your Own Real-Time Object Detection App: Roboflow (YOLOv8) and Streamlit

Eduardo Padron
4 min read · Aug 4, 2023

Introduction

To follow along with this tutorial, you will need a Raspberry Pi 4 running the 64-bit Raspberry Pi OS (Bullseye version).

The Raspberry Pi is a useful edge deployment device for many computer vision applications and use cases. For applications that operate at lower frame rates, from motion-triggered security systems to wildlife surveying, a Pi is an excellent choice for a device on which to deploy your application. Pis are small and you can deploy a state-of-the-art YOLOv8 computer vision model on your Pi.

Notably, you can run models on a Pi without an internet connection while still executing logic on your model inference results.

In this guide, we’re going to walk through how to deploy a computer vision model to a Raspberry Pi. We’ll deploy a model built on Roboflow to a local Docker container running on the Pi. By the end of the guide, we’ll have a working computer vision model ready to use on our Pi.

Without further ado, let’s get started!

We pick up where we finished in Part 2 of this series, where we successfully trained our model. When you called the aforementioned deploy() function in your code, the weights were uploaded to Roboflow and the model was deployed, ready for use.

In this guide, we run the model on image files that we have saved locally.

If you are going to capture your own images, check Part 4 of this series to see how to use the camera on the Raspberry Pi.

Download the Roboflow Docker Container to the Pi

With our model trained, we can get things set up on our Raspberry Pi. To run our model on the Pi, we’re going to use the Roboflow inference server Docker container. This container contains a service that you can use to deploy your model on your Pi.

To use the model we built on a Pi, we’ll first install Docker:

curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh

After Docker is installed, we can pull the inference server Docker container that we will use to deploy our model:

sudo docker pull roboflow/roboflow-inference-server-arm-cpu

The inference API is available as a Docker container optimized and configured for the Raspberry Pi. You can install and run the inference server using the following command:

sudo docker run -it --rm -p 9001:9001 roboflow/roboflow-inference-server-arm-cpu

You can now use your Pi as a drop-in replacement for the Hosted Inference API (see those docs for example code snippets in several programming languages).
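Because the local server mirrors the Hosted Inference API, you can also call it over plain HTTP, without the SDK. The helper below is an illustrative sketch, not part of the Roboflow SDK: it assumes the hosted API’s request format (a base64-encoded image in the POST body, model ID and version in the path, API key as a query parameter), and the model ID, version, and key shown are placeholders for your own values.

```python
import base64
from urllib.parse import urlencode
from urllib.request import Request

def build_inference_request(image_path, model_id, version, api_key,
                            host="http://localhost:9001"):
    """Build a POST request for the local inference server.

    Mirrors the Hosted Inference API request shape; only the host
    differs from https://detect.roboflow.com.
    """
    with open(image_path, "rb") as f:
        body = base64.b64encode(f.read())  # image goes base64-encoded in the body
    url = f"{host}/{model_id}/{version}?{urlencode({'api_key': api_key})}"
    return Request(url, data=body, headers={
        "Content-Type": "application/x-www-form-urlencoded",
    })

# With the Docker container running, send it with urllib.request.urlopen:
# with urlopen(build_inference_request("your_image.jpg", "xxxxxxxx", 1, "KEY")) as r:
#     print(r.read().decode())
```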

Next, install the Roboflow Python package with pip install roboflow.

Run Inference

To run inference on your model, run the following code, substituting your API key, workspace and project IDs, project version, and image name as relevant. You can learn how to find your API key in our API docs and how to find your workspace and project ID.

from roboflow import Roboflow

rf = Roboflow(api_key="xxxxxxxxxxxxxxxxxxxx")
project = rf.workspace().project("xxxxxxxx")

# Point the SDK at the local inference server instead of the hosted API
model = project.version(1, local="http://localhost:9001/").model

# Run inference and save an annotated copy of the image
model.predict("your_image.jpg", confidence=40, overlap=30).save("prediction.jpg")

# Inspect the raw prediction JSON, then save another annotated copy
prediction = model.predict("your_image.jpg")
print(prediction.json())

prediction.save("output.png")

This code tells our Python package that you want to run inference using a local server rather than the Roboflow API. The first time you run this code, you will need to have an internet connection. This is because the Python package will need to download your model for use in the inference server Docker container.

After your model has been downloaded once, you can run the program as many times as you like without an internet connection.

Now, let’s make a prediction on an image!

We can retrieve a prediction from our model that shows where the hand is in the image, drawn as a blue rectangle like a labeled image. When we run the code, we see a JSON dictionary that contains the coordinates of the hand in our image.
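To execute your own logic on those results, you only need to walk that dictionary. The snippet below is a minimal sketch assuming the layout returned by Roboflow object detection models, where x and y are the center of the box; the sample values are made up for illustration, and the helper name is my own.

```python
def to_corners(pred):
    """Convert a center-based Roboflow box to (x0, y0, x1, y1) corners."""
    x0 = pred["x"] - pred["width"] / 2
    y0 = pred["y"] - pred["height"] / 2
    return (x0, y0, x0 + pred["width"], y0 + pred["height"])

# Made-up example mirroring the JSON shape of model.predict(...).json()
sample = {
    "predictions": [
        {"x": 320.0, "y": 240.0, "width": 100.0, "height": 80.0,
         "confidence": 0.91, "class": "hand"},
    ],
    "image": {"width": 640, "height": 480},
}

for p in sample["predictions"]:
    print(p["class"], to_corners(p))  # -> hand (270.0, 200.0, 370.0, 280.0)
```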

Our model is working! We can save an image that shows our annotated predictions. If we open up the file, we’ll see the results.

Right now, our model works using image files that we have saved locally. But, that doesn’t need to be the case. You could use the Roboflow Python package with a tool like the Raspberry Pi camera to take a photo every few seconds or minutes and retrieve predictions. Or you could use the Pi camera to run your model on a live video feed.
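One way to structure that capture-and-predict loop is sketched below, with the camera and model passed in as plain callables. The function name and parameters are my own, not a Roboflow API.

```python
import time

def monitor(capture, predict, handle, interval_s=5.0, max_frames=None):
    """Repeatedly capture a frame, run prediction, and act on the result.

    capture()  -> path of a freshly captured image (e.g. via libcamera-still)
    predict(p) -> prediction for the image at path p (e.g. model.predict(p).json())
    handle(r)  -> your own logic on the prediction r
    """
    frames = 0
    while max_frames is None or frames < max_frames:
        handle(predict(capture()))
        frames += 1
        if max_frames is None or frames < max_frames:
            time.sleep(interval_s)  # wait before the next capture
```

On the Pi, capture could shell out to the camera tooling from Part 4, and predict could wrap model.predict from the inference code above; handle is where your application logic lives.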

Conclusion

The Raspberry Pi is a small, versatile device on which you can deploy your computer vision models. With the Roboflow Docker container, you can use state-of-the-art YOLOv8 models on your Raspberry Pi.

Connected to a camera, you can use your Raspberry Pi as a fully-fledged edge inference device. Once you have downloaded your model to the device, an internet connection is not required, so you can use your Raspberry Pi wherever you have power.

Now you have the knowledge you need to start deploying models onto a Raspberry Pi.

If you want to use the full app from Part 4 on the Raspberry Pi, check out Part 9. If you find errors while following this guide, or have feedback about it, let me know in the comments. Thank you for following this post, and good luck with your projects.

Written by Eduardo Padron, Data Scientist and enthusiast of IoT projects.
