Drawing Images Quickly with PicoGraphics and MicroPython
If you've been following the other posts, you'll know that I managed to get Streaming Video to a Pico 1 with MicroPython at 5 FPS. This post builds on the streaming video implementation to answer a more practical question - how do you render images to the screen quickly and efficiently with PicoGraphics? The image we'll be rendering is insulam.rgb565be.
There are some methods listed in the PicoGraphics documentation using Sprites, JPEG and PNG. However, what I don't like about these techniques is that they all load the image into memory, and they don't provide a way to clear that memory. If you have a long-lived application which cycles through different screens with different images, then you're going to run out of RAM (especially on the Raspberry Pi Pico 1 or 2).
I'd like to instead read the image from flash every time and draw it directly, meaning the only memory used is that of the image in the framebuffer (which is already allocated). As with the prior post, we'll be using the following stopwatch class:
```python
import utime

class Stopwatch:
    def restart(self):
        self.start_time = utime.ticks_us()

    def elapsed_ms(self):
        return utime.ticks_diff(utime.ticks_us(), self.start_time) / 1000
```
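The `utime` module only exists on MicroPython, so if you want to experiment with the timing code on desktop CPython, a minimal stand-in (my own sketch, not part of the original code) could use `time.perf_counter`:

```python
import time

class Stopwatch:
    """CPython stand-in for the MicroPython utime-based version above."""

    def restart(self):
        self.start_time = time.perf_counter()

    def elapsed_ms(self):
        # perf_counter returns seconds as a float; convert to milliseconds
        return (time.perf_counter() - self.start_time) * 1000

sw = Stopwatch()
sw.restart()
time.sleep(0.05)
print(round(sw.elapsed_ms()))
```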
Framebuffer Formatted Image
First we need to consider the pixel format we're using. Here's a map of the "pen" formats offered by PicoGraphics to their equivalent ffmpeg pixel formats, with the custom palettes omitted.
| PicoGraphics Pen | ffmpeg Pixel Format |
|---|---|
| PEN_RGB332 | rgb8 |
| PEN_RGB565 | rgb565be |
| PEN_RGB888 | rgb24 |
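The pen choice directly determines how large the raw image files are - one, two, or three bytes per pixel respectively. A quick back-of-the-envelope check for a full 320x240 frame:

```python
WIDTH, HEIGHT = 320, 240  # Pico Display 2 resolution

# Bytes per pixel for each raw ffmpeg pixel format
formats = {'rgb8': 1, 'rgb565be': 2, 'rgb24': 3}

for name, bytes_per_pixel in formats.items():
    print(f"{name}: {WIDTH * HEIGHT * bytes_per_pixel} bytes")
```

For a full-screen RGB565 image that works out to 150 KiB per file, which is why reading straight from flash rather than keeping decoded copies in RAM matters on these boards.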
Depending on your pen type (framebuffer format), you'll need to convert your images to that format. I created a tool to convert entire folders of images at once: imageconvert.py. It also makes the image background black, trims the whitespace, and prepends the image data with a width/height header. You can also use ffmpeg for this purpose, but it won't encode the width/height marker. Usage:
```
python imageconvert.py convert --format <pixel format> <input file/folder> <output file/folder>
```

Where pixel format is `rgb8`, `rgb565be` or `rgb24` depending on your pen type (the default is `rgb24`). The tool produces .bin files, which are our raw framebuffer-formatted images. You can preview the output using ffplay - but bear in mind that since the first two bytes are the width and height, the image will look slightly shifted:
```
ffplay -f rawvideo -pixel_format rgb565be -video_size 320x240 insulam.rgb565be
```
Reading the File into the Framebuffer
The next step: what's the fastest way to draw the image? As with the video experiment, we'll wrap the framebuffer in a `memoryview` and use `readinto` from the file:
```python
import struct
import picographics
from stopwatch import Stopwatch

display = picographics.PicoGraphics(display=picographics.DISPLAY_PICO_DISPLAY_2, pen_type=picographics.PEN_RGB565)
display_width, display_height = display.get_bounds()
sw = Stopwatch()

def draw_image(image_name, framebuffer):
    with open(image_name, 'rb') as image_file:
        image_width, image_height = struct.unpack('<HH', image_file.read(4))
        image_file.readinto(framebuffer)

fb = memoryview(display)
while True:
    sw.restart()
    draw_image('insulam.rgb565be', fb)
    display.update()
    print(sw.elapsed_ms())
```
This takes 50ms to run on an RP2040 using our full-framebuffer test image. There is a catch though - the above code only works if the image is exactly the same size as the framebuffer.
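One cheap safeguard worth adding here (my own suggestion, not from the original code) is to compare the size implied by the header against the framebuffer length, so a mismatched file raises an error instead of silently corrupting the draw:

```python
import struct

def check_image_fits(header, framebuffer_len, bytes_per_pixel=2):
    # The header is two little-endian uint16s: width, height
    width, height = struct.unpack('<HH', header)
    expected = width * height * bytes_per_pixel
    if expected != framebuffer_len:
        raise ValueError(f"image needs {expected} bytes, framebuffer has {framebuffer_len}")
    return width, height

# A 320x240 RGB565 image exactly fills a 153600-byte framebuffer
print(check_image_fits(struct.pack('<HH', 320, 240), 320 * 240 * 2))
```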
To render an image smaller than the framebuffer, we need to copy it row by row, which also gives us the opportunity to place it at a custom x,y offset.
```python
import struct
import picographics
from stopwatch import Stopwatch

display = picographics.PicoGraphics(display=picographics.DISPLAY_PICO_DISPLAY_2, pen_type=picographics.PEN_RGB565)
display_width, display_height = display.get_bounds()
sw = Stopwatch()

def draw_image(image_name, framebuffer, image_x, image_y):
    with open(image_name, 'rb') as image_file:
        image_width, image_height = struct.unpack('<HH', image_file.read(4))
        # Calculate bytes per pixel from framebuffer size and display dimensions
        total_pixels = display_width * display_height
        bytes_per_pixel = len(framebuffer) // total_pixels
        image_row_bytes = image_width * bytes_per_pixel
        fb_row_bytes = display_width * bytes_per_pixel
        for row in range(image_height):
            fb_y = row + image_y
            if 0 <= fb_y < display_height:
                fb_start = fb_y * fb_row_bytes + image_x * bytes_per_pixel
                mv = memoryview(framebuffer)[fb_start : fb_start + image_row_bytes]
                image_file.readinto(mv)
            else:
                # Row is off-screen; skip past it in the file
                image_file.seek(image_row_bytes, 1)

fb = memoryview(display)
while True:
    sw.restart()
    draw_image('insulam.rgb565be', fb, 0, 0)
    display.update()
    print(sw.elapsed_ms())
```
There we have it! This code takes around 75ms to run on an RP2040 using our full-framebuffer test image (not an ideal benchmark, given this version is intended for smaller images).
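The row-copy arithmetic can be exercised off-device, with `io.BytesIO` standing in for the flash file and a plain bytearray as the framebuffer (the tiny display dimensions here are made up purely for illustration):

```python
import io
import struct

DISPLAY_W, DISPLAY_H = 8, 4  # toy display for illustration
BPP = 2                      # RGB565: two bytes per pixel

def draw_image(image_file, framebuffer, image_x, image_y):
    # Same row-by-row logic as the on-device version above
    image_width, image_height = struct.unpack('<HH', image_file.read(4))
    image_row_bytes = image_width * BPP
    fb_row_bytes = DISPLAY_W * BPP
    for row in range(image_height):
        fb_y = row + image_y
        if 0 <= fb_y < DISPLAY_H:
            fb_start = fb_y * fb_row_bytes + image_x * BPP
            image_file.readinto(memoryview(framebuffer)[fb_start:fb_start + image_row_bytes])
        else:
            # Row is off-screen; skip past it in the file
            image_file.seek(image_row_bytes, 1)

fb = bytearray(DISPLAY_W * DISPLAY_H * BPP)
# A 2x2 image of 0xFFFF pixels, drawn at offset (3, 1)
img = io.BytesIO(struct.pack('<HH', 2, 2) + b'\xff' * (2 * 2 * BPP))
draw_image(img, fb, 3, 1)
```

Checking the buffer afterwards shows the image landed only in the two four-byte spans corresponding to rows 1 and 2 at column 3, with the rest of the framebuffer untouched.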
Conclusion
If you can stomach using a custom raw image format (and your images are fairly small), this technique will save you memory in exchange for storage space.