Piframe

On my home server I back up my photos from my phone to a self-hosted photo
management system called Immich. Immich is a lot like
Google Photos but self-hosted. It can organize the photos by date, location, and
even faces. It also has a web interface and mobile apps for viewing the photos.
Hardware⌗
I wanted to have a photo frame that would display the photos from Immich. Pimoroni had a few options using the RPi Pico, but they were always out of stock, and I wanted a bit more processing power for the photo frame. I decided to use a Raspberry Pi Zero 2 W and a 7.3" e-ink display from Waveshare. I had used a Waveshare HAT for my Pwnagotchi, so I was confident that the Waveshare display would work.
Setting up the Waveshare display was pretty easy. I just had to plug the ribbon cable into the connector that goes back to the Pi HAT. Then I flashed 32-bit Raspberry Pi OS to a microSD card and booted it up. I then installed the Waveshare drivers from here and tried their example code.
NOTE: I tried the 64-bit version of Raspberry Pi OS, but it didn’t boot, so I used the 32-bit version. Might be user error though.
Frame⌗
The e-ink display is a bit fragile, so I wanted to make sure it was protected. I used Fusion 360 to design a frame that would hold the display and the Raspberry Pi in place, and printed it on my Bambu Lab A1 Mini. The display slots into the top, and the Pi and display connector are mounted in the back of the frame with M3 and M2 screws. The frame took about 4 hours to print.
The model can be downloaded here.
Software⌗
I wanted to use the Immich API to get the photos from the server onto the Pi. The Immich team warns everywhere that the API is subject to change, and in my experience it does change quite frequently. API docs can be found here.
NOTE: I am using version 1.126.1 of Immich. The API is subject to change.
First, generate an API token from the Immich web interface. Then set your x-api-key
header to the token.
I used the Python requests library to make the API calls.
headers = {
    "x-api-key": f"{API_TOKEN}"
}
You can use the /api/people endpoint to list the people Immich has recognized in your photos, and the /api/search/smart endpoint to get a list of photos that match your search criteria. I used the personIds parameter to search for a specific person in my photos.
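As a rough sketch of the first step (and an assumption on my part, since the response shape can differ between Immich versions), looking up a person's ID by name might look something like this:
response = requests.get(f"{IMMICH_URL}/api/people", headers=headers)
# Assumes the response JSON contains a "people" list with "id" and "name" fields;
# "Some Name" is just a placeholder
person_id = next(
    p["id"] for p in response.json()["people"] if p.get("name") == "Some Name"
)
With a person ID in hand, the smart search request looks like this: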
response = requests.post(
    f"{IMMICH_URL}/api/search/smart",
    headers=headers,
    json={
        "personIds": [person_id],
        "type": "IMAGE",
        "withExif": True,
        "query": "",
    }
)
The response will be a list of photos that match the search criteria. You can then use the /api/assets/{id} endpoint to get the photo.
response = requests.get(
    f"{IMMICH_URL}/api/assets/{photo_id}/original",
    headers=headers
)
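The raw bytes can then be written out to a file ("photo.jpg" here is just a placeholder name) so Pillow can open it in the next step:
# Save the downloaded bytes to disk for conversion
with open("photo.jpg", "wb") as f:
    f.write(response.content)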
The downloaded photo then needs to be converted to a format that the
Waveshare display can use. I used the Pillow library to resize the photo, and
the .act palette available from Waveshare to convert the photo to a 6-color
image compatible with my display.
from PIL import Image

# Resolution of the 7.3" display
EPD_WIDTH, EPD_HEIGHT = 800, 480


def apply_act_palette(image_path, act_path, output_path="output.bmp", ratio_mode="crop"):
    """
    Applies an .act color palette to an image, resizes it to 800x480,
    and saves it as a bitmap.

    :param image_path: Path to the input image.
    :param act_path: Path to the .act color palette file.
    :param output_path: Path to save the output bitmap.
    :param ratio_mode: How to fit the image to the display ("maintain", "stretch", or "crop").
    """
    # Step 1: Open the image and resize to 800x480
    img = Image.open(image_path).convert("RGB")

    VALID_RATIO_MODES = ["maintain", "stretch", "crop"]
    if ratio_mode not in VALID_RATIO_MODES:
        raise ValueError(f"Invalid ratio mode. Choose from {VALID_RATIO_MODES}")

    if ratio_mode == "maintain":
        # Add white bars to maintain aspect ratio
        img.thumbnail((EPD_WIDTH, EPD_HEIGHT), Image.LANCZOS)
        # Create a new white image
        new_img = Image.new("RGB", (EPD_WIDTH, EPD_HEIGHT), (255, 255, 255))
        # Calculate position to paste the thumbnail
        paste_x = (EPD_WIDTH - img.width) // 2
        paste_y = (EPD_HEIGHT - img.height) // 2
        # Paste the thumbnail onto the white image
        new_img.paste(img, (paste_x, paste_y))
        img = new_img
    elif ratio_mode == "stretch":
        # Stretch to fit
        img = img.resize((EPD_WIDTH, EPD_HEIGHT), Image.LANCZOS)
    elif ratio_mode == "crop":
        # Crop to fit
        # Calculate the aspect ratio of the image and the display
        img_aspect = img.width / img.height
        display_aspect = EPD_WIDTH / EPD_HEIGHT
        # Determine the new size
        if img_aspect > display_aspect:
            # Image is wider than display, crop width
            new_width = int(img.height * display_aspect)
            img = img.crop(((img.width - new_width) // 2, 0, (img.width + new_width) // 2, img.height))
        else:
            # Image is taller than display, crop height
            new_height = int(img.width / display_aspect)
            img = img.crop((0, (img.height - new_height) // 2, img.width, (img.height + new_height) // 2))
        # Resize to fit the display
        img = img.resize((EPD_WIDTH, EPD_HEIGHT), Image.LANCZOS)

    # Step 2: Load the .act color table (256 colors, RGB format)
    with open(act_path, "rb") as f:
        act_data = f.read()
    if len(act_data) < 768:
        raise ValueError("Invalid .act file: must contain at least 768 bytes (256 RGB triplets).")

    # Convert ACT file to a list of (R, G, B) tuples
    palette = [act_data[i:i+3] for i in range(0, 768, 3)]
    palette = [tuple(color) for color in palette]

    # Step 3: Convert to a palette-based image
    palette_img = Image.new("P", (1, 1))
    palette_img.putpalette([value for rgb in palette for value in rgb])

    # Convert image to use this palette
    img = img.quantize(palette=palette_img, dither=Image.FLOYDSTEINBERG)

    # Step 4: Save as a .bmp file
    img.save(output_path, "BMP")
    print(f"Image saved as {output_path}")
I included three different modes for resizing the image: maintain keeps the aspect ratio of the image and fills the rest of the display with white bars, stretch stretches the image to fit the display, and crop crops the image to fit the display.
The Floyd-Steinberg dithering algorithm is used to map the colors in the image to the 6 colors that the display can show. It diffuses the quantization error of each pixel to its neighbors, which creates the illusion of more colors.
The Waveshare display expects a bitmap file. In this case the bitmap file describes how every pixel on the display should be colored.
I then used the Waveshare library to display the image on the screen.
epd = epd7in3e.EPD()
epd.init()
epd.Clear()
logging.info("1.read bmp file")
bmp_image = Image.open(output_path)
logging.info("2.display image")
epd.display(epd.getbuffer(bmp_image))
I threw that in a while loop to display a new photo every 5 minutes. It takes about 30 seconds to clear the screen and display a new photo, so I do not want to display new photos too often.
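As a rough sketch of that loop (photo_ids, download_photo, and the .act filename are hypothetical stand-ins for the pieces described above):
import random
import time

from PIL import Image

while True:
    # Pick a random photo from the smart-search results and fetch it
    photo_id = random.choice(photo_ids)
    download_photo(photo_id, "photo.jpg")

    # Convert it to a 6-color bitmap and push it to the display
    apply_act_palette("photo.jpg", "waveshare.act", "output.bmp", ratio_mode="crop")
    epd.display(epd.getbuffer(Image.open("output.bmp")))

    # The refresh takes ~30 seconds, so only change photos every 5 minutes
    time.sleep(5 * 60)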
Then, with a simple systemd service copied to /etc/systemd/system/piframe.service, we can start the photo frame on boot.
[Unit]
Description=Piframe
After=network.target

[Service]
ExecStart=/usr/bin/python3 /home/user/Projects/piframe/piframe.py
WorkingDirectory=/home/user/Projects/piframe/
Restart=always
User=user
Group=user
Environment="PYTHONUNBUFFERED=1"

[Install]
WantedBy=multi-user.target
Conclusion⌗
This photo frame is a fantastic way of displaying my memories in my home. Randomly selecting photos of my fiance and me is a great way to reminisce. I was a bit nervous that a 6-color e-ink display would not look good, but I think the photos come out great.
-E