Creating Animations with the Raspberry Pi Pico on the SSD1306 OLED Display with MicroPython
Published on February 15, 2025
In this tutorial, we are going to interface an SSD1306 OLED display module and 3 LEDs with the Raspberry Pi Pico board. We are going to show different animations of entirely different facial expressions: normal blinking, happiness, and anger! The choice of facial expressions was absolutely random and does not reflect the emotions I was feeling during the conception of this blog. The LEDs are just an additional feature that also represents the current emotion.
Hardware Required
- 128×64 SSD1306 OLED display module
- Raspberry Pi Pico running MicroPython
- Breadboard
- Connecting wires
- 3 LEDs
Circuit Diagram of the system
Make the setup as shown in the image above by following these steps:
1. Connect Pin 20 (GPIO 15) to one leg of the first LED
2. Connect Pins 21 and 22 (GPIO 16 and 17 respectively) to the remaining LEDs as shown. To each of the other legs, connect a resistor.
3. Connect the SDA line of the OLED display to Pin 24 (GPIO 18) of the Pico
4. Likewise, connect the SCL line of the OLED display to Pin 25 (GPIO 19) of the Pico
5. Connect the 3.3V out and GND of the Pico to the + and – rails of the breadboard
6. Connect the VCC and GND pins of the OLED display to the + and – rails of the breadboard
7. Clone this repository and open it in your IDE of choice. You could use Thonny or VS Code; I'll use VS Code.
8. Plug the Pico into your computer and let's get started
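Before moving on, it can be worth sanity-checking the LED wiring on its own. Here is a minimal sketch, assuming the LEDs sit on GPIO 15, 16, and 17 as wired above; run it directly on the Pico (for example from the MicroPico REPL) and adjust the pin numbers if your wiring differs.

```python
# Quick LED wiring check -- run this on the Pico itself (MicroPython).
# GPIO numbers follow the wiring steps above; adjust if yours differ.
from machine import Pin
from utime import sleep

leds = [Pin(n, Pin.OUT) for n in (15, 16, 17)]

for led in leds:
    led.on()        # the corresponding LED should light up
    sleep(0.5)
    led.off()
```

If one of the LEDs never lights up, check its resistor and that its cathode goes to the ground rail.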
Development Environment
For this project, I will be developing using VS Code and the MicroPico extension, which can be found in the Extensions tab of VS Code. You could also follow along using Thonny, although this tutorial doesn't cover that; the choice of VS Code is just a matter of preference.
To get the project up and running, we'll have to do the following:
- Set up the Pico as described above
- Clone the project to our computer
- Upload the project to the Pico
- Run the project on the Pico
1. Downloading the project
All the code for this project can be found on my repository. So head there and clone it to your computer.
```shell
git clone https://github.com/collins-emasi/oled-pico-animation.git
```
2. Uploading the Project onto the Pico
By default, when you click Upload Project, the extension will only upload the file types specified in the Micropico: Sync File Types setting. So to upload all the frames onto the Pico, do the following:
- Go to the Extensions tab in VS Code and look for the installed MicroPico extension. Click the settings (gear) button, which is just next to the uninstall button of the extension, and choose "Extension Settings"
- Scroll down to "Micropico: Sync File Types", click Add Item, and type "pbm", which is the Portable Bitmap format of our individual frames. Click OK and close the settings
- Now when you upload the project, you should see the frames being uploaded
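If you prefer editing settings directly, the same change can be made in VS Code's settings.json. The exact key below is my assumption based on the setting's display name; verify it against your version of the extension:

```jsonc
{
  // Assumed key -- confirm it matches your MicroPico extension version.
  "micropico.syncFileTypes": ["py", "pbm"]
}
```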
3. Executing the project
To run the project, we first upload it to the Pico as discussed above. We then make sure the main.py file is currently open and click the Run button at the bottom of VS Code.
The OLED display should now start showing the different facial expressions of "Rita". This includes blinking and smiling. [Hopefully that looks a lot like smiling to you too]
NOTE: Whenever you change the code, upload the file, soft-reset the Pico, and then run the code again.
How does the code work?
Let's look inside the code to see what actually happens.
I first start off by including the required libraries. I will use choice and randint from the random library to make the whole program a bit random and seem natural [Yes, I understand there's nothing natural about smiling and blinking every second, but that will have to do for the sake of this project].
```python
from machine import Pin, I2C
from random import choice, randint
from utime import sleep
from rita_facial import RitaFacial
from SSD1306LIB import SSD1306_I2C
```
We then define the SDA and SCL pins, the I2C object, the OLED display object, and the custom Rita object, which takes the OLED object and an inversion flag as initializing inputs. Inversion here just determines whether it'll be white eyes on a black background or black eyes on a white background.
```python
ID = 1                # GPIO 18 and 19 sit on I2C bus 1 of the Pico
SDA = Pin(18)
SCL = Pin(19)
my_I2C = I2C(id=ID, scl=SCL, sda=SDA)  # type: ignore
OLED = SSD1306_I2C(width=128, height=64, i2c=my_I2C)
OLED.init_display()
rita = RitaFacial(OLED=OLED, inversion=1)
```
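If the display stays blank, a quick bus scan is a handy way to verify the OLED wiring. This is a standalone sketch using the pins from the wiring steps (GPIO 18/19, which sit on I2C bus 1); adjust the bus id and pins if your wiring differs. Most SSD1306 modules answer at address 0x3C (some at 0x3D).

```python
# Standalone I2C bus scan -- run on the Pico (MicroPython).
# GPIO 18/19 are on I2C bus 1; change the id/pins to match your wiring.
from machine import Pin, I2C

i2c = I2C(1, scl=Pin(19), sda=Pin(18))
print([hex(addr) for addr in i2c.scan()])  # an SSD1306 usually shows 0x3c
```

An empty list means the Pico cannot see the display at all, so recheck the SDA/SCL and power connections.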
After that, I simply implement an infinite while loop that displays the different expressions.
```python
while True:
    for _ in range(randint(1, 4)):
        rita.express("blink")
        sleep(choice([0.1, 0.5, 0.9, 1.5]))
    for _ in range(randint(1, 4)):
        rita.express("smile")
        sleep(choice([0.1, 0.5, 0.9, 1.5]))
```
I can also choose to express a random expression, which is a lot shorter and neater.
```python
while True:
    rita.express_random()
    sleep(choice([0.1, 0.5, 0.9, 1.5]))
```
I've written custom classes to implement the different functionalities for Rita. You can look at those in the repo to get a feel for what is happening in the background.
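As a rough illustration of the core idea behind those classes, here is a minimal, hypothetical sketch of how a binary (P4) PBM frame can be parsed so its raster could be blitted to the display. The helper name load_pbm and the file path are my own assumptions for the example, not the repo's actual API, and comment lines inside PBM headers are not handled.

```python
# Hypothetical sketch: parse a binary (P4) PBM frame so its pixel data can
# later be wrapped in a framebuf.FrameBuffer and blitted to the OLED.
# This is NOT the repo's actual implementation -- just the core idea.

def load_pbm(path):
    """Return (width, height, raster bytes) for a binary P4 PBM file."""
    with open(path, "rb") as f:
        data = f.read()
    # The header is three ASCII tokens -- "P4", width, height -- followed by
    # a single whitespace byte and the raster (each row padded to a byte).
    tokens, i = [], 0
    while len(tokens) < 3:
        while data[i : i + 1].isspace():    # skip whitespace between tokens
            i += 1
        start = i
        while not data[i : i + 1].isspace():
            i += 1
        tokens.append(data[start:i])
    assert tokens[0] == b"P4", "not a binary PBM file"
    width, height = int(tokens[1]), int(tokens[2])
    raster = data[i + 1 :]                  # skip the single separator byte
    return width, height, bytearray(raster)

# On the Pico, the raster could then be shown with something like:
#   import framebuf
#   w, h, buf = load_pbm("frames/blink/frame_01.pbm")  # hypothetical path
#   fb = framebuf.FrameBuffer(buf, w, h, framebuf.MONO_HLSB)
#   OLED.blit(fb, 0, 0)
#   OLED.show()
```

MONO_HLSB matches PBM's bit layout (most significant bit first, rows padded to whole bytes), which is why this format is convenient for the SSD1306.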
Final thoughts
You can add more expressions by placing their frames in a subfolder, named after the expression, inside the frames folder. Keep in mind that the structure should be consistent, because the code depends on it.
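For instance, a new "wink" expression would get its own subfolder. The file names here are purely illustrative; check the repo for the real layout:

```
frames/
├── blink/
│   ├── frame_01.pbm
│   └── frame_02.pbm
├── smile/
│   └── ...
└── wink/          <-- new expression, named after its subfolder
    └── ...
```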
To create the frames, I used Adobe Illustrator to design the different frames of the animation and GNU Image Manipulation Program (GIMP) to export them as bitmap images. I am working on a GIMP extension that will be able to do batch exports, and I will share the whole process of creating and exporting the frames once it is done. If you have any ideas, or know other software that can do the same thing more conveniently and faster, please share in the comments or shoot me an email.