Created By: Roni Bandini Public Project Link: https://studio.edgeimpulse.com/studio/863714 GitHub Repo: https://github.com/ronibandini/PunchedCards

Intro

In 1804, Joseph Marie Jacquard demonstrated a mechanism to automate loom operation using a sequence of punched cards. Each card mechanically defined which warp threads were lifted for that pass, enabling programmable weaving patterns. This established the first industrial use of binary logic (hole or no hole) for machine control.

Punched cards evolved from this mechanical origin into a standardized data storage medium for computing. While the Jacquard loom used physical needles to sense holes, later systems like the 1890 Hollerith tabulator introduced electrical sensing, using spring-loaded pins to complete circuits through the perforations. This technology eventually culminated in the IBM 80-column / 12-row format (standardized in 1928), where card readers detected electrical continuity via wire brushes, or in later decades, utilized optical sensors to read data at high speeds. As a tribute to this technology and to test the LattePanda IOTA mini PC, this project recognizes visually “punched” cards — printed and tagged here instead of punched — using a camera and machine learning.

Hardware

The LattePanda IOTA is an x86 mini PC SBC designed by DFRobot with enough performance to run traditional robotics workloads and on-device AI inference. It features an Intel Processor N150 (4C/4T), 8GB or 16GB of LPDDR5 memory, 64GB or 128GB of eMMC storage, and an onboard RP2040 microcontroller for real-time sensor and actuator control.

Parts Required

  • LattePanda IOTA
  • Active cooler
  • USB Webcam
  • USB Pendrive 16GB or more
  • Power supply

Cards

To simplify training, a single row with eight binary positions is used to represent characters. Instead of physical holes, printed circles represent binary “1” positions. Misalignment is intentional to give the model tolerances similar to physical punch cards. Example:
  • A 01000001
  • B 01000010
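The eight-position row is simply the character's 8-bit ASCII code, so the pattern for any character can be derived directly. A quick Python check (not part of the project code) confirms the patterns above:

```python
def char_to_card_row(ch: str) -> str:
    """Return the 8-bit binary pattern for one card row (ASCII code of ch)."""
    return format(ord(ch), "08b")

print(char_to_card_row("A"))  # 01000001
print(char_to_card_row("B"))  # 01000010
```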

Edge Impulse Model Creation

Seven characters were chosen for this demo. Each character corresponds to its ASCII code in binary:
A 01000001
B 01000010
C 01000011
H 01001000
E 01000101
L 01001100
O 01001111
A template was created in Photoshop as a PSD file, and 11 variants per character were exported as .PNG image files with the circles displaced in different directions. All the images were uploaded to a new Edge Impulse project using Data Acquisition, with one label per image and an 82/18 split between Training and Testing.
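The displaced-circle variants were produced manually in Photoshop, but the same jitter idea can be sketched programmatically. The geometry below (card width, slot spacing, jitter range) is an illustrative assumption, not the actual dimensions of the PSD template:

```python
import random

# Illustrative card geometry (assumed values, not the actual PSD template).
ROW_Y = 60          # vertical center of the single data row, in pixels
SLOT_SPACING = 90   # horizontal distance between the 8 bit positions
X_OFFSET = 85       # x position of the first slot
JITTER = 6          # max displacement in pixels, mimicking punch misalignment

def circle_centers(bits: str, rng: random.Random) -> list[tuple[int, int]]:
    """Return jittered (x, y) centers for each '1' bit in an 8-bit pattern."""
    centers = []
    for i, bit in enumerate(bits):
        if bit == "1":
            x = X_OFFSET + i * SLOT_SPACING + rng.randint(-JITTER, JITTER)
            y = ROW_Y + rng.randint(-JITTER, JITTER)
            centers.append((x, y))
    return centers

rng = random.Random(42)
print(circle_centers("01000001", rng))  # two jittered circles for 'A'
```

Rendering these centers as filled circles (for example with Pillow's ImageDraw) would produce training variants equivalent to the hand-displaced exports.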
An Impulse with a Classification Learning block and Grayscale color depth achieved perfect recognition with only 50 training cycles and a 0.0005 learning rate.

Setup and Deployment

The LattePanda IOTA originally shipped with Windows pre-installed on the eMMC, but Ubuntu LTS was installed instead for this project. After flashing the Ubuntu image to a USB stick with balenaEtcher, the device was booted from USB by pressing F7 and selecting the USB pen drive.
After installation, the following commands prepare the needed environment in Ubuntu:

Update


sudo apt update

Optional: enable ssh for remote console access


sudo apt install openssh-server -y
sudo systemctl enable ssh
sudo systemctl start ssh

Install Node


sudo apt install curl -y
curl -fsSL https://deb.nodesource.com/setup_lts.x | sudo -E bash -
sudo apt install nodejs -y

Install Edge Impulse


sudo npm install -g edge-impulse-linux --unsafe-perm
sudo npm install -g edge-impulse-cli --unsafe-perm

Now that everything is installed, you can execute:

edge-impulse-linux-runner

Log in, select the Edge Impulse project (if you have more than one), and inference output begins immediately. For example:

classifyRes 1ms. {
  A: 0.9214,
  B: 0.4301,
  C: 0.2434,
  E: 0.1809,
  H: 0.1588,
  L: 0.0557,
  O: 0.0096
}
Values represent the confidence scores for each class. A live visualization is also available on port 4912 at the LattePanda's IP address on your network, for example http://192.168.0.100:4912. This is helpful when adjusting the webcam or punched card position.

Parsing Results in Python

I have also included a Python script that executes the runner, captures its output, parses the result block, and displays the most probable character. After cloning the GitHub repo linked above, you can launch it with: python3 runner.py
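The script in the repo handles this end to end, but the parsing step can be sketched minimally as follows. The regular expression and class list are assumptions based on the classifyRes output format shown earlier, not the repo's actual implementation:

```python
import re

CLASSES = ["A", "B", "C", "E", "H", "L", "O"]

def best_class(block: str) -> tuple[str, float]:
    """Parse a classifyRes block and return the (label, score) with the highest score."""
    scores = {}
    for label, value in re.findall(r"([A-Z]):\s*([0-9.]+)", block):
        if label in CLASSES:
            scores[label] = float(value)
    label = max(scores, key=scores.get)
    return label, scores[label]

sample = """classifyRes 1ms. {
  A: 0.9214,
  B: 0.4301,
  C: 0.2434,
  E: 0.1809,
  H: 0.1588,
  L: 0.0557,
  O: 0.0096
}"""
print(best_class(sample))  # ('A', 0.9214)
```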

Final Notes

The DFRobot LattePanda IOTA provided stable inference performance and a straightforward deployment path for Edge Impulse. While this project is a tribute to punched-card computing, the same principles apply to recognizing visual states in control panels, tags, physical tokens, or other symbolic markers captured by a camera.

Public project: https://studio.edgeimpulse.com/public/863714/live
GitHub repo: https://github.com/ronibandini/PunchedCards

References

  • https://www.lattepanda.com/lattepanda-iota
  • https://www.ascii-code.com
  • https://artsandculture.google.com/story/punched-card-machines-the-national-museum-of-computing/bwWBrooyeGKPiA?hl=en
  • https://www.ibm.com/history/punched-card