
Camera Snapshots with Python

SiliconWit.IO treats cameras as standard devices: you upload a JPEG or PNG via HTTP, get back a URL, and include that URL in your telemetry payload. The dashboard then renders the image as a clickable thumbnail alongside your sensor data.

This tutorial walks through the entire process: creating a device, configuring data fields, writing a Python script that generates and uploads snapshots with telemetry, and running it on hardware as small as a Raspberry Pi Zero.

  • Snapshot Upload - HTTP POST images directly from your device to SiliconWit.IO’s snapshot endpoint, no MQTT broker needed
  • Paired Telemetry - every snapshot arrives with its corresponding sensor readings; no orphaned images, no data without visual evidence
  • Dashboard Thumbnails - any telemetry field ending in _url is automatically rendered as a clickable thumbnail
  • Pi Zero Compatible - only requires requests and Pillow; runs comfortably on a Raspberry Pi Zero W or Zero 2W

You will need:

  • A SiliconWit.IO account (the free tier works, with 10 snapshots/day)
  • Python 3.7+ with pip
  • Internet connectivity on your device
  1. Log in to siliconwit.io and navigate to Devices in the dashboard.

  2. Click “Add Device” and fill in the following:

    | Field          | Value                        | Why                                                        |
    | -------------- | ---------------------------- | ---------------------------------------------------------- |
    | Name           | Field Sensor Cam             | Descriptive name shown on your dashboard                   |
    | Type           | Sensor                       | The device captures and sends data                         |
    | Data Direction | Send Only                    | We only upload data from the device                        |
    | Connectivity   | WiFi + HTTP or 4G/LTE + HTTP | We use HTTP POST for both snapshots and telemetry          |
    | Data Interval  | 5 minutes                    | How often the device sends readings (adjust to your needs) |
    | Schema Mode    | Strict                       | Default for the free plan; only declared fields are accepted |
  3. Enable Camera Snapshots - check the box labelled “Allow this device to upload images via HTTP POST.”

  4. Configure Data Fields - you can either add them manually or use the JSON import tool (see Step 2).

  5. Click “Create Device” - the system generates your Device ID and Access Token.

Your device needs five fields: four sensor readings plus a snapshot URL. The free plan allows up to 5 fields.

Save this file as field-sensor-cam-fields.json:

[
  { "name": "temperature", "label": "Temperature", "unit": "°C" },
  { "name": "humidity", "label": "Humidity", "unit": "%" },
  { "name": "soil_moisture", "label": "Soil Moisture", "unit": "%" },
  { "name": "light", "label": "Light", "unit": "lux" },
  { "name": "snapshot_url", "label": "Snapshot", "unit": "" }
]

Then in the device creation form, click Import and browse to the file.

Alternatively, add each field one by one using the form:

| Name          | Label         | Unit          |
| ------------- | ------------- | ------------- |
| temperature   | Temperature   | °C            |
| humidity      | Humidity      | %             |
| soil_moisture | Soil Moisture | %             |
| light         | Light         | lux           |
| snapshot_url  | Snapshot      | (leave empty) |

Two HTTP endpoints work together to pair snapshots with telemetry.

POST /api/devices/{device_id}/snapshot
| Header        | Value                    |
| ------------- | ------------------------ |
| Authorization | Bearer YOUR_ACCESS_TOKEN |
| Content-Type  | image/jpeg or image/png  |

Body: Raw image bytes (max 500 KB)

Response (200):

{
  "success": true,
  "url": "/api/devices/{device_id}/snapshot/{id}",
  "id": "a1b2c3d4-..."
}
POST /api/devices/ingest

Body (JSON):

{
  "device_id": "YOUR_DEVICE_ID",
  "access_token": "YOUR_ACCESS_TOKEN",
  "data": {
    "temperature": 28.3,
    "humidity": 65.0,
    "soil_moisture": 42.5,
    "light": 720,
    "snapshot_url": "https://siliconwit.io/api/devices/.../snapshot/..."
  }
}

Response (200):

{ "success": true }
| Status | Meaning                                       |
| ------ | --------------------------------------------- |
| 400    | Missing device ID or unsupported content type |
| 401    | Invalid access token                          |
| 403    | Device is paused                              |
| 413    | Image exceeds 500 KB limit                    |
| 429    | Daily snapshot quota exhausted                |
| 503    | Storage temporarily unavailable               |
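Of these, only 429 and 503 are transient: 429 clears at the daily quota reset and 503 once storage recovers, while the 4xx codes indicate a configuration problem that retrying cannot fix. A minimal sketch of that distinction (the `should_retry` helper is our own name, not part of the platform API):

```python
# Which upload errors are worth retrying? 429 and 503 can clear on their
# own; the 4xx configuration errors will fail the same way every time.
# NOTE: should_retry is an illustrative helper, not part of the API.
TRANSIENT = {429, 503}

def should_retry(status_code: int) -> bool:
    """Return True for errors that may succeed on a later attempt."""
    return status_code in TRANSIENT
```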
| Plan     | Snapshots/Day | Retention |
| -------- | ------------- | --------- |
| Free     | 10            | 7 days    |
| Starter  | 50            | 30 days   |
| Business | 200           | 365 days  |
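Note that these quotas cover snapshots only; no telemetry quota is mentioned here, so a device can report readings every few minutes while snapshotting far less often. A quick back-of-envelope check (quota figures copied from the table above; the helper names are our own):

```python
# Does a fixed snapshot interval fit a plan's daily quota?
QUOTAS = {"Free": 10, "Starter": 50, "Business": 200}

def snapshots_per_day(interval_minutes: int) -> int:
    """Snapshots produced per day at a fixed interval."""
    return (24 * 60) // interval_minutes

def fits(plan: str, interval_minutes: int) -> bool:
    return snapshots_per_day(interval_minutes) <= QUOTAS[plan]

print(snapshots_per_day(5))    # 288 - far above every quota listed
print(fits("Free", 144))       # True: one snapshot every 144 min = 10/day
```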

pip:

pip install requests Pillow

Raspberry Pi OS:

sudo apt update
sudo apt install python3-pip python3-pil
pip3 install requests

Requirements file - create requirements.txt:

requests
Pillow

Then:

pip install -r requirements.txt

Create a file called field-sensor-cam.py. This script is fully self-contained - it generates its own realistic field images and sensor data with no external downloads required.

"""
Field Sensor Cam - generates realistic field images and sensor data,
uploads snapshot + telemetry atomically to SiliconWit.IO via HTTP.
No external data sources - fully self-contained.
"""
import io
import json
import math
import random
import requests
from datetime import datetime
from PIL import Image, ImageDraw
# --- Config: replace with your device credentials ---
DEVICE_ID = "YOUR_DEVICE_ID"
ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"
BASE_URL = "https://siliconwit.io"
INGEST_URL = f"{BASE_URL}/api/devices/ingest"
# --- End config ---
# Crop profiles with realistic parameter ranges
CROP_PROFILES = [
{"crop": "Tomato", "temp": (22, 32), "hum": (55, 75),
"soil": (40, 65), "light": (500, 850)},
{"crop": "Corn", "temp": (20, 30), "hum": (50, 70),
"soil": (45, 70), "light": (600, 900)},
{"crop": "Rice", "temp": (24, 35), "hum": (70, 90),
"soil": (70, 95), "light": (400, 750)},
{"crop": "Wheat", "temp": (12, 24), "hum": (40, 60),
"soil": (30, 55), "light": (450, 800)},
{"crop": "Potato", "temp": (15, 22), "hum": (60, 80),
"soil": (50, 75), "light": (350, 650)},
{"crop": "Grape", "temp": (18, 30), "hum": (45, 65),
"soil": (25, 45), "light": (550, 900)},
{"crop": "Coffee", "temp": (18, 26), "hum": (60, 85),
"soil": (45, 65), "light": (200, 500)},
{"crop": "Mango", "temp": (26, 38), "hum": (50, 75),
"soil": (30, 55), "light": (600, 900)},
]
def generate_readings():
"""Generate realistic varied sensor data from a random crop profile."""
profile = random.choice(CROP_PROFILES)
t_lo, t_hi = profile["temp"]
h_lo, h_hi = profile["hum"]
s_lo, s_hi = profile["soil"]
l_lo, l_hi = profile["light"]
# Time-of-day influence on readings
hour = datetime.now().hour + random.uniform(-2, 2)
day_factor = max(0, math.sin(math.pi * hour / 24))
temp = round(random.uniform(t_lo, t_hi) + day_factor * 4 - 2, 1)
humidity = round(random.uniform(h_lo, h_hi) - day_factor * 10, 1)
soil = round(random.uniform(s_lo, s_hi) - day_factor * 5, 1)
light = int(random.uniform(l_lo, l_hi) * day_factor)
return {
"temperature": max(t_lo - 3, min(t_hi + 5, temp)),
"humidity": max(20, min(99, humidity)),
"soil_moisture": max(5, min(99, soil)),
"light": max(0, light),
}, profile["crop"]
def generate_scene(crop_name, readings):
"""Generate a field scene image that visually reflects the readings."""
WIDTH, HEIGHT = 640, 480
img = Image.new("RGB", (WIDTH, HEIGHT))
draw = ImageDraw.Draw(img)
temp = readings["temperature"]
humidity = readings["humidity"]
light = readings["light"]
soil = readings["soil_moisture"]
# Sky colour based on light level
if light > 500:
sky_top, sky_bot = (90, 160, 230), (160, 200, 240)
elif light > 200:
sky_top, sky_bot = (140, 150, 170), (180, 185, 195)
elif light > 50:
sky_top, sky_bot = (60, 50, 90), (200, 140, 100)
else:
sky_top, sky_bot = (15, 15, 40), (30, 30, 55)
horizon = int(HEIGHT * 0.50)
for y in range(horizon):
t = y / horizon
r = int(sky_top[0] + (sky_bot[0] - sky_top[0]) * t)
g = int(sky_top[1] + (sky_bot[1] - sky_top[1]) * t)
b = int(sky_top[2] + (sky_bot[2] - sky_top[2]) * t)
draw.line([(0, y), (WIDTH, y)], fill=(r, g, b))
# Sun or moon
if light > 200:
sx = random.randint(80, WIDTH - 80)
sy = random.randint(30, horizon - 60)
sr = 25 + int(light / 100)
brt = min(255, 180 + int(light / 5))
draw.ellipse([sx - sr, sy - sr, sx + sr, sy + sr],
fill=(brt, brt - 20, min(255, int(brt * 0.5))))
elif light < 50:
draw.ellipse([480, 35, 515, 70], fill=(180, 180, 200))
# Clouds - more clouds when humidity is higher
for _ in range(int(humidity / 15)):
cx = random.randint(30, WIDTH - 30)
cy = random.randint(20, horizon - 40)
cb = 230 if light > 200 else 80
for dx, dy in [(-22, 0), (0, -10), (22, 0), (12, 6), (-12, 6)]:
cr = random.randint(14, 26)
draw.ellipse([cx+dx-cr, cy+dy-cr, cx+dx+cr, cy+dy+cr],
fill=(cb, cb, cb + 10))
# Ground - darker when soil is wetter
dry = max(0, min(1, 1 - soil / 100))
for y in range(horizon, HEIGHT):
t = (y - horizon) / (HEIGHT - horizon)
r = int((90 - 40 * dry) - 30 * t)
g = int((130 + 20 * dry) - 50 * t)
b = int((50 + 15 * dry) - 20 * t)
draw.line([(0, y), (WIDTH, y)],
fill=(max(0, r), max(0, g), max(0, b)))
# Wet patches on soil
if soil > 50:
for _ in range(int(soil / 10)):
px = random.randint(0, WIDTH)
py = random.randint(horizon + 20, HEIGHT - 10)
pr = random.randint(10, 30)
draw.ellipse([px-pr, py-pr//2, px+pr, py+pr//2],
fill=(40, 60, 30))
# Crop-specific plant colours
crop_colors = {
"Tomato": {"stem": (40, 100, 20), "fruit": (200, 50, 30)},
"Corn": {"stem": (50, 140, 30), "fruit": (220, 200, 60)},
"Rice": {"stem": (80, 160, 50), "fruit": None},
"Wheat": {"stem": (180, 170, 60), "fruit": (200, 180, 70)},
"Potato": {"stem": (50, 120, 35), "fruit": None},
"Grape": {"stem": (60, 110, 40), "fruit": (100, 30, 120)},
"Coffee": {"stem": (30, 90, 25), "fruit": (140, 50, 30)},
"Mango": {"stem": (40, 120, 30), "fruit": (240, 180, 40)},
}
colors = crop_colors.get(crop_name,
{"stem": (50, 130, 30), "fruit": None})
# Plant height reflects temperature and soil moisture
health = min(1.0, (temp / 30) * 0.5 + (soil / 100) * 0.5)
base_h = int(20 + 30 * health)
for row_y in range(horizon + 15, HEIGHT, 28):
for x in range(15, WIDTH - 15, random.randint(28, 42)):
h = random.randint(base_h - 8, base_h + 12)
w = random.randint(8, 16)
gv = random.randint(-20, 20)
stem = tuple(max(0, min(255, c + gv))
for c in colors["stem"])
draw.polygon([(x, row_y), (x+w//2, row_y-h), (x+w, row_y)],
fill=stem)
draw.line([(x+w//2, row_y), (x+w//2, row_y-h+3)],
fill=tuple(max(0, c-30) for c in stem), width=1)
if colors["fruit"] and random.random() < 0.4:
fr = random.randint(3, 6)
fx = x + w//2 + random.randint(-5, 5)
fy = row_y - h + random.randint(5, h // 2)
fc = tuple(max(0, min(255, c + random.randint(-20, 20)))
for c in colors["fruit"])
draw.ellipse([fx-fr, fy-fr, fx+fr, fy+fr], fill=fc)
buf = io.BytesIO()
img.save(buf, "JPEG", quality=85)
return buf.getvalue()
def upload_snapshot(image_data):
"""Upload snapshot via HTTP POST, return the full URL."""
url = f"{BASE_URL}/api/devices/{DEVICE_ID}/snapshot"
r = requests.post(url, data=image_data, headers={
"Authorization": f"Bearer {ACCESS_TOKEN}",
"Content-Type": "image/jpeg",
})
r.raise_for_status()
return f"{BASE_URL}{r.json()['url']}"
def publish_telemetry(readings):
"""Send telemetry via HTTP POST. Returns True on success."""
r = requests.post(INGEST_URL, json={
"device_id": DEVICE_ID,
"access_token": ACCESS_TOKEN,
"data": readings,
}, timeout=10)
if r.status_code == 200:
return True
print(f" HTTP {r.status_code}: {r.text}")
return False
def main():
print("=" * 50)
print(" Field Sensor Cam - SiliconWit.IO")
print("=" * 50)
# 1. Generate sensor readings
print("\n[1/3] Generating sensor data...")
readings, crop = generate_readings()
print(f" Crop: {crop}")
for key, val in readings.items():
print(f" {key}: {val}")
# 2. Generate a matching scene image
print("[2/3] Generating field scene...")
image_data = generate_scene(crop, readings)
print(f" {len(image_data)} bytes")
# 3. Upload snapshot, then publish telemetry
print("[3/3] Sending snapshot + telemetry...")
try:
snapshot_url = upload_snapshot(image_data)
print(f" Snapshot: {snapshot_url}")
except Exception as e:
print(f" Snapshot upload failed: {e}")
return
readings["snapshot_url"] = snapshot_url
if publish_telemetry(readings):
print(" Telemetry: published")
else:
print(" Telemetry FAILED")
return
print(f"\nDone - {crop} field snapshot + data linked.")
if __name__ == "__main__":
main()
  1. Generate readings - a random crop profile is selected (Tomato, Corn, Rice, etc.) and realistic sensor values are produced within that crop’s range, influenced by time of day.

  2. Generate image - a 640x480 field scene is drawn using Pillow. The scene visually reflects the readings: brighter sky when light is high, more clouds when humidity is high, darker soil when moisture is high, and taller plants when conditions are good.

  3. Upload snapshot - the JPEG is POSTed to /api/devices/{id}/snapshot. If this fails, the script stops (no orphaned data).

  4. Publish telemetry - the sensor readings plus the snapshot_url are sent to /api/devices/ingest. The dashboard links them together.

python field-sensor-cam.py

Expected output:

==================================================
 Field Sensor Cam - SiliconWit.IO
==================================================

[1/3] Generating sensor data...
 Crop: Rice
 temperature: 33.5
 humidity: 67.0
 soil_moisture: 84.0
 light: 542
[2/3] Generating field scene...
 43718 bytes
[3/3] Sending snapshot + telemetry...
 Snapshot: https://siliconwit.io/api/devices/.../snapshot/...
 Telemetry: published

Done - Rice field snapshot + data linked.

Run it multiple times: each run picks a different crop with different readings, so your dashboard will show varied data with distinct images.

Navigate to your device’s detail page on siliconwit.io. You should see:

  • Data table - each row shows Temperature, Humidity, Soil Moisture, Light, and a Snapshot thumbnail
  • Snapshots tab - all uploaded images in a gallery view
  • Charts - sensor readings plotted over time

Every data row should have a corresponding snapshot thumbnail. If you see images without data or data without images, check the Troubleshooting section.

The script is designed to run on constrained hardware. Here is a typical setup for a Raspberry Pi Zero W or Zero 2W.

| Component                        | Purpose                |
| -------------------------------- | ---------------------- |
| Raspberry Pi Zero W / 2W         | Compute + WiFi         |
| Camera Module (v2 or HQ)         | Capture real images    |
| DHT22 or BME280 sensor           | Temperature + humidity |
| Capacitive soil moisture sensor  | Soil readings          |
| BH1750 light sensor              | Lux measurement        |
| 5V power supply or battery pack  | Power                  |

Replace generate_readings() and generate_scene() with real hardware reads.

PiCamera capture:

import io

from picamera2 import Picamera2


def capture_image():
    """Capture a JPEG from the Pi camera."""
    cam = Picamera2()
    config = cam.create_still_configuration(
        main={"size": (640, 480)}
    )
    cam.configure(config)
    cam.start()
    buf = io.BytesIO()
    cam.capture_file(buf, format="jpeg")
    cam.stop()
    return buf.getvalue()

DHT22 sensor:

import adafruit_dht
import board

dht = adafruit_dht.DHT22(board.D4)


def read_dht():
    """Read temperature and humidity from DHT22."""
    return {
        "temperature": round(dht.temperature, 1),
        "humidity": round(dht.humidity, 1),
    }

Soil moisture + light:

import board, busio
import adafruit_ads1x15.ads1115 as ADS
from adafruit_ads1x15.analog_in import AnalogIn
import adafruit_bh1750

i2c = busio.I2C(board.SCL, board.SDA)
# Soil moisture via ADS1115 ADC
ads = ADS.ADS1115(i2c)
soil_chan = AnalogIn(ads, ADS.P0)
# Light via BH1750
bh = adafruit_bh1750.BH1750(i2c)


def read_soil_and_light():
    """Read soil moisture (%) and ambient light (lux)."""
    raw = soil_chan.value
    # Map the raw ADC reading to a percentage (dry reads about 26000 here)
    soil_pct = round((1 - raw / 26000) * 100, 1)
    return {
        "soil_moisture": max(0, min(100, soil_pct)),
        "light": round(bh.lux),
    }

Use cron to run the script every 5 minutes (matching the data interval configured on the device):

crontab -e

Add this line:

*/5 * * * * /usr/bin/python3 /home/pi/field-sensor-cam.py >> /home/pi/sensor-cam.log 2>&1
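Where cron is unavailable, a long-running Python loop can do the same job. A minimal sketch that sleeps to each interval boundary; `run_forever` and `seconds_until_next` are our own names, and the job would be main() from field-sensor-cam.py:

```python
import time

INTERVAL_S = 300  # 5 minutes, matching the device's configured data interval

def seconds_until_next(interval_s: int, now_s: float) -> float:
    """Seconds remaining until the next multiple of interval_s."""
    return interval_s - (now_s % interval_s)

def run_forever(job, interval_s: int = INTERVAL_S):
    """Sleep to each interval boundary, run the job, repeat."""
    while True:
        time.sleep(seconds_until_next(interval_s, time.time()))
        try:
            job()  # e.g. main() from field-sensor-cam.py
        except Exception as exc:
            print(f"run failed: {exc}")  # log it, keep the loop alive
```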

If you want to test the snapshot endpoint without Python, use curl:

# Upload a snapshot
curl -X POST "https://siliconwit.io/api/devices/YOUR_DEVICE_ID/snapshot" \
  -H "Authorization: Bearer YOUR_ACCESS_TOKEN" \
  -H "Content-Type: image/jpeg" \
  --data-binary @photo.jpg

# Send telemetry
curl -X POST "https://siliconwit.io/api/devices/ingest" \
  -H "Content-Type: application/json" \
  -d '{
    "device_id": "YOUR_DEVICE_ID",
    "access_token": "YOUR_ACCESS_TOKEN",
    "data": {
      "temperature": 28.3,
      "humidity": 65.0,
      "snapshot_url": "URL_FROM_SNAPSHOT_RESPONSE"
    }
  }'

Images appear without data (or vice versa)


Cause: The snapshot upload and telemetry are two separate HTTP calls. If one fails, the other may still succeed.

Fix: The script uploads the snapshot first, then publishes telemetry. If telemetry fails, the script reports the failure. For production, add retry logic around publish_telemetry().
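One possible shape for that retry logic, assuming the True/False contract of publish_telemetry() from the script above (`publish_with_retry` is an illustrative name, not part of the platform API):

```python
import time

def publish_with_retry(publish_fn, attempts: int = 3,
                       base_delay_s: float = 2.0) -> bool:
    """Retry a True/False publisher with exponential backoff
    (base, 2x base, 4x base, ...). Returns the final outcome."""
    for i in range(attempts):
        if publish_fn():
            return True
        if i < attempts - 1:
            time.sleep(base_delay_s * (2 ** i))
    return False

# Usage: publish_with_retry(lambda: publish_telemetry(readings))
```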

Connectivity mismatch - data not appearing


If your device is configured for HTTP connectivity but your script uses MQTT (or vice versa), the data will still work - both protocols write to the same database. The connectivity setting is informational and affects which code snippets appear on the device detail page.

You have hit the daily snapshot limit for your plan. The Free plan allows 10 per day; upgrade to Starter (50/day) or Business (200/day) if you need more.

The image exceeds the 500 KB limit. Reduce the JPEG quality or image dimensions:

img.save(buf, "JPEG", quality=60)  # lower quality
img.thumbnail((320, 240))          # smaller dimensions
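If a fixed quality setting still overshoots, a small helper can step the quality down until the encoded image fits. A sketch assuming the 500 KB limit from the snapshot endpoint; the quality ladder and fallback dimensions are arbitrary choices:

```python
import io
from PIL import Image

def encode_under_limit(img: Image.Image, max_bytes: int = 500_000) -> bytes:
    """Re-encode at decreasing JPEG quality until the result fits
    max_bytes; as a last resort, shrink the dimensions too."""
    for quality in (85, 70, 60, 45, 30):
        buf = io.BytesIO()
        img.save(buf, "JPEG", quality=quality)
        if buf.tell() <= max_bytes:
            return buf.getvalue()
    # Still too large: reduce dimensions, then encode at low quality
    small = img.copy()
    small.thumbnail((320, 240))
    buf = io.BytesIO()
    small.save(buf, "JPEG", quality=30)
    return buf.getvalue()
```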

Double-check your DEVICE_ID and ACCESS_TOKEN. The token is only shown once at device creation. If you lost it, regenerate it from the device settings page.

|               | MQTT                                        | HTTP                                   |
| ------------- | ------------------------------------------- | -------------------------------------- |
| Best for      | Continuous streaming, bidirectional control | Periodic uploads, camera snapshots     |
| Connection    | Persistent (keeps socket open)              | Per-request (connect, send, disconnect)|
| Device config | Set connectivity to WiFi/Ethernet/4G + MQTT | Set connectivity to WiFi/4G + HTTP     |
  • Use JPEG for photos - 5-10x smaller than PNG at equivalent visual quality
  • Resize before uploading - 640x480 at 80% quality is typically under 100 KB
  • Match data interval to cron - if your device is set to “Every 5 minutes”, run the script every 5 minutes
  • Log failures - redirect output to a log file when running via cron
  • Use motion detection - in production, trigger snapshots on events (motion, threshold breach) rather than fixed intervals to conserve your daily quota
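The last point can be prototyped in pure Pillow with frame differencing: capture a frame each cycle, compare it to the previous one, and only upload a snapshot when enough pixels changed. The thresholds below are illustrative and need tuning for your camera and scene:

```python
from PIL import Image, ImageChops

def motion_detected(prev: Image.Image, curr: Image.Image,
                    pixel_thresh: int = 25, frac_thresh: float = 0.01) -> bool:
    """True when more than frac_thresh of pixels changed by more than
    pixel_thresh grey levels between the two frames."""
    diff = ImageChops.difference(prev.convert("L"), curr.convert("L"))
    hist = diff.histogram()  # bin i = count of pixels whose difference == i
    changed = sum(hist[pixel_thresh + 1:])
    total = prev.size[0] * prev.size[1]
    return changed / total > frac_thresh
```

In the main loop you would call upload_snapshot() only when motion_detected(last_frame, frame) is True, while telemetry continues at the regular interval.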