Developer Build · Blog 4 of 5

How to Build a Retail Shelf Monitoring Camera System with Edge AI

A step-by-step builder's guide: from unboxing the NE301 to a working shelf gap detection system with real-time MQTT alerts — no cloud, no subscription, no prior AI experience required.

shelf-monitor — bash — 80×24
$ docker compose up aitoolstack # start AI Tool Stack
✔ Container aitoolstack Started → http://localhost:8000
✔ NE301_A3F2C1 bound to project "shelf-monitor-aisle3"
✔ Dataset: 312 images | Classes: shelf_gap, product_present
✔ Training complete mAP50: 0.943 | Quantizing to INT8…
✔ Model package: shelf_gap_v1_ne301.zip → ready to deploy
$ mosquitto_sub -t "shelf/aisle3/events"
⚠ {"event":"shelf_gap","shelf_id":"aisle3-bay2","fill_pct":14,"ts":"09:47:12Z"}
1. Camera Setup
2. Data Collection
3. Train & Quantize
4. Deploy Model
5. MQTT Alerts
6. Test & Iterate

This guide walks you through building a complete real-time out-of-stock detection system — a shelf camera that runs AI inference on-device, detects when products are missing, and fires structured alerts to your store operations stack. Every step maps to real CamThink hardware and open-source tooling. By the end, you'll have a working system you can iterate on, scale across shelves, and deploy to production.

This is part of CamThink's Retail Shelf Monitoring guide series. If you want to understand the business case and architecture options before diving in, start there. If you're ready to build — read on.

What You'll Need

Bill of Materials
| Item | Why You Need It | Where | Required? |
| --- | --- | --- | --- |
| CamThink NE301 (Wi-Fi or PoE) | The edge AI camera — runs YOLOv8 on-device at 25 FPS, outputs MQTT alerts. Core of the entire build. | camthink.ai/store | Required |
| 88° FOV lens module | Covers ~90–120cm shelf width at standard mounting distance. Included with NE301; 51° and 137° available separately. | Included / store | Required |
| Shelf mounting bracket | Clip-on shelf edge mount or small RAM-style arm. Any adjustable bracket compatible with 1/4"-20 thread. | Amazon / hardware store | Required |
| USB-C power bank or PoE switch | USB-C power bank for the wireless NE301 Wi-Fi; PoE switch if using the NE301 PoE for always-on deployment. | Any electronics supplier | Required |
| Docker host (PC, Mac, or Linux server) | Runs AI Tool Stack locally for data collection, annotation, model training, and quantization. | Existing machine | Required |
| MQTT broker (Mosquitto) | Receives shelf gap alerts from the NE301. Run locally via Docker or use a cloud broker. | mosquitto.org | Required |
| Home Assistant | Consumes MQTT events and routes alerts to store associate mobile apps, dashboards, or webhooks. | home-assistant.io | Optional |
| NVIDIA GPU | Dramatically speeds up YOLOv8 training. A GTX 1060 6GB cuts training time from ~2h to ~15min on 300 images. CPU training works fine for small datasets. | Existing machine | Optional |
CamThink NeoEyes NE301
The build's core component
  • STM32N6 · Cortex-M55 Neural-ART NPU · 0.6 TOPS
  • 25 FPS YOLOv8 on-device
  • Wi-Fi 6 · PoE · LTE Cat.1
  • 51° / 88° / 137° FOV
  • IP67 · –20°C to +50°C
  • Fully open source firmware

The NE301 runs the complete detection pipeline locally — image capture, model inference, gap analysis, and MQTT publish — without sending a single frame to the cloud. The open-source ne301 firmware and AI Tool Stack give you full control over every part of the stack.

From $199.90
Wi-Fi · PoE · LTE variants
Buy NE301 →
1
Step One
Camera Placement & Shelf Coverage Setup
~30–60 min

Getting the camera position right is the single most important determinant of model performance. A great model trained on poorly framed images will underperform a mediocre model trained on consistent, well-framed images. Spend the time here — it pays off in every subsequent step.

Choosing the Right FOV Lens

| Lens | Coverage Width at 80cm | Coverage Width at 50cm | Best For |
| --- | --- | --- | --- |
| 88° (Recommended) | ~120cm | ~80cm | Standard grocery / convenience shelf bays, 1–2 sections per camera |
| 51° | ~65cm | ~44cm | Narrow sections, higher resolution per SKU, premium product areas |
| 137° | ~220cm | ~150cm | Wide-span end caps, open cooler doors, overview shots |
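If your mounting distance differs from the table, you can estimate coverage yourself. A quick Python sketch, assuming the published FOV figures are diagonal angles on a 4:3 sensor — an assumption worth checking against the NE301 lens datasheet:

```python
import math

def coverage_width(distance_cm: float, diag_fov_deg: float, aspect=(4, 3)) -> float:
    """Horizontal shelf coverage at a given distance, assuming the FOV
    spec is a diagonal angle and the sensor has the given aspect ratio."""
    w, h = aspect
    diag = math.hypot(w, h)
    # derive the horizontal half-angle tangent from the diagonal half-angle
    tan_half_h = math.tan(math.radians(diag_fov_deg) / 2) * (w / diag)
    return 2 * distance_cm * tan_half_h

print(round(coverage_width(80, 88)))   # ≈ 124 cm, close to the table's ~120cm
```

Under that assumption the math lands within a few centimeters of the table's values, which is accurate enough for choosing a lens and mounting distance.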

Mounting & Positioning Guidelines

  • Height: Mount at the middle shelf rail, roughly eye-level (100–130cm). This gives the most consistent product-facing view — the same angle a shopper sees.
  • Distance from shelf: 50–90cm from the shelf face works well for the 88° lens. Closer gives higher resolution per product; farther gives wider coverage per camera.
  • Tilt angle: Keep the camera facing the shelf straight-on (0–5° tilt). Severe downward angles distort product geometry and reduce detection accuracy by 10–15%.
  • Lighting: Avoid positions where a ceiling spot creates a moving shadow across the shelf face. If the store uses LED shelf strips, these dramatically improve model consistency.

First Boot & Network Connection

bash · NE301 setup — first connection via built-in Wi-Fi AP
# 1. Power up the NE301 (USB-C or PoE)
# 2. Hold button 2–3s → "NE301_[MAC6]" AP appears in Wi-Fi list
# 3. Connect to AP, then navigate to Web UI:

$ open http://192.168.10.10      # default Web UI address
# Default credentials: admin / admin123

# 4. In Web UI → Settings → Network → add your store Wi-Fi SSID + password
# 5. Reboot; NE301 joins your LAN and gets a DHCP address
# 6. Find the assigned IP in your router's DHCP table, reconnect to Web UI
💡
PoE for Production, USB-C for Prototyping

Use the NE301 Wi-Fi + USB-C power bank for your first prototype — quick to mount anywhere, no cable runs needed. Switch to the NE301 PoE for production deployments, where always-on operation and a single cable per camera are preferred. The firmware and model behave identically on both variants.

2
Step Two
Data Collection with CamThink AI Tool Stack
~2–4 hours

CamThink AI Tool Stack is a self-hosted, open-source end-to-end AI development environment built specifically for NE301. It covers every step from image collection through annotation, training, quantization, and deployment — running entirely on your local machine. No data leaves your premises.

  • 📷 Collect — NE301 → AI Tool Stack · bind device
  • 🏷️ Annotate — Web UI · bounding boxes · class labels
  • 🧠 Train — YOLOv8n · ultralytics · local GPU/CPU
  • Quantize — INT8 · NE301 model package · one-click
  • 🚀 Deploy — Web UI upload · OTA · on-device inference

Install and Start AI Tool Stack

bash — AI Tool Stack setup (Docker required)
# Prerequisites: Docker + Docker Compose v2.0+

# Clone the repository
$ git clone https://github.com/camthink-ai/AIToolStack.git
$ cd AIToolStack

# Pull the NE301 quantization environment (needed for model export)
$ docker pull camthink/ne301-dev:latest

# Start AI Tool Stack services
$ docker compose up -d

# Open Web UI
$ open http://localhost:8000

Bind NE301 and Capture Shelf Images

In the AI Tool Stack Web UI:

  • Create a new project named shelf-monitor-[your-shelf-id]
  • Go to Data Sources → Bind Device, enter the NE301's IP address
  • Click Start Collection — the NE301 will begin pushing live images to the project

What images to capture:

  • Fully stocked shelf (baseline — "this is what correct looks like")
  • 75%, 50%, 25% fill levels — simulate progressive depletion
  • Completely empty shelf — critical for high-confidence gap detection
  • Multiple times of day — morning shadow, midday flat light, evening overhead
  • Common occlusion scenarios — misaligned products, labels turned sideways
⚠️
Minimum Dataset Recommendation

200 images minimum, 300–400 recommended for a production-quality first model. With AI Tool Stack's auto-annotation feature (annotate 20 images manually, then use the model to pre-annotate the rest for human review), 300 images can be annotated in under 90 minutes — even without prior annotation experience.
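If you later train manually with ultralytics (Step 3's optional CLI path) rather than through the Web UI, you'll need your own train/validation split. A minimal sketch — AI Tool Stack normally handles this for you, so the directory layout here is illustrative:

```python
import random
from pathlib import Path

def split_dataset(image_dir: str, val_frac: float = 0.2, seed: int = 0):
    """Shuffle collected shelf images into train/val lists.
    A fixed seed keeps the split reproducible across retraining runs."""
    images = sorted(Path(image_dir).glob("*.jpg"))
    random.Random(seed).shuffle(images)
    n_val = int(len(images) * val_frac)
    return images[n_val:], images[:n_val]   # (train, val)
```

A 20% validation fraction on a 300-image dataset leaves 60 images for the mAP numbers reported during training — small, but workable for a first model.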

🔗
Hackster.io · Live Reference
AIToolStack for CamThink NeoEyes NE301 — Full Workflow Tutorial

CamThink's published Hackster tutorial covers the complete AI Tool Stack workflow with NE301, including device binding, dataset creation, and model deployment with screenshots.

View on Hackster.io
3
Step Three
Train Your Shelf Detection Model & Quantize for NE301
~1–3 hours

With your dataset collected, the next step is training a YOLOv8-based object detection model and quantizing it to INT8 format for the NE301's Neural-ART NPU. AI Tool Stack handles both steps — you do not need to write any training code or deal with TensorFlow Lite directly.

Annotation: Two Classes for Shelf Gap Detection

In the AI Tool Stack annotation interface, create two classes:

  • product_present — draw bounding boxes around visible product facings on the shelf
  • shelf_gap — draw bounding boxes around empty shelf areas where products should be

Use Auto Annotation once you've manually labeled 20–30 images: the built-in inference backend pre-labels the remaining images. Review and correct the auto-labels, then export the verified dataset.
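If you export the verified dataset and inspect it, YOLO-format labels are one text line per box: a class index followed by a normalized center and size. A sketch of that conversion, assuming class indices 0 = product_present and 1 = shelf_gap (the actual index order comes from your exported dataset config):

```python
def to_yolo_line(cls_id: int, bbox_xyxy, img_w: int, img_h: int) -> str:
    """Convert a pixel-space [x1, y1, x2, y2] box to a YOLO label line
    (class, then center-x, center-y, width, height, all normalized 0–1)."""
    x1, y1, x2, y2 = bbox_xyxy
    cx = (x1 + x2) / 2 / img_w
    cy = (y1 + y2) / 2 / img_h
    w = (x2 - x1) / img_w
    h = (y2 - y1) / img_h
    return f"{cls_id} {cx:.6f} {cy:.6f} {w:.6f} {h:.6f}"

# a shelf_gap box on a 640×480 frame
print(to_yolo_line(1, [320, 140, 580, 390], 640, 480))
# → "1 0.703125 0.552083 0.406250 0.520833"
```

Knowing this format makes it easy to spot-check auto-annotations in a text editor before committing them to the training set.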

Training with YOLOv8

bash · AI Tool Stack CLI (optional) — manual training command if bypassing the Web UI
# Inside AI Tool Stack or standalone — uses ultralytics
# imgsz=256 is required for NE301 NPU quantization

$ yolo detect train \
    data=shelf_gap.yaml \
    model=yolov8n.pt \
    epochs=100 \
    imgsz=256 \
    device=0          # device=cpu if no GPU

# Expected output on a 300-image dataset:
Epoch 100/100:  box_loss=0.82  cls_loss=0.41  dfl_loss=0.95
mAP50: 0.943   mAP50-95: 0.671
Training complete. Results saved to runs/detect/train/
📐
Why imgsz=256?

The NE301's Neural-ART NPU is optimized for 256×256 input resolution. Training at this size ensures the quantized model fits within the NPU's memory constraints and achieves the rated 25 FPS inference speed. Training at higher resolution (e.g., 640) produces a model that cannot be efficiently quantized for NE301 deployment.

One-Click Quantization to NE301 Model Package

bash — export to INT8 NE301 model package via AI Tool Stack
# In AI Tool Stack Web UI → Model Management → Export NE301 Package
# OR via CLI inside the ne301-dev container:

$ docker run --rm \
    -v "$(pwd)/runs/detect/train/weights:/workspace/weights" \
    -v "$(pwd)/export:/workspace/export" \
    camthink/ne301-dev:latest \
    yolo export \
        model=/workspace/weights/best.pt \
        format=tflite \
        imgsz=256 \
        int8=True \
        data=shelf_gap.yaml \
        fraction=0.2

# AI Tool Stack wraps the .tflite + config JSON into a deployable .zip
✔ Exported: shelf_gap_v1_ne301.zip  (ready to upload to NE301)

The exported shelf_gap_v1_ne301.zip is a complete NE301 model package — it includes the quantized .tflite model, the class label JSON, and the inference configuration file. No further configuration needed before deployment.
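Before uploading, you can sanity-check the package from the command line. A small sketch that verifies the contents described above — the exact file names inside the zip are assumptions based on this guide's description, so treat it as illustrative:

```python
import zipfile

def check_ne301_package(path: str) -> bool:
    """Verify an exported model package contains a quantized .tflite model
    and at least one JSON file (labels / inference config) before upload."""
    with zipfile.ZipFile(path) as z:
        names = z.namelist()
    has_model = any(n.endswith(".tflite") for n in names)
    has_json = any(n.endswith(".json") for n in names)
    return has_model and has_json
```

A thirty-second check like this saves a reboot cycle on the camera if an export step silently failed.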

4
Step Four
Deploy Model to NE301 & Configure Inference
~15 min

Deploying the model to the NE301 is done entirely through the Web UI — no terminal access or firmware flashing required for model updates.

  • Open the NE301 Web UI (http://[ne301-ip])
  • Navigate to Application Management → Upload Model
  • Drag and drop the shelf_gap_v1_ne301.zip package
  • Click Apply — the NE301 reboots into the new model (takes ~30s)
  • Go to Live View — you should see bounding boxes appearing around detected products and any gaps on the shelf

Configure Inference Thresholds

json · NE301 Inference Config — threshold tuning via Web UI or config file
{
  "model":            "shelf_gap_v1_ne301",
  "confidence_min":   0.55,   // minimum detection confidence to report
  "nms_iou":          0.45,   // non-max suppression overlap threshold
  "alert_classes":    ["shelf_gap"],
  "alert_threshold": {
    "shelf_gap": {
      "min_area_pct":   5,     // gaps smaller than 5% of frame are ignored
      "fill_pct_alert": 30    // fire alert when fill % drops below 30%
    }
  },
  "inference_fps":     5,     // 5 FPS adequate for shelf monitoring; saves power
  "power_mode":       "performance"
}
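To see how these thresholds interact, here is a sketch of the alert decision the config implies — the actual on-device logic may differ in detail:

```python
def should_alert(detections, frame_area, confidence_min=0.55,
                 min_area_pct=5, fill_pct_alert=30) -> bool:
    """Fire an alert if any confident shelf_gap detection is both large
    enough (min_area_pct of frame) and below the fill threshold."""
    for d in detections:
        if d["class"] != "shelf_gap" or d["confidence"] < confidence_min:
            continue
        x1, y1, x2, y2 = d["bbox"]
        area_pct = (x2 - x1) * (y2 - y1) / frame_area * 100
        if area_pct >= min_area_pct and d["fill_pct"] < fill_pct_alert:
            return True
    return False
```

The two thresholds serve different purposes: min_area_pct suppresses tiny spurious boxes, while fill_pct_alert sets the business trigger for how depleted a bay must be before anyone is notified.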
OTA Updates for Production Fleets

Once a new model version is validated, you can push it to all NE301 devices in a store (or across stores) simultaneously using the NE301 OTA API — no physical access required. This is the path from pilot to production: validate on 3 cameras, push to 30. The NE301 supports staged rollout: deploy to 10% of devices first, monitor for issues, then complete the rollout.
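If you script staged rollouts yourself against the OTA API, a deterministic hash keeps each device in the same wave across runs. A hypothetical helper (the OTA API's own wave selection, if it provides one, is the authoritative mechanism):

```python
import hashlib

def in_rollout_wave(device_id: str, rollout_pct: int) -> bool:
    """Deterministically assign a device to the first rollout wave.
    Hashing the device ID (rather than random sampling) means the same
    10% of cameras are selected every time the script runs."""
    digest = hashlib.sha256(device_id.encode()).hexdigest()
    return int(digest, 16) % 100 < rollout_pct
```

Deploy only to devices where `in_rollout_wave(dev, 10)` is true, watch accuracy metrics for a day, then raise the percentage to 100.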

5
Step Five
Configure MQTT Output & Store Alerts
~1–2 hours

The NE301 outputs detection results over MQTT. This is where your shelf monitoring system connects to the real world — routing gap detection events to store associates, dashboards, or ERP systems.

Start a Local MQTT Broker

bash — Mosquitto broker via Docker
$ docker run -d \
    --name mosquitto \
    -p 1883:1883 \
    -p 9001:9001 \
    eclipse-mosquitto:latest

# Verify broker is running
$ mosquitto_sub -h localhost -t "#" -v   # subscribe to all topics

Configure NE301 MQTT Output

In the NE301 Web UI, go to Settings → MQTT:

  • Broker: 192.168.1.x (your Docker host's LAN IP — not localhost)
  • Port: 1883
  • Uplink topic: shelf/[your-shelf-id]/events (e.g., shelf/aisle3-bay2/events)
  • Publish on: Detection event (not continuous — only fires when a gap is detected)

Example Alert Payload

json · MQTT payload — shelf_gap detection event from NE301
{
  "device_id":    "NE301_A3F2C1",
  "shelf_id":     "aisle3-bay2-left",
  "timestamp":    "2025-03-01T09:47:12Z",
  "event":        "shelf_gap_detected",
  "detections": [
    {
      "class":        "shelf_gap",
      "confidence":   0.94,
      "bbox":         [320, 140, 580, 390],
      "fill_pct":     14
    }
  ],
  "image_url":    "http://192.168.1.45/snap/20250301_094712.jpg"
}
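On the consuming side, any MQTT client can parse this payload and decide whether to notify. A minimal handler sketch using the field names from the example above (wire it to your client library's on-message callback):

```python
import json

def handle_event(payload: str, fill_threshold: int = 30) -> dict:
    """Parse an NE301 shelf event and return a routing decision.
    Uses the worst (lowest) fill_pct among shelf_gap detections."""
    evt = json.loads(payload)
    gaps = [d for d in evt["detections"] if d["class"] == "shelf_gap"]
    worst = min((d["fill_pct"] for d in gaps), default=100)
    if worst < fill_threshold:
        return {"notify": True, "shelf_id": evt["shelf_id"], "fill_pct": worst}
    return {"notify": False}
```

Returning a plain dict keeps the decision logic testable independently of whatever notification channel (push, webhook, task app) sits downstream.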

Home Assistant Automation for Store Alerts

yaml · Home Assistant — MQTT → mobile push notification to store associate
alias: "Shelf OOS Alert — Aisle 3"
trigger:
  - platform: mqtt
    topic: "shelf/aisle3-bay2-left/events"
condition:
  - condition: template
    value_template: "{{ trigger.payload_json.detections[0].fill_pct | int < 30 }}"
action:
  - service: notify.mobile_app_store_phone
    data:
      title:   "⚠️ Low Stock Alert"
      message: >
        Aisle 3, Bay 2 Left is at
        {{ trigger.payload_json.detections[0].fill_pct }}% fill.
        Please restock.
      data:
        image: "{{ trigger.payload_json.image_url }}"
        # attaches the shelf snapshot to the notification
🔗
Beyond Home Assistant: ERP & Task App Integration

Home Assistant works well for prototypes and small deployments. For enterprise integration — connecting OOS events to SAP, Oracle, or commercial task management apps — route MQTT events through Node-RED to HTTP REST endpoints. See our planogram compliance guide for the full enterprise integration architecture.

6
Step Six
Test, Measure, & Iterate
~1–2 weeks live

The first deployed model is a starting point, not a finished product. Plan for one to two weeks of live operation before declaring the system production-ready. The data you collect in this phase is the most valuable training data you'll ever have.

  • False positive log — record every alert that fired incorrectly: a box shadow misidentified as a gap, a product with non-standard packaging. Add these as product_present annotations and retrain.
  • False negative log — walk the shelf daily and note any gaps the system missed. Capture images of these missed cases, annotate them as shelf_gap, and add them to the training set.
  • Threshold tuning — if the false positive rate is high, increase confidence_min from 0.55 to 0.65. If you're missing real gaps, decrease it or lower fill_pct_alert.
  • Lighting variation — note whether performance drops at specific times of day (opening, closing, late-afternoon sun). Capture images at those times and add them to the training set.
  • Response time KPI — measure the time from gap occurrence to associate notification. Target: under 15 minutes for fast-moving SKUs. Adjust the inference frequency (inference_fps) if needed.
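The false positive and false negative logs above translate directly into precision and recall, which tell you which direction to move confidence_min. A two-line calculator:

```python
def detection_metrics(true_pos: int, false_pos: int, false_neg: int):
    """Precision and recall from a week of logged alerts.
    Low precision → raise confidence_min; low recall → lower it
    (or lower fill_pct_alert)."""
    precision = true_pos / (true_pos + false_pos)
    recall = true_pos / (true_pos + false_neg)
    return precision, recall
```

For example, 90 correct alerts, 10 false alarms, and 5 missed gaps gives 90% precision and ~95% recall — in that case the false alarms, not the threshold, are the thing to fix via retraining.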
📈
Typical Performance Trajectory

Most builders reach 90%+ detection accuracy within two retraining cycles — typically 3–4 weeks from first deployment. The AI Tool Stack makes retraining fast: annotate edge cases → retrain (15–90 min) → quantize (5 min) → deploy OTA (2 min). Iteration velocity, not first-attempt accuracy, is the key to a great production model.

Going Further: Scaling Beyond One Shelf

Once your single-shelf prototype is validated, the path to full-store coverage is straightforward. The NE301's OTA model update capability means scaling from 1 to 30 cameras doesn't require physical access to each device.

Distributed Architecture
5–30 cameras

Deploy one NE301 per shelf section. Each camera runs its own inference independently and publishes MQTT alerts to a shared local broker. Simple, resilient, and cost-effective. No single point of failure.

Hybrid with NG4500
30+ cameras / enterprise

Add a CamThink NG4500 (20–100 TOPS Jetson Orin) as a store-level edge server. NE301 cameras handle first-pass detection; NG4500 runs planogram compliance analysis, aggregates compliance scores, and manages ERP integration across the full camera fleet.

OTA Fleet Management
Any scale

Push improved model versions to all NE301 devices simultaneously via the OTA API. Use staged rollout: deploy to 10% of cameras first, validate accuracy metrics, then complete the rollout. Zero physical access required after initial installation.

Expand Use Cases
Same hardware

The same NE301 camera and AI Tool Stack pipeline supports planogram compliance detection, price tag verification, and cold chain monitoring — by training additional classes or deploying multiple model configurations. One hardware SKU, many applications.

🔧
Hackster.io · Related Project
Industrial Edge AI: Smart Warehouse Monitoring with NE301

CamThink's warehouse monitoring project on Hackster demonstrates the NE301 + Home Assistant + MQTT architecture in a logistics context, with inventory counting and presence detection patterns that carry over directly to retail shelf monitoring.

View on Hackster.io
Ready to Build?

Get the NE301 and Start Building Today

Everything in this guide runs on one piece of hardware. From $199.90 — with open-source firmware, AI Tool Stack, and a developer community to help you ship faster.
