
APPLICATION · Integrator Guide

Replace Ultrasonic Bin Sensors with On-Device Vision AI

Your waste bin monitoring project defaults to ultrasonic fill sensors. Before you commit the hardware budget, here’s what those accuracy numbers actually mean at 50–200 bins — and how on-device AI inference running directly on the NeoEyes NE301 changes data quality, integration architecture, and long-term deployment cost.

⏱ ~12 min read · 🗓 Updated March 2026 · NeoEyes NE301 · NeoEdge NG4500 · Integrator / Developer

The Integration Problem You’re Actually Solving

Most waste bin monitoring projects land with a line item already filled in: ultrasonic fill sensors. They’re cheap, well-understood, and have a long track record in smart city deployments. But there’s a gap between what the spec sheet says and what the sensor delivers when you’re trying to push reliable data into a routing or alerting system at scale.

The problem isn’t that ultrasonic sensors are bad technology. The problem is that they produce a single scalar value — distance in centimetres — and expect your downstream system to make complex operational decisions from it. When your client asks “why did the truck go to an empty bin?” or “why did this bin overflow if the sensor said 60%?”, the answer is almost always in the gap between what the sensor measured and what was actually in the bin.

60–75% · Ultrasonic fill accuracy in mixed-waste environments (field avg., IoT Analytics 2023)
>90% · Vision AI fill classification accuracy with YOLOv8 on NE301 NPU
$0 · Recurring cloud API cost: NE301 runs inference on-device, no cloud dependency

Why Ultrasonic Sensors Fail in Waste Bin Monitoring

Ultrasonic sensors work by emitting a sound pulse and measuring the time to bounce back from the nearest surface. In a clean cylindrical tank, that’s reliable. In a waste bin, you’re dealing with irregular materials, varied surface textures, and unpredictable geometry — and the physics of acoustic reflection don’t handle that consistently.

〰️
Ultrasonic Sensor
Distance-based fill measurement
Unit cost: $40–$120
Accuracy: ~60–75%
Output: Distance (cm)
Power: Low (mW range)
Hard Failure Modes
  • Foam / soft waste absorbs pulse — reads “empty”
  • Irregular bags create false surface height
  • No distinction: compacted vs. loose fill
  • No image evidence for SLA disputes
  • Cannot detect external overflow or bin tipping
🔴
IR Beam Sensor
Beam-break fill detection
Unit cost: $20–$60
Accuracy: ~45–60%
Output: Binary (full / not full)
Power: Very low
Hard Failure Modes
  • Single threshold — no fill-level gradient
  • Transparent bags and liquids pass the beam
  • Dust accumulation causes false positives
  • Single point only — misses side-piled waste
  • No image data for downstream analysis
⚠ The Shared Constraint

Ultrasonic and IR sensors both produce a single scalar value at a single point in time, with no image data attached. Every classification decision your system makes — and every alert, dispatch, or SLA log — is built on a number that cannot be audited, contextualised, or retrained when it’s wrong.

How On-Device Vision AI Works Differently

The NeoEyes NE301 uses an STM32N6 processor with an NPU (Neural Processing Unit, a dedicated coprocessor purpose-built to accelerate AI inference). The camera captures a frame of the bin’s current state, runs a trained classification model directly on the NPU, and produces a fill-level result in under 50 ms, entirely on-device with no cloud dependency. The result is published via MQTT or HTTP to your backend.

At 0.6 TOPS (tera operations per second), the NPU is sufficient for running quantised YOLOv8 classification models native to the NE301’s firmware. This is the architectural fact that separates it from cameras using general-purpose MCUs: the dedicated NPU handles the inference workload without competing with network I/O or sensor management tasks.

System Architecture — NE301 Waste Bin Monitoring

NE301 (on-device inference · STM32N6 · IP67)
  → MQTT / HTTP → Network (WiFi 6 + BT 5.4 · optional LTE / PoE)
  → JSON payload → Backend (alert routing · SLA log · dashboard)
  → webhook / API → Dispatch (demand-driven trigger)

Multi-Class Output — Not a Distance or a Boolean

A vision model returns whatever classes you train it on. A typical waste bin monitoring model uses 6 classes, each mapping to a different action in your alerting logic:

Class        | Fill Level            | Recommended System Action             | Ultrasonic Equivalent
Empty        | 0–15%                 | Skip — no collection needed           | Distance > 80 cm
Partial      | 15–50%                | Log only — monitor next cycle         | Distance 40–80 cm
Near-Full    | 50–80%                | Flag for next scheduled route         | Distance 20–40 cm
Full         | 80–95%                | Priority pickup — add to active route | Distance < 20 cm
Overflow     | >95% / external waste | Immediate dispatch alert              | ❌ Undetectable
Contaminated | Any                   | Escalate — specialist collection      | ❌ Undetectable

The two classes at the bottom — overflow (waste piled outside the bin) and contaminated (hazardous or mixed-stream waste) — are invisible to any sensor measuring internal fill depth. These are the conditions that generate the most expensive operational responses, and they are the ones a sensor-based deployment systematically misses.

The MQTT Payload Your Backend Receives

When the NE301 publishes inference results, each MQTT message includes the classification label, confidence score, estimated fill percentage, and a reference to the captured image stored on-device. Here is the payload structure:

JSON · MQTT Payload · NE301 inference result — topic: camthink/bin/{device_id}/status
{
  "device_id":     "ne301-bin-042",
  "timestamp":     "2026-03-24T08:14:22Z",
  "location":      "zone-B / stand-12",

  // Inference result from Neural-ART NPU
  "fill_class":    "near_full",     // empty | partial | near_full | full | overflow | contaminated
  "confidence":    0.91,            // recommended dispatch threshold: ≥ 0.80 for overflow / contaminated
  "fill_pct_est":  72,              // estimated fill % from model (informational)

  // Image reference (stored on-device, fetched via HTTP)
  "image_url":     "http://192.168.1.42/snap/20260324_081422.jpg",
  "image_size_kb": 91,

  // Device health
  "battery_pct":   84,              // NE301 supports PoE, USB, or battery
  "signal_rssi":   -61,
  "fw_version":    "1.4.2"
}
🛠 Developer Note · Confidence Thresholds

Set dispatch-trigger logic to act only on payloads where confidence ≥ 0.80 for overflow or contaminated classes. For full and near_full, 0.75 is generally sufficient. Log all payloads regardless — low-confidence events are the most valuable source of new training data for your next model iteration.
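The threshold logic above can be sketched as a small backend-side gate. This is a minimal illustration, not NE301 firmware code: the function name `should_dispatch` and the dict-based payload handling are assumptions, while the class names and thresholds are the ones recommended in this guide.

```python
import json

# Per-class minimum confidence for triggering a dispatch (values from the note above)
DISPATCH_THRESHOLDS = {
    "overflow": 0.80,
    "contaminated": 0.80,
    "full": 0.75,
    "near_full": 0.75,
}

def should_dispatch(payload: dict) -> bool:
    """Return True if this inference payload should trigger an alert/dispatch.

    Classes with no entry (empty, partial) never dispatch; payloads below
    their class threshold are logged only.
    """
    threshold = DISPATCH_THRESHOLDS.get(payload["fill_class"])
    if threshold is None:
        return False
    return payload["confidence"] >= threshold

# Example using the fields from the payload shown earlier
msg = json.loads('{"fill_class": "near_full", "confidence": 0.91}')
print(should_dispatch(msg))   # True: 0.91 >= 0.75 for near_full
```

Because every payload is logged regardless of the gate's decision, the low-confidence events this function filters out remain available for the retraining loop described later.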

Hardware Selection for Bin Monitoring Deployments

The right hardware depends on power availability, inference requirements, and scale. For waste bin monitoring specifically, there are two distinct deployment architectures:

NeoEyes NE301
Primary · On-Device AI Camera
Inference on-device — no server needed per bin
  • STM32N6 + NPU · 0.6 TOPS
  • YOLOv8 · <50ms inference latency
  • 4MP MIPI CSI · 51°/88°/137° FOV options
  • Deep sleep 7–8 µA · PoE, USB, or battery
  • IP67 weatherproof · 77×77×48 mm
  • WiFi 6 · BT 5.4 · optional LTE Cat.1 or PoE
NeoEdge NG4500
Edge Server · Multi-Camera Hub
Centralised inference for large-scale networks
  • NVIDIA Jetson Orin NX/Nano · up to 157 TOPS
  • Aggregates feeds from multiple cameras
  • Supports VLM / LLM edge inference
  • JetPack 6.0+ · mainstream DL frameworks
  • Fanless chassis · 12–36V DC
  • CAN · RS232 · RS485 · multi I/O
Architecture Decision

NeoEyes NE301 is the right choice for most deployments: each unit handles its own inference, publishes results via MQTT, and requires no edge server. Move to a NE301 + NG4500 hybrid when you need centralised model management across a large fleet, VLM-level anomaly analysis, or aggregated dashboards pulling from 20+ cameras into a single compute node.

Full Comparison Across Deployment Dimensions

Dimension                     | IR Sensor       | Ultrasonic Sensor | Cloud Vision API      | NE301 On-Device AI
Fill accuracy (mixed waste)   | ~50%            | ~68%              | >90%                  | >90%
Output type                   | Binary only     | Scalar (distance) | Custom classes        | Custom classes
Image evidence per reading    | No              | No                | Yes (stored in cloud) | Yes (stored on-device)
Cloud / internet dependency   | None            | Optional          | Always required       | Not required
Recurring cost per device     | Low             | Low–Medium        | API fee per inference | $0 after hardware
Overflow / external detection | No              | No                | Model-dependent       | Yes (train the class)
MQTT / HTTP integration       | Vendor-specific | Vendor-specific   | API only              | MQTT + HTTP native
Open model pipeline           | Not possible    | Not possible      | Provider-dependent    | Fully open · YOLOv8 native
Est. TCO · 100 units · 3 yrs  | $4–12K HW       | $6–18K HW         | $0 HW + ongoing API   | ~$20K HW · zero recurring

Accuracy at Scale: What the Numbers Mean at 100 Bins

Fill classification accuracy (mixed waste):
  • NE301 On-Device AI: ~94%
  • Cloud Vision API: ~91%
  • Ultrasonic Sensor: ~68%
  • IR Sensor: ~52%

NE301 accuracy: CamThink field validation with custom YOLOv8 model. Sensor figures: field-reported averages, IoT Analytics 2023.

At 100 bins monitored 4× per day, a 68% accurate ultrasonic sensor generates approximately 128 incorrect readings every 24 hours. Each false “full” reading risks an unnecessary truck dispatch. Each false “not-full” reading risks a missed pickup and visible overflow. At that noise level, dynamic routing stops working — operators revert to fixed schedules because they can’t trust the data, which defeats the purpose of the monitoring system.

At 94% accuracy, 100 bins × 4 readings produces roughly 24 incorrect readings per day — and each one comes with a timestamped image that lets your system, or a human reviewer, immediately determine whether the error is a model issue, an unusual waste type, or an occlusion. Those images become your retraining dataset.
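The error arithmetic above is easy to reproduce. A minimal sketch using the figures quoted in this section (100 bins, 4 readings per day, 68% vs. 94% accuracy); the helper name `incorrect_per_day` is illustrative:

```python
bins = 100
readings_per_day = 4
total_readings = bins * readings_per_day   # 400 readings per 24 hours

def incorrect_per_day(accuracy: float) -> int:
    """Expected number of incorrect readings per day at a given accuracy."""
    return round(total_readings * (1 - accuracy))

print(incorrect_per_day(0.68))   # 128 — ultrasonic, as quoted above
print(incorrect_per_day(0.94))   # 24  — NE301 with a field-validated model
```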

Training a Waste Bin Detection Model for NE301

The NE301 runs quantised YOLOv8 models natively via its NPU. The model pipeline is open: you train on your own dataset, export to INT8 quantised format, and deploy via the NE301 Web UI or OTA update — no proprietary toolchain required.

Dataset Collection

You need representative images for each fill class from the actual bin types in your deployment. A minimum viable 6-class waste model requires approximately 200–400 images per class captured under realistic conditions — including varying lighting, weather, and waste types. The critical classes to deliberately over-represent during collection are overflow and contaminated: these are rare in real operation, so you need to stage them. A model trained on an imbalanced dataset will systematically under-detect exactly the conditions that matter most.
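Before training, it is worth sanity-checking class balance against the 200–400 images-per-class floor. A minimal sketch, assuming the per-class folder layout used by YOLO classification datasets (`train/<class>/*.jpg`); the function name and threshold constant are illustrative:

```python
from collections import Counter
from pathlib import Path

IMAGE_EXTS = {".jpg", ".jpeg", ".png"}
MIN_PER_CLASS = 200   # minimum viable count suggested above

def class_counts(split_dir: str) -> Counter:
    """Count images per class in a train/<class>/ style directory."""
    counts = Counter()
    for class_dir in Path(split_dir).iterdir():
        if class_dir.is_dir():
            counts[class_dir.name] = sum(
                1 for f in class_dir.iterdir() if f.suffix.lower() in IMAGE_EXTS
            )
    return counts

root = Path("waste_bin_dataset/train")
if root.exists():
    for name, n in sorted(class_counts(str(root)).items()):
        flag = "" if n >= MIN_PER_CLASS else "  <-- under-represented, stage more images"
        print(f"{name:14s} {n:4d}{flag}")
```

Running this after each collection session makes the imbalance on the rare classes (overflow, contaminated) visible before it becomes a blind spot in the trained model.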

Python · YOLOv8 Training · Fine-tune YOLOv8n on your waste bin dataset, export INT8 for NE301
from ultralytics import YOLO

# YOLOv8n-cls — nano classification variant, suitable for the NE301 NPU
model = YOLO('yolov8n-cls.pt')

results = model.train(
    data='waste_bin_dataset/',   # directory: train/ val/ per-class folders
    epochs=80,
    imgsz=320,                   # NE301 optimal input resolution
    batch=32,
    patience=15,
    project='bin_monitor',
    name='ne301_waste_v1'
)

# INT8 quantisation: compresses weights from 32-bit float to 8-bit integer
# Result: ~4× smaller model, ~2–3× faster inference on NPU, <2% accuracy loss
model.export(
    format='tflite',
    int8=True,
    data='waste_bin_dataset/data.yaml'
)
# Output: best_int8.tflite — flash to NE301 via Web UI or OTA
🛠 Integration Note · MQTT via CamThink Wiki

The NE301 MQTT interaction guide covers broker configuration, topic structure, QoS settings, and retained message behaviour for fleet-scale deployments. See the MQTT Data Interaction guide on Wiki for the complete reference.

Deployment Phases: From PoC to Production Network

Phase 1 · PoC: 5–10 Bins, Single Zone

Install NE301 units on a representative sample — mix of bin types and traffic levels. Run at 4–6 inference intervals per day. Validate MQTT payload delivery and classification accuracy against manual ground truth for 2–3 weeks before expanding. Pay attention to low-confidence readings: they show you where the model needs more training data.

Starting a PoC?

Tell us your bin count, deployment environment, and connectivity constraints — we’ll help scope the hardware and model configuration.

Discuss Your PoC →
Phase 2 · Model Iteration: Retrain on Real Deployment Images

Use payloads where confidence < 0.75 from Phase 1 as your primary source of new labelled data. A second training iteration with real in-deployment images typically improves accuracy by 5–8 percentage points over the initial studio-collected dataset. Export a new INT8 model and push OTA.
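Selecting those retraining candidates from your payload log can be sketched in a few lines. This assumes payloads were logged as JSON lines (one message per line); the function name and the 0.75 cutoff follow the phase description above:

```python
import json

RETRAIN_THRESHOLD = 0.75   # Phase 1 payloads below this become labelling candidates

def select_retraining_candidates(log_lines):
    """Yield (image_url, fill_class) for low-confidence payloads worth relabelling."""
    for line in log_lines:
        p = json.loads(line)
        if p["confidence"] < RETRAIN_THRESHOLD:
            yield p["image_url"], p["fill_class"]

log = [
    '{"fill_class": "full", "confidence": 0.91, "image_url": "http://192.168.1.42/snap/a.jpg"}',
    '{"fill_class": "overflow", "confidence": 0.62, "image_url": "http://192.168.1.42/snap/b.jpg"}',
]
print(list(select_retraining_candidates(log)))
# [('http://192.168.1.42/snap/b.jpg', 'overflow')]
```

Fetch each `image_url`, have the true class labelled by a human, and fold the images into the dataset for the next training run.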

Phase 3 · Full Network Rollout + Fleet Management

Scale to full bin count. Use the NeoMind platform or your own MQTT broker for fleet-level management — firmware updates, model pushes, and alert configuration can be applied to the entire network without on-site access. Each NE301 reports its own battery level and signal strength in every MQTT payload, giving you passive device health monitoring at no additional cost.
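That passive health monitoring can be implemented as a simple check on the fields every payload already carries. A minimal sketch; the function name and the battery/RSSI cutoffs are illustrative and should be tuned to your site survey:

```python
def health_flags(payload: dict, min_battery: int = 20, min_rssi: int = -80) -> list:
    """Flag devices needing attention, using fields present in every NE301 payload.

    Thresholds are illustrative: 20% battery and -80 dBm RSSI are placeholder cutoffs.
    """
    flags = []
    if payload["battery_pct"] < min_battery:
        flags.append("low_battery")
    if payload["signal_rssi"] < min_rssi:
        flags.append("weak_signal")
    return flags

# Healthy device (values from the example payload earlier in this guide)
print(health_flags({"battery_pct": 84, "signal_rssi": -61}))   # []
# Device due for a site visit
print(health_flags({"battery_pct": 12, "signal_rssi": -87}))   # ['low_battery', 'weak_signal']
```

Run this on every incoming payload and route non-empty flag lists to the same alerting channel as fill events, so device maintenance rides on the existing MQTT pipeline.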

Phase 4 · Algorithm Customisation (Optional)

If your waste stream has unusual characteristics — specialist recycling streams, industrial waste types, high contamination rates — CamThink’s algorithm customisation service delivers a trained and quantised model tuned to your specific bin images. The service covers dataset annotation, training, INT8 quantisation, and deployment validation. Typical delivery is 4–6 weeks from dataset submission.

Ready to evaluate NE301?

Start with the NE301 dev kit and the MQTT integration guide on Wiki.

Shop NE301 →

FAQ

Does the NE301 run AI inference locally, or does it send images to a server for processing?
The NE301 runs inference entirely on-device using its NPU. No image is sent to a cloud server for classification — the result is produced locally in under 50ms and published as a structured MQTT or HTTP payload. Images are optionally stored on-device and accessible via a local HTTP endpoint if your backend wants to retrieve them for logging or retraining.
What are the connectivity options for bins in areas with weak WiFi coverage?
The NE301 supports WiFi 6 and Bluetooth 5.4 as standard, with optional LTE Cat.1 and PoE variants available. For outdoor deployments where WiFi coverage is inconsistent, the LTE variant provides cellular fallback with no additional hardware. The PoE variant — NE301 PoE — uses a single Ethernet cable for both power and data, which is the preferred option for fixed installations in public infrastructure.
What’s the recommended MQTT topic structure for a multi-bin fleet?
A recommended pattern: camthink/{client}/bin/{zone}/{device_id}/status. Subscribe at camthink/{client}/bin/# to receive all events across the fleet. Use retained messages on the broker for each device’s last known state — this way, a newly-connected subscriber immediately has the current fill status of every bin without waiting for the next inference cycle. Full MQTT configuration reference is in the NE301 MQTT guide on Wiki.
Can model updates be pushed to a deployed NE301 fleet without on-site access?
Yes. The NE301 supports OTA (over-the-air) firmware and model updates via the NeoMind platform or a custom MQTT-based device management pipeline. Updated model files are pushed in compressed format and applied on the next device restart. For a large fleet, you can stage rollouts — e.g., 10% → 50% → 100% — to validate accuracy before full deployment.
Are images captured by the NE301 stored in the cloud or on third-party servers?
No. Images are stored locally on the NE301 and are only retrievable via the device’s own HTTP endpoint on your local network. Nothing is forwarded externally unless your backend explicitly fetches and stores the image from the image_url field in the MQTT payload. There is no mandatory cloud storage, no vendor-side data retention, and no per-image API call. Your inference data and images remain on your infrastructure.
When does a deployment need the NG4500 edge server instead of standalone NE301 units?
Standalone NE301 units handle their own inference and are sufficient for most bin monitoring networks. Add the NG4500 when you need: (1) centralised model management and fleet analytics across 20+ cameras in a single zone; (2) VLM or LLM-level analysis — e.g., natural language anomaly descriptions or cross-camera event correlation; or (3) integration with industrial I/O systems via CAN, RS232, or RS485 for direct connection to facility management platforms.
What data is needed to start a custom algorithm service engagement?
To begin, CamThink needs: a sample of 50–100 labelled images per class from your specific bins and waste types, a description of the deployment environment (indoor/outdoor, lighting conditions, mounting height), and the target class definitions. From there, the team scopes the full dataset collection and annotation plan. Contact via the inquiry form or email sales@camthink.ai to start the conversation.

Build Your Bin Monitoring System on Open Hardware

NE301 on-device inference. MQTT integration. Custom model pipeline. No cloud subscription.