Smarter Roads, Part 6: Sensing – What We Can Learn From Cameras First

In the first four parts of this series, we mapped the problem space. We looked at how defects form, why they escalate, and why most are still found too late.

In Part 5, we introduced the solution space: the structured landscape of all the credible, feasible, and publicly defensible ways we could close the detection gap. We broke that landscape into five interlocking domains:

  • Sensing
  • Models and Detection Logic
  • Data Pipelines and Workflows
  • Operational Integration
  • Civic and Editorial Layer

This chapter begins the deep dive into those domains, starting with the foundation of any detection system: what we can actually sense.

Sensing is where every downstream choice begins. The hardware you mount, the angles you capture, the lighting you operate in, and the environmental constraints you accept all shape what is possible in the rest of the stack.

Sensing determines the raw material for every model, workflow, and operational decision that follows. Because of this, it is the domain where early experimentation pays off the most.


Why Sensing Comes First

The detection gap is a system‑level issue: limited coverage, inconsistent reporting, slow triage, and the absence of a reliable early warning layer.

Road authorities require accurate, up‑to‑date condition information to manage the network effectively, yet most cracking is still assessed manually and infrequently. Traditional visual inspections are sparse, expensive, and slow, leaving large parts of the network unobserved for long periods.

The challenge isn’t that defects can’t be seen. It’s the long stretches of time when no one is looking.

If we can capture more of the network, more often, with enough fidelity to detect early‑stage defects, then everything else becomes easier, cheaper, and more predictable.

But sensing is also where the real‑world constraints hit hardest:

  • changing light
  • vibration at 40–80 km/h
  • weather
  • mounting limitations
  • bandwidth and storage
  • lens contamination
  • power and thermal limits

Understanding these constraints is essential before we talk about models or pipelines. And to understand those constraints, we need to look closely at the sensing modalities available today and why cameras are the most practical place to start.
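To make the speed-and-shutter constraint concrete, here is a rough back-of-envelope sketch in Python. The numbers are illustrative assumptions, not measured values:

```python
# Rough motion-blur estimate: how far the road surface moves across the
# frame during one exposure. All numbers below are illustrative assumptions.

def blur_pixels(speed_kmh: float, shutter_s: float, gsd_mm: float) -> float:
    """Blur in pixels = distance travelled during the exposure divided by
    the ground sampling distance (mm of road covered by one pixel)."""
    speed_mm_per_s = speed_kmh * 1_000_000 / 3600  # km/h -> mm/s
    return speed_mm_per_s * shutter_s / gsd_mm

# At 60 km/h, a 1/1000 s shutter, and ~2 mm of road per pixel:
print(round(blur_pixels(60, 1 / 1000, 2.0), 1))  # ~8.3 px of smear
```

Even with a fast shutter, a few pixels of smear can erase a hairline crack, which is why shutter speed and stabilisation dominate the camera comparisons later in this chapter.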


The Sensing Landscape

There are many ways to observe a road network. Each modality brings strengths, weaknesses, and operational implications.

Vision‑Based Cameras

  • Rich visual detail
  • Low cost, easy to mount
  • Sensitive to lighting, motion blur, and contamination
  • Strong foundation for ML and sensor fusion

Vibration Sensors (Accelerometers and IMUs)

  • Cheap and robust
  • Detect roughness indirectly
  • Strongly influenced by vehicle dynamics
  • Best as a complementary signal

LiDAR

  • High‑precision 3D geometry
  • Excellent for profiling
  • Expensive and weather‑sensitive
  • Suited to specialised deployments

Radar

  • Weather‑resistant
  • Lower resolution than LiDAR

For early, low‑cost, field‑driven experimentation, cameras are the most versatile and scalable starting point.

Why Start With Cameras

Cameras offer the best balance of:

  • cost
  • ease of mounting
  • data richness
  • compatibility with modern ML models
  • scalability across council or fleet vehicles

They also allow us to explore the primary levers in the sensing domain.

Primary Levers

  • sensor type and quality
  • lens choice and field of view
  • mounting position and vibration behaviour
  • exposure, shutter speed, and capture rate (sized in the sketch after this list)
  • environmental resilience (light, rain, dust)
  • integration with edge devices (Pi, Jetson, smartphone)
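As an example of how these levers interact, the capture rate needed for gap-free coverage falls straight out of vehicle speed and the per-frame road footprint. A small sketch, with assumed rather than measured numbers:

```python
# Minimum capture rate for gap-free coverage: the spacing between
# consecutive frames must stay under the along-road footprint of one frame.
# Footprint and overlap values are assumptions for illustration.

def min_fps(speed_kmh: float, footprint_m: float, overlap: float = 0.2) -> float:
    """Frames per second needed so successive frames overlap by `overlap`."""
    speed_m_per_s = speed_kmh / 3.6
    return speed_m_per_s / (footprint_m * (1 - overlap))

# At 80 km/h, ~6 m of road per frame, and 20% overlap between frames:
print(round(min_fps(80, 6.0), 1))  # ~4.6 fps
```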

Key Trade‑offs

  • cost versus fidelity
  • resolution versus bandwidth (quantified in the sketch after this list)
  • wide field of view versus geometric distortion
  • fixed focus versus autofocus stability
  • stabilisation versus power consumption
  • daylight performance versus low‑light reliability
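The resolution-versus-bandwidth trade-off is easy to quantify. A minimal sketch, assuming 12 MP stills and a JPEG compression ratio of roughly 0.3 bytes per pixel (both figures are assumptions; real file sizes vary with scene texture):

```python
# Illustrative storage arithmetic for a capture run. The compression
# ratio is an assumption; real JPEG sizes vary with scene content.

def run_storage_gb(width: int, height: int, fps: float,
                   hours: float, bytes_per_px: float = 0.3) -> float:
    """Approximate storage for compressed stills over a survey run."""
    frame_bytes = width * height * bytes_per_px
    return frame_bytes * fps * hours * 3600 / 1e9

# 12 MP (4608x2592) stills at 2 fps over a 6-hour survey shift:
print(round(run_storage_gb(4608, 2592, 2, 6)))  # ~155 GB
```

Numbers like these are why edge preprocessing and selective upload appear under the system constraints below.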

Austroads highlights that pavement condition measurement technologies must deliver reliable, repeatable data, operate effectively under real‑world environmental conditions, and be practical for network‑level deployment, as reflected in the guidance on field surveys and surface‑testing equipment [1].

Councils also care about:

  • predictable operating costs
  • minimal staff burden
  • clear evidence that the system works
  • outputs that integrate cleanly into existing workflows

These are the same levers and trade‑offs that will appear again when we explore LiDAR, IMUs, and sensor fusion, but cameras give us the fastest, cheapest way to start learning.


Constraints That Shape Camera Performance

Real‑world deployment means designing around constraints, not ignoring them.

Optical Constraints

  • low‑light performance
  • dynamic range
  • shutter speed and motion blur
  • resolution versus processing load
  • focus stability

Mechanical Constraints

  • mounting and vibration
  • rolling‑shutter artefacts
  • lens contamination
  • field of view and distortion

System Constraints

  • edge versus cloud processing
  • power and thermal limits
  • storage and bandwidth
  • integration with existing hardware

These constraints define what each camera can realistically deliver in the field.
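To ground the system constraints, here is a minimal capture sketch using the Picamera2 library on a Raspberry Pi with a Camera Module 3. The exposure, gain, and lens-position values are illustrative assumptions for a moving-vehicle setup, not tuned recommendations:

```python
# Minimal deterministic capture on a Raspberry Pi + Camera Module 3,
# using the Picamera2 library. Manual exposure and a locked lens position
# avoid autofocus hunting and auto-exposure drift between frames.
import time
from picamera2 import Picamera2
from libcamera import controls

picam2 = Picamera2()
config = picam2.create_still_configuration(main={"size": (2304, 1296)})
picam2.configure(config)
picam2.start()

picam2.set_controls({
    "AfMode": controls.AfModeEnum.Manual,  # disable autofocus hunting
    "LensPosition": 2.0,                   # dioptres; ~0.5 m focus (assumed)
    "ExposureTime": 1000,                  # microseconds; 1/1000 s (assumed)
    "AnalogueGain": 4.0,                   # fixed gain for repeatability
})
time.sleep(0.5)  # let the pipeline apply the manual controls

for i in range(10):
    picam2.capture_file(f"frame_{i:04d}.jpg")  # timestamped names in practice
```

Fixing focus, exposure, and gain trades some image quality for repeatability, which matters more than peak quality when frames feed a detection model.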

Comparing Practical Camera Options

We now compare three modern camera categories: action camera, mid‑range smartphone, and embedded camera module. The GoPro Hero 13 Black (2024), Samsung Galaxy A56 (2025), and Raspberry Pi Camera Module 3 (2023) together bracket the practical design space for modern sensing systems, providing optical, consumer‑grade, and embedded baselines respectively.


Optical Constraints

These determine how well a camera captures usable detail under real road conditions, especially at speed, in mixed light, and across varying surface textures.

| Camera Model | Sensor | Sensor Size | Low Light Performance | Dynamic Range | Motion Blur Resistance | Focus Behaviour | Resolution Impact |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Raspberry Pi Cam 3 | Sony IMX708 | 1/2.43‑inch (1.4 µm effective pixels) | Moderate | Moderate | Sensitive to blur at higher speeds | Motorised fixed focus, stable once set | 12 MP (3 MP effective) |
| GoPro Hero 13 Black | Sony STARVIS 2 | 1/1.9‑inch (1.7 µm pixels) | Very good | High | Very strong (HyperSmooth 7.0 + shutter bias) | Fixed focus | 27 MP |
| Samsung Galaxy A56 | Samsung ISOCELL | 1/1.7‑inch (1.6 µm effective pixels) | Good | High | Moderate (OIS + EIS; fusion can smear texture) | Autofocus; can hunt | 50–64 MP (12–16 MP effective) |

The GoPro is the optical leader. The PiCam 3 is predictable and deterministic. The Galaxy A56 offers high resolution but less stability under motion.

Mechanical Constraints

These shape how the camera behaves once mounted on a moving vehicle: vibration, angle, contamination, and physical robustness.

| Camera Model | Vibration Handling | Mounting Robustness | Rolling Shutter Artefacts | Lens Contamination Risk | Field of View Characteristics |
| --- | --- | --- | --- | --- | --- |
| Raspberry Pi Cam 3 | Moderate (mount dependent) | Moderate | Noticeable at speed | Exposed lens; needs housing | Narrow FOV (Narrow variant) |
| GoPro Hero 13 Black | High (action‑tuned) | High | Well controlled | Excellent sealing | Very wide FOV; distortion manageable in Linear mode |
| Samsung Galaxy A56 | Moderate | Moderate | Can wobble under vibration | Consumer‑grade sealing | Wide FOV; balanced distortion |
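Rolling-shutter behaviour is also easy to reason about numerically. A quick sketch of how far the road advances between the first and last sensor row (the readout time is an assumed, sensor-dependent figure):

```python
# Rolling-shutter skew: sensor rows are read out sequentially, so the
# vehicle keeps moving while a single frame is still being captured.
# The readout time below is an assumed, sensor-dependent value.

def skew_mm(speed_kmh: float, readout_s: float) -> float:
    """Ground distance travelled between first and last row readout."""
    return speed_kmh / 3.6 * 1000 * readout_s

# At 60 km/h with an assumed ~25 ms full-frame readout:
print(round(skew_mm(60, 0.025)))  # ~417 mm of top-to-bottom skew
```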

System Constraints

These determine how easily the camera integrates into a sensing workflow: power, processing, storage, and connectivity.

| Camera Model | Power & Thermal Behaviour | Storage & Bandwidth | Integration with Edge Devices | Workflow Fit | Cost |
| --- | --- | --- | --- | --- | --- |
| Raspberry Pi Cam 3 | Low power; minimal heat | Configurable | Seamless | Very high | Mid range* |
| GoPro Hero 13 Black | Higher power; thermals manageable | Somewhat configurable | Limited direct integration | Moderate | High (A$500–A$600) |
| Samsung Galaxy A56 | Moderate power; thermals manageable | Configurable | Integration via apps/APIs | Moderate | Mid range (A$400–A$500) |

*A$400–A$500 including Pi 4/5 & case

Interpretation:
The PiCam 3 is the easiest to integrate and the most deterministic. The GoPro produces the richest data but at the highest operational cost. The Galaxy A56 again sits in the middle with limited hardware integration but flexible software integration.


Where This Leads

A clear pattern emerges: each platform anchors a different part of the sensing landscape.

  • The GoPro Hero 13 provides the strongest optical baseline for early experimentation.
  • The Galaxy A56 shows what is feasible with the devices citizens already carry.
  • The PiCam 3 shows what becomes possible when sensing is integrated with LiDAR, IMUs, GPS, and edge compute.

| Sensor | Optical | Mechanical | System |
| --- | --- | --- | --- |
| GoPro Hero 13 | ●●● | ●● | ● |
| Galaxy A56 | ●● | ●● | ●● |
| PiCam 3 | ● | ● | ●●● |

Legend: ●●● strong, ●● moderate, ● weak


Platform Selection Matrix

| Platform | Strengths | Weaknesses | Best Use Case |
| --- | --- | --- | --- |
| GoPro Hero 13 | Best optical quality; excellent stabilisation; strong low‑light performance | Harder to integrate; large files; higher cost | Establishing optical baselines; early high‑fidelity experiments |
| Samsung Galaxy A56 | Economical; widely available; many built‑in sensors | Less deterministic; motion fusion can smear texture | Citizen reporting; community sensing; low‑cost pilots |
| Raspberry Pi Cam 3 | Fully integrable; deterministic; low power; easy to pair with LiDAR and IMUs | Weaker optics; sensitive to vibration; requires careful mounting | Scalable sensing systems; fleet integration; long‑term deployments |

The next phase of experimentation is to capture some data from the field with an action camera!

References

  1. Austroads (2025). Guide to Pavement Technology Part 5: Pavement Evaluation and Treatment Design.