Why Holonomic, Not Differential
A car-wash bay is roughly three meters wide. A sedan parked in the middle of it leaves a service envelope of about a meter on each side, plus front and rear gaps that vary with vehicle length. Inside that envelope a robot has to dock against four doors, the trunk, and sometimes a hatchback or sliding van door — none of which are at convenient angles to a single forward heading.
A differential-drive base would have to constantly rotate to face each new docking pose. That burns time, scuffs the floor, and forces every motion plan to fight a non-holonomic constraint that has nothing to do with the actual task. A holonomic base — three or four mecanum wheels, or a Swerve-style steerable wheel module — can translate sideways without rotating. The arm stays oriented toward the vehicle while the base slides along its flank.
The Nav2 implication is straightforward: configure the controller for an omnidirectional motion model with non-zero lateral velocity limits, and use a controller that actually exploits them. In practice that means picking MPPI over DWB.
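Concretely, that looks something like the `controller_server` fragment below. The plugin and parameter names come from Nav2's `nav2_mppi_controller`; the velocity limits are illustrative values for a bay-sized base, not tuned numbers:

```yaml
controller_server:
  ros__parameters:
    controller_frequency: 30.0
    controller_plugins: ["FollowPath"]
    FollowPath:
      plugin: "nav2_mppi_controller::MPPIController"
      motion_model: "Omni"   # sample vy, not just (vx, wz)
      vx_max: 0.5
      vx_min: -0.35
      vy_max: 0.5            # the non-zero lateral limit is the point
      wz_max: 1.9
```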
MPPI vs. DWB for Lateral Motion
DWB, Nav2's descendant of the classic Dynamic Window Approach (DWA), is the long-standing default controller. It samples a window of feasible (vx, vy, ω) commands, scores each against a set of critics, and picks the best. It works well, but its critic stack was originally tuned for diff-drive robots, and the lateral term needs careful weighting before it stops biasing toward forward motion.
MPPI (Model Predictive Path Integral) rolls out hundreds of trajectory samples through a learned or analytical motion model and weights them by exponentiated cost, so the averaged command is dominated by the cheapest rollouts. For tight, holonomic maneuvering it tends to produce smoother lateral approaches because it optimizes over a receding horizon rather than a single velocity window. The trade-off is compute: MPPI is heavier, and on a Jetson-class compute module you want to keep the rollout count and horizon honest.
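The weighting step at the heart of MPPI is compact enough to show inline. This is an illustrative single-scalar sketch of the path-integral update — not Nav2's implementation, which operates on full control sequences over the horizon:

```python
import math

def mppi_update(nominal, noises, costs, lam=1.0):
    """One MPPI control update: weight each rollout by exp(-cost / lambda),
    then return the nominal control plus the weight-averaged perturbation."""
    best = min(costs)  # subtract the best cost for numerical stability
    weights = [math.exp(-(c - best) / lam) for c in costs]
    eta = sum(weights)
    correction = sum(w * n for w, n in zip(weights, noises)) / eta
    return nominal + correction

# The lowest-cost rollout's perturbation dominates the averaged command.
u = mppi_update(nominal=0.0, noises=[0.2, -0.1, 0.05], costs=[5.0, 1.0, 3.0])
```

Lowering the temperature `lam` sharpens the weighting toward the single best rollout; raising it averages more broadly, which is the knob that trades smoothness against decisiveness.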
We start with `batch_size: 2000`, `time_steps: 56`, and `model_dt: 0.05`, then tune the `PathAlignCritic`, `GoalCritic`, and `PreferForwardCritic` weights. Disabling `PreferForwardCritic` (dropping it from the critic list, or zeroing its `cost_weight`) is essential — otherwise the controller penalizes the very strafe motions you bought a holonomic base to enable.
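As a starting configuration, something like the fragment below. The critic names match `nav2_mppi_controller`'s plugin set; the weights are starting points to tune on your own floor, not validated values:

```yaml
FollowPath:
  batch_size: 2000
  time_steps: 56
  model_dt: 0.05
  # PreferForwardCritic is left out of the list entirely:
  # it penalizes exactly the strafe motions we need.
  critics: ["ConstraintCritic", "ObstaclesCritic", "GoalCritic",
            "GoalAngleCritic", "PathAlignCritic", "PathFollowCritic"]
  PathAlignCritic:
    cost_weight: 14.0
  GoalCritic:
    cost_weight: 5.0
```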
AMCL on a Wet, Reflective Floor
Wash bays are mirrors. Standing water reflects ceiling fixtures and the underside of the vehicle into the LiDAR scan, producing phantom returns that confuse a vanilla AMCL likelihood field.
Two changes make AMCL robust here. First, increase `z_hit` slightly and reduce `z_rand` so the measurement model trusts strong, geometrically consistent returns over scattered noise. Second, cap `laser_max_range` at the bay's actual diagonal — anything beyond that is almost certainly a reflection or a pass-through into the next bay. We also pre-filter the scan with a short median window to drop single-beam outliers before they reach the particle filter.
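The pre-filter is a plain sliding median over the range array. A minimal sketch — window size is illustrative, and a real node would apply this to each `LaserScan` message before republishing:

```python
def median_filter_scan(ranges, window=3):
    """Replace each beam with the median of its neighborhood, suppressing
    single-beam specular outliers before the scan reaches AMCL."""
    half = window // 2
    n = len(ranges)
    out = []
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        out.append(sorted(ranges[lo:hi])[(hi - lo) // 2])
    return out

# A lone phantom return amid consistent neighbors is removed.
clean = median_filter_scan([2.0, 2.0, 9.9, 2.1, 2.1])
```

A window of 3 or 5 beams is usually enough: wide enough to kill isolated glints, narrow enough to preserve real edges like the vehicle's wheel arches.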
Where the floor is genuinely featureless, we lean on a fixed set of fiducials mounted at known bay corners. AprilTag detections are fused as absolute pose updates, which keeps the particle cloud from drifting during long stationary cleaning phases.
Inflating Around the Vehicle
The vehicle is the most important obstacle in the bay, and it is not in the static map. Each cycle begins with a perception pass that fits an oriented bounding box to the parked car. That box is published as a `polygon` obstacle into a dynamic layer of the local costmap, with an inflation radius tuned to the arm's reach plus a safety margin.
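The box fit itself can be as simple as a principal-axis estimate over the segmented vehicle points. A simplified 2-D sketch of that idea — the real perception pass is assumed to handle segmentation and outlier rejection first:

```python
import math

def oriented_bbox(points):
    """Fit an oriented bounding box to 2-D points using the principal
    axis of their covariance, returning (center, angle, length, width)."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    cxx = sum((p[0] - mx) ** 2 for p in points) / n
    cyy = sum((p[1] - my) ** 2 for p in points) / n
    cxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    theta = 0.5 * math.atan2(2 * cxy, cxx - cyy)  # principal-axis angle
    c, s = math.cos(theta), math.sin(theta)
    # Project into the box frame and take the extents.
    us = [(p[0] - mx) * c + (p[1] - my) * s for p in points]
    vs = [-(p[0] - mx) * s + (p[1] - my) * c for p in points]
    return (mx, my), theta, max(us) - min(us), max(vs) - min(vs)

# An axis-aligned 4 m x 2 m "sedan" footprint comes back with theta ~ 0.
center, theta, length, width = oriented_bbox(
    [(0, 0), (4, 0), (4, 2), (0, 2), (2, 1)])
```

The resulting center, heading, and extents are what get turned into the polygon pushed to the costmap layer.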
The footprint is also state-dependent: extending the arm versus stowing it changes the effective footprint by tens of centimeters. We swap footprints at runtime via the costmap nodes' `set_parameters` service so that the planner never assumes more clearance than physics permits.
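A small helper makes the swap concrete. `footprint_param` is a hypothetical function and the dimensions are illustrative; its output is the polygon string pushed to each costmap's `footprint` parameter via `set_parameters`:

```python
def footprint_param(half_length, half_width, arm_reach=0.0):
    """Build a Nav2 `footprint` parameter string for a rectangular base,
    extending the front edge by the arm's current reach.
    (Hypothetical helper; dimensions are illustrative.)"""
    front = half_length + arm_reach
    pts = [(front, half_width), (front, -half_width),
           (-half_length, -half_width), (-half_length, half_width)]
    return "[" + ", ".join(f"[{x:.2f}, {y:.2f}]" for x, y in pts) + "]"

stowed = footprint_param(0.40, 0.30)
extended = footprint_param(0.40, 0.30, arm_reach=0.35)
```

The same string works from the CLI for quick tests, e.g. `ros2 param set /local_costmap/local_costmap footprint "<polygon>"` on a standard Nav2 bringup.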
A Wash-Bay World in Gazebo
Before any of this touches a real bay, it runs in Gazebo Garden / Ignition Fortress. Our wash-bay world ships with a parameterized vehicle (sedan, SUV, van), a wet-floor friction model, and a small library of debris meshes for the perception side. The launch file brings up Nav2, the controller, and a stand-in for the arm so we can iterate on docking trajectories without wearing out hardware.
The honest caveat: sim is not a benchmark for traction on real soapy tile. It is a regression harness for planner behavior. Anything that fails in sim almost certainly fails on hardware; anything that succeeds in sim still has to be earned on the real floor.
Follow the Nav2 work on GitHub
Visit handybot.ai →

Why interior turnover is the constraint at airport rental counters during peak — and what an autonomous cabin-cleaning step would have to do to move it.