How I Took a Minimum Wage Job to Solve an Unsolvable Problem
I took a job sweeping floors at a supermarket, then spent my free time converting the floor plan into a graph, writing a C++ simulated-annealing optimizer, and learning that optimizing the wrong metric is worse than not optimizing at all.
"Sweep an entire Albert Heijn supermarket? Sounds easy. It should be, really. But I am a computer science student with one problem — I compulsively optimize processes that probably do not need optimizing."
Instead of simply doing the job, I converted the supermarket floor plan into a grid graph, built a visual editor, and wrote a C++ path optimizer using simulated annealing. Here is how that went.
The Central Question
Imagine two routes through the store: Route A is technically shorter in total distance but involves constant sharp turns — you would look like a glitching Roomba. Route B is a bit longer but flows naturally. Which one is actually better?
Route A is what you get when you optimize with the wrong priorities.
Step 1: Converting Reality to a Model
I converted the supermarket floor plan into a grid where each cell is either empty (needs sweeping) or an obstacle (walls, checkout counters, the occasional spilled yogurt). Using Processing (a Java-based visualization tool), I projected the floor plan onto the grid and exported it.
The floor tiles conveniently divided the space into natural small sections. Each cell became a graph node connected to its neighbors, allowing horizontal, vertical, and diagonal movement (excluding walls). This turns the job into a variant of the Traveling Salesman Problem: visit every node exactly once while minimizing total path cost.
Step 2: Writing the Optimizer
An exact TSP solution is not computationally feasible at this scale, so I used heuristics. My C++ implementation used simulated annealing — an algorithm that repeatedly tests small, incremental changes known as local moves.
The algorithm begins by accepting almost any change, even a worse one. Then it gradually becomes more selective, eventually accepting only improvements. Like metallurgical annealing — heating metal and letting it cool slowly — it starts at "high temperature" to explore the solution space broadly, then cools down to lock in a good result.
For the local move I used the 2-opt method: remove two edges from the current path and reconnect the nodes in a different way. If the new path is shorter, keep it; if not, accept it anyway with a probability that decreases as the temperature drops.
Step 3: The Problem
The first optimized path had "more sharp turns than a Christopher Nolan film." It covered every cell with near-optimal total distance, but it was absolutely useless in practice. The algorithm did exactly what it was asked — the problem was that the task had been formulated incorrectly.
Step 4: Optimizing for Reality
I realized distance was not the only factor that mattered. Turns matter. Pace matters. Not looking like a broken robot matters.
I added a turn penalty to the cost function: 90-degree turns were penalized, and 180-degree reversals were penalized even more heavily. The routes became smoother, though slightly longer in total distance. "This is a route you could actually assign to a human without worrying they would quit on the spot."
Step 5: Tuning
By adjusting the turn penalty like a slider, you can balance pure efficiency against practical usability. A higher penalty gives smoother routes but longer total distances. A lower penalty increases efficiency but produces chaotic movement patterns. The right setting depends on the worker's agility, how much total distance matters, and their vestibular tolerance.
Step 6: The Bigger Lesson
Floor sweeping was never really the point. The same mistake — optimizing the wrong metric — shows up everywhere.
Social media algorithms optimized for engagement are excellent at engagement. But engagement is not happiness, or truth, or well-being. Engagement means clicks, screen time, compulsive checking, and emotional reactivity. The consequences: outrage, misinformation, doomscrolling, anxiety. The algorithms work perfectly. The error is only in the cost function.
Large language models like ChatGPT are trained to sound convincing rather than to be truthful. They are optimized to complete patterns, not to say "I do not know." They guess with confidence — because that is what the training signal rewarded.
Businesses optimizing purely for profit leave the environment, ethics, and human well-being outside the cost function. These things go unconsidered in the optimization because they were never added to the objective in the first place.
Did I Use the Optimized Path?
Obviously not. I just swept like a normal person.
But the lesson stands: there is no point in technical correctness if you are solving the wrong problem. Perfect code and flawless systems can still produce garbage results. The solution is not a better optimization algorithm — it is deciding what metric to optimize in the first place. And too often, we never ask that question. We just optimize what is easy to measure and hope it works out.
Spoiler: it probably won't.
Source code: github.com/TiesPetersen/FloorSweepingOptimizer