Hidden Algorithms Inside Tech
Featuring: DoorDash, Arc Raiders, and Deal Desks
DoorDash and UberEats Aren’t Optimizing for You
Ordering food feels simple: pick, pay, wait.
Under the hood, every request hits a system balancing:
delivery time
driver availability
projected customer churn
historical tolerance for delays
Two identical orders can get different results — not because the system is broken, but because it’s optimized for broader business goals. (This post was inspired by this Reddit post: https://www.reddit.com/r/doordash_drivers/comments/1q2oweb/desperation_score_insane_admission_from_ashamed/ )
Your experience is where you landed in that balance.
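To make that balance concrete, here is a minimal sketch of how a dispatch system might score an order against several objectives at once. Every signal and weight here is invented for illustration; none of it is DoorDash’s or Uber Eats’ actual model.

```python
# Hypothetical sketch: scoring an order assignment against several
# business objectives, not just the customer's delivery time.
# All weights and signals below are invented for illustration.

def assignment_score(eta_minutes, driver_idle_minutes,
                     churn_risk, delay_tolerance):
    """Higher score = more attractive assignment to the platform."""
    speed = -0.5 * eta_minutes               # faster delivery is better
    utilization = 0.3 * driver_idle_minutes  # keep idle drivers busy
    retention = -2.0 * churn_risk            # protect customers likely to leave
    slack = 0.8 * delay_tolerance            # tolerant customers can wait
    return speed + utilization + retention + slack

# Two identical orders, two different customer histories:
patient = assignment_score(eta_minutes=40, driver_idle_minutes=5,
                           churn_risk=0.1, delay_tolerance=0.9)
flighty = assignment_score(eta_minutes=40, driver_idle_minutes=5,
                           churn_risk=0.8, delay_tolerance=0.2)
```

Same order, same driver pool — the customer judged likely to churn gets a less favorable score, so the system prioritizes them differently.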
Arc Raiders: Matchmaking Is Not “Fair”
Arc Raiders is one of the most talked-about multiplayer shooters right now. Yet players are finding its matchmaking unpredictable, and a lot of that isn’t rumor.
Here’s what the community is observing:
Queue times sometimes stretch from seconds to minutes, with solo players ending up in duo or trio lobbies. (GamesRecon)
Solo players can spawn into ongoing matches, leaving them disadvantaged from the start.
Developers officially deny gear-score matchmaking, focusing instead on separating solos and squads to keep fights balanced.
Community tests and player experience suggest behavior-based matchmaking may be real: players who fight aggressively report consistently hostile lobbies, while more passive play leads to calmer matches. (Runeus)
Whether it’s queuing mechanics, population smoothing, or weighting behavior patterns, players believe the system isn’t just matching skill… it’s also using implicit signals from past behavior to shape future lobbies.
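What the community describes would only take a simple rolling signal. Here is a toy sketch of behavior-weighted lobby seeding — this is not Embark’s system, and the smoothing factor and signal are pure assumptions.

```python
# Illustrative sketch of behavior-weighted lobby seeding (NOT the
# game's actual matchmaking; the signal and weights are assumptions).

def lobby_hostility(aggression_history):
    """Map a player's recent aggression (0 or 1 per match) to a
    target lobby hostility via an exponential moving average."""
    ema = 0.0
    for fought in aggression_history:
        ema = 0.7 * ema + 0.3 * fought  # recent matches weigh most
    return ema  # higher -> seeded into more hostile lobbies

# An aggressive player vs. a mostly passive one:
aggressive = lobby_hostility([1, 1, 1, 1])
passive = lobby_hostility([0, 0, 0, 1])
```

Nothing here checks gear score or stated skill; past behavior alone pushes the two players toward different lobbies, which matches what players report.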
B2B Isn’t Immune Either
In the enterprise world, the same pattern shows up.
Consider SaaS pricing and deal approvals.
To a seller:
“Why did that discount get denied?”
“Why did this close immediately and that one take forever?”
Under the surface:
models weigh deal size, term length, churn risk, customer segment
margin thresholds and historical closure patterns bias outcomes
Two similar deals can get different results because the system’s math favors patterns that historically worked.
The system already had an opinion, and no one manually intervened.
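The deal-desk version of that opinion can be sketched the same way. The weights and thresholds below are hypothetical, stand-ins for patterns a real model would learn from historical closures.

```python
# Hypothetical deal-desk sketch: scoring a discount request from the
# features the post lists. All weights and thresholds are invented.

def approve_discount(deal_size, term_months, churn_risk,
                     segment_margin, requested_discount):
    """Approve if the request fits the headroom the deal 'earns'."""
    headroom = (0.02 * (deal_size / 10_000)  # bigger deals earn more
                + 0.005 * term_months        # longer terms earn more
                - 0.10 * churn_risk          # risky accounts earn less
                + 0.50 * segment_margin)     # high-margin segments earn more
    return requested_discount <= headroom

# Two similar deals, different historical patterns:
a = approve_discount(100_000, 24, churn_risk=0.1, segment_margin=0.6,
                     requested_discount=0.5)
b = approve_discount(100_000, 24, churn_risk=0.7, segment_margin=0.3,
                     requested_discount=0.5)
```

Same deal size, same term, same ask — one sails through and one gets denied, and no human touched either decision.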
When the Algorithm Became Memory
This is the real shift in modern tech.
Old systems:
human-written rules
explicit logic
easy to interrogate
Modern systems:
learned weights
billions of tiny numerical values trained on data
encode priorities, tradeoffs, biases
Those weights don’t say “if X, then Y.”
They say what the system learned matters.
In LLM-based systems, those weights are the algorithm.
They decide:
how strongly to weigh recency vs. accuracy
what similarities matter
how uncertainty should be handled
This learned “memory” is what actually shapes outcomes.
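The shift from rules to weights can be shown in a few lines. The rule below is something you can point at and argue with; the learned model is a made-up toy (weights chosen by hand here, standing in for trained values), and its “why” lives only in the numbers.

```python
import math

# Old style: a human-written rule, easy to interrogate.
def approve_rule(credit_score):
    return credit_score >= 650  # the threshold is right there to question

# Modern style: a tiny learned model (weights are illustrative
# stand-ins for trained values). There is no "if X, then Y" to read.
weights = [0.8, -1.3, 0.4]
bias = -0.2

def approve_learned(features):
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1 / (1 + math.exp(-z)) > 0.5  # the decision hides in the weights
```

You can ask why 650. You cannot ask `weights` anything; you can only observe what the system learned to treat as important.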
Why This Matters
These systems feel neutral because we can’t see the math.
But they’re not neutral.
Someone chose:
the data
the objective function
what success looks like
Once those decisions are baked into weights, they become hard to question, hard to audit, and easy to hide behind.
We no longer argue with a rule; we argue with the result of learned priorities.
Zach

