Your internet isn’t mine
Two people type the same search and see different worlds. A social feed hides a post, a price ticks up for one shopper, a political ad only ever finds the undecided. That invisibility is the point—and the problem. A group at the University of Amsterdam argues this adds up to an “algorithmic control crisis”: everyone’s online experience is uniquely personalized and largely unobservable, so neither citizens nor policymakers can tell what’s really being shown, to whom, and with what effect.
What’s doing the shaping?
Think of “algorithmic agents” as any automated system that ranks, recommends, or decides: simple sorting code, machine‑learning models, and everything in between. They filter email, order search results, curate news, set prices, and target ads. And they do it for each person, continuously. The result is a private “personalized experience cocoon”: a mix of content, offers, and interactions tailored to one individual and invisible to everyone else.
This is broader than the familiar “filter bubble.” Cocoons aren’t just about news and opinion; they include which products you see, which friends or creators surface, which prices and interfaces you’re offered. Multiply that across platforms dominated by a few firms, and society loses a shared reference frame. As the researchers put it, we can’t monitor what we can’t see.
Why the usual audits fall short
Many have tried to open the black boxes. Code audits rarely happen and wouldn’t reveal behavior shaped by data and context. “Sock puppet” tests—synthetic accounts fed scripted behavior—are clever but can drift far from real people’s histories. Scraping public interfaces or APIs runs into rate limits and missing signals. Even studies that observe volunteers with real profiles struggle with confounders, sampling bias, and ethics. Most importantly, these methods often isolate one platform at a time, while people fluidly move across search, social, shops, and news.
Tracking the trackers—carefully
The Amsterdam team chose a different protagonist: not the algorithm, but the person living inside the cocoon. They built a browser-based monitor, nicknamed Robin, to act like a flight data recorder for the web. With the explicit consent of participants from a nationally representative panel, a small plugin routes traffic to an enhanced “transparent proxy.” That lets Robin see both inputs (cookies, beacons, device fingerprints) and outputs (stories, search results, ads, prices) as they actually appear, alongside user actions like clicks or comments.
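To make the flight-recorder idea concrete, here is a minimal sketch of what one logged exposure event might look like, written in Python. The field names are illustrative assumptions; Robin’s actual data schema is not published in this account.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ExposureEvent:
    """One observed page view, as a monitor like Robin might record it.

    All field names here are illustrative, not Robin's real schema.
    """
    panelist_id: str                # pseudonymous participant identifier
    timestamp: datetime             # when the page was served
    url: str                        # whitelisted page that was visited
    inputs: dict = field(default_factory=dict)   # e.g. cookies, trackers, device hints sent with the request
    outputs: dict = field(default_factory=dict)  # e.g. headlines, search results, ads, prices shown
    user_action: str | None = None  # e.g. "click", "comment", or None for passive exposure

# Example: a search results page, recorded alongside the query that produced it
event = ExposureEvent(
    panelist_id="p-0042",
    timestamp=datetime.now(timezone.utc),
    url="https://search.example/results?q=energy+prices",
    inputs={"cookies": ["session=..."], "user_agent": "..."},
    outputs={"results": ["headline A", "headline B"], "ads": ["ad X"]},
    user_action="click",
)
```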
The setup deliberately mimics a “man-in-the-middle” position, minus the malice. Two design choices keep it on a tight leash (a sketch of both follows the list):
- A whitelist: only traffic to approved sites—news, search, price comparison, selected shops, some social pages—passes through the proxy. Banks, doctors, and direct messaging aren’t on the list.
- Filters: tailor‑made rules strip out passwords, payment details, private correspondence, and third‑party personal data before anything is stored.
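Here is a minimal sketch, in Python, of how those two safeguards might be enforced at the proxy. The whitelist entries and filter patterns are invented for illustration; the project’s real rules are tailor‑made per site.

```python
import re
from urllib.parse import urlparse

# Hypothetical whitelist: only these domains are ever proxied and stored.
WHITELIST = {"news.example", "search.example", "pricecheck.example"}

# Hypothetical filters: patterns for sensitive fields, stripped before storage.
SENSITIVE_PATTERNS = [
    re.compile(r"(password|passwd|pwd)=[^&\s]+", re.IGNORECASE),
    re.compile(r"\b(?:\d[ -]?){13,19}\b"),   # card-number-like digit runs
]

def is_whitelisted(url: str) -> bool:
    """Traffic to any other domain bypasses the proxy entirely."""
    return urlparse(url).hostname in WHITELIST

def scrub(payload: str) -> str:
    """Redact password fields, payment-like numbers, etc. before storage."""
    for pattern in SENSITIVE_PATTERNS:
        payload = pattern.sub("[REDACTED]", payload)
    return payload

def handle_request(url: str, payload: str, store) -> None:
    """Store only whitelisted, scrubbed traffic; drop everything else."""
    if not is_whitelisted(url):
        return                      # banks, doctors, messaging: never recorded
    store(url, scrub(payload))
```

The design choice worth noting: anything not explicitly whitelisted never reaches storage at all, and what does reach storage is filtered first.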
Because behavior has causes beyond the screen, the team pairs this stream with surveys, interviews, and occasional tests (for example, to see whether exposure to certain content improves knowledge). The aim isn’t to reverse‑engineer a single platform, but to measure lived exposure across the services that shape it.
The paradox and the safeguards
Here’s the hard truth: to understand surveillance and profiling, researchers must collect sensitive data. In Europe, that triggers strict rules. The project runs on explicit, informed consent (no buried terms, no tacit opt‑outs). Core data protection principles are baked into the design:
- Data minimization: observe only whitelisted sites; collect only what’s needed.
- Purpose limitation: define the scope—“research into the effects of personalized communication”—and don’t reuse beyond it.
- Storage limitation: delete unprocessed personal data on a schedule; share only aggregates; provide controlled access for peer review (see the sketch after this list).
- Security: keep data in the EU on research‑grade infrastructure, encrypt at rest and in transit, log and govern access.
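To illustrate the storage-limitation principle in code, here is a hedged sketch of a periodic cleanup step: raw records older than a retention window are deleted, and only per-domain aggregates survive for sharing. The 90-day window and the event fields are assumptions, not the project’s actual policy.

```python
from collections import Counter
from datetime import datetime, timedelta, timezone
from urllib.parse import urlparse

# Assumed retention window for unprocessed records; not the project's real figure.
RETENTION = timedelta(days=90)

def enforce_storage_limits(raw_events, now=None):
    """Drop raw events past the retention window; return shareable aggregates.

    `raw_events` are ExposureEvent-like objects with .timestamp and .url.
    """
    now = now or datetime.now(timezone.utc)
    cutoff = now - RETENTION

    # Storage limitation: delete unprocessed personal data on a schedule.
    retained = [e for e in raw_events if e.timestamp >= cutoff]

    # Share only aggregates: exposure counts per domain, with no panelist IDs attached.
    per_domain = Counter(urlparse(e.url).hostname for e in retained)
    return retained, dict(per_domain)
```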
Organizationally, a privacy working group and a joint steering committee (with the panel provider and an external expert) can veto whitelist entries, consent language, and data policies. It’s radical transparency by design: public explanations of what’s collected and why aim to win trust and keep power checks visible.
There are trade‑offs. Mobile apps and smart TVs are largely out of scope for now. Whitelists narrow coverage and are expensive to maintain. Consent done properly lowers participation. But that’s the cost of doing this responsibly.
What this unlocks
With a window into real exposure, researchers can start to quantify questions that have lingered as anecdotes:
- Diversity of information diets: how varied are sources, topics, and viewpoints across people and over time? (A toy calculation follows this list.)
- Polarization and fragmentation: are audiences drifting into non‑overlapping clusters?
- Targeting and fairness: when do prices, ads, or recommendations differ across groups in ways that matter?
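As a toy illustration of the first question, here is a sketch of a source-diversity score: the normalized Shannon entropy of the outlets a person was exposed to. The choice of metric is mine for illustration; the researchers do not commit to a particular formula in this account.

```python
import math
from collections import Counter

def source_diversity(domains: list[str]) -> float:
    """Normalized Shannon entropy of a person's exposure across outlets.

    0.0 means everything came from one source; 1.0 means exposure was
    spread evenly across all sources seen.
    """
    counts = Counter(domains)
    total = sum(counts.values())
    if total == 0 or len(counts) == 1:
        return 0.0
    probs = [c / total for c in counts.values()]
    entropy = -sum(p * math.log(p) for p in probs)
    return entropy / math.log(len(counts))   # divide by max entropy to normalize

# Toy usage: one reader saw three outlets, heavily skewed toward one of them.
print(source_diversity(["a.example"] * 8 + ["b.example", "c.example"]))  # ≈ 0.58
```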
The team also argues for building this capacity beyond a single project. Just as societies invented bank stress tests and audience ratings, they could mandate independent “algorithm observatories”: standing, representative panels that monitor personalized exposure under strong privacy and security safeguards.
Closing the loop
The opening mystery—why two people see different worlds—won’t go away. Personalized algorithms are here, and they’re not sci‑fi robots with three laws to obey. They’re diffuse, networked systems shaping billions of micro‑decisions at once. If society wants to steer rather than drift, it needs a clear view of what people actually see. Robin’s message is simple: stop guessing about the cocoon, and start measuring it—ethically, lawfully, and in the public interest.


