Labor Economy

Micro-Management AI: How your company is tracking your every click.

Briefedge Research Desk
Dec 10, 2025 · 10 min read

Every time you pause to think, someone's system logs it as idle time.

That moment of silence between reading a brief and writing your response? Flagged. The two minutes you spent staring at a spreadsheet, mentally running numbers before typing anything? Recorded as inactivity. Welcome to the era of algorithmic management — where the pause before your best idea gets counted against you.

This isn't dystopian fiction. It's a Tuesday morning in most European offices.


The Software Your HR Department Didn't Tell You About

Somewhere between the pandemic pivot to remote work and the corporate scramble to "maintain productivity," a quiet industry exploded. Employee monitoring software — tools like Teramind, Hubstaff, ActivTrak, Veriato, and Microsoft Viva Insights — went from niche IT solutions to standard operating procedure for thousands of companies across France, Germany, the Netherlands, and the UK.

Gartner reported that by 2023, 70% of large employers were using some form of employee monitoring technology — a figure that had doubled since 2019. That growth didn't happen because productivity was falling. It happened because surveillance became affordable, and fear became a product someone could sell to nervous executives.

The pitch is elegant: you can't manage what you can't measure. And now? They're measuring everything.

What "Active Time" Actually Tracks [Lever: Cost]

The core metric most platforms sell is "active time" — a deceptively simple phrase for something far more invasive. Active time typically means:

- keystrokes logged per hour
- mouse movement frequency
- screenshots captured at timed intervals (sometimes every 3–5 minutes)
- URLs visited and how long you stayed on each
- application-switching patterns
- idle thresholds that trigger an alert when you haven't moved your cursor in 90 seconds
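
Mechanically, the idle-threshold rule is simpler than the marketing suggests. Here is a minimal sketch of how such a rule typically works; the names, thresholds, and numbers are illustrative assumptions, not any vendor's actual implementation.

```python
IDLE_THRESHOLD = 90  # assumed: seconds without input before time is marked "idle"

def active_time(events: list[float], workday_end: float) -> float:
    """Seconds credited as 'active', given sorted input-event timestamps.

    Each keystroke or mouse move buys up to IDLE_THRESHOLD seconds of
    credit; any longer gap between events is logged as idle time.
    """
    credited = 0.0
    for prev, nxt in zip(events, events[1:] + [workday_end]):
        credited += min(nxt - prev, IDLE_THRESHOLD)
    return credited

# Two hours of work: 30 minutes of steady typing, a 45-minute call with no
# keyboard or mouse input, then occasional mouse movement every 5 minutes.
events = [float(t) for t in range(0, 1800, 30)]    # typing: an event every 30 s
events += [4500.0 + 300 * k for k in range(6)]     # after the call: sparse input
print(active_time(events, workday_end=7200.0) / 7200)  # ~0.33 "active"
```

Note what the rule cannot distinguish: a 45-minute client call, a 45-minute coffee break, and 45 minutes of hard thinking all leave the same gap in the log.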

Microsoft's own Viva Insights — built into Microsoft 365 — scores workers on "focus time," "collaboration hours," and meeting effectiveness, feeding that data to managers in dashboard format.

Here's the mechanism that makes this particularly brutal for women: research from Professors Laura Sherbin and Ripa Rashid, published via the Center for Talent Innovation, found that women in office environments already spend 20–25% more time on coordination tasks — replying to colleague queries, managing team communication, emotional labour that barely generates a keystroke in any tracked application. Algorithmic management systems don't see coordination. They see idle time.


The Algorithmic Boss That Never Sleeps

So what happens when the data from these tools starts shaping decisions?

It already is. Not hypothetically. Right now.

Performance Scores Built on Broken Inputs

A 2023 study by the University of Amsterdam found that workers managed by algorithmic systems — where software data directly influenced performance reviews — experienced 34% higher rates of anxiety and reported a 22% drop in perceived job security compared to those under traditional management. The anxiety wasn't paranoia. The systems were genuinely punishing behaviours that had no relationship to output quality.

Here's how the mechanism works: a manager receives a dashboard showing that one employee has 73% active time and another has 61%. Without context, the gap looks like a performance gap. But active time doesn't account for the complexity of the work, the depth of thinking required, the quality of what was eventually produced, or the fact that one employee spent 40 minutes on a video call that registered as "idle" because they were talking rather than typing.
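
To make the distortion concrete, here is that dashboard gap reconstructed in a few lines. The numbers are illustrative assumptions; no vendor publishes its exact scoring formula.

```python
WORKDAY_MIN = 480  # an 8-hour day

def active_pct(input_minutes: float) -> float:
    """Share of the day during which the tracker saw keyboard or mouse input."""
    return round(100 * input_minutes / WORKDAY_MIN, 1)

# Employee A: 350 minutes of keyboard-visible work.
print(active_pct(350))            # 72.9 -> the dashboard's "73% active"

# Employee B: the same 350 productive minutes, except 40 were a video call
# and 15 were spoken answers to colleagues -- zero input events generated.
print(active_pct(350 - 40 - 15))  # 61.5 -> the dashboard's "61% active"
```

Identical output, an eleven-point "performance gap", and nothing on the dashboard that lets a manager tell the difference.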

The decision gets made on corrupt data. The performance review gets skewed. The raise doesn't come.

And who disproportionately suffers? Women, who are already navigating the visibility gap — the well-documented phenomenon where female employees are judged more harshly on perceived effort than male counterparts, a pattern documented in McKinsey's 2023 Women in the Workplace report, which surveyed 276 organisations in the US and Canada.

Here's the question that should make your stomach tighten: did you actually agree to this?

Under the EU's General Data Protection Regulation (GDPR), employers are required to inform employees about monitoring and to establish a lawful basis for it — in the workplace that usually means "legitimate interest", since European regulators have long held that employee consent is rarely freely given. In practice, this disclosure is buried on page 47 of an employment contract addendum that you signed on your first day alongside 30 other documents.

A 2022 survey by the European Trade Union Institute (ETUI) found that 56% of European workers subject to algorithmic monitoring had received no meaningful explanation of what data was being collected or how it was being used.

The GDPR framework also requires that processing be proportionate — that you can't collect everything just because you technically can. But enforcement is inconsistent. France's CNIL has issued guidance. Germany's works councils have pushed back hard. In smaller companies, in countries with weaker enforcement mechanisms, the monitoring runs unchecked.

Your employer may be collecting your behavioural data in ways that are — quietly, legally grey, technically disputable — not fully compliant with the law. That's not conspiracy theory. That's the gap between regulatory text and operational reality.


This Is What Your "Productivity Score" Is Actually Measuring

Let's slow down and be precise about what these systems capture versus what they think they're capturing, because the gap between the two is where careers get damaged.

The Measurement Illusion

Take the example of a senior marketing analyst at a mid-size logistics firm in Lyon — let's call her Laure. Laure's job involves pulling campaign data, identifying patterns, briefing agency partners, and presenting strategic recommendations. On a given Tuesday, she spends:

- 90 minutes reading industry reports (one application, minimal mouse movement)
- 45 minutes on a call with an agency (screen appears static)
- 30 minutes drafting a strategic memo (moderate keystrokes, but long pauses for thinking)
- 20 minutes dealing with three colleagues who message her with questions unrelated to her deliverables

Her algorithmic activity score for that day? Likely below average. Her actual output? Probably the most valuable thing produced in her team that week.
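
Run the arithmetic and you can see why. The per-block "active fractions" below are assumptions for illustration, since no tool publishes its weightings, but the shape of the result holds.

```python
# (minutes, assumed fraction of the block that generates input events, activity)
blocks = [
    (90, 0.15, "reading industry reports (occasional scroll)"),
    (45, 0.05, "agency call (talking, screen static)"),
    (30, 0.55, "drafting the memo (typing between long thinking pauses)"),
    (20, 0.80, "answering colleagues' messages"),
]

credited = sum(minutes * fraction for minutes, fraction, _ in blocks)
total = sum(minutes for minutes, _, _ in blocks)
print(f"{100 * credited / total:.0f}% active")  # ~26% -- a "bad" day on paper
```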

The system doesn't measure what Laure produced. It measures how she looked while producing it.

This distinction matters enormously when you understand how algorithmic management scales. These scores aggregate over weeks and months. They flow into quarterly reviews. They influence who gets put forward for promotion panels. They shape which names come to mind when a leadership opportunity opens.

The OECD's 2023 Employment Outlook flagged exactly this dynamic — algorithmic performance systems "risk systematically undervaluing cognitive and relational work relative to procedural and high-keystroke activity." That's academic language for: the system rewards people who type fast and punishes people who think slow. And thinking slow — carefully, deliberately — is often thinking well.

The Surveillance Tax on Mental Load

There's a secondary effect that rarely makes the headline stats but grinds workers down faster than the monitoring itself: the cognitive tax of being watched.

Psychological research on performance under surveillance — dating back to Cottrell's 1968 social facilitation studies and confirmed in modern workplace contexts — shows that monitoring increases errors on complex tasks by up to 27% while improving performance only on simple, repetitive ones.

What does this mean in practice? If your role involves anything requiring creativity, judgment, or non-linear problem solving — which most professional roles do, and which women in analytical and managerial roles certainly do — surveillance doesn't improve your output. It degrades it.

The irony is almost architectural. The monitoring system installed to catch underperformance actively causes it.

And yet: the data from the monitoring system is used to justify the monitoring system. The loop closes.


What European Companies Are (And Aren't) Doing About It

Not every company has sleepwalked into this. There's meaningful pushback, mostly where worker representation is structurally strong.

Germany stands out. Under the Betriebsverfassungsgesetz (Works Constitution Act), German works councils have co-determination rights over the introduction of monitoring technologies. Several major corporations — including Deutsche Telekom and Bosch — have faced direct works council vetoes over specific surveillance tools. The result is that German employees generally have stronger practical protections than their UK or French counterparts, despite the GDPR applying uniformly across all three countries.

The Netherlands has seen its Data Protection Authority (Autoriteit Persoonsgegevens) issue specific enforcement guidance on keystroke logging, calling many implementations "disproportionate by default."

But for the majority of European workers — particularly those in SMEs, in countries with weaker enforcement cultures, or in sectors without strong union representation — the monitoring runs and the data accumulates with minimal scrutiny.

The Eurofound 2023 report on New Forms of Employment and Digital Work found that women in hybrid and remote roles were 31% more likely to report being subject to intensive monitoring than men in equivalent positions. The reason isn't malice. It's that women disproportionately occupy the remote-first roles that monitoring software was initially designed for — and the tools followed the work pattern.


The Surveillance Architecture You Didn't Vote For

Here's what needs to be said plainly, because the corporate communications version of this story is infuriatingly sanitised.

The mass deployment of algorithmic monitoring in European workplaces happened without meaningful democratic or worker consent. It happened because software vendors found a market in anxious post-pandemic management culture. It happened because GDPR enforcement is uneven and slow. It happened because workers — particularly younger workers in junior to mid-level roles — often don't know their rights clearly enough to assert them.

The EU AI Act, which entered into force in 2024, classifies certain AI-based worker monitoring systems as "high risk" — meaning they require transparency, human oversight, and contestability of automated decisions. Implementation timelines extend to 2026 for many categories. Between now and then, there's a gap.

That gap is where your Tuesday morning is being logged.

What can you actually do? That depends on your role, your country, your company size, and your union status — all of which shape your practical leverage. But starting with the right questions is not a small thing: What monitoring tools does my employer use? What data is collected? How is it used in performance evaluation? Where is it stored, and for how long?

These are questions you are legally entitled to ask under GDPR Article 15 — the right of access. Most employers will be uncomfortable answering them in writing. That discomfort is informative.


The Algorithm Doesn't Know You're Thinking

The deepest problem with algorithmic management isn't the data collection. It isn't even the misuse of data. It's the ontological claim baked into the system: that what can be measured is what matters, and what can't be measured doesn't exist.

Your judgment, your institutional knowledge, the way you read a room in a negotiation, the question you asked that reframed a project — none of that registers. The cursor didn't move. The score didn't climb.

For women navigating workplaces that already discount their authority, undervalue their coordination work, and apply higher scrutiny to their perceived effort, algorithmic surveillance isn't neutral. It doesn't add bias to a clean system. It amplifies the bias already there, at machine speed, across every review cycle, with a veneer of objectivity that makes it harder to challenge.

A number doesn't look like a prejudice. That's precisely why it's dangerous.

The companies selling these tools will tell you this is about fairness — consistent measurement, objective data. But consistency in a flawed framework doesn't produce fairness. It produces flawed outcomes at scale.

The pause before your best idea is not laziness. It's cognition. And until the systems measuring you understand the difference, the system isn't measuring you — it's erasing you.
