Data Misinterpretation at Evidence in Action Forum

Evidence in Action: Same Data, Different Stories

What ETF’s Evidence in Action forum in Milan revealed about bias, framing, and the power of interpretation in VET systems


“There are three kinds of lies: lies, damned lies … and statistics.”

Who coined that well-known English phrase?

Was it the American author and humourist Mark Twain? The 19th-century British politician Benjamin Disraeli? Or somebody else entirely?

No one really knows, but the ghost of that well-worn cliché hovered over one session of the European Training Foundation’s annual monitoring forum, Evidence in Action, in late October.

Hosted in Milan, Italy, the one-day conference brought together statisticians and vocational education and training (VET) experts from ETF partner countries to review and assess key indicators on education, skills, and employment (KIESE).

This year’s forum gave participants the chance to review 2025 cross-country results and look ahead to 2026 — when a new ETF data tool, the Skills Gap Initiative (SGI), will be rolled out. The SGI aims to measure how effectively VET systems work with employers to identify and close skills gaps.

More on that later this month (i.e. November), when the ETF begins to roll out its annual country fiches (country-specific reports), a cross-country report highlighting national, regional, and international trends drawn from the data, and thematic reports. These three report types incorporate information formerly presented by the ETF’s flagship Torino Process, which tracked how national reforms improve access, quality, and funding in skills systems.

Lies, Damned Lies… and Data

So, how reliable are statistics? Can they be manipulated? Is data misinterpretation a problem — or a powerful tool?

Those questions took centre stage in one of two parallel workshops.

The other session explored the role of artificial intelligence in data gathering and analysis. The verdict? AI is a strong ally for number-crunching and text proofing, but data collection, input, and interpretation will remain firmly in human hands for some time.

In the Data Misinterpretation – a Problem or a Tool? workshop, delegates from across ETF partner regions — South East Europe & Türkiye, the Eastern Partnership, the Southern & Eastern Mediterranean, and Central Asia — examined how data can tell very different stories depending on who’s holding the pen.

“Data interpretation is never entirely neutral,” said Mihaylo Milovanovitch, the ETF’s Team Leader for Monitoring and Assessment.

To prove the point, participants were guided by a virtual character named Miss Interpretation. Three volunteers were handed rulers and asked to measure three sheets of coloured A4 paper. Each came up with the same figure: 560 sq. cm.

They were all wrong. The sheets weren’t perfect rectangles — each had holes punched in them — and the rulers, from a brand called Imperfecto, had “17 cm” printed twice. The entire calculation was flawed from the start.

Same Data, Different Stories

Next, Mihaylo presented a simple table showing annual VET school enrolment in a fictional country, broken down by gender.

Raw numbers suggested female participation was rising.
Percentage data, however, showed stagnation at around 30% over a decade — though the share had recently crept up to 33%. Male enrolment rose sharply, but female numbers grew faster in relative terms.

“All of these interpretations are true,” Mihaylo noted. “But each one can be used to drive a different narrative.”
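The arithmetic behind those competing narratives is easy to reproduce. A minimal sketch, using hypothetical enrolment figures (not the actual table shown at the forum) that happen to match the 30%-to-33% shift described above:

```python
# Hypothetical VET enrolment for a fictional country, start and end
# of a decade. These numbers are illustrative, not the forum's data.
male_start, male_end = 7_000, 10_000
female_start, female_end = 3_000, 5_000

total_start = male_start + female_start
total_end = male_end + female_end

# Story 1: raw numbers — female participation is rising.
female_increase = female_end - female_start   # +2,000 students

# Story 2: shares — female participation stagnates around 30%,
# creeping up to roughly 33%.
share_start = female_start / total_start      # 0.30
share_end = female_end / total_end            # ~0.333

# Story 3: relative growth — female enrolment grew faster than male,
# even though male enrolment rose more in absolute terms.
female_growth = female_end / female_start - 1 # ~67%
male_growth = male_end / male_start - 1       # ~43%
```

All three stories are computed from the same four numbers; which one leads the headline is a framing choice, not a data question.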

Participants discussed legitimate tools for data interpretation — such as strategic framing (presenting facts in a way that shapes how people perceive an issue) and data storytelling — as well as more questionable tactics, including truncated graphs that start at 50% to exaggerate differences or omit inconvenient details.

Selective use of averages, ignoring sample sizes, and drawing spurious correlations — think yoghurt consumption versus opium production in Afghanistan — rounded off a lively session.
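The truncated-graph tactic mentioned above is also easy to quantify. A sketch with made-up survey values (any resemblance to real figures is coincidental):

```python
# Two made-up survey results differing by three percentage points.
a, b = 52.0, 55.0

# On a full axis starting at 0, bar b is only ~6% taller than bar a.
full_axis_ratio = b / a                             # ~1.06

# On an axis truncated to start at 50, the visible portion of bar b
# is 2.5 times taller than that of bar a — same data, exaggerated.
baseline = 50.0
truncated_ratio = (b - baseline) / (a - baseline)   # 2.5
```

A three-point gap becomes a bar two-and-a-half times taller, which is why chart baselines deserve as much scrutiny as the numbers themselves.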

The takeaway? Every statistic carries choices — and consequences.

The Good vs. the Bad:

  • Transparency in interpretation / data censorship or hallucination
  • Staying true to the data / falsifying or bending it
  • Policy-driven interpretation / politicisation of data

“We Must Be Transparent”

Mayssaa Daher, Math Statistician in charge of Social Statistics at Lebanon’s Central Administration of Statistics, summed up the ethical challenge:

“It is our duty not to misinterpret data because of all the political problems we have in our region. We must be transparent.”

She added that her agency was the first in Lebanon to publish negative GDP growth and high unemployment figures, despite the fact that this was politically unpalatable.
