
My social media feed has been full of AI news for months. The events seem to be coming thick and fast. AI (i.e. artificial intelligence) is constantly creating something new, something never seen before, something perfect and profitable, in short, something that will change the world.
However, something else keeps creeping in between this news, which on closer inspection often doesn’t even deserve that Anglo-American label. AI will change the labour market of the future, that much is foreseeable. But how, and whom will it affect? The AI news accounts largely agree on this – at least in my feed.
Translators, administrative staff, (customer) support, and then I see it, usually somewhere in the top 10: HISTORIAN! I panic a little and put my smartphone away, annoyed. Only to pick it up again and look at the post once more. I keep scrolling and find another post next to funny dog videos and, at the moment, really too many photos of Donald Trump, hidden between supermarket adverts and some kind of call for a demonstration.
The same picture again: “Job in danger! These are the jobs with the greatest risk of being replaced by AI!” This time, the news account even cites a study carried out by Microsoft. First place among the jobs: interpreters and translators. Second place: HISTORIAN, and again a little panic creeps in. Third place: passenger attendant. Fourth place: sales representative. A total of 40 jobs are listed.
Meanwhile, the panic leaves me. Anger arises, not at the post, not at the AI news account, but at myself: “Why didn’t I study something sensible?” At the same time, I know that analysing and interpreting historical events, researching history, campaigning for democracy and human rights, the work I do every day at the Fritz Bauer Forum, is not for nothing. Is it? There it is again, the self-doubt of the humanities…
I read on; there was something about a source. “A study by Microsoft”, the post says – there’s no link, it’s Instagram after all. In any case, it’s a study by Microsoft, and if what’s being claimed here is true, I’m going to have a problem sooner or later…
Armed with the name of the supposed study, I set off. By now I’m no longer lying on the sofa with my smartphone but sitting at my desk in front of my computer. The source research begins… Fractions of a second pass, and there is the first result in the search engine: found! Below a brief summary in English on a Microsoft website is a link to GitHub. [1]
Great, so now I’ll find out why I could soon be unemployed… I start reading and have to stop right after the title. Not because it consists only of technical terms; no, the second word gives me pause: “with”. “Working with AI”. Then I read the second part of the title: “Measuring the Applicability of Generative AI to Occupations”. [2] A very sober title. I continue to skim the text, not because I imagine I can understand the study this way – remember, I’m a historian, not a computer scientist – but because I want to grasp the basic concept of the study and its subject matter. What exactly did the researchers do?
No mention of jobs at risk. No sweeping calls to retrain entire occupational groups, no apocalyptic description of a society in which AI takes over creative or analytical tasks. Instead, a hypothesis: as a society, we are already working WITH generative, i.e. content-generating, artificial intelligence, and this WITH has different consequences for different occupational groups.
Almost disappointed, I get up and get myself a glass of water. The post that started this short adventure was based on a table from the study: a list of 40 jobs that AI would supposedly make redundant in the future. This list does appear in the study, with a slightly different heading: “Top 40 occupations with highest AI applicability score”. [3] I am now amused, no longer anxious about the future…
The researchers have developed a method to assess occupational groups according to the extent to which generative AI can be applied to them. This rating is largely based on data from Microsoft’s Copilot applications and can be expressed as a number, for example 0.462 – the score for the profession of historian. Only the group of interpreters and translators scores higher, at 0.492. Whether these figures are correct, or whether they accurately reflect the reality of these professional groups, is not something I can or want to judge here. What I do know, however – and I owe this to the source criticism I learnt in my studies – is that the study can be read as identifying a risk to certain occupational groups only in a very contrived, almost malicious way.
So it’s not the threat of unemployment, but rather scientific confirmation of something I already know: my working environment is changing as a result of technological developments. Of course it is… I am writing this text on a computer, in a modern form of the German language. I’m not carving it into a clay tablet like historians before me, and I don’t know Latin either. Text and information processing are at the heart of my day-to-day work, which of course includes checking that information – with or without generative AI. Whichever tool I use, be it a word processor, photo and graphics software or AI, it is first and foremost just that: a tool. A text with exactly 200 characters? One command and my text meets the requirement. Of course, I still have to check and weigh up the result before publication, just as I would with information from a book. Is the information I want to convey correct, and is it conveyed the way I want it to be?
At the same time, the use of AI has a clear limit: complex issues, empathy, legal questions – in short, whenever something is at stake. Not because the AI refuses to work at that point – you always get an answer – but because a substantive decision has to be made. A problem that the computer manufacturer IBM already addressed in an internal presentation in 1979: “A computer can never be held accountable, therefore a computer must never make a management decision.” [4]
And yet the post leaves an aftertaste. It was factually incorrect, but the feelings it generated were real, and it left me with doubts – albeit brief ones. Doubts that I was able to counter with the means of source criticism I learnt during my studies.
Meanwhile, if you ask a generative AI like perplexity.ai about the content of the study, you get an adequate answer that even goes into the underlying data, describes the methodology and reproduces the core message.
The AI sees no increased risk for my job, and this certainly also applies to many of the other occupational groups mentioned. Even a quick glance at the internet, a few skimmed lines or a short prompt to a generative AI is enough to make the post in question and its author look extremely bad. So perhaps the job that we as a society should actually put up for debate is that of the “AI techbro news account” operator. At the very least, it shows that they have slept through the further development of their own workplace, even with generative AI at hand – not to mention journalistic integrity.
So back to the sofa. I was actually off work and should really be staring at my smartphone less…
[1] Microsoft Research: “Working with AI” results files, https://github.com/microsoft/working-with-ai
[2] Tomlinson, Jaffe, Wang, Counts, and Suri: Working with AI: Measuring the Applicability of Generative AI to Occupations, https://arxiv.org/pdf/2507.07935
[3] Tomlinson, Jaffe, Wang, Counts, and Suri: Working with AI: Measuring the Applicability of Generative AI to Occupations, https://arxiv.org/pdf/2507.07935, p. 39.
[4] IBM: internal presentation, 1979. Cf. https://web.archive.org/web/20241231172504/https://staging.cohostcdn.org/attachment/cd42a292-bde9-41d2-a900-fd587bd80d5c/C41B7UWWIAAWRCY.jpeg?width=675&auto=webp&dpr=2