22.01.2026 at 01:00
Ploum
With some adjustments, this post is mostly a translation of a post I published in French three years ago. In light of the European Commission’s "call for evidence on Open Source," and as a professor of "Open Source Strategies" at École Polytechnique de Louvain, I thought it was a good idea to translate it into English as a public answer to that call.
Google (sorry, Alphabet), Facebook (sorry, Meta), Twitter (sorry, X), Netflix, Amazon, Microsoft. All those giants are part of our daily personal and professional lives. We may not even interact with anything but them. All are 100% American companies.
China is not entirely absent either, with Alibaba, TikTok, and some services less popular in Europe yet used by billions worldwide.
What about European tech champions? Nearly nothing, to the great sadness of politicians who believe that the success of a society is measured by the number of billionaires it creates.
Despite having few tech billionaires, Europe is far from irrelevant. In fact, it’s the opposite: Europe is the central place that allowed most of our tech to flourish.
The Internet, the interconnection of most of the computers in the world, has existed since the late sixties. But no protocol existed to actually exploit that network, to explore it and search for information. At the time, you needed to know exactly what you wanted and where to find it. That’s why, in the USA, a protocol called "Gopher" was developed.
At the same time, the "World Wide Web," composed of the HTTP protocol and the HTML format, was invented by a British citizen and a Belgian citizen who were working in a European research facility located in Switzerland. But the building was on the border with France, and there’s much historical evidence pointing to the Web and its first server having been invented in France.
It’s hard to be more European than the Web! It looks like the Official European Joke! (And, yes, I consider Brits Europeans. They will join us back, we miss them, I promise.)
Gopher is still used by a few hobbyists (like yours truly), but it never truly became popular, except for a very short time in some parts of America. One of the reasons might have been that Gopher’s creators wanted to keep their rights to it and license any related software, unlike the European Web, which conquered the world because it was offered as a common good instead of seeking short-term profits.
While Robert Cailliau and Tim Berners-Lee were busy inventing the World Wide Web in their CERN office, a Swedish-speaking Finnish student started to code an operating system and make it available to everyone under the name "Linux." Today, Linux is probably the most popular operating system in the world. It runs on any Android smartphone, is used in most data centers, in most of your appliances, in satellites, in watches and is the operating system of choice for many of the programmers who write the code you use to run your business. Its creator, the European Linus Torvalds, is not a billionaire. And he’s very happy about it: he never wanted to become one. He continued coding and wrote the "git" software, which is probably used by 100% of the software developers around the world. Like Linux, Git is part of the common good: you can use it freely, you can modify it, you can redistribute it, you can sell it. The only thing you cannot do? Privatize it. This is called "copyleft."
In 2017, a decentralized and ethical alternative to Twitter appeared: Mastodon. Its creator? A German student, born in Russia, who had the goal of allowing social network users to leave monopolies to have humane conversations without being spied on and bombarded with advertising or pushed-by-algorithm fake news. Like Linux, like git, Mastodon is copyleft and now part of the common goods.
Allowing human-scale discussion with privacy and without advertising was also the main motivation behind the Gemini protocol (whose name has since been hijacked by Google AI). Gemini is a stripped-down version of the Web which is, by design, considered final: everybody can write Gemini-related software without having to update it in the future. The goal is not to attract billions of users but to be there for those who need it, even in the distant future. The creator of the Gemini protocol wishes to remain anonymous, but we know that the project started while he was living in Finland.
I could continue with the famous VLC media player, probably the most popular media player in the world. Its creator, the Frenchman Jean-Baptiste Kempf, refused many offers that would have made him a very rich man. But he wanted to keep VLC a copyleft tool part of the common goods.
Don’t forget LibreOffice, the copyleft office suite maintained by hundreds of contributors around the world under the umbrella of the Document Foundation, a German institution.
We often hear that Europeans don’t have the "success culture" that Americans have. Those examples, and there are many more, prove the opposite. Europeans like success. But they rarely count "winning against the rest of society" as a success. Instead, they tend to consider success a collective endeavour. Success is when your work is recognized long after you are gone, when it benefits every citizen. Europeans dream big: they hope that their work will benefit humankind as a whole!
We don’t want a European Google Maps! We want our institutions at all levels to contribute to OpenStreetMap (which was created by a British citizen, by the way).
Google, Microsoft, Facebook may disappear tomorrow. It is even very probable that they will not exist in forty or fifty years. It would even be a good thing. But could you imagine the world without the Web? Without HTML? Without Linux?
Those European endeavours are now a fundamental infrastructure of all humanity. Those technologies are definitely part of our long-term history.
In the media, success is often reduced to the size of a company or the bank account of its founder. Can we just stop equating success with short-term economic growth? What if we used usefulness and longevity? What if we gave more value to the fundamental technological infrastructure instead of the shiny new marketing gimmick used to empty naive wallets? Well, I guess that if we changed how we measure success, Europe would be incredibly successful.
And, as Europeans, we could even be proud of it. Proud of our inventions. Proud of how we contribute to the common good instead of considering ourselves American vassals.
Some are proud because they made a lot of money while cutting down a forest. Others are proud because they are planting trees that will produce the oxygen breathed by their grandchildren. What if success was not privatizing resources but instead contributing to the commons, to make it each day better, richer, stronger?
The choice is ours. We simply need to choose whom we admire. Whom we want to recognize as successful. Whom we aspire to be when we grow up. We need to sing the praises of our true heroes: those who contribute to our commons.
I’m Ploum, a writer and an engineer. I like to explore how technology impacts society. You can subscribe by email or by rss. I value privacy and never share your address.
I write science-fiction novels in French. For Bikepunk, my new post-apocalyptic-cyclist book, my publisher is looking for contacts in other countries to distribute it in languages other than French. If you can help, contact me!
19.01.2026 at 01:00
Ploum
What I like most about teaching "Open Source Strategies" at École Polytechnique de Louvain is how much I learn from my students, especially during the exam.
I dislike exams. I still have nightmares about exams. That’s why I try to subvert this stressful moment and make it a learning opportunity. I know that adrenaline increases memorization dramatically. I make sure to explain to each student what I expect and to be helpful.
Here are the rules:
1. You can have all the resources you want (including a laptop connected to the Internet)
2. There’s no formal time limit (but if you stay too long, it’s a symptom of a deeper problem)
3. I allow students to discuss among themselves if it is on topic. (In reality, they never do it spontaneously until I force two students with a similar problem to discuss together.)
4. You can prepare and bring your own exam question if you want (something done by fewer than 10% of the students)
5. Come dressed for the exam you dream of taking!
This last rule is awesome. Over the years, I have had a lot of fun with traditional folkloric clothing from different countries, students in pajamas, a banana and this year’s champion, my Studentausorus Rex!
My all-time favourite is still a student in a full Minnie Mouse costume, who did an awesome exam with full face make-up, big ears, big shoes, and huge gloves. I still regret not taking a picture, but she was the very first student to take at face value what was meant as a joke, starting a tradition that has lasted over the years.
Rule N°1 implies having all the resources you want. But what about chatbots? I didn’t want to test how ChatGPT would answer my questions; I wanted to help my students better understand what Open Source means.
Before the exam, I copy/pasted my questions into some LLMs and, yes, the results were interesting enough. So I came up with the following solution: I would let the students choose whether they wanted to use an LLM or not. This was an experiment.
The questionnaire contained the following:
# Use of Chatbots
Tell the professor if you usually use chatbots (ChatGPT/LLM/whatever) when doing research and investigating a subject. You have the choice to use them or not during the exam, but you must decide in advance and inform the professor.
Option A: I will not use any chatbot, only traditional web searches. Any use of them will be considered cheating.
Option B: I may use a chatbot as it’s part of my toolbox. I will then respect the following rules:
1) I will inform the professor each time information comes from a chatbot
2) When explaining my answers, I will share the prompts I’ve used so the professor understands how I use the tool
3) I will identify mistakes in answers from the chatbot and explain why those are mistakes
Not following those rules will be considered cheating. Mistakes made by chatbots will be considered more important than honest human mistakes, resulting in the loss of more points. If you use chatbots, you should be held accountable for the output.
I thought this was fair. You can use chatbots, but you will be held accountable for it.
This January, I saw 60 students. I interacted with each of them for a mean time of 26 minutes. This is a tiring but really rewarding process.
Of 60 students, 57 decided not to use any chatbots. I managed to ask 30 of them to explain their choice; for the others, I unfortunately did not have the time. After the exam, I grouped those justifications into four different clusters. I did it without looking at their grades.
The first group is the "personal preference" group. They prefer not to use chatbots. They use them only as a last resort, in very special cases or for very specific subjects. Some even made it a matter of personal pride. Two students told me explicitly "For this course, I want to be proud of myself." Another also explained: "If I need to verify what an LLM said, it will take more time!"
The second group was the "never use" one. They don’t use LLMs at all. Some are even very angry at them, not for philosophical reasons, but mainly because they hate the interactions. One student told me: "Can I summarize this for you? No, shut up! I can read it by myself you stupid bot."
The third group was the "pragmatic" group. They reasoned that this was the kind of exam where it would not be needed.
The last and fourth group was the "heavy user" group. They told me they heavily use chatbots but, in this case, were afraid of the constraints. They were afraid of having to justify a chatbot’s output or of missing a mistake.
After doing that clustering, I wrote each student’s grade next to their cluster and I was shocked by how coherent it was. Note: grades are between 0 and 20, with 10 being the minimum grade to pass the class.
The "personal preference" students were all between 15 and 19, which makes them very good students, without exception! The "proud" students were all above 17!
The "never use" was composed of middle-ground students around 13 with one outlier below 10.
The pragmatics were in the same vein but a bit better: they were all between 12 and 16 without exceptions.
The heavy users were, by far, the worst. All students were between 8 and 11, with only one exception at 16.
This is, of course, not an unbiased scientific experiment. I didn’t expect anything. I will not draw any conclusions. I am only sharing the observation.
Of 60 students, only 3 decided to use chatbots. This is not very representative, but I still learned a lot because part of the constraints was to show me how they used chatbots. I hoped to learn more about their process.
The first chatbot student forgot to use it. He did the whole exam and then, at the end, told me he hadn’t thought about using chatbots. I guess this put him in the "pragmatic" group.
The second chatbot student asked only a couple of short questions to make sure he clearly understood some concepts. This was a smart and minimal use of LLMs. The resulting exam was good. I’m sure he could have done it without a chatbot. The questions he asked were mostly a matter of improving his confidence in his own reasoning.
This reminded me of a previous-year student who told me he used chatbots to study. When I asked how, he told me he would tell the chatbot to act as the professor and ask exam questions. As a student, this allowed him to know whether he understood enough. I found the idea smart but not groundbreaking (my generation simply used previous years’ questions).
The third chatbot-using student had a very complex setup where he would use one LLM, then ask another unrelated LLM for confirmation. He had walls of text that were barely readable. When glancing at his screen, I immediately spotted a mistake (a chatbot explaining that "Sepia Search is a compass for the whole Fediverse"). I asked if he understood the problem with that specific sentence. He did not. Then I asked him questions for which I had seen the solution printed in his LLM output. He could not answer even though he had the answer on his screen.
But once we began a chatbot-less discussion, I discovered that his understanding of the whole matter was okay-ish. So, in this case, chatbots did him a heavy disservice. He was totally lost in his own setup. He had LLMs generate walls of text he could not read. Instead of trying to think for himself, he tried to have chatbots pass the exam for him, which was doomed to fail because I was examining him, not the chatbots. He passed but would probably have fared better without chatbots.
Can chatbots help? Yes, if you know how to use them. But if you do, chances are you don’t need chatbots.
One clear conclusion is that the vast majority of students do not trust chatbots. If they are explicitly made accountable for what a chatbot says, they immediately choose not to use it at all.
One obvious bias is that students want to please the teacher, and I guess they know where I am on this spectrum. One even told me: "I think you do not like chatbots very much so I will pass the exam without them" (very pragmatic of him).
But I also minimized one important generational bias: the fear of cheating. When I was a student, being caught cheating was a clear zero for the exam. You could, in theory, be expelled from university for aggravated cheating, whatever "aggravated" could mean.
During the exam, a good number of students called me panicked because Google was forcing autogenerated answers and they could not disable it. They were very worried I would consider this cheating.
First, I realized that, like GitHub, Google has a 100% market share, to the point that students don’t even consider using something else a possibility. I should work on that next year.
Second, I learned that cheating, however lightly, is now considered a major crime. It might result in the student being banned from every university in the country for three years. Discussing an exam with someone who has yet to take it might be considered cheating. Students have very strict rules on their Discord.
I was completely flabbergasted because, to me, discussing "What questions did you have?" was always part of the collaboration between students. I remember one specific exam where we gathered in an empty room and helped each other before taking it. Whenever one of us finished her exam, she would come back to the room and tell all the remaining students what questions she had had and how she had solved them. We never considered that "cheating" and, as a professor, I always design my exams hoping that the good ones (who usually choose to take the exam early) will help the remaining crowd. Every learning opportunity is good to take!
I realized that my students are so afraid of cheating that they mostly don’t collaborate before their exams! At least not as much as what we were doing.
In retrospect, my instructions were probably too harsh and discouraged some students from using chatbots.
Another innovation I introduced in the 2026 exam was the stream of consciousness. I asked them to open an empty text file and keep a stream of consciousness during the exam. The rules were the following:
In this file, please write all your questions and all your answers as a "stream of consciousness." This means the following rules:
1. Don’t delete anything.
2. Don’t correct anything.
3. Never go backward to retouch anything.
4. Write as thoughts come.
5. No copy/pasting allowed (only exception: URLs)
6. Rule 5 implies no chatbot for this exercise. This is your own stream of consciousness.
Don’t worry, you won’t be judged on that file. This is a tool to help you during the exam. You can swear, you can write wrong things. Just keep writing without deleting. If you are lost, write why you are lost. Be honest with yourself.
This file will only be used to try to get you more points, but only if it is clear that the rules have been followed.
I asked them to send me the file within 24 hours of the exam. Out of 60 students, I received 55 files (the remaining 5 were not penalized). There was also a bonus point if you sent it to the exam git repository using git-send-email, something 24 of them managed to do correctly.
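For readers who have never used it, here is a minimal sketch of one way to do that. The addresses and SMTP settings below are made up for illustration and are not the actual exam configuration:
# One-time setup: tell git which SMTP server to send mail through (hypothetical values)
git config --global sendemail.smtpServer smtp.example.org
git config --global sendemail.smtpUser student@example.org
git config --global sendemail.smtpServerPort 587
git config --global sendemail.smtpEncryption tls
# Commit the stream-of-consciousness file, then email the last commit as a patch
git add stream.txt
git commit -m "Stream of consciousness"
git send-email --to="professor@example.org" HEAD^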
The results were incredible. I did not read them all, but this tool allowed me to have a glimpse inside the minds of the students. One said: "I should have used AI, this is the kind of question perfect for AI" (he did very well without it). For others, I realized how much stress they had but were hiding. I was touched by one stream of consciousness starting with "I’m stressed, this doesn’t make any sense. Why can’t we correct what we write in this file" and then, 15 lines later, "this is funny how writing the questions with my own words made the problem much clearer and how the stress start to fade away".
And yes, I read all of the failing students’ files and managed to save a bunch of them when it was clear that they had, in fact, understood the matter but could not articulate it well in front of me because of the stress. Unfortunately, not everybody could be saved.
My main takeaway is that I will keep this method next year. I believe it confronts students with their own use of chatbots. I also learn how they use them. I’m delighted to read their thought processes through the stream of consciousness.
Like every generation of students, there are good students, bad students and very brilliant students. It will always be the case; people evolve (I was, myself, not a very good student). Chatbots don’t change anything regarding that. As with every new technology, smart young people are very critical and, by definition, smart about how they use it.
The problem is not the young generation. The problem is the older generation destroying critical infrastructure out of fear of missing out on the new shiny thing from big corp’s marketing department.
Most of my students don’t like email. An awful lot of them learned only with me that Git is not the GitHub command-line tool. It turns out that by imposing Outlook with mandatory subscription to useless academic emails, we make sure that students hate email (Microsoft is on a mission to destroy email with the worst possible user experience).
I will never forgive the people who decided to migrate university mail servers to Outlook. This was both incompetence and malice on a terrifying level because there were enough warnings and opposition from very competent people at the time. Yet they decided to destroy one of the university’s core infrastructures and historical foundations (UCLouvain is listed by Peter Salus as the very first European university to have a mail server; there were famous pioneers in the department).
By using Outlook, they continue to destroy the email experience. Out of 55 streams of consciousness, 15 ended up in my spam folder. All had their links mangled by Outlook. And the university keeps sending so many useless emails to everyone. One of my students told me that they refer to their university email as "La boîte à spams du recteur" (the Chancellor’s spam inbox). And then we dare ask why they use Discord?
Another student asked me why it took four years of computer engineering studies before a teacher explained to them that Git was not GitHub and that GitHub was part of Microsoft. He had a distressed look: "How could I have known? GitHub was imposed on us for so many exercises!"
Each year, I tell my students the following:
It took me 20 years after university to learn what I know today about computers. And I have only one reason to be here in front of you: to make sure you are faster than me. Be sure that you do it better and deeper than I did. If you don’t manage to outsmart me, I will have failed.
Because that’s what progress is about. Progress is each generation going further than the previous one while learning from the mistakes of its elders. I’m here to tell you about my own mistakes and the mistakes of my generation.
I know that most of you are only there to get a diploma while doing the minimal required effort. Fair enough, that’s part of the game. Challenge accepted. I will try to make you think even if you don’t intend to do it.
In earnest, I have a lot of fun teaching, even during the exam. For my students, the mileage may vary. But for the second time in my life, a student gave me the best possible compliment:
— You know, you are the only course for which I wake up at 8AM.
To which I responded:
– The feeling is mutual. I hate waking up early, except to teach in front of you.
I’m Ploum, a writer and an engineer. I like to explore how technology impacts society. You can subscribe by email or by rss. I value privacy and never share your address.
I write science-fiction novels in French. For Bikepunk, my new post-apocalyptic-cyclist book, my publisher is looking for contacts in other countries to distribute it in languages other than French. If you can help, contact me!
13.01.2026 at 01:00
Ploum
A few months ago, a reader introduced me to an incredible historical-political game made entirely in ASCII art: "Le comte et la communiste" ("The Count and the Communist"), by Tristan Pun.
The simplicity of the graphics pulled me into the story the way an excellent book would. Playing a spy/maid, you get to discover what goes on behind the scenes of an aristocratic castle during the First World War. Interestingly for the sociopolitical subtext, time spent in the castle is measured in loads of laundry. Because, spy or not, you are first and foremost a maid, and you will have to roll up your sleeves and do the washing!
If I am telling you about it with such enthusiasm, it is because at one point in the game you find a strip of paper containing a message in Baudot format. Baudot code is the ancestor of ASCII and the first character encoding used on automatic telegraphs: the message is received directly on paper tape, unlike Morse, which requires a human operator on the receiving end.
In the game, the message is found on a strip of paper used as a bookmark, which looks like this.
On a plaque, you will find all the information needed to decipher the Baudot code.
All that remains is to decipher the message… But I very quickly found that tedious. What if I asked my trusty command line to do it for me? After all, reading the excellent "Efficient Linux at the Command Line" by Daniel J. Barrett gave me confidence.
Fair warning: this is going to be very technical. You don’t have to inflict it on yourself.
Still here? Let’s go!
Once the ASCII-art code is copied and pasted into a file, we display it with "cat". Even if it is not strictly necessary, I always start all my pipe chains with cat. It feels clearer to me.
Here, the difficulty is that I want to access the columns: I want to rebuild a word by taking the first letters of each line, then the second letters of each line, and so on.
The Unix command that comes closest to this is "cut". Cut lets you take the Xth character of a line with the -cX option. For the fifteenth character, I therefore do "cut -c15". I immediately realize that I am going to need a loop. Since the lines are less than 100 characters long, I can do a "for i in {1..100}; do" with a "cut -c$i" inside.
And there we go: I now know how to isolate each column.
In the ASCII art, the binary of the Baudot code is represented by "o"s and spaces. For the sake of clarity, I will replace the spaces with "l"s (it looks like a binary "1"). The Unix command for substituting characters is "tr" (translate): tr " " l (the space is in quotes).
Still for clarity, I will delete everything that is not an o or an l. Tr can delete characters with the -d option, and the -c (complement) option means "all characters except those in the list". So I add a tr -cd "ol".
The chain inside my loop therefore looks like:
cat $1|cut -c"$i"|tr " " l|tr -cd "ol"
Problem: my groups of 5 letters are still vertical!
I tried removing the newlines with tr -d "\n", but that produces strange results, especially since the final newline is also removed.
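A quick way to see the problem, on a made-up three-character column rather than the game’s actual tape:
printf 'o\nl\no\n' | tr -d '\n'
# prints "olo" with no trailing newline, so the next column
# printed by the loop ends up glued onto the same output line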
So the command that seems perfectly suited for this is the opposite of "cut": "paste". But try piping the chain into "paste": nothing happens!
And for good reason: paste was originally designed to join the corresponding lines of several files. Here, there is only one file: the input stream. Paste therefore has nothing to join to it.
Fortunately, the "-s" option tells paste to put everything on a single line. I had completely missed it, for a very simple reason: paste’s man page is incomprehensible.
-s, --serial
paste one file at a time instead of in parallel
I defy anyone to see the connection between that man page and what the option actually does. Fortunately, I had the intuition to try "tldr" instead of man.
Join all the lines into a single line, using TAB as delimiter:
paste -s path/to/file
Admit it: that is already much clearer!
A quick test shows me that it is almost right. Everything is on one line. Except that there are now TABs between each letter. We could remove them with tr. Or simply tell paste not to insert them, by using a null separator instead:
cat $1|cut -c"$i"|tr " " l|tr -cd "ol"|paste -sd "\0"
My for loop now prints each letter on its own line. All that remains is to translate the Baudot code.
I could, for example, add a pipe to "sed s/llloo/a/" to replace the letter a. And a pipe to a new sed for the letter b, and so on. It works, but it is ugly and very slow (each sed launching its own process).
When you have that many sed rules, you might as well put them in a file, baudot.sed, containing the sed commands, one per line:
s/llloo/a/
s/oollo/b/
s/loool/c/
...
I can then apply these rules with "sed -f baudot.sed".
Baudot code has one subtlety: there is a code that switches to "special character" mode. I don’t overthink it; I simply replace that code with "<" and the code that returns to normal mode with ">" (neither of those two characters exists in Baudot code). That way, I know that any letter between < and > is not really that letter but the corresponding symbol. A <m> is actually a ".". Also, there are many spaces before and after the messages, which have been converted into as many "l"s. There, I went for a big fat hack. Instead of using a normal "l" in the Baudot rules, I used a capital L. Then, at the very end, I delete the remaining "l"s with a global rule: "s/l//g" (the "g" means to replace all the l’s on a line, even if there are several). Finally, I turn the "L" back into lowercase with "s/L/l/g". Yes, it is a big fat hack. It does the job.
My baudot.sed file then looks like this:
s/ooloo/</
s/lloll/ /
s/ooooo/>/
s/llloo/a/
s/oollo/b/
s/loool/c/
s/lollo/d/
s/llllo/e/
s/loolo/f/
s/oolol/g/
s/ololl/h/
s/llool/i/
s/loloo/j/
s/loooo/k/
s/ollol/L/
s/oooll/m/
s/looll/n/
s/oolll/o/
s/oolll/p/
s/olooo/q/
s/lolol/r/
s/llolo/s/
s/ollll/t/
s/llooo/u/
s/ooool/v/
s/olloo/w/
s/ooolo/x/
s/ololo/y/
s/olllo/z/
s/l//g
s/L/l/g
My message is decoded. But, of course, it is displayed vertically.
Doesn’t that remind you of something?
A good old paste -sd "\0" puts everything back the right way around (and this time, I didn’t have to search for it).
In the game, the only symbol used is the period, which here has become a "<m>", or a "<m" if it sits at the end of the message. Let’s be clean all the way and add two little seds. We could also make another sed file with all the special characters, but in the end the game only contains two messages.
Too bad, I was fired up for more.
My final script is therefore:
#!/bin/bash
# Straighten each column, translate the Baudot code, then straighten the result
for i in {1..100}; do
    cat $1|cut -c"$i"|tr " " l|tr -cd "ol"|paste -sd "\0"
done|sed -f baudot.sed|paste -sd "\0"|sed "s/<m>/./"|sed "s/<m/./"
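Assuming the ASCII-art tape has been saved to a file and the command is run from the directory containing baudot.sed (the file names below are hypothetical), it goes something like this:
chmod +x baudot.sh
./baudot.sh tape.txt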
In hindsight, this script is much simpler and more effective than a Python script would have been. The Python script would take dozens of lines and would operate on a matrix of characters. The Unix tools, on the other hand, operate on streams of text. I find a certain elegance in that, a particular pleasure. The whole thing took me about 30 minutes, a good part of which went into the "tr -d \n" mistake.
Since my Kagi account now gives me access to ChatGPT, I decided to run the experiment of asking it to solve the same problem I had just solved. Just to understand what "vibe coding" is.
At first, ChatGPT is lost. It clearly does not know how to translate the messages, hammers me with long tables of so-called Baudot code (which are sometimes correct, but not always) and tells me the history of the code (which is generally correct, but which I did not ask for). It is verbose; I have to tell it several times to keep it short and be efficient.
I ask it to write me a bash script. Its first attempts are extremely long and incomprehensible. It seems very fond of endless awk scripts.
Persevering, I ask ChatGPT to stop using awk and I spell out each step for it, one after the other: I tell it that it has to go through each column, straighten it up, convert it, and so on.
Logically, it arrives at a result very similar to mine. It chooses to use "tr -d \n" to remove the line endings, but, as I said, that does not work correctly. I let it slide, since I had not understood the error myself.
I do notice one interesting improvement, though: where I asked for the first 100 columns, ChatGPT measures the length with:
for i in $(seq $(head -1 $1|wc -c)); do
Concretely, it takes the first line with "head -1", counts its characters with "wc -c" and builds a sequence from 1 up to that number with "seq".
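One small detail about that trick: wc -c counts bytes, so the first line’s trailing newline is included and the sequence runs one column past the real line length. The extra cut merely produces an empty column, which is harmless here. A quick check:
printf 'olol\n' | wc -c
# prints 5, not 4: the newline is counted too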
It is an excellent idea, provided the first line is representative of the others. Once the "straightening" is in place (it took me dozens of prompts and examples to manage to explain it; every time, it claimed it had understood, and it was wrong), I ask it to translate using the Baudot code.
Instead of my "sed" solution, it writes a bash function, "baudot_to_letter", which is one big case statement:
baudot_to_letter() {
    case $1 in
        00000) echo " " ;;
        00001) echo "E" ;;
        00010) echo "A" ;;
        # ... and so on, one pattern per Baudot code
    esac
}
Amusing detail: the characters are not in alphabetical order but in the binary order of the Baudot representation. Why not?
I accept this solution even though I prefer the sed one, because I did not want to write bash; I wanted to use the Unix tools. I told it several times that I wanted a Unix command, not a bash script with several functions. But I let it keep this one because, deep down, its solution is relevant.
Now that all the steps have been described, I test its final code. Which does not work, producing a bash error. I ask it for a new version, copy-pasting the error back to it. After several iterations of this kind, I finally have a script that works and gives me the following translation of the message:
LLECJGCISTGCIUMCEISJEISKN CT UIXV SJECUZUCEXV
The number of letters is not even correct. Obviously, I have no desire to debug the thing. So I then ask ChatGPT to run the script itself and give me the translation.
At this point in the story I have, stopwatch in hand, spent more time trying to use ChatGPT than I spent writing my own script, imperfect but functional. I find myself endlessly copy-pasting between ChatGPT and my terminal, trying to negotiate changes and debugging code I did not write.
And that is with me having cleaned the message for ChatGPT, removing everything that was not Baudot code. My own script is much more robust. I am fed up and I give up.
ChatGPT is only impressive in its ability to converse, to pretend. Yes, it can sometimes give you ideas. It improved, for example, the first line of my script, which I had botched.
But you need to have time to waste. At that point, you might as well dive back into a good book, knowing that most of the ideas in it will be sound.
ChatGPT can be useful for brainstorming, provided you are yourself deeply knowledgeable in the field and very critical of everything it produces. Which says a lot: an enormous expenditure of energy for very little in the end.
You think you are more efficient using AI because you spend less time "thinking." But studies seem to show that, in reality, you are slower. With ChatGPT, you are "busy" all the time. You never stop to think about your problem, to read different resources.
And when ChatGPT really does save you time, it may be because it is a domain in which you are not truly competent. As far as the command line is concerned, I can only repeat my suggestion to read "Efficient Linux at the Command Line." Really!
As Cal Newport puts it very well: who benefits the most from you being an incompetent worker who blindly pushes the buttons of a machine instead of using his own brain?
Perhaps spending a few hours spying on aristocrats while being forced to do the laundry will make you think about that. In any case, it is worth it!
Have fun, happy pondering and… happy laundry!
I’m Ploum and I have just published Bikepunk, an ecological-cyclist fable typed entirely on a mechanical typewriter. To support me, buy my books (if possible from your local bookshop)!
Receive my writings in French and English directly by email. Your address will never be shared. You can also use my French-language RSS feed or the complete RSS feed.
06.01.2026 at 01:00
Ploum
I have not had a Google account for several years. I realized that I avoid clicking on YouTube links as much as possible because, for every video, I have to go through the loading of a page that overloads my computer (a recent one, at that), I have to try to start the video, then wait several seconds for an enormous popup to interrupt it. Then I have to make sure I find the original audio and not an automatically generated French dub. Once all that is done, I still have to sit through ads that sometimes last several minutes.
All that to watch a video that might potentially contain a piece of information of interest to me. And even that is far from certain.
So, yes, there are ways to work around this enshittification, but it is constant work and it does not always succeed. So, basically, I only click on YouTube links when I really have to. For example, I used to watch the music videos of the metal bands Alias recommended. Now I use Bandcamp (I even buy albums there) when he mentions one, or I look elsewhere.
You think your video has to be on YouTube because "everyone is there," but, at least in my case, you have lost part of your audience by being only on YouTube.
The worst part is realizing that the enshittification is fully embraced from the inside. As Josh Griffiths points out, YouTube encourages creators to shoot videos whose scripts are generated by its AI. YouTube adds ads without the creator’s consent.
In the same blog post, he describes how YouTube uses your viewing history to determine your age and block any videos deemed inappropriate. It is so frighteningly stupid that it could be in one of my short stories.
One thing is certain: when I visit YouTube with no account and no history, YouTube spontaneously offers me dozens of videos about the Nazis, about the Second World War, about the rifles used by the Nazis, and so on. I have never watched that kind of thing. Judging from their titles, some of those videos seemed to me to border on conspiracy theory or Holocaust denial. Why recommend them to me? The most frightening hypothesis would be that these are the default recommendations!
Because it is not as if YouTube did not know how to erase the videos it does not like!
If you make videos and want to share them with human beings, for pity’s sake, also post them somewhere other than YouTube! Nobody is asking you to give up your "community," your likes, your 10 cents of advertising revenue coming in every month. But also post your video elsewhere. On PeerTube, for example!
As Bert Hubert puts it very well, the problem of dependence on American monopolies is not so much technical as cultural. And European governments should be the first to set an example.
I think he perfectly illustrates how deep the problem goes because, in his talk describing the EU’s technological dependence on the USA and China, he points to explanatory videos… on YouTube. And Bert Hubert does not even seem to notice the irony, even though he recommends PeerTube a bit further on. He also hosts his personal projects on Github. Github belongs to Microsoft, and its monopoly over open source projects has dramatic consequences.
I have already noted how much Europe develops important technological solutions that nobody seems to notice because, unlike the USA, we develop technologies that give users freedom: the Web, Linux, Mastodon, the Gemini protocol.
To that list, I would like to add VLC, LibreOffice and, of course, PeerTube.
The European solutions that succeed are part of the commons. They are so obvious that many people can no longer see them. Or take them seriously, because they are "not expensive enough."
Europe’s problem is not a lack of solutions. It is simply that politicians want "a European Google." Politicians are unable to see that you do not fight American monopolies by creating, 20 years too late, a European sub-monopoly.
It is a purely cultural problem. It would be enough for a few members of the European Parliament to have the courage to say: I am deleting my X, Facebook, Whatsapp, Google and Microsoft accounts for one month. A single month during which they would accept that, yes, things are different and you have to adapt a little.
It is not as if the problem were not urgent: all our official IT services, all our communications, all our data are in the hands of companies that openly collaborate with the American military. Do you really believe the American military did not exploit all the Google/Microsoft/Whatsapp data of Venezuelan politicians before launching their raid? And Venezuela is one of the rare countries that was officially trying to do without American solutions.
Leaving enshittified services is hard, but not impossible. It can be prepared for, done little by little. If, for some, it is currently strictly impossible for professional reasons, for many of us it is above all that we refuse to give up our habits. Complaining is all well and good. Acting is hard and requires having the time and energy to devote to a period of transition.
Bert Hubert takes the example of email. In substance, he says that email is no longer a common good, that public administrations cannot use a European email provider because Microsoft and Google will arbitrarily reject some of those emails. Yet the solution is obvious: simply consider that the fault lies with Google and Microsoft. Simply say: "We cannot use Microsoft and Google within official European institutions, because we risk not receiving certain emails."
The problem is not email; the problem is that we cast ourselves as victims. We do not want a solution! We want things to change without changing anything!
Many of humanity’s problems do not come from a lack of solutions, but from the fact that, in reality, people love to complain and above all do not want the problem solved. Because the problem has become part of their identity, or because they cannot imagine life without it, or because, in reality, they benefit from its existence (we call the latter "consultants").
There is a fairly simple technique for recognizing this kind of situation: when you propose a solution, you are immediately told all the reasons why that solution cannot work. At that point it is clear that the person in front of you is not looking for a solution. They do not need an engineer, but a psychologist (a role that salespeople cynically take on).
A person who really wants to solve their problem will be interested in any lead toward a solution. If the solution is not a good fit, they will think about how to improve it. They will accept certain compromises. If they reject a solution, it is after investigating it at length, because they genuinely hope to solve their problem.
For governments today, it is technically fairly simple to say: "We want our emails hosted in Europe on European infrastructure, we want to distribute our videos through our own servers and make our official announcements on a website we control." It is even trivial, since thousands of individuals like me do it at a negligible cost. And there are even clear attempts, as in Switzerland.
The only reasons there is not even any serious thinking about this are, as always, malice (yes, Google and Microsoft shower politicians with gifts and are able to move mountains as soon as an alternative to their monopoly is being considered) and incompetence.
Malice and incompetence being not incompatible but rather complementary. And a bit too frequent in politics for my taste.
I’m Ploum and I have just published Bikepunk, an ecological-cyclist fable typed entirely on a mechanical typewriter. To support me, buy my books (if possible from your local bookshop)!
Receive my writings in French and English directly by email. Your address will never be shared. You can also use my French-language RSS feed or the complete RSS feed.
05.01.2026 at 01:00
Ploum
I teach a course called "Open Source Strategies" at École Polytechnique de Louvain, part of the University of Louvain.
As part of my course, students are required to find an open source project of their choice and make a small contribution to it. They send me a report through our university Gitlab. To grade their work, I read the report and explore their public interactions with the project: tickets, comments, pull requests, emails.
This year, during my review of the projects of the semester, Github decided to block my IP for one hour. During that hour, I simply could not access Github.
It should be noted that, even if I want to get rid of it, I still have a Github account and I was logged in.
The block happened again the day after.
This gave me pause.
I wondered how many of my students’ projects were related to projects hosted on Github. I simply went into the repository and counted 238 student reports from the last seven years:
ls -l projects_*/*.md | wc -l
238
Some reports might be missing. Also, I don’t have the reports before 2019 in that repository. But this is a good approximation.
Now, let’s count how many reports don’t contain "github.com".
grep -L github.com projects_*/*.md | wc -l
7
Wow, that’s not a lot. I then wondered what those projects were. It turns out that, out of those 7, 6 students simply forgot to add the repository URL in their report. They used the project webpage or no URL at all. In those 6 cases, the repository happened to be hosted on Github.
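For the curious, one rough way to see where the reported projects are hosted is to pull the host names out of every URL in the reports and count them. This is only a sketch, assuming the reports contain plain http(s) links:
grep -hoE 'https?://[^/ )]+' projects_*/*.md | cut -d/ -f3 | sort | uniq -c | sort -rn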
In my course, I explain at great length the problem of centralisation. I present alternatives: Gitlab, Codeberg, Forgejo, Sourcehut but also Fossil, Mercurial, even Radicle.
I literally explain to my students to look outside of Github. Despite this, out of 238 students tasked with contributing to the open source project of their choice, only one managed to avoid Github.
As was demonstrated to me for one hour, the immediate peril of centralisation is that you can suddenly lose access to everything. For one hour, I was unable to review any of my students’ projects. Not a big deal, but it serves as a warning. While writing this post, I was hit by this block a second time.
A few years ago, one of my friends was locked out of his Google account while travelling for work at the other end of the world. Suddenly, his email stopped working, most of the apps on his phone stopped working, and he lost access to all his data "in the clouds". Fortunately, he still had a working email address (not on Google) and important documents for his trip were on his laptop hard drive. Through personal connections at Google, he managed to recover his account a few weeks later. He never had any explanations.
More recently, Paris Buttfield-Addison experienced the same thing with his Apple account. His whole online life disappeared, and all his hardware was suddenly bricked. Being heavily invested in Apple doesn’t protect you.
I’m sure the situation will be resolved because, once again, we are talking about a well-connected person.
But this happens. All the time. Institutions are blindly trusting monopolies that could lock you out randomly or for political reasons as experienced by the French magistrate Nicolas Guillou.
Worse: as long as we are not locked out, we offer all our secrets to a country that could arbitrarily decide to attack yours and kidnap your president. I wonder how much sensitive Venezuelan information was in fact stored on Google/Microsoft services and accessed by the US military to prepare their recent strike.
Big institutions like my Alma Mater or entire countries have no excuse to still use American monopolies. This is either total incompetence or corruption, probably a bit of both.
As demonstrated by my Github anecdote, individuals have little choice. Even if I don’t want a Github account, I’m mostly forced to have one if I want to contribute or report bugs to projects I care about. I’m forced to interact with Github to grade my students’ projects.
237 out of 238 is not "a lot." It’s essentially everyone. There’s more going on here than "most projects use Github."
According to most of my students, the hardest part of contributing to an open source project is finding one. I tell them to look for the software they use every day, to investigate. But the vast majority ends up finding "something that looks easy."
That’s when I realised that, all this time, my students had been searching for open source projects to contribute to on Github only. It’s not that everything is on Github; it’s that none of my students can imagine looking outside of Github!
The outlier? The one student who contributed to a project not on Github? We discussed his needs and I pointed him to the project he ended up choosing.
Github’s centralisation has made a huge part of the open source world invisible. Because of that, lots of projects tend to stay on Github or, like Python, to migrate to Github.
Each year, students come up with very creative ways not to do what I expect while still passing. Last year, half of the class was suddenly committing reports with broken encoding in the file path. I had never seen that before and I asked how they managed to do it. It turns out that half the class was using VS Code on Windows to do something as simple as "git commit" and they couldn’t use the git command line.
This year, I forced them to use the command line on an open source OS, which solved the previous year’s issue. But a fair number of the reports are clearly ChatGPT-generated, which was less obvious last year. This is sad because it probably took them more effort to write the prompt and, well, those reports are mostly empty of substance. I would have preferred the prompt alone. I’m also sad they thought I would not notice.
But my main mistake was a decade-long one. For all those years, I asked my students to find a project to contribute to. So they blindly did. They didn’t try to think about it. They went to Github and started browsing projects.
For all those years, I involuntarily managed to teach my students that Open Source was a corner of the web, a Microsoft-managed repository of small software one can play with. Nothing serious.
This is all my fault.
I know the solution. Starting this year, students will be forced to contribute to a project they use, care about or, at the very least, truly want to use in the long term. Not one they found randomly on Github.
If they think they don’t use open source software, they should take a better look at their own stack.
And if they truly don’t use any open source software at all and don’t want to use any, why do they want to follow a course about the subject in the first place?
I’m Ploum, a writer and an engineer. I like to explore how technology impacts society. You can subscribe by email or by rss. I value privacy and never share your address.
I write science-fiction novels in French. For Bikepunk, my new post-apocalyptic-cyclist book, my publisher is looking for contacts in other countries to distribute it in languages other than French. If you can help, contact me!
19.12.2025 at 01:00
Ploum
You probably heard about the Wall Street Journal story where they had a snack-vending machine run by a chatbot created by Anthropic.
At first glance, it is funny and it looks like journalists doing their job criticising the AI industry. If you are curious, the video is there (requires JS).
But what appears to be journalism is, in fact, pure advertising. For both WSJ and Anthropic. Look at how the WSJ journalists are presented as "world class", how unsubtle the Anthropic guy is when telling them they are the best, and how the journalists blush at it. If you take the story at face value, you are falling for the trap, which is simple: "AI is not really good but funny, we must improve it."
The first thing that blew my mind was how stupid the whole idea is. Think for one second. One full second. Why would you ever want to add a chatbot to a snack vending machine? The video states it clearly: the vending machine must be stocked by humans. Customers must order and take their snack by themselves. The AI has no value at all.
Automated snack vending machines have been a solved problem for nearly a century. Why would you want to make your vending machine more expensive, more error-prone, more fragile and less efficient for your customers?
What this video is really doing is normalising the fact that "even if it is completely stupid, AI will be everywhere, get used to it!"
The Anthropic guy himself doesn’t seem to believe his own lies, to the point of making me uncomfortable. Toward the end, he even tries to warn us: "Claude AI could run your business but you don’t want to come in one day and see you have been locked out." To which the journalist adds, "Or has ordered 100 PlayStations."
And then he gives up:
"Well, the best you can do is probably prepare for that world."
None of the world class journalists seemed to care. They are probably too badly paid for that. I was astonished to see how proud they were, having spent literally hours chatting with a bot just to get a free coke, even queuing for the privilege of having a free coke. A coke that cost a few minutes of minimum-wage work.
So the whole thing is advertising a world where chatbots will be everywhere and where world-class workers will stand in long queues just to get a free soda.
And the best advice about it is that you should probably prepare for that world.
I’m Ploum, a writer and an engineer. I like to explore how technology impacts society. You can subscribe by email or by rss. I value privacy and never share your address.
I write science-fiction novels in French. For Bikepunk, my new post-apocalyptic-cyclist book, my publisher is looking for contacts in other countries to distribute it in languages other than French. If you can help, contact me!