Tag Archives: ChatGPT

The author on a blue background wearing Apple AirPods.

On Machinery

This week, for the penultimate post of the Wednesday Blog, how machinery needs constant maintenance to keep functioning. — Click here to support the Wednesday Blog: https://www.patreon.com/sthosdkane — Sources: [1] Surekha Davies, “Walter Raleigh’s headless monsters and annotation as thinking,” Strange and Wonderous: Notes from a Science Historian, 6 October 2025. [2] “Asking the Computer,” Wednesday Blog 5.26.


This week, for the penultimate post of the Wednesday Blog, how machinery needs constant maintenance to keep functioning.


I am just old enough to remember life before the ubiquity of computers. I had access to our family computer for as long as I can remember, and to my grandparents’ computer at their condo when we stayed with them in the Northwest Suburbs of Chicago. Yet even then my computer use was often limited to idle fascination. I did most of my schoolwork by hand through eighth grade, only switching from writing to typing most of my work when I started high school and was issued a MacBook by my school. I do think that a certain degree of whimsy and humanity has faded from daily life as we’ve so fully adopted our ever newly invented technologies. Those machines can do things that in my early childhood would’ve seemed wondrous. Recently, I thought that if I did not know how powerful and far-reaching my computer is as a vehicle for my research and general curiosity, I would be happy, delighted in fact, if it could perform just one function: say, looking up any street address in the United States through a connection to the US Postal Service’s database. That alone would delight me. Yet that is not the sole purpose of any one application on my computer but merely one of many functions of several such programs I can load onto this device, and I can look up addresses not only in the United States but in any country on this planet.

With the right software downloaded onto this computer I can read any document printed or handwritten in all of human history and leave annotations and highlights without worrying about damaging the original source. Surekha Davies wrote warmly in favor of annotating in her newsletter this week, and I appreciated her take on the matter.[1] In high school, I was a bit of a purist when it came to annotating; I found the summer reading assignments in my freshman and sophomore English classes almost repulsive because I see a book as a work of art crafted by its author, editor, and publisher in a very specific way. To annotate, I argued, was like drawing a curlicue mustache on the Mona Lisa, a crude act at best. Because of this, I process knowledge from books differently. I now often take photos of individual pages and organize them into albums on my computer, which I can then consult if I’m writing about a particular book, in much the same fashion that I use when I’m in the archive or special collections room looking at a historical text.

All of these images can not only be sorted into my computer’s photo library, stored in the cloud and accessible on my computer and phone alike, but also be merged into one common PDF file, the main file type I use for storing primary and secondary sources for my research. With advances in artificial intelligence, I can now use the common top-level search feature on my computer to look within files for specific characters, words, or phrases with varying levels of accuracy. This is something that was barely getting off the ground when I started working on my doctorate six years ago, and today it makes my job a lot easier; just my file folder containing all of the peer-reviewed articles I’ve used in my research since 2019 contains 349 files and is 887.1 MB in size.
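For the curious, the merging step needs only a few lines of code. Below is a minimal sketch in Python using the Pillow imaging library; the folder name, file pattern, and output filename are placeholders for illustration, not a description of my actual setup.

```python
# A minimal sketch: merge a folder of page photos into a single PDF.
# Assumes Pillow is installed and the photos are JPEGs in a folder
# called "scans" (both assumptions are illustrative placeholders).
from pathlib import Path
from PIL import Image

pages = sorted(Path("scans").glob("*.jpg"))
images = [Image.open(p).convert("RGB") for p in pages]

if images:
    # save_all with append_images writes every image as a page of one PDF
    images[0].save("merged_pages.pdf", save_all=True, append_images=images[1:])
```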

Our computers are merely the latest iterations of machines. The first computer, Charles Babbage’s (1791–1871) counting machine, worked in a fairly similar fashion to our own, albeit built of mechanical levers and gears where ours run on intricate electronics. I, like many others, was introduced to Babbage and his difference engine by seeing the original in the Science Museum in London. This difference engine was a mechanical calculator intended to compute mathematical functions. Blaise Pascal (1623–1662) and Gottfried Wilhelm Leibniz (1646–1716) both developed similar mechanisms in the seventeenth century, and the still older Antikythera mechanism, from the second century BCE Greek world, could perform some of the same functions. Yet across all of these, the basic idea that a computer works in mathematical terms remains the same even today. For all the linguistic foundations of computer code, the functions of any machine boil down to the binary operations of ones and zeros. I wrote last year in this blog about my befuddlement that artificial intelligence has largely been created on verbal linguistic models and was only in 2024 being trained on mathematical ones.[2] Yet even then those mathematical models were understood by the A.I. in English, making their computations fluent in only one specific dialect of the universal language of mathematics and their functionality mostly useless for the vast majority of humanity.
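To make that binary point concrete, here is a toy sketch, again in Python, that prints the codes sitting beneath a single English word; the word and the eight-bit formatting are arbitrary choices for illustration.

```python
# Print each letter of a word alongside the eight-bit binary form of its
# character code: a small illustration of text reducing to ones and zeros.
word = "machine"
for letter in word:
    print(letter, format(ord(letter), "08b"))
```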

Yet I wonder how true that last statement really is. After all, I, a native English speaker with recent roots in Irish, learned grammar, like many generations of my ancestors, by learning to read and write in Latin. English grammar generally made no sense to me in elementary school; it is, after all, very irregular in a lot of ways. It was only after my introduction to a very orderly language, the one for which our Roman alphabet was first adapted, that I began to understand how English works. The ways in which we understand language in a Western European and American context rely on the classical roots of our pedagogy, influenced in their own time by medieval scholasticism, Renaissance humanism, and Enlightenment notions of the interconnectedness of the individual and society alike. I do not know how many students today in countries around the globe are learning their mathematics through English in order to compete in one of the largest linguistic job markets of our time. All of this may well be rendered moot by the latest technological leap Apple announced several weeks ago: their new AirPods will include a live translation feature, acting as a sort of Babel Fish or universal translator, depending on which science fiction reference you prefer.

Yet those AirPods will break down eventually. They are physical objects, and nothing which exists in physical space is eternal. Shakespeare wrote it well in The Tempest:

“The solemn temples, the great globe itself,
Yea, all which it inherit, shall dissolve,
And, like this insubstantial pageant faded,
Leave not a rack behind. We are such stuff
As dreams are made on, and our little life
Is rounded with a sleep.” (4.1.170–175)

For our machines to last, they must be maintained, cleaned, given breaks, just like the workers who operate them, lest they lose all stamina and face exhaustion most grave. Nothing lasts forever, and the more those things are allowed to rest and recuperate, the more they are able to work to their fullest. So much of our literature from the last few centuries has been about fearing the machines and the threat they pose. If we are made in the Image of God, then machines, our creation, are made in the image of us. They are the products of human invention and reflect ourselves back to us, yet without the emotion that makes us human. Can a machine ever feel emotion? Could HAL 9000 feel fear or sorrow? Could Data feel joy or curiosity? And what of the living beings who, in our science fiction, retrofitted their bodies with machinery, in some cases to the extent that they became more machine than human? What emotion could they then feel? One of the most tragic reveals for me in Doctor Who was that the Daleks (the Doctor’s main adversaries) are living beings who felt so afraid and threatened that they decided to encase the most vital parts of their physical bodies in wheelchair tanks, shaped like pepper shakers no less, rendering them resilient adversaries for anyone who crossed them. Yet what remained of the being inside? I urge caution with suggestions of the metaverse or other technological advances that draw us further from our lived experiences and more into the computer. These allow us to communicate, yet real human emotion is difficult to express beyond living, breathing, face-to-face interactions.

After a while these machines which hold our attention distract us from our lives and render us blind to the world around us. I liked to bring this up when I taught Plato’s allegory of the cave to college freshmen in my Western Civilization class. I would conclude the lesson by remarking that in the twenty-first century we don’t need a cave to isolate ourselves from the real world; all we need is a smartphone and a set of headphones, and nothing else will exist. I tried to make this humorous, in an admittedly dark fashion, by reminding them to at least keep their headphones on a setting that lets in outside sound so they can hear their surroundings, and to look up from their phone screens when crossing streets lest they find themselves flattened like the proverbial cartoon coyote on the front of a city bus.

If we focus too much on our machines, we lose ourselves in the mechanism; we forget to care for ourselves and attend to our needs. The human body is the blueprint for all human inventions, whether physical ones like the machine or abstract ones like society itself. As I think further about the problems our society faces, I conclude that at the core there is a deep neglect of the human at the heart of everything. I see this in the way that disasters are reported on in the press: often the financial toll is covered before the human cost, clearly demonstrating that the value of the dollar outweighs the value of the human. In surrendering ourselves to our own abstractions and social ideals we lose the potential to change our course, repair the machinery, or update the software to a better version with new security patches and fixes for glitches old and new. In spite of our immense societal wealth, ever-advancing scientific frontier, and technological achievement, we still haven’t gotten around to solving hunger, illiteracy, or poverty. In spite of our best intentions, our worst instincts keep drawing us into wars that only a few of us want.

The Mazda Rua, my car, is getting older, and I expect that if I keep driving it for a few years or more it’ll eventually need more and more replacement parts until it becomes a Ship of Theseus. Yet is not the idea of a machine the same even if its parts are replaced? That idea is the closest I can come to imagining a machine having a soul in the way natural things like us have one. The Mazda Rua remained the Mazda Rua even after its brakes were replaced in January and its slow-leaking tire was patched in May. As it moves into its second decade, that old friend of mine continues to work in spite of the long drives and all the adventures I’ve put it through. Our machinery is in desperate need of repair, yet a few see greater profit in dysfunction than they figure they would get if they actually put in the effort, money, and time to fix things. If problems are left unattended for long periods of time, they will eventually lead to mechanical failure. The same is true for the machinery of the body and of the state. Sometimes a good repair is called for: reform to the mechanisms of power which will make the machine work better for its constituent parts. In this moment that need for reform is being met with the advice of a bad mechanic looking more at his bottom line than at the needs of the mechanism he’s agreed to repair. Only at this level, the consequences of mechanical failure are dire.


[1] Surekha Davies, “Walter Raleigh’s headless monsters and annotation as thinking,” Strange and Wonderous: Notes from a Science Historian, 6 October 2025.

[2] “Asking the Computer,” Wednesday Blog 5.26.


Asking the Computer

This week, I address news that the latest version of ChatGPT will help with your math problems. — Links: Cade Metz, "OpenAI Unveils New ChatGPT That Can Reason Through Math and Science," New York Times, 12 Sep. 2024; Eddie Burback, "AI is here. What now?," YouTube, 1 Sep. 2024.


This week, I address news that the latest version of ChatGPT will help with your math problems.


I’ve used ChatGPT on occasion, mostly to test the system and see what it will do if I prompt it about very particular things. What does it know about André Thevet (1516–1590), or about the championship run of my beloved Chicago Cubs from the 80s, the 1880s that is? I even asked it questions in Irish once and was startled to see it reply with perfect Irish grammar, better than Google Translate does. I’ve occasionally pulled up my ChatGPT app to ask about the proper cooking temperatures of beef, pork, or chicken rather than typing those questions into Google, and in one instance I used it to help me confirm a theory I had, based on the secondary literature in its training data, for a project I was writing. The one thing that I would’ve expected ChatGPT to be best at from the start is logical questions, especially in mathematics.

There are clear rules for math, except that in America it’s singular in its informal name (math) while in Britain it retains its inherent plurality (maths). As much as I acted out a learned frustration and incomprehension when posed with mathematical questions in elementary, middle, and high school, I appreciate its regularity, the way in which it operates on a universal and predictable level. Many of the greatest minds throughout human history have seen math as a universal language, one which they could use to explain the world in which we live and the heavens we see over our heads. The History of Science is as much a history of knowledge as it is the history of the development of the Scientific Method, a tool which has its own mathematical regularity. All our scales and theorems and representations of real and unreal numbers reflect our own interpretation of the Cosmos, and so it is logical that an advanced civilization like our own (if I may be so bold) would have developed its own language for these same concepts which are inherent in our universe. Carl Sagan took this idea further in his novel, and the later film, Contact, in which the alien signal coming from Vega is mathematical in nature.

Often, the lower numbers are some of the easiest words in a language for learners to pick up. The numbers retain their similarities in the Indo-European languages to the extent that they were used as early evidence that the Irish trí, the English three, and the Latin trēs are related to the Sanskrit trī (त्रि) and the Farsi se (سه). The higher the numbers go, the more complicated they get, of course. An older pattern in Irish which I still use is to count higher numbers as four and fifty, or ceathair is caoga, which is similar to the pattern used in modern German and to something that sounds far more King James Bible than modern English. I love the complexity of the French base-twenty counting system, where the year of my birth, 1992, is mille neuf cent quatre-vingt-douze, or one thousand nine hundred four-twenties and twelve. Will the Belgian and Swiss nonante, which refers to the same number as quatre-vingt-dix, ultimately win out in the Francophonie? Peut-être.

I was surprised to read in the New York Times last Friday that the latest version of ChatGPT, called OpenAI o1, was built specifically to address prior shortcomings that kept the program from solving mathematical problems. Surely this would be the first sort of language that one would teach a computer. As it turns out, no. Even now, OpenAI o1’s mathematical capabilities are limited to questions posed to it in English. So, as long as you have learned the English dialect of the language of mathematics, you can use this computer program to help you solve questions in the most universal of languages.

It reminds me of the bafflement I felt upon first seeing TurnItIn’s grammar correction feature, the purple boxes on TurnItIn’s web interface. For the uninitiated, TurnItIn is the essay grading and plagiarism detection system that most of the academic institutions I’ve studied and taught at in the last 15 years use as a submission portal. I was proud to program into my Binghamton TurnItIn account several hotkeys that spared me from retyping the same comment on 50 student essays every time they had a deadline. Thousands of essays later, I can safely say these hotkeys saved my bacon time and time again. Like legal documents, especially the medieval and early modern kind that I’ve read and written about in my studies, student essays are formulaic and predictable in their character.

The same goes for math: even with the basic understanding that I have (I only made it as far as Algebra II), the logic, when explained well, is inherent in the subject. Earlier in my doctoral studies, beginning in 2020, my two-sided approach to developing my own character and intellect beyond my studies came in the form of, first, signing up for Irish classes again, and, second, picking up where I had left off with my mathematical studies in college and trying my hand at a beginner physics course. I’m sad to say I really haven’t had the time to devote to this mathematical pursuit as much as I would like. Perhaps I will be able to work it in someday; alas, I also have to eat and sleep, and I’ve learned my attention will only last so long. I too, dear reader, am only human.

Yet this is something where OpenAI o1 differs from the average bear, for it is decidedly not human. How would we try to successfully communicate with a non-human entity or being when we have no basis for conversation to start with? The good thing about o1 and other AI programs is that these are non-human minds which we are creating in our own image; ever the aspirants, we are wrestling with the greater Essence from beyond this tangible Cosmos we inhabit. We can form o1 and its kind in the best image of our aspirations, a computerized mind that can recognize both empathy and logic and reflect those back to us in its answers to our questions. In the long run, I see o1’s descendants as the minds of far more powerful computers that will help our descendants explore this solar system and perhaps even beyond.

From the first time I saw it at work, I saw in ChatGPT a descendant of the fictional computers of Starfleet’s vessels, whose purpose in being is to seek out new life and new civilizations and to boldly go where no one has gone before. Perhaps that future where humanity has built our utopia in this place, our planetary home, will be facilitated by AI. Perhaps, if we use it, build it, and train it right.

That said, the YouTuber Eddie Burback made a video several weeks ago about how he has seen AI put to use in his daily life in Los Angeles. In it, from the food delivery robots to his trips in several self-driving Waymo cars (manufactured by Jaguar), to his viewing of several AI films, Burback concluded that AI at this moment in 2024 is a net negative on human creativity and could remove more of the human element from the arts. I have seen far more AI-generated images appear on my Instagram and Pinterest in the last year. I like Eddie’s videos; they may be long, but they are thorough and full of emotion, heart, and wit. They do a great service to their viewers by taking a long look at the world as he perceives it. I see much of the same thing, yet as the good Irish Catholic Cub fan that I am, I hold out hope that what today seems impossible to some may still happen: AI used morally, for the future improvement of our species and for our advancement out of this adolescence in our story. I believe this is possible because I believe in us, that once this Wild West phase of the new Information Age settles down, we will see better uses of our new technologies develop, even as they continue to advance faster, higher, and stronger with each passing day.