To rephrase Leon Trotsky: You may not be interested in artificial intelligence, but artificial intelligence is interested in you.

Suddenly, long rumored and long awaited, AI is upon the world, a world that isn’t ready for the massive and permanent disruption it threatens.

AI could be the greatest disruptor in history, surpassing the arrival of the printing press, the steam engine, and electricity. Those all led to good things.

For now, the long-term effects of AI are purely speculative, but they could be terrifying: tens of millions thrown out of work, and truth itself mocked as pictures and printed words become unreliable.

There is no common view on the impact of AI on employment. When I ask, the scientists working on it point to the false fears that once greeted automation: in reality, jobs swelled as new products needed new workers.

My feeling is that the jobs argument has yet to be proven for AI. Automation added to employment by making old work more efficient and creating products never before enjoyed, opening up new worlds of work in the process.

AI, it seems to me, is all set to subtract from employment, but there is no guarantee it will create great new avenues of work.

An odd development, spurred by AI, might be a revival of unionism: more people might want to join a union in the hope that it will offer job security.

The endangered people are those who do less-skilled tasks, like warehouse laborers or fast-food servers. Already Wendy’s, the fast-food chain, is working to replace order-takers in the drive-through lanes with AI-operated systems that mimic human beings.

Also threatened are those who may find AI can do much, if not all, of their work as well as they do. They include lawyers, journalists, and musicians.

Here the AI impact could, in theory, augment or replace our culture with new creations: symphonies superior to those composed by Beethoven, or country songs better than those by Kris Kristofferson.

I asked the AI-powered Bing search engine a question about Adam Smith, the 18th-century Scottish economist. Back came three perfect paragraphs that I couldn’t improve upon. I was tempted to cut and paste them into the article I was writing. It is disturbing to find out you are superfluous.

Even AI’s creators and those who understand the technology are alarmed. In my reporting, they range from John E. Savage, An Wang Professor Emeritus of Computer Science at Brown University, to Stuart J. Russell, professor of computer science at the University of California, Berkeley, and one of the preeminent researchers and authors on AI. Both told me that scientists don’t actually know how AI works once it is up and running. There is general agreement that it should be regulated.

Russell, whose most recent book is “Human Compatible: Artificial Intelligence and the Problem of Control,” was one of a group of prominent leaders who signed an open letter on March 29 urging a six-month pause in AI development until more is understood—leading, perhaps, to regulation.

And there’s the rub: How do you regulate AI? Even after deciding how to regulate it, how would the rules be policed? By its nature, AI is amorphous and ubiquitous. Who would punish the violators, and how?

The public became truly aware of AI as recently as March 14 with the launch of GPT-4, the successor to GPT-3.5, the technology behind the chatbot ChatGPT. Millions of people went online to test it, including me.

The chatbot answered most of the questions I asked it more or less accurately, but often with some glaring error. It did turn up a friend from my teenage years, but she was from an aristocratic English family, so there was a paper trail for it to unearth.

Berkeley’s Russell told me that he thinks AI will make 2023 a seminal year “like 1066 [the Norman Conquest of England].”

That is another way of saying we are balanced on the knife-edge of history.

Of course, you could end AI, but you would have to get rid of electricity — hardly an option.