Remember blank “Blue Books” for taking exams?  Well, they’re back.

A company called Roaring Spring Paper Products is experiencing a resurgence in “Blue Book” sales to educational institutions seeking to thwart the use of chatbots like ChatGPT for tests. The blank books are handed out at the beginning of the exam, and no screens, apps, or phones are permitted. The company’s windfall reflects the latest chapter in a long-standing debate about using tools as a substitute for acquiring reasoning and analysis skills. 

We think that is a false dichotomy, and that artificial intelligence is merely the newest focus of a long-running debate.

When we were students of science and engineering in the 1960s, we became proficient at using devices called slide rules (the precursor of calculators).

Soon, smart calculators capable of complex calculations emerged. They made simple problems easy, but for more complicated ones, the challenge shifted to framing the problem and knowing when and how to use the device. In other words, correctly defining the problem became the real challenge. That was a task for the brain, and proficiency in using tools was no substitute.

The predilection for tools begins early in the educational process. Why learn mental addition or the times tables when all that information is on our cellphones? And how many teenagers today can perform long division? We see the results daily with cashiers who get flustered when you hand them $21 for a $15.99 item so that your change is a five-dollar bill and a penny rather than four singles and a penny. They have to rely on the software in the newfangled cash register.
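For the curious, the arithmetic the register performs in that anecdote amounts to a simple greedy algorithm over U.S. denominations. A minimal sketch in Python (the function name and denomination table are our own illustrative choices, not any actual point-of-sale software; amounts are kept in cents to avoid floating-point rounding):

```python
# Greedy change-making over U.S. denominations, largest first.
# Working in integer cents sidesteps floating-point rounding errors.
DENOMINATIONS = [
    (2000, "twenty"), (1000, "ten"), (500, "five"),
    (100, "one"), (25, "quarter"), (10, "dime"),
    (5, "nickel"), (1, "penny"),
]

def make_change(paid_cents, price_cents):
    """Return the change owed as a list of (count, denomination) pairs."""
    remaining = paid_cents - price_cents
    change = []
    for value, label in DENOMINATIONS:
        count, remaining = divmod(remaining, value)
        if count:
            change.append((count, label))
    return change

# Hand over $21 for a $15.99 item: a five and a penny come back.
print(make_change(2100, 1599))  # [(1, 'five'), (1, 'penny')]
# Hand over $20 instead: four singles and a penny.
print(make_change(2000, 1599))  # [(4, 'one'), (1, 'penny')]
```

The greedy approach works here because U.S. denominations are "canonical": taking the largest bill or coin first always yields the fewest pieces of change.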

Similarly, the ability to use ChatGPT or another chatbot skillfully is a substitute for research, writing or analysis only when all that matters is the answer. In school, however, it is considered cheating, officially and academically, because it hinders students’ accumulation of knowledge and analytical skills. In the working world, arriving at an adequate solution may be sufficient for some tasks. But how many people will get ahead by being a ChatGPT jock, any more than they would by merely being a whiz at Microsoft Word or Excel?

This evolution is hardly a surprise, given the not-so-recent trends in education. That anybody ever thought it wise to praise a student who could not answer that two and two equals four is the ultimate example of a teacher’s betrayal of students. How can such a student ever come to understand that not everything is relative, and that sound reasoning and a sufficient knowledge base are critical for success? No error of this basic nature will emerge from ChatGPT; but as it is challenged with increasingly complex tasks, will today’s students be able to detect and correct the errors or subtle misdirections that do occur?

That might seem unlikely to us today. After all, most people accept Microsoft Word’s grammar recommendations even when they are incorrect (several such suggestions were offered to us as we wrote this article). Will the death of common sense be next? Will AI understand the physical world well enough to avoid ill-advised recommendations? For example, AI may correctly design an airline seat arm with the controls placed on top, but will it consider whether they are positioned so that people’s elbows will inadvertently change the channels of the entertainment system (as happened to one of us recently on a flight)? In short, will AI be able to judge which design flaws frustrate humans?

We predict that the best innovations will still emerge from well-trained human minds, with or without the assistance of supercomputers and chatbots; our educational system, however, is increasingly less focused on this realization. More and more, diplomas are participation trophies. Honors classes are deemphasized because not everyone can succeed in them. The term borrowed from mathematics for this thinking is “lowest common denominator,” which describes (usually disapprovingly) a rule, proposal, opinion, or piece of media deliberately simplified to appeal to the largest possible number of people.

Is AI starting its takeover of humanity by convincing us that it is better for schools to graduate an army of ChatGPT drones than to accept human ability differentiation?

America’s continuing prosperity and the advancement of its citizens’ well-being will depend on the nurturing and challenging of its best and brightest minds. If our core population of properly educated minds continues to shrink, sooner or later, the critical mass of advancement will go elsewhere. So, let’s get out the Blue Books.

And please, don’t ask ChatGPT whether that’s a good idea!