A.I. could develop serious downsides

Photo by Tara Winstead on Pexels.com

Cooper Olson ’26


ChatGPT was released on November 30th, 2022, marking the dawn of one of the most complex challenges society has faced. The world has never been the same since.

I do not believe A.I. is some monomaniacal enemy with no legitimate benefits. In fact, it can be harnessed to increase efficiency throughout various fields, most notably medicine.

Advances in technology, however, don’t come without their downsides, and A.I. is no exception, especially when it comes to three main concerns: economic replacement, intellectual stagnation, and most importantly, the manipulation of truth.

Just last fall, Sports Illustrated made headlines for laying off writers in favor of A.I.-generated articles, NBC News reported. It was not long ago that technology like this could hardly produce a correct list of presidents; now major publications are trusting it over people, all while it’s acing the SAT, ACT, LSAT, and any other test you can name. Maybe these test scores don’t seem significant now, but given how quickly the technology is evolving, what will the job landscape look like less than a decade from now when we graduate from college? Will accountants, low-level lawyers, and, most importantly, writers be replaced by low-cost, unskilled clerks who only type prompts without any thought of their own?

As people begin relying more and more on this technology to answer increasingly complex questions, what will happen to human intellectual growth? The more we lean on A.I. for such questions, the faster independent thought and critical thinking will diminish from society. Such a lack of critical thinking ultimately breeds a degree of blind trust that allows for the abuse of truth.

Thus, if no independent thought or investigative power remains, could truth itself become much like truth in George Orwell’s 1984: manipulated or nonexistent? The problem is that the technology reflects the personal biases of the people who trained it. The more trust such technology garners, the more power the people controlling it gain. The truth, as far as the average web surfer is concerned, becomes reduced to merely the bias of whoever holds influence over the software.

I’m not trying to say artificial intelligence is inherently bad and should be destroyed. Rather, too many advocates of A.I. don’t look beyond the technology’s convenience; it’s important to foresee the possible repercussions of its evolution so that we may better use it for good. Sadly, if society grows too comfortable ignoring these concerns, it will already be too late to avoid the consequences by the time we finally confront them.