I’ve been a Trekkie since I was 9— more specifically, since around when Star Trek: The Next Generation season 5’s “Unification I/II” first aired.
The idea of AI has pervaded the Star Trek universe, from Data in Star Trek: The Next Generation to the Emergency Medical Hologram in Star Trek: Voyager to “Control/Zora” in Star Trek: Discovery to the Borg Collective, which has loomed since TNG. In other words, the ideas and fictions of AI have long and consistently been integral to my media consumption. Star Trek’s AI characters were vehicles for comparing and contrasting technology with humanity. If anything, and as with most of Star Trek‘s utopian technologies, those comparisons and contrasts resonated with the optimistic idealist in me, despite how “doomerist” or Luddite the rest of this post may sound.
The AI hype cycle, especially the GenAI hype, seems to pervade my LinkedIn feed these days. Not least, practitioners are eager to add it to their repertoire of technological tools (though, clearly, honor pledges and disclosures on conference and journal submission platforms indicate academic use as well). Now that reality is meeting fiction, I’m increasingly concerned about the balance between the ethics of AI and its hype-driven implementation in practice (again, especially GenAI).
The larger share of market discourse centers on the “Responsible/Ethical Use of AI” and far less on how the creation of AI relates to its primary and secondary outputs. I’m hardly the first person to raise these concerns: AI scholars such as Abeba Birhane, Joy Buolamwini, Timnit Gebru, Margaret Mitchell, and Emily Bender have prominently issued clarion calls over ethical issues with AI (e.g., see the infamous “Stochastic Parrots” paper). And frankly, these scholars have far more knowledge and command of the domain than I ever would.
But my institution recently surveyed its community about attitudes and opinions toward the use of AI for teaching and research. For me, this means weighing my own opinions about AI against a perceived market fatalism. While I’m not inherently opposed to AI (and have developed a policy for student AI use out of fatalistic recognition of market reality), I’m shaped by the ethical comparisons and contrasts I’ve drawn from Star Trek, and I’m concerned about several major shortcomings of its current mainstream use:
- The underlying datasets used to train the models behind many popular AI tools were constructed without informed consent. For most platforms branching into AI, user consent may have been vaguely buried in a terms of service agreed to 10… 15… 20 years ago, long before AI existed as a distinct class of tools to be integrated into those platforms. (I disagree that opt-out updates to the TOS are sufficient: switching costs make it difficult for consumers to abandon platforms even when those platforms are using their data to train AI, platforms haven’t demonstrated great track records of maintaining privacy standards, and research has shown that consumers rarely read privacy policies or TOS.) And all of this is to say nothing of the commonly reported non-consensual use of copyrighted works for AI model training, or of the extent to which GenAI tools that create “new” works may be considered plagiarism. (For another eerie case of a training dataset, check out the “Enron Corpus.”)
- The underlying datasets used to train AI often skew toward WEIRD (Western, educated, industrialized, rich, and democratic) data. This skew (also common in academic research) introduces biases into the models’ outputs, since overrepresentation in the sample replicates the biases and stereotypes of the underlying data. For example, there is already reporting on these biases in AI models trained for HR hiring practices. Or biases by insurance companies against providing extended care to the elderly. Or biases in criminal justice decisions and predictive policing. Or legally inaccurate guidance given to small business owners. Without building the critical thinking skills to critique biased model outputs, the secondary effects of these biases will replicate and compound.
- Model training by large AI companies often carries a hidden cost of labor that is not just socio-economically exploitative, and not just environmentally exploitative, but psychologically exploitative as well (for those following along at home, that hits the triple bottom line). A common assumption is that it is solely computer science programmers who are building models and raising capital, yet this obscures the far greater number of people doing the labor of sustaining these models by building datasets and training them.
- Because AI and GenAI are effectively pattern recognition tools (the latter of which outputs “new” content from those patterns), they lack the abductive reasoning skills humans use to synthesize understanding beyond endogenous solutions. AI’s inability to make inferences from incomplete or uncertain information makes its “growth” impossible (a theme Star Trek grapples with constantly). In other words, Artificial Intelligence may exist, but Artificial Wisdom is kneecapped by the lack of this ability; it is bound by and within its own underlying datasets. Humans are not. In that case, it truly isn’t much more than a stochastic parrot, indeed.
Education has pushed students to develop more analytical reasoning skills (likely thanks to the STEM focus shaping K-12 education), but not enough of the critical reasoning skills needed to assess and apply AI output. Critical thinking is taught starting in grade school, yet that skill development needs to continue far beyond high school. Instead, many folks in professional schools downplay the value a liberal arts education has in developing these skills. Not least, this is also a byproduct of both the politics and the finances of higher education.
This is especially the case in professional schools, where skills are often taught with singular focus at the expense of broad-based education. (The computer scientist Ian Bogost has an excellent piece on the issues that arise when CompSci majors are overly, narrowly focused in their studies.) For example, one learning objective of our BSBA program is analytical skills. And while students are required to take some “GenEd” courses, those courses are often painted as mere degree requirements; the skills they teach are set aside for the sake of ensuring the analytical skills objective is met. And yet…
Thus, although we would like to think that critical thinking (as well as common sense) would prevent it, we nonetheless see nonsense AI output in student work, in advertisements, and now in peer-reviewed journal publications (see here, here, and here; an ironic forsaking of our own academic integrity, disclosure or no). The ability to engineer good prompts may yield sufficiently good outputs, but it does not necessarily yield good application.
I have no doubt there could be massive benefits from AI for automation (e.g., the synthetic development of new materials or proteins, though these are not without their own ethical considerations). And I am not a Luddite looking to throw metaphoric wrenches into the gears of automation. I’m writing this on a computer that replaced a word processor that replaced a typewriter that replaced the printing press, transmitting it on a dynamic social media page that supplanted a static web site that supplanted the printing press, while streaming a hockey game from Montreal on my laptop in Colombia via a service that replaced linear TV, which replaced radio. Humans have been quite adept at eventually changing the nature of labor in the face of technological innovation.
But the types of mainstream tools (especially GenAI) used in practice, teaching, and research need significant, sustained critique. If fatalism points to the inevitability of this technology, then my fear is of uncritical instruction and application, in practice and in academia, outpacing even the market hype cycle of this moment. Star Trek taught me to appreciate the challenges of this technology as much as its promise: not as a doomerist, but as an idealist.
P.S., Despite LinkedIn’s prompting, I did not try writing this article with AI.