The OpenAI logo on a smartphone. Understanding AI tech and managing tools effectively are important, but they focus on the how, not the why and the whether. Credit: Bloomberg/Gabby Jones

This guest essay reflects the views of Shlomo Engelson Argamon, the associate provost for artificial intelligence and dean of the Graduate School of Technology at Touro University.

In 2023, two New York lawyers submitted a federal court brief full of cases that didn't exist. ChatGPT made them up. The lawyers didn't notice. The problem was not that they didn't know how to use the tool. They did. What they lacked was the mental discipline and good judgment to stay responsible when the AI started talking. They outsourced their thinking to a machine.

This isn't a technical failure; it's a failure of human accountability.

New York likes to lead. In law, finance and government, we set the standard that the rest of the country follows. But right now, we may be about to follow a standard for artificial intelligence literacy that is fundamentally broken.

In February the U.S. Department of Labor released its AI Literacy Framework. The intent is admirable, but the execution misses the point entirely. It treats AI literacy as a technical skill, like handling spreadsheet formulas, rather than what it actually should be: the master competency of good judgment.

This framework doesn't regulate private employers. But if New York's Department of Labor and local boards adopt it as a benchmark, it will shape curricula and certifications across the state, becoming a de facto standard for students, job-seekers and employers. That tools-first approach won't give us a more capable workforce. We'll get one that may be AI aware but will be mentally fragile.

Understanding AI tech and managing tools effectively are important, but they focus on the how, not the why and the whether. In the real world that New Yorkers actually live in, that distinction is everything.

An NYPD detective is AI literate under DOL's framework if they can run facial recognition software to find suspects. But true AI literacy is the skepticism to question an algorithmic "match" before it leads to a wrongful arrest.

DOL would consider a teacher to be AI literate if they know how to use a bot to grade student essays. But consider the teacher in a Long Island classroom, already overwhelmed, reaching for that tool because there simply aren't enough hours in the day. Real AI literacy is knowing when the algorithm might be penalizing a student from a marginalized background because of its bias, and having the confidence to override it.

We have a major problem brewing that I call "AI laziness." Research increasingly shows that the more we trust and rely on these systems, the less we exercise our own judgment. We start assuming the machine is right, and disengage.

New Yorkers need to stop focusing on tools, and start focusing on decision-making in context. We must teach workers to know when AI is the wrong tool for the task, and how to use it properly when it is.

A resilient workforce isn't built by training people to follow machines. It's built by teaching New Yorkers to remain the boss of the machine. Before you hit send on any AI-assisted work, ask yourself three questions: Do I have an independent way to verify this output, or am I taking the AI's word for it? Can I explain the logic of this decision to a boss or a judge without mentioning the software? And have I actively looked for the silent failure — something that looks plausible but may be fundamentally wrong?

In the age of AI, independent human judgment remains the one skill we cannot afford to lose.
