4 Reasons Generative AI Won’t Replace Humans Anytime Soon


Opinions expressed by Entrepreneur contributors are their own.

Since generative AI (or “GenAI”) burst onto the scene earlier this year, the future of human productivity has gotten murkier. Every day brings with it growing expectations that tools like ChatGPT, Midjourney, Bard and others will soon replace human output.

As with most disruptive technologies, our reactions to it have spanned the extremes of hope and fear. On the hope side, GenAI has been touted as a "revolutionary creative tool" that venture maven Marc Andreessen thinks will one day "save the world." Others have warned it'll bring "the end" of originality, democracy or even civilization itself.

But it’s not just about what GenAI can do. In reality, it operates in a larger context of laws, financial factors and cultural realities.

And already, this bigger picture presents at least four good reasons that AI won't replace humans anytime soon.

Related: The Top Fears and Dangers of Generative AI — and What to Do About Them

1. GenAI output may not be proprietary

The US Copyright Office recently determined that works produced entirely by GenAI are not protected by copyright.

When the work product is a hybrid, only the parts added by the human are protected.

Entering multiple prompts isn't enough: A work produced by Midjourney was refused registration even though a person entered 624 prompts to create it, a position the DC federal district court later affirmed.

There are similar difficulties in patenting inventions created by AI.

Markets are legally bounded games. They depend on investment risk, controlled distribution and the allocation of marketing budgets. Without enforceable rights, those incentives collapse: few businesses will invest in work that competitors can copy freely.

And while some countries may recognize limited rights in GenAI’s output, human contributions are still required to guarantee strong rights globally.

2. GenAI’s reliability remains spotty

In a world already saturated with information, reliability is more important than ever. And GenAI’s reliability has, to date, been very inconsistent.

For example, an appellate lawyer recently made the news for using ChatGPT to research a brief. It turned out the cases it cited were invented, which led to sanctions against the lawyer. This bizarre flaw has already had legal ramifications: A federal judge in Texas now requires lawyers to certify that they didn't use unchecked AI in their filings, and other courts require that any use of AI be disclosed.

Reliability issues have also appeared in the STEM fields. Researchers at Stanford and Berkeley found that GPT-4's ability to generate code had inexplicably gotten worse over time. The same research found that its ability to identify prime numbers fell from 97.5% in March to a shockingly low 2.4% just three months later.

Whether these are temporary kinks or recurring fluctuations, should human beings facing real stakes trust AI blindly, without human experts vetting its results? Currently, it would be imprudent, if not reckless, to do so. Moreover, regulators and insurers are starting to require human vetting of AI outputs, regardless of what individuals may be willing to tolerate.

In this day and age, the mere ability to generate information that “appears” legitimate isn’t that valuable. The value of information is increasingly about its reliability. And human vetting is still necessary to ensure this.

3. LLMs are data-myopic

There may be an even deeper factor limiting the quality of the insights that large language models, or LLMs, can generate: They aren't trained on some of the richest, highest-quality databases we produce as a species.

These include databases created by public corporations, private businesses, governments, hospitals and professional firms, as well as stores of personal information, none of which LLMs are permitted to use.

And while we focus on the digital world, we can forget that massive amounts of information are never transcribed or digitized at all, such as the communications we only ever have orally.

These missing pieces in the information puzzle inevitably lead to knowledge gaps that cannot be easily filled.

And if the recent copyright lawsuits filed by actress Sarah Silverman and others succeed, LLMs may soon lose access to copyrighted content as training data. Their pool of available information may actually shrink before it expands.

Of course, the databases LLMs can use will keep growing, and AI reasoning will get much better. But the off-limits databases will grow in parallel, making this "information myopia" a permanent feature rather than a bug.

Related: Here’s What AI Will Never Be Able to Do

4. AI doesn’t decide what’s valuable

GenAI’s ultimate limitation may also be its most obvious: It simply will never be human.

While we focus on the supply side — what generative AI can and can’t do — who actually decides on the ultimate value of the outputs?

It isn’t a computer program that objectively assesses the complexity of a work, but…



