According to a recent Monmouth University poll, 55 percent of Americans are worried by the threat artificial intelligence (AI) poses to the future of humanity. In an era when technological advancements are accelerating at breakneck speed, it's crucial to ensure that AI development is appropriately monitored. As AI-powered chatbots like ChatGPT become integrated into our daily lives, it is high time we address the potential legal and ethical implications of the technology.
Some have done so. A recent letter signed by Elon Musk, who co-founded OpenAI; Steve Wozniak, the co-founder of Apple; and more than 1,000 other AI experts and funders calls for a six-month pause in training new models. In turn, Time published an article by Eliezer Yudkowsky, the founder of the field of AI alignment, calling for a much more hard-line solution: a permanent global ban and international sanctions on any country pursuing AI research.
However, the problem with these proposals is the coordination they require among numerous stakeholders across a wide range of companies and government figures. Let me share a more modest proposal that's much more in line with our existing methods of reining in potentially threatening developments: legal liability.
…
Comments
Is AI an entity, or a tool?
If I take the viewpoint that ChatGPT (as a primary example of a larger whole) is a tool and not a specific entity, then I don't blame the tool for the job it does; I would blame the controlling entity. Would that entity be the AI developer? One would not sue Craftsman for a screwdriver improperly setting a nail.
So holding AI developers responsible for hate speech (again, as a prime example of the whole) seems the wrong way, or at least not the most correct way, to proceed.
As for your example of loud music bothering neighbors: we don't sue the musician who recorded the album, nor the record maker or the jukebox manufacturer (perhaps I'm out of date, eh?). If volume is the problem, the responsibility sits with whoever controls the knob.
I can see the need for establishing some robust governance, though, before the big suits start coming in. I don't know that slowing down AI development will get that establishment rolling in advance. More likely, the establishment of governance will slow as well and end up in the same trouble, only with slower tech advancement.
The user should be ultimately responsible
I think the user should be ultimately responsible for what the AI does. If, for example, the AI writes something false about a competitor's product in response to a prompt to compare one's own product with the competitor's, the user should have read what the AI generated and removed the false information. I would not sign my name to something written by another person (or a computer) without reading it.
I've seen the excuse "the computer did it" applied to the moderation of social media sites as well, e.g., the software removed somebody's post because it falsely determined that the post violated the platform's policies. This is not an excuse. If one programs a computer to do something, one is responsible for what it does.