AI is making literary leaps – now we need the rules to catch up
A row over the release of a new language-generating model highlights how ethics and the law are lagging behind
by John Naughton
Nov 02, 2019
Last February, OpenAI, an artificial intelligence research group based in San Francisco, announced that it had been training an AI language model called GPT-2, and that it now “generates coherent paragraphs of text, achieves state-of-the-art performance on many language-modelling benchmarks, and performs rudimentary reading comprehension, machine translation, question answering, and summarisation – all without task-specific training”.
If true, this would be a big deal. But, said OpenAI, “due to our concerns about malicious applications of the technology, we are not releasing the trained model. As an experiment in responsible disclosure, we are instead releasing