Whatever the capabilities, there's always a little hype, or at least the risk turns out not to be as great as thought:
> Due to our concerns about malicious applications of the technology, we are not releasing the trained model.
That was for GPT-2: https://openai.com/index/better-language-models/
In the same article you linked:
> Due to concerns about large language models being used to generate deceptive, biased, or abusive language at scale, we are only releasing a much smaller version of GPT‑2 along with sampling code.
Seven years later, these concerns seem pretty legit.