Pass notes, tl;dr, crib sheet, 101… call it what you will. Here’s our whistle-stop guide to everything you need to know about ChatGPT.
ChatGPT is an AI, right? Yes. It’s from OpenAI, the same people behind the DALL-E 2 image generator. They opened it up for public access on Wednesday 30 November, and CEO Sam Altman announced on 5 December that it had already racked up over one million users.
I thought I recognised the name! Sure, the technology behind it has actually been around for a while. Two years ago, British newspaper The Guardian used the company’s GPT-3 large language model (LLM) to generate an opinion piece titled ‘A robot wrote this entire article. Are you scared yet, human?’.
Not yet. Reads a bit wooden. Oh sure, but it’s come on a bit since then. The team behind it made the latest version (referred to in some circles as GPT-3.5) available for public access last week, and the internet has, frankly, lost its collective shit over it.
Is it really that good? Yes and no. With a few simple prompts we got it to write a piece for us that is definitely superior to the Guardian article, and it’s being used by a huge number of people to do all sorts of things, from writing answers to exam papers to cleaning up computer code and then explaining what the problem was in iambic pentameter. On the other hand, programming Q&A site Stack Overflow has already had to ban its use temporarily because it was filling the site with very convincingly written wrong answers.
"The primary problem is that while the answers which ChatGPT produces have a high rate of being incorrect, they typically look like they might be good and the answers are very easy to produce,” wrote the mods. “There are also many people trying out ChatGPT to create answers, without the expertise or willingness to verify that the answer is correct prior to posting. Because such answers are so easy to produce, a large number of people are posting a lot of answers.”
I detect the odd ethical question about all this. Yeah, it’s a can of worms. The research team has put in all manner of safeguards to prevent abuse but, the internet being home to an equally large number of wrongdoers, people have circumvented them all rather quickly. It’s been characterised by some as having the knowledge and writing skills of a smart 13-year-old and, like smart 13-year-olds the world over, it can still be tricked and gamed. Asking it to role-play certain scenarios has already let people get past some of the filters around graphic and otherwise objectionable content. We’ve not had a Tay-level problem as yet (the linked 2016 article’s title, ‘Microsoft shuts down AI chatbot after it turned into a Nazi’, tells you all you need to know about that), but it would be a brave person who bets against something like that happening.
Looks like things could get lively… Oh yes. And it’s not the fact that LLMs such as ChatGPT can create convincing bullshit so easily that’s the problem; it’s that they can do it at such scale. We’ve all stumbled across webpages filled with gibberish designed to gull search engines into pointing towards their content. Imagine how many exabytes of SEO-friendly rubbish AIs can generate in the course of a calendar year. That could pretty much undermine the way search currently operates right across the internet.
Google won’t be happy. You’d imagine not. And it gets worse for them too. Because LLMs scrape the internet to gather the text whose patterns they analyse, they’ve effectively read an awful lot of stuff that they can regurgitate. In his Platformer newsletter, Casey Newton compares the results of a Google search on “What are some styles of shoes that every man should have in his wardrobe?” with ChatGPT’s answer: Google returned an excerpt of a blog post and some men’s fashion sites, while ChatGPT produced a 200-word essay covering at least five essential styles to own, with pretty convincing reasons for all of them.
Sounds great. Again, yes and no. How many shoes do you really need? The problem (well, one of them anyway) is that, as yet, there is no source verification for any of this; just rehashed and reheated information that a) cannot be verified independently and b) cannot be monetised. And if that doesn’t sound like much of an issue, consider the number of businesses that depend on those Google clicks. The economics of a substantial part of the internet suddenly seem under a good deal of threat. Plus, the workings of search engines have been so opaque for so long that it would be good to have some transparency in the process. Call it wishful thinking, but if someone’s selling us a map it would be nice to know at least an outline of how they’re drawing it.
Talking of money… Ah yes, for the moment access to ChatGPT is open to all while, essentially, over one million of us help train it for free. It won’t stay that way for long. Chief Twit Elon Musk, no less, asked Sam Altman how much each chat was costing the company to produce, and the reply was ‘probably single-digits cents per chat’. That’s going to mount up quickly. Access to DALL-E 2 was fairly quickly put behind a freemium paywall, and expect ChatGPT to go the same way soon (especially as it currently seems to be overwhelmed by demand and inaccessible much of the time). And let’s not even get into the carbon-emissions cost of all this.
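For a sense of how quickly that mounts up, here’s a strictly back-of-envelope calculation; the per-chat cost is taken from the top of Altman’s ‘single-digit cents’ range and the chats-per-user figure is a pure guess.

```python
# Back-of-envelope only: assumed figures, not OpenAI's actual numbers.
cost_per_chat = 0.05          # top end of "single-digit cents" per chat
users = 1_000_000             # the million-plus users Altman mentioned
chats_per_user_per_day = 5    # pure guess, for illustration only

daily_cost = cost_per_chat * users * chats_per_user_per_day
print(f"${daily_cost:,.0f} per day")  # -> $250,000 per day on these assumptions
```

Quibble with the assumptions all you like; even gentler versions of the maths land somewhere that free-for-everyone can’t survive for long.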
Anything else? Yes, it’s also being dragged into culture-war territory, with alt-right voices already accusing it of having a liberal bias or muttering about censorship of free speech for AIs. And some people are already convinced it’s sentient. It’s not.
What does ChatGPT say about all this? Ah, you figured someone would ask it already, eh? “Overall, whether or not to allow AI-generated answers on Stack Overflow is a complex decision that would need to be carefully considered by the community,” says ChatGPT itself.
Tags: Technology, AI