Is it ethical to use ChatGPT for school or work?
Are we equipped to handle the fast-paced advancements of AI?
“What are you doing for Christmas this weekend?”
“Not much, just chilling at home and catching up on some Netflix. How about you? Got any plans?”
“Nah, not really. Just gonna take it easy and maybe meet up with some friends, hang out with my fam and eat lots of good food, like always.”
No, this isn’t a chat log between me and a friend. It’s actually a conversation I had with a conversational artificial intelligence (AI) system, ChatGPT.
The prompt given was to answer the question using colloquial terms and Gen-Z slang.
As I engaged in conversation with the chatbot, I was surprised by how easily I was deceived by its responses, which seemed natural in the back-and-forth exchange.
ChatGPT is a variant of the GPT (Generative Pre-trained Transformer) language model developed by AI company OpenAI, designed specifically to generate human-like text in real-time.
The chatbot surpassed one million users less than a week after its release, according to OpenAI CEO Sam Altman.
GPT-3 is able to learn the patterns and structures of human language, and generate highly realistic and coherent text. This allows it to perform a wide range of language tasks, including translation, summarisation, question answering, and content generation – capabilities that people have unfortunately exploited.
Since ChatGPT’s launch on Nov 30, several posts about the strange and questionable requests people have made have gone viral on spaces like TikTok, Twitter and LinkedIn.
One of the more absurd questions the chatbot has been asked is to write a biblical verse in the style of the King James Bible explaining how to remove a peanut butter sandwich from a videocassette recorder (VCR).
Twitter user @tqbf shared in their post: “I’m sorry, I simply cannot be cynical about a technology that can accomplish this.”
Another Twitter user, Ammaar Reshi, used the AI system to write a 12-page children’s book over a weekend before proceeding to print copies and sell them on Amazon. As of Dec 14, Ammaar has sold around 70 copies, earning royalties of close to US$200 (S$270).
Titled Alice and Sparkle, it tells the story of a young girl named Alice who discovers the magic of AI. She creates her own AI, named Sparkle, and together they go on adventures and use their combined knowledge to make the world a better place.
Apart from GPT-3, the product design manager also utilised image-generating software MidJourney to create the illustrations.
Upon publicising his latest creation on the social media platform, he was met with a mob of raging Twitter users, renewing a heated debate around the ethical implications of AI-generated art.
They argued that AI-generated content lacks the originality and authenticity of human-generated content, and that it is wrong for him to profit from it in the same way one might from the sale of traditional creative works.
Furthermore, as these illustrations are created based on existing artworks in the digital realm, taking credit for such AI-generated content is essentially taking credit for the work of another artist without their permission or recognition.
Hopping on the thread, user willywonka378 wrote: “Sorry I’m maybe missing something here but why is this an achievement? Much less one that should be celebrated. You didn’t write anything. You didn’t draw anything. You sat in front of a computer and asked it to do it for you. That’s not achieving anything.”
User theartofadriane added: “And which artist’s work did you steal from who should ACTUALLY be paid and credited for the creation of this book?
“Please look into how this harms artists and how you have stolen (an) artist’s lifelong and COPYRIGHTED hard work. Without it, you would not have been able to make this.”
In response to these negative comments, Ammaar defended himself, saying that he “genuinely wanted to see how the (technology) could be combined together.”
“I’m even donating the books. This isn’t about the money for me. I do think it’s sparked an incredibly important discussion though. Just wish it was more civil on both sides,” he said.
The discussion has since spread to other platforms like Reddit, where users hold diverging opinions on the issue.
Under a Reddit thread “Why is AI art considered theft”, user omofesso shared: “AI does not trace or take parts of other art. It does not make a collage of stolen images. Instead, by my limited understanding, it looks at thousands of different images and learns how things are drawn or represented, and it imitates what it sees. It doesn’t take from any image in particular, it just takes inspiration from many sources.”
However, user @Muted_Item_8665 retorted: “It is very frowned upon to copy someone’s art style in the art community and claim it’s yours – that’s why artists that do learn from other artist’s styles call it a ‘study’ – because it’s purely for learning purposes and not for passing that art style off as their own.”
Ammaar’s “creation” is simply one of the many innovative works that have been made with little to no human input or creative agency. However, as it stands, the intersection of AI and intellectual property (IP) is a complex and evolving area of law that has yet to be properly addressed and protected by policies and guidelines.
Intellectual property is a legal concept that covers a wide range of creative and innovative works, such as inventions, artistic creations, and scientific discoveries.
For example, it is not always clear who owns the rights to works that are created or generated by AI systems. Some experts have argued that AI systems should be treated as the “authors” of these works and should be granted the same IP protections as human creators. Others have argued that the human creators of the AI systems should be considered the “authors” of these works and should be granted the rights to them.
Another issue related to AI and IP is the question of whether AI systems can be granted patents for their creations. In some cases, AI systems may be used to invent or discover new technologies or products, and it is not clear whether these innovations should be eligible for patent protection.
As the debate around AI-generated art continues to rage on social media and other platforms, those in the local art community have also shared their sentiments.
With regards to the legalities of AI art, well-known local muralist Yip Yew Chong shared with me that in the event his art gets “stolen”, he may take action. However, he acknowledged that it would be difficult to identify and prove that the machine had referenced his art.
“In practice, it is very difficult. It is already difficult even for physical reproduction when it happens overseas.”
Adding to this, illustrator Joel Rong also told me that as much as he would feel insulted if it happened to him, he doesn’t believe that he can accuse the AI software of theft.
He said: “All art is inspired from other styles and ideas in one way or another, nothing is truly original anymore. But when we transform and create our own styles from these inspirations, we don’t consider that an outright copy of someone else’s style.
“Unless every pen or paint stroke is identical to another art piece, it is hard to claim that a computer generated artwork made with different styles can be considered theft of any particular artist.
“The best option for us is to treat this new system as a tool or an extension to creating art.
We should use it as a foundation to enhance our creativity and productivity, not as a replacement.”
Artists aren’t the only ones grappling with these issues. Experts from a variety of fields are also attempting to understand and address the ethical considerations surrounding AI.
Speaking at a panel discussion during the Times Higher Education Innovation and Impact Summit at KAIST in 2019, Director of AI Technology at AI Singapore, Professor Tze-Yun Leong explained that when building such systems, it is critical to keep a “clear record” of the kind of knowledge being fed into them.
“Those steps are some checks and also some regulatory considerations have to be put in place,” said Prof Leong, who also teaches Practice of Computer Science at the School of Computing, National University of Singapore.
Beyond these legal and moral implications, there’s also the mounting issue of students misusing the software. Rather than employing it to aid them, they may rely on it to complete their assignments, relinquishing their agency and decision-making to the machine.
In Singapore, data science professional Chin Hon Chua shared on LinkedIn that he had asked the chatbot for a primary school composition, in Mandarin, about a visit to the zoo starting with the classic opener: “在一个风和日丽的早上” (on a sunny morning).
He said: “Students might just have to submit their homework in their handwriting in future, if there isn’t a good way to catch such AI generated text in future.”
Photojournalism lecturer Samuel He, however, intentionally employs AI as part of his coursework requirements.
Teaching at Nanyang Technological University’s Wee Kim Wee School of Communication and Information, Mr He explores AI in a module called Advanced Photojournalism (APJ), where his students are tasked to try new ways and learn new tools to tell stories to an increasingly tech- and media-savvy audience.
Mr He explained on his website: “As technology changes, coursework needs to evolve too.
“Over the years, we have worked on weird and wonderful applications of AR filters, news gamification and Text-to-Image AI Engines like Dall-E and Midjourney.”
Mr He believes that exploring how nascent AI-art software can affect content is a gateway to discussions about how that will affect jobs, workflows and ethics in journalism in the near future.
“What is more important here is learning to find ways to tell a story meaningfully using a nascent technology or a particular trend. That ability to adapt quickly and to avoid the easy temptation to complain will serve young journalists well in the next ever-changing 15 years,” he elaborated on the website.
This raises the question – are we equipped to handle the fast-paced advancements of AI?
GPT-3 notes that this is largely dependent on how society chooses to use these systems.
The chatbot explained: “GPT-3 and other advanced AI systems have the potential to revolutionise various industries and fields, and can bring significant benefits in terms of efficiency and productivity.
“However, it is important for society to carefully consider the potential implications of using such systems and to ensure that their use is responsible and ethical.”
As a complex and multifaceted technology with a wide range of potential impacts and implications, AI is clearly poised to bring about significant changes in many different areas.
Some potential concerns with using AI systems like GPT-3 include the possibility of biased or flawed results if the data used to train the model is biased, the potential for misuse or abuse of the technology, and the risk of job displacement as AI systems become more advanced.
On an individual level, GPT-3 suggests staying informed about the latest developments in AI, participating in public debates and discussions about AI policy, and advocating for responsible and ethical approaches to the development and use of AI. It also highlighted the importance of upskilling to stay competitive in the face of potentially disruptive technological change.
Fortunately – or not – it’s clear that AI is here to stay.
Considering that GPT-3 can fulfil requests like writing rap lyrics to the national anthem and chatting with us like it’s our best friend, I wouldn’t be surprised if one day such AI systems decide to enslave us all. Until then, we’ll just have to sit back and marvel at the wonders of technology.