It only took artificial intelligence seconds to replace Zan Comerford.
Comerford, the founder of Litework Marketing in Nelson, had been writing news releases throughout the day when her husband insisted on showing her what ChatGPT could do. He demonstrated by asking it to write a release similar to the one she had been working on.
It generated a document that was a reasonable facsimile of her work, and did in less than a minute what she’d just spent hours on.
“I was sunk. I was like, ‘There goes my job, there goes the future of humanity.’”
Since its public launch in November 2022, ChatGPT has marked a generational moment in the technology’s history, one compared to the introduction of the iPhone or the emergence of social media.
AI is omnipresent in today’s world. It’s working behind the scenes every day on phones, Google searches – even Netflix.
But ChatGPT is different: It’s a natural language generator trained on vast amounts of text from the internet, which it uses to produce the most statistically likely response to a user’s questions or commands, known as prompts. And it does so in a human voice that can be conversational and even a little disarming.
Its responses are seemingly limited only by one’s imagination. Ask it to write a short story about aliens who want ice cream and it will start typing in front of your eyes. Ask it for a vegetarian dinner recipe, a weekly exercise plan or the meaning of life, and it’s there in moments.
The emergence of what’s known as Generative AI has come as a surprise to people in Nelson’s tech community.
Shortly after ChatGPT’s release, Brad Pommen of Nelson’s SMRT1 Technologies was driving with some of his employees to meetings in Vancouver. The company, which specializes in touchscreen-based vending machines, had previously tested chatbots and decided they weren’t very good.
But on the trip, designer Greg Coppen began playing with ChatGPT and the group soon realized its potential.
“We’re all just spitballing and entering this text and getting instant results,” says Pommen. “It doesn’t replace anybody at this point, but it’s not far off from needing less resources and doing more with less that really caught my attention.”
It’s also far from a finished product. The latest version, GPT-4, released by the American company OpenAI in mid-March, may be able to pass a bar exam, but it still makes factual errors (ask it to write your online bio and you’ll probably be surprised by the response). Even though it can write in Shakespearean English, its writing probably wouldn’t have impressed the Bard.
That’s provided some peace for Comerford.
She began experimenting with ChatGPT and found it worked best as an idea generator. It couldn’t provide inspired marketing campaigns for her clients in the tourism and cannabis industries, but it could be used to finesse her own thoughts and help her overcome occasional writer’s block.
It was a tool, she realized, and not one that would soon take her job.
“If a machine gives you the bones, then you can build from there. I haven’t experienced anything with AI or ChatGPT yet that I would publish without tweaking, so that makes me feel a little bit more relieved.”
Pommen has come to the same conclusion.
SMRT1 has begun using ChatGPT to write grant proposals, summarize points and even build pro and con lists. What will it be able to do in a month or a year? Pommen is intrigued to find out.
“I’m always looking for the positive side of things. I am never focused on the negative. And I see this as just another opportunity of creativity exemplified.”
The classroom is real. The teacher isn’t
Keeping students engaged can be a chore for Hazel Mousley.
Mousley is an online French tutor with students ranging in age from four to 80. The younger they are, the harder it can be for Mousley to connect with them.
But they seemingly respond to AI.
One of Mousley’s students is a 10-year-old girl who loves figure skating and is, unsurprisingly, not as invested in her French homework. Mousley’s solution was to ask ChatGPT to write a short play in French about a girl and her stuffed elephant at a skating competition. It was a hit with the student.
“The play is very short, and I’m just astounded at how simple the vocabulary is and how hilarious it is.”
Every tutor Mousley knows is using ChatGPT. Not only can it interpret prompts riddled with poor grammar and spelling, it also responds with empathy. Her students treat it like a friend, and Mousley sometimes feels like she is only a witness to the lesson.
The loss of direct influence can be worth it. Mousley says the right prompts produce exercises geared to any language level.
“If I have a student learning a very specific grammar piece, it might be hard to find exercises. It would take me a long time to create an exercise on that. I can just say [to ChatGPT], ‘Write whatever using this grammar concept as much as possible.’ And then, holy crow, it creates some very compelling pieces doing that.”
But if ChatGPT can write questions, it can also give students passable answers, a capability that has prompted plagiarism concerns among educators.
To illustrate this, Dr. Theresa Southam, co-ordinator of Selkirk College’s Teaching and Learning Centre, suggests a prompt: ask ChatGPT to write 1,000 words on the British North America Act of 1867, which led to the creation of Canada. The AI responds with a serviceable essay in seconds.
This, Southam says, should challenge instructors to ask students more nuanced questions. When one Selkirk teacher found ChatGPT could nearly pass their online course, she says, it encouraged them to review their material.
“As soon as you get into creative and critical thinking, that’s where ChatGPT has trouble and that’s where we want to take our work,” says Southam. “We want to have creative and critical thinkers.”
Southam has spotted other errors. The chatbot sometimes pulls information from sources like blogs that don’t hold up to critical analysis. Its answers are pancultural and struggle with regional context. ChatGPT also doesn’t have access to oral histories, and omits cultures with poor access to the internet.
ChatGPT may have answers for everything, but it can’t tell you much more about your community than Wikipedia can.
“I’m realizing that it’s only one part of human collective intelligence that’s being represented in the results that are getting spit out.”
The master and apprentice
Abby Wilson points to four images of Kootenay Lake on her screen and begins picking them apart. One has missing reflections. Another has incomplete sunbeams. None of them catch the eye.
Each is a variation on a poor picture she took on her phone of a ferry crossing the lake, then uploaded to the AI image creator Midjourney. Wilson, a Nelson-based landscape painter, can see errors in each image. But she can also see how they might be improved.
In her studio, Wilson paints her own image based on elements suggested by Midjourney. The sun breaking through clouds is now more dramatic, and the ferry is more visible. Using AI is giving her a different perspective on her own art.
“I think first draft is a good way to think about it. Just making a visual variation like taking an old painting and asking how could I have made this better? Just different takes on an idea.”
Midjourney, one of several AI image creators available for free or trial use, operates similarly to ChatGPT. It is trained on image data scraped from the internet, then responds to text prompts by creating original art in any style you want.
But it comes with its own controversies. Midjourney uses image data without consent. So if you’ve put any type of visual art online, Midjourney could be using it without your knowledge.
Wilson acknowledges this and points out other shortcomings she’s noticed. It’s not particularly good at drawing real places (a request for images of Nelson returns a city that captures its vibe but wouldn’t fool any residents).
It also has a racial bias — white people make up the majority of its image subjects.
“It’s trained on the images that we have out there, and the images we have out there are based on our biases. So it reflects that back.”
But Wilson is still excited by Midjourney’s possibilities. She’s OK with the ethics of it so long as it only uses reference images she provides, and the only work she sells is her own paintings.
She also doesn’t worry that AI art will replace her. All art is iterative — Wilson’s own style is influenced by the Group of Seven — and her patrons know there is only one Abby Wilson.
“I don’t feel super threatened because … the nature of the art market is that originals do have more value.”
That certain something
AI has prompted an important question: should we be using it in these ways?
Some of the most influential voices in tech say no. Last month, an open letter signed by more than 2,000 people – including Canadian AI pioneer Yoshua Bengio – called for a six-month pause on development of the most powerful AI systems while safety protocols are put in place.
Avi Phillips is of two minds about it. The owner of Transform Your Org, a Nelson-based digital services company, has been using ChatGPT to create content for social media posts and websites, as well as to develop an outline for an ebook. He describes his first experience with ChatGPT as magic.
“The thing that I loved about the internet, initially, was anything I wanted to learn about was there available for me, and with ChatGPT I’m back to that kind of child-like feeling of learning things at my speed.”
Phillips also sees AI with open eyes. He worries about how it might be used to spread misinformation – easily done, since ChatGPT has no built-in fact checker – or for malicious activities like making deepfakes, images or videos manipulated to recreate a person’s likeness.
He also doesn’t think OpenAI should have released ChatGPT to the public before it had finished development. “We’re all part of this guinea pig training for this AI.”
Joe Boland, a Trail-based health coach and owner of Darn Strong Dads, uses ChatGPT to draft curriculums specific to his clients’ needs. Recently he asked it to write out a six-month schedule that included nutrition exercises, homework assignments and biweekly Zoom meetings.
The result impressed Boland, but he’s found the AI fails when tasked with finding solutions to people’s health issues. It can fill a spreadsheet, yet can’t understand people.
It reminds Boland of his time working at a call centre. He remembers answering the phone to frustrated customers who were immediately relieved to be speaking with a person and not navigating an automated system.
“That’s why I don’t necessarily fear that something like ChatGPT or AI could replace what we do, because there is a certain je ne sais quoi about needing to talk to somebody, especially with something as vulnerable as our health or our wellness or whatever else.
“We need that human connection that I don’t think AI can necessarily offer.”
But it’s also not going away. As people race to figure out how AI can help or hinder them, Phillips says they also need to consider the deeper question of what this means for humanity.
“One thing is for sure, we have to have this conversation. We can’t pretend it’s not there. It’s the new paradigm.”
@tyler_harper | tyler.harper@nelsonstar.com