When I started working in this space close to eight years ago, not that many people were talking about the impacts of AI. One of the positive side effects of generative AI being released to everyone is that the impacts of AI are now on everyone’s mind. I was so pleased to get a note from a high school student in their senior year working on a project about the impacts of AI. I’m including their questions and my answers as this month’s column to unpack this overarching topic …
What are the impacts of AI on creativity, schools and industry?
Here’s the list of this student’s specific questions and my answers. I’ve also included some extra resources.
How would you address the problems of cheating or academic dishonesty when using AI tools in schools?
A big part of this issue is the lack of clarity from institutions about what they mean by cheating or academic dishonesty in an age of AI tools. The first step is to get very clear on that at the institutional level, which might mean school districts, or whoever holds overall policy responsibility, providing that guidance. Once that is established, and assuming it’s not a blanket ban on AI tools (which certainly could be a choice), the next step is to provide education and training to all stakeholders so they can uphold the policy when using an AI tool. That means training teachers, students, administrators, and maybe even parents, so everyone knows how to behave accordingly. The last step is to monitor and adjust as needed. It’s not a one-and-done scenario.
What doesn’t work? What we do now: give everyone the tools with an ill-defined policy and no training, and hope for the best.
Do you think AI is a threat or a tool for artists? Why?
Both. I think it threatens the livelihoods of a good many artists. It drives the cost of production down and makes it easier for non-artists to create work that could serve as a substitute, especially for commercial projects (e.g., marketing, ads, etc.). It doesn’t have to produce great art to be a substitute – it can be “good enough.”
It can also be a tool for artists and can be used creatively. There are many examples of this in the world of visual arts, performance art, and music. However, this doesn’t necessarily mean that the artist will see more economic rewards from using the tool. These are separate issues.
Do you think the effects AI has on industries are positive or negative? Why?
Generally speaking, if the goal is to reduce the cost of labour by replacing it with equipment (capital, or in this case AI), and the AI tool replaces that labour in a way that still delivers the desired outputs, the business could drive more profit. So that might be construed as positive for the business.
However, businesses exist in the bigger context of society. To take an extreme example, if a large section of the population loses their jobs, they can’t buy your products, and that could hurt your organization. It also places a greater burden on society to provide a social safety net, perhaps resulting in tax increases or other costs to business to pay for those services.
Overall, we need to weigh the issues of who benefits and who is at risk and find a balance that is healthy for society.
What skills do you think students should learn today to thrive in an AI-driven world?
Geoffrey Hinton, a famous AI researcher, is telling everyone to become a plumber. I don’t think it’s quite that simple, but the bigger concept is learning to do things that AI can’t do right now, which are tasks involving manual dexterity. A lot of trades would fall into that camp.
I would encourage people to focus on people-centric skills – face-to-face communication, empathy, discernment, ethical judgement – the type of humanistic skills not easily replicated by AI. These are also things that humans will protect because we believe they should not be automated. I think more learning in community with others, ideally face to face, helps build those skills.
Also, it’s important to say that the pace of adoption might not be as fast as the media headlines suggest. Don’t learn something only because you think it might be “AI-proof.” For example, if you are passionate about becoming a writer, don’t cross that off your list just because you think it’s too exposed to AI tools.
Do you think it is ethical to use AI-generated content as your own? Why or why not?
I think it’s important to disclose the use of AI in a process. For video, audio, or images, a symbol or some text saying “AI generated” can accomplish that goal. There is also watermarking the content, which is a more technical method.
For text, it’s trickier. I don’t think everyone needs to be told about every instance of a spellchecker (to use an extreme example), but if the whole thing is generated, then it is important to say so. This is where a policy can be helpful. For example, one might apply the 80/20 rule: if less than 20% is generated, perhaps it’s not necessary to disclose it. That said, there had better not be any inaccuracies or errors in the content if you choose NOT to disclose it. See this case in Australia. This is an example of why I think disclosing, overall, is a good idea.
I would also add that there are many ethical questions beyond how, or whether, the content should be attributed. For example, copyrighted content was taken from artists to be used as training data; they were not consulted or compensated for that use. There are also the enormous environmental costs of using AI compared to other methods. Those need to be considered in addition to the question of attribution.
What do you think are the biggest misconceptions businesses have about AI?
So many! I’ll mention three…
It will improve productivity. This isn’t a given, and if the goal isn’t well defined (just some vague notion of improvement), it’s doubly dubious. Many businesses have no idea why they want to use AI; they are mostly driven by a fear of missing out and not wanting to be left behind.
It will be cheap and easy to implement. There are all kinds of costs to do it well: ensuring the risks are mitigated, having the right types of tools, doing all the relevant data prep, and training people how to use it. Businesses should prepare to spend money before they see the ROI. They should also do a long-term cost/benefit analysis and factor in the different pricing models for AI tools to understand the long-term impacts (a simple sketch of that kind of comparison follows these three points).
AI tools are more capable than they really are. There’s a lot of hype around agentic AI and around replacing whole job categories with chatbots. There are many examples of companies that have had to backtrack because they pivoted to AI, fired their whole customer service team, and then realized a chatbot was not a suitable replacement.
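To make that cost/benefit point a bit more concrete, here is a minimal sketch in Python of a multi-year comparison. Every number in it (seats, subscription price, setup cost, hours saved) is a hypothetical placeholder rather than a benchmark; a real analysis would use your own figures and a more careful model.

```python
# Minimal sketch of a multi-year cost/benefit comparison for an AI tool.
# Every number below is a hypothetical placeholder -- substitute your own.

def cumulative_net_benefit(years: int) -> list[float]:
    seats = 50                       # hypothetical number of licensed users
    monthly_price_per_seat = 30.0    # hypothetical subscription price
    one_time_setup = 40_000.0        # training, data prep, risk review (hypothetical)
    hours_saved_per_seat_month = 4   # hypothetical productivity gain per user
    value_per_hour = 45.0            # hypothetical value of an hour saved

    results = []
    net = -one_time_setup            # the setup cost lands before any benefit
    for _ in range(years):
        yearly_cost = seats * monthly_price_per_seat * 12
        yearly_benefit = seats * hours_saved_per_seat_month * 12 * value_per_hour
        net += yearly_benefit - yearly_cost
        results.append(net)
    return results

if __name__ == "__main__":
    for year, net in enumerate(cumulative_net_benefit(3), start=1):
        print(f"Year {year}: cumulative net benefit = ${net:,.0f}")
```

Even a toy model like this shows the point: the setup costs arrive before any benefit does, and a change in the vendor’s pricing model can change the long-term picture entirely.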
I hope to see more high school students choosing to wrestle with this topic. I’m including a list of some of my favourite books – some classics and some newer fare – that I think will be helpful resources.
Resources
Algorithms of Oppression by Safiya Umoja Noble – a classic in the field that helped set me on my current career path
Atlas of AI by Kate Crawford – unpacks the materiality of data and AI systems – a go-to text for my graduate level course
Empire of AI by Karen Hao – through the story of one company – OpenAI – we learn about the bigger story of AI in our current culture – a great primer to understand what motivates the people in this space
The Age of Surveillance Capitalism by Shoshana Zuboff – a book I return to often, centered on the economic and socio-political aspects of data and algorithms
The AI Con by Emily M. Bender and Alex Hanna – I love their myth-busting podcast, and this book felt like a deeper, extended version of the show, highlighting their well-documented (and often humorous) critical takes on ‘AI’
Send Me Your Questions!
I would love to hear about your data dilemmas or AI ethics questions and quandaries. You can send me a note at [email protected] or connect with me on LinkedIn. I will keep all inquiries confidential and remove any potentially sensitive information – so please feel free to keep things high level and anonymous as well.
This column is not legal advice. The information provided is strictly for educational purposes. AI and data regulation is an evolving area and anyone with specific questions should seek advice from a legal professional.