AI ethics is happening in the real world, not just in armchair debate

Mitch Malone
6 min read · Nov 27, 2023

Call me an old-timer if you must, but I’ve been reluctant to get too heavily invested in the AI hype over the last few years. It’s not that I’m not excited by AI; I’ve just struggled to understand its actual value, and I’ve questioned where it’s all heading.

It’s great. I’m excited. But what does it all mean?

I’ve used ChatGPT for idea generation, Notion AI Assistant for getting started on documents, and GitHub Copilot for generating repetitive boilerplate in coding projects. I’ve even dabbled with prompting Midjourney for generative art, with mixed results. I enjoy it, it’s fun, but there is still a way to go before I’d consider it a true solution.

While playing with these tools and following the AI ethics debate closely, I’ve been wrestling internally with where various lines get drawn, and asking myself what really happens to the jobs AI is able to take over.

Does low-quality output safeguard jobs?

You might explore AI tools and, like me, notice that the quality isn’t exactly perfect. That shouldn’t come as a huge surprise, since many of these tools are still under active development, but it always seemed to me that this would prevent them from taking jobs from real workers.

Generally speaking, when I use an AI tool for a task, the workflow looks something like this:

  1. Generate ideas / concept
  2. After several prompts, reach ~70% quality
  3. Take over, improve quality, and end up at a final result

This process of needing to intervene in the later steps felt obvious to me, and it also felt like the barrier preventing most of these tools from taking work away from skilled workers. “How could these tools possibly replace people,” I’d ask myself while I rewrote paragraph upon paragraph of generated text.

But what happens when quality is less important? Content which I might feel is only at 70% quality may be at 90% quality for someone in a different industry or someone with a different level of digital literacy.

A true anecdote about job loss

I recently had a conversation with a friend of mine who shared a story about her sister-in-law, “Mariah” which became the inspiration for this very article on AI ethics in the workplace.

Mariah is a graphic designer and copywriter. She has up-to-date skills, studied design at university, and she is good at her job. Mariah is also a parent, and while she loves being a designer, she prefers working with small to mid-sized clients that allow her the freedom to care for her children.

Mariah’s clients tend to be local businesses in her area. Dentists, law firms, schools, etc. — businesses that tend to have some budget for design and copywriting work, but whose requirements are fairly straightforward and easy to predict for a freelancer who is also a stay-at-home parent.

Over the last few years, Mariah’s work slowly tapered off. Long-standing clients used her expertise less and less, churn was manageable but frustrating, and she found it steadily harder to keep her work pipeline topped up.

At first, Mariah was sure it was seasonal or situational. Clients who closed contracts gave reasonable explanations, and she had little reason to question them. Later she became convinced she’d been dropping the ball and needed to improve her work, certain that chasing a few more certifications would be a quick fix.

After some time Mariah started to see her old clients publishing work that was clearly AI generated. It only took a few phone calls to confirm that her services had been replaced with various LLMs and generative models.

First, those who can least afford it

The story for Mariah isn’t all bad; she’s fortunate enough to have a partner who also has a job, she has since found other clients who are less inclined to use AI tools, and she also has the means to go without work for short periods if necessary. She’s actually quite well positioned to weather this storm for now.

But the story above raises interesting questions for me — if AI is best at replacing lower skilled, less talented, or in Mariah’s case less ambitious people, does this simply mean that the people likely to lose their jobs are those who can least afford it?

It seems obvious that less skilled or less in-demand workers are most likely to be disproportionately affected by technology shifts, and I do get that, but the real challenge we face in the future is the scale at which these jobs could be lost.

Next could be the rest of us

As an aside, during my research for this article I ran into a terrific YouTube video titled Large Language Models and The End of Programming. The speaker, Matt Welsh, is the co-founder and chief architect of a Seattle-based startup developing a new AI-powered computational platform.

Matt proposes that it may actually be software engineers who are losing their jobs to LLMs.

I highly recommend watching this video to see how far this can go.

Like a medicine that becomes a poison, the problem is the dose

Throughout history, technologies and inventions have disrupted labor markets and changed the nature of work, often leading to job losses in certain sectors while adding jobs in emerging ones.

The Industrial Revolution took jobs away from artisans, automation in the automobile industry significantly reduced the need for skilled craftspeople, and the computer revolution automated many tasks that were previously done by hand.

However, in each of these cases, the rise of these technologies also created opportunities in the industries they spawned: people lost jobs in one field as new jobs were created in emerging ones.

For example, job losses for artisans resulted in an increased need for factory workers, increased demand for vehicles created the need for motor mechanics, and IT technicians still have jobs to this day fixing printers.

The challenge we face with an AI revolution is one of scale. While there will obviously be some opportunities for those creating and maintaining AI technologies and infrastructure, and there may still be a need for someone to oversee this work for a while, eventually there comes a point where these technologies take jobs away from people who want to work.

A hypothetical situation with disastrous implications

To see how bad things might get for a designer and copywriter specializing in small and medium-sized businesses, I’ll pose two hypothetical scenarios.

Hypothetical #1: What if, in the next five years, LLMs could write compelling, accurate, and natural copy for small and medium-sized businesses?

Feasibility: Very likely, probably inevitable.

Hypothetical #2: What if that same model could be linked to a WordPress (or similar) plugin that would allow the operator to produce SEO-friendly and compelling content at will?

Feasibility: Also very likely.
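Part of what makes this second hypothetical so plausible is how little glue code such a plugin would actually need. Here is a minimal Python sketch of the idea; `call_llm` is a placeholder stand-in for any hosted model API, and `generate_post`, the business name, and the keywords are all invented for illustration, not drawn from any real plugin.

```python
# Hypothetical sketch of a "one-click content" CMS plugin pipeline.
# call_llm is a stub standing in for a hosted model API call.

def call_llm(prompt: str) -> str:
    """Placeholder for a hosted model call (e.g. a chat-completions endpoint)."""
    return f"[generated copy for prompt: {prompt[:60]}...]"

def seo_prompt(business: str, topic: str, keywords: list[str]) -> str:
    """Assemble an SEO-oriented copywriting brief from a few form fields."""
    return (
        f"Write a 600-word blog post for {business} about {topic}. "
        f"Work these keywords in naturally: {', '.join(keywords)}. "
        "Use a friendly, local-business tone and end with a call to action."
    )

def generate_post(business: str, topic: str, keywords: list[str]) -> dict:
    """Produce the draft payload a CMS plugin would hand to the publish API."""
    body = call_llm(seo_prompt(business, topic, keywords))
    return {"title": f"{topic} | {business}", "content": body, "status": "draft"}

post = generate_post(
    "Smith Family Dental",          # invented example client
    "teeth whitening",
    ["dentist near me", "teeth whitening cost"],
)
```

Everything here except the model call is ordinary string templating; the hard part is rented from an API provider. That is precisely why the barrier to entry would be an API key rather than design or copywriting skill.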

Given these two outcomes, which at this point almost feel inevitable, how far-reaching is the job loss?

Okay, so if AI is bad, what then?

I am definitely not trying to make a statement against AI in this article, as much as it might seem like I am. I don’t think AI is necessarily out to get us, and I am ultimately optimistic about human progress alongside AI, but I do think the pace and scale are hard to compare with anything from our past, and therefore hard to predict.

Job losses have started and are continuing to affect many people while at the same time some others are embracing AI to start their careers or build innovative startups. It’s really quite complicated.

I think the ultimate challenge of the AI revolution is going to be how humanity responds to seeing incredible change in labor demand over the next few years. Conversations around Universal Basic Income (UBI) are likely to resurface stronger than ever, and I think economies will have to be reimagined.



Mitch Malone

Product and engineering leader (prev. CTO @ Linktree, Head of Eng @ BlueChilli). Nomad, remote worker, writer, photographer.