The other day, the CTO and other higher-level tech brains at my company held an information session in the office. The subject was RPA, or Robotic Process Automation. The idea is that, with our new-found familiarity with AI, we keep our eyes and ears open with our clients and report back any processes their staff perform that could be done by a software robot we could develop and sell them. There were mentions of how the aim is to increase efficiency, not put people out of work, but in almost the same breath the presenters demo'd how they could replace our own secretarial staff and part of our HR staff with AI.
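For anyone unfamiliar, the "software robot" in an RPA pitch is often little more than a script that re-keys or triages data a person would otherwise handle by hand. A minimal, hypothetical sketch (the invoice task, field names, and threshold are all mine, not from the session):

```python
import csv
import io

# Hypothetical clerical task an RPA bot might take over: instead of a person
# eyeballing invoices, a script flags pending ones over an approval threshold.
RAW = """invoice_id,amount,status
1001,250.00,paid
1002,1200.50,pending
1003,75.25,pending
"""

def flag_pending(csv_text, threshold=1000.0):
    """Return invoice ids that are still pending and exceed the threshold."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [row["invoice_id"] for row in reader
            if row["status"] == "pending" and float(row["amount"]) > threshold]

print(flag_pending(RAW))  # ['1002']
```

Nothing exotic, which is sort of the point: a lot of back-office work is exactly this automatable, and that's what the session was really about.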
It was a small, stepping-stone facet of the polarizing topic of AI & automation, sure, but the implications were clear. I've been increasingly interested in understanding the effects automation (in its various forms) can and will have on our future, including the ethics and decision-making behind creating and deploying AI in the first place. It's a huge topic that I think we're all at least somewhat familiar with, so I'm not going to unpack the whole thing here; instead, I want to get a conversation going around a few points that come to mind, to start:
- Is automation inevitable?
- What do we think the broader, long term effects of automation will be on society and long-held norms of modern life?
- Is there really a historical precedent that can be used as an example to discuss the possible effects of automation?
- What avenues will be available to people who find themselves automated out of a job?
- What must be done to ensure the economic benefits of automation are equitably shared?
What do you think?
------------------------
My thoughts:
- In the short term, I think there are few business owners who could resist the profit margins made possible by automating as much work as possible out of the hands of human beings. We're far from the ideal workers: we sleep, require vacation time & health benefits, get injured, etc. On top of that, competition will drive any business that doesn't adopt the efficiencies of AI to get on board once its rivals do. I don't think halting technological progress is ever the answer, but there have to be conversations, held on a global scale, about the effects of automation that aren't narratives bought & paid for by the global elite, and there's reason to be pessimistic about that happening. There already seems to be evidence of an interesting duality: business & thought leaders on the subject are selling the idea that "compassionate automation" will be the order of the day, while behind the scenes it's a race to get AI in place ASAP, whatever the (not yet fully understood) effects turn out to be.
- I think automation could be an avenue towards some sort of quasi-Star Trek economy, long term. In that world, they got rid of scarcity and adopted a mix of socialism & capitalism: the state provided the basics, and if you wanted more, you were free to chase what profit there was to be made, often in creative or service industries. Who knows what the next century holds, but as AI matures, we're going to have to investigate concepts like universal basic income if every non-managerial job not requiring creative thinking eventually ends up on the chopping block. The more AI matures, in theory, the fewer traditional jobs remain, which would totally upend, well, pretty much all industries, taxation structures, & the way work shapes lots of people's sense of self.
- I see lots of parallels being drawn between the industrial revolution & automation, and I'm not sure they're entirely accurate. It was absolutely the case that a couple hundred years ago, there were industries and endeavors with the growth potential to absorb vast numbers of people; agricultural jobs that were destroyed were replaced by factory jobs building or maintaining machinery, for example. Today, with globalization added to the equation, there are many, many jobs at risk of being replaced by AI on a global scale, and there are theoretically only so many data scientist, app developer, & big data specialist roles that can be created to absorb them. The Industrial Revolution was about mechanizing the processes of human labor to increase profits per person; the automation revolution is about replacing those people entirely. I definitely see the long-term benefits possible from the automation revolution, but I think the process of getting there could be much more painful if not handled properly. In the wake of the Industrial Revolution, after all, we had two major Communist revolutions whose regimes killed tens of millions of people. And I'm not even touching on climate change and the possible resource wars that could occur, or other conflicts from mass migration.
- How sustainable is large-scale job loss due to automation, and on what kind of timeline? Even if we don't call it job loss (say, "career reorganization"), plenty of people are advertising that, at least in the short term, people can simply re-train into careers that are still in demand and not automated. Software developer is the #1 cited career. This of course assumes there will be a 1-to-1 demand for a software developer for each job automated out of existence. We're currently (in America) experiencing a shortage of qualified tech-industry employees, so this may hold true for a certain period of time (assuming everyone who is out of a job is willing and able to re-train). But on a long enough timeline, enough jobs (including in the tech sector) may be automated or otherwise streamlined to the point that large swaths of people just get left out of the equation. This could eventually outpace even the unknown jobs that automation could create (robot servicing? there's already talk of automating automation...).
- There's already evidence that businesses might try to shirk responsibility for the fallout of implementing automation in the name of increasing profits. Companies are in the business of making money; one could argue that even the products or services they provide are incidental to that goal, and you could add employing people to that list. Some important conversations are being had, given that many treat automation as a foregone conclusion, about how to ensure the broad changes to our socioeconomic systems don't just benefit those at the top. I'm not sure what this would look like. Transitioning businesses large and small into co-ops that pay out to current & former employees, pension-style, from the profits grown through increasing automation? Henry Ford figured out that his workers needed to be able to afford the very product they were making for his business to be sustainable; large businesses and industries need to maintain the buying power of the populace as a whole to survive.
So, in the end, I could see automation being a huge net benefit for my quality of life, one that'd require a huge personal reorganization of what it means to be alive and what kind of purpose to derive from work. I kinda think the precedent is there, however, that large corporations will absolutely adopt AI at a pace that outstrips our (and our legislators') understanding of its effects; think about what we're dealing with now with social media & truth in journalism. With governments that are beholden to corporations and the rich first and foremost, I think a lot of people will be hung out to dry in the short term as these technologies and processes are adopted, leading to some sort of upheaval.
Current: '20 Kia Stinger GT2 RWD | '20 Yamaha R3 | '04 Lexus IS300 SD
Past: '94 Mazda RX-7 | '04 Lexus IS300 (RIP) | '00 Jeep XJ | '99 Mazda 10AE Miata | '88 Toyota Supra Turbo
My MM Movies - Watch Them Here