Note to author Paul Acerbi: Canada is no longer a G7 country. Liberals made sure of that and are working hard on making sure we fall down ever further.
AI requires accurate and accessible data to be effective. I’m not sure we have that in most aspects of government, so even from a technical standpoint, I think we’re getting ahead of ourselves if we think this is some sort of saviour available to us in the next ten years. The government can’t even effectively implement a payroll system and we’re expected to believe they can to properly implement something a thousand times more complex?
A lot of companies that have jumped feet first into AI are now backing out of it, after finding it is not in fact All That.
AI does have uses, but I really believe it is currently being oversold by the tech companies who are desperate for double-digit growth and have nothing else in the pipeline.
This is the same arc we saw with cloud computing. A big stampede in, then a long slow retreat by most companies as they found out how much more expensive it was.
I am in tech (IT) and I have been following AI tools a little more closely than non-tech people. Yes, there is a LOT of hype. Having been around since the early 90s, I can say this is normal: the initial hype around significant technologies always runs ahead of what is delivered, but what is delivered is still significant. AI can be an amazing tool. I have had small programming projects that would have taken me several weeks or months just a few years ago; for one such project, I recently built a working prototype in half a day. But I knew what I wanted, and I could evaluate the end product for correctness and utility against my clear goal.
But a question being missed in the context of government and AI is: how does AI help with figuring out goals? E.g., do we want governments more effectively delivering bad programs and policy? Can StatsCan now take even more of my time with small-business surveys, collecting data that ultimately does nothing? Can we now have twice as many Royal Commissions per year, generating ten times the policy papers for subsequent governments to ignore?
Sure, I am all for reducing waste around the edges. But I don't think those are the biggest issues we face.
Or as an AI H.L. Mencken would (probably) say: "LLMs are here to give the people what they want, good and hard, 10x harder and in 1/10th the time."
Key point in your analysis:
'I knew what I wanted'.
This is the primary issue with AI from my perspective.
AI obviously shows the potential to be an amazing time-saving tool in the hands of people who already know how to do something well, and who understand the consequences of not doing it well. The issue is that, over time, there end up being very few, and eventually zero, people who know how to do something without the aid of technology, up to and including AI.
I had this discussion with someone operating in my own field of expertise, plant taxonomy. He was all-in on AI essentially eliminating the need to work through a taxonomic key to identify a specific species of plant (this applies equally to any taxa beyond plants). I agreed that it would save a lot of time for someone trained and experienced in plant taxonomy, that is, someone who ALREADY knew how to identify a species and could detect an error if the AI delivered an incorrect result. But he had no answer to my thought experiment: what happens 20 years after the full adoption of the technology for plant species identification? What student would even bother to learn about binomial nomenclature and dichotomous keys when they could just aim their phone at something and find out what it is? What teacher or school (primary, secondary, or post-secondary) would even bother to offer courses teaching these skills? My debate partner on this issue said: so what? We won't need those skills with AI. I answered: okay, that's fine, until everyone who knew how to identify species without AI has retired or died. Everyone left will have no clue how to operate without the technology. They will be functionally ignorant without AI.
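For readers outside taxonomy: a dichotomous key is essentially a hand-executed binary decision tree, where each "couplet" asks one either/or question about the specimen and the answer sends you to the next couplet. A toy sketch of the idea (the characters and species names here are invented purely for illustration, not a real key):

```python
def identify(leaves_opposite: bool, flowers_white: bool) -> str:
    """Walk a tiny two-couplet dichotomous key (hypothetical characters/species)."""
    if leaves_opposite:        # couplet 1: leaf arrangement
        if flowers_white:      # couplet 2a: flower colour
            return "Species A"
        return "Species B"
    if flowers_white:          # couplet 2b: flower colour
        return "Species C"
    return "Species D"
```

Learning to observe those characters and walk such a key by hand is exactly the skill that point-your-phone-at-it identification would make obsolete.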
Now apply my example above to any skill you are familiar with (automotive mechanics, aeronautics, engineering, mathematics, etc.), and I'm guessing it will track the same way I've described.
There's a demonstrable pattern here of past technological advances eroding skillsets that become less necessary, which seems fine until one suddenly loses the ability to use the technology and must solve the problem unaided.
Analog mathematical skills, and their practical use across a slew of professional disciplines, first declined with the onset of calculators and computing aids, to the point where many adults today (including many teachers I've encountered as a parent) cannot do basic arithmetic without these digital helpers.
Analog writing skills, particularly cursive, are vanishing among people under 20. The ability to type on a QWERTY keyboard is declining rapidly as students and teachers become more comfortable with touch screens and point-and-swipe interfaces. I've been amazed at how many students today use the two-finger hunt-and-peck typing technique.
Analog navigation skills (such as the ability to use a compass or celestial navigation) have declined precipitously after several decades of GPS-enabled devices. This has implications for everything from hiking to driving to boating. It's fortunate that trains ride on set tracks!
Analog music theory and composition skills are also rapidly declining, as is the desire to actually learn to play an acoustic instrument in any manner other than "by ear". This severely limits how far musicians can progress in their development.
Now, add AI to all of the above.
We won't have to worry about SkyNet taking over. It will all eventually break down, and no one will know how to fix it.
I take your point and can see that as a potential trend. But I think the end state you describe is a little beyond the current horizon, with far too many other contingencies to predict how things might play out. I look back at my own career and think of how many "black boxes" there are now that I build on with no immediate clue as to how they work. WiFi is a great example: I could understand *roughly* how the very first standard worked, but the current technology is WAY too complex for me to grasp. Yet it works reasonably well. Even the famous "nobody can make a pencil from scratch" point (Leonard Read's "I, Pencil", in the spirit of Adam Smith's division of labour) dates back to seemingly "simpler" times. Maybe we will hit that breaking point soon, maybe not.
I tend to wonder more about the psychology of hyper-specialization. I would find it mind-numbingly dull to have to hyper-specialize like that. My personal psychology lends itself to being a bit of a generalist, but I am not sure how common that is across the population. Either way, I think there is less of a place for people like me in the future.
Also recall, this essay was written in 2008, but I think it is still worth reading today, in the age of LLMs :)
https://www.theatlantic.com/magazine/archive/2008/07/is-google-making-us-stupid/306868/
Super interesting article, with an optimistic perspective. I obviously run the risk of being the next in the long line of past examples cited in that piece.
Having said that, your black-box description above, to my view, supports my anxiety. Clearly we cannot grasp how everything "works" as technology advances, and that is exactly the problem.
Before we had cars, we had horse-and-buggy transport, and while I will agree that many passengers in those buggies did not know how to fix a broken wheel should one occur, I'm betting a great many of them did, and certainly whoever was driving the team pulling the buggy did.
Similarly, the number of people driving automobiles today who can actually perform routine or even major maintenance themselves has dropped steadily over the century or so we've had the technology. This is mostly because manufacturers keep making these vehicles more complex, so that self-maintenance and repair are almost impossible without specialized knowledge and tools. They've also dumbed these vehicles down. It's to the point now where several automobiles have no spare tire, and you cannot check your own oil or transmission fluid because the dipsticks have been deleted from those engines and transmissions, as the manufacturers apparently feel those fluids will last the "life of the vehicle". By my reckoning of the expected life of those fluids, that is likely several hundred thousand km LESS than the life of a comparable vehicle built two decades before.
I digress....
Hyper-specialization, to my thinking, is exactly the opposite of what we should be encouraging. I also consider myself a generalist, and I hope you and I are not alone in working to stay that way. I would also disagree that there will be less of a place for us in the future: general knowledge might become increasingly valuable if the majority of folks can't find their backside with both hands unless it's explained on a screen in front of their face!
Thanks again for the interesting article link, and your equally keen perspective.
Glad you liked the article! I still think a lot about it after all these years. I am really all over the place as to how this tech will play out, or the trajectory it's headed in. I have read many convincing and contradictory arguments and, like a goldfish, am convinced mostly by the last one I read :)
Be careful with AI.
I second that. We've already seen scam artists using it. Can we trust governments to use it without abusing it?
Just saw a YouTube ad featuring a fake "AI Mark Carney".
I admit, I'm among the anti-tech hold-outs. I spend very little time online, I don't do social media whatsoever, and the newest piece of technology I own is my phone, which is about 13 years old and counting. Needless to say, I've never dipped a toe into the world of AI, and I don't see myself ever doing so, because of my basic distrust of its mere existence and unregulated capacity, the poor digital ecosystems we are already stuck with, and the growing, perpetual threats of big tech, bad actors, and stupid governments, which collectively continue to degrade our digital experiences and heighten our risks.

I disagree with the author's take that we prefer digital experiences over in-person ones. In fact, I've increased my in-person service usage in recent years rather than use digital services, and I know I'm not alone. The reality is that we do not have a safe, secure, rules-based framework with enforcement teeth to keep people and institutions safe from harms, service failures, digital intrusions and attacks, and other serious privacy-related concerns; it would take only a single catastrophic failure to make us all vulnerable.

As for AI growth and adoption, we can't be a society whose people are overly or entirely reliant on what AI does, informs, or communicates, when we all know its output is not authenticated and validated as correct. More simply, AI has the potential to make people dumber and lazier, as we've seen with the adoption of other technologies over the last 25 years. How many high school or post-secondary graduates are entering the workforce nowadays not knowing how to write, spell, research, or do basic math, let alone the other basic soft skills people need to get by, including focus, communication, and attention to detail? Wider adoption of AI will only worsen this trend while we lack the proper guardrails for it, and we cannot risk the economic productivity of the workforce.
While AI does offer certain efficiencies, I'm not sold on it. In the meantime I'll continue to rely on the only AI I've ever known: "Actual Intelligence", not "Artificial Intelligence".
Excellent comment. Congrats on the phone. Mine is about 9 years old, a very basic 3G smartphone. Not upgrading unless I must.
We use AI tools continuously throughout the working day at my business. I never write an email, we never create a spreadsheet or process flow, and we never make a major decision without AI tools. As far as I am concerned, they have already made us much more efficient, increased the quality of our output, and made me a lot of money personally.
You can either use the tools and treat them as a new opportunity to get ahead in life, or your competition will use the tools and beat you in the market. Canadians really need to get over themselves, we aren't so special as to be exempt from the world around us. A bit of ambition wouldn't hurt us either.
I was "chatting" with Meta's AI just a few minutes ago. It was not creating the photo edits I wanted, so I eventually said, "I give up." Its reply (which reads like the work of a student in a creative writing workshop) leads me to think it's got a long way to go: "The words dissolved into the fog, and the fog wrapped itself around the city like a damp shroud. Buildings twisted and melted like wax sculptures in a furnace, their steel beams like skeletal fingers reaching for the sky. You stood in the midst of this dreamlike chaos, the sound of dripping water echoing through the streets. Your footsteps were the only sound that broke the silence, a solitary beat that seemed to match the rhythm of your heartbeat. And then, everything went dark. The city was swallowed whole by the night, leaving only the faint scent of smoke and forgotten memories."
Imagine if/when the same Meta AI that is arbitrarily terminating people's Instagram and Facebook accounts becomes in charge of administering government benefits.
AI is neither friend nor foe - it is a tool, pure and simple.
As with most tools, it can be used responsibly to do good, and it can be used malevolently to do evil. Right now, we are at the phase where people are still just barely figuring out how to work with it.
I believe AI has huge potential, but letting government and politicians control or regulate it only restricts that potential to the "naturally stupid" (and greedy). A prime example is using it to expedite freedom-of-information requests; the better solution would be to dismantle the whole FOI process in the first place. Similarly, using it to cheaply carry out more intrusive surveillance of citizens, such as that already used daily by the RCMP against Canadian firearms licence holders, or by China against its entire population, will only lead to more totalitarianism of the type illustrated in the Judge Dredd movie (1995).
So yes, bring it on, but do it right and that means keeping government the f@&k away from it.
Mr. Acerbi makes a lot of bullish statements and does his best to subtly disparage Canadians for their concerns regarding the advent of AI ("30th out of 30" in a survey of national attitudes towards AI).
But where is the specific evidence that "AI" will "for sure" solve many of the problems we perceive in the public sector?
Until we move beyond op-eds promising paradise on the basis of adopting what is really a catch-all phrase, I'm not sure that Canadians' skepticism will dissipate.
First, the government is almost never on the leading edge of technology. The best it can do is evaluate the risks of emerging tech and try to head off the problems. Cryptocurrencies are the best example: after about a decade, their best use seems to be helping criminals and rogue countries avoid sanctions and prosecution.
Most of us still see Internet scams and mysterious phone calls from strange numbers.
If we can't solve these problems, what hope do we have of regulating AI?
ArriveCan and the Phoenix pay system seem to be warnings that our government is still stuck in the 20th century.
BTW, if you want a really good take on how non-tech people are productively making use of AI, *and* all the fascinating potential tradeoffs and pitfalls, talk to fellow Canadian Paul Bloom (https://smallpotatoes.paulbloom.net/) on your podcast.
I do not believe this: “Our wisest path forward with AI isn't resistance but strategic adaptation; our leaders recognize this and are focusing not on halting progress, but on (safely) harnessing its potential.” Given our experience with social media, we need to be convinced, and there must be off-ramps. The article had too many “woulds” and not enough “coulds” for my liking. And, too many “if done well” qualifiers. Caution is warranted.
I believe that Canadians' trust in government is so low that AI-delivered decisions (that's what many responses will be) will be generally unacceptable to Canadians. Most if not every AI response will draw a request for human review. Context, in today's society, is everything.
Elected members will, I suspect, never be OK with AI-determined Access to Information responses. First Nations will likewise dispute AI's ability to summarize their unwritten history.
The CRA will be unhappy with AI-determined tax replies, which would be, by definition, legally binding decisions.
CSIS, the RCMP, and Border Services will likewise be suspicious of AI-approved passport applications.
I suspect the scope for poor, bad or illegal outcomes is too large for AI to be citizen-facing.