Note to author Paul Acerbi: Canada is no longer a G7 country. The Liberals made sure of that, and they are working hard on making sure we fall even further.
A lot of companies that have jumped feet first into AI are now backing out of it, after finding it is not in fact All That.
AI does have uses, but I really believe it is currently being oversold by the tech companies who are desperate for double-digit growth and have nothing else in the pipeline.
This is the same arc we saw with cloud computing. A big stampede in, then a long slow retreat by most companies as they found out how much more expensive it was.
Be careful with AI.
I second that. We've already seen scam artists using it. Can we trust governments to use it without abusing it?
Just saw a YouTube ad featuring a fake "AI Mark Carney".
I am in tech (IT) and I have been following AI tools a little more closely than non-tech people. Yes, there is a LOT of hype. Having been around since the early 90s, I can say this is normal. The initial hype around significant technologies is always ahead of what is delivered, but there is still something significant underneath. AI can be an amazing tool. I have had small programming projects that would have taken me several weeks or months just a few years ago; for one such project, I recently built a working prototype in half a day. But I knew what I wanted, and I could evaluate the end product for correctness and utility against my clear goal.
But a question being missed in the context of government and AI is how AI helps with figuring out goals. For example, do we want governments more effectively delivering bad programs and policy? StatCan can now take even more of my time with small business surveys, collecting data that ultimately does nothing? Can we now have 2x the Royal Commissions per year, generating 10x the policy papers that subsequent governments will ignore?
Sure, I am all for reducing waste around the edges. But I don't think those are the biggest issues we face.
Or, as an AI H.L. Mencken would probably say, "LLMs are here to give the people what they want, good and 10x harder, in 1/10th the time."
AI requires accurate and accessible data to be effective. I’m not sure we have that in most aspects of government, so even from a technical standpoint, I think we’re getting ahead of ourselves if we think this is some sort of saviour available to us in the next ten years. The government can’t even effectively implement a payroll system, and we’re expected to believe it can properly implement something a thousand times more complex?
I was "chatting" with Meta's AI just a few minutes ago. It was not creating the photo edits I wanted, so I eventually said, "I give up." Its reply (which reads like the work of a student in a creative writing workshop) leads me to think that it’s got a long way to go: "The words dissolved into the fog, and the fog wrapped itself around the city like a damp shroud. Buildings twisted and melted like wax sculptures in a furnace, their steel beams like skeletal fingers reaching for the sky. You stood in the midst of this dreamlike chaos, the sound of dripping water echoing through the streets. Your footsteps were the only sound that broke the silence, a solitary beat that seemed to match the rhythm of your heartbeat. And then, everything went dark. The city was swallowed whole by the night, leaving only the faint scent of smoke and forgotten memories."
I admit, I’m among the anti-tech hold-outs. I spend very little time online, I don’t do social media whatsoever, and the newest piece of technology I own is my phone, which is about 13 years old and counting. Needless to say, I’ve never dipped my toe into the world of AI, and I don’t see myself ever doing so — because of my basic distrust of its mere existence and unregulated capacity, the poor digital ecosystems we are already stuck with, and the perpetual, growing threats of big tech, bad actors and stupid governments, which collectively continue to degrade our digital experiences and heighten our risks. I disagree with the author’s take that we prefer digital experiences over in-person ones. In fact, I’ve increased my in-person service usage in recent years rather than use digital services, and I know I’m not alone. The reality is that we do not have a safe, secure and rules-based framework with enforcement teeth to keep people and institutions safe from harms, service failures, digital intrusions/attacks and other serious privacy-related concerns that need only a single catastrophic failure to make us all vulnerable. As for AI growth and adoption, we can’t be a society whose people are overly or entirely reliant on what AI does, informs, or communicates — which, as we all know, is not authenticated and validated as being correct. More simply, AI has the potential to make people dumber and lazier, as we’ve seen with the adoption of other technologies over the last 25 years. How many high school or post-secondary graduates are entering the workforce nowadays not knowing how to write, spell, research or do basic math, let alone the other basic soft skills people need to get by, including focus, communication, and attention to detail? Wider adoption of AI will only worsen this trend while we lack the proper guardrails for it, and we cannot risk the economic productivity of the workforce.
While AI does offer certain suitable efficiencies, I’m not sold on it. In the meantime I’ll continue to rely on the only AI I’ve ever known: “Actual Intelligence”, not “Artificial Intelligence”.
Excellent comment. Congrats on the phone. Mine is about 9 years old, a very basic 3G smartphone. Not upgrading unless I must.
We use AI tools continuously throughout the working day at my business. I never write an email, and we never create a spreadsheet or process flow or make a major decision without AI tools. As far as I am concerned, they have already made us much more efficient, increased the quality of our output, and made me a lot of money personally.
You can either use the tools and treat them as a new opportunity to get ahead in life, or your competition will use the tools and beat you in the market. Canadians really need to get over themselves, we aren't so special as to be exempt from the world around us. A bit of ambition wouldn't hurt us either.
Imagine if/when the same Meta AI that is arbitrarily terminating people's Instagram and Facebook accounts becomes in charge of administering government benefits.
I believe AI has huge potential, but letting government and politicians control or regulate it will only restrict this potential to the “naturally stupid” (and greedy). A prime example is using it to expedite freedom-of-information requests; the best solution would be to dismantle the whole FOI process in the first place. Similarly, using it to cheaply carry out more intrusive surveillance on citizens, such as that already used daily by the RCMP against Canadian firearms licence holders, or by China against its entire population, will only lead to more totalitarianism of the type illustrated in the Judge Dredd movie (1995).
So yes, bring it on, but do it right and that means keeping government the f@&k away from it.
Mr Acerbi makes a lot of bullish statements, and does his best to subtly disparage Canadians for their concerns regarding the advent of AI ("30th out of 30" in a survey of national attitudes towards AI).
But where is the specific evidence that "AI" will "for sure" solve many of the problems we perceive in the public sector?
Until we move beyond op-eds promising paradise on the basis of adopting what is really a catch-all phrase, I'm not sure that Canadians' skepticism will dissipate.
I do not believe this: “Our wisest path forward with AI isn't resistance but strategic adaptation; our leaders recognize this and are focusing not on halting progress, but on (safely) harnessing its potential.” Given our experience with social media, we need to be convinced, and there must be off-ramps. The article had too many “woulds” and not enough “coulds” for my liking. And, too many “if done well” qualifiers. Caution is warranted.
I believe that Canadians' trust in government is so low that AI-delivered decisions (that's what many responses will be) will be generally unacceptable to Canadians. Most if not every AI response will draw a request for a human review. Context in today's society is everything.
Elected members will, I suspect, never be OK with AI-determined Access to Information responses. First Nations will likewise dispute AI's ability to summarize their unwritten history.
CRA will be unhappy with AI-determined tax replies. They would be, by definition, legally binding decisions.
CSIS, the RCMP & Border Services will likewise be suspicious of AI-determined passport approvals.
I suspect the scope for poor, bad or illegal outcomes is too large for AI to be citizen-facing.
AI is neither friend nor foe - it is a tool, pure and simple.
As with most tools, it can be used responsibly to do good, and it can be used malevolently to do evil. Right now, we are at the phase where people are still just barely figuring out how to work with it.
AI is useless, you shouldn't use it for anything demanding. But also please build more data centres they're very useful. Just don't worry about AI, there's nothing to worry about. AI is certainly not trying to misdirect you away from worrying about it, everything is fine.
If it speaks English clearly, this alone would be a vast improvement.