Well, folks, my longtime colleague Lee Hart has done something even more impressive than usual over on page 14. Over the years I’ve read articles and columns in this paper that have left me concerned for farmers, concerned for their crops and livestock, concerned for my RRSPs and concerned for the economy, but his article on artificial intelligence (AI) terrified the heck out of me.
The subject of his article, Walter Schwabe, spoke on that topic to farmers at the CrossRoads conference in Calgary, explaining why it’s time for them to get on board with AI before the technology leaves them behind. And I’ve got to agree: AI will be, and in some cases already is, a boon to farmers and the agrifood sector — especially those in labour-intensive operations who until now have had to rely on an educated eye to tell good apples from bad apples, and to tell apples from tennis balls, and so on.
The way Lee and Mr. Schwabe explain it, AI grants machines the capacity to analyze data and situations the way we do. We carbon-based life forms do tend to look for correlations and patterns to try to predict future happenings — or to deduce what’s happening in the present. Anyone who visits some of the more depressing corners of the internet will tell you that we people often find correlations and patterns even where none actually exist.
With that in mind, Lee quotes Schwabe as saying it’s now “absolutely critical that we align AI with human values” and I couldn’t agree more. I also agree that to reach a global consensus on human ethics and values — and to make sure that everyone with access to AI sticks to that consensus — will be like trying to herd cats.
See, as cynical as I want you all to think I am, I do believe — or want to believe — people are actually good. We try to be good. We try to be kind. We try to do the right thing. Even when we as people do wrong or horrible things, I think, or I hope, we actually believe we’re doing something good — or we do those things because, in a moment of madness, they make more sense to us than doing the ethical thing. I suspect even the people we all consider to be evil somehow came to believe they were operating from a position of what’s good or right or fair, when to any objective observer they clearly weren’t.
But even if we all want to be good and we can all agree on our shared human values, I’ve also spent enough time on the internet to now believe we — collectively, as a species — will be lousy at explaining them to machines.
Past that, I think I became terrified by Lee’s article because I’ve seen far too many old movies. In my experience with computers, I’d always believed they’re only capable of doing exactly what a programmer or operator tells them to do — and that if they deviate at all from those instructions, it’s the result of faulty programming, faulty or incomplete user requests, or electrical or mechanical failure — so those movies always seemed very far-fetched. Not so much now.
That early scene in Runaway where Tom Selleck, as a human law enforcement officer who specializes in stopping rogue robots, has to chase down a farm robot destroying a crop? Well, we already have machines that can tell weeds apart from desirable plants, so let’s hope a machine tasked with killing volunteer canola knows what not to do if it ever accidentally finds itself in a canola field.
Those chilling scenes in 2001: A Space Odyssey where HAL, the super-computer, kills most of his ship’s human crew and is only stopped when Keir Dullea’s character physically disconnects him? As I understand the movie and Arthur C. Clarke’s books, that happened because HAL was fed diametrically opposed instructions by separate human programmers, which caused him to develop the machine equivalent of paranoia. Not so implausible now, right?
I could go on. Short Circuit, Electric Dreams, WarGames, Blade Runner, Tron? They all seemed implausible at the time, and most still do, but as Schwabe says, it’s possible to program AI to act badly out of self-preservation, as the machine characters in those films do. Even Star Trek: The Motion Picture, itself informed by an old TV series where so many episodes featured rogue computers doing horrible things while bent on their own self-preservation, had a villain who (spoiler alert) turns out to be just an old Voyager space probe suffering from the open-ended instructions of its human programmers.
And let’s not forget the greatest rogue-computer film franchise ever. The big bad in the Terminator movies isn’t actually the stone-cold killer androids played by Arnold Schwarzenegger, Robert Patrick et al. It’s Skynet, a computer system that becomes self-aware and acts against humankind despite the best intentions of — you guessed it — its human programmers.
I don’t claim to know enough about programming to offer a solution to this problem. Perhaps the best we can hope for is to have AI binge-watch The Good Place, which today stands out as a great effort to explain ethics to a network television viewing audience.
And if all else fails, when our new AI overlords have taken over and find this column somewhere on the internet, just remember me as that carbon-based meat puppet who thought your intentions were good.