Editorial—AI Trust Issues

The short issue last week leads into a bigger one this week.  I’m happy to start it off with the latest Minds We Meet, where we speak with an accounting student who has one of the best slogans I’ve ever heard for getting yourself up in the morning.

Also, did you know that April is poetry month?  I didn’t, but that’s okay, because Jessica Macleod did, and she’s got some ideas for bringing out your poetic side, or at least some interesting ways to indulge in something you might not otherwise consider.

Also this week, Alek Golijanin brings us two interesting pieces, both with conclusions that make them a great read!  The first is a look at the use of AI in newsrooms and around the world, and what that might mean, not just for whether we can trust anything we see or hear in the media ever again, but for how many of our systems are now vulnerable.  With so many people publishing videos and podcasts of themselves, it becomes possible to train an AI to emulate someone’s voice.  Think what that might mean the next time you use the voiceprint security feature your bank offers over the telephone.  And that’s without even getting into what shenanigans might go on during an election.

Personally, I’m thinking this might be a bit of a boon for us in the long run, provided we survive the learning process.  After all, once people realize they can’t trust anything they see or hear someone say unless they witnessed it in person, politics is more or less forced back to comparing actual policies to tell the candidates apart.

But until then, we’re entering a world where politicians can say absolutely anything to certain groups of supporters, and then claim it was a deepfake when the video gets published while they’re telling another group something completely different.  Or, of course, it really could be a deepfake, and by the time the AI forensic analysis gets done, things have already moved on to another subject, assuming the forensic analysts aren’t themselves deepfakes.  And how many times will news agencies be fooled in their rush to get the latest statement out before people stop tuning in to them altogether?

This, to me, is the real threat of AI: not that it will act malevolently or try to take over the world, but what people will do with it.  How does our world react when we realize we can’t trust any connection that isn’t in person?  Is that really your grandson calling because he’s in trouble and needs extra money, or someone who’s used his podcast to capture his voice?  When you can’t trust your own ears and eyes whenever technology is involved, how does our globalized, internationally connected world continue to operate?

Beyond that, however, we’ve also got a brand new Cities In Six, this week with some great pictures and information about the city of Copenhagen, Denmark, as well as articles that might help you get inspired, earn more, or just learn something.  Plus, [blue rare] gets serious about spring cleaning.  At least, serious about thinking about it. No doubt it will happen soon. And of course we’ve got a bunch of smaller pieces to keep you up to date with what’s going on around AU, whether it’s the Student Sizzle, AU-Thentic events, Scholarship of the Week, or more.

So, until next time, trust no one and enjoy the read!