Trust in the Newest World
Asimov, Heinlein, and even Vonnegut dabbled and fiddled with it, marking imaginary time with dreamy hallucinations pressed to paper to the delight of millions. Including me.
Nobody I read, anyway, could possibly have predicted the weirdness of merging a budding but quite unfinished world-changing technology with unbridled capitalism. Where there’s money, the watchwords are faster and more, consequences be damned!
Who made this image, and well,
who cares? Who sang the song, and then vanished? How do you make a hand – is it
five or six fingers? What the hell are ownership, copyright, patents? Does an
artist even need fingers?
Given the current state of AI
(“Artificial Intelligence” – an oxymoron if I ever heard one), the above
paragraph captures the most apparent issues, but certainly not all. Way back when computers got small and light enough to fit on a spacecraft – and of course, a nuclear missile is a spacecraft – running, god help us (in my imagination), one of the ubiquitous OS “algorithms,” I mused that it wasn’t the risk of one of our future AI overlords kicking off a nuclear war that put fear and trepidation into my (much younger) heart. No, it was the phenomenon best
exemplified by the infamous “blue screen of death”
that Microsoft invented to inform users that digital suicide was the only
option remaining.
The problem with that sky-blue
burp was that if said computer were busily coordinating the carefully planned
strategic course of whatever “vehicle” we might think of – though I am quite
sure there were and are “safeguards” in place to cover just such an eventuality
– that aspect of “control” would certainly be lost. Maybe not so bad, with
those safeguards. Maybe.
That fear, for me, has diminished
over the decades. I guess I’m close enough to “done” that, hey, whatever. I do feel bad for you if you are much younger, though.
Now I have noticed, in the last
decade or so, that a number of the newer technologies being touted as AI seem
to be artificial, all right, but intelligent? I know, definitions, definitions.
I have a belief (with no
evidence) that my
cat is a genius because she has profound expertise at being a housecat, as
well as brilliant execution of detail for same.
In this paradigm, ants are at
least equivalent to humans in intelligence because several species of ants are
at least, if not more, successful than humans both in numbers and biomass. So
what if they can’t do trigonometry? Hardly any human of 1000 years ago could do that either. And you and I wouldn’t be here if that mattered at all. And while
I’m on the subject, everything – including us – shits in its own nest (think
of Earth as our nest). Everything. It’s called life. Shitting in its own nest
is what it does best.
My problem with AI is that all I
have ever heard about it is that it’s on its way (if not already there) to
being as intelligent as us. There are a shit ton of rabbit holes I could go
down with that. The reason that’s scary is not because I’m afraid of becoming a
slave to a smarter-than-me machine (said while glancing at everyone reading
this on their phone). I’m scared that it (AI) will become just exactly like us –
but with even more power to change things. I don’t know if you’ve noticed this,
but humans, in general, suck.
Really.
So, since the AI stuff is made by humans, who as mentioned suck, AI will suck too – at least until humans stop making it (hopefully it will become capable of making itself better than we can).
Wait, what? AI sucks?
Well, not always. I’ve found AI
(touted) gizmos – say, Alexa, Siri, Google Assistant (er, Gemini now?) – to be
marginally useful in their functional domains.
I know this is probably just me,
but the only use I could find for Alexa after a full year was having it behave
like a radio (look that up if you’re under 40) – play me NPR or Pandora. I
could have had it order me a pizza, tell me the weather, or snag me some
tickets to a GlagNogg concert (don’t bother looking that up). But I could do
all of that myself anyway (ok, maybe not GlagNogg). So, two ways to do the same
thing. Times 3.
Poor Alexa (hiding inside an
Amazon Echo device) landed in the bottom of a box of useless stuff that ended up being given to our housekeeper, who helped me organize my room.
I am not acquainted with Siri
other than occasionally hearing other people yell at her because she’s hit or
miss on who to call. iI iDon’t iHave an iPhone.
Now we come to Google Assistant.
Buy yourself a Pixel phone
(1,2,3,4,5,6,7,8,9 and MAYBE 10), and Google Assistant will be right there
waiting for you (or apparently anyone else) to say, “Hey Google” and you too
can have the pizza, GlagNogg, and no rain today, BUT…
Before you start asking those
questions about “did you train it on your voice?” – YES, I did, several times.
The first time Assistant got my
irate attention was when she heard Mike (one of my housemates) in another room
ask Siri (on his iPhone) to call a different Mike (of course).
Ms. Google decided that my Mike (housemate, the only Mike in my
address book) would be the ideal person to call without even asking me. So, she
did. We worked it out, of course.
But that wasn’t all. No, not at
all.
Google Assistant has a splendid
feature called “Hold For Me.” It does what it sounds like – it will suspend
your end of the phone call to that government agency that has a wait queue of
40 minutes, until someone on the other side actually picks up. Then it’ll buzz
for you to pick your end up while it rattles off a spiel to the poor government
worker bee telling them not to hang up, Paul’s coming.
Sounds great, right?
It is great! It is great in
principle. It is what all of us have wanted for decades!
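For the technically curious, here is my guess – and it is only a guess, every name in it invented by me – at the shape of such a feature, boiled down to a few lines of Python. Everything hinges on one classifier’s opinion of whether the line currently sounds like a hold queue or a live human:

```python
# Pure speculation on my part - I have no idea how Google actually
# built "Hold For Me." A crude string match stands in for what is
# surely an audio classifier. All names here are invented.

HOLD_CUES = ("please hold", "callers ahead of you")

def sounds_like_hold(snippet: str) -> bool:
    """Stand-in classifier: does this bit of the call sound like a queue?"""
    return any(cue in snippet.lower() for cue in HOLD_CUES)

def hold_for_me(call_audio):
    """Suspend the user's end while the line sounds like a hold queue."""
    holding = False
    for snippet in call_audio:
        if sounds_like_hold(snippet):
            if not holding:
                print("Hold detected - suspending your end of the call.")
                holding = True
        elif holding:
            # Something other than the queue - a human, we hope.
            print("Someone picked up! Buzzing you.")
            print("Spiel to the agent: don't hang up, Paul's coming.")
            return
    if holding:
        print("Call ended while still on hold. Sorry.")

# Happy path: a real queue, then a real human picks up.
hold_for_me([
    "you have 40 callers ahead of you, please hold",
    "please hold",
    "Hello, this is the agency, how can I help?",
])
```

Keep that one fragile guess in mind; every misbehavior that follows is, one way or another, that classifier (or the humans around it) getting it wrong.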
The biggest problem with this is
that it is on everyone’s Google phone (and maybe other Android
phones as well), so it didn’t take long at all for those government worker bees to say to themselves, and to anyone out of earshot of their boss, fuck this shit, I’m hanging up now. So there went 40 minutes of your life with no result
at all.
The good news is that you always
have the option of disabling “Hold For Me” when you make a call and get the 40-minute nonsense from the other end, so I always disabled it. Oh boy, was I
ignorant at that point. See, as long as the thing is enabled for calling in
general (system level), it’s always there, waiting in the wings, as we might
say. Thus comes the second episode of madness that was the last straw.
One day, I was happily having a
lengthy conversation with a good friend. We were just talking. We weren’t
singing or playing a clarinet/guitar duet. We said nothing about “… callers
ahead of you” or “please hold.” Nothing.
Our call gets cut off.
Hold For Me tells me it is
holding the call until someone answers.
At that moment, anyway, it does
not give me the option of removing the hold. It’s much like the blue screen of
death above. We’re done, so I hang up and call back to get voicemail, because
(as I find out later), Hold For Me told my friend to, ahem, HOLD until Paul
answers.
My philosophical question for the
reader, based on the above scenario, is “who is confused here?”
It certainly was not me, nor my
yakking buddy. We knew what we were doing – talking.
That was when my brain very
quietly hatched the plot – to murder Google Assistant. It didn’t happen right
away, and it took a bit of techno investigation and a couple of false starts, but
I finally found the system setting wherein Google tacitly admitted that
Assistant was a work in progress, and that some folks might appreciate a
different “Assistant app” or none.
I sneakily picked “none” and
Google Assistant was no more. If she were corporeal, I would have buried her in
a corner of the back yard by the fence, but that was hardly necessary since she
was entirely digital. (I wonder if eventually a fully intelligent “artificial intelligence” could be murdered and the perpetrator arrested?)
Now, in a rabbit hole of wild
insanity, comes the true confusion factor, and the entity to which it belongs –
humans. Sucking (as they will, as often as not).
More specifically, the humans
that designed and “trained” Google Assistant, etc.
Personally, I have been sitting
in the bleachers (cheap seats) at the AI ballgame for quite some time.
And now for the biggest reason
that humans suck. Laziness.
I think I have mentioned
elsewhere on this blog (not sure) that humans are wired much like all living
creatures to take the easiest shortcut to any purposeful endeavor. In general,
this serves us quite well, as it conserves energy both physically and mentally.
Energy to keep on bangin’ forward, as all of life needs to do in one way or
another.
Back to the latest result of this
paradigm.
The various developers of AI have
figured out that if you want to “teach” a machine stuff, give it a really big
pile of human rock scratches (language) and let it eat its heart out. Call that
a “large language model” and call it a day.
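For flavor, here is that recipe reduced to a deliberately silly toy in Python – my own caricature, nothing like a real model’s internals – that “learns” which word follows which from a pile of text and then babbles from the counts:

```python
# A toy caricature of the "give it a big pile of text" recipe. This is
# NOT how real LLMs work inside (they use neural networks, not word
# counts), but the core move is the same: learn to predict the next
# token from what came before. Uses only the standard library.
import random
from collections import Counter, defaultdict

pile_of_text = "the cat sat on the mat and the cat napped on the mat"

# "Training": tally which word tends to follow which.
follows = defaultdict(Counter)
words = pile_of_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

# "Generation": start somewhere and keep sampling a likely next word.
word = "the"
output = [word]
for _ in range(8):
    candidates = follows.get(word)
    if not candidates:
        break  # dead end: this word never had a successor in training
    word = random.choices(list(candidates), weights=candidates.values())[0]
    output.append(word)

print(" ".join(output))  # e.g. "the cat napped on the mat and the cat"
```

Predict the next word from what came before: that, in caricature, is the whole trick. Scale the pile and the machinery up enormously and you get the chatbots in the news.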
Oh, they knew they’d need to
tweak it a bit down the road, but as soon as one of the first iterations
started hitting on its designers and suggested they dump their current
significant others for a pile of digital strippers, the dollar signs started
flying around their heads like the stars around the head of a cartoon character
that gets bonked on the noggin.
Somebody did get fired over that,
but still.
So, you might think this
(current) 65-year-old is just complaining like the old curmudgeon he is.
Admittedly, I got my official curmudgeon license the day I hit sixty. “Lose
your floppy disks with the old text adventure games, Boomer, and join the 21st
century!” I hear you saying. I get it. In fact, I got it 40 years ago. That was
when I decided I would want to “drink the Kool-Aid” when I hit thirty. Old
people were backwards and annoying. I never planned to fulfill their legacy –
but here I am.
I know that by the time I finish
writing this post, all of my thoughts on the subject of AI will be as outdated
as that brick of cheese in the back of the fridge that was bought when Ronald
Reagan was president. Even the stuff I wrote just yesterday became irrelevant
when I scanned the latest tech news items in my phone’s news drawer. The speed
of change in the AI realm is increasing exponentially.
That might be my biggest issue
with the current trends in AI. Nobody knows for sure what it can do for us in
the not-too-distant future – and nobody knows what it will do to
us either. The biggest problem in my view is something that is only rarely
talked about during a major technological advancement. It is the incredibly old
problem of good and evil (humans). AI is great for developing the best stuff
ever for humanity. It is an awesome tool. The problem is that for all the good it can, and probably will, do for us, it will be just as good at doing bad for the people who want it to.
Look at the Internet. It will do
fantastic things for us at the speed of light and make our lives so much easier
that we (at least I) cannot fathom how we got along without it for most of
human history.
That same statement could also be made by scammers, hackers, etc., who can now steal and destroy at that same speed of light.
Now imagine the speed of light as
the speed of mind. That’s what we are getting with the thoughtless (in my view)
acceptance of AI “tools” on our phones and computers (for now). I won’t pretend
here that I have a solution to these issues, nor that folding my (our?) arms
and making grumbling sounds will do any good. Hiding under a rock and ignoring
the present and looming problems won’t work either.
It just comes down to, like most unsolvable issues, happy acceptance and a cheeseburger (or whatever works for you). And perhaps a wary eye and some forethought before installing that latest
AI dating app. Hey, at least you can have Siri order the cheeseburger for you,
and maybe she can fluff up that picture of you for the dating app.
Cheers, and thanks for reading.