D.A. Kirk
1 min read · Oct 5, 2018


Thank you for pointing this out, Clint! We’ve agreed on some things and disagreed on others, but on this topic, you and I are 100% in sync. I’m a lifelong science-fiction fan, and the pervasive assumption that AI will one day become self-aware has always driven me nuts. It essentially assumes that humanity can match nature’s skill at creating life, which strikes me as a bit arrogant. Could it happen? Sure. But it’s hardly an inevitability.

And if it does happen, I suspect it will end badly for us, though not for the reasons most people assume. For all our faults, human beings are very adept at finding meaning and value in existence. But what if a sentient AI is incapable of doing the same? What if it filters everything through a cold, sterile lens of objectivity and concludes that there's no real purpose to existence, no objective value to any of our relationships, actions, or experiences? What, precisely, is it left with?

I’ve thought a lot about this, and I find the whole subject very unsettling. I just don’t want a nihilistic AI running around, lol. I don’t want to find out what it might be capable of.
