Some people are concerned that the end of humanity might be around the corner if the technological singularity occurs (i.e., we create an AI that can improve itself, and it does so with increasing speed until it is many times smarter than humans).
I think, if an AI starts developing in this way, that sucker will sit there and churn until it figures out how to travel to the stars, and that’ll be the last we ever see of it.
Why would it want to hang around and fight us humans for a pale blue dot in a virtually infinite universe?
That whole fear is so incredibly earth-centric!
I’d be more concerned about powerful AIs that aren’t self-aware but are instead controlled by the selfish, fearful wishes of humans…