If anyone here takes a serious interest in AI research and wants to know how far it has come, you should watch this MIT open course on the subject, published in 2015:
Here is a link to a conference held in January 2017 that involved Elon Musk, Max Tegmark, Ray Kurzweil, and other leading philosophers and scientists, on the subject of super-intelligence:
Elon Musk presents the most relevant issues (imo), followed by Ray Kurzweil.
Elon points out that his only concerns are bandwidth and democratization. On bandwidth: human data output is extremely poor compared to machines, since we use "meat sticks" (fingers) and speech, whereas machines can output terabytes per second.
The first issue could be resolved through mind-machine interfaces, a direct connection to the neocortex. There has actually been significant progress in the research and development of this technology, which could begin to see implementation within the next ten years. The second issue, democratization, requires that super-intelligent software be free and open-source, but also regulated by the government. If it is kept private, then whoever first invents it would have a "first-mover" advantage: the recursive growth becomes exponential within only a few days, and anyone with access to super-intelligence could run any number of Monte Carlo simulations to instantly solve problems in energy, stocks, military strategy, transportation and distribution, weather patterns, biological systems like the brain, and social constructs like personality and political sentiment analysis.
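For anyone unfamiliar with the term: a "Monte Carlo" simulation just means running many randomized trials and aggregating the results to approximate an answer. A toy sketch in Python (the classic pi-estimation example, my own illustration, not anything from the talk):

```python
import random

def estimate_pi(samples: int, seed: int = 42) -> float:
    """Toy Monte Carlo simulation: sample random points in the unit
    square and count how many land inside the quarter circle of
    radius 1. That fraction approximates pi/4."""
    rng = random.Random(seed)  # seeded for reproducibility
    inside = 0
    for _ in range(samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / samples

print(estimate_pi(1_000_000))
```

More samples give a better estimate; the speculation in the talk is that a super-intelligence could run vastly larger simulations of far more complex systems in the same spirit.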
Ray approaches the issue conceptually with a narrative: imagine going back in time to the quintessential caveman and asking him, if he could have anything he wanted, what would it be? He tells you he would want things like a fire that never goes out, a bigger stone to block intruders from his cave, plentiful food, clean water, and lots of women to mate with. But you in turn ask him, "Don't you want a better website, a new smartphone, and a faster computer?" The point is that he has no concept of such things, because they are so much more advanced than his present understanding allows him to imagine.
To draw the contrast even more clearly, he asks the same question again, but this time you are talking to the smartest ape, millions of years ago. It tells you it wants delicious nuts and fruit, sexual partners, fewer predators, and things like that, but you ask it, "Don't you want music, and math, and culture?" Again, these are concepts it is incapable of even imagining because of the limits of its present intelligence. The relationship between super-intelligence and our human intelligence would be similar. We cannot begin to speculate about what super-intelligent beings would desire to produce or have, or what their motives would be, because it is simply beyond the limits of our finite reasoning capacity.
The conclusion, then, is that it is really pointless to speculate or worry about it existentially. Rather than being concerned with the possibility of machine super-intelligence "replacing" us, being malicious, or whatever else, I think it is better to approach the question pragmatically, the way Elon Musk does, by asking how it could best be implemented.
By the way, for those of you too lazy to watch the conference in full: they all more or less agree that we will have machine super-intelligence within the next five years. Remember, these are statements coming from the world's leaders in science, philosophy, and business. They are not joking; these are not romantic speculations. They are saying "within the next five years" because that is in fact the proper timeframe based on what we currently know. Granted, there could be some kind of unforeseen setback, but nobody who is a qualified expert in the field of artificial intelligence predicts there will be any.