As Benjamin “Ben” Parker said, “With great power comes great responsibility.” This is true of any new technological advance. Imagine if Marie Curie had abandoned her pioneering studies of radioactivity because they might be dangerous: today we would not have radiotherapy to treat cancer, radiocarbon dating in archaeology, or the power sources that run our spacecraft and supply electricity to satellites sent on missions to the outermost regions of our solar system, where solar power is not an option. That is why it is so striking to me to hear respected people like Bill Gates and Stephen Hawking talk so casually about an AI apocalypse, in which we create an AI that adapts and evolves on its own at such an accelerated rate that it moves beyond human ability to understand or control, when the reality is so far away from that.
The first thing to understand is that we don’t know what intelligence really is. We can recognize the behavior produced by an intelligent process, but how that process works is far beyond our current knowledge. For example, all humans have brains, yet intelligence varies widely; we know how neurons work and which areas of the brain are associated with memory, vision, the creative process, and so on, but we really don’t know how all of this works together to produce intelligent behavior.
Our AI efforts have concentrated on mimicking some elements of that behavior, and the results so far have been impressive, but only in narrow application areas. For example, we have cars that drive themselves and airplanes that can take off and land on their own, but they work within a specified range of parameters and can do nothing more once they are outside of it.
To create the AI we expect from the movies, it would be necessary to join all the separate aspects that are under study and constant development: visual recognition, voice recognition, natural language understanding (which is not the same as voice recognition), natural language response, and, most important of all for a “movie-like” evolution, learning. I am not even counting the human creative process, of which we really have no clear picture, because learning a task is not the same as creating something completely new.
In the AI field there are two kinds of systems: cognitive systems and emergent systems. Cognitive systems use knowledge to make decisions; examples are expert systems, or the Hierarchical Asynchronous Rational Bayesian Intelligent System (HARBIS) that I developed, which uses Bayesian logic to make decisions based on event probabilities. Emergent systems follow the paradigm that intelligence emerges from simple, localized operations in individual units all working together to accomplish a goal; these systems try to copy biological models, like neural networks trying to mimic the brain. Examples of this type are neural networks, recurrent neural networks, asynchronous neural networks and their recurrent implementations, cellular automata, and so on.
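To make the cognitive idea concrete, here is a minimal sketch of a probability-driven decision rule in Python. This is not HARBIS itself, just an illustration of the general principle of deciding from event probabilities; every event, likelihood, and utility value below is invented for the example.

```python
# Minimal sketch of a cognitive, probability-driven decision rule.
# NOT HARBIS: just Bayes' rule plus expected utility, with invented numbers.

# Prior probability of each event the system might face
priors = {"intruder": 0.02, "false_alarm": 0.98}

# Likelihood of the sensor reading "motion" given each event
likelihood = {"intruder": 0.90, "false_alarm": 0.10}

def posterior(evidence_likelihood, priors):
    """Bayes' rule: P(event | evidence) for every event."""
    joint = {e: evidence_likelihood[e] * priors[e] for e in priors}
    total = sum(joint.values())
    return {e: p / total for e, p in joint.items()}

post = posterior(likelihood, priors)

# Decide by expected utility: act only if it pays off on average
utility = {"alarm":  {"intruder": 100, "false_alarm": -5},
           "ignore": {"intruder": -100, "false_alarm": 0}}

expected = {action: sum(post[e] * u for e, u in outcomes.items())
            for action, outcomes in utility.items()}

print(post)                              # posterior beliefs
print(max(expected, key=expected.get))   # -> "alarm"
```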
To get an idea of how complicated and powerful neural networks are: with just 4 neurons (1 output, 3 hidden) a neural network can learn the XOR function. To learn more complicated patterns it is necessary to increase the number of neurons, but as the neuron count grows, the number of synapses grows much faster, roughly quadratically in a fully connected network.
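As an illustration, here is a small sketch of exactly that network, 3 hidden neurons and 1 output neuron, learning XOR by plain backpropagation. The learning rate and epoch count are arbitrary choices that typically converge, not tuned values.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR truth table

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1 = rng.normal(size=(2, 3))   # input  -> hidden weights (6 synapses)
b1 = np.zeros((1, 3))
W2 = rng.normal(size=(3, 1))   # hidden -> output weights (3 synapses)
b2 = np.zeros((1, 1))

lr = 1.0
for _ in range(20_000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: squared-error gradient through the sigmoids
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(out.round(3))   # should be close to [[0], [1], [1], [0]]
```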
If you want to include time in the patterns, in order to learn sequences, it is necessary to use Recurrent Neural Networks (RNNs), so popular today in face and voice recognition and photo auto-labeling; but this increases the network’s complexity and the computing power required. In this area of RNNs I am working on NeuroBrain, which has some features for self-generated RNNs, and it is producing interesting and promising results.
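For readers who want to see what the recurrence actually looks like, here is a minimal Elman-style recurrent cell, forward pass only. It is not NeuroBrain, just a generic sketch of how a hidden state lets the network carry information between time steps, which a plain feed-forward network cannot do.

```python
import numpy as np

rng = np.random.default_rng(1)
input_size, hidden_size = 4, 8

W_xh = rng.normal(scale=0.1, size=(input_size, hidden_size))   # input  -> hidden
W_hh = rng.normal(scale=0.1, size=(hidden_size, hidden_size))  # hidden -> hidden (the recurrence)
b_h = np.zeros(hidden_size)

def rnn_forward(sequence):
    """Feed a sequence through the cell, one time step at a time."""
    h = np.zeros(hidden_size)                    # memory starts empty
    for x in sequence:
        h = np.tanh(x @ W_xh + h @ W_hh + b_h)   # new state mixes input and old state
    return h                                     # final state summarizes the sequence

seq = rng.normal(size=(5, input_size))           # a toy sequence of 5 time steps
print(rnn_forward(seq))
```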
With network complexity growing so steeply with each neuron added, and with the computing power available today, only neural networks with hundreds or a few thousand neurons are really practical on a single computer, and a few thousand more on high-performance computing clusters. With these magnitudes in mind, we can figure that our most complex neural networks are at about the same level as a jellyfish or a leech.
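A quick back-of-envelope calculation shows why. Assuming the crudest possible model, a network where every neuron connects to every other neuron, the synapse count explodes long before the neuron count becomes biologically interesting:

```python
# Directed all-to-all connections: a deliberately crude upper-bound model.
for n in (4, 100, 1_000, 10_000, 1_000_000):
    synapses = n * (n - 1)
    print(f"{n:>9,} neurons -> {synapses:>16,} synapses")
```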
Reaching more complex neural networks will require a substantial increase in computational power, such as the development of a practical quantum computer.
With these technical facts in mind, it is possible to infer that we are a few decades away from having the computational power to create a neural network of 1,000,000 neurons (roughly the scale of a cockroach). I am not taking into consideration other areas of study, like biological-electronic hybrids.
Now, with the facts about AI clear, we can begin to theorize about its benefits and dangers.
The first fear is that AI will overtake human intelligence. This fear comes from the popular idea that a computer can be smarter than a human, but the fact is that computers are super-calculators that can perform billions of operations per second and move and store enormous amounts of information every second; as every Computer Science 101 student knows, if you don’t tell the computer what to do, it will do nothing by itself. There is a study area called evolutive programming, but it can only use the blocks provided by the human programmer to construct new programs; it is just like LEGO, and the program cannot create a new block by itself. The most advanced computer viruses used this approach to generate mutations of themselves in order to cheat antivirus programs.
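A toy example makes the LEGO limitation visible. In the sketch below, a tiny evolutionary loop recombines and mutates programs built only from three human-written blocks (all names and numbers are invented for illustration); no matter how long it runs, nothing outside those blocks can ever appear:

```python
import random

# Human-provided building blocks: the only "LEGO pieces" evolution can use.
BLOCKS = {"inc": lambda x: x + 1, "dbl": lambda x: x * 2, "neg": lambda x: -x}

def run(program, x=1):
    """A program is just a sequence of block names applied in order."""
    for name in program:
        x = BLOCKS[name](x)
    return x

TARGET = 10  # evolve a program that maps 1 -> 10

def fitness(program):
    return -abs(run(program) - TARGET)

def mutate(program):
    p = list(program)
    p[random.randrange(len(p))] = random.choice(list(BLOCKS))
    return p

population = [[random.choice(list(BLOCKS)) for _ in range(4)] for _ in range(30)]
for _ in range(200):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                       # keep the best
    population = survivors + [mutate(random.choice(survivors)) for _ in range(20)]

best = max(population, key=fitness)
print(best, "->", run(best))   # e.g. ['inc', 'dbl', 'inc', 'dbl'] -> 10
```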
The fear that an AI will emerge from a distributed program that takes over every computer connected to the internet is not reasonable either. Everyone who has worked with a distributed AI cluster knows how critical it is to send and receive information between nodes as fast as possible, and how much stress this puts on the cluster’s data links, even over fiber optics. Now think about an AI distributed over the internet: everyone knows how slow it can be just to download a file, so imagine trying to keep up an AI’s interprocess communication that way.
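A rough, illustrative calculation (the link speeds and network size below are assumptions, not measurements) shows the scale of the problem:

```python
# Time to synchronize the state of a modest neural network between two nodes,
# on a cluster link versus a typical internet link. All numbers are invented.

state_bytes = 10_000 * 10_000 * 4        # a 10k x 10k float32 weight matrix: ~400 MB

links = {"cluster (10 Gbit/s fiber)": 10e9 / 8,   # bytes per second
         "home internet (50 Mbit/s)": 50e6 / 8}

for name, bytes_per_s in links.items():
    print(f"{name}: {state_bytes / bytes_per_s:8.1f} s per sync")
# ~0.3 s on the cluster link versus ~64 s over the internet,
# and a distributed AI would need this exchange constantly, every cycle.
```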
As for the fear of killer robots being built, I think this has already happened. I have no proof, but knowing human behavior, I give a 99% chance that in some secret laboratory someone is working on killer drones using the facial recognition and self-flying capabilities available today; just imagine the strategic advantage of instructing a small drone to eliminate a military target of interest.
The first benefit of AI is in space exploration, where human interaction with unmanned spacecraft is very limited because of the time it takes to communicate across such huge distances. Right now we send batch instructions to the spacecraft and hope that nothing unexpected occurs; now imagine an AI that receives the instructions but, when something unexpected happens, makes decisions by itself in order to accomplish its assigned tasks.
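As a thought experiment only, here is what that contingency idea could look like in miniature; every command, sensor, and threshold here is invented for illustration:

```python
# A spacecraft executes a batch of commands, but an onboard policy handles
# unexpected readings instead of waiting hours for ground control.

BATCH = ["orient_antenna", "capture_image", "transmit_data"]

def telemetry_ok(sensors):
    return sensors["power"] > 0.2 and not sensors["fault"]

def onboard_policy(sensors):
    """Local decision when something unexpected happens."""
    if sensors["power"] <= 0.2:
        return "enter_safe_mode_and_recharge"
    if sensors["fault"]:
        return "switch_to_backup_unit"
    return "retry_command"

def execute(batch, read_sensors):
    for command in batch:
        sensors = read_sensors()
        if telemetry_ok(sensors):
            print("executing:", command)
        else:
            print("contingency:", onboard_policy(sensors))

execute(BATCH, lambda: {"power": 0.9, "fault": False})
```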
In general, AI will be beneficial in every environment that is dangerous or unreachable for humans and where standard remote control is not feasible.
The real and imminent danger of AI is that every day we hand more and more responsibility for critical systems to “dumb intelligent systems” built with poor quality control and deficient testing and homologation procedures.
In my opinion, by the time we realize we are creating a genuine artificial intelligence, we will surely put controls in place, such as Asimov’s three laws of robotics.
One last thought: even if we were able to create something truly intelligent, who’s to say that such an entity would be malevolent?
Julian Bolivar-Galeno is an Information and Communications Technologies (ICT) Architect whose expertise is in telecommunications, security and embedded systems. He works in BolivarTech focused on decision making, leadership, management and execution of projects oriented to develop strong security algorithms, artificial intelligence (AI) research and its applicability to smart solutions at mobile and embedded technologies, always producing resilient and innovative applications.