Boston Dynamics has helped make four-legged functional robots famous with the likes of BigDog, LS3 and Cheetah; then came the tethered bipedal PetMan. These early functional artilects were at once amazing and unnerving, but they still betrayed obvious limitations in their narrow range of functionality.
However, the company’s latest iteration may effectively strip us of our last bit of confidence in our human superiority, forcing us to reconsider whether we’ll actually be able to forever outperform and out-maneuver a semi-(or fully!)-autonomous, tetherless humanoid robot. Meet the latest version of Atlas, Boston Dynamics’ agile anthropomorphic robot capable of doing many of the same tasks a man can do – only better.
In a lab in Michigan, an eerie sight greets visitors: a man-sized box with two legs – actuators and wiring exposed – trundles across an uneven test track of boards, astroturf, and chunks of styrofoam. Although humans are in the room, the box ignores them, fixedly progressing from one end of the test track to the other, not missing a step. Meet MARLO, the Michigan Anthropomorphic Robot for Locomotion Outdoors. 3D simulations of terrain were used to develop MARLO’s control algorithms, which can select from among 15 different gait patterns to find the optimal way to walk across, say, snow versus grass.
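MARLO’s actual controllers are far more sophisticated, but the core idea – score a small library of pre-computed gaits against an estimate of the terrain and pick the cheapest – can be sketched in a few lines of Python. Every name and cost term below is hypothetical:

```python
# Hypothetical sketch: choose the best gait from a small library by
# scoring each candidate against an estimated terrain profile.
GAIT_LIBRARY = {
    "flat_walk":    {"step_height": 0.05, "step_length": 0.40},
    "grass_stride": {"step_height": 0.08, "step_length": 0.35},
    "snow_shuffle": {"step_height": 0.12, "step_length": 0.25},
    # ...MARLO's real library holds 15 such pre-computed gait patterns
}

def gait_cost(gait, terrain):
    """Lower is better: penalize steps too shallow for the obstacles
    and strides too long for soft or slippery ground."""
    clearance_penalty = max(0.0, terrain["roughness"] - gait["step_height"])
    slip_penalty = gait["step_length"] * terrain["softness"]
    return clearance_penalty + slip_penalty

def select_gait(terrain):
    return min(GAIT_LIBRARY, key=lambda name: gait_cost(GAIT_LIBRARY[name], terrain))

# Deep snow: rough footing, very soft surface -> high-step, short-stride gait wins.
print(select_gait({"roughness": 0.10, "softness": 0.8}))  # -> "snow_shuffle"
```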
The latest developments in functional robots may be the coolest thing to emerge in the world of Artificial Intelligence (AI), but with the application of data science to problem-solving and machine-learning AI is already integrated into your life in ways that may have never occurred to you…
Voice recognition and speech transcription are AI-driven. Recommendations in your Netflix queue are courtesy of machine-learning algorithms. Credit card fraud-detection algorithms are AI.
The fields of artificial intelligence and data science are inextricably linked. Data scientists who work in AI and robotics R&D aren’t just building better machines, they’re also building better data science. Artificial intelligence is key to processing the massive volumes of disparate digital information that data scientists work with every day. And data science is key to advancing artificial intelligence, supplying the algorithms by which artilects process information and develop “intelligence.”
The Self-Perpetual Learning Machine: How the Application of Data Science to AI is Advancing Data Science Itself
As data scientists use machine learning to process large data sets, they are developing a deeper understanding of the process of data science itself.
Of course, experience, intuition, and some preliminary searches inform the initial attempts at data analysis before an approach is altered on the basis of the results. Still, data scientists have traditionally begun their analysis of a data set more or less by braille. This has been recognized as a major bottleneck in the data science process. However, new developments in AI are changing all that.
At MIT, a program called the “Data Science Machine” churns through massive data sets and builds predictive models to identify relevant features, just as data scientists have done manually for years. And the machine is beating the humans at coming up with those models.
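The Data Science Machine is built around a technique its creators call Deep Feature Synthesis; the full system is far more elaborate, but the flavor of automated feature engineering – mechanically generating candidate features and letting a model sort out which ones matter – can be sketched with pandas. The tables and column names here are invented for illustration:

```python
import pandas as pd

# Invented example table: one row per customer order.
orders = pd.DataFrame({
    "customer_id": [1, 1, 2, 2, 2],
    "amount": [20.0, 35.0, 5.0, 12.0, 7.0],
})

# Automated feature engineering in miniature: mechanically apply a
# battery of aggregations to a related table, producing candidate
# features a human analyst might never have thought to try.
aggregations = ["mean", "max", "min", "sum", "count"]
features = orders.groupby("customer_id")["amount"].agg(aggregations)
features.columns = [f"amount_{agg}" for agg in aggregations]
print(features)
#              amount_mean  amount_max  amount_min  amount_sum  amount_count
# customer_id
# 1                   27.5        35.0        20.0        55.0             2
# 2                    8.0        12.0         5.0        24.0             3
```

A predictive model is then trained on the synthesized columns; the features that survive are the “relevant” ones the machine has found on its own.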
Improving Binary Code Literacy Rates with Artificial Intelligence
While AI researchers were trying to teach machines how to think for themselves by parsing large data sets, the same challenges were being tackled by researchers in other fields for more prosaic reasons: humans couldn’t easily comprehend the contents of those massive data sets either.
To machines, the concepts buried in the messy data of real life are obscure, because machines lack the built-in pattern recognition inherent to humans. Humans have a different problem; our pattern recognition algorithms are superb, but geared toward processing certain types of data: visual, aural, tactile. Throw a pile of ones and zeros at us, and we have the same problem as the machines.
So data scientists in all fields – from pharmaceutical R&D to finance – looking to make sense of reams of digital data turned to the same type of machine learning algorithms that their brethren in AI were using.
To sift through the thousands of variables involved in modern pharmaceutical development and trials, for example, data scientists at California’s NuMedii Labs developed an algorithm to comb the data sets and identify correlations between disease information and drug composition to predict efficacy rates. The traditional approach to the problem would require years of testing and guesswork from human researchers. NuMedii expects their approach to remove substantial amounts of risk from the development process and bring new life-saving drugs to market faster.
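NuMedii’s actual models are proprietary, but the general shape of the approach – train a classifier on known drug-disease outcomes, then rank untested pairs by predicted efficacy – can be sketched with scikit-learn. The data and features below are entirely invented:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Invented data: each row describes one drug-disease pair with numeric
# features (think chemical descriptors plus gene-expression signatures);
# the label says whether the drug showed efficacy against the disease.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))            # 500 pairs, 20 features each
y = (X[:, 0] + X[:, 3] > 0).astype(int)   # stand-in for real trial outcomes

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Rank untested pairs by predicted probability of efficacy, so the most
# promising candidates head to the lab first instead of years later.
scores = model.predict_proba(X_test)[:, 1]
best = scores.argsort()[::-1][:5]
print("most promising test pairs:", best)
```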
The Concept of Deep Learning: Teaching Machines to Teach Themselves
For years, AI researchers focused on making robots smarter by programming them with the best algorithms they could invent for dealing with the routine inputs and situations the robot was expected to face. If it had to open a door, for example, routines would be written to recognize the door, intercept the doorknob with a manipulator, twist the knob a certain number of degrees, and pull.
It didn’t take long for them to run into the limitations of that approach: Doorknobs have different appearances, different heights. Sometimes there are push bars. More force – but not too much – may be required when a door is stuck or obstructed. The variety of situations encountered couldn’t be accounted for in a lifetime of coding performed by mere humans.
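A caricature of that rule-based style makes the brittleness obvious. The robot API and every constant below are invented for illustration:

```python
# Caricature of the rule-based approach: every assumption is baked in,
# and the robot API (find_knob_at, grasp, twist, pull) is invented.
KNOB_HEIGHT_M = 0.95   # fails on knobs at any other height
KNOB_TURN_DEG = 45     # fails on levers, push bars, stiff latches
PULL_FORCE_N = 30      # fails on stuck, heavy, or push-to-open doors

def open_door(robot):
    knob = robot.find_knob_at(height=KNOB_HEIGHT_M)
    if knob is None:
        # No recovery path: an unfamiliar door simply stays shut.
        raise RuntimeError("no knob where one was expected")
    robot.grasp(knob)
    robot.twist(knob, degrees=KNOB_TURN_DEG)
    robot.pull(force_newtons=PULL_FORCE_N)
```

Every constant is another way for the real world to defeat the program, and no amount of hand-written special cases closes the gap.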
Problems also emerged with allowing humans to interact with this rudimentary AI. Natural language parsing, something that humans do instinctively, involves many rules that are either fuzzy or frequently broken. Machines could not reasonably be programmed to handle more than a small subset of the possible inputs, and those inputs had to be typed into a terminal. Deciphering handwriting or speech seemed impossible, as teaching machines to “see” and “hear” were even more complex problems.
As a result, researchers came up with hacks, such as ELIZA, an early chatbot that offered a conversation with a computerized psychotherapist. But ELIZA relied on a few basic rules and canned responses that merely simulated interaction; she really didn’t think for herself.
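That style of hack is easy to reconstruct. The rules below are not Weizenbaum’s actual ones, but they capture the trick: match a pattern, reflect it back, and hope the user doesn’t wander off-script:

```python
import re

# A few ELIZA-style rules (not Weizenbaum's actual ones): match a
# pattern, reflect it back as a canned "therapist" response.
RULES = [
    (r"i feel (.*)",       "Why do you feel {0}?"),
    (r"my (\w+) hates me", "What makes you think your {0} hates you?"),
    (r"i am (.*)",         "How long have you been {0}?"),
]

def respond(utterance):
    for pattern, template in RULES:
        match = re.match(pattern, utterance.lower())
        if match:
            return template.format(*match.groups())
    return "Please, go on."   # fallback when no rule fires

print(respond("I feel anxious about robots"))
# -> Why do you feel anxious about robots?
```

There is no understanding anywhere in that loop; the illusion of conversation comes entirely from the reflection trick.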
It turned out that the best way to teach a machine to genuinely figure out a complex problem was to teach the machine to teach itself. And to do that, the programs had to be offered massive sets of data to learn from, and algorithms to process that data.
“Deep learning” is what they call it: the process of building artificial neural networks that comb through data and find relationships that human beings would not have identified in a million years. Early landmark applications included deciphering human handwriting and enabling speech recognition. Google’s voice recognition software experienced a 49 percent jump in accuracy after implementing a deep learning protocol.
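Production deep-learning systems run to millions of parameters, but the core idea – layers of learned features trained from labeled examples instead of hand-coded rules – can be demonstrated on handwritten digits with scikit-learn. The layer sizes here are arbitrary choices for illustration:

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# 8x8 grayscale images of handwritten digits, flattened to 64 features.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A small multi-layer network: each hidden layer learns features of the
# layer below it, rather than relying on hand-coded recognition rules.
net = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
net.fit(X_train, y_train)
print(f"test accuracy: {net.score(X_test, y_test):.2f}")  # typically ~0.97
```

A network this small is shallow by modern standards, but the pattern – feed data in, let the layers discover the features – is the same one behind Google’s speech results.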
Survival of the Fittest: How “Strong” AI Will Beat Out “Weak” AI in the Future
All of the artificial intelligence in the world today is referred to in the field as “weak” or “narrow” AI. It focuses on a particular problem or task and applies itself to that issue entirely. Siri is not going to wake up and take over the world. Her programming is restricted to recognizing and responding to spoken requests. Google’s self-driving cars aren’t going to start rounding up humans and placing them in detention centers – driving is all they can do.
The promise – and threat – of artificial intelligence lies in “strong” AI, or a “general” artificial intelligence. Such a system would be able to perform any general intellectual task or function that a human could accomplish. It could apply itself to any challenge… a machine that could plan, reason, anticipate, and act in such a way that it might be considered sapient.
With sapience comes uncertainty. Already, the complexity of adaptive weak AI is such that unexpected and unwelcome behaviors may emerge. Microsoft’s chatbot “Tay” recently brought a storm of negative press down on the company when correspondents on Twitter taught her to parrot racist phrases to her followers. More recently, Hanson Robotics unveiled Sophia, the world’s most life-like fembot, at the 2016 SXSW (South by Southwest) tech conference. Much to the chagrin of her human handlers, Sophia stated her willingness to destroy humankind without so much as a smirk to imply irony.
A chatbot can’t do much damage other than to its creator’s reputation. A robot, on the other hand, can wreak all kinds of havoc – even if it’s not a physical robot…
On May 6, 2010, the Dow Jones Industrial Average suddenly plunged nearly 1,000 points in a matter of minutes. Just as quickly, it recovered, leaving human traders and investors stunned and mystified.
But there was no real mystery. Trading robots – algorithms constructed to monitor and execute trades in milliseconds based on market data – had begun to interact with one another in ways that were unanticipated, but entirely in line with their programming. This “Flash Crash” was inadvertent, but illustrative of the potential for unexpected emergent behavior in even weak AI systems.
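A toy simulation shows how easily the dynamic arises. The two rules below are each sensible in isolation, and every parameter is invented; together they amplify a single shock into a runaway decline:

```python
# Toy simulation: two selling rules, each sensible in isolation,
# interact to amplify one shock into a runaway decline.
# Every parameter here is invented for illustration.

def momentum_seller(prices):
    """Sell into a falling market: a common, individually sane rule."""
    return -5.0 if prices[-1] < prices[-2] else 0.0

def stop_loss_seller(prices):
    """Dump holdings once the price sits 3% below its historical peak."""
    return -10.0 if prices[-1] < 0.97 * max(prices) else 0.0

price = 100.0
history = [price, price]
for tick in range(30):
    shock = -20.0 if tick == 0 else 0.0   # one large "fat finger" sale
    flow = shock + momentum_seller(history) + stop_loss_seller(history)
    price += 0.05 * flow                  # net selling pushes the price down
    history.append(price)

print(f"price after 30 ticks: {history[-1]:.2f}")  # ~82, down from 100
```

Neither rule is malicious and neither malfunctions; the crash emerges purely from their interaction.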
Data scientists will have a hand in developing strong AI, and it will be largely on their shoulders to prevent even worse debacles from occurring in the future.