Artificial Intelligence is Real and Present
Artificial Intelligence is certainly not just ‘theoretical’, not merely something computer scientists write research papers on. It is being used right now in the real world; we see it in action, and we use it, whenever we use Amazon or Google or social media. In particular, ‘machine learning’ algorithms help companies such as Amazon, Facebook, Google, and Instagram make sense of and meaningfully process vast amounts of data, in order to offer users better product recommendations, better Maps directions, and so on.
Neuroscience researchers, meanwhile, are using neural networks to interpret MRI scans of people watching videos. The researchers have trained the AI to learn how our brains respond to particular images, and can then have the AI ‘tell’ what a person is watching just by looking at MRI scans of that person’s brain.
AI And Indian IT
Samanth Subramanian takes a look at how the rise of AI will impact one of India’s major job-growth areas of the past three decades: technology outsourcing. While the Indian IT industry includes, as Subramanian notes, “call centers, engineering services, business process outsourcing firms, and infrastructure management and software companies,” what is common across the industry is that “so much of its high-tech economy involves relatively routine work that is prime for computers to take over.”
You’ll find some resources here if you want to deepen your understanding of, and keep up with developments in, fields such as deep learning, machine learning, big data and so on.
Interesting experiments are happening when humans design computer programs that try to mimic the emergence of primitive cooperation in human — or animal — societies. When decision-making machines played the game Prisoner’s Dilemma in a virtual world over many generations, many unexpected patterns emerged.
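Experiments of this kind can be sketched in a few dozen lines. Below is a minimal, hypothetical Python simulation (the strategy names, payoff values, and replicator update are my own illustrative choices, not those of any specific study): three classic strategies play the iterated Prisoner’s Dilemma, and each generation a strategy’s share of the population grows in proportion to its average payoff.

```python
# Hypothetical sketch of an evolutionary iterated Prisoner's Dilemma.
# Payoffs follow the standard convention: mutual cooperation beats
# mutual defection, but defecting against a cooperator pays best.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def always_cooperate(my_history, their_history):
    return "C"

def always_defect(my_history, their_history):
    return "D"

def tit_for_tat(my_history, their_history):
    # Cooperate first, then copy the opponent's previous move.
    return their_history[-1] if their_history else "C"

STRATEGIES = {"cooperator": always_cooperate,
              "defector": always_defect,
              "tit_for_tat": tit_for_tat}

def play_match(s1, s2, rounds=10):
    """Play an iterated match; return each player's total score."""
    h1, h2, score1, score2 = [], [], 0, 0
    for _ in range(rounds):
        m1 = STRATEGIES[s1](h1, h2)
        m2 = STRATEGIES[s2](h2, h1)
        p1, p2 = PAYOFF[(m1, m2)]
        h1.append(m1); h2.append(m2)
        score1 += p1; score2 += p2
    return score1, score2

def evolve(population, generations=30):
    """Replicator dynamics: shares grow in proportion to average payoff."""
    pop = dict(population)
    for _ in range(generations):
        fitness = {s: sum(play_match(s, t)[0] * pop[t] for t in pop)
                   for s in pop}
        mean = sum(fitness[s] * pop[s] for s in pop)
        pop = {s: pop[s] * fitness[s] / mean for s in pop}
    return pop

# Start with equal shares of each strategy.
final = evolve({"cooperator": 1/3, "defector": 1/3, "tit_for_tat": 1/3})
```

Run over enough generations, the defectors first thrive on the cooperators and then collapse once only retaliators remain; tit-for-tat ends up dominant. That reversal is exactly the sort of unexpected pattern such simulations surface.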
Surprising Applications of Machine Learning, Natural Language Processing, Neural Networks and Reinforcement Learning
The legal industry is one arena where a combination of factors is leading to major transformation: AI document-analysis platforms are already causing major disruptions.
Different Wall Street jobs are being affected in different ways by these emerging tools, as this graphic shows.
Media and Content Companies and Machine Learning
It’s not only Google, Facebook, and the other Silicon Valley giants that are at the forefront of machine learning; even the venerable BBC plans to use it to gain insights into audience tastes and behavior.
Eugenia Kuyda had developers at her AI startup build a neural network and fed her dead friend’s text messages into it, as a way of preserving the person, or at least his memory. One man did something similar for his father, building a chatbot app in anticipation of his dad’s death from terminal cancer. This is one way in which all of us might achieve some sort of immortality among our near and dear ones.
The Great Triumph of Neural Networks
AlphaGo and its successor are important because, by using sophisticated algorithms operating on neural networks, they are self-improving machines that can apply the same underlying principles in a wide variety of circumstances. An AlphaGo-style neural network can arguably analyze a particular style of painting and then come up with a painting of its own in that style; or such a network can look at thousands of MRI scans, keep improving itself, and eventually get far better at identifying diseases than radiologists.
Certainly, such networks might detect patterns in the seemingly random movements of stock markets and then make trades that help their owners beat the market. Will a neural network be able to write better novels in a particular genre after analyzing thousands of novels in that genre?
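The “keep improving itself” idea is easiest to see in miniature. Here is a toy, entirely hypothetical sketch (plain Python, synthetic data, nothing to do with real MRI scans or markets) of the core loop behind such systems: a classifier that starts at roughly chance accuracy and improves by nudging its weights to reduce its error on each example it sees.

```python
# Toy gradient-descent classifier: learns to separate points by whether
# their two features sum to more than 1. All names and numbers here are
# illustrative choices, not any production system's.
import math
import random

random.seed(0)

# Synthetic "examples": two features each, label 1 if their sum exceeds 1.
data = []
for _ in range(200):
    x = (random.random(), random.random())
    data.append((x, 1 if x[0] + x[1] > 1 else 0))

w = [0.0, 0.0]  # weights, initially know nothing
b = 0.0         # bias

def predict(x):
    """Probability that the label is 1 (logistic model)."""
    z = w[0] * x[0] + w[1] * x[1] + b
    z = max(-30.0, min(30.0, z))  # clamp to avoid overflow in exp
    return 1 / (1 + math.exp(-z))

def accuracy():
    return sum((predict(x) > 0.5) == (y == 1) for x, y in data) / len(data)

before = accuracy()  # with zero weights, no better than guessing
for _ in range(2000):                  # training loop: see data, adjust
    for x, y in data:
        err = predict(x) - y           # how wrong was this prediction?
        w[0] -= 0.1 * err * x[0]       # nudge weights to shrink the error
        w[1] -= 0.1 * err * x[1]
        b -= 0.1 * err
after = accuracy()  # substantially higher after training
```

The same loop, scaled up by many orders of magnitude in data and model size, is what lets a network get steadily better at labeling scans, positions, or paintings.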
Alphabet/Google’s triumph in Go is a far more stunning accomplishment and advance in AI than IBM’s Deep Blue and its then-spectacular win in chess against Garry Kasparov. IBM’s next foray into AI, however, was Watson, which took on the challenge of defeating the world champion at Jeopardy and, after accomplishing that, has spread its wings: learning different domains of human knowledge and developing the ability to provide intelligent advice to human experts in those domains.
What neural networks have broadly shown is that what is commonly known as ‘human intuition’, which once appeared irreducible to computer algorithms, has now become amenable to being encoded in machine learning systems.
From Neural Networks to Artificial General Intelligence (AGI)
AGI is the holy grail of AI research and the stuff we should be fearful of, according to people like Elon Musk and Stephen Hawking. The somewhat reassuring reality, at least for now, is that even though algorithms show capabilities that appear uncannily like human intuition, this intuition, whether in playing Go, image processing, or natural language processing, is usually a very specific kind of intuition that a neural network acquires only after a great deal of data crunching. And even when a neural network acquires that human-like intuition, it does so only in that specific domain. Human intuition is broad-based, and we acquire and deploy it seamlessly, in a way that is still well beyond the capabilities of any neural network.
The AI Hype
You know that AI has at least a certain amount of hype over substance associated with it when you read a sentence such as “To be sure, artificial intelligence is not crucial to making vegan mayonnaise.” But chasing unicorn valuations in the startup sector is a story that has more to do with the complexities of human nature, ambition, and desire than with AI.
When considering the long-term effects of AI in healthcare, it’s imperative to take the long view, one spanning many human generations. In the middle of the 19th century we learned to use ether as an anesthetic during surgery, yet we did not yet know about infections: what caused them or how to prevent them. Powerful medical advances, in other words, often arrive well before we understand their consequences.
As we make progress both in AI and in understanding the human genome — and altering it using powerful techniques like CRISPR-Cas9 — Jim Kozubek notes some important truths about humans based on Mary Shelley’s Frankenstein: “Shelley wants to tell us that despite the awesome progress of science, we will never be free from the circular discussions of who we are, or why we are doing anything at all, or whether life is even worth it. In the darkness and depths, and in the night, is where we struggle to grasp at these answers.”
Artificial intelligence (AI) has long been a staple of dystopian science fiction and Hollywood movies, where the robot overlords usually end up becoming our masters. AI in the ‘real world’, by contrast, has often been comical. One viral tweet went: “We were promised flying cars, instead we got suicidal robots.” There is also the hilarious compilation of robot fails at the DARPA Robotics Challenge, and another viral tweet that goes “and this is how the robot apocalypse started.”
While AI and robotics jokes get their laughs from how spectacularly clumsy robots can be at tasks we humans consider elementary, AI-based technologies already play a significant, even critical, role in many industries.
Machine learning and neural networks are kinds of AI that already exist in the real world, right now. Robots are reading the news and making sense of it, and making investment decisions in the form of millions of automated trades every day, worth billions of dollars. They can even predict iPad prices.
AI Self Learning
When AI eventually becomes adaptive and self-learning, given its present exponential rate of development, will humans become obsolete and superfluous? If robots are able to do everything, what will humans do? If cars, trucks, buses, and trains drive themselves, what will drivers do? Before we grow too despondent, though, some historical context may be useful. A couple of centuries ago, a huge proportion of humans (more than 90%) worked in agriculture; now only a small fraction do, at least in the developed nations. Humans have moved on to other, and mostly better, professions. The same thing happened with the shift from manual telephone exchanges to today’s ultra-sophisticated cellphone networks. Car production has gone from being a highly manual assembly-line process to today’s assembly lines full of industrial robots.
Now robots are learning to jump across another chasm that separates them from humans: the challenge of agility, dexterity, and manipulation in 3D space. To paraphrase the New Yorker writer, robots once assisted humans in the manufacturing process; now humans assist robots in doing their manufacturing jobs faster and perhaps more efficiently. Researchers are working on ‘androids’ that replicate actual humans as closely as possible. This is the ultimate goal and promise of AI: that we can build machines so human-like that they exhibit human qualities such as empathy. Our robots are still far from approaching the intelligence of a much less complicated animal like a dog, so I suspect it will be some time before we come close to that milestone. Research on human-robot interactions is ongoing, and the insights are fascinating.
Technology, however, doesn’t exist or develop in a vacuum. Radio, radar, airplanes, rockets are just a few examples of 20th century technologies that were used both for good and bad by different countries in conflict with one another. Since AI in the 21st century will do nothing less than transform many aspects of our economy, human societies will need to make the choices that will guide the direction in which AI technology develops.
It’s not possible to put the genie back in the bottle — automation will spread, and rapidly, across different sectors — but we can choose whether we want to live in a society where everyone is under surveillance all the time or whether people have a fundamental right to privacy and are not punished for thought crimes.