
Monday, April 2, 2018

Should we worry about Artificial Intelligence? The Perils of A.I.



There has been a lot of discussion around Artificial Intelligence (A.I.) now that it is more ubiquitous than ever, thanks to three major developments that took off in 2017 and into 2018: better algorithms; increases in networked computing power; and the ability to capture, store, and mine massive amounts of data.

The launch of voice-activated virtual assistants and smart speakers, such as Amazon's Alexa, Apple's Siri, Microsoft's Cortana, Google Home, and the Apple HomePod, has propelled A.I. into mainstream thinking and the consumer market.


The fact of the matter is that visionaries, researchers, scientists, and developers have been working with Artificial Intelligence for more than sixty years. Only in the past few years have we seen an explosion of uses and devices, from chatbots to home assistants to medical diagnosis to robotic devices that vacuum our homes, mow our lawns, or manufacture goods.

Will A.I. be mankind’s final invention? Will it ultimately destroy civilization and eventually all human life? Will it eliminate jobs? Will it enhance human productivity beyond our wildest dreams? Will it be mankind’s best invention?

Stephen Hawking, Elon Musk, and Bill Gates have gone on record about the dangers of unchecked A.I., warning that it could bring about the demise of the human race.

Maybe yes, maybe no, but I see three major areas of concern that need to be addressed right away: algorithm security, algorithmic bias, and algorithm interactivity.

Algorithms power our technology and shape how we view and participate in the world. At bottom, an algorithm is a set of instructions for a device, often a complex web of if/then rules. Every time you visit Amazon, Netflix, or almost any site, your activity is tracked, which is why an ad or a suggestion for something you looked at earlier pops up on the next page or site you visit. Netflix's powerful algorithms learn your entertainment preferences and suggest similar movies or shows. Algorithms like these are really the ancestors of A.I., and they are just the tip of the iceberg compared with the algorithms used in A.I. today.
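
To make this concrete, here is a minimal sketch in Python of the kind of if/then, preference-tracking logic described above (the titles, tags, and scoring rule are made up for illustration): record what a user has viewed, then suggest catalog items that share attributes with that history.

# Minimal sketch (hypothetical catalog and tags) of if/then,
# preference-tracking recommendation logic.

from collections import Counter

CATALOG = {
    "Stranger Things": {"sci-fi", "thriller"},
    "Black Mirror":    {"sci-fi", "anthology"},
    "The Crown":       {"drama", "history"},
    "Altered Carbon":  {"sci-fi", "noir"},
}

def recommend(viewing_history, top_n=2):
    """Score unwatched titles by how many tags they share with the history."""
    # Tally the tags of everything the user has watched (the "tracking" step).
    preferences = Counter()
    for title in viewing_history:
        preferences.update(CATALOG.get(title, set()))

    # If a candidate shares tags with the user's history, then boost its score.
    scores = {
        title: sum(preferences[tag] for tag in tags)
        for title, tags in CATALOG.items()
        if title not in viewing_history
    }
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend(["Stranger Things", "Black Mirror"]))
# -> ['Altered Carbon', 'The Crown']  (the sci-fi overlap ranks first)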

A.I. algorithms have become so complex with machine learning and neural networks that in May of this year the European Union's General Data Protection Regulation (GDPR) goes into effect after years in the making. The regulation sheds light on the "black box" nature of these algorithms and gives E.U. citizens the right to know how they work when machines make decisions that affect their lives.

This is a step in the right direction, but it is a herculean challenge for the likes of Facebook, Google, Microsoft, and other tech giants entrenched in A.I., simply because no one is clear on exactly how the algorithms work: either they are too complicated to understand, or they are proprietary algorithms that companies want to keep secret.
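
What might such an explanation look like? For a simple linear scoring model, a rudimentary one is possible, as in the Python sketch below (all feature names, weights, and the threshold are hypothetical). The deep networks and proprietary ensembles the tech giants actually run are far harder to unpack, which is exactly the problem.

# Hypothetical sketch: for a linear scoring model, an "explanation" can be
# each input's weighted contribution to the decision.

WEIGHTS = {"income": 0.4, "debt_ratio": -0.7, "years_employed": 0.2}
THRESHOLD = 0.5  # approve if the total score clears this bar

def explain_decision(applicant):
    # Each feature's contribution is its value times its weight.
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = sum(contributions.values())
    verdict = "approved" if score >= THRESHOLD else "denied"
    print(f"Decision: {verdict} (score {score:.2f})")
    # List the features in order of how strongly they swayed the outcome.
    for feature, value in sorted(contributions.items(),
                                 key=lambda kv: abs(kv[1]), reverse=True):
        print(f"  {feature}: contributed {value:+.2f}")

explain_decision({"income": 2.0, "debt_ratio": 1.5, "years_employed": 1.0})
# Decision: denied (score -0.05), driven mostly by the debt_ratio term.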

In addition, the AI Now Institute at New York University, a research institute examining the social implications of artificial intelligence, recently applauded New York City for becoming the first city in the nation to take up the issue of algorithmic accountability when it set up its Automated Decision Systems Task Force.

“The task force is required to present the Mayor and ultimately the public with recommendations on identifying automated-decision systems in New York City government, developing procedures identifying and remedying harm, developing a process for public review, and assessing the feasibility of archiving automated decision systems and relevant data,” according to the letter sent to Mayor de Blasio by AI Now outlining the mission of the task force.

As we grow more secure and comfortable with devices like Siri, Alexa, Google Home, and Cortana, and with the Internet of Things, can we implicitly rely on the information they give us? Suppose these devices, and others to come, were hacked and used for nefarious agendas. What if personal bias were inadvertently or intentionally programmed into their algorithms?

In Chapter 8 of his book Future Crimes, "In Screens We Trust," Marc Goodman writes that every screen is hackable and that "whether or not you realize it, your entire experience in the online world and displayed on digital screens is being curated for you."

We recently experienced just such an event when Russian interlopers manipulated our social media to sway public opinion from one candidate to another during the 2016 presidential campaign. Imagine if many of the devices we depend on, especially our mobile phones, were hacked and their algorithms changed to produce a different result. The ensuing scenarios are unthinkable and ultimately uncontrollable.

In addition, research by the American Institute for Behavioral Research and Technology, published in the Proceedings of the National Academy of Sciences, has shown that Google search results could shift voting preferences by 20 percent or more, and by up to 80 percent in certain demographic groups.

Algorithmic bias has recently come under scrutiny from various researchers. The AI Now Institute is working with the ACLU because of the high-stakes decisions that impact criminal justice, law enforcement, housing, hiring, and education, to name a few.

"Algorithms that may conceal hidden biases are already routinely used to make vital financial and legal decisions," Will Knight wrote in the MIT Technology Review ("Biased Algorithms Are Everywhere, and No One Seems to Care," July 12, 2017). "Proprietary algorithms are used to decide, for instance, who gets a job interview, who gets granted parole, and who gets a loan."
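
One basic audit researchers run on such systems is to compare a model's decisions across demographic groups. Here is a minimal Python sketch, with made-up data, of a demographic-parity check; a gap does not prove bias on its own, but it flags where to look closer.

# Minimal sketch (hypothetical data) of one basic bias audit: compare a
# model's approval rate across demographic groups.

decisions = [  # (group, approved) pairs, e.g. from a hiring or loan model
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", False), ("B", False), ("B", True), ("B", False),
]

def approval_rates(records):
    # Count totals and approvals per group, then compute each group's rate.
    totals, approved = {}, {}
    for group, ok in records:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    return {g: approved[g] / totals[g] for g in totals}

rates = approval_rates(decisions)
print(rates)                                   # {'A': 0.75, 'B': 0.25}
gap = max(rates.values()) - min(rates.values())
print(f"Demographic parity gap: {gap:.2f}")    # 0.50 -> worth investigating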

Probably the most challenging and potentially catastrophic real-world problem facing A.I. and its algorithms today is what I call algorithm interactivity. Organizational theorist Charles Perrow describes the underlying condition in his seminal book, Normal Accidents: Living with High-Risk Technologies: complex systems that are tightly coupled and designed to interact with each other immediately.

One such incident occurred in May 2010 on Wall Street, when a cascading chain reaction of algorithmic buys and sells caused the Dow Jones Industrial Average to drop 1,000 points in twenty minutes, according to James Barrat in his book, Our Final Invention: Artificial Intelligence and the End of the Human Era.

The catastrophe started when a frightened trader ordered the immediate sale of $4.1 billion worth of futures contracts and ETFs (exchange-traded funds) related to Europe, wrote Barrat. At the time, Greece was having trouble financing its national debt, and the debt crisis had weakened the European and U.S. economies.

“After the sale, the price of the futures contracts (E-Mini S&P 500) fell 4 percent in four minutes. High-frequency trade algorithms (HFTs) detected the price drop. To lock in profits, they automatically triggered a sell-off, which occurred in milliseconds (the fastest buy or sell order is currently three milliseconds—three one-thousandths of a second). The lower price automatically triggered other HFTs to buy E-Mini S&P 500, and to sell other equities to get the cash to do so. Faster than humans could intervene, a cascading chain reaction drove the Dow down 1,000 points. It all happened in twenty minutes,” Barrat wrote.

Perrow calls the problem "incomprehensibility," according to Barrat: an incident that is neither expected nor comprehensible for a critical period of time. No one anticipated how the Wall Street algorithms would interact with each other, and so the event was incomprehensible and unstoppable.
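
A toy simulation makes the tight coupling visible. In the hypothetical Python sketch below (all triggers and price moves are made-up numbers), each agent's rule is individually sensible, sell if the price falls a certain percentage from its peak, yet every sell deepens the very drop that triggers the next agent, so a single shock cascades through all of them faster than any human could intervene.

# Toy simulation (hypothetical numbers) of tightly coupled sell rules:
# one initial shock trips the most sensitive rule, whose sell trips the
# next, and the whole cascade completes in a single pass.

price = peak = 100.0
# 50 sell rules with staggered triggers: 1.0%, 1.2%, 1.4%, ... below peak.
agents = [{"trigger": 0.010 + i * 0.002, "sold": False} for i in range(50)]

price *= 0.985  # initial shock: one large sale knocks the price down 1.5%

for tick in range(10):
    sells = 0
    for agent in agents:
        drop = (peak - price) / peak
        if not agent["sold"] and drop >= agent["trigger"]:
            agent["sold"] = True
            price *= 0.995  # each automated sell pushes the price lower still
            sells += 1
    print(f"tick {tick}: {sells:2d} sells, price {price:6.2f}")
    if sells == 0:
        break  # the cascade has burned out
# Output: all 50 rules fire on tick 0, driving the price from 98.50 to ~76.66.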

How do we solve these problems and avoid building A.I. machines that cause more harm than good? The E.U.'s algorithm regulation, AI Now's algorithmic accountability initiatives, and the slew of others sure to follow will shed some light on these black boxes and make their creators accountable. But will that make algorithms more vulnerable to hacking or copying if organizations are required to publicly reveal how they work?


What's apparent is that A.I. algorithms need to be highly secured, with blockchain-based architectures, Distributed Ledger Technology, and other technologies to come, so they cannot be hacked and changed by bad actors.
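
The core idea behind ledger-based protection can be sketched simply: record a cryptographic fingerprint of the deployed algorithm and verify it before every run. In the hypothetical Python example below the digest is just stored in a variable; anchoring it on a blockchain or distributed ledger (not shown) would make the record itself tamper-evident.

# Minimal sketch of tamper detection via a cryptographic fingerprint.
# The "model file" here is a stand-in created just for the demo.

import hashlib

with open("pricing_model.bin", "wb") as f:   # hypothetical deployed artifact
    f.write(b"model weights v1")

def fingerprint(path):
    """SHA-256 digest of the algorithm's code or model file."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# At deployment, publish the digest (to a ledger in the full design).
TRUSTED_DIGEST = fingerprint("pricing_model.bin")

def verify_before_run(path):
    """Refuse to execute an algorithm that no longer matches its fingerprint."""
    if fingerprint(path) != TRUSTED_DIGEST:
        raise RuntimeError(f"{path} has been modified since deployment")

verify_before_run("pricing_model.bin")   # passes
with open("pricing_model.bin", "ab") as f:
    f.write(b" tampered")                # a bad actor alters the algorithm
verify_before_run("pricing_model.bin")   # now raises RuntimeError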

Algorithm accountability means the black-box aspect of this technology needs to be explained and made public so that it is clearly understood, yet not made vulnerable to hackers because of its transparency. Accountability is paramount to ensure that machine-made decisions are free of human biases that were inadvertently or intentionally programmed into the systems.

Interactivity may be the biggest A.I. challenge yet: preventing the kind of buy/sell frenzy Wall Street experienced. Perhaps we need to create an A.I. system that can test algorithm interactivity against multiple scenarios in real time as new data flows into the system.
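
Such a system might look, in miniature, like the hypothetical Python harness below: replay a pair of simple interacting trading rules against many randomized price scenarios and flag every run in which their combined feedback pushes the price outside a stability bound. All the rules, thresholds, and bounds here are invented for illustration.

# Hypothetical sketch of scenario testing for interacting algorithms.

import random

def run_scenario(seed, ticks=100, bound=0.20):
    rng = random.Random(seed)
    price = start = 100.0
    momentum_sold = False
    for _ in range(ticks):
        price *= 1 + rng.uniform(-0.01, 0.01)          # market noise
        if price < 0.97 * start:                       # rule A: stop-loss sell
            price *= 0.99
        if not momentum_sold and price < 0.95 * start: # rule B: momentum exit
            price *= 0.98
            momentum_sold = True
        if abs(price - start) / start > bound:
            return False  # combined feedback breached the stability bound
    return True

failures = [s for s in range(1000) if not run_scenario(s)]
print(f"{len(failures)} of 1000 scenarios breached the stability bound")
# Scenarios that fail reveal interactions worth redesigning before deployment.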

In any event, A.I. is a powerful new technology that should be created with safeguards to ensure it works for the good of all of us.
