Resources

Friday, November 30, 2018

We are losing our ability to communicate effectively

By Anthony S. Policastro

We can now communicate in the fastest, easiest and most convenient ways possible, using a myriad of devices, anywhere and anytime, with anyone in any corner of the connected world. Yet I believe we are losing our ability to communicate effectively.

Because communication is now ubiquitous, convenient, easy and instant, we have taken our writing skills for granted. Take email. I believe most of us are so comfortable with this medium that we use it as if we were conversing with a good friend. Facebook and Twitter reinforce this mindset because we know our posts and tweets are reaching friends and relatives.

(Photo: Jeff Bezos)
The result is a quantum disconnect fueled by snippets of information that are often incomprehensible.

When you are sitting face to face and having a conversation, the context of what you are talking about is always top of mind. But when you converse in the same manner over email, the recipient may not read the message for hours or days. The context gets lost. Complicate that with several acronyms in the copy and you might as well call a cryptologist.

We tend to write emails as if the recipient is sitting across from us, leaving out the context because we believe the recipient will know what we are writing about. We have become lazy writers.

And I’m not alone in my view.

Walter Chen, in his blog IDoneThis.com, wrote about Jeff Bezos, founder of Amazon, who values writing over talking to such an extreme that in Amazon senior executive meetings, “before any conversation or discussion begins, everyone sits for 30 minutes in total silence, carefully reading six-page printed memos.”

(Photo: Andy Grove)
Writing out full sentences enforces clear thinking, but more than that, it’s a compelling method to drive memo authors to write in a narrative structure that reinforces a distinctly Amazon way of thinking—its obsession with the customer. In every memo that could potentially address any issue in the company, the memo author must answer the question: “What’s in it for the customer, the company, and how does the answer to the question enable innovation on behalf of the customer?”

Like Bezos, Andy Grove of Intel finds value in the process of writing, though for Grove the value lies more in the writing than in the reading. Writing forces you “to be more precise than [you] might be verbally”, creating “an archive of data” that can “help to validate ad hoc inputs” and lets you reflect with precision on your thinking and approach.

Writing, according to Grove, is a “safety-net” for your thought process that you should always be doing to “catch … anything you may have missed.”

So what is the solution? Write more, write casually, but include all the pertinent facts and pretend your reader knows practically nothing about what you are writing about.

Monday, April 2, 2018

Should we worry about Artificial Intelligence? The Perils of A.I.



There has been a lot of discussion around Artificial Intelligence (A.I.) now that it is becoming more ubiquitous than ever due to three major developments that took off in 2017 and into 2018: better algorithms, increases in networked computing power and the ability to capture, store and mine massive amounts of data.

The launch of voice-activated virtual assistants like Alexa, Siri, Google Home, Cortana, Apple HomePod, and others has propelled A.I. into mainstream thinking and the consumer market.


The fact of the matter is that visionaries, researchers, scientists, and developers have been working with Artificial Intelligence for more than sixty years. Only in the past few years have we seen an explosion of uses and devices, from chatbots to home assistants to medical diagnosis to robots that vacuum our homes, mow our lawns or manufacture goods.

Will A.I. be mankind’s final invention? Will it ultimately destroy civilization and eventually all human life? Will it eliminate jobs? Will it enhance human productivity beyond our wildest dreams? Will it be mankind’s best invention?

Stephen Hawking, Elon Musk, and Bill Gates have gone on record noting the dangers of unchecked A.I. and that it could be the demise of the human race.

Maybe yes, maybe no, but I see three major areas of concern that need to be addressed right away: algorithm security, algorithmic bias and algorithm interactivity.

Algorithms power our technology and, to a large degree, shape how we view and participate in the world. They are a complex web of if/then scenarios, a set of instructions for the device to follow. Every time you go to Amazon or Netflix, or any site for that matter, your activity is being tracked; that is why, when you go to another page or another site, an ad or a suggestion may pop up for something you looked at previously. Netflix’s powerful algorithms learn your entertainment preferences and suggest similar movies or shows. Algorithms are really the ancestors of A.I., and these examples are just the tip of the iceberg compared with the algorithms used in A.I.
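
To make the if/then idea concrete, here is a toy sketch in Python. The product names and rules are invented for illustration only; real recommendation engines are far more sophisticated, but the basic shape is the same: track what a visitor looked at, then apply rules that map those views to suggestions.

```python
# Toy if/then recommendation sketch. Product names and rules are invented.
RELATED = {
    "running shoes": ["athletic socks", "fitness tracker"],
    "sci-fi movie": ["space documentary", "another sci-fi title"],
}

def suggest(viewing_history):
    """Return suggestions based on what the visitor looked at previously."""
    suggestions = []
    for item in viewing_history:
        if item in RELATED:                    # if the visitor viewed this item...
            suggestions.extend(RELATED[item])  # ...then surface the related items
    return suggestions

# The "tracking" part: items the visitor viewed earlier in the session.
history = ["running shoes", "sci-fi movie"]
print(suggest(history))
# ['athletic socks', 'fitness tracker', 'space documentary', 'another sci-fi title']
```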

A.I. algorithms have become so complex with machine learning and neural networks that in May of this year the European Union’s General Data Protection Regulation (GDPR) goes into effect after decades in the making. The regulation sheds light on the “black box” notion of the algorithms and gives E.U. citizens the right to know how the algorithms work when machines make decisions that affect their lives.

(Photos, left to right: Stephen Hawking, Elon Musk, Bill Gates)
This is a step in the right direction, but a herculean challenge for the likes of Facebook, Google, Microsoft and other tech giants entrenched in A.I., simply because no one is clear on exactly how the algorithms work, they are too complicated to understand, or they are proprietary algorithms that companies want to keep secret.

In addition, the AI Now Institute at New York University, a research institute examining the social implications of artificial intelligence, recently applauded New York City for becoming the first city in the nation to take up the issue of algorithmic accountability when it set up its Automated Decision Making Task Force.

“The task force is required to present the Mayor and ultimately the public with recommendations on identifying automated-decision systems in New York City government, developing procedures identifying and remedying harm, developing a process for public review, and assessing the feasibility of archiving automated decision systems and relevant data,” according to the letter sent to Mayor de Blasio by AI Now outlining the mission of the task force.

As we get more secure and comfortable with devices like Siri, Alexa, Google Home, Cortana and the Internet of Things, can we implicitly rely on information from these devices? Suppose these devices and others to come were hacked and used for nefarious agendas. What if personal bias were inadvertently or intentionally programmed into the algorithms?

In Chapter 8 of his book Future Crimes, “In Screen We Trust,” Marc Goodman writes that every screen is hackable and that “whether or not you realize it, your entire experience in the online world and displayed on digital screens is being curated for you.”

We recently experienced a similar catastrophic event when Russian interlopers falsely altered our social media to sway public opinion from one candidate to another during the 2016 presidential campaign. Imagine if many of the devices we depend on, especially our mobile phones, were hacked and their algorithms changed to spawn a different result. The ensuing scenarios are unthinkable and ultimately uncontrollable.

In addition, research from both the National Academy of Sciences and the American Institute for Behavioral Research and Technology has shown that Google search results could shift voting preferences by 20 percent or more, and by up to 80 percent in certain demographic groups.

Algorithmic bias has recently come under scrutiny by various researchers. The AI Now Institute is working with the ACLU because of the high-stakes decisions that impact criminal justice, law enforcement, housing, hiring, and education, to name a few.

“Algorithms that may conceal hidden biases are already routinely used to make vital financial and legal decisions,” according to Will Knight in his MIT Technology Review article, “Biased Algorithms Are Everywhere, and No One Seems to Care,” July 12, 2017. “Proprietary algorithms are used to decide, for instance, who gets a job interview, who gets granted parole, and who gets a loan.”

Probably the most challenging and potentially catastrophic real-world problem facing A.I. and its algorithms today is what I call algorithm interactivity. Organizational theorist Charles Perrow describes the underlying condition in his seminal book, Normal Accidents: Living with High-Risk Technologies: complex systems that are tightly coupled and designed to interact with each other immediately.

One such incident occurred on Wall Street in May 2010, when a cascading chain reaction of algorithmic buys and sells caused the Dow Jones Industrial Average to drop 1,000 points in twenty minutes, according to James Barrat in his book, Our Final Invention: Artificial Intelligence and the End of the Human Era.

The catastrophe started when a frightened trader ordered the immediate sale of $4.1 billion in futures contracts and ETFs (exchange-traded funds) related to Europe, wrote Barrat. At the time, Greece was having trouble financing its national debt, and the debt crisis had weakened the European and US economies.

“After the sale, the price of the futures contracts (E-Mini S&P 500) fell 4 percent in four minutes. High-frequency trade algorithms (HFTs) detected the price drop. To lock in profits, they automatically triggered a sell-off, which occurred in milliseconds (the fastest buy or sell order is currently three milliseconds—three one-thousandths of a second). The lower price automatically triggered other HFTs to buy E-Mini S&P 500, and to sell other equities to get the cash to do so. Faster than humans could intervene, a cascading chain reaction drove the Dow down 1,000 points. It all happened in twenty minutes,” Barrat wrote.

Perrow called the problem “incomprehensibility,” according to Barrat: the incident is unexpected and incomprehensible for some critical period of time. No one anticipated how the Wall Street algorithms would interact with each other, and so the event was incomprehensible and unstoppable.
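
To see how tightly coupled algorithms can feed on one another, here is a toy simulation in Python. The prices, thresholds and market-impact numbers are invented and bear no resemblance to real high-frequency trading systems; the point is only the feedback loop, where each automated sale pushes the price down far enough to trigger the next.

```python
# Toy simulation of a cascading sell-off among tightly coupled trading algorithms.
# Prices, thresholds and impact figures are invented for illustration only.

def simulate_cascade(start_price, shock, triggers, impact_per_sale=0.03):
    """Each bot sells once its drop threshold is crossed; each sale drops the price further."""
    price = start_price * (1 - shock)  # the initial large sale knocks the price down
    sold = set()
    history = [start_price, price]
    changed = True
    while changed:
        changed = False
        for bot, drop_threshold in enumerate(triggers):
            drop = (start_price - price) / start_price
            if bot not in sold and drop >= drop_threshold:
                sold.add(bot)                   # this bot dumps its position...
                price *= (1 - impact_per_sale)  # ...which pushes the price down further
                history.append(price)
                changed = True
    return history

# A single 4 percent shock trips the most nervous bots, whose sales then trip all the rest.
prices = simulate_cascade(start_price=100.0, shock=0.04,
                          triggers=[0.01, 0.03, 0.05, 0.08, 0.12, 0.15])
print([round(p, 2) for p in prices])
```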

How do we solve these problems and not build A.I. machines that cause more harm than good? The EU regulation now going into effect, AI Now’s algorithmic accountability initiatives and the slew of others that will follow will shed some light on these black boxes and make their creators accountable. But will they make algorithms more vulnerable to hacking or copying once organizations are required to publicly reveal how they work?


What’s apparent is that A.I. algorithms need to be highly secured with blockchain-based architectures, Distributed Ledger Technology and other technologies to come, so they cannot be hacked and changed by bad actors.
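
Whatever the eventual architecture, one basic primitive behind that kind of tamper evidence is the cryptographic hash: record a fingerprint of the deployed algorithm or model file, anchor it somewhere an attacker cannot quietly rewrite (a distributed ledger being one option), and verify it before every run. A minimal sketch in Python, with an invented file name, might look like this:

```python
# Minimal tamper-evidence sketch: fingerprint a deployed model/algorithm file and
# verify it before use. The file name is invented; a production system would anchor
# the recorded hash in a distributed ledger or other append-only store.

import hashlib
from pathlib import Path

def fingerprint(path):
    """Return the SHA-256 hex digest of a file's contents."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

# At deployment time, record the known-good fingerprint somewhere tamper-evident.
KNOWN_GOOD = fingerprint("model.bin")

# Before every run, refuse to execute an algorithm whose contents have changed.
def verify_before_run(path):
    if fingerprint(path) != KNOWN_GOOD:
        raise RuntimeError(f"{path} does not match its recorded fingerprint; refusing to run")
    print(f"{path} verified; safe to load")

verify_before_run("model.bin")
```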

Algorithm accountability means the black-box aspect of this technology needs to be explained and made public so that it is clearly understood, but not made vulnerable to hackers because of its transparency. Accountability is paramount to ensure that machine-made decisions are not made with human biases that were inadvertently or intentionally programmed into the systems.

Interactivity may be the biggest challenge yet in A.I., if we are to prevent the kind of buy/sell frenzy experienced on Wall Street. Perhaps we need to create an A.I. system that can test algorithm interactivity against multiple scenarios in real time as new data flows into the system.

In any event, A.I. is a powerful new technology that should be created with safeguards to ensure it works for the good of all of us.

Thursday, November 9, 2017

Big Data is the key to Artificial Intelligence

By Anthony S. Policastro

Have you ever thought about all the data your business is capturing on an hourly, daily or weekly basis? It is probably incomprehensible in light of the channels and volume of information captured 24/7.

The overall, high-level purpose of mining all this structured and unstructured data from your CRM, sales, marketing and advertising channels, and most recently from IoT devices, is to garner insights into your customers, competitors and potential market trends.

It is not humanly possible to categorize and find insights in these oceans of data quickly enough for the information to remain relevant.


With all that data, the teams of data analysts that companies rely on today to interpret the data simply can’t keep pace with the volume.

The real challenge is merging all the analysis together to get a 360-degree contextual picture of your customers, potential purchases and market trends.


Apple's Steve Jobs once said during an interview,
“I remember reading an article when I was about twelve years old. I think it might have been Scientific American, where they measured the efficiency of locomotion for all these species on planet earth. How many kilocalories did they expend to get from point A to point B? And the condor won, came in at the top of the list, surpassed everything else. And humans came in about a third of the way down the list, which was not such a great showing for the crown of creation. But somebody there had the imagination to test the efficiency of a human riding a bicycle. A human riding a bicycle blew away the condor all the way off the top of the list. And it made a really big impression on me that we humans are tool builders. And that we can fashion tools that amplify these inherent abilities that we have to spectacular magnitudes. And so for me, a computer has always been a bicycle of the mind. Something that takes us far beyond our inherent abilities.”

Artificial Intelligence (AI) is the new bicycle bridging the virtual world with the physical, and big data is the fuel and lifeblood of AI.


With recent advancements in computer processing, data storage and better machine-learning algorithms, it is possible to ingest and analyze more data than ever before. At the same time, there is a connectivity boom as more and more devices and apps connect to the Internet, producing even more data.

With these advances, it is now possible to feed your big data into an AI engine and let machine learning mine the precious insights, predictions and next course of action. We can now teach machines through supervised learning instead of explicitly programming them, and they then improve on their own through trial and error. That’s why having large amounts of data is more important than ever. The more data AI has, the more accurate it will become.
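
As a minimal sketch of what teaching a machine through supervised learning looks like in practice, here are a few lines of Python using scikit-learn on made-up customer data; the features, labels and library choice are illustrative assumptions, not any particular vendor's AI engine.

```python
# Minimal supervised-learning sketch with scikit-learn.
# The data is made up: each row is [visits_last_month, avg_order_value],
# and the label says whether the customer made a repeat purchase (1) or not (0).

from sklearn.linear_model import LogisticRegression

X = [[1, 20], [2, 35], [8, 120], [12, 90], [0, 15], [9, 200], [3, 40], [15, 60]]
y = [0, 0, 1, 1, 0, 1, 0, 1]

model = LogisticRegression()
model.fit(X, y)  # "teach" the model from labeled history instead of writing if/then rules

# Predict for two new customers the model has never seen.
print(model.predict([[1, 25], [10, 150]]))   # e.g. [0 1]
print(model.predict_proba([[10, 150]]))      # the confidence behind the prediction
```

The more labeled examples the model sees, the better its predictions tend to get, which is exactly why the volume of data matters.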

Data is now more valuable than oil

The Economist says the world's most valuable resource is no longer oil, but data.
"As devices from watches to cars connect to the internet, the volume is increasing: some estimate that a self-driving car will generate 100 gigabytes 
per second. Meanwhile, artificial-intelligence (AI) techniques such as machine learning extract more value from data. Algorithms can predict when a customer is ready to buy, a jet-engine needs servicing or a person is at risk of a disease. Industrial giants such as GE and Siemens now sell themselves as data firms."
The International Data Corporation (IDC) predicts that 44 zettabytes of data will be generated by 2020 (a zettabyte is a trillion gigabytes).

Trends that will shape Big Data and AI in 2017

TechRepublic, a resource for IT decision makers, says there are five major big data trends to watch in 2017.
  1. AI and machine learning will increase the need for big data analytics.
  2. Self-service big data tools, even for beginners, are hitting the web.
  3. Analytics is struggling to keep up, even with big data frameworks like Hadoop and Spark.
  4. Data cleansing will become a prominent industry, as AI is only as effective as the data it ingests.
  5. Democratization of data: serverless micro-architectures will allow data to be accessed, analyzed and managed without servers, from anywhere, by anyone.
AI is ubiquitous and growing

No matter what you do, AI will eventually touch every aspect of your life. AI, machine learning and deep learning are making big impacts on business and your personal life from simple chatbots to self-driving cars.

Many people use these terms interchangeably, but they are different.

  • AI is defined as the capability of a machine to imitate intelligent human behavior.

Examples are computer chess and most chatbots, where the AI is programmed ONLY to play chess or to answer a specific subset of questions, like customer support issues or a back-to-school sale.

  • Machine Learning (ML) is a subset of AI designed to analyze large sets of data and learn from them. ML allows computers to learn to complete a task without being explicitly programmed to do it.

ML understands speech and can make predictions based on the data it analyzes.

  • Deep Learning (DL) is a subset of ML that uses neural networks to learn the characteristics of something, such as the features of a face for facial recognition.

Google DeepMind's AlphaGo used DL to beat 18-time Go world champion Lee Sedol in 2016. AlphaGo studied 30 million human Go moves and then improved by playing against itself.

Google Translate can now teach itself to translate between languages it hasn't been explicitly trained on, using the deep-learning-based Google Neural Machine Translation (GNMT) system. The new system improves translation quality and enables “Zero-Shot Translation”: translation between language pairs never seen explicitly by the system.
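
As a small, concrete illustration of the neural-network idea behind DL, here is a sketch using scikit-learn's MLPClassifier on its built-in handwritten-digit images. It is a toy compared with AlphaGo or GNMT, and the dataset and settings are simply convenient choices for illustration, but the principle is the same: the network learns the characteristics of each class from examples rather than from hand-written rules.

```python
# Minimal neural-network sketch: scikit-learn's MLPClassifier on 8x8 images of digits 0-9.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()  # small built-in dataset of handwritten digits
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
net.fit(X_train, y_train)  # learn digit characteristics from labeled pixel values

print("test accuracy:", net.score(X_test, y_test))          # typically well above 0.9
print("prediction for first test image:", net.predict(X_test[:1]))
```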



Saturday, July 29, 2017

Is In-store customer location tracking similar to digital website analytics?

By Anthony S. Policastro

My wife and I were shopping in one of our favorite department stores when I noticed a small, nondescript sign between the racks that read, "Free WiFi - sign in to get discounts."

(BTW: that's not my wife on the left; that's our dog, Nickie, in case you were wondering.)

So I pulled out my iPhone and logged in. A coupon popped up on the screen, "15% off your entire order - today only!"

I was thrilled. My wife was looking at several blouses she liked, but she thought the price was too high, so I told her to pick one out.

What just happened is a harbinger of in-store interactive, personalized marketing in its infancy. I say infancy because logging into a WiFi network will soon be tantamount to taking a horse-drawn wagon across the country rather than a commercial jet.

What is currently happening in some retail stores, and what I envision will eventually happen in all stores in the not-too-distant future, is the following scenario:

My wife and I walk into a department store. The store's WiFi detects our presence from the store's app or MAC addresses on our phones or smartwatches. We had agreed when we loaded the app to allow detection because the feature would provide discounts. We wander over to the men's jeans department.

A text arrives, "Hello Anthony. Interested in jeans today? Buy two pair and get the third pair half off." I decided not to buy the jeans, as good as the deal was at the moment.

We head over to the women's section and my wife is looking at dresses. She picks one off the rack.

"Do you like it?" she asks holding it up to her shoulders.

"Yeah, the color accents your hair. You should get it," I said.

"Maybe, it's a bit expensive."

My wife's phone chimes. She takes it out her pocket and reads the text, "Look at that!"

She shows me the text; it was from the store, "Hello Joann. Looking for a new dress? Take 15% off for being a loyal customer."

She buys two dresses.

At home, I grab my tablet and check ESPN for the latest basketball scores for my favorite teams. A display ad from the store we had just visited pops up with the message, "Just for you. Buy two pair of jeans, get the third on us. Today and tomorrow only."

Spooky, but we had agreed to allow the store to access our devices and to receive messages.

What transpired is just one future scenario in marketing. The WiFi in the store detected when we walked into the store, and Near Field Communication (NFC), Bluetooth technology or iBeacons detected when we were near specific clothing racks.

It was like having a virtual digital salesperson standing near us with the power to give discounts to close a sale, except the salesperson is an algorithm. I purchase all my jeans at this particular store, and my purchase history marks me in the store's database as a frequent buyer of jeans. The algorithm detected my profile and pushed the text message discount to my phone.
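
Here is a toy sketch of the kind of logic such a virtual salesperson might run, written in Python. The device IDs, profiles and offer rules are invented for illustration; a real system would also handle consent records, opt-outs, rate limiting and far richer purchase histories.

```python
# Toy sketch of an in-store offer-push algorithm: detect a known device near a
# department, look up the shopper's profile, and choose an offer (or stay silent).
# All device IDs, profiles, departments and rules are invented for illustration.

PROFILES = {
    "aa:bb:cc:01": {"name": "Anthony", "opted_in": True, "frequent_buyer_of": "jeans",   "loyalty": "standard"},
    "aa:bb:cc:02": {"name": "Joann",   "opted_in": True, "frequent_buyer_of": "dresses", "loyalty": "high"},
}

def choose_offer(device_id, department):
    """Pick a message for a detected device near a given department, or None."""
    profile = PROFILES.get(device_id)
    if not profile or not profile["opted_in"]:
        return None  # unknown device or no consent: send nothing
    if profile["loyalty"] == "high":
        return f"Hello {profile['name']}. Take 15% off for being a loyal customer."
    if profile["frequent_buyer_of"] == department:
        return f"Hello {profile['name']}. Buy two pairs of {department}, get the third half off."
    return None

# The store WiFi or beacons report which device was detected near which department.
print(choose_offer("aa:bb:cc:01", "jeans"))
print(choose_offer("aa:bb:cc:02", "dresses"))
```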

This is tracking of customer behavior similar to website analytics only in the physical realm.

My wife, Joann, has made many more purchases at this store than I have over the years, so her profile is that of a high value, loyal customer hence the 15% discount on the dresses.

All sorts of retailers — including national chains, like Macy's, Nordstrom, American Eagle, Family Dollar, and Cabela’s among others — have been testing these technologies as early as 2013 and using them to optimize store layouts or offer customized coupons, according to The New York Times.

Screenshot of Nomi Technologies' in-store customer analytics dashboard
One company, Nomi Technologies, which provides technology to track customers in stores, recently settled with the Federal Trade Commission (FTC) over allegations that it misled consumers about its in-store tracking, according to a report by the International Business Times.

"Nomi previously defended its use of phone-tracking tech, telling the New York Times in July 2013 that offering retailers the ability to keep track of a shoppers' habits is similar to the way Amazon and other online retailers use cookies to keep track of their customers," reported the website, circa.

According to a video on Nomi's website, their technology can track the number of customers walking into the store, track where they browse and push relevant messages out to their smart phones.

While most consumers are OK with being tracked online through cookies, database profiles of their buying habits and the cookie matching used by most e-commerce retailers, there are those who bristle with anger and fear over being tracked physically.

In a March 2014 survey by Chicago-based OpinionLab, reported by Adweek, consumers said they feel this way about in-store tracking:
  • Eight out of 10 consumers don't want to be tracked without giving their explicit consent
  • 64 percent said they should only be tracked if they opt-in or sign up to participate in a program
  • 24 percent believe retailers shouldn't do any in-store tracking at all
  • Promises of a better shopping experience didn't change consumers' minds with 88 percent saying it wouldn’t make any difference
  • Discounts or free products would sway consumers towards acceptance of tracking
  • 81 percent do not trust retailers to keep their data private and secure
The study was based on feedback from 1,042 consumers.

In-store tracking won't go away. What will foster its widespread acceptance are the incentives retailers offer to convince consumers to buy in: a great discount or free merchandise in exchange for a little less privacy.

What's your take on in-store tracking? Do you feel it is a violation of your privacy, or are you OK with it? Feel free to leave a comment.