AI is People... at least for now

Artificial Intelligence (AI) is the demonstration of intelligence by machines, usually identified by the performance of functions such as learning and problem solving.  My opinion is that we are seeing very advanced forms of Machine Learning (ML), but because ML has advanced so significantly that it is treated as a form of AI, the definition of AI has expanded beyond the old-school benchmark of being able to pass the Turing test.  (This is the test Alan Turing developed in 1950 to measure a machine's ability to exhibit intelligent behavior equivalent to a human's.)

The big tech giants (Facebook, Apple, Amazon, Google, Microsoft, Symantec, Siemens, Juniper, and Cisco) and many others constantly advertise their use of AI across their different projects.  Recently, it was reported that Facebook, Apple, Amazon, and Google were using people to monitor their voice recognition software, which powers their home devices (Echo, Siri, Google Home) and messenger services (where voice translation is required for Facebook Messenger).  These software systems are often referred to as a form of AI for these companies (thanks to their marketing teams).  The ability to read your messages and searches might have been buried in some fine-print section of the End User License Agreement (EULA), but we all know NOBODY ever reads those things.  Therefore, it can be assumed that no clear approval was given for this monitoring by these home recorders.  That explains why the exposure of this monitoring of personal queries and conversations resulted in such a backlash online that these organizations backpedaled quickly from the practice of “voice recognition refinement.”  While this practice might serve as a practical application of human intervention in the automation improvement process, it confirms my opinion that no matter how advanced computer learning algorithms become, human oversight and intervention remain essential if we want the product to operate safely.

Social Media

Facebook is notorious for having developed an algorithm designed to keep people in their self-interested, protected bubbles.  People like to see what they want to see and maintain their own perception bias.  To keep this process going, Facebook has been documented as employing personnel in the Philippines to act as content moderators for its “platform” and ensure inappropriate material is not promoted online.  These people monitor the monitoring program already built to check for racist, offensive, vulgar, and inappropriate material.  Although the many accepted versions of AI (really Machine Learning) help the human operators keep up with all the data online, it is very obvious that the true moderator of the AI system is the actual person double-checking the work of the computer.  Keeping a person in the loop is very advantageous for eliminating error if the practice is exercised with practicality, but unfortunately it usually is not.  This was proven when a news article revealed that Facebook was using people to monitor Messenger audio sessions to improve the software’s transcription feature, without user permission or prior consent.
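The workflow described above — an automated filter doing the first pass, with a person double-checking its calls — can be sketched in a few lines of Python.  Everything here is hypothetical (the word list, the scores, the threshold); real moderation systems are vastly more complex, but the shape of the loop is the same:

```python
# A minimal, hypothetical sketch of a human-in-the-loop moderation queue:
# an automated filter scores each post, confident hits are removed
# automatically, and borderline cases are routed to a human reviewer.

def auto_score(post):
    # Stand-in classifier: the fraction of flagged words present.
    banned_words = {"offensive", "vulgar"}
    hits = sum(word in post.lower() for word in banned_words)
    return hits / len(banned_words)

def moderate(posts, review_threshold=0.5):
    removed, needs_human_review = [], []
    for post in posts:
        score = auto_score(post)
        if score >= 1.0:
            removed.append(post)              # machine is sure: remove
        elif score >= review_threshold:
            needs_human_review.append(post)   # a person double-checks
    return removed, needs_human_review

removed, queue = moderate([
    "Have a nice day",
    "This is vulgar",
    "offensive and vulgar content",
])
```

The point of the sketch is the second branch: the algorithm's uncertain middle ground is exactly where the paid human moderators live.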

Online Videos

As a subdivision of Google, YouTube is very well documented as a “platform” requiring content moderation, and it is another organization subscribing to the “person in the loop” model of reviewing the content posted to its site.  While there is a desire to protect freedom of speech, preventing the sharing of extremely offensive or hateful information is in the best interests of society.  Unfortunately, what is in the best interest of society is NOT in the best interest of a corporation whose allegiance is to increasing shareholder value.  YouTube’s algorithm is part of the problem because it relies on “shock porn” to keep viewers watching the “platform,” which exposes them to more advertisements and perpetuates Google’s advertising revenue model.  Because of this shock-material viewing, if allowed to play continuously, YouTube’s algorithm will progress to more and more outlandish, “can’t turn away” videos, which have been linked to the spreading of conspiracy theories and have become a safety and health hazard.  These theories include anti-vaxxer claims (debunked over and over again, with the doctor who wrote the original article losing his medical license), alt-right propaganda, aliens, and many other outrageous ideas.  With this down-the-rabbit-hole algorithm, the AI used by the system needs constant babysitting by humans because it does NOT moderate itself away from offensive or proven-untrue theories.
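The down-the-rabbit-hole dynamic is easy to see in miniature.  Here is a hedged, purely illustrative sketch (the video list and watch-time numbers are invented) of a recommender that optimizes only for engagement — nothing in the scoring function penalizes sensational or untrue content, so the most outrageous item wins by construction:

```python
# Hypothetical engagement-maximizing recommender: it always picks the
# candidate with the highest predicted watch time, with no penalty for
# sensationalism or accuracy.

videos = [
    {"title": "Cooking basics",               "predicted_watch_minutes": 4},
    {"title": "Ten weird tricks",             "predicted_watch_minutes": 9},
    {"title": "SHOCKING conspiracy exposed",  "predicted_watch_minutes": 14},
]

def engagement_score(video):
    # The score rewards watch time alone; truthfulness never enters
    # the objective, so shock content rises to the top by default.
    return video["predicted_watch_minutes"]

def recommend_next(candidates):
    return max(candidates, key=engagement_score)

print(recommend_next(videos)["title"])  # the most sensational title wins
```

This is why the human babysitting matters: the correction for "offensive or proven untrue" has to come from outside the objective function, because the objective function itself is what steers viewers down the hole.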

The So-What: Soylent Green

The 1973 American post-apocalyptic sci-fi film starring Charlton Heston revolved around a detective discovering the secret ingredient behind the extremely popular food source Soylent Green.  If you are a Terminator/Skynet-fearing person who buys into concerns about where we are heading with technology, the status quo might either make you happy or scare you.  People are still acting as the puppet masters behind our future computer-dominating overlords, but the computers are advancing and learning more every day.

The sad truth is that one day the tables may turn.  These ever-improving algorithms already influence our Facebook feeds, YouTube views, and Google advertisements.  Promoting human oversight is critical to ensuring that technology doesn’t run wild in a direction that is not beneficial to societal interests.  Being transparent about this process enables a better understanding of the technology we are using and of its actual capabilities and limitations.  Because advertisers are only trying to sell products rather than be honest, they have blurred what was once a simple distinction between Artificial Intelligence and Machine Learning.  **MY OPINION** Because what we have are complex sets of well-developed decision trees (If-Then statements) combined with massive data storage of their results, we really only have very well-built Machine Learning systems.  Because these systems cannot create and make decisions of their own volition, they are not actual intelligence, just very capable tools.  Since these are not actually intelligent systems, people are required to monitor and moderate them; in some cases, more monitoring and moderation is required than in others.
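The "decision trees plus massive data storage" view above can be sketched concretely.  This is a deliberately toy example (the rules and labels are invented): nested If-Then statements make the call, and a store records every result — no volition, no understanding, just rules and memory:

```python
# Toy illustration of the opinion above: "ML" as hand-built If-Then
# rules plus stored results.  All rules and labels are hypothetical.

memory = {}  # the "massive data storage" of past decisions, in miniature

def classify_message(text):
    # A tiny decision tree: nothing but nested if-then branches.
    text_lower = text.lower()
    if "free money" in text_lower:
        label = "spam"
    elif "meeting" in text_lower:
        label = "work"
    else:
        label = "unknown"
    memory[text] = label  # record the decision for later human review
    return label

print(classify_message("Claim your FREE MONEY now"))  # spam
print(classify_message("Team meeting at 3pm"))        # work
```

Real ML systems learn their branch points from data rather than having them typed in, but the sketch captures the argument: a system like this cannot decide anything it was not built to decide, which is exactly why the humans stay in the loop.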

Closing out: AI (currently ML) is just like Soylent Green.  As Charlton Heston proclaimed so forcefully at the end of the movie concerning Soylent Green, “It’s people!”