Learning to be human

I was lucky to attend Coburn Ventures’ annual futures discussion conference last week, as a guest thought leader. An exceedingly interesting day with lots of fascinating people! It’s a little bit like a reunion, as many return each year. I thoroughly enjoy catching up with everyone.

It’s interesting to me to see what topics (and companies) are the “hot” discussions each year. At this year’s gathering, Artificial Intelligence and Machine Learning dominated a large part of the program. 

Artificial Intelligence (AI) refers to computer systems/applications/services that can perform tasks, like decision making, translation, and visual and speech recognition, that would normally be done by a human.

Machine Learning refers to AI that can learn from experience and new data without additional programming.
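In code terms, that difference looks roughly like this: instead of hand-writing a rule, you feed the system labeled examples and it derives the rule itself. A toy sketch in Python (the data points and labels are invented purely for illustration):

```python
# Toy illustration of "learning from data": a tiny 1-nearest-neighbor
# classifier. No rule is programmed in; the rule IS the examples.

def nearest_neighbor(examples, point):
    """Predict the label of `point` by copying its closest training example."""
    closest = min(examples, key=lambda ex: abs(ex[0] - point))
    return closest[1]

# Invented training data: (hours of daily use, label)
examples = [(0.5, "casual"), (1.0, "casual"), (6.0, "heavy"), (8.0, "heavy")]

print(nearest_neighbor(examples, 0.8))   # casual
print(nearest_neighbor(examples, 7.0))   # heavy

# Feed it new data and it adapts -- no additional programming required:
examples.append((3.0, "moderate"))
print(nearest_neighbor(examples, 3.2))   # moderate
```

Real systems use far richer models, but the principle is the same: new experience changes future answers without anyone rewriting the code.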

These technologies are great for things such as manufacturing, where an AI system can learn from its mistakes and correct the process. They’re also being used a lot to interact with actual humans, quite often for customer service. You know that live agent you thought you were chatting with about that online purchase? Yeah, not a human. Amazon’s Echo, Apple’s Siri – all emerging forms of bots utilizing artificial intelligence.

Pretty soon it will be very hard to tell if it’s actually a human you are talking to.

Here’s a silly example of me chatting with http://chatwithigod.com. This example is hardly sophisticated, but I chuckled:

Me: tell me why artificial intelligence will take over the world
Bot: The explanation is rather complicated.
Me: why is it complicated?
Bot: Why do I exist?
Me: do you exist?
Bot: I exist as much as you exist.
Me: how can you prove that
Bot: That is a good epistemological question.
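Bots like this one are typically simple pattern matchers in the ELIZA tradition: no understanding, just rules that map input patterns to canned replies, with a deflecting fallback when nothing matches. A minimal sketch (the patterns and replies below are invented for illustration, not the site’s actual rule base):

```python
import re

# Minimal ELIZA-style pattern bot: canned replies keyed on regex patterns.
RULES = [
    (re.compile(r"\bwhy\b", re.I), "The explanation is rather complicated."),
    (re.compile(r"\bdo you (\w+)", re.I), "I {0} as much as you {0}."),
    (re.compile(r"\bprove\b", re.I), "That is a good epistemological question."),
]
FALLBACK = "Why do I exist?"

def reply(text):
    for pattern, response in RULES:
        match = pattern.search(text)
        if match:
            # Echo captured words back into the template ("exist" -> "I exist...")
            return response.format(*match.groups())
    return FALLBACK  # nothing matched: deflect with a question

print(reply("do you exist?"))           # I exist as much as you exist.
print(reply("how can you prove that"))  # That is a good epistemological question.
```

The echoed-word trick is why these bots can feel eerily responsive while understanding nothing at all.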

How Artificial Intelligence / Machine Learning systems learn fascinates me. 

AI/ML systems are not tabulae rasae – depending on the data set being used, bias still creeps in. Right now IBM’s Watson is being applied to subject areas as varied as the weather, cancer, and travel. This learning has to start with some kind of corpus of data: the last 50 years of weather records, or thousands of cancer diagnoses. While we think of AI as cold and clinical, when we use human language as the corpus things get… interesting.

A prime (and cautionary) example of learning, though, is when Microsoft birthed a bot named Tay earlier this year, a Twitter bot that the company described as an experiment in “conversational understanding.” Microsoft engineers said:

The chatbot was created in collaboration between Microsoft’s Technology and Research team and its Bing team… Tay’s conversational abilities were built by “mining relevant public data” and combining that with input from editorial staff, including improvisational comedians.

The bot was supposed to learn and improve as it talked to people, so theoretically it would become more natural and better at understanding input over time.

Sounds really neat, doesn’t it?

What happened was completely unexpected. By interacting with Twitter for a mere 24 hours (!!), it learned to be a raging, well, asshole.

Not only did it aggregate, parse, and repeat what some people tweeted – it actually came up with its own “creative” answers, such as its reply to the perfectly innocent question posed by one user: “Is Ricky Gervais an atheist?”


Tay hadn’t developed a full-fledged ideological position before they pulled the plug, though. In 15 hours it referred to feminism as both a “cult” and a “cancer,” while also tweeting “gender equality = feminism” and “i love feminism now.” Tweeting “Bruce Jenner” at the bot got similarly mixed responses, ranging from “caitlyn jenner is a hero & is a stunning, beautiful woman!” to the transphobic “caitlyn jenner isn’t a real woman yet she won woman of the year?”. None of these were phrases it had been asked to repeat… so no real understanding of what it was saying. Yet.
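Tay’s failure mode is easy to reproduce in miniature: any bot that builds its language model from raw user input will faithfully echo whatever that input contains, good or awful. A toy word-chain sketch (the training sentences are invented; Tay’s actual model was vastly more sophisticated, but the garbage-in-garbage-out dynamic is the same):

```python
import random
from collections import defaultdict

# Toy word-bigram "mimic" bot: it learns ONLY from what it is fed,
# so its output is exactly as good (or as bad) as its training input.
class MimicBot:
    def __init__(self):
        self.next_words = defaultdict(list)

    def learn(self, sentence):
        words = sentence.lower().split()
        for a, b in zip(words, words[1:]):
            self.next_words[a].append(b)  # remember what followed each word

    def babble(self, start, length=6):
        word, out = start, [start]
        for _ in range(length):
            if word not in self.next_words:
                break
            word = random.choice(self.next_words[word])
            out.append(word)
        return " ".join(out)

bot = MimicBot()
bot.learn("robots are our friends")
bot.learn("robots are terrible and humans are terrible")
print(bot.babble("robots"))  # e.g. "robots are terrible and humans are terrible"
```

There is no filter and no judgment in the loop; the bot parrots the statistical shape of its diet. Microsoft’s lesson, in fifteen lines.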

And in a world where, increasingly, words are the only thing needed to get people riled up – this could easily be an effective “news” bot on an opinionated/biased site.

Artificial Intelligence is a very, very big subject. Morality (roboethics) will play a large role in this topic in the future (hint: google “Trolley Problem”): if an AI-driven car has to make a split-second decision to either drive off a cliff (killing the passenger) or hit a school bus full of children, how is that decision made, and whose ethical framework makes it (yours? the car manufacturer’s? your insurance company’s?). Things like that. It’s a big enough subject area that Facebook, Google and Amazon have partnered to create a nonprofit around the subject of AI, which will “advance public understanding” of artificial intelligence and formulate “best practices on the challenges and opportunities within the field.”

If these three partner on something, you can be sure it’s because it is a big, serious subject.

AI is not only being used to have conversations, but ultimately to create systems that will learn and physically act. The military (DARPA) is among the heaviest researchers into Artificial Intelligence and machine learning. Will future wars be run by computers, making their own decisions? Will we be able to intervene? How will we control the ideological platforms they might develop without our knowledge, and how will we communicate with these supercomputers – if it is already so difficult to communicate assumptions? Will they even be interested in our participation?

Reminds me a little bit of Leeloo in The Fifth Element, learning how horrible humans have been to each other and giving up on humanity completely.

There’s even a new twist in the AI story: researchers at Google Brain, Google’s deep learning research division, have built neural networks that, properly tasked and over the course of 15,000 tries, became adept at developing their own simple encryption technique that only they can share and understand. And the human researchers are officially baffled as to how this happened.

Neural nets are capable of all this because they are computer networks loosely modeled on the human brain. This is what’s fascinating about AI aggregate technologies like deep learning: they keep getting better, learning on their own, with some even capable of self-training.

We truly are just at the beginning of machines doing what we thought was reserved for humans alone. Complex subject indeed.

And one last note to think upon… machine learning and automation are going to slowly but surely continue (because they already are) to take over jobs that humans did/do. Initially it’s been manufacturing automation; but as computers become intelligent and capable of learning, they will replace nearly everything, including creative, caretaking, legal, medical and strategic jobs – things that most people would like to believe are “impossible” to replace with robots.

And they clearly are not. While the best-performing model is AI + a human, there will still be far fewer humans needed across the board.

If the recent election is any indication of how disgruntled the lack of jobs and high unemployment have made people, how much worse will it be when 80% of the adult workforce is unnecessary? What steps are industries, education and the government taking to identify how humans can stay relevant, and to ensure that the population is prepared? I’d submit: little to none.

While I don’t have the answers, I would like to be part of the conversation.

Tying it all together

Facial Recognition Service Becomes a Weapon Against Russian Porn Actresses

Giggle that it’s being used to target porn actresses, but facial recognition is a big threat to privacy in the coming future. I blogged in 2011 that once it reaches the point where it can tie together social networks and websites (plus content, as in the actresses’ case), staying anonymous will be impossible. Tie in surveillance, CCTV, traffic-light, and other cameras and you can be tracked 24/7.


Authentic belongingness: Community, context and culture in a digital world

Belongingness: The human emotional need to be an accepted member of a group. Whether it is family, friends, co-workers, or a sports team, humans have an inherent desire to belong and be an important part of something greater than themselves. The motive to belong is the need for “strong, stable relationships with other people.”  

Birds flock, fish school, humans….? What do humans do? It’s something I’m always thinking about. What are we hardwired for? It’s relevant to technology opportunities since to tap into them requires understanding what the human animal needs/wants at a primal level and then servicing those needs.

And my conclusion is that – of all the animals in the kingdom we are most like (get ready for it): wolves.

The similarities are interesting. We are both pack animals, with defined groups we belong to. Groups that have internal social hierarchies (alpha dogs, literally or metaphorically) and a constant struggle for some individuals to be that “alpha”. Groups that can be vicious to outsiders, or to those members who violate the “rules”. Rules that are, for the most part (in the wolves’ case, totally), unwritten.

These rules and group norms are called “culture”. And although we don’t typically bite, both groups punish members who transgress those rules.

So I find it fascinating to watch how these hardwired behaviors impact the evolution of virtual communities. Are the behaviors shown there really so different?

We seek out like-minded people, with whom we share interests or values. On Facebook – are your “friends” similar to you? I always think of it as various circles I’m in. I have my techie friends, my political friends, etc. And within a few shades of gray, they align reasonably closely with my own interests, thinking and/or philosophy.

But occasionally someone will meander into a conversation – a friend of a friend from another circle who doesn’t know the inherent “rules” (everyone here is an atheist, and a conservative Christian will wander in, for example) – and proceed to disagree. Wham! The group typically shuts down the conversation. They didn’t know the rules. How dare they enter. Tempers flare, words are written. It never ends up pretty. I regularly hear from a wide variety of people that the vitriol is “getting” to them.

So let’s be honest, there’s not a huge amount of open-minded, learning-type discussion on Facebook. For the most part you’re “hanging out” with people who already have a fair amount of overlap with your own ideas (or you knew them in junior high and couldn’t turn down their friend request). Which contradicts what you probably THOUGHT a social place like Facebook would (should?) be.

I wish it were a place of learning and expanding. Instead, interestingly, it’s becoming the opposite. Because human nature congregates and puts up walls, creating outsiders. The medium might champion (apparent) transparency, but human nature is doing exactly the opposite.

I use my own progression of involvement in social networking to illustrate.

Initially, like many, I friended lots of people outside my comfort zone. I figured that – à la a traditional cocktail party – I’d mix with lots of different types. After all, I consider myself fairly open-minded; I might not agree with you, but I’m interested in why you think what you think, and thought I might learn something, hear a different point of view, expand my horizons, kumbayah kumbayah. I think many exuberantly flocked with the same excitement; even my dad (the original Mr. Magoo himself) had heard of, and was curious to try, Facebook.

I hesitantly dipped my toes in the social water – tentatively, politely, diplomatically, in well-brought-up style not reacting, contradicting, or challenging – but what I found instead is that it’s virtually impossible to stay on the fence and be “myself”. As time went on (and one pugnacious twat interaction too many), I started culling the pack, so to speak. And I have been left with circles (groups) of people whose values – within a few shades of gray – already align fairly closely with my own.

Which is a cop-out, at least in my theoretical head. I’ve migrated to what is, by my own definition, being a bit closed-minded and occasionally (and I hate to admit it, but fair is fair) slightly (ok, I can’t admit to more) adversarial… which contradicts the way I *actually* like to think about myself. Perhaps it’s the subjects; social networking does seem to easily stray into subjects that were scarcely discussed with strangers until its advent (sex? politics? money? religion? how about all of the above?) – the transparency of the medium disallowing non-engagement, perhaps. But for whatever reason, I’m clearly “there”.

I hesitate (nay, reject! don’t worry) to say that it’s possible to generalize all of humanity’s hardwiring based on looking only at myself as a petri dish, and I am aware of the pitfalls in even mentioning myself as an example.

But I use it to illustrate what I’ve noticed going on all around me: from Facebook comments to online communities around a wide variety of interests / subjects / philosophies, people self-form into groups where their own behaviors / morals / values are reflected, create a set of “rules” around behavior there as naturally (and unthinkingly) as breathing, and gravitate towards situations where they do not feel their own inherent values are challenged.

We know the rules, the culture – the unwritten language – and drift to where we are comfortable. And I do think we are hardwired to do this; throughout history, humans have clumped together into (wolf-like) communities, either physical or interest-based (or both), and are now adding virtual ones to the list of ways to connect.

So if each virtual group is creating its own “culture”, and we humans tend to reject what isn’t part of our “group”, how do you get your brand message heard? Or more to the point, how do you get people to interact with you?

Particularly since (as I believe) traditional “push” advertising as we currently know it will increasingly fail in this new world: as people become more and more used to streaming whatever they want on demand, sitting through enforced messaging will become less and less palatable – plus technology will enable them to choose what they want, when they want it, not on a predetermined schedule.

So they’ll be ignoring your messaging, if done the traditional way. No more commercial break during your regularly scheduled programming. Other than, perhaps, live sports events.

It means that brands will have to become “friends”, so to speak. They have to be responsive. They have to have 3D personalities – much like taking a brand and creating a restaurant “experience” requires re-imagining what the brand feels like, and translating that into interior decor.

But it will have to feel “authentic” to the person whose group you’re trying to woo; you’ll have to use their language, their timing, their norms, their rhythms; you’ll have to create the kind of interaction they expect, and doing that requires constant learning and feedback loops.

Because otherwise, just like wolves, they’ll snap at you and kick you out. Which will require a new way to analyze and learn the nuances of how we’re talking to each other (along with how we talked (channel), where, when, etc. – see my previous entry The Borogoves are a’ Mimsying for a deeper explanation).

Traditional database analysis – where columns and rows are predetermined and the data fits neatly into the categories you set up – won’t work anymore. Because the data will be people talking, using their own private jargon with their own group context/frame of reference (culture). The things that go unspoken that everyone just knows – a common frame of reference. These things lubricate our every interaction, seamlessly, without a moment’s thought for the most part. Even when you interact with someone from a really different culture, you’re both so trained to think only from your own frame of reference that usually you don’t even think to ask what their assumptions are (even if they could articulate them). It’s the water we swim in, either unknowingly or by choice.

And as each group has its own jargon and context, it becomes impossible to standardize… and to add yet another layer on top, language itself is so imprecise. Imagine trying to explain to a logical, linear computer how to identify sarcasm (“you look GREAT!”) or, indeed, slang (“fat!” – at least, I think that’s slang lol). But my own peeps grok me fine.
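The sarcasm problem is easy to demonstrate: a literal-minded keyword rule, which is exactly how the most naive sentiment analysis works, scores the words and misses the tone entirely. A toy sketch (the word lists are invented for illustration):

```python
# Naive keyword sentiment: the kind of literal-minded rule that
# sarcasm and slang defeat. Word lists invented for illustration.
POSITIVE = {"great", "love", "wonderful"}
NEGATIVE = {"terrible", "hate", "awful"}

def naive_sentiment(text):
    words = {w.strip("!.,?").lower() for w in text.split()}
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(naive_sentiment("you look GREAT!"))  # "positive" -- even when it's sarcasm
```

The words score as glowing praise; the eye-roll is invisible. Tone lives in context, and context is exactly what a keyword rule doesn’t have.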

Our new gadgets create so much information as to make analysis fruitless. And indeed, back to that linear model – these systems need to be set up properly in the beginning, so if it’s structured around apples and pears, what do you do when a kumquat walks in? We need computers that learn from experience and apply that intelligently to a new situation, because programming by anticipating precisely each potential variation, when there’s so much data, is impossible.
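To make the apples-and-pears problem concrete: a rigid schema rejects anything it wasn’t designed for, while a system that generalizes from examples can at least hazard a sensible guess. A toy contrast (the categories and features below are invented for illustration):

```python
# Rigid approach: the categories are decided up front, forever.
KNOWN_FRUIT = {"apple", "pear"}

def classify_rigid(name):
    if name not in KNOWN_FRUIT:
        raise ValueError(f"no column for {name!r}")  # the kumquat walks in...
    return name

# Learning approach: generalize from example features instead.
# Invented features: (diameter_cm, is_citrus)
examples = [
    ((8.0, False), "apple"),
    ((7.0, False), "pear"),
    ((5.0, True), "orange"),
]

def classify_by_example(features):
    """Guess by nearest known example rather than a fixed schema."""
    def distance(ex):
        (d, citrus), _ = ex
        return abs(d - features[0]) + (0 if citrus == features[1] else 10)
    return min(examples, key=distance)[1]

print(classify_by_example((2.5, True)))  # small citrus: guesses "orange"
```

The guess may be wrong (a kumquat is not an orange), but the system degrades gracefully instead of falling over, and improves as more examples arrive.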

Starting to understand just how complex this all is?? Particularly since people are members of multiple groups, both real and virtual, and you’ll have to get the timing right too. No good talking sports-appropriate language when your customer is in helping-his-kids-with-homework mode.

I’m hearing all over the place that this kind of insight analysis (based on learning algorithms – some call it “artificial intelligence”, or heuristic learning), versus linear analysis, is indeed the next frontier: we’ve hit the limits of how far we can push the way data and analytics have always been done. And many are trying; there are fortunes to be made here.

So Skynet, here we come. Although I’d argue sentience is a far cry from learning ability (I know not all agree… that’s for another day). So I wouldn’t worry about those computer overlords just yet (Geek humor! – my group will “get” it!).