AI: Its potential and its limitations

When you first meet Applied AI CEO JT Kostman, he lets you know that ‘JT’ stands for “Justice, Truth and the Eradication of Ignorance”. What follows is a deep-dive conversation into machine learning, his time as Chief Data Officer at Time Inc and the role of artificial intelligence in the 2020 campaign.

*This interview has been edited and condensed for clarity.*

Pulse Q&A: Welcome to another edition of Ask Me Anything. Today we have JT Kostman, a former soldier who moved to the private sector, ended up being chief data officer at Time and is now CEO of Applied AI. Thank you for joining us JT.

JT Kostman: Thank you for inviting me. 

Pulse Q&A: I want to start off with your general expertise. When did you begin down the path of AI? 

Kostman: Long before there was a path, or shortly after one had been identified, I learned to code on an IBM 1620 with Hollerith punch cards back in the 1970s. That’s what drew me into this area. AI had been established as a field back in the late 1950s. It grew out of the Dartmouth conference, where Marvin Minsky and some others doing work in that arena set in motion the idea of machines, computers, with human-level intelligence. It’s fascinated me ever since.

Pulse Q&A: In a recent video where you talked about this AI revolution, you said that ‘a lot of nonsense’ is now being said about AI. What’s the most nonsensical claim about AI?

Kostman: The most nonsensical part is this dystopian vision of what’s going to happen. This fear that the machines are going to rise up. It’s absolutely absurd. People tend to be intentionally convoluted when they’re talking about AI just because it’s more sensational. It’s more sexy to talk about the scary things. 

It’s important when we talk about the people who do this work, we think of it as three categories along a continuum. There is narrow or specific use AI. And that’s everything we’re doing right now. That’s everything from Siri to self-driving cars to recommendations on Amazon and Netflix, all that in the realm of specific use very narrow-banded AI. 

When you move up the continuum from there, you get to general AI. A general AI would have basically the same skills you would expect from a very intelligent human. So where Siri can make a calendar reservation for you or help book a trip, and Uber has a self-driving car feature, a general AI should be able to do both of those things, and all the things that any narrow AI can do. In fact, anything any human can do. 

When you move further along that continuum, now you’re talking about super AI: super intelligence or near omniscience, this ability to far transcend any human capabilities. Frankly, worrying about super intelligence now is like worrying that, because we’ve been to the moon, we might overpopulate Mars. It’s just ridiculous. I’m not saying that this will never ever be a consideration. But it’s so far in the future that it’s like a caveman worrying that one day we’ll invent this thing called electricity, and someone might put their finger in a socket and get shocked. 

Pulse Q&A: And do you imagine that there are major limitations to getting super intelligence at this stage?

Kostman: There are. Currently it’s well beyond the ken of what’s even possible, for a variety of technical reasons. There are some philosophical reasons behind it too, but the math simply says it’s unlikely to ever be realized. If you think along that continuum, somewhere between general artificial intelligence and super intelligence, think about a synthetic brain. Why couldn’t we just copy your carbon-based brain into silicon? Well, we couldn’t, even in theory; we couldn’t evolve those capabilities. We know that because we can’t even perfectly model a weather system. That’s the notion behind the mathematics of chaos and complexity. 

In essence, minor perturbations in initial conditions change the entirety of the model, what most people would know as the butterfly effect. One minor change and the entire system changes. And so we’re unable to realize those capabilities. That said, we’ve been able to model closely, but not perfectly, the neural network of a single living organism. That is C. elegans, which is basically a tiny little worm. Its neural computational complexity is about a trillion trillion trillion trillion times less complex than a human being’s. So even if our capabilities, and what we’re able to model, grow exponentially, there comes a point where it’s absurd.

That said, and here I think is the most important point, even if we were to achieve super intelligence and we had those machines, I’m dying to understand why people ascribe to that super intelligence a malevolent intent. Einstein was a pretty smart fella, and he wasn’t a serial killer. Intelligence and insanity really don’t go hand in hand. 

My first PhD was in psychology, and I’ll tell you, psychopathy and intelligence, and genius, are not correlated. Why would they be? Where’s the motivation? The whole thing just doesn’t make sense. I wrote an article a little while ago that made the point that you’re in far greater danger from toasters than you are from AI. Over 400 people died last year from toasters. So it’s rise of the toasters, not AI.

Pulse Q&A: Despite the sensationalism, do you believe there are major ethical challenges for people innovating in AI?

Kostman: That’s an interesting question. The assumption was always that humans are the creative ones. But there are creative AI now. There are AI now that create works of literature, and do a very effective job of it. And so one has to really define the terms. That’s really where we start with all these things: when we say creativity, what do we mean? We’ve shown that AI are now capable of creating, tabula rasa, from scratch, patentable inventions, mathematical proofs, poetry, etc. And so where does that creativity line get drawn? Nobody’s really sure. And how do you really define what is and what is not creative? That has become one of the burdens of AI. 

Claude Shannon once said whatever line is set for us, as soon as we achieve it, the rest of the world will say that’s not really intelligence. You can’t keep shifting the line. Will AI continue to get more and more intelligent? It certainly will but it doesn’t have to be a contest. It doesn’t have to be a competition. The work I do focuses on what I refer to as symbiotic tech solutions. How to have humans and machines work integrally. We are the species that augments from eyeglasses to iPads, from shoes to cell phones. We are born, you know, naked, alone and afraid. It’s only through technology that we are able to achieve what we do as humanity. And so this is nothing new for us. This is just the next logical step.

Pulse Q&A: You said that machine learning was the foundation of AI. Could you explain how that might be used practically? 

Kostman: It’s a very important point and I’m glad you brought that up. People blindly use the term artificial intelligence as if that actually means something. They don’t tend to realize that artificial intelligence is really an appellation. It’s a broad descriptive term. When you’re talking about artificial intelligence, what you tend to mean is creating a capability for machines to do the kind of things that, if people did them, we would consider to be intelligence. But what does that actually mean? 

AI is this composite that consists of a number of capabilities. It consists of machine learning, robotics, signal processing, natural language processing: there are these constituent elements. But the biggest part of that is machine learning. So you can think of this as a sort of Russian nesting doll where, within this doll of AI, we would have co-equal natural language processing, signal processing, and robotics. Within each of those, we would have machine learning. That, collectively, is what AI is really about. 

Machine learning is not just a figurative term. Machine learning is literal. We literally teach the machines, and the machines learn. They learn very much the same way that pigeons, puppies and people learn. They learn by us presenting them examples, by us showing them what the outcomes are, and by them extracting lessons. They learn by being rewarded, by being reinforced, very much the same way that pigeons, puppies, and even your children learn. The way that you teach them is exactly the same way that we teach the machines. But the big difference is that because they are machines, they are, and I mean this lovingly, perfect and stupid at the same time. They do the same thing over and over and over again; they don’t deviate; they are incredibly patient, incredibly fast, incredibly accurate. As a consequence, we can show them something once, twice, a million times, with different permutations and different examples, and they learn to get better and better.
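As a toy illustration of that teach-by-example loop (my sketch, not any system Kostman describes), a classic perceptron learns a rule purely from labeled examples: it makes a guess, gets corrected, and nudges its internal weights after every mistake.

```python
def train_perceptron(examples, epochs=20, lr=0.1):
    """Teach by example: show inputs with the desired outcome,
    and nudge the weights after every mistake."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in examples:
            prediction = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            error = target - prediction  # the 'lesson' extracted from each example
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            b += lr * error
    return w, b

# Show the machine examples of logical AND, over and over, with corrections
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(examples)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
```

Nothing here was programmed with the AND rule itself; the machine extracts it from repeated, patient exposure to examples, which is the point being made above.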

Pulse Q&A: What’s the equivalent of a reward or reinforcement for a machine? I would know what you do with the dog. What would you do with a machine?

Kostman: What would you do with a dog? A treat? The reinforcer you give a dog would probably depend on the dog. You could pet the dog, you could give it a little biscuit. But what happens when the dog doesn’t like regular dog treats? It likes baby carrots, which is bizarre because it’s a Rottweiler. But we would never think to reward our other dog with a baby carrot. For the machine, it’s what it considers rewarding, and what it considers rewarding is what we tell it is rewarding. So we tell it its goal in life is to achieve points. Its goal is to try to accumulate as many points as it can. We create a circumstance for it where it can earn, or win, or get rewarded with points predicated on its actions. So if it does the things it is expected to do, it gets points. It can lose points too. And while I would not go so far as to attribute desire to a machine, now you get into the deep philosophical question of how different that is from us: merely organic machines. We have desires that are instantiated. For what? For attention, for sex, for nutrition, for whatever it is. That becomes the objective. That’s what we’re looking to achieve, biologically, socially; whatever the reward mechanism is, we do the same thing as the machine.
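That points mechanism can be sketched in a few lines (a hypothetical toy with made-up action names, not how any production system is built): the agent tries actions, the environment pays out points, and the agent’s value estimates drift toward whatever earns the most.

```python
import random

def train_bandit(steps=2000, epsilon=0.1, alpha=0.1, seed=0):
    """Minimal 'points' learner: two actions, one of which earns a point
    more often than the other; the agent learns which."""
    rng = random.Random(seed)
    # Hidden reward probabilities: how often each action earns a point
    reward_prob = {"greet": 0.8, "ignore": 0.2}
    value = {"greet": 0.0, "ignore": 0.0}  # the agent's learned estimates
    for _ in range(steps):
        # Explore occasionally; otherwise exploit the best current estimate
        if rng.random() < epsilon:
            action = rng.choice(list(value))
        else:
            action = max(value, key=value.get)
        # The environment hands out (or withholds) a point
        point = 1.0 if rng.random() < reward_prob[action] else 0.0
        # Nudge the estimate toward the observed reward
        value[action] += alpha * (point - value[action])
    return value

values = train_bandit()
```

The agent has no notion of what “greet” means; it simply comes to prefer whatever we told it is rewarding, exactly the substitution Kostman describes.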

Pulse Q&A: This is super interesting, the idea that you’re rewarded on the basis of what you’re told that you like.

Kostman: Which we did with our children also, right? What did we tell them? From the time they were born, we told them that hugs are good. They learned that. And so we’ve done sort of the same thing with machines.

Pulse Q&A: Could you tell us a little bit about what you’ve worked on with chatbots and natural language processing?

Kostman: What we didn’t mention in my background is that I spent quite some time working for the US government. I’ve worked for the US intelligence community for quite some time, for intelligence, defense and security agencies. We built some of those capabilities within that arena. But then we took those same capabilities and extended them into the corporate environment. The 2012 Obama campaign, the work we did at Samsung, and then eventually at Time Inc were all examples. What we constructed were AI-enabled conversational chatbots that could engage with people. What I’m working on now in that arena is really the interesting part. My team and I are working on three different AI-enabled conversational chatbots: Sage, Joe and Daenesh.

Sage can be a boy or a girl and is intentionally gender non-specific. It is intended to be a teenager that will live in social media and find at-risk adolescents: kids who are at risk of self-harming, at risk of running away, those sorts of things. It won’t pretend to be human; it will present itself as a bot. But even when it does, in test groups we found these bots tend to be rapidly anthropomorphized. People not only tend to treat them like a human entity, they tend to be more willing to disclose and more willing to engage in intimate conversations with the bot because they don’t feel judged or evaluated. Daenesh does the same thing for kids who are at risk of radicalization by extremist groups. And Joe’s full name is AI Joe, and he’ll be aimed at veterans at risk of suicide, self-harm and other harmful behaviors.

Pulse Q&A: So Joe is directed at veterans?

Kostman: I do quite a bit of work with disabled veterans. I am one myself, and so it’s a community I’ve stayed very attached to and feel very committed to. In fact, with the new venture that we’re launching, we are bringing together AI and cyber security, and our mission is to protect people, property and places. Our workforce will be retrained disabled veterans. We are getting a bunch of folks who have come back from various conflicts and incidents, and need a new lease on life, a new professional opportunity. 

Pulse Q&A: It will be interesting to see how chatbots and AI-powered cyber security will also affect businesses going forward. Have you seen any big movement in that space?

Kostman: Very much so. My firm has built a couple of chatbots for organizations, and they’re rapidly taking over customer service. I expect they are going to replace entire industries soon. I think we’re going to see the recruiting industry severely upset. And why not? I was just on chat with tech support with Amazon yesterday, and they’ve become awful. And all I kept thinking was: I cannot wait for you to be replaced with a bot. Why? Because a bot has knowledge shared among all the bots instantly. Any question that’s been asked and answered, they have access to. They listen, and you don’t have to worry about context. What we found in the organizations where we’ve implemented them is that they are a delight to work with.

Their customer satisfaction has soared through the roof. Why wouldn’t it? Something in the high 90s, percentage-wise, of calls to any call center for any service or any support are purely routine. What are people going to call for, right? I had an inquiry about my order, I wanted to reset my password, whatever it is. It’s effectively an FAQ list that may be a bit complex and may require a training structure to be able to answer it. But that’s what chatbots have been built to do. 
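The FAQ-list point can be made concrete with a deliberately naive sketch (hypothetical questions and answers, nothing like a production system): match an incoming query to whichever stored question shares the most words with it.

```python
import string

# A toy FAQ "bot" with made-up entries: routine support queries are close
# to FAQ lookup, so even simple word overlap resolves many of them.
FAQ = {
    "How do I reset my password?": "Use the 'Forgot password' link on the sign-in page.",
    "Where is my order?": "Check the tracking link in your confirmation email.",
    "How do I cancel my subscription?": "Go to Account > Billing and choose Cancel.",
}

def tokens(text):
    """Lowercase, strip punctuation, and split into a set of words."""
    cleaned = text.lower().translate(str.maketrans("", "", string.punctuation))
    return set(cleaned.split())

def answer(query):
    """Return the answer whose stored question shares the most words with the query."""
    best = max(FAQ, key=lambda q: len(tokens(q) & tokens(query)))
    return FAQ[best]
```

Real chatbots replace the word-overlap step with trained language models that handle paraphrase and context, which is the “training structure” mentioned above.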

We are increasingly going to see the person who’s on the other end of a phone become an irrelevancy. We’re getting to the point now where the bots are going to start to understand nuance. Very quickly, within a year, two years at most, we will get to a point where the bot will probably have as good an understanding of you, or better, as a native speaker of your language.

Pulse Q&A: It’s interesting when you say that there might be anthropomorphization going on, because I think a lot of businesses are recognizing that as well. It’s not important that they pass a Turing test where they can fool you into thinking they’re human. What matters more is whether they can get the job done for you.

Kostman: That’s exactly right. And you know, there’s a company that provides a virtual assistant, and they have a problem that people treat her too humanly. One of the big challenges we had with her was that she kept getting asked out. Before we identified her as a bot, she had a personality and a perspective. She had a little bit of an attitude. Guys were constantly hitting on her. She had to consistently remind you throughout the interaction that she was a machine. 

Pulse Q&A: I’d love to hear your thoughts on who the dominant players are now in AI.

Kostman: Amazon’s work in AI is first rate. Unfortunately, it has come down to a handful of players. It’s Amazon, Google and Facebook. Microsoft is trying to play. I’m very intentionally not going to include IBM in there. I think they’ve frankly been very disappointing. In trying to create the Watson facade, they haven’t really evolved their capabilities. 

But that’s a big part of the problem right now: we’ve created what’s called an oligopoly within the AI talent pool. The fortunate few in the Fortune 50 are able to pay these outsized salaries, and they are buying up all the talent. They are very intentionally trying to corner the markets within AI and machine learning, to effectively take control and monopolize these industries. That’s very dangerous. It’s very dangerous for us as an economy, it’s very dangerous for the world economy, and it’s very dangerous for the evolution of the field itself. 

Look at the impact that any monopoly has on technology and it’s always negative. Look at what happened after the breakup of AT&T: almost immediately we had modems, we had the internet, we had cellphones. Look currently in the US at the cable companies: we’re using cable boxes that have not demonstrably changed in 20-25 years, and the services tend to be terrible. I’m not saying that Google and Facebook, and some of those others, aren’t making strides and moving the agenda forward. But innovation and entrepreneurship have to go hand in hand. As soon as you have this outsized influence from a few companies, it becomes a challenge to everyone else.

Pulse Q&A: Do you have favorite AI-based software products yourself that you use? You mentioned you use Alexa in the corner of your room. Anything else?

Kostman: The tools I like best are the ones that I built. That’s one of the things I find so wonderful and so appealing about this field. There’s nothing I do that isn’t available to anyone and everyone. A kid with a laptop can get access to these tools, bang out some code, try to tweak some hyperparameters, change a model’s configuration a little bit, and do something cool and interesting. 

Pulse Q&A: One last question about your role in the Obama campaign. I’ve had a chance to read a little bit about it. I’m interested in what your thoughts are about how AI will play a role in the 2020 campaign?

Kostman: That’s a critically important question that I think we’re all going to have to pay attention to. There is going to be the white hat and the black hat. There are going to be the forces of good and the forces of evil. The reason President Obama asked us to get involved with the social media analysis for the campaign was that he very genuinely wanted to understand what his constituencies thought, understanding that he had various constituencies. 

There’s a lot of talk now in the political parties about having a base: this misperception that voters are monolithic, that they all care about the same thing. That’s absurd. What President Obama wanted to know was what this group of folks thinks about the economy versus that group of folks who might be supporting me. How can I hear them? What he really wanted to do was use social media analysis as a way to truly listen, not just to hear, but to truly listen to what people’s deepest concerns are, so that he could decide: is this something that’s consonant with my philosophy, with my beliefs, and something we can address? 

Look at something like guns. In the United States, we have this bizarre obsession with guns, and the people who are most obsessed tend to migrate more to one party than the other. So you would think you would say “Okay, I’ll be pro gun,” if you just wanted to pander to them. Well, it’s not that easy, because it turns out some 90% of voters are in favor of the same sort of background checks. And within that 90% there are subdivisions, right: those who are in the 90% and very pro gun, and those who are in the 90% and very opposed to guns. You would want to formulate your message differently for each of those groups, to be able to convince them of your point, or to at least have them hear your point. That’s really what Obama was doing.

The other side of that coin was Cambridge Analytica, who approached me early on during the 2016 campaign. They first approached me in 2014, when they were shilling for Ted Cruz, just before they went over to Trump. I was mentioning to my wife yesterday that one of the best political business decisions of my life was kicking them out of my office in under 20 minutes. They came in when I was at Time Inc and wanted us to do some work for them. Their work was what I would consider ‘the black hat’. Their intention was to engage in very sophisticated propaganda, to distort messages in order to convince people. Both of those forces are going to be hard at play in the 2020 election.

Pulse Q&A: I think I probably have to reschedule an entire hour with you. This has been fascinating and I really appreciate your time with us. Hope we can talk soon.

Kostman: I hope we can. Thank you very much for inviting me.
