The story of AI, as told by the people who invented it

Welcome to I Was There When, a new oral history project from the In Machines We Trust podcast. It features stories of how breakthroughs in artificial intelligence and computing happened, as told by the people who witnessed them. In this first episode, we meet Joseph Atick – who helped create the first commercially viable face recognition system.


This episode was produced by Jennifer Strong, Anthony Green and Emma Cillekens with the help of Lindsay Muscato. It is edited by Michael Reilly and Mat Honan. It is mixed by Garret Lang, with sound design and music by Jacob Gorski.

Full transcript:


Jennifer: I’m Jennifer Strong, host of In Machines We Trust.

I want to tell you about something we’ve been working on for a while behind the scenes here.

It’s called I Was There When.

It’s an oral history project with stories about how breakthroughs in artificial intelligence and computing happened … as told by the people who witnessed them.

Joseph Atick: And when I entered the room, it caught sight of my face, pulled it out from the background and it said, “I see Joseph,” and that was the moment when the hair on the back of my neck … I felt that something had happened. We were a witness.

Jennifer: We’re getting things started with a man who helped create the first face recognition system that was commercially viable … back in the ’90s …


I’m Joseph Atick. Today, I am the president of ID4Africa, a humanitarian organization that focuses on giving people in Africa a digital identity so they can access services and exercise their rights. But I have not always been in the humanitarian field. After I received my Ph.D. in math, my collaborators and I made some fundamental breakthroughs that led to the first commercially viable face recognition system. That’s why people refer to me as one of the founders of face recognition and the biometric industry. The algorithm for how a human brain would recognize familiar faces became clear while we were doing research – mathematical research – while I was at the Institute for Advanced Study in Princeton. But that was far from having an idea of how you would implement such a thing.

It was a long period of months of programming and failure and programming and failure. And one night, early in the morning, we had just finished a version of the algorithm. We submitted the source code for compilation to get an executable. And we stepped out – I stepped out to go to the restroom. And when I stepped back into the room, the machine had finished compiling the source code and the program had returned. Usually after you compile it, it runs automatically, and when I entered the room, it spotted a human moving into the space, and it spotted my face, extracted it from the background, and it said, “I see Joseph,” and that was the moment when the hair on the back of my neck – I felt something had happened. We were a witness. And I started calling on the other people who were still in the lab, and each of them would enter the room.

And it would say, “I see Norman. I see Paul. I see Joseph.” And we would take turns running around the room just to see how many people it could recognize. That was a moment of truth, where I would say years of work finally led to a breakthrough, even though theoretically no further breakthrough was required. Just the fact that we had figured out how to implement it, and finally saw that capability in action, was very, very rewarding and satisfying. We then assembled a team that was more of a development team than a research team, focused on putting all of these capabilities onto a PC platform. And that was the birth, really the birth, of commercial face recognition, I would say, in 1994.

My concern started very quickly. I saw a future where there was no place to hide, with the proliferation of cameras everywhere and the processing capabilities of computers becoming better and better. And so in 1998, I lobbied the industry, and I said we need to put together principles of responsible use. And I felt good for a while, because I felt we had gotten it right. I felt we had put in place a responsible-use code to be followed regardless of the implementation. However, this code did not stand the test of time. And the reason is that we did not anticipate the advent of social media. Basically, at the time we established the code in 1998, we said the most important element of a face recognition system was the tagged database of known people. We said, if I’m not in the database, the system will be blind.

And it was difficult to build that database. At most we could build databases of 10,000, 15,000, 20,000 people, because each image had to be scanned and entered by hand. The world we live in today, we are now in a regime where we have let the genie out of the bottle by feeding it billions of faces and helping it by tagging ourselves. We are now in a world where any hope of controlling face recognition, and of demanding that everyone be responsible in their use of it, is difficult. And at the same time, there is no shortage of known faces on the internet, because you can just scrape them, as some companies have recently done. And so I started panicking in 2011, and I wrote an op-ed saying it was time to press the panic button, because the world is heading in a direction where face recognition will be ubiquitous and faces will be available everywhere in databases.

And at the time, people said I was an alarmist, but today they realize that this is exactly what is happening. So where do we go from here? I have lobbied for legislation. I have lobbied for legal frameworks that make it a liability for you to use someone’s face without their consent. So it is no longer a technological problem. We cannot contain this powerful technology through technological means alone. There has to be some sort of legal framework. We cannot allow the technology to get too far ahead of us – ahead of our values, ahead of what we think is acceptable.

The issue of consent remains one of the most difficult and challenging issues when it comes to this technology. Just giving someone notice does not mean it is enough. For me, consent has to be informed. People need to understand the consequences of what it means. Not just to say, well, we put up a sign and that was enough; we told people, and if they did not want to, they could have gone elsewhere.

And I also think it is so easy to be seduced by flashy technological features that can give us a short-term advantage in our lives. And then down the line, we realize we have given up something that was too valuable. By that point, we have desensitized the population, and we get to a point where we cannot pull back. That’s what I’m worried about. I’m concerned about the way face recognition through Facebook and Apple and others is being put to work. I’m not saying it’s all illegitimate. Much of it is legitimate.

We have reached a point where the public may be blasé and may be desensitized because they see it everywhere. And maybe in 20 years, you will step out of your house and no longer have any expectation that you would not be recognized by the dozens of people you cross along the way. I think by then the public will be very alarmed, because the media will start reporting on cases where people were stalked. People were targeted, people were even selected on the street based on their net worth and kidnapped. I think there is a big responsibility on our hands.

And so I think the issue of consent will continue to haunt the industry. And until that question is settled, it may not be resolved. I think we need to set limits on what can be done with this technology.

My career has also taught me that being too far ahead is not a good thing, because face recognition as we know it today was actually invented in 1994. But most people think it was invented by Facebook and the machine learning algorithms that are now proliferating all over the world. At some point, I basically had to step down as CEO of a public company, because I was curtailing the use of a technology my company was going to promote, out of fear of negative consequences for humanity. So I feel researchers need the courage to project into the future and see the consequences of their work. I’m not saying they should stop making breakthroughs. No, you should go full force, make more breakthroughs, but we should also be honest with ourselves and basically alert the world and policymakers that this breakthrough has pluses and minuses. And therefore, in using this technology, we need some kind of guidance and frameworks to make sure it is channeled toward positive applications and not negative ones.

Jennifer: I Was There When … is an oral history project featuring the stories of people who have witnessed or created breakthroughs in artificial intelligence and computing.

Do you have a story to tell? Do you know someone who does? Send us an email at [email protected]



Jennifer: This episode was recorded in New York City in December 2020 and produced by me with the help of Anthony Green and Emma Cillekens. We are edited by Michael Reilly and Mat Honan. Our mix engineer is Garret Lang … with sound design and music by Jacob Gorski.

Thanks for listening, I’m Jennifer Strong.
