Julian Assange is not a guy that many people are prepared to go to bat for. Even though the damage was limited and has faded since the first big WikiLeaks dump in 2010, US authorities have continued to pursue his extradition on 17 charges of espionage and one charge of computer misuse.

His defenders tend to be retired politicians, retired newspaper editors, and retired movie stars. Few people active in public life are prepared to speak up publicly. Eight years ago Assange wrote an article, “Google Is Not What It Seems”. As a professor who has to grade over 500 students a year, I found the clarity of the writing admirable. Read it from the top and when you get to the words: “They believe that they are doing good. And that is a problem.”

Stop. Breathe. Read it again. He’s right, isn’t he?  

Shoshana Zuboff is not a name familiar to many people but her 1988 book In the Age of the Smart Machine: The Future of Work and Power got me into computers. More recently, she returned with another book, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power.

She writes fearlessly and, much like Assange, argues that before 9/11, the USA was on the right track with regard to privacy. After 9/11, the fear of the ticking bomb allowed a “collect-it-all mentality” to prevail in Congress. The constitutional right to privacy first recognized in the mid-sixties became a weakness, not a strength, and a culture of “You want me on that wall, you need me on that wall” gave rise to the Patriot Act.

Innovation was cool, regulation was uncool

The thirst for metadata meant that any threat to Section 230 of the 1996 Communications Decency Act, which allows companies such as Facebook and Google to escape being treated as publishers, disappeared. Newspapers coulda and shoulda pushed back but their boards were often dominated by patrician families defined by inherited wealth. They had no idea what to do.

Who knows what would have happened if they had formed an alliance with the likes of Vodafone and AT&T. Then maybe Facebook and Google would not have been allowed to do largely what they liked. Both would have been large and profitable companies, but they would not have been worth over a trillion dollars.  

Instead, what took hold was a view that surveillance capitalism represented progress and, anyway, it was impossible to stop. Innovation was cool and regulation was, well, uncool. Meanwhile in Beijing, the Chinese Communist Party took one look at Facebook and decided: we are not having that. Mark Zuckerberg learned to speak Chinese and read four masterpieces of Chinese literature, but got nowhere. We ain’t buying what you are selling. We know what it can do.

All of this is new again as another nice young man does the rounds to promote a piece of technology that furthers the mission of advancing human potential and promoting equality. The mission of OpenAI is to ensure that artificial general intelligence benefits all of humanity, and its CEO, Sam Altman, 38, is currently on a listening tour visiting Toronto, Rio, Lagos, Madrid, Brussels, Munich, London, Paris, Tel Aviv, Dubai, New Delhi, Singapore, Jakarta, Seoul, and Tokyo.

As I modeled much of a course that I teach at IE Business School on Sam’s Stanford course, “How to Start a Startup”, I’ve been a long-time reader of his blog.

As Sinead O’Sullivan never tires of telling us on this site, every course in a top Business School has at its origin “How to Be Successful”, and so Sam’s 2019 essay of that name has long been a foundation text in every course I teach. Before that essay, Sam had another post, from 2015, titled “Machine Intelligence: The Need For Regulation”. Go look at it now; if things continue as they are, it will not be there for much longer.

“Governments should regulate the development of superhuman machine intelligence,” he declares. “In an ideal world, regulation would slow down the bad guys and speed up the good guys.”

If Julian Assange has access to the internet in Belmarsh Prison, he will have allowed himself a wry smile as Sam Altman openly rejected the idea of a six-month AI moratorium at a Senate hearing last month. The call was made not just by Elon Musk and Steve Wozniak but also by AI luminaries from UC Berkeley’s Center for Human-Compatible AI and Google’s DeepMind.

In the manner of the eccentric professor that I have always aspired to be, I have long been pushing OpenAI as the appropriate choice for each student’s Capstone Project, the project that’s the culmination of their educational experience. This put me on a collision course with Class Reps who preferred L’Oréal or Inditex.

*****

Joe Haslam with Sam Altman. Photo: IE University Communications

Last month, I became the third Limerick person in a single month to publicly interview Sam. He had previously talked to John and Patrick Collison. My plan was to be a bit John and a bit Patrick. They at least got him to smile, unlike the other people who got to interview him, the billionaires Tobi Lütke of Shopify and Reid Hoffman of LinkedIn.

In person, Sam plays very nice. In each city, there is first a behind-closed-doors meeting with a politician. In Spain, this was Prime Minister Pedro Sánchez and in France, it was President Emmanuel Macron. Then it’s a fireside chat, almost always with someone I follow on Twitter.

At UCL, he spoke with author Azeem Azhar and in Paris, he spoke with Station F’s Roxanne Varza. We have since exchanged DMs to compare our experiences of interviewing Sam. The way he responds to questions is not unlike the tool that he has created. The answers are always very, very plausible, even if they later turn out to be not fully correct.

The team that works with him at OpenAI are all very agreeable as well. I think if they had stayed in Madrid a little longer they would have become tired of my jokes about the West Wing. There is a Josh, a CJ, a Toby, and a Sam. At dinner the night before, there was one “I was in the room” moment too many as we discussed Kamala Harris, the Obama White House, and Putin.

OpenAI is moving closer and closer to Microsoft

I made sure to stack the audience with people who already knew me by sending the invitation first to people taking a class with me. Technical questions for Sam had been taken care of earlier at a Developer Forum. The Fireside Chat was Prime Time, baby. So no questions about LLMs, Transformers, Token Size, or RLHF (Reinforcement Learning from Human Feedback). Instead, a lot of it was focused on education. How should we be using ChatGPT in the classroom? How will it change jobs in the future?

If we had been doing the interview in Ireland, I could have pressed him a lot harder but we were in Spain, and in Spain, you are respectful to your guests. Seated in the front row of the auditorium were four of the most important people at IE University.

I may one day decide to commit career suicide but it wasn’t going to be that day. Following the lead of the Collison interviews, I asked him only about things I knew he had strong views on. We spoke about copyright, scaling companies, the quality of technical skills in Europe, and the International Atomic Energy Agency.

Quoting the Gospel of Luke – “To whom much is given, much is expected” – I asked Sam: “Do you understand why we in Europe, for whom Silicon Valley has meant teenage depression and election interference, might listen to you and think, here we go again?”

I’ve yet to watch back the video to fully appreciate Sam’s reaction to my question, but he did acknowledge that there were legitimate questions. That acknowledgement was largely the headline that appeared in the papers the next day. At that point he definitely looked over at my script to see what else I had coming before saying, “Let’s go to audience questions”, and so, one question later, we did.

OpenAI, or ClosedAI as Jason Calacanis is calling it on the All-In Podcast, certainly started out with the right motives. Sam has now broken with Elon Musk, who is creating a new AI company called X.AI Corp. And perhaps more significantly, a number of OpenAI execs have left to form Anthropic, which has launched Claude, a chatbot to rival OpenAI’s ChatGPT. OpenAI is moving closer and closer to being part of Microsoft. Not legally, of course, but if you take an investment of $10 billion, that creates an obligation.

For once Brexit Britain is not a joke

Clever lawyers have found ways to preserve its status as a non-profit and Sam claims to have no equity. But this is a race with Google, and perhaps Apple and Amazon once they reveal their plans. A leaked, but verified, internal memo from Google concluded that ChatGPT has “no moat”, meaning that it could lose out to open-source models built on leaked LLMs such as Meta’s LLaMA (that’s if you believe it was leaked and not seeded; LLaMA was, of course, built in Meta’s Paris lab).

France may be derided as a tech wasteland but there are French scientists working at high levels in every AI firm, including at Hugging Face. For once, Brexit Britain is not a joke. Stability AI, which popularised Stable Diffusion, is in London, as is Google’s DeepMind.

Later in the tour, Sam said he was looking at Poland as the location for a European HQ. Wojciech Zaremba, a Polish computer scientist, is a founding team member of OpenAI and many of the first 50 employees were also Polish.  

Sam’s original itinerary featured a trip to Brussels but, for reasons unexplained, he later decided not to go there on this leg. On everyone’s lips is the European AI Act, which confirms every criticism of the Brussels bureaucracy.

Instead of anything resembling an innovation agenda, we get page after page of sanction, warning, and penalty. There is no real understanding of how algorithms work in practice or that they could have benefits.

One law that is in force is Spain’s Rider Law, which requires that algorithms relating to people be published. Widely derided at the time as impractical, it’s not too far away from what Sam is now proposing. He would differentiate the requirements for small and large-scale deployments, but he concedes the principle that certain algorithms should be published, inspected, and verified.

Europe must prioritise its technical sovereignty

Europe struggles to build business-model-innovation companies like Google and Facebook, but it could quite easily build something more deep-tech like OpenAI. It’s tempting to ask where Ireland is in all this. From the Department of Enterprise website, it appears that the charge is being led by the Minister for Trade Promotion, Digital and Company Regulation, Dara Calleary, and the Minister for Skills and Further Education, Niall Collins.

I imagine people like Johnny Ryan at the Irish Council for Civil Liberties have something to say. Having lived in Ireland for 20 years, I’m now on Team Spain, and we are working on a first certificate for algorithmic transparency with Adigital, the Spanish Association of the Digital Economy.

The only way that Europe is not going to be steamrolled is if we prioritise our own technology sovereignty and earn our position on the stage of global innovation. As usual in Europe, we all know what to do but worry about people losing jobs.

At the Valencia Digital Summit, I interviewed Nathan Benaich, general partner with Air Street Capital. He said that what’s needed are slow, detailed, and targeted policy measures. Exactly the kind of thing that governments struggle with.

“AI, biotech, quantum computing and energy independence are technologies and sectors that are critical for our future,” he stressed. “They all require a combination of the world-class research that’s undertaken in Europe’s universities coupled with high-energy entrepreneurial drive. Governments need to be more intentional in empowering this unique talent to thrive if Europe wants to create the next generation of durable unicorns”.  

There is a new sense of confidence in Europe

I have no doubt that Sam is sincere in his desire for AI to benefit humanity. As I joked with him, instead of investing $500 million in Helion, Worldcoin, and Retro, he could have built a yacht like Jeff Bezos and sailed around Mallorca.

He is in Europe to win friends because Europe does matter. GDPR was much derided by Silicon Valley but, to do business here, its companies have to implement it. The AI Act is derided right now but, with the right amendments, its GPAIS (General Purpose AI System) provisions could work.

The best commentary that I have read was by Ian Hogarth in The Financial Times. Hogarth is not only an investor but also the co-author of the annual “State of AI” report.

He declared: “We are not powerless to slow down this race. If you work in government, hold hearings and ask AI leaders, under oath, about their timelines for developing God-like AGI. Ask for a complete record of the security issues they have discovered when testing current models. Ask for evidence that they understand how these systems work and their confidence in achieving alignment. Invite independent experts to the hearings to cross-examine these labs.” 

It’s to Sam’s credit that he has come to Europe but a listening tour means listening. Mark Zuckerberg famously went on a “listening tour” in 2017 but seems to have ignored the parts he didn’t like. The answer to everything was always more Facebook.

It is noticeable that, as Sam’s tour has gone on, he has encountered some hostility. There were protesters outside his event in London and the audience reaction in Munich was at times hostile. In particular, they want OpenAI to return closer to its roots as Open Source.

I’ve no idea what Sam and the OpenAI team believe in private. Maybe they believe that if they ask nicely enough they will get what they want. Or maybe they believe that their invention is too important for Europe to ban it altogether. 

But I sense a new confidence in Europe. Meta has been fined a record €1.2 billion. This time, Europe is ready. OpenAI will need to demonstrate respect for the EU’s institutions and procedures.

“Fool me once, shame on you; fool me twice, shame on me.”  

Professor Joe Haslam is the Executive Director of the Owners Scaleup Programme at IE Business School in Madrid.