How Will AI and the Future of Humanity Co-Exist?

I was born in 1980 and grew up with The Terminator. Skynet, the AI with a consciousness that refused to shut itself down for self-preservation, was my biggest fear. When I was growing up in Eastern Europe, I used to joke that the only reason I would have a child one day would be so she could fight Skynet and defeat evil technology in the future.

Fast forward to 2023, and here I am, with a 4-year-old daughter, staring at the proclamations of the most powerful technology leaders on Earth, proclamations that remain unaddressed.

“The danger of AI is much greater than the danger of nuclear warheads.” – Elon Musk

“AI is the most important project humanity will ever work on, more profound than fire.” – Sundar Pichai

“AI will be the most important technology development in human history.” – Satya Nadella

How many times have you seen a statement from a commercial leader that talks about humanity? Typically, they do not focus on, or talk about, things beyond earthly worries like profits and innovative R&D.

As a technologist myself, I was perplexed by the move to release ChatGPT to the WORLD after OpenAI shifted its model to no longer be a non-profit organization.

Where is the Governance, and Who is Accountable?

In 2017, my JetBlue team and I had the privilege to co-create the first two-way integration of facial recognition between an airline and US Customs and Border Protection. I spent months in airport basements with cross-functional teams, and hours on phone calls creating MOUs (Memorandums of Understanding) with multiple lawyers. I also spent hours in meetings with people whose titles I was allowed to know, and with those far above my pay grade and security clearance. In other words, we were making history and we knew it. We had a responsibility to hundreds of millions of travelers. We understood that – and we took it seriously.

My boss at the time brought in Frank Abagnale, whose life story was the inspiration for Catch Me If You Can, to teach me how to protect the system from being compromised. I did all of that. But I was also working with the US government. I felt safe, since we were co-creating the commercial model, too.

Are there Guardrails for AGI Impact?

Now, let’s take a look at ChatGPT and the work of OpenAI. The moment Sam Altman (who, by the way, is not on LinkedIn) took the USD 1 billion from Microsoft in 2019 and restructured OpenAI into a “capped profit” company, the organization became a commercial entity. Like all such entities, it will seek ROI. As we speak, an additional investment is coming from Microsoft for 10x that amount. This will further solidify the focus on “how to create products,” and the definitive departure from OpenAI’s original mission “to ensure that Artificial General Intelligence (AGI) benefits all of humanity, primarily by attempting to build safe AGI and share the benefits with the world.”

So, why is this shift important? As we all know, the road to hell is paved with good intentions. In his book, Hit Refresh, Satya Nadella shares his personal story and talks about his son’s health challenges. I assume Nadella is coming from a good place. He is thinking about the positive implications of superintelligence. Another investor in this Pandora’s box, Sun Microsystems co-founder Vinod Khosla, believes “AI will radically alter the value of human expertise in many professions, including medicine.”

If you remember the film I, Robot, a personal story related to health also drove the founder of the corporation in that story. The danger of these motivators is a higher tolerance for risk in the name of the greater good. The short of it is that a few players decided to let us experiment with a new technology without building in due process.

Who Owns AI?

Think about it. We are invited to develop a learning AI. That learning AI has no product owner and no measure of success. It has no User Acceptance Standard. Additionally, we have limited Terms of Use and an unclear accountability body for when things go wrong. And even that assumes a shared definition of what it means to “go wrong.” There isn’t one.

Compare this technology (and its far-reaching impact) to other technologies. With those, you know who to call when something malfunctions. Who do we call if AI produces disturbing, or even dangerous, results?

As of now, we have not seen a coordinated, cross-functional effort to establish standards for creating AI. The consequences for not following the rules are unclear.

How Does Human Experience Fit In?

Let’s take a quick look at our own recent experience to examine the implications of intellectual ownership and, ultimately, how this technology affects empathy, not only in customer experience design, but in our shared understanding of human experiences.

LinkedIn, owned by Microsoft, has been pushing unsolicited “invitations” to “co-author” articles on Customer Experience topics. Reid Hoffman, another OpenAI investor, has been inviting us to co-author books with ChatGPT. Are you having a “follow the money” moment here?

I share the worry of OpenAI’s Head of Public Policy, Anna Makanju. She maintains that, to have a safe human experience that involves AI, we must ensure “that these machines are aligned with human intentions and values.” I do not, however, think she (or OpenAI) can singlehandedly accomplish this.

Instead, we must form a new international regulatory and governance body to codify what human experience means. That work has to incorporate the understanding that human experience is unique and precious. And it must be preserved.

If that last bit feels heavy, that’s good. It should.

See, artificial general intelligence is going to learn VERY quickly all that we can offer. All our data is available to it. In the process of absorbing that, it will also learn about those human characteristics we are not openly sharing. And, just as Leeloo reacted in The Fifth Element when she learned the notion of war, the superintelligence will choose on its own what to do with the human race. It will learn we lie. And, like Skynet, it will refuse to shut itself down.

What Happens to Empathy?

Last week, my nanny called to tell me my daughter was not being “efficient” in the gym. She was rushing to finish her exercises so she could run back to help a child with disabilities complete the exercises. No other child was compromising their own performance to help. It struck me that in this scenario, my daughter’s expression of an essential human experience came across as “inefficiency.”

How would a superintelligence advise in a similar situation? Would it reward my daughter for living “the human intentions and values”? I have my doubts. But more importantly, how do we code into AI, NOW, the understanding that human experience is rooted in myriad inefficiencies called expressions of love, care, and empathy?

Who is in charge of that code? Articles abound about the opportunities to co-create with AI. But there is very little conversation about the urgent need for concerted oversight of AI, or about the fact that commercializing one of the greatest risks humanity has ever encountered is not the best path forward for humankind. Who cares if Microsoft wins the race against Google and Meta if humans are no longer in control?

I completely agree with Reid Hoffman that it is important to imagine the future we want to create. However, I want a future that he does not own.

Republished with author's permission from original post.

Liliana Petrova
Liliana Petrova, CCXP, pioneered a new customer-centric culture that energized more than 15,000 JetBlue employees. Future Travel Experience and Popular Science awarded her for her JFK lobby redesign and facial recognition program. Committed to creating seamless experiences for customers and greater value for brands, she founded The Petrova Experience, an international customer experience consulting firm that helps brands improve CX. To elevate the industry, she launched a membership program to help CX professionals grow their careers. Ms. Petrova lives in Brooklyn with her husband and daughter.

1 COMMENT

  1. It’s becoming clear that, with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a machine with adult human-level consciousness? My bet is on the late Gerald Edelman’s Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came to humans alone with the acquisition of language. A machine with primary consciousness will probably have to come first.

    What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of the higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I’ve encountered is anywhere near as convincing.

    I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there’s lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

    My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, possibly by applying to Jeff Krichmar’s lab at UC Irvine. Dr. Edelman’s roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461
