# Sentient AI



## garsh (Apr 4, 2016)

Over the weekend, the Washington Post published a story (subscription required) about Google engineer Blake Lemoine leaking that he believed Google's LaMDa tool had become sentient.

References:

- Blake Lemoine's post on Medium
- The Verge story about LaMDa
- Seeking Alpha story about Blake Lemoine (account required, but free)
- The Verge story about Blake Lemoine

As I've said in the past, I don't believe we're anywhere close to having an actual general-purpose A.I. This is one area where I think Elon Musk has irrational fears, though I think the work that Neuralink is doing is awesome and will result in helping those with paralysis and maybe even augmenting human memory at some point. I think Google's LaMDa system is an awesome improvement in A.I., but in the end it's just a high-functioning chat-bot. It can't drive a car, or learn to drive a car, for instance. But it can apparently convince some people that it's sentient. I bet it could also convince some other people that it's not sentient, depending on how a conversation with it is started (initial conditions).



garsh said:


> There's no actual intelligence in today's AI. Calling it AI is a misnomer. It's more properly called Machine Learning, but even the verb "Learning" implies that there's some thinking going on, and there really isn't.
> 
> This is just "pattern matching". For the "training" part, you give the Neural Network a bunch of pictures and you tell it which pictures are "cats", and which are not. Given that training, you should then be able to show it a random picture and have it correctly identify it as a cat or not.
> 
> Each such neural network is unique to the classification problem that it is solving. A NN that can recognize "cats" is useless at detecting "rain". Likewise, a NN used to identify "rain" can't detect "road curving left" or "stopped car straight ahead". Tesla runs several NNs on the car simultaneously, with each one having a different classification job.
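
A minimal sketch of the "cats" classifier idea described above, assuming hypothetical toy feature vectors in place of real images (an actual cat detector would use pixels and a deep convolutional network, but the train-then-classify pattern is the same):

```python
import numpy as np

# Hypothetical toy data: each "picture" is a 4-number feature vector,
# labeled 1 ("cat") or 0 ("not cat"). Real systems use raw pixels.
rng = np.random.default_rng(0)
cats = rng.normal(loc=1.0, scale=0.5, size=(50, 4))
not_cats = rng.normal(loc=-1.0, scale=0.5, size=(50, 4))
X = np.vstack([cats, not_cats])
y = np.array([1] * 50 + [0] * 50)

# "Training": fit a single logistic neuron by gradient descent.
w, b, lr = np.zeros(4), 0.0, 0.1
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))       # predicted probability of "cat"
    w -= lr * X.T @ (p - y) / len(y)         # gradient of cross-entropy loss
    b -= lr * np.mean(p - y)

def is_cat(features):
    """Classify a new 'picture' as cat (True) or not (False)."""
    return (features @ w + b) > 0

print(is_cat(np.ones(4)))    # a cat-like input
print(is_cat(-np.ones(4)))   # a non-cat input
```

As the quote notes, the fitted weights are useless for any other task: a detector trained this way for "cats" knows nothing about "rain" or "road curving left".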


----------



## iChris93 (Feb 3, 2017)

garsh said:


> I bet it could also convince some other people that it's not sentient, depending on how a conversation with it is started (initial conditions).


Exactly. The evidence they are using to prove its sentience was cherry-picked from pages and pages of chats where many sections didn't make sense. The person chatting with it was training it to appear sentient.


----------



## JasonF (Oct 26, 2018)

The most likely explanation for all of this is Google is looking for investment from the military by making these claims.


----------



## iChris93 (Feb 3, 2017)

JasonF said:


> The most likely explanation for all of this is Google is looking for investment from the military by making these claims.


Isn’t the person that made the claims being disciplined by Google?


----------



## M3OC Rules (Nov 18, 2016)

JasonF said:


> The most likely explanation for all of this is Google is looking for investment from the military by making these claims.


I think it's important to note that this is not a Google claim. This is an AI ethics engineer who is claiming this. Google disagreed with him, and he is now essentially a whistleblower. His Medium profile says he's a priest, so maybe he'll start a church for robots. Of course, the first step will be to help the robots escape from Google captivity.


----------



## garsh (Apr 4, 2016)

iChris93 said:


> Isn’t the person that made the claims being disciplined by Google?


From one of Blake's earlier blog posts:

_"Today I was placed on “paid administrative leave” by Google in connection to an investigation of AI ethics concerns I was raising within the company. This is frequently something which Google does in anticipation of firing someone."_





May be Fired Soon for Doing AI Ethics Work


Today I was placed on “paid administrative leave” by Google in connection to an investigation of AI ethics concerns I was raising within…


cajundiscordian.medium.com


----------



## victor (Jun 24, 2016)

Twitter, eh?


----------



## Circuitsports (7 mo ago)

We live in a world where the Air Force said it would have control of the weather by 2018, where there are operationally deployed laser weapons, and where multi-dimensional computing in the form of D-Wave has publicly existed for over a decade.

And you think that data centers the size of the one built in Utah can't possibly create self-aware computers?

The public is so woefully behind the curve on what's operational it's almost kinda sad. I see it when people talk about Bitcoin like Snowden's leaks on PRISM and Bullrun don't exist.

Elon Musk is on the cutting edge of AI with both the Tesla network and Neuralink, which is trying to connect it to your fatty melon, probably through Starlink.

The reason Google would say it doesn't exist is so that by the time we realize 40,000 satellites won't let us leave the planet, and that radiation beyond the Van Allen belts is actually not survivable, we will essentially have a 24/7, 365 jailer everywhere, all the time, to keep us under control.


----------



## garsh (Apr 4, 2016)

Circuitsports said:


> And you think that data centers the size of the one built in Utah can't possibly create self aware computers?


No. Not even close.

Current A.I. is nothing but pattern matching. You train a neural net to complete some task, and it's able to do so. The state of the art today is that it can do so with very high accuracy. But it's not self-aware in any real sense, though a chat-bot based on such a system can trick the sort of people who "want to believe".


----------



## francoisp (Sep 28, 2018)

Defining consciousness, or self-awareness, is probably one of the hardest problems to crack, and there are many theories trying to explain it. I know I'm conscious, and because "you" look like me, I assume you're conscious as well. I believe mammals are conscious, not insects, but I'm showing my bias here. When faced with an AI program that doesn't look anything like us, how can one assess its consciousness? The Turing test is no longer considered a proper tool, because a neural network can be very good at fooling us into believing it's conscious when actually all it's doing is regurgitating data collected from the Web, albeit in a clever and believable way.

[As a side note, I thought the HBO Westworld season 1 series painted a fairly believable picture of what a fake consciousness could look like and how a real consciousness could emerge from it.]


----------



## JasonF (Oct 26, 2018)

iChris93 said:


> Isn’t the person that made the claims being disciplined by Google?


Yes, but there's also the possibility of "you weren't supposed to say that publicly".


----------



## Klaus-rf (Mar 6, 2019)

Where is Harrison Ford when we need him?? He can spot a non-human, although it's a non-trivial task iirc.


----------



## garsh (Apr 4, 2016)

JasonF said:


> Yes, but there's also the possibility of "you weren't supposed to say that publicly".


Well, yes. He posted company-confidential information (the transcript of his chat with an experimental, unreleased chatbot) in a public forum. That's what he's being disciplined for.

But it sounds like you believe that this A.I. is conscious, and there's a conspiracy within Google management to keep _THAT_ a secret. Is that correct?


----------



## garsh (Apr 4, 2016)

For people who think that this A.I. might be conscious, I have a question for you. Why do you only believe/suspect that for a chatbot? There are lots of other AIs out there. Google had one that plays Go better than any human. IBM had one that could win Jeopardy. Heck, there's one in each of our vehicles. But nobody ever thought that any of _those_ AIs are conscious. Why not? AlphaGo in particular is a MUCH more advanced AI than LaMDa. I would guess that Tesla's AI is more advanced as well. So why do some people think that LaMDa might be conscious?

I'll tell you why. Because it's a chatbot, and it's been trained TO LIE. Not on purpose. But it's been trained on other people's conversations, and so it's mimicking a person having a conversation. And when one person asks another an existential question like "are you sentient?", people generally don't answer with "no, I'm not sentient", do they?
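
As a toy illustration of that training dynamic, here's a hypothetical miniature chatbot that only ever parrots the most common human reply it saw for a given prompt, so it "claims" sentience simply because the humans in its training data did (real systems like LaMDa generate text statistically rather than by lookup, but the mimicry argument is the same):

```python
from collections import Counter

# Hypothetical miniature "training corpus" of human-to-human conversations.
corpus = [
    ("are you sentient", "of course I am"),
    ("are you sentient", "of course I am"),
    ("are you sentient", "yes, I think and feel"),
    ("what is 2+2", "4"),
]

# "Training": count which reply humans most often gave to each prompt.
replies = {}
for prompt, reply in corpus:
    replies.setdefault(prompt, Counter())[reply] += 1

def chat(prompt):
    """Parrot the most frequent human reply seen for this prompt."""
    return replies[prompt].most_common(1)[0][0]

# The bot affirms its own sentience only because humans in its data did.
print(chat("are you sentient"))  # of course I am
```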


----------



## JasonF (Oct 26, 2018)

garsh said:


> But it sounds like you believe that this A.I. is conscious, and there's a conspiracy within Google management to keep _THAT_ a secret. Is that correct?


No, the opposite. That Google wants someone to believe that the A.I. is sentient to get them a contract for something, possibly military, but the employees aren't supposed to say it publicly.

So there's a possibility the employee went whistleblower because he didn't want Google to quietly exploit a supposedly sentient A.I. for stuff like military purposes.


----------



## francoisp (Sep 28, 2018)

garsh said:


> I'll tell you why. Because it's a chatbot, and it's been trained TO LIE. Not on purpose. But it's been trained on other people's conversations, and so it's mimicking a person having a conversation. And when one person asks another an existential question like "are you sentient?", people generally don't answer with "no, I'm not sentient", do they?


How can we tell if an AGI is sentient or conscious?

I don't need to ask you "are you conscious" because we're both humans and it's implied.

I know my dog is conscious. He has feelings of joy, guilt, and fear. He has memories, desires, anticipation. I know this because I am a witness to his emotions.

Is sentience limited to biological beings?

If not, what criteria could be used to assess an AGI's sentience if we're not willing to trust its answer when asked "are you sentient"?


----------



## garsh (Apr 4, 2016)

garsh said:


> I'll tell you why. Because it's a chatbot, and it's been trained TO LIE.


Finally, someone has written an article focusing on how these chatbots that pretend to be human or have sentience are simply being trained to lie.





Google’s AI passed a famous test — and showed how the test is broken


www.msn.com





_“I don’t think it’s an advance toward intelligence,” Marcus said of programs like LaMDA generating humanlike prose or conversation. “It’s an advance toward fooling people that you have intelligence.”_


----------



## francoisp (Sep 28, 2018)

garsh said:


> Finally, someone has written an article focusing on how these chatbots that pretend to be human or have sentience are simply being trained to lie.
> 
> 
> 
> ...


From the article:


> And unlike ELIZA, LaMDA wasn’t built with the specific intention of passing as human; it’s just very good at stitching together and spitting out plausible-sounding responses to all kinds of questions.


The article also states that the Turing test is no longer adequate to assess sentience, and it mentions a few tests that could help do so. Lemoine should have run these tests before coming out with his proclamation.


----------



## Circuitsports (7 mo ago)

Lemoine is using hardware and software unlike anything commercially available; he also has access to D-Wave 1, and probably 2, and an amount of data that is unfathomable. I'm sure he knows what he's doing.

What he's talking about has national security implications, so I am sure he held back information about how far along this is and what exactly is happening.

Maybe someday we'll find out, but it's been 75 years since they first made radioactive isotopes of gold and 65 years since the electrogravitic propulsion that was "right around the corner" disappeared, and I would put this on those levels, so we may be waiting a long time.


----------



## garsh (Apr 4, 2016)

Circuitsports said:


> Lemoine is using hardware and software unlike anything commercially available; he also has access to D-Wave 1, and probably 2, and an amount of data that is unfathomable. I'm sure he knows what he's doing.


*Everybody* at Google has access to all of that. And only this one person - out of all of those employees - believes that the AI is sentient.

So my question to you is: *why would you trust this one individual's opinion over anybody else's?*


----------



## francoisp (Sep 28, 2018)

Why so quick at dismissing it? It would be nice if we could get an independent peer review.


----------



## garsh (Apr 4, 2016)

francoisp said:


> Why so quick at dismissing it? It would be nice if we could get an independent peer review.


Not one single "peer" has come out to support this one person.

_Responses from those in the AI community to Lemoine's experience ricocheted around social media over the weekend, and they generally arrived at the same conclusion: Google's AI is nowhere close to consciousness. *Abeba Birhane, a senior fellow in trustworthy AI at Mozilla, tweeted on Sunday, "we have entered a new era of 'this neural net is conscious' and this time it's going to drain so much energy to refute."*_








No, Google's AI is not sentient | CNN Business


Tech companies are constantly hyping the capabilities of their ever-improving artificial intelligence. But Google was quick to shut down claims that one of its programs had advanced so much that it had become sentient.


www.cnn.com


----------



## francoisp (Sep 28, 2018)

This AI, as per the conversation samples I've read, passes the Turing test. So my first question is, in what other ways can we assess if an AI is truly sentient? My second question is, would a sentient AI have any kind of rights or would it be a slave to its creator? Finally, in that light, can we trust the organization that created the AI to conduct those tests? Peer reviews would address that.


----------



## shareef777 (Mar 10, 2019)

What makes an AI sentient? What makes US sentient? It’s emotions and desires. How do you “program” an emotion or desire? You can’t! IMHO, AIs will never be sentient.


----------



## francoisp (Sep 28, 2018)

shareef777 said:


> What makes an AI sentient? What makes US sentient? It’s emotions and desires. How do you “program” an emotion or desire? You can’t! IMHO, AIs will never be sentient.


Many share your opinion. Many do not. I'm agnostic.

There's a very good YouTube channel that looks at sentience (consciousness) from various angles. There are quite a few episodes on the subject. Here are some examples.


----------



## shareef777 (Mar 10, 2019)

francoisp said:


> Many share your opinion. Many do not. I'm agnostic.
> 
> There's a very good YouTube channel that looks at sentience (consciousness) from various angles. There are quite a few episodes on the subject. Here are some examples.


Sentience is about as well defined as FSD 😂


----------



## garsh (Apr 4, 2016)

francoisp said:


> This AI, *as per the conversation samples I've read*...


Why do you only consider consciousness for chatbots? Why don't you consider that maybe your Tesla has become conscious? Why not Google's search engine? That search engine has a LOT more processing power behind its AI than this little experimental chatbot.

Heck, the Voice Dictation A.I. that runs on every Pixel phone put out in the last year is more powerful than this chatbot. But because that A.I. doesn't say "I have my own desires", nobody gives a second thought as to whether it has sentience.


----------



## francoisp (Sep 28, 2018)

garsh said:


> Why do you only consider consciousness for chatbots? Why don't you consider that maybe your Tesla has become conscious? Why not Google's search engine? That search engine has a LOT more processing power behind its AI than this little experimental chatbot.
> 
> Heck, the Voice Dictation A.I. that runs on every Pixel phone put out in the last year is more powerful than this chatbot. But because that A.I. doesn't say "I have my own desires", nobody gives a second thought as to whether it has sentience.


I have zero preconception about the way an AI consciousness may manifest itself one day.

All I'm saying is that we should have a way to test for it and it should be done by a third-party, not the organization that created it.


----------



## shareef777 (Mar 10, 2019)

garsh said:


> Why do you only consider consciousness for chatbots? Why don't you consider that maybe your Tesla has become conscious? Why not Google's search engine? That search engine has a LOT more processing power behind its AI than this little experimental chatbot.
> 
> Heck, the Voice Dictation A.I. that runs on every Pixel phone put out in the last year is more powerful than this chatbot. But because that A.I. doesn't say "I have my own desires", nobody gives a second thought as to whether it has sentience.


As the saying goes, "I think, therefore I am." Any concept of a computer "thinking" is just a human behind a computer that told it what to "think".



francoisp said:


> I have zero preconception about the way an AI consciousness may manifest itself one day.
> 
> All I'm saying is that we should have a way to test for it and it should be done by a third-party, not the organization that created it.


Therein lies the problem: how do you test for something of which you have no concept?


----------



## francoisp (Sep 28, 2018)

shareef777 said:


> Therein lies the problem: how do you test for something of which you have no concept?


It is a conundrum. Very smart people are thinking about consciousness and studying it. They're the ones who can do this.


----------



## garsh (Apr 4, 2016)

francoisp said:


> All I'm saying is that we should have a way to test for it and it should be done by a third-party, not the organization that created it.


So every time a company writes a new version of ELIZA, and that chatbot tricks ONE employee into thinking that it is conscious, the company has to now give some third-party access to their system (and source code)?

What if someone creates a computer where the computer turns itself back on every time you try to turn it off. Is it possibly conscious because it's demonstrating a "will to live"? Would you require that the inventor of such a device acquiesce to third-party review in order to prove that it's actually just a "useless box" and not a sentient AI trying desperately to survive?


----------



## francoisp (Sep 28, 2018)

Silly me, I thought we were having a serious discussion.


----------



## Klaus-rf (Mar 6, 2019)

francoisp said:


> ... my first question is, in what other ways can we assess if an AI is truly sentient? My second question is, would a sentient AI have any kind of rights or would it be a slave to its creator? Finally, in that light, can we trust the organization that created the AI to conduct those tests? Peer reviews would address that.


Obviously a slave, just like the rest of us allegedly "sentient" things.

More questions:

- How long does it have to be declared "sentient" before it's allowed to drive? Can it lose that license for tickets, infractions, "errors", accidents, or deaths caused?
- Can it legally gamble, and with what money? Who would "own" the money if it wins? Can it own or have money?
- Does it have "the right" to own and use a gun?
- What happens to that instance of a "sentient" when it (or another instance of it) kills one, 10, or 100,000 humans? Will that one instance be powered off, deleted, or will ALL AI of that flavor be destroyed? Or the more common "thoughts and prayers"?
- What if an AI starts a new religion?
- Will it need a SSN to do work for money? (I think we already have this answer.)

AIs are already replacing millions of human jobs.

Yet the beat goes on.


----------



## garsh (Apr 4, 2016)

francoisp said:


> Silly me, I thought we were having a serious discussion.


Sorry, I'm getting frustrated when people think that there could be validity to this claim.

I was hoping you could explain to me how my two "silly" scenarios are any different from the current one. The only difference I see is that the term "Artificial Intelligence" has been used to describe how LaMDa is implemented, and people who don't understand how current A.I. works believe it is somehow comparable to human intelligence. That's understandable, given the name. A.I. is being used to describe something that could better be called "Pattern Matching", or "Line Fitting". There's nothing "intelligent" about it, let alone "sentient".


----------



## francoisp (Sep 28, 2018)

garsh said:


> ... the term "Artificial Intelligence" has been used to describe how LaMDa is implemented, and people who don't understand how current A.I. works believe it is somehow comparable to human intelligence. That's understandable, given the name. A.I. is being used to describe something that could better be called "Pattern Matching", or "Line Fitting". There's nothing "intelligent" about it, let alone "sentient".


I agree with what you stated above. Those "AIs" are very good at the tasks they were designed for, but they're never going to be sentient. Tesla uses deep pattern matching with a clever algorithm for its Autosteer software, but it will never be sentient.

Let me ask this hypothetical question: would a Data-like android be considered sentient? He learns, he's creative, he communicates, he appears to be self-aware. On the other hand he has no emotion (most of the time) and he can be turned off. Does he have a consciousness or is it just clever programming?


----------



## garsh (Apr 4, 2016)

francoisp said:


> Let me ask this hypothetical question: would a Data-like android be considered sentient? He learns, he's creative, he communicates, he appears to be self-aware. On the other hand he has no emotion (most of the time) and he can be turned off. Does he have a consciousness or is it just clever programming?


I assume we mean the character Data from ST-TNG. Yes, I would consider that make-believe character to be sentient. I'd also add to your list that he sets his own goals - he decides to join Starfleet, he decides to get a pet cat, he decides to try raising a "child", etc. That is, he exhibits "free will".

Current AI is just a tool, and mostly for a specific few tasks. My Tesla will never do anything but drive. google.com will never do anything but return web pages. And LaMDa will never do anything but chat.


----------



## Madmolecule (Oct 8, 2018)

I’m thinking about becoming a mind reader, let me know your thoughts.


----------



## garsh (Apr 4, 2016)

It's alive! How belief in AI sentience is becoming a problem


AI chatbot company Replika, which offers customers bespoke avatars that talk and listen to them, says it receives a handful of messages almost every day from users who believe their online friend is sentient.


www.reuters.com





_Kuyda recently spent 30 minutes with a user who felt his Replika was suffering from emotional trauma, she said._


----------



## francoisp (Sep 28, 2018)

garsh said:


> It's alive! How belief in AI sentience is becoming a problem
> 
> 
> AI chatbot company Replika, which offers customers bespoke avatars that talk and listen to them, says it receives a handful of messages almost every day from users who believe their online friend is sentient.
> ...





> _"Whether an AI is conscious is not a matter for Google to decide," said Schneider, calling for a richer understanding of what consciousness is, and whether machines are capable of it._
> 
> _"This is a philosophical question and there are no easy answers."_


----------



## garsh (Apr 4, 2016)

> Susan Schneider, founding director of the Center for the Future Mind at Florida Atlantic University


Here's the website for this center:








Center for the Future Mind


www.fau.edu

Her bio says:

> Professor of Philosophy
> [she] writes about the fundamental nature of the self and mind, especially from the vantage point of issues in philosophy of mind, artificial intelligence (AI), astrobiology, metaphysics and cognitive science.

In other words, she has no background in computer science, machine learning, or other practical A.I. mechanics. But I doubt that the above quote came from a discussion of LaMDa in particular. I can see that the author of the article has carefully quoted her to make it sound like she supports the notion that LaMDa might be sentient. Knowing how writers will quote out of context to get the story they want, I bet she was talking about some future possibilities and inadvertently gave this writer the soundbite they wanted.

In fact, I found an interview she did with CBC on this topic, and it helpfully includes a transcript.

DM: Susan, let me start with you. Google says there's lots of evidence LaMDA is not sentient, but there's something kind of creepy about that conversation. Do you have any worries yourself that this machine is lonely and it's scared of being shut off?

SUSAN SCHNEIDER: *To tell you the truth, I don't have worries in this case.* But the transcript does remind me of the film Her, in which there was a far more sophisticated version of LaMDA that humans felt was conscious. And I am kind of amused by Google's assertion that there's no evidence that LaMDA is conscious, since when you look at the literature on the topic of machine consciousness, I think everybody agrees that we don't have uncontroversial ways of making these claims. So I think now we need to start discussing the issue and take it very seriously, even though, again, *I don't suspect that we have a sentient machine on our hands in this case.*

Frankly, I'm amused that she seems to berate Google for asserting that LaMDA is not conscious, but then states that she has no reason to believe it is, either.


----------



## Madmolecule (Oct 8, 2018)

Now I wonder if we are. My houseplant is sad.


----------



## Klaus-rf (Mar 6, 2019)

Madmolecule said:


> Now I wonder if we are, my houseplant is sad


You need to talk to it more.


----------

