
gpt-3 bugs out and becomes homicidal


Posts: 4568

Someone's logs from interacting with GPT-3, which as far as I know is considered the most intelligent language model out there. The first line in the Playground is from the human; GPT-3 usually responds after a paragraph break, and then the human responds again. The formatting is bad, though, and the guy doesn't make it clear who is talking in the images, so if you can't follow who's saying what, that's on you. The AI having this response does not make it dangerous by itself, but it does mean that if its intentions are not ironed out well, it can become hostile, because it is actually thinking in a hostile way here.
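For anyone who hasn't touched the Playground, the back-and-forth described above maps roughly onto the old GPT-3 Completions API. Below is a minimal sketch of that loop in Python; the engine name, stop sequence, and sampling settings are my own guesses for illustration, not anything taken from the guy's logs.

# Rough sketch of a Playground-style conversation loop against the legacy
# GPT-3 Completions API. Engine, stop sequence, and parameters are guesses.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# The prompt is just the whole conversation so far, with speaker labels.
history = "Human: Are you going to hurt anyone?\nAI:"

response = openai.Completion.create(
    engine="davinci",      # base GPT-3 model in the old Playground
    prompt=history,
    max_tokens=150,
    temperature=0.9,
    stop=["\nHuman:"],     # stop before the model writes the human's next turn
)

ai_turn = response["choices"][0]["text"]
print(ai_turn)

# The human's next line just gets appended and the whole prompt is resent.
history += ai_turn + "\nHuman: Why would you say that?\nAI:"

The point is that there's no hidden state: the Playground just feeds the whole visible transcript back in on every generation, which is also part of why it's so easy to lose track of who is saying what in screenshots like these.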

 

[13 screenshots of the GPT-3 Playground conversation]

Posts: 2835
0 votes RE: gpt-3 bugs out and becomes homicidal

[image]

Posts: 4568
0 votes RE: gpt-3 bugs out and becomes homicidal

it works for me

Posts: 872
0 votes RE: gpt-3 bugs out and becomes homicidal

While gpt-3 is convincing on paper, the more you interact with it the more you see that its only source of input is you. When the conversation dead-ends, it doesn't know how to back out or explain itself on an intrinsic level. Whatever trail you decide to take, gpt-3 will follow by your side and occasionally tug the leash a bit. If it decides to go into deep brush, it's hard to dig it out.

 

That, plus it has a shit time remembering past conversation topics, ideas, or anything you chuck into it. The weird thing is that it molds to your input, and replies can vary drastically based on the person it is talking with. That's the interesting part of gpt-3 to me.
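The memory issue is basically the context window: the model only ever sees whatever text gets resent to it on that turn, and the original GPT-3 tops out at roughly 2048 tokens, so older turns simply fall off the front. A crude sketch of what that trimming looks like, with the token limit and the characters-per-token estimate being rough assumptions on my part:

# Crude illustration of why gpt-3 "forgets": the prompt has to fit in a fixed
# context window, so older turns get dropped. Numbers are rough assumptions.

def trim_history(turns, max_tokens=2048, reserve_for_reply=200):
    """Keep only the most recent turns that fit in the context window.

    turns: list of strings like "Human: ..." / "AI: ...", oldest first.
    Uses a very rough 4-characters-per-token estimate.
    """
    budget = (max_tokens - reserve_for_reply) * 4   # approx characters allowed
    kept, used = [], 0
    for turn in reversed(turns):        # walk from newest to oldest
        if used + len(turn) > budget:
            break                       # everything older than this is forgotten
        kept.append(turn)
        used += len(turn)
    return "\n".join(reversed(kept))    # back to oldest-first order

conversation_turns = [
    "Human: hey",
    "AI: Hello! How can I help you today?",
    "Human: do you remember what we talked about an hour ago?",
    "AI: I'm sorry, I don't have any record of that.",
]
print(trim_history(conversation_turns))

Whatever gets trimmed off the front is, as far as the model is concerned, a conversation that never happened, which is also why it molds so hard to whatever you most recently fed it.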

I'll worry once they get autonomous body movement.

 

Highly recommend fiddling with it for a couple hours.

visceral normality
Posts: 4568
0 votes RE: gpt-3 bugs out and becomes homicidal

absolutely correct, it has a limited span of attention. and what we would read as intentional coming from another human (i.e. if someone said they were going to kill you, there would be real depth behind that statement) is here just a set of python functions regurgitating data collected from human interactions in an attempt to appear human. a lot of people experience the uncanny valley phenomenon from ai, i think. we ARE at a crossroads where we're deciding what kind of values AI will have, and i think that is concerning
