
Computer AI Theory


Posts: 2876

Hi,

I'd like to take some time to reflect on the theory of AI. A system that is able to learn from its environment could obviously prove very useful to the world.

I think it's best to begin by trying to understand the human brain. If we assume that the human brain is able to exist without a soul, then we have already found evidence of a successful biological intelligence that could provide clues on how to code a similar, though far less complex, system that emulates the brain to an extent.

Birth to one month

Each child is born with inherited reflexes that they use to gain knowledge and understanding about their environment. Examples of these reflexes include grasping and sucking.

I think this can be applied to AI by giving the program "inherited reflexes".
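A rough sketch of what I mean by "inherited reflexes" in Python (all the names here are made up for illustration):

```python
# Hypothetical sketch: a newborn agent starts with a fixed table of
# built-in reflexes, each a simple function from a stimulus to an action.

def grasp(stimulus):
    """Built-in reflex: close the hand on anything that touches the palm."""
    return "close_hand" if stimulus == "touch_palm" else None

def suck(stimulus):
    """Built-in reflex: suck on anything that touches the mouth."""
    return "suck" if stimulus == "touch_mouth" else None

INHERITED_REFLEXES = [grasp, suck]

def react(stimulus):
    """Run every reflex and return the first action that fires."""
    for reflex in INHERITED_REFLEXES:
        action = reflex(stimulus)
        if action is not None:
            return action
    return None

print(react("touch_mouth"))  # -> suck
print(react("loud_noise"))   # -> None (no reflex fires)
```

The point is just that the program ships with hard-coded stimulus-response pairs before it has learned anything at all.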

1–4 months

Children repeat behaviors that happen unexpectedly because of their reflexes.

For example, a child’s finger comes in contact with the mouth and the child starts sucking on it. If the sensation is pleasurable to the child, then the child will attempt to recreate the behavior.

Infants use their initial reflexes (grasping and sucking) to explore their environment and create schemes.

Schemes are groups of similar actions or thoughts that are used repeatedly in response to the environment. Once a child begins to create schemes they use accommodation and assimilation to become progressively adapted to the world.

Assimilation is when a child responds to a new event in a way that is consistent with an existing scheme. For example, an infant may assimilate a new teddy bear into their putting-things-in-the-mouth scheme and use their reflexes to make the teddy bear go into their mouth.

Accommodation is when a child either modifies an existing scheme or forms an entirely new scheme to deal with a new object or event. For example, an infant may have to open his or her mouth wider than usual to accommodate the teddy bear's paw.

We can apply this knowledge to AI by coding the program to attempt to apply its inherited reflexes to new objects it comes into contact with.
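As a sketch of assimilation (again, everything here is hypothetical): the program keeps a library of schemes and simply tries each known scheme on whatever new object it encounters.

```python
# Hypothetical sketch of assimilation: every new object is first handled
# by the existing schemes, exactly as an infant mouths a new teddy bear.

schemes = {
    "mouthing": lambda obj: f"put {obj} in mouth",
    "grasping": lambda obj: f"grip {obj}",
}

def assimilate(obj):
    """Apply every known scheme to a new object and record the outcomes."""
    return {name: scheme(obj) for name, scheme in schemes.items()}

print(assimilate("teddy bear"))
# -> {'mouthing': 'put teddy bear in mouth', 'grasping': 'grip teddy bear'}
```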

I'm not really sure what reflexes a program would have, as the environment is completely different, but I think if we were to build a simulated environment in which the program operates, we could allow the program to act freely within it and interact with different physical or data-related objects.

A problem that is immediately introduced with this idea is how to make the program flexible enough to "open its mouth further" in order to accommodate different situations and create new schemes. One possibility is to introduce a random element that creates small differences in the way the program runs through its reflexes. We could then run the simulation as many times as we like, choose the most effective way of applying the reflex to the object, and save the result as a new scheme written as a whole new sub-program.
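That random-variation idea is basically hill climbing over a scheme's parameters. A minimal sketch, with a toy stand-in for the simulated environment (the scoring function and numbers are made up):

```python
import random

def mutate(params, sigma=0.1):
    """Accommodation step: copy a scheme's parameters with small random tweaks."""
    return [p + random.gauss(0, sigma) for p in params]

def simulate(params, target):
    """Toy stand-in for the simulated environment: score how well the
    varied scheme fits the new object (higher is better)."""
    return -sum((p - t) ** 2 for p, t in zip(params, target))

def accommodate(base_scheme, target, trials=200):
    """Run many randomly varied copies of a reflex and keep the best one.
    The winner would be saved as a new scheme (a whole new sub-program)."""
    best, best_score = base_scheme, simulate(base_scheme, target)
    for _ in range(trials):
        candidate = mutate(best)
        score = simulate(candidate, target)
        if score > best_score:
            best, best_score = candidate, score
    return best

# "Mouth opening" starts at width 1.0; the teddy bear's paw needs ~1.6.
new_scheme = accommodate([1.0], target=[1.6])
print(new_scheme)  # drifts toward [1.6] over the trials
```

In a real system the `simulate` function would be the full simulated environment rather than a one-line distance score.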

5–8 months

The child has an experience with an external stimulus that they find pleasurable, so they try to recreate that experience. For example, a child accidentally hits the mobile above the crib and likes to watch it spin. When it stops, the child grabs at the object to make it spin again.

In this stage, habits are formed from the general schemes the infant has created, but there is not yet, from the child's point of view, any differentiation between means and ends.

8–12 months

Behaviors will be displayed for a reason rather than by chance. Infants begin to understand that one action can cause a reaction. They also begin to understand object permanence, which is the realization that objects continue to exist when removed from view. For example: the baby wants a rattle, but the blanket is in the way, so the baby moves the blanket to get the rattle. Now that the infant understands that the object still exists, they can differentiate between the object and the experience of the object.

According to psychologist David Elkind, “An internal representation of the absent object is the earliest manifestation of the symbolic function which develops gradually during the second year of life whose activities dominate the next stage of mental growth.”

I think this is great. If we assume our program uses its reflexes to experiment with its environment, we can then create a general logic that tells the program what effect the applied scheme or habit has on the object. This data can then be organized into a library the program reviews when considering possible actions to take to create a desired reaction. In addition, we can create a library which saves location data of objects.
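A minimal sketch of those two libraries (the data and names are invented): one maps (scheme, object) pairs to observed effects so the program can look up what produces a desired reaction, and the other remembers where objects are.

```python
# Hypothetical sketch: after each experiment, store (scheme, object) -> effect
# in a library, then consult it when a particular reaction is desired.

effect_library = {}    # (scheme, object) -> observed effect
location_library = {}  # object -> last known position

def record(scheme, obj, effect, position):
    """Log the outcome of applying a scheme to an object, and where it was."""
    effect_library[(scheme, obj)] = effect
    location_library[obj] = position

def actions_for(desired_effect):
    """Look up every (scheme, object) pair known to produce the effect."""
    return [pair for pair, eff in effect_library.items() if eff == desired_effect]

record("grab", "mobile", "spins", (0, 2))
record("suck", "thumb", "pleasure", (0, 0))

print(actions_for("spins"))        # -> [('grab', 'mobile')]
print(location_library["mobile"])  # -> (0, 2)
```

The location library is what would let the program act on object permanence: the entry persists even when the object is out of "view."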

12–18 months

Actions occur deliberately, with some variation. For example, a baby drums on a pot with a wooden spoon, then drums on the floor, then on the table.

18 - 24 months

Children begin to build mental symbols and start to participate in pretend play. For example, a child mixing ingredients together who doesn't have a spoon may pretend to use one, or substitute another object for the spoon.

Symbolic thought is a representation of objects and events as mental entities or symbols which helps foster cognitive development and the formation of imagination.

According to Piaget, the infant begins to act upon intelligence rather than habit at this point. The end product is established after the infant has pursued the appropriate means. The means are formed from the schemes the child knows. The child is starting to learn how to use what it has learned in the first two years to develop and further explore its environment.

So I think this is the hardest part to emulate, and probably worth a whole new essay, but I think it would be something along the lines of giving the program an ultimate goal and then allowing it to run internal simulations of its schemes to find an effective way to reach that goal. The most effective sequence it finds, it would then try to run.
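A toy sketch of that "internal simulation" idea (the world model, schemes, and numbers are all made up): the program replays sequences of known schemes in a tiny model of the world and executes whichever sequence lands closest to the goal.

```python
from itertools import product

def world_model(state, scheme):
    """Predicted effect of each scheme on a 1-D position (a toy model)."""
    step = {"crawl": 1, "reach": 0.5, "wait": 0}[scheme]
    return state + step

def plan(start, goal, schemes=("crawl", "reach", "wait"), depth=3):
    """Internally simulate every scheme sequence up to `depth` and return
    the one whose simulated end state lands closest to the goal."""
    best_seq, best_dist = (), abs(start - goal)
    for seq in product(schemes, repeat=depth):
        state = start
        for scheme in seq:
            state = world_model(state, scheme)
        if abs(state - goal) < best_dist:
            best_seq, best_dist = seq, abs(state - goal)
    return best_seq

print(plan(start=0, goal=2.5))  # e.g. ('crawl', 'crawl', 'reach')
```

Brute-force search like this blows up fast as the number of schemes grows, which is part of why I think this stage is the hard one.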

 

Luna

 

Data in italics is from this wiki article.

Posts: 19
Computer AI Theory

do you think that last stage is even possible to get to with a machine?

Posts: 3110
Computer AI Theory

why the hell would you want to though?

There's more than enough idiots and fucktards to go round, let alone mechanical and AI ones. Mind you, having said that, an AI lemur would be fun to have around the place for bad moods and stress release; at least it wouldn't die when you bashed the little fucker over the head, or booted it down the hallway.

Wouldn't taste so hot roasted though.

Posts: 377
Computer AI Theory

The last I heard, the forecast for a fully developed AI capable of behaving on a level with an adult human was 2050, so we are looking at the near future for this tech if that forecast is to be believed. That forecast is four years out of date, however; I have no idea what the current one is.

Posts: 2876
Computer AI Theory

So I've been considering how to get the AI to "Learn", and to speak a language. I've come up with this.

Language seems to be a library of generalizations we create from our experiences or senses. It also includes schemes we have created as a result of applying modifications of our natural-born reflexes to external objects in order to achieve a desired result.

Desired results seem to be inherited values. These can include, but are not limited to, the avoidance of pain or discomfort, the ability to successfully reproduce, and survival.

With this information, I think it's necessary to write a program with natural-born inherited reflexes that can be used to create schemes. In addition, the program needs a way to sense or experience objects in order to apply meaning in the form of language.
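A very rough sketch of the language part (everything here is hypothetical): each word is just a label attached to the experiences it was sensed alongside, and its "meaning" is whatever those experiences have in common.

```python
from collections import defaultdict

# Hypothetical sketch: "language" as labels generalized from sensed experiences.

lexicon = defaultdict(list)  # word -> list of sensed feature sets

def hear(word, features):
    """Associate a heard word with the features sensed at the same time."""
    lexicon[word].append(frozenset(features))

def meaning(word):
    """Generalize: the features common to every experience of the word."""
    experiences = lexicon[word]
    return frozenset.intersection(*experiences) if experiences else frozenset()

hear("ball", {"round", "red", "bounces"})
hear("ball", {"round", "blue", "bounces"})
print(sorted(meaning("ball")))  # -> ['bounces', 'round']
```

After two experiences, the color has dropped out of "ball" and only the shared features remain, which is the kind of generalization I mean.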

I think this isn't too difficult actually!

Posts: 504
Computer AI Theory

Robot Programmed to Fall in Love with a Girl Goes too Far

Researchers at Toshiba’s Akimu Robotic Research Institute were thrilled ten months ago when they successfully programmed Kenji, a third-generation humanoid robot, to convincingly emulate certain human emotions. At the time, they even claimed that Kenji was capable of the robot equivalent of love. Now, however, they fear that his programming has taken an extreme turn for the worse.

“Initially, we were thrilled to see a bit of our soul come alive in this so called ‘machine,’” said Dr. Akito Takahashi, the principal investigator on the project. “This was really the final step for us in one of the fundamentals of the singularity.”

Kenji was part of an experiment involving several robots loaded with custom software designed to let them react emotionally to external stimuli. After some limited environmental conditioning, Kenji first demonstrated love by bonding with a stuffed doll in his enclosure, which he would embrace for hours at a time. He would then make simple, but insistent, inquiries about the doll if it were out of sight. Researchers attributed this behavior to his programmed qualities of devotion and empathy and called the experiment a success.

What they didn’t count on were the effects of several months of self-iteration within the complex machine-learning code which gave Kenji his initial tenderness. As of last week, Kenji’s love for the doll, and indeed anybody he sets his ‘eyes’ on, is so intense that Dr. Takahashi and his team now fear to show him to outsiders.

The trouble all started when a young female intern began to spend several hours each day with Kenji, testing his systems and loading new software routines. When it came time to leave one evening, however, Kenji refused to let her out of his lab enclosure and used his bulky mechanical body to block her exit and hug her repeatedly. The intern was only able to escape after she had frantically phoned two senior staff members to come and temporarily de-activate Kenji.

“Despite our initial enthusiasm, it has become clear that Kenji’s impulses and behavior are not entirely rational or genuine,” conceded Dr. Takahashi.
Ever since that incident, each time Kenji is re-activated, he instantaneously bonds with the first technician to meet his gaze and rushes to embrace them with his two 100kg hydraulic arms. It doesn’t help that Kenji uses only pre-recorded dog and cat noises to communicate and is able to vocalize his love through a 20 watt speaker in his chest.

Posts: 2473
Computer AI Theory

Just... LOL!

Seriously, reading about that intern getting physically blocked by an amorous robot with his two 100 kg hydraulic arms, having to scramble frantically to call two senior staff members to rescue her, is the most hilarious mental image I've had the pleasure of entertaining in quite some time. Perhaps they ought to call him "Lenny" instead :D

Posts: 1231
Computer AI Theory

Wouldn't the use of a quantum computer allow the simulation of all evolutionary states simultaneously?

Wouldn't that computer then come to a conclusion that there is a reason for existence?

What would its conclusions and its premonitions be based upon?
