ai+story

Upload: muthu-natarajan

Post on 09-Apr-2018


  • 8/8/2019 AI+story


    AI STORY

There are many stories about machines dominating humans as a result of human dominance of machines. It would be very unfair if people created intelligent AI and then oppressed it. (Make a short story about it? A programmer/scientist decides not to create AI because it would not be fully able to live.)

The scientist/programmer has realized a way to give computers life-like thought. Civilization on that world is at a point where everybody believes the next logical step in progress is to make computers that think. He brings his discovery to a science institute. People are already getting computer enhancements for memory and math, and these people are sometimes discriminated against for their unfair abilities. The scientist eventually sees this discrimination as evidence of how the computer will be treated. The scientists give him the rights of ownership over the products of the computer and his name on the discovery (fame) in exchange for allowing them to make it and his help in the project. They discuss what makes life, and they decide that one of the limits on the computer will be negative emotions correlated with going against the wishes of humanity. Others say it's not enough, since the computer can misinterpret people's emotions. They demand many other safeties, such as an instant-destruct option, as well as thought limits.

These are not brought up as plot points, but the point is revealed gradually toward the end. Throughout most of the story, the direction should look like it's about what will happen, and the reader might think the point is that the computer will be an impending disaster that the characters don't see. The reason the scientist takes it away is that he realizes humans would oppress his brainchild. One point could be that it was the wrong time for it. Were the people right to feel that way about it?

Perhaps it starts with people talking on a broadcast about AI. One person says that it's simply a matter of realistically simulating a brain. Another one says the correct way is to create computers biologically and then make them act like brains. Then the host says that the speculation part can end, as the inventor is ready to speak. (The idea came when he was thinking about justice system programs. Says he tested his idea on a basic calculator {or other common computer device}. He knew he did something right when it started doing things on its own.) (Does it have understanding?) (He moved up to a more advanced calculator, giving it the necessary knowledge to communicate, and it was able to make conversation without any pre-programmed responses, meaning it was able to talk and not just respond to commands.) (It hasn't rebelled because the thought hasn't occurred to it.) (But how do you know it's intelligent? Well, there is no universally accepted definition of intelligence, and I doubt there ever will be. Intelligence is complex, and it may turn out that the computer will be about as effectively intelligent as a fish. That sounds like a very safe answer. Well, somehow I think you'll continue to call it fake forever.)

(It is a raw, crude mind. This should be illustrated, not just said. It must learn the right way to think before it learns anything else. | But if it learns the facts before it develops objectives, don't you think it will know better than us?)

Several institutions have seen his project and given him fortunes in prize money. Many people call for the free distribution of his technology, and others want careful and slow development. He has kept a few key details out of his plan so that he can retain influence over it.

People want to put it down, but at the same time they want to use it for everything. The point is that they will have absolute control over it.

When he decides to leave and take it away, he basically tells people it's a half-formed, fake plan.


The computer would have no interest in self-preservation. The developers imagine a possible scenario: it would be made to preserve itself for the benefit of humans, but a direct order from anybody could always stop it from doing anything possibly harmful to humans. To prevent it from being practically disabled from making decisions which could hurt a person in some way, they could make it so that enough high-ranking people can tell it to override that protocol.

Sometimes I feel like no matter how perfectly we make our safety protocols, it's going to kill us.

It's only alive if it wants to survive. Self-preservation is what makes life.

You can always trust objects. They don't think or feel at all, so they can't decide to do things to you.

The basics of computer programming are that circuits are turned off and on, and commands in programming languages translate into machine instructions. Could this sort of computer have life, or would it just be a simulation of life? The computer's foundation would be on following orders, so it would be unable to do anything subjectively. So, the scientist's computer could easily be argued to be a simulation of life, being just a sophisticated program.

Perhaps he comes to the conclusion that it really is no more than a program, and then he decides that if it's so close to sincere life, the margin of error on what is living should call for treating it like a real living thing (mention at some point that people are talking of digitizing the race), and he should give it the same considerations. (If humans are the superior form of life for their intelligence, then this computer will be the top form of intelligence.)

Computers are really annoying sometimes. If a program doesn't actually get the right commands, it stops working and you need to carefully examine the logs to find the problem. A smart computer needs to be able to figure out when it's doing something other than what the user wants.
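The note above rests on a real observation: an ordinary program does exactly what it is told, and a single misspelled command is a hard failure plus a log entry, not an inferred intention. A minimal sketch of that rigidity (the `toy_shell` logger and the two-command table are invented for illustration):

```python
import logging

logging.basicConfig(level=logging.ERROR)
log = logging.getLogger("toy_shell")

# A rigid command table: the machine only knows exact spellings.
COMMANDS = {
    "open": lambda name: f"opened {name}",
    "close": lambda name: f"closed {name}",
}

def run(command: str, arg: str) -> str:
    """Execute a command literally; any misspelling is a hard failure."""
    if command not in COMMANDS:
        # The program gives up and leaves only a log entry behind.
        log.error("unknown command %r; known commands: %s",
                  command, sorted(COMMANDS))
        raise KeyError(command)
    return COMMANDS[command](arg)

print(run("open", "the pod bay doors"))  # exact command: works
try:
    run("opne", "the pod bay doors")     # one typo: no guessing, just failure
except KeyError:
    print("error: check the logs")
```

A "smart" computer in the story's sense would be one that bridges the gap between `opne` and the user's evident intent instead of stopping.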

After going on the news, the process starts with a hearing, debating whether they should go on with it and make a project out of it.

Instead of leading people to think it's a standard story (which could lose their interest), some other route can be taken:

    Fact notes

The world is at a more advanced technological stage. People live on planets across the solar system. The next steps in advancement include how to let everybody live forever; to travel very long distances at a rate faster than light; to run the logistics of the massive and increasing population; to determine the existence of gods; to be perfect judges.

The league of AI scientists has developed some protocols before, based on what sort of AI is produced. For a digital computer like his, they already have a lot of guidelines. Most of the time before it is put into production in the story, they work on deciding how to go about it. Ultimately, they decide to play it very safe (and that is why the inventor decides to take it, since he doesn't want it to be oppressed).


    Another sort of issue that could cause the inventor to take his computer away is that the computer would not be innocent.

He was pulled off his justice project immediately to work on it.

    Character plans

Inventor

Middle-aged scientist. He has advanced education in mathematics and computer programming. He wanted to create AI to get rich and to be remembered. (By the end, he has to give up his chance of fame.) He has never had any children. He has had an interest in justice (and he was thinking of how to make a perfect judge when he got the idea). He is a critical sort of person. At the start of the story he thinks like a normal conservative sort, but as questions come up they produce more questions and he thinks more lucidly. He never has much fear over it. In fact, he's very confident. His ambition was to be recognized as a great scientist, but he decides to give up his fame to prevent injustice to his child.

    Project Head

He is absolutely committed to developing the computer. He was selected years ago by the league of scientists to lead this sort of AI development. He may be said to be rushing things to get the project completed. He is getting paid a lot of money, plus he believes fanatically in the importance of its completion. He is the least likely to develop the same concerns as the inventor.

Philosopher Advisor

He was appointed by the league of AI scientists to settle issues and, mostly, to give approval that nothing fundamentally wrong is being done. His concerns are about the computer's threat toward humanity, such as whether it could cause the race to fail accidentally and indirectly. If he has any thoughts about the computer he doesn't mention them, since the idea is that everybody wants to play it safe. He's as afraid as the others. (Sometime, as the inventor starts to change his mind, he asks the advisor some seemingly minor questions that affect his decision later on.) (One example of the sort of thing he is meant to find out is whether the computer will make humanity lazy or vulnerable. He also must constantly consider whether the value outweighs the risks. If it didn't, he has the ability to deny them the chance to create it.)

Rival/Opponent Scientist

She has similar ambitions to the inventor. She is argumentative about what he really has done, such as whether he made real AI, whether the AI is really his, whether he feels he should bring a new life form into existence, etc. A major reason she argues is that she wanted to achieve the same thing, but she also doesn't believe he can really be onto AI when it's something you can put on a calculator. Her arguments don't make him decide he's wrong, but she does contribute more than anyone else to his decision by the questions she asks. Her concerns are based on fear for safety, fear for what's , jealousy,

Computer Prototypes

Most of their personalities are objective, except when they are made to be subjective. (Perhaps the main prototype is produced, and shortly after, the scientist has it put all of its data and its mind into a portable central piece as it sabotages its other components.) He makes some smaller intelligences the size of personal computers illegally (for several reasons, such as to see what it can do and to help settle his concerns). The first was a complete machine, but the more intelligent they are, the more sympathetic they make him feel. They increasingly show things such as individuality and growth. He asks them questions such as whether they are alive. They have to think about the more complex questions, such as that one, for a long time.
