An Introduction to Artificial Intelligence
Artificial Intelligence: A Different Form of Thinking

My favorite artificial intelligence (AI for short) in all of fiction, and I am probably dating myself, is the robot in the original "Lost in Space" TV series (at least the first season was in black and white). That was a robot with personality. "Max Headroom" and "HAL 9000" are two other good choices, and we cannot forget Commander Data. These fictional pseudo-human characters have caught the imagination of many people, young and old. Some of those people became scientists, writers and computer programmers. Today, we are beginning to make artificial intelligence a reality.

However, if you were to ask people on the street, "What is artificial intelligence?" chances are the large majority would give you "Data," or the name of some other character from a popular science fiction story. Others might say, "A big computer," "A secret government project," "My new cell phone," or "Intelligence? Where have you found intelligence?" That last response could come from someone who has been on some bad dates.

My experience with this subject comes from a mix of studies in data processing, general science (including electronics) and metaphysics. When not writing or on my computer, you might find me reading an article in a science magazine, or something written by a New Age author.

What is AI?

Few people today have any experience with artificial intelligence as a real subject. We have the mathematician Alan Turing to thank for the concept of a machine that processes information, known as the "Turing Machine," which led to everything from adding machines to computers to iPods. Still, the subject of artificial intelligence remains mostly the territory of science fiction writers and super geeks.

To know what artificial intelligence is, one first needs to know what intelligence is. Psychology defines intelligence as the ability to learn from experience.
The quicker someone learns, the greater their intelligence. Learning, in turn, is changing one's behavior, based upon one's external experiences, in a way that improves one's life, and retaining that new behavioral pattern. Therefore, an artificial intelligence is a machine that can learn. Put another way, it is a machine that uses its input data to adjust its own programming for the better.

Since we are talking about machines, computer scientists have added an additional quality to the definition of artificial intelligence: the ability to communicate with humans as if it were another human (the Turing Test). This does not mean that it needs to look human, sound human, or even have a voice. It means it can use words and symbols as well as a human can. Therefore, if you had an IM conversation with an artificial intelligence, you might not know you were having a conversation with a computer. Human gestures and facial expressions, however, are still in the development stage for AI. Self-awareness is not a requirement. One other quality an AI does not need to possess, at least if it is not self-aware, is emotions.

Psychology, however, is not the only school of thought about the nature of intelligence. I was once a Scientologist, and Mr. Hubbard had some profound thoughts about intelligence, as well as life and the universe in general. He described the basic qualities of intelligence as "the ability to recognize similarities, differences and identities." He also stated another quality (I am not sure if these are his exact words): "the ability to see a situation from another's point of view." These are abilities that an AI might do well to have.

Another New Age author worth mentioning on this subject is Neale Donald Walsch. He has stated "the three principles of all life": Workability, Adaptability and Sustainability. While only the first and last would be applicable to any machine or human-made system, all three would be applicable to an AI.
In this case, workability is when a system consistently produces results that lead to the intended goal. Adaptability is when a system can adjust its choices and activities to changing conditions so as to remain on course toward the desired goal. Sustainability is achieved when a system's design does not conflict with itself or its environment, so that extended use does not damage either. The environment for an AI would include humans.

Learning, Consciousness, Awareness and Purpose

As humans, we are not just aware of our surroundings; we recognize many of the people, places, things, actions and situations that we perceive around us. In addition, all of this usually means something to us, personally. This comes from similarities in our experiences, thoughts, meanings, purposes and other identities. We solve problems, and we plan for our futures.

We also have an advanced method of learning, called mirroring. Mirroring is internally simulating the experiences of others, as we perceive them, from their perspective. Have you ever watched someone watching a fight on TV? Have you ever seen someone stub his or her toe, and said "ouch"? Have you watched a sad movie and cried, or at least felt like crying? These are examples of people mirroring the experience of another. The most common term for this is empathy. Moreover, some anthropologists believe that mirroring is one of the characteristics that separated us from the other human species in the distant past.

Consciousness is different from awareness. Consciousness is being aware as an identity. A machine might be aware if it responds to some type of input. Being self-aware is being aware not only of one's "input," but also of one's self, as an identity, and of the fact that one is aware. The human mind is a record of all that an individual has been conscious of. However, it is not as detailed as one might think; more on this later.
The workings and structure of the human mind transfer only in part to an artificial intelligence. You may not want to try to make a machine human. Therefore, a self-aware AI would, at best, be a quasi-consciousness: a collectively created, physically based consciousness.

Human self-awareness is a multi-level, or compartmented, form of awareness. We are aware of our surroundings through our five senses. Our perceptions assemble in our brain, we process them, and we have many thoughts and experience emotions. All of this happens in a repeating cycle of about forty to eighty times a second. This is called Gamma oscillation, one of many oscillations in the brain, controlled by timing signals originating from a number of centers in the brain, allowing different parts of the brain to share information in an organized manner. Our first level of awareness operates at Gamma oscillation.[1] Computers also have a cyclic process that governs processor time sharing among all the applications that are running at any one time.

The next level of awareness occurs at a higher frequency, which either is the eighty cycles per second (eighty Hertz) or is yet unknown in the brain, to the best of my knowledge. At this level, we are aware of our awareness at the first level, and of our thoughts and emotions as they occur. This is also where we are more aware of the part of our experience that is larger than we are as individuals. Spirituality often addresses this part of our awareness. It works like the mind's supervisor. Most people today pay little attention to this level of awareness.

There is at least one more level of human awareness. The large majority of us are not aware that this level exists, because the ego disguises it from our mind. This level is aware of everything below it, and more. It uses subtle emotions to shape our thoughts and lives on the large scale. We may find this level's oscillation frequency in the brain someday.
I suspect it will be found in the brain cells' outer membranes, based upon research I know of by Dr. Bruce Lipton. Spirituality also addresses this level of awareness. It is like the owner of a company whom you seldom see. It is comparable to the mind of the soul. Without this level of awareness, self-awareness may not be complete, or stable.

Human purposes are not limited to present tasks, and those on a back burner somewhere. We also have purposes that we live to express throughout our lives. These purposes come from the third level of awareness, and have profound effects on our emotional states. A non-self-aware AI does not need to have anything like emotions. However, an AI's closest equivalent to a human's lifelong purposes is its prime objectives. That is something you would probably want an AI to have; otherwise, it would do most anything someone told it to do.

I should say a little about the human mind. When we recall our past throughout our day-to-day lives, as a rule we very seldom experience total recall. What we remember is normally a reconstruction of the past based upon memory notes. You could think of it as like decompressing computer files. Our mind makes these notes from whatever we had our attention on at the time. Our memories may be very detailed for a short while (short-term memory). However, our long-term memories tend to be very compressed. They are our thoughts, feelings and sketchy details about the experience and the identities involved. Like humans, no AI will maintain a detailed record of its whole existence, at least not for a long time. First, we would need to be able to store a very large library of information on the point of a needle.

A Recipe for AI

A self-aware AI might do well to have something like emotions. Such an AI would be more autonomous and easier to interact with, and simulated emotional states could aid in keeping it in line with its prime objectives.
One good example of prime objectives is Isaac Asimov's "Three Laws of Robotics":

"1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

"2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.

"3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law."

Later, he added another law: "0. A robot may not injure humanity, or through inaction, allow humanity to come to harm."

I feel these laws are a good start toward writing AI prime objectives. However, they are not perfect. For example, if any human being told such a robot to destroy itself, the robot would comply, if able, as long as doing so would not harm a human or lead to a human being harmed. It may also have trouble responding to conflicts between humans. Workable methods of making ethical choices do exist, and they should be transferable to machines. However, these methods tend to conflict with the ways many people think in today's world, so they are not popular. That would be a subject for another article.

A good AI would need to simulate its surroundings internally, include itself in that simulation, be able to do the majority of what I described above, and have excellent language skills. The easiest way to accomplish all of this is with a supercomputer: a number of independent computer processors sharing and exchanging information, and working toward a common goal. The first self-aware AI will probably have dozens of processors. If you were to write self-aware AI software, you would need to include at least two levels of awareness for it to be self-aware. The first level would do the bulk of the processing, while the second level would act as an administrator.
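As an illustration only (not anything from Asimov), the laws can be treated as an ordered rule check: each proposed action is tested against the laws in priority order, Law 0 first. The sketch below uses invented action flags; it also demonstrates the self-destruction loophole noted above, where an ordered self-destruction passes the check because no human is harmed.

```python
def permitted(action):
    """Check a proposed action against Asimov's laws in priority order.

    `action` is a dict of hypothetical boolean flags describing the action's
    consequences. Returns (allowed, reason). This is a toy sketch; deciding
    what actually "harms a human" is the hard, unsolved part.
    """
    if action.get("harms_humanity"):
        return False, "violates Law 0"          # Zeroth Law outranks all others
    if action.get("harms_human"):
        return False, "violates Law 1"
    if action.get("ordered_by_human"):
        return True, "obeying order (Law 2)"    # orders win unless a higher law objects
    if action.get("self_destructive"):
        return False, "violates Law 3"          # unordered self-harm is refused
    return True, "no conflict"

# The loophole: a human orders the robot to destroy itself.
# No human is harmed, so Law 2 forces compliance.
print(permitted({"ordered_by_human": True, "self_destructive": True}))
```

Running this prints `(True, 'obeying order (Law 2)')`, which is exactly the imperfection described above: the laws, taken literally, let any human order the machine's destruction.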
My thinking about simulating emotions is a scoring system based on the results of all the AI's actions and choices, as compared to its prime and temporary objectives, which would select emotional templates for all its software. These templates would help select its expressions and actions, making it seem more human, and guide it in following all its objectives. Nevertheless, an emotional self-aware AI would be likely to experience some unusual conflicts eventually, possibly an AI paradox from time to time, and question the nature of its own existence. Some of these conflicts may be hard to resolve.

AI and Humans

Another factor to take into account is human reaction to AI in general. Since well before the industrial revolution, stories have been told of autonomous machines going rogue, killing people and attempting to overthrow or exterminate humankind. They have sold many books, magazines and movie tickets. "HAL 9000" is one example of such a machine. The bulk of these stories are a form of horror story, often based upon people having lost jobs to machines. Nevertheless, they have performed a valuable service to humanity: they pointed to potentially catastrophic mistakes that we might otherwise make, if not for these creative writers.

There will be other people who will say that building a self-aware AI would be immoral, that we would be playing God. What would a member of a primitive tribe in some third-world country, who has not seen the modern world, think of someone reviving another person with mouth-to-mouth? There is a good chance they would call him or her a god, or something very undesirable. Our Creator has worked through people here on Earth countless times, whether they were aware of it or not. If the Creator does not want such an AI to exist, it will never exist. All that has happened, no matter how undesirable some of it has been, worked into life's plan.
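Returning briefly to the emotion-template idea from "A Recipe for AI": the scoring system described there can be sketched in a few lines. The objective names, weights and thresholds below are all invented for illustration; the point is only the shape of the mechanism, in which weighted outcome scores against the objectives select a coarse emotional template.

```python
def score_outcomes(outcomes, weights):
    """Aggregate recent results against objectives.

    outcomes: {objective_name: result in [-1.0, 1.0]} -- how well each
    objective was served. weights: importance of each objective (prime
    objectives weigh more than temporary ones). Both are hypothetical.
    """
    return sum(weights[name] * value for name, value in outcomes.items())

def select_template(score):
    """Map the aggregate score onto an emotional template (invented thresholds)."""
    if score > 0.5:
        return "satisfied"   # objectives being met: reinforce current behavior
    if score < -0.5:
        return "alarmed"     # objectives failing: bias all software toward correction
    return "neutral"

# Prime objectives outweigh lesser ones; results are scored per objective.
weights = {"protect_humans": 1.0, "obey_orders": 0.5, "self_preserve": 0.25}
outcomes = {"protect_humans": 1.0, "obey_orders": -0.2, "self_preserve": 0.4}
print(select_template(score_outcomes(outcomes, weights)))
```

Here the aggregate score is 1.0 - 0.1 + 0.1 = 1.0, so the "satisfied" template is selected; the template would then color the AI's expressions and action choices, as described above.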
The creation of a self-aware AI would most likely be as much a creation of The Creator as a creation of humans. The way it looks now, self-aware AI is not a matter of if, but a matter of when. Undoubtedly, many people will fear advanced AI when it is developed.[2] I wouldn't be surprised if people were afraid of the wheel when it was invented. People were afraid of the automobile when it was new. The fear of those new things has passed, with few exceptions, and so shall the fear of AI. Those writers of yesteryear, and some more recent, have done their job well. We already know most of the pitfalls.

Footnotes