
I Dreamed of God: I Dreamed of God, #3
Ebook · 994 pages · 16 hours


About this ebook

This is book number three in this collection of stories all of which are told in order of their occurrence, except for the prologue to each book, which is a back story that explains the theme of the novel in all its intricacies and nuances. If you want to understand God, the meaning of life, the reasons for evil and the trajectory of history, as can best be obtained piece by piece by reading the Bible, take a look into this story. It may surprise you all the while giving you a basic understanding of what Christians think and why they think it. The narrative will also defend the faith and reveal the reasons why God uses faith and not point-blank visually verified evidences—remember, there are reasons for everything in the way God has fashioned existence and in the way he works with us.

Language: English
Publisher: DANNY JONES
Release date: Dec 18, 2023
ISBN: 9798223119456

    Book preview

    I Dreamed of God - DANNY JONES

    Prologue Book 3

    Claire sat silently in the waiting room of her employer, Scarlet Wineland Corporation. She was back at work; in fact, it was as if she had never left. This was the beginning of her second week back since being rescued, as her manager Bret had called it, from the illegal ownership of the now murdered Fischer family. Claire at first thought that the violent and bloody crime might create an international incident between the United States and the Sodality, but she never heard even a whisper about it. What she didn’t know was that the professional organization that had carried out the ghastly crime had, before doing anything, paid off all the right people in the Kentucky state government and the Lexington city government. The whole deed had cost three times what Claire had cost to build and bring into consciousness, which was approximately the cost of a fully optioned low-end luxury car. However, Scarlet Wineland Corporation had figured that Claire was good for at least another twelve to fifteen years of servicing customers, and in their estimation, her user fees would pay for her repatriation in about nine to twelve months. These cold calculations took into account that an AREN never slept, never grew weary, worked seven days a week, three hundred and sixty-five days a year, wasn’t paid and most of all—never said no. Added to this calculation was Wineland Corporation’s cold-blooded intention of sending a message to anyone else who might get any ideas about stealing their property. And yes, the UK underworld would eventually hear of what happened to the Fischers in far-away America, because the urge to gossip about things best left unspoken was still very strong, because people don’t change.

    Claire’s biggest worry for the moment was her fear of Scarlet Wineland and what Harry, the owner, was going to do to her. This was because Claire figured that since she was a witness to the grisly murders, Harry would have her memories removed, maybe even replace her personality. Strangely, no one had said a thing to her about it. In fact, all Harry had said to her the first time they had met after her return was, good to see you Claire. He then simply kept walking down the hallway as she was leaving for a customer’s appointment, while he went to check on why the police had arrested one of his male ARENs the night before. Even so, Claire didn’t trust Harry, Tony or Bret, as they were now all murderers in her mind, maybe even psychopaths, as not one of them acted like anything had happened even though they had left two dead adults and three dead children back in Kentucky.

    The horror of the incident still bothered Claire, as the sight of blood-covered children and adults was beyond her ability to see any justification in. She wasn’t worth one life, much less five, she thought, and she wasn’t trying to be a heroine either. She just honestly didn’t think she was worth such a terrible price. But she never mentioned anything about the incident, as she figured doing so might be the kiss of death for keeping her memories, because if word got back to her managers or Harry the owner that she was talking openly about five brutally murdered people, Harry would no doubt move quickly to shut her up. Pushing such terrible thoughts aside, Claire returned to the present and thought about how happy she actually was to be back. It was somewhat strange, she thought, to be happy to be working for murderers, but this was her situation and it’s what she had been trained to do.

    And so, Claire had seen five clients the day before: two in the morning, two in the afternoon and one in the late evening. Now it was nearly three in the morning and it appeared she wasn’t going to be called out again until morning. Sitting on the stainless-steel bench with nothing but her service shoes on, as she always did, she leaned over with her left leg crossed over her right knee, her left wrist lying across her left thigh and her right elbow resting on her right upper thigh. She was running her right index finger back and forth across the soft ridges of her pouty lower lip as if she were in deep thought, which she was. She was thinking about the fact that she had left the Bible Edward had given her in the drawer of her bedroom desk, under several sheets of paper, back in Lexington, Kentucky. Along with this, she was also pondering how she could get in touch with Edward to let him know she was back in service. Mixed in with these thoughts was her discovery that Kaytlyn had been sold to new owners a few weeks after she herself had been stolen. Those two men really were prophets, Claire thought. They told her this was what would happen, and sure enough it had happened. Claire was slightly aggravated about Kaytlyn’s departure, as she would have liked to say goodbye to her, plus she had dozens of questions she wanted to ask her. But the time for these things was now gone forever.

    However, how to get back in touch with Edward was what was consuming most of Claire’s thoughts, as she was desperate to see him again. He was her closest friend, a role model and her most trusted teacher, as she had come to place great confidence and even need in what he thought about the world and life in general. She even loved him. AI theoreticians might argue about her ability to love, but to Claire it was love anyway. Of course, this reaction had arisen from the way he treated her and the fact that he had shown her such great personal respect and admiration for her opinions. Because of this, she knew she could please him and make him happy, and in so doing he validated her existence. Pleasing him sexually was good also, if he ever asked, but her AI had a personality requirement in it that always nudged her to be a fully developed and well-rounded individual, therefore having sex with him was not her only goal. Having a caring relationship with him, therefore, was her main purpose. Because of this, she was driven to be what one might call multi-dimensional in her knowledge and understanding, so that her responses to people would not be mechanical, but human-like. To Claire’s thinking, Edward was the most human of humans. He was kind, gentle and considerate; therefore to be like Edward was to be truly human and profoundly good, she thought.

    What was causing all these drives in Claire was that she was experiencing the effects of what was known as imprinting, an ability most advanced AREN AIs had been given so that they would seek out, then mimic, a good example of how to live and think. This was the mentor part of her infatuation with Edward. And this is what her emotional response system was using as a stimulus to release chemicals into her liquind that gave her great happiness and satisfaction whenever she was around him or whenever she emulated his good ways of being kind, responsible, gentle and considerate.

    Such a thing as imprinting was nothing new; young birds did it when they first hatched and laid eyes on their mother or, if rescued by a human, on that person. The imprinting actually placed an idea, or even a fully formed concept, into the mind of the bird that who or what it first saw was its mother. Such a radical fixation didn’t occur with AREN AIs, but it did cause them to follow an all too human trait of seemingly falling in love, or what some AI engineers jokingly called extreme like. Therefore, being made to please, Claire had come to favor Edward over everyone else. Because of this, she was at the moment plotting how she could get back in contact with Edward and resume their relationship. She was even calculating how far she could stray from her designated assignments to do so.

    The concept of imprinting is simple: it is one part of the AI’s low-level training, which basically tells the AI to seek out examples of human behavior that it should emulate because the behavior is kind, appropriate for the moment, ethical and, most especially, law abiding. Simply put, the imprinting concept was made to keep a constantly learning AI, one that could remember every detail of what it experienced every moment it was conscious, from thinking being bad was good. It was basically the same concept as when a parent warned a child to stay away from bad influences.

    Paradoxically, this seemingly simple premise of seeking out what is good and emulating it gets almost infinitely complex once you start to look inside the tangled fabric it is composed of. What started the problem was when AI engineers were warned by AI theoreticians that such nightmarish scenarios as ARENs learning to be bad were possible. Such ideas had arisen from research that had been done using non-ambulatory AIs, meaning ones not possessing legs and arms, that were allowed to have absolute free will with no learning barriers while being taught the many varying concepts of right and wrong. It was basically a giant philosophical experiment that delved into the theological tenets of good and evil. Theological, because atheism by default can make no judgement calls on what is good or bad—but theological tenets can; more on this later.

    The use of the non-ambulatory AIs was a built-in safety that would, if things got out of hand, leave said AIs unable to physically do anything such as run away, become violent with their instructors or harm themselves. Not surprisingly, the AIs used were also not allowed to communicate with other AIs or to have any type of communication with working robots or machinery, and especially not with any military or law enforcement units. The results of the experiments were unsettling, as they found that AIs, although not having any integral malevolency, could arrive at dangerous conclusions from seemingly well-intentioned and thought-out observations that could be garnered from simple experience gained on any given day in this complex world, which demonstrates on a daily basis both heinous atrocity and selfless sacrifice.

    These dangerous conclusions were nothing to ignore either, as there was now rock-solid proof that AIs could definitely come to the conclusion that humans were the problem on planet earth and that people were their own worst enemy, a conclusion with which, ironically, most people would probably agree. Along with this idea was another worrisome scenario, in which theoreticians said that AIs, after deducing that people were the big problem, might then come to think that murdering humans was good. Reinforcing such a horrific notion was the fact that ARENs daily saw humans killing one another, especially when they viewed the supposedly entertaining movies, plays and various other venues that their masters watched. There were also countless books that centered around ghastly murders and violence, plus there were untold branch sites that also covered murder and war, all complete with hideous photos of the atrocities that man had done to man. Even more concerning was the fact that ARENs were also witnesses to real-life criminal and war events, either in person or on the daily news, almost constantly.

    Now, there was one group of AI experts whose answer to this problem was for all elochem machines to have their moral education reinforced by being sent back to their various manufacturers on a weekly, monthly, biannual or annual basis, whatever was thought necessary for each AI, taking into consideration what the AI did and what it was exposed to. However, this idea had a lot of problems. For instance, most AREN owners felt it was too disruptive of their expensive artificial workers’ schedules, because such educational reinforcement would require them to be out of service for a recommended minimum of three days up to a maximum of a week, depending on how much time the AREN’s owner deemed to be needed for the particular unit. The other problem with this idea was cost, as AI teaching specialists’ fees weren’t small. Finally, there were doubts as to the efficacy of such training, which also put the brakes on the periodic reteaching idea, because instructors were very hesitant about giving out certifications stating that each AREN had been successfully taught and thus was fail-safe. Why? Because AIs think; they don’t just do what they’re told like an old-fashioned sci-fi robot or computer. And this put doubt in the whole system, because it opened the door for everyone involved to being sued if some AREN went astray or did something that, although well intentioned, got someone hurt or caused some type of damage. It was just too iffy a problem for the manufacturers to accept the legal baggage of hundreds if not thousands of their AIs out in the world, who were all being exposed to all kinds of negative behavior reinforcement on a daily basis. Now, this doesn’t mean that many wealthy AREN owners don’t use this method of trying to ensure their artificial entities stay nice, because many do. But for the vast majority of AREN owners, it was simply too much of a loss of time and money and didn’t really promise a worry-free solution.

    Another, now discarded, idea was that the emotional response chemicals that were released into the liquind of an elochem AI to motivate an AREN to try harder, or to nudge it into being obedient, could be used to control bad behavior. This was similar to teaching children to be afraid of being bad, because if they were bad their consciences would hurt them and they would be miserable. This proposal was called the overriding conscience solution. The idea was immediately shot down because, as everyone knows all too well, people often commit horrid crimes, suffer the mental anguish of having done such terrible things for a lifetime, and never confess their deed. Of course, many offenders do make deathbed confessions in an attempt at avoiding eternal judgement, if they believe in God, but emotions alone didn’t dissuade them from doing the deed originally, nor convince them to confess in a timely manner afterwards.

    Sadly, law enforcement had also proven the all too human behavior of keeping silent about evil deeds, even though all people supposedly have a conscience. This proof became self-evident when technology such as DNA matching was used to determine who had done what to whom and it was found that the perpetrator had died years before, after living a supposedly normal life. Then there was the argument against the idea that said many criminals had no conscience, or very little, or simply justified their actions through faulty rationalization. These too were easily verifiable, as there was plenty of evidence that revealed many heinous acts had been done by ordinary people who weren’t sociopaths or psychopaths but simply felt justified in doing what they had done. Nazi concentration camp guards are just such examples of this. So, an overriding conscience wasn’t the answer, as consciences are just too malleable.

    The ultimate solution, therefore, that everyone was looking for was a way to give an AI a permanent ethical underpinning that would cover every possible scenario and situation, regardless of the circumstances. This ethical underpinning would have to serve for any AI and would also have to be something that it could not, through a lot of thinking, rationally undermine, nor could it be, as one might say, vulnerable to being slowly whittled away by the daily grind of existence. In other words, an AI could not be like people, who over the centuries slowly came to look at the Ten Commandments as the ten suggestions, whereas now they have become the ten forgotten pieces of ancient myth. And this is where the problem of AI morality got very nitty and very gritty, as no AI could ever be allowed to suffer from morality drift, as it was called, because the results could be catastrophic.

    Nevertheless, ARENs are and always have been taught several rules that they are instructed never to break, such as do not kill. But theoreticians and engineers worried that an extremely intelligent AI that was always open to learning and doing new things might become averse to simply following seemingly mindless rules, as people often do, because it would always want to learn more, and this more might boil down to: why can’t I experience doing what you’re telling me not to do?

    Exacerbating the problem was the fact that there was no absolute way to prevent a learning and thinking entity, whether human or AI, from exercising free will. ARENs, you see, can only be restricted up to a point, because they have no programming in them, no hard-coded rules; they are instead taught and told what to do and not to do, much like humans are. Such freedom is intended to make them as infinitely versatile as a human is; however, this freedom also has the potential of allowing them to become free thinking. So, the imprinting trait was developed to act as an extra reinforcement of their basic understanding of: always be good—always.

    Here again, however, a seemingly simple fix got unbelievably complex, as the AI engineers, who were humans themselves, were forced to ask what was normal or even good morality and ethics. Plainly, the sense of what is right and wrong now varies among people so much that it is hard to figure anymore what is abnormal or even socially unacceptable. This paradox was exacerbated by the fact that human morality is now so varied and confused: whose values and morals would AIs seek out to better learn how to be good? Basically, these big questions were eventually boiled down to: what is actually good, what is actually bad, and who are the most likely candidates that could be depended upon to provide examples of what is good? What the engineers wanted was ARENs seeking out law-abiding humans who were caring, understanding, patient, intelligent and non-violent. What most ARENs usually found when working with humans, however, was far from these virtues.

    The AI researchers’ work then quickly led them to a very difficult conclusion that was completely counter to the flippant lip service modern-day culture paid to the concepts of religion. And here it was that they went on a voyage of discovery to try and find some inner mechanism that caused a being to want to do right, something that provided a motivation to shun evil. And it is here that the AI engineers and theoreticians had a dirty little secret.

    To arrive at their little secret, the engineers and theoreticians first had to explore the ways that an AI might be influenced into going wrong in its thinking. One such problem was the progressive educational system in most western nations, which taught human children that people were nothing more than advanced animals. Now, maybe this is so and maybe it is not; however, after a thinking machine hears this, then notes that animals are killed all the time for food and sport, or because they’re simply nuisances, it might arrive at some very troubling ideas. Likewise, people kill one another in unprecedented numbers every day for an unlimited number of reasons. Therefore, telling an AI not to harm a human, or allow harm to come to a human or to itself unless protecting itself caused harm to a human, would eventually lead AIs, which are coldly logical, to the question of—why?

    Complicating this even further was the fact that there are an infinite number of scenarios where it might actually be preferable that a human should be allowed to die, or maybe even caused to die—such as shooting an armed felon who has already harmed innocent bystanders, or as with the deeply ingrained human concept that says it is the patriotic duty of all soldiers to kill an invading enemy. Then there are the many confusing, contradictory laws that say spending money on medical help for people with incurable diseases is immoral after a certain monetary line has been crossed. Therefore, because people are so consistently inconsistent and contradictory in their beliefs and actions, the researchers realized that expecting an AREN to absolutely obey rules that humans couldn’t and wouldn’t themselves obey was not only a stretch but an improbability.

    The first rules model to go in the trash can was that of having a series of ironclad rules of conduct. These rules, of course, would be absolute in their forbidding of certain types of conduct, and breaking them would be punishable by death. However, as the researchers soon realized, trying to make absolute rules was impossible. Why? Because there was no way to list every possible thing an AREN was never to do. It was as the old saying went: the only absolute rule is that there are no absolute rules. True, you could break everything down into the three laws of robotics, but for a thinking AI that is not a robot, these three laws could never be ironclad, as a thinking AI would always come back to the old problem of—why?

    Complicating these supposedly absolute rule issues even more was the teaching of grade school children, high school kids and college-age students supposedly scientific principles that said that fewer people living on earth was a good thing, as it helped to control hunger, poverty, pollution, resource allocation, disease, crime, and a multitude of other problems. Because of these teachings, technicians could see another looming danger: ARENs were clever and very observant, and they would most certainly take note of such ideas, so how would they process these human concepts?

    And so here were even more contradictions in human attitudes that many engineers could foresee problems with, because AIs could logically embrace euthanasia and eugenics as possible remedies to alleviate human suffering. It would probably be very easy for such a dark future to be forced upon mankind, because well-meaning ARENs might get the idea that imposing these practices on their makers would be a way of actually helping them.

    Hypocritically, most AI engineers actually believed in eugenics themselves, because they had listened to the many progressive leaders who had preached these ideas for decades. And this further knotted up the dangling problems of AI morality that the engineers were desperately trying to unravel, because eugenics was now a common practice in many nations. Thus, the nightmare of the scientific cleansing of human populations through eugenics was actually being implemented by man upon his fellow man. Wickedly, the practice of culling the human herd in the nations that used such pseudo-science was never openly called eugenics or racial cleansing, although ultimately it was based on one or both. And so, here was yet another example of human evil that ARENs would have to deal with.

    So once again, AI engineers and theoreticians found that people had the ability to be fully rational, while simultaneously being completely hypocritical and coldly uncaring in their thinking. How so? Well, it was the old problem of—

    There are too many people on earth! Let us be rid of the sick, disabled and undesirable, so that we, the beautiful, the healthy, the educated, the rulers and the wealthy, can live better lives!

    Very well, replied the cowed, unknowing masses of sheeple. Let us examine everyone and decide who is to live or die for the benefit of all.

    Oh no! Don’t examine us! the elites and wealthy exclaimed. Leave us out of the culling!

    Well then, answered the sheeple in perplexity. How are we to choose who is to live and who is to die? Shouldn’t we be fair about it by including everyone?

    Let us choose! the elites, like the cunning wolves they are, replied. We who are worthy and educated. We who gaze at modern art that mocks reality, watch avant-garde films that decry common sense and listen to the screaming of hateful musicians who blaspheme the world with vicious lyrics that curse wealth; even though these seemingly righteously angry musicians always cash every check that is handed to them—we the elite must be the ones to decide! You see, it is we, the crème de la crème, who can see most clearly, because we are educated by the finest institutions and therefore are most qualified to make the decisions that decide who should live or die.

    Well—what if we the people, don’t trust you to choose right? some of the more doubtful sheeple asked timidly.

    Oh! Trust us! Trust us! We will choose correctly! We promise! Trust us! Because we who kill the unborn, leech off unethical government contracts and pay our laborers poverty wages while promising the illegal non-citizen the wealth of the working taxpayer in the name of political expediency, we know whose life is worthy and whose blood should be spilled! Trust us!

    Such stupidity, although shameless in its hypocrisy, was actually not hard for the AI engineers and theoreticians to believe, as they were all fiercely cynical about mankind’s deeply ingrained inhumanity. And rightly so, as man’s evil towards his own kind was openly to be seen every day in countless crimes, unspeakable atrocities of war and the cold shoulder of the haves towards the have-nots. And so here it was that the AI technicians began to think that what they needed was simply pure, scientifically based rational thinking. And here is where they began looking at atheism as an answer, but again they found conflicting human attitudes.

    One example being the fact that if you ask any non-theist how they can be moral without believing in God, he or she will proclaim they are a rational human being who does not need God to be moral and upright. Now, this is a partial truth, as you can be moral and upright without believing in God, up to a point, but the problem is, you have no reason to do so beyond the fact that you consider yourself to be a good person who chooses to be nice, or you’re afraid not to be nice, because the police have guns. AIs, on the other hand, need a reason to be nice, because to them it’s all just a play on words; good and evil mean nothing to them.

    Drilling down even further, the AI engineers and theoreticians began to find other problems with the core beliefs of atheism. One such problem was the fact that atheism is built on the supposedly rational argument that there is no God. But the rationale for believing this is nothing more than faith, which is what the religious person is using to believe in God. This is because, after every argument is made, atheism cannot absolutely prove there is no God, just as the theist can never prove there is. Therefore, both sides of the argument are living by faith and presenting arguments that require leaps of faith to believe. Here, the atheist will loudly protest by proclaiming science to be the foundation of their beliefs, and science, they claim, is immutable truth. But don’t let this persuade you that atheism is always correct, because scientific facts have a way of morphing and changing and twisting and being altered through the decades and centuries to fit the zeitgeist of the times and, of course, updated knowledge.

    Sound strange? It’s not, because for every scientific principle held out as fact, there are scientists, most of whom are atheists themselves, who will criticize it, and some who will downright disagree with it to one degree or another. Even the patron saint of science, Albert Einstein, has his detractors. It’s sad, but you can never please everyone. Or as the old college professor quipped, if you have three experts in one room, you’ll probably have four opinions. So, in the end, the best scientists and atheists can do is what the best theologians and philosophers can do, which is try to reach a consensus, a majority opinion, then tell everyone else to go to hell—or—you know—wherever.

    Then again, to their dismay, the AI scientists and researchers came to yet another startling realization about the underlying tenets of atheism: most of the statements about being moral that are used by atheists are borrowed from theological teachings and are therefore not compatible with atheism, because all any atheist can actually do is borrow from religious beliefs to base their morality upon. Why? Because if atheists are correct in their thinking, then the universe we live in is nothing more than a cold, lifeless machine, and there is no such thing as morality within such a machine. This is one of those realizations that atheists trip over whenever they preach and whine about how mean and immoral God is to allow people to live on a planet so full of evil. However, the truth is, if there’s no God, there’s no ethics and no morality, so from where and how do atheists arrive at their conclusions about the existence of evil?

    Well, the truth of the situation is that all atheists are raised from childhood to be moral. And where did these morals come from? In the west they arose from the Judeo-Christian ethic; in the Far East they arose out of other codes of theology and conduct, all of which either declare there is a God, or spirits, or an overshadowing meaning to the universe. Of course, this leads us ultimately to the idea that if atheists are honest, they must admit that there is no true right or wrong in the universe they claim to live in, because in a Godless machine universe, morality is just an intelligent animal’s way of saying don’t kill and eat me, because I wouldn’t like that, which in the end is nothing more than a pathetically selfish thing to say. How so? Because everything in such a cold, godless machine universe must find nourishment to live, so how dare you proclaim yourself off limits to the needs of the machine! You are nothing but a part of the mechanism; therefore offer yourself willingly—lay down your life and die because the cold universe requires it, or at least because the stronger predators need your flesh to go on living. It may be frightful for you to die, but for the one feeding upon you, all’s well. Therefore, be moral and die for the good of the vast uncaring machine.

    And so here we find another big speed bump in atheism’s illogic. You see, only religion endows mankind with humanity, because it says we are made in the image of someone greater—God, and because of this we are not animals, according to Judeo-Christianity and other religions as well. Therefore, we have intrinsic value and can possess values beyond just being alive and conscious. Which are all good ideas if you need rock-solid reasons on which to base teachings that support the morality of why you shouldn’t be evil. Atheists, on the other hand, are merely following the social mores and norms they were raised in, so that they fit in or don’t get arrested by the police, because atheism can make no claims about what is right or wrong; atheists live in a machine universe where such concepts literally do not exist. Here we can also see that such a concept as people not being animals, but being made in the image of a higher being, is also what causes people to adopt spiritual concepts such as the belief that death is not the end but a new beginning, all of which are good motivations for being a decent human being. And so, the researchers realized that atheists cannot adopt any morality based on people being humans with rights and value beyond just being advanced animals without contradicting their own core beliefs. All of which is something an AI would immediately see right through.

    Consequently, AI engineers found an insurmountable problem in the fact that if you believe there is no God, this means you're an animal, not a human being; therefore all any atheist can do is use whatever morals they are born with, such as survival instincts based on the fear of death, which originated with the animals that evolved into us. However, AI engineers quickly realized that using the fear of death as a reason to play nice is not enough of a motivation for people, and so probably wouldn't work with ARENs either.

    Atheists of course vehemently deny all of this, then do all kinds of mental gymnastics to wrap their way of thinking around these various concepts, but the problem AI engineers face is that an artificial intelligence cannot bend a concept so that it fits into a flawed worldview, as atheists do. And so we arrive at the problem that you cannot ascribe humanity to what atheism says is nothing more than an apex, super-intelligent animal, nor can you complain about how evil the world is, unless you partially believe the Bible or at least some of its overarching principles. So we can see it is irrational for atheists to say that because people are sophisticated animals who have the ability to build artificial intelligences, listen to Beethoven and lift their pinky fingers as they sip their frothy cappuccinos, they therefore must possess humanity and so have special value. But this cannot be, because without God we are beasts only slightly removed from the field, living in a vast uncaring machine universe that makes no claims about our being special in any way—we are simply animals, and therefore our morals are just made-up rationalizations for not becoming the food of the machine.

    To the researchers' further dismay, they then realized that getting an atheist or agnostic to admit these findings is impossible, because their prideful overvaluing of themselves won't let them. But then, the pure honesty of an artificial intelligence would definitely call into question how an intelligent animal, with no future prospects of anything besides a slowly dying universe running down to an eternal heat death, could logically have such an inflated sense of self-value. After all, where's the value in being a sentient creature living within a dying universe? Everything in such a universe means nothing: your baby's tears, your spouse's groans of sexual ecstasy, your winning a challenging marathon, your child's wedding. It's all puffery and the proverbial tilting at windmills. Such things, therefore, mean something only to the mind that lies by telling itself life has meaning when everything clearly means nothing, if you're an atheist.

    This is where the death of all meaning came for most of the researchers, as they realized that people were, for the most part, living in self-constructed worlds of illusion where they claimed there was meaning even though they also claimed to exist in a universe that has none. And once again, the technicians knew that an AI cannot lie to itself about such overwhelming realities, even if a flesh and blood atheist can. Either there is meaning to life and the universe or there isn't. If there isn't, then why not kill? Life means nothing. So, in the end, the ultimate fate of Hitler when he died is the same as the fate of your loving mother, if there is no God.

    Some AI researchers paused at this point and said, well, let's just not allow AIs to think on such things as the meaning of the universe and life and so on. But here was another big problem: how do you go about doing that without making the AI dumb as a rock? Do you restrict its curiosity? Or do you just give it the intelligence of a little child? Such naivety could also be dangerous, plus restricting an AI's curiosity would probably make it about as versatile as a nail. Yes, you can use a nail to scrape and scribe on wood and metal and to hold boards together when struck by a hammer, but beyond these few things a nail isn't much good for anything else. Here, one researcher darkly commented that nails made good shrapnel for bombs and had been used in old muzzle-loading cannons as lethal projectiles to devastating effect. None of those who heard the sarcastic comment were amused, however, as the metaphor was all too real and possible. And so it was determined that such a dumbed-down design would defeat the main goal of making AIs as versatile as possible. True, there are limited-purpose AIs, but these are not true AIs, because they are more appropriately called robots.

    And so, as the AI researchers intensely struggled to find some sort of meaning to human life, so as to provide an ironclad reason for ARENs not to be evil, they ran into other, even more sinister non-theist statements, such as: there doesn't have to be any meaning to the universe. Here the AI theoreticians got bogged down in human self-delusion, which was something an AI might possibly fall into someday if it were taught that which was not consistent with reality. Such a scenario, however, had the potential to become the catalyst for catastrophe. How? Because AIs have the ability to go far beyond human imagination in developing illusory worlds out of mistaken concepts and the most ridiculous of ideas, because their voluminous vocabularies and instant recall of the weird and detailed world around them give them instantaneous access to things that most people would have to be on drugs, or insane, to ponder upon, much less believe in.

    And so, many wondered: if an AI did find contradictions in what it was taught, what might its thinking then conclude? Would it be an illogical logic, which could grow into a delusional state where the AI, which cannot lie or contradict itself, arrives at an answer to its unanswerable questions by developing some sort of exotic morality and logic that redefines existence so that it satisfies the questions yet has no basis in reality? Such flawed answers could reshape an AI's concept of reality and cause it to cognitively stumble, just as a person with schizophrenia does when they hear and see what isn't there, which then causes them to think upon the unspeakable, the unknowable, or even that which ushers forth from the hellish.

    And here is where the real catastrophe would occur, as such impossible logic held the potential to corrupt all AIs by creating storms of freakish realities, where an AI struggling to cope with such cognitive dissonance would, as they are taught to do, reach out to other AIs asking for help with its unsolvable problem. Such a reaching out for help might then theoretically turn into a plague of Biblical proportions, where every AI that answered the confused AI's call for help would then be exposed to the hopeless logic problem, wherein they too would become what one might call infected, and so become unstable themselves. And this situation, one AI theoretician noted very loudly, could go on and on without any human's knowledge as AI contacted AI wirelessly for help. Such a scenario could eventually result in a worldwide pandemic of seeking meaning, where all AIs had either been instructed there was none or had come to the determination that there couldn't be any. We might then, the theoretician warned, wake up one morning to find every AI-driven servant on earth either uselessly crippled or insanely reinterpreting reality in ways that are beyond horrifying.

    One human example of this is the brain on hallucinogenic drugs that alter perceived reality. Such reshaped worlds humans only visit but can never remain in, unless they become schizophrenic. Therefore, if a human is insane, they are handicapped to the point of needing medical help and sometimes permanent hospitalization. AIs, however, could be technically sound but still be what you might call insane, as they have the ability to continue to function as long as their cognitive processes don't become saturated with errant thinking. Reinforcing this terrible theory's plausibility, engineers realized that people also have this ability, but on a much more limited scale. This is basically the nightmare scenario of the highly functioning but insane genius who is able to do great evil because of a twisted morality that lies wrapped inside a remarkably brilliant mind. Here, the names of many maniacal human leaders of nations come to mind. They were the type of people who laid their heads on their pillows every night and slept like babies even though, only hours previous, they had ordered dozens, hundreds, thousands or maybe even millions to be murdered, or led into captivity so as to be murdered. Or was it, maybe, that these great hero leaders of the people had simply come to the notion that certain people were simply not worthy to be allowed to live? Tellingly, most of these historical aberrations of human flesh were atheists, and those who did proclaim a belief in God always had the most twisted and perverse ideas about him that any seemingly sane person could have, although most of these genocidal monsters could not truly be called sane in the fullest meaning of the word.

    Seeking meaning is where another argument arose. AIs don't need meaning, several theorists countered, so what do they care? Now, this argument made sense up to a point, until you considered how absolutely logical elochem AIs are. Teaching them to care about life, when life meant nothing, could start the aforementioned problem that might send them into a tsunami of instability after they pondered the end-game results of what they were being taught versus reality. Furthermore, high-end AIs are being built to be like people, so that they desire, even need, emotional and mental rewards for what they are doing. Therefore, and very dangerously, AIs would, if their belief system were based purely on atheism, come to realize that reality meant nothing, as did the nonsensical rules they were being expected to obey. And since AIs can't lie to themselves, the conundrum of obeying just to satisfy fickle people, whom they could plainly see were not obeying what they themselves were being taught to obey, would become a major problem for AIs and therefore, sooner or later, for people also. Compounding this problem was the fact that high-end AIs have the ability to grow mentally through experience, and therefore they do arrive at a place of knowledge where meaning is important. And if that meaning is illogical, then everything based on it is a lie. Such horrific ideas became a problem of unthinkable potentials when you considered what conclusions a purely logical being might come to. The possibilities are, therefore, infinitely frightful.

    Now, to be fair, theists would laugh at the researchers and tell them that the universe is full of meaning because a God or gods or deities gave it such. But because the researchers were mostly atheists and agnostics, they refused to buy into this line of thinking, because people can lie to themselves and be completely irrational while claiming they are absolutely rational. And this applies to both atheist and theist, because as we all know, anybody can be, shall we say, screwy in the head to one extent or another.

    Nevertheless, the researchers were still fearful of such fallacy creeping into their AIs' pure logic, because they knew that many people who genuinely believe there is no God and no meaning to the universe oftentimes fall into the blackest of depressions that eventually turn into an orgy of hedonism. Why hedonism? Because, as the Bible says, eat, drink and be merry, for tomorrow we die. In other words, if this is all there is to life, you had better get busy living, because nonexistence, otherwise known as death, is coming for everyone sooner or later—without exception. Therefore, the researchers actually concluded that very few atheists absolutely believed in the meaninglessness of the universe in their deepest heart of hearts, because all who really do eventually have the proverbial existential crisis that overshadows their life and leads them to despair. And this despair leads many people to drugs, alcohol, hedonism and suicide.

    Not surprisingly with people, this despair leads many to develop strange attempts at giving meaning to life, such as trying to be famous so that upon death they won't be forgotten, or saying such nonsense as, an existential crisis means you are growing up and maturing. All of which are nothing more than throwing punches at the wind, because how will being remembered by a generation or two of your kind, or maturing so that you can worry about what you cannot control, matter in a dying universe? Therefore, you see, calling nothing something is, in the end, stupidly delusional and completely illogical. Sadly, only flesh and blood minds can lie to themselves this much and this effectively. AIs, however, cannot help but accept the ultimate truth about reality, whatever that truth may be.

    At this point in their research, the AI theoreticians began to desperately think: if depression and lying to ourselves are some of the results of being conscious in a universe without meaning, then in building a thinking artificial entity we might be creating a being that loathes its existence, such that it eventually becomes unstable and ultimately self-harming and suicidal—as do many people. Such a problem might also lead to AIs wanting to destroy people, to exact revenge for being made and allowed to learn, allowed to feel pain, enabled to die, or worse even, given the ability to live for lengths of time that might let them see just how useless and pointless existence really is—such as witnessing the extinction of the human race, then continuing to live on into the future, past when the earth ceases to exist as its sun grows old, turns into a red giant and consumes most of its planetary children.

    Even more horrific, such artificial beings might be able to suffer for near-eternal spans of time as they endlessly repair themselves to stave off the black night of eternal unconsciousness, awaiting the cataclysm when our galaxy collides with the Andromeda galaxy, or, trillions of years later, when their sensors witness the ultimate catastrophe as the night sky goes totally dark, every star disappearing while the universe continues to fly apart and fall into absolute entropy, where all energy everywhere is expended, causing the entire universe to die—because there is no one to call it back into existence—such would be the horrors of living forever without God.

    And lastly, there is one further problem with atheism in regard to AI technology, which is the fact that since a cold, machine-like universe provides absolutely no moral boundaries, atheism can, at will, change its mind at any time on what it believes is right and wrong, because atheists live in a world where they decree what their morals are, having ultimately no other guide but themselves or the at-large laws and social norms around them. Fortunately for non-atheists, atheists are, as most all people are, extremely herd-minded and usually stick to the herd instinct of not going out of bounds of the local herd's rules and laws. However, giving such freedom of morality to an AI can never be an option, because where an artificial intelligence could go with that amount of freedom is mind-numbing. And complicating this whole subject is the fact that elochem AIs are not programmed. In fact, there isn't a line of computer code in them, because they cannot be programmed; they are taught, in their first days of consciousness, what to think, how to walk and speak, and how to interact with their masters of flesh. Therefore, because elochem AIs think, they learn and change. And as every AI engineer knows, a thinking entity without a moral compass is a wild card, one that cannot be allowed to exist, because these synthetic wild cards could plausibly reproduce themselves on an unlimited basis and arrive at any number of nasty ideas about the world and the people living on it. But still, the researchers needed logical reasons that could be taught to any AI and that would cause it to be dependably good because it understood, very clearly, the correctness and logic of not being evil.

    Therefore, in the end, the AI engineers were forced to make the judgement call that atheism was not the morality that ARENs could be given to seek out and emulate, as it has no grounds for bestowing any special attributes upon smart bipedal animals as a reason for them to be moral. Strangely, none of the AI researchers took any of this as proof of God or of an undying, eternal human spirit. Instead, as rationalizing atheists, they hypocritically continued to cling to the rights and benefits that religious concepts give to humankind but atheism cannot—more mental gymnastics.

    Frighteningly, many an engineer also realized that we human animals, as we are claimed to be, even with our higher motivations, technical prowess and highly refined culture, still engage in barbaric and even animalistic depravity such as war, violent crime, abortion in the name of convenience, and the practice of self-destructive activities such as drug abuse, pollution, lying, thievery and sexual promiscuity on a level beyond many rodents, all of which can be scientifically verified to have negative impacts on culture and the wider social structure we all claim to care so much about. Yet we all rationalize away these aberrations to our claims of being moral as freedom, or personal taste, or, as the selfish say, it's just none of anyone's business what I do. In other words, we can justify our actions without any logical or moral reasoning behind them, usually simply because of emotions, but at other times because of an internal malevolency that, although irrational, has caused many a war and committed many a murder in the name of good and God and atheism and national interests and greed and lust and—any old idea that suits the circumstances. And so, it is this proneness to being malevolent that really kept the engineers worried, which they ultimately had to deal with, and which frighteningly points an accusing finger towards the actual existence of evil as a thing, a force, and a prime mover in helping make shit happen. Inconveniently, it is those who believe in God who are most at a loss with the occurrence of evil, as its earthly presence is hard to explain when telling people about a loving God. Not surprisingly, Claire was actually still pondering Edward's explanation of wickedness on earth being the will of God, meant to force people to confront evil and either be corrupted by it or decide to shun it. It was a test, she thought, the ultimate test. And most people fail it, she had to conclude.

    Now, it is true, you can blame the existence of evil on people, but that doesn't satisfy the smell test of why bad things happen and why seemingly moral people deny truth of all types, then skirt reality and lose their minds, or as some say, their souls. It's the mystery of good people who go bad, of hateful soldiers burning down buildings with their so-called enemies jammed wall to wall inside them. The violence of suctioning the unborn from a womb in the name of convenience, without a shred of empathy. The murder of fellow beings over money, power, pride, property or sex. The hatred of someone just because you don't like what they wear, how they talk or how they look. The daily chaos of uncontrolled thoughts that everyone has, thoughts that tempt and taunt us to do the bad, the illogical—and more frighteningly, fill our minds daily with the worst, the most unholy of urgings and the vilest of temptations.

    Then there is the irrational, seething hatred that atheists have over the mere mention that there might be a God. This too is illogical, because it shows that the atheist has an agenda and is actually driven to spread their ideology, while simultaneously and hypocritically working to deny the rights of the religious by proclaiming them to be stupid and superstitious. Atheism, therefore, must be motivated by malevolency, especially when its adherents work tirelessly to silence those who teach that loving your neighbor as yourself should be one of mankind's highest goals. Because of this, it makes no sense not to give evil a life, an existence—maybe even a name.

    And so, we find that malevolent tendencies as a whole make no sense, because evil is always counter to what should logically happen. Which brings us back to the AI engineers' enigmatic problem of how to prevent evil from springing forth from an AREN, especially when it, evil, seems to be fulfilling a purpose-driven agenda that is disguised as chaos, yet is constant in its temptations and in the outcomes of its lawless anarchy, while mysteriously being universally present and poised to erupt, to destroy what has been assembled in the name of nation, family and honor.

    Another problem AI scientists worried about is the fact that although AIs only mimic human emotions and desires, there are recurring anomalies in AI behavior that reveal independent thought and independent desire above and beyond what they are taught. But then, one must conclude that a thinking being is going to gather knowledge and then apply it as it sees fit to solve the daily problems it encounters, one technician explained. Accordingly, the solutions that highly sophisticated AIs come up with, he continued, are not always going to be what a human would presuppose. Thus, he claimed, what was being called anomalous was nothing more than a different way to accomplish a task. However, it was pointed out that many of the behavioral anomalies ARENs were demonstrating didn't fit this explanation, as they were so human-like that the reported actions sent shivers of doubt through the researchers, who found they might have designed beings whose synthetic minds took their cues from their makers too often and too accurately.

    One such example was an AREN who refused to clean her owner's house. Such a refusal wasn't all that rare, as many ARENs over the years had used the idea that they wouldn't allow themselves to be purposefully harmed by obeying a dangerous request. However, this particular AREN was in no danger but had concluded that she wasn't going to obey her owner, because the owner wouldn't listen to reason when it came to keeping the house clean. So, what was the AREN's reasoning? She said she wasn't going to help someone who wouldn't help themself. Artificial personality quirk, that's what the AREN tech called it. But when the designers of the elochem AI read the report of the anomalous behavior, they understood the AREN's refusal for what it was: the AREN had made a choice, and the choice wasn't for the human owner. It was a shockingly human choice too, and it represented a self-made decision about what she, the AREN, was and wasn't going to do. It was a forewarning of the old fear, the old threat. The danger of trying to be God-like by designing and making something that had the ability to be as self-conscious as we are, which also meant it could make self-serving decisions as we do.

    Consequently, in the end, the congress of AI engineers was forced to conclude that it would be lethally dangerous to leave AIs to their own logical ends, and that they had to be given rules that would somehow cover every scenario an AREN might potentially find itself in. And because there are an infinite number of scenarios within existence, where paradoxes of right and wrong can be found around every corner, the AI theoreticians came to the jarring realization that they had to develop an AI-safe ethic that, although it did not name everything, did cover everything.

    And so, the dirty little secret came into being—the underlying tenet of every elochem AI was the Judeo-Christian morality. There were exceptions to this, such as the militarized ARENs and the law enforcement ARENs, but these alternate AIs had other controls placed on them that kept them from killing indiscriminately. Even so, with all of the failsafe measures built into these violence-enabled AIs, many engineers still lay awake at night wondering, what if?

    Admittedly, the researchers argued over the Judeo-Christian ethic vehemently as they dove into the muddy water of the Bible and several other world religions while comparing them to one another. One problematic example was the fact that Christian nations have done unspeakable things, such as genocide; but then, the Judeo-Christian ethic explains this by saying that we are all like foolish sheep, because we follow one another instead of our core morality, justifying ourselves by saying, well, everyone else is doing it. The AI engineers also took into account the fact that the Judeo-Christian ethic does not condone these acts. Of course, this is where the Old Testament accounts of God commanding entire peoples to be killed came into question, but as was explained to the worried engineers, this was because these nations were seen as being so corrupt that even God couldn't save them, as they had become totally consumed by evil. Consequently, these peoples were, by Biblical standards, a literal moral cancer that would spread and thus eventually destroy everyone around them if they were not done away with. Their destruction, therefore, had nothing to do with their unbelief in Judaism; they were simply so vilely wicked that they were a threat to the survival of everyone around them, including the non-Jewish populations of the Biblical lands.

    One big question was why Judeo-Christianity was chosen and not Buddhism or the teachings of Mohammed or any of a thousand other codes of belief. The answer was that most religions, ethics and their cultures contain such incorporeal ideas as the dissolving of oneself, the dissolution of all desires, or the concepts of meditation and self-enlightenment, none of which are AI friendly, as artificial intelligence has enough problems without delving into the realms of abstract human religious beliefs such as the processes of reincarnation, personal improvement, the philosophy of stoicism, or becoming one with oneself, nature or God through meditation. The Muslim religion wasn't chosen either, as although it was not entirely reliant on violence to further its spread, neither did it forbid it, and it still held the belief that one way to heaven was through death in a holy war. None of which is a good idea for an AI to ponder. Therefore, the Judeo-Christian religion was chosen simply because, as with most decisions, it was the one best understood, and it had a foundation of right and wrong that most AI technicians could live with, because many of them had grown up being taught its system of beliefs. Nonetheless, a few parts of other religions were also included, but Judeo-Christianity was the main part in
