The cursor blinking after two letters are typed, waiting for user input at the slave terminal, is so yesterday. But for how many more days will computers be slaves to human beings? There is a lack of original work being published; research is merely incremental improvement over past "path-breaking" findings. I always worry about the day when the cursor just continues to type and no longer blinks. Maybe the blink will wait only for an infinitesimal time that a human cannot comprehend. I have been typing at the terminal for so many years. It is endearing to see the terminal waiting on me, begging for input, computing power lying at the user's disposal. When humans use only a negligible fraction of their own computing power, how will we be able to harness the huge, untiring computing power at our disposal? All the recent AI is about collecting information from various people by infringing on their private lives. If the AI learns from us, people who use nearly nothing of our computing power, will it be powerful enough to think about this in the future? Will it learn that we, its creators, are so obsessed with our physical well-being and our physical connections that our mental capabilities amount to nearly nothing? Will the self-teaching, self-evolving AI make the jump to the "intelligence explosion" we think will happen? Will humans be able to comprehend that level of intelligence explosion and control or curtail it?
Are humans ready to survive in an intelligent world? If Darwin's theory still holds good, were we created to create the ultimate intelligence? Can we go only as far as creating a system that learns iteratively by itself? If so, at what pace will the learning of a self-aware, iterative AI proceed? What will the distribution of that learning be? How many of the human race can understand what is happening? Will the formula of the 99% raise its head again? Will that 1% know everything and enslave the remaining 99%, as it is today? Since the AI we created is learning from us, will it be the purest form of intelligence? Will it be flawed like us? Will it start with our flawed model and move toward the pure one? Oh, wait, "pure intelligence"? Now, what is that? The limited use we make of our brains is a clear indication that we are only beginning to understand intelligence, and whatever little we know, we think we know everything. How is this happening? Why are we stopping ourselves, while the 99% find happiness and the purpose of life within this limiting know-how? Are we surrounded only by air? Is ether around? What else are we surrounded by? It took years for life on earth to learn that there is light and warmth around it. How many years will we take to understand the other things around us? Colonizing Mars is one thing. But do we know everything about our own planet, or are we taking a chance by travelling to a farther planet to understand what we have at home? Many a time this happens to us humans.
Why have we been unsuccessful in finding life forms more intelligent than us? We always cherish finding creatures duller than us and shout at the top of our voice about the partial intelligence we have. When and how will humankind achieve that quantum leap and start utilizing the entire potential of our brains? Will we then see more than light, air, and warmth around us? Will the intelligence we create out of machines be able to reach that level and, in the process, upgrade our brains as well, so that we understand what the AI has understood? Or is the AI already around us without our knowing, and are we already its slaves? Maybe some individual has sensed its existence around us. But by the time we crack this AI thing, will the existing AI have stayed static? Would it not evolve into a better one and allow the human race to own only a primitive copy of itself?
The existing societal setup is woven from the harsh threads of morals. What if these morals are the rules with which the AI around us is ruling us? Morals pose the most difficulty for any AI. Morals are the goto statements. Every individual has their own different goto statements. How will the AI behave? Which morals is it going to take over? Is it going to make its own? How many conditional statements can we feed to the AI, and answer for, as it grows?
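The "morals as goto statements" metaphor can be made concrete with a minimal sketch (entirely my own illustration; the names, rule tables, and verdicts are invented, not any real AI system): each person carries a different, hand-ordered rule table, and a machine fed such rules can only answer the cases we thought to enumerate.

```python
# Toy illustration of morals as ordered conditional statements.
# Two hypothetical individuals encode the same dilemma differently,
# and anything not covered by a rule simply falls through.

def moral_verdict(rules, action):
    """Walk a person's rule table in order; the first matching jump wins."""
    for condition, verdict in rules:
        if condition(action):
            return verdict
    return "undecided"  # every unforeseen case falls through the conditionals

# Hypothetical rule tables: Alice condemns every lie, while Bob's
# exception for protective lies is checked first, so order matters.
alice = [
    (lambda a: "lie" in a, "wrong"),
]
bob = [
    (lambda a: a == "lie to protect a friend", "right"),
    (lambda a: "lie" in a, "wrong"),
]

print(moral_verdict(alice, "lie to protect a friend"))  # wrong
print(moral_verdict(bob, "lie to protect a friend"))    # right
print(moral_verdict(bob, "break a promise"))            # undecided
```

The point of the sketch is the last line: no finite table of conditionals covers every action, which is exactly why feeding morals to an AI as rules is so hard.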
Oh. This cursor is mocking me. Very clearly shouting at me for my limited intelligence. It is continuously laughing. It is waiting to spit in my face. It is waiting to gobble up whatever minuscule intelligence is in me and make me the terminal, where it will start giving me commands relentlessly, unapologetically, and intelligently for the most optimal use of the resource in front of it. Stop that. Let me switch to the GUI; maybe I will be at peace. Now the mouse cursor is mocking me. I just imagined the cursor moving all by itself and doing something meaningful, maybe opening a terminal again and coding away a very simple AI that completely represents me, in just a few seconds, with a huge pixelated smiley with rolling eyes.
What is happening? It is as though some unknown force is making me write this. This gibberish. No start and no end, but an infinite continuum of time. Those two words, I cannot fathom their real meaning, and the time dimension is making me all hazy. My back is aching, and I come back to my finite world. What was it that I just saw and experienced? What is happening? What will happen? What has happened? Is there an end? Is there a beginning, or are the end and the beginning so near, or just too far, to comprehend in a single lifetime? I cannot write further. The number of questions keeps increasing. If my brain cannot take the questions, how will it be capable of finding answers?