This is not nearly as metaphysical as the above posters, but it is still quite philosophical in its rationality. A noted philosopher (the name escapes me) developed the idea that a creation could only be as perfect as its creator. While I do not think it adequately describes God or other supernatural things, it helps me answer this question of "who am I" and what consciousness is. Computers are, in my humble opinion, the greatest invention of man to date. They have the most complex design and the most intricate interlocking subsystems that I can think of. As we advance in computer and robotic design, we notice that their behaviors become more and more like those of a sophisticated organism -- more and more like humans. I do not, however, think we will ever be able to create a computer superior to ourselves, and for that I give the philosopher his credit.
How does this relate to "what I am"? Well, many of our behaviors and emotions can be mimicked and recreated in solid-state silicon devices, without the need for a physical "blood and air" power mechanism. I therefore see no reason to believe there is a "me" beyond a self-aware computing organism. Computers nowadays can easily be "taught" to retain knowledge, much like our brains. The more knowledge they are endowed with, the more effective they become at calculations and relations.

Further still, we can endow a computer system with the ability to "defend" itself, a self-preserving mechanism like our own immune system. Antivirus programs and firewalls are apt metaphors for our skin and our white blood cells. Computers can be "taught" to turn themselves off and on at scheduled intervals to save power (sleep). They can be taught to prioritize activities and perform them only when there are resources available to do so. They can be taught to communicate with others via established protocols, and to learn new ways of communicating with previously unreachable devices. Eating and drinking are obviously inherent abilities of all these devices, as their first and only "need" is the power to keep themselves running. Even risk-taking can be programmed: a computer can calculate its odds as well as build pattern-based appraisals of its opponents. Through these calculations it can determine the expected value of taking a risk for its own improvement and, depending on the capability of its subsystems, act on it.

The final piece that many have claimed separates AI from humans is "emotion". I think emotion is just as artificial a construct as any other human/animal complex, and can be duplicated as such. The core emotions -- fear, anger, and love -- can be programmed. Anger, for example, is simple. If a malicious piece of software is introduced into the computing system, many lines of code can be written to respond to it with different severities. The things that make people angry, such as repeated offenses, can also be programmed: if (attackCount == 15) { attempt to disable the attacking device }; if (attackCount == 1) { appraise the attacker's propensity to attack again }. Love can similarly be programmed. We only truly love those who are close to us, so: if (interactions with a device are frequent) { loveCounter++ }; if (attacked) { loveCounter-- }. This is obviously simplistic pseudocode (I'll give a fuller sketch below), but such routines can be made to behave in a very realistic, "human" manner. They will never exceed humans in this capacity, but it can be done. There is no need for a "soul" or anything beyond a very sophisticated set of systems and subsystems interacting together.

This is the relationship that we share with computers. It is almost enlightening to consider the hardware similarity between the computer I'm typing this on and myself. We are obviously much more powerful computers, because our brains are working at quantum calculation speeds, whereas conventional PCs are working in binary. Similarly, we have much more hard drive space than RAM (more permanent magnetic memory than temporary memory). We run with an energy similar to electricity, snapping from synaptic gap to synaptic gap in the process of calculation. We have "video cards" capable of taking three-dimensional space and reproducing its manifestation on our "display device", the backs of our retinas. We have sound cards, capable of recording as well as outputting sound.
We have input devices, such as our senses, which compare (in a rudimentary way) to the mouse and keyboard of a computer. Our output can be transferred to other devices through speech and the written word, much like CDs and file-sharing. The hardware and software relationship between us and computers is undeniable, which is understandable, given that they are our greatest creation.
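Now, the fuller sketch I promised of those anger/love routines, plus the risk calculation. This is minimal Python, and every name in it (risk_value, EmotionalAgent, the 15-strike threshold) is something I made up for illustration -- not a real library, and certainly not a claim about how a brain actually implements emotion:

    def risk_value(p_success, gain, loss):
        # Expected payoff of a risk: chance-weighted gain minus chance-weighted loss.
        return p_success * gain - (1 - p_success) * loss

    class EmotionalAgent:
        def __init__(self):
            self.attack_counts = {}  # attacker id -> offenses seen so far ("anger" memory)
            self.love_counter = {}   # peer id -> affection score

        def on_interaction(self, peer):
            # Frequent contact builds "love", exactly like the loveCounter above.
            self.love_counter[peer] = self.love_counter.get(peer, 0) + 1

        def on_attack(self, attacker):
            # Attacks erode affection and escalate "anger" with repeat offenses.
            self.love_counter[attacker] = self.love_counter.get(attacker, 0) - 1
            count = self.attack_counts.get(attacker, 0) + 1
            self.attack_counts[attacker] = count
            if count == 1:
                return "appraise attacker's propensity to attack again"
            if count >= 15:
                return "attempt to disable the attacking device"
            return "raise defenses and log the offense"

Such an agent would only "take the risk" of, say, retaliating when risk_value came out positive. Crude, but it behaves exactly the way the prose above describes.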
Looong explanation aside, where does "I" come in? In a computing system, the CPU would perform this "I" role, as the controller and allocator of the duties and responsibilities of the individual subsystems. Its level of "me"-ness arises the same way ours does: it depends on awareness of its individual systems. If we were not aware that our arms were attached to our bodies, and could not "feel" them, we would not consider them part of ourselves. Likewise, if a CPU could not see the hidden method that allowed it to control the power supply, it would not consider that part of itself. It only counts as "me" the things it depends on and can "feel."
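To put that in the same sketch form (again Python, and again every name here is hypothetical -- no real CPU exposes anything like this):

    class Controller:
        # The "I": it counts as "me" only the subsystems it can actually feel.
        def __init__(self, subsystems):
            self.subsystems = subsystems  # name -> object that may expose ping()

        def sense_of_self(self):
            me = []
            for name, subsystem in self.subsystems.items():
                # A part belongs to "me" only if the controller senses it responding.
                if hasattr(subsystem, "ping") and subsystem.ping():
                    me.append(name)
            return me

A hidden power-supply routine with no ping() would never make the list, just as an arm we could not feel would drop out of our sense of self.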
So -- we're supercomputers, but nothing more, to me. Sorry for the long post.
EDITED to add: I just thought about it some more, and I think few people realize the similarity between psychology and computer science in this department. Both study extremely complex systems and work on "reprogramming" people and computers, respectively, to conform to acceptable standards. Both very often analyze behavior from the outside, based on its outputs and outward disposition. Both spend many years learning acceptable reactions and solutions for common problems that arise in complex systems such as they (we) are. Interesting connection, I think.