Longbets.org has a number of interesting long bets on issues of scientific and social concern, for example the artificial awareness long bet, which proposes that "by 2050 no synthetic computer nor machine intelligence will have become truly self-aware (ie. will become conscious)." The bet includes the bettor's rationale, the balance of voting and a place to discuss the matter.
I've recently been reading a lot of David Chalmers, a philosopher of mind who is deeply interested in questions of consciousness and, fortunately for me, a prolific and accessible writer who also maintains the weblog Fragments of Consciousness.
Chalmers is a dualist, someone who rejects the notion that a description of the physical functions of a system is also, ipso facto, a complete description of the intrinsic nature of the system, a position to which I am sympathetic. For example, color can be described as a function of hue and saturation, among other qualities, but that description does not explain how we experience the quality of being purple, much less how we might react to the film The Color Purple. Likewise, in a dualist philosophy of mind, abilities like observing, discriminating and reporting are functions of the mind but do not adequately explain consciousness, or experience in the first person.
It's one reason why I'm skeptical that machinery might achieve sentient or conscious behavior. Consciousness is much more than computation, no matter how impressive, and more than passing the Turing Test.
If you are inclined, Chalmers's paper Consciousness and Its Place in Nature is relatively accessible and makes a case for a dualist understanding of the mind.