Soul of a new machine?

Tracy Kidder’s classic 1981 bestseller The Soul of a New Machine chronicles the relentless work of teams of Data General engineers to design a next-generation minicomputer. How it all gets done isn’t necessarily pretty; it looks like plain hard work, buffeted by personalities, morale, ambition, deadlines and compromises. But the machine became a leader in its field and let the company command a strong position in the then brand-new era of 32-bit computing. As we look today at an emergent trend like cognitive computing, it is clear that we stand at another such inflection point, where wholly new system architectures will need to emerge to provide the groundwork for the intelligent applications of tomorrow.

Today people worldwide carry devices in their pockets and purses that compute rings around those shiny new machines of the 1980s. No air-conditioned, raised-floor computer rooms are required for these phone/camera/radio/TV/movie/record player/e-reader multifunction computers. Today a fresh battery and perhaps a wiped-clean glass screen are enough to bring the world to your palm.

Quite a few people still around remember that, as recently as the late 1970s, the only way to interface with a computer was first to sit down at a special machine built to punch holes in cardboard cards, then carry your carefully coded deck to a window where a gatekeeper to the massive hidden contraption would receive the precious stack (and don’t you dare drop it!). Then you’d take a number and wait, perhaps coming back the next day to pick up a sheaf of results that might (or might not) indicate that the computer had agreed to do what you had asked of it.

Today we expect to ask Siri something like: “Are there good seats left on Broadway for that popular show set during Prohibition, named for some city in the Midwest?” and have her say almost instantly, “Do you want me to buy you seats to Chicago in Row D center at the Ambassador Theater on 49th Street?”

Leaving aside for the moment any commentary on Siri’s glaring weaknesses today and her inability to respond so accurately in real life, the fact remains that our expectations regarding the intelligence our machines should deliver have risen dramatically in the very recent past. We now assume that Google will translate foreign-language Web pages for us instantly, that we can search from a mobile device simply by talking to it (and get driving directions in a similar manner), and we even take somewhat seriously a movie in which a forlorn guy literally falls in love with his talking digital assistant, an (almost) perfectly intelligent human voice from the machine. We have also seen the IBM computer Watson trash the most talented human experts across all the topics and tricky language devices of the TV quiz show Jeopardy.

But a glaring problem currently creates a gulf between our expectations of seamless mobile cognitive computing and the smarts Watson exhibited in answering Jeopardy questions: that Jeopardy-playing Watson required what we recently would have called an advanced supercomputer. As stated in Wikipedia, “Watson is composed of a cluster of 90 IBM Power 750 servers, each of which uses a 3.5 GHz POWER7 eight-core processor, with four threads per core. In total, the system has 2,880 POWER7 processor cores and has 16 terabytes of RAM.” At last check, this kind of compute horsepower still will not fit in a pocket or purse and, yes, still requires air conditioning and special environments, if not punched cards.

So how will we get practical cognitive computing intelligence that follows us around through our mobile devices? IBM has been readying an answer to this dilemma with its work on an entirely new machine. Since 2008, IBM has been a lead developer, with partners, in a government-supported joint research program called SyNAPSE, aimed at designing a new kind of computer architecture that mimics the function of the human brain.

The fundamental idea the SyNAPSE program is pursuing is to design a machine that models the networked interconnections of neurons and synapses that power our elegant and portable brains, rather than attempting to improve the Turing/von Neumann processing model used by today’s computers. The design goals call for massively parallel processing resources capable of resolving complex problems in real time while requiring virtually no electrical power. Our brains, for example, run on the equivalent of 20 watts and move around unobtrusively with us, while the most advanced supercomputers today require something like 9.9 million watts and their own live-in facilities.
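
To make that contrast concrete, here is a minimal sketch, in Python and purely for illustration, of the style of computation a “neurons and synapses” machine is built for: a toy leaky integrate-and-fire network in which simple units accumulate incoming signals, fire a spike when a threshold is crossed, and pass that spike along weighted synapses. Every size, weight and name below is invented for the example; it is not IBM’s SyNAPSE design, just the general class of model such hardware runs natively.

```python
# Illustrative sketch only: a tiny leaky integrate-and-fire spiking network,
# the general kind of neuron-and-synapse model neuromorphic chips are built to run.
# All sizes, weights and thresholds here are made up for the example.
import random

NUM_NEURONS = 8     # toy scale; IBM's announced chip has about one million neurons
THRESHOLD = 1.0     # membrane potential at which a neuron fires a spike
LEAK = 0.9          # per-step decay of each neuron's potential (the "leaky" part)

# Random sparse synapses: weights[i][j] is the connection strength from neuron i to j.
random.seed(42)
weights = [[random.choice([0.0, 0.0, 0.3, 0.5]) for _ in range(NUM_NEURONS)]
           for _ in range(NUM_NEURONS)]

potential = [0.0] * NUM_NEURONS   # each neuron's current membrane potential

def step(incoming):
    """Advance the whole network one time step; return (spiking neurons, next inputs)."""
    spikes = []
    for j in range(NUM_NEURONS):
        potential[j] = potential[j] * LEAK + incoming[j]   # leak, then integrate input
        if potential[j] >= THRESHOLD:
            spikes.append(j)
            potential[j] = 0.0                             # reset after firing
    # Spikes travel along synapses and become the next step's input.
    nxt = [0.0] * NUM_NEURONS
    for i in spikes:
        for j in range(NUM_NEURONS):
            nxt[j] += weights[i][j]
    return spikes, nxt

# Drive neuron 0 with a steady stimulus and watch activity ripple through the network.
recurrent = [0.0] * NUM_NEURONS
for t in range(10):
    drive = list(recurrent)
    drive[0] += 0.6                  # constant external stimulus to neuron 0
    fired, recurrent = step(drive)
    print(f"t={t}: neurons that fired: {fired}")
```

The point of the hardware effort is that this event-driven, massively parallel style of computation maps awkwardly onto a conventional von Neumann processor, which must simulate every neuron in sequence, but maps naturally onto a chip laid out as a grid of neuron circuits that consume power only when spikes actually occur.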

In August 2014, IBM announced “the first neurosynaptic computer chip to achieve an unprecedented scale of one million programmable neurons, 256 million programmable synapses and 46 billion synaptic operations per second per watt.” While this milestone achievement does not promise a truly smart Siri in your palm via the next release of the iPhone or its Android equivalent, it certainly establishes the legitimacy of the research approach and the probability of an entirely new soul for the next generation of intelligent devices populating our future. The path from here to there will again not be pretty, full of compromises, personalities, ambition and more. But Tracy Kidder may yet have the chance to write the sequel to that 1981 bestseller.
