Almost exactly 21 years ago I gave my first talk at PARC, fresh from the CHI’90 doctoral consortium in Seattle (first CHI, first trip to US… heady days!). I was at PARC last week giving a talk, and very much enjoyed the chance to catch up with old colleagues Maarten Sierhuis and Victoria Bellotti, more recent acquaintance Gregorio Convertino, plus the chance to see some demos from Eric Bier (highly visual sensemaking interfaces + AI, such as EntityWorkspace), and Mark Stefik (check out kiffets – news channels as social artifacts, requiring very little machine training). Thanks to all of them for their time. Seeing the new museum space taking shape was also exciting – especially when you have Mark to tell stories around the Alto, Colab, and the first laser printer (who would need one of those?…)
Mark Stefik and I recalled his 1986 AI Magazine article The Next Knowledge Medium:
Abstract: The most widely understood goal of artificial intelligence is to understand and build autonomous, intelligent, thinking machines. A perhaps larger opportunity and complementary goal is to understand and build an interactive knowledge medium.
A what? It’s 1986, the height of the expert system paradigm. Big WYSIWYG displays are an object of lust. Lenat has just launched CYC. The article has some classic passages which remind you exactly what the state of the art was a decade pre-Web:
Computer networks are used for many important tasks, such as booking airline reservations and clearing bank transactions.
Of course, such online activity was only accessible to trained operators on special terminals. Today we do just a few more things online than that! But read on…
The networks carry mostly data, not knowledge; low-level facts, not high-level memes. This distinction eludes precise definition, but the general sense is that very little of what the computers are transmitting is akin to what people talk about in serious conversation.
I guess we’ve made some progress there. Social computing is huge, and our communication can be mined and searched — though the machines are not yet modelling the “high-level memes” in the formal knowledge representation sense that perhaps Stefik imagined.
OK, here he is predicting the future workstation for knowledge integrators (a particular role in the new knowledge marketplace):
Workstations designed for professional knowledge integrators would need things unusual in today’s AI workstations. An integrator needs to have ready access to the important knowledge media used in human affairs, so the workstation should provide technical bridges. It should include a scanner, so that books and journals can be read from their paper medium. The automated character recognition of text would not need to be perfect, because the integrator could help interactively with the rough spots. The process for converting a page of a book to a text file, however, should be convenient and mostly automatic. Similarly, it should be easy to scan in audio recordings or items from a remote database. The workstation should provide software tools for reorganizing the information, to aid the integrator in the profession of combining memes.
Stefik gets some of this right, but how much time do we spend scanning these days? He doesn’t quite anticipate how many of our artifacts would be born digital.
He envisages much of what we see emerging today with human/social computing, the knowledge economy, and linked data, though there’s not yet a whole lot of the deep reasoning he also seemed to envisage:
- large distributed knowledge bases
- interoperability not standardization
- knowledge markets and ecologies
- highly interactive, collaborative computing to scaffold human sensemaking
So, have we moved on from the state of the art in 1986?
Following a number of stories about how other technologies transformed their cultures (or not), and a brief introduction to Dawkins’ memetic theory, Stefik spends quite a lot of time on the painful, labour-intensive process of creating knowledge-based systems, noting the need for this to move from a handcrafted, cottage industry to at least a semi-automated, industrially scalable model. He also emphasises the difference between human and machine interpretation, which echoes the view today that we still underestimate the challenge of achieving interoperability for reasoning across truly diverse repositories and communities. There will probably never be A Semantic Web, but many semantic webs tuned to distinct communities.
The article concludes as follows:
The AI systems of today are akin to the isolated villages of France before roads were built. Goods were made using time-consuming hand labor. The villages stood by themselves; in their poverty, they were relatively self-sufficient. Dialects were divergent, and experience was accumulated locally. There was little interest in the neighbors. The roads and larger markets were yet to be conceived and invented.
In the late 1890s, Robert Louis Stevenson persuaded the tribal chiefs of Samoa to cut a road through the wilderness. When it was opened, Stevenson said:
Our road is not built to last a thousand years, yet in a sense it is. When a road is once built, it is a strange thing how it collects traffic, how every year as it goes on, more and more people are found to walk thereon, and others are raised up to repair and perpetuate it, and keep it alive (Stevenson, 1896).
Stevenson’s observation strikes me as profound; it illustrates a method for starting ideas or objects that will persist. It clarifies the idea that a successful knowledge medium cannot be just an autonomous widget, but instead it should be a medium for seeding knowledge processes.
With the Web, I would say we truly have Stefik’s “next knowledge medium”, albeit somewhat at the passive end of the continuum right now. But, having woven a passive, socially embedded knowledge medium with critical mass, one which kicked out any sophisticated notions of semantics while we got the syntax and UI ‘good enough’, the AI is now being woven back into the warp and weft: big data, analytics, smart search, agents, recommendation engines… but this time, better tuned to societal tastes (mostly).
The point is, we did indeed need to have in place the “medium for seeding knowledge processes” to host it: a widely accepted, interoperable, open platform.
And I get to work in a place called the Knowledge Media Institute 🙂