8 Knowledge Management Tools
Any sufficiently advanced technology is indistinguishable from magic
—Arthur C. Clarke (1917–2008)
This chapter provides an overview of knowledge management (KM) tools, which are
all too often treated as black boxes (data goes in and knowledge magically comes out
the other end) by the majority of users. The new generation of millennials, however, appears to have developed different technology skills and to have differing expectations of these new tools. New technologies are continually emerging, and many will have
some intersection with KM. Knowledge management implementations require a wide
range of quite diverse tools that come into play throughout the KM cycle. Technology
is used primarily to facilitate communication, collaboration, and content management
for better knowledge capture, sharing, dissemination, and application. The major
categories of KM tools are presented and described together with a discussion on how
they can be used in KM contexts.
Learning Objectives
1. Describe the key communication technologies that can be used to support knowledge sharing within an organization.
2. Illustrate the major advantages and major drawbacks of synchronous versus asynchronous KM technologies.
3. Define data mining and list some cases where it would be used.
4. Compare and contrast the different types of intelligent agents and how they can
be used to personalize KM technologies.
5. Define the difference between push and pull KM technologies.
6. Characterize the major groupware tools and explain how they would be implemented within an organization.
7. Sketch out the major components of a knowledge repository and explain how
organizations and organizational users would make optimal use of one.
8. Describe how e-learning and knowledge management intersect and in which ways
they differ.
9. Identify emerging technologies and describe how they may be applied in a KM
context.
10. Compare and contrast the skill set and technology expectations of the baby
boomer and the millennial generations.
Introduction
Technology is a moving target as new tools are being continuously developed and
adopted to varying degrees by users. Knowledge management has an added complication in that there is no single tool that will cover all the bases. A suite or toolkit of
technologies, applications, and infrastructures is required in order to address all
phases involved in capturing, coding, sharing, disseminating, applying, and reusing
knowledge. Yet another variable to further complicate the situation is that the users
themselves are continuously changing. While baby boomers have certain preferences,
such as preferring the phone to e-mail or meeting face to face, as well as certain
expectations of technology (e.g., they are quite tolerant of errors, willing to wait, and
quite accepting of asynchronous communications), the same cannot be said of the
new millennial generation (Eisner 2005; Raines 2003).
The millennial generation is also referred to as the net generation (Tapscott 1998) or the
Y generation as it comes after generation X. The baby boomers are generally defined
as those born after World War II in the years between 1945 and 1965. Generation X
refers to those born between 1966 and 1980, while the Y generation refers to those
born between 1980 and the year 2000. Perhaps the best way to characterize generation
Y or the millennials is that they were the first to grow up with the
Internet. Throughout all three waves, there has been a wide range of innovations and
new tools, both for public consumption and for the workplace. The millennials tend
to have high expectations of the workplace precisely because they are such avid users
of real-time tools in their personal lives. The generational differences thus introduce
an added level of complexity to the KM world.
One strategy for navigating through all of this complexity is to categorize the different types of KM tools. Ruggles (1997) provides a good classification of KM technologies as tools that intervene in the knowledge processing phases:
• Tools that enhance and enable knowledge generation, codification, and transfer
• Tools that generate knowledge (e.g., data mining tools that discover new patterns in data)
• Tools that codify knowledge to make it available to others
• Tools that transfer knowledge to decrease the problems of time and space when communicating in an organization
Rollet (2003) classifies KM technologies according to the following scheme:
• Communication
• Collaboration
• Content creation
• Content management
• Adaptation
• E-learning
• Personal tools
• Artificial intelligence
• Networking
Rollet’s (2003) categories can also be grouped according to what phase of the KM
cycle they occur in (refer to figure 8.1).
The initial knowledge capture and creation phase does not make extensive use of
technologies. Methods of converting tacit knowledge into explicit knowledge were
discussed in chapter 4. A wide range of diverse KM technologies may be used to
support knowledge sharing and dissemination as well as knowledge acquisition and
application. Table 8.1 lists the major KM tools, techniques and technologies currently
in use. The underlying theme is that of a toolkit. Many tools and techniques are borrowed from other disciplines and others are specific to KM. All of them need to be
mixed and matched in the appropriate manner in order to address all of the needs of
the KM discipline. The choice of tools to include in the KM toolkit must be consistent
with the overall business strategy of the organization.
Figure 8.1
An integrated KM cycle: knowledge capture and/or creation, knowledge sharing and dissemination, and knowledge acquisition and application, linked by assess, contextualize, and update steps, all embedded in the organizational culture and supported by KM technologies.
Knowledge Capture and Creation Tools
Content Creation Tools
Robertson (2003a) predicts that content management systems (CMS) will become a
commodity in the future. Many content management system projects fail due to lack
of good implementation standards and a lack of understanding of usability issues.
Technology-only approaches will continue to generate unsuccessful projects; CMS should be handled in a strategic way. Lessons learned from these failures provide valuable guidance. The move toward open standards would greatly assist the
evolution of CMS. This is likely to proceed with the use of XML-based protocols for
communicating with and between content management systems. Additional standards are needed for storing, structuring, and managing content. There will eventually
be a convergence between content, documents, records, and knowledge management
that will be of greatest benefit to organizations. As yet, there is no merged platform
to accommodate such a convergence.
Authoring tools are the most commonly used content creation tools. Authoring tools range from the general (e.g., word processing) to the more specialized (e.g., web page design software).
Table 8.1
Major KM techniques, tools, and technologies

Knowledge creation and codification phase
• Content creation: authoring tools, templates, annotations, data mining, expertise profiling, blogs, mashups
• Content management: taxonomies, folksonomies, metadata tagging, classification, archiving, personal KM

Knowledge sharing and dissemination phase
• Communication and collaboration technologies: telephone/Internet telephone (VoIP)/fax, videoconferencing, chat rooms/instant messaging/Twitter, e-mail/discussion forums/wikis, groupware, work flow management, folksonomies, social networking, Web 2.0/KM 2.0
• Networking technologies: intranets, extranets, web servers and browsers, knowledge repositories, portals

Knowledge acquisition and application phase
• E-learning technologies: CBT, WBT, EPSS
• Artificial intelligence technologies: expert systems, DSS, customization/personalization, push/pull technologies, recommender systems, visualization, knowledge maps, intelligent agents, automated taxonomy systems, text analysis and summarization
• Emerging technologies: folksonomies, metadata
Annotation technologies enable short comments to be attached to specific sections of a text document, often by a number of different authors (e.g., the track changes feature in Word). This allows a running commentary to be built up and preserved. Annotations may be public (visible to all who access and read the document) or private (visible to the author only).
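To make this concrete, here is a minimal sketch (in Python, with invented class and field names) of how an annotation tool might anchor public and private comments to character spans of a document:

```python
from dataclasses import dataclass, field

@dataclass
class Annotation:
    """A short comment anchored to a span of a document (hypothetical structure)."""
    author: str
    start: int           # character offset where the annotated span begins
    end: int             # character offset where the annotated span ends
    comment: str
    public: bool = True  # private annotations are visible to the author only

@dataclass
class AnnotatedDocument:
    text: str
    annotations: list = field(default_factory=list)

    def annotate(self, author, start, end, comment, public=True):
        self.annotations.append(Annotation(author, start, end, comment, public))

    def visible_to(self, reader):
        # Public annotations are visible to everyone; private ones only to their author.
        return [a for a in self.annotations if a.public or a.author == reader]

doc = AnnotatedDocument("Data mining detects hidden patterns in data.")
doc.annotate("alice", 0, 11, "Define before first use.")
doc.annotate("bob", 20, 35, "Cite a source?", public=False)
print([a.comment for a in doc.visible_to("alice")])  # Alice sees public notes only
```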
Data Mining and Knowledge Discovery
Data mining and knowledge discovery are processes that automatically extract predictive information from large databases based on statistical analysis (typically cluster
analysis). Using a combination of machine learning, statistical analysis, modeling
techniques, and database technology, data mining detects hidden patterns and subtle
relationships in data and infers rules that allow the prediction of future results. Raw
data are analyzed to put forth a model that attempts to explain the observed patterns.
This model can then be used to predict future occurrences, and to forecast expected
outcomes (see figure 8.2).
A large number of inputs are required, usually over a significant period of time, and the types of models produced range from easy to almost impossible to understand. Decision trees, for example, are easy to understand, regression analyses are moderately easy, and neural networks remain black boxes. The major drawback of the black box models is that it becomes very difficult to hypothesize about causal relationships (see figure 8.3).
Figure 8.2
Predictive models: data mining derives if-then rules from historical data.
Figure 8.3
Black box models: inputs such as age, education, and eye color feed a model that predicts how well a student will perform on an entrance exam.
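As an illustration of the "easy to understand" end of this spectrum, the following sketch fits a decision tree on a handful of invented historical records and prints the resulting if-then rules. It assumes the scikit-learn library is installed; the feature names and data are toy values, not tied to any particular data mining suite:

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Historical data: [age, years_of_education] -> passed entrance exam (1) or not (0)
X = [[18, 12], [22, 16], [30, 12], [25, 18], [19, 11], [28, 17]]
y = [0, 1, 0, 1, 0, 1]

model = DecisionTreeClassifier(max_depth=2).fit(X, y)

# Unlike a neural network "black box", the fitted tree can be read as if-then rules.
print(export_text(model, feature_names=["age", "education"]))

# The model can then be used to predict future occurrences.
print(model.predict([[24, 16]]))  # likely predicts 1 (pass) given this toy data
```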
Box 8.1
A vignette: Beer with your diapers
A chain of convenience stores conducted a market basket analysis to help in product
placement. Market basket analysis is a statistical analysis of items that consumers tend to
buy together (i.e., that are found in the same basket at checkout). One of their hypotheses
was to place all infant care-related items together and run a simple correlation check to
validate that mothers of newborns did in fact tend to buy items such as baby powder or
cream when they came in to purchase diapers. To their surprise, the highest correlation
for an item that tended to be bought at the same time as diapers (in the newborn size and
format) was in fact a case of beer. This was later explained by the observation that it was
the fathers of newborns who were more likely to be sent to the store to buy more diapers
and while they were there, they tended to pick up other equally essential items.
Variables may be correlated but this relationship may not have any meaning or
usefulness. For example, a major bank found that there was a relationship between
the state an applicant lived in and a higher percentage of defaults on loans given out.
This should not be the basis for a policy that would automatically reject any applicants
from that state! Reality checks are always needed with statistics before any conclusions
can be drawn, as noted by British statesman Benjamin Disraeli, “There are three kinds
of lies: lies, damned lies and statistics.”
Typical applications of data mining and knowledge discovery systems include
market segmentation, customer profiling, fraud detection, retail promotion evaluation, credit risk analysis, and market basket analyses (as described in the vignette).
However, there are usually a few gems to be mined with data mining applications. These are often unexpected correlations that, upon further study, yield some useful
(and often actionable) insights into what is occurring. The famous example is that of
the relationship between purchases of beer and purchases of diapers.
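The following minimal sketch shows the mechanics behind market basket analysis: it counts item co-occurrences across invented baskets and computes lift, a standard measure of whether two items appear together more often than chance would predict:

```python
from itertools import combinations
from collections import Counter

baskets = [
    {"diapers", "beer", "chips"},
    {"diapers", "beer"},
    {"diapers", "baby powder"},
    {"milk", "bread"},
    {"beer", "chips"},
]

item_counts = Counter(item for b in baskets for item in b)
pair_counts = Counter(frozenset(p) for b in baskets for p in combinations(sorted(b), 2))
n = len(baskets)

def lift(a, b):
    # lift > 1 means a and b are bought together more often than chance predicts
    p_a = item_counts[a] / n
    p_b = item_counts[b] / n
    p_ab = pair_counts[frozenset((a, b))] / n
    return p_ab / (p_a * p_b)

print(lift("diapers", "beer"))  # > 1: an unexpectedly strong pairing
```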
Some data mining tools that are currently in use include:
• Statistical analysis tools (e.g., SAS)
• Data mining suites (e.g., Enterprise Miner)
• Consulting/outsourcing services such as EDS, IBM, and Epsilon (note that these offerings are models, not just software)
• Data visualization software that coherently presents a large amount of information in a small space, making use of the human computer—your eyes—to detect patterns (e.g., virtual reality and simulation software that lets you walk around the data points)
It is also possible to apply this technique and use these tools to mine content other than data: text mining, thematic analysis, and web mining can look at what content is accessed, how often, and for how long (e.g., number of hits), which is very helpful in content management. Similarly, skill mining or expertise profiling can be used
to detect patterns in online curriculum vitae of organizational members. Expertise
location systems can be automatically created based on the content that has been
mined. Commercial software systems can also be used to mine e-mail data in order
to determine who is answering what types of queries or themes. Organizational experts
and expertise can be detected by looking at the patterns of questions and answers
contained within the e-mails. The same caveat applies to all of these data mining
applications—a human being is always needed in the loop in order to carry out “reality
checks” (i.e., to verify and validate that the patterns do indeed exist and that they
have been interpreted in a useful and valuable manner).
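A minimal sketch of the expertise profiling idea follows; the people and documents are invented, and a real system would still need the human reality check described above:

```python
from collections import Counter

# Invented stand-ins for CVs or answered e-mails mined per person
documents = {
    "alice": "data mining statistics clustering data models",
    "bob": "taxonomy metadata tagging taxonomy classification",
}

# Build a crude expertise profile: term frequencies per person
profiles = {person: Counter(text.split()) for person, text in documents.items()}

def find_experts(topic):
    # Rank people by how often the topic term appears in their documents
    ranked = sorted(profiles, key=lambda p: profiles[p][topic], reverse=True)
    return [p for p in ranked if profiles[p][topic] > 0]

print(find_experts("taxonomy"))  # ['bob']
```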
Blogs
A blog is a term for a web log—a popular and fairly personal content form on
the Internet. A blog is almost like an open diary; it chronicles what a person wants
to share with the world on an almost daily basis (Blood 2002; see also http://www
.rebeccablood.net/). While the “blogosphere” started off as a medium for mostly personal musings, it has evolved into a tool that offers some of the most insightful
information on the web. Further, blogs are becoming much more common, as businesses, politicians, policy makers, and even libraries and library associations have
begun to blog as a way of communicating with their patrons and constituents.
Several librarians publish blogs that offer a wealth of information about social
software and its uses. SNTReport.com focuses on the social software industry and how
social software tools are being used to help people collaborate. Blogs not only offer a
new way to communicate with customers, they have internal uses as well. For example,
large organizations can use a well-formed blog to exchange ideas and information
about web development projects, training initiatives, or research issues. These questions and answers can be cross-indexed and archived, which helps build a knowledge
network among the participating members. Most important, the price of setting up a
well-formed, secure blog and leveraging it into a knowledge and content management
tool is a pittance when compared to other proprietary solutions.
Right now, the majority of blogs are published exclusively in text. The next generation of blogs, however, will implement audio and video elements, bringing a sophisticated multimedia blend to the medium (Dames 2004). The overwhelming popularity
of YouTube (www.youtube.com) attests to the powerful draw of the image, and in
particular, the moving image. On YouTube, short video clips can be posted on practically any topic. These are often self-filmed and self-indexed. It is possible to search
the YouTube web site for a clip on a particular topic. While many videos are mostly
entertaining, quite a few serve as educational resources (see listings in chapter 14).
Pikas (2004) examined the notion of searching blogs. Blogs are reverse-chronologically arranged collections of articles or stories that are generally updated more
frequently than regular web pages. Just like any other information on the net, there
is no guarantee of authority, accuracy, or lack of bias. In fact, personal blogs are
frequently biased and can be good sources of opinion and information from the man
on the street. Because blogs can be updated on the fly, they frequently have unfiltered
information faster from war zones and sites of natural disasters than the mainstream
media outlets. Blogs are also good sources of unfiltered information on either faulty
or very useful products.
In the beginning, blogs appeared in search results alongside regular web pages.
Since blogs are not technologically any different from other web pages (i.e., they are HTML, XML, JavaScript, etc.; it is their format, not their coding, that is different),
spiders and bots collect posts the same way they collect other online information.
Search engines that place greater value on sites that are recently and frequently
updated and are highly linked tend to rank blog posts very highly. Since the barrier
to publication is so low in blogs, arguably much lower than for standard web pages,
these high rankings were introducing a lot of noise into online searches. Odds are that
you have run across several archived blog posts if you have searched on a controversial
topic in the past year. Recently, most major search engines have altered their
algorithms to push blogs down in the search results. Engines that only return two
results from any one site use this feature to limit the impact of blogs on the search
results.
Blog searching breaks down into at least two categories: searching for information within or across blogs, and searching for the addresses of blog feeds so that you may subscribe in your
aggregator. Feeds and blogs are two different things, but are closely linked because
most blogs have feeds and many feeds are generated by blogs. Just as in other web
search tools, there are search engines and directories. At this time, blog search engines
are where general search engines were before the Google Age. There are many competing smaller products but no outstanding products dominating the scene.
Mashups
A mashup is an innovative way of combining content (Merrill 2006). Mashups are
web applications that offer an easy and rapid way of combining two or more different sources of content into a single, seamlessly integrated application. The term
originates from the practice of mixing tracks from two different songs. One of the first
applications was to combine real estate listings with the location map drawn from
Google Maps. The integration is typically undertaken by retrieving content from publicly available sources, combining continuous web feeds such as RSS or using some of
the newly created mashup editors and programming languages. Mashups make it very
easy to combine different media such as text and images, videos, maps, and news
feeds. There are, however, issues with intellectual property and information privacy that will need to be ironed out with this new, emergent technology (Zang, Rosson,
and Nasser 2008).
Within a business context, however, if the content to be combined is clearly available for use by the company and its employees, then mashups become an intriguing
means of creating new content from old. Some popular business uses of mashups to
date have been to create presentations that contain aggregated content and to support
collaborative work such as joint authoring of content. In a way, mashups may also be considered as knowledge portals—both aggregate content. However, mashups do
so in a much more dynamic way (portals are discussed later in this chapter).
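To illustrate the underlying mechanics, here is a minimal sketch of the real estate example: two invented content sources (listings and a stand-in geocoding service) are joined on a shared key to form a new, combined view:

```python
listings = [
    {"address": "12 Oak St", "price": 450000},
    {"address": "7 Elm Ave", "price": 380000},
]
geocoder = {  # invented stand-in for a mapping service such as Google Maps
    "12 Oak St": (45.50, -73.57),
    "7 Elm Ave": (45.52, -73.55),
}

# The "mashup": each listing is enriched with coordinates from the second source
mashup = [
    {**home, "coordinates": geocoder[home["address"]]}
    for home in listings
    if home["address"] in geocoder
]
for entry in mashup:
    print(entry)
```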
Content Management Tools
Content management refers to the management of valuable content throughout the
useful life span of the content. Content life span will typically begin with content
creation, handle multiple changes and updates, merging, summarization, and other
repackaging and will typically end with archiving. Metadata (information about the
content) is used to better manage content throughout its useful life span. Metadata
includes such information as source/author, keywords to describe content, date
created, date changed, quality, best purposes, annotations by those who have made
use of it, and an expiry or best-before date where applicable. Additional attributes such as the storage medium, location, and whether or not it exists in a number of alternative
forms (e.g., different languages) are also useful to include. XML is increasingly being
used to tag knowledge content. Taxonomies serve to better organize and classify
content for easier future retrieval and use.
XML (eXtensible Markup Language) provides the ability to structure and add relevance to chunks of information (that's why many CM solutions use XML) and, in theory, to exchange data more easily between applications, for example, with your suppliers, customers, and partners. However, you may all use the same words (tags), but if each of you defines and applies them differently, then we remain in the land of Babel. Common agreed schemas are essential. Keep tabs on developments in the schemas and metadata standards in your field. Useful sources are XML.org (http://www.xml.org) and the W3C XML Schema section (http://www.w3.org/XML/Schema).
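As a small illustration, the following sketch tags a content item with XML metadata using only the Python standard library; the element names loosely echo the descriptive fields discussed in this section and are not a formal schema:

```python
import xml.etree.ElementTree as ET

record = ET.Element("record")
ET.SubElement(record, "title").text = "Drilling Lessons Learned, Field A"
ET.SubElement(record, "creator").text = "J. Smith"
ET.SubElement(record, "date").text = "2009-03-15"
ET.SubElement(record, "subject").text = "drilling; lessons learned"
ET.SubElement(record, "expires").text = "2012-03-15"  # "best-before" date

xml_text = ET.tostring(record, encoding="unicode")
print(xml_text)

# Because the tags are structured, another application can retrieve by field:
parsed = ET.fromstring(xml_text)
print(parsed.findtext("subject"))  # drilling; lessons learned
```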
Taxonomies—hierarchical information trees for classifying information—act like
your library subject catalog. They can help overcome differences of language usage in
different parts of an organization and even the use of different languages. Traditionally
manually intensive, taxonomies are receiving significant attention because of the growing problem of information overload. But how do you cope with the evolution of terms,
whose meaning seems to change from one year to the next? Automatic (or
semi-automatic) classification of information objects—natural language analyzers, text
summarizers, and other technology—helps to understand some of the meaning—
the concepts—behind blocks of text and to tag and index it appropriately to aid subsequent retrieval. Many take advantage of the organization's underlying knowledge
taxonomy.
Folksonomies and Social Tagging/Bookmarking
Metadata is literally translated as data about data and refers to specific information
about content contained in books, reports, articles, images, and other containers so
that they can be organized and retrieved in an orderly fashion. Metadata is also
referred to as tags or keywords. Taylor (2004) notes that metadata comes in three general flavors: administrative, structural, and descriptive. The Oxford Digital Library (ODL) (http://www.odl.ox.ac.uk/metadata.htm) defines the three types as follows. Administrative
metadata is the information needed to manage the information resource over its life
cycle such as data about how it was acquired, where it came from, licensing, intellectual property rights, and attribution (e.g., was it scanned, what format is it stored
in, etc.). This is sometimes referred to as preservation metadata. Structural metadata
relates to the actual computer elements involved such as tables, columns, and indices—all the logical units of the information resource. Descriptive metadata refers more
to the content or subject matter of the information resource to help users find it (e.g.,
cataloguing records, finding aids, keywords). Descriptive metadata is of greatest
concern in KM because we often need to expand this type of data about data greatly
in order to increase the usability (and reusability) of a given unit of knowledge.
Metadata is very formal and tends to be created and updated by dedicated personnel such as catalogers and other library and information science professionals. This is
the highest standard in metadata but is time consuming to produce (Mathes 2004).
An alternative is to have authors create and add their own metadata for their own
works. The Dublin Core best exemplifies author-created metadata (Greenberg et al.
2001). Both of these approaches work well for the person who develops the metadata
but not necessarily as well for other users (often referred to as unknown or unanticipated users). A third option exists—that of user-created metadata. This bottom-up or
grassroots approach is referred to as a folksonomy or as social bookmarking or tagging.
The advantage of this third option is that the metadata is created by the collectivity of users, so all users, not just the creators, should more readily understand the tags or data about data.
Social bookmarking is a method whereby users participate directly in the storage,
organization, searching, and managing of web resources. One way is by saving
personal bookmarks on a publicly accessible web site and then tagging these sites
with your own metadata. Early sites include: del.icio.us (http://www.delicious.com),
Furl (http://www.furl.net/), web page bookmarking sites, and Citeulike (http://www
.citeulike.org/), a social citation site for scholarly publications. Other users can then
view the bookmarks by category, search by key word or use other attributes. Users
make use of informal tags instead of more formal cataloguing methods. Since all the
tags originate from the intended end users, they are easier to understand than more
standardized or top-down indexing terms. The major drawback is this very lack of
standardization. There is no controlled vocabulary, that is, a list of standard keywords.
Many errors can thus occur due to misspelling, synonym confusion, tags with more than one meaning, or tags that are too personalized. This situation brings us right back to the problem faced by more traditional cataloguing approaches: how do you tag so that others can understand your tags?
In a KM context, social bookmarking makes it possible to share knowledge with
others in a new way by sharing not only the original knowledge but also what you
think about it (the metadata). The technology is easy to use with hardly any learning
curve to speak of. The real potential lies in what the metadata can be used for. For
example, if the knowledge resource (data) is a best practice, then the metadata (data
about data) can include annotations about what others think of the best practice,
testimonials, cautionary notes (when not to apply and why), and other contextual
information that can greatly increase the successful use and reuse (application) of this
knowledge. Social bookmarking is an excellent vehicle for peer-to-peer knowledge
sharing and may play a greater role in future communities of practice. In a given
community of practice (CoP), there is, in addition to a shared purpose and a shared
repository, a shared vocabulary. Since CoP members share the same jargon, tagging is
less likely to be a problem. Tagging for yourself should approximate tagging for your
peers, who are neither unknown nor unanticipated users.
As social bookmarking sites mature and ever-increasing numbers of users participate
in them, it becomes possible to see some patterns emerging with respect to the tags
that are most commonly used. This tag “cloud” can be found by looking at the right-hand side of individual tag pages, under the related tags section of most social bookmarking sites.
Tag clouds represent emergent or organically grown taxonomies—commonly referred to as folksonomies, a term coined by Thomas Vander Wal in 2004 (Smith 2004, in Mathes 2004) as a combination of folk and taxonomy.
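A tag cloud can be derived with nothing more than frequency counting, as in this minimal sketch over invented bookmark data, where the most common tags would be displayed largest:

```python
from collections import Counter

bookmarks = [
    {"url": "http://example.org/a", "tags": ["km", "taxonomy"]},
    {"url": "http://example.org/b", "tags": ["km", "folksonomy", "tagging"]},
    {"url": "http://example.org/c", "tags": ["tagging", "km"]},
]

# Count how often each tag is used across all users' bookmarks
tag_counts = Counter(tag for b in bookmarks for tag in b["tags"])

# Scale the display size of each tag by its relative frequency
max_count = max(tag_counts.values())
for tag, count in tag_counts.most_common():
    print(f"{tag}: font-size {100 + 100 * count // max_count}%")
```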
Folksonomies differ from traditional taxonomies in that there is no hierarchy, no
object-oriented style of inheritance from parent object to child object, just clusters of
tags that appear to be loosely related. They also do not follow taxonomy rules in that
folksonomies can have more than one type of relationship between the same terms.
In a typical folksonomy, terms will differ in their level of specificity, they may be
qualitatively different, and they may not necessarily make sense! A folksonomy, in
other words, freely advocates mixing apples and oranges. The drawbacks are once
again lack of standardization, ambiguity, diminished rigor in classifying, and the use
of a flat rather than hierarchical space. The advantages are being able to use the everyday language of users and the unlimited expansion of keywords. Serendipitous finding also improves retrieval, as users can observe what others felt was related knowledge.
As with social bookmarking, folksonomies appear particularly well suited to communities of practice, where peer-to-peer sharing can be augmented through the folksonomy approach. A folksonomy should help increase cooperation and knowledge
sharing among community members by making visible what often remains an invisible model of who knows whom and who knows what or who is interested in what
topic. Folksonomies can therefore be considered as knowledge creation tools (creation
of tags) and knowledge sharing and dissemination tools (peer-to-peer sharing, public
posting of tags) as well as a knowledge application tool (metadata that contextualizes
when and where the knowledge should be used).
A final note: folksonomies and more traditional knowledge organization schemes
(see chapter 4) need not be mutually exclusive. A folksonomy can be an excellent
starting point for a more formal taxonomy. The folksonomy can serve a needs-analysis
function and permit the users to make use of their own preferred vocabulary while
the designers link this to the more formal taxonomy through a thesaurus. This linkage
will also serve as a form of personalization of the search and retrieval interface for the
users.
Personal Knowledge Management (PKM)
Personal capital is a term coined by Cope (2000) as a divergence from the traditional
notion of capital, which is an asset owned by an organization. In fact, the future of
KM will blur the boundaries between the individual, the group or community, and
the organization. KM will become a pervasive part of how we conduct our everyday
business lives. Personalized KM (PKM) will gain increasing importance given the ever-increasing momentum of information overload that we must deal with. In other
words, some of the key principles, best practices, and business processes of KM that
have to date been focused at the organizational level will filter down to be used by
individuals managing their own personal capital.
PKM and traditional knowledge management differ depending on whether an
organizational or personal perspective is adopted. Tools for personal information
management are impressive and, if you think about e-mail and portals, are already
widely used. Newer tools such as blogs, news aggregators, instant messaging, and wikis
represent a new toolset for PKM.
The personal portal, once an enterprise portal, is now focused on the needs of the individual. All of a person's information and application needs are harmoniously brought together and arranged on the desktop: mass customization in front
of your eyes! Again, the aims are laudable, but reality and theory are often miles apart.
PKM brings many of the key principles of KM to bear on the personal productivity
and specific work requirements of a given knowledge worker. Definitions of PKM
revolve around a set of core issues: managing and supporting personal knowledge and
information so that it is accessible, meaningful, and valuable to the individual; maintaining networks, contacts, and communities; making life easier and more enjoyable;
and exploiting personal capital (Higgison 2004). On an information-management
level, PKM involves filtering and making sense of information, organizing paper and
digital archives, e-mails, and bookmark collections.
Knowledge Sharing and Dissemination Tools
Rollet (2003) made a distinction between communication technologies, such as
telephone and e-mail, and collaboration technologies, such as work flow management.
Yet communication and collaboration are invariably intertwined, and it is quite difficult to establish where one ends and the other begins. Both types of tools have therefore been grouped under the category of
groupware or collaboration tools. Although all organizational members will make
use of communication and collaboration, including project teams and work units,
communities of practice will be particularly active in making use of many if
not all of the communication and collaboration technologies described in this
section.
Groupware and Collaboration Tools
Groupware represents a class of software that helps groups of colleagues (work groups)
attached to a communication network (e.g., LAN) organize their activities. Typically,
groupware supports the following operations:
• Scheduling meetings and allocating resources
• E-mail
• Password protection for documents
• Telephone utilities
• Electronic newsletters
• File distribution
Communication technologies used typically include the telephone, fax, videocon-
ferencing, teleconferencing, chat rooms, instant messaging, phone text messaging
(SMS), Internet telephone (voice over IP or VOIP), e-mail, and discussion forums.
Communication is said to be dyadic when it occurs between two individuals, for
example, a telephone call. Teleconferencing, on the other hand, may have more
than two participants interacting with one another in real time. Videoconferencing
introduces a multimedia component to the communication channel as participants
can not only hear (audio) but also see the other participants (audiovisual). Desktop
videoconferencing is similar but does not require a dedicated videoconference facility.
Simple and inexpensive digital video cameras can be used to transmit images. The
visual component is especially useful when demonstrations are presented to all
participants.
Chat rooms are text based but synchronous. Participants communicate with one
another in real time via a web server that provides the interaction facility. Instant
messaging is also real-time communication, but in this case participants sign on to
the instant messaging system and they can immediately see who else is online or live
at that same time. Messages are exchanged through text boxes. SMS (short message service) allows text messages to be sent via a cell phone rather than through
the Internet.
E-mail continues to be one of the most frequently used communication channels
in organizations. Although e-mail messaging is typically dyadic, it can also be used in a more
broadcast mode (e.g., group mailings) as well as in an asynchronous group discussion
mode by forwarding previous discussion threads.
Communication technologies are almost always integrated with some form of
collaboration, whether it be planning for collaboration or organizing collaborative work.
Table 8.2
Classification of groupware technologies
• Same place (colocated), same time (synchronous): voting and presentation support
• Same place (colocated), different time (asynchronous): shared computers
• Different place (distant), same time (synchronous): videophones, chat
• Different place (distant), different time (asynchronous): e-mail, work flow
Collaboration technologies are often referred to as groupware or as work group
productivity software. It is technology designed to facilitate the work of groups. This
technology may be used to communicate, cooperate, coordinate, solve problems,
compete, or negotiate. While traditional technologies like the telephone qualify as
groupware, the term is ordinarily used to refer to a specific class of technologies relying
on modern computer networks, such as e-mail, newsgroups, videophones, or chat.
Groupware technologies are typically categorized along two primary dimensions
(see table 8.2):
• Whether users of the groupware are working together at the same time (real-time or synchronous groupware) or at different times (asynchronous groupware)
• Whether users are working together in the same place (colocated or face-to-face) or in different places (non-colocated or distant)
Coleman (1997) developed a taxonomy of groupware that lists twelve different categories:
• Electronic mail and messaging
• Group calendaring and scheduling
• Electronic meeting systems
• Desktop video and real-time synchronous conferencing
• Non-real-time asynchronous conferencing
• Group document handling
• Work flow
• Work group utilities and development tools
• Groupware services
• Groupware and KM frameworks
• Groupware applications
• Collaborative Internet-based applications and products
E-mail is by far the most common groupware application (besides, of course, the
traditional telephone). While the basic technology is designed to pass simple messages
between two people, even relatively basic e-mail systems today typically include interesting features for forwarding messages, filing messages, creating mailing groups, and
attaching files with a message. Other features that have been explored include automatic sorting and processing of messages, automatic routing, and structured communication (messages requiring certain information).
Newsgroups and mailing lists are similar in spirit to e-mail systems except that they
are intended for messages among large groups of people instead of one-to-one communications. In practice the main difference between newsgroups and mailing lists is
that newsgroups only show messages to a user when they are explicitly requested (an
on-demand service), while mailing lists deliver messages as they become available (an
interrupt-driven interface).
Work flow systems allow documents to be routed through organizations using a
relatively fixed process. A simple example of a work flow application is an expense
report in an organization. An employee enters an expense report and submits it; a copy is archived and then routed to the employee's manager for approval. The manager
receives the document, electronically approves it, and sends it on. The expense is
registered to the group’s account and forwarded to the accounting department for
payment. Work flow systems may provide features such as routing, development of
forms, and support for differing roles and privileges.
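The expense report example above can be sketched as a fixed routing sequence; the stages and roles below are invented for illustration:

```python
# Fixed route a document follows, per the expense report example
ROUTE = ["submitted", "archived_copy", "manager_approval", "accounting", "paid"]

class ExpenseReport:
    def __init__(self, employee, amount):
        self.employee = employee
        self.amount = amount
        self.stage = 0
        self.history = []  # audit trail of (stage, actor) pairs

    def advance(self, actor):
        # Record who acted at this stage, then route to the next fixed stage
        self.history.append((ROUTE[self.stage], actor))
        self.stage += 1
        return ROUTE[self.stage] if self.stage < len(ROUTE) else "done"

report = ExpenseReport("alice", 125.40)
print(report.advance("alice"))       # archived_copy
print(report.advance("system"))      # manager_approval
print(report.advance("manager"))     # accounting
print(report.advance("accounting"))  # paid
```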
Hypertext is a system for linking text documents to each other with the web being
an obvious example. Whenever multiple people author and link documents, the
system becomes group work, constantly evolving and responding to others’ work.
Some hypertext systems include capabilities for seeing who else has visited a certain
page or link or at least seeing how often a link has been followed, thus giving users a
basic awareness of what other people are doing in the system. Page counters on the
web are a crude approximation of this function. Another common multi-user feature
in hypertext that is not found on the web is allowing any user to create links from
any page, so that others can be informed when there are relevant links that the original
author was unaware of.
Group calendars allow scheduling, project management, and coordination among
many people and may provide support for scheduling equipment as well. Typical
features detect when schedules conflict or find meeting times that will work for everyone. Group calendars also help to locate people. Typical concerns are privacy (users
may feel that certain activities are not public matters) and completeness and accuracy
(users may feel that the time it takes to enter schedule information is not justified by
the benefits of the calendar).
Collaborative writing systems may provide both real-time support and nonreal-time support. Word processors may provide asynchronous support by showing
authorship and by allowing users to track changes and make annotations to documents. Authors collaborating on a document may also be given tools to help plan
and coordinate the authoring process, such as methods for locking parts of the
document or linking separately authored documents. Synchronous support allows
authors to see each other’s changes as they make them and usually needs to provide
an additional communication channel to the authors as they work (via videophones
or chat).
Synchronous or real-time groupware is exemplified by shared workspaces, teleconferencing or videoconferencing, and chat systems. For example, shared whiteboards
allow two or more people to view and draw on a shared drawing surface even from
different locations. This can be used, for instance, during a phone call, where each
person can jot down notes (e.g., a name, phone number, or map) or to work collaboratively on a visual problem. Most shared whiteboards are designed for informal
conversation, but they may also serve structured communications or more sophisticated drawing tasks, such as collaborative graphic design, publishing, or engineering
applications. Shared whiteboards can indicate where each person is drawing or pointing by showing tele-pointers, which are color coded or labeled to identify each
person.
Twitter is a newer technology that is about as real as real-time can get. The major
use of Twitter is to continuously answer the question, “what are you doing now?” It
is a miniblogging service that allows users to send tweets or minitexts up to 140 characters in length to their user profile web page. This information is then conveyed to
users who have signed up to receive the posts (typically a circle of friends or colleagues). Tweets can be received as web page updates, RSS feeds, SMS text on phones,
through e-mail, on Facebook, and so on. Twitter started out in life as an R&D project
in podcasting (Glaser 2007). While Twitter remains largely a novelty application used
by early adopters, there are potential applications within a KM context. Anthony
Bradley (2008) addressed this point and noted that Twitter is a people-based technology and can serve as a good alerting service for people who are working together,
particularly if they are working together on time critical work. Twitter can also serve
as an ultra-rapid way of testing out ideas on a few trusted individuals—a quick forum
for feedback in real time (e.g., a presenter who checks to see how the talk is going, a
meeting coordinator who needs everyone in attendance ASAP, or a project manager
trying to physically locate his team). One potential application for real-time tweets
could be an expertise locator system—one that locates expertise in real-time as well
as a means of meeting some of the expectations of millennial knowledge workers
(Lee 2003).
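The alerting pattern behind such uses can be sketched as a simple publish/subscribe service; the class below is entirely hypothetical and uses no real Twitter API:

```python
from collections import defaultdict

class MicroblogService:
    """A hypothetical Twitter-style service: followers receive each short post."""
    def __init__(self):
        self.followers = defaultdict(set)   # author -> set of subscribers
        self.inboxes = defaultdict(list)    # subscriber -> received posts

    def follow(self, subscriber, author):
        self.followers[author].add(subscriber)

    def post(self, author, text):
        if len(text) > 140:
            raise ValueError("posts are limited to 140 characters")
        # Convey the post to everyone who signed up to receive it
        for subscriber in self.followers[author]:
            self.inboxes[subscriber].append((author, text))

service = MicroblogService()
service.follow("team", "project_manager")
service.post("project_manager", "Status meeting moved to 3pm, room B")
print(service.inboxes["team"])
```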
Video communications systems allow two-way or multi-way calling with live video,
essentially a telephone system with an additional visual component. Cost and compatibility issues limited early use of video systems to scheduled videoconference meeting
rooms. Video is advantageous when visual information is being discussed, but may
not provide substantial benefit in most cases where conventional audio telephones
are adequate. In addition to supporting conversations, video may also be used in less
direct collaborative situations, such as providing a view of activities at a remote
location.
Chat systems permit many people to write messages in real-time in a public space.
As each person submits a message, it appears at the bottom of a scrolling screen. Chat
groups are usually formed by listing chat rooms by name, location, number of people,
topic of discussion, and so on.
Many systems allow for rooms with controlled access or with moderators to lead
the discussions, but most of the topics of interest to researchers involve issues related
to unmediated real-time communication including anonymity, following the stream
of conversation, scalability with number of users, and abusive users.
While chatlike systems are possible using non-text media, the text version of chat
has the rather interesting aspect of having a direct transcript of the conversation,
which not only has long-term value but also allows for backward reference during the conversation, making it easier for people to drop in and still pick up on the ongoing discussion.
Groupware applications from Teamware, the U.S. Army, Chevron, and BP are
further illustrated in boxes 8.2 and 8.3.
Wikis
Wikis are web-based software that supports concepts such as open editing, which
allows multiple users to create and edit content on a web site (for more information,
see: http://en.Wikipedia.org/Wiki/Wiki). A wiki site grows and changes at the will
of the participants. People can add and edit pages at will, using a Word-like screen
without knowing any programming or HTML commands. More specifically, a wiki
is composed of web pages where people input information and then create hyperlinks to other or new pages for more details about a particular topic. Anyone can
edit any page and add, delete, or correct information. A search field at the bottom
of the page lets you enter a keyword for the information you want to find. Today
two types of wikis exist: public wikis and corporate wikis. Public wikis were developed first and are freewheeling forums with few controls. In the last year or two,
corporations have been harnessing the power of wikis to provide interactive forums for tracking projects and communicating with employees over their in-house intranets.
Box 8.2
An example: Teamware
Teamware Group, a Fujitsu subsidiary, implemented an interactive web community solution for the city of Kerava in Finland. The solution enhances communication between
and within the city managers, city board, city council, and other elected officials, and
offers them facilities to interact and distribute information regardless of time or location.
The objective of the system is to facilitate the daily work of the city administrators by
providing them with a new virtual means of interaction in addition to the traditional
meetings and sessions. “It has become more and more difficult for the city administrators
to take care of their duties within the normal working hours and premises. Therefore,
it is essential to provide them with facilities to communicate and obtain information
without the boundaries of time or location,” says IT manager Ari Sainio from the city of
Kerava.
The new system was built on the Teamware Pl@za platform and integrated with the
existing Teamware Office groupware solution, which means that now e-mail, city archives,
electronic calendars, and bulletin boards will be available for the city administrators
through a standard web browser. In order to enhance interaction between the city officials,
the system is augmented with discussion facilities where individuals can exchange
opinions and discuss different issues. Various archives and files are created for content
management purposes. Different user groups are provided with their own virtual workspaces that can be accessed only by authorized members. Thanks to Teamware Pl@za’s
decentralized and easy-to-use updating functionality, the city officials can update the pages
themselves.
An example is Wikipedia (http://en.Wikipedia.org/Wiki/Main_Page), a free encyclopedia written by literally thousands of people around the world. Wikis exist for
thousands of topics (http://www.worldwideWiki.et/Wiki/SwitchWiki). If one does not
exist for your favorite subject, you can start one on it and add it to the list.
Wikis support new types of communications by combining Internet applications
and web sites with human voices. That means people can collaborate online more
easily, whether they are working together on a brief or working with a realtor online
to tour office space in another city. Outside the office, it means customer service
representatives can interact with customers more readily, which should advance
e-commerce (Leuf and Cunningham 2001). Cunningham, a programmer, decided to
build the most minimal working database possible and started the first wiki in 1995.
The idea was to provide a simple web site where programmers could quickly and easily exchange information without waiting for a webmaster to update the site. He named the site wiki, after the quick little Wiki-Wiki shuttle buses in Hawaii.
Box 8.3
An example: U.S. Army/Chevron/BP
The Army’s after action review (AAR) is an excellent example of a process that ensures
lessons are learned after an event (Bhatt 2000). British Petroleum (BP) and Chevron have
introduced similar systems whereby they learn before, during, and after the undertaking
of a large project. Major cost savings have been realized by introducing these learning
processes. For example, Chevron introduced a lessons learned tool for their drilling processes. Every time they drill in a particular area, lessons are recorded. Next time drilling
takes place in a similar area, lessons learned during the last drilling operations are available. This results in fewer errors and less reinventing of the wheel. Chevron has also
recorded waste savings in their drilling operations.
The United States Air Force (USAF) is utilizing Open Text’s Livelink to manage its
Business Solutions Exchange (BSX), which involves integrating the people, process, and
policies of the USAF’s service contracting into a single system, paving the way for the
group to meet the Pentagon’s goal of a completely paper-free acquisition process. Prior to
installing Livelink, the USAF employed a variety of client-server based systems that had
difficulty managing this process across different geographic locations. With the new collaborative KM approach, the USAF has reduced the time spent from identifying the point
of need to completing a performance requirement document (PRD) from seven months
to eight weeks, a 70% reduction in processing time.
The USAF’s KM initiative is part of the Pentagon’s requirement to simplify and modernize the US Defense Department’s acquisition process in the area of contract writing,
administration, finance, and auditing. Since July 1998, the USAF has been using Livelink
on a variety of outsourcing projects. The first and largest project can be found at the
Maxwell Air Force Base in Alabama. The goal of the business solutions exchange (BSX)
process is to continually improve USAF business practices. BSX goes to work as soon as a
requirement is identified and a business strategy team is formed. The collaborative software
is used throughout the life cycle of the project, from requirements definition to contract
closeout, connecting a cross-functional team dispersed across a given base and the
command.. A team, often composed of people from six different locations within the US,
is formed to create a PRD and uses the collaborative software as its central knowledge
library to gather market research, establish an acquisition plan, record baseline costs,
eliminate regulatory constraints, draft requirements, and gather feedback from customers
and industry on the contract requirements. The BSX team works together throughout the
planning, execution, and supplier management phases. Teams use the public folders
(http://www.bsx.org) to gather feedback from industry on ways to improve existing
requirements documents. In addition, the public sites include process-oriented libraries of
best practices that are available to other agencies, whether or not they use the collaborative capabilities of Livelink.
A public wiki survives thanks to the initiative, honesty, and integrity of its users.
Sites can be vandalized, derogatory remarks—called flames—can be posted, and misinformation can be published. However, a vandalized site can be restored, a flame can
be erased, and information can be corrected by anyone who knows better. The community polices itself. Corporate wikis differ from public wikis in that they are more
secure and have many more navigation, usage, and help features. Corporate wikis are
used for project management and company communications as well as discussion
sites and knowledge databases. For example, a wiki can be established for a particular
project with the project team given access to update the status of tasks and add related
documents and spreadsheets. Its central location makes it easy to keep everyone
informed and up-to-date regardless of his or her home office, location or time zone.
A wiki is more reliable than continually e-mailing updates back and forth to the team
members. It is faster than e-mail since updates are available instantly and more efficient than e-mail since each team member does not have to maintain his or her own
copies. Managers like wikis because they can see what progress the team is making or
what issues it is facing without getting involved or raising concern (e.g., a new way
of doing project management reporting).
For security reasons, corporations usually buy wiki software, rather than lease space
on the Internet, and set up the wiki behind the company’s firewall as part of an
intranet or as an extranet if customers or vendors are allowed access. Also, corporations
look for wiki software that has authorization and password safeguards, roll-back versions for information to be restored to its former state, and easy upload capabilities
for documents and images. Some wikis notify users when new information is added,
an especially nice feature for corporate projects where fast responses are required.
Social Networking, Web 2.0, and KM 2.0
Social networking has rapidly become a part of everyday living and working, particularly for the Y or millennial generation (eMarketer 2008). As noted by Jones (2001, 2),
“knowledge management is inherently collaborative: thus a variety of collaboration
technologies can be used to support knowledge management practices.” Social networks are dynamic people-to-people networks that represent relationships between
participants. A social network can serve to delimit or identify a community of practice
as it models the interaction between people. Wladawsky-Berger (2005) notes that
social networks are “knowledge management done right” (p. 1), as they address similar aims: to solve problems, increase efficiency, and better achieve goals.
Social network analysis (SNA; see http://www.insna.org) is a social science research
tool that dates back to the 1970s and has increasingly become used in KM applications
(Durkheim 1964, Drucker 1989, Granovetter 1973, Lewin 1951). Valdis Krebs (2008)
defines SNA as the “mapping and measuring of relationships and flows between
people, groups, organizations, computers, or other information/knowledge processing
entities.” SNA can be used to identify communities and informal networks and to
analyze the knowledge flows (i.e., knowledge sharing, communication, and other
interaction) that occur within them (Brown and Duguid 1991). SNA is one of the ways
of identifying experts and expertise in order to develop an expertise locator system. The basic steps are to identify network members and to develop a survey tool (e.g., a questionnaire) to collect the required data on their exchange patterns. Next, the data are analyzed
using software such as Pajek (http://www.pajek.com) or UCINET (http://www
.analytictech.com) to identify patterns of interaction and emergent relationships. The
analyzed data can then be used to inform decision-making based on the objectives
(Scott 2000), for example, for change management, to establish a baseline in order to
later assess the effects of a technology introduction, or to improve upon the knowledge
flow and connections.
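As an illustration of this analysis step, the following sketch uses the open source networkx library (one option among several; Pajek and UCINET are the tools named above) to compute degree centrality over an invented who-asks-whom network:

```python
import networkx as nx

# Each edge records that two people exchange questions and answers
G = nx.Graph()
G.add_edges_from([
    ("alice", "bob"), ("alice", "carol"), ("alice", "dave"),
    ("bob", "carol"), ("eve", "dave"),
])

# Degree centrality flags people at the center of the knowledge flow
centrality = nx.degree_centrality(G)
for person, score in sorted(centrality.items(), key=lambda x: -x[1]):
    print(f"{person}: {score:.2f}")  # alice ranks highest: a likely hub/expert
```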
The combination of social networking, blogging, wikis, and other related technologies together define Web 2.0 or the next generation of the web. Web 2.0 is a concept
that began with an interactive conference session between Tim O’Reilly and Dale
Dougherty that in turn led to the development of the annual Web 2.0 conference
(O’Reilly 2009; http://en.oreilly.com/web2008/public/content/home). They defined
Web 2.0 as something without a hard boundary but rather a set of principles that
include:
• The web as a platform
• User control of your own data
• Services instead of packaged software
• An architecture of participation
• Cost-effective scalability
• Remixable data sources and data transformations
• Software that rises above the level of a single device
• Harnessing of collective intelligence
A popular way of defining Web 2.0 is a form of concept analysis—the listing
of examples for both Web 1.0 and Web 2.0. For example, Netscape is an example of
Web 1.0 whereas Google exemplifies Web 2.0. Microsoft Outlook e-mail is a Web 1.0
application whereas Gmail (http://www.gmail.com) is a Web 2.0 application. Other
Web 2.0 examples include eBay, a digital marketplace (http://www.ebay.com); BitTorrent, a free, open source file-sharing application for sharing large software and
media files (http://www.bittorrent.com); Wikipedia, a user-authored encyclopedia site,
(http://www.wikipedia.org); as well as folksonomies, viral marketing and open source
software sites. Many Web 2.0 sites contain RSS feeds, which allow someone to subscribe to a web page and be alerted to any changes. An RSS feed is much more reliable
than a link to what could be an ever-changing web site.
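As a rough illustration of how an RSS subscription turns an ever-changing page into reliable alerts, the following sketch polls a feed and reports new items. It assumes the third-party feedparser library, and the feed URL is hypothetical.

```python
# A minimal sketch of RSS-based change alerting using feedparser.
import time
import feedparser

FEED_URL = "http://example.org/km-news/rss"  # hypothetical feed

seen_ids = set()

def poll_once():
    feed = feedparser.parse(FEED_URL)
    for entry in feed.entries:
        # Fall back to the link when an entry has no stable id.
        entry_id = entry.get("id", entry.get("link"))
        if entry_id not in seen_ids:
            seen_ids.add(entry_id)
            print("New item:", entry.get("title", "(untitled)"))

while True:
    poll_once()
    time.sleep(3600)  # check the feed hourly instead of revisiting the page
```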
Finally, the harnessing of collective intelligence is a key attribute of Web 2.0, which means that the collective (i.e., the set of users) determines what is of value, what is valid, and what is important (Surowiecki 2004). The more people use a Web 2.0 site, the more the site automatically improves. A key feature of Web 2.0 sites is that the users of the site contribute its content.
IBM developed a social networking tool called Pass It Along (a free demonstration is available at http://www.ibm.com/developerworks/community/passitalong) to promote knowledge sharing and skills development. Pass It Along integrates knowledge management, social networking, and Web 2.0 concepts to help users share and apply information. Each user can decide how widely their content is to be shared and who they would like to collaborate with: for example, sharing with new hires, including external partners or not, or limiting access to a particular community of practice. Users can visually map out their knowledge assets so others can see them.
KM 2.0 is analogous to Web 2.0 and refers to a more people-centric approach to knowledge management. Companies are adopting KM 2.0 to varying degrees, mostly depending on how well their underlying culture promotes transparency rather than on control over, or the availability of, the underlying technologies. A surprising example is the Central Intelligence Agency (see the vignette in box 8.4). Other examples include IBM, where a large collaborative online brainstorming session called InnovationJam was held that included over 150,000 people (Dearstyne 2007). Participants were not only employees but also customers and business partners. The event ran for three days with different topics being addressed in different moderated forums. The best ideas generated were acknowledged and rewarded.
Lee and Lan (2007) suggest that traditional knowledge management (KM 1.0) is based on knowledge repositories, that is, on storing and preserving knowledge, but in a largely static fashion. KM 2.0 represents a new paradigm, and much like the core attributes listed for Web 2.0, the authors propose corresponding attributes for KM 2.0 (p. 50).
Box 8.4
An example: Intellipedia at the CIA
Web 2.0 technologies are enabling the CIA to share more information within the agency as well as with its intelligence counterparts (Wailgum 2008). The events of September 11, 2001, catalyzed a series of reforms in the intelligence community, especially when it became clear that key agencies were not able to connect the dots.
After 9/11, we asked ourselves: why was no one able to connect the dots? (David Ignatius, Associate Editor, The Washington Post)

Could 9/11 have been prevented? In a number of crucial cases, mishandled intelligence, bureaucratic tangles and legal hurdles blinded the CIA and the FBI to clues right in front of them. Individually, none of these was a smoking gun. But combined they were a four-alarm fire. (Frank 2004)
The CIA is well aware of the post-9/11 analyses and reports that described how sixteen government intelligence agencies were unable to puncture internal and external silos, and as a result critical information was not shared and was not aggregated to detect a pattern—and a substantial threat. The CIA’s CIO, Al Tarasiuk, introduced the notion of Web 2.0 and KM 2.0 into the sixty-one-year-old agency in the form of Intellipedia, modeled on Wikipedia. Intellipedia is a bottom-up system that allows all US analysts to share their information, their analyses, and even their insights with trusted peers over a secure network. The new system is essentially a wiki for knowledge sharing that was implemented in 2006.
There is no anonymity as users log on and are authenticated each time they use Intellipedia. There is a form of expertise locator system integrated within this system as users
can find out who has expertise on a particular topic, a particular country, and so forth.
After two years in operation, Intellipedia has over forty thousand registered users who
have made almost two million edits on the web pages (which number around three
hundred thousand). It is interesting to note that the most prolific user of Intellipedia is
an employee who is preparing to retire, which indicates that such systems may also play
a role in organizational memory and knowledge continuity (see chapter 11).
In the old Web 1.0 world, the content contained within Intellipedia would have been shared with a limited number of people, most likely through e-mail (which only served to add to employee information overload). Intellipedia defines and enables the US intelligence community and is a clear contrast to what prevailed before: knowledge sharing on a need-to-know basis, governed by status, hierarchical relationships, and formal authority. The major goal of Intellipedia is to enable collaboration across silos to help participants solve complex problems and to connect all of the known dots. This requires that participants speak the same language (i.e., share the same vocabulary and define all the dots in the same way). This new way of working also requires the motivation to share, which in turn entails a change in organizational culture (see chapter 7). The major challenge is not with the technology but with a change in the mind-set of individuals and the collective mind-set that prevails as the organizational culture.
Building on a theme of collaborative intelligence, the following features may be considered the objectives of knowledge content development via Web 2.0:
Contribution Every Internet user has the opportunity to freely provide their knowledge content to the relevant subject domains.
Sharing Knowledge contents are freely available to others. Secured mechanisms may be enforced to enable knowledge sharing among legitimate members within specific communities.
Collaboration Knowledge providers collaboratively create and maintain knowledge content. Internet users participating in the knowledge content can have conversations as a kind of social interaction.
Dynamic Knowledge contents are updated constantly to reflect the changing environment and situation.
Reliance Knowledge contribution should be based on trust between knowledge providers and domain experts.
Once again, the best approach is one of inclusion rather than mutual exclusivity. KM 1.0 is mainly focused on preserving valuable knowledge that has been created. KM 2.0 is mainly concerned with user participation, knowledge flow and sharing, and user-generated content with much more rapid feedback and revision of the knowledge. The two can coexist in much the same way as taxonomies and folksonomies can coexist. KM 2.0 is closer to the everyday operational concerns of knowledge workers and serves as an excellent framework for collaboration and conversation with others. KM 1.0 (as discussed in more detail in the next section) can then periodically access, assess, and incorporate the outputs of KM 2.0 and ensure that they are well preserved and well organized for future retrieval and reuse.
Networking Technologies
Networking technologies consist of intranets (intra-organizational networks), extranets (inter-organizational networks), knowledge repositories, knowledge portals, and web-based shared workspaces. Liebowitz and Beckman (1998) define knowledge repositories as an “on-line computer-based storehouse of expertise, knowledge, experiences, and documentation about a particular domain of expertise. In creating a knowledge repository, knowledge is collected, summarized, and integrated across sources.” Such repositories are sometimes referred to as experience bases or corporate memories. The repository can be filled with knowledge either by what Van Heijst, Van Der Spek, and Kruizinga (1997) call passive collection, where workers themselves recognize what knowledge has sufficient value to be stored in the repository, or by active collection, where some people in the organization scan communication processes to detect knowledge.
Davenport and Prusak (1998) distinguish three types of knowledge repositories:
• External knowledge repositories (such as competitive intelligence)
• Structured internal knowledge repositories (such as research reports, product-oriented market material)
• Informal internal knowledge repositories (such as lessons learned)
A knowledge repository differs from a data warehouse and an information reposi-
tory primarily in the nature of the content that is stored. Knowledge content will
typically consist of contextual, subjective, and fairly pragmatic content. Content in
knowledge repositories tends to be unstructured (e.g., works in progress, draft reports,
presentations). Knowledge repositories will also tend to be more dynamic than other
types of architectures because the knowledge content will be continually updated and
splintered into varying perspectives to serve a wide variety of different users and user
contexts. To this end, repositories typically end up being a series of linked mini-portals
distributed across an organization.
Most repositories will contain the following elements (adapted from Tiwana 2000):
• Declarative knowledge (e.g., concepts, categories, definitions, assumptions—knowledge of what)
• Procedural knowledge (e.g., processes, events, activities, actions, manuals—knowledge of how or know-how)
• Causal knowledge (e.g., rationale for decisions, for rejected decisions—knowledge of why)
• Context (e.g., circumstances of decisions, informal knowledge, what is and what is not done, accepted, etc.—knowledge of care-why)
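As a minimal sketch (not from Tiwana), the four elements above could be captured as explicit fields of a repository entry, so that each contribution records not only the what and how but also the why and the care-why. All class and field names here are illustrative.

```python
# A hypothetical repository-entry data structure reflecting the four
# knowledge types listed above.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RepositoryEntry:
    title: str
    declarative: str   # concepts, definitions: knowledge of what
    procedural: str    # processes, how-to steps: know-how
    causal: str        # rationale for the decision: knowledge of why
    context: str       # circumstances, informal notes: care-why
    contributors: list[str] = field(default_factory=list)
    last_updated: date = field(default_factory=date.today)

entry = RepositoryEntry(
    title="Supplier onboarding",
    declarative="Definition of an approved supplier",
    procedural="Steps to register and vet a new supplier",
    causal="Why two references are required (past fraud incident)",
    context="Step 3 is skipped for returning suppliers; informal rule",
    contributors=["procurement CoP"],
)
```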
The knowledge repository is the one-stop shop where all organizational users can access all historical, current, and projected valuable knowledge content. All users should be able to connect to and annotate content, connect to others who have come into contact with the content, as well as contribute content of their own. The interface to the repository or repositories should be user-friendly, seamless, and transparent.
Personalization, in the form of personalized news services delivered through push technologies and mini-portals for each community of practice, will help maintain the repository in a manageable state. To this end, the use of a term such as knowledge warehouse should be strongly discouraged; the knowledge repository should instead be visualized as a lens that is placed on top of the data and information stores of the organization. The access and application of the content of a repository should be as directly linked to professional practice and concrete actions as possible.
The knowledge repository typically involves content management software tools such as a Lotus Notes platform and will be run as an intranet within the organization with appropriate privacy and security measures in place. An example is described in box 8.5.
Box 8.5
An example: Price Waterhouse Coopers (PWC)
Price Waterhouse Coopers focused on sharing knowledge across what had previously been firm boundaries following the merger of Price Waterhouse and Coopers & Lybrand. The chief knowledge officer, Ellen Knapp, supported this effort by putting into place the KnowledgeCurve, where employees can find a repository of best practices, consulting methodologies, tax and audit rules, news services, online training, directories of experts, and more, plus links to specialized sites for various industries or skills. The site gets eighteen million hits a month, mostly from workers downloading forms or checking news, but also from employees looking things up. Yet there is a feeling that it is underused. When looking for expertise, most people still go down the hall.
In parallel, a British-based PWC consultant and his colleagues set up a network where
they could be more innovative. Over five months they set up a Lotus Notes e-mail list
with no rules, no moderator, and no agenda other than what is set by the messages people
sent. Any employee was able to join. Kraken, as it came to be known, now has five hundred
members and although it still has unofficial status, it has become the premier forum for
sharing. As an analogy, Kraken is to KnowledgeCurve what Carlos was to Eureka. On a
busy day, members may get fifty Kraken messages but they are welcomed because they are
relevant and useful.
What are some of the reasons for this grassroots CoP success over corporate top-down
KM systems? It is demand-driven (“does anyone know…”); it gets at tacit knowledge; it
allows fuzzy questions rather than structured database queries; it is part of the everyday
routine; and it is full of opinions—points of view rather than dry facts. KnowledgeCurve
preserves explicit knowledge—Kraken enables the sharing of tacit knowledge. Kraken is
about learning; KnowledgeCurve is about teaching. You cannot have one without the
other.
Knowledge portals provide access to diverse enterprise content, communities, expertise, and internal and external services and information (Collins 2003; Firestone 2003). Portals are a means of storing and disseminating organizational knowledge such as business processes, policies, procedures, documents, and other codified knowledge. They will typically feature searching capabilities through content as well as through the taxonomy (categorized content). The option to receive personalized content through push technologies as well as through pull technologies (intelligent agents) may exist. Communities can be accessed via the portal for communication and collaboration purposes. There may be a number of services that users can subscribe to as well as web-based learning modules on selected topics and professional practices. The critical content will consist of the best practices and lessons learned that have been accumulated over the years and to which many organizational members have added value.
The purpose of a portal is to aggregate content from a variety of sources into a one-stop shop for relevant content. Portals enable the organization to access internal and external knowledge that can be consolidated, analyzed, and used as inputs to decision making. Ideally, portals will take into account the different needs of users and the different sorts of knowledge work they carry out in order to provide the best fit with both the content and the format in which the content is presented (the portal interface). Knowledge portals link people, processes, and valuable knowledge content and provide the organizational glue or common thread that serves to support knowledge workers. First-generation portals were essentially a means of broadcasting information to all organizational members. Today, they have evolved into sophisticated shared workspaces where knowledge workers can not only contribute and share content but also acquire and apply valuable organizational knowledge. Knowledge portals support knowledge creation, sharing, and use by allowing a high level of bidirectional interaction with users.
Portals serve to promote knowledge creation by providing a common virtual space
where knowledge workers can contribute their knowledge to organizational memory.
Portals promote knowledge sharing by providing links to other organizational members
through expertise location systems. Communities of practice will typically have a
dedicated space for their members on the organizational portal and their own membership location system included in the virtual workspace. The portal organizes valuable knowledge content using taxonomies or classification schemes to store both
structured (e.g., documents) and unstructured content (e.g., stories, lessons learned,
and best practices). Finally, portals support knowledge acquisition and application by
providing access to the accumulated knowledge, know-how, experience, and expertise
of all those who have worked within that organization. An application is described in
box 8.6.
Box 8.6
An example: KPMG
KPMG International implemented KWORLD, an advanced global knowledge management
system. KWORLD, an online messaging, collaboration, and knowledge-sharing platform,
is reportedly the first system of its kind built entirely from standard Microsoft components—Microsoft Windows NT Server, including Microsoft Exchange, Site Server, and
Microsoft Office, Outlook, and Internet Explorer. KWORLD is KPMG’s digital nervous
system based on the Microsoft concept.
KPMG invested over one year and $100 million in developing this universally accessible knowledge-sharing environment, which allows its nearly one hundred thousand professional workers to conduct active conferences and public exchanges, locate customized and filtered external and internal news, and access global- and country-specific firm information. As acknowledged by Microsoft, KPMG is one of only five organizations to embark on its fast-track program to exploit fully the power of the web browser, integrate Microsoft-based messaging, collaboration, and knowledge-sharing applications, and push current web technology to the “limit.” Knowledge is content in context, and KPMG’s global communities of practice—who marry knowledge about complex services to specific industries—determine KWORLD’s contextual frames. KWORLD brings qualified internal content and filtered external content to each community with a click. KPMG foresees developing KWORLD extranets to make KPMG a virtual extension of its clients.
Mashups were discussed in an earlier section as a form of portal (see the previous section on Knowledge Creation and Codification Tools). Both mashups and portals aggregate content coming from different sources. However, there are some significant differences between the two tools. Portals are a somewhat older, more established tool that serves to aggregate vetted and validated content to be stored for future use in an organization. The purpose of a portal is to preserve organizational knowledge and to make it available to all employees. Portals are well defined, often adhere to standards, and are updated according to an established schedule, and only by those authorized to do so. A portal is thus more formal in some ways. A mashup, on the other hand, is more of a Web 2.0 application. Users tend to have complete control and autonomy in what they choose to aggregate. This is often shared with others in a limited way (e.g., within their own community of practice). Mashups may have a limited life span as they serve a specific purpose, such as putting together a presentation. Mashups are not necessarily formalized, nor do they need to be centralized in order to be useful (Wong and Hong 2007).
Knowledge Acquisition and Application Tools
A number of technologies play an important role in how successful knowledge workers are in acquiring and applying the knowledge content that is made available to them by the organization. E-learning systems provide support for learning, comprehension, and better understanding of the new knowledge to be acquired. Tools such as EPSS, expert systems, and decision support systems (DSS) help knowledge workers to better apply the knowledge on the job. Adaptive technologies can be used to personalize how knowledge content is pushed or pulled. Recommender systems can detect similarities or affinities between different types of users and recommend additional content that similar users have found useful to acquire and apply. Knowledge maps and other visualization tools can help knowledge workers acquire and apply valuable knowledge more easily. A number of tools derived from artificial intelligence can at least partially automate processes such as text summarization, content classification, and content selection.
E-learning applications started out as computer-based training or tutoring systems (CBT) and web-based training (WBT) applications. The common feature is the online learning environment provided for learners. Courses can now be delivered via the web or the company intranet. The particular knowledge and know-how to be acquired can be scoped and delivered in a timely fashion in order to support knowledge acquisition. E-learning technologies also greatly increase the range of knowledge dissemination, as knowledge that has been captured and coded or packaged as e-learning can easily be made available to all organizational members, regardless of any time or distance constraints.
Decision support systems are designed to facilitate group decision making. They provide tools for brainstorming, critiquing ideas, putting weights and probabilities on events and alternatives, and voting. Such systems presumably enable more rational and even-handed decisions. Primarily designed to facilitate meetings, they encourage equal participation by, for instance, providing anonymity or enforcing turn taking.
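A minimal sketch of the weighting mechanism such systems provide: alternatives are scored against weighted criteria and ranked, making the basis of the group's decision explicit. The criteria, weights, and scores below are purely illustrative.

```python
# A toy weighted-scoring decision aid of the kind described above.
criteria_weights = {"cost": 0.5, "risk": 0.3, "speed": 0.2}

# Participants score alternatives from 1 (poor) to 5 (good) per criterion.
scores = {
    "build in-house": {"cost": 2, "risk": 4, "speed": 2},
    "buy off the shelf": {"cost": 4, "risk": 3, "speed": 5},
}

def weighted_score(alternative):
    # Sum each criterion score multiplied by the criterion's weight.
    return sum(criteria_weights[c] * s for c, s in scores[alternative].items())

for alt in sorted(scores, key=weighted_score, reverse=True):
    print(f"{alt}: {weighted_score(alt):.2f}")
```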
Visualization technologies and knowledge mapping are good ways of synthesizing large amounts of complex content in order to make it easier for knowledge workers to acquire and apply it.
Artificial intelligence (AI) research addressed the challenges of capturing, representing, and applying knowledge long before the term knowledge management entered popular usage. AI developed automated reasoning systems that could make use of explicit knowledge representations in order to provide expert-level advice, troubleshooting, and other forms of support to knowledge workers. Expert systems are decision support systems that do not execute an a priori program but instead deduce or infer a conclusion based on the inputs provided. Natural language processing also grew out of AI research. Linguistic technologies resulted in the automation of parsing (breaking text into subsections) and the analysis of text. Common applications today are voice interfaces or natural language queries that can be typed in to search databases. Similar AI technologies can also be applied to analyze and summarize text or to automatically classify content (e.g., automated taxonomy tools). Many of the automated reasoning capabilities studied in AI research were encapsulated in autonomous pieces of software code, called intelligent agents or software robots (softbots). These agents act as proxies for knowledge workers and can be tasked with information searching, retrieving, and filtering.
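The inference behavior described above can be illustrated with a toy forward-chaining rule engine: instead of following a fixed program, it repeatedly applies if-then rules to the facts supplied until no new conclusions emerge. This is a generic sketch, not any particular expert system shell, and the troubleshooting rules are invented.

```python
# A toy forward-chaining inference loop: each rule is (conditions, conclusion).
rules = [
    ({"printer offline", "cable connected"}, "check print spooler"),
    ({"check print spooler", "spooler stopped"}, "restart spooler service"),
]

def infer(facts):
    facts = set(facts)
    changed = True
    while changed:  # keep firing rules until nothing new is deduced
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"printer offline", "cable connected", "spooler stopped"}))
# The conclusion "restart spooler service" is deduced in two chained steps.
```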
Intelligent Filtering Tools
Intelligent agents can generally be defined as software programs that assist their user and act on his or her behalf: for example, a computer program that helps you gather news, acts autonomously and on its own initiative, has intelligence and can learn, and improves its performance in executing its tasks (Wooldridge and Jennings 1995). They are autonomous computer programs whose environment dynamically affects their behavior and strategy for problem solving. They help users deal with information. Most agents are Internet based, that is, software programs inhabiting the Net and performing their functions there.
The following features are necessary to define a true intelligent agent (Khoo, Tor, and Lee 1998):
Autonomy The ability to do most of their tasks without any direct assistance from an outside source, which includes humans and other agents, while controlling their own actions and states.
Social ability The ability to interact, when they deem it appropriate, with other software agents and humans.
Responsiveness The ability to respond in a timely fashion to perceived changes in the environment, including changes in the physical world, other agents, or the Internet.
Personalization The ability to adapt to the user’s needs by learning from how the user reacts to the agent’s performance.
Initiative The ability of an agent to take initiatives by itself, autonomously (without a specific instruction from its user) and spontaneously, often on a periodic basis, which makes agents very helpful and time-saving tools.
Adaptivity The capacity to change and improve according to the experiences accumulated. This has to do with memory and learning. An agent learns from its user and progressively improves in performing its tasks. The most experimental bots even develop their own personalities and make decisions based upon past experiences.
Cooperation The interactivity between agent and user is fundamentally different from the one-way working of ordinary software.
There are many knowledge management applications that make use of intelligent agents (e.g., see Elst et al. 2004). These include personalized information management (such as filtering e-mail), electronic commerce (such as locating information for purchasing and buying), and management of complex commercial and industrial processes (such as scheduling appointments and air traffic control). These tasks and applications can generally be grouped into five categories (Khoo, Tor, and Lee 1998):
Watcher agents Look for specific information.
Learning agents Tailor themselves to an individual’s preferences by learning from the user’s past behavior.
Shopping agents Compare prices to find “the best price for an item.”
Information retrieval agents Help the user to “search for information in an intelligent fashion.”
Helper agents Perform tasks autonomously without human interaction.
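The first of these categories, the watcher agent, can be sketched in a few lines: it scans a stream of incoming documents for topics its user has registered and raises an alert on a match. The topics and documents below are illustrative, and a real agent would add learning and scheduling.

```python
# A toy watcher agent: alert when registered topics appear in new documents.
watch_topics = {"data mining", "intranet"}

def watch(documents):
    for doc_id, text in documents:
        hits = {t for t in watch_topics if t in text.lower()}
        if hits:
            print(f"Alert: document {doc_id} mentions {sorted(hits)}")

watch([
    ("d1", "Quarterly report on intranet usage"),
    ("d2", "Cafeteria menu for next week"),
    ("d3", "New data mining results from the sales warehouse"),
])
```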
In the age of computers, information is readily available on the Internet, whether it is useful or useless. There is so much data available that we often claim to be overloaded with information. Having too much data can cause as much trouble as having no data, as we must sift through so much information to get what we need. We can divide this information overload problem into two categories:
Information filtering We must go through an enormous amount of information to find the small portion that is relevant to us.
Information gathering There is not enough information available to us and we have to search long and hard to find what we need.
Information filtering is a particularly important function in KM, as users need a way of filtering this flood of data into something more manageable. Knowledge workers (such as managers, technical professionals, and marketing personnel) need information in a timely manner, as it can greatly affect their success. Tasks that are redundant or routine need to be minimized so that individuals can spend their time more productively (Roesler and Hawkins 1994).
Some companies receive so much e-mail that they have to employ clerical workers to sift through the flood of e-mail, answering basic queries and forwarding others to specialized workers. Others use intelligent filtering software such as GrapeVine for Lotus, which reads a pre-established knowledge chart to determine who should receive what mail. Intelligent agent services can supplement but not replace the value of edited information. As information becomes more available, it becomes more and more crucial to have strong editors filter that information (Webb 1995). There is so much content out there that the tools that filter content are going to be as important as the content itself (Wingfield 1995). As Rutherford Rogers stated, “we are drowning in information but starved for knowledge” (Rogers 1985).
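The GrapeVine-style approach can be sketched as a lookup against a knowledge chart that maps topics to the workers who should receive them; unmatched mail falls back to a human sorter. The chart, addresses, and subjects below are invented for illustration.

```python
# A toy knowledge chart mapping topics to the teams who handle them.
knowledge_chart = {
    "tax": "tax-team@example.com",
    "audit": "audit-team@example.com",
    "benefits": "hr-team@example.com",
}

def route(message_subject):
    subject = message_subject.lower()
    recipients = [addr for topic, addr in knowledge_chart.items()
                  if topic in subject]
    return recipients or ["triage@example.com"]  # fall back to a human sorter

print(route("Question about audit procedures"))  # routed to the audit team
print(route("Where do I park my bicycle?"))      # falls back to triage
```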
An end user who is required to constantly direct the management process is a contributing factor to information overload. Having agents take over tasks such as searching and filtering can ultimately reduce the information overload to a degree. Maes (1994) describes an electronic mail filtering agent called Maxims. Maxims is a type of learning agent. The program learns to prioritize, delete, forward, sort, and archive mail messages on behalf of a user. The program monitors the user and treats the actions the user takes as lessons on what to do. Depending upon threshold limits that are constantly updated, Maxims will guess what the user will do. Upon surpassing a degree of certainty, it will start to suggest to the user what to do.
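A minimal sketch of that threshold behavior follows (not Maxims itself, whose internals are not described here): the agent tallies what the user did with mail from each sender and, once its best guess exceeds a confidence threshold, offers it as a suggestion.

```python
# A toy learning mail agent with a confidence threshold.
from collections import Counter, defaultdict

history = defaultdict(Counter)  # sender -> Counter of observed user actions
THRESHOLD = 0.8

def observe(sender, action):
    history[sender][action] += 1

def suggest(sender):
    actions = history[sender]
    if not actions:
        return None
    action, count = actions.most_common(1)[0]
    confidence = count / sum(actions.values())
    # Only suggest once the agent is sufficiently certain.
    return action if confidence >= THRESHOLD else None

for _ in range(4):
    observe("newsletter@example.com", "archive")
observe("newsletter@example.com", "read")

print(suggest("newsletter@example.com"))  # "archive" at 0.8 confidence
```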
Maes (1994) also describes an example of an Internet news-filtering program called NewT. This program takes as input a stream of Usenet news articles and gives as output a subset of these articles that is recommended for the user to read. The user gives NewT examples of articles that would and would not be read, and NewT then retrieves articles. The user gives feedback about these articles, further training NewT on which articles to retrieve and which to leave out. NewT identifies words of interest in an article by performing a full-text analysis using the vector space model for documents. Some additional examples of information filtering agents are shown in table 8.3.
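A rough sketch of vector space filtering in the spirit of NewT follows, using scikit-learn's TF-IDF vectorizer as a stand-in (an assumption; the original system is not described at this level of detail). Candidate articles are scored by their cosine similarity to articles the user said they would read.

```python
# Vector-space relevance scoring: TF-IDF vectors plus cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

liked = [
    "new results in machine learning for text filtering",
    "intelligent agents that filter usenet news",
]
candidates = [
    "a software agent that learns to filter news articles",
    "recipes for summer salads",
]

vectorizer = TfidfVectorizer()
liked_vecs = vectorizer.fit_transform(liked)
candidate_vecs = vectorizer.transform(candidates)

# Score each candidate by its best similarity to any liked article.
sims = cosine_similarity(candidate_vecs, liked_vecs).max(axis=1)
for text, score in sorted(zip(candidates, sims), key=lambda p: -p[1]):
    print(f"{score:.2f}  {text}")
```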
News agents are designed to create custom newspapers from a huge number of web newspapers throughout the world. The trend in this field is toward autonomous, personalized, adaptive, and very smart agents that surf the net, newsgroups, databases, and so on, and deliver selected information to their users. “Push” technology is closely connected to the development of news bots; it consists basically of the delivery of information on the web that appears to be initiated by the information server rather than by the client. Some examples are shown in table 8.4.
Table 8.3
Sample information filtering agents

Search pad: An advanced bot that finds and categorizes relevant information based on the user’s preferences, also learning from them (http://www.searchpad.com)
Copernic: An agent that carries out net searches by simultaneously consulting the most important search engines on the web (http://copernic.com)
Citizen 1: Finds thousands of the best databases on the Internet and indexes them into a hierarchy of files, making the Internet look like an extension of a PC file system (http://www.download.com/PC/Result/TitleDetail/0,4,0-21278-g.html)
NetAttachePro v1.0: A “second generation web agent” that features a powerful information-filtering intelligent agent to organize off-line browsing (http://www.tympani.com/)
Table 8.4
Examples of personalized news services

myCNN: Personalized news service (http://my.cnn.com)
Excite News Tracker: Pulls information from a collection of databases (http://nt.excite.com)
Infoseek Personal News: Personalized news service (http://www.infoseek.com/news?pg=personalize.html)
Dogpile: Fast, efficient news service that draws upon a large database for its searches (http://www.dogpile.com)
Information overload is a problem of the world today, but intelligent agents help reduce this problem. Using them to filter the oncoming traffic of the information highway can help reduce cost, effort, and time. Yet the development of intelligent agents is still in its infancy. As they gain in popularity and use, we can expect to see more sophisticated and better-developed intelligent agents.
Information studies research has studied information seeking behavior for over five decades now, and this research can serve as an excellent theoretical basis for the study of the Internet as an information source and of intelligent agents as mediators in this digital environment (e.g., Kuhlthau 1991, 1993; Rasmussen, Pejtersen, and Goodstein 1994; Spink 1997; Wilson 1981, 1994, 1999). Detlor (2003) used a case study to explore how knowledge workers made use of Internet-based information systems and found that information studies theory provides an appropriate framework for examining Internet-based information seeking behaviors. Detlor, Sproule, and Gupta (2003) made use of a similar conceptual framework to explore goal-directed behavior in online shopping environments. Choo, Detlor, and Turnbull (2000a) investigated how knowledge workers use the web to find information external to their organizations as part of their daily work life. A typology of different complementary modes of using the web as an information source was identified and described (e.g., formal search, informal search).
Detlor (2004) adopted an information vantage point that views enterprise knowledge portals as more than tools to merely deliver content. He instead sees them as shared workspaces that can facilitate communication and collaboration among knowledge workers. Intelligent agents can play a significant role in improving the interaction between knowledge workers and knowledge portals for the successful completion of everyday work tasks. Empirical research studies on information seeking help define a web use model based on information seeking motives and modes. The advantage of using a theoretical framework as a starting point is that online behavior and preferences can be better understood, explained, and predicted. These online behavioral preferences can then be used to better design both online environments and mediators such as intelligent agents.
Adaptive Technologies
Adaptive technologies are used to better target content to a specific knowledge worker or to a specific group of knowledge workers who share common work needs. Customization refers to the knowledge worker manually changing their knowledge environment: for example, selecting user preferences to change the desktop interface, specifying certain requirements for the content to be provided to them (language, format), or subscribing to certain news or listserv services.
Personalization, on the other hand, refers to automatically changing content and interfaces based on observed and analyzed behaviors of the intended end user. For example, many MS Office applications offer the option of dynamically reordering pull-down menu items based on frequency of usage (the ones used most often are displayed at the top). One way of automatically personalizing knowledge acquisition makes use of recommender systems. Recommendations regarding content that is likely to be considered useful and relevant by a given knowledge worker may be based on a user profile of that knowledge worker (e.g., with themes checked off), or the recommendation may be based on affinity groups. Affinity groups make use of similarity analysis of users in order to develop groups of individuals who appear to share the same interests. Amazon, for example, uses affinity groups: after a visitor orders a book online, the site provides information on related books that others who bought the same book have also purchased.
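The affinity-group idea can be sketched directly: users who acquired the same item are treated as similar, and the other items in their baskets become the recommendations. The purchase data below are invented for illustration; production recommender systems use far more sophisticated similarity analysis.

```python
# A toy "people who bought X also bought Y" recommender.
from collections import Counter

purchases = {
    "u1": {"km-handbook", "sna-primer"},
    "u2": {"km-handbook", "sna-primer", "wiki-guide"},
    "u3": {"km-handbook", "wiki-guide"},
    "u4": {"cookbook"},
}

def recommend(item, top_n=2):
    counts = Counter()
    for basket in purchases.values():
        if item in basket:
            # Count the co-purchased items, excluding the item itself.
            counts.update(basket - {item})
    return [i for i, _ in counts.most_common(top_n)]

print(recommend("km-handbook"))  # e.g., ['sna-primer', 'wiki-guide']
```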
Communities of practice are affinity groups to some extent. Personalization technologies are often used to target or push certain types of content that are of interest to a given community. Community profiles can be established just like individual profiles and used in the same manner in order to better adapt content and interfaces to the community members.
Strategic Implications of KM Tools and Techniques
Historically, the IT horse has always been placed before the KM carriage. It is crucial
to think of KM tools in strategic terms. It is often said that if we hold a hammer in
our hand, then all the problems we see look very much like nails. It is important to
avoid this bias in knowledge management. Tools and techniques are a means and not
an end. The business objectives must first be clearly identified and a consensus reached
on priority application areas to be addressed. For example, an initial KM application
will typically be some form of content management system on an internally managed
intranet site. This is a good building block for subsequent applications, such as yellow
pages or expertise finders and groupware tools to enable newly connected knowledge
workers to continue to work together. An illustration is provided in box 8.7.
A number of the techniques presented here address the phenomenon of emergence
that can help discover existing valuable knowledge, experts, communities of practice,
and other valuable intellectual assets that exist within an organization. Once this is
done, the intellectual assets can be better accessed, leveraged, and made use of. KM
tools and techniques have an important enabling role in ensuring the success of KM
applications.
Box 8.7
An example: Mercedes-Benz
The Mercedes-Benz Customer Assistance Center in Maastricht, The Netherlands, serves as
a central customer contact point for the whole of Europe, handling all customer needs in
seventeen European countries, in twelve languages, twenty-four hours a day, 365 days a
year. In order to share knowledge of product information, technical information, and
business procedures as well as sample letters, FAQs, and best practices, a web-based knowledge management solution was developed for Mercedes-Benz by CMG, a leading European
IT services business. Called BRAiN (backbone repository for archiving information), this
KM-based IT solution enables Mercedes-Benz Customer Assistance Center employees to
share and retrieve knowledge through the company’s corporate intranet. Full text searching and dynamic knowledge maps allow users to navigate intuitively to the information
needed. Direct search facilities enable quick retrieval of all information related to a specific
vehicle, country, or market, and have been fine-tuned to support business needs. Web
technology facilitated a quick rollout within the organization and helps to minimize
maintenance. Attention was paid to all business aspects throughout the project phases. A
staged business approach, supported with incremental system development (RAD, rapid
application development), was applied. Both technical and organizational goals were
identified at each stage. Procedures were defined for sharing knowledge, and these were
directly supported by the knowledge management system. BRAiN offers the possibility to
identify knowledge users, publishers, advanced publishers, and knowledge administrators,
each with their own rights and authorities.
Practical Implications of KM Tools and Techniques
A number of techniques and tools, while never having been specifically developed for
or targeted to KM applications, have proven to be quite useful. A pragmatic toolkit
approach is needed for KM as there is no single end-to-end solution that can be simply
bought “off the shelf” in order to address all the critical dimensions of a knowledge
management initiative. It is therefore important to understand what is out there
already and what some of the new emerging tools are in order to adapt them and
make use of them for KM purposes.
Key Points
• Content creation and management tools are used to structure and organize knowledge content for ease of retrieval and maintenance.
• Groupware and other collaboration tools are essential enablers of knowledge flow and knowledge sharing activities among personnel.
• Data mining and knowledge discovery techniques can be used to discover or identify emergent patterns that could not have otherwise been detected. Some of these may provide valuable insights.
• Intelligent filtering agents are a KM technology that can help address the challenges of information overload by selecting relevant content and delivering it in a just-in-time and just-enough format.
• A knowledge repository will often be the most used and most visible aspect of a KM technology. What is important is not so much the containers but the content and how this content will be managed.
• Knowledge management technologies help support the emergent phenomena involved in the creation, sharing, and application of valuable knowledge assets.
Discussion Points
1. Discuss the pros and cons of the major technologies used in:
a. The knowledge creation and capture phase.
b. The knowledge sharing and dissemination phase.
c. The knowledge acquisition and application phase.
2. Data mining technologies can be used on a number of different types of knowledge
content. What are the major categories and what sorts of patterns would this technology detect?
3. Describe an application of blog technology within an organization. What potential
benefits would accrue to the individual, the community of practice, and to the organization as a whole if blogs were implemented?
4. How would you categorize the different forms of groupware or collaboration technologies? What sort of criteria would you make use of in order to determine when
and where each type would be the best means of sharing and disseminating knowledge? How would you adopt a cost-benefit approach to such a technology selection
decision?
5. What role can a wiki play in promoting group collaboration? What advantages does
a wiki offer when compared to a discussion forum?
6. What role is played by e-learning tools in knowledge management?
7. How can intelligent agents help knowledge workers find relevant knowledge
content?
8. Describe how you would attempt to accommodate different user skill levels and
expectations in the same organization, in particular, what type of tools would be
recommended for the baby boomer versus the millennial generation of technology
users?
9. Select one new emerging technology and list its potential uses for knowledge management. Make the connection between what the technology offers and each phase of the KM cycle. For example, are some tools better suited to knowledge capture or to knowledge sharing?
10. Select any KM technology and describe how it may be applied at the individual, group, and organizational levels. Would these levels require different degrees of standardization? Maintenance? Training?
References
Bhatt, D. 2000. EFQM: Excellence model and knowledge management implications. http://www.eknowledgecenter.com/articles/1010/1010.htm (accessed June 4, 2010).
Blood, R. 2002. The weblog handbook: Practical advice on creating and maintaining your blog. Cambridge, MA: Perseus Publishing.
Bradley, A. 2008. Twitter and knowledge management: Synergy or oxymoron? http://blogs.gartner.com/anthony_bradley/2008/09/29/twitter-and-knowledge-management-synergy-or-oxymoron/ (accessed September 29, 2008).
Brown, J., and P. Duguid. 1991. Organizational learning and communities of practice: Toward a unified view of working, learning, and innovation. Organization Science 2 (1): 40–57.
Choo, C. W., B. Detlor, and D. Turnbull. 2000a. Working the web: An empirical model of web use. Proceedings of HICSS 2000, http…