In the late 1990s and early 2000s, “Knowledge Management” (KM) was all the rage. Companies invested millions in enterprise content management (ECM) systems and teams of KM practitioners. It was believed that codifying all knowledge assets across the enterprise would lead to new insights and higher levels of innovation. People would simply search a central repository and solve their most complex technical problems through the transfer of others’ learnings. This effort, however, was not fruitful for the vast majority of organizations that invested. Most of these initiatives failed, and failed miserably. An over-reliance on technology and a lack of cultural transformation doomed these efforts. Yes, there were some success stories; but for many, the concept of “knowledge management” became associated with an IT project gone bad.
Fast-forward to 2014, and what are many scientific organizations talking about in a post-electronic laboratory notebook (ELN) environment? Yes, knowledge management. ELN was supposed to bring a new level of knowledge organization and search through the elimination of unsearchable paper notebooks. However, due to legal requirements, many companies do not allow conclusions and commentary in the documentation of experiments. In addition, meta-experiment analysis is rarely documented in the ELN. As a result, the true “knowledge” gained from the performance of studies is documented somewhere else. For non-regulated workflows, this often means PowerPoints, technical reports, memos and monthly summaries stored across unorganized servers, SharePoint sites, eRooms and client PCs.
The ad hoc management of these files makes knowledge retrieval extremely difficult. In many cases, this is self-inflicted: the lack of internal discipline to govern file organization creates much of the problem. The governance of SharePoint sites across the enterprise is particularly poor at most companies. It is hard to get users to agree on a taxonomy, let alone standard vocabularies for tagging files to enable effective retrieval. Since the user community cannot coalesce around a communal approach to how content should be managed, they decide it is an IT problem to solve. The cries for IT to do something about knowledge management are growing louder and louder from small to large organizations. The wish for “one place to put all our data” is the common objective, just as it was 15 years ago.
When you distill the demand for knowledge management, users are primarily asking for effective content management and search. Thankfully, today’s technology is vastly superior to what it was 15 years ago. Previously, it was incredibly difficult to search across both unstructured and structured data. The number of options for data federation and virtualization was limited. Now, there are endless technical options, with open source tools like Hadoop, Solr and Sphinx and vendor products from companies such as Cambridge Semantics, Sinequa, Thomson Reuters and Waters.
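The core mechanism behind full-text search engines such as Solr and Sphinx is the inverted index: a map from each term to the documents containing it. The sketch below illustrates the concept in miniature; the document identifiers and text are hypothetical, and real engines add tokenization, ranking and scale far beyond this toy.

```python
from collections import defaultdict
import re

def build_index(docs):
    """Map each term to the set of document IDs containing it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in re.findall(r"[a-z0-9]+", text.lower()):
            index[term].add(doc_id)
    return index

def search(index, query):
    """Return IDs of documents containing every query term (AND search)."""
    terms = re.findall(r"[a-z0-9]+", query.lower())
    if not terms:
        return set()
    results = index.get(terms[0], set()).copy()
    for term in terms[1:]:
        results &= index.get(term, set())
    return results

# Hypothetical knowledge assets: reports and memos scattered across servers
docs = {
    "report-001": "Validated HPLC assay methodology for compound screening",
    "memo-014": "Monthly summary of assay results and screening throughput",
    "report-002": "Stability study conclusions for the lead compound",
}

index = build_index(docs)
print(search(index, "assay screening"))  # documents containing both terms
```

Note that the index is only as useful as the text fed into it, which is why curation and metadata standards matter as much as the engine itself.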
Nevertheless, how will a lack of governance for something as simple as SharePoint be solved with a much more complex architecture? Do users really desire true knowledge management and the cultural transformation it entails? Does senior management really buy in to make the leap? This reliance on IT to “solve” KM reminds me of one of Yogi Berra’s famous quotes: “It’s like déjà vu all over again.”
What is Knowledge Management?
Back in 1998, during the peak of its first hype cycle, KM was said to be “the process of capturing, distributing and effectively using knowledge.”1 This definition, however simple, failed to consider the broader implications of how knowledge is created and shared. It is no wonder people viewed KM as an IT problem when it is defined mainly as a capture-and-sharing concept. Today, KM tends to be viewed more holistically, as a process for innovation and continuous improvement. To move beyond the simple definition, the categories of knowledge and the process of creation must be understood.
There are two generally accepted categories of knowledge. Explicit knowledge is knowledge that can be easily written down and transferred to others. It is formal, such as a validated assay methodology. Tacit knowledge is difficult to put into written form; it is occasionally called experiential knowledge. Skills learned from years of medical practice are an example of tacit knowledge. Philosophies and viewpoints are also tacit. According to organizational theorists Ikujiro Nonaka and Hirotaka Takeuchi,2 tacit knowledge is expressed through a socialization process, where organizations transfer knowledge and allow the development of new ideas and perspectives. Some view explicit knowledge as merely information, whereas tacit knowledge is an individual’s understanding of that information, based on personal learning.
Nonaka described what he called the “Spiral of Knowledge” (Figure 1) to illustrate how knowledge and learning migrate through stages:
• Tacit to Tacit: This is the sharing of tacit knowledge directly with another. For example, a new post-graduate works as an intern observing senior chemists perform their work. The intern picks up tips and tricks from the lifelong experiences of the elder scientists.
• Explicit to Explicit: This is combining explicit knowledge to develop a new understanding. For example, conclusions entered into a report from the meta-analysis of experiments documented in ELN.
• Tacit to Explicit: When an experienced researcher enters notations during experimental design and utilizes concepts learned from prior experiences, explicit knowledge is created from tacit knowledge.
• Explicit to Tacit: As new explicit knowledge is shared, others begin to internalize it. They use it to broaden their own tacit knowledge. For example, reading outside research to develop ideas for the design of a new molecular entity.
Location, personality conflicts or divisions within an organization often create barriers to the creation of new knowledge. If tacit knowledge is not shared, then scientists must learn from their own mistakes rather than from the insights of others, an inefficient process. Socialization is necessary for tacit-to-tacit knowledge sharing.
It is a best practice to have communities of practice and informal social networks that allow researchers from different departments to share knowledge and experiences. These are associations of researchers divorced from the organizational hierarchy. In the example depicted in Figure 2, communities of practice are formed around a particular group or application, where researchers from different departments share their knowledge.
Communities are supported through information technology such as wikis, blogs and other collaboration tools. Communities of practice can also be expanded beyond a company’s walls to include interested parties from academia, government research institutes or contract research organizations. These external facing “Open Innovation” initiatives are becoming increasingly popular with pharmaceutical companies looking to broaden knowledge transfer to stimulate innovation.
If not monitored by strong ground rules and governance, communities can become sounding boards for disgruntled workers. Without champions to establish and support them, over time, these communities risk fading into oblivion. Initially, there will be very little information in the knowledge base, as it takes time and stamina to ingrain the community into routine practice. A strong vocal champion is required to get them off the ground and gain traction.
In her book, Sharing Hidden Know-How,3 Katrina Pugh of Columbia University advocates the concept of the “Knowledge Jam” to guide the movement of tacit to explicit knowledge. Adopted by forward-thinking technology and pharmaceutical companies, a “Knowledge Jam” is a facilitated series of sessions for eliciting and documenting tacit knowledge through questions and answers around a specific objective. Jams can be an effective way to build communities of practice.
Deploying a Successful KM Project
To avoid the mistakes of the past, it is important for teams embarking on a KM initiative to understand the lessons learned from both failed and successful projects. As noted previously, KM is not an information technology (“IT”) problem, but a challenge to the culture of an organization involving people and processes.
When examining past projects, many best practices can be derived, a few of which are:
• Strategy and Process: Start with clear goals and objectives and try not to “boil the ocean” by doing too much, too soon. There must be a plan to move the organization to higher levels of KM maturity in a series of stages, taking into consideration Nonaka’s framework for learning. Establish communities of practice that enable the capture of tacit knowledge. Adapt business processes to the concept of openly sharing knowledge and ideas.
• Technology: Technology is just a tool for bringing people closer together and must be closely aligned with the processes for communication and sharing. Moving data from point A to point B is pointless without an understanding of the relevance or context. It is important for information architects to properly understand how data sets are/should be used and to organize assets based on contextual relationships. Build use cases along with the business to explore the large number of options for content management, search and data virtualization. Don’t rule out open source, particularly the tools for “Big Data” such as Hadoop.
• Leadership: KM initiatives need a strong leader. Because KM involves a strong humanistic component, the leader must constantly drive cultural migration and change acceleration. Gain management buy-in and dedicate a strong leader to move the organization forward. Without a strong leader and management sponsorship to change the culture, failure is almost assured.
• Curation: Keep in mind the challenge of the three Vs of Veracity, Volatility and Validity in a scientific environment. Are the results meaningful to the question (Veracity)? How long will the assets need to be maintained to be scientifically relevant (Volatility)? Are the knowledge assets correct, or will old, non-valid data drive people to incorrect conclusions (Validity)? Initial and ongoing curation is required, or searching will produce inadequate results. A process and resources for continuous data stewardship must be a component of the ongoing system support. Projects have failed when ongoing support was not considered.
• Data Scientist: Recently, companies have realized the need for a role within the organization that combines domain expertise, technical skills and mathematics/statistics to extract knowledge from the myriad of data assets. These resources are increasingly members of scientific project teams with responsibility for mining and interpreting content stores (explicit to explicit).
• Standards: For knowledge access to be effective, assets must be properly categorized. At a minimum, taxonomy and metadata standards are necessary for long-term retention: a common vocabulary and a base ontology (i.e., relationships between concepts) to ensure the ability to search and compare data and information both today and in the future.
• Adaption: Any KM initiative will need to adapt over time to the changing needs of the business environment. Project team members and users must be willing to take risks, experiment and make mistakes without “protecting turf.”
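A common vocabulary, as called for in the Standards practice above, can be enforced at the point of capture with even a thin validation layer. The sketch below is illustrative only: the vocabulary fields and values are hypothetical stand-ins for terms a real user community would have to agree on.

```python
# Hypothetical controlled vocabulary agreed on by the user community
CONTROLLED_VOCABULARY = {
    "department": {"chemistry", "biology", "toxicology"},
    "asset_type": {"report", "memo", "presentation", "summary"},
    "study_phase": {"discovery", "preclinical", "clinical"},
}

def validate_tags(tags):
    """Check metadata tags against the vocabulary.

    Returns a list of problems; an empty list means the tags conform
    and the asset can be accepted into the repository.
    """
    problems = []
    for field, allowed in CONTROLLED_VOCABULARY.items():
        if field not in tags:
            problems.append(f"missing required field: {field}")
        elif tags[field] not in allowed:
            problems.append(f"invalid value for {field}: {tags[field]!r}")
    return problems

good = {"department": "chemistry", "asset_type": "report",
        "study_phase": "discovery"}
bad = {"department": "chem", "asset_type": "report"}

print(validate_tags(good))  # []
print(validate_tags(bad))   # invalid department, missing study_phase
```

Rejecting nonconforming tags at submission time is far cheaper than the retroactive curation described above, which is why the vocabulary must exist before the repository fills up.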
For the IT/Informatics team, any request for “knowledge management” should be well-framed. It is essential to understand the business’ vision, strategic goals and the “need behind the need” before agreeing to embark on a KM project. Without a roadmap of staged deliverables, management support and a plan for organizational change, the latest generation of projects may end up piled on the heap of past KM failures.
1. Davenport, T. and Prusak, L., Working knowledge: how organizations manage what they know, Boston, MA: Harvard Business School Press, 1998
2. Nonaka, I. and Takeuchi, H., The Knowledge-Creating Company, Oxford University Press, 1995
3. Pugh, Katrina, Sharing Hidden Know-How: How Managers Solve Thorny Problems With the Knowledge Jam, Jossey-Bass, 2011
Michael Elliott is CEO of Atrium Research & Consulting. He may be reached at editor@ScientificComputing.com.