The field of knowledge management has changed significantly over the last ten years. When I entered the industry in the mid-1990s, most of the solutions we offered were hard-coded and rigid, even for something as basic as a central blog (though the term did not exist back then). Within a few years, many of the tools and methods we relied on became outdated. Even today, with the rise of cloud-based content management and collaboration platforms combined with machine learning and social networking features, the pace of change can exceed our organizations’ ability to keep up. In a landscape where features are built and launched in days rather than years, many companies struggle to adapt.
The basic needs of knowledge management have not changed:
- the ability to capture and store knowledge assets and intellectual property
- simple yet powerful search capabilities to allow employees to easily discover and retrieve their content
- business processes to automate the dissemination and consumption of the content
However, the sheer volume of content is making both the storage of these assets and the ability to retrieve them quickly and easily a difficult proposition. And the more difficult it is to locate and retrieve your content and business-critical data, the less likely end users are to embrace the platform. Adoption and engagement have become key measures of the success of knowledge management initiatives, and yet most organizations struggle to achieve their goals because they do not truly understand why their platforms fail.
What is needed is a “knowledge management hypergraph.” A hypergraph is defined in mathematics as a generalization of a graph in which a single edge (a hyperedge) can connect any number of nodes, rather than just two. Put in the context of the social graph efforts led by companies such as Microsoft, Google, Facebook, and others, the relationships between document artifacts and business processes are joined and expanded upon by the relationships of readers, authors, team members, administrators, and anyone else who may interact with any single node, multiplying the number of relationships between nodes.
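To make the idea concrete, here is a minimal sketch in Python, using entirely hypothetical node and edge names, of a hypergraph in which a single hyperedge ties a document to its author, its readers, and the business process it supports; an ordinary graph would need a separate two-node edge for every one of those pairings.

```python
from collections import defaultdict

class Hypergraph:
    """A generalization of a graph: one hyperedge can connect any number of nodes."""
    def __init__(self):
        self.edges = {}                     # edge name -> set of member nodes
        self.node_edges = defaultdict(set)  # node -> names of edges it belongs to

    def add_edge(self, name, nodes):
        self.edges[name] = set(nodes)
        for node in nodes:
            self.node_edges[node].add(name)

    def neighbors(self, node):
        """Every node that shares at least one hyperedge with `node`."""
        related = set()
        for name in self.node_edges[node]:
            related |= self.edges[name]
        related.discard(node)
        return related

# One hyperedge per artifact, tying together the document, the people who
# touch it, and the business process it belongs to.
kg = Hypergraph()
kg.add_edge("q3-proposal", {"proposal.docx", "author:alice", "reader:bob",
                            "process:sales-pipeline"})
kg.add_edge("pipeline-review", {"review.pptx", "author:carol", "reader:bob",
                                "process:sales-pipeline"})

# Everything bob is connected to, across both hyperedges.
print(kg.neighbors("reader:bob"))
```

Adding one more reader or one more process tag to an edge creates a whole set of new node-to-node relationships at once, which is exactly the amplification described above.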
With the combination of our standard information assets (think email, Word documents, presentations, and so forth), data from our business systems (CRM and ERP platforms), social feeds, and the various machine learning data points that are increasingly being connected to all of our geographic, biographic, and psychographic profiles, we are generating massive amounts of data, year in and year out. With this increase, our data is growing more complex, and therefore we need to approach knowledge management and content collaboration in a more strategic manner.
We need to take a more serious look at the primary platforms through which we create and consume content, such as SharePoint. The most recent AIIM.org survey investigating the success of SharePoint deployments (“Connecting and Optimizing SharePoint – important strategy choices”) reveals that only 11% of respondents feel that their SharePoint projects have been a success, just over half (52%) say they have aligned SharePoint with their corporate governance efforts, and yet 75% still have a strong commitment to making the platform work. While most respondents claim that a lack of executive support is one of the primary causes of these failures, in my experience it is a chicken-and-egg argument. If you go into the project (a SharePoint deployment) without a clear idea of the end state, the business value you will achieve, and the measurements necessary to demonstrate that business value, most executives will find it difficult to give their support, viewing it as yet another IT boondoggle. On the flip side, I have also seen projects with strong executive buy-in (call it “intuitive leadership”) work their way through these deployment and measurement issues, because their management agrees that the platform is “directionally correct” and provides both qualitative and quantitative benefits.
Part of our operational excellence effort is to continually improve the quality of our inputs and outputs; in this case, that means constantly refining, revising, and improving our methodologies, our content, and our metrics. One major aspect of this activity is building a strong information architecture, ensuring that as content enters the system, it is correctly “mapped” to related content. Context is everything. I spent the better part of a decade writing about and speaking to audiences about pattern development, identification, and reuse as a way of improving the product development process. Having a map of your key business processes, for example, allows you to locate and use individual artifacts on their own, as well as better identify related material. Of course, at the center of it all is a strong search strategy. There is no way around it: if we fail on the search experience, the rest of our planning was for naught.
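As an illustration of what that “mapping” might look like in practice, here is a short sketch, again in Python and with invented file names and process tags, of artifacts tagged at intake with the business processes they support, and of using those shared tags to surface related material:

```python
from collections import defaultdict

# Hypothetical intake metadata: each artifact is mapped to the business
# processes it supports.
artifacts = {
    "onboarding-checklist.docx": {"hr-onboarding"},
    "benefits-faq.pdf":          {"hr-onboarding", "benefits-enrollment"},
    "enrollment-form.xlsx":      {"benefits-enrollment"},
}

# Invert the metadata into a process map: process -> artifacts.
process_map = defaultdict(set)
for doc, processes in artifacts.items():
    for process in processes:
        process_map[process].add(doc)

def related(doc):
    """Artifacts sharing at least one business process with `doc`."""
    return {other
            for process in artifacts[doc]
            for other in process_map[process]} - {doc}

# The FAQ sits in two processes, so it surfaces material from both.
print(related("benefits-faq.pdf"))
```

The point is not the few lines of code but the discipline behind them: the related-content lookup only works because every artifact was mapped to its processes when it entered the system.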
Generally speaking, we are good at collecting data, less effective at managing it, and, in my opinion, failing to take advantage of the knowledge and wisdom buried within it. This is the fundamental problem with capturing all of this data without converting it into knowledge: we don’t know what is there, and we don’t understand its potential value, so we keep everything in the hope that we may eventually find out. Some of the latest technologies in search, social, and machine learning will help us improve the ways in which we capture, organize, and orchestrate our data, but not everything can be automated. The concept of a content hypergraph as a way of indexing and contextualizing our information assets may be the solution.
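As a closing sketch, and again using invented data, here is one way to read “indexing and contextualizing” together: a plain inverted index over the document text, where each search hit comes back along with its contextual relationships rather than as a bare file name.

```python
import re
from collections import defaultdict

documents = {  # hypothetical document bodies
    "proposal.docx": "Q3 sales proposal for the Acme account",
    "review.pptx":   "Pipeline review covering the Acme proposal",
}
context = {    # contextual relationships captured at intake
    "proposal.docx": {"author:alice", "process:sales-pipeline"},
    "review.pptx":   {"author:carol", "process:sales-pipeline"},
}

# Classic inverted index: term -> set of documents containing it.
index = defaultdict(set)
for doc, text in documents.items():
    for term in re.findall(r"\w+", text.lower()):
        index[term].add(doc)

def search(term):
    """Return each matching document together with its context."""
    return {doc: context[doc] for doc in index.get(term.lower(), set())}

# A search for "acme" returns the hits plus the people and processes
# around them, not just two file names.
print(search("acme"))
```

The index finds the content; the context is what turns a list of hits into knowledge.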