

Capturing Organizational Knowledge:

Approaches to Knowledge Management and Supporting Technology

By

Russ Wright

Knowledge Management

Although it is said that money makes the world go around, knowledge has displaced money as the primary business driver: according to Drucker (1988), organizations discovered that organizational knowledge is the most effective tool for gaining a competitive advantage. Identifying and leveraging the knowledge held by individuals and by the organization itself became a way to increase competitiveness (Baird, Henderson, & Watts, 1997). Over the past three decades, information and the technology that supports it have grown at an explosive rate, and the wealth of available information has advanced rapidly in many fields, including electronics, computers, and communications technology (Adomavicius, Bockstedt, Gupta, & Kauffman, 2008). Knowledge management is the inevitable result of rapid progress in Information Technology (IT), globalization, and rising awareness of the commercial value of organizational knowledge (Prusak, 2001). The existence of all this information forces organizations to find ways to handle it and transform it into actionable knowledge. Thus the problem lies not only in interpreting, distilling, and sharing information, but also in efficiently turning it into knowledge.

The purpose of this document is to explore how knowledge has become an organization's most important resource and learning its most important capability for competing in the marketplace. The discussion begins with background on creating a competitive advantage and the importance of learning within an organization. The document then compares and contrasts the major approaches to knowledge management and examines the role that computer technology plays in capturing organizational knowledge. The conclusion finds that the field of knowledge management is still evolving and that Web 2.0 technology may change the way knowledge is captured within an organization.

Background

Knowledge for a Competitive Advantage

The realization that knowledge, when organized and viewed through the lens of competitive factors, could help an organization gain a competitive advantage formalized the beginning of knowledge management. Porter (1980) explained that the existing model for developing a business strategy was no longer working. He brought together ideas from the Harvard Business School and the Boston Consulting Group to create a business strategy commonly called the five forces model, displayed in Figure 1 below. This model used five factors of competition as the basis for a business strategy: (1) industry competitors, (2) pressure from substitute products, (3) bargaining power of suppliers, (4) bargaining power of buyers, and (5) potential entrants. Porter explained that analysis of these five areas allowed a business within a particular industry to establish itself, react to these forces of competition, and profit from them. Albers and Brewer (2003) explained that examining each of the five forces required specific knowledge within that particular competitive factor. Accordingly, knowledge management began with the need to understand the complexities of each of the five factors. Yet knowledge alone was not enough, as organizations had to learn from the analysis of the five factors and adapt to an ever-changing market.

Figure 1

Porter’s Five Forces Model

[image: porter-five-forces]

The Learning Organization

Possessing knowledge of competitive factors is not, by itself, a business strategy sufficient to make an organization competitive and profitable. Instead, the organization must adapt and take advantage of opportunities to remain competitive, because learning is an organization's most important capability (Earl, 2001; Grant, 1996; Zack, 1999a). Nonaka (1991) explained that learning must be integrated into the culture of the organization rather than performed as a separate activity by specialists. Senge (1994) described a learning organization as a place where “people continually expand their capacity to create the results they truly desire, where new and expansive patterns of thinking are nurtured, where collective aspiration is set free, and where people are continually learning how to learn together” (p. 1). Thus, a learning organization embraces a culture in which the ability to create and share new knowledge gives it a competitive advantage. Still, defining and implementing the skills required for an organization to embrace learning is a complicated process.

Creating and sustaining a learning or knowledge-creating culture requires an organization not only to engage in specific activities but also to develop a new mindset. Argyris and Schön (1978) theorized that learning involved not just detecting and correcting errors, but changing the way the organization behaves as a whole through policy change. Garvin (1993) built upon this work and defined a set of activities in which learning organizations must engage to sustain a knowledge-creating culture: (1) systematic problem solving, (2) experimentation with new approaches, (3) learning from their own experience and past history, (4) learning from the experiences and best practices of others, and (5) transferring knowledge quickly and efficiently throughout the organization. The author further explained that applying these practices is not enough, as real change requires analysis beyond the obvious, delving into the underlying factors. As such, both learning activities and a change in culture are needed to create a learning culture.

All of this work led to the creation of a field of research commonly called knowledge management. According to Alavi and Leidner (2001), knowledge management comprises several overlapping practices that an organization can use to find, create, and share the knowledge held within its individuals and processes. Another view defined knowledge management as “an institutional systematic effort to capitalize on the cumulative knowledge that an organization has” (Serban & Luan, 2002, p. 5). Consequently, knowledge management is deeply connected to the people, procedures, and technology within an organization.

Approaches to Knowledge Management

Different views on the definition of knowledge have led to the creation of multiple models of knowledge management (Carlsson, 2003). The first series of approaches defines knowledge as a “Justified True Belief” (Allix, 2003, p. 1); these models place knowledge into distinct categories (Boisot, 1987; Nonaka & Takeuchi, 1995). A second, more scientific type of model views knowledge as an asset and connects its value to intellectual capital (Wiig, 1997). A third type views knowledge as subjective and focuses on the creation of knowledge within the organization (Demarest, 1997). Which model an organization chooses depends upon its strategic needs (Aliaga, 2000). Thus, there exist many different models of knowledge management for many different needs.

Categorized Knowledge Management

One of the earliest knowledge management models, created by Boisot (1987), categorized knowledge into four basic groups, as shown in Table 1 below. Codified knowledge encompassed knowledge that could be packaged for transmission and could exist in two states, diffused or undiffused. Codified-diffused knowledge was public information. Codified-undiffused knowledge was private or proprietary information shared with only a select few. Uncodified knowledge was knowledge that is difficult to package for transmission. Uncodified-undiffused knowledge was personal knowledge, and uncodified-diffused knowledge was common sense. The author explained that common sense develops through social interactions in which individuals share their personal knowledge, and pointed out that codified and uncodified are distinct categories of knowledge.

Table 1:

Boisot’s Knowledge Category Model

             Diffused            Undiffused
Uncodified   Common Sense        Personal Knowledge
Codified     Public Knowledge    Proprietary Knowledge

 

The knowledge management model created by Nonaka and Takeuchi (1995) defined two forms of knowledge: tacit and explicit. Explicit knowledge is knowledge that has been shared in some way and gathered into a storage medium, such as a book or computer program, which makes it easy to share with others. Tacit knowledge is internal to a person, somewhat subjective, useful only in a specific context, and difficult to share, as it exists only within the mind of the individual. The authors explained that tacit knowledge can be shared through socialization: social interactions, either face to face or in a shared group experience among members of an organization. Tacit knowledge becomes explicit through externalization, when it is formalized into the information systems of the organization. Explicit knowledge can then be compiled and mixed with other existing knowledge through a process called combination. Likewise, explicit knowledge becomes tacit through internalization, which happens, for example, when members of the organization are trained to use a system. When all of these modes of knowledge transfer work together they create learning in what the authors call “the spiral of knowledge” (p. 165). Through iterations of learning that start with the individual and spiral up into the group and eventually the organization, knowledge accumulates and grows, leading to innovation and learning (Nonaka, 1991).

Table 2:

Nonaka’s Knowledge Management Model

                 To Tacit              To Explicit
From Tacit       Socialization         Externalization
From Explicit    Internalization       Combination
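The four conversion modes in Table 2 amount to a lookup keyed on the source and destination forms of knowledge. The sketch below, in Python, only restates that mapping; the type and variable names are assumptions made for illustration, not part of Nonaka and Takeuchi's model.

```python
from enum import Enum


class Form(Enum):
    """The two forms of knowledge in the Nonaka and Takeuchi (1995) model."""
    TACIT = "tacit"
    EXPLICIT = "explicit"


# Conversion modes keyed by (source form, destination form), as in Table 2.
SECI_MODES = {
    (Form.TACIT, Form.TACIT): "socialization",        # shared experience, face to face
    (Form.TACIT, Form.EXPLICIT): "externalization",   # formalized into documents or systems
    (Form.EXPLICIT, Form.EXPLICIT): "combination",    # merged with other explicit knowledge
    (Form.EXPLICIT, Form.TACIT): "internalization",   # absorbed through training and practice
}

if __name__ == "__main__":
    print(SECI_MODES[(Form.TACIT, Form.EXPLICIT)])  # -> externalization
```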

 

When comparing these two categorical models, some similarities are easy to see. The tacit and explicit categories from Nonaka (1991) are broadly similar to the uncodified and codified knowledge categories defined by Boisot (1987). Both authors also treat their category pairs as distinct, and both describe a sharing of knowledge that moves knowledge from the individual to the larger group. Where the two models differ greatly is that Nonaka (1991) is far more explicit about collecting knowledge and creating new knowledge through the knowledge spiral. McAdam and McCreedy (2000) criticized these models as too mechanistic, explaining that they lack a holistic view of knowledge management.

Intellectual Capital as Knowledge Management

The Skandia firm developed a scientific model of knowledge management to help measure its intellectual capital. According to Wiig (1997), this tree-like model treats knowledge as a product that can be considered an asset of the organization. The knowledge, or intellectual capital, has several categories: (1) human, (2) structural, (3) customer, (4) organizational, (5) process, (6) innovation, (7) intellectual property, and (8) intangible assets. The value assigned to each of these categories indicates the organization's future capabilities. The author defined the categories as follows:

  • Human capital is the level of competency of the employees.
  • Structural capital is the collection of all intellectual activities of the employees.
  • Customer capital is the value of the organization’s relationships with its customers.
  • Organizational capital is the knowledge embedded in processes.
  • Process capital is the value-creating processes.
  • Innovation capital is the explicit knowledge and inscrutable knowledge assets.
  • Intellectual property is documented and captured knowledge.
  • Intangible assets are the value of immeasurable but important items.

     

When comparing the Skandia model with the Nonaka and Takeuchi (1995) model, the two models share the concept of explicit knowledge, defined as innovation capital within the Skandia model. Although tacit knowledge is not directly mentioned, Wiig (1997) explained that the transfer of tacit knowledge into explicit form to give it lasting value is comparable to the transfer of customer capital into innovation capital in the Skandia model. A study by Grant (1996) criticized the Nonaka and Takeuchi (1995) model because it was grounded in the context of new product development, whereas the Skandia model also works for existing products.

Social Construction Model

The last model of knowledge management presented here was created by Demarest (1997) and focuses on the creation of knowledge within the context of the organization. The author contends that all organizations have a knowledge economy and, in general, operate in much the same way. This includes an understanding that commercial knowledge is not truth; rather, it is whatever works in a given situation to produce knowledge that leads to economic gain. One of the primary assumptions of this model is that knowledge creation happens through interactions between members of the organization. The author adapted a model created by Clark and Staunton (1989) that includes four phases: (1) construction, (2) embodiment, (3) dissemination, and (4) use. Construction is discovering or structuring some knowledge, embodiment is the process of selecting a container for the new knowledge, dissemination is the sharing of this new knowledge, and use is the creation of commercial value from the new knowledge. The author further explained that the process can flow through all four steps in sequence or proceed simultaneously along different paths, such as construction to use and construction to dissemination.

Figure 2

The Demarest Knowledge Management Model

[image: demarest-km-model]
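Demarest's phases and the alternative paths between them can be pictured as a small directed graph. The sketch below is a hypothetical rendering of the flows named in the paragraph above, not a reproduction of the published diagram; the edge list and helper function are invented for illustration.

```python
# Hypothetical rendering of the Demarest (1997) phase flows described above.
# The graph includes the sequential path plus the simultaneous paths mentioned
# in the text (construction -> use, construction -> dissemination).
FLOWS = {
    "construction": ["embodiment", "dissemination", "use"],
    "embodiment": ["dissemination"],
    "dissemination": ["use"],
    "use": [],  # commercial value is realized here
}


def paths_from(phase, graph=FLOWS, trail=None):
    """Enumerate every path knowledge can take from a given phase onward."""
    trail = (trail or []) + [phase]
    if not graph[phase]:
        return [trail]
    paths = []
    for nxt in graph[phase]:
        paths.extend(paths_from(nxt, graph, trail))
    return paths


if __name__ == "__main__":
    for path in paths_from("construction"):
        print(" -> ".join(path))
```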

The social construction model arguably improves on the other models. When comparing this model to the categorical models of Nonaka and Takeuchi (1995) and Boisot (1987), they share (1) the concept of knowledge creation as powered by the flow of information within the organization, (2) the concept that knowledge creation happens between members of the organization, and (3) a sharing of knowledge that moves knowledge from the individual to the larger group. According to McAdam and McCreedy (2000), this model differs from the categorical and intellectual capital models because it includes the idea that knowledge is inherently connected to the social and learning processes within the organization, and “knowledge construction is not limited to scientific inputs but includes the social construction of knowledge” (p. 6). Therefore this model brings together the strongest parts of the other models.

These knowledge management models span a wide range of perspectives on the definition of knowledge. The categorical models share the concepts of tacit and explicit knowledge. The intellectual capital model treats knowledge as an asset to be managed efficiently to make an organization successful. The social construction model links knowledge to the social interactions and learning processes within the organization. The progression of models demonstrates that knowledge management continues to evolve. Grover and Davenport (2001) explained that the main purpose of knowledge management models is to help an organization grow its knowledge base and increase its competitive edge in the marketplace. Thus no single model is best for every organization; the choice depends on how the organization defines knowledge.

The Role of Computer Technology in Knowledge Management

All the attention on knowledge management has led to increased use of IT to capture knowledge. Spender and Scherer (2007) explained that “the majority of KM consultants and business people see IT as KM’s principal armamentarium-it is all about collecting, manipulating, and delivering the increasing amounts of information ITs falling costs have made available” (p. 5). This opinion resonates with Zack (1999b), who proposed a knowledge management strategy for transferring tacit knowledge to a storage format, thereby making it explicit; the author explained that this conversion is commonly called codification. He also explained that this model uses IT as a pipeline to connect people to knowledge. Hansen, Nohria, and Tierney (1999) proposed an additional knowledge management strategy that focuses on dialog between individuals, sharing knowledge tacit to tacit, which they called personalization. According to the authors, this model uses IT to connect people to people so they can exchange tacit knowledge. Thus, the use of information technology to capture knowledge varies with the organization's competitive strategy.

The codified knowledge strategy, according to Zack (1999b), is designed to capture knowledge, refine it into something usable, and place it into a storage system, such as a document repository, where it can be reused by other members of the organization. The ability to store and reuse knowledge whenever needed creates an economy of reuse, which helps prevent the constant re-creation of knowledge and therefore reduces costs (Cowan & Foray, 1997). According to Hansen et al. (1999), this strategy, which they call people-to-documents, comes with a significant IT investment because of the need to sort and store large amounts of knowledge, now in data form.
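A minimal sketch of the people-to-documents idea is a searchable repository: knowledge is refined into documents, tagged, and retrieved for reuse. The class and field names below are assumptions made for illustration, not part of Zack's or Hansen et al.'s published work.

```python
from dataclasses import dataclass, field


@dataclass
class Document:
    """A unit of codified knowledge in the repository (illustrative only)."""
    title: str
    body: str
    tags: set = field(default_factory=set)


class Repository:
    """People-to-documents: store refined knowledge once, reuse it many times."""

    def __init__(self):
        self._documents = []

    def add(self, doc: Document):
        self._documents.append(doc)

    def search(self, tag: str):
        """Return every document carrying the given tag."""
        return [d for d in self._documents if tag in d.tags]


if __name__ == "__main__":
    repo = Repository()
    repo.add(Document("Proposal template", "…", {"sales", "template"}))
    repo.add(Document("Lessons learned: project Alpha", "…", {"lessons", "pm"}))
    print([d.title for d in repo.search("lessons")])
```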

Hansen et al. (1999) defined the personalization knowledge strategy as drawing on the relationships established between individuals in an organization, through which they share tacit knowledge. They further explained that this strategy creates an economy of experts within the organization, which they called people-to-people. This model requires a much smaller investment in IT infrastructure because far less knowledge is stored in digital form; it stays instead in the minds of the employees.
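By contrast, the people-to-people strategy needs little more than a directory that routes a question to a colleague with the relevant experience; the knowledge itself stays tacit. The structure below is likewise a hypothetical illustration, not a description of any system the cited authors built.

```python
class ExpertDirectory:
    """People-to-people: connect the asker to a colleague rather than a document."""

    def __init__(self):
        self._skills = {}  # skill -> list of people who have it

    def register(self, person: str, skills: list):
        for skill in skills:
            self._skills.setdefault(skill, []).append(person)

    def who_knows(self, skill: str):
        """Return the people to talk to about a topic; the knowledge stays tacit."""
        return self._skills.get(skill, [])


if __name__ == "__main__":
    directory = ExpertDirectory()
    directory.register("Ana", ["pricing", "negotiation"])
    directory.register("Ben", ["pricing", "logistics"])
    print(directory.who_knows("pricing"))  # -> ['Ana', 'Ben']
```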

Managerial Needs

Managers in an organization with a knowledge management strategy need different types of information about the technology used for the knowledge management system. According to research by Jennex and Olfman (2008), managers require multiple measures to gauge the effectiveness and success of a knowledge management system: the quality of the information in the system, how well users are adapting to the software, and the overall performance of the system. Massey, Montoya-Weiss, and O'Driscoll (2002) explained that managers need information not only about what is in the system but also about how well the system is performing, so they can help remove bottlenecks. Consequently, the information managers need not only gauges the effectiveness of the knowledge management system but also helps make the system function smoothly.

Pitfalls

Using information technology to capture the knowledge of an organization can be detrimental if done improperly. Johannessen, Olaisen, and Olson (2001) expressed concern over the misuse of information technology to manage tacit knowledge within an organization. They argued that, despite empirical evidence to the contrary, organizations continue to invest in IT systems that may lead to a loss, or at least a diminished importance, of tacit knowledge. Zack (1999b) explained that competitive performance requires a balance between tacit and explicit knowledge. Nonaka (1994) explained that knowledge within an organization is created by, and flows from, members of the organization engaging one another and sharing tacit and explicit knowledge. Scheepers, Venkitachalam, and Gibbs (2004) extended the research of Hansen et al. (1999) and concluded that an 80/20 mix of codification and personalization strategies, chosen according to the competitive strategy of the organization, was most successful. For these reasons, a balance between tacit and explicit knowledge must be maintained in the organization's culture and IT infrastructure.

Web 2.0

The technologies created in the Web 2.0 culture offer new IT solutions for knowledge management. Web 2.0 technology functions more like the way individuals naturally interact (O'Reilly, 2006). As previously stated, the codified knowledge strategy requires a significant IT investment, not only in equipment but also in specialists to gather and organize the knowledge (Hansen et al., 1999). According to Liebowitz (1999), one of the problems with traditional knowledge management technology is that it puts the user in the role of passive receiver. Tredinnick (2006), writing about Web 2.0 technology in knowledge management, explained: “The technologies involved place a greater emphasis on the contributions of users in creating and organizing information than traditional information organization and retrieval approaches” (p. 231). Chui, Miller, and Roberts (2009) echoed this concept when they explained that Web 2.0 technology puts the emphasis on users generating new information or editing other participants' work. According to Levy (2009), one of the advantages of Web 2.0 technology is that, as individuals share knowledge, they potentially assist in the codification process. When they share their tacit knowledge by posting it in an interactive Web 2.0 tool, such as a wiki, that knowledge begins to become explicit as others read, enhance, and categorize it, moving it from personal to organizational knowledge. Accordingly, Web 2.0 technologies potentially offer many benefits, among them greater user participation, which creates more knowledge sharing and helps keep knowledge from becoming stale, and lower costs, because participants do more of the work.
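The movement Levy (2009) describes, a personal posting gradually becoming organizational knowledge as others read, enhance, and categorize it, can be sketched as a wiki page accumulating revisions and tags. The page model below is an assumption made for illustration, not a description of any particular Web 2.0 product.

```python
class WikiPage:
    """Illustrative wiki page: tacit knowledge posted, then refined by others."""

    def __init__(self, author: str, text: str):
        self.revisions = [(author, text)]   # the first revision is the author's tacit account
        self.tags = set()                   # categories added by other participants

    def edit(self, editor: str, text: str):
        """Each edit by another participant enhances and further codifies the page."""
        self.revisions.append((editor, text))

    def categorize(self, tag: str):
        self.tags.add(tag)

    @property
    def contributors(self):
        return {author for author, _ in self.revisions}


if __name__ == "__main__":
    page = WikiPage("Ana", "How I shortened the Alpha project's test cycle …")
    page.edit("Ben", "Added the tooling steps and a checklist.")
    page.categorize("testing")
    print(len(page.contributors), "contributors;", page.tags)
```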

The information technology used for knowledge management, and specifically for capturing organizational knowledge, depends on the organization's competitive strategy. The two strategies outlined here, codification and personalization, use information technology in different ways: the former builds a repository requiring significant IT investment, and the latter creates a loose network of experts requiring a smaller IT investment. The experts warn that technology itself is not the answer; a real strategy with clear plans must be in place or the technology investment will not help the knowledge management process. New Web 2.0 technologies on the horizon could increase user participation in knowledge management technology and save money when codifying knowledge.

Knowledge Management Is Still Evolving

The approaches to knowledge management outlined here show a progression of thought. The models move from a specific portion of knowledge sharing and an absolute definition of knowledge as truth (Boisot, 1987; Nonaka, 1991) to a wider, more generic perspective on sharing knowledge and a more subjective definition of commercial knowledge as truth (Demarest, 1997). The information technology used to support a knowledge management strategy also continues to evolve. Both the codified knowledge strategy of Zack (1999b) and the personalization knowledge strategy of Hansen et al. (1999) require technology to capture and share knowledge. New Web 2.0 technology has the potential to change how much the organization must invest in technology, as it is likely to increase the level of user participation and make users more active in the knowledge management process. Therefore the only certainty in knowledge management is the continued growth and change of its models, strategies, and technology.

 

References

Adomavicius, G., Bockstedt, J. C., Gupta, A., & Kauffman, R. J. (2008). Making sense of technology trends in the information technology landscape: A design science approach. MIS Quarterly, 32(4), 779-809.

 

Alavi, M., & Leidner, D. E. (2001). Review: Knowledge management and knowledge management systems: Conceptual foundations and research issues. MIS Quarterly, 25(1), 107-136.

 

Albers, J., & Brewer, S. (2003). Knowledge management and the innovation process: The eco-innovation model. Journal of Knowledge Management Practice, 4(1).

 

Aliaga, O. A. (2000). Knowledge management and strategic planning. Advances in Developing Human Resources, 2(1), 91-104. doi:10.1177/152342230000200108

 

Allix, N. (2003). Epistemology and knowledge management concepts and practices. Journal of Knowledge Management Practice, 4(1), 136-152.

 

Argyris, C., & Schön, D. (1978). Organizational learning: A theory of action perspective. Reading, MA: Addison Wesley.

 

Baird, L., Henderson, J., & Watts, S. (1997). Learning from action: An analysis of the center for army lessons learned. Human Resource Management Journal, 36(4), 385-396.

 

Boisot, M. (1987). Information and organizations: The manager as anthropologist. London, UK: Fontana/Collins.

 

Carlsson, S. (2003). Knowledge managing and knowledge management systems in inter-organizational networks. Knowledge and Process Management, 10(3), 194-206. doi:10.1002/kpm.179

 

Chui, M., Miller, A., & Roberts, R. P. (2009). Six ways to make Web 2.0 work. The McKinsey Quarterly, 1-7.

 

Clark, P., & Staunton, N. (1989). Innovation in technology and organization. London, UK: Routledge.

 

Cowan, R., & Foray, D. (1997). The economics of codification and the diffusion of knowledge. Industrial and Corporate Change, 6(3).

 

Demarest, M. (1997). Understanding knowledge management. Long Range Planning, 30(3), 374-384. doi:10.1016/S0024-6301(97)90250-8

 

Drucker, P. F. (1988). The coming of the new organization. Harvard Business Review, 66(1), 45-53.

 

Earl, M. (2001). Knowledge management strategies: Toward a taxonomy. Journal of Management Information Systems, 18(1), 215-233.

 

Garvin, D. A. (1993). Building a learning organization. Harvard Business Review, 71(4), 78-91.

 

Grant, R. (1996). Prospering in dynamically-competitive environments: Organizational capability as knowledge integration. Organization Science, 7(4), 375-387.

 

Grover, V., & Davenport, T. H. (2001). General perspectives on knowledge management: Fostering a research agenda. Journal of Management Information Systems, 18(1), 5-21.

 

Hansen, M. T., Nohria, N., & Tierney, T. (1999). What’s your strategy for managing knowledge? Harvard Business Review, 77(2), 106-116.

 

Jennex, M. E., & Olfman, L. (2008). A model of knowledge management success. In Current Issues in Knowledge Management (pp. 34-52). Hershey, PA: Information Science Reference.

 

Johannessen, J. (2001). Mismanagement of tacit knowledge: The importance of tacit knowledge, the danger of information technology, and what to do about it. International Journal of Information Management, 21(1), 3-20. doi:10.1016/S0268-4012(00)00047-5

 

Levy, M. (2009). WEB 2.0 implications on knowledge management. Journal of Knowledge Management, 13(1), 120-134. doi:10.1108/13673270910931215

 

Liebowitz, J. (1999). Key ingredients to the success of an organization’s knowledge management strategy. Knowledge and Process Management, 6(1), 37-40.

 

Massey, A. P., Montoya-Weiss, M. M., & O’Driscoll, T. M. (2002). Knowledge management in pursuit of performance: Insights from Nortel Networks. MIS Quarterly, 26(3), 269-289.

 

McAdam, R., & McCreedy, S. (2000). A critique of knowledge management: Using a local constructionist model. New Technology, Work & Employment, 15(2), 155.

 

Nonaka, I. (1991). The knowledge-creating company. Harvard Business Review, 85(7/8), 162-171.

 

Nonaka, I. (1994). A dynamic theory of organizational knowledge creation. Organization Science, 5(1), 14-37.

 

Nonaka, I., & Takeuchi, H. (1995). The knowledge creating company: How Japanese companies create the dynamics of innovation. Oxford, UK: Oxford University Press.

 

O’Reilly, T. (2006). Web 2.0 compact definition: Trying again. O’Reilly radar. Retrieved January 24, 2011, from http://radar.oreilly.com/2006/12/web-20-compact-definition-tryi.html

 

Porter, M. (1980). Competitive strategy: Techniques for analyzing industries and competitors. New York: Free Press.

 

Prusak, L. (2001). Where did knowledge management come from? IBM Systems Journal, 40(4), 1002-1007.

 

Scheepers, R., Venkitachalam, K., & Gibbs, M. (2004). Knowledge strategy in organizations: refining the model of Hansen, Nohria and Tierney. The Journal of Strategic Information Systems, 13(3), 201-222. doi:10.1016/j.jsis.2004.08.003

 

Senge, P. (1994). The fifth discipline: the art and practice of the learning organization (1st ed.). New York: Doubleday/Currency.

 

Serban, A. M., & Luan, J. (2002). Overview of knowledge management. New Directions for Institutional Research, 2002(113), 5.

 

Spender, J., & Scherer, A. (2007). The philosophical foundations of knowledge management: Editors’ introduction. Organization, 14(1), 5-28.

 

Tredinnick, L. (2006). Web 2.0 and business: A pointer to the intranets of the future? Business Information Review, 23, 228-234.

 

Wiig, K. (1997). Integrating intellectual capital and knowledge management. Long Range Planning,30(3), 399-405. doi:10.1016/S0024-6301(97)90256-9

 

Zack, M. H. (1999a). Developing a knowledge strategy. California Management Review, 41(3), 125-145.

 

Zack, M. H. (1999b). Managing codified knowledge. Sloan Management Review, 40(4), 45-58.


How To Implement a Knowledge Management System:

A Practical Guide for Project Managers

By Russ Wright

A Practical Recommendation for Project Managers to Implement a Knowledge Management System

Implementing a knowledge management system can greatly assist a project manager in their work. A study by White and Fortune (2002) identified the project success factors most often mentioned by project managers: (1) clear goals and objectives, (2) good support from senior management, and (3) enough funding and resources to complete the tasks. In a paper on the benefits of knowledge management systems, Wiig (1997) explained that such systems, when properly implemented, can improve communication between departments and provide users with a history of best practices within the organization. Alavi and Leidner (1999) found that an effective knowledge management system assists project management by improving communication, shortening the time needed to find solutions to problems, and producing better estimates of project duration. Thus a knowledge management system can help a project manager in all three of the important areas by providing the information needed to secure the project success factors. This paper provides background on the definition of knowledge and on knowledge management models. Three knowledge management implementation models are then reviewed to demonstrate the progression of the research. Finally, from a synthesis of the literature on knowledge management implementation, this document identifies several factors that help a project manager successfully implement a knowledge management system. The conclusion finds that the field of knowledge management and the process of implementation are still evolving.

Background

A Brief Discussion of Knowledge

There is much debate among scholars as to what constitutes knowledge within the context of knowledge management systems. Nonaka and Takeuchi (1995) defined knowledge as beliefs and commitments rather than mere information. Drawing on Polanyi (1966), they used the concepts of tacit and explicit knowledge. Tacit knowledge was defined as personal, context specific, and difficult to explain; this knowledge, gained from experience, can be lost if an individual leaves the organization without sharing it. Explicit knowledge was defined as common knowledge known by a large group that can be easily codified and shared. Zack (1999) defined codified knowledge as knowledge that is created, located, captured, and shared and that can be used to solve problems and create opportunities. Because it is captured, this type of knowledge can become stale if it is not regularly revisited and evaluated (Gillingham & Roberts, 2006). Thus knowledge, for the purposes of this paper, follows the aforementioned two categories of tacit and explicit.

Knowledge Management

The knowledge management field is still fairly new, having emerged within the past three decades, and many facets of the field remain unsettled. According to research by Rubenstein-Montano, Liebowitz, Buchwalter, McCaw, Newman, and Rebeck (2001a), one of the bigger revelations of the past decade was the realization that knowledge management is far more than technology for sharing knowledge, as it also incorporates individuals and the culture in which they work. According to research by Bresnen, Edelman, Newell, Scarbrough, and Swan (2003), sharing knowledge within and across projects is very difficult, and developing that ability is an important source of competitive advantage for an organization. Thus the project manager who wants to improve the quality of knowledge sharing through the implementation of a knowledge management system needs to consider many factors in finding a solution.

For the project manager, finding a useful methodology and implementing it requires a good understanding not only of the methodologies but also of the technological constraints of the organization in which the knowledge management system will be deployed. Research conducted by Liebowitz and Megbolugbe (2003) identified several high-tech and low-tech solutions for knowledge management. Low-cost solutions included frequent face-to-face meetings between departments, perhaps over working lunches, to share tacit knowledge. An organization spread over a large distance, where meeting in person is difficult or impossible, might instead use online bulletin boards and Facebook-like groups to share tacit knowledge in a virtual workspace. Research by Kasvi, Vartiainen, and Hailikari (2003) showed that these types of interactions, such as lunch meetings between departments and seminars, were described as some of the most important sources of knowledge. The more high-tech solutions described by Liebowitz and Megbolugbe (2003) used expert systems to capture and codify knowledge into a repository, along with data and text mining software that looked for patterns to inductively create knowledge. These solutions were much more difficult to implement and required considerable IT investment and employee training. Kuhn and Abecker (1997) acknowledged the value of these systems but cautioned that a balanced approach, one that flexes with the organization, is required to make them function well. Thus, the project manager must soberly consider which models of knowledge management fit the organization's capability and budget before attempting to implement a particular model.

Knowledge Management Models

The knowledge management models presented below are only a sample of the many models found in the research literature. They are representative of the others, as many share similar features and processes. The three presented here attempt to show the progression of the research as the models take on more complexity while at the same time attempting to explain and simplify the implementation process.

The model presented by Wiig (1997) had four basic iterative steps: (1) review, (2) conceptualize, (3) reflect, and (4) act, as depicted in Figure 1 below. The review step called for monitoring the internal performance of the organization against other organizations in the same industry to determine how well it is doing. The conceptualize step began by organizing knowledge into different levels; the author provided several examples of survey instruments that identified the knowledge assets and associated them with the particular business processes that used them. Strengths and weaknesses in the knowledge inventory were also identified at this step. The reflect step involved creating plans to address the strengths and weaknesses previously discovered. Finally, the act step was the implementation of the plan, which might be carried out by individuals in different parts of the organization. This process would be repeated to assist in the capture of knowledge.

Figure 1. Wiig’s knowledge management model

[image: wig-km-model]
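Wiig's four steps repeat as a cycle rather than running once. The loop below is only a schematic of that iteration; the step names come from the model, but everything else is a placeholder invented for illustration.

```python
def wiig_cycle(iterations=2):
    """Schematic of Wiig's (1997) iterative review-conceptualize-reflect-act cycle.

    Each step is a placeholder; in practice it is a substantial organizational
    activity, not a function call.
    """
    log = []
    for cycle in range(1, iterations + 1):
        for step in ("review", "conceptualize", "reflect", "act"):
            log.append(f"cycle {cycle}: {step}")
    return log


if __name__ == "__main__":
    print("\n".join(wiig_cycle()))
```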

A much more sophisticated model was presented by Rubenstein-Montano, Liebowitz, Buchwalter, McCaw, Newman, and Rebeck (2001b), which according to the authors addressed several shortcomings of other models. The authors argued that existing models lacked detail, did not include an overarching framework, and failed to address the entire knowledge management process. Their model consists of five phases: (1) strategize, (2) model, (3) act, (4) revise, and (5) transfer, as depicted in Figure 2 below. Each phase can loop back to the previous one if it is determined that further work within a particular phase is required. The strategize phase covers strategic planning, business needs analysis, and a cultural assessment of the organization. The model phase involves conceptual planning, covering knowledge audits and the design of the plan to store and distribute the knowledge. The act phase focuses on capturing, organizing, creating, and sharing the knowledge. The revise phase consists of implementing the system, reviewing the knowledge, and evaluating the achieved results. The transfer phase publishes the knowledge so it can be used to create value for the organization and considers expansion of the knowledge base.

Figure 2. The Rubenstein-Montano et al. model

[image: rubenstein-montano-km-model]

A later model, presented by Chalmeta and Grangel (2008), sought to simplify existing approaches and provide a generic knowledge management implementation model. The authors argued that all knowledge management systems use some sort of computer system, and the implementation methodology should therefore reflect that need. This model also consists of five phases: (1) identification, (2) extraction, (3) representation, (4) processing, and (5) utilization, as depicted in Figure 3 below. The identification phase focuses on identifying the knowledge to be stored and classifying it into categories. The extraction phase involves transforming the knowledge from its existing state into the format used in the knowledge management system. The representation phase creates a model or diagram that maps the knowledge in the system. The processing phase defines the technology platform used to display and share the knowledge. The utilization phase involves deploying the knowledge portal and training the members of the organization to use the system.

Figure 3. The Chalmeta and Grangel model

[image: chalmeta-grangel-km-model]
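Because Chalmeta and Grangel frame their methodology around a computer system, its five phases can be read as stages of a processing pipeline. The sketch below is a hypothetical illustration of that reading; the phase functions, the sample record format, and the "intranet portal" platform are invented, not part of the published methodology.

```python
# Hypothetical pipeline reading of the Chalmeta and Grangel (2008) phases.
# Each function is a placeholder standing in for a substantial project activity.
def identification(sources):
    """Identify and classify the knowledge to be stored."""
    return [{"topic": s, "category": "uncategorized"} for s in sources]


def extraction(items):
    """Transform knowledge from its existing state into the system's format."""
    return [{**item, "format": "article"} for item in items]


def representation(items):
    """Build a simple map of the knowledge now held in the system."""
    return {item["topic"]: item for item in items}


def processing(knowledge_map):
    """Choose the platform used to display and share the knowledge."""
    return {"platform": "intranet portal", "entries": knowledge_map}


def utilization(portal):
    """Deploy the portal; user training would accompany this step."""
    return f"deployed {len(portal['entries'])} entries to the {portal['platform']}"


if __name__ == "__main__":
    sources = ["customer onboarding", "release checklist"]
    print(utilization(processing(representation(extraction(identification(sources))))))
```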

The models presented here are a sampling from the literature. The Wiig (1997) model for constructing a knowledge management system appears very simple, yet Diakoulakis et al. (2004) explained that this simplicity is deceptive because the model can “build, transform, organize, deploy and use knowledge” (p. 37). The Rubenstein-Montano et al. (2001b) model tried to fix the shortcomings of the models that came before it. In an attempt to generalize and simplify a knowledge management implementation model, Chalmeta and Grangel (2008) created another model that included elements of both of the aforementioned models and attempted to provide a more generic and complete framework for implementing a knowledge management system. Regardless of which model is chosen, many factors contribute to the success of the project.

Factors for Success

For the project manager, there are many factors to consider when deciding to implement a knowledge management system. Below is a synthesis of factors from the existing research that affect an organization's ability to implement a knowledge management system successfully: (1) managerial support, (2) a supportive culture, (3) incentives for motivation, (4) technology that matches the strategy, (5) ways to assess the value of the process, (6) specialists and processes, and (7) training. Each of these factors is discussed in detail below.

The Support of Management

Without the support of management, the implementation of a knowledge management system will not work. According to a study by Holsapple and Joshi (2000), a major factor contributing to successful implementation was the behavior of the management team, who provided the impetus and modeled the behavior that demonstrated a desire to use the knowledge management system. Massey, Montoya-Weiss, and O'Driscoll (2002), who studied the implementation of a knowledge management system at Nortel Networks, explained that managerial leadership provided control and coordination and, most importantly, ensured that the knowledge management strategy was aligned with the business strategy. A similar study by Sharp (2003) explained that the way employees acted during the implementation was a direct reflection of the behavior of management. Therefore the support of management not only provides the push to make implementation happen; it also sets the tone that defines the culture and acceptance of a knowledge management system.

The Proper Culture of Collaboration

The culture created by management greatly influences the success of a knowledge management system implementation. Anklam (2002) explained that knowledge management and knowledge creation require collaboration on a much greater level; individuals within the organization must develop a sense of trust between them that facilitates the sharing of knowledge. According to research by Ruggles (1999), knowledge management without a culture of collaboration will not succeed, as collaboration is “strongly conducive to knowledge generation and transfer” (p. 300). Gold, Malhotra, and Segars (2001) explained that collaboration is important for the transfer of tacit knowledge between individuals within an organization. Chourides, Longbottom, and Murphy (2003) found that a coaching leadership style that established a learning culture was among the most significant factors for a successful knowledge management system implementation. Thus, management's vision, including a vision of an organizational culture of collaboration, is required for the implementation of a knowledge management system to succeed.

Incentives for Motivation

Also included within the culture of an organization is motivation for individuals in the form of incentives. According to research by Yahya and Goh (2002), connecting rewards and compensation to an individual's performance appraisal can have a positive impact on that individual's motivation to use a knowledge management system. Huber (2001) explained that to motivate individuals to share knowledge, the organization's reward policies must promote sharing; he further explained that the organization should publicize and celebrate instances of knowledge sharing that benefited the organization. Research by Darroch (2005) appears to validate these earlier works, finding that an organization's knowledge-sharing culture was directly affected by performance incentives. Therefore, if management offers incentives, the workers within the organization will be motivated to share knowledge.

Technology That Matches the Strategy

Information and communication technology, when matched to the business strategy for knowledge management, plays an integral role in a successful implementation of a knowledge management system. The two major strategies for knowledge management are classified as codification and personalization. According to Zack (1999), codification is a process whereby tacit knowledge is captured in some electronic form and then shared around the organization, thus making it explicit. He further explained that in this model, information technology is used like a pipeline to move knowledge around the organization. Because this model uses extensive technology and knowledge specialists to capture and store the knowledge, the monetary investment is very high. The second strategy, personalization, according to Hansen, Nohria, and Tierney (1999), uses information and communication technology to facilitate conversation from person to person, through which the participants transfer tacit knowledge. This model uses much less technology and therefore costs much less. It is important to note that many scholars (Alavi & Leidner, 1999; Borghoff & Pareschi, 1997; Wong, 2005) stated that information and communication technology should not be considered an end unto itself but only a tool, as the wrong attitude toward technology can cause the entire knowledge management process to stagnate. Thus, matching the knowledge management strategy to the information technology budget of the organization has a significant impact on the successful implementation of a knowledge management system.

Assigning Value to the Process

Once a knowledge management system is in place, it is important to express to management how well the system enhances the business strategy. This can be difficult, as many of the benefits created by a knowledge management system, such as the goodwill and customer loyalty generated by extra attention, are intangible and very difficult to measure (Snowden, 2002). In research conducted by Park, Ribiere, and Schulte (2004), management considered the implementation successful only when there was some concrete way of measuring its positive impact. This same attitude is echoed by Bose (2004), who explained that the ability to measure the value of a knowledge management system is critical to sustaining management's support; only with some way to measure the results can management assist in solving problems in the system. Jennex and Olfman (2008) added that a successful implementation requires the ability to measure several factors of success, among them (1) information quality, (2) user satisfaction, and (3) system quality, each of which adds to the measurement of the benefits of implementing the system. Therefore, defining a method to measure the success of the knowledge management system implementation, although somewhat difficult, not only informs management but also helps the project manager garner continued support and sustained use of the system.
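One way to make these success factors concrete for management is a simple weighted scorecard. The factor names below come from Jennex and Olfman as cited above; the weights and the 1-5 rating scale are assumptions invented purely for illustration.

```python
# Hypothetical weighted scorecard for the three success factors named above.
# The weights and the 1-5 scale are illustrative assumptions, not values taken
# from Jennex and Olfman (2008).
WEIGHTS = {
    "information_quality": 0.4,
    "user_satisfaction": 0.3,
    "system_quality": 0.3,
}


def km_success_score(ratings):
    """Combine 1-5 ratings for each factor into a single weighted score."""
    return sum(WEIGHTS[factor] * rating for factor, rating in ratings.items())


if __name__ == "__main__":
    quarterly_ratings = {"information_quality": 4, "user_satisfaction": 3, "system_quality": 5}
    print(round(km_success_score(quarterly_ratings), 2))  # -> 4.0
```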

People and Processes

Special roles are needed to maintain the knowledge management system. According to Zack (1999), specific roles are required to maintain the knowledge management system within an organization, including people to gather, refine, and distribute explicit knowledge throughout the organization, and IT support for the technology that holds the repository. Grover and Davenport (2001) took this notion further and suggested the role of chief knowledge officer, which fulfills many purposes, including signaling that the organization is serious about knowledge management; this role also serves as the chief designer of the knowledge architecture. According to Coombs and Hull (1998), there must also be many knowledge managers within an organization who are familiar with knowledge management and who facilitate the sharing of knowledge among different departments. Therefore these roles and responsibilities help to maintain the system and show the support of executive management.

Training

Individuals within an organization need to be trained not only on the technology used to share knowledge but also to raise their awareness of how to manage knowledge and to see it as a valuable resource for the organization. Because knowledge exists within the minds of individuals, without proper training an employee is not motivated to use a knowledge management system and share that knowledge (Bhatt, 2001). Research by Hung, Huang, Lin, and Tsai (2005) into critical factors for the adoption of a knowledge management system found that one of the biggest factors for successful implementation and for increasing an organization's competitiveness was effective training of employees to recognize the importance of the knowledge management system. Another important reason for training employees is to give them a common language and a shared perception of how they think about and define knowledge (Liebowitz, 1999). Therefore, training is a key success factor not only because employees need to know how to use the knowledge management system, but also because it teaches individuals to recognize knowledge and understand the value it represents to the organization.

The seven factors for successful implementation of a knowledge management system outlined here give the project manager a starting point for assessing the readiness of a particular organization. The project manager must consider how much support management, and especially senior management, will give to the project. Another aspect requiring consideration is the culture of the organization: the project manager must reflect on that culture and note whether management is promoting one conducive to the plan. The culture created by management will also need to provide incentives to help foster the sharing of knowledge among members. A major consideration is the availability of technology and the people to support it; some plans can be very expensive, and a good review of the organization's technological infrastructure is needed before a serious plan can be made. Also of great importance is training for the individuals who will use the system: not only will they need to know how to use the system, but also how to recognize when something is knowledge worth storing.

Knowledge Management Implementation Is Still Evolving

It is clear from this research that a project manager who wants to improve the sharing of knowledge both within and across projects can benefit from a knowledge management system. From the progression of knowledge management models demonstrated above, it is clear that researchers' understanding of how to implement a knowledge management system is still evolving. The research presented on the factors for success demonstrates that work is still ongoing to understand the critical success factors for knowledge management system implementation. A core set of knowledge on how to successfully implement a knowledge management system seems to exist, yet the constant evolution of technology continues to change how a system might be implemented.

References

Alavi, M., & Leidner, D. (1999). Knowledge management systems: Emerging views and practices from the field. In Hawaii International Conference on System Sciences (p. 7009). Published by the IEEE Computer Society.

Anklam, P. (2002). Knowledge management: the collaboration thread. Bulletin of the American Society for Information Science and Technology, 28(6), 8-11.

Bhatt, G. (2001). Knowledge management in organizations: examining the interaction between technologies, techniques, and people. Journal of Knowledge Management, 5(1), 68-75.

Borghoff, U. M., & Pareschi, R. (1997). Information technology for knowledge management. Journal of Universal Computer Science, 3(8), 835-842.

Bose, R. (2004). Knowledge management metrics. Industrial Management & Data Systems, 104(6), 457-468.

Bresnen, M., Edelman, L., Newell, S., Scarbrough, H., & Swan, J. (2003). Social practices and the management of knowledge in project environments. International Journal of Project Management, 21(3), 157-166. doi:10.1016/S0263-7863(02)00090-X

Chalmeta, R., & Grangel, R. (2008). Methodology for the implementation of knowledge management systems. Journal of the American Society for Information Science & Technology, 59(5), 742-755.

Chourides, P., Longbottom, D., & Murphy, W. (2003). Excellence in knowledge management: an empirical study to identify critical factors and performance measures. Measuring Business Excellence, 7(2), 29-45.

Coombs, R., & Hull, R. (1998). ‘Knowledge management practices’ and path-dependency in innovation. Research Policy, 27(3), 239-256.

Darroch, J. (2005). Knowledge management, innovation and firm performance. Journal of Knowledge Management, 9(3), 101-115.

Diakoulakis, I. E., Georgopoulos, N. B., Koulouriotis, D. E., & Emiris, D. M. (2004). Towards a holistic knowledge management model. Journal of Knowledge Management, 8(1), 32-46.

Gillingham, H., & Roberts, B. (2006). Implementing knowledge management: A practical approach. Journal of Knowledge Management Practice, 7(1).

Gold, A. H., Malhotra, A., & Segars, A. H. (2001). Knowledge management: An organizational capabilities perspective. Journal of Management Information Systems, 18(1), 185-214.

Grover, V., & Davenport, T. H. (2001). General perspectives on knowledge management:

Fostering a research agenda. Journal of Management Information Systems, 18(1), 5-21.

Hansen, M. T., Nohria, N., & Tierney, T. (1999). What’s your strategy for managing knowledge? Harvard Business Review, 77(2), 106-116.

Holsapple, C. W., & Joshi, K. D. (2000). An investigation of factors that influence the management of knowledge in organizations. The Journal of Strategic Information Systems, 9(2-3), 235-261. doi:10.1016/S0963-8687(00)00046-9

Huber, G. P. (2001). Transfer of knowledge in knowledge management systems: unexplored issues and suggested studies. European Journal of Information Systems, 10(2), 72-79.

Hung, Y. C., Huang, S. M., & Lin, Q. P. (2005). Critical factors in adopting a knowledge management system for the pharmaceutical industry. Industrial Management & Data Systems, 105(2), 164-183.

Jennex, M. E., & Olfman, L. (2008). A model of knowledge management success. In Current Issues in Knowledge Management (pp. 34-52). Hershey, PA: Information Science Reference.

Kasvi, J. J. J., Vartiainen, M., & Hailikari, M. (2003). Managing knowledge and knowledge competences in projects and project organisations. International Journal of Project Management, 21(8), 571-582. doi:10.1016/S0263-7863(02)00057-1

Kuhn, O., & Abecker, A. (1997). Corporate memories for knowledge management in industrial practice: Prospects and challenges. Journal of Universal Computer Science, 3(8), 929-954.

Liebowitz, J. (1999). Key ingredients to the success of an organization’s knowledge management strategy. Knowledge and Process Management, 6(1), 37-40.

Liebowitz, J., & Megbolugbe, I. (2003). A set of frameworks to aid the project manager in conceptualizing and implementing knowledge management initiatives. International Journal of Project Management, 21(3), 189-198. doi:10.1016/S0263-7863(02)00093-5

Massey, A. P., Montoya-Weiss, M. M., & O’Driscoll, T. M. (2002). Knowledge management in pursuit of performance: Insights from nortel networks. MIS Quarterly, 26(3), 269-289.

Nonaka, I., & Takeuchi, H. (1995). The knowledge creating company: How Japanese companies create the dynamics of innovation. Oxford, UK: Oxford University Press.

Park, H., Ribiere, V., & Schulte, W. (2004). Critical attributes of organizational culture that promote knowledge management technology implementation success. Journal of Knowledge Management, 8(3), 106.

Polanyi, M. (1966). The tacit dimension. London: Routledge and Kegan Paul.

Rubenstein-Montano, B., Liebowitz, J., Buchwalter, J., McCaw, D., Newman, B., & Rebeck, K. (2001a). A systems thinking framework for knowledge management. Decision Support Systems, 31(1), 5-16. doi:10.1016/S0167-9236(00)00116-0

Rubenstein-Montano, B., Liebowitz, J., Buchwalter, J., McCaw, D., Newman, B., & Rebeck, K. (2001b). SMARTVision: A knowledge-management methodology. Journal of Knowledge Management, 5(4), 300-310.

Ruggles, R. (1999). The state of the notion: knowledge management in practice. The Knowledge Management Yearbook 1999-2000, 295.

Sharp, D. (2003). Knowledge management today: Challenges and opportunities. Information Systems Management, 20(2), 32.

Snowden, D. (2002). Complex acts of knowing: Paradox and descriptive self-awareness. Journal of Knowledge Management, 6(2), 100-111.

White, D., & Fortune, J. (2002). Current practice in project management: An empirical study. International Journal of Project Management, 20(1), 1-11. doi:10.1016/S0263-7863(00)00029-6

Wiig, K. M. (1997). Knowledge management: Where did it come from and where will it go? Expert Systems with Applications, 13(1), 1-14. doi:10.1016/S0957-4174(97)00018-3

Wiig, K. M., De Hoog, R., & Van Der Spek, R. (1997). Supporting knowledge management: A selection of methods and techniques. Expert Systems with Applications, 13(1), 15-28.

Wong, K. Y. (2005). Critical success factors for implementing knowledge management in small and medium enterprises. Industrial Management And Data Systems, 105(3/4), 261.

Yahya, S., & Goh, W. K. (2002). Managing human resources toward achieving knowledge management. Journal of Knowledge Management, 6(5), 457-468.

Zack, M. H. (1999). Managing codified knowledge. Sloan Management Review, 40(4), 45-58.

Towards Understanding Deploying Open Source Software

Towards Understanding Deploying Open Source Software:

A Study of Factors That Can Impact the Economics of an Organization.

Abstract

The purpose of this paper is to present the factors that can impact the decision to deploy Open Source Software (OSS) in an organization. Many organizations fail to understand the economics involved in using open-source software and consequently suffer poor results. An explanation of the Open Source Initiative (OSI) and a definition of Free/Libre Open Source Software (F/LOSS) are provided, along with a discussion of the benefits and pitfalls of deployment in the context of the value chain. The results conclude that the benefits outweigh the risks and that profit/benefit is possible if the economic impact is understood.

Towards Understanding Deploying Open Source Software

Without the technology that runs Information Systems (IS), most organizations would cease to function. The business model of a typical organization is tied to the systems and technology that it uses. Since the advent of e-commerce and the emergence of a nearly global market for products, organizations constantly look for technological innovation that will give them a competitive edge. A recent study pointed out that this advantage is temporary, as competing companies will copy the technology or innovate something newer and cheaper, which creates a perpetual requirement to adapt business processes (Ward and Peppard, 2004). Open Source Software (OSS), if properly implemented, can become a key part of the innovation and adaptation of business organizations, helping them to maintain a competitive edge.

Background

Ever since Porter (1996) introduced the Value Chain Analysis business concept in Harvard Business Review, consultants have tried to use the resulting methodology to evaluate the quality of each link in the chain. A value chain is a series of interrelated tasks and activities performed by an organization to produce a product. As the product passes through each activity, or "link," in the chain, it is refined and takes on more value. The total value created by all of the refinement activities is greater than the value created at any single step in the chain. One way an organization might improve the quality of these links is to use OSS in its Information Systems (IS).

There are many advantages to using OSS to improve the value of each link and the relationships between them. How a company uses IS has a significant influence on the relationships between the activities in a value chain (Porter and Millar, 1985). IS helps a company create and maintain competitiveness because competitiveness flows from creating value for the customer. The activities that create value for a company, such as purchasing, production and sales, are not independent but rely on each other in the value chain. Porter and Millar (1985) concluded that the proper use of information technology minimizes costs while maximizing value, optimizing value activities, and guaranteeing competitive advantages (p. 151). These relationships between activities can be strengthened by good use of IS, and quality OSS can improve competition and create greater value.

The Open Source Initiative

According to the website About the Open Source Initiative (n.d.), the Open Source Initiative (OSI) was incorporated in 1998 and is a non-profit corporation whose primary purpose is to provide education about Open Source Software (OSS) and to advocate for its benefits. The OSI also claims to exist to build bridges among the constituents of the open-source community. It further claims that one of its most important activities is to act as a standards body that maintains the definition of OSS. The OSI also holds a trademark, the Open Source Initiative Approved License, around which it attempts to build a nexus of trust so that all parties involved can cooperate on the use of OSS.

Definition of F/LOSS

The definition of free/libre open source software (F/LOSS) is often misunderstood. The free part of the definition is about the liberty of use with the product and not about the price. The Free Software Foundation (2010) maintains the following definition:

“Free software” is a matter of liberty, not price. To understand the concept, you should think of “free” as in “free speech,” not as in “free beer.” Free software is a matter of the users’ freedom to run, copy, distribute, study, change and improve the software. More precisely, it means that the program’s users have the four essential freedoms:

  • The freedom to run the program, for any purpose (freedom 0).
  • The freedom to study how the program works, and change it to make it do what you wish (freedom 1). Access to the source code is a precondition for this.
  • The freedom to redistribute copies so you can help your neighbor (freedom 2).
  • The freedom to distribute copies of your modified versions to others (freedom 3). By doing this you can give the whole community a chance to benefit from your changes. Access to the source code is a precondition for this.

Embracing the Business Model

The decision to deploy OSS in an organization is difficult at best. The idea of embracing the Open Source Initiative (OSI) business model as a business strategy is even more frightening for organization executives. There are dissenting opinions on the value of using OSS from many directions that attempt to sway the decision for many reasons: some are based on misinformation, others on fallacious economic reasoning, and others on fear. Behlendorf (1999) explained that the OSI model is not for everyone, and it is often implemented incorrectly and then blamed for the failure. In fact, the failure is more often the result of a poor understanding of how to embrace the model and deploy OSS than of the OSS itself.

All or Nothing?

There are multiple ways to deploy OSS in an organization; it is not an all-or-nothing approach. For example, a school might deploy an open-source office suite, such as OpenOffice, in place of the commonly used Microsoft Office. A commercial company might instead use a module of OSS code to provide functionality in a product or as a value add-on to an existing proprietary product; the company ObjectWeb, for example, had several products that embedded OSS components. In the most extreme case, a company might open the source code of one of its products to try to create a competitive advantage and gain market share. One of the best-known examples of this move is Netscape, which opened the source code of its web browser as Mozilla. Each of these methods of deploying OSS shares some economic-impact considerations that need to be weighed before making the move to OSS.

The implementation of OSS, if properly understood, can be a reliable asset in the value chain. One of the biggest hurdles to overcome is the idea that the source code, once kept secret and valued as the company's profit maker, is now opened and shared, even with competitors, to produce a better product. A recent study showed that if a company decides to open the source code of a product, it will often realize greater profit if it has a complementary product that the OSS application enhances, because the company can exploit the benefits of combining the open and closed products (Haruvy, Sethi and Zhou, 2008). In a related work the author explained that not all code has to be shared; the pieces that differentiate the organization and make it competitive can be kept and sold separately (Behlendorf, 1999). Allowing the core code to be open and shared, meanwhile, enables improvements that can strengthen and enhance the product and open new paths for innovation.

The Symbiosis

There is a symbiotic relationship between an organization that chooses to use OSS and the OSI community. When the two work together well, success is likely. A recent study identified four factors that defined the most successful projects: (1) a solid core developer group, (2) an active peripheral group in communication, (3) a high level of communication exemplified by a depth of threaded communications, and (4) only moderate dependence on the internal community for beta testing. The research found a direct correlation, with predictive power, between the level of communication and the code development within a project (Vir Singh, Fan, and Tan, 2007). This means that if an organization choosing to deploy and use OSS shares these qualities, it has a greater chance of success. There are many examples of companies that have successfully opened their source code and created a symbiotic relationship with the OSI community.

The Mozilla Example

Organizations, even commercial firms, can benefit from opening their source code. One possible benefit is to gain ground against a competitor. In a research study Lerner and Tirole (2005) explained how Netscape decided in 1998 to open the source code of a portion of its browser as "Mozilla." At that time Internet Explorer was dominating the browser market and Netscape held only a tiny share. The study further showed that the web browser application became more widely accepted and that, by opening this source code, Netscape experienced an increase in market share and profit. This means it is possible for an organization to remain commercial, open its source code, and make a profit. When the point of the OSI is understood, there is great value in the process.

The Value In Using F/LOSS

About ten years ago OSS was relegated to the role of operating system (Linux) and web server (Apache). Now OSS is firmly in the middle layer, providing databases (MySQL, PostgreSQL) and email servers (Qmail, Horde) among many other functions. OSS has two very distinct properties: the source code is accessible, and the developer has the liberty to modify the code in any way they want and redistribute pieces of it or the entire source code (Thomas and Hunt, 2004). This is important, particularly to the OSI community, because access to the source, and the liberty to reuse it, creates the opportunity to use existing software as a launching point for new products based on new designs. It gives the developer access to high quality source code readily available with a few mouse clicks in a web browser. The liberty to use the code as desired and the availability of the source code remove several barriers that exist when attempting to reuse code. A developer may choose to reuse a few lines of code, an entire class or an entire system (Frakes and Terry, 1996). This means the reuse of the source code has several advantages, including the ability to break out of lock-in with a specific vendor, produce high quality products, get the product to market quicker, and foster innovation, which adds value and increases customer satisfaction.
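
To make the granularities of reuse described above concrete, the following minimal sketch (not drawn from the cited studies; the helper name and its behavior are illustrative assumptions) shows two levels of reuse: importing an entire open-source component versus incorporating a single small routine adapted from an OSS project under its license.

    # A minimal sketch of two granularities of OSS reuse.
    # 1. Reuse an entire open-source component: Python's json module (itself
    #    open-source code) supplies a complete serializer, so nothing is rewritten.
    import json

    record = {"product": "widget", "link_value": 12.5}
    print(json.dumps(record))

    # 2. Reuse a single routine: a small helper adapted from a hypothetical OSS
    #    project (illustrative only), kept with attribution and a license notice.
    def slugify(title: str) -> str:
        """Turn a product title into a URL-friendly identifier."""
        return "-".join(title.lower().split())

    print(slugify("Open Source Value Chain"))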

OSS is developed by many groups, often spread out globally. Most often the initial creation of the software program is done to fulfill a need by a single group of developers at one company or school. This group will release the code to the public before it is completed, usually to help spur further development. Perens (2005) explained that other companies may pick up this code and extend or adapt it to their use. This means new features and functionality can be added or adapted as needed by the organization that chooses to use OSS. The code is often of very high quality and developers are offered incentives to write high quality code.

The Quality of the Code

One common objection to using OSS as part of the IS in an organization's strategic portfolio is the quality of the source code. Quality, in this instance, is defined as well-designed, well-written, relatively error free and functional. Two major influences drive the level of quality in an open-source project. Gacek and Arief (2004) explained that the structure of an OSI development community is based most closely on a meritocracy: the better the quality of the code a developer writes, the more merit that developer has within the community. Perens (2005) explained that the developers who modify OSS code have an incentive to write the code well so that their changes get incorporated into the main body of the code; that way they do not have to spend money re-integrating their changes into the existing code base every time they want to implement an update. The value of a meritocracy is that a developer has incentive to write quality code and to become a respected member of the community. These incentives help to promote high quality, well-written code in OSS projects.

Some argue that the quality of OSS code is so good that it is unfair to compare it to proprietary or closed source projects. McConnell (1999) stated that successful OSI projects should not be compared to typical closed source projects and instead should be compared with the software development effectiveness achieved by leading-edge companies that use multiple practices to produce high quality software. In other words, quality should be compared within equivalent methodologies to determine whether the resulting code is equal. Golden (2008) reported a case study by Coverity, a company that tests software quality, which found that OSS programs averaged half the number of bugs per thousand lines of code when compared to proprietary programs (p. 36). This study shows that the quality of OSS code, when compared to equivalent proprietary programs, is superior.
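
As a worked illustration of the defects-per-KLOC comparison above (the counts below are invented for the example, not figures from the Coverity study), the arithmetic looks like this:

    # Defect density: defects per thousand lines of code (KLOC).
    def defects_per_kloc(defects: int, lines_of_code: int) -> float:
        return defects / (lines_of_code / 1000)

    # Illustrative numbers only: a codebase with half the defects of another
    # codebase of the same size has half the defect density.
    oss = defects_per_kloc(defects=400, lines_of_code=1_000_000)
    proprietary = defects_per_kloc(defects=800, lines_of_code=1_000_000)
    print(f"OSS: {oss:.2f} defects/KLOC, proprietary: {proprietary:.2f} defects/KLOC")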

Breaking Vendor Lock-In

One aspect of OSS that can present an opportunity to create a competitive advantage is the ability to modify the source to fit the organization's business model. A recent study showed that a major strategy used by software vendors is to keep the cost of switching to another product high by implementing proprietary data formats and keeping tight control over the source code (Carillo and Okoli, 2008). When an expensive proprietary product is purchased for use in a company, Castelluccio (2008) explained, the product often becomes "sticky" or "locked-in," either because management wants to earn a positive return on the investment or because the cost of switching would be excessively high. For example, the data storage format could lack interoperability with other vendors' software, preventing import and export of information. These concerns can also be accompanied by an inability to fix bugs in a timely manner or to get the vendor to respond quickly to the organization's needs. Golden (2008) showed that one of the primary reasons organizations are adopting OSS is that, because it includes the source code, the organization is free to strip out unneeded functions and make repairs itself, thus eliminating dependence on the vendor. The study further showed that because no single vendor controls OSS, the organization is free to pick a different provider to support its program. For the organization this means the liberty to make the changes it needs and customize the software to fit its business model, which in turn helps increase the value of individual links within the value chain and strengthens its competitive advantage.
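
One concrete way to limit the lock-in described above is to keep organizational data in open, documented formats so it can move between tools. The following minimal sketch (the record fields are illustrative assumptions, not a specific vendor's schema) writes records to CSV, which any replacement system can read.

    import csv
    import io

    # Records that might otherwise live only in a vendor's proprietary format.
    orders = [
        {"id": 1, "item": "server", "qty": 2},
        {"id": 2, "item": "license", "qty": 5},
    ]

    # Writing to CSV (an open, documented format) keeps the data portable, so a
    # later switch of tools does not depend on the original vendor's software.
    buffer = io.StringIO()
    writer = csv.DictWriter(buffer, fieldnames=["id", "item", "qty"])
    writer.writeheader()
    writer.writerows(orders)
    print(buffer.getvalue())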

Quicker To Market

Using OSS can provide a competitive advantage, especially when it is used in the development of new applications. According to a research study by Ajila and Wu (2007), there is a strong correlation between the ability to get a product to market sooner and the adoption of OSS, which provides economic gains in both productivity and product quality. As long as the source code meets the needs of the project, incorporating it will shorten the development time. That means that by adding OSS to a project the development time can be shortened, the cost to develop decreased, and the time to market reduced. These factors can give an organization an advantage and the ability to strengthen the bonds in the value chain and thus add value to the product.

Innovation

Innovation is truly a key part of the OSI movement. The ability to reuse existing code and create something new is vital to the success of an organization. Vujovic and Ulhøi (2008) explained that the model known as open innovation, now seen in a global market, allows for greater innovation in product research and development. Companies' ability to stay competitive is no longer exclusively determined by efficient cost management and marketing capabilities. Rather, it relies increasingly on the continuous development of new and superior products and services in a business environment characterized by growing instability in consumer preferences and technology development. A recent study found that within a three-year period a small core of developers, assisted by a transitory group of less committed developers, was able to create several new products using existing OSS from SourceForge; the ability to use the code gave them a measurable advantage in creating applications (David and Rullani, 2008). This means the competitive edge has to come from a wider body of developers and researchers, and with such strong competition in a global market, using the OSI model, where innovation happens across a wide community, will help an organization stay competitive.

The Pitfalls

There are many ways to use OSS improperly. When deploying OSS to create or increase a competitive advantage, there are just as many ways to implement the project poorly as there are to succeed. Before an organization incorporates a single line of OSS code into a project, or opens its source to the world, there are some concerns that bear a sober review.

No Reliable Release Schedule

The OSI community does not employ the same controls over release schedules as a commercial vendor. According to a recent study, the level of control exerted on an OSS development project is much lower than that of a commercial counterpart, and such projects thus have less reliable release schedules; only the most active and highly used projects, such as Apache, have reliable release schedules (Capra, Francalanci, and Merlo, 2006). This means release schedules are not exact and may not deliver as planned. If an organization is counting on the deployment of a certain OSS project, there is a risk that the project might not deliver what is planned or promised, as OSS projects do not exert the same type of control over release dates.

The Developer Skill Set

The developers who will implement OSS in an organization do not need special skills, but they do need an understanding of OSS and how to reuse source code properly. A recent study showed that while there is no strong statistical significance between OSS reuse skill and experience and software development economics, there is some statistical correlation indicating that software reuse experience and skill in general are important when reusing OSS (Ajila and Wu, 2007). That means it is important that the developers engaged in the process have at least a general understanding of code reuse and some familiarity with OSS before attempting an integration. Otherwise, failure and cost overruns from a lack of preparation are likely.

Ideology Over Pragmatism

Although OSS offers many benefits, it does not fit all situations, and a clear pragmatic perspective is required because relying only on ideology can lead to missing out on better solutions. OSS often makes its way into a company through the developers who work on projects, and their mindset toward OSS can be ideological and prevent them from considering alternatives. According to a study by Ven and Verelst (2008), if a developer's mindset about OSS is ideological, the decision will not be pragmatic; instead the developer will have a strong preference for using OSS without properly considering proprietary alternatives. The suitability of the OSS solution for the organization-specific context is sometimes overlooked, and newer innovations might be ignored. This means the developers, who are often the decision makers about which products to use, might adopt an OSS solution that does not fit as well because they are averse to considering proprietary solutions or do not properly weigh all alternatives to find the best fit.

Maintenance

Another pitfall in deploying OSS within an organization is failure to create the symbiotic relationship with the OSI community. An organization could choose to download, modify and deploy OSS solutions within its value chain to gain a competitive advantage. According to Dahlander (2004), some organizations stop at this point and, for a time, enjoy the benefit of using high quality code for free in their primary or secondary value chain activities. This, however, is a short-sighted plan, and only a short-term gain is realized, because costs arise when the need for updates and upgrades appears. The organization may go back to the project where it acquired the source code only to find the project discontinued or morphed into something different that is far from meeting its needs. The modifications made to the original OSS code should be returned to the community so that they are incorporated into the code base, thus helping to maintain the symbiosis. The maintenance of the OSS code is as important as the value gained from the low price. This means an organization should take the time to create the relationship with the OSI community and not focus solely on the short-term gain realized by inserting free source code into its value chain.

Legal Issues

One important part of using OSS in a product that is distributed outside the organization is preserving the intellectual property rights of the people who worked hard to create the software. When reusing OSS or incorporating the code into a product that is distributed outside the organization, credit must be given to the original developer and the source code must be made available where the license requires it, or the organization would be in violation of the license that accompanies the OSS. According to research performed by Walsh and Tibbetts (2010), infringing a registered copyright carries with it the risk of statutory damages, an injunction against shipping products that incorporate the OSS, and possibly other penalties; some may find this a surprising consequence of using "free" software. That means misunderstanding the license that comes with OSS can have detrimental effects on an organization. Use of the software is permitted only as long as the organization complies with the terms of the license.
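
A first, modest step toward the compliance review described above is simply knowing which components are in use and what licenses they declare. The sketch below is only a starting point under that assumption, since license metadata is often incomplete and legal review is still required; it lists the license each installed Python package declares.

    from importlib import metadata

    # Inventory installed distributions and the license each one declares.
    # Note: many packages record the license only in their classifiers, so a
    # real compliance check would also inspect classifiers and shipped LICENSE files.
    for dist in metadata.distributions():
        name = dist.metadata.get("Name", "unknown")
        declared = dist.metadata.get("License", "not declared")
        print(f"{name}: {declared}")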

Benefits Outweigh The Risks

The choice of an organization to integrate OSS into the application portfolio is not a simple task. The choice to distribute OSS as part of a product can lead to legal issues if the license assigned to the code is not fully understood. The benefits, though mixed, overall outweigh the risks according to the research offered in this paper. If an organization understands the risks and benefits of deploying OSS, and follows a systematic model for the implementation and reuse of code, there is a good chance they will achieve some economic gain from a shorter development cycle and equally important, a high quality product.

Further Study

One future extension of this study could be a qualitative study involving interviews with executives, managers and developers to provide insight into the intent of each group and to help understand how each of the aforementioned factors affected each group's decision to participate in an OSS project. Another possible extension could be a quantitative survey of executives, managers and developers to provide an analysis of, and a way to predict, success based on the level of understanding of each of the aforementioned factors that impact the decision to deploy OSS.

References

About the Open Source Initiative (n.d.). Retrieved from http://www.opensource.org/about

Ajila, S., & Wu, D. (2007). Empirical study of the effects of open source adoption on software development economics. The Journal of Systems and Software, 80(9), 1517-1529. doi:10.1016/j.jss.2007.01.011

Behlendorf, B. (1999). Open source as a business strategy. In DiBona, C., Ockman, S., & Stone, M. (Eds.), Open sources: Voices from the open source revolution (pp. 149-170). O'Reilly. Retrieved from http://oreilly.com/catalog/opensources/book/brian.html

Capra, E., Francalanci, C., & Merlo, F. (2006). An empirical study on the relationship between software design quality, development effort and governance in open source projects. IEEE Transactions on Software Engineering, 34(6), 765-774. doi:10.1109/TSE.2008.68

Carillo, K., & Okoli, C. (2008). The open source movement: A revolution in software development. Journal of Computer Information Systems, 49(2), 1-9. Retrieved from Business Source Complete Database.

Castelluccio, M. (2008). Enterprise open source adoption. Strategic Finance, 90(5), 57-58. Retrieved from Business Source Complete Database.

Dahlander, L. (2004). Appropriating the commons: Firms in open source software. International Conference on Software Engineering: St. Louis Missouri. doi:10.1145/1083258.1083269

David, P. & Rullani, F. (2008). Dynamics of innovation in an “open source” collaboration environment: Lurking, laboring, and launching FLOSS projects on sourceforge. Industrial and Corporate Change. 17(4), 647-710. doi:10.1093/icc/dtn026

Dornan, A. (2008). The Five Open Source Business Models. Retrieved May 25, 2010, from http://www.informationweek.com.

Frakes, W., & Terry, C. (1996). Software reuse: Metrics and models. ACM Computing Surveys, 28(2), 415-435. doi:10.1145/234528.234531

Free Software Foundation. (2010). The free software definition – gnu project. Retrieved May 25, 2010, from http://www.gnu.org/philosophy/free-sw.html

Gacek, C., & Arief, B. (2004). The many meanings of open source. IEEE Software, 21(1), 1359-1360. doi:10.1109/MS.2004.1259206

Golden, B. (2008). Open source in the enterprise: An O'Reilly Radar report (1st ed.). O'Reilly Media.

Haruvy, E., Sethi, S., & Zhou, J. (2008). Open Source development with a commercial complementary product or service. Production and Operations Management. 17(1), 29-43. Retrieved June 4, 2010, from ABI/INFORM Global. (Document ID: 1477481991).

Lerner, J., & Tirole, J. (2005). The economics of technology sharing: Open source and beyond. Journal of Economic Perspectives, 19(2), 99-120. Retrieved from ProQuest Database.

McConnell, S. (1999). Open source methodology: Ready for prime time? IEEE Software, 16(4), 6-8. Retrieved from Business Source Complete Database.

Perens, B. (2005). The emerging economics of open source software. Retrieved May 11, 2010, from http://perens.com/Articles/Economic.html

Porter, M. (1996). What is strategy? Harvard Business Review, 74(6), 61-78. Retrieved from ProQuest Database.

Porter, M., & Millar, V. (1985). How information gives you competitive advantage. Harvard Business Review, 63(4), 149-160. Retrieved from Business Source Complete Database.

Thomas, D., & Hunt, A. (2004). Open source ecosystems. IEEE Software, 21(4), 89-91. doi:10.1109/MS.2004.24

Ven, K., & Verelst, J. (2008). The impact of ideology on the organizational adoption of open source software. Journal of Database Management, 19(2), 58-72. doi:10.1109/MS.2008.73

Vir Singh, P., Fan, M., & Tan, Y. (2007). An empirical investigation of code contribution, communication participation and release strategies in open source software development: A conditional hazard model approach. Journal of Information Systems and Operations Management. Retrieved from the MIT Open-Source database.

Vujovic, S., & Ulhøi, J. P. (2008). Online innovation: The case of open source software development. European Journal of Innovation Management, 11(1), 142-156. doi:1087747

Walsh, E., & Tibbetts, A. (2010). Reassessing the benefits and risks of open source software. Intellectual Property & Technology Law Journal, 22(1), 9-13. Retrieved from the ProQuest Database.

Ward, J., & Peppard, J. (2004). Beyond strategic information systems: Towards an IS capability. Journal of Strategic Information Systems, 13(2), 167-194. Retrieved from ProQuest Database.

Value Creation/Innovation

Value Creation/Innovation:

Exploring Value Creation Theories

By Russ Wright

Value Creation

Value creation, or innovation, as defined here focuses on creating new products and new ideas within the field of software development. There is much debate in the literature over how value is created within software development. Some literature focused primarily on the economics of software development (Boehm, 2003; Boehm & Sullivan, 2000). Others took a more holistic approach and recognized that culture and process have the greatest effect on the ability to innovate (Highsmith & Cockburn, 2002; Karlsson & Ryan, 2002; Little, 2005; Prahalad & Ramaswamy, 2004; Quinn, Baruch, & Zien, 1996). Regardless of the path taken to explain value creation and innovation in software development, the general agreement was that the process needed to change to better fit the challenges and opportunities of a global market. Accordingly, this paper explores the value creation theories related to software development and the additional challenge of outsourcing to create innovation.

The purpose of this document is to explore how an organization can create value in the software development process. There is a discussion of the background on value creation within the context of agile software development principles, which contain many ways to innovate. This document also explores the theories of outsourcing software development and how they relate to value creation or innovation within agile development. The conclusion finds that creating value in software development is still a topic up for debate among scholars and that outsourcing can add value but is full of pitfalls.

Value Creation Theories Related To Software Development

Achieving value creation has slowly changed the way software is developed into a set of methodologies loosely called agile software development. Research conducted by Fowler and Highsmith (2001) concluded that the agile development philosophy and methods sought to remove cumbersome and time-consuming barriers to value creation. The authors instead supported a philosophy of software development that focused on the individuals, their interaction, creating working software, collaboration with the customer and quick responses to change. There are several theories of value creation or innovation embedded within the principles of the agile philosophy of software development. Therefore, several of these embedded value creation theories are explored below.

Customer Defined Software Value

One theory of value creation in software development is that customer satisfaction is achieved by delivering software that the customer actually values. According to research by Highsmith and Cockburn (2002), the authors explained that within the agile development methodologies the customer defined value for the project and set the measurement of success. For software to be valuable the customer had to find that the product not only met their needs, but also was usable and useful (Constantine & Lockwood, 1999). In an article on agile methodology, Boehm (2002) explained that generating value in the development process for the customer was achieved by emphasizing customer involvement in the development process over the traditional contract negotiation. Consequently, value creation comes from the ability of the customer to define the measure of success, including usability and usefulness of the product and their involvement in the development process as a team member. Making the customer a member of the team means that the requirements might change, many times and even late in the process.

Requirements Changes

A change in the requirements for an application, even late in the development process, allows a customer to build greater value into the product. The ability to prioritize the customer's requirements through a cost-value approach produced a win-win result when creating software products (Karlsson & Ryan, 2002). Prahalad and Ramaswamy (2004) explained that requirements changed because the customer and development team developed a greater understanding of the customer's needs. Research conducted by Paetsch, Eberlein and Maurer (2003) explained that the ability to adapt to the changing situation in a software development project created more value than attempting to predict the customer's requirements. A study by Cao and Ramesh (2008) warned, however, that too much change in requirements can lead to project failure because the customer never sees a finished software product. Thus the ability to change requirements is a positive, yet too much change can prevent a usable product from being delivered.

Quick Delivery

The ability to deliver the software quickly builds value for the customer. According to a study by Quinn, Baruch and Zien (1996), quick delivery of the software product made a significant difference in the ability to compete in the global market. Larman (2004) explained that innovation was accomplished when the developers, through an iterative cycle, delivered a working product to the customer, which allowed the customer to see the progress and adjust requirements quickly to the changing market. Yet Nerur, Mahapatra and Mangalaraj (2005) cautioned that organizations that attempted to adopt agile methods that delivered quickly could fail because they were often unprepared for the radical change in behavior. Therefore, quick delivery of the software product builds value by allowing the customer to adapt and refine the product, but it also holds a potential for disaster if not managed well. Managers, customers and developers have to work together well to build value.

Collaboration with Managers

The development team and managers must communicate daily to build value in the development process. Cohn and Ford (2003) explained that managers had to adapt to a new style of leadership in which they were required to relinquish some of what they perceived as control. The authors posited that traditional development plans offering specific delivery dates were probably padded and inaccurate, and that managers instead needed to participate in the process to see that the team could deliver the product more quickly and with fewer resources. According to research conducted by Patton (2002), the regular discussions with managers added the benefit of clearing roadblocks and bottlenecks, which would otherwise slow down the project and add costs. The managers were also able to see how the product met the customer's needs. A study conducted by Augustine, Payne, Sencindiver and Woodcock (2005) explained that without the constant communication, managers would often fall back into trying to manage the project using linear approaches as they attempted to regain control, which led to lost time and possibly project failure. As a result, constant communication with managers helps create value by not only keeping them informed but also empowering them to clear obstacles that might slow down delivery. Managers can also provide considerable motivation and support to help create value in the software development process.

Motivation and Support

The motivation and support of the management team have a significant impact on value creation in software development. Research conducted by Ceschi, Sillitti, Succi and De Panfilis (2005) into development project success factors found that team members ranked motivation from management, in the form of support and training, among the top key factors for innovation. The study further showed that managers agreed that their support was a significant factor in the success and value creation within the project. A study conducted by Asproni (2004) into the benefits of motivation from management in software development teams showed that highly effective teams benefited most and achieved the best results when management provided, among other factors, clear elevating goals, a unified commitment to the project and a collaborative climate. The study further explained that this support gave the team members the ability to innovate and build better quality software, often with fewer resources. A research survey conducted by Forward and Lethbridge (2002) found that the management team had a significant impact on the performance of the development process when they provided the team with the proper tools to improve automation in the development process. The team saw these tools as support and found motivation to perform at a higher level. Thus, support and motivation from management have a reciprocal effect on the development process and the ability of the developers to innovate and create value.

In Person Meetings for Tacit Knowledge Transfer

Value creation can be significantly affected by the ability of team members to meet and share knowledge. Nonaka and Takeuchi (1995) drew on the work of Polanyi (1966) and explained that tacit knowledge is personal, context specific and difficult to articulate. They further explained that this knowledge, gained from experience, can be lost if an individual leaves an organization and does not share it. Likewise, the members of a development team, which includes the customer, must meet often to share knowledge, establish the context of the requirements and build new knowledge, which creates value (Dyba & Dingsayr, 2008). Cohn (2004) explained that when teams met to transfer knowledge, the use of stories to explain the requirements helped to build up individual and group knowledge as they shared. The author warned that this process might not work well in very large teams, but acknowledged the positive impact of tacit knowledge sharing on innovation. A study conducted by Chau, Maurer and Melnik (2003) found that knowledge sharing within agile development teams, particularly when done face to face, helped to build trust among team members and increased the team's ability to function together. Therefore, sharing of knowledge among team members, especially in face-to-face formats, helps build team trust and create value in the development process.

Working Software as Measurement Of Success.

The ability for management to measure the progress of a development project has a significant impact on the ability of the software development team to create value. Boehm and Turner (2005) explained that agile development processes do not include the typical milestones and other measurement techniques common to traditional development methods. They further explained that the completed functional stories could serve as a replacement for these measures as they showed the amount of work completed on a particular development phase. A case study by Fitzgerald, Hartnett and Conboy (2006) demonstrated that the ability of the management team to measure the progress of the team, based on the amount of working code, increased the project performance by reducing the amount of paperwork needed in the previous traditional software development projects. The developers spent less time writing reports and more time on the actual development, which accelerated the development process. In a recent report, Lapham, Williams, Hammons, Burton and Schenker (2010) explained that progress within an agile project was measured by gathering the customer’s value assigned to each completed part of the development project. As these pieces were completed, they were used as a measure of how much the customer valued the product at that time. Thus, the ability of management to measure progress, along with the customer’s assigned value at each phase of development adds value to the project, particularly because it reduces the reporting workload on the development team freeing them to perform more development tasks.
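
To illustrate the kind of progress measure described above, the following minimal sketch (story names and point values are invented for the example, not drawn from the cited studies) totals the customer-assigned value of completed stories as a simple indicator of how much working software has been delivered.

    # Stories with a customer-assigned value and a completion flag.
    stories = [
        {"name": "login", "value": 8, "done": True},
        {"name": "search", "value": 5, "done": True},
        {"name": "reporting", "value": 13, "done": False},
    ]

    # Progress is read from working, completed functionality rather than reports.
    delivered = sum(s["value"] for s in stories if s["done"])
    planned = sum(s["value"] for s in stories)
    print(f"Delivered customer value: {delivered} of {planned} "
          f"({delivered / planned:.0%} of planned scope)")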

Realistic Schedules

The proper attitude toward development cost and schedule when managing a project will have an impact on the development team's ability to create value. Glass (2001) explained that successful projects had a realistic schedule that was not a death march to finish the project, and the developers worked a normal workweek. Developing innovative products did not require that software development projects be managed by the schedule or costs, as doing so would have distracted from the real objective of creating a profitable product that met customer needs and gave a competitive advantage (Poppendieck & Poppendieck, 2003). Likewise, an exploratory study by Begel and Nagappan (2007) explained that one of the benefits of implementing agile software methods was the flexibility of the process, which gave the developers the ability to change direction when a rigid schedule would not have worked. So, tight management of the schedule and costs, as is common in traditional development processes, would inhibit value creation because it lacks flexibility.

Technical Excellence

The skill level of each member of a development team will have an impact on the value creation ability of an organization. An organizational culture that supported and provided opportunities for growth in skills was desirable because it led to productivity (Wendorff, 2002). A study conducted by Chow and Cao (2008) showed that team capability and delivery strategy ranked highest among critical success factors. Technical excellence also extends to the tools used by the software development team, as good quality tools impacted the ability of the team to innovate (Hanssen & Fægri, 2008). For these reasons, a skilled software development team equipped with the proper tools and supported by opportunities for growth is an important factor for innovation and value creation in software development.

Keep It Simple – Maximize Effort

The design of the software program can impact the ability of the software development team to create value in the development process. In the agile development process, one of the first steps is writing the test for the specific functionality. By writing the test for the code first, the developer would write code to the test and minimize the additional code needed to meet the test requirements and functionality (Poppendieck & Poppendieck, 2003). In a recent paper by Lindstrom and Jeffries (2004) the authors explained that value was achieved by keeping the design as simple as possible so that the design matched the functionality and included no additional wasted motion. They further explained that the design was regularly reviewed to keep effort to the minimum and maximize efficiency. Hence, value creation in software development can be improved by coding only what is required and reviewing the code regularly to increase efficiency.
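
A minimal test-first sketch of the practice described above (the function and the discount rule are invented for illustration): the test is written first to pin down the required behavior, and only enough code is then written to make it pass.

    import unittest

    # The test comes first and states the one behavior the customer asked for.
    class TestDiscount(unittest.TestCase):
        def test_ten_percent_discount(self):
            self.assertAlmostEqual(apply_discount(200.0), 180.0)

    # The implementation comes second: just enough code to satisfy the test,
    # with no speculative options or extra configuration.
    def apply_discount(price: float) -> float:
        return price * 0.9

    if __name__ == "__main__":
        unittest.main()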

Team Self-Organization

The ability of a software development team to reorganize itself into different configurations as the situation dictates can affect the organization's ability to innovate. Cockburn and Highsmith (2002) explained that the ability of a software team to reorganize as the situation dictated was important for making decisions quickly and dealing with ambiguity. A paper by Decker, Ras, Rech, Klein and Hoecht (2005) explained that the ability to reorganize as a development team allowed for the reuse of engineering knowledge in new projects; the benefit of knowledge reuse was a key reason given for reorganizing the team to fit the new situation. Yet there is reason for caution when attempting to use a self-organizing team philosophy. A study conducted by Moe, Dingsøyr and Dybå (2008) found that the highly specialized skills of certain team members and the uneven division of work among the team presented barriers to realizing a true self-organizing team. Therefore, value creation with regard to self-organizing teams depends upon the ability of a development team to reorganize quickly to meet new challenges, and on the balancing of skills and workload within the team.

Team Reflection

The ability of a development team to reflect at regular intervals upon the entire project can impact the team's ability to create value. According to research conducted by Salo, Kolehmainen, Kyllönen, Löthman, Salmijärvi and Abrahamsson (2004), post-iteration workshops provided significant help in improving and optimizing practices and enhancing the learning and satisfaction of the project team. The authors further explained that the cost of the workshops was quite small, and the benefits quite large. Cockburn (2002) echoed this idea and explained that after-process reviews were helpful in growing the skill of the team and improving the skill sets of the participants. A review at the end of the development cycle in which the participants shared their experiences significantly enhanced the development process (Dingsøyr & Hanssen, 2003). Thus, reflection builds the ability of a software development team to innovate by improving and optimizing the team's practices.

Within each of the principles behind agile software development are theories of value creation for a software development team. Allowing the customer to set the measures of success for the product brings value creation by building trust. The ability to adapt to changing requirements allows the development team to innovate and meet the customer's needs. Quick and frequent delivery of a working product, even if it is not complete, builds value for the customer and the development team as both gain credibility. Constant communication with management helps to build trust in the team and gives the team the freedom to innovate and create value for the organization. Closely related to this theory is the need for management to be able to measure progress by measuring the completeness of the project; using completeness as the measure lets management see progress and eliminates additional paperwork for the developers, which frees them to write more code. Also related to management is the use of measures other than cost and schedule, as those would detract from the real goals of the project. Another way that value is created is through the support of management with proper training and tools, which brings about technical excellence. The adaptable and self-organizing team, a difficult goal to reach, also brings about value creation by allowing the team to adapt to the fluid situations found in software development. Lastly, one of the most important and relatively inexpensive ways that a software development team can create value is by reflecting regularly on the development process and integrating the lessons learned, thus constantly improving the ability to innovate.

Value Creation Theories and Outsourcing Of Software Development

So far the exploration of value creation has focused on software development teams in local settings. The additional factor of distance between the development team and the customer or another development team, or even members of the same team presents some additional factors for innovation as well as failure. A review of the current literature shows much disagreement about the benefits and potential for success when outsourcing the development of software. Some of the theories, both pro and con, related to value creation, agile development and outsourcing are explored below.

Effects on Many Levels

Outsourcing of software development in general creates new opportunities for value creation, but it also brings many challenges. A paper by Herbsleb and Moitra (2002) explained that the separation of the software development team across the globe could add many problems, including how the project manager divides up the work and how resistance to the process is handled. They further added that many cultural issues, including attitudes toward management, perceptions of time and communication styles, all contributed to the successful outsourcing of a project. A paper by Ågerfalk, Fitzgerald, Holmström, Lings, Lundell and Conchúir (2005) explained that the processes of communicating between team members, coordinating activities and controlling the project were all challenged by distance. The authors further explained that only when strong supporting processes were in place could the outsourced project work. Hence, the challenges of distance, communication, culture and command and control in an outsourced software development project must be addressed with strong supporting principles and methodologies, like agile, to support value creation.

Even if an organization uses agile methods to create value in an outsourced project, because those processes focus on dealing with ambiguity, change and communication, there are still many challenges that must be overcome. Research by Carmel and Agarwal (2002) identified three critical challenges of outsourcing software development: (1) coordination, (2) control and (3) communication. Coordination was defined as integrating tasks across each unit so they all contribute to the whole. Control was defined as following the goals, policies and standards of the organization. Communication was defined as the exchange of information that is understood by those communicating. Thus, an understanding of how to deal with these three critical challenges is required to achieve value creation when outsourcing. Each of these challenges to outsourcing software development, within the context of agile development principles, is explored further below.

Coordination

The division of tasks when outsourcing software development can impact the ability of an organization to create value. According to Shrivasta and Date (2010), agile teams distributed across too wide a time zone difference suffered from poor performance because they had little overlapping time in which to coordinate activities. One possible solution to coordination problems, suggested by the research of Carmel, Espinosa and Dubinsky (2010), was handing off the work from one site to the next toward the end of the work day, going around the globe in the direction of the sun. The authors admitted that this solution was not yet fully proven, but it did present a method that might help in the coordination of distributed software development teams. Another possible solution, suggested by the research of Wahyudin, Matthias, Eckhard, Schatten and Biffl (2008), was the use of a notification software tool that supported the agile development methodology and managed the interdependent tasks, giving the project manager and the team members a way to coordinate activities. Hence, the coordination of tasks by the project manager and team members must be managed well to achieve value for the software development project.
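
The coordination problem above is partly a matter of arithmetic: how many working hours the distributed sites actually share. The sketch below (the site locations, working hours, and sample date are assumptions chosen for illustration) computes that overlap with Python's standard zoneinfo library.

    from datetime import datetime, timezone
    from zoneinfo import ZoneInfo

    # Hypothetical sites with local 9:00-17:00 working days.
    sites = {
        "Boston": ("America/New_York", 9, 17),
        "Bangalore": ("Asia/Kolkata", 9, 17),
    }

    def utc_window(zone, start_hour, end_hour, year=2024, month=3, day=4):
        """Express one site's working day in UTC for a sample date."""
        tz = ZoneInfo(zone)
        start = datetime(year, month, day, start_hour, tzinfo=tz).astimezone(timezone.utc)
        end = datetime(year, month, day, end_hour, tzinfo=tz).astimezone(timezone.utc)
        return start, end

    b_start, b_end = utc_window(*sites["Boston"])
    i_start, i_end = utc_window(*sites["Bangalore"])

    # Overlap of the two UTC windows; zero means no shared time to coordinate.
    overlap = min(b_end, i_end) - max(b_start, i_start)
    shared_hours = max(overlap.total_seconds() / 3600, 0)
    print(f"Shared working hours: {shared_hours:.1f}")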

Control

Adherence to organizational policies, goals and standards can impact value creation when outsourcing software development. In his research of outsourcing strategies Jennings (1997) explained that one of the most important factors for successful outsourcing was protection and development of the core capabilities that gave an organization their competitive edge. According to research by Sutherland, Schoonheim, Rustenburg and Rijk (2008) the authors found that exceptional productivity, and therefore value creation, in a development project among distributed teams was possible when the teams fully integrated the agile method into their development teams. This was achieved by bringing the teams together, instilling the agile goals and methods and then separating them. The authors acknowledged that the time spent instilling the agile principles and philosophy was a major contributing factor for success. Therefore, instilling the relevant goals and standards, especially the principles of agile development as mentioned above, gives a competitive edge, contributes to the success of outsourced projects and helps build value.

Communication

The most important factor for value creation when outsourcing software development is communication. Research conducted by Sutherland, Viktorov, Blount and Puntikov (2007) showed that communication, particularly when crossing cultures, presented a significant obstacle because it limited productivity. The authors further explained that the solution that worked best was a full integration of the agile teams, with members distributed around the globe, as opposed to teams divided by geography. They argued that although this method somewhat slowed the development process compared to an agile project done in a single location, it increased communication and built trust among the team members. A research paper by Shrivasta and Date (2010) concluded that knowledge management and communication were among the major problems encountered when software development was outsourced. They proposed an interesting solution where the agile teams used a web-based knowledge management wiki to assist in the capture of experiences. They further suggested that teams should still be brought together at different times in the development process to work together and build trust. Hence, a solid plan for communication among distributed software development teams is required to achieve value creation.

No Easy Answers

Agile software development philosophies and methods provide many opportunities to create value in software development. Much of the research into agile shows the potential for value creation when an organization is willing to embrace the philosophy and create a culture that supports and celebrates innovation. The process of embracing the philosophy and building the culture will take time and training at all levels of an organization. There is still much to be researched and solved to realize true value creation when outsourcing software development. Agile methods hold much promise, but they still face many challenges in making the process work with teams spread across the globe. For an organization that already uses agile development principles, outsourcing might create additional value. For an organization that already has outsourced projects, adding agile development principles might also increase innovation. However, an organization with weak development procedures would risk much by trying to add agile principles and outsource at the same time, and it is likely to reduce rather than improve value creation.

References

Ågerfalk, P. J., Fitzgerald, B., Holmström, H., Lings, B., Lundell, B., & Conchúir, E. (2005). A framework for considering opportunities and threats in distributed software development. In International Workshop on Distributed Software Development (pp. 47-61). Citeseer.

Asproni, G. (2004). Motivation, teamwork, and agile development. Agile Times, IV (1), 8–15.

Augustine, S., Payne, B., Sencindiver, F., & Woodcock, S. (2005). Agile project management: Steering from the edges. Communications of the ACM, 48(12), 85-89.

Begel, A., & Nagappan, N. (2007). Usage and perceptions of agile software development in an industrial context: An exploratory study. In First International Symposium on Empirical Software Engineering and Measurement (ESEM 2007) (pp. 117-125). Presented at the First International Symposium on Empirical Software Engineering and Measurement, Madrid, Spain: ESEM. doi:10.1109/ESEM.2007.85

Boehm, B. (2002). Get ready for agile methods, with care. Computer, 27(4), 64-69.

Boehm, B. (2003). Value-based software engineering. ACM SIGSOFT Software Engineering Notes, 28(2), 1-12.

Boehm, B., & Turner, R. (2005). Management challenges to implementing agile processes in traditional development organizations. IEEE software, 21(2), 30-39.

Boehm, B. W., & Sullivan, K. J. (2000). Software economics: A roadmap. In Proceedings of the conference on The future of Software engineering (pp. 319-343). ACM.

Cao, L., & Ramesh, B. (2008). Agile requirements engineering practices: An empirical study. Software, IEEE, 25(1), 60-67.

Carmel, E., & Agarwal, R. (2002). Tactical approaches for alleviating distance in global software development. Software, IEEE, 18(2), 22-29.

Carmel, E., Espinosa, J. A., & Dubinsky, Y. (2010). Follow the sun workflow in global software development. Journal of Management Information Systems, 27(1), 17-38.

Ceschi, M., Sillitti, A., Succi, G., & De Panfilis, S. (2005). Project management in plan-based and agile companies. Software, IEEE, 22(3), 21-27.

Chau, T., Maurer, F., & Melnik, G. (2003). Knowledge sharing: Agile methods vs. tayloristic methods. In Enabling Technologies: Infrastructure for Collaborative Enterprises, 2003. WET ICE 2003. Proceedings. Twelfth IEEE International Workshops on (pp. 302-307). IEEE.

Chow, T., & Cao, D. (2008). A survey study of critical success factors in agile software projects. Journal of Systems and Software, 81(6), 961-971. doi:10.1016/j.jss.2007.08.020

Cockburn, A. (2002). Agile software development. Boston, MA USA: Addison-Wesley.

Cockburn, A., & Highsmith, J. (2002). Agile software development: The people factor. Computer, 34(11), 131-133.

Cohn, M. (2004). User stories applied: For agile software development. Addison-Wesley Professional.

Cohn, M., & Ford, D. (2003). Introducing an agile process to an organization [software development]. Computer, 36(6), 74-78.

Constantine, L. L., & Lockwood, L. A. D. (1999). Software for use: A practical guide to the models and methods of usage-centered design. New York, NY, USA: ACM Press/Addison-Wesley Publishing Co.

Decker, B., Ras, E., Rech, J., Klein, B., & Hoecht, C. (2005). Self-organized reuse of software engineering knowledge supported by semantic wikis. In Proceedings of the Workshop on Semantic Web Enabled Software Engineering (pp. 126-135). ACM.

Dingsøyr, T., & Hanssen, G. K. (2003). Extending agile methods: Postmortem reviews as extended feedback. Advances in Learning Software Organizations, 4-12.

Dybå, T., & Dingsøyr, T. (2008). Empirical studies of agile software development: A systematic review. Information and Software Technology, 50(9-10), 833-859. doi:10.1016/j.infsof.2008.01.006

Fitzgerald, B., Hartnett, G., & Conboy, K. (2006). Customising agile methods to software practices at Intel Shannon. European Journal of Information Systems, 15(2), 200-213.

Forward, A., & Lethbridge, T. C. (2002). The relevance of software documentation, tools and technologies: A survey. In Proceedings of the 2002 ACM symposium on Document engineering (pp. 26-33). ACM.

Fowler, M., & Highsmith, J. (2001). Manifesto for agile software development. Retrieved February 6, 2011, from http://agilemanifesto.org/

Glass, R. (2001). Agile versus traditional: Make love, not war! Cutter IT Journal, 14(2), 12-18.

Hanssen, G. K., & Fægri, T. E. (2008). Process fusion: An industrial case study on agile software product line engineering. Journal of Systems and Software, 81(6), 843-854.

Herbsleb, J. D., & Moitra, D. (2002). Global software development. Software, IEEE, 18(2), 16-20.

Highsmith, J., & Cockburn, A. (2002). Agile software development: The business of innovation. Computer, 34(9), 120-127.

Jennings, D. (1997). Strategic guidelines for outsourcing decisions. Strategic Change, 6(2), 85-96.

Karlsson, J., & Ryan, K. (2002). A cost-value approach for prioritizing requirements. Software, IEEE, 14(5), 67-74.

Lapham, M. A., Williams, R., Hammons, C., Burton, D., & Schenker, A. (2010). Considerations for using agile in DoD acquisition (Technical Note No. CMU/SEI-2010-TN-002). Hanscom AFB, MA: Carnegie Mellon.

Larman, C. (2004). Agile and iterative development: A manager’s guide. Prentice Hall.

Lindstrom, L., & Jeffries, R. (2004). Extreme programming and agile software development methodologies. Information Systems Management, 21(3), 41-52.

Little, T. (2005). Value creation and capture: A model of the software development process. Software, IEEE, 21(3), 48-53.

Moe, N. B., Dingsøyr, T., & Dybå, T. (2008). Understanding self-organizing teams in agile software development. In Software Engineering, 2008. ASWEC 2008. 19th Australian Conference on (pp. 76-85). IEEE.

Nerur, S., Mahapatra, R. K., & Mangalaraj, G. (2005). Challenges of migrating to agile methodologies. Communications of the ACM, 48(5), 72-78.

Nonaka, I., & Takeuchi, K. (1995). The knowledge creating company: How Japanese companies create the dynamics of innovation. Oxford, UK: Oxford University Press.

Paetsch, F., Eberlein, A., & Maurer, F. (2003). Requirements engineering and agile software development. In Enabling Technologies: Infrastructure for Collaborative Enterprises, 2003. WET ICE 2003. Proceedings. Twelfth IEEE International Workshops on (pp. 308-313). IEEE.

Patton, J. (2002). Hitting the target: Adding interaction design to agile software development. In OOPSLA 2002 Practitioners Reports (p. 1). Presented at the Object-Oriented Programming, Systems, Languages, and Application Conference, Seattle, WA: ACM.

Polanyi, M. (1966). The tacit dimension. London: Routledge and Kegan Paul.

Poppendieck, M., & Poppendieck, T. (2003). Lean software development: An agile toolkit. Addison-Wesley Professional.

Prahalad, C. K., & Ramaswamy, V. (2004). Co-creation experiences: The next practice in value creation. Journal of Interactive Marketing, 18(3), 5-14.

Quinn, J. B., Baruch, J. J., & Zien, K. A. (1996). Software-based innovation. The McKinsey Quarterly, (4), 94-96.

Salo, O., Kolehmainen, K., Kyllönen, P., Löthman, J., Salmijärvi, S., & Abrahamsson, P. (2004). Self-adaptability of agile software processes: A case study on post-iteration workshops. Extreme Programming and Agile Processes in Software Engineering, 13(2), 184-193.

Shrivastava, S. V., & Date, H. (2010). Distributed agile software development: A review. Journal of Computer Science and Engineering, 1(1), 10-17.

Sutherland, J., Schoonheim, G., Rustenburg, E., & Rijk, M. (2008). Fully distributed scrum: The secret sauce for hyperproductive offshored development teams. In Expanding Agile Horizons (pp. 339-344). Presented at the Agile 2008 Conference, Toronto, Canada: IEEE.

Sutherland, J., Viktorov, A., Blount, J., & Puntikov, N. (2007). Distributed scrum: Agile project management with outsourced development teams. In Information Technology in Health Care (pp. 274-284). Presented at the 40th Hawaii International Conference on System Sciences, Waikoloa, Big Island, Hawaii, USA: IEEE Computer Society. doi:10.1109/HICSS.2007.180

Wahyudin, D., Heindl, M., Eckhard, B., Schatten, A., & Biffl, S. (2008). In-time role-specific notification as formal means to balance agile practices in global software development settings. In Lecture Notes in Computer Science (Vol. 5082, pp. 208-222). Springer.

Wendorff, P. (2002). Organisational culture in agile software development. Lecture Notes In Computer Science, 17(2559), 145-157.

Leadership in Open Source Software Development

Leadership in Open Source Software Development:

Past, Present and Future

By Dr. Russ Wright

Abstract

This paper explores the past, present, and future of leadership and governance within Open Source Software (OSS) development projects, along with the current factors for successful leadership and governance of OSS development. As background, it traces the beginnings of the “hacker” culture that became the current geek and OSS culture. The future implications for leadership with regard to the Open Innovation model are also explored. The conclusion is that the OSS development model is morphing and changing so quickly that research cannot keep up with the change.

Leadership in Open Source Software Development

The nature of geek work is cerebral: most of the effort takes place inside the head of the geek, who uses their smarts to find solutions to problems. Typical leadership methods, designed for people who work on something external to themselves, are not going to work on geeks (Glen, 2003). How is it possible to lead these people? Before an answer is offered, consider these additional factors: the people who do this work are volunteers, they are spread over the globe, and they might never meet in person. How can a development project with these seemingly insurmountable factors actually work?

What defines OSS?

There are two basic camps within the Open Source Software definition. The more radical of the two camps holds firmly to the belief that closed source software is dangerous and harmful and that software should be open for the public good (Stallman, 2001). Others, perhaps with a more moderate philosophy, hold that closing the source is a defect because it prevents public inspection, and that this makes the program inferior. They desire open software to improve the quality of programs and make them more secure (Perens, 2005). Other recent research contends that “OSS ‘hackers’ conceive of themselves as a movement to correct the failure of existing institutions (both industry and academia) to produce software adequately” (de Laat, 2007). By opening the source code of a software application, developers were able to make changes to the programs as they pleased, create a whole new development methodology, and build whole new communities around projects. As a result, a new term was needed to manage the licensing, ownership, and copyright issues that blossomed along with the movement. Thus the term Open Source Software (OSS) was created by Raymond and Perens to define the situation.

Defining Leadership Challenges in OSS Development Projects

Leadership, or more precisely governance, which encompasses both the leadership and the structure of the organization, must take on some different strategies to work in the OSS development environment. The volunteer, not motivated by the reward of a paycheck, must be motivated by other factors. Recent research showed that the average volunteer software developer spends about 14 to 18 hours a week working for free on OSS projects; these developers are mostly full-time employees at commercial software companies, yet they contribute for free (Lakhani & Wolf, 2007). Thus, the leader must not only appeal to these other motivations, but also put in place a structure that cultivates the rewards that will motivate the volunteer.

The existing research into the leadership of OSS development communities has produced multiple definitions, each coming from a different perspective, and does not provide a single, clear, encompassing explanation. For example: to provide rules, formal and informal, that help establish developer identity and help assign people to tasks (Crowston, Li, Wei, Eseryel, & Howison, 2007). Another definition given: to control outcomes, people, and communication (Lattemann & Stieglitz, 2005). Yet another: to provide the normal methods of exchange using the commonly held values structure and the belief in sharing code with others (Shah, 2006). Thus, none of these definitions fully defines the role of leadership within an open source project.

The reason it is difficult to create a single definition for leadership or governance within an OSS community is that the communities and their purposes are widely varied. Some OSS projects do little more than build a single simple program; others become organizations with a simple legal shell to provide a holding place for the assets and allow the project to take donations. At the extreme, some are non-profit foundations with committees, managed releases, and possibly employees on a payroll. Other recent research attempted to create a definition, which stated: “Thus, OSS governance can be defined as the means of achieving the direction, control, and coordination of wholly or partially autonomous individuals and organizations on behalf of an OSS development project to which they jointly contribute.” (Markus, 2007, p. 152) Although purposefully vague, this definition does provide a good starting point for defining the role of leadership in OSS development projects.

 

The Markus (2007) definition is useful because it does not conflict with traditional software development leadership responsibilities, and it highlights the additional aspects unique to OSS development. Her research revealed three particular goals of OSS development leadership and structure: (1) keeping developers motivated, (2) solving coordination problems, and (3) cultivating a climate where developers will want to participate. These three goals are explored below.

The Motivation Dilemma

In traditional software development the developer is hired to create a product and receives compensation in the form of a paycheck for the effort. The developer might also take the job because they are interested in learning about a particular area of programming and want to advance their skills, or because the problem itself intrigues them. These factors fall into two different categories. According to Glen (2003), the software developer can be motivated by extrinsic factors, most often the paycheck, or intrinsic factors such as the desire to learn. He further explained that intrinsic factors generally outweigh extrinsic ones. This idea is supported by the research of Roberts, Hann, and Slaughter (2006), who discovered that in OSS development projects, even when extrinsic motivators such as a paycheck exist, they do not crowd out the intrinsic motivators. Thus the developers who work on OSS projects, more often than not, can rely only on intrinsic factors, and the leadership of the project must identify and build on those factors to keep the developers interested in the project.

Development Coordination Problems

In typical OSS projects the developers are spread across the globe, and this can present special problems for the leadership of the project, especially when coordinating the interdependent pieces of the software program. Traditional development leadership would assign tasks to specific individuals and organize the workers to perform the tasks best fitted to their skills (Glen, 2003). A study of virtual teams by Kayworth and Leidner (2001) discovered that leading virtual teams is fraught with difficulties because the distance between members leaves far fewer solutions available to the leader. This problem of assignment and coordination does not seem to be as much of an issue in OSS development projects. A recent study by Crowston et al. (2007) showed evidence that the most common method used in OSS development is a self-assignment process in which the OSS developers do the work by choice. One example given in the research showed an email where a developer called “dibs” on a particular part of the project because he wanted to learn how to make it work. The research further showed that instead of a hierarchy for assigning tasks, the developers politely ask for or simply take on the task and do the work. Thus, the leadership of OSS projects does not seem to have the same issue, at least with assigning tasks and coordinating interdependencies, as the developers seem motivated to pick up the tasks themselves.

Cultivating The Climate

The developers who participate in OSS development projects come from two basic sources. A recent study defined two distinct categories of OSS developers: need-driven and hobbyist (Shah, 2006). The need-driven participants consciously chose to use an OSS product rather than a commercial solution so they could view and change the code to meet their own needs and solve a particular issue for a work-related purpose. The hobbyist programmers viewed participation in a particular OSS development project as a hobby activity and described the work as fun and challenging. Thus, each group has a different set of motivations, and the leadership must be aware of these differences if they want to cultivate participation and contribution. Having a voice within the community is a potential solution to the motivation dilemma.

Because OSS projects have few if any extrinsic motivators, such as a paycheck, to offer, the leadership must capitalize on the intrinsic motivators available to them. A study by Manville and Ober (2003) explored the potential of using the voice of democracy as a motivator. The study described a situation in which a developer might consider two different jobs with different working conditions and different pay scales. Some developers might always consider the paycheck the most important factor and choose the job with higher pay. Other developers might choose to work at a lower paying job because of the greater opportunity to shape the direction of the organization. Thus the ability to have a voice in the decision-making process might be a positive motivator for participation within a project.

A Brief History Of OSS Developer Culture And Ideology

Understanding the struggles in leading an OSS development project requires some background. The history of OSS development is a tangle of political and personal motivations that evolved over forty years, from the early programmers of the first mainframes all the way to current personal computer users. After several attempts to untangle hype from fact, this author found it nearly impossible to determine the real facts and history of the Open Source Software movement. Below is a brief history, amalgamated from multiple sources into a single time-line. The history provided here includes only information that can be corroborated by at least one other source and is limited to details that reflect the transformation of the original computer hacker culture into the Free and Open Source software development culture of today.

The Big Iron

Fully understanding the culture that permeates the OSS development world requires some background on the origins of the hacker culture that became the OSS developer culture. According to Raymond (1999) the first group to begin the OSS developer culture formed in 1961 when the Massachusetts Institute of Technology (MIT) acquired a Digital Equipment Corporation PDP-1. This is often called the beginning of the “Big Iron” batch processing computers. In 1969 the addition of ARPAnet, the forerunner of the Internet built by the Defense Department as an experimental digital communications network, brought these programmers together, allowing them to collaborate via electronic discussion groups despite being spread across the US. The computing culture flourished across ARPAnet, particularly in the computer science departments of colleges and universities that used DEC PDP computers.

The birth of collaborative development started with a project at the MIT labs in 1967. MIT bought a PDP-10, yet the team rejected the existing operating system and built their own. According to Raymond (1999) they built their own operating system because they wanted to work their own way. This point is important because it shows the first real example of the OSS developer attitude: a group of developers starting a project because they wanted to solve a need not met by existing programs. This attitude is echoed by Perens (2005) when he stated: “Open source is an indication of an unfulfilled need.” Thus, the current Open Source Software developer mentality was influenced by the actions of MIT in 1967, when the existing software did not meet the group’s needs and the developers decided to make their own operating system that did things their way.

The Unix Gurus

The second group who influenced the current Open Source Software developer culture started in a very different way. Raymond (1999) explained that Ken Thompson, working at Bell Labs in New Jersey, resurrected parts of the failed Multics operating system, combined them with his own ideas, and created a new operating system to run on a scavenged DEC PDP-7. While Unix was still in its very early stages, another developer named Dennis Ritchie created a development language called C to run on Thompson’s fledgling operating system. Raymond (1999) explained that two major factors propelled this OS into popularity. First, it could run essentially unaltered on several platforms, and second, computer hardware was getting cheaper and faster, so the operating system did not need to eke out every possible ounce of power from the hardware. Thus, Thompson’s Unix and Ritchie’s C language started a change in development that allowed developers to take source code with them to any project running on multiple platforms. This portability, the ability to take and reuse software for other projects, is part of the Open Source Software developer mindset today.

Commercial Destruction

A major solidifying factor for the Open Source developer culture came about through the commercialization and subsequent destruction of both the Big Iron and Unix cultures. According to Levy (2002) several startup companies lured away the talent from the MIT Big Iron labs, splitting developers who were once good friends into rival, fractured groups. A few years later, similar attempts to commercialize Unix started enormous infighting and knocked the Unix vendors out of much of the market. According to Raymond (1999) “The proprietary-Unix players proved so ponderous, so blind, and so inept at marketing that Microsoft was able to grab away a large part of their market with the shockingly inferior technology of its Windows operating system.” (p. 164) Thus the infighting and the commercialization attempts caused many years of efforts to create a free and open operating system to flounder.

One of the most influential figures of the Free Software ideology is Richard Stallman who was a developer during the time when the commercialization of some of the MIT lab technologies saw the fracturing and destruction of the Big Iron and the Unix cultures. According to Levy (2002) Stallman was so depressed by the schism in the labs that he would tell strangers he met that his wife had died. This fracturing obviously colored his perceptions of software ownership and continues to affect the OSS developer culture.

In 1985 Stallman founded the Free Software Foundation, which holds the philosophy that software should have essential user freedoms and maintains the following definition of free software:

Free software is a matter of the users’ freedom to run, copy, distribute, study, change and improve the software. More precisely, it means that the program’s users have the four essential freedoms:

The freedom to run the program, for any purpose (freedom 0).
The freedom to study how the program works, and change it to make it do what you wish (freedom 1). Access to the source code is a precondition for this.
The freedom to redistribute copies so you can help your neighbor (freedom 2).
The freedom to distribute copies of your modified versions to others (freedom 3). By doing this you can give the whole community a chance to benefit from your changes. Access to the source code is a precondition for this.

According to Raymond (1999), Stallman’s idea of a free clone of Unix, which later became GNU/Linux, and his stubborn adherence to a free and open operating system epitomized the OSS developer culture. Thus Stallman, influenced by witnessing the destruction of computing cultures by commercialization, forwarded another aspect of the Open Source developer culture through his creation of the Free Software Foundation.

Stallman began the construction of a free and open Unix clone in 1983. The original project was called the GNU (Gnu’s Not Unix) operating system. The project moved along slowly, developing many tools, and floundered when AT&T sued over Unix rights in 1992. Stallman wanted to make sure that his software would stay publicly available, so he created a regulatory framework that came to be known as the GNU General Public License (GPL). (“The GNU General Public License,” 2007) The GPL allows the developer to use, modify, and redistribute the modified program, with one very important rule: new code created from the original code must carry the same license. Thus this self-perpetuating licensing scheme keeps the source code perpetually available to the public. Around the same time a Helsinki University student named Linus Torvalds began developing a free Unix kernel for 386 machines using the Free Software Foundation’s toolkit. His initial, rapid success attracted many Internet hackers to help him develop Linux, a full-featured Unix with entirely free and re-distributable sources.

The Birth Of The OSS Development Method

The OSS development methodology underwent a significant change when Linus Torvalds started developing his Linux kernel. The application development methods used reflected a significant culture shift from the previous development style. Raymond (1999) explained the difference by comparing the typical commercial development style of the time to building a cathedral and the development style of Linux to a marketplace bazaar. The cathedral style of software development was typified by a small group of experts working away in isolation with few code releases. The bazaar was a much more open style of development typified by many developers contributing updates continuously and by frequent releases of the code. From the late ’90s through the early part of 2003, OSS development saw an enormous explosion using Torvalds’s model of development.

The Early Leadership Problems

Here is where the paths of the Free Software Foundation and the Open Source Initiative really began to diverge on philosophy. Raymond (1999) explained that shortly after a presentation of his paper on Torvalds’s development model, Netscape announced that they would try this model and open the source of their browser. He further explained that over the next several days, in meetings with many influential figures, the decision was made to create a definition of Open Source Software through an organization called the Open Source Initiative. Raymond (1999) explained: “What we realized, under the pressure of the Netscape release, was that FSF’s actual position didn’t matter. Only the fact that its evangelism had backfired (associating `free software’ with these negative stereotypes in the minds of the trade press and the corporate world) actually mattered.” (p. 256) Thus, a conscious decision was made to create a definition that would be more appealing to the business person, as the definitions of the FSF did not translate well to business.

Over the next few years a great injustice was done to Stallman. As previously mentioned, Stallman had a large chunk of an operating system completed but lacked a kernel, and Torvalds’s Linux kernel filled that gap. Developers often refer to an operating system by its kernel, despite the fact that many other pieces exist to make it an operating system. Perens (2005) notes: “Although Stallman did a great deal of the work that made Linux possible, Torvalds’ team of kernel contributors was not closely allied with Stallman, and announcements of Linux were not attributed to FSF.” Thus the considerable work of Stallman was not credited in the Linux announcements, creating a division within the community that still exists today.

These major events shaped the culture that is the OSS community today. From the Big Iron era, the early developers demonstrated the desire to meet a need not met by the existing operating system. This desire was carried forward into the Unix and C era, where tools and languages became portable and could be carried to other jobs so developers did not have to start over on each new assignment. Then the birth of the personal computer and the explosion of the Internet brought these developers together from around the planet to create software, GNU/Linux for example, that introduced a new way of building programs, along with all the new challenges of leading these developers.

Major Intrinsic Motivation Factors In The Current OSS Development Culture

Like all development projects, regardless of whether the project is open source or not, one of the major jobs of the leadership is to keep the team motivated. According to Glen (2003) the leadership must set a tone for the work environment that cultivates motivation. The unique qualities of an open source project require the leadership of the project to take on some different attributes than their commercial counterparts. A study by O’Mahony (2007) defined five specific intrinsic motivation traits of OSS development that manage and motivate the developers: (1) independence, (2) pluralism, (3) representation, (4) decentralized decision-making, and (5) autonomous participation. These traits are interrelated and together build a cohesive model of the motivations of the OSS volunteer developer. Thus, these traits are the basis of a community-managed OSS project and represent the primary motivations for volunteer involvement. Each of these traits is explored in detail below as it relates to OSS development practices and other research.

Independence

For a mature OSS project, independence is natural, as the gained experiences of the community will form their own culture. Independence, according to O’Mahony (2007), is defined as follows: “A community that is independent does not rely upon the resources from any one organization, but is supported by a diverse body of participants.” (p. 144) This seems to resonate with Glen (2003), who explains that the culture of a community will grow over time, establish shared patterns of interaction, and limit the ability of any one entity to control the group. Thus, independence seems a natural part of an OSS project, as leadership cannot exert control but can only nurture the environment.

This issue of project control and leadership came to a head with the development of the Linux kernel. An article by Lemos (2002) explained that the developers who worked with Linus Torvalds on the Linux kernel were frustrated by the lack of response and the tight control Torvalds was exerting over the development process. Changes and bug fixes would be submitted only to be ignored or flat-out rejected without explanation, which angered the developers. This created a delicate situation for Torvalds, as the developers could easily fork the project: taking a copy of the existing code, walking away from the existing Linux kernel project, and starting a new project of their own. To resolve the problem Torvalds appointed several new people, whom he called lieutenants, who took over different segments of the development and code integration responsibilities. Torvalds had to give up some control to keep control of the project. Thus the kernel development project was establishing patterns of interaction that went beyond the control of Torvalds.

Pluralism

The intrinsic motivator of pluralism has two dimensions that partially overlap. The first dimension has to do with the governance of the project. For a project to recruit and retain talented developers, no one organization may own or control too much of the project and stifle other voices. According to research by West and O’Mahony (2008), projects with sponsors have a special problem: the talented developers the project desires will want the freedom to act and to share their thoughts. These developers will test that ability to share and will become vocal and critical of the project leadership if their voice is not heard and respected. Thus, the leadership must be careful not to let any one sponsor dominate the project, or talented developers will not want to participate.

The second dimension of pluralism deals directly with the voice of the developer. Because the work done in developing a software program comes from the creative mind of the developer, there can be multiple solutions to the same problem. Some solutions will be better than others, but all must be heard. O’Mahony (2007) defined pluralism as: “A pluralistic community allows many approaches, methods, theories or points of view to be legitimate or plausible in pursuing a course of action” (p. 146). This also resonates with Glen (2003), who explained that one of the important roles for the leadership of a project is to create an environment that is safe for the presentation of ideas and in which freedom of speech exists within the community. Thus the intrinsic motivator for the OSS developer is the freedom to present ideas that they know will be heard and considered as part of the solution.

Representation

The developers must also have a voice in the direction of the project. The definition of representation provided by O’Mahony (2007) was a system of democratic representation within the project. However, the researcher notes that the authority of the representative is limited to making decisions about the project and does not extend to control over other members. The Glen (2003) research seemed to agree with this position, as he stated, “everyone should feel that what they have to say is valued and becomes part of the discussion about how to proceed” (p. 127). Thus, even if the voice is representative and not a pure democracy, the members must feel that their voice impacts the direction of the project.

Decentralized Decision Making

Understanding the organization of OSS development requires an understanding of the social structure, often called a social network. Wasserman and Faust (1998) defined the structure within a social network as a grouping of units who depend upon each other and who are connected by relationships. In an OSS development project, where the participants are scattered about the globe and interact and connect with each other over the Internet, these units might be the groups of developers, testers, and users. The relationships might be the direct discussions between two developers or the bonds of trust formed between the users and developers. This social network is how the groups of people share knowledge and solve problems.

In OSS development the structure of the social network usually comes in two distinct styles. The first style is centralized, or group centralization, which Wasserman and Faust (1998) explained and modeled as equivalent to a solar system with a star in the middle and planets in orbit about it. The model they created of group centralization provides a simple measurement of the inequities among team members based on the differences in their actions and patterns of interaction. This is the model initially used to create the Linux kernel, as Linus Torvalds acted as the star and hundreds of developers worked through him, suggesting changes which he accepted or rejected (Raymond, 1999). The second style, as defined by Wasserman and Faust (1998), is core-peripheral, which exhibits a dense, interconnected core surrounded by a halo of unconnected peripheral participants. Decentralized decision-making, according to O’Mahony (2007), is defined as a model where some of the decision-making rights are distributed to the community members. The decision-making rights generally fall into three distinct categories: (1) code, (2) sub-project, and (3) community-wide issues. This is the development model used on the FreeBSD operating system, where some 300 core developers all have the ability to add and update the source code at the same time (Jørgensen, 2007). Thus, the collaboration and decision-making process is influenced by the style of social structure within the OSS development project and can affect how well a project team performs and the sense of belonging and satisfaction each member of the team experiences.
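
To make the contrast between the two structures concrete, the short sketch below computes Freeman’s degree centralization, one of the group centralization measures described by Wasserman and Faust (1998), for two hypothetical project networks: a hub-and-spoke network in which every contributor works through a single maintainer, and a core-peripheral network with a small, densely connected core. The Python networkx library and the team sizes are illustrative assumptions only and are not drawn from any of the studies cited here.

import networkx as nx

def degree_centralization(G):
    # Freeman's degree centralization: 1.0 for a pure hub-and-spoke (star)
    # network, approaching 0 as participation becomes evenly distributed.
    n = G.number_of_nodes()
    degrees = [d for _, d in G.degree()]
    max_d = max(degrees)
    return sum(max_d - d for d in degrees) / ((n - 1) * (n - 2))

# Hypothetical "solar system" project: one maintainer (node 0) and 20
# contributors who all interact only with the maintainer.
star = nx.star_graph(20)

# Hypothetical core-peripheral project: a dense core of 6 developers, each
# of whom also works with 3 otherwise unconnected peripheral contributors.
core = nx.complete_graph(6)
next_node = 6
for member in range(6):
    for _ in range(3):
        core.add_edge(member, next_node)
        next_node += 1

print("centralized (hub-and-spoke):", round(degree_centralization(star), 2))
print("core-peripheral           :", round(degree_centralization(core), 2))

Running the sketch yields a centralization of 1.0 for the hub-and-spoke network and roughly 0.25 for the core-peripheral one, which is one simple way to quantify how far decision-making rights have been spread across a community.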

Autonomous Participation

Developers working on OSS projects will often desire to stake a claim on a particular part of the program and work on that part of the code. The research by O’Mahony (2007) defined autonomous participation as one of the most important factors: developers are attracted to projects by the opportunity to learn, solve problems, and improve their skills. This seems to agree with earlier research by Hackman and Oldham (1980), whose Job Characteristics Model (JCM) defined autonomy as a precondition for high motivating potential that multiplies the other intrinsic motivators. Glen (2003) also discussed the importance of autonomy:

With a solid understanding of the environment, they are able to make their own decisions about day-to-day matters without having to check with the boss constantly. For geeks, who generally have a strong independent streak, autonomy fosters motivation. (p. 110)

Therefore the ability to operate freely within a particular area of the code is a large motivating factor and the existence of autonomy will be a deciding factor for an OSS developer to join a project.

 

The Future of OSS Leadership: Harnessing Innovation

The OSS development model is currently jumping over to physical products such as beer and bread, and into other disciplines such as biotechnology and government policy. This new development process for physical products is often called Open Innovation (Chesbrough & Garman, 2009). The research by de Laat (2007) made a bold proclamation about the flexibility of the OSS development model, suggesting that it is the only known model of development that can be applied across products and is not restricted to software alone. He tried to find an example of a closed source software development model that could be used for physical products and was unsuccessful. Thus the OSS development model, renamed Open Innovation for physical products, is unique in its ability to be used in other industries. This concept is still relatively new to researchers and there is much territory still unexplored.

There is not much current research on how leadership or governance will be affected by the shift of the OSS development model to other physical product industries. Most of the research continues to demonstrate the need to provide extrinsic motivators while also maintaining a focus on the intrinsic motivators (Chesbrough & Garman, 2009; Dahlander, Frederiksen, & Rullani, 2008; Grönlund, Sjödin, & Frishammar, 2010).

Most of the current research on Open Innovation looks at for-profit companies and how they will need to shift their thinking about corporate boundaries. The companies will have to accept that the boundaries of their organization, once firmly defined, will now have to be permeable and allow for interactive and distributed innovation where resources, ideas, and people move in and out of organizations constantly (Laursen & Salter, 2006). This means that employees will need to experience those same intrinsic motivators defined by O’Mahony (2007), as their affiliations with a company will be much looser and not tied to the extrinsic motivation of a paycheck.

The research by Chesbrough and Garman (2009) offers the example of the Philips company, which, seeing a failure in its research model, opened up its R&D facility and created an open campus where some 7,000 researchers from multiple organizations now share knowledge about the products they create. The authors explained how this style of leadership prevented lay-offs and avoided the “high emotional and economic costs of severance while boosting morale for remaining and departing workers alike” (p. 75). Thus the current research still reflects the regular needs of leadership as defined in Glen (2003), but also recognizes the advantages of embracing an Open Innovation model.

Changing Fast

All the research uncovered by this author had a common thread of exasperation among the researchers. The OSS development process is changing and morphing quickly; so quickly, in fact, that the research cannot keep up. The research into the motivators of OSS developers by O’Mahony (2007) explicitly stated that the rapid evolution occurring in the OSS development process, as new hybrid models appear, made the responsibilities of leaders and the structure of governance difficult to define. The leadership of the future version of an OSS development project, or Open Innovation product, will require a cyclic return to the list of extrinsic and intrinsic motivating factors to determine how well they are meeting the needs of their open communities.

References

Chesbrough, H. W., & Garman, A. R. (2009). How open innovation can help you cope in lean times. Harvard Business Review, 87(12), 68-76.

Crowston, K., Li, Q., Wei, K., Eseryel, U. Y., & Howison, J. (2007). Self-organization of teams for free/libre open source software development. Information & Software Technology, 49(6), 564-575.

Dahlander, L., Frederiksen, L., & Rullani, F. (2008). Online communities and open innovation: Governance and symbolic value creation. Industry & Innovation, 15(2), 115-123.

Glen, P. (2003). Leading Geeks (1st ed.). San Francisco: Jossey-Bass.

Grönlund, J., Sjödin, D. R., & Frishammar, J. (2010). Open innovation and the stage-gate process: A revised model for new product development. California Management Review, 52(3), 106-131.

Hackman, J. R., & Oldham, G. R. (1980). Work redesign. Reading, MA: Addison-Wesley.

Jørgensen, N. (2007). Developer autonomy in the FreeBSD open source project. Journal of Management & Governance, 11(2), 119-128.

Kayworth, T. R., & Leidner, D. E. (2001). Leadership effectiveness in global virtual teams. Journal of Management Information Systems, 18(3), 7-40.

de Laat, P. (2007). Governance of open source software: state of the art. Journal of Management & Governance, 11(2), 165-177.

Lakhani, K., & Wolf, R. (2007). Why hackers do what they do: Understanding motivation and effort in free/open source software projects. In Perspectives on free and open source software (pp. 3-22). Cambridge, MA: MIT Press.

Lattemann, C., & Stieglitz, S. (2005). Framework for governance in open source communities. In Proceedings of the 38th Annual Hawaii International Conference on System Sciences (pp. 192a-192a). Presented at the 38th Annual Hawaii International Conference on System Sciences, Big Island, HI, USA. doi:10.1109/HICSS.2005.278

Laursen, K., & Salter, A. (2006). Open for innovation: the role of openness in explaining innovation performance among U.K. manufacturing firms. Strategic Management Journal, 27(2), 131-150. doi:10.1002/smj.507

Lemos, R. (2002). Torvalds, developers at odds over Linux – CNET News. Retrieved August 29, 2010, from http://news.cnet.com/Torvalds,-developers-at-odds-over-Linux/2100-1002_3-826093.html

Levy, S. (2002). Hackers (Updated ed.). London: Penguin.

Manville, B., & Ober, J. (2003). Beyond empowerment: Building a company of citizens. Harvard Business Review, 81(1), 48-53.

Markus, M. (2007). The governance of free/open source software projects: monolithic, multidimensional, or configurational? Journal of Management & Governance, 11(2), 151-163.

O’Mahony, S. (2007). The governance of open source initiatives: what does it mean to be community managed? Journal of Management & Governance, 11(2), 139-150.

Perens, B. (2005). The emerging economics of open source software. Retrieved May 11, 2010, from http://perens.com/Articles/Economic.html

Raymond, E. (1999). The cathedral & the bazaar: Musings on Linux and open source by an accidental revolutionary (1st ed.). Cambridge, MA: O’Reilly.

Roberts, J. A., Hann, I., & Slaughter, S. A. (2006). Understanding the motivations, participation, and performance of open source software developers: A longitudinal study of the apache projects. Management Science, 52(7), 984-999. doi:10.1287/mnsc.1060.0554

Shah, S. K. (2006). Motivation, governance, and the viability of hybrid forms in open source software development. Management Science, 52(7), 1000-1014.

Stallman, R. (2001). The GNU General Public License Protects Software Freedoms – GNU Project – Free Software Foundation (FSF). Retrieved September 2, 2010, from http://www.gnu.org/press/2001-05-04-GPL.html

The GNU General Public License. (2007). Free Software Foundation. Retrieved September 2, 2010, from http://www.gnu.org/licenses/gpl.html

Wasserman, S., & Faust, K. (1998). Social network analysis: Methods and applications. Cambridge: Cambridge University Press.

The Open Source Phenomenon: How it is Affecting Business and Education.

The Open Source Phenomenon:

How it is Affecting Business and Education.

By Dr. Russ Wright

Abstract

Open source software, with the unique license that gives the user many rights and freedoms, is transforming business and education. Once thought of as hobbyist programming, open source now has many products that are competitive with commercial proprietary software. This paper explores the perception of programming methods used to create open source programs, the security of the open source model, the evolution of the business model used to sell open source software, and the impact of open source on education and business.

The Open Source Phenomenon: How it is Affecting Business and Education.

The phenomenon of open source software is permeating the software market. There is much debate over which type of software is best, open source or proprietary, on topics such as security, development methodology, and business models. Open Source Software (OSS) is now posing significant competition to proprietary or closed source software in several markets (Jaisingh, See-To, & Tam, 2008). Open source software has matured to the point where it is a viable alternative to proprietary commercial offerings (Fitzgerald, 2006). Ignoring the benefits of open source in the ever-changing software market would be a foolish decision for a development shop, and a blind allegiance or strict adherence to either software model will eliminate possible solutions to technological problems.

Open Versus Closed Source

In the world of software development, there are basically two types of software. The first type is open source, where the source code is freely provided with the application. The second type is closed source, where the application comes pre-compiled for installation and the owner of the program does not freely provide the source code. A simple definition of open source software (OSS) is software that gives users access to the source code, to which they may make improvements, perform bug fixes, change or enhance the functionality, and redistribute the original program or the derivative work to others who can, in turn, do the same according to their own needs (Sen, Subramaniam, & Nelson, 2008). Closed source software, more often referred to as proprietary, usually has a copyright owner who can exercise control, through the use of a license, over what users can do with the software. These limits include the type or number of machines on which the software can be installed and the number of users who can use the application (Castelluccio, 2008). Therefore, by the nature of the license, open source software gives the user more freedom to decide how they will use the software, what they will modify to meet their needs, and when it is most advantageous to make changes to the program in their environment. Open source software, because of its open nature, also has a unique development model, causing companies to rethink the way they do business.

Security

There are many arguments both for and against the security of open source code. Some believe that because the source code for open source software is readily available for inspection by “many eyes”, the code is inherently more secure. On the other side of the debate, others believe that through secrecy and obscurity the code is secure because only a few people truly know how it works. In a recent study, Hoepman and Jacobs (2007) make the argument that opening the source code allows a third party to determine how much risk is involved in using a particular program, allows developers to create fixes for the bugs found, and forces the developer to allocate more time to writing better quality code (p. 81). The argument presented here has merit. By allowing “many eyes” to see the code, the problems can be fixed much faster. The one possible exception is if no one bothers to look at the code until the security is breached. Still, the advantages of having open source code critiqued by many will far outweigh the disadvantages of closed source.

There are those in the open source community who believe that the “many eyes” model is not good enough because it does not go far enough. Laurie (2006) posits in his paper on open source and security that the “many eyes” concept only applies after a bug is found: the developer who found the bug can post the issue and many developers will work together to quickly find the solution, but most developers do not have the experience to identify where the problem exists just by looking at the source code (p. 60). This theory is both contested and supported within the open source community. There is no clear answer, and this point of contention continues to be a stumbling block for open source adoption.

Proponents of proprietary software often argue that opening the source code allows an attacker access to information that would be helpful in creating and launching an attack against an application. In a recent study of increasing software security, the authors contend that closed source projects often use poor methods for coding practices, project management, change control, and quality control (Hoepman & Jacobs, 2007). The authors also posit that when the source code is opened, software projects cannot get away with poor practices because those practices become immediately evident once the source code is inspected. This was made evident when the source code for the Diebold voting machines was distributed on the Internet and revealed horrible programming errors and vulnerabilities. Therefore open source software, by providing the source code and more rigorous coding standards and methods, provides the most security. The myth that open source programmers are lazier coders than those in proprietary development shops is, more often than not, the opposite of reality.

Programming Model Perception

The perception of the development model for open source software is often incorrect, as popular opinion portrays a group of lonely geeks in their moms’ basements working to create a program that will be released into the wild without support. According to Baird (2008), although some open source projects are still created by volunteers working in an ad-hoc fashion, most are developed by paid programmers working for not-for-profit organizations or by proprietary companies that support internal open source development alongside proprietary development (p. 233). It is very rare to find a developer who works only on open source projects; more often than not, an open source developer also works on proprietary projects. Developers from both open and proprietary shops benefit from the body of source code and libraries that are free for use in developing their programs. A great example of paid open source development is the end-user software OpenOffice. The OpenOffice project is strongly supported by Sun Microsystems, which pays and provides developers to maintain the program (Woods, 2005). OpenOffice is an open source office suite with word processor, spreadsheet, presentation, and database tools that rivals any commercially available office suite. Therefore this incorrect perception of open source software creation adds to the confusion over how free software can yield profit or reduce costs. Finding the true total cost of ownership (TCO) of open source software is a confusing topic for many and requires further discussion.

Business Model

In the early years of open source software commercialization, there were two basic models. According to Baird (2008), the two forms were value-added support or software-as-a-loss-leader (p. 235). For the value-added model, a vendor would provide the software for free and get the customer to sign up for support to install and configure the open source application. In the other model, the vendor would provide the open source program and then sell proprietary extensions that added more functionality to the open source program. This business model has evolved into vendors providing a real mix of both open and proprietary packages.

The blending of open and proprietary software creates much confusion about how free software can be profitable for a company. Economists are often mystified about how open source software can generate income and struggle to come up with new models to define Total Cost of Ownership (TCO). Fichman (2004) developed a possible solution that might fit this constantly shifting world using the theory of real options investment analysis. In summary, real options analysis borrows option-valuation logic from financial markets to value investments where flexibility must be exceptionally high and much uncertainty is possible. This exceptionally high flexibility and uncertainty define the world of open source software. As a result of this research, companies can quantify and report the costs associated with using open source software. This represents a significant step forward in the acceptance of open source software in both education and enterprise.
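
To illustrate the intuition behind a real options view of an open source decision, the sketch below values the option to defer an adoption project for one period using a simple one-step binomial model. The scenario and every figure in it are invented for illustration; this is a generic sketch of this style of analysis rather than the specific model presented by Fichman (2004).

# Hypothetical one-period binomial valuation of the option to defer an
# open source adoption project.  All figures are invented for illustration
# and are not drawn from Fichman (2004).

investment = 100_000   # up-front cost of adopting, now or in one year
value_up   = 180_000   # payoff if the pilot and community support go well
value_down =  60_000   # payoff if they go poorly
p_up       = 0.5       # assumed probability of the good outcome
discount   = 1.05      # one-year discount factor

# Commit today: expected payoff minus the certain up-front cost.
npv_now = (p_up * value_up + (1 - p_up) * value_down) / discount - investment

# Defer one year: invest only in the state where the payoff exceeds the cost,
# which is where the value of flexibility comes from.
npv_defer = (p_up * max(value_up - investment, 0)
             + (1 - p_up) * max(value_down - investment, 0)) / discount

print(f"NPV of committing now : {npv_now:,.0f}")
print(f"NPV of deferring      : {npv_defer:,.0f}")
print(f"Value of flexibility  : {npv_defer - npv_now:,.0f}")

The point of the sketch is that the flexibility to wait has a quantifiable value: deferring the decision avoids the downside state entirely, which is exactly the kind of value a conventional total cost of ownership calculation leaves out.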

Maturation of Open Source

In a paper on the transformation of open source software, Fitzgerald (2006) posited that the changes in security, programming model, and business model have all contributed to the open source phenomenon taking on a new form he calls OSS 2.0. This theory is echoed in the Scacchi (2007) paper, where the author discusses recent research results and emerging opportunities in Free/Open Source Software development (FOSSD). The author concludes that open source development has created new types and kinds of socio-technical work practices, development processes, and community networking (p. 465). There is also much opportunity for researchers to discover, observe, analyze, model, and simulate these practices and processes, primarily because the open nature of the projects themselves publicly provides the research materials.

FOSSD project source code, artifacts, and on-line repositories represent and offer new publicly available data sources of a size, diversity, and complexity not previously available for research, on a global basis. FOSSD projects and project artifact repositories contain process data and product artifacts that can be collected, analyzed, shared, and be re-analyzed in a free and open source manner. (p. 466)

Therefore the open source community has evolved past the early days and stereotypical models into a modern and mature solution, called OSS 2.0, for both education and enterprise. Corporations and places of learning can both benefit from incorporating OSS 2.0 into their technology structure.

The Impact of Open Source on Education

The apparent readiness of open source in the market sparked many changes in the attitude of educators towards using FOSS to educate students. This shift in perception, although not complete, is creating an environment where open source is finding purchase, albeit small, in both education and enterprise. According to a study by Van Rooij (2007) of the willingness of Chief Information Officers (CIOs) and Chief Academic Officers (CAOs) to use open source software, the participants had negative perceptions of the value of commercial software because of problems experienced with the implementation of commercial software and its lack of fit with desired needs and functions. Because of limited funding for education projects and the enticement of an opportunity to collaborate with peers, open source software was perceived as the best solution because it provided the promise of control over the end result (p. 446). Therefore open source represents a viable alternative to proprietary solutions because decision makers and staff perceive its value to be, first, the lower cost; second, the opportunity to collaborate with peers; and third, control over the end result of the programming effort. The failure of vendors to meet the needs of the customer pushed educators towards open source because it holds the promise of a solution that can meet their desired needs. These perceived benefits sparked the creation of some exceptional open source programs for education.

Moodle, a software program for Internet-based courses and websites with functionality similar to the commercial program Blackboard, is a prominent example of open source education software. According to the Moodle website:

The word Moodle was originally an acronym for Modular Object-Oriented Dynamic Learning Environment, which is mostly useful to programmers and education theorists. It’s also a verb that describes the process of lazily meandering through something, doing things as it occurs to you to do them, an enjoyable tinkering that often leads to insight and creativity. As such it applies both to the way Moodle was developed, and to the way a student or teacher might approach studying or teaching an online course. (“About Moodle – MoodleDocs,” n.d.)

One shining example of innovation with Moodle is covered in a case study (Marquart & Rizzi, 2009) in which the BELL (Building Educated Leaders for Life) program used Moodle to move its tutor training from a predominantly classroom model to a blended learning environment. The BELL program uses tutors to teach children living in low-income urban communities, and it selected Moodle because the software is open source and therefore cost effective (p. 52). The study concludes that Moodle saved the organization money and provided an exceptional platform for the tutoring program, producing significant, measurable benefits and outcomes. The implementation of Moodle thus allowed a not-for-profit organization to positively transform a tutoring program on a small budget and see significant educational benefits. This is one of the positive ways that open source is transforming education.

The Impact of Open Source on Enterprise

Although many still debate whether open source software is a genuine alternative to proprietary solutions, this debate is largely noise designed to draw attention, because most IT shops are no longer supported by a single vendor but are instead blended shops running both open source and proprietary products. In a study of several IT shops, Baird (2008) puts forth the notion that most current enterprise IT is a mix of proprietary and open source solutions, and that this holds in both the private sector and government facilities. The reasons given for this blend are practical rather than ideological: as governments and companies reorganize and offices merge, multiple disparate IT systems must be integrated. According to the study, there are four primary reasons that, despite rhetoric to the contrary, IT shops are a mix:

(1) an enterprise’s IT is essentially built anew and requires specific solutions, some of which are open-source and some proprietary, (2) an enterprise builds upon legacy systems and the “modernization” requires implementation of both approaches, (3) a government enterprise has to interface with technology that is popular with its citizens which may encompass both approaches, and (4) vendors have embraced business strategies that incorporate the distribution and support of both open-source and proprietary software. In all of these cases, the key to success for the enterprise is to assure interoperability and maximize the efficiency and value of the combined technologies (p. 234).

Therefore, the rhetoric claiming that open source is not a viable solution for IT shops is false. Using open source alongside proprietary software is the implementation model that will meet the needs of the most technology users.

A second area where open source has a real impact on the enterprise is the development methods used by programmers. Open source development methodology evolved alongside the software itself and produced an exceptionally effective model for delivering quality code. The Mozilla project, whose developers build the open source browser Firefox, exemplifies this process: it follows an extremely disciplined methodology that has been copied by many open source and proprietary development shops (DiBona, 2006). Many of the tools created by the Mozilla team were released as open source and found their way into other large development shops, which adopted the tools and methodology proven by the success of Mozilla's software projects. The methodology developed by the Mozilla organization has therefore added a significant amount of quality not only to open source projects but to proprietary software development as well.

Ignoring Open Source is a Mistake

Despite the attention-seeking noise from both camps, open source is here to stay, and ignoring its benefits in the ever-changing software market would be a foolish decision for a vendor, educator, or development shop. The costs, which can be measured in TCO, provide a clear analysis of how open source, when mixed with proprietary software, creates a beneficial solution. Right now the education market offers an opening for open source to take full advantage of the maturity the software has achieved. Open source projects have also contributed many tools and methodologies that will benefit any software development project, regardless of the license assigned to the final product.
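For readers who want to see what such a cost comparison looks like in practice, the following back-of-the-envelope sketch compares five-year TCO for a proprietary product and an open source alternative. Every figure is hypothetical and labeled as such in the code; none of the numbers come from the studies cited here.

# A hypothetical back-of-the-envelope TCO comparison; none of these figures
# come from the cited studies. It illustrates that license fees are only one
# line item, so open and proprietary options should be compared on total
# cost of ownership rather than sticker price.

def tco(license_per_year, support_per_year, migration, training, years=5):
    """Total cost of ownership over a planning horizon, in dollars."""
    return (license_per_year + support_per_year) * years + migration + training

if __name__ == "__main__":
    proprietary = tco(license_per_year=40_000, support_per_year=10_000,
                      migration=15_000, training=5_000)
    open_source = tco(license_per_year=0, support_per_year=25_000,
                      migration=30_000, training=15_000)
    print(f"Proprietary, 5-year TCO: {proprietary:,}")
    print(f"Open source, 5-year TCO: {open_source:,}")

In a real analysis each line item would come from the organization's own estimates, and the comparison could easily tip either way.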

Use What Works Best for Your Situation

Blind allegiance and strict adherence to either software model eliminates possible solutions to technology problems. Open source is attractive because of the control over the code and the ability to make changes and fixes as desired. Proprietary software also has merits in the support provided and the stability of a long-term vendor relationship. A CIO or CAO who is honest with themselves should seriously consider using open source and proprietary solutions together to create a best-fit solution. Each model has merits that, when properly implemented together in a mixed environment, represent the best possible solution for the majority of situations and users.

References

About Moodle – MoodleDocs. (n.d.). Retrieved March 3, 2010, from http://docs.moodle.org/en/About_Moodle

Baird, S. A. (2008). The heterogeneous world of proprietary and open-source software. In Proceedings of the 2nd International Conference on Theory and Practice of Electronic Governance – ICEGOV ’08 (p. 232). Presented at the 2nd International Conference, Cairo, Egypt. doi:10.1145/1509096.1509143

Castelluccio, M. (2008). Enterprise Open Source Adoption. Strategic Finance, 90(5), 57-58.

DiBona, C. (2006). Open sources 2.0: The continuing evolution (1st ed.). Beijing; Sebastopol, CA: O’Reilly.

Fichman, R. G. (2004). Real Options and IT Platform Adoption: Implications for Theory and Practice. Information Systems Research, 15(2), 132-154.

Fitzgerald, B. (2006). The transformation of open source software. MIS Quarterly, 30(3), 587-598.

Hoepman, J., & Jacobs, B. (2007). Increased security through open source. Communications of the ACM, 50(1), 79-83.

Jaisingh, J., See-To, E. W. K., & Tam, K. Y. (2008). The Impact of Open Source Software on the Strategic Choices of Firms Developing Proprietary Software. Journal of Management Information Systems, 25(3), 241-275.

Marquart, M., & Rizzi, Z. J. (2009). Case Study of BELL E-learning: Award-Winning, Interactive E-learning on a Nonprofit Budget. (Vol. 2, pp. 50-56). Retrieved from http://search.ebscohost.com

Scacchi, W. (2007). Free/open source software development: recent research results and emerging opportunities. In The 6th Joint Meeting on European software engineering conference and the ACM SIGSOFT symposium on the foundations of software engineering: companion papers (pp. 459-468). Dubrovnik, Croatia: ACM. Retrieved from http://portal.acm.org.library.capella.edu/citation.cfm?id=1295014.1295019

Van Rooij, S. W. (2007). Perceptions of Open Source Versus Commercial Software: Is Higher Education Still on the Fence? Journal of Research on Technology in Education, 39(4), 433-453.

Woods, D. (2005). Open source for the enterprise: Managing risks, reaping rewards (1st ed.). Sebastopol, CA: O’Reilly.