► We explore the relationship between product designs and organizational designs. ► We compare open source software with software developed by commercial firms. ► We measure modularity by capturing the level of coupling between components. ► We find that loosely coupled organizations tend to develop more modular products. ► The differences in modularity are substantial—up to a factor of six in our sample.
A variety of academic studies argue that a relationship exists between the structure of an organization and the design of the products that this organization produces. Specifically, products tend to “mirror” the architectures of the organizations in which they are developed. This dynamic occurs because the organization's governance structures, problem-solving routines, and communication patterns constrain the space in which it searches for new solutions. Such a relationship is important, given that product architecture has been shown to be a significant predictor of product performance, product variety, process flexibility, and even the path of industry evolution.
We explore this relationship in the software industry. Our research takes advantage of a natural experiment, in that we observe products that fulfill the same function being developed by very different organizational forms. At one extreme are commercial software firms, in which the organizational participants are tightly coupled with respect to their goals, structure, and behavior. At the other are open source software communities, in which the participants are much more loosely coupled by comparison. The mirroring hypothesis predicts that these different organizational forms will produce products with distinctly different architectures. Specifically, loosely coupled organizations will develop more modular designs than tightly coupled organizations. We test this hypothesis using a sample of matched-pair products.
We find strong evidence to support the mirroring hypothesis. In all of the pairs we examine, the product developed by the loosely-coupled organization is significantly more modular than the product from the tightly-coupled organization. We measure modularity by capturing the level of coupling between a product's components. The magnitude of the differences is substantial—up to a factor of six, in terms of the potential for a design change in one component to propagate to others. Our results have significant managerial implications, in highlighting the impact of organizational design decisions on the technical structure of the artifacts that these organizations subsequently develop.
This paper reports data from a study that seeks to characterize the differences in design structure between complex software products. We use design structure matrices (DSMs) to map dependencies between the elements of a design and define metrics that allow us to compare the structures of different designs. We use these metrics to compare the architectures of two software products, the Linux operating system and the Mozilla Web browser, that were developed via contrasting modes of organization: specifically, open source versus proprietary development. We then track the evolution of Mozilla, paying attention to a purposeful "redesign" effort undertaken with the intention of making the product more "modular." We find significant differences in structure between Linux and the first version of Mozilla, suggesting that Linux had a more modular architecture. Yet we also find that the redesign of Mozilla resulted in an architecture that was significantly more modular than that of its predecessor and, indeed, than that of Linux. Our results, while exploratory, are consistent with a view that different modes of organization are associated with designs that possess different structures. However, they also suggest that purposeful managerial actions can have a significant impact in adapting a design's structure. This latter result is important given recent moves to release proprietary software into the public domain. These moves are likely to fail unless the product possesses an "architecture for participation."
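The DSM-based coupling comparison described above can be sketched in a few lines. The following is a minimal illustration, not the authors' actual metric code: it assumes a binary DSM (entry [i][j] = 1 if component i depends on component j) and computes the share of component pairs where a change in one component can reach another through some chain of dependencies, a common way to summarize the "potential for a design change to propagate."

```python
import numpy as np

def propagation_cost(dsm: np.ndarray) -> float:
    """Fraction of component pairs (i, j) where component i can be
    affected by a change to j via direct or indirect dependencies."""
    n = dsm.shape[0]
    # Start from direct dependencies plus each component's self-dependency.
    visibility = ((np.eye(n, dtype=int) + dsm) > 0).astype(int)
    # Iterate to a fixed point: the transitive closure of the DSM.
    while True:
        nxt = ((visibility + visibility @ visibility) > 0).astype(int)
        if np.array_equal(nxt, visibility):
            return visibility.sum() / (n * n)
        visibility = nxt

# Hypothetical 4-component system: A depends on B, B depends on C,
# and D stands alone.
dsm = np.array([
    [0, 1, 0, 0],   # A -> B
    [0, 0, 1, 0],   # B -> C
    [0, 0, 0, 0],   # C
    [0, 0, 0, 0],   # D
])
print(propagation_cost(dsm))  # 7 of 16 pairs reachable -> 0.4375
```

On this toy matrix, the indirect dependency A -> C is picked up by the closure; a "factor of six" difference of the kind reported above would show up as one product's score being six times the other's.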
Uncertain and dynamic environments present fundamental challenges to managers of the new product development process. Between successive product generations, significant evolutions can occur in both the customer needs a product must address and the technologies it employs to satisfy these needs. Even within a single development project, firms must respond to new information, or risk developing a product that is obsolete the day it is launched. This paper examines the characteristics of an effective development process in one such environment: the Internet software industry. Using data on 29 completed development projects, we show that in this industry, constructs that support a more flexible development process are associated with better-performing projects. This flexible process is characterized by the ability to generate and respond to new information for a longer proportion of a development cycle. The constructs that support such a process are greater investments in architectural design, earlier feedback on product performance from the market, and the use of a development team with greater amounts of "generational" experience. Our results suggest that investments in architectural design play a dual role in a flexible process: first, through the need to select an architecture that maximizes product performance and, second, through the need to select an architecture that facilitates development process flexibility. We provide examples from our fieldwork to support this view.
There is increasing interest in the literature in the notion of a contingent approach to product development process design. This interest stems from the realization that different types of projects carried out in different environments are likely to require quite different development processes if they are to be successful. Stated more formally, a contingent view implies that the performance impact of different development practices is likely to be mediated by the context in which those practices operate. This article provides evidence to support such a view.
Our work examines whether projects in which the development process matches the context achieve superior performance. We focus on two sources of uncertainty that generate challenges for project teams: platform uncertainty, reflecting the uncertainty generated by the amount of new design work that must be undertaken in a project; and market uncertainty, reflecting the uncertainty faced in determining customer requirements for the product under development. We develop hypotheses for how these sources of uncertainty are likely to influence the relationships between a number of specific development practices and performance. We then test these hypotheses using data from a sample of 29 Internet software development projects.
Our results provide evidence to support a contingent view of development process design. We show that in projects facing greater uncertainty, investments in architectural design, early technical feedback, and early market feedback have a stronger association with performance. The latter relationships are influenced by the specific sources from which this uncertainty stems, with platform uncertainty mediating the impact of early technical feedback and market uncertainty mediating the impact of early market feedback. Our results also indicate that while greater uncertainty is associated with making later changes to a product's design, this practice is not associated with performance.
Our findings suggest that managers must carefully evaluate both the levels and sources of uncertainty facing a project before designing the most appropriate process for its execution. In particular, they should explore the use of specific development practices based upon their usefulness in resolving the specific types of uncertainty faced. Importantly, these decisions must be made at the start of a project, with purposeful investments to create a process that best matches the context. Reacting to uncertainty ex‐post, without such investments in place, is unlikely to prove a successful strategy.
Many studies highlight the challenges facing incumbent firms in responding effectively to major technological transitions. Though some authors argue that these challenges can be overcome by firms possessing what have been called dynamic capabilities, little work has described in detail the critical resources that these capabilities leverage or the processes through which these resources accumulate and evolve. This paper explores these issues through an in‐depth exploratory case study of one firm that has demonstrated consistently strong performance in an industry that is highly dynamic and uncertain. The focus for the present study is Microsoft, the leading firm in the software industry. The focus on Microsoft is motivated by evidence that the firm's product performance has been consistently strong over a period in which there have been several major technological transitions—one indicator that a firm possesses dynamic capabilities. This argument is supported by showing that Microsoft's performance when developing new products in response to one of these transitions—the growth of the World Wide Web—was superior to that of a sample of both incumbents and new entrants. Qualitative data are presented on the roots of Microsoft's dynamic capabilities, focusing on the way that the firm develops, stores, and evolves its intellectual property. Specifically, Microsoft codifies knowledge in the form of software “components,” which can be leveraged across multiple product lines over time and accessed by firms developing complementary products. The present paper argues that the process of componentization, the component “libraries” that result, the architectural frameworks that define how these components interact, and the processes through which these components are evolved to address environmental changes represent critical resources that enable the firm to respond to major technological transitions.
These arguments are illustrated by describing Microsoft's response to two major technological transitions.
Recent contributions to information systems theory suggest that the primary role of a firm’s information technology (IT) architecture is to facilitate, and therefore ensure, the continued alignment of a firm’s IT investments with a constantly changing business environment. Despite these advances, we lack robust methods with which to operationalize enterprise IT architecture in a way that allows us to analyze performance, in terms of the ability to adapt and evolve over time. We develop a methodology for analyzing enterprise IT architecture based on “Design Structure Matrices” (DSMs), which capture the coupling between all components in the architecture. Our method addresses the limitations of prior work, in that it i) captures the architecture “in-use” as opposed to high-level plans or conceptual models; ii) identifies discrete layers in the architecture associated with different technologies; iii) reveals the “flow of control” within the architecture; and iv) generates measures that can be used to analyze performance. We apply our methodology to a dataset from a large pharmaceutical firm. We show that measures of coupling derived from an IT architecture DSM predict IT modifiability – defined as the cost to change software applications. Specifically, applications that are tightly coupled cost significantly more to change.
Formal contracts represent an important governance instrument with which firms exercise control over, and compensate, partners in R&D projects. The specific type of contract used, however, can vary significantly across projects. In some, firms govern partnering relationships through fixed‐price contracts, whereas in others, firms use more flexible time-and-materials or performance‐based contracts. How do these choices affect the costs and benefits that arise from greater levels of partner integration? Furthermore, how are these relationships affected when the choice of contract is misaligned with the scope and objectives of the partnering relationship? Our study addresses these questions using data from 172 R&D projects that involve partners. We find that (i) greater partner integration is associated with higher project costs for all contract types; (ii) greater partner integration is associated with higher product quality only in projects that adopt more flexible time-and-materials or performance‐based contracts; and (iii) in projects where the choice of contract is misaligned with the scope and objectives of the partnering relationship, greater partner integration is associated with higher project costs, but not with higher product quality. Our results shed light on the subtle interplay between formal and relational contracting. They have important implications for practice, with respect to designing optimal governance structures in partnered R&D projects.
The importance of platform‐based businesses in the modern economy is growing continuously. Specifically, the deployment of digital technologies has enhanced the applicability of two‐sided business models, enabling companies to act not just as builders and owners of assets, but also as orchestrators of external resources. Management research has therefore focused increasingly on the unique aspects of this model. At the center of a two‐sided platform is a platform provider that enables transactions between the sides, reducing the relative transaction costs. However, in recent years a new technology has emerged that challenges some of the underlying assumptions of this model: the blockchain. Blockchain enables the creation of a peer‐to‐peer network that is able to authenticate transactions, upon which applications and services may be built. It allows users to conduct transactions without the need for a central platform.
We explore how blockchain technology reshapes two‐sided platforms, focusing in particular on the role of the platform provider. The research is based upon multiple case studies, using an inductive approach to explore this emerging phenomenon. Our findings show a significant shift in the role of the central player that links the two sides of a transaction using blockchain. We frame this as a shift from a “platform provider” to a “service provider,” leveraging the blockchain as a Platform‐as‐a‐Service. Our work examines the peculiarities of this model, unveiling new dynamics in these businesses. Specifically, we show that different variables must be considered to classify two‐sided platforms using blockchain. Furthermore, the essential characteristics of two‐sided platforms must also be expanded. For example, traditional platform theories emphasize the importance of cross‐side network externalities in creating value. In blockchain‐enabled platforms, however, we show that the use of “tokens” plays a key role in creating different types of externalities between the two sides.
• We describe a methodology for characterizing the architecture of complex systems. • Our methodology is based upon directed network graphs. • We define three types of architecture: Core–Periphery, Multi-Core, and Hierarchical. • We apply our methodology to a sample of 1286 software releases. • We find that most systems in our sample possess a Core–Periphery structure.
In this paper, we describe an operational methodology for characterizing the architecture of complex technical systems and demonstrate its application to a large sample of software releases. Our methodology is based upon directed network graphs, which allow us to identify all of the direct and indirect linkages between the components in a system. We use this approach to define three fundamental architectural patterns, which we label core–periphery, multi-core, and hierarchical. Applying our methodology to a sample of 1286 software releases from 17 applications, we find that the majority of releases possess a “core–periphery” structure. This architecture is characterized by a single dominant cyclic group of components (the “Core”) that is large relative to the system as a whole, as well as to other cyclic groups in the system. We show that the size of the Core varies widely, even for systems that perform the same function. These differences appear to be associated with different models of development – open, distributed organizations develop systems with smaller Cores, while closed, co-located organizations develop systems with larger Cores. Our findings establish some “stylized facts” about the fine-grained structure of large, real-world technical systems, serving as a point of departure for future empirical work.
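The "dominant cyclic group" idea above maps onto a standard graph computation: a cyclic group of components is a strongly connected component of the dependency graph, and the Core is the largest one. The sketch below, using Kosaraju's two-pass algorithm and hypothetical component names, illustrates the idea; it is not the paper's own tooling.

```python
from collections import defaultdict

def largest_cyclic_group(edges, nodes):
    """Size of the largest strongly connected component (the 'Core')
    of a directed dependency graph."""
    graph, rev = defaultdict(list), defaultdict(list)
    for a, b in edges:
        graph[a].append(b)
        rev[b].append(a)

    # Pass 1: DFS on the graph, recording nodes in order of completion.
    order, seen = [], set()
    def dfs1(u):
        seen.add(u)
        for v in graph[u]:
            if v not in seen:
                dfs1(v)
        order.append(u)
    for u in nodes:
        if u not in seen:
            dfs1(u)

    # Pass 2: DFS on the reversed graph in reverse finish order;
    # each tree found is one strongly connected component.
    comp = {}
    def dfs2(u, root):
        comp[u] = root
        for v in rev[u]:
            if v not in comp:
                dfs2(v, root)
    for u in reversed(order):
        if u not in comp:
            dfs2(u, u)

    sizes = defaultdict(int)
    for root in comp.values():
        sizes[root] += 1
    return max(sizes.values())

# Toy system: components a, b, c form a dependency cycle; d only calls in.
edges = [("a", "b"), ("b", "c"), ("c", "a"), ("d", "a")]
print(largest_cyclic_group(edges, ["a", "b", "c", "d"]))  # prints 3
```

Comparing this Core size to the total number of components gives the relative Core size that the paper reports varying widely across systems performing the same function.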
► We provide an empirical examination of a Grand Innovation Prize (GIP) in action. ► Divergence exists between the empirical reality of GIPs and the literature on prize theory and policy. ► We offer a practical framework and roadmap for future GIP theory, policy and design. ► GIP design includes specifications, incentives, qualification rules and governance.
This paper provides a systematic examination of the use of a Grand Innovation Prize (GIP) in action – the Progressive Automotive Insurance X PRIZE – a $10 million prize for a highly efficient vehicle. Following a mechanism design approach, we define three key dimensions for GIP evaluation: objectives, design, and performance, where prize design includes ex ante specifications, ex ante incentives, qualification rules, and award governance. Within this framework we compare observations of GIPs from three domains – empirical reality, theory, and policy – to better understand their function as an incentive mechanism for encouraging new solutions to large-scale social challenges. Combining data from direct observation, personal interviews, and surveys, together with analysis of extant theory and policy documents on GIPs, our results highlight three points of divergence: first, over the complexity of defining prize specifications; second, over the nature and role of incentives, particularly patents; and third, over the overlooked challenges associated with prize governance. Our approach identifies a clear roadmap for future theory and policy around GIPs.