"Terminate inefficient or ineffective programs," Jonathan Greenert said while listing multiple Navy cost-saving initiatives and describing the service's ongoing "wholeness review." "It isn't going to happen ... we're going to have to terminate it, sorry," Greenert said during a speech in Washington. He did not name any programs during his comments to a Center for Strategic and International Studies audience that included defense-industry officials. "We've been working that pretty deliberately on SSBN(X)," he said. "The most recent (activity) ... we have actually put cost as ... an objective threshold, just like we do the other parameters. And we want to move training to that, and have that conversation."
Next-generation data centers must be designed to meet service-level agreements (SLAs) for application performance while reducing costs and environmental impact. Traditional design approaches are manually intensive and must integrate thousands of components at multiple granularities, often with conflicting goals. We propose an automated data center synthesizer to design sustainable data centers that meet SLA goals, minimize carbon emissions and embedded exergy, maximize efficiency, and deliver significantly reduced Total Cost of Ownership (TCO). The paper concludes with a use-case study that employs the synthesizer process flow to design an optimal data center delivering a set of services for a hypothetical city using state-of-the-art sustainable technologies.
When a petrochemical end-user builds a new plant, almost all end-users have specifications with a preferred-vendor list. To foster competition, end-users put more than one brand on the preferred-vendor list. The Engineering, Procurement, and Construction (EPC) contractor will then buy the cheapest brand on the list. This means selection is based on minimizing the Capital Expenditures (CAPEX) rather than minimizing the Total Cost of Ownership (TCO), which is the sum of CAPEX and Operating Expenditures (OPEX). Analyzing the TCO, it becomes clear that the OPEX can be divided into two segments. The first is the controlled OPEX, which can clearly be calculated. The second segment of OPEX, however, is less clear and depends on risk. Risk is defined as the product of the probability that a situation will occur during the lifetime of the product and the consequences it will have for the process in which the product fulfils its role. Controlling risk means reducing the probability or limiting the consequences. Some tools from the field of asset management used by utilities are applied to a Low Voltage Motor Control Centre (MCC) to give guidance on how decisions can be argued. As will be shown, limiting the consequences can be translated into mandatory and preferred specifications that can be used in a tender. But what will happen when one of the brands on the preferred-vendor list has a product development that increases the CAPEX but reduces the TCO, i.e. reduces the risk? Taking such manufacturer-specific features into account demands that the purchasing process weigh more aspects than CAPEX alone when deciding which vendor is granted the project.
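The decomposition described in this abstract can be made concrete with a small numerical sketch. All figures and vendor names below are invented for illustration; the only relations taken from the abstract are TCO = CAPEX + OPEX, the split of OPEX into a controlled part and a risk-based part, and risk = probability × consequence.

```python
# Illustrative TCO comparison for two hypothetical MCC vendors.
# Risk-based OPEX is the sum of (probability * consequence) over the
# identified failure scenarios, as the abstract defines risk.

def tco(capex, controlled_opex, scenarios):
    """TCO = CAPEX + controlled OPEX + sum of (p_i * consequence_i)."""
    risk_opex = sum(p * consequence for p, consequence in scenarios)
    return capex + controlled_opex + risk_opex

# Vendor A: cheapest to buy, higher failure probability over the lifetime.
vendor_a = tco(capex=100_000, controlled_opex=40_000,
               scenarios=[(0.10, 500_000)])   # 10% chance of a 500k outage

# Vendor B: higher CAPEX, but product development reduces the risk.
vendor_b = tco(capex=120_000, controlled_opex=40_000,
               scenarios=[(0.02, 500_000)])   # 2% chance of the same outage

print(vendor_a)  # 190000.0 -> cheapest CAPEX, but not the lowest TCO
print(vendor_b)  # 170000.0
```

In this toy case the EPC contractor's CAPEX-only rule picks Vendor A, while the TCO view, once risk is priced in, favors Vendor B.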
Museum collection management on-demand Wu, Steven; Chua, Philip
Proceedings of the 2nd international conference on Theory and practice of electronic governance,
12/2008
Conference Proceeding
In this case study, we trace the rationale, development and operation of the Integrated Museum Collection Management System as a software-as-a-service solution for public as well as private museums in Singapore. This on-demand service may serve as a model for other public services, particularly in the context of business process re-engineering and standards adoption. It comprises a traditional client-server solution for internal users and a web frontend for public search. From an operations perspective, the service has posed challenges in delivery and support. Lastly, standards and technology adoption issues have to be addressed.
Why computer-based systems should be autonomic Sterritt, R.; Hinchey, M.
12th IEEE International Conference and Workshops on the Engineering of Computer-Based Systems (ECBS'05),
2005
Conference Proceeding
Open Access
The objective of this paper is to discuss why computer-based systems should be autonomic, where autonomicity implies self-managing, often conceptualized in terms of being self-configuring, self-healing, self-optimising, self-protecting and self-aware. We look at motivations for autonomicity, examine how more and more systems are exhibiting autonomic behavior, and finally look at future directions.
Open source has created a paradigm shift in the way software is provided. In simple terms, organizations of all sizes are doing serious cost and feasibility evaluations before taking the leap to open source, and are concluding, for a variety of reasons, that the open source option is built on a solid base. As an operating system that can run on anything from a cheap PC to a mainframe, Linux is the virtual foundation upon which open source rests. Because open source software is initially free, total cost of ownership (TCO) is the only way one can compare it with the alternatives. A software platform like Linux interacts a great deal with other IT components, so TCO gets complicated: through its various technical attributes, it influences hardware costs, support and training costs, and the licensing costs of other software products. The following categories are often considered: 1. security, 2. stability, 3. scalability, and 4. performance and overhead.
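The kind of cross-category TCO comparison this passage describes can be sketched as follows. The cost categories follow the abstract (hardware, support, training, licenses); every figure is invented, and the point is only that a zero license fee does not by itself settle the comparison.

```python
# Hypothetical TCO breakdown comparing an open-source and a proprietary
# platform over the same period. All figures are invented: the license
# fee is zero for open source, but other categories may run higher.

COST_CATEGORIES = ("licenses", "hardware", "support", "training")

def total_cost(breakdown):
    """Sum the per-category costs into a single TCO figure."""
    return sum(breakdown[c] for c in COST_CATEGORIES)

open_source = {"licenses": 0,      "hardware": 20_000,
               "support": 35_000,  "training": 15_000}
proprietary = {"licenses": 40_000, "hardware": 25_000,
               "support": 20_000,  "training": 5_000}

print(total_cost(open_source))  # 70000
print(total_cost(proprietary))  # 90000
```

Which side wins depends entirely on the assumed support and training figures, which is exactly why the passage argues TCO, not purchase price, is the right basis for comparison.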