PhD, EngD, MPhil and MSc by research theses (SAS)
Browsing PhD, EngD, MPhil and MSc by research theses (SAS) by Supervisor "Allwood, R. L."
Item (Open Access): Generating project value through design for reliability: on the development and implementation of a potential value framework (Cranfield University, 2007-10)
Woods, K. B. W.; Allwood, R. L.; Johnson, M.; Strutt, J. E.

The current trend in the economic exploitation of deepwater hydrocarbon reserves is to reduce capital expenditure, which is accomplished by deploying subsea equipment. The financial benefit afforded is offset by the risk of high operational costs associated with failure. Recognition of the life cycle cost implications of subsea reliability has led to the development of the reliability strategy. This strategy adopts a risk-based approach to design for reliability in which only those analyses (and their subsequent recommended actions) perceived to add to whole-project value are implemented. While life cycle costing has been developed to address through-life cost, analyses are traditionally considered a source of cost accumulation rather than value creation. This thesis proposes a potential reliability value decision-making framework to assist the design-for-reliability planning process. The framework draws on the existing concepts of life cycle costing to explicitly consider the through-life value of investing in reliability analyses. Fundamental to the framework are the potential reliability value index and an associated value breakdown structure, intended as central decision support for decentralised decision making. Implementation of the framework relies on synergies within the project organization, including relationships between organizations and project functions. To enhance synergy between functions and dismantle some of the recognised barriers to implementing the reliability strategy, an organizational structure for projects, guided centrally by the reliability value framework, is proposed. This structure requires the broadening of each project function's skill set to enable the value-added implementation of the strategy's activities. By widening the scope of application, the reliability analysis toolkit becomes the central guide of the design process, and awareness of the causes of unreliability, and of how they can be avoided, increases. As this capability improves, so the cost-efficiency with which reliability is managed in design (introduced as the reliability efficiency frontier) also increases.

Item (Open Access): Risk-based reliability allocation at component level in non-repairable systems by using evolutionary algorithm (Cranfield University, 2007-04)
Hussain, Syed Adeel; Todinov, M. T.; Allwood, R. L.

The approach for setting system reliability in the risk-based reliability allocation (RBRA) method is driven solely by the amount of 'total losses' (the sum of reliability investment and risk of failure) associated with a non-repairable system failure. For a system consisting of many components, reliability allocation by the RBRA method becomes a very complex combinatorial optimisation problem, particularly if a large number of alternatives, with different levels of reliability and associated cost, is considered for each component. Furthermore, the complexity of this problem is magnified when the relationship between cost and reliability is assumed to be nonlinear and non-monotone. An optimisation algorithm (OA) is therefore developed in this research to demonstrate the solution of such difficult problems.
The core design of the OA originates from the fundamental concepts of basic evolutionary algorithms, which are well known for emulating the natural process of evolution in solving complex optimisation problems through computer simulation of key genetic operations such as 'reproduction', 'crossover' and 'mutation'. However, the OA has been designed with a significantly different model of evolution (for identifying valuable parent solutions and subsequently turning them into even better child solutions) compared to the classical genetic model, in order to ensure rapid and efficient convergence of the search process towards an optimum solution. The vital features of this OA model are the generation of all populations (samples) with unique chromosomes (solutions), working exclusively with the elite chromosomes in each iteration, and the application of prudently designed genetic operators on the elite chromosomes, with extra emphasis on the mutation operation. For each possible combination of alternatives, both system reliability and cost of failure are computed by means of the Monte Carlo simulation technique. For validation purposes, the optimisation algorithm is first applied to solve an already published reliability optimisation problem with a constraint on a target level of system reliability, which is required to be achieved at minimum system cost. After successful validation, the viability of the OA is demonstrated by applying it to optimise four different non-repairable sample systems using the risk-based reliability allocation method. Each system is assumed to have a discrete choice of component data set, showing a monotonically increasing cost-reliability relationship among the alternatives, and a fixed cost-of-failure amount. While this optimisation process is the main objective of the research study, two variations are also introduced in this process for the purpose of undertaking parametric studies. To study the effects of changes in the reliability investment on system reliability and total loss, the first variation involves using a different choice of discrete data set, exhibiting a non-monotonically increasing relationship between cost and reliability among the alternatives. To study the effects of the risk of failure, the second variation in the optimisation process is introduced by means of a different cost-of-failure amount associated with a given non-repairable system failure. The optimisation processes reveal very interesting relationships between system reliability and total loss. For instance, it is observed that while maximum reliability can generally be associated with high total loss and low risk of failure, the minimum observed value of the total loss is not always associated with minimum system reliability. Therefore, the results exhibit various levels of system reliability and total loss, with both values showing strong sensitivity to the selected combination of component alternatives. The first parametric study shows that the second (non-monotone) data set creates more opportunities for the optimisation process to produce better values of the loss function, since cheaper components with higher reliabilities can be selected with higher probability. The second parametric study shows that a reduction in the cost-of-failure amount reduces the risk of failure, which in turn increases the chance of using cheaper components with lower levels of reliability, hence producing lower values of the loss function.
The research study concludes that the risk-based reliability allocation method, together with the optimisation algorithm, can be used as a powerful tool for highlighting the various levels of system reliability, and the associated total losses, for any given system under consideration. This notion can be further extended to selecting an optimal system configuration from various competing topologies. With such information to hand, reliability engineers can streamline complicated system designs in view of the required level of system reliability with the minimum associated total cost of premature failure. In all cases studied, the run time of the optimisation algorithm increases linearly with the complexity of the algorithm and, due to its unique model of evolution, it appears to conduct a very detailed multi-directional search across the solution space in fewer generations, a very important attribute for solving the kind of problem studied in this research. Consequently, it converges rapidly towards the optimum solution, unlike the classical genetic algorithm, which reaches the optimum gradually, when successful. The research also identifies key areas for future development, with scope to expand in various other dimensions owing to its interdisciplinary applications.

Item (Open Access): The use of Hall effect probes in the detection and sizing of cracks in steel structures (Cranfield University, 1991-02)
Konefal, T.; Allwood, R. L.

The most commonly used method for non-destructive testing (NDT) of welded tubulars in underwater locations is magnetic particle inspection (MPI). This method is effective in terms of crack or defect detection, but requires much diver effort. This work examines the use of Hall effect probes for crack detection and measurement in steel specimens and in underwater pipelines and structures. A simple theory of magnetic leakage fields, and of how such fields relate to crack characteristics, is developed. The finite sizes of the Hall probes employed are taken into account, and an analytic expression for the field from a tapered crack is developed. Practical magnetic signals from a cracked Y-jointed tubular are recorded and shown to be consistent with MPI indications. A double-probe system is proposed which enables crack depth measurements to be made without knowledge of the crack width or of the level of magnetisation in the specimen. Experiments using a prototype double-probe system show encouraging results on artificial cracks in small specimens, though there is a troubling unknown background bias effect in the measured signals. An instrument using a time-differentiated probe signal has been developed which is capable of detecting a crack in a Y-joint at a scan height of up to 5 mm with a level of magnetisation rather lower than that used by MPI. A method of continuously monitoring a crack in a Y-joint is also described, using multiple differential pairs of probes. The method is found to give indications consistent and comparable with MPI.
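Relating to the first thesis above: its abstract names, but does not define, the potential reliability value index. The short sketch below therefore uses an assumed, hypothetical definition (expected through-life failure cost avoided by a reliability analysis, net of the analysis cost and normalised by that cost) purely to illustrate the kind of through-life value comparison the framework describes; the function name, inputs and figures are all invented for illustration.

```python
def potential_value_index(analysis_cost, p_fail_before, p_fail_after, failure_cost):
    """Hypothetical index: expected through-life failure cost avoided by a
    reliability analysis, net of the analysis cost, normalised by that cost.
    (Illustrative definition only; not the index defined in the thesis.)"""
    expected_saving = (p_fail_before - p_fail_after) * failure_cost
    return (expected_saving - analysis_cost) / analysis_cost

# Example: a 200k analysis expected to cut the probability of a subsea failure
# from 0.10 to 0.04 against a 30M intervention cost (all figures invented).
print(potential_value_index(200_000, 0.10, 0.04, 30_000_000))  # -> 8.0, i.e. value-adding
```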
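Relating to the second thesis (risk-based reliability allocation by evolutionary algorithm): the sketch below is a heavily simplified illustration of the general idea only, an elite-driven, mutation-heavy evolutionary search over discrete component alternatives, with the total loss (reliability investment plus risk of failure) of each candidate estimated by Monte Carlo simulation of a series system. The component data, system structure, population sizes and operators are all invented for illustration and are not the optimisation algorithm developed in the thesis.

```python
import random

# Alternatives per component: (reliability, investment cost); all values invented.
ALTERNATIVES = [
    [(0.90, 10.0), (0.95, 25.0), (0.99, 70.0)],   # component 1
    [(0.85, 8.0),  (0.92, 20.0), (0.97, 55.0)],   # component 2
    [(0.88, 12.0), (0.96, 30.0), (0.995, 90.0)],  # component 3
]
COST_OF_FAILURE = 1000.0  # fixed loss incurred if the non-repairable system fails
N_MC = 5_000              # Monte Carlo trials per candidate solution

def total_loss(chromosome):
    """Total loss = reliability investment + risk of failure (Monte Carlo estimate)."""
    choices = [ALTERNATIVES[i][gene] for i, gene in enumerate(chromosome)]
    investment = sum(cost for _, cost in choices)
    failures = sum(
        any(random.random() > r for r, _ in choices)  # series system: one component failure fails the system
        for _ in range(N_MC)
    )
    return investment + (failures / N_MC) * COST_OF_FAILURE

def mutate(chromosome):
    """Re-draw the alternative chosen for one randomly selected component."""
    child = list(chromosome)
    i = random.randrange(len(child))
    child[i] = random.randrange(len(ALTERNATIVES[i]))
    return tuple(child)

def optimise(pop_size=20, n_elite=5, generations=20):
    # Populations hold unique chromosomes only (a set), echoing the emphasis on
    # unique solutions, elite selection and mutation (crossover omitted here).
    population = {tuple(random.randrange(len(a)) for a in ALTERNATIVES)
                  for _ in range(pop_size)}
    for _ in range(generations):
        elite = sorted(population, key=total_loss)[:n_elite]
        offspring = {mutate(parent) for parent in elite
                     for _ in range(pop_size // n_elite)}
        population = set(elite) | offspring
    best = min(population, key=total_loss)
    return best, total_loss(best)

if __name__ == "__main__":
    best, loss = optimise()
    print("chosen alternative per component:", best, "| estimated total loss:", round(loss, 1))
```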
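Relating to the third thesis (Hall effect probes): the sketch below illustrates only the general principle behind a double-probe measurement, namely that the ratio of leakage-field readings taken at two scan heights cancels the unknown field strength, so no knowledge of the magnetisation level is needed. The inverse-square line-dipole field model and the "effective depth" it recovers are assumptions made for illustration; the thesis derives its own analytic leakage-field expressions.

```python
import math

def leakage_field(h, m, d):
    """Assumed vertical leakage field at scan height h above a crack, modelled
    as a line dipole of strength m (lumping magnetisation level and crack
    width) at effective depth d: B(h) = m / (h + d)**2. Illustrative only."""
    return m / (h + d) ** 2

def depth_from_double_probe(b1, b2, h1, h2):
    """Recover the effective depth d from readings at two heights h1 < h2.

    The unknown strength m cancels in the ratio b1/b2, which is the point of
    using two probes: the result is independent of the magnetisation level.
    """
    s = math.sqrt(b1 / b2)          # s = (h2 + d) / (h1 + d)
    return (h2 - s * h1) / (s - 1)

# Synthetic check: generate readings for an unknown strength and recover d.
m_true, d_true = 3.7e-4, 2.0        # arbitrary units, mm
h1, h2 = 1.0, 5.0                   # probe lift-offs in mm
b1 = leakage_field(h1, m_true, d_true)
b2 = leakage_field(h2, m_true, d_true)
print(round(depth_from_double_probe(b1, b2, h1, h2), 3))  # -> 2.0
```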