Introduction

This tool allows users to assess the maturity of their own organization’s, or another organization’s, approach to reliability by answering a series of questions about that approach. The answers are used to “score” the maturity of the approach as a means of indicating areas of possible improvement. The absolute value obtained from the scoring is a coarse approximation of the robustness of the organization’s approach; an in-person interview is always more effective in any kind of assessment because more detailed follow-up questions can be used. The areas of assessment are:

– Defining Reliability Program
– Developing Reliability Requirements
– Designing for Reliability
– Assessing Reliability Progress
– Measuring Product Reliability
– Ensuring Reliable Performance
Areas, Elements, and Questions

The assessment is divided into areas, elements, and questions. A series of questions makes up each element, and a series of elements makes up each area. The questions are the top-level questions that must be answered to complete the assessment. Weights are assigned to each question, element, and area behind the scenes in this tool, with Designing for Reliability weighted most heavily.

Recommendations for Improvement

The last page of this tool presents a series of recommendations for any of the six assessment areas with ratings below average (less than 3 out of 5).
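To make the weighted roll-up and the below-average threshold concrete, here is a minimal sketch in Python of how the scoring might work. The tool's actual weights are hidden behind the scenes, so every weight, score, and element count below is an illustrative assumption, not the tool's real arithmetic.

```python
# Minimal sketch of the weighted roll-up described above: scores combine
# into area scores via weights, and areas rated below 3 out of 5 are
# flagged for remedial action. All weights and scores are assumptions.

def weighted_score(scores, weights):
    """Combine 0-5 scores into one 0-5 score using normalized weights."""
    return sum(s * w for s, w in zip(scores, weights)) / sum(weights)

# Hypothetical element-level scores (0-5) and element weights per area.
areas = {
    "Defining Reliability Program":        weighted_score([4, 3, 5], [1, 1, 2]),
    "Developing Reliability Requirements": weighted_score([3, 2], [1, 1]),
    "Designing for Reliability":           weighted_score([2, 3, 2], [2, 2, 1]),
    "Assessing Reliability Progress":      weighted_score([4, 4], [1, 1]),
    "Measuring Product Reliability":       weighted_score([3, 3], [1, 1]),
    "Ensuring Reliable Performance":       weighted_score([2, 4], [1, 1]),
}

# Flag below-average areas (less than 3 out of 5) for remedial action.
for name, score in areas.items():
    flag = "  <- needs improvement" if score < 3 else ""
    print(f"{name}: {score:.1f}{flag}")
```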
Determining Customer Satisfaction

1. How do you determine customer satisfaction? Select all that apply.
   a. Customer complaints used
   b. Customer surveys / interviews used
   c. Industry trends analyzed
   d. Competitors are benchmarked
   e. Best practices are benchmarked (business processes, support services, internal customer satisfaction)
2. How is policy on reliability formed, documented, and implemented? Select all that apply.
   a. Reliability activities done as needed
   b. Reliability is part of strategic plans / policies
   c. Formal reliability policies and practices are in place
   d. Follow-up used to ensure that policies are implemented
   e. Management reviews progress and adjusts accordingly
3. Describe the distribution of reliability responsibilities and authority in the organization.
   i. Reliability responsibilities are clearly defined and monitored
   ii. All organizational entities have a plan for continuously improving their reliability efforts
4. What training is provided in reliability?
   i. Training of management in reliability strategies and benefits
   ii. Training of designers in “design for reliability”
   iii. Periodic training updates (and symposia participation) for latest approaches
5. How does the reliability effort interface with other assurance and design activities?
   i. Routine integrated working arrangement of reliability with designers
   ii. Company-wide network including reliability data / information and ongoing initiatives for continuous improvement
6. How does the organization know what level of reliability it’s delivering?
Tailoring Program or Product Reliability Efforts

1. How do you determine what reliability tasks are needed for a given program / product development?
2. How does management get visibility into the reliability program?
3. Is there a reliability strategy for products developed?
Reliability Figures of Merit

1. How are figures of merit (measures of reliability) determined?
   Considerations include differences between factory and field figures of merit, differences between mission and logistics figures of merit, and transcendent relationships such as availability and effectiveness.
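As one concrete example of such a transcendent relationship, availability combines a reliability figure of merit (MTBF) with maintainability and logistics figures of merit, which is one way the factory view and the field view of the same product diverge. The sketch below uses assumed illustrative numbers and is not part of the assessment itself.

```python
# Minimal sketch: availability as a relationship among lower-level figures
# of merit. All numbers are illustrative assumptions.

mtbf = 2000.0   # mean time between failures, hours (reliability)
mttr = 4.0      # mean time to repair, hours (maintainability)
mldt = 48.0     # mean logistics delay time, hours (spares, shipping, admin)

# Inherent availability ignores logistics delays; operational availability
# includes them, so the two can differ markedly in the field.
a_inherent = mtbf / (mtbf + mttr)
a_operational = mtbf / (mtbf + mttr + mldt)

print(f"Inherent availability:    {a_inherent:.4f}")    # ~0.9980
print(f"Operational availability: {a_operational:.4f}")  # ~0.9747
```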
Quantitative Requirements and Goals

1. How are quantitative requirements / goals determined?
Integration

1. How is reliability integrated into the design effort?
Critical Items

1. How do you identify critical items?
2. How do you handle critical items? Select all that apply.
   a. By specifications to suppliers
   b. By periodic review of status
   c. By tailored analysis and tests
   d. Periodic management review
   e. Organization-supplier partnership, back-up plans, or mandatory milestones to be met before release
Use Environment

1. How do you determine the use environment?
   i. Use of past experience of exposure conditions
   ii. Use of standardized tables of environments and stresses
   iii. Use of customer data for expected exposure
   iv. Special measurement of the environmental exposure
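One hedged example of how a measured or tabulated use environment feeds reliability work: assuming an Arrhenius temperature-acceleration model (a common but by no means universal choice), the known field temperature sets the acceleration factor for an elevated-temperature test. The activation energy and temperatures below are illustrative assumptions.

```python
import math

# Minimal sketch under an assumed Arrhenius model: how field and test
# temperatures relate through an acceleration factor.

BOLTZMANN_EV = 8.617e-5  # Boltzmann constant, eV/K

def arrhenius_af(t_use_c, t_test_c, ea_ev=0.7):
    """Acceleration of a test at t_test_c relative to use at t_use_c."""
    t_use = t_use_c + 273.15   # convert Celsius to Kelvin
    t_test = t_test_c + 273.15
    return math.exp((ea_ev / BOLTZMANN_EV) * (1.0 / t_use - 1.0 / t_test))

# E.g., an 85 C test accelerates a 40 C field environment by roughly 26x
# under these assumed parameters.
print(f"AF = {arrhenius_af(40, 85):.1f}")
```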
Parts Control

1. How do you control parts selection?

Supplier Control

1. How do you select suppliers?
2. How do you work with suppliers? Select all that apply.
   a. By providing specifications
   b. By testing of products
   c. By auditing of processes
   d. Through dialogue on requirements and processes
   e. By joint improvement efforts
3. How do you establish supplier reliability requirements?
Software Reliability

Program Risks

1. How do you identify program risks, and do design reviews address reliability issues?
Reliability Tools

1. How are reliability tools (e.g., modeling, derating, FMECA, FTA) selected for use on a program? (A modeling sketch follows this element.)
2. How do you ensure a robust design?
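As a minimal sketch of the first tool named in question 1 (modeling), the snippet below evaluates a series reliability block model with constant (exponential) failure rates. The part names and rates are illustrative assumptions, not values from the assessment.

```python
import math

# Minimal sketch of a series reliability block model: the system works
# only if every part works, so failure rates add. Rates are assumptions.

failure_rates_per_hour = {
    "power supply": 12e-6,
    "controller":    5e-6,
    "sensor":        8e-6,
}

def series_reliability(rates, hours):
    """R_system(t) = exp(-sum(lambda_i) * t) for parts in series."""
    return math.exp(-sum(rates.values()) * hours)

# Probability the system survives a 1000-hour mission with no failures.
print(f"R(1000 h) = {series_reliability(failure_rates_per_hour, 1000):.4f}")
```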
Methodology

1. What data do you use to assess progress? Select all that apply.
   a. Customer complaints
   b. Program manager assessments and FRACAS (sketched below)
   c. Standardized analyses and tests
   d. Tracking “cost of quality” metrics
   e. Continual evaluation of new methods
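For readers unfamiliar with the FRACAS mentioned in option b (a Failure Reporting, Analysis, and Corrective Action System), here is a minimal sketch of the kind of record such a system tracks and how open reports can feed a progress metric. The field names are illustrative assumptions, not a standard schema.

```python
from dataclasses import dataclass
from datetime import date

# Minimal sketch of a FRACAS-style record: report, analysis, corrective
# action, and verified closure. Field names are illustrative assumptions.

@dataclass
class FailureReport:
    report_id: str
    reported_on: date
    symptom: str
    root_cause: str = ""            # filled in after failure analysis
    corrective_action: str = ""     # filled in once a fix is defined
    verified_closed: bool = False   # closed only after the fix is verified

reports = [
    FailureReport("FR-0042", date(2024, 3, 1), "unit fails power-on self-test"),
]

# Progress can be assessed by tracking how many reports remain open.
open_count = sum(1 for r in reports if not r.verified_closed)
print(f"{open_count} open report(s)")
```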
Timing

1. When do you assess progress in reliability?
Methodology

1. How do you measure reliability? Select all that apply.
   a. Customer required tests
   b. Use of factory test data (see the sketch after this element)
   c. Collection and use of customer data
   d. Tests tailored to product or program (or both)
   e. Continual evaluation of methodology
2. What conditions are used to measure reliability?
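As a minimal sketch of how factory test data (option b above) becomes a measured reliability number, the snippet below computes an MTBF point estimate and a one-sided lower confidence bound, assuming an exponential time-to-failure model and a time-terminated test. The test hours, failure count, and the use of SciPy are assumptions for illustration.

```python
from scipy.stats import chi2

# Minimal sketch: MTBF from factory test data under an assumed
# exponential model. Test hours and failure count are illustrative.

total_test_hours = 50_000.0
failures = 4

mtbf_point = total_test_hours / failures  # 12,500 h point estimate

# One-sided lower bound at 90% confidence for a time-terminated test:
# MTBF_lower = 2T / chi2.ppf(confidence, 2r + 2)
confidence = 0.90
mtbf_lower = 2 * total_test_hours / chi2.ppf(confidence, 2 * failures + 2)

print(f"MTBF point estimate: {mtbf_point:,.0f} h")
print(f"90% lower bound:     {mtbf_lower:,.0f} h")
```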
Scope

1. To what level do you measure reliability?
Defects

1. How do you control defects?
2. How do you handle defective material?
3. How do you address variability?
Support Services

1. How do you consider handling, packaging, and shipping?
2. How do you address business processes?
   i. Reaction to customer complaints
   ii. Standardization
   iii. Statistical analysis / benchmarking
   iv. Continual improvement efforts
Results

1. How do you ensure that you provide reliability services? Select all that apply.
   i. Compilation of program / product reliability results
   ii. Use of field data from customers
   iii. Comparison against competitors
   iv. Comparison against industry leaders
2. What trends do you see, and what do they mean?
3. What actions do you take as a result of detecting trends?
4. How do you assure that you provide reliable services?
Interpreting the Resulting Score

The scores for the six areas of assessment are reported below on a five-point scale (five is best). It should be stressed that the process of developing reliable products cannot be so structured that a single numerical metric can represent its overall effectiveness. Nevertheless, the rating system presented can be valuable as a gross indicator, capable of identifying areas of weakness.
Areas and Elements Requiring Remedial Action

For areas and elements that are given ratings below average, remedial action is strongly recommended. The initial steps for remedial action should be documented in an improvement plan that represents the strategy and commitment to improve. Clues about where to improve should be evident from the questions that could not be answered affirmatively and from processes / procedures that are implemented only on an ad hoc basis, or not at all. Other common problems include a lack of communication and integration regarding reliability activities. Improvements might include additional training; real-time access by engineers to the organization’s data collection, analysis, and corrective action system; refinements in testing; and more thorough root cause analyses. For each step, a detailed schedule showing the necessary activities should be developed, responsibility for each activity assigned, and budgets allocated.
Area 1: Defining a Reliability Program        Score:
Area 2: Developing Reliability Requirements   Score:
Area 3: Designing for Reliability             Score:
Area 4: Assessing Reliability Progress        Score:
Area 5: Measuring Product Reliability         Score:
Area 6: Ensuring Reliable Performance         Score: