Scheduling Analysis Using Discrete Event Simulation

Edward J. Williams
206-2 Engineering Computer Center
Mail Drop 3
Ford Motor Company
Dearborn, Michigan 48121-2053 USA

Igal Ahitov
Production Modeling Corporation
Three Parklane Boulevard
Suite 910 West
Dearborn, Michigan 48126 USA

Proceedings of the 29th Annual Simulation Symposium, pages 148-154.

Abstract

We describe a production shop which needed to undertake schedule analyses at the macro level of planning and explain the methods of using simulation to obtain valid, credible schedule analyses quickly. After discussion of the simulation model itself, we present actual simulation results and conclusions. These successful simulation results help the production shop attain nimble adaptation to product mix requirements, optimal in-process buffer sizing, and speedy confirmation of the ability of scheduling proposals to meet throughput targets.

1. Introduction

Simulation has been defined as "the process of designing a mathematical or logical model of a real system and then conducting computer-based experiments with the model to describe, explain, and predict the behavior of the real system" [5]. Scheduling problems, by contrast, deal with determining the processing times of the jobs comprising a project, given constraints on personnel, equipment, and facilities [8]. Inasmuch as simulation is a powerful tool for the analysis of scheduling problems, algorithms, and policies, simulation and scheduling analyses can work synergistically toward process improvement.
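The definition above can be made concrete with a minimal discrete-event engine. The sketch below is purely illustrative (it is not the SIMAN/ARENA model described later in this paper): a simulation clock advances by jumping to the next event in a time-ordered future-event list.

```python
import heapq

class Engine:
    """Minimal discrete-event simulation core: a clock plus a
    time-ordered future-event list."""
    def __init__(self):
        self.now = 0.0
        self._events = []   # heap of (time, seq, callback)
        self._seq = 0       # tie-breaker so callbacks are never compared

    def schedule(self, delay, callback):
        heapq.heappush(self._events, (self.now + delay, self._seq, callback))
        self._seq += 1

    def run(self):
        while self._events:
            self.now, _, callback = heapq.heappop(self._events)
            callback()

# Toy example: a machine completing three 5-minute jobs back to back.
log = []
eng = Engine()

def finish_job(i):
    log.append((eng.now, i))
    if i < 3:
        eng.schedule(5.0, lambda: finish_job(i + 1))

eng.schedule(5.0, lambda: finish_job(1))
eng.run()
# log now holds (completion_time, job_number) pairs
```

The clock jumps directly from one event time to the next, which is what distinguishes discrete-event simulation from fixed-time-step approaches.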

Controlling production operations with economic effectiveness is an ever-present challenge to operations management. This challenge presents itself at two levels: the macro level of long-term work balancing and the micro level of daily or even hourly facility control [1]. Long-standing obstacles to this synergy of simulation and scheduling, especially at the macro level, include differences between scheduling tasks and traditional simulation applications and the absence of scheduling capabilities within typical process-simulation software [6]. Additionally, many of the scheduling packages now in use assume deterministic systems; use of simulation as a precursor to scheduling permits consideration of probabilistic events as well.

As stressed by [7], one of the most vital steps in a simulation study is the careful statement of project objectives; until precise knowledge of the issues to be addressed by the model is available, it is impossible to decide the appropriate level of model detail. Then, the model complexity needn't, and shouldn't, exceed the minimum required to accomplish those project objectives [3]. Consistent with objectives on specific projects, models may be macro models (low level of detail and encompassing a broadly defined system) or micro models (high level of detail and encompassing a narrowly defined system).

First, we describe the production shop whose scheduling concerns were the motivation for the simulation study. We then describe the simulation model itself, stressing its adaptation to these scheduling concerns and how the need for these adaptations influenced the building of the model. We then present the conclusions drawn from the model and indicate promising directions for further work.

2. Overview of the System

The system in question is a manufacturing shop (in design, not yet an existing system) for the production of an automotive component. These components are naturally subdivided into three distinct families, denoted x1, x2, and x3 respectively in this paper. All processing times are fixed within a family. In turn, each of these three families comprises two part types (x11 and x12, x21 and x22, and x31 and x32). In this design, the visualized part flow is the following:

The existence of multiple part families with different processing sequences, like those indicated above, requires flexibility of manufacturing similar to that described by [11].

This work flow design is illustrated in Figure 1, next page. The first four operations appear in triplicate. The identical triplets are dedicated to the x1, the x2, and the x3 families respectively. Of these families, the x1 family parts have the highest target production rate. Downstream from operation 40, x1 family parts follow the upper or middle path, bypassing operation 60; x2 family parts follow the middle or lower path, including operation 60; and x3 family parts follow the middle or lower path, bypassing operation 60. Table 1, below, specifies the gross output capabilities of each operation.
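Assuming the operations run in numeric order (OP10 through OP100, per Table 1), the family-dependent routing downstream of operation 40 can be sketched as a small function. The placement of OP60 immediately after OP50 is an assumption read from the description of Figure 1, which is not reproduced here.

```python
def downstream_route(family):
    """Operation sequence downstream of OP40, per the text: only x2
    family parts visit the cup press at OP60.  The exact position of
    OP60 (here, after OP50) is an assumption from Figure 1."""
    ops = [50, 70, 80, 90, 100]
    if family == "x2":
        ops.insert(1, 60)   # x2 parts include OP60; x1 and x3 bypass it
    return ops
```

Encoding the routing as data rather than branching logic inside each station keeps a macro model easy to revise as the line design changes.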

Table 1: Gross Output Capabilities by Operation
Family  OP10  OP20  OP30  OP40  OP50   OP60       OP70      OP80     OP90    OP100
        CNC   CNC   CNC   Gage  Drill  Cup Press  Balancer  Grinder  Washer  Gage
X1      85    82    85    240   200    N/A        300       220      700     240
X2      88    77    78    240   200    225        300       220      700     240
X3      102   95    109   240   200    N/A        300       220      700     240

Operations 40, 60, 90, and 100 have no changeover time. For all other operations, except operation 50, there is a setup time whenever a part from a different family arrives. Additionally, at operation 50, setup time is required whenever a different part type arrives. In conjunction with the dedication of Line #1 (the upper path in Figure 1) to the x1 family, as noted above, these considerations imply that operation 50 is the only operation in Line #1 ever requiring changeover time.
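The changeover rules of this paragraph reduce to a small predicate. The sketch below is illustrative only; parts are represented as (family, part_type) pairs such as ("x1", "x11").

```python
NO_CHANGEOVER_OPS = {40, 60, 90, 100}   # per the text: never a setup
PART_TYPE_SETUP_OPS = {50}              # setup on any part-type change

def needs_setup(op, prev_part, next_part):
    """prev_part and next_part are (family, part_type) tuples.
    Returns True when the arriving part triggers a changeover at op."""
    if prev_part is None or op in NO_CHANGEOVER_OPS:
        return False
    if op in PART_TYPE_SETUP_OPS:
        return prev_part != next_part        # any part-type change
    return prev_part[0] != next_part[0]      # family change only
```

Because Line #1 is dedicated to the x1 family, its family never changes, so under these rules only operation 50 on that line can ever require a setup, consistent with the observation above.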

The significant system metrics, and hence issues specifying the objectives of the project, are:

The third metric is of particular importance relative to the anticipated installation of conveyors between operations 20 and 30, since validating the ability to thrift (i.e., minimize) these buffer sizes implies substantial cost avoidance in capital expenditure, conveyor installation costs, and floor space requirements. These metrics, and their underlying economic motivations, are similar to those described by [10].

3. Modeling Approach

The fundamental model-building approach entails representing each of the principal components of the system at the macro level in terms of its processing time for different product families, the frequency and duration of its maintenance, both scheduled and unscheduled, its changeover times between different part families, and the size of the buffer immediately upstream from it. The reasons for the choice of this macro-level representation are the increased adaptability of the simulation model to change as the system design is refined, the need to avoid inclusion of too much weakly understood detail into the model early in its life cycle, the lack of detailed knowledge of the type and capacity of the material-handling equipment to be used, and the ability to build, verify, and validate the model in time for its beneficial recommendations to be fully acted upon by management. This macro approach, adapted here to assess long-range strategic policies for a system not yet implemented, may be contrasted with the micro approach used to guide real-time decisions in an existing system, as described in [4].
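A macro-level station description of the kind just listed might be captured as a plain record. The field names below, and the sample values in any usage, are hypothetical illustrations, not data from the study.

```python
from dataclasses import dataclass

@dataclass
class StationSpec:
    """Macro-level description of one operation, mirroring the four
    attributes named in the text.  All field names are illustrative."""
    cycle_time: dict          # processing time (min) per part family
    scheduled_maint: tuple    # (interval, duration) for planned maintenance
    unscheduled_mtbf: float   # mean busy time between random failures
    unscheduled_mttr: float   # mean time to repair a random failure
    changeover_time: float    # setup time when the part family changes
    upstream_buffer: int      # capacity of the buffer feeding the station

# Hypothetical example values only:
op50 = StationSpec(
    cycle_time={"x1": 0.3, "x2": 0.3, "x3": 0.3},
    scheduled_maint=(480.0, 30.0),
    unscheduled_mtbf=600.0,
    unscheduled_mttr=20.0,
    changeover_time=15.0,
    upstream_buffer=10,
)
```

Keeping each station's description in one record is what makes the macro model easy to adapt as the system design is refined.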

In accordance with this macro approach to model building, simplifying assumptions are appropriate. First, especially since transfer times between stations were known only approximately, all details of material-handling systems are omitted from the model.

Second, the model assumes the appropriate skilled labor is always available for repair, tooling changes, and changeover. Third, an infinite supply of parts is assumed to exist at operation 10 relative to each of the three lines. Fourth, taking historical scrap rates from analogous, existing systems into account, input into the system per two-day period required to achieve the target production rates is assumed to occur as indicated in Table 2, below.

Table 2: Assumed Input Rates by Family and Part
Family     Part      Input   Target Output
X1 family  Part X11   2622     2586
           Part X12    796      784
X2 family  Part X21    894      880
           Part X22    636      624
X3 family  Part X31   1724     1706
           Part X32    430      422

More specifically, this model is constructed using the SIMAN/ARENA software, but not the Advanced Manufacturing Template. ARENA animation capabilities such as entity color-coding by part type, color change of resource icons (busy, idle, down), and placement of numbers on the screen to represent operation throughput to date and current upstream queue size provide generous help to both the modeler and the user in verifying model behavior and visualizing system performance. Operation downtimes are modeled both as count-based (for tooling changes) and as busy-time-based (for random malfunctions). Under these conditions, neither type of downtime can begin while the other type of downtime is in progress. This subdivision of downtime by cause and attention to the possibility of overlapping downtimes concur with modeling considerations discussed in [12]. The details of buffering afforded by the Advanced Manufacturing Template are extraneous given the avoidance of material-handling detail indicated above. Hence, dummy resources between the operations simulate buffers. Additionally, since changeover is modeled by a dummy part seizing the machine, changeover intervals appear as operation utilization time in the statistical output reports. This modeling approach simplifies model construction and verification at the acceptable expense of overestimated operation utilizations. Since absolute operation utilization is not a metric of importance in this study (although equality of utilizations is), this simplification serves as an excellent example of tailoring the modeling approach to the user's validation expectations as specified by performance-prediction requirements [9].
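The two downtime triggers can be sketched as a small tracker; hypothetical counters stand in for the SIMAN/ARENA constructs, and the guard mirrors the rule that neither downtime type may begin while the other is in progress.

```python
class DowntimeTrigger:
    """Tracks the two downtime causes described in the text: tooling
    changes fire after a fixed part COUNT, random malfunctions after
    accumulated BUSY TIME.  Thresholds here are illustrative inputs;
    a real model would sample the malfunction threshold randomly."""
    def __init__(self, parts_per_tool, busy_time_per_failure):
        self.parts_per_tool = parts_per_tool
        self.busy_time_per_failure = busy_time_per_failure
        self.count = 0
        self.busy = 0.0
        self.down = False

    def record_cycle(self, cycle_time):
        if self.down:
            return None          # overlapping downtime is disallowed
        self.count += 1
        self.busy += cycle_time
        if self.count >= self.parts_per_tool:
            self.count = 0
            self.down = True
            return "tool_change"
        if self.busy >= self.busy_time_per_failure:
            self.busy = 0.0
            self.down = True
            return "malfunction"
        return None

    def repair_complete(self):
        self.down = False

# Hypothetical usage: a tool change every 3 parts.
trigger = DowntimeTrigger(parts_per_tool=3, busy_time_per_failure=480.0)
events = [trigger.record_cycle(0.5) for _ in range(3)]
```

Basing malfunctions on accumulated busy time, rather than calendar time, means an idle or blocked machine does not "age" toward its next failure.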

In accordance with the originally specified project goals, this model can show the effects of operating the system with different schedules, the effects of buffer size on throughput, the effect of various tool changeover times and/or changeovers between different parts on throughput, or the effect of adding new machinery on throughput. However, in accordance with the truism that "simulation is not linear programming," the model, although it allows its user to experiment quickly and efficiently with different hypotheses concerning the extent to which the middle path (Line #2) can help the upper path (Line #1) produce x1 family parts, cannot specify an optimal scheduling scheme. With a verified and validated model, the users can assess different allocation levels of Line #2 to backup x1 production on behalf of Line #1, and observe their effects on throughput and machine utilization. Indeed, the model, by allowing such experimentation, allows the meaning of "optimal" to vary between users, or between successive "what-if" studies undertaken by the same user. For example, "optimal" may mean "using the smallest total buffer size possible" to one user or manager, "having the highest probability of meeting demand" to another, and "achieving the most nearly balanced utilization of a particular work cell" to yet another. These capabilities reflect well-accepted reasons for doing a simulation analysis before scheduling implementation: evaluation of alternative scheduling logic rules, establishment of performance criteria, and identification of problem areas (severe capacity constraints).
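Such what-if experimentation amounts to sweeping an allocation parameter and recording the resulting performance. The sketch below uses a made-up diminishing-returns curve as a stand-in for one model run; none of the numbers are results from the study.

```python
def sweep_allocation(run_model, levels):
    """Evaluate each Line #2 allocation level (fraction of Line #2
    capacity lent to x1 production) and report throughput per level."""
    return {level: run_model(level) for level in levels}

# Toy stand-in for a model run: lending Line #2 time helps x1
# throughput, but with diminishing returns (hypothetical curve).
results = sweep_allocation(lambda a: round(2600 + 400 * a ** 0.5),
                           [0.0, 0.25, 0.5])
```

The sweep answers "what happens at each level"; deciding which level is "optimal" remains the user's call, exactly as the paragraph above notes.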

4. Major Findings During Experimentation

Preliminary statistical analyses reveal that long runs are required to overcome high system variability; hence production runs represent one hundred days. Experimental runs under the model-simplifying assumptions described above verify that the system reaches steady state within five minutes, so no warm-up period is used. Furthermore, since all issues under investigation pursued with the help of the model are closely related to evaluation of proposed production schedules driven by specific production-mix demands, the runs of the model are conceptually terminating, not steady-state.
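Terminating runs of this kind are normally analyzed by independent replications. The sketch below uses a normal-approximation confidence half-width (a t quantile would be preferred for few replications) and a toy stand-in for one 100-day run; all numbers are hypothetical.

```python
import random
import statistics

def replicate(run_once, n_reps, seed=42):
    """Independent terminating replications: run the model n_reps times
    with independent random streams and summarize the results with a
    mean and a normal-approximation confidence half-width."""
    rng = random.Random(seed)
    results = [run_once(rng) for _ in range(n_reps)]
    mean = statistics.mean(results)
    half = 1.96 * statistics.stdev(results) / len(results) ** 0.5
    return mean, half

# Toy stand-in for one 100-day run: average daily output, with
# hypothetical variability (not data from this study).
def one_run(rng):
    return sum(rng.gauss(2600.0, 120.0) for _ in range(100)) / 100

mean, half = replicate(one_run, n_reps=10)
```

Because each replication restarts the model independently, the replication means are independent observations, which is what justifies the confidence-interval arithmetic.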

Extensive experimentation with this model produces the following results, all of immediate value to process-planning engineers and managers: