
The point of statistical methods of quality control is, on the one hand, to reduce the cost of control significantly compared with continuous organoleptic inspection (visual, auditory, etc.) and, on the other, to exclude random changes in product quality.

There are two areas of application of statistical methods in production (Fig. 4.8):

when regulating the progress of the technological process in order to keep it within the specified limits (left side of the diagram);

upon acceptance of manufactured products (right side of the diagram).

Fig. 4.8. Areas of application of statistical methods for product quality management

For the control of technological processes, the problems of statistical analysis of the accuracy and stability of technological processes and of their statistical regulation are solved. In this case, the tolerances for the controlled parameters specified in the technological documentation are taken as the standard, and the task is to keep these parameters strictly within the established limits. The task may also be to search for new operating modes in order to improve the quality of the final product.

Before using statistical methods in the production process, it is necessary to understand clearly the purpose of applying these methods and the benefit that production gains from their use. Data are very rarely used to draw conclusions about quality in the form in which they were received. Typically, seven so-called statistical methods, or quality control tools, are used for data analysis: stratification of data; graphs; the Pareto chart; the cause-and-effect diagram (Ishikawa or fishbone diagram); the check sheet and histogram; the scatter plot; and control charts.

1. Stratification (layering).

When data are divided into groups according to their characteristics, the groups are called layers (strata), and the process of division itself is called stratification. It is desirable that the differences within a layer be as small as possible, and between layers as large as possible.

There is always a greater or lesser scatter of parameters in the measurement results. If you stratify by the factors that generate this scatter, it is easy to identify the main reason for its occurrence, reduce it and achieve improved product quality.

The choice of stratification method depends on the specific task. In production, the 4M method is often used, which takes into account factors depending on man (the worker), machine, material, and method.

That is, stratification can be carried out:

- by performers (gender, work experience, qualifications, etc.);
- by machines and equipment (new or old, brand, type, etc.);
- by material (place of production, batch, type, quality of raw materials, etc.);
- by production method (temperature, technological method, etc.).


In trade there can be stratification by regions, companies, sellers, types of goods, seasons.

The stratification method in its pure form is used when calculating the cost of a product, when it is necessary to estimate direct and indirect costs separately by product and by batch; when assessing the profit from the sale of products separately by customer and by product, etc. Stratification is also used in combination with other statistical methods: when constructing cause-and-effect diagrams, Pareto charts, histograms and control charts.
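As an illustrative sketch (hypothetical data and machine names, not from the source), stratifying measurements by machine with Python's standard library shows how per-layer statistics expose the source of scatter that pooled statistics hide:

```python
from statistics import mean, stdev

# Hypothetical shaft-diameter measurements (mm), stratified by machine
measurements = {
    "machine A": [10.02, 10.01, 9.99, 10.00, 10.01],
    "machine B": [10.08, 10.11, 10.09, 10.12, 10.10],
}

# Pooled statistics hide the difference between the layers
pooled = [x for values in measurements.values() for x in values]
print(f"pooled   : mean={mean(pooled):.3f}, stdev={stdev(pooled):.3f}")

# Per-layer statistics expose machine B as the source of the shift
for layer, values in measurements.items():
    print(f"{layer}: mean={mean(values):.3f}, stdev={stdev(values):.3f}")
```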

2. Graphical presentation of data is widely used in production practice for clarity and to make the meaning of the data easier to grasp. The following types of graphs are distinguished:

A) A line graph (broken line, Fig. 4.9) is used, for example, to show how some quantity changes over time.

Fig. 4.9. Example of a broken-line graph and its approximation

B) Pie and strip graphs (Figures 4.10 and 4.11) are used to express the percentage of the data under consideration.

Fig. 4.10. Example of a pie chart

The ratio of the components of production costs:

1 – cost of production as a whole;

2 – indirect costs;

3 – direct costs, etc.

Fig. 4.11. Example of a strip chart

Figure 4.11 shows the ratio of sales revenue for individual types of products (A, B, C), a trend is visible: product B is promising, but A and C are not.

C) The Z-shaped graph (Fig. 4.12) is used to assess the conditions for achieving given values, for example, to evaluate the general trend when actual data are recorded by month (sales volume, production volume, etc.).

The graph is constructed as follows:

1) the values of the parameter (for example, sales volume) are plotted by month (over a one-year period) from January to December and connected by straight segments (broken line 1 in Fig. 4.12);

2) the cumulative total is calculated for each month and the corresponding graph is constructed (broken line 2 in Fig. 4.12);

3) the moving annual total is calculated (the sum for the twelve months ending with the given month) and the corresponding graph is constructed (broken line 3 in Fig. 4.12).

Fig. 4.12. Example of a Z-shaped graph.

The y-axis is revenue by month, the x-axis is the months of the year.

From the moving annual total one can determine the trend of change over a long period. Instead of the moving total, the planned values can be plotted on the graph and the conditions for achieving them checked.
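A minimal sketch (hypothetical monthly sales figures) of the three broken lines of a Z-shaped graph: the monthly values, the cumulative total within the year, and the moving annual total; it assumes the previous year's monthly figures are available for the moving total.

```python
# Hypothetical monthly sales for the previous and the current year
prev_year = [90, 85, 95, 100, 105, 98, 102, 110, 108, 112, 115, 120]
curr_year = [95, 92, 101, 104, 110, 103, 108, 115, 112, 118, 121, 126]

# Line 1: monthly values (as recorded)
monthly = curr_year

# Line 2: cumulative total from January of the current year
cumulative = []
total = 0
for value in curr_year:
    total += value
    cumulative.append(total)

# Line 3: moving annual total (sum of the 12 months ending with each month)
window = prev_year[:]          # start from last year's 12 months
moving_annual = []
for value in curr_year:
    window.pop(0)              # drop the oldest month
    window.append(value)       # add the newest month
    moving_annual.append(sum(window))

for m, (v, c, z) in enumerate(zip(monthly, cumulative, moving_annual), start=1):
    print(f"month {m:2d}: value={v:4d}  cumulative={c:5d}  moving annual={z:5d}")
```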

D) The bar graph (Fig. 4.13) shows a quantitative dependence, expressed by the height of the bars, of factors such as the cost of a product on its type, or the amount of losses due to defects on the process, etc. Varieties of the bar graph are the histogram and the Pareto chart. When constructing the graph, the number of occurrences of the factors influencing the process under study (in this case, the study of incentives to purchase products) is plotted along the ordinate axis. Along the abscissa axis are the factors, each with a column whose height depends on the number (frequency) of occurrences of that factor.

Fig. 4.13. Example of a bar graph.

1 – number of incentives to purchase; 2 – incentives to purchase;

3 – quality; 4 – price reduction;

5 – warranty periods; 6 – design;

7 – delivery; 8 – other;

If we arrange purchase incentives by the frequency of their occurrence and build a cumulative sum, we get a Pareto diagram.

3. Pareto diagram.

A diagram built on the basis of grouping by discrete characteristics, ranked in descending order (for example, by frequency of occurrence) and showing the cumulative (accumulated) frequency is called a Pareto chart (Fig. 4.14). Pareto was an Italian economist and sociologist who used such a diagram to analyze the distribution of wealth in Italy.

Fig. 4.14. Example of a Pareto chart:

1 – errors in the production process; 2 – low-quality raw materials;

3 – low-quality tools; 4 – low-quality templates;

5 – low-quality drawings; 6 – other;

A – relative cumulative (accumulated) frequency, %;

n – number of defective units of production.

The above diagram is based on grouping defective products by type of defect and arranging the number of defective units of each type in descending order. The Pareto chart can be used very widely; with its help, one can evaluate the effectiveness of measures taken to improve product quality by plotting the chart before and after the changes are made.
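As a sketch with hypothetical defect counts, the data preparation behind a Pareto chart amounts to ranking the causes by frequency and accumulating the relative frequencies:

```python
# Hypothetical counts of defective units by cause
defects = {
    "process errors": 48,
    "low-quality raw materials": 22,
    "low-quality tools": 12,
    "low-quality templates": 8,
    "low-quality drawings": 6,
    "other": 4,
}

total = sum(defects.values())
cumulative = 0.0
# Rank causes in descending order and accumulate the relative frequency
for cause, count in sorted(defects.items(), key=lambda kv: kv[1], reverse=True):
    cumulative += 100.0 * count / total
    print(f"{cause:28s} {count:3d}  cumulative {cumulative:5.1f} %")
```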

4. Cause-and-effect diagram (Fig. 4.15).

a) an example of a conditional diagram, where:

1 – factors (reasons); 2 – large “bone”;

3 – small “bone”; 4 – middle “bone”;

5 – “ridge”; 6 – characteristic (result).

b) an example of a cause-and-effect diagram of factors influencing product quality.

Fig. 4.15. Examples of cause-and-effect diagrams.

A cause-and-effect diagram is used when you want to explore and depict the possible causes of a certain problem. Its application makes it possible to identify and group the conditions and factors influencing a given problem.

Consider the shape of the cause-and-effect diagram in Fig. 4.15 (also called the “fishbone” or Ishikawa diagram).

How to draw a diagram:

1. A problem to be solved is selected - a “ridge”.
2. The most significant factors and conditions influencing the problem are identified - first-order causes.
3. A set of reasons is identified that influence significant factors and conditions (reasons of 2nd, 3rd and subsequent orders).
4. The diagram is analyzed: factors and conditions are ranked by importance, and those reasons that can currently be corrected are identified.
5. A plan for further action is drawn up.

5. Check sheet (table of accumulated frequencies) is compiled for constructing a histogram of the distribution and includes the following columns (Table 4.4).

Table 4.4

Based on the check sheet, a histogram is constructed (Fig. 4.16) or, with a large number of measurements, a probability density curve (Fig. 4.17).

Fig. 4.16. An example of presenting data as a histogram

Fig. 4.17. Types of probability density distribution curves.

A histogram is a bar graph and is used to visually display the distribution of specific parameter values ​​by frequency of occurrence over a certain period of time. By plotting the acceptable values ​​of a parameter, you can determine how often the parameter falls within or outside the acceptable range.

By examining the histogram, you can find out whether the batch of products and the technological process are in satisfactory condition. The following questions are considered:

· what is the distribution width in relation to the tolerance width;

· what is the center of the distribution in relation to the center of the tolerance field;

· what is the form of the distribution.

If

a) the shape of the distribution is symmetrical, then there is a margin in the tolerance zone, the center of the distribution and the center of the tolerance zone coincide - the quality of the batch is in satisfactory condition;

b) the center of distribution is shifted to the right, that is, there is a fear that among the products (in the rest of the batch) there may be defective products that go beyond the upper tolerance limit. Check whether there is a systematic error in the measuring instruments. If not, then they continue to produce products, adjusting the operation and shifting the dimensions so that the center of distribution and the center of the tolerance field coincide;

c) the center of the distribution is located correctly, but the width of the distribution coincides with the width of the tolerance zone. There are concerns that when examining the entire batch, defective products will appear. It is necessary to investigate the accuracy of the equipment, processing conditions, etc. or expand the tolerance range;

d) the center of distribution is shifted, which indicates the presence of defective products. It is necessary to move the distribution center to the center of the tolerance field by adjustment and either narrow the distribution width or revise the tolerance;

e) the situation is similar to the previous one, and the measures of influence are similar;

f) there are 2 peaks in the distribution, although the samples are taken from the same batch. This can be explained either by the fact that the raw materials were of 2 different grades, or the machine settings were changed during the work process, or products processed on 2 different machines were combined into 1 batch. In this case, the examination should be carried out layer by layer;

g) both the width and the center of distribution are normal, however, a small part of the products exceeds the upper tolerance limit and, when separated, forms a separate island. Perhaps these products are part of the defective ones, which, due to negligence, were mixed with good ones in the general flow of the technological process. It is necessary to find out the cause and eliminate it.
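A small sketch (hypothetical measurements and tolerance limits) of the checks the histogram supports: where the distribution center sits relative to the center of the tolerance field, how the spread compares with the tolerance width, and how many values fall outside the tolerance zone.

```python
from statistics import mean, stdev

# Hypothetical measurements of a controlled dimension (mm) and its tolerance limits
values = [10.02, 9.98, 10.05, 10.01, 9.97, 10.03, 10.08, 10.00, 9.99, 10.04]
lower_tol, upper_tol = 9.95, 10.05

center_of_tolerance = (lower_tol + upper_tol) / 2
out_of_tolerance = [v for v in values if v < lower_tol or v > upper_tol]

print(f"distribution center : {mean(values):.3f}")
print(f"tolerance center    : {center_of_tolerance:.3f}")
print(f"spread (approx. 6S) : {6 * stdev(values):.3f} vs tolerance width {upper_tol - lower_tol:.3f}")
print(f"outside tolerance   : {len(out_of_tolerance)} of {len(values)}")
```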

6. Scatter diagram is used to identify the dependence (correlation) of one indicator on another, or to determine the degree of correlation between n pairs of data for the variables x and y:

(x1, y1), (x2, y2), ..., (xn, yn).

This data is plotted on a graph (scatter plot) and a correlation coefficient is calculated for it.
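A sketch (hypothetical paired data) of the correlation coefficient that accompanies a scatter diagram, computed with the standard Pearson formula:

```python
from math import sqrt

# Hypothetical paired observations, e.g. furnace temperature x and product hardness y
x = [150, 155, 160, 165, 170, 175, 180]
y = [52.1, 52.8, 53.5, 54.0, 54.9, 55.3, 56.0]

n = len(x)
mean_x = sum(x) / n
mean_y = sum(y) / n

# Pearson correlation coefficient r = cov(x, y) / (s_x * s_y)
cov = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
sx = sqrt(sum((xi - mean_x) ** 2 for xi in x))
sy = sqrt(sum((yi - mean_y) ** 2 for yi in y))
r = cov / (sx * sy)

print(f"correlation coefficient r = {r:.3f}")  # close to +1 -> strong positive correlation
```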

Let's consider various options for scatter diagrams (or correlation fields) in Fig. 4.18:

Rice. 4.18. Scatter plot options

When:

a) we can speak of a positive correlation (as x grows, y increases);

b) there is a negative correlation (as x grows, y decreases);

7. Control chart.

One way to achieve satisfactory quality and maintain it at this level is to use control charts. To manage the quality of a technological process, it is necessary to be able to detect the moments when the manufactured products begin to deviate from the tolerances specified in the technical conditions. Let us look at a simple example. We will monitor the operation of a lathe for a certain time and measure the diameter of the part being manufactured on it (per shift, per hour). Based on the results obtained, we construct a graph and obtain the simplest control chart (Fig. 4.20):

Fig. 4.20. Example of a control chart

At point 6 a disruption of the technological process has occurred; the process needs to be adjusted. The positions of the upper control limit (UCL) and the lower control limit (LCL) are determined analytically or from special tables and depend on the sample size. For a sufficiently large sample size, the limits are determined by the formulas

UCL = x̄ + 3S,

LCL = x̄ − 3S,

where x̄ is the mean value and S is the standard deviation of the controlled parameter.

The UCL and LCL serve to warn of a process disruption while the products still meet the technical requirements.
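A minimal sketch of the x̄ ± 3S rule stated above: the limits are estimated from a baseline period assumed to be stable (the data are hypothetical), and new points are then checked against them.

```python
from statistics import mean, stdev

# Hypothetical baseline measurements taken while the process was known to be stable
baseline = [20.01, 19.99, 20.02, 20.00, 19.98, 20.01, 20.00, 20.02, 19.99, 20.01]
center = mean(baseline)
s = stdev(baseline)
ucl = center + 3 * s   # upper control limit
lcl = center - 3 * s   # lower control limit
print(f"center = {center:.3f}, LCL = {lcl:.3f}, UCL = {ucl:.3f}")

# New hourly measurements plotted against the limits
new_points = [20.00, 20.01, 19.99, 20.02, 20.00, 20.09]
for i, x in enumerate(new_points, start=1):
    status = "out of control" if (x < lcl or x > ucl) else "in control"
    print(f"point {i}: {x:.2f} -> {status}")
```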

Control charts are used when it is necessary to establish the nature of faults and assess the stability of the process; when it is necessary to determine whether a process needs to be regulated or whether it should be left as is.

The control chart can also confirm process improvement.

A control chart is a means of distinguishing deviations due to non-random or special causes from the probable variations inherent in the process. Probable variations rarely go beyond the predicted limits. Deviations due to non-random or special causes signal that some factors affecting the process need to be identified, investigated and brought under control.

Control charts are based on mathematical statistics. They use operating data to establish limits within which future observations are expected to fall if the process remains unaffected by non-random or special causes.

Information on control charts is also contained in the international standards ISO 7870, ISO 8258.

The most widely used are the X̄ (mean) control charts and the R (range) control charts, which are used together or separately. Natural fluctuations between the control limits must be controlled. You need to ensure that the correct control chart type is selected for the specific data type. Data must be taken exactly in the sequence in which they were collected, otherwise they become meaningless. Changes should not be made to the process during the data collection period. The data should reflect how the process runs naturally.

A control chart can indicate potential problems before defective products are produced.

It is customary to say that a process is out of control if one or more points are outside the control limits.

There are two main types of control charts: for qualitative (pass/fail) characteristics and for quantitative characteristics. For qualitative characteristics, four types of control charts are possible: for the number of defects per unit of production; the number of defects in the sample; the proportion of defective products in the sample; and the number of defective products in the sample. In the first and third cases the sample size may be variable, while in the second and fourth it is constant.
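A sketch (hypothetical sample data) for the third attribute chart listed above, the proportion of defective products in the sample, using the usual binomial-based three-sigma limits:

```python
from math import sqrt

# Hypothetical daily samples: (sample size, number of defective items)
samples = [(200, 6), (200, 4), (180, 5), (220, 7), (200, 18), (210, 5)]

total_inspected = sum(n for n, _ in samples)
total_defective = sum(d for _, d in samples)
p_bar = total_defective / total_inspected      # average proportion defective

for i, (n, d) in enumerate(samples, start=1):
    sigma = sqrt(p_bar * (1 - p_bar) / n)
    ucl = p_bar + 3 * sigma
    lcl = max(0.0, p_bar - 3 * sigma)
    p = d / n
    flag = "out of control" if (p > ucl or p < lcl) else "in control"
    print(f"sample {i}: p={p:.3f}  limits [{lcl:.3f}, {ucl:.3f}]  -> {flag}")
```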

Thus, the purposes of using control charts can be:

identifying an uncontrollable process;

control over the managed process;

assessing process capabilities.

To solve the problem of product quality, it is necessary to use methods aimed not at eliminating defects in finished products, but at preventing the causes of their occurrence during the production process. Known control methods were reduced, as a rule, to analyzing defects through continuous inspection of products. In mass production, such control is very expensive and does not provide a 100% guarantee due to objective and subjective factors. In statistical control of product quality, measurement results processed by methods of mathematical statistics make it possible to assess the true state of the technological process with a high degree of accuracy and reliability. Statistical methods of quality management are selective methods based on the application of probability theory and mathematical statistics (23).

For effective process management, process control and product quality control, the following methods have become widely used: Pareto charts, check sheets, cause-and-effect diagrams, histograms, control charts, scatter plots and stratification (24). These methods make it possible to solve the following problems:

– analysis of the stability, adjustment, reproducibility and controllability of processes;

– organization of targeted work to identify the causes of nonconformities (rejects, defects).

The basis of any statistical study is a set of data obtained from the results of measurements of one or several parameters of the product (linear dimensions, temperature, mass, density, etc.).

Check sheets. A check sheet is a form on which the values of the controlled parameter are pre-marked (intervals of equal length, value ranges, the nominal value, etc.) with a free field for the sequential recording of measurement results. Check sheets are used when carrying out ongoing monitoring of raw materials, blanks, semi-finished products, components and finished products; when analyzing the condition of equipment, technological operations or the process as a whole; when analyzing rejects, etc. The form and content of check sheets are very diverse. The most commonly used forms of check sheets are:

1. Check sheet for recording the distribution of the measured parameter during the production process.

2. Check sheet for recording types of defects.

3. Check sheet for localizing defects (for process diagnosis).

4. Check sheet for the causes of defects.

The Pareto chart is used to analyze the causes on which the solution of the problem under study depends, and makes it possible to show these causes clearly in order of decreasing importance.

Stratification (layering) is a method of identifying the sources of variation in the collected data and classifying the measurement results according to various factors. The stratification method consists of dividing the total set of data into two or more subsets according to the conditions that existed at the time the data were collected. Such subsets are called layers (strata), and the process of dividing the data into layers is called stratification.

The stratification method is used to isolate the individual causes acting on a given indicator or phenomenon.

This method is effectively used to improve product quality by reducing scatter and improving the estimate of the process mean. Stratification is usually carried out by materials, equipment, production conditions, workers, etc.

Scatter plots are used to study and analyze the dependence between two variables.

The cause-and-effect (fishbone) diagram makes it possible to identify the causes affecting product quality and to group them by significance. The purpose of drawing up a cause-and-effect diagram is to find the most correct and effective way to solve a product quality problem.

A histogram is a method of presenting measurement results grouped by the frequency with which they fall within certain, predetermined intervals (tolerance limits). The histogram shows the spread of quality indicators and average values, and gives an idea of the accuracy, stability and reproducibility of the technological process and the operation of technological equipment.

Control charts. Control charts are line graphs of the values of statistical characteristics (arithmetic mean, median, standard deviation, range) plotted against the ordinal number of the sample (sample subgroup). The arithmetic mean is a measure of the center of a distribution; the median is the middle value of the data ordered in ascending or descending order; the range is the difference between the largest and smallest sample values; the population is the entire set of objects under consideration (batch, operation, process); a normal distribution is a distribution obeying the Gaussian law.

Control charts are the most effective technical means of product quality management.

4.1. Histograms as a quality management method

At industrial enterprises, two methods of statistical quality control are widely used: current (in-process) control of the technological process and sampling control.

Methods of statistical control (regulation) make it possible to prevent defects in production in a timely manner and thus to intervene directly in the technological process. The sampling control method does not have a direct impact on production (the technological process), because it serves to control the finished product; it makes it possible to identify the volume of defects and the reasons for their occurrence in the technological process, or qualitative deficiencies of the starting raw materials.

Analysis of the accuracy and stability of technological processes allows us to identify and eliminate factors that negatively affect the quality of the product.

In general, process stability control can be carried out:

– graphic-analytical method with plotting the values ​​of the measured parameters on a diagram;

– calculation and statistical method for quantitative characteristics of the accuracy and stability of the technological process, as well as predicting their reliability based on the quantitative characteristics of the given deviations.

Organizing and analyzing measurement results using histograms is one of the most widely used statistical methods for quality management (25). The method allows you to solve the following problems:

– analysis of the stability, adjustment and reproducibility of processes;

– assessment of the level of defects in the technologies used;

– organization of targeted work to identify the causes of inconsistencies in the technological process.

The methodology is used in the development of regulatory documentation for technological processes, planning and implementing quality control of specific types of products, assessing the stability of production before and after corrective actions, etc.

The technique reveals an approach to the implementation of bar charts (histograms) in practical activities, constructed on the basis of any information (measurement results, expert assessments, control, etc.), grouped by the frequency of falling into certain, predetermined intervals (tolerance limits).

The use of histograms as a separate tool allows you to make reliable, informed management decisions and influence the processes under study. This tool is included in the composition and structure of any set of technical tools for product quality management.

To process statistical information and construct histograms, computer software is used, for example, the EXCEL program.

Judgment about product quality is based on the assessment of certain geometric, chemical, mechanical and other characteristics (features).

Over time, numerical indicators characterizing the quality of products manufactured on the same equipment under constant technological conditions change and vary within certain limits, i.e. There is a certain dispersion of the values ​​of the measured quantities. This scattering can be divided into two categories:

a) inevitable dispersion of quality indicators;

b) removable dispersion of quality indicators.

The first category is random production errors that arise due to changes (within permissible deviations) in the quality of raw materials, production conditions, errors in measuring instruments, etc. It is uneconomical to eliminate this category of dispersion caused by random (ordinary) reasons. Reducing their influence is possible by changing the production system as a whole, which requires significant capital expenditures. In this regard, their influence (presence) is taken into account when assigning tolerances to controlled parameters.

The second category represents systematic production errors (arising due to the use of non-standard raw materials, violations of the technological regime, unexpected breakdown of equipment, etc.). As a rule, this occurs in the presence of certain (non-random or special) reasons that are not inherent in the process and which must certainly be eliminated.

The distribution of errors usually corresponds to some theoretical distribution law (Gauss, Maxwell, Laplace and other laws). By comparing their theoretical distribution curves with empirically obtained (curves or histograms) data, we can attribute these actually observed distributions of parameter values ​​(see Fig. 4.1) to one or another distribution law.

This type of distribution is the most typical and widespread, when the spread of quality characteristic values ​​is due to the influence of the sum of a large number of independent errors caused by various factors.

A normal distribution is recognized by the following characteristics:

– bell-shaped or apex-like shape;

– most points (data) are located near the central line or in the middle of the interval and their number (frequency) smoothly decreases towards its ends;

– the central line divides the curve into two symmetrical halves;

– only a small number of points are scattered far and refer to minimum or maximum values;

– there are no points lying outside the bell-shaped curve.

The normal probability distribution curve P(x_i) is characterized by two statistical characteristics that determine the shape and position of the curve:

– the distribution center x̄ (the arithmetic mean);

– the standard deviation S.

The distribution center x̄ is the value around which the individual values x_i of the random variable are grouped.

The standard deviation S characterizes the scattering of the parameter under study, i.e. the spread relative to the mean value.




Figure 4.1. Typical histogram shapes

a) normal type; b) comb; c) positively skewed distribution;
d) distribution with a break on the left; e) plateau; f) two-peak type;
g) distribution with an isolated peak.

These parameters are calculated in accordance with the expressions:

x̄ = (1/N) Σ x_i,   (4.1)

where x_i is the i-th value of the measured parameter;

N is the number of measurements (sample size);

S = √( Σ (x_i − x̄)² / (N − 1) ).   (4.2)

To simplify calculations, the standard deviation is determined using the formula

S ≈ R / d2,   (4.3)

where d2 is a coefficient depending on the sample size (Table 4.1);

R is the range, determined by the formula

R = x_max − x_min,   (4.4)

where x_max and x_min are the maximum and minimum values of the controlled parameter, respectively.

In accordance with the normal distribution law, 99.7% of all measurements should fall within the ± 3S (or 6S) interval. This is a sign that the spread of data is caused by random, natural variability of influencing factors.
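A sketch (hypothetical sample) of the calculations in (4.1)–(4.4): the arithmetic mean, the standard deviation, the range, and the simplified estimate S ≈ R/d2; the value d2 = 2.33 is assumed here to correspond to a sample of size 5 in Table 4.1.

```python
from math import sqrt

# Hypothetical sample of a controlled parameter (sample size N = 5)
x = [10.2, 10.5, 10.1, 10.4, 10.3]
n = len(x)

x_bar = sum(x) / n                                        # (4.1) arithmetic mean
s = sqrt(sum((xi - x_bar) ** 2 for xi in x) / (n - 1))    # (4.2) standard deviation
r = max(x) - min(x)                                       # (4.4) range

d2 = 2.33            # coefficient assumed for a sample of size 5 (cf. Table 4.1)
s_estimate = r / d2                                       # (4.3) simplified estimate of S

print(f"mean = {x_bar:.3f}, S = {s:.3f}, R = {r:.3f}, R/d2 = {s_estimate:.3f}")
print(f"99.7 % of values expected within [{x_bar - 3*s:.3f}, {x_bar + 3*s:.3f}]")
```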

Table 4.1 – Calculated coefficients

Coefficient / Sample size n
d2:  1.69  2.06  2.33  2.70  2.83  2.85  2.97  3.08
c2:  0.89  0.92  0.94  0.95  0.96  0.97  0.97  0.97

Any unstable process has a histogram that does not look like a bell-shaped curve (see Fig. 4.1 b–g).

In a reproducible technological process, the spread of values ​​of the controlled parameter(s) is bell-shaped (stable process) and falls within the tolerance range.

Analysis of process reproducibility makes it possible to assess the suitability of existing production when technical tolerances are tightened (at the request of the consumer) or to identify the possibility of a controlled process going beyond the tolerance limits.

If the process parameters do not fit within the tolerance limits or there is no regulation margin, it is necessary:

a) reduce the spread of the controlled parameter to a smaller value;

b) achieve a shift of the average value closer to the nominal value;

c) rebuild the process;

d) find out the reasons for the excess scatter and implement appropriate influences on the process aimed at reducing the variation in the values ​​of the controlled parameter.

Quantitative assessment of process reproducibility is carried out using the scatter coefficient (K_R) and the process displacement coefficient (K_SM), calculated using the following expressions:

K_R = 6S / T,   (4.5)

where T is the tolerance range of the estimated parameter.

The value of the coefficient K_R is used to judge the accuracy of the technological process:

if K_R ≤ 0.85, the technological process is reproducible;

if 0.85 < K_R ≤ 1.00, the technological process is reproducible, but requires strict control;

if K_R > 1.00, the process is not reproducible.

The process displacement coefficient (K_SM) is

K_SM = |x̄ − C| / T,   (4.6)

where C is the middle of the tolerance field (or the nominal value of the controlled parameter specified in the technical documentation).

If K_SM ≤ 0.05, the process setting is quite satisfactory (correct);

if K_SM > 0.05, the process requires adjustment.

Based on these process reproducibility indicators, the expected proportion of defective products is estimated from the calculated values of K_R and K_SM using Table 4.2.

Table 4.2 – Determination of sample size for statistical analysis
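A sketch (hypothetical data and tolerance limits) of the reproducibility assessment: the scatter coefficient K_R from (4.5) and the displacement coefficient K_SM in the form reconstructed in (4.6) above.

```python
from statistics import mean, stdev

# Hypothetical measurements of a controlled parameter and its tolerance limits
x = [25.02, 24.98, 25.05, 25.01, 24.99, 25.03, 25.00, 25.04, 24.97, 25.02]
lower_tol, upper_tol = 24.90, 25.10

t = upper_tol - lower_tol          # tolerance range T
c = (lower_tol + upper_tol) / 2    # middle of the tolerance field C
x_bar = mean(x)
s = stdev(x)

k_r = 6 * s / t                    # (4.5) scatter coefficient
k_sm = abs(x_bar - c) / t          # (4.6) displacement coefficient (reconstructed form)

if k_r <= 0.85:
    verdict = "reproducible"
elif k_r <= 1.00:
    verdict = "reproducible, but under strict control"
else:
    verdict = "not reproducible"

print(f"K_R  = {k_r:.2f} -> process is {verdict}")
print(f"K_SM = {k_sm:.3f} ->", "setting satisfactory" if k_sm <= 0.05 else "adjustment required")
```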

The object of research (products regardless of purpose and type, technological processes or individual operations, equipment, modes, etc.) is carefully studied. They receive multifaceted information about the quality of raw materials and materials, features of the technological process, identifying critical operations that affect the quality and characteristics of products (determining operational reliability, safety, etc.), the accuracy of the equipment used, wear of equipment, personnel qualifications, etc.

Collection of information is necessary for the rational application of the selected statistical method and subsequent interpretation of the results obtained (in the form of histograms), which are the basis for making management decisions on the impact on the object under study.

The choice of a single quality indicator for constructing a histogram is individual for each specific object of study. The most general selection rules are:

– the parameter (characteristic) must reflect any property of the object (operational reliability, safety, efficiency) or be sensitive to changes in the technological process;

– preference is given to quantitative rather than qualitative characteristics (for example, quality indicators of the technical process for operations, quality indicators of raw materials, semi-finished products, components, etc.);

– the ability to use standard measuring instruments and certified techniques to determine characteristics that are easily measurable;

– if it is impossible to measure the selected parameter, reasonable substitute indicators are selected that can be influenced;

– taking into account the real cost of conducting the analysis and assessing those indicators that are correlated (i.e., closely interrelated) with these quality indicators, etc.

Selection of measuring instruments should provide for the possibility of using standard measuring instruments and certified techniques to determine the characteristics of the values, ensuring the measurement of controlled quantities with the required degree of accuracy. The accuracy of measurement of readings is ensured by the use of serviceable, verified or calibrated measuring instruments, and the selected measuring instruments must have a measuring scale with a division value of no more than 1/6÷1/10 of the tolerance field of the measured value.

For statistical observations, they prepare control tools, select the type of control (continuous or selective), prepare forms for recording measurement results and assign controllers to controlled operations.

To analyze the accuracy and stability of the process, the following types of samples are used:

– instantaneous samples of 5–20 parts, taken in the sequence of their processing on one piece of equipment. These samples are taken at regular intervals (0.5–2 hours). From such a sample the level of equipment adjustment (setting) is determined;

– general samples consisting of at least 10 instantaneous samples taken sequentially from one equipment during the inter-tuning period or during the period from the installation of a new tool until its replacement. Using these samples, the influence of random and systematic factors is determined separately without taking into account adjustment errors;

– random samples, ranging from 50 to 200 parts, manufactured with one or more settings on a piece of equipment. Based on the sample data, the combined influence of random and systematic factors (including the adjustment error) is determined (see Table 4.2).

To ensure uniformity, ease of data collection, ease of subsequent processing and identification, standard forms (forms) are prepared for recording measurement results: observation protocols, tables of results or checklists.

The professional level and experience of inspectors must ensure competent handling of the selected measuring instruments, obtaining reliable results, an unambiguous understanding of the measurement procedure, recording and identification of data.

When collecting data, it is necessary to indicate the day of the week, date, time when the results were collected, equipment, machine on which the products were manufactured, type and number of the operation, etc. The order of measurements of the parameter selected for control, the number of measurements, their sequence, taking into account process adjustments, etc., collection and grouping of data, as well as their recording in registration documents (protocols, tables, checklists) must be clearly defined.

To construct a histogram, the following parameters are calculated: the sample range R, by expression (4.7),

R = x_max − x_min,   (4.7)

and the length of the histogram interval (J).

There are various options for estimating the value of J. The simplest method is to assign the number of intervals arbitrarily (based on experience in constructing histograms), for example K = 9 (usually a value from 5 to 20 is taken), and to calculate the interval width as J = R/K.

The number of intervals K can also be estimated by calculation from the sample size N.

Then J = R/K is calculated.

We round the result to a convenient number.

Preparation of a frequency table (Table 4.3). A form is prepared where the boundaries of the intervals are entered (column 1), marks of measurement results falling into a particular interval (column 2) and frequencies (frequency column), where the number of measurement results in each interval is given.

Table 4.3 – Frequency table

For the beginning of the first interval (x₀), take the value x_min, or calculate it using the expression

x₀ = x_min − J/2.   (4.10)

Successively adding the interval length J to x₀ gives the boundaries of the intervals:

first interval: [x₀, x₀ + J];

second interval: [x₀ + J, x₀ + 2J];

…

K-th interval: [x₀ + (K − 1)J, x₀ + KJ].

The boundaries of the intervals are entered in Table 4.3.

Counting the frequencies.

Marks are made of the measurement results (in the form of slanted lines) falling into one or another interval and the number of results in the corresponding interval is counted.
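A sketch (hypothetical measurements) of the whole procedure described above: the range, the number of intervals, the interval width, the interval boundaries and the tally of frequencies that would fill Table 4.3; the choice K ≈ √N (a common rule of thumb) and the starting point x₀ = x_min − J/2 from (4.10) are assumptions of this sketch.

```python
from math import sqrt

# Hypothetical measurement results
data = [10.2, 10.5, 10.1, 10.4, 10.3, 10.6, 10.2, 10.4, 10.7, 10.3,
        10.5, 10.2, 10.4, 10.3, 10.6, 10.1, 10.4, 10.5, 10.3, 10.2]

n = len(data)
r = max(data) - min(data)          # (4.7) sample range
k = max(5, round(sqrt(n)))         # number of intervals (assumed K ~ sqrt(N), at least 5)
j = r / k                          # interval width J
x0 = min(data) - j / 2             # start of the first interval (assumed rule (4.10))

# Tally how many results fall into each of the K intervals
frequencies = [0] * k
for x in data:
    idx = min(int((x - x0) / j), k - 1)
    frequencies[idx] += 1

for i, f in enumerate(frequencies):
    lo, hi = x0 + i * j, x0 + (i + 1) * j
    print(f"[{lo:6.3f}, {hi:6.3f})  {'/' * f}  ({f})")
```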

Introduction

The most important source of growth in production efficiency is the constant improvement of the technical level and quality of products. Technical systems are characterized by strict functional integration of all elements, so they do not contain secondary elements that can be poorly designed and manufactured. Thus, the current level of development of scientific and technological progress has significantly tightened the requirements for the technical level and quality of products in general and their individual elements. A systematic approach allows you to objectively select the scale and direction of quality management, types of products, forms and methods of production that provide the greatest effect of the efforts and funds spent on improving product quality. A systematic approach to improving the quality of products makes it possible to lay the scientific foundations of industrial enterprises, associations, and planning bodies.

In industries, statistical methods are used to analyze product and process quality. Quality analysis is an analysis by which, using data and statistical methods, the relationship between the exact and the replaced quality characteristics is determined. Process analysis is an analysis that allows us to understand the relationship between causal factors and results such as quality, cost, productivity, etc. Process control involves identifying causal factors that affect the smooth functioning of the production process. Quality, cost and productivity are the results of the control process.

Statistical methods for product quality control are currently becoming increasingly recognized and widespread in industry. Scientific methods of statistical control of product quality are used in the following industries: mechanical engineering, light industry, and public services.

The main objective of statistical control methods is to ensure the production of usable products and the provision of useful services at the lowest cost.

Statistical methods for product quality control provide significant results in the following indicators:

· improving the quality of purchased raw materials;

· saving of raw materials and labor;

· improving the quality of manufactured products;

· reduction of control costs;

· reduction in the number of defects;

· improving the relationship between production and consumer;

· facilitating the transition of production from one type of product to another.

The main task is not just to increase the quality of products, but to increase the quantity of products that would be suitable for consumption.

Two basic concepts in quality control are the measurement of controlled parameters and their distribution. In order to judge the quality of a product, it is necessary to measure such parameters as the strength of the material or paper, the weight of the item, the quality of coloring, etc.

The second concept - distribution of values ​​of a controlled parameter - is based on the fact that there are no two parameters of the same products that are absolutely identical in value; As measurements become more precise, small discrepancies are found in the parameter measurements.

The variability of the “behavior” of the controlled parameter is of 2 types. The first case is when its values ​​constitute a set of random variables formed under normal conditions; the second is when the set of its random variables is formed under conditions different from normal under the influence of certain reasons.

1. Statistical acceptance control based on an alternative criterion

The consumer, as a rule, does not have the opportunity to control the quality of products during the manufacturing process. However, he must be sure that the products he receives from the manufacturer meet the established requirements, and if this is not confirmed, he has the right to demand that the manufacturer replace the defective product or eliminate the defects.

The main method of monitoring raw materials, materials and finished products supplied to consumers is statistical acceptance control of product quality.

Statistical acceptance control of product quality– selective control of product quality, based on the use of mathematical statistics methods to check product quality to established requirements.

If the sample size becomes equal to the size of the entire controlled population, such control is called complete (100 %) control. Complete control is possible only when the quality of the product does not deteriorate during the control process; otherwise sampling control, i.e. control of a certain small part of the total output, becomes unavoidable.

Complete control is carried out if there are no special obstacles to it, or when a critical defect is possible, i.e. a defect whose presence completely precludes the use of the product for its intended purpose.

All products can also be tested under the following conditions:

· the batch of products or material is small;

· the quality of the input material is poor or nothing is known about it.

You can limit yourself to checking part of the material or products if:

· the defect will not cause serious equipment malfunction and does not pose a threat to life;

· products are used in groups;

· Defective products can be detected at a later stage of assembly.

In the practice of statistical control, the proportion of defectives q in the lot is unknown and must be estimated from the results of inspecting a random sample of n products, of which m prove defective.

A statistical control plan is understood as a system of rules indicating methods for selecting products for testing, and the conditions under which a batch should be accepted, rejected, or continued control.

There are the following types of plans for statistical control of a batch of products based on an alternative criterion:

one-stage plans, according to which, if among n randomly selected products the number of defective items m is no more than the acceptance number C (m ≤ C), the batch is accepted; otherwise the batch is rejected;

two-stage plans, according to which, if among n1 randomly selected products the number of defective items m1 is no more than the acceptance number C1 (m1 ≤ C1), the batch is accepted; if m1 ≥ d1, where d1 is the rejection number, the batch is rejected; if C1 < m1 < d1, a decision is made to take a second sample of size n2. Then, if the total number of defective items in the two samples (m1 + m2) is no more than C2, the batch is accepted; otherwise the batch is rejected on the basis of the data of the two samples;

multi-stage plans are a logical continuation of two-stage plans. First a sample of size n1 is taken and the number of defective items m1 is determined. If m1 ≤ C1, the batch is accepted. If m1 ≥ d1 (d1 ≥ C1 + 1), the batch is rejected. If C1 < m1 < d1, a decision is made to take a second sample of size n2. Let there be m2 defective items among the n1 + n2 products. Then, if m2 ≤ C2, where C2 is the second acceptance number, the batch is accepted; if m2 ≥ d2 (d2 ≥ C2 + 1), the batch is rejected. When C2 < m2 < d2, a decision is made to take a third sample. Further control is carried out according to a similar scheme, with the exception of the last, k-th step. At the k-th step, if among the inspected products of the sample there are mk defective items and mk ≤ Ck, the batch is accepted; if mk > Ck, the batch is rejected. In multi-stage plans the number of steps is k, and it is assumed that n1 = n2 = … = nk;

sequential control, in which the decision on the controlled batch is made after assessing the quality of a series of samples whose total number is not fixed in advance but is determined in the course of control by the results of the previous samples.

Single-stage plans are simpler in terms of organizing production control. Two-stage, multi-stage and sequential control plans provide greater accuracy of decisions with the same sample size, but they are more complex in organizational terms.

The task of selective acceptance control actually comes down to statistical testing of the hypothesis that the proportion of defective products q in a batch is equal to the permissible value qo, i.e. H0:q = q0.

The goal of choosing the right statistical control plan is to make errors of the first and second types unlikely. Let us recall that errors of the first type are associated with the possibility of mistakenly rejecting a batch of products; errors of the second type are associated with the possibility of mistakenly missing a defective batch.
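A sketch of the operating characteristic of a single-stage (n, C) plan: the probability of accepting a batch with proportion defective q is the binomial sum over m = 0 … C; the plan parameters below are hypothetical.

```python
from math import comb

def acceptance_probability(n: int, c: int, q: float) -> float:
    """P(accept) for a single-stage plan: at most c defectives in a sample of n."""
    return sum(comb(n, m) * q**m * (1 - q)**(n - m) for m in range(c + 1))

# Hypothetical plan: sample 50 items, accept the batch if at most 2 are defective
n, c = 50, 2
for q in (0.01, 0.02, 0.05, 0.10):
    print(f"proportion defective q = {q:.2f}: P(accept) = {acceptance_probability(n, c, q):.3f}")
```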

2. Statistical acceptance control standards

For the successful application of statistical methods for product quality control, the availability of appropriate guidelines and standards, which should be available to a wide range of engineering and technical workers, is of great importance. Standards for statistical acceptance control provide the ability to objectively compare quality levels of batches of the same type of product both over time and across different enterprises.

Let us dwell on the basic requirements for standards for statistical acceptance control.

First of all, the standard must contain a sufficiently large number of plans with different operational characteristics. This is important, as it will allow you to choose control plans taking into account the specifics of production and consumer requirements for product quality. It is desirable that the standard specify different types of plans: single-stage, two-stage, multi-stage, sequential control plans, etc.

The main elements of acceptance control standards are:

1. Tables of sampling plans used in normal production conditions, as well as plans for enhanced control in conditions of disturbances and to facilitate control when achieving high quality.

2. Rules for selecting plans taking into account control features.

3. Rules for the transition from normal control to enhanced or lightweight control and the reverse transition during the normal course of production.

4. Methods for calculating subsequent assessments of quality indicators of the controlled process.

Depending on the guarantees provided by acceptance control plans, the following methods for constructing plans are distinguished:

set the values of the supplier's risk α and the consumer's risk β and require that the operating characteristic P(q) pass approximately through two points: (q0, 1 − α) and (qm, β), where q0 and qm are, respectively, the acceptable and rejection quality levels. This plan is called a compromise plan, since it protects the interests of both the consumer and the supplier. For small values of α and β, the sample size must be large;

select one point on the operating characteristic curve and accept one or more additional independent conditions.

The first system of statistical acceptance inspection plans to find wide industrial use was developed by Dodge and Romig. Plans of this system provide for complete inspection of products from rejected batches and the replacement of defective products with good ones.

The American standard MIL-STD-105D has become widespread in many countries. The domestic standard GOST 18242–72 is close in structure to the American one and contains plans for one-stage and two-stage acceptance inspection. The standard is based on the concept of an acceptable quality level (AQL) q0, which is considered as the maximum proportion of defective products that the consumer allows in a batch manufactured under normal production conditions. The probability of rejecting a batch with a proportion of defective products equal to q0 is small for the standard's plans and decreases as the sample size increases. For most plans it does not exceed 0.05.

When inspecting products based on several criteria, the standard recommends classifying defects into three classes: critical, significant and minor.

3. Control charts

One of the main tools in the vast arsenal of statistical quality control methods is the control chart. It is generally accepted that the idea of the control chart belongs to the famous American statistician Walter A. Shewhart. It was proposed in 1924 and described in detail in 1931. Initially, control charts were used to record the results of measurements of the required properties of products. If a parameter went beyond the tolerance range, this indicated the need to stop production and adjust the process in accordance with the knowledge of the specialist managing the production.

This provided information about when, by whom and on what equipment defects had been produced in the past.

However, in this case the decision on adjustment was made when the defect had already occurred. Therefore, it was important to find a procedure that would accumulate information not only for retrospective study, but also for use in decision making. This proposal was published by the statistician E. S. Page in 1954. Charts that are used in decision making are called cumulative (CUSUM) charts.

A control chart consists of a center line, two control limits (above and below the center line), and values of the characteristic (performance indicator) plotted on the chart to represent the state of the process.

At certain periods of time, n manufactured products are selected (all in a row; selectively; periodically from a continuous flow, etc.) and the controlled parameter is measured.

The measurement results are plotted on a control chart, and depending on these values, a decision is made to adjust the process or to continue the process without adjustments.

Signals of a possible problem with the technological process can be:

a point goes beyond the control limits (point 6) — the process has gone out of control;

a group of consecutive points lies near one control limit but does not go beyond it (points 11, 12, 13, 14), which indicates a violation of the equipment setting level;

a strong scattering of points (15, 16, 17, 18, 19, 20) on the control chart relative to the center line, which indicates a decrease in the accuracy of the technological process.


[Figure: control chart showing the upper limit, central line and lower limit, with sample numbers 6 and 11–20 marked along the horizontal axis.]
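A sketch (hypothetical plotted values and limits) of checking the three warning signals listed above: a point beyond the control limits, a run of consecutive points near one limit, and an increase in the scatter around the center line.

```python
from statistics import stdev

# Hypothetical plotted values and control-chart parameters
points = [0.1, -0.2, 0.3, -0.1, 0.2, 3.4, 0.0, -0.3, 0.2, 0.1,
          2.3, 2.5, 2.2, 2.6, -2.4, 2.5, -2.6, 2.7, -2.3, 2.4]
center, ucl, lcl = 0.0, 3.0, -3.0

# Signal 1: a point outside the control limits
outside = [i + 1 for i, x in enumerate(points) if x > ucl or x < lcl]
print("points outside the limits:", outside)

# Signal 2: four consecutive points in the outer third near the upper limit
near_upper = [x > center + 2 * (ucl - center) / 3 for x in points]
runs = [i + 1 for i in range(len(points) - 3) if all(near_upper[i:i + 4])]
print("runs of 4 points near the upper limit start at point:", runs)

# Signal 3: scatter of the last 8 points compared with the first 8
print(f"stdev of first 8 points = {stdev(points[:8]):.2f}, of last 8 points = {stdev(points[-8:]):.2f}")
```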

Conclusion

The continuing development of a new economic environment of reproduction in our country, i.e. of market relations, dictates the need for constant improvement of quality, using all the possibilities and all the achievements of progress in the field of engineering and the organization of production.

The most complete and comprehensive assessment of quality is ensured when all the properties of the analyzed object are taken into account as they manifest themselves at all stages of its life cycle: during manufacture, transportation, storage, use, repair and servicing.

Thus, the manufacturer must control the quality of the product and, based on the results of sampling, judge the state of the corresponding technological process. Thanks to this, he promptly detects problems in the process and corrects them.

Bibliography

1. Gembris S., Herrmann J. Quality Management. Omega-L, SmartBook, 2008.

2. Shevchuk D.A. Quality Control. GrossMedia, Moscow, 2009.

3. Electronic textbook “Quality Control”

Ministry of General and Professional Education of the Russian Federation

Nizhny Novgorod State University

named after N.I. Lobachevsky

Faculty of Economics

TEST

in the discipline "Quality Management"

on the topic “Statistical methods of quality control”

Head A.Yu. Efimychev

5th year student of group 52 A.Yu. Tyutin

Nizhny Novgorod, 1999

1. Introduction

2 Statistical methods for product quality control

2.1 Control charts. Quantitative control

2.1.1 Average value and range

2.1.2 Arithmetic mean and range control charts

2.2 Control charts. Control based on alternative characteristics

2.2.1 Theoretical distribution of the share of defective units of production at constant n and p

2.2.2 Control chart for constant volume sampling

2.3 Statistical acceptance control based on alternative characteristics

2.4 Statistical acceptance control by quantitative characteristic

3 Conclusion

4 List of used literature

1. Introduction


Statistical methods can be divided into 3 categories according to the degree of difficulty:

1) The elementary statistical method includes the so-called 7 “principles”:

· Pareto chart;

· Cause-and-effect analysis;

· Grouping data according to common characteristics;

· Checklist;

· Bar chart. The histogram method is an effective data processing tool and is intended for ongoing quality control during the production process, studying the capabilities of technological processes, and analyzing the work of individual performers and units. A histogram is a graphical method of presenting data grouped by frequency of occurrence within a certain interval;

· Scatter diagram (correlation analysis through determining the median);

· Graph and control chart. Control charts graphically reflect the dynamics of the process, i.e. changes in indicators over time. The chart shows the range of inevitable scatter, which lies within the upper and lower limits. Using this method, one can quickly trace the beginning of a drift in any quality indicator during the technological process, in order to take preventive measures and prevent defects in the finished product.

These principles must be applied by everyone without exception - from the head of the company to the ordinary worker. They are used not only in the production department, but also in departments such as planning, marketing, and logistics departments.

2) Intermediate statistical method includes:

· Theory of sampling research;

· Statistical sampling control;

· Various methods for conducting statistical assessments and determining criteria;

· Method of applying sensory checks;

· Method of design of experiments.

These methods are aimed at engineers and quality management professionals.

3) Advanced (computer-assisted) statistical methods include:

· Advanced methods of design of experiments;

· Multivariate analysis;

· Various methods of operations research.

A limited number of engineers and technicians are trained in this method because it is used in very complex process and quality analyses.

The main problem associated with the use of statistical methods in industry is false data and data that do not correspond to the facts. Erroneous data and facts arise in two cases: the first concerns data that is skillfully fabricated or carelessly prepared, and the second concerns incorrect data prepared without the use of statistical methods.

The use of statistical methods, including the most sophisticated ones, should become commonplace. We should also not forget about the effectiveness of simple methods, without mastering which the use of more complex methods is not possible.

Technical progress cannot be separated from the application of statistical methods that improve the quality of products, increase reliability and reduce quality costs.

In industries, statistical methods are used to analyze product and process quality. Quality analysis is an analysis through which, using data and statistical methods, the relationship between the exact and replaced qualitative characteristics is determined. Process analysis is an analysis that allows us to understand the relationship between causal factors and results such as quality, cost, productivity, etc. Process control involves identifying causal factors that affect the smooth functioning of the production process. Quality, cost and productivity are the results of the control process.

Statistical methods for product quality control are currently becoming increasingly recognized and widespread in industry. Scientific methods of statistical control of product quality are used in the following industries: mechanical engineering, light industry, and public services.

The main task Statistical control methods are to ensure the production of usable products and the provision of useful services at the lowest cost.

Statistical methods for product quality control provide significant results in the following indicators:

· improving the quality of purchased raw materials;

· saving of raw materials and labor;

· improving the quality of manufactured products;

· reduction of control costs;

· reduction in the number of defects;

· improving the relationship between producer and consumer;

· facilitating the transition of production from one type of product to another.

The main task is not just to increase the quality of products, but to increase the quantity of products that would be suitable for consumption.

Two basic concepts in quality control are the measurement of controlled parameters and their distribution. In order to judge the quality of a product, it is necessary to measure parameters such as the strength of the material or paper, the weight of the item, the quality of coloring, and so on.

The second concept, the distribution of values of a controlled parameter, rests on the fact that no two units of the same product have absolutely identical parameter values; as measurements become more precise, small discrepancies between the measured values are always found.

The variability of the "behavior" of the controlled parameter is of two types. In the first case its values form a set of random variables obtained under normal conditions; in the second, the set of random variables is formed under conditions that differ from normal, under the influence of certain assignable causes.

Personnel managing the process in which the controlled parameter is formed must determine from its values: firstly, under what conditions they were obtained (normal or different from them); and if they are obtained under conditions other than normal, then what are the reasons for the violation of normal process conditions. Then a control action is taken to eliminate these causes.

One way to achieve satisfactory quality and maintain it at this level is to use control charts.

The most common are the control chart of means (the x̄ chart) and the control chart of ranges (the R chart), which are used together or separately.

Let's give an example. In vessels 1, 2, 3, ... there are wooden sticks on which the numbers -10, -9, ..., -2, -1, 0, 1, 2, ..., 9, 10 are written. The sticks imitate products, and the numbers printed on them indicate deviations of the controlled dimension from the nominal size in hundredths of a percent. Each vessel contains N sticks, which can be regarded as the products made over a given time interval, called the sampling period. The value of N is assumed to be large, so that the same number may be printed on several sticks, some sticks may be the only carriers of certain numbers, and it is even possible that some vessel contains no stick with a particular number at all. After the sticks in a vessel are thoroughly mixed, a sample of n sticks is taken from it, for example n = 5; thorough mixing ensures that the selection is random. The numbers printed on the sticks in each successive sample are written down, their arithmetic mean is calculated and plotted as the ordinate of a point whose abscissa is the number of the vessel. If the point lies within the boundaries drawn on the control chart, the process simulated by this model is considered to be in a stable state; otherwise it requires adjustment.
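As a rough illustration only, the following Python sketch imitates the stick-and-vessel model just described; the vessel contents, the number of vessels, and the control limits used here are assumptions made for the example, not values taken from the text.

```python
import random
import statistics

random.seed(1)

n = 5                     # sample size drawn from each vessel
num_vessels = 10          # number of vessels (sampling periods), assumed
# each vessel holds many sticks labelled with deviations -10..10
vessel = [d for d in range(-10, 11) for _ in range(20)]

lower_limit, upper_limit = -4.0, 4.0   # assumed control limits for the sample mean

for period in range(1, num_vessels + 1):
    sample = random.sample(vessel, n)        # thorough mixing ~ a random draw
    x_bar = statistics.mean(sample)          # arithmetic mean of the sample
    state = "in control" if lower_limit <= x_bar <= upper_limit else "adjust the process"
    print(f"vessel {period}: sample = {sample}, mean = {x_bar:.1f} -> {state}")
```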

A statistic is a function of the random variables obtained from a single population that is used to estimate a certain parameter of this population.

Let x1, x2, ..., xn be the observation results forming one sample of size n. The sample arithmetic mean is defined as x̄ = (x1 + x2 + ... + xn)/n = (1/n)·Σxi (i = 1, 2, ..., n).

The range of this sample is R = xmax − xmin, where

xmax is the maximum observation in the sample,

xmin is the minimum observation in the sample.

Let twenty-five samples be taken, each consisting of five observations. The arithmetic mean and the range are determined for each sample separately and are plotted on the control charts of means and ranges.

Table 2-1. Record of the observation results

Next, we find the average value of all measurements, or the overall mean. This can be done by adding up the column of sample means and dividing the sum by the number of samples (note that some of these values are negative). If we denote the number of samples by k (here k = 25), the overall mean is x̿ = (x̄1 + x̄2 + ... + x̄k)/k.

Then we determine the average range by dividing the sum of the sample ranges by the number of samples: R̄ = (R1 + R2 + ... + Rk)/k. After this, the control lines are plotted on the control charts:

· the upper control limit of the chart of means: UCLx̄ = x̿ + A2·R̄;

· the lower control limit of the chart of means: LCLx̄ = x̿ − A2·R̄;

· the upper control limit of the range chart: UCLR = D4·R̄;

· the lower control limit of the range chart: LCLR = D3·R̄,

where A2, D3 and D4 are coefficients that depend on the sample size. For samples of five observations (n = 5) the standard values are A2 = 0.577, D3 = 0, D4 = 2.114.


Fig. 2-1. Control chart of the mean values for the data shown in Table 2-1


Fig. 2-2. Control chart of the ranges for the data shown in Table 2-1
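To make the formulas above concrete, here is a minimal Python sketch that computes the grand mean, the average range, and the x̄-R control limits. The four samples are invented for illustration (the actual Table 2-1 data are not reproduced here); only the coefficients A2 = 0.577, D3 = 0 and D4 = 2.114 are the standard values for n = 5.

```python
import statistics

samples = [
    [2, -1, 0, 3, 1],
    [0, 1, -2, 2, -1],
    [1, 1, 0, -1, 2],
    [-2, 0, 1, 1, 0],
]  # invented data; in practice there would be k = 25 samples, as in Table 2-1

A2, D3, D4 = 0.577, 0.0, 2.114           # standard coefficients for n = 5

means = [statistics.mean(s) for s in samples]
ranges = [max(s) - min(s) for s in samples]

grand_mean = statistics.mean(means)      # overall mean of the sample means
avg_range = statistics.mean(ranges)      # average range

ucl_x = grand_mean + A2 * avg_range      # upper control limit, chart of means
lcl_x = grand_mean - A2 * avg_range      # lower control limit, chart of means
ucl_r = D4 * avg_range                   # upper control limit, range chart
lcl_r = D3 * avg_range                   # lower control limit, range chart

print(ucl_x, lcl_x, ucl_r, lcl_r)
```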

The limits obtained in this way are plotted on the control charts. If we take samples from a vessel of sticks, then, as a rule, all the points on the control chart lie within the established boundaries; and if the points lie within these boundaries, the corresponding process is considered to be in a stable state.

It should be noted that this fact does not indicate whether the quality of all products is satisfactory.

If all points on the control chart are within the control boundaries, then the process is considered established until production conditions change. This means that all changes are natural or random, i.e. chaotic, and do not occur due to certain reasons.

These charts are used for inspection by an alternative (attributes) criterion. This means that after inspection a product is considered either suitable or defective, and the decision on the quality of the inspected population is made depending on the number of defective items found in a sample, or on the number of defects per certain number of products (units of production).

A defect is each individual non-conformity of a product with the established requirements.

Scrap (rejects) are products whose transfer to the consumer is not permitted because of the presence of defects.

The most common approach to accounting for defects is control of the proportion of defective units of production, using so-called p-charts, and of the number of defects per unit of production, using so-called c-charts.

The concept of the share of defective units of production is used when we mean the share of defective units of production in the totality of defective and good units.

Then p is defined as follows: p (the proportion of defective units) is equal to the total number of defective items found divided by the total number of items inspected.

The concept of the number of defects per unit of production is used when a product is assessed not as defective or good, but only by the number of defects it contains.

Thus, c (the number of defects per unit of production) is equal to the total number of defects found divided by the total number of products inspected.

The characteristics p and c are statistical estimates of the corresponding population parameters p′ and c′.

Table 2-3. Data for the p-chart



Fig. 2-4. p-chart for the data given in Table 2-3

The data given in the table show the results of 20 samples (50 items each) drawn from a vessel containing 4% red balls (defective units of production). These samples simulate daily sampling of a process over the course of a month. The p values are entered successively on the p-chart.

The center line of the p-chart corresponds to p̄, the average proportion of defective units of production. Its value is equal to the total number of defective products divided by the total number of products inspected: p̄ = (total number of defective items)/(total number of items inspected). The same value of p̄ can be obtained by averaging all the p values; however, if the sample size is not constant it cannot be calculated in that way, whereas the formula above is always valid.

The control limits are determined by the formula UCL, LCL = p̄ ± 3·√(p̄(1 − p̄)/n).
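A minimal Python sketch of the p-chart calculation just described, assuming a constant sample size n = 50 and invented defect counts; the three-sigma limit formula is the one quoted above.

```python
import math

n = 50
defectives = [2, 1, 3, 0, 2, 4, 1, 2]            # defective items found per sample (invented)

p_values = [d / n for d in defectives]           # proportion defective in each sample
p_bar = sum(defectives) / (n * len(defectives))  # center line: total defective / total inspected

sigma = math.sqrt(p_bar * (1 - p_bar) / n)
ucl = p_bar + 3 * sigma
lcl = max(0.0, p_bar - 3 * sigma)                # a proportion cannot be negative

for i, p in enumerate(p_values, start=1):
    flag = "ok" if lcl <= p <= ucl else "out of control"
    print(f"sample {i}: p = {p:.3f} ({flag})")
```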

If, according to the results of statistical control, not a single point on the p-chart lies outside the control limits, the process is considered to be in a stable state; in this case all deviations of the points from the center line are random.

If subsequently any point falls outside the control limits, this means that some assignable cause of process disturbance has appeared.

The consumer, as a rule, does not have the opportunity to control the quality of products during the manufacturing process. However, he must be sure that the products he receives from the manufacturer meet the established requirements, and if this is not confirmed, he has the right to demand that the manufacturer replace the defective product or eliminate the defects.

The main method of monitoring raw materials, materials and finished products supplied to consumers is statistical acceptance control of product quality.

Statistical acceptance control of product quality is sampling control of product quality, based on the methods of mathematical statistics, carried out to verify that product quality conforms to the established requirements.

If the sample size equals the size of the entire controlled population, such control is called complete (100%) control. Complete control is possible only when product quality does not deteriorate during inspection; otherwise sampling control, i.e. inspection of a certain small part of the total output, becomes unavoidable.

Complete control is carried out, if there are no particular obstacles to it, whenever a critical defect is possible, i.e. a defect whose presence completely precludes the use of the product for its intended purpose.

All products can also be tested under the following conditions:

· the batch of products or material is small;

· the quality of the input material is poor or nothing is known about it.

You can limit yourself to checking part of the material or products if:

· the defect will not cause serious equipment malfunction and does not pose a threat to life;

· the products are used in groups;

· Defective products can be detected at a later stage of assembly.

It has been established that, for the same sample size, statistical acceptance control by a quantitative (variables) criterion provides more information than acceptance control by an alternative (attributes) criterion. It follows that statistical acceptance control by variables can provide the same information with a smaller sample size than acceptance control by attributes.

However, this does not mean that statistical acceptance control on a quantitative basis is always better than statistical acceptance control on an alternative basis. It has the following disadvantages:

· the presence of additional restrictions that narrow the scope of application;

· monitoring often requires more sophisticated equipment.

If destructive testing is carried out, then control plans based on a quantitative characteristic are more economical than control plans based on an alternative characteristic.
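As an illustration of acceptance control by an alternative (attributes) criterion, here is a tiny sketch of a single sampling plan: a sample of n items is inspected and the lot is accepted if the number of defectives found does not exceed the acceptance number c. The plan parameters n = 50 and c = 1 are assumptions for the example, not values taken from any standard.

```python
def accept_lot(defectives_in_sample: int, acceptance_number: int = 1) -> bool:
    """Single sampling plan by attributes: accept the lot if the number of
    defectives found in the sample does not exceed the acceptance number c."""
    return defectives_in_sample <= acceptance_number

# e.g. 50 items inspected, 2 of them defective -> the lot is rejected
print(accept_lot(defectives_in_sample=2))   # False
```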

3 Conclusion

The development of a new economic environment of reproduction in our country, i.e. market relations, dictates the need for constant improvement of quality, using all available possibilities and all achievements of progress in technology and the organization of production.

The most complete and comprehensive quality assessment is obtained when all the properties of the analyzed object are taken into account as they manifest themselves at all stages of its life cycle: manufacturing, transportation, storage, use, repair, and servicing.

Thus, the manufacturer must control the quality of the product and, based on the results of sampling, judge the state of the corresponding technological process. Thanks to this, he promptly detects problems in the process and corrects them.


Statistical methods occupy a special place in the group of quality control methods. Their application is based on the results of measurements, analyses, tests, operational data, and expert assessments. The main thing in statistical methods is the methodology of working with actual data. The tasks solved here are the planning, collection, processing, and unification of information, its use in analysis and management, decision-making based on the results of analysis, forecasting, and so on.

The set of modern statistical methods for quality control is divided into three categories according to the degree of complexity.

1. Elementary statistical methods, including the Pareto chart, the cause-and-effect diagram, the check sheet, the histogram, the scatter diagram, the stratification method, and the control chart.

2. Intermediate statistical methods, which include: the theory of sampling surveys; statistical sampling inspection; various methods for conducting statistical assessments and defining criteria; the design of experiments. This group of methods is used by engineers and quality-management specialists.

3. Advanced statistical methods including design of experiments, multivariate analysis, various operations research methods. A limited number of engineers and specialists are trained in their use.

Elementary statistical methods underlie other categories of statistical methods.

A check sheet is a form on which the controlled parameters of a part or product are printed so that measurement data can be entered easily and accurately. Its purpose is twofold: first, to facilitate the collection of data on the monitored parameters, and second, to organize the data automatically so as to facilitate their further use (a small tally sketch follows the list of check-sheet types below).

There are four types of check sheets:

1) a check sheet for recording the distribution of the measured parameter during the production process.



2) check sheet for recording types of defects.

3) a check sheet of defect locations. Some products exhibit external defects, such as scratches or dirt, and the company takes various measures to reduce them. A major role in solving this problem is played by defect-location check sheets, which contain sketches or diagrams on which notes are made so that the location of defects can be observed. Such sheets are needed for diagnosing the manufacturing process of a part, since the causes of defects can often be found by examining the places where they occur and observing the process in search of an explanation of why defects are concentrated in these areas;

4) a check sheet of the causes of defects. Here detected defects are recorded by type, taking into account that their causes may be related to the equipment, the time of manufacture, or the individual operator. The check sheet makes it possible to identify the root causes so that corrective measures can be developed.
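As a small illustration of the second type of check sheet (recording defect types), the following sketch keeps the tally as a simple counter; the defect names and counts are hypothetical.

```python
from collections import Counter

observed_defects = [
    "scratch", "dirt", "scratch", "crack", "scratch", "dirt", "misalignment",
]

check_sheet = Counter(observed_defects)      # one tally per defect type
for defect_type, count in check_sheet.most_common():
    print(f"{defect_type:<14} {'|' * count}  ({count})")
```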

The Pareto chart is named after the Italian economist V. Pareto, who in 1897 derived a formula showing that benefits in society are distributed unevenly.

The essence of the Pareto principle, which forms the basis for constructing the diagram, is that the entire set of possible causes of defects is divided into two groups. The first group is a small number of reasons that significantly affect the occurrence of defects (a few that are significantly important). The second group is a large number of causes that have little impact (numerous insignificant). Constructing a Pareto chart is a method for determining a few essential factors that affect the quality of a part or product.
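The arithmetic behind a Pareto chart can be sketched in a few lines: the causes are sorted by the number of defects they produce and cumulative percentages are computed, so the "vital few" causes stand out. The cause names and counts below are hypothetical.

```python
defect_counts = {          # hypothetical defect counts by cause
    "setup error": 52,
    "worn tool": 31,
    "bad raw material": 9,
    "operator slip": 5,
    "other": 3,
}

total = sum(defect_counts.values())
cumulative = 0
# sort causes from most to least frequent, as on a Pareto chart
for cause, count in sorted(defect_counts.items(), key=lambda kv: kv[1], reverse=True):
    cumulative += count
    print(f"{cause:<18} {count:>3}  {100 * cumulative / total:5.1f}% cumulative")
```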

There are such types of Pareto chart as a chart based on the results of activities and a chart based on reasons. The first is intended to identify the main problem in the process under study and may reflect undesirable results of activity (in the field of quality, these may be: defects, breakdowns, errors, failures, complaints, repairs, product returns). The second reflects the causes of problems that arise during the production process and is used to identify the main one.

The Pareto diagram and curve clearly reflect the results of quality control of a particular product. Based on these data, the main causes that lead to the most significant defects are identified, and measures to eliminate them are developed.

After a certain time after the implementation of these measures, the procedure for constructing a Pareto diagram is repeated, and it is advisable to do this on the same form in order to clearly see how effective the efforts made to eliminate the causes of a particular type of defect were.

The cause-and-effect diagram (Ishikawa diagram) reflects the relationship between a certain quality indicator and the factors affecting it.

It is otherwise called a fishbone diagram due to the similarity in shape.

In order to construct a cause and effect diagram, you must:

1) determine the quality indicator that will be studied;

2) find the main reasons that affect this indicator;

3) identify secondary causes that influence the main ones, then identify third-order causes that affect the secondary ones, and so on until they are completely exhausted;

4) analyze all detected causes and highlight those that presumably have the greatest impact on the quality indicator under study. These reasons are given special attention when solving problems with the quality indicator under study.

The scatter diagram, one of the elementary statistical methods, is used to identify the dependence of some indicators on others. The data reproduced by the scatter plot form a correlation field, and the dependence between the indicators is judged from the shape of this field. Using a scatter diagram one can competently solve many practical questions, for example, establish how the accuracy of part machining depends on the parameters of the machine, the tools, adherence to technological discipline, and so on.
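The calculation that underlies a scatter diagram can be sketched as follows: paired observations are collected and their correlation coefficient is computed (Python 3.10+ provides statistics.correlation). The machining data below, spindle speed versus dimensional error, are invented for illustration.

```python
from statistics import correlation   # available in Python 3.10+

spindle_speed = [800, 900, 1000, 1100, 1200, 1300]            # assumed factor values
dimension_error = [0.012, 0.015, 0.014, 0.019, 0.021, 0.024]  # assumed quality indicator

r = correlation(spindle_speed, dimension_error)
print(f"correlation coefficient r = {r:.2f}")   # r close to +1 suggests a strong positive dependence
```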

A histogram is a type of bar chart used to illustrate the distribution of a controlled parameter. It provides visual information about the manufacturing process and helps decide which problem should be the focus of attention. This information is displayed as a series of bars of equal width but different heights: the width of a bar is an interval within the control range, and its height is the number of observations falling within that interval.
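A minimal sketch of the grouping a histogram performs: measurements are counted into equal-width intervals and each interval is shown as a bar. The shaft-diameter values and the interval boundaries are invented for illustration.

```python
measurements = [9.98, 10.01, 10.02, 9.99, 10.00, 10.03, 10.01, 9.97, 10.02, 10.00]

low, high, bins = 9.96, 10.04, 4          # assumed control range and number of intervals
width = (high - low) / bins
counts = [0] * bins

for x in measurements:
    index = min(int((x - low) / width), bins - 1)   # clamp the upper edge into the last bin
    counts[index] += 1

for i, count in enumerate(counts):
    left = low + i * width
    print(f"[{left:.2f}; {left + width:.2f})  {'#' * count}")
```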

The stratification method (data stratification) is a tool that allows one to select the data that carry the required information. In accordance with this method, statistical data are stratified, i.e. grouped according to the conditions under which they were obtained, and each group of data is processed separately. Data divided into groups according to their characteristics are called layers (strata), and the separation process itself is called stratification. There are various stratification methods, and their use depends on the specific task. For example, data relating to products produced in one workshop may vary somewhat depending on the operator, the equipment used, the methods of carrying out work operations, and so on; all these differences can serve as stratification factors. For stratification the "5M" method is often used, which takes into account factors depending on man, machine, material, method, and measurement (a small grouping sketch follows the list below).

Stratification can be carried out as follows:

By performer - qualifications, gender, work experience, etc.;

By machinery and equipment - new and old equipment, brand, design, manufacturing company, etc.;

By material - place of production, manufacturing company, etc.
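A small sketch of stratification by machine, as mentioned above: the same measurements are grouped (stratified) by the machine that produced them, and each stratum is summarized separately. The machine labels and values are hypothetical.

```python
from collections import defaultdict
from statistics import mean, pstdev

records = [                      # (stratification factor, measured value), hypothetical
    ("machine A", 10.01), ("machine B", 10.06), ("machine A", 9.99),
    ("machine B", 10.07), ("machine A", 10.02), ("machine B", 10.05),
]

strata = defaultdict(list)
for machine, value in records:
    strata[machine].append(value)            # one layer (stratum) per machine

for machine, values in strata.items():
    print(f"{machine}: mean = {mean(values):.3f}, spread = {pstdev(values):.3f}")
```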

Control charts were developed in the 1930s in the USA by W. A. Shewhart. Such charts are used to detect unfavorable trends in time and to prevent the development of serious problems that lead to the process getting out of control.

For example, during a certain period (a shift, an hour) the operation of a machine or process is monitored and the diameter of the manufactured parts is measured. A graph is constructed from the results: the value of the measured diameter is plotted along the vertical axis, and the part numbers are marked sequentially along the horizontal axis. Two horizontal lines are drawn corresponding to the tolerances of the drawing or specification, and two more that establish the upper and lower control limits (their position is determined by special formulas). A small spread of the measurements between these lines indicates that the product is being produced within tolerance. Thus we obtain the simplest control chart, which displays changes in the setting level and in the accuracy of the process.

If the points of the measurement line depicting the process are in the interval between the control limits, then the process is considered to be under control. If a number of points go beyond the boundary, this signals a disorder in the process and the need to regulate it. Control charts allow you to monitor current process performance. They show emerging deviations from a standard, target or average and reflect the level of statistical control of a process over time. The use of statistical methods is an important condition for increasing the efficiency of quality control of products and processes.
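The decision rule described above can be sketched as a short function that flags the points lying outside the control limits; the limits and measurements here are invented for illustration.

```python
def out_of_control(points, lower, upper):
    """Return the 1-based indices of points falling outside the control limits."""
    return [i for i, x in enumerate(points, start=1) if not (lower <= x <= upper)]

diameters = [10.01, 10.02, 9.99, 10.06, 10.00]              # assumed measurements
print(out_of_control(diameters, lower=9.97, upper=10.03))   # -> [4]
```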