Reasons for blurry photos. Practical work processing graphic information


    Based on materials from the book "Satellite Meteorology", M.A. German

    Interpretation of cloud images

    The ability to identify clouds in television images is determined by the resolution of the equipment. Obviously, the smaller the resolution element on the ground, the finer the detail visible in the captured objects and the more accurate their interpretation. Individual clouds smaller than the ground resolution element are not distinguished, and cloud fields consisting of clouds of this size appear on TV images as a uniform light veil, like thin clouds of varying brightness.
    Cloud photographs obtained from satellites compare favorably with ground-based observations in that they provide a holistic picture of the distribution of clouds over vast areas comparable to the main synoptic objects. This makes it possible, from the nature of the image pattern, to study inhomogeneities of cloud cover at various scales, many of which are practically imperceptible to discrete ground-based observations. In such studies a natural desire arises, on the one hand, to identify and classify the cloud formations displayed in the images according to the cloud classification adopted for ground-based observations, and on the other, to identify and classify entire cloud systems covering large areas of the earth's surface.

    Cloud image drawing

    Image texture. Texture refers to the pattern of small details in an image, created by differences in the brightness of individual elements whose sizes are comparable to the resolution of the equipment. In this case, only the most basic features of an object are reproduced in the image, from which one can judge whether it is round or elongated, lighter (colder) than the surrounding background, or vice versa. There are three main types of texture: matte, grainy and fibrous.
    Matte texture is characterized by a uniform image tone. Television photographs of this texture differ from each other only in brightness. Matte texture is characteristic of images of open areas of the water surface, land in areas of sufficient moisture, arid land areas, continuous ice and snow cover, fog and stratus clouds.
    Grainy texture is an accumulation of light or dark spots (grains) on a contrasting background. Fine grains are usually characteristic of images of cumulus clouds; in this case, the grain sizes are so small that the details and shapes of individual clouds are completely hidden. Stratocumulus clouds look similar, except that the grains in this case are dark on a light background; here the grains should be understood as gaps in the clouds. However, the presence of dark grains on a light background does not guarantee that stratocumulus clouds are depicted; these can also be cumulus clouds. Macrostructure refers to the geometric features of large areas of the image, created by hundreds of elements whose size is approximately two to three orders of magnitude greater than the resolution of the system.
    Large-scale cloud systems include: frontal cloud zones, cyclone cloud vortices, jet stream clouds, tropical cyclone cloud vortices, intertropical convergence zone clouds, and tropical cold front clouds.
    The characteristics of cloud systems make it possible to identify the general synoptic environment in which certain cloud formations are observed.
    Thus, the complex of basic characteristics discussed above can form the basis for recognizing clouds and the underlying surface in satellite photographs. Still, even with these cloud characteristics at the analyst's disposal, the interpretation task remains difficult. An important aid in recognizing satellite photographs is the radiation measurements made simultaneously with TV cloud imaging. A joint analysis of all the satellite information makes it possible to determine the vertical extent of clouds and to refine their form using these data. If the analyst has only television photographs of cloudiness available, then the thickness of clouds is determined from the shadows cast by high clouds on lower ones. The height of one cloud above another can in this case be determined from data on the elevation of the Sun. Shadows can be visible not only against lower clouds, but also against light sand, snow and ice. On the water surface, however, which usually has a dark tone, a shadow cannot always be detected.

    Cloud shape and amount

    When analyzing television images, it is not always possible to accurately determine the forms of the morphological classification of clouds because of the photographic similarity of most of them to each other. Therefore, interpretation uses a conditional classification compiled with the informative capabilities of the photograph in mind. The following main types of cloudiness are distinguished, each of which can include not only the corresponding forms of the morphological classification - cumulus, stratus, cirrus, etc. - but also all kinds of varieties from all tiers that create a similar visual effect in the images: cirriform, stratiform, cumuliform, cumulonimbus or towering cumulus, stratocumulus, and various combinations of these types.
    In addition to the main types of cloudiness, during interpretation the boundaries of homogeneous cloud fields and the amount of cloudiness are determined.
    A border (contour) is the dividing line between fields with different characteristics. Contours delineate areas (fields) that are uniform in brightness and structure of the cloud image.
    Cloud amount characterizes the degree of cloud coverage of a particular area of the earth's surface and is determined as the ratio (in percent) of the area occupied by cloud elements inside the contour to the entire area bounded by the contour, as in the sketch below.
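
    As an illustration, a minimal Python sketch of this ratio is given below; the binary cloud mask and the cloud_amount helper are hypothetical, standing in for the result of any cloud/no-cloud classification of the pixels inside a contour.

    import numpy as np

    def cloud_amount(cloud_mask):
        # Cloud amount in percent: cloudy pixels inside the contour
        # divided by the total number of pixels the contour bounds.
        return 100.0 * cloud_mask.sum() / cloud_mask.size

    # A toy 4x4 contour area with 6 cloudy pixels -> 37.5 %
    mask = np.array([[1, 1, 0, 0],
                     [1, 1, 0, 0],
                     [0, 1, 1, 0],
                     [0, 0, 0, 0]], dtype=bool)
    print(cloud_amount(mask))  # 37.5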

    Cirrus clouds

    Clouds through which the relief or lower clouds show are usually cirriform. In most cases they can be recognized in images by their filamentous structure, and also by their association with other clouds, such as cumulonimbus.
    Knowledge of the geography of the area also provides significant help in recognizing clouds. If cloud bands cross high mountain ranges without being influenced by them, the height of such clouds can be judged unambiguously and they can be classified as cirrus. Bands of more or less dense cirrus clouds often cast shadows on low- and mid-level clouds or on the snow-covered surface of the Earth. Particularly clear shadows are associated with cirrus clouds forming on the right side of a jet stream.
    Cirriform cloudiness can include not only cirrus clouds proper, but also cloud fields of other forms that have a similar structure. For example, in the absence of other identifying features, an isolated field of advective fog over the open sea creates the same photographic effect as cirrus clouds. However, knowledge of the physical mechanism and region of formation of particular cloud formations, taking the prior history into account, as well as the use of other sources, makes it possible to identify cloud types correctly.

    Stratus clouds

    The basic distinctive feature of stratus clouds in TV images is their matte, uniform tone. The mesostructure of this cloudiness can be indistinct or banded. Along the bands, the uniformity of tone is usually maintained, or it changes gradually.
    The tone of the image of dense stratus clouds is often white, sometimes bright white, while thin clouds are light gray.
    The image of stratiform cloudiness on TV images is created by nimbostratus (Ns), stratus (St), and altostratus (As) clouds. In addition, some cumuliform clouds - cumulus (Cu), altocumulus (Ac) and stratocumulus (Sc) - consisting of relatively small cloud elements separated by equally small gaps, can appear stratiform on TV images. The brightest in the image are nimbostratus clouds, whose average albedo is 80%. As clouds, with an average albedo of 60%, have lower brightness.
    Stratus clouds are often observed in combination with cumuliform clouds. In this case, the matte tone characteristic of stratus clouds is somewhat disrupted by inclusions of grainy or larger rounded cloud elements. Stratiform cloudiness often contains embedded cumulonimbus clouds (Cb), which are visible in images as bright white spots against a less bright, uniform background. Sometimes the presence of Cb is revealed by the shadows of their tops protruding above the upper surface of the stratiform clouds. In terms of amount, stratiform cloudiness is only continuous (overcast) or significant (broken).
    Stratiform cloudiness is characterized by large horizontal dimensions (up to several thousand kilometers). Its vertical thickness varies from 0.3 to 5-6 km.
    Stratus clouds are most often observed in the area of ​​warm and occluded fronts, as well as in anticyclones in the cold half of the year.
    Fog should be distinguished from stratus clouds. In satellite photographs it has a solid milky-white image with blurred edges that, as a rule, follow the shapes of the relief. Advective fog over the oceans can also have a banded structure reminiscent of the structure of cirrus clouds. Dense fog is easily recognized even against a background of snow, since it hides the contours of the underlying surface, and it can be seen through thin clouds. Weak (transparent) fog is detected in images only in the absence of snow and clouds. A light gray veil of fog over small bodies of water sometimes creates the impression of a glow in the water, like a glint of sunlight.
    Fog in IR images presents certain difficulties for interpretation. The low temperature contrast between fog and the underlying surface very often does not allow the fog to be distinguished from other objects by image tone. In this case, aerological and synoptic materials closest in time to the satellite observation can provide significant help.

    Cumulus clouds


    Cumulus clouds (cells)

    The images of cumuliform clouds in photographs are characterized by large variations in brightness. The image tone of these clouds can range from gray to bright white, with lighter and darker tones usually alternating. The typical image texture is grainy, fibrous, or dome-shaped. Mesostructural formations of cumuliform cloudiness can be of three types: cells, bands, and chains.
    Among cumuliform clouds, photographs mainly allow one to distinguish clouds of vertical development, which include cumulus (Cu), towering cumulus (Cu cong.) and some forms of stratocumulus (Sc). The brightness of the image tone of cumuliform clouds is directly proportional to their horizontal and vertical dimensions.
    Small clusters of cumulus clouds that are below the resolution of the system appear as a solid gray haze in the image and can be mistaken for thin stratiform clouds. Such clouds include fair-weather cumulus (Cu hum), altocumulus (Ac), some forms of stratocumulus (Sc), and cirrocumulus (Cc).
    Cumuliform clouds are usually arranged as individual scattered clouds or as significant clusters. The horizontal dimensions of the clouds vary within very wide limits. Cumuliform clouds can be combined with cloud forms of all levels. They stand out from the background of other clouds either by the high brightness of their image or by the characteristic shadow they cast on the underlying clouds.
    Cumulus clouds most often form near cold fronts and in the rear of a cyclone in an unstable air mass.
    It is especially important to distinguish cumulonimbus clouds from other clouds. The main features for identifying Cb in a TV image are: the brightest (bright white) image tone (albedo of about 80%); clearly defined cloud contours, clearly visible against the underlying surface and easily recognizable against any other cloudiness; a dome-shaped image texture; significant variation in horizontal dimensions; characteristic plumes of anvil cirrus; and a banded mesostructure (in the form of ridges).
    Cb occurs both in isolation and in combination with other forms. In the case of a combination of Cb with other forms, their boundary is sharply defined: they are revealed by the shadows created by their tops, whose bright white domes protrude against a darker background. In the absence of shadows, Cb are identified by the brightness of their image in the photographs. They can be observed in the rear of a cyclone in unstable cold air, as well as in an anticyclone or a diffuse pressure field, especially in summer.

    Stratocumulus clouds


    Stratocumulus (Sc) cloud sheets

    In television photographs, stratocumulus clouds appear as large or small granules. Sometimes this cloudiness appears as a field of isolated blurred spots, in the center of which, as a rule, a relatively bright formation of more developed clouds can be traced. Stratocumulus clouds have a grainy texture. This cloudiness has a gray to light gray tone in IR images, and a light to bright white tone in images taken in visible light. Clouds of these forms have a well-defined structure and are very often grouped into ridges and bands, which are usually oriented along the wind direction. Stratocumulus clouds form in cold, moist air in the subinversion layer and have a small vertical extent.

    Cumulonimbus clouds


    Cumulonimbus clouds

    This type of cloudiness is fairly easy to recognize in television images. Cumulonimbus clouds typically have a dome-shaped texture and stand out by their brightness and size. In photographs they look like large bright white spots 10-40 km in diameter, and sometimes more.
    Cloud formations with a diameter of about 100 km or more are a collection of individual cumulonimbus clouds in which the anvils have merged to form a continuous cover of cirrus clouds.
    A plume of cirrus clouds associated with cumulonimbus clouds is, according to K. O. Erickson, observed in the presence of vertical wind shear. In this case, the windward edge of the cumulonimbus cloud is sharp, while the leeward edge, where the cirrus clouds are carried off, is blurred. The cirrus plume extends in the direction of the wind at cloud level. Thus, from the image of cumulonimbus clouds one can determine the direction of air flow in the upper troposphere (at the level of the cirrus clouds), which also to some extent characterizes the movement of the cumulonimbus clouds themselves.
    The presence of cumulonimbus clouds in a TV image is a good indicator for the forecast of thunderstorms, showers and squally winds in the area for which satellite information was obtained.
    Under certain TV imaging conditions it is not always possible to recognize individual cloud forms correctly, especially since in the operational practice of using satellite images this stage of interpretation is an intermediate one. These and other considerations led to the creation of a generalized method for interpreting TV cloud images. The essence of the method proposed by I.P. Vetlov lies mainly not in recognizing the individual cloud forms recorded in ground-based observations, but in identifying typical cloud systems associated with characteristic atmospheric processes. This approach to interpretation is based on the principle that each individual cloud system is determined by a certain form of circulation in the atmosphere. Interpreting cloud images in this way also simplifies the task of identifying ordinary cloud forms, which, once typical cloud systems have been identified, in many cases loses its independent significance from the standpoint of analysis and weather forecasting.

    USE OF OBSERVATIONAL DATA FROM METEOROLOGICAL SATELLITES IN SYNOPTIC ANALYSIS. Fronts.

    Extensive meteorological information regularly received from satellites finds wide application in synoptic practice. Composite maps of cloud cover, constructed from television images, are highly informative; they display the spatial structure and other characteristics of cloudiness. The cloud systems of various synoptic formations (fronts, cyclones, hurricanes, convergence zones, etc.) are so typical that the use of cloud cover images has become an indispensable tool for forecasting large-scale atmospheric processes.
    The initial stage of the development of satellite meteorology, associated with the use of cloud cover images in weather forecasting practice, was characterized by the predominance of qualitative (synoptic) methods of analyzing the data obtained. Research carried out in recent years indicates great possibilities for using satellite meteorological information within the framework of modern numerical weather forecasting. In particular, the use of outgoing radiation data in various spectral regions makes it possible to obtain quantitative information about temperature, density, air humidity and ozone content.
    The real possibility of solving inverse problems of satellite meteorology puts on the agenda the problem of the optimal combination of conventional and satellite means of meteorological observations. If, for example, satellite measurements of the vertical profile of air temperature anywhere on the globe become completely reliable, this will eliminate the need for the massive use of radiosondes as the main means of temperature sounding of the atmosphere.
    The prospects for obtaining meteorological information in quantitative form using satellites do not in any way reduce the relevance of using and improving methods of qualitative analysis of images of the Earth from space. On the contrary, research in recent years has opened up new possibilities here, consisting in the use of images to determine various characteristics of the underlying surface.

    USING CLOUD DATA TO ASSESS SYNOPTIC POSITION

    When analyzing synoptic maps and assessing the nature of atmospheric processes, the results of observations from meteorological satellites have recently been used more and more, alongside observational data from ground stations. By examining a successive series of photographs of the earth's surface, it is possible to identify certain structural characteristics of cloud fields. With the help of satellite equipment capable of photographing large areas, it is possible to obtain a general picture of cloud cover on a global scale. A composite map of cloud cover, constructed from photographs of a large area, describes the nature of the atmospheric processes occurring over it and can be of practical importance. Such maps, which provide a continuous picture of cloud distribution, are highly visual, essential for synoptic analysis, and greatly help to interpret the data of a discrete network of meteorological observations more correctly and to identify large-scale atmospheric disturbances associated with sudden changes in weather conditions.

    MAIN STRUCTURAL FEATURES OF CLOUD FIELDS AND THEIR RELATIONSHIP WITH SYNOPTIC PROCESSES

    The structure of the image depends mainly on the brightness contrast of the observed clouds, which exceeds the contrast sensitivity threshold of the television system. Changes in shooting conditions (lighting, shutter speed, aperture, etc.) have little effect on the structure of the image; only its contrast changes.

    Synoptic-scale cloud systems - the macrostructure - characterize the geometric features of large areas of the image, created by hundreds of elements with dimensions approximately two to three orders of magnitude larger than the resolution of the system with which the television (TV) or infrared (IR) images were obtained. This structure of cloud images provides a horizontally continuous picture of cloud distribution and is more visual than conventional cloud data plotted on a synoptic map. For areas with a dense network of stations, synoptic-scale images of cloud fields help the weather forecaster to systematize atmospheric processes more intelligently. With a relatively sparse network of meteorological stations, when individual sections of the synoptic map are poorly supported by instrumental observations, the macrostructure of cloud images serves as the main information in the analysis and preparation of weather forecasts. The macrostructure can have various mesoscale and fine-scale characteristics (mesostructure and texture), which expands the amount of information about a particular cloud field.

    CLOUD COVERAGE OF ATMOSPHERIC FRONTS

    Cloud systems of atmospheric fronts are depicted on TV and IR images in the form of light stripes of varying width, brightness and structure.
    The widest and brightest cloud bands correspond to active fronts with intense upward movements of moist air, narrower and less light cloud bands correspond to inactive fronts, in the area of ​​which upward movements do not develop.
    Frontal bands usually consist of multilayer cloudiness, which is a combination of various types. Cloud types are recognized both by the features characteristic of each type of cloud separately and by the nature of the boundaries of the cloud band. For example, the presence of cirrus clouds can be judged by "sweeps" of light gray tone, as well as by short transverse streaks often observed along the boundary of the frontal cloudiness. "Ragged" (uneven) boundaries are characteristic of cumulus and cumulonimbus clouds. Smooth edges indicate a predominance of stratiform clouds. There are usually at least two types of clouds in a frontal band. The activity of atmospheric fronts decreases from the center of the cyclone to the periphery, and this change in their activity is revealed on TV images by a decrease in the width of the band and the amount of cloudiness. Frontal cloud systems are represented on nephanalysis images and maps in most cases as cloud bands ranging in width from one to several hundred kilometers. Since cloud bands usually consist of clouds of various forms, all cloud forms are often plotted on nephanalysis maps in the contour where frontal cloudiness is indicated. However, in a number of cases it is possible to trace a predominance of cumuliform clouds in the cold front zone and stratiform clouds in the warm front zone.
    Analysis of cloud maps, weather maps and baric topography showed that frontal sections are often traced in the cloud field much longer than in the field of other elements. At the same time, the appearance of the clouds and the configuration of the cloud band often make it possible to determine the type of front in the image. This circumstance can serve as the basis for refining the analysis of the synoptic position in a specific area.

    Cold front cloud cover.

    Cloud bands of cold fronts have a clear structure in the form of a bright strip 200-300 km wide and more than 1000 km long, very often interspersed with round bright spots with sharply defined edges. The bands are formed from nimbostratus clouds and isolated clusters of cumulonimbus clouds. Usually they have a uniform tone of the image, against the background of which interspersed round bright spots of clouds with vertical development can be clearly seen. Active cold fronts are characterized by an image of a continuous, well-developed cloud band. For fronts with reduced activity, the cloud band is usually less wide, with isolated breaks in the contour.


    Cold Front (CF)

    Very often, the cloud bands of a cold front are separated by cloud-free zones from pre-frontal and post-frontal clouds. In images taken during the warm period of the year, ridges of cumulonimbus clouds oriented parallel to the front are often visible ahead of the frontal zone, at some distance from the main cloud band. Behind the front, accumulations of cumuliform clouds can sometimes be observed, organized into ridges, cells or ensembles without a definite structure. Such clouds are the result of convection as cold air moves over a warm underlying surface. The cloud zones of cold fronts are characterized by a noticeable cyclonic curvature (deflection towards the warm air).
    Research carried out by T.P. Popova shows that the line of the cold front at the Earth's surface is almost always located within the cloud strip. In cases where the cloud zone is dominated by stratiform clouds, the surface front line is located near its right (front) edge; when cumulus clouds predominate, the front line is located at the left (rear) edge of the cloud strip. Noteworthy is the clarity of the boundaries of these stripes.

    Warm front cloudiness.

    A warm front, as a rule, is clearly visible in the cloud field only in the initial stages of cyclone development, so recognizing these fronts in images is much more difficult than cold fronts. The image of warm front clouds on TV images is characterized by a wide variety of sizes and patterns of cloud cover.
    According to the research of E.P. Dombkovskaya, the most typical for a warm front is a cloud zone with a characteristic strip structure, 300-500 km wide and from several hundred to a thousand kilometers long, and long cloud stripes are rare on warm fronts.
    The cloud band corresponding to the warm front merges with the clouds of the cold front during the process of occlusion. Typically, the cloud zone on a warm front is blurred and only a slight protrusion at the occlusion point is visible in the images, corresponding to the pre-existing cloud band of the warm front. At the same time, the cold front remains very clearly defined.
    The cloud zone of a warm front has an anticyclonic curvature and bends towards the cold air.
    The cloud band of this front is formed from homogeneous nimbostratus clouds. In images taken in summer, isolated formations of cumulonimbus clouds can often be observed. The width of the frontal cloud band is not the same along its entire length: where a wave and cyclone develop it widens, while in the area of pressure ridges it narrows and washes out. Diffuse warm fronts are sometimes visible in images as bands of cirrus clouds. As Popova notes, a distinctive feature of warm front cloudiness is its sharp, often rounded rear boundary and jagged forward boundary, where individual cloud banks and elongated gaps lie parallel to the main cloud band.
    In front of the cloudy zone of a warm front, small, randomly scattered cumulus clouds can be observed in the cold air; behind the front, convection clouds can be observed in the warm air. These clouds are characteristic mainly of summer; they indicate instability and high moisture content of warm air. Research shows that the position of the cloud band of a warm front usually agrees well with the position of the surface trough. In this case, the front line at the Earth's surface should be drawn near the inner edge of the cloud strip.

    Cloud front occlusion.

    The cloud zone corresponding to an occlusion front is a dense (bright) cloud band about 300 km wide. It usually has a spiral shape, resembling a giant comma in appearance, the head of which lies at the center of the cyclonic circulation at cloud level. A cloud spiral is characterized by a sharply defined internal (rear) boundary; behind it there is a cloudless or slightly cloudy band, and at some distance from it cumuliform clouds can be seen in the form of ridges, convective cells or clusters without a clear structure. In contrast to the inner edge of the occlusion front's cloud band, the outer (forward) edge is more diffuse, often jagged. The cloud band in this case consists of individual cloud rolls alternating with gaps, both extending along the direction of the main cloud band.
    Research by T. P. Popova and L. S. Minina shows that the occlusion front line at the Earth’s surface is located within the cloud strip. If the cloud strip has a sharp internal boundary, then the front of the occlusion is in the rear part of the cloud spiral; if the inner boundary is more amorphous, the occlusion front at the Earth's surface shifts toward the central part of the cloud band. An occluded cloud system often transitions into a cold front cloud system without noticeable bifurcation into cold and warm front clouds. Sometimes the position of the occlusion point can be determined by a small protrusion on the right side of the cloud band. This ridge represents the remnants of the warm front clouds. A study of the cloud bands of occlusion fronts shows that on the synoptic map the cloud spiral of this front corresponds to the front part of the cyclone. Over time, the occlusion front can transform into a cold, warm, or stationary front. In this case, the cloud strip begins to acquire the characteristic features and configuration of the corresponding cloud systems.
    It has been established that in the free atmosphere the cloud band of the occlusion front coincides with the position of the axis of the thermal ridge in the lower half of the troposphere, and the axis of the pressure ridge at the 500 hPa level is often the leading boundary of cloud propagation. The zone of clearing and development of cumuliform clouds corresponds in the free atmosphere to an upper-level trough or cyclone and a center of cold. An example of the cloud system of an occluding cyclone's fronts is given. An arc-shaped cloud band formed from cumulus, cumulonimbus, and cirrus clouds corresponds to the cold front. A wide cloudless zone adjoins the front on the cold air side.

    Cloudiness of a stationary front.

    The cloud band of a stationary front usually does not have cyclonic or anticyclonic curvature. Its width is about 200-300 km, its structure is heterogeneous, with frequent clearings. The average extent of cloud bands of a stationary front is much greater than the extent of cloud spirals associated with fast-moving fronts.
    On a synoptic map, the line of the surface front most often coincides with the central part of the cloud strip. In cases where the front makes a slight forward movement, the front line at the Earth's surface shifts to the rear part of the cloud strip. Isobars on a synoptic map, as a rule, form a deformation field. In a free atmosphere, such cloud fields correspond to a low gradient field of isohypses.
    Examples of cloudiness of a stationary front are shown in the figure. The cloud strip of a stationary front with waves has a latitudinal direction, its width reaches 300-400 km. It is formed from stratus and cumulus clouds. At the top of the image there is cumulonimbus cloudiness. The presence of waves is indicated by thickening of the cloud band.

    Sharpness is one of the most important criteria of image quality, yet we often encounter a lack of it. The reasons may vary, but the main one is photographer error. In this chapter I will talk not about sharpness as such, but about the reasons for its absence and how to deal with them.

    Blur due to movement (shaking)

    The most important cause of blur is camera shake, that is, smearing of the picture because the photographer's hand trembled at the moment of shooting. The result of shake looks something like this:

    It's a pathetic sight, you'll agree. The main factors causing camera shake are listed below:

    1. Shooting in low light without a tripod or flash
    2. Shooting at a long focal length (with a strong “zoom in”)
    3. Shooting in motion, for example from a car window
    4. Shooting fast moving subjects

    If only one of these factors is present in the shooting conditions, it can almost always be dealt with. But if several are present at once, we are almost guaranteed to get a defective photograph.

    For the first two factors (handheld shooting in low light, shooting with a long focal length), the “safe shutter speed” rule applies.

    A safe shutter speed will most likely ensure that there is no shake. It depends on the focal length. Many sources give a simple formula from which you can calculate a "safe" shutter speed: divide one by the focal length. That is, at a focal length of 50 mm, a safe shutter speed is 1/50 of a second. All this is wonderful and simple, but the rule does not take into account that the camera may have a crop factor, which narrows the angle of view and, in effect, increases the focal length of the lens. A 50 mm lens on a 1.6 crop has an equivalent focal length of 80 mm. How do you calculate a safe shutter speed, say, for a focal length of 24 mm on a crop body? You can't do without a calculator! I offer a simple but effective method.

    We look at the lens focal length scale:

    At a focal length of 24 mm, the next mark on the scale corresponds to 35 mm. We calculate the safe shutter speed from it, first rounding the value up. Thus, a safe shutter speed for 24 mm on a 1.6 crop will be 1/40 of a second. Checking with a calculator: 24 mm × 1.6 = 38.4, which gives exactly the same answer, a safe shutter speed of 1/40 of a second!

    As the focal length increases, the safe shutter speed shortens proportionally. That is, for an equivalent focal length of 50 mm the safe shutter speed is 1/50 of a second, and for 300 mm it is 1/300 of a second. This explains why a telephoto lens without a stabilizer can be used handheld only on a sunny day. The sketch below puts the whole rule together.
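
    The rule fits in a few lines of Python. This is only a sketch of the rule of thumb above; the function name and the stabilizer_gain parameter are my own, and the factor of 2-3 for a stabilizer anticipates the next paragraph.

    def safe_shutter_speed(focal_length_mm, crop_factor=1.0, stabilizer_gain=1.0):
        # "Safe" handheld shutter speed in seconds: one divided by the
        # equivalent focal length, stretched by a stabilizer if present.
        return stabilizer_gain / (focal_length_mm * crop_factor)

    # 24 mm on a 1.6 crop: 1/38.4 s, rounded to roughly 1/40 s
    print(1.0 / safe_shutter_speed(24, crop_factor=1.6))        # 38.4
    # 300 mm with a stabilizer buying a factor of ~3: about 1/100 s
    print(1.0 / safe_shutter_speed(300, stabilizer_gain=3.0))   # 100.0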

    An image stabilizer (IS, VR, Antishake) makes life much easier, lengthening the safe shutter speed by a factor of 2-3. That is, a 300 mm telephoto lens with the stabilizer turned on produces mostly sharp photographs already at a shutter speed of 1/100 of a second.

    Of course, a lot still depends on the physical abilities of the photographer. Some people manage to get clear pictures at shutter speeds of 1/5 of a second without a tripod, while for others even 1/500 is not enough!

    Shooting from a car window means very bad conditions that should be avoided at all costs. Besides the fact that the shooting is often done through glass (which does not add sharpness), the composition in such photographs is almost always absent. It is purely documentary shooting; I have not seen a single artistic shot taken from the window of a moving car.

    Shooting a moving subject can be handled in two ways: either with a very short shutter speed, or with a longer shutter speed and panning.

    We know that there are two ways to shorten the shutter speed: opening the aperture and increasing the ISO sensitivity. To photograph fast-moving subjects (such as passing cars), you almost always need to do both. However, such a picture looks static: the car seems to be standing still. To convey movement, the panning technique is used.

    Photo by Sergei Tishin

    Notice how wonderfully the movement is conveyed in the photograph thanks to the characteristic blurring of the background. How is it done? To shoot a moving object with panning, you need to set up the camera as follows:

    1. Setting the burst mode
    2. Set the shutter priority mode (Tv, S) and fix the shutter speed at around 1/30-1/60 of a second. The longer the shutter speed, the more dynamic the background blur, but the greater the risk of blurring the subject itself. The faster the subject, the shorter the shutter speed should be.
    3. We switch autofocus to tracking mode.

    When the subject approaches us, we take it into the "crosshairs" and begin continuous shooting, trying to keep it in the center of the frame. Imagine that you are holding not a camera but a machine gun, and the subject is a low-flying enemy plane that needs to be "shot down" :) The higher the burst rate, the larger the series of photographs from which you can choose the most successful ones.

    Blur due to optics

    1. "Chronic" autofocus miss

    The phenomenon where autofocus consistently aims a little closer or a little farther than necessary is called front focus and back focus, respectively.

    Front/back focus most spoils the lives of those who like to shoot portraits and macro, as well as photographers involved in product photography. When shooting at close range, even a small autofocus miss significantly increases the reject rate. For example, we know that in a portrait the focus should be on the eyes. Even if the focus confirmation point blinked in the right place, with back focus the lens will actually focus on the ears, and with front focus on the tip of the nose (more serious misses are also possible).

    How to identify front/back focus? There are many options. First, use a special target to check autofocus. It looks like this:

    However, such a target is only available in photo stores and you can mainly use it only when purchasing a new lens (or camera). The beauty of the target is that it is very easy to determine not only the presence of an error, but also its exact value.

    Secondly, you can download a test chart for checking front/back focus and use it. This can be done on the website www.fotosav.ru.

    And thirdly, the easiest option: simply take a photo of a sheet of printed text, focusing on a specific line or heading. Open the aperture to the maximum possible value and set the ISO sensitivity so that the shutter speed is no longer than 1/100 of a second (to rule out shake). Shoot from approximately this angle:

    An arrow on a sheet of paper shows the line where autofocus was aimed. As you can see, in this case it worked correctly. To be sure, it is better to repeat the experiment 5 times.

    However, sometimes it happens that all these five times the device focuses in the wrong place.


    This is what front focus looks like


    And this is what back focus looks like

    What to do if front/back focus is detected?

    If front/back focus is detected when purchasing a lens, it is better to refuse such a copy and ask for another one - and so on until the test result suits you. But what if the defect is discovered after purchase?

    Some DSLRs now have an autofocus micro-adjustment function, with which you can correct front/back focus without leaving home. However, most cameras do not have this function, so you will have to take the camera, together with all its lenses, to a service center for adjustment. Yes, all your equipment! If a technician "tunes" the body to one specific lens, there is no guarantee that your other lenses will keep working as correctly as before.

    2. Curvature of the image field

    With most lenses, it is noticeable that the sharpness of the image in the corners of the photo differs from the sharpness in the center, and for the worse. This difference is especially pronounced at an open aperture. Let's look at the reason for this phenomenon.

    When we talked about depth of field (DOF) in earlier chapters, we meant the space in front of the lens, that is, in the surrounding environment. But do not forget that there is also a depth-of-field zone on the other side of the lens, where the shutter and the sensor are.

    Ideally, the sensor falls entirely within the internal depth-of-field zone, but the trouble is that the image field (marked with a dotted line in the figure) is not flat but slightly curved:

    It is because of this that image clarity in the corners of the frame is lower than in the center. The saddest thing is that this is an inherent defect of the lens that cannot be corrected by any adjustment. A similar drop in corner sharpness is known to be present in the first version of the Canon EF 24-70mm f/2.8L USM lens. In the second version of the lens this drawback was eliminated, but at the cost of a significant increase in its price.

    3. Spherical aberration

    Spherical aberration manifests itself in photography as a softening of the image: rays striking the edge of the lens are focused not on the sensor itself but slightly in front of it. Because of this, the image of a point turns into a blurred spot. This is especially noticeable at an open aperture; at medium apertures, spherical aberration disappears in most lenses.

    In portrait photography, spherical aberration gives an interesting effect in the blur zone: the blurred background has a characteristic "twisted" pattern (bokeh). The picture itself, even in the zone of sharpness, looks very soft.

    Note that the spots from light objects in the blur zone are not round but slightly elongated, resembling a cat's eyes in shape. This effect is sometimes called "cat's eyes".

    To reduce spherical aberration, aspherical elements are built into lenses.

    4. Diffraction blur

    From the previous paragraph it follows that to obtain the best sharpness you should stop down the aperture. The question is, down to what value - is there a reasonable limit?

    Let's look at an example. I took three pictures of text on a monitor screen with a Canon 50mm f/1.8 lens, from a distance of about 50 cm, at different apertures. Here is a 100% crop taken near the center of the frame:

    1. Aperture 1.8 (starting point). The sharpness is not great: at an open aperture, spherical aberrations are strong and soften the picture:

    2. Aperture 5.6 (intermediate position)

    You can see that the detail has become much better than at the wide-open aperture! The reason is the reduced effect of spherical aberration. So can we assume that the further we stop down, the better the detail? Let's try stopping the aperture all the way down!

    3. Aperture 22 (stopped all the way down)

    What happened? Why has the detail dropped so much? It turns out our conclusion was premature: we completely forgot about a phenomenon called diffraction.

    Diffraction is the property of a wave to change direction slightly when passing an obstacle. Light is nothing but an electromagnetic wave, and the obstacle is the edge of the diaphragm opening (the aperture). With the aperture wide open, diffraction barely manifests itself at all, but with the diaphragm stopped down the waves propagate something like this:

    It is clear that the image of a "perfectly sharp" point will then turn into a slightly blurred spot. It is diffraction that causes the drop in picture sharpness when the aperture is closed too far.

    For most APS-C DSLR lenses, the graph of detail versus aperture ratio looks something like this:

    On the vertical axis the scores are the same as at school: 2 is bad, 5 is excellent.

    It follows from the graph that maximum detail (in the zone of sharpness) is achieved at apertures from 5.6 to 11. At smaller aperture numbers the picture is spoiled by spherical aberration, at larger ones by diffraction. However, this does not mean that you should shoot everything at aperture 8. Often the difference in detail is not that significant, while interesting artistic effects can appear at open and closed apertures: with an open aperture there is a pleasant softness in portraits and good background blur, with a closed one there are characteristic stars around bright light sources.
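
    For those who want numbers, here is a rough sketch of the standard diffraction estimate: the Airy spot diameter is about 2.44 × wavelength × aperture number, and once it clearly exceeds the acceptable circle of confusion, detail suffers. The 550 nm wavelength and the 19 µm circle of confusion are assumed, typical values, not figures from this chapter.

    def airy_disk_diameter_um(f_number, wavelength_nm=550.0):
        # Approximate diameter of the diffraction (Airy) spot on the
        # sensor: d ~ 2.44 * lambda * N, in micrometres (green light).
        return 2.44 * (wavelength_nm / 1000.0) * f_number

    coc_um = 19.0  # assumed circle of confusion for an APS-C sensor
    for n in (1.8, 5.6, 11, 22):
        d = airy_disk_diameter_um(n)
        verdict = "diffraction-limited" if d > coc_um else "fine"
        print(f"f/{n}: Airy spot ~ {d:.1f} um -> {verdict}")  # only f/22 fails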

    Blur due to mirror clap

    As you know, the mirror of an SLR, when it flips up, gives the camera body a slight jolt, which under certain conditions can cause a slight loss of sharpness.

    To avoid this, most DSLRs have a "mirror lock-up" (preliminary mirror raising) function. Its essence is that you press the shutter button not once but twice: the first press raises the mirror (the optical viewfinder goes black), the second takes the picture.

    A very illustrative example is given in a short article on the website www.fotosav.ru, comparing two photographs taken with and without mirror lock-up.

    The left fragment is from a photo taken in normal mode, the right one with the mirror locked up.

    The test involved a rather old camera, a Canon EOS 5D, which has a really loud shutter: when it fires, you can clearly feel the vibration in your hands. The shutters of modern DSLRs are better damped, so the risk of this kind of blur is much lower. Some cameras have a "quiet" mode in which the shutter operates a little more slowly but vibrates far less, and the picture comes out clearer.

    Blur due to improper use of stabilizer

    A stabilizer is a device that reduces shake when shooting handheld. Sometimes, however, it can do harm.

    The instructions for a lens with a stabilizer almost always contain a warning: turn off the stabilizer when shooting from a tripod. This rule is often neglected, but in vain. Have you ever brought a microphone close to a speaker? The amplifier self-excites and the speakers begin to howl. The same thing happens with the stabilizer. It is designed to counteract the vibration caused by hand shake, but on a tripod there is none. However, the rotating gyroscopic elements of the stabilizer themselves cause a slight vibration, which is perceived as shake; the stabilizer tries to damp it, "swinging" itself more and more. As a result, the picture comes out fuzzy.

    There is an opinion that the stabilizer can reduce image sharpness during daytime handheld shooting. This may be true, but in my own experience I cannot recall a single case where a turned-on stabilizer noticeably spoiled sharpness at a short shutter speed. Still, the harmful effects of a stabilizer, for example in macro photography, are regularly written about on the Internet. The arguments are as follows:

    1. Reverse shake - the stabilizer reacts too strongly to slight camera shake and causes the image to shift in the opposite direction.
    2. A noticeable jolt when the stabilizer is turned on causes the photo to become blurry. The stabilizer turns on when we half-press the shutter button (to focus) and works until the shot is taken. If you immediately press the shutter button all the way, then, indeed, the stabilizer can cause blurring of the picture. If you give the stabilizer a second to “calm down,” the risk of getting a blurry picture is reduced. Much also depends on the lens. For example, in the Canon 75-300 IS USM the stabilizer turns on with a clearly audible knock and causes noticeable vibration, while in the Canon 24-105L it is almost silent.
    3. Microvibration from gyroscopes reduces picture clarity. Again, a lot depends on the lens - in cheap optics (Canon 75-300), vibration is indeed noticeable. The Canon 24-105L has virtually no vibration.

    Personally, I prefer to turn off the stabilizer in cases where it is not needed, but mainly to reduce power consumption. The stabilizer really helps in cases where, when shooting handheld, the shutter speed becomes longer than safe and at the same time you don’t want to increase the ISO sensitivity. In other cases it is useless.

    The stabilizer is also useless when shooting moving subjects. It only compensates for the vibrations transmitted to the camera from your hands; it cannot slow down a running person caught in the frame. The stabilizer helps only when shooting static scenes. No matter how many stops of exposure the stabilizer buys you, at a long shutter speed moving subjects will inevitably come out blurred.

    Incorrect image settings

    Not only the lens but also the camera itself, or more precisely its settings, may be to blame for visually blurry images. The camera's image settings include a sharpness item, which determines the degree of contrast at the boundaries of objects in the photograph.

    This setting is only relevant when shooting in JPEG. If you prefer the RAW format, the desired level of software sharpening can be set in the program used to convert from RAW to JPEG.

    As software sharpening is increased, an unpleasant surprise may await us: an increase in the noise level. Look at two fragments of the same photograph, shown at 100% scale.

    The first picture uses the standard sharpness settings; in the second, in-camera sharpening is turned up to maximum. The second picture is perceived as sharper, but it is also noisier.
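
    If you shoot RAW, the same trade-off is easy to reproduce on a computer. Below is a minimal sketch using Pillow's unsharp-mask filter; the file names and parameter values are arbitrary examples, not the camera's actual algorithm.

    from PIL import Image, ImageFilter

    img = Image.open("photo.jpg")  # any test photograph

    # Moderate sharpening, roughly a "standard" setting
    standard = img.filter(ImageFilter.UnsharpMask(radius=2, percent=100, threshold=3))

    # Aggressive sharpening: edge contrast rises, but threshold=0 also
    # amplifies the faint brightness jitter of noise
    aggressive = img.filter(ImageFilter.UnsharpMask(radius=2, percent=400, threshold=0))

    standard.save("sharp_standard.jpg")
    aggressive.save("sharp_max.jpg")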

    Test tasks

    1. Learn to calculate a safe shutter speed.

    2. Try taking a photo from a tripod with a long shutter speed with the stabilizer turned on and off, compare the results and draw conclusions.

    3. Find the mirror lock-up function in your camera's manual and learn how to use it.

    4. Try shooting the same scene at different aperture values (from a tripod). Find out at which aperture your lens produces the sharpest image.

    5. Try shooting in daylight with the stabilizer turned on and off (in the wide-angle position). Draw a conclusion regarding the advisability of using a stabilizer in good lighting and a short focal length.

    HISTOGRAM CONVERSION METHOD

    At the first stage we consider an approach based on the histogram transformation method. This approach is appropriate when the observed image is subject to the distorting influence of a translucent aerosol formation and, in addition, the histogram of the brightness distribution of this section of video data under good visibility conditions is known. The latter can be replaced by the histogram of a neighboring image area if it is texturally equivalent to the area being restored and is not obscured in this image. Note that the image histogram, as an averaged statistical characteristic, is more stable than a specific realization of the observations. Given the resolution of the AVHRR instrument, in which a 1x1 km area of the PPZ (underlying surface) maps to one pixel of video data, the model of the influence of turbidity on the surface image takes the mathematical form of a convolution operator. The point spread function for stratified scattering layers not adjacent to the reflecting surface has a delta component and slowly decaying extended tails, and is unknown to us.

    Let's try to describe this situation using histograms. We will assume that ideal conditions for observing a certain area of the Earth's surface produce a distribution of radio brightness described by one histogram, and that the influence of translucent fog distorts it, so that the observed brightness distribution shows a reduced dynamic range and a shift of the domain of the video data. First, for simplicity of presentation, we will assume that the brightnesses x and y are continuous quantities on [0, 1]. We will describe the radio brightness distribution of the clouded image by a probability density p(x), and the distribution of the ideal (reference) image by a density q(y). To restore the image, we will use brightness transformations of the form

        y = T(x),

    where x are the brightness values of the clouded image and y are those of the clear image.

    We will consider the class of restoring transformations T(x) that are single-valued and strictly monotone on [0, 1], so that the inverse transformation T^{-1}(y) is also strictly monotone on [0, 1]. The monotonicity condition preserves the order of the transition from black to white in the brightness scale of the reconstructed image.

    Since the quantities x and y are functionally related, their probability densities are related as follows:

        q(y) = p(T^{-1}(y)) |dT^{-1}(y)/dy|,

    where T^{-1}(·) is the inverse transformation.

    To find the transformation, consider the following two-step identification procedure. Let us use the frequency-equalizing property of the cumulative distribution function, interpreted as a transformation, namely

        s = F(x) = ∫_0^x p(t) dt,                      (3.3)

    where F(x) is the cumulative distribution function of the clouded image, and the value s is distributed uniformly on the interval [0, 1]. On the other hand, by analogy with (3.3) we have

        s = G(y) = ∫_0^y q(t) dt,

    where G(y) is the cumulative distribution function of the reference image. Equating the two expressions, we get

        y = T(x) = G^{-1}(F(x)),                       (3.4)

    where G^{-1}(·) is the inverse transformation.

    Thus, passing at the first stage to a uniform brightness distribution according to formula (3.3), and at the second stage inverting the transformation G(y), we obtain the desired brightness distribution and the expression (3.4) for the corrective transformation.

    Now consider the discrete version of transformation (3.4). Let X be a fragment of the digitized image (not necessarily rectangular) and N the number of pixels in this fragment. We assume that this fragment is subject to the distorting influence of the atmosphere, and that a second fragment of digitized data, taken under "good" visibility conditions, is available; this fragment allows the reference histogram to be reconstructed.

    When the brightness levels take discrete values, expression (3.3) has the following tabular form:

        s_k = F(x_k) = (1/N) Σ_{j=0..k} n_j,   k = 0, 1, ..., L-1,        (3.5)

    where L is the number of discrete brightness levels and n_j is the number of pixels, out of the total N, that have level x_j in the discrete image.

    Accordingly, the discrete form of expression (3.4) is given by the table

        s_k = G(y_k) = (1/M) Σ_{j=0..k} m_j,   k = 0, 1, ..., L-1,        (3.6)

    where m_j is the number of pixels, out of the total M in the reference fragment, that have level y_j. The inversion of such a tabular function is achieved by swapping its input and output; together with (3.5), it can be used to correct radio brightness by the histogram transformation method.
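
    A compact NumPy sketch of this tabular procedure might look as follows; it assumes 8-bit fragments passed as integer arrays, and the function name is mine.

    import numpy as np

    def match_histogram(clouded, reference, levels=256):
        # (3.5): empirical CDF of the clouded fragment
        f = np.cumsum(np.bincount(clouded.ravel(), minlength=levels)) / clouded.size
        # (3.6): empirical CDF of the reference ("clear") fragment
        g = np.cumsum(np.bincount(reference.ravel(), minlength=levels)) / reference.size
        # Invert G as a table: the smallest level y whose G(y) reaches F(x)
        lut = np.searchsorted(g, f).clip(0, levels - 1).astype(np.uint8)
        # Apply the corrective transformation y = G^{-1}(F(x)) pixel-wise
        return lut[clouded]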

    IMAGE RESTORATION BASED ON REGRESSION EQUATIONS FOR PREDICTING RANDOM RADIO BRIGHTNESS FIELDS.

    Now let us consider an approach to video data recovery based on a regression dependence. We will describe the reconstructed values of the predicted field by a random variable y, and the radio brightness of the fields serving as sources of predictive information by a random vector x = (x_1, ..., x_m)^T ∈ R^m, where R^m is the m-dimensional Euclidean space, x_i is the radio brightness of the i-th channel of the AVHRR instrument, m = 5, and T is the transposition sign. The relationship between the predicted variable and the vector x will be described by a regression functional of the form

        y(x) = E[y | x],                               (3.7)

    where E[·] is the mathematical expectation operator. If the corresponding probability densities of the random variables x and y exist, then taking (3.7) into account we have

        y(x) = ∫ y p(y | x) dy = ∫ y p(x, y) dy / p(x),          (3.8)

    where p(x, y) is the joint probability density of the random vector x and the variable y, p(x) is the probability density of the random vector, and p(y | x) is the conditional density of y given x. If we have at our disposal a sample {(x_j, y_j), j = 1, ..., n} of pairwise independent, identically distributed observations, where n is the number of control samples in the test section, it is natural to compute expression (3.8) using nonparametric estimates of the unknown distributions from the sample data; then

        ŷ(x) = [ Σ_{j=1..n} y_j K(‖x − x_j‖ / h) ] / [ Σ_{j=1..n} K(‖x − x_j‖ / h) ],     (3.9)

    where h is the window width (a smoothing or scale parameter) of the kernel function K(·). As K(u) one can take the Epanechnikov kernel

        K(u) = 0.75 (1 − u²) I(|u| ≤ 1),

    where I is an indicator function. The problem arises of estimating h for a specific sample of observations. To estimate h we use the sliding control method, which consists in constructing a modified regression estimate in which the j-th observation is omitted in turn; this observation at the point x_j must then be reconstructed as well as possible from all the other observations entering equation (3.9). The quality criterion for h measures the ability to predict the omitted values from the remaining subsamples:

        W(h) = Σ_{j=1..n} w_j (y_j − ŷ_{−j}(x_j))² → min over h,          (3.10)

    where w_j is a weighting function, which in the simplest case may be omitted (set equal to unity). The optimization problem (3.10) with respect to the parameter h is solved numerically by the search adaptation method. Once the parameter h in expression (3.9) has been fixed, the regression equation can be used to reconstruct, from the observed data, the values of a video data fragment obscured by clouds.
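
    A small sketch of estimator (3.9) with the Epanechnikov kernel and the sliding-control criterion (3.10) is given below. For simplicity, h is chosen here by a grid search rather than by the search adaptation method mentioned above, and the weighting function is set to unity.

    import numpy as np

    def epanechnikov(u):
        # K(u) = 0.75 * (1 - u^2) for |u| <= 1, zero otherwise
        return 0.75 * (1.0 - u ** 2) * (np.abs(u) <= 1.0)

    def nw_predict(x, X, y, h):
        # Nadaraya-Watson estimate (3.9) at the point x
        w = epanechnikov(np.linalg.norm(X - x, axis=1) / h)
        return w @ y / w.sum() if w.sum() > 0 else y.mean()

    def sliding_control(X, y, h):
        # Leave-one-out criterion (3.10) with unit weights
        return np.mean([
            (y[j] - nw_predict(X[j], np.delete(X, j, 0), np.delete(y, j), h)) ** 2
            for j in range(len(y))
        ])

    def select_bandwidth(X, y, grid):
        # Pick the h from the grid that minimizes (3.10)
        return min(grid, key=lambda h: sliding_control(X, y, h))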

    EXAMPLES OF CORRECTION AND RESTORATION OF PPZ IMAGES

    When imaging the PPZ in autumn and spring, the following situation is often observed. In the 1st and 2nd spectral channels of the AVHRR instrument, translucent clouding of some sections of the video data is noted, while in the 3rd, 4th and 5th channels we see complete screening of these image fragments by thermal anomalies. A prerequisite for using the developed approaches is the principle of similarity. At the first stage of restoring such images, we correct the translucent areas using the histogram transformation method. To do this, we select two texturally homogeneous fragments of the image, one of which is "clean" while the other is clouded and subject to correction. We estimate the histograms of both sections and form the dependence (3.5), (3.6), from which we correct the clouded fragment. The result of the correction is shown in Fig. 3.2a. The quality of the resulting image can be assessed by how closely the histogram of the corrected image matches the reference histogram (Fig. 3.1a, Fig. 3.1c). One should take into account the line structure of the latter histogram, associated with the discreteness of the brightness of the transformed image. Regression dependences (3.9) were then fitted on a texturally similar unclouded area. The prediction error for the spectral channels on a control fragment different from the training one was 3.4%.

Fig. 3.1. Histograms of image fragments: reference area (a); translucently clouded fragment (b); restored fragment (c).


Fig. 3.2. Correction of translucent haze by the histogram transformation method (channels 1 and 2) (a); screening thermal anomaly in the 4th (and 5th) channels (b); restoration of the screened area of the image in the 4th (and 5th) channels (c).

Fig. 3.2a-c thus shows the entire cycle of the two-stage correction and restoration procedure. Fig. 3.2a shows the histogram correction of translucent haze in the 1st (2nd) channel. Fig. 3.2b shows the screening thermal cloud in the 4th (and 5th) channel, which was translucent in the 1st channel. Finally, Fig. 3.2c shows the result of restoring the image of the 4th (5th) channel using the nonparametric regression equation. In the latter case it is difficult to assess the quality of the reconstruction, since the true distribution of radio brightnesses in the reconstructed area is unknown.

From the point of view of recognizing and analyzing objects in an image, the most informative features are not the brightness values of objects but the characteristics of their boundaries: the contours. In other words, the main information lies not in the brightness of individual areas but in their outlines. The task of contour extraction is to construct an image of the boundaries of objects and of the outlines of homogeneous areas.

    As a rule, the boundary of an object in a photograph is reflected by the difference in brightness between two relatively uniform areas. But the difference in brightness can also be caused by the texture of the object, shadows, highlights, changes in illumination, etc.

    We will call the contour of an image a set of its pixels in the vicinity of which an abrupt change in the brightness function is observed. Since in digital processing the image is represented as a function of integer arguments, the contours are represented by lines at least one pixel wide. If the original image, in addition to areas of constant brightness, contains areas with smoothly varying brightness, then the continuity of the contour lines is not guaranteed. On the other hand, if there is noise in the “piecewise constant” image, then “extra” contours can be detected at points that are not the boundaries of the regions.

    When developing contour extraction algorithms, it is necessary to take into account the specified features of the behavior of contour lines. Special additional processing of selected contours eliminates breaks and suppresses false contour lines.

The procedure for constructing a binary image of object boundaries usually consists of two sequential operations: contour extraction and threshold processing of the result.

The original image is subjected to linear or nonlinear processing that responds to changes in brightness. As a result, an image is formed whose brightness function differs significantly from zero only in areas of sharp brightness change. A contour object is then formed from this image by threshold processing. The threshold at the second stage should be chosen with the following considerations in mind: if the threshold is too high, breaks may appear in the contours and weak brightness changes may go undetected; if the threshold is too low, false contours may appear because of noise and the heterogeneity of areas.

Gradient-based edge detection. One of the simplest methods of boundary extraction is spatial differentiation of the brightness function. For a two-dimensional brightness function A(x, y), changes in the x and y directions are recorded by the partial derivatives ∂A(x, y)/∂x and ∂A(x, y)/∂y, which are proportional to the rates of brightness change in the corresponding directions.

The detection of brightness differences is illustrated in Fig. 3.3. It can be seen that the emphasis of contours perpendicular to the x-axis is provided by the derivative ∂A(x, y)/∂x (Fig. 3.3b), and of contours perpendicular to the y-axis by ∂A(x, y)/∂y (Fig. 3.3c).

In practical problems, it is necessary to detect contours of arbitrary direction. For this purpose one can use the modulus of the gradient of the brightness function, which is proportional to the maximum (over directions) rate of change of the brightness function at a given point and does not depend on the direction of the contour. The gradient modulus, unlike the partial derivatives, takes only non-negative values, so in the resulting image (Fig. 3.3d) the points corresponding to contours have an increased brightness level.

For digital images, the analogues of the partial derivatives and of the gradient modulus are finite-difference functions.

A practical example of boundary detection in a photograph is shown in Fig. 3.4. The original image (1) is monochromatic. Image (2) shows the result of computing the brightness gradient vector ∇A(x, y) = (∂A/∂x, ∂A/∂y). As can be seen in the figure, the gradient has a large length at points of large brightness difference. By keeping only the pixels whose gradient length exceeds a certain threshold, we obtain an image of the boundaries (3).
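A minimal sketch of this gradient scheme, assuming central finite differences for the partial derivatives and a fixed threshold on the gradient modulus:

```python
import numpy as np

def gradient_edges(img, thresh):
    """Binary edge map: pixels whose gradient modulus exceeds `thresh`."""
    gy, gx = np.gradient(img.astype(float))  # finite-difference dA/dy, dA/dx
    mag = np.hypot(gx, gy)                   # gradient modulus, direction-free
    return mag > thresh
```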


The disadvantage of this algorithm is that it misses boundaries with small brightness changes while including among the boundaries image details with large brightness variation (the chipmunk's fur). In a noisy image the map of boundary points is contaminated by noise, since the algorithm does not take into account that boundary points correspond not merely to brightness differences but to brightness differences between relatively monotonous areas.

To reduce the effect of this drawback, the image is first subjected to Gaussian smoothing. Under smoothing, small unimportant details are blurred faster than the differences between areas. The result of the operation can be seen in image (4). However, clearly defined boundaries are at the same time blurred into thick lines.

The brightness gradient at each point is characterized by its length and direction. So far, only the vector length has been used in the search for boundary points. The direction of the gradient is the direction of maximum increase of the function, which makes it possible to use the non-maxima suppression procedure. In this procedure, for each point a segment several pixels long is considered, oriented along the direction of the gradient and centered on the pixel in question. A pixel is considered maximal if and only if its gradient length is the largest among the gradient lengths of all pixels in the segment. All maximal pixels with gradient lengths greater than a certain threshold can be declared boundary pixels. Since the brightness gradient at each point is perpendicular to the boundary, no thick lines remain after non-maxima suppression: on each cross-section of a thick line there is exactly one pixel with the maximum gradient length.
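A straightforward (deliberately unoptimized) sketch of non-maxima suppression under these definitions; the segment radius and the rounding of fractional coordinates to the nearest pixel are implementation assumptions:

```python
import numpy as np

def suppress_nonmaxima(mag, gx, gy, radius=1):
    """Zero every pixel whose gradient length is not maximal on a short
    segment oriented along the gradient and centered on the pixel."""
    h, w = mag.shape
    out = np.zeros_like(mag)
    norm = np.maximum(mag, 1e-12)
    ux, uy = gx / norm, gy / norm            # unit vector along the gradient
    for y in range(h):
        for x in range(w):
            keep = mag[y, x] > 0
            for t in range(-radius, radius + 1):
                if t == 0:
                    continue
                ny = int(round(y + t * uy[y, x]))
                nx = int(round(x + t * ux[y, x]))
                if 0 <= ny < h and 0 <= nx < w and mag[ny, nx] > mag[y, x]:
                    keep = False
                    break
            if keep:
                out[y, x] = mag[y, x]
    return out
```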

The perpendicularity of the brightness gradient to the boundary can be used to trace the boundary starting from some boundary pixel. Such tracing is used in the hysteresis filtering of maximal pixels. The idea of hysteresis filtering is that a long stable boundary contour is likely to contain pixels with a particularly large brightness difference, and, starting from such a pixel, the contour can be traced through boundary pixels with smaller brightness differences.

In hysteresis filtering, not one but two threshold values are introduced. The smaller one (t_low) is the minimum gradient length at which a pixel may be considered a boundary pixel. The larger one (t_high) is the minimum gradient length at which a pixel may initialize a contour. After a contour is initialized at a maximal pixel P with a gradient length greater than t_high, each adjacent maximal pixel Q is considered. If pixel Q has a gradient length greater than t_low and the angle between the vector PQ and the gradient at P is close to 90°, then Q is added to the contour and the process moves recursively to Q. The result of this procedure for the original image of Fig. 3.4 is shown in Fig. 3.5.
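A sketch of hysteresis filtering under these definitions. For simplicity it traces contours through all 8-neighbours above the lower threshold instead of checking the angle between PQ and the gradient at P; this simplification, like the names t_low and t_high, is an assumption of the example.

```python
import numpy as np
from collections import deque

def hysteresis(mag, t_low, t_high):
    """Trace contours: start at pixels with gradient length above t_high,
    continue through neighbouring pixels above t_low."""
    strong = mag >= t_high
    weak = mag >= t_low
    edges = np.zeros(mag.shape, dtype=bool)
    queue = deque(zip(*np.nonzero(strong)))   # contour initializers
    while queue:
        y, x = queue.popleft()
        if edges[y, x]:
            continue
        edges[y, x] = True
        for dy in (-1, 0, 1):                 # visit the 8-neighbourhood
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                if (0 <= ny < mag.shape[0] and 0 <= nx < mag.shape[1]
                        and weak[ny, nx] and not edges[ny, nx]):
                    queue.append((ny, nx))
    return edges
```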

    Thus, the algorithm for finding boundaries based on a gradient consists of sequentially applying the following operations:

    Gaussian smoothing filtering;

    Finding the brightness gradient in each pixel;

    Finding the maximum pixels;

    Hysteresis filtering of maximum pixels.

    This algorithm is called the Canny algorithm and is most often used to find boundaries.
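In practice these four steps are rarely implemented by hand; for example, OpenCV's cv2.Canny combines the gradient computation, non-maxima suppression and hysteresis thresholding. The file name and threshold values below are illustrative:

```python
import cv2

img = cv2.imread("photo.png", cv2.IMREAD_GRAYSCALE)  # illustrative file name
blurred = cv2.GaussianBlur(img, (5, 5), 1.4)  # step 1: Gaussian smoothing
edges = cv2.Canny(blurred, 50, 150)           # steps 2-4: gradient, NMS, hysteresis
cv2.imwrite("edges.png", edges)
```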

Finding boundaries based on the Laplacian. It is known that a necessary and sufficient condition for the first derivative of a function to have an extremum at a given point is that the second derivative equal zero at this point and have different signs on opposite sides of it.

In the two-dimensional case, the analogue of the second derivative is the Laplacian, a scalar operator:

$$\nabla^2 f = \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2}.$$

Finding boundaries in an image using the Laplacian can be done by analogy with the one-dimensional case: points at which the Laplacian equals zero and around which it has different signs are considered boundary points. Estimation of the Laplacian by linear filtering is likewise preceded by Gaussian smoothing to reduce the sensitivity of the algorithm to noise. Gaussian smoothing and the Laplacian estimate can be performed simultaneously by a single filter, so finding boundaries with such a filter is faster than with the Canny algorithm. The filter is used in systems where performance is important along with the quality of the result (which is usually inferior to that of the Canny algorithm). To reduce sensitivity to unimportant details, one can also exclude from the boundary points those whose gradient length is below a certain threshold (Fig. 3.6).
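A sketch of this scheme built on SciPy's Laplacian-of-Gaussian filter: zero crossings are marked where the filtered image changes sign between horizontal or vertical neighbours, and points with a small gradient modulus may be discarded. The sign-change test used here is one common convention, not the only possible one.

```python
import numpy as np
from scipy import ndimage

def log_edges(img, sigma=2.0, grad_thresh=0.0):
    """Boundary points as zero crossings of the Laplacian of Gaussian."""
    img = img.astype(float)
    log = ndimage.gaussian_laplace(img, sigma)  # smoothing + Laplacian at once
    zc = np.zeros(img.shape, dtype=bool)
    zc[:-1, :] |= (log[:-1, :] * log[1:, :]) < 0   # vertical sign change
    zc[:, :-1] |= (log[:, :-1] * log[:, 1:]) < 0   # horizontal sign change
    if grad_thresh > 0:                            # optional gradient gate
        gy, gx = np.gradient(img)
        zc &= np.hypot(gx, gy) > grad_thresh
    return zc
```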

The practical work "Processing of graphic information" contains 12 tasks on the corresponding topic (the tasks are suitable for 8th grade students studying with the Bosova teaching materials).

    Task 1. Working with graphic primitives.

    IMPORTANT!
To draw a graphic primitive (rectangle, rounded rectangle, ellipse), click the button with its image on the toolbar, move the mouse pointer into the work area, press the left mouse button and, without releasing it, drag the pointer diagonally while watching the image on the screen. To draw a square or a circle with the corresponding tools, hold down the Shift key.

To change the outline width of shapes drawn with the Rectangle, Ellipse and Rounded rectangle tools, first activate the Line tool (Home tab, Shapes group) and set the required width in its settings menu.

    1. Launch the graphics editor Paint.
2. Set the dimensions of the drawing area: width 1024 pixels, height 512 pixels (Home > Image > Resize).
3. Reproduce the pattern below using the Line, Rectangle, Rounded rectangle and Ellipse tools.

    4. Save the result of your work in a personal folder:
in the file p1.bmp as a 24-bit image;
in the file p2.bmp as a 256-color drawing;
in the file p3.bmp as a 16-color drawing;
in the file p4.bmp as a monochrome drawing;
in the file p5.jpeg;
in the file p5.gif.
5. Compare the sizes of the resulting files and the quality of the images saved in them.

    Task 2. Selecting and deleting fragments

    1. In a graphic editor, open the file Devices.bmp.

2. Leave only the input devices in the picture, removing everything unnecessary: first select the fragments with the Select tool (Home > Image > Select).
    3. Save the drawing in a personal folder under the name Input Devices.

    Task 3. Moving fragments

1. In the Paint graphic editor, open the file Skazka.bmp.

2. Using the Selection tool, select the rectangular transparent fragments one by one and move them so that the fairy-tale characters regain their true form.

    Task 4. Converting fragments

    1. In the Paint graphic editor, open the file Dragonfly.bmp.

2. Select the rectangular fragments one by one (with a transparent background), rotate them if necessary (the Rotate command in the Image menu), and move them so that you get an illustration for I. Krylov's fable "The Dragonfly and the Ant".
    3. Save the result of your work in a personal folder.

    Task 5. Design of complex objects and graphic primitives

    IMPORTANT!
It is advisable to draw complex objects in parts: draw each primitive separately, then select them one by one (the Selection tool, Transparent fragment mode) and drag each to the desired location.


1. Launch the Paint graphic editor.
2. Draw one of the following pictures:

    3. Save the result of your work in a personal folder under the name My_drawing.

    Task 6. Creating labels

    1. In the Paint graphic editor, open the file Panel.bmp.
2. Using the Text tool, label the tools of the Paint graphic editor.

    3. Save the drawing in a personal folder as a file Panel1.bmp.

    Task 7. Copying fragments

    1. Launch the Paint graphic editor.
    2. Using the following sequence of actions as a basis, draw a chessboard.

    3. Label the rows and columns of the chessboard.
    4. Save the drawing in a personal folder under the name Chess board.

    Task 8. Working with multiple files

Download the files for the work: Scheme.bmp, RAM.bmp, Winchester.bmp, Disk.bmp, Floppy Disk.bmp, Flash Drive.bmp.
    1. In the Paint graphic editor, open the Scheme.bmp file.
2. Illustrate the diagram by adding images of the corresponding devices from the files RAM.bmp, Winchester.bmp, Disk.bmp, Floppy Disk.bmp, Flash Drive.bmp. For convenience, open each of these files in a new window, copy the necessary images to the clipboard and paste them into the desired places on the diagram.

    3. Save the result in a personal folder under the name Scheme 1.

    Task 9. Getting a screen copy

1. Launch the Paint graphic editor, minimize its window and capture an image of this window (press the Alt+PrintScreen keys simultaneously).
2. Expand the Paint window to full screen, paste the captured image into the center of the work area (Home tab, Clipboard group, Paste button) and label the main interface elements.
    3. Save the result of your work in a personal folder under the name Paint.

    Task 10. Creating animation

1. In the Paint graphic editor, open the file Acrobat.bmp.
    2. Copy and mirror the existing fragment, combine the two halves and color the resulting acrobat figure. Save the resulting image in your personal folder as a file a1.gif.
    3. By copying, moving and deleting individual parts of the image, make changes to the acrobat figurine (for example, depict an acrobat with his arms down). Save the resulting image in your personal folder as a file a2.gif.

    4. Go to the website https://www.gifup.com/ and, following the instructions there, create an animation by repeating two frames multiple times.
    5. Save the result of your work in a personal folder.

Task 11. Artistic image processing

    1. Launch the Gimp graphics editor.
2. Open the file mamont.jpg.
    3. Apply various filters to the original image so that the result is close to what is shown in the figure below.

4. Save your results in the files mamont1.jpg, mamont2.jpg, mamont3.jpg and mamont4.jpg.

    Task 12. Scaling raster and vector images

    1. In the Paint graphic editor, create the following image:

2. Save your work in a personal folder, choosing the 24-bit drawing file type.
    3. Select any fragment of the picture. Zoom in and out on the selected fragment several times. Observe how scaling operations affect image quality.
4. Create the same drawing in the graphic editor OpenOffice.org Draw. Save your work in a personal folder, choosing the ODF Drawing file type.
    5. Select any fragment of the picture. Zoom in and out on the selected fragment several times. Observe how scaling operations affect image quality.
    6. Finish working with graphic editors.

Venus, the second planet from the Sun in the solar system, is slightly smaller than the Earth. The planet is surrounded by a dense atmosphere consisting almost entirely of carbon dioxide. The cloud cover enveloping the planet is made up of droplets of sulfuric acid. The surface is permanently hidden by dense cloud layers, so details of the landscape are almost invisible. The atmospheric pressure is 90 times higher than at the surface of the Earth, and the temperature is about 500 °C. At the level of the upper cloud layer, the atmosphere of Venus rotates in the same direction as the planet's surface but much faster, completing a revolution in four days. This unusual motion of the cloud cover is called superrotation, and no explanation for this mysterious phenomenon has yet been found.

The first radar maps showed that most of the surface of Venus is occupied by vast plains, above which rise large plateaus several kilometers high. The two main elevations are Ishtar Terra in the northern hemisphere and Aphrodite Terra near the equator. The American space probe Magellan transmitted to Earth many radar images indicating both impact structures formed by falling meteorites and volcanic activity in the relatively recent past. Many features of volcanic origin were discovered on the planet: lava flows, small domes 2-3 km across, large volcanic cones hundreds of kilometers across, and web-like "coronae": round or oval volcanic formations surrounded by ridges, depressions and radial lines.

    Surface of Venus.

Studies of Venus by space probes and radar have shown that its surface was formed relatively recently and consists mainly of flows of solidified lava; intense volcanic activity on the planet continues to this day. The American automatic station Magellan transmitted to Earth a radar image of a lava flow one kilometer wide and 7,700 km long. According to planetary scientists, the erupting lava consists of liquid sulfur. The structure of the surface of Venus differs significantly from that of the other planets of the solar system. Radar surveys have revealed complex patterns of intersecting mountain ranges and valleys called "tesserae," web-like formations from 50 to 230 kilometers long, intersecting lava flows, and lava-flooded meteorite craters up to 300 kilometers in diameter. The anomalous origin of Venus is indicated by its slow retrograde rotation (the planet makes one revolution around its axis in 243 days), the almost complete absence of a magnetic field, and an excess of infrared (thermal) radiation almost twice the calculated value. The surface of Venus is quite young and differs markedly from any landscape found on other planets or moons.

    R.A. Kerr writes in Science magazine: “Planetary geologists studying radar images from Magellan have discovered that they are faced with a mystery. When reading the geological clock telling how old the surface of Venus is, they found a planet at the end of its youth. But when they look directly at the surface, they see a newborn baby.”

I. Velikovsky, an American scientist and writer, argued that Venus originated from the substance of Jupiter, and some historical sources directly indicate that Venus was born from this planet. According to this hypothesis, it happened during the approach of a propeller-class neutron star (Typhon) to Jupiter: during the star's closest approach, part of the planet's crust and atmosphere was captured, and from this material Venus was formed.

Image of Venus (the "shooting" star). Codex Mendoza.

In the Indian epic "Mahabharata" it is said that the heavenly Surabhi "jumped out of his (the Creator's) mouth." Homer in the "Iliad" states: "Athena is the daughter of Zeus." Among the Pawnee Indians (Nebraska, USA) there is a legend that "Tirawa (Jupiter) gave most of his power to the Morning Star." Ptolemy believed: "Venus has the same power as Jupiter, and also has a similar nature to it."

The ancient Greeks claimed that Venus (Pallas Athena) jumped out of the head of Zeus (Jupiter). This is how the Greek myth describes the birth of Venus, which was accompanied by various cataclysms on Earth: "The skull of Zeus split, and a maiden in full armor jumped out of it and stood beside her parent, militantly shaking her spear.

    Olympus shook from the powerful jump, the lands lying around groaned, the sea trembled and boiled with waves, and snow fell on distant Rhodes, covering the tops of the mountains. It took the gods a long time to come to their senses.”

Fig. No. 97. The Birth of Pallas Athena.

In the more ancient Hittite mythology there is a description of the unusual birth of the deity Katsal, who was born by piercing the skull of Kumarbi. Only a small fragment of this ancient myth has been preserved on a clay tablet, and the image of the god Katsal is not identified with any celestial body. It can be assumed that this is the planet Venus.

Mysterious rock paintings have been discovered in the mountains of California. One of them shows a strange human figure from whose head a star jumped out. The zigzag line crossing the body (an anthropomorphic image of Jupiter) is probably the trajectory of Typhon's passage near this planet. In the lower right corner of the rock painting are crossed bones and a lizard, symbols of death and of the neutron star. This pictogram, carved on a rock in North America, is surprisingly reminiscent of the Greek myth of the emergence of Venus from the head of Zeus.

Fig. No. 98. The Birth of the Morning Star.

In the ancient Aztec Codex Borgia there is an image of an Indian looking through a telescope at an unusual star with its four largest satellites. To the right of the drawing of the planet is an outflowing stream with balls at the tips of its jets; this is how the Aztecs depicted the flow of water, precipitation or flood in their writings and drawings. Perhaps with this symbol the compiler of the codex depicted the capture of part of the atmosphere and crust of Jupiter by the neutron star. Below this fragment is a drawing of Venus, depicted in the form of a bird. The culprit of the cataclysm is indicated by the image of a dragon with two long tongues on the same page of the Aztec document.

Another illustration from the Codex Borgia shows an anthropomorphic creature with rabbit ears clinging to the chest of the deity of the planet Jupiter. In the middle of the picture is a planet with its satellites, from which a stream of matter erupts. At the tips of the jets there is a symbol in the form of a question mark (?); South American Indians used this symbol to denote an outflow of air, a whirlwind, smoke from a fire, or a phrase flying out of a person's mouth. Its modern analogue, used in cartoons and caricatures, is the speech cloud emanating from the mouth on which the words of a sentence are written. With this sign the Aztec artist tried to convey that a substance was ejected from the bowels of Jupiter. Interestingly, the Egyptians also depicted Set (the neutron star) as a small man with the face of a rabbit. On the head of the Aztec deity of the planet Jupiter there is an emblem in the form of a small snake; the symbol of the Egyptian god Horus is likewise the uraeus (a snake's head). Below the illustration is a kind of explanatory text for the picture: three icons indicating a neutron star and several symbols of Jupiter's satellites, one of which (the head of an eagle) is a symbol of Venus.

Page 42 of Codex Vaticanus B contains an illustration similar to the one in the Codex Borgia. The picture shows the scene of the "battle" of Jupiter with the Aztec "Typhon". In the upper right corner a planet is shown with matter erupting from its interior, from which Venus was subsequently formed.

The Aztec Codex Borgia contains more detailed information about the unusual origin of Venus. One of the pictures in the codex shows the emergence of the planet from the depths of Jupiter, which is depicted as a ball cut by a red line. In the center of the sphere is a head split into two halves, painted yellow and red. At the base of the ball lies the defeated deity of the planet. Above the column of captured material emanating from Jupiter, Venus is shown in the form of the quetzal bird. To the left and right of Jupiter are its satellites.

Fig. No. 102. The Birth of Venus. Codex Borgia.

    In the code " VindobonensisMexicanus 1" contains an illustration of the "home" of Jupiter, where the planet is shown as a disk with a cut out segment. Perhaps in this way the Indian artist tried to convey to his descendants information about the capture of part of Jupiter’s matter by a neutron star. On other pages of the same codex there are fragments with images of an ancient cosmic cataclysm, on which symbols of Jupiter and emblems of the planet are drawn with cut out segments. To the left of these drawings is a neutron star in the form of a black ball with the sign of the Serpent and a black circle with a smoothed swastika. This is probably what the star looked like before its approach to Jupiter and after the “celestial battle.”

Fig. No. 103. Codex Vindobonensis Mexicanus 1. The "home" of Jupiter (fragment).

Fig. No. 104. Codex Vindobonensis Mexicanus 1. Symbols of a rotating neutron star and of Jupiter (fragment).

On the Caguanes Peninsula (Cuba), in the Ramos Cave, Antonio Nunez Jimenez photographed mysterious pictograms, which he published in the work "Cuba: Rock Art". One of the pictograms (No. 8) is very reminiscent of the capture of matter from Jupiter by a neutron star. The cave also contains an image of three celestial bodies connected by bridges; one of them is probably the future planet Venus.

A similar rock carving, showing two celestial bodies connected by two lines, was discovered in the rocks of California. Evidently, it was in this form that people of the Stone Age observed this enormous catastrophe in the night sky.