Parametric Modeling

In general, parametric modeling is a history-based modeling method that enables design automation and creates product platforms for a product family, which are suitable for a product design strategy that is aimed to be family-based or platform-driven.

From: e-Design, 2015

Design and Analysis of Complex Structures

Feng Fu, in Design and Analysis of Tall and Complex Structures, 2018

6.6.1 What is Parametric Modeling

Parametric modeling is a modeling process in which the shape of the model geometry changes as soon as a dimension value is modified. Parametric modeling is implemented through computer code, such as a script, that defines the dimensions and the shape of the model. The model can be visualized in 3D draughting programs so that it resembles the attributes and real behavior of the original project. It is quite common for a parametric model to use feature-based modeling tools to manipulate the attributes of the model.

Parametric modeling was first introduced in Rhino, a 3D draughting program that evolved from AutoCAD. The key advantage of parametric modeling is that, once a 3D geometric model has been set up, the shape of the model geometry changes as soon as parameters such as dimensions or curvatures are modified; there is therefore no need to redraw the model whenever it needs a change. This saves a great deal of time for engineers, especially in the scheme design stage. Before the advent of parametric modeling, scheme design was not an easy task for designers, as the model is prone to frequent change, and changing the shape of a construction model was very difficult. In particular, parametric modeling allows the designer to modify the entire shape of the model, not just individual members. For example, to modify a roof structure, the designer conventionally had to change the length, the breadth, and the height; with parametric modeling, the designer needs to alter only one parameter, and the other two are adjusted automatically.
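
To make the idea concrete, the following minimal sketch (a hypothetical illustration, not tied to any particular CAD package) expresses dependent dimensions of a simple roof as relations of a single driving parameter, so that changing that one value updates the rest of the geometry:

```python
# Minimal illustration of parameter-driven geometry (hypothetical example, not
# code from any CAD package): dependent dimensions are expressed as relations
# of a single driving parameter, so one change updates the whole shape.

from dataclasses import dataclass

@dataclass
class ParametricRoof:
    span: float  # driving parameter (m)

    # Dependent dimensions are defined as relations, not as stored values.
    @property
    def rise(self) -> float:
        return 0.25 * self.span          # assumed rise-to-span ratio

    @property
    def rafter_length(self) -> float:
        return (self.rise**2 + (self.span / 2) ** 2) ** 0.5

roof = ParametricRoof(span=12.0)
print(roof.rise, roof.rafter_length)     # 3.0  6.708...

roof.span = 16.0                         # modify one parameter...
print(roof.rise, roof.rafter_length)     # ...dependent dimensions follow: 4.0  8.944...
```

In a real parametric modeller the same effect is achieved through dimension relations and feature rebuilds rather than explicit code, but the principle of a single driving parameter is the same.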


URL:

https://www.sciencedirect.com/science/article/pii/B978008101018100006X

Learning in Parametric Modeling: Basic Concepts and Directions

Sergios Theodoridis, in Machine Learning (Second Edition), 2020

3.1 Introduction

Parametric modeling is a theme that runs across the spine of this book. A number of chapters focus on different aspects of this important problem. This chapter provides basic definitions and concepts related to the task of learning when parametric models are mobilized to describe the available data.

As has already been pointed out in the introductory chapter, a large class of machine learning problems ends up being equivalent to a function estimation/approximation task. The function is "learned" during the learning/training phase by digging into the information that resides in the available training data set. This function relates the so-called input variables to the output variable(s). Once this functional relationship is established, one can in turn exploit it to predict the value(s) of the output(s) based on measurements of the respective input variables; these predictions can then be used to proceed to the decision-making phase.

In parametric modeling, the aforementioned functional dependence that relates the input to the output is defined via a set of parameters, whose number is fixed and a priori known. The values of the parameters are unknown and have to be estimated from the available input–output observations. In contrast to parametric methods, there are the so-called nonparametric methods. In such methods, parameters may still be involved in establishing the input–output relationship, yet their number is not fixed; it depends on the size of the data set and grows with the number of observations. Nonparametric methods will also be treated in this book (e.g., Chapters 11 and 13). However, the emphasis in the current chapter lies on parametric models.

There are two possible paths for dealing with the uncertainty imposed by the unknown values of the involved parameters. According to the first one, the parameters are treated as deterministic, nonrandom variables. The task of learning is to obtain estimates of their unknown values; for each parameter, a single value estimate is obtained. The other approach has a stronger statistical flavor. The unknown parameters are treated as random variables, and the task of learning is to infer the associated probability distributions. Once the distributions have been learned/inferred, one can use them to make predictions. Both approaches are introduced in the current chapter and are treated in more detail in various later chapters of the book.
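
As a deliberately simple illustration of the two paths, the sketch below estimates the single parameter of the model y = θx + noise, first as a deterministic point estimate (least squares) and then as a posterior distribution under a Gaussian prior. The model, the prior, and the noise level are assumptions chosen for the example, not quantities taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data from y = theta * x + noise (theta unknown to the learner).
theta_true, sigma_noise = 2.0, 0.5
x = rng.uniform(-1.0, 1.0, size=50)
y = theta_true * x + sigma_noise * rng.standard_normal(50)

# Path 1: deterministic point estimate (least squares / ML under Gaussian noise).
theta_ls = (x @ y) / (x @ x)

# Path 2: Bayesian inference with a Gaussian prior theta ~ N(0, sigma_prior^2);
# for this model the posterior is again Gaussian, with the mean and variance below.
sigma_prior = 1.0
post_var = 1.0 / (x @ x / sigma_noise**2 + 1.0 / sigma_prior**2)
post_mean = post_var * (x @ y) / sigma_noise**2

print(f"point estimate:     {theta_ls:.3f}")
print(f"posterior mean/std: {post_mean:.3f} +/- {post_var**0.5:.3f}")
```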

Two of the major machine learning tasks, namely regression and classification, are presented, and the main directions for dealing with these problems are exposed. Various issues related to the parameter estimation task, such as estimator efficiency, the bias–variance dilemma, overfitting, and the curse of dimensionality, are introduced and discussed. The chapter can also be considered a road map to the rest of the book. However, instead of just presenting the main ideas and directions in a rather "dry" way, we chose to work through the involved tasks by adopting simple models and techniques, so that the reader gets a better feeling for the topic. An effort was made to pay more attention to the scientific notions than to algebraic manipulations and mathematical details, which will, unavoidably, be used to a larger extent while "embroidering" the chapters to follow.

The least-squares (LS), maximum likelihood (ML), regularization, and Bayesian inference techniques are presented and discussed. An effort has been made to assist the reader in grasping an informative view of the big picture conveyed by the book. Thus, this chapter could also be used as an overview introduction to the parametric modeling task in the realm of machine learning.


URL:

https://www.sciencedirect.com/science/article/pii/B978012818803300012X

Additive manufacturing of a prosthetic limb

S. Summit, in Rapid Prototyping of Biomaterials, 2014

11.2.3 Parametrics

Parametric modeling performs the 'heavy lifting' in terms of geometry creation. In short, parametrics describes a process where minimal user input may drive global changes to a master template and, ultimately, to the end product. In the case of a prosthetic leg, for example, by entering Cartesian coordinates at four variable locations in the template, an entirely user-specific leg may be generated digitally.

These measurements are taken from landmarks of the body, using commonly accepted 'bony prominences', which can be seen or found by palpation. They are entered into a standard spreadsheet, which then updates a parametric 3D computer aided design (CAD) model with the dimensions necessary to individualize the template. While the template may have been created for a generic 170 cm overall body height, for example, the newly entered data will adjust the various pivot points to approximate the dimensions of the original limb. Additionally, body weight and activity level may be entered into the parametric model to further influence the structure. Each resulting leg is therefore entirely unique and closely tailored to the specific body and needs of the individual user. With the updated template and the mirrored morphology from the sound-side limb, enough data exist to generate the entirely user-specific geometry that will drive the creation of the new limb.
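
The toy sketch below illustrates only the mechanism of a measurement-driven template; the landmark names, the 170 cm reference height, and the scaling rule are assumptions made for the example and are not the author's actual template or workflow.

```python
# Toy illustration (not the author's actual pipeline): a master template, built
# for a generic 170 cm body height, is individualized by a handful of landmark
# coordinates that reposition its pivot points.

import numpy as np

TEMPLATE_HEIGHT_MM = 1700.0
template_pivots = {                      # pivot points (x, y, z) in mm, assumed
    "hip":   np.array([0.0, 0.0, 900.0]),
    "knee":  np.array([0.0, 0.0, 480.0]),
    "ankle": np.array([0.0, 0.0, 80.0]),
    "toe":   np.array([150.0, 0.0, 0.0]),
}

def individualize(measured_height_mm: float, landmarks: dict) -> dict:
    """Scale the template to the user's height, then snap any pivot for which
    a measured landmark coordinate was entered."""
    scale = measured_height_mm / TEMPLATE_HEIGHT_MM
    pivots = {name: p * scale for name, p in template_pivots.items()}
    pivots.update({name: np.asarray(p, dtype=float) for name, p in landmarks.items()})
    return pivots

# A few measured landmark coordinates (e.g. from palpated bony prominences).
user_pivots = individualize(
    measured_height_mm=1820.0,
    landmarks={"knee": [5.0, 2.0, 515.0], "ankle": [3.0, 1.0, 88.0]},
)
for name, p in user_pivots.items():
    print(name, p.round(1))
```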


URL:

https://www.sciencedirect.com/science/article/pii/B9780857095992500118

Digital Signal Processing Systems: Implementation Techniques

Michail K. Tsatsanis, in Control and Dynamic Systems, 1995

I INTRODUCTION

Parametric modeling of signals and systems provides a compact description of the underlying process and facilitates further processing of the data (e.g., in deconvolution or filtering problems). Most of the work in parametric system identification, however, relies on the stationarity assumption for the observed signal or, equivalently, on the time invariance (TI) of the underlying system. This assumption, although mathematically convenient, is not always valid for signals encountered in several applications.

Time-varying (TV) systems arise naturally in a variety of situations, including speech analysis [1] (due to the constantly changing vocal tract), seismic processing [2] (due to the earth's space-varying absorption), and array processing (due to moving sources). Other examples include time-delay estimation, echo cancellation, radar and sonar problems, and many more applications of system identification. The growing interest in time-frequency representations and TV spectral analysis (e.g., [3]) indicates the importance of nonstationary signal analysis.

A major application of system identification and deconvolution appears in digital transmission through channels with multipath effects or bandwidth constraints. Intersymbol interference (ISI) is present in this case, due to delayed copies of the transmitted signal arriving through the multiple paths, or due to the transmitter and receiver filters [4]. ISI is a major impeding factor in high-speed digital transmission, and its effects can be significantly more severe than those of additive noise. Thus, the use of some channel equalization procedure is essential for the recovery and detection of the transmitted symbols.

It is common practice in communication applications to assume that the intersymbol interference does not change throughout the transmission period, i.e., that the channel is time-invariant (TI). In many cases, however, ISI is induced by multipath effects from a changing environment, and thus a time-varying channel has to be considered. Examples of TV channels (called frequency-selective fading links) include over-the-horizon communications [4] (due to random changes in the ionosphere), the underwater acoustic channel [5] (due to local changes in the temperature and salinity of the ocean layers), and microwave links [6]. An equally important application appears in radio transmission to a mobile receiver, as for example in cellular telephony. In this case, the multipath effect from reflections at nearby buildings is constantly changing as the vehicle moves. In order to equalize these fading links, identification and deconvolution of TV systems and channels should be considered. This is the general topic of this work.

The most popular approach for TV channel estimation and equalization has been to employ an adaptive algorithm in order to track the channel's changing parameters [4, Ch. 6,7], [7]. Typically, a training sequence (known to the receiver) is transmitted at the beginning of the session so that the equalizer can adapt its parameters. After the training period, the equalizer usually switches to a decision-directed mode. In this mode, the previously detected symbols are assumed to be correct and are fed back to the adaptive algorithm to update the parameter estimates. In this way, the algorithm can follow the time variations, provided they are sufficiently slow in comparison with the algorithm's convergence time.
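
A minimal numerical sketch of this training/decision-directed mechanism is given below, using an LMS equalizer with an assumed BPSK alphabet, an assumed three-tap channel, and assumed step-size and length parameters; it is meant only to illustrate the mode switch described above, not any particular algorithm from the chapter.

```python
import numpy as np

rng = np.random.default_rng(1)

# BPSK symbols through an assumed short FIR channel with additive noise.
N, N_train = 4000, 500
symbols = rng.choice([-1.0, 1.0], size=N)
channel = np.array([1.0, 0.4, -0.2])
received = np.convolve(symbols, channel, mode="full")[:N]
received += 0.05 * rng.standard_normal(N)

L, mu = 11, 0.01                     # equalizer length and LMS step size (assumed)
w = np.zeros(L)
delay = L // 2                       # decision delay
errors = 0

for n in range(L, N):
    x = received[n - L + 1 : n + 1][::-1]       # regressor (most recent sample first)
    y = w @ x                                   # equalizer output
    d_train = symbols[n - delay]                # known symbol during training
    d = d_train if n < N_train else np.sign(y)  # decision-directed after training
    e = d - y
    w += mu * e * x                             # LMS update
    if n >= N_train:
        errors += int(np.sign(y) != d_train)

print("symbol error rate after training:", errors / (N - N_train))
```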

Despite their popularity and simplicity, adaptive algorithms are derived under the stationarity assumption and do not explicitly take into account the TV nature of the channel. Thus, they can only be used for slowly changing channels and systems. Moreover, in the decision feedback (DF) mode they suffer from runaway effects and divergence problems whenever a deep fade or rapid change occurs. For this reason they require periodic retraining.

In order to overcome these problems, further modeling of the channel's variations needs to be incorporated into the equalization procedure. A second, probabilistic approach would be to regard each TV system coefficient as a stochastic process. In this framework, the TV identification problem is equivalent to estimating these "hidden" processes. If the statistics of these processes are a priori known, Kalman filtering techniques can be employed to estimate the TV coefficients from input/output data [5]. It is not clear, however, how to estimate those statistics, since the TV coefficients are not directly observed. Moreover, this model, as well as simpler random walk models, relies on the random coefficient assumption, which is reasonable only when there are many randomly moving reflectors in a multipath channel (e.g., the ionospheric channel). It will not be valid for different setups, e.g., channels with occasional jumps or periodic variations.

A third approach, on which we will focus, is based on the expansion of each TV coefficient onto a set of basis sequences. If a combination of a small number of basis sequences can approximate each coefficient's time variation well, then the identification task is equivalent to the estimation of the parameters of this expansion, which do not depend on time. This approach transforms the problem into a time-invariant one and has been used for the estimation of TV-AR models in the context of speech analysis [1], [8]. However, the performance of these methods depends crucially on a wise choice of the basis set, which should capture the dynamics of the channel's variations in a parsimonious way. Several polynomial [9], [10] and prolate spheroidal [1] sequences have been proposed in the past, although without quantitative justification.

Here, we defer the discussion of the choice of the basis sequences to Section V, where the wavelet basis is advocated for the general case. We motivate the basis expansion approach, however, in Section II, where we show that the mobile radio multipath channel can be described by a periodically varying model. Each TV coefficient is given as a combination of complex exponentials. Thus, the use of an exponential basis in this framework proves the usefulness and applicability of the basis expansion approach.
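
The following sketch illustrates the basis expansion idea in the simplest possible setting: an assumed two-tap channel whose taps are combinations of a few known complex exponentials, so that the unknown expansion coefficients are time-invariant and can be recovered by ordinary least squares. The channel, basis frequencies, and noise level are all assumptions made for the illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

N, L, Q = 2000, 2, 3                       # samples, channel taps, basis size (assumed)
n = np.arange(N)

# Known basis sequences: complex exponentials (as in the mobile-radio case).
freqs = np.array([0.0, 0.01, -0.01])
F = np.exp(2j * np.pi * freqs[None, :] * n[:, None])      # shape (N, Q)

# True (unknown) expansion coefficients and the resulting TV channel taps.
c_true = (rng.standard_normal((L, Q)) + 1j * rng.standard_normal((L, Q))) / np.sqrt(2)
h = F @ c_true.T                                          # h[n, l] = sum_q c[l, q] f_q(n)

# Input/output data: y(n) = sum_l h_l(n) u(n - l) + noise.
u = rng.choice([-1.0, 1.0], size=N) + 0j
U = np.stack([np.roll(u, l) for l in range(L)], axis=1)   # U[n, l] = u(n - l); the
y = np.sum(h * U, axis=1) + 0.01 * rng.standard_normal(N) # wrap-around at n = 0 is negligible

# Basis expansion: regressors are the products f_q(n) u(n - l); the unknowns
# c[l, q] are time-invariant, so ordinary least squares applies.
Phi = (U[:, :, None] * F[:, None, :]).reshape(N, L * Q)
c_hat = np.linalg.lstsq(Phi, y, rcond=None)[0].reshape(L, Q)

print("max coefficient error:", np.abs(c_hat - c_true).max())
```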

Basis expansion ideas provide a valuable tool for extending RLS- and LMS-type adaptive algorithms to the rapidly varying systems case. Moreover, they offer a framework within which the more challenging problem of blind, or output-only, identification of the TV channel can be addressed.

Blind, or self-recovering, equalization procedures use output-only information and therefore do not require a training period. Thus, they are useful in applications where no training sequence is available [11], [12], [13], [14], [15]. Examples include broadcasting to many receivers (e.g., HDTV broadcasting), where the transmitter cannot be interrupted to initiate new training sessions, and multipoint data networks, where the cost of training each individual terminal point is prohibitive in terms of network management [16]. Blind methods (in the TI case) typically involve the minimization of criteria based on the signal's statistics in place of the mean square error [11], [15]. Thus, they do not lend themselves easily to TV extensions, since the statistics in this case vary with time and cannot be easily estimated.

In Section IV basis expansion ideas are employed to address the blind equalization problem for rapidly fading channels. Second- and fourth-order nonstationary moments and cumulants are used to recover the TV channel coefficients. Identifiability of the channel from these output statistics is shown and novel linear and nonlinear algorithms are proposed based on instantaneous approximations of the TV moments. The performance of these methods is studied and strong convergence of the proposed algorithm is shown.

In an effort to keep the presentation as general as possible, we do not refer to any specific basis throughout these derivations. However, the choice of an appropriate basis set is crucial for the success of this approach. While in certain cases the choice of the basis sequences is clearly dictated by the channel dynamics (e.g., the mobile radio channel), in the general case it is not a trivial problem [8].

Motivated by the success of multiresolution methods in signal and image compression [17], [18], in Section V we study the applicability of the wavelet basis for the parsimonious description of the TV system coefficients. Wavelet expansions offer a time-scale analysis of the signal and provide information about global as well as local behavior at different resolution depths. The promise of multiresolution expansions of the TV coefficients is that most of their energy will be concentrated in the low-resolution approximation, so that the detail signals can be discarded without affecting the quality of the approximation. In this way a parsimonious approximation to the channel's variations is obtained.

While this approach can provide an acceptable overall approximation to the system's trajectory, it will not be able to track rapid changes or transient fadings which usually manifest themselves in the detail signal. Thus, some important parts of the detail signal have to be kept as well, similarly to image coding procedures. We should be able to locally "zoom into" the details when necessary (e.g., in an abrupt change or transition) or in other words, select the appropriate resolution depth locally, depending on the variability of the system's coefficients.

In Section V we formulate this problem as a model selection problem and use information theoretic criteria [19] or hypothesis testing procedures [20] to automatically select the appropriate resolution depth. The proposed algorithm incorporates maximum likelihood, or simpler blind methods, and provides a general framework for the estimation of TV systems, where no specific a priori knowledge of the nature of the time variations is assumed.
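
The sketch below, which uses the PyWavelets package, illustrates the idea on an assumed tap trajectory containing an abrupt jump: the low-resolution approximation is kept, and a simple magnitude threshold retains the few detail coefficients that localize the jump. The threshold is only a stand-in for the model-selection criteria described above.

```python
import numpy as np
import pywt   # PyWavelets

# Assumed trajectory of one TV coefficient: slow variation plus an abrupt jump.
N = 512
n = np.arange(N)
h = 0.8 + 0.1 * np.sin(2 * np.pi * n / N)
h[300:] += 0.5                                     # transient / abrupt change

# Multiresolution decomposition of the trajectory.
coeffs = pywt.wavedec(h, "db4", level=4)           # [approx, detail_4, ..., detail_1]

# Keep the low-resolution approximation; keep only the few largest-magnitude
# detail coefficients (which localize the jump) and discard the rest.
kept = [coeffs[0]]
for d in coeffs[1:]:
    thr = 0.1 * np.abs(d).max()
    kept.append(np.where(np.abs(d) >= thr, d, 0.0))

h_approx = pywt.waverec(kept, "db4")[:N]
print("retained detail coefficients:", sum(int(np.count_nonzero(d)) for d in kept[1:]))
print("max reconstruction error:", np.abs(h - h_approx).max())
```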


URL:

https://www.sciencedirect.com/science/article/pii/S0090526706800447

Seakeeping

In The Maritime Engineering Reference Book, 2008

7.5.1 Selection of Wave Data

Chapter 1 gives information on the type of data available for sea conditions likely to be met in various parts of the world. Much of this is based on visual observations, both of waves and of winds. As such, they involve an element of subjective judgment and hence uncertainty. In particular, visual observations of wave periods are likely to be unreliable. Care is therefore necessary in the interpretation and analysis of wave data if sound design decisions are to be derived from them.

Figure 7.60. Seakeeping speed polar diagram.

The National Maritime Institute (now BMT Ltd) developed a method, known as wave climate synthesis, for obtaining reliable long-term wave data from indirect or inadequate source information. This approach can be used when instrumented wave measurements are not available. Essentially, relationships derived from corresponding sets of instrumented and observed data are used to improve the interpretation of observed data. Various sources of data, and methods of analysing them, are available, such as the Marine Information and Advisory Service of the Institute of Oceanographic Sciences and agencies of the World Meteorological Organization. Much of the data is stored on magnetic tape.

The NMI analysis used probabilistic methods based on parametric modelling of the joint probability of wave height and wind speed. Important outputs are:

(a) Wave height

When a large sample is available raw visual data provide reasonable probability distributions of wave height. However, comparisons of instrumented and visual data show that better distributions can be derived using best fit functional modelling to smooth the joint probability distributions of wave height and wind speed. This is illustrated in Figure 7.61 for OWS India in which the 'NMIMET Visual' curve has been so treated.

Figure 7.61. Visual and measured wave height probabilities.

Analysis of joint probabilities for wave height and wind speed from measured data leads to the relationship

(7.116)  Mean wave height $= H_r = \left[(aW_r^{\,n})^2 + H_2^2\right]^{1/2}$, where $W_r$ = wind speed.

Standard deviation of the scatter about the mean is

$\sigma_r = H_2\,(b + cW_r)$

The joint probability distribution is given by a gamma distribution

(7.117)  $P(H_s/H_r,\sigma_r) = \dfrac{q^{\,p+1}}{\Gamma(p+1)}\,H_s^{\,p}\,\exp(-qH_s)$

where $p = H_r^2/\sigma_r^2 - 1$, $q = H_r/\sigma_r^2$, and $H_s$ = significant wave height.

H₂, a, b, c, and n are the model parameters, for which, in the absence of more specific data, suitable standard values may be used. The following values (Table 7.10) have been recommended on the basis of early work using instrumental data from a selection of six stations. In quoting them it should be noted that they are subject to review in the light of more recent work and meanwhile should be regarded as valid only for use with measured wind speeds up to a limit of about 50 knots. It should also be noted that the numerical values cited are to be used with units of metres for wave height and knots for wind speed.

Table 7.10. Recommended values of the model parameters

                 H₂ (metres)    a        b       c        n
Open ocean       2.0            0.033    0.5     0.0125   1.46
Limited fetch    0.5            0.023    0.75    0.0188   1.38

The wave height probabilities follow from the wind speed probabilities using:

(7.118)  $P(H_s) = \sum_r P(H_s/H_r,\sigma_r)\times P(W_r)$

Wave directionality data can be obtained if the joint probability distributions of wave height and wind speed are augmented by corresponding joint probabilities of wave height and period and wind speed and direction.
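
To make Eqs. (7.116)–(7.118) concrete, the sketch below evaluates the conditional gamma model with the open-ocean parameters of Table 7.10 and combines it with an assumed, purely illustrative discrete wind-speed distribution; the wind-speed probabilities are not data from the text.

```python
import numpy as np
from scipy.stats import gamma

# Open-ocean parameters from Table 7.10 (wave heights in metres, wind speeds in knots).
H2, a, b, c, n_exp = 2.0, 0.033, 0.5, 0.0125, 1.46

def Hr(Wr):                        # mean wave height, Eq. (7.116)
    return np.sqrt((a * Wr**n_exp) ** 2 + H2**2)

def sigma_r(Wr):                   # scatter about the mean
    return H2 * (b + c * Wr)

def p_Hs_given_Wr(Hs, Wr):         # gamma distribution, Eq. (7.117)
    p = Hr(Wr) ** 2 / sigma_r(Wr) ** 2 - 1
    q = Hr(Wr) / sigma_r(Wr) ** 2
    return gamma.pdf(Hs, a=p + 1, scale=1.0 / q)

# Assumed (illustrative) discrete wind-speed probabilities P(Wr), Wr in knots.
Wr_classes = np.array([5.0, 15.0, 25.0, 35.0])
P_Wr = np.array([0.4, 0.35, 0.2, 0.05])

# Marginal wave-height density, Eq. (7.118): sum over the wind-speed classes.
Hs = np.linspace(0.1, 15.0, 300)
P_Hs = sum(P_Wr[r] * p_Hs_given_Wr(Hs, Wr_classes[r]) for r in range(len(Wr_classes)))

dH = Hs[1] - Hs[0]
print("P(Hs > 4 m) =", round(float(np.sum(P_Hs[Hs > 4.0]) * dH), 3))
```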

(b) Wave periods

The reliability of visual observations of wave period is poor. NMI adopted a similar approach to that used for wave height, but with a different functional representation. Based on analysis of instrumented wave height/period data, wave height and period statistics can be synthesized, when reliable wave height data are available, using:

(7.119)  $P(T) = \sum_r P(T/H_r)\times P(H_r)$

where

$P(T/H_r) = F_1(\mu_h,\sigma_h,\mu_t,\sigma_t,\rho) = \left[2\pi(1-\rho^2)\sigma_t^2\right]^{-1/2} \exp\left\{-\dfrac{1}{2(1-\rho^2)\sigma_t^2}\left[(t-\mu_t)-\rho\dfrac{\sigma_t}{\sigma_h}(h-\mu_h)\right]^2\right\}$

$P(H_r) = F_2(\mu_h,\sigma_h) = \dfrac{1}{\sqrt{2\pi}\,\sigma_h}\exp\left\{-\dfrac{(h-\mu_h)^2}{2\sigma_h^2}\right\}\left\{1-\dfrac{C_s}{6}\left[3\left(\dfrac{h-\mu_h}{\sigma_h}\right)-\left(\dfrac{h-\mu_h}{\sigma_h}\right)^3\right]\right\}$

where

$\mu_h$ = mean value of h

$\sigma_h$ = standard deviation of h

$\mu_t$ = mean value of t $= \ln\mu_T - \sigma_t^3/2$

$\sigma_t$ = standard deviation of t $= 0.244 - 0.0225\,\mu_H$

$\rho$ = correlation coefficient $= 0.415 + 0.049\,\mu_H$

$C_s$ = skewness parameter $= E\!\left(\dfrac{[h-\mu_h]^3}{\sigma_h^3}\right)$

In these expressions h and t are the logarithmic values of H and T, respectively. $\mu_h$, $\sigma_h$, and $C_s$ follow from the given probability distribution of H, as does $\mu_H$, the mean wave height.

$\mu_T = 3.925 + 1.439\,\mu_H$

The numerical values of the coefficients in the formulae for σt , ρ and μ t were derived by regression analysis of over 20 sets of instrumental data.

(c) Extreme wave height

Sometimes the designer needs to estimate the most probable value of the maximum individual wave height in a given return period. After the probabilities of Hs are obtained, the corresponding cumulative probabilities are computed and plotted on probability paper.

The methods used by NMI for analysing these cumulative probabilities for Hs are suitable for use when, as in the case of visual data, wave records are not available.

The data define exceedance probabilities for Hs up to a limiting level 1/m, where m is the number of Hs values (or visual estimates of height) available. It is commonly required to extrapolate these to a level 1/M corresponding to an extreme storm of specified return period, R years, and duration, D hours, in which case M = 365 × 24 × R/D.

In the NMI method this extrapolation is achieved by use of a 3-parameter Weibull distribution, the formula for the cumulative probability being:

(7.120)  $P(x > H_s) = \exp\left[-\dfrac{(H_s - H_0)^n}{b}\right]$

with values of the parameters n, b, and $H_0$ determined numerically by least-squares fitting of the available data. The most probable maximum individual wave height $H_{\max}$ corresponding to the significant height $H_{sM}$ for the extreme storm having exceedance probability 1/M is then estimated by assuming a Rayleigh distribution of heights in the storm, so that $H_{\max} \approx \left(\tfrac{1}{2}\ln N\right)^{1/2} H_{sM}$, where N is an estimate of the number of waves in the storm, given by N = 3600D/T, where T is an estimated mean wave period.
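
A sketch of this extrapolation is given below: Eq. (7.120) is fitted by least squares to illustrative exceedance data (generated here from the formula itself, standing in for values derived from observations), and the most probable maximum wave height then follows from the Rayleigh assumption. The return period, storm duration, and mean period are assumed values.

```python
import numpy as np
from scipy.optimize import curve_fit

# Three-parameter Weibull exceedance probability, Eq. (7.120).
def exceedance(Hs, n, b, H0):
    return np.exp(-((Hs - H0) ** n) / b)

# Illustrative exceedance "data" (generated from Eq. (7.120) itself as a
# stand-in for probabilities derived from observed or visual Hs estimates).
Hs_data = np.linspace(2.0, 8.0, 7)
P_data = exceedance(Hs_data, 1.4, 1.8, 0.6)

# Least-squares fit of n, b, and H0 (H0 bounded below the smallest data point).
(n_fit, b_fit, H0_fit), _ = curve_fit(
    exceedance, Hs_data, P_data, p0=[1.2, 2.0, 0.5],
    bounds=([0.5, 0.1, 0.0], [5.0, 50.0, 1.9]),
)

# Extrapolate to the 1/M level for an assumed return period R and duration D.
R_years, D_hours, T_mean_s = 50.0, 3.0, 8.0
M = 365 * 24 * R_years / D_hours
HsM = H0_fit + (b_fit * np.log(M)) ** (1.0 / n_fit)   # invert Eq. (7.120) at P = 1/M

# Most probable maximum individual wave, assuming Rayleigh-distributed heights.
N_waves = 3600 * D_hours / T_mean_s
Hmax = np.sqrt(0.5 * np.log(N_waves)) * HsM
print(f"HsM = {HsM:.2f} m, Hmax = {Hmax:.2f} m")
```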


URL:

https://www.sciencedirect.com/science/article/pii/B978075068987800007X

Aerodynamic Optimization Design System for Turbomachinery Based on Parallelized 3D Viscous Numerical Analysis

Zhirong Lin, ... Xin Yuan, in Parallel Computational Fluid Dynamics 2006, 2007

Publisher Summary

The optimization system includes three sub-systems: parametric modeling, an evaluation system, and optimization strategies. In the aerospace and power-generation industries, engineers are challenged to design the highest-quality systems while reducing the cost and the duration of the design cycle, so as to remain responsive to market demands and business changes. In the course of optimization, one problem is how to choose the design variables and reduce their number while maintaining the freedom and quality of the blade representation. The most natural way to achieve parallelization for numerical simulation is by domain decomposition, and a suitable overlapping width of the sub-grids is a key point of the present parallelization method. The code was parallelized with the domain decomposition technique and run on 8 CPUs. An optimization design system comprising the three modules (parametric modeling, an evaluation system, and an optimization strategy) has been used for blade design optimization. The present aerodynamic optimization design system can be an efficient and robust design tool for achieving good aerodynamic blade design optimization in a reasonable time, and it can become an efficient and robust tool for achieving better blade performance in manufacturing in the near future.


URL:

https://www.sciencedirect.com/science/article/pii/B9780444530356500390

Solid Modeling

Kuang-Hua Chang , in e-Design, 2015

3.3.6 Direct Modeling

As can be seen from the discussion above, the parametric modeling approach requires the designer to anticipate design changes and accordingly define features, add relations to sketch entities, and add parameter relations between features. As a result, the solid model is created in such a way that a design modification (e.g., a change in a dimension value) triggers a rebuild of solid features in a prescribed manner. Feature-based parametric modeling is a structured modeling process, in which the feature creation sequence, or history tree, governs the model rebuild process and design intent is captured implicitly through sketch relations, parent–child relationships, and parametric relations between dimensions.

Although feature-based parametric modeling is indispensable in support of product design in the e-Design paradigm, capturing design intent in complex solid models is not always straightforward, to say the least. Achieving such parametric solid models requires effort, considerable planning, and careful implementation on the designer's part. In general, parametric CAD tools lack ease of use, speed, and modeling flexibility. They demand a relatively steep learning curve and substantial upfront modeling effort from the designer, and the solid models created suffer from interoperability issues; that is, a CAD model created in software A cannot be imported into software B with its features and dimensions intact, due to the history-based nature of the model.

The newly developed direct modeling approach provides a geometry-based modeling strategy that gives designers the power to quickly define and edit geometry by simply clicking on the model geometry and moving it with a mouse. Designers can focus on creating geometry rather than on building features and adding constraints and design intent into their models, thereby speeding up design, saving time and development costs, and increasing productivity. The direct modeling paradigm is especially suited to the needs of designers working with legacy and heterogeneous CAD data. Direct modeling eliminates the need to access feature-level information to implement design changes. Designers can easily edit, modify, and repurpose solid models from any CAD source.

Both Pro/ENGINEER (Creo™ 2.0 and higher) and SolidWorks (2012 and newer) are equipped with direct modeling (also called direct model editing) capabilities, which are built on top of the existing feature-based parametric modeling technique. With the added direct modeling capability, designers are able to copy, move, split, replace, offset, push, and drag geometry to create the desired result, instead of clicking on a dimension, entering a different value, and asking for a model rebuild. In addition, with direct modeling capability, the CAD system automatically imports nonnative model geometry without a model tree. The imported geometric model can then be modified through direct geometry manipulation.

In general, parametric modeling is a history-based modeling method that enables design automation and creates product platforms for a product family, which are suitable for a product design strategy that is aimed to be family-based or platform-driven. On the other hand, direct modeling is a geometry-centered and history-free approach that supports quick and easy 3D solid model construction, allows design change through direct manipulation of geometric models, and supports direct geometry-editing from any CAD sources. There are pros and cons to these two methods. They are not exclusive but in general complement each other. More details can be found in Projects S1 and P1.


URL:

https://www.sciencedirect.com/science/article/pii/B978012382038900003X

Design Parameterization

Kuang-Hua Chang, in e-Design, 2015

5.1 Introduction

After intensive research and development in recent decades, the feature-based parametric modeling technique has become a reality (Lee, 1999; Zeid, 1991). This technique has been widely adopted in mainstream CAD tools, such as Pro/ENGINEER, SolidWorks, SolidEdge, Unigraphics, CATIA, and even Mechanical Desktop of AutoCAD. With such a technique, designers are able to create parts through solid features and assemble parts or subassemblies into a complete product digital mockup in the CAD environment. In addition, the designer is able to define design variables by relating dimensions of part features and to create assembly mating constraints between parts to parameterize the product model through the parametric modeling technique. With the parameterized product model, the designer can make a design change simply by changing geometric dimension values and asking the CAD software to automatically regenerate the parts that are affected by the change, and hence the entire assembly.

For example, the bore diameter of an engine case is defined as a design variable, as shown in Figure 5.1(a). When the diameter is changed from 1.2 in. to 1.6 in., the engine case is regenerated first by properly updating the solid features that are affected by the change. As shown in Figure 5.1(b), the engine case becomes wider and the distance between the two exhaust manifolds becomes larger, to name just two effects. At the same time, the change propagates to other parts in the assembly, including the piston, piston pin, cylinder head, cylinder sleeve, cylinder fins, and crankshaft, as illustrated in Figure 5.1(b). More important, the parts stay intact, maintaining adequate assembly mating constraints, and the change does not induce interference or leave excessive gaps between parts. With such parametric models, designers are given tremendous freedom to explore design alternatives efficiently and accurately. In addition, this parametric technology supports the cross-functional team in conducting parametric studies and design trade-offs in the e-Design environment (Chang et al., 1999). More about parametric study and design trade-off methods is discussed in Chapter 17, Design Optimization.

Figure 5.1. An exploded view of a single-piston engine with a bore diameter of 1.2 in. (a) and a bore diameter of 1.6 in. (b).
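
The sketch below gives a schematic, entirely hypothetical flavor of how such propagation can be expressed: dimensions of several parts are written as relations of a single design variable, so that changing the bore diameter and regenerating updates every affected dimension consistently. The part names and clearance values are invented for the illustration and are not taken from the engine model in Figure 5.1.

```python
# Hypothetical sketch of parametric relations in an assembly: several part
# dimensions are tied to a single design variable (the bore diameter), so
# changing it and regenerating updates every affected part consistently.

design_variables = {"bore_diameter": 1.2}   # inches

# Each driven dimension is a function of the design variables.
relations = {
    "piston.outer_diameter":   lambda dv: dv["bore_diameter"] - 0.002,   # running clearance (assumed)
    "sleeve.inner_diameter":   lambda dv: dv["bore_diameter"],
    "case.cylinder_bore":      lambda dv: dv["bore_diameter"] + 0.125,   # wall allowance (assumed)
    "head.combustion_chamber": lambda dv: dv["bore_diameter"],
}

def regenerate(dv):
    """Re-evaluate every driven dimension from the current design variables."""
    return {name: rel(dv) for name, rel in relations.items()}

print(regenerate(design_variables))

design_variables["bore_diameter"] = 1.6      # the single design change
print(regenerate(design_variables))          # all affected parts update together
```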

We start in Section 4.2 by introducing design intents in product solid models. With the understanding of design intents, we discuss in Section 4.3 the two design axioms that form the basis of the design parameterization methods to be discussed in this chapter. In Sections 4.4 and 4.5, we offer guidelines for design parameterization at the part and assembly levels, respectively. Section 4.6 includes two case studies, an airplane engine and an HMMWV suspension, which demonstrate the application of the parameterization method and guidelines to practical examples.


URL:

https://www.sciencedirect.com/science/article/pii/B9780123820389000053

Video Segmentation

A. Murat Tekalp, in The Essential Guide to Video Processing, 2009

6.4.1.1 Segmentation Using Two Frames

Motion estimation in the presence of more than one moving object with unknown supports is a difficult problem. Burt et al. [29] first showed that the motion of a 2D translating object can be accurately estimated using a multiresolution iterative approach, even in the presence of other independently moving objects and without prior knowledge of their supports. This is, however, not always possible with more sophisticated motion models (e.g., affine and perspective), which are more sensitive to the presence of other moving objects in the region of analysis.

To this effect, Irani et al. [24] proposed multistage parametric modeling of dominant motion. In this approach, a translational motion model is first employed over the whole image to obtain a rough estimate of the support of the dominant motion. The complexity of the model is then gradually increased to affine and projective models, with refinement of the support of the object in between. The parameters of each model are estimated only over the support of the object obtained with the previously used model. The procedure can be summarized as follows:

(i) Compute the dominant 2D translation vector (dx, dy) over the whole frame as the solution of

(6.6)  $\begin{bmatrix} \sum I_x^2 & \sum I_x I_y \\ \sum I_x I_y & \sum I_y^2 \end{bmatrix}\begin{bmatrix} d_x \\ d_y \end{bmatrix} = \begin{bmatrix} -\sum I_x I_t \\ -\sum I_y I_t \end{bmatrix},$

where Ix , Iy , and It denote partials of image intensity with respect to x, y, and t. In case the dominant motion is not a translation, the estimated translation becomes a first-order approximation of the dominant motion.

(ii) Label all pixels that correspond to the estimated dominant motion as follows:

(a) Register the two images using the estimated dominant motion model. The dominant object appears stationary between the registered images, whereas the other parts of the image do not.

(b) Then, the problem reduces to labeling stationary regions between the registered images, which can be solved by the multiresolution change detection algorithm given in Section 6.3.1.

(c) Here, in addition to the normalized FD (6.3), define a motion reliability measure as the reciprocal of the condition number of the coefficient matrix in (6.6), given by [24]

(6.7)  $R(x, k) = \dfrac{\lambda_{\min}}{\lambda_{\max}},$

where λmin and λmax are the smallest and largest eigenvalue of the coefficient matrix. A pixel is classified as stationary at a resolution level if its normalized FD is low and its motion reliability is high. This step defines the new region of analysis.

(iii) Estimate the parameters of a higher order motion model (affine, perspective, or quadratic) over the new region of analysis as in [24]. Iterate over steps (ii) and (iii) until a satisfactory segmentation is attained. A small numerical sketch of step (i) and the reliability measure (6.7) is given after this procedure.
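
The sketch below implements step (i) and the reliability measure (6.7) in the simplest way, for a synthetic pair of frames related by an assumed one-pixel horizontal translation; the smooth test pattern, the use of np.gradient for the partial derivatives, and the frame size are assumptions made for the illustration, and no multiresolution iteration is performed.

```python
import numpy as np

def dominant_translation(frame1, frame2):
    """Step (i): dominant 2D translation (dx, dy) over the whole frame from the
    normal equations (6.6), together with the reliability measure (6.7) of the
    coefficient matrix."""
    f1, f2 = frame1.astype(float), frame2.astype(float)
    Iy, Ix = np.gradient(f1)                 # spatial partials (rows ~ y, cols ~ x)
    It = f2 - f1                             # temporal partial

    A = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    rhs = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
    d = np.linalg.solve(A, rhs)

    lam = np.linalg.eigvalsh(A)              # eigenvalues in ascending order
    return d, lam[0] / lam[-1]               # (dx, dy) and Eq. (6.7)

# Synthetic test: a smooth assumed pattern translated by one pixel in x.
y_idx, x_idx = np.mgrid[0:64, 0:64].astype(float)
def pattern(x, y):
    return np.sin(x / 6.0) + np.cos(y / 8.0) + 0.5 * np.sin((x + y) / 10.0)

frame1 = pattern(x_idx, y_idx)
frame2 = pattern(x_idx - 1.0, y_idx)         # scene shifted by (dx, dy) = (1, 0)

d, reliability = dominant_translation(frame1, frame2)
print("estimated (dx, dy):", d.round(2), " reliability R:", round(reliability, 3))
```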


URL:

https://www.sciencedirect.com/science/article/pii/B9780123744562000074