Suppose that \(\bs X\) has a continuous distribution on \(S \subseteq \R^n\) with probability density function \(f\), and that \(\bs Y = r(\bs X)\) for a one-to-one, smooth transformation \(r\) from \(S\) onto \(T \subseteq \R^n\). Then the probability density function \(g\) of \(\bs Y\) is given by \[ g(\bs y) = f(\bs x) \left| \det \left( \frac{d \bs x}{d \bs y} \right) \right|, \quad \bs y \in T \] In the discrete convolution setting below, \( D_z = \{0, 1, \ldots, z\} \) for \( z \in \N \). The general method of first finding the distribution function of the transformed variable and then differentiating is referred to, appropriately enough, as the distribution function method.
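As a quick numerical sanity check of the distribution function method (a minimal Python sketch, not part of the text), take \(Y = X^2\) with \(X\) standard uniform: \(G(y) = \P(X^2 \le y) = \sqrt{y}\), so \(g(y) = \frac{1}{2\sqrt{y}}\) on \((0, 1]\).

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=100_000)
y = x ** 2  # transformed variable Y = X^2

# Distribution function method: G(y) = P(X^2 <= y) = sqrt(y),
# so g(y) = G'(y) = 1 / (2 sqrt(y)) for 0 < y <= 1.
grid = np.linspace(0.05, 1.0, 20)
g = 1.0 / (2.0 * np.sqrt(grid))

hist, edges = np.histogram(y, bins=50, range=(0.0, 1.0), density=True)
centers = (edges[:-1] + edges[1:]) / 2
# The empirical density should track g on (0, 1]
print(np.max(np.abs(np.interp(grid, centers, hist) - g)))  # small
```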
Recall that the Pareto distribution with shape parameter \(a \in (0, \infty)\) has probability density function \(f\) given by \[ f(x) = \frac{a}{x^{a+1}}, \quad 1 \le x \lt \infty\] Members of this family have already come up in several of the previous exercises. In the order statistic experiment, select the uniform distribution. The last result means that if \(X\) and \(Y\) are independent variables, and \(X\) has the Poisson distribution with parameter \(a \gt 0\) while \(Y\) has the Poisson distribution with parameter \(b \gt 0\), then \(X + Y\) has the Poisson distribution with parameter \(a + b\). In the dice experiment, select two dice and select the sum random variable.
Suppose that \((T_1, T_2, \ldots, T_n)\) is a sequence of independent random variables, and that \(T_i\) has the exponential distribution with rate parameter \(r_i \gt 0\) for each \(i \in \{1, 2, \ldots, n\}\). The expectation of a random vector is just the vector of expectations. Let \( g = g_1 \), and note that this is the probability density function of the exponential distribution with parameter 1, which was the topic of our last discussion. So the main problem is often computing the inverse images \(r^{-1}\{y\}\) for \(y \in T\). When \(n = 2\), the result was shown in the section on joint distributions. The Jacobian is the infinitesimal scale factor that describes how \(n\)-dimensional volume changes under the transformation. If \( (X, Y) \) takes values in a subset \( D \subseteq \R^2 \), then for a given \( v \in \R \), the integral in (a) is over \( \{x \in \R: (x, v / x) \in D\} \), and for a given \( w \in \R \), the integral in (b) is over \( \{x \in \R: (x, w x) \in D\} \). Suppose that \((X_1, X_2, \ldots, X_n)\) is a sequence of independent real-valued random variables, with common distribution function \(F\). An ace-six flat die is a standard die in which faces 1 and 6 occur with probability \(\frac{1}{4}\) each and the other faces with probability \(\frac{1}{8}\) each. The formulas in the last theorem are particularly nice when the random variables are identically distributed, in addition to being independent. When the transformed variable \(Y\) has a discrete distribution, the probability density function of \(Y\) can be computed using basic rules of probability. First, for \( (x, y) \in \R^2 \), let \( (r, \theta) \) denote the standard polar coordinates corresponding to the Cartesian coordinates \((x, y)\), so that \( r \in [0, \infty) \) is the radial distance and \( \theta \in [0, 2 \pi) \) is the polar angle. The precise statement of this result is the central limit theorem, one of the fundamental theorems of probability. For the Poisson convolution below, the binomial theorem gives \[ \frac{e^{-(a + b)}}{z!} \sum_{x = 0}^z \binom{z}{x} a^x b^{z - x} = e^{-(a + b)} \frac{(a + b)^z}{z!} \] In many cases, the probability density function of \(Y\) can be found by first finding the distribution function of \(Y\) (using basic rules of probability) and then computing the appropriate derivatives of the distribution function. In matrix notation, if \(\bs y = \bs A \bs x + \bs b\) where \(\bs x\) is multivariate normal with mean \(\bs \mu\) and covariance matrix \(\bs \Sigma\), then \(\bs y\) is multivariate normal with mean \(\bs A \bs \mu + \bs b\) and covariance matrix \(\bs A \bs \Sigma \bs A^T\). Then, with the aid of matrix notation, we discuss the general multivariate distribution. Assuming that we can compute \(F^{-1}\), the previous exercise shows how we can simulate a distribution with distribution function \(F\). Suppose that \((X, Y)\) has probability density function \(f\). \(\left|X\right|\) has probability density function \(g\) given by \(g(y) = f(y) + f(-y)\) for \(y \in [0, \infty)\). When \(r\) is decreasing, \(g(y) = -f\left[r^{-1}(y)\right] \frac{d}{dy} r^{-1}(y)\). In terms of the Poisson model, \( X \) could represent the number of points in a region \( A \) and \( Y \) the number of points in a region \( B \) (of the appropriate sizes so that the parameters are \( a \) and \( b \) respectively).
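As a concrete illustration of simulating via \(F^{-1}\) (a minimal Python sketch, not from the text): for the Pareto distribution with shape parameter \(a\), \(F(x) = 1 - x^{-a}\) for \(x \ge 1\), so \(F^{-1}(u) = (1 - u)^{-1/a}\), and a random number \(U\) yields a Pareto variate.

```python
import numpy as np

def pareto_inverse_cdf(u, a):
    """Quantile function F^{-1}(u) = (1 - u)^(-1/a) for the Pareto(a) distribution."""
    return (1.0 - u) ** (-1.0 / a)

rng = np.random.default_rng(0)
u = rng.uniform(size=5)          # five random numbers
x = pareto_inverse_cdf(u, a=2)   # five Pareto(a = 2) variates
print(x)
```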
Using the quotient theorem above, the PDF \( f \) of \( T \) is given by \[f(t) = \int_{-\infty}^\infty \phi(x) \phi(t x) |x| \, dx = \frac{1}{2 \pi} \int_{-\infty}^\infty e^{-(1 + t^2) x^2/2} |x| \, dx, \quad t \in \R\] Using symmetry and a simple substitution, \[ f(t) = \frac{1}{\pi} \int_0^\infty x e^{-(1 + t^2) x^2/2} \, dx = \frac{1}{\pi (1 + t^2)}, \quad t \in \R \] This is the density of the standard Cauchy distribution.
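The claim is easy to check numerically (a hedged Python sketch, not part of the text): simulate the ratio of two independent standard normals and compare its empirical distribution function with the standard Cauchy CDF \(F(t) = \frac{1}{2} + \frac{\arctan t}{\pi}\).

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(200_000)
y = rng.standard_normal(200_000)
t = x / y  # ratio of independent standard normals

# Standard Cauchy CDF: F(t) = 1/2 + arctan(t)/pi
grid = np.linspace(-5, 5, 11)
empirical = np.array([(t <= g).mean() for g in grid])
cauchy_cdf = 0.5 + np.arctan(grid) / np.pi
print(np.max(np.abs(empirical - cauchy_cdf)))  # ~1e-3 at this sample size
```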
Suppose that \(T\) has the exponential distribution with rate parameter \(r \in (0, \infty)\). The central limit theorem is studied in detail in the chapter on Random Samples. By definition, \( f(0) = 1 - p \) and \( f(1) = p \). Vary \(n\) with the scroll bar, set \(k = n\) each time (this gives the maximum \(V\)), and note the shape of the probability density function. Linear transformations (or more technically affine transformations) are among the most common and important transformations. Recall that \( F^\prime = f \). As we all know from calculus, the Jacobian of the transformation is \( r \). Let \(Y = a + b \, X\) where \(a \in \R\) and \(b \in \R \setminus\{0\}\). We will explore the one-dimensional case first, where the concepts and formulas are simplest. The normal distribution is studied in detail in the chapter on Special Distributions.
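To make the linear case concrete (a minimal Python sketch, using the usual change-of-variables result \(g(y) = \frac{1}{|b|} f\left(\frac{y - a}{b}\right)\); the parameter values here are arbitrary), transform an exponential variate with rate \(r\) and check the resulting density by simulation:

```python
import numpy as np

rng = np.random.default_rng(2)
r, a, b = 2.0, 1.0, 3.0
t = rng.exponential(scale=1.0 / r, size=200_000)
y = a + b * t  # linear (affine) transformation Y = a + bT

# Change of variables: g(y) = (1/|b|) f((y - a)/b), with f the exponential PDF
grid = np.linspace(a + 0.1, a + 5.0, 10)
g = (1.0 / abs(b)) * r * np.exp(-r * (grid - a) / b)

hist, edges = np.histogram(y, bins=200, range=(a, a + 10.0), density=True)
centers = (edges[:-1] + edges[1:]) / 2
print(np.max(np.abs(np.interp(grid, centers, hist) - g)))  # small
```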
As usual, we will let \(G\) denote the distribution function of \(Y\) and \(g\) the probability density function of \(Y\). Note that \(\bs Y\) takes values in \(T = \{\bs a + \bs B \bs x: \bs x \in S\} \subseteq \R^n\). If \( (X, Y) \) has a discrete distribution then \(Z = X + Y\) has a discrete distribution with probability density function \(u\) given by \[ u(z) = \sum_{x \in D_z} f(x, z - x), \quad z \in T \] If \( (X, Y) \) has a continuous distribution then \(Z = X + Y\) has a continuous distribution with probability density function \(u\) given by \[ u(z) = \int_{D_z} f(x, z - x) \, dx, \quad z \in T \] \( \P(Z = z) = \P\left(X = x, Y = z - x \text{ for some } x \in D_z\right) = \sum_{x \in D_z} f(x, z - x) \). For \( A \subseteq T \), let \( C = \{(u, v) \in R \times S: u + v \in A\} \). \(g(v) = \frac{1}{\sqrt{2 \pi v}} e^{-\frac{1}{2} v}\) for \( 0 \lt v \lt \infty\). The main step is to write the event \(\{Y \le y\}\) in terms of \(X\), and then find the probability of this event using the probability density function of \( X \). A linear transformation of a multivariate normal random vector also has a multivariate normal distribution. But first recall that for \( B \subseteq T \), \(r^{-1}(B) = \{x \in S: r(x) \in B\}\) is the inverse image of \(B\) under \(r\). \(g_1(u) = \begin{cases} u, & 0 \lt u \lt 1 \\ 2 - u, & 1 \lt u \lt 2 \end{cases}\), \(g_2(v) = \begin{cases} 1 - v, & 0 \lt v \lt 1 \\ 1 + v, & -1 \lt v \lt 0 \end{cases}\), \( h_1(w) = -\ln w \) for \( 0 \lt w \le 1 \), \( h_2(z) = \begin{cases} \frac{1}{2}, & 0 \le z \le 1 \\ \frac{1}{2 z^2}, & 1 \le z \lt \infty \end{cases} \), \(G(t) = 1 - (1 - t)^n\) and \(g(t) = n(1 - t)^{n-1}\), both for \(t \in [0, 1]\), \(H(t) = t^n\) and \(h(t) = n t^{n-1}\), both for \(t \in [0, 1]\). \( \P\left(\left|X\right| \le y\right) = \P(-y \le X \le y) = F(y) - F(-y) \) for \( y \in [0, \infty) \). Suppose that \(\bs X\) is a random variable taking values in \(S \subseteq \R^n\), and that \(\bs X\) has a continuous distribution with probability density function \(f\). Using your calculator, simulate 5 values from the Pareto distribution with shape parameter \(a = 2\). Then \(Y_n = X_1 + X_2 + \cdots + X_n\) has probability density function \(f^{*n} = f * f * \cdots * f \), the \(n\)-fold convolution power of \(f\), for \(n \in \N\). Thus, \( X \) also has the standard Cauchy distribution. Location transformations arise naturally when the physical reference point is changed (measuring time relative to 9:00 AM as opposed to 8:00 AM, for example). Proposition: let \(\bs X\) be a multivariate normal random vector with mean \(\bs \mu\) and covariance matrix \(\bs \Sigma\); then \(\bs a + \bs B \bs X\) is multivariate normal with mean \(\bs a + \bs B \bs \mu\) and covariance matrix \(\bs B \bs \Sigma \bs B^T\). Scale transformations arise naturally when physical units are changed (from feet to meters, for example). Using your calculator, simulate 6 values from the standard normal distribution. The images below give a graphical interpretation of the formula in the two cases where \(r\) is increasing and where \(r\) is decreasing.
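For instance (a small Python sketch, not from the text), the discrete convolution formula gives the PDF of the sum of two fair dice directly:

```python
import numpy as np

die = np.full(6, 1.0 / 6.0)    # PDF of one fair die on {1, ..., 6}
total = np.convolve(die, die)  # convolution: PDF of the sum on {2, ..., 12}
for z, p in enumerate(total, start=2):
    print(z, round(p, 4))      # peaks at z = 7 with probability 6/36
```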
Find the probability density function of \(T = X / Y\). In both cases, the probability density function \(g * h\) is called the convolution of \(g\) and \(h\). Thus, in part (b) we can write \(f * g * h\) without ambiguity. For a multivariate normal vector, zero correlation is equivalent to independence: \(X_1, \ldots, X_p\) are independent if and only if \(\Sigma_{i j} = 0\) for \(1 \le i \ne j \le p\); in other words, if and only if \(\Sigma\) is diagonal. But a linear combination of independent (one-dimensional) normal variables is again normal, so \(\bs a^T \bs U\) is a normal variable. Hence \[ \frac{\partial(x, y)}{\partial(u, w)} = \left[\begin{matrix} 1 & 0 \\ w & u\end{matrix} \right] \] and so the Jacobian is \( u \).
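This Jacobian is easy to verify symbolically (a quick sketch using the sympy library; the variable names are just for illustration):

```python
import sympy as sp

u, w = sp.symbols('u w')
x, y = u, u * w  # inverse transformation x = u, y = u w
J = sp.Matrix([x, y]).jacobian([u, w])
print(J)         # Matrix([[1, 0], [w, u]])
print(J.det())   # u, the Jacobian of the transformation
```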
This is a difficult problem in general, because as we will see, even simple transformations of variables with simple distributions can lead to variables with complex distributions. Run the simulation 1000 times and compare the empirical density function to the probability density function for each of the following cases: Suppose that \(n\) standard, fair dice are rolled. The exponential distribution is studied in more detail in the chapter on Poisson Processes.
Recall that the (standard) gamma distribution with shape parameter \(n \in \N_+\) has probability density function \[ g_n(t) = e^{-t} \frac{t^{n-1}}{(n - 1)!}, \quad 0 \le t \lt \infty \] With a positive integer shape parameter, as we have here, it is also referred to as the Erlang distribution, named for Agner Erlang. First we need some notation. Open the Cauchy experiment, which is a simulation of the light problem in the previous exercise. Suppose also that \(X\) has a known probability density function \(f\). Conversely, any continuous distribution supported on an interval of \(\R\) can be transformed into the standard uniform distribution. Note that \( Z \) takes values in \( T = \{z \in \R: z = x + y \text{ for some } x \in R, y \in S\} \). The formulas above in the discrete and continuous cases are not worth memorizing explicitly; it's usually better to just work each problem from scratch. Keep the default parameter values and run the experiment in single step mode a few times.
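Numerically (a hedged Python sketch, not from the text), the sum of \(n\) independent standard exponentials should match the Erlang density \(g_n\):

```python
import numpy as np
from math import factorial

rng = np.random.default_rng(3)
n = 4
t = rng.exponential(size=(200_000, n)).sum(axis=1)  # sum of n standard exponentials

# Erlang (gamma with integer shape n) PDF: g_n(t) = e^{-t} t^{n-1} / (n-1)!
grid = np.linspace(0.5, 10.0, 10)
g = np.exp(-grid) * grid ** (n - 1) / factorial(n - 1)

hist, edges = np.histogram(t, bins=100, range=(0.0, 20.0), density=True)
centers = (edges[:-1] + edges[1:]) / 2
print(np.max(np.abs(np.interp(grid, centers, hist) - g)))  # small
```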
Suppose that \(X_i\) represents the lifetime of component \(i \in \{1, 2, \ldots, n\}\). This is particularly important for simulations, since many computer languages have an algorithm for generating random numbers, which are simulations of independent variables, each with the standard uniform distribution. The Irwin-Hall distributions are studied in more detail in the chapter on Special Distributions. Suppose that the radius \(R\) of a sphere has a beta distribution probability density function \(f\) given by \(f(r) = 12 r^2 (1 - r)\) for \(0 \le r \le 1\). However, there is one case where the computations simplify significantly. For each value of \(n\), run the simulation 1000 times and compare the empirical density function and the probability density function. We can simulate the polar angle \( \Theta \) with a random number \( V \) by \( \Theta = 2 \pi V \). As usual, we start with a random experiment modeled by a probability space \((\Omega, \mathscr F, \P)\). \(\left|X\right|\) and \(\sgn(X)\) are independent. If \( X = \ln T \), where \( T \) has the gamma distribution with shape parameter \( n \), then \( X \) has probability density function \( g(x) = \frac{1}{(n - 1)!} \exp\left(-e^x\right) e^{n x} \) for \(x \in \R\). A particularly important special case occurs when the random variables are identically distributed, in addition to being independent. An extremely common use of this transform is to express \(F_X(x)\), the CDF of \(X\), in terms of the CDF of \(Z\). Since the CDF of \(Z\) is so common, it gets its own Greek symbol: \(\Phi(x)\). Thus \(F_X(x) = \P(X \le x) = \Phi\left(\frac{x - \mu}{\sigma}\right)\). Suppose that \((X_1, X_2, \ldots, X_n)\) is a sequence of independent real-valued random variables. Most of the apps in this project use this method of simulation. \( f(x) \to 0 \) as \( x \to \infty \) and as \( x \to -\infty \). The grades are generally low, so the teacher decides to curve the grades using the transformation \( Z = 10 \sqrt{Y} = 100 \sqrt{X}\). Formal proof of this result can be undertaken quite easily using characteristic functions. \(Y_n\) has the probability density function \(f_n\) given by \[ f_n(y) = \binom{n}{y} p^y (1 - p)^{n - y}, \quad y \in \{0, 1, \ldots, n\}\] The commutative property of convolution follows from the commutative property of addition: \( X + Y = Y + X \).
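To illustrate the standardization identity \(F_X(x) = \Phi\left(\frac{x - \mu}{\sigma}\right)\) numerically (a minimal sketch using scipy.stats, not part of the text; the parameter values are arbitrary):

```python
from scipy.stats import norm

mu, sigma = 3.0, 2.0
x = 4.5
# CDF of X ~ N(mu, sigma^2) expressed through the standard normal CDF Phi
lhs = norm.cdf(x, loc=mu, scale=sigma)
rhs = norm.cdf((x - mu) / sigma)  # Phi((x - mu) / sigma)
print(lhs, rhs)  # identical up to floating point
```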
\(X = -\frac{1}{r} \ln(1 - U)\) where \(U\) is a random number. So to review, \(\Omega\) is the set of outcomes, \(\mathscr F\) is the collection of events, and \(\P\) is the probability measure on the sample space \( (\Omega, \mathscr F) \). Part (a) holds trivially when \( n = 1 \). For example, recall that in the standard model of structural reliability, a system consists of \(n\) components that operate independently. In both cases, determining \( D_z \) is often the most difficult step. Set \(k = 1\) (this gives the minimum \(U\)). However, it is a well-known property of the normal distribution that linear transformations of normal random vectors are normal random vectors. The result in the previous exercise is very important in the theory of continuous-time Markov chains. Using the change of variables formula, the joint PDF of \( (U, W) \) is \( (u, w) \mapsto f(u, u w) |u| \). Open the Special Distribution Simulator and select the Irwin-Hall distribution. If \(X_i\) has a continuous distribution with probability density function \(f_i\) for each \(i \in \{1, 2, \ldots, n\}\), then \(U\) and \(V\) also have continuous distributions, and their probability density functions can be obtained by differentiating the distribution functions in parts (a) and (b) of the last theorem. About 68% of values drawn from a normal distribution are within one standard deviation away from the mean; about 95% of the values lie within two standard deviations; and about 99.7% are within three standard deviations. Suppose that \((X_1, X_2, \ldots, X_n)\) is a sequence of independent random variables, each with the standard uniform distribution. Proof: the moment generating function of a random vector \(\bs x\) is \[ M_{\bs x}(\bs t) = \E\left(\exp\left[\bs t^T \bs x\right]\right) \] Let \(\bs Y = \bs a + \bs B \bs X\), where \(\bs a \in \R^n\) and \(\bs B\) is an invertible \(n \times n\) matrix. For \( z \in T \), let \( D_z = \{x \in R: z - x \in S\} \). Show how to simulate, with a random number, the exponential distribution with rate parameter \(r\). The following result gives some simple properties of convolution. Suppose that \(X\) has a discrete distribution on a countable set \(S\), with probability density function \(f\). Thus, suppose that \( X \), \( Y \), and \( Z \) are independent random variables with PDFs \( f \), \( g \), and \( h \), respectively. Note that the inequality is reversed since \( r \) is decreasing. Sketch the graph of \( f \), noting the important qualitative features. Show how to simulate the uniform distribution on the interval \([a, b]\) with a random number. Find the probability density function of each of the following random variables. Note that the distributions in the previous exercise are geometric distributions on \(\N\) and on \(\N_+\), respectively.
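Both simulation exercises above have one-line answers via the quantile function (a hedged Python sketch, not from the text): \(X = -\frac{1}{r} \ln(1 - U)\) is exponential with rate \(r\), and \(a + (b - a) U\) is uniform on \([a, b]\); the parameter values below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(8)
u = rng.uniform(size=100_000)  # standard uniform random numbers

r, a, b = 2.0, -1.0, 3.0
x_exp = -np.log(1.0 - u) / r   # exponential with rate r (inverse CDF method)
x_unif = a + (b - a) * u       # uniform on [a, b]

print(x_exp.mean(), 1.0 / r)         # both close to 0.5
print(x_unif.mean(), (a + b) / 2.0)  # both close to 1.0
```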
The first derivative of the inverse function \(\bs x = r^{-1}(\bs y)\) is the \(n \times n\) matrix of first partial derivatives: \[ \left( \frac{d \bs x}{d \bs y} \right)_{i j} = \frac{\partial x_i}{\partial y_j} \] The Jacobian (named in honor of Carl Gustav Jacobi) of the inverse function is the determinant of the first derivative matrix \[ \det \left( \frac{d \bs x}{d \bs y} \right) \] With this compact notation, the multivariate change of variables formula is easy to state. Moreover, this type of transformation leads to simple applications of the change of variable theorems. Let \(U = X + Y\), \(V = X - Y\), \( W = X Y \), \( Z = Y / X \). For the next exercise, recall that the floor and ceiling functions on \(\R\) are defined by \[ \lfloor x \rfloor = \max\{n \in \Z: n \le x\}, \; \lceil x \rceil = \min\{n \in \Z: n \ge x\}, \quad x \in \R\] The Cauchy distribution is studied in detail in the chapter on Special Distributions. Location-scale transformations are studied in more detail in the chapter on Special Distributions. It must be understood that \(x\) on the right should be written in terms of \(y\) via the inverse function. Theorem (linear transformation of a Gaussian random variable): let \(X \sim N(\mu, \sigma^2)\) and let \(a\) and \(b\) be real numbers with \(b \ne 0\); then \(a + b X \sim N(a + b \mu, b^2 \sigma^2)\). Both of these are studied in more detail in the chapter on Special Distributions. Random variable \(T\) has the (standard) Cauchy distribution, named after Augustin Cauchy. Suppose that \(X\) has a continuous distribution on an interval \(S \subseteq \R\). Then \(U = F(X)\) has the standard uniform distribution. By far the most important special case occurs when \(X\) and \(Y\) are independent. The distribution of \( R \) is the (standard) Rayleigh distribution, and is named for John William Strutt, Lord Rayleigh. We have seen this derivation before. Suppose that \(X\) and \(Y\) are independent random variables, each having the exponential distribution with parameter 1.
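The probability integral transform above is easy to check by simulation (a minimal Python sketch, not part of the text), here with the exponential CDF \(F(x) = 1 - e^{-x}\):

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.exponential(size=100_000)  # X with CDF F(x) = 1 - e^{-x}
u = 1.0 - np.exp(-x)               # U = F(X)

# U should be (approximately) standard uniform: mean 1/2, variance 1/12
print(u.mean(), u.var())           # close to 0.5 and 0.0833
```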
In the context of the Poisson model, part (a) means that the \( n \)th arrival time is the sum of the \( n \) independent interarrival times, which have a common exponential distribution. Then \(U\) is the lifetime of the series system, which operates if and only if each component is operating. Note that the minimum \(U\) in part (a) has the exponential distribution with parameter \(r_1 + r_2 + \cdots + r_n\). In the classical linear model, normality is usually required. There is a partial converse to the previous result, for continuous distributions. In the order statistic experiment, select the exponential distribution. \( G(y) = \P(Y \le y) = \P[r(X) \le y] = \P\left[X \le r^{-1}(y)\right] = F\left[r^{-1}(y)\right] \) for \( y \in T \). Note that since \( V \) is the maximum of the variables, \(\{V \le x\} = \{X_1 \le x, X_2 \le x, \ldots, X_n \le x\}\). Standardization is a special linear transformation: if \(\bs X\) has mean \(\bs \mu\) and covariance matrix \(\bs \Sigma\), then \(\bs \Sigma^{-1/2}(\bs X - \bs \mu)\) has mean \(\bs 0\) and identity covariance. Then \( Z \) has probability density function \[ (g * h)(z) = \sum_{x = 0}^z g(x) h(z - x), \quad z \in \N \] In the continuous case, suppose that \( X \) and \( Y \) take values in \( [0, \infty) \).
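A quick simulation check of the minimum result (a hedged Python sketch; the rates are chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(5)
rates = np.array([1.0, 2.0, 3.0])  # r_1, r_2, r_3 (arbitrary)
t = rng.exponential(scale=1.0 / rates, size=(200_000, 3))
u = t.min(axis=1)                  # minimum of the three exponentials

# U should be exponential with rate r_1 + r_2 + r_3 = 6, i.e. mean 1/6
print(u.mean(), 1.0 / rates.sum())
```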
Suppose now that we have a random variable \(X\) for the experiment, taking values in a set \(S\), and a function \(r\) from \( S \) into another set \( T \). The minimum and maximum transformations \[U = \min\{X_1, X_2, \ldots, X_n\}, \quad V = \max\{X_1, X_2, \ldots, X_n\} \] are very important in a number of applications. Chi-square distributions are studied in detail in the chapter on Special Distributions.
The change of temperature measurement from Fahrenheit to Celsius is a location and scale transformation. Recall that if \((X_1, X_2, X_3)\) is a sequence of independent random variables, each with the standard uniform distribution, then \(f\), \(f^{*2}\), and \(f^{*3}\) are the probability density functions of \(X_1\), \(X_1 + X_2\), and \(X_1 + X_2 + X_3\), respectively. Of course, the constant 0 is the additive identity, so \( X + 0 = 0 + X = X \) for every random variable \( X \). The generalization of this result from \( \R \) to \( \R^n \) is basically a theorem in multivariate calculus. Vary \(n\) with the scroll bar and note the shape of the probability density function. See the technical details in (1) for more advanced information. Convolution can be generalized to sums of independent variables that are not of the same type, but this generalization is usually done in terms of distribution functions rather than probability density functions. On the other hand, \(W\) has a Pareto distribution, named for Vilfredo Pareto. This follows from part (a) by taking derivatives with respect to \( y \). Our goal is to find the distribution of \(Z = X + Y\). \(\sgn(X)\) is uniformly distributed on \(\{-1, 1\}\). Graph \( f \), \( f^{*2} \), and \( f^{*3} \) on the same set of axes. \(U = \min\{X_1, X_2, \ldots, X_n\}\) has distribution function \(G\) given by \(G(x) = 1 - \left[1 - F_1(x)\right] \left[1 - F_2(x)\right] \cdots \left[1 - F_n(x)\right]\) for \(x \in \R\). Using the definition of convolution and the binomial theorem we have \begin{align} (f_a * f_b)(z) & = \sum_{x = 0}^z f_a(x) f_b(z - x) = \sum_{x = 0}^z e^{-a} \frac{a^x}{x!} e^{-b} \frac{b^{z - x}}{(z - x)!} \\ & = e^{-(a + b)} \frac{1}{z!} \sum_{x = 0}^z \binom{z}{x} a^x b^{z - x} = e^{-(a + b)} \frac{(a + b)^z}{z!} \end{align}
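The Poisson convolution identity can be spot-checked numerically (a brief sketch using scipy.stats, not part of the text; the parameters are arbitrary):

```python
import numpy as np
from scipy.stats import poisson

a, b = 2.0, 3.0
z = np.arange(0, 15)
# Convolution of the two PMFs, truncated at a generous upper bound
fa = poisson.pmf(np.arange(0, 40), a)
fb = poisson.pmf(np.arange(0, 40), b)
conv = np.convolve(fa, fb)[:len(z)]
print(np.max(np.abs(conv - poisson.pmf(z, a + b))))  # ~0 (truncation error only)
```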
Then \( X + Y \) is the number of points in \( A \cup B \). Suppose that \(\bs X = (X_1, X_2, \ldots)\) is a sequence of independent and identically distributed real-valued random variables, with common probability density function \(f\). If \( X \) takes values in \( S \subseteq \R \) and \( Y \) takes values in \( T \subseteq \R \), then for a given \( v \in \R \), the integral in (a) is over \( \{x \in S: v / x \in T\} \), and for a given \( w \in \R \), the integral in (b) is over \( \{x \in S: w x \in T\} \). It follows that the probability density function \( \delta \) of 0 (given by \( \delta(0) = 1 \)) is the identity with respect to convolution (at least for discrete PDFs). \(V = \max\{X_1, X_2, \ldots, X_n\}\) has distribution function \(H\) given by \(H(x) = F^n(x)\) for \(x \in \R\).
A linear transformation changes the original variable \(x\) into the new variable \(x_{\text{new}}\) given by an equation of the form \(x_{\text{new}} = a + b x\). Adding the constant \(a\) shifts all values of \(x\) upward or downward by the same amount. Random variable \(V\) has the chi-square distribution with 1 degree of freedom. Let \(X \sim N(\mu, \sigma^2)\), where \(N(\mu, \sigma^2)\) is the Gaussian distribution with parameters \(\mu\) and \(\sigma^2\). In the usual terminology of reliability theory, \(X_i = 0\) means failure on trial \(i\), while \(X_i = 1\) means success on trial \(i\). The computations are straightforward using the product rule for derivatives, but the results are a bit of a mess. Suppose first that \(F\) is a distribution function for a distribution on \(\R\) (which may be discrete, continuous, or mixed), and let \(F^{-1}\) denote the quantile function.
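As a numerical illustration (a minimal Python sketch, not part of the text), squaring a standard normal gives the chi-square distribution with 1 degree of freedom, whose density \(g(v) = \frac{1}{\sqrt{2 \pi v}} e^{-\frac{1}{2} v}\) appeared above:

```python
import numpy as np

rng = np.random.default_rng(6)
z = rng.standard_normal(300_000)
v = z ** 2  # V = Z^2 should be chi-square with 1 degree of freedom

grid = np.linspace(0.2, 4.0, 10)
g = np.exp(-grid / 2) / np.sqrt(2 * np.pi * grid)  # chi-square(1) PDF

hist, edges = np.histogram(v, bins=200, range=(0.0, 8.0), density=True)
centers = (edges[:-1] + edges[1:]) / 2
print(np.max(np.abs(np.interp(grid, centers, hist) - g)))  # small
```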
Multiplying by the positive constant \(b\) changes the size of the unit of measurement. \(\text{cov}(\bs X, \bs Y)\) is a matrix with \((i, j)\) entry \(\text{cov}(X_i, Y_j)\). In a normal distribution, data is symmetrically distributed with no skew. In particular, the \( n \)th arrival time in the Poisson model of random points in time has the gamma distribution with parameter \( n \). Find the probability density function of. As in the discrete case, the formula in (4) is not much help, and it's usually better to work each problem from scratch. \( f \) increases and then decreases, with mode \( x = \mu \). Returning to the case of general \(n\), note that \(T_i \lt T_j\) for all \(j \ne i\) if and only if \(T_i \lt \min\left\{T_j: j \ne i\right\}\). In general, beta distributions are widely used to model random proportions and probabilities, as well as physical quantities that take values in closed bounded intervals (which after a change of units can be taken to be \( [0, 1] \)). Find the distribution function and probability density function of the following variables. Hence the inverse transformation is \( x = (y - a) / b \) and \( dx / dy = 1 / b \). This follows directly from the general result on linear transformations in (10). In part (c), note that even a simple transformation of a simple distribution can produce a complicated distribution. As before, determining this set \( D_z \) is often the most challenging step in finding the probability density function of \(Z\). Find the probability density function of \(Z = X + Y\) in each of the following cases.
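To see the formula for the maximum in action (a small Python sketch, not from the text), take \(n\) independent standard uniforms, for which \(H(x) = F^n(x) = x^n\) on \([0, 1]\):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 5
x = rng.uniform(size=(100_000, n))
v = x.max(axis=1)  # V = max of n standard uniforms

# H(x) = F^n(x) = x^n on [0, 1]; compare at a few points
for point in (0.5, 0.8, 0.95):
    print(point, (v <= point).mean(), point ** n)
```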