Showing posts with label Stephen P. Boyd. Show all posts

Thursday, August 14, 2008

Linear Dynamical Systems Lecture 20 - Observability and State Estimation

Watch Video

Linear Dynamical Systems Lecture 19 - Controllability and State Transfer and their uses in modern electrical engineering

Watch Video

Linear Dynamical Systems Lecture 18 - Applications of SVD, Controllability, and State Transfer in Electrical Engineering

Watch Video

Linear Dynamical Systems Lecture 17 - Applications of singular value decomposition in LDS and electrical engineering

Singular value decomposition (SVD) is an important factorization of a rectangular real or complex matrix, with several applications in signal processing and statistics. Applications which employ the SVD include computing the pseudoinverse, least squares fitting of data, matrix approximation, and determining the rank, range and null space of a matrix.
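As a small illustration of the applications listed above (the matrix here is a made-up example, not taken from the lecture), NumPy's SVD can be used to read off the numerical rank of a matrix and to build its pseudoinverse:

```python
import numpy as np

# Illustrative sketch: SVD of a rectangular matrix, its rank, and its pseudoinverse.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])          # 2x3 rectangular matrix

U, s, Vt = np.linalg.svd(A, full_matrices=False)

# numerical rank = number of singular values above a small tolerance
rank = int(np.sum(s > 1e-10))

# pseudoinverse A+ = V diag(1/s) U^T, inverting only the nonzero singular values
A_pinv = Vt.T @ np.diag(1.0 / s) @ U.T

print(rank)                                    # 2
print(np.allclose(A_pinv, np.linalg.pinv(A)))  # True
```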



Watch Video

Linear Dynamical Systems Lecture 16 - Use of symmetric matrices, quadratic forms, matrix norm, and SVDs in LDS

Watch Video

Linear Dynamical Systems Lecture 15 - Inputs and Outputs of symmetric matrices

Watch Video

Linear Dynamical Systems Lecture 14 - Applications of Jordan Canonical Form in LDS and Electrical Engineering

Watch Video

Linear Dynamical Systems Lecture 13 - Generalized eigenvectors, diagonalization, and Jordan canonical form

Jordan normal form (often called Jordan canonical form) shows that a given square matrix M over a field K containing the eigenvalues of M can be transformed into a certain normal form by changing the basis. This normal form is almost diagonal in the sense that its only non-zero entries lie on the diagonal and the superdiagonal. This is made more precise in the Jordan-Chevalley decomposition. One can compare this result with the spectral theorem for normal matrices, which is a special case of the Jordan normal form.
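A quick sketch of this (the matrix below is an assumed example, not from the lecture): SymPy's `jordan_form` returns a change-of-basis matrix P and the almost-diagonal form J, with the nonzero entries only on the diagonal and superdiagonal.

```python
from sympy import Matrix

# A 3x3 matrix with a repeated eigenvalue 2 that is not diagonalizable,
# so its Jordan form contains a 2x2 block with a 1 on the superdiagonal.
A = Matrix([[2, 1, 0],
            [0, 2, 0],
            [0, 0, 3]])

P, J = A.jordan_form()   # A = P * J * P**-1

print(J)                 # Jordan blocks for eigenvalues 2 (size 2) and 3 (size 1)
```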



Watch Video

Linear Dynamical Systems Lecture 12 - Matrix exponentials, eigenvectors, and diagonalization and their uses in LDS

The matrix exponential is a matrix function on square matrices analogous to the ordinary exponential function. Abstractly, the matrix exponential gives the connection between a matrix Lie algebra and the corresponding Lie group. In linear algebra, a square matrix A is called diagonalizable if it is similar to a diagonal matrix, i.e. if there exists an invertible matrix P such that P⁻¹AP is a diagonal matrix. If V is a finite-dimensional vector space, then a linear map T : V → V is called diagonalizable if there exists a basis of V with respect to which T is represented by a diagonal matrix. Diagonalization is the process of finding a corresponding diagonal matrix for a diagonalizable matrix or linear map.

Diagonalizable matrices and maps are of interest because diagonal matrices are especially easy to handle: their eigenvalues and eigenvectors are known, and one can raise a diagonal matrix to a power by simply raising its diagonal entries to that same power.

The Jordan-Chevalley decomposition expresses an operator as the sum of its semisimple (diagonalizable) part and its nilpotent part.

Watch Video

Linear Dynamical Systems Lecture 11 - Laplace transform and matrix exponentials

The matrix exponential is a matrix function on square matrices analogous to the ordinary exponential function. Abstractly, the matrix exponential gives the connection between a matrix Lie algebra and the corresponding Lie group. The Laplace transform is one of the best known and most widely used integral transforms. It is commonly used to produce an easily solvable algebraic equation from an ordinary differential equation, and it has many important applications in mathematics, physics, optics, electrical engineering, control engineering, signal processing, and probability theory.

In mathematics, it is used for solving differential and integral equations. In physics, it is used for the analysis of linear time-invariant systems such as electrical circuits, harmonic oscillators, optical devices, and mechanical systems. In this analysis, the Laplace transform is often interpreted as a transformation from the time domain, in which inputs and outputs are functions of time, to the frequency domain, where the same inputs and outputs are functions of complex angular frequency, in radians per unit time. Given a simple mathematical or functional description of an input or output of a system, the Laplace transform provides an alternative functional description that often simplifies analyzing the system's behavior or synthesizing a new system from a set of specifications.
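A tiny sketch of the time-domain-to-frequency-domain idea (the signal is chosen for illustration, not taken from the lecture): the decaying exponential e^(-a t) becomes the rational function 1/(s + a).

```python
from sympy import symbols, exp, laplace_transform

t, s, a = symbols('t s a', positive=True)

# L{e^(-a t)} = 1/(s + a): a time-domain signal becomes an algebraic
# function of the complex frequency variable s
F = laplace_transform(exp(-a * t), t, s, noconds=True)
print(F)   # 1/(a + s)
```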


Watch Video

Linear Dynamical Systems Lectures 9 & 10 - Autonomous linear dynamical systems

Watch Video

Linear Dynamical Systems Lecture 8 - Least-norm solutions of underdetermined equations

Watch Video

Linear Dynamical Systems Lecture 7 - Regularized least squares and the Gauss-Newton method

The Gauss–Newton algorithm is a method for solving non-linear least squares problems. It can be seen as a modification of Newton's method for finding a minimum of a function. Unlike Newton's method, the Gauss–Newton algorithm can only be used to minimize a sum of squared function values, but it has the advantage that second derivatives, which can be challenging to compute, are not required. Non-linear least squares problems arise, for instance, in non-linear regression, where parameters in a model are sought such that the model is in good agreement with available observations. The method is due to the renowned mathematician Carl Friedrich Gauss.
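A minimal Gauss-Newton sketch (the exponential model and data below are assumed for illustration, not taken from the lecture): at each step the residuals are linearized via their Jacobian, and a linear least-squares problem gives the update, with no second derivatives needed.

```python
import numpy as np

def gauss_newton(x, y, theta, iters=20):
    """Fit y = c * exp(k * x) by Gauss-Newton iteration on theta = (c, k)."""
    for _ in range(iters):
        c, k = theta
        r = c * np.exp(k * x) - y                      # residual vector
        # Jacobian of the residuals w.r.t. (c, k): only first derivatives
        J = np.column_stack([np.exp(k * x), c * x * np.exp(k * x)])
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)  # solve J @ step = -r in LS sense
        theta = theta + step
    return theta

x = np.linspace(0.0, 1.0, 50)
y = 2.0 * np.exp(-1.5 * x)                 # noiseless data from c = 2, k = -1.5
c_hat, k_hat = gauss_newton(x, y, np.array([1.0, 0.0]))
print(c_hat, k_hat)                        # converges to roughly (2.0, -1.5)
```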



Watch Video

Linear Dynamical Systems Lecture 6 - Applications of least squares

Watch Video

Linear Dynamical Systems Lecture 5 - QR Factorization and least squares

The method of least squares is used to solve overdetermined systems. Least squares is often applied in statistical contexts, particularly regression analysis. Least squares can be interpreted as a method of fitting data: the best fit in the least-squares sense is the instance of the model for which the sum of squared residuals has its least value, a residual being the difference between an observed value and the value given by the model. The method was first described by Carl Friedrich Gauss around 1794. Least squares corresponds to the maximum likelihood criterion if the experimental errors have a normal distribution, and it can also be derived as a method of moments estimator. Regression analysis is available in most statistical software packages.
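The QR route to least squares can be sketched like this (the data points are a made-up example, not from the lecture): with A = QR, the overdetermined system Ax ≈ b is solved by x = R⁻¹ Qᵀ b, which minimizes the sum of squared residuals.

```python
import numpy as np

# Fit a line b ≈ x0 + x1 * t to four data points (overdetermined: 4 equations, 2 unknowns)
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0],
              [1.0, 4.0]])
b = np.array([6.0, 5.0, 7.0, 10.0])

Q, R = np.linalg.qr(A)                 # reduced QR: Q is 4x2 with orthonormal columns
x = np.linalg.solve(R, Q.T @ b)        # back-substitute R x = Q^T b

print(x)                               # [3.5, 1.4], same as np.linalg.lstsq(A, b)
```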



Watch Video

Linear Dynamical Systems Lecture 4 - Orthonormal sets of vectors and QR factorization

In linear algebra, two vectors in an inner product space are orthonormal if they are orthogonal and both of unit length. A set of vectors forms an orthonormal set if all vectors in the set are mutually orthogonal and of unit length. An orthonormal set that forms a basis is called an orthonormal basis.
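A small Gram-Schmidt sketch of how an orthonormal set is built (illustrative code, not from the lecture): each column has its components along the earlier orthonormal vectors removed and is then normalized, which is also how the Q factor of a QR factorization arises.

```python
import numpy as np

def gram_schmidt(A):
    """Orthonormalize the columns of A (assumed linearly independent)."""
    Q = np.zeros_like(A, dtype=float)
    for j in range(A.shape[1]):
        v = A[:, j].astype(float)
        for i in range(j):
            v = v - (Q[:, i] @ v) * Q[:, i]   # remove component along earlier q_i
        Q[:, j] = v / np.linalg.norm(v)       # normalize to unit length
    return Q

A = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])
Q = gram_schmidt(A)
print(np.allclose(Q.T @ Q, np.eye(2)))        # True: columns are orthonormal
```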


Watch Video

Linear Dynamical Systems Lecture 3 - Linear algebra

Linear algebra is the branch of mathematics concerned with the study of vectors, vector spaces (also called linear spaces), linear maps (also called linear transformations), and systems of linear equations. Vector spaces are a central theme in modern mathematics; thus, linear algebra is widely used in both abstract algebra and functional analysis. Linear algebra also has a concrete representation in analytic geometry and it is generalized in operator theory. It has extensive applications in the natural sciences and the social sciences, since nonlinear models can often be approximated by linear ones.



Watch Video

Linear Dynamical Systems Lecture 2 - Linear functions

A linear function is a real or complex function f with the functional equation y = f(x) = m·x + b, where m and b are real (or complex) numbers. The equation y = f(x) = m·x + b is called the (general) linear equation. The graph of a real linear function is a straight line.
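A tiny sketch of the definition (m = 2 and b = 1 are chosen arbitrarily for illustration): the slope between any two points of f is the constant m, which is why the graph is a straight line.

```python
def f(x, m=2.0, b=1.0):
    """The linear function y = m*x + b."""
    return m * x + b

# the difference quotient between any two distinct points equals m
slope = (f(5.0) - f(1.0)) / (5.0 - 1.0)
print(slope)     # 2.0
print(f(0.0))    # 1.0, the intercept b
```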


Watch Video

Introduction to Linear Dynamical Systems Lecture 1

Introduction to applied linear algebra and linear dynamical systems, with applications to circuits, signal processing, communications, and control systems. Topics include: least-squares approximations of overdetermined equations and least-norm solutions of underdetermined equations. Symmetric matrices, matrix norm, and singular value decomposition. Eigenvalues, left and right eigenvectors, and dynamical interpretation. Matrix exponential, stability, and asymptotic behavior. Multi-input multi-output systems, impulse and step matrices; convolution and transfer matrix descriptions.

Watch Video