Independence on Algebraic Decompositions and Algorithms
Received: 13-Sep-2021; Accepted: 20-Sep-2021; Published: 27-Sep-2021
Citation: Man YK. Independence on Algebraic Decompositions and Algorithms. J Pur Appl Math. 2021;5(5):55.
This open-access article is distributed under the terms of the Creative Commons Attribution Non-Commercial License (CC BY-NC) (http://creativecommons.org/licenses/by-nc/4.0/), which permits reuse, distribution and reproduction of the article, provided that the original work is properly cited and the reuse is restricted to noncommercial purposes. For commercial reuse, contact reprints@pulsus.com
Introduction
When formalizing intuitive concepts, a common approach is to construct a set of objects (symbols) and a set of rules to manipulate these objects. This is known as algebra.
Linear algebra is the study of vectors and of certain rules for manipulating them. The vectors many of us know from school are called "geometric vectors", which are usually denoted by a small arrow above the letter, e.g., $\vec{x}$ and $\vec{y}$. In this article, we discuss more general concepts of vectors and use a bold letter to represent them, e.g., $\mathbf{x}$ and $\mathbf{y}$. In general, vectors are special objects that can be added together and multiplied by scalars to produce another object of the same kind. From an abstract mathematical viewpoint, any object that satisfies these two properties can be considered a vector (see the sketch after the list below).
Here are some examples of such vector objects:
• Geometric vectors
• Polynomials
• Audio signals
• Elements of $\mathbb{R}^n$
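To make these closure properties concrete, the following is a minimal sketch in Python (the article prescribes no software; NumPy is assumed here purely for illustration) showing that both elements of $\mathbb{R}^n$ and polynomials can be added together and multiplied by scalars to produce objects of the same kind.

import numpy as np
from numpy.polynomial import Polynomial

# Elements of R^n: addition and scalar multiplication stay in R^n.
x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])
print(x + y)    # [5. 7. 9.]  -- still a vector in R^3
print(2.0 * x)  # [2. 4. 6.]  -- still a vector in R^3

# Polynomials: addition and scalar multiplication yield polynomials.
p = Polynomial([1, 0, 2])  # represents 1 + 2x^2
q = Polynomial([0, 3])     # represents 3x
print(p + q)               # 1 + 3x + 2x^2  -- still a Polynomial object
print(0.5 * p)             # 0.5 + x^2      -- still a Polynomial object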
In a system of linear equations with two variables $x_1, x_2$, each linear equation defines a line in the $x_1x_2$-plane. Since a solution to a system of linear equations must satisfy all equations simultaneously, the solution set is the intersection of these lines. This intersection can be a line (if the equations describe the same line), a point, or empty (if the lines are parallel).
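As an illustration of the single-point case, here is a minimal sketch in Python with NumPy (the coefficient values are chosen arbitrarily for demonstration) that computes the intersection point of two non-parallel lines.

import numpy as np

# Two linear equations in x1, x2, each defining a line in the x1-x2 plane:
#   2*x1 + 1*x2 = 5
#   1*x1 - 1*x2 = 1
A = np.array([[2.0, 1.0],
              [1.0, -1.0]])
b = np.array([5.0, 1.0])

# A unique solution exists when the lines are not parallel (A is invertible);
# it is the single point where the two lines intersect.
solution = np.linalg.solve(A, b)
print(solution)  # [2. 1.]  -> intersection point (x1, x2) = (2, 1)

If the lines were parallel, np.linalg.solve would raise a LinAlgError, matching the empty solution set; if the two equations described the same line, the solution set would be that entire line.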
Matrix multiplication is not defined as an element-wise operation on matrix entries, i.e., $c_{ij} \neq a_{ij}b_{ij}$ (even if the sizes of A and B are chosen appropriately). This kind of element-wise multiplication often appears in programming languages when we multiply (multi-dimensional) arrays with each other, and is called the Hadamard product.

The leading coefficient of a row (the first nonzero number from the left) is called the pivot, and it is always strictly to the right of the pivot of the row above it. Therefore, any equation system in row-echelon form always has a "staircase" structure. A system is in reduced row-echelon form if it is in row-echelon form, every pivot is 1, and each pivot is the only nonzero entry in its column. The reduced row-echelon form will play an important role later because it allows us to determine the general solution of a system of linear equations in a straightforward way. Gaussian elimination is an algorithm that performs elementary transformations to bring a system of linear equations into reduced row-echelon form; both operations are contrasted in the sketch below.

Why is approximation theory useful? The answer goes much further than the rather tired old fact that your computer relies on approximations to evaluate functions like sin(x) and exp(x). There are also many other fascinating and important topics of approximation theory not touched upon here, including splines, wavelets, radial basis functions, compressed sensing, and multivariate approximations of all kinds.
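The following minimal sketch (assuming NumPy and SymPy purely for illustration; the specific matrices are arbitrary) contrasts the Hadamard product with ordinary matrix multiplication, and then uses SymPy's rref routine, which performs Gaussian elimination, to bring a small system to reduced row-echelon form.

import numpy as np
import sympy as sp

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[5, 6],
              [7, 8]])

# Hadamard (element-wise) product: c_ij = a_ij * b_ij.
print(A * B)  # [[ 5 12] [21 32]]

# Ordinary matrix multiplication: c_ij = sum over k of a_ik * b_kj.
print(A @ B)  # [[19 22] [43 50]]

# Gaussian elimination to reduced row-echelon form.
# Augmented matrix for the system  x1 + 2*x2 = 5,  3*x1 + 4*x2 = 6.
M = sp.Matrix([[1, 2, 5],
               [3, 4, 6]])
rref_matrix, pivot_columns = M.rref()
print(rref_matrix)    # Matrix([[1, 0, -4], [0, 1, 9/2]])
print(pivot_columns)  # (0, 1) -- every pivot is 1 and alone in its column

Reading the general solution off the reduced form is immediate: the last column gives $x_1 = -4$, $x_2 = 9/2$, which is why the reduced row-echelon form is so convenient.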
Collins' CAD-based (cylindrical algebraic decomposition) decision procedure for elementary algebra and geometry is the best known. Schwartz and Sharir used the CAD algorithm to solve a motion planning problem. Lankford and Dershowitz pointed out that a decision procedure for elementary algebra and geometry could be used to test the termination of term rewriting systems. Kahn used CADs to solve a problem on frameworks in algebraic topology.