Are there any other methods to apply to solving simultaneous equations?



We are asked to solve for $x$ and $y$ in the following pair of simultaneous equations:




$$\begin{align}3x+2y&=36 \tag{1}\\ 5x+4y&=64\tag{2}\end{align}$$




I can multiply $(1)$ by $2$, yielding $6x + 4y = 72$, and subtracting $(2)$ from this new equation eliminates $4y$ to solve strictly for $x$; i.e. $6x - 5x = 72 - 64 \Rightarrow x = 8$. Substituting $x=8$ into $(2)$ reveals that $y=6$.



I could also subtract $(1)$ from $(2)$ and divide by $2$, yielding $x+y=14$. Rewriting the pair as $$\begin{align}3x+3y - y &= 36 \tag{1a}\\ 5x + 5y - y &= 64\tag{1b}\end{align}$$ and substituting $x+y=14$, it follows that $42 - y = 36$ and $70 - y = 64$, both revealing $y=6$, and so $x = 14 - 6 = 8$.



I can even use matrices!



$(1)$ and $(2)$ could be written in matrix form:



$$\begin{align}\begin{bmatrix} 3 & 2 \\ 5 & 4\end{bmatrix}\begin{bmatrix} x \\ y\end{bmatrix}&=\begin{bmatrix}36 \\ 64\end{bmatrix}\tag{3} \\ \begin{bmatrix} x \\ y\end{bmatrix} &= \begin{bmatrix} 3 & 2 \\ 5 & 4\end{bmatrix}^{-1}\begin{bmatrix}36 \\ 64\end{bmatrix} \\ &= \frac12\begin{bmatrix}4 & -2 \\ -5 & 3\end{bmatrix}\begin{bmatrix}36 \\ 64\end{bmatrix} \\ &=\frac12\begin{bmatrix} 16 \\ 12\end{bmatrix} \\ &= \begin{bmatrix} 8 \\ 6\end{bmatrix} \\ \\ \therefore x&=8 \\ \therefore y&= 6\end{align}$$




Question



Are there any other methods to solve for both $x$ and $y$?




























  • You can use the substitution $y = 18 - \frac{3}{2} x$. Or, you could use Cramer's rule. – Doug M, Apr 9 at 5:32





  • This is a linear system of equations, which some believe is the most studied equation in all of mathematics. The reason is that it is so widely used in applied mathematics that there is always reason to find faster and more robust methods, either generic ones or ones suited to the particularities of a given problem. You might roll your eyes at my claim when thinking of your two-variable system, but some engineers need to solve such systems with hundreds of variables in their jobs. – Mefitico, Apr 9 at 12:28






  • I hope someone performs GMRES by hand on this system and reports the steps. – Rahul, Apr 9 at 17:02







  • Since linear systems are so well studied, there are many approaches (that are essentially equivalent - but maybe not the iterative solutions). As such, does this question essentially boil down to a list of answers, which is not technically on topic for this site? – Teepeemm, Apr 10 at 0:02






  • There is an entire subject called Numerical Linear Algebra which studies efficient ways to solve $Ax = b$. There are many notable algorithms. For example, you could use an iterative algorithm such as the Jacobi method or Gauss-Seidel or, as @Rahul noted, GMRES. There are other direct methods also. For example, you could find the QR factorization $A = QR$, where $Q$ is orthogonal and $R$ is upper triangular, and solve $Rx = Q^T b$ using back substitution. – littleO, Apr 10 at 0:25















linear-algebra systems-of-equations






edited Apr 9 at 7:17 by Rodrigo de Azevedo

asked Apr 9 at 5:16 by user477343







12 Answers






active

oldest

votes



















Is this method allowed?



$$\left[\begin{array}{rr|r}
3 & 2 & 36 \\
5 & 4 & 64
\end{array}\right]
\sim
\left[\begin{array}{rr|r}
1 & \frac{2}{3} & 12 \\
5 & 4 & 64
\end{array}\right]
\sim \left[\begin{array}{rr|r}
1 & \frac{2}{3} & 12 \\
0 & \frac{2}{3} & 4
\end{array}\right] \sim \left[\begin{array}{rr|r}
1 & 0 & 8 \\
0 & \frac{2}{3} & 4
\end{array}\right] \sim \left[\begin{array}{rr|r}
1 & 0 & 8 \\
0 & 1 & 6
\end{array}\right]
$$



which yields $x=8$ and $y=6$.




The first step is $R_1 \to \frac{1}{3}R_1$.

The second step is $R_2 \to R_2 - 5R_1$.

The third step is $R_1 \to R_1 - R_2$.

The fourth step is $R_2 \to \frac{3}{2}R_2$.

Here $R_i$ denotes the $i$-th row.
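The four row operations above can be replayed mechanically. This is a minimal verification sketch (my own, not part of the answer) using exact rational arithmetic so no rounding creeps in:

```python
from fractions import Fraction

# Augmented matrix [A | b] for 3x + 2y = 36, 5x + 4y = 64.
M = [[Fraction(3), Fraction(2), Fraction(36)],
     [Fraction(5), Fraction(4), Fraction(64)]]

# Step 1: R1 -> (1/3) * R1
M[0] = [v / 3 for v in M[0]]
# Step 2: R2 -> R2 - 5 * R1
M[1] = [b - 5 * a for a, b in zip(M[0], M[1])]
# Step 3: R1 -> R1 - R2
M[0] = [a - b for a, b in zip(M[0], M[1])]
# Step 4: R2 -> (3/2) * R2
M[1] = [v * Fraction(3, 2) for v in M[1]]

x, y = M[0][2], M[1][2]
print(x, y)  # 8 6
```

Using `Fraction` keeps the intermediate entry $2/3$ exact, which mirrors the hand computation.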


















  • I have never seen that! What is it? :D – user477343, Apr 9 at 6:07






  • Elementary operations! – Chinnapparaj R, Apr 9 at 6:09






  • I assume $R$ stands for Row. – user477343, Apr 9 at 6:28






  • It's also called Gaussian elimination. – YiFan, Apr 9 at 8:50






  • See also augmented matrix and, for typesetting, tex.stackexchange.com/questions/2233/… . – Eric Towers, Apr 9 at 14:52



















How about using Cramer's Rule? Define $\Delta_x=\begin{bmatrix}36 & 2 \\ 64 & 4\end{bmatrix}$, $\Delta_y=\begin{bmatrix}3 & 36\\ 5 & 64\end{bmatrix}$
and $\Delta_0=\begin{bmatrix}3 & 2\\ 5 & 4\end{bmatrix}$.

Now computation is trivial, as you have $x=\dfrac{\det\Delta_x}{\det\Delta_0}$ and $y=\dfrac{\det\Delta_y}{\det\Delta_0}$.
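Cramer's rule for a $2\times2$ system is three determinants and two divisions; a small Python sketch (my illustration of the rule, with names `Dx`, `Dy` mirroring $\Delta_x$, $\Delta_y$):

```python
def det2(m):
    """Determinant of a 2x2 matrix given as [[a, b], [c, d]]."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

A  = [[3, 2], [5, 4]]    # coefficient matrix (Delta_0)
Dx = [[36, 2], [64, 4]]  # Delta_x: b replaces the x-column
Dy = [[3, 36], [5, 64]]  # Delta_y: b replaces the y-column

x = det2(Dx) / det2(A)  # (36*4 - 2*64) / (3*4 - 2*5) = 16/2
y = det2(Dy) / det2(A)  # (3*64 - 36*5) / 2 = 12/2
print(x, y)  # 8.0 6.0
```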














  • Wow! Very useful! I have never heard of this method before! $(+1)$ – user477343, Apr 9 at 6:07






  • You must've made a calculation mistake. Recheck your calculations. It does indeed give $(2, 1)$ as the answer. Cheers :) – Paras Khosla, Apr 9 at 6:55






  • Cramer's rule is important theoretically, but it is a very inefficient way to solve equations numerically, except for two equations in two unknowns. For $n$ equations, Cramer's rule requires $n!$ arithmetic operations to evaluate the determinants, compared with about $n^3$ operations to solve using Gaussian elimination. Even when $n = 10$, $n^3 = 1000$ but $n! = 3628800$. And in many real-world applied math computations, $n = 100{,}000$ is a "small problem"! – alephzero, Apr 9 at 9:06






  • @alephzero Just to be technical, there are faster ways to calculate the determinant of large matrices. However, the one method I know that does it in $n^3$ relies on Gaussian elimination itself, which makes it a bit redundant... – mlk, Apr 9 at 10:11






  • @user477343 asked for different ways to solve, not more efficient ways to solve. This is awesome. – user1717828, Apr 9 at 12:09




















Fixed Point Iteration



This is not efficient, but it is another valid way to solve the system. Treat the system as a matrix equation and rearrange to get $\begin{bmatrix} x\\ y\end{bmatrix}$ on the left-hand side.



Define
$f\begin{bmatrix} x\\ y\end{bmatrix}=\begin{bmatrix} (36-2y)/3 \\ (64-5x)/4\end{bmatrix}$



Start with an initial guess of $\begin{bmatrix} x\\ y\end{bmatrix}=\begin{bmatrix} 0\\ 0\end{bmatrix}$



The result is $f\begin{bmatrix} 0\\ 0\end{bmatrix}=\begin{bmatrix} 12\\ 16\end{bmatrix}$



Now plug that back into $f$.



The result is $f\begin{bmatrix} 12\\ 16\end{bmatrix}=\begin{bmatrix} 4/3\\ 1\end{bmatrix}$



Keep plugging the result back in. After 100 iterations you have:



$\begin{bmatrix} 7.9991\\ 5.9993\end{bmatrix}$



Here is a graph of the progression of the iteration: *(figure: iteration path spiraling in toward the solution)*
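The iteration in the answer is short enough to run directly; a sketch of it in plain Python (my own transcription of the rearranged system):

```python
def f(x, y):
    # Rearranged system: x from equation (1), y from equation (2).
    return (36 - 2 * y) / 3, (64 - 5 * x) / 4

x, y = 0.0, 0.0  # initial guess
for _ in range(100):
    x, y = f(x, y)
print(round(x, 4), round(y, 4))  # 7.9991 5.9993
```

The error shrinks by a factor of roughly $\sqrt{5/6}\approx0.913$ per step, which is why 100 iterations still leave a visible residual.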














  • So we just have $f\begin{bmatrix} 0 \\ 0\end{bmatrix}$ and then $f\bigg(f\begin{bmatrix} 0 \\ 0\end{bmatrix}\bigg)$, and by letting $f^k(\cdot) = f(f(\ldots f(f(\cdot))\ldots))$ $k$ times, this overall goes to $$f^{100}\begin{bmatrix} 0 \\ 0\end{bmatrix}$$ and etc... hmm... it actually seems quite appealing to me, regardless of its low efficiency, as you say :P – user477343, Apr 10 at 0:46











  • Note that this doesn't always work: $f$ needs to be a contraction. – flawr, yesterday



















By false position:



Assume $x=10,y=3$, which fulfills the first equation, and let $x=10+x'$, $y=3+y'$. Then, after simplification,



$$3x'+2y'=0,\\5x'+4y'=2.$$



We easily eliminate $y'$ (using $4y'=-6x'$) and get



$$-x'=2.$$



Though this method is not essentially different from, say, elimination, it can be useful for by-hand computation as it yields smaller terms.
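The shift-and-correct idea translates directly to code. This is a sketch of the answer's computation (variable names `xp`, `yp` stand for the corrections $x'$, $y'$):

```python
from fractions import Fraction

# Guess (x0, y0) = (10, 3) satisfies equation (1) exactly: 3*10 + 2*3 = 36.
x0, y0 = 10, 3

# Residual of equation (2) at the guess: 64 - (5*x0 + 4*y0) = 2,
# so the corrections satisfy 3x' + 2y' = 0 and 5x' + 4y' = 2.
r = 64 - (5 * x0 + 4 * y0)

# From 3x' + 2y' = 0: y' = -(3/2) x'.  Substituting into 5x' + 4y' = r
# gives 5x' - 6x' = r, i.e. x' = -r.
xp = Fraction(-r)
yp = Fraction(-3, 2) * xp

x_sol, y_sol = x0 + xp, y0 + yp
print(x_sol, y_sol)  # 8 6
```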














  • This is a great method. +1 :) – Paras Khosla, Apr 9 at 16:39






  • This is like a variation of the elimination method, but breaks things down better! Already upvoted :P – user477343, yesterday



















Another method to solve simultaneous equations in two dimensions is to plot graphs of the equations on a Cartesian plane and find the point of intersection.



*(figure: plot of the two lines, crossing at the solution)*
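The graphical idea can also be mimicked numerically: write each equation as a line $y(x)$ and locate where the vertical gap between the graphs changes sign. This framing (bisection on the gap) is my own addition, not part of the answer:

```python
def y1(x):  # from 3x + 2y = 36
    return (36 - 3 * x) / 2

def y2(x):  # from 5x + 4y = 64
    return (64 - 5 * x) / 4

# The gap g(x) = y1(x) - y2(x) changes sign across the crossing point.
g = lambda x: y1(x) - y2(x)

lo, hi = 0.0, 20.0  # bracket on which g changes sign
for _ in range(60):
    mid = (lo + hi) / 2
    if g(lo) * g(mid) <= 0:  # crossing lies in [lo, mid]
        hi = mid
    else:
        lo = mid
x = (lo + hi) / 2
print(x, y1(x))  # approximately 8.0 6.0
```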


















  • That's what my school textbook wants me to do, but it can sometimes be a bit... tiring... but methinks graphing does reveal the essence of simultaneous equations. $(+1)$ – user477343, Apr 10 at 0:45




















Any method you can come up with will in the end amount to Cramer's rule, which gives explicit formulas for the solution. Except in special cases, the solution of a system is unique, so you will always be computing the ratio of those determinants.



Anyway, it turns out that by organizing the computation in certain ways, you can reduce the number of arithmetic operations to be performed. For $2\times2$ systems,
the different variants make little difference in this respect. Things become more interesting for $n\times n$ systems.



Direct application of Cramer is by far the worst, as it takes a number of operations proportional to $(n+1)!$, which is huge. Even for $3\times3$ systems, it should be avoided. The best method to date is Gaussian elimination (you eliminate one unknown at a time by forming linear combinations of the equations and turn the system into triangular form). The total workload is proportional to $n^3$ operations.




The steps of standard Gaussian elimination:



$$\begin{cases}ax+by=c,\\dx+ey=f.\end{cases}$$



Subtract the first times $\dfrac{d}{a}$ from the second,



$$\begin{cases}ax+by=c,\\0x+\left(e-b\dfrac{d}{a}\right)y=f-c\dfrac{d}{a}.\end{cases}$$



Solve for $y$,



$$\begin{cases}ax+by=c,\\y=\dfrac{f-c\dfrac{d}{a}}{e-b\dfrac{d}{a}}.\end{cases}$$



Solve for $x$,



$$\begin{cases}x=\dfrac{c-b\dfrac{f-c\dfrac{d}{a}}{e-b\dfrac{d}{a}}}{a},\\y=\dfrac{f-c\dfrac{d}{a}}{e-b\dfrac{d}{a}}.\end{cases}$$



So written, the formulas are a little scary, but when you use intermediate variables, the complexity vanishes:



$$d'=\frac{d}{a},\quad e'=e-bd',\quad f'=f-cd'\quad\to\quad y=\frac{f'}{e'},\quad x=\frac{c-by}{a}.$$



Anyway, for a $2\times2$ system, this is worse than Cramer!



$$\begin{cases}x=\dfrac{ce-bf}{\Delta},\\y=\dfrac{af-cd}{\Delta}\end{cases}$$ where $\Delta=ae-bd$.
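Both recipes for the general system $ax+by=c$, $dx+ey=f$ are a few lines of code. This sketch (my own, following the intermediate-variable scheme and the Cramer formulas above) makes the operation counts easy to compare:

```python
def solve_elimination(a, b, c, d, e, f):
    """Gaussian elimination via the intermediate variables d', e', f'."""
    dp = d / a        # d'
    ep = e - b * dp   # e'
    fp = f - c * dp   # f'
    y = fp / ep
    x = (c - b * y) / a
    return x, y

def solve_cramer(a, b, c, d, e, f):
    """Cramer's rule: x = (ce - bf)/Delta, y = (af - cd)/Delta."""
    delta = a * e - b * d
    return (c * e - b * f) / delta, (a * f - c * d) / delta

ex = solve_elimination(3, 2, 36, 5, 4, 64)
cr = solve_cramer(3, 2, 36, 5, 4, 64)
print(ex, cr)  # both give (8, 6), up to floating-point rounding
```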




For large systems, say $100\times100$ and up, very different methods are used. They work by computing approximate solutions and improving them iteratively until the inaccuracy becomes acceptable. Quite often such systems are sparse (many coefficients are zero), and this is exploited to reduce the number of operations. (The direct methods are inappropriate as they would break the sparseness property.)














  • +1 for the last paragraph, which is, I think, of utmost importance. Indeed, our computers solve many, many linear systems each day (and quite huge ones: not 100x100 but more like 100'000 x 100'000). None of them are solved by any of the methods discussed in the answers so far. – Surb, Apr 9 at 19:55



















Construct the Groebner basis of your system, with the variables ordered $x$, $y$:
$$ \mathrm{GB}(3x+2y-36,\ 5x+4y-64) = \{y-6,\ x-8\} $$
and read out the solution. (If we reverse the variable order, we get the same basis, but in reversed order.) Under the hood, this is performing Gaussian elimination for this problem. However, Groebner bases are not restricted to linear systems, so they can be used to construct solution sets for systems of polynomials in several variables.




Perform lattice reduction on the lattice generated by $(3,2,-36)$ and $(5,4,-64)$. A sequence of reductions (similar to the Euclidean algorithm for GCDs): \begin{align*}
(5,4,-64) - (3,2,-36) &= (2,2,-28) \\
(3,2,-36) - (2,2,-28) &= (1,0,-8) \tag{1} \\
(2,2,-28) - 2(1,0,-8) &= (0,2,-12) \tag{2} \\
\end{align*}
From (1), we have $x=8$. From (2), $2y = 12$, so $y = 6$. (Generally, there can be quite a bit more "creativity" required to get the needed zeroes in the lattice vector components. One implementation of the LLL algorithm terminates with the shorter vectors $(-1,2,4), (-2,2,4)$, but we would continue to manipulate these to get the desired zeroes.)
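The three reduction steps operate on integer vectors $(a,b,c)$ encoding $ax+by+c=0$; they can be replayed verbatim. A small sketch (my transcription of the steps, not a general lattice-reduction routine):

```python
# Integer row vectors (a, b, c) encoding a*x + b*y + c = 0.
v1 = (3, 2, -36)  # 3x + 2y - 36 = 0
v2 = (5, 4, -64)  # 5x + 4y - 64 = 0

def sub(u, v, k=1):
    """Componentwise u - k*v; each result still encodes a valid equation."""
    return tuple(ui - k * vi for ui, vi in zip(u, v))

w1 = sub(v2, v1)     # (2, 2, -28)
w2 = sub(v1, w1)     # (1, 0, -8)   i.e.  x - 8 = 0
w3 = sub(w1, w2, 2)  # (0, 2, -12)  i.e.  2y - 12 = 0
print(w2, w3)  # (1, 0, -8) (0, 2, -12)
```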



























    $$\begin{align}3x+2y&=36 \tag{1}\\ 5x+4y&=64\tag{2}\end{align}$$



    From $(1)$, $x=\frac{36-2y}{3}$; substitute into $(2)$ and you'll get $5\left(\frac{36-2y}{3}\right)+4y=64 \implies y=6$, and then you can get that $x=24/3=8$.



    Another method:
    From $(1)$, $x=\frac{36-2y}{3}$



    From $(2)$, $x=\frac{64-4y}{5}$



    But $x=x \implies \frac{36-2y}{3}=\frac{64-4y}{5}$; cross-multiply and you'll get $5(36-2y)=3(64-4y) \implies y=6$, and substitute to get $x=8$.
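The cross-multiplication in the second method reduces to one linear equation in $y$; a quick exact check of the arithmetic (my own verification sketch):

```python
from fractions import Fraction

# 5(36 - 2y) = 3(64 - 4y)  =>  180 - 10y = 192 - 12y  =>  2y = 12
y = Fraction(192 - 180, 12 - 10)  # = 6
x = (36 - 2 * y) / Fraction(3)    # back-substitute into x = (36 - 2y)/3
print(x, y)  # 8 6
```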














    • Pure algebra! I personally prefer the second method. Thanks for that! $(+1)$ – user477343, Apr 9 at 7:55




















    Other answers have given standard, elementary methods of solving simultaneous equations. Here are a few other ones that can be more long-winded and excessive, but work nonetheless.




    Method $1$: (multiplicity of $y$)




    Let $y=kx$ for some $k\in\Bbb R$. Then $$3x+2y=36\implies x(2k+3)=36\implies x=\frac{36}{2k+3}\\5x+4y=64\implies x(4k+5)=64\implies x=\frac{64}{4k+5}$$ so $$36(4k+5)=64(2k+3)\implies (144-128)k=(192-180)\implies k=\frac34.$$ Now $$x=\frac{64}{4k+5}=\frac{64}{4\cdot\frac34+5}=8\implies y=kx=\frac34\cdot8=6.\quad\square$$





    Method $2$: (use this if you really like quadratic equations :P)




    How about we try squaring the equations? We get $$3x+2y=36\implies 9x^2+12xy+4y^2=1296\\5x+4y=64\implies 25x^2+40xy+16y^2=4096$$ Multiplying the first equation by $10$ and the second by $3$ yields $$90x^2+120xy+40y^2=12960\\75x^2+120xy+48y^2=12288$$ and subtracting gives us $$15x^2-8y^2=672$$ which is a hyperbola. Notice that subtracting the two linear equations gives you $2x+2y=28\implies y=14-x$, so you have the nice quadratic $$15x^2-8(14-x)^2=672.$$ Enjoy!
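Carrying the second method one step further (the expansion below is my arithmetic, not the answer's): the quadratic $15x^2-8(14-x)^2=672$ simplifies to $x^2+32x-320=0$, and squaring has introduced an extraneous root that must be discarded against the original linear equations:

```python
import math

# 15x^2 - 8(14 - x)^2 = 672 expands to 7x^2 + 224x - 2240 = 0,
# i.e. x^2 + 32x - 320 = 0.
a, b, c = 1, 32, -320
disc = math.sqrt(b * b - 4 * a * c)  # sqrt(2304) = 48
roots = [(-b + disc) / (2 * a), (-b - disc) / (2 * a)]
print(roots)  # [8.0, -40.0]

# Squaring introduced an extraneous root; keep only the root that
# satisfies the original linear equation 3x + 2y = 36 with y = 14 - x.
valid = [x for x in roots if math.isclose(3 * x + 2 * (14 - x), 36)]
print(valid)  # [8.0]
```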



















    • In your first method, why do you substitute $k=\frac34$ in the second equation $5x+4y=64$ as opposed to the first equation $3x+2y=36$? Also, hello! :D – user477343, Apr 9 at 8:39







    • Because for $3x+2y=36$, we get $2k$ in the denominator, but $2k=3/2$ leaves us with a fraction. If we use the other equation, we get $4k=3$, which is neater. – TheSimpliFire, Apr 9 at 8:41










    • So it doesn't really matter which one we substitute it in, but it is good to have some intuition when deciding! Thanks for your answer :P $(+1)$ – user477343, Apr 9 at 9:02







    • No, at an intersection point between two lines, most of their properties at that point are the same (apart from gradient, of course). – TheSimpliFire, Apr 9 at 9:06










    • OK. Thank you for clarifying! – user477343, Apr 10 at 0:43


















    3













    As another iterative method I suggest the Jacobi method. A sufficient criterion for its convergence is that the matrix be diagonally dominant, which the matrix of our system is not:



    $\begin{bmatrix} 3 & 2 \\ 5 & 4\end{bmatrix}\begin{bmatrix} x \\ y\end{bmatrix}=\begin{bmatrix}36 \\ 64\end{bmatrix}$



    We can however fix this by substituting e.g. $y' := \frac{1}{1.3}\,y$ (so $y = 1.3\,y'$). Then the system is



    $\underbrace{\begin{bmatrix} 3 & 2.6 \\ 5 & 5.2\end{bmatrix}}_{=:A}\begin{bmatrix} x \\ y'\end{bmatrix}=\begin{bmatrix}36 \\ 64\end{bmatrix}$



    and $A$ is diagonally dominant. Then we can decompose $A = L + D + U$, where $L$ and $U$ are the strict lower and upper triangular parts of $A$ and $D$ is its diagonal, and the iteration



    $$\vec x_{i+1} = D^{-1}\left(b - (L+U)\vec x_i\right)$$



    will converge to the solution $(x,y')$. Note that $D^{-1}$ is particularly easy to compute, as you just have to invert the diagonal entries. So in this case the iteration is



    $$\vec x_{i+1} = \begin{bmatrix} 1/3 & 0 \\ 0 & 1/5.2 \end{bmatrix}\left(b - \begin{bmatrix} 0 & 2.6 \\ 5 & 0 \end{bmatrix} \vec x_i\right)$$



    So you can actually view this as a fixed point iteration of the function $f(\vec x) = D^{-1}\left(b - (L+U)\vec x\right)$, which is guaranteed to be a contraction when $A$ is diagonally dominant. It is actually quite slow and has hardly any practical application for directly solving systems of linear equations, but it (or variations of it) is quite often used as a preconditioner.
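A minimal sketch of this Jacobi iteration in plain Python (the iteration count and variable names are my choices, not part of the answer):

```python
# Jacobi iteration on the rescaled, diagonally dominant system
# [[3, 2.6], [5, 5.2]] [x, y']^T = [36, 64], where y' = y/1.3.
A = [[3.0, 2.6], [5.0, 5.2]]
b = [36.0, 64.0]
x = [0.0, 0.0]
for _ in range(400):
    # x_{i+1} = D^{-1}(b - (L+U) x_i): each component is updated
    # using only the *previous* iterate's other component.
    x = [(b[0] - A[0][1] * x[1]) / A[0][0],
         (b[1] - A[1][0] * x[0]) / A[1][1]]
x_sol, y_sol = x[0], 1.3 * x[1]   # undo the rescaling y = 1.3 y'
print(round(x_sol, 6), round(y_sol, 6))  # 8.0 6.0
```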





































      2













      It is clear that:




      • $x=10$, $y=3$ is an integer solution of $(1)$: $3x+2y=36$.


      • $x=12$, $y=1$ is an integer solution of $(2)$: $5x+4y=64$.

      Then, from the theory of Linear Diophantine equations:



      • Any integer solution of $(1)$ has the form $x_1=10+2t$, $y_1=3-3t$ with $t$ integer.

      • Any integer solution of $(2)$ has the form $x_2=12+4t$, $y_2=1-5t$ with $t$ integer.

      Then, the system has an integer solution $(x_0,y_0)$ if and only if there exists an integer $t$ such that



      $$10+2t=x_0=12+4t\qquad\text{and}\qquad 3-3t=y_0=1-5t.$$



      Solving for $t$, we see that there is an integer $t$ satisfying both equations, namely $t=-1$. Thus the system has the integer solution
      $$x_0=12+4(-1)=8,\quad y_0=1-5(-1)=6.$$



      Note that we can start from any pair of integer solutions, and the method will find the solution provided it is an integer, which is often not the case.
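The search over the two solution families can be sketched in a few lines of Python (the range bound is an arbitrary choice of mine):

```python
# Solutions of (1): 3x+2y=36 are (10+2t, 3-3t); solutions of
# (2): 5x+4y=64 are (12+4s, 1-5s). Intersect the two families.
family1 = {(10 + 2 * t, 3 - 3 * t) for t in range(-50, 51)}
family2 = {(12 + 4 * s, 1 - 5 * s) for s in range(-50, 51)}
common = family1 & family2
print(common)  # {(8, 6)}
```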





































        0













        Consider the three vectors $\mathbf{A}=(3,2)$, $\mathbf{B}=(5,4)$ and $\mathbf{X}=(x,y)$. Your system can be written as $$\mathbf{A}\cdot\mathbf{X}=a\\\mathbf{B}\cdot\mathbf{X}=b$$ where $a=36$, $b=64$, and let $\mathbf{A}_\perp=(-2,3)$, which is orthogonal to $\mathbf{A}$. The first equation gives us $\mathbf{X}=\dfrac{a\mathbf{A}}{\|\mathbf{A}\|^2}+\lambda\mathbf{A}_\perp$. To find $\lambda$ we use the second equation, which gives $\lambda=\dfrac{1}{\mathbf{A}_\perp\cdot\mathbf{B}}\left(b-\dfrac{a\,\mathbf{A}\cdot\mathbf{B}}{\|\mathbf{A}\|^2}\right)$. Et voilà :
        $$\mathbf{X}=\dfrac{a\mathbf{A}}{\|\mathbf{A}\|^2}+\dfrac{\mathbf{A}_\perp}{\mathbf{A}_\perp\cdot\mathbf{B}}\left(b-\dfrac{a\,\mathbf{A}\cdot\mathbf{B}}{\|\mathbf{A}\|^2}\right)$$
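A numeric check of this orthogonal-decomposition formula, sketched in Python (the helper `dot` and all names are mine):

```python
# A = (3,2), B = (5,4), A_perp = (-2,3), a = 36, b = 64, as above.
def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

A, B, A_perp, a, b = (3, 2), (5, 4), (-2, 3), 36, 64
lam = (b - a * dot(A, B) / dot(A, A)) / dot(A_perp, B)
# X = aA/||A||^2 + lambda * A_perp, rounded to suppress float noise
X = tuple(round(a * A[i] / dot(A, A) + lam * A_perp[i], 9) for i in range(2))
print(X)  # (8.0, 6.0)
```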





























          12 Answers
          12























          19













          Is this method allowed?



          $$\left[\begin{array}{rr|r}
          3 & 2 & 36 \\
          5 & 4 & 64
          \end{array}\right]
          \sim
          \left[\begin{array}{rr|r}
          1 & \frac23 & 12 \\
          5 & 4 & 64
          \end{array}\right]
          \sim \left[\begin{array}{rr|r}
          1 & \frac23 & 12 \\
          0 & \frac23 & 4
          \end{array}\right] \sim \left[\begin{array}{rr|r}
          1 & 0 & 8 \\
          0 & \frac23 & 4
          \end{array}\right] \sim \left[\begin{array}{rr|r}
          1 & 0 & 8 \\
          0 & 1 & 6
          \end{array}\right]
          $$



          which yields $x=8$ and $y=6$.




          The first step is $R_1 \to R_1 \times \frac13$

          The second step is $R_2 \to R_2 - 5R_1$

          The third step is $R_1 \to R_1 - R_2$

          The fourth step is $R_2 \to R_2 \times \frac32$

          Here $R_i$ denotes the $i$-th row.
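The four row operations can be replayed programmatically. Here is a sketch in Python with exact fractions (my own code, not part of the answer):

```python
from fractions import Fraction

# Augmented matrix [A | b] for 3x+2y=36, 5x+4y=64.
M = [[Fraction(3), Fraction(2), Fraction(36)],
     [Fraction(5), Fraction(4), Fraction(64)]]
M[0] = [v / 3 for v in M[0]]                    # R1 <- R1 * 1/3
M[1] = [v - 5 * w for v, w in zip(M[1], M[0])]  # R2 <- R2 - 5 R1
M[0] = [v - w for v, w in zip(M[0], M[1])]      # R1 <- R1 - R2
M[1] = [v * Fraction(3, 2) for v in M[1]]       # R2 <- R2 * 3/2
print(M)  # [[1, 0, 8], [0, 1, 6]] as Fractions
```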





























          • I have never seen that! What is it? :D
            – user477343, Apr 9 at 6:07

          • elementary operations!
            – Chinnapparaj R, Apr 9 at 6:09

          • I assume $R$ stands for Row.
            – user477343, Apr 9 at 6:28

          • It's also called Gaussian elimination.
            – YiFan, Apr 9 at 8:50

          • See also augmented matrix and, for typesetting, tex.stackexchange.com/questions/2233/… .
            – Eric Towers, Apr 9 at 14:52















          answered Apr 9 at 5:43









          Chinnapparaj R


          16













          How about using Cramer's Rule? Define $\Delta_x=\left[\begin{matrix}36 & 2 \\ 64 & 4\end{matrix}\right]$, $\Delta_y=\left[\begin{matrix}3 & 36 \\ 5 & 64\end{matrix}\right]$
          and $\Delta_0=\left[\begin{matrix}3 & 2 \\ 5 & 4\end{matrix}\right]$.

          Now the computation is trivial, as you have $x=\dfrac{\det\Delta_x}{\det\Delta_0}$ and $y=\dfrac{\det\Delta_y}{\det\Delta_0}$.
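Cramer's rule for this $2\times2$ system takes only a few lines of Python (the `det2` helper is mine):

```python
# Cramer's rule: x = det(Delta_x)/det(Delta_0), y = det(Delta_y)/det(Delta_0).
def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

d0 = det2([[3, 2], [5, 4]])    # det Delta_0 = 2
dx = det2([[36, 2], [64, 4]])  # det Delta_x = 16
dy = det2([[3, 36], [5, 64]])  # det Delta_y = 12
print(dx // d0, dy // d0)  # 8 6
```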























          • Wow! Very useful! I have never heard of this method before! $(+1)$
            – user477343, Apr 9 at 6:07

          • You must've made a calculation mistake. Recheck your calculations. It does indeed give $(2, 1)$ as the answer. Cheers :)
            – Paras Khosla, Apr 9 at 6:55

          • Cramer's rule is important theoretically, but it is a very inefficient way to solve equations numerically, except for two equations in two unknowns. For $n$ equations, Cramer's rule requires $n!$ arithmetic operations to evaluate the determinants, compared with about $n^3$ operations to solve using Gaussian elimination. Even when $n = 10$, $n^3 = 1000$ but $n! = 3628800$. And in many real world applied math computations, $n = 100,000$ is a "small problem!"
            – alephzero, Apr 9 at 9:06

          • @alephzero Just to be technical, there are faster ways to calculate the determinant of large matrices. However, the one method I know to do it in $n^3$ relies on Gaussian elimination itself, which makes it a bit redundant...
            – mlk, Apr 9 at 10:11

          • @user477343 asked for different ways to solve, not more efficient ways to solve. This is awesome.
            – user1717828, Apr 9 at 12:09


















          answered Apr 9 at 5:58









          Paras Khosla


          13













          Fixed Point Iteration



          This is not efficient, but it's another valid way to solve the system. Treat the system as a matrix equation and rearrange to get $\begin{bmatrix} x \\ y\end{bmatrix}$ on the left hand side.



          Define
          $f\begin{bmatrix} x \\ y\end{bmatrix}=\begin{bmatrix} (36-2y)/3 \\ (64-5x)/4\end{bmatrix}$



          Start with an initial guess of $\begin{bmatrix} x \\ y\end{bmatrix}=\begin{bmatrix} 0 \\ 0\end{bmatrix}$



          The result is $f\begin{bmatrix} 0 \\ 0\end{bmatrix}=\begin{bmatrix} 12 \\ 16\end{bmatrix}$



          Now plug that back into $f$.



          The result is $f\begin{bmatrix} 12 \\ 16\end{bmatrix}=\begin{bmatrix} 4/3 \\ 1\end{bmatrix}$



          Keep plugging the result back in. After 100 iterations you have:



          $\begin{bmatrix} 7.9991 \\ 5.9993\end{bmatrix}$



          Here is a graph of the progression of the iteration: (image "iteration path" not shown)
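The whole iteration is a few lines of Python (a sketch; `f` matches the definition in the answer, the rest is my scaffolding):

```python
# Fixed-point iteration x_{k+1} = f(x_k) with
# f(x, y) = ((36 - 2y)/3, (64 - 5x)/4), starting from (0, 0).
def f(x, y):
    return (36 - 2 * y) / 3, (64 - 5 * x) / 4

x, y = 0.0, 0.0
for _ in range(100):
    x, y = f(x, y)
print(round(x, 4), round(y, 4))  # 7.9991 5.9993
```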























          • So we just have $f\begin{bmatrix} 0 \\ 0\end{bmatrix}$ and then $f\bigg(f\begin{bmatrix} 0 \\ 0\end{bmatrix}\bigg)$, and by letting $f^k(\cdot) = f(f(\ldots f(f(\cdot))\ldots))$ ($k$ times), this overall goes to $$f^{100}\begin{bmatrix} 0 \\ 0\end{bmatrix}$$ etc... hmm... it actually seems quite appealing to me, regardless of its low efficiency, as you say :P
            – user477343, Apr 10 at 0:46

          • Note that this doesn't always work: $f$ needs to be a contraction.
            – flawr, yesterday















          answered Apr 9 at 18:12









Kelly Lowder
• 2
  So we just have $f\begin{bmatrix} 0 \\ 0\end{bmatrix}$ and then $f\bigg(f\begin{bmatrix} 0 \\ 0\end{bmatrix}\bigg)$, and by letting $f^k(\cdot) = f(f(\ldots f(f(\cdot))\ldots))$ ($k$ times), this overall goes to $$f^{100}\begin{bmatrix} 0 \\ 0\end{bmatrix}$$ etc. Hmm... it actually seems quite appealing to me, regardless of its low efficiency, as you say :P
  – user477343, Apr 10 at 0:46

• Note that this doesn't always work: $f$ needs to be a contraction.
  – flawr, yesterday























          12













          By false position:



          Assume $x=10,y=3$, which fulfills the first equation, and let $x=10+x',y=3+y'$. Now, after simplification



$$3x'+2y'=0,\\5x'+4y'=2.$$



          We easily eliminate $y'$ (using $4y'=-6x'$) and get



          $$-x'=2.$$



Though this method is not essentially different from, say, elimination, it can be useful for by-hand computation as it yields smaller terms.
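The shift can be checked with exact rational arithmetic; a minimal sketch (variable names are mine):

```python
from fractions import Fraction

# Initial guess (10, 3) satisfies the first equation: 3*10 + 2*3 = 36.
x0, y0 = Fraction(10), Fraction(3)

# Shifted system: 3x' + 2y' = 0 and 5x' + 4y' = 2.
# Using 4y' = -6x', the second equation becomes -x' = 2.
xp = Fraction(-2)
yp = Fraction(-3, 2) * xp  # from 2y' = -3x'
x, y = x0 + xp, y0 + yp
print(x, y)  # 8 6
```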






          answered Apr 9 at 6:56









Yves Daoust







• 1
  This is a great method. +1 :)
  – Paras Khosla, Apr 9 at 16:39

• 1
  This is like a variation of the elimination method, but breaks things down better! Already upvoted :P
  – user477343, yesterday












          10












Another method to solve simultaneous equations in two dimensions is to plot graphs of the equations on a Cartesian plane and find the point of intersection.

[plot of the two lines and their point of intersection]
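Reading the crossing off a plot amounts to finding where the two lines' $y$-values agree; a stdlib-only sketch (helper names are mine) locates it numerically by bisecting the gap between the curves:

```python
def y1(x):  # first equation solved for y: 3x + 2y = 36
    return (36 - 3 * x) / 2

def y2(x):  # second equation solved for y: 5x + 4y = 64
    return (64 - 5 * x) / 4

# The gap y1 - y2 changes sign at the intersection; bisect it.
lo, hi = 0.0, 20.0
for _ in range(60):
    mid = (lo + hi) / 2
    if (y1(lo) - y2(lo)) * (y1(mid) - y2(mid)) <= 0:
        hi = mid
    else:
        lo = mid
x = (lo + hi) / 2
print(x, y1(x))  # close to 8.0 6.0
```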






          answered Apr 9 at 9:33









Elements in Space











• That's what my school textbook wants me to do, but it can sometimes be a bit... tiring... but methinks graphing does reveal the essence of simultaneous equations. $(+1)$
  – user477343, Apr 10 at 0:45





























          9













Any method you can come up with will in the end amount to Cramer's rule, which gives explicit formulas for the solution as ratios of determinants. Except in special cases, the solution of a system is unique, so you will always be computing those same ratios.



Anyway, it turns out that by organizing the computation in certain ways, you can reduce the number of arithmetic operations to be performed. For $2\times2$ systems, the different variants make little difference in this respect. Things become more interesting for $n\times n$ systems.



Direct application of Cramer is by far the worst, as it takes a number of operations proportional to $(n+1)!$, which is huge. Even for $3\times3$ systems, it should be avoided. The best method to date is Gaussian elimination (you eliminate one unknown at a time by forming linear combinations of the equations and turn the system into triangular form). The total workload is proportional to $n^3$ operations.




          The steps of standard Gaussian elimination:



$$\begin{cases}ax+by=c,\\dx+ey=f.\end{cases}$$



Subtract the first times $\dfrac da$ from the second,



$$\begin{cases}ax+by=c,\\0x+\left(e-b\dfrac da\right)y=f-c\dfrac da.\end{cases}$$



          Solve for $y$,



$$\begin{cases}ax+by=c,\\y=\dfrac{f-c\dfrac da}{e-b\dfrac da}.\end{cases}$$



          Solve for $x$,



$$\begin{cases}x=\dfrac{c-b\,\dfrac{f-c\dfrac da}{e-b\dfrac da}}{a},\\y=\dfrac{f-c\dfrac da}{e-b\dfrac da}.\end{cases}$$



          So written, the formulas are a little scary, but when you use intermediate variables, the complexity vanishes:



$$d'=\frac da,\quad e'=e-bd',\quad f'=f-cd' \quad\to\quad y=\frac{f'}{e'},\ x=\frac{c-by}{a}.$$



Anyway, for a $2\times2$ system, this is worse than Cramer!



$$\begin{cases}x=\dfrac{ce-bf}{\Delta},\\y=\dfrac{af-cd}{\Delta}\end{cases}$$ where $\Delta=ae-bd$.
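Both the elimination route (via $d', e', f'$) and Cramer's rule are easy to write out for the $2\times2$ case; a small sketch (function names are mine), using the $ax+by=c$, $dx+ey=f$ convention above:

```python
def solve_gauss(a, b, c, d, e, f):
    # Intermediate variables d', e', f' from the elimination step.
    dp = d / a
    ep = e - b * dp
    fp = f - c * dp
    y = fp / ep
    x = (c - b * y) / a
    return x, y

def solve_cramer(a, b, c, d, e, f):
    delta = a * e - b * d  # must be nonzero for a unique solution
    return (c * e - b * f) / delta, (a * f - c * d) / delta

print(solve_gauss(3, 2, 36, 5, 4, 64))   # close to (8.0, 6.0)
print(solve_cramer(3, 2, 36, 5, 4, 64))  # (8.0, 6.0)
```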




For large systems, say $100\times100$ and up, very different methods are used. They work by computing approximate solutions and improving them iteratively until the inaccuracy becomes acceptable. Quite often such systems are sparse (many coefficients are zero), and this is exploited to reduce the number of operations. (The direct methods are inappropriate, as they would break the sparseness property.)






          edited Apr 9 at 7:35

























          answered Apr 9 at 7:13









Yves Daoust







• 2
  +1 for the last paragraph, which is, I think, of utmost importance. Indeed, our computers solve many, many linear systems each day (and quite huge ones: not 100×100 but more like 100'000 × 100'000). None of them are solved by any of the methods discussed in the answers so far.
  – Surb, Apr 9 at 19:55























          9













Construct the Groebner basis of your system, with the variables ordered $x$, $y$:
$$ \mathrm{GB}(3x+2y-36,\ 5x+4y-64) = \{\, y-6,\ x-8 \,\} $$
and read out the solution. (If we reverse the variable order, we get the same basis, but in reversed order.) Under the hood, this is performing Gaussian elimination for this problem. However, Groebner bases are not restricted to linear systems, so they can be used to construct solution sets for systems of polynomials in several variables.




Perform lattice reduction on the lattice generated by $(3,2,-36)$ and $(5,4,-64)$. A sequence of reductions (similar to the Euclidean algorithm for GCDs):
\begin{align*}
(5,4,-64) - (3,2,-36) &= (2,2,-28) \\
(3,2,-36) - (2,2,-28) &= (1,0,-8) \tag{1} \\
(2,2,-28) - 2(1,0,-8) &= (0,2,-12) \tag{2}
\end{align*}

From (1), we have $x=8$. From (2), $2y = 12$, so $y = 6$. (Generally, there can be quite a bit more "creativity" required to get the needed zeroes in the lattice vector components. One implementation of the LLL algorithm terminates with the shorter vectors $(-1,2,4), (-2,2,4)$, but we would continue to manipulate these to get the desired zeroes.)
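The reduction steps can be replayed as exact integer row operations on the augmented vectors $(a, b, -c)$; below is a small sketch (helper name is mine), not a full LLL implementation:

```python
def sub(u, v, k=1):
    """Subtract k times vector v from vector u, componentwise."""
    return [ui - k * vi for ui, vi in zip(u, v)]

r1 = [3, 2, -36]   # 3x + 2y - 36 = 0
r2 = [5, 4, -64]   # 5x + 4y - 64 = 0

r3 = sub(r2, r1)       # [2, 2, -28]
r4 = sub(r1, r3)       # [1, 0, -8]   ->  x - 8 = 0, step (1)
r5 = sub(r3, r4, 2)    # [0, 2, -12]  ->  2y - 12 = 0, step (2)
print(r4, r5)  # [1, 0, -8] [0, 2, -12]
```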






              answered 2 days ago









Eric Towers





















                  6













$$\begin{align}3x+2y&=36 \tag1\\ 5x+4y&=64\tag2\end{align}$$



From $(1)$, $x=\frac{36-2y}{3}$; substitute in $(2)$ and you'll get $5\left(\frac{36-2y}{3}\right)+4y=64 \implies y=6$, and then you can get $x=24/3=8$.



Another method:
From $(1)$, $x=\frac{36-2y}{3}$.

From $(2)$, $x=\frac{64-4y}{5}$.

But $x=x \implies \frac{36-2y}{3}=\frac{64-4y}{5}$; cross-multiply and you'll get $5(36-2y)=3(64-4y) \implies y=6$, and substitute back to get $x=8$.
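The cross-multiplication step can be verified with exact rationals; a minimal sketch (variable names are mine):

```python
from fractions import Fraction

# 5(36 - 2y) = 3(64 - 4y)  =>  180 - 10y = 192 - 12y  =>  2y = 12
y = Fraction(192 - 180, 12 - 10)
x = (36 - 2 * y) / 3
print(x, y)  # 8 6
```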






                  edited Apr 9 at 7:50

























                  answered Apr 9 at 7:43









                  Fareed AF




















                  4












                  $begingroup$

                  Other answers have given standard, elementary methods of solving simultaneous equations. Here are a few other ones that can be more long-winded and excessive, but work nonetheless.




                  Method $1$: (multiplicity of $y$)




                  Let $y=kx$ for some $k\in\Bbb R$. Then $$3x+2y=36\implies x(2k+3)=36\implies x=\frac{36}{2k+3}\\5x+4y=64\implies x(4k+5)=64\implies x=\frac{64}{4k+5}$$ so $$36(4k+5)=64(2k+3)\implies (144-128)k=(192-180)\implies k=\frac34.$$ Now $$x=\frac{64}{4k+5}=\frac{64}{4\cdot\frac34+5}=8\implies y=kx=\frac34\cdot8=6.\quad\square$$
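Method $1$ can be replayed numerically; a small Python sketch with exact rationals (variable names are mine):

```python
from fractions import Fraction

# y = k*x turns each line into an expression for x in terms of k:
#   x = 36/(2k+3) and x = 64/(4k+5); equating them gives
#   36(4k+5) = 64(2k+3)  ->  16k = 12  ->  k = 3/4.
k = Fraction(192 - 180, 144 - 128)
x = Fraction(64, 1) / (4 * k + 5)   # 64 / 8 = 8
y = k * x                           # (3/4) * 8 = 6

assert 3 * x + 2 * y == 36 and 5 * x + 4 * y == 64
print(k, x, y)  # 3/4 8 6
```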





                  Method $2$: (use this if you really like quadratic equations :P)




                  How about we try squaring the equations? We get $$3x+2y=36\implies 9x^2+12xy+4y^2=1296\\5x+4y=64\implies 25x^2+40xy+16y^2=4096$$ Multiplying the first equation by $10$ and the second by $3$ yields $$90x^2+120xy+40y^2=12960\\75x^2+120xy+48y^2=12288$$ and subtracting gives us $$15x^2-8y^2=672,$$ which is a hyperbola. Notice that subtracting the two linear equations gives you $2x+2y=28\implies y=14-x$, so you have the nice quadratic $$15x^2-8(14-x)^2=672.$$ Enjoy!
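For Method $2$, expanding $15x^2-8(14-x)^2=672$ gives $7x^2+224x-2240=0$; squaring introduces an extraneous root, so one of the two candidates must be discarded. A quick Python check (my own sketch):

```python
import math

# 15x^2 - 8(14 - x)^2 = 672 expands to 7x^2 + 224x - 2240 = 0.
a, b, c = 7, 224, -2240
disc = b * b - 4 * a * c                                          # 112896 = 336^2
roots = [(-b + s * math.isqrt(disc)) / (2 * a) for s in (1, -1)]  # [8.0, -40.0]

# x = -40 comes from the squaring step and fails the original line (1);
# keep the root that satisfies 3x + 2y = 36 with y = 14 - x.
x = next(r for r in roots if 3 * r + 2 * (14 - r) == 36)
y = 14 - x
print(x, y)  # 8.0 6.0
```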


















                  $endgroup$












                  • $begingroup$
                    In your first method, why do you substitute $k=\frac34$ in the second equation $5x+4y=64$ as opposed to the first equation $3x+2y=36$? Also, hello! :D
                    $endgroup$
                    – user477343
                    Apr 9 at 8:39







                  • 1




                    $begingroup$
                    Because for $3x+2y=36$, we get $2k$ in the denominator, but $2k=3/2$ leaves us with a fraction. If we use the other equation, we get $4k=3$ which is neater.
                    $endgroup$
                    – TheSimpliFire
                    Apr 9 at 8:41










                  • $begingroup$
                    So, it doesn't really matter which one we substitute it in; but it is good to have some intuition when deciding! Thanks for your answer :P $(+1)$
                    $endgroup$
                    – user477343
                    Apr 9 at 9:02







                  • 1




                    $begingroup$
                    No, at an intersection point between two lines, most of their properties at that point are the same (apart from gradient, of course)
                    $endgroup$
                    – TheSimpliFire
                    Apr 9 at 9:06










                  • $begingroup$
                    Ok. Thank you for clarifying!
                    $endgroup$
                    – user477343
                    Apr 10 at 0:43























                  edited Apr 9 at 8:40

























                  answered Apr 9 at 8:34









                  TheSimpliFire























                  3












                  $begingroup$

                  As another iterative method I suggest the Jacobi method. A sufficient criterion for its convergence is that the matrix is diagonally dominant, which the one in our system is not:

                  $\begin{bmatrix} 3 & 2 \\ 5 & 4\end{bmatrix}\begin{bmatrix} x \\ y\end{bmatrix}=\begin{bmatrix}36 \\ 64\end{bmatrix}$

                  We can however fix this by substituting e.g. $y' := \frac{1}{1.3}y$. Then the system is

                  $\underbrace{\begin{bmatrix} 3 & 2.6 \\ 5 & 5.2\end{bmatrix}}_{=:A}\begin{bmatrix} x \\ y'\end{bmatrix}=\begin{bmatrix}36 \\ 64\end{bmatrix}$

                  and $A$ is diagonally dominant. Then we can decompose $A = L + D + U$, where $L,U$ are the strict lower and upper triangular parts and $D$ is the diagonal of $A$, and the iteration

                  $$\vec x_{i+1} = D^{-1}\left(\vec b - (L+U)\vec x_i\right)$$

                  will converge to the solution $(x,y')$. Note that $D^{-1}$ is particularly easy to compute, as you just have to invert the diagonal entries. So in this case the iteration is

                  $$\vec x_{i+1} = \begin{bmatrix} 1/3 & 0 \\ 0 & 1/5.2 \end{bmatrix}\left(\vec b - \begin{bmatrix} 0 & 2.6 \\ 5 & 0 \end{bmatrix} \vec x_i\right)$$

                  So you can actually view this as a fixed-point iteration of the function $f(\vec x) = D^{-1}(\vec b-(L+U)\vec x)$, which is guaranteed to be a contraction when $A$ is diagonally dominant. It is actually quite slow and has hardly any practical use for directly solving systems of linear equations, but it (or variations of it) is quite often used as a preconditioner.
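As a concrete illustration, the rescaled Jacobi iteration $\vec x_{i+1}=D^{-1}(\vec b-(L+U)\vec x_i)$ can be run in a few lines of plain Python (my own sketch; the update is written componentwise):

```python
# Jacobi iteration on the rescaled, diagonally dominant system
#   [[3, 2.6], [5, 5.2]] [x, y']^T = [36, 64],   with y = 1.3 * y'.
a11, a12 = 3.0, 2.6
a21, a22 = 5.0, 5.2
b1, b2 = 36.0, 64.0

x1, x2 = 0.0, 0.0
for _ in range(300):
    # simultaneous update: both new values use only the previous iterate
    x1, x2 = (b1 - a12 * x2) / a11, (b2 - a21 * x1) / a22

x, y = x1, 1.3 * x2   # undo the y' = y / 1.3 substitution
print(round(x, 6), round(y, 6))  # 8.0 6.0
```

As the answer notes, convergence here is slow (the contraction factor is about $0.91$ per step), which is why a few hundred iterations are used for such a tiny system.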

















                  $endgroup$

























                      edited yesterday

























                      answered yesterday









                      flawr






















                          2












                          $begingroup$

                          It is clear that:

                          • $x=10$, $y=3$ is an integer solution of $(1)$.

                          • $x=12$, $y=1$ is an integer solution of $(2)$.

                          Then, from the theory of linear Diophantine equations:

                          • Any integer solution of $(1)$ has the form $x_1=10+2t$, $y_1=3-3t$ with $t$ integer.

                          • Any integer solution of $(2)$ has the form $x_2=12+4t$, $y_2=1-5t$ with $t$ integer.

                          Then, the system has an integer solution $(x_0,y_0)$ if and only if there exists an integer $t$ such that

                          $$10+2t=x_0=12+4t\qquad\text{and}\qquad 3-3t=y_0=1-5t.$$

                          Solving for $t$, we see that there is an integer $t$ satisfying both equations, namely $t=-1$. Thus the system has the integer solution
                          $$x_0=12+4(-1)=8,\; y_0=1-5(-1)=6.$$

                          Note that we can pick any pair of integer solutions to start with, and the method will give the solution provided that the solution is integer, which is often not the case.
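The two parametrizations can also be intersected by brute force; a short Python sketch (scanning a window of $t$ values is my own shortcut, not part of the method):

```python
# Integer solutions from the answer:
#   line (1): x = 10 + 2t, y = 3 - 3t
#   line (2): x = 12 + 4t, y = 1 - 5t
# Look for a t making both parametrizations agree simultaneously.
t = next(t for t in range(-50, 51)
         if 10 + 2 * t == 12 + 4 * t and 3 - 3 * t == 1 - 5 * t)
x0, y0 = 10 + 2 * t, 3 - 3 * t

assert 3 * x0 + 2 * y0 == 36 and 5 * x0 + 4 * y0 == 64
print(t, x0, y0)  # -1 8 6
```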

















                          $endgroup$

























                              edited 2 days ago

























                              answered 2 days ago









                              Pedro






















                                  0












                                  $begingroup$

                                  Consider the three vectors $\textbf{A}=(3,2)$, $\textbf{B}=(5,4)$ and $\textbf{X}=(x,y)$. Your system can be written as $$\textbf{A}\cdot\textbf{X}=a\\\textbf{B}\cdot\textbf{X}=b$$ where $a=36$, $b=64$, and $\textbf{A}_\perp=(-2,3)$ is orthogonal to $\textbf{A}$. The first equation gives us $\textbf{X}=\dfrac{a\,\textbf{A}}{\|\textbf{A}\|^2}+\lambda\textbf{A}_\perp$. To find $\lambda$ we use the second equation, which gives $\lambda=\dfrac{b}{\textbf{A}_\perp\cdot\textbf{B}}-\dfrac{a\,\textbf{A}\cdot\textbf{B}}{\|\textbf{A}\|^2\,(\textbf{A}_\perp\cdot\textbf{B})}$. Et voilà :
                                  $$\textbf{X}=\dfrac{a\,\textbf{A}}{\|\textbf{A}\|^2}+\dfrac{\textbf{A}_\perp}{\textbf{A}_\perp\cdot\textbf{B}}\left(b-\dfrac{a\,\textbf{A}\cdot\textbf{B}}{\|\textbf{A}\|^2}\right)$$
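Numerically the formula checks out; here is a minimal Python sketch (the `dot` helper and the variable names are mine):

```python
# A = (3, 2), B = (5, 4), A_perp = (-2, 3), right-hand sides a = 36, b = 64
def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

A, B, A_perp = (3, 2), (5, 4), (-2, 3)
a, b = 36, 64

# lambda from the second equation, then X from the projection formula
lam = (b - a * dot(A, B) / dot(A, A)) / dot(A_perp, B)
X = tuple(a * Ai / dot(A, A) + lam * Pi for Ai, Pi in zip(A, A_perp))

# should recover the intersection (8, 6) up to floating-point error
assert abs(X[0] - 8) < 1e-9 and abs(X[1] - 6) < 1e-9
```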















                                  $endgroup$



























answered 2 days ago by BPP


























