Adding week 12 notebooks.
austinwn committed May 7, 2014
1 parent 023b725 commit 7fff869
Showing 4 changed files with 718 additions and 0 deletions.
159 changes: 159 additions & 0 deletions 12.4.2 The Power Method.ipynb
@@ -0,0 +1,159 @@
{
"metadata": {
"name": ""
},
"nbformat": 3,
"nbformat_minor": 0,
"worksheets": [
{
"cells": [
{
"cell_type": "heading",
"level": 1,
"metadata": {},
"source": [
" 12.4.2 The Power Method"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"With this notebook, we demonstrate how the Power Method can be used to compute the eigenvector associated with the largest eigenvalue (in magnitude).\n",
"\n",
"<font color=red> Be sure to make a copy!!!! </font>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We start by creating a matrix with known eigenvalues and eigenvectors.\n",
"\n",
"How do we do this? \n",
"<ul>\n",
" <li>\n",
" We want a matrix that is not deficient, since otherwise the Power Method may not work. \n",
" </li>\n",
" <li>\n",
" Hence, $ A = V \\Lambda V^{-1} $ for some diagonal matrix $ \\Lambda $ and nonsingular matrix $ V $. The eigenvalues are then on the diagonal of $ \\Lambda $ and the eigenvectors are the columns of $ V $.\n",
" </li>\n",
" <li>\n",
" So, let's pick the eigenvalues for the diagonal of $ \\Lambda $ and let's pick a random matrix $ V $ (in the hopes that it has linearly independent columns) and then let's see what happens. \n",
" </li>\n",
" </ul>\n",
"\n",
"<font color=red> Experiment by changing the eigenvalues! What happens if you make the second entry on the diagonal equal to -4? Or what if you change the 2 to -1? </font>"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"import numpy as np\n",
"import laff\n",
"import flame\n",
"\n",
"Lambda = np.matrix( ' 4., 0., 0., 0;\\\n",
" 0., 3., 0., 0;\\\n",
" 0., 0., 2., 0;\\\n",
" 0., 0., 0., 1' )\n",
"\n",
"lambda0 = Lambda[ 0,0 ]\n",
"\n",
"V = np.matrix( np.random.rand( 4,4 ) )\n",
"\n",
"# normalize the columns of V to have length one\n",
"\n",
"for j in range( 0, 4 ):\n",
" V[ :, j ] = V[ :, j ] / np.sqrt( np.transpose( V[:,j] ) * V[:, j ] )\n",
"\n",
"A = V * Lambda * np.linalg.inv( V )\n",
"\n",
"print( 'Lambda = ' )\n",
"print( Lambda )\n",
"\n",
"print( 'V = ' )\n",
"print( V )\n",
"\n",
"print( 'A = ' )\n",
"print( A )\n"
],
"language": "python",
"metadata": {},
"outputs": []
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"# Pick a random starting vector\n",
"\n",
"x = np.matrix( np.random.rand( 4,1 ) )\n",
"\n",
"for i in range(0,10):\n",
" x = A * x \n",
" \n",
" # normalize x to length one\n",
" x = x / np.sqrt( np.transpose( x ) * x )\n",
" \n",
" print( 'Rayleigh quotient with vector x:', np.transpose( x ) * A * x / ( np.transpose( x ) * x ))\n",
" print( 'inner product of x with v0 :', np.transpose( x ) * V[ :, 0 ] )\n",
" print( ' ' )"
],
"language": "python",
"metadata": {},
"outputs": []
},
{
"cell_type": "code",
"collapsed": false,
"input": [],
"language": "python",
"metadata": {},
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In the above, \n",
" <ul>\n",
" <li>\n",
" The Rayleigh quotient should converge to 4.0 (slowly).\n",
" </li>\n",
" <li>\n",
" The inner product of $ x $ and the first column of $ V $, $ v_0 $, should converge to 1 or -1 since eventually $ x $ should be in the direction of $ v_0 $ (or in the opposite direction).\n",
" </li>\n",
" </ul>\n",
" \n",
"If you change the \"3\" on the diagonal to \"-4\", then you have two eigenvalues of largest magnitude, and the vector $ x $ will end up in the space spanned by $ v_0 $ and $ v_1 $. \n",
" You can check this by looking at $ ( I - V_L ( V_L^T V_L )^{-1} V_L^T ) x $, where $ V_L $ equals the matrix with $ v_0 $ and $ v_1 $ as its columns, to see if the component of $ x $ orthogonal to $ {\\cal C}( V_L ) $ converges to zero. This is seen in the following code block:\n",
"\n"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"w = x - V[ :,0:2 ] * np.linalg.inv( np.transpose( V[ :,0:2 ] ) * V[ :,0:2 ] ) * np.transpose( V[ :,0:2 ] ) * x\n",
" \n",
"print( 'Norm of component orthogonal: ', np.linalg.norm( w ) )"
],
"language": "python",
"metadata": {},
"outputs": []
},
{
"cell_type": "code",
"collapsed": false,
"input": [],
"language": "python",
"metadata": {},
"outputs": []
}
],
"metadata": {}
}
]
}
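The Power Method iteration in the notebook above can also be run as a standalone script. The following is a minimal sketch, assuming only NumPy (plain arrays instead of the notebook's `np.matrix`, a fixed seed instead of a fresh random matrix, and no `laff`/`flame` course libraries):

```python
import numpy as np

# Build A = V Lambda V^{-1} with known eigenvalues 4, 3, 2, 1,
# as in the notebook, using a seeded random V for reproducibility.
rng = np.random.default_rng(0)

Lambda = np.diag([4.0, 3.0, 2.0, 1.0])
V = rng.random((4, 4))
V /= np.linalg.norm(V, axis=0)   # normalize the columns of V to have length one
A = V @ Lambda @ np.linalg.inv(V)

# Power Method: repeatedly multiply by A and renormalize.
x = rng.random(4)
for _ in range(100):
    x = A @ x
    x /= np.linalg.norm(x)       # keep x at length one so it cannot overflow

rayleigh = x @ A @ x / (x @ x)
print(rayleigh)                  # approaches 4.0, the eigenvalue of largest magnitude
```

With 100 iterations the error is on the order of $(3/4)^{100}$, so the Rayleigh quotient agrees with 4.0 to many digits, and $x$ lines up with $\pm v_0$.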
186 changes: 186 additions & 0 deletions 12.5.1 The Inverse Power Method.ipynb
@@ -0,0 +1,186 @@
{
"metadata": {
"name": ""
},
"nbformat": 3,
"nbformat_minor": 0,
"worksheets": [
{
"cells": [
{
"cell_type": "heading",
"level": 1,
"metadata": {},
"source": [
" 12.5.1 The Inverse Power Method"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"With this notebook, we demonstrate how the Inverse Power Method can be used to compute the eigenvector associated with the smallest eigenvalue (in magnitude).\n",
"\n",
"<font color=red> Be sure to make a copy!!!! </font>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We start by creating a matrix with known eigenvalues and eigenvectors.\n",
"\n",
"How do we do this? \n",
"<ul>\n",
" <li>\n",
" We want a matrix that is not deficient, since otherwise the Inverse Power Method may not work. \n",
" </li>\n",
" <li>\n",
" Hence, $ A = V \\Lambda V^{-1} $ for some diagonal matrix $ \\Lambda $ and nonsingular matrix $ V $. The eigenvalues are then on the diagonal of $ \\Lambda $ and the eigenvectors are the columns of $ V $.\n",
" </li>\n",
" <li>\n",
" So, let's pick the eigenvalues for the diagonal of $ \\Lambda $ and let's pick a random matrix $ V $ (in the hopes that it has linearly independent columns) and then let's see what happens. \n",
" </li>\n",
" </ul>\n",
"\n",
"<font color=red> Experiment by changing the eigenvalues! What happens if you make the second entry on the diagonal equal to -4? Or what if you change the 2 to -1? </font>"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"import numpy as np\n",
"import laff\n",
"import flame\n",
"\n",
"Lambda = np.matrix( ' 4., 0., 0., 0;\\\n",
" 0., 3., 0., 0;\\\n",
" 0., 0., 2., 0;\\\n",
" 0., 0., 0., 1' )\n",
"\n",
"lambda0 = Lambda[ 0,0 ]\n",
"\n",
"V = np.matrix( np.random.rand( 4,4 ) )\n",
"\n",
"# normalize the columns of V to have length one\n",
"\n",
"for j in range( 0, 4 ):\n",
" V[ :, j ] = V[ :, j ] / np.sqrt( np.transpose( V[:,j] ) * V[:, j ] )\n",
"\n",
"A = V * Lambda * np.linalg.inv( V )\n",
"\n",
"print( 'Lambda = ' )\n",
"print( Lambda )\n",
"\n",
"print( 'V = ' )\n",
"print( V )\n",
"\n",
"print( 'A = ' )\n",
"print( A )\n"
],
"language": "python",
"metadata": {},
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The idea is as follows:\n",
"\n",
"The eigenvalues of $ A $ are $ \\lambda_0, \\ldots, \\lambda_3 $ with\n",
"\n",
"$$\n",
"\\vert \\lambda_0 \\vert > \\vert \\lambda_1 \\vert > \\vert \\lambda_2 \\vert > \\vert \\lambda_3 \\vert > 0\n",
"$$\n",
"\n",
"and how fast the iteration converges depends on the ratio \n",
"\n",
"$$\n",
"\\left\\vert \\frac{\\lambda_3}{\\lambda_2} \\right\\vert .\n",
"$$\n"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"# Pick a random starting vector\n",
"\n",
"x = np.matrix( np.random.rand( 4,1 ) )\n",
"\n",
"# We should really compute a factorization of A, but let's be lazy, and compute the inverse\n",
"# explicitly\n",
"\n",
"Ainv = np.linalg.inv( A )\n",
"\n",
"for i in range(0,10):\n",
" x = Ainv * x \n",
" \n",
" # normalize x to length one\n",
" x = x / np.sqrt( np.transpose( x ) * x )\n",
" \n",
" # Notice we compute the Rayleigh quotient with matrix A, not Ainv. This is because\n",
" # A and Ainv share eigenvectors, so the Rayleigh quotient with A yields the eigenvalue of A directly\n",
" \n",
" print( 'Rayleigh quotient with vector x:', np.transpose( x ) * A * x / ( np.transpose( x ) * x ))\n",
" print( 'inner product of x with v3 :', np.transpose( x ) * V[ :, 3 ] )\n",
" print( ' ' )"
],
"language": "python",
"metadata": {},
"outputs": []
},
{
"cell_type": "code",
"collapsed": false,
"input": [],
"language": "python",
"metadata": {},
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In the above, \n",
" <ul>\n",
" <li>\n",
" The Rayleigh quotient should converge to 1.0 (more quickly than the Power Method converged).\n",
" </li>\n",
" <li>\n",
" The inner product of $ x $ and the last column of $ V $, $ v_{n-1} $, should converge to 1 or -1 since eventually $ x $ should be in the direction of $ v_{n-1} $ (or in the opposite direction).\n",
" </li>\n",
" </ul>\n",
" \n",
" Try changing the \"2\" to a \"-1\" or \"1\". What happens then?\n",
" \n",
" You can check this by looking at $ ( I - V_R ( V_R^T V_R )^{-1} V_R^T ) x $, where $ V_R $ equals the matrix with $ v_2 $ and $ v_3 $ as its columns, to see if the component of $ x $ orthogonal to $ {\\cal C}( V_R ) $ converges to zero. This is seen in the following code block:\n"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"w = x - V[ :,2:4 ] * np.linalg.inv( np.transpose( V[ :,2:4 ] ) * V[ :,2:4 ] ) * np.transpose( V[ :,2:4 ] ) * x\n",
" \n",
"print( 'Norm of component orthogonal: ', np.linalg.norm( w ) )"
],
"language": "python",
"metadata": {},
"outputs": []
},
{
"cell_type": "code",
"collapsed": false,
"input": [],
"language": "python",
"metadata": {},
"outputs": []
}
],
"metadata": {}
}
]
}
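The inverse iteration above can likewise be sketched as a standalone script. This is a sketch assuming only NumPy; rather than forming `Ainv` explicitly as the notebook does, it solves with `A` at each step via `np.linalg.solve`, which is the numerically preferred approach the notebook's comment alludes to:

```python
import numpy as np

# Build A = V Lambda V^{-1} with known eigenvalues 4, 3, 2, 1,
# as in the notebook, using a seeded random V for reproducibility.
rng = np.random.default_rng(0)

Lambda = np.diag([4.0, 3.0, 2.0, 1.0])
V = rng.random((4, 4))
V /= np.linalg.norm(V, axis=0)     # normalize the columns of V to have length one
A = V @ Lambda @ np.linalg.inv(V)

# Inverse Power Method: repeatedly apply A^{-1} and renormalize.
x = rng.random(4)
for _ in range(100):
    x = np.linalg.solve(A, x)      # x = A^{-1} x, without forming the inverse
    x /= np.linalg.norm(x)

# Rayleigh quotient with A (not A^{-1}): the two share eigenvectors,
# so this yields the eigenvalue of A directly.
rayleigh = x @ A @ x / (x @ x)
print(rayleigh)                    # approaches 1.0, the eigenvalue of smallest magnitude
```

Here the error shrinks like $(1/2)^{100}$ (the ratio $\vert \lambda_3 / \lambda_2 \vert$), so convergence is fast and $x$ lines up with $\pm v_3$.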