Added more work on setting up Euler equations using PyTorch

temporaryWork
youainti 5 years ago
parent ea4fec2813
commit 01a883a004

@@ -3,7 +3,7 @@
{
"cell_type": "code",
"execution_count": 1,
"id": "11cc7082-6fb7-49fd-992e-6678f1570ed9",
"id": "dense-italic",
"metadata": {},
"outputs": [],
"source": [
@@ -14,18 +14,20 @@
{
"cell_type": "code",
"execution_count": 2,
"id": "9cff4845-9f88-4f49-8ad9-d94386ccc7dc",
"id": "adult-cargo",
"metadata": {},
"outputs": [],
"source": [
"a = torch.tensor([2., 3.], requires_grad=True)\n",
"b = torch.tensor([6., 4.], requires_grad=True)"
"b = torch.tensor([6., 4.], requires_grad=True)\n",
"c = torch.tensor([2., 3.], requires_grad=False)\n",
"d = torch.tensor([6., 4.], requires_grad=False)"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "f49549e1-dbcb-4aa7-8036-8bef936b3b98",
"execution_count": 3,
"id": "photographic-miniature",
"metadata": {},
"outputs": [],
"source": [
@@ -35,60 +37,36 @@
},
{
"cell_type": "code",
"execution_count": 14,
"id": "7ee977aa-d71e-4cfd-8743-a06216fa2209",
"execution_count": null,
"id": "charming-plate",
"metadata": {},
"outputs": [],
"source": [
"f(a,b).backward(q)"
]
"source": []
},
{
"cell_type": "code",
"execution_count": 15,
"id": "bccc63c6-ce7c-402e-99fc-d5b0a9b0deb1",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"tensor([ 92., 169.])"
]
},
"execution_count": 15,
"execution_count": 4,
"id": "fitting-horizontal",
"metadata": {},
"output_type": "execute_result"
}
],
"outputs": [],
"source": [
"a.grad"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "fe0575a3-bdc9-4b2f-9564-b01763c09c2b",
"execution_count": 5,
"id": "secret-oasis",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"tensor([-6252., -1168.])"
]
},
"execution_count": 16,
"metadata": {},
"output_type": "execute_result"
}
],
"outputs": [],
"source": [
"b.grad"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "e653ec77-4034-4797-b5db-bb06778090da",
"execution_count": 6,
"id": "comic-context",
"metadata": {},
"outputs": [
{
@@ -101,7 +79,7 @@
"x = tensor([4.], requires_grad=True)\n",
"y = tensor([29.], grad_fn=<AddBackward0>)\n",
"\n",
"df = <AddBackward0 object at 0x7f388c0cd640>\n",
"df = <AddBackward0 object at 0x7fd45460eb20>\n",
"\n",
"gradient of func(x) = \n",
"tensor([11.])\n"
@@ -143,7 +121,7 @@
},
{
"cell_type": "markdown",
"id": "b3470450-834c-43db-8f89-d6f51e2754e0",
"id": "chinese-family",
"metadata": {},
"source": [
"# Try this\n",
@@ -165,23 +143,27 @@
"### Envelope\n",
"$$\n",
"0 = \\frac{\\partial F}{\\partial \\theta} + \\frac{\\partial G}{\\partial \\theta} \\frac{\\partial V}{\\partial G} - \\frac{\\partial V}{\\partial \\theta}\n",
"$$"
]
},
{
"cell_type": "markdown",
"id": "91e8528c-a2c4-4dfd-87c6-3d367c9d0f2e",
"metadata": {},
"source": [
"$$\n",
"\n",
"So, how do you incorporate the situation where you have to iterate multiple times?\n",
" - Just add conditions as rows?\n",
" - Solve and substitute using some theorem on the inverse of derivatives in multivariate systems?"
" - Solve and substitute using some theorem on the inverse of derivatives in multivariate systems?\n",
" \n",
"## Thoughts on solution\n",
"You can find $\\frac{\\partial G}{\\partial \\theta}$ through a direct construction maybe?\n",
" - This involves\n",
" - Setting up each scalar element of $G$, and then differentiating\n",
" - These would then need to be reassembled into a matrix\n",
" - Pros\n",
" - Will work\n",
" - Cons\n",
" "
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "9b9df983-d2b2-4515-a98b-2469d8d392f1",
"execution_count": 7,
"id": "exceptional-amount",
"metadata": {},
"outputs": [],
"source": [
@@ -191,38 +173,194 @@
},
{
"cell_type": "code",
"execution_count": 19,
"id": "ea3a604f-48bb-424b-bb76-649d640a55db",
"execution_count": 8,
"id": "biblical-convertible",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"-1.0"
"(tensor([[0.0156, 0.0000],\n",
" [0.0000, 0.0123]]),\n",
" tensor([[0.0056, 0.0000],\n",
" [0.0000, 0.0177]]))"
]
},
"execution_count": 19,
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": []
"source": [
"torch.autograd.functional.jacobian(utility, (a,b))"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "classified-crisis",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"tensor([10., 15.], grad_fn=<MulBackward0>)"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"def complicated(c):\n",
" return c.sum()*c\n",
"\n",
"complicated(a)"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "charged-locator",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"tensor([[7., 2.],\n",
" [3., 8.]], grad_fn=<ViewBackward>)"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"torch.autograd.functional.jacobian(complicated, a, create_graph=True)"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "cheap-necessity",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"tensor([50., 50.], grad_fn=<AddBackward0>)"
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"def more_complicated(c,d):\n",
" return c.sum()*d + d.sum()*c\n",
"more_complicated(a,b)"
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "8cc8b408-c0b9-4344-bed7-f6fbb6514a2c",
"execution_count": 12,
"id": "flexible-nightlife",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"(tensor([[16., 6.],\n",
" [ 4., 14.]], grad_fn=<ViewBackward>),\n",
" tensor([[7., 2.],\n",
" [3., 8.]], grad_fn=<ViewBackward>))"
]
},
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"x = torch.autograd.functional.jacobian(more_complicated, (a,b), create_graph=True)\n",
"x"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "heavy-duration",
"metadata": {},
"outputs": [],
"source": [
"def bellman(theta, x, V, b=0.95):\n",
" pass\n"
"def states(theta,x,c,d):\n",
" return (theta + x*c)@d * d"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "quantitative-organ",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"tensor([864., 576.], grad_fn=<MulBackward0>)"
]
},
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"states(a,b,c,d)"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "transparent-cartridge",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"(tensor([[36., 24.],\n",
" [24., 16.]], grad_fn=<ViewBackward>),\n",
" tensor([[72., 72.],\n",
" [48., 48.]], grad_fn=<ViewBackward>),\n",
" tensor([[216., 96.],\n",
" [144., 64.]], grad_fn=<ViewBackward>),\n",
" tensor([[228., 90.],\n",
" [ 56., 204.]], grad_fn=<ViewBackward>))"
]
},
"execution_count": 16,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"torch.autograd.functional.jacobian(states, (a,b,c,d), create_graph=True)"
]
},
{
"cell_type": "markdown",
"id": "adjusted-saskatchewan",
"metadata": {},
"source": [
"So, I think I can construct a gradient, and possibly invert it/choose some other solution method."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d7730732-de9e-4d8c-b85f-112bee09762a",
"id": "outdoor-functionality",
"metadata": {},
"outputs": [],
"source": []
@@ -244,7 +382,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.5"
"version": "3.8.8"
}
},
"nbformat": 4,

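For reference, the `states` experiment introduced in this commit can be reproduced standalone. This is a minimal sketch assuming the tensor values `a`, `b`, `c`, `d` defined earlier in the notebook diff:

```python
import torch

# Tensors mirroring the notebook's a, b, c, d
a = torch.tensor([2., 3.], requires_grad=True)
b = torch.tensor([6., 4.], requires_grad=True)
c = torch.tensor([2., 3.])
d = torch.tensor([6., 4.])

def states(theta, x, c, d):
    # Same toy transition as in the commit: scalar (theta + x*c)·d scales d
    return (theta + x * c) @ d * d

states(a, b, c, d)  # tensor([864., 576.], grad_fn=<MulBackward0>)

# Jacobian of states w.r.t. every argument at once; create_graph=True keeps
# the result differentiable, so second derivatives remain available.
jac = torch.autograd.functional.jacobian(states, (a, b, c, d), create_graph=True)
jac[0]  # d(states)/d(theta) = tensor([[36., 24.], [24., 16.]], ...)
```

The returned tuple matches the notebook's output cell: one Jacobian block per input tensor, which is the "direct construction" of $\frac{\partial G}{\partial \theta}$ the markdown cell speculates about.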