saving work to be able to work on it at home. Wrote most of the new abstractions (concrete and abstract classes) over states and the model definition. Still missing some functions.

temporaryWork
youainti 5 years ago
parent cd994a640e
commit d0bcdd87c0

@ -1,17 +1,25 @@
 {
  "cells": [
   {
-   "cell_type": "code",
-   "execution_count": null,
-   "id": "sexual-collective",
+   "cell_type": "markdown",
+   "id": "usual-deviation",
    "metadata": {},
-   "outputs": [],
-   "source": []
+   "source": [
+    "Note on pytorch. NN optimization acts imperatively/by side effect as follows.\n",
+    " - Define model\n",
+    " - loop\n",
+    " - Calculate loss\n",
+    " - Zero gradients\n",
+    " - backprop to model\n",
+    " - check conditions for exit\n",
+    " - report diagnostics\n",
+    " - dissect results"
+   ]
   },
   {
    "cell_type": "code",
    "execution_count": 1,
-   "id": "complex-bookmark",
+   "id": "generous-alexandria",
    "metadata": {},
    "outputs": [],
    "source": [
@ -21,7 +29,7 @@
   {
    "cell_type": "code",
    "execution_count": 2,
-   "id": "informal-prisoner",
+   "id": "surrounded-jurisdiction",
    "metadata": {},
    "outputs": [],
    "source": [
@ -45,7 +53,7 @@
   {
    "cell_type": "code",
    "execution_count": 3,
-   "id": "racial-natural",
+   "id": "legal-character",
    "metadata": {},
    "outputs": [],
    "source": [
@ -55,7 +63,7 @@
   {
    "cell_type": "code",
    "execution_count": 4,
-   "id": "acting-athens",
+   "id": "statutory-bachelor",
    "metadata": {},
    "outputs": [],
    "source": [
@ -65,7 +73,7 @@
   {
    "cell_type": "code",
    "execution_count": 5,
-   "id": "later-bulgaria",
+   "id": "informational-bennett",
    "metadata": {},
    "outputs": [
     {
@ -86,7 +94,7 @@
   {
    "cell_type": "code",
    "execution_count": 6,
-   "id": "large-recipient",
+   "id": "basic-printer",
    "metadata": {},
    "outputs": [],
    "source": [
@ -96,7 +104,7 @@
   {
    "cell_type": "code",
    "execution_count": 7,
-   "id": "hybrid-interim",
+   "id": "nonprofit-sponsorship",
    "metadata": {},
    "outputs": [],
    "source": [
@ -107,7 +115,7 @@
   {
    "cell_type": "code",
    "execution_count": 8,
-   "id": "environmental-mercury",
+   "id": "logical-conversation",
    "metadata": {},
    "outputs": [],
    "source": [
@ -118,7 +126,7 @@
   {
    "cell_type": "code",
    "execution_count": 9,
-   "id": "internal-calibration",
+   "id": "inner-stations",
    "metadata": {},
    "outputs": [
     {
@ -176,7 +184,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "black-platinum",
+   "id": "native-bristol",
    "metadata": {},
    "outputs": [],
    "source": []
@ -184,7 +192,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "transparent-medication",
+   "id": "moral-apollo",
    "metadata": {},
    "outputs": [],
    "source": []
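The note added in the first cell above lists the imperative steps of a PyTorch training loop. As a minimal, dependency-free sketch of that loop shape (plain Python standing in for torch; the one-weight model, data, and learning rate are illustrative, not from the notebook):

```python
# Plain-Python sketch of the loop steps listed in the note above.
# One weight w is fitted to illustrative data y = 3x; no torch involved.
def train(steps=200, lr=0.05):
    w = 0.0                                               # define model
    data = [(1.0, 3.0), (2.0, 6.0)]
    loss = float("inf")
    for _ in range(steps):                                # loop
        loss = sum((w * x - y) ** 2 for x, y in data)     # calculate loss
        grad = sum(2 * (w * x - y) * x for x, y in data)  # backprop (by hand)
        w -= lr * grad                                    # step the model
        if loss < 1e-12:                                  # check conditions for exit
            break
    return w, loss                                        # report diagnostics

final_w, final_loss = train()
```

The zero-gradients step is implicit here because the gradient is recomputed from scratch each pass; in torch, accumulated `.grad` buffers are what make `optimizer.zero_grad()` necessary.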

@ -0,0 +1,337 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "prepared-nitrogen",
"metadata": {},
"source": [
"Note on pytorch. NN optimization acts imperatively/by side effect as follows.\n",
" - Define model\n",
" - loop\n",
" - Calculate loss\n",
" - Zero gradients\n",
" - backprop to model\n",
" - check conditions for exit\n",
" - report diagnostics\n",
" - dissect results\n",
" \n",
" \n",
"## Split result from NN\n",
"The goal is to train the NN and then get a couple of outputs at the end that can be split between value-function partials and launch functions."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "grateful-conviction",
"metadata": {},
"outputs": [],
"source": [
"import torch"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "incorrect-animal",
"metadata": {},
"outputs": [],
"source": [
"class DoubleNetwork(torch.nn.Module):\n",
" def __init__(self, input_size,output_size,layers_size):\n",
" super().__init__()\n",
" \n",
" #So, this next section constructs different layers within the NN\n",
" #single linear section\n",
" self.linear_step_1a = torch.nn.Linear(input_size,layers_size)\n",
" \n",
" #single linear section\n",
" self.linear_step_2a = torch.nn.Linear(layers_size,output_size)\n",
" self.linear_step_2b = torch.nn.Linear(layers_size,output_size)\n",
" \n",
" def forward(self, input_values):\n",
" \n",
" intermediate_values_a = self.linear_step_1a(input_values)\n",
" \n",
" out_values_a = self.linear_step_2a(intermediate_values_a)\n",
" out_values_b = self.linear_step_2b(intermediate_values_a)\n",
" \n",
" return out_values_a,out_values_b"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "ruled-letter",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
" tensor(3.5646, grad_fn=<AddBackward0>)\n",
"\n",
" tensor(11.7849, grad_fn=<AddBackward0>)\n",
"\n",
" tensor(24.8772, grad_fn=<AddBackward0>)\n",
"\n",
" tensor(5.4752, grad_fn=<AddBackward0>)\n",
"\n",
" tensor(0.4457, grad_fn=<AddBackward0>)\n",
"\n",
" tensor(0.0925, grad_fn=<AddBackward0>)\n",
"\n",
" tensor(0.0490, grad_fn=<AddBackward0>)\n",
"\n",
" tensor(0.0290, grad_fn=<AddBackward0>)\n",
"\n",
" tensor(0.0178, grad_fn=<AddBackward0>)\n",
"\n",
" tensor(0.0111, grad_fn=<AddBackward0>)\n",
"\n",
" tensor(0.0070, grad_fn=<AddBackward0>)\n",
"\n",
" tensor(0.0045, grad_fn=<AddBackward0>)\n",
"\n",
" tensor(0.0029, grad_fn=<AddBackward0>)\n",
"\n",
" tensor(0.0019, grad_fn=<AddBackward0>)\n",
"\n",
" tensor(0.0012, grad_fn=<AddBackward0>)\n",
"\n",
" tensor(0.0008, grad_fn=<AddBackward0>)\n",
"\n",
" tensor(0.0005, grad_fn=<AddBackward0>)\n",
"\n",
" tensor(0.0003, grad_fn=<AddBackward0>)\n",
"\n",
" tensor(0.0002, grad_fn=<AddBackward0>)\n",
"\n",
" tensor(0.0001, grad_fn=<AddBackward0>)\n"
]
}
],
"source": [
"model = DoubleNetwork(input_size = 5, output_size=5, layers_size=15)\n",
"\n",
"data_in = torch.tensor([1.5,2,3,4,5])\n",
"\n",
"data_in\n",
"\n",
"target = torch.zeros(5)\n",
"\n",
"def loss_fn2(output,target):\n",
" return sum((output[1] + output[0] - target)**2)\n",
" #could add a simplicity assumption i.e. l1 on parameters.\n",
"\n",
"#Prep Optimizer\n",
"optimizer = torch.optim.SGD(model.parameters(),lr=0.01)\n",
"\n",
"for i in range(20):\n",
" #training loop\n",
" optimizer.zero_grad()\n",
"\n",
" output = model.forward(data_in)\n",
" output\n",
"\n",
" l = loss_fn2(output, target)\n",
"\n",
" l.backward()\n",
"\n",
" optimizer.step()\n",
"\n",
" print(\"\\n\",l)"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "quantitative-keeping",
"metadata": {},
"outputs": [],
"source": [
"class SplitNetwork(torch.nn.Module):\n",
" def __init__(self, input_size,output_size_a,output_size_b,layers_size):\n",
" super().__init__()\n",
" \n",
" #So, this next section constructs different layers within the NN\n",
" #single linear section\n",
" self.linear_step_1 = torch.nn.Linear(input_size,layers_size)\n",
" self.linear_step_2 = torch.nn.Linear(layers_size,layers_size)\n",
" self.linear_step_3 = torch.nn.Linear(layers_size,layers_size)\n",
" self.linear_step_4 = torch.nn.Linear(layers_size,layers_size)\n",
" \n",
" #single linear section\n",
" self.linear_step_split_a = torch.nn.Linear(layers_size,output_size_a)\n",
" self.linear_step_split_b = torch.nn.Linear(layers_size,output_size_b)\n",
" \n",
" def forward(self, input_values):\n",
" \n",
" intermediate_values = self.linear_step_1(input_values)\n",
" intermediate_values = self.linear_step_2(intermediate_values)\n",
" intermediate_values = self.linear_step_3(intermediate_values)\n",
" intermediate_values = self.linear_step_4(intermediate_values)\n",
" \n",
" out_values_a = self.linear_step_split_a(intermediate_values)\n",
" out_values_b = self.linear_step_split_b(intermediate_values)\n",
" \n",
" return out_values_a,out_values_b"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "vietnamese-prophet",
"metadata": {},
"outputs": [],
"source": [
"model = SplitNetwork(input_size = 6, output_size_a=5, output_size_b=7, layers_size=15)\n",
"\n",
"data_in = torch.tensor([1.5,2,3,4,5,6])\n",
"\n",
"\n",
"target_a = torch.zeros(5)\n",
"target_b = torch.ones(7)\n",
"\n",
"def loss_fn3(output,target_a, target_b):\n",
" return sum((output[0] - target_a)**2) + sum((output[1] - target_b)**2)\n",
" #could add a simplicity assumption i.e. l1 on parameters."
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "limiting-slide",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
" tensor(9.6420, grad_fn=<AddBackward0>)\n",
"\n",
" tensor(4.1914, grad_fn=<AddBackward0>)\n",
"\n",
" tensor(5.1337, grad_fn=<AddBackward0>)\n",
"\n",
" tensor(1.4943, grad_fn=<AddBackward0>)\n",
"\n",
" tensor(0.5210, grad_fn=<AddBackward0>)\n",
"\n",
" tensor(0.1217, grad_fn=<AddBackward0>)\n",
"\n",
" tensor(0.0605, grad_fn=<AddBackward0>)\n",
"\n",
" tensor(0.0256, grad_fn=<AddBackward0>)\n",
"\n",
" tensor(0.0126, grad_fn=<AddBackward0>)\n",
"\n",
" tensor(0.0057, grad_fn=<AddBackward0>)\n",
"\n",
" tensor(0.0028, grad_fn=<AddBackward0>)\n",
"\n",
" tensor(0.0013, grad_fn=<AddBackward0>)\n",
"\n",
" tensor(0.0006, grad_fn=<AddBackward0>)\n",
"\n",
" tensor(0.0003, grad_fn=<AddBackward0>)\n",
"\n",
" tensor(0.0001, grad_fn=<AddBackward0>)\n",
"\n",
" tensor(7.2050e-05, grad_fn=<AddBackward0>)\n",
"\n",
" tensor(3.5139e-05, grad_fn=<AddBackward0>)\n",
"\n",
" tensor(1.7068e-05, grad_fn=<AddBackward0>)\n",
"\n",
" tensor(8.3342e-06, grad_fn=<AddBackward0>)\n",
"\n",
" tensor(4.0624e-06, grad_fn=<AddBackward0>)\n",
"\n",
" tensor(1.9857e-06, grad_fn=<AddBackward0>)\n",
"\n",
" tensor(9.7029e-07, grad_fn=<AddBackward0>)\n",
"\n",
" tensor(4.7492e-07, grad_fn=<AddBackward0>)\n",
"\n",
" tensor(2.3232e-07, grad_fn=<AddBackward0>)\n",
"\n",
" tensor(1.1381e-07, grad_fn=<AddBackward0>)\n"
]
}
],
"source": [
"#Prep Optimizer\n",
"optimizer = torch.optim.SGD(model.parameters(),lr=0.01)\n",
"\n",
"for i in range(25):\n",
" #training loop\n",
" optimizer.zero_grad()\n",
"\n",
" output = model.forward(data_in)\n",
" output\n",
"\n",
" l = loss_fn3(output, target_a, target_b)\n",
"\n",
" l.backward()\n",
"\n",
" optimizer.step()\n",
"\n",
" print(\"\\n\",l)"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "elder-karen",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"(tensor([ 3.4232e-05, 3.7350e-05, 5.3748e-05, -2.7344e-05, -1.0052e-04],\n",
" grad_fn=<AddBackward0>),\n",
" tensor([1.0001, 1.0001, 1.0000, 1.0000, 1.0001, 1.0000, 1.0001],\n",
" grad_fn=<AddBackward0>))"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": []
},
{
"cell_type": "code",
"execution_count": null,
"id": "agreed-community",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.8"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
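Both networks in the notebook above use the same pattern: a shared trunk whose final hidden activation feeds two independent linear heads. A dependency-free sketch of that shape (the layer sizes and weights here are made up for illustration, not taken from the notebook):

```python
# Shared-trunk / two-head forward pass, mirroring SplitNetwork without torch.
def linear(weights, bias, x):
    """Dense layer: each output is a weight row dotted with x, plus a bias."""
    return [sum(w * xi for w, xi in zip(row, x)) + b
            for row, b in zip(weights, bias)]

def split_forward(x):
    trunk = linear([[1.0, 0.0], [0.0, 1.0]], [0.0, 0.0], x)  # shared layers
    head_a = linear([[2.0, 0.0]], [0.0], trunk)  # one output group
    head_b = linear([[0.0, 3.0]], [1.0], trunk)  # the other output group
    return head_a, head_b
```

Training both heads then reduces to summing their per-head losses, which is exactly what `loss_fn3` in the notebook does.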

@ -0,0 +1,578 @@
{
"cells": [
{
"cell_type": "code",
"execution_count": 1,
"id": "consolidated-separation",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"import torch\n",
"from torch.autograd.functional import jacobian\n",
"import itertools\n",
"import math\n",
"\n",
"class States():\n",
" \"\"\"\n",
" This is supposed to capture the state variables of the model\n",
" \"\"\"\n",
" def __init__(self, stocks,debris):\n",
" pass\n",
"\n",
" def transition(self):\n",
" #tying the transition functions in here might be useful.\n",
" pass\n",
"\n",
" \n",
"class LDAPV():\n",
" \"\"\"\n",
" This class represents the outputs from the neural network.\n",
" They are of two general categories:\n",
" - the launch functions \n",
" \"\"\"\n",
" def __init__(self, launches, partials):\n",
" self.launches = launches\n",
" self.partials = partials\n",
" \n",
" def launch_single(self, constellation):\n",
" #returns the launch decision for the constellation of interest\n",
" \n",
" filter_tensor = torch.zeros(NUMBER_CONSTELLATIONS)\n",
" filter_tensor[constellation] = 1.0\n",
" \n",
" return self.launches @ filter_tensor\n",
" \n",
" def launch_vector(self, constellation):\n",
" #returns the launch decision for the constellation of interest\n",
" \n",
" filter_tensor = torch.zeros(NUMBER_CONSTELLATIONS)\n",
" filter_tensor[constellation] = 1.0\n",
" \n",
" return self.launches * filter_tensor\n",
" \n",
" def partial_vector(self, constellation):\n",
" #returns the partials of the value function corresponding to the constellation of interest\n",
" \n",
" filter_tensor = torch.zeros(NUMBER_CONSTELLATIONS)\n",
" filter_tensor[constellation] = 1.0\n",
" \n",
" return self.partials @ filter_tensor\n",
" \n",
" def partial_matrix(self, constellation):\n",
" #returns the partials of the value function corresponding to the constellation of interest\n",
" \n",
" filter_tensor = torch.zeros(NUMBER_CONSTELLATIONS)\n",
" filter_tensor[constellation] = 1.0\n",
" \n",
" return self.partials * filter_tensor\n",
" \n",
" def __str__(self):\n",
" return \"Launch Decisions and Partial Derivatives of value function with\\n\\t states\\n\\t\\t {}\\n\\tPartials\\n\\t\\t{}\".format(self.launches,self.partials)"
]
},
{
"cell_type": "markdown",
"id": "numeric-coral",
"metadata": {},
"source": [
"# Setup Functions\n",
"## General Composition Functions"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "detected-still",
"metadata": {},
"outputs": [],
"source": [
"### Set up functions to compose functions \n",
"# These functions will \n",
"# - compose two functions together\n",
"# - compose a function to itself n times.\n",
"\n",
"def compose(f,g):\n",
" \"\"\"\n",
" this function composes two functions f and g in\n",
" the order f(g(*args))\n",
" \n",
" it returns a function\n",
" \"\"\"\n",
" return lambda *args: f(g(*args))\n",
"\n",
"def compose_recursive_functions(fn,n):\n",
" \"\"\"\n",
" This function takes a function fn and composes it with itself n times\n",
" \n",
" Returns an array/list that contains each of the n composition levels, ordered\n",
" from fn() to fn^n()\n",
" \"\"\"\n",
"\n",
" \n",
" #Set base conditions\n",
" out_func = None\n",
" out_func_list =[]\n",
"\n",
" #build the compositions of functions\n",
" for f in itertools.repeat(fn, n):\n",
" #if first iteration\n",
" if out_func is None:\n",
" out_func = f\n",
" else:\n",
" out_func = compose(f,out_func)\n",
"\n",
" #append the current composition level\n",
" out_func_list.append(out_func)\n",
" \n",
" return out_func_list"
]
},
{
"cell_type": "markdown",
"id": "fundamental-fusion",
"metadata": {},
"source": [
"# Functions related to transitions"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "palestinian-uganda",
"metadata": {},
"outputs": [],
"source": [
"def single_transition(laws_motion_fn, profit_fn, stocks, debris, neural_network):\n",
" \"\"\"\n",
" This function represents the inverted envelope conditions.\n",
" It lets us express the derivatives of the value function evaluated at time $t+1$ in terms of quantities from time period $t$.\n",
" \n",
" It takes a few derivatives (jacobians), and due to the nature of the return as tuples\n",
" they must be concatenated.\n",
" \n",
" It returns the transitioned values\n",
" \"\"\"\n",
" \n",
" launch_and_partials = neural_network.forward(stocks,debris)\n",
" \n",
" #Get the jacobian\n",
" a = jacobian(laws_motion_fn, (stocks, debris, launch_and_partials.launches))\n",
" \n",
" #Reassemble the Jacobian nested tuples into the appropriate tensor\n",
" A = BETA * torch.cat((torch.cat((a[0][0],a[0][1]),dim=1),torch.cat((a[1][0],a[1][1]),dim=1)), dim=0)\n",
"\n",
" #TODO: figure out some diagnostics for this section\n",
" #Possibilities include:\n",
" # - Det(A) ~= 0\n",
" # - EigVal(A) ~= 0\n",
" # - A.inverse() with a try catch system to record types of returns\n",
" #Alternatively, \n",
" #if abs(a.det())\n",
" \n",
" #Calculate the item to transition\n",
" f_jacobians = jacobian(profit_fn,(stocks, debris, launch_and_partials.launches))\n",
"\n",
" #issue with shape here: my launch function is for all launches, not just a single launch.\n",
" f_theta = torch.cat([f_jacobians[0][0], f_jacobians[1][0]],axis=0) \n",
"\n",
" #FIX: issue with scaling here, in that I need to get the constellation level partials, not the full system.\n",
" #I'm not sure how to get each \n",
" T = launch_and_partials.partials - f_theta\n",
"\n",
" #Includes rearranging the jacobian of profit.\n",
"\n",
" #Return the transitioned values\n",
" return ( A.inverse()/BETA ) @ T\n",
"\n",
"\n",
"# This function wraps the single transition and handles updating dates etc.\n",
"def transition_wrapper(data_in):\n",
" \"\"\"\n",
" Unpacks the state tuple, advances the states one period, and repacks\n",
" the results for the next iteration.\n",
" \"\"\"\n",
" #unpack states and functions\n",
" stocks, debris,profit_fn, laws_motion_fn, neural_network = data_in\n",
" \n",
" #Calculate new states\n",
" new_stocks, new_debris = laws_motion_fn(stocks, debris, neural_network)\n",
"\n",
" #This gets the transition of the value function derivatives over time.\n",
" transitioned = single_transition(\n",
" laws_motion_fn, \n",
" profit_fn, \n",
" stocks, debris, #states\n",
" neural_network #launch function\n",
" )\n",
" \n",
" #collects the data back together for return, including the updated state variables\n",
" data_out = new_stocks, new_debris, profit_fn, laws_motion_fn, transitioned, neural_network\n",
" \n",
" return data_out"
]
},
{
"cell_type": "markdown",
"id": "mexican-illness",
"metadata": {},
"source": [
"## Setup functions related to the problem"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "republican-designer",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
" \n",
"\n",
"### functions\n",
"\n",
"def survival(stock, debris):\n",
" \"\"\"\n",
" This is a basic, deterministic survival function. \n",
" It is based on the CDF of an exponential distribution.\n",
" \"\"\"\n",
" #SURVIVAL FUNCTION BASED ON AN EXPONENTIAL DISTRIBUTION\n",
" return 1 - torch.exp(-SCALING * stock - debris)\n",
"\n",
"\n",
"def laws_of_motion(stock, debris, neural_network):\n",
" \"\"\"\n",
" This function updates state variables (stock and debris), according \n",
" to the laws of motion.\n",
" \n",
" It returns the state variables as a (new_stock, new_debris) tuple.\n",
" \"\"\"\n",
" launches_and_partials = neural_network.forward(stock,debris)\n",
" \n",
" s = survival(stock,debris)\n",
" #Notes: Survival is a global function.\n",
" \n",
" new_stock = stock*s + launches_and_partials.launches\n",
" \n",
"\n",
" new_debris = (1-DELTA)*debris + LAUNCH_DEBRIS_RATE * launches_and_partials.launches.sum() + COLLISION_DEBRIS_RATE*(1-s) @ stock\n",
" \n",
" return (new_stock, new_debris)\n",
"\n",
"#This is not a good specification of the profit function, but it will work for now.\n",
"def profit(stock, debris, launches):\n",
" return UTIL_WEIGHTS @ stock - LAUNCH_COST*launches\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "introductory-forwarding",
"metadata": {},
"outputs": [],
"source": [
"# Launch functions\n",
"class ModelMockup():\n",
" def __init__(self):\n",
" pass\n",
" #just a wrapper for NN like results\n",
" \n",
" def forward(self,stock, debris):\n",
" \"\"\"\n",
" Temporary launch function \n",
" \"\"\"\n",
" return LDAPV(torch.ones(len(stock), requires_grad=True), torch.ones((len(stock)+len(debris), len(stock)+len(debris)), requires_grad=True))\n",
"\n",
"\n"
]
},
{
"cell_type": "markdown",
"id": "concrete-movement",
"metadata": {},
"source": [
"# Actual calculations"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "wrong-values",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"(3, 11)"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"#number of states\n",
"N = 5\n",
"\n",
"#set states\n",
"stocks = torch.ones(N)\n",
"#Last one is different if there are more than 1 (just for variety, no theoretical reason)\n",
"if N > 1:\n",
" stocks[-1] = 0.5\n",
"#now add the tracking requirement in place\n",
"stocks.requires_grad_()\n",
"\n",
"#Setup Debris\n",
"debris = torch.tensor([2.2],requires_grad=True)\n",
"\n",
"#CHANGE LATER: Launch is currently a value, should be a function (i.e. neural network)\n",
"launches = ModelMockup()\n",
"\n",
"#Starting point\n",
"# Stocks, debris, profit fn, laws of motion, item to transition, Launch function\n",
"base_data = (stocks,debris, profit, laws_of_motion, launches)\n",
"\n",
"#Parameters\n",
"SCALING = torch.ones(5)\n",
"DELTA = 0.9\n",
"LAUNCH_DEBRIS_RATE = 0.005\n",
"LAUNCH_COST = 1.0\n",
"COLLISION_DEBRIS_RATE = 0.0007\n",
"UTIL_WEIGHTS = torch.tensor([1,-0.2,0,0,0])\n",
"BETA = 0.95\n",
"\n",
"#Constants determining iterations etc.\n",
"NUMBER_CONSTELLATIONS = 5\n",
"NUMBER_DEBRIS_TRACKERS = 1\n",
"NUMBER_OF_CHOICE_VARIABLES = 1\n",
"NUMBER_OF_REQUIRED_ITERATED_CONDITIONS = (NUMBER_CONSTELLATIONS+NUMBER_DEBRIS_TRACKERS+(NUMBER_OF_CHOICE_VARIABLES*NUMBER_CONSTELLATIONS))\n",
"NUMBER_OF_REQUIRED_ITERATIONS = math.ceil(NUMBER_OF_REQUIRED_ITERATED_CONDITIONS/NUMBER_CONSTELLATIONS)\n",
"NUMBER_OF_REQUIRED_ITERATIONS,NUMBER_OF_REQUIRED_ITERATED_CONDITIONS"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "charitable-cleanup",
"metadata": {},
"outputs": [],
"source": [
"#Calculate the number of iterations\n",
"#Get the values from transitions\n",
"wt1 = compose_recursive_functions(transition_wrapper,NUMBER_OF_REQUIRED_ITERATIONS)"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "floppy-arkansas",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Launch Decisions and Partial Derivatives of value function with\n",
"\t states\n",
"\t\t tensor([1., 1., 1., 1., 1.], requires_grad=True)\n",
"\tPartials\n",
"\t\ttensor([[1., 1., 1., 1., 1., 1.],\n",
" [1., 1., 1., 1., 1., 1.],\n",
" [1., 1., 1., 1., 1., 1.],\n",
" [1., 1., 1., 1., 1., 1.],\n",
" [1., 1., 1., 1., 1., 1.],\n",
" [1., 1., 1., 1., 1., 1.]], requires_grad=True)\n"
]
}
],
"source": [
"m = ModelMockup()\n",
"print(m.forward(stocks,debris))"
]
},
{
"cell_type": "markdown",
"id": "dressed-preparation",
"metadata": {},
"source": [
"# Optimality conditions"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "hydraulic-powder",
"metadata": {},
"outputs": [],
"source": [
"#Optimality math\n",
"def optimality(stocks\n",
" ,debris\n",
" ,profit_fn\n",
" ,laws_motion_fn\n",
" ,neural_net\n",
" ):\n",
" \"\"\"\n",
" This function takes in the \n",
" - stock levels\n",
" - debris levels\n",
" - profit function\n",
" - laws of motion\n",
" - results from the neural network\n",
" and returns the parts used to make up the optimality conditions\n",
" \"\"\"\n",
" \n",
" #split out the results from the forwarded network\n",
" launch_and_partials = neural_net.forward(stocks, debris)\n",
" \n",
" #Derivative of the value function with respect to choice functions\n",
" #this returns derivatives with respect to every launch, so I've removed that\n",
" fx = jacobian(profit_fn, (stocks,debris,launch_and_partials.launches))[-1][:,0]\n",
" \n",
" \n",
" #The following returns a tuple of tuples of tensors.\n",
" #the first tuple contains jacobians related to laws of motion for stocks\n",
" #the second tuple contains jacobians related to laws of motion for debris.\n",
" #we need the derivatives related to both\n",
" b = jacobian(laws_motion_fn,(stocks,debris,launch_and_partials.launches))\n",
" B = torch.cat((b[0][2],b[1][2].T),axis=1)\n",
"\n",
" #return the optimality values\n",
" return fx + BETA *B @ launch_and_partials.partials\n",
"\n",
"\n"
]
},
{
"cell_type": "markdown",
"id": "provincial-medline",
"metadata": {},
"source": [
"## Now to set up the recursive set of optimality conditions"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "monetary-bermuda",
"metadata": {},
"outputs": [
{
"ename": "TypeError",
"evalue": "'ModelMockup' object is not callable",
"output_type": "error",
"traceback": [
"\u001b[0;31m---------------------------------------\u001b[0m",
"\u001b[0;31mTypeError\u001b[0mTraceback (most recent call last)",
"\u001b[0;32m<ipython-input-10-282ba729dd5a>\u001b[0m in \u001b[0;36m<module>\u001b[0;34m\u001b[0m\n\u001b[1;32m 1\u001b[0m \u001b[0mbase_data\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0;34m(\u001b[0m\u001b[0mstocks\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0mdebris\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mprofit\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mlaws_of_motion\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mtorch\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mones\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;36m6\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mrequires_grad\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;32mTrue\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0mlaunches\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 2\u001b[0m \u001b[0;32mfor\u001b[0m \u001b[0mf\u001b[0m \u001b[0;32min\u001b[0m \u001b[0mcompose_recursive_functions\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mtransition_wrapper\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;36m3\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m----> 3\u001b[0;31m \u001b[0mresult\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mf\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mbase_data\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 4\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 5\u001b[0m \u001b[0;31m#unpack results\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
"\u001b[0;32m<ipython-input-3-fcc1e6d7dbd7>\u001b[0m in \u001b[0;36mtransition_wrapper\u001b[0;34m(data_in)\u001b[0m\n\u001b[1;32m 48\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 49\u001b[0m \u001b[0;31m#Calculate new states\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m---> 50\u001b[0;31m \u001b[0mnew_stocks\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mnew_debris\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mlaws_motion_fn\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mstocks\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0mdebris\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mlaunch_fn\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mstocks\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0mdebris\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 51\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 52\u001b[0m \u001b[0;31m#WARNING: RECURSION: You may break your head...\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
"\u001b[0;31mTypeError\u001b[0m: 'ModelMockup' object is not callable"
]
}
],
"source": [
"def recursive_optimality(base_data,transition_wrapper):\n",
" #create and return a set of transition wrappers\n",
" for f in compose_recursive_functions(transition_wrapper,3):\n",
" result = f(base_data)\n",
"\n",
" #unpack results\n",
" new_stocks, new_debris, profit_fn, laws_motion_fn, launch_and_partials = result\n",
"\n",
" optimal = optimality(new_stocks\n",
" ,new_debris\n",
" ,profit_fn\n",
" ,laws_motion_fn\n",
" ,launch_and_partials\n",
" )\n",
" yield optimal"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "imported-richards",
"metadata": {},
"outputs": [],
"source": [
"def optimality_wrapper(stocks,debris, launches):\n",
" return optimality(stocks,debris, profit, laws_of_motion, launches)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "excited-question",
"metadata": {},
"outputs": [],
"source": [
"optimality_wrapper(stocks,debris,launches)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "outer-wages",
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "markdown",
"id": "chronic-drilling",
"metadata": {},
"source": [
"Notes so far\n",
" - This takes $\\frac{\\partial W_t}{\\partial \\theta_t}$ as given. I need to reframe this to clean that out. \n",
" - a failed method was just to calculate the partials of W for t = t. That doesn't work.\n",
" - Because the B value is not invertible, you can't just solve for the partials.\n",
" - note the approach below.\n",
" - Use part of the launch NN to approximate the gradient conditions.\n",
" - Complicates things slightly as it would require adding more outputs to the NN\n",
" - Would allow for PDE solution for value function.\n",
" - This might be useful for free-entry conditions.\n",
" - Not very helpful otherwise.\n",
" - I don't think it adds much to the final analysis.\n",
" \n",
" \n",
" - There is an issue in how to handle the multiple value functions/loss functions\n",
" - Current status\n",
" - The launch function (soon to be NN) returns launch choices for each constellation.\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "necessary-incident",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.8"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
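The `compose_recursive_functions` helper defined in the notebook above is the piece that chains transitions across periods; here is a standalone copy with a concrete check:

```python
import itertools

def compose(f, g):
    """Compose two functions in the order f(g(*args))."""
    return lambda *args: f(g(*args))

def compose_recursive_functions(fn, n):
    """Return [fn, fn(fn), ..., fn applied n times]."""
    out_func, out_func_list = None, []
    for f in itertools.repeat(fn, n):
        # first pass uses fn itself; later passes wrap one more application
        out_func = f if out_func is None else compose(f, out_func)
        out_func_list.append(out_func)
    return out_func_list

increment = lambda x: x + 1
levels = compose_recursive_functions(increment, 3)
```

Each entry applies `increment` one more time than the last, so `levels[k](0)` equals `k + 1`.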

@ -12,6 +12,7 @@
 - Thoughts on converting my `connect_transitions_to_otimality_conditions` work to this.
   I need to import torch into that section, and build a loss function.
 - The basics of this model
+- Use just a basic MSELoss wrapped so that it calculates
 - add boundary conditions to loss function
 - get a basic gradient descent/optimization of launch function working.
 - add satellite deorbit to model.
@ -24,6 +25,7 @@ This is nice because it keeps them together, but it may require some thoughtful
 The issue is that I need to set up a way to integrate multiple firms at the same time.
 This may be possible through how I set up the profit functions.
+Also, I think I need to write out some

@ -0,0 +1,65 @@
{
"cells": [
{
"cell_type": "code",
"execution_count": 1,
"id": "victorian-produce",
"metadata": {},
"outputs": [],
"source": [
"a = [1,2,3]\n",
"b = [\"a\",\"b\",\"c\"]"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "sought-beginning",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[(0, (1, 'a')), (1, (2, 'b')), (2, (3, 'c'))]"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"[x for x in enumerate(zip(a,b))]"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "recent-lingerie",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.8"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

@ -0,0 +1,280 @@
import torch
from torch.autograd.functional import jacobian
import itertools
import math
############### CONSTANTS ###################
#Parameters
BETA = 0.95
#Constants determining iterations etc.
NUMBER_CONSTELLATIONS = 5
NUMBER_DEBRIS_TRACKERS = 1
NUMBER_OF_CHOICE_VARIABLES = 1
NUMBER_OF_REQUIRED_ITERATED_CONDITIONS = (NUMBER_CONSTELLATIONS+NUMBER_DEBRIS_TRACKERS+(NUMBER_OF_CHOICE_VARIABLES*NUMBER_CONSTELLATIONS))
NUMBER_OF_REQUIRED_ITERATIONS = math.ceil(NUMBER_OF_REQUIRED_ITERATED_CONDITIONS/NUMBER_CONSTELLATIONS)
############# COMPOSITION FUNCTIONS ###################
def compose(f,g):
"""
this function composes two functions f and g in
the order f(g(*args))
it returns a function.
"""
return lambda *args: f(g(*args))
def recursively_compose_functions(fn,n):
"""
This function takes a function fn and composes it with itself n times.
    Returns a list containing each of the n composition levels, ordered
    from fn() to fn^n()
"""
#Set base conditions
out_func = None
out_func_list = []
#build the compositions of functions
for f in itertools.repeat(fn, n):
        #if this is the first iteration, just take f itself
        if out_func is None:
            out_func = f
        else:
            out_func = compose(f, out_func)
        #append the current composition level
        out_func_list.append(out_func)
return out_func_list
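A quick sanity check of the composition helpers (restated here so the snippet runs standalone): composing f(x) = x + 1 with itself three times should yield increments of 1, 2, 3.

```python
import itertools

def compose(f, g):
    # f after g: f(g(*args))
    return lambda *args: f(g(*args))

def recursively_compose_functions(fn, n):
    # builds [fn, fn∘fn, ..., fn^n]
    out_func, out_func_list = None, []
    for f in itertools.repeat(fn, n):
        out_func = f if out_func is None else compose(f, out_func)
        out_func_list.append(out_func)
    return out_func_list

fns = recursively_compose_functions(lambda x: x + 1, 3)
print([g(0) for g in fns])  # → [1, 2, 3]
```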
############### Physical Model ###################
class PhysicalModel():
"""
This is designed as a default, base class for the physical model of satellite interactions.
    It captures the constants that characterize interactions, and provides a set of
    functions to calculate changes to the physical environment.
"""
    def __init__(self
                 ,collision_debris
                 ,constellations_collision_risk
                 ,debris_decay_rate
                 ,launch_debris
                 ,debris_autocatalysis_rate
                 ):
self.collision_debris = collision_debris
self.constellations_collision_risk = constellations_collision_risk
self.debris_decay_rate = debris_decay_rate
self.launch_debris = launch_debris
self.debris_autocatalysis_rate = debris_autocatalysis_rate
    def survival(self, state):
        #returns the per-constellation survival rate (not the destruction rate): exp(-hazard)
        return torch.exp(-self.constellations_collision_risk * state.stocks - state.debris)
    def transition_debris(self, state, launch_decisions):
        """
        This function transitions debris levels based off of a state and launch decision.
        """
        new_debris = ((1 - self.debris_decay_rate + self.debris_autocatalysis_rate) * state.debris  #debris decay and autocatalysis
                      + self.launch_debris * launch_decisions.sum()                                 #debris from launches
                      + self.collision_debris * ((1 - self.survival(state)) @ state.stocks))        #debris from collisions
        return new_debris
    def transition_stocks(self, state, launch_decisions):
        new_stock = self.survival(state) * state.stocks + launch_decisions
        return new_stock
def transition(self, state, launch_decisions):
"""
This function takes a state and launch decision, and updates the state according to the physical laws of motion.
It returns a State object.
"""
d = self.transition_debris(state, launch_decisions)
s = self.transition_stocks(state, launch_decisions)
return States(s,d)
class States():
"""
This is supposed to capture the state variables of the model, to create a common interface
when passing between functions.
"""
def __init__(self, stocks,debris):
self.stocks = stocks
self.debris = debris
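The debris law of motion in `transition_debris` can be checked by hand with toy numbers (all parameter values below are made up purely for illustration, not calibrated):

```python
import torch

# Illustrative, uncalibrated parameters
debris_decay_rate = 0.1
debris_autocatalysis_rate = 0.02
launch_debris = 0.5
collision_debris = 2.0

stocks = torch.tensor([1.0, 1.0])         # satellite stocks per constellation
debris = torch.tensor(4.0)                # current debris level
launches = torch.tensor([1.0, 0.0])       # launch decisions
destruction = torch.tensor([0.25, 0.25])  # stand-in for 1 - survival

new_debris = ((1 - debris_decay_rate + debris_autocatalysis_rate) * debris
              + launch_debris * launches.sum()
              + collision_debris * (destruction @ stocks))
# 0.92 * 4 + 0.5 * 1 + 2 * 0.5 = 5.18
```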
################ NEURAL NETWORK TOOLS ###################
class EstimandInterface():
"""
This defines a clean interface for working with the estimand (i.e. thing we are trying to estimate).
In general, we are trying to estimate the choice variables and the partial derivatives of the value functions.
    This class wraps the output of the neural network (or other estimand), allowing me to
- easily substitute various types of launch functions by having a common interface
- this eases testing
- check dimensionality etc without dealing with randomness
- again, easing testing
- reason more cleanly about the component pieces
- easing programming
- provide a clean interface to find constellation level launch decisions etc.
It takes inputs of two general categories:
- the choice function results
- the partial derivatives of the value function
"""
def __init__(self, partials, launches, deorbits=None):
self.partials = partials
self.launches = launches
self.deorbits = deorbits
    def launch_single(self, constellation):
        #returns the launch decision for the constellation of interest as a scalar
        filter_tensor = torch.zeros(NUMBER_CONSTELLATIONS)
        filter_tensor[constellation] = 1.0
        return self.launches @ filter_tensor
    def launch_vector(self, constellation):
        #returns the launch decision for the constellation of interest as a vector
        filter_tensor = torch.zeros(NUMBER_CONSTELLATIONS)
        filter_tensor[constellation] = 1.0
        return self.launches * filter_tensor
    def partial_vector(self, constellation):
        #returns the partials of the value function corresponding to the constellation of interest
        filter_tensor = torch.zeros(NUMBER_CONSTELLATIONS)
        filter_tensor[constellation] = 1.0
        return self.partials @ filter_tensor
    def partial_matrix(self, constellation):
        #returns the partials of the value function corresponding to
        #the constellation of interest as a matrix
        filter_tensor = torch.zeros(NUMBER_CONSTELLATIONS)
        filter_tensor[constellation] = 1.0
        return self.partials * filter_tensor
    def __str__(self):
        #just a human readable descriptor
        return "Launch decisions and partial derivatives of value function with\n\tLaunches\n\t\t{}\n\tPartials\n\t\t{}".format(self.launches, self.partials)
############## ECONOMIC MODEL ############
class EconomicModel():
"""
This class describes the set of profit functions involved in the value function iteration.
"""
def __init__(self,discount_factor, profit_objects):
self.discount_factor = discount_factor
self.profit_objects = profit_objects #A list of Profit objects
def constellation_period_profits(self, state, estimand_interface, constellation):
"""
This function calculates the current period profits for a single given constellation.
"""
return self.profit_objects[constellation].profit(state,estimand_interface,constellation)
def period_profits(self, state, estimand_interface):
"""
This function calculates the current period profits for each constellation.
"""
profits = []
        for i, profit_object in enumerate(self.profit_objects):
constellation_profit = self.constellation_period_profits(state, estimand_interface, i)
profits.append(constellation_profit)
return profits
@property
def number_constellations(self):
        return len(self.profit_objects)
#Abstract class describing profit. Each subclass will connect a profit function "style" to a specific instance of parameters.
class ProfitFunctions(metaclass=abc.ABCMeta):
@abc.abstractmethod
def profit(self,state,estimand_interface,constellation_number):
pass
#TODO: Should I attach the jacobian here? It may simplify things.
#def jacobian()
class LinearProfit(ProfitFunctions):
"""
The simplest type of profit function available.
"""
def __init__(self, benefit_weight, launch_cost, deorbit_cost=0):
self.benefit_weights = benefit_weight
self.launch_cost = launch_cost
self.deorbit_cost = deorbit_cost
def profit(self, state, estimand_interface, constellation_number):
#Calculate the profit
        profits = self.benefit_weights @ state.stocks \
- self.launch_cost * estimand_interface.launches[constellation_number] #\
#- deorbit_cost @ estimand_interface.deorbits[constellation_number]
return profits
#other profit functions to implement
# price competition (substitution)
# military (complementarity)
############### TRANSITION AND OPTIMALITY FUNCTIONS #################
#Rewrite these to use the abstractions presented earlier
def single_transition(physical_model, economic_model, states, estimand_interface):
"""
    This function represents the inverted envelope conditions.
    It allows us to describe the derivatives of the value function evaluated at time $t+1$ in terms of time-$t$ quantities.
    It takes several derivatives (jacobians); because they are returned as tuples,
    they must be concatenated.
    It returns the transitioned values.
"""
#TODO: rewrite using the current abstractions
#possibly move jacobians of profit functions and physical models to those classes
pass
def transition_wrapper(data_in): # Identify a way to eliminate this maybe?
"""
This function wraps the single transition and handles updating states etc.
"""
#TODO: rewrite using current abstractions
pass
#Optimality math
def optimality(stocks
,debris
,profit_fn
,laws_motion_fn
,neural_net
):
"""
This function takes in the
- stock levels
- debris levels
- profit function
- laws of motion
- results from the neural network
and returns the parts used to make up the optimality conditions
"""
pass
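Relevant to filling in `single_transition`: when `torch.autograd.functional.jacobian` is called on a function of a tuple of inputs, it returns one Jacobian per input, which is why the notebook code concatenates the pieces. A minimal illustration:

```python
import torch
from torch.autograd.functional import jacobian

def f(x, y):
    # vector-valued toy function of two tensor inputs
    return x * y.sum()

x = torch.tensor([1.0, 2.0])
y = torch.tensor([3.0, 4.0])

jx, jy = jacobian(f, (x, y))  # one (2, 2) Jacobian per input
```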

@ -3,7 +3,7 @@
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": 1, "execution_count": 1,
"id": "loved-quarter", "id": "weird-cloud",
"metadata": { "metadata": {
"tags": [] "tags": []
}, },
@ -11,13 +11,12 @@
"source": [ "source": [
"import torch\n", "import torch\n",
"from torch.autograd.functional import jacobian\n", "from torch.autograd.functional import jacobian\n",
"import itertools\n", "import itertools"
"import torch #pytorch"
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"id": "educated-cosmetic", "id": "ready-attention",
"metadata": {}, "metadata": {},
"source": [ "source": [
"# Setup Functions\n", "# Setup Functions\n",
@ -27,7 +26,7 @@
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": 2, "execution_count": 2,
"id": "single-currency", "id": "convertible-thinking",
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
@ -74,7 +73,7 @@
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"id": "ordered-arrow", "id": "cloudy-creek",
"metadata": {}, "metadata": {},
"source": [ "source": [
"# functions related to transitions" "# functions related to transitions"
@ -83,7 +82,7 @@
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": 3, "execution_count": 3,
"id": "cellular-consensus", "id": "musical-virtue",
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
@ -157,7 +156,7 @@
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"id": "entertaining-hurricane", "id": "quick-destruction",
"metadata": {}, "metadata": {},
"source": [ "source": [
"## Setup functions related to the problem" "## Setup functions related to the problem"
@ -166,7 +165,7 @@
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": 4, "execution_count": 4,
"id": "supposed-rating", "id": "strategic-classics",
"metadata": { "metadata": {
"tags": [] "tags": []
}, },
@ -188,7 +187,7 @@
" \"\"\"\n", " \"\"\"\n",
" Temporary launch function \n", " Temporary launch function \n",
" \"\"\"\n", " \"\"\"\n",
" return torch.ones(5, requires_grad=True)\n", " return torch.ones(5, requires_grad=True),\n",
"\n", "\n",
"def laws_of_motion(stock, debris, launch):\n", "def laws_of_motion(stock, debris, launch):\n",
" \"\"\"\n", " \"\"\"\n",
@ -211,12 +210,14 @@
"\n", "\n",
"#This is not a good specification of the profit function, but it will work for now.\n", "#This is not a good specification of the profit function, but it will work for now.\n",
"def profit(stock, debris, launches):\n", "def profit(stock, debris, launches):\n",
" return UTIL_WEIGHTS @ stock - LAUNCH_COST*launches" " return UTIL_WEIGHTS @ stock - LAUNCH_COST*launches\n",
"\n",
"\n"
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"id": "located-anatomy", "id": "hairy-finding",
"metadata": {}, "metadata": {},
"source": [ "source": [
"# Actual calculations" "# Actual calculations"
@ -225,13 +226,17 @@
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": 5, "execution_count": 5,
"id": "aggregate-hughes", "id": "renewable-handy",
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"#number of states\n",
"N = 5\n",
"\n",
"#set states\n", "#set states\n",
"stocks = torch.ones(5)\n", "stocks = torch.ones(N)\n",
"#Last one is different\n", "#Last one is different if there are more than 1 (just for variety, no theoretical reason)\n",
"if N > 1:\n",
" stocks[-1] = 0.5\n", " stocks[-1] = 0.5\n",
"#now add the tracking requirement in place\n", "#now add the tracking requirement in place\n",
"stocks.requires_grad_()\n", "stocks.requires_grad_()\n",
@ -244,7 +249,7 @@
"\n", "\n",
"#Starting point\n", "#Starting point\n",
"# Stocks, debris, profit fn, laws of motion, item to transition, Launch function\n", "# Stocks, debris, profit fn, laws of motion, item to transition, Launch function\n",
"base_data = (stocks,debris, profit, laws_of_motion, torch.ones(6, requires_grad=True),launches)\n", "base_data = (stocks,debris, profit, laws_of_motion, torch.ones(N+1, requires_grad=True),launches)\n",
"\n", "\n",
"#Parameters\n", "#Parameters\n",
"SCALING = torch.ones(5)\n", "SCALING = torch.ones(5)\n",
@ -259,7 +264,7 @@
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": 32, "execution_count": 32,
"id": "molecular-express", "id": "engaged-acceptance",
"metadata": {}, "metadata": {},
"outputs": [ "outputs": [
{ {
@ -289,7 +294,7 @@
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"id": "broken-iraqi", "id": "ready-quick",
"metadata": {}, "metadata": {},
"source": [ "source": [
"Also, maybe I can create a `Model` class that upon construction will capture the necessary constants, functions, etc.\n" "Also, maybe I can create a `Model` class that upon construction will capture the necessary constants, functions, etc.\n"
@ -297,7 +302,7 @@
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"id": "illegal-philosophy", "id": "intensive-forum",
"metadata": {}, "metadata": {},
"source": [ "source": [
"# Optimality conditions" "# Optimality conditions"
@ -305,13 +310,13 @@
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": 7, "execution_count": 54,
"id": "portuguese-soldier", "id": "prospective-mambo",
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"#Optimality condition\n", "#Optimality math\n",
"def optimality(stocks,debris,profit_fn,laws_motion_fn,launch_fn, iterated_item):\n", "def optimality_parts(stocks,debris,profit_fn,laws_motion_fn,launch_fn):\n",
" #Derivative of the value function with respect to choice functions\n", " #Derivative of the value function with respect to choice functions\n",
" #this returns derivatives with respect to every launch, so I've removed that\n", " #this returns derivatives with respect to every launch, so I've removed that\n",
" fx = jacobian(profit_fn, (stocks,debris,launch_fn(stocks,debris)))[-1][:,0]\n", " fx = jacobian(profit_fn, (stocks,debris,launch_fn(stocks,debris)))[-1][:,0]\n",
@ -325,13 +330,20 @@
" B = torch.cat((b[0][2],b[1][2].T),axis=1)\n", " B = torch.cat((b[0][2],b[1][2].T),axis=1)\n",
"\n", "\n",
"\n", "\n",
" return fx, B\n",
"\n",
"#Optimality condition\n",
"def optimality(stocks,debris,profit_fn,laws_motion_fn,launch_fn, iterated_item):\n",
" #calculate portions of the optimality values\n",
" fx, B = optimality_parts(stocks,debris,profit_fn,laws_motion_fn,launch_fn)\n",
"\n",
" return fx + BETA *B @ iterated_item" " return fx + BETA *B @ iterated_item"
] ]
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": 11, "execution_count": 11,
"id": "hundred-bacteria", "id": "distinguished-consultancy",
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
@ -341,7 +353,7 @@
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": 12, "execution_count": 12,
"id": "temporal-fancy", "id": "stylish-plaintiff",
"metadata": {}, "metadata": {},
"outputs": [ "outputs": [
{ {
@ -361,7 +373,7 @@
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"id": "commercial-liverpool", "id": "accredited-wales",
"metadata": {}, "metadata": {},
"source": [ "source": [
"## Now to set up the recursive set of optimality conditions" "## Now to set up the recursive set of optimality conditions"
@ -370,7 +382,7 @@
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": 35, "execution_count": 35,
"id": "simple-steering", "id": "steady-chart",
"metadata": {}, "metadata": {},
"outputs": [ "outputs": [
{ {
@ -408,7 +420,7 @@
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": 46, "execution_count": 46,
"id": "amber-front", "id": "distributed-mailing",
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
@ -419,7 +431,7 @@
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": 49, "execution_count": 49,
"id": "least-shock", "id": "considerable-settle",
"metadata": {}, "metadata": {},
"outputs": [ "outputs": [
{ {
@ -440,7 +452,7 @@
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": 52, "execution_count": 52,
"id": "offshore-announcement", "id": "second-aquarium",
"metadata": {}, "metadata": {},
"outputs": [ "outputs": [
{ {
@ -454,14 +466,36 @@
"output_type": "execute_result" "output_type": "execute_result"
} }
], ],
"source": []
},
{
"cell_type": "markdown",
"id": "innocent-visibility",
"metadata": {},
"source": [ "source": [
"torch.nn.MSELoss()(torch.tensor([1.0]), torch.tensor([2]))" "Notes so far\n",
" - This takes $\\frac{\\partial W_t}{\\partial \\theta_t}$ as given. I need to reframe this to clean that out. \n",
" - a failed method was just to calculate the partials of W for t = t. That doesn't work.\n",
" - Because the B value is not invertible, you can't just solve for the partials.\n",
" - note the approach below.\n",
" - Use part of the launch NN to approximate the gradient conditions.\n",
" - Complicates things slightly as it would require adding more outputs to the NN\n",
" - Would allow for PDE solution for value function.\n",
" - This might be useful for free-entry conditions.\n",
" - Not very helpful otherwise.\n",
" - I don't think it adds much to the final analysis.\n",
" \n",
" \n",
" - There is an issue in how to handle the multiple value functions/loss functions\n",
" - Current status\n",
" - The launch function (soon to be NN) returns launch choices for each constellation.\n",
"\n"
] ]
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": null, "execution_count": null,
"id": "satellite-match", "id": "operating-carpet",
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [] "source": []
