\documentclass[../Main.tex]{subfiles}
\graphicspath{{\subfix{Assets/img/}}}
\begin{document}
The computational approach I have decided to take is an application of \cite{MALIAR2018}, in which the policy function is approximated by a neural network. The approach exploits the fact that the Euler equation implicitly defines the optimal policy function: $0 = f(x(\theta), \theta)$. Squaring the residual turns this into a mean squared error loss, $f^2(x(\theta), \theta)$, allowing one to find $x(\cdot)$ as the solution to a minimization problem. By choosing a neural network as the functional approximation, we are able to take advantage of the significant computational and practical advances currently revolutionizing machine learning. In particular, we can use common tools such as Python, PyTorch, and hosted accelerators (e.g.\ Google Colab), which have been optimized for relatively high performance and straightforward development.
\subsubsection{Computational Plan}
I have decided to use Python and the PyTorch neural network library for this project. The most difficult step is constructing the Euler equations. For high-dimensional problems involving differentiation, three general computational approaches exist:
\begin{itemize}
\item Use a symbolic library (SymPy) or language (Mathematica) to derive the Euler equations. This has the disadvantage of being (very) slow, but the advantage that for a single problem specification it only needs to be done once. It requires taking a matrix inverse, which can easily complicate the resulting formulas and is computationally expensive, roughly an $O(n^3)$ operation.
\item Use numerical differentiation (ND). The primary issue with ND is that errors can grow quite quickly when performing algebra on numerical derivatives, so one must track how errors grow and compound within the specific formulation of the problem.
\item Use automatic differentiation (AD) to differentiate the computer code directly. This approach has several major benefits.
\begin{itemize}
\item Precision is high, because AD computes exact derivatives of the program (up to floating-point rounding).
\item Machine learning is heavily dependent on AD, so the tools are plentiful and well tested.
\item The coupling of AD and ML leads to tight integration with the neural network libraries, simplifying the calibration procedure.
\end{itemize}
\end{itemize}
I have chosen to use AD to generate an Euler equation function, which will then be the basis of the loss function. The first step is to construct the intertemporal transition functions (e.g.\ \ref{put_refs_here}).
%Not sure how much detail to use.
%I'm debating on describing how it is done.
These take derivatives of the value function at time $t$ as input, and output derivatives of the value function at time $t+1$. Once this function is in place, it can be combined with the laws of motion in an iterated manner to transition between time $t$ and time $t+k$. I did so by coding a function that iteratively composes the transition and laws-of-motion functions, returning a $k$-period transition function. The second step is to generate functions representing the optimality conditions; by taking the appropriate derivatives of the laws of motion and utility functions, these can be constructed explicitly. Once these two functions are completed, they can be combined to create the Euler equations, as described in appendix \ref{appx??}.
%%% Is it FaFCCs or recursion that allows this to occur?
%%% I believe both are ways to approach the problem.
%\paragraph{Functions As First Class Citizens}
%The key computer science tool that makes this possible is the concept
%of ``functions as first class citizens'' (FaFCCs).
%In every computer language there are primitive values that functions
%operate on.
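As a concrete illustration of the AD step, the sketch below (in PyTorch) shows the two building blocks described above: obtaining an exact derivative from ordinary code, and composing a one-period transition into a $k$-period transition. The log utility and the toy law of motion are placeholder assumptions for illustration, not the model's actual specification.

```python
import torch

def utility(c):
    # Stand-in period utility; any differentiable code works here.
    return torch.log(c)

# Autograd returns the exact derivative u'(c) = 1/c, up to float rounding,
# without any symbolic manipulation or finite-difference error.
c = torch.tensor(2.0, requires_grad=True)
(marginal_utility,) = torch.autograd.grad(utility(c), c)

def iterate(transition, k):
    # Compose a one-period transition with itself to map time t to t + k.
    def k_period(state):
        for _ in range(k):
            state = transition(state)
        return state
    return k_period

# Toy law of motion s' = 2s, iterated three periods.
three_period = iterate(lambda s: 2.0 * s, 3)
```

In the actual model, the same composition pattern applies with the transition and laws-of-motion functions in place of the toy examples.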
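Putting the pieces together, a minimal training sketch: the network is fit by minimizing the mean squared Euler residual over random draws from the state space. The residual $f(x, \theta) = x - \theta^2$ (whose exact solution is $x(\theta) = \theta^2$) is a hypothetical stand-in; the real residual would come from the construction described above.

```python
import torch

torch.manual_seed(0)

def euler_residual(x, theta):
    # Placeholder for the model's Euler equation f(x(theta), theta) = 0.
    return x - theta**2

# Small neural network approximating the policy x(theta).
policy = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1)
)

opt = torch.optim.Adam(policy.parameters(), lr=1e-2)
for step in range(2000):
    # Train on random draws from the state space rather than observed data.
    theta = torch.rand(256, 1)  # states sampled uniformly from [0, 1]
    loss = euler_residual(policy(theta), theta).pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

print(policy(torch.tensor([[0.5]])).item())  # should be close to 0.5**2 = 0.25
```

Adam is used here for convenience; any stochastic-gradient variant works, and the loop is unchanged once the toy residual is replaced by the model's Euler equations.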
%When a language considers FaFCCs, functions are one of the primitives
%that functions can operate on.
%This is how we can get
%AD in pytorch does not work by FaFCC though, instead constructing a computational graph.
\paragraph{Training}
With the Euler equation and resulting loss function in place, standard training approaches can be used to fit the policy function. I plan to use a variant of stochastic gradient descent. Normally, neural networks are trained on real-world data; as this is a synthetic model, I plan to train it on random draws from the state space. If I can find data on how satellites are and have been distributed, I plan to sample from that distribution instead.
\paragraph{Heterogeneous Agents}
One key question is how to handle the case of heterogeneous agents. When the laws of motion depend on other agents' decisions, as in the case described in \ref{lawsOFMotion}, intertemporal iteration may require knowing the other agents' best-response functions. I believe I can model this in the constellation operator's case by solving for the policy functions of each class of operator simultaneously. I would like to verify this approach, as I have not yet worked through all of the underlying mathematics.
\subsubsection{Existence concerns}
%check matrix inverses etc.
%I am currently working on a plan to guarantee existence of solutions.
Some of what I want to do is to check crucial values numerically, as well as examine the necessary Inada conditions.
\end{document}