\documentclass[12pt,twoside]{article}
\renewcommand{\baselinestretch}{1}
\setcounter{page}{1}
\setlength{\textheight}{21.6cm}
\setlength{\textwidth}{14cm}
\setlength{\oddsidemargin}{1cm}
\setlength{\evensidemargin}{1cm}
\pagestyle{myheadings}
\thispagestyle{empty}
\markboth{\small{John Nixon}}{\small{Notes on systems of partial differential equations}}
\usepackage{caption}
\usepackage{verbatim} %allows some sections to be temporarily commented out, which was
%useful for correcting compilation errors that caused a problem with the long captions in two
%tables (search for 'comment' to find them)
\usepackage{enumitem}
\usepackage{amssymb}
\usepackage{longtable}
\usepackage{float}
\usepackage{subfigure}
\usepackage{multirow}
\usepackage{bigstrut}
\usepackage{bigdelim}
\usepackage{pdflscape}
\usepackage{adjustbox}
\usepackage{mleftright}
\usepackage{abraces}
%\usepackage{cmll}
\usepackage{pifont}
\usepackage[utf8]{inputenc}% add accented characters directly
\restylefloat{figure}
\restylefloat{table}
\usepackage{booktabs}
\usepackage{hyperref}%The inclusion of package hyperref converts all the references to equations etc. to internal links
\usepackage{array}
\usepackage{mathtools}% includes amsmath
\usepackage{amscd}
\usepackage{amsthm}
\usepackage{xypic}
%\usepackage{MnSymbol}
%\everymath{\mathtt{\xdef\tmp{\fam\the\fam\relax}\aftergroup\tmp}}
%\everydisplay{\mathtt{\xdef\tmp{\fam\the\fam\relax}\aftergroup\tmp}}
\usepackage{bm}
\begin{document}
\setbox0\hbox{$x$}
\newcommand{\un}{\underline}
%\centerline{\bf Mathematica Aeterna, Vol. x, 201x, no. xx, xxx - xxx}%uncomment this in published version
Date: 2025-12-11
\centerline{}
\centerline{}
\centerline{\Large{\bf Partial differential equations}}
\centerline{}
\section{Introduction}
The literature of basic work on partial differential equations (PDE's) and systems of them seems to be mainly divided into two parts: that based on knowledge of modern differential geometry, and that based only on multivariable calculus.
I am less familiar with the first part and am still getting to grips with what seems most important (\cite{rhermann1965,schouten_kulk1969,brl1991,wb1986}), of which Boothby's book has been most useful for the basic concepts. This geometric approach is based on (1) regarding the independent and dependent variables as all on the same footing, using coordinate-free methods, and (2) using the basic ideas of differential geometry such as vector fields, forms (types of tensors) and exterior algebra, which is very elegant but quite complicated. Much of this work was synthesised in the work of \'Elie Cartan. I think it likely that this geometric approach will provide another way of arriving at essentially the same results found here. However the older work is often quite hard to read due to difficult notation and concepts (apart from the annoying gothic script letters!), and it is not often referred to in the basic texts. This document makes minimal use of this material except for some of what is in Olver's book \cite{pjo1986}. Firstly there is a heuristic argument showing a general equivalence to coupled sets of ODE's in different directions, though these directions depend in general on the boundary conditions of the original PDE problem. This argument suggests that for a set of $p$ first order PDE's involving $p$ unknowns, $p$ characteristic directions can always be found, but some might coincide with each other or might involve complex numbers. The concept probably arose initially in the case of the wave equation in one variable of space ($x$) and one of time ($t$), $\partial^2\phi/\partial t^2=c^2 \partial^2\phi/\partial x^2$, and plays a central role in the well-known theory of linear second order PDE's with two independent variables and the appropriate type of boundary/initial conditions for a unique solution (see for example \cite{is1957} or \cite{hfw1965}). The argument goes as follows.
Suppose a system of PDE's has solutions for a set of dependent variables $u_1,u_2,\ldots u_p$ as functions of the independent variables $x_1,\ldots x_n$ in a region containing the given initial data, and these solutions are unique in a region as determined by the Cauchy-Kovalevskaya (CK) theorem. This theorem, described in Olver \cite{pjo1986}, shows the uniqueness of the solution of the system in a neighbourhood of a surface on which initial values of all the unknowns are specified, but naturally this can only be done (section 2.6) if the initial surface does not contain any of the characteristic directions. If hypothetically all the dependent variables except one were given their values (by an oracle that somehow managed to guess them), the system would then be a system of first order PDE's for the single remaining unknown. Just one such PDE with the CK theorem would then determine a unique solution for this unknown using the initial data. These latter problems can always be solved for the remaining unknown by integrating along ``strips" from an initial surface provided it does not contain any of these directions (Cauchy/Monge's method) \cite{is1957}. This process could be repeated, updating each unknown in turn, and the whole cycle repeated until convergence, starting with an initial estimate of all the unknowns consistent with the initial data. This argument suggests that in general there will be $p$ directions to integrate along to get the solution, i.e.\ the original system is equivalent to a set of $p$ coupled systems of ODE's, one for each unknown. In the general case nothing can be said about these directions because they depend on the boundary conditions, but in many special cases of systems of PDE's some information about these directions is available from the original system itself. This is because simplifications that are independent of the boundary conditions can be searched for, i.e.\ minimisation of dimension (of the independent and dependent variables).
A reduced number of independent variables in which the equations can be expressed implies that any characteristic directions must lie within the submanifolds defined by this reduced number of variables. Earlier I proposed that the numbers of independent and dependent variables be minimised and described methods for finding them \cite{jhn1991}. In this paper I have developed this a little, giving examples of what could be termed ``partial" minimisation of dimension. This can give rise to some interesting cases, such as when only two characteristic directions appear where three were expected, showing that this is a case where two characteristic directions coincide; they are not necessarily in involution, because if they were it would result in a reduction of dimension. Most treatments of this problem seem not to go much beyond identifying the characteristics and the relationship between these and the domains of influence and dependence of the solution on given data. The main theme of the theory of PDE's seems to me to be to classify and characterise these special cases, many of which are well-known and some of which I have identified here and in my earlier work \cite{jhn1991}. When considering analytic systems of partial differential equations (PDE's) in general, two preliminary steps need to be taken first: (1) reduce to a first order system, because any such system can be made first order by introducing new variables (so the original system is defined by a subset of the variables of a first order system), and (2) consider only systems that are locally solvable (Nixon 1991 \cite{jhn1991}, Olver 1986 \cite{pjo1986}), because this can always be arranged, at least for linear systems, by adding extra equations obtained by cross-differentiation provided the original system is consistent.
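As an illustration of preliminary step (1) (a standard reduction, included here only as a sketch), the wave equation $\partial^2\phi/\partial t^2=c^2\partial^2\phi/\partial x^2$ mentioned above can be made first order by introducing the new variables $v=\partial\phi/\partial t$ and $w=\partial\phi/\partial x$, giving
\begin{equation*}\frac{\partial v}{\partial t}=c^2\frac{\partial w}{\partial x},\qquad \frac{\partial v}{\partial x}=\frac{\partial w}{\partial t},\qquad \frac{\partial \phi}{\partial t}=v,\end{equation*}
where the second equation expresses the equality of mixed partial derivatives. The original unknown $\phi$ is then defined by a subset of the variables $\{\phi,v,w\}$ of the first order system.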
These techniques complement other techniques such as the use of symmetries that Olver has described \cite{pjo1986}, and should be applied first because of the drastic simplifications that can be obtained. While developing these ideas, I was swapping between the general theory and the examples, each helping to improve the understanding of the other, and as a result it was difficult to find a way to present the work: the options were to present the examples first and constantly refer forward to the general treatment, or to present the general theory first without prior motivating examples. In the end I chose the latter, so the outline of the paper is as follows for linear systems. In section 2 I show how to find the integrability conditions giving rise to local solvability. In section 3 I describe the extension of the method of minimising dimension to ``partial" minimisation of dimension, i.e.\ doing it for a subset of the system. In section 4 I describe the 2D Laplace equation in detail. \section{\label{sec2}The integrability conditions and local solvability} I start with the linear system \begin{equation}\label{start}\sum_{\imath=1}^p\sum_{l=1}^n\frac{\partial u_{\imath}}{\partial x_l}a_{\imath lk}(x)+\sum_{\imath =1}^pu_{\imath}b_{\imath k}(x)=0\text{ for }1\le k\le m.\end{equation} Here I use $\imath$ where I used $i$ before in (16) and following in \cite{jhn1991}, to distinguish it from the imaginary number $i$, and the 2D array called $\bm{a}$ there will be called $\bm{b}$ here to distinguish it from the 3D array also called $\bm{a}$ in \cite{jhn1991}.
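As a concrete instance of \eqref{start} (included here only as an illustration; the Cauchy-Riemann equations reappear in section 3), take the system $\partial u/\partial x-\partial v/\partial y=0$ and $\partial u/\partial y+\partial v/\partial x=0$ with $u_1=u$, $u_2=v$, $x_1=x$, $x_2=y$. Then $p=n=m=2$, $\bm{b}=0$, and the only non-zero elements of $\bm{a}$ are
\begin{equation*}a_{111}=1,\quad a_{221}=-1,\quad a_{122}=1,\quad a_{212}=1.\end{equation*}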
Suppose the 3D array $a_{\imath j k}$ with dimensions $p\times n\times m$ is such that the $n\times n$ matrices \begin{equation}\label{ass}(c_{\imath})_{jl}=\sum_{k=1}^m h_{kl}a_{\imath jk}\text{ are skew symmetric for }1\le \imath\le p.\end{equation} These are the equations derived by equating to zero all the second order terms arising from the linear combination \begin{equation}\label{new}\sum_{k=1}^m\left(\bm{h}_k(\bm{x}).\nabla \text{ equation }k\right)=0\end{equation} so if the $\bm{h}_k$ satisfy these equations, the resulting linear combinations are also first order. These are linear equations for the $mn$ elements of $\bm{h}$ therefore they can be written in the form \begin{equation}\label{form}\sum_\beta A_{\alpha \beta}H_{\beta}=0\text{ for all }\alpha.\end{equation} The parameter $\alpha$ is indexed by $\imath,j,l$ and $\beta$ is indexed by the two indices of $h$, and the number of values of $\alpha$ and $\beta$ are $pn^2$ and $mn$ respectively (but note that the equations \eqref{ass} have redundancy because if $l\ne j$ these indices can be swapped giving the same result). The procedure I was suggesting in \cite{jhn1991} to be applied to equation~\eqref{start} is to repeatedly search for all such coefficients $\bm{h}$ and then construct the corresponding equations of the form \eqref{new} and add them to the original system \eqref{start}. Keep doing this until there are no non-zero solutions for $\bm{h}$. Then all subsequent steps are applied to this system, again of the form \eqref{start}. The condition \eqref{ass} can be written as \begin{equation}\label{hyp}\sum_{k=1}^ma_{\imath jk}h_{kl}+\sum_{k=1}^ma_{\imath lk}h_{kj}=0\text{ for }1\le \imath\le p\text{ and } 1\le j,l\le n.\end{equation} Let $\lambda$ be an index that takes the place of $l$ as an index of $H$. Then $H$ and $\beta$ will be indexed by $k$ and $\lambda$, so that one can speak of row $(\imath,j,l)$ and column $(k,\lambda)$ of $A$. 
To evaluate the element in this position, pick out the coefficient of $h_{k\lambda}$ in \eqref{hyp} giving $a_{\imath jk}\delta_{l\lambda}+a_{\imath lk}\delta_{j\lambda}$ making it obvious that $A$ splits naturally into two terms which will be referred to as $A_1$ and $A_2$ respectively. If the indices $l$ and $\lambda$ vary most slowly in $\alpha$ and $\beta$ respectively (and $j$ varies slower than $\imath$) then $A_1$ can be written as an $n\times n$ matrix of matrices of dimension $pn\times m$ where the off-diagonal matrix terms of $A_1$ are zero and the diagonal terms are all the same, say $B$. Then $B$ is the $p n\times m$ matrix having all the elements $a_{\imath jk}$ arranged so that this is the element on row $(\imath,j)$ and column $k$. $A_2$ can also be split naturally into submatrices but is more complicated. First split $B$ into $n$ rows corresponding to the $n$ values of $j$. These could be called $B_1\ldots B_n$ such that $B_j$ is the $p\times m$ matrix with element $a_{\imath j k}$ on row $\imath$ and column $k$. Then $A_2$ is naturally split into an $n\times n$ matrix indexed by rows $l$ and columns $\lambda$ such that the $(l,\lambda)$ element is a matrix like $B$ but where all the sub-matrices $B_j$ are replaced by zero except the one in the position of $B_\lambda$ and it is $B_l$. If this is sketched out it is obvious that all the rows of $A_1$ are replicated in the rows of $A_2$ and vice versa. Specifically, major row $l$ subrow $j$ (for all values of $\imath$) of $A_1$ are the same as major row $j$ subrow $l$ (for all values of $\imath$) of $A_2$ and they are given by the matrix $B_j$ in position $l$ amongst a set of $n$ matrices all the same dimensions ($p\times m$) in a row where these others are all zero matrices. Therefore each row of $A$ is the sum of two rows of $A_1$. Also each row of $A_1$ will appear multiplied by two in $A$ so every row of $A_1$ is a linear combination of the rows of $A$ and vice versa i.e. 
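The claim that the rows of $A$ and $A_1$ span the same space, so that $\operatorname{rank}(A)=n\operatorname{rank}(B)$, can be checked numerically. The following sketch is my own illustration (not code from \cite{jhn1991}): it assumes the index orderings described above ($l$ and $\lambda$ slowest, $j$ slower than $\imath$), uses the Cauchy-Riemann coefficients of section 3 as test data, and builds $A$ element by element from $a_{\imath jk}\delta_{l\lambda}+a_{\imath lk}\delta_{j\lambda}$.

```python
import numpy as np

# Illustration: verify rank(A) = n * rank(B) for the Cauchy-Riemann
# system u_x - v_y = 0, u_y + v_x = 0.
p, n, m = 2, 2, 2
a = np.zeros((p, n, m))   # a[i, j, k]: coefficient of du_i/dx_j in equation k
a[0, 0, 0] = 1.0          # u_x in equation 1
a[1, 1, 0] = -1.0         # -v_y in equation 1
a[0, 1, 1] = 1.0          # u_y in equation 2
a[1, 0, 1] = 1.0          # v_x in equation 2

# B: element a[i, j, k] on row (i, j) (j varying more slowly) and column k.
B = a.transpose(1, 0, 2).reshape(n * p, m)

# A: row (i, j, l) with l slowest, column (k, lam) with lam slowest, and
# entry a[i, j, k]*delta(l, lam) + a[i, l, k]*delta(j, lam).
A = np.zeros((p * n * n, m * n))
for l in range(n):
    for j in range(n):
        for i in range(p):
            for lam in range(n):
                for k in range(m):
                    A[(l * n + j) * p + i, lam * m + k] = (
                        a[i, j, k] * (l == lam) + a[i, l, k] * (j == lam)
                    )

rank_A = int(np.linalg.matrix_rank(A))
rank_B = int(np.linalg.matrix_rank(B))
print(rank_A, rank_B)     # rank(A) = m*n here, so the only solution is h = 0
assert rank_A == n * rank_B
```

For this example $\operatorname{rank}(B)=m=2$, so $\operatorname{rank}(A)=mn=4$ and the only solution of \eqref{form} is $\bm{h}=0$.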
the rows of $A$ and $A_1$ span the same vector space. Considering $A_1$ shows that the dimension of this vector space is $rn$ where $r$ is the rank of $B$, so the rank of $A$ is $mn$ if and only if the rank of $B$ is $m$. In terms of these submatrices, \eqref{form} can be written as \begin{equation}\label{sub}B_l\bm{h}_j+B_j\bm{h}_l=0\text{ for }1\le j,l\le n\end{equation} where $\bm{h}_j=\left(\begin{array}{@{}l@{}}h_{1j}\\h_{2j}\\\vdots\\h_{mj}\end{array}\right)$. Therefore if $m=p$, i.e.\ the number of unknowns is the same as the number of equations in the PDE system, all the matrices $B_s$ have dimension $m\times m$. If just one of the $B$'s, say $B_s$, has full rank then \eqref{sub} for $l=j=s$ gives $\bm{h}_s=0$, and then putting $l=s$ gives $B_s\bm{h}_j=0$ for $1\le j\le n$, i.e.\ $\bm{h}=0$. So a condition under which this process of repeatedly adding integrability conditions to the original system stops is that at least one of the submatrices $B_s$ has full rank: this ensures that any new integrability conditions obtained by this process are zero. Since the newly obtained PDE, equation (21) in \cite{jhn1991}, is not a linear combination of the PDE's the system started with, and because the rank of the $B$'s cannot decrease as extra columns are added, eventually this process will end as above. There is another case in which $\bm{h}\ne 0$, the $B$'s all have less than full rank, and their rank cannot be increased by any such $\bm{h}$. This would be expected if $m>p$, i.e.\ the system is overdetermined and will not in general have solutions unless some cancellation occurs, and most of this logic can still be applied. The absence of solutions will manifest itself in inconsistent results. It turns out that the condition that at least one of the $B_j$ has maximal rank is precisely the condition that the original system can be put into Cauchy-Kovalevskaya form and is therefore locally solvable (\cite{pjo1986} p166).
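As an illustration of this stopping condition (a sketch using the Cauchy-Riemann equations $\partial u/\partial x-\partial v/\partial y=0$, $\partial u/\partial y+\partial v/\partial x=0$, for which $p=n=m=2$ and the only non-zero coefficients are $a_{111}=1$, $a_{221}=-1$, $a_{122}=1$, $a_{212}=1$), the submatrices are
\begin{equation*}B_1=\left(\begin{array}{cc}1&0\\0&1\end{array}\right),\qquad B_2=\left(\begin{array}{cc}0&1\\-1&0\end{array}\right),\end{equation*}
both of full rank, so \eqref{sub} gives $\bm{h}=0$ immediately: the Cauchy-Riemann system has no non-trivial first order integrability conditions of this kind and can be put into Cauchy-Kovalevskaya form.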
Making a change of independent variables $x_1,\ldots x_n\to t,y_1,\ldots y_{n-1}$ gives \begin{equation}\frac{\partial u_{\imath}}{\partial x_l}=\sum_{j=1}^{n-1}\frac{\partial u_{\imath}}{\partial y_j}\frac{\partial y_j}{\partial x_l}+\frac{\partial u_{\imath}}{\partial t}\frac{\partial t}{\partial x_l}\end{equation} and when expressed in terms of these variables, \eqref{start} gives \begin{equation}\sum_{\imath=1}^p\sum_{l=1}^n\left(\sum_{j=1}^{n-1}\frac{\partial u_{\imath}}{\partial y_j}\frac{\partial y_j}{\partial x_l}+\frac{\partial u_{\imath}}{\partial t}\frac{\partial t}{\partial x_l}\right)a_{\imath lk}(x)+\ldots =0.\end{equation} This equation can be solved for $\frac{\partial u_{\imath}}{\partial t}$ if there exists a vector $\frac{\partial t}{\partial x_l}$ such that \begin{equation}\sum_{l=1}^n\frac{\partial t}{\partial x_l}a_{\imath lk}\end{equation} is an invertible matrix with indices $\imath$ and $k$. Because this matrix is a linear combination of the $B$'s, it can be chosen to be invertible because at least one of the $B$'s is, as obtained above. \section{Minimisation of dimension for a subset of the original system} The method used here is a modification and extension of the method I used in \cite{jhn1991} to search for a reduced number of independent variables in which a linear system of PDE's can be expressed. The modification is to require this of only a subset of the equations, say $m'<m$ of them. If such a subset is overdetermined, in general there will not be a solution to the boundary value problem. Solutions with smaller dimensional boundaries might be possible. The Cauchy-Riemann equations are about the simplest possible example, and they show that this simple idea can run into problems. Using equation 1, calculate $u$ from $v$ by integrating along lines parallel to the $x$ axis, or calculate $v$ from $u$ by integrating along lines parallel to the $y$ axis.
Using equation 2, calculate $u$ from $v$ by integrating parallel to the $y$ axis, or calculate $v$ from $u$ by integrating parallel to the $x$ axis. The problem here is that the calculation $u\to v$ by equation (1) and the calculation $v\to u$ by equation (2) give characteristics that are parallel, and likewise for the other way round. For minimisation of the number of independent variables resulting in (55) and (56) to $r$, the three-dimensional array $G$ with dimensions $m\times p\times n$ must satisfy \begin{equation}\text{Rank}\left(M\right)=r\text{, where }M_{ij}=\sum_{k=1}^m G_{kij} h_k,\end{equation} for some $h$. Requiring $r=1$ for just one linear combination $h$ would impose $pn-p-n+1$ conditions on $M$, because $n+p-1$ parameters uniquely specify a $p\times n$ matrix of rank 1. Therefore there will not in general be a reduction to $r=1$ for even a single linear combination of the equations. However if it does happen for a single linear combination, the equations (55) $(n)$ and the second member of (56) after eliminating $b_{ij}$ $(p)$ can in principle be integrated from the boundary (if appropriate boundary conditions are given) and allow one of the unknowns to be eliminated by back-substitution of the result into the original system. It remains to show how the new variables are constructed to do this. Given a system of PDE's and a surface $S$, what conditions are imposed by the system within $S$? Possible answers: (1) no condition, i.e.\ $\bm{u}$ is unrestricted; (2) there are no solutions for $\bm{u}$, so any conditions could be added (vacuously); (3) there could be some conditions. This question is answered by the minimisation of dimension argument, because such restrictions, if present, would be systems of PDE's defined within $S$. So if a minimised dimension result is found within $S$, it must be included and $S$ is a characteristic surface; otherwise none are possible and $S$ is a non-characteristic surface. Dependence of $r$ on $\bm{x}$:
It is not necessarily true that regions of different $r$ (the dimension of the Lie algebra generated by the $\bm{f_i}$ at a point) correspond to integral surfaces. If a region of dimension $< n$ with tangent space $X$ at a point is identified having a given value of $r$ for a linear combination of the system, the space $Y$ spanned by the linearly independent vectors from the completion of the $\bm{f_i}$ does not necessarily relate to $X$. If $Y\subseteq X$ it is possible to write an equation in the reduced dimension in a neighbourhood of the point; otherwise it is not. This is another integrability condition $I$. In general the partial function \begin{equation}r(\bm{x})=\left\{\begin{matrix}r\text{ if }I(\bm{x})\\\text{undefined if }\tilde I(\bm{x})\end{matrix}\right.\end{equation} defines the effective reduced dimension for a linear combination of the original system. From the general theory point of view, thinking of any of the variables as complex is unnecessary, because any such system can be written using the real and imaginary parts separately, adding the Cauchy-Riemann equations if analytic solutions are wanted. Thus the new system, equivalent to the system for which complex or analytic solutions are sought, has only real solutions, coefficients and boundary conditions. Indeed it would be possible to take the resulting system defined above, interpret the variables as complex, and again represent everything in terms of real variables; this is clearly unnecessary ``complexification". The general problem for determined systems ($p=m$) is to establish existence and uniqueness of solutions in a region containing the initial data. The basic theorem for this is the Cauchy-Kovalevskaya (CK) theorem. One way to imagine how the solution might be constructed is to start with initial estimates of $u_1,\ldots u_m$ consistent with the initial data, then sequentially update $u_1,u_2,\ldots u_m$. For each of these $m$ updates one equation of the system is used, so that each equation is used once.
Thus each equation of the system is a single first order PDE for the single remaining unknown, with all the other unknowns replaced by their current values, and Monge's method of integral strips applies. Once a variable has been updated, this updated value is used in all subsequent calculations until it is updated again. The cycle of updates should be repeated to convergence to any desired degree of accuracy. Here $G_{kij}$ is the direction vector, with components indexed by $j$, for the propagation of $u_i$ obtained from equation $k$, i.e.\ $m^2$ vectors in $n$ dimensions. Attempts to prove that this converges in analogy with the Picard theorem for ODE's failed because it is not clear what analogue of the Lipschitz condition or metric could be used that would give rise to a unique fixed point; the problem comes from the derivatives of all the variables in the equations. With the CK theorem in mind, reductions of dimension that are possible should be seen as singular cases that are exceptional, and if they occur, special techniques are needed (minimisation of dimension together with adding in the integrability conditions). It is possible that different conditions occur in different regions of the space of independent variables. For the CK theorem I think no partial reductions of dimension should be possible on the initial surface, i.e.\ all the directions for integrating each variable (at convergence, i.e.\ when all the variables are known) lead out of the initial surface. Were they not to do so, the initial surface could not have independently defined data, i.e.\ some extra conditions would have to be satisfied. \begin{thebibliography}{99} \bibitem{jhn1991}J. H. Nixon: J. Phys. A: Math. Gen.
24 (1991) 2913--2941: Application of Lie groups and differentiable manifolds to general methods for simplifying systems of partial differential equations \bibitem{pjo1986}Peter J. Olver: Springer-Verlag (1986): Graduate Texts in Mathematics: Applications of Lie Groups to Differential Equations \bibitem{hfw1965}Hans F. Weinberger: Open University set book (1965): A First Course in Partial Differential Equations \bibitem{is1957}Ian Sneddon: McGraw-Hill Kogakusha (1957): International student edition: Elements of Partial Differential Equations \bibitem{rhermann1965}Robert Hermann: Advances in Mathematics Vol.\ 1, pp.\ 265--317 (1965): E. Cartan's geometric theory of partial differential equations. \href{https://doi.org/10.1016/0001-8708(65)90040-X}{https://doi.org/10.1016/0001-8708(65)90040-X} \bibitem{schouten_kulk1969}J. A. Schouten and W. v.d. Kulk: Pfaff's Problem and its Generalizations (1969): Chelsea Publishing Co., New York, USA \bibitem{brl1991}Bryant R. L., Chern S. S., Gardner R. B., Goldschmidt H. L., Griffiths P. A. (1991): Mathematical Sciences Research Institute Publications: Springer-Verlag: Exterior Differential Systems \bibitem{wb1986}William Boothby (1986): An Introduction to Differentiable Manifolds and Riemannian Geometry, Second Ed., Academic Press. \end{thebibliography} \end{document}