\documentclass[12pt,twoside]{article} \renewcommand{\baselinestretch}{1} \setcounter{page}{1} \setlength{\textheight}{21.6cm} \setlength{\textwidth}{14cm} \setlength{\oddsidemargin}{1cm} \setlength{\evensidemargin}{1cm} \pagestyle{myheadings} \thispagestyle{empty} \markboth{\small{John Nixon}}{\small{Notes of systems of partial differential equations}} \usepackage{caption} \usepackage{verbatim} %allows some sections to be temporarily commented out, which was %useful for correcting compilation errors that caused a problem with the long captions in two %tables (search for 'comment' to find them) \usepackage{enumitem} \usepackage{amssymb} \usepackage{longtable} \usepackage{float} \usepackage{subfigure} \usepackage{multirow} \usepackage{bigstrut} \usepackage{bigdelim} \usepackage{pdflscape} \usepackage{adjustbox} \usepackage{mleftright} \usepackage{abraces} %\usepackage{cmll} \usepackage{pifont} \usepackage[utf8]{inputenc}% add accented characters directly \restylefloat{figure} \restylefloat{table} \usepackage{booktabs} \usepackage{hyperref}%The inclusion of package hyperref converts all the references to equations etc. to internal links \usepackage{array} \usepackage{mathtools}% includes amsmath \usepackage{amscd} \usepackage{amsthm} \usepackage{xypic} %\usepackage{MnSymbol} %\everymath{\mathtt{\xdef\tmp{\fam\the\fam\relax}\aftergroup\tmp}} %\everydisplay{\mathtt{\xdef\tmp{\fam\the\fam\relax}\aftergroup\tmp}} \usepackage{bm} \begin{document} \setbox0\hbox{$x$} \newcommand{\un}{\underline} %\centerline{\bf Mathematica Aeterna, Vol. x, 201x, no. xx, xxx - xxx}%uncomment this in published version Date: 2025-12-11 \centerline{} \centerline{} \centerline {\Large{\bf Partial differential equations}} \centerline{} \section{Introduction} The literature of basic work on partial differential equations (PDE's) and systems of them seems to be mainly divided into two parts, that based on knowledge of modern differential geometry and that just based on multivariable calculus. 
I am less familiar with the first of these and am still getting to grips with what seems most important (\cite{rhermann1965,schouten_kulk1969,brl1991,wb1986}), of which Boothby's book has been the most useful for the basic concepts. This geometric approach is based on (1) treating the independent and dependent variables all on the same footing using coordinate-free methods and (2) using the basic ideas of differential geometry such as vector fields, forms (types of tensors) and exterior algebra, which is very elegant but quite complicated. Much of this work was synthesised in the work of \'Elie Cartan. I think it likely that this geometric approach will provide another way of getting at essentially the same results found here. However the older work is often quite hard to read owing to difficult notation and concepts (apart from the annoying gothic script letters!), and it is not often referred to in the basic texts. This document makes minimal use of this material except for some of what is in Olver's book \cite{pjo1986}. Firstly there is a heuristic argument showing a general equivalence to coupled sets of ODE's in different directions, but these directions depend in general on the boundary conditions of the original PDE problem. This argument suggests that for a set of $p$ first order PDE's involving $p$ unknowns, $p$ characteristic directions can always be found, though some might coincide with each other or might involve complex numbers. The concept of characteristics probably arose initially in the case of the wave equation in one variable of space ($x$) and one of time ($t$), $\partial^2\phi/\partial t^2=c^2 \partial^2\phi/\partial x^2$, and plays a central role in the well-known theory of linear second order PDE's with two independent variables and the appropriate type of boundary/initial conditions for a unique solution (see for example \cite{is1957} or \cite{hfw1965}). It goes something like this.
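For the wave equation just mentioned, the characteristic directions can be exhibited directly; the following standard factorisation (included only as a reminder, not as new material) shows the two families of lines along which the second order PDE reduces to first order ODE's:

```latex
% The 1D wave operator factorises into two first order directional
% derivatives, one per characteristic direction:
\[
\frac{\partial^2\phi}{\partial t^2}-c^2\frac{\partial^2\phi}{\partial x^2}
=\left(\frac{\partial}{\partial t}-c\frac{\partial}{\partial x}\right)
 \left(\frac{\partial}{\partial t}+c\frac{\partial}{\partial x}\right)\phi=0,
\]
% so information propagates along the characteristic lines
% x - ct = const and x + ct = const, and the general solution is
% phi = f(x - ct) + g(x + ct).
```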
Suppose a system of PDE's has solutions for a set of dependent variables $u_1,u_2,\ldots u_p$ as functions of the independent variables $x_1,\ldots x_n$ in a region containing the given initial data, and suppose these solutions are unique in a region as determined by the Cauchy-Kovalevskaya (CK) theorem. This theorem, described in Olver \cite{pjo1986}, shows the uniqueness of the solution of the system in a neighbourhood of a surface on which initial values of all the unknowns are specified, but naturally this can only be done (section 2.6) if the initial surface does not contain any of the characteristic directions. If hypothetically all the dependent variables except one were given their values (by an oracle that somehow managed to guess them), the system would become a system of first order PDE's for the single remaining unknown. Just one such PDE, with the CK theorem, would then determine a unique solution for this unknown using the initial data. These latter problems can always be solved for the remaining unknown by integrating along ``strips" from an initial surface provided it does not contain any of these directions (Cauchy/Monge's method) \cite{is1957}. This process could be repeated, updating each unknown in turn, and the whole cycle repeated until convergence, starting with an initial estimate of all the unknowns consistent with the initial data. This argument suggests that in general there will be $p$ directions along which to integrate to get the solution, i.e. the original system is equivalent to a set of $p$ coupled systems of ODE's, one for each unknown. In the general case nothing can be said about these directions because they depend on the boundary conditions, but in many special cases of systems of PDE's some information about these directions is available from the original system itself. This is because there are simplifications, independent of the boundary conditions, that can be searched for, i.e. minimisation of dimension (the numbers of independent and dependent variables).
A reduced number of independent variables in which the equations can be expressed implies that any characteristic directions must lie within the submanifolds defined by this reduced number of variables. Earlier I proposed that the numbers of independent and dependent variables be minimised and described methods for finding them \cite{jhn1991}. In this paper I have developed this a little, giving examples of what could be termed ``partial" minimisation of dimension. This can give rise to some interesting cases, such as when only two characteristic directions appear where three were expected, showing that two characteristic directions coincide; they are not necessarily in involution, because if they were this would result in a reduction of dimension. Most treatments of this problem seem not to go much beyond identifying the characteristics and the relationship between these and the domains of influence and dependence of the solution on given data. The main theme of the theory of PDE's seems to me to be to classify and characterise these special cases, many of which are well-known and some of which I have identified here and in my earlier work \cite{jhn1991}. When considering analytic systems of partial differential equations (PDE's) in general, two preliminary steps need to be taken first: (1) reduce to the treatment of a first order system, because any such system can be made first order by introducing new variables (so the original system is defined by a subset of the variables of a first order system), and (2) consider only systems that are locally solvable (\cite{jhn1991,pjo1986}), because this can always be arranged, at least for linear systems, by adding extra equations obtained by cross-differentiation provided the original system is consistent.
These techniques complement other techniques, such as the use of symmetries that Olver has described \cite{pjo1986}, and should be applied first because of the drastic simplifications that can be obtained. While developing these ideas, I was swapping between the general theory and the examples, each helping to improve the understanding of the other, and as a result it was difficult to find a way to present the work: the options were to present the examples first and constantly refer forward to the general treatment, or to present the general theory first without prior motivating examples. In the end I chose the latter, so the outline of the paper is as follows for linear systems. In section 2 I show how to find the integrability conditions giving rise to local solvability. In section 3 I describe the extension of the method of minimising dimension to ``partial" minimisation of dimension, i.e. doing it for a subset of the system. In section 4 I describe the 2D Laplace equation in detail. \section{\label{sec2}The integrability conditions and local solvability} I start with the linear system \begin{equation}\label{start}\sum_{\imath=1}^p\sum_{l=1}^n\frac{\partial u_{\imath}}{\partial x_l}a_{\imath lk}(x)+\sum_{\imath =1}^pu_{\imath}b_{\imath k}(x)=0\text{ for }1\le k\le m.\end{equation} Here I use $\imath$ where I used $i$ before in (16) and following in \cite{jhn1991}, to distinguish it from the imaginary number $i$, and the 2D array called $\bm{a}$ there will be called $\bm{b}$ here to distinguish it from the 3D array also called $\bm{a}$ in \cite{jhn1991}.
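As a concrete instance of \eqref{start} (my own illustrative choice, not an example taken from \cite{jhn1991}), the Cauchy-Riemann equations have $p=n=m=2$ and $\bm{b}=0$:

```latex
% Cauchy-Riemann system written in the form \eqref{start} with unknowns
% (u_1,u_2)=(u,v), variables (x_1,x_2)=(x,y) and b = 0. The only non-zero
% coefficients a_{i l k} (unknown i, variable l, equation k) are
%   a_{111}=1,  a_{221}=-1   (equation k=1:  du/dx - dv/dy = 0),
%   a_{122}=1,  a_{212}=1    (equation k=2:  du/dy + dv/dx = 0).
\[
\frac{\partial u_1}{\partial x_1}-\frac{\partial u_2}{\partial x_2}=0,\qquad
\frac{\partial u_1}{\partial x_2}+\frac{\partial u_2}{\partial x_1}=0 .
\]
```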
Suppose the 3D array $a_{\imath j k}$ with dimensions $p\times n\times m$ is such that the $n\times n$ matrices \begin{equation}\label{ass}(c_{\imath})_{jl}=\sum_{k=1}^m h_{kl}a_{\imath jk}\text{ are skew symmetric for }1\le \imath\le p.\end{equation} These are the equations derived by equating to zero all the second order terms arising from the linear combination \begin{equation}\label{new}\sum_{k=1}^m\left(\bm{h}_k(\bm{x}).\nabla \text{ equation }k\right)=0\end{equation} so if the $\bm{h}_k$ satisfy these equations, the resulting linear combinations are also first order. These are linear equations for the $mn$ elements of $\bm{h}$, so they can be written in the form \begin{equation}\label{form}\sum_\beta A_{\alpha \beta}H_{\beta}=0\text{ for all }\alpha.\end{equation} The parameter $\alpha$ is indexed by $\imath,j,l$ and $\beta$ is indexed by the two indices of $h$, and the numbers of values of $\alpha$ and $\beta$ are $pn^2$ and $mn$ respectively (but note that the equations \eqref{ass} have redundancy because if $l\ne j$ these indices can be swapped giving the same result). The procedure I suggested in \cite{jhn1991}, applied to equation~\eqref{start}, is to repeatedly search for all such coefficients $\bm{h}$ and then construct the corresponding equations of the form \eqref{new} and add them to the original system \eqref{start}. Keep doing this until there are no non-zero solutions for $\bm{h}$. Then all subsequent steps are applied to this system, again of the form \eqref{start}. The condition \eqref{ass} can be written as \begin{equation}\label{hyp}\sum_{k=1}^ma_{\imath jk}h_{kl}+\sum_{k=1}^ma_{\imath lk}h_{kj}=0\text{ for }1\le \imath\le p\text{ and } 1\le j,l\le n.\end{equation} Let $\lambda$ be an index that takes the place of $l$ as an index of $H$. Then $H$ and $\beta$ will be indexed by $k$ and $\lambda$, so that one can speak of row $(\imath,j,l)$ and column $(k,\lambda)$ of $A$.
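The assembly of $A$ can be sketched numerically. The following is a minimal illustration (my own code, not from this paper), using one particular ordering of the row index $(\imath,j,l)$ and column index $(k,\lambda)$ and, as a hand-checkable toy input, $a_{\imath jk}=\delta_{jk}$ with $p=1$, $n=m=2$ (the single-unknown system $\partial u/\partial x_1=0$, $\partial u/\partial x_2=0$): it builds $A$, extracts its null space, and verifies that every null vector makes the matrices $c_\imath$ of \eqref{ass} skew-symmetric.

```python
import numpy as np

# Toy input (my choice): p = 1, n = m = 2, a[0] = I, i.e. the system
# du/dx_1 = 0, du/dx_2 = 0 for a single unknown u(x_1, x_2).
p, n, m = 1, 2, 2
a = np.zeros((p, n, m))
a[0] = np.eye(2)

# Row index (i, j, l) with l slowest, column index (k, lam) with lam slowest:
# A[(i,j,l), (k,lam)] = a[i,j,k]*delta(l,lam) + a[i,l,k]*delta(j,lam).
A = np.zeros((p * n * n, m * n))
for i in range(p):
    for j in range(n):
        for l in range(n):
            row = (l * n + j) * p + i
            for k in range(m):
                A[row, l * m + k] += a[i, j, k]   # the A_1 term
                A[row, j * m + k] += a[i, l, k]   # the A_2 term

# Null space of A via SVD; each null vector H is a valid coefficient array h.
_, s, Vh = np.linalg.svd(A)
null = Vh[np.sum(s > 1e-10):]
for H in null:
    h = H.reshape(n, m).T                        # h[k, lam]
    for i in range(p):
        c = np.einsum('kl,jk->jl', h, a[i])      # (c_i)_{jl} = sum_k h_{kl} a_{ijk}
        assert np.allclose(c, -c.T)              # skew-symmetric as in \eqref{ass}
print(len(null))   # prints 1
```

Here the single null vector corresponds (up to a factor) to the combination $\partial_2(\text{equation }1)-\partial_1(\text{equation }2)$, which with $\bm{b}=0$ is the trivially satisfied cross-derivative identity, so a non-zero $\bm{h}$ need not produce a genuinely new equation.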
To evaluate the element in this position, pick out the coefficient of $h_{k\lambda}$ in \eqref{hyp}, giving $a_{\imath jk}\delta_{l\lambda}+a_{\imath lk}\delta_{j\lambda}$, making it obvious that $A$ splits naturally into two terms which will be referred to as $A_1$ and $A_2$ respectively. If the indices $l$ and $\lambda$ vary most slowly in $\alpha$ and $\beta$ respectively (and $j$ varies slower than $\imath$) then $A_1$ can be written as an $n\times n$ array of submatrices of dimension $pn\times m$, where the off-diagonal submatrices of $A_1$ are zero and the diagonal ones are all the same, say $B$. Then $B$ is the $pn\times m$ matrix having all the elements $a_{\imath jk}$ arranged so that this is the element on row $(\imath,j)$ and column $k$. $A_2$ can also be split naturally into submatrices but is more complicated. First split $B$ into $n$ blocks of rows corresponding to the $n$ values of $j$. These could be called $B_1\ldots B_n$ such that $B_j$ is the $p\times m$ matrix with element $a_{\imath j k}$ on row $\imath$ and column $k$. Then $A_2$ is naturally split into an $n\times n$ array indexed by rows $l$ and columns $\lambda$ such that the $(l,\lambda)$ element is a matrix like $B$ but where all the sub-matrices $B_j$ are replaced by zero except the one in the position of $B_\lambda$, which is $B_l$. If this is sketched out it is obvious that the rows of $A_1$ reappear among the rows of $A_2$: specifically, major row $l$ subrow $j$ (for all values of $\imath$) of $A_1$ is the same as major row $j$ subrow $l$ (for all values of $\imath$) of $A_2$, and it is given by the matrix $B_j$ in position $l$ amongst a set of $n$ matrices all of the same dimensions ($p\times m$) in a row where the others are all zero matrices. Therefore each row of $A$ is the sum of two rows of $A_1$, so the row space of $A$ is contained in that of $A_1$ (for $j=l$ the corresponding row of $A_1$ appears in $A$ multiplied by two, but for $j\ne l$ the individual rows of $A_1$ need not be recoverable from those of $A$). Since $A_1$ is block diagonal with $n$ copies of $B$, the dimension of its row space is $rn$ where $r$ is the rank of $B$, so the rank of $A$ is at most $rn$; in particular the rank of $A$ can only attain the full value $mn$ if the rank of $B$ is $m$. In terms of these submatrices, \eqref{form} can be written as \begin{equation}\label{sub}B_l\bm{h}_j+B_j\bm{h}_l=0\text{ for }1\le j,l\le n\end{equation} where $\bm{h}_j=\left(\begin{array}{@{}l@{}}h_{1j}\\h_{2j}\\\vdots\\h_{mj}\end{array}\right)$. Therefore if $m=p$, i.e. the number of unknowns is the same as the number of equations in the PDE system, all the matrices $B_s$ have dimension $m\times m$. If just one of the $B$'s, say $B_s$, has full rank, then \eqref{sub} for $l=j=s$ gives $\bm{h}_s=0$, and then putting $l=s$ gives $B_s\bm{h}_j=0$ for $1\le j\le n$, i.e. $\bm{h}=0$. So a condition under which this process of repeatedly adding integrability conditions to the original system stops is that at least one of the submatrices $B_s$ has full rank; this ensures that any new integrability conditions obtained by this process are zero. Since the newly obtained PDE, equation (21) in \cite{jhn1991}, is not a linear combination of the PDE system started with, and because the ranks of the $B$'s cannot decrease as extra columns are added, eventually this process will end as above. There is another case in which $\bm{h}\ne 0$ and the $B$'s all have less than full rank and their rank cannot be increased by any such $\bm{h}$. This would be expected if $m>p$, i.e. the system is overdetermined and will not in general have solutions unless some cancellation occurs; most of this logic can still be applied. The absence of solutions will manifest itself in inconsistent results.
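The stopping condition can also be checked numerically. A small sketch (my own code, reusing the construction of $A$ described above) with $m=p$ and a random coefficient array, for which $B_1$ is almost surely invertible, so by the argument above the only solution of \eqref{form} is $\bm{h}=0$ and $A$ has full column rank $mn$:

```python
import numpy as np

# With m = p and an invertible B_s, the argument above gives h = 0,
# i.e. the matrix A of \eqref{form} has full column rank m*n.
rng = np.random.default_rng(1)
p = m = 3
n = 2
a = rng.standard_normal((p, n, m))   # random a_{i j k}: B_1 invertible a.s.

# Build A[(i,j,l),(k,lam)] = a[i,j,k]*delta(l,lam) + a[i,l,k]*delta(j,lam)
A = np.zeros((p * n * n, m * n))
for i in range(p):
    for j in range(n):
        for l in range(n):
            row = (l * n + j) * p + i
            A[row, l * m: l * m + m] += a[i, j, :]   # the A_1 term
            A[row, j * m: j * m + m] += a[i, l, :]   # the A_2 term

B1 = a[:, 0, :]                            # B_1: the p x m matrix a_{i 1 k}
assert np.linalg.matrix_rank(B1) == m      # B_1 has full rank here
assert np.linalg.matrix_rank(A) == m * n   # hence only h = 0 solves \eqref{form}
```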
It turns out that the condition that at least one of the $B_j$ has maximal rank is precisely the condition that the original system can be put into Cauchy-Kovalevskaya form and is therefore locally solvable (\cite{pjo1986}, p.~166). Making a change of independent variables $x_1,\ldots x_n\to t,y_1,\ldots y_{n-1}$ gives
\begin{equation}\frac{\partial u_{\imath}}{\partial x_l}=\sum_{j=1}^{n-1}\frac{\partial u_{\imath}}{\partial y_j}\frac{\partial y_j}{\partial x_l}+\frac{\partial u_{\imath}}{\partial t}\frac{\partial t}{\partial x_l}\end{equation}
and when expressed in terms of these variables, \eqref{start} gives \begin{equation}\sum_{\imath=1}^p\sum_{l=1}^n\left(\sum_{j=1}^{n-1}\frac{\partial u_{\imath}}{\partial y_j}\frac{\partial y_j}{\partial x_l}+\frac{\partial u_{\imath}}{\partial t}\frac{\partial t}{\partial x_l}\right)a_{\imath lk}(x)+\ldots =0.\end{equation}
This equation can be solved for $\frac{\partial u_{\imath}}{\partial t}$
if there exists a vector $\frac{\partial t}{\partial x_l}$ such that \begin{equation}\sum_{l=1}^n\frac{\partial t}{\partial x_l}a_{\imath lk}\end{equation} is an invertible matrix with indices $\imath$ and $k$.
Because this matrix is the linear combination $\sum_{l=1}^n\frac{\partial t}{\partial x_l}B_l$ of the matrices $B_l$, and at least one of the $B_l$ is invertible as obtained above, the vector $\partial t/\partial x_l$ can be chosen so that the combination is invertible.
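Continuing the Cauchy-Riemann illustration $\partial u_1/\partial x_1-\partial u_2/\partial x_2=0$, $\partial u_1/\partial x_2+\partial u_2/\partial x_1=0$ (again my own choice of example, not one from this paper), the matrices $B_l$ and the combination above are

```latex
% B_l has element a_{i l k} on row i and column k; for the Cauchy-Riemann
% coefficients this gives, with alpha = dt/dx_1 and beta = dt/dx_2,
\[
B_1=\begin{pmatrix}1&0\\0&1\end{pmatrix},\qquad
B_2=\begin{pmatrix}0&1\\-1&0\end{pmatrix},\qquad
\sum_{l=1}^2\frac{\partial t}{\partial x_l}B_l=
\begin{pmatrix}\alpha&\beta\\-\beta&\alpha\end{pmatrix}.
\]
```

The determinant $\alpha^2+\beta^2$ is non-zero for every real direction $(\alpha,\beta)\ne(0,0)$, so any direction can serve as the ``time" $t$ of the Cauchy-Kovalevskaya form and there are no real characteristic directions, consistent with the elliptic character of this system.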
\section{Minimisation of dimension for a subset of the original system}
The method used here is a modification and extension of the method I used in \cite{jhn1991} to search for a reduced number of independent variables which can be used to express a linear system of PDE's. The modification is to only require a subset of $m'