Genesis of an Ordinary Differential Equation

Consider an equation

$$f(x, y, c_1, c_2, \ldots, c_n) = 0, \tag{A}$$

in which $x$ and $y$ are variables and $c_1, c_2, \ldots, c_n$ are arbitrary and independent constants. This equation serves to determine $y$ as a function of $x$; strictly speaking, an $n$-fold infinity of functions is so determined, each function corresponding to a particular set of values attributed to $c_1, c_2, \ldots, c_n$. Now an ordinary differential equation can be formed which is satisfied by every one of these functions, as follows.

Let the given equation be differentiated $n$ times in succession with respect to $x$; then $n$ new equations are obtained, namely,

$$\frac{\partial f}{\partial x} + \frac{\partial f}{\partial y}\,y_1 = 0,$$

$$\frac{\partial^2 f}{\partial x^2} + 2\frac{\partial^2 f}{\partial x\,\partial y}\,y_1 + \frac{\partial^2 f}{\partial y^2}\,y_1^2 + \frac{\partial f}{\partial y}\,y_2 = 0,$$

$$\cdots\cdots\cdots$$

where

$$y_r = \frac{d^r y}{dx^r} \qquad (r = 1, 2, \ldots, n).$$

Each equation is manifestly distinct from those which precede it; 1 from the aggregate of $n + 1$ equations the $n$ arbitrary constants $c_1, c_2, \ldots, c_n$ can be eliminated by algebraical processes, and the eliminant is the differential equation of order $n$:

$$F(x, y, y_1, y_2, \ldots, y_n) = 0.$$

It is clear from the very manner in which this differential equation was formed that it is satisfied by every function defined by the relation (A). This relation is termed the primitive of the differential equation, and every function which satisfies the differential equation is known as a solution. 2 A solution which involves a number of essentially distinct arbitrary constants equal to the order of the equation is known as the general solution.3 That this terminology is justified will be seen when, in Chapter III., it is proved that one solution of an equation of order $n$, and one only, can always be found to satisfy, for a specified value of $x$, $n$ distinct conditions of a particular type. The possibility of satisfying these $n$ conditions depends upon the existence of a solution containing $n$ arbitrary constants. The general solution is thus essentially the same as the primitive of the differential equation.
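
The process just described is easily carried out symbolically. The sketch below uses Python's sympy; the primitive $y = c_1e^{x} + c_2e^{2x}$ is supplied here purely for illustration and is not taken from the text. The primitive is differentiated twice, the two constants are eliminated, and the eliminant is a differential equation of order two.

```python
import sympy as sp

# Illustrative primitive (supplied here, not from the text):
#   y = c1*exp(x) + c2*exp(2*x),  a relation containing n = 2 arbitrary constants.
x, c1, c2 = sp.symbols('x c1 c2')
y = sp.Function('y')

primitive = sp.Eq(y(x), c1*sp.exp(x) + c2*sp.exp(2*x))
first = sp.Eq(y(x).diff(x), primitive.rhs.diff(x))
second = sp.Eq(y(x).diff(x, 2), primitive.rhs.diff(x, 2))

# Solve the primitive and the first derived equation for c1 and c2, then
# substitute into the second derived equation: the eliminant is an ordinary
# differential equation of order 2, free of the arbitrary constants.
consts = sp.solve([primitive, first], [c1, c2])
ode = sp.simplify(second.lhs - second.rhs.subs(consts))
print(ode)
```

For this choice of primitive the eliminant is equivalent to $y'' - 3y' + 2y = 0$.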

It has been assumed that the primitive actually contains $n$ distinct constants $c_1, c_2, \ldots, c_n$. If there are only apparently $n$ constants, that is to say if two or more constants can be replaced by a single constant without essentially modifying the primitive, then the order of the resulting differential equation will be less than $n$. For instance, suppose that the primitive is given in the form

$$y = ae^{x+b};$$

then it apparently depends upon two constants $a$ and $b$, but in reality upon one constant only, namely $ae^{b}$. In this case the resulting differential equation is of the first and not of the second order.

Again, if the primitive is reducible, that is to say if $f(x, y, c_1, \ldots, c_n)$ breaks up into two factors, each of which contains $y$, the order of the resulting differential equation may be less than $n$. For if neither factor contains all the $n$ constants, then each factor will give rise to a differential equation of order less than $n$, and it may occur that these two differential equations are identical, or that one of them admits of all the solutions of the other, and therefore is satisfied by the primitive itself. Thus let the primitive be

$$(y - ax)(y - bx) = 0;$$

it is reducible and equivalent to the two equations

$$y = ax, \qquad y = bx,$$

each of which, and therefore the primitive itself, satisfies the differential equation

$$x\frac{dy}{dx} - y = 0.$$

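
Each factor of a reducible primitive can be checked symbolically; the following is a minimal sketch in sympy, assuming the illustrative reducible primitive $(y - ax)(y - bx) = 0$, whose two factors share the first-order equation $x\,dy/dx - y = 0$.

```python
import sympy as sp

x, a, b = sp.symbols('x a b')

# The reducible primitive (y - a*x)*(y - b*x) = 0 is equivalent to the two
# equations y = a*x and y = b*x; neither factor contains both constants.
solutions = [a*x, b*x]

# Each solution satisfies the same first-order equation x*dy/dx - y = 0,
# so the primitive leads to an equation of order 1, not 2.
def residual(expr):
    return sp.simplify(x*sp.diff(expr, x) - expr)

print([residual(f) for f in solutions])   # [0, 0]
```
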
0.1 The Differential Equation of a Family of Confocal Conics

Consider the equation

$$\frac{x^2}{a^2 + \lambda} + \frac{y^2}{b^2 + \lambda} = 1,$$

where $a$ and $b$ are definite constants, and $\lambda$ an arbitrary parameter which can assume all real values. This equation represents a family of confocal conics. The differential equation of which it is the primitive is obtained by eliminating $\lambda$ between it and the derived equation

$$\frac{x}{a^2 + \lambda} + \frac{y}{b^2 + \lambda}\,\frac{dy}{dx} = 0.$$

From the primitive and the derived equation it is found that

$$a^2 + \lambda = \frac{x\left(x\dfrac{dy}{dx} - y\right)}{\dfrac{dy}{dx}}, \qquad b^2 + \lambda = y\left(y - x\frac{dy}{dx}\right),$$

and, eliminating $\lambda$,

$$a^2 - b^2 = \frac{\left(x\dfrac{dy}{dx} - y\right)\left(x + y\dfrac{dy}{dx}\right)}{\dfrac{dy}{dx}},$$

and therefore the required differential equation is

$$\left(x\frac{dy}{dx} - y\right)\left(x + y\frac{dy}{dx}\right) = (a^2 - b^2)\frac{dy}{dx};$$

it is of the first order and the second degree.

When an equation is of the first order it is customary to represent the derivative $\dfrac{dy}{dx}$ by the symbol $p$. Thus the differential equation of the family of confocal conics may be written:

$$(xp - y)(x + yp) = (a^2 - b^2)p.$$

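
The elimination of $\lambda$ can be verified with a computer algebra system. The sketch below uses sympy's resultant, working with the numerators of the primitive and the derived equation and writing $p$ for $dy/dx$; the eliminant should agree with the equation of the first order and second degree obtained above, up to a factor independent of $p$.

```python
import sympy as sp

x, y, lam, a, b, p = sp.symbols('x y lam a b p')

# Numerator of the primitive x^2/(a^2+lam) + y^2/(b^2+lam) = 1 ...
primitive = x**2*(b**2 + lam) + y**2*(a**2 + lam) - (a**2 + lam)*(b**2 + lam)
# ... and of the derived equation x/(a^2+lam) + y*p/(b^2+lam) = 0.
derived = x*(b**2 + lam) + y*p*(a**2 + lam)

# Eliminate the parameter lam by taking the resultant of the two polynomials.
eliminant = sp.expand(sp.resultant(primitive, derived, lam))

# Compare with (x*p - y)*(x + y*p) - (a^2 - b^2)*p = 0.
target = sp.expand((x*p - y)*(x + y*p) - (a**2 - b**2)*p)
ratio = sp.cancel(eliminant / target)
print(ratio)   # a factor free of p, so the two equations are equivalent
```
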
1. Formation of Partial Differential Equations through the Elimination of Arbitrary Constants

Let $x_1, x_2, \ldots, x_n$ be independent variables, and let $z$, the dependent variable, be defined by the equation

$$f(x_1, x_2, \ldots, x_n, z, c_1, c_2, \ldots, c_m) = 0,$$

where $c_1, c_2, \ldots, c_m$ are arbitrary constants. To this equation may be adjoined the $n$ equations obtained by differentiating partially with respect to each of the variables $x_1, x_2, \ldots, x_n$ in succession, namely,

$$\frac{\partial f}{\partial x_i} + \frac{\partial f}{\partial z}\,p_i = 0, \qquad p_i = \frac{\partial z}{\partial x_i} \qquad (i = 1, 2, \ldots, n).$$

If $m \le n$, sufficient equations are now available to eliminate the constants $c_1, c_2, \ldots, c_m$. If $m > n$, the second derived equations are also adjoined; they are of the forms

$$\frac{\partial^2 f}{\partial x_i\,\partial x_j} + \frac{\partial^2 f}{\partial x_i\,\partial z}\,p_j + \frac{\partial^2 f}{\partial x_j\,\partial z}\,p_i + \frac{\partial^2 f}{\partial z^2}\,p_i p_j + \frac{\partial f}{\partial z}\,p_{ij} = 0, \qquad p_{ij} = \frac{\partial^2 z}{\partial x_i\,\partial x_j}.$$

This process is continued until enough equations have been obtained to enable the elimination to be carried out. In general, when this stage has been reached, there will be more equations available than there are constants to eliminate and therefore the primitive may lead not to one partial differential equation but to a system of simultaneous partial differential equations.

1.1 The Partial Differential Equations of all Planes and of all Spheres.

As a first example let the primitive be the equation

$$z = ax + by + c,$$

in which $a$, $b$, $c$ are arbitrary constants. By a proper choice of these constants, the equation can be made to represent any plane in space except a plane parallel to the $z$-axis. The first derived equations are:

$$\frac{\partial z}{\partial x} = a, \qquad \frac{\partial z}{\partial y} = b.$$

These are not sufficient to eliminate $a$, $b$ and $c$, and therefore the second derived equations are taken, namely,

$$\frac{\partial^2 z}{\partial x^2} = 0, \qquad \frac{\partial^2 z}{\partial x\,\partial y} = 0, \qquad \frac{\partial^2 z}{\partial y^2} = 0.$$

They are free of arbitrary constants, and are therefore the differential equations required. It is customary to write

$$p = \frac{\partial z}{\partial x}, \quad q = \frac{\partial z}{\partial y}, \quad r = \frac{\partial^2 z}{\partial x^2}, \quad s = \frac{\partial^2 z}{\partial x\,\partial y}, \quad t = \frac{\partial^2 z}{\partial y^2}.$$

Thus any plane in space which is not parallel to the $z$-axis satisfies simultaneously the three equations

$$r = 0, \qquad s = 0, \qquad t = 0.$$

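
The whole computation for the plane can be checked in a few lines of sympy:

```python
import sympy as sp

x, y, a, b, c = sp.symbols('x y a b c')

# The primitive: z = a*x + b*y + c, any plane not parallel to the z-axis.
z = a*x + b*y + c

p, q = z.diff(x), z.diff(y)                          # first derived equations
r, s, t = z.diff(x, 2), z.diff(x, y), z.diff(y, 2)   # second derived equations

print(p, q)      # a b  -- still contain the arbitrary constants
print(r, s, t)   # 0 0 0 -- free of the constants: the equations required
```
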
In the second place, consider the equation satisfied by the most general sphere; it is

$$(x - a)^2 + (y - b)^2 + (z - c)^2 = R^2,$$

where $a$, $b$, $c$ and $R$ are arbitrary constants. The first derived equations are

$$(x - a) + (z - c)p = 0, \qquad (y - b) + (z - c)q = 0,$$

and the second derived equations are

$$1 + p^2 + (z - c)r = 0, \qquad pq + (z - c)s = 0, \qquad 1 + q^2 + (z - c)t = 0.$$

When $z - c$ is eliminated, the required equations are obtained, namely,

$$\frac{1 + p^2}{r} = \frac{pq}{s} = \frac{1 + q^2}{t}.$$

Thus there are two distinct equations. Let $\rho$ be the value of each of the members of the equations, then

$$\rho^2(rt - s^2) = (1 + p^2)(1 + q^2) - p^2q^2 = 1 + p^2 + q^2.$$

Consequently, if the spheres considered are real, the additional condition

$$rt - s^2 > 0$$

must be satisfied.
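
The two eliminant equations and the reality condition can be verified on one explicit branch of the sphere; a sketch in sympy, with the constants assumed positive so that the square root is well defined:

```python
import sympy as sp

x, y, a, b, c, R = sp.symbols('x y a b c R', positive=True)

# One branch of the general sphere (x-a)^2 + (y-b)^2 + (z-c)^2 = R^2.
z = c + sp.sqrt(R**2 - (x - a)**2 - (y - b)**2)

p, q = z.diff(x), z.diff(y)
r, s, t = z.diff(x, 2), z.diff(x, y), z.diff(y, 2)

# The eliminant equations (1+p^2)/r = p*q/s = (1+q^2)/t, cross-multiplied:
e1 = sp.simplify((1 + p**2)*s - p*q*r)
e2 = sp.simplify((1 + q**2)*s - p*q*t)
print(e1, e2)                    # both vanish identically

# Reality condition: rt - s^2 = R^2/(R^2 - (x-a)^2 - (y-b)^2)^2 > 0.
print(sp.simplify(r*t - s**2))
```
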

2. A Property of Jacobians

It will now be shown that the natural primitive of a single partial differential equation is a relation into which enter arbitrary functions of the variables. The investigation which leads up to this result depends upon a property of functional determinants or Jacobians.

Let $u_1, u_2, \ldots, u_m$ be functions of the $n$ independent variables $x_1, x_2, \ldots, x_n$, and consider the set of partial differential coefficients arranged in order thus:

$$\begin{matrix}
\dfrac{\partial u_1}{\partial x_1} & \dfrac{\partial u_1}{\partial x_2} & \cdots & \dfrac{\partial u_1}{\partial x_n} \\
\dfrac{\partial u_2}{\partial x_1} & \dfrac{\partial u_2}{\partial x_2} & \cdots & \dfrac{\partial u_2}{\partial x_n} \\
\cdots & \cdots & \cdots & \cdots \\
\dfrac{\partial u_m}{\partial x_1} & \dfrac{\partial u_m}{\partial x_2} & \cdots & \dfrac{\partial u_m}{\partial x_n}
\end{matrix}$$

Then the determinant of order $r$ whose elements are the elements common to $r$ rows and $r$ columns of the above scheme is known as a Jacobian.4 Let all the different possible Jacobians be constructed; then if a Jacobian of order $r$, say

$$\frac{\partial(u_1, u_2, \ldots, u_r)}{\partial(x_1, x_2, \ldots, x_r)},$$

is not zero for a chosen set of values $(x_1^0, x_2^0, \ldots, x_n^0)$, but if every Jacobian of order $r + 1$ is identically zero, then the functions $u_1, u_2, \ldots, u_r$ are independent, but the remaining functions $u_{r+1}, \ldots, u_m$ are expressible in terms of $u_1, u_2, \ldots, u_r$.

Suppose that, for values of $x_1, x_2, \ldots, x_n$ in the neighbourhood of $(x_1^0, x_2^0, \ldots, x_n^0)$, the functions $u_1, u_2, \ldots, u_r$ are not independent, but that there exists an identical relationship,

$$F(u_1, u_2, \ldots, u_r) = 0.$$

Then the equations

$$\frac{\partial F}{\partial u_1}\frac{\partial u_1}{\partial x_j} + \frac{\partial F}{\partial u_2}\frac{\partial u_2}{\partial x_j} + \cdots + \frac{\partial F}{\partial u_r}\frac{\partial u_r}{\partial x_j} = 0 \qquad (j = 1, 2, \ldots, r)$$

would be satisfied, and, since the derivatives $\partial F/\partial u_1, \ldots, \partial F/\partial u_r$ are not all zero, the Jacobian $\dfrac{\partial(u_1, u_2, \ldots, u_r)}{\partial(x_1, x_2, \ldots, x_r)}$ would vanish identically in the neighbourhood of $(x_1^0, x_2^0, \ldots, x_n^0)$, which is contrary to the hypothesis. Consequently, the first part of the theorem, namely, that $u_1, u_2, \ldots, u_r$ are independent, is true.

In $u_{r+1}, u_{r+2}, \ldots, u_m$ let the variables $x_1, x_2, \ldots, x_r$ be replaced by the new set of independent variables $u_1, u_2, \ldots, u_r$. It will now be shown that if $u_s$ is any one of the functions $u_{r+1}, \ldots, u_m$, and $x_t$ any one of the variables $x_{r+1}, \ldots, x_n$, then $u_s$ is explicitly independent of $x_t$, that is

$$\frac{\partial u_s}{\partial x_t} = 0.$$

Let

$$u_i = u_i(x_1, x_2, \ldots, x_n) \qquad (i = 1, 2, \ldots, r, s),$$

and let $x_1, x_2, \ldots, x_r$ be replaced by their expressions in terms of the new independent variables $u_1, \ldots, u_r, x_{r+1}, \ldots, x_n$; then, differentiating both sides of each equation with respect to $x_t$, and writing $\overline{\dfrac{\partial u_s}{\partial x_t}}$ for the derivative of $u_s$ regarded as a function of the new variables,

$$0 = \frac{\partial u_i}{\partial x_1}\,\frac{\partial x_1}{\partial x_t} + \cdots + \frac{\partial u_i}{\partial x_r}\,\frac{\partial x_r}{\partial x_t} + \frac{\partial u_i}{\partial x_t} \qquad (i = 1, 2, \ldots, r),$$

$$\overline{\frac{\partial u_s}{\partial x_t}} = \frac{\partial u_s}{\partial x_1}\,\frac{\partial x_1}{\partial x_t} + \cdots + \frac{\partial u_s}{\partial x_r}\,\frac{\partial x_r}{\partial x_t} + \frac{\partial u_s}{\partial x_t}.$$

The eliminant of $\dfrac{\partial x_1}{\partial x_t}, \ldots, \dfrac{\partial x_r}{\partial x_t}$ is

$$\frac{\partial(u_1, u_2, \ldots, u_r)}{\partial(x_1, x_2, \ldots, x_r)}\;\overline{\frac{\partial u_s}{\partial x_t}} = \frac{\partial(u_1, u_2, \ldots, u_r, u_s)}{\partial(x_1, x_2, \ldots, x_r, x_t)}.$$

But since, by hypothesis,

$$\frac{\partial(u_1, u_2, \ldots, u_r, u_s)}{\partial(x_1, x_2, \ldots, x_r, x_t)} = 0, \qquad \frac{\partial(u_1, u_2, \ldots, u_r)}{\partial(x_1, x_2, \ldots, x_r)} \neq 0,$$

it follows that

$$\overline{\frac{\partial u_s}{\partial x_t}} = 0.$$

Consequently each of the functions $u_{r+1}, \ldots, u_m$ is expressible in terms of the functions $u_1, u_2, \ldots, u_r$ alone, as was to be proved.
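
The theorem can be illustrated symbolically. In the sketch below (the three functions are supplied here for illustration, not taken from the text), $u_3 = u_1^2 - 2u_2$: the Jacobian of order three vanishes identically, a Jacobian of order two does not, and $u_3$ is expressible in terms of $u_1$ and $u_2$ alone.

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')

# Three functions of three variables; only two of them are independent,
# since u3 = u1**2 - 2*u2.
u1 = x1 + x2 + x3
u2 = x1*x2 + x2*x3 + x3*x1
u3 = x1**2 + x2**2 + x3**2

J = sp.Matrix([u1, u2, u3]).jacobian([x1, x2, x3])

print(sp.simplify(J.det()))          # 0: the order-3 Jacobian vanishes identically
print(J[:2, :2].det())               # x1 - x2: an order-2 Jacobian, not identically zero
print(sp.expand(u1**2 - 2*u2 - u3))  # 0: u3 expressed in terms of u1 and u2
```
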

3. Formation of a Partial Differential Equation through the Elimination of an Arbitrary Function

Let the dependent variable $z$ be related to the independent variables $x$ and $y$ by an equation of the form

$$F(u, v) = 0,$$

where $F$ is an arbitrary function of its arguments $u$ and $v$ which, in turn, are given functions of $x$, $y$ and $z$. When for $z$ is substituted its value in terms of $x$ and $y$, the equation becomes an identity. If therefore $\dfrac{du}{dx}$ represents the partial derivative of $u$ with respect to $x$ when $z$ has been replaced by its value, then

$$\frac{\partial F}{\partial u}\,\frac{du}{dx} + \frac{\partial F}{\partial v}\,\frac{dv}{dx} = 0, \qquad \frac{\partial F}{\partial u}\,\frac{du}{dy} + \frac{\partial F}{\partial v}\,\frac{dv}{dy} = 0.$$

But

$$\frac{du}{dx} = \frac{\partial u}{\partial x} + p\,\frac{\partial u}{\partial z}, \qquad \frac{du}{dy} = \frac{\partial u}{\partial y} + q\,\frac{\partial u}{\partial z},$$

and similarly for $v$; and therefore the partial differential equation satisfied by $z$, obtained by eliminating $\dfrac{\partial F}{\partial u}$ and $\dfrac{\partial F}{\partial v}$, is

$$\left(\frac{\partial u}{\partial x} + p\,\frac{\partial u}{\partial z}\right)\left(\frac{\partial v}{\partial y} + q\,\frac{\partial v}{\partial z}\right) - \left(\frac{\partial u}{\partial y} + q\,\frac{\partial u}{\partial z}\right)\left(\frac{\partial v}{\partial x} + p\,\frac{\partial v}{\partial z}\right) = 0.$$

3.1. The Differential Equation of a Surface of Revolution

The equation

$$z = f(x^2 + y^2)$$

represents a surface of revolution whose axis coincides with the $z$-axis. In the notation of the preceding section,

$$u = z, \qquad v = x^2 + y^2,$$

and therefore $z$ satisfies the partial differential equation:

$$2yp - 2xq = 0,$$

or

$$yp = xq.$$

Conversely, this equation is satisfied by

$$z = f(x^2 + y^2),$$

where $f$ is an arbitrary function of its argument, and is therefore the differential equation of all surfaces of revolution which have the common axis $Oz$.
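
A one-line sympy check confirms that every surface $z = f(x^2 + y^2)$, with $f$ left entirely arbitrary, satisfies $y\,\partial z/\partial x - x\,\partial z/\partial y = 0$:

```python
import sympy as sp

x, y = sp.symbols('x y')
f = sp.Function('f')        # an arbitrary function of one argument

# A surface of revolution about the z-axis: z = f(x**2 + y**2).
z = f(x**2 + y**2)

p, q = z.diff(x), z.diff(y)
print(sp.simplify(y*p - x*q))   # 0, whatever the function f may be
```
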

3.2. Euler's Theorem on Homogeneous Functions

Let

$$z = f(x, y),$$

where $f$ is a homogeneous function of $x$ and $y$ of degree $n$. Then, since $z$ can be written in the form

$$z = x^n \phi\!\left(\frac{y}{x}\right),$$

it follows that

$$zx^{-n} - \phi\!\left(\frac{y}{x}\right) = 0.$$

In the notation of § 3,

$$u = zx^{-n}, \qquad v = \frac{y}{x},$$

and therefore $z$ satisfies the partial differential equation:

$$\left(-nzx^{-n-1} + px^{-n}\right)\frac{1}{x} + qx^{-n}\,\frac{y}{x^2} = 0,$$

and this equation reduces to

$$x\frac{\partial z}{\partial x} + y\frac{\partial z}{\partial y} = nz.$$

Similarly, if $u$ is a homogeneous function of the three variables $x$, $y$ and $z$, of degree $n$,

$$x\frac{\partial u}{\partial x} + y\frac{\partial u}{\partial y} + z\frac{\partial u}{\partial z} = nu.$$

This theorem can be extended to any number of variables.
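
Euler's theorem can be spot-checked on sample homogeneous functions (the particular functions below are chosen here for illustration):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

# A homogeneous function of x and y of degree n = 3.
f = x**3 + 5*x*y**2 + 2*y**3
e3 = sp.expand(x*f.diff(x) + y*f.diff(y) - 3*f)

# The three-variable form, with a homogeneous function of degree n = 2.
g = x*y + y*z + z*x
e2 = sp.expand(x*g.diff(x) + y*g.diff(y) + z*g.diff(z) - 2*g)

print(e3, e2)   # 0 0
```
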

4. Formation of a Total Differential Equation in Three Variables

The equation

$$f(x, y, z) = c$$

represents a family of surfaces, and it will be supposed that to each value of $c$ corresponds one, and only one, surface of the family. Now let $(x, y, z)$ be a point on a particular surface and $(x + \delta x, y + \delta y, z + \delta z)$ a neighbouring point on the same surface, then

$$f(x + \delta x, y + \delta y, z + \delta z) - f(x, y, z) = 0.$$

Assuming that the partial derivatives

$$\frac{\partial f}{\partial x}, \qquad \frac{\partial f}{\partial y}, \qquad \frac{\partial f}{\partial z}$$

exist and are continuous, this equation may be written in the form

$$\left(\frac{\partial f}{\partial x} + \epsilon_1\right)\delta x + \left(\frac{\partial f}{\partial y} + \epsilon_2\right)\delta y + \left(\frac{\partial f}{\partial z} + \epsilon_3\right)\delta z = 0,$$

where $\epsilon_1, \epsilon_2, \epsilon_3 \to 0$ as $(\delta x, \delta y, \delta z) \to (0, 0, 0)$.

Now let $\epsilon_1$, $\epsilon_2$ and $\epsilon_3$ be made zero and let $dx$, $dy$ and $dz$ be written for $\delta x$, $\delta y$ and $\delta z$ respectively. Then there results the total differential equation

$$\frac{\partial f}{\partial x}\,dx + \frac{\partial f}{\partial y}\,dy + \frac{\partial f}{\partial z}\,dz = 0,$$

which has been derived from the primitive by a consistent and logical process.
If the three partial derivatives have a common factor $\mu$, and if

$$\frac{\partial f}{\partial x} = \mu P, \qquad \frac{\partial f}{\partial y} = \mu Q, \qquad \frac{\partial f}{\partial z} = \mu R,$$

then, if the factor $\mu$ is removed, the equation takes the form

$$P\,dx + Q\,dy + R\,dz = 0.$$

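
The passage from a family of surfaces to its total differential equation can be sketched in sympy. The family $x^2 + y^2 + z^2 = c$ is supplied here for illustration, with ordinary symbols standing for the differentials $dx$, $dy$, $dz$:

```python
import sympy as sp

# Symbols dx, dy, dz stand for the differentials themselves.
x, y, z, dx, dy, dz = sp.symbols('x y z dx dy dz')

# Illustrative family of surfaces: x^2 + y^2 + z^2 = c.
f = x**2 + y**2 + z**2

# The total differential equation f_x*dx + f_y*dy + f_z*dz = 0.
total = f.diff(x)*dx + f.diff(y)*dy + f.diff(z)*dz
print(total)

# The three partial derivatives have the common factor mu = 2; removing it
# leaves the equation in the form P*dx + Q*dy + R*dz = 0 with P, Q, R = x, y, z.
P, Q, R = (f.diff(x)/2, f.diff(y)/2, f.diff(z)/2)
print(P, Q, R)   # x y z
```
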
That there is no inconsistency in the above use of the differentials $dx$, $dy$, etc., may be verified by considering a particular equation in two variables, namely,

$$f(x, y) = c.$$

The above process gives rise to the total differential equation

$$\frac{\partial f}{\partial x}\,dx + \frac{\partial f}{\partial y}\,dy = 0,$$

and thus the quotient of the differentials is in fact the differential coefficient $\dfrac{dy}{dx}$.

The primitive $f(x, y, z) = c$ thus gives rise to the total differential equation $P\,dx + Q\,dy + R\,dz = 0$, which, after multiplication by the factor $\mu$, becomes

$$\frac{\partial f}{\partial x}\,dx + \frac{\partial f}{\partial y}\,dy + \frac{\partial f}{\partial z}\,dz = 0.$$

Footnotes

  1. Needless to say, it is assumed that all the partial differential coefficients of $f$ which occur exist, and that $\partial f/\partial y$ is not identically zero.

  2. Originally the terms integral (James Bernoulli, 1689) and particular integral (Euler, Inst. Calc. Int. 1768) were used. The use of the word solution dates back to Lagrange (1774), and, mainly through the influence of Poincaré, it has become established. The term particular integral is now used only in a very restricted sense, cf. Chap. VI. infra.

  3. Formerly known as the complete integral or complete integral equation (æquatio integralis completa, Euler). The term integral equation has now an utterly different meaning (cf. § 3.2, infra), and its use in any other connection should be abandoned.

  4. Scott and Mathews, Theory of Determinants, Chap. XIII.