Newsgroups: sci.math
Path: sparky!uunet!enterpoop.mit.edu!galois!riesz!jbaez
From: jbaez@riesz.mit.edu (John C. Baez)
Subject: Re: Can Anyone Solve this?????
Message-ID: <1993Jan24.234937.12223@galois.mit.edu>
Sender: news@galois.mit.edu
Nntp-Posting-Host: riesz
Organization: MIT Department of Mathematics, Cambridge, MA
References: <1993Jan23.210732.19327@magnus.acs.ohio-state.edu>
Date: Sun, 24 Jan 93 23:49:37 GMT
Lines: 57

In article <1993Jan23.210732.19327@magnus.acs.ohio-state.edu> mfreiman@magnus.acs.ohio-state.edu (Mark R Freiman) writes:
>If W = {f such that f'' - 2f' + f = 0}, what is the dimension, and what
>is a basis? I know the answer for the dimension is 2, and a basis is
>e^x and xe^x, but how would one go about obtaining these answers?
The point is to show that any other solution to your differential
equation is a linear combination of the two you give. There are various
ways to do this. One way would be to prove that any solution of a
second-order differential equation like f''(x) + p(x)f'(x) + q(x)f(x) =
0, where p and q are, say, smooth functions (in your case p = -2 and
q = 1), is determined by f(0) and f'(0). (A related way is to show that
two solutions of this equation, say f and g, are linearly independent
if and only if the Wronskian fg' - gf' is nonzero everywhere.) Here's
one way that generalizes to differential equations of higher order.
First rewrite the differential equation as two first-order linear
differential equations for f and f'... which we will instead call f_1
and f_2, just to be confusing. We get

    f_1' = f_2
    f_2' = -pf_2 - qf_1

or if we write F for the vector (f_1, f_2) and A for the (x-dependent)
matrix

    (  0    1 )
    ( -q   -p )

we can write the two first-order differential equations as *one*
first-order vector-valued differential equation, namely just

    F' = AF.

Simpler, huh? Now "solve" it by integrating...

    F(x) = F(0) + int_0^x A(t) F(t) dt.

This "solution" is true but not really a solution, since F(t) appears
inside the integral. However, we can repeatedly substitute this
equation into itself (e.g., for the first iteration, use the equation
to express F(t) as F(0) + int_0^t A(s) F(s) ds). Repeating this
forever, we get an infinite sum, and I leave it to you to write this
sum down and check that it converges and actually gives the answer...
that is, an expression for F(x) in terms of A and F(0). Thus the
solution really is determined by its initial data f(0) and f'(0).
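
(A numerical aside, not in the original argument: here is the iteration
carried out in Python for this particular equation, where p = -2 and
q = 1 make A constant. Everything below, including the grid, the helper
cumtrapz, and the iteration count, is just an illustrative sketch.)

import numpy as np

# Picard iteration for F' = AF on [0, 1], with A the companion matrix
# of f'' - 2f' + f = 0 (p = -2, q = 1, both constant).
A = np.array([[0.0, 1.0],
              [-1.0, 2.0]])        # f_1' = f_2,  f_2' = -q f_1 - p f_2

x = np.linspace(0.0, 1.0, 1001)    # integration grid on [0, 1]
F0 = np.array([0.0, 1.0])          # initial data f(0) = 0, f'(0) = 1

def cumtrapz(y, x):
    # cumulative trapezoidal integral of y(t) from 0 up to each x
    dx = np.diff(x)[:, None]
    chunks = 0.5 * (y[1:] + y[:-1]) * dx
    return np.vstack([np.zeros((1, y.shape[1])),
                      np.cumsum(chunks, axis=0)])

F = np.tile(F0, (len(x), 1))       # zeroth guess: the constant F(0)
for _ in range(25):                # substitute the equation into itself
    F = F0 + cumtrapz(F @ A.T, x)  # F(x) <- F(0) + int_0^x A F(t) dt

# With this initial data the exact solution is f(x) = x e^x:
err = np.max(np.abs(F[:, 0] - x * np.exp(x)))
print(err)                         # tiny; limited by the trapezoid rule

(After a dozen or so iterations the iterates stop changing: they are the
partial sums of the infinite series above, and the error that remains
comes from the quadrature, not the iteration.)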

My argument really can be made into a 100% precise proof of the
existence and uniqueness of solutions of such a differential equation
with given initial data. And, better yet, the technique is enormously
general and applies to give existence and uniqueness results for
nonlinear ordinary differential equations and even nonlinear partial
differential equations. In its full-fledged glory, it's called the
method of nonlinear semigroups and is due to Irving Segal - but you are
more likely to find it in basic differential equations books under the
name of Picard's theorem on local existence and uniqueness. In such
books, it's only done for first-order equations, but the trick I
described allows one to solve n-th order equations too.
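
(One more aside that is not in the original post: for this equation A
is constant, so the infinite sum collapses to the matrix exponential,
F(x) = exp(xA) F(0), and the basis e^x, xe^x falls right out of it.
A short symbolic check in Python with sympy, again only a sketch:)

import sympy as sp

x, f0, f1 = sp.symbols('x f0 f1')

# Companion matrix of f'' - 2f' + f = 0 (p = -2, q = 1, both constant)
A = sp.Matrix([[0, 1],
               [-1, 2]])

# Summing the series by hand gives exp(xA); sympy can do it for us.
expA = (A * x).exp()

# The first component of exp(xA) F(0) is the solution with f(0) = f0
# and f'(0) = f1; it comes out as a combination of e^x and x e^x:
f = sp.expand(expA[0, 0] * f0 + expA[0, 1] * f1)
print(f)  # f0*exp(x) - f0*x*exp(x) + f1*x*exp(x), or equivalent

# And it really does solve the equation:
print(sp.simplify(sp.diff(f, x, 2) - 2 * sp.diff(f, x) + f))  # 0

(Since every solution is exp(xA) F(0) for some F(0), this exhibits the
solution space directly: it is 2-dimensional, with basis e^x and xe^x.)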