2025-08-22 20:29:37 +02:00
parent 04918607d5
commit e11a532581
2 changed files with 65 additions and 41 deletions

Binary file not shown.


@@ -153,14 +153,13 @@ this document was created using
)
]
== b)
define $ f^i_k := (partial f^i) / (partial y^k)
quad quad f^i_(k l) := (partial^2 f^i) / (partial y^k partial y^l) $
and let the jacobian matrix $ frak(J) = [ (f_y)_(i j) = f^i_j ] $
and let the jacobian matrix $ frak(J) := [ (f_y)_(i j) = f^i_j ] $
where $f^2_y$ is the matrix product and $f_(y y)$ has entries $(f^i_(k l))$.
@@ -194,17 +193,18 @@ thus we need to prove that $ f^T f_y = 0 $
but $ f^T f_y = (f(pdv(f, y)))^T = (f^1(f^1_y, ..., f^m_y), ..., f^m (f^1_y, ..., f^m_y)) $
no more, i yield, i yield!!
= problem 2
== a)
let $f(x) = x^4 + 3 x^3 - 2 x + 5$; find all taylor polynomials around
$x_0 = -2$.
let $f(x) := x^4 + 3 x^3 - 2 x + 5$; find all taylor polynomials around
$x_0 := -2$.
#align(center)[#line(length: 75%)]
recall that each term is given by $ P_k (x) = (f^((k)) (x_0)) / k! (x - x_0)^k $
recall that each term is given by $ P_k (x) := (f^((k)) (x_0)) / (k!) (x - x_0)^k $
for $k in [0, deg(f)] inter ZZ$.
first compute
@@ -217,22 +217,25 @@ $
then
$
P_0 & = f(-2) = 16 - 24 + 4 + 5 = 1 \
P_1 & = f'(-2) dot (x + 2) = 2 (x+2) = 2x + 4 \
P_2 & = (f''(-2))/2 dot (x + 2)^2 = 6 x^2 + 24x + 24 \
P_3 & = (f^((3)) (-2))/6 dot (x + 2)^3 = -5 (x + 2)^3 \
& = -5x^3 - 30 x^2 - 60 x - 40 \
P_4 & = (x + 2)^4 = x^4 + 8 x^3 + 24 x^2 + 32 x + 16
P_0(x) & = f(-2) = 16 - 24 + 4 + 5 = 1 \
P_1(x) & = f'(-2) dot (x + 2) = 2 (x+2) = 2x + 4 \
P_2(x) & = (f''(-2))/2 dot (x + 2)^2 = 6 x^2 + 24x + 24 \
P_3(x) & = (f^((3)) (-2))/6 dot (x + 2)^3 = -5 (x + 2)^3 \
& = -5x^3 - 30 x^2 - 60 x - 40 \
P_4(x) & = (x + 2)^4 = x^4 + 8 x^3 + 24 x^2 + 32 x + 16
$
note that there are only finitely many nonzero derivatives, as $f(x)$ is a polynomial
of degree 4.
then the $k$-th taylor polynomial can be expressed as
$
T_k = sum_(i=0)^k P_i
T_k := sum_(i=0)^k P_i
$
or alternatively
or recursively
$
T_k = T_(k-1) + P_k
T_k = T_(k-1) + P_k quad and quad T_0 = P_0 = 1
$
for $k in NN^+$.
@@ -241,14 +244,18 @@ thus the taylor polynomials for $f$ are
$
T_0 & = P_0 = 1 \
T_1 & = P_0 + P_1 = 2x + 5 \
T_2 & = 6x^2 + 26x + 28 \
T_3 & = -5x^3 - 24x^2 + 34x - 12 \
T_4 & = x^4 + 3x^3 + 66x + 4
T_2 & = 6x^2 + 26x + 29 \
T_3 & = -5x^3 - 24x^2 - 34x - 11 \
T_4 & = x^4 + 3x^3 - 2x + 5 = f(x)
$
naturally we are able to describe a fourth-degree polynomial exactly with its
fourth-order taylor polynomial.
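as a quick sanity check (a sketch, assuming sympy is available), the terms $P_k$ and the
partial sums $T_k$ above can be reproduced programmatically:
```python
# sketch: rebuild the taylor terms P_k and sums T_k of f around x0 = -2 with sympy
import sympy as sp

x = sp.symbols('x')
f = x**4 + 3*x**3 - 2*x + 5
x0 = -2

T = 0
for k in range(5):
    # k-th term P_k(x) = f^(k)(x0) / k! * (x - x0)^k
    P_k = sp.diff(f, x, k).subs(x, x0) / sp.factorial(k) * (x - x0)**k
    T = T + P_k  # the recursive definition T_k = T_(k-1) + P_k
    print(k, sp.expand(P_k), sp.expand(T))

# the last iteration prints T_4 = x**4 + 3*x**3 - 2*x + 5, i.e. f itself
```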
== b)
let $g(x) = ln(1 + x)$; calculate its maclaurin series.
let $g(x) := ln(1 + x)$; calculate its maclaurin series -- i.e. taylor series at
$x_0 = 0$.
#align(center)[#line(length: 75%)]
@@ -262,31 +269,34 @@ $
for $k in NN^+$.
recall that the maclaurin series of a non-analytic function $f$ is
the function is infinitely differentiable around $x = 0$, and its taylor remainder
tends to zero for $|x| < 1$, so we can conclude that $g(x)$ is analytic there.
recall that the maclaurin series of an analytic function $f$ is
$
f(x) = sum_(i=0)^k (f^((i)) (0))/(i!) x^i + O(x^(k+1))
f(x) = sum_(i=0)^infinity (f^((i)) (0))/(i!) x^i
$
now lucky us, since
fortunately we have
$
g^((k)) (0) = (-1)^(k-1) dot (k - 1)!
$
so
$
g(x) & = sum_(i=1)^k (g^((i)) (0)) / (i!) x^i + O(x^(k+1)) \
& = sum_(i=1)^k (-1)^(i-1) (i-1)! / (i!) x^i + O(x^(k+1)) \
& = sum_(i=1)^k ((-1)^(i-1) x^i) / i + O(x^(k+1)) \
& = x - x^2 / 2 + x^3 / 3 + dots.c + O(x^(k+1))
g(x) & = sum_(i=1)^infinity (g^((i)) (0)) / (i!) x^i \
& = sum_(i=1)^infinity (-1)^(i-1) ((i-1)!) / (i!) x^i \
& = sum_(i=1)^infinity ((-1)^(i-1) x^i) / i \
& = x - x^2 / 2 + x^3 / 3 - dots.c
$
we may choose $O(x^(k+1)) = 0$.
thus we have the maclaurin series of $g(x)$.
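as a quick check (a sympy sketch), both the derivative formula $g^((k)) (0) = (-1)^(k-1) (k-1)!$
and the series terms above can be reproduced:
```python
# sketch: check g^(k)(0) = (-1)^(k-1) * (k-1)! and the maclaurin series of ln(1 + x)
import sympy as sp

x = sp.symbols('x')
g = sp.log(1 + x)

for k in range(1, 6):
    assert sp.diff(g, x, k).subs(x, 0) == (-1)**(k - 1) * sp.factorial(k - 1)

# prints x - x**2/2 + x**3/3 - x**4/4 + x**5/5 + O(x**6)
print(sp.series(g, x, 0, 6))
```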
= problem 3
== a)
are $1+x$, $1-x$ and $x-x^2$ linearly independent in $P_2$?
are $1+x$, $1-x$ and $x-x^2$ linearly independent in $P_2$? what do they span?
#align(center)[#line(length: 75%)]
@@ -311,7 +321,7 @@ our affine space has two conditions, $p(1) = 1$ and $p(2) = 2$.
let
$
p(x) = a x^3 + b x^2 + c x + d
p(x) := a x^3 + b x^2 + c x + d
$
such that
$
@@ -328,8 +338,14 @@ $
mat(1, 1, 1, 1 | 1; 7, 3, 1, 0 | 1)
$
thus we can see that there are two linearly independent basis vectors that may
form a two-dimensional linear space.
thus the condition vectors are linearly independent and the matrix has rank 2,
so they span a two-dimensional constraint space. by the rank-nullity theorem
we can conclude that the solution space must have
$
dim(P_3) - "rank" = 4 - 2 = 2
$
dimensions.
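the same rank computation, as a small sympy sketch:
```python
# sketch: rank and nullity of the (row-reduced) condition matrix above
import sympy as sp

A = sp.Matrix([[1, 1, 1, 1],
               [7, 3, 1, 0]])
rank = A.rank()
nullity = A.cols - rank  # dim(P_3) - rank = 4 - 2
print(rank, nullity)     # 2 2
```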
== c)
@@ -346,7 +362,7 @@ conditions.
so let
$
p(x) = alpha (x - 1) + beta (x^2 - 1) + gamma (x^3 - 1)
p(x) := alpha (x - 1) + beta (x^2 - 1) + gamma (x^3 - 1)
$
such that
$
@@ -361,6 +377,8 @@ equations, thus ending up in the same situation as the last subtask, meaning we
will not be able to choose the remaining values arbitrarily, since the system of
equations will be underdetermined.
as such, it is possible if we choose $y_1 = 0$.
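a tiny check of that claim (a sympy sketch, assuming $y_1$ denotes the value prescribed at
$x = 1$): every $p$ in this span vanishes at $x = 1$, regardless of the coefficients.
```python
# sketch: p(x) = alpha (x - 1) + beta (x^2 - 1) + gamma (x^3 - 1) always has p(1) = 0
import sympy as sp

x, alpha, beta, gamma = sp.symbols('x alpha beta gamma')
p = alpha * (x - 1) + beta * (x**2 - 1) + gamma * (x**3 - 1)
print(p.subs(x, 1))  # 0, independent of alpha, beta and gamma
```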
= problem 4
@@ -371,7 +389,7 @@ equations will be underdetermined.
prove that ${sin(t), cos(t), 1}$ is orthogonal in the space $C[0, 2 pi]$ with
inner product
$
inner(f, g) = integral_0^(2 pi) f(s) g(s) dd(s)
inner(f, g) := integral_0^(2 pi) f(s) g(s) dd(s)
$
i.e. that they form an orthogonal set in $C[0, 2 pi]$.
@@ -379,11 +397,17 @@ i.e. that it is an orthogonal basis for $C[0, 2 pi]$.
we must compute the pair-wise inner products of the base vectors
$
inner(sin, cos) & = integral_0^(2 pi) sin(t) cos(t) dd(t) = \
inner(sin, 1) & = integral_0^(2 pi) sin(t) dd(t) = \
inner(1, cos) & = integral_0^(2 pi) cos(t) dd(t) = 0
// inner(sin, cos) & = integral_0^(2 pi) sin(t) cos(t) dd(t) = \
inner(sin, 1) & = integral_0^(2 pi) sin(t) dd(t) = 0, \
inner(1, cos) & = integral_0^(2 pi) cos(t) dd(t) = 0
$
because $sin(t)$ and $cos(t)$ both integrate to zero over a full period $2 pi$. lastly,
$
inner(sin, cos) & = integral_0^(2 pi) sin(t) cos(t) dd(t) \
& = 1/2 integral_0^(2 pi) sin(2 t) dd(t) \
& = 1/4 integral_0^(4 pi) sin(u) dd(u) = 0.
$
because $sin(t) cos(t)$, $sin(t)$ and $cos(t)$ all have a period of $2 pi$.
thus they are pairwise orthogonal and form an orthogonal basis for their span under this
definition of inner product. to make it an orthonormal basis, we can scale each base component
@@ -401,11 +425,11 @@ $
== b)
to form an orthonormal basis from the monomials ${1, x, x^2}$, we use the
gram-schmidt method with $vectorbold(v_1) = 1$.
gram-schmidt method with $vectorbold(v_1) := 1$.
then
$
vectorbold(v_2) & = x - "proj"_(vectorbold(v_1)) (x) \
vectorbold(v_2) & := x - "proj"_(vectorbold(v_1)) (x) \
& = x - (inner(vectorbold(v_1), x) / inner(
vectorbold(v_1),
vectorbold(v_1)
@@ -415,9 +439,9 @@ $
& = x
$
then similarly for the last vector
and similarly for the last vector
$
vectorbold(v_3) & = x^2
vectorbold(v_3) & := x^2
- "proj"_(vectorbold(v_1))(x^2)
- "proj"_(vectorbold(v_2))(x^2) \
& = x^2
@@ -432,5 +456,5 @@ $
frak(O) := {1/a, x/b, (x^2 - 1/3)/c}
$
where $a & := sqrt(2), b & := sqrt(2/3)$ and $c & := sqrt(2/5)$.
where $a = sqrt(2)$, $b = sqrt(2/3)$ and $c = sqrt(8/45) = 2/3 sqrt(2/5)$.
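as a last sanity check (a sympy sketch, assuming the inner product here is
$integral_(-1)^1 f(s) g(s) dd(s)$), gram-schmidt on ${1, x, x^2}$ gives the same vectors and
norms:
```python
# sketch: gram-schmidt on {1, x, x^2} with <f, g> = integral of f*g over [-1, 1]
import sympy as sp

x = sp.symbols('x')

def ip(f, g):
    # the assumed inner product on [-1, 1]
    return sp.integrate(f * g, (x, -1, 1))

v1 = sp.Integer(1)
v2 = x - ip(v1, x) / ip(v1, v1) * v1
v3 = sp.expand(x**2 - ip(v1, x**2) / ip(v1, v1) * v1 - ip(v2, x**2) / ip(v2, v2) * v2)
print(v2, v3)  # x, x**2 - 1/3

for v in (v1, v2, v3):
    print(sp.sqrt(ip(v, v)))  # sqrt(2), sqrt(6)/3 = sqrt(2/3), 2*sqrt(10)/15 = sqrt(8/45)
```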