Description
I need answers for questions 1 and 2 only. The sample is attached, and the HW solutions are attached at this link: https://www.filefactory.com/file/4y6pkt9yg5xm/Clas… Hint: when you open the class notes, you will find some explanations that were written for me, but you can jump to the examples or search by keywords in the PDF file. If you need more class notes, let me know (some of them relate to the previous questions). We are very limited on time, so the period to do these questions is 5 hours. The average time to solve the two questions is 1 hour, and it might take a mathematician 0.5 hours to do them all; I doubled the time so you can work on the questions with confidence.
Unformatted Attachment Preview
711 Test 2 (2021 Spring)
Problem 1 (25 points)
x1, x2, . . . , xn are i.i.d. samples of an exponential random variable X with probability density function (pdf)

p(x) = βe^(−βx) for x ≥ 0, and p(x) = 0 otherwise,    (1)

where β > 0 is a parameter.
a) Find the maximum-likelihood estimate (MLE) of β.
b) Find the maximum a posteriori (MAP) estimate of β if we have a prior for β, given by

p(β) = (1/a)e^(−β/a) for β ≥ 0, and p(β) = 0 otherwise,    (2)

where a > 0 is a constant.
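Not part of the test paper: a small numerical sanity check for Problem 1. It assumes the standard closed forms that follow from maximizing the log-likelihood and log-posterior for this model, β_MLE = n / Σx_i and β_MAP = n / (Σx_i + 1/a); the script and its variable names are illustrative only.

```python
# Hypothetical sanity check for Problem 1 (not part of the test paper).
# Assumes the standard closed forms for an exponential likelihood with an
# exponential prior: beta_MLE = n / sum(x), beta_MAP = n / (sum(x) + 1/a).
import numpy as np

rng = np.random.default_rng(0)
beta_true, a, n = 2.0, 1.5, 500
x = rng.exponential(scale=1.0 / beta_true, size=n)   # samples of X ~ Exp(beta)

beta_mle = n / x.sum()
beta_map = n / (x.sum() + 1.0 / a)

# Brute-force check: the closed forms should maximize the log-likelihood /
# log-posterior over a fine grid of candidate beta values.
grid = np.linspace(0.01, 10, 100_000)
loglik = n * np.log(grid) - grid * x.sum()
logpost = loglik - grid / a                # log prior adds -beta/a (up to a constant)

print("MLE closed form %.4f vs grid argmax %.4f" % (beta_mle, grid[loglik.argmax()]))
print("MAP closed form %.4f vs grid argmax %.4f" % (beta_map, grid[logpost.argmax()]))
```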
Problem 2 (25 points)
Suppose the joint pdf of random variables X and Y is
p(x, y) = c for (x, y) ∈ A, and p(x, y) = 0 otherwise,    (3)
where c > 0 is a constant and A is the “triangular” region given in Fig 1, with two straight line
boundaries (x = 0 and y = 0) and one curved boundary (y = 1 − x²). Suppose we want to find a linear estimate of Y from X, with Ŷ = w1 X + w0, where w1 and w0 are weights.
a) Find the weights w1 and w0 that will minimize the mean square error E[(Y − Ŷ)²].
b) What is the minimum mean square error achieved by the w1 and w0 you have found in part a)?
Figure 1: Region A for Problem 2.
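Not in the original exam: a numerical sketch of the standard linear-MMSE identities w1 = Cov(X, Y)/Var(X) and w0 = E[Y] − w1·E[X], evaluated by integrating over the region A. The limits A = {0 ≤ x ≤ 1, 0 ≤ y ≤ 1 − x²} are an assumption read off from the description of Fig 1, not a statement of the official answer.

```python
# Illustrative numerical sketch for Problem 2 (not the official solution).
# Assumes A = {0 <= x <= 1, 0 <= y <= 1 - x^2} and the standard LMMSE identities
#   w1 = Cov(X, Y) / Var(X),  w0 = E[Y] - w1 * E[X].
import numpy as np

h = 1e-3
x = np.arange(0, 1, h) + h / 2
y = np.arange(0, 1, h) + h / 2
X, Y = np.meshgrid(x, y, indexing="ij")
inside = Y <= 1 - X**2                     # indicator of the region A

area = inside.sum() * h * h                # should be ~2/3
c = 1.0 / area                             # constant density on A
EX  = (X * inside).sum() * h * h * c
EY  = (Y * inside).sum() * h * h * c
EXY = (X * Y * inside).sum() * h * h * c
EX2 = (X**2 * inside).sum() * h * h * c
EY2 = (Y**2 * inside).sum() * h * h * c

w1 = (EXY - EX * EY) / (EX2 - EX**2)
w0 = EY - w1 * EX
mse = EY2 - EY**2 - w1**2 * (EX2 - EX**2)  # Var(Y) - w1^2 Var(X) for the LMMSE
print("c ~ %.3f, w1 ~ %.3f, w0 ~ %.3f, MMSE ~ %.4f" % (c, w1, w0, mse))
```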
Problem 3 (25 points)
For Problem 2, suppose we want to use a conditional expectation estimator to estimate Y from X.
a) Find the conditional expectation estimator of Y, given by Ŷ = E[Y | X].
b) Find the mean square error of the conditional expectation estimator. Compare this with the result of Problem 2 part b); what can you conclude?
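Not part of the exam: a Monte Carlo sketch comparing the two estimators. It assumes the same region A = {0 ≤ x ≤ 1, 0 ≤ y ≤ 1 − x²} with a constant density, in which case Y given X = x is uniform on [0, 1 − x²] and E[Y | X = x] works out to (1 − x²)/2; the sampling scheme and names are mine.

```python
# Illustrative Monte Carlo check for Problem 3 (not the official solution).
# Assumes A = {0 <= x <= 1, 0 <= y <= 1 - x^2} with constant density, so that
# Y | X = x is uniform on [0, 1 - x^2] and E[Y | X = x] = (1 - x^2) / 2.
import numpy as np

rng = np.random.default_rng(1)
N = 1_000_000
# Rejection-sample (X, Y) uniformly over the region A.
xs = rng.uniform(0, 1, N)
ys = rng.uniform(0, 1, N)
keep = ys <= 1 - xs**2
X, Y = xs[keep], ys[keep]

cond_mean = (1 - X**2) / 2                  # conditional-expectation estimator
mse_cond = np.mean((Y - cond_mean) ** 2)

# Linear estimator fitted from the same samples (least squares), for comparison.
w1, w0 = np.polyfit(X, Y, 1)
mse_lin = np.mean((Y - (w1 * X + w0)) ** 2)

print("MSE of E[Y|X]: %.4f   MSE of best linear fit: %.4f" % (mse_cond, mse_lin))
```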
Problem 4 (25 points)
a) In Fig 2, there are a triangular region A and a square region B on the plane. We can define a
function f (x1 , x2 ) on the entire plane as
f(x1, x2) = 1 if (x1, x2) ∈ A ∪ B, and f(x1, x2) = 0 otherwise,    (4)
where A ∪ B denotes “A or B.” Can f(x1, x2) be implemented with a neural network with no more than 4 layers (i.e., an input layer, no more than two hidden layers, and an output layer)? If yes, show how and give the neural network weights (you can use φ(t) as the non-linear
function in the network); if not, explain why not. Here, φ(t) = 0, when t < 0, and φ(t) = 1
when t ≥ 0.
b) Suppose x1, x2, . . . , xn are i.i.d. observations of a two-dimensional random variable x, with
x = [x(1) , x(2) ]. Suppose x(1) and x(2) are uniform random variables in intervals [a, b] and
[c, d], respectively, and x(1) and x(2) are independent of each other. Find the maximum
likelihood estimate of parameters a, b, c, d.
Figure 2: Regions A and B for Problem 4.
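Not part of the exam: a short check for Problem 4(b). For i.i.d. uniforms the likelihood (b−a)^(−n)(d−c)^(−n) grows as each interval shrinks, so the MLEs are the tightest intervals containing the data, i.e. the sample minima and maxima; the numbers below are illustrative only.

```python
# Illustrative check for Problem 4(b) (not the official solution): the MLEs of
# uniform endpoints are the sample min and max of each component.
import numpy as np

rng = np.random.default_rng(2)
x1 = rng.uniform(-3.0, 5.0, size=200)   # component x(1) ~ U[a, b]
x2 = rng.uniform(10.0, 12.0, size=200)  # component x(2) ~ U[c, d]

a_hat, b_hat = x1.min(), x1.max()
c_hat, d_hat = x2.min(), x2.max()
print("a,b estimates: %.3f %.3f   c,d estimates: %.3f %.3f"
      % (a_hat, b_hat, c_hat, d_hat))
```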
Lecture 3

Outline: review of what we did so far, continue with probability, and a HW problem.

Quick review. So far we have discussed probability and random variables. A probability model is an abstraction of an experiment: S is the sample space (the set of all outcomes); an event A is a subset of S; and all the events are collected in a set F. The probability P assigns a number to every event, with P(S) = 1, P(A) ≥ 0 for every event A, and additivity: P(A ∪ B) = P(A) + P(B) when A ∩ B = ∅. Together (S, F, P) is called a probability model (or probability space).

Bayes / inverse inference. Inference means going from something we know to something we don't know; the conditional probabilities P(A | B) are collected ahead of time. From P(A | B) = P(A ∩ B) / P(B), Bayes' rule inverts the conditioning: P(B | A) = P(A | B) P(B) / P(A).
Random variable: definition. Basic idea: take the original sample space and map each outcome to a point on the real line; this mapping is called a random variable. The mapping must be measurable: every interval on the real line corresponds to an event in the original space, so the probability of the interval can be computed from the probability model. Today: how to do probability calculations using a random variable.
Why probability? This class is based on probability: classification, regression (prediction), and optimal control / optimal sequential decision making are all built on random variables. Probability also underlies data exploration: if each data vector has, say, a hundred dimensions, it is impossible for a person to look at all of them, so we have to get some summary out of the data. Those summaries are statistics, and the statistics that are useful are themselves based on probability.
How to calculate probability. Random variables divide into two cases: discrete (e.g., the roll of a die) and continuous (e.g., the temperature in MKE in February).

Discrete case: we use a probability distribution (PD), which is basically a list. Die example: the values 1, 2, 3, 4, 5, 6 each have probability 1/6. For the event E = {2, 4, 6}, P(E) = 1/6 + 1/6 + 1/6 = 1/2. General formula: for an event A, P(A) = P(X ∈ A) = Σ_{x_i ∈ A} p_i, the sum of all the probabilities in the list whose x_i lie in A. Given the PD we can always calculate the probability of an event by adding the individual probabilities. If the random variable is continuous, we take an integral instead.
Continuous random variable. Example: the temperature in MKE. It can take values anywhere in an interval, so it cannot be described by a list; instead we use a probability density function (pdf) p(x). (Quoted in class: "A probability distribution is a list of outcomes and their associated probabilities. A function that represents a discrete probability distribution is called a probability mass function. A function that represents a continuous probability distribution is called a probability density function.")

The meaning of density: for a small interval I_x around x, P(X ∈ I_x) ≈ p(x) · |I_x|, where |I_x| is the size of the interval. To calculate the probability of an event A we integrate the density: P(A) = P(X ∈ A) = ∫_A p(x) dx. The density needs to be equal to or bigger than zero, but we don't know how big it can get, because it is a density rather than a probability; its integral over the whole line must be 1, because that is the probability of the entire sample space. The (cumulative) distribution function is capital P: P(x) = P(X ≤ x) = ∫_{−∞}^{x} p(u) du; we use a different dummy variable inside the integral because x appears in the limit. If you know p(x) and want P(x), you integrate; if you know P(x) and want p(x), you differentiate.
Relationship between the two: p(x) = dP(x)/dx. There is a technical reason the density function doesn't always exist: it is based on a derivative, and the derivative of a function doesn't always exist. The distribution function P(x) (CDF), on the other hand, always exists. Example: a Gaussian random variable X (e.g., the return on an investment) has probability density function p(x) = (1/(√(2π) σ)) e^{−(x−m)²/(2σ²)}, with mean m and standard deviation σ (σ sits outside the square root).
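A short numerical illustration (mine, not from the notes) of the relationship just described: integrating the density gives the distribution function, and differentiating the distribution function recovers the density. The Gaussian used here is only an example.

```python
# Illustration (not from the notes): P(x) = integral of p up to x, and
# p(x) = dP(x)/dx, shown numerically for a Gaussian density.
import numpy as np

m, sigma = 1.0, 2.0
x = np.linspace(-10, 12, 4001)
p = np.exp(-((x - m) ** 2) / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)

P = np.cumsum(p) * (x[1] - x[0])        # CDF by numerical integration
p_back = np.gradient(P, x)              # density recovered by differentiation

print("P(infinity) ~", P[-1])                         # should be ~1
print("max |p - dP/dx| ~", np.abs(p - p_back).max())  # should be small
```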
Multiple random variables. Now we move to a really important subject: when we have two or more random variables. Example: X = the price of an oil stock today, Y = its price tomorrow, with joint pdf p(x, y). For an event A, P((X, Y) ∈ A) = ∬_A p(x, y) dx dy; the interval from the one-dimensional case has become a region in the plane. Example: a two-dimensional (joint) Gaussian with X = temperature today and Y = temperature tomorrow (temperature rather than price, because a price can't be negative); its density involves the inverse and the determinant of the covariance matrix.

Q: what's the point — what can you do with this? A: there is a relation between X and Y, so you can use today's temperature to predict tomorrow's temperature. When you go from one random variable to two or more, the joint density tells us the relationship between x and y, which can be used for prediction. In order to do that we need the conditional pdf, and sometimes also the marginal pdfs: the pdf of X by itself and of Y by itself.
Conditional pdf. What is the conditional pdf? It is like P(A | B) = P(A ∩ B) / P(B), but in density form: the density of X given Y = y is the joint divided by the marginal of Y, p(x | y) = p(x, y) / p(y). This is Bayes' formula for densities. If we know what y equals, we can use that to select one of the curves p(· | y) and use that curve to do the prediction — for example, the most likely temperature tomorrow based on that curve (the actual prediction achieved this way may not be exactly the peak; it is roughly the conditional expectation E).

Q: if you know the joint density, can you find everything else — in particular, how do you find the marginal density from the joint density? A: do what is called marginalization. If you want the density of X, integrate out y: p(x) = ∫ p(x, y) dy; if you want the density of Y, integrate out x: p(y) = ∫ p(x, y) dx. Integrating with respect to a variable removes it from the joint. Statistics built this way are the key to data exploration.
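A small grid-based sketch (mine, not from the notes) of marginalization and conditioning: integrate the joint over y to get p(x), and divide a slice of the joint by the marginal to get p(y | x). The correlated bivariate Gaussian here is only an example.

```python
# Sketch (not from the notes): marginalize and condition a joint density on a grid.
import numpy as np

x = np.linspace(-5, 5, 401)
y = np.linspace(-5, 5, 401)
dx, dy = x[1] - x[0], y[1] - y[0]
X, Y = np.meshgrid(x, y, indexing="ij")

rho = 0.8                                            # example: correlated Gaussian
joint = np.exp(-(X**2 - 2 * rho * X * Y + Y**2) / (2 * (1 - rho**2)))
joint /= joint.sum() * dx * dy                       # normalize so it integrates to 1

p_x = joint.sum(axis=1) * dy                         # marginal: integrate out y
i0 = np.argmin(np.abs(x - 1.0))                      # condition on X ~= 1
p_y_given_x = joint[i0, :] / p_x[i0]                 # p(y | x) = p(x, y) / p(x)

print("integral of p(x):", p_x.sum() * dx)              # ~1
print("integral of p(y|x=1):", p_y_given_x.sum() * dy)  # ~1
print("E[Y | X=1] ~", (y * p_y_given_x).sum() * dy)     # ~ rho * 1 = 0.8
```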
Data exploration. Example: a data set D = {x_1, ..., x_n}, where x_i is the test score for student i. A histogram of the data approximates p(x). (Quoted: "A histogram is an approximate representation of the distribution of numerical data.") We can also calculate statistics such as the maximum or the average; statistics give us a summary of the data. What if you have a very large data set? By the law of large numbers the sample average approaches the mean, so the proper way to understand statistics mathematically is through the expectation of a random variable.
Expectation. For a random variable X with density p(x), E[X] = ∫ x p(x) dx: the values x weighted by the density, i.e., the average. We can also calculate the expectation of a function of X in the same way: E[f(X)] = ∫ f(x) p(x) dx. Special case: the variance of X is E[(X − m)²], where m = E[X] is its mean — we take X minus its mean, square it, and take the expectation; the standard deviation is its square root.

If we have two random variables X and Y with joint density p(x, y), then E[g(X, Y)] = ∬ g(x, y) p(x, y) dx dy. Special case g(x, y) = xy: E[XY], known as the correlation between X and Y. Note: if two random variables are highly correlated you can predict one from the other; if the correlation is low, it is difficult to predict one from the other. Correlation basically tells you whether two things are likely to happen together or not; if two things are not highly correlated, our prediction performance is not going to be good.
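A tiny numerical illustration (not in the notes) of how the sample versions of these quantities are computed from data: averages stand in for expectations, and E[XY] minus the product of the means gives the covariance.

```python
# Illustration (not from the notes): sample estimates of E[X], Var(X), E[XY], Cov(X, Y).
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(0, 1, 100_000)
y = 0.7 * x + rng.normal(0, 0.5, 100_000)   # y is partly predictable from x

EX, EY = x.mean(), y.mean()
EXY = (x * y).mean()                         # correlation E[XY] in the notes' sense
cov = EXY - EX * EY                          # covariance
var_x = (x**2).mean() - EX**2

print("E[XY] ~ %.3f  Cov(X,Y) ~ %.3f  Var(X) ~ %.3f" % (EXY, cov, var_x))
print("best linear predictor slope Cov/Var ~ %.3f (true 0.7)" % (cov / var_x))
```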
Special case g(x, y) = (X − E[X])(Y − E[Y]): this is the covariance of X and Y, Cov(X, Y). Now we can apply all of this to an important probability model: the multivariate Gaussian model. (Quoted: "A multivariate Gaussian refers to the joint distribution of at least two variables. When this joint distribution is Gaussian, it takes as parameters a vector of means for each variable and a variance-covariance matrix, containing the variance of each single variable and the covariances between pairs of variables.")

Multivariate Gaussian. A random vector X is jointly Gaussian if it has the Gaussian density p(x) = (1 / ((2π)^{n/2} |C|^{1/2})) exp(−(1/2)(x − m)^T C^{-1}(x − m)), with mean vector m and covariance matrix C; |C| is the determinant. Abbreviated notation: X ~ N(m, C). The entries of C are C_ij = E[(X_i − m_i)(X_j − m_j)] = Cov(X_i, X_j), the covariance between X_i and X_j (a central second moment, not the correlation E[X_i X_j]).
Extremely helpful for the HW problems: the joint, marginals, and conditionals of a Gaussian. Suppose X has a normal (Gaussian) distribution with mean vector m and covariance matrix C, X ~ N(m, C), partitioned into blocks X = [X1; X2], m = [m1; m2], C = [[C11, C12], [C21, C22]]. What are the marginals p(x1) and p(x2), and what is the conditional p(x1 | x2)?

Marginals first. To find p(x1) we just need to integrate out x2: p(x1) = ∫ p(x1, x2) dx2. From the book: if X ~ N(m, C), then X1 ~ N(m1, C11) — the marginal is Gaussian, its mean is the top part of m and its covariance is the corresponding block of C. The challenge is to get the conditional.

Conditionals. We want X1 given X2 = x2 ("one given two"): X1 | X2 = x2 ~ N(m_{1|2}, C_{1|2}), with m_{1|2} = m1 + C12 C22^{-1}(x2 − m2) and C_{1|2} = C11 − C12 C22^{-1} C21. Here C12 = Cov(X1, X2) is the correlation between X1 and X2, and C_{1|2} is the remaining uncertainty about X1 after X2 has been observed.
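A compact numerical sketch (mine, not from the notes) of the block formulas above, with scalar blocks, checked against a Monte Carlo estimate; the particular numbers are arbitrary.

```python
# Sketch (not from the notes): Gaussian conditioning, checked by Monte Carlo.
import numpy as np

m = np.array([1.0, -2.0])
C = np.array([[2.0, 0.8],
              [0.8, 1.0]])
x2_obs = 0.0

# Block formulas for X1 | X2 = x2 (here both blocks are scalars).
m1, m2 = m
C11, C12, C22 = C[0, 0], C[0, 1], C[1, 1]
cond_mean = m1 + C12 / C22 * (x2_obs - m2)
cond_var = C11 - C12 * C12 / C22

# Monte Carlo check: keep samples whose second component is close to x2_obs.
rng = np.random.default_rng(4)
samples = rng.multivariate_normal(m, C, size=2_000_000)
near = samples[np.abs(samples[:, 1] - x2_obs) < 0.05, 0]
print("formula:     mean %.3f  var %.3f" % (cond_mean, cond_var))
print("Monte Carlo: mean %.3f  var %.3f" % (near.mean(), near.var()))
```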
Example: estimation based on noisy observations. Three random variables: a signal X and noises W1, W2, all independent of each other (e.g., zero-mean Gaussians), with observations Y1 = X + W1 and Y2 = X + W2. The problem: estimate X from the noisy observations Y1 and Y2. We can't calculate X itself, because X itself is a random variable, but we can calculate the expectation of X given Y1 and Y2, E[X | Y1, Y2]. This is where we can apply what we have learned: the conditional of a jointly Gaussian vector.
Review of what we have done last time (before the HW on classification). Random variables come in two kinds. Discrete: we use a probability distribution (PD), a list of probabilities p_i (you can also use a pdf/PDF in the discrete case, but people always use the list). Continuous: we use the probability density function (pdf) p(x), which is similar to a PD except the list has become a continuous function: p(x) = lim P(X ∈ I_x) / |I_x|, where |I_x| is the size of a small interval around x and the size goes to zero. Note: for continuous random variables people always use the density, but in some applications (such as in the HW) people use the probability distribution function P(X ≤ x).

In terms of probability calculations: for the discrete case, to calculate the probability of an event we convert it into P(X ∈ A) = Σ_{x_i ∈ A} p_i. For a continuous random variable, P(event) = P(X ∈ A) = ∫_A p(x) dx; you can think of the integral as a continuous sum over A.
Joint and conditional pdf. Assume for now that X and Y are both continuous. The joint probability density function of the two random variables is p(x, y); the probability of any event involving the two random variables is obtained by integrating the density over a correctly defined region A: P((X, Y) ∈ A) = ∬_A p(x, y) dx dy. For the conditional: the pdf of X given Y = y is, by Bayes' formula, just the joint divided by the marginal, p(x | y) = p(x, y) / p(y). How to get the marginal p(y)? By integrating out x: p(y) = ∫ p(x, y) dx.

Expectation: E[X] = ∫ x p(x) dx (x weighted by its density), E[g(X)] = ∫ g(x) p(x) dx for a function of X, and E[g(X, Y)] = ∬ g(x, y) p(x, y) dx dy for a function of two variables.
An important example that puts it all together: the multivariate Gaussian (normal) random vector. X is an n-dimensional vector with mean vector m and covariance matrix C, whose entry in row i and column j is C_ij = E[(X_i − m_i)(X_j − m_j)]. For a multivariate Gaussian random vector the probability density function is completely determined by m and C, so we say X is MVG (multivariate Gaussian) or MVN (multivariate normal) with parameters m and C, written X ~ N(m, C), with density p(x) = (1/((2π)^{n/2} |C|^{1/2})) exp(−(1/2)(x − m)^T C^{-1}(x − m)). Here |C| is the determinant of C, and the quadratic form (x − m)^T C^{-1}(x − m) can be read as a modified (squared) norm; a norm is a measure of the length of a vector. (Reference: https://youtu.be/UtNPmGw60jg)

After partitioning X = [X1; X2], we can find the marginal density: the marginal of the first part is Gaussian with the mean of the first part, X1 ~ N(m1, C11). We get it by integrating the joint over all the dimensions of x2 (note: if we want x1, we do the integral over x2).
Conditional: X1 | X2 = x2 ~ N(m_{1|2}, C_{1|2}), where m_{1|2} = m1 + C12 C22^{-1}(x2 − m2) and C_{1|2} = C11 − C12 C22^{-1} C21. You can think of C_{1|2} as the uncertainty in X1 given that X2 has already been observed: it equals the original uncertainty in block 1 (C11) minus a term built from the cross-covariance block.
HW: the "like it / dislike it" problem. You just need to identify A and B: A is the observation (what the person says — "like it" or "don't like it"), and B is the event that they actually like it. Given in the problem: P(B) = 0.6 and P(A | B) = 0.8 (so the complementary statement has probability 0.2 given B). P(A) is not given, but it can be found using the total probability formula, P(A) = P(A | B) P(B) + P(A | B̄) P(B̄); then Bayes' rule gives the quantity we are after, P(B | A) = P(A | B) P(B) / P(A).
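A one-screen sketch (mine) of the calculation pattern in this example: the total probability formula gives the unconditioned probability of the observation, and Bayes' rule then inverts the conditioning. The value of P(A | B̄) below is a placeholder, not necessarily the one in the homework.

```python
# Sketch (not the homework's exact numbers): total probability + Bayes' rule.
# B = "actually likes it", A = the observed statement.
P_B = 0.6             # prior that the person likes it
P_A_given_B = 0.8     # probability of the observation if they like it
P_A_given_notB = 0.3  # placeholder: probability of the observation if they don't

P_A = P_A_given_B * P_B + P_A_given_notB * (1 - P_B)   # total probability
P_B_given_A = P_A_given_B * P_B / P_A                  # Bayes' rule

print("P(A) = %.3f,  P(B | A) = %.3f" % (P_A, P_B_given_A))
```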
Worked HW-style examples (only partly legible in the scan): computing probabilities from a given density by definition, P(X ≤ a) = ∫_{−∞}^{a} p(x) dx, with events such as {X ≤ Y/2} handled the same way by integrating the joint density over the corresponding region; finding the conditional density p(x | y) when the joint density is constant on a region A, by dividing the joint by the marginal p(y); and checking independence, i.e., whether p(x, y) = p_X(x) p_Y(y) for all x and y, by computing both marginals and comparing their product with the joint.
Classifier example: big fish vs. small fish. The measurement x is the length (height) of the fish, and the classifier declares "big" or "small" by comparing x to a threshold (here 3). B stands for "big fish" and S for "small fish." An error (mistake) happens when the declared class differs from the true one, and its probability is P(e) = P(e | B) P(B) + P(e | S) P(S), where P(e | B) and P(e | S) are found by integrating the class-conditional densities over the region where the classifier decides the other class.
Background for the estimation question: how to estimate X from Y1 = X + W1 and Y2 = X + W2, and how well we do with one observation versus two. X ~ N(0, σ_x²) and the noises W_i ~ N(0, σ_w²) are independent, so X together with the observations is multivariate Gaussian, and we only need the means and covariances: E[Y_i] = E[X] + E[W_i] = 0; Var(Y_i) = E[(X + W_i)²] = E[X²] + 2E[X W_i] + E[W_i²] = σ_x² + σ_w² (the cross term vanishes by independence); Cov(X, Y_i) = E[X(X + W_i)] = E[X²] = σ_x²; and Cov(Y1, Y2) = E[(X + W1)(X + W2)] = σ_x². Plugging these blocks into the Gaussian conditioning formula gives the conditional mean m_{x|y} = m_x + C_{xy} C_{yy}^{-1}(y − m_y), which for one observation reduces to (σ_x²/(σ_x² + σ_w²)) y, and the conditional variance C_{x|y} = σ_x² − C_{xy} C_{yy}^{-1} C_{yx}, which is smaller with two observations than with one.
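A short sketch (mine, not from the notes) of this setup: X and the noises are independent zero-mean Gaussians, Y_i = X + W_i, and the conditional-mean estimate follows from the covariance blocks written above. The particular variances are illustrative.

```python
# Sketch (not from the notes): MMSE estimate of X from noisy observations Y = X + W.
import numpy as np

sig_x, sig_w = 2.0, 1.0
rng = np.random.default_rng(5)
N = 500_000
x = rng.normal(0, sig_x, N)
y1 = x + rng.normal(0, sig_w, N)
y2 = x + rng.normal(0, sig_w, N)

# One observation: E[X | Y1] = sig_x^2 / (sig_x^2 + sig_w^2) * y1.
gain1 = sig_x**2 / (sig_x**2 + sig_w**2)
mse1 = np.mean((x - gain1 * y1) ** 2)

# Two observations: Cxy Cyy^{-1} [y1, y2]^T with Cxy = [sx2, sx2] and
# Cyy = [[sx2 + sw2, sx2], [sx2, sx2 + sw2]].
Cxy = np.array([sig_x**2, sig_x**2])
Cyy = np.array([[sig_x**2 + sig_w**2, sig_x**2],
                [sig_x**2, sig_x**2 + sig_w**2]])
w = np.linalg.solve(Cyy, Cxy)                   # estimator weights
mse2 = np.mean((x - (w[0] * y1 + w[1] * y2)) ** 2)

print("MSE with one observation: %.3f   with two: %.3f" % (mse1, mse2))
```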
Fish-length example. Let L be the length of the fish. Given that the fish is small, L is uniform on one interval, and given that it is big, L is uniform on another (roughly U(2, 5) in the notes), with prior probabilities P(small) and P(big). We declare a fish to be big if its length is longer than 3. The probability of error for a small fish involves P(L ≥ 3 | small) P(small), computed from the uniform density, and by Bayes' rule the probability that a fish is small given that its length exceeds 3 is P(small | L ≥ 3) = P(L ≥ 3 | small) P(small) / P(L ≥ 3), where the denominator comes from the law of total probability: P(L ≥ 3) = P(L ≥ 3 | small) P(small) + P(L ≥ 3 | big) P(big).
Problem 1.
p(x) = βe^(−βx) if x ≥ 0, and 0 otherwise.

(a) We need to solve β_MLE = argmax_β Π_{i=1}^{n} p(x_i) = argmax_β ln Π_{i=1}^{n} p(x_i). The log-likelihood function is
l(β) = Σ_{i=1}^{n} [ln β − β x_i].
Differentiating it, we get
l'(β) = Σ_{i=1}^{n} [1/β − x_i] = 0,
so n/β = Σ_{i=1}^{n} x_i, and therefore
β_MLE = n / Σ_{i=1}^{n} x_i.

(b) For the MAP estimate we maximize the posterior, i.e. the likelihood times the prior p(β) = (1/a)e^(−β/a) for β ≥ 0:
ln[Π_{i=1}^{n} p(x_i) · p(β)] = n ln β − β Σ_{i=1}^{n} x_i − β/a + const.
Setting the derivative with respect to β to zero gives n/β − Σ_{i=1}^{n} x_i − 1/a = 0, so
β_MAP = n / (Σ_{i=1}^{n} x_i + 1/a).
Problem 4.
We can define the function
f(x1, x2) = 1 if (x1, x2) ∈ A ∪ B, and 0 otherwise,
where A is a triangular region and B is a square region as shown in the figure: B is the square with corners (1,1), (1,2), (2,1), (2,2), and A is the triangle with vertices (0,0), (1,0), (0,1). We have to implement f(x1, x2) with a neural network having 4 layers: an input layer, two hidden layers, and an output layer, using the step nonlinearity φ(t) = 0 for t < 0 and φ(t) = 1 for t ≥ 0.

For region A the boundaries are x1 ≥ 0, x2 ≥ 0, and x1 + x2 ≤ 1, so the first hidden layer contains
h1 = φ(x1), h2 = φ(x2), h3 = φ(−1·x1 − 1·x2 + 1) = φ(1 − x1 − x2),
i.e., weights [[1, 0], [0, 1], [−1, −1]] and biases [0, 0, 1] applied to (x1, x2).

For the square region B, bounded by x1 ≥ 1, x1 ≤ 2, x2 ≥ 1, x2 ≤ 2, the first hidden layer also contains
h4 = φ(x1 − 1), h5 = φ(−x1 + 2), h6 = φ(x2 − 1), h7 = φ(−x2 + 2).

The second hidden layer ANDs the boundary indicators of each region:
hA = φ(h1 + h2 + h3 − 3), which is 1 only when all three triangle conditions hold;
hB = φ(h4 + h5 + h6 + h7 − 4), which is 1 only when all four square conditions hold.

The output layer ORs the two regions:
ŷ = φ(hA + hB − 1),
which equals 1 exactly when (x1, x2) ∈ A ∪ B. The network therefore has an input layer (x1, x2), a first hidden layer (h1, ..., h7), a second hidden layer (hA, hB), and an output layer.

Figure: neural network for f(x1, x2).
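A runnable sketch (mine) of the construction reconstructed above: one step unit per boundary, an AND unit per region in the second hidden layer, and an OR unit at the output. The specific thresholds are one valid choice for this φ, not necessarily the grader's.

```python
# Sketch of the Problem 4(a) construction (one valid set of weights, not
# necessarily the one expected by the grader): phi is the unit step.
import numpy as np

phi = lambda t: (np.asarray(t) >= 0).astype(float)

def f_net(x1, x2):
    # First hidden layer: one step unit per half-plane boundary.
    h = phi(np.array([
        x1,              # x1 >= 0        (triangle A)
        x2,              # x2 >= 0        (triangle A)
        1 - x1 - x2,     # x1 + x2 <= 1   (triangle A)
        x1 - 1,          # x1 >= 1        (square B)
        2 - x1,          # x1 <= 2        (square B)
        x2 - 1,          # x2 >= 1        (square B)
        2 - x2,          # x2 <= 2        (square B)
    ]))
    # Second hidden layer: AND of the boundary indicators of each region.
    hA = phi(h[0] + h[1] + h[2] - 3)        # fires only if all 3 triangle tests pass
    hB = phi(h[3] + h[4] + h[5] + h[6] - 4) # fires only if all 4 square tests pass
    # Output layer: OR of the two regions.
    return phi(hA + hB - 1)

for pt in [(0.2, 0.2), (1.5, 1.5), (0.8, 0.8), (3.0, 0.0)]:
    print(pt, "->", f_net(*pt))   # expect 1, 1, 0, 0
```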
Tags:
linear equation
differentiation
integrations
exponential random variable
augmented theory