
Artificial Intelligence Full Course 2024 | AI Tutorial For Beginners | AI Full Course | Intellipaat - Ep42

Imagine we had only one single weight value. Now imagine a neural network which contains thousands of weights. Identifying the direction of change for each and every weight one at a time would obviously take time, because it might happen that if you increase all the weights together, your error does not change: for some weights you had to increase the value, and for some weights you had to decrease it. So to identify the correct direction for each and every weight, we need a mathematical technique which gives us the direction of the weight change accurately. Once I know the direction of the weight change, it is up to me whether I change the weight value by a large percentage or by a small percentage. Let's say I have a technique which tells me: weight one, you have to increase the value; weight two, you have to decrease the value. Now I can go ahead and decide the step size. The learning rate parameter will help me in deciding the step size, whereas gradient descent will help me in deciding the direction of the weight change.

Okay, now let's try to see how backpropagation works, what exactly happens. Let's take an example. We have an actual value, a weight, and a predicted value, and I will track two errors: the plain error and the square error. Let's say my actual value is 10. When I take a weight of zero, my predicted value becomes zero, so my error becomes 10 and my square error becomes 100. Once again my actual is 10; when my weight becomes one, my predicted becomes five, so my error becomes five and my square error becomes 25. When the weight becomes two, the predicted becomes 10, the error becomes zero, and the square error also becomes zero. When the weight becomes three, the predicted becomes 15, the error becomes five once again, and the square error becomes 25 once again. And if I have a weight of four, the predicted becomes 20, the error becomes 10, and the square error becomes 100. So the table looks like this (all of this is an assumption only; I'm not taking any data, I'm just creating one example for you):

actual | weight | predicted | error | square error
10     | 0      | 0         | 10    | 100
10     | 1      | 5         | 5     | 25
10     | 2      | 10        | 0     | 0
10     | 3      | 15        | 5     | 25
10     | 4      | 20        | 10    | 100

Now, if you see what the square error is doing, the square error is giving me a much higher variation of the error value, so the plot of the square error will be a smooth U-shaped curve. In this particular example the plain error still looks similar, but when you are working with problems which might be nonlinear in nature, your absolute error might look like straight lines. So your error function can either be a U-shaped curve, if you are taking the square, or, if you are taking the absolute value, it can look like straight lines meeting in a V. In the U-shaped curve it is easy for you to go ahead and find the minimum of the error curve: if I'm sitting at any point on it, I know that from here I have to reach the bottom, somewhere over here. (Two short sketches follow: one reproducing this table, one comparing the two error curves.)
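To make the worked example concrete, here is a minimal sketch in Python that reproduces the table above. The rule predicted = 5 * weight is just the relationship implied by the numbers in this toy example, not something derived from real data.

```python
# Reproduce the worked example: the actual value is fixed at 10,
# and in this toy setup the predicted value is 5 * weight.
actual = 10

print(f"{'weight':>6} {'predicted':>9} {'error':>6} {'sq_error':>8}")
for weight in range(5):
    predicted = 5 * weight                 # toy model: prediction scales with the weight
    error = abs(actual - predicted)        # plain (absolute) error
    sq_error = (actual - predicted) ** 2   # square error
    print(f"{weight:>6} {predicted:>9} {error:>6} {sq_error:>8}")
```

Running this prints exactly the five rows of the table, with the error bottoming out at weight = 2.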
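The reason the square error is easier to work with can also be seen numerically. This sketch, which assumes the same toy relationship predicted = 5 * w, compares the slopes of the two error curves using their standard derivatives: the slope of the squared error shrinks as you approach the minimum, so it tells you how far you still have to go, while the slope of the absolute error stays constant everywhere.

```python
# Slopes of the two error curves from the example, with actual = 10
# and predicted = 5 * w:
#   square error   E(w) = (10 - 5w)^2  ->  dE/dw = -10 * (10 - 5w)
#   absolute error A(w) = |10 - 5w|    ->  dA/dw = -5 * sign(10 - 5w)

def squared_error_grad(w):
    return -10 * (10 - 5 * w)

def absolute_error_grad(w):
    residual = 10 - 5 * w
    return -5 * (1 if residual > 0 else -1 if residual < 0 else 0)

for w in [0.0, 1.0, 1.9, 1.99]:  # walking toward the minimum at w = 2
    print(f"w={w:5.2f}  squared slope={squared_error_grad(w):8.2f}  "
          f"absolute slope={absolute_error_grad(w):6.2f}")
```

The squared-error slope goes from -100 toward 0 as w approaches 2, while the absolute-error slope stays at -5 the whole way, which is exactly why the absolute error cannot tell you how close you are to the minimum.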
But if my error function had not been squared, if instead I was sitting somewhere on those straight lines, I would not know the extent of how far I have to go in order to reach the minimum of the curve. This will cost you more iterations, and for nonlinear and more complicated problems you may never be able to find the minimum of the error. That is why we use the square of the error; I explained this yesterday or the day before yesterday as well. So that is the logic of using the square error. Is that clear to you now?

All right, so let's try to see what happens in backward propagation. One thing should be clear to you by now: for this error curve, on the x-axis we will have the weights of the model. Let's say I have three weights, W1, W2, W3; or, to keep it simple, let's look at an example which contains only a single weight. So my weight value will be on the x-axis, and corresponding to each weight value you will have an error. If you look at the previous example, it is only my weight value that is actually changing: I started from a weight value of zero, then one, two, three, and four. You can see that when my weight value was zero, my error was quite high, close to 100. When my weight value became one, my error decreased to 25. As soon as my weight value became two, my error became zero. Then when I increased the weight value again, the error became 25 once again, and when I made it four, the error became 100 once again. So it is the value of the weight that is driving the error, and that is what we are trying to minimize here.

Now, the good part with the U-shaped curve is that wherever you are sitting on it, you know that you have to go down, and you know how to go down: that is gradient descent. If you're sitting somewhere on the left side of the curve, you know that you have to increase the weight; if you're sitting somewhere on the right side, you know you have to decrease the weight. (A small gradient descent sketch for this single-weight example follows below.)
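To close the loop, here is a minimal gradient descent sketch for this single-weight example. The sign of the gradient gives the direction of the weight change (a negative slope means increase the weight, a positive slope means decrease it), and the learning rate sets the step size. The starting weight of 0.0 and the learning rate of 0.01 are illustrative choices, not values from the lecture.

```python
# Gradient descent on the square error E(w) = (10 - 5w)^2 from the example.
# The slope is dE/dw = -10 * (10 - 5w); the minimum is at w = 2, where E = 0.

learning_rate = 0.01   # decides the step size
w = 0.0                # arbitrary starting point on the left of the curve

for step in range(10):
    grad = -10 * (10 - 5 * w)        # slope of the error curve at the current w
    w = w - learning_rate * grad     # move against the slope, i.e. downhill
    error = (10 - 5 * w) ** 2
    print(f"step {step}: w = {w:.4f}, square error = {error:.4f}")
```

Starting from w = 0 the slope is negative, so each update increases the weight toward 2; start from w = 4 instead and the slope comes out positive, so the same update rule decreases the weight, which is exactly the direction behavior described above.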

