cinera_handmade.network/cmuratori/hero/code/code101.hmml

[video output=day101 member=cmuratori stream_platform=twitch stream_username=handmade_hero project=code title="The Inverse and the Transpose" vod_platform=youtube id=PPDAqEJvUUQ annotator=dspecht annotator=Miblo]
[0:46][Recap our current situation]
[1:45][Start thinking about how we'll transform Normals]
[3:51][Blackboard: Rotating Normals (is pretty straightforward)]
[5:01][Blackboard: Non-uniform scaling gets a little bit hairy]
[6:17][Blackboard: Pretending we have the edge of a gem]
[8:41][Blackboard: Vectors are not all the same]
[11:23][Blackboard: Normals are written differently in linear algebra]
[12:46][Blackboard: Sometimes you have to go down a math hole][quote 66]
[13:03][Blackboard: Constructing the P vectors]
[14:17][Blackboard: Matrix multiplication]
[24:18][Blackboard: Transpose operation]
[26:19][Blackboard: Inverse operation]
[30:24][Blackboard: Gauss steps in with Gaussian Elimination]
[35:26][Blackboard: Solving equations in multiple unknowns]
[36:33][Blackboard: We're just entirely in the math hole][quote 66]
[37:17][Blackboard: You can add or multiply any two equations with equal terms for free]
[39:02][Blackboard: We can regularise operations on the rows and columns of this matrix]
[39:49][Blackboard: Use numbers to clearly demonstrate Gaussian Elimination]
[41:12][Blackboard: This is not JZ's term][quote 67]
[41:27][Blackboard: You can then divide the last remaining term by whatever the target is]
[43:10][Blackboard: We want to be able to multiply a matrix by something in order to produce that identity matrix]
[44:38][Blackboard: The regular solution form for Gaussian Elimination]
[58:12][Blackboard: Unfortunately we didn't quite get to the inverse transpose]
[1:02:26][Q&A][quote 65]
[1:03:06][@stelar7][For inverting 2x2 matrices, a simple cofactor equation is quite efficient]
[1:03:26][@mr4thdimention][All of the steps in elimination can be represented as a matrix. Starting with M and multiplying by all those matrices, you get the identity. So those matrices multiplied together are the inverse. Multiplying them all by the identity gives you the inverse. Is that the trick you are alluding to?]
[1:07:47][@robotchocolatedino][To find the inverse matrix couldn't we just rotate by the negative angle and scale by 1 over the amount that we scaled by originally?]
[1:17:32][@oliholli][I remember doing this by putting the identity beside the original matrix and performing each step on both]
[1:21:24][@ufphen][If you had a matrix such as int\[2\]\[2\] with the example of abcd as the variables in place, couldn't you just swap a and d and make c and b negative?]
[1:22:31][@miblo][I understood none of today's episode. Would you recommend that I rewatch it, or do you think all may become clear tomorrow?]
[1:23:31][@naysayer88][If you write A and then I next to each other, and apply a bunch of operations, you are computing T3T2T1A and T3T2T1I. Because you ended up at the identity, T3T2T1A = I, so T3T2T1 = A^-1. Therefore T3T2T1I = A^-1]
[1:26:46][Blackboard: Look at all of this maths]
[1:27:30][Blackboard: Setup for Day 102 and sign off]
[/video]
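
Worked example for the normal-transform discussion ([1:45] through [12:46]) and the inverse transpose the episode builds toward at [58:12]: under a non-uniform scale, pushing a normal through the same matrix as the geometry breaks perpendicularity, while pushing it through the inverse transpose preserves it. This is a minimal C++ sketch with illustrative m2x2 / v2 types, not Handmade Hero's actual math code; the 2x2 inverse here is the adjugate form asked about at [1:03:06] and [1:21:24] (swap a and d, negate b and c, then divide by the determinant).

    #include <stdio.h>

    // Illustrative 2x2 matrix and vector types (not the stream's actual code).
    struct m2x2 { float E[2][2]; };  // E[Row][Column]
    struct v2 { float x, y; };

    static v2 MulMV(m2x2 M, v2 V)
    {
        v2 R;
        R.x = M.E[0][0]*V.x + M.E[0][1]*V.y;
        R.y = M.E[1][0]*V.x + M.E[1][1]*V.y;
        return R;
    }

    static m2x2 Transpose(m2x2 M)
    {
        m2x2 R;
        R.E[0][0] = M.E[0][0]; R.E[0][1] = M.E[1][0];
        R.E[1][0] = M.E[0][1]; R.E[1][1] = M.E[1][1];
        return R;
    }

    // 2x2 inverse via the adjugate: swap a and d, negate b and c, divide by the
    // determinant.  Assumes the matrix is invertible (determinant != 0).
    static m2x2 Inverse(m2x2 M)
    {
        float a = M.E[0][0], b = M.E[0][1];
        float c = M.E[1][0], d = M.E[1][1];
        float InvDet = 1.0f / (a*d - b*c);
        m2x2 R;
        R.E[0][0] =  d*InvDet; R.E[0][1] = -b*InvDet;
        R.E[1][0] = -c*InvDet; R.E[1][1] =  a*InvDet;
        return R;
    }

    int main(void)
    {
        // Non-uniform scale: x is untouched, y is squashed to 20%.
        m2x2 M = {{{1.0f, 0.0f}, {0.0f, 0.2f}}};

        v2 Tangent = {1.0f, 1.0f};   // direction along the surface
        v2 Normal  = {-1.0f, 1.0f};  // perpendicular to the tangent

        v2 TTangent = MulMV(M, Tangent);                     // geometry transforms with M
        v2 WrongN   = MulMV(M, Normal);                      // normal through M: no longer perpendicular
        v2 RightN   = MulMV(Transpose(Inverse(M)), Normal);  // normal through (M^-1)^T: stays perpendicular

        printf("dot using M:                 %f\n", TTangent.x*WrongN.x + TTangent.y*WrongN.y);
        printf("dot using inverse transpose: %f\n", TTangent.x*RightN.x + TTangent.y*RightN.y);
        return 0;
    }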
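
Concrete Gaussian elimination in the style of [35:26] through [41:27]: knock the x term out of the second equation by subtracting a multiple of the first (the "for free" row operation from [37:17]), divide the last remaining term by its target, then back-substitute. The function name and layout are assumptions for illustration, not code from the stream.

    #include <stdio.h>

    // Solve the 2x2 system
    //   A00*x + A01*y = b0
    //   A10*x + A11*y = b1
    // by Gaussian elimination.  Assumes A00 != 0 and a unique solution.
    static void Solve2x2(float A00, float A01, float b0,
                         float A10, float A11, float b1,
                         float *x, float *y)
    {
        float Factor = A10 / A00;       // how much of row 0 to subtract from row 1
        float A11p = A11 - Factor*A01;  // row 1 after elimination: A11p*y = b1p
        float b1p  = b1  - Factor*b0;
        *y = b1p / A11p;                // divide the last remaining term by the target
        *x = (b0 - A01*(*y)) / A00;     // back-substitute into row 0
    }

    int main(void)
    {
        // A numeric example of the kind used at [39:49]:
        //   2x + 1y = 5
        //   4x + 3y = 11   ->   x = 2, y = 1
        float x, y;
        Solve2x2(2, 1, 5,
                 4, 3, 11,
                 &x, &y);
        printf("x = %f, y = %f\n", x, y);
        return 0;
    }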
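
Sketch of computing an inverse by Gauss-Jordan elimination on an augmented matrix, following the [24:18] through [44:38] blackboard material and the [1:17:32] / [1:23:31] answers: write [A | I], row-reduce the left half to the identity, and the same operations turn the right half into A^-1 (each row operation is itself a matrix Tk, and if T3T2T1A = I then T3T2T1 = A^-1, which is what accumulates on the right, as asked at [1:03:26]). This version is an illustration only: fixed size, no pivoting, and it assumes every pivot it meets is nonzero.

    #include <stdio.h>

    #define N 2

    static void InvertNxN(float A[N][N], float Inv[N][N])
    {
        // Build the augmented matrix [A | I].
        float Aug[N][2*N];
        for(int r = 0; r < N; ++r)
        {
            for(int c = 0; c < N; ++c)
            {
                Aug[r][c] = A[r][c];
                Aug[r][N + c] = (r == c) ? 1.0f : 0.0f;
            }
        }

        for(int p = 0; p < N; ++p)
        {
            // Scale the pivot row so the pivot becomes 1.
            float Pivot = Aug[p][p];
            for(int c = 0; c < 2*N; ++c)
            {
                Aug[p][c] /= Pivot;
            }

            // Clear the pivot column in every other row.
            for(int r = 0; r < N; ++r)
            {
                if(r != p)
                {
                    float Factor = Aug[r][p];
                    for(int c = 0; c < 2*N; ++c)
                    {
                        Aug[r][c] -= Factor*Aug[p][c];
                    }
                }
            }
        }

        // The right half is now A^-1.
        for(int r = 0; r < N; ++r)
        {
            for(int c = 0; c < N; ++c)
            {
                Inv[r][c] = Aug[r][N + c];
            }
        }
    }

    int main(void)
    {
        float A[N][N] = {{2, 1}, {4, 3}};  // determinant 2, inverse is {{1.5, -0.5}, {-2, 1}}
        float Inv[N][N];
        InvertNxN(A, Inv);
        printf("%.2f %.2f\n%.2f %.2f\n", Inv[0][0], Inv[0][1], Inv[1][0], Inv[1][1]);
        return 0;
    }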
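
On the [1:07:47] question: for a transform that is only a rotation and a scale, you can indeed invert it analytically by undoing each piece in the reverse order, rotating by the negative angle and scaling by the reciprocals. A minimal check under the column-vector convention (scale applied first, then rotation); the types and helpers are illustrative assumptions, not the stream's code.

    #include <math.h>
    #include <stdio.h>

    struct m2x2 { float E[2][2]; };  // E[Row][Column]

    static m2x2 Mul(m2x2 A, m2x2 B)
    {
        m2x2 R = {};
        for(int r = 0; r < 2; ++r)
            for(int c = 0; c < 2; ++c)
                for(int k = 0; k < 2; ++k)
                    R.E[r][c] += A.E[r][k]*B.E[k][c];
        return R;
    }

    static m2x2 Rotation(float Angle)
    {
        float c = cosf(Angle), s = sinf(Angle);
        m2x2 R = {{{c, -s}, {s, c}}};
        return R;
    }

    static m2x2 Scale(float sx, float sy)
    {
        m2x2 R = {{{sx, 0}, {0, sy}}};
        return R;
    }

    int main(void)
    {
        float Angle = 0.7f;
        float sx = 2.0f, sy = 0.5f;

        // M scales first, then rotates.
        m2x2 M = Mul(Rotation(Angle), Scale(sx, sy));

        // Undo it in the opposite order with the opposite amounts.
        m2x2 InvM = Mul(Scale(1.0f/sx, 1.0f/sy), Rotation(-Angle));

        // M * InvM should come out as (approximately) the identity.
        m2x2 I = Mul(M, InvM);
        printf("%f %f\n%f %f\n", I.E[0][0], I.E[0][1], I.E[1][0], I.E[1][1]);
        return 0;
    }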