cinera_handmade.network/cmuratori/hero/code/code166.hmml

[video member=cmuratori stream_platform=twitch stream_username=handmade_hero project=code title="Adding Locks to the Asset Operations" vod_platform=youtube id=jIWa0AZz2Sk annotator=Miblo annotator=debiatan]
[0:38][Recap and plan for the day]
[1:23][We'll finish the GenerationID work we started in the previous episode]
[2:00][We need to access the linked list through locks]
[4:06][We also want to avoid simultaneous AcquireAssetMemory calls]
[5:37][BeginTaskWithMemory is not protected against concurrent calls either]
[6:45][Background tasks should not spawn other background tasks]
[8:13][We don't want every LoadBitmap call to have its own thread; only those associated with the main thread's render_group should]
[11:12][Parameterizing LoadBitmap to exhibit immediate and deferred behaviors]
[12:42][Dividing LoadAssetWork into immediate and deferred portions]
[18:18][Deciding on how to lock access to AcquireAssetMemory and AddAssetHeaderToList. Do we want one or two locks?]
[25:24][Let's try with just one lock]
[29:59][Review of GetAsset and AcquireAssetMemory locks]
[32:06][Implementing BeginAssetLock and EndAssetLock]
[37:28][Testing the locking of linked lists]
[38:29][Finishing the GenerationID in-flight asset tracking from the past episode]
[42:34][Implementing NewGenerationID: Using AtomicIncrement to avoid returning the same GenerationID to two threads]
[43:13][Star Trek: The Next Generation ID][quote 167]
[45:29][Correctly use __sync_fetch_and_add for those of us on "Lunix"][quote 168]
[50:30][Testing it]
[51:07][Making sure we don't evict assets with in-flight GenerationIDs by keeping a list]
[54:12][AssetLock instead of AtomicIncrement inside NewGenerationID to protect both the GenerationID and the InFlightGenerations list]
[55:44][Implementing GenerationHasCompleted]
[59:02][What gets rid of the render_groups?]
[1:01:59][Implementing FinishRenderGroup]
[1:03:40][We still need to thoroughly test today's code]
[1:03:41][NOTE: (There are ten more minutes of programming in the answer to Q:1:21:19)]
[1:04:51][Q&A][:speech]
[1:05:30][@TheSizik][__sync_add_and_fetch returns the new value]
[1:05:43][@mmozeiko][Please don't cast Value to (long*) for __sync_fetch_and_add; it will generate wrong code on 64-bit Linux/OSX]
[1:06:17][@RobotChocolateDino][What's the advantage of calling load bitmaps from other threads? Wouldn't it be better to just have PushBitmap fail when called from other threads so that there are no assets missing from the ground chunks and so that all the bitmap memory could be acquired on the main thread? The ground chunks could probably wait one frame to have their assets loaded if they are prefetched ahead of time]
[1:07:23][@powerc9k][Is the Github repo online and if so how does one gain access?]
[1:07:55][@OsmanTheBlack][Will you get rid of stdint ever?]
[1:08:42][@Brotorias][Is "volatile" actually needed in your compare-and-exchange?]
[1:09:36][@jessem3y3r][How difficult would it be to have the letter particles fall independently of the hero?]
[1:17:20][@Stephenlast][Don't you need to move all ground chunk work into the separate thread? ATM it looks like it's only actually doing render to output in the task]
[1:18:23][@OsmanTheBlack][Why are you using u64 instead of size_t for buffer sizes?]
[1:19:21][@AlejRad][Why are you using Windows?]
[1:21:19][@Stephenlast][I only ask because for now it seems like you will be stalling for that LoadBitmap on the main thread]
[1:34:59][@mvargasmoran][How difficult would it be to make this boot on its own on a Raspberry Pi or something like that?]
[1:35:11][@OsmanTheBlack][load_asset_work: u64, not size_t]
[1:36:08][@RobotChocolateDino][Would dedicating one thread to asset loading, with an atomic queue, be a bad idea?]
[1:37:39][Wind it down][:speech]
[/video]
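
The single-lock approach weighed at [18:18] and settled on at [25:24] boils down to one spinlock that guards both AcquireAssetMemory and the asset header linked list. Below is a minimal sketch of that idea in C, using the GCC/Clang __sync builtins that also come up in the Q&A; the struct layout and helper names are illustrative assumptions, not the stream's exact code.

#include <stdint.h>

typedef struct game_assets
{
    uint32_t OperationLock; // 0 = unlocked, 1 = locked; taken around every asset operation
    // ... asset memory blocks, headers, LRU sentinel, etc.
} game_assets;

static void
BeginAssetLock(game_assets *Assets)
{
    // Spin until we observe 0 and manage to swap in a 1 (full barrier on success).
    while(!__sync_bool_compare_and_swap(&Assets->OperationLock, 0, 1))
    {
        // Optionally pause or yield here to be kinder to the core.
    }
}

static void
EndAssetLock(game_assets *Assets)
{
    // Write 0 with release semantics so prior writes land before the lock drops.
    __sync_lock_release(&Assets->OperationLock);
}

With a single lock, every path that allocates asset memory or splices the header list simply brackets itself with BeginAssetLock(Assets); ... EndAssetLock(Assets);, which is the trade-off reviewed at [29:59].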
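
Building on the lock sketch above, the generation-ID bookkeeping from [42:34] through [55:44] can be sketched the same way: NewGenerationID hands out unique IDs under the asset lock (the change made at [54:12] in place of a bare AtomicIncrement) and records them as in flight, FinishGeneration drops an ID once its render group is done (the FinishRenderGroup work at [1:01:59]), and GenerationHasCompleted refuses to call a generation finished while anything at or below it is still in flight, which is what keeps in-flight assets from being evicted ([51:07]). The field names, the array size, and the exact comparison here are assumptions for illustration, not the stream's code.

#define MAX_IN_FLIGHT_GENERATIONS 16

typedef struct asset_generations
{
    uint32_t NextGenerationID;
    uint32_t InFlightGenerationCount;
    uint32_t InFlightGenerations[MAX_IN_FLIGHT_GENERATIONS];
} asset_generations;

static uint32_t
NewGenerationID(game_assets *Assets, asset_generations *Gen)
{
    // The asset lock protects both the counter and the in-flight list.
    BeginAssetLock(Assets);
    uint32_t Result = ++Gen->NextGenerationID;
    if(Gen->InFlightGenerationCount < MAX_IN_FLIGHT_GENERATIONS)
    {
        Gen->InFlightGenerations[Gen->InFlightGenerationCount++] = Result;
    }
    EndAssetLock(Assets);
    return(Result);
}

static void
FinishGeneration(game_assets *Assets, asset_generations *Gen, uint32_t GenerationID)
{
    // Remove the ID from the in-flight list by swapping in the last entry.
    BeginAssetLock(Assets);
    for(uint32_t Index = 0; Index < Gen->InFlightGenerationCount; ++Index)
    {
        if(Gen->InFlightGenerations[Index] == GenerationID)
        {
            Gen->InFlightGenerations[Index] =
                Gen->InFlightGenerations[--Gen->InFlightGenerationCount];
            break;
        }
    }
    EndAssetLock(Assets);
}

static int
GenerationHasCompleted(game_assets *Assets, asset_generations *Gen, uint32_t CheckID)
{
    // Conservative check: any in-flight generation at or below CheckID might
    // still be using assets last tagged with CheckID, so they must stay resident.
    int Result = 1;
    BeginAssetLock(Assets);
    for(uint32_t Index = 0; Index < Gen->InFlightGenerationCount; ++Index)
    {
        if(Gen->InFlightGenerations[Index] <= CheckID)
        {
            Result = 0;
            break;
        }
    }
    EndAssetLock(Assets);
    return(Result);
}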
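
The two Q&A notes at [1:05:30] and [1:05:43] are easy to mix up, so here is the distinction in isolation: __sync_fetch_and_add returns the value before the add, __sync_add_and_fetch returns the value after, and both should be handed a pointer of the variable's real type. Casting a 32-bit value's address to (long*) is the bug @mmozeiko warns about, because long is 8 bytes on 64-bit Linux and OSX, so the builtin would read and write four bytes it does not own. This standalone snippet just demonstrates the builtins; it is not code from the stream.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t Value = 41;

    // Returns the old value (41); Value becomes 42.
    uint32_t Before = __sync_fetch_and_add(&Value, 1);

    // Adds first, then returns the new value; Value becomes 43.
    uint32_t After = __sync_add_and_fetch(&Value, 1);

    printf("%u %u %u\n", Before, After, Value); // prints: 41 43 43
    return(0);
}

The builtins operate directly on 1-, 2-, 4-, and 8-byte integral types, so passing &Value as a uint32_t pointer does the right thing on both 32-bit and 64-bit targets; no cast is needed.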