I was recently confronted with the question of how much dynamic memory allocation can affect a program's performance.
I don't want to sound like an old fogey, but back when I was taught C/C++ (between '96 and '99), it was made very clear to me that allocation is not free and that allocating on the fly should be avoided, especially for large data such as images. So I got into the habit of pre-allocating as much of the memory I knew I would need as possible, up front, and then working mainly through pointers.
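To make that habit concrete, here is a minimal sketch of the pre-allocation style I mean (the sizes and the trivial process function are placeholders I made up for illustration):

```cpp
#include <cstddef>
#include <vector>

constexpr std::size_t kWidth         = 640;
constexpr std::size_t kHeight        = 480;
constexpr std::size_t kBytesPerPixel = 4;

// Placeholder for whatever per-frame processing the real program does.
void process(unsigned char* pixels, std::size_t size) {
    for (std::size_t i = 0; i < size; ++i) pixels[i] ^= 0xFF;  // dummy work
}

void play_video_preallocated(std::size_t frame_count) {
    // One allocation up front, reused for every frame.
    std::vector<unsigned char> buffer(kWidth * kHeight * kBytesPerPixel);
    for (std::size_t i = 0; i < frame_count; ++i) {
        // A real decoder would overwrite buffer.data() here instead of allocating.
        process(buffer.data(), buffer.size());
    }
}
```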
Today one of my colleagues showed me his video playback code, which works mainly with references: for each new frame it allocates a new object (i.e. roughly 640x480x4 bytes), destroys it once the frame has been processed, reallocates for the next frame, and so on.
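As I understood it, his approach looks roughly like the following sketch (the Image type here is something I invented just to illustrate the pattern, not his actual class):

```cpp
#include <cstddef>
#include <vector>

// Hypothetical Image type, invented just to illustrate the per-frame allocation.
struct Image {
    std::vector<unsigned char> pixels;
    Image() : pixels(640 * 480 * 4) {}   // roughly 1.2 MB allocated for each frame
};

void play_video_per_frame(std::size_t frame_count) {
    for (std::size_t i = 0; i < frame_count; ++i) {
        Image* frame = new Image();       // fresh heap allocation every frame
        // ... decode into frame->pixels and process it ...
        delete frame;                     // freed before moving on to the next frame
    }
}
```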
Naturally I raised the point with him right away, but he argued that the extra allocation cost was negligible, and indeed some quick, naive "infinite loop" style tests did not contradict him. So even if I don't find it "elegant" to play with memory like that, fine, why not; it still goes against everything I believe.
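The kind of quick, naive test I have in mind was roughly like this (iteration count and output are just for illustration), and I suspect it does not tell us much about a real workload:

```cpp
#include <chrono>
#include <cstdio>

int main() {
    constexpr int kIterations = 100000;
    const auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < kIterations; ++i) {
        unsigned char* frame = new unsigned char[640 * 480 * 4];
        frame[0] = 0;        // the buffer is barely used, unlike in a real program
        delete[] frame;
    }
    const auto elapsed = std::chrono::steady_clock::now() - start;
    std::printf("%lld ms for %d allocate/free cycles\n",
                static_cast<long long>(
                    std::chrono::duration_cast<std::chrono::milliseconds>(elapsed).count()),
                kIterations);
    return 0;
}
```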
So now I am asking myself: did I learn my lessons wrong? Has progress in hardware and compilers eliminated what used to be a real constraint? Or is there a flaw in these allocation tests, and is abusing new/delete really a problem after all? (After all, an infinite "hello world" loop is not the same thing as a real program that shuffles a lot of data around...)
Could you enlighten me on the subject of low-level memory management?