World-writable memory on Samsung Android phones
Posted Dec 27, 2012 20:50 UTC (Thu) by khim
In reply to: World-writable memory on Samsung Android phones
Parent article: World-writable memory on Samsung Android phones
I see a lot of stuff deployed without any consideration of these criteria.
Sure, if it's some kind of in-house program where N will never be more than, say, 1000, then it may work. If you are lucky. If we are talking about commercial software… it just does not fly. I've seen a lot of stuff deployed in such a fashion, but it all tanks when the users really arrive. You may be “first to market” but if your stuff stops working after the 10'000th user (or after the 100'000th user) then you are usually eaten alive by someone else who was second (or sometimes even third). Your only hope at this point is to sell this mess to someone with deep enough pockets and hope that the redesign will survive. It rarely does.
I've seen senior management say explicitly that they want to optimize for programming time over system resource usage, because they perceived hardware as 'cheap' compared to programming manpower; as the company grew, management started to care more about system resources.
Well, that's why I'm talking about back-of-the-envelope calculations. Quite often you only need to know resource usage very roughly. But it's one thing to estimate it imprecisely and explicitly refuse to spend time on too much fine-tuning, and a totally different thing to ignore resource requirements completely and not even know whether a 10 times bigger userbase will need 10 times more servers or 100 times more.
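To make the kind of napkin math I mean concrete, here is a minimal sketch; every number in it (per-user state, RAM per box, the quadratic cache) is invented purely for illustration, not taken from any real system:

    # Rough, illustrative napkin math only -- all numbers are made up
    # to show the kind of estimate being described, not real measurements.

    USERS_NOW = 10_000
    USERS_LATER = 100_000          # the 10x growth we want to survive

    BYTES_PER_USER = 50 * 1024     # assumed resident state per active user
    RAM_PER_SERVER = 8 * 1024**3   # assumed 8 GiB usable per box

    def servers_needed(users, bytes_per_user=BYTES_PER_USER):
        """How many servers if memory grows linearly, O(N), with users."""
        total = users * bytes_per_user
        return -(-total // RAM_PER_SERVER)   # ceiling division

    print(servers_needed(USERS_NOW))    # 1
    print(servers_needed(USERS_LATER))  # still 1: 10x users -> 10x memory

    # Contrast with something accidentally O(N**2), e.g. a fully
    # materialized all-pairs cache kept in memory:
    def servers_needed_quadratic(users, bytes_per_pair=16):
        total = users * users * bytes_per_pair
        return -(-total // RAM_PER_SERVER)

    print(servers_needed_quadratic(USERS_NOW))    # 1
    print(servers_needed_quadratic(USERS_LATER))  # ~19: 10x users -> ~100x memory

Five minutes of this tells you whether 10x more users means 10x more servers or 100x more, which is exactly the question the “we'll just buy hardware” crowd never asks.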
I think the only programs where you can explicitly refuse to think about rough estimates are some parts of games: there is a lot of not-all-that-time-critical stuff where you can use Lua or Python, and N does not grow, so your requirements don't grow either. And gamedev is not what we are doing here.
don't make the assumption that every place (or even every large company) is like yours.
I'm not doing that! Not even close! I know there are a lot of companies and a lot of so-called “software engineers” who don't care about this stuff. That's their choice. We just don't want to see these aliens from a parallel universe landing in our world, that's all. Take a look at the title of this article—it explains what goes on pretty succinctly (why would you care about the time to reference memory if you don't know roughly how many times you'll need to do that in your program?).
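That parenthetical is just a multiplication; purely as a hypothetical illustration (both numbers below are invented, the latency is only a common ballpark figure):

    # Illustrative only: the cost of one memory reference matters only
    # together with a rough count of how many of them you will make.

    ref_ns = 100                   # ballpark main-memory reference, ~100 ns
    refs_per_request = 1_000_000   # assumed: a cache-unfriendly pass over data

    print(ref_ns * refs_per_request / 1e6, "ms per request")   # 100.0 ms
    # 100 ms of pure memory stalls per request is the kind of number a
    # back-of-the-envelope pass catches before anything is deployed.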
It does not mean that people are obsessed with all these numbers and complexities all the time—of course not. Often you just note something like “Ok, this is O(N) so it will probably not be a bottleneck” or “fine, it's O(N³) memory, but we don't expect to ever see N > 10, so it's Ok” in your head and happily go on with the implementation. Sometimes you need to actually stop and think about whether you want to fight for smaller memory consumption or latency. But you usually do that at the design phase, when changes are cheap and simple, not when everything is deployed and the only way to fix anything is to try to add a bazillion caches (and even those often don't help).
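The “O(N³) memory but N ≤ 10” note really is a one-liner of arithmetic; the entry size and bounds below are just assumed for the sake of the example:

    # Purely illustrative sketch of the "O(N^3) memory but N <= 10" note:
    # a table indexed by three small indices is tiny, so nobody fights it.

    N = 10                     # assumed hard upper bound from the design
    entry_bytes = 8            # one double per cell

    print(N**3 * entry_bytes)  # 8000 bytes -- not worth optimizing

    # The same shape with N = 10_000 would be 8 * 10**12 bytes (~8 TB),
    # which is the kind of thing you want to notice at design time,
    # not after deployment.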