unix - why should I not use find optimizations? -


I read in the manual and info pages the sections about optimization levels in the find command, and I cannot understand why I should not use the most aggressive optimization level.

The relevant sentences I found (from man find, version 4.4.2):

Conversely, optimisations which prove to be reliable, robust and effective may be enabled at lower optimisation levels over time.

The findutils test suite runs all the tests on find at each optimisation level and ensures that the result is the same.

If I understood this well, it proves the correct behaviour of find through the findutils test suite, but the test suite only ensures that the optimization levels give the same result.

You're missing this sentence:

The cost-based optimiser has a fixed idea of how likely any given test is to succeed.

That means that if you have a directory with highly atypical contents (e.g. a lot of named pipes and very few "regular" files), the optimizer may actually worsen the performance of your query (in this case by assuming that -type f is more likely to succeed than -type p, when the reverse is true). In a situation like this, you're better off hand-optimizing the query yourself, which is possible at -O1 or -O2.
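As a sketch of that hand-optimization (the directory and file names below are made up for illustration): in a pipe-heavy directory you can put the -type p test first yourself, and at -O1, the default level, find preserves that relative ordering rather than reordering it by its fixed cost model.

```shell
# Build a small "atypical" directory: mostly named pipes, one regular file.
pipedir=$(mktemp -d)
mkfifo "$pipedir/p1" "$pipedir/p2" "$pipedir/p3"
touch "$pipedir/regular.txt"

# Hand-optimized: -type p is tested first because it succeeds most often
# here. -O3's cost-based reordering might instead assume -type f is the
# likelier test and evaluate it first.
find -O1 "$pipedir" \( -type p -o -type f \) -print
```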

Even ignoring that issue, the fixed costs of the cost-based optimizer are difficult to get right. There are multiple pieces of hardware and software involved (the hard disk, the kernel, the filesystem) which do caching and optimization of their own. As a result, it's hard to predict how expensive different operations will be relative to one another (e.g. we know that readdir(2) is cheaper than stat(2), but we don't know how much cheaper). This means that cost-based optimization is not guaranteed to produce the best plan even assuming typical filesystem contents. The lower optimization levels allow you to hand-tune the query by trial and error, which may be more reliable, if more laborious.
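That trial-and-error tuning can be as simple as timing the same query at each level (a sketch, assuming a bash shell; the sample tree is made up, and the timings will vary with your hardware and caches, so run it more than once):

```shell
# Stand-in for a real search tree (illustrative only).
logdir=$(mktemp -d)
touch "$logdir/a.log" "$logdir/b.txt"

# Run the identical query at every optimisation level and compare wall
# times; the results themselves must be the same at every level.
for level in 0 1 2 3; do
  echo "== -O$level =="
  time find -O"$level" "$logdir" -type f -name '*.log' >/dev/null
done
```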

