Deep sampling
Deep sampling is a variation of statistical sampling in which precision is sacrificed for insight. Small numbers of samples are taken, with each sample containing much information. The samples are taken approximately uniformly over the resource of interest, such as time or space. It is useful for identifying large hidden problems.

Examples:
  • In the context of software performance analysis, samples are taken of the call stack at random times during an execution interval. This can identify extraneous function calls as well as hot spots, the regions of the program where most of the execution time is spent (see the first sketch after this list).

  • In computer disk storage management, random bytes of storage under a directory are sampled. At each sample, the directory path to the file containing the byte is recorded. This can identify files or types of files that unnecessarily consume large amounts of storage, even though they may be buried or widely distributed within the directory structure (see the second sketch after this list).
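
The call-stack example can be made concrete with a short sketch. The Python code below is a minimal illustration under stated assumptions, not a production profiler: names such as sample_call_stacks and busy_work are invented for this example, and it relies on CPython's sys._current_frames() to read another thread's stack. A background thread captures a handful of stack samples at roughly uniform random times while a workload runs, so each sample carries a full call stack's worth of information.

 import random
 import sys
 import threading
 import time
 import traceback

 def sample_call_stacks(main_thread_id, interval_s, n_samples, results):
     """Record n_samples stack traces of one thread at random times within interval_s seconds."""
     # Pre-compute the sample times so they fall roughly uniformly over the interval.
     times = sorted(random.uniform(0, interval_s) for _ in range(n_samples))
     start = time.time()
     for t in times:
         time.sleep(max(0.0, t - (time.time() - start)))
         frame = sys._current_frames().get(main_thread_id)
         if frame is not None:
             # Each sample is a whole call stack: few samples, but each is information-rich.
             results.append(traceback.format_stack(frame))

 def busy_work():
     """Stand-in workload whose hot spots we want to locate."""
     total = 0
     for i in range(2_000_000):
         total += i * i
     return total

 if __name__ == "__main__":
     stacks = []
     sampler = threading.Thread(
         target=sample_call_stacks,
         args=(threading.main_thread().ident, 1.0, 5, stacks),
     )
     sampler.start()
     while sampler.is_alive():      # keep the workload running for the whole sampling interval
         busy_work()
     sampler.join()
     for i, stack in enumerate(stacks, 1):
         print(f"--- sample {i} ---")
         print("".join(stack))

Functions that appear in most of the few printed stacks are the likely hot spots or extraneous calls; the small sample count is traded for the depth of information in each sample.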
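
The disk-storage example can be sketched the same way. The following Python code is again only an illustration; sample_storage_bytes, its parameters, and the choice of 20 samples are assumptions made for this sketch. It picks a small number of byte positions uniformly at random over all bytes under a directory and records which file owns each sampled byte, so files that consume a disproportionate share of storage surface quickly even when buried deep in the tree.

 import bisect
 import os
 import random
 from collections import Counter

 def sample_storage_bytes(root, n_samples=20):
     """Return a Counter of file paths hit by n_samples uniformly random bytes under root."""
     paths, cumulative = [], []
     total = 0
     for dirpath, _dirnames, filenames in os.walk(root):
         for name in filenames:
             path = os.path.join(dirpath, name)
             try:
                 size = os.path.getsize(path)
             except OSError:
                 continue                     # skip files that vanish or are unreadable
             if size > 0:
                 total += size
                 paths.append(path)
                 cumulative.append(total)     # running total of bytes seen so far
     hits = Counter()
     if total == 0:
         return hits
     for _ in range(n_samples):
         byte = random.randrange(total)                        # a uniformly random byte under root
         owner = paths[bisect.bisect_right(cumulative, byte)]  # the file containing that byte
         hits[owner] += 1
     return hits

 if __name__ == "__main__":
     for path, count in sample_storage_bytes(".", n_samples=20).most_common(10):
         print(f"{count:3d}  {path}")

Because bytes are sampled uniformly, the number of hits a file receives is proportional to its share of the total storage, so even twenty samples are usually enough to reveal the largest consumers.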