Here are two versions of insertion sort: one that I wrote following the pseudocode, and one written directly. I want to know which version takes more steps and more space (and which is more complicated in places).
    void insertion_sort(int a[], int n)
    {
        int key, i, j;
        for (i = 1; i < n; i++) {
            key = a[i];
            j = i - 1;
            while (j >= 0 && a[j] > key) {
                a[j + 1] = a[j];
                j--;
            }
            a[j + 1] = key;
        }
    }

and this is the second version:
    void insertion_sort2(item s[], int n)
    {
        int i, j;
        for (i = 1; i < n; i++) {
            j = i;
            while ((j > 0) && (s[j] < s[j - 1])) {
                swap(&s[j], &s[j - 1]);
                j--;
            }
        }
    }

Here the sample array is a = {5, 2, 4, 6, 1, 3}. In my opinion, the second version takes more steps, because it moves each number one swap at a time, while the first version shifts the larger numbers in its inner loop and then writes the key into place once. For example: up to index = 3, both versions take the same number of steps, but when index = 4 is reached, moving the number 1 into place takes more steps in version 2 than in version 1. What do you think?
"Number of steps" is not a useful measure.

Is one step one line? A statement? An expression? An assembler instruction? A CPU micro-op?

On top of that, your "steps" get compiled to assembly and then optimized, and the resulting instructions can have different (and potentially variable) runtime costs.
Useful questions you can ask instead:
1. What is its algorithmic complexity? As noted in the comments, this describes how the algorithm scales with the size of the input; both versions are insertion sort, so both are O(n^2) in the worst case.
2. How fast is it in practice? If you want to know which is faster (for some set of inputs), you have to measure it.
If you want to know which one does more swaps, why not just write a swap function that increments a global counter every time it is called?