• DS-foundation-sort1


    merge sort

    1. Alongside Quicksort, Merge Sort is one of the classic efficient sorting algorithms.
    2. Merge Sort operates on the “divide and conquer” principle:
    3. First, the unsorted array is divided into two roughly equal halves (divide).
    4. Second, the sorted halves are merged back into a single sorted array (conquer).
    5. The array is divided repeatedly until subarrays of length 1 are created.
    6. The number of division stages is log2 n.
    7. On each merge stage, we have to merge a total of n elements (on the first stage n × 1, on the second stage n/2 × 2, on the third stage n/4 × 4, etc.):
    8. The merge process does not contain any nested loops, so it is executed with linear complexity: If the array size is doubled, the merge time doubles, too. The total effort is, therefore, the same at all merge levels.
    9. The time complexity of Merge Sort is: O(n log n)
    10. In terms of time complexity, Merge Sort is therefore no faster for sorted input elements than for randomly arranged ones.
    11. In measured runtime, however, Merge Sort is about three times faster for presorted elements than for unsorted elements.
    12. For elements sorted in descending order, Merge Sort needs a little more time than for elements sorted in ascending order.
    13. Although Merge Sort runs about three times faster for pre-sorted elements than for unsorted ones, the number of comparison operations differs by only about one third.
    14. Merge Sort is not an in-place algorithm; the standard variant needs O(n) additional space for merging.
    15. Merge Sort is a stable sorting process.
    16. In-Place Merge Sort
      This variant merges two adjacent sorted subarrays by shifting elements instead of copying them into an auxiliary array:
      If the element at the left merge pointer is less than or equal to the element at the right merge pointer, the left merge pointer is moved one field to the right.
      Otherwise, all elements from the left pointer up to, but excluding, the right pointer are moved one field to the right, and the right element is placed in the field that has become free. Then both pointers are shifted one field to the right, as is the end position of the left subarray.
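    The pointer movement described above can be sketched as follows (a minimal illustration; the function name and index convention are my own):

```python
def merge_in_place(a, left, mid, right):
    """Merge the sorted runs a[left:mid] and a[mid:right] by shifting.

    'i' is the left merge pointer, 'j' the right merge pointer.
    Each shift moves a whole block one field to the right, which is why
    this merge step is no longer linear.
    """
    i, j = left, mid
    while i < j and j < right:
        if a[i] <= a[j]:
            # Left element is already in place; advance the left pointer.
            i += 1
        else:
            # Shift a[i:j] one field right and drop a[j] into the free field.
            value = a[j]
            a[i + 1:j + 1] = a[i:j]
            a[i] = value
            # Both pointers (and the left run's end) move one field right.
            i += 1
            j += 1
```

Using `<=` in the comparison keeps equal elements in their original order, preserving stability.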
    17. Merge Sort is an efficient, stable sorting algorithm with an average, best-case, and worst-case time complexity of O(n log n).
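    The standard (not in-place) algorithm summarized above can be sketched like this — names are my own, and real implementations usually reuse a single buffer instead of slicing:

```python
def merge_sort(a):
    """Top-down Merge Sort: O(n log n) time, O(n) auxiliary space, stable."""
    if len(a) <= 1:
        return a
    # Divide: split the array into two halves and sort them recursively.
    mid = len(a) // 2
    left = merge_sort(a[:mid])
    right = merge_sort(a[mid:])
    # Conquer: merge the two sorted halves.
    # '<=' keeps equal elements in their original order (stability).
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```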

    insertion sort
    18. For the outer loop, the linear factor is obvious, as it counts up to n.
    19. Average Time Complexity
      On average, each new element is compared with, and shifted past, about half of the elements already sorted. For n elements this gives roughly n × (n−1) × ½ × ½ operations, e.g. 6 × 5 × ½ × ½ = 7.5 for six elements.

    1. The time complexity for shifting is, therefore, O(n²). This is also called “quadratic time”.
    2. For comparison operations, we have one more than shift operations (or the same amount if you move an element to the far left). The time complexity for the comparison operations is, therefore, also O(n²).
    3. Worst-Case Time Complexity
      In the worst case (elements sorted in descending order), each new element is compared with, and shifted past, all elements already sorted.
      The term from the average case, therefore, changes in that the second dividing by two is omitted:
      n × (n−1) × ½, e.g. 6 × 5 × ½ = 15 for six elements
      The worst-case time complexity of Insertion Sort is: O(n²)
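    The worst-case count can be checked empirically with a small instrumented sketch (the counting helper is my own, not from the original article):

```python
def insertion_sort_counting(a):
    """Insertion Sort that counts shift operations (sorts 'a' in place)."""
    shifts = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]  # shift one field to the right
            shifts += 1
            j -= 1
        a[j + 1] = key
    return shifts
```

For six elements in descending order, element i must be shifted past all i elements before it, so the total is 1 + 2 + 3 + 4 + 5 = 15 = 6 × 5 × ½.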
    4. Best-Case Time Complexity
      If the elements already appear in sorted order, there is precisely one comparison in the inner loop and no shift operation at all.
    5. Insertion Sort With Binary Search?
      We would not gain anything from this, because we would still have to shift each element from the insertion position one position to the right, which is only possible step by step in an array. Thus the inner loop would remain at linear complexity despite the binary search, and the whole algorithm would remain at quadratic complexity, that is O(n²).
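    A quick sketch of this variant (my own illustration, using Python's standard bisect module) shows exactly where the linear work remains:

```python
import bisect

def binary_insertion_sort(a):
    """Insertion Sort that finds the insertion position by binary search.

    bisect_right locates the position in O(log n) comparisons, but the
    block shift is still O(n), so the algorithm stays O(n^2) overall.
    """
    for i in range(1, len(a)):
        key = a[i]
        # O(log n): binary search in the sorted prefix a[0:i].
        # bisect_right keeps equal elements in order (stability).
        pos = bisect.bisect_right(a, key, 0, i)
        # O(n): shifting every element in a[pos:i] one step right.
        a[pos + 1:i + 1] = a[pos:i]
        a[pos] = key
    return a
```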
    6. Insertion Sort With a Linked List?

    If the elements are in a linked list, couldn’t we insert an element in constant time, O(1)?

    Yes, we could. However, a linked list does not allow for a binary search. This means that we would still have to iterate through all sorted elements in the inner loop to find the insertion position. This, in turn, would result in linear complexity for the inner loop and quadratic complexity for the entire algorithm.
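    A minimal linked-list sketch (the Node class and function name are my own) makes the trade-off visible: the insertion itself is O(1) pointer surgery, but finding the position is a linear scan:

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def insertion_sort_list(head):
    """Insertion Sort on a singly linked list: still O(n^2) overall."""
    sorted_head = None
    while head is not None:
        node, head = head, head.next
        if sorted_head is None or node.value < sorted_head.value:
            # O(1) insertion at the front of the sorted part.
            node.next = sorted_head
            sorted_head = node
        else:
            # O(n) linear scan for the insertion position; '<=' advances
            # past equal elements, which keeps the sort stable.
            cur = sorted_head
            while cur.next is not None and cur.next.value <= node.value:
                cur = cur.next
            node.next = cur.next  # O(1) insertion once found
            cur.next = node
    return sorted_head
```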

    Insertion Sort is a stable sorting algorithm.
    Insertion Sort is not directly parallelizable. However, there is a parallel variant of Insertion Sort: Shellsort.

    1. Insertion Sort is, therefore, not only faster than Selection Sort in the best case but also the average and worst case.

    The reason for this is that Insertion Sort requires, on average, half as many comparisons. As a reminder: with Insertion Sort, the comparisons and shifts extend, on average, over half of the already-sorted elements; with Selection Sort, we have to search all unsorted elements for the smallest one in each step.

    Selection Sort has significantly fewer write operations, so Selection Sort can be faster when writing operations are expensive. This is not the case with sequential writes to arrays, as these are mostly done in the CPU cache.
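    This difference in write operations can be made concrete with two instrumented sketches (counting conventions are my own: a swap counts as two writes, each shift and each final placement as one):

```python
def selection_sort_writes(a):
    """Selection Sort: at most one swap (two writes) per outer iteration."""
    writes = 0
    n = len(a)
    for i in range(n):
        # Search all unsorted elements for the minimum (reads only).
        m = min(range(i, n), key=a.__getitem__)
        if m != i:
            a[i], a[m] = a[m], a[i]
            writes += 2
    return writes

def insertion_sort_writes(a):
    """Insertion Sort: one write per shift plus one final placement."""
    writes = 0
    for i in range(1, len(a)):
        key, j = a[i], i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            writes += 1
            j -= 1
        a[j + 1] = key
        writes += 1
    return writes
```

On ten elements in descending order, Selection Sort needs 5 swaps (10 writes), while Insertion Sort performs 45 shifts plus 9 placements (54 writes).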

  • Source: https://blog.csdn.net/weixin_43124546/article/details/126382590