Monday 15 June 2015

performance - Why is iterating over a HashMap O(c/n)?


There are lots of links that tell me the big-O for a HashMap:

  get          O(1)
  put          O(1)
  containsKey  O(1)
  next entry   O(c/n)    c = table capacity (number of buckets), n = size

It is kind of obvious why get/put/containsKey are O(1), but I'd like to know why "next entry" is O(c/n).

And while I'm at it, I'd love to know the big-O for ConcurrentHashMap, TreeMap, etc.

Has anyone got a good link?

The linked documentation doesn't say that the iteration is O(c/n). It says that "next entry" is O(c/n). The iteration performs n "next entry" operations.

First of all, note that c (capacity) > n (entries) is an invariant, and c is some function of n, so O(c/n) > O(1/n). (Note: as mentioned in the comments, I am not entirely sure the invariant claim holds for HashMap implementations that use chaining for collision resolution.)
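
As a worked illustration (my own arithmetic, assuming java.util.HashMap's default load factor of 0.75, entries only being added, and no oversized initial capacity): the table doubles once n exceeds 0.75 * c, so

  just before a resize:  n ~ 0.750 * c  =>  c ~ 1.33 * n
  just after a resize:   n ~ 0.375 * c  =>  c ~ 2.67 * n

Either way c stays within a small constant factor of n, which is what justifies collapsing ~O(c) into ~O(n) in the final step below.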

What it is effectively saying is that a standard HashMap may have empty buckets left over that have to be skipped when finding the "next entry". So the bound for "next entry" is "more" than O(1), but the bound needs care when reading it: it does not mean that iteration runs faster with more n. It only describes the cost of one "next entry", amortized over the total of n entries.
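
To make the "skip empty buckets" cost concrete, here is a minimal sketch of a chained-bucket iterator. This is NOT java.util.HashMap's actual source; the Node class and all names here are invented for illustration:

  import java.util.NoSuchElementException;

  // Minimal sketch of a chained-bucket iterator, to show where the
  // O(c/n) per "next entry" comes from.
  class BucketIterator<K, V> {
      static class Node<K, V> {
          final K key;
          final V value;
          Node<K, V> next; // collision chain within one bucket
          Node(K key, V value, Node<K, V> next) {
              this.key = key; this.value = value; this.next = next;
          }
      }

      private final Node<K, V>[] table; // c == table.length buckets
      private int bucket;               // bucket currently being scanned
      private Node<K, V> current;       // next node to hand out, or null

      BucketIterator(Node<K, V>[] table) {
          this.table = table;
          advance(); // position on the first non-empty bucket
      }

      boolean hasNext() {
          return current != null;
      }

      // "next entry": follow the chain if it continues, otherwise scan
      // forward over (possibly many) empty buckets. A full iteration
      // touches each of the c buckets once and each of the n nodes once,
      // i.e. c + n work for n calls: O(c/n + 1) per call on average.
      Node<K, V> next() {
          if (current == null) throw new NoSuchElementException();
          Node<K, V> result = current;
          current = current.next;
          if (current == null) {
              bucket++;
              advance();
          }
          return result;
      }

      private void advance() {
          while (bucket < table.length && table[bucket] == null) bucket++;
          current = (bucket < table.length) ? table[bucket] : null;
      }
  }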


Since iterating is effectively just doing "next entry" once per entry, for the full iteration of a HashMap:

  O(c/n * n) -> O(c) -> ~O(n)

(Since c is a function of n that may differ a bit in different circumstances, it is not strictly correct to drop it as a constant, hence the ~.)
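
A rough way to observe this effect (my own example, not from the original answer; the class name and numbers are arbitrary) is to compare two maps with the same n but very different c:

  import java.util.HashMap;
  import java.util.Map;

  // Rough illustration, not a rigorous benchmark (no warm-up, no JMH):
  // both maps hold the same n = 1,000 entries, but the second one was
  // created with ~16 million buckets. A full iteration must skip every
  // empty bucket, so its cost tracks c, not n.
  public class IterationCost {
      public static void main(String[] args) {
          Map<Integer, Integer> normal = new HashMap<>();        // c grows with n
          Map<Integer, Integer> sparse = new HashMap<>(1 << 24); // huge fixed c
          for (int i = 0; i < 1_000; i++) {
              normal.put(i, i);
              sparse.put(i, i);
          }
          time("normal", normal);
          time("sparse", sparse);
      }

      static void time(String label, Map<Integer, Integer> map) {
          long start = System.nanoTime();
          long sum = 0; // consume the entries so the loop isn't dead code
          for (int v : map.values()) sum += v;
          System.out.printf("%s: sum=%d, %.1f ms%n",
                  label, sum, (System.nanoTime() - start) / 1e6);
      }
  }

Both maps hold the same 1,000 entries, but on a typical JDK the sparse map should take noticeably longer to iterate, because its millions of mostly-empty buckets all have to be visited.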
