Monday 15 July 2013

mysql - Abysmal Performance for COUNT() using ZF2 Paginator on InnoDB Tables


I am trying to use the ZF2 paginator to page through large record sets (a worst-case scenario of approximately 10 million rows with no search filters). My tables are in InnoDB format, which I believe does not store the row count as part of the table metadata.

I know that I can extend the Zend\Paginator\Adapter\DbSelect class and implement my own count() method that uses count data I manually store in another table, but I am unsure how.
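In rough terms, what I have in mind is something like the sketch below. This is only an illustration: the `contact_counts` summary table, its `query_key`/`total` columns, and the class name are placeholders for whatever counter table I would maintain myself.

    <?php
    use Zend\Db\Sql\Select;
    use Zend\Paginator\Adapter\DbSelect;

    // Sketch only: return a pre-computed total from a manually maintained
    // summary table instead of counting ~10M InnoDB rows on every request.
    // "contact_counts", "query_key" and "total" are placeholder names.
    class CachedCountDbSelect extends DbSelect
    {
        public function count()
        {
            if ($this->rowCount !== null) {
                return $this->rowCount;
            }

            $countSelect = new Select('contact_counts');
            $countSelect->columns(array('c' => 'total'));
            $countSelect->where(array('query_key' => 'contacts_not_trashed'));

            $statement = $this->sql->prepareStatementForSqlObject($countSelect);
            $row       = $statement->execute()->current();

            return $this->rowCount = (int) $row['c'];
        }
    }

It would be constructed exactly like the stock adapter (e.g. new Paginator(new CachedCountDbSelect($select, $dbAdapter))), but keeping such a summary table accurate for every possible filter combination is the part I am unsure about.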

For reference, the default count() method in DbSelect is:

    <?php
    public function count()
    {
        if ($this->rowCount !== null) {
            return $this->rowCount;
        }

        $select = clone $this->select;
        $select->reset(Select::LIMIT);
        $select->reset(Select::OFFSET);
        $select->reset(Select::ORDER);

        $countSelect = new Select;
        $countSelect->columns(array('c' => new Expression('COUNT(1)')));
        $countSelect->from(array('original_select' => $select));

        $statement = $this->sql->prepareStatementForSqlObject($countSelect);
        $result    = $statement->execute();
        $row       = $result->current();

        $this->rowCount = $row['c'];

        return $this->rowCount;
    }
    ?>

Here is a very simple example of a query this method produces for me:

    SELECT COUNT(1) AS `c` FROM (
        SELECT `contacts`.`id` AS `id`, `contacts`.`firstname` AS `firstname`,
               `contacts`.`middlename` AS `middlename`, `contacts`.`lastname` AS `lastname`,
               `contacts`.`gender` AS `gender`
        FROM `contacts`
        WHERE `contacts`.`trash` = '0'
    ) AS `original_select`

This query fails because it eats up all of the free disk space on the Amazon RDS instance (25 GB, db.m1.small) it runs on. By comparison, running just the inner (original) query completes in 100 seconds (certainly not good) and returns 7.39 million records.

Here is the EXPLAIN output for the inner query (EXPLAIN on the count query dies due to disk space on the RDS server):

    +----+-------------+----------+------+---------------+-------+---------+-------+---------+-------+
    | id | select_type | table    | type | possible_keys | key   | key_len | ref   | rows    | Extra |
    +----+-------------+----------+------+---------------+-------+---------+-------+---------+-------+
    |  1 | SIMPLE      | contacts | ref  | trash         | trash | 1       | const | 3441317 |       |
    +----+-------------+----------+------+---------------+-------+---------+-------+---------+-------+
    1 row in set (0.04 sec)

Is there anything that can be done to improve this? Is the way the ZF2 paginator operates simply incompatible with how InnoDB works? And since we allow searching on most of the fields in the database, how would you handle caching the counts of all possible queries?

Thanks in advance...

You do not need to SELECT from the original query - that is what is eating your memory / disk space!

    SELECT COUNT(1) AS `c` FROM `contacts` WHERE `trash` = '0'
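If you want the paginator itself to issue that flat count, one option (just a sketch, not stock ZF2 behaviour) is to override count() so it swaps the column list for COUNT(1) on the original Select instead of wrapping it in a derived table. Newer releases of DbSelect also accept an optional count Select in the constructor, if your version has it.

    <?php
    use Zend\Db\Sql\Select;
    use Zend\Db\Sql\Expression;
    use Zend\Paginator\Adapter\DbSelect;

    // Sketch: count directly against the base table with the original WHERE,
    // avoiding the "COUNT(1) FROM (subquery)" derived table entirely.
    // Assumes the wrapped query has no GROUP BY / DISTINCT / row-multiplying joins.
    class FlatCountDbSelect extends DbSelect
    {
        public function count()
        {
            if ($this->rowCount !== null) {
                return $this->rowCount;
            }

            $select = clone $this->select;
            $select->reset(Select::LIMIT);
            $select->reset(Select::OFFSET);
            $select->reset(Select::ORDER);
            // Replace the column list rather than wrapping the query.
            $select->columns(array('c' => new Expression('COUNT(1)')));

            $statement = $this->sql->prepareStatementForSqlObject($select);
            $row       = $statement->execute()->current();

            return $this->rowCount = (int) $row['c'];
        }
    }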

Also:

• As `trash` only holds a boolean value, make it a non-nullable column of type TINYINT or BOOLEAN and search for true/false:

    ALTER TABLE `contacts` CHANGE `trash` `trash` TINYINT(1) NOT NULL
• Make sure to put an index on the `trash` column:

    ALTER TABLE `contacts` ADD INDEX (`trash`)

Further:

• Paginating large result sets does not necessarily require an exact count: say we display 100 entries per page, we do not need 100,000 individual page buttons. Instead, calculate the current page from your offset and limit and just show buttons for, e.g., the next/last 10 pages, combined with some "next/last 10 pages" buttons (see the sketch after this list).

• If you need the ability to "jump to the last page", why not use something like reversing the sort order (ORDER BY ... DESC) to achieve that.

• Does the situation really arise where someone pages through all 10M of your rows? Provide advanced filters instead, to help users find what they are looking for.
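To illustrate the point about not needing an exact count, here is a hypothetical helper (not part of ZF2, names made up) that builds a window of page links from nothing more than the current offset, the page size, and a cheap "is there at least one more row?" probe, such as fetching pageSize + 1 rows for the current page.

    <?php
    // Hypothetical helper: build a window of page numbers around the current
    // page without knowing the exact total row count.
    function pageWindow($offset, $pageSize, $hasMore, $window = 10)
    {
        $currentPage = (int) floor($offset / $pageSize) + 1;

        // Up to $window pages back...
        $pages = range(max(1, $currentPage - $window), $currentPage);

        // ...and up to $window pages forward, but only if more rows exist.
        if ($hasMore) {
            $pages = array_merge($pages, range($currentPage + 1, $currentPage + $window));
        }

        return $pages;
    }

    // Example: offset 200 with 100 rows per page and more rows available
    // yields links for pages 1 through 13 around current page 3.
    var_dump(pageWindow(200, 100, true));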
