Tuesday 15 July 2014

sql - Sum a sequence of device-labeled data using the most recent value per device -


I am using PostgreSQL 9.2.

I have the following problem:

  Time | Value | Device | Sum
  -----+-------+--------+-------------
    1  |  v1   |   1    | v1
    2  |  v2   |   2    | v1 + v2
    3  |  v3   |   3    | v1 + v2 + v3
    4  |  v4   |   2    | v1 + v4 + v3
    5  |  v5   |   2    | v1 + v5 + v3
    6  |  v6   |   1    | v6 + v5 + v3
    7  |  v7   |   1    | v7 + v5 + v3

Essentially, at each point in time I need the sum of the most recent value reported by each of the N devices. In the above example, there are 3 devices.
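To make the required calculation concrete, here is a short procedural sketch (a Python illustration added for clarity, not part of the original post; the row values are made up, using the same device ids that appear later in the question):

```python
# Running "sum of the most recent value per device" over a stream of
# (time, value, device) rows.
def combine_series(rows):
    last = {}     # device_id -> most recent value seen so far
    out = []
    for ts, value, device in rows:
        last[device] = value              # overwrite with the newest reading
        out.append((ts, sum(last.values())))
    return out

rows = [(1, 2.0, 554), (2, 3.0, 553), (3, 6.0, 552), (4, 7.0, 553)]
# At t=4 device 553 reports again, so its old value 3.0 is replaced by 7.0.
print(combine_series(rows))  # [(1, 2.0), (2, 5.0), (3, 11.0), (4, 15.0)]
```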

I have tried many ways to use window functions and have failed. I have written a stored procedure that does what I need, but it is slow, probably owing to my lack of experience with plpgsql:

  CREATE OR REPLACE FUNCTION timeseries.combine_series(
      ids int[], starttime timestamp, endtime timestamp)
    RETURNS SETOF record AS
  $$
  DECLARE
      retval    double precision := 0;
      row_data  timeseries.total_active_energy%ROWTYPE;
      sz        integer := 0;
      last_vals double precision[];
      v_rec     record;
  BEGIN
      SELECT array_length($1, 1) INTO sz;

      FOR row_data IN
          SELECT * FROM timeseries.total_active_energy
          WHERE  time >= starttime
          AND    time <  endtime
          AND    device_id = ANY($1)
          ORDER  BY time
      LOOP
          retval := 0;
          FOR i IN 1..sz LOOP
              IF $1[i] = row_data.device_id THEN
                  last_vals[i] := row_data.active_power;
              END IF;
              retval := retval + COALESCE(last_vals[i], 0);
          END LOOP;
          SELECT row_data.time, retval INTO v_rec;
          RETURN NEXT v_rec;
      END LOOP;
      RETURN;
  END;
  $$ LANGUAGE plpgsql;

Call:

  SELECT * FROM timeseries.combine_series(
      '{552,553,554}'::int[],
      '2013-05-01'::timestamp,
      '2013-05-02'::timestamp
  ) AS t ("time" timestamp with time zone, sum double precision);

Script to create sample data:

  CREATE TEMP TABLE t (ts int, active_power real, device_id int, should_be int);
  INSERT INTO t VALUES
    (1,  2, 554,  2)
  , (2,  3, 553,  5)
  , (3,  9, 553, 11)
  , (4,  7, 553,  9)
  , (5,  6, 552, 15)
  , (6,  8, 554, 21)
  , (7,  5, 553, 19)
  , (8,  7, 553, 21)
  , (9,  6, 552, 21)
  , (10, 7, 552, 22);

This answer builds on my reply to your previous question, where you had presented a simpler case. Read the explanation of the window-function aspects of the solution there:

This question presents the data in un-pivoted form. To get where you want to be, you can cross-tabulate it into the simple form of your previous question.
The additional module tablefunc of PostgreSQL provides a very fast crosstab() function for that. Run this command once per database to install it:

  CREATE EXTENSION tablefunc;

Then all you need is this query (with redundant columns in the result for debugging):

  SELECT ts, active_power, device_id, should_be
       , COALESCE(max(a) OVER (PARTITION BY grp_a), 0)
       + COALESCE(max(b) OVER (PARTITION BY grp_b), 0)
       + COALESCE(max(c) OVER (PARTITION BY grp_c), 0) AS sum_of_last
  FROM  (
     SELECT *
          , count(a) OVER (ORDER BY ts) AS grp_a
          , count(b) OVER (ORDER BY ts) AS grp_b
          , count(c) OVER (ORDER BY ts) AS grp_c
     FROM   crosstab(
               'SELECT ts, active_power, device_id, should_be, device_id, active_power
                FROM   t ORDER BY 1, 2'
             , 'VALUES (552), (553), (554)'
            ) AS t (ts int, active_power real, device_id int, should_be int
                  , a real, b real, c real)
     ) sub
  ORDER  BY ts;

Returns the desired result.
While this query is a bit of a beast, it should perform very well. Note that this solution builds on a small, fixed list of devices given to crosstab():
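The carry-forward trick the query relies on can be mimicked outside SQL to see why it works: count(col) OVER (ORDER BY ts) only increments on non-NULL rows, so every resulting group number covers exactly one reading, and max(col) OVER (PARTITION BY grp) spreads that reading over all rows of its group. An illustrative Python sketch (added for clarity; the function name is invented, not from the original answer):

```python
# Mimic "count(col) OVER (ORDER BY ts)" + "max(col) OVER (PARTITION BY grp)"
# to carry the last non-None value forward over a column of readings.
def carry_forward(col):
    grp = 0
    groups = []
    for v in col:
        if v is not None:
            grp += 1          # count() only grows on non-NULL rows
        groups.append(grp)
    # each group contains exactly one non-None value: remember it ...
    by_grp = {}
    for g, v in zip(groups, col):
        if v is not None:
            by_grp[g] = v
    # ... and spread it over every row of that group (max over the partition)
    return [by_grp.get(g) for g in groups]

print(carry_forward([2.0, None, None, 8.0, None]))
# [2.0, 2.0, 2.0, 8.0, 8.0]
```

Rows before the first reading stay None (group 0 holds no value), which is exactly why the SQL wraps each term in COALESCE(..., 0).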

More about the additional columns in the crosstab() output:

Advanced crosstab()-Fu:

