I am trying to work out a good method in Python to consolidate the following data, which is passed from one function to another. Should I key it in dictionaries, or would reading and writing the data through SQLite give better performance?

For example, here is the collecting function, as some pseudocode:
```python
import sys

def aggregate_sources():
    source1 = open(sys.argv[1], 'r')  # source1.txt
    source1_data = source1.read()
    source2 = open(sys.argv[2], 'r')  # source2.txt
    source2_data = source2.read()
    source3 = open(sys.argv[3], 'r')  # source3.txt
    source3_data = source3.read()
    aggregated_data = source1_data + source2_data + source3_data  # + etc ...
```

This is the function that consolidates the sources, but my question is: each source I feed it looks like:
```
type1, 32
type2, 9
type3, 12
type4, 21
etc ...
```

Is there a way to collect the data and associate it within one big dictionary, so that:

```
type1: [source1, 32], [source2, etc ...], [etc ...]
```

I want to use the lookup speed of Python dictionaries to make this near-instantaneous, but if there are alternative solutions that do the same, please expand on them.
What you are looking for, this should do:
```python
import csv

def add_source_to_dict(mydict, source_file_name):
    with open(source_file_name, 'r', newline='') as csvfile:
        my_reader = csv.reader(csvfile)
        for atype, value in my_reader:
            if atype not in mydict:
                mydict[atype] = {}
            mydict[atype][source_file_name] = value
    return mydict

data = {}
data = add_source_to_dict(data, "source1.txt")
```

Interactively:
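A variant of the same idea: `collections.defaultdict(dict)` creates the inner dictionary on first access, which removes the explicit membership check. This is a minimal sketch; the sample file contents are made up here for demonstration, and `skipinitialspace=True` is an assumption to handle the space after the comma in the question's data.

```python
import csv
from collections import defaultdict

# Hypothetical sample data, matching the format shown in the question.
with open("source1.txt", "w") as f:
    f.write("type1, 32\ntype2, 9\n")

def add_source_to_dict(mydict, source_file_name):
    # defaultdict(dict) builds the inner mapping on first access,
    # so no "if atype not in mydict" check is needed.
    with open(source_file_name, newline="") as csvfile:
        for atype, value in csv.reader(csvfile, skipinitialspace=True):
            mydict[atype][source_file_name] = value
    return mydict

data = add_source_to_dict(defaultdict(dict), "source1.txt")
```

The lookup cost is the same as a plain dict; the only difference is less bookkeeping when inserting.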
```python
>>> data = {}
>>> data = add_source_to_dict(data, "source1.txt")
>>> data = add_source_to_dict(data, "source2.txt")
>>> data
{'type1': {'source2.txt': '44', 'source1.txt': '32'},
 'type3': {'source2.txt': '46', 'source1.txt': '12'},
 'type2': {'source2.txt': '45', 'source1.txt': '9'},
 'type4': {'source2.txt': '47', 'source1.txt': '21'}}
```
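On the SQLite alternative the question raises: for data this small, an in-memory dict will be faster, but `sqlite3` from the standard library becomes attractive once the data outgrows memory or needs to persist between runs. A minimal sketch, with table and column names invented for illustration:

```python
import sqlite3

# Illustrative schema; names are not from the original post.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (type TEXT, source TEXT, value INTEGER)")
rows = [
    ("type1", "source1.txt", 32),
    ("type2", "source1.txt", 9),
    ("type1", "source2.txt", 44),
]
conn.executemany("INSERT INTO readings VALUES (?, ?, ?)", rows)
# An index on the lookup column gives dict-like fast access by key.
conn.execute("CREATE INDEX idx_type ON readings (type)")
result = conn.execute(
    "SELECT source, value FROM readings WHERE type = ?", ("type1",)
).fetchall()
```

With the index in place, lookups by `type` stay fast as the table grows, and the data survives the process if you connect to a file instead of `":memory:"`.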