Sunday, 15 February 2015

grep - Find the contents of one file in another file in UNIX


I have 2 files. The first file contains a list of row IDs of a table in the database.

For example:

  1610657303
  1610658464
  1610659169
  1610668135
  1610668350
  1610670407
  1610671066

and the second file contains SQL update queries with these row IDs in the where clause.

File 2:

  update TABLE_X set ATTRIBUTE_A = 87 where ri = 1610668350;
  update TABLE_X set ATTRIBUTE_A = 87 where ri = 1610672154;
  update TABLE_X set ATTRIBUTE_A = 87 where ri = 1610668135;
  update TABLE_X set ATTRIBUTE_A = 87 where ri = 1610672153;

I have to read file 1, search file 2 for all the SQL commands that match the row IDs from file 1, and dump those SQL queries into a third file.

File 1 contains 100,000 entries and file 2 contains 10 times the entries of file 1, i.e. 1,000,000 entries.

I tried grep -f file_1 file_2 > file_3, but this is very slow: it processes around 1,000 entries per hour.

Is there a faster way to do this?
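Before switching tools, one common speedup is to make grep itself faster: with -f alone, every line of file_1 is treated as a regular expression, which is expensive for a long list of literal IDs. A hedged sketch, reusing the question's file names with illustrative sample data:

```shell
# Sample data in the layout from the question (file names file_1/file_2
# taken from the post; the IDs are illustrative).
printf '1610668350\n1610668135\n' > file_1
printf 'update TABLE_X set ATTRIBUTE_A = 87 where ri = 1610668350;\nupdate TABLE_X set ATTRIBUTE_A = 87 where ri = 1610672154;\n' > file_2

# -F treats each pattern line as a fixed string instead of a regex,
# which is much faster for long lists of literal IDs.
# -w matches whole words only, so one ID cannot hit inside a longer one.
grep -Fwf file_1 file_2 > file_3
cat file_3
```

Whether this is fast enough depends on the grep implementation; the awk approach below avoids the per-pattern scan entirely by using a hash lookup.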

One method with awk:

  awk -v FS="[ =]" 'NR==FNR{rows[$1]++;next}(substr($NF,1,length($NF)-1) in rows)' file_1 file_2

It should be very fast. On my machine, it took under 2 seconds to build a lookup of 1 million entries and compare it against 3 million lines.
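As a sanity check, the one-liner can be run end to end on the sample data from the question (file names assumed from the post):

```shell
#!/bin/sh
# Recreate the question's sample files.
cat > file_1 <<'EOF'
1610657303
1610668135
1610668350
EOF
cat > file_2 <<'EOF'
update TABLE_X set ATTRIBUTE_A = 87 where ri = 1610668350;
update TABLE_X set ATTRIBUTE_A = 87 where ri = 1610672154;
update TABLE_X set ATTRIBUTE_A = 87 where ri = 1610668135;
update TABLE_X set ATTRIBUTE_A = 87 where ri = 1610672153;
EOF
# First pass (NR==FNR is true only while reading file_1): store each row ID
# as a key in the rows[] hash.
# Second pass: split on space or '=', strip the trailing ';' from the last
# field, and print the line if that ID is a stored key.
awk -v FS="[ =]" 'NR==FNR{rows[$1]++;next}(substr($NF,1,length($NF)-1) in rows)' file_1 file_2 > file_3
cat file_3   # prints the two matching update statements
```

The speedup over grep -f comes from the hash: each line of file_2 costs one constant-time lookup instead of a scan over all 100,000 patterns.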

Machine specs:

  Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz (8 cores), 98 GB RAM
