Friday 15 June 2012

node.js - Cannot catch all messages when the server sends more than 1000 UDP messages at a time


I am writing an application that sends a file over UDP multicast with node.js. I read the file in chunks and send it block by block like this.

The loop that calls sendBlock for each chunk:

   
  for (var block = 1; block <= number_of_block; block++) {
    sendBlock(FILEPATH, block);
  }

  function sendBlock(file, block) {
    fs.open(file, 'r', function (err, fd) {
      if (err) { return; }
      var buf = new Buffer(4 + CHUNK_SIZE);
      fs.read(fd, buf, 4, CHUNK_SIZE, (block - 1) * CHUNK_SIZE, function (err, bytesRead) {
        if (err) { return; }
        buf[0] = 0;
        buf[1] = opcodes.OPCODE_DATA;
        buf[2] = (block >> 8) & 0xFF;   // high byte of the block number
        buf[3] = block & 0xFF;          // low byte of the block number
        udpserver.send(buf, 0, 4 + bytesRead, port, MULTICAST_IP_ADDRESS);
        fs.close(fd);
      });
    });
  }

On the client I receive the messages and write them to the file like this:

  fs.open(filename, 'a', function (err, fd) {
    if (4 + CHUNK_SIZE > message.length) {
      fs.write(fd, message, 4, message.length - 4, (block - 1) * CHUNK_SIZE, function () {
        fs.close(fd, function () {
          console.log('miss blocks: ' + missArray);
        });
      });
    } else {
      console.log('Message length:', message.length);
      console.log((block - 1) * CHUNK_SIZE);
      fs.write(fd, message, 4, CHUNK_SIZE, (block - 1) * CHUNK_SIZE, function () {
        fs.close(fd, function () {
          console.log('wrote block', block);
          if (block % NUMBER_BLOCK == 0) {
            if (blockArray.length > 0) {
              missArray = missArray.concat(blockArray);
            }
            blockArray = range(block + 1, NUMBER_BLOCK);
          }
          // udpserver.send(block + 1)
        });
      });
    }
  });

But when the server sends more than 1000 messages at a time, the client cannot catch all of them. The server sends:

  block ------ 6907 block ------ 6908 block ------ 6909 block ------ 6910 block ------ 6911 block ------ 6912 block ------ 6913

The client receives and writes:

  block ------ 1008 block ------ 1009 block ------ 1010 block ------ 1011

In my tests, the maximum file size the client can receive is 10.4 MB.

How can I receive all the data from the sender?

Node.js is subject to the limits of the underlying operating system. The operating system limits the number of outstanding handles a process can hold open concurrently.

It is possible that you are exhausting the number of available file descriptors. I recommend using connection pooling to reduce the number of file descriptors your application attempts to consume. Instead of trying to send 1000 blocks at once, limit your program to a pool of, say, 100 at a time.
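
One way to confirm this (a minimal sketch, assuming the sender's fs.open callback; none of this is in the original code): log the error instead of returning silently. Running out of descriptors shows up as the EMFILE ("too many open files") error code.

    // Sketch: the same fs.open call as sendBlock, but with the error logged
    // so descriptor exhaustion becomes visible instead of being swallowed.
    fs.open(file, 'r', function (err, fd) {
      if (err) {
        if (err.code === 'EMFILE') {
          console.error('out of file descriptors (EMFILE):', err);
        } else {
          console.error('open failed:', err);
        }
        return;
      }
      // ... fs.read, udpserver.send and fs.close as before
    });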

Many connection pooling libraries are available through npm that can handle this for you.
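
If you would rather not add a dependency, a small hand-rolled limiter can do the same job. The sketch below assumes sendBlock is changed to take a completion callback (called after udpserver.send and fs.close finish); POOL_SIZE, nextBlock, inFlight and pump are illustrative names, not from the original code.

    var POOL_SIZE = 100;   // how many blocks may be in flight at once
    var nextBlock = 1;     // next block number to send
    var inFlight  = 0;     // blocks currently being sent

    function pump() {
      // Fill free slots until the pool is full or every block has been queued.
      while (inFlight < POOL_SIZE && nextBlock <= number_of_block) {
        inFlight++;
        sendBlock(FILEPATH, nextBlock++, function () {
          inFlight--;
          pump();          // a slot freed up, start the next block
        });
      }
    }

    pump();

With this in place the sender never has more than POOL_SIZE files open at once, which keeps it well under the operating system's descriptor limit.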
