Bandwidth throttling in C on Linux

Hi, I'm currently writing a function that computes a sleep delay I can pause for between packets in a port scanner I'm building for personal/educational use on my home network.

What I'm currently doing is opening /proc/net/dev, reading the 9th numeric field of the eth0 line to find the current number of bytes sent, reading it again a moment later, and doing some math on the difference to work out a delay to sleep for between sending each packet to a port to identify and fingerprint it.
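
In isolation, the read I'm describing boils down to something like this (a rough sketch, not my actual scanner code; the read_tx_bytes name and the sscanf format are just for illustration):

#include <stdio.h>
#include <string.h>

/* rough sketch of the /proc/net/dev read described above, not my scanner code */
static long long read_tx_bytes(const char *ifname)
{
  FILE *f = fopen("/proc/net/dev", "r");
  char line[512];
  char name[64];
  long long tx = -1, tx_bytes;

  if(f == NULL)
    return -1;

  while(fgets(line, sizeof(line), f) != NULL)
  {
    /* interface name, 8 skipped receive counters, then the bytes-sent counter */
    if(sscanf(line, " %63[^:]: %*s %*s %*s %*s %*s %*s %*s %*s %lld",
              name, &tx_bytes) == 2 && strcmp(name, ifname) == 0)
    {
      tx = tx_bytes;
      break;
    }
  }

  fclose(f);
  return tx;
}

Calling that twice with a fixed gap between the reads gives the byte delta the delay math works from.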

My problem is that no matter what throttle percentage I use, it always seems to send packets at the same rate. I think the issue is mainly how I'm calculating the sleep delay.

Edit: don't mind the function declaration and the struct handling; all I'm doing is spawning this function in a thread and passing it a pointer to a struct, copying the struct locally, and then freeing the passed struct's memory.
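
For context, the thread gets launched roughly like this (a sketch of the launch side only; start_monitor and throttle_percent are made up for illustration, command_struct is the same struct the function below copies):

#include <pthread.h>
#include <stdlib.h>

/* sketch of how the monitor thread gets started; start_monitor and
   throttle_percent are just for illustration, command_struct is my struct */
void start_monitor(int throttle_percent)
{
  command_struct *arg = malloc(sizeof(*arg));
  pthread_t tid;

  arg->throttle = throttle_percent;  /* percentage of UPLOAD_SPEED to allow */
  pthread_create(&tid, NULL, bandwidthmonitor_cmd, arg);  /* the thread copies, then frees, arg */
  pthread_detach(tid);
}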

void *bandwidthmonitor_cmd(void *param)
{
  char cmdline[1024], *bytedata[19];
  int i = 0, ii = 0;
  long long prevbytes = 0, currentbytes = 0, elapsedbytes = 0, byteusage = 0, maxthrottle = 0;

  /* copy the passed struct locally and free the heap copy */
  command_struct bandwidth = *((command_struct *)param);
  free(param);

  //printf("speed: %d\n throttle: %d\n\n", UPLOAD_SPEED, bandwidth.throttle);

  maxthrottle = UPLOAD_SPEED * bandwidth.throttle / 100;
  //printf("max throttle:%lld\n", maxthrottle);

  FILE *f = fopen("/proc/net/dev", "r");

  if(f != NULL)
  {
    while(1)
    {
      while(fgets(cmdline, sizeof(cmdline), f) != NULL)
      {
        cmdline[strlen(cmdline)] = '\0';
        if(strncmp(cmdline, "  eth0", 6) == 0)
        {
          /* split the eth0 line on spaces; bytedata[9] should be the bytes-sent counter */
          bytedata[0] = strtok(cmdline, " ");

          while(bytedata[i] != NULL)
          {
            i++;
            bytedata[i] = strtok(NULL, " ");
          }

          bytedata[i + 1] = '\0';

          currentbytes = atoi(bytedata[9]);
        }
      }

      i = 0;
      rewind(f);

      /* bytes sent since the last pass, converted to kilobits */
      elapsedbytes = currentbytes - prevbytes;
      prevbytes = currentbytes;
      byteusage = 8 * (elapsedbytes / 1024);

      //printf("usage:%lld\n",byteusage);

      /* only adjust the delay when bit 6 of the iteration counter is set */
      if(ii & 0x40)
      {
        SLEEP += (maxthrottle - byteusage) * -1.1;//-2.5;

        if(SLEEP < 0){
          SLEEP = 0;
        }
        //printf("sleep:%d\n", SLEEP);
      }

      usleep(25000);
      ii++;
    }
  }

  return NULL;
}

SLEEP and UPLOAD_SPEED are global variables. UPLOAD_SPEED is in kb/s and comes from a speed-test function that measures my machine's upload speed. This function runs inside a POSIX thread and keeps updating SLEEP, which the threads doing the socket work read and sleep by after every packet. For testing, instead of scanning only the ports I care about, I scan all the ports over and over so I can run

dstat

on another machine to watch the bandwidth, and no matter what bandwidth.throttle is set to, it always seems to generate the same amount of traffic toward the dstat machine.
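
The sending side is essentially just this pattern (a simplified sketch; send_probe stands in for my real probe/fingerprint code, and I'm treating SLEEP as microseconds):

#include <unistd.h>

extern int SLEEP;                      /* global delay, updated by bandwidthmonitor_cmd */

void send_probe(int sock, int port);   /* stand-in for my real probe/fingerprint call */

/* simplified sketch of a worker thread's inner loop */
void probe_ports(int sock, int first_port, int last_port)
{
  int port;

  for(port = first_port; port <= last_port; port++)
  {
    send_probe(sock, port);  /* send one packet to the port */
    usleep(SLEEP);           /* pause by whatever delay the monitor thread last chose */
  }
}

So as SLEEP grows, the packet rate should fall, which is the effect I'm not seeing in dstat.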

The way I calculate how much I "should" throttle by is by finding the maximum throttle speed, which is defined as

maxthrottle = upload_speed * throttle / 100;

For example, if my upload speed were 1000 kb/s and my throttle were 90 (90%), my max throttle would be 900 kb/s. From there it finds the current bytes sent from /proc/net/dev and then adjusts my sleep time, incrementing or decrementing it via

sleep += (maxthrottle - byteusage) * -1.1;

This should, in theory, increase or decrease the sleep time based on how much bandwidth is being used. The

if(ii & 0x40)

statement is just for some moderation control; it's there so SLEEP isn't set to a new value on every single iteration.
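
To put numbers on that adjustment, using the 900 kb/s example above: if the measured usage came out at 1200, sleep would change by (900 - 1200) * -1.1 = +330, lengthening the delay; if usage were only 600, it would change by (900 - 600) * -1.1 = -330, shortening it.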

Final notes: the main problem is that the sleep timer does not seem to change the rate at which packets are sent. Or maybe it's just my implementation, because on a freshly restarted machine, where /proc/net/dev reports a low bytes-sent count, it does raise the sleep timer accordingly on my 60 kb/s upload machine (e.g. if I set the throttle to 2, it keeps increasing the sleep timer until outgoing bandwidth reaches the max bandwidth threshold), but when I try running it on a server that has been online forever, it doesn't seem to work as nicely, if at all.

If anyone can suggest a better method of monitoring the network to adjust a sleep delay, or sees a flaw in my code, please let me know. Thank you.
