Search Results

Search found 10536 results on 422 pages for 'cpu usage'.

Page 127/422

  • How to get the best LINPACK result and conquer the Top500?

    - by knweiss
    Given a large Linux HPC cluster with hundreds or thousands of nodes: what are your best practices for getting the best possible LINPACK benchmark (HPL) result to submit for the Top500 supercomputer list? To give you an idea of the kind of answers I would appreciate, here are some sub-questions (a sample HPL.dat fragment follows the list):
    - How do you tune the parameters (N, NB, P, Q, memory alignment, etc.) in the HPL.dat file, without spending too much time trying every possible permutation, especially with large problem sizes N?
    - Are there any Top500 submission rules to be aware of? What is allowed, what isn't?
    - Which MPI product, and which version? Does it make a difference? Any special host order in your MPI machine file? Do you use CPU pinning?
    - How do you configure your interconnect? Which interconnect?
    - Which BLAS package do you use for which CPU model? (Intel MKL, AMD ACML, GotoBLAS2, etc.)
    - How do you prepare for the big run on all nodes? Do you start with small runs on a subset of nodes and then scale up? Is it really necessary to do a big run on all of the nodes, or is extrapolation allowed?
    - How do you optimize for the latest Intel/AMD CPUs? Hyper-threading? NUMA?
    - Is it worth it to recompile the software stack, or do you use precompiled binaries? Which settings, which compiler, which compiler optimizations? What about profile-based compilation?
    - How do you get the best result given only a limited amount of time for the benchmark run? (You can block a huge cluster forever.)
    - How do you prepare the individual nodes (stopping system daemons, freeing memory, etc.)?
    - How do you deal with hardware faults ruining a huge run?
    - Are there any must-read documents or websites about this topic? E.g. I would love to hear the background stories of some of the current Top500 systems and how they did their LINPACK benchmark.
    I deliberately don't want to mention concrete hardware details or discuss hardware recommendations because I don't want to limit the answers. However, feel free to mention hints, e.g. for specific CPU models.
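
    To make the first sub-question concrete, here is a fragment of an HPL.dat sketch. The values are placeholders, not recommendations: N, NB, P and Q have to be sized to your own node count and memory (N is often chosen near sqrt(0.8 * total_memory_bytes / 8), so the problem fills roughly 80% of aggregate RAM).

        HPLinpack benchmark input file
        HPL.out      output file name (if any)
        6            device out (6=stdout, 7=stderr, file)
        1            # of problem sizes (N)
        100000       Ns   (placeholder: size to ~80% of aggregate RAM)
        1            # of NBs
        192          NBs  (block size; typical sweet spots 128-256, BLAS-dependent)
        0            PMAP process mapping (0=Row-, 1=Column-major)
        1            # of process grids (P x Q)
        16           Ps   (P x Q must equal the MPI process count)
        32           Qs   (P <= Q, close to square, is a common starting point)
        16.0         threshold
        (remaining algorithmic knobs - panel factorization, broadcast, lookahead - omitted)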

    Read the article

  • PC reboots spontaneously: debugging tips

    - by aaron
    I swapped my Core 2 Duo for a quad core recently, and generally things run fine, but every now and then my computer just restarts. I don't even get a blue screen (Vista 32). Core temp isn't a problem. My thinking is that my power supply is inadequate, but I haven't been able to test that (one idea was to underclock the CPU to see if that helped, but going up in speed was the only simple thing to do in the BIOS). Two cases where I semi-consistently get problems: Borderlands running windowed after some period of time (and some other games, but Borderlands does it pretty regularly), and watching a video (e.g. QuickTime/VLC) while having another video running. Another thought is non-CPU heat - maybe the graphics card? Any thoughts on how to track this down are appreciated. Thanks!

    Read the article

  • How to decide the optimal number of Ruby Thin/Mongrel instances for a server, given the number of cores?

    - by Amala
    We are trying to deploy mongrel instances on a machine. What is the optimal number of mongrel instances for a server? Since an instance can handle concurrent connections, I do not see any benefit in starting more than 1 per core. Any more than that and the threads will just fight for CPU. Our predecessors have assigned 10 instances for 4 cores, but I think it will just cause CPU contention. Any definitive answers / opinions? I have seen this question: How many mongrel instances? But it is really not specific enough.

    Read the article

  • Common filesystem for servers behind a rackspace load balancer

    - by thanos panousis
    Our PHP application consists of a single web server that receives files from clients and performs a CPU-intensive analysis on them. Right now, analysis of a single user upload takes about 3 seconds to conclude and uses 100% CPU, which caps our system capacity at roughly 1/3 of a request per second. My team's requirement is to increase capacity without a lot of code reengineering. A possible solution would be to set up a load balancer in front of multiple servers running the same app, connecting to a common DB. The problem is that the analysis outputs files on disk. A load balancer would increase capacity, but then files won't be available between servers, so subsequent client requests may fail. We are hosted on Rackspace; is there a way to configure some sort of "common" storage for all servers without having to rewrite our file persistence code? The current code relies on simple fopens etc. What are our options?
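
    One low-rework sketch, for illustration only (the server address and paths below are hypothetical): if a shared filesystem such as an NFS export is mounted at the same path on every web server behind the balancer, code built on plain fopen() keeps working unchanged.

        # on each web server behind the load balancer
        apt-get install nfs-common
        mkdir -p /var/www/app/uploads
        mount -t nfs 10.0.0.5:/srv/uploads /var/www/app/uploads
        # persist the mount across reboots
        echo "10.0.0.5:/srv/uploads /var/www/app/uploads nfs defaults 0 0" >> /etc/fstab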

    Read the article

  • Fan is spinning too fast just in Windows - software?

    - by B. Roland
    I've recently replaced my fans (CPU, GPU, and a new CHA fan). The GPU fan behaves the same as before - I've occasionally seen it spin twice as fast as usual, but only rarely. The problem is that in Windows (especially 7) the CPU fan spins too much: it keeps the CPU under 40°C, but runs at 3300-3600 RPM, which I think is too high. If I switch to Ubuntu, the CPU stays at ~40-45°C with 2500-2800 RPM, which is a big difference both in numbers and in noise. I'm looking for a manual fan control solution, or some way to reduce Windows' fan speed multipliers. I bought the new fans for the lower noise (and they deliver it, just not at 3.6k RPM). Thank you!

    Read the article

  • Windows webserver monitor and notifier service

    - by WestDiscGolf
    I'm looking for a software tool/service, preferably open source/free, that will run on a Windows 2003 R2 Standard server and monitor the bandwidth usage of all the websites it hosts (a breakdown by site would be very useful), and also send out notifications if IIS or another defined service stops responding. A web interface to see the bandwidth usage and other details would be very useful. I've personally got a VPS that does this kind of monitoring, but I believe that is more closely tied to the host itself. Does anyone have any pointers? All help is appreciated. Cheers

    Read the article

  • Nagios remote monitoring: NRPE Vs. SSH

    - by sam
    We use Nagios to monitor quite a few (~130) servers. We monitor CPU, disk, RAM and a few other things on each server. I've always used SSH to run the remote commands, purely because it requires little to no additional config on the remote server: just install nagios-plugins, create the nagios user and add the SSH key, all of which I've automated into a shell script. I've never actually considered the performance implications of using SSH over NRPE. I'm not too bothered about the load hit on the Nagios server (it's probably over-specced for what it does; it's never been over 10% CPU), but we run each remote check every 30 seconds and each server has 5 different checks performed. I assume SSH requires more resources for each check, but is there a huge difference? (I.e. enough of a difference to warrant the switch to NRPE.) If it's any help, we monitor a mix of physical servers (normally with 8, 12 or 16 physical cores) and Amazon EC2 medium/large instances.
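
    For concreteness, a sketch of the two check styles side by side (the host name and thresholds are made up; check_by_ssh and check_nrpe are the stock plugins):

        # SSH style: Nagios opens an SSH session and runs the plugin remotely
        check_by_ssh -H web01.example.com \
            -C "/usr/lib/nagios/plugins/check_load -w 5,4,3 -c 10,8,6"

        # NRPE style: the NRPE daemon on the host runs a pre-configured command
        check_nrpe -H web01.example.com -c check_load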

    Read the article

  • List and kill running processes on Mac OS in a Ctrl/Alt/Delete-like way?

    - by AP257
    So, what do you do on a Mac when a process (as opposed to an application) is hogging CPU, swamping your machine, and you need to kill it? I know you can use top or open Applications > Utilities > Activity Monitor and kill it from there. But what happens when the process is already using so much CPU that doing either of those tasks is impossible? On Windows, you can just do Ctrl/Alt/Delete and the process list will reliably open. So no matter how much your computer is thrashing, you always have access to the list of processes. On Mac OS, there's Cmd/Alt/Escape, which reliably shows running applications. Fine when it's an application causing the problem. But: what do you do if it's a process?
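
    For reference, a couple of terminal one-liners, assuming you can still reach a shell (e.g. over ssh from another machine; the PID is an example):

        # list the top CPU consumers (ps column 3 is %CPU)
        ps aux | sort -nrk 3 | head

        # kill by PID, escalating to SIGKILL if it ignores the default TERM
        kill 12345
        kill -9 12345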

    Read the article

  • Problem identifying which page/function locks up the whole IIS server

    - by fnovak
    I have a problem identifying which page/function locks up the whole IIS server. Out of the blue, the w3wp.exe process jumps to 90-98% CPU usage. I have created 3 different application pools to see which w3wp.exe instance locks up the processor, but I am unable to find this out: I can only see that 2 of the 3 processes sit at 0-5% usage while one jumps to 90-98% after a while. I think some process/function/redirect/SQL query is doing this, and I would like to eliminate it, but so far I am not even able to find the source of the problem. On my local development machine with VS2010 everything works like a charm and I am unable to replicate the problem. The server is Windows 2003 Web Server, SQL Server 2005 and .NET 4.0. Thank you for your help, links or any information on this issue. Fero
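
    A sketch of one way to confirm which pool owns the hot process on Windows 2003/IIS 6: the iisapp.vbs admin script ships with the OS and maps each w3wp.exe PID to its application pool, so you can match it against the PID shown in Task Manager.

        rem run from a command prompt on the server
        cscript %systemroot%\system32\iisapp.vbs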

    Read the article

  • A few questions on Giga Tweaker

    - by user23950
    I'd better consult the people here before I do anything unnecessary with this app called Giga Tweaker. I don't really understand this "increase the performance of your CPU" thing; it is under Customization - Memory Management - RAM & disk cache in Giga Tweaker. What will happen if I change the cache size of the L2 cache to the highest possible value, which is 8 MB? What are the negative effects of doing that? And the file system caching memory setting, also under Customization - Memory Management - RAM & disk cache: what effect will it have on my system, which has 2 GB of RAM and a 2.50 GHz dual-core CPU? Please enlighten me.

    Read the article

  • nagiosgraph new services not showing

    - by Eleven-Two
    I am using Nagios Core with Nagiosgraph and for a while had graphing enabled only for CPU usage. This worked fine, but now I want to add some more services (for example memory usage). The new services are not working (no RRD data is generated). The Nagiosgraph site only says "no data available", and I get no error in the Apache log, nagiosgraph.log or nagiosgraph-cgi.log. The new services are standard services (nsclient++ MEMUSE, for example) and of course they are included in the map file. If I execute the checks manually, they also show perfdata. I added the services by enabling the "graphed-service" use directive. Did I miss something?

    Read the article

  • How to limit router bandwidth?

    - by David
    Hello, Is there a way to configure my (D-Link DIR-615) router to throttle down the allowed bandwidth after a certain amount of bandwidth has been used? For instance, I want my router to operate normally up to 20GB. After 20GB I want the router to limit bandwidth to a fraction of the normal speed (perhaps 1/5th). I live in Canada, so in about a month, everyone is going to be billed based on the amount they used (usage based billing). Instead of the unlimited bandwidth that I am enjoying now, most people will be capped at 25GB and will have to fork out $2/GB of over usage. Thank you in advance for the help.

    Read the article

  • SQLS Timeouts - High Reads in Profiler

    - by lb01
    I've audited a SQLS2008 server with Profiler for one day. The overhead didn't seem to trouble this new client my company has. They are using a legacy VB6 application as a front-end, and they're experiencing timeouts once SQL Server's RAM usage is high. The server is currently running x64 SQLS2008 on a VM with nearly 9 GB of RAM; SQL Server's 'max server memory' option is currently set to 6 GB. I've put the results of the trace in a table and queried them using this query:

        SELECT TextData, ApplicationName, Reads
        FROM [TraceWednesday]
        WHERE TextData IS NOT NULL AND EventClass = 12
        GROUP BY TextData, ApplicationName, Reads
        ORDER BY Reads DESC

    As I expected, some values are very high. Top reads, in pages:

        2504188
        1965910
        1445636
        1252433
        1239108
        1210153
        1088580
        1072725

    Am I correct in thinking that the top one (2504188 pages, at 8 KB per page) is 20033504 KB, which is roughly ~20,000 MB, or 20 GB? These queries are often executed and can take quite some time to run. Eventually RAM is used up because the cache keeps fattening, and timeouts occur once SQL Server can no longer shuffle pages in and out of the buffer pool freely. Costs go up. Am I correct in my understanding? I've read that I should tune the associated T-SQL and create appropriate indexes. Obviously cutting down the I/O would make SQL Server use less RAM - or maybe it would just slow down the rate at which the whole RAM gets chewed up. If far fewer pages are read, maybe it will all run much better even when usage is high (less time swapping, etc.). Currently, our only option is to restart SQL Server once a week when RAM usage is high; suddenly the timeouts disappear and SQL breathes again. I'm sure lots of DBAs have been in this situation. Before I start digging out all of the bad T-SQL and putting indexes here and there, is there something else I can do? Any advice is much appreciated. Leo.
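
    As an aside, a sketch of how the same "top readers" list can be pulled from the plan cache with DMVs (available since SQL Server 2005) instead of a day-long Profiler trace:

        SELECT TOP 10
            qs.total_logical_reads,
            qs.execution_count,
            st.text
        FROM sys.dm_exec_query_stats AS qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
        ORDER BY qs.total_logical_reads DESC;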

    Read the article

  • How does the OS communicate with hardware?

    - by Jack
    Hi, how can a program running on the CPU (mostly the OS) access other PC hardware, such as the graphics card, HDD and so on? From what I have read, in DOS this was done using BIOS calls, specifically the INT instruction. But the INT instruction only jumps to a certain location in RAM. So how can a program stored in RAM access other computer hardware, when the CPU can only access RAM and receive interrupts? And does Windows use INT instructions as well, or is there a newer way to communicate with hardware? Thanks.
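
    For illustration, a minimal sketch of one of the mechanisms involved: on x86 the CPU has in/out instructions that address I/O ports rather than RAM (the other big mechanism is memory-mapped I/O, where device registers are mapped into the address space). This Linux-only C fragment writes to port 0x80, the traditional POST diagnostic port; it needs root.

        #include <sys/io.h>

        int main(void) {
            if (ioperm(0x80, 1, 1))   /* ask the kernel for access to port 0x80 */
                return 1;
            outb(0x42, 0x80);         /* an "out" instruction: the byte goes to a device, not to RAM */
            return 0;
        }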

    Read the article

  • Motherboard Issue - 3 Beep Bios (memory error) despite new RAM

    - by Glenn
    I have an Intel dG43RK motherboard, bought new and sealed, and have tried two different brands and speeds of RAM with a 3-beep BIOS indicating a memory error, which also occurs without RAM installed (as it should). The memory tried is; 1x4GB 1333 Kingston HyperX DDR3 RAM (New and Sealed) 2x4GB Team Elite 1066 DDR3 RAM (New and Sealed) I have tried multiple configurations and seating layouts and still no luck. I also have a GT520 graphics card on board as I dislike in-built graphics in most cases and had it at hand (also new and sealed). The only used parts are the CPU, which worked in my previous tower and was directly taken from the PC into the new set-up and the CPU Fan which will be replaced with a new fan in the foreseeable future once this is resolved. I've run out of ideas myself and any help is appreciated.

    Read the article

  • Null pointer exception comparing two strings in Java

    - by David
    I got this error message and I'm not quite sure what's wrong:

        Exception in thread "main" java.lang.NullPointerException
            at Risk.runTeams(Risk.java:384)
            at Risk.blobRunner(Risk.java:220)
            at Risk.genRunner(Risk.java:207)
            at Risk.main(Risk.java:176)

    Here are the relevant bits of code (I will draw attention to the line numbers within the error message via comments in the code, as well as the inputs I give the program while it's running, where relevant):

        public class Risk {
            ...
            public static void main (String[] arg) {
                String CPUcolor = CPUcolor();
                genRunner(CPUcolor); // line 176
                ...
            }
            ...
            // When this method runs I select 0 and run blob, since it's my only option.
            // Nothing is wrong with this method as far as I know; it matters only because
            // it takes me to blobRunner and because another relevant line number appears here.
            public static void genRunner (String CPUcolor) {
                String[] strats = new String[1];
                strats[0] = "0 - Blob";
                int s = chooseStrat(strats);
                if (s == 0) blobRunner(CPUcolor); // this is line 207
            }
            ...
            public static void blobRunner (String CPUcolor) {
                System.out.println("blob Runner");
                int turn = 0;
                boolean gameOver = false;
                Dice other = new Dice("other");
                Dice a1 = new Dice("a1");
                Dice a2 = new Dice("a2");
                Dice a3 = new Dice("a3");
                Dice d1 = new Dice("d1");
                Dice d2 = new Dice("d2");
                space(5);
                Territory[] board = makeBoard();
                IdiceRoll(other);
                String[] colors = runTeams(CPUcolor); // this is line 220
                Card[] deck = Card.createDeck();
                System.out.println(StratUtil.canTurnIn(deck));
                while (gameOver == false) {
                    idler(deck);
                    board = assignTerri(board, colors);
                    checkBoard(board, colors);
                }
            }
            ...
            public static String[] runTeams (String CPUcolor) {
                boolean z = false;
                String[] a = new String[6];
                while (z == false) {
                    a = assignTeams();
                    printOrder(a);
                    boolean CPU = false;
                    for (int i = 0; i < a.length; i++) {
                        if (a[i].equals(CPUcolor)) CPU = true; // this is line 384
                    }
                    if (CPU == false) {
                        System.out.println("ERROR YOU NEED TO INCLUDE THE COLOR OF THE CPU IN THE TURN ORDER");
                        runTeams(CPUcolor);
                    }
                    System.out.println("is this turn order correct? (Y/N)");
                    String s = getIns();
                    while (!((s.equals("y")) || (s.equals("Y")) || (s.equals("n")) || (s.equals("N")))) {
                        System.out.println("try again");
                        s = getIns();
                    }
                    if (s.equals("y") || s.equals("Y")) z = true;
                }
                return a;
            }
            ...
        } // this } closes the class

    The reason I don't think I should be getting a NullPointerException is that in the line a[i].equals(CPUcolor), a at index i holds a String and CPUcolor is a String; at this point both definitely have a value and neither is null. Can anyone please tell me what's going wrong?
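
    For illustration, the usual defensive rewrite of the comparison at line 384 (a sketch only: if assignTeams() ever returns an array with null slots, which is what the stack trace suggests, this yields false instead of throwing):

        for (int i = 0; i < a.length; i++) {
            // CPUcolor is known to be non-null here, so compare in this direction;
            // a null a[i] then makes equals() return false instead of blowing up
            if (CPUcolor.equals(a[i])) CPU = true;
        }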

    Read the article

  • IIS slow response

    - by Martin Ševic
    I have developed an ASP.NET 4.5 application which reads sensor information from an SQLite database every 3 seconds. The application runs nicely on my local development machine on IIS Express. I created a virtual machine (4x 3.25 GHz CPU; 6 GB RAM) on which I installed Windows Server 2012 and IIS 8 in order to test the application on a real server, because we will run it on a production machine later. After installing VC++ 2010 x64 and VC++ 2010 x86 and setting "Enable 32-bit applications" to true in the application pool, the website started to work, but there is a large problem with response time: for example, a 10-second delay before a page loads. CPU utilization is about 10% and RAM about 1.5 GB. I am new to configuring IIS, so I want to ask if there are any tips on how to make it faster. I am sure there is some setting that will make it work normally. Many thanks.

    Read the article

  • Makefile linking issue: Undefined symbols for architecture x86_64

    - by user1035839
    I am working on getting a few files to link together using my makefile and C++, and am getting the following error when running make:

        g++ -bind_at_load `pkg-config --cflags opencv` -c -o compute_gist.o compute_gist.cpp
        g++ -bind_at_load `pkg-config --cflags opencv` -c -o gist.o gist.cpp
        g++ -bind_at_load `pkg-config --cflags opencv` -c -o standalone_image.o standalone_image.cpp
        g++ -bind_at_load `pkg-config --cflags opencv` -c -o IplImageConverter.o IplImageConverter.cpp
        g++ -bind_at_load `pkg-config --cflags opencv` -c -o GistCalculator.o GistCalculator.cpp
        g++ -bind_at_load `pkg-config --cflags opencv` `pkg-config --libs opencv` compute_gist.o gist.o standalone_image.o IplImageConverter.o GistCalculator.o -o rungist
        Undefined symbols for architecture x86_64:
          "color_gist_scaletab(color_image_t*, int, int, int const*)", referenced from:
              _main in compute_gist.o
        ld: symbol(s) not found for architecture x86_64
        collect2: ld returned 1 exit status
        make: *** [rungist] Error 1

    My makefile is as follows (note: I don't need the OpenCV bindings yet, but will be coding with OpenCV later):

        CXX = g++
        CXXFLAGS = -bind_at_load `pkg-config --cflags opencv`
        LFLAGS = `pkg-config --libs opencv`

        SRC = \
            compute_gist.cpp \
            gist.cpp \
            standalone_image.cpp \
            IplImageConverter.cpp \
            GistCalculator.cpp

        OBJS = $(SRC:.cpp=.o)

        rungist: $(OBJS)
            $(CXX) $(CXXFLAGS) $(LFLAGS) $(OBJS) -o $@

        all: rungist

        clean:
            rm -rf $(OBJS) rungist

    The method header is located in gist.h:

        float *color_gist_scaletab(color_image_t *src, int nblocks, int n_scale, const int *n_orientations);

    And the method is defined in gist.cpp:

        float *color_gist_scaletab(color_image_t *src, int w, int n_scale, const int *n_orientation) {

    And finally compute_gist.cpp (the main file):

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        #include "gist.h"

        static color_image_t *load_ppm(const char *fname) {
            FILE *f = fopen(fname, "r");
            if (!f) {
                perror("could not open infile");
                exit(1);
            }
            int width, height, maxval;
            if (fscanf(f, "P6 %d %d %d", &width, &height, &maxval) != 3 || maxval != 255) {
                fprintf(stderr, "Error: input not a raw PPM with maxval 255\n");
                exit(1);
            }
            fgetc(f); /* eat the newline */
            color_image_t *im = color_image_new(width, height);
            int i;
            for (i = 0; i < width*height; i++) {
                im->c1[i] = fgetc(f);
                im->c2[i] = fgetc(f);
                im->c3[i] = fgetc(f);
            }
            fclose(f);
            return im;
        }

        static void usage(void) {
            fprintf(stderr, "compute_gist options... [infilename]\n"
                    "infile is a PPM raw file\n"
                    "options:\n"
                    "[-nblocks nb] use a grid of nb*nb cells (default 4)\n"
                    "[-orientationsPerScale o_1,..,o_n] use n scales and compute o_i orientations for scale i\n");
            exit(1);
        }

        int main(int argc, char **args) {
            const char *infilename = "/dev/stdin";
            int nblocks = 4;
            int n_scale = 3;
            int orientations_per_scale[50] = {8, 8, 4};
            while (*++args) {
                const char *a = *args;
                if (!strcmp(a, "-h")) usage();
                else if (!strcmp(a, "-nblocks")) {
                    if (!sscanf(*++args, "%d", &nblocks)) {
                        fprintf(stderr, "could not parse %s argument", a);
                        usage();
                    }
                } else if (!strcmp(a, "-orientationsPerScale")) {
                    char *c;
                    n_scale = 0;
                    for (c = strtok(*++args, ","); c; c = strtok(NULL, ",")) {
                        if (!sscanf(c, "%d", &orientations_per_scale[n_scale++])) {
                            fprintf(stderr, "could not parse %s argument", a);
                            usage();
                        }
                    }
                } else {
                    infilename = a;
                }
            }
            color_image_t *im = load_ppm(infilename);
            /* Here's the method call -> :( */
            float *desc = color_gist_scaletab(im, nblocks, n_scale, orientations_per_scale);
            int i;
            int descsize = 0;
            /* compute descriptor size */
            for (i = 0; i < n_scale; i++)
                descsize += nblocks*nblocks*orientations_per_scale[i];
            descsize *= 3; /* color */
            /* print descriptor */
            for (i = 0; i < descsize; i++)
                printf("%.4f ", desc[i]);
            printf("\n");
            free(desc);
            color_image_delete(im);
            return 0;
        }

    Any help would be greatly appreciated. I hope this is enough info; let me know if I need to add more.
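
    One way to narrow this down (a sketch, not a diagnosis): compare the symbol the linker is looking for with what gist.o actually exports. A C-versus-C++ name-mangling mismatch - for example an extern "C" wrapper around the definition but not the declaration, or vice versa - shows up immediately here.

        # what does compute_gist.o expect? (U = undefined/needed)
        nm compute_gist.o | grep color_gist_scaletab

        # what does gist.o actually provide? (T = defined; c++filt demangles)
        nm gist.o | grep color_gist_scaletab | c++filt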

    Read the article

  • kvm process has too large a memory footprint on host

    - by gucki
    I'm using the latest Ubuntu (quantal) and start a KVM guest which should have 2048 MB of memory. After a few hours I can see that the kvm process of this guest is at around 2700 MB, i.e. 700 MB more than the guest should be able to consume. A small overhead like 1% would be OK, but not 30%!

        root      8631 74.0 22.2 4767484 2752336 ?  Sl   Nov07 512:58 kvm -cpu kvm64 -smp sockets=1,cores=2 -cpu kvm64 -m 2048 -device virtio-blk-pci,drive=drive-virtio0,id=virtio0,bus=pci.0,addr=0xa,bootindex=100 -drive file=rbd:data/vm-disk-1,if=none,id=drive-virtio0,cache=writeback,aio=native -device virtio-net-pci,netdev=net0,bus=pci.0,addr=0x12,id=net0,mac=02:7a:86:e6:1a:6c,bootindex=200 -netdev type=tap,id=net0,vhost=on -usbdevice tablet -nodefaults -enable-kvm -daemonize -boot menu=on -vga cirrus
        root      8694  0.0  0.0       0       0 ?  S    Nov07   0:00 [kvm-pit/8631]

    How is this possible, and how do I prevent it?

    Read the article

  • High load average threshold in Linux

    - by user2481010
    A friend of mine said that his server's load average sometimes goes above 500-1000. To me that is a strange value, because I have never seen a load average of more than 10. I asked him for some snapshots of top and the memory usage, and he gave me the following details:

    top:

        top - 06:06:03 up 117 days, 23:02, 2 users, load average: 147.37, 44.57, 15.95
        Tasks: 116 total, 2 running, 113 sleeping, 0 stopped, 1 zombie
        Cpu(s): 16.6%us, 6.9%sy, 0.0%ni, 9.2%id, 66.5%wa, 0.0%hi, 0.8%si, 0.0%st
        Mem: 8161648k total, 7779528k used, 382120k free, 3296k buffers
        Swap: 5242872k total, 1293072k used, 3949800k free, 168660k cached

    free:

        $ free -gt
                     total       used       free     shared    buffers     cached
        Mem:             7          6          1          0          0          4
        -/+ buffers/cache:          1          5
        Swap:            4          0          4
        Total:          12          6          6

    Total CPUs:

        $ nproc
        8

    My question: is a load average of more than 100 possible on an 8-core, 12 GB server? I have read many tutorials and articles on load average which give the rule of thumb "number of cores = max load"; by that rule the max load average here would be 16, so how is his server running at a load of 147.37? He said that is one of the lower values - it sometimes goes above 500.

    Read the article

  • VirtualBox crashes quite often in Windows 8

    - by user1776158
    I just installed Windows 8 on my computer. I got the ISO and the product key from my university, so the software itself is sound. I use VirtualBox a lot, and ever since I moved to Windows 8 I have noticed that it crashes more often. In particular, it is very bad at running multiple guests: my CPU usage will be at something like 20% with only 3 guests open, and my entire computer just freezes, cursor and all. In Windows 7 I was able to open something like 6 (not that I ever needed to) and really push my CPU. I haven't experienced any other issues with Windows 8 yet. Has anyone encountered this? Thanks!

    Read the article

  • Building a new PC - no display, no beeps

    - by Adam
    Hi. I am building a new PC using this motherboard: GA-MA785GMT-UD2H, and a 500 W power supply (1 x 20-pin & 1 x 4-pin connectors). The CPU fan, hard drive and power supply all spin up, but there is no display on the monitor and no beeps. I have tried taking out all of the memory (still no beeps) and using a different power supply (still no display). I only have the motherboard, memory, CPU, heatsink & fan, and power supply connected. Any ideas? Do I have a faulty motherboard?

    Read the article

  • Outbound Traffic Logging on ASA 5520 possible?

    - by j2k4j
    Looking at the ASDM (6.4) for my ASA 5520, I get a nice summary of the traffic status, with items like "interface traffic usage" and "connections per second". This works well, but only shows the data for the last 5-6 minutes or so. Recently, I've been asked whether it's possible to pull up this same type of traffic data for a particular time in the past (such as: find the traffic usage for a 3-minute period on date xx:xx:xx @ time xx:xx:xx). I've noticed that my ASA 5520 is logging the warnings, errors, etc. that it processes, but traffic data is not logged (yet), according to my search through the ASA. Is logging the traffic data amounts actually possible? Is there any way to find out past traffic data and similar values? Thanks!

    Read the article

  • How to google a symbol keyword like "$?"

    - by ZhengZhiren
    I saw a trick in a book: in a Linux shell, we can use $? to get the return value of the last command. For example, we run a command, and if it exits normally the return value is 0; we then run echo $? and get 0 on the screen. I want to google this kind of usage, so I have to type the two symbols $? into the search box, but the search engine just returns nothing to me... I have looked at the Google help page, but still can't find a solution. So my question is: how can I search with this kind of keyword? Or, if you can give me some advice on the usage of $? or that sort of thing, that would also be appreciated.
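
    For reference, a minimal interactive sketch of the $? behaviour being described:

        $ true          # a command that succeeds
        $ echo $?
        0
        $ false         # a command that fails
        $ echo $?
        1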

    Read the article

  • What is the formula for HughesNet FAP calculation?

    - by JohnFx
    I am somewhat frustrated with the only FAP monitor I have found on the net, because it relies on a running count of bandwidth usage, which (1) requires a service running in the background and (2) tends to become inaccurate over time. Given that there is a diagnostics page in the modem's firmware that reports the exact usage per hour, I was planning on writing a more accurate version with a better UI. However, it appears that HughesNet keeps the exact formula for calculating whether you are in FAP a secret. I have no idea why they wouldn't be more forthcoming with this information. I'm wondering if anyone out in SU-land has done some trial-and-error testing to reverse-engineer the formula, or has some inside knowledge to share.

    Read the article
