Search Results

Search found 59196 results on 2368 pages for 'time wastrel'.


  • Understanding omission failure in distributed systems

    - by karthik A
    The following text says something I'm not quite able to agree with: client C sends a request R to server S. The time taken by a communication link to transport R over the link is D. P is the maximum time needed by S to receive, process and reply to R. If omission failure is assumed, then if no reply to R is received within 2(D+P), C will never receive a reply to R. Why is the time here 2(D+P)? As I understand it, shouldn't it be 2D+P?
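    For reference, here is a worked timeline of the round trip as the question lays it out (D and P as defined above; this is only a sketch of the asker's own reasoning, not of the quoted text's argument):

        t0              C sends R
        t0 + D          R has been transported to S
        t0 + D + P      S has received, processed and replied to R
        t0 + 2D + P     the reply has been transported back to C

    Summed along this path the worst case is 2D + P, whereas the quoted bound 2(D + P) = 2D + 2P allows one extra P on top of it.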

    Read the article

  • How do I find the install directory of a Windows Service, using C#?

    - by endian
    I'm pretty sure that a Windows service gets C:\winnt (or similar) as its working directory when installed using InstallUtil.exe. Is there any way I can access, or otherwise capture (at install time), the directory from which the service was originally installed? At the moment I'm manually entering that into the app.exe.config file, but that's horribly manual and feels like a hack. Is there a programmatic way, either at run time or install time, to determine where the service was installed from?
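    As a hedged run-time sketch (not from the question, and assuming "installed from" means the directory that holds the service executable rather than InstallUtil's working directory): the entry assembly's location gives that directory regardless of the C:\winnt working directory the service starts with.

        using System.IO;
        using System.Reflection;

        // Directory containing the running service executable (not the process
        // working directory, which is typically the Windows system directory).
        string exeDir = Path.GetDirectoryName(Assembly.GetEntryAssembly().Location);

        // Optionally reset the working directory so relative paths resolve
        // next to the executable instead of under C:\winnt.
        Directory.SetCurrentDirectory(exeDir);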

    Read the article

  • Setting timeout for embedded Lua

    - by skyeagle
    I have embedded Lua in a C/C++ application. I want to be able to set a timeout value to prevent getting trapped by badly written scripts that can result in infinite loops (or even string searches that take practically forever to complete). Basically, I want to be able to set a time interval, and if the script has not finished running by the end of that interval, I want to be able to kill the Lua script engine (gracefully, if possible). Does anyone know of a best-practice way to do this?
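    One commonly used approach, sketched below in C (not from the question; function and variable names are illustrative): install a Lua count hook from the host, let it fire every N VM instructions, and raise a Lua error once a wall-clock deadline has passed so the script unwinds back to the host. Note that a hook cannot interrupt a single long-running C call (such as one very expensive string match), so this bounds script loops rather than every case mentioned above.

        #include <time.h>
        #include <lua.h>
        #include <lauxlib.h>

        static time_t deadline;   /* wall-clock time after which the script is aborted */

        static void timeout_hook(lua_State *L, lua_Debug *ar) {
            (void)ar;
            if (time(NULL) > deadline)
                luaL_error(L, "script timed out");   /* unwinds back into the host */
        }

        /* Run `code`, giving up roughly `seconds` after starting. */
        static int run_with_timeout(lua_State *L, const char *code, int seconds) {
            int status;
            deadline = time(NULL) + seconds;
            lua_sethook(L, timeout_hook, LUA_MASKCOUNT, 1000);  /* check every 1000 instructions */
            status = luaL_dostring(L, code);                    /* non-zero on error or timeout */
            lua_sethook(L, NULL, 0, 0);                         /* remove the hook again */
            return status;
        }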

    Read the article

  • Why can't I rename a data frame column inside a list?

    - by Moreno Garcia
    I would like to rename some columns from CPU_Usage to the process name before I merge the dataframes in order to make it more legible.

        names(byProcess[[1]])
        # [1] "Time" "CPU_Usage"
        names(byProcess[1])
        # [1] "CcmExec_3344"
        names(byProcess[[1]][2]) <- names(byProcess[1])
        names(byProcess[[1]][2])
        # [1] "CPU_Usage"
        names(byProcess[[1]][2]) <- 'test'
        names(byProcess[[1]][2])
        # [1] "CPU_Usage"
        lapply(byProcess, names)
        # $CcmExec_3344
        # [1] "Time" "CPU_Usage"
        #
        # ... (removed several entries to make it more readable)
        #
        # $wrapper_1604
        # [1] "Time" "CPU_Usage"

    Read the article

  • Simple aggregating query very slow in PostgreSql, any way to improve?

    - by Ash
    Hi, I have a table which holds files and their types, such as

        CREATE TABLE files (
            id SERIAL PRIMARY KEY,
            name VARCHAR(255),
            filetype VARCHAR(255),
            ...
        );

    and another table for holding file properties, such as

        CREATE TABLE properties (
            id SERIAL PRIMARY KEY,
            file_id INTEGER CONSTRAINT fk_files REFERENCES files(id),
            size INTEGER,
            ... // other property fields
        );

    The file_id field has an index. The files table has around 800k rows, and the properties table around 200k (not all files necessarily have/need properties). I want to run aggregating queries, for example to find the average size and standard deviation for all file types. But it's very slow - around 70 seconds for the query below. I understand it needs a sequential scan, but it still seems like too much. Here's the query:

        SELECT f.filetype, avg(size), stddev(size)
        FROM files as f, properties as pr
        WHERE f.id = pr.file_id
        GROUP BY f.filetype;

    and the explain:

        HashAggregate  (cost=140292.20..140293.94 rows=116 width=13) (actual time=74013.621..74013.954 rows=110 loops=1)
          ->  Hash Join  (cost=6780.19..138945.47 rows=179564 width=13) (actual time=1520.104..73156.531 rows=179499 loops=1)
                Hash Cond: (f.id = pr.file_id)
                ->  Seq Scan on files f  (cost=0.00..108365.41 rows=1140941 width=9) (actual time=0.998..62569.628 rows=805270 loops=1)
                ->  Hash  (cost=3658.64..3658.64 rows=179564 width=12) (actual time=1131.053..1131.053 rows=179499 loops=1)
                      ->  Seq Scan on properties pr  (cost=0.00..3658.64 rows=179564 width=12) (actual time=0.753..557.171 rows=179574 loops=1)
        Total runtime: 74014.520 ms

    Any ideas why it is so slow, or how to make it faster?

    Read the article

  • Write/read count to txt file

    - by Brian
    Hi, I need a batch file that writes a count number to a txt file. The next time the batch file is run, it should read the current count from the txt file, add 1 to it, and save the new value back to the txt file (nothing else is in the txt file). When the count reaches 5 it should start from 1 again. Example: the 1st time Count.bat runs, count.txt has no count, so Count.bat saves the value 1 in count.txt. The 2nd time Count.bat is run, it reads 1 from count.txt and saves the new value 2 to count.txt. When Count.bat is run for the 6th time, it should start over by saving the value 1 in count.txt. I think this should be easy to do, but I'm not used to batch commands, so hopefully someone here can help me.
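    A minimal sketch of that counter logic in batch (untested, and assuming count.txt sits next to the script as the question describes):

        @echo off
        set count=0
        if exist count.txt set /p count=<count.txt
        set /a count+=1
        if %count% gtr 5 set count=1
        rem redirection written first so the digit is not read as a file handle
        >count.txt echo %count%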

    Read the article

  • load balance timeout SQL connection string

    - by george9170
    It seems that if there is a SQL memory leak somewhere and you don't have time to find it, you can use the Load Balance Timeout option in a SQL connection string to destroy the connection after x seconds. Am I right in assuming I can set the load balance timeout to 30-40 seconds and then hunt for the leak later, while in the meantime the leak will not affect my application too much?

    Read the article

  • cleartool question

    - by chuanose
    Let's say I have a directory at \testfolder, and the latest version is currently at /main/10. I know that the operation resulting in testfolder@@/main/6 was to remove a file named test.txt. What's a sequence of cleartool operations that can be done in a script that will take "testfolder@@/main/6" and "test.txt" as input, and will cat out the contents of test.txt as of that time? One way I can think of is to get the time of the /main/6 operation, create a view with its config spec's -time set to that time, and then cat test.txt in that directory. But I'm wondering if I can do this in an easier way that doesn't involve manipulating config specs, perhaps through "cleartool find" and extended path names.

    Read the article

  • Stopping long-running requests in Pylons

    - by Jack
    I'm working on an application using Pylons and I was wondering if there was a way to make sure it doesn't spend way too much time handling one request. That is, I would like to find a way to put a timer on each request such that when too much time elapses, the request just stops (and possibly returns some kind of error). The application is supposed to allow users to run some complex calculations but I would like to make sure that if a calculation starts taking too much time, we stop it to allow other calculations to take place.
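    As a generic, hedged sketch (not Pylons-specific, Unix-only, and only safe in the main thread of the worker process): SIGALRM can be used to abort a calculation that runs past a deadline, leaving the caller to turn the exception into an error response.

        import signal

        class CalculationTimeout(Exception):
            pass

        def run_with_timeout(func, seconds, *args, **kwargs):
            """Run func(*args, **kwargs), raising CalculationTimeout after `seconds`."""
            def _handler(signum, frame):
                raise CalculationTimeout("calculation exceeded %d seconds" % seconds)
            previous = signal.signal(signal.SIGALRM, _handler)
            signal.alarm(seconds)
            try:
                return func(*args, **kwargs)
            finally:
                signal.alarm(0)                        # cancel any pending alarm
                signal.signal(signal.SIGALRM, previous)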

    Read the article

  • http connection timeout issues

    - by Mark
    I'm running into an issue when I try to use HttpClient to connect to a URL. The HTTP connection takes much longer to time out, even after I set a connection timeout:

        int timeoutConnection = 5000;
        HttpConnectionParams.setConnectionTimeout(httpParameters, timeoutConnection);
        int timeoutSocket = 5000;
        HttpConnectionParams.setSoTimeout(httpParameters, timeoutSocket);

    It works perfectly most of the time. However, every once in a while the HTTP connection runs forever and ignores the setConnectionTimeout, especially when the phone is connected to wifi and has been idling. So after the phone has been idling, the first time I try to connect the HTTP connection ignores the setConnectionTimeout and runs forever; after I cancel it and try again, it works like a charm every time. But that one time it doesn't work, it creates a thread timeout error. I tried using a different thread; that works, but I know the thread is running for a long time. I understand that the wifi goes to sleep on idle, but I don't understand why it's ignoring the setConnectionTimeout. If anyone can help, I'd really appreciate it.

    Read the article

  • Problem with a loop in MATLAB

    - by Jessy
        no   time   scores
        1    10     123
        2    11     22
        3    12     22
        4    50     55
        5    60     22
        6    70     66
        .    .      .
        .    .      .
        n    n      n

    Above is the content of my txt file (thousands of lines). 1st column - number of the sample; 2nd column - time (from beginning to end -> accumulated); 3rd column - scores. I wanted to create a new file which will be the total of every three samples of the scores divided by the time difference across those same samples, e.g.

        (123+22+22) / (12-10) = 167/2 = 83.5
        (55+22+66) / (70-50)  = 143/20 = 7.15

    New txt file:

        83.5
        7.15
        .
        .
        n

    So far I have this code:

        fid = fopen('data.txt')
        data = textscan(fid, '%*d %d %d')
        time = (data{1})
        score = (data{2})
        for sample = 1:length(score)
            ..... // I'm stuck here ..
        end
        ....

    Read the article

  • Simple php statement, I can't get it to work!

    - by eberswine
    How do I get $schedule = true and $schedule2 = true to work? I know this is easy and I'm overlooking something simple!

        else {
            if ($schedule) {
                $where[] = 'date > '.(time() + $config_date_adjust * 60 - 432000);
            } else {
                $where[] = 'date < '.(time() + $config_date_adjust * 60);
            }
            $schedule = false;
        }
        else {
            if ($schedule2) {
                $where[] = 'date > '.(time() + $config_date_adjust * 60 - 86400);
            } else {
                $where[] = 'date < '.(time() + $config_date_adjust * 60);
            }
            $schedule2 = false;
        }

    Read the article

  • Makefile rule depending on change of number of files instead of change in content of files.

    - by goathens
    I'm using a makefile to automate some document generation. I have several documents in a directory, and one of my makefile rules will generate an index page of those files. The list of files itself is loaded on the fly using list := $(shell ls documents/*.txt) so I don't have to bother manually editing the makefile every time I add a document. Naturally, I want the index-generation rule to trigger when number/title of files in the documents directory changes, but I don't know how to set up the prerequisites to work in this way. I could use .PHONY or something similar to force the index-generation to run all the time, but I'd rather not waste the cycles. I tried piping ls to a file list.txt and using that as a prerequisite for my index-generation rule, but that would require either editing list.txt manually (trying to avoid it), or auto-generating it in the makefile (this changes the creation time, so I can't use list.txt in the prerequisite because it would trigger the rule every time).
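    One hedged sketch of the usual workaround for the mtime problem described above (target and script names are illustrative, and recipe lines must start with a tab): regenerate the list into a temporary file on every run, but only replace list.txt when its contents actually changed, so the index rule sees a newer timestamp only when a document was added or removed.

        documents := $(wildcard documents/*.txt)

        # The index is rebuilt only when list.txt's timestamp moves forward.
        index.html: list.txt
        	./generate-index.sh > $@

        # FORCE makes this recipe run on every invocation, but list.txt is only
        # overwritten (and its mtime bumped) when the document set changed.
        list.txt: FORCE
        	@echo $(documents) > list.tmp
        	@cmp -s list.tmp list.txt || cp list.tmp list.txt
        	@rm -f list.tmp

        .PHONY: FORCE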

    Read the article

  • Threaded Python port scanner

    - by Amnite
    I am having issues with a port scanner I'm editing to use threads. These are the basics of the original code:

        for i in range(0, 2000):
            s = socket(AF_INET, SOCK_STREAM)
            result = s.connect_ex((TargetIP, i))
            if(result == 0):
                c = "Port %d: OPEN\n" % (i,)
            s.close()

    This takes approximately 33 minutes to complete, so I thought I'd thread it to make it run a little faster. This is my first threading project, so it's nothing too extreme, but I've run the following code for about an hour and get no exceptions yet no output. Am I just doing the threading wrong, or what?

        import threading
        from socket import *
        import time

        a = 0
        b = 0
        c = ""
        d = ""

        def ScanLow():
            global a
            global c
            for i in range(0, 1000):
                s = socket(AF_INET, SOCK_STREAM)
                result = s.connect_ex((TargetIP, i))
                if(result == 0):
                    c = "Port %d: OPEN\n" % (i,)
                s.close()
                a += 1

        def ScanHigh():
            global b
            global d
            for i in range(1001, 2000):
                s = socket(AF_INET, SOCK_STREAM)
                result = s.connect_ex((TargetIP, i))
                if(result == 0):
                    d = "Port %d: OPEN\n" % (i,)
                s.close()
                b += 1

        Target = raw_input("Enter Host To Scan:")
        TargetIP = gethostbyname(Target)
        print "Start Scan On Host ", TargetIP
        Start = time.time()
        threading.Thread(target = ScanLow).start()
        threading.Thread(target = ScanHigh).start()
        e = a + b
        while e < 2000:
            f = raw_input()
        End = time.time() - Start
        print c
        print d
        print End
        g = raw_input()

    Read the article

  • Always get the correct datetime?

    - by Jesper Mansa
    I was wondering if there is a way/site/link with the correct time - not the server's datetime or the client's datetime. I'm using datetime to count down on my gaming site for when it is the user's turn to make a play. Users come from all over the world, so using the client's time would not match between, say, the US and Europe. Normally I would then use the server's time, but somehow it sometimes skips 1.2 hours? I would like to make sure that everybody makes a timestamp from the same source and that the source is always correct! Hoping for help, and thanks in advance.

    Read the article

  • Do I have to hash twice in C#?

    - by Joe H
    I have code like the following:

        class MyClass
        {
            string Name;
            int NewInfo;
        }

        List<MyClass> newInfo = ....          // initialize list with some values
        Dictionary<string, int> myDict = .... // initialize dictionary with some values

        foreach(var item in newInfo)
        {
            if(myDict.ContainsKey(item.Name))       // 'A' I hash the first time here
                myDict[item.Name] += item.NewInfo;  // 'B' I hash the second (and third?) time here
            else
                myDict.Add(item.Name, item.NewInfo);
        }

    Is there any way to avoid doing two lookups in the Dictionary -- the first time to see if it contains an entry, and the second time to update the value? There may even be two hash lookups on line 'B' -- one to get the int value and another to update it.
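    A hedged sketch of one way to trim this down (same names as above): Dictionary<TKey,TValue>.TryGetValue folds the existence check and the read into a single lookup, leaving just the store as a second one.

        foreach (var item in newInfo)
        {
            int current;
            if (myDict.TryGetValue(item.Name, out current))    // one hash: check and read
                myDict[item.Name] = current + item.NewInfo;    // second hash: store
            else
                myDict.Add(item.Name, item.NewInfo);
        }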

    Read the article

  • How do I keep connections in Object Explorer?

    - by dotnet-practitioner
    This is regarding SQL Server 2008 Management Studio. I connect to DBs in different environments, and every time I launch the SQL management console I have to sign in again to get those connections back in Object Explorer. Is there a way to persist the connections so I don't have to log in to the different environments every time?

    Read the article

  • How to make my WPF application as FAST as Outlook

    - by Raul Otaño
    Typical WPF applications take some time to load moderately complex views; once the view is loaded it works fine. For example, in a Master-Detail view, if the Detail view is very complex and uses different DataTemplates, it takes a few seconds (2-3 seconds) to load the view. When I open the Outlook application, for instance, it renders complex views and is relatively much faster. Is there a way to increase the performance of my WPF application? Maybe a way to avoid loading the template's data every time the "master" item changes, and load it only once for the application's lifetime? I will appreciate any suggestions.

    Read the article

  • How can I limit parsing and display the previously parsed contents on iPhone?

    - by Warrior
    I am new to iPhone development. I parse a URL and display its contents in a table. Clicking a row plays a video. When I tap a Done button, I call the table view again, and when I do, it parses the URL once more to display the contents. I want to limit the parsing to the first time only, and on subsequent visits display the contents that were already parsed. How can I achieve this? Please help me out. Thanks.

    Read the article

  • Speeding up a group by date query on a big table in postgres

    - by zaius
    I've got a table with around 20 million rows. For argument's sake, let's say there are two columns in the table - an id and a timestamp. I'm trying to get a count of the number of items per day. Here's what I have at the moment:

        SELECT DATE(timestamp) AS day, COUNT(*)
        FROM actions
        WHERE DATE(timestamp) >= '20100101' AND DATE(timestamp) < '20110101'
        GROUP BY day;

    Without any indices, this takes about 30s to run on my machine. Here's the explain analyze output:

        GroupAggregate  (cost=675462.78..676813.42 rows=46532 width=8) (actual time=24467.404..32417.643 rows=346 loops=1)
          ->  Sort  (cost=675462.78..675680.34 rows=87021 width=8) (actual time=24466.730..29071.438 rows=17321121 loops=1)
                Sort Key: (date("timestamp"))
                Sort Method: external merge  Disk: 372496kB
                ->  Seq Scan on actions  (cost=0.00..667133.11 rows=87021 width=8) (actual time=1.981..12368.186 rows=17321121 loops=1)
                      Filter: ((date("timestamp") >= '2010-01-01'::date) AND (date("timestamp") < '2011-01-01'::date))
        Total runtime: 32447.762 ms

    Since I'm seeing a sequential scan, I tried to index on the date expression:

        CREATE INDEX ON actions (DATE(timestamp));

    which cuts the runtime by about 50%:

        HashAggregate  (cost=796710.64..796716.19 rows=370 width=8) (actual time=17038.503..17038.590 rows=346 loops=1)
          ->  Seq Scan on actions  (cost=0.00..710202.27 rows=17301674 width=8) (actual time=1.745..12080.877 rows=17321121 loops=1)
                Filter: ((date("timestamp") >= '2010-01-01'::date) AND (date("timestamp") < '2011-01-01'::date))
        Total runtime: 17038.663 ms

    I'm new to this whole query-optimization business, and I have no idea what to do next. Any clues how I could get this query running faster?

    Read the article

  • How to profile my C++ application on Linux

    - by richard
    Hi, I would like to profile my C++ application on Linux. I would like to find out how much time my application spends on CPU processing versus time spent blocked on IO or being idle. I know there is a profiling tool called Valgrind on Linux, but it breaks down time spent in each method, and it does not give me an overall picture of how much time is spent on CPU processing versus being idle. Or is there a way to do that with Valgrind? Thank you.

    Read the article
