Search Results

Search found 59278 results on 2372 pages for 'time estimation'.


  • Best way to carry & modify a variable through various instances and functions?

    - by bobsoap
    I'm looking for the "best practice" way to achieve a message / notification system. I'm using an OOP-based approach for the script and would like to do something along the lines of this: if(!$something) $messages->add('Something doesn\'t exist!'); The add() method in the messages class looks somewhat like this: class messages { public function add($new) { $messages = $THIS_IS_WHAT_IM_LOOKING_FOR; //array $messages[] = $new; $THIS_IS_WHAT_IM_LOOKING_FOR = $messages; } } In the end, there is a method in which reads out $messages and returns every message as nicely formatted HTML. So the questions is - what type of variable should I be using for $THIS_IS_WHAT_IM_LOOKING_FOR? I don't want to make this use the database. Querying the db every time just for some messages that occur at runtime and disappear after 5 seconds just seems like overkill. Using global constants for this is apparently worst practice, since constants are not meant to be variables that change over time. I don't even know if it would work. I don't want to always pass in and return the existing $messages array through the method every time I want to add a new message. I even tried using a session var for this, but that is obviously not suited for this purpose at all (it will always be 1 pageload too late). Any suggestions? Thanks!


  • How can I keep curl output out of mail from my cronjob?

    - by Russell C.
    I have written a Perl script that runs as a daily crontab job and uploads files to Amazon S3 via curl. I want the output of the cron job emailed to me, which works fine, but I don't want that email to include the messages related to the curl upload (only the messages my script outputs itself). What I'm seeing in the daily email right now is curl's progress meter:

          % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                         Dload  Upload   Total   Spent    Left  Speed
          0  230M    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
          0  230M    0     0    0  544k      0  1519k  0:02:35 --:--:--  0:02:35 1807k
          0  230M    0     0    0 1744k      0  1286k  0:03:03  0:00:01  0:03:02 1342k
          ... (many more lines of the same) ...
          7  230M    0     0    7 18.4M      0  1152k  0:03:24  0:00:16  0:03:08 1140k

    I am using a simple Perl system() call to invoke curl. Does anyone know what command-line argument I can supply to curl to turn off the reporting of upload progress?
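
    For reference, curl's --silent (-s) flag suppresses the progress meter, and --show-error (-S) keeps real error messages visible on stderr, which is usually what a cron job wants. A hypothetical sketch of the call (the local file and S3 URL variables are placeholders):

        # Upload without the progress meter; errors still reach stderr/cron mail.
        my @cmd = ('curl', '--silent', '--show-error',
                   '--upload-file', $local_file, $s3_url);
        system(@cmd) == 0
            or warn "curl exited with status $?\n";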


  • Unbuffered I/O in Linux

    - by stuck
    I'm writing lots and lots of data that will not be read again for weeks. As my program runs, the amount of free memory on the machine (displayed with 'free' or 'top') drops very quickly; the amount of memory my app uses does not increase, and neither does the memory used by other processes. This leads me to believe the memory is being consumed by the filesystem's cache. Since I do not intend to read this data for a long time, I'd like to bypass the system's buffers so that my data is written directly to disk.

    I don't have dreams of improving performance or being a super ninja; my hope is to give the filesystem a hint that I'm not going to be coming back for this memory any time soon, so it shouldn't spend time optimizing for those cases.

    On Windows I've faced similar problems and fixed them using FILE_FLAG_NO_BUFFERING | FILE_FLAG_WRITE_THROUGH: the machine's memory was not consumed by my app and the machine was more usable in general. I'm hoping to duplicate those improvements on Linux. On Windows there is the restriction of writing in sector-sized pieces; I'm happy to accept that restriction for the amount of gain I've measured. Is there a similar way to do this in Linux?
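
    A sketch of one approach (not from the original post): Linux's closest equivalents are O_DIRECT at open() time (with alignment restrictions much like the Windows ones) and, less intrusively, posix_fadvise(POSIX_FADV_DONTNEED) to drop already-written pages from the page cache. The path below is hypothetical:

        /* Write normally, then advise the kernel to evict the cached pages. */
        #define _GNU_SOURCE
        #include <fcntl.h>
        #include <stdio.h>
        #include <unistd.h>

        int main(void) {
            int i;
            char buf[4096] = {0};
            int fd = open("/tmp/bigdata.bin", O_WRONLY | O_CREAT | O_TRUNC, 0644);
            if (fd < 0) { perror("open"); return 1; }

            for (i = 0; i < 1024; i++)
                write(fd, buf, sizeof buf);   /* ordinary buffered writes */

            fsync(fd);                        /* make sure pages are on disk... */
            posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED); /* ...then drop them */
            close(fd);
            return 0;
        }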


  • What headers do I want to send together with a 304 response?

    - by Willem
    When I send a 304 response, how will the browser interpret other headers sent together with the 304? E.g.:

        header("HTTP/1.1 304 Not Modified");
        header("Expires: " . gmdate("D, d M Y H:i:s", time() + $offset) . " GMT");

    Will this make sure the browser does not send another conditional GET request (or any request) until $offset time has "run out"?

    Also, what about other headers? Should I send headers like this together with the 304:

        header('Content-Type: text/html');

    Do I have to send:

        header("Last-Modified: " . $modified);
        header('Etag: ' . $etag);

    to make sure the browser sends a conditional GET request the next time $offset has "run out", or does it simply keep the old Last-Modified and ETag values? Are there other things I should be aware of when sending a 304 response?
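
    A minimal sketch of one common pattern (not from the original post; $mtime and $content are placeholders): send the validators and freshness headers on the full 200 response, and answer matching conditional requests with a bare 304, since the client keeps using the stored headers for the cached entry:

        $lastModified = gmdate('D, d M Y H:i:s', $mtime) . ' GMT'; // $mtime: content timestamp
        $etag = '"' . md5($content) . '"';

        header('Cache-Control: max-age=3600');   // fresh for an hour: no requests at all
        header('Last-Modified: ' . $lastModified);
        header('ETag: ' . $etag);

        $inm = isset($_SERVER['HTTP_IF_NONE_MATCH']) ? trim($_SERVER['HTTP_IF_NONE_MATCH']) : null;
        $ims = isset($_SERVER['HTTP_IF_MODIFIED_SINCE']) ? $_SERVER['HTTP_IF_MODIFIED_SINCE'] : null;

        if ($inm === $etag || $ims === $lastModified) {
            header('HTTP/1.1 304 Not Modified');
            exit; // no body on a 304
        }
        echo $content;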


  • How to put a reminder in the BlackBerry calendar

    - by anta40
    I need to put several reminders in the BlackBerry calendar. The idea is that several hours, or days, before a promo expires, the alarm reminds you of it. Here's my code so far:

        long ONE_HOUR = 3600;
        long ONE_DAY = 24 * 3600;
        try {
            EventList eventList = (EventList) PIM.getInstance()
                    .openPIMList(PIM.EVENT_LIST, PIM.READ_WRITE);
            BlackBerryEvent bbEvent = (BlackBerryEvent) eventList.createEvent();
            FavoritePromo promo;
            if (eventList.isSupportedField(BlackBerryEvent.ALARM)) {
                for (int x = 0; x < promos.size(); x++) {
                    promo = (FavoritePromo) promos.elementAt(x);
                    time = (StringUtil.strToDate(promo.getExpireDate())).getTime() - value;
                    bbEvent.addString(BlackBerryEvent.SUMMARY, BlackBerryEvent.ATTR_NONE, promo.getTitle());
                    bbEvent.addDate(BlackBerryEvent.ALARM, 0, time);
                    bbEvent.commit();
                }
            }
        } catch (PIMException e) {
        }

    Every time I run it, an IllegalArgumentException is thrown. I'm not really sure what goes wrong here...
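
    A hedged diagnosis with a sketch: in JSR-75, Event.ALARM is an INT field holding the number of seconds before the event start, so calling addDate() on it throws IllegalArgumentException. One possible rework (promos, FavoritePromo, StringUtil and value are the poster's own code; the event also needs a START date, and a fresh event should be created per promo):

        long expiry = StringUtil.strToDate(promo.getExpireDate()).getTime(); // milliseconds
        BlackBerryEvent bbEvent = (BlackBerryEvent) eventList.createEvent(); // one per promo
        bbEvent.addString(BlackBerryEvent.SUMMARY, BlackBerryEvent.ATTR_NONE, promo.getTitle());
        bbEvent.addDate(BlackBerryEvent.START, BlackBerryEvent.ATTR_NONE, expiry);
        bbEvent.addDate(BlackBerryEvent.END, BlackBerryEvent.ATTR_NONE, expiry);
        bbEvent.addInt(BlackBerryEvent.ALARM, BlackBerryEvent.ATTR_NONE, 24 * 3600); // fire one day before
        bbEvent.commit();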


  • C++ Iterator Pipelining Designs

    - by Kirakun
    Suppose we want to apply a series of transformations, int f1(int), int f2(int), int f3(int), to a list of objects. A naive way would be:

        SourceContainer source;
        TempContainer1 temp1;
        transform(source.begin(), source.end(), back_inserter(temp1), f1);
        TempContainer2 temp2;
        transform(temp1.begin(), temp1.end(), back_inserter(temp2), f2);
        TargetContainer target;
        transform(temp2.begin(), temp2.end(), back_inserter(target), f3);

    This first solution is not optimal because of the extra space required by temp1 and temp2. So let's get smarter:

        int f123(int n) { return f3(f2(f1(n))); }
        ...
        SourceContainer source;
        TargetContainer target;
        transform(source.begin(), source.end(), back_inserter(target), f123);

    This second solution is much better because not only is the code simpler, but more importantly there is less space required, with no intermediate results stored. However, the composition f123 is determined at compile time and is therefore fixed at run time.

    How would I do this efficiently if the composition is determined at run time? For example, if this code were in an RPC service and the actual composition (which could be any permutation of f1, f2, and f3) were based on arguments from the RPC call.
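
    One possibility, shown as a C++11 sketch (the bodies of f1..f3 are placeholders): build the pipeline as a runtime-ordered vector of std::function and fold each element through it, so there is still only one pass and no intermediate containers:

        #include <algorithm>
        #include <functional>
        #include <iterator>
        #include <vector>

        int f1(int n) { return n + 1; }   // stand-ins for the real transforms
        int f2(int n) { return n * 2; }
        int f3(int n) { return n - 3; }

        int main() {
            // Any runtime-chosen permutation, e.g. decoded from RPC arguments.
            std::vector<std::function<int(int)>> pipeline = { f3, f1, f2 };

            auto composed = [&pipeline](int n) {
                for (const auto& f : pipeline) n = f(n);  // apply in order
                return n;
            };

            std::vector<int> source = { 1, 2, 3 };
            std::vector<int> target;
            std::transform(source.begin(), source.end(),
                           std::back_inserter(target), composed);
        }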


  • Mootools form POST problem: aborts increase by one with each request

    - by Naresh
    Hi all, I'm using Mootools 1.2. I have 15 forms with the same name, differing only in their ids (form1, form2, form3, ..., form15), and the same goes for their elements. I need to submit all the forms by ajax. For that purpose I made a function and call it in each form's onClick event:

        function addStepConfiguration(id, mystrip, comment, error, submitid, i) {
            $(id).addEvent('submit', function(e) {
                e.stop();
                $(submitid).setStyle('display', 'none');
                loading_Img(i);
                this.set('send', {
                    onComplete: function(responseText) {
                        $('loading_img' + i).innerHTML = '';
                        SplittedResText = responseText.split("|");
                        if (SplittedResText[1] == 'undefined') {
                            $(error).innerHTML = SplittedResText[0];
                        } else {
                            $(comment).innerHTML = SplittedResText[0];
                            $(mystrip).set('class', SplittedResText[1]);
                            removeMsg.delay(20, '', error);
                            removeMsg.delay(1500, '', comment);
                        }
                        $(submitid).setStyle('display', 'block');
                    }
                });
                this.send();
            });
        }

    When I submit a form, say the first one, it is submitted and responds normally. But the second time, without a page refresh, it aborts the request once; on the third attempt it aborts twice; and so on, one more abort each time. It gives no error, just aborts and automatically sends the request again. I can't understand what the problem is. Please, can anyone help me?
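
    A hedged guess at the cause, with a sketch: if addStepConfiguration() runs on every click, addEvent() attaches one more 'submit' handler each time, so each submission fires an accumulating stack of handlers (seen as one extra abort/resend per attempt). Registering each form's handler exactly once avoids that; the element ids below are hypothetical:

        window.addEvent('domready', function() {
            for (var i = 1; i <= 15; i++) {
                // Attach one submit handler per form, once, at page load.
                addStepConfiguration('form' + i, 'mystrip' + i, 'comment' + i,
                                     'error' + i, 'submit' + i, i);
            }
        });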


  • VB6 ActiveX exe - what is the proper registration sequence?

    - by Timbuck
    I have recently updated a Visual Basic 6 application that is an ActiveX exe, running on Windows XP. I have a couple of testers for this application who have received a copy of the exe and are attempting to run it. However, they are getting the error message "Unexpected error; quitting" when they try. A key difference between their testing and mine is that on the machines I tested on, I have admin rights and was able to register the application using the appname.exe /regserver command line.

    The details at MS Support about file registration seem unclear to me: "Visual Basic ActiveX EXE files register themselves the first time you run the EXE. However, you cannot use the EXE as a COM server until it is registered."

    So does this mean that after the first time the users run the exe the application should be correctly registered, and the error I am seeing is a sign of something other than an incorrectly registered application? Or does it mean that the application will not work properly until the file is explicitly registered using the appname.exe /regserver command line?

    N.B. During a production distribution, the software would be sent out to client PCs using Systems Management Server, which isn't an option for this testing.


  • MySQL query paralyzes site

    - by nute
    Once in a while, at random intervals, our website gets completely paralyzed. Looking at SHOW FULL PROCESSLIST;, I've noticed that when this happens, there is a specific query that is "Copying to tmp table" for a loooong time (sometimes 350 seconds), and almost all the other queries are "Locked".

    The part I don't understand is that 90% of the time this query runs fine. I see it going through the process list and it finishes pretty quickly most of the time. It is issued by an ajax call on our homepage to display product recommendations based on your browsing history (a la Amazon). Just sometimes, randomly (but too often), it gets stuck at "Copying to tmp table". Here is a caught instance of the query that had been running for 109 seconds when I looked:

        SELECT DISTINCT product_product.id, product_product.name, product_product.retailprice,
                        product_product.imageurl, product_product.thumbnailurl, product_product.msrp
        FROM product_product, product_xref, product_viewhistory
        WHERE (
                (product_viewhistory.productId = product_xref.product_id_1
                 AND product_xref.product_id_2 = product_product.id)
             OR (product_viewhistory.productId = product_xref.product_id_2
                 AND product_xref.product_id_1 = product_product.id)
              )
          AND product_product.outofstock = 'N'
          AND product_viewhistory.cookieId = '188af1efad392c2adf82'
          AND product_viewhistory.productId IN (24976, 25873, 26067, 26073, 44949,
                                                16209, 70528, 69784, 75171, 75172)
        ORDER BY product_xref.hits DESC
        LIMIT 10

    Of course the cookieId and the list of productIds change dynamically depending on the request. I use PHP with PDO.
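
    One commonly suggested rewrite (a sketch, not from the original post): an OR that swaps join directions often defeats index use and forces a temp-table copy, while splitting the two directions into a UNION lets each half use an index on product_xref. Roughly, with ? and (...) as placeholders for the dynamic values:

        SELECT p.id, p.name, p.retailprice, p.imageurl, p.thumbnailurl, p.msrp, x.hits
        FROM product_viewhistory vh
        JOIN product_xref x    ON vh.productId = x.product_id_1
        JOIN product_product p ON x.product_id_2 = p.id
        WHERE p.outofstock = 'N' AND vh.cookieId = ? AND vh.productId IN (...)
        UNION
        SELECT p.id, p.name, p.retailprice, p.imageurl, p.thumbnailurl, p.msrp, x.hits
        FROM product_viewhistory vh
        JOIN product_xref x    ON vh.productId = x.product_id_2
        JOIN product_product p ON x.product_id_1 = p.id
        WHERE p.outofstock = 'N' AND vh.cookieId = ? AND vh.productId IN (...)
        ORDER BY hits DESC
        LIMIT 10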


  • Web page for iPhone - Large fonts instead of scrolling?

    - by chris_l
    I'm planning the layout of my web page, which should also be usable on the iPhone. I don't have much experience with the iPhone yet; I just installed the iPhone Simulator on my Mac.

    The page's contents are flexible, so I think it would be better to use this flexibility to save the user from scrolling around the entire page. In particular I have:

    A header and a sidebar that will be used all the time to perform several actions.

    A main content area with a number of elements (e.g. images). The UI would stay quite usable if the number of elements shown at one time were reduced for a small screen (e.g. by JavaScript). It would also be okay to make the main content area scrollable (as opposed to the entire page).

    The problem: if I simply display the page on the iPhone, it uses an extremely small font size, so that users must zoom in first and then scroll around, meaning they can't see the header and sidebar all the time. What's the best way to deal with this situation?

    Just leave it this way (very small fonts), because users expect that behaviour on the iPhone?

    Increase the font size (by specifying it in em or px or with xx-large; what would be the best way?) if I detect, somehow, that the page is being displayed on an iPhone.

    Or is there some way to restrict the viewport size to the screen size, and make it zoom in automatically? I think that would be the easiest solution in my case. Or ...?
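
    For reference, the usual answer to that last option is the viewport meta tag, which tells Mobile Safari to lay the page out at the device width instead of its default ~980px virtual viewport. A sketch (tune the values to taste):

        <!-- Lay out at device width so text is readable without zooming. -->
        <meta name="viewport" content="width=device-width, initial-scale=1.0" />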


  • Can't open COM1 from application launched at startup

    - by n0rd
    I'm using WinLIRC with an IR receiver connected to serial port COM1 on Windows 7 x64. WinLIRC is added to the Startup folder (Start - All Programs - Startup) so it starts every time I log in. Very often (but not always) I see initialization error messages from WinLIRC, which continue for some time (a couple of minutes) as I retry initialization; after some retries it initializes correctly and works fine. If I remove it from Startup and start it manually at any other moment, it starts without errors.

    I downloaded the WinLIRC sources and added MessageBox calls here and there so I could see what happens during initialization, and found out that the CreateFile call fails:

        if ((hPort = CreateFile(settings.port, GENERIC_READ | GENERIC_WRITE,
                                0, 0, OPEN_EXISTING, FILE_FLAG_OVERLAPPED, 0)) == INVALID_HANDLE_VALUE)
        {
            char buffer[256];
            sprintf_s(buffer, "CreateFile(%s) failed with %d", settings.port, GetLastError());
            MessageBox(NULL, buffer, "debug", MB_OK);
            hPort = NULL;
            return false;
        }

    I see a message box saying "CreateFile(COM1) failed with 5", and 5 is the error code for "Access denied" according to this link. So the question is: why can opening a COM port fail with this error right after booting Windows, yet proceed normally a few seconds or minutes later?
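
    Error 5 (ERROR_ACCESS_DENIED) on a serial port usually means something else still has it open, since COM ports are exclusive; at logon another service or device-detection routine may hold COM1 briefly. One hypothetical workaround sketch is to retry with a delay until the port is released:

        HANDLE hPort = INVALID_HANDLE_VALUE;
        for (int attempt = 0; attempt < 30; ++attempt) {
            hPort = CreateFile(settings.port, GENERIC_READ | GENERIC_WRITE,
                               0, 0, OPEN_EXISTING, FILE_FLAG_OVERLAPPED, 0);
            if (hPort != INVALID_HANDLE_VALUE)
                break;                                   // opened successfully
            if (GetLastError() != ERROR_ACCESS_DENIED)
                break;                                   // a different failure: give up
            Sleep(2000);                                 // wait for the port to be freed
        }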


  • Browser timing out attempting to load images

    - by notJim
    I've got a page on a webapp that has about 13 images that are generated by my application, which is written in the Kohana PHP framework. The images are actually graphs. They are cached, so they are only generated once, but the first time the user visits the page and the images all have to be generated, about half of the images don't load in the browser. Once the page has been requested once and the images are cached, they all load successfully.

    Doing some ad-hoc testing, if I load an individual image in the browser, it takes 450-700 ms to load with an empty cache (I checked this using Google Chrome's resource tracking feature). For reference, it takes around 90-150 ms to load a cached image. Even if the image cache is empty, I have the data and some of the application's startup tasks cached, so after the first request none of that data needs to be fetched.

    My questions are:

    Why are the images failing to load? It seems like the browser just decides not to download the image after a certain point, rather than waiting for them all to finish loading.

    What can I do to get them to load the first time, with an empty cache? Obviously one option is to decrease the load times, and I could figure out how to do that by profiling the app, but are there other options?

    As I mentioned, the app is in the Kohana PHP framework, and it's running on Apache. As an aside, I've solved this problem for now by fetching the page as soon as the data is available (it comes from a batch process), so the images are always cached by the time the user sees them. That feels like a kludgey solution to me, though, and I'm curious about what's actually going on.


  • Isn't an iterator in C++ a kind of pointer?

    - by Bilthon
    OK, this time I decided to make a list using the STL. I need to create a dedicated TCP socket for each client, so every time I get a connection I instantiate a socket and add a pointer to it to a list:

        list<MyTcp*> SocketList;          // the list of pointers to sockets
        list<MyTcp*>::iterator it;        // an iterator over that list

    Putting a new pointer to a socket in the list was easy, but now every time the connection ends I should disconnect the socket and delete the pointer so I don't get a huge memory leak, right? Well, I thought I was doing OK with this:

        it = SocketList.begin();
        while (it != SocketList.end()) {
            if ((*it)->getClientId() == id) {
                pSocket = it;             // <-- compiler complains at this line
                SocketList.remove(pSocket);
                pSocket->Disconnect();
                delete pSocket;
                break;
            }
        }

    But the compiler says:

        error: invalid cast from type 'std::_List_iterator<MyTcp*>' to type 'MyTcp*'

    Can someone help me here? I thought I was doing things right. Isn't an iterator, at any given time, just pointing to one of the elements of the container? How can I fix this?
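
    A hedged sketch of a fix: an iterator is pointer-like, but it is a distinct type; dereferencing it yields the stored element (here the MyTcp*). Note that the original loop also never advances the iterator when the id doesn't match:

        for (list<MyTcp*>::iterator it = SocketList.begin(); it != SocketList.end(); ) {
            if ((*it)->getClientId() == id) {
                MyTcp* pSocket = *it;           // *it is the MyTcp* element itself
                it = SocketList.erase(it);      // erase returns the next valid iterator
                pSocket->Disconnect();
                delete pSocket;
                break;
            } else {
                ++it;                           // advance, or the loop never terminates
            }
        }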


  • What happens when modifying Gemfile.lock directly?

    - by Mik378
    From the second execution of bundle install onwards, dependencies are loaded from Gemfile.lock when Gemfile hasn't changed. But I wonder how the detection of changes between those two files is made.

    For instance, if I add a new dependency directly to Gemfile.lock without adding it to Gemfile (contrary to best practice, since Gemfile.lock is auto-generated from Gemfile), would bundle install consider Gemfile changed? Indeed, does the bundle install process compare the whole Gemfile and Gemfile.lock trees in order to detect changes? If so, then even if I add a dependency directly to Gemfile.lock, Gemfile would be detected as changed (since they differ) and Gemfile.lock would be regenerated (losing the added dependency...).

    What is the process of bundle install from the second run onwards? To be more clear, my question is: are changes detected from Gemfile alone? That would mean bundler keeps a snapshot of the Gemfile from bundle install execution N and merely compares it with the Gemfile at execution N+1. Or is no snapshot kept, with bundler instead comparing against Gemfile.lock each time to decide whether Gemfile should be considered changed?


  • File IO with Streams - Best Memory Buffer Size

    - by AJ
    I am writing a small IO library to assist with a larger (hobby) project. Part of this library performs various functions on a file, which is read/written via a FileStream object. On each StreamReader.Read(...) pass, I fire off an event which will be used in the main app to display progress information. The processing that goes on in the loop is varied, but is not too time consuming (it could be a simple file copy, for example, or may involve encryption...).

    My main question is: what is the best memory buffer size to use? Thinking about physical disk layouts, I could pick 2 KB, which would cover a CD sector size and is a nice multiple of a 512-byte hard disk sector. Higher up the abstraction tree, you could go for a larger buffer which could read an entire FAT cluster at a time. I realise that with today's PCs I could go for a more memory-hungry option (a couple of MiB, for example), but then I increase the time between UI updates and the user perceives a less responsive app.

    As an aside, I'm eventually hoping to provide a similar interface to files hosted on FTP / HTTP servers (over a local network / fastish DSL). What would be the best memory buffer size for those (again, a "best-case" trade-off between perceived responsiveness and performance)?
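
    Since the sweet spot depends on the device and workload, one option is simply to measure. A hypothetical micro-benchmark sketch (the file paths are placeholders):

        using System;
        using System.Diagnostics;
        using System.IO;

        class BufferBench
        {
            static void Main()
            {
                foreach (int size in new[] { 2048, 4096, 16384, 65536, 1 << 20 })
                {
                    var sw = Stopwatch.StartNew();
                    using (var src = File.OpenRead(@"C:\temp\sample.bin"))
                    using (var dst = File.Create(@"C:\temp\copy.bin"))
                    {
                        var buffer = new byte[size];
                        int read;
                        while ((read = src.Read(buffer, 0, buffer.Length)) > 0)
                            dst.Write(buffer, 0, read);   // raise the progress event here
                    }
                    sw.Stop();
                    Console.WriteLine("{0,8} bytes: {1} ms", size, sw.ElapsedMilliseconds);
                }
            }
        }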


  • Determining if an unordered vector<T> has all unique elements

    - by Hooked
    Profiling my CPU-bound code has suggested that I spend a long time checking whether a container contains completely unique elements. Assuming that I have some large container of unsorted elements (with < and == defined), I have two ideas on how this might be done.

    The first uses a set:

        template <class T>
        bool is_unique(vector<T> X) {
            set<T> Y(X.begin(), X.end());
            return X.size() == Y.size();
        }

    The second loops over the elements:

        template <class T>
        bool is_unique2(vector<T> X) {
            typename vector<T>::iterator i, j;
            for (i = X.begin(); i != X.end(); ++i) {
                for (j = i + 1; j != X.end(); ++j) {
                    if (*i == *j) return 0;
                }
            }
            return 1;
        }

    I've tested them as best I can, and from what I can gather from reading the documentation about the STL, the answer is (as usual): it depends. I think that in the first case, if all the elements are unique it is very quick, but if there is large degeneracy the operation seems to take O(N^2) time. For the nested iterator approach the opposite seems true: it is lightning fast if X[0] == X[1], but takes (understandably) O(N^2) time if all the elements are unique.

    Is there a better way to do this, perhaps an STL algorithm built for this very purpose? If not, are there any suggestions to eke out a bit more efficiency?
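
    A sketch of a third option (not from the original post): sorting a copy and scanning adjacent pairs is O(N log N) in the worst case regardless of how many duplicates there are, and it avoids set's per-node allocations:

        #include <algorithm>
        #include <vector>

        template <class T>
        bool is_unique_sorted(std::vector<T> X) {   // by value: we are free to reorder
            std::sort(X.begin(), X.end());          // duplicates become adjacent
            return std::adjacent_find(X.begin(), X.end()) == X.end();
        }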


  • Concurrent Threads in C# using BackgroundWorker

    - by Jim Fell
    My C# application uses a background worker to wait for the acknowledgement of some transmitted data. Here is some pseudocode demonstrating what I'm trying to do:

        UI_thread {
            TransmitData() {
                // load data for tx
                // fire off TX background worker
            }
            RxSerialData() {
                // if received data is ack, set ack received flag
            }
        }
        TX_thread {
            // transmit data
            // set ack wait timeout
            // fire off ACK background worker
            // wait for ACK background worker to complete
            // evaluate status of ACK background worker as completed, failed, etc.
        }
        ACK_thread {
            // wait for ack received flag to be set
        }

    What happens is that the ACK BackgroundWorker times out and the acknowledgement is never received. I'm fairly certain it is being transmitted by the remote device, because that device has not changed at all and the C# application is transmitting. I have changed the ack thread from this (when it was working)...

        for (i = 0; (i < waitTimeoutVar) && (!bAckRxd); i++) {
            System.Threading.Thread.Sleep(1);
        }

    ...to this...

        DateTime dtThen = DateTime.Now;
        DateTime dtNow;
        TimeSpan stTime;
        do {
            dtNow = DateTime.Now;
            stTime = dtNow - dtThen;
        } while ((stTime.TotalMilliseconds < waitTimeoutVar) && (!bAckRxd));

    The latter measures the wait time much more accurately than the former. However, I am wondering if the removal of the Sleep call is interfering with the ability to receive serial data. Does C# only allow one thread to run at a time? That is, do I have to put threads to sleep at some point to allow other threads to run? Any thoughts or suggestions you may have would be appreciated. I am using Microsoft Visual C# 2008 Express Edition. Thanks.
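
    A hedged sketch combining both versions: threads do run concurrently, but a loop that never sleeps can monopolize a core, starving the receive path (especially on a single-core machine). Keeping the accurate elapsed-time check while yielding each pass costs almost nothing:

        // Accurate timeout, but yield the CPU on every iteration.
        DateTime start = DateTime.Now;
        while (!bAckRxd && (DateTime.Now - start).TotalMilliseconds < waitTimeoutVar)
        {
            System.Threading.Thread.Sleep(1);   // without this, the loop busy-waits
        }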


  • File name containing more than 16 characters inside parentheses failing

    - by Tom anMoney
    I am generating file names that contain a timestamp in the following format: "base_name (yyyy-mm-dd hhmmss).ext". This seems to cause a problem on Android. Here's my log:

        /storage/sdcard0/anMoney/transfer/Net worth over time _ Forecast (2012-11-19 110550).pdf
        E/Gmail (11802): java.io.FileNotFoundException: /storage/sdcard0/myapp/transfer/Net worth over time _ Forecast (2012-11-19 110550).pdf: open failed: ENOENT (No such file or directory)
        E/Gmail (11802):   at libcore.io.IoBridge.open(IoBridge.java:416)
        E/Gmail (11802):   at java.io.FileInputStream.<init>(FileInputStream.java:78)
        E/Gmail (11802):   at java.io.FileInputStream.<init>(FileInputStream.java:105)
        E/Gmail (11802):   at android.content.ContentResolver.openInputStream(ContentResolver.java:445)
        E/Gmail (11802):   at com.google.android.gm.provider.MailEngine.cacheAttachment(MailEngine.java:3054)
        E/Gmail (11802):   at com.google.android.gm.provider.MailEngine.sendOrSaveDraft(MailEngine.java:2746)
        E/Gmail (11802):   at com.google.android.gm.provider.MailProvider.sendOrSaveDraft(MailProvider.java:477)
        E/Gmail (11802):   at com.google.android.gm.provider.MailProvider.insert(MailProvider.java:534)
        E/Gmail (11802):   at android.content.ContentProvider$Transport.insert(ContentProvider.java:201)
        E/Gmail (11802):   at android.content.ContentResolver.insert(ContentResolver.java:864)
        E/Gmail (11802):   at com.google.android.gm.provider.Gmail$MessageModification.sendOrSaveNewMessage(Gmail.java:3576)
        E/Gmail (11802):   at com.google.android.gm.ComposeActivity$SendOrSaveTask$1.onInitializationComplete(ComposeActivity.java:1765)
        E/Gmail (11802):   at com.google.android.gm.provider.MailEngine$5.run(MailEngine.java:1006)
        E/Gmail (11802):   at android.os.Handler.handleCallback(Handler.java:615)
        E/Gmail (11802):   at android.os.Handler.dispatchMessage(Handler.java:92)
        E/Gmail (11802):   at android.os.Looper.loop(Looper.java:137)
        E/Gmail (11802):   at android.os.HandlerThread.run(HandlerThread.java:60)
        E/Gmail (11802): Caused by: libcore.io.ErrnoException: open failed: ENOENT (No such file or directory)

    Now, if I trim the file name to have only 16 characters within the parentheses, everything works as expected and I am able to send the file as a Gmail attachment. The following file name works fine:

        /storage/sdcard0/myapp/transfer/Net worth over time _ Forecast (2012-11-19 11070).pdf

    I tried the following troubleshooting:

    It's not the overall length of the file name: if I shorten the base name, the behavior stays the same.

    It's not Gmail: uploading the file to Google Drive fails in the same way.

    16 characters inside the parentheses work, but not 17.

    It's not the space character inside the parentheses: I replaced it with a dash and got the same problem.

    Does anybody have any ideas about what's going on here?


  • C++ and its type system: How to deal with data with multiple types?

    - by sub
    "Introduction" I'm relatively new to C++. I went through all the basic stuff and managed to build 2-3 simple interpreters for my programming languages. The first thing that gave and still gives me a headache: Implementing the type system of my language in C++ Think of that: Ruby, Python, PHP and Co. have a lot of built-in types which obviously are implemented in C. So what I first tried was to make it possible to give a value in my language three possible types: Int, String and Nil. I came up with this: enum ValueType { Int, String, Nil }; class Value { public: ValueType type; int intVal; string stringVal; }; Yeah, wow, I know. It was extremely slow to pass this class around as the string allocator had to be called all the time. Next time I've tried something similar to this: enum ValueType { Int, String, Nil }; extern string stringTable[255]; class Value { public: ValueType type; int index; }; I would store all strings in stringTable and write their position to index. If the type of Value was Int, I just stored the integer in index, it wouldn't make sense at all using an int index to access another int, or? Anyways, the above gave me a headache too. After some time, accessing the string from the table here, referencing it there and copying it over there grew over my head - I lost control. I had to put the interpreter draft down. Now: Okay, so C and C++ are statically typed. How do the main implementations of the languages mentioned above handle the different types in their programs (fixnums, bignums, nums, strings, arrays, resources,...)? What should I do to get maximum speed with many different available types? How do the solutions compare to my simplified versions above?


  • How to load an add-in again for every instance of an application?

    - by Ashwin Upadhyay
    I have a question. I created a shared add-in in C#.NET, and it is working fine. Now I want the add-in to load again for every Office application instance that is opened. For example, when I open a Word document the add-in loads for it; if I then open another Word document without closing the first, the add-in should load again for the newly opened document. But as it is, when I open Word the first time the add-in loads, and when I open another document the add-in is already loaded and does not load again.

    My requirement is that the add-in works in the background: it only records the opening and closing times of each Word document, how much time was spent in it, and the document's name. But the add-in loads only for the first opened document; while that document is still open, a newly opened document gets no add-in of its own. Only once the first document is closed does the add-in load for a new document.
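
    For reference (a hypothetical sketch; the logging helper is made up): Word typically runs all documents in a single process, so a shared add-in loads once per Word instance, not once per document. Per-document open/close times can still be recorded by hooking application-level events from that single add-in instance:

        using System;
        using Word = Microsoft.Office.Interop.Word;

        class DocumentTracker
        {
            // Call once from the add-in's OnConnection, passing the
            // Word.Application object the host handed you.
            public void HookDocumentEvents(Word.Application app)
            {
                app.DocumentOpen += (Word.Document doc) =>
                    Log(doc.FullName, "opened", DateTime.Now);

                app.DocumentBeforeClose += (Word.Document doc, ref bool cancel) =>
                    Log(doc.FullName, "closed", DateTime.Now);
            }

            void Log(string name, string what, DateTime when)
            {
                // Hypothetical sink; persist however the add-in records its data.
                System.Diagnostics.Debug.WriteLine(when + " " + name + ": " + what);
            }
        }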


  • Updating a C# 2.0 events example to be idiomatic with C# 3.5?

    - by Damien Wildfire
    I have a short events example from .NET 2.0 that I've been using as a reference point for a while. We're now upgrading to 3.5, though, and I'm not clear on the most idiomatic way to do things. How would this simple events example get updated to reflect idioms that are now available in .NET 3.5?

        // Args class.
        public class TickArgs : EventArgs {
            private DateTime TimeNow;
            public DateTime Time {
                set { TimeNow = value; }
                get { return this.TimeNow; }
            }
        }

        // Producer class that generates events.
        public class Metronome {
            public event TickHandler Tick;
            public delegate void TickHandler(Metronome m, TickArgs e);
            public void Start() {
                while (true) {
                    System.Threading.Thread.Sleep(3000);
                    if (Tick != null) {
                        TickArgs t = new TickArgs();
                        t.Time = DateTime.Now;
                        Tick(this, t);
                    }
                }
            }
        }

        // Consumer class that listens for events.
        public class Listener {
            public void Subscribe(Metronome m) {
                m.Tick += new Metronome.TickHandler(HeardIt);
            }
            private void HeardIt(Metronome m, TickArgs e) {
                System.Console.WriteLine("HEARD IT AT {0}", e.Time);
            }
        }

        // Example.
        public class Test {
            static void Main() {
                Metronome m = new Metronome();
                Listener l = new Listener();
                l.Subscribe(m);
                m.Start();
            }
        }
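
    A hedged sketch of the same example restated with common 3.5-era idioms: EventHandler<T> instead of a custom delegate, an auto-implemented property, an object initializer, and a lambda subscription.

        using System;

        public class TickArgs : EventArgs {
            public DateTime Time { get; set; }          // auto-property
        }

        public class Metronome {
            public event EventHandler<TickArgs> Tick;   // no custom delegate needed

            public void Start() {
                while (true) {
                    System.Threading.Thread.Sleep(3000);
                    var handler = Tick;                 // copy to avoid a race on the null check
                    if (handler != null)
                        handler(this, new TickArgs { Time = DateTime.Now });
                }
            }
        }

        public class Test {
            static void Main() {
                var m = new Metronome();
                m.Tick += (sender, e) => Console.WriteLine("HEARD IT AT {0}", e.Time);
                m.Start();
            }
        }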


  • How can I read a continuously updating log file in Perl?

    - by Octopus
    I have an application generating logs every 5 seconds, in the format below:

        11:13:49.250,interface,0,RX,0
        11:13:49.250,interface,0,TX,0
        11:13:49.250,interface,1,close,0
        11:13:49.250,interface,4,error,593
        11:13:49.250,interface,4,idle,2994215
        ... and so on for other interfaces ...

    I am working to convert these into the CSV format below:

        Time,interface.RX,interface.TX,interface.close,...
        11:13:49,0,0,0,...

    Simple so far, but the problem is that I have to produce the CSV data online, i.e. as soon as the log file is updated, the CSV should be updated too. What I have tried, to read the output and build the header:

        #!/usr/bin/perl -w
        use strict;
        use File::Tail;

        my $head = ["Time"];
        my $pos = {};
        my $last_pos = 0;
        my $current_event = [];
        my $events = [];
        my $file = shift;
        $file = File::Tail->new($file);
        while (defined($_ = $file->read)) {
            next if $_ =~ some filters;
            my ($time, $interface, $count, $eve, $value) = split /[,\n]/, $_;
            my $key = $interface . "." . $eve;
            if (not defined $pos->{$key}) {
                $last_pos += 1;
                $pos->{$key} = $last_pos;
                push @$head, $eve;
            }
            print join(",", @$head) . "\n";
        }

    Is there any way to do this using Perl?
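
    A hedged sketch that completes the idea (the timestamp trimming and filter handling are assumptions): buffer one CSV row per timestamp and flush it whenever the timestamp changes, while File::Tail keeps following the live file:

        #!/usr/bin/perl
        use strict;
        use warnings;
        use File::Tail;

        my @head = ('Time');
        my %seen;            # "interface.event" -> 1 once a column exists
        my %row;             # values collected for the current timestamp
        my $current = '';    # timestamp currently being collected

        my $file = File::Tail->new(shift @ARGV);
        while (defined(my $line = $file->read)) {
            my ($time, $iface, $count, $eve, $value) = split /[,\n]/, $line;
            next unless defined $value;
            ($time) = $time =~ /^(\d+:\d+:\d+)/;         # drop the .250 fraction
            my $key = "$iface.$eve";

            if ($current ne '' && $time ne $current) {   # timestamp changed: flush row
                print join(',', $current,
                           map { defined $row{$_} ? $row{$_} : '' } @head[1 .. $#head]), "\n";
                %row = ();
            }
            $current = $time;
            unless ($seen{$key}++) {                     # new column: reprint header
                push @head, $key;
                print join(',', @head), "\n";
            }
            $row{$key} = $value;
        }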


  • Algorithm to determine if array contains n...n+m?

    - by Kyle Cronin
    I saw this question on Reddit, and there were no positive solutions presented, and I thought it would be a perfect question to ask here. This was in a thread about interview questions:

    Write a method that takes an int array of size m, and returns (true/false) whether the array consists of the numbers n...n+m-1: all the numbers in that range, and only numbers in that range. The array is not guaranteed to be sorted. (For instance, {2,3,4} would return true; {1,3,1} would return false; {1,2,4} would return false.)

    The problem I had with this one is that my interviewer kept asking me to optimize (faster O(n), less memory, etc.), to the point where he claimed you could do it in one pass of the array using a constant amount of memory. I never figured that one out.

    Along with your solutions, please indicate whether they assume that the array contains unique items. Also indicate whether your solution assumes the sequence starts at 1. (I've modified the question slightly to allow cases where it goes 2, 3, 4, ...)

    Edit: I am now of the opinion that there does not exist an algorithm linear in time and constant in space that handles duplicates. Can anyone verify this? The duplicate problem boils down to testing whether the array contains duplicates in O(n) time and O(1) space. If that can be done, you can simply test first and, if there are no duplicates, run the algorithms posted. So: can you test for dupes in O(n) time and O(1) space?
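
    For the easy half, a sketch (it assumes the elements are already known to be unique): m distinct integers are exactly n...n+m-1 precisely when max - min == m - 1, which is a single pass with O(1) extra space:

        #include <cstddef>
        #include <vector>

        // Assumes all elements are unique; n is taken to be the minimum element.
        bool is_consecutive_unique(const std::vector<int>& a) {
            if (a.empty()) return true;
            int lo = a[0], hi = a[0];
            for (std::size_t i = 1; i < a.size(); ++i) {   // one pass, constant space
                if (a[i] < lo) lo = a[i];
                if (a[i] > hi) hi = a[i];
            }
            return hi - lo == (int)a.size() - 1;
        }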


  • How to end a thread in Perl

    - by user1672190
    I am new to Perl and I have a question about Perl threads. I am trying to create a new thread to check whether the running function has timed out. My approach is:

    1. Create a new thread.
    2. Run the main function and see whether it has timed out; if true, kill it.

    Sample code:

        $exit_thread = false;    # a flag so the timeout thread knows when to stop
        my $thr_timeout = threads->new( \&timeout );
        # execute main function here
        $exit_thread = true;     # set the flag to make the thread end
        $thr_timeout->join();    # wait for the timeout thread to end

    Code of the timeout function:

        sub timeout {
            $timeout = false;
            my $start_time = time();
            while (!$exit_thread) {
                sleep(1);
                last if (main function is executed);
                if (time() - $start_time >= configured time) {
                    logmsg "process is killed as request timed out";
                    _kill_remote_process();
                    $timeout = true;
                    last;
                }
            }
        }

    Right now the code runs as I expect, but I am just not sure whether $exit_thread = true actually works, given that there is a "last" at the end of the while loop. Can anybody give me an answer? Thanks.
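
    Two hedged observations, with a sketch: under Perl ithreads each thread gets its own copy of variables, so the flag must be declared :shared to be visible across threads; and the barewords true/false are just the strings "true" and "false" in Perl, both of which evaluate as true. A reworked skeleton (the stand-in subs are hypothetical):

        use strict;
        use warnings;
        use threads;
        use threads::shared;

        my $exit_thread :shared = 0;       # shared so both threads see updates
        my $configured_time = 30;          # seconds; placeholder value

        sub run_main_function    { sleep 2 }               # stand-in for the real work
        sub _kill_remote_process { warn "timed out\n" }    # stand-in, from the post

        sub timeout {
            my $start_time = time();
            until ($exit_thread) {
                sleep 1;
                if (time() - $start_time >= $configured_time) {
                    _kill_remote_process();
                    last;
                }
            }
        }

        my $thr_timeout = threads->create(\&timeout);
        run_main_function();
        $exit_thread = 1;                  # 0/1 instead of the barewords false/true
        $thr_timeout->join();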


  • How to run stored procedures and ad-hoc scripts asynchronously with "loosely" connected SQL Server 2000 instances

    - by sanga
    Is there a way to queue a script against an instance of SQL Server while it is not connected, and have it run on that instance the next time it connects? This needs to happen without any intervention from me.

    Background, if you are interested: we have about 120 machines, each with its own instance of SQL Server 2000. Most of them are laptops. We have merge replication set up with each one. From time to time there is a need to delete "rogue" GUIDs from some tables in some instances that overwrite legitimate records on the main publisher, as well as to perform administrative tasks via stored procedure or ad-hoc SQL statements. The problem is that there is no telling when each machine is going to be connected to the network. Some folks turn their machines completely off at the end of the day; others disconnect their machines and take them on business trips, home for the weekend, etc. Did I mention that about 35 of these machines are in utility trucks and "attempt" to sync over a wireless connection?

    Thanks in advance for any assistance or suggestions. Sanga

