Search Results

Search found 20904 results on 837 pages for 'disk performance'.

  • Is select() OK for implementing a single-socket read/write timeout?

    - by chmike
    I have an application that handles network communication with blocking calls; each thread manages a single connection. I've added a timeout to the read and write operations by calling select() on the socket prior to each read or write. select() is known to be inefficient when dealing with a large number of sockets, but is it acceptable, in terms of performance, to use it with a single socket, or are there more efficient methods to add timeout support to single-socket calls? The benefit of select() is that it is portable.
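
    A minimal sketch of the pattern in Python (the same select() semantics apply to the C call); the host, port, and timeout values are placeholders:

        import select
        import socket

        TIMEOUT_SECONDS = 5.0  # placeholder timeout

        def recv_with_timeout(sock, nbytes, timeout=TIMEOUT_SECONDS):
            """Wait until the socket is readable, then read; raise on timeout."""
            readable, _, _ = select.select([sock], [], [], timeout)
            if not readable:
                raise TimeoutError("read timed out")
            return sock.recv(nbytes)

        sock = socket.create_connection(("example.com", 80))
        sock.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
        print(recv_with_timeout(sock, 4096))

    For a single socket, the cost of select() is one extra system call per operation, which is rarely significant. The main alternative is setting the SO_RCVTIMEO/SO_SNDTIMEO socket options (socket.settimeout() in Python), which avoids the extra call at some cost in portability to older platforms.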

  • Apache2 on Ubuntu Server w/ CGI, FastCGI, mod_php

    - by illegal3alien
    I've looked at various websites on configuring Apache with CGI and can't get mod_fcgid to work. It works fine using mod_php5, but I wanted to compare performance between CGI and FastCGI. I tried methods using FCGIWrapper among various other techniques, and the only one that didn't result in an unlogged 403 or a download of the raw file was "Action application/x-httpd-php /usr/bin/php-cgi". When trying to configure mod_fcgid, requesting the page normally just started a download of the unprocessed file; I used wget to check the headers, and the type was "application/x-httpd-php". At one point I was able to reach the page, but it resulted in a 403 that was listed in access.log but not in error.log (I was told it should be in there too). I tried to get it working on fresh installs of Ubuntu Server 10.04 LTS and 10.10 and had the same results on both, so I'm not doing something correctly in the configuration. I also tried Virtualmin and could only get mod_php to work; the page just prompted a download when selecting CGI or FCGI from the control panel.

  • When to use Hibernate?

    - by Ramo
    Hi all, I was asked this question in an interview, and I answered with the following:

    - Better performance: efficient queries, plus first- and second-level caching; good caching gives better scalability.
    - Good database portability: changing the database is as easy as changing the dialect configuration.
    - Increased developer productivity: you think in object terms, not in query-language terms.

    But I also feel that systems fall into one of the categories below, and Hibernate may not be suited to all of them. I'm interested in your thoughts; do you agree with me? Please let me know when you would use Hibernate in each of the following cases and why: write-only systems, read-only systems, write-mostly systems, read-mostly systems. Regards, Ramo

  • Oracle triggers query

    - by AGeek
    Let's consider a table STUD with a row-level trigger implemented on the INSERT query. My scenario goes like this: whenever a row is inserted, the trigger fires, and it should execute a script file that is stored on the hard disk and ultimately print the result. Is this possible? If yes, it should behave dynamically, i.e. if we change the contents of the script file, Oracle should reflect those changes as well. I have tried doing this for Java using external procedures, but wasn't satisfied with the result. Kindly give your point of view on this kind of scenario and the ways it can be implemented.

  • Can I tell Borland C++ Builder to copy a file somewhere else after it is built?

    - by MrVimes
    I have two computers. One is intended to be left 'free' for high-performance activities (such as playing games); the other is my all-purpose computer where I install all the apps I use for creating things. On the second computer I use CodeGear C++ Builder to work on an app that I use on the first computer. Having BCB compile directly onto comp 1 is hopeless: the machine becomes unresponsive. It compiles locally very quickly, so what I do is compile locally and then copy the exe to the other machine. Well, I'm all for streamlining processes, so I want a way to compile on PC 2 and use the result on PC 1 without any intermediate steps. Is it possible to have BCB do the compiling on PC 2, create a local exe file, and then copy the file to PC 1 automatically?

  • Measuring the time to create and destroy a simple object

    - by portoalet
    From Effective Java, 2nd Edition, Item 7: Avoid finalizers. "Oh, and one more thing: there is a severe performance penalty for using finalizers. On my machine, the time to create and destroy a simple object is about 5.6 ns. Adding a finalizer increases the time to 2,400 ns. In other words, it is about 430 times slower to create and destroy objects with finalizers." How can one measure the time to create and destroy an object? Do you just do the following?

        long start = System.nanoTime();
        SimpleObject simpleObj = new SimpleObject();
        simpleObj.finalize();
        long end = System.nanoTime();
        long time = end - start;
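
    A single allocation cannot be timed meaningfully this way (and finalize() is invoked by the garbage collector, not called directly); the usual methodology is to create many short-lived objects in a loop and average. A minimal sketch of that methodology in Python, using __del__ as a stand-in for a Java finalizer (the absolute numbers will of course differ from Java's):

        import time

        class Plain:
            pass

        class WithFinalizer:
            def __del__(self):
                pass  # empty finalizer; its mere presence adds per-object bookkeeping

        def avg_create_destroy_ns(cls, n=1_000_000):
            """Allocate n short-lived objects and return the average cost in ns."""
            start = time.perf_counter_ns()
            for _ in range(n):
                cls()  # the object becomes garbage immediately
            return (time.perf_counter_ns() - start) / n

        print("plain:    ", avg_create_destroy_ns(Plain), "ns")
        print("finalized:", avg_create_destroy_ns(WithFinalizer), "ns")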

  • [PHP] Background upload

    - by Robijntje007
    I am working with a form that allows me to upload files via a local folder and FTP, so I want to move files over FTP (which already works). For performance reasons I chose to run this process in the background, so I use ncftpput (Linux). In the CLI the following command works perfectly (the -b parameter triggers the background process):

        ncftpput -b -u name -p password -P 1980 127.0.0.1 /upload/ /home/Downloads/upload.zip

    But if I run it via PHP it does not work (without the -b parameter it does). PHP code:

        $cmd = "ncftpput -b -u name -p password -P 1980 127.0.0.1 /upload/ /home/Downloads/upload.zip";
        $return = exec($cmd);
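
    For reference, the general fire-and-forget pattern from a script is to launch the child with its output streams detached so the caller does not wait on it; a minimal sketch of that pattern in Python (the command and its credentials are copied from the question):

        import subprocess

        cmd = [
            "ncftpput", "-b", "-u", "name", "-p", "password",
            "-P", "1980", "127.0.0.1", "/upload/",
            "/home/Downloads/upload.zip",
        ]
        # Detach stdout/stderr so the calling process does not block on output.
        subprocess.Popen(cmd, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)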

  • How can I persist a large Perl object for re-use between runs?

    - by Alnitak
    I've got a large XML file which takes over 40 seconds to parse with XML::Simple. I'd like to be able to cache the resulting parsed object so that on the next run I can just retrieve it and not reparse the whole file. I've looked at using Data::Dumper, but the documentation is a bit lacking on how to store and retrieve its output from disk files. Other classes I've looked at (e.g. Cache::Cache) appear designed for storing many small objects, not a single large one. Can anyone recommend a module designed for this? EDIT: The XML file is ftp://ftp.rfc-editor.org/in-notes/rfc-index.xml. On my Mac Pro, the benchmark figures for reading the entire file with XML::Simple (test1) vs. Storable (test2) are:

                 s/iter   test1   test2
        test1      47.8      --   -100%
        test2     0.148  32185%      --

  • CAD/CAM without C++

    - by zaladane
    Hello, is it possible to build CAD/CAM software without having to use C++? My company developed its software in C/C++, but that was more than 10 years ago, and today there is a lot of legacy code that switching would force us to abandon; I was wondering what the actual risks are. We have a lot of mathematical algorithms for toolpath calculation, feature recognition, simulation, and 3D rendering, and I was wondering whether C# can handle all of that without a great performance loss. Is it utopian to rewrite such algorithms in C#, or should that language only deal with the UI? We are not talking about game development here (Halo 3 or Call of Duty), so how much processing power does CAD/CAM really need? Can anybody enlighten me on this matter? Most of my colleagues are hardcore C++ programmers, and although I program in C++ I love .NET, but I am having a hard time selling .NET to them for anything beyond basic UI. Does it make sense to consider switching to .NET in such a field, or is it just not a wise idea? Thank you.

  • SharpZipLib - can you add a file without it copying the entire zip first?

    - by schmoopy
    I'm trying to add an existing file to a .zip file using SharpZipLib. The problem is that the zip file is 1 GB in size: when I try to add one small file (400 KB), SharpZipLib creates a copy/temp of the original zip before adding the new file. This is a problem when the amount of free disk space is less than twice the size of the zip file you are trying to update. For example:

        1GB  myfile.zip
        1GB  myfile.zip.tmp.293

        ZipFile zf = new ZipFile(path);
        zf.BeginUpdate();
        zf.Add(file); // adding a 400 KB file here causes a 1 GB temp file to be created
        zf.EndUpdate();
        zf.Close();

    Is there a more efficient way to do this? Thanks :-)
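
    For comparison, some libraries can append entries in place instead of rewriting the archive; a minimal sketch using Python's standard zipfile module, whose append mode writes new entries at the end and only rewrites the central directory (paths are placeholders):

        import zipfile

        # mode="a" appends to the existing archive rather than copying it,
        # so no 1 GB temporary file is created.
        with zipfile.ZipFile("/path/to/myfile.zip", mode="a") as zf:
            zf.write("/path/to/small_file.bin", arcname="small_file.bin")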

  • Guide on crawling the entire web?

    - by bohohasdhfasdf
    I just had this thought and was wondering: is it possible to crawl the entire web (just like the big boys!) on a single dedicated server (say, a Core 2 Duo with 8 GB RAM, a 750 GB disk, and 100 Mbps)? I've come across a paper where this was done, but I cannot recall its title; it was about crawling the entire web on a single dedicated server using some statistical model. Anyway, imagine starting with just around 10,000 seed URLs and doing an exhaustive crawl: is it possible? I need to crawl the web but am limited to a dedicated server. How can I do this? Is there an open-source solution out there already? For example, see this real-time search engine: http://crawlrapidshare.com. The results are extremely good and freshly updated. How are they doing this?
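
    The core loop is just a frontier of URLs fed by the links discovered so far; a minimal single-machine sketch in Python (the seed list, politeness delay, and page limit are placeholder choices, and a real crawler would add robots.txt handling, per-host queues, and persistent storage):

        import time
        import urllib.request
        from collections import deque
        from html.parser import HTMLParser
        from urllib.parse import urljoin

        class LinkExtractor(HTMLParser):
            def __init__(self, base_url):
                super().__init__()
                self.base_url = base_url
                self.links = []

            def handle_starttag(self, tag, attrs):
                if tag == "a":
                    for name, value in attrs:
                        if name == "href" and value:
                            self.links.append(urljoin(self.base_url, value))

        def crawl(seeds, max_pages=100, delay=1.0):
            frontier, seen, fetched = deque(seeds), set(seeds), 0
            while frontier and fetched < max_pages:
                url = frontier.popleft()
                try:
                    html = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "replace")
                except Exception:
                    continue  # dead link, timeout, non-HTML, etc.
                fetched += 1
                parser = LinkExtractor(url)
                parser.feed(html)
                for link in parser.links:
                    if link.startswith("http") and link not in seen:
                        seen.add(link)
                        frontier.append(link)
                time.sleep(delay)  # crude politeness
            return seen

        print(len(crawl(["http://example.com/"])))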

  • (NOT) NULL for NVARCHAR columns

    - by Anders Abel
    Allowing NULL values in a column is normally done to represent the absence of a value. With NVARCHAR there is already a way to store an empty string without setting the column to NULL, and in most cases I cannot see a semantic difference between an NVARCHAR holding an empty string and a NULL value in such a column. Declaring the column NOT NULL saves me from having to deal with the possibility of NULL values in the code, and it feels better not to have two different representations of "no value" (NULL and the empty string). Will I run into any other problems by setting my NVARCHAR columns to NOT NULL? Performance? Storage size? Anything I've overlooked regarding the use of the values in client code?

  • Is a PHP-only "cache engine" ever worth it?

    - by adsads
    I wrote a rather small skeleton for my web apps and thought that I would also add a small cache to it. It is rather simple:

    - If the current page exists as a file in the cache and the file isn't too old, read it out and exit instead of rebuilding the page.
    - If the current page isn't cached or is outdated, recalculate the page and save it.

    However, the bad part: my performance tests against a page that fetches 40 relatively long posts via a MySQL query showed that, with the cache, it took even longer to handle a single request (1,000 tests each). How can that happen? Should I just remove the raw-PHP cache completely and rely on an established caching layer such as memcached instead?
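
    The pattern described above, sketched in Python for illustration (cache directory and freshness window are placeholders); the same logic applies to the PHP version:

        import os
        import time

        CACHE_DIR = "/tmp/page_cache"   # placeholder location
        MAX_AGE_SECONDS = 300           # placeholder freshness window

        def get_page(key, rebuild):
            """Serve a cached page if fresh, otherwise rebuild and store it."""
            path = os.path.join(CACHE_DIR, key)
            try:
                if time.time() - os.path.getmtime(path) < MAX_AGE_SECONDS:
                    with open(path, encoding="utf-8") as f:
                        return f.read()          # cache hit
            except OSError:
                pass                             # missing cache file: fall through
            html = rebuild()                     # cache miss: regenerate the page
            os.makedirs(CACHE_DIR, exist_ok=True)
            with open(path, "w", encoding="utf-8") as f:
                f.write(html)
            return html

        print(get_page("front_page.html", lambda: "<html>expensive page</html>"))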

  • Color Themes for Eclipse?

    - by John Stauffer
    I am a recovering Emacs user who is trying to ease into Eclipse (since I'm encouraging the rest of the team to use it, I guess I should at least try to get along). My current excuse is that it hurts my eyes: I'm currently using the excellent Zenburn theme in Emacs and would love to find it for Eclipse. However, I find that changing my color theme every few months makes for a great way to procrastinate, so ideally I'd like to find a repository of Eclipse color themes. There don't appear to be any Eclipse themes indexed by Google, so all the great themes must be sitting on your hard disks somewhere. Please share them. Thanks.

  • iPhone, Core Data: does NSManagedObject use a lazy-loading mechanism when it is created?

    - by Robin
    Hi all, I am using Core Data in my app and have defined a class that looks roughly as follows:

        @interface Master : NSManagedObject {
        }
        @property (nonatomic, retain) NSSet* Details;
        ....

    The Master entity contains a property 'Details' that relates to another table; this is a typical master-detail relationship. Tracing the app, I found an issue: the 'Details' property value is constructed even though it is never accessed. I assumed that Core Data should use some lazy (faulting) mechanism to improve performance, or maybe I missed a configuration step? Because the Master entity contains at least five such 'child' table properties, I have to settle this before committing to Core Data. Any help? Thanks for your time!

  • "Single NSMutableArray" vs. "Multiple C-arrays" --Which is more Efficient/Practical?

    - by RexOnRoids
    Situation: I have a DAY structure with three variables or attributes: a date (NSString*), a temperature (float), and a rainfall (float). Problem: I will be iterating through an array of about 5,000 DAY structures and graphing a portion of them on screen using OpenGL. Question: as far as drawing performance goes, which is better? I could simply create an NSMutableArray of DAY objects and iterate over the array on each draw call, which I think would be hard on the CPU. Or I could instead manually manage three C arrays: one for the date strings (two-dimensional), one for the temperatures (one-dimensional), and one for the rainfall (one-dimensional), keeping track of the current day via the current index into the iterated C arrays.

  • Detecting and reloading updated application parameters at runtime

    - by VeeKayBee
    I am working on an ASP.NET web application (using .NET 4.5 and C#). The application deals with a lot of measurement units (kg, litre, km, etc.), and based on the selected unit we have to enforce an allowed range. These values should be configurable without much effort. We identified two solutions:

    - Keeping a configuration XML file. If the values live in XML, does changing the file to adjust a validation require an iisreset or anything else that could take the site down for a while?
    - Keeping the values in the database and using SQL dependency caching, so that an update to the database refreshes the cached values. How complex is this, and does it affect performance?

    It would be a great help if there is some other method to achieve this. Thanks in advance.
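
    The first option does not have to mean a restart: the generic pattern is to cache the parsed file and reload it whenever its modification time changes. A minimal sketch of that pattern in Python (the file name is a placeholder; in the real app, ASP.NET's own configuration and caching APIs would play this role):

        import os
        import xml.etree.ElementTree as ET

        CONFIG_PATH = "limits.xml"  # placeholder file name

        _cached_tree = None
        _cached_mtime = None

        def get_config():
            """Return the parsed config, reloading it if the file changed on disk."""
            global _cached_tree, _cached_mtime
            mtime = os.path.getmtime(CONFIG_PATH)
            if _cached_tree is None or mtime != _cached_mtime:
                _cached_tree = ET.parse(CONFIG_PATH)  # re-read only when stale
                _cached_mtime = mtime
            return _cached_tree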

  • Giving Users an Option Between UDP & TCP?

    - by cam
    After studying the TCP/UDP differences all week, I just can't decide which to use. I have to send a large amount of continuous sensor data while at the same time sending important data that can't be lost. That looked like a perfect case for using both, but then I read a paper (http://www.isoc.org/INET97/proceedings/F3/F3_1.HTM) that says running both causes packet/performance loss in the other protocol. Is there any issue with allowing the user to choose which protocol to use (if I implement both on the server side) instead of choosing myself? Are there any disadvantages to this? The only other solution I came up with is to use UDP and, if the packet loss appears too great, switch to TCP (client-side).
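
    Offering the choice is mostly a matter of abstracting socket creation behind the user's selection; a minimal client-side sketch in Python (host and port are placeholders):

        import socket

        def make_sender(protocol, host="example.com", port=9000):
            """Return a send(bytes) function for the user's chosen protocol."""
            if protocol == "tcp":
                sock = socket.create_connection((host, port))  # reliable, ordered
                return sock.sendall
            elif protocol == "udp":
                sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)  # lossy, cheap
                return lambda data: sock.sendto(data, (host, port))
            raise ValueError("protocol must be 'tcp' or 'udp'")

        send = make_sender("udp")
        send(b"sensor reading 42")

    The server side would then listen on both a TCP and a UDP port and treat whichever one the client picked as equivalent.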

  • Return an object after parsing xml with SAX

    - by sentimental_turtle
    I have some large XML files to parse and have created an object class to contain my relevant data. Unfortunately, I am unsure how to return the object for later processing. Right now I pickle my data and moments later unpickle the object to access it. This seems wasteful, and there must surely be a way of grabbing my data without hitting the disk.

        def endElement(self, name):
            if name == "info":  # done collecting this iteration
                self.data.setX(self.x)
                self.data.setY(self.y)
            elif name == "lastTagOfInterest":  # done with file
                # want to return my object from here
                filehandler = open(self.outputname + ".pi", "w")
                pickle.dump(self.data, filehandler)
                filehandler.close()

    I have tried putting a return statement in my endElement method, but that does not seem to get passed up the chain to where I call the SAX parser. Thanks for any tips.
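
    Since the caller keeps a reference to the handler instance it passes in, the parsed object can simply be read off the handler after parse() returns, with no pickle round-trip; a minimal sketch (the handler's data attribute follows the question, the other names are assumptions):

        import xml.sax

        class MyHandler(xml.sax.ContentHandler):
            def __init__(self):
                super().__init__()
                self.data = []              # stand-in for the question's data object

            def endElement(self, name):
                if name == "info":
                    self.data.append(name)  # collect whatever is relevant

        handler = MyHandler()
        xml.sax.parse("input.xml", handler)  # returns once the document is consumed
        result = handler.data                # the object survives on the handler
        print(result)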

  • Advantages of using WCF to work with SharePoint Services (WSS 3.0)?

    - by val
    Hi folks, what is your opinion of, or better yet your practical experience with, using WCF to work with WSS instead of the SharePoint web services? I am writing a custom library for our software to store and retrieve files in WSS document libraries using the SharePoint web services, and I am not entirely happy with their performance: a bit too slow in many cases. Now, Microsoft claims significant improvements in WCF over Remoting, and I am looking into a good way to use WCF for my file services. Any suggestions or ideas? Maybe a good source of coding practices or blogs? Thanks a lot, Val

  • Huge framerate difference between Test and Publish movie in Flash?

    - by Glacius
    Simply put, I am making a Flash MIDI player. I am using ENTER_FRAME for my timing, and I set the frame rate to 100 to ensure that the timing of each note in milliseconds is accurate. When I test the movie with CTRL+ENTER it works fine, but when I publish it and open it in a browser (tested in both IE and Chrome), it suddenly plays back a lot slower. I don't think it's a performance issue, since the code is very simple. If the slowdown is consistent, then I can perhaps work with it so that the playback speed comes out correct. Do browsers run movies at a slower frame rate, or do they impose a frame-rate cap of some sort? What is going on?

  • Is there any advantage to having more than 16 GB of RAM on a Windows dev machine?

    - by Robert Kozak
    Assume a machine (dual quad-core Xeon at 2.26 GHz with 24 GB RAM) running Windows Server 2008 and Hyper-V. How many VMs can I expect to run at the same time with good performance? Is this overkill? Can you really have too much RAM? Assuming 2 GB per VM, that's around 16 GB for the VMs with 8 GB left over for the host OS and Hyper-V. Does that sound about right? Edit: I tried to make the question sound less like bragging; that was never my intention. It's a hard question to write.

  • How much faster is a database running in RAM?

    - by orokusaki
    I"m looking to run PostgreSQL in RAM for performance enhancement. The database isn't more than 1GB and shouldn't ever grow to more than 5GB. Is it worth doing? Are there any benchmarks out there? Is it buggy? My second major concern is: How easy is it to back things up when it's running purely in RAM. Is this just like using RAM as tier 1 HD, or is it much more complicated?

  • Organizing PHP includes in your development environment

    - by Andrew Heath
    I'm auditing my site design based on the excellent Essential PHP Security by Chris Shiflett. One of the recommendations I'd like to adopt is moving all possible files out of the webroot, and that includes the includes. Doing so on my shared host is simple enough, but I'm wondering how people handle this in their development testbeds. Currently I've got a XAMPP installation configured so that localhost/mysite/ matches up with D:\mysite\, and includes are stored in D:\mysite\includes\. To keep include paths accurate, I guess I need to replicate the server's layout on my local disk, something like D:\mysite\public_html\. Is there a better way?
