Search Results

Search found 20904 results on 837 pages for 'disk performance'.

  • Benchmark of Java Try/Catch Block

    - by hectorg87
    I know that entering a catch block has a significant cost when executing a program; however, I was wondering whether entering a try{} block also has any impact, so I started looking for an answer on Google. I found many opinions, but no benchmarks at all. Some answers I found were: Java try/catch performance, is it recommended to keep what is inside the try clause to a minimum? Try Catch Performance Java Java try catch blocks. However, they didn't answer my question with facts, so I decided to try it for myself. Here's what I did. I have a csv file with this format:

        host;ip;number;date;status;email;uid;name;lastname;promo_code;

    where everything after status is optional and will not even have the corresponding ;, so when parsing, a validation has to be done to check whether the value is there. Here's where the try/catch issue came to my mind. The code I inherited at my company does this:

        StringTokenizer st = new StringTokenizer(line, ";");
        String host = st.nextToken();
        String ip = st.nextToken();
        String number = st.nextToken();
        String date = st.nextToken();
        String status = st.nextToken();
        String email = "";
        try {
            email = st.nextToken();
        } catch (NoSuchElementException e) {
            email = "";
        }

    and it repeats the same pattern for uid, name, lastname and promo_code. I changed everything to:

        if (st.hasMoreTokens()) {
            email = st.nextToken();
        }

    and in fact it performs faster when parsing a file that doesn't have the optional columns. Here are the average times:

        --- Trying: 122 milliseconds
        --- Checking: 33 milliseconds

    However, here's what confused me, and the reason I'm asking: when running the example with values for the optional columns in all 8000 lines of the CSV, the if() version still performs better than the try/catch version. So my question is: does the try block really have no performance impact on my code? The average times for that example are:

        --- Trying: 105 milliseconds
        --- Checking: 43 milliseconds

    Can somebody explain what's going on here? Thanks a lot
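
    For what it's worth, on HotSpot entering a try block is essentially free: the compiler records the protected range in an exception table, and nothing extra happens at runtime until something is actually thrown. The gap in the first measurement is therefore the cost of constructing and throwing NoSuchElementException (including filling in a stack trace) once per optional column. A minimal harness to reproduce the comparison, as a sketch rather than the asker's exact code (a framework like JMH would give more trustworthy numbers):

        import java.util.NoSuchElementException;
        import java.util.StringTokenizer;

        public class TryVsCheckBenchmark {

            // Variant 1: guard the optional column with try/catch.
            static String parseTrying(String line) {
                StringTokenizer st = new StringTokenizer(line, ";");
                for (int i = 0; i < 5; i++) st.nextToken(); // mandatory columns
                String email = "";
                try {
                    email = st.nextToken();
                } catch (NoSuchElementException e) {
                    email = "";
                }
                return email;
            }

            // Variant 2: guard the optional column with a check.
            static String parseChecking(String line) {
                StringTokenizer st = new StringTokenizer(line, ";");
                for (int i = 0; i < 5; i++) st.nextToken(); // mandatory columns
                return st.hasMoreTokens() ? st.nextToken() : "";
            }

            public static void main(String[] args) {
                String line = "host;1.2.3.4;42;2013-01-01;OK"; // no optional columns
                // Warm up so the JIT has compiled both methods before timing.
                for (int i = 0; i < 100_000; i++) { parseTrying(line); parseChecking(line); }

                long t0 = System.nanoTime();
                for (int i = 0; i < 1_000_000; i++) parseTrying(line);
                long t1 = System.nanoTime();
                for (int i = 0; i < 1_000_000; i++) parseChecking(line);
                long t2 = System.nanoTime();

                System.out.printf("Trying:   %d ms%n", (t1 - t0) / 1_000_000);
                System.out.printf("Checking: %d ms%n", (t2 - t1) / 1_000_000);
            }
        }

    In the second measurement no exception is thrown on either path, so the remaining gap is more plausibly warm-up order or noise in the surrounding parsing code than a cost of the try block itself.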

  • Recommended ASP.NET Shared Hosting

    - by coffeeaddict
    Ok, I have to admit I'm getting fed up with www.discountasp.net's pricing model, and this annoyance has built up over the past 8 years or so. I've been with them for years and absolutely love them on the technical side, but it's getting ridiculously expensive for how little you get. Here's my scenario:

    1) I am running 2 SQL Server databases at $10/ea per month, so that's $20/month for the two, and I only get 500 MB of disk space, which is horrible.
    2) I am paying $10/mo just for the hosting itself, for which I only get 1 gig of disk space! I mean, come on!
    3) I am simply running 2 small apps (ScrewTurn Wiki & Subtext Blog), so I don't really care if it's up 99% of the time or not; it's not worth paying a total of $300 just to keep these 2 apps running on discountasp.net.

    Anyone else feel the same? Yes, I know they have great support and probably great servers running behind all this, but in the end I really don't care, as long as my site is up 95% of the time or better. Yes, the hosting toolset rocks, but I bet I can find a similar set somewhere else. I like how I can totally control IIS 7 at discountasp and manage my own app pool, etc. That's very powerful and essential. But does anyone have good alternatives to discountasp that give me close to the same at a much more reasonable price point? I mean, http://www.m6.net/prices.aspx gives you 10 SQL databases for $7 and 200 gigs of disk space! I don't know about their tools or support, but just looking at those numbers and some other hosts I've seen, I feel that discountasp.net is way out of line. They don't even offer any volume discounts; it would be nice if my 2nd SQL Server database were only $5/month instead of $10, stuff like that, to make it much more realistic and fair. Opinions (from people who have discountasp.net, people who have left them, or people who have another host they like)? But geez, $300 just to host a couple of DBs and lightweight open source apps? Not worth the price they are charging. I'm almost at a price point that would get me a decent dedicated server! I really don't care about beta support; not a big deal to me.

  • Single-page app with a large number of images running extremely slowly on iOS 8 Safari/WebView

    - by NikhilWanpal
    We are working on a WebView app (not WKWebView, yet) and are observing that the app runs extremely slowly on iOS 8. The same app runs smoothly on lower OS versions such as iOS 7 and iOS 6. So we tried it in Safari on iOS 8, and there the performance is similar to iOS 6 and 7. The app is filled with images, and many are high resolution. While trying to trace the issue (trial and error!) we reduced the sizes and resolutions of the images and the performance improved, but it is still not on par with versions 6 and 7. We are unable to find any such issues reported elsewhere and are stuck. It would be great if we could get some pointers on this one.

  • Creating huge images

    - by David Rutten
    My program has a feature to export a hi-res image of the working canvas to disk. Users will frequently try to export images of about 20,000 x 10,000 pixels @ 32bpp, which equals about 800MB. Add that to the serious memory consumption already going on in your average 3D CAD program and you'll pretty much guarantee an out-of-memory crash on 32-bit platforms. So for now I'm exporting tiles of 1000x1000 pixels, which the user has to stitch together afterwards in a pixel editor. Is there a way I can solve this problem without the user doing any work? I figured I could probably write a small exe that gets launched from the main process via the command line and performs the stitching automatically. It would be a separate process, so it would have 2GB of RAM all to itself. Or is there a better way still? I'd like to support jpg, png and bmp, so simply writing the image as a byte stream to disk is not really possible.
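
    For the bmp case at least, a separate stitching process never needs the whole image in memory: BMP stores plain pixel rows, so the stitcher can load one horizontal band of tiles at a time and stream the finished rows straight to disk. Below is a minimal Java sketch of that idea; the tile naming scheme tile_<col>_<row>.png and the fixed sizes are assumptions, and jpg/png output would need encoders that accept scanline-at-a-time writing:

        import java.awt.image.BufferedImage;
        import java.io.BufferedOutputStream;
        import java.io.File;
        import java.io.FileOutputStream;
        import java.io.IOException;
        import java.io.OutputStream;
        import javax.imageio.ImageIO;

        public class TileStitcher {
            static final int TILE = 1000, COLS = 20, ROWS = 10; // 20,000 x 10,000 output

            public static void main(String[] args) throws IOException {
                int width = COLS * TILE, height = ROWS * TILE;
                int rowBytes = width * 3, pad = (4 - rowBytes % 4) % 4;
                try (OutputStream out =
                        new BufferedOutputStream(new FileOutputStream("stitched.bmp"))) {
                    writeBmpHeader(out, width, height, rowBytes + pad);
                    byte[] row = new byte[rowBytes + pad]; // padding bytes stay zero
                    for (int ty = 0; ty < ROWS; ty++) {
                        // Hold only one horizontal band of decoded tiles at a time.
                        BufferedImage[] band = new BufferedImage[COLS];
                        for (int tx = 0; tx < COLS; tx++)
                            band[tx] = ImageIO.read(new File("tile_" + tx + "_" + ty + ".png"));
                        for (int y = 0; y < TILE; y++) {
                            for (int tx = 0; tx < COLS; tx++)
                                for (int x = 0; x < TILE; x++) {
                                    int argb = band[tx].getRGB(x, y);
                                    int i = (tx * TILE + x) * 3;
                                    row[i]     = (byte) (argb);       // blue
                                    row[i + 1] = (byte) (argb >> 8);  // green
                                    row[i + 2] = (byte) (argb >> 16); // red
                                }
                            out.write(row);
                        }
                    }
                }
            }

            // 54-byte BMP header; a negative height marks top-down pixel order.
            static void writeBmpHeader(OutputStream out, int w, int h, int stride)
                    throws IOException {
                byte[] hd = new byte[54];
                hd[0] = 'B'; hd[1] = 'M';
                putInt(hd, 2, 54 + stride * h); // file size
                putInt(hd, 10, 54);             // pixel data offset
                putInt(hd, 14, 40);             // BITMAPINFOHEADER size
                putInt(hd, 18, w);
                putInt(hd, 22, -h);             // top-down
                hd[26] = 1;                     // planes
                hd[28] = 24;                    // bits per pixel
                putInt(hd, 34, stride * h);     // image size
                out.write(hd);
            }

            static void putInt(byte[] b, int off, int v) {
                b[off] = (byte) v; b[off + 1] = (byte) (v >> 8);
                b[off + 2] = (byte) (v >> 16); b[off + 3] = (byte) (v >> 24);
            }
        }

    Peak memory here is one band of 20 decoded tiles, roughly 80MB, no matter how large the final image gets.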

  • Archiving Database Tables using Java

    - by HonorGod
    My application demands archiving database tables between Sybase and DB2 and vice versa, and within the same engine (DB2 to DB2, Sybase to Sybase), using Java. I am trying to understand the best strategies in terms of performance, implementation, ease of use and scalability. Here is my current process:

    - Source and destination tables, with the acceptable parameters (from Java), are defined in XML.
    - The application reads the source and destination configurations and executes them sequentially.
    - The destination is sometimes optional, when the source is just deleting data from a specific table or just calling a stored procedure.
    - The dataset moved between source and destination is extremely large (millions of rows).

    Off the top of my head, it looks like I could define dependencies between multiple source and destination combinations and have them execute in parallel in multiple threads. But will this improve performance (I hope it will)? Are there any open-source frameworks for data archiving in Java? Any other thoughts on the implementation side would be really helpful. Thanks
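
    On the parallel idea: independent source/destination pairs can be handed to a fixed-size pool so that only genuinely dependent jobs wait on each other. A rough sketch, with a made-up ArchiveJob type standing in for one XML-defined transfer:

        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;
        import java.util.concurrent.CompletableFuture;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;

        public class ParallelArchiver {

            // Hypothetical job description parsed from the XML configuration.
            record ArchiveJob(String name, List<String> dependsOn) {
                void run() {
                    // The real JDBC copy / delete / stored-procedure call goes here.
                    System.out.println(name + " on " + Thread.currentThread().getName());
                }
            }

            // Jobs must be listed so that a job appears after everything it depends on.
            public static void runAll(List<ArchiveJob> jobs) {
                // Size the pool by what the databases tolerate, not by CPU count;
                // archiving is I/O bound, so the connection budget is the real limit.
                ExecutorService pool = Executors.newFixedThreadPool(4);
                Map<String, CompletableFuture<Void>> done = new HashMap<>();
                for (ArchiveJob job : jobs) {
                    CompletableFuture<Void> deps = CompletableFuture.allOf(
                            job.dependsOn().stream().map(done::get)
                               .toArray(CompletableFuture[]::new));
                    done.put(job.name(), deps.thenRunAsync(job::run, pool));
                }
                CompletableFuture.allOf(done.values().toArray(new CompletableFuture[0])).join();
                pool.shutdown();
            }
        }

    Whether this helps depends on where the bottleneck is: parallel jobs pay off most when they hit different servers or at least different tables, while jobs contending for the same table largely serialize again inside the database.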

  • Indexed key vs indexed separate columns: which one is faster?

    - by Jerry
    In MySQL, from a pure performance perspective, if I have a table with a large amount of data and a 10/1 read/write ratio, is it faster for read/write performance to have the 4 search criteria in separate columns, all indexed, or to have them combined into one single string acting as a key, stored in one indexed column? E.g., take a table with 5 columns: first name, last name, sex, country and file, where the first four columns will ALWAYS be given as part of the search parameters; or a table with two columns, key and file, where the value of key can be john-smith-male-australia. I don't quite get the pros and cons. The point I am trying to stress is the fact that all four parameters will always be given in a search.
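
    For what it's worth, MySQL offers a middle ground between the two layouts: keep the four real columns and put one composite index across all of them, so a lookup that supplies all four values is a single index probe without any synthetic key column. A sketch of what that looks like through JDBC (table, column and index names are invented):

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.SQLException;
        import java.sql.Statement;

        public class CompositeIndexDemo {
            public static void main(String[] args) throws SQLException {
                try (Connection con = DriverManager.getConnection(
                        "jdbc:mysql://localhost/test", "user", "password")) {
                    try (Statement st = con.createStatement()) {
                        // One composite index covering all four always-present criteria.
                        // Column order only matters for queries that omit a prefix;
                        // here all four are always supplied, so any order works.
                        st.execute("CREATE INDEX idx_person_lookup "
                                 + "ON person (first_name, last_name, sex, country)");
                    }
                    try (PreparedStatement ps = con.prepareStatement(
                            "SELECT file FROM person WHERE first_name = ? AND last_name = ? "
                          + "AND sex = ? AND country = ?")) {
                        ps.setString(1, "john"); ps.setString(2, "smith");
                        ps.setString(3, "male"); ps.setString(4, "australia");
                        try (ResultSet rs = ps.executeQuery()) {
                            while (rs.next()) System.out.println(rs.getString("file"));
                        }
                    }
                }
            }
        }

    The concatenated-key variant saves some index width, but every reader and writer has to rebuild the john-smith-male-australia string exactly the same way, and you lose the ability to filter or report on the individual columns later.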

  • Locking a web app to work only on an intranet

    - by justjoe
    Some weeks from now I will have a job to create a PHP web app that will handle a billing process. As the client and my team agreed, the web app will only be deployed on their internal server. This raises some fundamental questions for me. How do we lock the web app so it really, really will work only on the internal server and not on the internet, as they asked? Because of this requirement the cost of the job has been cut to some degree, so it will be best if it works only as the client describes it: deployed on an intranet, and an intranet only. What are the pros and cons of deploying a PHP application (with its whole Apache server) on an intranet? What is the fundamental difference between deploying a PHP app in an intranet environment and on the internet? Is there anything else to consider? I know we can put Windows onto a flash disk or pen drive; is there any autorun/portable Apache+PHP server that works in the same fashion?

  • Controlling access to large files in Apache

    - by obeattie
    Hi there, I am looking to control access to some large files (we're talking many GB here) by the use of signed URLs. The files are currently restricted by LDAP basic authentication (mod_auth_ldap), but I need to change this to verify a signature (passed as a query parameter in the URL). Basically, I just need to run a script to verify the signature and allow the request to proceed as if authentication had succeeded. My initial thought was just to use a simple CGI script, but as the files are so large I'm concerned about performance. So, really, this question is (probably) more like "are there any performance implications of streaming large files from a CGI script via Apache?"… and if so, "is there a better way of doing this (short of writing a dedicated authentication module)?" If this makes any sense, help would be much appreciated :) P.S. I wasn't sure exactly what to search for here (10 minutes of Googling were fruitless), so I may very well be duplicating someone else's post.
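
    The signature check itself is cheap; the performance worry is pushing gigabytes through the CGI process, which is why setups like this usually let the script only validate the request and then hand the actual transfer back to the web server (for example via an X-Sendfile-style header). The verification half might look like the following Java sketch, where the path-plus-expiry signing scheme and the parameter layout are assumptions:

        import java.nio.charset.StandardCharsets;
        import java.security.MessageDigest;
        import java.util.Base64;
        import javax.crypto.Mac;
        import javax.crypto.spec.SecretKeySpec;

        public class SignedUrlVerifier {
            private final byte[] secret;

            public SignedUrlVerifier(String secret) {
                this.secret = secret.getBytes(StandardCharsets.UTF_8);
            }

            /** Checks sig == HMAC-SHA256(path + "|" + expiresEpochSeconds). */
            public boolean verify(String path, long expiresEpochSeconds, String sig)
                    throws Exception {
                if (expiresEpochSeconds < System.currentTimeMillis() / 1000) {
                    return false; // the link has expired
                }
                Mac mac = Mac.getInstance("HmacSHA256");
                mac.init(new SecretKeySpec(secret, "HmacSHA256"));
                byte[] expected = mac.doFinal(
                        (path + "|" + expiresEpochSeconds).getBytes(StandardCharsets.UTF_8));
                byte[] given = Base64.getUrlDecoder().decode(sig);
                // Constant-time comparison avoids leaking the signature byte by byte.
                return MessageDigest.isEqual(expected, given);
            }

            public static void main(String[] args) throws Exception {
                SignedUrlVerifier v = new SignedUrlVerifier("shared-secret");
                System.out.println(v.verify("/files/huge.iso", 4102444800L, "AAAA")); // false
            }
        }

    With that split the script touches only a few hundred bytes per request, and Apache streams the file exactly as it does for any static download.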

  • Real pagination vs Next and Previous buttons

    - by Pablo
    By real pagination I mean something like this when on page 3: <<Previous 1 | 2 | {3} | 4 | 5 |...| 15 | Next>>. By Next and Previous buttons I mean something like this when on page 3: <<Previous Next>>. Performance-wise, I'm sure the Previous and Next buttons are better, since unlike real pagination they don't require over-querying the database. By over-querying the database I mean getting more information from the database than you will need to display on the page. My theory is that Previous and Next buttons can drastically improve a site's performance, since they only require the exact information you will need to display on a page; please correct me if I'm wrong on this. So, do users really have a preference when it comes to these two options, or is it just developer preference and convenience? Which one do you prefer, and why? *Note: Previous and Next buttons are usually labeled Newer and Older.
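
    A common way to implement the Previous/Next variant without any extra counting is to ask the database for one row more than the page size; the presence of that sentinel row is what tells you whether a Next link is needed. A JDBC sketch of the idea (table and column names invented):

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.SQLException;
        import java.util.ArrayList;
        import java.util.List;

        public class NextPrevPager {
            static final int PAGE_SIZE = 20;

            record Page(List<String> titles, boolean hasNext) {}

            static Page fetch(Connection con, int pageNumber) throws SQLException {
                String sql = "SELECT title FROM posts ORDER BY created_at DESC LIMIT ? OFFSET ?";
                List<String> titles = new ArrayList<>();
                try (PreparedStatement ps = con.prepareStatement(sql)) {
                    ps.setInt(1, PAGE_SIZE + 1);             // one sentinel row beyond the page
                    ps.setInt(2, (pageNumber - 1) * PAGE_SIZE);
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) titles.add(rs.getString(1));
                    }
                }
                boolean hasNext = titles.size() > PAGE_SIZE; // sentinel present => next page
                if (hasNext) titles.remove(PAGE_SIZE);       // don't render the sentinel
                return new Page(titles, hasNext);            // "Previous" exists when pageNumber > 1
            }
        }

    Real pagination additionally needs a SELECT COUNT(*) (or equivalent) to render the page numbers, which is the main extra cost of the numbered variant.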

  • Uploading photos to a server from iPhone app threads

    - by user290380
    I have an app that needs to upload at least 5 photos to a server, using an API call available on the server. For that I am planning to use threads, which will take care of the photo uploads while the main process goes on with the navigation of views, etc. What I can't decide is whether it is OK to spawn five separate threads on the iPhone, or to use a single thread that will do all the uploads; in the latter case it will obviously become quite slow. Basically, an HTTP POST request will be made to the server with an NSMutableURLRequest object using NSURLConnection. More threads mean more complexity and sync issues, but I can try to write code as neat as possible if it means better performance than a single thread, which is simple but a real showstopper if performance is a concern. Anybody with any experience in this kind of app?

  • How is an HQL query executed at the back end by the HQL engine?

    - by Maddy.Shik
    I want to understand how Hibernate executes an HQL query internally; in other words, how the HQL query engine works. Please suggest some good links for this. One reason for asking is the following problem:

        class Branch {
            // lazily loaded
            @JoinColumn(name = "company_id")
            Company company;
        }

    Since Company is a heavy object, it is lazily loaded. Now I have the HQL query "from Branch as branch where branch.company.id = :companyId". My concern is that if, in order to execute this query, the HQL engine has to retrieve the Company object, that is a performance hit, and I would then prefer to add one more property to the Branch class, i.e. companyId, so that the query becomes "from Branch as branch where branch.companyId = :companyId". If the HQL engine first generates SQL from the HQL and then fires the SQL query itself, there should be no performance issue. Please let me know if the problem is not understandable.
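
    For what it's worth, Hibernate translates HQL to SQL before anything is executed, and a comparison against the identifier of a to-one association is resolved to the foreign-key column already sitting on the branch table, so no Company row is loaded and no join is added. A small sketch against the mapping above (using the Hibernate 5-style typed query API):

        import java.util.List;
        import org.hibernate.Session;
        import org.hibernate.query.Query;

        public class BranchQueries {
            // Hibernate resolves branch.company.id against the company_id
            // foreign-key column on the branch table itself, emitting roughly:
            //   select ... from branch b where b.company_id = ?
            // The lazy Company proxy is never initialized by this query.
            static List<Branch> branchesOf(Session session, long companyId) {
                Query<Branch> q = session.createQuery(
                        "from Branch b where b.company.id = :companyId", Branch.class);
                q.setParameter("companyId", companyId);
                return q.getResultList();
            }
        }

    So the extra companyId property should not be needed just for this query; the identifier of a to-one association is the one property HQL can always compare without touching the associated table.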

  • NoSQL replacement for memcache

    - by Juan Antonio Gomez Moriano
    We are in a situation where the values we store in memcache are bigger than 1MB. It is not possible to make such values smaller, and even if there were a way, we would still need to persist them to disk. One solution would be to recompile the memcache server to allow, say, 2MB values, but this is neither clean nor a complete solution (again, we need to persist the values). The good news is that we can predict quite accurately how many key/value pairs we are going to have, and we can also predict the total size we will need. A key feature for us is the speed of memcache. So the question is: is there any NoSQL replacement for memcache which will allow us to have values larger than 1MB AND store them on disk without loss of speed? In the past I have used Tokyo Tyrant/Cabinet, but it seems to be deprecated now. Any idea?

  • Weird access-denied issue with WMI

    - by stackunderflow1
    I'm seeing a weird access-denied issue with WMI. We're trying to create a differencing disk based on a parent VHD in a Windows service app that runs under Network Service (the machine account is an admin). Everything works fine when we create the diff disk on another machine using WMI, where we use an admin user account. However, we cannot do this on the local machine, because WMI doesn't take user credentials for the local machine. We thought the Network Service account should already have access for this, but it seems it doesn't, and even when we run the service under an admin service account, it fails. Any pointers?

  • In C# how can I serialize a List<int> to a byte[] in order to store it in a DB field?

    - by Matt
    In C#, how can I serialize a List<int> to a byte[] in order to store it in a DB field? I know how to serialize to a file on disk, but how do I serialize to a variable? Here is how I serialized to disk:

        List<int> l = IenumerableofInts.ToList();
        Stream s = File.OpenWrite("file.bin");
        BinaryFormatter bf = new BinaryFormatter();
        bf.Serialize(s, l);
        s.Close();

    I'm sure it's much the same, but I just can't wrap my head around it.

  • MS Access vs SQL Server and others? Is it worth using a DB server for less than 2 GB of data and about 20 users?

    - by asksuperuser
    After my experiment with MS Access vs MySQL, which showed MS Access hugely outperforming MySQL ODBC inserts by a factor of 1000%, and before doing the same experiment with SQL Server, I searched for other people's results and found this one: http://blog.nkadesign.com/2009/access-vs-sql-server-some-stats-part-1/ which says: "As a side note, in this particular test, Access offers much better raw performance than SQL Server. In more complex scenarios it's very likely that Access' performance would degrade more than SQL Server, but it's nice to see that Access isn't a sloth." So is it worth bothering with a database server when the data is less than 2 GB and there are about 20 users (knowing that MS Access theoretically supports up to 255 concurrent users, though in practice it handles only around a dozen)? Are there any real-world studies that really compare MS Access with other databases in this specific use case? Professionally speaking, I keep hearing people systematically recommend a DB server, but they are people who have never used Access and just think a DB server must perform better in every case, which I confess I used to think myself.

  • Icons in Silverlight: Images vs. Vectors

    - by Shnitzel
    I like using the vector drawing feature of Expression Blend to create icons; that way I can change colors easily on my icons without having to resort to an image editor. But my question is: say I have a TreeView control that has an icon next to each tree element, and say I have hundreds of elements. Do you think using images is faster, performance-wise, than using vector icons? I'd rather use vectors, but I'm wondering about the performance implications.

  • Using Lambda Statements for Event Handlers

    - by lush
    I currently have a page which is declared as follows:

        public partial class MyPage : System.Web.UI.Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                // snip
                MyButton.Click += (o, i) =>
                {
                    // snip
                };
            }
        }

    I've only recently moved to .NET 3.5 from 1.1, so I'm used to writing event handlers outside of Page_Load. My question is: are there any performance drawbacks or pitfalls I should watch out for when using the lambda method for this? I prefer it, as it's certainly more concise, but I do not want to sacrifice performance to use it. Thanks.

  • Do MySQL Locked Tables affect related Views?

    - by CogitoErgoSum
    After reading http://stackoverflow.com/questions/1415602/performance-in-pdo-php-mysql-transaction-versus-direct-execution with regard to performance issues I was thinking about, I did some research on locking tables in MySQL. From http://dev.mysql.com/doc/refman/5.0/en/table-locking.html: "Table locking enables many sessions to read from a table at the same time, but if a session wants to write to a table, it must first get exclusive access. During the update, all other sessions that want to access this particular table must wait until the update is done." This part struck me particularly because most of our queries will be updates rather than inserts. I was wondering: if one created a table called foo, on which all updates/inserts were carried out, and then a view called foo_view (a copy of foo, or perhaps foo joined with several other tables), on which all selects occurred, would this locking issue still arise? That is, would SELECT queries on foo_view still have to wait for an update to finish on foo?
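
    Note that a MySQL view is not a separate copy of the data: unless it was created with the TEMPTABLE algorithm, a SELECT on foo_view is rewritten into a SELECT on foo itself, so it competes for exactly the same table lock. A Java sketch that makes the blocking visible (assumes a local MySQL with foo and foo_view already created, and a storage engine/lock mode where table locks apply):

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.Statement;

        public class ViewLockDemo {
            static Connection open() throws Exception {
                return DriverManager.getConnection(
                        "jdbc:mysql://localhost/test", "user", "password");
            }

            public static void main(String[] args) throws Exception {
                Thread writer = new Thread(() -> {
                    try (Connection c = open(); Statement st = c.createStatement()) {
                        st.execute("LOCK TABLES foo WRITE");
                        Thread.sleep(5000);              // hold the write lock for 5 s
                        st.execute("UNLOCK TABLES");
                    } catch (Exception e) { e.printStackTrace(); }
                });
                writer.start();
                Thread.sleep(500);                       // let the writer grab the lock
                long t0 = System.currentTimeMillis();
                try (Connection c = open(); Statement st = c.createStatement()) {
                    st.executeQuery("SELECT COUNT(*) FROM foo_view").next();
                }
                // Prints roughly 4500+ ms: the view SELECT waited for the table lock.
                System.out.println("view select blocked for "
                        + (System.currentTimeMillis() - t0) + " ms");
                writer.join();
            }
        }

    So moving the SELECTs to a view does not sidestep table locking; what changes the picture is a storage engine with row-level locking such as InnoDB.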

  • SQL Server for logging: best connection practice

    - by ozz
    I'm using SQL Server for logging. Yes, this is the wrong decision; there are better databases for this requirement, but I have no other option for now. The logging rate is 3 logs per second. So I have a static Logger class with a static Log method. Keeping an open connection as a static member is better for performance, but what is the best implementation of it? This is all I know:

        public static class OzzLogger
        {
            static SqlConnection Con;

            static OzzLogger()
            {
                Con = new SqlConnection(....);
                Con.Open();
            }

            public static void Log(....)
            {
                Con.ExecuteSql(......);
            }
        }

    UPDATE: I asked because my information was out of date. People say connection pooling performance is enough, so if there is no objection I'm closing the issue :)

  • How have your coding values changed since graduating?

    - by Matt
    We all walked out of school with stars in our eyes and little experience of "real-world" programming. How have your opinions on programming as a craft changed since you've gained more experience away from academia? I've become more and more focused on design a la McConnell: wide use of encapsulation, quality code that gives you warm fuzzy feelings when you read it, maintainability over execution performance, and so on, whereas many of my co-workers have followed a different path: fewer middleman layers getting in the way, code that is right out in the open and easier to locate even if it is harder to read, and performance-centric designs. What have you learned about the craft of software design that has changed the way you approach coding since leaving the academic world?

  • JavaScript large number array compression

    - by gatapia
    Hi all, I've got a JavaScript application that sends a large amount of numerical data down the wire. This data is then stored in a database. I am having size issues (too much bandwidth, database getting too big). I am now ready to sacrifice some performance for compression. I was thinking of implementing something like base-62 encoding, i.e. number.toString(62) and parseInt(compressed, 62). This would certainly reduce the size of the data, but before I go ahead and do it I thought I would put it to the folks here, as I know there must be some outside-the-box solution I have not considered. The basic specs are: compress large number arrays into strings for JSONP transfer (so I think UTF is out), and be relatively fast; look, I'm not expecting the same performance as I have now, but I also don't want gzip compression either. Any ideas would be greatly appreciated. Thanks Guido Tapia
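
    One wrinkle with the plan as stated: the built-in number.toString(radix) and parseInt(string, radix) only accept radixes from 2 to 36, so base 62 needs a hand-rolled codec anyway. The technique itself is language-independent; here is a sketch of the encode/decode pair in Java (the alphabet and the comma delimiter are arbitrary choices, and values are assumed non-negative):

        public class Base62 {
            private static final String ALPHABET =
                    "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ";

            static String encode(long n) {
                if (n == 0) return "0";
                StringBuilder sb = new StringBuilder();
                for (long v = n; v > 0; v /= 62) {
                    sb.append(ALPHABET.charAt((int) (v % 62))); // least-significant first
                }
                return sb.reverse().toString();
            }

            static long decode(String s) {
                long n = 0;
                for (int i = 0; i < s.length(); i++) {
                    n = n * 62 + ALPHABET.indexOf(s.charAt(i));
                }
                return n;
            }

            public static void main(String[] args) {
                long[] data = {0, 61, 62, 123456789L};
                StringBuilder payload = new StringBuilder();
                for (long v : data) {
                    if (payload.length() > 0) payload.append(','); // ',' is JSON-safe
                    payload.append(encode(v));
                }
                System.out.println(payload);                       // 0,Z,10,8m0Kx
                for (String tok : payload.toString().split(",")) {
                    System.out.print(decode(tok) + " ");           // round-trips the array
                }
            }
        }

    Whether this beats simple delta encoding combined with a smaller radix depends on the distribution of the numbers, so it is worth measuring both on real payloads.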

  • Fully automated MS SQL Restore

    - by hasen j
    I'm not very fluent with MS SQL commands. I need a script to restore a database from a .bak file and move the logical data and log files to a specific path. I can do:

        restore filelistonly from disk='D:\backups\my_backup.bak'

    This will give me a result set with a column LogicalName. Next, I need to use the logical names from the result set in the restore command:

        restore database my_db_name from disk='d:\backups\my_backups.bak'
        with file=1,
        move 'logical_data_file' to 'd:\data\mydb.mdf',
        move 'logical_log_file' to 'd:\data\mylog.ldf'

    How do I capture the logical names from the first result set into variables that can be supplied to the "move" clauses? I think the solution might be trivial, but I'm pretty new to MS SQL.

  • ClickOnce deployment is leaving multiple versions (yes, more than 2)

    - by Clyde
    I've got a ClickOnce application that is leaving all old versions on my disk. It's an internal corporate application that gets frequent updates, so this is a disaster for rapidly inflating our backup size. According to the docs and other SO questions, ClickOnce is supposed to leave only the current and previous versions on disk. However, each time I deploy the project and upgrade a client, I get another copy of all exe/dll/data files. I'm making no changes whatsoever to the application, just pushing Deploy again in Visual Studio. Any ideas? Updates: the problem happens on both Windows 7 and XP, and on 64-bit and 32-bit Windows. I've done a diff of the folders where the versions are installed, and the following files differ:

        MyApp.exe.manifest
        MyApp.exe.cdf-ms
        MyDll1.cdf-ms
        MyDll2.cdf-ms

    No actual executable files are different, nor are MyApp.manifest, MyDll1.manifest, etc.
