Search Results

Search found 31421 results on 1257 pages for 'software performance'.

  • Is there a software license for code review (read-only)?

    - by Horace Ho
    I am going to develop a product related to security. It's my personal belief that any security-related product should release its source code for review. However, I also want to sell it as a commercial product, keep ownership of the code, and not permit derivative works. Is there a software license for this purpose? Thanks.

  • How can I read these parameters inside my software? (C#)

    - by Dezigo
    Hello! I have created a shortcut to my .exe file and want to pass extra parameters to the .exe through the shortcut's Target attribute. Example Target: "C:\Documents and Settings\dezigo\My Documents\c# programm\DirectoryScanner\DirectoryScanner\DirectoryScanner\bin\Debug\DirectoryScanner.exe" plus extra params (like method=1). How can I read these parameters inside my software (C#)? Then, when the .exe starts, check: if (method == 1) { // do something } else { // do something else }
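
    A minimal C# sketch of one way to read such parameters; the name method comes from the question, while the parsing itself is illustrative:

        using System;

        class Program
        {
            static void Main(string[] args)
            {
                // args holds everything after the .exe path in the
                // shortcut's Target, e.g. ...\DirectoryScanner.exe method=1
                int method = 0;
                foreach (string arg in args)
                {
                    string[] parts = arg.Split('=');
                    if (parts.Length == 2 && parts[0] == "method")
                        int.TryParse(parts[1], out method);
                }

                if (method == 1)
                {
                    // do something
                }
                else
                {
                    // do something else
                }
            }
        }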

  • Entity Framework vs. NHibernate for performance, learning curve, and overall features

    - by hadi
    I know this has been asked several times and I have read all the posts as well, but they are all very old. Considering there have been advancements in versions and releases, I am hoping there might be fresh views. We are building a new application on ASP.NET MVC and need to settle on an ORM tool. We have never used an ORM before and have pretty much boiled it down to two: NHibernate and Entity Framework. I really need some advice from someone who has used both tools and can recommend one based on experience. There are three points I am focusing on to decide: performance, learning curve, and overall capability. Your advice will be highly appreciated. Best Regards,

  • What is the performance hit of enabling sessions on Google App Engine?

    - by Spines
    What is the performance hit of enabling sessions on Google App Engine? I just turned on <sessions-enabled>true</sessions-enabled> in my Google App Engine app, and now my requests consistently use about 100 ms more CPU time than before I enabled it. It also makes the user wait an additional 100 ms for the server to respond on each request. This seems like quite a significant cost: I'm not even calling getSession or using it in any way yet, and it still adds this extra latency. Is there something I can do to speed this up?
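
    For context, the setting in question lives in the app's appengine-web.xml; a minimal sketch (only the sessions-enabled element comes from the question, the rest is a generic skeleton):

        <?xml version="1.0" encoding="utf-8"?>
        <appengine-web-app xmlns="http://appengine.google.com/ns/1.0">
            <application>my-app</application>
            <version>1</version>
            <!-- the element the asker toggled -->
            <sessions-enabled>true</sessions-enabled>
        </appengine-web-app>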

  • zlib memory usage / performance with 500 KB of data

    - by unixman83
    Is zlib worth it? Are there other, better-suited compressors? I am using an embedded system; frequently, I have only 3 MB of RAM or less available to my application, so I am considering using zlib to compress my buffers. I am concerned about overhead, however. The buffers' average size will be 30 KB, which probably won't compress much with zlib. However, I will see occasional maximum buffer sizes of 700 KB, with 500 KB much more common. Is zlib worth it in this case, or is the overhead too much to justify? Does anyone know of a good compressor for extremely memory-limited environments? My sole considerations for compression are the RAM overhead of the algorithm and performance at least as good as zlib.
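
    A minimal C sketch of zlib's one-shot path (illustrative; the question's buffers are not shown). The deflate state's footprint can also be shrunk via deflateInit2()'s windowBits and memLevel parameters:

        #include <zlib.h>

        /* Compress in[0..in_len) into out, which should hold at least
         * compressBound(in_len) bytes. Returns a zlib status code. */
        int compress_buffer(const unsigned char *in, unsigned long in_len,
                            unsigned char *out, unsigned long *out_len)
        {
            uLongf dest_len = compressBound(in_len);  /* worst-case output size */
            if (dest_len > *out_len)
                return Z_BUF_ERROR;                   /* caller's buffer too small */
            int rc = compress2(out, &dest_len, in, in_len, Z_BEST_SPEED);
            if (rc == Z_OK)
                *out_len = dest_len;
            return rc;
        }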

  • transform: translateX vs. transitioning the left property: which has better performance? (CSS)

    - by JackMahoney
    I'm making a slide-out menu with HTML and CSS3, especially transitions. I would like to know the best practice / best performance for sliding a relatively positioned div horizontally. When I click a button, it adds a class to my div. Which class is better? (Note: I can add all the browser prefixes later, and this site only targets modern browsers.)

        /* option 1 */
        .animate {
            -webkit-transition: all ease 0.3s;
            -webkit-transform: translateX(200px);
        }

        /* option 2 */
        .animate {
            -webkit-transition: all ease 0.3s;
            left: 200px;
        }

    Thanks

  • Changing coding style due to Android GC performance: how far is too far?

    - by Benju
    I keep hearing that Android applications should try to limit the number of objects created in order to reduce the workload on the garbage collector. It makes sense that you may not want to create massive numbers of objects to track with a limited memory footprint; by contrast, on a traditional server application, creating 100,000 objects within a few seconds would not be unheard of. The problem is, how far should I take this? I've seen tons of examples of Android applications relying on static state in order to supposedly "speed things up". Does increasing the number of instances that need to be garbage collected from dozens to hundreds really make that big a difference? I can imagine changing my coding style so as not to create hundreds of thousands of objects like you might on a full-blown Java EE server, but relying on a bunch of static state to (supposedly) reduce the number of objects to be garbage collected seems odd. How much is it really necessary to change your coding style in order to create performant Android apps?
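
    A minimal Java sketch of the allocation-avoiding style in question (the view and the reused Rect field are illustrative, not from the question):

        import android.content.Context;
        import android.graphics.Canvas;
        import android.graphics.Rect;
        import android.view.View;

        public class SlideView extends View {
            // Reused across frames instead of allocating a new Rect in
            // onDraw(), which would create per-frame garbage.
            private final Rect bounds = new Rect();

            public SlideView(Context context) {
                super(context);
            }

            @Override
            protected void onDraw(Canvas canvas) {
                canvas.getClipBounds(bounds);  // fills the existing Rect, no allocation
                // ... draw using bounds ...
            }
        }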

  • How to solve the performance decay of a VB.NET 1.1 application?

    - by marco.ragogna
    I have a single-threaded Windows Forms application written in VB.NET and targeting Framework 1.1. The software communicates with external boards through a serial interface, and it mainly consists of a state machine that runs some tests, driven in a loop by a Timer with an Interval of 50 ms. Feedback to the user interface is provided through some custom events raised during the tests. The problem that is driving me crazy is that performance slightly decreases over time, in particular after 1200-1300 test operations. The memory occupied does not increase over time; only the CPU seems affected by this problem. The strange thing is that, targeting Framework 2.0 and using the same identical code, I do not have this problem. I know it is difficult without looking at the code, but do you have suggestions on how I can approach the problem?

  • Java System.nanoTime() is really slow. Is it possible to implement a high-performance Java profiler?

    - by willpowerforever
    I did a test and found that the overhead of a call to System.nanoTime() is at least 500 ns on my machine. It seems that it is very hard to build a high-performance Java profiler. For enterprise software, suppose a run takes about 350 seconds and makes 12,500,000,000 method calls. The number of calls to System.nanoTime() is then 12,500,000,000 * 2 = 25,000,000,000 (one for the start timestamp, one for the end timestamp), and the total overhead of System.nanoTime() is 500 ns * 25,000,000,000 = 12,500 s. Note: all data are from a real case. Is there a better way to acquire the timestamp?
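
    A minimal sketch of the kind of measurement described (the loop count is arbitrary, and a JVM micro-benchmark like this is only a rough indicator):

        public class NanoTimeOverhead {
            public static void main(String[] args) {
                final int calls = 10000000;
                long sink = 0;
                long start = System.nanoTime();
                for (int i = 0; i < calls; i++) {
                    sink += System.nanoTime();  // the call being measured
                }
                long elapsed = System.nanoTime() - start;
                // Printing 'sink' keeps the JIT from removing the loop as dead code.
                System.out.println("avg ns/call: "
                        + (double) elapsed / calls + " (sink=" + sink + ")");
            }
        }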

  • Does the number of busy worker threads in the CLR ThreadPool affect the performance of I/O threads?

    - by andrej351
    We have a Windows Service that hosts a number of WCF services and, in an unrelated part of the app, makes extensive use of the TPL Task class to asynchronously do relatively short bits of work. It is my understanding that WCF uses managed I/O threads from the ThreadPool to execute requests. I noticed that after deploying a feature which significantly raised the application's use of Tasks, and as such its use of ThreadPool worker threads as well, the performance of a couple of web services has become very slow. We're talking minutes instead of less than a second. The number of Tasks actually trying to run at any one time can range between 20 and 1000, which makes me think that any new (last-in) work needing some CPU time could be forced to wait for quite some time. Does the (in my case extremely large) number of busy ThreadPool worker threads affect the ThreadPool's managed I/O threads? Or could these two be connected in any way? Thanks!
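
    A minimal C# sketch (not from the question) of one way to watch the two pools the asker mentions; worker and I/O completion threads are tracked separately, but they do compete for the same CPUs:

        using System;
        using System.Threading;

        class PoolSnapshot
        {
            static void Main()
            {
                int workers, io, maxWorkers, maxIo;
                // Worker threads and I/O completion-port threads are
                // reported as two separate counts.
                ThreadPool.GetAvailableThreads(out workers, out io);
                ThreadPool.GetMaxThreads(out maxWorkers, out maxIo);
                Console.WriteLine("worker threads in use: " + (maxWorkers - workers));
                Console.WriteLine("I/O threads in use: " + (maxIo - io));
            }
        }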

  • Which registry keys need to be edited to change the default browser?

    - by paradroid
    Which registry keys need to be edited to change the default browser? I have found these keys so far, and they seem to do what I want, but I am not sure if I have found all of them. Data in:

        HKEY_CURRENT_USER\Software\Classes\http\shell\open\command
        HKEY_CURRENT_USER\Software\Classes\https\shell\open\command
        HKEY_CURRENT_USER\Software\Classes\ftp\shell\open\command

    Value in:

        HKEY_CURRENT_USER\Software\Classes\Local Settings\Software\Microsoft\Windows\Shell\MuiCache

    Are there any other keys which would need to be changed, so that it is done perfectly?
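
    A hedged sketch of what an edit to one of those keys might look like as a .reg file (the browser path is purely illustrative; only the key itself comes from the question):

        Windows Registry Editor Version 5.00

        [HKEY_CURRENT_USER\Software\Classes\http\shell\open\command]
        @="\"C:\\Program Files\\MyBrowser\\browser.exe\" \"%1\""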

  • Can the order of cases in a switch statement affect performance?

    - by Bipul
    Let's say I have a switch statement as below:

        switch (alphabet) {
            case "f":
                // do something
                break;
            case "c":
                // do something
                break;
            case "a":
                // do something
                break;
            case "e":
                // do something
                break;
        }

    Now suppose I know that the frequency of alphabet e is highest, followed by a, c, and f respectively. So I just restructured the order of the case statements as follows:

        switch (alphabet) {
            case "e":
                // do something
                break;
            case "a":
                // do something
                break;
            case "c":
                // do something
                break;
            case "f":
                // do something
                break;
        }

    Will the second switch statement perform better (i.e., run faster) than the first? If yes, and if my program needs to execute this switch statement many times, will that be a substantial improvement? Or, if not, is there any way I can use my frequency knowledge to improve performance? Thanks
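
    For illustration, a minimal Java sketch of a construct where frequency ordering demonstrably matters, an if-else chain; compilers typically turn a switch into a jump table or hash-based dispatch, where case order should be irrelevant:

        // Checks run in descending frequency order, so the common
        // case ("e") exits after a single comparison.
        static void handle(String alphabet) {
            if ("e".equals(alphabet)) {
                // do something
            } else if ("a".equals(alphabet)) {
                // do something
            } else if ("c".equals(alphabet)) {
                // do something
            } else if ("f".equals(alphabet)) {
                // do something
            }
        }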

  • Why does SQLAlchemy with psycopg2 use_native_unicode have poor performance?

    - by Bob Dover
    I'm having a difficult time figuring out why a simple SELECT query is taking such a long time with SQLAlchemy using raw SQL (I'm getting 14600 rows/sec, but when running the same query through psycopg2 without SQLAlchemy, I'm getting 38421 rows/sec). After some poking around, I realized that toggling SQLAlchemy's use_native_unicode parameter in the create_engine call actually makes a huge difference. This query takes 0.5 s to retrieve 7300 rows:

        from sqlalchemy import create_engine

        engine = create_engine("postgresql+psycopg2://localhost...",
                               use_native_unicode=True)
        r = engine.execute("SELECT * FROM logtable")
        fetched_results = r.fetchall()

    This query takes 0.19 s to retrieve the same 7300 rows:

        engine = create_engine("postgresql+psycopg2://localhost...",
                               use_native_unicode=False)
        r = engine.execute("SELECT * FROM logtable")
        fetched_results = r.fetchall()

    The only difference between the two queries is use_native_unicode, yet SQLAlchemy's own docs state that it is better to keep use_native_unicode=True (http://docs.sqlalchemy.org/en/latest/dialects/postgresql.html). Does anyone know why use_native_unicode makes such a big performance difference? And what are the ramifications of turning it off?

  • Maximum number of files in one ext3 directory while still getting acceptable performance?

    - by knorv
    I have an application writing to an ext3 directory which over time has grown to roughly three million files. Needless to say, reading the file listing of this directory is unbearably slow. I don't blame ext3. The proper solution would have been to let the application code write to sub-directories such as ./a/b/c/abc.ext rather than using only ./abc.ext. I'm changing to such a sub-directory structure, and my question is simply: roughly how many files should I expect to store in one ext3 directory while still getting acceptable performance? What's your experience? Or in other words: assuming that I need to store three million files in the structure, how many levels deep should the ./a/b/c/abc.ext structure be? Obviously this is a question that cannot be answered exactly, but I'm looking for a ballpark estimate.
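
    For a rough feel of the fan-out arithmetic, a minimal Python sketch of hash-based sharding (assumptions: 256-way branching per level; names and paths are illustrative). Two levels give 256**2 = 65,536 leaf directories, so three million files average about 46 files per directory:

        import hashlib
        import os

        def shard_path(root, filename, levels=2):
            """Map a filename to root/aa/bb/filename using the first
            bytes of its MD5 hex digest."""
            digest = hashlib.md5(filename.encode()).hexdigest()
            parts = [digest[2 * i:2 * i + 2] for i in range(levels)]
            return os.path.join(root, *parts, filename)

        print(shard_path("/data", "abc.ext"))  # -> /data/xx/yy/abc.ext, xx/yy from the hash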

  • How to query a range of data in DB2 with the highest performance?

    - by Fuangwith S.
    Usually, I need to retrieve data from a table in some range; for example, a separate page for each search result. In MySQL I use the LIMIT keyword, but I don't know the DB2 equivalent. Currently I use this query to retrieve a range of data:

        SELECT *
        FROM (
            SELECT SMALLINT(RANK() OVER (ORDER BY NAME DESC)) AS RUNNING_NO,
                   DATA_KEY_VALUE,
                   SHOW_PRIORITY
            FROM EMPLOYEE
            WHERE NAME LIKE 'DEL%'
            ORDER BY NAME DESC
            FETCH FIRST 20 ROWS ONLY
        ) AS TMP
        ORDER BY TMP.RUNNING_NO ASC
        FETCH FIRST 10 ROWS ONLY

    But I know it's bad style. So, how should I query for the highest performance?
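
    For comparison, a common DB2 pagination idiom wraps ROW_NUMBER() and filters on the row range directly; a sketch, assuming the second page of 10 rows is wanted:

        SELECT *
        FROM (
            SELECT ROW_NUMBER() OVER (ORDER BY NAME DESC) AS RN,
                   DATA_KEY_VALUE,
                   SHOW_PRIORITY
            FROM EMPLOYEE
            WHERE NAME LIKE 'DEL%'
        ) AS TMP
        WHERE RN BETWEEN 11 AND 20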

  • Does the order of conditions in a WHERE clause affect MySQL performance?

    - by Greg
    Say that I have a long, expensive query, packed with conditions, searching a large number of rows. I also have one particular condition, like a company id, that will limit the number of rows that need to be searched considerably, narrowing it down to dozens from hundreds of thousands. Does it make any difference to MySQL performance whether I do this:

        SELECT * FROM clients
        WHERE (firstname LIKE :foo OR lastname LIKE :foo OR phone LIKE :foo)
          AND (firstname LIKE :bar OR lastname LIKE :bar OR phone LIKE :bar)
          AND company = :ugh

    or this:

        SELECT * FROM clients
        WHERE company = :ugh
          AND (firstname LIKE :foo OR lastname LIKE :foo OR phone LIKE :foo)
          AND (firstname LIKE :bar OR lastname LIKE :bar OR phone LIKE :bar)
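
    One hedged way to check is to compare the optimizer's plan for both orderings; MySQL's optimizer is generally free to reorder predicates, so the plans should match. A sketch, with placeholder literals standing in for the bound parameters:

        EXPLAIN SELECT * FROM clients
        WHERE company = 42
          AND (firstname LIKE 'foo%' OR lastname LIKE 'foo%' OR phone LIKE 'foo%');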

  • Which approach to creating the data access layer has the highest performance?

    - by pooyakhamooshi
    I have to create a very high-performance application. Currently, I am using Entity Framework for my data access layer. My application has to insert some communication data almost every second. I have found that Entity Framework is slow; it takes about 2 seconds to finish the SaveChanges() method. I was thinking I have the following options:

        1. Create the data access layer myself using ADO.NET, with stored procedures or ad-hoc queries
        2. Use the Enterprise Library data access layer
        3. Use NHibernate
        4. Use Repository Factory: http://pooyakhamooshi.blogspot.com/search?q=repository

    What do you think? Which one is quicker for inserting data? Which one is quicker to set up?
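
    A minimal C# sketch of option 1 (raw ADO.NET with a parameterized ad-hoc insert; the table and column names are illustrative, not from the question):

        using System.Data.SqlClient;

        class CommDataWriter
        {
            // One parameterized INSERT per call keeps the cost close to a
            // single round trip, with no change tracking on top.
            public static void Insert(string connStr, int deviceId, byte[] payload)
            {
                using (var conn = new SqlConnection(connStr))
                using (var cmd = new SqlCommand(
                    "INSERT INTO CommData (DeviceId, Payload) VALUES (@id, @payload)",
                    conn))
                {
                    cmd.Parameters.AddWithValue("@id", deviceId);
                    cmd.Parameters.AddWithValue("@payload", payload);
                    conn.Open();
                    cmd.ExecuteNonQuery();
                }
            }
        }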

  • Aside from performance concerns, is Java still chosen over Groovy/JRuby etc.?

    - by yar
    [This is an empirical question about the state of the art: I am NOT asking whether Java is cooler or less cool than the dynamic languages that work on the JVM.] Aside from cases where performance is a main decision factor, do companies/developers still willingly choose Java over Groovy, JRuby, or Jython? Personal note: the reason I am asking is that, while I do some subset of my professional work in Ruby (not JRuby, for now), in my personal projects I use Java. While I have written non-trivial apps in Groovy, I prefer Java, but I wonder if I should just get over it and do everything in Groovy. I like Java because I feel that static typing saves me time and aids refactoring. (No, I am not familiar with Scala.) However, I feel that this very empirical, on-topic programming question may inform my decision.

  • Is NATURAL JOIN any better than SELECT ... FROM ... WHERE in terms of performance?

    - by ashy_32bit
    Today I got into a debate with my project manager about Cartesian products. He says a NATURAL JOIN is somehow much better than SELECT ... FROM ... WHERE, because the latter causes the db engine to internally perform a Cartesian product, while the former uses another approach that prevents this. As far as I know, the natural join syntax is not different in any way from SELECT ... FROM ... WHERE in terms of performance or meaning; I mean, you can use either based on your taste.

        SELECT * FROM table1, table2 WHERE table1.id = table2.id

        SELECT * FROM table1 NATURAL JOIN table2

    Please elaborate on how the first query causes a Cartesian product while the second one is somehow smarter.
