Search Results

Search found 19055 results on 763 pages for 'high performance'.


  • (NOT) NULL for NVARCHAR columns

    - by Anders Abel
    Allowing NULL values on a column is normally done to represent the absence of a value. With NVARCHAR there is already a way to have an empty string without setting the column to NULL, and in most cases I cannot see a semantic difference between an NVARCHAR holding an empty string and one holding NULL. Declaring the column as NOT NULL saves me from having to deal with the possibility of NULL values in the code, and it feels better not to have two different representations of "no value" (NULL or an empty string). Will I run into any other problems by setting my NVARCHAR columns to NOT NULL? Performance? Storage size? Anything I've overlooked about how the values are used in the client code? (A sketch of the declaration is shown below.)

    Read the article
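
    A minimal sketch of the column declaration described above, assuming SQL Server; the table, column and constraint names are invented for illustration. Pairing NOT NULL with a default of N'' leaves exactly one representation of "no value".

        CREATE TABLE Customers
        (
            Id    INT IDENTITY(1,1) PRIMARY KEY,
            -- NOT NULL plus an empty-string default: client code never sees NULL
            Notes NVARCHAR(500) NOT NULL
                  CONSTRAINT DF_Customers_Notes DEFAULT (N'')
        );

        -- Client code can then test for emptiness without a null check, e.g.
        --   if (reader.GetString(i).Length == 0) { ... }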

  • Question about varying top margin when using custom fonts on the iPhone.

    - by user170030
    I am using FontLabel to display texts of varying lengths in a custom font. I size the FontLabel using the following:

        CGSize size = [myString sizeWithFont:[UIFont systemFontOfSize:[[[UIApplication sharedApplication] delegate] getFontSize]]
                           constrainedToSize:CGSizeMake(290, 4000)
                               lineBreakMode:UILineBreakModeWordWrap];

    For some reason, this always produces a FontLabel where the text starts at a varying distance from the top. Sometimes the text begins at the correct location; other times it appears either too high or too low. I would appreciate some help in solving this issue.

    Read the article

  • When to use StringBuilder in Java

    - by kostja
    It is supposed to be generally preferable to use a StringBuilder for String concatenation in Java. Is that always the case? What I mean is: is the overhead of creating a StringBuilder object, calling append() and finally toString() already smaller than concatenating existing Strings with + when there are only two Strings, or is it only advisable for more Strings? If there is such a threshold, what does it depend on (the String length, I suppose, but in which way)? And finally, would you trade the readability and conciseness of + concatenation for the performance of StringBuilder in small cases like two, three or four Strings? (A short sketch of the two forms follows below.)

    Read the article
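
    A small sketch of the usual rule of thumb, not a definitive benchmark; the class, method names and 'parts' parameter are illustrative. The compiler already rewrites a single + expression into a StringBuilder, so the explicit builder mainly pays off when concatenation is repeated, for example in a loop.

        final class ConcatSketch {
            // Each += builds and discards a new StringBuilder plus an intermediate
            // String, so repeated concatenation does O(n^2) copying overall.
            static String joinSlow(java.util.List<String> parts) {
                String s = "";
                for (String part : parts) {
                    s += part;
                }
                return s;
            }

            // One builder reused across the loop keeps the work linear.
            static String joinFast(java.util.List<String> parts) {
                StringBuilder sb = new StringBuilder();
                for (String part : parts) {
                    sb.append(part);
                }
                return sb.toString();
            }

            // For a single expression such as
            //     String greeting = "Hello, " + first + " " + last + "!";
            // javac (before Java 9) already emits a StringBuilder behind the scenes,
            // so writing one by hand for two to four operands buys essentially nothing.
        }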

  • What is the best method of assessment for computer science students?

    - by Gavimoss
    This question is a bit more philosophical, so feel free to remove it if you like, but it has been bugging me for the last 4 years! As a final-year student I find that exams can often be passed with a couple of days of cramming, without necessarily retaining or understanding the content, i.e. a regurgitation of lecture notes is often enough to gain high marks. A friend of mine, whose final-year evaluation was based solely on practical work (a project, assignment marks and the creation of a poster), is about to graduate with an honours degree, yet all of that work could have been completed by a third party. Personally I don't think either method of assessment is sufficient: I am currently on track for a 1st class honours in artificial intelligence and computer science, and I believe this is mostly due to my skill in passing exams, not my skill as a programmer or any vast in-depth knowledge of the subjects I have "studied". Surely there is a better way to assess our skills - isn't there?

    Read the article

  • iPhone, Core Data: does NSManagedObject use a lazy-load mechanism when it is created?

    - by Robin
    Hi all, I am using Core Data in my app and have defined a class much like the following:

        @interface Master : NSManagedObject {
        }
        @property (nonatomic, retain) NSSet *Details;
        ....

    The entity Master contains a property 'Details' that relates it to another table - a typical master-detail relationship. When I trace the app, though, I find that the value of the 'Details' property is constructed even though it is never accessed. I thought Core Data was supposed to use a lazy-loading (faulting) mechanism to improve performance, or maybe I have missed some configuration step? Because the Master entity has at least five 'child' table properties, I need to understand this before committing to Core Data. Any help? Thanks for your time! (A small faulting check is sketched below.)

    Read the article
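
    A small Objective-C sketch for checking whether the relationship has really been materialised; the entity and relationship names follow the question, the function name is invented. Note that inspecting the object in ways that read the relationship (logging the object graph, KVC access, 'po' in the debugger) will itself fire the fault, which can make faulting look broken when it is not.

        // Call this with a Master fetched through an NSFetchRequest.
        static void InspectDetailsFault(Master *master) {
            // YES means the relationship is still a fault: the Detail rows have
            // not been loaded yet. This check itself does not fire the fault.
            BOOL stillLazy = [master hasFaultForRelationshipNamed:@"Details"];
            NSLog(@"Details still a fault? %d", stillLazy);

            // Touching the set is what actually pulls the rows in.
            NSUInteger count = [master.Details count];
            NSLog(@"fault fired; %lu detail objects loaded", (unsigned long)count);
        }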

  • Background upload in PHP

    - by Robijntje007
    I am working with a form that lets me upload files via a local folder and FTP, so I want to move files over FTP (which already works). For performance reasons I want the transfer to run in the background, so I use ncftpput (Linux). On the CLI the following command works perfectly (the -b flag sends the transfer to the background):

        ncftpput -b -u name -p password -P 1980 127.0.0.1 /upload/ /home/Downloads/upload.zip

    But if I run it via PHP it does not work (without the -b parameter it does). PHP code:

        $cmd = "ncftpput -b -u name -p password -P 1980 127.0.0.1 /upload/ /home/Downloads/upload.zip";
        $return = exec($cmd);

    (A possible workaround is sketched below.)

    Read the article
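
    One commonly suggested workaround, sketched as an assumption rather than a verified fix for ncftpput's -b mode: let the shell do the backgrounding instead, since exec() waits for the program unless its output streams are redirected. Credentials and paths are copied from the question and should of course come from configuration in real code.

        <?php
        // Sketch: redirect all output and append '&' so exec() returns immediately;
        // the shell-level '&' takes the place of ncftpput's -b here.
        $cmd = sprintf(
            'ncftpput -u %s -p %s -P 1980 127.0.0.1 /upload/ %s > /dev/null 2>&1 &',
            escapeshellarg('name'),
            escapeshellarg('password'),
            escapeshellarg('/home/Downloads/upload.zip')
        );
        exec($cmd);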

  • SQL Function for On Balance Volume (Financial Query)

    - by CraigJSte
    I would like to create a SQL function for On Balance Volume. The calculation is too complex for me to figure out, but here is the outline of the user-defined table function; if someone could help me fill in the blanks I would appreciate it. Craig (A rough sketch of the calculation follows below.)

        CREATE FUNCTION [dbo].[GetStdDev3] (@TKR VARCHAR(10))
        RETURNS @results TABLE
        (
            dayno SMALLINT IDENTITY(1,1) PRIMARY KEY,
            [date] DATETIME,
            [obv] FLOAT
        )
        AS
        BEGIN
            DECLARE @rowcount SMALLINT
            INSERT @results ([date], [obv])
            -- CREATE A FUNCTION FOR ON BALANCE VOLUME
            -- On Balance Volume is the sum of volume over all periods
            -- OBV = 1000 at Period = 0
            -- OBV = OBV Previous + Previous Volume if Close > Previous Close
            -- OBV = OBV Previous - Previous Volume if Close < Previous Close
            -- OBV = OBV Previous if Close = Previous Close
            -- The actual value of OBV is not important, so to keep the ratio low we
            -- scale the volume down by 1/10th or 1/100th:
            -- Value of Volume = Volume * .01 if Volume < 999
            -- Value of Volume = Volume * .001 if Volume > 999
            FROM Tickers
            RETURN
        END

    This is the Tickers table:

        [dbo].[Tickers](
            [ticker] [varchar](10) NULL,
            [date] [datetime] NULL,
            [high] [float] NULL,
            [low] [float] NULL,
            [open] [float] NULL,
            [close] [float] NULL,
            [volume] [float] NULL,
            [time] [datetime] NULL,
            [change] [float] NULL
        )

    Read the article
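
    A rough sketch of one way to fill in the calculation, offered as an assumption rather than the finished function: it follows the recurrence stated in the outline (seed of 1000, previous volume added or subtracted) and assumes SQL Server 2012 or later so LAG() and a windowed running SUM are available. The volume-scaling step from the outline is left out for brevity.

        -- Sketch only; @TKR stands for the function's ticker parameter.
        ;WITH Ordered AS (
            SELECT [date], [close], [volume],
                   LAG([close])  OVER (ORDER BY [date]) AS prev_close,
                   LAG([volume]) OVER (ORDER BY [date]) AS prev_volume
            FROM Tickers
            WHERE ticker = @TKR
        )
        SELECT [date],
               1000 + SUM(CASE WHEN prev_close IS NULL   THEN 0             -- first period: seed only
                               WHEN [close] > prev_close THEN  prev_volume
                               WHEN [close] < prev_close THEN -prev_volume
                               ELSE 0 END)
                      OVER (ORDER BY [date]
                            ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS obv
        FROM Ordered
        ORDER BY [date];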

  • Measuring the time to create and destroy a simple object

    - by portoalet
    From Effective Java, 2nd Edition, Item 7: Avoid Finalizers - "Oh, and one more thing: there is a severe performance penalty for using finalizers. On my machine, the time to create and destroy a simple object is about 5.6 ns. Adding a finalizer increases the time to 2,400 ns. In other words, it is about 430 times slower to create and destroy objects with finalizers." How can one measure the time to create and destroy an object? Do you just do the following? (A fuller benchmark sketch follows below.)

        long start = System.nanoTime();
        SimpleObject simpleObj = new SimpleObject();
        simpleObj.finalize();
        long end = System.nanoTime();
        long time = end - start;

    Read the article
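
    A rough micro-benchmark sketch, offered as an assumption about the methodology rather than a reproduction of Bloch's: finalize() is never called by hand (the garbage collector invokes it), so the cost is estimated by timing many allocations of a finalizable class against a plain one and dividing by the count. The class names are invented, and a real harness would also warm up the JIT and guard against dead-code elimination.

        final class AllocBench {
            static class Plain {}
            static class Finalizable { @Override protected void finalize() {} }

            static final int N = 1_000_000;

            static long nsPerObject(Runnable body) {
                long start = System.nanoTime();
                body.run();
                return (System.nanoTime() - start) / N;   // rough ns per object
            }

            public static void main(String[] args) {
                // Warm-up runs and keeping references alive are omitted for brevity.
                System.out.println("plain:       "
                        + nsPerObject(() -> { for (int i = 0; i < N; i++) new Plain(); }) + " ns");
                System.out.println("finalizable: "
                        + nsPerObject(() -> { for (int i = 0; i < N; i++) new Finalizable(); }) + " ns");
            }
        }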

  • Is a PHP-only "cache engine" ever worth it?

    - by adsads
    I wrote a rather small skeleton for my web apps and thought I would also add a small cache to it. It is rather simple: if the current page exists as a file in the cache and the file isn't too old, read it out and exit instead of rebuilding the page; if the current page isn't cached or is outdated, rebuild the page and save it. However, here is the bad part: my performance tests against a page that pulls 40 relatively long posts from a MySQL query showed that, with the cache, it took even longer to handle a single request (1000 tests each). How can that happen? Should I just remove the raw-PHP cache entirely and rely on a dedicated cache such as memcached instead? (A minimal file-cache sketch follows below.)

    Read the article
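
    For comparison, a minimal file-cache sketch of the scheme described above; the cache path, key and TTL are invented for illustration. If even this shape loses to the query, the usual suspects are slow disk stats/reads, very large cached pages, or locking around the cache file, and an in-memory cache such as APC or memcached sidesteps the disk entirely.

        <?php
        // Minimal page-cache sketch: serve a stored copy if it is fresh,
        // otherwise build the page, send it, and store it for next time.
        $cacheFile = __DIR__ . '/cache/' . md5($_SERVER['REQUEST_URI']) . '.html';
        $ttl = 300; // seconds

        if (is_file($cacheFile) && (time() - filemtime($cacheFile)) < $ttl) {
            readfile($cacheFile);
            exit;
        }

        ob_start();
        // ... run the MySQL query and render the page as usual ...
        $html = ob_get_contents();
        ob_end_flush();                                // send the page to the client
        file_put_contents($cacheFile, $html, LOCK_EX); // and keep a copy for next time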

  • What are some Servlet Container pros and cons for a Solr installation?

    - by danieltalsky
    The SolrInstall wiki page lists seven different servers / servlet containers compatible with Solr: Tomcat, Jetty, Resin, JBoss, WebSphere, WebLogic and Glassfish. I'm sure that "best" is subjective, so I'll just say my criteria are: easiest to set up, best search performance with a smallish, infrequently-updated dataset, and the fewest gotchas. Jetty and Tomcat both have apt-get Solr packages, so they're clearly the frontrunners for some. Jetty is used in the demo install, but there are notes that Jetty has difficulties handling Unicode in some cases. Tomcat is a common choice, but my understanding is that it's not as lightweight and has a lot of features Solr doesn't need. Is it worth considering any of the others? Are there important pros and cons I should be aware of?

    Read the article

  • Is there any advantage to having more than 16GB of RAM on a Windows dev machine?

    - by Robert Kozak
    Assume a machine (dual quad-core Xeon (2.26GHz) with 24GB RAM) running Windows Server 2008 and Hyper-V. How many VMs can I expect to run at the same time with good performance? Is this overkill? Can you really have too much RAM? Assuming 2GB per VM, that's around 16GB for the VMs with 8GB left over for the main OS and Hyper-V. Sound about right? Edit: Tried to make the question sound less like bragging - that was never my intention. It's a hard question to write.

    Read the article

  • How can I implement lazy loading on a page with 500+ images?

    - by Fedor
    I basically have a booking-engine results page that must show 40 units, and for each unit there is one large image (the first thumbnail) plus some number of accompanying thumbnails. I've been using the jQuery Lazy Load plugin, but it's not thorough enough (I'm invoking it on DOM ready), and it doesn't really work in IE (50% of the clients use IE, so that's a big issue). What I think I really need to do is not output the image at all but a placeholder element such as a span, and modify my code so that when the user scrolls the placeholder into view it is turned into an image element: <span src="/images/foo.gif">. The booking engine relies on JS anyway, so I might be forced to ajaxify all the thumbnails and attach handlers on window scroll, etc., so the page stays usable and loads in an average time (2-3 seconds instead of 5-30s on high-speed DSL/cable). I'd appreciate any examples or ideas. (A small sketch follows below.)

    Read the article
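
    A small jQuery sketch of the placeholder-swap idea, assuming jQuery 1.7+; the class name, data attribute and placeholder file are invented. Using a real <img> with a tiny placeholder src plus a data-src attribute keeps the markup valid in IE, and the scroll handler should be throttled in production.

        // Markup per thumbnail (placeholder.gif is a 1x1 stand-in image):
        //   <img class="lazy" src="placeholder.gif" data-src="/images/foo.jpg" width="290" height="200" alt="">

        function loadVisibleThumbs() {
            var foldBottom = $(window).scrollTop() + $(window).height();
            $('img.lazy').each(function () {
                var $img = $(this);
                if ($img.offset().top < foldBottom + 200) {        // 200px look-ahead
                    $img.attr('src', $img.data('src')).removeClass('lazy');
                }
            });
        }

        $(window).on('scroll resize', loadVisibleThumbs);   // throttle this in real code
        $(loadVisibleThumbs);                               // and run once on DOM ready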

  • Ways to make your WCF services compatible with non-.NET consumers

    - by Mayo
    I'm working on adding a WCF services layer to my existing .NET application. This layer will be hosted in IIS and consumed by a variety of UIs, at least one of which will not use Microsoft technologies. I can make a web service in WCF that is consumed by my .NET application; however, I'm concerned about things that work in the .NET world but not with other technologies. For example, simply throwing an exception from my WCF service works fine in .NET, but according to this article one should approach exception handling with fault contracts to ensure compatibility with non-.NET consumers. The author labels this lack of foresight The Fallacy of the .NET-Only World. Does anyone have any high-level suggestions or links to articles that cover interoperability between WCF and non-.NET consumers? I realize I'm potentially working against the YAGNI principle; I'm only really looking to avoid things that will be incredibly difficult to overcome later when the developers of the non-.NET consumer report problems to me. (A fault-contract sketch follows below.)

    Read the article
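
    A sketch of the fault-contract pattern the linked article recommends; the service, operation and fault type names are invented for illustration. Declaring the fault on the operation advertises it in the WSDL, so non-.NET clients receive an ordinary SOAP fault with a typed detail element instead of an opaque .NET exception.

        using System.Runtime.Serialization;
        using System.ServiceModel;

        [DataContract]
        public class OrderFault
        {
            [DataMember] public string Message { get; set; }
            [DataMember] public int ErrorCode { get; set; }
        }

        [ServiceContract]
        public interface IOrderService
        {
            [OperationContract]
            [FaultContract(typeof(OrderFault))]   // advertised in the WSDL
            void PlaceOrder(int orderId);
        }

        public class OrderService : IOrderService
        {
            public void PlaceOrder(int orderId)
            {
                if (orderId <= 0)
                    throw new FaultException<OrderFault>(
                        new OrderFault { Message = "Invalid order id", ErrorCode = 100 },
                        new FaultReason("Validation failed"));
                // ... normal processing ...
            }
        }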

  • Reducing number of include files to reduce server load/speed up site?

    - by rein
    You can reduce the number of HTTP requests to speed up your site, for example with CSS sprite images. I'm wondering whether reducing the number of PHP includes/requires also speeds up your site or reduces server load. For example, I have an index.php with <?php include './file.php'; ?>. If I instead copy the code from file.php and paste it straight into index.php, removing the include, would that reduce the server load? It would make things less organized, but if it does reduce server load I might need to do it. For a small to medium-sized site I assume there might not be a difference, but how about for high-traffic sites? Thanks in advance.

    Read the article

  • How much faster is a database running in RAM?

    - by orokusaki
    I"m looking to run PostgreSQL in RAM for performance enhancement. The database isn't more than 1GB and shouldn't ever grow to more than 5GB. Is it worth doing? Are there any benchmarks out there? Is it buggy? My second major concern is: How easy is it to back things up when it's running purely in RAM. Is this just like using RAM as tier 1 HD, or is it much more complicated?

    Read the article

  • C# - How to convert 10 bytes to unsigned long

    - by Justin
    Hey, I have 10 bytes - 4 low-order bytes, 4 high-order bytes and 2 highest-order bytes - that I need to convert to an unsigned long. I've tried a couple of different methods but neither of them worked. Try #1:

        var id = BitConverter.ToUInt64(buffer, 0);

    Try #2:

        var id = GetID(buffer, 0);

        long GetID(byte[] buffer, int startIndex)
        {
            var lowOrderUnitId = BitConverter.ToUInt32(buffer, startIndex);
            var highOrderUnitId = BitConverter.ToUInt32(buffer, startIndex + 4);
            var highestOrderUnitId = BitConverter.ToUInt16(buffer, startIndex + 8);
            return lowOrderUnitId + (highOrderUnitId * 100000000) + (highestOrderUnitId * 10000000000000000);
        }

    Any help would be appreciated, thanks! (A hedged sketch follows below.)

    Read the article
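
    A sketch of one interpretation, not a verified fix: 4 + 4 + 2 bytes is 80 bits, which cannot fit losslessly into a 64-bit ulong, so this assumes the device really encodes three decimal-limited fields, as the multipliers in the question suggest. Keeping every term in ulong avoids the silent promotion to signed long that the original arithmetic performs.

        // Sketch: combine the three fields as decimal-scaled parts, all in ulong.
        // BitConverter reads the buffer in the machine's byte order (little-endian on x86).
        static ulong GetId(byte[] buffer, int startIndex)
        {
            ulong low     = BitConverter.ToUInt32(buffer, startIndex);
            ulong high    = BitConverter.ToUInt32(buffer, startIndex + 4);
            ulong highest = BitConverter.ToUInt16(buffer, startIndex + 8);

            return low
                 + high    * 100000000UL            // 10^8
                 + highest * 10000000000000000UL;   // 10^16
        }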

  • Are there any good books on writing commercial quality software?

    - by Andy
    Hey, my background has generally been new-technology demonstrators which, well... demonstrate the latest technology and how it can be of use to a client's company; they use it for internal demos etc. Now my career has shifted course a bit more towards actual products, in particular software that runs in locations like museums as interactive pieces. Although the technology demonstrators had to be well coded, there wasn't as much emphasis on that as there is in my current work, which has to just work, be highly configurable, probably multilingual, and run constantly without restarts. So my question is, now that I'm trying to raise my coding quality and write more commercial applications, are there any books that discuss the issues surrounding high-quality commercial software? I currently have a copy of Code Complete, 2nd Edition, which is excellent, but I'm wondering if there are any better, possibly more focused titles out there? Thanks a lot! Andy.

    Read the article

  • Giving Users an Option Between UDP & TCP?

    - by cam
    After studying the TCP/UDP differences all week, I just can't decide which to use. I have to send a large amount of constant sensor data while at the same time sending important data that can't be lost. That seemed like a perfect reason to use both, but then I read a paper (http://www.isoc.org/INET97/proceedings/F3/F3_1.HTM) that says using both causes packet/performance loss in the other. Is there any issue with letting the user choose which protocol to use (if I implement both on the server side) instead of choosing myself? Are there any disadvantages to this? The only other solution I came up with is to use UDP and, if there seems to be too great a loss of packets, switch to TCP (client-side).

    Read the article

  • Can I tell Borland C++ Builder to copy a file somewhere else after it is built?

    - by MrVimes
    I have two computers. One is intended to be left 'free' for high-performance activities (such as playing games); the other is my 'all-purpose' computer where I install all the apps I use for creating things, and so on. On the second computer I use CodeGear C++Builder to work on an app that I use on the first computer. If I have BCB compile directly onto computer 1 it is hopeless - the machine becomes unresponsive - whereas it compiles locally very quickly. So what I do is compile locally and then copy the exe to the other machine. Well, I'm all for streamlining processes, so I want a way to compile on PC 2 and use the result on PC 1 without any intermediate steps. Is it possible to have BCB do the compiling on PC 2, create a local exe file, and then copy the file to PC 1?

    Read the article

  • Microphone input

    - by George
    I'm trying to build a gadget that detects pistol shots using Android. It's part of a training aid for pistol shooters that shows how the shots are distributed in time, and I use an HTC Tattoo for testing. I use MediaRecorder and its getMaxAmplitude method to get the highest amplitude during the last 1/100 s, but it does not work as expected: speech gives me getMaxAmplitude values in the range from 0 to about 25000, while the pistol shots (or shouting!) only reach about 15000. With a sampling frequency of 8kHz there should be some samples at a considerably higher level. Does anyone know how these things work? Are there filters applied before the max amplitude is registered, and if so, are they hardware or software? Thanks, /George (A raw-audio sketch follows below.)

    Read the article
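
    One possibility is that the handset applies automatic gain control or band-limiting in front of MediaRecorder, so a hedged alternative is to read raw PCM with AudioRecord and find the peak yourself. In the sketch below, THRESHOLD, onShotDetected() and the 'recording' flag are placeholders to be supplied by the app, and the loop is assumed to run on a background thread.

        // Sketch of a capture loop: poll raw 16-bit PCM and flag loud samples.
        private volatile boolean recording = true;

        void captureLoop() {
            int sampleRate = 8000;
            int bufSize = AudioRecord.getMinBufferSize(sampleRate,
                    AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
            AudioRecord rec = new AudioRecord(MediaRecorder.AudioSource.MIC, sampleRate,
                    AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, bufSize);
            short[] pcm = new short[bufSize / 2];

            rec.startRecording();
            while (recording) {
                int n = rec.read(pcm, 0, pcm.length);
                int peak = 0;
                for (int i = 0; i < n; i++) {
                    peak = Math.max(peak, Math.abs(pcm[i]));
                }
                if (peak > THRESHOLD) {                  // tune THRESHOLD experimentally
                    onShotDetected(System.currentTimeMillis());
                }
            }
            rec.stop();
            rec.release();
        }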

  • Should I Split Tables Relevant to X Module Into a Different DB? (MySQL)

    - by Michael Robinson
    I've inherited a rather large and somewhat messy codebase, and have been tasked with making it faster, less noodly and generally better. Currently we use one big database to hold all data for all aspects of the site. As we need to plan for significant growth, I'm considering splitting the tables relevant to specific sections of the site into different databases, so if/when one gets too large for one server I can more easily migrate some user data to different MySQL servers while retaining overall integrity. I would still need to use joins on some tables across the new databases. Is this a normal thing to do? Would I incur a performance hit because of it?

    Read the article

  • Creating an Application to Save Arbitrary Application State

    - by ashes999
    See this SuperUser question. To summarize, VM software lets you save state of arbitrary applications (by saving the whole VM image). Would it be possible to write some software for Windows that allows you to save and reload arbitrary application state? If so (and presumably so), what would it entail? I would be looking to implement this, if possible, in a high-level language like C#. I presume if I used something else, I would need to dump memory registers (or maybe dump the entire application memory block) to a file somewhere and load it back somewhere to refresh state. So how do I build this thing?

    Read the article

  • efficient video format/codec for sparse & binary blob tracking

    - by user391339
    I am working on a blob-tracking project and have many high-definition videos that I would like to reduce in size for storage and downstream tracking/shape analysis. I want a lossless method that takes advantage of the black-and-white nature of the video as well as the fact that not much moves between individual frames. The videos are quite sparse, with 5 to 10 b&w blobs per frame occupying <30% of the frame in total, each blob moving <5-10% of the field of view between frames and not changing shape much over 2-3 frames. I will work in Python, Matlab, or LabVIEW for this project, and could use a batch utility if one is available. It may be worthwhile to export the files as compressed image stacks if a proper video format can't be found - what are the pros and cons of that? A video codec exploits correlations between neighbouring frames, so it should be more efficient, but not if the wrong one is chosen or if it is improperly configured. (A small frame-differencing sketch follows below.)

    Read the article
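
    If the Python route is acceptable, a minimal lossless sketch under the stated assumptions (binary frames of identical size, little inter-frame motion); the function names are illustrative. Each frame is packed to one bit per pixel, XORed against the previous frame so only changed pixels remain set, and the nearly empty result is handed to zlib. Decoding reverses the steps, so downstream tracking gets the exact original masks.

        import zlib
        import numpy as np

        def encode(frames):
            """frames: iterable of 2-D boolean arrays of identical shape."""
            prev = None
            for frame in frames:
                bits = np.packbits(frame.astype(np.uint8))        # 1 bit per pixel
                delta = bits if prev is None else np.bitwise_xor(bits, prev)
                yield zlib.compress(delta.tobytes(), 9)           # sparse deltas squeeze well
                prev = bits

        def decode(chunks, shape):
            prev = None
            for chunk in chunks:
                delta = np.frombuffer(zlib.decompress(chunk), dtype=np.uint8)
                bits = delta if prev is None else np.bitwise_xor(delta, prev)
                yield np.unpackbits(bits)[: shape[0] * shape[1]].reshape(shape).astype(bool)
                prev = bits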

  • What's the best way to implement one-dimensional collision detection?

    - by cyclotis04
    I'm writing a piece of simulation software and need an efficient way to test for collisions along a line. The simulation is of a train crossing several switches on a track: when a wheel comes within N inches of a switch, the switch turns on, and it turns off when the wheel leaves. Since all wheels are the same size and all switches are the same size, I can represent each as a single coordinate X along the track, and switch distances and wheel distances don't change relative to each other once set. This is fairly trivial to brute-force by putting the X coordinates in lists and traversing them, but I need to do it efficiently because it has to stay accurate even when the train is moving at high speed. There are plenty of tutorials on 2D collision detection, but I'm not sure of the best way to handle this one-dimensional case. (A sorted-search sketch follows below.)

    Read the article
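
    A minimal sketch of the usual 1D approach, written in Python for brevity since the question names no language; the positions and threshold are made-up numbers. Sort the switch coordinates once, then each wheel needs only a binary search to find every switch within N inches, keeping the per-tick cost logarithmic in the number of switches.

        import bisect

        def switches_in_range(switch_positions, wheel_x, n_inches):
            """switch_positions must be sorted; returns indices within n_inches of wheel_x."""
            lo = bisect.bisect_left(switch_positions, wheel_x - n_inches)
            hi = bisect.bisect_right(switch_positions, wheel_x + n_inches)
            return range(lo, hi)

        switches = sorted([12.0, 48.5, 96.0, 150.25])      # inches along the track
        for idx in switches_in_range(switches, 47.0, 3.0):
            print("switch", idx, "at", switches[idx], "inches is on")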

  • Faster way to compare two sets of points in N-dimensional space?

    - by Amit
    List1 contains a large number (~7^10) of N-dimensional points (N <= 10); List2 contains the same number of N-dimensional points or fewer. My task is this: for every point in List1, I want to find the point in List2 that is closest to it (by Euclidean distance) and then perform some operation on it. I had been doing it the simple, nested-loop way when List1 had no more than 50 points, but with 7^10 points this obviously takes a lot of time. What is the fastest way to do this? Would any concepts from computational geometry help? (A k-d tree sketch follows below.)

    Read the article
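
    If Python with SciPy is an option, a short sketch of the standard computational-geometry answer; the arrays here are made-up stand-ins for List1 and List2. Building a k-d tree over List2 and querying it with all of List1 replaces the O(|List1| x |List2|) nested loop with roughly O(|List1| log |List2|) work, which stays effective for N <= 10.

        import numpy as np
        from scipy.spatial import cKDTree

        # Made-up data standing in for List1 and List2 (rows are N-dimensional points).
        rng = np.random.default_rng(0)
        list1 = rng.random((100_000, 10))
        list2 = rng.random((50_000, 10))

        tree = cKDTree(list2)              # build once over List2
        dists, idx = tree.query(list1)     # nearest List2 neighbour for every List1 point

        # dists[i] is the Euclidean distance from list1[i] to its nearest point,
        # and list2[idx[i]] is that point, ready for the follow-up operation.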
