Search Results

Search found 20904 results on 837 pages for 'disk performance'.

Page 672/837

  • assignment vs std::swap and merging and keeping duplicates in separate object

    - by rubenvb
    Say I have two std::set<std::string>s. The first one, old_options, needs to be merged with additional options contained in new_options. I can't just use std::merge (well, I do, but not only that) because I also check for duplicates and warn the user accordingly. To that end, I have void merge_options( set<string> &old_options, const set<string> &new_options ) { // find duplicates and create merged_options, a string set containing the merged options // handle duplicates the way I want to // ... old_options = merged_options; } Is it better to use std::swap( merged_options, old_options ); or the assignment I have? Is there a better way to filter duplicates and return the merged set than consecutive calls to std::set_intersection and std::set_union to detect dupes and merge the sets? I know it's slower than one traversal doing both at once, but these sets are small (performance is not critical) and I trust the Standard more than I trust myself.
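
    The question is about C++, but a minimal sketch of the single-traversal idea (illustrated in Python, with hypothetical option names) may help show how duplicate detection and merging can happen in one pass rather than via separate intersection and union calls:

        # Sketch only: detect duplicates and merge in a single pass.
        def merge_options(old_options, new_options):
            duplicates = set()
            for option in new_options:
                if option in old_options:
                    duplicates.add(option)   # already present: remember it for the warning
                else:
                    old_options.add(option)  # genuinely new: merge it in
            return duplicates                # caller can warn the user about these

        old = {"-Wall", "-O2"}
        new = {"-O2", "-g"}
        dupes = merge_options(old, new)
        print(sorted(old), sorted(dupes))    # ['-O2', '-Wall', '-g'] ['-O2']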

    Read the article

  • Are you using a Virtual Machine as your primary development environment?

    - by Click Ok
    Recently I purchased a notebook that came with Windows Home Basic (which doesn't include ASP.NET/IIS). I thought about upgrading to a Windows version with ASP.NET/IIS, but then I considered another possibility: I have a hard disk enclosure with a 360 GB drive. I could create a virtual machine with Windows Ultimate (also installing ASP.NET, IIS and Visual Studio 2008) on this external drive; then I could access my "development environment" on any computer I work on (my desktop machine and my notebook). But I'm worried about the performance. I don't have experience working in virtual machines (I only use them for quick compatibility tests)... Are you using a virtual machine as your primary development environment? What are your findings? ==================== Thanks for your answers! They really did help me! I would also like to know about portability, i.e. will the virtual machine I created on my laptop work on the desktop? Will I need to re-activate Windows?

    Read the article

  • NetBeans C++ not finding standard libraries (Macintosh)

    - by Grue
    Hello everyone! I am trying to use NetBeans 6.7 (on a Mac) to create C++ applications. I started out with the standard "Hello World", just to test whether everything was working correctly. On the first try, std and the standard headers could not be found. So I tried reinstalling the developer tools from my Mac OS X disk. After that NetBeans updated its C++ compiler info, but it still cannot find std or the standard headers. Odder still, Xcode seems to work with C++ perfectly fine. Any help fixing this would be greatly appreciated.

    Read the article

  • Which is the best API/library to use when accessing a webcam in .NET?

    - by Doctor Jones
    Which is the best API to use when accessing a webcam in .NET? (I know they can be webcam-specific; I am willing to buy a new webcam if it means better results.) I want to write a desktop application that will take video from a webcam and store it in MPEG-4 formats (DivX, Xvid, etc.). I would also like to access bitmap stills from the device so I can do image comparison between frames. I have tried various libraries, and none have really been a great fit (some have performance issues such as very inconsistent framerates, some have image quality limitations, some just crash for seemingly no reason). I want to get high-quality video (as high as I can get) and a decent framerate. My webcam is more than up to the job, and I was hoping there would be a nice managed .NET library around that would help my cause. Are webcam APIs all just incredibly bad?

    Read the article

  • NSArray vs. SQLite for Complex Queries on iPhone

    - by GingerBreadMane
    Developing for iPhone, I have a collection of points that I need to make complex queries on. For example: "How many points have a y-coordinate of 10?" and "Return all points with an x-coordinate between 3 and 5 and a y-coordinate of 7". Currently, I am just cycling through each element of an NSArray and checking whether it matches my query. It's a pain to write the queries, though; SQLite would be much nicer. I'm not sure which would be more efficient, since a SQLite database resides on disk and not in memory (to my understanding). Would SQLite be as efficient or more efficient here? Or is there a better way to do it, other than these methods, that I haven't thought of? I would need to perform multiple queries against multiple sets of points thousands of times, so performance is important.
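
    Worth noting when weighing the disk-versus-memory concern: SQLite can also run entirely in memory. A minimal sketch (Python's sqlite3 used purely for illustration; the schema and values are hypothetical) of the kind of indexed queries described above:

        import sqlite3

        # Sketch: an in-memory SQLite table of points, indexed so that both
        # example queries from the question avoid a full table scan.
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE points (x REAL, y REAL)")
        conn.execute("CREATE INDEX idx_points_yx ON points (y, x)")
        conn.executemany("INSERT INTO points VALUES (?, ?)",
                         [(3.5, 7), (4.2, 7), (9.0, 10), (1.1, 10)])

        # "How many points have a y-coordinate of 10?"
        count, = conn.execute("SELECT COUNT(*) FROM points WHERE y = 10").fetchone()

        # "Return all points with an x-coordinate between 3 and 5 and a y-coordinate of 7."
        rows = conn.execute(
            "SELECT x, y FROM points WHERE y = 7 AND x BETWEEN 3 AND 5").fetchall()
        print(count, rows)   # e.g. 2 [(3.5, 7.0), (4.2, 7.0)]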

    Read the article

  • What about the SQL transaction log?

    - by Michel
    Hi, I always thought that the SQL transaction log keeps track of all the transactions done in the database, so it could help recover the database file in case of an unexpected power-down or something like that. So I assumed that, in normal usage, once the data is committed and written to disk, the log is cleared, because all the data is nice and safe in the .mdf file. Watching the .ldf file grow and reading up a bit, I understand that this is not the case: it will keep growing until you shrink the log, and only at that point are all the committed transactions cleared and the log file shrunk. I found some stored procedures that are supposed to do this, but I also found the claim that you first have to back up the database. That last step doesn't make sense to me, so can anyone tell me whether it is correct and, if so, why?

    Read the article

  • Implementing Tagging using Core Data on the iPhone

    - by Jonathan Penn
    I have an application that uses Core Data and I'm trying to figure out the best way to implement tagging and filtering by tag. For my purposes, if I were doing this in raw SQLite I would only need three tables: tags, item_tags and, of course, my items table. Filtering would then be as simple as joining the three tables and keeping only the items related to the given tags. Quite straightforward. But is there a way to do this in Core Data while still using NSFetchedResultsController? It doesn't seem that NSPredicate gives you the ability to filter through joins, and NSPredicates aren't full SQL anyway, so I'm probably barking up the wrong tree there. I'm trying to avoid reimplementing my app in SQLite without Core Data, since I'm enjoying the performance Core Data gives me in other areas. Yes, I did consider (and built a test implementation) diving into the raw SQLite that Core Data generates, but that's not future-proof and I want to avoid it too. Has anyone else tried to tackle tagging/filtering with Core Data in a UITableView with NSFetchedResultsController?
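
    For reference, the raw-SQLite layout mentioned above really is just a three-table join. A minimal sketch (Python's sqlite3, hypothetical schema and data), only to show the filter a Core Data predicate would need to express:

        import sqlite3

        # Sketch: items, tags, and an item_tags join table; filter items by tag.
        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT);
            CREATE TABLE tags (id INTEGER PRIMARY KEY, name TEXT);
            CREATE TABLE item_tags (item_id INTEGER, tag_id INTEGER);
            INSERT INTO items VALUES (1, 'photo');
            INSERT INTO items VALUES (2, 'note');
            INSERT INTO tags VALUES (1, 'travel');
            INSERT INTO tags VALUES (2, 'work');
            INSERT INTO item_tags VALUES (1, 1);
            INSERT INTO item_tags VALUES (2, 2);
        """)
        rows = conn.execute("""
            SELECT items.name FROM items
            JOIN item_tags ON item_tags.item_id = items.id
            JOIN tags ON tags.id = item_tags.tag_id
            WHERE tags.name = ?""", ("travel",)).fetchall()
        print(rows)   # [('photo',)]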

    Read the article

  • What are some "mental steps" a developer must take to begin moving from SQL to NO-SQL (CouchDB, Fath

    - by Byron Sommardahl
    I have my mind firmly wrapped around relational databases and how to code efficiently against them. Most of my experience is with MySQL and SQL. I like many of the things I'm hearing about document-based databases, especially when someone in a recent podcast mentioned huge performance benefits. So, if I'm going to go down that road, what are some of the mental steps I must take to shift from SQL to NoSQL? If it makes any difference in your answer, I'm a C# developer primarily (today, anyhow). I'm used to ORMs like EF and LINQ to SQL. Before ORMs, I rolled my own objects with generics and data readers. Maybe that matters, maybe it doesn't. Here are some more specific questions: How do I need to think about joins? How will I query without a SELECT statement? What happens to my existing stored objects when I add a property in my code? (Feel free to add questions of your own here.)

    Read the article

  • GetHashCode Method reliability in Silverlight/WP7.1

    - by abhinav
    I am attempting to hash, and keep the hash of, an object of type IEnumerable<anotherobject> which has about 1,000 entries. I'll be generating another such object, but this time I'd like to check for any changes in the values of the entries using the hash codes of the two objects. Basically, I was wondering whether GetHashCode() is apt for this, both from a performance perspective and from a reliability perspective (always getting different values for different object values and the same value for equal object values). If I have to override it, what would be a good way to do so? Does it always depend on the type of anotherobject and on what Equals means when comparing two anotherobjects? Is there a generic way to do it? This concern arises because my object can be quite big.
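
    A sketch of the general idea, in Python purely for illustration: rather than relying on a 32-bit hash code (where equal hashes do not guarantee equal values), compute a wide, order-sensitive fingerprint over the entries' values; the C# analogue would be combining per-entry hashes or hashing a canonical serialization of the collection:

        import hashlib

        # Sketch: a value fingerprint for a sequence of entries. It is
        # order-sensitive, depends only on the entries' values, and a
        # 256-bit digest makes accidental collisions negligible.
        def fingerprint(entries):
            h = hashlib.sha256()
            for entry in entries:
                h.update(repr(entry).encode("utf-8"))  # canonical per-entry form
                h.update(b"\x00")                      # separator between entries
            return h.hexdigest()

        old = [("name", 1), ("name", 2)]
        new = [("name", 1), ("name", 3)]
        print(fingerprint(old) == fingerprint(new))    # False: a value changed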

    Read the article

  • Multi-value MySQL inserts using HibernateTemplate

    - by Langali
    I am using Spring's HibernateTemplate and need to insert hundreds of records into a MySQL database every second. I'm not sure what the most performant way of doing it is, but I am trying to see how multi-value MySQL inserts do under Hibernate. String query = "insert into user(age, name, birth_date) values(24, 'Joe', '2010-05-19 14:33:14'), (25, 'Joe1', '2010-05-19 14:33:14')"; getHibernateTemplate().execute(new HibernateCallback(){ public Object doInHibernate(Session session) throws HibernateException, SQLException { return session.createSQLQuery(query).executeUpdate(); } }); But I get this error: 'could not execute native bulk manipulation query. Please check your query...' Any idea how I can use a multi-value MySQL insert with Hibernate? Or is my query incorrect? Any other ways I can improve the performance? I did try the saveOrUpdateAll() method, and that wasn't good enough!

    Read the article

  • Designing a table to store EXIF data

    - by rafale
    I'm looking to get the best performance out of querying a table containing EXIF data. The queries in question will only search the EXIF data for specified strings and return the row index on a match. With that said, would it be better to store the EXIF data in a table with separate columns for each of the tags, or would storing all of the tags in a single column as one long delimited string suit me just as well? There are around 115 EXIF tags I'll be storing, and each record would be around 1,500 to 2,000 characters long if concatenated into a single string.

    Read the article

  • Is it possible to find out what Flash Builder is doing during compilation?

    - by justkevin
    I've found that Flash Builder 4 (formerly Flex Builder) has trouble working with large projects. After a certain point, builds seem to take longer and longer. I've tried many different ways of improving build time, including: moving embedded resources into externally linked projects; using -incremental; tweaking the .ini JVM settings, including memory and -server; turning off automatic build (I'd prefer not to have to do this, because one of the main reasons for using an IDE is to be told about errors as you make them); and deleting the project and re-checking it out from the repository. While some of these may help a bit, builds are still annoyingly slow. I feel that if I knew what was taking so long, I could refactor my projects to build faster. Is there some setting that tells Flash Builder to show me which parts of the build process take so much time?

    Read the article

  • Implications of Fulltext Search over many columns

    - by Alex
    Hello, I have a really wide table which includes separate columns for billing address, shipping address, primary address, names, aliases, etc. (I can't normalize this table further, and that's not the question here anyway.) I'm implementing SQL Server full-text search, and I'm wondering whether I should limit the search to just the primary fields (primary address and names, for example), or whether I can extend it across all columns without incurring too much of a performance or memory penalty. I've done some basic testing with 10,000 sample rows and it's quite fast, but I don't have much experience with full-text indexing, especially its dictionary internals, so I don't know if the index is going to grow over time, or if there is anything else to consider. Thoughts?

    Read the article

  • Is there a more correct type for passing in the file path and file name to a method

    - by Rihan Meij
    Hi. What I mean by this question is: when you need to store or pass a URL around, using a string is probably bad practice, and a better approach would be to use a URI type. However, it is easy to make things more complex and bloated than they need to be. So if I am going to be writing to a file on disk, do I pass a string as the file name and path, or is there a type better suited to the requirement? The code below seems clunky and error-prone: I would also need to do a whole lot of checking whether it is a valid file name, whether the string contains data, and the list goes on. private void SaveFile(string fileNameAndPath) { //The normal stuff to save the file }

    Read the article

  • Conceptually, how does replay work in a game?

    - by SnOrfus
    I was kind of curious as to how replay might be implemented in a game. Initially, I thought there would just be a command list of every player/AI action taken in the game, which would then be 're-played', letting the engine render as usual. However, I have looked at replays in FPS/RTS games, and upon careful inspection even things like particles and graphical/audible glitches are consistent (and those glitches are generally *in*consistent). So how does this happen? In fixed-camera-angle games I thought it might just write every frame of the whole scene to a stream that gets stored and then played back, but that doesn't seem like enough for games that allow you to pause and move the camera around. You'd have to store the locations of everything in the scene at all points in time (no?). So for things like particles, that's a lot of data to push, which seems like a significant drain on the game's performance while playing.
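
    A minimal sketch of the command-list idea (in Python, with a hypothetical toy game state): record the RNG seed plus every timestamped input, then re-run the same deterministic simulation; even "random" effects such as particle jitter come out identical, so nothing needs to be stored per frame:

        import random

        # Sketch: deterministic replay from a seed plus recorded (tick, input) pairs.
        def simulate(seed, inputs, ticks):
            rng = random.Random(seed)
            inputs_by_tick = dict(inputs)
            x = 0.0
            states = []
            for tick in range(ticks):
                x += inputs_by_tick.get(tick, 0)   # apply the recorded player input
                jitter = rng.uniform(-0.1, 0.1)    # "random" effect, reproduced via the seed
                states.append((tick, x + jitter))
            return states

        recorded = [(0, 1), (3, -1), (7, 2)]       # what the recorder captured live
        live = simulate(seed=42, inputs=recorded, ticks=10)
        replay = simulate(seed=42, inputs=recorded, ticks=10)
        print(live == replay)                      # True: identical, frame for frame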

    Read the article

  • dotTrace can't create deploy folder on Windows Azure Web Sites (WAWS)

    - by Orhan Maden
    I get the error message 'Can't create deploy folder.' when I try to profile a remote website on WAWS. Actions taken: downloaded and installed dotTrace Profiler 5.5 from the JetBrains website; downloaded dotTrace.Performance.Remote version 5.5.0 from NuGet; published the website to WAWS via Visual Studio 2013; started the dotTrace application as Administrator; connected to the remote _https://subdomain.azurwebsites.net/AgentService.asmx (see image: http://1drv.ms/1nF5Cyh); selected the w3wp process and pressed Run; got the error message 'Can't create deploy folder' (see image: http://1drv.ms/U5h35A). I'm running dotTrace in trial mode at the moment. Swift help is much appreciated. Orhan

    Read the article

  • Dealing with Windows line-endings in Python

    - by Adam Nelson
    I've got a 700MB XML file coming from a Windows provider. As one might expect, the line endings are '\r\n' (or ^M in vi). What is the most efficient way to deal with this situation, aside from getting the supplier to send over '\n' :-) Options I've considered: use os.linesep; use rstrip() (requiring opening the file ... which seems crazy); universal newline support, which is not standard on my Mac Snow Leopard, so it isn't an option. I'm open to anything that requires Python 2.6+, but it needs to work on Snow Leopard and Ubuntu 9.10 with minimal external requirements. I don't mind a small performance penalty, but I am looking for the standard best way to deal with this.
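
    A minimal sketch of one way to do it (hypothetical file names): stream the file in chunks, rewrite '\r\n' as '\n', and take care of a pair split across a chunk boundary, so the 700MB file never has to fit in memory. It avoids universal-newline support and works on Python 2.6+ and 3.x:

        # Sketch: rewrite Windows line endings as '\n' without loading the whole file.
        def normalize_newlines(src_path, dst_path, chunk_size=1024 * 1024):
            with open(src_path, "rb") as src:
                with open(dst_path, "wb") as dst:
                    pending_cr = False                      # trailing '\r' from the previous chunk
                    while True:
                        chunk = src.read(chunk_size)
                        if not chunk:
                            if pending_cr:                  # lone '\r' at end of file: keep it
                                dst.write(b"\r")
                            break
                        if pending_cr:
                            chunk = b"\r" + chunk
                        pending_cr = chunk.endswith(b"\r")  # might be half of a split '\r\n'
                        if pending_cr:
                            chunk = chunk[:-1]
                        dst.write(chunk.replace(b"\r\n", b"\n"))

        # normalize_newlines("feed.xml", "feed_unix.xml")   # hypothetical file names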

    Read the article

  • Is there any significant benefit to reading string directly from control instead of moving it into a

    - by Kevin
    sqlInsertFrame.Parameters.AddWithValue("@UserName", txtUserName.Text); Given the code above... if I don't have any need to move the textbox data into a string variable, is it best to read the data directly from the control? In terms of performance, it would seem smartest not to create any unnecessary variables that use up memory when they're not needed. Or is this a situation where that's technically true but doesn't yield any real-world results because of the size of the data in question? Forgive me, I know this is a very basic question.

    Read the article

  • Running multiple JVMs for different applications on the same machine

    - by Rajesh
    We are getting frequent out-of-memory errors on our dev machines, which run WebSphere, Eclipse, SoapUI and Maven. Our server goes down with these out-of-memory errors when we restart our applications in WebSphere two or three times. We have already increased the virtual memory setting in WebSphere to 1 GB. So what I did was copy the JRE we use into the Eclipse and Maven folders so that each of them uses an individual JVM. But WebSphere behaves the same: two or three restarts and out-of-memory errors. Is there any way of making Eclipse and Maven use JVMs other than WebSphere's?

    Read the article

  • Can I stop ASP.NET from returning 'The resource cannot be found.'?

    - by mackenir
    I have installed an HttpModule into my web app that will handle all requests with a given file extension. I want ASP.NET to handle all requests with the extension, regardless of whether there is an underlying file on disk. So, when I added the extension to the 'Application Extension Mappings', I unchecked the 'Verify that file exists' checkbox. However, this just transfers the file check from IIS to ASP.NET, so I just get a different error page when requesting URLs with the file extension. Is there a way to preempt this ASP.NET file checking and intercept the requests?

    Read the article

  • How do copyright permission systems for content hosting sites work?

    - by zebraman
    I am wondering about subscription sites that host content such as recorded performances from concerts. I'm sure there is a tangle of copyright permissions that must be granted for these video/audio files to be hosted. For example, if a band plays a cover of another band's song, permission must be obtained not only from the band that performed, but from the band that owns the song, and perhaps even from the venue that hosted the performance, to record the video and post the content. I am curious how websites that host content like this work. How might an automated copyright system keep track of who owns certain performances and obtain permission from those owners to record and post their content?

    Read the article

  • Playing an incoming video stream

    - by mawia
    Hi all, I am writing an application which is a kind of video streamer. The client receives a video stream over a UDP socket, and as the stream comes in I want to play it simultaneously. This is different from playing a local video file on your hard disk, where it can be as simple as launching the file with system("vlc filename"). Here many issues are involved: there can be delays in receiving, and the player will have to wait for the incoming data. I have come to know about using VLC to play a video stream. Can you please elaborate the steps for playing the stream using VLC? I am implementing my application in C++. EDIT: Can somebody give me some idea of the VLC API that can be used to stream a given video to a particular destination, and to receive that stream at the other end and play it? With regards, Mawia
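
    The question is about C++, but as a sketch of how little VLC needs in order to play a network stream, here it is via the python-vlc bindings (assumption: python-vlc is installed; the port number is hypothetical). The corresponding libVLC C API calls are libvlc_new, libvlc_media_new_location and libvlc_media_player_play, and VLC does its own buffering while waiting for incoming data:

        import time
        import vlc   # python-vlc bindings over libVLC (assumed installed)

        # Sketch: hand VLC the MRL of the incoming stream and let it buffer and play.
        player = vlc.MediaPlayer("udp://@:5004")   # listen for a UDP stream on port 5004
        player.play()

        time.sleep(60)                             # keep the process alive while it plays
        player.stop()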

    Read the article

  • Managing Data Prefetching and Dependencies with .NET Typed Datasets

    - by Derek Morrison
    I'm using .NET typed datasets on a project, and I often get into situations where I prefetch data from several tables into a dataset and then pass that dataset to several methods for processing. It seems cleaner to let each method decide exactly which data it needs and then load the data itself. However, several of the methods work with the same data, and I want the performance benefit of loading data in the beginning only once. My problem is that I don't know of a good way or pattern to use for managing dependencies (I want to be sure I load all the data that I'm going to need for each class/method that will use the dataset). Currently, I just end up looking through the code for the various classes that will use the dataset to make sure I'm loading everything appropriately. What are good approaches or patterns to use in this situation? Am I doing something fundamentally wrong? Although I'm using typed datasets, this seems like it would be a common situation where prefetching data is used. Thanks!

    Read the article

  • Keeping track of dirty blocks on a block device

    - by mikeY
    I'm looking for a way to keep track of which blocks on a block device are modified after a point in time. What I eventually want to use this for is keeping two 2 TB disks in sync, one of which only comes online (connected through USB) once a month. Without knowing which blocks have been modified, I have to go through the whole 2 TB every time. I'm using a recent GNU/Linux OS and have C and Python experience. I'm hoping to avoid writing kernel-level code, as I don't have any experience in that area whatsoever. My current theory is that there should be some hook somewhere through which my code can get called when a disk flush is performed. Any ideas?
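
    Short of a kernel-level hook, one userspace fallback is to keep a per-block hash map and diff it between runs; it still reads the whole local device, but it pinpoints exactly which blocks need to be copied to the USB disk. A minimal sketch of the bookkeeping only (hypothetical paths, 4 MiB blocks):

        import hashlib
        import json

        BLOCK_SIZE = 4 * 1024 * 1024   # 4 MiB blocks

        # Sketch: hash a device block by block.
        def block_hashes(device_path):
            hashes = {}
            with open(device_path, "rb") as dev:
                index = 0
                while True:
                    block = dev.read(BLOCK_SIZE)
                    if not block:
                        break
                    hashes[index] = hashlib.sha1(block).hexdigest()
                    index += 1
            return hashes

        # Compare against the hash map saved on the previous run.
        def changed_blocks(old_map_path, device_path):
            with open(old_map_path) as f:
                old = json.load(f)               # JSON keys come back as strings
            new = block_hashes(device_path)
            return [i for i, digest in new.items() if old.get(str(i)) != digest]

        # json.dump(block_hashes("/dev/sdb"), open("sdb.hashes.json", "w"))  # first run
        # changed = changed_blocks("sdb.hashes.json", "/dev/sdb")            # later runs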

    Read the article

  • CakePHP repeats the same queries

    - by Rytis
    I have a model structure: Category hasMany Product hasMany Stockitem belongsTo Warehouse, Manufacturer. I fetch data with this code, using Containable to be able to filter on the deeper associated models: $this->Category->find('all', array( 'conditions' => array('Category.id' => $category_id), 'contain' => array( 'Product' => array( 'Stockitem' => array( 'conditions' => array('Stockitem.warehouse_id' => $warehouse_id), 'Warehouse', 'Manufacturer', ) ) ), ) ); The data structure is returned just fine; however, I get multiple repeating queries like the one below, sometimes hundreds in a row, depending on the dataset: SELECT `Warehouse`.`id`, `Warehouse`.`title` FROM `beta_warehouses` AS `Warehouse` WHERE `Warehouse`.`id` = 2 Basically, when building the data structure, Cake is fetching data from MySQL over and over again, once for each row. We have datasets of several thousand rows, and I have a feeling this is going to hurt performance. Is it possible to make it cache results and not repeat the same queries?

    Read the article
