Search Results

Search found 20904 results on 837 pages for 'disk performance'.

Page 802/837

  • CultureManager issue

    - by Serge
    I have a bug I don't understand. While the following works fine:

    ```csharp
    Resources.Classes.AFieldFormula.DirectFieldFormula
    ```

    this one throws an exception:

    ```csharp
    new ResourceManager(typeof(Resources.Classes.AFieldFormula)).GetString("DirectFieldFormula");
    ```

    > Could not find any resources appropriate for the specified culture or the neutral culture. Make sure "Resources.Classes.AFieldFormula.resources" was correctly embedded or linked into assembly "MygLogWeb" at compile time, or that all the satellite assemblies required are loadable and fully signed.

    How come? The resource designer.cs file:

    ```csharp
    //------------------------------------------------------------------------------
    // <auto-generated>
    //     This code was generated by a tool.
    //     Runtime Version:4.0.30319.18408
    //
    //     Changes to this file may cause incorrect behavior and will be lost if
    //     the code is regenerated.
    // </auto-generated>
    //------------------------------------------------------------------------------

    namespace Resources.Classes {
        using System;

        /// <summary>
        ///   A strongly-typed resource class, for looking up localized strings, etc.
        /// </summary>
        // This class was auto-generated by the StronglyTypedResourceBuilder
        // class via a tool like ResGen or Visual Studio.
        // To add or remove a member, edit your .ResX file then rerun ResGen
        // with the /str option, or rebuild your VS project.
        [global::System.CodeDom.Compiler.GeneratedCodeAttribute("System.Resources.Tools.StronglyTypedResourceBuilder", "4.0.0.0")]
        [global::System.Diagnostics.DebuggerNonUserCodeAttribute()]
        [global::System.Runtime.CompilerServices.CompilerGeneratedAttribute()]
        public class AFieldFormula {

            private static global::System.Resources.ResourceManager resourceMan;

            private static global::System.Globalization.CultureInfo resourceCulture;

            [global::System.Diagnostics.CodeAnalysis.SuppressMessageAttribute("Microsoft.Performance", "CA1811:AvoidUncalledPrivateCode")]
            internal AFieldFormula() {
            }

            /// <summary>
            ///   Returns the cached ResourceManager instance used by this class.
            /// </summary>
            [global::System.ComponentModel.EditorBrowsableAttribute(global::System.ComponentModel.EditorBrowsableState.Advanced)]
            public static global::System.Resources.ResourceManager ResourceManager {
                get {
                    if (object.ReferenceEquals(resourceMan, null)) {
                        global::System.Resources.ResourceManager temp = new global::System.Resources.ResourceManager("MygLogWeb.Classes.AFieldFormula", typeof(AFieldFormula).Assembly);
                        resourceMan = temp;
                    }
                    return resourceMan;
                }
            }

            /// <summary>
            ///   Overrides the current thread's CurrentUICulture property for all
            ///   resource lookups using this strongly typed resource class.
            /// </summary>
            [global::System.ComponentModel.EditorBrowsableAttribute(global::System.ComponentModel.EditorBrowsableState.Advanced)]
            public static global::System.Globalization.CultureInfo Culture {
                get { return resourceCulture; }
                set { resourceCulture = value; }
            }

            /// <summary>
            ///   Looks up a localized string similar to Direct field.
            /// </summary>
            public static string DirectFieldFormula {
                get { return ResourceManager.GetString("DirectFieldFormula", resourceCulture); }
            }
        }
    }
    ```
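    A hedged note on what is likely going wrong, based only on the designer file above: the generated class looks resources up under the base name "MygLogWeb.Classes.AFieldFormula", but the ResourceManager(Type) constructor uses the type's full name — "Resources.Classes.AFieldFormula" — as the base name, which matches nothing embedded in the assembly. A minimal sketch of two ways around the mismatch:

    ```csharp
    // Either reuse the ResourceManager the designer already caches:
    string s1 = Resources.Classes.AFieldFormula.ResourceManager
                    .GetString("DirectFieldFormula");

    // ...or spell out the embedded base name explicitly:
    var rm = new System.Resources.ResourceManager(
        "MygLogWeb.Classes.AFieldFormula",                  // base name as embedded
        typeof(Resources.Classes.AFieldFormula).Assembly);
    string s2 = rm.GetString("DirectFieldFormula");
    ```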

    Read the article

  • Duplicate Blob field with foreach

    - by JGSilva
    I have some blob fields where I have uploaded some images. The images display correctly, and I can open them without problems in Photoshop, for example. I created a button that lets the user duplicate a product, and everything works fine except duplicating the image entries, where I get errors such as 1064 (and others I can't remember, because I've been working on this for three days). Since the original product has three or more images, I select them and loop over them with a foreach. What I noticed when printing the blob is that at the end it "eats" the next array item, as if it had no terminator: the next item ends up inside the UTF-8 garbage in the printout. That gave me a clue. My next approach was to save each image somewhere and re-upload it, but only the first one works. When I download a saved image it opens normally, so it is not a saving-to-disk problem. When I print $result, the same thing happens: the image seems to swallow the next one. Here is the code. Note: I added the [$count] indexes to rule out an array-overwrite error, and I even tried clearing the variables at the beginning of the foreach:

    ```php
    $count = 0;
    foreach ($original_image as $key => $val) {
        $count++;
        //$arquivo = '';
        //$image = '';
        //$file = '';
        //$this->image = '';
        //$return = '';
        $arquivo[$count] = $val['pi_id'] . '.' . $val['pi_type'];
        $image[$count] = $caminho_url . $arquivo[$count];
        if (file_exists($image[$count])) {
            $this->image = Image::factory($image[$count]);
            $this->image->save($image[$count]);
            $file[$count] = mysql_real_escape_string(addslashes(fread(fopen($image[$count], "r"), filesize($image[$count]))));
            $return[$count] = Product::add_image($id_prod, $file[$count], $val['pi_type'], $val['pi_main']);
        } else {
            die('no');
        }
    }
    ```
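    A hedged aside, not from the original post: escaping the blob twice (addslashes() on top of mysql_real_escape_string()) corrupts binary data, and unescaped bytes in an inline INSERT are a classic cause of MySQL error 1064. A parameterised insert sidesteps escaping entirely. A minimal sketch — the table and column names here are assumptions for illustration:

    ```php
    <?php
    // Hedged sketch: bind the raw bytes instead of escaping them into the SQL text.
    $pdo = new PDO('mysql:host=localhost;dbname=shop', 'user', 'pass');
    $stmt = $pdo->prepare(
        'INSERT INTO product_images (product_id, image, type, main) VALUES (?, ?, ?, ?)'
    );
    $blob = file_get_contents($image[$count]);   // read the whole file into memory
    $stmt->bindValue(1, $id_prod, PDO::PARAM_INT);
    $stmt->bindValue(2, $blob, PDO::PARAM_LOB);  // binary data passed through as-is
    $stmt->bindValue(3, $val['pi_type']);
    $stmt->bindValue(4, $val['pi_main']);
    $stmt->execute();
    ```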

    Read the article

  • ANSI C as core of a C# project? Is this possible?

    - by Nektarios
    I'm writing a NON-GUI app which I want to be cross-platform between OS X and Windows. I'm looking at the following architecture, but I don't know if it will work on the Windows side:

    (Platform-specific entry point) → ANSI C main loop → ANSI C model code doing data processing/logic → (Platform-specific helpers)

    So the core I'm planning to write in regular ANSI C, because (a) it should be platform independent, (b) I'm extremely comfortable with C, and (c) it can do the job and do it well. The platform-specific entry point can be written in whatever is necessary to get the job done; it's a small amount of code and doesn't matter to me. The platform-specific helpers are the sticky part. This is stuff like parsing XML, accessing databases, graphics-toolkit work — things that aren't easy in C, and that modern languages/frameworks give you for free. On OS X this code will be written in Objective-C against Cocoa. On Windows I'm thinking my best bet is C#. So on Windows my architecture (simplified) looks like: (C# or C?) → ANSI C → C#. Is this possible? Some thoughts/suggestions so far:

    1. Compile my C core as a .dll. This is fine, but there seems to be no way to call my C# helpers unless I can somehow get function pointers and pass them to my core — and that seems unlikely.
    2. Compile a C .exe and a C# .exe and have them talk via shared memory or some kind of IPC. I'm not entirely opposed to this, but it obviously introduces a lot of complexity, so it doesn't seem ideal.
    3. Instead of C#, use C++. That gets me some nice data management and nice helper code, I can mix it with C pretty easily, and the work would probably port easily to Linux. But I really don't like C++, and I don't want this to turn into a third-party-library fest. Not that it's a huge deal, but it's 2010: anything for basic data management should be built in. And targeting Linux is really not a priority.

    Note that no "total" alternatives are OK, as suggested in other similar questions on SO I've seen — Java, REALbasic, Mono. This is an extremely performance-intensive application doing soft realtime for game/simulation purposes; I need C and friends here to do it right (maybe you don't, but I do).
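    Option 1 is more workable than it sounds: the CLR can hand unmanaged code a function pointer to a managed method by marshalling a delegate. A minimal sketch under assumptions — the core.dll exports and their signatures are made up for illustration:

    ```csharp
    using System;
    using System.Runtime.InteropServices;

    class Interop
    {
        // Matches a hypothetical C typedef: typedef void (*parse_xml_fn)(const char *path);
        [UnmanagedFunctionPointer(CallingConvention.Cdecl)]
        delegate void ParseXmlFn(string path);

        // Hypothetical exports from the ANSI C core compiled as core.dll.
        [DllImport("core.dll", CallingConvention = CallingConvention.Cdecl)]
        static extern void core_set_xml_helper(ParseXmlFn fn);

        [DllImport("core.dll", CallingConvention = CallingConvention.Cdecl)]
        static extern void core_run();

        static ParseXmlFn keepAlive;   // hold a reference so the GC never collects the delegate

        static void Main()
        {
            keepAlive = path => Console.WriteLine($"C core asked us to parse {path}");
            core_set_xml_helper(keepAlive);  // C side stores the function pointer
            core_run();                      // C main loop calls back into C#
        }
    }
    ```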

    Read the article

  • Is order of parameters for database Command object really important?

    - by nawfal
    I was debugging a database operation and found that the UPDATE never actually happened, though the code never failed as such. This is the code:

    ```csharp
    condb.Open();
    OleDbCommand dbcom = new OleDbCommand("UPDATE Word SET word=?,sentence=?,mp3=? WHERE id=? AND exercise_id=?", condb);
    dbcom.Parameters.AddWithValue("id", wd.ID);
    dbcom.Parameters.AddWithValue("exercise_id", wd.ExID);
    dbcom.Parameters.AddWithValue("word", wd.Name);
    dbcom.Parameters.AddWithValue("sentence", wd.Sentence);
    dbcom.Parameters.AddWithValue("mp3", wd.Mp3);
    ```

    But after some tweaking, this worked:

    ```csharp
    condb.Open();
    OleDbCommand dbcom = new OleDbCommand("UPDATE Word SET word=?,sentence=?,mp3=? WHERE id=? AND exercise_id=?", condb);
    dbcom.Parameters.AddWithValue("word", wd.Name);
    dbcom.Parameters.AddWithValue("sentence", wd.Sentence);
    dbcom.Parameters.AddWithValue("mp3", wd.Mp3);
    dbcom.Parameters.AddWithValue("id", wd.ID);
    dbcom.Parameters.AddWithValue("exercise_id", wd.ExID);
    ```

    Why is it so important that the WHERE-clause parameters are added last with an OleDb connection? Having worked with MySQL previously, I could (and usually do) write the WHERE-clause parameters first, because that's more logical to me. Is parameter order important when querying a database in general? Is it some performance concern? Is there a specific order to be maintained with other databases like DB2, SQLite, etc.? Update: I got rid of the ? placeholders and used proper names, with and without @. The order really is what matters: in both cases, the actual update only happened when the WHERE-clause parameters were added last. To make matters worse, in complex queries it's hard to know which order Access expects, and whenever the order is wrong the query simply doesn't do its intended work, with no warning or error!

    Read the article

  • [NHibernate and ASP.NET MVC] How can I implement a robust session-per-request pattern in my project?

    - by Guillaume Gervais
    I'm currently building an ASP.NET MVC project with NHibernate as its persistence layer. So far, the implemented functionality only uses local NHibernate sessions: each method that accesses the database (read or write) instantiates its own NHibernate session inside a "using()" block. The problem is that I want to leverage NHibernate's lazy-loading capabilities to improve the performance of my project. That implies one open NHibernate session per request, kept open until the view is rendered. Furthermore, simultaneous requests must be supported (multiple sessions at the same time). How can I achieve that as cleanly as possible? I searched the web a little and learned about the session-per-request pattern. Most of the implementations I saw use some Http* object (HttpContext, etc.) to store the session. Also, using the Application_BeginRequest/Application_EndRequest handlers is complicated, since they fire for every HTTP request (aspx files, css files, js files, etc.), when I only want to instantiate a session once per request. My concern is that I don't want my views or controllers to have access to NHibernate sessions (or, more generally, NHibernate namespaces and code). That means I do not want to handle sessions at the controller level, nor at the view level. I have a few options in mind — which one seems best? (A sketch of the third follows below.)

    1. Use interceptors (like in Grails) that get triggered before and after the controller action; these would open and close sessions/transactions. Is that possible in the ASP.NET MVC world?
    2. Use the CurrentSessionContext singleton provided by NHibernate in a web context. Using this page as an example, I think this is quite promising, but it still requires filters at the controller level.
    3. Use HttpContext.Current.Items to store the request's session. This, coupled with a few lines of code in Global.asax.cs, can easily give me a session at the request level. However, it means dependencies between NHibernate and HttpContext.

    Thank you very much!
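    A minimal sketch of option 3, under the assumption of an NHibernateHelper class exposing the SessionFactory somewhere in the project. Creating the session lazily, on first access, means requests that never touch the database (css, js, images) never open one:

    ```csharp
    // Global.asax.cs -- hedged sketch; NHibernateHelper.SessionFactory is assumed.
    using System;
    using System.Web;
    using NHibernate;

    public partial class Global : HttpApplication
    {
        public static ISession CurrentSession
        {
            get
            {
                var session = (ISession)HttpContext.Current.Items["nh.session"];
                if (session == null)
                {
                    // Opened lazily, so static-file requests never pay for a session.
                    session = NHibernateHelper.SessionFactory.OpenSession();
                    HttpContext.Current.Items["nh.session"] = session;
                }
                return session;
            }
        }

        protected void Application_EndRequest(object sender, EventArgs e)
        {
            var session = (ISession)HttpContext.Current.Items["nh.session"];
            if (session != null)
            {
                session.Dispose();   // closes the session once the view has rendered
                HttpContext.Current.Items.Remove("nh.session");
            }
        }
    }
    ```

    Repositories would then call Global.CurrentSession, keeping controllers and views free of NHibernate namespaces, which matches the stated constraint.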

    Read the article

  • c++ std::ostringstream vs std::string::append

    - by NickSoft
    In all the examples that use some kind of buffering, I see a stream used instead of a string. How are std::ostringstream and the << operator different from using string::append? Which one is faster, and which one uses fewer resources (memory)? One difference I know of is that you can write various types (like integers) into an output stream, rather than the limited types that string::append accepts. Here is an example:

    ```cpp
    std::ostringstream os;
    os << "Content-Type: " << contentType << ";charset=" << charset << "\r\n";
    std::string header = os.str();
    ```

    vs

    ```cpp
    std::string header("Content-Type: ");
    header.append(contentType);
    header.append(";charset=");
    header.append(charset);
    header.append("\r\n");
    ```

    Obviously the stream version is shorter, but append returns a reference to the string, so it can be chained:

    ```cpp
    std::string header("Content-Type: ");
    header.append(contentType)
          .append(";charset=")
          .append(charset)
          .append("\r\n");
    ```

    And with an output stream you can do:

    ```cpp
    std::string content;
    // ...
    os << "Content-Length: " << content.length() << "\r\n";
    ```

    But what about memory usage and speed, especially when used in a big loop? Update: to be clearer, the question is: which one should I use, and why? Are there situations where one is preferred over the other? As for performance and memory — well, I think a benchmark is the only way, since every implementation could differ. Update 2: I haven't got a clear idea from the answers of which one to use, which suggests that any of them will do the job, plus vector. Cubbi ran a nice benchmark, with the addition from Dietmar Kühl that the biggest difference is the construction of these objects; if you are looking for an answer, you should check that too. I'll wait a bit longer for other answers (see the previous update), and if I don't get one I think I'll accept Tolga's answer, because his suggestion to use a vector has already been benchmarked, which suggests vector should be the least resource-hungry.
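    Since the updates conclude that only a benchmark settles it, here is a minimal benchmark sketch, not from the original question; absolute numbers depend entirely on the standard library implementation, so run it with your own toolchain and optimisation flags:

    ```cpp
    #include <chrono>
    #include <iostream>
    #include <sstream>
    #include <string>

    int main() {
        using clock = std::chrono::steady_clock;
        const int iterations = 200000;
        const std::string contentType = "text/html";
        const std::string charset = "utf-8";
        std::size_t total = 0;  // consume the results so the loops aren't optimised away

        auto t0 = clock::now();
        for (int i = 0; i < iterations; ++i) {
            std::ostringstream os;  // note: construction cost dominates here
            os << "Content-Type: " << contentType << ";charset=" << charset << "\r\n";
            total += os.str().size();
        }
        auto t1 = clock::now();
        for (int i = 0; i < iterations; ++i) {
            std::string header("Content-Type: ");
            header.append(contentType).append(";charset=").append(charset).append("\r\n");
            total += header.size();
        }
        auto t2 = clock::now();

        auto ms = [](auto d) { return std::chrono::duration_cast<std::chrono::milliseconds>(d).count(); };
        std::cout << "ostringstream:  " << ms(t1 - t0) << " ms\n"
                  << "string::append: " << ms(t2 - t1) << " ms\n"
                  << "(checksum " << total << ")\n";
    }
    ```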

    Read the article

  • Techniques for querying a set of objects in-memory in a Java application

    - by Edd Grant
    Hi All, we have a system which performs a 'coarse search' by invoking an interface on another system, which returns a set of Java objects. Once we have received the search results, I need to be able to further filter the resulting Java objects based on criteria describing the state of their attributes (e.g. from the initial objects, return all objects where x.y > z && a.b == c). The criteria used to filter the set of objects are partially user-configurable: users can select the values and ranges to match on, but the attributes they can pick from are a fixed set. The data sets are likely to contain <= 10,000 objects per search. The search will be executed manually by the application's user base, probably no more than 2,000 times a day (approx.). It's probably worth mentioning that all the objects in the result set are known domain object classes which have Hibernate and JPA annotations describing their structure and relationships. Off the top of my head I can think of three ways of doing this:

    1. For each search, persist the initial result-set objects in our database, then use Hibernate to re-query them with the finer-grained criteria.
    2. Use an in-memory database (such as HSQLDB?) to query and refine the initial result set.
    3. Write custom code which iterates over the initial result set and pulls out the desired records.

    Option 1 seems to involve a lot of to-ing and fro-ing across the network to a physical database (Oracle 10g), which could cause a lot of network and disk activity. It would also require the results of each search to be isolated from other result sets, to ensure that different searches don't interfere with each other. Option 2 seems like a good idea in principle, as it would let me do the finer query in memory without persisting result data that would only be discarded once the search completes. My gut feeling is that this could be pretty performant too, but it might carry a larger memory overhead (which is fine, as we can be pretty flexible about the amount of memory our JVM gets). Option 3 could be very performant, but it's something I would like to avoid, as any code we write would require such careful testing that the time taken to achieve something flexible and robust enough would probably be prohibitive. I don't have time to prototype all three ideas, so I'm looking for comments on the three options above, plus any further ideas I haven't considered, to help me decide which might be most suitable. I'm currently leaning toward option 2 (the in-memory database), so I'd be keen to hear from people with experience of querying POJOs in memory too. Hopefully I have described the situation in enough detail, but don't hesitate to ask if any further information is required to better understand the scenario. Cheers, Edd
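    For scale, a hedged sketch of what option 3 amounts to when the attribute set is fixed: build a predicate from the user's selections once, then apply it in a single pass. The class and field names below are made up for illustration; with <= 10,000 objects, one in-memory pass per search is usually cheap:

    ```java
    import java.util.List;
    import java.util.function.Predicate;
    import java.util.stream.Collectors;

    // Hypothetical domain class standing in for the real Hibernate-mapped objects.
    class Trade {
        double amount;
        String region;
        Trade(double amount, String region) { this.amount = amount; this.region = region; }
    }

    class ResultRefiner {
        // Built once from the user's configured values/ranges...
        static Predicate<Trade> criteria(double minAmount, String region) {
            return t -> t.amount > minAmount && region.equals(t.region);
        }

        // ...then applied in one pass over the coarse result set.
        static List<Trade> refine(List<Trade> coarse, Predicate<Trade> p) {
            return coarse.stream().filter(p).collect(Collectors.toList());
        }
    }
    ```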

    Read the article

  • WPF PathGeometry/RotateTransform optimization

    - by devinb
    I am having performance issues when rendering/rotating WPF triangles. Given a WPF triangle that must be rotated some number of degrees around a centre point, I can do it one of two ways:

    1. Programmatically determine the points and their offsets in the backend, and use XAML simply to place them on the canvas where they belong:

    ```xml
    <Path Stroke="Black">
      <Path.Data>
        <PathGeometry>
          <PathFigure StartPoint="{Binding CalculatedPointA, Mode=OneWay}">
            <LineSegment Point="{Binding CalculatedPointB, Mode=OneWay}" />
            <LineSegment Point="{Binding CalculatedPointC, Mode=OneWay}" />
            <LineSegment Point="{Binding CalculatedPointA, Mode=OneWay}" />
          </PathFigure>
        </PathGeometry>
      </Path.Data>
    </Path>
    ```

    2. Generate the 'same' triangle every time, then use a RenderTransform (rotate) to put it where it belongs. In this case the rotation calculations are obfuscated, because I don't have any access to how they are done:

    ```xml
    <Path Stroke="Black">
      <Path.Data>
        <PathGeometry>
          <PathFigure StartPoint="{Binding TriPointA, Mode=OneWay}">
            <LineSegment Point="{Binding TriPointB, Mode=OneWay}" />
            <LineSegment Point="{Binding TriPointC, Mode=OneWay}" />
            <LineSegment Point="{Binding TriPointA, Mode=OneWay}" />
          </PathFigure>
        </PathGeometry>
      </Path.Data>
      <Path.RenderTransform>
        <RotateTransform CenterX="{Binding Centre.X, Mode=OneWay}"
                         CenterY="{Binding Centre.Y, Mode=OneWay}"
                         Angle="{Binding Orientation, Mode=OneWay}" />
      </Path.RenderTransform>
    </Path>
    ```

    My question is: which one is faster? I know I should test it myself, but how do I measure the render time of objects at such granularity? I would need to time how long the actual rendering of the form takes, but since I'm not the one kicking off the redraw, I don't know how to capture the start time.
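    A hedged sketch of one way to approximate "time until rendered": change the angle, then queue a callback at DispatcherPriority.Loaded, which runs only after the layout/render work queued ahead of it has been processed. The direct property set below bypasses the Orientation binding and is for measurement only; a dedicated tool (e.g. the WPF Performance Suite's Perforator) gives more precise numbers:

    ```csharp
    using System;
    using System.Diagnostics;
    using System.Windows.Media;
    using System.Windows.Shapes;
    using System.Windows.Threading;

    static class RenderTimer
    {
        public static void MeasureRotation(Path trianglePath, double newAngle)
        {
            var sw = Stopwatch.StartNew();
            // Trigger the redraw (measurement-only; normally Angle is data-bound).
            ((RotateTransform)trianglePath.RenderTransform).Angle = newAngle;
            // Runs after the pending layout/render pass has been processed.
            trianglePath.Dispatcher.BeginInvoke(DispatcherPriority.Loaded,
                new Action(() => Debug.WriteLine($"render pass: {sw.ElapsedMilliseconds} ms")));
        }
    }
    ```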

    Read the article

  • Not sure what happens to my app's objects when using NSURLSession in the background - what state is my app in?

    - by Avner Barr
    More of a general question — I don't understand the workings of NSURLSession when using it in "background session" mode. I will supply some simple, contrived example code. I have a database which holds objects, portions of which can be uploaded to a remote server. It is important to know which data/objects were uploaded, in order to accurately display information to the user. It is also important to be able to upload in a background task, because the app can be killed at any point. For instance, a simple profile picture object:

    ```objc
    @interface ProfilePicture : NSObject
    @property int userId;
    @property UIImage *profilePicture;
    @property BOOL successfullyUploaded; // we want to know if the image was uploaded to our
                                         // server - this could also be a queryable property,
                                         // but let's assume it lives on this object
    @end
    ```

    Now let's say I want to upload the profile picture to a remote server. I could do something like:

    ```objc
    @implementation ProfilePictureUploader

    - (void)uploadProfilePicture:(ProfilePicture *)profilePicture
                      completion:(void (^)(BOOL successInUploading))completion
    {
        NSURLSession *uploadImageSession = ...; // code to set up the upload and
                                                // call the completion handler
        [uploadImageSession resume];
    }

    @end
    ```

    Now, somewhere else in my code, I want to upload the profile picture and, if it was successful, update the UI and the database to reflect that:

    ```objc
    ProfilePicture *aNewProfilePicture = ...;
    aNewProfilePicture.profilePicture = aImage;
    aNewProfilePicture.userId = 123;
    aNewProfilePicture.successfullyUploaded = NO;

    // write the change to disk
    [MyDatabase write:aNewProfilePicture];

    // upload the image to the server
    ProfilePictureUploader *uploader = [ProfilePictureUploader ...];
    [uploader uploadProfilePicture:aNewProfilePicture completion:^(BOOL successInUploading) {
        if (successInUploading) {
            // persist the change to my db
            aNewProfilePicture.successfullyUploaded = YES;
            [MyDatabase update:aNewProfilePicture]; // persist the change
        }
    }];
    ```

    Now obviously, if my app keeps running, this ProfilePicture object is successfully uploaded and all is well: the database object has its own internal workings with data structures/caches and whatnot, any callbacks that exist are maintained, and the app state is straightforward. But I'm not clear what happens if the app dies at some point during the upload. It seems that any callbacks/notifications are dead. According to the API documentation, the upload is handled by a separate process, so it will continue, and my app will be awakened at some point in the future to handle completion. But the "aNewProfilePicture" object is non-existent at that point, and all callbacks/objects are gone. I don't understand what context exists at that point. How am I supposed to ensure consistency in my DB and UI (for instance, updating the "successfullyUploaded" property for that user)? Do I need to rework everything touching the DB or UI to correspond with the new API and work in a context-free environment?
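    A hedged sketch of how the pieces reconnect after a relaunch: the system wakes the app with the background session's identifier, the app recreates a session with the same identifier, and that session's delegate callbacks then fire for the completed tasks. Persisting your own bookkeeping before the app dies (e.g. a task identifier or taskDescription mapped to the userId) is what lets you flip the right database row. The uploadSession and backgroundCompletionHandler properties below are assumptions for illustration:

    ```objc
    // AppDelegate.m -- minimal sketch.
    - (void)application:(UIApplication *)application
        handleEventsForBackgroundURLSession:(NSString *)identifier
                          completionHandler:(void (^)(void))completionHandler
    {
        NSURLSessionConfiguration *config =
            [NSURLSessionConfiguration backgroundSessionConfigurationWithIdentifier:identifier];
        // The delegate object here is brand new -- none of the original
        // completion blocks survived the app being killed.
        self.uploadSession = [NSURLSession sessionWithConfiguration:config
                                                           delegate:self
                                                      delegateQueue:nil];
        self.backgroundCompletionHandler = completionHandler;
    }

    // Called per task once the session is recreated. task.taskDescription
    // (set when the task was created) carries the key you persisted alongside
    // the ProfilePicture row before the app died.
    - (void)URLSession:(NSURLSession *)session task:(NSURLSessionTask *)task
        didCompleteWithError:(NSError *)error
    {
        if (error == nil) {
            // Look up which userId this task belonged to and set
            // successfullyUploaded = YES in the database here.
        }
    }
    ```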

    Read the article

  • Can I use the [] operator in C++ to create virtual arrays?

    - by Shane MacLaughlin
    I have a large code base, originally C ported to C++ many years ago, that operates on a number of large arrays of spatial data. These arrays contain structs representing point and triangle entities that make up surface models. I need to refactor the code so that the specific way these entities are stored internally can vary for specific scenarios. For example, if the points lie on a regular flat grid, I don't need to store the X and Y coordinates, as they can be calculated on the fly — as can the triangles. Similarly, I want to take advantage of out-of-core tools such as STXXL for storage. The simplest way of doing this is replacing array access with put- and get-style functions, e.g.

    ```cpp
    point[i].x = XV;
    ```

    becomes

    ```cpp
    Point p = GetPoint(i);
    p.x = XV;
    PutPoint(i, p);
    ```

    As you can imagine, this is a very tedious refactor on a large code base, prone to all sorts of errors en route. What I'd like to do is write a class that mimics the array by overloading the [] operator. As the arrays already live on the heap and move around with reallocs, the code already assumes that references into the array, such as `point *p = point + i;`, may not be used. Is this class feasible to write? For example, writing the methods below in terms of the [] operator:

    ```cpp
    void MyClass::PutPoint(int Index, Point p)
    {
        if (m_StorageStrategy == RegularGrid) {
            int xoffs, yoffs;
            ComputeGridFromIndex(Index, xoffs, yoffs);
            StoreGridPoint(xoffs, yoffs, p.z);
        } else {
            m_PointArray[Index] = p;
        }
    }

    Point MyClass::GetPoint(int Index)
    {
        if (m_StorageStrategy == RegularGrid) {
            int xoffs, yoffs;
            ComputeGridFromIndex(Index, xoffs, yoffs);
            return GetGridPoint(xoffs, yoffs); // GetGridPoint returns Point
        } else {
            return m_PointArray[Index];
        }
    }
    ```

    My concern is that all the array classes I've seen tend to pass by reference, whereas I think I'll have to pass structs by value. I think it should work, but other than performance, can anyone see any major pitfalls with this approach? N.b. the reason I have to pass by value is to make `point[a].z = point[b].z + point[c].z` work correctly where the underlying storage type varies.
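    For reference, the proxy idiom is the usual way to build such a class: operator[] returns a small handle whose conversion operator reads through GetPoint and whose assignment operator writes through PutPoint. A minimal sketch using the names from the question:

    ```cpp
    class MyClass;  // forward declaration

    // Lightweight handle returned by operator[]: reads go through GetPoint,
    // writes through PutPoint, whatever the storage strategy is.
    class PointProxy {
    public:
        PointProxy(MyClass& owner, int index) : owner_(owner), index_(index) {}
        operator Point() const;                  // read:  Point p = arr[i];
        PointProxy& operator=(const Point& p);   // write: arr[i] = p;
    private:
        MyClass& owner_;
        int index_;
    };

    class MyClass {
    public:
        PointProxy operator[](int index)       { return PointProxy(*this, index); }
        Point      operator[](int index) const { return GetPoint(index); }
        Point GetPoint(int index) const;
        void  PutPoint(int index, Point p);
        // ...
    };

    inline PointProxy::operator Point() const { return owner_.GetPoint(index_); }
    inline PointProxy& PointProxy::operator=(const Point& p) {
        owner_.PutPoint(index_, p);
        return *this;
    }
    ```

    One honest caveat: `point[a].z = ...` still won't compile with this alone, because `operator.` can't be overloaded. Either rewrite such sites as a read-modify-write (`Point p = arr[a]; p.z = ...; arr[a] = p;`) or extend the proxy with per-field accessors, which is more machinery but keeps the call sites closest to the original code.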

    Read the article

  • Adding a clustered index to a SQL table: what dangers exist for a live production system?

    - by MoSlo
    Right — keep in mind I need to describe this while abstracting away all possibly confidential info. I've been put in charge of a 10-year-old transactional system in which the majority of the business logic is implemented at database level (triggers, stored procedures, etc.): Windows 2000 Server, MSSQL 2000 Enterprise. No immediate plans for replacing/updating the system are being considered :( The core process is a program that executes transactions — specifically, it executes a stored procedure with various parameters; let's call it sp_ProcessTrans. The program executes the stored procedure at asynchronous intervals. By itself, things work fine. But there are 30 instances of this program on remotely located workstations, all of them asynchronously executing sp_ProcessTrans and then retrieving data from the SQL Server (execution is pretty regular — ranging from 0 to 60 times a minute, depending on which items the program instance is responsible for). Performance of the system has dropped considerably with 10 years of data growth: the reason is deadlocks, and specifically deadlock wait times. The deadlock is on the Employee table. I have discovered:

    1. In sp_ProcessTrans's execution, it selects from the Employee table 7 times (don't ask).
    2. The select is done on a field that is NOT the primary key.
    3. No index exists on this field. Thus a table scan is performed — 7 times, per transaction.

    So the reason for the deadlocks is clear. I created a non-unique ordered clustered index on the field (the field looks good: almost unique, NUM(7), very rarely changes) and saw immediate improvement in the test environment. The problem is that I cannot simulate the deadlocks in a test environment (I'd need 30 workstations, and I'd need to simulate 'realistic' activity on those stations, so virtualisation is out). I need to know whether I must schedule downtime. Creating an index shouldn't be a risky operation for MSSQL, but is there any danger (data corruption in transactions/select statements, extra wait time, etc.) in creating this index on the production database while transactions are still taking place? (I can pick a time when transactions are fairly quiet across the 30 stations.) Are there any hidden dangers I'm not seeing? I'm not looking forward to needing to restore the DB if something goes wrong; restoring would take a long time with 10 years of data.
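    For reference, a hedged sketch of the operation in question — the table and column names are stand-ins for the real (confidential) schema. One version-specific point worth knowing: online index builds (WITH (ONLINE = ON)) arrived in SQL Server 2005 Enterprise, so on MSSQL 2000 the build holds exclusive locks for its duration, and building a clustered index rewrites the whole table plus any nonclustered indexes. Concurrent sp_ProcessTrans calls should block rather than corrupt anything, but on 10 years of data the blocking window could be long:

    ```sql
    -- SQL Server 2000: offline build; concurrent transactions wait until it finishes.
    CREATE CLUSTERED INDEX IX_Employee_EmpNum
        ON dbo.Employee (EmpNum);

    -- For comparison, on SQL Server 2005+ Enterprise the same build can stay online:
    -- CREATE CLUSTERED INDEX IX_Employee_EmpNum ON dbo.Employee (EmpNum) WITH (ONLINE = ON);
    ```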

    Read the article

  • Add console.profile statements to JavaScript/jQuery code on the fly.

    - by novogeek
    Hi folks, we have a thick-client app using jQuery heavily, and we want to profile the performance of the code using Firebug's console.profile API. The problem is, I don't want to change the code to write the profile statements. Take this example:

    ```javascript
    var search = function () {
        this.init = function () {
            console.log('init');
        };
        this.ajax = function () {
            console.log('ajax');
            // make an ajax call using $.ajax and do some DOM manipulation here...
        };
        this.cache = function () {
            console.log('cache');
        };
    };

    var instance = new search();
    instance.ajax();
    ```

    I want to profile my instance.ajax method, but I don't want to add profile statements in the code, as that makes the code harder to maintain. I'm trying to override the methods using closures, like this: http://www.novogeek.com/post/2010/02/27/Overriding-jQueryJavaScript-functions-using-closures.aspx — but I'm not very sure how to achieve it. Any pointers on this? I think this would help many big projects profile their code easily, without a big change to the code. Here is the idea: just run the code below in the Firebug console to see what I'm trying to achieve.

    ```javascript
    var search = function () {
        this.init = function () {
            console.log('init');
        };
        this.ajax = function () {
            console.log('ajax');
            // make an ajax call using $.ajax and do some DOM manipulation here...
        };
        this.cache = function () {
            console.log('cache');
        };
    };

    var instance = new search();
    $.each(instance, function (functionName, functionBody) {
        (function () {
            var dup = functionBody;
            functionBody = function () {
                console.log('modifying the old function: ', functionName);
                console.profile(functionName);
                dup.apply(this, arguments);
                console.profileEnd(functionName);
            };
        })();
        console.log(functionName, '::', functionBody());
    });
    ```

    Now what I need is: if I call instance.ajax(), I want the new ajax() method to be called, along with the console.profile statements. Hope I'm clear about the requirement. Please improve the above code. Regards, Krishna, http://www.novogeek.com
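    The snippet above never actually replaces the methods on the instance: it reassigns the local loop variable and then invokes functionBody() immediately. A hedged correction that swaps each method in place:

    ```javascript
    // Wrap every method on `instance` so each call is profiled.
    // Requires Firebug's console.profile/profileEnd to be available.
    $.each(instance, function (functionName, functionBody) {
        if (typeof functionBody !== 'function') return;   // skip data members
        instance[functionName] = function () {
            console.profile(functionName);
            try {
                return functionBody.apply(this, arguments); // run the original
            } finally {
                console.profileEnd(functionName);           // always closed, even on throw
            }
        };
    });

    instance.ajax();   // now profiled, with no change to the search() source
    ```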

    Read the article

  • Query performs poorly unless a temp table is used

    - by Paul McLoughlin
    The following query takes about 1 minute to run, with these I/O statistics:

    ```sql
    SELECT T.RGN, T.CD, T.FUND_CD, T.TRDT, SUM(T2.UNITS) AS TotalUnits
    FROM dbo.TRANS AS T
    JOIN dbo.TRANS AS T2
      ON T2.RGN = T.RGN AND T2.CD = T.CD AND T2.FUND_CD = T.FUND_CD AND T2.TRDT <= T.TRDT
    JOIN TASK_REQUESTS AS T3
      ON T3.CD = T.CD AND T3.RGN = T.RGN AND T3.TASK = 'UPDATE_MEM_BAL'
    GROUP BY T.RGN, T.CD, T.FUND_CD, T.TRDT
    ```

    ```
    (4447 row(s) affected)
    Table 'TRANSACTIONS'. Scan count 5977, logical reads 7527408, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'TASK_REQUESTS'. Scan count 1, logical reads 11, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    SQL Server Execution Times: CPU time = 58157 ms, elapsed time = 61437 ms.
    ```

    If I instead introduce a temporary table, the query returns quickly and performs far fewer logical reads:

    ```sql
    CREATE TABLE #MyTable (
        RGN VARCHAR(20) NOT NULL,
        CD  VARCHAR(20) NOT NULL,
        PRIMARY KEY ([RGN], [CD])
    );

    INSERT INTO #MyTable (RGN, CD)
    SELECT RGN, CD FROM TASK_REQUESTS WHERE TASK = 'UPDATE_MEM_BAL';

    SELECT T.RGN, T.CD, T.FUND_CD, T.TRDT, SUM(T2.UNITS) AS TotalUnits
    FROM dbo.TRANS AS T
    JOIN dbo.TRANS AS T2
      ON T2.RGN = T.RGN AND T2.CD = T.CD AND T2.FUND_CD = T.FUND_CD AND T2.TRDT <= T.TRDT
    JOIN #MyTable AS T3 ON T3.CD = T.CD AND T3.RGN = T.RGN
    GROUP BY T.RGN, T.CD, T.FUND_CD, T.TRDT
    ```

    ```
    (4447 row(s) affected)
    Table 'Worktable'. Scan count 5974, logical reads 382339, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'TRANSACTIONS'. Scan count 4, logical reads 4547, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table '#MyTable________________________________________________________________000000000013'. Scan count 1, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    SQL Server Execution Times: CPU time = 1420 ms, elapsed time = 1515 ms.
    ```

    The interesting thing for me is that TASK_REQUESTS is a small table (3 rows at present) and its statistics are up to date. Any idea why such different execution plans and execution times occur? And, ideally, how can I change things so that I don't need the temp table to get decent performance? The only real difference between the execution plans is that the temp-table version introduces an index spool (eager spool) operation.
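    A hedged suggestion to try before settling on the temp table (compare the plans to verify it helps on your version — the RECOMPILE query hint is a SQL Server 2005+ feature): OPTION (RECOMPILE) makes the optimizer re-cost the statement at execution time, which can give it the same accurate picture of the tiny TASK_REQUESTS table that materialising it into #MyTable does:

    ```sql
    SELECT T.RGN, T.CD, T.FUND_CD, T.TRDT, SUM(T2.UNITS) AS TotalUnits
    FROM dbo.TRANS AS T
    JOIN dbo.TRANS AS T2
      ON T2.RGN = T.RGN AND T2.CD = T.CD AND T2.FUND_CD = T.FUND_CD AND T2.TRDT <= T.TRDT
    JOIN TASK_REQUESTS AS T3
      ON T3.CD = T.CD AND T3.RGN = T.RGN AND T3.TASK = 'UPDATE_MEM_BAL'
    GROUP BY T.RGN, T.CD, T.FUND_CD, T.TRDT
    OPTION (RECOMPILE);  -- re-estimate with actual table cardinalities
    ```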

    Read the article

  • How to get rid of memory allocations/deallocations in SWIG wrappers?

    - by Dmitriy Matveev
    I want to use SWIG to generate read-only wrappers for a complex object. The object I want to wrap will always exist while I read it, and I will only use my wrappers while that object exists, so I don't need any memory management from SWIG. For the following SWIG interface:

    ```swig
    %module test
    %immutable;
    %inline %{
    struct Foo {
        int a;
    };
    struct Bar {
        int b;
        Foo f;
    };
    %}
    ```

    I get wrappers with a lot of garbage in the generated interfaces, doing useless work that reduces performance in my case. The generated Java wrapper for the Bar class looks like this:

    ```java
    public class Bar {
        private long swigCPtr;
        protected boolean swigCMemOwn;

        protected Bar(long cPtr, boolean cMemoryOwn) {
            swigCMemOwn = cMemoryOwn;
            swigCPtr = cPtr;
        }

        protected static long getCPtr(Bar obj) {
            return (obj == null) ? 0 : obj.swigCPtr;
        }

        protected void finalize() {
            delete();
        }

        public synchronized void delete() {
            if (swigCPtr != 0) {
                if (swigCMemOwn) {
                    swigCMemOwn = false;
                    testJNI.delete_Bar(swigCPtr);
                }
                swigCPtr = 0;
            }
        }

        public int getB() {
            return testJNI.Bar_b_get(swigCPtr, this);
        }

        public Foo getF() {
            return new Foo(testJNI.Bar_f_get(swigCPtr, this), true);
        }

        public Bar() {
            this(testJNI.new_Bar(), true);
        }
    }
    ```

    I don't need the swigCMemOwn field in my wrapper, since it will always be false, and all code related to this field is likewise useless. There is also unnecessary logic in the native code:

    ```c
    SWIGEXPORT jlong JNICALL Java_some_testJNI_Bar_1f_1get(JNIEnv *jenv, jclass jcls, jlong jarg1, jobject jarg1_) {
      jlong jresult = 0;
      struct Bar *arg1 = (struct Bar *) 0;
      Foo result;

      (void)jenv;
      (void)jcls;
      (void)jarg1_;
      arg1 = *(struct Bar **)&jarg1;
      result = ((arg1)->f);
      {
        Foo *resultptr = (Foo *) malloc(sizeof(Foo));
        memmove(resultptr, &result, sizeof(Foo));
        *(Foo **)&jresult = resultptr;
      }
      return jresult;
    }
    ```

    I don't need these calls to malloc and memmove. I want to force SWIG to resolve both of these problems, but I don't know how. Is it possible?

    Read the article

  • Should I HttpCombine the Google-hosted jQuery file?

    - by chobo2
    Hi, I am using something called HttpCombiner (http://code.msdn.microsoft.com/HttpCombiner): an HTTP handler that combines multiple CSS, JavaScript or URL resources into one response, for faster page loads. It can combine, compress and cache the response, which results in faster page loads and better scalability of the web application. It's good practice to use many small JavaScript and CSS files instead of one large file, for better code maintainability — but it's bad in terms of website performance. Although you should write your JavaScript in small files and break large CSS files into small chunks, when the browser requests those JavaScript and CSS files, it makes one HTTP request per file. Every HTTP request results in a network round trip from your browser to the server, and the delay in reaching the server and coming back is called latency. So if a page loads four JavaScript and three CSS files, you waste time on seven network round trips. Within the USA, latency averages 70 ms, so you waste 7 × 70 = 490 ms — about half a second of delay. Outside the USA, average latency is around 200 ms, which means 1400 ms of waiting. The browser cannot show the page properly until CSS and JavaScript are fully loaded, so the more latency you have, the slower the page loads. You can reduce the wait time by using a CDN (read my previous blog post about using a CDN). However, a better solution is to deliver multiple files over one request, using an HttpHandler that combines several files and delivers them as one output. Instead of putting many script or link tags, you put just one of each and point them at the HttpHandler; you tell the handler which files to combine, and it delivers them in one response. This saves the browser from making many requests and eliminates the latency. The handler reads the file names defined in a configuration, combines those files, and delivers them as one response — gzip-compressed to save bandwidth, with proper cache headers so the browser does not request it again on a future visit. Now I am wondering: since the handler can take links, should I put the Google-hosted jQuery file into it? The reason I'm unsure: if jQuery gets combined with my other files, I think I might lose the advantages of it being hosted on Google's servers, such as caching (my thinking is that once combined, the URL looks different, so even if a user already has Google's copy in their cache, it won't be used). So should I combine it, or only the files that I host locally?

    Read the article

  • Browsers (IE and Firefox) freeze when copying large amount of text

    - by Matt
    I have a web application — a Java servlet — that delivers data to users as a text printout in a browser (text marked up with HTML so that it displays in the browser the way we want). The text does display in different colors, though most of it is black. One typical mode of operation is this:

    1. User submits a form to request data.
    2. Servlet delivers an HTML file to the browser.
    3. User presses CTRL+A to select all the text.
    4. User presses CTRL+C to copy all the text.
    5. User goes to a text editor and presses CTRL+V to paste the text.

    In the testing where I'm having this problem, step 2 successfully loads all the data — we wait for it to complete, and we can scroll to the end of what the browser loaded and see the end of the data. However, the browser freezes on step 3 (Firefox) or on step 4 (IE). Because step 2 finishes, I think it is a browser/memory issue and not an issue with the web application. If I run queries that deliver smaller amounts of data (such that several queries together return the same data the one big query would have) and copy/paste that text, the file I save it into ends up being about 8 MB. If I save the browser's displayed HTML to a file on my computer via File → Save As in the browser menu, it works fine and the file is about 22 MB. We've tried this on two different computers at work (both running Windows XP, with at least 2 GB of RAM and many GB of free disk space), using Firefox and IE. We also tried it on a home computer on a home network outside of work (thinking our IT security software might be causing the problem), running Windows 7 with IE, and still had the problem. While this happens, I can see whichever browser I'm using pegged at 50% CPU; Firefox's memory usage grows to about 1 GB, while IE's stays in the several hundred MBs. We once let this run for half an hour, and it did not complete. I'm most likely going to modify the web app to offer a plain-text file for download, and I imagine that will get the users what they need. But in the meantime — and because I'm curious, and I don't like my application freezing people's browsers — does anyone have any ideas about the freeze? I understand that sometimes you just hit a memory limit, but 22 MB sounds to me like an amount I should be able to copy to the clipboard.

    Read the article

  • [c++] Upload image to ImageShack

    - by cinek1lol
    Hi. I would like to send pictures from a program written in C++. I wrote such a thing using curl.exe:

    ```cpp
    WinExec("C:\\curl\\curl.exe -H Expect: -F \"fileupload=@C:\\curl\\ok.jpg\" -F \"xml=yes\" -# \"http://www.imageshack.us/index.php\" -o data.txt -A \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.1) Gecko/20061204 Firefox/2.0.0.1\" -e \"http://www.imageshack.us\"", NULL);
    ```

    This only works by pointing at a path on disk; what I would like is to read the picture into a char variable first and then send that variable. I also want the program to use the curl library from C++, not the external .exe. I found a program like this (which I have modified somewhat):

    ```cpp
    #include <stdio.h>
    #include <string.h>
    #include <iostream>
    #include <curl/curl.h>
    #include <curl/types.h>
    #include <curl/easy.h>

    int main(int argc, char *argv[])
    {
        CURL *curl;
        CURLcode res;
        struct curl_httppost *formpost = NULL;
        struct curl_httppost *lastptr = NULL;
        struct curl_slist *headerlist = NULL;
        static const char buf[] = "Expect:";

        curl_global_init(CURL_GLOBAL_ALL);

        /* Fill in the file upload field */
        curl_formadd(&formpost, &lastptr,
                     CURLFORM_COPYNAME, "send",
                     CURLFORM_FILE, "nowy.jpg",
                     CURLFORM_END);
        curl_formadd(&formpost, &lastptr,
                     CURLFORM_COPYNAME, "nowy.jpg",
                     CURLFORM_COPYCONTENTS, "nowy.jpg",
                     CURLFORM_END);
        curl_formadd(&formpost, &lastptr,
                     CURLFORM_COPYNAME, "submit",
                     CURLFORM_COPYCONTENTS, "send",
                     CURLFORM_END);

        curl = curl_easy_init();
        headerlist = curl_slist_append(headerlist, buf);
        if (curl) {
            curl_easy_setopt(curl, CURLOPT_URL, "http://www.imageshack.us/index.php");
            if ((argc == 2) && (!strcmp(argv[1], "xml=yes")))
                curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headerlist);
            curl_easy_setopt(curl, CURLOPT_HTTPPOST, formpost);
            res = curl_easy_perform(curl);
            curl_easy_cleanup(curl);
            curl_formfree(formpost);
            curl_slist_free_all(headerlist);
        }
        system("pause");
        return 0;
    }
    ```

    I will be grateful for any help.
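    To upload from bytes already in memory rather than a path on disk, libcurl's form API accepts a named buffer via CURLFORM_BUFFER/CURLFORM_BUFFERPTR/CURLFORM_BUFFERLENGTH. A hedged sketch — the form field name is an assumption about what the ImageShack page expects, and the buffer must stay valid until curl_easy_perform() returns:

    ```cpp
    #include <curl/curl.h>
    #include <vector>

    void upload_from_memory(const std::vector<char>& image)
    {
        curl_global_init(CURL_GLOBAL_ALL);

        struct curl_httppost* formpost = NULL;
        struct curl_httppost* lastptr  = NULL;
        curl_formadd(&formpost, &lastptr,
                     CURLFORM_COPYNAME,     "fileupload",       // form field name (assumed)
                     CURLFORM_BUFFER,       "ok.jpg",           // filename reported to the server
                     CURLFORM_BUFFERPTR,    image.data(),       // bytes already in memory
                     CURLFORM_BUFFERLENGTH, (long)image.size(),
                     CURLFORM_END);

        CURL* curl = curl_easy_init();
        if (curl) {
            curl_easy_setopt(curl, CURLOPT_URL, "http://www.imageshack.us/index.php");
            curl_easy_setopt(curl, CURLOPT_HTTPPOST, formpost);
            curl_easy_perform(curl);
            curl_easy_cleanup(curl);
        }
        curl_formfree(formpost);
    }
    ```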

    Read the article

  • AudioTrack lag: obtainBuffer timed out

    - by BTR
    I'm playing WAVs on my Android phone by loading the file and feeding the bytes into AudioTrack.write() via the FileInputStream → BufferedInputStream → DataInputStream chain. The audio plays fine, and while it does, I can easily adjust sample rate, volume, etc. on the fly with good performance. However, it takes about two full seconds for a track to start playing. I know AudioTrack has an inescapable delay, but this is ridiculous. Every time I play a track, I get this:

    ```
    03-13 14:55:57.100: WARN/AudioTrack(3454): obtainBuffer timed out (is the CPU pegged?) 0x2e9348 user=00000960, server=00000000
    03-13 14:55:57.340: WARN/AudioFlinger(72): write blocked for 233 msecs, 9 delayed writes, thread 0xba28
    ```

    I've noticed that the delayed-write count increases by one every time I play a track — even across multiple sessions — from the time the phone was turned on. The blocked time is always 230–240 ms, which makes sense given the minimum buffer size of 9600 on this device (9600 / 44100). I've seen this message in countless Internet searches, but it usually relates to audio not playing at all, or skipping; in my case it's just a delayed start. I'm running all my code in a high-priority thread. Here's a truncated-yet-functional version of what I'm doing — the thread callback in my playback class. Again, this works (only playing 16-bit, 44.1 kHz stereo files right now); it just takes forever to start and logs that obtainBuffer/delayed-write message every time.

    ```java
    public void run() {
        // Load file
        FileInputStream mFileInputStream;
        try {
            // mFile is an instance of a custom file class -- this is correct,
            // so don't sweat this line
            mFileInputStream = new FileInputStream(mFile.path());
        } catch (FileNotFoundException e) {}
        BufferedInputStream mBufferedInputStream = new BufferedInputStream(mFileInputStream, mBufferLength);
        DataInputStream mDataInputStream = new DataInputStream(mBufferedInputStream);

        // Skip header
        try {
            if (mDataInputStream.available() > 44)
                mDataInputStream.skipBytes(44);
        } catch (IOException e) {}

        // Initialize device
        mAudioTrack = new AudioTrack(AudioManager.STREAM_MUSIC,
                ConfigManager.SAMPLE_RATE,
                AudioFormat.CHANNEL_CONFIGURATION_STEREO,
                AudioFormat.ENCODING_PCM_16BIT,
                ConfigManager.AUDIO_BUFFER_LENGTH,
                AudioTrack.MODE_STREAM);
        mAudioTrack.play();

        // Initialize buffer
        byte[] mByteArray = new byte[mBufferLength];
        int mBytesToWrite = 0;
        int mBytesWritten = 0;

        // Loop to keep thread running
        while (mRun) {
            // This flag is turned on when the user presses "play"
            while (mPlaying) {
                try {
                    // Check if data is available
                    if (mDataInputStream.available() > 0) {
                        // Read data from file and write to audio device
                        mBytesToWrite = mDataInputStream.read(mByteArray, 0, mBufferLength);
                        mBytesWritten += mAudioTrack.write(mByteArray, 0, mBytesToWrite);
                    }
                } catch (IOException e) {}
            }
        }
    }
    ```

    If I can get past this artificially long lag, I can easily deal with the inherent latency by starting my writes at a later, predictable position (i.e. skipping past the minimum buffer length when I start playing a file).
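    One hedged thing to try, not a guaranteed fix: in MODE_STREAM, AudioTrack starts consuming as soon as play() is called, so if the first write() only arrives later, the track sits on an empty buffer — consistent with the "obtainBuffer timed out" warnings. Writing at least one full buffer before calling play() often removes the startup stall. A minimal sketch of the reorder:

    ```java
    private void primeAndPlay(DataInputStream in, AudioTrack track, int bufferLength)
            throws IOException {
        byte[] buffer = new byte[bufferLength];

        // 1. Pre-fill: queue one buffer's worth of audio BEFORE play().
        int read = in.read(buffer, 0, bufferLength);
        if (read > 0) {
            track.write(buffer, 0, read);
        }

        // 2. Start playback only once data is already queued.
        track.play();

        // 3. Keep feeding the track as before.
        while ((read = in.read(buffer, 0, bufferLength)) > 0) {
            track.write(buffer, 0, read);
        }
    }
    ```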

    Read the article

  • JPanel's child components paint/layout problem

    - by Tom Brito
    I'm having a problem where, when my frame is shown (after a login dialog), the buttons are not in the correct position; some milliseconds later they jump to the right position (the center of the panel, which uses BorderLayout). When I build an SSCCE, it works correctly, but when I run my whole application I get this brief, milliseconds-long delay before the buttons move to the correct place. Unfortunately I can't post the whole code, but the method that shows the frame is:

    ```java
    public void login(JComponent userView) {
        centerPanel.removeAll();
        centerPanel.add(userView);
        centerPanel.revalidate();
        centerPanel.repaint();
        frame.setVisible(true);
    }
    ```

    What could cause this delay in the panel layout? (I'm running everything on the EDT.) Update: on my machine, this SSCCE shows the layout problem about 2 times out of 10:

    ```java
    import java.awt.BorderLayout;
    import javax.swing.JButton;
    import javax.swing.JFrame;
    import javax.swing.JPanel;
    import javax.swing.SwingUtilities;

    public class TEST {
        public static void main(String[] args) throws Exception {
            SwingUtilities.invokeAndWait(new Runnable() {
                @Override
                public void run() {
                    System.out.println("Debug test...");
                    JPanel btnPnl = new JPanel();
                    btnPnl.add(new JButton("TEST"));
                    JFrame f = new JFrame("TEST");
                    f.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                    f.getContentPane().setLayout(new BorderLayout());
                    f.getContentPane().add(btnPnl);
                    f.pack();
                    f.setSize(800, 600);
                    f.setVisible(true);
                    System.out.println("End debug test!");
                }
            });
        }
    }
    ```

    The button first appears at the top left, then moves to the center. Please note that I want to understand this, not just correct it. Is it a Java bug? Update: OK, so the SSCCE doesn't show the problem for those of you who have tried it so far. Maybe it's a performance problem on my machine. But that doesn't answer the question: I still think Swing is creating new threads to do the layout behind the scenes.

    Read the article

  • Problems starting a Windows service on Windows XP SP3

    - by Michiel Peeters
    I'm currently facing a problem which I cannot resolve, and I really don't know what to do anymore. When I try to start the service, I receive the message: "The service started and then stopped again. Some services stop automatically if they have no work to do, for example the Performance Logs and Alerts service." I've looked in the Windows event logs, but nothing is written there that could explain why my service keeps stopping. I've also tried starting the service from the command prompt, which gives me: "The service is not started, but the service didn't return any faults." I've tried removing all registry keys that reference my service, which didn't resolve the issue. I've searched on Google (maybe not well enough) but didn't find an answer. I did find some websites describing things I could try, but none of those suggestions worked. This is kinda ** because I don't know where to look: I have no error message and no ID I can search on. I really don't know where to start, and I hope you can help me on this one. Detailed information about the windows service:

    - OS: Windows XP SP3
    - .NET Framework: .NET 4.0 Client Profile
    - Language: C#
    - Development environment: Visual Studio 2010 Professional (but Visual Studio 2012 RC is installed)
    - Communications: WCF named pipes and WCF BasicHttpBinding

    Named pipes: I chose this because I wanted to communicate between the windows service and a Windows Forms application. It worked for quite some time, but suddenly the windows service shut itself down, and I couldn't restart it anymore. Two named-pipe services are implemented: an event service that sends notifications to the Windows Forms application, and a management service that lets the Windows Forms application maintain my windows service. BasicHttpBinding: this makes the connection to a central server; that connection is then used for streaming information from the client to the server. I don't know what additional information you will need, but if you need something, I'll try to provide it in as much detail as possible. Thank you in advance.

    Read the article

  • Delete duplicate rows, do not preserve one row

    - by Radley
    I need a query that goes through each entry in a table, checks whether a single value is duplicated elsewhere in the table, and if it is, deletes both entries (or all of them, if there are more than two). The problem is that the entries are URLs, up to 255 characters, with no other way of identifying a row. Some existing answers on Stack Overflow do not work for me due to performance limitations, or because they use a unique id, which obviously won't work when dealing with a string. Long version: I have two tables containing URLs (and only URLs). One table has around 3,000 URLs and the other around 1,000. However, a large majority of the 1,000 URLs were taken from the 3,000-URL table. I need to merge the 1,000 into the 3,000 as new entries only. For this, I made a third table with the combined URLs from both tables, about 4,000 entries. I need to find all the duplicate entries in this table and delete them — both of them, without leaving either. I have followed the queries from a few examples on this site, but whenever I try to delete both entries, it either ends up deleting all the entries or gives SQL errors. Alternatively: I have the two original tables. I need to check each row from one table against the other to find the rows that aren't duplicates, and then add those to a third table. Edit: I've got my own PHP solution, which is pretty hacky but works. I cannot answer my own question for 8 hours because I'm new, so here it is for now. It generates a simple list of URLs that exist only in the target table. If you have more than 7,000 entries to parse, this may take a while, and you will need to copy/paste the results into a text file or expand the script to store them back into a database; I'm just doing it manually to save time. Note: uses MeekroDB.

    ```php
    <?php
    require('meekrodb.2.1.class.php');

    DB::$user = 'root';
    DB::$password = '';
    DB::$dbName = 'testdb';

    $all = DB::query('SELECT * FROM old_urls LIMIT 7000');

    foreach ($all as $row) {
        $test = DB::query('SELECT url FROM new_urls WHERE url=%s', $row['url']);
        if (!is_array($test)) {
            echo $row['url'] . "\n";
        } else {
            if (count($test) == 0) {
                echo $row['url'] . "\n";
            }
        }
    }
    ?>
    ```
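    For the record, the pure-SQL version of "delete every URL that appears more than once, keeping none" can be written with a self-grouping subquery. A hedged sketch assuming a MySQL table named combined_urls with a url column (both names are stand-ins); the extra derived-table nesting works around MySQL's restriction on selecting from the table being deleted from (error 1093):

    ```sql
    -- Removes ALL rows whose url occurs more than once, preserving none of them.
    DELETE FROM combined_urls
    WHERE url IN (
        SELECT url FROM (
            SELECT url
            FROM combined_urls
            GROUP BY url
            HAVING COUNT(*) > 1
        ) AS dupes      -- derived table sidesteps MySQL error 1093
    );
    ```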

    Read the article

  • IntentService android download and return file to Activity

    - by Andrew G
    I have a fairly tricky situation that I'm trying to find the best design for. The basics: I'm designing a messaging system with an interface similar to email. When a user clicks a message that has an attachment, an activity is spawned that shows the text of the message, along with a paper clip signalling that there is an attachment. At this point, I begin preloading the attachment so that when the user clicks on it, it loads more quickly. Currently, when the user clicks the attachment, a loading dialog is shown until the download is complete, at which point a separate attachment-viewer activity is launched with the bitmap byte array passed in. I never want to save attachments to persistent storage. The download is currently done with a thread-and-handler setup. The difficulty I have is supporting rotation, Home-button presses, and so on. Instead of the current behaviour, I'd like the flow to be:

    1. The user loads the message as before, and the attachment preloading starts as before (invisible to the user).
    2. When the user clicks on the attachment, the attachment-viewer activity is spawned right away.
    3. If the download was already done, the image is displayed. If not, a dialog is shown in THIS activity until it is done and can be displayed.

    Note that ideally the download never restarts, or else I've wasted cycles on the preload. Obviously I need some persistent background process that keeps downloading and can call back to arbitrarily bound activities. IntentService almost fits my needs, as it does its work on a background thread and has the Service (non-UI) lifecycle. But will it work for my other needs? I notice that common implementations of what I want get a Messenger from the calling activity, so that a Message object can be sent back to a Handler on the caller's thread. This is all well and good, but what happens in my case when the calling activity is stopped or destroyed and the currently active activity (the attachment viewer) is the one showing? Is there some way to dynamically bind a new activity to a running IntentService so that I can send a Message back to the new activity? The other question concerns the Message object: can I send arbitrarily large data back in it? For instance, rather than sending back "the file was downloaded", I need to send back the byte array of the downloaded file itself, since I never want to write it to disk (and yes, this needs to be the case). Any advice on achieving the behaviour I want is greatly appreciated. I've not been working with Android for that long, and I often get confused about how best to handle asynchronous processes over the course of the activity lifecycle, especially when it comes to orientation changes and Home-button presses...

    Read the article

  • Asynchronous while loop?

    - by o7th Web Design
    I have a pretty good SqlDataReader wrapper that maps the output into a strongly typed list. What I'm finding now is that on larger datasets with larger numbers of columns, performance could probably be a bit better if I optimize my mapping. Thinking about it, there is one section in particular that concerns me, as it seems to be the heaviest hitter:

    ```csharp
    while (_Rdr.Read())
    {
        T newObject = new T();
        for (int i = 0; i <= _Rdr.FieldCount - 1; ++i)
        {
            PropertyInfo info = (PropertyInfo)_ht[_Rdr.GetName(i).ToUpper()];
            if ((info != null) && info.CanWrite)
            {
                info.SetValue(newObject, (_Rdr.GetValue(i) is DBNull) ? default(T) : _Rdr.GetValue(i), null);
            }
        }
        _en.Add(newObject);
    }
    _Rdr.Close();
    ```

    What I would really like to know is whether there is a way to make this loop asynchronous. I feel that would make all the difference in the world with this beast :) Here is the entire Map method, in case anyone can see where I can make further improvements:

    ```csharp
    // Map our datareader object to a strongly typed list
    private static IList<T> Map<T>(IDataReader _Rdr) where T : new()
    {
        try
        {
            Type _t = typeof(T);
            List<T> _en = new List<T>();
            Hashtable _ht = new Hashtable();
            PropertyInfo[] _props = _t.GetProperties();
            Parallel.ForEach(_props, info => { _ht[info.Name.ToUpper()] = info; });
            while (_Rdr.Read())
            {
                T newObject = new T();
                for (int i = 0; i <= _Rdr.FieldCount - 1; ++i)
                {
                    PropertyInfo info = (PropertyInfo)_ht[_Rdr.GetName(i).ToUpper()];
                    if ((info != null) && info.CanWrite)
                    {
                        info.SetValue(newObject, (_Rdr.GetValue(i) is DBNull) ? default(T) : _Rdr.GetValue(i), null);
                    }
                }
                _en.Add(newObject);
            }
            _Rdr.Close();
            return _en;
        }
        catch (Exception ex)
        {
            _Msg += "Wrapper.Map Exception: " + ex.Message;
            ErrorReporting.WriteEm.WriteItem(ex, "o7th.Class.Library.Data.Wrapper.Map", _Msg);
            return default(IList<T>);
        }
    }
    ```
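    Before reaching for parallelism, one hedged optimisation worth measuring: resolve the column-name-to-property mapping once, outside the row loop, so the per-row work is just GetValue/SetValue by ordinal rather than GetName plus a hashtable lookup on every field of every row. A minimal sketch of the same Map method with that hoisting applied:

    ```csharp
    using System.Collections.Generic;
    using System.Data;
    using System.Linq;
    using System.Reflection;

    // Hedged sketch: one name->property resolution pass, then ordinal access per row.
    private static IList<T> Map<T>(IDataReader rdr) where T : new()
    {
        var byName = typeof(T).GetProperties()
                              .Where(p => p.CanWrite)
                              .ToDictionary(p => p.Name.ToUpperInvariant());

        // One-time pass: ordinal i -> PropertyInfo (or null when no match).
        var props = new PropertyInfo[rdr.FieldCount];
        for (int i = 0; i < rdr.FieldCount; i++)
            byName.TryGetValue(rdr.GetName(i).ToUpperInvariant(), out props[i]);

        var list = new List<T>();
        while (rdr.Read())
        {
            var item = new T();
            for (int i = 0; i < props.Length; i++)
            {
                if (props[i] == null || rdr.IsDBNull(i)) continue; // leave defaults for NULLs
                props[i].SetValue(item, rdr.GetValue(i), null);
            }
            list.Add(item);
        }
        return list;
    }
    ```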

    Read the article

  • I can't click the links in Firefox and Chrome (they work in IE7)

    - by janoChen
    It's the weirdest thing I've ever seen: I can't click the last three links in the following code when I use Firefox or Chrome (they work in IE7). HTML:

    ```html
    <div id="leftmanulist">
      <div class="abouttop">
        <ul class="aboutlist">
          <li class="index"><a>????</a></li>
          <li><a href="instruments.html">????</a></li>
          <li><a href="performance.html">????</a></li>
          <li><a href="clothes.html">????</a></li>
          <li><a href="aboutfalundafa.html">??????</a></li>
          <li><a href="awards.html">????</a></li>
        </ul>
      </div>
      <div class="aboutbutton"></div>
    </div>
    ```

    CSS:

    ```css
    #leftmanulist {
      background: url("images/abouttop.gif") no-repeat;
      float: left;
      margin: 2px 2px 5px 30px;
      padding: 39px 0 0 0;
      width: 237px;
    }
    #leftmanulist ul li { line-height: 35px; text-align: left; text-decoration: none; }
    #leftmanulist ul li a { text-decoration: none; }
    #leftmanulist ul li:hover { color: #0068FF; }
    #leftmanulist ul li a:hover { color: #0068FF; }
    #leftmanulist ul li.index { color: #0068FF; }
    #leftmanulist ul li.index a { color: #0068FF; }
    .abouttop { background: url("images/leftmanulist_z.gif") repeat-y; padding: 0 6px; position: relative; z-index: 0; width: 237px; }
    .aboutlist { position: relative; left: 28px; }
    .aboutbutton { background: url("images/leftmanulist_b.gif") no-repeat; width: 237px; height: 20px; position: relative; top: -17px; z-index: 2; }
    ```
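    A hedged guess at the cause: .aboutbutton is pulled up over the bottom of the list (top: -17px) and stacked above it (z-index: 2), so in Firefox/Chrome it intercepts the clicks on the last few links, while IE7's quirkier stacking handling happens to let them through. A minimal sketch of two fixes to try:

    ```css
    /* Either drop the button graphic below the list... */
    .aboutbutton {
        z-index: 0;
    }

    /* ...or raise the list above it (z-index requires a positioned element,
       which .aboutlist already is via position: relative). */
    .aboutlist {
        z-index: 3;
    }
    ```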

    Read the article

  • Indexing on only part of a field in MongoDB

    - by Rob Hoare
    Is there a way to create an index on only part of a field in MongoDB, for example on the first 10 characters? I couldn't find it documented (or asked about on here). The MySQL equivalent would be `CREATE INDEX part_of_name ON customer (name(10));`. Reason: I have a collection with a single field that varies in length from a few characters up to over 1000 characters, average 50 characters. As there are a hundred million or so documents, it's going to be hard to fit the full index in memory (testing with 8% of the data, the index is already 400 MB, according to stats). Indexing just the first part of the field would reduce the index size by about 75%. In most cases the search term is quite short; it's not a full-text search. A work-around would be to add a second field of 10 (lowercased) characters for each item, index that, then add logic to filter the results if the search term is over ten characters (and that extra field is probably needed anyway for case-insensitive searches, unless anybody has a better way). Seems like an ugly way to do it, though. [added later] I tried adding the second field, containing the first 12 characters from the main field, lowercased. It wasn't a big success. Previously, the average object size was 50 bytes, but I forgot that includes the _id and other overheads, so my main field length (there was only one) averaged nearer to 30 bytes than 50. Then, the second field's index contains the _id and other overheads too. Net result (for my 8% sample): the index on the main field is 415 MB and on the 12-byte field 330 MB — only a 20% saving in space, not worthwhile. I could duplicate the entire field (to work around the case-insensitive search problem), but realistically it looks like I should reconsider whether MongoDB is the right tool for the job (or just buy more memory and use twice as much disk space). [added even later] This is a typical document, with the source field and the short lowercased field:

    ```js
    {
        "_id" : ObjectId("505d0e89f56588f20f000041"),
        "q" : "Continental Airlines",
        "f" : "continental "
    }
    ```

    Indexes:

    ```js
    db.test.ensureIndex({q: 1});
    db.test.ensureIndex({f: 1});
    ```

    The 'f' index, working on a shorter field, is 80% of the size of the 'q' index. I didn't mean to imply I included the _id in the index, just that the index needs to use it somewhere to show where the index entry points, so it's an overhead that probably helps explain why a shorter key makes so little difference. Access to the index will be essentially random; no part of it is more likely to be accessed than any other. The total index size for the full file will likely be 5 GB, so it's not extreme for that one index. Adding some other fields for other search cases, plus their associated indexes and copies of data for lowercasing, does start to add up — which is why I started looking into a more concise index.

    Read the article
