Search Results

Search found 9032 results on 362 pages for 'fast math'.

Page 315 of 362

  • apache solr : sum of data resulting from a group by

    - by Terance Dias
    Hi, we have a requirement where we need to group our records by a particular field and take the sum of a corresponding numeric field, e.g. select userid, sum(click_count) from user_action group by userid; We are trying to do this using Apache Solr and found two ways of doing it: (1) using the field collapsing feature (http://blog.jteam.nl/2009/10/20/result-grouping-field-collapsing-with-solr/), but we found two problems with this: 1.1. it is not part of a release and is only available as a patch, so we are not sure if we can use it in production; 1.2. we do not get the sum back but individual counts, and we have to sum them on the client side. (2) Using the Stats Component along with faceted search (http://wiki.apache.org/solr/StatsComponent). This meets our requirement but it is not fast enough for very large data sets. I just wanted to know if anybody knows of any other way to achieve this. Appreciate any help. Thanks, Terance.
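
    For reference, the StatsComponent route described above comes down to a single request with stats.field and stats.facet; a minimal Python sketch, assuming a Solr instance at localhost and the field names from the question (the core URL and the way the output is consumed are illustrative only):

      import requests  # third-party HTTP library, assumed installed

      # Hypothetical Solr endpoint; field names taken from the question.
      SOLR_URL = "http://localhost:8983/solr/select"
      params = {
          "q": "*:*",
          "rows": 0,                     # stats only, no documents
          "stats": "true",
          "stats.field": "click_count",  # numeric field to aggregate
          "stats.facet": "userid",       # one aggregate block per userid
          "wt": "json",
      }

      resp = requests.get(SOLR_URL, params=params).json()
      per_user = resp["stats"]["stats_fields"]["click_count"]["facets"]["userid"]
      for userid, agg in per_user.items():
          print(userid, agg["sum"])      # sum(click_count) grouped by userid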

    Read the article

  • Big-O for calculating all routes from GPS data

    - by HH
    A non-critical GPS module uses lists because it needs to be modifiable: new routes added, new distances calculated, continuous comparisons. So I thought, but my team member wrote something that I find very hard to get into. His pseudo code: int k = 0; a[][] <- create mapModuleNearbyDotList array // CPU O(n) for(j = 1 to n) // O(nlog(m)) for(i = 1 to n) for(k = 1 to n) if(dot is nearby) adj[i][j] = min(adj[i][j], adj[i][k] + adj[k][j]); His ideas: transformation of the lists into tables; his worst-case time complexity is O(n^3), where n is the number of elements in his so-called table; as an exception to the last point, with a finite structure: O(mlog(n)), where n is the number of vertices and m is the number of neighbour vertices. My questions about his ideas: why waste resources transforming constantly-modified lists into a table? Is it fast? That is the only point where I to some extent agree, but I cannot understand the same upper limit n for each of the for-loops -- perhaps he assumed it to be circular. And why would the code take O(mlog(n)) as a finite structure? The term finite may be the wrong one -- explicit, perhaps?
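
    For what it's worth, the triple loop in that pseudo code resembles the Floyd-Warshall all-pairs shortest-path recurrence, which is where the O(n^3) figure comes from; a minimal sketch for comparison (note the intermediate-vertex loop k has to be the outermost one):

      INF = float("inf")

      def floyd_warshall(adj):
          """All-pairs shortest paths over an n x n distance matrix (INF = no edge)."""
          n = len(adj)
          dist = [row[:] for row in adj]      # work on a copy
          for k in range(n):                  # intermediate vertex, outermost
              for i in range(n):
                  for j in range(n):
                      if dist[i][k] + dist[k][j] < dist[i][j]:
                          dist[i][j] = dist[i][k] + dist[k][j]
          return dist                         # O(n^3) time, O(n^2) space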

    Read the article

  • Tracking down slow managed DLL loading

    - by Alex K
    I am faced with the following issue and at this point I feel like I'm severely lacking some sort of tool, I just don't know what that tool is, or what exactly it should be doing. Here is the setup: I have a 3rd party DLL that has to be registered in the GAC. This all works fine and good on pretty much every machine our software was deployed on before. But now we got 2 machines, seemingly identical to the ones we know work (they are cloned from the same image and stuffed with the same hardware, so pretty much the only difference is software settings, which I went over and over, and they seem fine). Now the problem: the DLL in the GAC takes a very long time to load. At least I believe this is the issue; what I can say definitively is that instantiating a single class from that DLL is the slow part. Once it is loaded, things fly as they always have. But while on known-good machines the DLL loads so fast that a timestamp in the log doesn't even change, on these 2 machines it takes over 1 min to load. Knowns: I have no access to the source, so I can't debug through the DLL. Our app is the only one that uses it (so there shouldn't be simultaneous access issues). There is only one version of this DLL in existence, so it shouldn't be a matter of version conflict. The GAC reference is being used (if I uninstall the DLL from the GAC, an exception will be thrown about the missing GAC reference). Could someone with a greater skill in debug-fu suggest what I can do to track down the root cause of this issue?

    Read the article

  • Form submits, bypassing the form validation, when button.click is used

    - by Cuty Pie
    I am using a form validation plugin as well as the HTML5 attribute required with input type=button or submit, and fancybox. When the button is clicked, both validation procedures work (notifications about empty inputs appear), but instead of stopping at that notification, the form is submitted. If I don't use the button.click function but a plain submit, then validation works perfectly. Plugin: http://validatious.org/ The JavaScript function: $(document).ready(function() { $('#send').click(function() { $(this).attr("value", 'Applying...'); var id = getParam('ID'); $.ajax({ type:'POST', url:"send.php", data:{option:'catapply', tittle:$('#tittle').val(), aud:aud, con:con, adv:adv, mid:$('#mid').val(), aud2:aud, cid:$('#cid').val(), scid:$('#scname option[selected="saa"]').val(), gtt:$('input[name=gt]:radio:checked').val(), sr:$(":input").serialize() + '&aud=' + aud}, success:function(jd) { $('#' + id).fadeOut("fast", function() { $(this).before("<strong>Success! Your form has been submitted, thanks :)</strong>"); setTimeout("$.fancybox.close()", 1000); }); } }); }); }); And the HTML is here: <form id="form2" class="validate" style="display: none" > <input type='submit' value='Apply' id="send" class="sendaf validate_any" name="jobforming"/>

    Read the article

  • Dealing with rapid tapping on Buttons

    - by Eric Burke
    I have a Button with an OnClickListener. For illustrative purposes, consider a button that shows a modal dialog: public class SomeActivity ... { protected void onCreate(Bundle state) { super.onCreate(state); findViewById(R.id.ok_button).setOnClickListener( new View.OnClickListener() { public void onClick(View v) { // This should block input new AlertDialog.Builder(SomeActivity.this) .setCancelable(true) .show(); } }); } Under normal usage, the alert dialog appears and blocks further input. Users must dismiss the dialog before they can tap the button again. But sometimes the button's OnClickListener is called twice before the dialog appears. You can duplicate this fairly easily by tapping really fast on the button. I generally have to try several times before it happens, but sooner or later I'll trigger multiple onClick(...) calls before the dialog blocks input. I see this behavior in Android 2.1 on the Motorola Droid phone. We've received 4 crash reports in the Market, indicating this occasionally happens to people. Depending on what our OnClickListeners do, this causes all sorts of havoc. How can we guarantee that blocking dialogs actually block input after the first tap?

    Read the article

  • Splitting a set of objects into several subsets of 'similar' objects

    - by doublep
    Suppose I have a set of objects, S. There is an algorithm f that, given a set S, builds a certain data structure D on it: f(S) = D. If S is large and/or contains vastly different objects, D becomes large, to the point of being unusable (i.e. not fitting in allotted memory). To overcome this, I split S into several non-intersecting subsets: S = S1 + S2 + ... + Sn and build Di for each subset. Using n structures is less efficient than using one, but at least this way I can fit into the memory constraints. Since the size of f(S) grows faster than S itself, the combined size of the Di is much less than the size of D. However, it is still desirable to reduce n, i.e. the number of subsets, or to reduce the combined size of the Di. For this, I need to split S in such a way that each Si contains "similar" objects, because then f will produce a smaller output structure if the input objects are "similar enough" to each other. The problem is that while "similarity" of objects in S and the size of f(S) do correlate, there is no way to compute the latter other than just evaluating f(S), and f is not quite fast. The algorithm I currently have is to iteratively add each next object from S into one of the Si, choosing the one that gives the least possible (at this stage) increase in combined Di size: for x in S: i = such i that size(f(Si + {x})) - size(f(Si)) is min Si = Si + {x} This gives practically useful results, but certainly pretty far from the optimum (i.e. the minimal possible combined size). Also, this is slow. To speed it up somewhat, I compute size(f(Si + {x})) - size(f(Si)) only for those i where x is "similar enough" to the objects already in Si. Is there any standard approach to this kind of problem? I know of the branch and bound family of algorithms, but it cannot be applied here because it would be prohibitively slow. My guess is that it is simply not possible to compute an optimal distribution of S into Si in reasonable time. But is there some common iteratively-improving algorithm?
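
    A direct transcription of that greedy loop, assuming a fixed number of candidate subsets and that f and a size() measure are supplied from outside (both names are placeholders for the real implementations):

      def greedy_partition(S, f, size, n_subsets):
          """Greedy heuristic from the question: put each x where f grows least.

          f(subset) builds the structure D; size(D) is its memory footprint.
          This makes O(|S| * n_subsets) calls to f and is not an optimal partition.
          """
          subsets = [set() for _ in range(n_subsets)]
          current = [size(f(s)) for s in subsets]
          for x in S:
              # Increase in combined size if x were added to each subset.
              deltas = [size(f(s | {x})) - current[i] for i, s in enumerate(subsets)]
              best = min(range(n_subsets), key=lambda i: deltas[i])
              subsets[best].add(x)
              current[best] += deltas[best]
          return subsets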

    Read the article

  • How slow are bit fields in C++

    - by Shane MacLaughlin
    I have a C++ application that includes a number of structures with manually controlled bit fields, something like #define FLAG1 0x0001 #define FLAG2 0x0002 #define FLAG3 0x0004 class MyClass { ' ' unsigned Flags; int IsFlag1Set() { return Flags & FLAG1; } void SetFlag1Set() { Flags |= FLAG1; } void ResetFlag1() { Flags &= 0xffffffff ^ FLAG1; } ' ' }; For obvious reasons I'd like to change this to use bit fields, something like class MyClass { ' ' struct Flags { unsigned Flag1:1; unsigned Flag2:1; unsigned Flag3:1; }; ' ' }; The one concern I have with making this switch is that I've come across a number of references on this site stating how slow bit fields are in C++. My assumption is that they are still faster than the manual code shown above, but is there any hard reference material covering the speed implications of using bit fields on various platforms, specifically 32bit and 64bit windows. The application deals with huge amounts of data in memory and must be both fast and memory efficient, which could well be why it was written this way in the first place.

    Read the article

  • posting a variable with jquery and receiving it on another page

    - by Billa
    I want to post a variable "id" to a page. I'm trying the following code, but I can't get the id value; it says "undefined". function box(){ var id=$(this).attr("id"); $("#votebox").slideDown("slow"); $("#flash").fadeIn("slow"); $.ajax({ type: "POST", //I want to post the "id" to the rating page. data: "id="+$(this).attr("id"), url: "rating.php", success: function(html){ $("#flash").fadeOut("slow"); $("#content").html(html); } }); } This function is called from the following code. In the following code, too, the id is posted to the page "votes.php", and it works fine, but in the above code, when I try to post the id to the rating.php page, it does not get sent. $(function(){ $("a.vote_up").click(function(){ the_id = $(this).attr('id'); $("span#votes_count"+the_id).fadeOut("fast"); $.ajax({ type: "POST", data: "action=vote_up&id="+$(this).attr("id"), url: "votes.php", success: function(msg) { $("span#votes_up"+the_id).fadeOut(); $("span#votes_up"+the_id).html(msg); $("span#votes_up"+the_id).fadeIn(); var that = this; box.call(that); } }); }); }); rating.php <? $id = $_POST['id']; echo $id; ?> The HTML part is: <a href='javascript:;' class='vote_up' id='<?php echo $row['id']; ?>'>Vote Up!</a> I'll appreciate any help.

    Read the article

  • Data structure to build and lookup set of integer ranges

    - by actual
    I have a set of uint32 integers; there may be millions of items in the set. 50-70% of them are consecutive, but in the input stream they appear in unpredictable order. I need to: (1) Compress this set into ranges to achieve a space-efficient representation. I have already implemented this using a trivial algorithm; since the ranges are computed only once, speed is not important here. After this transformation the number of resulting ranges is typically within 5,000-10,000, and many of them are single-item, of course. (2) Test membership of some integer; information about the specific range in the set is not required. This one must be very fast -- O(1). I was thinking about minimal perfect hash functions, but they do not play well with ranges. Bitsets are very space inefficient. Other structures, like binary trees, have O(log n) complexity; the worst thing with them is that the implementation makes many conditional jumps that the processor cannot predict well, giving poor performance. Is there any data structure or algorithm specialized in integer ranges that solves this task?
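
    Not the O(1) asked for, but the compact baseline most people reach for here is two flat sorted arrays (range starts and ends) probed with one binary search per lookup; a minimal sketch for comparison:

      import bisect

      class RangeSet:
          """Membership test over sorted, non-overlapping [start, end] ranges."""
          def __init__(self, ranges):
              ranges = sorted(ranges)              # e.g. [(1, 100), (150, 150), ...]
              self.starts = [s for s, _ in ranges]
              self.ends = [e for _, e in ranges]

          def __contains__(self, value):
              # Last range whose start is <= value, then check its end.
              i = bisect.bisect_right(self.starts, value) - 1
              return i >= 0 and value <= self.ends[i]

      rs = RangeSet([(1, 100), (150, 150), (200, 4096)])
      assert 42 in rs and 150 in rs and 149 not in rs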

    Read the article

  • How to optimize a SQL query to make it faster

    - by user502083
    Hello everyone: I have a very simple, small database; two of its tables are: Node (Node_ID, Node_name, Node_Date), where Node_ID is the primary key, and Citation (Origin_Id, Target_Id), with PRIMARY KEY (Origin_Id, Target_Id) and each column a FK into Node. Now I write a query that first finds all citations whose Origin_Id has a specific date, and then I want to know the target dates of those records. I'm using sqlite in Python; the Node table has 3000 records and Citation has 9000 records, and my query looks like this in a function: def cited_years_list(self, date): c=self.cur try: c.execute("""select n.Node_Date,count(*) from Node n INNER JOIN (select c.Origin_Id AS Origin_Id, c.Target_Id AS Target_Id, n.Node_Date AS Date from CITATION c INNER JOIN NODE n ON c.Origin_Id=n.Node_Id where CAST(n.Node_Date as INT)={0}) VW ON VW.Target_Id=n.Node_Id GROUP BY n.Node_Date;""".format(date)) cited_years=c.fetchall() self.conn.commit() print('Cited Years are : \n ',str(cited_years)) except Exception as e: print('Cited Years retrival failed ',e) return cited_years Then I call this function for some specific years, but it's crazy slow :( (around 1 min for a specific year). Although my query works fine, it is slow. Would you please give me a suggestion to make it faster? I'd appreciate any idea about optimizing this query :) I should also mention that I have indices on Origin_Id and Target_Id, so the inner join should be pretty fast, but it's not!!!
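
    For what it's worth, the nested sub-select can be flattened into a single three-way join, and the string .format replaced with a bound parameter (which also removes the injection risk); a sketch against the schema from the question, untested on the real data:

      import sqlite3

      def cited_years_list(conn, date):
          """Citation counts per target year, for origins from a given year."""
          cur = conn.cursor()
          cur.execute("""
              SELECT tgt.Node_Date, COUNT(*)
              FROM   Node     AS org
              JOIN   Citation AS cit ON cit.Origin_Id = org.Node_Id
              JOIN   Node     AS tgt ON tgt.Node_Id   = cit.Target_Id
              WHERE  CAST(org.Node_Date AS INT) = ?
              GROUP  BY tgt.Node_Date
          """, (int(date),))
          return cur.fetchall()

      conn = sqlite3.connect("citations.db")   # hypothetical database file
      print(cited_years_list(conn, 2005))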

    Read the article

  • Code Golf: Counting XML elements in file on Android

    - by CSharperWithJava
    Take a simple XML file formatted like this: <Lists> <List> <Note/> ... <Note/> </List> <List> <Note/> ... <Note/> </List> </Lists> Each node has some attributes that actually hold the data of the file. I need a very quick way to count the number of each type of element, (List and Note). Lists is simply the root and doesn't matter. I can do this with a simple string search or something similar, but I need to make this as fast as possible. Design Parameters: Must be in java (Android application). Must AVOID allocating memory as much as possible. Must return the total number of Note elements and the number of List elements in the file, regardless of location in file. Number of Lists will typically be small (1-4), and number of notes can potentially be very large (upwards of 1000, typically 100) per file. I look forward to your suggestions.
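
    The question requires Java on Android (XmlPullParser or SAX would be the natural fit there), but as a language-neutral illustration of the streaming count that never builds the whole tree in memory, here is a sketch using Python's iterparse:

      import xml.etree.ElementTree as ET

      def count_lists_and_notes(path):
          """Stream the file, counting List and Note elements as they close."""
          lists = notes = 0
          for _, elem in ET.iterparse(path, events=("end",)):
              if elem.tag == "List":
                  lists += 1
              elif elem.tag == "Note":
                  notes += 1
              elem.clear()          # discard the element to keep memory flat
          return lists, notes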

    Read the article

  • Java "Pool" of longs or Oracle sequence with reusable values

    - by Anthony Accioly
    Several months ago I implemented a solution to choose unique values from a range between 1 and 65535 (16 bits). This range is used to generate unique Route Target suffixes, which on this customer's massive network (it's a huge ISP) are a very disputed resource, so any freed index needs to become immediately available to the end user. To tackle this requirement I used a BitSet: I allocate an RT index with set and deallocate a suffix with clear. The method nextClearBit() can find the next available index. I handle synchronization / concurrency issues manually. This works pretty well for a small range... The entire index is small (around 10k), it is blazing fast and can be easily serialized into a Blob field. The problem is, some new devices can handle RTs of 32 bits (range 1 to 4294967296), which can't be managed with a BitSet (it would, by itself, consume around 600Mb, and be limited to the int range anyway). Even with this massive range available, the client still wants to free available Route Targets for the end user, mainly because the lowest ones (up to 65535) - which are compatible with old routers - are being heavily disputed. Before I tell the customer that this is impossible and that he will have to make do with my reusable index for the lower RTs (up to 65550) and use a database sequence for the other ones (which means that when the user frees one of those Route Targets, it will not become available again) - would anyone shed some light? Maybe some kind soul has already implemented a high performance number pool for Java (6 if it matters), or I am missing a killer feature of the Oracle database (11R2 if it matters)... Wishful thinking. Thank you very much in advance.
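
    One way to picture a reusable pool over the full 32-bit range without a 512 MB bitmap is a free list of disjoint ranges: allocate always hands out the lowest free value, release merges it back in. A minimal, single-threaded Python sketch of the idea (a Java or database-backed version would additionally need locking and duplicate-release checks):

      import bisect

      class RangePool:
          """Free values kept as sorted, disjoint [start, end] ranges."""
          def __init__(self, lo, hi):
              self.starts, self.ends = [lo], [hi]

          def allocate(self):
              if not self.starts:
                  raise RuntimeError("pool exhausted")
              value = self.starts[0]                 # lowest free value
              if self.starts[0] == self.ends[0]:
                  del self.starts[0], self.ends[0]   # range used up
              else:
                  self.starts[0] += 1
              return value

          def release(self, value):
              i = bisect.bisect_right(self.starts, value)
              if i > 0 and self.ends[i - 1] + 1 == value:
                  self.ends[i - 1] = value           # extend previous range
                  if i < len(self.starts) and self.starts[i] == value + 1:
                      self.ends[i - 1] = self.ends[i]    # bridge to next range
                      del self.starts[i], self.ends[i]
              elif i < len(self.starts) and self.starts[i] == value + 1:
                  self.starts[i] = value             # extend next range downwards
              else:
                  self.starts.insert(i, value)       # isolated single-value range
                  self.ends.insert(i, value)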

    Read the article

  • Is there any performance issue using Row_Number to implement table paging in Sql Server 2008?

    - by majkinetor
    I want to implement table paging using this method: SET @PageNum = 2; SET @PageSize = 10; WITH OrdersRN AS ( SELECT ROW_NUMBER() OVER(ORDER BY OrderDate, OrderID) AS RowNum ,* FROM dbo.Orders ) SELECT * FROM OrdersRN WHERE RowNum BETWEEN (@PageNum - 1) * @PageSize + 1 AND @PageNum * @PageSize ORDER BY OrderDate ,OrderID; Is there anything I should be aware of? The table has millions of records. Thx. EDIT: After using the suggested MAXROWS method for some time (which works really, really fast) I had to switch back to the ROW_NUMBER method because of its greater flexibility. I am also very happy with its speed so far (I am working with a view having more than 1M records and 10 columns). To use any kind of query I use the following modification: PROCEDURE [dbo].[PageSelect] ( @Sql nvarchar(512), @OrderBy nvarchar(128) = 'Id', @PageNum int = 1, @PageSize int = 0 ) AS BEGIN SET NOCOUNT ON Declare @tsql as nvarchar(1024) Declare @i int, @j int if (@PageSize <= 0) OR (@PageSize > 10000) SET @PageSize = 10000 -- never return more then 10K records SET @i = (@PageNum - 1) * @PageSize + 1 SET @j = @PageNum * @PageSize SET @tsql = 'WITH MyTableOrViewRN AS ( SELECT ROW_NUMBER() OVER(ORDER BY ' + @OrderBy + ') AS RowNum ,* FROM MyTableOrView WHERE ' + @Sql + ' ) SELECT * FROM MyTableOrViewRN WHERE RowNum BETWEEN ' + CAST(@i as varchar) + ' AND ' + cast(@j as varchar) exec(@tsql) END If you use this procedure, make sure you prevent SQL injection.

    Read the article

  • Program freezing when syncing an LDAP database (100+ entries added)

    - by djerry
    Hey guys, I'm updating an LDAP database. I need to add a list of users to the DB. I've written a simple foreach loop. There are about 180 users I need to add, but at the 128th user the program freezes. I know LDAP is really meant for querying (fast), and that adding and modifying entries will not go as smoothly as a search query, but is it normal that the program freezes while doing this? I'll post some code just in case. public static void SyncLDAPWithMySql(Novell.Directory.Ldap.LdapConnection _conn) { List<User> users = GetUsers(); int iteller = 0; foreach (User user in users) { if (!UserAlreadyInLdap(user, _conn)) { TelUser teluser = new TelUser(); teluser.Telephone = user.E164; teluser.Uid = user.E164; teluser.Company = "/"; teluser.Dn = ""; teluser.Name = "/"; teluser.DisplayName = "/"; teluser.FirstName = "/"; TelephoneDA.InsertUser(_conn, teluser ); } Console.WriteLine(iteller + " : " + user.E164); iteller++; } } private static bool UserAlreadyInLdap(User user, Novell.Directory.Ldap.LdapConnection _conn) { List<TelUser> users = TelephoneDA.GetAllEntries(_conn); foreach (TelUser teluser in users) { if (teluser.Telephone.Equals(user.E164)) return true; } return false; } public static int InsertUser(LdapConnection conn, TelUser user) { int iResponse = IsTelNumberUnique(conn, user.Dn, user.Telephone); if (iResponse == 0) { LdapAttributeSet attrSet = MakeAttSet(user); string dnForPhonebook = ConfigurationManager.AppSettings.Get("phonebookDn"); LdapEntry ent = new LdapEntry("uid=" + user.Uid + "," + dnForPhonebook, attrSet); try { conn.Add(ent); } catch (Exception ex) { Console.WriteLine(ex.Message); } } return iResponse; } Am I adding too many entries at a time? Thanks in advance.
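
    One thing that stands out in the code above is that UserAlreadyInLdap fetches every directory entry again for every user, so 180 inserts mean 180 full scans. Reading the existing numbers once into a set avoids that; a sketch with hypothetical stand-ins (get_ldap_telephones, add_ldap_user, get_users) for TelephoneDA.GetAllEntries, InsertUser and GetUsers:

      # Hypothetical stand-ins for the real directory/database calls.
      def get_ldap_telephones(): return ["1001", "1002"]
      def get_users(): return []            # objects exposing an .e164 attribute
      def add_ldap_user(user): pass

      def sync_ldap_with_mysql():
          existing = set(get_ldap_telephones())   # one read, not one per user
          for i, user in enumerate(get_users()):
              if user.e164 not in existing:
                  add_ldap_user(user)
                  existing.add(user.e164)         # keep the cached view current
              print(i, ":", user.e164)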

    Read the article

  • Hang during databinding of a large amount of data to a WPF DataGrid

    - by nihi_l_ist
    I'm using the WPFToolkit DataGrid control and do the binding this way: <WpfToolkit:DataGrid x:Name="dgGeneral" SelectionMode="Single" SelectionUnit="FullRow" AutoGenerateColumns="False" CanUserAddRows="False" CanUserDeleteRows="False" Grid.Row="1" ItemsSource="{Binding Path=Conversations}" > public List<CONVERSATION> Conversations { get { return conversations; } set { if (conversations != value) { conversations = value; NotifyPropertyChanged("Conversations"); } } } public event PropertyChangedEventHandler PropertyChanged; public void NotifyPropertyChanged(string propertyName) { if (PropertyChanged != null) { PropertyChanged(this, new PropertyChangedEventArgs(propertyName)); } } public void GenerateData() { BackgroundWorker bw = new BackgroundWorker(); bw.WorkerSupportsCancellation = bw.WorkerReportsProgress = true; List<CONVERSATION> list = new List<CONVERSATION>(); bw.DoWork += delegate { list = RefreshGeneralData(); }; bw.RunWorkerCompleted += delegate { try { Conversations = list; } catch (Exception ex) { CustomException.ExceptionLogCustomMessage(ex); } }; bw.RunWorkerAsync(); } And then in the main window I call GenerateData() after setting the DataContext of the window to an instance of the class containing GenerateData(). RefreshGeneralData() returns the data I want, and it returns it fast. Overall there are nearly 2000 records and 6 columns (I'm not posting the code I used during the grid's initialization, because I don't think it can be the reason), and the grid hangs for almost 10 secs!

    Read the article

  • Why do I get errors when using unsigned integers in an expression with C++?

    - by neuviemeporte
    Given the following piece of (pseudo-C++) code: float x=100, a=0.1; unsigned int height = 63, width = 63; unsigned int hw=31; for (int row=0; row < height; ++row) { for (int col=0; col < width; ++col) { float foo = x + col - hw + a * (col - hw); cout << foo << " "; } cout << endl; } The values of foo are screwed up for half of the array, in places where (col - hw) is negative. I figured because col is int and comes first, that this part of the expression is converted to int and becomes negative. Unfortunately, apparently it doesn't, I get an overflow of an unsigned value and I've no idea why. How should I resolve this problem? Use casts for the whole or part of the expression? What type of casts (C-style or static_cast<...)? Is there any overhead to using casts (I need this to work fast!)? EDIT: I changed all my unsigned ints to regular ones, but I'm still wondering why I got that overflow in this situation.

    Read the article

  • web service slowdown

    - by user238591
    Hi, I have a web service slowdown. My (web) service is in gSOAP & managed C++. It's not IIS/Apache hosted, but it speaks XML. My client is in .NET. The service computation time is light (<0.1s to prepare a reply). I expect the service to be smooth, fast and have good availability. I have about 100 clients; a response time of 1s is mandatory. Clients make about 1 request per minute. Clients check for web service presence with a TCP open-port test. So, to avoid possible congestion, I turned gSOAP KeepAlive to false. Up to there everything runs fine: I barely see connections in TCPView (Sysinternals). A new synchronisation program now calls the service in a loop. It's a higher load, but everything is processed in less than 30 seconds. With Sysinternals TCPView, I see that about a thousand connections are in TIME_WAIT. They slow down the service, and it now takes seconds for the service to reply. Could it be that I need to reset the SoapHttpClientProtocol connection? Has anyone seen TIME_WAIT ghosts with a web service called in a loop?

    Read the article

  • A question about the basics of how LINQ to SQL works

    - by Alex
    I just started learning LINQ to SQL, and so far I'm impressed with the ease of use and good performance. I used to think that when doing LINQ queries like from Customer in DB.Customers where Customer.Age > 30 select Customer it would get all customers from the database ("SELECT * FROM Customers"), move them into a Customers array and then search that array using .NET methods. This is very inefficient: what if there are hundreds of thousands of customers in the database? Making such big SELECT queries would kill the web application. Now, after experiencing how fast LINQ to SQL actually is, I have started to suspect that when doing the query I just wrote, LINQ somehow converts it to a SQL query string SELECT * FROM Customers WHERE Age > 30 and only runs the query when necessary. So my question is: am I right? And when is the query actually run? The reason why I'm asking is not only because I want to understand how it works in order to build well-optimized applications, but because I came across the following problem. I have 2 tables, one of them is Books, the other has information on how many books were sold on certain days. My goal is to select books that had at least 50 sales/day in the past 10 days. It's done with this simple query: from Book in DB.Books where (from Sale in DB.Sales where Sale.SalesAmount >= 50 and Sale.DateOfSale >= DateTime.Now.AddDays(-10) select Sale.BookID).Contains(Book.ID) select Book The point is, I have to use the checking part in several queries, so I decided to create an array with the IDs of all popular books: var popularBooksIDs = from Sale in DB.Sales where Sale.SalesAmount >= 50 and Sale.DateOfSale >= DateTime.Now.AddDays(-10) select Sale.BookID; BUT when I try to do the query now: from Book in DB.Books where popularBooksIDs.Contains(Book.ID) select Book It doesn't work! That's why I think that we can't use these kinds of shortcuts in LINQ to SQL queries, just like we can't use them in real SQL. We have to create straightforward queries, am I right?

    Read the article

  • Javascript memory leak/ performance issue?

    - by Tom
    I just cannot for the life of me figure out this memory leak in Internet Explorer. insertTags simply takes a string str and places each word within start and end tags for HTML (usually anchor tags). transliterate is for Arabic numerals: it replaces the normal digits 0-9 with the &#..n; XML entities for their Arabic counterparts. fragment = document.createDocumentFragment(); for (i = 0, e = response.verses.length; i < e; i++) { fragment.appendChild((function(){ p = document.createElement('p'); p.setAttribute('lang', (response.unicode) ? 'ar' : 'en'); p.innerHTML = ((response.unicode) ? (response.surah + ':' + (i+1)).transliterate() : response.surah + ':' + (i+1)) + ' ' + insertTags(response.verses[i], '<a href="#" onclick="window.popup(this);return false;" class="match">', '</a>'); try { return p } finally { p = null; } })()); } params[0].appendChild( fragment ); fragment = null; I would love some links other than MSDN and about.com, because neither of them has sufficiently explained to me why my script leaks memory. I am sure this is the problem, because without it everything runs fast (but nothing displays). I've read that doing a lot of DOM manipulation can be dangerous, but the for loop runs at most 286 times (the number of verses in surah 2, the longest surah in the Qur'an).

    Read the article

  • iPhone: Low memory crash...

    - by MacTouch
    Once again I'm hunting memory leaks and other crazy mistakes in my code. :) I have a cache of frequently used files (images, data records, etc.) with a TTL of about one week and a size-limited cache (100MB). There are sometimes more than 15000 files in a directory. On application exit the cache writes a control file with the current cache size along with other useful information. If the application crashes for some reason (sh.. happens sometimes), in that case I have to calculate the size of all files on application start to make sure I know the cache size. My app crashes at this point because of low memory and I have no clue why. The memory leak detector does not show any leaks at all. I do not see any either. What's wrong with the code below? Is there any other fast way to calculate the total size of all files within a directory on the iPhone? Maybe without enumerating the whole contents of the directory? The code is executed on the main thread. NSUInteger result = 0; NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init]; NSDirectoryEnumerator *dirEnum = [[[NSFileManager defaultManager] enumeratorAtPath:path] retain]; int i = 0; while ([dirEnum nextObject]) { NSDictionary *attributes = [dirEnum fileAttributes]; NSNumber* fileSize = [attributes objectForKey:NSFileSize]; result += [fileSize unsignedIntValue]; if (++i % 500 == 0) { // I tried lower values too [pool drain]; } } [dirEnum release]; dirEnum = nil; [pool release]; pool = nil; Thanks, MacTouch

    Read the article

  • Jquery Mouseenter Click to Remove Class Not Working

    - by Sundance.101
    I'm really hoping someone can help. I have an unordered list of anchors that fades in opacity (the CSS defaults it to 0.7) on mouseenter, and out again on mouseleave. On click, I want to add a class that makes the opacity stay at full. I got that far, but removing the class from the matched elements doesn't work at the moment - the other items that have had the class stay at full opacity, too. Here is the jQuery: $(document).ready(function(){ $("#nav a").mouseenter(function(){ $(this).fadeTo("slow",1); $("#nav a").click(function(){ $(".activeList").removeClass("activeList"); //THIS PART ISN'T WORKING $(this).addClass("activeList"); }); }); $("#nav a").mouseleave(function(){ if (!$(this).hasClass("activeList")) { $(this).fadeTo("fast",0.7); } }); }); I think it's because I'm stuck in the element because of mouseenter and can only affect (this). I have tried .bind/.unbind, have tried the add/remove class on its own (it worked) and a few other things, but no luck so far! Any suggestions would be greatly appreciated.

    Read the article

  • How to optimize a PostgreSQL server for a "write once, read many"-type infrastructure?

    - by mhu
    Greetings, I am working on a piece of software that logs entries (and related tagging) in a PostgreSQL database for storage and retrieval. We never update any data once it has been inserted; we might remove it when an entry gets too old, but this is done at most once a day. Stored entries can be retrieved by users. The insertion of new entries can happen rather fast and regularly, so the database will commonly hold several million elements. The tables used are pretty simple: one table for ids, raw content and insertion date; and one table storing tags and their values associated with an id. User searches mostly concern tag values, so SELECTs usually consist of JOIN queries on ids over the two tables. To sum it up: 2 tables; lots of INSERTs; no UPDATEs; some DELETEs, once a day at most; some user-generated SELECTs with JOINs; a huge data set. What would an optimal server configuration (software and hardware; I assume, for example, that RAID10 could help) be for my PostgreSQL server, given these requirements? By optimal, I mean one that allows SELECT queries to take a reasonably small amount of time. I can provide more information about the current setup (like tables, indexes ...) if needed.
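
    On the schema side (as opposed to hardware), the usual starting point for that SELECT/JOIN pattern is a composite index on the tag table covering the tag lookup plus the id used in the join; a hedged sketch via psycopg2, with all table and column names invented for illustration rather than taken from the question:

      import psycopg2

      conn = psycopg2.connect("dbname=logdb")   # hypothetical connection string
      cur = conn.cursor()

      # Illustrative index definitions -- adapt names to the real schema.
      cur.execute("CREATE INDEX idx_tags_lookup ON tags (tag_name, tag_value, entry_id)")
      cur.execute("CREATE INDEX idx_entries_date ON entries (inserted_at)")
      conn.commit()

      # Typical user search: entries carrying a given tag value.
      cur.execute("""
          SELECT e.id, e.raw_content
          FROM   entries e
          JOIN   tags    t ON t.entry_id = e.id
          WHERE  t.tag_name = %s AND t.tag_value = %s
      """, ("severity", "error"))
      rows = cur.fetchall()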

    Read the article

  • QListWidget drag and drop items disappearing from list

    - by ppalasek
    Hello, I'm having trouble implementing a QListWidget with custom items that can be reordered by dragging and dropping. The problem is when I make a fast double click (a very short drag&drop) on an item, the item sometimes disappears from the QListWidget. This is the constructor for my Widget: ListPopisiDragDrop::ListPopisiDragDrop(QWidget *parent) : QListWidget(parent) { setSelectionMode(QAbstractItemView::SingleSelection); setDragEnabled(true); viewport()->setAcceptDrops(true); setDefaultDropAction(Qt::MoveAction); setDropIndicatorShown(true); setDragDropMode(QAbstractItemView::InternalMove); } also the drop event: void ListPopisiDragDrop::dropEvent(QDropEvent *event){ int startRow=currentIndex().row(); QListWidget::dropEvent(event); int endRow=currentIndex().row(); //more code... } Custom items are made by implementing paint() and sizeHint() functions from QAbstractItemDelegate. When the problem with disappearing items happens, the dropEvent isn't even called. I really don't know what is happening and if I'm doing something wrong. Any help is appreciated. Thanks! Edit: I'm running the application on a Symbian S60 5th edition phone. Edit2: If I add this line to the constructor: setDragDropOverwriteMode(true); the item in the list still disappears, but an empty row stays in it's place.

    Read the article

  • Smallcaps / multiple fonts and bolding using 'DrawString' in GDI+

    - by Simon_Weaver
    I want to write out some text using smallcaps in combination with different fonts for different words. To clarify I might want the message 'Welcome to our New Website' which is generated into a PNG file for the header of a page. The text will be smallcaps - everything is capitalized but the 'W', 'N' and 'W' are slightly larger. The 'New Website' will be in a different font than the rest of the text. Is there a way i can do this without doing it completely manually? Doing something like this is conceptually what I want to do : graphics.DrawString("<font size=2>W</font>ELCOME TO OUR <b><font size=2>N</font>" + "EW <font size=2>W</font>EBSITE</b>"); The best approach I could find so far is here, but I'm worried that I'll go to all the trouble to do this manually and end up with some horrible kerning or tracking problems. Edit: I should have mentioned that this is being done within ASP.NET so it needs to be fast and as lean as possible. I want it to be automated so I can localize easily and not have to create tonnes of little images.

    Read the article

  • Load In and Animate content

    - by crozer
    Hello, I have a little issue concerning an animation-effect which loads a certain div into the body of the site. Let me be more precise: I have a div with the id 'contact': <div id="contact">content</div> The jquery code loads the contents within that div, when I press the link with the id 'ajax_contact': <a href="#" id="ajax_contact">link</a>. The code is working perfectly. However, I want #contact to be HIDDEN when the site loads, i.e. the default state must be non-visible. Only when the user clicks the link #ajax_contact, the div must appear. Please have a look at the jquery code: $(document).ready(function() { var hash = window.location.hash.substr(1); var href = $('#ajax_contact').each(function(){ var href = $(this).attr('href'); if(hash==href.substr(0,href.length-5)){ var toLoad = hash+'.html #contact'; $('#contact').load(toLoad) } }); $('#ajax_contact').click(function(){ var toLoad = $(this).attr('href')+' #contact'; $('#contact').hide('fast',loadContent); $('#load').remove(); $('body').append('<span id="load">LOADING...</span>'); $('#load').fadeIn('normal'); window.location.hash = $(this).attr('href').substr(0,$(this).attr('href').length-5); function loadContent() { $('#contact').load(toLoad,'',showNewContent()) } function showNewContent() { $('#contact').show('normal',hideLoader()); } function hideLoader() { $('#load').fadeOut('normal'); } return false; }); }); I am not sure whether I must change something inside the HTML, but I believe the key is inside the jquery-code. I also tried giving the #contact a CSS style of visible:none, yet this loops and makes the jquery impossible to load the #contact in. I hope I've explained myself well; thank you very much in advance. Chris

    Read the article
