Search Results

Search found 3240 results on 130 pages for 'groupwise maximum'.

Page 100/130

  • WordPress > Optimizing a query to show recent posts with a "View All" link when postcount exceeds maxPosts

    - by Scott B
    I have a setting in my theme that allows the site owner to set the maximum number of posts ($maxPosts) to display in a "Recent Posts" menu. I'm using a custom script to generate the recent posts (because the Recent Posts widget does not highlight the current page, which I need for my CSS). My menu is also set up to display a "View All" link below the post listing, but only if the actual post count is greater than $maxPosts. I'm trying to work out the best method for getting the post count and comparing it to $maxPosts in order to determine whether or not to show the "View All" link. I'm sure there's probably a better way, but here's my code. I'm looking to optimize it to support very large post counts...

        $cat = get_cat_ID('excludeFromRecentPosts');
        $catHidden = get_cat_ID('hidden');
        $myquery = new WP_Query();
        $myquery->query(array(
            'cat' => "-$cat,-$catHidden",
            'post__not_in' => get_option('sticky_posts') // WP_Query expects 'post__not_in' (two underscores)
        ));
        $myrecentpostscount = $myquery->found_posts;
        if ($myrecentpostscount > 0) { // show the menu
            if ($myrecentpostscount > $maxPosts) {
                // show "View All" link
            }
        }

    I really only need to determine whether the total post count from the query is greater than the $maxPosts setting in order to decide whether to show the "View All" link. So I'm wondering, in the case where there are thousands of posts matching the criteria, whether I can avoid performance issues by not counting all of them: I just need to count up to maxPosts + 1. That's where I'm struggling a bit, because the user could elect to set maxPosts = -1, which means they want to show all posts. But this would be impractical, so I would probably set an upper limit of 20...

    Read the article

  • WCF configuration and ISA Proxies

    - by Morten Louw Nielsen
    Hi, I have a setup with a .NET WCF service hosted on IIS. The client apps connect to the service through a set of ISA proxies. I don't know how many, and don't know about their configuration etc. In the client apps I open a client to the service and make several calls via the same client. It works great in my office, but when I deploy at the customer (going through the ISAs), the connection breaks after some calls. In a successful case the client lives at most a few seconds, but is even that too much? I think there might be several proxies; maybe it's using load balancing. The pseudo code is something like this:

        WcfClient myClient = new WcfClient();
        foreach (WorkItem Item in WorkItemsStack)
            myClient.ProcessItem(Item);
        myClient.Close();

    I am wondering whether I have to do something like this instead:

        foreach (WorkItem Item in WorkItemsStack)
        {
            WcfClient myClient = new WcfClient();
            myClient.ProcessItem(Item);
            myClient.Close();
        }

    Anyone with experience in this field? Kind regards, Morten, Denmark

    Read the article

  • Draw a column graph with no space between columns

    - by Andrew Shepherd
    I am using the WPF Toolkit, and am trying to render a graph that looks like a histogram. In particular, I want each column to be right up against the next column; there should be no gaps between columns. There are a number of components that you apply when creating a column graph (see the example XAML below). Does anybody know if there is a property you can set on one of the elements that controls the width of the white space between columns?

        <charting:Chart Height="600" Width="Auto" HorizontalAlignment="Stretch"
                        Name="MyChart" Title="Column Graph" LegendTitle="Legend">
            <charting:ColumnSeries Name="theColumnSeries" Title="Series A"
                                   IndependentValueBinding="{Binding Path=Name}"
                                   DependentValueBinding="{Binding Path=Population}"
                                   Margin="0">
            </charting:ColumnSeries>
            <charting:Chart.Axes>
                <charting:LinearAxis Orientation="Y" Minimum="200000" Maximum="2500000" ShowGridLines="True" />
                <charting:CategoryAxis Name="chartCategoryAxis" />
            </charting:Chart.Axes>
        </charting:Chart>

    Read the article

  • How to avoid saving a blank model whose attributes can all be blank

    - by auralbee
    Hello people, I have two models with a HABTM association, let's say Book and Author:

        class Book
          has_and_belongs_to_many :authors
        end

        class Author
          has_and_belongs_to_many :books
        end

    The author has a set of attributes (e.g. first_name, last_name, age) that can all be blank (see validation):

        validates_length_of :first_name, :maximum => 255, :allow_blank => true, :allow_nil => false

    In the books_controller, I do the following to append all authors to a book in one step:

        @book = Book.new(params[:book])
        @book.authors.build(params[:book][:authors].values)

    My question: what would be the easiest way to avoid saving authors whose fields are all blank, to prevent too much "noise" in the database? At the moment, I do the following:

        validate :must_have_some_data

        def must_have_some_data
          empty = true
          hash = self.attributes
          hash.delete("created_at")
          hash.delete("updated_at")
          hash.each_value do |value|
            empty = false if value.present?
          end
          if (empty)
            errors.add_to_base("Fields do not contain any data.")
          end
        end

    Maybe there is a more elegant, Rails-like way to do that. Thanks.

    Read the article

  • AVL tree in C language

    - by I_S_W
    Hey all, I am currently doing a project that requires the use of AVL trees. The insert function I wrote for the AVL does not seem to be working; it works for 3 or 4 nodes at maximum. I would really appreciate your help. The attempt is below:

        Tree insert(Tree t, char name[80], int num)
        {
            if (t == NULL) {
                t = (Tree) malloc(sizeof(struct node));
                if (t != NULL) {
                    strcpy(t->name, name);
                    t->num = num;
                    t->left = NULL;
                    t->right = NULL;
                    t->height = 0;
                }
            }
            else if (strcmp(name, t->name) < 0) {
                t->left = insert(t->left, name, num);
                if ((height(t->left) - height(t->right)) == 2) {
                    if (strcmp(name, t->left->name) < 0)
                        t = s_rotate_left(t);
                    else
                        t = d_rotate_left(t);
                }
            }
            else if (strcmp(name, t->name) > 0) {
                t->right = insert(t->right, name, num);
                if ((height(t->right) - height(t->left)) == 2) {
                    if (strcmp(name, t->right->name) > 0)
                        t = s_rotate_right(t);
                    else
                        t = d_rotate_right(t);
                }
            }
            t->height = max(height(t->left), height(t->right)) + 1;
            return t;
        }

    Read the article

  • WCF Streaming not working at server

    - by Radhi
    Hi, I have used a WCF service to transfer large files in chunks to the server. For that I referenced this article: http://kjellsj.blogspot.com/2007/02/wcf-streaming-upload-files-over-http.html. I have configured my application on IIS on my machine and it works fine there; it allows uploads of up to 64MB. But when we published the site, it allows a maximum of only 30MB; if I try to upload more than that I get error 404 - resource not found. Here is the binding config I have used:

        <basicHttpBinding>
            <!-- buffer: 64KB; max size: 64MB -->
            <binding name="FileTransferServicesBinding"
                     closeTimeout="00:01:00" openTimeout="00:01:00"
                     receiveTimeout="00:10:00" sendTimeout="00:01:00"
                     transferMode="Streamed" messageEncoding="Mtom"
                     maxBufferSize="65536" maxReceivedMessageSize="67108864">
                <security mode="None">
                    <transport clientCredentialType="None"/>
                </security>
            </binding>
        </basicHttpBinding>

    Please suggest where I am missing anything, and if more code is required please let me know. Thanks in advance.

    Read the article

  • How to launch a browser with a given URL within the same tab

    - by Bojan Milankovic
    Here is some code to launch the S60 browser with a given URL:

        // use the StartDocument api
        param->Des().Format(_L("4 %S"), &aUrl);
        TUid id(TUid::Uid(browserUid));
        TApaTaskList taskList(CEikonEnv::Static()->WsSession());
        TApaTask task = taskList.FindApp(id);
        if (task.Exists())
        {
            HBufC8* param8 = HBufC8::NewL(param->Length());
            param8->Des().Append(*param);
            task.SendMessage(TUid::Uid(0), *param8); // Uid is not used
            // CleanupStack::PopAndDestroy(); // param8
        }
        else
        {
            RApaLsSession appArcSession;
            User::LeaveIfError(appArcSession.Connect()); // connect to AppArc server
            TThreadId id;
            appArcSession.StartDocument(*param, TUid::Uid(browserUid), id);
            appArcSession.Close();
        }

    However, this seems to open a new tab for each URL, and once the number of tabs reaches the internal WebKit limit (5), it raises an error saying that the maximum number of pop-up windows has been reached. Is there any workaround for this? Is it possible to open the native S60 browser within the same one tab?

    Read the article

  • Flush kernel's TCP buffer with `MSG_MORE`-flagged packets

    - by timn
    send()'s man page reveals the MSG_MORE flag, which is asserted to act like TCP_CORK. I have a wrapper function around send():

        int SocketConnection_Write(SocketConnection *this, void *buf, int len) {
            errno = 0;
            int sent = send(this->fd, buf, len, MSG_NOSIGNAL);
            if (errno == EPIPE || errno == ENOTCONN) {
                throw(exc, &SocketConnection_NotConnectedException);
            } else if (errno == ECONNRESET) {
                throw(exc, &SocketConnection_ConnectionResetException);
            } else if (sent != len) {
                throw(exc, &SocketConnection_LengthMismatchException);
            }
            return sent;
        }

    Assuming I want to use the kernel buffer, I could go with TCP_CORK: enable it whenever necessary and then disable it to flush the buffer. But on the other hand, that creates the need for an additional system call. Thus, using MSG_MORE seems more appropriate to me. I'd simply change the above send() line to:

        int sent = send(this->fd, buf, len, MSG_NOSIGNAL | MSG_MORE);

    According to lwn.net, packets will be flushed automatically if they are large enough:

        If an application sets that option on a socket, the kernel will not send out
        short packets. Instead, it will wait until enough data has shown up to fill
        a maximum-size packet, then send it. When TCP_CORK is turned off, any
        remaining data will go out on the wire.

    But this section only refers to TCP_CORK. Now, what is the proper way to flush MSG_MORE packets? I can only think of two possibilities:

    1) Call send() with an empty buffer and without MSG_MORE being set
    2) Re-apply the TCP_CORK option as described on this page

    Unfortunately the whole topic is very poorly documented and I couldn't find much on the Internet. I am also wondering how to check that everything works as expected: obviously running the server through `strace` is not an option. So would the simplest way be to use `netcat` and then look at its `strace` output? Or will the kernel handle traffic differently when it is transmitted over a loopback interface?

    Read the article

  • How do I reset the scale/zoom of a web app on an orientation change on the iPhone?

    - by Elisabeth
    I'm having the same problem that a couple of others have had with getting the correct behavior in a web app on an orientation change, and there doesn't seem to be an obvious solution - I've seen this question asked a couple of times on Stack Overflow and no one's yet been able to answer it. When I start the app in portrait mode, it works fine. Then I rotate into landscape and it's scaled up. To get it to scale correctly for landscape mode I have to double tap on something twice: first to zoom all the way in (the normal double tap behavior) and again to zoom all the way out (again, the normal double tap behavior). When it zooms out, it zooms out to the correct NEW scale for landscape mode. Switching back to portrait seems to work more consistently; that is, it handles the zoom so that the scale is correct when the orientation changes back to portrait. I am trying to figure out whether this is a bug, or something that can be fixed with JavaScript. In the viewport meta tag, I am setting initial-scale to 1.0 and I am NOT setting minimum or maximum scale (nor do I want to). I am setting the width to device-width. Any ideas? I know a lot of people would be grateful to have a solution, as it seems to be a persistent problem. Thank you!

    Read the article

  • How to limit speed with BMW JSDK on 116i?

    - by lexicore
    I'm experimenting with the BMW Java SDK on the new BMW 116i Innovation Package. Basic things like turning the lights on and off, or starting and stopping the motor, work fine. What I'm trying to do now is to write a carlet which would limit the speed to the maximum configured in the driver profile. The driver identity will be detected as usual via the RFID reader. My problem is that though I can read the speed from the tachometer, I can't really limit the speed. Here's what I've got working so far:

        public class SpeenControllingCarlet extends GenericCarlet {
            public void start(final VehicleModel model) throws CarletException {
                RfidReader rfidReader = (RfidReader) model.getDevice(Devices.DRIVER_RFID_READER);
                Rfid rfid = rfidReader.getRfid();
                DriverProfile driverProfile = model.getDriverProfileRegistry()
                        .getDriverProfile(rfid.toString());
                if (driverProfile == null) {
                    return;
                }
                final Double maxAllowedSpeed = Double.valueOf(driverProfile
                        .getCustomAttribute("maxAllowedSpeed", "190"));
                Tachometer tachometer = (Tachometer) model.getDevice(Devices.TACHOMETER);
                tachometer.addSpeedListener(new SpeedListener() {
                    public void onSpeedChanged(SpeedChangedEvent speedChangedEvent) {
                        if (speedChangedEvent.getCurrentSpeed() > maxAllowedSpeed) {
                            Horn horn = (Horn) model.getDevice(Devices.HORN);
                            horn.beep(440, 2000);
                        }
                    }
                });
            }
        }

    This will just beep for two seconds if the driver goes faster than the driver profile allows. My question is: is there a possibility to actually limit the speed (not just silly beeping)?

    Read the article

  • ASP.NET Applications Requests/Sec suddenly jumps to a value of about 70 million/sec. on 8 core web servers

    - by Subhrajit Roy
    We are doing performance testing of an ASP.NET web application with VSTS 2008. We start with 2000 users and slowly ramp up to 5000 users (reaching this user load at around 2.5 hours after the tests start; after this we stay at this user load). The total test duration is about 6 hours. During these runs we have found that the counter Requests/Sec (under category ASP.NET Applications) suddenly spikes to values of 36-72 million!!! This keeps happening intermittently, i.e. we see this issue once in every 3 performance runs that we do on the same application. In our testing environment we have 4 web servers, and interestingly enough we have found that this issue occurs only on the 8 core web servers.

    Summarizing...

    Issue: The counter Requests/Sec (under category ASP.NET Applications) suddenly jumps to a value of about 70 million/sec. on 8 core web servers. This results in an increase in SQL Server connections opened by the application. Response time goes for a toss. Error rates also show similar behaviour. However, the counter ISAPI Extension Requests/sec does not show any abnormal increase; the graph of this counter almost overlaps with that of Requests/Sec until the time the spike appears. When the spike appears, this counter (ISAPI Extension Requests/sec) actually shows a drop.

    Test settings: Performance test run with Visual Studio Team System 2008. Soak test run for 6 hours. Maximum user load 5000 users; this load is attained at about 2.5 hours into the run and maintained for the remaining duration (i.e. for around 3.5 more hours). The issue is reproducible, though it happens intermittently (i.e. it occurs once in three or four runs).

    Test environment: Web site deployed on 4 web servers (Windows Server 2003). Of these, 2 are 4 core machines and the remaining 2 are 8 core ones. .NET Framework 3.5 SP1 installed on all 4 web servers. Application hosted on IIS 6.0, run in worker process isolation mode.

    Read the article

  • Lucene stop words not removed during searching

    - by iamrohitbanga
    I have created a Lucene index with the following analyzer:

        public class DocSpecAnalyzer extends Analyzer {
            private static CharArraySet stopSet; // = new HashSet<String>(Arrays.asList()); // STOP_WORDS_SET

            static {
                stopSet = new CharArraySet(FDConstants.stopwords, true);
                // uncommenting this displays all the stop words
                // for (String s : FDConstants.stopwords) {
                //     System.out.println(s);
                // }
            }

            /**
             * Specifies whether deprecated acronyms should be replaced with HOST type.
             * See {@linkplain https://issues.apache.org/jira/browse/LUCENE-1068}
             */
            private final boolean enableStopPositionIncrements;
            private final Version matchVersion;

            public DocSpecAnalyzer(Version matchVersion) {
                this.matchVersion = matchVersion;
                enableStopPositionIncrements = StopFilter.getEnablePositionIncrementsVersionDefault(matchVersion);
            }

            public TokenStream tokenStream(String fieldName, Reader reader) {
                StandardTokenizer tokenStream = new StandardTokenizer(matchVersion, reader);
                tokenStream.setMaxTokenLength(DEFAULT_MAX_TOKEN_LENGTH);
                TokenStream result = new StandardFilter(tokenStream);
                result = new LowerCaseFilter(result);
                result = new StopFilter(enableStopPositionIncrements, result, stopSet);
                result = new PorterStemFilter(result);
                return result;
            }

            /** Default maximum allowed token length */
            public static final int DEFAULT_MAX_TOKEN_LENGTH = 255;
        }

    Now when I search for documents with a query containing stop words, I get hits for the stop words too. As I was posting this problem, I found the bug: it is because http://lucene.apache.org/java/2_9_2/api/contrib-misc/org/apache/lucene/queryParser/analyzing/AnalyzingQueryParser.html does not handle stop words. Is there a substitute? Update: I forgot to mention that I need to do a fuzzy search; that is why I am using an AnalyzingQueryParser.
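
    Since AnalyzingQueryParser is the piece that fails to strip stop words, one workaround is to run the raw query text through the same analyzer by hand and build the fuzzy query only from the tokens that survive. A minimal sketch against the Lucene 2.9-era API used above; the field name and the 0.8 similarity are illustrative assumptions, not part of the question:

        import java.io.IOException;
        import java.io.StringReader;
        import org.apache.lucene.analysis.Analyzer;
        import org.apache.lucene.analysis.TokenStream;
        import org.apache.lucene.analysis.tokenattributes.TermAttribute;
        import org.apache.lucene.index.Term;
        import org.apache.lucene.search.BooleanClause;
        import org.apache.lucene.search.BooleanQuery;
        import org.apache.lucene.search.FuzzyQuery;
        import org.apache.lucene.search.Query;

        public class FuzzyQueryBuilder {
            // Tokenize the query text with the index-time analyzer, so stop
            // words (and stemming) are applied before FuzzyQuery sees anything.
            public static Query build(Analyzer analyzer, String field, String queryText)
                    throws IOException {
                BooleanQuery query = new BooleanQuery();
                TokenStream stream = analyzer.tokenStream(field, new StringReader(queryText));
                TermAttribute term = stream.addAttribute(TermAttribute.class);
                while (stream.incrementToken()) {
                    // one fuzzy clause per surviving token; stop words never get here
                    query.add(new FuzzyQuery(new Term(field, term.term()), 0.8f),
                              BooleanClause.Occur.SHOULD);
                }
                stream.close();
                return query;
            }
        }

    This is essentially the preprocessing AnalyzingQueryParser is meant to do; doing it by hand simply restores control over the stop-word step.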

    Read the article

  • ASP.NET MVC Model Binding into a List

    - by Maxim Z.
    In my ASP.NET MVC site, part of a feature allows the user to enter the hours when a certain venue is open. I've decided to store these hours in a VenueHours table in my database, with a FK-to-PK relationship to a Venues table, as well as DayOfWeek, OpeningTime, and ClosingTime columns. In my View, I want to allow the user to only input the times they know about; in other words, some days may not be filled in for a Venue. I'm thinking of creating checkboxes that the user can check to enable the OpeningTime and ClosingTime fields for the DayOfWeek that the checkbox belongs to. My question relates to how to pass this information to my HttpPost Controller Action. Since I know the maximum number of days that can be passed in (7), I could of course just write 7 nullable VenueHour parameters into my Action, but I'm sure there's a better way. Can I somehow bind the View information into a List that is passed to my Action? This would also help me if I run into a scenario where there is no limit to how much information the user can fill in.

    Read the article

  • Pros & Cons of Google App Engine

    - by Rishi
    Pros & Cons of Google App Engine [An Updated List, 21st Aug 09]

    Help me compile a list of all the advantages & disadvantages of building an application on the Google App Engine.

    Pros:
    1) No need to buy servers or server space (no maintenance).
    2) Makes solving the problem of scaling much easier.

    Cons:
    1) Locked into Google App Engine?
    2) Developers have read-only access to the filesystem on App Engine.
    3) App Engine can only execute code called from an HTTP request (except for scheduled background tasks).
    4) Users may upload arbitrary Python modules, but only if they are pure-Python; C and Pyrex modules are not supported.
    5) App Engine limits the maximum rows returned from an entity get to 1000 rows per Datastore call.
    6) Java applications may only use a subset (the JRE Class White List) of the classes from the JRE standard edition.
    7) Java applications cannot create new threads.

    Known issues: http://code.google.com/p/googleappengine/issues/list

    Hard limits:
    Apps per developer - 10
    Time per request - 30 sec
    Files per app - 3,000
    HTTP response size - 10 MB
    Datastore item size - 1 MB
    Application code size - 150 MB

    Pro or con? App Engine's infrastructure removes many of the system administration and development challenges of building applications to scale to millions of hits. Google handles deploying code to a cluster, monitoring, failover, and launching application instances as necessary. While other services let users install and configure nearly any *NIX compatible software, App Engine requires developers to use Python or Java as the programming language and a limited set of APIs. Current APIs allow storing and retrieving data from a BigTable non-relational database; making HTTP requests; sending e-mail; manipulating images; and caching. Most existing Web applications can't run on App Engine without modification, because they require a relational database.

    Read the article

  • How to create a variadic (with variable length argument list) function wrapper in JavaScript

    - by U-D13
    The intention is to build a wrapper to provide a consistent method of calling native functions with variable arity on various script hosts - so that the script could be executed in a browser as well as in the Windows Script Host or other script engines. I am aware of 3 methods, each of which has its own drawbacks.

    The eval() method:

        function wrapper() {
            var str = '';
            for (var i = 0; i < arguments.length; i++)
                str += (str ? ', ' : '') + 'arguments[' + i + ']';
            return eval('[native_function](' + str + ')');
        }

    The switch() method:

        function wrapper() {
            switch (arguments.length) {
                case 0: return [native_function]();
                case 1: return [native_function](arguments[0]);
                ...
                case n: return [native_function](arguments[0], arguments[1], ... arguments[n-1]);
            }
        }

    The apply() method:

        function wrapper() {
            return [native_function].apply([native_function_namespace], arguments);
        }

    What's wrong with them, you ask? Well, shall we delve into all the reasons why eval() is evil? And also all the string concatenation... not a solution to be labeled "elegant". One can never know the maximum n, and thus how many cases to prepare; this would also stretch the script to immense proportions and sin against the holy DRY principle. The script could get executed on older (pre-JavaScript 1.3 / ECMA-262-3) engines that don't support the apply() method. Now the question part: is there any other solution out there?

    Read the article

  • How do I detect if there is already a similar document stored in a Lucene index?

    - by Jenea
    Hi. I need to exclude duplicates in my database. The problem is that duplicates are not exact matches but rather similar documents. For this purpose I decided to use FuzzyQuery as follows:

        var fuzzyQuery = new global::Lucene.Net.Search.FuzzyQuery(
            new Term("text", queryText), 0.8f, 0);
        hits = _searcher.Search(query); // note: this searches `query`, not the fuzzyQuery built above

    The idea was to set the minimal similarity to 0.8 (which I think is high enough) so that only similar documents will be found, excluding those that are not sufficiently similar. To test this code I decided to see if it finds an already existing document. The variable queryText was assigned a value that is stored in the index. The code above found nothing; in other words, it doesn't detect even an exact match. The index was built by this code:

        doc.Add(new global::Lucene.Net.Documents.Field(
            "text", text,
            global::Lucene.Net.Documents.Field.Store.YES,
            global::Lucene.Net.Documents.Field.Index.TOKENIZED,
            global::Lucene.Net.Documents.Field.TermVector.WITH_POSITIONS_OFFSETS));

    I followed the recommendations from below, and the results are: a TermQuery doesn't return any result. A query constructed with

        var _analyzer = new RussianAnalyzer();
        var parser = new global::Lucene.Net.QueryParsers.QueryParser("text", _analyzer);
        var query = parser.Parse(queryText);
        var _searcher = new IndexSearcher(Settings.General.Default.LuceneIndexDirectoryPath);
        var hits = _searcher.Search(query);

    returns several results, with the maximum score on the document that has the exact match, plus several other documents that have similar content.
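
    For near-duplicate detection, term-level fuzziness may be the wrong granularity anyway: FuzzyQuery tolerates edits inside a single term, not reworded documents. The contrib MoreLikeThis class, which turns a document's significant terms into a query and scores overlap, tends to fit better. A sketch in Java (Lucene.Net mirrors the Java API closely; the field name, hit count, and score cutoff are illustrative assumptions to tune against known duplicates):

        import java.io.StringReader;
        import org.apache.lucene.search.IndexSearcher;
        import org.apache.lucene.search.Query;
        import org.apache.lucene.search.TopDocs;
        import org.apache.lucene.search.similar.MoreLikeThis;

        public class DuplicateDetector {
            // Build a "similar documents" query from the candidate text and
            // treat sufficiently high-scoring hits as potential duplicates.
            public static boolean hasNearDuplicate(IndexSearcher searcher, String text)
                    throws Exception {
                MoreLikeThis mlt = new MoreLikeThis(searcher.getIndexReader());
                mlt.setFieldNames(new String[] { "text" });
                mlt.setMinTermFreq(1); // short documents: count every term
                mlt.setMinDocFreq(1);
                Query query = mlt.like(new StringReader(text));
                TopDocs hits = searcher.search(query, 5);
                // Lucene scores are not normalized; 0.7f is a placeholder
                // cutoff to calibrate against documents known to be duplicates.
                return hits.scoreDocs.length > 0 && hits.scoreDocs[0].score > 0.7f;
            }
        }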

    Read the article

  • Force full garbage collection when memory occupation goes beyond a certain threshold

    - by Silvio Donnini
    I have a server application that, on rare occasions, can allocate large chunks of memory. It's not a memory leak, as these chunks can be claimed back by the garbage collector by executing a full garbage collection. Normal garbage collection frees amounts of memory that are too small: it is not adequate in this context. The garbage collector executes these full GCs when it deems appropriate, namely when the memory footprint of the application nears the allotted maximum specified with -Xmx. That would be OK, if it weren't for the fact that these problematic memory allocations come in bursts and can cause OutOfMemoryErrors, because the JVM is not able to perform a GC quickly enough to free the required memory. If I manually call System.gc() beforehand, I can prevent this situation. Anyway, I'd prefer not having to monitor my JVM's memory allocation myself (or insert memory management into my application's logic); it would be nice if there were a way to run the virtual machine with a memory threshold above which full GCs would be executed automatically, in order to release very early the memory I'm going to need.

    Long story short: I need a way (a command line option?) to configure the JVM so that it releases a good amount of memory early (i.e. performs a full GC) when memory occupation reaches a certain threshold. I don't care if this slows my application down every once in a while. All I've found till now are ways to modify the sizes of the generations, but that's not what I need (at least not directly). I'd appreciate your suggestions.

    Silvio

    P.S. I'm working on a way to avoid the large allocations, but it could take a long time, and meanwhile my app needs a little stability.
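
    There is no general-purpose JVM flag that triggers a full GC at a fixed occupancy; the closest knobs are collector-specific (for example, -XX:CMSInitiatingOccupancyFraction=NN together with -XX:+UseConcMarkSweepGC makes the concurrent collector start a cycle once the old generation passes NN percent). If an in-process watcher turns out to be acceptable after all, the standard java.lang.management API can fire a notification when a heap pool crosses a usage threshold. A sketch, with the threshold fraction as an assumed tuning value:

        import java.lang.management.ManagementFactory;
        import java.lang.management.MemoryNotificationInfo;
        import java.lang.management.MemoryPoolMXBean;
        import java.lang.management.MemoryType;
        import javax.management.NotificationEmitter;

        public class GcThresholdWatcher {
            public static void install(double fraction) {
                // Arm a usage threshold on every heap pool that supports one
                // (in practice this targets the tenured/old generation).
                for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
                    long max = pool.getUsage().getMax();
                    if (pool.getType() == MemoryType.HEAP
                            && pool.isUsageThresholdSupported() && max > 0) {
                        pool.setUsageThreshold((long) (max * fraction));
                    }
                }
                // The MemoryMXBean emits a notification when a threshold is crossed.
                NotificationEmitter emitter =
                        (NotificationEmitter) ManagementFactory.getMemoryMXBean();
                emitter.addNotificationListener((notification, handback) -> {
                    if (MemoryNotificationInfo.MEMORY_THRESHOLD_EXCEEDED
                            .equals(notification.getType())) {
                        System.gc(); // request the full collection early
                    }
                }, null, null);
            }
        }

    Calling GcThresholdWatcher.install(0.75) once at startup would then request a full collection as soon as a heap pool passes 75% of its maximum, instead of waiting for an allocation burst to hit the -Xmx wall.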

    Read the article

  • longest string in texts

    - by davit-datuashvili
    I have the following code in Java:

        import java.util.*;

        public class longest {
            public static void main(String[] args) {
                int i = 0;
                int t = 0;
                int m = 0;
                int token1, token2;
                String words[] = new String[10];
                String word[] = new String[10];
                String common[] = new String[10];
                String text = "saqartvelo gabrwyindeba da gadzlierdeba aucileblad ";
                String text1 = "saqartvelo gamtliandeba da gadzlierdeba aucileblad";
                StringTokenizer st = new StringTokenizer(text);
                StringTokenizer st1 = new StringTokenizer(text1);
                token1 = st.countTokens();
                token2 = st1.countTokens();
                while (st.hasMoreTokens()) {
                    words[t] = st.nextToken();
                    t++;
                }
                while (st1.hasMoreTokens()) {
                    word[m] = st1.nextToken();
                    m++;
                }
                for (int k = 0; k < token1; k++) {
                    for (int f = 0; f < token2; f++) {
                        if (words[f].compareTo(word[f]) == 0) { // note: compares index f on both arrays; k is unused
                            common[f] = words[f];
                        }
                    }
                }
                while (i < common.length) {
                    System.out.println(common[i]);
                    i++;
                }
            }
        }

    I want the common array to hold the words that appear in both texts, i.e. saqartvelo ("Georgia" in English), da ("and" in English), gadzlierdeba ("will become stronger"), aucileblad ("surely"), and then among these words to find the string with maximum length. But it does not work correctly; it shows me these words but also many null elements. How do I correct it?
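
    The null elements come from printing all 10 slots of the fixed-size common array, most of which are never assigned. A sketch of the intersect-then-scan approach the code is reaching for, assuming plain whitespace tokenization is enough for this input:

        import java.util.Arrays;
        import java.util.LinkedHashSet;
        import java.util.Set;

        public class LongestCommonWord {
            public static void main(String[] args) {
                String text  = "saqartvelo gabrwyindeba da gadzlierdeba aucileblad";
                String text1 = "saqartvelo gamtliandeba da gadzlierdeba aucileblad";
                // Intersect the two word sets; LinkedHashSet keeps first-text order
                // and sidesteps the fixed-size arrays (and their null slots).
                Set<String> common = new LinkedHashSet<>(Arrays.asList(text.split("\\s+")));
                common.retainAll(Arrays.asList(text1.split("\\s+")));
                // Scan the intersection for the longest word.
                String longest = "";
                for (String word : common) {
                    if (word.length() > longest.length()) {
                        longest = word;
                    }
                }
                System.out.println(common);  // [saqartvelo, da, gadzlierdeba, aucileblad]
                System.out.println(longest); // gadzlierdeba
            }
        }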

    Read the article

  • ColdFusion 8: Database Connection Reset Error

    - by Gavin
    I have been getting these intermittent ColdFusion database connection reset errors and was wondering if anyone has experience with this and had a particular solution that worked. Here is the error:

        Error Executing Database Query. [Macromedia][SQLServer JDBC Driver]
        A problem occurred when attempting to contact the server (Server returned:
        Connection reset). Please ensure that the server parameters passed to the
        driver are correct and that the server is running. Also ensure that the
        maximum number of connections have not been exceeded for this server.

    This doesn't happen with any particular query; the code breaks in a different query every time, returning SQLState error 08s01. These queries' logic is fine, no logic errors etc. I checked the network logs and there were no database server connection refusals at the time of the error. Once the first error occurs, it keeps happening for a minute or so at random times of the day, every few days. I've googled this thing, and so far anyone that has had this issue was on CF6 or 7, and the fixes ColdFusion put out are only for CF6 or 7.

    Server configuration: The ColdFusion server is version 8. The database server is SQL Server 2005 Standard. The database connections allowed setting is set to unlimited on both SQL Server and ColdFusion.

    Any help would be greatly appreciated. Thanks!

    Read the article

  • SQLite on iPhone - Techniques for tracking down multithreading-related bugs

    - by Jasarien
    Hey guys, I'm working with an Objective-C wrapper around SQLite that I didn't write, and documentation is sparse... It's not FMDB; the people writing this wrapper weren't aware of FMDB when writing this code. It seems that the code is suffering from a bug where database connections are being accessed from multiple threads -- which, according to the SQLite documentation, won't work if SQLite is compiled with SQLITE_THREADSAFE=2. I have tested the libsqlite3.dylib provided as part of the iPhone SDK and seen that it is compiled in this manner, using the sqlite3_threadsafe() routine. Using the provided sqlite library, the code regularly hits SQLITE_BUSY and SQLITE_LOCKED return codes when performing routines. To combat this, I added some code to wait a couple of milliseconds and try again, with a maximum retry count of 50; the code didn't contain any retry logic prior to this. Now when a sqlite call returns SQLITE_BUSY or SQLITE_LOCKED, the retry loop is invoked and the retry returns SQLITE_MISUSE. Not good. Grasping at straws, I replaced the provided sqlite library with a version compiled by myself, setting SQLITE_THREADSAFE to 1 -- which according to the documentation means sqlite is safe to use in a multithreaded environment, effectively serialising all of the operations. It incurs a performance hit, which I haven't measured, but it rid the app of the SQLITE_MISUSE errors and seemed to remove the need for the retry logic, as it never hit a busy or locked state. What I would rather do is fix the problem of accessing a single db connection from multiple threads, but I can't for the life of me find where it's occurring. So if anyone has any tips on locating multithreaded bugs, I would be extremely appreciative. Thanks in advance.

    Read the article

  • Problem in matlab with too many outputs

    - by Ben Fossen
    I am writing a Matlab program for Simpson's rule. I keep getting an error about too many outputs when the program gets to

        left_simpson = Simpson(a,c,(e1)/2,level,level_max);

    What is wrong with setting left_simpson to Simpson(a,c,(e1)/2,level,level_max)?

        function Simpson(a,b,e1,level,level_max)
        level = level + 1;
        h = b - a;
        c = (a+b)/2;
        one_simpson = h*(f(a) + 4*f(c) + f(b))/6;
        d = (a+c)/2;
        e = (c+b)/2;
        two_simpson = h*(f(a) + 4*f(d) + 2*f(c) + 4*f(e))/2;
        if level >= level_max
            disp('h')
            simpson_result = two_simpson;
            disp('maximum levels reached')
            disp(simpson_result);
            if abs(two_simpson - one_simpson) < 15*e1
                simpson_result = two_simpson + (two_simpson - one_simpson)/15;
            else
                left_simpson = Simpson(a,c,(e1)/2,level,level_max);
                right_simpson = Simpson(c,b,(e1)/2,level,level_max);
                simpson_result = left_simpson + right_simpson;
            end
        end
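
    For reference, the two quadrature values the code is computing are the single-panel and two-panel Simpson approximations over [a, b], with h = b - a, c = (a+b)/2, d = (a+c)/2, and e = (c+b)/2:

        S_1 = \frac{h}{6}\left[f(a) + 4f(c) + f(b)\right]

        S_2 = \frac{h}{12}\left[f(a) + 4f(d) + 2f(c) + 4f(e) + f(b)\right]

    Note that the code's two_simpson divides by 2 and omits the f(b) term, which does not match S_2; that is worth checking independently of the "too many outputs" error, which itself comes from the function declaration having no output variable (Matlab cannot return a value from `function Simpson(...)` into left_simpson; it would need the form `function simpson_result = Simpson(...)`).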

    Read the article

  • Backreferences in lookbehind

    - by polygenelubricants
    Can you use backreferences in a lookbehind? Let's say I want to split wherever the character behind me is repeated twice:

        String REGEX1 = "(?<=(.)\\1)";       // DOESN'T WORK!
        String REGEX2 = "(?<=(?=(.)\\1)..)"; // WORKS!
        System.out.println(java.util.Arrays.toString(
            "Bazooka killed the poor aardvark (yummy!)".split(REGEX2)
        ));
        // prints "[Bazoo, ka kill, ed the poo, r aa, rdvark (yumm, y!)]"

    Using REGEX2 (where the backreference is in a lookahead nested inside a lookbehind) works, but REGEX1 gives this error at run-time:

        Look-behind group does not have an obvious maximum length near index 8
        (?<=(.)\1)
                ^

    This sort of makes sense, I suppose, because in general a backreference can capture a string of any length (if the regex compiler were a bit smarter, though, it could determine that \1 is (.) in this case, and therefore has a finite length). So is there a way to use a backreference in a lookbehind? And if there isn't, can you always work around it using this nested lookahead? Are there other commonly-used techniques?
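
    A note on why the nested-lookahead trick satisfies Java's length check: the lookbehind body proper is only the fixed-width "..", so its maximum length is computable; the variable-length backreference work happens inside the zero-width lookahead, which runs forward from the position two characters back. A self-contained sketch spelling out the two-step evaluation:

        import java.util.Arrays;
        import java.util.regex.Pattern;

        public class LookbehindBackrefDemo {
            public static void main(String[] args) {
                // (?<= ... ) must have a computable maximum length in Java.
                // Here the lookbehind body is "(?=(.)\1).." :
                //   ".."        -> fixed width 2, so the lookbehind is bounded;
                //   "(?=(.)\1)" -> zero-width, evaluated at the point two chars
                //                  back, where it checks "some char, then the
                //                  same char again" -- the backreference test.
                Pattern boundary = Pattern.compile("(?<=(?=(.)\\1)..)");
                System.out.println(Arrays.toString(
                        boundary.split("Bazooka killed the poor aardvark (yummy!)")));
                // [Bazoo, ka kill, ed the poo, r aa, rdvark (yumm, y!)]
            }
        }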

    Read the article

  • Unable to load huge XML document (incorrectly supposed it's due to the XSLT processing)

    - by krisvandenbergh
    I'm trying to match certain elements using XSLT. My input document is very large, and the source XML fails to load after processing the following code (consider especially the first line):

        <xsl:template match="XMI/XMI.content/Model_Management.Model/Foundation.Core.Namespace.ownedElement/Model_Management.Package/Foundation.Core.Namespace.ownedElement">
            <rdf:RDF>
                <rdf:Description rdf:about="">
                    <xsl:for-each select="Foundation.Core.Class">
                        <xsl:for-each select="Foundation.Core.ModelElement.name">
                            <owl:Class rdf:ID="@Foundation.Core.ModelElement.name" />
                        </xsl:for-each>
                    </xsl:for-each>
                </rdf:Description>
            </rdf:RDF>
        </xsl:template>

    Apparently the XSLT fails to load after "Model_Management.Model". The PHP code is as follows:

        if ($xml->loadXML($source_xml) == false) {
            die('Failed to load source XML: ' . $http_file);
        }

    It then fails to perform loadXML and immediately dies. I think there are two options now: 1) set a maximum execution time; frankly, I don't know how to do this for the built-in PHP 5 XSLT processor; 2) think about another way to match. What would be the best way to deal with this? The input document can be found at http://krisvandenbergh.be/uml_pricing.xml Any help would be appreciated! Thanks.

    Read the article

  • Determining failing sectors on portable flash memory

    - by Faxwell Mingleton
    I'm trying to write a program that will detect signs of failure for portable flash memory devices (thumb drives, etc). I have seen tools in the past that are able to detect failing sectors and other kinds of trouble on conventional mechanical hard drives, but I fear that flash memory does not have the same kind of predictable low-level access to the hardware due to the internal workings of the storage. Things like wear-leveling and other block-remapping techniques (to skip over 'dead' sectors?) lead me to believe that determining if a flash drive is failing will be difficult at best, if not impossible (short of having constant read failures and device unmounts). Flash drives at their end-of-life should be easy to detect (constant CRC discrepancies during reads and all-out failure). But what about drives that might be failing early? Are there any tell-tale signs like slower throughput speeds that might indicate a flash drive is going to fail much sooner than normal? Along the lines of detecting potentially bad blocks, I had considered attempting random reads/writes to a file close to or exactly the size of the entire volume, but even then is it possible that the drive might report sizes under its maximum capacity to account for 'dead' blocks? In short, is there any way to circumvent or at least detect (algorithmically or otherwise) the use of block-remapping or other life extension techniques for flash memory? Let me end this question by expressing my uncertainty as to whether or not this belongs on serverfault.com . This is definitely a hardware-related question, but I also desire a software solution - preferably one that I can program myself. If this question is misplaced, I will be happy to migrate it to serverfault - but I do need a programming solution. Please let me know if you need clarification :) Thanks!

    Read the article

  • Strip parity bits in C from 8 bits of data followed by 1 parity bit

    - by dubnde
    I have a buffer of bits with 8 bits of data followed by 1 parity bit. This pattern repeats itself. The buffer is currently stored as an array of octets. Example (p are parity bits):

        0001 0001 p000 0100 0p00 0001 00p0 1110 0 ...

    should become

        0001 0001 0000 1000 0000 0100 0111 00 ...

    Basically, I need to strip off every ninth bit to obtain just the data bits. How can I achieve this? This is related to another question asked here some time back. This is on a 32 bit machine, so the solution to the related question may not be applicable. The maximum possible number of bits is 45, i.e. 5 data octets.

    This is what I have tried so far: I created a "boolean" array and added the bits into the array based on the bitset of the octet. I then look at every ninth index of the array, throw it away, and move the remaining array down one index. Then I've got only the data bits left. I was thinking there may be better ways of doing this.
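
    The boolean-array shuffle can be avoided by walking an input bit cursor and an output bit cursor together and skipping every ninth input bit. A sketch of that approach in Java rather than C (the indexing logic ports directly; bit 0 is taken as the most significant bit of in[0], matching the left-to-right example above):

        public class ParityStrip {
            // Copy all bits except every ninth (the parity bits) from in[] to out[].
            // totalBits is the number of valid bits in in[] (at most 45 here).
            public static byte[] stripParity(byte[] in, int totalBits) {
                int dataBits = totalBits - totalBits / 9; // drop one bit per full group of 9
                byte[] out = new byte[(dataBits + 7) / 8];
                int outBit = 0;
                for (int i = 0; i < totalBits; i++) {
                    if (i % 9 == 8) {
                        continue; // parity bit: skip it
                    }
                    int bit = (in[i / 8] >> (7 - (i % 8))) & 1;   // read, MSB-first
                    out[outBit / 8] |= bit << (7 - (outBit % 8)); // write, MSB-first
                    outBit++;
                }
                return out;
            }
        }

    For the 45-bit maximum this copies 40 data bits into 5 output octets; the same loop in C only needs the types changed.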

    Read the article
