Search Results

Search found 22065 results on 883 pages for 'performance testing'.


  • Setting white balance and exposure mode for the iPhone camera + enum default

    - by Spectravideo328
    I am using the back camera of an iPhone 4 and doing the standard and lengthy process of creating an AVCaptureSession and adding an AVCaptureDevice to it. Before attaching the AVCaptureDeviceInput of that camera to the session, I am testing my understanding of white balance and exposure, so I am trying this:

        [self.theCaptureDevice lockForConfiguration:nil];
        [self.theCaptureDevice setWhiteBalanceMode:AVCaptureWhiteBalanceModeLocked];
        [self.theCaptureDevice setExposureMode:AVCaptureExposureModeContinuousAutoExposure];
        [self.theCaptureDevice unlockForConfiguration];

    1. Given that the various options for white balance mode are in an enum, I would have thought that the default is always zero, since the enum typedef variable was never assigned a value. I am finding out, if I breakpoint and po the values in the debugger, that the default white balance mode is actually set to 2. Unfortunately, the header files of AVCaptureDevice do not say what the defaults are for the different camera settings.

    2. This might sound silly, but can I assume that once I stop the app, all settings for white balance and exposure mode go back to their defaults, so that if I start another app right after, the camera device is not somehow stuck on those "hardware settings"?


  • Setting up a domain using WAMP and NameCheap's DNS service

    - by Mike Jones
    So, I've been working on this for far longer than I should and I'm quite thoroughly perplexed and lost. I apologize if this turns out to be a facile question, but I'm entirely befuddled and very much in need of some guidance.

    Earlier today I registered a cheap .info domain on NameCheap just to play around with as I learn webdev. I have WAMP installed on my computer and I've been testing my projects through localhost. I thought it would be a good idea to have a domain at my disposal in order to understand how it works and to better test my work. NameCheap has a complimentary DNS service (not that I entirely understand what DNS is, although I tried), so I'm using that. I pointed both the "@" and "www" host names to my IP address, as shown in the tutorial below: https://www.namecheap.com/support/knowledgebase/article.aspx/319/78/how-can-i-setup-an-a-address-record-for-my-domain

    However, when I now go to the URL of the domain I bought, it confronts me with a login box that seems to be coming from my wifi network. How do I set this up so that it serves the content from my WAMP server? Thanks in advance for any help, I really appreciate it.

    As a side note, what should I read (print or online) to better understand the structure of the internet? I've been studying HTML/CSS/JavaScript, but I'm not sure where to look for pragmatic and comprehensive information on how the internet works and how to use it as a website administrator. Thanks so much.


  • How does loop address alignment affect the speed on Intel x86_64?

    - by Alexander Gololobov
    I'm seeing a 15% performance degradation of the same C++ code compiled to exactly the same machine instructions but located at differently aligned addresses. When my tiny main loop starts at 0x415220 it's faster than when it starts at 0x415250. I'm running this on an Intel Core 2 Duo. I use gcc 4.4.5 on x86_64 Ubuntu. Can anybody explain the cause of the slowdown and how I can force gcc to align the loop optimally? Here is the disassembly for both cases with profiler annotation:

        415220  576  12.56%  |XXXXXXXXXXXXXX        48 c1 eb 08     shr $0x8,%rbx
        415224  110   2.40%  |XX                    0f b6 c3        movzbl %bl,%eax
        415227        0.00%  |                      41 0f b6 04 00  movzbl (%r8,%rax,1),%eax
        41522c   40   0.87%  |                      48 8b 04 c1     mov (%rcx,%rax,8),%rax
        415230  806  17.58%  |XXXXXXXXXXXXXXXXXXX   4c 63 f8        movslq %eax,%r15
        415233  186   4.06%  |XXXX                  48 c1 e8 20     shr $0x20,%rax
        415237  102   2.22%  |XX                    4c 01 f9        add %r15,%rcx
        41523a  414   9.03%  |XXXXXXXXXX            a8 0f           test $0xf,%al
        41523c  680  14.83%  |XXXXXXXXXXXXXXXX      74 45           je 415283 ::Run(char const*, char const*)+0x4b3
        41523e        0.00%  |                      41 89 c7        mov %eax,%r15d
        415241        0.00%  |                      41 83 e7 01     and $0x1,%r15d
        415245        0.00%  |                      41 83 ff 01     cmp $0x1,%r15d
        415249        0.00%  |                      41 89 c7        mov %eax,%r15d

        415250  679  13.05%  |XXXXXXXXXXXXXXXX      48 c1 eb 08     shr $0x8,%rbx
        415254  124   2.38%  |XX                    0f b6 c3        movzbl %bl,%eax
        415257        0.00%  |                      41 0f b6 04 00  movzbl (%r8,%rax,1),%eax
        41525c   43   0.83%  |X                     48 8b 04 c1     mov (%rcx,%rax,8),%rax
        415260  828  15.91%  |XXXXXXXXXXXXXXXXXXX   4c 63 f8        movslq %eax,%r15
        415263  388   7.46%  |XXXXXXXXX             48 c1 e8 20     shr $0x20,%rax
        415267  141   2.71%  |XXX                   4c 01 f9        add %r15,%rcx
        41526a  634  12.18%  |XXXXXXXXXXXXXXX       a8 0f           test $0xf,%al
        41526c  749  14.39%  |XXXXXXXXXXXXXXXXXX    74 45           je 4152b3 ::Run(char const*, char const*)+0x4c3
        41526e        0.00%  |                      41 89 c7        mov %eax,%r15d
        415271        0.00%  |                      41 83 e7 01     and $0x1,%r15d
        415275        0.00%  |                      41 83 ff 01     cmp $0x1,%r15d
        415279        0.00%  |                      41 89 c7        mov %eax,%r15d


  • How to implement custom JSF component for drawing chart?

    - by Roman
    I want to create a component which can be used like:

        <mc:chart data="#{bean.data}" width="200" height="300" />

    where #{bean.data} returns a collection of some objects, or a chart model object, or something else that can be represented as a chart (to put it simply, let's assume it returns a collection of integers). I want this component to generate HTML like this:

        <img src="someimg123.png" width="200" height="300"/>

    The problem is that I have a method which receives data and returns an image, like:

        public RenderedImage getChartImage (Collection<Integer> data) { ... }

    and I also have a component for drawing a dynamic image:

        <o:dynamicImage width="200" height="300" data="#{bean.readyChartImage}"/>

    This component generates HTML just as I need, but its parameter is an array of bytes or a RenderedImage, i.e. it needs a method in the bean like this:

        public RenderedImage getReadyChartImage () { ... }

    So, one approach is to use a propertyChangedListener on submit to set the data (Collection<Integer>) for drawing the chart and then use the <o:dynamicImage /> component. But I'd like to create my own component which receives data and draws the chart. I'm using Facelets, but that's not so important, really. Any ideas how to create the desired component?

    P.S. One solution I was thinking about is not to use <o:dynamicImage/> and instead use a servlet to stream the image. But I don't know how to implement that correctly, how to tie the JSF component to the servlet, or how to cache already built chart images (generating the same image anew for each request could cause performance problems, IMHO), and so on.


  • Iframe form not submitting in IE but working in Firefox

    - by Younes
    I have got a form that posts values to a page in a wizard. When I load this form in an iframe, everything works fine in Firefox: it takes me to the second step of the wizard and keeps the values I filled in. When I test this in Internet Explorer I do not get to the second step; instead it returns me to the first step of the wizard with all fields blank.

    When I check this in Fiddler I see that I get a different response when posting the form in the iframe from Firefox compared to Internet Explorer. How can I make this work in all browsers? What am I doing wrong? This is what I get back from Fiddler:

    Firefox

    Post:
        # Result Protocol Host URL Body Caching Content-Type Process Comments Custom
        1 302 HTTP www.dmg.eu /brugman/budgetplanner/aanmelden.php 0 no-store, no-cache, must-revalidate, post-check=0, pre-check=0 Expires: Thu, 19 Nov 1981 08:52:00 GMT text/html; charset=UTF-8 firefox:6116

    Get:
        # Result Protocol Host URL Body Caching Content-Type Process Comments Custom
        2 200 HTTP www.dmg.eu /brugman/budgetplanner/ 40.677 no-store, no-cache, must-revalidate, post-check=0, pre-check=0 Expires: Thu, 19 Nov 1981 08:52:00 GMT text/html; charset=UTF-8 firefox:6116

    Internet Explorer

    Post:
        # Result Protocol Host URL Body Caching Content-Type Process Comments Custom
        73 302 HTTP www.dmg.eu /brugman/budgetplanner/aanmelden.php 0 no-store, no-cache, must-revalidate, post-check=0, pre-check=0 Expires: Thu, 19 Nov 1981 08:52:00 GMT text/html; charset=UTF-8 iexplore:536

    Get:
        # Result Protocol Host URL Body Caching Content-Type Process Comments Custom
        74 302 HTTP www.dmg.eu /brugman/budgetplanner/ 0 no-store, no-cache, must-revalidate, post-check=0, pre-check=0 Expires: Thu, 19 Nov 1981 08:52:00 GMT text/html; charset=UTF-8 iexplore:536

    In short: the GET after the POST returns 200 in Firefox but another 302 in Internet Explorer. Hope someone knows what the diff is :).


  • Can parser combinators be made efficient?

    - by Jon Harrop
    Around 6 years ago, I benchmarked my own parser combinators in OCaml and found that they were ~5× slower than the parser generators on offer at the time. I recently revisited this subject and benchmarked Haskell's Parsec vs a simple hand-rolled precedence climbing parser written in F#, and was surprised to find the F# to be 25× faster than the Haskell. Here's the Haskell code I used to read a large mathematical expression from file, parse it and evaluate it:

        import Control.Applicative
        import Text.Parsec hiding ((<|>))

        expr = chainl1 term ((+) <$ char '+' <|> (-) <$ char '-')
        term = chainl1 fact ((*) <$ char '*' <|> div <$ char '/')
        fact = read <$> many1 digit <|> char '(' *> expr <* char ')'

        eval :: String -> Int
        eval = either (error . show) id . parse expr "" . filter (/= ' ')

        main :: IO ()
        main = do
          file <- readFile "expr"
          putStr $ show $ eval file
          putStr "\n"

    and here's my self-contained precedence climbing parser in F#:

        let rec (|Expr|) (P(f, xs)) = Expr(loop (' ', f, xs))
        and loop = function
          | ' ' as oop, f, ('+' | '-' as op)::P(g, xs)
          | (' ' | '+' | '-' as oop), f, ('*' | '/' as op)::P(g, xs) ->
              let h, xs = loop (op, g, xs)
              let op =
                match op with
                | '+' -> (+)
                | '-' -> (-)
                | '*' -> (*)
                | '/' -> (/)
              loop (oop, op f h, xs)
          | _, f, xs -> f, xs
        and (|P|) = function
          | '('::Expr(f, ')'::xs) -> P(f, xs)
          | c::xs when '0' <= c && c <= '9' -> P(int(string c), xs)

    My impression is that even state-of-the-art parser combinators waste a lot of time backtracking. Is that correct? If so, is it possible to write parser combinators that generate state machines to obtain competitive performance, or is it necessary to use code generation?


  • Hibernate database integrity with multiple java applications

    - by Austen
    We have 2 Java web apps, both read/write, and 3 standalone Java read/write applications (one loads questions via email, one processes an XML feed, one sends email to subscribers); all use Hibernate and share a common code base. The problem we have recently come across is that questions loaded via email sometimes overwrite questions created in one of the web apps. We originally thought this was a caching issue. We've tried turning off the second-level cache, but this doesn't make a difference. We are not explicitly opening and closing sessions, but rather let Hibernate manage them via Util.getSessionFactory().getCurrentSession(), which, thinking about it, may actually be the issue. We'd rather not set up a clustered second-level cache at this stage, as this creates another layer of complexity, and we're more than happy with the level of performance we get from the app as a whole. So does implementing an open-session-in-view pattern in the web apps and manually managing the sessions in the standalone apps sound like it would fix this? Or any other suggestions/ideas, please?

        <property name="hibernate.transaction.factory_class">org.hibernate.transaction.JDBCTransactionFactory</property>
        <property name="hibernate.current_session_context_class">thread</property>
        <property name="hibernate.cache.use_second_level_cache">false</property>


  • Porting - Shared Memory x32 & x64 processes

    - by dpb
    A 32-bit host Windows application sets up shared memory (using a memory-mapped file / the CreateFileMapping() API), and other 32-bit client processes then use this shared memory to communicate with each other. I am planning to port the host application to a 64-bit platform, and once it is ready, I intend that both 32-bit and 64-bit client processes should be able to use the shared memory set up by the main 64-bit host application. The original code written for the x32 application uses "size_t" almost everywhere; since its size differs (4 bytes vs. 8 bytes) as we move from x32 to x64, I am looking to replace it. I intend to replace "size_t" with "unsigned long long", so that its size will be the same on 32-bit and 64-bit. Can you please suggest a better alternative? Also, will the use of "unsigned long long" have a performance impact on the x32 app? I guess yes?

    Research done - found these very useful points:

    a) "20 issues in porting from 32 bit to 64 bit" (www.viva64.com)
    b) There is no way to restrict/change "size_t" on the x64 platform to 4 bytes using compiler flags or any hooks/crooks, since it is a typedef


  • Java: Efficiency of the readLine method of the BufferedReader and possible alternatives

    - by Luhar
    We are working to reduce the latency and increase the performance of a process written in Java that consumes data (XML strings) from a socket via the readLine() method of the BufferedReader class. The data is delimited by the end-of-line separator (\n), and each line can be of variable length (6 Kbits to 32 Kbits). Our code looks like:

        Socket sock = connection;
        InputStream in = sock.getInputStream();
        BufferedReader inputReader = new BufferedReader(new InputStreamReader(in));
        ...
        do {
            String input = inputReader.readLine();
            // Executor call to parse the input in a separate thread
        } while(true);

    So I have a couple of questions:

    1. Will the inputReader.readLine() method return as soon as it hits the \n character, or will it wait till the buffer is full?
    2. Is there a faster way of picking up data from the socket than using a BufferedReader?
    3. What happens when the size of the input string is smaller than the size of the Socket's receive buffer?
    4. What happens when the size of the input string is bigger than the size of the Socket's receive buffer?

    I am getting to grips (slowly) with Java's IO libraries, so any pointers are much appreciated. Thank you!


  • SQL Server becomes slow after restart

    - by Tobi DM
    We use SQL Server 2005 on a Windows Server 2008. The server has 48 GB RAM, and SQL Server is configured to use 40 GB of it. There is only one database hosted (about 70 GB). The only app besides SQL Server is our app server, which connects the clients to the database. Now we encounter the following problem: after a restart of the server, performance is great. The server grabs the 40 GB RAM it is allowed to and then runs fast as hell. But after about 4 weeks the system becomes slower and slower. The execution time of statements (seen in the profiler) rises slowly, but I cannot see that anything is going wrong on the server:

    - CPU usage is at about 20%
    - I/O also seems to be no problem
    - The process monitor does not show any strange apps or anything like that
    - The event log has no interesting messages
    - No open transactions or blocking to see

    We already tried the following things, without effect:

    - Dropped the caches by using the statements

          DBCC FreeProcCache
          DBCC FREESYSTEMCACHE('ALL')
          DBCC DropCleanbuffers

    - Restarted the app server we are using
    - Restarted the SQL Server service

    But nothing helped except restarting the whole server. Any ideas?


  • Has form post behavior changed in modern browsers? (or How are double clicks handled by the browser)

    - by Alex Czarto
    Background: We are in the process of writing a registration/payment page, and our philosophy was to code all validation and error checking on the server side first, and then add client-side validation as a second step (unobtrusive jQuery). We wanted to disable double clicks server side, so we wrote some locking, thread-safe code to handle simultaneous posts/race conditions. When we tried to test this, we realized that we could not cause a simultaneous post or race condition to occur.

    I thought that (in older browsers anyway) double clicking a submit button worked as follows:

    1. User double clicks the submit button.
    2. Browser sends a post for the first click.
    3. On the second click, the browser cancels/ignores the initial post and initiates a second post (before the first post has returned with a response).
    4. Browser waits for the second post to return, ignoring the initial post's response.

    I thought that from the server side it looked like this: the server gets two simultaneous post requests, executes and responds to them both (unaware that no one is listening to the first response).

    From our testing (Firefox 3.0, IE 8.0) this is what actually happens:

    1. User double clicks the submit button.
    2. Browser sends a post for the first click.
    3. Browser queues up the second click, but waits for the response from the first click.
    4. Response returns from the first click (response is ignored?).
    5. Browser sends a post for the second click.

    So from the server side: the server receives a single post, which it executes and responds to. Then the server receives a second request, which it executes and responds to.

    My question is: has this always worked this way (and I'm losing my mind)? Or is this a new behavior in modern browsers that prevents simultaneous posts being sent to the server? It seems that for server-side double-click prevention, we don't have to worry about simultaneous posts or race conditions, only about queued-up posts. Thanks in advance for any feedback/comments. Alex


  • I'm looking for a way to evaluate reading rate in several languages

    - by i30817
    I have software that is page oriented instead of scrollbar oriented, so I can easily count the words, but I'd like a way to filter outliers and have some default value per text language (which is known). The goal is to calculate the remaining reading time from the remaining text. I'm not sure what the best unit to use is. WPM (words per minute), as used here, seems very fuzzy and human oriented, and besides, I don't know how many "words" remain in the text: http://www.sfsu.edu/~testing/CalReadRate.htm

    So I came up with this: the user is reading the text; the total text size in characters is known; his position in the text is known; so the number of remaining characters to read is also known. If a language has a median word length of, say, 5 chars, then if I had a WPM speed for the user, I could calculate the remaining time. Three things are needed for this:

    1. A table of the median word length per language.
    2. A table of the median WPM of a median user, per language.
    3. A way to update the WPM to fit the user as data becomes available, filtering outliers.

    However, I can't find these tables, and I'm not sure how precise it is to assume a median word length.
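    For illustration, a minimal sketch in Python of the estimate described above; the per-language medians and the default WPM are placeholder numbers, not values from any published table:

        # Hypothetical tables (1) and (2): median word length per language,
        # and a default median reading speed to use before user data exists.
        MEDIAN_WORD_LEN = {"en": 5.1, "fi": 7.5}
        DEFAULT_WPM = 250.0

        def remaining_minutes(remaining_chars, lang, wpm=DEFAULT_WPM):
            # remaining words ~= remaining characters / median word length
            chars_per_word = MEDIAN_WORD_LEN.get(lang, 5.0)
            return (remaining_chars / chars_per_word) / wpm

        def observed_wpm(page_samples):
            """Point (3): update the speed from per-page (words, minutes) pairs;
            taking the median filters outliers such as pages left open."""
            rates = sorted(float(w) / m for w, m in page_samples if m > 0)
            return rates[len(rates) // 2]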


  • In browser trusted application Silverlight 5

    - by Philippe
    With the new Silverlight 5, we can now have an in-browser elevated-trust application. However, I'm having trouble deploying the application. When I test the application from Visual Studio, everything works fine, because every right is granted automatically if the website is hosted on the local machine (localhost, 127.0.0.1). I saw on MSDN that I have to follow 3 steps to make it work on any website:

    1. Sign the XAP - I did it following the Microsoft tutorial.
    2. Install the certificate into the Trusted Publishers certificate store - I did it too, following the Microsoft tutorial.
    3. Add a registry key with the value AllowElevatedTrustAppsInBrowser.

    The third step is the one I am the most unsure about. Do we need to add this registry key on the local machine or on the server? Is there any automatic function in Silverlight to add this key, or is it better to make a batch file? Even with those 3 steps, the application is still not working when called from any URL other than localhost. Has anybody successfully implemented an in-browser elevated-trust application? Do you see what I'm doing wrong? Thank you very much! Philippe

    Sources:
    - http://msdn.microsoft.com/en-us/library/gg192793(v=VS.96).aspx
    - http://pitorque.de/MisterGoodcat/post/Silverlight-5-Tidbits-Trusted-applications.aspx
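    For what it's worth, the MSDN page listed in the sources above describes AllowElevatedTrustAppsInBrowser as a DWORD value set to 0x00000001 under HKEY_LOCAL_MACHINE\Software\Microsoft\Silverlight on the machine running the browser (each client), not on the web server; verify the exact path there. A hypothetical helper, sketched in Python purely for illustration (a one-line "reg add" in a batch file does the same job):

        import _winreg  # Python 2 stdlib; named winreg on Python 3

        # Assumed path and value name, taken from the MSDN article above.
        key = _winreg.CreateKey(_winreg.HKEY_LOCAL_MACHINE,
                                r"Software\Microsoft\Silverlight")
        _winreg.SetValueEx(key, "AllowElevatedTrustAppsInBrowser", 0,
                           _winreg.REG_DWORD, 1)
        _winreg.CloseKey(key)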


  • Spam and dirty-word comment post filtering in Python (Django)

    - by sintaloo
    Hi all. My basic question is how to filter spam and dirty words in a comment post system in Python (Django). I have a collection of phrases (approximately 3000) to be filtered.

    Question (1): Is there any existing open source Python (or Django) package/module/plugin which can handle this job? I know there is one called Akismet, but from what I understand it will not solve my problem: Akismet is just a web service and filters against the word dictionary defined by Akismet, whereas I have my own collection of phrases. Please correct me if I am wrong.

    Question (2): If there is no such open source package I can use, how do I create my own? The only thing I can think of is to use a regular expression and join all the phrases with 'or' in one regular expression. But I have 3000 phrases; I don't think that will perform well enough to filter every comment post. Any suggestions where I should start from? Thank you very much for your help and time.
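    For scale: one compiled alternation over all the phrases means each comment is scanned once rather than 3000 times, which usually performs far better than it sounds. A minimal sketch in plain Python (not an existing Django plugin; the sample phrases are placeholders):

        import re

        def build_filter(phrases):
            # Escape each phrase so its punctuation stays literal, and sort
            # longest-first so longer phrases win over their own prefixes.
            parts = sorted((re.escape(p) for p in phrases), key=len, reverse=True)
            return re.compile("|".join(parts), re.IGNORECASE)

        banned = build_filter(["first bad phrase", "second bad phrase"])  # load your ~3000 here

        def is_clean(comment):
            return banned.search(comment) is None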


  • Passing data between ViewControllers versus doing local Fetch in each VC

    - by Tofrizer
    Hi all. I'm developing an iPhone app using Core Data, and I'm looking for some general advice and recommendations on whether it's acceptable to pass data between view controllers versus doing a local fetch in each view controller as you navigate to it. Ordinarily I would say it all depends on various factors (e.g. performance), but the passing-data approach is so prevalent in my app, and I'm spooked by all the stories about Apple rejecting apps for not conforming to their standard guidelines. So let me put it another way: is it non-standard to pass data between VCs?

    The reason I pass data so much is that each view controller is just another view onto data present in my object model/graph. Once I have a handle on my first object in the first view controller (which I of course do have to fetch), I can use the existing object composition/relationships to drill down into the next level of detail, so I just pass these objects to the next VC.

    Separately, one possible downside of this passing-data-to-each-VC approach is that I don't benefit from (what I perceive to be) the optimisations that NSFetchedResultsController provides in terms of efficient memory usage and section handling. My app is read-only, but I do have one table with 5000 rows, and I'm curious whether I am missing out on NSFetchedResultsController's benefits. Any thoughts on this as well? Can I somehow still benefit from NSFetchedResultsController goodness without having to do a full fetch (as I would have already passed in the data from my previous VC)? Thanks a lot.


  • Is the Unicode prefix N still needed in SQL Compact Edition?

    - by Dave
    At least in previous versions of SQL Server, you had to prefix Unicode string constants with an "N" to make them be treated as Unicode. Thus:

        select foo from bar where fizz = N'buzz'

    (See "Server-Side Programming with Unicode" for SQL Server 2005 for "from the horse's mouth" documentation.) We have an application that is using SQL Compact Edition, and I am wondering if that is still necessary. From the testing I am doing, it appears to be unneeded. That is, the following SQL statements both behave identically in SQL CE, but the second one fails in SQL Server 2005:

        select foo from bar where foo=N'???'
        select foo from bar where foo='???'

    (I hope I'm not swearing in some language I don't know about...) I'm wondering if that is because all strings are treated as Unicode in SQL CE, or if perhaps the default code page is now Unicode-aware. If anyone has seen any official documentation, either yea or nay, I'd appreciate it. I know I could go the safe route and just add the "N"s, but there's a lot of code that would need to change, and if I don't need to, I don't want to! Thanks for your help!


  • Starting a process in one HTTP call and getting results in another

    - by KillianDS
    Hi, I'm writing a very simple testing framework for my application. The design isn't perfect, but I don't have time to write something more complex. Essentially, I have a client and a server application; on my server I want a small Python web server to start the server application with a given test sequence on a GET or POST call. Also, the application prints some test data to stderr, which I'd like to catch and return in another HTTP call. At the moment I have this:

        from subprocess import Popen, PIPE
        from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer

        p = None

        class MyHandler(BaseHTTPRequestHandler):
            def do_GET(self):
                global p
                if self.path.endswith("start/"):
                    p = Popen(["./bin/Release/simplex264","BBB-360","127.0.0.1"], stderr=PIPE)
                    print 'started'
                    return
                elif self.path.endswith("getResults/"):
                    self.wfile.write(p.stderr.read())
                    return
                self.send_error(404,'File Not Found: %s' % self.path)

        def main():
            try:
                server = HTTPServer(('localhost', 9876), MyHandler)
                print 'Started server...'
                server.serve_forever()
            except KeyboardInterrupt:
                print 'Shutting down...'
                server.socket.close()

        if __name__ == '__main__':
            main()

    This 'works', except for one part: when I try to open http://localhost:9876/start/, it does not return before the process has ended. However, the 'started' appears in my shell immediately (I added this because I thought the Popen call would only return after execution). I do not know the exact inner workings of Popen and BaseHTTPRequestHandler, however, and do not really know where it goes wrong. Is there any way to make this work asynchronously?
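    One possible direction, sketched under two assumptions grounded in the code above: the /start/ handler never writes an HTTP response, and on Unix the child process inherits the open connection socket unless close_fds=True is passed to Popen, so the browser's connection can stay open for as long as the child runs. Replying to /start/ immediately and draining stderr on a background thread addresses both. A sketch (Python 2, matching the question's code), not a tested fix:

        import threading
        from subprocess import Popen, PIPE
        from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer

        output_lines = []                    # stderr lines collected so far

        def drain(pipe):
            for line in iter(pipe.readline, ''):   # readline avoids iterator read-ahead buffering
                output_lines.append(line)

        class MyHandler(BaseHTTPRequestHandler):
            def do_GET(self):
                if self.path.endswith("start/"):
                    p = Popen(["./bin/Release/simplex264", "BBB-360", "127.0.0.1"],
                              stderr=PIPE, close_fds=True)  # don't leak the socket to the child
                    t = threading.Thread(target=drain, args=(p.stderr,))
                    t.daemon = True
                    t.start()
                    self.send_response(200)          # reply right away
                    self.end_headers()
                    self.wfile.write('started')
                    return
                elif self.path.endswith("getResults/"):
                    self.send_response(200)
                    self.end_headers()
                    self.wfile.write(''.join(output_lines))  # whatever has arrived so far
                    return
                self.send_error(404, 'File Not Found: %s' % self.path)

        if __name__ == '__main__':
            HTTPServer(('localhost', 9876), MyHandler).serve_forever()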


  • Get instance of type inheriting from base class, implementing interface, using StructureMap

    - by Ben
    Continuing on my quest for a good plugin implementation, I have been testing the StructureMap assembly scanning features. All plugins inherit from the abstract class PluginBase; this provides access to common application services such as logging. Depending on its function, each plugin may then implement additional interfaces, for example IStartUpTask. I am initializing my plugins like so:

        Scan(x =>
        {
            x.AssembliesFromPath(HttpContext.Current.Server.MapPath("~/Plugins"),
                assembly => assembly.GetName().Name.Contains("Extension"));
            x.AddAllTypesOf<PluginBase>();
        });

    The difficulty I am then having is how to work against the interface (not PluginBase) in code. It's easy enough to work with PluginBase:

        var plugins = ObjectFactory.GetAllInstances<PluginBase>();
        foreach (var plugin in plugins) { }

    But specific functionality (e.g. IStartUpTask.RunTask) is tied to the interface, not the base class. I appreciate this may not be specific to StructureMap (perhaps it's more a question of reflection). Thanks, Ben


  • NSFetchedResultsController is driving me crazy

    - by user267980
    Hi everyone. I've been building an app for a month using NSFetchedResultsController, testing the app against the 3.1.2 SDK. The problem is that I've used NSFetchedResultsController everywhere in my app, and it was working on the 3.1.2 version of the SDK; now my client says I should make it compatible with version 3.0, and the deadline is almost here. But the application crashes with very weird errors every time I change an object handled by the controller. The problem occurs when removing the last object in a section and when a change makes an object move to another section. I've been using sample code from "More iPhone 3 Development: Tackling iPhone SDK 3" by Dave Mark and Jeff LaMarche. I've also included some changes from link text.

    Here is a sample output of the console when the application crashes:

        *** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'Invalid update: invalid number of sections. The number of sections contained in the table view after the update (1) must be equal to the number of sections contained in the table view before the update (2), plus or minus the number of sections inserted or deleted (2 inserted, 0 deleted).'
        2010-03-14 16:23:29.758 Instaproofs[5879:207] Stack: (
            807902715, 7364425, 807986683, 811271572, 815059090, 815007323,
            211023, 4363331, 810589786, 807635429, 810579728, 3620573,
            3620227, 3614682, 3609719, 27337, 810595174, 807686849,
            807683624, 839142449, 839142646, 814752238
        )

    If I had known that NSFetchedResultsController was so buggy, I would never have used it. So basically I need my NSFetchedResultsControllerDelegate to work fine on the 3.0 and later SDKs. It would be a lifesaver if someone could help me figure out what I'm doing wrong.


  • Which apache worker to use with passenger and how?

    - by Millisami
    I have this config in my apache2.conf:

        <IfModule mpm_prefork_module>
            StartServers          5
            MinSpareServers       5
            MaxSpareServers      10
            MaxClients          150
            MaxRequestsPerChild   0
        </IfModule>

        # worker MPM
        # StartServers: initial number of server processes to start
        # MaxClients: maximum number of simultaneous client connections
        # MinSpareThreads: minimum number of worker threads which are kept spare
        # MaxSpareThreads: maximum number of worker threads which are kept spare
        # ThreadsPerChild: constant number of worker threads in each server process
        # MaxRequestsPerChild: maximum number of requests a server process serves
        <IfModule mpm_worker_module>
            StartServers          2
            MaxClients           15
            MinSpareThreads       4
            MaxSpareThreads       5
            ThreadsPerChild      15
            MaxRequestsPerChild 50000
        </IfModule>

    Now I'm confused: which module gets loaded, and under which conditions? The Phusion guys have suggested using the worker module. Since both are present in the Apache conf file, do I have to comment out the mpm_prefork_module section, or leave it as it is? The following is my Passenger conf file for Apache:

        LoadModule passenger_module /usr/lib/ruby/gems/1.8/gems/passenger-2.2.4/ext/apache2/mod_passenger.so
        PassengerRoot /usr/lib/ruby/gems/1.8/gems/passenger-2.2.4
        PassengerRuby /usr/bin/ruby1.8
        PassengerMaxPoolSize 3
        PassengerPoolIdleTime 999999
        RailsFrameworkSpawnerIdleTime 0
        RailsAppSpawnerIdleTime 0

    I'm running just a single Rails 2.3.2 app on a 256 MB slice at Slicehost. I'm not quite satisfied with the performance yet. Are the settings above any good?


  • Complex LINQ paging algorithm

    - by sharepointmonkey
    We have a list of projects that may or may not have a collection of subprojects. Our report needs to contain all the projects except those that are the parent project of a subproject. I need to page this into pages of, say, 25 rows. But if subprojects appear on a page, then ALL the subprojects of that project must appear on the same page, so more than 25 items may appear if necessary. I've got as far as:

        var pagedProjects = db.Projects.Where(x => !x.SubProjects.Any())
                                       .Skip((pageNo - 1) * pageSize)
                                       .Take(pageSize);

    Obviously, this fails the second part of the requirements. As a further pain in the arse, I need to have a pager control on the report, so I'll need to be able to calculate the total number of pages. I could loop through the whole table of projects, but performance would suffer. Can anybody come up with a paged solution?

    EDIT - I should probably mention that SubProjects joins back onto Projects via a self-referencing foreign key, so the whole lot comes back as an IQueryable<Project>.


  • MySQL gurus: How to pull a complex grid of data from a MySQL database with one query?

    - by iopener
    Hopefully this is less complex than I think. I have one table of companies, another table of jobs, and a third table that contains a single entry for each employee in each job from each company. NOTE: Some companies won't have employees in some jobs, and some companies will have more than one employee in some jobs. The company table has companyid and companyname fields, the job table has jobid and jobtitle fields, and the employee table has employeeid, companyid, jobid and employeename fields. I want to build a table like this:

              +-----------+-----------+-----------+
              | Company A | Company B | Company C |
        ------+-----------+-----------+-----------+
        Job A | Emp 1     | Emp 2     |           |
        ------+-----------+-----------+-----------+
        Job B | Emp 3     |           | Emp 4     |
              |           |           | Emp 5     |
        ------+-----------+-----------+-----------+
        Job C |           | Emp 6     |           |
              |           | Emp 7     |           |
              |           | Emp 8     |           |
        ------+-----------+-----------+-----------+

    I had previously been looping through a result set of jobs, and for each job, looping through a result set of companies, and for each company, looping through each employee and printing it in a table (gross, but performance was not supposed to be a consideration). The app has grown in popularity, and now we have 100 companies and hundreds of jobs, and the server is crapping out (all the id fields are indexed).

    Any suggestions on how to write a single query to get this data? I don't need the company names or job titles (obviously), but I do need some way to identify where each row from the result should be printed. I'm imagining a result set that just contains a long list of joined employees, and I could write a loop to use the companyid and employeeid values to tell me when to create a new cell or table row. This works as long as there aren't ZERO employees; I would need a NULL employee name for that, I think? Am I completely on the wrong track? Thanks in advance for any ideas!
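    For illustration, a sketch in Python of the single-query-plus-loop approach described above (assuming one query over the employee table, ordered by jobid then companyid, returns flat (jobid, companyid, employeename) rows; the sample rows here mirror the grid drawn above):

        from collections import defaultdict

        rows = [                            # stand-in for the flat result set
            (1, 1, "Emp 1"), (1, 2, "Emp 2"),
            (2, 1, "Emp 3"), (2, 3, "Emp 4"), (2, 3, "Emp 5"),
            (3, 2, "Emp 6"), (3, 2, "Emp 7"), (3, 2, "Emp 8"),
        ]

        grid = defaultdict(list)            # (jobid, companyid) -> employee names
        for jobid, companyid, name in rows:
            grid[(jobid, companyid)].append(name)

        # Render jobs as rows and companies as columns; an empty cell is just
        # a missing key, so no NULL employee placeholders are needed.
        for jobid in (1, 2, 3):
            print([grid.get((jobid, companyid), []) for companyid in (1, 2, 3)])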


  • P values in wilcox.test gone mad :(

    - by Error404
    I have code that isn't doing what it should do. I am testing the P value of a wilcox.test over a huge data set. The code I am using is the following:

        library(MASS)
        data1 <- read.csv("file1path.csv", header=T, sep=",")
        data2 <- read.csv("file2path.csv", header=T, sep=",")
        data3 <- read.csv("file3path.csv", header=T, sep=",")
        data4 <- read.csv("file4path.csv", header=T, sep=",")
        data1$K <- with(data1, {"N"})
        data2$K <- with(data2, {"E"})
        data3$K <- with(data3, {"M"})
        data4$K <- with(data4, {"U"})
        new = rbind(data1, data2, data3, data4)
        i = 3
        for (o in 1:4800) {
            x1 <- data1[,i]
            x2 <- data2[,i]
            x3 <- data3[,i]
            x4 <- data4[,i]
            wt12 <- wilcox.test(x1, x2, na.omit=TRUE)
            wt13 <- wilcox.test(x1, x3, na.omit=TRUE)
            wt14 <- wilcox.test(x1, x4, na.omit=TRUE)
            if (wt12$p.value == "NaN") {
                print("This is wrong")
            } else if (wt12$p.value < 0.05) {
                print(wt12$p.value)
                mypath = file.path("C:", "all1-less-05", (paste("graph-data1-data2", names(data1[i]), ".pdf", sep="-")))
                pdf(file=mypath)
                mytitle = paste("graph", names(data1[i]))
                boxplot(new[,i] ~ new$K, main = mytitle, names.arg = c("data1","data2","data3","data4"))
                dev.off()
            }
            if (wt13$p.value == "NaN") {
                print("This is wrong")
            } else if (wt13$p.value < 0.05) {
                print(wt13$p.value)
                mypath = file.path("C:", "all2-less-05", (paste("graph-data1-data3", names(data1[i]), ".pdf", sep="-")))
                pdf(file=mypath)
                mytitle = paste("graph", names(data1[i]))
                boxplot(new[,i] ~ new$K, main = mytitle, names.arg = c("data1","data2","data3","data4"))
                dev.off()
            }
            if (wt14$p.value == "NaN") {
                print("This is wrong")
            } else if (wt14$p.value < 0.05) {
                print(wt14$p.value)
                mypath = file.path("C:", "all3-less-05", (paste("graph-data1-data4", names(data1[i]), ".pdf", sep="-")))
                pdf(file=mypath)
                mytitle = paste("graph", names(data1[i]))
                boxplot(new[,i] ~ new$K, main = mytitle, names.arg = c("data1","data2","data3","data4"))
                dev.off()
            }
            i = i + 1
        }

    I am having two problems with this long command:

    1. Without specifying a certain P value, the code gives me around 14,000 graphs; when specifying a P value less than 0.05, the number of graphs generated goes down to 9,000. The first problem is: some P values are more than 0.05 and are still showing up!
    2. I designed the program to print "This is wrong" when the value of P is "NaN", yet I am still getting results of "NaN".

    Here's a screenshot from the results. Do you know what mistake I made in the command to get these errors? Thanks in advance.


  • Bulk Insert of hundreds of millions of records

    - by Dave Jarvis
    What is the fastest way to insert 237 million records into a table that has rules (for distributing the data across 84 child tables)?

    First I tried inserts. No go. Then I tried inserts with BEGIN/COMMIT. Not nearly fast enough. Next, I tried COPY FROM, but then noticed the documentation states that the rules are ignored. (And it was having difficulties with the column order and date format -- it said that '1984-07-1' was not a valid integer; true, but a bit unexpected.)

    Some example data:

        station_id,taken,amount,category_id,flag
        1,'1984-07-1',0,4,
        1,'1984-07-2',0,4,
        1,'1984-07-3',0,4,
        1,'1984-07-4',0,4,T

    Here is the table structure (with one rule included):

        CREATE TABLE climate.measurement (
          id bigserial NOT NULL,
          station_id integer NOT NULL,
          taken date NOT NULL,
          amount numeric(8,2) NOT NULL,
          category_id smallint NOT NULL,
          flag character varying(1) NOT NULL DEFAULT ' '::character varying
        )
        WITH (
          OIDS=FALSE
        );
        ALTER TABLE climate.measurement OWNER TO postgres;

        CREATE OR REPLACE RULE i_measurement_01_001 AS
        ON INSERT TO climate.measurement
        WHERE date_part('month'::text, new.taken)::integer = 1 AND new.category_id = 1
        DO INSTEAD
        INSERT INTO climate.measurement_01_001 (id, station_id, taken, amount, category_id, flag)
        VALUES (new.id, new.station_id, new.taken, new.amount, new.category_id, new.flag);

    I can generate the data into any format. I am looking for something that won't take four days. I originally had the data in MySQL (still do), but am hoping to get a performance increase by switching to PostgreSQL, and am eager to use its PL/R extensions for stats. I was also thinking about using pg_bulkload: http://pgbulkload.projects.postgresql.org/

    Any help, tips, or guidance would be greatly appreciated. Thank you!
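    Since the data can be generated in any format, and COPY skips the rules anyway, one hedged idea is to pre-partition the rows by (month, category_id) and COPY each file straight into its child table. A minimal sketch in Python; the measurement_MM_CCC file naming follows the rule shown above, while the input file name and its header line are assumptions:

        import csv

        out = {}
        with open("measurement.csv") as src:
            rows = csv.reader(src)
            next(rows)                          # skip the header line
            for station_id, taken, amount, category_id, flag in rows:
                # taken looks like '1984-07-1' (single-quoted), so strip the quotes
                month = int(taken.strip("'").split("-")[1])
                key = (month, int(category_id))
                if key not in out:
                    out[key] = open("measurement_%02d_%03d.csv" % key, "w")
                out[key].write(",".join((station_id, taken, amount,
                                         category_id, flag)) + "\n")
        for f in out.values():
            f.close()

    Each partial file would then be loaded with a COPY into the matching climate.measurement_MM_CCC table, naming the columns explicitly since id is generated.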


  • How to avoid multiple, unused has_many associations when using multiple models for the same entity (

    - by mikep
    Hello, I'm looking for a nice, Ruby/Rails-esque solution for something. I'm trying to split up some data using multiple tables rather than one gigantic table, my reasoning being to avoid the performance drop that would come with a big table. (I know I could use a partition but, for now, I've decided to go the multiple-tables route.) So, rather than one table called books, I have multiple tables: books1, books2, books3, etc. Each user's books are placed into a specific table; the actual book table is chosen when the user is created, and all of their books go into the same table. The goal is to try and keep each table pretty much even -- but that's a different issue.

    One thing I don't particularly want is a bunch of unused associations in the User class. Right now, it looks like I'd have to do the following:

        class User < ActiveRecord::Base
          has_many :books1, :books2, :books3, :books4, :books5
        end

        class Books1 < ActiveRecord::Base
          belongs_to :user
        end

        class Books2 < ActiveRecord::Base
          belongs_to :user
        end

    First off, for each specific user only one of the book tables is usable/applicable, since all of a user's books are stored in the same table. So only one of the associations is in use at any time, and any other has_many :bookX association that was loaded is a waste. I don't really know what Ruby/Rails does internally with all of those has_many associations, so maybe it's not so bad. But right now I'm thinking it's really wasteful, and that there may be a better, more efficient way of doing this.

    Is there some sort of special Ruby/Rails methodology that could be applied here to avoid having all of those has_many associations? Also, does anyone have any advice on how to abstract the fact that there are multiple book tables behind a single books model/class?

