Search Results

Search found 20904 results on 837 pages for 'disk performance'.

  • Disposing underlying object from finalizer in an immutable object

    - by Juan Luis Soldi
    I'm trying to wrap around Awesomium and make it look, to the rest of my code, as close as possible to .NET's WebBrowser, since this is for an existing application that already uses the WebBrowser. In this library there is a class called JSObject which represents a JavaScript object. You can get one of these, for instance, by calling the ExecuteJavascriptWithResult method of the WebView class. If you call myWebView.ExecuteJavascriptWithResult("document", string.Empty).ToObject(), you get a JSObject that represents the document. I'm writing an immutable class (its only field is a readonly JSObject) called JSObjectWrap that wraps around JSObject, which I want to use as a base class for other classes that emulate .NET classes such as HtmlElement and HtmlDocument. Now, these classes don't implement Dispose, but JSObject does. My first thought was to call the underlying JSObject's Dispose method in JSObjectWrap's finalizer (instead of having JSObjectWrap implement Dispose) so that the rest of my code can stay the way it is (instead of adding usings everywhere and making sure every JSObjectWrap is properly disposed). But I just realized that if two or more JSObjectWraps share the same underlying JSObject and one of them gets finalized, this will break the other JSObjectWraps. So now I'm thinking maybe I should keep a static Dictionary of JSObjects and count how many of each are referenced by a JSObjectWrap, but this sounds messy and I think it could cause major performance issues. Since this sounds like a common pattern, I wonder if anyone else has a better idea.
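
    The counting idea at the end is ordinary reference counting over the shared native handle. A minimal sketch of it, in Java for illustration since the pattern itself is language-neutral; NativeHandle is a hypothetical stand-in for JSObject, not part of Awesomium's API:

        import java.util.HashMap;
        import java.util.Map;

        interface NativeHandle {
            void dispose();   // frees the underlying native resource
        }

        final class SharedHandleRegistry {
            private static final Map<NativeHandle, Integer> refCounts = new HashMap<>();

            // Called from each wrapper's constructor.
            static synchronized void retain(NativeHandle h) {
                refCounts.merge(h, 1, Integer::sum);
            }

            // Called from each wrapper's finalizer; only the last release disposes.
            static synchronized void release(NativeHandle h) {
                Integer count = refCounts.get(h);
                if (count == null) return;       // unknown handle: nothing to do
                if (count == 1) {
                    refCounts.remove(h);
                    h.dispose();
                } else {
                    refCounts.put(h, count - 1);
                }
            }
        }

    The cost is one synchronized map lookup per wrapper construction and finalization, which is small next to the native round-trips these wrappers already make.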

  • When are SQL views appropriate in ASP.net MVC?

    - by sslepian
    I've got a table called Protocol, a table called Eligibility, and a Protocol_Eligibility table that maps the two together (a many-to-many relationship). If I wanted to make a perfect copy of an entry in the Protocol table, and create all the needed mappings in the Protocol_Eligibility table, would using an SQL view be helpful from a performance standpoint? Protocol will have around 1000 rows, Eligibility will have about 200, and I expect each Protocol to map to about 10 Eligibility rows and each Eligibility to map to over 100 rows in Protocol. Here's how I'm doing this with the view:

        var pel_original = (from pel in _documentDataModel.Protocol_Eligibility_View
                            where pel.pid == id
                            select pel);
        Protocol_Eligibility newEligibility;
        foreach (var pel_item in pel_original)
        {
            newEligibility = new Protocol_Eligibility();
            newEligibility.Eligibility = (from pel in _documentDataModel.Eligibility
                                          where pel.ID == pel_item.eid
                                          select pel).First();
            newEligibility.Protocol = newProtocol;
            newEligibility.ordering = pel_item.ordering;
            _documentDataModel.AddToProtocol_Eligibility(newEligibility);
        }

    And this is without the view:

        var pel_original = (from pel in _documentDataModel.Protocol_Eligibility
                            where pel.Protocol.ID == id
                            select pel);
        Protocol_Eligibility newEligibility;
        foreach (var pel_item in pel_original)
        {
            pel_item.EligibilityReference.Load();
            newEligibility = new Protocol_Eligibility();
            newEligibility.Eligibility = pel_item.Eligibility;
            newEligibility.Protocol = newProtocol;
            newEligibility.ordering = pel_item.ordering;
            _documentDataModel.AddToProtocol_Eligibility(newEligibility);
        }

  • ways to avoid global temp tables in oracle

    - by Omnipresent
    We just converted our SQL Server stored procedures to Oracle procedures. The SQL Server SPs were highly dependent on session tables (INSERT INTO #table1...); these got converted to global temporary tables in Oracle. We ended up with around 500 GTTs for our 400 SPs. Now we are finding out that working with GTTs in Oracle is considered a last resort because of performance and other issues. What other alternatives are there? Collections? Cursors? Our typical use of GTTs is like so:

    Insert into the GTT:

        INSERT INTO some_gtt_1
            (column_a, column_b, column_c)
        (SELECT someA, someB, someC
           FROM TABLE_A
          WHERE condition_1 = 'YN756'
            AND type_cd = 'P'
            AND TO_NUMBER(TO_CHAR(m_date, 'MM')) = '12'
            AND (lname LIKE (v_LnameUpper || '%') OR lname LIKE (v_searchLnameLower || '%'))
            AND (e_flag = 'Y' OR it_flag = 'Y' OR fit_flag = 'Y'));

    Update the GTT:

        UPDATE some_gtt_1 a
           SET column_a = (SELECT b.data_a
                             FROM some_table_b b
                            WHERE a.column_b = b.data_b
                              AND a.column_c = 'C')
         WHERE column_a IS NULL
            OR column_a = ' ';

    and later on get the data out of the GTT. These are just sample queries; in actuality the queries are really complex, with lots of joins and subqueries. I have a three-part question:

    1. Can someone show how to transform the above sample queries to collections and/or cursors?
    2. Since with GTTs you can work natively in SQL, why move away from them? Are they really that bad?
    3. What should the guidelines be on when to use and when to avoid GTTs?

  • Maintaining a pool of DAO Class instances vs doing new operator

    - by Fazal
    We have been trying to benchmark our application's performance in multiple ways for some time now. I always believed that object creation in Java using Class.newInstance() was not slow (at least after version 1.4 of Java), but we did a test anyway, comparing the newInstance method against maintaining an object pool of 1000 objects. We did about 200K iterations of loading data from the DB using JDBC and populating these objects. I was amazed (even shocked) to see that the newInstance code was almost 10 times slower than the object pool code. These objects represent tables with about 50 fields, all of type String. Can someone share their thoughts on this issue? I am now wondering whether object pooling of at least some DAO instances is a better option. The pool size, as I see it right now, should be large enough to meet the size of average requests. The flip side is that my memory footprint will go up, but I am beginning to wonder if this kind of idea makes sense, at least for the DAO entities representing tables of about 50 or more columns. Please share your ideas, and let me know if this has been tried by someone or whether I am missing some point here.
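
    For reference, a minimal sketch of the pooling pattern being benchmarked; the Supplier-based factory is illustrative, and the pool deliberately leaves resetting a returned object's fields to the caller:

        import java.util.ArrayDeque;
        import java.util.Deque;
        import java.util.function.Supplier;

        final class SimplePool<T> {
            private final Deque<T> free = new ArrayDeque<>();
            private final Supplier<T> factory;

            SimplePool(Supplier<T> factory, int initialSize) {
                this.factory = factory;
                for (int i = 0; i < initialSize; i++) free.push(factory.get());
            }

            // Hand out a pooled instance, growing on demand if the pool is empty.
            synchronized T borrow() {
                return free.isEmpty() ? factory.get() : free.pop();
            }

            // Return an instance for reuse; the caller must reset its fields first.
            synchronized void giveBack(T obj) {
                free.push(obj);
            }
        }

    Constructed as, say, new SimplePool<>(SomeDao::new, 1000) (SomeDao being whatever your row class is), each JDBC iteration borrows before populating and gives back after the row is consumed. Stale state in recycled objects is the classic bug this invites, which is worth weighing against the 10x figure.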

  • error detection/correction/recovery in serial protocols

    - by Jason S
    I have some designing to do for a serial protocol and am running into some questions that I figure must have been considered elsewhere. So I'm wondering if there are some recommendations for best practices in designing serial protocols. (Please either state a fact that is easily verifiable, or cite a reputable source if you make a claim.) General recommendations for websites/books are also welcome. In particular I have to deal with issues like:

    - parsing a stream of bytes into packets
    - verifying a packet is correct (easy with a CRC, for instance)
    - identifying reasonable types of errors that can occur (e.g. in a point-to-point serial stream, sporadic single-bit errors and dropped series of bytes are both likely, but extra phantom bytes are unlikely; whereas with a record stored in flash memory or on a disk drive the types of errors that predominate are different)
    - error correction or recovery (if I detect an error in a packet, can I correct it? If not, can I resync to the boundary of the next packet?)
    - how to make variable-length packets robust to error correction / recovery

    Any suggestions?
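
    On the resync question, one conventional scheme is a sync byte plus an explicit length plus a trailing CRC, rescanning from the next byte whenever a check fails. A sketch in Java; the sync value, one-byte length field, and CRC-32 are assumptions for illustration, not recommendations from a source:

        import java.util.zip.CRC32;

        final class FrameParser {
            static final int SYNC = 0x7E;

            // Returns the payload of the first valid frame at or after 'from',
            // or null if none is complete yet. A corrupted frame is skipped by
            // advancing one byte at a time until a valid frame is found again.
            static byte[] nextFrame(byte[] buf, int from) {
                for (int i = from; i + 2 < buf.length; i++) {
                    if ((buf[i] & 0xFF) != SYNC) continue;   // hunt for the sync byte
                    int len = buf[i + 1] & 0xFF;             // 1-byte length field
                    int end = i + 2 + len + 4;               // header + payload + CRC32
                    if (end > buf.length) continue;          // frame not fully received
                    CRC32 crc = new CRC32();
                    crc.update(buf, i + 2, len);
                    long stored = 0;
                    for (int b = 0; b < 4; b++) {
                        stored = (stored << 8) | (buf[i + 2 + len + b] & 0xFF);
                    }
                    if (crc.getValue() == stored) {
                        byte[] payload = new byte[len];
                        System.arraycopy(buf, i + 2, payload, 0, len);
                        return payload;
                    }
                    // bad CRC: keep scanning from the next byte
                }
                return null;
            }
        }

    Because the length is validated through the CRC, a corrupted length byte cannot make the parser swallow the stream; the worst case is losing the frames the damage actually touched.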

  • Automatic .NET code, NHibernate session, and LINQ DataContext clean-up?

    - by AverageJoe719
    Hi all, in my goal to adopt better coding practices I have a few questions, in general, about automatic clean-up. I am not sure if I should have split them into 3 questions, but they all seem sort of related:

    1) How does .NET handle instances of classes and other things that take up memory? I recently found out about using the factory pattern for certain things like service classes, so that they are only instantiated once in the entire application, but then I was told that ".NET handles a lot of that stuff automatically" when I mentioned it.

    2) How does NHibernate's session handle automatic clean-up of unused things? I've seen some say that it is great at handling things automatically and that you should just use a session factory and that's it, no need to close it. But I have also read and seen many examples where people close the Hibernate session.

    3) How does LINQ's DataContext handle this? Most of the time I never called Dispose on my DataContexts and the app didn't seem to take a performance hit (though I am not running anything super intensive), but it seems like most people recommend disposing of your DataContext after you are done with it. However, I have seen many, many code examples where the Dispose method is never called. Also, in general, I found it kind of annoying that you couldn't access even one-deep child related objects after disposing of the DataContext unless you explicitly also grabbed them in the query.

    Thanks all. I am loving this site so far; I kind of get lost and spend hours just reading things on here. =)

  • Can I automatically throw descriptive exceptions with parameter values and class field information?

    - by Robert H.
    I honestly don't throw exceptions often. I catch them even less, ironically. I currently work in a shop where we let them bubble up to AVIcode. For whatever reason, however, AVIcode isn't configured to capture some of the critical bits I need when these exceptions come bouncing back to my attention. Specifically, I'd like to see the parameter values and the class's field data at the time of the exception. I'd guess that with the large suite of .NET services I could create a static method to crawl up the stack, gather these bits, and store them in a string that I could stick in my exception message. I really don't care how long such a method would take to execute, as performance is no longer a concern when I hit one of these scenarios. If it's possible, I'm sure someone has done it. If that's the case, I'm having a hard time finding it. I think any search containing "exception" brings back too many results. Anyway, can this be done? If so, some examples or links would be great. Thanks in advance for your time, Robert
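
    For illustration, here is the sort of helper the question imagines, sketched in Java (a .NET version would use reflection the same way). Parameter values are not reachable reflectively from inside a method, so the caller passes the values it cares about; field state is gathered via reflection. All names are made up for the example:

        import java.lang.reflect.Field;

        final class ExceptionContext {
            // Builds "args=[...] fields={...}" for embedding in an exception message.
            static String describe(Object instance, Object... args) {
                StringBuilder sb = new StringBuilder("args=[");
                for (int i = 0; i < args.length; i++) {
                    if (i > 0) sb.append(", ");
                    sb.append(args[i]);
                }
                sb.append("] fields={");
                for (Field f : instance.getClass().getDeclaredFields()) {
                    f.setAccessible(true);   // may be restricted by a security policy
                    try {
                        sb.append(f.getName()).append('=').append(f.get(instance)).append(' ');
                    } catch (IllegalAccessException e) {
                        sb.append(f.getName()).append("=<unreadable> ");
                    }
                }
                return sb.append('}').toString();
            }
        }

    A throw site then reads something like: throw new IllegalStateException("lookup failed: " + ExceptionContext.describe(this, accountId)). Slow, as noted, but that only matters on the failure path.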

  • Why does File.Exists return false?

    - by Jonas Stawski
    I'm querying all images on the Android device as such:

        string[] columns = { MediaStore.Images.Media.InterfaceConsts.Data,
                             MediaStore.Images.Media.InterfaceConsts.Id };
        string orderBy = MediaStore.Images.Media.InterfaceConsts.Id;
        var imagecursor = ManagedQuery(MediaStore.Images.Media.ExternalContentUri,
                                       columns, null, null, orderBy);
        for (int i = 0; i < this.Count; i++)
        {
            imagecursor.MoveToPosition(i);
            Paths[i] = imagecursor.GetString(dataColumnIndex);
            Console.WriteLine(Paths[i]);
            Console.WriteLine(System.IO.File.Exists(Paths[i]));
        }

    The problem is that the output shows that some files don't exist. Here's a sample output:

        /storage/sdcard0/Download/On-Yom-Kippur-Jews-choose-different-shoes-VSETQJ6-x-large.jpg
        False
        /storage/sdcard0/Download/397277_10151250943161341_876027377_n.jpg
        False
        /storage/sdcard0/Download/Roxy_Cottontail_&_Melo-X_Present..._Some_Bunny_Love's_You.jpg
        False
        /storage/sdcard0/Download/album-The-Rolling-Stones-Some-Girls.jpg
        True
        /storage/sdcard0/Download/some-people-ust-dont-appreciate-fashion[1].jpg
        True
        /storage/sdcard0/Download/express.gif
        True
        ...
        /storage/sdcard0/Download/some-joys-are-expressed-better-in-silence.JPG
        False

    How is this possible? I downloaded these images myself from the internet! They should exist on disk.

  • XElement vs Dictionary

    - by user135498
    Hi All, I need advice. I have an application that imports 10,000 rows containing name & address from a text file into XElements that are subsequently added to a synchronized queue. When the import is complete, the app spawns worker threads that process the XElements by dequeuing them, making a database call, inserting the database output into the request document, and inserting the processed document into an output queue. When all requests have been processed, the output queue is written to disk as an XML doc. I used XElements for the requests because I needed the flexibility to add fields to the request during processing, i.e. depending on the job type, the app might need to add a phone number, date of birth, or email address to a request based on a name/address match against a public record database. My question is: the XElements seem to use quite a bit of memory, and I know there is a lot of parsing as the document makes its way through the processing methods. I'm considering replacing the XElements with a Dictionary object, but I'm skeptical the gain will be worth the effort. In essence it will accomplish the same thing. Thoughts?

  • MKL Accelerated Math Libraries for Java...

    - by Kaopua
    I've looked at the related threads on StackOverflow and Googled without much luck. I'm also very new to Java (I'm coming from a C# and .NET background), so please bear with me. There is so much available in the Java world that it's pretty overwhelming. I'm starting on a new Java-on-Linux project that requires some heavy and highly repetitious numerical calculations (i.e. statistics, FFT, linear algebra, matrices, etc.). So maximizing the performance of the mathematical operations is a requirement, as is ensuring the math is correct. Hence I have an interest in finding a Java library that leverages native acceleration such as MKL and is proven (so commercial options are definitely a possibility here). In the .NET space there are highly optimized, MKL-accelerated commercial mathematical libraries such as CenterSpace NMath and Extreme Optimization. Is there anything comparable in Java? Most of the math libraries I have found for Java either do not seem to be actively maintained (such as Colt) or do not appear to leverage MKL or other native acceleration (such as Apache Commons Math). I have considered trying to leverage MKL directly from Java myself (e.g. via JNI), but being new to Java (let alone to interoperating between Java and native libraries) it seemed smarter to find a Java library that has already done this correctly, efficiently, and is proven. Again, I apologize if I am mistaken or misguided (even regarding any libraries I've mentioned) and for my ignorance of the Java offerings. It's a whole new world for me coming from the heavily commercialized Microsoft stack, so I could easily be mistaken on where to look and regarding the Java libraries I've mentioned. I would greatly appreciate any help or advice.

  • Porting - Shared Memory x32 & x64 processes

    - by dpb
    A 32-bit host Windows application sets up shared memory (using a memory-mapped file / the CreateFileMapping() API), and other 32-bit client processes then use this shared memory to communicate with each other. I am planning to port the host application to a 64-bit platform, and once it is ready I intend that both 32-bit and 64-bit client processes should be able to use the shared memory set up by the main 64-bit host application. The original code written for the x32 host application uses "size_t" almost everywhere; since this differs from 4 bytes to 8 bytes as we move from x32 to x64, I am looking to replace it. I intend to replace "size_t" with "unsigned long long", so that its size will be the same on 32-bit and 64-bit. Can you please suggest a better alternative? Also, will the use of "unsigned long long" have a performance impact on the x32 app? I guess yes.

    Research done so far: found very useful articles covering

    a) 20 issues in porting from 32-bit to 64-bit (www.viva64.com)
    b) the fact that there is no way to restrict/change "size_t" on the x64 platform to 4 bytes using compiler flags or any hooks/crooks, since it is a typedef

  • JSON VIEW using GROUP_CONCAT question

    - by Dan Beam
    Hey DBAs and overall smart dudes. I have a question for you. We use MySQL VIEWs to format our data as JSON when it's returned (as a BLOB), which is convenient (though not particularly nice on performance, but we already know this). But I can't seem to get a particular query working right now (each row contains NULL when it should contain a created JSON object with the values of multiple JOINs). Here's the general idea:

        SELECT CONCAT(
            "{",
                "\"some_list\":[", GROUP_CONCAT( DISTINCT t1.id ), "],",
                "\"other_list\":[", GROUP_CONCAT( DISTINCT t2.id ), "],",
            "}"
        ) cool_json
        FROM table_name tn
        INNER JOIN ( some_table st )
            ON st.some_id = tn.id
        LEFT JOIN ( another_table at, another_one ao, used_multiple_times t1 )
            ON st.id = at.some_id
            AND at.different_id = ao.different_id
            AND ao.different_id = t1.id
        LEFT JOIN ( another_table2 at2, another_one2 ao2, used_multiple_times t2 )
            ON st.id = at2.some_id
            AND at2.different_id = ao2.different_id
            AND ao2.different_id = t2.id
        GROUP BY tn.id
        ORDER BY tn.name

    Anybody know the problem here? Am I missing something I should be grouping by? It was working when I was only doing 1 LEFT JOIN & GROUP_CONCAT, but now with multiple JOINs / GROUP_CONCATs it's messing it up. When I move the GROUP_CONCATs out of the "cool_json" field they work as expected, but I'd like my data formatted as JSON so I can decode it server-side or client-side in one step.

  • How to implement web cache: internal fragmentation VS external fragmentation

    - by Summer_More_More_Tea
    Hi there: I came up with this question while playing with the Firefox web cache: how does the browser cache a response in limited disk space (taking my configuration as an example, 50 MB is the upper bound)? I think two approaches can be employed. One is to cache whole response objects one by one, but this is inefficient and will introduce external fragmentation, so the total cache space may not be fully used. The second is to treat the total space (50 MB) as one contiguous file, splitting it into fixed-length slots; incoming response objects are then treated as blocks of data with the same length as the slots. We can fill slots until the whole file runs out, and then some displacement algorithm can be used to swap out the old cached objects. The latter approach will of course bring in internal fragmentation, but in my opinion it is easier to implement and maintain than the first strategy. But when I look inside Firefox's Cache directory, I find it (maybe) uses a different method: a lot of varied-length files reside in that directory, and all those files are filled with undisplayable characters. I really want to know what mechanism a commercial browser, e.g. Firefox, employs to implement its web cache. Regards.
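
    A minimal sketch of that second, fixed-slot approach, in Java for illustration; the 4 KB slot size is an arbitrary assumption and the displacement algorithm is left out:

        final class SlotCache {
            static final int SLOT = 4096;        // slot size in bytes (illustrative)
            private final byte[] store;
            private final boolean[] used;        // one occupancy flag per slot

            SlotCache(int totalBytes) {
                store = new byte[totalBytes];
                used = new boolean[totalBytes / SLOT];
            }

            // Stores the object in a contiguous run of slots and returns the first
            // slot index, or -1 if no run is free. The unused tail of the last
            // slot is the internal fragmentation described above.
            int put(byte[] object) {
                int slotsNeeded = (object.length + SLOT - 1) / SLOT;
                outer:
                for (int i = 0; i + slotsNeeded <= used.length; i++) {
                    for (int j = 0; j < slotsNeeded; j++) {
                        if (used[i + j]) continue outer;
                    }
                    for (int j = 0; j < slotsNeeded; j++) used[i + j] = true;
                    System.arraycopy(object, 0, store, i * SLOT, object.length);
                    return i;
                }
                return -1;   // full: this is where a displacement algorithm would free slots
            }
        }

    Note that objects spanning several slots still need a contiguous run, so slotting trades external fragmentation at byte granularity for a milder version at slot granularity, plus the internal padding.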

  • Python socket error on UDP data receive. (10054)

    - by Charles
    I currently have a problem using UDP and Python's socket module. We have a server and clients. The problem occurs when we send data to a user. It's possible that the user may have closed their connection to the server through a client crash, a disconnect by their ISP, or some other improper method, so it is possible to send data to a closed socket. Of course with UDP you can't tell if the data really arrived or if the far end is closed, as it doesn't care (at least, it doesn't raise an exception). However, if you send data to a closed port, you get data back somehow (???), which ends up giving you a socket error on sock.recvfrom: [Errno 10054] An existing connection was forcibly closed by the remote host. It almost seems like an automatic response from the connection. Although this is fine and can be handled by a try/except block (even if it lowers the performance of the server a little bit), the problem is that I can't tell who this is coming from or which socket is closed. Is there any way to find out 'who' (IP, socket #) sent this? It would be great, as I could instantly disconnect them and remove them from the data. Any suggestions? Thanks.
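
    For comparison, in Java the same notification (error 10054 is Windows surfacing an ICMP "port unreachable" for an earlier send) arrives as a PortUnreachableException on a socket that has been connect()ed to a single peer, which is what makes the failing peer unambiguous. A sketch with a placeholder address, suggesting the same remedy in Python: one connected socket per client rather than one shared unconnected one:

        import java.net.DatagramPacket;
        import java.net.DatagramSocket;
        import java.net.InetSocketAddress;
        import java.net.PortUnreachableException;

        public class UdpPeer {
            public static void main(String[] args) throws Exception {
                DatagramSocket sock = new DatagramSocket();
                sock.connect(new InetSocketAddress("198.51.100.7", 9999)); // one peer only
                sock.send(new DatagramPacket(new byte[] { 1 }, 1));
                byte[] buf = new byte[1500];
                try {
                    sock.receive(new DatagramPacket(buf, buf.length));
                } catch (PortUnreachableException e) {
                    // Only the connected peer can be the source of this error,
                    // so its state can be dropped immediately.
                    System.out.println("peer gone: " + sock.getRemoteSocketAddress());
                }
            }
        }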

  • How to implement custom JSF component for drawing chart?

    - by Roman
    I want to create a component which can be used like:

        <mc:chart data="#{bean.data}" width="200" height="300" />

    where #{bean.data} returns a collection of some objects, or a chart model object, or something else that can be represented as a chart (to keep it simple, let's assume it returns a collection of integers). I want this component to generate HTML like this:

        <img src="someimg123.png" width="200" height="300"/>

    The problem is that I have a method which receives data and returns an image, like:

        public RenderedImage getChartImage (Collection<Integer> data) { ... }

    and I also have a component for drawing a dynamic image:

        <o:dynamicImage width="200" height="300" data="#{bean.readyChartImage}" />

    This component generates HTML just as I need, but its parameter is an array of bytes or a RenderedImage, i.e. it needs a method in the bean like this:

        public RenderedImage getReadyChartImage () { ... }

    So, one approach is to use a propertyChangedListener on submit to set the data (Collection<Integer>) for drawing the chart and then use the <o:dynamicImage /> component. But I'd like to create my own component which receives data and draws the chart. I'm using Facelets, but that's not so important here. Any ideas how to create the desired component?

    P.S. One solution I was thinking about is not to use <o:dynamicImage/> and instead use some servlet to stream the image. But I don't know how to implement that correctly, how to tie the JSF component to the servlet, or how to save already-built chart images (regenerating the same image for each request could cause performance problems, IMHO) and so on.
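
    For the servlet idea in the P.S., a minimal sketch; the /chart mapping and the id parameter are made-up names, the drawing itself is elided, and the component's renderer would just emit an img tag whose src points at this URL:

        import java.awt.image.BufferedImage;
        import java.io.IOException;
        import javax.imageio.ImageIO;
        import javax.servlet.annotation.WebServlet;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        @WebServlet("/chart")
        public class ChartServlet extends HttpServlet {
            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                    throws IOException {
                BufferedImage img = new BufferedImage(200, 300, BufferedImage.TYPE_INT_RGB);
                // ... draw the chart for req.getParameter("id") into img.createGraphics() ...
                resp.setContentType("image/png");
                ImageIO.write(img, "png", resp.getOutputStream());
            }
        }

    Caching rendered images keyed by the request parameters (in application scope, or on disk) would address the regeneration worry; the JSF component only has to agree with the servlet on the URL and parameter names.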

  • Hibernate database integrity with multiple java applications

    - by Austen
    We have 2 Java web apps, both read/write, and 3 standalone Java read/write applications (one loads questions via email, one processes an XML feed, one sends email to subscribers). All use Hibernate and share a common code base. The problem we have recently come across is that questions loaded via email sometimes overwrite questions created in one of the web apps. We originally thought this was a caching issue. We've tried turning off the second-level cache, but this doesn't make a difference. We are not explicitly opening and closing sessions, but rather let Hibernate manage them via Util.getSessionFactory().getCurrentSession(), which, thinking about it, may actually be the issue. We'd rather not set up a clustered second-level cache at this stage, as this creates another layer of complexity and we're more than happy with the level of performance we get from the app as a whole. So does implementing an open-session-in-view pattern in the web apps and manually managing the sessions in the standalone apps sound like it would fix this? Or any other suggestions/ideas, please?

        <property name="hibernate.transaction.factory_class">org.hibernate.transaction.JDBCTransactionFactory</property>
        <property name="hibernate.current_session_context_class">thread</property>
        <property name="hibernate.cache.use_second_level_cache">false</property>
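
    For the standalone apps, manual management would look roughly like the classic session-per-unit-of-work idiom sketched below; HibernateUtil stands in for the shared Util class mentioned above, and the loading logic is elided:

        import org.hibernate.Session;
        import org.hibernate.Transaction;

        public class EmailQuestionLoader {
            public void processOneQuestion() {
                // A dedicated session per job, instead of the thread-bound getCurrentSession().
                Session session = HibernateUtil.getSessionFactory().openSession();
                Transaction tx = null;
                try {
                    tx = session.beginTransaction();
                    // ... load the existing question, apply the emailed changes ...
                    tx.commit();                  // flush happens inside a known boundary
                } catch (RuntimeException e) {
                    if (tx != null) tx.rollback();
                    throw e;
                } finally {
                    session.close();              // never reuse the session across jobs
                }
            }
        }

    A bounded unit of work narrows the window in which a stale in-memory question can overwrite a newer row; adding a version column for optimistic locking would close that window entirely.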

  • How does loop address alignment affect the speed on Intel x86_64?

    - by Alexander Gololobov
    I'm seeing a 15% performance degradation of the same C++ code compiled to exactly the same machine instructions but located at differently aligned addresses. When my tiny main loop starts at 0x415220 it's faster than when it starts at 0x415250. I'm running this on an Intel Core 2 Duo. I use gcc 4.4.5 on x86_64 Ubuntu. Can anybody explain the cause of the slowdown and how I can force gcc to align the loop optimally? Here is the disassembly for both cases, with profiler annotation:

        415220  576 12.56% |XXXXXXXXXXXXXX        48 c1 eb 08     shr    $0x8,%rbx
        415224  110  2.40% |XX                    0f b6 c3        movzbl %bl,%eax
        415227       0.00% |                      41 0f b6 04 00  movzbl (%r8,%rax,1),%eax
        41522c   40  0.87% |                      48 8b 04 c1     mov    (%rcx,%rax,8),%rax
        415230  806 17.58% |XXXXXXXXXXXXXXXXXXX   4c 63 f8        movslq %eax,%r15
        415233  186  4.06% |XXXX                  48 c1 e8 20     shr    $0x20,%rax
        415237  102  2.22% |XX                    4c 01 f9        add    %r15,%rcx
        41523a  414  9.03% |XXXXXXXXXX            a8 0f           test   $0xf,%al
        41523c  680 14.83% |XXXXXXXXXXXXXXXX      74 45           je     415283 ::Run(char const*, char const*)+0x4b3
        41523e       0.00% |                      41 89 c7        mov    %eax,%r15d
        415241       0.00% |                      41 83 e7 01     and    $0x1,%r15d
        415245       0.00% |                      41 83 ff 01     cmp    $0x1,%r15d
        415249       0.00% |                      41 89 c7        mov    %eax,%r15d

        415250  679 13.05% |XXXXXXXXXXXXXXXX      48 c1 eb 08     shr    $0x8,%rbx
        415254  124  2.38% |XX                    0f b6 c3        movzbl %bl,%eax
        415257       0.00% |                      41 0f b6 04 00  movzbl (%r8,%rax,1),%eax
        41525c   43  0.83% |X                     48 8b 04 c1     mov    (%rcx,%rax,8),%rax
        415260  828 15.91% |XXXXXXXXXXXXXXXXXXX   4c 63 f8        movslq %eax,%r15
        415263  388  7.46% |XXXXXXXXX             48 c1 e8 20     shr    $0x20,%rax
        415267  141  2.71% |XXX                   4c 01 f9        add    %r15,%rcx
        41526a  634 12.18% |XXXXXXXXXXXXXXX       a8 0f           test   $0xf,%al
        41526c  749 14.39% |XXXXXXXXXXXXXXXXXX    74 45           je     4152b3 ::Run(char const*, char const*)+0x4c3
        41526e       0.00% |                      41 89 c7        mov    %eax,%r15d
        415271       0.00% |                      41 83 e7 01     and    $0x1,%r15d
        415275       0.00% |                      41 83 ff 01     cmp    $0x1,%r15d
        415279       0.00% |                      41 89 c7        mov    %eax,%r15d

  • Can parser combinators be made efficient?

    - by Jon Harrop
    Around 6 years ago, I benchmarked my own parser combinators in OCaml and found that they were ~5× slower than the parser generators on offer at the time. I recently revisited this subject and benchmarked Haskell's Parsec vs a simple hand-rolled precedence-climbing parser written in F#, and was surprised to find the F# to be 25× faster than the Haskell. Here's the Haskell code I used to read a large mathematical expression from file, parse it, and evaluate it:

        import Control.Applicative
        import Text.Parsec hiding ((<|>))

        expr = chainl1 term ((+) <$ char '+' <|> (-) <$ char '-')
        term = chainl1 fact ((*) <$ char '*' <|> div <$ char '/')
        fact = read <$> many1 digit <|> char '(' *> expr <* char ')'

        eval :: String -> Int
        eval = either (error . show) id . parse expr "" . filter (/= ' ')

        main :: IO ()
        main = do
          file <- readFile "expr"
          putStr $ show $ eval file
          putStr "\n"

    and here's my self-contained precedence-climbing parser in F#:

        let rec (|Expr|) (P(f, xs)) = Expr(loop (' ', f, xs))
        and loop = function
          | ' ' as oop, f, ('+' | '-' as op)::P(g, xs)
          | (' ' | '+' | '-' as oop), f, ('*' | '/' as op)::P(g, xs) ->
              let h, xs = loop (op, g, xs)
              let op =
                match op with
                | '+' -> (+)
                | '-' -> (-)
                | '*' -> (*)
                | '/' -> (/)
              loop (oop, op f h, xs)
          | _, f, xs -> f, xs
        and (|P|) = function
          | '('::Expr(f, ')'::xs) -> P(f, xs)
          | c::xs when '0' <= c && c <= '9' -> P(int(string c), xs)

    My impression is that even state-of-the-art parser combinators waste a lot of time backtracking. Is that correct? If so, is it possible to write parser combinators that generate state machines to obtain competitive performance, or is it necessary to use code generation?

  • Complex LINQ paging algorithm

    - by sharepointmonkey
    We have a list of projects that may or may not have a collection of subprojects. Our report needs to contain all the projects except those that are the parent project of a subproject. I need to page this into pages of, say, 25 rows. But if subprojects appear on a page then ALL the subprojects of that project must appear on the same page, so more than 25 items may appear if necessary. I've got as far as:

        var pagedProjects = db.Projects.Where(x => !x.SubProjects.Any())
                                       .Skip((pageNo - 1) * pageSize)
                                       .Take(pageSize);

    Obviously, this fails the second part of the requirements. As a further pain in the arse, I need to have a pager control on the report, so I'll need to be able to calculate the total number of pages. I could loop through the whole table of projects, but performance would suffer. Can anybody come up with a paged solution?

    EDIT - I should probably mention that SubProjects joins back onto Projects via a self-referencing foreign key, so the whole lot comes back as an IQueryable<Project>.
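
    Once the rows are grouped (a parent's subprojects together, a standalone project alone), the paging rule itself is one linear pass. A language-neutral sketch, written in Java here since the rule is independent of LINQ:

        import java.util.ArrayList;
        import java.util.List;

        final class GroupedPager {
            // 'groups' is the ordered report, each inner list an indivisible unit.
            // A page is closed once it reaches pageSize rows, so a large group
            // can push a page past 25 items but is never split across pages.
            static <T> List<List<T>> paginate(List<List<T>> groups, int pageSize) {
                List<List<T>> pages = new ArrayList<>();
                List<T> page = new ArrayList<>();
                for (List<T> group : groups) {
                    page.addAll(group);
                    if (page.size() >= pageSize) {
                        pages.add(page);
                        page = new ArrayList<>();
                    }
                }
                if (!page.isEmpty()) pages.add(page);
                return pages;
            }
        }

    The pager control's total is then just the size of the returned list, which answers the page-count question in the same pass.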

  • Java: Efficiency of the readLine method of the BufferedReader and possible alternatives

    - by Luhar
    We are working to reduce the latency and increase the performance of a process written in Java that consumes data (XML strings) from a socket via the readLine() method of the BufferedReader class. The data is delimited by the end-of-line separator (\n), and each line can be of variable length (6 Kbit to 32 Kbit). Our code looks like:

        Socket sock = connection;
        InputStream in = sock.getInputStream();
        BufferedReader inputReader = new BufferedReader(new InputStreamReader(in));
        ...
        do {
            String input = inputReader.readLine();
            // Executor call to parse the input in a separate thread
        } while (true);

    So I have a couple of questions:

    - Will the inputReader.readLine() method return as soon as it hits the \n character, or will it wait till the buffer is full?
    - Is there a faster way of picking up data from the socket than using a BufferedReader?
    - What happens when the size of the input string is smaller than the size of the socket's receive buffer?
    - What happens when the size of the input string is bigger than the size of the socket's receive buffer?

    I am getting to grips (slowly) with Java's IO libraries, so any pointers are much appreciated. Thank you!
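
    On the first question: readLine() returns as soon as a complete line has been received; it never waits for the buffer (8192 chars by default) to fill. Two low-risk tweaks, as a sketch; the buffer size and the charset here are assumptions to check against the real feed:

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.io.InputStreamReader;
        import java.net.Socket;
        import java.nio.charset.StandardCharsets;

        class LineConsumer {
            void consume(Socket connection) throws IOException {
                // Size the buffer for the largest expected line (32 Kbit = 4 KB) and
                // name the charset so decoding doesn't depend on the platform default.
                BufferedReader inputReader = new BufferedReader(
                        new InputStreamReader(connection.getInputStream(), StandardCharsets.UTF_8),
                        8 * 1024);
                String input;
                while ((input = inputReader.readLine()) != null) {
                    // hand 'input' off to the parsing executor
                }
            }
        }

    Looping on the null return also fixes a latent bug in the do/while above: readLine() returns null at end of stream, and the original loop would spin on that forever.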

  • SQL Server becomes slow after restart

    - by Tobi DM
    We use SQL Server 2005 on a Windows Server 2008. The server has 48 GB RAM; SQL Server is configured to use 40 GB of it. There is only one database hosted (about 70 GB). The only app besides SQL Server is our app server, which connects the clients to the database. Now we encounter the following problem: after a restart of the server, performance is great. The server grabs the 40 GB RAM which it is allowed to and then runs fast as hell. But after about 4 weeks the system becomes slower and slower. The execution time of statements (seen in the profiler) rises slowly. But I cannot see that anything is going wrong on the server:

    - CPU usage is at about 20%
    - I/O also seems to be no problem
    - The process monitor does not show any strange apps or anything like that
    - The event log also has no interesting messages
    - No open transactions or blocking to be seen

    We have already tried the following things without effect:

    - Dropped the caches using the statements DBCC FREEPROCCACHE, DBCC FREESYSTEMCACHE('ALL'), and DBCC DROPCLEANBUFFERS
    - Restarted the app server we are using
    - Restarted the SQL Server service

    But nothing helped except restarting the whole server. Any ideas?

  • Quartz 2D or OpenGL ES? Pros and cons in the long term, possibility of migration to other platforms.

    - by fspirit
    Hi all! I'm having a hard time deciding whether to go with Quartz 2D or OpenGL ES for an iPad game. It will be 2D mostly, but effect-intense (simultaneous lighting effects for 10-30 objects, 10-20 simultaneous animations on the screen). So far, assuming I'm equally dumb in both technologies and have to learn them from the ground up, I've come to this list. (I've read several topics here on SO with names like "Quartz or OpenGL", but I'm still left with some questions.)

    Quartz:
    - Better time-to-market, because of ready-to-use abstractions like UIView, UIImageView, and the Core Animation abstractions

    OpenGL ES:
    - Closer to the hardware, thus better performance
    - An app implemented with OpenGL ES can be migrated more easily to Android, MeeGo, Windows Phone, etc.

    My questions are:

    1. How much time will it take to rewrite a Quartz 2D app to use OpenGL? Let's say it took me 2 man-months to write the Quartz app; how much time will I need to rewrite it? (Please, just some subjective opinions; I'll try to summarize them somehow.)
    2. Regarding the ease of migration to other platforms when using OpenGL: is it really so? Or will the effort of migrating a Quartz app from iPhone OS to Android not be much bigger compared to migrating an OpenGL app? (Ease of migration is quite an important criterion.)
    3. Regarding OpenGL: should I go with OpenGL ES 1.1 or 2.0, concerning migration? (Android supports 2.0 through the NDK, but I don't know whether using the NDK will increase or decrease the migration effort.)

  • Which apache worker to use with passenger and how?

    - by Millisami
    I've got this config in my apache2.conf:

        <IfModule mpm_prefork_module>
            StartServers          5
            MinSpareServers       5
            MaxSpareServers      10
            MaxClients          150
            MaxRequestsPerChild   0
        </IfModule>

        # worker MPM
        # StartServers: initial number of server processes to start
        # MaxClients: maximum number of simultaneous client connections
        # MinSpareThreads: minimum number of worker threads which are kept spare
        # MaxSpareThreads: maximum number of worker threads which are kept spare
        # ThreadsPerChild: constant number of worker threads in each server process
        # MaxRequestsPerChild: maximum number of requests a server process serves
        <IfModule mpm_worker_module>
            StartServers          2
            MaxClients           15
            MinSpareThreads       4
            MaxSpareThreads       5
            ThreadsPerChild      15
            MaxRequestsPerChild 50000
        </IfModule>

    Now I'm confused. Which module gets loaded under which conditions? The Phusion guys have suggested using the worker module. Since both are present in the Apache conf file, do I have to comment out the mpm_prefork_module section or leave it as it is? Following is my Passenger conf file for Apache:

        LoadModule passenger_module /usr/lib/ruby/gems/1.8/gems/passenger-2.2.4/ext/apache2/mod_passenger.so
        PassengerRoot /usr/lib/ruby/gems/1.8/gems/passenger-2.2.4
        PassengerRuby /usr/bin/ruby1.8
        PassengerMaxPoolSize 3
        PassengerPoolIdleTime 999999
        RailsFrameworkSpawnerIdleTime 0
        RailsAppSpawnerIdleTime 0

    I'm running just a single Rails 2.3.2 app on a 256 MB slice at Slicehost. I'm not quite satisfied with the performance yet. Are the settings above any good?

  • Loading from archive?

    - by fuzzygoat
    Can anyone point me in the right direction, I have two sets of load/save methods below that I am playing with to save an array of objects out to disk and then load them back in. I am getting a little confused after calling "saveMoons" how I should go about loading that data back in ... any pointers would be great. -(void)saveMoons:(NSString *)savePath { NSMutableData *data = [[NSMutableData alloc] init]; NSKeyedArchiver *archiver = [[NSKeyedArchiver alloc] initForWritingWithMutableData:data]; [moons encodeWithCoder:archiver]; [archiver finishEncoding]; [data writeToFile:savePath atomically:YES]; [data release]; [archiver release]; } -(void)loadMoons:(NSString *)loadPath { NSMutableData *data = [[NSMutableData alloc] initWithContentsOfFile:loadPath]; NSKeyedUnarchiver *unarchiver = [[NSKeyedUnarchiver alloc] initForReadingWithData:data]; // [self setMoons:[unarchiver ??????]]; } . // ------------------------------------------------------------------- ** // SIMPLER // ------------------------------------------------------------------- ** -(void)saveMoons_SIMPLE:(NSString *)savePath { NSData *data = [NSKeyedArchiver archivedDataWithRootObject:moons]; [data writeToFile:savePath atomically:YES]; } -(void)loadMoons_SIMPLE:(NSString *)loadPath { [self setMoons:[NSKeyedUnarchiver unarchiveObjectWithFile:loadPath]]; } // ------------------------------------------------------------------- ** // // ------------------------------------------------------------------- ** gary

  • Cassandra hot keyspace structure change

    - by Pierre
    Hello. I'm currently running a 12-node Cassandra cluster storing 4TB of data, with a replication factor set to 3. For the needs of an application update, we need to change the configuration of our keyspace, and we'd like to avoid any downtime if possible. I read on a mailing list that the best way to do it is to: Kill cassandra process on one server of the cluster Start it again, wait for the commit log to be written on the disk, and kill it again Make the modifications in the storage.xml file Rename or delete the files in the data directories according to the changes we made Start cassandra Goto 1 with next server on the list My questions would be: Did I understand the process well? Is there any risk of data corruption? During the process, there will be servers with different versions of the storage.xml file in the same cluser, same keyspace. Is it a problem? Same question as above if we not only add, rename and remove ColumnFamilies, but if we change the CompareWith parameter / transform an existing column family into a super one. Or do we need to change the name? Thank you for your answers. It's the first time I'll do this, and I'm a little bit scared.
