Search Results

Search found 13608 results on 545 pages for 'performance dashboard'.


  • semi-dynamic cdn

    - by dwi kristianto
    I'm developing a couple of websites using PHP (a directory script, etc.) and WordPress as the CMS. I need to improve their performance by using a CDN for static files (CSS, JS, images). The problem is that the CSS and JavaScript files are generated on the fly: I did that on the advice of Yahoo and some experts to combine the files into one file, and also to change the basic colors of the CSS. For the time being I use a couple of small VPSes, but it's still not fast enough. I already contacted MaxCDN and the support guy said they don't have that kind of service.

    What I need is a CDN that will serve the request from the user/visitor even when there is no file on local disk; the CDN should redirect/fetch it from another domain/server. On a VPS this can be done easily with a combination of .htaccess and PHP, but NOT on the CDN - most CDNs only support purely static files. Is there any CDN that will serve such semi-dynamic files?

    Read the article

  • javascript pausing consistently. How do I find what is causing it to pause?

    - by pedalpete
    I've got a fairly AJAX-heavy site and I'm trying to tune the performance. I have a function that runs between 20 and 200 times, depending on the user. I'm outputting the time the function takes to execute via console.time in Firefox. The function takes about 4-6 ms to complete.

    The strange thing is that on my larger test, with 200 or so runs through that function, it runs through the first 31, then seems to pause for almost a second before completing the last 170 or so. However, that 'pause' doesn't show up in the console.time logs, I'm not running any other functions, and the object that gets passed to the function looks the same as all the other objects that get passed in. The function is called like this:

        for (var s in thisGroup.events) {
            showEvent(thisGroup.events[s])
        }

    so I don't see how or why it would suddenly pause near the beginning, but only pause once and then continue through. The pause ALWAYS happens on the 31st time through the function. I've taken a close look at the 'thisGroup.events[s]' that it is being run through, and it looks like this for #31:

        "eventId":"5106", "sid":"68", "gid":"29", "uid":"70", "type":"event",
        "startDate":"2010-03-22", "startTime":"6:00 PM", "endDate":"2010-03-22", "endTime":"11:00 PM",
        "durationLength":"5", "durationTime":"5:00", "note":"", "desc":"event"

    The event immediately after the pause, #32, looks like this:

        "eventId":"5111", "sid":"68", "gid":"29", "uid":"71", "type":"event",
        "startDate":"2010-03-22", "startTime":"6:00 PM", "endDate":"2010-03-22", "endTime":"11:00 PM",
        "durationLength":"5", "durationTime":"5:00", "note":"", "desc":"event"

    Another event that runs through with no problem looks like this:

        "eventId":"5113", "sid":"68", "gid":"29", "uid":"72", "type":"event",
        "startDate":"2010-03-22", "startTime":"4:30 PM", "endDate":"2010-03-22", "endTime":"11:00 PM",
        "durationLength":"6.5", "durationTime":"6:30", "note":"", "desc":"event"

    From the console output it doesn't appear that anything is hanging or taking up time in the function itself, as the console.time for each event, including #31 and #32, is 4 ms. Another strange thing here is that the total time running the for loop across the entire object comes out as 1014 ms, which is right for 200 events at 4-6 ms each.

    Any suggestions on how to find this 'pause'? I find it very interesting that it consistently happens between #31 and #32 only!
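
    A minimal sketch (not from the original post) of one way to localize a gap like this: log a wall-clock timestamp per iteration, so a pause that happens between calls, which per-call console.time will not show, becomes visible:

        var last = new Date().getTime();
        for (var s in thisGroup.events) {
            var now = new Date().getTime();
            if (now - last > 100) {
                // anything over 100 ms between calls is worth logging
                console.log('gap of ' + (now - last) + 'ms before event ' + s);
            }
            showEvent(thisGroup.events[s]);
            last = new Date().getTime();
        }

    Firebug's console.profile() / console.profileEnd() around the whole loop is another way to see where the time is actually going.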

    Read the article

  • Python socket error on UDP data receive. (10054)

    - by Charles
    I currently have a problem using UDP and Python's socket module. We have a server and clients. The problem occurs when we send data to a user. It's possible that the user has closed their connection to the server through a client crash, a disconnect by their ISP, or some other improper method. As such, it is possible to send data to a closed socket.

    Of course with UDP you can't tell if the data really arrived or if the port is closed, as it doesn't care (at least, it doesn't raise an exception there). However, if you send data to something that is closed off, you somehow get data back (???), which ends up giving you a socket error on sock.recvfrom: [Errno 10054] An existing connection was forcibly closed by the remote host. It almost seems like an automatic response from the connection. This is fine, and can be handled by a try/except block (even if it lowers the performance of the server a little bit). The problem is, I can't tell who this is coming from or which socket is closed. Is there any way to find out 'who' (IP, socket #) sent this? It would be great, as I could instantly disconnect them and remove them from the data. Any suggestions? Thanks.
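
    For context, a minimal sketch of the try/except pattern being described (illustrative only; handle() is a hypothetical handler). On Windows the 10054 is triggered by an ICMP "port unreachable" reply to an earlier sendto, and the recvfrom that raises it does not carry the dead peer's address:

        import errno
        import socket

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(('', 9999))

        while True:
            try:
                data, addr = sock.recvfrom(4096)
            except socket.error as e:
                # An earlier ICMP "port unreachable" surfaces here as 10054;
                # the address of the dead client is not available, so just
                # swallow it and keep serving.
                if e.errno in (errno.ECONNRESET, 10054):
                    continue
                raise
            handle(data, addr)  # hypothetical handler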

    Read the article

  • Custom UIView using CALayers disappears after 180° rotation or navigation controller pop

    - by Steve Madsen
    I have created a custom UIView subclass that is exhibiting some strange behavior. It is a spinning wheel selector, and for performance reasons it is drawn entirely into two CALayer instances. The bottom layer is the wheel itself, which is rotated using setAffineTransform: according to touches. The top layer is eye candy.

    drawRect: is fairly simple. If the control hasn't been drawn yet (or it's been invalidated), it calls a method that creates the images and assigns them to the layer contents property.

        - (void) drawRect:(CGRect)rect
        {
            if (imageLayer == nil) {
                [self drawIntoImageLayer];
            }
            [self updateWheelRotation];
        }

    When the view controller using this view first appears, everything is fine. There are two situations where the view completely disappears, however:

    1. If the device is rotated a full 180°.
    2. After a view controller is popped off the navigation stack and the view becomes visible again.

    drawRect: is not called either time. Interestingly enough, it IS called after a 90° orientation change, and that causes the view to re-appear. How can I ensure that a custom view using CALayers is redrawn properly in these situations?

    Read the article

  • How should I avoid memoization causing bugs in Ruby?

    - by Andrew Grimm
    Is there a consensus on how to avoid memoization causing bugs due to mutable state? In this example, a cached result had its state mutated, and therefore gave the wrong result the second time it was called.

        class Greeter
          def initialize
            @greeting_cache = {}
          end

          def expensive_greeting_calculation(formality)
            case formality
              when :casual then "Hi"
              when :formal then "Hello"
            end
          end

          def greeting(formality)
            unless @greeting_cache.has_key?(formality)
              @greeting_cache[formality] = expensive_greeting_calculation(formality)
            end
            @greeting_cache[formality]
          end
        end

        def memoization_mutator
          greeter = Greeter.new
          first_person = "Bob"
          # Mildly contrived in this case,
          # but you could encounter this in more complex scenarios
          puts(greeter.greeting(:casual) << " " << first_person)  # => Hi Bob
          second_person = "Sue"
          puts(greeter.greeting(:casual) << " " << second_person) # => Hi Bob Sue
        end

        memoization_mutator

    Approaches I can see to avoid this are:

    1. greeting could return a dup or clone of @greeting_cache[formality].
    2. greeting could freeze the result of @greeting_cache[formality]. That'd cause an exception to be raised when memoization_mutator appends strings to it.
    3. Check all code that uses the result of greeting to ensure none of it does any mutating of the string.

    Is there a consensus on the best approach? Is the only disadvantage of doing (1) or (2) decreased performance? (I also suspect freezing an object may not work fully if it has references to other objects.)

    Side note: this problem doesn't affect the main application of memoization: as Fixnums are immutable, calculating Fibonacci sequences doesn't have problems with mutable state. :)
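
    As a rough sketch of what options (1) and (2) from the list above could look like (illustrative only, not from the question):

        # Option 1: hand out a copy, so callers can't mutate the cached value
        def greeting(formality)
          unless @greeting_cache.has_key?(formality)
            @greeting_cache[formality] = expensive_greeting_calculation(formality)
          end
          @greeting_cache[formality].dup
        end

        # Option 2: freeze the cached value, so a mutating caller fails loudly
        def greeting(formality)
          unless @greeting_cache.has_key?(formality)
            @greeting_cache[formality] = expensive_greeting_calculation(formality).freeze
          end
          @greeting_cache[formality]
        end

    With option 2, the first greeter.greeting(:casual) << " " << first_person call raises instead of silently polluting the cache.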

    Read the article

  • Automatic .NET code, nhibernate session, and LINQ datacontext clean-up?

    - by AverageJoe719
    Hi all, in my goal to adopt better coding practices I have a few questions in general about automatic handling of code. I have heard different answers both online and from talking with other developers/programmers at my work. I am not sure if I should have split them into three questions, but they all seem sort of related:

    1. How does .NET handle instances of classes and other things that take up memory? I recently found out about using the factory pattern for certain things like service classes so that they are only instantiated once in the entire application, but then I was told that ".NET handles a lot of that stuff automatically" when I mentioned it.

    2. How does NHibernate's session handle automatic clean-up of unused things? I've seen some say that it is great at handling things automatically and you should just use a session factory and that's it, no need to close it. But I have also read and seen many examples where people close the Hibernate session.

    3. How does LINQ's DataContext handle this? Most of the time I never disposed my DataContexts and the app didn't seem to take a performance hit (though I am not running anything super intensive), but it seems like most people recommend disposing of your DataContext after you are done with it. However, I have seen many code examples where the Dispose method is never called. Also, in general I found it kind of annoying that you couldn't access even one-deep child related objects after disposing of the DataContext unless you explicitly also grabbed them in the query.

    Thanks all. I am loving this site so far, I kind of get lost and spend hours just reading things on here. =)

    Read the article

  • Help on understanding multiple columns on an index?

    - by Xaisoft
    Assume I have a table called "table" and I have 3 columns, a, b, and c.

    1. What does it mean to have a non-clustered index on columns a,b? Is a non-clustered index on columns a,b the same as a non-clustered index on columns b,a? (Note the order.)
    2. Also, is a non-clustered index on column a the same as a non-clustered index on a,c? I was looking at the SQL Server performance website and they had these DMV scripts that would tell you if you had overlapping indexes, and I believe it was saying that having an index on a is the same as having one on a,b, so it is redundant. Is this true about indexes?
    3. One last question: why is the clustered index put on the primary key? Most of the time the primary key is not queried against, so shouldn't the clustered index be on the most-queried column? I am probably missing something here, like having it on the primary key speeds up joins?

    Great explanations. Should I turn this into a wiki and change the title to "index explanation"?
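
    To make the column-order point concrete, a small illustrative T-SQL sketch (hypothetical table and index names, not from the question):

        -- hypothetical table and indexes, for illustration only
        CREATE TABLE dbo.t (a INT, b INT, c INT);

        CREATE NONCLUSTERED INDEX ix_t_a_b ON dbo.t (a, b);  -- keyed on a, then b
        CREATE NONCLUSTERED INDEX ix_t_b_a ON dbo.t (b, a);  -- a different index: keyed on b, then a

        -- ix_t_a_b can seek on these:
        SELECT * FROM dbo.t WHERE a = 1;
        SELECT * FROM dbo.t WHERE a = 1 AND b = 2;

        -- ...but not on this (b is not the leading column), while ix_t_b_a can:
        SELECT * FROM dbo.t WHERE b = 2;

    In that sense an index on (a) is made redundant by an index on (a, b), since the wider index can serve the same seeks, but the reverse does not hold.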

    Read the article

  • Can I automatically throw descriptive exceptions with parameter values and class field information?

    - by Robert H.
    I honestly don't throw exceptions often. I catch them even less, ironically. I currently work in a shop where we let them bubble up to AVIcode. For whatever reason, however, AVIcode isn't configured to capture some of the critical bits I need when these exceptions come bouncing back to my attention. Specifically, I'd like to see the parameter values and the class's field data at the time of the exception.

    I'd guess that with the large suite of .NET services I could create a static method to crawl up the stack, gather these bits, and store them in a string that I could stick in my exception message. I really don't care how long such a method would take to execute, as performance is no longer a concern when I hit one of these scenarios. If it's possible, I'm sure someone has done it. If that's the case, I'm having a hard time finding it; I think any search containing "exception" brings back too many results.

    Anyway, can this be done? If so, some examples or links would be great. Thanks in advance for your time, Robert

    Read the article

  • How to choose light version of database system

    - by adopilot
    I am starting a POS (point of sale) project. The target system is going to be written in C# .NET 2 WinForms, and as the main database server we are going to use MS SQL Server. As we have a lot of POS devices in a chain for one store, I would love to have a local backend database on each POS device. The scenario is the following: when the main server goes down, the POS application should continue working "off-line" with the local database until the connection to the main server comes up again.

    Now I am in a dilemma over which local database is going to be most suitable for me. Here are some notes to help point me in the right direction:

    - It should be light ("my POS devices are usually old and suffering performance-wise").
    - It should be free ("I have a lot of devices and I do not want additional cost beside the main SQL Server").
    - One day I'd love to try porting all of that to Mono and Linux.

    Here is what I've researched so far:

    - Simple XML: "light, but I am afraid of performance; my main table of items averages 10K records".
    - SQL Server Express: "I am afraid my POS devices have too poor hardware for SQL Express, and it is also hard to install and configure on each device".
    - The lesser-known Advantage Database Server, which has a free distribution of its offline ADT system.
    - DBF with an extended library: "respect for good old DBFs, but that era is behind me, with Clipper and DBFs".
    - MS Access.
    - SQLite: "the one I mostly like for now, but I am afraid how it is going to pair with MS SQL - do they have the same data types?".

    I know that on SO this is a lot of subjective data, but can someone at least recommend some other lite database systems, or things that I should pay most attention to before I choose a database?

    Read the article

  • Algorithm to produce Cartesian product of arrays in depth-first order

    - by Yuri Gadow
    I'm looking for an example of how, in Ruby, a C-like language, or pseudo code, to create the Cartesian product of a variable number of arrays of integers, each of differing length, and step through the results in a particular order. So given [1,2,3], [1,2,3], [1,2,3]:

        [1, 1, 1]
        [2, 1, 1]
        [1, 2, 1]
        [1, 1, 2]
        [2, 2, 1]
        [1, 2, 2]
        [2, 1, 2]
        [2, 2, 2]
        [3, 1, 1]
        [1, 3, 1]
        etc.

    instead of the typical result I've seen (including the example I give below):

        [1, 1, 1]
        [2, 1, 1]
        [3, 1, 1]
        [1, 2, 1]
        [2, 2, 1]
        [3, 2, 1]
        [1, 3, 1]
        [2, 3, 1]
        etc.

    The problem with this example is that the third position isn't explored at all until all combinations of the first two are tried. In the code that uses this, that means even though the right answer is generally (the much larger equivalent of) 1,1,2, it will examine a few million possibilities instead of just a few thousand before finding it. I'm dealing with result sets of one million to hundreds of millions, so generating them and then sorting isn't doable here, and it would defeat the reason for ordering them as in the first example, which is to find the correct answer sooner and so break out of the Cartesian product generation earlier.

    Just in case it helps clarify any of the above, here's how I do this now (this has correct results and the right performance, but not the order I want, i.e. it creates results as in the second listing above):

        def cartesian(a_of_a)
          a_of_a_len = a_of_a.size
          result = Array.new(a_of_a_len)
          j, k, a2, a2_len = nil, nil, nil, nil
          i = 0
          while 1 do
            j, k = i, 0
            while k < a_of_a_len
              a2 = a_of_a[k]
              a2_len = a2.size
              result[k] = a2[j % a2_len]
              j /= a2_len
              k += 1
            end
            return if j > 0
            yield result
            i += 1
          end
        end

    UPDATE: I didn't make it very clear that I'm after a solution where all the combinations of 1,2 are examined before 3 is added in, then all 3 and 1, then all 3, 2 and 1, then all 3,2. In other words, explore all earlier combinations "horizontally" before "vertically". The precise order in which those possibilities are explored, i.e. 1,1,2 or 2,1,1, doesn't matter, just that all 2 and 1 are explored before mixing in 3, and so on.
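
    A rough sketch (mine, not from the question) of one way to get that "horizontal before vertical" order: emit every tuple whose highest used index is 0, then every tuple whose highest used index is 1, and so on, so a new value like 3 is only mixed in after all combinations of 1 and 2 are exhausted:

        # Yield tuples built from indices 0..depth, keeping only those that
        # actually use index `depth` at least once (otherwise they were already
        # produced in an earlier pass).
        def each_tuple(arrays, depth, prefix = [], used_depth = false, &blk)
          if prefix.size == arrays.size
            yield prefix if used_depth
            return
          end
          arr = arrays[prefix.size]
          limit = [depth, arr.size - 1].min
          (0..limit).each do |i|
            each_tuple(arrays, depth, prefix + [arr[i]], used_depth || i == depth, &blk)
          end
        end

        def cartesian_by_depth(arrays)
          max_len = arrays.map { |a| a.size }.max
          (0...max_len).each do |depth|
            each_tuple(arrays, depth) { |tuple| yield tuple }
          end
        end

        cartesian_by_depth([[1, 2, 3], [1, 2, 3], [1, 2, 3]]) { |t| p t }

    The exact order within one depth pass differs from the listing above, but all tuples drawn from the first two values come out before any tuple containing the third, which is the property being asked for.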

    Read the article

  • ways to avoid global temp tables in oracle

    - by Omnipresent
    We just converted our SQL Server stored procedures to Oracle procedures. The SQL Server SPs were highly dependent on session tables (INSERT INTO #table1 ...); these tables got converted to global temporary tables in Oracle. We ended up with around 500 GTTs for our 400 SPs.

    Now we are finding out that working with GTTs in Oracle is considered a last option because of performance and other issues. What other alternatives are there? Collections? Cursors? Our typical use of GTTs is like so:

    Insert into the GTT:

        INSERT INTO some_gtt_1 (column_a, column_b, column_c)
        (SELECT someA, someB, someC
           FROM TABLE_A
          WHERE condition_1 = 'YN756'
            AND type_cd = 'P'
            AND TO_NUMBER(TO_CHAR(m_date, 'MM')) = '12'
            AND (lname LIKE (v_LnameUpper || '%') OR lname LIKE (v_searchLnameLower || '%'))
            AND (e_flag = 'Y' OR it_flag = 'Y' OR fit_flag = 'Y'));

    Update the GTT:

        UPDATE some_gtt_1 a
           SET column_a = (SELECT b.data_a
                             FROM some_table_b b
                            WHERE a.column_b = b.data_b
                              AND a.column_c = 'C')
         WHERE column_a IS NULL
            OR column_a = ' ';

    and later on get the data out of the GTT. These are just sample queries; in actuality the queries are really complex, with lots of joins and subqueries.

    I have a three-part question:

    1. Can someone show how to transform the above sample queries to collections and/or cursors?
    2. Since with GTTs you can work natively with SQL... why go away from the GTTs? Are they really that bad?
    3. What should the guidelines be on when to use and when to avoid GTTs?

    Read the article

  • JSON VIEW using GROUP_CONCAT question

    - by Dan Beam
    Hey DBAs and overall smart dudes. I have a question for you. We use MySQL VIEWs to format our data as JSON when it's returned (as a BLOB), which is convenient (though not particularly nice on performance, but we already know this). But I can't seem to get a particular query working right now (each row contains NULL when it should contain a created JSON object with the values of multiple JOINs). Here's the general idea:

        SELECT CONCAT(
                 "{",
                   "\"some_list\":[", GROUP_CONCAT( DISTINCT t1.id ), "],",
                   "\"other_list\":[", GROUP_CONCAT( DISTINCT t2.id ), "],",
                 "}"
               ) cool_json
          FROM table_name tn
         INNER JOIN ( some_table st )
            ON st.some_id = tn.id
          LEFT JOIN ( another_table at, another_one ao, used_multiple_times t1 )
            ON st.id = at.some_id
           AND at.different_id = ao.different_id
           AND ao.different_id = t1.id
          LEFT JOIN ( another_table2 at2, another_one2 ao2, used_multiple_times t2 )
            ON st.id = at2.some_id
           AND at2.different_id = ao2.different_id
           AND ao2.different_id = t2.id
         GROUP BY tn.id
         ORDER BY tn.name

    Anybody know the problem here? Am I missing something I should be grouping by? It was working when I was only doing one LEFT JOIN and GROUP_CONCAT, but now with multiple JOINs / GROUP_CONCATs it's messing it up. When I move the GROUP_CONCATs out of the "cool_json" field they work as expected, but I'd like my data formatted as JSON so I can decode it server-side or client-side in one step.

    Read the article

  • When are SQL views appropriate in ASP.net MVC?

    - by sslepian
    I've got a table called Protocol, a table called Eligibility, and a Protocol_Eligibility table that maps the two together (a many-to-many relationship). If I wanted to make a perfect copy of an entry in the Protocol table, and create all the needed mappings in the Protocol_Eligibility table, would using an SQL view be helpful from a performance standpoint? Protocol will have around 1000 rows, Eligibility will have about 200, and I expect each Protocol to map to about 10 Eligibility rows and each Eligibility to map to over 100 rows in Protocol.

    Here's how I'm doing this with the view:

        var pel_original = (from pel in _documentDataModel.Protocol_Eligibility_View
                            where pel.pid == id
                            select pel);
        Protocol_Eligibility newEligibility;
        foreach (var pel_item in pel_original)
        {
            newEligibility = new Protocol_Eligibility();
            newEligibility.Eligibility = (from pel in _documentDataModel.Eligibility
                                          where pel.ID == pel_item.eid
                                          select pel).First();
            newEligibility.Protocol = newProtocol;
            newEligibility.ordering = pel_item.ordering;
            _documentDataModel.AddToProtocol_Eligibility(newEligibility);
        }

    And this is without the view:

        var pel_original = (from pel in _documentDataModel.Protocol_Eligibility
                            where pel.Protocol.ID == id
                            select pel);
        Protocol_Eligibility newEligibility;
        foreach (var pel_item in pel_original)
        {
            pel_item.EligibilityReference.Load();
            newEligibility = new Protocol_Eligibility();
            newEligibility.Eligibility = pel_item.Eligibility;
            newEligibility.Protocol = newProtocol;
            newEligibility.ordering = pel_item.ordering;
            _documentDataModel.AddToProtocol_Eligibility(newEligibility);
        }

    Read the article

  • Quartz 2D or OpenGL ES? Pros and cons in the long term, possibility of migration to other platforms.

    - by fspirit
    Hi all! I'm having a hard time deciding whether to go with Quartz 2D or OpenGL ES for an iPad game. It will be mostly 2D, but effect-intense (simultaneous lighting effects for 10-30 objects, 10-20 simultaneous animations on the screen). So far, assuming I'm equally dumb in both technologies and have to learn them from the ground up, I came to this list. (I've read several topics here on SO with names like "Quartz or OpenGL", but I'm still left with some questions.)

    Quartz:

    - Better time-to-market, because of ready-to-use abstractions like UIView, UIImageView, and Core Animation.

    OpenGL ES:

    - Closer to the hardware, thus better performance.
    - An app implemented with OpenGL ES can be migrated more easily to Android, MeeGo, Windows Phone, etc.

    My questions are:

    1. How long would it take to rewrite a Quartz 2D app to use OpenGL? Let's say it took me two man-months to write the Quartz app; how much time would I need to rewrite it? (Please, just some subjective opinions; I'll try to summarize them somehow.)
    2. Regarding the ease of migration to other platforms when using OpenGL, is it really so? Or would the effort of migrating a Quartz app from iPhone OS to Android not be much bigger compared to migrating an OpenGL app? (Ease of migration is quite an important criterion.)
    3. Regarding OpenGL, should I go with OpenGL ES 1.1 or 2.0, concerning migration? (Android supports 2.0 through the NDK, but I don't know whether using the NDK will increase or decrease the migration effort.)

    Read the article

  • Bulk Insert of hundreds of millions of records

    - by Dave Jarvis
    What is the fastest way to insert 237 million records into a table that has rules (for distributing the data across 84 child tables)?

    First I tried inserts. No go. Then I tried inserts with BEGIN/COMMIT. Not nearly fast enough. Next, I tried COPY FROM, but then noticed the documentation states that the rules are ignored. (And it was having difficulties with the column order and date format - it said that '1984-07-1' was not a valid integer; true, but a bit unexpected.)

    Some example data:

        station_id,taken,amount,category_id,flag
        1,'1984-07-1',0,4,
        1,'1984-07-2',0,4,
        1,'1984-07-3',0,4,
        1,'1984-07-4',0,4,T

    Here is the table structure (with one rule included):

        CREATE TABLE climate.measurement
        (
          id bigserial NOT NULL,
          station_id integer NOT NULL,
          taken date NOT NULL,
          amount numeric(8,2) NOT NULL,
          category_id smallint NOT NULL,
          flag character varying(1) NOT NULL DEFAULT ' '::character varying
        )
        WITH (
          OIDS=FALSE
        );
        ALTER TABLE climate.measurement OWNER TO postgres;

        CREATE OR REPLACE RULE i_measurement_01_001 AS
            ON INSERT TO climate.measurement
           WHERE date_part('month'::text, new.taken)::integer = 1
             AND new.category_id = 1
           DO INSTEAD
             INSERT INTO climate.measurement_01_001 (id, station_id, taken, amount, category_id, flag)
             VALUES (new.id, new.station_id, new.taken, new.amount, new.category_id, new.flag);

    I can generate the data into any format. I am looking for something that won't take four days.

    I originally had the data in MySQL (and still do), but am hoping to get a performance increase by switching to PostgreSQL, and am eager to use its PL/R extensions for stats. I was also thinking about using http://pgbulkload.projects.postgresql.org/

    Any help, tips, or guidance would be greatly appreciated. Thank you!
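
    A rough sketch (illustrative staging table and file path, not from the question, and assuming the quoting around the dates is cleaned up in the dump) of one common work-around for "COPY ignores rules": bulk-load into a plain staging table, then INSERT ... SELECT into the ruled table so the rules still fire and route rows to the child tables:

        -- staging table matches the CSV layout, so the column order is explicit
        CREATE TABLE climate.measurement_staging (
            station_id  integer,
            taken       date,
            amount      numeric(8,2),
            category_id smallint,
            flag        varchar(1)
        );

        COPY climate.measurement_staging FROM '/path/to/measurement.csv' WITH CSV HEADER;

        -- the rules on climate.measurement fire here and distribute the rows
        INSERT INTO climate.measurement (station_id, taken, amount, category_id, flag)
        SELECT station_id, taken, amount, category_id, flag
          FROM climate.measurement_staging;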

    Read the article

  • spam and dirty words comment post filtering in python (django)

    - by sintaloo
    Hi All, my basic question is how to filter spam and dirty words in a comment post system under Python (Django). I have a collection of phrases (approximately 3000 phrases) to be filtered.

    Question (1): are there any existing open source Python (or Django) packages/modules/plugins which can handle this job? I knew there was one called Akismet, but from what I understood it will not solve my problem. Akismet is just a web service and filters against the word dictionary defined by Akismet, but I have my own collection of words. Please correct me if I am wrong.

    Question (2): if there is no such open source package I can use, how do I create my own? The only thing I can think of is to use a regular expression and join all the word phrases with 'or' in one regular expression. But I have 3000 phrases; I think that won't perform well enough to filter every comment post. Any suggestions on where I should start?

    Thank you very much for your help and time.
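
    A minimal sketch of the join-with-'or' idea (illustrative phrase list): compile the alternation once at import time rather than per comment, and escape each phrase so punctuation in the list can't break the pattern:

        import re

        BLOCKED_PHRASES = ["buy cheap meds", "free casino", "some dirty phrase"]  # ~3000 entries in practice

        # Build one pattern, longest phrases first so they win over shorter prefixes.
        _pattern = re.compile(
            "|".join(re.escape(p) for p in sorted(BLOCKED_PHRASES, key=len, reverse=True)),
            re.IGNORECASE,
        )

        def is_blocked(comment):
            """Return True if the comment contains any blocked phrase."""
            return _pattern.search(comment) is not None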

    Read the article

  • How does loop address alignment affect the speed on Intel x86_64?

    - by Alexander Gololobov
    I'm seeing a 15% performance degradation of the same C++ code compiled to exactly the same machine instructions but located at differently aligned addresses. When my tiny main loop starts at 0x415220 it's faster than when it is at 0x415250. I'm running this on an Intel Core 2 Duo. I use gcc 4.4.5 on x86_64 Ubuntu. Can anybody explain the cause of the slowdown and how I can force gcc to optimally align the loop?

    Here is the disassembly for both cases, with profiler annotation:

        415220  576  12.56% |XXXXXXXXXXXXXX      48 c1 eb 08     shr    $0x8,%rbx
        415224  110   2.40% |XX                  0f b6 c3        movzbl %bl,%eax
        415227        0.00% |                    41 0f b6 04 00  movzbl (%r8,%rax,1),%eax
        41522c   40   0.87% |                    48 8b 04 c1     mov    (%rcx,%rax,8),%rax
        415230  806  17.58% |XXXXXXXXXXXXXXXXXXX 4c 63 f8        movslq %eax,%r15
        415233  186   4.06% |XXXX                48 c1 e8 20     shr    $0x20,%rax
        415237  102   2.22% |XX                  4c 01 f9        add    %r15,%rcx
        41523a  414   9.03% |XXXXXXXXXX          a8 0f           test   $0xf,%al
        41523c  680  14.83% |XXXXXXXXXXXXXXXX    74 45           je     415283 ::Run(char const*, char const*)+0x4b3
        41523e        0.00% |                    41 89 c7        mov    %eax,%r15d
        415241        0.00% |                    41 83 e7 01     and    $0x1,%r15d
        415245        0.00% |                    41 83 ff 01     cmp    $0x1,%r15d
        415249        0.00% |                    41 89 c7        mov    %eax,%r15d

        415250  679  13.05% |XXXXXXXXXXXXXXXX    48 c1 eb 08     shr    $0x8,%rbx
        415254  124   2.38% |XX                  0f b6 c3        movzbl %bl,%eax
        415257        0.00% |                    41 0f b6 04 00  movzbl (%r8,%rax,1),%eax
        41525c   43   0.83% |X                   48 8b 04 c1     mov    (%rcx,%rax,8),%rax
        415260  828  15.91% |XXXXXXXXXXXXXXXXXXX 4c 63 f8        movslq %eax,%r15
        415263  388   7.46% |XXXXXXXXX           48 c1 e8 20     shr    $0x20,%rax
        415267  141   2.71% |XXX                 4c 01 f9        add    %r15,%rcx
        41526a  634  12.18% |XXXXXXXXXXXXXXX     a8 0f           test   $0xf,%al
        41526c  749  14.39% |XXXXXXXXXXXXXXXXXX  74 45           je     4152b3 ::Run(char const*, char const*)+0x4c3
        41526e        0.00% |                    41 89 c7        mov    %eax,%r15d
        415271        0.00% |                    41 83 e7 01     and    $0x1,%r15d
        415275        0.00% |                    41 83 ff 01     cmp    $0x1,%r15d
        415279        0.00% |                    41 89 c7        mov    %eax,%r15d
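
    For the "force gcc to align the loop" part, a hedged starting point (a guess on my part, not verified against this exact code; the source file name is illustrative) would be GCC's alignment flags, for example:

        # -falign-loops / -falign-labels / -falign-jumps pad hot branch targets
        # out to the given byte boundary
        g++ -O2 -falign-loops=16 -falign-labels=16 -falign-jumps=16 mybench.cpp -o mybench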

    Read the article

  • Simple description of worker and I/O threads in .NET

    - by Konstantin
    It's very hard to find a detailed but simple description of worker and I/O threads in .NET.

    What's clear to me regarding this topic (but may not be technically precise):

    - Worker threads are threads that should employ the CPU for their work.
    - I/O threads (also called "completion port threads") should employ device drivers for their work and essentially "do nothing", only monitor the completion of non-CPU operations.

    What is not clear:

    1. Although the method ThreadPool.GetAvailableThreads returns the number of available threads of both types, it seems there is no public API to schedule work to an I/O thread. Can you only manually create worker threads in .NET?
    2. It seems that a single I/O thread can monitor multiple I/O operations. Is that true? If so, why does the ThreadPool have so many available I/O threads by default?
    3. In some texts I read that the callback triggered after an I/O operation completes is performed by an I/O thread. Is that true? Isn't this a job for a worker thread, considering that the callback is a CPU operation? To be more specific: do ASP.NET asynchronous pages use I/O threads?
    4. What exactly is the performance benefit of switching I/O work to a separate thread instead of increasing the maximum number of worker threads? Is it because a single I/O thread can monitor multiple operations? Or does Windows do more efficient context switching when using I/O threads?

    Read the article

  • Java: Efficiency of the readLine method of the BufferedReader and possible alternatives

    - by Luhar
    We are working to reduce the latency and increase the performance of a process written in Java that consumes data (XML strings) from a socket via the readLine() method of the BufferedReader class. The data is delimited by the end-of-line separator (\n), and each line can be of variable length (6 Kbit - 32 Kbit). Our code looks like:

        Socket sock = connection;
        InputStream in = sock.getInputStream();
        BufferedReader inputReader = new BufferedReader(new InputStreamReader(in));
        ...
        do {
            String input = inputReader.readLine();
            // Executor call to parse the input in a separate thread
        } while (true);

    So I have a couple of questions:

    1. Will the inputReader.readLine() method return as soon as it hits the \n character, or will it wait till the buffer is full?
    2. Is there a faster way of picking up data from the socket than using a BufferedReader?
    3. What happens when the size of the input string is smaller than the size of the socket's receive buffer?
    4. What happens when the size of the input string is bigger than the size of the socket's receive buffer?

    I am getting to grips (slowly) with Java's IO libraries, so any pointers are much appreciated. Thank you!

    Read the article

  • How to implement custom JSF component for drawing chart?

    - by Roman
    I want to create a component which can be used like:

        <mc:chart data="#{bean.data}" width="200" height="300" />

    where #{bean.data} returns a collection of some objects, or a chart model object, or something else that can be represented as a chart (to keep it simple, let's assume it returns a collection of integers). I want this component to generate HTML like this:

        <img src="someimg123.png" width="200" height="300"/>

    The problem is that I have a method which can receive data and return an image, like:

        public RenderedImage getChartImage (Collection<Integer> data) { ... }

    and I also have a component for drawing a dynamic image:

        <o:dynamicImage width="200" height="300" data="#{bean.readyChartImage}"/>

    This component generates the HTML just as I need, but its parameter is an array of bytes or a RenderedImage, i.e. it needs a method in the bean like this:

        public RenderedImage getReadyChartImage () { ... }

    So, one approach is to use a propertyChangedListener on submit to set the data (Collection<Integer>) for drawing the chart, and then use the <o:dynamicImage /> component. But I'd like to create my own component which receives data and draws the chart. I'm using Facelets, but that's not so important, actually. Any ideas how to create the desired component?

    P.S. One solution I was thinking about is not to use <o:dynamicImage/> but rather some servlet to stream the image. But I don't know how to implement that correctly, how to tie the JSF component to the servlet, or how to save already-built chart images (generating the same image anew for each request could cause performance problems IMHO), and so on.

    Read the article

  • Passing data between ViewControllers versus doing local Fetch in each VC

    - by Tofrizer
    Hi All, I'm developing an iPhone app using Core Data and I'm looking for some general advice and recommendations on whether it's acceptable to pass data between view controllers versus doing a local fetch in each view controller as you navigate to it. Ordinarily I would say it all depends on various factors (e.g. performance), but the passing-data approach is so prevalent in my app, and I'm spooked by all the stories about Apple rejecting apps for not conforming to their standard guidelines. So let me put it another way: is it non-standard to pass data between VCs?

    The reason I pass data so much is because each view controller is just another view onto data present in my object model / graph. Once I have a handle on my first object in the first view controller (which I of course do have to fetch), I can use the existing object composition / relationships to drill down into the next level of detail, so I just pass these objects to the next VC.

    Separately, one possible downside of this passing-data-to-each-VC approach is that I don't benefit from (what I perceive to be) the optimisation/benefits that NSFetchedResultsController provides in terms of efficient memory usage and section handling. My app is read-only, but I do have one table with 5000 rows, and I'm curious if I am missing out on NSFetchedResultsController benefits. Any thoughts on this as well? Can I somehow still benefit from NSFetchedResultsController goodness without having to do a full fetch (as I would have already passed in the data from my previous VC)?

    Thanks a lot.

    Read the article

  • Running an existing LINQ query against a dynamic object (DataTable like)

    - by TomTom
    Hello, I am working on a generic OData provider to go against a custom data provider that we have here. This is fully dynamic, in that I query the data provider for the tables it knows. I have a basic storage structure in place so far, based on the OData sample code. My problem is: OData supports queries and expects me to hand in an IQueryable implementation. On the lower side, I don't have any query support. Not a joke - the provider returns tables, and the WHERE clause is not supported. Performance is not an issue here - the tables are small. It is OK to sort them in the OData provider.

    My main problem is this: I submit a SQL statement to get the data of a table. The result is some sort of ADO.NET data reader. I need to expose an IQueryable implementation for this data to potentially allow later filtering. Any idea how best to approach that? .NET 3.5 only (no 4.0 planned for some time). I was seriously thinking of creating dynamic DTO classes for every table (emitting bytecode) so I can use standard LINQ. Right now I am using a dictionary per entry (not too efficient), but I see no real way to filter / sort based on them.

    Read the article

  • Disposing underlying object from finalizer in an immutable object

    - by Juan Luis Soldi
    I'm trying to wrap around Awesomium and make it look to the rest of my code as close as possible to .NET's WebBrowser, since this is for an existing application that already uses the WebBrowser. In this library there is a class called JSObject, which represents a JavaScript object. You can get one of these, for instance, by calling the ExecuteJavascriptWithResult method of the WebView class. If you'd call it like myWebView.ExecuteJavascriptWithResult("document", string.Empty).ToObject(), then you'd get a JSObject that represents the document.

    I'm writing an immutable class (its only field is a readonly JSObject) called JSObjectWrap that wraps around JSObject, which I want to use as a base class for other classes that would emulate .NET classes such as HtmlElement and HtmlDocument. Now, these classes don't implement Dispose, but JSObject does. What I first thought was to call the underlying JSObject's Dispose method in JSObjectWrap's finalizer (instead of having JSObjectWrap implement Dispose) so that the rest of my code can stay the way it is (instead of having to add usings everywhere and make sure every JSObjectWrap is being properly disposed). But I just realized that if two or more JSObjectWraps share the same underlying JSObject and one of them gets finalized, this will mess up the other JSObjectWrap.

    So now I'm thinking maybe I should keep a static Dictionary of JSObjects and count how many of each are being referenced by a JSObjectWrap, but this sounds messy and I think it could cause major performance issues. Since this sounds to me like a common pattern, I wonder if anyone else has a better idea.

    Read the article

  • Can parser combinators be made efficient?

    - by Jon Harrop
    Around 6 years ago, I benchmarked my own parser combinators in OCaml and found that they were ~5× slower than the parser generators on offer at the time. I recently revisited this subject and benchmarked Haskell's Parsec vs a simple hand-rolled precedence-climbing parser written in F#, and was surprised to find the F# to be 25× faster than the Haskell.

    Here's the Haskell code I used to read a large mathematical expression from file, parse and evaluate it:

        import Control.Applicative
        import Text.Parsec hiding ((<|>))

        expr = chainl1 term ((+) <$ char '+' <|> (-) <$ char '-')

        term = chainl1 fact ((*) <$ char '*' <|> div <$ char '/')

        fact = read <$> many1 digit <|> char '(' *> expr <* char ')'

        eval :: String -> Int
        eval = either (error . show) id . parse expr "" . filter (/= ' ')

        main :: IO ()
        main = do
          file <- readFile "expr"
          putStr $ show $ eval file
          putStr "\n"

    and here's my self-contained precedence-climbing parser in F#:

        let rec (|Expr|) (P(f, xs)) = Expr(loop (' ', f, xs))
        and loop = function
          | ' ' as oop, f, ('+' | '-' as op)::P(g, xs)
          | (' ' | '+' | '-' as oop), f, ('*' | '/' as op)::P(g, xs) ->
              let h, xs = loop (op, g, xs)
              let op =
                match op with
                | '+' -> (+) | '-' -> (-) | '*' -> (*) | '/' -> (/)
              loop (oop, op f h, xs)
          | _, f, xs -> f, xs
        and (|P|) = function
          | '('::Expr(f, ')'::xs) -> P(f, xs)
          | c::xs when '0' <= c && c <= '9' -> P(int(string c), xs)

    My impression is that even state-of-the-art parser combinators waste a lot of time backtracking. Is that correct? If so, is it possible to write parser combinators that generate state machines to obtain competitive performance, or is it necessary to use code generation?

    Read the article

  • Which apache worker to use with passenger and how?

    - by Millisami
    I've got this config in my apache2.conf:

        <IfModule mpm_prefork_module>
            StartServers          5
            MinSpareServers       5
            MaxSpareServers      10
            MaxClients          150
            MaxRequestsPerChild   0
        </IfModule>

        # worker MPM
        # StartServers: initial number of server processes to start
        # MaxClients: maximum number of simultaneous client connections
        # MinSpareThreads: minimum number of worker threads which are kept spare
        # MaxSpareThreads: maximum number of worker threads which are kept spare
        # ThreadsPerChild: constant number of worker threads in each server process
        # MaxRequestsPerChild: maximum number of requests a server process serves
        <IfModule mpm_worker_module>
            StartServers          2
            MaxClients           15
            MinSpareThreads       4
            MaxSpareThreads       5
            ThreadsPerChild      15
            MaxRequestsPerChild 50000
        </IfModule>

    Now I'm confused here. Which module gets loaded under which conditions? The Phusion guys have suggested using the worker module. Since both are present in the Apache conf file, do I have to comment out the mpm_prefork_module section or leave it as it is?

    Following is my Passenger conf file for Apache:

        LoadModule passenger_module /usr/lib/ruby/gems/1.8/gems/passenger-2.2.4/ext/apache2/mod_passenger.so
        PassengerRoot /usr/lib/ruby/gems/1.8/gems/passenger-2.2.4
        PassengerRuby /usr/bin/ruby1.8
        PassengerMaxPoolSize 3
        PassengerPoolIdleTime 999999
        RailsFrameworkSpawnerIdleTime 0
        RailsAppSpawnerIdleTime 0

    I'm running just a single Rails 2.3.2 app on a 256 MB slice at Slicehost. I'm not quite satisfied with the performance yet. Are the settings above any good?
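
    As an aside (a suggestion, not part of the question): only one MPM is actually built into or loaded by a given Apache binary, so the unused <IfModule> block is simply ignored rather than needing to be commented out. On Ubuntu you can check which MPM is active with something like the following (the second line is sample output):

        $ apache2ctl -V | grep -i mpm
        Server MPM:     Worker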

    Read the article
