Search Results

Search found 97855 results on 3915 pages for 'code performance'.


  • New i7 is slower than old Core 2 Duo? Why? (BIOS programming)

    - by DrChase
    I've always wondered why the companies that make BIOSes either have terrible engineering psychologists or none at all. But without wasting your time further with random speculative questions, my real question is as follows: why does my new computer run slower than my old computer?

    Old computer: Intel Core 2 Duo E8400 (Wolfdale) @ 3.0 GHz (stock), 4 GB OCZ DDR2 800 RAM, nVidia GeForce 8600 GT.

    New computer: Intel Core i7 920 @ ~3.2 GHz, 6 GB OCZ DDR3 1066 RAM, EVGA X58 SLI LE motherboard, nVidia GeForce GTX 275.

    Vista x64 Home Premium on both.

    "Runs slower" is defined as: poorer FPS in the same games and applications; longer start-up times; general desktop usage (checking email, opening files, running executables) is noticeably slower.

    At first I thought I must not have set something up in the BIOS, but I have no idea how to set anything in the BIOS except for "Dummy O.C.", which brought me to ~3.2 GHz. I've been reading about RAM timings, voltages and the like, but I really have no idea about that stuff. I'm a psychologist who has a basic understanding of building his own computers, not a computer scientist. Can someone give me some wisdom that might guide me to the reason my new computer is worse than my old one? I'm sorry if this is a bad question or not appropriate for SO; I'm just pretty frustrated now, and you all have helped me in the past, so I figured I'd give it a shot. Thanks for your time.

    Read the article

  • Comparing 128MB GeForce 8600GT and 512MB Radeon X1650

    - by Synetech inc.
    Hi, I'm trying to determine which is the better of these two video cards: one machine has a 128MB Nvidia GeForce 8600GT card, while the other has a 512MB ATI Radeon X1650 card. Both cards are the upper-level mid-range versions of their respective series. On the one hand, the ATI has substantially more VRAM, but the Nvidia supports D3D 10 and SM 4.0, as opposed to the D3D 9.0c/SM 3.0 that the ATI supports. Also, I have always heard better things about Nvidia cards compared to ATI cards. I'm trying to find some advice on which one is better, but I can't find any actual comparisons of these specific cards (the comparisons I can find are only of similar ones, like the X1650 Pro or the PCI-E 8600GT), so I figure that what I need to know is whether the extra VRAM is that important. Looking at the ATI table and the Nvidia table seems to indicate that the Nvidia is better, but then again, the Nvidia table also says that the GeForce 8600GT is a PCI-E card with at least 256MB, even though the card in question is an AGP with 128MB. (:-?) (It looks like the ATI card is not supported in Windows 7 while the Nvidia card is, which I suppose is also a factor, though not quite as immediately relevant as performance.) Any ideas? Thanks a lot.

    Read the article

  • Issuing Current Time Increments in StreamInsight (A Practical Example)

    The issuing of a Current Time Increment (CTI) in StreamInsight is very definitely one of the most important concepts to learn if you want your streams to be responsive. A full discussion of how to issue CTIs is beyond the scope of this article, but a very good explanation, in addition to Books Online, can be found in these three articles by a member of the StreamInsight team at Microsoft, Ciprian Gerea.

    Time in StreamInsight Series:
    http://blogs.msdn.com/b/streaminsight/archive/2010/07/23/time-in-streaminsight-i.aspx
    http://blogs.msdn.com/b/streaminsight/archive/2010/07/30/time-in-streaminsight-ii.aspx
    http://blogs.msdn.com/b/streaminsight/archive/2010/08/03/time-in-streaminsight-iii.aspx

    A lot of the problems I see with unresponsive or stuck streams on the MSDN forums have to do with how CTIs are enqueued or, in a lot of cases, not enqueued. If you enqueue events and never enqueue a CTI then StreamInsight will be perfectly happy. You, on the other hand, will never see data on the output, as you have not told StreamInsight to flush the stream.

    This article deals with a specific implementation problem I had recently whilst working on a StreamInsight project. I look at some possible options and discuss why they would not work before showing the way I solved the problem.

    The stream of data I was dealing with on this project was very bursty; that is to say, when events were flowing they came through very quickly and in large numbers (1000 events/sec), but when the stream calmed down it could be a few seconds between each event. When enqueuing events into the StreamInsight engine it is best practice to do so with a StartTime that is given to you by the system producing the event. StreamInsight processes events, and it doesn't matter whether those events are being pushed into the engine by a source system or read from something like a flat file in a directory somewhere; you can apply the same logic and temporal algebra to both situations. Reading from a file is an excellent example of where the time of the event on the source itself is very important: we could be reading that file a long time after it was written. Being able to read the StartTime from the events allows us to define windows that will hold the correct sets of events. I was able to do this with my stream, but this is where my problems started. Below is a very simple script to create a SQL Server table and populate it with sample data that shows exactly the problem I had:

        CREATE TABLE [dbo].[t]
        (
            [c1] [int] PRIMARY KEY,
            [c2] [datetime] NULL
        )

        INSERT t VALUES (1,'20100810'),(2,'20100810'),(3,'20100810')

    Column c2 defines the StartTime of the event on the source, and as you can see the value in all 3 rows of data is the same. If we read Ciprian's articles we know that we can define how CTIs get injected into the stream in 3 different places:

    - the stream definition
    - the input factory
    - the input adapter

    I personally have always been a fan of enqueuing CTIs through the factory. Below is code typical of what I would use to do this. On the class itself I do some inheriting:

        public class SimpleInputFactory :
            ITypedInputAdapterFactory<SimpleInputConfig>,
            ITypedDeclareAdvanceTimeProperties<SimpleInputConfig>

    And then I implement the following function:

        public AdapterAdvanceTimeSettings DeclareAdvanceTimeProperties<TPayload>(SimpleInputConfig configInfo, EventShape eventShape)
        {
            return new AdapterAdvanceTimeSettings(
                new AdvanceTimeGenerationSettings(configInfo.CtiFrequency, TimeSpan.FromTicks(-1)),
                AdvanceTimePolicy.Adjust);
        }

    The configInfo.CtiFrequency property is a value I pass through to define after how many events I want a CTI to be injected, which in turn will flush through the stream of data; I usually pass a value of 1 for this setting. The second parameter determines the CTI timestamp in terms of a delay relative to the events: -1 ticks in the past results in 1 tick in the future, i.e. ahead of the event. The problem with this method, though, is that if consecutive events have the same StartTime then only one of those events will be enqueued. In this example I use the following to define how I assign the StartTime of my events:

        currEvent.StartTime = (DateTimeOffset)dt.c2;

    If I go ahead and run my StreamInsight process with this configuration I can see on the output adapter that two events have been removed. To see this in a little more depth I can use the StreamInsight Debugger and see what happens internally. What is happening here is that the first event arrives and a CTI is injected with a time of 1 tick after the StartTime of that event (also the EndTime of the event). The second event arrives with a StartTime before the CTI, and even though we specified AdvanceTimePolicy.Adjust on the factory we know that a point event can never be adjusted like this, so the event is dropped. The same happens to the third event as well (the second and third events get trumped by the CTI). For a more detailed discussion of why this happens, look here: http://www.sqlis.com/sqlis/post/AdvanceTimePolicy-and-Point-Event-Streams-In-StreamInsight.aspx

    We end up with a single event being pushed into the output adapter, and our result now makes sense. The next way I tried to solve this problem was by changing the value of the second parameter to TimeSpan.Zero. Here is how my factory code now looks:

        public AdapterAdvanceTimeSettings DeclareAdvanceTimeProperties<TPayload>(SimpleInputConfig configInfo, EventShape eventShape)
        {
            return new AdapterAdvanceTimeSettings(
                new AdvanceTimeGenerationSettings(configInfo.CtiFrequency, TimeSpan.Zero),
                AdvanceTimePolicy.Adjust);
        }

    What I am doing here is declaring a policy that injects a CTI together with every event and stamps it with a StartTime that is equal to the StartTime of the event itself (TimeSpan.Zero). This method has plus points as well as a downside. The upside is that no events will be lost by having the same StartTime as previous events. The downside is that because the CTI is declared with the StartTime of the event itself, it does not actually flush that particular event, because in the StreamInsight algebra a CTI commits only those events that occurred strictly before it. To flush the events we need a CTI to be enqueued with a greater StartTime than the events themselves. Here is what happened when I ran this configuration: all we got through was the CTI and none of the events. The debugger output shows the stamps on the CTI and the events themselves; because the CTI issued has the same timestamp (StartTime) as the events, none of the events get flushed.

    I was nearly there but not quite. Because my stream was bursty it was possible that the next event would not come along for a few seconds, and this was far too long for an event to be enqueued and not be flushed to the output adapter. I needed another solution. Two possible solutions crossed my mind, although only one of them made sense when I explored it some more:

    1. Where multiple events have the same StartTime, add 1 tick to the first event, 2 to the second, 3 to the third, etc., thereby giving them unique StartTime values.
    2. Add a timer to manually inject CTIs.

    The problem with the first implementation is that I would be giving the events a new StartTime, which would cause me the following problems:

    - If I want to define windows over the stream, some events may not be captured in the right windows, and therefore any calculations I did on those windows would be wrong.
    - What would happen if we had 10,000 events with the same StartTime? I would enqueue them with StartTime + n ticks. Along comes a genuine event with a StartTime of the very first event + 1 tick. It is now too far in the past as far as my stream is concerned, and it would be dropped. Not what I would want to do at all.

    I decided then to look at the timer-based solution. I created a timer on my input adapter that elapsed every 200 ms:

        private Timer tmr;

        // Fields used by the elapsed handler below (not shown in the original snippet):
        // private TimeSpan ts;
        // private DateTimeOffset dtCtiIssued;
        // private bool TimerIssuedCti;

        public SimpleInputAdapter(SimpleInputConfig configInfo)
        {
            ctx = new SimpleTimeExtractDataContext(configInfo.ConnectionString);
            this.configInfo = configInfo;

            tmr = new Timer(200);
            tmr.Elapsed += new ElapsedEventHandler(t_Elapsed);
            tmr.Enabled = true;
        }

        void t_Elapsed(object sender, ElapsedEventArgs e)
        {
            ts = DateTime.Now - dtCtiIssued;
            if (ts.TotalMilliseconds >= 200 && TimerIssuedCti == false)
            {
                EnqueueCtiEvent(System.DateTime.Now.AddTicks(-100));
                TimerIssuedCti = true;
            }
        }

    In the t_Elapsed event handler I find out the difference in time between now and when the last event was processed (dtCtiIssued). I then check whether that is greater than or equal to 200 ms and whether the last issuing of a CTI was done by the timer or by a genuine event (TimerIssuedCti). If I didn't do this check then I would enqueue a CTI every time the timer elapsed, which is not something I wanted. If the difference between the two times is greater than or equal to 200 ms and the last event enqueued was a real event, then I issue a CTI through the timer to flush the event queue; otherwise I do nothing. When I enqueue the CTIs into my stream in my ProduceEvents method I also set the values of dtCtiIssued and TimerIssuedCti:

        currEvent = CreateInsertEvent();
        currEvent.StartTime = (DateTimeOffset)dt.c2;
        TimerIssuedCti = false;
        dtCtiIssued = currEvent.StartTime;

    If I go ahead and run this configuration I see the following in my output: the first CTI gets enqueued as before, but then another is enqueued by the timer, and because this has a later timestamp it flushes the enqueued events through the engine.

    Conclusion: hopefully this has shown how the enqueuing of CTIs can have a dramatic effect on the responsiveness of your output in StreamInsight. Understanding the temporal nature of the product is for me one of the most important things you can learn. I have attached my solution for the demos. It is all in one project, and testing each variation is a simple matter of commenting and un-commenting the parts in the code we have been dealing with here.

    Read the article

  • MySQL performance

    - by kapil.israni
    Hi, I have a LAMP application with about 900k rows in MySQL and I am having some performance issues.

    Background: apart from the LAMP stack, there's also a Java process (multi-threaded) that runs in its own JVM, so together LAMP and Java form the complete solution. The Java process is responsible for inserts/updates and a few selects as well. These inserts/updates are usually in bulk/batch, anywhere between 5-150 rows. The PHP front-end code only does SELECTs.

    Issue: the PHP SELECT queries become very slow when the Java process is running. When the Java process is stopped, SELECTs perform all right; the performance difference is huge. When the Java process is running, any action performed on the PHP front-end results in 80% or more CPU usage for the mysqld process. Any help would be appreciated. MySQL is running with default parameters and settings.

    Software stack: Apache 2.2.x, MySQL 5.1.37-1ubuntu5, PHP 5.2.10, Java 1.6.0_15, OS Ubuntu 9.10 (karmic).

    Read the article

  • Tiered Design With Analytical Widgets - Is This Code Smell?

    - by Repo Man
    The idea I'm playing with right now is having a multi-leveled "tier" system of analytical objects which perform a certain computation on a common object and then create a new set of analytical objects depending on their outcome. The newly created analytical objects will then get their own turn to run, and optionally create more analytical objects, and so on. The point is that the child analytical objects always execute after the objects that created them, which is relatively important. The whole apparatus will be called by a single thread, so I'm not concerned with thread safety at the moment. As long as a certain base condition is met, I don't see this being an unstable design, but I'm still a little bit queasy about it. Is this some serious code smell or should I go ahead and implement it this way? Is there a better way? Here is a sample implementation:

        using System;
        using System.Collections.Generic;

        namespace WidgetTier
        {
            public class Widget
            {
                private string _name;
                public string Name { get { return _name; } }

                private TierManager _tm;
                private static readonly Random random = new Random();

                static Widget() { }

                public Widget(string name, TierManager tm)
                {
                    _name = name;
                    _tm = tm;
                }

                public void DoMyThing()
                {
                    if (random.Next(1000) > 1)
                    {
                        _tm.Add();
                    }
                }
            }

            // NOT thread-safe!
            public class TierManager
            {
                private Dictionary<int, List<Widget>> _tiers;
                private int _tierCount = 0;
                private int _currentTier = -1;
                private int _childCount = 0;

                public TierManager()
                {
                    _tiers = new Dictionary<int, List<Widget>>();
                }

                public void Add()
                {
                    if (_currentTier + 1 >= _tierCount)
                    {
                        _tierCount++;
                        _tiers.Add(_currentTier + 1, new List<Widget>());
                    }
                    _tiers[_currentTier + 1].Add(new Widget(string.Format("({0})", _childCount), this));
                    _childCount++;
                }

                // Dangerous?
                public void Sweep()
                {
                    _currentTier = 0;
                    // _tierCount starts at 1 but keeps increasing because child objects keep adding more tiers.
                    while (_currentTier < _tierCount)
                    {
                        foreach (Widget w in _tiers[_currentTier])
                        {
                            w.DoMyThing();
                        }
                        _currentTier++;
                    }
                }

                public void PrintAll()
                {
                    for (int t = 0; t < _tierCount; t++)
                    {
                        Console.Write("Tier #{0}: ", t);
                        foreach (Widget w in _tiers[t])
                        {
                            Console.Write(w.Name + " ");
                        }
                        Console.WriteLine();
                    }
                }
            }

            class Program
            {
                static void Main(string[] args)
                {
                    TierManager tm = new TierManager();
                    for (int c = 0; c < 10; c++)
                    {
                        tm.Add(); // create base widgets
                    }
                    tm.Sweep();
                    tm.PrintAll();
                    Console.ReadLine();
                }
            }
        }
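    Not from the post, but one alternative worth weighing: the "children always run after their creators" guarantee is exactly what a FIFO work queue gives you, so if the tier boundaries are only an implementation detail, a queue avoids growing a dictionary of tiers. A minimal sketch:

        using System;
        using System.Collections.Generic;

        public class WidgetQueue
        {
            private readonly Queue<Action<WidgetQueue>> _work = new Queue<Action<WidgetQueue>>();

            // Children enqueued while an item runs land at the back of the queue,
            // so they always execute after the item that created them.
            public void Add(Action<WidgetQueue> widgetAction)
            {
                _work.Enqueue(widgetAction);
            }

            public void Sweep()
            {
                while (_work.Count > 0)
                {
                    _work.Dequeue()(this);
                }
            }
        }

    If the per-tier grouping matters for reporting (as PrintAll suggests it does here), the tiered dictionary is a reasonable structure; the queue only replaces it when tiers don't need to be observable.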

    Read the article

  • jQuery performance

    - by jAndy
    Hi folks, imagine you have to do a lot of DOM manipulation (in my case, it's a kind of dynamic list). Look at this example:

        var $buffer = $('<ul/>', {
            'class': '.custom-example',
            'css': {
                'position': 'absolute',
                'top': '500px'
            }
        });

        $.each(pages[pindex], function(i, v){
            $buffer.append(v);
        });

        $buffer.insertAfter($root);

    "pages" is an array which holds LI elements as jQuery objects; "$root" is a UL element. What happens after this code is that both ULs are animated (scrolling), and finally, within the callback of animate, this code is executed:

        $root.detach();
        $root = $buffer;
        $root.css('top', '0px');
        $buffer = null;

    This works very well; the only thing I'm pi**ed off about is the performance. I do cache all DOM elements I'm laying a hand on. Without looking too deep into jQuery's source code, is there a chance that my performance issues are located there? Does jQuery use DocumentFragments to append things? If you create a new DOM element with var el = $('<div/>'), it is only stored in memory at this point, isn't it?

    Read the article

  • WF performance with new 20,000 persisted workflow instances each month

    - by Nikola Stjelja
    Windows Workflow Foundation has a problem: it is slow when persisting WF instances. I'm planning a project whose business layer will be based on WF-exposed WCF services. The project will have 20,000 new workflow instances created each month, and each instance could take up to 2 months to finish. I was led to believe that, given WF's slowness at persistence, my problem would be unattainable for performance reasons. I have the following questions: Is this true? Will my performance be poor under that load (given WF persistence speed limitations)? How can I solve the problem? We currently have two possible solutions:

    1. Each new business process request (e.g. "Give me a new driver's license") will be a new WF instance, and the number of persistence operations will be limited by forwarding all status request operations to saved state values in a separate database.

    2. Have only a small number of workflow instances up at any given time, without any persistence whatsoever (only in case of system crashes etc.), by breaking each workflow step into a separate workflow, with that workflow handling every business process request instance in the system that is currently at that step (e.g. I'm submitting my driver's license request form, which is step one; we have 100 cases of that, and my step-one workflow will handle every case simultaneously).

    I'm very interested in a solution to this problem. If you want to discuss it, please feel free to mail me at [email protected]
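    As a side note on option 1: if the host is .NET 4's WorkflowServiceHost (an assumption; the post doesn't say which WF version), the number of persistence hits can also be tuned at the host level rather than per workflow. A hedged sketch:

        using System;
        using System.ServiceModel.Activities;
        using System.ServiceModel.Activities.Description;

        static class PersistenceSetup
        {
            public static void Configure(WorkflowServiceHost host, string connectionString)
            {
                host.Description.Behaviors.Add(new SqlWorkflowInstanceStoreBehavior(connectionString)
                {
                    // Remove completed instances so the store doesn't grow unbounded.
                    InstanceCompletionAction = InstanceCompletionAction.DeleteAll
                });

                host.Description.Behaviors.Add(new WorkflowIdleBehavior
                {
                    // Persist only after a sustained idle period instead of at every idle point,
                    // which matters for long-running (up to 2 months) instances like these.
                    TimeToPersist = TimeSpan.FromMinutes(5),
                    TimeToUnload = TimeSpan.FromMinutes(10)
                });
            }
        }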

    Read the article

  • Figuring out the Nyquist performance limitation of an ADC on an example PIC microcontroller

    - by AKE
    I'm spec-ing the suitability of a dsPIC microcontroller for an analog-to-digital application. This would be preferable to using dedicated A/D chips and a separate dedicated DSP chip. To do that, I've had to run through some computations, pulling the relevant parameters from the datasheets. I'm not sure I've got it right -- would appreciate a check!

    (EDITED NOTE: The PIC10F220 in the example below was selected ONLY to walk through a simple example to check that I'm interpreting Tacq, Fosc, TAD, and divisor correctly in working through this sort of Nyquist analysis. The actual chips I'm considering for the design are the dsPIC33FJ128MC804 (with 16b A/D) or dsPIC30F3014 (with 12b A/D).)

    A simple example: the PIC10F220 is the simplest possible PIC with an ADC. It runs at a clock speed of 8MHz and has an instruction cycle of 0.5us (4 clock steps per instruction). So:

    - Taking Tacq = 6.06 us (acquisition time for ADC, assuming chip temp. = 50*C) [datasheet p34]
    - Taking Fosc = 8MHz (clock speed)
    - Taking divisor = 4 (4 clock steps per CPU instruction)
    - This gives TAD = 0.5us (TAD = 1/(Fosc/divisor))
    - Conversion time is 13*TAD [datasheet p31], which gives a conversion time of 6.5us
    - ADC duration is then 12.56 us (Tacq + 13*TAD)
    - Assuming at least 2 instructions for load/store: another 1 us (0.5 us per instruction)
    - Which would give a max sampling rate of 73.7 ksps (1/13.56)
    - Supposing 8 more instructions for real-time processing: another 4 us
    - Thus, total ADC/handling time = 17.56us (12.56us + 1us + 4us)
    - So the expected upper sampling rate is 56.9 ksps, and the Nyquist frequency for this sampling rate is therefore 28 kHz.

    If this is right, it suggests the (theoretical) performance suitability of this chip's A/D is for signals that are bandlimited to 28 kHz. Is this a correct interpretation of the information given in the datasheet for obtaining the Nyquist performance limit? Any opinions on the noise susceptibility of ADCs in PIC / dsPIC chips would be much appreciated! AKE
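    For what it's worth, the arithmetic itself checks out. A tiny console sketch (values taken straight from the figures quoted above) that re-runs the numbers:

        using System;

        class AdcBudget
        {
            static void Main()
            {
                double tacq = 6.06;                     // ADC acquisition time, us (datasheet p34)
                double fosc = 8.0;                      // oscillator, MHz
                double divisor = 4.0;                   // clock steps per instruction
                double tad = 1.0 / (fosc / divisor);    // 0.5 us
                double conversion = 13 * tad;           // 6.5 us (13 TAD, datasheet p31)
                double adc = tacq + conversion;         // 12.56 us
                double loadStore = 2 * 0.5;             // 2 instructions at 0.5 us each
                double processing = 8 * 0.5;            // 8 instructions at 0.5 us each
                double total = adc + loadStore + processing; // 17.56 us

                Console.WriteLine("Max rate (ADC + load/store): {0:F1} ksps", 1000.0 / (adc + loadStore)); // ~73.7
                Console.WriteLine("Max rate (with processing):  {0:F1} ksps", 1000.0 / total);             // ~56.9
                Console.WriteLine("Nyquist limit:               {0:F1} kHz", 1000.0 / total / 2.0);        // ~28.5
            }
        }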

    Read the article

  • Naming multi-instance performance counters in .NET

    - by Roger Lipscombe
    Most multiple-instance performance counters in Windows seem to automatically(?) have a #n on the end if there's more than one instance with the same name. For example: if, in Perfmon, you look under the Process category, you'll see:

        ... dwm, explorer, explorer#1, ...

    I have two explorer.exe processes, so the second counter has #1 appended to its name. When I attempt to do this in a .NET application:

    - I can create the category, and register the instance (using the PerformanceCounterCategory.Create overload that takes a CounterCreationDataCollection).
    - I can open the counter for write and write to it.
    - When I open the counter a second time, it opens the same counter.

    This means that I have two applications fighting over the counters. The documentation for PerformanceCounter.InstanceName states that # is not allowed in the name. So: how do I have multiple-instance performance counters that are actually multiple-instance, where the second (and subsequent) instances get #n appended to the name? That is: I know that I can put the process ID (e.g.) in the instance name. This works, but has the unfortunate side effect that restarting the process results in a new PID, and Perfmon continues monitoring the old counter.
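    For reference, here is a minimal sketch of the registration flow described above (category and counter names are illustrative, not from the question), including the process-scoped instance lifetime, which may help with the stale-PID problem since such instances disappear when their process exits:

        using System.Diagnostics;

        class CounterSetup
        {
            const string Category = "MyApp";          // illustrative name
            const string Counter  = "Requests/sec";   // illustrative name

            static void EnsureCategory()
            {
                if (PerformanceCounterCategory.Exists(Category)) return;

                var counters = new CounterCreationDataCollection
                {
                    new CounterCreationData(Counter, "Request rate",
                        PerformanceCounterType.RateOfCountsPerSecond32)
                };
                PerformanceCounterCategory.Create(Category, "Demo category",
                    PerformanceCounterCategoryType.MultiInstance, counters);
            }

            static PerformanceCounter OpenInstance(string instanceName)
            {
                var pc = new PerformanceCounter(Category, Counter, instanceName, readOnly: false);
                // Process lifetime: the instance is removed when this process exits,
                // instead of lingering for Perfmon to keep monitoring.
                pc.InstanceLifetime = PerformanceCounterInstanceLifetime.Process;
                return pc;
            }
        }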

    Read the article

  • Improving the performance of XSL

    - by Rachel
    I am using the XSL 2.0 code below to find the ids of the text nodes that contain the list of indices I give as input. The code works perfectly, but in terms of performance it takes a long time for huge files; even for huge files, if the index values are small then the result comes back quickly, in a few ms. I am using the saxon9he Java processor to execute the XSL.

        <xsl:variable name="insert-data" as="element(data)*">
          <xsl:for-each-group select="doc($insert-file)/insert-data/data"
                              group-by="xsd:integer(@index)">
            <xsl:sort select="current-grouping-key()"/>
            <data index="{current-grouping-key()}"
                  text-id="{generate-id(
                      $main-root/descendant::text()[
                          sum((preceding::text(), .)/string-length(.)) ge current-grouping-key()
                      ][1]
                  )}">
              <xsl:copy-of select="current-group()/node()"/>
            </data>
          </xsl:for-each-group>
        </xsl:variable>

    In the above solution, if the index value is huge, say 270962, then the XSL takes 83427 ms to execute. In huge files, if the index values are huge, say 4605415 or 4605431, it takes several minutes to execute. The computation of the variable "insert-data" seems to take the time, though it is a global variable and computed only once. Should the XSL be addressed, or the processor? How can I improve the performance of the XSL?

    Read the article

  • Application performance issue: SQL Server & Oracle

    - by Mahesh
    Hi, we have an application in Silverlight, WCF, and NHibernate. Currently it supports both SQL Server and Oracle databases. Even with a lot of data it runs OK on SQL Server, but on Oracle it runs very slowly: one piece of functionality takes 5 seconds to execute on SQL Server and 30 seconds on Oracle. I am not able to figure out what the issue is. Two things I want to share about our database: 1) it contains one base table with a column of type [Text] on SQL Server and [NCLOB] on Oracle; 2) our database structure is heavily normalized. Maybe the NCLOB I used on the Oracle side is the cause of the performance problem; I don't know the details about it. Can anyone please let me know what the cause might be, or which steps I need to follow to make the performance equal to SQL Server? Thanks in advance. Mahesh.

    Read the article

  • Improve performance of searching JSON object with jQuery

    - by cale_b
    Please forgive me if this is answered on SO somewhere already; I've searched, and it seems this is a fairly specific case. Here's an example of the JSON (NOTE: this is very stripped down; the data is dynamically loaded, and currently there are 126 records):

        var layout = {
            "2": [
                {"id":"40","attribute_id":"2","option_id":null,"design_attribute_id":"4","design_option_id":"131","width":"10","height":"10","repeat":"0","top":"0","left":"0","bottom":"0","right":"0","use_right":"0","use_bottom":"0","apply_to_options":"0"},
                {"id":"41","attribute_id":"2","option_id":"115","design_attribute_id":"4","design_option_id":"131","width":"2","height":"1","repeat":"0","top":"0","left":"0","bottom":"4","right":"2","use_right":"0","use_bottom":"0","apply_to_options":"0"},
                {"id":"44","attribute_id":"2","option_id":"118","design_attribute_id":"4","design_option_id":"131","width":"10","height":"10","repeat":"0","top":"0","left":"0","bottom":"0","right":"0","use_right":"0","use_bottom":"0","apply_to_options":"0"}
            ],
            "5": [
                {"id":"326","attribute_id":"5","option_id":null,"design_attribute_id":"4","design_option_id":"154","width":"5","height":"5","repeat":"0","top":"0","left":"0","bottom":"0","right":"0","use_right":"0","use_bottom":"0","apply_to_options":"0"}
            ]
        };

    I need to match the right combination of values. Here's the function I currently use:

        function drawOption(attid, optid) {
            var attlayout = layout[attid];
            $.each(attlayout, function(k, v) {
                // d_att_id and d_opt_id are globally scoped variables set elsewhere
                if (v.design_attribute_id == d_att_id
                        && v.design_option_id == d_opt_id
                        && v.attribute_id == attid
                        && (v.apply_to_options == 1 || v.option_id === optid)) {
                    // Do stuff here
                }
            });
        }

    The issue is that I might iterate through 10-15 layouts (unique attids), and any given layout (attid) might have as many as 50 possibilities, which means that this loop is being run A LOT. Given the multiple criteria that have to be matched, would an AJAX call work better? (This JSON is dynamically created via PHP, so I could craft a PHP function that could possibly do this more efficiently.) Or am I completely missing something about how to find items in a JSON object? As always, any suggestions for improving the code are welcome!

    EDIT: I apologize for not making this clear, but the purpose of this question is to find a way to improve the performance. The page has a lot of javascript, and this is a location where I know that performance is lower than it could be.

    Read the article

  • How can I determine PerlLogHandler performance impact?

    - by Timmy
    I want to create a custom Apache2 log handler, and the template found on the Apache site is:

        #file:MyApache2/LogPerUser.pm
        #---------------------------
        package MyApache2::LogPerUser;

        use strict;
        use warnings;

        use Apache2::RequestRec ();
        use Apache2::Connection ();

        use Fcntl qw(:flock);
        use File::Spec::Functions qw(catfile);

        use Apache2::Const -compile => qw(OK DECLINED);

        sub handler {
            my $r = shift;

            my ($username) = $r->uri =~ m|^/~([^/]+)|;
            return Apache2::Const::DECLINED unless defined $username;

            my $entry = sprintf qq(%s [%s] "%s" %d %d\n),
                $r->connection->remote_ip, scalar(localtime),
                $r->uri, $r->status, $r->bytes_sent;

            my $log_path = catfile Apache2::ServerUtil::server_root, "logs", "$username.log";
            open my $fh, ">>$log_path" or die "can't open $log_path: $!";
            flock $fh, LOCK_EX;
            print $fh $entry;
            close $fh;

            return Apache2::Const::OK;
        }

        1;

    What is the performance cost of the flocks? Is this logging process done in parallel with the HTTP request, or serially? In parallel the performance would not matter as much, but I wouldn't want the user to wait another split second for something like this.

    Read the article

  • Performance difference in for loop condition?

    - by CSharperWithJava
    Hello all, I have a simple question that I am posing mostly out of curiosity. What are the differences between these two lines of code? (in C++)

        for(int i = 0; i < N, N > 0; i++)
        for(int i = 0; i < N && N > 0; i++)

    The selection of the conditions is completely arbitrary; I'm just interested in the differences between , and &&. I'm not a beginner to coding by any means, but I've never bothered with the comma operator. Are there performance/behavior differences, or is it purely aesthetic? One last note: I know there are bigger performance fish to fry than a conditional operator, but I'm just curious. Indulge me.

    Edit: Thanks for your answers. It turns out the code that prompted this question had misused the comma operator in the way I've described: with the comma, "i < N" is evaluated and its result discarded, so the loop condition is effectively just "N > 0". I wondered what the difference was and why it wasn't a && operator, but it was just written incorrectly. I didn't think anything was wrong with it because it worked just fine. Thanks for straightening me out.

    Read the article

  • MySQL Partitioning performance

    - by Imran Pathan
    We measured performance on key-partitioned tables and normal tables separately, but we couldn't find any performance improvement with partitioning, even though the queries are pruned. We are using MySQL 5.1.47 on RHEL 4.

    Table details: UserUsage has entries for user mobile number and data usage for each date, with mobile number and date as the primary key. UserProfile queries the previous table and stores a summary for each mobile number, with mobile number as the primary key.

        CREATE TABLE `UserUsage` (
            `Msisdn` decimal(20,0) NOT NULL,
            `Date` date NOT NULL,
            . . .
            PRIMARY KEY USING BTREE (`Msisdn`,`Date`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1
        PARTITION BY KEY(Msisdn) PARTITIONS 50;

        CREATE TABLE `UserProfile` (
            `Msisdn` decimal(20,0) NOT NULL,
            . . .
            PRIMARY KEY (`Msisdn`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1
        PARTITION BY KEY(Msisdn) PARTITIONS 50;

    The second table is updated by a Perl program that selects from the first table ordered by date:

        select * from UserUsage where Msisdn=number order by Date desc limit 7
        [process data in Perl]
        update UserProfile values(....) where Msisdn=number

    EXPLAIN PARTITIONS for the select shows rows being scanned in a particular partition only. Is something wrong with the partition design or the queries, since partitioning is taking almost the same or more time compared to normal tables?

    Read the article

  • Improving File Read Performance (single file, C++, Windows)

    - by david
    I have large (hundreds of MB or more) files that I need to read blocks from using C++ on Windows. Currently the relevant functions are:

        errorType LargeFile::read( void* data_out, __int64 start_position, __int64 size_bytes ) const
        {
            if( !m_open ) {
                // return error
            }
            else {
                seekPosition( start_position );
                DWORD bytes_read;
                BOOL result = ReadFile( m_file, data_out, DWORD( size_bytes ), &bytes_read, NULL );
                if( size_bytes != bytes_read || result != TRUE ) {
                    // return error
                }
            }
            // return no error
        }

        void LargeFile::seekPosition( __int64 position ) const
        {
            LARGE_INTEGER target;
            target.QuadPart = LONGLONG( position );
            SetFilePointerEx( m_file, target, NULL, FILE_BEGIN );
        }

    The performance of the above does not seem to be very good. Reads are of 4K blocks of the file; some reads are coherent, most are not.

    A couple of questions: Is there a good way to profile the reads? What things might improve the performance? For example, would sector-aligning the data be useful? I'm relatively new to file I/O optimization, so suggestions or pointers to articles/tutorials would be helpful.

    Read the article

  • java + increasing performance and scalability

    - by varun
    Hi, below is a code snippet which returns an object of a class. The object is compared against some parameters in a loop. My concern is: what if there are thousands of objects in the loop? In that case performance and scalability can be an issue. Please suggest how to improve the performance of this code.

        public Widget get(String name, int major, int minor, boolean exact) {
            Widget widgetToReturn = null;
            if (exact) {
                Widget w = new Widget(name, major, minor);
                // for loop using JDK 1.5 syntax
                for (Widget wid : set) {
                    if ((w.getName().equals(wid.getName())) && (wid.getVersion()).equals(w.getVersion())) {
                        widgetToReturn = w;
                        break;
                    }
                }
            } else {
                Widget w = new Widget(name, major, minor);
                WidgetVersion widgetVersion = new WidgetVersion(major, minor);
                // for loop using JDK 1.5 syntax
                for (Widget wid : set) {
                    WidgetVersion wv = wid.getVersion();
                    if ((w.getName().equals(wid.getName())) && major == wv.getMajor()
                            && WidgetVersion.isCompatibleAndNewer(wv, widgetVersion)) {
                        widgetToReturn = wid;
                    } else if ((w.getName().equals(wid.getName()))
                            && wv.equals(widgetVersion.getMajor(), widgetVersion.getMinor())) {
                        widgetToReturn = w;
                    }
                }
            }
            return widgetToReturn;
        }
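    The usual fix for the exact-match path is to index the set by name and version once, so lookups stop being linear scans. A sketch of that idea (written in C# here for consistency with the rest of this digest; the Widget/WidgetVersion shapes mirror the snippet above and are assumptions):

        using System.Collections.Generic;

        record WidgetVersion(int Major, int Minor);
        record Widget(string Name, WidgetVersion Version);

        class WidgetIndex
        {
            // Key is (name, major, minor); built once, reused for every exact get().
            private readonly Dictionary<(string, int, int), Widget> _byVersion = new();

            public WidgetIndex(IEnumerable<Widget> set)
            {
                foreach (var wid in set)
                    _byVersion[(wid.Name, wid.Version.Major, wid.Version.Minor)] = wid;
            }

            public Widget GetExact(string name, int major, int minor) =>
                _byVersion.TryGetValue((name, major, minor), out var found) ? found : null;
        }

    The compatible-version path still needs a scan (or a sorted structure per name), but the exact path becomes O(1), and the index only has to be rebuilt when the set changes.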

    Read the article

  • Soap 1.2 Endpoint Performance

    - by mflair2000
    I have a client WCF service calling a Java-hosted SOAP 1.2 service, and I'm having performance issues on the SOAP 1.2 Java proxy. I have no control over how the Java service is set up, aside from the endpoint config. I have the client running asynchronously, and the host WCF service running in the async pattern, but I see the SOAP 1.2 proxy bottlenecking and handling the requests synchronously. Can someone take a look at the auto-generated SOAP 1.2 configuration below, from the SOAP 1.2 service WSDL? Is there a way to configure this in an async way and improve performance? Could it be configured for SOAP 1.1?

        <?xml version="1.0"?>
        <configuration>
          <system.web>
            <compilation debug="true"/>
          </system.web>
          <system.serviceModel>
            <bindings>
              <customBinding>
                <binding name="WebserviceListenerSoap12Binding" closeTimeout="00:30:00" openTimeout="00:30:00"
                         receiveTimeout="00:30:00" sendTimeout="00:30:00">
                  <textMessageEncoding maxReadPoolSize="64" maxWritePoolSize="16" messageVersion="Soap12"
                                       writeEncoding="utf-8">
                    <readerQuotas maxDepth="32" maxStringContentLength="20000000" maxArrayLength="20000000"
                                  maxBytesPerRead="4096" maxNameTableCharCount="16384" />
                  </textMessageEncoding>
                  <httpTransport manualAddressing="false" maxBufferPoolSize="20000000" maxReceivedMessageSize="20000000"
                                 allowCookies="false" authenticationScheme="Anonymous" bypassProxyOnLocal="false"
                                 decompressionEnabled="true" hostNameComparisonMode="StrongWildcard"
                                 keepAliveEnabled="true" maxBufferSize="20000000" proxyAuthenticationScheme="Anonymous"
                                 realm="" transferMode="Buffered" unsafeConnectionNtlmAuthentication="false"
                                 useDefaultWebProxy="true" />
                </binding>
              </customBinding>
            </bindings>
            <client>
              <endpoint address="http://10.18.2.117:8080/java/webservice/WebserviceListener.WebserviceListenerHttpSoap12Endpoint/"
                        binding="customBinding" bindingConfiguration="WebserviceListenerSoap12Binding"
                        contract="ResolveServiceReference.WebserviceListenerPortType"
                        name="WebserviceListenerHttpSoap12Endpoint" />
            </client>
          </system.serviceModel>
        </configuration>
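    On the last question: the messageVersion attribute on textMessageEncoding is what selects the SOAP version, so the same binding shape can be built in code and, if the Java side accepts it, switched to SOAP 1.1. A hedged sketch (timeouts and quotas abbreviated; not a drop-in for the config above):

        using System;
        using System.ServiceModel.Channels;
        using System.Text;

        class BindingFactory
        {
            static CustomBinding BuildBinding(MessageVersion version)
            {
                // Same shape as the generated config: text encoding over HTTP transport.
                var encoding = new TextMessageEncodingBindingElement(version, Encoding.UTF8);
                var transport = new HttpTransportBindingElement
                {
                    MaxBufferPoolSize = 20000000,
                    MaxReceivedMessageSize = 20000000,
                    MaxBufferSize = 20000000
                };
                return new CustomBinding(encoding, transport)
                {
                    SendTimeout = TimeSpan.FromMinutes(30),
                    ReceiveTimeout = TimeSpan.FromMinutes(30)
                };
            }

            static void Main()
            {
                // MessageVersion.Soap12 corresponds to messageVersion="Soap12" in config;
                // MessageVersion.Soap11 would be the SOAP 1.1 equivalent.
                var binding = BuildBinding(MessageVersion.Soap11);
                Console.WriteLine(binding.MessageVersion);
            }
        }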

    Read the article

  • NHibernate Performance Optimization | Suggestions invited!!!

    - by user336749
    Hi, I'm facing an issue with NHibernate performance; can you please suggest some optimizations? Below is a small summary of my application architecture.

    I have a Windows service listening to a messaging bus. On receiving a message, the service creates an object, one property of which is the received XML snippet, and saves the message to the DB (using NH). There is a WPF UI with a read-only connection to the DB; on refresh, it displays the objects on the screen. While the UI does a refresh, it retrieves the XML and deserializes it, and from that the object's properties are derived and bound to the screen.

    For example, assume an XML document is received by the service. It deserializes the XML, creates the book object, and saves it to the DB; one property/column is SCHEMA, which contains the XML snippet. On refresh, the UI searches all book objects by ID and creates the book objects out of the saved XML (yes, the XML is the constructor param).

    Now my issue is that the refresh takes more than 2 minutes to display, say, 50 book objects. I analyzed it using the NHibernate profiler and found that the time spent in the DB is negligible, but the time spent creating the entities is proportionally huge (10ms : 1990ms). I guess it's due to the fairly large size of the XML snippet and its deserialization.

    My question is: how can I improve the performance? I dispose of sessions after every refresh and am not lazy loading (please note that the time spent in the DB is negligible). On every refresh it's possible that all objects have been updated by some downstream systems, or maybe just one of them. Can I implement some sort of caching mechanism in this case? Thanks in advance for any suggestions. Regards, -Mike
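    Since the DB time is negligible and the cost is in deserialization, one generic mitigation (not from the post; a sketch under the assumption that each row carries an id plus a version or timestamp that changes on update) is to memoize the deserialized objects and only re-deserialize rows whose version changed:

        using System.Collections.Concurrent;
        using System.IO;
        using System.Xml.Serialization;

        class DeserializationCache<T>
        {
            private static readonly XmlSerializer Serializer = new XmlSerializer(typeof(T));

            // Keyed by row id; each entry remembers the version it was built from.
            private readonly ConcurrentDictionary<int, (long Version, T Value)> _cache = new();

            public T Get(int id, long version, string xml)
            {
                if (_cache.TryGetValue(id, out var hit) && hit.Version == version)
                    return hit.Value; // unchanged row: skip the XmlSerializer work entirely

                using var reader = new StringReader(xml);
                var value = (T)Serializer.Deserialize(reader);
                _cache[id] = (version, value);
                return value;
            }
        }

    NHibernate's second-level cache can avoid re-reading rows, but since the profiler shows the cost in entity construction rather than in the DB, caching the constructed objects (and versioning rows, e.g. with a timestamp column) attacks the actual bottleneck.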

    Read the article

  • Developers: How does BitLocker affect performance?

    - by Chris
    I'm an ASP.NET / C# developer. I use VS2010 all the time. I am thinking of enabling BitLocker on my laptop to protect the contents, but I am concerned about performance degradation. Developers who use IDEs like Visual Studio are working on lots and lots of files at once. More than the usual office worker, I would think. So I was curious if there are other developers out there who develop with BitLocker enabled. How has the performance been? Is it noticeable? If so, is it bad? My laptop is a 2.53GHz Core 2 Duo with 4GB RAM and an Intel X25-M G2 SSD. It's pretty snappy but I want it to stay that way. If I hear some bad stories about BitLocker, I'll keep doing what I am doing now, which is keeping stuff RAR'ed with a password when I am not actively working on it, and then SDeleting it when I am done (but it's such a pain).

    Read the article

  • Socket Performance C++ Or C#

    - by modernzombie
    I have to write an application that is essentially a proxy server to handle all HTTP and HTTPS requests from our server (web browsing, etc). I know very little C++ and am very comfortable writing the application features in C#. I have experimented with the proxy from Mentalis (socket proxy) which seems to work fine for small webpages but if I go to large sites like tigerdirect.ca and browse through a couple of layers it is very slow and sometimes requests don't complete and I see broken images and javascript errors. This happens with all of our vendor sites and other content heavy sites. Mentalis uses HTTP 1.0 which I know is not as efficient but should a proxy be that slow? What is an acceptable amount of performance loss from using a proxy? Would HTTP 1.1 make a noticeable difference? Would a C++ proxy be much faster than one in C#? Is the Mentalis code just not efficient? Would I be able to use a premade C++ proxy and import the DLL to C# and still get good performance or would this project call for all C++? Sorry if these are obvious questions but I have not done network programming before.

    Read the article

  • Misaligned Pointer Performance

    - by Elite Mx
    Aren't misaligned pointers (in the BEST possible case) supposed to slow down performance, and in the worst case crash your program (assuming the compiler was nice enough to compile your invalid C program)? Well, the following code doesn't seem to show any performance difference between the aligned and misaligned versions. Why is that?

        /* brutality.c */
        #ifdef BRUTALITY
            xs = (unsigned long *) ((unsigned char *) xs + 1);
        #endif

        ...

        /* main.c */
        #include <stdio.h>
        #include <stdlib.h>

        #define size_t_max ((size_t)-1)
        #define max_count(var) (size_t_max / (sizeof var))

        int main(int argc, char *argv[])
        {
            unsigned long sum, *xs, *itr, *xs_end;
            size_t element_count = max_count(*xs) >> 4;

            xs = malloc(element_count * (sizeof *xs));
            if(!xs) exit(1);
            xs_end = xs + element_count - 1;

            sum = 0;
            for(itr = xs; itr < xs_end; itr++)
                *itr = 0;

        #include "brutality.c"

            itr = xs;
            while(itr < xs_end)
                sum += *itr++;

            printf("%lu\n", sum);

            /* we could free the malloc-ed memory here */
            /* but we are almost done */
            exit(0);
        }

    Compiled and tested on two separate machines using:

        gcc -pedantic -Wall -O0 -std=c99 main.c
        for i in {0..9}; do time ./a.out; done

    Read the article

  • CreationName for SSIS 2008 and adding components programmatically

    If you are building SSIS 2008 packages programmatically and adding data flow components, you will probably need to know the creation name of the component to add. I can never find a handy reference when I need one, hence this rather mundane post. See also CreationName for SSIS 2005. We start with a very simple snippet for adding a component:

        // Add the Data Flow Task
        package.Executables.Add("STOCK:PipelineTask");

        // Get the task host wrapper, and the Data Flow task
        TaskHost taskHost = package.Executables[0] as TaskHost;
        MainPipe dataFlowTask = (MainPipe)taskHost.InnerObject;

        // Add OLE-DB source component - ** This is where we need the creation name **
        IDTSComponentMetaData90 componentSource = dataFlowTask.ComponentMetaDataCollection.New();
        componentSource.Name = "OLEDBSource";
        componentSource.ComponentClassID = "DTSAdapter.OLEDBSource.2";

    So as you can see, the creation name for an OLE-DB Source is DTSAdapter.OLEDBSource.2.

    CreationName Reference

    ADO NET Destination: Microsoft.SqlServer.Dts.Pipeline.ADONETDestination, Microsoft.SqlServer.ADONETDest, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91
    ADO NET Source: Microsoft.SqlServer.Dts.Pipeline.DataReaderSourceAdapter, Microsoft.SqlServer.ADONETSrc, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91
    Aggregate: DTSTransform.Aggregate.2
    Audit: DTSTransform.Lineage.2
    Cache Transform: DTSTransform.Cache.1
    Character Map: DTSTransform.CharacterMap.2
    Checksum: Konesans.Dts.Pipeline.ChecksumTransform.ChecksumTransform, Konesans.Dts.Pipeline.ChecksumTransform, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b2ab4a111192992b
    Conditional Split: DTSTransform.ConditionalSplit.2
    Copy Column: DTSTransform.CopyMap.2
    Data Conversion: DTSTransform.DataConvert.2
    Data Mining Model Training: MSMDPP.PXPipelineProcessDM.2
    Data Mining Query: MSMDPP.PXPipelineDMQuery.2
    DataReader Destination: Microsoft.SqlServer.Dts.Pipeline.DataReaderDestinationAdapter, Microsoft.SqlServer.DataReaderDest, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91
    Derived Column: DTSTransform.DerivedColumn.2
    Dimension Processing: MSMDPP.PXPipelineProcessDimension.2
    Excel Destination: DTSAdapter.ExcelDestination.2
    Excel Source: DTSAdapter.ExcelSource.2
    Export Column: TxFileExtractor.Extractor.2
    Flat File Destination: DTSAdapter.FlatFileDestination.2
    Flat File Source: DTSAdapter.FlatFileSource.2
    Fuzzy Grouping: DTSTransform.GroupDups.2
    Fuzzy Lookup: DTSTransform.BestMatch.2
    Import Column: TxFileInserter.Inserter.2
    Lookup: DTSTransform.Lookup.2
    Merge: DTSTransform.Merge.2
    Merge Join: DTSTransform.MergeJoin.2
    Multicast: DTSTransform.Multicast.2
    OLE DB Command: DTSTransform.OLEDBCommand.2
    OLE DB Destination: DTSAdapter.OLEDBDestination.2
    OLE DB Source: DTSAdapter.OLEDBSource.2
    Partition Processing: MSMDPP.PXPipelineProcessPartition.2
    Percentage Sampling: DTSTransform.PctSampling.2
    Performance Counters Source: DataCollectorTransform.TxPerfCounters.1
    Pivot: DTSTransform.Pivot.2
    Raw File Destination: DTSAdapter.RawDestination.2
    Raw File Source: DTSAdapter.RawSource.2
    Recordset Destination: DTSAdapter.RecordsetDestination.2
    RegexClean: Konesans.Dts.Pipeline.RegexClean.RegexClean, Konesans.Dts.Pipeline.RegexClean, Version=2.0.0.0, Culture=neutral, PublicKeyToken=d1abe77e8a21353e
    Row Count: DTSTransform.RowCount.2
    Row Count Plus: Konesans.Dts.Pipeline.RowCountPlusTransform.RowCountPlusTransform, Konesans.Dts.Pipeline.RowCountPlusTransform, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b2ab4a111192992b
    Row Number: Konesans.Dts.Pipeline.RowNumberTransform.RowNumberTransform, Konesans.Dts.Pipeline.RowNumberTransform, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b2ab4a111192992b
    Row Sampling: DTSTransform.RowSampling.2
    Script Component: Microsoft.SqlServer.Dts.Pipeline.ScriptComponentHost, Microsoft.SqlServer.TxScript, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91
    Slowly Changing Dimension: DTSTransform.SCD.2
    Sort: DTSTransform.Sort.2
    SQL Server Compact Destination: Microsoft.SqlServer.Dts.Pipeline.SqlCEDestinationAdapter, Microsoft.SqlServer.SqlCEDest, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91
    SQL Server Destination: DTSAdapter.SQLServerDestination.2
    Term Extraction: DTSTransform.TermExtraction.2
    Term Lookup: DTSTransform.TermLookup.2
    Trash Destination: Konesans.Dts.Pipeline.TrashDestination.Trash, Konesans.Dts.Pipeline.TrashDestination, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b8351fe7752642cc
    TxTopQueries: DataCollectorTransform.TxTopQueries.1
    Union All: DTSTransform.UnionAll.2
    Unpivot: DTSTransform.UnPivot.2
    XML Source: Microsoft.SqlServer.Dts.Pipeline.XmlSourceAdapter, Microsoft.SqlServer.XmlSrc, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91

    Here is a simple console program that can be used to enumerate the pipeline components installed on your machine, and dump out a list of all components like that above. You will need to add a reference to the Microsoft.SQLServer.ManagedDTS assembly.

        using System;
        using System.Diagnostics;
        using Microsoft.SqlServer.Dts.Runtime;

        public class Program
        {
            static void Main(string[] args)
            {
                Application application = new Application();
                PipelineComponentInfos componentInfos = application.PipelineComponentInfos;
                foreach (PipelineComponentInfo componentInfo in componentInfos)
                {
                    Debug.WriteLine(componentInfo.Name + "\t" + componentInfo.CreationName);
                }
                Console.Read();
            }
        }
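    As a small aside, the same PipelineComponentInfos collection can answer one-off lookups too. A sketch (same assembly reference assumption as above) that prints the creation name for a single component by its friendly name:

        using System;
        using Microsoft.SqlServer.Dts.Runtime;

        public class Lookup
        {
            static void Main(string[] args)
            {
                Application application = new Application();
                foreach (PipelineComponentInfo info in application.PipelineComponentInfos)
                {
                    if (info.Name == "Derived Column") // friendly name from the table above
                    {
                        Console.WriteLine(info.CreationName); // expect DTSTransform.DerivedColumn.2
                    }
                }
            }
        }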

    Read the article

  • Opening the Internet Settings Dialog and using Windows Default Network Settings via Code

    - by Rick Strahl
    Ran into a question from a client the other day who asked how to deal with Internet connection settings for running HTTP requests. In this case it's an old FoxPro app, and it's using WinInet to handle the actual HTTP connection. Another client asked a similar question about using the IE Web Browser control and configuring connection properties. Regardless of the platform or tools used to make HTTP connections, you can probably configure custom connection and proxy settings manually in your application. However, this is a repetitive process for each application and requires you to track system information in your application, which is undesirable. Often it's much easier to rely on the system-wide proxy settings that Windows provides via the Internet Settings dialog. The dialog is a Control Panel applet (inetcpl.cpl) and is the same dialog that you see when you pop up Internet Explorer's Options dialog.

    This dialog controls the Windows connection properties that determine how the Windows HTTP stack connects to the Internet and how proxies are used if configured. Depending on how the HTTP client is configured, it can typically inherit and use these global settings.

    Loading the Settings Dialog Programmatically

    The settings dialog is a Control Panel applet with the name of inetcpl.cpl, and you can use any shell execution mechanism (Run dialog, ShellExecute API, Process.Start() in .NET, etc.) to invoke the dialog. Changes made there are immediately reflected in any applications that use the default connection settings. In .NET you can simply do this to bring up the Internet Settings dialog with the Connection tab enabled:

        Process.Start("inetcpl.cpl", ",4");

    In FoxPro you can simply use the RUN command to execute inetcpl.cpl:

        lcCmd = "inetcpl.cpl ,4"
        RUN &lcCmd

    Using the Default Connection/Proxy Settings

    When using WinInet you specify the HTTP connect type in the call to InternetOpen() like this (FoxPro code here):

        hInetConnection = ;
            InternetOpen(THIS.cUserAgent, 0, ;
                         THIS.chttpproxyname, THIS.chttpproxybypass, 0)

    The second parameter of 0 specifies that the default system proxy settings should be used, i.e. the settings from the Internet Settings Connections tab. Other connection options for HTTP connections include 1 (direct: no proxies, ignore system settings) and 3 (explicit proxy specification). In most situations a connection mode setting of 0 should work.

    In .NET, HTTP connections by default are direct connections, so you need to explicitly specify a default proxy or proxy configuration to use. The easiest way to do this is at the application level in the config file:

        <configuration>
          <system.net>
            <defaultProxy>
              <proxy bypassonlocal="False" autoDetect="True" usesystemdefault="True" />
            </defaultProxy>
          </system.net>
        </configuration>

    You can do the same sort of thing in code, specifying the proxy explicitly and using System.Net.WebProxy.GetDefaultProxy(). So when making HTTP calls to web services or using the HttpWebRequest class you can set the proxy with:

        StoreService.Proxy = WebProxy.GetDefaultProxy();

    All of this is pretty easy to deal with and, in my opinion, is a way better choice than having to track this stuff in your own application. Plus, if you use the default settings, it's highly likely that the connection settings are already properly configured, making further configuration rare.

    © Rick Strahl, West Wind Technologies, 2005-2011. Posted in Windows, HTTP, .NET, FoxPro.
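    A quick aside not in the original article: on .NET 2.0 and later, WebProxy.GetDefaultProxy() is marked obsolete, and WebRequest.DefaultWebProxy exposes the same system-wide settings. A minimal sketch:

        using System;
        using System.Net;

        class DefaultProxyExample
        {
            static void Main()
            {
                var request = (HttpWebRequest)WebRequest.Create("http://example.com/");

                // DefaultWebProxy reflects the Internet Settings / config-file values,
                // so no per-application proxy bookkeeping is needed.
                request.Proxy = WebRequest.DefaultWebProxy;

                using (var response = (HttpWebResponse)request.GetResponse())
                {
                    Console.WriteLine(response.StatusCode);
                }
            }
        }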

    Read the article

  • Part 15: Fail a build based on the exit code of a console application

    In the series the following parts have been published:

    Part 1: Introduction
    Part 2: Add arguments and variables
    Part 3: Use more complex arguments
    Part 4: Create your own activity
    Part 5: Increase AssemblyVersion
    Part 6: Use custom type for an argument
    Part 7: How is the custom assembly found
    Part 8: Send information to the build log
    Part 9: Impersonate activities (run under other credentials)
    Part 10: Include Version Number in the Build Number
    Part 11: Speed up opening my build process template
    Part 12: How to debug my custom activities
    Part 13: Get control over the Build Output
    Part 14: Execute a PowerShell script
    Part 15: Fail a build based on the exit code of a console application

    When a console application or batch file fails, its exit code is set to a value other than 0. You would expect the build to see this and report an error, but that is not the case. First we set up the scenario. Add a console application project to the solution you are building. In the Main function, set the exit code to 1:

        class Program
        {
            static void Main(string[] args)
            {
                Console.WriteLine("This is an error in the script.");
                Environment.ExitCode = 1;
            }
        }

    Check in the code. You can choose to include this console application in the build, or you can add the exe to source control.

    Now modify the Build Process Template CustomTemplate.xaml:

    - Add an argument ErrornousScript.
    - Scroll down beneath the TryCatch activity called "Try Compile, Test, and Associate Changesets and Work Items".
    - Add a Sequence activity to the template.
    - In the Sequence, add a ConvertWorkspaceItem and an InvokeProcess activity (see Part 14: Execute a PowerShell script for more detailed steps).
    - In the FileName property of the InvokeProcess, use ErrornousScript so the console application will be called.

    Modify the build definition and make sure that ErrornousScript executes the exe that sets the exit code to 1. You have now set up a build definition that will execute the erroneous console application. When you run it, you will see that the build succeeds. This is not what you want!

    To solve this, you can make use of the Result property on the InvokeProcess activity. So let's change our Build Process Template:

    - Add new variables (scoped to the sequence where you run the console application) called ExitCode (type = Int32) and ErrorMessage.
    - Click on the InvokeProcess activity and set the Result property to ExitCode.
    - In the Handle Standard Output section of the InvokeProcess, add a Sequence activity.
    - In the Sequence activity, add an Assign primitive and set the following properties:
      To = ErrorMessage
      Value = If(Not String.IsNullOrEmpty(ErrorMessage), Environment.NewLine + ErrorMessage, "") + stdOutput
    - Also add the default BuildMessage to the sequence that outputs the stdOutput.
    - Beneath the InvokeProcess activity, add an If activity with the condition ExitCode <> 0.
    - In the Then section, add a Throw activity and set the Exception property to New Exception(ErrorMessage).

    When you now check in the Build Process Template and run the build, the build fails with the error message from the script, and that is exactly what we want.

    You can download the full solution at BuildProcess.zip. It includes the sources of every part and will continue to evolve.

    Read the article
