Search Results

Search found 59278 results on 2372 pages for 'time estimation'.


  • What might cause SQL error "Cannot find the object 'dbo.InspectionEvents' because it does not exist or you do not have permissions"?

    - by mrtrombone
    Hi, my ASP.NET app periodically gets the error 'Cannot find the object dbo."XXXX" because it does not exist or you do not have permissions' when it tries to execute a specific stored procedure that writes to the database. I have seen a few forum posts about this issue, but the strange thing is that the method works fine almost all of the time; I just see the error in my logs every now and then. Can anyone tell me why this might work OK most of the time but occasionally fire the error? The application is C# using Enterprise Library 4.1 Data Access. The database is SQL Server 2005. Cheers

    Read the article

  • How do I quickly search through a .csv file in Python?

    - by Baldur
    I'm reading a 6 million entry .csv file with Python, and I want to be able to search through this file for a particular entry. Are there any tricks to search the entire file? Should you read the whole thing into a dictionary, or should you perform a search every time? I tried loading it into a dictionary, but that took ages, so I'm currently searching through the whole file every time, which seems wasteful. Could I possibly exploit the fact that the list is alphabetically ordered? (E.g., if the search word starts with "b", I only search from the line that includes the first word beginning with "b" to the line that includes the last word beginning with "b".) I'm using import csv. (A side question: is it possible to make csv go to a specific line in the file? I want to make the program start at a random line.) Edit: I already have a copy of the list as an .sql file as well; how could I use that from Python?
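    One approach worth sketching (my suggestion, not from the question): since an .sql copy already exists, load the data once into an indexed SQLite table and let the index do the searching. The file and column names here ("entries.csv", "word", "definition") are hypothetical placeholders:

        # Pay the load cost once, then every lookup is an index seek.
        import csv
        import sqlite3

        conn = sqlite3.connect("entries.db")   # persisted on disk between runs
        conn.execute("CREATE TABLE IF NOT EXISTS entries (word TEXT, definition TEXT)")
        with open("entries.csv", newline="") as f:
            # assumes exactly two columns per row
            conn.executemany("INSERT INTO entries VALUES (?, ?)", csv.reader(f))
        conn.execute("CREATE INDEX IF NOT EXISTS idx_word ON entries (word)")
        conn.commit()

        # O(log n) lookup instead of a full-file scan:
        row = conn.execute("SELECT * FROM entries WHERE word = ?", ("banana",)).fetchone()
        print(row)

    The alphabetical ordering could also be exploited directly with a binary search over file offsets, but the one-time SQLite import is simpler and answers the .sql question too.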

    Read the article

  • GDB hardware watchpoint very slow - why?

    - by Laurynas Biveinis
    On a large C application, I have set a hardware watchpoint on a memory address as follows:

        (gdb) watch *((int*)0x12F5D58)
        Hardware watchpoint 3: *((int*)0x12F5D58)

    As you can see, it's a hardware watchpoint, not a software one (a software watchpoint would explain the slowness). Now the application's running time under the debugger has changed from less than ten seconds to one hour and counting. The watchpoint has triggered three times so far, the first time after 15 minutes, when the memory page containing the address was made readable by sbrk. Surely during those 15 minutes the watchpoint should have been efficient, since the memory page was inaccessible? And that still does not explain why it is so slow afterwards. The GDB is:

        $ gdb --version
        GNU gdb (GDB) 7.0-ubuntu
        [...]

    Thanks in advance for any ideas as to what might be the cause or how to fix/work around it.
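    One sanity check worth adding (my suggestion, not from the post): confirm GDB really kept the watchpoint in hardware, because it silently falls back to single-stepped software watchpoints when no debug register is available:

        (gdb) show can-use-hw-watchpoints
        (gdb) info watchpoints

    If info watchpoints lists the entry as "hw watchpoint", the hardware path is in use; a plain "watchpoint" means every instruction is being single-stepped, which alone would account for an hour-long run.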

    Read the article

  • C# ValueMember property repopulates the control

    - by karthik
    Hi, I just wanted to confirm a couple of things. I) Code snippet:

        cmb1.DataSource = dt;
        cmb1.ValueMember = "value";

    Here data population happens two times for the control, one extra time, because ValueMember changes after the data source is assigned. Is that true? II) How can I trace this repopulation in C#? I just want to debug, watch it, and confirm. An example, please? Thanks, Karthik
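    A common way to avoid the extra bind (a sketch, not from the question) is to assign ValueMember (and DisplayMember) before DataSource, so the control binds exactly once; "name" below is a hypothetical display column:

        // Property order matters: setting the members first means the single
        // (re)population happens when DataSource is assigned.
        cmb1.ValueMember = "value";
        cmb1.DisplayMember = "name";
        cmb1.DataSource = dt;

    To watch the repopulation happen, one option is a breakpoint or Debug.WriteLine in handlers for the control's DataSourceChanged and SelectedValueChanged events.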

    Read the article

  • Java: Netbeans debugging session works faster than normal run

    - by Martijn Courteaux
    Hello, I'm making Braid in NetBeans 6.7.1. Computer spec: Windows 7, 46 running processes, +/- 650 running threads, NVidia GeForce 9200M GS, Intel Core 2 Duo CPU P8400 @ 2.26GHz. Game spec with a normal run: memory between 80 MB and 110 MB, CPU between 9% and 20%, CPU when rewinding time: 90%. The same values hold for a debugging session, except when I rewind time: CPU 20%. Is there any reason for this? Is there a way to reach the same performance with a normal run? This is my repaint code:

        @Override
        public void repaint() {
            BufferStrategy bs = getBufferStrategy(); // numBuffers: 4
            Graphics g = bs.getDrawGraphics();
            g.setColor(Color.BLACK);
            g.fillRect(-1, -1, 2000, 2000);
            gamePanel.paint(g.create(x, y, gameDim.width, gameDim.height));
            bs.show();
            g.dispose();
            Toolkit.getDefaultToolkit().sync();
            update(g);
        }

    The game runs in fullscreen (undecorated + frame.size = screensize). Martijn

    Read the article

  • Multiple lines of text to a single map

    - by steven
    I've been trying to use Hadoop to send N lines to a single mapper; I don't need the lines to be split beforehand. I've tried to use NLineInputFormat, but that sends N lines of text from the data to each mapper one line at a time [giving up after the Nth line]. I have also tried to set this option, and it too takes N lines of input and sends them one line at a time to each map:

        job.setInt("mapred.line.input.format.linespermap", 10);

    I've found a mailing list recommending that I override LineRecordReader::next, but that is not so simple, as the internal data members are all private. I've just checked the source for NLineInputFormat and it hard-codes LineReader, so overriding will not help. Also, BTW, I'm using Hadoop 0.18 for compatibility with Amazon EC2 MapReduce.
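    One possible route (an untested sketch against the old org.apache.hadoop.mapred API of 0.18, not the poster's code): instead of patching LineRecordReader, wrap it in a delegating RecordReader that glues N consecutive lines into one value before handing it to map(). NLineBlockRecordReader is a made-up name:

        import java.io.IOException;
        import org.apache.hadoop.io.LongWritable;
        import org.apache.hadoop.io.Text;
        import org.apache.hadoop.mapred.RecordReader;

        // Wraps the reader produced by TextInputFormat.
        public class NLineBlockRecordReader implements RecordReader<LongWritable, Text> {
            private final RecordReader<LongWritable, Text> inner;
            private final int linesPerRecord;

            public NLineBlockRecordReader(RecordReader<LongWritable, Text> inner, int n) {
                this.inner = inner;
                this.linesPerRecord = n;
            }

            public boolean next(LongWritable key, Text value) throws IOException {
                Text line = inner.createValue();
                StringBuilder block = new StringBuilder();
                int read = 0;
                while (read < linesPerRecord && inner.next(key, line)) {
                    if (read > 0) block.append('\n');
                    block.append(line.toString());
                    read++;
                }
                if (read == 0) return false;  // split exhausted
                value.set(block.toString());  // N lines, newline-joined
                return true;
            }

            public LongWritable createKey() { return inner.createKey(); }
            public Text createValue() { return inner.createValue(); }
            public long getPos() throws IOException { return inner.getPos(); }
            public float getProgress() throws IOException { return inner.getProgress(); }
            public void close() throws IOException { inner.close(); }
        }

    A custom InputFormat would then return this reader from getRecordReader(); records that straddle split boundaries need the same care NLineInputFormat already takes.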

    Read the article

  • Concurrency problem with isolation level read-committed

    - by Ratn Deo--Dev
    I have to write a simple demo of withdrawing an amount from a joint bank account. Andy and Jen hold a joint bank account, number 123. Suppose they have $100 in their account. Jen and Andy operate the account at the same time, and both try to withdraw $90 at the same moment. My transaction isolation is set to read-committed, and both are able to withdraw money, leaving the balance at minus $80, although I have a constraint that the balance should never be less than 0. I am using Hibernate. Is versioning the only way to solve this problem, or should I go for another isolation level?
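    The versioning option from the question, sketched (a hypothetical entity, not the poster's code): a JPA/Hibernate @Version column turns the lost update into an optimistic-lock failure, so only one of the two concurrent withdrawals commits:

        import java.math.BigDecimal;
        import javax.persistence.Entity;
        import javax.persistence.Id;
        import javax.persistence.Version;

        @Entity
        public class Account {
            @Id
            private long number;        // e.g. 123

            private BigDecimal balance;

            @Version
            private int version;        // Hibernate adds "WHERE version = ?" to the UPDATE

            public void withdraw(BigDecimal amount) {
                if (balance.compareTo(amount) < 0)
                    throw new IllegalStateException("insufficient funds");
                balance = balance.subtract(amount);
            }
        }

    The losing transaction gets a StaleObjectStateException (an OptimisticLockException under JPA) and can retry with the fresh balance; the alternatives are pessimistic locking or a stricter isolation level such as serializable.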

    Read the article

  • SQL locking in a Silverlight app

    - by immuner
    Hi, I am not sure if this is the correct term, but this is what I'd like to do. I have an application that uses an MSSQL database. This application can operate in 3 modes:

    mode 1) the user does not alter, but only reads the database
    mode 2) the user can add rows (one at a time) to a table in the database
    mode 3) the user can alter several tables in the database (one person at a time)

    Question 1) How can I ensure that when a user is in mode 3, the database will "lock", and all logged-in users operating in mode 2 or mode 3 will not be able to change the database until he finishes?

    Question 2) How can I ensure that while there are several users in mode 2, there will be no conflict while they all update the table? My guess here is that before adding a new row, you query the server for the table's current unique keys and then add the new entry. Will this be safe enough, though? Thanks
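    For question 1, one sketch (my suggestion, not from the question) is SQL Server's application locks: a mode-3 session takes an exclusive lock on an agreed name, while mode-2 writers take the same name in Shared mode, so they block only while an exclusive editor is active. The resource name 'db_edit' is made up:

        -- Mode 3: serialize whole-database editing sessions.
        BEGIN TRAN;
        EXEC sp_getapplock @Resource = 'db_edit', @LockMode = 'Exclusive',
                           @LockOwner = 'Transaction';
        -- ... alter the tables here ...
        COMMIT;  -- commit/rollback releases the lock

    For question 2, an IDENTITY column (or a unique constraint plus retry on violation) is safer than reading the current keys first, because two sessions can read the same "next" key before either inserts.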

    Read the article

  • Playing sound while a UIImageView animates crashes the app?

    - by Rahul Vyas
    Hello all, I am having a strange problem in my app. I am creating an app in which I have 9 buttons on the view, and I want to play a sound and an animation at the same time. I am using UIImageView for this, but I am seeing strange behaviour: sometimes the sound plays early and then the animation runs. So how do I play them at the same time? I am also getting a crash on the device with a specific animation in which I have 60 JPG files. So what's the best way to do this?

    Read the article

  • Using Qtcreator (with Qwt), really basic stuff

    - by Dago
    I'm trying to make a Qt program using Qt Creator and Qwt for plotting. I've never used Qt Creator before. I've created a main window and added a QwtPlot widget there (object name: qwtPlot). The widget shows up in the program when I compile and run it. But there's no mention of the qwtPlot object anywhere in the (autogenerated) code, so I assume it is being added at compile time from the .ui XML file (or something). My question is: how do I modify/change the qwtPlot object? Or where should I place the code? I'm having a hard time articulating my question, but it basically is "How do I do anything with the qwtPlot widget which is created (graphically) with Qt Creator?". I've checked some tutorials, but in the tutorials they add the widgets manually in code, whereas I'd like to use Qt Creator (because my UI will be fairly complicated). This whole Qt Creator thing is pretty confusing...
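    For what it's worth, the standard Qt Creator pattern is that everything placed in the .ui file becomes a member of the generated Ui:: class, reachable through the ui pointer in the window's constructor. A sketch (assuming the default generated MainWindow; the plot title is a made-up example):

        #include "mainwindow.h"
        #include "ui_mainwindow.h"   // generated from mainwindow.ui at build time

        MainWindow::MainWindow(QWidget *parent)
            : QMainWindow(parent), ui(new Ui::MainWindow)
        {
            ui->setupUi(this);               // this instantiates qwtPlot from the XML
            ui->qwtPlot->setTitle("Demo");   // from here on, configure it in code
        }

    Any code that runs after setupUi() can use ui->qwtPlot like a hand-created widget: attach curves, set axes, and so on.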

    Read the article

  • To Wrap or Not to Wrap: Wrapping Data Access in a Service Facade

    - by PureCognition
    For a while now, my team and I have been wrapping our data access layer in a web service facade (using WCF) and calling it from the business logic layer. Meanwhile, we could simply use the repository pattern where the business logic layer consumes the data access layer locally through an interface, and at any point in time, we can switch things out for it to hit a service instead (if necessary). The question is: When is it a good time to wrap the data access layer in a service facade and when isn't it? Right now, it seems like the main advantage is that other applications can consume the service, but if they are internal applications written in .NET then they can just consume the .NET assembly instead. Are there other advantages of having the DAL be wrapped in a service that I am unaware of?

    Read the article

  • Ruby - Possible to pass a block param on as an actual block to another function?

    - by Markus O'Reilly
    This is what I'm trying to do:

        def call_block(in_class = "String", &block)
          instance = eval("#{in_class}.new")
          puts "instance class: #{instance.class}"
          instance.instance_eval { block.call }
        end

        # --- TEST EXAMPLE ---
        # This outputs "class: String" every time
        "sdlkfj".instance_eval { puts "class: #{self.class}" }

        # This will only output "class: Object" every time
        # I'm trying to get this to output "class: String" though
        call_block("String") { puts "class: #{self.class}" }

    On the line that says instance.instance_eval { block.call }, I'm trying to find another way to make the new instance variable run instance_eval on the block. The only way I can think of to get it to do that is to pass instance_eval the original block, not as a variable or anything, but as a real block, like in the test example. Any tips?
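    A sketch of the usual answer: prefixing the captured block with & turns it back into a real block argument, so instance_eval runs it with self rebound to the receiver:

        def call_block(in_class = "String", &block)
          instance = Object.const_get(in_class).new  # avoids eval on a string
          instance.instance_eval(&block)             # & re-passes it as a block
        end

        call_block("String") { puts "class: #{self.class}" }  # => class: String

    With block.call the block is simply invoked as a Proc, so self inside it stays whatever it was at the definition site; instance_eval(&block) is what rebinds it.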

    Read the article

  • Is it possible to Build & Run on TWO iPhones/iPods at once?

    - by Dimitris
    When I connect two iPhones to my computer at the same time and Build and Run a project, the app only installs and runs on one of the devices. Now, with iPhone OS 3.0, which supports Bluetooth peer-to-peer connectivity, to test a multiplayer project you have to install and run it on two devices at the same time. It would be very helpful to be able to do that with one click instead of: install on one phone, disconnect, connect the other, wait 10 seconds for the phone to be recognized, install again and run... Is anyone aware of a method to do such a thing? Thanks

    Read the article

  • Connection Timeout exception for a query using ADO.Net

    - by dragon
    Update: Looks like the query does not throw any timeout; the connection is timing out. This is a sample code for executing a query:

        private static void CreateCommand(string queryString, string connectionString)
        {
            using (SqlConnection connection = new SqlConnection(connectionString))
            {
                SqlCommand command = new SqlCommand(queryString, connection);
                command.Connection.Open();
                command.ExecuteNonQuery();
            }
        }

    Sometimes, while executing time-consuming queries, it throws a timeout exception. I cannot use either of these techniques: 1) increasing the timeout; 2) running it asynchronously with a callback, since this needs to run in a synchronous manner. Please suggest any other techniques to keep the connection alive while executing a time-consuming query.

    Read the article

  • Creating a XAML 'template' for multiple pages

    - by superexsl
    Hey, I'm developing a Silverlight application for the first time. I've gone through some tutorials, but I can't seem to find anything that helps me with this particular problem. I would like a set of buttons to be present on all of my pages (like a template). When a button is pressed, I would like the current ContentGrid to slide out and a new ContentGrid to slide in (with the relevant .xaml file being loaded). Are there any tutorials showing the best way to do this? The samples I've seen only transition between two pages, so copy-pasting the group of buttons on each xaml page isn't too much of a problem there. However, with more pages, it would be inefficient to copy-paste the base layout each time. Thanks for any suggestions

    Read the article

  • How to prevent traffic to/from a slow Cassandra node using Python

    - by Sergio Ayestarán
    Intro: I have a Python application using a Cassandra 1.2.4 cluster with a replication factor of 3; all reads and writes are done with a consistency level of 2. To access the cluster I use the CQL library. The Cassandra cluster is running on Rackspace's virtual servers.

    The problem: From time to time one of the nodes can become slower than usual. In this case I want to be able to detect the situation, prevent requests being made to the slow node, and if possible stop using it altogether (this should theoretically be possible, since the RF is 3 and the CL is 2 for every single request).

    The questions: What's the best way of detecting the slow node from a Python application? Is there a way to stop using one of the Cassandra nodes from Python in this scenario without human intervention? Thanks in advance!
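    A minimal sketch of the detection half (hypothetical wrapper code, not part of the cql library): time every request per host, keep a sliding window of latencies, and drop a host from the rotation while its recent average is above a threshold:

        import time
        from collections import defaultdict, deque

        class LatencyTracker:
            def __init__(self, window=100, threshold_s=0.5):
                self.samples = defaultdict(lambda: deque(maxlen=window))
                self.threshold_s = threshold_s

            def record(self, host, elapsed_s):
                self.samples[host].append(elapsed_s)

            def is_slow(self, host):
                s = self.samples[host]
                return len(s) == s.maxlen and sum(s) / len(s) > self.threshold_s

            def healthy(self, hosts):
                ok = [h for h in hosts if not self.is_slow(h)]
                return ok or list(hosts)   # never return an empty pool

        tracker = LatencyTracker()

        def timed_execute(cursor, host, query, params=()):
            start = time.time()
            try:
                return cursor.execute(query, params)
            finally:
                tracker.record(host, time.time() - start)

    Connections would then only be opened against tracker.healthy(all_hosts); whether per-host routing is possible is the part that depends on the CQL driver in use.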

    Read the article

  • Why did WCAG make 3 levels, "A", "AA" and "AAA"?

    - by jitendra
    What is the purpose of the 3 priority levels made by WCAG? Is it like this?

    If the client is not paying extra, or we don't have much time, go for A.
    If the client is paying, or we have time to make the site compatible, go for at least AA.
    If the client is paying and it is required by government rules, go for AAA.

    If we are making a site, which level should we try to achieve, or should we do this only at the client's request? I found these definitions on this site, but they are confusing to me:

    • Priority 1: For all users to access the Web content, and for Web developers to attain conformance level "A", these requirements must be satisfied.
    • Priority 2: These requirements should be satisfied by Web developers so that no group finds it difficult to access the Web content, and so as to attain conformance level "AA".
    • Priority 3: These requirements may be satisfied by Web developers to facilitate access to Web content for some groups, and to attain conformance level "AAA".

    Read the article

  • flex3 Format date without timezone

    - by Maurits de Boer
    I'm receiving a date from a server in milliseconds since 1-1-1970. I then use the DateFormatter to print the date to the screen. However, Flex adds the time difference, and thus it displays a different time from what I got from the server. I've fixed this by changing the date before printing it to the screen. But I think that's a bad solution, because the date object doesn't hold the correct date. Does anyone know how to use the DateFormatter to print the date, ignoring the timezone? This is how I did it:

        function getDateString(value:Date):String {
            var millisecondsPerMinute:int = 1000 * 60;
            var newDate:Date = new Date(value.time - (millisecondsPerMinute * value.timezoneOffset));
            var dateFormatter:DateFormatter = new DateFormatter();
            // Note: in DateFormatter patterns "MM" means month; minutes are "NN",
            // so "LL:MM AA" almost certainly wants to be "LL:NN AA".
            dateFormatter.formatString = "EEEE DD-MM-YYYY LL:MM AA";
            return dateFormatter.format(newDate);
        }

    Read the article

  • What's the (hidden) cost of lazy val? (Scala)

    - by Jesper
    One handy feature of Scala is lazy val, where the evaluation of a val is delayed until it's necessary (at first access). Of course a lazy val must have some overhead: somewhere Scala must keep track of whether the value has already been evaluated, and the evaluation must be synchronized, because multiple threads might try to access the value for the first time at the same time. What exactly is the cost of a lazy val? Is there a hidden boolean flag associated with a lazy val to keep track of whether it has been evaluated, what exactly is synchronized, and are there any more costs? And a follow-up question: suppose I do this:

        class Something {
          lazy val (x, y) = { ... }
        }

    Is this the same as having two separate lazy vals x and y, or do I get the overhead only once, for the pair (x, y)?
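    For the first question, a hand-written approximation (a sketch of the idea, not the exact compiler output) of what lazy val x = compute() becomes: one extra flag field, a flag check on every access, and synchronization only until the value is written:

        class Something {
          @volatile private var xBitmap = false   // "has x been evaluated yet?"
          private var xValue: Int = 0

          private def xLazyCompute(): Int = {
            this.synchronized {                   // taken at most on the first access(es)
              if (!xBitmap) { xValue = compute(); xBitmap = true }
            }
            xValue
          }

          def x: Int = if (xBitmap) xValue else xLazyCompute()

          private def compute(): Int = 42         // stands in for the real initializer
        }

    So after initialization, the steady-state cost is a volatile read plus a branch per access.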

    Read the article

  • Help writing a query for this requirement

    - by Lu Lu
    I need to write a SQL Server query, but I don't know how to solve this. I have a table RealtimeData with this data:

        Time                   | Value
        4/29/2009 12:00:00 AM  | 3672.0000
        4/29/2009 12:01:00 AM  | 3645.0000
        4/29/2009 12:02:00 AM  | 3677.0000
        4/29/2009 12:03:00 AM  | 3634.0000
        4/29/2009 12:04:00 AM  | 3676.0000
        4/30/2009 12:00:00 AM  | 3671.0000
        4/30/2009 12:01:00 AM  | 3643.0000
        4/30/2009 12:02:00 AM  | 3672.0000
        4/30/2009 12:03:00 AM  | 3634.0000
        4/30/2009 12:04:00 AM  | 3632.0000
        4/30/2009 12:05:00 AM  | 3672.0000
        5/1/2009 12:00:00 AM   | 3673.0000
        5/1/2009 12:01:00 AM   | 3642.0000
        5/1/2009 12:02:00 AM   | 3672.0000
        5/1/2009 12:03:00 AM   | 3634.0000
        5/1/2009 12:04:00 AM   | 3635.0000

    I want to get each day's EOD (end of day) data for the days which exist in the table. With my sample data, I will need to return a table like the following:

        Time      | Value
        4/29/2009 | 3676.0000
        4/30/2009 | 3672.0000
        5/1/2009  | 3635.0000

    Please help me to solve my problem. Thanks.
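    One way to sketch this (SQL Server 2005 and later): number the rows within each day, latest first, and keep the first row of each partition:

        -- Last reading per calendar day; CONVERT(..., 112) groups by yyyymmdd.
        SELECT CONVERT(varchar(10), t.[Time], 101) AS [Time], t.[Value]
        FROM (
            SELECT [Time], [Value],
                   ROW_NUMBER() OVER (PARTITION BY CONVERT(varchar(8), [Time], 112)
                                      ORDER BY [Time] DESC) AS rn
            FROM RealtimeData
        ) AS t
        WHERE t.rn = 1
        ORDER BY t.[Time];

    Note this picks the row with the latest timestamp per day; if "end of day" instead means a fixed cut-off time, the ORDER BY inside the OVER clause is the place to encode it.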

    Read the article

  • TicTacToe AI Making Incorrect Decisions

    - by Chris Douglass
    A little background: as a way to learn multinode trees in C++, I decided to generate all possible TicTacToe boards and store them in a tree, such that the subtree beginning at a node contains all boards that can follow from that node, and the children of a node are the boards that follow in one move. After that, I thought it would be fun to write an AI to play TicTacToe using that tree as a decision tree. TTT is a solved problem where a perfect player will never lose, so it seemed an easy AI to code for my first attempt at an AI.

    When I first implemented the AI, I went back and added two fields to each node upon generation: the number of times X will win and the number of times O will win in all children below that node. I figured the best solution was simply to have my AI, on each move, choose and go down the subtree where it wins the most times. Then I discovered that while it plays perfectly most of the time, I found ways to beat it. It wasn't a problem with my code, simply a problem with the way I had the AI choose its path. Then I decided to have it choose the subtree with either the maximum wins for the computer or the maximum losses for the human, whichever was greater. This made it perform better, but still not perfectly; I could still beat it.

    So I have two ideas, and I'm hoping for input on which is better:

    1) Instead of maximizing the wins or losses, I could assign values of 1 for a win, 0 for a draw, and -1 for a loss. Then choosing the subtree with the highest value will be the best move, because that next node can't be a move that results in a loss. It's an easy change in the board generation, and it retains the same search space and memory usage.

    Or...

    2) During board generation, if there is a board such that either X or O will win on their next move, only the child that prevents that win will be generated. No other child nodes will be considered, and generation will proceed as normal after that. It shrinks the size of the tree, but then I have to implement an algorithm to determine whether there is a one-move win, and I think that can only be done in linear time (making board generation a lot slower, I think?).

    Which is better, or is there an even better solution?
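    For reference, option 1 is exactly minimax value propagation; a minimal C++ sketch (hypothetical node layout, not the poster's tree) shows why it can never walk into a forced loss, since a subtree's value is already the outcome under best play by both sides:

        #include <algorithm>
        #include <vector>

        struct Node {
            std::vector<Node*> children;
            int terminalValue = 0;   // +1 X wins, 0 draw, -1 O wins (terminal boards only)
            bool isTerminal = false;
        };

        // Back the terminal values up the tree: X maximizes, O minimizes.
        int minimax(const Node* n, bool xToMove) {
            if (n->isTerminal) return n->terminalValue;
            int best = xToMove ? -2 : 2;             // worse than any real value
            for (const Node* c : n->children) {
                int v = minimax(c, !xToMove);
                best = xToMove ? std::max(best, v) : std::min(best, v);
            }
            return best;
        }

    Counting wins fails because a subtree with many winning leaves can still contain one line the opponent can force; the max/min alternation accounts for the opponent always picking that line.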

    Read the article

  • Stop duplicate icmp echo replies when bridging to a dummy interface?

    - by mbrownnyc
    I recently configured a bridge br0 with members eth0 (a real interface) and dummy0 (a dummy.ko interface). When I ping this machine, I receive duplicate replies:

        # ping SERVERA
        PING SERVERA.domain.local (192.168.100.115) 56(84) bytes of data.
        64 bytes from SERVERA.domain.local (192.168.100.115): icmp_seq=1 ttl=62 time=113 ms
        64 bytes from SERVERA.domain.local (192.168.100.115): icmp_seq=1 ttl=62 time=114 ms (DUP!)
        64 bytes from SERVERA.domain.local (192.168.100.115): icmp_seq=2 ttl=62 time=113 ms
        64 bytes from SERVERA.domain.local (192.168.100.115): icmp_seq=2 ttl=62 time=113 ms (DUP!)

    Using tcpdump on SERVERA, I was able to see ICMP echo replies being sent from eth0 and br0 itself, as follows (oddly, two echo request packets arrive "from" my Windows box myhost):

        23:19:05.324192 IP myhost.domain.local > SERVERA.domain.local: ICMP echo request, id 512, seq 43781, length 40
        23:19:05.324212 IP SERVERA.domain.local > myhost.domain.local: ICMP echo reply, id 512, seq 43781, length 40
        23:19:05.324217 IP myhost.domain.local > SERVERA.domain.local: ICMP echo request, id 512, seq 43781, length 40
        23:19:05.324221 IP SERVERA.domain.local > myhost.domain.local: ICMP echo reply, id 512, seq 43781, length 40
        23:19:05.324264 IP SERVERA.domain.local > myhost.domain.local: ICMP echo reply, id 512, seq 43781, length 40
        23:19:05.324272 IP SERVERA.domain.local > myhost.domain.local: ICMP echo reply, id 512, seq 43781, length 40

    It's worth noting that testing reveals hosts on the same physical switch do not see DUP ICMP echo responses (a host on the same VLAN on another switch does see a dup ICMP echo response). I've read that this could be due to the ARP table of a switch, but I can't find any info directly related to bridges, just bonds. I have a feeling my problem lies in the stack on Linux, not the switch, but am open to any suggestions. The system is running CentOS 6/EL6, kernel 2.6.32-71.29.1.el6.i686. How do I stop ICMP echo replies from being sent in duplicate when dealing with a bridge interface/bridged interfaces? Thanks, Matt

    [edit] Quick note: it was recommended in #linux to:

        [08:53] == mbrownnyc [gateway/web/freenode/] has joined ##linux
        [08:57] <lkeijser> mbrownnyc: what happens if you set arp_ignore to 1 for the dummy interface?
        [08:59] <lkeijser> also set arp_announce to 2 for that interface
        [09:24] <mbrownnyc> lkeijser: I set arp_announce to 2, arp_ignore to 2 in /etc/sysctl.conf and rebooted the machine... verifying that the bits are set after boot... the problem is still present

    I did this and came up empty. Same dup problem. I will be moving away from including the dummy interface in the bridge as:

        [09:31] == mbrownnyc [gateway/web/freenode/] has joined #Netfilter
        [09:31] <mbrownnyc> Hello all... I'm wondering, is it correct that even with an interface in PROMISC that the kernel will drop /some/ packets before they reach applications?
        [09:31] <whaffle> What would you make think so?
        [09:32] <mbrownnyc> I ask because I am receiving ICMP echo replies after configuring a bridge with a dummy interface in order for ipt_netflow to see all packets, only as reported in it's documentation: http://ipt-netflow.git.sourceforge.net/git/gitweb.cgi?p=ipt-netflow/ipt-netflow;a=blob;f=README.promisc
        [09:32] <mbrownnyc> but I do not know if PROMISC will do the same job
        [09:33] <mbrownnyc> I was referred here from #linux. any assistance is appreciated
        [09:33] <whaffle> The following conditions need to be met: PROMISC is enabled (bridges and applications like tcpdump will do this automatically, otherwise they won't function).
        [09:34] <whaffle> If an interface is part of a bridge, then all packets that enter the bridge should already be visible in the raw table.
        [09:35] <mbrownnyc> thanks whaffle PROMISC must be set manually for ipt_netflow to function, but
        [09:36] <whaffle> promisc does not need to be set manually, because the bridge will do it for you.
        [09:36] <whaffle> When you do not have a bridge, you can easily create one, thereby rendering any kernel patches moot.
        [09:36] <mbrownnyc> whaffle: I speak without the bridge
        [09:36] <whaffle> It is perfectly valid to have a "half-bridge" with only a single interface in it.
        [09:36] <mbrownnyc> whaffle: I am unfamiliar with the raw table, does this mean that PROMISC allows the raw table to be populated with packets the same as if the interface was part of a bridge?
        [09:37] <whaffle> Promisc mode will cause packets with {a dst MAC address that does not equal the interface's MAC address} to be delivered from the NIC into the kernel nevertheless.
        [09:37] <mbrownnyc> whaffle: I suppose I mean to clearly ask: what benefit would creating a bridge have over setting an interface PROMISC?
        [09:38] <mbrownnyc> whaffle: from your last answer I feel that the answer to my question is "none," is this correct?
        [09:39] <whaffle> Furthermore, the linux kernel itself has a check for {packets with a non-local MAC address}, so that packets that will not enter a bridge will be discarded as well, even in the face of PROMISC.
        [09:46] <mbrownnyc> whaffle: so, this last bit of information is quite clearly why I would need and want a bridge in my situation
        [09:46] <mbrownnyc> okay, the ICMP echo reply duplicate issue is likely out of the realm of this channel, but I sincerely appreciate the info on the kernels inner-workings
        [09:52] <whaffle> mbrownnyc: either the kernel patch, or a bridge with an interface. Since the latter is quicker, yes
        [09:54] <mbrownnyc> thanks whaffle

    [edit2] After removing the bridge and removing the dummy kernel module, I only had a single interface chilling out, lonely. I still received duplicate ICMP echo replies... in fact I received a random amount: http://pastebin.com/2LNs0GM8 The same thing doesn't happen on a few other hosts on the same switch, so it has to do with the Linux box itself. I'll likely end up rebuilding it next week. Then... you know... this same thing will occur again.

    [edit3] Guess what? I rebuilt the box, and I'm still receiving duplicate ICMP echo replies. Must be the network infrastructure, although the ARP tables do not contain multiple entries.

    [edit4] How ridiculous. The machine was a network probe, so I was (ingress and egress) mirroring an uplink port to a node that was the NIC. So the flow (must have) gone like this:

        The ICMP echo request comes in through the mirrored uplink port.
        (The real) ICMP echo request is received by the NIC.
        (The mirrored) ICMP echo request is received by the NIC.
        An ICMP echo reply is sent for both.

    I'm ashamed of myself, but now I know. It was suggested on #networking to either isolate the mirrored traffic to an interface that does not have IP enabled, or tag the mirrored packets with dot1q.

    Read the article

  • Best practices for using memcached in Rails?

    - by Matt
    Hello everybody, as database transactions in our app are getting more and more time-consuming, we have started to use memcached to reduce the number of queries passed to MySQL. All in all, it works fine and really saves a lot of time. But as caching "silently appeared" as a workaround to give the app more juice, a lot of our models now contain code like this:

        def self.all_cached
          Rails.cache.fetch('object_name') { find(:all, :include => [associations]) }
        end

    This is getting to be more and more of a pain, as filling and flushing the cache happens in several classes across the application. Now, I was wondering: is there a better way to abstract the memcached logic, to make it more powerful and easy to use across all the models that need it? I was thinking about having some kind of memcached module which is included in all needed models. But before playing around, I thought: let's ask the experts first :-) Thanks, Matt
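    One shape such a mixin could take (a sketch with made-up names, using Rails 2-era API to match the find(:all) style above): the cached finder and the flush logic live in one module, and write callbacks keep the cache honest:

        module Cacheable
          def self.included(base)
            base.extend(ClassMethods)
            base.after_save    { |record| record.class.flush_all_cache }
            base.after_destroy { |record| record.class.flush_all_cache }
          end

          module ClassMethods
            def all_cached
              Rails.cache.fetch(all_cache_key) { find(:all) }
            end

            def flush_all_cache
              Rails.cache.delete(all_cache_key)
            end

            def all_cache_key
              "#{name}.all"   # e.g. "User.all"
            end
          end
        end

        class User < ActiveRecord::Base
          include Cacheable
        end

    Per-model options (like the :include above) could be passed through a class-level setting instead of being hard-coded in each model.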

    Read the article

  • Is it too early to start designing for Task Parallel Library?

    - by Joe Erickson
    I have been following the development of the .NET Task Parallel Library (TPL) with great interest since Microsoft first announced it. There is no doubt in my mind that we will eventually take advantage of TPL. What I am questioning is whether it makes sense to start taking advantage of TPL when Visual Studio 2010 and .NET 4.0 are released, or whether it makes sense to wait a while longer.

    Why start now? The .NET 4.0 Task Parallel Library appears to be well designed, and some relatively simple tests demonstrate that it works well on today's multi-core CPUs. I have been very interested in the potential advantages of using multiple lightweight threads to speed up our software since buying my first quad-processor Dell PowerEdge 6400 about seven years ago. Experiments at that time indicated that it was not worth the effort, which I attributed largely to the overhead of moving data between each CPU's cache (there was no shared cache back then) and RAM. Competitive advantage: some of our customers can never get enough performance, and there is no doubt that we can build a faster product using TPL today. And it sounds fun. Yes, I realize that some developers would rather poke themselves in the eye with a sharp stick, but we really enjoy maximizing performance.

    Why wait? Are today's Intel Nehalem CPUs representative of where we are going as multi-core support matures? You can purchase a Nehalem CPU with 4 cores which share a single level 3 cache today, and most likely a 6-core CPU sharing a single level 3 cache by the time Visual Studio 2010 / .NET 4.0 are released. Obviously, the number of cores will go up over time, but what about the architecture? As the number of cores goes up, will they still share a cache? One issue with Nehalem is the fact that, even though there is a very fast interconnect between the cores, they have non-uniform memory access (NUMA), which can lead to lower performance and less predictable results. Will future multi-core architectures be able to do away with NUMA? Similarly, will the .NET Task Parallel Library change as it matures, requiring modifications to code to fully take advantage of it?

    Limitations: Our core engine is 100% C# and has to run without full trust, so we are limited to using .NET APIs.

    Read the article

  • Throughput measurements

    - by dotsid
    I wrote a simple load testing tool for testing the performance of Java modules. One problem I faced is the algorithm for throughput measurement. Tests are executed in several threads (the client configures how many times a test should be repeated), and the execution time is logged. So, when the tests are finished, we have the following history:

        4 test executions
        2 threads
        36ms overall time

        - idle
        * test execution

             5ms    9ms      4ms     13ms
        T1 |-*****-*********-****-*************-|
             3ms   6ms      7ms       11ms
        T2 |-***-******-*******-***********-----|
           <-----------------36ms--------------->

    For the moment I calculate throughput (per second) in the following way: 1000 / overallTime * threadCount. But there is a problem. What if one thread completes its own tests more quickly (for whatever reason):

             3ms  3ms  3ms  3ms
        T1 |-***-***-***-***----------------|
             3ms   6ms      7ms      11ms
        T2 |-***-******-*******-***********-|
           <--------------32ms-------------->

    In this case the actual throughput is much better, because the measured throughput is bounded by the slowest thread. So, my question is: how should I measure the throughput of code execution in a multithreaded environment?
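    One sketch of an alternative (my suggestion, not from the question): have each thread report how many operations it completed and how long it was actually busy, then sum the per-thread rates, so an early-finishing or slow thread no longer caps everyone else's number:

        import java.util.List;

        final class ThroughputCalc {
            static final class ThreadStats {
                final long completedOps;
                final long busyNanos;   // time the thread spent inside tests
                ThreadStats(long ops, long nanos) { completedOps = ops; busyNanos = nanos; }
            }

            // Sum of per-thread rates, in operations per second.
            static double totalThroughput(List<ThreadStats> stats) {
                double sum = 0.0;
                for (ThreadStats s : stats) {
                    sum += s.completedOps / (s.busyNanos / 1e9);
                }
                return sum;
            }
        }

    The other common convention is total operations divided by wall-clock time from the first start to the last finish; that one measures the harness as configured, while the per-thread sum above estimates the capacity of the code itself.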

    Read the article
