Search Results

Search found 7602 results on 305 pages for 'actual costing'.

Page 2/305 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • Cardinality Estimation Bug with Lookups in SQL Server 2008 onward

    - by Paul White
    Cost-based optimization stands or falls on the quality of cardinality estimates (expected row counts). If the optimizer has incorrect information to start with, it is quite unlikely to produce good quality execution plans except by chance. There are many ways we can provide good starting information to the optimizer, and even more ways for cardinality estimation to go wrong. Good database people know this, and work hard to write optimizer-friendly queries with a schema and metadata (e.g. statistics) that reduce the chances of poor cardinality estimation producing a sub-optimal plan. Today, I am going to look at a case where poor cardinality estimation is Microsoft's fault, and not yours.

    SQL Server 2005

        SELECT th.ProductID, th.TransactionID, th.TransactionDate
        FROM Production.TransactionHistory AS th
        WHERE th.ProductID = 1
        AND th.TransactionDate BETWEEN '20030901' AND '20031231';

    The query plan on SQL Server 2005 is as follows (if you are using a more recent version of AdventureWorks, you will need to change the year on the date range from 2003 to 2007): there is an Index Seek on ProductID = 1, followed by a Key Lookup to find the Transaction Date for each row, and finally a Filter to restrict the results to only those rows where Transaction Date falls in the range specified. The cardinality estimate of 45 rows at the Index Seek is exactly correct. The table is not very large, there are up-to-date statistics associated with the index, so this is as expected.

    The estimate for the Key Lookup is also exactly right. Each lookup into the Clustered Index to find the Transaction Date is guaranteed to return exactly one row. The plan shows that the Key Lookup is expected to be executed 45 times. The estimate for the Inner Join output is also correct – 45 rows from the seek joining to one row each time gives 45 rows as output.

    The Filter estimate is also very good: the optimizer estimates 16.9951 rows will match the specified range of transaction dates. Eleven rows are produced by this query, but that small difference is quite normal and certainly nothing to worry about here. All good so far.

    SQL Server 2008 onward

    The same query executed against an identical copy of AdventureWorks on SQL Server 2008 produces a different execution plan: the optimizer has pushed the Filter conditions seen in the 2005 plan down to the Key Lookup. This is a good optimization – it makes sense to filter rows out as early as possible. Unfortunately, it has made a bit of a mess of the cardinality estimates.

    The post-Filter estimate of 16.9951 rows seen in the 2005 plan has moved with the predicate on Transaction Date. Instead of estimating one row, the plan now suggests that 16.9951 rows will be produced by each clustered index lookup – clearly not right! This misinformation also confuses SQL Sentry Plan Explorer: Plan Explorer shows 765 rows expected from the Key Lookup (it multiplies a rounded estimate of 17 rows by 45 expected executions to give 765 rows total).

    Workarounds

    One workaround is to provide a covering non-clustered index (avoiding the lookup avoids the problem, of course):

        CREATE INDEX nc1
        ON Production.TransactionHistory (ProductID)
        INCLUDE (TransactionDate);

    With the Transaction Date filter applied as a residual predicate in the same operator as the seek, the estimate is again as expected. We could also force the use of the ultimate covering index (the clustered one):

        SELECT th.ProductID, th.TransactionID, th.TransactionDate
        FROM Production.TransactionHistory AS th WITH (INDEX(1))
        WHERE th.ProductID = 1
        AND th.TransactionDate BETWEEN '20030901' AND '20031231';

    Summary

    Providing a covering non-clustered index for all possible queries is not always practical, and scanning the clustered index will rarely be optimal. Nevertheless, these are the best workarounds we have today. In the meantime, watch out for poor cardinality estimates when a predicate is applied as part of a lookup.

    The worst thing is that the estimate after the lookup join in the 2008+ plans is wrong. It's not hopelessly wrong in this particular case (45 versus 16.9951 is not the end of the world), but it easily can be much worse, and there's not much you can do about it. Any decisions made by the optimizer after such a lookup could be based on very wrong information – which can only be bad news.

    If you think this situation should be improved, please vote for this Connect item.

    © 2012 Paul White – All Rights Reserved twitter: @SQL_Kiwi email: [email protected]
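
    A minimal repro sketch of the covering-index workaround above, assuming the AdventureWorks sample database used in the post (on 2008-era copies the dates shift from 2003 to 2007). SET STATISTICS XML is used here only so the actual plan's estimated and actual row counts can be compared side by side; the index name nc1 is the one from the article.

        -- Sketch: create the covering index from the post, capture the actual plan, clean up.
        CREATE INDEX nc1
        ON Production.TransactionHistory (ProductID)
        INCLUDE (TransactionDate);
        GO
        SET STATISTICS XML ON;   -- returns the actual plan so estimates can be checked
        GO
        SELECT th.ProductID, th.TransactionID, th.TransactionDate
        FROM Production.TransactionHistory AS th
        WHERE th.ProductID = 1
        AND th.TransactionDate BETWEEN '20030901' AND '20031231';
        GO
        SET STATISTICS XML OFF;
        GO
        DROP INDEX nc1 ON Production.TransactionHistory;   -- remove the test index again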

    Read the article

  • Where is the actual content in a TCP segment

    - by packetloss
    When I email something or download a program, or do anything else over a network, where in the segment is the actual content? If I am emailing a 20KB Word document, and the maximum data field size in a segment is 1500 bytes, does that mean it takes about 14 segments to mail my document wherever it is going? I get, I think, the OSI model and I have a decent grasp of the IP protocol. I think I understand the concept of header wrapping at each successive layer in the protocol stack. What I can't get a definitive answer to is where the actual content goes in a TCP segment. Is that the datagram? Maybe the fact I am asking proves I have no clue... Many thanks.
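
    Roughly, yes – the document's bytes travel in the data portion of each TCP segment, after the TCP header, which itself follows the IP header. A back-of-the-envelope count for the 20 KB example, assuming a typical 1500-byte Ethernet MTU with 20-byte IP and 20-byte TCP headers (no options) and ignoring SMTP/MIME encoding overhead:

        # Back-of-the-envelope segment count under the assumptions stated above.
        import math

        document_size = 20 * 1024          # 20 KB Word document, in bytes
        mtu = 1500                         # typical Ethernet MTU
        mss = mtu - 20 - 20                # payload per segment: MTU minus IP and TCP headers
        print(math.ceil(document_size / mss))   # -> 15 segments for the raw bytes

    So the estimate of about 14 segments is in the right ballpark; base64-encoding the attachment for email would add roughly a third more.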

    Read the article

  • Actual High Speed USB flash drive

    - by CSkau
    I'm looking to upgrade my EEE 1000H by possibly replacing the HDD with simple (internal) USB-connected storage. The problem I'm having now is that I can't seem to find any actual high-speed USB sticks. They all proclaim high speeds, but usually turn out to be ~30 MB/s – much lower than the 60 MB/s (480 Mbit/s / 8) I understand USB 2.0 is rated at, no? Can anyone enlighten me as to why no USB sticks seem to go past that low bar, or alternatively point me in the direction of some actual high-speed USB sticks? Any help is greatly appreciated :) Cheers!

    Read the article

  • Using AddEncoding x-gzip .gz without actual files

    - by STATUS_ACCESS_DENIED
    With Apache (2.2 and later) how can I achieve the following. I want to transparently compress using GZip encoding (not plain Deflate) the output when a certain file is queried with its name plus the extension .gz, where the .gz version doesn't physically exist on disk. So let's say I have a file named /path/foo.bar and no file foo.bar.gz in the folder to which the URI /path maps, how can I get Apache to serve the contents of /path/foo.bar but with AddEncoding x-gzip ... applied to the (non-existing) file?

    The rewrite part appears to be easy, but the problem is how to apply the encoding to a non-existent item. The other way around also seems to be simple as long as the client supports the encoding. Is the only solution really a script that does this on the fly? I'm aware of mod_deflate and mod_gzip and it is not what I'm looking for – at least not alone. In particular I need an actual GZIP file and not just a deflated stream. Now I was thinking of using mod_ext_filter, but couldn't bridge the gap between rewriting the name of the (non-existent) file.gz to file on one side and the LocationMatch on the other. Here's what I have:

        RewriteRule ^(.*?\.ext)\.gz$ $1 [L]

        ExtFilterDefine gzip mode=output cmd="/bin/gzip"

        <LocationMatch "/my-files/special-path/.*?\.ext\.gz">
            AddType application/octet-stream .ext.gz
            SetOutputFilter gzip
            Header set Content-Encoding gzip
        </LocationMatch>

    Note that the header for Content-Encoding isn't really needed by the clients in this case. They expect to see actual GZIP files, but I want to do this on-the-fly without caching (this is a test scenario).

    Read the article

  • Is there a difference between the actual chip intended for Dual Channel vs Triple Channel

    - by JimDel
    Is there a difference between the actual chips intended for dual channel vs. triple channel? I bought a set of triple-channel memory but I'm only using 2 of the 3 chips. I'm getting the error "Display driver NVIDIA Windows Kernel Mode Driver, Version 197.45 stopped responding and has successfully recovered". There seems to be a TON of discussion on the web about this, and some say it might be RAM related. This is the RAM I'm using. Thanks

    Read the article

  • How to log the actual output in apache?

    - by Oscar Rodriguez
    I am doing LAMP development for a mobile platform. However, the client browser does not allow me to view the source code of visited pages. I consider the source code to be of huge importance for debugging, so I would like to configure my web server so every time a user makes a request, in addition to sending the client a response, that response (the actual contents of the returned page) is also stored in a file with a filename I can cross-relate with access_log (maybe ip-timestamp-filename? or maybe a unique ID in an additional column in access_log?). I've searched quite a bit, but haven't even gotten close to finding what I'm looking for. Has anybody been able to do this?
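
    One possible approach, sketched here rather than taken from the question: Apache's stock mod_dumpio module can write full response bodies to the error log. That is not quite one file per request, but it does capture the actual page source sent to the client; correlating entries back to access_log would still need a custom log format or a unique request ID. The module path below is an example and varies by distribution.

        # Development servers only: mod_dumpio is extremely verbose.
        LoadModule dumpio_module modules/mod_dumpio.so
        DumpIOOutput On     # log response bodies to the error log
        DumpIOInput  On     # optionally log request bodies too
        LogLevel debug      # on Apache 2.2, dumpio only logs at debug level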

    Read the article

  • Blocking a country (mass IP ranges), best practice for the actual block

    - by kwiksand
    Hi all, this question has obviously been asked many times in many different forms, but I can't find an actual answer to the specific plan I've got. We run a popular European commercial deals site, and are getting a large amount of incoming registrations/traffic from countries that cannot even take part in the deals we offer (and many of the retailers aren't even known outside Western Europe). I've identified the ranges needed to block a lot of this traffic, but (as expected) there are thousands of IP ranges required. My question, finally: on a test server I created a script to block each range within iptables, but adding the rules took a long time, and iptables was unresponsive afterwards (especially when attempting an iptables -L). What is the most efficient way of blocking large numbers of IP ranges: iptables? A plugin where I can preload them efficiently? hosts.deny? .htaccess (nasty, as I'd be running it in Apache on every load-balanced web server)? Cheers
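
    One approach worth sketching, assuming a reasonably recent kernel with the ipset userspace tool available: load all the ranges into an ipset set of type hash:net, so iptables needs only a single rule no matter how many CIDR blocks are on the list, and lookups stay fast. The set name and input file below are examples, not from the question.

        # Sketch only: build one set from a file of CIDR ranges, then reference it once.
        ipset create blocked_countries hash:net
        while read -r cidr; do
            ipset add blocked_countries "$cidr"
        done < country-ranges.txt     # hypothetical file, one CIDR per line

        # A single iptables rule consults the whole set.
        iptables -I INPUT -m set --match-set blocked_countries src -j DROP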

    Read the article

  • file.createNewFile() creates files with last-modified time before actual creation time

    - by Kaleb Pederson
    I'm using JPoller to detect changes to files in a specific directory, but it's missing files because they end up with a timestamp earlier than their actual creation time. Here's how I test:

        public static void main(String [] files) {
            for (String file : files) {
                File f = new File(file);
                if (f.exists()) {
                    System.err.println(file + " exists");
                    continue;
                }
                try {
                    // find out the current time, I would hope to assume that the last-modified
                    // time on the file will definitely be later than this
                    System.out.println("-----------------------------------------");
                    long time = System.currentTimeMillis();

                    // create the file
                    System.out.println("Creating " + file + " at " + time);
                    f.createNewFile();

                    // let's see what the timestamp actually is (I've only seen it <time)
                    System.out.println(file + " was last modified at: " + f.lastModified());

                    // well, ok, what if I explicitly set it to time?
                    f.setLastModified(time);
                    System.out.println("Updated modified time on " + file + " to " + time
                            + " with actual " + f.lastModified());
                } catch (IOException e) {
                    System.err.println("Unable to create file");
                }
            }
        }

    And here's what I get for output:

        -----------------------------------------
        Creating test.7 at 1272324597956
        test.7 was last modified at: 1272324597000
        Updated modified time on test.7 to 1272324597956 with actual 1272324597000
        -----------------------------------------
        Creating test.8 at 1272324597957
        test.8 was last modified at: 1272324597000
        Updated modified time on test.8 to 1272324597957 with actual 1272324597000
        -----------------------------------------
        Creating test.9 at 1272324597957
        test.9 was last modified at: 1272324597000
        Updated modified time on test.9 to 1272324597957 with actual 1272324597000

    The result is a race condition:

        - JPoller records time of last check as xyz...123
        - File created at xyz...456
        - File last-modified timestamp actually reads xyz...000
        - JPoller looks for new/updated files with timestamp greater than xyz...123
        - JPoller ignores newly added file because xyz...000 is less than xyz...123
        - I pull my hair out for a while

    I tried digging into the code but both lastModified() and createNewFile() eventually resolve to native calls, so I'm left with little information. For test.9, I lose 957 milliseconds. What kind of accuracy can I expect? Are my results going to vary by operating system or file system? Suggested workarounds? NOTE: I'm currently running Linux with an XFS filesystem. I wrote a quick program in C and the stat system call shows st_mtime as truncate(xyz...000/1000).
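
    A small sketch of one way to live with this, assuming the underlying cause is simply that the filesystem stores mtimes at whole-second granularity: truncate the poller's own "last checked" timestamp to whole seconds before comparing, so a file stamped xyz...000 is not skipped by a check recorded at xyz...123. This is illustrative only and is not JPoller's actual API.

        import java.io.File;

        public class TruncatedPollCheck {
            public static void main(String[] args) {
                // Record the check time, then round it down to whole seconds before comparing,
                // matching the granularity the filesystem reports for lastModified().
                long lastCheckMillis = System.currentTimeMillis();
                long lastCheckTruncated = (lastCheckMillis / 1000L) * 1000L;

                File candidate = new File("test.9");   // example file name from the question
                boolean changedSinceCheck = candidate.lastModified() >= lastCheckTruncated;
                System.out.println("treat as new/updated: " + changedSinceCheck);
            }
        }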

    Read the article

  • Windows Displays Double the Actual Installed Physical Memory

    - by Andrew Barber
    I have a server I've installed Windows Web 2008 R2 on, which is reporting that I have double the physical memory installed as is actually the case. In msinfo32, "Installed Physical Memory" shows as 2x whatever the actual installed amount is, though "Total Physical Memory" shows the correct amount. The "System" info window shows installed memory as 2x (with the correct amount in parentheses listed as the "usable" amount). This server mistakenly had Windows Web 2008 (32-bit) installed on it just previously, and that OS also reported the same faulty information as Win2K8R2 is reporting. BIOS reports the correct amount, memtest was run on this server before installation, and a previous Windows 2000 instance installed on this system also reported the correct amount, as I recall. Server operation seems to be fine as well (it's only trying to use the correct amount of memory). The server is a generic pizzabox running on a SuperMicro X6DVL-EG with dual Xeon-3.2's. The memory installed is 4 matching mt18vddf12872g-335c3 sticks (1GB pc2700 DDR ECC REG cl2.5). This behavior occurs whether two or all four are installed. So, has anyone seen something like this before? Have any idea about what's causing it, and how concerned I should be about it? Everything else seems good so far, and I'll be upgrading the memory before putting the server into service, but I don't want to spend too much time/money/effort on the server if it's got something odd going wrong here. UPDATE: There was a question I ran into regarding memory sparing in the BIOS and a possible (buggy) effect thereof; however, flipping that bit back and forth in the BIOS revealed that isn't the issue. Still flummoxed a bit about this one, though I still have seen no negative impacts. Post-Answer Update (January 13, 2011): Upgrading the system with new, larger memory has fixed this issue.

    Read the article

  • Multiple UIWebViewNavigationTypeBackForward not only false but makes inferring the actual URL impossible

    - by SG1
    I have a UIWebView and a UITextField for the url. Naturally, I want the textField to always show the current document url. This works fine for urls directly input in the field, but I also have some buttons attached to the view for reload, back, and forward. So I've added all the UIWebViewDelegate methods to my controller, so it can listen to whenever the webView navigates and change the url in the textField as needed. Here's how I'm using the shouldStartLoadWithRequest: method:

        - (BOOL)webView:(UIWebView *)webView shouldStartLoadWithRequest:(NSURLRequest *)request
                                                          navigationType:(UIWebViewNavigationType)navigationType {
            NSLog(@"navigated via %d", navigationType);

            // loads the user cares about
            if (navigationType == UIWebViewNavigationTypeLinkClicked ||
                navigationType == UIWebViewNavigationTypeBackForward) {
                // URL setting
                [self setUrlQuietly:request.URL];
            }
            return YES;
        }

    Now, my problem here is that an actual click will generate a single navigation of type "LinkClicked" followed by a dozen of type "Other" (redirects and ad loads I assume), which gets handled correctly by the code, but a back/forward action will generate all its requests as back/forward requests. In other words, a click calls setUrlQuietly: once, but a back/forward calls it multiple times. I am trying to use this method to determine if the user actually initiated the action (and I'd like to catch page redirects too). But if the method has no way of distinguishing between an actual "back" and a "load initiated as a result of a back", how can I make this assessment? Without this, I am completely stumped as to how I can only show the actual url and not intermediate urls. Thank you!
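
    One alternative worth sketching (not taken from the question): rather than guessing intent in shouldStartLoadWithRequest:, update the text field once per completed load from the web view's own request, which reflects the main document that actually ended up on screen. setUrlQuietly: is the questioner's own method.

        // Sketch: update the field only when a page finishes loading, using the request
        // the web view itself reports rather than each intermediate request.
        - (void)webViewDidFinishLoad:(UIWebView *)webView {
            NSURL *current = webView.request.mainDocumentURL;
            if (current == nil) {
                current = webView.request.URL;
            }
            [self setUrlQuietly:current];
        }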

    Read the article

  • debsums and actual md5

    - by Radium
    I have discovered that debsums maybe does not work as I thought. I ran

        debsums -as

    and I did not see the sshd binary in that list. However, the md5 of the /usr/sbin/sshd file and the value given in /var/lib/dpkg/info/openssh-server.md5sums are different.

        cat /var/lib/dpkg/info/openssh-server.md5sums
        968ce0ccc85f3dc64375c689fa165359  usr/lib/openssh/sftp-server
        ba856dce069acadff587ca95e8e63551  usr/sbin/sshd
        a8f85459802674a416b903c8be7774d6  usr/share/doc/openssh-client/examples/sshd_config
        8c5592e0d522fa0f8f55f3c104479ef5  usr/share/lintian/overrides/openssh-server
        24e6a2d6f56d5fd52651db030a4124bb  usr/share/man/man5/sshd_config.5.gz
        65dbe6d2862940ad7cd945fadaabc2f8  usr/share/man/man8/sftp-server.8.gz
        63398534a80e75262e56ac821e2bb3f3  usr/share/man/man8/sshd.8.gz

        md5sum /usr/sbin/sshd
        72a54d63b9f9edbdc0cb0de4715683d0

    What is wrong?
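
    A couple of follow-up checks that may help narrow this down – a sketch, assuming a Debian/Ubuntu system with apt; the package name comes from the question. A mismatch on a single binary usually means the file on disk has changed since the package was unpacked, so listing exactly which files debsums flags, and reinstalling the package to restore pristine copies, are reasonable first steps.

        # Re-check just this package and list only files whose md5 differs.
        debsums -c openssh-server

        # If usr/sbin/sshd really is flagged, reinstalling restores the packaged binary.
        apt-get install --reinstall openssh-server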

    Read the article

  • MediaShield RAID 5 is showing up as 760GB when the actual size is 2.7TB

    - by Ilya Volodin
    I just finished setting up Windows Server 2003 on my new server and started setting up a RAID 5 for it. I have 4x1TB hard drives. The MediaShield RAID utility (at boot time) displays the RAID size as 2.7TB, and Linux also shows it as 2.7TB. However, in Windows, everything (including Windows Disk Management as well as the Windows-based MediaShield utility) reports only 760GB. I already tried converting the partition table from MBR to GUID (GPT), because I read somewhere that Windows can only handle up to 2TB with MBR tables, but that didn't help much. I tried searching for partitioning utilities I could use, but couldn't find anything free. When I formatted the disk as an NTFS partition from within Linux, it stopped showing up in Windows altogether; even the MediaShield Windows utility isn't showing it anymore. Windows is installed on a separate 500GB hard drive that's set up not to use RAID. Any ideas?

    Read the article

  • Azure cloud app subdomain pointing to actual domain

    - by Amit Aggarwal
    Say we have a domain xyz.com registered with some registrar ... we pointed that domain to the name server of our dedicated server, where the DNS for that domain will be hosted. Now, we just want that dedicated server to host the incoming email, while the domain points to abc.cloudapp.net (an Azure cloud app; they don't provide any static IP, only a public URL). Could someone please help me edit/create the DNS zone file on our dedicated server to make sure things work properly... if possible, paste here the minimum settings we need in the zone file so that mail stays on the dedicated server and the app runs on the cloud... Thanks, Amit
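
    A minimal zone-file sketch of that split, using example names and the reserved documentation address 203.0.113.10 in place of the dedicated server's real IP. The exact records (TTLs, SOA/NS lines, whether www or the bare domain points at Azure) depend on the setup and are not taken from the question.

        ; Sketch only: mail handled by the dedicated server, web traffic CNAMEd to Azure.
        xyz.com.        IN  MX     10 mail.xyz.com.
        mail.xyz.com.   IN  A      203.0.113.10        ; dedicated server (example IP)
        www.xyz.com.    IN  CNAME  abc.cloudapp.net.   ; Azure gives only a hostname, so use a CNAME
        xyz.com.        IN  A      203.0.113.10        ; the bare domain cannot be a CNAME;
                                                       ; redirect it to www from the dedicated server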

    Read the article

  • RAM displayed is less than the actual amount in my Windows 7

    - by Prateek Somani
    I am using Windows 7 and Ubuntu on the same machine. Earlier I had 3 GB of RAM, but now Windows is displaying just 1 GB of RAM. Please also find the output of the free command in my Ubuntu:

                     total       used       free     shared    buffers     cached
        Mem:       1008208     904808     103400       5736      13516     239596
        -/+ buffers/cache:      651696     356512
        Swap:      3127292      10252    3117040

    Has the swap memory consumed my 2 GB of RAM? Will I be able to use the whole 3 GB of RAM in my Windows? Regards, Prateek

    Update: I tried to run the lshw command and I got the following output:

        *-memory
             description: System Memory
             physical id: 1b
             slot: System board or motherboard
             size: 1GiB
           *-bank:0
                description: SODIMM DDR3 Synchronous 1067 MHz (0.9 ns)
                product: HMT112S6BFR6C-H9
                vendor: Hynix
                physical id: 0
                serial: 2C71D069
                slot: Bottom - Slot 1
                size: 1GiB
                width: 64 bits
                clock: 1067MHz (0.9ns)
           *-bank:1
                description: SODIMM DDR3 Synchronous 1067 MHz (0.9 ns) [empty]
                product: 16JSF25664HZ-1G4F1
                vendor: Micron
                physical id: 1
                serial: FD421821
                slot: Bottom - Slot 2
                width: 64 bits
                clock: 1067MHz (0.9ns)

    Why is it able to detect the vendor/product name of the bank-1 RAM, but not its size and other details? Has my RAM gone faulty?

    Read the article

  • How to make phone calls using a PC, modem, and headphones with no actual phone instrument

    - by user18151
    Hi, I have a landline phone connection and I DO NOT have a phone instrument. I connect the cable into my laptop and want to make calls using my laptop. I have an HDA CX20561 modem. I seem to be able to dial numbers using dialer.exe, though nothing seems to happen. From Microsoft KB http://support.microsoft.com/kb/958143, it looks like dialer.exe alone is not enough for the call. Can somebody tell me how to make and receive phone calls with whatever hardware I have, i.e. what software will I need? Thanks.

    Read the article

  • SSL/https setup for herokuapp.com address rather than my actual domain

    - by new2ruby
    I have a subdomain of my site pointed to a rails app at mysite.herokuapp.com. I bought a certificate from godaddy and seem to have that all set up correctly. So that when I go to: http://mysite.herokuapp.com or http://dev.mysite.com it's redirected to: https://mysite.herokuapp.com or https://dev.mysite.com The problem is that when I visit dev.mysite.com, I get the error: Safari can't verify the identity of the website. But when I go to mysite.herokuapp.com, I don't get the error. I wanted this to be set up the other way, so that dev.mysite.com did not cause the error. I'm not sure where I went wrong. I used dev.mysite.com when generating the key and when setting it up at godaddy.com. Any ideas where I should look? P.S. The old site is hosted at dreamhost and the DNS info is stored there as well. So I created a subdomain there of type cname which points to mysite.herokuapp.com.
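
    A quick diagnostic that may help pin down where the wrong certificate is coming from – a sketch using standard OpenSSL tooling, with the hostnames from the question. If the subject printed for dev.mysite.com is a *.herokuapp.com certificate rather than the purchased one, the TLS connection for the custom domain is being terminated by a shared endpoint instead of one configured with the GoDaddy certificate.

        # Show which certificate each hostname actually presents.
        openssl s_client -connect dev.mysite.com:443 -servername dev.mysite.com </dev/null 2>/dev/null \
          | openssl x509 -noout -subject -issuer

        openssl s_client -connect mysite.herokuapp.com:443 -servername mysite.herokuapp.com </dev/null 2>/dev/null \
          | openssl x509 -noout -subject -issuer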

    Read the article

  • Network speeds being reported as 4x higher than actual in Windows 7 SP1

    - by Synetech
    Ever since installing Windows 7 SP1, I have noticed that all programs that display my network transfer rate have been exactly 4x higher than they actually are. For example, when I download something from a high-bandwidth web site or through torrents with lots of sources, the download rate indicated is ~5MBps (~40Mbps) even though my Internet connection has a maximum of only 1.5MBps (12Mbps). It is the same situation with the upstream bandwidth: the connection maximum is 64KBps, but I'm seeing up to 256KBps. I have tried several different programs for monitoring bandwidth throughput and they all give the same results. I also tried different times and different days, and they always show the rate as being four times too high. My initial thought was that my ISP had increased the speeds (without my noticing), which they have done before. However, I checked my ISP's site and they have not increased the speeds. Moreover, when I look at the speeds in the program actually doing the transfer (e.g. Chrome, µTorrent, etc.), the numbers are in line with the expected values at the same time that bandwidth monitoring programs are showing the high numbers. The only significant change (and pretty much the only change at all) that has occurred to my system around the time the behavior changed was the installation of SP1 for Windows 7. As such, it is my belief that some sort of change exists in SP1 whereby software that accesses the bandwidth via a specific API receives (erroneously?) high numbers while others that have access to the raw data continue to receive the correct values. I booted into Windows XP and downloaded some things via HTTP and torrent, and in both cases the numbers were as expected (like they were in Windows 7 before installing SP1). I then booted back into 7SP1 and once again the numbers were four times higher than possible. Therefore it is definitely something in SP1 that has changed how local bandwidth is calculated/returned. There is definitely something wonky with Windows 7 SP1's network speed calculation. I tried Googling this, but (for multiple reasons) have had a difficult time finding anything relevant. Has anybody else noticed this behavior? Does anybody know of any bugs or changes in SP1 that could account for it?

    Read the article

  • Windows 8: 100% disk active time, no actual data transferred

    - by fingerbangpalateclick
    Occasionally, like several times an hour, my hard drive will appear to lock up: Task Manager will show 100% active time with read and write speeds of 0. I can still switch between open windows, but anything that requires disk access will stall for around a minute until the hard disk starts working properly again. It happens at apparently random intervals, and only happens in Windows 8 – not 7, nor Linux. It is probably not a problem with the disk itself: this is a relatively new hard drive, and S.M.A.R.T. is showing no errors. It only happens in Windows 8, not in any other OS that has used the same partition, or different partitions. So, what is going on? How can I fix this? Note: this is a different problem than the one described in "Extremely high disk activity without any real usage" – my Task Manager would look similar, but Average Response Time, Read Speed, and Write Speed would all be 0.

    Read the article

  • AS3: Synchronize Timer event to actual time?

    - by Nebs
    I plan to use a timer event to fire every second (for a clock application). I may be wrong, but I assume that there will probably be a (very slight) sync issue with the actual system time. For example, the timer event might fire when the actual system time milliseconds are at 500 instead of 0 (meaning the seconds will be partially 'out of phase', if you will). Is there a way to either synchronize the timer event to the real time, or get some kind of system time event to fire when a second ticks in AS3? Also, if I set a Timer to fire every 1000 milliseconds, is that guaranteed, or can there be some offset based on the application load? These are probably negligible issues but I'm just curious. Thanks.
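
    A sketch of one common workaround, not taken from the question: instead of a fixed 1000 ms repeating Timer, re-arm a one-shot Timer each tick with the milliseconds remaining until the next wall-clock second, so drift and load-induced delay are corrected every tick. Timer delays in Flash are not hard guarantees, so the display code should still read new Date() rather than count ticks.

        // Sketch: align ticks to the wall clock by re-arming a one-shot timer each second.
        import flash.utils.Timer;
        import flash.events.TimerEvent;

        function armNextTick():void {
            var msIntoSecond:int = new Date().milliseconds;
            var t:Timer = new Timer(1000 - msIntoSecond, 1);   // one shot, fires near :000
            t.addEventListener(TimerEvent.TIMER, function(e:TimerEvent):void {
                trace(new Date());   // update the clock display from the real time here
                armNextTick();       // schedule the next aligned tick
            });
            t.start();
        }

        armNextTick();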

    Read the article

  • Java - PowerMock whenNew doesn't seem to work, calls the actual constructor

    - by user1331243
    I have two final classes that are used in my unit test. I am trying to use whenNew on the constructor of a final class, but I see that it calls the actual constructor. The code is:

        @PrepareForTest({A.class, B.class, Provider.class})
        @Test
        public void testGetStatus() throws Exception {
            B b = mock(B.class);
            when(b.getStatus()).thenReturn(1);
            whenNew(B.class).withArguments(anyString()).thenReturn(b);

            Provider p = new Provider();
            int val = p.getStatus();
            assertTrue((val == 1));
        }

        public class Provider {
            public int getStatus() {
                B b = new B("test");
                return b.getStatus();
            }
        }

        public final class A {
            private void init() {
                // ...do something
            }

            private static A a;

            private A() {
            }

            public static A getInstance() {
                if (a == null) {
                    a = new A();
                    a.init();
                }
                return a;
            }
        }

        public final class B {
            public B() {
            }

            public B(String s) {
                this(A.getInstance(), s);
            }

            public B(A a, String s) {
            }

            public int getStatus() {
                return 0;
            }
        }

    On debug, I find that it's the actual class B instance that is created, not the mock instance returned for the new usage, and the assertion fails. Any pointers on how to get this working? Thanks
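
    A sketch of the usual missing piece – an assumption, since the surrounding test class is not shown in the question: whenNew only takes effect when the test runs under PowerMock's runner (or rule) and the class that actually executes new B(...), Provider here, is prepared for test. The imports below follow the standard PowerMock/Mockito APIs.

        import static org.junit.Assert.assertEquals;

        import org.junit.Test;
        import org.junit.runner.RunWith;
        import org.mockito.Mockito;
        import org.powermock.api.mockito.PowerMockito;
        import org.powermock.core.classloader.annotations.PrepareForTest;
        import org.powermock.modules.junit4.PowerMockRunner;

        @RunWith(PowerMockRunner.class)            // without this, whenNew is silently ignored
        @PrepareForTest(Provider.class)            // prepare the caller of the constructor
        public class ProviderTest {

            @Test
            public void testGetStatus() throws Exception {
                B b = PowerMockito.mock(B.class);
                Mockito.when(b.getStatus()).thenReturn(1);
                PowerMockito.whenNew(B.class).withArguments(Mockito.anyString()).thenReturn(b);

                assertEquals(1, new Provider().getStatus());
            }
        }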

    Read the article

  • jQuery Actual image dimension

    - by Mithun
    I have a 1024x768 image:

        <span class="frame">
            <img alt="Image" title="Image"
                 src="http://localhost/zapav/site/assets/questions/D41120CA-7164-11DF-A79E-F4CE462E9D80_Green_Sea_Turtle.jpg">
        </span>

    and the CSS below sets the image width:

        .frame img { width: 425px; }

    The jQuery code

        $('.uploaded_image img').attr("width");

    returns 425. How can I retrieve the actual width of the image (1024) in JavaScript?
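
    One portable sketch – the selector comes from the markup above, and naturalWidth/naturalHeight would also work in browsers that expose them: load a detached copy of the same image and read its intrinsic size once it has loaded, since the rendered <img> reports the CSS-constrained 425 pixels.

        // A detached Image is not affected by the page's CSS,
        // so its width/height are the file's own dimensions.
        var probe = new Image();
        probe.onload = function () {
            alert(this.width + ' x ' + this.height);   // 1024 x 768 for this image
        };
        probe.src = $('.frame img').attr('src');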

    Read the article
