Search Results

Search found 1889 results on 76 pages for 'paul'.


  • I changed the repository and now my ubuntu software center crashes

    - by Paul Menz
    paul@ubuntu:~$ software-center
    2012-10-24 18:11:04,665 - softwarecenter.ui.gtk3.app - INFO - setting up proxy 'None'
    2012-10-24 18:11:04,671 - softwarecenter.db.database - INFO - open() database: path=None use_axi=True use_agent=True
    2012-10-24 18:11:05,191 - softwarecenter.backend.reviews - WARNING - Could not get usefulness from server, no username in config file
    2012-10-24 18:11:05,403 - softwarecenter.ui.gtk3.app - INFO - show_available_packages: search_text is '', app is None.
    2012-10-24 18:11:05,920 - softwarecenter.db.pkginfo_impl.aptcache - INFO - aptcache.open()
    Traceback (most recent call last):
      File "/usr/share/software-center/softwarecenter/db/pkginfo_impl/aptcache.py", line 243, in open
        self._cache = apt.Cache(GtkMainIterationProgress())
      File "/usr/lib/python2.7/dist-packages/apt/cache.py", line 102, in __init__
        self.open(progress)
      File "/usr/lib/python2.7/dist-packages/apt/cache.py", line 149, in open
        self._list.read_main_list()
    SystemError: E:Malformed line 63 in source list /etc/apt/sources.list (dist parse)
    2012-10-24 18:11:07,255 - softwarecenter.db.enquire - ERROR - _get_estimate_nr_apps_and_nr_pkgs failed
    Traceback (most recent call last):
      File "/usr/share/software-center/softwarecenter/db/enquire.py", line 115, in _get_estimate_nr_apps_and_nr_pkgs
        tmp_matches = enquire.get_mset(0, len(self.db), None, xfilter)
      File "/usr/share/software-center/softwarecenter/db/appfilter.py", line 89, in __call__
        if (not pkgname in self.cache and
      File "/usr/share/software-center/softwarecenter/db/pkginfo_impl/aptcache.py", line 263, in __contains__
        return self._cache.__contains__(k)
    AttributeError: 'NoneType' object has no attribute '__contains__'
    Traceback (most recent call last):
      File "/usr/bin/software-center", line 176, in <module>
        app.run(args)
      File "/usr/share/software-center/softwarecenter/ui/gtk3/app.py", line 1422, in run
        self.show_available_packages(args)
      File "/usr/share/software-center/softwarecenter/ui/gtk3/app.py", line 1352, in show_available_packages
        self.view_manager.set_active_view(ViewPages.AVAILABLE)
      File "/usr/share/software-center/softwarecenter/ui/gtk3/session/viewmanager.py", line 154, in set_active_view
        view_widget.init_view()
      File "/usr/share/software-center/softwarecenter/ui/gtk3/panes/availablepane.py", line 171, in init_view
        self.apps_filter)
      File "/usr/share/software-center/softwarecenter/ui/gtk3/views/catview_gtk.py", line 238, in __init__
        self.build(desktopdir)
      File "/usr/share/software-center/softwarecenter/ui/gtk3/views/catview_gtk.py", line 511, in build
        self._build_homepage_view()
      File "/usr/share/software-center/softwarecenter/ui/gtk3/views/catview_gtk.py", line 271, in _build_homepage_view
        self._append_whats_new()
      File "/usr/share/software-center/softwarecenter/ui/gtk3/views/catview_gtk.py", line 450, in _append_whats_new
        whats_new_cat = self._update_whats_new_content()
      File "/usr/share/software-center/softwarecenter/ui/gtk3/views/catview_gtk.py", line 439, in _update_whats_new_content
        docs = whats_new_cat.get_documents(self.db)
      File "/usr/share/software-center/softwarecenter/db/categories.py", line 124, in get_documents
        nonblocking_load=False)
      File "/usr/share/software-center/softwarecenter/db/enquire.py", line 317, in set_query
        self._blocking_perform_search()
      File "/usr/share/software-center/softwarecenter/db/enquire.py", line 212, in _blocking_perform_search
        matches = enquire.get_mset(0, self.limit, None, xfilter)
      File "/usr/share/software-center/softwarecenter/db/appfilter.py", line 89, in __call__
        if (not pkgname in self.cache and
      File "/usr/share/software-center/softwarecenter/db/pkginfo_impl/aptcache.py", line 263, in __contains__
        return self._cache.__contains__(k)
    AttributeError: 'NoneType' object has no attribute '__contains__'

    Read the article

  • HTML5Rocks Live, Episode 1

    HTML5Rocks Live, Episode 1 In this episode of HTML5Rocks Live, Boris, Eric and Paul join us to show some great new libraries and performance tips. Please leave your comments on our plus page at goo.gl In the first chapter, Paul shows how to use some of Chrome's new developer tools to understand how things are rendering and get improved performance. In the second chapter (21:25), Boris shows off his new device.js library to help make development of mobile web applications and sites easier. Eric closes the hangout (40:00) and talks about his new file system API polyfill that uses IndexedDB as its back end. 02:15 - Scroll Effects Demo goo.gl 23:04 - Media Queries Site goo.gl 24:15 - WURFL goo.gl 26:40 - Boris' Device Library goo.gl 29:28 - Device.js Demo goo.gl 33:25 - Bug to add touch-enabled media query to Chrome, please star goo.gl 35:00 - Chrome's DevTools for Mobile Development 38:56 - Paul Irish's Touch Demos goo.gl 40:43 - File System API Book goo.gl 43:10 - Eric's idb.filesystem.js goo.gl 44:27 - idb.filesystem HTML5 File System Demo goo.gl 47:33 - HTML5 Filesystem Playground goo.gl From: GoogleDevelopers Views: 12239 221 ratings Time: 52:29 More in Science & Technology

    Read the article

  • SkyDrive and Consumer Cloud Services

    - by Tim Murphy
    Paul Thurrott recently posted an article on the future of SkyDrive, and I was asked what I thought about its future by @UserCommunity.  So let’s take a look. I believe the breakdown from Microsoft that Paul described is an accurate representation of users and usages. While I can’t say that I leverage SkyDrive to the extent that it was meant to be used, I do enjoy having OneNote hosted there and being able to consult and edit it from the desktop, web and Windows Phone. Taking that one step further, the Midwest Geeks group, which started as the community of Microsoft-related user groups in our region, uses SkyDrive groups and shares calendars and documents.  This collaboration aspect isn’t new in itself, but having it connected with the rest of your cloud assets makes life easier. Another recent usage of this type of cloud service is storing your personal music files in order to get that same universal access.  This is a scenario that has some arguments for and against.  On the one hand, own once and listen anywhere is great, but on the other hand the bandwidth cost becomes a giant downside.  This is especially the case since most carriers are now doing away with unlimited data packages. Ultimately I see this type of resource growing and evolving at a phenomenal rate over the next few years as we continue to become more mobile.  Having multiple players such as SkyDrive and iCloud will only help to give us more options.  Only time will tell where we end up next. del.icio.us Tags: SkyDrive,Cloud Services,Paul Thurrott,UserCommunity

    Read the article

  • Samba users are writing files with the same owner

    - by Alex
    I created a Samba share and 3 users (Marc, Mary and Paul), in both Ubuntu (12.04 LTS) and Samba. Then I configured 3 Win7 computers to access the share, each with different credentials. I created 3 folders, one for every user, and chown'd them to the related user, chmod'd them to 0700 and even restarted Samba. Every time Mary or Paul creates a file or a directory in the share, it ends up being owned by Marc. They all can access Marc's folder, but none can open Mary's or Paul's. Can you help me with this problem? What am I missing?

    Read the article

  • Why do we need Hash by key? [migrated]

    - by Royi Namir
    (I'm just trying to find out what I am missing...) Assume John has a clear-text message. He can create a regular hash (like MD5 or SHA-256) and then encrypt the message. John can now send Paul the message plus its (clear-text) hash, and Paul can tell whether the message was altered (decrypt and then compare hashes). Even if an attacker can change the encrypted data (without decrypting it), when Paul opens the message and recalculates the hash, it won't generate the same hash as the one John sent him. So why do we need a hash by key?
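
    The short answer people usually give is that a plain hash only protects against accidental corruption: anyone who can tamper with what John sends can also recompute a matching hash, whereas a keyed hash (an HMAC) can only be produced or verified by someone holding the key. A minimal sketch using Python's standard hashlib and hmac modules (the message and key below are made up for illustration):

    import hashlib
    import hmac

    message = b"pay Paul 100"      # illustrative plaintext
    key = b"shared-secret"         # known to John and Paul, not to the attacker

    # Plain hash: an attacker who alters the message can simply recompute this.
    plain_digest = hashlib.sha256(message).hexdigest()

    # Keyed hash (HMAC): producing a valid tag requires the key.
    tag = hmac.new(key, message, hashlib.sha256).hexdigest()

    # Paul's verification on receipt, using a constant-time comparison.
    expected = hmac.new(key, message, hashlib.sha256).hexdigest()
    print(hmac.compare_digest(tag, expected))   # True only if message and key match

    Whether the plain-hash scheme in the question is safe also depends on details the question leaves open (for example, whether the hash itself travels inside the encrypted payload and which cipher mode is used), which is exactly the kind of subtlety an HMAC sidesteps.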

    Read the article

  • ArchBeat Link-o-Rama for 2012-10-10

    - by Bob Rhubart
    Oracle's Analytics, Engineered Systems, and Big Data Strategy | Mark Rittman Part 1 of 3 in Oracle ACE Director Mark Rittman's series on Oracle Exalytics, Oracle R Enterprise and Endeca. Series: How to Kill the Architecture Department? Part 1 | Xebia Blog Don't let the title fool you. This is not an anti-architecture post. Rather, this post, part 1 of a now four-part series, offers suggestions for preserving architecture in a form that better supports agile organizations. BPM Suite configure BAM Adapter | Peter Paul van der Beek "To have the BPM server push events to BAM – Business Activity Monitoring – we have to configure the BPM suite to use the BAM Adapter," says Peter Paul van de Beek. "The BAM Adapter is configured (like other SOA Suite and BPM Adapters) in the WebLogic Server Console." Peter Paul shows you how in this brief post. A case for not installing your own software | James Gentsch "I look selfishly forward to cloud computing and engineered systems dramatically reducing the occurrence of problems triggered by unforeseen environmental situations in the software I am responsible for," says James Gentsch. "I think this is an evolutionary game changer that will be a huge benefit to the reliability and consistent performance of the software for my customers, and may make 'well, it works here' a well forgotten phrase for future software developers." Thought for the Day "I'm a strong believer in being minimalistic. Unless you actually are going to solve the general problem, don't try and put in place a framework for solving a specific one, because you don't know what that framework should look like." — Anders Hejlsberg Source: SoftwareQuotes.com

    Read the article

  • Security in Robots and Automated Systems

    - by Roger Brinkley
    Alex Dropplinger posted a Freescale blog on Securing Robotics and Automated Systems where she asks the question, “How should we secure robotics and automated systems?”. My first thought on this was duh, make sure your robot is running Java. Java's built-in services for authentication, authorization, encryption/confidentiality, and the like can be leveraged and benefit robotic or autonomous implementations. Leveraging these built-in services and pluggable encryption models of Java makes adding security to an existing bot implementation much easier. But then I thought I should ask an expert on robotics, so I fired the question off to Paul Perrone of Perrone Robotics. Paul has built automated vehicles and other forms of embedded devices, like automated monitoring of commercial vehicles on highways. He says that most of the work that robots do now is autonomous, so it isn't a problem in the short term. But long-term projects like collision avoidance technology in automobiles are going to require it. Some of the work he's doing with his Java-based MAX, a set of software building blocks containing a wide range of low-level and higher-level software modules that developers can use to build simple to complex robot and automation applications faster and cheaper, already provides some support for JAUS compliance and, because it's based on Java, access to standards-based security APIs. But, as Paul explained to me, "the bottom line is…it depends on the criticality level of the bot, its network connectivity, and whether or not standards compliance is required."

    Read the article

  • TechEd 2012: MVVM In XAML

    - by Tim Murphy
    Paul Sheriff was a real character at the start of his MVVM in XAML session.  There was a lot of sarcasm and self-deprecation going on prior to the session.  That is never a bad way to get things rolling right after lunch.  Then things got semi-serious. The presentation itself had a number of surprises, but not all of them had to do with XAML.  When he flipped over to his company’s code generation tool, it took me off guard.  I am used to generators that create code for a whole project, but his tools were able to create different types of constructs on demand.  It also made it easier to follow what he was doing than some of the other demos I have seen this week where people were using code snippets. Getting to the heart of the topic, I found myself thinking that I may have found my utopia for application development in MVVM.  Yes, I know there is no such thing, but this comes closer than any other pattern I have learned about.  This pattern allows the application to have better separation of concerns than I have seen before.  This is especially true since you can leverage data binding.  I’m not sure why it has taken me so long to find time for this subject. As Paul demonstrated, using this pattern with XAML gives you multi-platform reusable code when you leverage common utility classes and ViewModel classes.  The one drawback I see is that you have to go to the lowest common denominator between the platforms you want to support, but you always have to weigh the trade-offs. And finally, the Visual Studio nuggets just keep coming.  Even though it has been available for several generations of Visual Studio, I have never seen someone use linked files within a solution.  It just goes to show that I should spend more time exploring the deeper features of each dialog. del.icio.us Tags: TechEd,TechEd 2012,MVVM,Paul Sheriff,Patterns,Visual Studio 2012

    Read the article

  • Securing smtp with login

    - by Paul Peelen
    I have an ISPConfig server, and it seems that someone is using it to send spam. I got about 130 "Mail Delivery System" emails about rejected outgoing messages. The spammer uses my email address as the sender address, so all of these bounces end up in my mailbox. I am using Postfix and Courier. I installed my server according to this guide: http://www.howtoforge.com/perfect-server-debian-lenny-ispconfig3-p3 I did this a few months ago. My question: Can I secure my server to require login to be able to send email, and if so... how? Thanks! EDIT Some data from mail.log; these kinds of errors show up constantly:
    Jun 15 17:58:16 bolt postfix/qmgr[10712]: CC7DA1242AE: from=<paul@*****.se>, size=3782, nrcpt=1 (queue active)
    Jun 15 17:58:16 bolt postfix/smtp[11337]: CC7DA1242AE: to=<[email protected]>, relay=none, delay=4641, delays=4640/0.01/0.32/0, dsn=4.4.3, status=deferred (Host or domain name not found. Name service error for name=cmlisboa.pt type=MX: Host not found, try again)
    Jun 15 17:58:19 bolt postfix/smtpd[10836]: connect from static-200-105-220-154.acelerate.net[200.105.220.154]
    Jun 15 17:58:20 bolt postfix/smtpd[10836]: NOQUEUE: reject: RCPT from static-200-105-220-154.acelerate.net[200.105.220.154]: 550 5.1.1 <advertising@*****.com>: Recipient address rejected: User unknown in virtual mailbox table; from=<[email protected]> to=<advertising@*****.com> proto=ESMTP helo=<static-200-105-220-154.acelerate.net>
    Jun 15 17:58:20 bolt postfix/smtpd[10836]: lost connection after DATA (0 bytes) from static-200-105-220-154.acelerate.net[200.105.220.154]
    Jun 15 17:58:20 bolt postfix/smtpd[10836]: disconnect from static-200-105-220-154.acelerate.net[200.105.220.154]
    Jun 15 17:58:29 bolt postfix/smtpd[10834]: connect from unknown[62.176.172.226]
    Jun 15 17:58:32 bolt postfix/smtpd[10834]: 386791241F9: client=unknown[62.176.172.226]
    Jun 15 17:58:34 bolt postfix/cleanup[10975]: 386791241F9: message-id=<[email protected]>
    Jun 15 17:58:34 bolt postfix/qmgr[10712]: 386791241F9: from=<[email protected]>, size=867, nrcpt=1 (queue active)
    Jun 15 17:58:35 bolt postfix/smtpd[10834]: disconnect from unknown[62.176.172.226]
    Jun 15 17:58:35 bolt amavis[11084]: (11084-17) Blocked SPAM, [62.176.172.226] [62.176.172.226] <[email protected]> -> <*****@*****>, Message-ID: <[email protected]>, mail_id: XczovKoMBYNr, Hits: 18.471, size: 867, 833 ms
    Jun 15 17:58:35 bolt postfix/smtp[10732]: 386791241F9: to=<*****@*****>, relay=127.0.0.1[127.0.0.1]:10024, delay=3.5, delays=2.7/0/0/0.83, dsn=2.7.0, status=sent (250 2.7.0 Ok, discarded, id=11084-17 - SPAM)
    Jun 15 17:58:35 bolt postfix/qmgr[10712]: 386791241F9: removed
    Jun 15 17:58:43 bolt postfix/smtpd[10836]: warning: 178.121.154.194: address not listed for hostname mm-194-154-121-178.dynamic.pppoe.mgts.by
    Jun 15 17:58:43 bolt postfix/smtpd[10836]: connect from unknown[178.121.154.194]
    Jun 15 17:58:45 bolt postfix/smtpd[10727]: connect from unknown[180.134.223.86]
    EDIT #2 Got some more info from the logs, this is a send request:
    mail.info.1:Jun 15 16:41:57 bolt amavis[5399]: (05399-06) Passed CLEAN, [110.139.48.64] [110.139.48.64] <paul@*****.se> -> <[email protected]>, Message-ID: <CHILKAT-MID-7c54ebcf-5501-de9b-f0b1-4f0234290d8d@HP-IRISH>, mail_id: 35l56Ramx6Nc, Hits: -2.941, size: 3329, queued_as: 2485770086, 136 ms
    mail.info.1:Jun 15 16:41:57 bolt postfix/smtp[4743]: 375C570082: to=<[email protected]>, relay=127.0.0.1[127.0.0.1]:10024, delay=4.8, delays=4.7/0/0/0.14, dsn=2.0.0, status=sent (250 2.0.0 Ok, id=05399-06, from MTA([127.0.0.1]:10025): 250 2.0.0 Ok: queued as 2485770086)
    Which apparently got through. Any ideas how to restrict this?
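
    Before changing anything it is worth pinning down how the spam is getting out: the second log excerpt shows mail accepted from 110.139.48.64 with your address as the sender, which could mean an open relay, an abused SMTP AUTH account, or a local web script. A quick, hypothetical external probe (placeholder host and addresses below) tells you whether plain unauthenticated relaying is the problem; a correctly locked-down Postfix should reject the RCPT with something like "554 Relay access denied":

    import smtplib

    MAIL_HOST = "mail.example.com"                    # placeholder: your server's public name or IP
    ENVELOPE_FROM = "probe@external-domain.example"   # neither address is local to the server
    ENVELOPE_TO = "someone@another-external-domain.example"

    # Run this from OUTSIDE your own network and do not authenticate.
    server = smtplib.SMTP(MAIL_HOST, 25, timeout=30)
    server.ehlo()
    print("MAIL FROM:", server.mail(ENVELOPE_FROM))
    code, reply = server.rcpt(ENVELOPE_TO)
    print("RCPT TO:", code, reply)
    if code == 250:
        print("Relaying accepted without authentication: the server is an open relay.")
    else:
        print("Relaying refused: look at stolen SMTP AUTH credentials or local scripts instead.")
    server.quit()

    If the probe does relay, the usual direction in a Postfix setup like the one from that guide is to require SASL authentication (ideally over TLS) before accepting mail for remote domains; if it refuses, the abuse is more likely a compromised account or script, and the fix is different.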

    Read the article

  • Cardinality Estimation Bug with Lookups in SQL Server 2008 onward

    - by Paul White
    Cost-based optimization stands or falls on the quality of cardinality estimates (expected row counts).  If the optimizer has incorrect information to start with, it is quite unlikely to produce good quality execution plans except by chance.  There are many ways we can provide good starting information to the optimizer, and even more ways for cardinality estimation to go wrong.  Good database people know this, and work hard to write optimizer-friendly queries with a schema and metadata (e.g. statistics) that reduce the chances of poor cardinality estimation producing a sub-optimal plan.  Today, I am going to look at a case where poor cardinality estimation is Microsoft’s fault, and not yours. SQL Server 2005 SELECT th.ProductID, th.TransactionID, th.TransactionDate FROM Production.TransactionHistory AS th WHERE th.ProductID = 1 AND th.TransactionDate BETWEEN '20030901' AND '20031231'; The query plan on SQL Server 2005 is as follows (if you are using a more recent version of AdventureWorks, you will need to change the year on the date range from 2003 to 2007): There is an Index Seek on ProductID = 1, followed by a Key Lookup to find the Transaction Date for each row, and finally a Filter to restrict the results to only those rows where Transaction Date falls in the range specified.  The cardinality estimate of 45 rows at the Index Seek is exactly correct.  The table is not very large, there are up-to-date statistics associated with the index, so this is as expected. The estimate for the Key Lookup is also exactly right.  Each lookup into the Clustered Index to find the Transaction Date is guaranteed to return exactly one row.  The plan shows that the Key Lookup is expected to be executed 45 times.  The estimate for the Inner Join output is also correct – 45 rows from the seek joining to one row each time, gives 45 rows as output. The Filter estimate is also very good: the optimizer estimates 16.9951 rows will match the specified range of transaction dates.  Eleven rows are produced by this query, but that small difference is quite normal and certainly nothing to worry about here.  All good so far. SQL Server 2008 onward The same query executed against an identical copy of AdventureWorks on SQL Server 2008 produces a different execution plan: The optimizer has pushed the Filter conditions seen in the 2005 plan down to the Key Lookup.  This is a good optimization – it makes sense to filter rows out as early as possible.  Unfortunately, it has made a bit of a mess of the cardinality estimates. The post-Filter estimate of 16.9951 rows seen in the 2005 plan has moved with the predicate on Transaction Date.  Instead of estimating one row, the plan now suggests that 16.9951 rows will be produced by each clustered index lookup – clearly not right!  This misinformation also confuses SQL Sentry Plan Explorer: Plan Explorer shows 765 rows expected from the Key Lookup (it multiplies a rounded estimate of 17 rows by 45 expected executions to give 765 rows total). 
Workarounds One workaround is to provide a covering non-clustered index (avoiding the lookup avoids the problem of course): CREATE INDEX nc1 ON Production.TransactionHistory (ProductID) INCLUDE (TransactionDate); With the Transaction Date filter applied as a residual predicate in the same operator as the seek, the estimate is again as expected: We could also force the use of the ultimate covering index (the clustered one): SELECT th.ProductID, th.TransactionID, th.TransactionDate FROM Production.TransactionHistory AS th WITH (INDEX(1)) WHERE th.ProductID = 1 AND th.TransactionDate BETWEEN '20030901' AND '20031231'; Summary Providing a covering non-clustered index for all possible queries is not always practical, and scanning the clustered index will rarely be optimal.  Nevertheless, these are the best workarounds we have today. In the meantime, watch out for poor cardinality estimates when a predicate is applied as part of a lookup. The worst thing is that the estimate after the lookup join in the 2008+ plans is wrong.  It’s not hopelessly wrong in this particular case (45 versus 16.9951 is not the end of the world) but it easily can be much worse, and there’s not much you can do about it.  Any decisions made by the optimizer after such a lookup could be based on very wrong information – which can only be bad news. If you think this situation should be improved, please vote for this Connect item. © 2012 Paul White – All Rights Reserved twitter: @SQL_Kiwi email: [email protected]

    Read the article

  • BIOS flash XP, 1 long beep, 2 short beeps, over&over

    - by Paul
    BIOS issue on HP dv9233cl laptop: wiped the drive of Vista, loaded XP, but not all the drivers loaded. Went to the HP website, downloaded all drivers for this laptop. Started loading them. Loaded WIN Flash HP Network System BIOS Window SP42187. After a minute a low-resolution screen appeared stating "It is now safe to turn off the computer". I waited a minute and a half, then turned it off. Let it sit 10 seconds, tried to start it, and now there is no screen image at all, just a nasty loud long beep, 2 short beeps, 2 seconds of silence, and it happens over & over again. I have unplugged it and removed the battery, still the same problem. Any sugg.... Thx.. Paul

    Read the article

  • LTO 3 tape drive needing repaired

    - by DO it all Paul
    We have an IBM LTO 3 tape drive that needs to be repaired, and with the £400 price tag I'm having to shop around for quotes. My question is: has anyone actually repaired one before, and how was it done? The first error LED was showing a 6; then I cleared the mangled tape, only for it to start flashing an alternating 'o' on the 7-segment display, similar to a half 8, flashing top to bottom, and it would just flash away like that coupled with a flashing amber light. I tried a reset, holding the eject button for it to show an 'r', then it went back to flashing again as before. I checked the IBM solutions for the codes but this flashing isn't documented at all. Would be great if anyone had any experience in this area. Thank you, Paul

    Read the article

  • HTTP downloads slow - FTP of same file very fast - Windows 2003

    - by Paul Hinett
    I am having some issues with download speeds on my site via HTTP; I am averaging around 70kbps downloading a file that is around 70mb. But if I connect to my server via FTP and download the same file on the same computer / connection I am averaging about 300+kbps. I know my server has a lot of connections at any one time, probably around 400 connections. My server has a 1gbps connection to the internet so there is plenty of bandwidth available, as proven with the FTP. I have no throttling of any kind enabled in IIS. If interested there is a test file here you can download to check the speed: http://filesd.house-mixes.com/test.zip I am based in the UK and the server is in Washington, USA if that makes any difference. Paul
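
    To put firmer numbers on the HTTP side of the comparison, a short timing script can measure sustained throughput from a given client; the URL is the test file mentioned above, the chunk size is arbitrary, and the FTP side is left out here because it needs credentials:

    import time
    import urllib.request

    URL = "http://filesd.house-mixes.com/test.zip"   # test file linked in the question
    CHUNK = 64 * 1024                                # arbitrary read size

    start = time.time()
    received = 0
    with urllib.request.urlopen(URL) as response:
        while True:
            block = response.read(CHUNK)
            if not block:
                break
            received += len(block)
    elapsed = time.time() - start

    print("received %.1f MB in %.1f s -> %.0f KB/s"
          % (received / 1e6, elapsed, received / 1024 / elapsed))

    Running the same measurement from clients in different locations helps separate a per-connection server limit from plain long-haul latency effects between the UK and Washington.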

    Read the article

  • Setting up routing for MS DirectAccess to a VMWare EsXi Host

    - by Paul D'Ambra
    I'm trying to set up DirectAccess on a virtual machine so I can demonstrate its value and then, if need be, add a physical machine to host it. I'm hitting a problem because the DirectAccess machine (DA01) needs to have 2 public addresses actually configured on the external adapter, but there is a Zyxel Zywall USG300 between the VMware ESXi host and the outside world. I've summarised my setup in this diagram. If I ping from the LAN to 212.x.y.89 I get a response, but if I ping from the VM I get destination host unreachable. I used "route add 212.x.y.89 192.c.d.1" and get request timed out. At that point I see outbound traffic allowed on the Zyxel firewall but nothing coming back. I'm past the limits of my understanding of routing and VMware, so I'm not sure how to tie down where my problem lies (or even if this setup is possible). So any help massively appreciated. Paul

    Read the article

  • KVM Guest not reachable from host

    - by Paul
    Hello, I'm running Ubuntu server 9.10 and have installed KVM etc. I created the bridge network following the instructions at help.ubuntu.com/community/KVM/Networking and created a Windows 2008 guest using the virt-install command line (the virt-manager GUI from a remote Ubuntu desktop would not let me select the ISO location). I can however use a remote virt-manager to connect to the guest and complete the Windows install. Within Windows 2008 I changed the IP address, but I cannot ping it from the outside world. The bridge network appears fine - I'm not sure what else to look at! Here is the interfaces file:
    # The loopback network interface
    auto lo
    iface lo inet loopback
    # The primary network interface
    auto eth0
    iface eth0 inet manual
    # auto br0
    iface br0 inet static
    address 60.234.64.50
    netmask 255.255.255.248
    network 60.234.0.0
    broadcast 60.234.0.255
    gateway 60.234.64.49
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0
    bridge_maxwait 0
    auto eth1
    iface eth1 inet static
    address 192.168.12.2
    netmask 255.255.255.0
    broadcast 192.168.12.255
    The IP of the Windows server is 60.234.64.52. What else should I check? Regards Paul.

    Read the article

  • Silverlight Version 4 latest build for Win7 64bit and WinXP 32bit

    - by Paul
    I have a requirement where a few people need the latest version of Silverlight 4 installed. I know the latest version is 5.xx, but apparently with some new software we're having installed we have to use version 4. After a bit of googling I can see that the latest version is... Build 4.1.10329.0, released May 8, 2012. We have a mix of Win7 64-bit machines and WinXP 32-bit machines. Q: Is there a different version for each OS, or does the same one fit all? (This seems strangely hard to decipher by googling.) Q: Does anyone know where I can download the latest version 4? Microsoft do not seem to offer it anymore, unless I'm just not finding it. Q: Is there a separate browser version of it, or will installing it also handle any browser needs (our new software will be browser based)? Any pointers much appreciated. Paul

    Read the article

  • Connecting my iPhone to iTunes causes my Acer laptop to crash

    - by Paul Sheldrake
    Hello, I have an Acer TravelMate 8200 laptop and whenever I connect my iPhone to it, it crashes with a BSOD (Blue Screen Of Death). I have figured out that if I delete all the pictures on my phone I can get it to connect, but that is not an ideal long-term solution. I also read that it may be a conflict with the built-in webcam I have, but I've upgraded the driver and I still get the crashing problem. Any suggestions would be appreciated! Thanks Paul! edit: Here is the BSOD message I get

    Read the article

  • No HDMI audio - Windows 8 - ASUS H81M-PLUS

    - by Paul Wright
    I have an issue with HDMI audio on Windows 8 using an ASUS H81M-PLUS motherboard (without an external GFX card). There are many forum posts advising you to go into playback devices and set HDMI as the default - I have done this. To narrow down what works and what doesn't: I have not been able to get sound from my HDTV using HDMI. I have used this HDMI cable with my PS3, so the cable should be fine. I am able to use the HDMI cable in extended mode, so that I have two monitors (including the TV), just no audio. The HDMI cable goes straight from the motherboard to the TV. Below I have included screenshots of Device Manager and of Playback Devices (Sound), showing disabled and disconnected devices. I am at a loss. I have uninstalled all drivers, rebooted and made Windows look for the correct ones, and made sure the HDMI device was the default. Thanks, Paul

    Read the article

  • i accidentally deleted the recovery folder on a partition (win vista home)

    - by paul
    I accidentally deleted the recovery folder on the recovery partition (Win Vista Home). I think it was some sort of scheduled maintenance of some program that I did not configure properly? Oops... lol. I called Toshiba and they said I needed to buy a recovery program, which I didn't bother doing. I bought a legal copy of Vista and would like to install the correct files, in such a way that when my computer starts looking for recovery files it will eventually find them, or I can point it to the partition. I'm pretty sure it's not a matter of copy and paste (is it?). Thanks, Paul

    Read the article

  • Creating different margins on the first page of a word template

    - by Paul
    I have a letterhead template and I need the first page's left margin to be larger than on subsequent pages. I've seen the option of placing a text box or image box in the header to push the text over, but this ends up throwing off the tabs and bullet-list indentation markers. I thought of setting up the first page using two columns and pushing the text to start in the second column, but I can't seem to find a way to get the text to switch back to 1 column on the second page when it is created from text overflowing. Does anyone know if something like this is possible? Thanks in advance, Paul

    Read the article

  • Disable OS X Portable Home Directories for specific hosts for all users, not just individuals?

    - by Paul Nendick
    Would it be possible to block any and all Portable Home Directory services for specific hosts? Something like MCX's "MobileAccountNeverAsk-" but for the whole workstation? We have a network with both portable and stationary machines. I'd like our users to be able to use all machines, going portable on the MacBook but not being bothered with syncing when logged into stationary iMacs or Mac Pros. The Open Directory servers are running Snow Leopard (for now) and all clients are running Lion. Thanks! Paul

    Read the article

  • Fun with Aggregates

    - by Paul White
    There are interesting things to be learned from even the simplest queries.  For example, imagine you are given the task of writing a query to list AdventureWorks product names where the product has at least one entry in the transaction history table, but fewer than ten. One possible query to meet that specification is: SELECT p.Name FROM Production.Product AS p JOIN Production.TransactionHistory AS th ON p.ProductID = th.ProductID GROUP BY p.ProductID, p.Name HAVING COUNT_BIG(*) < 10; That query correctly returns 23 rows (execution plan and data sample shown below): The execution plan looks a bit different from the written form of the query: the base tables are accessed in reverse order, and the aggregation is performed before the join.  The general idea is to read all rows from the history table, compute the count of rows grouped by ProductID, merge join the results to the Product table on ProductID, and finally filter to only return rows where the count is less than ten. This ‘fully-optimized’ plan has an estimated cost of around 0.33 units.  The reason for the quote marks there is that this plan is not quite as optimal as it could be – surely it would make sense to push the Filter down past the join too?  To answer that, let’s look at some other ways to formulate this query.  This being SQL, there are any number of ways to write logically-equivalent query specifications, so we’ll just look at a couple of interesting ones.  The first query is an attempt to reverse-engineer T-SQL from the optimized query plan shown above.  It joins the result of pre-aggregating the history table to the Product table before filtering: SELECT p.Name FROM ( SELECT th.ProductID, cnt = COUNT_BIG(*) FROM Production.TransactionHistory AS th GROUP BY th.ProductID ) AS q1 JOIN Production.Product AS p ON p.ProductID = q1.ProductID WHERE q1.cnt < 10; Perhaps a little surprisingly, we get a slightly different execution plan: The results are the same (23 rows) but this time the Filter is pushed below the join!  The optimizer chooses nested loops for the join, because the cardinality estimate for rows passing the Filter is a bit low (estimate 1 versus 23 actual), though you can force a merge join with a hint and the Filter still appears below the join.  In yet another variation, the < 10 predicate can be ‘manually pushed’ by specifying it in a HAVING clause in the “q1” sub-query instead of in the WHERE clause as written above. The reason this predicate can be pushed past the join in this query form, but not in the original formulation is simply an optimizer limitation – it does make efforts (primarily during the simplification phase) to encourage logically-equivalent query specifications to produce the same execution plan, but the implementation is not completely comprehensive. Moving on to a second example, the following query specification results from phrasing the requirement as “list the products where there exists fewer than ten correlated rows in the history table”: SELECT p.Name FROM Production.Product AS p WHERE EXISTS ( SELECT * FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID HAVING COUNT_BIG(*) < 10 ); Unfortunately, this query produces an incorrect result (86 rows): The problem is that it lists products with no history rows, though the reasons are interesting.  The COUNT_BIG(*) in the EXISTS clause is a scalar aggregate (meaning there is no GROUP BY clause) and scalar aggregates always produce a value, even when the input is an empty set.  
In the case of the COUNT aggregate, the result of aggregating the empty set is zero (the other standard aggregates produce a NULL).  To make the point really clear, let’s look at product 709, which happens to be one for which no history rows exist: -- Scalar aggregate SELECT COUNT_BIG(*) FROM Production.TransactionHistory AS th WHERE th.ProductID = 709;   -- Vector aggregate SELECT COUNT_BIG(*) FROM Production.TransactionHistory AS th WHERE th.ProductID = 709 GROUP BY th.ProductID; The estimated execution plans for these two statements are almost identical: You might expect the Stream Aggregate to have a Group By for the second statement, but this is not the case.  The query includes an equality comparison to a constant value (709), so all qualified rows are guaranteed to have the same value for ProductID and the Group By is optimized away. In fact there are some minor differences between the two plans (the first is auto-parameterized and qualifies for trivial plan, whereas the second is not auto-parameterized and requires cost-based optimization), but there is nothing to indicate that one is a scalar aggregate and the other is a vector aggregate.  This is something I would like to see exposed in show plan so I suggested it on Connect.  Anyway, the results of running the two queries show the difference at runtime: The scalar aggregate (no GROUP BY) returns a result of zero, whereas the vector aggregate (with a GROUP BY clause) returns nothing at all.  Returning to our EXISTS query, we could ‘fix’ it by changing the HAVING clause to reject rows where the scalar aggregate returns zero: SELECT p.Name FROM Production.Product AS p WHERE EXISTS ( SELECT * FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID HAVING COUNT_BIG(*) BETWEEN 1 AND 9 ); The query now returns the correct 23 rows: Unfortunately, the execution plan is less efficient now – it has an estimated cost of 0.78 compared to 0.33 for the earlier plans.  Let’s try adding a redundant GROUP BY instead of changing the HAVING clause: SELECT p.Name FROM Production.Product AS p WHERE EXISTS ( SELECT * FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID GROUP BY th.ProductID HAVING COUNT_BIG(*) < 10 ); Not only do we now get correct results (23 rows), this is the execution plan: I like to compare that plan to quantum physics: if you don’t find it shocking, you haven’t understood it properly :)  The simple addition of a redundant GROUP BY has resulted in the EXISTS form of the query being transformed into exactly the same optimal plan we found earlier.  What’s more, in SQL Server 2008 and later, we can replace the odd-looking GROUP BY with an explicit GROUP BY on the empty set: SELECT p.Name FROM Production.Product AS p WHERE EXISTS ( SELECT * FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID GROUP BY () HAVING COUNT_BIG(*) < 10 ); I offer that as an alternative because some people find it more intuitive (and it perhaps has more geek value too).  Whichever way you prefer, it’s rather satisfying to note that the result of the sub-query does not exist for a particular correlated value where a vector aggregate is used (the scalar COUNT aggregate always returns a value, even if zero, so it always ‘EXISTS’ regardless which ProductID is logically being evaluated). 
The following query forms also produce the optimal plan and correct results, so long as a vector aggregate is used (you can probably find more equivalent query forms): WHERE Clause SELECT p.Name FROM Production.Product AS p WHERE ( SELECT COUNT_BIG(*) FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID GROUP BY () ) < 10; APPLY SELECT p.Name FROM Production.Product AS p CROSS APPLY ( SELECT NULL FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID GROUP BY () HAVING COUNT_BIG(*) < 10 ) AS ca (dummy); FROM Clause SELECT q1.Name FROM ( SELECT p.Name, cnt = ( SELECT COUNT_BIG(*) FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID GROUP BY () ) FROM Production.Product AS p ) AS q1 WHERE q1.cnt < 10; This last example uses SUM(1) instead of COUNT and does not require a vector aggregate…you should be able to work out why :) SELECT q.Name FROM ( SELECT p.Name, cnt = ( SELECT SUM(1) FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID ) FROM Production.Product AS p ) AS q WHERE q.cnt < 10; The semantics of SQL aggregates are rather odd in places.  It definitely pays to get to know the rules, and to be careful to check whether your queries are using scalar or vector aggregates.  As we have seen, query plans do not show in which ‘mode’ an aggregate is running and getting it wrong can cause poor performance, wrong results, or both. © 2012 Paul White Twitter: @SQL_Kiwi email: [email protected]

    Read the article

  • I see no LOBs!

    - by Paul White
    Is it possible to see LOB (large object) logical reads from STATISTICS IO output on a table with no LOB columns? I was asked this question today by someone who had spent a good fraction of their afternoon trying to work out why this was occurring – even going so far as to re-run DBCC CHECKDB to see if any corruption had taken place.  The table in question wasn’t particularly pretty – it had grown somewhat organically over time, with new columns being added every so often as the need arose.  Nevertheless, it remained a simple structure with no LOB columns – no TEXT or IMAGE, no XML, no MAX types – nothing aside from ordinary INT, MONEY, VARCHAR, and DATETIME types.  To add to the air of mystery, not every query that ran against the table would report LOB logical reads – just sometimes – but when it did, the query often took much longer to execute. Ok, enough of the pre-amble.  I can’t reproduce the exact structure here, but the following script creates a table that will serve to demonstrate the effect: IF OBJECT_ID(N'dbo.Test', N'U') IS NOT NULL DROP TABLE dbo.Test GO CREATE TABLE dbo.Test ( row_id NUMERIC IDENTITY NOT NULL,   col01 NVARCHAR(450) NOT NULL, col02 NVARCHAR(450) NOT NULL, col03 NVARCHAR(450) NOT NULL, col04 NVARCHAR(450) NOT NULL, col05 NVARCHAR(450) NOT NULL, col06 NVARCHAR(450) NOT NULL, col07 NVARCHAR(450) NOT NULL, col08 NVARCHAR(450) NOT NULL, col09 NVARCHAR(450) NOT NULL, col10 NVARCHAR(450) NOT NULL, CONSTRAINT [PK dbo.Test row_id] PRIMARY KEY CLUSTERED (row_id) ) ; The next script loads the ten variable-length character columns with one-character strings in the first row, two-character strings in the second row, and so on down to the 450th row: WITH Numbers AS ( -- Generates numbers 1 - 450 inclusive SELECT TOP (450) n = ROW_NUMBER() OVER (ORDER BY (SELECT 0)) FROM master.sys.columns C1, master.sys.columns C2, master.sys.columns C3 ORDER BY n ASC ) INSERT dbo.Test WITH (TABLOCKX) SELECT REPLICATE(N'A', N.n), REPLICATE(N'B', N.n), REPLICATE(N'C', N.n), REPLICATE(N'D', N.n), REPLICATE(N'E', N.n), REPLICATE(N'F', N.n), REPLICATE(N'G', N.n), REPLICATE(N'H', N.n), REPLICATE(N'I', N.n), REPLICATE(N'J', N.n) FROM Numbers AS N ORDER BY N.n ASC ; Once those two scripts have run, the table contains 450 rows and 10 columns of data like this: Most of the time, when we query data from this table, we don’t see any LOB logical reads, for example: -- Find the maximum length of the data in -- column 5 for a range of rows SELECT result = MAX(DATALENGTH(T.col05)) FROM dbo.Test AS T WHERE row_id BETWEEN 50 AND 100 ; But with a different query… -- Read all the data in column 1 SELECT result = MAX(DATALENGTH(T.col01)) FROM dbo.Test AS T ; …suddenly we have 49 LOB logical reads, as well as the ‘normal’ logical reads we would expect. The Explanation If we had tried to create this table in SQL Server 2000, we would have received a warning message to say that future INSERT or UPDATE operations on the table might fail if the resulting row exceeded the in-row storage limit of 8060 bytes.  If we needed to store more data than would fit in an 8060 byte row (including internal overhead) we had to use a LOB column – TEXT, NTEXT, or IMAGE.  These special data types store the large data values in a separate structure, with just a small pointer left in the original row. Row Overflow SQL Server 2005 introduced a feature called row overflow, which allows one or more variable-length columns in a row to move to off-row storage if the data in a particular row would otherwise exceed 8060 bytes.  
    You no longer receive a warning when creating (or altering) a table that might need more than 8060 bytes of in-row storage; if SQL Server finds that it can no longer fit a variable-length column in a particular row, it will silently move one or more of these columns off the row into a separate allocation unit. Only variable-length columns can be moved in this way (for example the (N)VARCHAR, VARBINARY, and SQL_VARIANT types).  Fixed-length columns (like INTEGER and DATETIME for example) never move into ‘row overflow’ storage.  The decision to move a column off-row is done on a row-by-row basis – so data in a particular column might be stored in-row for some table records, and off-row for others. In general, if SQL Server finds that it needs to move a column into row-overflow storage, it moves the largest variable-length column record for that row.  Note that in the case of an UPDATE statement that results in the 8060 byte limit being exceeded, it might not be the column that grew that is moved! Sneaky LOBs Anyway, that’s all very interesting but I don’t want to get too carried away with the intricacies of row-overflow storage internals.  The point is that it is now possible to define a table with non-LOB columns that will silently exceed the old row-size limit and result in ordinary variable-length columns being moved to off-row storage.  Adding new columns to a table, expanding an existing column definition, or simply storing more data in a column than you used to – all these things can result in one or more variable-length columns being moved off the row. Note that row-overflow storage is logically quite different from old-style LOB and new-style MAX data type storage – individual variable-length columns are still limited to 8000 bytes each – you can just have more of them now.  Having said that, the physical mechanisms involved are very similar to full LOB storage – a column moved to row-overflow leaves a 24-byte pointer record in the row, and the ‘separate storage’ I have been talking about is structured very similarly to both old-style LOBs and new-style MAX types.  The disadvantages are also the same: when SQL Server needs a row-overflow column value it needs to follow the in-row pointer and navigate another chain of pages, just like retrieving a traditional LOB. And Finally… In the example script presented above, the rows with row_id values from 402 to 450 inclusive all exceed the total in-row storage limit of 8060 bytes.  A SELECT that references a column in one of those rows that has moved to off-row storage will incur one or more lob logical reads as the storage engine locates the data.  The results on your system might vary slightly depending on your settings, of course; but in my tests only column 1 in rows 402-450 moved off-row.  You might like to play around with the script – updating columns, changing data type lengths, and so on – to see the effect on lob logical reads and which columns get moved when.  You might even see row-overflow columns moving back in-row if they are updated to be smaller (hint: reduce the size of a column entry by at least 1000 bytes if you hope to see this). Be aware that SQL Server will not warn you when it moves ‘ordinary’ variable-length columns into overflow storage, and it can have dramatic effects on performance.  It makes more sense than ever to choose column data types sensibly.  
If you make every column a VARCHAR(8000) or NVARCHAR(4000), and someone stores data that results in a row needing more than 8060 bytes, SQL Server might turn some of your column data into pseudo-LOBs – all without saying a word. Finally, some people make a distinction between ordinary LOBs (those that can hold up to 2GB of data) and the LOB-like structures created by row-overflow (where columns are still limited to 8000 bytes) by referring to row-overflow LOBs as SLOBs.  I find that quite appealing, but the ‘S’ stands for ‘small’, which makes expanding the whole acronym a little daft-sounding…small large objects anyone? © Paul White 2011 email: [email protected] twitter: @SQL_Kiwi

    Read the article

  • When is a Seek not a Seek?

    - by Paul White
    The following script creates a single-column clustered table containing the integers from 1 to 1,000 inclusive. IF OBJECT_ID(N'tempdb..#Test', N'U') IS NOT NULL DROP TABLE #Test ; GO CREATE TABLE #Test ( id INTEGER PRIMARY KEY CLUSTERED ); ; INSERT #Test (id) SELECT V.number FROM master.dbo.spt_values AS V WHERE V.[type] = N'P' AND V.number BETWEEN 1 AND 1000 ; Let’s say we need to find the rows with values from 100 to 170, excluding any values that divide exactly by 10.  One way to write that query would be: SELECT T.id FROM #Test AS T WHERE T.id IN ( 101,102,103,104,105,106,107,108,109, 111,112,113,114,115,116,117,118,119, 121,122,123,124,125,126,127,128,129, 131,132,133,134,135,136,137,138,139, 141,142,143,144,145,146,147,148,149, 151,152,153,154,155,156,157,158,159, 161,162,163,164,165,166,167,168,169 ) ; That query produces a pretty efficient-looking query plan: Knowing that the source column is defined as an INTEGER, we could also express the query this way: SELECT T.id FROM #Test AS T WHERE T.id >= 101 AND T.id <= 169 AND T.id % 10 > 0 ; We get a similar-looking plan: If you look closely, you might notice that the line connecting the two icons is a little thinner than before.  The first query is estimated to produce 61.9167 rows – very close to the 63 rows we know the query will return.  The second query presents a tougher challenge for SQL Server because it doesn’t know how to predict the selectivity of the modulo expression (T.id % 10 > 0).  Without that last line, the second query is estimated to produce 68.1667 rows – a slight overestimate.  Adding the opaque modulo expression results in SQL Server guessing at the selectivity.  As you may know, the selectivity guess for a greater-than operation is 30%, so the final estimate is 30% of 68.1667, which comes to 20.45 rows. The second difference is that the Clustered Index Seek is costed at 99% of the estimated total for the statement.  For some reason, the final SELECT operator is assigned a small cost of 0.0000484 units; I have absolutely no idea why this is so, or what it models.  Nevertheless, we can compare the total cost for both queries: the first one comes in at 0.0033501 units, and the second at 0.0034054.  The important point is that the second query is costed very slightly higher than the first, even though it is expected to produce many fewer rows (20.45 versus 61.9167). If you run the two queries, they produce exactly the same results, and both complete so quickly that it is impossible to measure CPU usage for a single execution.  We can, however, compare the I/O statistics for a single run by running the queries with STATISTICS IO ON: Table '#Test'. Scan count 63, logical reads 126, physical reads 0. Table '#Test'. Scan count 01, logical reads 002, physical reads 0. The query with the IN list uses 126 logical reads (and has a ‘scan count’ of 63), while the second query form completes with just 2 logical reads (and a ‘scan count’ of 1).  It is no coincidence that 126 = 63 * 2, by the way.  It is almost as if the first query is doing 63 seeks, compared to one for the second query. In fact, that is exactly what it is doing.  There is no indication of this in the graphical plan, or the tool-tip that appears when you hover your mouse over the Clustered Index Seek icon.  
    To see the 63 seek operations, you have to click on the Seek icon and look in the Properties window (press F4, or right-click and choose from the menu): The Seek Predicates list shows a total of 63 seek operations – one for each of the values from the IN list contained in the first query.  I have expanded the first seek node to show the details; it is seeking down the clustered index to find the entry with the value 101.  Each of the other 62 nodes expands similarly, and the same information is contained (even more verbosely) in the XML form of the plan. Each of the 63 seek operations starts at the root of the clustered index B-tree and navigates down to the leaf page that contains the sought key value.  Our table is just large enough to need a separate root page, so each seek incurs 2 logical reads (one for the root, and one for the leaf).  We can see the index depth using the INDEXPROPERTY function, or by using a DMV: SELECT S.index_type_desc, S.index_depth FROM sys.dm_db_index_physical_stats ( DB_ID(N'tempdb'), OBJECT_ID(N'tempdb..#Test', N'U'), 1, 1, DEFAULT ) AS S ; Let’s look now at the Properties window when the Clustered Index Seek from the second query is selected: There is just one seek operation, which starts at the root of the index and navigates the B-tree looking for the first key that matches the Start range condition (id >= 101).  It then continues to read records at the leaf level of the index (following links between leaf-level pages if necessary) until it finds a row that does not meet the End range condition (id <= 169).  Every row that meets the seek range condition is also tested against the Residual Predicate highlighted above (id % 10 > 0), and is only returned if it matches that as well. You will not be surprised that the single seek (with a range scan and residual predicate) is much more efficient than 63 singleton seeks.  It is not 63 times more efficient (as the logical reads comparison would suggest), but it is around three times faster.  Let’s run both query forms 10,000 times and measure the elapsed time: DECLARE @i INTEGER, @n INTEGER = 10000, @s DATETIME = GETDATE() ; SET NOCOUNT ON; SET STATISTICS XML OFF; ; WHILE @n > 0 BEGIN SELECT @i = T.id FROM #Test AS T WHERE T.id IN ( 101,102,103,104,105,106,107,108,109, 111,112,113,114,115,116,117,118,119, 121,122,123,124,125,126,127,128,129, 131,132,133,134,135,136,137,138,139, 141,142,143,144,145,146,147,148,149, 151,152,153,154,155,156,157,158,159, 161,162,163,164,165,166,167,168,169 ) ; SET @n -= 1; END ; PRINT DATEDIFF(MILLISECOND, @s, GETDATE()) ; GO DECLARE @i INTEGER, @n INTEGER = 10000, @s DATETIME = GETDATE() ; SET NOCOUNT ON ; WHILE @n > 0 BEGIN SELECT @i = T.id FROM #Test AS T WHERE T.id >= 101 AND T.id <= 169 AND T.id % 10 > 0 ; SET @n -= 1; END ; PRINT DATEDIFF(MILLISECOND, @s, GETDATE()) ; On my laptop, running SQL Server 2008 build 4272 (SP2 CU2), the IN form of the query takes around 830ms and the range query about 300ms.  The main point of this post is not performance, however – it is meant as an introduction to the next few parts in this mini-series that will continue to explore scans and seeks in detail. When is a seek not a seek?  When it is 63 seeks © Paul White 2011 email: [email protected] twitter: @SQL_kiwi

    Read the article
