Search Results

Search found 17470 results on 699 pages for 'single quote'.


  • Most efficient way to handle coordinate maps in Java

    - by glowcoder
    I have a rectangular tile-based layout; it's your typical Cartesian system. I would like to have a single class that handles two lookup styles: (1) get me the set of players at position X,Y, and (2) get me the position of the player with key K. My current implementation is this:

        class CoordinateMap<V> {
            Map<Long,Set<V>> coords2value;
            Map<V,Long> value2coords;

            // convert (int x, int y) to a long key - this is tested, works for all values -1bil to +1bil
            // My map will NOT require more than 1 bil tiles from the origin :)
            private Long keyFor(int x, int y) {
                int kx = x + 1000000000;
                int ky = y + 1000000000;
                return (long)kx | (long)ky << 32;
            }

            // extract the x and y from the key
            private int[] coordsFor(long k) {
                int x = (int)(k & 0xFFFFFFFFL) - 1000000000;
                int y = (int)((k >>> 32) & 0xFFFFFFFFL) - 1000000000;
                return new int[] { x, y };
            }
        }

    From there, I proceed to have other methods that manipulate or access the two maps accordingly. My question is... is there a better way to do this? Sure, I've tested my class and it works fine. And sure, something inside tells me that if I want to reference the data by two different keys, I need two different maps. But I can also bet I'm not the first to run into this scenario. Thanks!
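
    Not part of the original question: a quick round-trip check can make the bit packing above easier to trust. A minimal sketch; the two methods are copied from the question, made static here so the file runs standalone:

        public class CoordinateMapCheck {
            // same packing scheme as the question's keyFor/coordsFor
            static long keyFor(int x, int y) {
                long kx = x + 1000000000L;
                long ky = y + 1000000000L;
                return kx | (ky << 32);
            }

            static int[] coordsFor(long k) {
                int x = (int)(k & 0xFFFFFFFFL) - 1000000000;
                int y = (int)((k >>> 32) & 0xFFFFFFFFL) - 1000000000;
                return new int[] { x, y };
            }

            // run with: java -ea CoordinateMapCheck (asserts need -ea)
            public static void main(String[] args) {
                int[][] probes = { {0, 0}, {-1000000000, 1000000000}, {123, -456} };
                for (int[] p : probes) {
                    int[] back = coordsFor(keyFor(p[0], p[1]));
                    assert back[0] == p[0] && back[1] == p[1] : "round-trip failed";
                }
                System.out.println("round-trip OK");
            }
        }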

    Read the article

  • cPanel webmail roundcube direct login

    - by Jinx
    I have a small web hosting business that I run alongside my web development/design services. I run CentOS with cPanel on it, and I'd like my clients to use Roundcube by default, without ever seeing cPanel's page for picking a webmail app (SquirrelMail/Horde/Roundcube). I manage everything on the server, so they don't need access to the additional features found on that page. So basically: how can I make www.somepage.com/webmail go directly to Roundcube for every single account on my server? I know how to do it manually for each account, but that's a pain. Thanks!

    Read the article

  • Multi-Domain Root Administrator

    - by Brent Pabst
    We have a new domain structure we are planning to roll out in the next few months. Essentially there is a single top-level (forest root) domain, "mydomain.lan", and two children, "us.mydomain.lan" and "pl.mydomain.lan". We want to configure an administrator account or two at the top-level domain that then have full administrator permissions on the sub-domains. By default the top-level administrator cannot access or log in to machines on the sub-domains. Running W2K8R2. Ideas?

    Read the article

  • How can I change a video container without re-encoding or compressing the file?

    - by GiH
    When I ripped my Kill Bill DVD, I used HandBrake and put it into a single AVI. I realize that I didn't get the subtitles, so what I want to do is convert the AVI to MKV and put the subtitles in the MKV. How do I go about doing this without losing any quality? I don't care about compressing or anything, I just want to change the container. If HandBrake can do it, I'd prefer to use that since I already have it.
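
    An aside, not from the original post: a container change without re-encoding is a stream copy ("remux"), which ffmpeg or mkvmerge can do (HandBrake is a transcoder, so it is not the usual tool for this). A sketch, with placeholder filenames:

        # Remux AVI -> MKV: -c copy copies the audio/video streams bit-for-bit
        # (no re-encode, no quality loss) and muxes the .srt in alongside them.
        ffmpeg -i "movie.avi" -i "movie.srt" -map 0 -map 1 -c copy "movie.mkv"

        # The same with mkvtoolnix:
        mkvmerge -o "movie.mkv" "movie.avi" "movie.srt"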

    Read the article

  • Oracle Endeca Information Discovery 3.1 is Now Available

    - by p.anda
    Oracle Endeca Information Discovery (OEID) 3.1 is a major release that incorporates significant new self-service discovery capabilities for business users. These include agile data mashup, extended support for unstructured analytics, and even tighter integration with Oracle BI. This release is available for download from the Oracle Delivery Cloud and the Oracle Technology Network. Some of the what's-new highlights:

      - Self-service data mashup... enables access to a wider variety of personal and trusted enterprise data sources. Blend multiple data sets in a single app.
      - Agile discovery dashboards... allow users to easily create, configure, and securely share discovery dashboards with intelligent defaults, intuitive wizards, and drag-and-drop configuration.
      - Deeper unstructured analysis... enables users to enrich text using term extraction and whitelist tagging while the data is live.
      - Enhanced integration with OBI... provides easier wizards for data selection and enables the OBI Server as a self-service data source.
      - Enterprise-class data discovery... offers faster performance, a trusted data connection library, improved auditing, and increased data connectivity for Hadoop, web content, and Oracle Data Integrator.

    Find out more... visit the OEID Overview page to download the What's New and related Data Sheet PDF documents. Have questions or want to share details about Oracle Endeca Information Discovery? The MOS Communities are a great first stop; you can stop by the MOS OEID Community.

    Read the article

  • Unable to mount external hard drive - Damaged file system and MFT

    - by Khalifa Abbas Lame
    I get the following error when I try to mount my external hard drive:

        UNABLE TO MOUNT
        Error mounting /dev/sdc1 at /media/khalibloo/Khalibloo2:
        Command-line `mount -t "ntfs" -o "uhelper=udisks2,nodev,nosuid,uid=1000,gid=1000,dmask=0077,fmask=0177" "/dev/sdc1" "/media/khalibloo/Khalibloo2"'
        exited with non-zero exit status 13:
        ntfs_attr_pread_i: ntfs_pread failed: Input/output error
        Failed to read of MFT, mft=6 count=1 br=-1: Input/output error
        Failed to open inode FILE_Bitmap: Input/output error
        Failed to mount '/dev/sdc1': Input/output error
        NTFS is either inconsistent, or there is a hardware fault, or it's a
        SoftRAID/FakeRAID hardware. In the first case run chkdsk /f on Windows
        then reboot into Windows twice. The usage of the /f parameter is very
        important! If the device is a SoftRAID/FakeRAID then first activate
        it and mount a different device under the /dev/mapper/ directory,
        (e.g. /dev/mapper/nvidia_eahaabcc1). Please see the 'dmraid'
        documentation for more details.

    It doesn't mount on Windows either: "I/O Device error". It's an NTFS hard drive with a single partition. Of course, I tried chkdsk /f. It reported several file segments as unreadable, but didn't say whether it fixed them or not (apparently not). I also tried with the /b flag. ntfsfix reported the volume as corrupt. TestDisk was able to fix a small error with the partition table by adding the "80" flag for the active (only) partition. TestDisk also confirmed that the boot sector was fine and that it matched the backup. However, when attempting to repair the MFT, it couldn't read the MFT. It also couldn't list the files on the hard drive; it says the file system may be damaged. Active@ also shows that the MFT is missing or corrupt. So how do I fix the file system? Or the MFT?

    Read the article

  • Is djvubundle available in Ubuntu?

    - by Tim
    The official webpage says:

        Assembling DjVu Images into Multipage Documents

        The batch compressors distributed as part of the DjVuText and DjVuLayered packages can directly produce multipage DjVu files when fed with multiple input files. The files produced are smaller than if the pages are compressed separately, because the compressor can extract and share redundant information across multiple pages. Individually compressed DjVu pages can be assembled into multipage documents using the free package DjVuMulti. To assemble a bunch of DjVu images into a single BUNDLED document, simply type:

            djvubundle page1.djvu page2.djvu ... pageN.djvu document.djvu

        To assemble a bunch of DjVu images into an INDIRECT document, type:

            djvujoin page1.djvu page2.djvu ... pageN.djvu documentdir/index.djvu

        where documentdir must be an existing directory where all the individual page files will be copied. To disassemble a BUNDLED document into an INDIRECT one, simply say:

            djvujoin document.djvu documentdir/indexfile.djvu

        To convert a multipage document from one of the old 2.0 multipage formats, do:

            djvureindex olddocument newdocument

        The programs djvujoin and djvubundle supersede the 2.0 programs djvuindex and djvumerge.

    I couldn't find djvujoin and djvubundle for Ubuntu. djvulibre doesn't have them either. Am I missing something? Thanks.

    Read the article

  • SQLAuthority News – Download Whitepaper – Power View Infrastructure Configuration and Installation: Step-by-Step and Scripts

    - by pinaldave
    Power View, a feature of the SQL Server 2012 Reporting Services Add-in for Microsoft SharePoint Server 2010 Enterprise Edition, is an interactive data exploration, visualization, and presentation experience. It provides intuitive ad-hoc reporting for business users such as data analysts, business decision makers, and information workers. Microsoft has recently released a very interesting whitepaper which covers a sample scenario that validates the connectivity of Power View reports to both PowerPivot workbooks and tabular models. This whitepaper covers the following important concepts about Power View:

      - Understanding the hardware and software requirements and their download locations
      - Installing and configuring the required infrastructure when Power View and its data models are on the same computer and on different computers
      - Installing and configuring a computer used for client access to Power View reports, models, SharePoint 2010, and Power View in a workgroup
      - Configuring single sign-on access for double-hop scenarios with and without Kerberos

    You can download the whitepaper from here. This whitepaper discusses many interesting scenarios. It would be really interesting to know if you are using Power View in your production environment. If yes, would you please share your experience here? Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Business Intelligence, Data Warehousing, PostADay, SQL, SQL Authority, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, SQL White Papers, T SQL, Technology

    Read the article

  • Are there any HTPC-optimized web browsers?

    - by smackfu
    Features that I would ideally include in "HTPC-optimized":

      - Full-screen.
      - Navigable using a remote or the keyboard arrows.
      - Legible at couch distances.

    Or, to put it another way, imagine the design requirements for Hulu Desktop or XBMC or WMC, applied to a web browser. Opera on a Wii meets most of these criteria, but not being HD wastes a lot of potential. If a single solution doesn't exist, is there some combination of Firefox add-ons that will get me there?

    Read the article

  • Convenience of MySQL over XML

    - by Bonechilla
    Currently I use XML to store specific information to correctly load a few things, such as a list of specified characters, scenes, and music. I also use JAXB in combination with standard compression/decompression (ZIP) functionality to store a list of extraneous data. This data is called to add functionality to the character, somewhat like skills in an RPG. Each skill is separated into its own XML file, with a grand list that contains the names of each file (extensions omitted), zipped in a folder that gets encrypted. At first using XML was working fine; however, as the skill list grows, I worry about its stability. I was wondering if I should begin storing the data in MySQL. Originally I planned to simply convert everything from XML to JSON, but I think MySQL would possibly be a better move. Can anyone inform me of the key differences and the pros and cons of each? I guess I'm looking for the way to store the data that is most convenient and easiest to operate on. The data is mostly primitives and strings, and the only ArrayList of values I have I can just concat into a single field and parse later.

    Read the article

  • Convert RAID-0 to RAID-1 on HP ML350G6 with P410i zero memory

    - by JLe
    I have an HP ML350 G6 with a P410i zero-memory RAID controller. As far as I understand, that means I can't expand a current single-drive RAID-0 configuration to a RAID-1 using the HP Offline ACU without installing memory and BBWC. Is that correct? What makes me wonder is that expanding RAID-0 to RAID-1 should be pretty similar to replacing a failed drive in an already existing RAID-1, so why can't I expand without memory and BBWC? Otherwise, is my best option to (i) use Ghost to capture the disk and create a new RAID-1 with the existing drive and a new one, or (ii) buy memory+BBWC and do it online? Thanks

    Read the article

  • Mono is frequently used to say "Yes, .NET is cross-platform". How valid is that claim?

    - by Thorbjørn Ravn Andersen
    In "What would you choose for your project between .NET and Java at this point in time?" I say that I would consider "Will you always deploy to Windows?" the single most important (EDIT: technical) decision to make up front in a new web project, and if the answer is "no", I would recommend Java instead of .NET. A very common counter-argument is "If we ever want to run on Linux/OS X/whatever, we'll just run Mono", which is a very compelling argument on the surface, but I don't agree, for several reasons:

      - OpenJDK and all the vendor-supplied JVMs have passed the official Sun TCK, ensuring things work correctly. I am not aware of Mono passing a Microsoft TCK.
      - Mono trails the .NET releases. What .NET level is currently fully supported? Do all GUI elements (WinForms?) work correctly in Mono?
      - Businesses may not want to depend on open-source frameworks as the official plan B.

    I am aware that with the new governance of Java by Oracle the future is unsafe, but e.g. IBM provides JDKs for many platforms, including Linux; they are just not open-sourced. So, under which circumstances is Mono a valid business strategy for .NET applications? Edit: Mark H summarized it as: "If the claim is that 'I have a Windows application written in .NET, it should run on Mono', then no, it's not a valid claim - but Mono has made efforts to make porting such applications simpler."

    Read the article

  • A Generic RIDC Test Program

    - by Kevin Smith
    Many times I have found it useful to use a Java program that communicates with WebCenter Content (WCC) using RIDC for testing. I might not have access to the web GUI, or I may need to test a service running as a specific user. In the past I had created a number of one-off programs that submitted specific services, e.g. GET_SEARCH_RESULTS, DOCINFO, etc. Recently I decided to create a generic RIDC test program that could submit any service with the desired parameters, based on a configuration file. The program gets the following information from the configuration file:

      - WCC connection information (host, port)
      - User to use to run the service
      - Service to run
      - Any parameters for the service

    The program will make a connection to the WCC server, send the service request, and print the results of the service call using the getResponseAsString() method. Here is a sample configuration file:

        ridc.host=localhost
        ridc.port=4444
        ridc.user=sysadmin
        ridc.idcservice=GET_SEARCH_RESULTS
        idcservice.QueryText=dDocType <matches> `Document`
        idcservice.SortField=dDocName
        idcservice.SortDesc=ASC

    There is a readme file included in the zip with instructions for how to configure and run the program. The program takes one command-line argument, the configuration file name. The configuration file name is optional and defaults to config.properties. If you have any suggestions for improvements, let me know. Right now it only submits a single service call each time you run it. One enhancement I have already thought about would be to allow you to specify multiple services to run in the configuration file. You can do that with the current program by having multiple configuration files and running the program multiple times, each with a different configuration file. You can download the program here.
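
    The post doesn't include the program's source, but the flow it describes (connect, send one service call, print the response) looks roughly like this with the standard RIDC client API. A minimal sketch, hard-coding the values from the sample configuration above rather than reading a properties file:

        import oracle.stellent.ridc.IdcClient;
        import oracle.stellent.ridc.IdcClientManager;
        import oracle.stellent.ridc.IdcContext;
        import oracle.stellent.ridc.model.DataBinder;
        import oracle.stellent.ridc.protocol.ServiceResponse;

        public class RidcTest {
            public static void main(String[] args) throws Exception {
                // Connect over the intradoc socket protocol (ridc.host / ridc.port)
                IdcClientManager manager = new IdcClientManager();
                IdcClient client = manager.createClient("idc://localhost:4444");

                // Run the service as this user (ridc.user)
                IdcContext context = new IdcContext("sysadmin");

                // Build the service call from the idcservice.* parameters
                DataBinder binder = client.createBinder();
                binder.putLocal("IdcService", "GET_SEARCH_RESULTS");
                binder.putLocal("QueryText", "dDocType <matches> `Document`");
                binder.putLocal("SortField", "dDocName");
                binder.putLocal("SortDesc", "ASC");

                // Send the request and dump the raw response, as the post describes
                ServiceResponse response = client.sendRequest(context, binder);
                System.out.println(response.getResponseAsString());
            }
        }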

    Read the article

  • How can I limit the upload/download bandwidth on my CentOS server?

    - by Dan Nestor
    How can I limit the upload and download bandwidth on my CentOS server? This is a box with a single interface, eth0. Ideally, I would like a command-line solution (I've been trying to use tc), something that I could easily switch on and off in a script. So far I've been trying to do something like:

        tc filter add dev eth0 protocol ip prio 50 u32 police rate 100kbit burst 10240 drop

    but I'm obviously missing a lot of knowledge and information. Can somebody help with a quick one-liner? Many thanks, Dan
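
    Not part of the original question, but for reference: the classic pattern is a token-bucket qdisc for upload (egress) plus an ingress policer for download. A sketch, with the rates as placeholders:

        # Upload (egress): shape outgoing traffic on eth0 to ~1 Mbit/s
        tc qdisc add dev eth0 root tbf rate 1mbit burst 32kbit latency 400ms

        # Download (ingress): police incoming traffic; packets over the rate are dropped
        tc qdisc add dev eth0 handle ffff: ingress
        tc filter add dev eth0 parent ffff: protocol ip u32 match u32 0 0 \
            police rate 1mbit burst 100k drop flowid :1

        # Switch it off again by deleting the qdiscs
        tc qdisc del dev eth0 root
        tc qdisc del dev eth0 ingress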

    Read the article

  • Double audio cd ripping weirdness

    - by jqno
    Since I installed Ubuntu 12.04, Rhythmbox, Banshee and Sound Juicer have started acting weird around double CDs, and specifically disc #2 of a double CD. Sometimes they will show the information of disc #1: track names, durations, and even the track count are incorrect. Sometimes they will first show the tracks for disc #1, then continue onto disc #2 if disc #2 has more tracks than disc #1. Sound Juicer seems to be unable to find any track durations at all, even for single CDs. Obviously, this is a pain when I'm trying to rip double CDs, and I have a fair number of them that I want to rip. This happens on both my machines (a slightly aging iMac, and a 1-year-old Sony Vaio). However, on previous versions of Ubuntu this never happened, all on the same machines. So I suspect 12.04 is using a different library for extracting audio CD data. Just for kicks, I tried with Linux Mint 13, and there it works correctly, even though it claims to be based on Ubuntu 12.04 and should therefore be using (partially) the same software. So if the Mint guys can fix it, I should be able to do it too, right? So, my question: what changed in 12.04 that could cause this? And more importantly: what can I do to fix it?

    Read the article

  • OOP oriented PHP app source code samples and advice

    - by abel
    The day I have been dreading has arrived. I never felt OOP or good software design was important (I knew they were important, but I thought I could manage without them). However, having read otherwise almost everywhere on the interwebs, I started dreading the day when my client would ask me for new features in an existing app. The day has come and the pain is unbearable! I have never coded my PHP websites "properly" (PHP is my primary language and the bulk of my work; I am learning Python, using web2py). I take care that the website doesn't fall apart in a daily-use scenario, but I code pages as if I were creating a list of static HTML files with bits of "magic code" in each of them (this bugs me a lot). How do I make the whole app more or less a single object? For example, how do I design the object model for an invoicing app? I use a lot of functions for doing any particular thing in the same fashion throughout the app (e.g. validation, generating IDs, calculating taxes, etc.). I know the basics of OOP in general. Can anyone point me to source code samples of functional apps written in PHP? Or can someone provide pointers so I can recode my existing apps in a more modular way?
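
    A purely illustrative sketch, not from the post: the recurring "same fashion everywhere" helpers the author mentions, such as tax calculation, are exactly the kind of thing that can become small collaborating objects in an invoicing model. All class and method names here are invented (requires PHP 8+ for constructor property promotion):

        <?php
        // Tax logic lives in one object instead of a free function copied around.
        class TaxCalculator {
            public function __construct(private float $rate) {}
            public function taxFor(float $amount): float {
                return round($amount * $this->rate, 2);
            }
        }

        // An Invoice owns its line items and asks its collaborator for the tax.
        class Invoice {
            private array $lines = [];
            public function __construct(private string $id, private TaxCalculator $tax) {}
            public function addLine(string $desc, float $amount): void {
                $this->lines[] = [$desc, $amount];
            }
            public function total(): float {
                $subtotal = array_sum(array_column($this->lines, 1));
                return $subtotal + $this->tax->taxFor($subtotal);
            }
        }

        $invoice = new Invoice('INV-001', new TaxCalculator(0.20));
        $invoice->addLine('Design work', 500.00);
        echo $invoice->total(); // 600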

    Read the article

  • Can you play Halo 2 and Halo 3 on an Xbox360 Arcade?

    - by Jeremy Rudd
    I'm looking at purchasing an Xbox 360 because I've wanted to catch up with the Halo trilogy. Does the cheap Arcade edition console support Halo 2 and Halo 3? Would I be able to save my game progress in single player? Would I be able to play online on typical maps? Would I be able to play games using the DVD drive, or do I have to download everything? Does the tiny HDD hurt even if I don't download any games, trailers or music? Are there any other differences compared to the Xbox 360 Pro? I have a regular TV that uses the composite cable, so I don't need the HDTV support in the Pro edition.

    Read the article

  • Postfix count relayed messages per user

    - by Martino Dino
    I would like to know if it's possible to count outgoing (relayed) messages on a per-user basis in Postfix. I manage a small commercial SMTP relay and decided that it would be nice to have a detailed daily report on how much mail a single user has sent (and eventually to enforce some limits), possibly in real time. I've looked almost everywhere and have started to think that writing my own milter would be the way to go... Are you aware of anything that already exists for Postfix that can count and report relayed mail for authenticated users (a script, milter, or whatever)?
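
    One low-tech baseline, not mentioned in the post: Postfix logs the SASL user on each relayed message, so a daily report can fall straight out of the mail log. A sketch, assuming the default CentOS log location and that smtpd logs sasl_username= for authenticated clients:

        # Count relayed messages per authenticated sender
        grep -o 'sasl_username=[^,]*' /var/log/maillog \
            | cut -d= -f2 | sort | uniq -c | sort -rn

    Real-time enforcement would still need a policy daemon or a milter, as the question suggests; log parsing only covers the reporting half.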

    Read the article

  • How do I start Ubuntu without X server?

    - by Kaare Mikkelsen
    So, I'm trying to install the official nVidia drivers for my fancy graphics card, and they advise disabling the X server before installing, as well as making sure that I can boot without the X server, so as not to wreck anything. However, I seem to be doing something wrong. As I understand it, this should be as simple as changing the runlevel from 2 to 1? (I am aware that all this may simply be me not understanding runlevels.) If that is correct, a quick test should be simply typing "sudo init 1" or "sudo telinit 1" in a terminal? Doing that makes the system attempt to shut down, but it stops at the purple screen with the Ubuntu logo and 5 white dots underneath. I haven't observed it get anywhere from there; I always end up holding down the power button. "sudo telinit 3" has no visible effect. Alternatively, I should be able to get there using recovery mode, activated through the GRUB menu? I have very little success with that. After picking recovery mode, I am faced with a set of options about how to proceed. Choosing either "network enabled" or "text only", I get a dialog explaining that this will mount my / file system in read/write mode, and asking whether this is what I want. I choose yes, and it seems to report that my drive is fine (there's a single line of text detailing the state of the partition). And then it stops. I haven't tried letting it sit for more than a few minutes, but presumably this process should be comparable in duration to a regular boot? I am not particularly fond of messing with any .conf files until I am certain that I can handle things with training wheels on. So, I guess there are two questions: the one in the title, and "how do I start a text-only session without changing defaults?" Thanks in advance :)
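
    For context, not from the post: on Upstart-based Ubuntu releases such as 12.04, runlevels 2-5 are identical and X is started by the display manager job rather than by a runlevel, which is why init 1 / telinit 3 behave unexpectedly here. A sketch of the usual approach, assuming LightDM is the display manager:

        # Switch to a text console first (Ctrl+Alt+F1) and log in there, then:
        sudo service lightdm stop     # stops X; substitute gdm/kdm if that is your DM

        # ... install the driver from the console ...

        sudo service lightdm start    # brings the graphical session back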

    Read the article

  • Can't change folder background

    - by newcomer
    I tried to change it by dragging from the Backgrounds and Emblems window, but the icon just goes back to that window rather than changing the folder background. However, I can change the task bar with this drag-and-drop. Is it perhaps something about ownership permissions? If so, how do I change that? Should I change the /home/mashruf/.gconf/apps/nautilus/preferences/%gconf.xml file? How? It says:

        <?xml version="1.0"?>
        <gconf>
            <entry name="click_policy" mtime="1297597800" type="string">
                <stringvalue>single</stringvalue>
            </entry>
            <entry name="default_folder_viewer" mtime="1297597336" type="string">
                <stringvalue>list_view</stringvalue>
            </entry>
            <entry name="media_autorun_x_content_open_folder" mtime="1297534321" type="list" ltype="string">
            </entry>
            <entry name="media_autorun_x_content_ignore" mtime="1297534321" type="list" ltype="string">
            </entry>
            <entry name="media_autorun_x_content_start_app" mtime="1297534321" type="list" ltype="string">
                <li type="string">
                    <stringvalue>x-content/software</stringvalue>
                </li>
            </entry>
            <entry name="start_with_location_bar" mtime="1297300028" type="bool" value="true"/>
            <entry name="side_pane_view" mtime="1297269334" type="string">
                <stringvalue>NautilusTreeSidebar</stringvalue>
            </entry>
            <entry name="navigation_window_saved_maximized" mtime="1297600306" type="bool" value="false"/>
            <entry name="navigation_window_saved_geometry" mtime="1297600306" type="string">
                <stringvalue>964x608+59+2</stringvalue>
            </entry>
            <entry name="sidebar_width" mtime="1297390418" type="int" value="192"/>
        </gconf>
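
    As an aside, not from the post: gconf keys are normally edited with the gconftool-2 CLI rather than by hand-editing %gconf.xml, which Nautilus may overwrite. A sketch; the exact key that holds the folder background is an assumption here, only the click_policy key is taken from the file above:

        # Read a key to confirm the path is right
        gconftool-2 --get /apps/nautilus/preferences/click_policy

        # Set a key (the background key name below is illustrative, not verified)
        gconftool-2 --type string --set /apps/nautilus/preferences/background_filename \
            /usr/share/backgrounds/some-image.png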

    Read the article

  • How can I create persistent SSH connection to "stream" commands over a period of time?

    - by Darth
    Say that I have an application running on one PC that is sending commands via SSH to another PC on the network (both machines running Linux). For example, every time something happens on #1, I want to run a task on #2. In this setup, I have to create an SSH connection for every single command. Is there any simple way to do this with basic Unix tools, without programming a custom client/server application? Basically all I want is to establish a connection over SSH and then send one command after another.
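
    For reference, not part of the original question: OpenSSH's connection multiplexing does exactly this; the first ssh call opens a master connection and later calls reuse it. A sketch for ~/.ssh/config, with the host alias and address as placeholders:

        Host worker
            HostName 192.168.1.20          # placeholder address of machine #2
            ControlMaster auto             # first connection becomes the master
            ControlPath ~/.ssh/cm-%r@%h:%p
            ControlPersist 10m             # keep the master open 10 min after last use

    After that, repeated `ssh worker some-command` calls travel over the one established connection instead of re-handshaking each time.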

    Read the article

  • SCO UNIX problem: "Cannot create /var/adm/utmp or /var/adm/utmpx"

    - by Maktouch
    Hey everyone, I have an old server that doesn't boot. I don't know the version of Unix installed, but I see SCO UNIX. It stops with this error:

        UX:init: ERROR: Cannot create /var/adm/utmp or /var/adm/utmpx
        UX:init: ERROR: failed write of utmpx entry: "  "
        UX:init: ERROR: failed write of utmpx entry: "  "
        UX:init: INFO: SINGLE USER MODE

    After that message, it just stops. I cannot type or press anything; even CTRL+ALT+DEL does not work. I cannot get into the system. I have tried booting with a DamnSmallLinux LiveCD, but it does not recognize the file system on hda. Is there a way to either log in as root or bypass this error? Thanks.

    Read the article

  • Add registry entries for all users

    - by George02
    I've installed a piece of software on my Windows 8 computer which writes entries to the registry. How can I modify these registry entries for all users? For example, what I need to modify are values under this key, but the key only refers to a single user:

        [HKEY_USERS\S-1-5-21-543895283-3741240661-2983116896-500\Software\IvoSoft\ClassicStartMenu\Settings]

    But "S-1-5-21-543895283-3741240661-2983116896-500" is different depending on the user. How can I change that key for all users? I've tried to work with this key, but it is not possible:

        [HKEY_USERS\S-1-5-21-*\Software\IvoSoft\ClassicStartMenu\Settings]
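
    A common way to do this, not from the post: wildcards are not valid in registry paths, but you can iterate over the loaded user hives under HKEY_USERS and apply the same change to each. A PowerShell sketch, where the value name and data are purely illustrative:

        # Walk every loaded user hive (SIDs starting S-1-5-21-, skipping *_Classes)
        Get-ChildItem Registry::HKEY_USERS |
          Where-Object { $_.PSChildName -like 'S-1-5-21-*' -and $_.PSChildName -notlike '*_Classes' } |
          ForEach-Object {
            $key = "Registry::HKEY_USERS\$($_.PSChildName)\Software\IvoSoft\ClassicStartMenu\Settings"
            if (Test-Path $key) {
              # Value name and data below are placeholders, not real ClassicStartMenu settings
              Set-ItemProperty -Path $key -Name 'ExampleSetting' -Value 'ExampleValue'
            }
          }

    Note this only touches hives that are currently loaded (logged-in users plus .DEFAULT); profiles of logged-out users would need their NTUSER.DAT loaded first, e.g. with reg load.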

    Read the article

  • Animation API vs frame animation

    - by Max
    I'm pretty far down the road in my game right now, closing in on the end, and I'm adding little tweaks here and there. I used custom frame animation of a single image with many versions of my sprite on it, and controlled which part of the image to show using rectangles. But I'm starting to think that maybe I should have used the Animation API that comes with Android instead. Will this affect my performance in a negative way? Can I still use rectangles to draw my bitmap? Could I add effects from the Animation API to my current frame-controlled animation, like the fade-out effect etc.? That would mean I won't have to change my current code. I want some of my animations to fade out, and I just noticed that using the Animation API makes things a lot easier. But needless to say, I would prefer not having to change all my animation code. I'm bad at explaining, so I'll show a bit of how I do my animation:

        private static final int BMP_ROWS = 1; // I use top-view, so my sprite only needs 1 direction
        private static final int BMP_COLUMNS = 3;

        public void update(GameControls controls) {
            if (sprite.isMoving) {
                currentFrame = ++currentFrame % BMP_COLUMNS;
            } else {
                this.setFrame(1);
            }
        }

        public void draw(Canvas canvas, int x, int y, float angle) {
            this.x = x;
            this.y = y;
            canvas.save();
            canvas.rotate(angle, x + width / 2, y + height / 2);
            int srcX = currentFrame * width;
            int srcY = 0 * height;
            Rect src = new Rect(srcX, srcY, srcX + width, srcY + height);
            Rect dst = new Rect(x, y, x + width, y + height);
            canvas.drawBitmap(bitmap, src, dst, null);
            canvas.restore();
        }

    Read the article

  • Fun with Aggregates

    - by Paul White
    There are interesting things to be learned from even the simplest queries.  For example, imagine you are given the task of writing a query to list AdventureWorks product names where the product has at least one entry in the transaction history table, but fewer than ten. One possible query to meet that specification is:

        SELECT p.Name
        FROM Production.Product AS p
        JOIN Production.TransactionHistory AS th
            ON p.ProductID = th.ProductID
        GROUP BY p.ProductID, p.Name
        HAVING COUNT_BIG(*) < 10;

    That query correctly returns 23 rows (execution plan and data sample shown below). The execution plan looks a bit different from the written form of the query: the base tables are accessed in reverse order, and the aggregation is performed before the join.  The general idea is to read all rows from the history table, compute the count of rows grouped by ProductID, merge join the results to the Product table on ProductID, and finally filter to only return rows where the count is less than ten. This ‘fully-optimized’ plan has an estimated cost of around 0.33 units.  The reason for the quote marks there is that this plan is not quite as optimal as it could be – surely it would make sense to push the Filter down past the join too?  To answer that, let’s look at some other ways to formulate this query.  This being SQL, there are any number of ways to write logically-equivalent query specifications, so we’ll just look at a couple of interesting ones.  The first query is an attempt to reverse-engineer T-SQL from the optimized query plan shown above.  It joins the result of pre-aggregating the history table to the Product table before filtering:

        SELECT p.Name
        FROM
        (
            SELECT th.ProductID, cnt = COUNT_BIG(*)
            FROM Production.TransactionHistory AS th
            GROUP BY th.ProductID
        ) AS q1
        JOIN Production.Product AS p
            ON p.ProductID = q1.ProductID
        WHERE q1.cnt < 10;

    Perhaps a little surprisingly, we get a slightly different execution plan. The results are the same (23 rows), but this time the Filter is pushed below the join!  The optimizer chooses nested loops for the join, because the cardinality estimate for rows passing the Filter is a bit low (estimate 1 versus 23 actual), though you can force a merge join with a hint and the Filter still appears below the join.  In yet another variation, the < 10 predicate can be ‘manually pushed’ by specifying it in a HAVING clause in the “q1” sub-query instead of in the WHERE clause as written above. The reason this predicate can be pushed past the join in this query form, but not in the original formulation, is simply an optimizer limitation – it does make efforts (primarily during the simplification phase) to encourage logically-equivalent query specifications to produce the same execution plan, but the implementation is not completely comprehensive. Moving on to a second example, the following query specification results from phrasing the requirement as “list the products where there exists fewer than ten correlated rows in the history table”:

        SELECT p.Name
        FROM Production.Product AS p
        WHERE EXISTS
        (
            SELECT *
            FROM Production.TransactionHistory AS th
            WHERE th.ProductID = p.ProductID
            HAVING COUNT_BIG(*) < 10
        );

    Unfortunately, this query produces an incorrect result (86 rows): the problem is that it lists products with no history rows, though the reasons are interesting.  The COUNT_BIG(*) in the EXISTS clause is a scalar aggregate (meaning there is no GROUP BY clause) and scalar aggregates always produce a value, even when the input is an empty set. In the case of the COUNT aggregate, the result of aggregating the empty set is zero (the other standard aggregates produce a NULL).  To make the point really clear, let’s look at product 709, which happens to be one for which no history rows exist:

        -- Scalar aggregate
        SELECT COUNT_BIG(*)
        FROM Production.TransactionHistory AS th
        WHERE th.ProductID = 709;

        -- Vector aggregate
        SELECT COUNT_BIG(*)
        FROM Production.TransactionHistory AS th
        WHERE th.ProductID = 709
        GROUP BY th.ProductID;

    The estimated execution plans for these two statements are almost identical. You might expect the Stream Aggregate to have a Group By for the second statement, but this is not the case.  The query includes an equality comparison to a constant value (709), so all qualified rows are guaranteed to have the same value for ProductID and the Group By is optimized away. In fact there are some minor differences between the two plans (the first is auto-parameterized and qualifies for trivial plan, whereas the second is not auto-parameterized and requires cost-based optimization), but there is nothing to indicate that one is a scalar aggregate and the other is a vector aggregate.  This is something I would like to see exposed in show plan so I suggested it on Connect.  Anyway, the results of running the two queries show the difference at runtime: the scalar aggregate (no GROUP BY) returns a result of zero, whereas the vector aggregate (with a GROUP BY clause) returns nothing at all.  Returning to our EXISTS query, we could ‘fix’ it by changing the HAVING clause to reject rows where the scalar aggregate returns zero:

        SELECT p.Name
        FROM Production.Product AS p
        WHERE EXISTS
        (
            SELECT *
            FROM Production.TransactionHistory AS th
            WHERE th.ProductID = p.ProductID
            HAVING COUNT_BIG(*) BETWEEN 1 AND 9
        );

    The query now returns the correct 23 rows. Unfortunately, the execution plan is less efficient now – it has an estimated cost of 0.78 compared to 0.33 for the earlier plans.  Let’s try adding a redundant GROUP BY instead of changing the HAVING clause:

        SELECT p.Name
        FROM Production.Product AS p
        WHERE EXISTS
        (
            SELECT *
            FROM Production.TransactionHistory AS th
            WHERE th.ProductID = p.ProductID
            GROUP BY th.ProductID
            HAVING COUNT_BIG(*) < 10
        );

    Not only do we now get correct results (23 rows), this is the execution plan: I like to compare that plan to quantum physics: if you don’t find it shocking, you haven’t understood it properly :)  The simple addition of a redundant GROUP BY has resulted in the EXISTS form of the query being transformed into exactly the same optimal plan we found earlier.  What’s more, in SQL Server 2008 and later, we can replace the odd-looking GROUP BY with an explicit GROUP BY on the empty set:

        SELECT p.Name
        FROM Production.Product AS p
        WHERE EXISTS
        (
            SELECT *
            FROM Production.TransactionHistory AS th
            WHERE th.ProductID = p.ProductID
            GROUP BY ()
            HAVING COUNT_BIG(*) < 10
        );

    I offer that as an alternative because some people find it more intuitive (and it perhaps has more geek value too).  Whichever way you prefer, it’s rather satisfying to note that the result of the sub-query does not exist for a particular correlated value where a vector aggregate is used (the scalar COUNT aggregate always returns a value, even if zero, so it always ‘EXISTS’ regardless which ProductID is logically being evaluated).

    The following query forms also produce the optimal plan and correct results, so long as a vector aggregate is used (you can probably find more equivalent query forms):

    WHERE Clause

        SELECT p.Name
        FROM Production.Product AS p
        WHERE
        (
            SELECT COUNT_BIG(*)
            FROM Production.TransactionHistory AS th
            WHERE th.ProductID = p.ProductID
            GROUP BY ()
        ) < 10;

    APPLY

        SELECT p.Name
        FROM Production.Product AS p
        CROSS APPLY
        (
            SELECT NULL
            FROM Production.TransactionHistory AS th
            WHERE th.ProductID = p.ProductID
            GROUP BY ()
            HAVING COUNT_BIG(*) < 10
        ) AS ca (dummy);

    FROM Clause

        SELECT q1.Name
        FROM
        (
            SELECT p.Name, cnt =
            (
                SELECT COUNT_BIG(*)
                FROM Production.TransactionHistory AS th
                WHERE th.ProductID = p.ProductID
                GROUP BY ()
            )
            FROM Production.Product AS p
        ) AS q1
        WHERE q1.cnt < 10;

    This last example uses SUM(1) instead of COUNT and does not require a vector aggregate…you should be able to work out why :)

        SELECT q.Name
        FROM
        (
            SELECT p.Name, cnt =
            (
                SELECT SUM(1)
                FROM Production.TransactionHistory AS th
                WHERE th.ProductID = p.ProductID
            )
            FROM Production.Product AS p
        ) AS q
        WHERE q.cnt < 10;

    The semantics of SQL aggregates are rather odd in places.  It definitely pays to get to know the rules, and to be careful to check whether your queries are using scalar or vector aggregates.  As we have seen, query plans do not show in which ‘mode’ an aggregate is running and getting it wrong can cause poor performance, wrong results, or both.

    © 2012 Paul White
    Twitter: @SQL_Kiwi
    email: [email protected]

    Read the article
