Search Results


  • Simple Branching and Merging with SVN

    It's a good idea not to do too much work without checking something into source control. By "too much work" I mean typically on the order of a couple of hours at most, and certainly it's a good practice to check in anything you have before you leave the office for the day. But what if your changes break the build (on the build server; you do have a build server, don't you?) or would cause problems for others on your team if they get the latest code? The solution with Subversion is branching and merging (incidentally, if you're using Microsoft Visual Studio Team System, you can shelve your changes and share shelvesets with others, which accomplishes many of the same things as branching and merging, but is a bit simpler to do).

    Getting Started
    I'm going to assume you have Subversion installed along with the nearly ubiquitous client, TortoiseSVN. See my previous post on installing SVN server if you want to get it set up real quick (you can put it on your workstation/laptop just to learn how it works easily enough).

    Overview
    When you know you are going to be working on something that you won't be able to check in quickly, it's a good idea to start a branch. It's also perfectly fine to create the branch after the fact (have you ever started something thinking it would be an hour, and 4 hours later realized you were nowhere near done?). In any event, the first thing you need to do is create a branch. A branch is simply a copy of the current trunk (a typical Subversion setup has root directories called trunk, tags, and branches; it's a good idea to keep this and to put your branches in the branches folder). Once you have a new branch, you need to switch your working copy so that it is bound to your branch. As you work, you may want to merge in changes that are happening in the trunk to your branch, and ultimately when you are done you'll want to merge your branch back into the trunk. When done, you can delete your branch (or not, but it may add clutter). To sum up:

    1. Create a new branch
    2. Switch your local working copy to the new branch
    3. Develop in the branch (commit changes, etc.)
    4. Merge changes from trunk into your branch
    5. Merge changes from branch into trunk
    6. Delete the branch

    Create a new branch
    From the root of your repository, right-click and select TortoiseSVN > Branch/tag as shown at right (click to enlarge). This will bring up the Copy (Branch / Tag) interface. By default the "From WC at URL:" should be pointing at the trunk of your repository. I recommend (after ensuring that you have the latest version) that you choose to make the copy from the HEAD revision in the repository (the first radio button). In the "To URL:" textbox, you should change the URL from /trunk to /branches/NAME_OF_BRANCH. You can name the branch anything you like, but it's often useful to give it your name (if it's just for your use) or some useful information (such as a datestamp, a bug/issue ID that it relates to, or perhaps just the name of the feature you are adding). When you're done with that, enter a log message for your new branch. If you want to immediately switch your local working copy to the new branch/tag, check the box at the bottom of the dialog (Switch working copy to new branch/tag). You can see an example at right. Assuming everything works, you should very quickly see a window telling you the Copy finished, like the one shown below:

    Switch Local Working Copy to New Branch
    If you followed the instructions above and checked the box when you created your branch, you don't need to do this step. However, if you have a branch that already exists and you would like to switch over to working on it, you can do so by using the Switch command. You'll find it in the explorer context menu under TortoiseSVN > Switch: This brings up a dialog that shows you your current binding, and lets you enter a new URL to switch to: In the screenshot above, you can see that I'm currently bound to a branch, and so I could switch back to the trunk or to another branch. If you're not sure what to enter here, you can click the [...] next to the URL textbox to explore your repository and find the appropriate root URL to use. Also, the dropdown will show you URLs that might be a good fit (such as the trunk of the current repository).

    Develop in the Branch
    Once you have created a branch and switched your working copy to use it, you can make changes and Commit them as usual. Your commits are now going into the branch, so they won't impact other users or the build server that are working off of the trunk (or their own branches). In theory you can keep on doing this forever, but practically it's a good idea to periodically merge the trunk into your branch, and/or keep your branches short-lived and merge them back into the trunk before they get too far out of sync.

    Merge Changes from Trunk into your Branch
    Once you have been working in a branch for a little while, changes to the trunk will have occurred that you'll want to merge into your branch. It's much safer and easier to integrate changes in small increments than to wait for weeks or months and then try to merge two very different codebases. To perform the merge, simply go to the root of your branch working copy, right-click, and select TortoiseSVN > Merge. You'll be presented with this dialog: In this case you want to leave the default setting, Merge a range of revisions. Click Next. Now choose the URL to merge from. You should select the trunk of your current repository (which should be in the drop-down list, or you can click the [...] to browse your repository for the correct URL). You can leave everything else blank, since you want to merge everything: Click Next. Again you can leave the default settings. If you want to do something more granular than everything in the trunk, you can select a different Merge depth, down to merging just one item in the tree. You can also perform a Test merge to see what changes will take place before you click Merge (which is often a good idea). Here's what the dialog should look like before you click Merge: After clicking Merge (or Test merge) you should see a confirmation like this (it will say Test Only in the title if you clicked Test merge): Now you should build your solution, run all of your tests, and verify that your branch still works the way it should, given the updates that you've just integrated from the trunk. Once everything works, Commit your changes, and then continue with your work on the branch. Note that until you commit, nothing has actually changed in your branch on the server. Other team members who may also be working in this branch won't be impacted, etc. The Merge is purely a client-side operation until you perform a Commit. In a more real-world scenario, you may have conflicts. When you do, you'll be presented with a dialog like this one: It's up to you which option you want to go with. The more frequently you Merge, the fewer of these you'll have to deal with. Also, be very sure that you're merging the right folders together. If you try to merge your trunk with some subfolder in your branch's structure, you'll end up with all kinds of conflicts and problems. Fortunately, they're only in your working copy (unless you commit them!), but if you see something like that, be sure to double-check your URL and your local file location.

    Merge Your Branch Back Into Trunk
    When you're done working in your branch, it's time to pull it back into the trunk. The first thing you should do is follow the previous step's instructions for merging the latest from the trunk into your branch. This lets you ensure that what you have in your branch works correctly with the current trunk. Once you've done that and committed your changes to your branch, you're ready to proceed with this step. Once you're confident your branch is good to go, you should go to its root folder and select TortoiseSVN > Merge (as above) from the explorer right-click menu. This time, select Reintegrate a branch as shown below: Click Next. You'll want it to merge with the trunk, which should be the default: Click Next. Leave the default settings: Click Test merge to see a test, and then if all looks good, click Merge. Note that if you haven't checked in your working copy changes, you'll see something like this: If, on the other hand, things are successful: After this step, it's likely you are finished working in your branch. Don't forget to use the TortoiseSVN > Switch command to change your working copy back to the trunk.

    Delete the Branch
    You don't have to delete the branch, but over time the branches area of your repository will get cluttered, and in any event if they're not actively being worked on, the branches are just taking up space and adding to later confusion. Keeping your branches limited to things you're actively working on is simply a good habit to get into, just like making sure your codebase itself remains tidy and not filled with old commented-out bits of code. To delete the branch after you're finished with it, the simplest thing to do is choose TortoiseSVN > Repo Browser. From there, assuming you did this from your branch, it should already be highlighted. In any event, navigate to your branch in the treeview on the left, then right-click and select Delete. Enter a log message if you'd like: Click OK, and it's gone. Don't be too afraid of this, though. You can still get to the files by viewing the log for branches and selecting a previous revision (anything before the delete action): If for some reason you needed something that was previously in this branch, you could easily get back to any changeset you checked in, so you should have absolutely no fear when it comes to deleting branches you're done with.

    Resources
    If you're using Eclipse, there's a nice write-up of the steps required by Zach Cox that I found helpful here.
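
    If you prefer the command line to TortoiseSVN's dialogs, the same workflow looks roughly like this (a sketch; the repository URL and branch name are placeholders):

        # create the branch (a cheap server-side copy)
        svn copy http://server/repo/trunk http://server/repo/branches/my-feature -m "Create my-feature branch"
        # bind your working copy to the branch (run from the working copy root)
        svn switch http://server/repo/branches/my-feature
        # ...develop and commit as usual, then periodically pull trunk changes in:
        svn merge http://server/repo/trunk
        # when finished, from an up-to-date trunk working copy, reintegrate the branch
        svn merge --reintegrate http://server/repo/branches/my-feature
        # and finally remove the branch
        svn delete http://server/repo/branches/my-feature -m "Done with my-feature"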

    Read the article

  • How to resize the YouTube player when the window resizes

    - by Permana
    I want to show a popup window containing a YouTube video. My question is: how do I resize the YouTube player when the user resizes the popup window? The head section of the popup window's PHP/HTML code:

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
        <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1" />
        <title>Wavin Video</title>
        <script src="jquery-1.4.1.min.js" type="text/javascript"></script>
        <script type="text/javascript" charset="utf-8">
        $(document).ready(function(){
            //If the User resizes the window
            //$(window).bind("resize", resizeWindow);
            $(window).resize(resizeWindow).resize()
            function resizeWindow( e ) {
                var target = "#youtubebox2";
                var newWindowHeight = $(window).height();
                var newWindowWidth = $(window).width();
                console.log("Width : "+newWindowWidth);
                console.log("Height: "+newWindowHeight);
                console.log("---------");
                $(target).html("<object width=\""+newWindowWidth+"\" height=\""+newWindowHeight+"\" id=\"youtubebox\"><param name=\"movie\" value=\"http://www.youtube.com/v/<?php echo $_GET['filename'];?>\"></param><param name=\"allowFullScreen\" value=\"true\"></param><param name=\"allowscriptaccess\" value=\"always\"></param><embed src=\"http://www.youtube.com/v/<?php echo $_GET['filename'];?>\" type=\"application/x-shockwave-flash\" allowscriptaccess=\"always\" allowfullscreen=\"true\" width=\""+newWindowWidth+"\" height=\""+newWindowHeight+"\"></embed></object>");
            }
        });
        </script>
        </head>
        <body>
        <div align="center" style="padding:6px 0px 0px 0px;background-color: #ccc;" id="youtubebox2">
        <object width="480" height="385" id="youtubebox">
            <param name="movie" value="http://www.youtube.com/v/<?php echo $_GET['filename'];?>"></param>
            <param name="allowFullScreen" value="true"></param>
            <param name="allowscriptaccess" value="always"></param>
            <embed src="http://www.youtube.com/v/<?php echo $_GET['filename'];?>" type="application/x-shockwave-flash" allowscriptaccess="always" allowfullscreen="true" width="480" height="385"></embed>
        </object>
        </div>
        </body>
        </html>

    The page receives the YouTube video ID through a $_GET variable. The code above didn't work; the YouTube player is not resized. Is there something I missed?
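
    One thing worth trying (a sketch against the markup above, not a verified fix): instead of rebuilding the whole <object> on every resize event, set the width and height attributes on the existing player. Flash players generally need the size applied to both the <object> and the <embed>:

        $(document).ready(function () {
            $(window).resize(function () {
                var w = $(window).width();
                var h = $(window).height();
                // resize the existing player in place
                $("#youtubebox").attr({ width: w, height: h });
                $("#youtubebox embed").attr({ width: w, height: h });
            }).resize(); // fire once on load
        });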

    Read the article

  • Unit Tests in Visual Studio 2010

    - by Ben
    Hi, I am trying to create a unit test for a WinForm in a Visual Studio 2010 project. I add a new "Coded UI Test" to my project, open up the code file, then right-click and select "Generate Code for Coded UI Test" > "Use Coded UI Test Builder". I then start my application up and select "Record" on the UI Map control. I run my tests (in this case, simply selecting a textbox, typing in a random value, then clicking a button). I then select "Generate Code" from the UI Map control, which generates the code the test will use. When running this test, I get the error:

        Test method HelloWorldTest.CodedUITest1.CodedUITestMethod1 threw exception:
        Microsoft.VisualStudio.TestTools.UITest.Extension.UITestControlNotFoundException: The playback failed to find the control with the given search properties. Additional Details:
        TechnologyName: 'MSAA'
        ControlType: 'Window'
        Name: 'Form1'
        ClassName: 'WindowsForms10.Window'
        ---> System.Runtime.InteropServices.COMException: Error HRESULT E_FAIL has been returned from a call to a COM component.

    Does anyone know where I am going wrong? Thanks
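
    One thing that sometimes helps (a sketch, not a guaranteed fix): relax the playback search settings at the start of the test, in case the window title or class name differs slightly between recording and playback. The Playback and SmartMatchOptions types live in the Microsoft.VisualStudio.TestTools.UITesting and Microsoft.VisualStudio.TestTools.UITest.Extension namespaces:

        [TestInitialize]
        public void SetUpPlayback()
        {
            // give the search more time and loosen top-level window matching
            Playback.PlaybackSettings.SearchTimeout = 10000; // milliseconds
            Playback.PlaybackSettings.SmartMatchOptions = SmartMatchOptions.TopLevelWindow;
        }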

    Read the article

  • Django Error: NameError name 'current_datetime' is not defined

    - by Diego
    I'm working through the book "The Definitive Guide to Django" and am stuck on a piece of code. This is the code in my settings.py:

        ROOT_URLCONF = 'mysite.urls'

    I have the following code in my urls.py:

        from django.conf.urls.defaults import *
        from mysite.views import hello, my_homepage_view

        urlpatterns = patterns('',
            ('^hello/$', hello),
        )

        urlpatterns = patterns('',
            ('^time/$', current_datetime),
        )

    And the following is the code in my views.py file:

        from django.http import HttpResponse
        import datetime

        def hello(request):
            return HttpResponse("Hello World")

        def current_datetime(request):
            now = datetime.datetime.now()
            html = "<html><body>It is now %s.</body></html>" % now
            return HttpResponse(html)

    Yet, I get the following error when I test the code in the development server:

        NameError at /time/
        name 'current_datetime' is not defined

    Can someone help me out here? This really is just a copy-paste from the book. I don't see any mistyping.
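
    For what it's worth, two things stand out in the urls.py above: current_datetime is never imported, and the second patterns() call replaces (rather than extends) the first, so '^hello/$' would stop working too. A corrected sketch:

        from django.conf.urls.defaults import *
        from mysite.views import hello, current_datetime, my_homepage_view

        urlpatterns = patterns('',
            ('^hello/$', hello),
            ('^time/$', current_datetime),
        )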

    Read the article

  • MySQL InnoDB Corruption after power outage, possible to recover?

    - by Tim Hackett
    Hey Guys, I recently started trying to get Redmine up and running after a power outage that seems to have corrupted our InnoDB database in MySQL. Redmine had an extensive set of documentation that I would like to get even if Redmine isn't able to run. The service fails on startup. I have tried inserting innodb_force_recovery = 4 per the documentation from the URL in the error log (I also tried 1 through 6, as I have backed up all directories after the corruption). I have verified through "mysqld-nt --print-defaults" that it is starting with the recovery option in the params. The machine is running Windows Server 2003 SP2, Xeon E5335 with 2GB RAM; MySQL is not mirrored to another machine, nor is the machine a mirror. I do not have any backups because the previous person did not set them up. Here is the error log:

        InnoDB: The log sequence number in ibdata files does not match
        InnoDB: the log sequence number in the ib_logfiles!
        100308 14:50:01 InnoDB: Database was not shut down normally!
        InnoDB: Starting crash recovery.
        InnoDB: Reading tablespace information from the .ibd files...
        InnoDB: Restoring possible half-written data pages from the doublewrite
        InnoDB: buffer...
        100308 14:50:02 InnoDB: Error: page 7 log sequence number 0 935521175
        InnoDB: is in the future! Current system log sequence number 0 933419020.
        InnoDB: Your database may be corrupt or you may have copied the InnoDB
        InnoDB: tablespace but not the InnoDB log files. See
        InnoDB: http://dev.mysql.com/doc/refman/5.0/en/forcing-recovery.html
        InnoDB: for more information.
        100308 14:50:02 InnoDB: Error: page 2 log sequence number 0 935517607
        InnoDB: is in the future! Current system log sequence number 0 933419020.
        InnoDB: Your database may be corrupt or you may have copied the InnoDB
        InnoDB: tablespace but not the InnoDB log files. See
        InnoDB: http://dev.mysql.com/doc/refman/5.0/en/forcing-recovery.html
        InnoDB: for more information.
        100308 14:50:02 InnoDB: Error: page 11 log sequence number 0 935517607
        InnoDB: is in the future! Current system log sequence number 0 933419020.
        InnoDB: Your database may be corrupt or you may have copied the InnoDB
        InnoDB: tablespace but not the InnoDB log files. See
        InnoDB: http://dev.mysql.com/doc/refman/5.0/en/forcing-recovery.html
        InnoDB: for more information.
        100308 14:50:02 InnoDB: Error: page 5 log sequence number 0 972973045
        InnoDB: is in the future! Current system log sequence number 0 933419020.
        InnoDB: Your database may be corrupt or you may have copied the InnoDB
        InnoDB: tablespace but not the InnoDB log files. See
        InnoDB: http://dev.mysql.com/doc/refman/5.0/en/forcing-recovery.html
        InnoDB: for more information.
        100308 14:50:02 InnoDB: Error: page 6 log sequence number 0 972984051
        InnoDB: is in the future! Current system log sequence number 0 933419020.
        InnoDB: Your database may be corrupt or you may have copied the InnoDB
        InnoDB: tablespace but not the InnoDB log files. See
        InnoDB: http://dev.mysql.com/doc/refman/5.0/en/forcing-recovery.html
        InnoDB: for more information.
        100308 14:50:02 InnoDB: Error: page 1577 log sequence number 0 972737368
        InnoDB: is in the future! Current system log sequence number 0 933419020.
        InnoDB: Your database may be corrupt or you may have copied the InnoDB
        InnoDB: tablespace but not the InnoDB log files. See
        InnoDB: http://dev.mysql.com/doc/refman/5.0/en/forcing-recovery.html
        InnoDB: for more information.
        InnoDB: Error: trying to access page number 4294965119 in space 0,
        InnoDB: space name .\ibdata1,
        InnoDB: which is outside the tablespace bounds.
        InnoDB: Byte offset 0, len 16384, i/o type 10.
        InnoDB: If you get this error at mysqld startup, please check that
        InnoDB: your my.cnf matches the ibdata files that you have in the
        InnoDB: MySQL server.
        100308 14:50:02 InnoDB: Assertion failure in thread 960 in file .\fil\fil0fil.c line 3959
        InnoDB: We intentionally generate a memory trap.
        InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
        InnoDB: If you get repeated assertion failures or crashes, even
        InnoDB: immediately after the mysqld startup, there may be
        InnoDB: corruption in the InnoDB tablespace. Please refer to
        InnoDB: http://dev.mysql.com/doc/refman/5.0/en/forcing-recovery.html
        InnoDB: about forcing recovery.
        100308 14:50:02 [ERROR] mysqld-nt: Got signal 11. Aborting!
        100308 14:50:02 [ERROR] Aborting
        100308 14:50:02 [Note] mysqld-nt: Shutdown complete
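
    If the server will come up at all under forced recovery, the standard play (a sketch; paths and credentials are placeholders) is to dump everything while the tablespace is still readable, then rebuild from the dump:

        # my.ini, [mysqld] section (higher values are progressively more aggressive):
        #   innodb_force_recovery = 4
        # or start the server by hand with the option on the command line:
        mysqld-nt --console --innodb_force_recovery=4
        # then, in another console, while the server is up:
        mysqldump -u root -p --all-databases > all_databases.sql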

    Read the article

  • Facebook require_login() in iFrame App

    - by LapKom
    Hi, I have a serious problem with an iframe application. I need to use many external JS libraries and other dynamic stuff, so an FBML application can't be done. When I call require_login() I get the application install dialog when the app is not already installed, which is OK. But then, after authorization, the application enters an endless redirect loop with parameters like auth_token, installed and so on. Yesterday I managed to fix this, but today it's broken again... What the heck is happening with FB? It's driving me crazy to find a solution; none of the ones found on the net seems to be working. So far I tried:

    http://abhirama.wordpress.com/2010/03/07/facebook-iframe-xfbml-app/ (7th March 2010!)
    http://forum.developers.facebook.com/viewtopic.php?pid=156092
    http://www.keywordintellect.com/facebook-development/how-to-set-up-a-facebook-iframe-application-in-php-in-5-minutes/
    http://www.markdeepwell.com/2010/02/validating-a-facebook-session-within-an-iframe/
    http://forum.developers.facebook.com/viewtopic.php?pid=210449
    http://www.ajaxlines.com/ajax/stuff/article/facebook_fbml_rendering_in_iframe_application.php
    http://www.aratide.com/php/solving-the-break-out-issue-in-iframe-facebook-applications/

    None of the above worked... According to those and some FB docs:

    http://wiki.developers.facebook.com/index.php/FB_RequireFeatures
    http://wiki.developers.facebook.com/index.php/Cross_Domain_Communication_Channel

    my example test files look as follows:

        <?php
        //Link in library.
        require_once '../application/vendor/Facebook/facebook.php';

        //Authentication Keys
        $appapikey = 'XXXX';
        $appsecret = 'XXXX';

        //Construct the class
        $facebook = new Facebook($appapikey, $appsecret);

        //Require login
        $user_id = $facebook->require_login();
        ?>
        <html xmlns="http://www.w3.org/1999/xhtml" xmlns:fb="http://www.facebook.com/2008/fbml">
        <head>
        <title></title>
        </head>
        <body>
        <script src="http://static.ak.facebook.com/js/api_lib/v0.4/FeatureLoader.js.php" type="text/javascript"></script>
        This is you: <fb:name uid="<?php echo $user_id?>"></fb:name>
        <?php var_dump($facebook->api_client->friends_get())?>
        <script type="text/javascript">
        FB_RequireFeatures(["XFBML"], function(){
            FB.Facebook.init("<?=$appapikey?>", "xd_receiver.html");
        });
        </script>
        </body>
        </html>

    And the cross-domain file xd_receiver.html is:

        <!doctype html public "-//w3c//dtd xhtml 1.0 strict//en" "http://www.w3.org/tr/xhtml1/dtd/xhtml1-strict.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
        <title>cross-domain receiver page</title>
        </head>
        <body>
        <script src="http://static.ak.facebook.com/js/api_lib/v0.4/XdCommReceiver.js" type="text/javascript"></script>
        </body>
        </html>

    How do I get it working? I'm using the Kohana framework to do this and have already replaced header('Location') with url::redirect() in the Facebook PHP library.

    Read the article

  • Shouldn't prepared statements be much faster?

    - by silversky
    $s = explode(" ", microtime());
    $s = $s[0] + $s[1];
    $con = mysqli_connect('localhost', 'test', 'pass', 'db') or die('Err');

    for ($i = 0; $i < 1000; $i++) {
        $stmt = $con->prepare("SELECT MAX(id) AS max_id, MIN(id) AS min_id FROM tb");
        $stmt->execute();
        $stmt->bind_result($M, $m);
        $stmt->free_result();

        $rand = mt_rand($m, $M).'<br/>';

        $res = $con->prepare("SELECT * FROM tb WHERE id >= ? LIMIT 0,1");
        $res->bind_param("s", $rand);
        $res->execute();
        $res->free_result();
    }

    $e = explode(" ", microtime());
    $e = $e[0] + $e[1];
    echo number_format($e - $s, 4, '.', '');

    and:

    $link = mysql_connect("localhost", "test", "pass") or die();
    mysql_select_db("db") or die("Unable to select database".mysql_error());

    for ($i = 0; $i < 1000; $i++) {
        $range_result = mysql_query("SELECT MAX(`id`) AS max_id, MIN(`id`) AS min_id FROM tb");
        $range_row = mysql_fetch_object($range_result);
        $random = mt_rand($range_row->min_id, $range_row->max_id);
        $result = mysql_query("SELECT * FROM tb WHERE id >= $random LIMIT 0,1");
    }

    Definitely prepared statements are safer, but it also says everywhere that they are much faster. BUT in my test of the above code I get:

    - 2.45 sec for prepared statements
    - 5.05 sec for the second example

    What do you think I'm doing wrong? Should I use the second solution, or should I try to optimize the prepared statements?
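
    For what it's worth, the usual advice for a benchmark like this is to prepare each statement once, outside the loop, and only re-execute it; preparing inside the loop pays the statement-compilation round trip 2000 times. A sketch against the same tb table:

        <?php
        $con = mysqli_connect('localhost', 'test', 'pass', 'db') or die('Err');

        // prepare both statements once
        $minmax = $con->prepare("SELECT MAX(id), MIN(id) FROM tb");
        $pick   = $con->prepare("SELECT * FROM tb WHERE id >= ? LIMIT 0,1");

        for ($i = 0; $i < 1000; $i++) {
            $minmax->execute();
            $minmax->bind_result($max, $min);
            $minmax->fetch();
            $minmax->free_result();

            $rand = mt_rand($min, $max);
            $pick->bind_param("i", $rand);
            $pick->execute();
            $pick->free_result();
        }
        ?>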

    Read the article

  • Slow query with unexpected index scan

    - by zerkms
    Hello, I have this query:

        SELECT *
        FROM sample
        INNER JOIN test ON sample.sample_number = test.sample_number
        INNER JOIN result ON test.test_number = result.test_number
        WHERE sampled_date BETWEEN '2010-03-17 09:00' AND '2010-03-17 12:00'

    The biggest table here is RESULT, which contains 11.1M records; the other two tables hold about 1M each. This query works slowly (more than 10 minutes) and returns about 800 records. The execution plan shows a clustered index scan (over its PRIMARY KEY, result.result_number, which actually doesn't take part in the query) over all 11M records. RESULT.TEST_NUMBER is a clustered primary key. If I change '2010-03-17 09:00' to '2010-03-17 10:00', I get about 40 records; it executes in 300ms, and the plan shows an index seek (over the result.test_number index). If I replace * in the SELECT clause with result.test_number (covered by an index), then everything becomes fast in the first case too. This points to HDD IO issues, but doesn't explain the changing plan. So, any ideas?

    UPDATE: sampled_date is in table sample and covered by an index. The other fields from this query, test.sample_number and result.test_number, are covered by indexes too.

    UPDATE 2: Obviously SQL Server for some reason doesn't want to use the index. I did a small experiment: I removed the INNER JOIN with result, selected all the test.test_number values, and after that did SELECT * FROM RESULT WHERE TEST_NUMBER IN (...). This, of course, works fast. But I cannot see what the difference is, or why the query optimizer chooses such an inappropriate way to select the data in the first case.
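
    One way to investigate (the index name here is hypothetical; substitute the actual nonclustered index on result.test_number) is to force the seek with a table hint and compare the plan and IO statistics against the optimizer's choice:

        SELECT *
        FROM sample
        INNER JOIN test ON sample.sample_number = test.sample_number
        INNER JOIN result WITH (INDEX(ix_result_test_number))
            ON test.test_number = result.test_number
        WHERE sampled_date BETWEEN '2010-03-17 09:00' AND '2010-03-17 12:00'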

    Read the article

  • Translate query to NHibernate

    - by Rob Walker
    I am trying to learn NHibernate, and am having difficulty translating a SQL query into one using the criteria API. The data model has tables: Part (Id, Name, ...), Order (Id, PartId, Qty), Shipment (Id, PartId, Qty). For all the parts I want to find the total quantity ordered and the total quantity shipped. In SQL I have:

        select shipment.part_id, sum(shipment.quantity), sum(order.quantity)
        from shipment
        inner join order on order.part_id = shipment.part_id
        group by shipment.part_id

    Alternatively:

        select id,
               (select sum(quantity) from shipment where part_id = part.id),
               (select sum(quantity) from order where part_id = part.id)
        from part

    But the latter query takes over twice as long to execute. Any suggestions on how to create these queries in (fluent) NHibernate? I have all the tables mapped, and loading/saving/etc. the entities works fine.
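
    A rough HQL sketch of the second form (the entity and property names are assumed from the table description above; HQL permits subqueries in the select clause):

        // hypothetical mappings: Part, Order and Shipment entities, each
        // with a Quantity property and a Part reference
        var totals = session.CreateQuery(@"
            select p.Id,
                   (select sum(s.Quantity) from Shipment s where s.Part = p),
                   (select sum(o.Quantity) from Order o where o.Part = p)
            from Part p")
            .List();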

    Read the article

  • Move Files from a Failing PC with an Ubuntu Live CD

    - by Trevor Bekolay
    You’ve loaded the Ubuntu Live CD to salvage files from a failing system, but where do you store the recovered files? We’ll show you how to store them on external drives, drives on the same PC, a Windows home network, and other locations. We’ve shown you how to recover data like a forensics expert, but you can’t store recovered files back on your failed hard drive! There are lots of ways to transfer the files you access from an Ubuntu Live CD to a place where a stable Windows machine can access them. We’ll go through several methods, starting each section from the Ubuntu desktop – if you don’t yet have an Ubuntu Live CD, follow our guide to creating a bootable USB flash drive, and then our instructions for booting into Ubuntu. If your BIOS doesn’t let you boot using a USB flash drive, don’t worry, we’ve got you covered!

    Use a Healthy Hard Drive
    If your computer has more than one hard drive, or your hard drive is healthy and you’re in Ubuntu for non-recovery reasons, then accessing your hard drive is easy as pie, even if the hard drive is formatted for Windows. To access a hard drive, it must first be mounted. To mount a healthy hard drive, you just have to select it from the Places menu at the top-left of the screen. You will have to identify your hard drive by its size. Clicking on the appropriate hard drive mounts it, and opens it in a file browser. You can now move files to this hard drive by drag-and-drop or copy-and-paste, both of which are done the same way they’re done in Windows. Once a hard drive, or other external storage device, is mounted, it will show up in the /media directory. To see a list of currently mounted storage devices, navigate to /media by clicking on File System in a File Browser window, and then double-clicking on the media folder. Right now, our media folder contains links to the hard drive, which Ubuntu has assigned a terribly uninformative label, and the PLoP Boot Manager CD that is currently in the CD-ROM drive.

    Connect a USB Hard Drive or Flash Drive
    An external USB hard drive gives you the advantage of portability, and is still large enough to store an entire hard disk dump, if need be. Flash drives are also very quick and easy to connect, though they are limited in how much they can store. When you plug a USB hard drive or flash drive in, Ubuntu should automatically detect it and mount it. It may even open it in a File Browser automatically. Since it’s been mounted, you will also see it show up on the desktop, and in the /media folder. Once it’s been mounted, you can access it and store files on it like you would any other folder in Ubuntu. If, for whatever reason, it doesn’t mount automatically, click on Places in the top-left of your screen and select your USB device. If it does not show up in the Places list, then you may need to format your USB drive. To properly remove the USB drive when you’re done moving files, right click on the desktop icon or the folder in /media and select Safely Remove Drive. If you’re not given that option, then Eject or Unmount will effectively do the same thing.

    Connect to a Windows PC on your Local Network
    If you have another PC or a laptop connected through the same router (wired or wireless) then you can transfer files over the network relatively quickly. To do this, we will share one or more folders from the machine booted up with the Ubuntu Live CD over the network, letting our Windows PC grab the files contained in that folder. As an example, we’re going to share a folder on the desktop called ToShare. Right-click on the folder you want to share, and click Sharing Options. A Folder Sharing window will pop up. Check the box labeled Share this folder. A window will pop up about the sharing service. Click the Install service button. Some files will be downloaded, and then installed. When they’re done installing, you’ll be appropriately notified. You will be prompted to restart your session. Don’t worry, this won’t actually log you out, so go ahead and press the Restart session button. The Folder Sharing window returns, with Share this folder now checked. Edit the Share name if you’d like, and add checkmarks in the two checkboxes below the text fields. Click Create Share. Nautilus will ask your permission to add some permissions to the folder you want to share. Allow it to Add the permissions automatically. The folder is now shared, as evidenced by the new arrows above the folder’s icon. At this point, you are done with the Ubuntu machine. Head to your Windows PC, and open up Windows Explorer. Click on Network in the list on the left, and you should see a machine called UBUNTU in the right pane. Note: This example is shown in Windows 7; the same steps should work for Windows XP and Vista, but we have not tested them. Double-click on UBUNTU, and you will see the folder you shared earlier! As well as any other folders you’ve shared from Ubuntu. Double click on the folder you want to access, and from there, you can move the files from the machine booted with Ubuntu to your Windows PC.

    Upload to an Online Service
    There are many services online that will allow you to upload files, either temporarily or permanently. As long as you aren’t transferring an entire hard drive, these services should allow you to transfer your important files from the Ubuntu environment to any other machine with Internet access. We recommend compressing the files that you want to move, both to save a little bit of bandwidth, and to save time clicking on files, as uploading a single file will be much less work than a ton of little files. To compress one or more files or folders, select them, and then right-click on one of the members of the group. Click Compress…. Give the compressed file a suitable name, and then select a compression format. We’re using .zip because we can open it anywhere, and the compression rate is acceptable. Click Create and the compressed file will show up in the location selected in the Compress window.

    Dropbox
    If you have a Dropbox account, then you can easily upload files from the Ubuntu environment to Dropbox. There is no explicit limit on the size of file that can be uploaded to Dropbox, though a free account begins with a total limit of 2 GB of files in total. Access your account through Firefox, which can be opened by clicking on the Firefox logo to the right of the System menu at the top of the screen. Once into your account, press the Upload button on top of the main file list. Because Flash is not installed in the Live CD environment, you will have to switch to the basic uploader. Click Browse…, find your compressed file, and then click Upload file. Depending on the size of the file, this could take some time. However, once the file has been uploaded, it should show up on any computer connected through Dropbox in a matter of minutes.

    Google Docs
    Google Docs allows the upload of any type of file – making it an ideal place to upload files that we want to access from another computer. While your total allocation of space varies (mine is around 7.5 GB), there is a per-file maximum of 1 GB. Log into Google Docs, and click on the Upload button at the top left of the page. Click Select files to upload and select your compressed file. For safety’s sake, uncheck the checkbox concerning converting files to Google Docs format, and then click Start upload.

    Go Online – Through FTP
    If you have access to an FTP server – perhaps through your web hosting company, or you’ve set up an FTP server on a different machine – you can easily access the FTP server in Ubuntu and transfer files. Just make sure you don’t go over your quota if you have one. You will need to know the address of the FTP server, as well as the login information. Click on Places > Connect to Server… Choose the FTP (with login) Service type, and fill in your information. Adding a bookmark is optional, but recommended. You will be asked for your password. You can choose to remember it until you logout, or indefinitely. You can now browse your FTP server just like any other folder. Drop files into the FTP server and you can retrieve them from any computer with an Internet connection and an FTP client.

    Conclusion
    While at first the Ubuntu Live CD environment may seem claustrophobic, it has a wealth of options for connecting to peripheral devices, local computers, and machines on the Internet – and this article has only scratched the surface. Whatever the storage medium, Ubuntu’s got an interface for it!
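
    As a footnote to the methods above: if you are comfortable in a terminal, the mount-and-copy steps can also be done from the command line (a sketch; device names, mount points and the destination host are placeholders):

        # list partitions to find the one you want
        sudo fdisk -l
        # mount it read-only, which is safer on a failing disk
        sudo mkdir -p /mnt/rescue
        sudo mount -o ro /dev/sda1 /mnt/rescue
        # copy a folder to another machine that is running an SSH server
        scp -r /mnt/rescue/Users/you/Documents user@192.168.1.10:/backup/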

    Read the article

  • Perl DBI execute not maintaining MySQL stored procedure results

    - by David Dolphin
    I'm having a problem with executing a stored procedure from Perl (using the DBI module). If I execute a simple SELECT * FROM table there are no problems. The SQL code is:

        DROP FUNCTION IF EXISTS update_current_stock_price;
        DELIMITER |
        CREATE FUNCTION update_current_stock_price (symbolIN VARCHAR(20), nameIN VARCHAR(150), currentPriceIN DECIMAL(10,2), currentPriceTimeIN DATETIME)
        RETURNS INT
        DETERMINISTIC
        BEGIN
            DECLARE outID INT;
            SELECT id INTO outID FROM mydb449.app_stocks WHERE symbol = symbolIN;
            IF outID > 0 THEN
                UPDATE mydb449.app_stocks
                SET currentPrice = currentPriceIN, currentPriceTime = currentPriceTimeIN
                WHERE id = outID;
            ELSE
                INSERT INTO mydb449.app_stocks (symbol, name, currentPrice, currentPriceTime)
                VALUES (symbolIN, nameIN, currentPriceIN, currentPriceTimeIN);
                SELECT LAST_INSERT_ID() INTO outID;
            END IF;
            RETURN outID;
        END|
        DELIMITER ;

    The Perl code snippet is:

        $sql = "select update_current_stock_price('$csv_result[0]', '$csv_result[1]', '$csv_result[2]', '$currentDateTime') as `id`;";
        My::Extra::StandardLog("SQL being used: ".$sql);
        my $query_handle = $dbh->prepare($sql);
        $query_handle->execute();
        $query_handle->bind_columns(\$returnID);
        $query_handle->fetch();

    If I execute select update_current_stock_price('aapl', 'Apple Corp', '264.4', '2010-03-17 00:00:00') as `id`; using the mysql CLI client, it executes the stored function correctly and returns an existing ID, or the new ID. However, the Perl will only return a new ID (incrementing by 1 on each run). It also doesn't store the result in the database. It looks like it's executing a DELETE on the new id just after the update_current_stock_price function is run. Any help? Does Perl do anything funky to procedures that I should know about? Before you ask, I don't have access to binary logging, sorry.
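
    One thing worth checking (a sketch; rows that vanish after the script exits are the classic sign of an uncommitted transaction being rolled back at disconnect): make sure the handle has AutoCommit enabled, or commit explicitly. Note that AUTO_INCREMENT values are not reclaimed on rollback, which would explain the ID incrementing on every run:

        use DBI;

        # connect with AutoCommit on so the INSERT/UPDATE inside the
        # stored function is committed immediately
        my $dbh = DBI->connect("DBI:mysql:database=mydb449;host=localhost",
                               "user", "pass",
                               { RaiseError => 1, AutoCommit => 1 });

        # alternatively, with AutoCommit => 0, commit before disconnecting:
        # $dbh->commit();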

    Read the article

  • How to validate my Alexa code <noscript> tag in Head section

    - by Naveen Valecha
    The doctype and HTML tag of my page are below:

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML+RDFa 1.1//EN" "http://www.w3.org/MarkUp/DTD/xhtml-rdfa-2.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml" version="XHTML+RDFa 1.1"
              xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
              xsi:schemaLocation="http://www.w3.org/1999/xhtml http://www.w3.org/MarkUp/SCHEMA/xhtml-rdfa-2.xsd"
              xmlns:og="http://ogp.me/ns#" xml:lang="en" lang="en" dir="ltr">

    Read the article

  • Using a Case statement within the values section of an Insert statement

    - by mattgcon
    Please forgive my ignorance and poor SQL programming skills, but I am normally a basic SQL developer. I need to create a trigger off the insertion of data in one table to insert different data into another table. Within this trigger I need to insert certain data into the new table based upon values within the newly inserted data from the original table. I am totally confused on this. I thought I would be creative and use a CASE statement within the VALUES section, but it is not working. Can anyone please help me on this? (Below is the code for the trigger as of now.)

        INSERT INTO dbo.WebOnlineUserPeopleDashboard (
            ONLINE_USERACCOUNT_ID, ONLINE_ROOMS_DIRECTORY, ONLINE_ROOMS_LIST,
            ONLINE_ROOMS_PLACEMENT, ONLINE_ROOMS_MANAGEMENT, ONLINE_MAILINGLIST_DIRECTORY,
            ONLINE_MAILINGLIST_LIST, ONLINE_MAILINGLIST_MEMBERS, ONLINE_MAILINGLIST_MANAGER,
            ONLINE_PEOPLESEARCH_DIRECTORY
        )
        VALUES
        IF (SELECT ONLINE_PEOPLE_FULL_ACCESS FROM INSERTED) = 1
        BEGIN
            SELECT ONLINE_USERACCOUNT_ID, 1, 1, 1, 1, 1, 1, 1, 1, 1 FROM INSERTED
        END
        ELSE IF (SELECT ONLINE_PEOPLE_FULL_ACCESS FROM INSERTED) = 0
        BEGIN
            SELECT ONLINE_USERACCOUNT_ID, 0, 0, 0, 0, 0, 0, 0, 0, 0 FROM INSERTED
        END
        ELSE
        BEGIN
            SELECT ONLINE_USERACCOUNT_ID,
            CASE --DIRECTORY
                WHEN ONLINE_PEOPLE_ROOMS_PLACEMENT_FULL_ACCESS = 1 OR ONLINE_PEOPLE_ROOMS_PLACEMENT_VIEW = 1 OR ONLINE_PEOPLE_ROOMS_PLACEMENT_ADD = 1 OR ONLINE_PEOPLE_ROOMS_PLACEMENT_UPDATE = 1 OR ONLINE_PEOPLE_ROOMS_PLACEMENT_DELETE = 1 THEN 1
                WHEN ONLINE_PEOPLE_ROOMS_PLACEMENT_FULL_ACCESS = 0 THEN 0
            END,
            CASE
                WHEN ONLINE_PEOPLE_ROOMS_PLACEMENT_VIEW = 1 THEN 1
                WHEN ONLINE_PEOPLE_ROOMS_PLACEMENT_VIEW = 0 THEN 0
            END,
            CASE
                WHEN ONLINE_PEOPLE_ROOMS_PLACEMENT_ADD = 1 OR ONLINE_PEOPLE_ROOMS_PLACEMENT_UPDATE = 1 OR ONLINE_PEOPLE_ROOMS_PLACEMENT_DELETE = 1 THEN 1
                WHEN ONLINE_PEOPLE_ROOMS_PLACEMENT_ADD = 0 AND ONLINE_PEOPLE_ROOMS_PLACEMENT_UPDATE = 0 AND ONLINE_PEOPLE_ROOMS_PLACEMENT_DELETE = 0 THEN 0
            END,
            CASE
                WHEN ONLINE_PEOPLE_ROOMS_MANAGEMENT_FULL_ACCESS = 1 THEN 1
                WHEN ONLINE_PEOPLE_ROOMS_MANAGEMENT_FULL_ACCESS = 0 THEN 0
            END,
            CASE
                WHEN ONLINE_PEOPLE_MAILING_LISTS_FULL_ACCESS = 1 OR ONLINE_PEOPLE_MAILING_LISTS_VIEW = 1 OR ONLINE_PEOPLE_MAILING_LISTS_MEMBERS_ADD = 1 OR ONLINE_PEOPLE_MAILING_LISTS_MEMBERS_UPDATE = 1 OR ONLINE_PEOPLE_MAILING_LISTS_MEMBERS_DELETE = 1 THEN 1
                WHEN ONLINE_PEOPLE_MAILING_LISTS_FULL_ACCESS = 0 THEN 0
            END,
            CASE
                WHEN ONLINE_PEOPLE_MAILING_LISTS_VIEW = 1 THEN 1
                WHEN ONLINE_PEOPLE_MAILING_LISTS_VIEW = 0 THEN 0
            END,
            CASE
                WHEN ONLINE_PEOPLE_MAILING_LISTS_MEMBERS_ADD = 1 OR ONLINE_PEOPLE_MAILING_LISTS_MEMBERS_UPDATE = 1 OR ONLINE_PEOPLE_MAILING_LISTS_MEMBERS_DELETE = 1 THEN 1
                WHEN ONLINE_PEOPLE_MAILING_LISTS_MEMBERS_ADD = 0 AND ONLINE_PEOPLE_MAILING_LISTS_MEMBERS_UPDATE = 0 AND ONLINE_PEOPLE_MAILING_LISTS_MEMBERS_DELETE = 0 THEN 0
            END,
            CASE
                WHEN ONLINE_PEOPLE_MAILING_LISTS_ADD = 1 OR ONLINE_PEOPLE_MAILING_LISTS_UPDATE = 1 OR ONLINE_PEOPLE_MAILING_LISTS_DELETE = 1 THEN 1
                WHEN ONLINE_PEOPLE_MAILING_LISTS_ADD = 1 OR ONLINE_PEOPLE_MAILING_LISTS_UPDATE = 1 OR ONLINE_PEOPLE_MAILING_LISTS_DELETE = 1 THEN 0
            END,
            CASE
                WHEN ONLINE_PEOPLE_PEOPLE_SEARCH = 1 THEN 1
                WHEN ONLINE_PEOPLE_PEOPLE_SEARCH = 0 THEN 0
            END
            FROM INSERTED
        END
        END
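
    For what it's worth, T-SQL does not allow IF or CASE expressions directly in a VALUES list; the usual pattern inside a trigger is to drive the INSERT from a SELECT, where CASE can be evaluated per inserted row. A cut-down sketch (only two of the ten target columns shown, to illustrate the shape):

        INSERT INTO dbo.WebOnlineUserPeopleDashboard (ONLINE_USERACCOUNT_ID, ONLINE_ROOMS_DIRECTORY)
        SELECT ONLINE_USERACCOUNT_ID,
               CASE WHEN ONLINE_PEOPLE_FULL_ACCESS = 1 THEN 1
                    WHEN ONLINE_PEOPLE_FULL_ACCESS = 0 THEN 0
                    WHEN ONLINE_PEOPLE_ROOMS_PLACEMENT_FULL_ACCESS = 1
                      OR ONLINE_PEOPLE_ROOMS_PLACEMENT_VIEW = 1 THEN 1
                    ELSE 0
               END
        FROM INSERTED;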

    Read the article

  • Integrating NetBeans for Raspberry Pi Java Development

    - by speakjava
    Raspberry Pi IDE Java Development
    The Raspberry Pi is an incredible device for building embedded Java applications but, despite being able to run an IDE on the Pi, it really pushes things to the limit. It's much better to use a PC or laptop to develop the code and then deploy and test on the Pi. What I thought I'd do in this blog entry was to run through the steps necessary to set up NetBeans on a PC for Java code development, with automatic deployment to the Raspberry Pi as part of the build process. I will assume that your starting point is a Raspberry Pi with an SD card that has one of the latest Raspbian images on it. This is good because this now includes the JDK 7 as part of the distro, so no need to download and install a separate JDK. I will also assume that you have installed the JDK and NetBeans on your PC. These can be downloaded here. There are numerous approaches you can take to this, including mounting the file system from the Raspberry Pi remotely on your development machine. I tried this and I found that NetBeans got rather upset if the file system disappeared, either through network interruption or the Raspberry Pi being turned off. The following method uses copying over SSH, which will fail more gracefully if the Pi is not responding.

    Step 1: Enable SSH on the Raspberry Pi
    To run the Java applications you create, you will need to start Java on the Raspberry Pi with the appropriate class name, classpath and parameters. For non-JavaFX applications you can either do this from the Raspberry Pi desktop or, if you do not have a monitor connected, through a remote command line. To execute the remote command line you need to enable SSH (a secure shell login over the network) and connect using an application like PuTTY. You can enable SSH when you first boot the Raspberry Pi, as the raspi-config program runs automatically. You can also run it at any time afterwards by running the command:

        sudo raspi-config

    This will bring up a menu of options. Select '8 Advanced Options' and on the next screen select 'A4 SSH'. Select 'Enable' and the task is complete.

    Step 2: Configure Raspberry Pi Networking
    By default, the Raspbian distribution configures the ethernet connection to use DHCP rather than a static IP address. You can continue to use DHCP if you want, but to avoid having to potentially change settings whenever you reboot the Pi, using a static IP address is simpler. To configure this on the Pi you need to edit the /etc/network/interfaces file. You will need to do this as root using the sudo command, so something like sudo vi /etc/network/interfaces. In this file you will see this line:

        iface eth0 inet dhcp

    This needs to be changed to the following:

        iface eth0 inet static
            address 10.0.0.2
            gateway 10.0.0.254
            netmask 255.255.255.0

    You will need to change the values in red to an appropriate IP address and to match the address of your gateway.

    Step 3: Create a Public-Private Key Pair On Your Development Machine
    How you do this will depend on which operating system you are using:

    Mac OSX or Linux: Run the command:

        ssh-keygen -t rsa

    Press ENTER/RETURN to accept the default destination for saving the key. We do not need a passphrase, so simply press ENTER/RETURN for an empty one and once more to confirm. The key will be created in the file .ssh/id_rsa.pub in your home directory. Display the contents of this file using the cat command:

        cat ~/.ssh/id_rsa.pub

    Open a window, SSH to the Raspberry Pi and login. Change directory to .ssh and edit the authorized_keys file (don't worry if the file does not exist). Copy and paste the contents of the id_rsa.pub file to the authorized_keys file and save it.

    Windows: Since Windows is not a UNIX-derivative operating system, it does not include the necessary key generating software by default. To generate the key I used puttygen.exe, which is available from the same site that provides the PuTTY application, here. Download this and run it on your Windows machine. Follow the instructions to generate a key. I remove the key comment, but you can leave that if you want. Click "Save private key", confirm that you don't want to use a passphrase and select a filename and location for the key. Copy the public key from the part of the window marked "Public key for pasting into OpenSSH authorized_keys file". Use PuTTY to connect to the Raspberry Pi and login. Change directory to .ssh and edit the authorized_keys file (don't worry if this does not exist). Paste the key information at the end of this file and save it. Logout and then start PuTTY again. This time we need to create a saved session using the private key. Type in the IP address of the Raspberry Pi in the "Hostname (or IP address)" field and expand "SSH" under the "Connection" category. Select "Auth" (see the screen shot below). Click the "Browse" button under "Private key file for authentication" and select the file you saved from puttygen. Go back to the "Session" category and enter a short name in the saved sessions field, as shown below. Click "Save" to save the session.

    Step 4: Test The Configuration
    You should now have the ability to use scp (Mac/Linux) or pscp.exe (Windows) to copy files from your development machine to the Raspberry Pi without needing to authenticate by typing in a password (so we can automate the process in NetBeans). It's a good idea to test this using something like:

        scp /tmp/foo pi@10.0.0.2:/tmp

    on Linux or Mac, or:

        pscp.exe foo pi@raspi:/tmp

    on Windows (note that we use the saved configuration name instead of the IP address or hostname so the public key is picked up). pscp.exe is another tool available from the creators of PuTTY.

    Step 5: Configure the NetBeans Build Script
    Start NetBeans and create a new project (or open an existing one that you want to deploy automatically to the Raspberry Pi). Select the Files tab in the explorer window and expand your project. You will see a build.xml file. Double click this to edit it. This file will mostly be comments. At the end (but within the </project> tag) add the XML for <target name="-post-jar">, shown below. Here's the code again in case you want to use cut-and-paste:

        <target name="-post-jar">
          <echo level="info" message="Copying dist directory to remote Pi"/>
          <exec executable="scp" dir="${basedir}">
            <arg line="-r"/>
            <arg value="dist"/>
            <arg value="pi@10.0.0.2:NetBeans/CopyTest"/>
          </exec>
        </target>

    For Windows it will be slightly different:

        <target name="-post-jar">
          <echo level="info" message="Copying dist directory to remote Pi"/>
          <exec executable="C:\pi\putty\pscp.exe" dir="${basedir}">
            <arg line="-r"/>
            <arg value="dist"/>
            <arg value="pi@raspi:NetBeans/CopyTest"/>
          </exec>
        </target>

    You will also need to ensure that pscp.exe is in your PATH (or specify a fully qualified pathname). From now on, when you clean and build the project the dist directory will automatically be copied to the Raspberry Pi, ready for testing.

    Read the article

  • ContentEditable DIV - disabling drag and drop

    - by sonofdelphi
    Is it possible to disable the drag-and-drop functionality on elements which have the contentEditable attribute set to true? I have the following HTML page:

        <!DOCTYPE html>
        <html>
        <head><meta charset="utf-8"><title>ContentEditable</title></head>
        <body>
        <div contenteditable="true">This is editable content</div>
        <span>This is not editable content</span>
        <img src="bookmark.png" title="Click to do foo" onclick="foo()">
        </body>
        </html>

    The main problem I'm facing is that it is possible to drag and drop the image into the DIV, and it gets copied (along with the title and the click handler).
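
    A sketch of one approach (works in browsers with standard DOM event support): cancel the drag events on the editable element so the browser never performs the drop:

        <script type="text/javascript">
        var editable = document.querySelector('div[contenteditable="true"]');
        ['dragover', 'drop'].forEach(function (type) {
            editable.addEventListener(type, function (e) {
                e.preventDefault(); // cancelling both events blocks the drop
            }, false);
        });
        </script>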

    Read the article

  • Validation Summary with JQuery in MVC 2

    - by Nigel Sampson
    I'm trying to get client validation working on my ASP.NET MVC 2 web application (Visual Studio 2010). The client-side validation IS working; however, the validation summary is not. I'm including the following scripts:

        <script type="text/javascript" src="../../content/scripts/jquery-1.4.1.js"></script>
        <script type="text/javascript" src="../../content/scripts/jquery.validate.js"></script>
        <script type="text/javascript" src="../../content/scripts/MicrosoftMvcJQueryValidation.js"></script>

    I have this before the form is started:

        <% Html.EnableClientValidation(); %>

    and inside the form is:

        <%: Html.ValidationSummary("There are some errors to fix.", new { @class = "warning_box" })%>
        <p>
            <%: Html.LabelFor(m => m.Name) %><br />
            <%: Html.TextBoxFor(m => m.Name) %>
            <%: Html.ValidationMessageFor(m => m.Name, "*") %>
        </p>

    I have the latest version of MicrosoftMvcJQueryValidation.js from the MvcFutures download, but it doesn't look like it supports the validation summary. I've tried correcting this by setting extra options such as errorContainer and errorLabelContainer, but it looks like there are some more underlying issues with it. Is there an updated / better version of this file around?

    Read the article

  • MPMoviePlayerController fullscreen movie inside a UIWebView

    - by Wakazors
    Hi, I'm having a problem with UIWebView and MPMoviePlayerController: my UIWebView has a movie inside the HTML (it's a local HTML file), and I'm using HTML5 and a video tag for the video. The problem is: the user can either play the video inline, directly in the HTML, or tap the fullscreen button, but I need to know if the video is playing fullscreen. I've tried to use MPMoviePlayerDidEnterFullscreenNotification, but with no success. Does anybody know how to get this notification from the web view? Thanks in advance.
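
    One workaround that is sometimes suggested (a sketch, unverified here): the fullscreen player is hosted in its own UIWindow, so observing window visibility notifications can reveal when it appears, even if the movie notifications are not delivered for a web view's embedded player:

        // register once, e.g. in viewDidLoad
        [[NSNotificationCenter defaultCenter] addObserver:self
            selector:@selector(videoEnteredFullscreen:)
            name:UIWindowDidBecomeVisibleNotification
            object:nil];

        - (void)videoEnteredFullscreen:(NSNotification *)note {
            if (note.object != self.view.window) {
                NSLog(@"A new window appeared, probably the fullscreen player");
            }
        }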

    Read the article

  • Tab Controls affecting other Controls

    - by VBeginner
    Hopefully I've explained myself well enough this time; I can't seem to get a real answer. I'm trying to make it so that when I select certain tabs, certain controls on the left will disappear or reappear. http://img43.imageshack.us/img43/7533/scrnshotg.jpg Also, when "Stats" is selected, I need it to auto-select "Frequency". For example, on click/focus/select (whatever; nothing seems to work)...

        ComboBox.Visible = True

    Thank you.
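
    A sketch in VB.NET (the control names are assumed, since they aren't given): handle the TabControl's SelectedIndexChanged event, toggle visibility there, and check the Frequency option whenever the Stats tab comes up:

        Private Sub TabControl1_SelectedIndexChanged(ByVal sender As Object, ByVal e As EventArgs) _
                Handles TabControl1.SelectedIndexChanged
            Dim statsSelected As Boolean = (TabControl1.SelectedTab Is StatsTabPage)
            ComboBox1.Visible = statsSelected   ' hide/show the left-hand controls
            If statsSelected Then FrequencyRadioButton.Checked = True
        End Sub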

    Read the article

  • Linq to List and IEnumerable issues

    - by Otaku
    I am querying an HTML file with LINQ. It looks something like this:

        <html>
        <body>
        <div class="Players">
            <div class="role">Goalies</div>
            <div class="name">John Smith</div>
            <div class="name">Shawn Xie</div>
            <div class="role">Right Wings</div>
            <div class="name">Jack Davis</div>
            <div class="name">Carl Yuns</div>
            <div class="name">Wayne Gortonia</div>
            <div class="role">Centers</div>
            <div class="name">Lutz Gaspy</div>
            <div class="name">John Jacobs</div>
        </div>
        </body>
        </html>

    What I'm trying to do is create a list of these folks in a list of a structure called Players:

        Structure Players
            Public Name As String
            Public Position As String
        End Structure

    But I've quickly found out I don't really know what I'm doing when it comes to LINQ. I've got this far with my queries:

        Dim goalieList = From d In player.Elements _
                         Where d.Value = "Goalies" _
                         Select From g In d.ElementsAfterSelf _
                                Take While (g.@class <> "role") _
                                Select New Players With {.Position = "Goalie", _
                                                         .Name = g.Value}

        Dim centersList = From d In player.Elements _
                          Where d.Value = "Centers" _
                          Select From g In d.ElementsAfterSelf _
                                 Take While (g.@class <> "role") _
                                 Select New Players With {.Position = "Centers", _
                                                          .Name = g.Value}

    Which gets me down to the players by position, but then I can't do much with this afterwards; the result type is:

        System.Collections.Generic.IEnumerable(Of System.Collections.Generic.IEnumerable(Of Players))

    What I want to do is add these two results to a new list, like:

        Dim playersList As List(Of Players) = Nothing
        playersList.AddRange(centersList)
        playersList.AddRange(goalieList)

    So that I can then query the list and use it. But it kicks the error:

        Unable to cast object of type 'WhereSelectEnumerableIterator`2[System.Xml.Linq.XElement,System.Collections.Generic.IEnumerable`1[Players]]' to type 'System.Collections.Generic.IEnumerable`1[Players]'

    As you can see, I may really have no idea how to work with all these objects/classes. Does anyone have any insight on what I may be doing wrong and how I can resolve it?

    RESOLVED: The LINQ query needs to return a single IEnumerable, like this:

        Dim goalieList = From l In _
            (From d In players.Elements _
             Where d.Value = "Goalies" _
             Select d.ElementsAfterSelf.TakeWhile(Function(f) f.@class <> "role")) _
            Select New Players With {.Position = "Goalie", .Name = l.Value}

    and then use goalieList.ToList

    Read the article

  • Hello Operator, My Switch Is Bored

    - by Paul White
    This is a post for T-SQL Tuesday #43 hosted by my good friend Rob Farley. The topic this month is Plan Operators. I haven’t taken part in T-SQL Tuesday before, but I do like to write about execution plans, so this seemed like a good time to start. This post is in two parts. The first part is primarily an excuse to use a pretty bad play on words in the title of this blog post (if you’re too young to know what a telephone operator or a switchboard is, I hate you). The second part of the post looks at an invisible query plan operator (so to speak). 1. My Switch Is Bored Allow me to present the rare and interesting execution plan operator, Switch: Books Online has this to say about Switch: Following that description, I had a go at producing a Fast Forward Cursor plan that used the TOP operator, but had no luck. That may be due to my lack of skill with cursors, I’m not too sure. The only application of Switch in SQL Server 2012 that I am familiar with requires a local partitioned view: CREATE TABLE dbo.T1 (c1 int NOT NULL CHECK (c1 BETWEEN 00 AND 24)); CREATE TABLE dbo.T2 (c1 int NOT NULL CHECK (c1 BETWEEN 25 AND 49)); CREATE TABLE dbo.T3 (c1 int NOT NULL CHECK (c1 BETWEEN 50 AND 74)); CREATE TABLE dbo.T4 (c1 int NOT NULL CHECK (c1 BETWEEN 75 AND 99)); GO CREATE VIEW V1 AS SELECT c1 FROM dbo.T1 UNION ALL SELECT c1 FROM dbo.T2 UNION ALL SELECT c1 FROM dbo.T3 UNION ALL SELECT c1 FROM dbo.T4; Not only that, but it needs an updatable local partitioned view. We’ll need some primary keys to meet that requirement: ALTER TABLE dbo.T1 ADD CONSTRAINT PK_T1 PRIMARY KEY (c1);   ALTER TABLE dbo.T2 ADD CONSTRAINT PK_T2 PRIMARY KEY (c1);   ALTER TABLE dbo.T3 ADD CONSTRAINT PK_T3 PRIMARY KEY (c1);   ALTER TABLE dbo.T4 ADD CONSTRAINT PK_T4 PRIMARY KEY (c1); We also need an INSERT statement that references the view. Even more specifically, to see a Switch operator, we need to perform a single-row insert (multi-row inserts use a different plan shape): INSERT dbo.V1 (c1) VALUES (1); And now…the execution plan: The Constant Scan manufactures a single row with no columns. The Compute Scalar works out which partition of the view the new value should go in. The Assert checks that the computed partition number is not null (if it is, an error is returned). The Nested Loops Join executes exactly once, with the partition id as an outer reference (correlated parameter). The Switch operator checks the value of the parameter and executes the corresponding input only. If the partition id is 0, the uppermost Clustered Index Insert is executed, adding a row to table T1. If the partition id is 1, the next lower Clustered Index Insert is executed, adding a row to table T2…and so on. In case you were wondering, here’s a query and execution plan for a multi-row insert to the view: INSERT dbo.V1 (c1) VALUES (1), (2); Yuck! An Eager Table Spool and four Filters! I prefer the Switch plan. My guess is that almost all the old strategies that used a Switch operator have been replaced over time, using things like a regular Concatenation Union All combined with Start-Up Filters on its inputs. Other new (relative to the Switch operator) features like table partitioning have specific execution plan support that doesn’t need the Switch operator either. This feels like a bit of a shame, but perhaps it is just nostalgia on my part, it’s hard to know. Please do let me know if you encounter a query that can still use the Switch operator in 2012 – it must be very bored if this is the only possible modern usage! 2. 
    2. Invisible Plan Operators

    The second part of this post uses an example based on a question Dave Ballantyne asked using the SQL Sentry Plan Explorer plan upload facility. If you haven’t tried that yet, make sure you’re on the latest version of the (free) Plan Explorer software, and then click the Post to SQLPerformance.com button. That will create a site question with the query plan attached (which can be anonymized if the plan contains sensitive information). Aaron Bertrand and I keep a close eye on questions there, so if you have ever wanted to ask a query plan question of either of us, that’s a good way to do it.

    The problem

    The issue I want to talk about revolves around a query issued against a calendar table. The script below creates a simplified version and adds 100 years of per-day information to it:

      USE tempdb;
      GO
      CREATE TABLE dbo.Calendar
      (
          dt date NOT NULL,
          isWeekday bit NOT NULL,
          theYear smallint NOT NULL,

          CONSTRAINT PK__dbo_Calendar_dt
              PRIMARY KEY CLUSTERED (dt)
      );
      GO
      -- Monday is the first day of the week for me
      SET DATEFIRST 1;

      -- Add 100 years of data
      INSERT dbo.Calendar WITH (TABLOCKX)
          (dt, isWeekday, theYear)
      SELECT
          CA.dt,
          isWeekday = CASE WHEN DATEPART(WEEKDAY, CA.dt) IN (6, 7) THEN 0 ELSE 1 END,
          theYear = YEAR(CA.dt)
      FROM Sandpit.dbo.Numbers AS N
      CROSS APPLY
      (
          VALUES (DATEADD(DAY, N.n - 1, CONVERT(date, '01 Jan 2000', 113)))
      ) AS CA (dt)
      WHERE N.n BETWEEN 1 AND 36525;

    The following query counts the number of weekend days in 2013:

      SELECT Days = COUNT_BIG(*)
      FROM dbo.Calendar AS C
      WHERE theYear = 2013
      AND isWeekday = 0;

    It returns the correct result (104) using the following execution plan. The query optimizer has managed to estimate the number of rows returned from the table exactly, based purely on the default statistics created separately on the two columns referenced in the query’s WHERE clause. (Well, almost exactly, the unrounded estimate is 104.289 rows.)

    There is already an invisible operator in this query plan – a Filter operator used to apply the WHERE clause predicates. We can see it by re-running the query with the enormously useful (but undocumented) trace flag 9130 enabled. Now we can see the full picture: the whole table is scanned, returning all 36,525 rows, before the Filter narrows that down to just the 104 we want. Without the trace flag, the Filter is incorporated into the Clustered Index Scan as a residual predicate. It is a little bit more efficient than using a separate operator, but residual predicates are still something you will want to avoid where possible. The estimates are still spot on, though.

    Anyway, looking to improve the performance of this query, Dave added the following filtered index to the Calendar table:

      CREATE NONCLUSTERED INDEX Weekends
      ON dbo.Calendar (theYear)
      WHERE isWeekday = 0;

    The original query now produces a much more efficient plan. Unfortunately, the estimated number of rows produced by the seek is now wrong (365 instead of 104). What’s going on? The estimate was spot on before we added the index!

    Explanation

    You might want to grab a coffee for this bit. Using another trace flag or two (8606 and 8612) we can see that the cardinality estimates were exactly right initially. The highlighted information shows the initial cardinality estimates for the base table (36,525 rows), the result of applying the two relational selects in our WHERE clause (104 rows), and after performing the COUNT_BIG(*) group-by aggregate (1 row).
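    As an aside, the post doesn’t show the syntax used to enable these flags; one common way (an assumption on my part, not taken from the post) is a query-level QUERYTRACEON hint, with trace flag 3604 routing the diagnostic output to the client. All of these flags are undocumented, so treat this as a test-instance-only sketch:

      -- 3604: send trace output to the client messages tab
      -- 8606: show logical trees during optimization; 8612: add cardinality info
      -- (trace flags 9130, 8607 and 8608 from this post can be enabled the same way)
      SELECT Days = COUNT_BIG(*)
      FROM dbo.Calendar AS C
      WHERE theYear = 2013
      AND isWeekday = 0
      OPTION (QUERYTRACEON 3604, QUERYTRACEON 8606, QUERYTRACEON 8612);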
    All of those initial estimates are correct, but that was before cost-based optimization got involved :)

    Cost-based optimization

    When cost-based optimization starts up, the logical tree above is copied into a structure (the ‘memo’) that has one group per logical operation (roughly speaking). The logical read of the base table (LogOp_Get) ends up in group 7; the two predicates (LogOp_Select) end up in group 8 (with the details of the selections in subgroups 0–6). These two groups still have the correct cardinalities, as the trace flag 8608 output (initial memo contents) shows.

    During cost-based optimization, a rule called SelToIdxStrategy runs on group 8. Its job is to match logical selections to indexable expressions (SARGs). It successfully matches the selections (theYear = 2013, isWeekday = 0) to the filtered index, and writes a new alternative into the memo structure. The new alternative is entered into group 8 as option 1 (option 0 was the original LogOp_Select).

    The new alternative is to do nothing (PhyOp_NOP = no operation), but to instead follow the new logical instructions listed below the NOP. The LogOp_GetIdx (full read of an index) goes into group 21, and the LogOp_SelectIdx (selection on an index) is placed in group 22, operating on the result of group 21. The definition of the comparison ‘theYear = 2013’ (ScaOp_Comp downwards) was already present in the memo starting at group 2, so no new memo groups are created for that.

    New Cardinality Estimates

    The new memo groups require two new cardinality estimates to be derived. First, LogOp_GetIdx (full read of the index) gets a predicted cardinality of 10,436. This number comes from the filtered index statistics:

      DBCC SHOW_STATISTICS (Calendar, Weekends) WITH STAT_HEADER;

    The second new cardinality derivation is for the LogOp_SelectIdx applying the predicate (theYear = 2013). To get a number for this, the cardinality estimator uses statistics for the column ‘theYear’, producing an estimate of 365 rows (there are 365 days in 2013!):

      DBCC SHOW_STATISTICS (Calendar, theYear) WITH HISTOGRAM;

    This is where the mistake happens. Cardinality estimation should have used the filtered index statistics here, to get an estimate of 104 rows:

      DBCC SHOW_STATISTICS (Calendar, Weekends) WITH HISTOGRAM;

    Unfortunately, the logic has lost sight of the link between the read of the filtered index (LogOp_GetIdx) in group 21 and the selection on that index (LogOp_SelectIdx), the operation it is deriving a cardinality estimate for, in group 22. The correct cardinality estimate (104 rows) is still present in the memo, attached to group 8, but that group now has a PhyOp_NOP implementation.

    Skipping over the rest of cost-based optimization (in a belated attempt at brevity), we can see the optimizer’s final output using trace flag 8607. This output shows the (incorrect, but understandable) 365-row estimate for the index range operation, and the correct 104 estimate still attached to its PhyOp_NOP. The tree still has to go through a few post-optimizer rewrites and a ‘copy out’ from the memo structure into a tree suitable for the execution engine. One step in this process removes PhyOp_NOP, discarding its 104-row cardinality estimate as it does so.
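    As a quick sanity check on those numbers (my addition, not from the post), both estimates correspond to real row counts: roughly 104 weekend days per year × 100 years ≈ 10,436 rows in the filtered index, and 365 days with theYear = 2013 regardless of weekday. They are easy to verify directly:

      -- ~10,436: every weekend day in the 100-year table (the filtered index contents)
      SELECT WeekendRows = COUNT_BIG(*) FROM dbo.Calendar WHERE isWeekday = 0;

      -- 365: every day with theYear = 2013, weekday or not (what the theYear histogram sees)
      SELECT Rows2013 = COUNT_BIG(*) FROM dbo.Calendar WHERE theYear = 2013;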
    To finish this section on a more positive note, consider what happens if we add an OVER clause to the query aggregate. This isn’t intended to be a ‘fix’ of any sort; I just want to show you that the 104 estimate can survive and be used if later cardinality estimation needs it:

      SELECT Days = COUNT_BIG(*) OVER ()
      FROM dbo.Calendar AS C
      WHERE theYear = 2013
      AND isWeekday = 0;

    In the estimated execution plan, note the 365 estimate at the Index Seek, but the 104 lives again at the Segment! We can imagine the lost predicate ‘isWeekday = 0’ as sitting between the seek and the segment in an invisible Filter operator that drops the estimate from 365 to 104. Even though the NOP group is removed after optimization (so we don’t see it in the execution plan), bear in mind that all cost-based choices were made with the 104-row memo group present, so although things look a bit odd, it shouldn’t affect the optimizer’s plan selection.

    I should also mention that we can work around the estimation issue by including the index’s filtering column (isWeekday) in the index key:

      CREATE NONCLUSTERED INDEX Weekends
      ON dbo.Calendar (theYear, isWeekday)
      WHERE isWeekday = 0
      WITH (DROP_EXISTING = ON);

    There are some downsides to doing this, including that changes to the isWeekday column may now require Halloween Protection, but that is unlikely to be a big problem for a static calendar table ;) With the updated index in place, the original query produces an execution plan with the correct cardinality estimate (104 rows) showing at the Index Seek.

    That’s all for today; remember to let me know about any Switch plans you come across on a modern instance of SQL Server! Finally, here are some other posts of mine that cover other plan operators:

    - Segment and Sequence Project
    - Common Subexpression Spools
    - Why Plan Operators Run Backwards
    - Row Goals and the Top Operator
    - Hash Match Flow Distinct
    - Top N Sort
    - Index Spools and Page Splits
    - Singleton and Range Seeks
    - Bitmaps
    - Hash Join Performance
    - Compute Scalar

    © 2013 Paul White – All Rights Reserved. Twitter: @SQL_Kiwi

    Read the article

  • JSON ParserError

    - by ashok
    Unexpected response from the api: (app/a437x7/generate/ff) :: 665: unexpected token at '

      <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
      <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
      <head>
        <meta http-equiv="content-type" content="text/html; charset=UTF-8" />
        <title>We're sorry, but something went wrong (500)</title>
        <style type="text/css">
          body { background-color: #fff; color: #666; text-align: center; font-family: arial, sans-serif; }
          div.dialog { width: 25em; padding: 0 4em; margin: 4em auto 0 auto; border: 1px solid #ccc; border-right-color: #999; border-bottom-color: #999; }
          h1 { font-size: 100%; color: #f00; line-height: 1.5em; }
        </style>
      </head>
      <body>
        <!-- This file lives in public/500.html -->
        <div class="dialog">
          <h1>We're sorry, but something went wrong.</h1>
          <p>We've been notified about this issue and we'll take a look at it shortly.</p>
        </div>
      </body>
      </html>

    ' (JSON::ParserError)! No access! Please verify your OAuth access token and secret.

    Read the article

  • CONVERT(int, (datepart(month, @search)), (datepart(day, @search)), DateAdd(year, Years.Year - (datepart(year, @search)))

    - by MyHeadHurts
    In the query below, the CTE at the top gets all the years that the stored procedure will run for, and that part works fine. At first I just wanted to run the queries for yesterday's date across all the years, which is this predicate:

      Booked <= CONVERT(int, DateAdd(year, Years.[Year] - Year(getdate()), DateAdd(day, DateDiff(day, 2, getdate()), 1)))

    Now I realize I want the user to select a date, passed in as a parameter @search. This should be easy, because normally it would just be:

      Booked <= CONVERT(int, @search)

    The problem is that I want to do something like:

      Booked <= CONVERT(int, (datepart(month, @search)), (datepart(day, @search)), DateAdd(year, Years.Year - (datepart(year, @search)))

    Would something like that work? I don't need to worry about subtracting days, but I still need to worry about the years.

      WITH Years AS
      (
          SELECT DATEPART(year, GETDATE()) [Year]
          UNION ALL
          SELECT [Year] - 1 FROM Years
          WHERE [Year] > @YearToGet
      ),
      q_00 AS
      (
          SELECT
              DIVISION,
              DYYYY,
              sum(PARTY) AS asofPAX,
              sum(APRICE) AS asofSales
          FROM dbo.B101BookingsDetails
          INNER JOIN Years
              ON B101BookingsDetails.DYYYY = Years.[Year]
          WHERE Booked <= CONVERT(int, DateAdd(year, Years.[Year] - Year(getdate()), DateAdd(day, DateDiff(day, 2, getdate()), 1)))
          AND DYYYY = Years.[Year]
          GROUP BY DIVISION, DYYYY, Years.[Year]
          HAVING DYYYY = Years.[Year]
      ),
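    For what it's worth, a minimal sketch of one way this could work (my suggestion, not part of the original question, and assuming @search is a datetime with no time portion, converted to int the same way the existing predicate converts dates): a single DATEADD can shift @search into each CTE year while keeping its month and day, so the separate DATEPART pieces aren't needed:

      -- Sketch: move @search into Years.[Year], preserving its month and day.
      -- (DATEADD clamps 29 Feb to 28 Feb when the target year is not a leap year.)
      Booked <= CONVERT(int, DATEADD(year, Years.[Year] - DATEPART(year, @search), @search))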

    Read the article

  • How do I avoid a repetitive subquery JOIN in SQL?

    - by Karl
    Hi. In SQL Server 2008, I have one table, and I want to do something along the following lines:

      SELECT T1.stuff, T2.morestuff
      FROM
      (
          SELECT code, date1, date2 FROM Table
      ) AS T1
      INNER JOIN
      (
          SELECT code, date1, date2 FROM Table
      ) AS T2
          ON T1.code = T2.code
          AND T1.date1 = T2.date2

    The two subqueries are exactly identical. Is there any way I can do this without repeating the subquery script? Thanks, Karl
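    A common way to remove the duplication (a sketch, not part of the original question; dbo.MyTable stands in for the asker's table, and the joined columns replace the unspecified T1.stuff/T2.morestuff) is a common table expression, which is defined once and can be referenced more than once in the same statement:

      -- Define the subquery once, then self-join the CTE.
      WITH T AS
      (
          SELECT code, date1, date2
          FROM dbo.MyTable
      )
      SELECT T1.code, T1.date1, T2.date2
      FROM T AS T1
      INNER JOIN T AS T2
          ON T1.code = T2.code
          AND T1.date1 = T2.date2;

    Note that this removes the repetition from the script, not necessarily from the execution plan: SQL Server may still evaluate the CTE once per reference.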

    Read the article

  • How do I send the user to a profile page on selecting a username (using a JSON autosuggest script)?

    - by I Like PHP
    I am using an Ajax/JSON autosuggest script. When a user selects a username, I want to send them to that username's profile page. My JSON data comes back in this form:

      { query: 'hel', suggestions: ["hello world", "hell boy ", "bac to hell"], data: ["2", "26", "34"] }

    What I want is for the user to be taken to http://userProfile.php?uid=26 on selecting a username (suppose the user selects "hell boy"). How do I do this?

    Read the article
