Search Results

Search found 43847 results on 1754 pages for 'command line arguments'.

Page 379/1754

  • Fail to set a new location of _viminfo on Windows

    - by Elderry
    I am trying to move _viminfo out of my home folder to keep my home folder tidy, so I added this to my _vimrc: set viminfo+=n~\\AppBackup\\Vim\\_viminfo However, it has no effect: Vim still creates _viminfo in my home folder. When I enter the same command manually, it works fine, so I am sure the command itself is valid. How should I solve this problem? To be more specific, I am using GVim 7.4 on Windows 8.1.

    Read the article

  • How to get the Date in a batch file in a predictable format?

    - by AngryHacker
    In a batch file I need to extract the month, day and year from the date command. So I used the following, which essentially parses the Date command's substrings into variables: set Day=%Date:~3,2% set Mth=%Date:~0,2% set Yr=%Date:~6,4% This is all great, but if I deploy this batch file to a machine with different regional/country settings, it fails because the month, day and year are in different positions. How can I extract the month, day and year regardless of the date format?

    Read the article

  • Does gpresult work in Windows 7?

    - by Rod
    I'm trying to determine which GPOs are being applied to my Windows 7 machine in a Windows 2003 AD network. However, whenever I bring up a command window and run the GPRESULT command, all I get are error messages. Does gpresult work at all in Windows 7?

    Read the article

  • Copy and paste RESULTS of a file search

    - by ray023
    In order for me to get the results of a file search in a readable format, I use the following command line in a command prompt: dir *.* /s > myResultList.txt I then open that list in Excel, use fixed width format to get rid of all the stuff I don't want and then I have my list. Seems like a lot to do for something so simple. Does anyone out there have any recommendations for something that would work better than this?

    Read the article

  • Running multiple versions of PostgreSQL on the same Ubuntu server

    - by user51938
    I have PostgreSQL 8.4 and 9.0 running on the same server (Ubuntu Lucid). I installed them both via apt-get (8.4 with the default package sources, and 9.0 after adding the ppa from https://launchpad.net/~pitti/+archive/postgresql). When I run a command like "createdb" from the command line or start up the "psql" shell, PostgreSQL version 8.4 is used by default on my system. So, how do I force these commands to use PostgreSQL 9.0 instead of 8.4?
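    A hedged sketch of one way to target the newer version (this assumes the Debian/Ubuntu postgresql-common wrappers are installed and that the 9.0 cluster is named main):

        pg_lsclusters                           # list installed clusters and their versions
        createdb --cluster 9.0/main mydb        # run createdb against the 9.0 cluster
        psql --cluster 9.0/main mydb            # connect psql to the 9.0 cluster
        /usr/lib/postgresql/9.0/bin/psql mydb   # or call the versioned binary directly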

    Read the article

  • Application to make shell typing easier?

    - by ajsie
    I wonder if there is an application for the Linux bash shell that makes it easier to type. E.g. if I type "c", it would show me all commands that start with that letter, such as "cd", "cp", etc. Then if I type "cd" and press space, it would show me all the parameters for that command. Is there an application of this kind to automate command typing?
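    A minimal sketch of what stock bash and readline already offer along these lines (the package name assumes a Debian-style system; the readline settings are optional tweaks, not defaults):

        sudo apt-get install bash-completion   # argument/option completion for many common commands
        # press Tab twice after a partial word to list matches, e.g. "c<Tab><Tab>"
        bind 'set show-all-if-ambiguous on'    # list matches after a single Tab
        bind 'TAB: menu-complete'              # cycle through the matches with repeated Tab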

    Read the article

  • What all is there to know about ping while troubleshooting an internet connection?

    - by tMJ
    What is there to know about ping, and which IP addresses should I ping, while troubleshooting or diagnosing a problem with an internet connection? For example, I run the command ping 192.168.1.1 -t in the command prompt, and if there is no reply I know something is odd, but I don't understand what it is or where. I am looking for a complete list (okay, as many as you've got) of IP addresses to ping while troubleshooting my network connection, and an insight into what the status messages returned by them imply.
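    A hedged sketch of a typical escalation path (the addresses are only examples; -c is the Unix flag, use -n on Windows):

        ping -c 4 127.0.0.1      # loopback: verifies the local TCP/IP stack
        ping -c 4 192.168.1.1    # default gateway: verifies the LAN link and the router
        ping -c 4 8.8.8.8        # an external IP: verifies routing/ISP without involving DNS
        ping -c 4 example.com    # a hostname: verifies DNS resolution on top of routing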

    Read the article

  • Linux output: show only right of ':'

    - by acidzombie24
    I have forgotten a lot of my command line. I am doing cat file | grep "error" and I would like it to show everything to the right of G:/, including G:/ if possible. I figure it's an awk command but I don't know which. I tried awk '{print $8+}' but + does not work the way I hoped it would.
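    A hedged sketch using grep and sed instead of awk (assuming GNU grep for the -o flag):

        grep "error" file | grep -o 'G:/.*'              # -o prints only the part of each line that matches
        grep "error" file | sed -n 's|.*\(G:/.*\)|\1|p'  # sed alternative: keep from G:/ to end of line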

    Read the article

  • Exclude all hidden directories in UNIX find

    - by xRickerlx
    I'm doing a word search using the following command: find . -exec grep -q [some_word] '{}' \; -print -o -name .svn -prune -o -name .ssh -prune -o -name .boneyard -o -name log -prune -prune -o -name tmp -prune Is it possible to use a regex to exclude all hidden directories? Note: the current command traverses the entire tree from the current location and excludes the directories being pruned. The exclusion needs to work for any hidden directory regardless of its location.
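    A hedged sketch that prunes every dot-directory by path instead of naming each one (some_word is a placeholder):

        # prune any path that contains a hidden component, then grep the remaining files
        find . -path '*/.*' -prune -o -type f -exec grep -q 'some_word' '{}' \; -print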

    Read the article

  • Clearing terminal

    - by sldkjalsdjk
    Hi folks, I would like to issue a command from a bash script to clear the terminal it is running from:
    - I don't want to clear the bash history (history -c).
    - I don't want to issue the clear command (which moves the terminal down to the last prompt, giving the impression the terminal has been cleared, but previous output remains visible if you scroll up).
    - I want to completely remove all previous output to my terminal and have it as clean as if I were opening a new one.
    Thanks.
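    A hedged sketch; whether scrollback is really wiped depends on the terminal emulator:

        clear && printf '\e[3J'   # clear the screen, then ask xterm-compatible terminals to drop scrollback
        printf '\033c'            # alternative: full terminal reset (RIS)
        tput reset                # alternative: reset via terminfo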

    Read the article

  • What is . and .. in a directory?

    Based on the question "How to make using the command prompt less painful", what are the . and .. in the most-voted answer? I see them when I do a dir command, but they aren't visible to the user in the form of a file. In case you don't know what I mean, here's the example: . .. Su.exe Sup.txt SuperUser.COM
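    In short, . refers to the directory itself and .. to its parent; both cmd.exe and Unix shells accept them in paths. A tiny illustration (shell syntax):

        cd ..     # move up to the parent directory
        cd .      # "change" to the current directory, i.e. stay where you are
        ls ..     # list the contents of the parent directory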

    Read the article

  • Easily view a list of changes of upgraded packages.

    - by D Connors
    So, let's say I run sudo apt-get upgrade on my Lucid Lynx machine and it upgrades a couple of packages I'm interested in. Is there a command I can run that will open some kind of info or manual that tells me what changes were made in the new version of a package? For instance, say I run apt-get upgrade and it installs a new version of empathy. Do I have to go over to the project's site to review the changes made in this version, or is there a quicker command-line way?
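    A hedged sketch of tools that were available around Lucid for this (empathy is just the example package):

        aptitude changelog empathy            # fetch and show the Debian changelog for a package
        sudo apt-get install apt-listchanges  # or have changelogs displayed automatically during upgrades
        zless /usr/share/doc/empathy/changelog.Debian.gz   # or read the changelog shipped with the installed package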

    Read the article

  • SQL Statement Help... Ignore already existing rows

    - by Funchy
    I have a table with a foreign key constraint, and the command below gives me an error because it's trying to set a value that's already in the Provider table. How do I update this command to ignore the rows that already exist in the Provider table? UPDATE b SET b.iProvider_PVN = a.POIN FROM dbo.ASPVNTOPOIN_stg a INNER JOIN dbo.Provider b ON a.ASPVN = b.iProvider_PVN AND b.vcProv_Type = 'IPA' LEFT JOIN dbo.Provider c ON a.POIN = c.iProvider_PVN WHERE c.iProvider_PVN IS NULL

    Read the article

  • Using jQuery and SPServices to Display List Items

    - by Bil Simser
    I had an interesting challenge recently that I turned to Marc Anderson’s wonderful SPServices project for. If you haven’t already seen or used SPServices, please do. It’s a jQuery library that does primarily two things. First, it wraps up all of the SharePoint web services in a nice little AJAX wrapper for use in JavaScript. Second, it enhances the form editing of items in SharePoint so you’re not hacking up your List Form pages. My challenge was simple but interesting. The user wanted to display a SharePoint item page (DispForm.aspx, which already had some customization on it to display related items via this blog post from Codeless Solutions for SharePoint) but launch from an external application using the value of one of the fields in the SharePoint list. For simplicity let’s say my list is a list of customers and the related list is a list of orders for that customer. It would look something like this (click on the item to see the full image): Your first thought might be, that’s easy! Display the customer information using a DataView Web Part and filter the item using a query string to match the customer number. However there are a few problems with this idea: You’ll need to build a custom page and then attach that related orders view to it. This is a bit of a problem because the solution from Codeless Solutions relies on the Title field on the page to be displayed. On a custom page you would have to recreate all of the elements found on the DispForm.aspx page so the related view would work. The DataView Web Part doesn’t look *exactly* like what the out of the box display form page does. Not a huge problem and can be overcome with some CSS style overrides but still, more work. A DVWP showing a single record doesn’t have the same toolbar that you would using the DispForm.aspx. Not a show-stopper and you can rebuild the toolbar but it’s going to potentially require code and then there’s the security trimming, etc. that you have to get right. DVWPs are not automatically updated if you add a column to the list like DispForm.aspx is. Work, work, work. For these reasons I thought it would be easier to take the already existing (modified) DispForm.aspx page and just add some jQuery magic to the page to find the item. Why do we need to find it? DispForm.aspx relies on a querystring parameter called “ID” which then displays whatever that item ID number is in the list. Trouble is, when you’re coming in from an external app via a link, you don’t know what that internal ID is (and frankly shouldn’t). I don’t like exposing internal SharePoint IDs to the outside world for the same reason I don’t do it with database IDs. They’re internal and while it’s find to use on the site itself you don’t want external links using it. It’s volatile and can change (delete one item then re-add it back with the same data and watch any ID references break). The next thought might be to call a SharePoint web service with a CAML query to get the item ID number using some criteria (in this case, the customer number). That’s great if you have that ability but again we had an existing application we were just adding a link to. The last thing I wanted to do was to crack open the code on that sucker and start calling web services (primarily because it’s Java, but really I’m a lazy geek). However if you’re doing this and have access to call a web service that would be an option. 
Back to this problem, how do I a) find a SharePoint List Item based on some field value other than ID and b) make it low impact so I can just construct a URL to it? That’s where jQuery and SPServices came to the rescue. After spending a few hours of emails back and forth with Marc and a couple of phone calls (and updating jQuery to the latest version, duh!) it was a simple answer. First we need a reference to a) jQuery b) SPServices and c) our script. I just dropped a Content Editor Web Part, the Swiss Army Knives of Web Parts, onto the DispForm.aspx page and added these lines: <script type="text/javascript" src="http://intranet/JavaScript/jquery-1.4.2.min.js"></script> <script type="text/javascript" src="http://intranet/JavaScript/jquery.SPServices-0.5.3.min.js"></script> <script type="text/javascript" src="http://intranet/JavaScript/RedirectToID.js"> </script> Update it to point to where you keep your scripts located. I prefer to keep them all in Document Libraries as I can make changes to them without having to remote into the server (and on a multiple web front end, that’s just a PITA), it provides me with version control of sorts, and it’s quick to add new plugins and scripts. Now we can look at our RedirectToID.js script. This invokes the SPServices Library to call the GetListItems method of the Lists web service and then rewrites the URL to DispForm.aspx to use the correct SharePoint ID (the internal one). $(document).ready(function(){ var queryStringValues = $().SPServices.SPGetQueryString(); var id = queryStringValues["ID"]; if(id == "0") { var customer = queryStringValues["CustomerNumber"]; var query = "<Query><Where><Eq><FieldRef Name='CustomerNumber'/><Value Type='Text'>" + customer + "</Value></Eq></Where></Query>"; var url = window.location; $().SPServices({ operation: "GetListItems", listName: "Customers", async: false, CAMLQuery: query, completefunc: function (xData, Status) { $(xData.responseXML).find("[nodeName=z:row]").each(function(){ id = $(this).attr("ows_ID"); url = $().SPServices.SPGetCurrentSite() + "/Lists/Customers/DispForm.aspx?ID=" + id; window.location = url; }); } }); } }); What’s happening here? Line 3: We call SPServices.SPGetQueryString to get an array of query string values (a handy function in the library as I had 15 lines of code to do this which is now gone). Line 4: Extract the ID value from the query string Line 6: If we pass in “0” it means we’re looking up a field value. This allows DispForm.aspx to work like normal with SharePoint lists but lookup our values when invoked. Why ID at all? DispForm.aspx doesn’t work unless you pass in something and “0” is a *magic* number that will invoke the page but not lookup a value in the database. Line 8-15: Extract the CustomerNumber query string value, build a CAML query to find it then call the GetListitems method using SPServices Line 16: Process the results in our completefunc to iterate over all the rows (there should only be one) and extract the real ID of the item Line 17-20: Build a new URL based on the site (using a call to SPGetCurrentSite) and append our real ID to redirect to the DispForm.aspx page As you can see, it dynamically creates a CAML query for the call to the web service using the passed in value. You could even make this generic to take in different query strings, one for the field name to search for and the other for the value to find. That way it could be used for any field you want. 
For example you could bring up the correct item on the DispForm.aspx page based on customer name with something like this: http://myserver/Lists/Customers/DispForm.aspx?ID=0&FilterId=CustomerName&FilterValue=Sony Use your imagination. Some people would opt for building a custom page with a DVWP but if you want to leverage all the functionality of DispForm.aspx this might come in handy if you don’t want to rely on internal SharePoint IDs.

    Read the article

  • How to prevent Android bluetooth RFCOMM connection from dying immediately after .connect()?

    - by Gilead
    I'm trying to connect to a Zeemote (http://zeemote.com/) gaming controller from Moto Droid running 2.0.1 firmware. The test application below does connect to the device (LED flashes) but connection is dropped immediately after that. I can connect to the device perfectly fine using bluez tools (log attached as well). I'm quite at a loss here, I work on it for so long that I ran out of ideas so any help would be very much appreciated. Thanks, Max =========================================== Code: public class ZeeTest extends Activity { @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); try { test(); } catch (IOException e) { e.printStackTrace(); } } public void test() throws IOException { BluetoothDevice zee = BluetoothAdapter.getDefaultAdapter(). getRemoteDevice("00:1C:4D:02:A6:55"); Log.d("ZeeTest", "++++ Creating socket"); BluetoothSocket sock = zee.createRfcommSocketToServiceRecord( UUID.fromString("8e1f0cf7-508f-4875-b62c-fbb67fd34812")); Log.d("ZeeTest", "++++ Connecting"); sock.connect(); Log.d("ZeeTest", "++++ Connected"); final InputStream in = sock.getInputStream(); new Thread() { @Override public void run() { byte[] buffer = new byte[32]; int bytes = 0; int x = 0; Log.d("ZeeTest", "++++ Listening..."); while (x < 200) { x++; try { bytes = in.read(buffer); Log.d("ZeeTest", "++++ Read "+ bytes +" bytes"); } catch (IOException e) { // java.io.IOException: Software caused connection abort if (x % 50 == 0) { Log.d("ZeeTest", "Tried "+ x +" times ("+ bytes +")"); } try { Thread.sleep(100); } catch (InterruptedException ie) {} } } Log.d("ZeeTest", "++++ Done: thread exit"); } }.start(); Log.d("ZeeTest", "++++ Done: test()"); } } =========================================== Log: I/ActivityManager( 1169): Start proc zee.test for activity zee.test/.ZeeTest: pid=4294 uid=10084 gids={3002, 3001, 3003} I/dalvikvm( 4294): Debugger thread not active, ignoring DDM send (t=0x41504e4d l=38) D/dalvikvm( 4287): LinearAlloc 0x0 used 640700 of 5242880 (12%) I/dalvikvm( 4294): Debugger thread not active, ignoring DDM send (t=0x41504e4d l=20) D/ZeeTest ( 4294): ++++ Creating socket D/ZeeTest ( 4294): ++++ Connecting E/BluetoothEventLoop.cpp( 1169): event_filter: Received signal org.bluez.Device:PropertyChanged from /org/bluez/1240/hci0/dev_00_1C_4D_02_A6_55 I/usbd ( 1068): process_usb_uevent_message(): buffer = add@/devices/virtual/bluetooth/hci0/hci0:1 I/usbd ( 1068): main(): call select(...) E/BluetoothEventLoop.cpp( 1169): event_filter: Received signal org.bluez.Adapter:DeviceFound from /org/bluez/1240/hci0 V/BluetoothEventRedirector( 1242): Received android.bluetooth.device.action.FOUND V/BluetoothEventRedirector( 1242): Received android.bleutooth.device.action.UUID D/ZeeTest ( 4294): ++++ Connected D/ZeeTest ( 4294): ++++ Done: test() D/ZeeTest ( 4294): ++++ Listening... I/ActivityManager( 1169): Displayed activity zee.test/.ZeeTest: 2296 ms (total 2296 ms) E/BluetoothEventLoop.cpp( 1169): event_filter: Received signal org.bluez.Device:PropertyChanged from /org/bluez/1240/hci0/dev_00_1C_4D_02_A6_55 I/usbd ( 1068): process_usb_uevent_message(): buffer = remove@/devices/virtual/bluetooth/hci0/hci0:1 I/usbd ( 1068): main(): call select(...) 
V/BluetoothEventRedirector( 1242): Received android.bleutooth.device.action.UUID D/ZeeTest ( 4294): Tried 50 times (0) D/ZeeTest ( 4294): Tried 100 times (0) D/ZeeTest ( 4294): Tried 150 times (0) D/ZeeTest ( 4294): Tried 200 times (0) D/ZeeTest ( 4294): ++++ Done: thread exit =========================================== Terminal log: $ sdptool browse Inquiring ... Browsing 00:1C:4D:02:A6:55 ... $ sdptool records 00:1C:4D:02:A6:55 Service Name: Zeemote Service RecHandle: 0x10015 Service Class ID List: UUID 128: 8e1f0cf7-508f-4875-b62c-fbb67fd34812 Protocol Descriptor List: "L2CAP" (0x0100) "RFCOMM" (0x0003) Channel: 1 Language Base Attr List: code_ISO639: 0x656e encoding: 0x6a base_offset: 0x100 $ rfcomm connect /dev/tty10 00:1C:4D:02:A6:55 Connected /dev/rfcomm0 to 00:1C:4D:02:A6:55 on channel 1 Press CTRL-C for hangup # rfcomm show /dev/tty10 rfcomm0: 00:1F:3A:E4:C8:40 - 00:1C:4D:02:A6:55 channel 1 connected [reuse-dlc release-on-hup tty-attached] # cat /dev/tty10 (nothing here) # hcidump HCI sniffer - Bluetooth packet analyzer ver 1.42 device: hci0 snap_len: 1028 filter: 0xffffffff < HCI Command: Create Connection (0x01|0x0005) plen 13 > HCI Event: Command Status (0x0f) plen 4 > HCI Event: Connect Complete (0x03) plen 11 < HCI Command: Read Remote Supported Features (0x01|0x001b) plen 2 > HCI Event: Read Remote Supported Features (0x0b) plen 11 < ACL data: handle 11 flags 0x02 dlen 10 L2CAP(s): Info req: type 2 > HCI Event: Command Status (0x0f) plen 4 > HCI Event: Page Scan Repetition Mode Change (0x20) plen 7 > HCI Event: Max Slots Change (0x1b) plen 3 < HCI Command: Remote Name Request (0x01|0x0019) plen 10 > HCI Event: Command Status (0x0f) plen 4 > ACL data: handle 11 flags 0x02 dlen 16 L2CAP(s): Info rsp: type 2 result 0 Extended feature mask 0x0000 < ACL data: handle 11 flags 0x02 dlen 12 L2CAP(s): Connect req: psm 3 scid 0x0040 > HCI Event: Number of Completed Packets (0x13) plen 5 > ACL data: handle 11 flags 0x02 dlen 16 L2CAP(s): Connect rsp: dcid 0x04fb scid 0x0040 result 1 status 2 Connection pending - Authorization pending > HCI Event: Remote Name Req Complete (0x07) plen 255 > ACL data: handle 11 flags 0x02 dlen 16 L2CAP(s): Connect rsp: dcid 0x04fb scid 0x0040 result 0 status 0 Connection successful < ACL data: handle 11 flags 0x02 dlen 16 L2CAP(s): Config req: dcid 0x04fb flags 0x00 clen 4 MTU 1013 (events are properly received using bluez)

    Read the article

  • West Wind WebSurge - an easy way to Load Test Web Applications

    - by Rick Strahl
    A few months ago on a project the subject of load testing came up. We were having some serious issues with a Web application that would start spewing SQL lock errors under somewhat heavy load. These sort of errors can be tough to catch, precisely because they only occur under load and not during typical development testing. To replicate this error more reliably we needed to put a load on the application and run it for a while before these SQL errors would flare up. It’s been a while since I’d looked at load testing tools, so I spent a bit of time looking at different tools and frankly didn’t really find anything that was a good fit. A lot of tools were either a pain to use, didn’t have the basic features I needed, or are extravagantly expensive. In  the end I got frustrated enough to build an initially small custom load test solution that then morphed into a more generic library, then gained a console front end and eventually turned into a full blown Web load testing tool that is now called West Wind WebSurge. I got seriously frustrated looking for tools every time I needed some quick and dirty load testing for an application. If my aim is to just put an application under heavy enough load to find a scalability problem in code, or to simply try and push an application to its limits on the hardware it’s running I shouldn’t have to have to struggle to set up tests. It should be easy enough to get going in a few minutes, so that the testing can be set up quickly so that it can be done on a regular basis without a lot of hassle. And that was the goal when I started to build out my initial custom load tester into a more widely usable tool. If you’re in a hurry and you want to check it out, you can find more information and download links here: West Wind WebSurge Product Page Walk through Video Download link (zip) Install from Chocolatey Source on GitHub For a more detailed discussion of the why’s and how’s and some background continue reading. How did I get here? When I started out on this path, I wasn’t planning on building a tool like this myself – but I got frustrated enough looking at what’s out there to think that I can do better than what’s available for the most common simple load testing scenarios. When we ran into the SQL lock problems I mentioned, I started looking around what’s available for Web load testing solutions that would work for our whole team which consisted of a few developers and a couple of IT guys both of which needed to be able to run the tests. It had been a while since I looked at tools and I figured that by now there should be some good solutions out there, but as it turns out I didn’t really find anything that fit our relatively simple needs without costing an arm and a leg… I spent the better part of a day installing and trying various load testing tools and to be frank most of them were either terrible at what they do, incredibly unfriendly to use, used some terminology I couldn’t even parse, or were extremely expensive (and I mean in the ‘sell your liver’ range of expensive). Pick your poison. There are also a number of online solutions for load testing and they actually looked more promising, but those wouldn’t work well for our scenario as the application is running inside of a private VPN with no outside access into the VPN. Most of those online solutions also ended up being very pricey as well – presumably because of the bandwidth required to test over the open Web can be enormous. 
When I asked around on Twitter what people were using– I got mostly… crickets. Several people mentioned Visual Studio Load Test, and most other suggestions pointed to online solutions. I did get a bunch of responses though with people asking to let them know what I found – apparently I’m not alone when it comes to finding load testing tools that are effective and easy to use. As to Visual Studio, the higher end skus of Visual Studio and the test edition include a Web load testing tool, which is quite powerful, but there are a number of issues with that: First it’s tied to Visual Studio so it’s not very portable – you need a VS install. I also find the test setup and terminology used by the VS test runner extremely confusing. Heck, it’s complicated enough that there’s even a Pluralsight course on using the Visual Studio Web test from Steve Smith. And of course you need to have one of the high end Visual Studio Skus, and those are mucho Dinero ($$$) – just for the load testing that’s rarely an option. Some of the tools are ultra extensive and let you run analysis tools on the target serves which is useful, but in most cases – just plain overkill and only distracts from what I tend to be ultimately interested in: Reproducing problems that occur at high load, and finding the upper limits and ‘what if’ scenarios as load is ramped up increasingly against a site. Yes it’s useful to have Web app instrumentation, but often that’s not what you’re interested in. I still fondly remember early days of Web testing when Microsoft had the WAST (Web Application Stress Tool) tool, which was rather simple – and also somewhat limited – but easily allowed you to create stress tests very quickly. It had some serious limitations (mainly that it didn’t work with SSL),  but the idea behind it was excellent: Create tests quickly and easily and provide a decent engine to run it locally with minimal setup. You could get set up and run tests within a few minutes. Unfortunately, that tool died a quiet death as so many of Microsoft’s tools that probably were built by an intern and then abandoned, even though there was a lot of potential and it was actually fairly widely used. Eventually the tools was no longer downloadable and now it simply doesn’t work anymore on higher end hardware. West Wind Web Surge – Making Load Testing Quick and Easy So I ended up creating West Wind WebSurge out of rebellious frustration… The goal of WebSurge is to make it drop dead simple to create load tests. It’s super easy to capture sessions either using the built in capture tool (big props to Eric Lawrence, Telerik and FiddlerCore which made that piece a snap), using the full version of Fiddler and exporting sessions, or by manually or programmatically creating text files based on plain HTTP headers to create requests. I’ve been using this tool for 4 months now on a regular basis on various projects as a reality check for performance and scalability and it’s worked extremely well for finding small performance issues. I also use it regularly as a simple URL tester, as it allows me to quickly enter a URL plus headers and content and test that URL and its results along with the ability to easily save one or more of those URLs. A few weeks back I made a walk through video that goes over most of the features of WebSurge in some detail: Note that the UI has slightly changed since then, so there are some UI improvements. 
Most notably the test results screen has been updated recently to a different layout and to provide more information about each URL in a session at a glance. The video and the main WebSurge site has a lot of info of basic operations. For the rest of this post I’ll talk about a few deeper aspects that may be of interest while also giving a glance at how WebSurge works. Session Capturing As you would expect, WebSurge works with Sessions of Urls that are played back under load. Here’s what the main Session View looks like: You can create session entries manually by individually adding URLs to test (on the Request tab on the right) and saving them, or you can capture output from Web Browsers, Windows Desktop applications that call services, your own applications using the built in Capture tool. With this tool you can capture anything HTTP -SSL requests and content from Web pages, AJAX calls, SOAP or REST services – again anything that uses Windows or .NET HTTP APIs. Behind the scenes the capture tool uses FiddlerCore so basically anything you can capture with Fiddler you can also capture with Web Surge Session capture tool. Alternately you can actually use Fiddler as well, and then export the captured Fiddler trace to a file, which can then be imported into WebSurge. This is a nice way to let somebody capture session without having to actually install WebSurge or for your customers to provide an exact playback scenario for a given set of URLs that cause a problem perhaps. Note that not all applications work with Fiddler’s proxy unless you configure a proxy. For example, .NET Web applications that make HTTP calls usually don’t show up in Fiddler by default. For those .NET applications you can explicitly override proxy settings to capture those requests to service calls. The capture tool also has handy optional filters that allow you to filter by domain, to help block out noise that you typically don’t want to include in your requests. For example, if your pages include links to CDNs, or Google Analytics or social links you typically don’t want to include those in your load test, so by capturing just from a specific domain you are guaranteed content from only that one domain. Additionally you can provide url filters in the configuration file – filters allow to provide filter strings that if contained in a url will cause requests to be ignored. Again this is useful if you don’t filter by domain but you want to filter out things like static image, css and script files etc. Often you’re not interested in the load characteristics of these static and usually cached resources as they just add noise to tests and often skew the overall url performance results. In my testing I tend to care only about my dynamic requests. SSL Captures require Fiddler Note, that in order to capture SSL requests you’ll have to install the Fiddler’s SSL certificate. The easiest way to do this is to install Fiddler and use its SSL configuration options to get the certificate into the local certificate store. There’s a document on the Telerik site that provides the exact steps to get SSL captures to work with Fiddler and therefore with WebSurge. Session Storage A group of URLs entered or captured make up a Session. Sessions can be saved and restored easily as they use a very simple text format that simply stored on disk. The format is slightly customized HTTP header traces separated by a separator line. 
The headers are standard HTTP headers except that the full URL instead of just the domain relative path is stored as part of the 1st HTTP header line for easier parsing. Because it’s just text and uses the same format that Fiddler uses for exports, it’s super easy to create Sessions by hand manually or under program control writing out to a simple text file. You can see what this format looks like in the Capture window figure above – the raw captured format is also what’s stored to disk and what WebSurge parses from. The only ‘custom’ part of these headers is that 1st line contains the full URL instead of the domain relative path and Host: header. The rest of each header are just plain standard HTTP headers with each individual URL isolated by a separator line. The format used here also uses what Fiddler produces for exports, so it’s easy to exchange or view data either in Fiddler or WebSurge. Urls can also be edited interactively so you can modify the headers easily as well: Again – it’s just plain HTTP headers so anything you can do with HTTP can be added here. Use it for single URL Testing Incidentally I’ve also found this form as an excellent way to test and replay individual URLs for simple non-load testing purposes. Because you can capture a single or many URLs and store them on disk, this also provides a nice HTTP playground where you can record URLs with their headers, and fire them one at a time or as a session and see results immediately. It’s actually an easy way for REST presentations and I find the simple UI flow actually easier than using Fiddler natively. Finally you can save one or more URLs as a session for later retrieval. I’m using this more and more for simple URL checks. Overriding Cookies and Domains Speaking of HTTP headers – you can also overwrite cookies used as part of the options. One thing that happens with modern Web applications is that you have session cookies in use for authorization. These cookies tend to expire at some point which would invalidate a test. Using the Options dialog you can actually override the cookie: which replaces the cookie for all requests with the cookie value specified here. You can capture a valid cookie from a manual HTTP request in your browser and then paste into the cookie field, to replace the existing Cookie with the new one that is now valid. Likewise you can easily replace the domain so if you captured urls on west-wind.com and now you want to test on localhost you can do that easily easily as well. You could even do something like capture on store.west-wind.com and then test on localhost/store which would also work. Running Load Tests Once you’ve created a Session you can specify the length of the test in seconds, and specify the number of simultaneous threads to run each session on. Sessions run through each of the URLs in the session sequentially by default. One option in the options list above is that you can also randomize the URLs so each thread runs requests in a different order. This avoids bunching up URLs initially when tests start as all threads run the same requests simultaneously which can sometimes skew the results of the first few minutes of a test. While sessions run some progress information is displayed: By default there’s a live view of requests displayed in a Console-like window. On the bottom of the window there’s a running total summary that displays where you’re at in the test, how many requests have been processed and what the requests per second count is currently for all requests. 
Note that for tests that run over a thousand requests a second it’s a good idea to turn off the console display. While the console display is nice to see that something is happening and also gives you slight idea what’s happening with actual requests, once a lot of requests are processed, this UI updating actually adds a lot of CPU overhead to the application which may cause the actual load generated to be reduced. If you are running a 1000 requests a second there’s not much to see anyway as requests roll by way too fast to see individual lines anyway. If you look on the options panel, there is a NoProgressEvents option that disables the console display. Note that the summary display is still updated approximately once a second so you can always tell that the test is still running. Test Results When the test is done you get a simple Results display: On the right you get an overall summary as well as breakdown by each URL in the session. Both success and failures are highlighted so it’s easy to see what’s breaking in your load test. The report can be printed or you can also open the HTML document in your default Web Browser for printing to PDF or saving the HTML document to disk. The list on the right shows you a partial list of the URLs that were fired so you can look in detail at the request and response data. The list can be filtered by success and failure requests. Each list is partial only (at the moment) and limited to a max of 1000 items in order to render reasonably quickly. Each item in the list can be clicked to see the full request and response data: This particularly useful for errors so you can quickly see and copy what request data was used and in the case of a GET request you can also just click the link to quickly jump to the page. For non-GET requests you can find the URL in the Session list, and use the context menu to Test the URL as configured including any HTTP content data to send. You get to see the full HTTP request and response as well as a link in the Request header to go visit the actual page. Not so useful for a POST as above, but definitely useful for GET requests. Finally you can also get a few charts. The most useful one is probably the Request per Second chart which can be accessed from the Charts menu or shortcut. Here’s what it looks like:   Results can also be exported to JSON, XML and HTML. Keep in mind that these files can get very large rather quickly though, so exports can end up taking a while to complete. Command Line Interface WebSurge runs with a small core load engine and this engine is plugged into the front end application I’ve shown so far. There’s also a command line interface available to run WebSurge from the Windows command prompt. Using the command line you can run tests for either an individual URL (similar to AB.exe for example) or a full Session file. By default when it runs WebSurgeCli shows progress every second showing total request count, failures and the requests per second for the entire test. A silent option can turn off this progress display and display only the results. The command line interface can be useful for build integration which allows checking for failures perhaps or hitting a specific requests per second count etc. It’s also nice to use this as quick and dirty URL test facility similar to the way you’d use Apache Bench (ab.exe). Unlike ab.exe though, WebSurgeCli supports SSL and makes it much easier to create multi-URL tests using either manual editing or the WebSurge UI. 
Current Status Currently West Wind WebSurge is still in Beta status. I’m still adding small new features and tweaking the UI in an attempt to make it as easy and self-explanatory as possible to run. Documentation for the UI and specialty features is also still a work in progress. I plan on open-sourcing this product, but it won’t be free. There’s a free version available that provides a limited number of threads and request URLs to run. A relatively low cost license removes the thread and request limitations. Pricing info can be found on the Web site – there’s an introductory price which is $99 at the moment which I think is reasonable compared to most other for pay solutions out there that are exorbitant by comparison… The reason code is not available yet is – well, the UI portion of the app is a bit embarrassing in its current monolithic state. The UI started as a very simple interface originally that later got a lot more complex – yeah, that never happens, right? Unless there’s a lot of interest I don’t foresee re-writing the UI entirely (which would be ideal), but in the meantime at least some cleanup is required before I dare to publish it :-). The code will likely be released with version 1.0. I’m very interested in feedback. Do you think this could be useful to you and provide value over other tools you may or may not have used before? I hope so – it already has provided a ton of value for me and the work I do that made the development worthwhile at this point. You can leave a comment below, or for more extensive discussions you can post a message on the West Wind Message Board in the WebSurge section. Microsoft MVPs and Insiders get a free License If you’re a Microsoft MVP or a Microsoft Insider you can get a full license for free. Send me a link to your current, official Microsoft profile and I’ll send you a not-for-resale license. Send any messages to [email protected]. Resources For more info on WebSurge and to download it to try it out, use the following links. West Wind WebSurge Home Download West Wind WebSurge Getting Started with West Wind WebSurge Video © Rick Strahl, West Wind Technologies, 2005-2014. Posted in ASP.NET.

    Read the article

  • Unable to resolve class org.codehaus.groovy.grails.plugins.springsecurity.Secured

    - by Alan
    Hi, I'm new to Grails and I'm seeing the error "Groovy:unable to resolve class org.codehaus.groovy.grails.plugins.springsecurity.Secured" when I open a Grails app in SpringSource Tool Suite (STS) and build the project. However, the application does run when I issue the run-app command, and I can log in. Also, when I look in my .grails folder I can see that 'grails-acegi-0.5.2.zip' has been downloaded. When I issue the upgrade command from the Grails command prompt I get a message telling me that all dependencies have been resolved. Thanks for any help.

    Read the article

  • SqlCeException: The column cannot be modified. [ Column name = id ]

    - by pek
    I am simply trying to add a single row in the database but I keep getting an exception. I created a local database and added a single table: users. It consists of two columns: "id" and "name". I only made the id primary key (not auto-increment or anything else). When I run the following code: string execPath = Path.GetDirectoryName(Assembly.GetExecutingAssembly().GetName().CodeBase); string dbfile = execPath + @"\LocalDatabase.sdf"; SqlCeConnection conn = new SqlCeConnection("datasource=" + dbfile); conn.Open(); string command = "INSERT INTO users VALUES('1','pek')"; Debug.WriteLine(command); SqlCeCommand comm = conn.CreateCommand(); comm.CommandText = command; comm.ExecuteNonQuery(); I get the following Exception at "comm.ExecuteNonQuery();": SqlCeException was unhandled The column cannot be modified. [ Column name = id ] What's with the "modified" part?

    Read the article

  • How to Integrate ILMerge into Visual Studio Build Process to Merge Assemblies?

    - by AMissico
    I want to merge one .NET DLL assembly and one C# Class Library project referenced by a VB.NET Console Application project into one command-line console executable. I can do this with ILMerge from the command-line, but I want to integrate this merging of reference assemblies and projects into the Visual Studio project. From my reading, I understand that I can do this through a MSBuild Task or a Target and just add it to a C#/VB.NET Project file, but I can find no specific example since MSBuild is large topic. Moreover, I find some references that add the ILMerge command to the Post-build event. How do I integrate ILMerge into a Visual Studio (C#/VB.NET) project, which are just MSBuild projects, to merge all referenced assemblies (copy-local=true) into one assembly? How does this tie into a possible ILMerge.Targets file? Is it better to use the Post-build event?

    Read the article

  • Customize Team Build 2010 – Part 11: Speed up opening my build process template

    In the series the following parts have been published Part 1: Introduction Part 2: Add arguments and variables Part 3: Use more complex arguments Part 4: Create your own activity Part 5: Increase AssemblyVersion Part 6: Use custom type for an argument Part 7: How is the custom assembly found Part 8: Send information to the build log Part 9: Impersonate activities (run under other credentials) Part 10: Include Version Number in the Build Number Part 11: Speed up opening my build process template Part 12: How to debug my custom activities Part 13: Get control over the Build Output Part 14: Execute a PowerShell script Part 15: Fail a build based on the exit code of a console application When you open the build process template, it takes 15-30 seconds until it opens. When you are in the process of creating your custom build process template, this can be very frustrating. Thanks to Ed Blankenship, who has found a little trick to speed up the opening of the template, it now only takes a few seconds. Create a file called empty.xaml and place the following text in it: <Activity xmlns="http://schemas.microsoft.com/netfx/2009/xaml/activities"> </Activity> Open this file in Visual Studio. In the toolbox panel, add a new tab called “Team Foundation Build Activities”. Note that it is important to get the tab name correct, because if it is not correct the activities will be reloaded. Inside the new tab, right click and select “Choose Items”. Click the Browse button. Load the file C:\Windows\Microsoft.NET\assembly\GAC_MSIL\Microsoft.TeamFoundation.Build.Workflow\v4.0_10.0.0.0__b03f5f7f11d50a3a\Microsoft.TeamFoundation.Build.Workflow.dll Click OK to add the toolbox items to the tab. Create another new tab called “Team Foundation LabManagement Activities”. Inside the new tab, right click and select “Choose Items”. Click the Browse button. Load the file C:\Windows\Microsoft.NET\assembly\GAC_MSIL\Microsoft.TeamFoundation.Lab.Workflow.Activities\v4.0_10.0.0.0__b03f5f7f11d50a3a\Microsoft.TeamFoundation.Lab.Workflow.Activities.dll Click OK to add the toolbox items to the tab. You can download the full solution at BuildProcess.zip. It will include the sources of every part and will continue to evolve.

    Read the article

  • Displaying an image on a LED matrix with a Netduino

    - by Bertrand Le Roy
    In the previous post, we’ve been flipping bits manually on three ports of the Netduino to simulate the data, clock and latch pins that a shift register expected. We did all that in order to control one line of a LED matrix and create a simple Knight Rider effect. It was rightly pointed out in the comments that the Netduino has built-in knowledge of the sort of serial protocol that this shift register understands through a feature called SPI. That will of course make our code a whole lot simpler, but it will also make it a whole lot faster: writing to the Netduino ports is actually not that fast, whereas SPI is very, very fast. Unfortunately, the Netduino documentation for SPI is severely lacking. Instead, we’ve been reliably using the documentation for the Fez, another .NET microcontroller. To send data through SPI, we’ll just need  to move a few wires around and update the code. SPI uses pin D11 for writing, pin D12 for reading (which we won’t do) and pin D13 for the clock. The latch pin is a parameter that can be set by the user. This is very close to the wiring we had before (data on D11, clock on D12 and latch on D13). We just have to move the latch from D13 to D10, and the clock from D12 to D13. The code that controls the shift register has slimmed down considerably with that change. Here is the new version, which I invite you to compare with what we had before: public class ShiftRegister74HC595 { protected SPI Spi; public ShiftRegister74HC595(Cpu.Pin latchPin) : this(latchPin, SPI.SPI_module.SPI1) { } public ShiftRegister74HC595(Cpu.Pin latchPin, SPI.SPI_module spiModule) { var spiConfig = new SPI.Configuration( SPI_mod: spiModule, ChipSelect_Port: latchPin, ChipSelect_ActiveState: false, ChipSelect_SetupTime: 0, ChipSelect_HoldTime: 0, Clock_IdleState: false, Clock_Edge: true, Clock_RateKHz: 1000 ); Spi = new SPI(spiConfig); } public void Write(byte buffer) { Spi.Write(new[] {buffer}); } } All we have to do here is configure SPI. The write method couldn’t be any simpler. Everything is now handled in hardware by the Netduino. We set the frequency to 1MHz, which is largely sufficient for what we’ll be doing, but it could potentially go much higher. The shift register addresses the columns of the matrix. The rows are directly wired to ports D0 to D7 of the Netduino. The code writes to only one of those eight lines at a time, which will make it fast enough. The way an image is displayed is that we light the lines one after the other so fast that persistence of vision will give the illusion of a stable image: foreach (var bitmap in matrix.MatrixBitmap) { matrix.OnRow(row, bitmap, true); matrix.OnRow(row, bitmap, false); row++; } Now there is a twist here: we need to run this code as fast as possible in order to display the image with as little flicker as possible, but we’ll eventually have other things to do. In other words, we need the code driving the display to run in the background, except when we want to change what’s being displayed. Fortunately, the .NET Micro Framework supports multithreading. In our implementation, we’ve added an Initialize method that spins a new thread that is tied to the specific instance of the matrix it’s being called on. public LedMatrix Initialize() { DisplayThread = new Thread(() => DoDisplay(this)); DisplayThread.Start(); return this; } I quite like this way to spin a thread. As you may know, there is another, built-in way to contextualize a thread by passing an object into the Start method. 
For the method to work, the thread must have been constructed with a ParameterizedThreadStart delegate, which takes one parameter of type object. I like to use object as little as possible, so instead I’m constructing a closure with a Lambda, currying it with the current instance. This way, everything remains strongly-typed and there’s no casting to do. Note that this method would extend perfectly to several parameters. Of note as well is the return value of Initialize, a common technique to add some fluency to the API and enabling the matrix to be instantiated and initialized in a single line: using (var matrix = new LedMS88SR74HC595().Initialize()) The “using” in the previous line is because we have implemented IDisposable so that the matrix kills the thread and clears the display when the user code is done with it: public void Dispose() { Clear(); DisplayThread.Abort(); } Thanks to the multi-threaded version of the matrix driver class, we can treat the display as a simple bitmap with a very synchronous programming model: matrix.Set(someimage); while (button.Read()) { Thread.Sleep(10); } Here, the call into Set returns immediately and from the moment the bitmap is set, the background display thread will constantly continue refreshing no matter what happens in the main thread. That enables us to wait or read a button’s port on the main thread knowing that the current image will continue displaying unperturbed and without requiring manual refreshing. We’ve effectively hidden the implementation of the display behind a convenient, synchronous-looking API. Pretty neat, eh? Before I wrap up this post, I want to talk about one small caveat of using SPI rather than driving the shift register directly: when we got to the point where we could actually display images, we noticed that they were a mirror image of what we were sending in. Oh noes! Well, the reason for it is that SPI is sending the bits in a big-endian fashion, in other words backwards. Now sure you could fix that in software by writing some bit-level code to reverse the bits we’re sending in, but there is a far more efficient solution than that. We are doing hardware here, so we can simply reverse the order in which the outputs of the shift register are connected to the columns of the matrix. That’s switching 8 wires around once, as compared to doing bit operations every time we send a line to display. All right, so bringing it all together, here is the code we need to write to display two images in succession, separated by a press on the board’s button: var button = new InputPort(Pins.ONBOARD_SW1, false, Port.ResistorMode.Disabled); using (var matrix = new LedMS88SR74HC595().Initialize()) { // Oh, prototype is so sad! var sad = new byte[] { 0x66, 0x24, 0x00, 0x18, 0x00, 0x3C, 0x42, 0x81 }; DisplayAndWait(sad, matrix, button); // Let's make it smile! var smile = new byte[] { 0x42, 0x18, 0x18, 0x81, 0x7E, 0x3C, 0x18, 0x00 }; DisplayAndWait(smile, matrix, button); } And here is a video of the prototype running: The prototype in action I’ve added an artificial delay between the display of each row of the matrix to clearly show what’s otherwise happening very fast. This way, you can clearly see each of the two images being displayed line by line. Next time, we’ll do no hardware changes, focusing instead on building a nice programming model for the matrix, with sprites, text and hardware scrolling. Fun stuff. By the way, can any of my reader guess where we’re going with all that? 
The code for this prototype can be downloaded here: http://weblogs.asp.net/blogs/bleroy/Samples/NetduinoLedMatrixDriver.zip

    Read the article

  • Cast error on SQLDataReader (entlib 5.0, asp.net 3.5, vb)

    - by Phil
    My site is using enterprise library v 5.0. Mainly the DAAB. Some functions such as executescalar, executedataset are working as expected. The problems appear when I start to use Readers I have this function in my includes class: Public Function AssignedDepartmentDetail(ByVal Did As Integer) As SqlDataReader Dim reader As SqlDataReader Dim Command As SqlCommand = db.GetSqlStringCommand("select seomthing from somewhere where something = @did") db.AddInParameter(Command, "@did", Data.DbType.Int32, Did) reader = db.ExecuteReader(Command) reader.Read() Return reader End Function This is called from my aspx.vb like so: reader = includes.AssignedDepartmentDetail(Did) If reader.HasRows Then TheModule = reader("templatefilename") PageID = reader("id") Else TheModule = "#" End If This gives the following error on db.ExecuteReader line: Unable to cast object of type 'Microsoft.Practices.EnterpriseLibrary.Data.RefCountingDataReader' to type 'System.Data.SqlClient.SqlDataReader'. Can anyone shed any light on how I go about getting this working. Will I always run into problems when dealing with readers via entlib?

    Read the article

  • BNF – how to read syntax?

    - by Piotr Rodak
    A few days ago I read post of Jen McCown (blog) about her idea of blogging about random articles from Books Online. I think this is a great idea, even if Jen says that it’s not exciting or sexy. I noticed that many of the questions that appear on forums and other media arise from pure fact that people asking questions didn’t bother to read and understand the manual – Books Online. Jen came up with a brilliant, concise acronym that describes very well the category of posts about Books Online – RTFM365. I take liberty of tagging this post with the same acronym. I often come across questions of type – ‘Hey, i am trying to create a table, but I am getting an error’. The error often says that the syntax is invalid. 1 CREATE TABLE dbo.Employees 2 (guid uniqueidentifier CONSTRAINT DEFAULT Guid_Default NEWSEQUENTIALID() ROWGUIDCOL, 3 Employee_Name varchar(60) 4 CONSTRAINT Guid_PK PRIMARY KEY (guid) ); 5 The answer is usually(1), ‘Ok, let me check it out.. Ah yes – you have to put name of the DEFAULT constraint before the type of constraint: 1 CREATE TABLE dbo.Employees 2 (guid uniqueidentifier CONSTRAINT Guid_Default DEFAULT NEWSEQUENTIALID() ROWGUIDCOL, 3 Employee_Name varchar(60) 4 CONSTRAINT Guid_PK PRIMARY KEY (guid) ); Why many people stumble on syntax errors? Is the syntax poorly documented? No, the issue is, that correct syntax of the CREATE TABLE statement is documented very well in Books Online and is.. intimidating. Many people can be taken aback by the rather complex block of code that describes all intricacies of the statement. However, I don’t know better way of defining syntax of the statement or command. The notation that is used to describe syntax in Books Online is a form of Backus-Naur notatiion, called BNF for short sometimes. This is a notation that was invented around 50 years ago, and some say that even earlier, around 400 BC – would you believe? Originally it was used to define syntax of, rather ancient now, ALGOL programming language (in 1950’s, not in ancient India). If you look closer at the definition of the BNF, it turns out that the principles of this syntax are pretty simple. Here are a few bullet points: italic_text is a placeholder for your identifier <italic_text_in_angle_brackets> is a definition which is described further. [everything in square brackets] is optional {everything in curly brackets} is obligatory everything | separated | by | operator is an alternative ::= “assigns” definition to an identifier Yes, it looks like these six simple points give you the key to understand even the most complicated syntax definitions in Books Online. Books Online contain an article about syntax conventions – have you ever read it? Let’s have a look at fragment of the CREATE TABLE statement: 1 CREATE TABLE 2 [ database_name . [ schema_name ] . | schema_name . ] table_name 3 ( { <column_definition> | <computed_column_definition> 4 | <column_set_definition> } 5 [ <table_constraint> ] [ ,...n ] ) 6 [ ON { partition_scheme_name ( partition_column_name ) | filegroup 7 | "default" } ] 8 [ { TEXTIMAGE_ON { filegroup | "default" } ] 9 [ FILESTREAM_ON { partition_scheme_name | filegroup 10 | "default" } ] 11 [ WITH ( <table_option> [ ,...n ] ) ] 12 [ ; ] Let’s look at line 2 of the above snippet: This line uses rules 3 and 5 from the list. So you know that you can create table which has specified one of the following. 
just name – table will be created in default user schema schema name and table name – table will be created in specified schema database name, schema name and table name – table will be created in specified database, in specified schema database name, .., table name – table will be created in specified database, in default schema of the user. Note that this single line of the notation describes each of the naming schemes in deterministic way. The ‘optionality’ of the schema_name element is nested within database_name.. section. You can use either database_name and optional schema name, or just schema name – this is specified by the pipe character ‘|’. The error that user gets with execution of the first script fragment in this post is as follows: Msg 156, Level 15, State 1, Line 2 Incorrect syntax near the keyword 'DEFAULT'. Ok, let’s have a look how to find out the correct syntax. Line number 3 of the BNF fragment above contains reference to <column_definition>. Since column_definition is in angle brackets, we know that this is a reference to notion described further in the code. And indeed, the very next fragment of BNF contains syntax of the column definition. 1 <column_definition> ::= 2 column_name <data_type> 3 [ FILESTREAM ] 4 [ COLLATE collation_name ] 5 [ NULL | NOT NULL ] 6 [ 7 [ CONSTRAINT constraint_name ] DEFAULT constant_expression ] 8 | [ IDENTITY [ ( seed ,increment ) ] [ NOT FOR REPLICATION ] 9 ] 10 [ ROWGUIDCOL ] [ <column_constraint> [ ...n ] ] 11 [ SPARSE ] Look at line 7 in the above fragment. It says, that the column can have a DEFAULT constraint which, if you want to name it, has to be prepended with [CONSTRAINT constraint_name] sequence. The name of the constraint is optional, but I strongly recommend you to make the effort of coming up with some meaningful name yourself. So the correct syntax of the CREATE TABLE statement from the beginning of the article is like this: 1 CREATE TABLE dbo.Employees 2 (guid uniqueidentifier CONSTRAINT Guid_Default DEFAULT NEWSEQUENTIALID() ROWGUIDCOL, 3 Employee_Name varchar(60) 4 CONSTRAINT Guid_PK PRIMARY KEY (guid) ); That is practically everything you should know about BNF. I encourage you to study the syntax definitions for various statements and commands in Books Online, you can find really interesting things hidden there. Technorati Tags: SQL Server,t-sql,BNF,syntax   (1) No, my answer usually is a question – ‘What error message? What does it say?’. You’d be surprised to know how many people think I can go through time and space and look at their screen at the moment they received the error.

    Read the article

  • windbg dv cmd fail - Private symbols (symbols.pri) are required for locals

    - by leif
    I have a C++ application compiled with VS 2008 with a PDB file enabled. After I tried to use the dv command to display local vars, it showed the following message: Unable to enumerate locals, HRESULT 0x80004005 Private symbols (symbols.pri) are required for locals. Type ".hh dbgerr005" for details. Note that: I've run the "dv" command on the correct frame, which has the symbol file. I can use the "dt" command successfully. I've included the symbol path and the PDB file has been loaded successfully, as follows: start end module name 00400000 0043f000 helloworld (private pdb symbols) c:\test... Does anyone know the cause? Is there any configuration I missed to enable watching local vars? Or is the VS 2008 PDB not supported by WinDbg (I'm using the latest WinDbg version)?

    Read the article
