Search Results

  • Does anyone know what causes this error? VC++ with Visual Assert

    - by TerryJohnson
    Hi, does anyone know what causes this error in Visual Studio 2008 with Visual Assert? Thanks.

        1>------ Build started: Project: ChessRound1, Configuration: Debug Win32 ------
        1>Compiling...
        1>stdafx.cpp
        1>C:\Program Files\Microsoft Visual Studio 9.0\VC\include\xlocnum(135) : error C2857: '#include' statement specified with the /Ycstdafx.h command-line option was not found in the source file
        1>Build log was saved at "file://c:\Users\Admin1\Documents\Visual Studio 2008\Projects\ChessRound1\ChessRound1\Debug\BuildLog.htm"
        1>ChessRound1 - 1 error(s), 0 warning(s)
        ========== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped ==========
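
    For reference, C2857 with /Ycstdafx.h means the compiler never saw the #include "stdafx.h" directive in the file it was building the precompiled header from. A minimal sketch of the layout the PCH options expect (the project header name is illustrative):

        // stdafx.cpp -- compiled with /Ycstdafx.h; its only job is to build the PCH.
        // Nothing (not even another #include) should come before this line:
        #include "stdafx.h"

        // every other .cpp in the project -- compiled with /Yustdafx.h --
        // must also start with the same include:
        #include "stdafx.h"
        #include "chessboard.h"   // hypothetical project header, after the PCH include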

  • autoexp.dat does not seem to take effect in the Visual Studio C++ 2005 debugger

    - by Pradyot
    autoexp.dat does not seem to take effect in the Visual Studio C++ 2005 debugger. I am not trying to add any custom rules; I just want commonly used stuff like std::string to display in a friendlier manner. Does anyone know how I can accomplish this? Is this just a question of specifying a path to the autoexp.dat file somewhere? The file is available under the Visual Studio installation directory.
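
    For reference, the native debugger picks autoexp.dat up automatically from Common7\Packages\Debugger under the VS installation directory; there is no path setting, and edits only take effect when the next debug session starts. A sketch of what's inside (the CPoint rule is from the stock file; the friendlier std::string previews live in the [Visualizer] section, and only native, non-/clr code uses any of this):

        ; <VS dir>\Common7\Packages\Debugger\autoexp.dat  (VS2005)
        ; [AutoExpand] maps a type to a one-line preview:
        [AutoExpand]
        CPoint =x=<x> y=<y>

        ; [Visualizer] holds the richer STL rules, including the stock
        ; std::basic_string preview shipped with VS2005.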

  • Qt C++ XML, validating against a DTD?

    - by Airjoe
    Is there a way to validate an XML file against a DTD with Qt's XML handling? I've tried googling around but can't seem to get a straight answer. If Qt doesn't include support for validating an XML file, what might the process of implementing validation myself look like? Any good references to start with for validating XML against a spec? Thanks for the help!
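
    For what it's worth, Qt's built-in SAX/DOM classes read DTDs but don't validate against them; the closest built-in option is XSD validation via QtXmlPatterns (Qt 4.6+). A sketch, in case switching the spec from DTD to XSD is an option; otherwise an external library such as Xerces-C (which does validate DTDs) is the usual route:

        #include <QFile>
        #include <QUrl>
        #include <QXmlSchema>
        #include <QXmlSchemaValidator>

        // Requires QT += xmlpatterns in the .pro file (Qt 4.6+).
        bool validateAgainstXsd(const QString &xmlPath, const QString &xsdPath)
        {
            QXmlSchema schema;
            if (!schema.load(QUrl::fromLocalFile(xsdPath)) || !schema.isValid())
                return false;   // the schema itself didn't parse

            QFile xml(xmlPath);
            if (!xml.open(QIODevice::ReadOnly))
                return false;

            QXmlSchemaValidator validator(schema);
            return validator.validate(&xml, QUrl::fromLocalFile(xmlPath));
        }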

  • Generate a key for software developed using VB.NET

    - by Pandiya Chendur
    Hi guys, I've developed salary-calculating software using VB.NET. It's working fine and I've converted it to an exe file. My drawback is that it can be copied and pasted to another system very easily. I want to generate a key for the exe file: the key should be entered while installing, and once installation is completed, the key should not be usable again. Is this a secure approach, or can you give me some ideas on how it can be done?
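
    A rough sketch of one common scheme, hedged: derive the key from something machine-specific plus a secret, so a key issued for one machine fails on another. All names here are illustrative, and a hash check like this only deters casual copying, since the secret still ships inside the exe:

        Imports System.Security.Cryptography
        Imports System.Text

        Module LicenseKey
            ' Hypothetical secret; a determined user can still extract it from the exe.
            Private Const Salt As String = "my-app-secret"

            ' Derive a short key from a machine-specific value plus the secret.
            Public Function MakeKey(ByVal machineId As String) As String
                Using sha As SHA256 = SHA256.Create()
                    Dim hash As Byte() = sha.ComputeHash(Encoding.UTF8.GetBytes(machineId & Salt))
                    Return BitConverter.ToString(hash, 0, 8).Replace("-", "")
                End Using
            End Function

            ' At install time: ask for a key and compare it to the expected one.
            Public Function IsKeyValid(ByVal enteredKey As String) As Boolean
                Return String.Equals(enteredKey, MakeKey(Environment.MachineName), _
                                     StringComparison.OrdinalIgnoreCase)
            End Function
        End Module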

  • A little more help with writing to a memory buffer with libjpeg

    - by Richard Knop
    So I have managed to find another question discussing how to use libjpeg to compress an image to JPEG, with code that is supposed to work: "Compressing IplImage to JPEG using libjpeg in OpenCV". Here's the code (it compiles OK):

        /* This is a custom destination manager for jpeglib that enables the use of
           memory-to-memory compression. See IJG documentation for details. */
        typedef struct {
            struct jpeg_destination_mgr pub; /* base class */
            JOCTET* buffer;                  /* buffer start address */
            int bufsize;                     /* size of buffer */
            size_t datasize;                 /* final size of compressed data */
            int* outsize;                    /* user pointer to datasize */
            int errcount;                    /* counts up write errors due to buffer overruns */
        } memory_destination_mgr;

        typedef memory_destination_mgr* mem_dest_ptr;

        /* ------------------------------------------------------------- */
        /* MEMORY DESTINATION INTERFACE METHODS                          */
        /* ------------------------------------------------------------- */

        /* This function is called by the library before any data gets written. */
        METHODDEF(void) init_destination (j_compress_ptr cinfo)
        {
            mem_dest_ptr dest = (mem_dest_ptr)cinfo->dest;
            dest->pub.next_output_byte = dest->buffer; /* set destination buffer */
            dest->pub.free_in_buffer = dest->bufsize;  /* input buffer size */
            dest->datasize = 0;                        /* reset output size */
            dest->errcount = 0;                        /* reset error count */
        }

        /* This function is called by the library if the buffer fills up.
           I just reset the destination pointer and buffer size here. Note that
           this behavior, while preventing seg faults, will lead to invalid
           output streams as data is over-written. */
        METHODDEF(boolean) empty_output_buffer (j_compress_ptr cinfo)
        {
            mem_dest_ptr dest = (mem_dest_ptr)cinfo->dest;
            dest->pub.next_output_byte = dest->buffer;
            dest->pub.free_in_buffer = dest->bufsize;
            ++dest->errcount; /* need to increase error count */
            return TRUE;
        }

        /* Usually the library wants to flush output here. I will calculate the
           output buffer size here. Note that results become incorrect once
           empty_output_buffer was called; this situation is signalled by errcount. */
        METHODDEF(void) term_destination (j_compress_ptr cinfo)
        {
            mem_dest_ptr dest = (mem_dest_ptr)cinfo->dest;
            dest->datasize = dest->bufsize - dest->pub.free_in_buffer;
            if (dest->outsize) *dest->outsize += (int)dest->datasize;
        }

        /* Override the default destination manager initialization provided by
           jpeglib. Since we want to use memory-to-memory compression, we need
           to use our own destination manager. */
        GLOBAL(void) jpeg_memory_dest (j_compress_ptr cinfo, JOCTET* buffer, int bufsize, int* outsize)
        {
            mem_dest_ptr dest;

            /* first call for this instance - need to set up */
            if (cinfo->dest == 0) {
                cinfo->dest = (struct jpeg_destination_mgr *)
                    (*cinfo->mem->alloc_small) ((j_common_ptr) cinfo, JPOOL_PERMANENT,
                                                sizeof (memory_destination_mgr));
            }

            dest = (mem_dest_ptr) cinfo->dest;
            dest->bufsize = bufsize;
            dest->buffer = buffer;
            dest->outsize = outsize;

            /* set method callbacks */
            dest->pub.init_destination = init_destination;
            dest->pub.empty_output_buffer = empty_output_buffer;
            dest->pub.term_destination = term_destination;
        }

        /* ------------------------------------------------------------- */
        /* MEMORY SOURCE INTERFACE METHODS                               */
        /* ------------------------------------------------------------- */

        /* Called before data is read. */
        METHODDEF(void) init_source (j_decompress_ptr dinfo)
        {
            /* nothing to do here, really. I mean, I'm not lazy or something,
               but... we're actually through here. */
        }

        /* Called if the decoder wants some bytes that we cannot provide... */
        METHODDEF(boolean) fill_input_buffer (j_decompress_ptr dinfo)
        {
            /* we can't do anything about this. This might happen if the provided
               buffer is either invalid with regards to its content, or too small
               a bufsize has been given. */
            /* fail. */
            return FALSE;
        }

        /* From the IJG docs: "it's not clear that being smart is worth much trouble",
           so I save myself some trouble by ignoring this bit. */
        METHODDEF(void) skip_input_data (j_decompress_ptr dinfo, INT32 num_bytes)
        {
            /* There might be more data to skip than available in the buffer.
               This clearly is an error, so screw this mess. */
            if ((size_t)num_bytes > dinfo->src->bytes_in_buffer) {
                dinfo->src->next_input_byte = 0; /* no buffer byte */
                dinfo->src->bytes_in_buffer = 0; /* no input left */
            } else {
                dinfo->src->next_input_byte += num_bytes;
                dinfo->src->bytes_in_buffer -= num_bytes;
            }
        }

        /* Finished with decompression. */
        METHODDEF(void) term_source (j_decompress_ptr dinfo)
        {
            /* Again. Absolute laziness. Nothing to do here. Boring. */
        }

        GLOBAL(void) jpeg_memory_src (j_decompress_ptr dinfo, unsigned char* buffer, size_t size)
        {
            struct jpeg_source_mgr* src;

            /* first call for this instance - need to set up */
            if (dinfo->src == 0) {
                dinfo->src = (struct jpeg_source_mgr *)
                    (*dinfo->mem->alloc_small) ((j_common_ptr) dinfo, JPOOL_PERMANENT,
                                                sizeof (struct jpeg_source_mgr));
            }

            src = dinfo->src;
            src->next_input_byte = buffer;
            src->bytes_in_buffer = size;
            src->init_source = init_source;
            src->fill_input_buffer = fill_input_buffer;
            src->skip_input_data = skip_input_data;
            src->term_source = term_source;
            /* the IJG recommend to use their function - as I don't know **** about
               how to do better, I follow this recommendation */
            src->resync_to_restart = jpeg_resync_to_restart;
        }

    All I need to do is replace jpeg_stdio_dest in my program with this code:

        int numBytes = 0;                   /* size of jpeg after compression */
        char * storage = new char[150000];  /* storage buffer */
        JOCTET *jpgbuff = (JOCTET*)storage; /* JOCTET pointer to buffer */
        jpeg_memory_dest(&cinfo, jpgbuff, 150000, &numBytes);

    So I need some help to incorporate those four lines into this function, which works now but writes to a file instead of memory:

        int write_jpeg_file( char *filename )
        {
            struct jpeg_compress_struct cinfo;
            struct jpeg_error_mgr jerr;

            /* this is a pointer to one row of image data */
            JSAMPROW row_pointer[1];
            FILE *outfile = fopen( filename, "wb" );

            if ( !outfile ) {
                printf("Error opening output jpeg file %s\n!", filename );
                return -1;
            }

            cinfo.err = jpeg_std_error( &jerr );
            jpeg_create_compress(&cinfo);
            jpeg_stdio_dest(&cinfo, outfile);

            /* Setting the parameters of the output file here */
            cinfo.image_width = width;
            cinfo.image_height = height;
            cinfo.input_components = bytes_per_pixel;
            cinfo.in_color_space = color_space;

            /* default compression parameters, we shouldn't be worried about these */
            jpeg_set_defaults( &cinfo );

            /* Now do the compression... */
            jpeg_start_compress( &cinfo, TRUE );

            /* like reading a file, this time write one row at a time */
            while( cinfo.next_scanline < cinfo.image_height ) {
                row_pointer[0] = &raw_image[ cinfo.next_scanline * cinfo.image_width * cinfo.input_components ];
                jpeg_write_scanlines( &cinfo, row_pointer, 1 );
            }

            /* similar to reading a file, clean up after we're done compressing */
            jpeg_finish_compress( &cinfo );
            jpeg_destroy_compress( &cinfo );
            fclose( outfile );

            /* success code is 1! */
            return 1;
        }

    Could anybody help me out a bit with it? I've tried meddling with it, but I am not sure how to do it. If I just replace the line jpeg_stdio_dest(&cinfo, outfile); it's not going to work. There is more stuff that needs to be changed in that function, and I am a little lost among all those pointers and the memory management.
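
    A minimal sketch (not tested) of how the swap might look: drop the FILE handling entirely, call the jpeg_memory_dest above right after jpeg_create_compress, and read the compressed size from the outsize pointer once compression finishes. It assumes the same globals as the file-based version (raw_image, width, height, bytes_per_pixel, color_space); the function name and parameters are illustrative:

        /* Compress into a caller-supplied buffer instead of a file. On return,
           *outsize holds the number of JPEG bytes written into outbuf. */
        int write_jpeg_memory( unsigned char *outbuf, int bufsize, int *outsize )
        {
            struct jpeg_compress_struct cinfo;
            struct jpeg_error_mgr jerr;
            JSAMPROW row_pointer[1];

            cinfo.err = jpeg_std_error( &jerr );
            jpeg_create_compress( &cinfo );

            /* the only structural change: memory destination instead of
               fopen + jpeg_stdio_dest (and no fclose at the end) */
            *outsize = 0;
            jpeg_memory_dest( &cinfo, (JOCTET*)outbuf, bufsize, outsize );

            cinfo.image_width = width;
            cinfo.image_height = height;
            cinfo.input_components = bytes_per_pixel;
            cinfo.in_color_space = color_space;
            jpeg_set_defaults( &cinfo );

            jpeg_start_compress( &cinfo, TRUE );
            while ( cinfo.next_scanline < cinfo.image_height ) {
                row_pointer[0] = &raw_image[ cinfo.next_scanline * cinfo.image_width
                                             * cinfo.input_components ];
                jpeg_write_scanlines( &cinfo, row_pointer, 1 );
            }
            jpeg_finish_compress( &cinfo );   /* term_destination fills *outsize */
            jpeg_destroy_compress( &cinfo );
            return 1;
        }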

  • Best way to migrate export/import from SQL Server to Oracle

    - by matao
    Hi guys! I'm faced with needing access, for reporting, to some data that lives in Oracle and other data that lives in a SQL Server 2000 database. For various reasons these live on different sides of a firewall. Now we're looking at doing an export/import from SQL Server to Oracle, and I'd like some advice on the best way to go about it. The procedure will need to be fully automated and run nightly, so that excludes using the SQL developer tools. I also can't make a live link between databases from our (Oracle) side, as the firewall is in the way. The data needs to be transformed in the process from a star schema to a denormalised table ready for reporting.

    What I'm thinking about is writing a monster query for SQL Server (which I mostly have already) that will denormalise and read out the data into a flat file, using the SQL Server equivalent of sqlplus as a scheduled task, and dump it into a well-known location; then on the Oracle side, have a cron job that copies down the file, loads it with SQL*Loader, rebuilds indexes, etc. This is all doable, but very manual.

    Is there one, or a combination, of FOSS or standard Oracle/SQL Server tools that could automate this for me? The irreducible complexity is the query on one side and building indexes on the other, but I would love not to have to write the CSV-dumping detail or the SQL*Loader script: just say "dump this view out to CSV" on one side, and on the other "truncate and insert into this table from CSV", without worrying about mapping column names and all the other arcane sqlldr voodoo. Best practices? Thoughts? Comments?

    Edit: I have about 50+ columns, all of varying types and lengths, in my dataset, which is why I'd prefer not to have to write out how to generate and map each single column.
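
    For the two ends of the pipe, the standard command-line tools are bcp for bulk export on the SQL Server side and SQL*Loader on the Oracle side. A hedged sketch (server names, credentials, and table/view names are placeholders; the column list in the control file does still have to be spelled out once):

        rem -- SQL Server side: dump the denormalised view to CSV (bcp ships with SQL Server)
        bcp "SELECT * FROM reporting_db.dbo.denormalised_view" queryout report.csv -c -t"," -S MYSERVER -U user -P pass

        -- Oracle side: load.ctl for sqlldr, truncate-and-reload each night
        LOAD DATA
        INFILE 'report.csv'
        TRUNCATE
        INTO TABLE report_table
        FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
        TRAILING NULLCOLS
        (col1, col2, col3)

        # invoked from the cron job after the copy:
        sqlldr userid=user/pass control=load.ctl log=load.log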

  • Request.Browser.Platform not returning iPad, OSX, or Windows7

    - by rockinthesixstring
    I'm working on some advanced browser detection, and I've downloaded the MDBF browser file from CodePlex. Unfortunately my Request.Browser.Platform, along with a few other things, is returning "Unknown" on my iPad, on Mac OS X (Snow Leopard), and on Windows 7. Does anyone know of a good advanced .browser file out there that does the same thing for non-mobile devices as the MDBF does for mobile devices?
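
    In case it helps, Platform comes straight from whatever browser-definition files are installed, so an unmatched user agent falls through to "Unknown". A hypothetical App_Browsers entry for the iPad (illustrative, not from the MDBF; the parentID must name a definition that actually exists in your installed files):

        <!-- App_Browsers/iPad.browser -->
        <browsers>
          <browser id="iPad" parentID="Safari">
            <identification>
              <userAgent match="iPad" />
            </identification>
            <capabilities>
              <capability name="platform" value="iPad" />
              <capability name="isMobileDevice" value="true" />
            </capabilities>
          </browser>
        </browsers>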

  • How to link different Servlets together?

    - by Harry Pham
    First of all, I did not use Spring MVC. :) :) Just wanted to get that out first. Now, what I have is different JSP pages making calls to different servlets. All the pieces work great individually, but I need to link them together. If all of the JSP pages made GET requests it would be easy, since I would just pass a type via the URL; on my servlet side I would enumerate the parameters, determine the type, and delegate to the right servlet. But not all JSP pages make GET requests; some make POST requests via forms. Here is an example from A.jsp:

        $.getJSON('GenericServlet?type=A', ...

    and in GenericServlet.java:

        String type = request.getParameter("type");
        if (type.equals("A")) {
            // Somehow delegate to ServletA (not sure how to do that yet :))
        }

    But in B.jsp I have something like this:

        <form action="GenericServlet" method="post">
          <table border=0 cellspacing=10 cellpadding=0>
            <tr>
              <td>User Name:</td>
              <td><input type="text" name="username" size=22/></td>
            </tr>
            <tr>
              <td>Password:</td>
              <td><input type="password" name="password" size=22/></td>
            </tr>
          </table>
          <input type="submit" value="Create User" />
        </form>

    It's hard for me to determine in GenericServlet.java that this request needs to go to ServletB. See the sketch below.
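
    For the delegation itself, the servlet API's RequestDispatcher forwards both GETs and POSTs, preserving method, parameters, and body; the POST form just needs a hidden field like <input type="hidden" name="type" value="B"/> so the type travels with it. A sketch (the /ServletA and /ServletB mappings are assumed to exist in web.xml):

        import java.io.IOException;
        import javax.servlet.RequestDispatcher;
        import javax.servlet.ServletException;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        public class GenericServlet extends HttpServlet {
            // service() sees both GETs and POSTs, so one switch covers A.jsp and B.jsp
            @Override
            protected void service(HttpServletRequest request, HttpServletResponse response)
                    throws ServletException, IOException {
                String type = request.getParameter("type");
                String target = "A".equals(type) ? "/ServletA" : "/ServletB"; // assumed mappings
                RequestDispatcher rd = request.getRequestDispatcher(target);
                rd.forward(request, response); // hands the whole request to the target servlet
            }
        }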

  • Windows XP cannot read DVD burned on Ubuntu

    - by webcrawl
    The story: I burned a DVD disc on Ubuntu using Brasero. When the file burning was complete, I cancelled the burning of the checksum to the DVD. Then I put that DVD in Windows XP (SP3) and copied all the files from the DVD to the hard drive (no errors while copying). When that was done, I discovered that none of the copied files are readable. What's more, all the files on the DVD itself are also unreadable, even though all file names and directories are in their place.

    What I found out:

    - Windows detects that the disc is in CDFS (CD-ROM File System).
    - The disc is clean as new and has no scratches.
    - All files, when opened in Notepad++, look like "NULNULNULNULNUL" in one line. The size of the files is normal.
    - Other discs recognized as CDFS can be read with no problems.

    What I tried:

    - Starting the CDFS service in the Windows registry. Result: a new device in Windows Device Manager (JUBS JGH2ZCT SCSI CdRom Device).
    - Removing my CD/DVD device from Device Manager. Result: Windows restarted the system and reinstalled the driver.

    So... how can I read the DVD, when I have no access to any other PC or any other OS?

  • Cannot access new folders created in my Apache2 document-root

    - by user235101
    I have tried to create a new folder 'test' in the document root of my Apache2 installation; however, whenever I try to access it from a web browser it gives me a 403 (Forbidden) error. My virtual hosts file:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            ServerName REMOVED
            DocumentRoot /var/www/

            <Directory />
                Options FollowSymLinks
                AllowOverride All
                AuthType Digest
                AuthName "documentroot"
                AuthDigestProvider file
                AuthUserFile /etc/apache2/htpasswd
                Require user REMOVED
                AllowOverride Indexes
            </Directory>

            <Directory /var/www/>
                Options FollowSymLinks
                Options -Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>

            <Directory /var/www/share/>
                Order Deny,Allow
                Allow from all
                Satisfy any
            </Directory>

            <Directory /var/www/REMOVED/>
                Order Deny,Allow
                Allow from all
                Satisfy any
            </Directory>

            <Directory /var/www/stream/>
                Order Deny,Allow
                Allow from all
                Satisfy any
            </Directory>

            <Directory /var/www/test>
                Order Deny,Allow
                Allow from all
                Satisfy any
            </Directory>

            ErrorLog /var/log/apache2/error.log

            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn

            CustomLog /var/log/apache2/access.log combined

            <Directory /var/www/REMOVED>
                AuthType Digest
                AuthName "rutorrent"
                AuthDigestDomain /var/www/REMOVED/ http://46.105.127.19/REMOVED
                AuthDigestProvider file
                AuthUserFile /etc/apache2/htpasswd
                Require valid-user
                SetEnv R_ENV "/var/www/REMOVED"
                AllowOverride Indexes
            </Directory>
        </VirtualHost>

    Image of the permissions: [screenshot]

    Other information:

    - If I create a new folder (and use chmod --reference to ensure it has the same permissions as an accessible folder), I get a 403 client-side.
    - If I copy the folder 'rapidleech' to the name 'rapidleech1', it will let me access 'rapidleech1', but no longer 'rapidleech', until I delete the copy.
    - In my logs I found nothing in error.log, and only the delivered 403 in access.log.
    - All the appropriate users are members of www-data.
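
    A few generic diagnostics that may narrow a per-directory 403 down, hedged since the root cause here isn't obvious (paths from the question; all standard tools):

        # permissions and ownership of every component along the path
        namei -mo /var/www/test

        # can Apache's user actually read the directory?
        sudo -u www-data ls /var/www/test

        # watch for the exact denial reason while reproducing the 403
        tail -f /var/log/apache2/error.log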

  • What is wrong with this SimPy installation?

    - by dmindreader
    Alright, I have tried the python setup.py install command from my command prompt a bunch of times, and this is what I'm getting: [screenshot]

    And when trying from SimPy.Simulation import * in IDLE, I get this:

        Traceback (most recent call last):
          File "C:/Python30/pruebas/prueba1", line 1, in <module>
            from SimPy.Simulation import *
          File "C:\Python30\SimPy\Simulation.py", line 320
            print 'SimPy.Simulation %s' %__version__,
                                      ^
        SyntaxError: invalid syntax
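
    The traceback itself gives it away: that line uses Python 2 print-statement syntax, which Python 3.0 no longer accepts, so this SimPy version targets Python 2 and should be installed under a Python 2.x interpreter. For comparison:

        # The failing line in SimPy/Simulation.py is Python 2 syntax:
        #
        #     print 'SimPy.Simulation %s' % __version__,
        #
        # which is a SyntaxError on Python 3.0. The Python 3 spelling would be:
        print('SimPy.Simulation %s' % '2.0', end=' ')   # '2.0' stands in for __version__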

  • SQLAlchemy Custom Type Which Contains Multiple Columns

    - by Kekoa
    I would like to represent a datatype as a single column in my model, but really the data will be stored in multiple columns in the database. I cannot find any good resources on how to do this in SQLAlchemy. I would like my model to look like this (a simplified example using geometry instead of my real problem, which is harder to explain):

        class Line3D(DeclarativeBase):
            start_point = Column(my.custom.Point3D)
            end_point = Column(my.custom.Point3D)

    This way I could assign an object with the (x, y, z) components of the point at once, without setting them individually. If I had to separate each component, this could get ugly, especially if each class has several of these composite objects. I would combine the values into one encoded field, except that I need to query each value separately at times.

    I was able to find out how to make custom types using a single column in the documentation, but there's no indication that I can map a single type to multiple columns. I suppose I could accomplish this by using a separate table, with each column as a foreign key, but in my case I don't think it makes sense to have a one-to-one mapping from each point to a separate table, and this still does not give the ability to set the related values all at once.
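
    In case it's useful, SQLAlchemy does support exactly this under the name "composite column types" (sqlalchemy.orm.composite): one mapped attribute backed by several real columns, each of which stays individually queryable. A sketch (column and class names are illustrative; DeclarativeBase as in the question):

        from sqlalchemy import Column, Float, Integer
        from sqlalchemy.orm import composite

        class Point3D(object):
            def __init__(self, x, y, z):
                self.x, self.y, self.z = x, y, z
            def __composite_values__(self):   # how the ORM splits the object into columns
                return self.x, self.y, self.z
            def __eq__(self, other):
                return isinstance(other, Point3D) and \
                       (self.x, self.y, self.z) == (other.x, other.y, other.z)

        class Line3D(DeclarativeBase):
            __tablename__ = 'lines'
            id = Column(Integer, primary_key=True)
            start_x = Column(Float)
            start_y = Column(Float)
            start_z = Column(Float)
            end_x = Column(Float)
            end_y = Column(Float)
            end_z = Column(Float)
            # one attribute, three columns: assignable as a whole object,
            # but start_x etc. remain queryable on their own
            start_point = composite(Point3D, start_x, start_y, start_z)
            end_point = composite(Point3D, end_x, end_y, end_z)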

  • Is there a good .Net CSS aggregator that combines style sheets and minifies them?

    - by vfilby
    I am looking to see if there is an open-source/free project that provides a CSS manager. I am looking for this mainly for performance tweaking, and hoping there is a ready-made project rather than building from scratch. Features I am looking for include:

    - Combines multiple .css files into a single .css file
    - Optionally minifies the resulting .css file
    - Works well with .Net (a user control, custom handler, etc.)

    Is there a project out there that handles this?

  • iOS layout; I'm not getting it

    - by Tbee
    Well, "not getting it" is too harsh; I've got it working in what for me is a logical setup, but it does not seem to be what iOS deems logical. So I'm not getting something.

    Suppose I've got an app that shows two pieces of information: a date and a table. According to the MVC approach I've got three MVCs at work here: one for the date, one for the table, and one that takes both these MVCs and makes them into a screen, wiring them up. The master MVC knows how/where it wants to lay out the two sub-MVCs. Each detail MVC only takes care of its own children within the bounds that were specified by the master MVC. Something like:

        - (void)loadView {
            MVC* mvc1 = [[MVC1 alloc] initWithFrame:...];
            [self.view addSubview:mvc1.view];
            MVC* mvc2 = [[MVC2 alloc] initWithFrame:...];
            [self.view addSubview:mvc2.view];
        }

    If the above is logical (which it is for me), then I would expect any MVC class to have an "initWithFrame" constructor. But an MVC does not; only views have this. Why? How would one correctly lay out nested MVCs? (Naturally I do not have just these two; the detail MVCs have sub-MVCs again.)
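
    One detail that may explain the mismatch: in UIKit only views have initWithFrame:; view controllers are created with init/initWithNibName:, and the parent frames the child controller's view after creating it. A sketch along those lines (class names from the question, frame values illustrative, manual-retain-release era):

        - (void)loadView {
            self.view = [[[UIView alloc] initWithFrame:[UIScreen mainScreen].applicationFrame] autorelease];

            MVC1 *mvc1 = [[MVC1 alloc] init];              // a controller has no frame of its own
            mvc1.view.frame = CGRectMake(0, 0, 320, 100);  // the controller's *view* gets the frame
            [self.view addSubview:mvc1.view];

            MVC2 *mvc2 = [[MVC2 alloc] init];
            mvc2.view.frame = CGRectMake(0, 100, 320, 360);
            [self.view addSubview:mvc2.view];
        }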

  • Rails 3 + Nginx + Passenger -- Routing index

    - by Bijan
    I have no index.html file in my public folder; my Rails routes file handles this, and it works fine when I run 'rails server' on my machine. I'm trying to deploy the app, and I have Passenger and nginx running. When I run rails server on my local machine it works fine, but on the production server it just tries to access a static file. Here's my nginx conf:

        worker_processes  1;
        #pid        logs/nginx.pid;

        events {
            worker_connections  1024;
        }

        http {
            passenger_root /usr/lib/ruby/gems/1.9.1/gems/passenger-3.0.2;
            passenger_ruby /usr/bin/ruby;

            include       mime.types;
            default_type  application/octet-stream;

            sendfile           on;
            keepalive_timeout  65;

            server {
                listen       80;
                server_name  mmjconsult.com;
                root         /www/mmjs/public;
                access_log   logs/host.access.log;
                passenger_enabled on;
            }
        }

    Thank you for any help. I really appreciate it.

  • Weekly cron job executing twice

    - by yes123
    Hi guys, I have placed a .sh file that runs a PHP script weekly. This script should run only once, but every Sunday it runs at:

    - 1:30 am
    - 6:50 am

    Any way to fix this?

    Add 1: /etc/cron.weekly/cronweek:

        #!/bin/bash
        /usr/bin/php -f /home/my/path/to/script/cronweek.php

    Add 2: crontab file:

        # /etc/crontab: system-wide crontab
        # Unlike any other crontab you don't have to run the `crontab'
        # command to install the new version when you edit this file
        # and files in /etc/cron.d. These files also have username fields,
        # that none of the other crontabs do.

        SHELL=/bin/sh
        PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

        # m h dom mon dow user  command
        17 *  * * *  root  cd / && run-parts --report /etc/cron.hourly
        25 6  * * *  root  test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
        47 6  * * 7  root  test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )
        52 6  1 * *  root  test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly )
        #
        */1 * * * * root /usr/local/rtm/bin/rtm 35 > /dev/null 2> /dev/null
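
    Since anacron also runs the cron.* directories on Debian/Ubuntu (that's what the test -x /usr/sbin/anacron guards are for), a duplicate registration of the job is the usual suspect. A quick, hedged way to list every place it could be hooked in (standard Debian paths):

        # look for the script in every scheduling location
        grep -r "cronweek" /etc/crontab /etc/cron.d /etc/cron.daily /etc/cron.weekly /etc/anacrontab 2>/dev/null

        # per-user crontabs can hold a second copy too
        ls -l /var/spool/cron/crontabs/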

  • MSBuild: automate collecting of db migration scripts?

    - by P Dub
    Summary of environment:

    - ASP.NET web application (source stored in svn)
    - SQL Server database (database schema, tables/sprocs, stored in svn)
    - DB version is synced with the web application assembly version (stored in table 'CurrentVersion')
    - CI Hudson server that checks out the web app from the repo and runs a custom MSBuild file to publish/package the app

    My MSBuild script updates the assembly version of the web app (Major.Minor.Revision.Build) on each build: the 'Revision' is set to the currently checked-out svn revision and the 'Build' to the Hudson build number (incremented on each automated build). This way I can match the app to a specific trunk revision and also get other build stats from the Hudson build number.

    I'd like to automate the collecting of migration scripts (updated sprocs etc.) to add to the zip package. I figure that by comparing the svn revision of the database that has yet to be deployed to, against the revision being deployed, I can find which db files have changed in the trunk since the last deployment to that database/environment. This could easily be achieved by manually calling svn diff -r REVNO:REVNO to list changed .sql files, and those files could then be added to the package by hand. It would be great if this could be automated.

    Firstly, I imagine I'll have to write a custom task to check the version of the db that has yet to be deployed to. After that I'm quite unsure. Does anyone have any suggestions on how this could be achieved through an MSBuild task, either existing or custom? Finally, I'll have to auto-generate a script, added to the package, that updates the database version table so as to be in sync with the application.
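
    As a starting point, the svn half at least is scriptable from MSBuild with the stock Exec and ReadLinesFromFile tasks; a sketch (the property names are illustrative, and a custom task would still be needed to read the deployed revision out of CurrentVersion):

        <!-- Sketch: list the .sql files changed since the last deployed revision -->
        <Target Name="CollectMigrationScripts">
          <Exec Command="svn diff --summarize -r $(DeployedRevision):$(BuildRevision) $(DbRepoUrl) > changed-files.txt"
                WorkingDirectory="$(BuildDir)" />
          <ReadLinesFromFile File="$(BuildDir)\changed-files.txt">
            <Output TaskParameter="Lines" ItemName="ChangedDbFiles" />
          </ReadLinesFromFile>
          <!-- each line looks like "M       path/to/proc.sql";
               filter on .sql and copy the items into the zip staging folder here -->
        </Target>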

  • Group partial class shortcut

    - by Fred Yang
    I have two .cs files for one partial class. I know that I can modify the project file to group them together, the way VS.NET groups *.aspx and *.aspx.cs, but is there a way to do that in the VS.NET IDE directly?
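
    For reference, the project-file edit in question is the DependentUpon metadata, which is the same mechanism that nests .aspx.cs under .aspx; as far as I know there is no built-in IDE gesture for arbitrary files, only add-ins. The hand edit looks like this (file names illustrative):

        <ItemGroup>
          <Compile Include="MyClass.cs" />
          <Compile Include="MyClass.Part2.cs">
            <DependentUpon>MyClass.cs</DependentUpon>  <!-- nests under MyClass.cs -->
          </Compile>
        </ItemGroup>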

  • Where can I find the rake tasks delivered with Rails?

    - by ash34
    I am looking for tasks like tmp:clear or db:migrate. Where can I find the code for these tasks? I remember seeing them before but don't recollect where. Also, is there a way I can set some global variables in a .rake file that can be accessed by all tasks in that file, without passing them as arguments to each task? Thanks, ash
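
    For the first question, the stock tasks ship inside the Rails gems (in Rails 2.x they live under railties/lib/tasks/*.rake); one hedged way to track them down locally, assuming RubyGems:

        # locate the gem directory, then list the bundled rake task files
        gem env gemdir
        find "$(gem env gemdir)/gems" -path "*railties*" -name "*.rake"

    For the second question: a plain variable assigned at the top of a .rake file is captured by the task blocks' closures, so it's visible to every task defined in that file without being passed around.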

  • App Engine: modifying an XML file

    - by Hoax
    Hi, I am writing a GWT app running on App Engine which needs to modify an XML file server-side. As far as I know, there is no way to modify an XML file in the WAR directory or any subdirectories. What other possibilities do I have to store that data? Can I use the datastore somehow, or should I look for storage space somewhere else and access it there (if so, any recommendations)? Any help is appreciated!
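
    If the XML only needs to survive between requests, the datastore can hold it as a single Text property (entities are limited to 1 MB); a sketch with the low-level datastore API (kind and key names are illustrative):

        import com.google.appengine.api.datastore.DatastoreService;
        import com.google.appengine.api.datastore.DatastoreServiceFactory;
        import com.google.appengine.api.datastore.Entity;
        import com.google.appengine.api.datastore.EntityNotFoundException;
        import com.google.appengine.api.datastore.KeyFactory;
        import com.google.appengine.api.datastore.Text;

        public class XmlStore {
            private final DatastoreService ds = DatastoreServiceFactory.getDatastoreService();

            public void save(String name, String xml) {
                Entity e = new Entity("XmlDocument", name); // kind + key name
                e.setProperty("content", new Text(xml));    // Text allows long strings
                ds.put(e);
            }

            public String load(String name) throws EntityNotFoundException {
                Entity e = ds.get(KeyFactory.createKey("XmlDocument", name));
                return ((Text) e.getProperty("content")).getValue();
            }
        }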

  • Handling close-to-impossible collisions on should-be-unique values

    - by balpha
    There are many systems that depend on the uniqueness of some particular value. Anything that uses GUIDs comes to mind (e.g. the Windows registry or other databases), but also things that create a hash from an object to identify it and thus need this hash to be unique. A hash table usually doesn't mind if two objects have the same hash, because the hashing is just used to break the objects down into categories, so that on lookup not all objects in the table, but only those objects in the same category (bucket), have to be compared for identity with the searched object. Other implementations however (seem to) depend on the uniqueness. My example (which is what led me to asking this) is Mercurial's revision IDs. An entry on the Mercurial mailing list correctly states:

        The odds of the changeset hash colliding by accident in your first billion
        commits is basically zero. But we will notice if it happens. And you'll get
        to be famous as the guy who broke SHA1 by accident.

    But even the tiniest probability doesn't mean impossible. Now, I don't want an explanation of why it's totally okay to rely on the uniqueness (this has been discussed here, for example). This is very clear to me. Rather, I'd like to know (maybe by means of examples from your own work):

    - Are there any best practices as to covering these improbable cases anyway?
    - Should they be ignored, because it's more likely that particularly strong solar winds lead to faulty hard disk reads?
    - Should they at least be tested for, if only to fail with an "I give up, you have done the impossible" message to the user?
    - Or should even these cases be handled gracefully?

    For me, especially the following are interesting, although they are somewhat touchy-feely:

    - If you don't handle these cases, what do you do against gut feelings that don't listen to probabilities?
    - If you do handle them, how do you justify this work (to yourself and others), considering there are more probable cases you don't handle, like a supernova?

  • Why am I unable to create a trigger using my SqlCommand?

    - by acidzombie24
    The line cmd.ExecuteNonQuery() fails. cmd.CommandText is:

        CREATE TRIGGER subscription_trig_0
        ON subscription AFTER INSERT AS
        UPDATE user_data SET msg_count=msg_count+1
        FROM user_data JOIN INSERTED ON user_data.id = INSERTED.recipient;

    The exception: Incorrect syntax near the keyword 'TRIGGER'.

    Then, using VS 2010 connected to the very same file (an .mdf file), I run the query above and get a success message. WTF!
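
    One known constraint that fits the symptom: T-SQL requires CREATE TRIGGER to be the first statement in its batch, and GO (which the VS query window uses to split batches) is a tool convention, not T-SQL, so sending it through SqlCommand fails too. A hedged sketch of splitting the batches in code (System.Data.SqlClient; connection setup and the preceding statement are assumed):

        // Each batch gets its own ExecuteNonQuery -- never send "GO" through SqlCommand.
        foreach (string batch in new[] {
            "UPDATE some_table SET x = 1;",   // hypothetical preceding statement
            @"CREATE TRIGGER subscription_trig_0
              ON subscription AFTER INSERT AS
              UPDATE user_data SET msg_count = msg_count + 1
              FROM user_data JOIN INSERTED ON user_data.id = INSERTED.recipient;"
        })
        {
            using (var cmd = new SqlCommand(batch, conn))
            {
                cmd.ExecuteNonQuery();   // CREATE TRIGGER is alone in its batch here
            }
        }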
