Search Results

Search found 8233 results on 330 pages for 'unresolved external'.


  • What the VC++ compiler/linker does when building a C++ project with Managed Extensions

    - by ???
    The initial problem: I rebuilt a C++ project with debug symbols and copied it to a test machine. The project's output is an external COM server (an .exe file). When calling a COM interface function, the RPC call fails with COMException(0x800706BE): "The remote procedure call failed." Per the COM HRESULT design, a facility code of 7 means it is actually a Win32 error, and Win32 error code 0x6BE is the above-mentioned "remote procedure call failed."

    All I did was replace the COM server .exe file; the original file works fine. When I looked into the project, I found it is a C++ project with Managed Extensions. Checking the DLL with Reflector shows two additional .NET assembly references, yet the project settings say nothing about these two extra references. I turned on the compiler's "show includes" option and the linker's verbose library output and tried to work out whether the assemblies are referenced indirectly via a .h file. I collected all the .h files and grepped them for '#using', '#import', and the assembly file name itself. There really is a '#using' in one of the .h files, but it is not relevant to the referenced assemblies.

    As for the linked .lib files, only one is a side product of another Managed-Extensions-enabled C++ project; all the others are produced by pure, traditional C++ projects. I checked the output DLL assembly of that managed project, and it does NOT reference the two assemblies. I even tried to capture access to the additional assembly files with Sysinternals' Filemon and Procmon, but the rebuild process does not access them.

    I'm very confused about the compile and link process of a VC++/CLI project: where did the additional assembly references slip into the final assembly? Thanks in advance for any help.


  • Broken console in Maven project using Netbeans

    - by Maciek Sawicki
    Hi, I have a strange problem with my NetBeans + Maven installation. This is the shortest code that reproduces the problem:

        public class App {
            public static void main(String[] args) {
                // Create a scanner to read from keyboard
                Scanner scanner = new Scanner(System.in);
                Scanner s = new Scanner(System.in);
                String param = s.next();
                System.out.println(param);
            }
        }

    When I run it as a Maven project inside NetBeans, the console seems to be broken. It just ignores my input, and it looks like an infinite loop at System.out.println(param);. However, the same code works fine when it is compiled as a "Java Application" project. It also works OK if I build and run it from cmd.

    System info: OS: Vista. IDE: NetBeans 6.8. Maven: apache-maven-2.2.1.

    Edit: The built program (using Maven from NetBeans) works fine; I just can't test it from inside NetBeans. And I think I forgot to ask the question ;). So of course my first question is: how can I fix this problem? And second: is there any workaround for this? For example, configuring NetBeans to run an external command-line app instead of using the built-in console.


  • MSBUILD batch targets

    - by Darthg8r
    I want to deploy an application to a list of servers. I have all of the build issues taken care of, but I'm having trouble publishing to a list of servers. I want to read the list of servers from an external file and call a target, passing the name of each server in.

        <ItemGroup>
          <File Include="$(SolutionFolder)CP\Build\DenormDevServers.txt" />
        </ItemGroup>

        <Target Name="DeployToServer" Inputs="Servers" Outputs="Nothing">
          <Message Text="Deployment to server done here. Deploying to server: @(Servers)" />
        </Target>

        <Target Name="Test">
          <ReadLinesFromFile File="@(File)">
            <Output TaskParameter="Lines" ItemName="Servers" />
          </ReadLinesFromFile>
          <CallTarget Targets="DeployToServer" ContinueOnError="true"></CallTarget>
        </Target>

    I can't seem to get it to "deploy" to each server in the list. The output looks like this:

        Deployment to server done here. Deploying to server:

    Notice there is no server name, nor is it done more than once. There are 2 lines in DenormDevServers.txt.


  • Access modifiers - Property on business objects - getting and setting

    - by Mike
    Hi, I am using LINQ to SQL for the data access layer, and I have business objects similar to what is in that layer. The data provider gets message #23. On instantiation of the message, in the message constructor, it gets the MessageType, makes a new instance of the MessageType class, and fills in the MessageType information from the database. Therefore, I want this to get the name of the message's MessageType:

        user.Messages[23].MessageType.Name

    I also want an administrator to be able to set the MessageType:

        user.Messages[23].MessageType = MessageTypes.LoadType(3);

    ...but I don't want the user to be able to publicly set MessageType.Name. However, when I make a new MessageType instance, the access modifier for the Name property is public, because I want to set it from an external class (my data access layer). I could change the property to internal, so that my class can access it like a public variable while my other application cannot modify it. This still doesn't feel right, as it seems like a public property. Are public access modifiers in this situation bad? Any tips or suggestions would be appreciated. Thanks.
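
    A minimal sketch of the internal-setter idea discussed above (type and property names follow the question): the property reads publicly everywhere, but only code in the same assembly, e.g. the data access layer, can assign it.

        public class MessageType
        {
            // Readable from any assembly; writable only from within the
            // assembly that defines MessageType (e.g. the data access layer).
            public string Name { get; internal set; }
        }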


  • Is there a way to programmatically check dependencies of an EXE?

    - by Mason Wheeler
    I've got a certain project that I build and distribute to users. I have two build configurations, Debug and Release. Debug, obviously, is for my use in debugging, but there's an additional wrinkle: the Debug configuration uses a special debugging memory manager, with a dependency on an external DLL. There have been a few times when I've accidentally built and distributed an installer package with the Debug configuration, and it then failed to run once installed because the users don't have the special DLL. I'd like to keep that from happening in the future.

    I know I can get the dependencies of a program by running Dependency Walker, but I'm looking for a way to do it programmatically. Specifically, I have a way to run scripts while creating the installer, and I want something I can put in the installer script to check the program, see if it has a dependency on this DLL, and if so, cause the installer-creation process to fail with an error. I know how to create a simple CLI program that would take two filenames as parameters, run a DependsOn function, and create output based on its result, but I don't know what to put in the DependsOn function. Does anyone know how I'd go about writing it?
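
    One possible shape for that DependsOn function, sketched here in C# under the assumption that dumpbin.exe (which ships with Visual Studio) is available on the PATH: it simply scans the /DEPENDENTS output of the EXE's import table for the DLL name.

        using System;
        using System.Diagnostics;

        static bool DependsOn(string exePath, string dllName)
        {
            // Shell out to dumpbin and scan its /DEPENDENTS listing.
            var psi = new ProcessStartInfo("dumpbin.exe", "/DEPENDENTS \"" + exePath + "\"")
            {
                UseShellExecute = false,
                RedirectStandardOutput = true,
                CreateNoWindow = true
            };
            using (var p = Process.Start(psi))
            {
                string output = p.StandardOutput.ReadToEnd();
                p.WaitForExit();
                return output.IndexOf(dllName, StringComparison.OrdinalIgnoreCase) >= 0;
            }
        }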


  • How to handle not-enough-isolatedstorage issue deep in data loader?

    - by Edward Tanguay
    I have a Silverlight application which loads data from many external data sources into IsolatedStorage. If, while loading any of these sources, there is not enough IsolatedStorage, it ends up in a catch statement. At that point, in that catch statement, I would like to ask the user to click a button to approve Silverlight increasing the IsolatedStorage capacity.

    The problem is that although I have a "SwitchPage()" method with which I display a page, if I call it at this point it is too deep in the loading process, and the application always goes into an endless loop, hangs, and crashes. I need a way to branch out of the application flow completely, to an independent UserControl which has a button and code-behind that does the increase logic. What is a solution that lets an application branch out of a loading-process catch statement like this and display a user control with a button asking the user to increase the IsolatedStorage?

        public static void SaveBitmapImageToIsolatedStorageFile(OpenReadCompletedEventArgs e, string fileName)
        {
            try
            {
                using (IsolatedStorageFile isf = IsolatedStorageFile.GetUserStoreForApplication())
                {
                    using (IsolatedStorageFileStream isfs = new IsolatedStorageFileStream(fileName, FileMode.Create, isf))
                    {
                        Int64 imgLen = (Int64)e.Result.Length;
                        byte[] b = new byte[imgLen];
                        e.Result.Read(b, 0, b.Length);
                        isfs.Write(b, 0, b.Length);
                        isfs.Flush();
                        isfs.Close();
                        isf.Dispose();
                    }
                }
            }
            catch (IsolatedStorageException)
            {
                // handle: present user with button to increase isolated storage
            }
            catch (TargetInvocationException)
            {
                // handle: not saved
            }
        }
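
    For the independent UserControl part, a minimal sketch of what its click handler might do. IsolatedStorageFile.IncreaseQuotaTo only succeeds from a user-initiated event such as a button click, which is one reason the prompt has to live outside the loading code; the handler name and the 10 MB figure are illustrative.

        // using System.IO.IsolatedStorage;
        private void IncreaseQuotaButton_Click(object sender, RoutedEventArgs e)
        {
            using (var isf = IsolatedStorageFile.GetUserStoreForApplication())
            {
                long requested = isf.Quota + 10 * 1024 * 1024; // ask for 10 MB more (arbitrary)
                if (isf.IncreaseQuotaTo(requested))
                {
                    // approved: signal the loader to retry the failed save
                }
            }
        }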


  • Is there a standard pattern for scanning a job table executing some actions?

    - by Howiecamp
    (I realize that my title is poor. If after reading the question you have an improvement in mind, please either edit it or tell me and I'll change it.)

    I have the relatively common scenario of a job table which has one row for each thing that needs to be done. For example, it could be a list of emails to be sent. The table looks something like this:

        ID    Completed    TimeCompleted    anything else...
        ----  ---------    -------------    ----------------
        1     No                            blabla
        2     No                            blabla
        3     Yes          01:04:22         ...

    I'm looking for a standard practice/pattern (or code - C#/SQL Server preferred) for periodically "scanning" (I use the term very loosely) this table, finding the not-completed items, doing the action, and then marking them completed once done successfully. In addition to the basic process, I'm considering the following requirements:

    - I'd like some means of scaling linearly, e.g. running multiple worker processes simultaneously, or threading, or whatever. (A specific technical thought: I'm assuming that as a result of this requirement, I need some method of marking an item as "in progress" to avoid attempting the action multiple times.)
    - Each item in the table should only be executed once.

    Some other thoughts:

    - I'm not particularly concerned with whether the implementation is done in the database (e.g. in T-SQL or PL/SQL code) or in some external program code (e.g. a standalone executable or some action triggered by a web page) executed against the database.
    - Whether the "doing the action" part is done synchronously or asynchronously is not something I'm considering as part of this question.
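
    Not a full answer, but one common building block for this pattern in C#/SQL Server is an atomic "claim" step, sketched below. The InProgress column is hypothetical and stands in for the "mark as in progress" idea above; READPAST lets parallel workers skip rows that another worker has already locked, so no two workers claim the same item.

        using System.Data.SqlClient;

        static int? ClaimNextJob(SqlConnection conn)
        {
            // Claim exactly one pending row and return its ID, or null if none are left.
            const string sql =
                @"UPDATE TOP (1) JobTable WITH (ROWLOCK, READPAST)
                  SET InProgress = 1
                  OUTPUT inserted.ID
                  WHERE Completed = 'No' AND InProgress = 0;";
            using (var cmd = new SqlCommand(sql, conn))
            {
                object id = cmd.ExecuteScalar();
                return id == null ? (int?)null : (int)id;
            }
        }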


  • Running bcdedit from Python in Windows 2008 SP2

    - by Lee-Man
    I do not know Windows well, so that may explain my dilemma... I am trying to run bcdedit in Windows 2008 R2 from Python 2.6. My Python routine to run a command looks like this:

        def run_program(cmd_str):
            """Run the specified command, returning its output as an array of lines"""
            dprint("run_program(%s): entering" % cmd_str)
            cmd_args = cmd_str.split()
            subproc = subprocess.Popen(cmd_args, stdout=subprocess.PIPE,
                                       stderr=subprocess.PIPE, shell=True)
            (outf, errf) = (subproc.stdout, subproc.stderr)
            olines = outf.readlines()
            elines = errf.readlines()
            if Options.debug:
                if elines:
                    dprint('Error output:')
                    for line in elines:
                        dprint(line.rstrip())
                if olines:
                    dprint('Normal output:')
                    for line in olines:
                        dprint(line.rstrip())
            errf.close()
            outf.close()
            res = subproc.wait()
            dprint('wait result=', res)
            return (res, olines)

    I call this function thusly:

        (res, o) = run_program('bcdedit /set {current} MSI forcedisable')

    This command works when I type it in a cmd window, and it works when I put it in a batch file and run it from a command window (as Administrator, of course). But when I run it from Python (as Administrator), Python claims it can't find the command, returning:

        bcdedit is not recognized as an internal or external command, operable program or batch file

    Also, if I try running my batch file from Python (which works from the command line), it also fails. I've also tried it with the full path to bcdedit, with the same results. What is it about calling bcdedit from Python that makes it not found? Note that I can call other EXE files from Python, so I have some level of confidence that my Python code is sane... but who knows. Any help would be most appreciated.


  • Multi threading in WCF RIA Services

    - by synergetic
    I use WCF RIA Services to update a customer database. In the domain service:

        public void UpdateCustomer(Customer customer)
        {
            this.ObjectContext.Customers.AttachAsModified(customer);
            syncCustomer(customer);
        }

    After the update, a database trigger fires and, depending on the columns updated, may insert a new record into the CustomerChange table. The syncCustomer(customer) method is executed to check for a new record in the CustomerChange table; if one is found, it creates a text file containing the customer information and forwards that file to an external system for import. This synchronization may take some time, so I wanted to execute it on a different thread:

        private void syncCustomer(Customer customer)
        {
            this.ObjectContext.SaveChanges();
            new Thread(() => syncCustomerInfo(customer.CustomerID)) { IsBackground = true }.Start();
        }

        private void syncCustomerInfo(int customerID)
        {
            //Thread.Sleep(2000);
            // does the real job here
            // ...
        }

    The problem is that in most cases syncCustomerInfo cannot find any new CustomerChange record, even when one was definitely there. If I force the thread to sleep, it finds the new record. I also looked at Entity Framework events, but the only event the object context provides is SavingChanges, which occurs before changes are saved. Please suggest what else to try.
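
    A hedged workaround sketch for the race the question describes (the background thread querying before the trigger's transaction is committed and visible): poll briefly instead of using one fixed sleep. LookUpCustomerChange is a hypothetical stand-in for the real lookup, and the retry count and delay are arbitrary.

        private void syncCustomerInfo(int customerID)
        {
            // The CustomerChange row only becomes visible once the updating
            // transaction has committed, so retry a few times before giving up.
            for (int attempt = 0; attempt < 5; attempt++)
            {
                if (LookUpCustomerChange(customerID)) // hypothetical lookup
                {
                    // build the text file and forward it to the external system
                    return;
                }
                Thread.Sleep(500);
            }
        }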


  • Using JAXB to customise the generation of java enums

    - by belltower
    I'm using an external bindings file when running JAXB against an XML schema. I'm mostly using the bindings file to map the XML schema primitives to my own types. This is a snippet of the bindings file:

        <jxb:bindings version="1.0"
            xmlns:jxb="http://java.sun.com/xml/ns/jaxb"
            xmlns:xs="http://www.w3.org/2001/XMLSchema"
            xmlns:ai="http://java.sun.com/xml/ns/jaxb/xjc"
            extensionBindingPrefixes="ai">
          <jxb:bindings schemaLocation="xsdurl" node="xs:schema">
            <jxb:globalBindings>
              <jxb:javaType name="com.companyname.StringType" xmlType="xs:string"
                  parseMethod="parse" printMethod="print" hasNsContext="true">
              </jxb:javaType>
            </jxb:globalBindings>
          </jxb:bindings>
        </jxb:bindings>

    So whenever an xs:string is encountered, com.companyname.StringType is used, and its print/parse methods are called for marshalling/unmarshalling, etc. Now, if JAXB encounters an xs:enumeration it will generate a Java enum. For example:

        <xs:simpleType name="Address">
          <xs:restriction base="xs:string">
            <xs:enumeration value="ADDR"/>
            <xs:enumeration value="PBOX"/>
          </xs:restriction>
        </xs:simpleType>

        public enum Address {
            ADDR,
            PBOX;

            public String value() {
                return name();
            }

            public static Address fromValue(String v) {
                return valueOf(v);
            }
        }

    Does anyone know if it is possible to customise the creation of an enum like it is for a primitive? I would like to be able to:

    - Add a standard member variable / other methods to every enum generated by JAXB.
    - Specify the static method used to create the enum.


  • Strange Sql Server 2005 behavior

    - by Justin C
    Background: I have a site built in ASP.NET with SQL Server 2005 as its database. The site is the only site on a Windows Server 2003 box sitting in my client's server room. The client is a local school district, so for data security reasons there is no remote desktop access and no remote SQL Server connection; if I have to service the database I have to be at the terminal. I do have FTP access to update ASP code.

    Problem: I was contacted yesterday about an issue with the system. When I looked into it, it seems a bug that I had solved nearly a year ago had returned. I have a stored procedure that used to take an int as a parameter, but a year ago we changed the structure of the system and updated the stored procedure to take an nvarchar(10). The stored procedure somehow changed back to taking an int instead of an nvarchar. There is an external hard drive connected to the server that copies data periodically and can restore the server in case of failure. I would have assumed that an older version of the database had somehow been restored, but data that I know was inserted 7 days and 1 day before the bug occurred is still in the database.

    Question: Is there any way that the structure of a SQL Server 2005 database can revert to a previous version, or be restored to a previous version, without touching the actual data? No one else should have access to the server, so I'm going a little insane trying to figure out how this even happened. Any ideas?


  • IIS URL Rewriting: How can I reliably keep relative paths when serving multiple files?

    - by NVRAM
    My WebApp is part CMS, and when I serve an HTML page to the user it typically contains relative paths in a.href and img.src attributes. I currently have them accessed by URLs like ~/get-data.aspx/instance/user/page.html, where "instance" indicates the particular instance for the report and "user/page.html" is a path created by an external application that generates the content.

    This works pretty reliably with code in the application's BeginRequest method that translates the text after ".aspx" into a query string, then uses Context.RewritePath(). So far so good, but I've just tripped over something that took me by surprise: it appears that if any part of the query string ("instance/user/page.html") happens to contain a plus sign ("+"), the BeginRequest method is never called, and a 404 is immediately returned to the user.

    So my question is really three questions:

    1. Am I correct in my belief that a "+" would cause the 404, and if so, are there other things that could cause similar problems?
    2. Is there a way around that problem (perhaps a different method than BeginRequest)?
    3. Is there a better way to preserve relative URL paths for generated content than what I'm using?

    I'd rather not require site admins to install a 3rd-party rewrite tool if I can help it.


  • Application design question, best approach?

    - by Jamie Keeling
    Hello, I am in the process of designing an application that will allow you to find pictures (screenshots) made by certain programs. I will provide the locations of a few of the programs in the application itself to get the user started. I was wondering how I should go about adding new locations as time goes on. My first thought was simply hard-coding them into the application, but this means the user has to reinstall it for the changes to take effect.

    My second idea was to use an XML file to contain all the locations as well as other data, such as the name of the application. This also means the user can add their own locations if they wish, as well as share them over the internet. The second option seemed the best approach, but then I had to think about how it would be managed on the user's computer. Ideally I'd like just a single .exe without reliance on any external files such as the XML, but this would bring me back to point one. Would it be best to simply use ClickOnce deployment to create an entry in the start menu and create a folder containing the .exe and the files?

    Thanks for the feedback; I don't want to start implementing the application until the design is nailed down.
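
    A minimal sketch of the XML-file option weighed above (the file name and element layout are illustrative, not from the question): the application ships with built-in defaults and merges in any user-editable file found next to the .exe.

        using System;
        using System.IO;
        using System.Linq;
        using System.Xml.Linq;

        static string[] LoadExtraLocations()
        {
            string path = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "locations.xml");
            if (!File.Exists(path))
                return new string[0];                 // only the hard-coded defaults apply

            return XDocument.Load(path)
                            .Descendants("location")  // e.g. <location path="..."/>
                            .Select(e => (string)e.Attribute("path"))
                            .Where(p => !string.IsNullOrEmpty(p))
                            .ToArray();
        }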


  • Best way to catch a WCF exception in Silverlight?

    - by wahrhaft
    I have a Silverlight 2 application that is consuming a WCF service, so it uses asynchronous callbacks for all calls to the service's methods. If the service is not running, or it crashes, or the network goes down, etc. before or during one of these calls, an exception is generated as you would expect. The problem is, I don't know how to catch this exception.

    Because it is an asynchronous call, I can't wrap my Begin call with a try/catch block and have it pick up an exception that happens after the program has moved on from that point. Because the service proxy is automatically generated, I can't put a try/catch block on each and every generated function that calls EndInvoke (where the exception actually shows up). These generated functions are also surrounded by External Code in the call stack, so there's nowhere else in the stack to put a try/catch either. I can't put the try/catch in my callback functions, because the exception occurs before they would be called.

    There is an Application_UnhandledException function in my App.xaml.cs which captures all unhandled exceptions. I could use this, but it seems like a messy way to do it. I'd rather reserve that function for the truly unexpected errors (a.k.a. bugs) and not end up with code in it for every circumstance I'd like to deal with in a specific way. Am I missing an obvious solution? Or am I stuck using Application_UnhandledException?

    [Edit] As mentioned below, the Error property is exactly what I was looking for. What is throwing me for a loop is that the exception is thrown and appears to be uncaught, yet execution is able to continue. It triggers the Application_UnhandledException event and causes VS2008 to break execution, but continuing in the debugger allows execution to continue. It's not really a problem; it just seems odd.
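
    A sketch of the Error-property approach the edit refers to: the proxy's generated *Completed event args derive from AsyncCompletedEventArgs, so a service or network failure shows up in e.Error rather than as an exception catchable at the call site. GetDataCompletedEventArgs is an illustrative generated type name.

        void client_GetDataCompleted(object sender, GetDataCompletedEventArgs e)
        {
            if (e.Error != null)
            {
                // service not running, network down, etc. - handle it here
                return;
            }
            // safe to touch e.Result now
        }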


  • Creating an appropriate index for a frequently used query in SQL Server

    - by Slauma
    In my application I have two queries which will be used quite frequently. The WHERE clauses of these queries are the following:

        WHERE FieldA = @P1 AND (FieldB = @P2 OR FieldC = @P2)

    and

        WHERE FieldA = @P1 AND FieldB = @P2

    P1 and P2 are parameters entered in the UI or coming from external data sources.

    - FieldA is an int and highly non-unique; there are only two, three, or four different values in a table with, say, 20000 rows.
    - FieldB is a varchar(20) and is "almost" unique; only very few rows share the same value.
    - FieldC is a varchar(15) and also highly distinct, but not as much as FieldB.
    - FieldA and FieldB together are unique (but do not form my primary key, which is a simple auto-incrementing identity column with a clustered index).

    I'm wondering now what's the best way to define an index to speed up specifically these two queries. Shall I define one index with...

    1. FieldB (or better FieldC here?)
    2. FieldC (or better FieldB here?)
    3. FieldA

    ...or better two indices:

    1. FieldB
    2. FieldA

    and

    1. FieldC
    2. FieldA

    Or are there even other and better options? What's the best way and why? Thank you for suggestions in advance!


  • Organizing multiple embed codes with jQuery

    - by Nimbuz
    I have several embed codes on my website. For example:

    Embed code #1:

        <object width="480" height="385"><param name="movie" value="http://www.youtube.com/v/f8Lp2ssd5A9ErAc&hl=en_US&fs=1&"></param><param name="allowFullScreen" value="true"></param><param name="allowscriptaccess" value="always"></param><embed src="http://www.youtube.com/v/f8Lp2A9ErAc&hl=en_US&fs=1&" type="application/x-shockwave-flash" allowscriptaccess="always" allowfullscreen="true" width="480" height="385"></embed></object>

    Embed code #2:

        <script type="text/javascript">
            _qoptions={ qacct:"p-3asdb5E0g6" };
        </script>
        <script type="text/javascript" src="http://edge.quantserve.com/quant.js"></script>
        <noscript>
            <a href="http://www.quantcast.com/p-3asdb5E0g6" target="_blank"><img src="http://pixel.quantserve.com/pixel/p-3asdb5E0g6.gif" style="display: none;" border="0" height="1" width="1" alt="Quantcast"/></a>
        </noscript>

    and so on. How do I organize them and separate them into a single external js file to keep the markup clean? Thanks for your help!


  • What is the easiest way to add compression to WCF in Silverlight?

    - by caryden
    I have a Silverlight 2 beta 2 application that accesses a WCF web service. Because of this, it currently can only use basicHttpBinding. The web service will return fairly large amounts of XML data. This seems fairly wasteful from a bandwidth-usage standpoint, as the response, if zipped, would be smaller by a factor of 5 (I actually pasted the response into a txt file and zipped it). The request does have the "Accept-Encoding: gzip, deflate" header. Is there any way to have the WCF service gzip (or otherwise compress) the response? I did find this link, but it sure seems a bit complex for functionality that should be handled out of the box, IMHO.

    OK - at first I marked the solution using System.IO.Compression as the answer, as I could never "seem" to get IIS7 dynamic compression to work. Well, as it turns out, dynamic compression on IIS7 was working all along. It is just that Nikhil's Web Developer Helper plugin for IE did not show it working. My guess is that since Silverlight hands the web service call off to the browser, the browser handles it "under the covers" and Nikhil's tool never sees the compressed response. I was able to confirm this by using Fiddler, which monitors traffic external to the browser application. In Fiddler, the response was, in fact, gzip compressed!

    The other problem with the System.IO.Compression solution is that System.IO.Compression does not exist in the Silverlight CLR. So from my perspective, the EASIEST way to enable WCF compression in Silverlight is to enable dynamic compression in IIS7 and write no code at all.


  • Qt cross thread call

    - by QLatvia
    I have a Qt/C++ application with the usual GUI thread and a network thread. The network thread uses an external library which has its own select()-based event loop, so the network thread isn't using Qt's event system. At the moment, the network thread just emit()s signals when various events occur, such as a successful connection. I think this works okay, as the signals/slots mechanism posts the signals correctly for the GUI thread.

    Now I need the network thread to be able to call the GUI thread to ask questions. For example, the network thread may require the GUI thread to put up a dialog to request a password. Does anyone know a suitable mechanism for doing this?

    My current best idea is to have the network thread wait on a QWaitCondition after emitting an object (emit passwordRequestedEvent(passwordRequest);). The passwordRequest object would have a handle on the particular QWaitCondition and so could signal it when a decision has been made. Is this sort of thing sensible, or is there another option?


  • LinQ XML mapping to a generic type

    - by Manuel Navarro
    I'm trying to use an external XML file to map the output from a stored procedure onto an instance of a class. The problem is that my class is of a generic type:

        public class MyValue<T>
        {
            public T Value { get; set; }
        }

    Searching through a lot of blogs and articles, I've managed to get this:

        <?xml version="1.0" encoding="utf-8" ?>
        <Database Name="" xmlns="http://schemas.microsoft.com/linqtosql/mapping/2007">
          <Table Name="MyValue" Member="MyNamespace.MyValue`1">
            <Type Name="MyNamespace.MyValue`1">
              <Column Name="Category" Member="Value" DbType="VarChar(100)" />
            </Type>
          </Table>
          <Function Method="GetResourceCategories" Name="myprefix_GetResourceCategories">
            <ElementType Name="MyNamespace.MyValue`1"/>
          </Function>
        </Database>

    The MyNamespace.MyValue`1 trick works fine, and the class is recognized. I expect four rows from the stored procedure, and I get four MyValue<string> instances. The big problem is that the Value property of all four instances is null: the property is not getting mapped, and I don't really get why. It may be worth noting that the Value property is generic, and that when the mapping is done using attributes it works perfectly. Anyone have a clue?

    BTW, the GetResourceCategories method:

        public ISingleResult<MyValue<string>> GetResourceCategories()
        {
            IExecuteResult result = this.ExecuteMethodCall(
                this, (MethodInfo)MethodInfo.GetCurrentMethod());
            return (ISingleResult<MyValue<string>>)result.ReturnValue;
        }


  • How do I run NUnit in debug mode from Visual Studio?

    - by Jon Cage
    I've recently been building a test framework for a bit of C# I've been working on. I have NUnit set up and a new project within my workspace to test the component. All works well if I load my unit tests from NUnit (v2.4), but I've got to the point where it would be really useful to run in debug mode and set some breakpoints.

    I've tried the suggestions from several guides, which all suggest changing the 'Debug' properties of the test project:

        Start external program: C:\Program Files\NUnit 2.4.8\bin\nunit-console.exe
        Command line arguments: /assembly: <full-path-to-solution>\TestDSP\bin\Debug\TestDSP.dll

    I'm using the console version there, but have tried calling the GUI as well. Both give me the same error when I try to start debugging:

        Cannot start test project 'TestDSP' because the project does not contain any tests.

    Is this because I normally load \DSP.nunit into the NUnit GUI, and that's where the tests are held? I'm beginning to think the problem may be that VS wants to run its own test framework, and that's why it fails to find the NUnit tests.

    [Edit] To those asking about test fixtures: one of my .cs files in the TestDSP project looks roughly like this:

        namespace Some.TestNamespace
        {
            // Testing framework includes
            using NUnit.Framework;

            [TestFixture]
            public class FirFilterTest
            {
                /// <summary>
                /// Tests that a FirFilter can be created
                /// </summary>
                [Test]
                public void Test01_ConstructorTest()
                {
                    // ...some tests...
                }
            }
        }

    ...I'm pretty new to C# and the NUnit test framework, so it's entirely possible I've missed some crucial bit of information ;-)

    [FINAL SOLUTION] The big problem was the project type I'd used. If you pick Other Languages -> Visual C# -> Test -> Test Project when you're choosing the project type, Visual Studio will try to use its own testing framework, as far as I can tell. You should pick a normal C# class library project instead, and then the instructions in my selected answer will work.
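
    A related trick worth noting alongside the solution above (a sketch, not from the original post): a test can summon the debugger itself, which sidesteps the Start-external-program settings entirely.

        [Test]
        public void Test01_ConstructorTest()
        {
            // Prompts to attach a debugger (e.g. the running Visual Studio
            // instance) when NUnit executes this test.
            System.Diagnostics.Debugger.Launch();

            // ...some tests...
        }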


  • After Delete Trigger Fires Only After Delete?

    - by Brandi
    I thought "after delete" meant that the trigger is not fired until after the delete has already taken place, but here is my situation... I made 3, nearly identical SQL CLR after delete triggers in C#, which worked beautifully for about a month. Suddenly, one of the three stopped working while an automated delete tool was run on it. By stopped working, I mean, records could not be deleted from the table via client software. Disabling the trigger caused deletes to be allowed, but re-enabling it interfered with the ability to delete. So my question is 'how can this be the case?' Is it possible the tool used on it futzed up the memory? It seems like even if the trigger threw an exception, if it is AFTER delete, shouldn't the records be gone? All the trigger looks like is this: ALTER TRIGGER [sysdba].[AccountTrigger] ON [sysdba].[ACCOUNT] AFTER DELETE AS EXTERNAL NAME [SQL_IO].[SQL_IO.WriteFunctions].[AccountTrigger] GO The CLR trigger does one select and one insert into another database. I don't yet know if there are any errors from SQL Server Mgmt Studio, but will update the question after I find out.


  • jQuery > Update inline script on form submission

    - by Andrew Kirk
    I am using the ChemDoodle Web Components to display molecules on a web page. Basically, I can just insert the following script in my page and it will create an HTML5 canvas element to display the molecule:

        <script>
            var transform1 = new TransformCanvas('transform1', 200, 200, true);
            transform1.specs.bonds_useJMOLColors = true;
            transform1.specs.bonds_width_2D = 3;
            transform1.specs.atoms_useJMOLColors = true;
            transform1.specs.atoms_circles_2D = true;
            transform1.specs.backgroundColor = 'black';
            transform1.specs.bonds_clearOverlaps_2D = true;
            transform1.loadMolecule(readPDB(molecule));
        </script>

    In this example, "molecule" is a variable that I have defined in an external script by using the jQuery.ajax() function to load a PDB file. This is all fine and good. Now I would like to include a form on the page that will allow a user to paste in a PDB molecule definition. Upon submitting the form, I want to update the "molecule" variable with the form data so that the ChemDoodle Web Components script will work its magic and display the molecule defined by the PDB definition pasted into the form.

    I am using the following jQuery code to process the form submission:

        $(".button").click(function() {
            // validate and process form here

            // hide previous errors
            $('.error').hide();

            // validate pdb textarea field
            var pdb = $("textarea#pdb").val();
            if (pdb == "") {
                $("label#pdb_error").show();
                $("textarea#pdb").focus();
                return false;
            }
            molecule = pdb;
        });

    This code sets the "molecule" variable upon form submission, but it is not being passed back to the inline script as I had hoped. I have tried a number of variations on this but can't seem to get it right. Any clues as to where I might be going wrong would be much appreciated.


  • What -W values in gcc correspond to which actual warnings?

    - by SebastianK
    Preamble: I know, disabling warnings is not a good idea. Anyway, I have a technical question about this. Using GCC 3.3.6, I get the following warning:

        choosing ... over ... because conversion sequence for the argument is better.

    Now, I want to disable this warning as described in the GCC warning options, by providing an argument like:

        -Wno-theNameOfTheWarning

    But I don't know the name of the warning. How can I find out the name of the option that disables this warning?

    I am not able to fix the warning, because it occurs in a header of an external library that cannot be changed. It is in Boost Serialization (the rx(s, count) call):

        template<class Archive, class Container, class InputFunction, class R>
        inline void load_collection(Archive & ar, Container &s)
        {
            s.clear();
            // retrieve number of elements
            collection_size_type count;
            unsigned int item_version;
            ar >> BOOST_SERIALIZATION_NVP(count);
            if(3 < ar.get_library_version())
                ar >> BOOST_SERIALIZATION_NVP(item_version);
            else
                item_version = 0;
            R rx;
            rx(s, count);
            std::size_t c = count;
            InputFunction ifunc;
            while(c-- > 0){
                ifunc(ar, s, item_version);
            }
        }

    I have already tried #pragma GCC system_header, but this had no effect. Using -isystem instead of -I also does not work. The general question remains: I know the text of the warning message, but I do not know its correlation to the GCC warning options.


  • How to update GUI thread/class from worker thread/class?

    - by user315182
    First question here, so hello everyone. The requirement I'm working on is a small test application that communicates with an external device over a serial port. The communication can take a long time, and the device can return all sorts of errors. The device is nicely abstracted in its own class, which the GUI thread starts in its own thread, with the usual open/close/read-data/write-data basic functions. The GUI is also pretty simple: choose COM port, open, close, show data read from or errors reported by the device, allow modification and write-back, etc.

    The question is simply: how do I update the GUI from the device class? There are several distinct types of data the device deals with, so I need a relatively generic bridge between the GUI form/thread class and the working device class/thread. In the GUI-to-device direction everything works fine with [Begin]Invoke calls for open/close/read/write etc. on the various GUI-generated events.

    I've read the thread here (How to update GUI from another thread in C#?) where the assumption is made that the GUI and worker thread are in the same class. Google searches throw up how to create a delegate or how to create the classic background worker, but that's not at all what I need, although they may be part of the solution. So, is there a simple but generic structure that can be used? My level of C# is moderate and I've been programming all my working life; given a clue, I'll figure it out (and post back)... Thanks in advance for any help.
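
    One generic bridge for the device-to-GUI direction, sketched with SynchronizationContext so the device class never references the form (all type names below are illustrative): the GUI thread hands its context to the device class, which posts strongly typed event args back through it.

        using System;
        using System.Threading;

        public class DeviceDataEventArgs : EventArgs   // illustrative payload type
        {
            public string Text { get; set; }
        }

        public class DeviceBase
        {
            private readonly SynchronizationContext _ui;

            public event EventHandler<DeviceDataEventArgs> DataReceived;

            // Pass SynchronizationContext.Current from the GUI thread at construction.
            public DeviceBase(SynchronizationContext uiContext)
            {
                _ui = uiContext;
            }

            // Called from the worker thread; the handler runs on the GUI thread.
            protected void RaiseDataReceived(DeviceDataEventArgs args)
            {
                var handler = DataReceived;
                if (handler != null)
                    _ui.Post(_ => handler(this, args), null);
            }
        }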


  • What is the best practice in regards to building composite dtos off of an aggregate root with domain

    - by Chance
    I'm trying to figure out the best approach/practice for assembling a composite data transfer object off of an aggregate root, and would love to hear people's thoughts on this.

    For example, let's say I have a root that has a few domain objects as children. I want to assemble a specific view DTO, based on some business logic, that has either attributes or full DTOs of those objects. What I'm struggling with is trying to figure out where that assembly should happen.

    I can see it going on the domain object of the aggregate root, as there is some business logic associated with it. The benefit of this approach, from what I've deduced so far, is that it should keep the inevitable business logic from bleeding outside of the domain object. It also allows for private methods that take care of tasks that could become more complex with an external builder. The downsides are that the domain object becomes much more entrenched in the application's workflow and comes to represent much more than just the domain object. It also could become very large in the scenario where you need multiple composite DTOs.

    Alternatively, I could also see it belonging to some form of transfer object assembler, where there is a builder for each domain object. The domain objects would still be responsible for GetDto() and UpdateFromDto(dto); outside of that, the builder would handle the construction and deconstruction of composite DTOs. The downside is kind of mentioned above: I fear this will easily lead to developers unfamiliar with DDD bleeding a ton of business logic into the assembler, which is what I desperately want to avoid.

    Any thoughts would be greatly appreciated.
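
    A compact sketch of the assembler alternative (every type name below is hypothetical): the aggregate root stays ignorant of view shapes, and the shaping logic lives in one place.

        using System.Collections.Generic;
        using System.Linq;

        public class Order { public int Id { get; set; } }

        public class Customer                       // aggregate root
        {
            public string Name { get; set; }
            public List<Order> Orders { get; set; }
        }

        public class OrderDto { public int Id { get; set; } }

        public class CustomerViewDto
        {
            public string Name { get; set; }
            public List<OrderDto> Orders { get; set; }
        }

        public class CustomerViewDtoAssembler
        {
            public CustomerViewDto Assemble(Customer root)
            {
                // Business rules about what this view may expose belong here.
                return new CustomerViewDto
                {
                    Name = root.Name,
                    Orders = root.Orders.Select(o => new OrderDto { Id = o.Id }).ToList()
                };
            }
        }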

