Search Results

Search found 11915 results on 477 pages for 'copy'.

  • WiX: The directory is in the user profile but is not listed in the RemoveFile table

    - by Venkat S. Rao
    I have the following configuration to delete and copy a file from WIX. <Directory Id='TARGETDIR' Name='SourceDir'> ... <Directory Id="AppDataFolder" Name="AppDataFolder"> <Directory Id="GleasonAppData" Name="Gleason" > <Directory Id="GleasonStudioAppData" Name="GleasonStudio"> <Directory Id="DatabaseAppData" Name ="Database"> <Directory Id="UserSandboxesAppData" Name="UserSandboxes" /> </Directory> </Directory> </Directory> </Directory> </Directory> <DirectoryRef Id="UserSandboxesAppData"> <Component Id="comp_deleteBackup" Guid="1f159f49-3029-4f46-b194-e42aabd40844"> <RemoveFile Id="RemoveBackup" Directory="UserSandboxesAppData" Name="DevelopmentBackUp.FDB" On="install" /> <RegistryKey Root="HKCU" Key="Software\Gleason\Database\RemoveBackup"> <RegistryValue Value="Removed" Type="string" KeyPath="yes" /> </RegistryKey> </Component> <Component Id="comp_createBackup" Guid="557badef-6d77-4c4e-aa5f-8d88cb5ef735"> <CopyFile Id="DBBackup" DestinationDirectory="UserSandboxesAppData" DestinationName="DevelopmentBackUp.FDB" SourceDirectory="UserSandboxesAppData" SourceName="Development.FDB" /> <RegistryKey Root="HKCU" Key="Software\Gleason\Database\CopyBackup"> <RegistryValue Value="Copied" Type="string" KeyPath="yes" /> </RegistryKey> </Component> </DirectoryRef> I get 4 errors related to ICE64--The directory 'xxx' is in the user profile but is not listed in the RemoveFile table. xxx={UserSandboxesAppData, DatabaseAppData, GleasonStudioAppData, GleasonAppData} Someone else had a very similar problem here: Directory xx is in the user profile but is not listed in the RemoveFile table. . But that solution did not help me. What do I need to change? Thank You, Venkat Rao

    Read the article

  • LPX-00607 for ora:contains in Java but not SQL*Plus

    - by Windle
    Hey all, I am trying to do some SQL queries against Oracle 11g and am having issues using ora:contains. I am using Spring's JDBC implementation and my code generates the SQL statement: select * from view_name where column_a = ? and column_b = ? and existsNode(xmltype(clob_column), 'record/name [ora:contains(text(), "name1") > 0]', 'xmlns:ora="http://xmlns.oralce.com/xdb"') = 1 I have removed the actual view / column names obviously, but when I copy that into sqlplus and substitute in random values, the select executes properly. When I try to run it in my DAO code I get this stack trace: org.springframework.jdbc.UncategorizedSQLException: PreparedStatementCallback; uncategorized SQLException for SQL [the big select above]; SQL state [99999]; error code [31011]; ORA-31011: XML parsing failed. ORA-19202: Error occurred in XML processing LPX-00607: Invalid reference: 'contains' ; nested exception is java.sql.SQLException: ORA-31011: XML parsing failed ORA-19202: Error occurred in XML processing LPX-00607: Invalid reference: 'contains' (continues on like this for a while....) I think it is worth mentioning that I am using Maven and it is possible I am missing some dependency that is required for this. Sorry the post is so long, but I wanted to err on the side of too much info. Thanks for taking the time to read this at least =) -Windle

    Read the article

  • Oracle T4CPreparedStatement memory leaks?

    - by Jay
    A little background on the application that I am going to talk about in the next few lines: XYZ is a data masking workbench Eclipse RCP application: you give it a source table column and a target table column, and it applies a transformation (encryption/shuffling/etc.) and copies the row data from the source table to the target table. Now, when I mask n tables at a time, n threads are launched by this app. Here is the issue: I have run into a production issue on the first rollout of the above said app. Unfortunately, I don't have any logs to get to the root. However, I tried to run this app in a test region and do a stress test. When I collected .hprof files and ran them through an analyzer (YourKit), I noticed that objects of oracle.jdbc.driver.T4CPreparedStatement were retaining heap. The analysis also tells me that one of my classes is holding a reference to this prepared statement object and thereby, n threads have n such objects. T4CPreparedStatement seemed to have character arrays: lastBoundChars and bindChars, each of size char[300000]. So, I researched a bit (Google!), obtained ojdbc6.jar and tried decompiling T4CPreparedStatement. I see that T4CPreparedStatement extends OraclePreparedStatement, which dynamically manages the array size of lastBoundChars and bindChars. So, my questions here are: Have you ever run into an issue like this? Do you know the significance of lastBoundChars / bindChars? I am new to profiling, so do you think I am not doing it correctly? (I also ran the hprofs through MAT - and this was the main identified issue - so, I don't really think I could be wrong?) I have found something similar on the web here: http://forums.oracle.com/forums/thread.jspa?messageID=2860681 Appreciate your suggestions / advice.

    Read the article

  • How should I setup my Visual Studio projects/solutions in a Mercurial repository?

    - by Dave A
    At my company we have a few different web apps that each share some common libraries. The Visual Studio setup looks like this:
      Website 1 Solution: Website 1, Shared Library 1 Project, Shared Library 2 Project
      Website 2 Solution: Website 2, Shared Library 1 Project, Shared Library 2 Project
      Windows Service Solution: Windows Service Project, Shared Library 1 Project, Shared Library 2 Project
      Shared Library Solution: Shared Library 1 Project, Shared Library 2 Project
      All Projects Solution: Website 1, Website 2, Windows Service Project, Shared Library 1 Project, Shared Library 2 Project
    We want to start using Mercurial for source control, but I'm still not sure the best way to do it. From what I've read you're supposed to use a separate repository for each project. No problem there, but where do the Visual Studio solution files (.sln) go? Should there be a separate repository with just an .sln file? Ideally the projects that use the shared libraries should all use the same version, and the solution "All Projects Solution" should build without errors, but sometimes we need to branch the shared libraries. What is the best way to do this, and how would the repositories be set up? How do I get a working copy of a certain branch/tag of the Website 1 solution when every project is in a separate repository? Do I have to pull each one separately, or write a script to do it all at once? Can TortoiseHg do that for me? Any other tips to make this process easier?

    Read the article

  • What is the exact way to use the endorsed directory in JDK 1.6?

    - by raticulin
    I want to upgrade my JAX-WS to 2.2 (JDK 1.6 comes bundled with JAX-WS 2.1). My JDK is (I did not install the public JRE): java version "1.6.0_20" Java(TM) SE Runtime Environment (build 1.6.0_20-b02) Java HotSpot(TM) Client VM (build 16.3-b01, mixed mode) In JAX-WS' own doc they explain how to do it: One way to fix this is to copy jaxws-api.jar and jaxb-api.jar into JRE endorsed directory, which is $JAVA_HOME/lib/endorsed (or $JDK_HOME/jre/lib/endorsed) But I am not sure this is having any effect in my installation. For starters, I have only defined %JAVA_HOME%, and the folder $JAVA_HOME/lib/endorsed did not exist, so I created it and copied the two jars in. But if I run wsgen -version (wsgen is a JAX-WS tool) I still get: JAX-WS RI 2.1.6 in JDK 6 I also tried creating the folder JAVA_HOME\jre\lib\endorsed (notice that in the doc they say JDK_HOME, but as I only have JAVA_HOME I used this path). Still the same wsgen output. My questions are: What is the difference between JAVA_HOME and JDK_HOME in the doc page? Anything significant, or just two ways to refer to JAVA_HOME? Is 'wsgen -version' a valid way to check which JAX-WS version is used, or does it always call the exe from the original JDK regardless of the endorsed jars? Does anyone know very detailed steps to install JAX-WS 2.2 on JDK 1.6?

    Read the article

  • Web Service Client in JBoss 5.1 with JDK 6

    - by dcp
    This is a continuation of the question here: http://stackoverflow.com/questions/2435286/jboss-does-app-have-to-be-compiled-under-same-jdk-as-jboss-is-running-under It's different enough though that it required a new question. I am trying to use JDK 6 to run JBoss 5.1, and I downloaded the JDK 6 version of JBoss 5.1. This works fine and my EAR application deploys fine. However, when I want to run a web service client with code like this: public static void main(String[] args) throws Exception { System.out.println("creating the web service client..."); TestClient client = new TestClient("http://localhost:8080/tc_test_project-tc_test_project/TestBean?wsdl"); Test service = client.getTestPort(); System.out.println("calling service.retrieveAll() using the service client"); List<TestEntity> list = service.retrieveAll(); System.out.println("the number of elements in list retrieved using the client is " + list.size()); } I get the following exception: javax.xml.ws.WebServiceException: java.lang.UnsupportedOperationException: setProperty must be overridden by all subclasses of SOAPMessage at org.jboss.ws.core.jaxws.client.ClientImpl.handleRemoteException(ClientImpl.java:396) at org.jboss.ws.core.jaxws.client.ClientImpl.invoke(ClientImpl.java:302) at org.jboss.ws.core.jaxws.client.ClientProxy.invoke(ClientProxy.java:170) at org.jboss.ws.core.jaxws.client.ClientProxy.invoke(ClientProxy.java:150) Now, here's the really interesting part. If I change the JDK that the code above is running under from JDK 6 to JDK 5, the exception above goes away! It's really strange. The only way I found for the code above to run under JDK 6 was to take the JBOSS_HOME/lib/endorsed folder and copy it to JDK6_HOME/lib. This seems like it shouldn't be necessary, but it is. Is there any other way to make this work other than using the workaround I just described?

    Read the article

  • How does real-time collaboration with multiple clients work in a system using operation transformation?

    - by Saikat Chakrabarti
    I just finished reading High-Latency, Low-Bandwidth Windowing in the Jupiter Collaboration System and I mostly followed everything until part 6: global consistency. This part describes how the system described in the paper can be extended to accommodate multiple clients connected to the server. However, the explanation is very short and essentially says the system will work if the central server merely forwards client messages to all the other clients. I don't really understand how this works though. What state vector would be sent in the message that is sent to all the other clients? Does the server maintain separate state vectors for each client? Does it maintain a separate copy of the widgets locally for each client? The simple example I can think of is this setup: imagine client A, server, and client B, with client A and client B both connected to the server. To start, all three have the state object "ABCD". Then, client A sends the message "insert character F at position 0" at the same time client B sends the message "insert character G at position 0" to the server. It seems like simply relaying client A's message to client B and vice versa doesn't actually handle this case. So what exactly does the server do?
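
    For illustration, here is a minimal sketch of the two-client scenario described above (it is not the Jupiter algorithm itself, and the type and method names are made up): when the two "insert at position 0" operations cross, the operation that is ordered second gets its position transformed against the one already applied, so both replicas converge on the same text.

        using System;

        // Hypothetical illustration of transforming two concurrent inserts;
        // this is not the state-vector bookkeeping from the Jupiter paper.
        class InsertOp
        {
            public int Position;     // where to insert
            public char Character;   // what to insert
            public string ClientId;  // used only to break ties deterministically
        }

        static class Ot
        {
            // Transform 'op' so it can be applied after 'applied' has already run.
            public static InsertOp Transform(InsertOp op, InsertOp applied)
            {
                bool shift = applied.Position < op.Position
                    || (applied.Position == op.Position
                        && string.CompareOrdinal(applied.ClientId, op.ClientId) < 0);
                return new InsertOp
                {
                    Position = shift ? op.Position + 1 : op.Position,
                    Character = op.Character,
                    ClientId = op.ClientId
                };
            }

            public static string Apply(string doc, InsertOp op)
            {
                return doc.Insert(op.Position, op.Character.ToString());
            }
        }

        class Demo
        {
            static void Main()
            {
                var fromA = new InsertOp { Position = 0, Character = 'F', ClientId = "A" };
                var fromB = new InsertOp { Position = 0, Character = 'G', ClientId = "B" };

                // Client A applies its own op, then B's op transformed against it.
                string docA = Ot.Apply(Ot.Apply("ABCD", fromA), Ot.Transform(fromB, fromA));
                // Client B applies its own op, then A's op transformed against it.
                string docB = Ot.Apply(Ot.Apply("ABCD", fromB), Ot.Transform(fromA, fromB));

                Console.WriteLine(docA); // FGABCD
                Console.WriteLine(docB); // FGABCD
            }
        }

    The open question in part 6 of the paper is essentially who keeps the state needed to perform this transform for each connection; the sketch sidesteps that by transforming against a single known concurrent operation.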

    Read the article

  • jQuery Validation Plugin: Prevent validation issue on nested form

    - by Majid
    I have a form on which I use validation. This form has a section for photo upload. Initially this section contains six elements with the following code: <img class="photo_upload" src="image/app/photo_upload.jpg"> I have bound a function to the click event for the class photo_upload. This function replaces the image with a minimal form with this code: <form onsubmit="startUploadImage();" target="control_target" enctype="multipart/form-data" method="post" action="index.php"> <input type="hidden" value="add_image" name="service"> <input type="hidden" value="1000000" name="MAX_FILE_SIZE"> <input type="file" size="10" name="photo" id="photo_file_upload"><br> <button onclick="javascript:cancel_photo_upload();return false;">Cancel</button> </form> So, essentially, after the image is clicked, I'd have a new form nested in my original, validated form. Now, when I use this new form and upload an image, I receive an error (repeated three times) saying: validator is undefined http://host.com/path/index.php Line 286 What is going on here? My guess is this: the submit event bubbles up to the outer form; as we have validation on that form, validation is triggered; validation tries to find the form triggering it; and since we have not bound validation to the inner form, it returns 'undefined'. Now, is my assessment correct? Whether it is or not, how can I solve this issue?

    Read the article

  • How Does a COM Program Locate a .NET DLL Registered for COM Interop?

    - by Eric J.
    One customer wants to consume our .NET DLLs from VB6. They are designed to support reverse interop and all works fine... except: There are two separate VB6 programs in two different directories. It seems it's necessary to do one of: Copy the .NET DLL into both directories, or Install the .NET DLL in the GAC This is the customer's observation and also supported by the RegAsm documentation: After registering an assembly using Regasm.exe, you can install it in the global assembly cache so that it can be activated from any COM client. If the assembly is only going to be activated by a single application, you can place it in that application's directory. I'm confused on this point. First point of confusion: As far as I understand, the COM runtime locates the DLL using the Prog ID / Class ID. When I look in the registry at the Class ID entry, I see the full path to the .NET DLL in the CodeBase key. Why is it that a COM program using the Prog ID / Class ID doesn't locate the .NET DLL using the CodeBase? Second point of confusion: The GAC is specific to .NET. How is it involved in resolving COM references?
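
    For reference, the CodeBase value mentioned above lives under the class's InprocServer32 key, which can be inspected with a few lines of code (a sketch only; the ProgID is a placeholder, and CodeBase is written only when the assembly is registered with regasm /codebase):

        using System;
        using Microsoft.Win32;

        class ShowComRegistration
        {
            static void Main()
            {
                // Placeholder ProgID - substitute the one RegAsm actually registered.
                const string progId = "MyCompany.MyComClass";

                // ProgID -> CLSID
                using (RegistryKey clsidKey = Registry.ClassesRoot.OpenSubKey(progId + @"\CLSID"))
                {
                    string clsid = (string)clsidKey.GetValue(null);

                    // CLSID -> InprocServer32, where RegAsm records the managed class info.
                    using (RegistryKey inproc = Registry.ClassesRoot.OpenSubKey(
                        @"CLSID\" + clsid + @"\InprocServer32"))
                    {
                        Console.WriteLine("Class:    {0}", inproc.GetValue("Class"));
                        Console.WriteLine("Assembly: {0}", inproc.GetValue("Assembly"));
                        Console.WriteLine("CodeBase: {0}", inproc.GetValue("CodeBase"));
                    }
                }
            }
        }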

    Read the article

  • Assigning a property across threads

    - by Mike
    I have set a property across threads before, and I found this post http://stackoverflow.com/questions/142003/cross-thread-operation-not-valid-control-accessed-from-a-thread-other-than-the-t about getting a property. I think my issue with the code below is that assigning the collection to a variable just creates a reference to the same object on the heap. So my question is: besides creating a deep copy, or copying the collection into a different List object, is there a better way to write the following to avoid the error during the for loop? Cross-thread operation not valid: Control 'lstProcessFiles' accessed from a thread other than the thread it was created on. Code: private void btnRunProcess_Click(object sender, EventArgs e) { richTextBox1.Clear(); BackgroundWorker bg = new BackgroundWorker(); bg.DoWork += new DoWorkEventHandler(bg_DoWork); bg.RunWorkerCompleted += new RunWorkerCompletedEventHandler(bg_RunWorkerCompleted); bg.RunWorkerAsync(lstProcessFiles.SelectedItems); } void bg_DoWork(object sender, DoWorkEventArgs e) { WorkflowEngine engine = new WorkflowEngine(); ListBox.SelectedObjectCollection selectedCollection=null; if (lstProcessFiles.InvokeRequired) { // Try #1 selectedCollection = (ListBox.SelectedObjectCollection) this.Invoke(new GetSelectedItemsDelegate(GetSelectedItems), new object[] { lstProcessFiles }); // Try #2 //lstProcessFiles.Invoke( // new MethodInvoker(delegate { // selectedCollection = lstProcessFiles.SelectedItems; })); } else { selectedCollection = lstProcessFiles.SelectedItems; } // *********Same Error on this line******************** // Cross-thread operation not valid: Control 'lstProcessFiles' accessed // from a thread other than the thread it was created on. foreach (string l in selectedCollection) { if (engine.LoadProcessDocument(String.Format(@"C:\TestDirectory\{0}", l))) { try { engine.Run(); WriteStep(String.Format("Ran {0} Successfully", l)); } catch { WriteStep(String.Format("{0} Failed", l)); } engine.PrintProcess(); WriteStep(String.Format("Printed {0} to debug", l)); } } } private delegate void WriteDelegate(string p); private delegate ListBox.SelectedObjectCollection GetSelectedItemsDelegate(ListBox list); private ListBox.SelectedObjectCollection GetSelectedItems(ListBox list) { return list.SelectedItems; }
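
    One of the options mentioned above, copying the selection into a different List object, would mean taking the snapshot on the UI thread before the worker starts, roughly like this (a sketch against the same control and handler names used in the question; it assumes the items really are strings and needs the usual System.Collections.Generic, System.ComponentModel and System.Linq usings):

        // Inside the same Form class as above.
        private void btnRunProcess_Click(object sender, EventArgs e)
        {
            richTextBox1.Clear();

            // Still on the UI thread, so reading lstProcessFiles here is safe.
            List<string> selectedFiles = lstProcessFiles.SelectedItems
                .Cast<string>()
                .ToList();

            BackgroundWorker bg = new BackgroundWorker();
            bg.DoWork += new DoWorkEventHandler(bg_DoWork);
            bg.RunWorkerCompleted += new RunWorkerCompletedEventHandler(bg_RunWorkerCompleted);
            bg.RunWorkerAsync(selectedFiles);   // pass the detached copy, not the control's collection
        }

        void bg_DoWork(object sender, DoWorkEventArgs e)
        {
            // No control access on the worker thread - only the plain list.
            List<string> selectedFiles = (List<string>)e.Argument;
            WorkflowEngine engine = new WorkflowEngine();
            foreach (string l in selectedFiles)
            {
                // ... same LoadProcessDocument / Run / WriteStep logic as above ...
            }
        }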

    Read the article

  • Inserting instructions into method.

    - by Alix
    Hi, (First of all, this is a very lengthy post, but don't worry: I've already implemented all of it, I'm just asking your opinion.) I'm having trouble implementing the following; I'd appreciate some help: I get a Type as parameter. I define a subclass using reflection. Notice that I don't intend to modify the original type, but create a new one. I create a property per field of the original class, like so: [- ignore this text here; I had to add something or the formatting wouldn't work <-] public class OriginalClass { private int x; } public class Subclass : OriginalClass { private int x; public int X { get { return x; } set { x = value; } } } [This is number 4! Numbered lists don't work if you add code in between; sorry] For every method of the superclass, I create an analogous method in the subclass. The method's body must be the same except that I replace the instructions ldfld x with callvirt this.get_X, that is, instead of reading from the field directly I call the get accessor. I'm having trouble with step 4. I know you're not supposed to manipulate code like this, but I really need to. Here's what I've tried: Attempt #1: Use Mono.Cecil. This would allow me to parse the body of the method into human-readable Instructions, and easily replace instructions. However, the original type isn't in a .dll file, so I can't find a way to load it with Mono.Cecil. Writing the type to a .dll, then load it, then modify it and write the new type to disk (which I think is the way you create a type with Mono.Cecil), and then load it seems like a huge overhead. Attempt #2: Use Mono.Reflection. This would also allow me to parse the body into Instructions, but then I have no support for replacing instructions. I've implemented a very ugly and inefficient solution using Mono.Reflection, but it doesn't yet support methods that contain try-catch statements (although I guess I can implement this) and I'm concerned that there may be other scenarios in which it won't work, since I'm using the ILGenerator in a somewhat unusual way. Also, it's very ugly ;). Here's what I've done: private void TransformMethod(MethodInfo methodInfo) { // Create a method with the same signature. ParameterInfo[] paramList = methodInfo.GetParameters(); Type[] args = new Type[paramList.Length]; for (int i = 0; i < args.Length; i++) { args[i] = paramList[i].ParameterType; } MethodBuilder methodBuilder = typeBuilder.DefineMethod( methodInfo.Name, methodInfo.Attributes, methodInfo.ReturnType, args); ILGenerator ilGen = methodBuilder.GetILGenerator(); // Declare the same local variables as in the original method. IList<LocalVariableInfo> locals = methodInfo.GetMethodBody().LocalVariables; foreach (LocalVariableInfo local in locals) { ilGen.DeclareLocal(local.LocalType); } // Get readable instructions. IList<Instruction> instructions = methodInfo.GetInstructions(); // I first need to define labels for every instruction in case I // later find a jump to that instruction. Once the instruction has // been emitted I cannot label it, so I'll need to do it in advance. // Since I'm doing a first pass on the method's body anyway, I could // instead just create labels where they are truly needed, but for // now I'm using this quick fix. Dictionary<int, Label> labels = new Dictionary<int, Label>(); foreach (Instruction instr in instructions) { labels[instr.Offset] = ilGen.DefineLabel(); } foreach (Instruction instr in instructions) { // Mark this instruction with a label, in case there's a branch // instruction that jumps here. 
ilGen.MarkLabel(labels[instr.Offset]); // If this is the instruction that I want to replace (ldfld x)... if (instr.OpCode == OpCodes.Ldfld) { // ...get the get accessor for the accessed field (get_X()) // (I have the accessors in a dictionary; this isn't relevant), MethodInfo safeReadAccessor = dataMembersSafeAccessors[((FieldInfo) instr.Operand).Name][0]; // ...instead of emitting the original instruction (ldfld x), // emit a call to the get accessor, ilGen.Emit(OpCodes.Callvirt, safeReadAccessor); // Else (it's any other instruction), reemit the instruction, unaltered. } else { Reemit(instr, ilGen, labels); } } } And here comes the horrible, horrible Reemit method: private void Reemit(Instruction instr, ILGenerator ilGen, Dictionary<int, Label> labels) { // If the instruction doesn't have an operand, emit the opcode and return. if (instr.Operand == null) { ilGen.Emit(instr.OpCode); return; } // Else (it has an operand)... // If it's a branch instruction, retrieve the corresponding label (to // which we want to jump), emit the instruction and return. if (instr.OpCode.FlowControl == FlowControl.Branch) { ilGen.Emit(instr.OpCode, labels[Int32.Parse(instr.Operand.ToString())]); return; } // Otherwise, simply emit the instruction. I need to use the right // Emit call, so I need to cast the operand to its type. Type operandType = instr.Operand.GetType(); if (typeof(byte).IsAssignableFrom(operandType)) ilGen.Emit(instr.OpCode, (byte) instr.Operand); else if (typeof(double).IsAssignableFrom(operandType)) ilGen.Emit(instr.OpCode, (double) instr.Operand); else if (typeof(float).IsAssignableFrom(operandType)) ilGen.Emit(instr.OpCode, (float) instr.Operand); else if (typeof(int).IsAssignableFrom(operandType)) ilGen.Emit(instr.OpCode, (int) instr.Operand); ... // you get the idea. This is a pretty long method, all like this. } Branch instructions are a special case because instr.Operand is SByte, but Emit expects an operand of type Label. Hence the need for the Dictionary labels. As you can see, this is pretty horrible. What's more, it doesn't work in all cases, for instance with methods that contain try-catch statements, since I haven't emitted them using methods BeginExceptionBlock, BeginCatchBlock, etc, of ILGenerator. This is getting complicated. I guess I can do it: MethodBody has a list of ExceptionHandlingClause that should contain the necessary information to do this. But I don't like this solution anyway, so I'll save this as a last-resort solution. Attempt #3: Go bare-back and just copy the byte array returned by MethodBody.GetILAsByteArray(), since I only want to replace a single instruction for another single instruction of the same size that produces the exact same result: it loads the same type of object on the stack, etc. So there won't be any labels shifting and everything should work exactly the same. I've done this, replacing specific bytes of the array and then calling MethodBuilder.CreateMethodBody(byte[], int), but I still get the same error with exceptions, and I still need to declare the local variables or I'll get an error... even when I simply copy the method's body and don't change anything. So this is more efficient but I still have to take care of the exceptions, etc. Sigh. 
Here's the implementation of attempt #3, in case anyone is interested: private void TransformMethod(MethodInfo methodInfo, Dictionary<string, MethodInfo[]> dataMembersSafeAccessors, ModuleBuilder moduleBuilder) { ParameterInfo[] paramList = methodInfo.GetParameters(); Type[] args = new Type[paramList.Length]; for (int i = 0; i < args.Length; i++) { args[i] = paramList[i].ParameterType; } MethodBuilder methodBuilder = typeBuilder.DefineMethod( methodInfo.Name, methodInfo.Attributes, methodInfo.ReturnType, args); ILGenerator ilGen = methodBuilder.GetILGenerator(); IList<LocalVariableInfo> locals = methodInfo.GetMethodBody().LocalVariables; foreach (LocalVariableInfo local in locals) { ilGen.DeclareLocal(local.LocalType); } byte[] rawInstructions = methodInfo.GetMethodBody().GetILAsByteArray(); IList<Instruction> instructions = methodInfo.GetInstructions(); int k = 0; foreach (Instruction instr in instructions) { if (instr.OpCode == OpCodes.Ldfld) { MethodInfo safeReadAccessor = dataMembersSafeAccessors[((FieldInfo) instr.Operand).Name][0]; byte[] bytes = toByteArray(OpCodes.Callvirt.Value); for (int m = 0; m < OpCodes.Callvirt.Size; m++) { rawInstructions[k++] = bytes[put.Length - 1 - m]; } bytes = toByteArray(moduleBuilder.GetMethodToken(safeReadAccessor).Token); for (int m = instr.Size - OpCodes.Ldfld.Size - 1; m >= 0; m--) { rawInstructions[k++] = bytes[m]; } } else { k += instr.Size; } } methodBuilder.CreateMethodBody(rawInstructions, rawInstructions.Length); } private static byte[] toByteArray(int intValue) { byte[] intBytes = BitConverter.GetBytes(intValue); if (BitConverter.IsLittleEndian) Array.Reverse(intBytes); return intBytes; } private static byte[] toByteArray(short shortValue) { byte[] intBytes = BitConverter.GetBytes(shortValue); if (BitConverter.IsLittleEndian) Array.Reverse(intBytes); return intBytes; } (I know it isn't pretty. Sorry. I put it quickly together to see if it would work.) I don't have much hope, but can anyone suggest anything better than this? Sorry about the extremely lengthy post, and thanks.

    Read the article

  • Changing customErrors in web.config semi-dynamically

    - by Tom Ritter
    The basic idea is we have a test environment which mimics Production, so customErrors="RemoteOnly". We just built a test harness that runs against the test environment and detects breaks. We would like it to be able to pull back the detailed error. But we don't want to turn customErrors="On" because then it doesn't mimic Production. I've looked around and thought a lot, and everything I've come up with isn't possible. Am I wrong about any of these points?
      - We can't turn customErrors on at runtime, because when you call configuration.Save() it writes the web.config to disk and now it's Off for every request.
      - We can't symlink the files into a new top-level directory with its own web.config, because we're on Windows and Subversion on Windows doesn't do symlinks.
      - We can't use URL mapping to make an empty folder dir2 with its own web.config and make the files in dir1 appear to be in dir2 - the web.config doesn't apply.
      - We can't copy all the aspx files into dir2 with its own web.config, because none of the links would be consistent and it's a horrible hacky solution.
      - We can't change customErrors in web.config based on hostname (e.g. add another DNS entry to the test server), because it's not possible/supported.
      - We can't do any virtual directory shenanigans to make it work.
    If I'm not, is there a way to accomplish what I'm trying to do? Turn on customErrors site-wide under certain circumstances (a DNS name or even a querystring value)?
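
    One direction not ruled out above (sketched here untested, so treat it as an idea rather than a known-good answer, and the header name is made up) is to leave customErrors alone and have the application itself hand back the detail, but only for requests carrying a marker that the test harness sends, e.g. in Global.asax:

        // Global.asax.cs sketch: only requests carrying a shared secret header get details.
        protected void Application_Error(object sender, EventArgs e)
        {
            HttpContext ctx = HttpContext.Current;

            // The harness sends this header; normal users never do.
            if (ctx.Request.Headers["X-TestHarness-Key"] != "some-shared-secret")
                return;   // fall through to the normal customErrors behaviour

            Exception ex = ctx.Server.GetLastError();

            ctx.Server.ClearError();              // suppress the customErrors page for this request
            ctx.Response.Clear();
            ctx.Response.StatusCode = 500;
            ctx.Response.ContentType = "text/plain";
            ctx.Response.Write(ex == null ? "Unknown error" : ex.ToString());
            ctx.Response.End();
        }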

    Read the article

  • Svn import with auto-props & pre-commit hook

    - by James Tisato
    My company's svn repo has a lot of MS Word docs in it. We've implemented a policy that all .doc files must have the svn:needs-lock property set to prevent parallel access on files that are hard to merge (we've also done this for xls, ppt, pdf etc.). We've implemented the policy by distributing a svn config with auto-props set appropriately for all relevant document types. We've also set up a pre-commit hook that checks that all added files of these types have the needs-lock property set (i.e. if they forget/are too lazy to update their svn config file, they won't be able to add any docs to the repo). The problem I'm having, however, is that the pre-commit hook fails when users try to import files into the repo, e.g. some users like to add files directly thru TortoiseSVN's Repo Browser, which effectively is an svn import. Through testing on other file types, I have seen that doing an import does in fact apply the auto-props listed in my config, but they don't seem to be applied at the point that the pre-commit hook runs. When importing .doc files, the hook fails, saying that the needs-lock property is missing. Is there really much difference between adding a single file to a working copy and committing it vs importing a file directly? Do we need to tailor our precommit hook in some way to cater for this scenario?

    Read the article

  • CSRF Protection in AJAX Requests using MVC2

    - by mnemosyn
    The page I'm building depends heavily on AJAX. Basically, there is just one "page" and every data transfer is handled via AJAX. Since overoptimistic caching on the browser side leads to strange problems (data not reloaded), I have to perform all requests (also reads) using POST - that forces a reload. Now I want to prevent the page against CSRF. With form submission, using Html.AntiForgeryToken() works neatly, but in AJAX-request, I guess I will have to append the token manually? Is there anything out-of-the box available? My current attempt looks like this: I'd love to reuse the existing magic. However, HtmlHelper.GetAntiForgeryTokenAndSetCookie is private and I don't want to hack around in MVC. The other option is to write an extension like public static string PlainAntiForgeryToken(this HtmlHelper helper) { // extract the actual field value from the hidden input return helper.AntiForgeryToken().DoSomeHackyStringActions(); } which is somewhat hacky and leaves the bigger problem unsolved: How to verify that token? The default verification implementation is internal and hard-coded against using form fields. I tried to write a slightly modified ValidateAntiForgeryTokenAttribute, but it uses an AntiForgeryDataSerializer which is private and I really didn't want to copy that, too. At this point it seems to be easier to come up with a homegrown solution, but that is really duplicate code. Any suggestions how to do this the smart way? Am I missing something completely obvious?
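
    The PlainAntiForgeryToken idea above could be filled in without touching MVC internals by scraping the value out of the markup that AntiForgeryToken() already emits (a rough sketch; it assumes the helper renders a single hidden input and that the AJAX POST then sends the value back under the default __RequestVerificationToken field name, so the stock ValidateAntiForgeryToken attribute can find it in Request.Form along with the cookie):

        using System.Text.RegularExpressions;
        using System.Web.Mvc;

        public static class AntiForgeryExtensions
        {
            // Returns just the token value so script code can append it to AJAX POST bodies.
            public static string PlainAntiForgeryToken(this HtmlHelper helper)
            {
                // AntiForgeryToken() renders roughly:
                // <input name="__RequestVerificationToken" type="hidden" value="..." />
                string hidden = helper.AntiForgeryToken().ToString();
                Match match = Regex.Match(hidden, "value=\"([^\"]+)\"");
                return match.Success ? match.Groups[1].Value : null;
            }
        }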

    Read the article

  • Change Data Capture or Change Tracking - Same as Traditional Audit Trail Table?

    - by HardCode
    Before I delve into the abyss of Microsoft documentation any deeper, I'd like to know if someone experienced with Change Data Capture and Change Tracking knows if one or both of these can be used to replace the traditional ... "Audit trail table copy of the 'real table' (all of the fields of the original table, plus date/time, user ID, and DML action field) inserted into by Triggers" ... setup for a database table audit trail, where the trigger populates the audit trail table (which is all manual work). The MSDN overview documentation explains at a high level what Change Data Capture and Change Tracking are, but it isn't clear enough to me, and doesn't state outright, that these tools can be used to replace the traditional audit trail tables we've made so often. Can someone with any experience using Change Data Capture and Change Tracking save me a lot of time, or confirm that I am spending time looking at the right tool? The critical part of our audit trail is capturing all changes to a table's fields (on INSERT, UPDATE, DELETE), when it happened, and who did it. These changes are commonly provided to an end user chronologically via an audit trail report. Which leads to another question: assuming Change Data Capture or Change Tracking is the solution, I'd assume that this data can be queried just like data from a normal table? EDIT: I need a permanent audit trail, regardless of time. I see that Change Data Capture has to do with the transaction logs, so this sounds finite to me.
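
    On the last question, both features expose their data through ordinary SQL queries, just via functions rather than the base table. A sketch of the Change Tracking side (hypothetical dbo.Customer table with an int key CustomerId and change tracking already enabled); note that Change Tracking records which rows changed and how, but not the old values or the user, and both features prune history after a retention period, which matters for a permanent trail:

        using System;
        using System.Data.SqlClient;

        class ChangeTrackingDemo
        {
            static void Main()
            {
                long lastSyncVersion = 0; // version recorded after the previous sync

                using (var conn = new SqlConnection("Server=.;Database=MyDb;Integrated Security=true"))
                {
                    conn.Open();

                    // CHANGETABLE(CHANGES ...) returns one row per changed primary key
                    // since the supplied version, with the operation (I/U/D) but without
                    // the old column values and without the user who made the change.
                    var cmd = new SqlCommand(
                        @"SELECT ct.CustomerId, ct.SYS_CHANGE_OPERATION, ct.SYS_CHANGE_VERSION
                          FROM CHANGETABLE(CHANGES dbo.Customer, @lastVersion) AS ct",
                        conn);
                    cmd.Parameters.AddWithValue("@lastVersion", lastSyncVersion);

                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            Console.WriteLine("key {0}: {1} at version {2}",
                                reader.GetInt32(0), reader.GetString(1), reader.GetInt64(2));
                        }
                    }
                }
            }
        }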

    Read the article

  • Google Static Maps via TIdHTTP

    - by cloudstrif3
    Hi all. I'm trying to return content from maps.google.com from within Delphi 2006 using the TIdHTTP component. My code is as follows: procedure TForm1.GetGoogleMap(); var t_GetRequest: String; t_Source: TStringList; t_Stream: TMemoryStream; begin t_Source := TStringList.Create; try t_Stream := TMemoryStream.Create; try t_GetRequest := 'http://maps.google.com/maps/api/staticmap?' + 'center=Brooklyn+Bridge,New+York,NY' + '&zoom=14' + '&size=512x512' + '&maptype=roadmap' + '&markers=color:blue|label:S|40.702147,-74.015794' + '&markers=color:green|label:G|40.711614,-74.012318' + '&markers=color:red|color:red|label:C|40.718217,-73.998284' + '&sensor=false'; IdHTTP1.Post(t_GetRequest, t_Source, t_Stream); t_Stream.SaveToFile('google.html'); finally t_Stream.Free; end; finally t_Source.Free; end; end; However I keep getting the response HTTP/1.0 403 Forbidden. I assume this means that I don't have permission to make this request, but if I copy the URL into my web browser (IE 8), it works fine. Is there some header information that I need, or something else? Thanks in advance.

    Read the article

  • Flex-built SWF's no longer work, error 2048, 2046, 2032

    - by Kevin
    I'm really confused about this problem, and I'm pretty new to Flex. Basically, anything I try to build with mxmlc fails to run now, giving me the above three errors depending on what I do. It was working 30 minutes ago, I've been spending that time trying to figure out what has changed. I redownloaded the Flex SDK, cleared my assetcache, have cleared Firefox's cache. (I'm using Linux.) Even if I compile with -static-link-runtime-shared-libraries=false, since it seems like #2048 is a RSL problem, it still refuses to run. Another strange thing, if I keep <policy-file-url>http://fpdownload.adobe.com/pub/swz/crossdomain.xml</policy-file-url> <rsl-url>textLayout_1.0.0.595.swz</rsl-url> in my flex-config file, then firebug tells me that my swf file is trying to access a copy of that in the app's folder, giving error 2032. And if I stick the one I have in frameworks/rsls/ then it gives me error 2046. I don't know how it could not be properly signed, unless Adobe magically changed a signature and didn't update their flex SDK. Any help will be appreciated.

    Read the article

  • Flex - Issues retrieving an XMLList

    - by BS_C3
    Hello! I'm having a problem retrieving a XMLList and I don't understand why. I have an application that is running properly. It uses some data from two xml files called division.xml and store.xml. I noticed that I have some data in division.xml that should be in store.xml, so I did a copy/paste of the data from one file to the other. This is the data I copied: <stores> <store> <odeis>101</odeis> <name></name> <password></password> <currency></currency> <currSymbol></currSymbol> </store> <store> <odeis>102</odeis> <name></name> <password></password> <currency></currency> <currSymbol></currSymbol> </store> </stores> In the application, I list all odeis codes and I need to retrieve the block store corresponding to the selected odeis code. Before moving the data into store.xml, this is how I retrieved the block: var node:XMLList = divisionData.division.(@name==HomePageData.instance.divisionName).stores.store.(odeis == HomePageData.instance.storeCodeOdeis) This is how I retrieve it after copying the data into store.xml: var node:XMLList = storeData.stores.(@name==HomePageData.instance.divisionName).store.(odeis == HomePageData.instance.storeCodeOdeis) And I'm currently getting the following error: ReferenceError: Error #1065: The variable odeis is not defined. Could anyone enlighten me? Cause I really have no clue of why it is not working... Thanks for any tips you can give. Regards, BS_C3

    Read the article

  • How to find whether a Mutex in C# is acquired?

    - by TN
    How can I find from mutex handle in C# that a mutex is acquired? When mutex.WaitOne(timeout) timeouts, it returns false. However, how can I find that from the mutex handle? (Maybe using p/invoke.) UPDATE: public class InterProcessLock : IDisposable { readonly Mutex mutex; public bool IsAcquired { get; private set; } public InterProcessLock(string name, TimeSpan timeout) { bool created; var security = new MutexSecurity(); security.AddAccessRule(new MutexAccessRule(new SecurityIdentifier(WellKnownSidType.WorldSid, null), MutexRights.Synchronize | MutexRights.Modify, AccessControlType.Allow)); mutex = new Mutex(false, name, out created, security); IsAcquired = mutex.WaitOne(timeout); } #region IDisposable Members public void Dispose() { if (IsAcquired) mutex.ReleaseMutex(); } #endregion } Currently, I am using my own property IsAcquired to determine whether I should release a mutex. Not essential but clearer, would be not to use a secondary copy of the information represented by IsAcquired property, but rather to ask directly the mutex whether it is acquired by me. Since calling mutex.ReleaseMutex() throws an exception if it is not acquired by me. (By acquired state I mean that the mutex is in not-signaled state when I am owning the mutex.)
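
    For what it's worth, a usage sketch of the class above (the lock name and timeout are made up), showing the IsAcquired check it currently relies on:

        using System;

        class Program
        {
            static void Main()
            {
                // Assumes the InterProcessLock class from the question is in scope.
                using (var theLock = new InterProcessLock(@"Global\MyApp.SingleInstance",
                                                          TimeSpan.FromSeconds(5)))
                {
                    if (theLock.IsAcquired)
                    {
                        Console.WriteLine("Mutex acquired; doing the protected work...");
                    }
                    else
                    {
                        // WaitOne timed out: another process still owns the mutex.
                        Console.WriteLine("Could not acquire the mutex within 5 seconds.");
                    }
                }   // Dispose releases the mutex only when IsAcquired is true.
            }
        }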

    Read the article

  • Understanding evaluation of expressions containing '++' and '->' operators in C.

    - by Leif Ericson
    Consider this example: struct { int num; } s, *ps; s.num = 0; ps = &s; ++ps->num; printf("%d", s.num); /* Prints 1 */ It prints 1. So I understand that it is because, according to operator precedence, -> is higher than ++, so the value ps->num (which is 0) is fetched first and then the ++ operator operates on it, so it increments it to 1. struct { int num; } s, *ps; s.num = 0; ps = &s; ps++->num; printf("%d", s.num); /* Prints 0 */ In this example I get 0 and I don't understand why; the explanation of the first example should be the same for this example. But it seems that this expression is evaluated as follows: At first, the operator ++ operates, and it operates on ps, so it increments it to the next struct. Only then does -> operate, and it does nothing because it just fetches the num field of the next struct and does nothing with it. But this contradicts the precedence of operators, which says that -> has higher precedence than ++. Can someone explain this behavior? Edit: After reading two answers which refer to a C++ precedence table which indicates that the prefix ++/-- operators have lower precedence than ->, I did some googling and came up with this link that states that this rule applies also to C itself. It fits exactly and fully explains this behavior, but I must add that the table in this link contradicts a table in my own copy of K&R ANSI C. So if you have suggestions as to which source is correct I would like to know. Thanks.

    Read the article

  • Problem calling Java from PHP script

    - by Jack
    I am working on Windows. I am running PHP (5.1.3) scripts on Tomcat using the PHP/Java Bridge. Here is my simple code: //test.php <?php require_once("java\Java.inc"); $systemInfo = new Java("Test"); print $systemInfo->foo(); ?> Test.class is in the same folder as test.php, but the PHP file is not able to locate the Test class and I get the following error - Fatal error: Uncaught [[o:Exception]:"java.lang.Exception: CreateInstance failed: new Test. If I use a standard class like the one below, it works - <?php require_once("java\Java.inc"); $systemInfo = new Java("java.lang.System"); print "Total seconds since January 1, 1970: ".$systemInfo->currentTimeMillis(); ?> What should I do? 1) Should I copy my class to the standard location where all Java classes are kept (what is this location?), or 2) do I need to make some changes in the php.ini file?

    Read the article

  • Method binding to base method in external library can't handle new virtual methods "between"

    - by Berg
    Lets say I have a library, version 1.0.0, with the following contents: public class Class1 { public virtual void Test() { Console.WriteLine( "Library:Class1 - Test" ); Console.WriteLine( "" ); } } public class Class2 : Class1 { } and I reference this library in a console application with the following contents: class Program { static void Main( string[] args ) { var c3 = new Class3(); c3.Test(); Console.ReadKey(); } } public class Class3 : ClassLibrary1.Class2 { public override void Test() { Console.WriteLine("Console:Class3 - Test"); base.Test(); } } Running the program will output the following: Console:Class3 - Test Library:Class1 - Test If I build a new version of the library, version 2.0.0, looking like this: public class Class1 { public virtual void Test() { Console.WriteLine( "Library:Class1 - Test V2" ); Console.WriteLine( "" ); } } public class Class2 : Class1 { public override void Test() { Console.WriteLine("Library:Class2 - Test V2"); base.Test(); } } and copy this version to the bin folder containing my console program and run it, the results are: Console:Class3 - Test Library:Class1 - Test V2 I.e, the Class2.Test method is never executed, the base.Test call in Class3.Test seems to be bound to Class1.Test since Class2.Test didn't exist when the console program was compiled. This was very surprising to me and could be a big problem in situations where you deploy new versions of a library without recompiling applications. Does anyone else have experience with this? Are there any good solutions? This makes it tempting to add empty overrides that just calls base in case I need to add some code at that level in the future...
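
    The "empty overrides" idea mentioned at the end would make version 1.0.0 of the library look like this (a sketch); the point is that consumers compiled against it bind their base.Test() calls to Class2.Test, so a later version can put real code in that override without the consumers recompiling:

        using System;

        public class Class1
        {
            public virtual void Test()
            {
                Console.WriteLine("Library:Class1 - Test");
                Console.WriteLine("");
            }
        }

        public class Class2 : Class1
        {
            // Deliberately empty for now: it only forwards to the base implementation,
            // but it reserves the override so future versions can add behaviour here.
            public override void Test()
            {
                base.Test();
            }
        }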

    Read the article

  • Wrapping FUSE from Go

    - by Matt Joiner
    I'm playing around with wrapping FUSE with Go. However I've come stuck with how to deal with struct fuse_operations. I can't seem to expose the operations struct by declaring type Operations C.struct_fuse_operations as the members are lower case, and my pure-Go sources would have to use C-hackery to set the members anyway. My first error in this case is "can't set getattr" in what looks to be the Go equivalent of a default copy constructor. My next attempt is to expose an interface that expects GetAttr, ReadLink etc, and then generate C.struct_fuse_operations and bind the function pointers to closures that call the given interface. This is what I've got (explanation continues after code): package fuse // #include <fuse.h> // #include <stdlib.h> import "C" import ( //"fmt" "os" "unsafe" ) type Operations interface { GetAttr(string, *os.FileInfo) int } func Main(args []string, ops Operations) int { argv := make([]*C.char, len(args) + 1) for i, s := range args { p := C.CString(s) defer C.free(unsafe.Pointer(p)) argv[i] = p } cop := new(C.struct_fuse_operations) cop.getattr = func(*C.char, *C.struct_stat) int {} argc := C.int(len(args)) return int(C.fuse_main_real(argc, &argv[0], cop, C.size_t(unsafe.Sizeof(cop)), nil)) } package main import ( "fmt" "fuse" "os" ) type CpfsOps struct { a int } func (me *CpfsOps) GetAttr(string, *os.FileInfo) int { return -1; } func main() { fmt.Println(os.Args) ops := &CpfsOps{} fmt.Println("fuse main returned", fuse.Main(os.Args, ops)) } This gives the following error: fuse.go:21[fuse.cgo1.go:23]: cannot use func literal (type func(*_Ctype_char, *_Ctype_struct_stat) int) as type *[0]uint8 in assignment I'm not sure what to pass to these members of C.struct_fuse_operations, and I've seen mention in a few places it's not possible to call from C back into Go code. If it is possible, what should I do? How can I provide the "default" values for interface functions that acts as though the corresponding C.struct_fuse_operations member is set to NULL?

    Read the article

  • How to search an inbox using Zend Mail

    - by Bob Cavezza
    The following is a function from Zend_Mail_Protocol_Imap. I read that to search emails, I would want to override it using Zend_Mail_Storage_Imap (which is what I'm using now to grab email from Gmail). I copied and pasted the following function into Zend_Mail_Storage_Imap, but I'm having issues with the params. I can't find documentation on what to use for the array $params. I initially thought it was the search term, before reading it more thoroughly. I'm out of ideas. Here's the function... /** * do a search request * * This method is currently marked as internal as the API might change and is not * safe if you don't take precautions. * * @internal * @return array message ids */ public function search(array $params) { $response = $this->requestAndResponse('SEARCH', $params); if (!$response) { return $response; } foreach ($response as $ids) { if ($ids[0] == 'SEARCH') { array_shift($ids); return $ids; } } return array(); } Initially I thought this would do the trick... $storage = new Zend_Mail_Storage_Imap($imap); $searchresults = $storage->search('search term'); But nope, I need to send the info in an array. Any ideas?

    Read the article

  • Partial template specialization on a class

    - by Jonathan Swinney
    I'm looking for a better way to do this. I have a chunk of code that needs to handle several different objects that contain different types. The structure that I have looks like this: class Base { // some generic methods } template <typename T> class TypedBase : public Base { // common code with template specialization private: std::map<int,T> mapContainingSomeDataOfTypeT; } template <> class TypedBase<std::string> : public Base { // common code with template specialization public: void set( std::string ); // functions not needed for other types std::string get(); private: std::map<int,std::string> mapContainingSomeDataOfTypeT; // some data not needed for other types } Now I need to add some additional functionality that only applies to one of the derived classes. Specifically the std::string specialization, but the type doesn't actually matter. The class is big enough that I would prefer not to copy the whole thing simply to specialize a small part of it. I need to add a couple of functions (an accessor and a modifier) and modify the body of several of the other functions. Is there a better way to accomplish this?

    Read the article
