Search Results

Search found 22007 results on 881 pages for 'variable reference'.


  • C++: converting back and forth between derived and base classes

    - by user127817
    I was wondering if there is a way in C++ to accomplish the following. I have a base class called ResultBase and two classes derived from it, Variable and Expression. I have a few methods that do work on vector<ResultBase>, and I want to be able to pass vectors of Variable and Expression into these methods. I can achieve this by creating a vector<ResultBase> and using static_cast to fill it with the members of my vector of Variable/Expression. However, once the vector has run through the methods, I want to be able to get it back as a vector of Variable/Expression; I'll know for sure which one I want back. static_cast won't work here, as there is no way to reconstruct a Variable/Expression from a ResultBase, and more importantly I wouldn't have the original properties of the Variables/Expressions. The methods modify some of the properties of the ResultBase, and I need those changes to be reflected in the original vectors (e.g. ResultBase has a property called IsLive, and one of the methods modifies this property; I want the new IsLive value to be reflected in the derived object used to create the ResultBase). What's the easiest way to accomplish this?
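
    The usual way around this (a sketch, not the asker's code; the class layouts are assumed) is to let the methods take pointers to the base class, so they mutate the original derived objects in place and no conversion back is ever needed:

        #include <vector>

        struct ResultBase { bool IsLive = false; virtual ~ResultBase() {} };
        struct Variable   : ResultBase { /* ... */ };
        struct Expression : ResultBase { /* ... */ };

        // works on any mix of derived objects without slicing them
        void markLive(std::vector<ResultBase*>& items) {
            for (ResultBase* r : items) r->IsLive = true;
        }

        int main() {
            std::vector<Variable> vars(3);
            std::vector<ResultBase*> view;          // non-owning "view" of the originals
            for (Variable& v : vars) view.push_back(&v);
            markLive(view);                         // vars[0].IsLive etc. are now true
        }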

    Read the article

  • How to filter node list based on the contents of another node list

    - by ~otakuj462
    Hi, I'd like to use XSLT to filter a node list based on the contents of another node list. Specifically, I'd like to filter a node list such that elements with identical id attributes are eliminated from the resulting node list. Priority should be given to one of the two node lists. The way I originally imagined implementing this was to do something like:

        <xsl:variable name="filteredList1" select="$list1[not($list2[@id_from_list1 = @id_from_list2])]"/>

    The problem is that the context node changes in the predicate for $list2, so I don't have access to attribute @id_from_list1. Due to these scoping constraints, it's not clear to me how I would be able to refer to an attribute from the outer node list using nested predicates in this fashion. To get around the issue of the context node, I've tried to create a solution involving a for-each loop, like the following:

        <xsl:variable name="filteredList1">
          <xsl:for-each select="$list1">
            <xsl:variable name="id_from_list1" select="@id_from_list1"/>
            <xsl:if test="not($list2[@id_from_list2 = $id_from_list1])">
              <xsl:copy-of select="."/>
            </xsl:if>
          </xsl:for-each>
        </xsl:variable>

    But this doesn't work correctly, and it's not clear to me how it fails: filteredList1 has a length of 1 but appears to be empty. It's strange behaviour, and anyhow, I feel there must be a more elegant approach. I'd appreciate any guidance anyone can offer. Thanks.
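
    For the record, XPath 1.0 can express this filter directly, because comparing two node-sets with = is true whenever any pair of values matches. A sketch, assuming both lists carry a comparable attribute simply named @id:

        <xsl:variable name="filteredList1" select="$list1[not(@id = $list2/@id)]"/>

    Here not(@id = $list2/@id) keeps exactly those $list1 nodes whose @id equals no @id in $list2, with no change of context node to worry about.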

    Read the article

  • Protecting my app's security from disassembling

    - by sandis
    So I recently tried disassembling one of my Android apps, and to my horror I discovered that the code was quite readable. Even worse, all my variable names were intact! I thought those would be compressed to something unreadable at compile time. The app is triggered to expire after a certain time, and it was now trivial for me to find my function named checkIfExpired() and the variable "expired". Is there any good way of making it harder for a potential hacker to mess with my app? Before someone states the obvious: yes, it is security through obscurity. But obviously this is my only option, since the user will always have access to all my code; this is the same for all apps. The details of my deactivation scheme are unimportant; the point is that I don't want a disassembler to reveal some of the things I do. Side questions: why are the variable names not compressed? Could it be the case that my program would run faster if I stopped using really long variable names, as is my habit?
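
    The standard tool here is ProGuard, which shrinks and obfuscates the bytecode so identifiers like checkIfExpired become single letters. A minimal configuration sketch (the option names are real ProGuard rules; which entry points must be kept depends on the app):

        # keep Android entry points so the app still starts; everything else
        # gets renamed to short, meaningless identifiers during obfuscation
        -keep public class * extends android.app.Activity
        -keep public class * extends android.app.Service

    As for the side question: identifier length has no effect on speed. Names are resolved when classes are compiled and loaded, so long names only make decompiled output easier to read.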

    Read the article

  • C++ Storing variables and inheritance

    - by Kaa
    Hello everyone, here is my situation. I have an event-driven system where all my handlers are derived from an IHandler class and implement an onEvent(const Event &event) method. Event is the base class for all events and contains only the enumerated event type. All actual events are derived from it, including the EventKey event, which has two fields: (uchar) keyCode and (bool) isDown. Here's the interesting part: I generate an EventKey event using the following syntax:

        Event evt = EventKey(15, true);

    and I ship it to the handlers:

        EventDispatch::sendEvent(evt); // void EventDispatch::sendEvent(const Event &event);

    (EventDispatch contains a linked list of IHandlers and calls their onEvent(const Event &event) method with the parameter containing the sent event.) Now the actual question: say I want my handlers to queue the events in a queue of type Event. How do I do that?

    - Dynamic pointers with reference counting sound like too big a solution.
    - Making copies is more difficult than it sounds, since I'm only receiving a reference to the base type; each time I would need to check the type of the event, downcast to EventKey and then make a copy to store in the queue. That sounds like the only solution, but it is unpleasant: I would need to know every single type of event and check for it on every event received, which sounds like a bad plan.
    - I could allocate the events dynamically and send around pointers to those events, enqueueing them if wanted. But short of reference counting, how would I keep track of that memory? Do you know any way to implement a very light reference counter that wouldn't interfere with the user?

    What do you think would be a good solution to this design? I thank everyone in advance for your time. Sincerely, Kaa
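
    Worth noting: Event evt = EventKey(15, true); already slices the EventKey down to a plain Event before it is dispatched, so any copy made later cannot recover keyCode/isDown. A sketch of the pointer-based route, using std::shared_ptr (std::tr1::shared_ptr or boost::shared_ptr on older toolchains) as the "very light reference counter"; the class definitions are assumptions:

        #include <memory>
        #include <queue>

        struct Event { virtual ~Event() {} };                  // polymorphic base
        struct EventKey : Event {
            unsigned char keyCode; bool isDown;
            EventKey(unsigned char k, bool d) : keyCode(k), isDown(d) {}
        };

        std::queue<std::shared_ptr<Event>> pending;

        // handlers can stash the pointer; the event is freed when the last
        // queue/handler holding it lets go, with no manual tracking needed
        void sendEvent(const std::shared_ptr<Event>& e) { pending.push(e); }

        int main() { sendEvent(std::make_shared<EventKey>(15, true)); }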

    Read the article

  • SPSS - sums of squares change radically with slight model changes in ANOVA?

    - by Pat
    I have noticed that the sums of squares in my models can change fairly radically with even the slightest adjustment to my models. Is this normal? I'm using SPSS 16, and both models presented below used the same data and variables with only one small change: categorizing one of the variables as either a 2-level or a 3-level variable. Details: using a 2 x 2 x 6 mixed-model ANOVA, with the 6 being the repeated measure, I get the following in the between-group analysis:

        Source    | Type III SS | df | MS      | F      | Sig
        ----------+-------------+----+---------+--------+------
        intercept | 4086.46     | 1  | 4086.46 | 104.93 | .000
        X         | 224.61      | 1  | 224.61  | 5.77   | .019
        Y         | 2.60        | 1  | 2.60    | .07    | .80
        X by Y    | 19.25       | 1  | 19.25   | .49    | .49
        Error     | 2570.40     | 66 | 38.95   |        |

    Then, when I use the exact same data but a slightly different model in which variable Y has 3 levels instead of 2, I get the following:

        Source    | Type III SS | df | MS      | F      | Sig
        ----------+-------------+----+---------+--------+------
        intercept | 3603.88     | 1  | 3603.88 | 90.89  | .000
        X         | 171.89      | 1  | 171.89  | 4.34   | .041
        Y         | 19.23       | 2  | 9.62    | .24    | .79
        X by Y    | 17.90       | 2  | 17.90   | .80    | .80
        Error     | 2537.76     | 64 | 39.65   |        |

    I don't understand why variable X would have a different sum of squares simply because variable Y gets divided up into 3 levels instead of 2. This is also the case in the within-groups analysis. Please help me understand :D Thank you in advance. Pat

    Read the article

  • Dealing with Expression Blend's lack of support for C++/CLI projects

    - by Brian Ensink
    I have a WPF C# project that references a C++/CLI mixed-mode project. I'm having trouble using the WPF project in Expression Blend 3. I'm new to Blend, so perhaps this is obvious, but it won't display the XAML designer properly until it builds the project. In my case it complains that my custom commands are not "recognized or accessible", and the suggested fix is to build the project in Blend. But I can't build the project, because it references a C++/CLI mixed-mode project, which Blend won't load. The WPF project is pure C#; it just happens to reference a C++/CLI mixed-mode project, and I'm not asking Blend to do anything with the mixed-mode assembly. How can I work around this problem? Edit: I was able to get it to build by removing the reference to the C++/CLI mixed-mode project and replacing it with a reference to the actual assembly. However, this is not ideal, because in my past experience Visual Studio will not always resolve such a reference correctly when switching between release and debug configurations.

    Read the article

  • System calls in Windows & the Native API?

    - by claws
    Recently I've been using a lot of assembly language on *NIX operating systems, and I was wondering about the Windows domain. The calling convention in Linux:

        mov $SYS_Call_NUM, %eax
        mov $param1, %ebx
        mov $param2, %ecx
        int $0x80

    That's it. That is how we make a system call in Linux. Reference for all system calls in Linux: regarding which $SYS_Call_NUM and which parameters to use, there is this reference: http://docs.cs.up.ac.za/programming/asm/derick_tut/syscalls.html. Official reference: http://kernel.org/doc/man-pages/online/dir_section_2.html. Calling convention in Windows: ??? Reference for all system calls in Windows: ??? Unofficial: http://www.metasploit.com/users/opcode/syscalls.html, but how do I use these in assembly unless I know the calling convention? Official: ??? If you say they didn't document it, then how is one going to write libc for Windows without knowing the system calls? How is one going to do Windows assembly programming? At least in driver programming one needs to know these, right? Now, what's up with the so-called Native API? Are "Native API" and "system calls" for Windows different terms referring to the same thing? To confirm, I compared these two unofficial sources. System calls: http://www.metasploit.com/users/opcode/syscalls.html. Native API: http://undocumented.ntinternals.net/aindex.html. My observations: all system calls begin with the letters Nt, whereas the Native API contains many functions that do not begin with Nt; the system calls of Windows are a subset of the Native API. Can anyone confirm this and explain?

    Read the article

  • C++ STL Map vs Vector speed

    - by sub
    In the interpreter for my experimental programming language I have a symbol table. Each symbol consists of a name and a value (the value can be, e.g., of type string, int, function, etc.). At first I represented the table with a vector and iterated through the symbols, checking whether the given symbol name fit. Then I thought that using a map, in my case map<string, symbol>, would be better than iterating through the vector all the time, but it's a bit hard to explain this part; I'll try. If a variable is retrieved for the first time in a program in my language, its position in the symbol table has to be found (using the vector for now). If I iterated through the vector every time the line gets executed (think of a loop), it would be terribly slow (as it currently is, nearly as slow as Microsoft's batch). So I could use a map to retrieve the variable:

        SymbolTable[myVar.Name]

    But think of the following: if the variable, still using a vector, is found the first time, I can store its exact integer position in the vector with it. That means the next time it is needed, my interpreter knows that it has been "cached" and doesn't search the symbol table for it, but does something like:

        SymbolTable.at(myVar.CachedPosition)

    Now my (rather hard?) question: should I use a vector for the symbol table together with caching the position of the variable in the vector? Should I rather use a map? Why? How fast is the [] operator? Should I use something completely different?
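
    A sketch of the cached-position idea over a vector-backed table (all names here are made up; the point is that the linear search happens at most once per variable reference):

        #include <cstddef>
        #include <string>
        #include <vector>

        struct Symbol { std::string name; /* value ... */ };

        struct VarRef {                       // one per variable occurrence in the source
            std::string name;
            std::ptrdiff_t cachedPos = -1;    // -1 means "not resolved yet"
        };

        Symbol& lookup(std::vector<Symbol>& table, VarRef& ref) {
            if (ref.cachedPos < 0)            // first execution: linear search, once
                for (std::size_t i = 0; i < table.size(); ++i)
                    if (table[i].name == ref.name) {
                        ref.cachedPos = static_cast<std::ptrdiff_t>(i);
                        break;
                    }
            return table[ref.cachedPos];      // every later execution: O(1) indexing
        }

    After the first hit this is plain array indexing, which neither a tree map's O(log n) lookup nor a hash map's hashing will beat. The caveat is that cached positions break if the vector is reordered or symbols are erased, so this fits tables that only grow.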

    Read the article

  • Split a node list into parts

    - by Kalinin
    XML:

        <mode>1</mode>
        <mode>2</mode>
        <mode>3</mode>
        <mode>4</mode>
        <mode>5</mode>
        <mode>6</mode>
        <mode>7</mode>
        <mode>8</mode>
        <mode>9</mode>
        <mode>10</mode>
        <mode>11</mode>
        <mode>12</mode>

    I need to split it into parts (for example, into 4). XSLT:

        <xsl:variable name="vNodes" select="mode"/>
        <xsl:variable name="vNumParts" select="4"/>
        <xsl:variable name="vNumCols" select="ceiling(count($vNodes) div $vNumParts)"/>
        <xsl:for-each select="$vNodes[position() mod $vNumCols = 1]">
          <xsl:variable name="vCurPos" select="(position()-1)*$vNumCols +1"/>
          <ul>
            <xsl:for-each select="$vNodes[position() >= $vCurPos and not(position() > $vCurPos + $vNumCols -1)]">
              <li><xsl:value-of select="."/></li>
            </xsl:for-each>
          </ul>
        </xsl:for-each>

    This code was written by Dimitre Novatchev (a great coder), but when the number of nodes is less than the number of parts (for example, when I have 2 modes) it outputs nothing. How can it be adapted for that case (without a choose construction)?
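
    A guess at the failure mode, for what it's worth: with 2 nodes and 4 parts, $vNumCols is ceiling(2 div 4) = 1, and position() mod 1 is always 0, never 1, so the outer for-each selects nothing. Testing against 0 after shifting position by one sidesteps that while still picking the same group leaders (1, 1+$vNumCols, 1+2*$vNumCols, ...) in the general case:

        <xsl:for-each select="$vNodes[(position() - 1) mod $vNumCols = 0]">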

    Read the article

  • R: building a simple command line plotting tool/Capturing window close events

    - by user275455
    I am trying to use R within a script that will act as a simple command-line plot tool; i.e., the user pipes in a CSV file and gets a plot. I can get to R fine and get the plot to display through various temp-file machinations, but I have hit a roadblock: I cannot figure out how to get R to keep running until the user closes the window. If I plot and exit, the plot disappears immediately. If I plot and use some kind of infinite loop, the user cannot close the plot; he must exit with an interrupt, which I don't like. I see there is a getGraphicsEvent function, but it claims that the device (X11) is not supported. Anyway, it doesn't appear to actually support an onClose event, only onMouseDown. Any ideas on how to solve this? Edit: thanks to Dirk for the advice to check out the tk interface. Here is the test code that works:

        require(tcltk)
        library(tkrplot)

        ## function to display the plot, called by tkrplot and embedded in a window
        plotIt <- function() {
          plot(x = 1:10, y = 1:10)
        }

        ## create top-level window
        tt <- tktoplevel()

        ## variable to wait on, like a condition variable, set by the event handler
        done <- tclVar(0)

        ## bind to the window destroy event; set the done variable when destroyed
        tkbind(tt, "<Destroy>", function() tclvalue(done) <- 1)

        ## have tkrplot embed the plot window, then realize it with tkgrid
        tkgrid(tkrplot(tt, plotIt))

        ## wait until done is true
        tkwait.variable(done)

    Read the article

  • PHP & MySQL: How can I ignore empty variables in a SELECT?

    - by cash-cash
    Hello all. I have 4 variables, and I want to select DISTINCT values from the database:

        <?php
        $var1 = ""; // this variable can be blank
        $var2 = ""; // this variable can be blank
        $var3 = ""; // this variable can be blank
        $var4 = ""; // this variable can be blank

        $result = mysql_query("SELECT DISTINCT title, description FROM table
            WHERE keywords = '$var1' OR author = '$var2' OR date = '$var3' OR forums = '$var4'");
        ?>

    Note: some or all of the variables ($var1, $var2, $var3, $var4) can be empty. What I want is to ignore the empty fields. Say $var1 (keywords) is empty; the query above would then match all empty fields, but if $var1 is empty I want the result to be like:

        SELECT DISTINCT title, description FROM table WHERE author = '$var2' OR date = '$var3' OR forums = '$var4'

    If $var2 is empty, the result should be like:

        SELECT DISTINCT title, description FROM table WHERE keywords = '$var1' OR date = '$var3' OR forums = '$var4'

    If $var1 and $var2 are both empty:

        SELECT DISTINCT title, description FROM table WHERE date = '$var3' OR forums = '$var4'

    and so on.
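
    A sketch of the usual approach: collect only the non-empty conditions in an array and join them, falling back to no WHERE clause when all four are blank. Column and variable names are taken from the question; mysql_* is kept to match the original code, though it is long deprecated:

        <?php
        $conditions = array();
        if ($var1 != "") $conditions[] = "keywords = '" . mysql_real_escape_string($var1) . "'";
        if ($var2 != "") $conditions[] = "author = '"   . mysql_real_escape_string($var2) . "'";
        if ($var3 != "") $conditions[] = "date = '"     . mysql_real_escape_string($var3) . "'";
        if ($var4 != "") $conditions[] = "forums = '"   . mysql_real_escape_string($var4) . "'";

        $sql = "SELECT DISTINCT title, description FROM table";
        if (count($conditions) > 0) {
            $sql .= " WHERE " . implode(" OR ", $conditions);  // only non-empty fields
        }
        $result = mysql_query($sql);
        ?>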

    Read the article

  • Why doesn't ng-hide work with custom directives?

    - by javier
    I'm reading the directives section of the developer guide on angularjs.org to refresh my knowledge and gain some insights, and I was trying to run one of the examples, but the ng-hide directive is not working on a custom directive. Here's the jsfiddle: http://jsfiddle.net/D3Nsk/

        <my-dialog ng-hide="dialogIsHidden" on-close="hideDialog()">
            Does Not Work Here!!!
        </my-dialog>
        <div ng-hide="dialogIsHidden">
            It works Here.
        </div>

    Any idea why this is happening? Thanks. Solution: it seems that dialogIsHidden on the tag already refers to a scope variable inside the directive, not to the variable in the controller, given that the directive has its own isolated scope. To make this work it's necessary to pass the controller's dialogIsHidden variable to the directive by reference. Here's the jsfiddle: http://jsfiddle.net/h7xvA/ The changes:

        <my-dialog ng-hide="dialogIsHidden" on-close="hideDialog()" dialog-is-hidden='dialogIsHidden'>

    and:

        scope: {
            'close': '&onClose',
            'dialogIsHidden': '='
        },

    Read the article

  • How to retrieve data from a dialog box?

    - by Ralph
    Just trying to figure out an easy way to either pass or share some data between the main window and a dialog box. I've got a collection of variables in my main window that I want to pass to a dialog box so that they can be edited. The way I've done it now is to pass the list to the constructor of the dialog box:

        private void Button_Click(object sender, RoutedEventArgs e)
        {
            var window = new VariablesWindow(_templateVariables);
            window.Owner = this;
            window.ShowDialog();
            if (window.DialogResult == true)
                _templateVariables = new List<Variable>(window.Variables);
        }

    And then in there, I guess I need to deep-copy the list:

        public partial class VariablesWindow : Window
        {
            public ObservableCollection<Variable> Variables { get; set; }

            public VariablesWindow(IEnumerable<Variable> vars)
            {
                Variables = new ObservableCollection<Variable>(vars);
                // ...

    so that when they're edited, the changes aren't reflected back in the main window until the user actually hits "Save". Is that the correct approach? If so, is there an easy way to deep-copy an ObservableCollection? Because as it stands now, I think my Variables are being modified because it's only doing a shallow copy.
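
    There is no built-in deep copy; the ObservableCollection constructor copies only the references. One sketch, assuming Variable can be given a copy constructor (the Name/Value properties are hypothetical stand-ins for whatever Variable actually holds, and Select needs using System.Linq;):

        // a copy constructor on the element type is the whole trick
        public Variable(Variable other)
        {
            Name = other.Name;
            Value = other.Value;   // hypothetical properties; copy whatever Variable holds
        }

        // then in VariablesWindow: element-wise copies instead of shared references
        Variables = new ObservableCollection<Variable>(vars.Select(v => new Variable(v)));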

    Read the article

  • What rules govern cross-version compatibility for .NET applications and the C# language?

    - by John Feminella
    For some reason I've always had trouble remembering the backwards/forwards compatibility guarantees made by the framework, so I'd like to put that to bed forever. Suppose I have two assemblies, A and B. A is older and references .NET 2.0 assemblies; B references .NET 3.5 assemblies. I have the source for A and B, Ax and Bx, respectively; they are written in C# at the 2.0 and 3.0 language levels. (That is, Ax uses no features that were introduced later than C# 2.0; likewise, Bx uses no features that were introduced later than 3.0.) I have two environments, C and D. C has the .NET 2.0 framework installed; D has the .NET 3.5 framework installed. Now, which of the following can/can't I do?

    Running: run A on C? run A on D? run B on C? run B on D?

    Compiling: compile Ax on C? compile Ax on D? compile Bx on C? compile Bx on D?

    Rewriting: rewrite Ax to use features from the C# 3 language level and compile it on D, while having it still work on C? rewrite Bx to use features from the C# 4 language level on another environment E that has .NET 4, while having it still work on D?

    Referencing from another assembly: reference B from A and have a client app on C use it? reference B from A and have a client app on D use it? reference A from B and have a client app on C use it? reference A from B and have a client app on D use it?

    More importantly, what rules govern the truth or falsity of these hypothetical scenarios?

    Read the article

  • SSIS Expressions - EvaluateAsExpression Problem

    - by Randy Minder
    In a Data Flow, I have a Derived Column task. In the expression for one of the columns, I have the following expression:

        [siteid] == "100" ? "1101" :
        [siteid] == "110" ? "1001" :
        [siteid] == "120" ? "2101" :
        [siteid] == "140" ? "1102" :
        [siteid] == "210" ? "2001" :
        [siteid] == "310" ? "3001" : [siteid]

    This works just fine. However, I intend to reuse this in at least a dozen other places, so I want to store it in a variable and use the variable in the Derived Column instead of the hard-coded expression. When I attempt to create a variable using the expression above, I get a syntax error saying 'siteid' is not defined. I guess this makes sense, because it isn't. But how can I get the expression to work in a variable? It seems like I need some way to tell it that 'siteid' will be the column containing the data I want to apply the expression to.

    Read the article

  • Mathematics errors in basic C++ program

    - by Heather
    I am working with a basic C++ program to determine the area and perimeter of a rectangle. My program works fine for whole numbers but falls apart when I use any number with a decimal. I get the impression that I am leaving something out, but since I'm a complete beginner, I have no idea what. Below is the source:

        #include <iostream>
        using namespace std;

        int main()
        {
            // Declared variables
            int length;     // declares variable for length
            int width;      // declares variable for width
            int area;       // declares variable for area
            int perimeter;  // declares variable for perimeter

            // Statements
            cout << "Enter the length and the width of the rectangle: "; // states what information to enter
            cin >> length >> width; // user input of length and width
            cout << endl;           // closes the input

            area = length * width;            // calculates area of rectangle
            perimeter = 2 * (length + width); // calculates perimeter of rectangle

            cout << "The area of the rectangle = " << area << " square units." << endl;    // displays the calculation of the area
            cout << "The perimeter of the rectangle = " << perimeter << " units." << endl; // displays the calculation of the perimeter

            system("pause"); // REMOVE BEFORE RELEASE - testing purposes only
            return 0;
        }
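
    The likely culprit, offered as a hedge rather than a verdict: the variables are declared int, so cin stops reading "3.5" at the decimal point and the arithmetic truncates. Declaring them as a floating-point type fixes that:

        double length;     // accepts 3.5, not just 3
        double width;
        double area;
        double perimeter;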

    Read the article

  • Global variables don't change value in JavaScript

    - by user1856906
    My project is composed of 2 HTML pages: 1) index.html, which contains the login and the registration form; 2) user_logged.html, which contains all the features of a logged-in user. Now, what I want to do is check whether the user is really logged in, to avoid the case where a user pastes a URL in the browser and can see the pages of another user. As of now, if a user pastes this URL in the browser: www.user_loggato.html?user=x#profile it is as if he is logged in as user x, and this is not nice. My HTML pages both use JS files that contain scripts. I decided to create a global variable called logged, initialized to false, and to change the variable to true when the login is successful. The problem is that the variable remains false. Here is the code (written in the file a.js):

        var logged = false;

    while in the file b.js I have:

        function login() {
            // if successful
            logged = true;
            window.location.href = "user_loggato.html?user=" + JSON.parse(str).username + "#profilo";
        }

    Now, with some alerts, I found that my variable logged is always false. Why? If I have not explained this well, or if some information is missing in order to answer my question, let me know.
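
    A sketch of what is probably going on, plus one common workaround: assigning window.location.href loads a new page, and every page load starts a fresh JavaScript environment, so logged is re-initialized to false by a.js. Globals do not survive navigation; something like sessionStorage (or a cookie, or a server-side session) has to carry the flag across pages:

        // b.js: set a value that survives the page change
        function login() {
            // ...after a successful login...
            sessionStorage.setItem("logged", "true");
            window.location.href = "user_loggato.html?user=" + JSON.parse(str).username + "#profilo";
        }

        // a.js: read it back on the next page instead of using a plain global
        var logged = sessionStorage.getItem("logged") === "true";

    Note that anything kept purely client-side is cosmetic; real protection against URL pasting requires the server to validate a session.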

    Read the article

  • mutableCopyWithZone updating a property value.

    - by Jim
    I have a class that I need to copy in a way that lets me change the value of a variable on both instances: simply put, the two objects need to remain clones of each other at all times. My understanding of the documentation is that I can do this using a shallow copy of the class, which has also been declared mutable. With a shallow copy the pointer value for the variable will be cloned so that it is an exact match in both objects; so when I update the variable in the original, the copy will be updated simultaneously. Is this right? As you can see below, I have used mutableCopyWithZone in the class I want to copy. I have tried both the NSCopyObject and allocWithZone methods to get this to work. Although I'm able to copy the class, and it appears as intended, when updating the variable the value is not changing in the copied class.

        - (id)mutableCopyWithZone:(NSZone *)zone
        {
            //ReviewViewer *copy = NSCopyObject(self, 0, zone);
            ReviewViewer *copy = [[[self class] allocWithZone:zone] init];
            copy->infoTextViews = [infoTextViews copy];
            return copy;
        }

    infoTextViews is a property declared as nonatomic, retain in the header file of the class being copied. I have also implemented the NSMutableCopying protocol accordingly. Any help would be great.
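
    A hedged observation: [infoTextViews copy] produces a new (and immutable) object, so the original and the copy end up pointing at different storage. For the shared-state behaviour described, both objects should hold the very same instance, e.g. under manual reference counting:

        // share one mutable object between original and copy instead of duplicating it
        copy->infoTextViews = [infoTextViews retain];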

    Read the article

  • Question about architecting an ASP.NET MVC application

    - by Misnomer
    I have read a little bit about architecture and also about patterns in order to follow best practices. So this is the architecture we have, and I wanted to know what you think of it and any proposed changes or improvements:

    Presentation Layer - contains all the views, controllers and any helper classes the views require; it also holds references to the Model Layer and the Business Layer.

    Business Project - contains all the business logic and validation, plus the security helper classes it uses; it holds references to the Data Access Layer and the Model Layer.

    Data Access Layer - contains the actual queries (CRUD operations) on the entity classes; it holds a reference to the Model Layer.

    Model Layer - contains the Entity Framework model, DTOs and enums; it does not really reference any of the above layers.

    What are your thoughts on the above architecture? The problem is that I am getting confused by reading about, say, the repository pattern, domain-driven design and other design patterns. The architecture we have, although not that strict, is relatively alright I think, and does not really muddle things, but I may be wrong. I would appreciate any help or suggestions here. Thanks!

    Read the article

  • Template neglects const (why?)

    - by Gabriel
    Does somebody know why this compiles?

        template<typename TBufferTypeFront, typename TBufferTypeBack = TBufferTypeFront>
        class FrontBackBuffer
        {
        public:
            FrontBackBuffer(
                const TBufferTypeFront front,
                const TBufferTypeBack back)  // const reference assigned to reference???
                : m_Front(front)
                , m_Back(back)
            {
            }

            ~FrontBackBuffer() {}

            TBufferTypeFront m_Front; ///< The front buffer
            TBufferTypeBack  m_Back;  ///< The back buffer
        };

        int main()
        {
            int b;
            int a;
            FrontBackBuffer<int&, int&> buffer(a, b);
            // buffer.m_Back = 33;
            buffer.m_Front = 55;
        }

    I compile with GCC 4.4. Why does it even let me compile this? Shouldn't there be an error that I cannot assign a const reference to a non-const reference?
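
    A sketch of the usual explanation: when a template parameter is already a reference type, a top-level const applied to it is simply ignored. With TBufferTypeFront = int&, the parameter type const TBufferTypeFront would mean "a const reference", not "a reference to const", and const on the reference itself is a no-op, so the constructor takes plain int& parameters and the member bindings are legal. Spelled out by hand:

        // with T = int&, "const T" collapses to int&, never to const int&
        template <typename T>
        void takes(const T value) { value = 42; }   // if T = int&, this modifies the argument

        int main()
        {
            int a = 0;
            takes<int&>(a);   // compiles; const applied to a reference type is dropped
            return a;         // a is now 42
        }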

    Read the article

  • Satisfying indirect references at runtime.

    - by automatic
    I'm using C# and VS2010. I have a DLL that I reference in my project (as a dll reference, not a project reference). That DLL (a.dll) references another DLL that my project doesn't directly use; let's call it b.dll. None of these are in the GAC. My project compiles fine, but when I run it I get an exception that b.dll can't be found: it's not being copied to the bin directory when my project is compiled. What is the best way to get b.dll into the bin directory so that it can be found at run time? I've thought of four options:

    1. Use a post-compile step to copy b.dll to the bin directory.
    2. Add b.dll to my project (as a file) and specify "copy to output directory if newer".
    3. Add b.dll as a dll reference to my project.
    4. Use ILMerge to combine b.dll with a.dll.

    I don't like 3 at all, because it makes b.dll visible to my project; the other two seem like hacks. Am I missing other solutions? Which is the "right" way? Would a dependency injection framework be able to resolve and load b.dll?

    Read the article

  • How to reduce the time of clang_complete search through boost

    - by kirill_igum
    I like using clang with vim. The one problem I always have is that whenever I include boost, clang goes through the boost library every time I put "." after an object name. It takes 5-10 seconds. Since I don't make changes to the boost headers, is there a way to cache the search through boost? If not, is there a way to remove boost from the auto-completion search?

    Update (1), in response to the answer by adaszko: after

        :let g:clang_use_library = 1

    I type the name of a variable and press ^N. Vim starts to search through the boost tree and auto-completes the variable. I then press "." and get the following errors (each followed by "Press ENTER or type command to continue"):

        Error detected while processing function ClangComplete:
        line 35:
        Traceback (most recent call last):
        File "<string>", line 1, in <module>
        NameError: name 'vim' is not defined
        Error detected while processing function ClangComplete:
        line 40:
        E121: Undefined variable: l:res
        E15: Invalid expression: l:res
        Error detected while processing function ClangComplete:
        line 58:
        E121: Undefined variable: l:res
        E15: Invalid expression: l:res

    ... and there is no auto-completion.

    Update (2): I am not sure whether clang_complete should take care of the boost issue. Vim without plugins does search through boost. A superuser answer suggests commenting out the search through the boost directories with:

        set include=^\\s*#\\s*include\ \\(<boost/\\)\\@!

    Read the article

  • WiX 3 Tutorial: Generating file/directory fragments with Heat.exe

    - by Mladen Prajdic
    In previous posts I've shown you our SuperForm test application solution structure and how the main wxs and wxi include files look. In this post I'll show you how to automate the inclusion of files to install in your build process. For our SuperForm application we have a single exe to install, but in the real world we have tens or hundreds of different files, from DLLs to resource files like pictures; it all depends on what kind of application you're building. Writing a directory structure for so many files by hand is out of the question. What we need is an automated way to create this structure. Enter Heat.exe. Heat is a command-line utility to harvest a file, directory, Visual Studio project, IIS website or performance counters. You might ask what harvesting means: harvesting is converting a source (file, directory, ...) into a component structure saved in a WiX fragment (a wxs) file. There are 2 options you can use:

    1. Create a static wxs fragment with Heat and include it in your project. The pro of this is that you can add or remove components by hand; the con is that you have to do the pro part by hand. Automation always beats manual labor.
    2. Run the Heat.exe command-line utility in a pre-build event of your WiX project. I prefer this way. By always recreating the whole fragment you don't have to worry about missing any new files you add. The con is that you'll include files that you otherwise might not want to.

    There is no perfect solution, so pick one and deal with it. I prefer the second way. A neat way of overcoming the con of the second option is to have a post-build event on your main application project (SuperForm.MainApp in our case) copy the files to be installed to a special location and have Heat.exe read them from there. I haven't set this up for this tutorial, and I'm simply including all files from the default SuperForm.MainApp \bin directory. Remember how we created a system environment variable called SuperFormFilesDir? This is where we use it for the first time. The command line that you have to put into the pre-build event of your WiX project looks like this:

        "$(WIX)bin\heat.exe" dir "$(SuperFormFilesDir)" -cg SuperFormFiles -gg -scom -sreg -sfrag -srd -dr INSTALLLOCATION -var env.SuperFormFilesDir -out "$(ProjectDir)Fragments\FilesFragment.wxs"

    After you install WiX you'll get the WIX environment variable. In pre/post-build events, environment variables are referenced like this: $(WIX). By using this you don't have to think about the installation path of WiX. Remember: for 32-bit applications the Program Files folder is named differently on 32- and 64-bit systems. $(ProjectDir) is obviously the path to your project and is a Visual Studio built-in variable. You can view all Heat.exe options by running it without parameters, but I'll explain the ones that stick out the most:

    dir "$(SuperFormFilesDir)": tells Heat to harvest the whole directory at the given location, the one set in our system environment variable.

    -cg SuperFormFiles: the name of the component group that will be created. This name is referenced in our Feature tag, as seen in the previous post.

    -dr INSTALLLOCATION: the directory reference this fragment will fall under. You can see the top-level directory structure in the previous post.

    -var env.SuperFormFilesDir: the name of the variable that will replace the SourceDir text that would otherwise appear in the fragment file.

    -out "$(ProjectDir)Fragments\FilesFragment.wxs": the full path and name under which the fragment file will be saved.

    If you have source control, you have to include FilesFragment.wxs in your project but remove its source control binding. The auto-generated FilesFragment.wxs for our test app looks like this:

        <?xml version="1.0" encoding="utf-8"?>
        <Wix xmlns="http://schemas.microsoft.com/wix/2006/wi">
          <Fragment>
            <ComponentGroup Id="SuperFormFiles">
              <ComponentRef Id="cmp5BB40DB822CAA7C5295227894A07502E" />
              <ComponentRef Id="cmpCFD331F5E0E471FC42A1334A1098E144" />
              <ComponentRef Id="cmp4614DD03D8974B7C1FC39E7B82F19574" />
              <ComponentRef Id="cmpDF166522884E2454382277128BD866EC" />
            </ComponentGroup>
          </Fragment>
          <Fragment>
            <DirectoryRef Id="INSTALLLOCATION">
              <Component Id="cmp5BB40DB822CAA7C5295227894A07502E" Guid="{117E3352-2F0C-4E19-AD96-03D354751B8D}">
                <File Id="filDCA561ABF8964292B6BC0D0726E8EFAD" KeyPath="yes" Source="$(env.SuperFormFilesDir)\SuperForm.MainApp.exe" />
              </Component>
              <Component Id="cmpCFD331F5E0E471FC42A1334A1098E144" Guid="{369A2347-97DD-45CA-A4D1-62BB706EA329}">
                <File Id="filA9BE65B2AB60F3CE41105364EDE33D27" KeyPath="yes" Source="$(env.SuperFormFilesDir)\SuperForm.MainApp.pdb" />
              </Component>
              <Component Id="cmp4614DD03D8974B7C1FC39E7B82F19574" Guid="{3443EBE2-168F-4380-BC41-26D71A0DB1C7}">
                <File Id="fil5102E75B91F3DAFA6F70DA57F4C126ED" KeyPath="yes" Source="$(env.SuperFormFilesDir)\SuperForm.MainApp.vshost.exe" />
              </Component>
              <Component Id="cmpDF166522884E2454382277128BD866EC" Guid="{0C0F3D18-56EB-41FE-B0BD-FD2C131572DB}">
                <File Id="filF7CA5083B4997E1DEC435554423E675C" KeyPath="yes" Source="$(env.SuperFormFilesDir)\SuperForm.MainApp.vshost.exe.manifest" />
              </Component>
            </DirectoryRef>
          </Fragment>
        </Wix>

    The $(env.SuperFormFilesDir) will be replaced at build time with the directory where the files to be installed are located. There is nothing too complicated about this. In the end it turns out that this sort of automation is great! There are a few other ways that Heat.exe can compose the wxs file, but this is the one I prefer; it just seems the clearest. Play with its options to see what it can do. It's one awesome little tool.

    Read the article

  • Run Your Tests With Any NUnit Version

    - by Alois Kraus
    I always thought that the NUnit test runners and the test assemblies needed to reference the same NUnit.Framework version. I wanted to be able to run my test assemblies with the newest GUI runner (currently 2.5.3). OK, so all I need to do is reference both NUnit versions, the newest one and the official one for the current project. There is a nice article from Kent Bogart online on how to reference the same assembly multiple times with different versions. The magic works by referencing one NUnit assembly with an alias, which prefixes all types inside it. Then I could decorate my tests with the TestFixture and Test attributes from both NUnit versions, and everything worked fine, except that this was ugly. After playing around a little to make it simpler, I found that I did not need to reference both NUnit.Framework assemblies at all. The test runners do not require the TestFixture and Test attributes in their specific version. That is really neat: since the test runners are instructed by attributes, in a declarative way, what to do, there is really no need to tie the runners to a specific version. At its core, NUnit has this little method hidden away to find matching TestFixtures and Tests:

        public bool CanBuildFrom(Type type)
        {
            if (!(!type.IsAbstract || type.IsSealed))
            {
                return false;
            }
            return (((Reflect.HasAttribute(type, "NUnit.Framework.TestFixtureAttribute", true) ||
                      Reflect.HasMethodWithAttribute(type, "NUnit.Framework.TestAttribute", true)) ||
                      Reflect.HasMethodWithAttribute(type, "NUnit.Framework.TestCaseAttribute", true)) ||
                      Reflect.HasMethodWithAttribute(type, "NUnit.Framework.TheoryAttribute", true));
        }

    That is versioning and backwards compatibility at its best. I tell NUnit what to do by decorating my test classes with NUnit attributes, and the runner executes my intent without binding me to a specific version. The contract between NUnit versions is actually a bit more complex (think of AssertExceptions), but this is also handled nicely: instead of using the concrete type, the caught exception type is checked by string. What can we learn from this? Versioning can be easy if the contract is small and the users of your library use it in a declarative way (attributes). Everything beyond that will force you to reference several versions of the same assembly, with all the consequences. Type equality is lost between versions, so none of your casts will work. That means you cannot simply use IBigInterface in two versions; you will need a wrapper to call the correct versioned one. To get out of this mess you can use one (and only one) version-agnostic driver to encapsulate your business logic from the concrete versions. This is of course more work, but as NUnit shows, it can be easy. Simplicity is therefore not just nice to have but requirement number one if you intend to make things more complex in version two and want to support any version (older and newer). Any interaction model beyond "easy" will not be maintainable. There are different approaches to versioning. Below are my own personal observations of how versioning works within the .NET Framework and NUnit.

    Versioning Models

    1. Bug Fixing and New Isolated Features

    When you only need to fix bugs, there is no need to break anything. This is especially true when you have a big API surface. Microsoft did this with the .NET Framework 3.0, which left the CLR as is but delivered new assemblies for the features WPF, WCF and Windows Workflow Foundation. The basic model was that the .NET 2.0 assemblies were declared "red" assemblies which must not change (well, mostly: each change was carefully reviewed to minimize the risk of breaking changes as much as possible), whereas the new "green" assemblies of .NET 3.0/3.5 had no such obligations, since they implemented new, unrelated features which did not have any impact on the red assemblies. This is a versioning strategy aimed at maximum compatibility and the delivery of new, unrelated features. If you have a big API surface, you should strive hard to do the same, or you will break your customers' code with every release.

    2. New Breaking Features

    There are times when really new things need to be added to an existing product. The .NET Framework 4.0 changed the CLR in many ways, which caused subtly different behavior even though the APIs remained largely unchanged. Sometimes it is possible to simply recompile an application to make it work (e.g. a changed method signature, void Func() -> bool Func()), but behavioral changes need much more thought and cannot be automated. To minimize the impact, .NET 2.0/3.0/3.5 applications will not automatically use the .NET 4.0 runtime when it is installed; they keep using the "old" one. What is interesting is that side-by-side execution of both CLR versions (2 and 4) within one process is possible. Key to success was total isolation: you get 2 GCs, 2 JIT compilers and 2 finalizer threads within one process. The two .NET runtimes cannot talk to each other (except via the usual IPC mechanisms); both runtimes share nothing and run independently within the same process. This enables Explorer plugins written for the CLR 2.0 to work even when a CLR 4 plugin is already running inside the Explorer process. The price of isolation is an increased memory footprint, because everything is loaded and running twice.

    3. New Non-Breaking Features

    It really depends where you break things. NUnit has evolved, and many different Assert, Expect, ... methods have been added. These changes are all localized in the NUnit.Framework assembly, which can easily be extended. As long as the test execution contract (TestFixture, Test, AssertException) remains stable, it is possible to write test executors that can run tests written for NUnit 1.0, because the execution contract has not changed. It is possible to write software which executes other components in a version-independent way, but this is only feasible if the interaction model is relatively simple.

    Versioning software is hard, and it looks like it will remain hard, since you suddenly work in a severely constrained environment when you try to innovate and keep everything backwards compatible at the same time. These are contradicting goals and do not play well together. The easiest way out of this is to carefully watch what your customers are doing with your software. Minimizing the impact is much easier when you do not need to guess how many people will be broken when this or that is removed.

    Read the article
