Search Results

Search found 3679 results on 148 pages for 'definition'.


  • Global name not defined error in Django/Python trying to set foreignkey

    - by Mark
    Summary: I define a method createPage within a file called PageTree.py that takes a Source model object and a string. The method tries to generate a Page model object and set that object's foreign key to refer to the Source object which was passed in. This throws a NameError exception! I'm trying to represent a website which is structured like a tree. I define the Django models Page and Source, Page representing a node on the tree and Source representing the contents of the page. (You can probably skip over these; this is a basic tree implementation using doubly linked nodes.)

        class Page(models.Model):
            name = models.CharField(max_length=50)
            parent = models.ForeignKey("self", related_name="children", null=True)
            firstChild = models.ForeignKey("self", related_name="origin", null=True)
            nextSibling = models.ForeignKey("self", related_name="prevSibling", null=True)
            previousSibling = models.ForeignKey("self", related_name="nxtSibling", null=True)
            source = models.ForeignKey("Source")

        class Source(models.Model):
            # A source that is non-dynamic will be referred to as a static source.
            # Dynamic sources contain locations that are names of functions.
            # Static sources contain locations that are places on disk.
            name = models.CharField(primary_key=True, max_length=50)
            isDynamic = models.BooleanField()
            location = models.CharField(max_length=100)

    I've coded a Python program called PageTree.py which allows me to request nodes from the database and manipulate the structure of the tree. Here is the troublemaking method:

        def createPage(pageSource, pageName):
            page = Page()
            page.source = pageSource
            page.name = pageName
            page.save()
            return page

    I'm running this program in a shell through manage.py in Windows 7:

        manage.py shell
        from mysite.PageManager.models import Page, Source
        from mysite.PageManager.PageTree import *
        ... create someSource = Source(), populate the fields, and save it ...
        createPage(someSource, "test")
        ...
        NameError: global name 'source' is not defined

    When I type the function definition for createPage into the shell by hand, the call works without error. This is driving me bonkers and help is appreciated.
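
    A minimal sketch of an equivalent helper, assuming the Page/Source models above; Page.objects.create is standard Django and assigns the foreign key in one step (create_page is a made-up name):

        # hypothetical variant of createPage in PageTree.py
        from mysite.PageManager.models import Page

        def create_page(page_source, page_name):
            # passing the related object as a keyword argument sets the FK directly
            return Page.objects.create(source=page_source, name=page_name)

    If the original createPage works when typed into the shell but fails when imported, it is also worth confirming that the shell picks up the same PageTree.py that was edited (no stale copy or old .pyc earlier on the path).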

    Read the article

  • no longer an issue

    - by MrTemp
    I am still new to C# and WPF. This program is a clock with different views, and I would like to use the context menu to change between them, but the error says that there is no definition or extension method for the events. Right now I have the event I'm working on popping up a MessageBox just so I know it has run, but I cannot get it to compile.

        public partial class MainWindow : NavigationWindow
        {
            public MainWindow()
            {
                //InitializeComponent();
            }

            public void AnalogMenu_Click(object sender, RoutedEventArgs e)
            {
                /*AnalogClock analog = new AnalogClock();
                this.NavigationService.Navigate(analog);*/
            }

            public void DigitalMenu_Click(object sender, RoutedEventArgs e)
            {
                MessageBox.Show("Digital Clicked");
                /*DigitalClock digital = new DigitalClock();
                this.NavigationService.Navigate(digital);*/
            }

            public void BinaryMenu_Click(object sender, RoutedEventArgs e)
            {
                /*BinaryClock binary = new BinaryClock();
                this.NavigationService.Navigate(binary);*/
            }
        }

    And the XAML, if you want it:

        <NavigationWindow.ContextMenu>
            <ContextMenu Name="ClockMenu">
                <MenuItem Name="ToAnalog" Header="To Analog" ToolTip="Changes to an analog clock"/>
                <MenuItem Name="ToDigital" Header="To Digital" ToolTip="Changes to a digital clock" Click="DigitalMenu_Click"/>
                <MenuItem Name="ToBinary" Header="To Binary" ToolTip="Changes to a binary clock"/>
            </ContextMenu>
        </NavigationWindow.ContextMenu>
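
    A minimal sketch of a code-behind that matches the XAML above, assuming the XAML's x:Class points at this same MainWindow; note that InitializeComponent() must not stay commented out, since it is the call that loads the XAML and wires Click="DigitalMenu_Click" to the handler:

        // MainWindow.xaml.cs (hypothetical minimal version)
        using System.Windows;
        using System.Windows.Navigation;

        public partial class MainWindow : NavigationWindow
        {
            public MainWindow()
            {
                InitializeComponent();   // loads the XAML and hooks up the Click handlers
            }

            // the signature a Click attribute in XAML expects
            private void DigitalMenu_Click(object sender, RoutedEventArgs e)
            {
                MessageBox.Show("Digital Clicked");
            }
        }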

    Read the article

  • How do I deserialize a namespaced element to an object in .net?

    - by pc1oad1etter
    Given this XML snippet:

        ...
        <InSide:setHierarchyUpdates>
            <automaticUpdateInterval>5</automaticUpdateInterval>
            <shouldRunAutomaticUpdates>true</shouldRunAutomaticUpdates>
        </InSide:setHierarchyUpdates>
        ...

    I am attempting to deserialize into this object:

        Imports System.Xml.Serialization

        <XmlRoot(ElementName:="setHierarchyUpdates", Namespace:="InSide")> _
        Public Class HierarchyUpdate
            <XmlElement(ElementName:="shouldRunAutomaticUpdates")> _
            Public shouldRunAutomaticUpdates As Boolean
            <XmlElement(ElementName:="automaticUpdateInterval")> _
            Public automaticUpdateInterval As Integer
        End Class

    Like this:

        Dim hierarchyUpdater As New HierarchyUpdate
        Dim x As New XmlSerializer(hierarchyUpdater.GetType)
        Dim objReader As Xml.XmlNodeReader = New Xml.XmlNodeReader(myXMLNode)
        hierarchyUpdater = x.Deserialize(objReader)

    However, the object, after deserialization, has values of false and zero. If I switch the objReader to a StreamReader and read this in as a file, with none of its parents and no namespaces, it works:

        <setHierarchyUpdates>
            <automaticUpdateInterval>5</automaticUpdateInterval>
            <shouldRunAutomaticUpdates>true</shouldRunAutomaticUpdates>
        </setHierarchyUpdates>

    What am I doing wrong? Should I use something other than XmlRoot in the class definition, because, as an XML node, it's not really the root? If so, what? Why are no errors returned when this fails?
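
    XmlSerializer matches elements by namespace URI rather than by prefix, so if InSide is a prefix declared elsewhere in the document, the Namespace argument has to be the URI that prefix is bound to. A hedged sketch, with urn:example:inside standing in for whatever URI the real document declares:

        Imports System.Xml.Serialization

        <XmlRoot(ElementName:="setHierarchyUpdates", [Namespace]:="urn:example:inside")> _
        Public Class HierarchyUpdate
            <XmlElement(ElementName:="shouldRunAutomaticUpdates")> _
            Public shouldRunAutomaticUpdates As Boolean
            <XmlElement(ElementName:="automaticUpdateInterval")> _
            Public automaticUpdateInterval As Integer
        End Class

    Depending on how the child elements are qualified in the real document, the XmlElement attributes may also need an explicit namespace (for example [Namespace]:="") so that they match.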

    Read the article

  • What are the drawbacks of this Classing format?

    - by Keysle
    This is a 3-layer example of my classing format:

        function __(_){ return _.constructor }

        //class
        var _ = ( CLASS = function(){
            this.variable = 0;
            this.sub = new CLASS.SUBCLASS();
        }).prototype;
        _.func = function(){ alert('lvl'+this.variable); this.sub.func(); }
        _.divePeak = function(){ alert('lvl'+this.variable); this.sub.variable += 5; }

        //sub class
        _ = ( __(_).SUBCLASS = function(){
            this.variable = 1;
            this.sub = new CLASS.SUBCLASS.DEEPCLASS();
        }).prototype;
        _.func = function(){ alert('lvl'+this.variable); this.sub.func(); }

        //deep class
        _ = ( __(_).DEEPCLASS = function(){
            this.variable = 2;
        }).prototype;
        _.func = function(){ alert('lvl'+this.variable); }

    Before you blow a gasket, let me explain myself. The purpose behind the underscores is to speed up defining functions for a class and also to specify sub classes of a class. To me it's easier to read. I KNOW this interferes with underscore.js if you intend to use it in your classes. I'm sure _.js can be easily switched over to another $ymbol though ... oh wait. But I digress. Why have classes within a class? Because solar.system() and social.system() mean two totally different things, but it's convenient to use the same name. Why use underscores to manage the definition of the class? Because "Solar.System.prototype" took me about 2 seconds to type out and 2 typos to correct. It also keeps all function names for all classes in the same column of text, which is nice for legibility. All I'm doing is presenting my reasoning behind this method and why I came up with it. I'm 3 days into learning OO JS and I am very willing to accept that I might have messed up.
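
    For comparison, a minimal sketch of the same idea written with plain prototype assignment and nested constructor properties (no shared _ variable), which is roughly what the format above expands to:

        var CLASS = function () { this.variable = 0; this.sub = new CLASS.SUBCLASS(); };
        CLASS.prototype.func = function () { alert('lvl' + this.variable); this.sub.func(); };

        CLASS.SUBCLASS = function () { this.variable = 1; };
        CLASS.SUBCLASS.prototype.func = function () { alert('lvl' + this.variable); };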

    Read the article

  • LINQ to SQL - Lightweight O/RM?

    - by CoffeeAddict
    I've heard from some that LINQ to SQL is good for lightweight apps. But then I see LINQ to SQL being used for Stack Overflow and a bunch of other .coms I know (from interviewing with them). OK, so is this true? For an e-commerce site that's bringing in millions, where you're typically only doing basic CRUD most of the time with the exception of an occasional stored proc for something more complex, is LINQ to SQL complete enough and performance-wise good enough, or able to be tweaked enough, to run happily on an e-commerce site? I've heard that you just need to tweak performance on the DB side when using LINQ to SQL for a better approach. So there are really 2 questions here:

    1) Meaning/scope/definition of a "lightweight" O/RM solution: What the heck does "lightweight" mean when people say LINQ to SQL is a "lightweight O/RM", and is that true? If it is so lightweight, then why do I see a bunch of huge .coms using it? Is it good enough to run major .coms (obviously it looks like it is), and what determines the context of "lightweight"? It's such a generic statement.

    2) Performance: I'm working on my own .com and researching different O/RMs. I'm not really looking at the Entity Framework (yet); I just want to figure out the LINQ to SQL basics here and determine if it will be efficient enough for me. The problem, I think, is that you can't tweak or control the SQL it generates...
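
    On that last point, a small hedged sketch: LINQ to SQL does at least let you inspect the SQL it generates, either by attaching a TextWriter to DataContext.Log or by asking the context for the command it would run, which helps when tuning things on the DB side (NorthwindDataContext and the query are placeholders):

        using (var db = new NorthwindDataContext())
        {
            db.Log = Console.Out;                                 // echo generated SQL as queries run
            var query = db.Customers.Where(c => c.City == "London");
            Console.WriteLine(db.GetCommand(query).CommandText);  // or grab the SQL text directly
        }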

    Read the article

  • How can I ignore a block of content when reading a file in Perl?

    - by Nano HE
    Hello. I plan to ignore any block whose content includes a start line containing "MaterializeU4()", using the subroutine read_block below, but it failed.

        # Read a constant definition block from a file handle.
        # Void return when there is no data left in the file.
        # Otherwise return an array ref containing the lines in the block.
        sub read_block {
            my $fh = shift;
            my @lines;
            my $block_started = 0;
            while ( my $line = <$fh> ) {
                # how to correct my code below? I don't need the 2nd block content.
                $block_started++
                    if ( ($line =~ /^(status)/) && (index($line, "MaterializeU4") != 0) );
                if ( $block_started ) {
                    last if $line =~ /^\s*$/;
                    push @lines, $line;
                }
            }
            return \@lines if @lines;
            return;
        }

    Data as below:

        __DATA__
        status DynTest = <dynamic 100>
        vid = 10002
        name = "DynTest"
        units = ""

        status VIDNAME9000 = <U4 MaterializeU4()>
        vid = 9000
        name = "VIDNAME9000"
        units = "degC"

        status DynTest = <U1 100>
        vid = 100
        name = "Hello"
        units = ""

    Output:

        <StatusVariables>
            <SVID logicalName="DynTest" type="L" value="100" vid="10002" name="DynTest" units=""></SVID>
            <SVID logicalName="VIDNAME9000" type="L" value="MaterializeU4()" vid="9000" name="VIDNAME9000" units="degC"></SVID>
            <SVID logicalName="DynTest" type="L" value="100" vid="100" name="Hello" units=""></SVID>
        </StatusVariables>

    [Updated] I printed the value of index($line, "MaterializeU4") and it output 25. Then I updated the code as below:

        $block_started++
            if ( ($line =~ /^(status)/) && (index($line, "MaterializeU4") != 25) );

    Now it works. Any comments about my practice are welcome. Thank you.
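
    A slightly more robust variant of the same check is to match the marker anywhere on the line instead of hard-coding its column with index(); a hedged sketch of just that guard line:

        # skip any block whose "status" line mentions MaterializeU4, wherever it appears
        $block_started++
            if $line =~ /^status/ && $line !~ /MaterializeU4/;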

    Read the article

  • Best solution for __autoload

    - by tpk
    As our PHP5 OO application grew (in both size and traffic), we decided to revisit the __autoload() strategy. We always name the file by the class definition it contains, so class Customer would be contained within Customer.php. We used to list the directories in which a file can potentially exist, until the right .php file was found. This is quite inefficient, because you're potentially going through a number of directories which you don't need to, and doing so on every request (thus making loads of stat() calls). Solutions that come to my mind:

    - Use a naming convention that dictates the directory name (similar to PEAR). Disadvantages: doesn't scale too well, resulting in horrible class names.

    - Come up with some kind of pre-built array of the locations (Propel does this for its __autoload). Disadvantage: requires a rebuild before any deploy of new code.

    - Build the array "on the fly" and cache it. This seems to be the best solution, as it allows for any class names and directory structure you want, and is fully flexible in that new files just get added to the list. The concerns are: where to store it, and what about deleted/moved files? For storage we chose APC, as it doesn't have the disk I/O overhead. With regard to file deletes, it doesn't matter, as you probably don't want to require them anywhere anyway. As to moves... that's unresolved (we ignore it, as historically it didn't happen very often for us).

    Any other solutions?
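
    A minimal sketch of the third option (build_class_map and the class_map cache key are made-up names; the closure form of spl_autoload_register needs PHP 5.3+):

        <?php
        // Build a class => file map once and cache it in APC.
        function build_class_map($dir) {
            $map = array();
            $it = new RecursiveIteratorIterator(new RecursiveDirectoryIterator($dir));
            foreach ($it as $file) {
                if (substr($file->getFilename(), -4) === '.php') {
                    $map[$file->getBasename('.php')] = $file->getPathname();
                }
            }
            return $map;
        }

        spl_autoload_register(function ($class) {
            $map = apc_fetch('class_map');
            if ($map === false) {
                $map = build_class_map(__DIR__ . '/lib');
                apc_store('class_map', $map);
            }
            if (isset($map[$class])) {
                require $map[$class];
            }
        });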

    Read the article

  • Python to C/C++ const char question

    - by tsukemonoki
    I am extending Python with some C++ code. One of the functions I'm using has the following signature (link: http://docs.python.org/release/1.5.2p2/ext/parseTupleAndKeywords.html):

        int PyArg_ParseTupleAndKeywords(PyObject *arg, PyObject *kwdict,
                                        char *format, char **kwlist, ...);

    The parameter of interest is kwlist. In the link above, examples on how to use this function are given. In the examples, kwlist looks like:

        static char *kwlist[] = {"voltage", "state", "action", "type", NULL};

    When I compile this using g++, I get the warning:

        warning: deprecated conversion from string constant to 'char*'

    So, I can change the static char* to a static const char*. Unfortunately, I can't change the Python code. So with this change, I get a different compilation error (can't convert char** to const char**). Based on what I've read here, I can turn on compiler flags to ignore the warning, or I can cast each of the constant strings in the definition of kwlist to char *. Currently, I'm doing the latter. What are other solutions? Sorry if this question has been asked before. I'm new.
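
    One cast-free alternative, sketched below: give each keyword its own writable array, so the kwlist initializers really are char* and no string literal gets converted:

        /* each name lives in a modifiable buffer, so nothing const is handed to kwlist */
        static char kw_voltage[] = "voltage";
        static char kw_state[]   = "state";
        static char kw_action[]  = "action";
        static char kw_type[]    = "type";
        static char *kwlist[]    = { kw_voltage, kw_state, kw_action, kw_type, NULL };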

    Read the article

  • Importing a C DLL's functions into a C++ program

    - by bobobobo
    I have a 3rd-party library that's written in C. It exports all of its functions to a DLL. I have the .h file, and I'm trying to load the DLL from my C++ program. The first thing I tried was surrounding the parts where I #include the 3rd-party lib in

        #ifdef __cplusplus
        extern "C" {
        #endif

    and, at the end,

        #ifdef __cplusplus
        } // extern "C"
        #endif

    But the problem there was, all of the DLL file function linkage looked like this in their header files:

        a_function = (void *)GetProcAddress(dll, "a_function");

    while really a_function had type int (*a_function)(int *). Apparently the MSVC++ compiler doesn't like this, while the MSVC compiler does not seem to mind. So I went through (brutal torture) and fixed them all to the pattern

        typedef int (*_a_function)(int *);
        _a_function a_function;

    Then, to link it to the DLL code, in main():

        a_function = (_a_function)GetProcAddress(dll, "a_function");

    This SEEMS to make the compiler MUCH, MUCH happier, but it STILL complains with this final set of 143 errors, each saying, for each of the DLL link attempts:

        error LNK2005: _a_function already defined in main.obj main.obj

    Multiple symbol definition errors.. sounds like a job for extern! So I went and made ALL the function pointer declarations as follows:

        // function_pointers.h
        typedef int (*_a_function)(int *);
        extern _a_function a_function;

    And in a cpp file:

        // function_pointers.cpp
        #include "function_pointers.h"
        _a_function a_function;

    ALL fine and dandy.. except for linker errors now of the form:

        error LNK2001: unresolved external symbol _a_function main.obj

    Main.cpp includes "function_pointers.h", so it should know where to find each of the functions.. I am bamboozled. Does anyone have any pointers to get me functional? (Pardon the pun..)
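
    The header/extern split described above is the right shape; LNK2001 on a symbol that function_pointers.cpp defines usually just means that .cpp file is not actually part of the project being compiled and linked. A minimal sketch of the pattern, with a_function standing in for each import and thirdparty.dll as a made-up DLL name:

        // function_pointers.h
        #pragma once
        typedef int (*a_function_t)(int *);
        extern a_function_t a_function;      // declared once, visible everywhere

        // function_pointers.cpp  (must be added to the project so it gets built)
        #include "function_pointers.h"
        a_function_t a_function = NULL;      // the single definition

        // main.cpp
        #include <windows.h>
        #include "function_pointers.h"
        int main()
        {
            HMODULE dll = LoadLibraryA("thirdparty.dll");   // error checks omitted
            a_function = (a_function_t)GetProcAddress(dll, "a_function");
            return 0;
        }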

    Read the article

  • Java Generics error when implementing Hibernate message interpolator

    - by Jayaprakash
    Framework: Spring, Hibernate. O/S: Windows. I am trying to implement Hibernate's custom message interpolator following the direction of this link. When implementing the class below, it gives an error "Cannot make a static reference to the non-static type Locale".

        public class ClientLocaleThreadLocal<Locale> {
            private static ThreadLocal tLocal = new ThreadLocal();

            public static void set(Locale locale) {
                tLocal.set(locale);
            }

            public static Locale get() {
                return tLocal.get();
            }

            public static void remove() {
                tLocal.remove();
            }
        }

    As I do not know generics well enough, I am not sure how it is being used by the TimerFilter class below, or what the purpose of the definition in the above class is.

        public class TimerFilter implements Filter {
            public void destroy() { }

            public void doFilter(ServletRequest req, ServletResponse res, FilterChain filterChain)
                    throws IOException, ServletException {
                try {
                    ClientLocaleThreadLocal.set(req.getLocale());
                    filterChain.doFilter(req, res);
                } finally {
                    ClientLocaleThreadLocal.remove();
                }
            }

            public void init(FilterConfig arg0) throws ServletException { }
        }

    Will doing the following be okay?

    1. Change the static methods/fields in ClientLocaleThreadLocal to non-static methods/fields.
    2. In TimerFilter, set the locale by instantiating a new object, as below:

        new ClientLocaleThreadLocal().set(req.getLocale())

    Thanks for your help in advance.
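
    The compile error comes from the class declaring its own type parameter named Locale, which shadows java.util.Locale and cannot be referenced from static members. A hedged sketch of the usual shape for this holder: drop the type parameter and type the ThreadLocal instead, which keeps the static API used by TimerFilter:

        import java.util.Locale;

        public final class ClientLocaleThreadLocal {
            private static final ThreadLocal<Locale> LOCAL = new ThreadLocal<Locale>();

            public static void set(Locale locale) { LOCAL.set(locale); }
            public static Locale get()            { return LOCAL.get(); }
            public static void remove()           { LOCAL.remove(); }
        }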

    Read the article

  • Design issue when having classes implement different interfaces to restrict client actions

    - by devoured elysium
    Let's say I'm defining a game class that implements two different views:

        interface IPlayerView {
            void play();
        }

        interface IDealerView {
            void deal();
        }

    That is, the view that a player sees when playing the game, and the view that the dealer sees when dealing the game (a player can't take dealer actions and a dealer can't take player actions). The game definition is as follows:

        class Game : IPlayerView, IDealerView {
            void play() { ... }
            void deal() { ... }
        }

    Now assume I want to make it possible for the players to play the game, but not to deal it. My original idea was that instead of having

        public Game GetGame() { ... }

    I'd have something like

        public IPlayerView GetGame() { ... }

    But after some tests I realized that if I later try this code, it works and lets the user act as the dealer:

        IDealerView dealerView = (IDealerView)GameClass.GetGame();

    Am I worrying too much? How do you usually deal with these patterns? I could instead make two different classes, maybe a "main" class, the dealer class, that would act as a factory of player classes. That way I could control exactly what I would like to expose to the public. On the other hand, that makes everything a bit more complex than this original design. Thanks
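
    One way to make that cast a non-issue is to hand players a thin wrapper that only delegates the player operations; a hedged sketch (PlayerViewWrapper and _game are made-up names, and it assumes Game exposes play() publicly):

        // The wrapper is not a Game and does not implement IDealerView, so casting
        // the returned IPlayerView to IDealerView fails at runtime.
        class PlayerViewWrapper : IPlayerView
        {
            private readonly Game _game;
            public PlayerViewWrapper(Game game) { _game = game; }
            public void play() { _game.play(); }
        }

        public IPlayerView GetGame() { return new PlayerViewWrapper(_game); }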

    Read the article

  • Convert a binary tree to linked list, breadth first, constant storage/destructive

    - by Merlyn Morgan-Graham
    This is not homework, and I don't need to answer it, but now I have become obsessed :) The problem is: design an algorithm to destructively flatten a binary tree to a linked list, breadth-first. Okay, easy enough. Just build a queue, and do what you have to. That was the warm-up. Now, implement it with constant storage (recursion, if you can figure out an answer using it, is logarithmic storage, not constant). I found a solution to this problem on the Internet about a year back, but now I've forgotten it, and I want to know :) The trick, as far as I remember, involved using the tree to implement the queue, taking advantage of the destructive nature of the algorithm. When you are linking the list, you are also pushing an item into the queue. Each time I try to solve this, I lose nodes (such as each time I link the next node/add to the queue), I require extra storage, or I can't figure out the convoluted method I need to get back to a node that has the pointer I need. Even the link to that original article/post would be useful to me :) Google is giving me no joy. Edit: Jérémie pointed out that there is a fairly simple (and well known) answer if you have a parent pointer. While I now think he is correct about the original solution containing a parent pointer, I really wanted to solve the problem without it :) The refined requirements use this definition for the node:

        struct tree_node {
            int value;
            tree_node* left;
            tree_node* right;
        };

    Read the article

  • What is a good use case for static import of methods?

    - by Miserable Variable
    Just got a review comment that my static import of the method was not a good idea. The static import was of a method from a DA class, which has mostly static methods. So in the middle of the business logic I had a DA activity that apparently seemed to belong to the current class:

        import static some.package.DA.*;

        class BusinessObject {
            void someMethod() {
                ....
                save(this);
            }
        }

    The reviewer was not keen that I change the code, and I didn't, but I do kind of agree with him. One reason given for not static-importing was that it was confusing where the method was defined; it wasn't in the current class and not in any superclass, so it took some time to identify its definition (the web-based review system does not have clickable links like an IDE :-) I don't really think this matters; static imports are still quite new and soon we will all get used to locating them. But the other reason, the one I agree with, is that an unqualified method call seems to belong to the current object and should not jump contexts. But if it really did belong, it would make sense to extend that superclass. So, when does it make sense to static import methods? When have you done it? Did/do you like the way the unqualified calls look? EDIT: The popular opinion seems to be to static-import methods only if nobody is going to confuse them with methods of the current class, for example methods from java.lang.Math and java.awt.Color. But if abs and getAlpha are not ambiguous, I don't see why readEmployee is. As in a lot of programming choices, I think this too is a personal preference thing. Thanks for your response, guys; I am closing the question.

    Read the article

  • Confusion with a while statement evaluating if a number is triangular

    - by Darkkurama
    I've been having trouble trying to figure out how to solve a function. I've been assigned the development of a little program which tells if a number is "triangular" (a number n is triangular when the addition of certain consecutive numbers in the [1,n] interval is n; following the definition, the number 10 is triangular, because in the [1,10] interval, 1+2+3+4=10). I've coded this so far:

        class TriangularNumber {
            boolean numTriangular(int n) {
                boolean triangular = false;
                int i = n;
                while (n >= 0 && triangular) {
                    // UE06 is a class which contains the function "f0", which adds up
                    // all the numbers in a given interval
                    UE06 p = new UE06();
                    if ((p.f0(1, i)) == n)
                        triangular = true;
                    else
                        i = i - 1;
                }
                return triangular;
            }

            boolean testTriangular = numTriangular(10) == true
                    && numTriangular(7) == false
                    && numTriangular(6) == true;

            public static void main(String[] args) {
                TriangularNumber p = new TriangularNumber();
                System.out.println("testTriangular = " + p.testTriangular);
            }
        }

    According to those boolean tests I made, the function is wrong. As I see it, the function goes like this:

    1. I state that the input number in the initial state isn't triangular (triangular = false) and i = n (determining the interval [1,i] where the function is going to be evaluated).
    2. While n is greater than or equal to 0 and the number isn't triangular, the loop starts.
    3. The loop goes like this: if the addition of all the numbers in the [1,i] interval is n, the number is triangular, causing the loop to end. If that statement is false, i goes from i to (i-1), starting the loop again with that particular interval, and so on until the addition is n.

    I can't spot the error in my "algorithm", any advice? Thanks!
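
    Two things keep the loop from ever running: triangular starts out false, so the condition n >= 0 && triangular is false on the very first test, and the condition also watches n (which never changes) instead of i. A hedged sketch of the corrected loop, where sumUpTo stands in for UE06.f0(1, i):

        static int sumUpTo(int i) {          // 1 + 2 + ... + i
            return i * (i + 1) / 2;
        }

        static boolean isTriangular(int n) {
            boolean triangular = false;
            int i = n;
            while (i >= 0 && !triangular) {  // negate the flag, count i down
                if (sumUpTo(i) == n)
                    triangular = true;
                else
                    i = i - 1;
            }
            return triangular;
        }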

    Read the article

  • How can I write a function template for all types with a particular type trait?

    - by TC
    Consider the following example:

        struct Scanner {
            template <typename T> T get();
        };

        template <> string Scanner::get() { return string("string"); }
        template <> int Scanner::get() { return 10; }

        int main() {
            Scanner scanner;
            string s = scanner.get<string>();
            int i = scanner.get<int>();
        }

    The Scanner class is used to extract tokens from some source. The above code works fine, but fails when I try to get other integral types like a char or an unsigned int. The code to read these types is exactly the same as the code to read an int. I could just duplicate the code for all other integral types I'd like to read, but I'd rather define one function template for all integral types. I've tried the following:

        struct Scanner {
            template <typename T>
            typename enable_if<boost::is_integral<T>, T>::type get();
        };

    which works like a charm, but I am unsure how to get Scanner::get<string>() to function again. So, how can I write code so that I can do scanner.get<string>() and scanner.get<any integral type>() and have a single definition to read all integral types?

    Update: Bonus question: what if I want to accept more than one range of classes based on some traits? For example, how should I approach this problem if I want to have three get functions that accept (i) integral types, (ii) floating-point types, and (iii) strings, respectively?
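
    A minimal sketch of one way to do this, shown with the C++11 <type_traits> equivalents of the Boost traits: the two get overloads differ only in their return types, and SFINAE removes whichever one does not apply to the requested T.

        #include <string>
        #include <type_traits>

        struct Scanner {
            // one definition shared by every integral type
            template <typename T>
            typename std::enable_if<std::is_integral<T>::value, T>::type get() {
                return 10;                       // placeholder token-reading logic
            }
            // std::string gets its own overload
            template <typename T>
            typename std::enable_if<std::is_same<T, std::string>::value, T>::type get() {
                return "token";                  // placeholder
            }
        };

        int main() {
            Scanner s;
            int i = s.get<int>();
            unsigned u = s.get<unsigned>();
            std::string str = s.get<std::string>();
        }

    A floating-point range would follow the same pattern with std::is_floating_point, which also answers the bonus question.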

    Read the article

  • How does Haskell do pattern matching without us defining an Eq on our data types?

    - by devoured elysium
    I have defined a binary tree:

        data Tree = Null | Node Tree Int Tree

    and have implemented a function that'll yield the sum of the values of all its nodes:

        sumOfValues :: Tree -> Int
        sumOfValues Null = 0
        sumOfValues (Node Null v Null) = v
        sumOfValues (Node Null v t2) = v + (sumOfValues t2)
        sumOfValues (Node t1 v Null) = v + (sumOfValues t1)
        sumOfValues (Node t1 v t2) = v + (sumOfValues t1) + (sumOfValues t2)

    It works as expected. I had the idea of also trying to implement it using guards:

        sumOfValues2 :: Tree -> Int
        sumOfValues2 Null = 0
        sumOfValues2 (Node t1 v t2)
            | t1 == Null && t2 == Null = v
            | t1 == Null = v + (sumOfValues2 t2)
            | t2 == Null = v + (sumOfValues2 t1)
            | otherwise = v + (sumOfValues2 t1) + (sumOfValues2 t2)

    but this one doesn't work, because I haven't implemented Eq, I believe:

        No instance for (Eq Tree)
          arising from a use of `==' at zzz3.hs:13:3-12
        Possible fix: add an instance declaration for (Eq Tree)
        In the first argument of `(&&)', namely `t1 == Null'
        In the expression: t1 == Null && t2 == Null
        In a stmt of a pattern guard for the definition of `sumOfValues2':
            t1 == Null && t2 == Null

    The question that has to be asked, then, is: how can Haskell do pattern matching, deciding whether a passed argument matches, without resorting to Eq?
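
    A short sketch of the distinction: pattern matching only inspects which constructor built the value, so it never needs (==); the guard version really does compare values, so Tree must be an Eq instance, which can simply be derived:

        data Tree = Null | Node Tree Int Tree
            deriving (Eq)

        -- pure constructor matching, no Eq needed
        isLeaf :: Tree -> Bool
        isLeaf (Node Null _ Null) = True
        isLeaf _                  = False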

    Read the article

  • In Haskell, why do I need to specify type constraints? Why can't the compiler figure them out?

    - by Steve
    Consider the function

        add a b = a + b

    This works:

        *Main> add 1 2
        3

    However, if I add a type signature specifying that I want to add things of the same type:

        add :: a -> a -> a
        add a b = a + b

    I get an error:

        test.hs:3:10:
            Could not deduce (Num a) from the context ()
              arising from a use of `+' at test.hs:3:10-14
            Possible fix: add (Num a) to the context of the type signature for `add'
            In the expression: a + b
            In the definition of `add': add a b = a + b

    So GHC clearly can deduce that I need the Num type constraint, since it just told me:

        add :: Num a => a -> a -> a
        add a b = a + b

    works. Why does GHC require me to add the type constraint? If I'm doing generic programming, why can't it just work for anything that knows how to use the + operator? In C++ template programming, you can do this easily:

        #include <string>
        #include <cstdio>

        using namespace std;

        template <typename T>
        T add(T a, T b) { return a + b; }

        int main() {
            printf("%d, %f, %s\n", add(1, 2), add(1.0, 3.4),
                   add(string("foo"), string("bar")).c_str());
            return 0;
        }

    The compiler figures out the types of the arguments to add and generates a version of the function for that type. There seems to be a fundamental difference in Haskell's approach; can you describe it and discuss the trade-offs? It seems to me like it would be resolved if GHC simply filled in the type constraint for me, since it obviously decided it was needed. Still, why the type constraint at all? Why not just compile successfully as long as the function is only used in a valid context where the arguments are in Num? Thank you.

    Read the article

  • Options for organizing an Android app with multiple independent apps

    - by lazyguy
    Problem Definition: We have a fairly large app which has multiple use cases that are all independent of each other. For example, let's say we have a1, a2, a3 & a4 modules that are independent apps or use cases for our main app 'A'. The independent a1, a2, a3, a4 are all purchasable apps, such that the user goes to our website instead of the Play Store and activates either a1 or a2 by paying some fee on our website. So basically app 'A' is a free app in the Play Store and is sort of a dashboard with buttons to launch a1, a2, a3, a4. When the user clicks on, let's say, the a1 button, we will check if a1 is already installed and launch it, but if it is not present then give the user a link to download it.

    Option 1: Have a main app 'A' and a1, a2, a3 & a4 as library projects. But with this approach the main app A is too big in size.

    Option 2: Have a1, a2, a3, a4 built as separate .apk files, put them in the assets folder of main app 'A', and then install them as needed. Again, the size of main app A is bigger.

    Option 3: Upload a1, a2, a3, a4 to a third-party website or the Play Store and download from there as needed. This way the main app remains lighter.

    Observation: In all these approaches there will be an independent app installed with its own icon on the user's phone. So basically the user can launch from either the dashboard (which will eventually launch an intent from an Activity in the a1 app) or directly launch app a1.

    Follow-up Question: Is there any other solution that anyone can suggest to tackle this kind of problem? Another thing is that by going this approach, apps a1, a2, a3, a4 can be developed & tested independently of each other.
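
    A hedged sketch of the dashboard's "launch a1 or send the user to the download link" check (the package name and URL are placeholders; Intent and Uri come from android.content and android.net):

        // inside the dashboard Activity of app 'A'
        Intent launch = getPackageManager()
                .getLaunchIntentForPackage("com.example.a1");        // hypothetical package
        if (launch != null) {
            startActivity(launch);                                   // a1 is installed
        } else {
            startActivity(new Intent(Intent.ACTION_VIEW,
                    Uri.parse("https://example.com/download/a1")));  // hypothetical URL
        }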

    Read the article

  • SCons and dependencies for python function generating source

    - by elmo
    I have an input file data, a python function parse and a template. What I am trying to do is use the parse function to get a dictionary out of data and use that to replace fields in template. Now, to make this a bit more generic (I perform the same action in a few places), I have defined a custom function to do so. Below is the definition of the custom builder; values is a dictionary of the form { 'name': (data_file, parse_function) } (you don't really need to read through this, I simply put it here for completeness).

        def TOOL_ADD_FILL_TEMPLATE(env):
            def FillTemplate(env, output, template, values):
                out = output[0]
                subs = {}
                for name, (node, process) in values.iteritems():
                    def Process(env, target, source):
                        with open( env.GetBuildPath(target[0]), 'w') as out:
                            out.write( process( source[0] ) )
                    builder = env.Builder( action = Process )
                    subs[name] = builder( env, env.GetBuildPath(output[0])+'_'+name+'_processed.cpp', node )[0]

                def Fill(env, target, source):
                    values = dict( (name, n.get_contents()) for name, n in subs.iteritems() )
                    contents = template[0].get_contents().format( **values )
                    open( env.GetBuildPath(target[0]), 'w').write( contents )

                builder = env.Builder( action = Fill )
                builder( env, output[0], template + subs.values() )
                return output

            env.Append(BUILDERS = {'FillTemplate': FillTemplate})

    It works fine when it comes to checking if data or template changed. If they did, it rebuilds the output. It even works if I edit the process function directly. However, if my process function looks like this:

        def process( node ):
            return subprocess(node)

    and I edit subprocess, the change goes unnoticed. Is there any way to get correct builds without making the process functions always be invoked?
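
    A hedged sketch of one way to widen the dependency: record the source text of the module that defines process as an explicit dependency of each generated node, so edits to helpers it calls (such as subprocess) are also seen as changes:

        import inspect, sys

        # inside FillTemplate, right after each per-name builder call:
        generated = builder( env, env.GetBuildPath(output[0])+'_'+name+'_processed.cpp', node )[0]
        subs[name] = generated
        mod = sys.modules[process.__module__]              # module that defines process()
        env.Depends(generated, env.Value(inspect.getsource(mod)))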

    Read the article

  • C++ template and pointers

    - by Kary
    I have a problem with a template and pointers (I think). Below is part of my code:

        /* ItemCollection.h */
        #ifndef ITEMCOLLECTION_H
        #define ITEMCOLLECTION_H
        #include <cstddef>
        using namespace std;

        template <class T>
        class ItemCollection {
        public:
            // constructor
            // destructor
            void insertItem( const T );
        private:
            struct Item {
                T price;
                Item* left;
                Item* right;
            };
            Item* root;
            Item* insert( T, Item* );
        };
        #endif

    And the file with the function definition:

        /* ItemCollectionTemp.h - member function definitions */
        #include <iostream>
        #include <cstddef>
        #include "ItemCollection.h"

        template <class Type>
        Item* ItemCollection <T>::insert( T p, Item* ptr)
        {
            // function body
        }

    Here are the errors which are generated by this line of code:

        Item* ItemCollection <T>::insert( T p, Item* ptr)

    Errors:

        error C2143: syntax error : missing ';' before '*'
        error C4430: missing type specifier - int assumed. Note: C++ does not support default-int
        error C2065: 'Type' : undeclared identifier
        error C2065: 'Type' : undeclared identifier
        error C2146: syntax error : missing ')' before identifier 'p'
        error C4430: missing type specifier - int assumed. Note: C++ does not support default-int
        error C2470: 'ItemCollection::insert' : looks like a function definition, but there is no parameter list; skipping apparent body
        error C2072: 'ItemCollection::insert': initialization of a function
        error C2059: syntax error : ')'

    Any help is much appreciated.
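
    A hedged sketch of what the out-of-line definition usually needs to look like: the template parameter name must be the same everywhere (the snippet declares Type but uses T), and because Item is a nested type that depends on T, it has to be qualified and introduced with typename in the return type:

        template <class T>
        typename ItemCollection<T>::Item* ItemCollection<T>::insert( T p, Item* ptr )
        {
            // function body; Item needs no qualification in here because we are
            // already inside the scope of ItemCollection<T>
            return ptr;
        }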

    Read the article

  • Interpretation of int (*a)[3]

    - by kapuzineralex
    When working with arrays and pointers in C, one quickly discovers that they are by no means equivalent, although it might seem so at a first glance. I know about the differences in L-values and R-values. Still, recently I tried to find out the type of a pointer that I could use in conjunction with a two-dimensional array, i.e.

        int foo[2][3];
        int (*a)[3] = foo;

    However, I just can't find out how the compiler "understands" the type definition of a, in spite of the regular operator precedence rules for * and []. If instead I were to use a typedef, the problem becomes significantly simpler:

        int foo[2][3];
        typedef int my_t[3];
        my_t *a = foo;

    At the bottom line, can someone explain how the term int (*a)[3] is read by the compiler? int a[3] is simple, and int *a[3] is simple as well. But then, why is it not int *(a[3])? EDIT: Of course, instead of "typecast" I meant "typedef" (it was just a typo).
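
    A short sketch of how the declaration reads with the usual inside-out rule (start at the name, parentheses bind first):

        int foo[2][3];
        int (*a)[3] = foo;   /* (*a)    : a is a pointer                       */
                             /* (*a)[3] : ...to an array of 3                  */
                             /* int ... : ...whose elements are int            */
        int *b[3];           /* without the parentheses, [] binds tighter than */
                             /* *, so b is an array of 3 pointers to int       */
        /* usage: a[1][2] = 5;  a steps over whole rows of 3 ints at a time */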

    Read the article

  • Should I learn two (or more) programming languages in parallel?

    - by c_maker
    I found entries on this site about learning a new programming language; however, I have not come across anything that talks about the advantages and disadvantages of learning two languages at the same time. Let's say my goal is to learn two new languages in a year. I understand that the definition of learning a new language is different for everyone, and you can probably never know everything about a language. I believe in most cases the following things are enough to include the language in your resume and say that you are proficient in it (the list is not in any particular order):

    - Know its syntax so you can write a simple program in it
    - Compare its underlying concepts with concepts of other languages
    - Know best practices
    - Know what libraries are available
    - Know in what situations to use it
    - Understand the flow of a more complex program
    - At least know most of what you do not know

    I would probably look for a good book and pick an open source project for both of these languages to start with. My questions:

    - Is it best to spend 5 months learning language #1 and then 5 months learning language #2, or should you mix the two? By mixing them I mean you work on them in parallel.
    - Should you pick two languages that are similar or different?
    - Are there any advantages/disadvantages of, let's say, learning Lisp in tandem with Ruby?
    - Is it a good idea to pick two languages with similar syntax, or would it be too confusing?

    Please tell me what your experiences are regarding this. Does it make a difference if you are a beginner or a senior programmer?

    Read the article

  • MySQL Join/Comparison on a DATETIME column (<5.6.4 and > 5.6.4)

    - by Simon
    Suppose I have two tables like so:

        Events
            ID (PK int autoinc), Time (datetime), Caption (varchar)

        Position
            ID (PK int autoinc), Time (datetime), Easting (float), Northing (float)

    Is it safe to, for example, list all the events and their position if I am using the Time field as my joining criterion? I.e.:

        SELECT E.*, P.* FROM Events E JOIN Position P ON E.Time = P.Time

    Or even simply comparing a datetime value (taking into consideration that the parameterized value may contain the fractional-seconds part, which MySQL has always accepted), e.g.:

        SELECT E.* FROM Events E WHERE E.Time = @Time

    I understand MySQL (before version 5.6.4) only stores datetime fields WITHOUT milliseconds, so I would assume this query would function OK. However, as of version 5.6.4, I have read that MySQL can now store milliseconds with the datetime field. Assuming datetime values are inserted using functions such as NOW(), the milliseconds are truncated (<5.6.4), which I would assume allows the above query to work. However, with version 5.6.4 and later, this could potentially NOT work. I am, and only ever will be, interested in second accuracy. If anyone could answer the following questions it would be greatly appreciated:

    - In general, how does MySQL compare datetime fields against one another (consider the above query)?
    - Is the above query fine, and does it make use of indexes on the time fields? (MySQL < 5.6.4)
    - Is there any way to exclude milliseconds, i.e. when inserting and in conditional joins/selects etc.? (MySQL 5.6.4)
    - Will the join query above work? (MySQL 5.6.4)

    EDIT: I know I can cast the datetimes, thanks to those that answered, but I'm trying to tackle the root of the problem here (the fact that the storage type/definition has been changed) and I DO NOT want to use functions in my queries. That negates all my work of optimizing queries, applying indexes, etc., not to mention having to rewrite all my queries.

    EDIT2: Can anyone out there suggest a reason NOT to join on a DATETIME field using second accuracy?
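
    A hedged sketch of one way to stay at second accuracy on 5.6.4+ without touching the queries: declare the columns with an explicit fractional-seconds precision of 0 (which plain DATETIME defaults to anyway), so stored values carry no milliseconds and the function-free equality join keeps using the Time indexes (column list abbreviated):

        CREATE TABLE Events (
            ID      INT AUTO_INCREMENT PRIMARY KEY,
            Time    DATETIME(0) NOT NULL,          -- fsp 0: no fractional seconds stored
            Caption VARCHAR(255),
            KEY idx_events_time (Time)
        );

        -- no functions on either side, so the join stays index-friendly
        SELECT E.*, P.*
        FROM Events E
        JOIN Position P ON E.Time = P.Time;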

    Read the article

  • Thread-safe lazy instantiation using MEF

    - by Xaqron
        // Member variable
        private static readonly object _syncLock = new object();

        // Now inside a static method
        foreach (var lazyObject in plugins)
        {
            if ((string)lazyObject.Metadata["key"] == "something")
            {
                lock (_syncLock)
                {
                    // It seems the `IsValueCreated` is not up-to-date
                    if (!lazyObject.IsValueCreated)
                        lazyObject.Value.DoSomething();
                }
                return lazyObject.Value;
            }
        }

    Here I need synchronized access per loop. There are many threads iterating this loop, and based on the key they are looking for, a lazy instance is created and returned. lazyObject should not be created more than one time. Although the Lazy class is meant for that, and despite the lock used, under high threading I get more than one instance created (I track this with an Interlocked.Increment on a volatile static int and log it somewhere). The problem is I don't have access to the definition of Lazy, and MEF defines how the Lazy class creates objects. I should note the CompositionContainer has a thread-safe option in its constructor, which is already used. My questions:

    1) Why doesn't the lock work?
    2) Should I use an array of locks instead of one lock for performance improvement?
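
    On question 2, a hedged sketch of per-key gating: one lock object per metadata key, handed out through ConcurrentDictionary.GetOrAdd so the gates themselves are race-free (key stands for the metadata value being matched; needs System.Collections.Concurrent):

        private static readonly ConcurrentDictionary<string, object> _gates =
            new ConcurrentDictionary<string, object>();

        // inside the loop, once the key has matched:
        var gate = _gates.GetOrAdd(key, _ => new object());
        lock (gate)
        {
            // the first access to Value under this gate creates the instance;
            // later callers for the same key get the already-created one
            var plugin = lazyObject.Value;
            plugin.DoSomething();
            return plugin;
        }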

    Read the article

  • In C/C++, how do I link a dynamic link library compiled with GCC/G++ in MS Visual Studio?

    - by coanor
    These days I have used Flex & Bison to generate some code for an SQL-parser-like tool. This code doesn't compile cleanly in VS2005 (maybe that's another topic), but GCC/G++ handles it well, so I compiled it with MinGW into a DLL (on Windows XP) and then tried to link against these function facades in VS2005, but it seems the DLL can't be linked. Does MS VS2005 recognize a DLL compiled with MinGW on Windows? Is there anything additional I need to do, for example adding something to the include file that declares the exported APIs? Can anyone give some advice? The situation is: in VS2005, if you want to export some APIs, you provide a *.def file to tell nmake which APIs to export, and then you create one or more *.h files to declare those APIs (adding a stdcall-like prefix as a calling convention) and some data-type definitions. But with GCC/G++ you do not need to do such boring things; just using [ar] you can get these APIs, so my *.h file adds no calling convention and there is no *.def, just ordinary function declarations. After the *.dll is generated, I add the *.h file and move the generated *.dll into the VS2005 project directory, then set the DLL for linking in the project settings. Do these steps explain my question? BTW, I found and tested that a VC6-compiled DLL can be linked with MinGW on Windows XP, but the reverse doesn't work. Anyway, forgive my poor English, and thanks for your concern.
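
    A hedged sketch of the usual cross-compiler setup: export the parser entry points with plain C linkage from the MinGW build, and give VS2005 an import library to link against, since MSVC links to a DLL through a .lib rather than the .dll itself (all names below are made up):

        /* sqlparser_api.h, shared between the MinGW build and the VS2005 client */
        #ifdef __cplusplus
        extern "C" {
        #endif

        #ifdef BUILDING_SQLPARSER
        #define SQLPARSER_API __declspec(dllexport)
        #else
        #define SQLPARSER_API __declspec(dllimport)
        #endif

        SQLPARSER_API int parse_sql(const char *statement);

        #ifdef __cplusplus
        }
        #endif

    On the MinGW side, gcc -shared -o sqlparser.dll ... -Wl,--output-def,sqlparser.def can emit a .def file for the DLL, and on the Visual Studio side lib /def:sqlparser.def /out:sqlparser.lib produces the import library to add to the project's linker inputs.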

    Read the article
