Search Results

Search found 17501 results on 701 pages for 'stored functions'.

Page 644/701

  • IF Statement has strange behavior

    - by BSchlinker
    I've developed a 'custom' cout, so that I can display text to console and also print it to a log file. This cout class is passed a different integer on initialization, with the integer representing the verbosity level of the message. If the current verbosity level is greater then or equal to the verbosity level of the message, the message should print. The problem is, I have messages printing even when the current verbosity level is too low. I went ahead and debugged it, expecting to find the problem. Instead, I found multiple scenarios where my if statements are not working as expected. The statement if(ilralevel_passed <= ilralevel_set) will sometimes proceed even if ilralevel_set is LESS then ilralevel_passed. You can see this behavior in the following picture (my apologizes for using Twitpic) http://twitpic.com/1xtx4g/full. Notice how ilralevel_set is equal to zero, and ilralevel_passed is equal to one. Yet, the if statement has returned true and is now moving forward to pass the line to cout. I've never seen this type of behavior before and I'm not exactly sure how to proceed debugging it. I'm not able to isolate the behavior either -- it only occurs in certain parts of my program. Any suggestions are appreciated as always. // Here is an example use of the function: // ilra_status << setfill('0') << setw(2) << dispatchtime.tm_sec << endl; // ilra_warning << "Dispatch time (seconds): " << mktime(&dispatchtime) << endl; // Here is the 'custom' cout function: #ifndef ILRA_H_ #define ILRA_H_ // System libraries #include <iostream> #include <ostream> #include <sstream> #include <iomanip> // Definitions #define ilra_talk ilra(__FUNCTION__,0) #define ilra_update ilra(__FUNCTION__,0) #define ilra_error ilra(__FUNCTION__,1) #define ilra_warning ilra(__FUNCTION__,2) #define ilra_status ilra(__FUNCTION__,3) // Statics static int ilralevel_set = 0; static int ilralevel_passed; // Classes class ilra { public: // constructor / destructor ilra(const std::string &funcName, int toset) { ilralevel_passed = toset; } ~ilra(){}; // enable / disable irla functions static void ilra_verbose_level(int toset){ ilralevel_set = toset; } // output template <class T> ilra &operator<<(const T &v) { if(ilralevel_passed <= ilralevel_set) std::cout << v; return *this; } ilra &operator<<(std::ostream&(*f)(std::ostream&)) { if(ilralevel_passed <= ilralevel_set) std::cout << *f; return *this; } }; // end of the class #endif /* ILRA_H_ */
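
    A possible explanation, not confirmed from the post alone: because ilralevel_set and ilralevel_passed are file-scope statics defined in the header, every translation unit that includes ILRA_H_ gets its own private copies, so the values the debugger displays may belong to a different copy than the one the inlined operator<< actually compares. Below is a minimal sketch of one way to share a single verbosity level instead; the structure and names are illustrative, not the poster's final design.

        #include <iostream>
        #include <string>

        // A function-local static inside an inline function is shared across
        // translation units, unlike a file-scope static defined in a header.
        inline int &ilra_verbosity()
        {
            static int level = 0;   // one instance for the whole program
            return level;
        }

        class ilra
        {
        public:
            ilra(const std::string & /*funcName*/, int level) : passed_(level) {}

            static void ilra_verbose_level(int toset) { ilra_verbosity() = toset; }

            template <class T>
            ilra &operator<<(const T &v)
            {
                if (passed_ <= ilra_verbosity())
                    std::cout << v;
                return *this;
            }

            ilra &operator<<(std::ostream &(*f)(std::ostream &))
            {
                if (passed_ <= ilra_verbosity())
                    std::cout << f;   // applies the manipulator (e.g. std::endl)
                return *this;
            }

        private:
            int passed_;   // verbosity level of this particular message
        };

        int main()
        {
            ilra::ilra_verbose_level(2);
            ilra("main", 1) << "printed: message level 1 <= 2" << std::endl;
            ilra("main", 3) << "suppressed: message level 3 > 2" << std::endl;
        }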

    Read the article

  • Dynamic data-entry value store

    - by simendsjo
    I'm creating a data-entry application where users are allowed to create the entry schema. My first version of this just created a single table per entry schema with each entry spanning a single or multiple columns (for complex types) with the appropriate data type. This allowed for "fast" querying (on small datasets as I didn't index all columns) and simple synchronization where the data-entry was distributed on several databases. I'm not quite happy with this solution though; the only positive thing is the simplicity... I can only store a fixed number of columns. I need to create indexes on all columns. I need to recreate the table on schema changes. Some of my key design criterias are: Very fast querying (Using a simple domain specific query language) Writes doesn't have to be fast Many concurrent users Schemas will change often Schemas might contain many thousand columns The data-entries might be distributed and needs syncronization. Preferable MySQL and SQLite - Databases like DB2 and Oracle is out of the question. Using .Net/Mono I've been thinking of a couple of possible designs, but none of them seems like a good choice. Solution 1: Union like table containing a Type column and one nullable column per type. This avoids joins, but will definitly use a lot of space. Solution 2: Key/value store. All values are stored as string and converted when needed. Also use a lot of space, and of course, I hate having to convert everything to string. Solution 3: Use an xml database or store values as xml. Without any experience I would think this is quite slow (at least for the relational model unless there is some very good xpath support). I also would like to avoid an xml database as other parts of the application fits better as a relational model, and being able to join the data is helpful. I cannot help to think that someone has solved (some of) this already, but I'm unable to find anything. Not quite sure what to search for either... I know market research is doing something like this for their questionnaires, but there are few open source implementations, and the ones I've found doesn't quite fit the bill. PSPP has much of the logic I'm thinking of; primitive column types, many columns, many rows, fast querying and merging. Too bad it doesn't work against a database.. And of course... I don't need 99% of the provided functionality, but a lot of stuff not included. I'm not sure this is the right place to ask such a design related question, but I hope someone here has some tips, know of any existing work, or can point me to a better place to ask such a question. Thanks in advance!

    Read the article

  • Crash in OS X Core Data Utility Tutorial

    - by vinogradov
    I'm trying to follow Apple's Core Data utility Tutorial. It was all going nicely, until... The tutorial uses a custom sub-class of NSManagedObject, called 'Run'. Run.h looks like this: #import <Foundation/Foundation.h> #import <CoreData/CoreData.h> @interface Run : NSManagedObject { NSInteger processID; } @property (retain) NSDate *date; @property (retain) NSDate *primitiveDate; @property NSInteger processID; @end Now, in Run.m we have an accessor method for the processID variable: - (void)setProcessID:(int)newProcessID { [self willChangeValueForKey:@"processID"]; processID = newProcessID; [self didChangeValueForKey:@"processID"]; } In main.m, we use functions to set up a managed object model and context, instantiate an entity called run, and add it to the context. We then get the current NSprocessInfo, in preparation for setting the processID of the run object. NSManagedObjectContext *moc = managedObjectContext(); NSEntityDescription *runEntity = [[mom entitiesByName] objectForKey:@"Run"]; Run *run = [[Run alloc] initWithEntity:runEntity insertIntoManagedObjectContext:moc]; NSProcessInfo *processInfo = [NSProcessInfo processInfo]; Next, we try to call the accessor method defined in Run.m to set the value of processID: [run setProcessID:[processInfo processIdentifier]]; And that's where it's crashing. The object run seems to exist (I can see it in the debugger), so I don't think I'm messaging nil; on the other hand, it doesn't look like the setProcessID: message is actually being received. I'm obviously still learning this stuff (that's what tutorials are for, right?), and I'm probably doing something really stupid. However, any help or suggestions would be gratefully received! ===MORE INFORMATION=== Following up on Jeremy's suggestions: The processID attribute in the model is set up like this: NSAttributeDescription *idAttribute = [[NSAttributeDescription alloc]init]; [idAttribute setName:@"processID"]; [idAttribute setAttributeType:NSInteger32AttributeType]; [idAttribute setOptional:NO]; [idAttribute setDefaultValue:[NSNumber numberWithInteger:-1]]; which seems a little odd; we are defining it as a scalar type, and then giving it an NSNumber object as its default value. In the associated class, Run, processID is defined as an NSInteger. Still, this should be OK - it's all copied directly from the tutorial. It seems to me that the problem is probably in there somewhere. By the way, the getter method for processID is defined like this: - (int)processID { [self willAccessValueForKey:@"processID"]; NSInteger pid = processID; [self didAccessValueForKey:@"processID"]; return pid; } and this method works fine; it accesses and unpacks the default int value of processID (-1). Thanks for the help so far!

    Read the article

  • MATLAB: different function returns from command line and within function

    - by Myx
    Hello: I have an extremely bizzare situation: I have a function in MATLAB which calls three other main functions and produces two figures for me. The function reads in an input jpeg image, crops it, segments it using kmeans clustering, and outputs 2 figures to the screen - the original image and the clustered image with the cluster centers indicated. Here is the function in MATLAB: function [textured_avg_x photo_avg_x] = process_database_images() clear all warning off %#ok type_num_max = 3; % type is 1='texture', 2='graph', or 3='photo' type_num_max = 1; img_max_num_photo = 100; % 400 photo images img_max_num_other = 100; % 100 textured, and graph images for type_num = 1:2:type_num_max if(type_num == 3) img_num_max = img_max_num_photo; else img_num_max = img_max_num_other; end img_num_max = 1; for img_num = 1:img_num_max [type img] = load_image(type_num, img_num); %img = imread('..\images\445.jpg'); img = crop_image(img); [IDX k block_bounds features] = segment_image(img); end end end The function segment_image first shows me the color image that was passed in, performs kmeans clustering, and outputs the clustered image. When I run this function on a particular image, I get 3 clusters (which is not what I expect to get). When I run the following commands from the MATLAB command prompt: >> img = imread('..\images\texture\1.jpg'); >> img = crop_image(img); >> segment_image(img); then the first image that is displayed by segment_image is the same as when I run the function (so I know that the clustering is done on the same image) but the number of clusters is 16 (which is what I expect). In fact, when I run my process_database_images() function on my entire image database, EVERY image is evaluated to have 3 clusters (this is a problem), whereas when I test some images individually, I get in the range of 12-16 clusters, which is what I prefer and expect. Why is there such a discrepancy? Am I having some syntax bug in my process_database_images() function? If more code is required from me (i.e. segment_images function, or crop_image function), please let me know. Thanks.

    Read the article

  • c#: Design advice. Using DataTable or List<MyObject> for a generic rule checker

    - by Andrew White
    Hi, I have about 100,000 lines of generic data. Columns/Properties of this data are user definable and are of the usual data types (string, int, double, date). There will be about 50 columns/properties. I have 2 needs: To be able to calculate new columns/properties using an expression e.g. Column3 = Column1 * Column2. Ultimately I would like to be able to use external data using a callback, e.g. Column3 = Column1 * GetTemperature The expression is relatively simple, maths operations, sum, count & IF are the only necessary functions. To be able to filter/group the data and perform aggregations e.g. Sum(Data.Column1) Where(Data.Column2 == "blah") As far as I can see I have two options: 1. Using a DataTable. = Point 1 above is achieved by using DataColumn.Expression = Point 2 above is acheived by using DataTable.DefaultView.RowFilter & C# code 2. Using a List of generic Objects each with a Dictionary< string, object to store the values. = Point 1 could be achieved by something like NCalc = Point 2 is achieved using LINQ DataTable: Pros: DataColumn.Expression is inbuilt Cons: RowFilter & coding c# is not as "nice" as LINQ, DataColumn.Expression does not support callbacks(?) = workaround could be to get & replace external value when creating the calculated column GenericList: Pros: LINQ syntax, NCalc supports callbacks Cons: Implementing NCalc/generic calc engine Based on the above I would think a GenericList approach would win, but something I have not factored in is the performance which for some reason I think would be better with a datatable. Does anyone have a gut feeling / experience with LINQ vs. DataTable performance? How about NCalc? As I said there are about 100,000 rows of data, with 50 columns, of which maybe 20 are calculated. In total about 50 rules will be run against the data, so in total there will be 5 million row/object scans. Would really appreciate any insights. Thx. ps. Of course using a database + SQL & Views etc. would be the easiest solution, but for various reasons can't be implemented.

    Read the article

  • Downloaded zip file has 0 bytes as size

    - by Yaw Reuben
    I have written a Java web application that allows a user to download files from a server. These files are quite large and so are zipped together before download. It works like this: 1. The user gets a list of files that match his/her criteria 2. If the user likes a file and wants to download he/she selects it by checking a checkbox 3. The user then clicks "download" 4. The files are then zipped and stored on a server 5. The user this then presented with a page which contains a link to the downloadable zip file 6. However on downloading the zip file the file that is downloaded is 0 bytes in size I have checked the remote server and the zip file is being created properly, all that is left is to serve the file the user somehow, can you see where I might be going wrong, or suggest a better way to serve the zip file. The code that creates the link is: <% String zipFileURL = (String) request.getAttribute("zipFileURL"); %> <p><a href="<% out.print(zipFileURL); %> ">Zip File Link</a></p> The code that creates the zipFileURL variable is: public static String zipFiles(ArrayList<String> fileList, String contextRootPath) { //time-stamping Date date = new Date(); Timestamp timeStamp = new Timestamp(date.getTime()); Iterator fileListIterator = fileList.iterator(); String zipFileURL = ""; try { String ZIP_LOC = contextRootPath + "WEB-INF" + SEP + "TempZipFiles" + SEP; BufferedInputStream origin = null; zipFileURL = ZIP_LOC + "FITS." + timeStamp.toString().replaceAll(":", ".").replaceAll(" ", ".") + ".zip"; FileOutputStream dest = new FileOutputStream(ZIP_LOC + "FITS." + timeStamp.toString().replaceAll(":", ".").replaceAll(" ", ".") + ".zip"); ZipOutputStream out = new ZipOutputStream(new BufferedOutputStream( dest)); // out.setMethod(ZipOutputStream.DEFLATED); byte data[] = new byte[BUFFER]; while(fileListIterator.hasNext()) { String fileName = (String) fileListIterator.next(); System.out.println("Adding: " + fileName); FileInputStream fi = new FileInputStream(fileName); origin = new BufferedInputStream(fi, BUFFER); ZipEntry entry = new ZipEntry(fileName); out.putNextEntry(entry); int count; while ((count = origin.read(data, 0, BUFFER)) != -1) { out.write(data, 0, count); } origin.close(); } out.close(); } catch (Exception e) { e.printStackTrace(); } return zipFileURL; }

    Read the article

  • ASP.NET- forcing child/container events to fire before parent onload?

    - by Hans Gruber
    I'm working on a questionnaire type application in which questions are stored in a database. Therefore, I create my controls dynamically on every Page.OnLoad. This works like a charm and ViewState is persisted between postbacks because I ensure that my dynamic controls always have the same generated Control.ID. In addition to the user control that dynamically populates the questions, my questionnaire page also contains a 'Status' section (also encapsulated by a user control) which represents the status of the questionnaire (choices are 'Complete', 'Started' or 'In Progress'). If the user changes the status of questionnaire (i.e. from 'In Progress' to 'Complete'), I need to postback to the server because the contents of the dynamic portion of the questionnaire depend on the selected status. Some questions are always present regardless of status, and yet others may not be present at all for the selected status. The point is, when the status changes, I have to postback to the page and render the right set of questions. Additionally, I need to preserve any user entered values for those questions which are 'always available'. However, due to the page life cycle in ASP.NET, the 'Status' user control's OnLoad, which contains the correct status needed to load the right questions from the DB, doesn't get executed until after the 'dynamic questions' user control has already been populated (with the wrong/stale values). To get around this, I raise an event from my 'Status' user control to the main page to indicate that the Status has changed. The main page then raises an event on the 'dynamic questions' user control. Since by the time this event bubbles up, the 'dynamic questions' user control has already loaded the 'wrong' questions from the DB, it first calls Controls.Clear. It then happily uses the new status to query the database for the 'correct' questions and does a Control.Add() on each. FYI, Control.IDs are consistent across postbacks. This solution works...sorta. The correct set of questions for the selected status do get rendered; however ViewState is getting lost for those 'always available' questions. I'm guessing this is because the 'dynamic questions' user control calls Controls.Clear when responding to the status changed event. This must somehow kill the association between ViewState and my dynamic controls, even though the Control.ID are consistent. This seems like such a common requirement, I'm virtually certain there is a better, cleaner and less error prone approach to accomplish this. In case its not plain obvious, I haven't been able to grok the ASP.NET page life-cycle despite working with it for the last year. Any help is much appreciated!

    Read the article

  • Mocking a concrete class : templates and avoiding conditional compilation

    - by AshirusNW
    I'm trying to testing a concrete object with this sort of structure. class Database { public: Database(Server server) : server_(server) {} int Query(const char* expression) { server_.Connect(); return server_.ExecuteQuery(); } private: Server server_; }; i.e. it has no virtual functions, let alone a well-defined interface. I want to a fake database which calls mock services for testing. Even worse, I want the same code to be either built against the real version or the fake so that the same testing code can both: Test the real Database implementation - for integration tests Test the fake implementation, which calls mock services To solve this, I'm using a templated fake, like this: #ifndef INTEGRATION_TESTS class FakeDatabase { public: FakeDatabase() : realDb_(mockServer_) {} int Query(const char* expression) { MOCK_EXPECT_CALL(mockServer_, Query, 3); return realDb_.Query(); } private: // in non-INTEGRATION_TESTS builds, Server is a mock Server with // extra testing methods that allows mocking Server mockServer_; Database realDb_; }; #endif template <class T> class TestDatabaseContainer { public: int Query(const char* expression) { int result = database_.Query(expression); std::cout << "LOG: " << result << endl; return result; } private: T database_; }; Edit: Note the fake Database must call the real Database (but with a mock Server). Now to switch between them I'm planning the following test framework: class DatabaseTests { public: #ifdef INTEGRATION_TESTS typedef TestDatabaseContainer<Database> TestDatabase ; #else typedef TestDatabaseContainer<FakeDatabase> TestDatabase ; #endif TestDatabase& GetDb() { return _testDatabase; } private: TestDatabase _testDatabase; }; class QueryTestCase : public DatabaseTests { public: void TestStep1() { ASSERT(GetDb().Query(static_cast<const char *>("")) == 3); return; } }; I'm not a big fan of that compile-time switching between the real and the fake. So, my question is: Whether there's a better way of switching between Database and FakeDatabase? For instance, is it possible to do it at runtime in a clean fashion? I like to avoid #ifdefs. Also, if anyone has a better way of making a fake class that mimics a concrete class, I'd appreciate it. I don't want to have templated code all over the actual test code (QueryTestCase class). Feel free to critique the code style itself, too. You can see a compiled version of this code on codepad.
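
    One runtime alternative to the #ifdef switch, sketched under the assumption that a virtual call in the test harness is acceptable: hide the concrete Database behind a small interface and choose the real or fake implementation from a flag when the fixture is built. Server, Database and the canned return value 3 are minimal stand-ins for the poster's types; a full fake would construct Database with a mock Server and set expectations on it.

        #include <iostream>

        // Minimal stand-ins for the poster's concrete types.
        struct Server { void Connect() {} int ExecuteQuery() { return 3; } };

        class Database {
        public:
            explicit Database(Server server) : server_(server) {}
            int Query(const char *) { server_.Connect(); return server_.ExecuteQuery(); }
        private:
            Server server_;
        };

        // Runtime seam: a small interface that both the real and the fake satisfy.
        struct IDatabase {
            virtual ~IDatabase() {}
            virtual int Query(const char *expression) = 0;
        };

        class RealDatabase : public IDatabase {
        public:
            RealDatabase() : db_(Server()) {}
            int Query(const char *e) { return db_.Query(e); }
        private:
            Database db_;
        };

        class FakeDatabase : public IDatabase {
        public:
            // A real fake would hold a Database built around a mock Server and
            // set expectations on it; a canned value keeps the sketch short.
            int Query(const char *) { return 3; }
        };

        // Decided at runtime (command-line flag, environment variable, ...)
        // instead of at compile time with #ifdef INTEGRATION_TESTS.
        IDatabase *MakeTestDatabase(bool integration)
        {
            if (integration)
                return new RealDatabase();
            return new FakeDatabase();
        }

        int main()
        {
            IDatabase *db = MakeTestDatabase(false);   // flip to true for integration runs
            std::cout << "LOG: " << db->Query("") << std::endl;
            delete db;
        }

    Production code can keep using the concrete Database directly; only the test harness pays for the extra indirection.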

    Read the article

  • How do I get the next token in a C string if I want to use it as an int? (C++)

    - by Van
    My objective is to take directions from a user and eventually a text file to move a robot. The catch is that I must use Cstrings(such as char word[];) rather than the std::string and tokenize them for use. the code looks like this: void Navigator::manualDrive() { char uinput[1]; char delim[] = " "; char *token; cout << "Enter your directions below: \n"; cin.ignore(); cin.getline (uinput, 256); token=strtok(uinput, delim); if(token == "forward") { int inches; inches=token+1; travel(inches); } } I've never used Cstrings I've never tokenized anything before, and I don't know how to write this. Our T.A.'s expect us to google and find all the answers because they are aware we've never been taught these methods. Everyone in my lab is having much more trouble than usual. I don't know the code to write but I know what I want my program to do. I want it to execute like this: 1) Ask for directions. 2) cin.getline the users input 3) tokenize the inputed string 4) if the first word token == "forward" move to the next token and find out how many inches to move forward then move forward 5) else if the first token == "turn" move to the next token. if the next token == "left" move to the next token and find out how many degrees to turn left I will have to do this for forward x, backward x, turn left x, turn right x, and stop(where x is in inches or degrees). I already wrote functions that tell the robot how to move forward an inch and turn in degrees. I just need to know how to convert the inputted strings to all lowercase letters and move from token to token and convert or extract the numbers from the string to use them as integers. If all is not clear you can read my lab write up at this link: http://www.cs.utk.edu/~cs102/robot_labs/Lab9.html If anything is unclear please let me know, and I will clarify as best I can.
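
    A minimal sketch of the usual C-string pieces, with travel() standing in for the poster's robot call and everything else illustrative: a buffer large enough for cin.getline (char uinput[1] cannot hold a line), strtok to split on spaces, strcmp rather than == to compare tokens (== only compares pointers), tolower to fold case, and atoi or strtol to turn the distance token into an int.

        #include <iostream>
        #include <cstring>
        #include <cstdlib>
        #include <cctype>

        // Hypothetical stand-in for the poster's robot call.
        void travel(int inches) { std::cout << "travel " << inches << "\n"; }

        int main()
        {
            char uinput[256];                 // big enough for a whole line
            char delim[] = " ";

            std::cout << "Enter your directions below:\n";
            std::cin.getline(uinput, sizeof(uinput));

            for (char *token = std::strtok(uinput, delim); token != 0;
                 token = std::strtok(0, delim))            // strtok(NULL, ...) = next token
            {
                for (char *p = token; *p; ++p)             // lower-case the token in place
                    *p = static_cast<char>(std::tolower(static_cast<unsigned char>(*p)));

                if (std::strcmp(token, "forward") == 0)    // strcmp, not ==, for C strings
                {
                    char *arg = std::strtok(0, delim);     // the next token is the distance
                    if (arg != 0)
                        travel(std::atoi(arg));            // or strtol for error checking
                }
                // "backward", "turn left", "turn right", "stop" follow the same pattern
            }
        }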

    Read the article

  • Text Obfuscation using base64_encode()

    - by user271619
    I'm playing around with encrypt/decrypt coding in php. Interesting stuff! However, I'm coming across some issues involving what text gets encrypted into. Here's 2 functions that encrypt and decrypt a string. It uses an Encryption Key, which I set as something obscure. I actually got this from a php book. I modified it slightly, but not to change it's main goal. I created a small example below that anyone can test. But, I notice that some characters show up as the "encrypted" string. Characters like "=" and "+". Sometimes I pass this encrypted string via the url. Which may not quite make it to my receiving scripts. I'm guessing the browser does something to the string if certain characters are seen. I'm really only guessing. is there another function I can use to ensure the browser doesn't touch the string? or does anyone know enough php bas64_encode() to disallow certain characters from being used? I'm really not going to expect the latter as a possibility. But, I'm sure there's a work-around. enjoy the code, whomever needs it! define('ENCRYPTION_KEY', "sjjx6a"); function encrypt($string) { $result = ''; for($i=0; $i<strlen($string); $i++) { $char = substr($string, $i, 1); $keychar = substr(ENCRYPTION_KEY, ($i % strlen(ENCRYPTION_KEY))-1, 1); $char = chr(ord($char)+ord($keychar)); $result.=$char; } return base64_encode($result)."/".rand(); } function decrypt($string){ $exploded = explode("/",$string); $string = $exploded[0]; $result = ''; $string = base64_decode($string); for($i=0; $i<strlen($string); $i++) { $char = substr($string, $i, 1); $keychar = substr(ENCRYPTION_KEY, ($i % strlen(ENCRYPTION_KEY))-1, 1); $char = chr(ord($char)-ord($keychar)); $result.=$char; } return $result; } echo $encrypted = encrypt("reaplussign.jpg"); echo "<br>"; echo decrypt($encrypted);

    Read the article

  • Choosing a distributed shared memory solution

    - by mindas
    I have a task to build a prototype for a massively scalable distributed shared memory (DSM) app. The prototype would only serve as a proof-of-concept, but I want to spend my time most effectively by picking the components which would be used in the real solution later on. The aim of this solution is to take data input from an external source, churn it and make the result available for a number of frontends. Those "frontends" would just take the data from the cache and serve it without extra processing. The amount of frontend hits on this data can literally be millions per second. The data itself is very volatile; it can (and does) change quite rapidly. However the frontends should see "old" data until the newest has been processed and cached. The processing and writing is done by a single (redundant) node while other nodes only read the data. In other words: no read-through behaviour. I was looking into solutions like memcached however this particular one doesn't fulfil all our requirements which are listed below: The solution must at least have Java client API which is reasonably well maintained as the rest of app is written in Java and we are seasoned Java developers; The solution must be totally elastic: it should be possible to add new nodes without restarting other nodes in the cluster; The solution must be able to handle failover. Yes, I realize this means some overhead, but the overall served data size isn't big (1G max) so this shouldn't be a problem. By "failover" I mean seamless execution without hardcoding/changing server IP address(es) like in memcached clients when a node goes down; Ideally it should be possible to specify the degree of data overlapping (e.g. how many copies of the same data should be stored in the DSM cluster); There is no need to permanently store all the data but there might be a need of post-processing of some of the data (e.g. serialization to the DB). Price. Obviously we prefer free/open source but we're happy to pay a reasonable amount if a solution is worth it. In any way, paid 24hr/day support contract is a must. The whole thing has to be hosted in our data centers so SaaS offerings like Amazon SimpleDB are out of scope. We would only consider this if no other options would be available. Ideally the solution would be strictly consistent (as in CAP); however, eventual consistence can be considered as an option. Thanks in advance for any ideas.

    Read the article

  • Newbie question, jQuery: how can we implement if...else logic and call a function

    - by Rachel
    I am new to jQuery and so don't mind this question if it sounds stupid but here is something that I am trying to do : I have 3 functions like: AddToCart Function which adds item to the shopping cart: //offer_id is the offer which we are trying to add to cart. addToCart: function(offer_id) { this.submit({action: 'add', 'offer_id': offer_id}, {'app_server_url': this.app_server_url}); }, RemoveFromCart which removes data from the cart //target is link clicked and event is the click event. removeFromCart: function(target, event) { this.uniqueElmt('cart_table').find('.sb_item_remove').unbind('click'); var offer_id = $(target).parent().find('.offer_id').html(); this.submit({action: 'remove', 'offer_id': offer_id, 'next_action': this.config.current_action}, {'app_server_url': this.app_server_url}); }, Get the current state of the cart //return string which represents current state of cart. getCartItems: function() { return this.contents; } Now I am trying to do 3 things: if there is no content in cart and addToCart is called than some action, so basically here we need to check the current state of cart and that is obtained by calling getCartItems and if it is Null and than if addToCart is called than we perform some action if there is content in the cart and addToCart is called than some action,so basically here we need to check the current state of cart and that is obtained by calling getCartItems and check if it is Null or not and than if addToCart is called than we perform some action if we had some content in the cart. if there is content in the cart and removeFromCart is called some action, so basically here we need to check the current state of cart and that is obtained by calling getCartItems and if it is not Null and if removeFromCart is called than we perform some action Pseudocode of what I am trying to do: if there is no content in cart and addToCart is called than $(document).track( { 'module' : 'Omniture', 'event' : 'instant', 'args' : { 'linkTrackVars' : 'products,events', 'linkTrackEvents' : 'scAdd,scOpen', 'linkType' : 'o', 'linkName' : 'Cart : First Product Added' // could be blank, but can include event name as added feature 'svalues' : { 'products' : ';OFFERID1[,;OFFERID2]', 'events' : 'scAdd,scOpen', }, } 'defer' : '0' } ); if there is content in the cart and addToCart is called than $(document).track( { 'module' : 'Omniture', 'event' : 'instant', 'args' : { 'linkTrackVars' : 'products,events', 'linkTrackEvents' : 'scAdd', 'linkType' : 'o', 'linkName' : 'Cart : Product Added' // could be blank, but can include event name as added feature 'svalues' : { 'products' : ';OFFERID1[,;OFFERID2]', 'events' : 'scAdd', }, }, 'defer' : '0' } ); if there is content in the cart and removeFromCart is called $(document).track( { 'module' : 'Omniture', 'event' : 'instant', 'args' : { 'linkTrackVars' : 'products,events', 'linkTrackEvents' : 'scRemove', 'linkType' : 'o', 'linkName' : 'Cart : Product Removed' // could be blank, but can include event name as added feature 'svalues' : { 'products' : ';OFFERID1[,;OFFERID2]', 'events' : 'scRemove', }, } 'defer' : '0' } ); My basic concern is that am complete newbie to jQuery and JavaScript and so am not sure how can I implement if...else logic and how can I call a funtion using jQuery/JavaScript.

    Read the article

  • Java - strange problem storing text to a String[] and then storing the array to a Vector. Help please

    - by hatboysam
    OK so for background I've only been using Java for a little more than a week and I am making a basic GUI "Recipe Book" application just to test out some of the techniques I've learned. Currently I'm working on the "Add New Recipe" page that allows the user to make a new recipe by adding ingredients one at a time. The ingredients have 3 attributes: Name, amount, and unit (like cups, oz, etc) that come from two text fields and a combo box respectively. The 3 values are stores as strings to a String array that represents the ingredient, then the arrays are stored in a vector that holds all of the ingredients for one recipe. When an ingredient is added, the user first types in the 3 necessary values into empty boxes/combo box dropdown and then clicks "Add Ingredient". Here is my code for when the button is clicked: public void actionPerformed(ActionEvent event) { Object source = event.getSource(); if (source == addIngredientButton) { //make sure 3 ingredient fields are not blank if (!ingredientNameField.getText().equals("") && !ingredientAmountField.getText().equals("") && !ingredientUnitDropdown.getSelectedItem().equals("")) { tempIngredientsArray[0] = ingredientNameField.getText(); //set ingredient name to first position in array tempIngredientsArray[1] = ingredientAmountField.getText(); //set ingredient amount to second position in array tempIngredientsArray[2] = (String) ingredientUnitDropdown.getSelectedItem(); //set ingredient unit to third position in array int ingredientsVectorSize = tempRecipe.ingredientsVector.size(); tempRecipe.ingredientsVector.add(this.tempIngredientsArray); //why would it matter if this and previous statement were switched for (int k = 0; k < ingredientsVectorSize + 1; k++ ) { liveIngredientsListArea.append("\n" + tempRecipe.makeIngredientSentence(k)); System.out.println(Arrays.toString(tempIngredientsArray)); //shows that tempIngredientsArray gets messed up each time } } Now here's the really strange problem I've been having. The first time the user adds an ingredient everything works fine. The second time, the String[] for that ingredient seems to be duplicated, and the third time it's triplicated. Aka the first time it runs the System.out.println might return "{Sugar, 3, Cups}" and the second time it will return "{Flour, 2, Oz.} {Flour, 2, Oz.}" etc. What's causing this strange problem? All help greatly appreciated, let me know if you need any clarification.

    Read the article

  • Startup in Windows 7

    - by iira
    Hi, I am trying to add my program run in Windows 7 startup, but it does'nt works. My program has embedded uac manifest. My current way is by adding String Value at HKCU..\Run I found a manual solution for Vista from http://social.technet.microsoft.com/Forums/en/w7itprosecurity/thread/81c3c1f2-0169-493a-8f87-d300ea708ecf 1. Click Start, right click on Computer and choose “Manage”. 2. Click “Task Scheduler” on the left panel. 3. Click “Create Task” on the right panel. 4. Type a name for the task. 5. Check “Run with highest privileges”. 6. Click Actions tab. 7. Click “New…”. 8. Browse to the program in the “Program/script” box. Click OK. 9. On desktop, right click, choose New and click “Shortcut”. 10. In the box type: schtasks.exe /run /tn TaskName where TaskName is the name of task you put in on the basics tab and click next. 11. Type a name for the shortcut and click Finish. Additionally, you need to run the saved scheduled task shortcut to run the program instead of running the application shortcut to ignore the IAC prompt. When startup the system will run the program via the original shortcut. Therefore you need to change the location to run the saved task. Please: 1. Open Regedit. 2. Find the entry of the startup item in Registry. It will be stored in one of the following branches. HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Run HKEY_USERS\.DEFAULT\Software\Microsoft\Windows\CurrentVersion\Run HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Run 3. Double-click on the correct key, change the path to the saved scheduled task you created. Is there any free code to add item with privileges option in scheduled task? I havent found the free one in torry.net Thanks a lot.

    Read the article

  • How to solve memory leaks in the libxml parser in Objective-C where the list is returned?

    - by Madan Mohan
    Hi Guys, I got leaks in Lib Xml Parser while retrieving the data from the net, Here in the below code, I have allocated the list - (void)getCustomersList { // make an operation so we can push it into the queue SEL method = @selector(parseForData); NSInvocationOperation *op = [[NSInvocationOperation alloc] initWithTarget:self selector:method object:nil]; customersTempList = [[NSMutableArray alloc] initWithCapacity:20];// allocated list [self.retrieverQueue addOperation:op]; [op release]; } // return each recode // in parser .m class one of the condition in endElement where it shows a leak. else if(0 == strncmp((const char *)localname, kCustomerElement, kCustomerElementLength)) { [customersTempList addObject:customer]; printf("\n no of objects in temp list:%d", [customersTempList count]); if ([customersTempList count] == 20) { NSMutableArray* argsList = [customersTempList copy];//////////////////////here it is showing leak. printf("\n Calling reload data with %d new objects", [argsList count]); SEL selector = @selector(parser:addCustomerObject:); NSMethodSignature *sig = [(id)self.delegate methodSignatureForSelector:selector]; if(nil != sig && [self.delegate respondsToSelector:selector]) { NSInvocation *invocation = [NSInvocation invocationWithMethodSignature:sig]; [invocation retainArguments]; [invocation setTarget:self.delegate]; [invocation setSelector:selector]; [invocation setArgument:&self atIndex:2]; [invocation setArgument:&argsList atIndex:3]; [invocation performSelectorOnMainThread:@selector(invoke) withObject:NULL waitUntilDone:NO]; } [customersTempList removeAllObjects]; } } // returned the list after all the records are stored in the list else if(0 == strncmp((const char *)localname, kCustomersElement, kCustomersElementLength)) { printf("\n Calling reload data with %d new objects", [customersTempList count]); NSMutableArray* argsList = [customersTempList copy]; printf("\n Calling reload data with %d new objects", [argsList count]); SEL selector = @selector(parser:addCustomerObject:); NSMethodSignature *sig = [(id)self.delegate methodSignatureForSelector:selector]; if(nil != sig && [self.delegate respondsToSelector:selector]) { NSInvocation *invocation = [NSInvocation invocationWithMethodSignature:sig]; [invocation retainArguments]; [invocation setTarget:self.delegate]; [invocation setSelector:selector]; [invocation setArgument:&self atIndex:2]; [invocation setArgument:&argsList atIndex:3]; [invocation performSelectorOnMainThread:@selector(invoke) withObject:NULL waitUntilDone:NO]; } [customersTempList removeAllObjects]; } } please help me out of this, Thanks, Madan.

    Read the article

  • Seeking STL-aware c++filt

    - by Don Wakefield
    In my development environment, I'm compiling a code base using GNU C++ 3.4.6. Code is under development, and unfortunately crashes now and then. It's nice to be able to run the traceback through a demangler, and I use c++filt 3.4. The problem comes when functions have a number of STL parameters. Consider My_callback::operator()( Status&, std::set<std::string> const&, std::vector<My_parameter*> const&, My_attribute_set const&, std::vector<My_parameter_base*> const&, std::vector<My_parameter> const&, std::set<std::string> const& ) { // ... } When this function is in the traceback, the mangled output on my platform is: (_ZN30My_callbackclER11StatusRKSt3setISsSt4lessISsESaISsEERKSt6vectorIP13My_parameterSaISB_EERK17My_attribute_setRKS9_IP18My_parameter_baseSaISK_EERKS9_ISA_SaISA_EES8_+0x76a) [0x13ffdaa] c++filt kindly demangles it to (My_callback::operator()(Status&, std::set<std::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::less<std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::basic_string<char, std::char_traits<char>, std::allocator<char> > > > const&, std::vector<My_parameter*, std::allocator<My_parameter*> > const&, My_attribute_set const&, std::vector<My_parameter_base*, std::allocator<My_parameter_base*> > const&, std::vector<My_parameter, std::allocator<My_parameter> > const&, std::set<std::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::less<std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::basic_string<char, std::char_traits<char>, std::allocator<char> > > > const&)+0x76a) [0x13ffdaa] This is the same problem as compiler errors encountered when using templates. However, the STL is a fairly regular and recognizable package of templates. So what I'm hoping is that someone out there has created an enhanced version of c++filt which would dump something closer to the original function signature. Any hints?
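
    If no ready-made STL-aware c++filt turns up, one low-tech option is to post-process the demangled text yourself. The sketch below demangles with GCC's abi::__cxa_demangle and then collapses the expanded std::basic_string spelling back to std::string; the replacement table is illustrative and would need further entries (allocators, std::less, map value types, ...) to cover everything a real traceback produces.

        #include <cxxabi.h>
        #include <cstdlib>
        #include <iostream>
        #include <string>

        static void replaceAll(std::string &s, const std::string &from, const std::string &to)
        {
            for (std::string::size_type pos = s.find(from);
                 pos != std::string::npos;
                 pos = s.find(from, pos + to.size()))
                s.replace(pos, from.size(), to);
        }

        std::string demangleStlAware(const char *mangled)
        {
            int status = 0;
            char *raw = abi::__cxa_demangle(mangled, 0, 0, &status);   // malloc'd buffer
            std::string out = (status == 0 && raw != 0) ? raw : mangled;
            std::free(raw);

            // Fold the noisy default template arguments back into the familiar name.
            replaceAll(out,
                       "std::basic_string<char, std::char_traits<char>, std::allocator<char> >",
                       "std::string");
            return out;
        }

        int main(int argc, char **argv)
        {
            for (int i = 1; i < argc; ++i)                 // pass mangled frames as arguments
                std::cout << demangleStlAware(argv[i]) << "\n";
        }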

    Read the article

  • What benefits are there to storing Javascript in external files vs in the <head>?

    - by RenderIn
    I have an Ajax-enabled CRUD application. If I display a record from my database it shows that record's values for each column, including its primary key. For the Ajax actions tied to buttons on the page I am able to set up their calls by printing the ID directly into their onclick functions when rendering the HTML server-side. For example, to save changes to the record I may have a button as follows, with '123' being the primary key of the record. <button type="button" onclick="saveRecord('123')">Save</button> Sometimes I have pages with Javascript generating HTML and Javascript. In some of these cases the primary key is not naturally available at that place in the code. In these cases I took a shortcut and generate buttons like so, taking the primary key from a place it happens to be displayed on screen for visual consumption: ... <td>Primary Key: </td> <td><span id="PRIM_KEY">123</span></td> ... <button type="button" onclick="saveRecord(jQuery('#PRIM_KEY').text())">DoSomething</button> This definitely works, but it seems wrong to drive database queries based on the value of text whose purpose was user consumption rather than method consumption. I could solve this by adding a series of additional parameters to various methods to usher the primary key along until it is eventually needed, but that also seems clunky. The most natural way for me to solve this problem would be to simply situate all the Javascript which currently lives in external files, in the <head> of the page. In that way I could generate custom Javascript methods without having to pass around as many parameters. Other than readability, I'm struggling to see what benefit there is to storing Javascript externally. It seems like it makes the already weak marriage between HTML/DOM and Javascript all the more distant. I've seen some people suggest that I leave the Javascript external, but do set various "custom" variables on the page itself, for example, in PHP: <script type="text/javascript"> var primaryKey = <?php print $primaryKey; ?>; </script> <script type="text/javascript" src="my-external-js-file-depending-on-primaryKey-being-set.js"></script> How is this any better than just putting all the Javascript on the page in the first place? There HTML and Javascript are still strongly dependent on each other.

    Read the article

  • Actionscript base class in Flex AIR app

    - by Alan
    I'm trying to build a Flex AIR app using Flex Builder 3, which I'm just getting started with. In Flash CS4, there's a text field in the authoring environment where you can specify a class that will become the "base" class - your class inherits from Sprite and then "becomes" the Stage at runtime. Is there a a way to do the same thing with Flex/AIR? Failing that, can anyone explain how to create and use an external class? Originally I had this in TestApp.mxml: <?xml version="1.0" encoding="utf-8"?> <mx:WindowedApplication xmlns:mx="http://www.adobe.com/2006/mxml" layout="absolute"> <mx:Script source="TestApp.as"/> </mx:WindowedApplication> And this in TestApp.as: package { public class TestApp { public function TestApp() { trace('Hello World'); } } } That gives the error "packages cannot be nested", so I tried taking out the package statement: public class TestApp { public function TestApp() { trace('Hello World'); } } That gives an error "classes cannot be nested", so I finally gave up and tried to take out the class altogether, figuring I'd try to start with a bunch of functions instead: function init() { trace('Hello World'); } But that gives the error "A file found in a source-path must have an externally visible definition. If a definition in the file is meant to be externally visible, please put the definition in a package". I can't win! When I put my class in a package, it says I can't do that because it would be nested. When I don't, it says it needs to be in a package so it can be seen. Does anyone know how to fix this? If I can't do the custom-class-as-base-class thing, is there a way I could just have it like: <?xml version="1.0" encoding="utf-8"?> <mx:WindowedApplication xmlns:mx="http://www.adobe.com/2006/mxml" layout="absolute"> <mx:Script source="TestApp.as"/> <mx:Script> var app = new TestApp(); </mx:Script> </mx:WindowedApplication> At the moment I can't import the class definition at all, so even that won't work. Thanks in advance!

    Read the article

  • Web Services Primer for a WinForms Developer?

    - by Unicorns
    I've been writing client/server applications with Winforms for about six years now, but I have yet to venture into the web space (neither ASP.NET nor web services). Given the direction that the job market has been heading for some time and the fact that I have a basic curiosity, I'd like to get involved with writing web services, but I don't know where to start. I've read about various options (XML/SOAP vs. JSON, REST vs...well, actually I don't know what it's called, etc.), but I'm not sure what sort of criteria are in play when making the determination to use one or the other. Obviously, I'd like to leverage the tools that I have (Visual Studio, the .NET framework, etc.) without hamstringing myself into only targeting a particular audience (i.e. writing the service in such a way as to make it difficult to consume from a Windows Mobile/Android/iPhone client, for example). For the record, my plan--for now--is to use WCF for my web service development, but I'm open to using another .NET approach if that's advisable. I realize that this question is pretty open-ended so it may get closed, but here are some things I'm wondering: What are some things to consider when choosing the type of web service (REST, etc.) I intend to write? Is it possible (and, if so, feasible) to move from one approach to another? Can web services be written in an event-driven way? As I said I'm a Winforms developer, so I'm used to objects raising events for me to react to. For instance, if I have two clients connected to my service, is there a way for me to "push" information to one of them as a result of an action by the other? If this is possible, is this advisable or am I just not thinking about it correctly? What authentication mechanisms seem to work best for public-facing services? What about if I plan to have different types of OS'es and clients connecting to the service? Is there a generally accepted platform-agnostic approach? In the line of authentication, is this something that I should be doing myself (authenticating an managing sessions, etc.) or is this something should be handled at the framework level and I just define exactly how it should work? If that's the case, how do I tell who the requester has authenticated themselves as? I started writing an authentication mechanism (simple username/password combinations stored in the database and a corresponding session table with a GUID key) within my service and just requiring that key to be passed with every operation (other than logging in, of course), but I want to make sure that I'm not reinventing the wheel here. However, I also don't want to clutter up the server with a bunch of machine user accounts just to use Basic authentication. I'm also under the impression that Digest (and of course Windows) authentication requires a machine (or AD) user account.

    Read the article

  • linked list elements gone?

    - by Hristo
    I create a linked list dynamically and initialize the first node in main(), and I add to the list every time I spawn a worker process. Before the worker process exits, I print the list. Also, I print the list inside my sigchld signal handler. in main(): head = NULL; tail = NULL; // linked list to keep track of worker process dll_node_t *node; node = (dll_node_t *) malloc(sizeof(dll_node_t)); // initialize list, allocate memory append_node(node); node->pid = mainPID; // the first node is the MAIN process node->type = MAIN; in a fork()'d process: // add to list dll_node_t *node; node = (dll_node_t *) malloc(sizeof(dll_node_t)); append_node(node); node->pid = mmapFileWorkerStats->childPID; node->workerFileName = mmapFileWorkerStats->workerFileName; node->type = WORK; functions: void append_node(dll_node_t *nodeToAppend) { /* * append param node to end of list */ // if the list is empty if (head == NULL) { // create the first/head node head = nodeToAppend; nodeToAppend->prev = NULL; } else { tail->next = nodeToAppend; nodeToAppend->prev = tail; } // fix the tail to point to the new node tail = nodeToAppend; nodeToAppend->next = NULL; } finally... the signal handler: void chld_signalHandler() { dll_node_t *temp1 = head; while (temp1 != NULL) { printf("2. node's pid: %d\n", temp1->pid); temp1 = temp1->next; } int termChildPID = waitpid(-1, NULL, WNOHANG); dll_node_t *temp = head; while (temp != NULL) { if (temp->pid == termChildPID) { printf("found process: %d\n", temp->pid); } temp = temp->next; } return; } Is it true that upon the worker process exiting, the SIGCHLD signal handler is triggered? If so, that would mean that after I print the tree before exiting, the next thing I do is in the signal handler which is print the tree... which would mean i would print the tree twice? But the tree isn't the same. The node I add in the worker process doesn't exist when I print in the signal handler or at the very end of main(). Any idea why? Thanks, Hristo
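
    On the first question: yes, the parent receives SIGCHLD when a worker terminates. On the missing node, a likely explanation, assuming the append really runs inside the fork()'d worker: fork() gives the child a copy of the parent's memory, so a node appended in the child only changes the child's copy of the list, and the list the parent's signal handler (and the end of main()) walks is untouched. The snippet below only demonstrates that copy-on-fork behaviour with a plain global; it is an illustration, not the poster's code, and error handling is omitted.

        #include <cstdio>
        #include <sys/types.h>
        #include <sys/wait.h>
        #include <unistd.h>

        int counter = 0;   // after fork(), each process has its own copy

        int main()
        {
            pid_t pid = fork();
            if (pid == 0)                    // child: the "worker" process
            {
                counter = 42;                // modifies only the child's copy
                std::printf("child sees counter = %d\n", counter);
                _exit(0);
            }

            waitpid(pid, 0, 0);              // parent waits for the worker
            std::printf("parent still sees counter = %d\n", counter);   // prints 0
            return 0;
        }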

    Read the article

  • QValidator for hex input

    - by Evan Teran
    I have a Qt widget which should only accept a hex string as input. It is very simple to restrict the input characters to [0-9A-Fa-f], but I would like to have it display with a delimiter between "bytes" so for example if the delimiter is a space, and the user types 0011223344 I would like the line edit to display 00 11 22 33 44 Now if the user presses the backspace key 3 times, then I want it to display 00 11 22 3. I almost have what i want, so far there is only one subtle bug involving using the delete key to remove a delimiter. Does anyone have a better way to implement this validator? Here's my code so far: class HexStringValidator : public QValidator { public: HexStringValidator(QObject * parent) : QValidator(parent) {} public: virtual void fixup(QString &input) const { QString temp; int index = 0; // every 2 digits insert a space if they didn't explicitly type one Q_FOREACH(QChar ch, input) { if(std::isxdigit(ch.toAscii())) { if(index != 0 && (index & 1) == 0) { temp += ' '; } temp += ch.toUpper(); ++index; } } input = temp; } virtual State validate(QString &input, int &pos) const { if(!input.isEmpty()) { // TODO: can we detect if the char which was JUST deleted // (if any was deleted) was a space? and special case this? // as to not have the bug in this case? const int char_pos = pos - input.left(pos).count(' '); int chars = 0; fixup(input); pos = 0; while(chars != char_pos) { if(input[pos] != ' ') { ++chars; } ++pos; } // favor the right side of a space if(input[pos] == ' ') { ++pos; } } return QValidator::Acceptable; } }; For now this code is functional enough, but I'd love to have it work 100% as expected. Obviously the ideal would be the just separate the display of the hex string from the actual characters stored in the QLineEdit's internal buffer but I have no idea where to start with that and I imagine is a non-trivial undertaking. In essence, I would like to have a Validator which conforms to this regex: "[0-9A-Fa-f]( [0-9A-Fa-f])*" but I don't want the user to ever have to type a space as delimiter. Likewise, when editing what they types, the spaces should be managed implicitly.
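
    If maintaining the cursor arithmetic by hand keeps producing edge cases, two stock Qt options are worth weighing, assuming the Qt 4-era API the snippet appears to use: a QRegExpValidator built from roughly the regex already given, or an input mask that pre-draws the delimiters so backspace and delete never have to manage them. Neither inserts the spaces automatically while typing, so they trade that feature for far simpler editing behaviour. A minimal sketch of both (the {1,2} groups allow a partially typed byte):

        #include <QApplication>
        #include <QLineEdit>
        #include <QRegExp>
        #include <QRegExpValidator>

        int main(int argc, char *argv[])
        {
            QApplication app(argc, argv);

            QLineEdit edit;

            // Option 1: a regular-expression validator enforcing space-separated
            // hex bytes; partially typed input validates as Intermediate.
            edit.setValidator(new QRegExpValidator(
                QRegExp("[0-9A-Fa-f]{1,2}( [0-9A-Fa-f]{1,2})*"), &edit));

            // Option 2 (instead of the validator): an input mask with fixed
            // delimiters, e.g. for up to six bytes:
            // edit.setInputMask("HH hh hh hh hh hh");

            edit.show();
            return app.exec();
        }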

    Read the article

  • Calling function dynamically by using Reflection

    - by Alaa'
    Hi, I'm generating dll files contain code like the following example : // using System; using System.Collections; using System.Xml; using System.IO; using System.Windows.Forms; namespace CSharpScripter { public class TestClass : CSharpScripter.Command { private int i=1; private int j=2; public int k=3; public TestClass6() { } public void display (int i,int j,int k) { string a = null; a= k.ToString(); string a1 = null; a1= this.i.ToString(); string a2 = null; a2= j.ToString(); MessageBox.Show(" working! "+ "k="+ a +" i="+a1 + " j="+ a2); } public void setValues(int i,int j,int k1) { this.i=i; this.j=j; k=k1; } // I'm compiling the pervious code, then I execute an object from the dll file. So, in the second part of the code ( Executing part), I'm just calling the execute function, It contains a call for a function, I named here: display. For that I need to set values in the declaration by a setValue function. I want it to been called dynamically (setValues ), which has declaration like : public void(Parameter[] parameters) { //some code block here } For this situation I used Reflection. // Type objectType = testClass.GetType(); MethodInfo members = objectType.GetMethod("setValues"); ParameterInfo[] parameters = members.GetParameters(); For) int t = 0; t < parameters.Length; t++) { If (parameters[t]. ParameterType == typeof()) { object value = this.textBox2.Text; parameters.SetValue)Convert.ChangeType(value,parameters[t].ParameterType), t); } } // But it throws an casting error" Object cannot be stored in an array of this type." at last line, in first parameter for (setValue) methode. What is the problem here? And How I can call the method Dynamically after the previous code, by( Invoke) or is there a better way? Thanks.

    Read the article

  • Magento: add product twice to cart, with different attributes!

    - by Peter
    Hello all, i have been working with this for a whole day but i cannot find any solution: I have a product (lenses), which has identical attributes, but user can choose one attribute set for one eye and another attribute set for another. On the frontend i got it ok, see it here: http://connecta.si/clarus/index.php/featured/acuvue-oasys-for-astigmatism.html So the user can select attributes for left or right eye, but it is the same product. I build a function, wich should take a product in a cart (before save), add other set of attributes, so there should be two products in the cart. What happens is there are two products, but with the same set of attributes??? Here is the snippet of the function: $req = Mage::app()-getRequest(); $request[’qty’] = 1; $request[’product’] = 15; $request[’uenc’] = $req-get(’uenc’); $request[’options’][1] = 1; $request[’options’][3] = 5; $request[’options’][2] = 3; $reqo = new Varien_Object($request); $newitem = $quote-addProduct($founditem-getProduct(), $reqo); //add another one ------------------------------------------ $request[’qty’] = 1; $request[’product’] = 15; $request[’uenc’] = $req-get(’uenc’); $request[’options’][1] = 2; $request[’options’][3] = 6; $request[’options’][2] = 4; $reqo = new Varien_Object($request); $newitem = $quote-addProduct($founditem-getProduct(), $reqo); Or another test, with some other functions (again, product added, with 2 quantity , but same attributes...): $req = Mage::app()-getRequest(); $request[’qty’] = 1; $request[’product’] = 15; $request[’uenc’] = $req-get(’uenc’); $request[’options’][1] = 2; $request[’options’][3] = 6; $request[’options’][2] = 4; $product = $founditem-getProduct(); $cart = Mage::getSingleton(’checkout/cart’); //delete all first… $cart-getItems()-clear()-save(); $reqo = new Varien_Object($request); $cart-addProduct($founditem-getProduct(), $reqo); $cart-getItems()-save(); $request[’options’][1] = 1; $request[’options’][3] = 5; $request[’options’][2] = 3; $reqo = new Varien_Object($request); $cart-addProduct($founditem-getProduct(), $reqo); $cart-getItems()-save(); i really dont know what more to do, please any advice, this is my first module in magento… thank you, Peter

    Read the article

  • How can I have a Makefile automatically rebuild source files that include a modified header file? (In C/C++)

    - by Nicholas Flynt
    I have the following makefile that I use to build a program (a kernel, actually) that I'm working on. Its from scratch and I'm learning about the process, so its not perfect, but I think its powerful enough at this point for my level of experience writing makefiles. AS = nasm CC = gcc LD = ld TARGET = core BUILD = build SOURCES = source INCLUDE = include ASM = assembly VPATH = $(SOURCES) CFLAGS = -Wall -O -fstrength-reduce -fomit-frame-pointer -finline-functions \ -nostdinc -fno-builtin -I $(INCLUDE) ASFLAGS = -f elf #CFILES = core.c consoleio.c system.c CFILES = $(foreach dir,$(SOURCES),$(notdir $(wildcard $(dir)/*.c))) SFILES = assembly/start.asm SOBJS = $(SFILES:.asm=.o) COBJS = $(CFILES:.c=.o) OBJS = $(SOBJS) $(COBJS) build : $(TARGET).img $(TARGET).img : $(TARGET).elf c:/python26/python.exe concat.py stage1 stage2 pad.bin core.elf floppy.img $(TARGET).elf : $(OBJS) $(LD) -T link.ld -o $@ $^ $(SOBJS) : $(SFILES) $(AS) $(ASFLAGS) $< -o $@ %.o: %.c @echo Compiling $<... $(CC) $(CFLAGS) -c -o $@ $< #Clean Script - Should clear out all .o files everywhere and all that. clean: -del *.img -del *.o -del assembly\*.o -del core.elf My main issue with this makefile is that when I modify a header file that one or more C files include, the C files aren't rebuilt. I can fix this quite easily by having all of my header files be dependencies for all of my C files, but that would effectively cause a complete rebuild of the project any time I changed/added a header file, which would not be very graceful. What I want is for only the C files that include the header file I change to be rebuilt, and for the entire project to be linked again. I can do the linking by causing all header files to be dependencies of the target, but I cannot figure out how to make the C files be invalidated when their included header files are newer. I've heard that GCC has some commands to make this possible (so the makefile can somehow figure out which files need to be rebuilt) but I can't for the life of me find an actual implementation example to look at. Can someone post a solution that will enable this behavior in a makefile? EDIT: I should clarify, I'm familiar with the concept of putting the individual targets in and having each target.o require the header files. That requires me to be editing the makefile every time I include a header file somewhere, which is a bit of a pain. I'm looking for a solution that can derive the header file dependencies on its own, which I'm fairly certain I've seen in other projects.

    Read the article
