Search Results

Search found 21556 results on 863 pages for 'control structures'.


  • wsdl return an array of complex types

    - by Anand
    hi, I have defined a web service that will return the data from my mysql data base. I have written the web service in php. Now I have defined a complex type as follows:

        $server->wsdl->addComplexType(
            'Category',
            'complexType',
            'struct',
            'all',
            '',
            array(
                'category_parent_id' => array('name' => 'category_parent_id', 'type' => 'xsd:int'),
                'category_child_id'  => array('name' => 'category_child_id',  'type' => 'xsd:int'),
                'category_list'      => array('name' => 'category_list',      'type' => 'xsd:int')
            )
        );

    The above complex type is a row in a table in my database. Now my function must send an array of these rows, so how do I achieve that? My code is as follows:

        require_once('./nusoap/nusoap.php');
        $server = new soap_server;
        $server->configureWSDL('productwsdl', 'urn:productwsdl');

        // Register the data structures used by the service
        $server->wsdl->addComplexType(
            'Category',
            'complexType',
            'struct',
            'all',
            '',
            array(
                'category_parent_id' => array('name' => 'category_parent_id', 'type' => 'xsd:int'),
                'category_child_id'  => array('name' => 'category_child_id',  'type' => 'xsd:int'),
                'category_list'      => array('name' => 'category_list',      'type' => 'xsd:int')
            )
        );

        $server->register('getaproduct',                // method name
            array(),                                    // input parameters
            //array('return' => array('result' => 'tns:Category')), // output parameters
            array('return' => 'tns:Category'),          // output parameters
            'urn:productwsdl',                          // namespace
            'urn:productwsdl#getaproduct',              // soapaction
            'rpc',                                      // style
            'encoded',                                  // use
            'Get the product categories'                // documentation
        );

        function getaproduct()
        {
            $conn = mysql_connect('localhost', 'root', '');
            mysql_select_db('sssl', $conn);
            $sql = "SELECT * FROM jos_vm_category_xref";
            $q = mysql_query($sql);
            while ($r = mysql_fetch_array($q)) {
                $items[] = array(
                    'category_parent_id' => $r['category_parent_id'],
                    'category_child_id'  => $r['category_child_id'],
                    'category_list'      => $r['category_list']
                );
            }
            return $items;
        }

        // Use the request to (try to) invoke the service
        $HTTP_RAW_POST_DATA = isset($HTTP_RAW_POST_DATA) ? $HTTP_RAW_POST_DATA : '';
        $server->service($HTTP_RAW_POST_DATA);
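
    One common NuSOAP pattern for this (a sketch only, not verified against this exact service; 'CategoryArray' is a name invented here) is to declare a second complex type that is a SOAP array of tns:Category and register the method to return that type:

        // Sketch: declare an array-of-Category type and return it from getaproduct.
        $server->wsdl->addComplexType(
            'CategoryArray',
            'complexType',
            'array',
            '',
            'SOAP-ENC:Array',
            array(),
            array(array('ref' => 'SOAP-ENC:arrayType', 'wsdl:arrayType' => 'tns:Category[]')),
            'tns:Category'
        );

        $server->register('getaproduct',
            array(),                                    // input parameters
            array('return' => 'tns:CategoryArray'),     // return the array type instead
            'urn:productwsdl',
            'urn:productwsdl#getaproduct',
            'rpc',
            'encoded',
            'Get the product categories'
        );

    With that in place, the existing getaproduct() can keep returning the $items array of associative arrays unchanged.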

  • Entity Framework - Merging 2 physical tables into one "virtual" table problems...

    - by Keith Barrows
    I have been reading up on porting the ASP.NET Membership Provider to .NET 3.5 using LINQ & Entities. However, the DB model that every single sample shows is the newer model, while I've inherited a rather old one. Differences:

    - The User table is split into a pair of User & Membership tables.
    - All of the tables in the DB are prepended with aspnet_
    - I have Lowered versions of some columns (UserName, Email, etc.)

    To work with this I have copied the properties from the Membership table into the User table (in the DB this is a 1-to-1 relationship, not a 1-to-0..1), and renamed aspnet_Applications to Application, aspnet_Profiles to Profile, aspnet_Users to User and aspnet_Roles to Role. (See image) Link to full size image of model. Now, I am running into one of two problems when I try to compile. Using the model in the image I get this error:

        Problem in Mapping Fragment starting at line 464: EntitySets 'UserSet' and 'aspnet_Membership' are both mapped to table 'aspnet_Membership'. Their Primary Keys may collide.

    If I delete the aspnet_Membership table from my model (to handle the above error) I then get:

        Problem in Mapping Fragment starting at line 384: Column aspnet_Membership.ApplicationId in table aspnet_Membership must be mapped: It has no default value and is not nullable.

    My ability to hand edit the backing stores is not the best and I don't want to just hack something in that may break other things. I am looking for suggestions, best practices, etc. to handle this. Note: moving the data tables themselves is not an option, as I cannot replace all the logic in the existing apps; I am building this EF provider for a new app, and over the next 6 months the old app(s) will migrate bit-by-bit to the new structures. Note: I added a link just under the image to the full size image for better viewing.

  • Variant datatype library for C

    - by Joey Adams
    Is there a decent open-source C library for storing and manipulating dynamically-typed variables (a.k.a. variants)? I'm primarily interested in atomic values (int8, int16, int32, uint, strings, blobs, etc.), while JSON-style arrays and objects as well as custom objects would also be nice. A major case where such a library would be useful is in working with SQL databases. The most obvious feature of such a library would be a single type for all supported values, e.g.:

        struct Variant {
            enum Type type;
            union {
                int8_t  int8_;
                int16_t int16_;
                // ...
            };
        };

    Other features might include converting Variant objects to/from C structures (using a binding table), converting values to/from strings, and integration with an existing database library such as SQLite. Note: I do not believe this question is a duplicate of http://stackoverflow.com/questions/649649/any-library-for-generic-datatypes-in-c , which refers to "queues, trees, maps, lists". What I'm talking about focuses more on making working with SQL databases roughly as smooth as working with them in interpreted languages.

  • Database warehouse design: fact tables and dimension tables

    - by morpheous
    I am building a poor man's data warehouse using a RDBMS. I have identified the key 'attributes' to be recorded as:

    - sex (true/false)
    - demographic classification (A, B, C etc)
    - place of birth
    - date of birth
    - weight (recorded daily): the fact that is being recorded

    My requirements are to be able to run 'OLAP' queries that allow me to 'slice and dice', 'drill up/down' the data and, generally, be able to view the data from different perspectives. After reading up on this topic area, the general consensus seems to be that this is best implemented using dimension tables rather than normalized tables. Assuming that this assertion is true (i.e. the solution is best implemented using fact and dimension tables), I would like some help with the design of these tables. 'Natural' (or obvious) dimensions are:

    - Date dimension
    - Geographical location

    which have hierarchical attributes. However, I am struggling with how to model the following fields:

    - sex (true/false)
    - demographic classification (A, B, C etc)

    The reason I am struggling with these fields is that:

    - They have no obvious hierarchical attributes which will aid aggregation (AFAIA), which suggests they should be in a fact table.
    - They are mostly static or very rarely change, which suggests they should be in a dimension table.

    Maybe the heuristic I am using above is too crude? I will give some examples of the type of analysis I would like to carry out on the data warehouse - hopefully that will clarify things further. I would like to aggregate and analyze the data by sex and demographic classification, e.g. answer questions like:

    - How do male and female weights compare across different demographic classifications?
    - Which demographic classification (male AND female) shows the most increase in weight this quarter?

    Can anyone clarify whether sex and demographic classification are part of the fact table, or whether they are (as I suspect) dimension tables? Also, assuming they are dimension tables, could someone elaborate on the table structures (i.e. the fields)? The 'obvious' schema:

        CREATE TABLE sex_type (is_male int);
        CREATE TABLE demographic_category (id int, name varchar(4));

    may not be the correct one.
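
    Purely as an illustration of the dimension-table option (a sketch; every name below is invented, and the date/geography dimensions are only stubbed as keys), the low-cardinality attributes would typically become small dimension tables referenced from the fact row:

        -- Hypothetical sketch: sex and demographic classification as tiny dimensions,
        -- with the daily weight measurement as the fact.
        CREATE TABLE dim_sex (
            sex_key         INT PRIMARY KEY,
            sex_name        VARCHAR(10)       -- 'Male' / 'Female'
        );

        CREATE TABLE dim_demographic (
            demographic_key INT PRIMARY KEY,
            classification  VARCHAR(4)        -- 'A', 'B', 'C', ...
        );

        CREATE TABLE fact_weight (
            date_key        INT,              -- references a date dimension
            location_key    INT,              -- references a geography dimension
            sex_key         INT REFERENCES dim_sex (sex_key),
            demographic_key INT REFERENCES dim_demographic (demographic_key),
            weight_kg       DECIMAL(6,2)      -- the measured fact
        );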

  • How to explain to someone that a data structure should not draw itself, explaining separation of concerns

    - by leeand00
    I'm trying to explain to another programmer why a UI component should not also be a data structure. For instance, say that you get a data structure that contains a record-set from the "database", and you wish to display that record-set in a UI component within your application. According to this programmer (who will remain nameless, he's young and I'm teaching him...), we should subclass the data structure into a class that will draw the UI component within our application! And thus, according to this logic, the record-set should manage the drawing of the UI. *Head Desk* I know that asking a record-set to draw itself is wrong because, if you wish to render the same data structure on more than one type of component in your UI, you are going to have a real mess on your hands: you'll need to extend yet another class for each and every UI component that you render from the base class of your record-set. I am well aware of the "cleanliness" of the MVC pattern (and by that what I really mean is you don't confuse your data (the Model) with your UI (the View) or the actions that take place on the data (the Controller, more or less... okay, not really, the API should really handle that... and the Controller should just make as few calls to it as it can, telling it which view to render)). But it's certainly a lot cleaner than using data structures to render UI components! Is there any other advice I could send his way other than the example above? I understand that when you first learn OOP you go through "a stage" where you just want to extend everything, followed by a stage when you think that Design Patterns are the solution to every single problem... which isn't entirely correct either... thanks Jeff. Is there a way that I can gently nudge this kid in the right direction? Do you have any more examples that might help explain my point to him?

  • Big-O of PHP functions?

    - by Kendall Hopkins
    After using PHP for a while now, I've noticed that not all PHP built-in functions are as fast as expected. Consider these two possible implementations of a function that finds if a number is prime using a cached array of primes.

        //very slow for large $prime_array
        $prime_array = array( 2, 3, 5, 7, 11, 13, .... 104729, ... );
        $result_array = array();
        foreach ( $array_of_number as $number ) {
            $result_array[$number] = in_array( $number, $large_prime_array );
        }

        //still decent performance for large $prime_array
        $prime_array = array( 2 => NULL, 3 => NULL, 5 => NULL, 7 => NULL,
                              11 => NULL, 13 => NULL, .... 104729 => NULL, ... );
        foreach ( $array_of_number as $number ) {
            $result_array[$number] = array_key_exists( $number, $large_prime_array );
        }

    This is because in_array is implemented with a linear search, O(n), which slows down linearly as $prime_array grows, whereas array_key_exists is implemented with a hash lookup, O(1), which will not slow down unless the hash table gets extremely populated (in which case it's only O(log n)). So far I've had to discover the big-Os via trial and error, and occasionally by looking at the source code. Now for the question... I was wondering if there was a list of the theoretical (or practical) big-O times for all* the PHP built-in functions. *or at least the interesting ones. For example, I find it very hard to predict the big-O of the functions listed below, because the possible implementation depends on unknown core data structures of PHP: array_merge, array_merge_recursive, array_reverse, array_intersect, array_combine, str_replace (with array inputs), etc.
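
    As a rough way to check this kind of thing empirically (a throwaway sketch with arbitrary sizes, and a plain integer range standing in for the prime list), timing the two lookups side by side shows in_array growing with the array size while array_key_exists stays roughly flat:

        // Illustrative timing sketch only: compare lookup cost as the array grows.
        $sizes = array(1000, 10000, 100000);
        foreach ($sizes as $n) {
            $list = range(2, $n);                    // stand-in for the list of primes
            $map  = array_fill_keys($list, NULL);    // same values used as hash keys

            $t = microtime(true);
            for ($i = 0; $i < 1000; $i++) { in_array($n - 1, $list); }
            $linear = microtime(true) - $t;

            $t = microtime(true);
            for ($i = 0; $i < 1000; $i++) { array_key_exists($n - 1, $map); }
            $hashed = microtime(true) - $t;

            printf("n=%6d  in_array: %.4fs  array_key_exists: %.4fs\n", $n, $linear, $hashed);
        }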

  • Daylight Savings Handling in DateDiff() in MS Access?

    - by PowerUser
    I am fully aware of DateDiff()'s inability to handle daylight savings issues. Since I often use it to compare the number of hours or days between 2 datetimes several months apart, I need to write up a solution to handle DST. This is what I came up with: a function that first subtracts 60 minutes from a datetime value if it falls within the date ranges specified in a local table (LU_DST). Thus, the usage would be:

        datediff("n", Conv_DST_to_Local([date1]), Conv_DST_to_Local([date2]))

    My question is: Is there a better way to handle this? I'm going to make a wild guess that I'm not the first person with this question. This seems like the kind of thing that should have been added to one of the core reference libraries. Is there a way for me to access my system clock to ask it if DST was in effect at a certain date & time?

        Function Conv_DST_to_Local(X As Date) As Date
            Dim rst As DAO.Recordset
            Set rst = CurrentDb.OpenRecordset("LU_DST")
            Conv_DST_to_Local = X
            While rst.EOF = False
                If X > rst.Fields(0) And X < rst.Fields(1) Then Conv_DST_to_Local = DateAdd("n", -60, X)
                rst.MoveNext
            Wend
        End Function

    Notes: I have visited and imported the BAS file of http://www.cpearson.com/excel/TimeZoneAndDaylightTime.aspx. I spent at least an hour by now reading through it and, while it may do its job well, I can't figure out how to modify it to my needs. But if you have an answer using his data structures, I'll take a look. Timezones are not an issue since this is all local time.

  • Convert function to read from string instead of file in C

    - by Dusty
    I've been tasked with updating a function which currently reads in a configuration file from disk and populates a structure:

        static int LoadFromFile(FILE *Stream, ConfigStructure *cs)
        {
            int tempInt;
            ...
            if ( fscanf( Stream, "Version: %d\n", &tempInt ) != 1 )
            {
                printf("Unable to read version number\n");
                return 0;
            }
            cs->Version = tempInt;
            ...
        }

    to one which allows us to bypass writing the configuration to disk and instead pass it directly in memory, roughly equivalent to this:

        static int LoadFromString(char *Stream, ConfigStructure *cs)

    A few things to note:

    - The current LoadFromFile function is incredibly dense and complex, reading dozens of versions of the config file in a backward compatible manner, which makes duplication of the overall logic quite a pain.
    - The functions that generate the config file and those that read it originate in totally different parts of the old system and therefore don't share any data structures, so I can't pass those directly. I could potentially write a wrapper, but again, it would need to handle any structure passed in in a backward compatible manner.
    - I'm tempted to just pass the file as is in as a string (as in the prototype above) and convert all the fscanf's to sscanf's, but then I have to handle incrementing the pointer along (and potentially dealing with buffer overrun errors) manually.
    - This has to remain in C, so no C++ functionality like streams can help here.

    Am I missing a better option? Is there some way to create a FILE * that actually just points to a location in memory instead of on disk? Any pointers, suggestions or other help is greatly appreciated.
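
    On POSIX systems, fmemopen() does exactly this - it opens a FILE * over an in-memory buffer, so the existing fscanf-based parser can be reused unchanged. A minimal sketch (LoadFromMemory is a hypothetical wrapper name):

        #include <stdio.h>
        #include <string.h>

        /* Sketch only: open a FILE * backed by a string and hand it to the
         * existing parser. fmemopen() is POSIX.1-2008 (glibc, BSD, macOS);
         * it is not available in MSVC, where a tmpfile()-based fallback is common. */
        int LoadFromMemory(const char *config_text, ConfigStructure *cs)
        {
            FILE *stream = fmemopen((void *)config_text, strlen(config_text), "r");
            if (stream == NULL)
                return 0;

            int ok = LoadFromFile(stream, cs);   /* reuse the existing fscanf logic */
            fclose(stream);
            return ok;
        }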

  • Filter entities that match all pairs

    - by Jon
    I have an entity (let's say Person) with a set of arbitrary attributes with a known subset of values. I need to search for all of these entities that match all my filter conditions. For example, my table structures look like this:

        Person:
        id | name
        1  | John Doe
        2  | Jane Roe
        3  | John Smith

        Attribute:
        id | attr_name
        1  | Sex
        2  | Eye Color

        ValidValue:
        id | attr_id | value_name
        1  | 1       | Male
        2  | 1       | Female
        3  | 2       | Blue
        4  | 2       | Green
        5  | 2       | Brown

        PersonAttributes:
        id | person_id | attr_id | value_id
        1  | 1         | 1       | 1
        2  | 1         | 2       | 3
        3  | 2         | 1       | 2
        4  | 2         | 2       | 4
        5  | 3         | 1       | 1
        6  | 3         | 2       | 4

    In JPA, I have entities built for all of these tables. What I'd like to do is perform a search for all entities matching a given set of attribute-value pairs. For instance, I'd like to be able to find all males (John Doe and John Smith), all people with green eyes (Jane Roe or John Smith), or all females with green eyes (Jane Roe). I see that I can already take advantage of the fact that I only really need to match on value_id, since that's already unique and tied to the attr_id. But where can I go from there?

  • sqlite3 JOIN, GROUP_CONCAT using distinct with custom separator

    - by aiwilliams
    Given a table of "events" where each event may be associated with zero or more "speakers" and zero or more "terms", those records associated with the events through join tables, I need to produce a table of all events with a column in each row which represents the list of "speaker_names" and "term_names" associated with each event. However, when I run my query, I have duplication in the speaker_names and term_names values, since the join tables produce a row per association for each of the speakers and terms of the events:

        1|Soccer|Bobby|Ball
        2|Baseball|Bobby - Bobby - Bobby|Ball - Bat - Helmets
        3|Football|Bobby - Jane - Bobby - Jane|Ball - Ball - Helmets - Helmets

    The group_concat aggregate function has the ability to use 'distinct', which removes the duplication, though sadly it does not support that alongside the custom separator, which I really need. I am left with these results:

        1|Soccer|Bobby|Ball
        2|Baseball|Bobby|Ball,Bat,Helmets
        3|Football|Bobby,Jane|Ball,Helmets

    My question is this: Is there a way I can form the query or change the data structures in order to get my desired results? Keep in mind this is a sqlite3 query I need, and I cannot add custom C aggregate functions, as this is for an Android deployment. I have created a gist which makes it easy for you to test a possible solution: https://gist.github.com/4072840
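
    One workaround (a sketch only; the real table and column names live in the linked gist, so the ones below are invented) is to de-duplicate in a subquery before aggregating, which lets group_concat keep its custom separator:

        -- Hypothetical schema names; de-duplicate event/speaker pairs first,
        -- then aggregate with the custom ' - ' separator.
        SELECT e.id,
               e.name,
               group_concat(s.speaker_name, ' - ') AS speaker_names
        FROM events e
        LEFT JOIN (SELECT DISTINCT es.event_id, sp.name AS speaker_name
                   FROM event_speakers es
                   JOIN speakers sp ON sp.id = es.speaker_id) s
               ON s.event_id = e.id
        GROUP BY e.id, e.name;

    The terms column would come from a second, similarly de-duplicated subquery joined the same way, so the speaker and term joins no longer multiply each other's rows.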

  • How to make comment reply query in MYSQL?

    - by Prashant
    I have comment reply (only to one level) functionality. All comments can have as many replies as needed, but no reply can have further replies of its own. So my database table structure is like below:

        Id  ParentId  Comment
        1   0         this is some sample comment text
        2   0         this is some sample comment text
        3   0         this is some sample comment text
        4   1         this is some sample comment text
        5   0         this is some sample comment text
        6   3         this is some sample comment text
        7   1         this is some sample comment text

    In the above structure, comment ids 1 (2 replies) and 3 (1 reply) have replies. So to fetch the comments and their replies, one simple method is to first fetch all the comments having ParentId 0 and then, in a while loop, fetch all the replies of each particular comment id. But that means running hundreds of queries if I have around 200 comments on a particular record. So I want to make a query which will fetch the comments with their replies sequentially, as follows:

        Id  ParentId  Comment
        1   0         this is some sample comment text
        4   1         this is some sample comment text
        7   1         this is some sample comment text
        2   0         this is some sample comment text
        3   0         this is some sample comment text
        6   3         this is some sample comment text
        5   0         this is some sample comment text

    I also have a comment date column in my comment table, if anyone wants to use this with the comment query. So finally, I want to fetch all the comments and their replies using one single MySQL query. Please tell me how I can do that? Thanks
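
    One way to get that ordering in a single query (a sketch; the table is assumed here to be called comments) is to sort every row by its thread root - the comment's own Id for top-level rows, the ParentId for replies - and then by ParentId and Id within the thread:

        -- Sketch: 'comments' is an assumed table name.
        SELECT Id, ParentId, Comment
        FROM comments
        ORDER BY IF(ParentId = 0, Id, ParentId),  -- group each thread together
                 ParentId,                        -- the parent (ParentId = 0) comes first
                 Id;                              -- replies in insertion order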

  • Google Chrome Extension - Help needed

    - by Jim-Y
    I'm new to Google Chrome extension coding, and I have some basic questions. I want to make a Chrome extension, and the scheme is the following:

    - a popup window, containing buttons and result fields (popup.html)
    - when a button is clicked, I want to trigger an event; this event should connect to a webserver (I am making the servlet too) and gather information from the server (XMLHttpRequest())
    - after that, I want my extension to load the gathered information into one of the result fields.

    Simple, isn't it? But I have several problems, right at the beginning :( I started by reading tutorials, but the main structure of an extension is still foggy to me. Now, I started an app containing a popup.html, manifest.json ... In popup.html there is a result field and a button:

        <div id="extension_container">
            <div id="header">
                <p id="intro">Result here</p>
                <button type="button" id="button">Click Me!</button>
            </div> <!-- END header -->
            <div id="content">
            </div> <!-- END content -->

    When the button is clicked, I trigger an event, handled with jQuery, code here:

        <script>
        $(document).ready(function(){
            $("#button").click(function(){
                $("#intro").text("Hello, im added");
                alert("Clicked");
            });
        });
        </script>

    And here comes the problem: in popup.html this doesn't work; if I load it into Chrome, nothing happens. Otherwise, if I open popup.html in the browser, not as an extension, everything works fine. So, I think I have basic misunderstandings about extension structures, starting with background pages, background javascript and so on.. :( Could anyone help me?

  • Best Practice: QT4 QList<Mything*>... on Heap, or QList<Mything> using reference?

    - by Mike Crowe
    Hi Folks, I am learning C++, so be gentle :)... I have been designing my application primarily using heap variables (coming from C), so I've designed structures like this:

        QList<Criteria*> _Criteria;
        // ...
        Criteria *c = new Criteria(....);
        _Criteria.append(c);

    All through my program, I'm passing pointers to specific Criteria, or often the list. So, I have functions declared like this:

        QList<Criteria*> Decision::addCriteria(int row, QString cname, QString ctype);
        Criteria * Decision::getCriteria(int row, int col)

    which insert a Criteria into a list and return the list so my GUI can display it. I'm wondering if I should have used references somehow. Since I'm always wanting that exact Criteria back, should I have done:

        QList<Criteria> _Criteria;
        // ....
        Criteria c(....);
        _Criteria.append(c);
        ...
        QList<Criteria>& Decision::addCriteria(int row, QString cname, QString ctype);
        Criteria& Decision::getCriteria(int row, int col)

    (not sure if the latter line is syntactically correct yet, but you get the drift). All these items are specific, quasi-global items that are the core of my program. So, the question is this: I can certainly allocate/free all my memory w/o an issue in the method I'm using now, but is there a more C++ way? Would references have been a better choice (it's not too late to change on my side)? TIA Mike
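
    For what it's worth, a value-based sketch of the same idea (illustrative only; it assumes Criteria has a suitable constructor, and a reference returned by getCriteria stays valid only until the list is modified or detached - Qt containers are implicitly shared, so copies of QList<Criteria> itself are cheap):

        #include <QList>
        #include <QString>

        // Sketch: Criteria stored by value; callers get references into the list.
        class Decision {
        public:
            // Append a new Criteria and return a reference to the stored copy.
            Criteria &addCriteria(int row, const QString &cname, const QString &ctype) {
                _criteria.append(Criteria(row, cname, ctype));   // assumes such a constructor exists
                return _criteria.last();
            }

            // Reference to an existing Criteria (valid until the list changes).
            Criteria &getCriteria(int row) { return _criteria[row]; }

            const QList<Criteria> &criteria() const { return _criteria; }  // read-only view for the GUI

        private:
            QList<Criteria> _criteria;
        };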

  • Unknown error when submitting a REST request to the Liferay JSON API

    - by r.rodriguez
    I'm writing a script in Python to automatically update the structures in my Liferay portal, and I want to do it via the JSON REST API. I make a request to get a structure (method getStructure), and it works. But when I try to do a structure update in the portal, it shows me the following error:

        ValueError: Content-Length should be specified for iterable data of type <class 'dict'> {'serviceContext': "{'prueba'}", 'serviceClassName': 'com.liferay.portlet.journal.service.JournalStructureServiceUtil', 'name': 'FOO', 'xsd': '... THE XSD OBTAINED VIA JSON ...', 'serviceParameters': '[groupId,structureId,parentStructureId,name,description,xsd,serviceContext]', 'description': 'FOO Structure', 'serviceMethodName': 'updateStructure', 'groupId': '10133'}

    What I'm doing is the following:

        urllib.request.Request(url = URL, data = data_update, headers = headers)

    URL is http://localhost:8080/tunnel-web/secure/json. The headers are configured with basic authentication (it works; it is tested with the getStructure method). Data is:

        data_update = {
            "serviceClassName" : "com.liferay.portlet.journal.service.JournalStructureServiceUtil",
            "serviceMethodName" : "updateStructure",
            "serviceParameters" : "[groupId,structureId,parentStructureId,name,description,xsd,serviceContext]",
            "groupId" : 10133,
            "name" : "FOO",
            "description" : "FOO Structure",
            "xsd" : "... THE XSD OBTAINED VIA JSON ...",
            "serviceContext" : "{}"
        }

    Does anybody know the solution? Do I have to specify the length for the dictionary, and how? Or is this a bug?
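
    urllib in Python 3 expects data to be bytes (or an iterable with an explicit Content-Length), not a dict, which is what the ValueError is complaining about. A sketch of the usual fix is to form-encode the dictionary first (names below are the ones from the question):

        import urllib.parse
        import urllib.request

        # Sketch: encode the dict as application/x-www-form-urlencoded bytes
        # so urllib can compute Content-Length itself.
        body = urllib.parse.urlencode(data_update).encode("utf-8")

        req = urllib.request.Request(url=URL, data=body, headers=headers)
        with urllib.request.urlopen(req) as resp:
            print(resp.read().decode("utf-8"))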

  • Why does Clojure hang after having performed my calculations?

    - by Thomas
    Hi all, I'm experimenting with filtering through elements in parallel. For each element, I need to perform a distance calculation to see if it is close enough to a target point. Never mind that data structures already exist for doing this; I'm just doing initial experiments for now. Anyway, I wanted to run some very basic experiments where I generate random vectors and filter them. Here's my implementation that does all of this:

        (defn pfilter [pred coll]
          (map second
            (filter first
              (pmap (fn [item] [(pred item) item]) coll))))

        (defn random-n-vector [n]
          (take n (repeatedly rand)))

        (defn distance [u v]
          (Math/sqrt (reduce + (map #(Math/pow (- %1 %2) 2) u v))))

        (defn -main [& args]
          (let [[n-str vectors-str threshold-str] args
                n (Integer/parseInt n-str)
                vectors (Integer/parseInt vectors-str)
                threshold (Double/parseDouble threshold-str)
                random-vector (partial random-n-vector n)
                u (random-vector)]
            (time
              (println n vectors
                (count (pfilter
                         (fn [v] (< (distance u v) threshold))
                         (take vectors (repeatedly random-vector))))))))

    The code executes and returns what I expect, that is, the parameter n (length of vectors), vectors (the number of vectors) and the number of vectors that are closer than a threshold to the target vector. What I don't understand is why the program hangs for an additional minute before terminating. Here is the output of a run which demonstrates the problem:

        $ time lein run 10 100000 1.0
        [null] 10 100000 12283
        [null] "Elapsed time: 3300.856 msecs"

        real    1m6.336s
        user    0m7.204s
        sys     0m1.495s

    Any comments on how to filter in parallel in general are also more than welcome, as I haven't yet confirmed that pfilter actually works.
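
    If the extra minute is the JVM waiting on the thread pool that pmap uses (a common cause of exactly this symptom), calling shutdown-agents at the end of -main lets the process exit immediately. A sketch with the same body as above:

        ;; Sketch: same -main, with the agent/future thread pools shut down at the end.
        ;; pmap runs its tasks on the agent send-off pool; those non-daemon worker
        ;; threads keep the JVM alive for about a minute after main returns unless
        ;; the pool is shut down explicitly.
        (defn -main [& args]
          (let [[n-str vectors-str threshold-str] args
                n (Integer/parseInt n-str)
                vectors (Integer/parseInt vectors-str)
                threshold (Double/parseDouble threshold-str)
                random-vector (partial random-n-vector n)
                u (random-vector)]
            (time
              (println n vectors
                (count (pfilter
                         (fn [v] (< (distance u v) threshold))
                         (take vectors (repeatedly random-vector))))))
            (shutdown-agents)))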

  • LaTeX: why are references only partially showing up?

    - by HH
    The bibliography-style part may be the problem. If I do not cite the references, do they show up? I have listed all errors below; the file compiles, so I don't know whether they are related to the partially-showing-up references. For example, work with many authors gets only one author listed. I want to see references fully, not partially.

    Headers:

        $ grep bib header.tex
        \usepackage{natbib}
        \bibliographystyle{abbrvnat}

    Errors:

        $ grep -n -A 7 -B 7 Error *.log
        combined.log-505-! Illegal unit of measure (pt inserted).
        combined.log-506-<to be read again>
        combined.log-507- \futurelet
        combined.log-508-l.353 \hline
        combined.log-509-
        combined.log-510-?
        combined.log-511-
        combined.log:512:! Package caption Error: cite undefined.
        combined.log-513-
        combined.log-514-See the caption package documentation for explanation.
        combined.log-515-Type H <return> for immediate help.
        combined.log-516- ...
        combined.log-517-
        combined.log-518-l.374 ...n={CPU O(mlog(n))}, cite={topcoder:node}]
        combined.log-519-
        --
        combined.log-559- []
        combined.log-560-
        combined.log-561-) [10]
        combined.log-562-\openout2 = `references.aux'.
        combined.log-563-
        combined.log-564- (./references.tex
        combined.log-565-
        combined.log:566:! LaTeX Error: \include cannot be nested.
        combined.log-567-
        combined.log-568-See the LaTeX manual or LaTeX Companion for explanation.
        combined.log-569-Type H <return> for immediate help.
        combined.log-570- ...
        combined.log-571-
        combined.log-572-l.1 \include{timeUse.tex}

    Bibs.bib:

        @misc{ Gundersen,
            author = "G. Gundersen",
            title  = "Data Structures in Java for Matrix Computations",
            year   = "2002"
        }

        @book{ Lennart,
            author = "R. Lennart",
            title  = "Mathematics Handbook for Science and Engineering BETA",
            year   = "2004"
        }
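
    One of the logged errors - "! LaTeX Error: \include cannot be nested." - suggests references.tex, which is itself pulled in via \include, tries to \include{timeUse.tex}. A sketch of the usual fix (assuming that file layout) is to switch the inner one to \input and drop the .tex extension:

        % Sketch: inside references.tex, replace
        %     \include{timeUse.tex}
        % with
        \input{timeUse}
        % \include cannot be nested, and both commands expect the file name
        % without the .tex extension.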

  • SQL Server Clustered Index: (Physical) Data Page Order

    - by scherand
    I am struggling to understand what a clustered index in SQL Server 2005 is. I read the MSDN article Clustered Index Structures (among other things), but I am still unsure if I understand it correctly. The (main) question is: what happens if I insert a row (with a "low" key) into a table with a clustered index? The above-mentioned MSDN article states:

        The pages in the data chain and the rows in them are ordered on the value of the clustered index key.

    And Using Clustered Indexes, for example, states:

        For example, if a record is added to the table that is close to the beginning of the sequentially ordered list, any records in the table after that record will need to shift to allow the record to be inserted.

    Does this mean that if I insert a row with a very "low" key into a table that already contains a gazillion rows, literally all rows are physically shifted on disk? I cannot believe that. This would take ages, no? Or is it rather (as I suspect) that there are two scenarios depending on how "full" the first data page is:

    A) If the page has enough free space to accommodate the record, it is placed into the existing data page and data might be (physically) reordered within that page.

    B) If the page does not have enough free space for the record, a new data page would be created (anywhere on the disk!) and "linked" into the front of the leaf level of the B-tree?

    This would then mean the "physical order" of the data is restricted to the "page level" (i.e. within a data page) but not to the pages residing on consecutive blocks on the physical hard drive; the data pages are then just linked together in the correct order. Or, formulated in an alternative way: if SQL Server needs to read the first N rows of a table that has a clustered index, it can read data pages sequentially (following the links), but these pages are not (necessarily) block-wise in sequence on disk (so the disk head has to move "randomly"). How close am I? :)

  • Linking the Linker script file to source code

    - by user304097
    Hello, I am new to the GNU compiler. I have a C source code file which contains some structures and variables, in which I need to place certain variables at particular locations. So, I have written a linker script file and used __attribute__((section("SECTION"))) at the variable declarations in the C source code. I am using the GNU compiler (cygwin) to compile the source code and creating a .hex file using objcopy, but I do not know how to link my linker script file at compilation so that the variables are relocated accordingly. I am attaching the linker script file and the C source file for reference. Please help me link the linker script file to my source code while creating the .hex file using GNU.

        /* linker script file */
        /* defining memory regions */
        MEMORY
        {
            base_table_ram : org = 0x00700000, len = 0x00000100 /* base table area for BASE table */
            mem2           : org = 0x00800200, len = 0x00000300 /* other structure variables */
        }

        /* Sections directive definitions */
        SECTIONS
        {
            BASE_TABLE : { } > base_table_ram
            GROUP :
            {
                .text : { } { *(SEG_HEADER) }
                .data : { } { *(SEG_HEADER) }
                .bss  : { } { *(SEG_HEADER) }
            } > mem2
        }

    C source code:

        const UINT8 un8_Offset_1 __attribute__((section("BASE_TABLE"))) = 0x1A;
        const UINT8 un8_Offset_2 __attribute__((section("BASE_TABLE"))) = 0x2A;
        const UINT8 un8_Offset_3 __attribute__((section("BASE_TABLE"))) = 0x3A;
        const UINT8 un8_Offset_4 __attribute__((section("BASE_TABLE"))) = 0x4A;
        const UINT8 un8_Offset_5 __attribute__((section("BASE_TABLE"))) = 0x5A;
        const UINT8 un8_Offset_6 __attribute__((section("SEG_HEADER"))) = 0x6A;

    My intention is to place the variables of section "BASE_TABLE" at the address defined in the linker script file and the remaining variables at the "SEG_HEADER" address defined in the linker script file above. But after compilation, when I look into the .hex file, the different section variables are located in different hex records at an address of 0x00, not the one given in the linker script file. Please help me in linking the linker script file to the source code. Are there any command line options to link the linker script file? If so, please provide me with the info on how to use them. Thanks in advance, SureshDN.
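
    As a sketch of the usual GNU toolchain invocation (file names below are assumed), the script is passed to the linker with -T, either directly to ld or through gcc via -Wl, and the hex image is produced afterwards with objcopy:

        # Assumed file names: main.c and my_script.ld
        # Pass the linker script through gcc ...
        gcc -c main.c -o main.o
        gcc main.o -Wl,-T,my_script.ld -o program.elf

        # ... or invoke ld directly with -T
        ld -T my_script.ld main.o -o program.elf

        # Then produce the hex image as before
        objcopy -O ihex program.elf program.hex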

  • Iterator performance contract (and use on non-collections)

    - by polygenelubricants
    - If all that you're doing is a simple one-pass iteration (i.e. only hasNext() and next(), no remove()), are you guaranteed linear time performance and/or amortized constant cost per operation? Is this specified in the Iterator contract anywhere? Are there data structures/Java Collections which cannot be iterated in linear time?
    - java.util.Scanner implements Iterator<String>. A Scanner is hardly a data structure (e.g. remove() makes absolutely no sense). Is this considered a design blunder?
    - Is something like PrimeGenerator implements Iterator<Integer> considered bad design, or is this exactly what Iterator is for? (hasNext() always returns true, next() computes the next number on demand, remove() makes no sense.) Similarly, would it have made sense for java.util.Random to implement Iterator<Double>?
    - Should a type really implement Iterator if it's effectively only using one-third of its API (i.e. no remove(), always hasNext())?

  • How to use the Request URL/URL Rewriting For Localization in ASP.NET - Using an HTTP Module or Global.asax

    - by LocalizedUrlDMan
    I wanted to see if there is a way to use the request URL/URL rewriting to set the language a page is rendered in, by examining a portion of the URL in ASP.NET. We have a site that already works with ASP.NET's resource localization, and users can change the language that they see pages/resources on the site in; however, the current mechanism is not very search-engine friendly, since the language variations for each language all appear as one page. It would be much better if we could have pages like www.site.com/en-mx/realfolder/realpage.aspx that allow linking to culture-specific versions of a page. I know lots of people have likely done localization through URL structures before, and I wanted to know if any of you could share how to do this in the Global.asax file or with an HTTP module (pointing to links to blog postings would be great too). We have a restriction that the site is based on ASP.NET 2.0 (so we can't use the 3.5+ features yet). Here is the example scenario:

    - A real page exists at: www.site.com/realfolder/realpage.aspx
    - The page has a mechanism for the user to change the language it is displayed in via a dropdown.
    - There are search engine optimization and link-sharing benefits to doing this, since people can link directly to a page that has content applicable to a certain language (this could also include right-to-left layouts for languages like Japanese).

    I would like to use an HTTP module to see if the first part of the URL after www.site.com, site.com, subdomain.site.com, etc. contains a valid culture code (e.g. en-us, es-mx), and then use that value to set the localization culture of the page/resources based on that URL. So if the user accesses the URL www.site.com/en-MX/realfolder/realpage.aspx then the page will render in Mexico's variant of Spanish. If the user goes to www.site.com/realfolder/realpage.aspx directly, the page would just use their browser's language settings.
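
    A rough ASP.NET 2.0-style sketch of that module (names invented, not production code; it peels a leading culture segment off the path, applies it to the current thread, and rewrites the URL so the real page still executes):

        using System;
        using System.Globalization;
        using System.Threading;
        using System.Web;

        public class CultureUrlModule : IHttpModule
        {
            public void Init(HttpApplication app)
            {
                app.BeginRequest += OnBeginRequest;
            }

            private static void OnBeginRequest(object sender, EventArgs e)
            {
                HttpContext ctx = ((HttpApplication)sender).Context;
                string path = ctx.Request.Path;              // e.g. /en-MX/realfolder/realpage.aspx
                string[] segments = path.Trim('/').Split('/');
                if (segments.Length == 0 || segments[0].IndexOf('-') < 0)
                    return;                                  // first segment doesn't look like "xx-XX"

                CultureInfo culture;
                try { culture = CultureInfo.GetCultureInfo(segments[0]); }
                catch (ArgumentException) { return; }        // not a real culture code: leave the request alone

                Thread.CurrentThread.CurrentUICulture = culture;
                Thread.CurrentThread.CurrentCulture = CultureInfo.CreateSpecificCulture(culture.Name);

                // Strip the culture segment and hand the "real" URL to the rest of the pipeline.
                string realPath = "/" + string.Join("/", segments, 1, segments.Length - 1);
                ctx.RewritePath(realPath + ctx.Request.Url.Query);
            }

            public void Dispose() { }
        }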

  • How can I use R (RCurl/XML packages?) to scrape this webpage?

    - by Tal Galili
    Hi all, I have a (somewhat complex) webscraping challenge that I wish to accomplish and would love for some direction (to whatever level you feel like sharing). Here goes: I would like to go through all the "species pages" present in this link: http://gtrnadb.ucsc.edu/ So for each of them I will go to:

    - the species page link (for example: http://gtrnadb.ucsc.edu/Aero_pern/)
    - and then to the "Secondary Structures" page link (for example: http://gtrnadb.ucsc.edu/Aero_pern/Aero_pern-structs.html)

    Inside that link I wish to scrape the data in the page so that I will have a long list containing this data (for example):

        chr.trna3 (1-77)    Length: 77 bp
        Type: Ala   Anticodon: CGC at 35-37 (35-37)   Score: 93.45
        Seq: GGGCCGGTAGCTCAGCCtGGAAGAGCGCCGCCCTCGCACGGCGGAGGcCCCGGGTTCAAATCCCGGCCGGTCCACCA
        Str: >>>>>>>..>>>>.........<<<<.>>>>>.......<<<<<.....>>>>>.......<<<<<<<<<<<<....

    where each line will have its own list (inside the list for each "trna" inside the list for each animal). I remember coming across the packages RCurl and XML (in R) that can allow for such a task. But I don't know how to use them. So what I would love to have is:

    1. Some suggestion on how to build such a code.
    2. A recommendation for how to learn the knowledge needed for performing such a task.

    Thanks for any help, Tal
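
    A minimal sketch of the usual RCurl + XML combination (URL patterns as in the question; the link filter, XPath and record parsing are assumptions that would need adjusting to the real page structure):

        # Sketch: fetch the index page, collect species links, then pull each
        # "-structs.html" page and split it into per-tRNA text blocks.
        library(RCurl)
        library(XML)

        base  <- "http://gtrnadb.ucsc.edu/"
        index <- htmlParse(getURL(base))

        # Species pages are assumed here to be the relative links on the front page.
        links   <- xpathSApply(index, "//a/@href")
        species <- grep("^[A-Za-z_]+/$", links, value = TRUE)

        for (sp in species) {
            structs_url <- paste(base, sp, sub("/$", "", sp), "-structs.html", sep = "")
            page <- getURL(structs_url)
            # The tRNA records are plain text; split on the record header as a rough cut.
            records <- strsplit(page, "chr\\.trna")[[1]]
            # ... parse Length / Type / Anticodon / Seq / Str out of each record here
        }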

  • How do I supply extra info to IApplicationSettingsProvider class?

    - by joebeazelman
    Perhaps this question has been asked before in a different way, but I haven't been able to find it. I have one or more plugin adapter assemblies in my application, all having the type IPlugin, for instance. Each adapter has its own settings structures stored in a common directory. Whether they are stored in one contiguous file or in separate ones doesn't matter. Each adapter can have one or more settings associated with it. The settings will have both a name and the plugin they will be used for. How would I create such a configuration system given the following requirements?

    - I want to use .NET's built-in settings system and avoid writing one from scratch.
    - The host application will be responsible for locating the plugin settings and passing them to the plugin.
    - Each plugin will be responsible for reading and writing its own settings, to separate concerns. The host application should call Plugin.Save(thePath) and it does its thing.
    - All settings are user scoped.

    So far, I realize that I would need to write my own SettingsProvider, but the provider seems to work in isolation, in that there's no way to pass it parameters such as the path of the plugin directory and the name of the settings. All of the example code I've seen has the provider getting the data from the runtime environment.

  • "Work stealing" vs. "Work shrugging"?

    - by John
    Why is it that I can find lots of information on "work stealing" and nothing on "work shrugging" as a dynamic load-balancing strategy? By "work-shrugging" I mean busy processors pushing excessive work towards less loaded neighbours rather than idle processors pulling work from busy neighbours ("work-stealing"). I think the general scalability should be the same for both strategies. However I believe that it is much more efficient for busy processors to wake idle processors if and when there is definitely work for them to do than having idle processors spinning or waking periodically to speculatively poll all neighbours for possible work. Anyway a quick google didn't show up anything under the heading of "Work Shrugging" or similar so any pointers to prior-art and the jargon for this strategy would be welcome. Clarification/Confession In more detail:- By "Work Shrugging" I actually envisage the work submitting processor (which may or may not be the target processor) being responsible for looking around the immediate locality of the preferred target processor (based on data/code locality) to decide if a near neighbour should be given the new work instead because they don't have as much work to do. I am talking about an atomic read of the immediate (typically 2 to 4) neighbours' estimated q length here. I do not think this is any more coupling than implied by the thieves polling & stealing from their neighbours - just much less often - or rather - only when it makes economic sense to do so. (I am assuming "lock-free, almost wait-free" queue structures in both strategies). Thanks.

  • What is the general feeling about reflection extensions in std::type_info?

    - by Evan Teran
    I've noticed that reflection is one feature that developers from other languages find very lacking in c++. For certain applications I can really see why! It is so much easier to write things like an IDE's auto-complete if you had reflection. And certainly serialization APIs would be a world easier if we had it. On the other side, one of the main tenets of c++ is don't pay for what you don't use. Which makes complete sense. That's something I love about c++. But it occurred to me there could be a compromise. Why don't compilers add extensions to the std::type_info structure? There would be no runtime overhead. The binary could end up being larger, but this could be a simple compiler switch to enable/disable and to be honest, if you are really concerned about the space savings, you'll likely disable exceptions and RTTI anyway. Some people cite issues with templates, but the compiler happily generates std::type_info structures for template types already. I can imagine a g++ switch like -fenable-typeinfo-reflection which could become very popular (and mainstream libs like boost/Qt/etc could easily have a check to generate code which uses it if there, in which case the end user would benefit with no more cost than flipping a switch). I don't find this unreasonable since large portable libraries like this already depend on compiler extensions. So why isn't this more common? I imagine that I'm missing something, what are the technical issues with this?

  • Match entities fulfilling filter (strict superset of search)

    - by Jon
    I have an entity (let's say Person) with a set of arbitrary attributes with a known subset of values. I need to search for all of these entities that match all my filter conditions. That is, given a set of Attributes A, I need to find all people that have a set of Attributes that is a superset of A. For example, my table structures look like this:

        Person:
        id | name
        1  | John Doe
        2  | Jane Roe
        3  | John Smith

        Attribute:
        id | attr_name
        1  | Sex
        2  | Eye Color

        ValidValue:
        id | attr_id | value_name
        1  | 1       | Male
        2  | 1       | Female
        3  | 2       | Blue
        4  | 2       | Green
        5  | 2       | Brown

        PersonAttributes:
        id | person_id | attr_id | value_id
        1  | 1         | 1       | 1
        2  | 1         | 2       | 3
        3  | 2         | 1       | 2
        4  | 2         | 2       | 4
        5  | 3         | 1       | 1
        6  | 3         | 2       | 4

    In JPA, I have entities built for all of these tables. What I'd like to do is perform a search for all entities matching a given set of attribute-value pairs. For instance, I'd like to be able to find all males (John Doe and John Smith), all people with green eyes (Jane Roe or John Smith), or all females with green eyes (Jane Roe). I see that I can already take advantage of the fact that I only really need to match on value_id, since that's already unique and tied to the attr_id. But where can I go from there? I've been trying to do something like the following, given that the ValidValue is unique in all cases:

        select distinct p from Person p join p.personAttributes a where a.value IN (:values)

    Then I've tried putting my set of required values in as "values", but that gives me errors no matter how I try to structure it. I also have to get a little more complicated, as follows, but at this point I'd be happy with solving the first problem cleanly. However, if it's possible: the Attribute table actually has a field for a default value:

        id | attr_name | default_value
        1  | Sex       | 1
        2  | Eye Color | 5

    If the value you're searching on happens to be the default value, I want the query to return any people that have no explicit value set for that attribute, because in the application logic that means they inherit the default value. Again, I'm more concerned about the primary question, but if someone who can help with that also has some idea of how to do this one, I'd be extremely grateful.
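
    For the primary question, the usual relational-division shape (a sketch; shown as native SQL against the tables above, since JPQL support for group by/having on the entity varies by provider) keeps only the rows whose value_id is in the filter set and requires that a person matched all of them:

        -- Find people holding every required value_id (here: Female = 2 and Green = 4).
        SELECT pa.person_id
        FROM PersonAttributes pa
        WHERE pa.value_id IN (2, 4)
        GROUP BY pa.person_id
        HAVING COUNT(DISTINCT pa.value_id) = 2;   -- = number of required pairs

    The same idea can be issued from JPA as a native query, with the IN list and the count bound as parameters.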
