Search Results

Search found 25137 results on 1006 pages for 'xml db'.

Page 382 of 1006

  • How to use JAXB to process messages from two separate schemas (with the same root element name)

    - by sairn
    Hi We have a client that is sending xml messages that are produced from two separate schemas. We would like to process them in a single application, since the content is related. We cannot modify the client's schema that they are using to produce the XML messages... We can modify our own copies of the two schema (or binding.jxb) -- if it helps -- in order to enable our JAXB processing of messages created from the two separate schemas. Unfortunately, both schemas have the same root element name (see below). QUESTION: Does JAXB prohibit absolutely the processing two schemas that have the same root element name? -- If so, I will stop my "easter egg" hunt for a solution to this... ---Or, is there some workaround that would enable us to use JAXB for processing these XML messages produced from two different schemas? schema1 (note the root element name: "A"): <?xml version="1.0" encoding="UTF-8"?> <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema" elementFormDefault="qualified"> <xsd:element name="A"> <xsd:complexType> <xsd:sequence> <xsd:element name="AA"> <xsd:complexType> <xsd:sequence> <xsd:element name="AAA1" type="xsd:string" /> <xsd:element name="AAA2" type="xsd:string" /> <xsd:element name="AAA3" type="xsd:string" /> </xsd:sequence> </xsd:complexType> </xsd:element> <xsd:element name="BB"> <xsd:complexType> <xsd:sequence> <xsd:element name="BBB1" type="xsd:string" /> <xsd:element name="BBB2" type="xsd:string" /> </xsd:sequence> </xsd:complexType> </xsd:element> </xsd:sequence> </xsd:complexType> </xsd:element> </xsd:schema> schema2 (note, again, using the same root element name: "A") <?xml version="1.0" encoding="UTF-8"?> <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema" elementFormDefault="qualified"> <xsd:element name="A"> <xsd:complexType> <xsd:sequence> <xsd:element name="CCC"> <xsd:complexType> <xsd:sequence> <xsd:element name="DDD1" type="xsd:string" /> <xsd:element name="DDD2" type="xsd:string" /> </xsd:sequence> </xsd:complexType> </xsd:element> <xsd:element name="EEE"> <xsd:complexType> <xsd:sequence> <xsd:element name="EEE1"> <xsd:complexType> <xsd:sequence> <xsd:element name="FFF1" type="xsd:string" /> <xsd:element name="FFF2" type="xsd:string" /> </xsd:sequence> </xsd:complexType> </xsd:element> <xsd:element name="EEE2" type="xsd:string" /> </xsd:sequence> </xsd:complexType> </xsd:element> </xsd:sequence> </xsd:complexType> </xsd:element> </xsd:schema>
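
    A hedged sketch of one common workaround (not from the original post): JAXB itself does not forbid two schemas sharing a root element name; the clash only appears when both schemas are compiled into the same package. Compiling each schema into its own package (for example xjc -p com.example.schema1 schema1.xsd and xjc -p com.example.schema2 schema2.xsd - the package names here are made up) gives two independent object models, and a separate JAXBContext per package keeps unmarshalling unambiguous:

        import java.io.StringReader;
        import javax.xml.bind.JAXBContext;
        import javax.xml.bind.JAXBException;
        import javax.xml.bind.Unmarshaller;

        public class DualSchemaReader {
            private final JAXBContext ctx1;
            private final JAXBContext ctx2;

            public DualSchemaReader() throws JAXBException {
                // One context per generated package; each contains its own class for root element "A".
                ctx1 = JAXBContext.newInstance("com.example.schema1");
                ctx2 = JAXBContext.newInstance("com.example.schema2");
            }

            // Unmarshal a message that is known to follow schema1.
            public Object readAsSchema1(String xml) throws JAXBException {
                Unmarshaller u = ctx1.createUnmarshaller();
                return u.unmarshal(new StringReader(xml));
            }

            // Unmarshal a message that is known to follow schema2.
            public Object readAsSchema2(String xml) throws JAXBException {
                Unmarshaller u = ctx2.createUnmarshaller();
                return u.unmarshal(new StringReader(xml));
            }
        }

    Because the root element name is identical in both schemas, the caller still has to decide which context applies to a given message (for example by sniffing for a child element such as AA versus CCC) before unmarshalling.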

    Read the article

  • Binding a dataset to a GridView within a WCF REST retrieval method using LINQ to SQL

    - by user643794
    I used a WCF REST template to build a WCF service library to create PUT and GET calls. PUT method works fine sending my blob to a database. On the GET, I want to be able to access the web service directly and display the results from a stored procedure as a dataset and bind this to a gridview. The stored procedure is a simple select statement, returning three of the four columns from the table. I have the following: [WebGet(UriTemplate = "/?name={name}", ResponseFormat = WebMessageFormat.Xml)] public List<Object> GetCollection(string name) { try { db.OpenDbConnection(); // Call to SQL stored procedure return db.GetCustFromName(name); } catch (Exception e) { Log.Error("Stored Proc execution failed. ", e); } finally { db.CloseDbConnection(); } return null; } I also added Linq to SQL class to include my database table and stored procedures access. I also created the Default.aspx file in addition to the other required files. protected void Page_Load(object sender, EventArgs e) { ServiceDataContext objectContext = new ServiceDataContext(); var source = objectContext.GetCustFromName("Tiger"); Menu1.DataSource = source; Menu1.DataBind(); } But this gives me The entity type '' does not belong to any registered model. Where should the data binding be done? What should be the return type for GetCollection()? I am stuck with this. Please provide help on how to do this.
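
    A hedged sketch of one thing worth trying (illustrative, not from the original post): return the typed result that the LINQ to SQL designer generates for the stored procedure, and materialize it with ToList() before it leaves the method, so the data-bound control never sees a lazily evaluated ISingleResult. The result-type name GetCustFromNameResult below is an assumption - the actual name depends on the DBML - and both snippets are drop-in replacements for the methods shown above.

        [WebGet(UriTemplate = "/?name={name}", ResponseFormat = WebMessageFormat.Xml)]
        public List<GetCustFromNameResult> GetCollection(string name)
        {
            // Let the DataContext manage the connection; ToList() runs the proc right here.
            using (var db = new ServiceDataContext())
            {
                return db.GetCustFromName(name).ToList();
            }
        }

        // The same idea applies in Default.aspx.cs when binding directly:
        protected void Page_Load(object sender, EventArgs e)
        {
            using (var objectContext = new ServiceDataContext())
            {
                Menu1.DataSource = objectContext.GetCustFromName("Tiger").ToList();
                Menu1.DataBind();
            }
        }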

    Read the article

  • Querying the Datastore in Python

    - by Ray
    Greetings! I am trying to work with a single column in the datastore. I can view and display the contents like this - q = test.all() q.filter("adjclose =", "adjclose") q = db.GqlQuery("SELECT * FROM test") results = q.fetch(5) for p in results: p1 = p.adjclose print "The value is --> %f" % (p.adjclose) However, I need to calculate the historical values with the adjclose column, and I am not able to get past the error for c in range(len(p1)-1): TypeError: object of type 'float' has no len() Here is my code! for c in range(len(p1)-1): p1.append(p1[c+1]-p1[c]/p1[c]) p2 = (p1[c+1]-p1[c]/p1[c]) print "the p1 value<-- %f" % (p2) print "dfd %f" %(p1) I'm new to Python; any help will be greatly appreciated! Thanks in advance, Ray HERE IS THE COMPLETE CODE class CalHandler(webapp.RequestHandler): def get(self): que = db.GqlQuery("SELECT * from test") user_list = que.fetch(limit=100) doRender( self, 'memberscreen2.htm', {'user_list': user_list} ) q = test.all() q.filter("adjclose =", "adjclose") q = db.GqlQuery("SELECT * FROM test") results = q.fetch(5) for p in results: p1 = p.adjclose print "The value is --> %f" % (p.adjclose) for c in range(len(p1)-1): p1.append(p1[c+1]-p1[c]/p1[c]) print "the p1 value<--> %f" % (p2) print "dfd %f" %(p1)
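
    A minimal sketch of the fix: inside the loop, p1 is a single float (the adjclose of one entity), which is why len(p1) raises the TypeError. Collecting the values into a list first, and parenthesising the return calculation, gives the historical series (model and property names follow the snippet above):

        q = db.GqlQuery("SELECT * FROM test")
        results = q.fetch(5)

        closes = [p.adjclose for p in results]      # a list of floats, oldest first
        returns = []
        for c in range(len(closes) - 1):
            # change from one close to the next: (next - current) / current
            returns.append((closes[c + 1] - closes[c]) / closes[c])

        for r in returns:
            print "the adjclose change --> %f" % r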

    Read the article

  • Problem with evaluating XPath expression in Java

    - by JSteve
    Can somebody help me find the mistake I am doing in evaluating following XPath expression? I want to get all "DataTable" nodes under the node "Model" in my xml through XPath. Here is my XML doc: <?xml version="1.0" encoding="UTF-8"?> <Root> <Application> <Model> <DataSet name="ds" primaryTable="Members" openRows="1"> <DataTable name="Members" openFor="write"> <DataColumn name="id" type="String" mandatory="true" primaryKey="true" valueBy="user"/> <DataColumn name="name" type="String" mandatory="true" valueBy="user"/> <DataColumn name="address" type="String" mandatory="false" valueBy="user"/> <DataColumn name="city" type="String" mandatory="false" valueBy="user"/> <DataColumn name="country" type="String" mandatory="false" valueBy="user"/> </DataTable> </DataSet> </Model> <View> <Composite> <Grid> <Label value="ID:" row="0" column="0" /> <Label value="Name:" row="1" column="0" /> <Label value="Address:" row="2" column="0" /> <Label value="City:" row="3" column="0" /> <Label value="Country:" row="4" column="0" /> <TextField name="txtId" row="0" column="1" /> <TextField name="txtName" row="1" column="1" /> <TextField name="txtAddress" row="2" column="1" /> <TextField name="txtCity" row="3" column="1" /> <TextField name="txtCountry" row="4" column="1" /> </Grid> </Composite> </View> </Application> </Root> Here the Java code to extract required node list: try { DocumentBuilderFactory domFactory = DocumentBuilderFactory.newInstance(); domFactory.setNamespaceAware(true); domFactory.setIgnoringComments(true); domFactory.setIgnoringElementContentWhitespace(true); DocumentBuilder builder = domFactory.newDocumentBuilder(); Document dDoc = builder.parse("D:\TEST\myFile.xml"); XPath xpath = XPathFactory.newInstance().newXPath(); NodeList nl = (NodeList) xpath.evaluate("//Model//DataTable", dDoc, XPathConstants.NODESET); System.out.println(nl.getLength()); }catch (Exception ex) { System.out.println(ex.getMessage()); } There is no problem in loading and parsing xml file and I can see correct nodes in dDoc. Problem is with xpath that returns nothing on evaluating my expression. I tried many other expressions for testing purpose but every time resulting NodeList "nl" does not have any item
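
    A hedged, self-contained sketch of the same lookup (not from the original post). Two details are worth double-checking: backslashes in a Java string literal have to be escaped ("D:\\TEST\\myFile.xml" - \T and \m are not valid escapes), and because the parse is namespace-aware, the prefix-less expression only matches elements in no namespace, so it will silently stop matching if the document ever gains a default namespace and no NamespaceContext is registered on the XPath object. Against the XML above, this prints 1.

        import javax.xml.parsers.DocumentBuilderFactory;
        import javax.xml.xpath.XPath;
        import javax.xml.xpath.XPathConstants;
        import javax.xml.xpath.XPathFactory;
        import org.w3c.dom.Document;
        import org.w3c.dom.NodeList;

        public class DataTableLookup {
            public static void main(String[] args) throws Exception {
                DocumentBuilderFactory domFactory = DocumentBuilderFactory.newInstance();
                domFactory.setNamespaceAware(true);
                Document doc = domFactory.newDocumentBuilder().parse("D:\\TEST\\myFile.xml");

                XPath xpath = XPathFactory.newInstance().newXPath();
                NodeList tables = (NodeList) xpath.evaluate("//Model//DataTable",
                        doc, XPathConstants.NODESET);
                System.out.println("DataTable nodes found: " + tables.getLength());
            }
        }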

    Read the article

  • A many-to-many relation join disallows IntelliSense/lookup in the joined table

    - by BerggreenDK
    I want to be able to select a product and retrieve all sub-parts (products) within it. My approach is to find the Product id, then retrieve the list of ProductParts with that as a parent and, while fetching those, follow the key back to the Product child to get the name and details of each Part. I was hoping to use something similar to: part.linked_product_id.name I have two tables. One for [Product] and a relation [ProductPart] that has two FK ID's to [Product] Table Product() { int ID; // (PRIMARY, NOT NULL) String Name; ... details removed for overview purpose... } Table ProductPart() { int Product_ID; // FK (linked with relation to Product/parent) int Part_Product_ID; // FK (linked with relation to Product/children) ... details removed for overview purpose... } I have checked the SQL diagram and it shows the two relations (both are one to many), and in my DBML they also look right. Here is my LINQ to SQL snippet that doesn't work for me... wondering why my joins don't work as expected. FaultySnippet() { ProductDataContext db = new ProductDataContext(); var list = ( from part in db.ProductParts join prod in db.Products on part.Part_Product_ID equals prod.ID where (part.Product_ID == product_ID) select new { Name = part.Part_Product_ID. ?? // <-- NO details from Joined table? ... rest of properties from ProductPart join... I hoped... } ); }
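
    A minimal sketch of the likely fix: Part_Product_ID is just the int foreign-key column, so it has no members to drill into; the join variable prod already is the child Product row, and the name and details can be projected from it directly (or, if the DBML associations are generated, from the association property on ProductPart - its exact name depends on the designer, so it is not shown here):

        // Drop-in replacement for the body of FaultySnippet():
        var list = from part in db.ProductParts
                   join prod in db.Products on part.Part_Product_ID equals prod.ID
                   where part.Product_ID == product_ID
                   select new
                   {
                       Name = prod.Name,          // details come from the joined Product row
                       PartProductId = prod.ID
                       // ...any other Product / ProductPart columns needed
                   };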

    Read the article

  • MySQL and PHP - Reading multiple insert queries from a file and executing them at runtime

    - by SpikETidE
    Hi everyone... I am trying out a backup and restore system. Basically, what I try to do is: when creating the backup I make a file which looks like DELETE FROM bank_account; INSERT INTO bank_account VALUES('1', 'IB6872', 'Indian Bank', 'Indian Bank', '2', '1', '0', '0000-00-00 00:00:00', '1', '2010-04-13 17:09:05');INSERT INTO bank_account VALUES('2', 'IB7391', 'Indian Bank', 'Indian Bank', '3', '1', '0', '0000-00-00 00:00:00', '1', '2010-04-13 17:09:32'); and so on and so forth. When I restore the db I just read the query from the file, save it to a string and then execute it over the DB using mysql_query(); The problem is, when I run the query through mysql_query(), the execution stops after the delete query with the error 'Error in syntax near '; INSERT INTO bank_account VALUES('1', 'IB6872', 'Indian Bank', 'Indian Bank', '2',' at line 1. But when I run the queries directly on the DB using phpMyAdmin, they execute without any errors. As far as I can see, there is no syntax error in the query. Can anyone point out where there might be a glitch...? Thanks and regards....
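
    A minimal sketch of the cause and a workaround: mysql_query() accepts exactly one statement per call, so the combined "DELETE ...; INSERT ...; INSERT ..." string fails at the first semicolon, while phpMyAdmin works because it splits the statements itself before sending them. Splitting the dump in PHP (or switching to mysqli::multi_query()) restores it; the naive explode() below assumes no literal semicolons inside the quoted values.

        <?php
        $sql = file_get_contents('backup.sql');    // the backup file described above

        foreach (explode(';', $sql) as $statement) {
            $statement = trim($statement);
            if ($statement === '') {
                continue;                          // skip the empty piece after the last ';'
            }
            if (!mysql_query($statement)) {
                die('Restore failed: ' . mysql_error());
            }
        }

        // Alternatively, ext/mysqli can send the whole string in one call:
        // $mysqli = new mysqli('localhost', 'user', 'pass', 'dbname');
        // $mysqli->multi_query($sql);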

    Read the article

  • Web page database query optimization

    - by morpheous
    I am putting together a web page which is quite 'expensive' in terms of database hits. I don't want to start optimizing at this stage - though with me trying to hit a deadline, I may end up not optimizing at all. Currently the page requires 18 (that's right eighteen) hits to the db. I am already using joins, and some of the queries are UNIONed to minimize the trips to the db. My local dev machine can handle this (page is not slow) however, I feel if I release this into the wild, the number of queries will quickly overwhelm my database (MySQL). I could always use memcache or something similar, but I would much rather continue with my other dev work that needs to be completed before the deadline - at least retrieving the page works - its simply a matter of optimization now (if required). My question therefore is - is 18 db queries for a single page retrieval completely outrageous - (i.e. I should put everything on hold and optimize the hell of the retrieval logic), or shall I continue as normal, meet the deadline and release on schedule and see what happens? [Edit] Just to clarify, I have already done the 'obvious' things like using (single and composite) indexes for fields used in the queries. What I haven't yet done is to run a query analyzer to see if my indexes etc are optimal.

    Read the article

  • Proxy Issues with Javascript Cross Domain RSS Feed Parsing

    - by Amir
    This is my Javascript function which grabs an rss feed via the proxy script and then spits out the 5 latest rss items from the feed along with a link to my stylesheet: function getWidget (feed,limit) { if (window.XMLHttpRequest) { xhttp=new XMLHttpRequest() } else { xhttp=new ActiveXObject("Microsoft.XMLHTTP") } xhttp.open("GET","http://MYSITE/proxy.php?url="+feed,false); xhttp.send(""); xmlDoc=xhttp.responseXML; var x = 1; var div = document.getElementById("div"); srdiv.innerHTML = '<link type="text/css" href="http://MYSITE/css/widget.css" rel="stylesheet" /><div id="rss-title"></div></h3><div id="items"></div><br /><br /><a href="http://MYSITE">Powered by MYSITE</a>'; document.body.appendChild(div); content=xmlDoc.getElementsByTagName("title"); thelink=xmlDoc.getElementsByTagName("link"); document.getElementByTagName("rss-title").innerHTML += content[0].childNodes[0].nodeValue; for (x=1;x<=limit;srx++) { y=x; y--; var shout = '<div class="item"><a href="'+thelink[y].childNodes[0].nodeValue+'">'+content[x].childNodes[0].nodeValue+'</a></div>'; document.getElementById("items").innerHTML += shout; } } Here is the the code from proxy.php: $session = curl_init($_GET['url']); // Open the Curl session curl_setopt($session, CURLOPT_HEADER, false); // Don't return HTTP headers curl_setopt($session, CURLOPT_RETURNTRANSFER, true); // Do return the contents of the call $xml = curl_exec($session); // Make the call header("Content-Type: text/xml"); // Set the content type appropriately echo $xml; // Spit out the xml curl_close($session); // And close the session Now when I try to load this on any domain that's not my site nothing loads. I get no JS errors, but I in the Console tab in firebug I get "407 Proxy Authentication Required" So I'm not really sure how to make this work. The goal is to be able to grab the RSS feed, parse it to grab the titles and links and spit it out into some HTML on any website on the web. I"m basically making a simple RSS widget for my site's various RSS feeds. My Javascript is wack Also, I'm really a beginner with Javascript. I know jQuery pretty well, but I wasn't able to use it in this case, because this script will be embeded on any site and I can't really rely on the jQuery library. So I was decided to write some basic Javascript relying on the default XML parsing options available. Any suggestions here would be cool. Thanks! What's with the x and y They way my site creates RSS feeds is that the first title is actually the RSS feed title. The second title is the title of the first item. The first link is the link to the first item. So when using the javascript to get the title, I had to first grab the first title (which is the RSS title) and then start with the second title that being the first title of the item. Sorry for the confusion, but I don't think this is related to my issue. Just wanted to clarify my code.
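
    A hedged note and sketch (speculative, not from the original post): a 407 is issued by an intermediate HTTP proxy that wants credentials, not by proxy.php itself. If the web server running proxy.php reaches the internet through an authenticating proxy, cURL has to be told about it as below (host, port and credentials are placeholders); if instead the browser making the test request sits behind such a proxy, the fix is on the client network side rather than in this script. Separately, document.getElementByTagName in the JavaScript is not a DOM method - since rss-title is an id, document.getElementById("rss-title") is presumably what was intended.

        <?php
        $session = curl_init($_GET['url']);
        curl_setopt($session, CURLOPT_HEADER, false);
        curl_setopt($session, CURLOPT_RETURNTRANSFER, true);
        // Only needed when the server itself sits behind an authenticating proxy:
        curl_setopt($session, CURLOPT_PROXY, 'proxy.example.com:8080');     // placeholder
        curl_setopt($session, CURLOPT_PROXYUSERPWD, 'username:password');   // placeholder
        $xml = curl_exec($session);
        header("Content-Type: text/xml");
        echo $xml;
        curl_close($session);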

    Read the article

  • Entity Framework Duplicate type name within an assembly (6.1.0)

    - by CodeMilian
    I am not sure what is going on, but I keep getting the following exception when doing a query: "Duplicate type name within an assembly." I have not been able to find a solution on the web. I had resolved the issue by removing Entity Framework from all the projects in the solution and re-installing it using NuGet. Then all of a sudden the exception is back. I have verified my table schema over and over and find nothing wrong with it. This is the query causing the exception: var BaseQuery = from Users in db.Users join UserInstalls in db.UserTenantInstalls on Users.ID equals UserInstalls.UserID join Installs in db.TenantInstalls on UserInstalls.TenantInstallID equals Installs.ID where Users.Username == Username && Users.Password == Password && Installs.Name == Install select Users; var Query = BaseQuery.Include("UserTenantInstalls.TenantInstall"); return Query.FirstOrDefault(); As I mentioned previously, the same query was working before. The data has not changed and the code has not changed.

    Read the article

  • Querying a huge database table takes too much time in MySQL

    - by Vijay
    Hi all, I am running sql queries on a mysql db table that has 110Mn+ unique records for whole day. Problem: Whenever I run any query with "where" clause it takes at least 30-40 mins. Since I want to generate most of data on the next day, I need access to whole db table. Could you please guide me to optimize / restructure the deployment model? Site description: mysql Ver 14.12 Distrib 5.0.24, for pc-linux-gnu (i686) using readline 5.0 4 GB RAM, Dual Core dual CPU 3GHz RHEL 3 my.cnf contents : [root@reports root]# cat /etc/my.cnf [mysqld] datadir=/data/mysql/data/ socket=/tmp/mysql.sock sort_buffer_size = 2000000 table_cache = 1024 key_buffer = 128M myisam_sort_buffer_size = 64M # Default to using old password format for compatibility with mysql 3.x # clients (those using the mysqlclient10 compatibility package). old_passwords=1 [mysql.server] user=mysql basedir=/data/mysql/data/ [mysqld_safe] err-log=/data/mysql/data/mysqld.log pid-file=/data/mysql/data/mysqld.pid [root@reports root]# DB table details: CREATE TABLE `RAW_LOG_20100504` ( `DT` date default NULL, `GATEWAY` varchar(15) default NULL, `USER` bigint(12) default NULL, `CACHE` varchar(12) default NULL, `TIMESTAMP` varchar(30) default NULL, `URL` varchar(60) default NULL, `VERSION` varchar(6) default NULL, `PROTOCOL` varchar(6) default NULL, `WEB_STATUS` int(5) default NULL, `BYTES_RETURNED` int(10) default NULL, `RTT` int(5) default NULL, `UA` varchar(100) default NULL, `REQ_SIZE` int(6) default NULL, `CONTENT_TYPE` varchar(50) default NULL, `CUST_TYPE` int(1) default NULL, `DEL_STATUS_DEVICE` int(1) default NULL, `IP` varchar(16) default NULL, `CP_FLAG` int(1) default NULL, `USER_LOCATE` bigint(15) default NULL ) ENGINE=MyISAM DEFAULT CHARSET=latin1 MAX_ROWS=200000000; Thanks in advance! Regards,
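
    A hedged sketch of the first thing to check: the CREATE TABLE above defines no indexes at all, so every query with a WHERE clause scans all 110M+ rows. Adding indexes on the columns the queries actually filter on is usually what turns 30-40 minute scans into seconds; the columns below (USER, DT, GATEWAY) are only examples, not a recommendation for this exact workload.

        -- Index whichever columns appear in your WHERE clauses (examples only):
        ALTER TABLE RAW_LOG_20100504
            ADD INDEX idx_user (`USER`),
            ADD INDEX idx_dt_gateway (`DT`, `GATEWAY`);

        -- EXPLAIN confirms whether the index is actually used:
        EXPLAIN SELECT COUNT(*) FROM RAW_LOG_20100504 WHERE `USER` = 123456789012;

    With 4 GB of RAM, raising key_buffer well above the configured 128M is also worth testing once the indexes exist, since MyISAM index lookups live or die by the key cache.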

    Read the article

  • Returning Database Blobs in TurboGears 2.x / FCGI / Lighttpd extremely slow

    - by Tom
    Hey everyone, I am running a TG2 App on lighttpd via flup/fastcgi. We are reading images (~30kb each) from BlobFields in a MySQL database and return those images with a custom mime type via a controller method. Caching these images on the hard disk makes no sense because they change with every request, the only reason we cache these in the DB is that creating these images is quite expensive and the data used to create the images is also present in plain text on the website. Now to the problem itself: When returning such an image, things get extremely slow. The code runs totally fine on paster itself with no visible delay, but as soon as its running via fcgi/lighttpd the described phenomenon happens. I profiled the method of my controller that returns my blob, and the entire method runs in a few miliseconds, but when "return" executes, the entire app hangs for roughly 10 seconds. We could not reproduce the same error with PHP on FCGI. This only seems to happen with Turbogears or Pylons. Here for your consideration the concerned piece of source code: @expose(content_type=CUSTOM_CONTENT_TYPE) def return_img(self, img_id): """ Return a DB persisted image when requested """ img = model.Images.by_id(img_id) #get image from DB response.headers['content-type'] = 'image/png' return img.data # this causes the app to hang for 10 seconds

    Read the article

  • [MySQL/PHP] Avoid using RAND()

    - by Andrew Ellis
    So... I have never had a need to do a random SELECT on a MySQL DB until this project I'm working on. After researching it seems the general populous says that using RAND() is a bad idea. I found an article that explains how to do another type of random select. Basically, if I want to select 5 random elements, I should do the following (I'm using the Kohana framework here)? If not, what is a better solution? Thanks, Andrew <?php final class Offers extends Model { /** * Loads a random set of offers. * * @param integer $limit * @return array */ public function random_offers($limit = 5) { // Find the highest offer_id $sql = ' SELECT MAX(offer_id) AS max_offer_id FROM offers '; $max_offer_id = DB::query(Database::SELECT, $sql) ->execute($this->_db) ->get('max_offer_id'); // Check to make sure we're not trying to load more offers // than there really is... if ($max_offer_id < $limit) { $limit = $max_offer_id; } $used = array(); $ids = ''; for ($i = 0; $i < $limit; ) { $rand = mt_rand(1, $max_offer_id); if (!isset($used[$rand])) { // Flag the ID as used $used[$rand] = TRUE; // Set the ID if ($i > 0) $ids .= ','; $ids .= $rand; ++$i; } } $sql = ' SELECT offer_id, offer_name FROM offers WHERE offer_id IN(:ids) '; $offers = DB::query(Database::SELECT, $sql) ->param(':ids', $ids) ->as_object(); ->execute($this->_db); return $offers; } }
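
    A hedged sketch of an alternative (illustrative only): the approach above assumes offer_id values are contiguous, so any gap left by a deleted row produces an ID that matches nothing, and binding the comma-separated list into IN(:ids) may be quoted as a single string depending on the driver. For a small table, ORDER BY RAND() LIMIT 5 is perfectly fine; for a larger one, counting the rows once and pulling each pick by a random offset avoids both problems while reusing the same query-builder calls. A drop-in replacement for random_offers() above:

        public function random_offers($limit = 5)
        {
            // Count once, then pick distinct random offsets into the table.
            $total = (int) DB::query(Database::SELECT, 'SELECT COUNT(*) AS total FROM offers')
                ->execute($this->_db)
                ->get('total');

            $limit = min($limit, $total);
            if ($limit < 1)
            {
                return array();
            }

            $offsets = array();
            while (count($offsets) < $limit)
            {
                $offsets[mt_rand(0, $total - 1)] = TRUE;   // array keys keep the picks unique
            }

            // Look up the offer_id sitting at each random offset.
            $ids = array();
            foreach (array_keys($offsets) as $offset)
            {
                $ids[] = DB::query(Database::SELECT,
                        'SELECT offer_id FROM offers ORDER BY offer_id LIMIT 1 OFFSET ' . (int) $offset)
                    ->execute($this->_db)
                    ->get('offer_id');
            }

            // Same final query as before, but every ID is guaranteed to exist.
            return DB::query(Database::SELECT,
                    'SELECT offer_id, offer_name FROM offers WHERE offer_id IN (' . implode(',', $ids) . ')')
                ->as_object()
                ->execute($this->_db);
        }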

    Read the article

  • Recursing data into a two-dimensional array in PHP 5

    - by user315699
    I'm getting bamboozled by "for each" loops and two dimensional arrays, and I'm a php newb so please bear with me (and ignore any variables with the word "image" - it's all about the mp3s, I just didn't change it from the xml tutorial) I found a php function on the net that list files in a directory, the output of which is: Array ( [0] = audio/1.mp3 [1] = audio/2.mp3 [2] = audio/3.mp3 [3] = audio/4.mp3 [4] = audio/5.mp3 ) As expected. And another that lists some info about mp3 files. $mp3datafile = 'audio/1.mp3'; $m = new mp3file($mp3datafile); $mp3dataArray = $m-get_metadata(); print_r($mp3dataArray); unset($mp3dataArray); The output of which is Array ( [Filesize] = 31972 [Encoding] = CBR [etc] ) In order to automatically build RSS for a podcast, I need to generate XML for each item. So far so good. This is how I'm making the xml foreach ($imagearray as $key = $val) { $tnsoundfile = $xml_generator-addChild('item'); $tnsoundfile-addChild('title', $podcasttitle); $enclosure = $tnsoundfile-addChild('enclosure'); $enclosure-addAttribute('url', $val); // that's the filename $enclosure-addAttribute('length', $mp3dataArray[Filesize]); // << Length is file length, not time. But later I also need $mp3dataArray[Length mm:ss] for duration tag. $enclosure-addAttribute('type', 'audio/mpeg'); $tnsoundfile-addChild('guid', $path_to_image_dir.'/'.$val); } (The above has been truncated, I realise it's not proper xml right now, but it was just to show what was going on). Perfect. But I need to do it for as many files as there are in the directory. So, I have an array of the names of the files in the directory in $mp3data And, I have an array of mp3 data in $mp3dataArray from one iteration of the get_metadata() function. If I do the following, then I get a nice list of the mp3 data of the 5 files in the directory: foreach ($mp3data as $key = $val) { $mp3datafile = $val; $m = new mp3file($mp3datafile); $mp3dataArray = $m-get_metadata(); print_r($mp3dataArray); unset($mp3dataArray); } As expected. Where I'm struggling, and have been for most of the day in spite of reading many forums and tutorials, is how to populate the "second dimension" of the array, so that it goes through 1,2,3,4 and 5.mp3 (or however many there are), extracts the metadata, then allows me to use it in the xml section above. Here's what I have foreach ($mp3data as $key = $val) { $mp3datafile = $val; $m = new mp3file($mp3datafile); $mp3dataArray = $m-get_metadata(); $mp3testarray = array($mp3dataArray); } print_r($mp3dataArray); Shouldn't that line print_r($mp3dataArray); give me a nice list of 5 lots of mp3 data, in the way it did when I recursed through the loop as before? Cos this is driving me nuts! It must be something so simple, and any help would be greatly appreciated. Thank you in advance.
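
    A minimal sketch of the missing step: the loop overwrites (and then unsets) $mp3dataArray on every pass, so nothing accumulates. Pushing each get_metadata() result into an outer array keyed by filename builds the two-dimensional structure, and the RSS loop can then read the filename and its metadata side by side (function and variable names follow the snippets above):

        <?php
        $allMeta = array();
        foreach ($mp3data as $val) {
            $m = new mp3file($val);
            $allMeta[$val] = $m->get_metadata();   // e.g. $allMeta['audio/1.mp3']['Filesize']
        }

        // One <item> per file, using both the filename and its metadata:
        foreach ($allMeta as $filename => $meta) {
            $tnsoundfile = $xml_generator->addChild('item');
            $tnsoundfile->addChild('title', $podcasttitle);
            $enclosure = $tnsoundfile->addChild('enclosure');
            $enclosure->addAttribute('url', $filename);
            $enclosure->addAttribute('length', $meta['Filesize']);   // file length in bytes
            $enclosure->addAttribute('type', 'audio/mpeg');
            $tnsoundfile->addChild('guid', $path_to_image_dir . '/' . $filename);
            // $meta['Length mm:ss'] is available here for the duration tag as well.
        }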

    Read the article

  • Question on overview of C# OOP in a business WinForms application - scope of objects

    - by TimR
    I may have all this OO completely wrong, but here goes: Ok the scenario is a classic order entry. Customer places an Order which has OrderLineItems of StockItems. Order is entered by Employee. 1) Application starts and asks for login/password 2) Employee selects 'Orders' from Mainmenu form 3) Orders forms opens.... 4) Employee selects Customer 5) Employee selects Stock adds to OrderLineItems 6) Selects second StockItem; add to OrderLineItems 7) Order is committed, [stock decremented, order posted to DB, Order printed] 8) Employee is returned to MainMenu Now with Object scope: 1) Application starts and asks for login/password Is this the best place to make objEmployee, to be kept whilst in this whole Sales application? 2) Employee selects 'Orders' from Mainmenu form 3) Orders forms opens.... *Make objOrderHeader, is objEmployee able to be passed in or is it created here, or re-created here.* 4) Employee selects Customer - adds/edits Customer details if required... Make objCustomer 5) Employee selects Stock adds to OrderLineItems... *Make objStockItem and objOrderLineItem - add to objOrderLineItems_collection* 6) Selects second StockItem; add to OrderLineItems... *Make objStockItem and objOrderLineItem - add to objOrderLineItems_collection* 7) Order is committed, [stock decremented, order posted to DB, Order printed, Order Entered By = EmployeeID] Once posted to Db, all objects now redundant/garbage [except objEmployee?] 8) Employee is returned to MainMenu is objEmployee still valid as an object?

    Read the article

  • Django: How to dynamically add tag field to third party apps without touching app's source code

    - by Chris Lawlor
    Scenario: large project with many third party apps. Want to add tagging to those apps without having to modify the apps' source. My first thought was to first specify a list of models in settings.py (like ['appname.modelname',], and call django-tagging's register function on each of them. The register function adds a TagField and a custom manager to the specified model. The problem with that approach is that the function needs to run BEFORE the DB schema is generated. I tried running the register function directly in settings.py, but I need django.db.models.get_model to get the actual model reference from only a string, and I can't seem to import that from settings.py - no matter what I try I get an ImportError. The tagging.register function imports OK however. So I changed tactics and wrote a custom management command in an otherwise empty app. The problem there is that the only signal which hooks into syncdb is post_syncdb which is useless to me since it fires after the DB schema has been generated. The only other approach I can think of at the moment is to generate and run a 'south' like database schema migration. This seems more like a hack than a solution. This seems like it should be a pretty common need, but I haven't been able to find a clean solution. So my question is: Is it possible to dynamically add fields to a model BEFORE the schema is generated, but more specifically, is it possible to add tagging to a third party model without editing it's source. To clarify, I know it is possible to create and store Tags without having a TagField on the model, but there is a major flaw in that approach in that it is difficult to simultaneously create and tag a new model.

    Read the article

  • Why can't I run a Perl program from TextMate?

    - by JZ
    I'm following a bioinformatics text, and this represents one of my first Perl scripts. While in TextMate, this does not produce any result. Is it functioning? I added "hello world" at the bottom and I don't see that when I run the script in TextMate. What have I done wrong? #!/usr/local/bin/perl -w use lib "/Users/fogonthedowns/myperllib"; use LWP::Simple; use strict; #Set base URL for all eutils my $utils = "http://eutils.ncbi.nlm.nih.gov/entrez/eutils"; my $db = "Pubmed"; my $query ="Cancer+Prostate"; my $retmax = 10; my $esearch = "$utils/esearch.fcgi?" . "db=$db&retmax=$retmax&term="; my $esearch_result = get($esearch.$query); print "ESEARCH RESULT: $esearch_result\n"; print "Using Query: \n$esearch$query\n"; print "hello world\n";

    Read the article

  • Is it possible to replace values in a queryset before sending it to your template?

    - by Issy
    Hi Guys, Wondering if it's possible to change a value returned from a queryset before sending it off to the template. Say for example you have a bunch of records Date | Time | Description 10/05/2010 | 13:30 | Testing... etc... However, based on the day of the week the time may change. However this is static. For example on a monday the time is ALWAYS 15:00. Now you could add another table to configure special cases but to me it seems overkill, as this is a rule. How would you replace that value before sending it to the template? I thought about using the new if tags (if day=1), but this is more of business logic rather then presentation. Tested this in a custom template tag def render(self, context): result = self.model._default_manager.filter(from_date__lte=self.now).filter(to_date__gte=self.now) if self.day == 4: result = result.exclude(type__exact=2).order_by('time') else: result = result.order_by('type') result[0].time = '23:23:23' context[self.varname] = result return '' However it still displays the results from the DB, is this some how related to 'lazy' evaluation of templates? Thanks! Update Responding to comments below: It's not stored wrong in the DB, its stored Correctly However there is a small side case where the value needs to change. So for example I have a From Date & To date, my query checks if todays date is between those. Now with this they could setup a from date - to date for an entire year, and the special cases (like mondays as an example) is taken care off. However if you want to store in the DB you would have to capture several more records to cater for the side case. I.e you would be capturing the same information just to cater for that 1 day when the time changes. (And the time always changes on the same day, and is always the same)
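
    A minimal sketch of why the override is lost and how to keep it: a QuerySet is lazy, and result[0] issues a fresh query each time it is evaluated, so the assignment is made on a throwaway object and the template later re-runs the query and sees the original rows. Evaluating the queryset once with list() and mutating those instances keeps the change visible (field names follow the tag above; the Monday rule is the example from the question):

        def render(self, context):
            result = self.model._default_manager \
                .filter(from_date__lte=self.now) \
                .filter(to_date__gte=self.now)

            if self.day == 4:
                result = result.exclude(type__exact=2).order_by('time')
            else:
                result = result.order_by('type')

            result = list(result)            # evaluate once; the template gets these objects
            if result and self.day == 1:     # e.g. "on a Monday the time is always 15:00"
                result[0].time = '15:00:00'

            context[self.varname] = result
            return ''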

    Read the article

  • Please help clean this loop

    - by Alex Angelini
    I do not code much in Javascript, but I have the following snippet which IMHO looks horrendous and I have to do this nested iteration quite often in my code. Does anyone have a prettier/easier to read solution? function addBrowse(data) { var list = $('<ul></ul>') for(i = 0; i < data.list.length; i++) { var file = list.append('<li class="toLeft">' + data.list[i].name + '</li>') for(j = 0; j < data.list[i].children.length; j++) { var db = file.append('<li>' + data.list[i].children[j].name + '</li>') for(k = 0; k < data.list[i].children[j].children.length; k++) db.append('<li class="toRight">' + data.list[i].children[j].children[k].name + '</li>') } } $('#browse').append(list).show()} Here is a sample data element {"file":"","db":"","tbl":"","page":"browse","list":[ { "name":"/home/alex/GoSource/test1.txt", "children":[ { "name":"go", "children":[ { "name":"validation1", "children":[ ] } ] } ] }, { "name":"/home/alex/GoSource/test2.txt", "children":[ { "name":"go", "children":[ { "name":"validation2", "children":[ ] } ] } ] }, { "name":"/home/alex/GoSource/test3.txt", "children":[ { "name":"go", "children":[ { "name":"validation3", "children":[ ] } ] } ] }]} Thanks a lot

    Read the article

  • What's the difference between these LINQ queries?

    - by SnAzBaZ
    I use LINQ-SQL as my DAL, I then have a project called DB which acts as my BLL. Various applications then access the BLL to read / write data from the SQL Database. I have these methods in my BLL for one particular table: public IEnumerable<SystemSalesTaxList> Get_SystemSalesTaxList() { return from s in db.SystemSalesTaxLists select s; } public SystemSalesTaxList Get_SystemSalesTaxList(string strSalesTaxID) { return Get_SystemSalesTaxList().Where(s => s.SalesTaxID == strSalesTaxID).FirstOrDefault(); } public SystemSalesTaxList Get_SystemSalesTaxListByZipCode(string strZipCode) { return Get_SystemSalesTaxList().Where(s => s.ZipCode == strZipCode).FirstOrDefault(); } All pretty straight forward I thought. Get_SystemSalesTaxListByZipCode is always returning a null value though, even when it has a ZIP Code that exists in that table. If I write the method like this, it returns the row I want: public SystemSalesTaxList Get_SystemSalesTaxListByZipCode(string strZipCode) { var salesTax = from s in db.SystemSalesTaxLists where s.ZipCode == strZipCode select s; return salesTax.FirstOrDefault(); } Why does the other method not return the same, as the query should be identical ? Note that, the overloaded Get_SystemSalesTaxList(string strSalesTaxID) returns a record just fine when I give it a valid SalesTaxID. Is there a more efficient way to write these "helper" type classes ? Thanks!

    Read the article

  • S#arp Architecture many-to-many mapping and ADO.NET Data Services: A single resource was expected for the result

    - by Leg10n
    Hi, I'm developing an application that reads data from a SQL server database (migrated from a legacy DB) with nHibernate and s#arp architecture through ADO.NET Data services. I'm trying to map a many-to-many relationship. I have a Error class: public class Error { public virtual int ERROR_ID { get; set; } public virtual string ERROR_CODE { get; set; } public virtual string DESCRIPTION { get; set; } public virtual IList<ErrorGroup> GROUPS { get; protected set; } } And then I have the error group class: public class ErrorGroup { public virtual int ERROR_GROUP_ID {get; set;} public virtual string ERROR_GROUP_NAME { get; set; } public virtual string DESCRIPTION { get; set; } public virtual IList<Error> ERRORS { get; protected set; } } And the overrides: public class ErrorGroupOverride : IAutoMappingOverride<ErrorGroup> { public void Override(AutoMapping<ErrorGroup> mapping) { mapping.Table("ERROR_GROUP"); mapping.Id(x => x.ERROR_GROUP_ID, "ERROR_GROUP_ID"); mapping.IgnoreProperty(x => x.Id); mapping.HasManyToMany<Error>(x => x.Error) .Table("ERROR_GROUP_LINK") .ParentKeyColumn("ERROR_GROUP_ID") .ChildKeyColumn("ERROR_ID").Inverse().AsBag(); } } public class ErrorOverride : IAutoMappingOverride<Error> { public void Override(AutoMapping<Error> mapping) { mapping.Table("ERROR"); mapping.Id(x => x.ERROR_ID, "ERROR_ID"); mapping.IgnoreProperty(x => x.Id); mapping.HasManyToMany<ErrorGroup>(x => x.GROUPS) .Table("ERROR_GROUP_LINK") .ParentKeyColumn("ERROR_ID") .ChildKeyColumn("ERROR_GROUP_ID").AsBag(); } } When I view the Data service in the browser like: http://localhost:1905/DataService.svc/Errors it shows the list of errors with no problems, and using it like http://localhost:1905/DataService.svc/Errors(123) works too. The Problem When I want to see the Errors in a group or the groups form an error, like: "http://localhost:1905/DataService.svc/Errors(123)?$expand=GROUPS" I get the XML Document, but the browser says: The XML page cannot be displayed Cannot view XML input using XSL style sheet. Please correct the error and then click the Refresh button, or try again later. -------------------------------------------------------------------------------- Only one top level element is allowed in an XML document. Error processing resource 'http://localhost:1905/DataServic... <error xmlns="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata"> -^ I view the sourcecode, and I get the data. However it comes with an exception: <error xmlns="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata"> <code></code> <message xml:lang="en-US">An error occurred while processing this request.</message> <innererror xmlns="xmlns"> <message>A single resource was expected for the result, but multiple resources were found.</message> <type>System.InvalidOperationException</type> <stacktrace> at System.Data.Services.Serializers.Serializer.WriteRequest(IEnumerator queryResults, Boolean hasMoved)&#xD; at System.Data.Services.ResponseBodyWriter.Write(Stream stream)</stacktrace> </innererror> </error> A I missing something??? Where does this error come from?

    Read the article

  • Relational MySQL - fetched properties?

    - by Kelso.b
    I'm currently using the following PHP code: // Get all subordinates $subords = array(); $supervisorID = $this->session->userdata('supervisor_id'); $result = $this->db->query(sprintf("SELECT * FROM users WHERE supervisor_id=%d AND id!=%d",$supervisorID, $supervisorID)); $user_list_query = 'user_id='.$supervisorID; foreach($result->result() as $user){ $user_list_query .= ' OR user_id='.$user->id; $subords[$user->id] = $user; } // Get Submissions $submissionsResult = $this->db->query(sprintf("SELECT * FROM submissions WHERE %s", $user_list_query)); $submissions = array(); foreach($submissionsResult->result() as $submission){ $entriesResult = $this->db->query(sprintf("SELECT * FROM submittedentries WHERE timestamp=%d", $submission->timestamp)); $entries = array(); foreach($entriesResult->result() as $entries) $entries[] = $entry; $submissions[] = array( 'user' => $subords[$submission->user_id], 'entries' => $entries ); $entriesResult->free_result(); } Basically I'm getting a list of users that are subordinates of a given supervisor_id (every user entry has a supervisor_id field), then grabbing entries belonging to any of those users. I can't help but think there is a more elegant way of doing this, like SELECT FROM tablename where user->supervisor_id=2222 Is there something like this with PHP/MySQL? Should probably learn relational databases properly sometime. :(

    Read the article

  • DBD::Oracle and utf8 issue

    - by goe
    Hi All, I have a problem where my perl code using the latest DBD::Oracle on perl v5.8.8 throws an exception on me when I try to insert characters like 'ñ'. Exception: DBD::Oracle::db do failed: ORA-01756: quoted string not properly terminated (DBD ERROR: OCIStmtPrepare) My $ENV{NLS_LANG} is set to 'AMERICAN_AMERICA.AL32UTF8' These are the DB params based on "SELECT * from NLS_DATABASE_PARAMETERS" 1 NLS_LANGUAGE AMERICAN 2 NLS_TERRITORY AMERICA 3 NLS_CURRENCY $ 4 NLS_ISO_CURRENCY AMERICA 5 NLS_NUMERIC_CHARACTERS ., 6 NLS_CHARACTERSET AL32UTF8 7 NLS_CALENDAR GREGORIAN 8 NLS_DATE_FORMAT DD-MON-RR 9 NLS_DATE_LANGUAGE AMERICAN 10 NLS_SORT BINARY 11 NLS_TIME_FORMAT HH.MI.SSXFF AM 12 NLS_TIMESTAMP_FORMAT DD-MON-RR HH.MI.SSXFF AM 13 NLS_TIME_TZ_FORMAT HH.MI.SSXFF AM TZR 14 NLS_TIMESTAMP_TZ_FORMAT DD-MON-RR HH.MI.SSXFF AM TZR 15 NLS_DUAL_CURRENCY $ 16 NLS_COMP BINARY 17 NLS_LENGTH_SEMANTICS BYTE These are perl params based on "$db-ora_nls_parameters()" $VAR1 = { 'NLS_LANGUAGE' => 'AMERICAN', 'NLS_TIME_TZ_FORMAT' => 'HH.MI.SSXFF AM TZR', 'NLS_SORT' => 'BINARY', 'NLS_NUMERIC_CHARACTERS' => '.,', 'NLS_TIME_FORMAT' => 'HH.MI.SSXFF AM', 'NLS_ISO_CURRENCY' => 'AMERICA', 'NLS_COMP' => 'BINARY', 'NLS_CALENDAR' => 'GREGORIAN', 'NLS_DATE_FORMAT' => 'DD-MON-RR', 'NLS_DATE_LANGUAGE' => 'AMERICAN', 'NLS_TIMESTAMP_FORMAT' => 'DD-MON-RR HH.MI.SSXFF AM', 'NLS_TERRITORY' => 'AMERICA', 'NLS_LENGTH_SEMANTICS' => 'BYTE', 'NLS_NCHAR_CHARACTERSET' => 'AL16UTF16', 'NLS_DUAL_CURRENCY' => '$', 'NLS_TIMESTAMP_TZ_FORMAT' => 'DD-MON-RR HH.MI.SSXFF AM TZR', 'NLS_NCHAR_CONV_EXCP' => 'FALSE', 'NLS_CHARACTERSET' => 'AL32UTF8', 'NLS_CURRENCY' => '$' }; Here are some other strange facts: If I set NLS_LANG to ‘'AMERICAN_AMERICA.UTF8’ the insert executes fine with ‘ñ’ character. If I leave NLS_LANG as ‘'AMERICAN_AMERICA.AL32UTF8' but use ‘Ñ’ the insert will run fine as well.
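
    A hedged sketch of one way to sidestep the error (not from the original post): ORA-01756 means Oracle saw an unterminated quoted literal, i.e. the problem is in the SQL text itself, and interpolating multi-byte data into a quoted string is exactly where byte/character handling mismatches bite. Bind values keep the data out of the statement entirely, so the quoting can no longer be mangled regardless of NLS_LANG. Connection details, table and column names below are placeholders.

        use strict;
        use warnings;
        use DBI;

        my $dbh = DBI->connect('dbi:Oracle:MYDB', 'scott', 'tiger',
                               { RaiseError => 1, AutoCommit => 1 });

        my $name = "Espa\x{f1}a";    # contains the problematic 'ñ'

        # Placeholder instead of "INSERT ... VALUES ('$name')":
        my $sth = $dbh->prepare('INSERT INTO my_table (name) VALUES (?)');
        $sth->execute($name);

        $dbh->disconnect;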

    Read the article

  • Using Subsonic 3.0 Advanced Templates

    - by umit
    Hi all, I've been trying to use Subsonic Advanced Templates in a project for a while but most of the time I find myself writing a Stored Procedure as I can't find a proper way of doing it in code. Subsonic created corresponding objects for my DB tables and for foreign keys it created IQueryable fields inside each object. These fields are not loaded by default and a new SQL query is executed when you access them. 1- Is there a way to get all data in one query (deep load)? Also these fields can not be assigned. So when I want to create an object in a maintenance page, I can't put all the data into this object before saving it in DB: Post post = new Post(); //get photos for this post IList<PostPhoto> postPhotos = GetPostPhotos(); post.PostPhotos = postPhotos; 2- Is it possible to have one Post object with all fields set from user input? Think of the Post object above and assume I've successfully assigned its fields. Now I need to save it to the DB. 3- Is using BatchQuery the only way to do it in one query? If I have 4 photos in PostPhotos field; 2 of them previously saved and 2 of them new, can I use the Update method to handle both the adding and updating of these photos? Any ideas or links are appreciated. Cheers...

    Read the article

  • EF4 Code First - Many-to-many relationship issue

    - by Yngve B. Nilsen
    Hi! I'm having some trouble with my EF Code First model when saving a relation to a many to many relationship. My Models: public class Event { public int Id { get; set; } public string Name { get; set; } public virtual ICollection<Tag> Tags { get; set; } } public class Tag { public int Id { get; set; } public string Name { get; set; } public virtual ICollection<Event> Events { get; set; } } In my controller, I map one or many TagViewModels into type of Tag, and send it down to my servicelayer for persistence. At this time by inspecting the entities the Tag has both Id and Name (The Id is a hidden field, and the name is a textbox in my view) The problem occurs when I now try to add the Tag to the Event. Let's take the following scenario: The Event is already in my database, and let's say it already has the related tags C#, ASP.NET If I now send the following list of tags to the servicelayer: ID Name 1 C# 2 ASP.NET 3 EF4 and add them by first fetching the Event from the DB, so that I have an actual Event from my DbContext, then I simply do myEvent.Tags.Add to add the tags.. Problem is that after SaveChanges() my DB now contains this set of tags: ID Name 1 C# 2 ASP.NET 3 EF4 4 C# 5 ASP.NET This, even though my Tags that I save has it's ID set when I save it (although I didn't fetch it from the DB)
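
    A hedged sketch of one common fix (API details may differ across early Code First builds): the tags mapped straight from the view model are unknown to the DbContext, so EF treats every one of them as a new entity and inserts it again. Resolving each incoming tag against the context first - by Id, falling back to a new Tag only when there is no Id - keeps the existing rows. Here db is the context, eventId the posted event key and incomingTags the mapped view-model tags; all three names are illustrative.

        var ev = db.Events.Include("Tags").Single(e => e.Id == eventId);

        foreach (var tag in incomingTags)
        {
            // Reuse the persisted tag when we have its key; create only brand-new ones.
            var existing = tag.Id != 0 ? db.Tags.Find(tag.Id) : null;
            ev.Tags.Add(existing ?? new Tag { Name = tag.Name });
        }

        db.SaveChanges();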

    Read the article

  • MySQL Database Design with Internationalization

    - by Some name
    Hello, I'm going to start work on a medium sized application, and i'm planning it's db design. One thing that I'm not sure about is this. I will have many tables which will need internationalization, such as: "membership_options, gender_options, language_options etc" Each of these tables will share common i18n fields, like: "title, alternative_title, short_description, description" In your opinion which is the best way to do it? Have an i18n table with the same fields for each of the tables that will need them? or do something like: Membership table Gender table ---------------- -------------- id | created_at id | created_at 1 - 22.03.2001 1 - 14.08.2002 2 - 22.03.2001 2 - 14.08.2002 General translation table ------------------------- record_id | table_name | string_name | alternative_title| .... |id_language 1 - membership regular null 1 (english) 1 - membership normale null 2 (italian) 1 - gender man null 1(english) 1 -gender uomo null 2(italian) This would avoid me repeating something like: membership_translation table ----------------------------- membership_id | name | alternative_title | id_lang 1 regular null 1 1 normale null 2 gender_translation table ----------------------------- gender_id | name | alternative_title | id_lang 1 man null 1 1 uomo null 2 and so on, so i would probably reduce the number of db tables, but i'm not sure about performance.I'm not much of a DB designer, so please let me know.
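
    A hedged sketch of the shared-table option (column sizes and engine are illustrative): one translations table with a composite key covers every translatable parent table, at the cost of losing a real foreign key on record_id and of joining on (table_name, record_id, id_language) for every lookup. Per-table *_translation tables keep proper foreign keys and usually perform at least as well, so the choice is mostly about how many near-identical tables you are willing to maintain.

        CREATE TABLE translations (
            table_name        VARCHAR(64)  NOT NULL,   -- e.g. 'membership', 'gender'
            record_id         INT UNSIGNED NOT NULL,
            id_language       INT UNSIGNED NOT NULL,
            name              VARCHAR(255) NOT NULL,
            alternative_title VARCHAR(255) NULL,
            short_description VARCHAR(500) NULL,
            description       TEXT         NULL,
            PRIMARY KEY (table_name, record_id, id_language),
            KEY idx_language (id_language)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8;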

    Read the article
