Search Results

Search found 34305 results on 1373 pages for 'self referencing table'.


  • How to create a backup from SqlAlchemy?

    - by swilliams
    I'm writing a Pylons app, and am trying to create a simple backup system where every table is serialized and tarred up into a single file for an administrator to download, and use to restore the app should something bad happen. I can serialize my table data just fine using the SqlAlchemy serializer, and I can deserialize it fine as well, but I can't figure out how to commit those changes back to the database. In order to serialize my data I am doing this:

        from myproject.model.meta import Session
        from sqlalchemy.ext.serializer import loads, dumps

        q = Session.query(MyTable)
        serialized_data = dumps(q.all())

    To test things out, I go ahead and truncate MyTable, and then attempt to restore using serialized_data:

        from myproject.model import meta

        restore_q = loads(serialized_data, meta.metadata, Session)

    This doesn't seem to do anything... I've tried calling Session.commit() after the fact and individually walking through all the objects in restore_q and adding them, but nothing seems to work. What am I missing? Or is there a better way to do what I'm aiming for? I don't want to shell out and touch the database directly, since SqlAlchemy supports different database engines.
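
    One approach worth trying (only a sketch, assuming the dumped objects still map onto the current schema): loads() returns detached instances, so they may need to be merged back into the session explicitly before committing. In Python:

        from sqlalchemy.ext.serializer import loads
        from myproject.model import meta
        from myproject.model.meta import Session

        restored = loads(serialized_data, meta.metadata, Session)  # detached objects

        for obj in restored:
            Session.merge(obj)   # re-attach; emits INSERT (or UPDATE if the row still exists)
        Session.commit()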

    Read the article

  • Design help! How can I design extended properties for an entity with simple and complex data?

    - by mmtemporary
    I have a design question. I have an entity such as "Person". Person has properties such as FirstName, LastName, Gender, BirthDate, .... When creating a person in the application, the end user may need to define another property that is not defined in the database table schema (or the Person class). For example, the end user may need to define "property1", which is a string property, or "property2", which is an image, or "property3", which is a complex type. Please separate your design solution into two levels: 1. database table design, 2. class design. Thank you.
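
    A sketch of one common answer, the entity-attribute-value pattern: keep the fixed columns in the Person table and push user-defined properties into a child table keyed by person and property name. The table, column and class names below are illustrative only, and Python is used just because no language was specified:

        # Hypothetical extended-property table:
        #   PersonProperty(PersonId, Name, TypeCode, StringValue, BinaryValue)
        class Person:
            def __init__(self, first_name, last_name):
                self.first_name = first_name      # fixed schema columns
                self.last_name = last_name
                self.extended = {}                # name -> value, persisted to PersonProperty

            def set_property(self, name, value):
                self.extended[name] = value

        p = Person("Homer", "Simpson")
        p.set_property("property1", "a string value")
        p.set_property("property2", b"\x89PNG...")   # an image stored as raw bytes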

    Read the article

  • Cannot loop through Excel 2003 files in SSIS 2008

    - by Techspirit
    Hi, I am trying to execute an SSIS 2008 package on a 64-bit OS and import Excel 2003 files into SQL Server 2008. I have created an OLEDB connection to the Excel file, with a connection string that retrieves the Excel file from a variable, inside the ForEach Loop Container. Run64BitRuntime is set to false. I am not able to edit the SQL command on the OLEDB source in the Data Flow task. It returns an error:

        Error 2 Validation error. Load List Staged Table: Load List Staged Table: SSIS Error Code
        DTS_E_CANNOTACQUIRECONNECTIONFROMCONNECTIONMANAGER. The AcquireConnection method call to the
        connection manager "List OLEDB to Excel" failed with error code 0xC0202009. There may be error
        messages posted before this with more information on why the AcquireConnection method call
        failed. 0 0

    Appreciate any help.

    Read the article

  • PHP: Strange behaviour while calling custom php functions

    - by baltusaj
    I am facing strange behaviour while coding in PHP with Flex. Let me explain the situation. I have two functions, let's say:

        populateTable()  // puts some data in a table made with Flex
        createXML()      // creates an XML file which is used by FusionCharts to create a chart

    Now, if I call populateTable() alone, the table gets populated with data, but if I call it together with createXML(), the table doesn't get populated while createXML() does its work, i.e. creates an XML file. Even if I run the following code, only the XML file gets generated and the table remains empty, even though I called populateTable() before createXML(). Any idea what may be going wrong?

    MXML part:

        <mx:HTTPService id="userRequest" url="request.php" method="POST" resultFormat="e4x">
            <mx:request xmlns="">
                <getResult>send</getResult>
            </mx:request>

    and

        <mx:DataGrid id="dgUserRequest" dataProvider="{userRequest.lastResult.user}" x="28.5" y="36" width="525" height="250">
            <mx:columns>
                <mx:DataGridColumn headerText="No." dataField="no"/>
                <mx:DataGridColumn headerText="Name" dataField="name"/>
                <mx:DataGridColumn headerText="Age" dataField="age"/>
            </mx:columns>

    PHP part:

        <?php
        //--------------------------------------------------------------------------
        function initialize($username, $password, $database)
        //--------------------------------------------------------------------------
        {
            # Connect to the database
            $link = mysql_connect("localhost", $username, $password);
            if (!$link) {
                die('Could not connect to the database : ' . mysql_error());
            }

            # Select the database
            $db_selected = mysql_select_db($database, $link);
            if (!$db_selected) {
                die('Could not select the DB : ' . mysql_error());
            }

            // populateTable();
            createXML();

            # Close database connection
        }

        //--------------------------------------------------------------------------
        function populateTable()
        //--------------------------------------------------------------------------
        {
            if ($_POST['getResult'] == 'send') {
                $Result = mysql_query("SELECT * FROM session");
                $Return = "<Users>";
                $no = 1;
                while ($row = mysql_fetch_object($Result)) {
                    $Return .= "<user><no>" . $no . "</no><name>" . $row->name . "</name><age>" . $row->age . "</age><salary>" . $row->salary . "</salary></session>";
                    $no = $no + 1;
                }
                $Return .= "</Users>";
                mysql_free_result($Result);
                print($Return);
            }
        }

        //--------------------------------------------------------------------------
        function createXML()
        //--------------------------------------------------------------------------
        {
            $users = array(
                "0" => array("", 0),
                "1" => array("Obama", 0),
                "2" => array("Zardari", 0),
                "3" => array("Imran Khan", 0),
                "4" => array("Ahmadenijad", 0)
            );
            $selectedUsers = array(1, 4); // this means only Obama and Ahmadenijad are selected and the xml file will contain info related to them only

            // Extracting salaries of selected users
            $size = count($users);
            for ($i = 0; $i < $size; $i++) {
                // initialize temp which will calculate total throughput for each protocol separately
                $salary = 0;
                $result = mysql_query("SELECT salary FROM userInfo where name='$users[$selectedUsers[$i]][0]'");
                while ($row = mysql_fetch_array($result)) {
                    $salary = $row['salary'];
                }
                $users[$selectedUsers[$i]][1] = $salary;
            }

            // creating XML string
            $chartContent = "<chart caption=\"Users Vs Salaries\" formatNumberScale=\"0\" pieSliceDepth=\"30\" startingAngle=\"125\">";
            for ($i = 0; $i < $size; $i++) {
                $chartContent .= "<set label=\"" . $users[$selectedUsers[$i]][0] . "\" value=\"" . $users[$selectedUsers[$i]][1] . "\"/>";
            }
            $chartContent .= "<styles>" .
                "<definition>" .
                "<style type=\"font\" name=\"CaptionFont\" size=\"16\" color=\"666666\"/>" .
                "<style type=\"font\" name=\"SubCaptionFont\" bold=\"0\"/>" .
                "</definition>" .
                "<application>" .
                "<apply toObject=\"caption\" styles=\"CaptionFont\"/>" .
                "<apply toObject=\"SubCaption\" styles=\"SubCaptionFont\"/>" .
                "</application>" .
                "</styles>" .
                "</chart>";

            $file_handle = fopen('ChartData.xml', 'w');
            fwrite($file_handle, $chartContent);
            fclose($file_handle);
        }

        initialize("root", "", "hiddenpeak");
        ?>

    Read the article

  • Need alternative field names for these reserved words

    - by MattSlay
    “type” and “class” are likely reserved or problematic words in C# and/or Ruby, two languages I may use to program against my new database schema in the future. So, in order to avoid potential conflicts with those languages, I'm looking for alternative names for these field names in my tables. In this case, it is from my Machines table, where I have a “class” field (values would be something like “manual” or “computerized”) and a “type” field (values would be “lathe” or “mill”). I could call the fields “machineclass” and “machinetype”, but that is inconsistent with the naming scheme in the rest of my schema (meaning, I do not re-use the table name in the field… For instance, I use Machine.name, not Machine.machinename). Any thoughts on this madness?

    Read the article

  • CKEditor adds html entities to inline CSS. Is the CSS still valid?

    - by Mihai Secasiu
    I have this piece of code:

        <table style="background-image: url(path/to_image.png)">

    And when I load it in CKEditor it's transformed into:

        <table style="background-image: url(&quot;path/to_image.png&quot;)">

    Is this still valid CSS? Actually, I'm not so interested in whether it's valid but in whether there would be any problems with any web browser or email client (the editor is used for composing an HTML email). Firefox and Thunderbird seem to be fine with it.

    Read the article

  • Faster jquery selector for finding a number of TD elements

    - by Bernard Chen
    I have a table where each row has 13 TD elements. I want to show and hide the first 10 of them when I toggle a link. These 10 TD elements all have IDs with the prefix "foo" and a two-digit number for their position (e.g., "foo01"). What's the fastest way to select them across the entire table?

        $("td:nth-child(-n+10)")

    or

        $("td[id^=foo]")

    or is it worth concatenating all of the IDs?

        $("#foo01, #foo02, #foo03, #foo04, #foo05, #foo06, #foo07, #foo08, #foo09, #foo10")

    Is there another approach I should be considering as well?

    Read the article

  • What are the hard and fast rules for Cache Control?

    - by Metalshark
    Confession: sites I maintain have different rules for Cache-Control, mostly based on the default configuration of the server, followed up with recommendations from the Page Speed & YSlow Firefox plug-ins and the Network Resources view in Google's Speed Tracer. Cache-Control is set to private/public depending on what they say to do, ETag/Last-Modified headers are only tinkered with if YSlow suggests there is something wrong, and Vary: Accept-Encoding seems necessary when manually gzipping files for Amazon CloudFront. When reading through the material on the different options and what they do, there seems to be conflicting information, rules for broken proxies, and cargo-cult configurations. The official information provided by the analysis tools mentioned above is quite inaccessible, as it deals with each topic individually instead of as a unified strategy (so there is no cross-referencing of techniques). For example, it seems to make no sense that the speed analysis tools rate a site with ETags the same as a site without them if they are meant to help with caching. What are the hard and fast rules for a platform-agnostic Cache-Control strategy? EDIT: A link through to Jeff Atwood's article explains caching in superb depth. For the record, though, here are the hard and fast rules:
    - If the file is compressed using gzip, etc., use "Cache-Control: private", as a proxy may return the compressed version to a client that does not support it (the browser cache will hold files marked this way, though). Also remember to include "Vary: Accept-Encoding" to say that it is compressible.
    - Use Last-Modified in conjunction with ETag - belt-and-braces usage provides both validators, and since ETag is based on file contents instead of modification time alone, using both covers all bases. NOTE: AOL's PageTest has a carte blanche approach against ETags for some reason.
    - If you are using Apache on more than one server to host the same content, then remove the implicitly declared inode from ETags by excluding it from the FileETag directive (i.e. "FileETag MTime Size"), unless you are genuinely using the same live filesystem.
    - Use "Cache-Control: public" wherever you can - this means that proxy servers (and the browser cache) will return your content even if the rest of the page needs HTTP authentication, etc.

    Read the article

  • Future proof Primary Key design in postgresql

    - by John P
    I've always used either auto-generated columns or sequences in the past for my primary keys. With the current system I'm working on, there is the possibility of having to eventually partition the data, which has never been a requirement in the past. Knowing that I may need to partition the data in the future, is there any advantage to using UUIDs for PKs instead of the database's built-in sequences? If so, is there a design pattern that can safely generate relatively short keys (say 6 characters instead of the usual long one, e6709870-5cbc-11df-a08a-0800200c9a66)? 36^6 keys per table is more than sufficient for any table I could imagine. I will be using the keys in URLs, so conciseness is important.
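
    A sketch of one way to generate short random keys (assuming base-36, 6 characters, and a retry on the rare collision); shown in Python for illustration, and the exists/db_has_key callback is a placeholder for whatever duplicate check the table needs:

        import secrets
        import string

        ALPHABET = string.ascii_lowercase + string.digits   # base 36

        def new_short_key(exists, length=6, max_tries=10):
            """exists(key) -> bool should check the table for a duplicate."""
            for _ in range(max_tries):
                key = "".join(secrets.choice(ALPHABET) for _ in range(length))
                if not exists(key):
                    return key
            raise RuntimeError("could not find a free key; consider a longer length")

        # usage: new_short_key(lambda k: db_has_key(k))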

    Read the article

  • How do I save user specific data in an asp.net site?

    - by Greg McNulty
    I just set up user profiles in ASP.NET 3.5 using wvd. For each user I would like to store data that they will be updating every day. For example, every time they go for a run they will update time and distance. I intend to allow them to also look up their history of distance and time from any past date. My question is, what does the database schema usually look like for such a setup? Currently ASP.NET set up a DB for me when I made the user profiles. Do I just add an extra table for every user? Should there be one big table with all users' data? How do I relate a user ID to their specific data? Etc.... I have never done this before, so any ideas on how this is usually done would be very helpful. Thank you.
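
    The usual shape is one shared table for all users' entries, keyed by the user's id, rather than a table per user. The table and column names below are illustrative only, and Python/sqlite3 is used just to make the DDL concrete:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE Runs (
                RunId    INTEGER PRIMARY KEY,
                UserId   TEXT NOT NULL,      -- maps to the membership/profile user id
                RunDate  TEXT NOT NULL,
                Distance REAL NOT NULL,
                Duration REAL NOT NULL
            );
        """)
        conn.execute("INSERT INTO Runs (UserId, RunDate, Distance, Duration) VALUES (?, ?, ?, ?)",
                     ("user-guid-123", "2010-05-01", 5.0, 27.5))

        # history lookup for one user
        rows = conn.execute("SELECT RunDate, Distance, Duration FROM Runs WHERE UserId = ? ORDER BY RunDate",
                            ("user-guid-123",)).fetchall()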

    Read the article

  • Delphi, Csv,Import Firebird

    - by Vijesh V.Nair
    Let me explain my problem. I have one ID and two other fields in a CSV file. The ID is connected to a database table. I have to show the corresponding entries in the DB and the fields from the CSV. I need to sort the fields too. My idea was to load the CSV into a ClientDataSet, look it up against a query on the table, and use sorting to show it. My CSV has 85K records and it's taking 120 seconds to load and sort, which is not acceptable. Can you tell me whether I can use BatchMove for this, so I can easily pick fields with a simple query? If I can use BatchMove, please give me the guidelines. Also, are there any other techniques for this? Thanks and Regards, Vijesh V.Nair

    Read the article

  • ado.net slow updating large tables

    - by brett
    The problem: 100,000+ name & address records in an Access table (2003). I need to iterate through the table & update detail with the output from a 3rd party DLL. I currently use ADO, and it works at an acceptable speed (less than 5 minutes on a network share). We will soon need to update to Access 2007 and its 'non Jet' accdb format to maintain compatibility with clients. I've tried using ADO.NET DataSets, but updating the records takes hours! We process 5-10 of these tables per day, so this cannot be a solution. Any ideas on the fastest way to update individual records using ADO.NET? Surely we didn't take such a huge backward step with ADO.NET? Any help would be appreciated.

    Read the article

  • can't insert xml dml expression as a string

    - by 81967
    The code below explains the problem... I create a table with an xml column, declare a variable, initialize it, and insert the value into the xml column:

        create table CustomerInfo (XmlConfigInfo xml)
        declare @StrTemp nvarchar(2000)
        set @StrTemp = '<Test></Test>'
        insert into [CustomerInfo](XmlConfigInfo) values (@StrTemp)

    Then comes the part of the question... if I write this:

        update [CustomerInfo]
        set XmlConfigInfo.modify('insert <Info></Info> into (//Test)[1]')
        -- Works Fine!!!

    it works fine, but when I try this:

        set @StrTemp = 'insert <Info></Info> into (//Test)[1]'
        update [CustomerInfo]
        set XmlConfigInfo.modify(@StrTemp)
        -- Doesn't Work!!!

    it throws an error:

        The argument 1 of the xml data type method "modify" must be a string literal.

    Is there a way around this one? I tried this, but it is not working :(

    Read the article

  • Problems using 2 simplemodal osx calls

    - by Marcos
    I'm developing a webpage that contains two <p> tags (program and events) that show two different contents. When I click the program button (it's a <p> tag), it's OK, it shows the table in the divs. But when I click the events button, it again shows the program <p> tag contents, and I want to show the events table content. I've read the solution for this question on this site, but it doesn't work: when I change my JS file, the content of both <p> tags disappears. Can somebody help me?

    Read the article

  • Dynamic data-entry value store

    - by simendsjo
    I'm creating a data-entry application where users are allowed to create the entry schema. My first version just created a single table per entry schema, with each entry spanning one or multiple columns (for complex types) with the appropriate data type. This allowed for "fast" querying (on small datasets, as I didn't index all columns) and simple synchronization where the data entry was distributed on several databases. I'm not quite happy with this solution though; the only positive thing is the simplicity... I can only store a fixed number of columns. I need to create indexes on all columns. I need to recreate the table on schema changes. Some of my key design criteria are:
    - Very fast querying (using a simple domain-specific query language)
    - Writes don't have to be fast
    - Many concurrent users
    - Schemas will change often
    - Schemas might contain many thousand columns
    - The data entries might be distributed and need synchronization
    - Preferably MySQL and SQLite - databases like DB2 and Oracle are out of the question
    - Using .Net/Mono
    I've been thinking of a couple of possible designs, but none of them seems like a good choice. Solution 1: a union-like table containing a Type column and one nullable column per type. This avoids joins, but will definitely use a lot of space. Solution 2: a key/value store. All values are stored as strings and converted when needed. This also uses a lot of space, and of course, I hate having to convert everything to strings. Solution 3: use an XML database or store values as XML. Without any experience I would think this is quite slow (at least for the relational model, unless there is some very good XPath support). I would also like to avoid an XML database, as other parts of the application fit better as a relational model, and being able to join the data is helpful. I can't help thinking that someone has solved (some of) this already, but I'm unable to find anything. Not quite sure what to search for either... I know market research is doing something like this for their questionnaires, but there are few open-source implementations, and the ones I've found don't quite fit the bill. PSPP has much of the logic I'm thinking of: primitive column types, many columns, many rows, fast querying and merging. Too bad it doesn't work against a database. And of course... I don't need 99% of the provided functionality, but a lot of stuff not included. I'm not sure this is the right place to ask such a design-related question, but I hope someone here has some tips, knows of any existing work, or can point me to a better place to ask. Thanks in advance!
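
    For what it's worth, a minimal sketch of Solution 1 (a value table with one nullable column per primitive type): the names are made up, and Python/sqlite3 is used only to illustrate the shape, not as a recommendation:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE EntryValue (
                EntryId   INTEGER NOT NULL,
                ColumnId  INTEGER NOT NULL,   -- which user-defined column this value belongs to
                TextVal   TEXT,
                NumVal    REAL,
                DateVal   TEXT,
                PRIMARY KEY (EntryId, ColumnId)
            );
            CREATE INDEX ix_value_num  ON EntryValue(ColumnId, NumVal);
            CREATE INDEX ix_value_text ON EntryValue(ColumnId, TextVal);
        """)

        # one data entry with two user-defined columns, each stored in its typed slot
        conn.executemany(
            "INSERT INTO EntryValue (EntryId, ColumnId, TextVal, NumVal) VALUES (?, ?, ?, ?)",
            [(1, 10, "apple", None), (1, 11, None, 42.0)])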

    Read the article

  • Grouping with operands question

    - by Filip
    I have a table:

        mysql> desc kursy_bid;
        +-----------+-------------+------+-----+---------+-------+
        | Field     | Type        | Null | Key | Default | Extra |
        +-----------+-------------+------+-----+---------+-------+
        | datetime  | datetime    | NO   | PRI | NULL    |       |
        | currency  | varchar(6)  | NO   | PRI | NULL    |       |
        | value     | varchar(10) | YES  |     | NULL    |       |
        +-----------+-------------+------+-----+---------+-------+
        3 rows in set (0.01 sec)

    I would like to select some rows from the table, grouped by some time interval (it can be one day), where I will have the first row and the last row of the group, plus max(value) and min(value). I tried:

        select datetime,
               (select value order by datetime asc limit 1) open,
               (select value order by datetime desc limit 1) close,
               max(value), min(value)
        from kursy_bid_test
        where datetime > '2009-09-14 00:00:00' and currency = 'eurpln'
        group by month(datetime), day(datetime), hour(datetime);

    but the output is:

        +--------+--------+---------------------+------------+------------+
        | open   | close  | datetime            | max(value) | min(value) |
        +--------+--------+---------------------+------------+------------+
        | 1.4581 | 1.4581 | 2009-09-14 00:00:05 | 4.1712     | 1.4581     |
        | 1.4581 | 1.4581 | 2009-09-14 01:00:01 | 1.4581     | 1.4581     |
        +--------+--------+---------------------+------------+------------+

    As you can see, open and close are the same (but they shouldn't be). What should the query be to do what I want?
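
    To make the intended result concrete, here is a rough Python equivalent of the wanted grouping (first value as open, last as close, plus min/max per hour); this is only a sketch of the semantics with made-up sample rows, not the MySQL fix:

        from collections import defaultdict

        rows = [  # (datetime string, value), already filtered to one currency
            ("2009-09-14 00:00:05", 1.4581),
            ("2009-09-14 00:30:00", 4.1712),
            ("2009-09-14 01:00:01", 1.4581),
        ]

        buckets = defaultdict(list)
        for dt, value in sorted(rows):
            buckets[dt[:13]].append(value)        # group key = "YYYY-MM-DD HH"

        for hour, values in sorted(buckets.items()):
            print(hour, "open:", values[0], "close:", values[-1],
                  "max:", max(values), "min:", min(values))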

    Read the article

  • How to correctly calculate address spaces?

    - by user337308
    Below is an example of a question given on my last test in a Computer Engineering course. Would anyone mind explaining to me how to get the start/end addresses of each? I have listed the correct answers at the bottom... The MSP430F2410 device has an address space of 64 KB (the basic MSP430 architecture). Fill in the table below if we know the following. The first 16 bytes of the address space (starting at the address 0x0000) are reserved for special function registers (IE1, IE2, IFG1, IFG2, etc.), the next 240 bytes are reserved for 8-bit peripheral devices, and the next 256 bytes are reserved for 16-bit peripheral devices. The RAM memory capacity is 2 Kbytes and it starts at the address 0x1100. At the top of the address space is 56 KB of flash memory reserved for code and the interrupt vector table.

        What                                    Start Address   End Address
        Special Function Registers (16 bytes)   0x0000          0x000F
        8-bit peripheral devices (240 bytes)    0x0010          0x00FF
        16-bit peripheral devices (256 bytes)   0x0100          0x01FF
        RAM memory (2 Kbytes)                   0x1100          0x18FF
        Flash Memory (56 Kbytes)                0x2000          0xFFFF
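
    The arithmetic behind those answers is end = start + size - 1, with the 56 KB flash block counted back from the 64 KB top of the address space. Worked through in Python:

        regions = [
            ("Special Function Registers", 0x0000, 16),
            ("8-bit peripheral devices",   0x0010, 240),
            ("16-bit peripheral devices",  0x0100, 256),
            ("RAM memory",                 0x1100, 2 * 1024),
        ]
        for name, start, size in regions:
            print(f"{name}: 0x{start:04X} - 0x{start + size - 1:04X}")

        flash_size = 56 * 1024
        flash_start = 0x10000 - flash_size        # 64 KB top minus 56 KB = 0x2000
        print(f"Flash Memory: 0x{flash_start:04X} - 0x{0x10000 - 1:04X}")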

    Read the article

  • Need help with auto-scaffolding template in ASP.NET MVC

    - by DanM
    I'm trying to write an auto-scaffolder for Index views. I'd like to be able to pass in a collection of models or view-models (e.g., IQueryable<MyViewModel>) and get back an HTML table that uses the DisplayName attribute for the headings (th elements) and Html.Display(propertyName) for the cells (td elements). Each row should correspond to one item in the collection. Here's what I have so far:

        <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl" %>
        <%
            var items = (IQueryable<TestProj.ViewModels.TestViewModel>)Model; // Should be generic!
            var properties = items.First().GetMetadata().Properties
                .Where(pm => pm.ShowForDisplay && !ViewData.TemplateInfo.Visited(pm));
        %>
        <table>
            <tr>
                <% foreach (var property in properties) { %>
                    <th>
                        <%= property.DisplayName %>
                    </th>
                <% } %>
            </tr>
            <% foreach (var item in items) { %>
                <tr>
                    <% foreach (var property in properties) { %>
                        <td>
                            <%= Html.Display(property.DisplayName) %> // This doesn't work!
                        </td>
                    <% } %>
                </tr>
            <% } %>
        </table>

    Two problems with this:
    1. I'd like it to be generic. So, I'd like to replace var items = (IQueryable<TestProj.ViewModels.TestViewModel>)Model; with var items = (IQueryable<T>)Model; or something to that effect.
    2. The <td> elements are not working, because the Html in <%= Html.Display(property.DisplayName) %> contains the model for the view, which is a collection of items, not the item itself. Somehow, I need to obtain an HtmlHelper object whose Model property is the current item, but I'm not sure how to do that.
    How do I solve these two problems?

    Read the article

  • LINQ DataLoadOptions - Retrieval of data via Fulltext/Broken Foreign Key Relationship

    - by Alex
    I've hit a brick wall. I have an SQL function:

        FUNCTION [dbo].[ContactsFTS] (@searchtext nvarchar(4000))
        RETURNS TABLE
        AS
        RETURN
            SELECT *
            FROM Contacts
            INNER JOIN CONTAINSTABLE(Contacts, *, @searchtext) AS KEY_TBL
                ON Contacts.Id = KEY_TBL.[KEY]

    which I am calling via LINQ:

        public IQueryable<ContactsFTSResult> SearchByFullText(String searchText)
        {
            return db.ContactsFTS(searchText);
        }

    I am projecting the ContactsFTSResult collection into a List<Contact> which is then given to my viewmodel. Here is the problem: my Contacts table (and therefore the Contact object created via LINQ to SQL) has multiple FK relationships to other information, such as Contact.BillingAddressId being an FK to Address.Id. That information is missing after I do the fulltext search (e.g. if I try to access Contact.BillingAddress it is null). Can I add this information somehow via DataLoadOptions? I tried LoadWith<Contact>(c => c.BillingAddress), but this doesn't work, I assume because I'm calling the function instead of doing the whole query via LINQ.

    Read the article

  • Returning partial address matches and mismatch position using L2S or SQL

    - by peter3
    I need to implement a method that takes an address split up into individual parts and returns any matching items from an address table. If no matches are found, I want to be able to return a value indicating where it failed. Each input param has a corresponding field in the table. The signature would look something like this:

        List<Address> MatchAddress(string zipCode, string streetName, string houseNumber,
                                   string houseLetter, string floor, string appartmentNo,
                                   out int mismatchPosition)
        {
            // return matching addresses
            // if none found, return the position where it stopped matching
            // zipCode is position 0, appartmentNo is position 5
            //
            // an empty param value indicates "don't check"
        }

    I know I can construct the method such that I start with all the parameters, execute the query, and then remove param by param (from the right side) until either a match is found or I run out of parameters, but can I construct a query that is more effective than that, i.e. minimizing the number of calls to the DB, maybe even as a single call?
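
    For reference, the brute-force shape described in the question (drop parameters from the right until something matches) can be sketched like this; db_query is a placeholder for the actual L2S/SQL call, and the whole thing is illustrative Python rather than the final C#:

        def match_address(db_query, zip_code, street, house_no, house_letter, floor, apt_no):
            """db_query(params) -> list of matches; "" in a slot means "don't check".
            Returns (matches, mismatch_position): -1 for a full match, otherwise the
            index (0 = zip_code ... 5 = apt_no) of the left-most parameter dropped."""
            params = [zip_code, street, house_no, house_letter, floor, apt_no]
            for keep in range(len(params), 0, -1):
                attempt = params[:keep] + [""] * (len(params) - keep)   # trim from the right
                matches = db_query(attempt)
                if matches:
                    return matches, (-1 if keep == len(params) else keep)
            return [], 0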

    Read the article

  • ANSI SQL question - how to insert or update a record if it already exists?

    - by morpheous
    Although I am using MySQL (for now), I don't want any DB-specific SQL. I am trying to insert a record if it doesn't exist, and update a field if it does exist. I want to use ANSI SQL. The table looks something like this:

        create table test_table (id int, name varchar(16), weight double);

        -- test data
        insert into test_table (id, name, weight) values (1, 'homer', 900);
        insert into test_table (id, name, weight) values (2, 'marge', 85);
        insert into test_table (id, name, weight) values (3, 'bart', 25);
        insert into test_table (id, name, weight) values (4, 'lisa', 15);

    If the record exists, I want to update the weight (increase it by, say, 10).
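
    The portable pattern that avoids vendor-specific syntax is "UPDATE first, INSERT if nothing was updated" (ANSI MERGE is the other option where the engine supports it). A sketch using Python/sqlite3 purely to show the sequence:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE test_table (id INT, name VARCHAR(16), weight DOUBLE)")
        conn.execute("INSERT INTO test_table (id, name, weight) VALUES (1, 'homer', 900)")

        def upsert(conn, rec_id, name, initial_weight, increase=10):
            cur = conn.execute("UPDATE test_table SET weight = weight + ? WHERE id = ?",
                               (increase, rec_id))
            if cur.rowcount == 0:                  # no existing row -> insert one
                conn.execute("INSERT INTO test_table (id, name, weight) VALUES (?, ?, ?)",
                             (rec_id, name, initial_weight))
            conn.commit()

        upsert(conn, 1, "homer", 900)    # exists: weight goes 900 -> 910
        upsert(conn, 5, "maggie", 5)     # new: row (5, 'maggie', 5) inserted

    Under concurrency the update-then-insert pair still needs a transaction or a retry on the insert, but it stays within standard SQL.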

    Read the article

  • PHP multiuser login class or script

    - by FFish
    I am looking for a simple but secure login script with MySQL and PHP that I can use with my existing database: sessions, MD5, cookies to store the password, plus password recovery by email and the ability to change the login/password. I do not need registration; I register the users myself with a temporary login/password.

        table agents
            agent1
            agent2

        table albums
            album1, owner: agent1
            album2, owner: agent1
            album3, owner: agent2
            ...

    After login.php, agent1 logs in and has access to his albums:
    - album1
    - album2
    agent1 can edit his albums, e.g. edit.php?ref=album1, but NOT edit.php?ref=album3 (by changing the ?ref variable).

    Read the article

  • Simple File-based Record Storage with Fast Text Searching for Compact Framework and Silverlight

    - by Eric Farr
    I have a single table with lots of records (over 100k) that I need to be able to index and search on several text fields. The easiest searches will have the first part of the string specified (e.g., LIKE 'ABC%' in SQL). The tougher searches will need to search for any substring within the text fields (e.g., LIKE '%ABC%' in SQL). I need to run on the Compact Framework. SQL Compact is a memory hog and overkill for my one table. Besides, I'd like to be able to run on Silverlight 4 eventually. The file and indexes can be generated on the full .NET Framework, and I only need read capability on the Compact Framework. My records are not especially large and can be expressed in fixed-length format. I'm looking for some existing code or libraries to avoid having to write a file-based BTree implementation from scratch.
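
    One way to support the '%ABC%' case without scanning every record is an n-gram index: index every 3-character substring of a field and intersect the posting lists at query time. A toy in-memory Python sketch of the idea (an assumed approach, not an existing library, and the records dict is made up):

        from collections import defaultdict

        def trigrams(text):
            t = text.lower()
            return {t[i:i + 3] for i in range(len(t) - 2)}

        index = defaultdict(set)      # trigram -> set of record ids

        def add_record(rec_id, text):
            for g in trigrams(text):
                index[g].add(rec_id)

        def search_substring(needle, records):
            grams = trigrams(needle)
            if not grams:                        # needle shorter than 3 chars: fall back to a scan
                return [r for r in records if needle.lower() in records[r].lower()]
            candidates = set.intersection(*(index[g] for g in grams))
            return [r for r in candidates if needle.lower() in records[r].lower()]

        records = {1: "ABCDEF", 2: "XXABCX", 3: "NOPE"}
        for rid, text in records.items():
            add_record(rid, text)
        print(search_substring("ABC", records))   # -> ids 1 and 2 (order not guaranteed)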

    Read the article

  • Many-To-Many dimensional model

    - by Mevdiven
    Folks, I have a dimension table called DIM_FILE which holds information about the files we received from customers. Each file has detail records, which constitute my fact table, CUST_DETAIL. In the main process, a file goes through several stages, and each stage tags a status to it. Long story short, I have a many-to-many relationship. Any ideas around star-schema dimensional modeling? A customer record belongs to only a single file, and a file can have multiple statuses.

        FACT
        ----
        CustID
        FileID
        AmountDue

        DIM_FILE
        --------
        FileID
        FileName
        DateReceived

        FILE_STATUS
        -----------
        FileID
        StatusDateTime
        StatusCode

    Read the article

  • Multiple Inheritance in LINQtoSQL?

    - by Bumble Bee
    Guys, I have been surfing the web to find a way to use multiple-table inheritance in LINQ to SQL. But it looks like it only supports single-table inheritance, which is not the best way to achieve inheritance in an ORM framework. I got to read that this will be addressed in the next LINQ and Entity Framework implementations, but how long a wait are we talking about? In the meantime, if any of you have tried a work-around implementation to achieve this, please let me know. I have also thought of using my leisure time to come up with such an implementation, so suggestions are welcome! /Bumble Bee

    Read the article
