Search Results

Search found 26878 results on 1076 pages for 'temp table'.


  • Stored Procedure in Entity Framework

    - by kamal
    Hi, I have added a stored procedure to my Entity Framework model, and I also imported the functions in the .edmx. Is it mandatory to map all three functions (insert, update, and delete) to a table? I have tried with insert alone and also with all three, but either way I cannot see the name of the stored procedure in the connection string. To be clear about what I did: I added the stored procedure, imported the functions in the model browser, and mapped the insert, update, and delete functions to the table, with a return type set only for insert and update. Still, the stored procedure's name does not appear in the connection string. Please let me know how I can resolve this issue. Thanks in advance, Kamal.

    Read the article

  • PHP multiuser login class or script

    - by FFish
    I am looking for a simple but secure PHP/MySQL login script that I can use with my existing database: sessions, MD5 hashing, cookies to store the password, password recovery by email, and the ability to change the login/password. I do not need registration; I register each user myself with a temporary login/password.

        table agents
            agent1
            agent2

        table albums
            album1, owner: agent1
            album2, owner: agent1
            album3, owner: agent2
            ...

    login.php: agent1 logs in and has access to his albums (album1 and album2). agent1 can edit his own albums (edit.php?ref=album1) but must NOT be able to reach edit.php?ref=album3 by changing the ?ref variable.
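
    For the edit.php check, a minimal sketch of the ownership test in SQL, assuming a ref and an owner column on the albums table (column names guessed from the layout above):

        -- deny the edit when this returns zero rows
        SELECT *
        FROM albums
        WHERE ref = 'album1'       -- the ?ref parameter, properly escaped
          AND owner = 'agent1';    -- the agent stored in the session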

    Read the article

  • Handle ConstraintException and get ColumnName that cause the error

    - by Mysterious
    Hello, I have a table, Machines, made up of:

        ID   = identity, primary key, auto-increment, NOT NULL
        SN   = unique, string, NOT NULL
        Name = string

    I am using a form to insert data into this table, with a BindingSource and an ErrorProvider, and I handle null and empty strings in the DataSet layer (a partial class in the DataSet.xsd). When I try to end editing with

        this.machinesBindingSource.EndEdit();

    I get this error message:

        ex.Message = "Column 'SN' is constrained to be unique. Value '123' is already present."

    I know I entered a duplicate value, but the question is: HOW can I determine which column name caused the error, so that I can rewrite the error message entirely in a different language? I want to get the column name and the offending value. Thanks in advance.

    Read the article

  • DOM and Javascript

    - by Bob Smith
    I let a user reconfigure the location of a set of rows in a table by giving them the ability to move rows up and down. The changes are done by swapping nodes in the DOM. After the user has moved rows around, when I do a view source, I see the HTML in its original state (before the user made any changes). Can someone explain why that is? My understanding was that when we do any DOM operations, the underlying HTML would be changed as well. EDIT: Does that mean that on the server side, when I attempt to get the state after the user's changes, I will be able to get what I need? I am using C#/ASP.NET. Could it be because this is a plain HTML table (not an ASP.NET server control) that it's not maintaining the state of the changes?

    Read the article

  • Modifying records in my migration throws an authlogic error

    - by nfm
    I'm adding some columns to one of my database tables, and then populating those columns:

        def self.up
          add_column :contacts, :business_id, :integer
          add_column :contacts, :business_type, :string

          Contact.reset_column_information
          Contact.all.each do |contact|
            contact.update_attributes(:business_id => contact.client_id,
                                      :business_type => 'Client')
          end

          remove_column :contacts, :client_id
        end

    The line with contact.update_attributes is causing the following Authlogic error:

        You must activate the Authlogic::Session::Base.controller with a controller object before creating objects

    I have no idea what is going on here: I'm not using a controller method to modify each row in the table, nor am I creating new objects. The error doesn't occur if the contacts table is empty. I've had a google and it seems this error can occur when you run your controller tests, and is fixed by adding before_filter :activate_authlogic to them, but that doesn't seem relevant in my case. Any ideas? I'm stumped.

    Read the article

  • Optimising speeds in HDF5 using Pytables

    - by Sree Aurovindh
    The problem concerns the writing speed of the computers (ten 32-bit machines) and PostgreSQL query performance. I will explain the scenario in detail. I have about 80 GB of data (with appropriate database indexes in place). I am trying to read it from a PostgreSQL database and write it into HDF5 using PyTables. I have 1 table and 5 variable arrays in each HDF5 file. My HDF5 setup is not multithreaded and not enabled for symmetric multiprocessing. I have rented about 10 computers for a day and am trying to write across them in order to speed up my data handling.

    As for the PostgreSQL side, the overall record count is 140 million, and I have 5 related tables referenced by primary/foreign keys. I am not using joins, as they do not scale here, so for a single record I do 6 lookups without joins and write the results into HDF5, performing 6 inserts: one into the table and one into each of its corresponding arrays. The queries are really simple:

        select * from x.train where tr_id = 1   -- primary key, indexed
        select q_t from x.qt where q_id = 2     -- non-primary key, but indexed

    (and similarly for the other five queries). Each computer writes two HDF5 files, so the total comes to around 20 files.

    Some calculations and statistics:

        Total number of records       : 143,700,000
        Records per file              : 143,700,000 / 20 = 7,185,000
        Total rows in each file       : 7,185,000 * 5 = 35,925,000

    Current PostgreSQL configuration: my current machine has 8 GB of RAM with a 2nd-generation i7 processor. I made the following changes to the PostgreSQL configuration file: shared_buffers = 2 GB, effective_cache_size = 4 GB.

    Note on current performance: I have run it for about ten hours, and the number of rows written for each file so far is about 621,000 * 5 = 3,105,000. The bottleneck is that I can only rent the machines for 10 hours per day (overnight), and at this speed the job will take about 11 days, which is too long for my experiments. Please suggest how I might improve this. Questions:

    1. Should I use symmetric multiprocessing on those desktops (each has 2 cores with about 2 GB of RAM)? If so, what is suggested or preferable?
    2. If I change my PostgreSQL configuration file and increase the RAM, will it improve the process?
    3. Should I use multithreading? If so, any links or pointers would be of great help.

    Thanks, Sree Aurovindh V
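
    A hedged suggestion on the query side: since each record costs a round trip per lookup, batching the reads into ranges may cut overhead substantially. A sketch, assuming the tr_id and q_id values are dense enough to scan in slabs:

        -- fetch thousands of rows per round trip instead of one id at a time
        SELECT *   FROM x.train WHERE tr_id BETWEEN 1 AND 10000 ORDER BY tr_id;
        SELECT q_t FROM x.qt    WHERE q_id  BETWEEN 1 AND 10000 ORDER BY q_id;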

    Read the article

  • Replace into equivalent for postgresql and then autoincrementing an int

    - by Mohamed Ikal Al-Jabir
    Okay, no seriously: if a PostgreSQL guru can help out, I'm just getting started. Basically what I want is a simple table like this:

        CREATE TABLE schema.searches
        (
          search_id serial NOT NULL,
          search_query character varying(255),
          search_count integer DEFAULT 1,
          CONSTRAINT pkey_search_id PRIMARY KEY (search_id)
        )
        WITH ( OIDS=FALSE );

    I need something like MySQL's REPLACE INTO. I don't know if I have to write my own procedure or something? Basically: check if the query already exists; if so, just add 1 to the count; if not, add it to the db. I can do this in my PHP code, but I'd rather all that be done in the Postgres C engine. Thanks for helping.
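
    For reference, a minimal sketch of the common two-statement idiom, using the table above ('some query' stands in for the actual search text; it assumes a single writer, as concurrent sessions need a retry loop or an explicit lock):

        UPDATE schema.searches
           SET search_count = search_count + 1
         WHERE search_query = 'some query';

        INSERT INTO schema.searches (search_query)
        SELECT 'some query'
         WHERE NOT EXISTS (SELECT 1 FROM schema.searches
                            WHERE search_query = 'some query');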

    Read the article

  • Repeating fields in similar database tables

    - by user1738833
    I have been tasked with working on a database that I have never seen before, and I'm looking at the DB structure. Some of the central, most heavily queried and joined tables look like virtual duplicates of each other. Here's a massively simplified representation of the situation, with business-sensitive information changed and hypothetical table names and fields:

        TopLevelGroup: PK_TLGroupId, DisplaysXOnBill, DisplaysYOnBill, IsInvoicedForJ, IsInvoicedForK
        SubGroup:      PK_SubGroupId, FK_ParentTopLevelGroupId, DisplaysXOnBill, DisplaysYOnBill, IsInvoicedForJ, IsInvoicedForK
        SubSubGroup:   PK_SubSubGroupId, FK_ParentSubGroupId, DisplaysXOnBill, DisplaysYOnBill, IsInvoicedForJ, IsInvoicedForK

    I haven't listed the types of the fields, as I don't think they matter much here. Rather than the four repeated fields in the example above, I'm actually looking at 86 repeated fields. For the most part, those fields genuinely do represent "facts" about the primary table entity, so the repetition is not automatically wrong for that reason alone.

    In addition, the "groups" represented here have a property-inheritance relationship: if DisplaysXOnBill is NULL in the SubSubGroup, it takes the value of DisplaysXOnBill from its parent, the SubGroup, and so on up to the TopLevelGroup. Further, the requirements will never extend the model beyond three levels, so there is no need for flexibility in that area.

    Is there a design smell from several tables that describe very similar entities having almost identical fields? If so, what might be a better design of the example above? (I'm using the phrase "design smell" to indicate a possible problem; of course, in any given situation, a particular design might well be the best solution. I'm looking for a more general answer about what might be wrong with this design and what the better design would be in that case.)

    Possibly related, but secondary, questions:

    - Is this database schema in a reasonably normal form (e.g. 3NF), insofar as can be told from the information I've provided? I can't see a problem with the requirements of 2NF and 3NF, except in their inheriting the requirements of 1NF. Is 1NF satisfied, though? Are repeating groups allowed in different tables?
    - Is there a best-practice method for implementing the inheritance relationship I require? The approach above feels clunky to me because any query on the SubSubGroup needs to join onto the SubGroup and TopLevelGroup tables to collect inherited facts, which makes even trivial queries against SubSubGroup rather long-winded (see the sketch below).

    There are, of course, political considerations to making a relatively large change like this. For the purposes of this question, I'm happy to ignore that in the interest of keeping the answers ring-fenced to the technical problem.
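
    For illustration, a sketch of how the inherited facts could be resolved once in a view so that everyday queries stay short; all names are taken from the simplified example above:

        CREATE VIEW SubSubGroupResolved AS
        SELECT ssg.PK_SubSubGroupId,
               COALESCE(ssg.DisplaysXOnBill, sg.DisplaysXOnBill, tlg.DisplaysXOnBill) AS DisplaysXOnBill,
               COALESCE(ssg.DisplaysYOnBill, sg.DisplaysYOnBill, tlg.DisplaysYOnBill) AS DisplaysYOnBill
               -- ...and likewise for the remaining inheritable fields
        FROM SubSubGroup   ssg
        JOIN SubGroup      sg  ON sg.PK_SubGroupId = ssg.FK_ParentSubGroupId
        JOIN TopLevelGroup tlg ON tlg.PK_TLGroupId = sg.FK_ParentTopLevelGroupId;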

    Read the article

  • NHibernate - is property lazy loading possible?

    - by Ben
    I've got some binary data that I store, and I was going to separate it out into its own table so it could be lazy loaded. However, I then came across this post by Ayende (http://ayende.com/Blog/archive/2010/01/27/nhibernate-new-feature-lazy-properties.aspx) which suggests that property lazy loading is now possible. I have added the lazy="true" attribute to my property mapping, but the field is still loaded from the database (I am using a simple text field to test). My query:

        return _session.CreateQuery("from Product")
                       .SetMaxResults(1)
                       .UniqueResult<Product>();

    Mapping:

        <property name="Description" type="string" column="FullDescription" lazy="true"/>

    Has anyone been able to get this working? Personally, I prefer this approach to having to add another table to my database.

    Read the article

  • SQL: many-to-many relationship, IN condition

    - by Maarten
    I have a table called transactions with a many-to-many relationship to items through the items_transactions table. I want to do something like this:

        SELECT "transactions".*
        FROM "transactions"
        INNER JOIN "items_transactions"
          ON "items_transactions".transaction_id = "transactions".id
        INNER JOIN "items"
          ON "items".id = "items_transactions".item_id
        WHERE (items.id IN (<list of items>))

    But this gives me all transactions associated with one or more of the items in the list, and I only want the transactions that are associated with all of those items. Any help would be appreciated.
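
    One common approach, sketched below, is relational division: count the distinct matching items per transaction and keep only the transactions that match the whole list (the two placeholders are left as in the question):

        SELECT t.id
        FROM transactions t
        INNER JOIN items_transactions it ON it.transaction_id = t.id
        WHERE it.item_id IN (<list of items>)
        GROUP BY t.id
        HAVING COUNT(DISTINCT it.item_id) = <number of items in the list>;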

    Read the article

  • Nested WPF DataGrids

    - by jvberg
    I have a DataGrid (from the toolkit) and I want to nest another DataGrid in the DataGrid.RowDetailsTemplate. The trick is that I want to bring back the data from one table in the main grid, and then, based on the row selection, go and get additional detail from a different table and show it in the DataGrid in the detail template. This is easy enough to do in 2 separate DataGrids, but I am having trouble getting it to work with the nested version. Is this even possible? If so, could someone point me in the right direction? I should note I am using LINQ to SQL classes to populate the data. Thanks for your consideration. -Joel

    Read the article

  • MySQL: updating a row and deleting the original in case it becomes a duplicate

    - by Silvio Donnini
    I have a simple table made up of two columns, col_A and col_B, with the primary key defined over both. I need to update some rows and assign to col_A values that may generate duplicates, for example:

        UPDATE `table` SET `col_A` = 66 WHERE `col_B` = 70

    This statement sometimes yields a duplicate-key error. I don't want to simply ignore the error with UPDATE IGNORE, because then the rows that generate the error would remain unchanged. Instead, I want them to be deleted when they would conflict with another row after being updated. I'd like to write something like:

        UPDATE `table` SET `col_A` = 66 WHERE `col_B` = 70 ON DUPLICATE KEY REPLACE

    which unfortunately isn't legal SQL, so I need help finding another way around it. Also, I'm using PHP and could consider a hybrid solution (i.e. part query, part PHP code), but keep in mind that I have to perform this update operation many millions of times. Thanks for your attention, Silvio. Reminder: UPDATE's syntax has problems with joins against the same table that is being updated.
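
    One possible workaround is the two-step sketch below: delete the rows that would collide, then run the plain UPDATE. It is not atomic, so it belongs inside a transaction, and the multi-table DELETE form is used precisely because of the self-join restriction mentioned above. It also assumes at most one row with col_B = 70 survives step 1; if several such rows remain, the updated rows would still collide with each other.

        -- 1) remove rows that would collide with an existing (66, col_B) row
        DELETE t1
        FROM `table` AS t1
        JOIN `table` AS t2
          ON  t2.`col_A` = 66
          AND t2.`col_B` = t1.`col_B`
        WHERE t1.`col_B` = 70
          AND t1.`col_A` <> 66;

        -- 2) the surviving rows can now be updated without a duplicate-key error
        UPDATE `table` SET `col_A` = 66 WHERE `col_B` = 70;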

    Read the article

  • Remove a certain value from string which keeps on changing

    - by user2971375
    I'm trying to build a utility that generates an INSERT script for SQL tables along with their relational tables. I have all the values in C#. Now I want to remove one column name and its value from the script, most probably the identity column. E.g. the string I have (which keeps changing with the table name and varies):

        INSERT Core.Customers ([customerId],[customername],[customeradress],[ordernumber])
        VALUES (123, N'Rahul', N'244 LIZ MORN', 2334)

    Now I know I have to remove customerId (sometimes it needs to be replaced with @somevariable). Please give me an efficient way to retrieve the customerId value and delete the column name and value. Conditions: the insert script's length and column names keep changing; the method should return the deleted value; and at some point the customer id can be the same as the order id, in which case my string.Remove method fails.

    Read the article

  • Sorting/Paginating/Filtering Complex Multi-AR Object Tables in Rails

    - by Matt Rogish
    I have a complex table pulled from an array of multiple ActiveRecord objects. The listing is a combined display of all of a particular user's "favorite" items (songs, messages, blog postings, whatever), and each item is a full-fledged AR object. My goal is to present the user with a simplified search, sort, and pagination interface. The user need not know that the Song has a singer and the Message has an author: to the end user, both entries in the table will be displayed as "User". Thus, the search box will simply be a dropdown list asking them what to search on (user name, created at, etc.). Internally, I would need to convert that to the appropriate object search, combine the results, and display them.

    I can, separately, do pagination (mislav's will_paginate), sorting, and filtering, but I'm having problems combining them. For example, if I paginate the combined list of items, the pagination plugin handles it just fine. It is not efficient, since the pagination happens in the app rather than in the DB, but let's assume the intended use case indicates that the vast majority of users will have fewer than 30 favorited items, and that server capabilities and all other behavior indicate this will not be a bottleneck.

    However, if I wish to sort the list, I cannot sort it via the pagination plugin, because it relies on the assumption that the result set is derived from a single SQL query with field names that are consistent throughout. Thus, I must sort the merged array in Ruby, e.g. @items.sort_by{ |i| i.whatever }. But since the items do not share common attribute names, I must first interrogate each object and then call the correct sort. For example, if the user wishes to sort by user name and the sorted object is a message, I sort by author, but if the object is a song, I sort by singer. This is all very gross and feels quite un-ruby-like. The same problem comes up with filtering: if the user filters on the "parent item" (the message's thread, the song's album), I must translate that to the appropriate collection-object method. Also gross.

    This is not the exact set-up but is close enough. Note that this is a legacy app, so changing it is quite difficult, although not impossible. Also, yes, there is some DRYing up that could be done, but don't focus on the style or elegance of the following code. Style/elegance of the SOLUTION is important, however! :D

    models:

        class User < ActiveRecord::Base
          ...
          has_and_belongs_to_many :favorite_messages, :class_name => "Message"
          has_and_belongs_to_many :favorite_songs, :class_name => "Song"
          has_many :authored_messages, :class_name => "Message"
          has_many :sung_songs, :class_name => "Song"
        end

        class Message < ActiveRecord::Base
          has_and_belongs_to_many :favorite_messages
          belongs_to :author, :class_name => "User"
          belongs_to :thread
        end

        class Song < ActiveRecord::Base
          has_and_belongs_to_many :favorite_songs
          belongs_to :singer, :class_name => "User"
          belongs_to :album
        end

    controller:

        def show
          u = User.find 123
          @items = Array.new
          @items << u.favorite_messages
          @items << u.favorite_songs
          # etc. etc.
          @items.flatten!
          @items = @items.sort_by{ |i| i.created_at }
          @items = @items.paginate :page => params[:page], :per_page => 20
        end

        def search
          # Assume the user is searching for a username like 'Bob'
          u = User.find 123
          @items = Array.new
          @items << u.favorite_messages.find( :all, :conditions => "LOWER( author ) LIKE LOWER('%bob%')" )
          @items << u.favorite_songs.find( :all, :conditions => "LOWER( singer ) LIKE ... " )
          # etc. etc.
          @items.flatten!
          @items = @items.sort_by{ |i| determine appropriate sorting based on user selection }
          @items = @items.paginate :page => params[:page], :per_page => 20
        end

    view:

        # index.html.erb
        ...
        <table>
          <tr>
            <th>Title (sort ASC/DESC links)</th>
            <th>Created By (sort ASC/DESC links)</th>
            <th>Collection Title (sort ASC/DESC links)</th>
            <th>Created At (sort ASC/DESC links)</th>
          </tr>
          <% @items.each do |item| %>
            <%= render { :partial => "message", :locals => item } if item.is_a? Message %>
            <%= render { :partial => "song", :locals => item } if item.is_a? Song %>
          <% end %>
        </table>

        # message.html.erb (shorthand, not real Ruby)
        # print out message title, author name, thread title, message created at

        # song.html.erb (shorthand)
        # print out song title, singer name, album title, song created at
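
    A hedged aside: one way to make the combined list sortable, filterable, and paginatable in the database is a UNION view with normalized column names. A sketch, with every table and column name guessed from the models above:

        CREATE VIEW favorite_items AS
        SELECT m.id, 'Message' AS item_type, m.title,
               u.name AS created_by, t.title AS collection_title, m.created_at
        FROM messages m
        JOIN users u   ON u.id = m.author_id
        JOIN threads t ON t.id = m.thread_id
        UNION ALL
        SELECT s.id, 'Song', s.title,
               u.name, a.title, s.created_at
        FROM songs s
        JOIN users u  ON u.id = s.singer_id
        JOIN albums a ON a.id = s.album_id;

    With every row exposing the same created_by / collection_title / created_at columns, a single query can sort, filter, and LIMIT/OFFSET uniformly, and will_paginate can run against one result set.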

    Read the article

  • Using memtables in sql. When is it reasonable and is it safe?

    - by Spiros
    I was just reading an update on a friend's project that mentioned using memtables to store data temporarily and then flush it to a table on disk. Up to now, I have never faced a situation where I would use a memtable, or where I would think its use would be beneficial, so I wonder: when would someone use memtables? What makes a memtable (apart from access speed) a reasonable choice? And how safe is it, even for temp data? There is always the limitation of available physical memory.
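
    If "memtable" here means something like MySQL's MEMORY storage engine, a minimal sketch of the staging-then-flush pattern described above (table names are hypothetical):

        -- contents live in RAM only and vanish on server restart: scratch data only
        CREATE TABLE hits_staging (
          page_id INT NOT NULL,
          hits    INT NOT NULL DEFAULT 0,
          PRIMARY KEY (page_id)
        ) ENGINE = MEMORY;

        -- periodically flush to a durable table, then reset
        INSERT INTO hits_archive (page_id, hits)
        SELECT page_id, hits FROM hits_staging;
        TRUNCATE TABLE hits_staging;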

    Read the article

  • LINQ DataLoadOptions - Retrieval of data via Fulltext/Broken Foreign Key Relationship

    - by Alex
    I've hit a brick wall. I have an SQL function:

        FUNCTION [dbo].[ContactsFTS] (@searchtext nvarchar(4000))
        RETURNS TABLE
        AS
        RETURN
          SELECT *
          FROM Contacts
          INNER JOIN CONTAINSTABLE(Contacts, *, @searchtext) AS KEY_TBL
            ON Contacts.Id = KEY_TBL.[KEY]

    which I am calling via LINQ:

        public IQueryable<ContactsFTSResult> SearchByFullText(String searchText)
        {
            return db.ContactsFTS(searchText);
        }

    I am projecting the ContactsFTSResult collection into a List<Contact>, which is then given to my viewmodel. Here is the problem: my Contacts table (and therefore the Contact object created via LINQ to SQL) has multiple FK relationships to other information; for example, Contact.BillingAddressId is an FK to an Address.Id. That information is missing after I do the full-text search (e.g. if I try to access Contact.BillingAddress, it is null). Can I add this information somehow via DataLoadOptions? I tried LoadWith<Contact>(c => c.BillingAddress), but this doesn't work, I assume because I'm calling the function instead of doing the whole query via LINQ.
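
    A hedged thought on the SQL side: because the function joins CONTAINSTABLE onto Contacts with SELECT *, the result shape is not the Contacts row itself. Returning only the entity's columns (a sketch, not verified against this mapping) may let the result map back onto Contact:

        SELECT Contacts.*
        FROM Contacts
        INNER JOIN CONTAINSTABLE(Contacts, *, @searchtext) AS KEY_TBL
          ON Contacts.Id = KEY_TBL.[KEY]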

    Read the article

  • SQL: Find difference between dates with grouping

    - by ajbeaven
    I have a problem that seems similar to this fellow's; I just want to display the data slightly differently. I'm pretty terrible with SQL, so I can't modify it to suit, but perhaps someone else can. My table looks similar to this (date format is dd/mm/yyyy):

        ID    User    Date_start    Role
        1     Andy    01/04/2010    A
        2     Andy    10/04/2010    B
        3     Andy    20/04/2010    A
        4     John    02/05/2010    A

    I want to show the total number of days that anyone was in a certain role. Users stay in a role until there is another entry in the table, and users can only be in one role at a time. So the summary data would look like this (assuming the current date is 04/05/2010):

        A: 26 days
        B: 10 days

    Thanks for any help :)
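
    A sketch of one approach, assuming MySQL date functions, a DATE-typed Date_start column, and a table named role_history: each stint ends at the user's next Date_start, or at "today" for the current stint. The day-counting convention (inclusive vs. exclusive endpoints) may need a +1 tweak to match the totals above:

        SELECT r.Role,
               SUM(DATEDIFF(
                     COALESCE((SELECT MIN(n.Date_start)
                                 FROM role_history n
                                WHERE n.User = r.User
                                  AND n.Date_start > r.Date_start),
                              '2010-05-04'),
                     r.Date_start)) AS days_in_role
        FROM role_history r
        GROUP BY r.Role;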

    Read the article

  • SQL select statement

    - by kwokwai
    Hi all, I have a table with two fields, Point and Level, with some sample data as follows:

        Point | Level
        ------+--------
           10 | Level 1
           20 | Level 2
           30 | Level 3
           40 | Level 4

    Suppose there is a user who has 25 points. To find the level this user is in, the SELECT statement I used was:

        SELECT Level FROM Table WHERE Point < 30 AND Point > 20;

    But this is a hard-coded SELECT statement: you can see the points 30 and 20 are fixed. I want to alter the statement so that it can be applied to all users with different point totals, but I don't know how to do it.
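
    One common pattern, sketched here against the table above (assuming Point marks the score at which each level begins), picks the highest threshold the user has reached, with the user's points as the single parameter:

        SELECT Level
        FROM `Table`
        WHERE Point <= 25      -- substitute the user's point total here
        ORDER BY Point DESC
        LIMIT 1;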

    Read the article

  • How to access stdClass variables: stdClass Object ( [max(id)] => 64 )

    - by Theopile
    I need the very last valid entry in a database table, which would be the row with the greatest primary key. So, using mysqli, my query is "SELECT MAX(id) FROM table LIMIT 1". This query returns the correct number (checked with print_r()), but I cannot figure out how to access it. Here is the main code; note that $this->link refers to a class holding a mysqli connection:

        $q = "select max(id) from stones limit 1";
        $qed = $this->link->query($q) or die(mysqli_error());
        if ($qed) {
            $row = $qed->fetch_object();
            print_r($row);
            echo $lastid = $row;  // here is the problem
        }

    The line print_r($row) echoes out "stdClass Object ( [max(id)] => 68 )".
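
    A hedged note: max(id) is not a valid PHP property name, so the usual fix is to alias the aggregate in SQL (sketch below), after which the value is readable as $row->max_id; with the original query it is also reachable as $row->{'max(id)'}:

        SELECT MAX(id) AS max_id FROM stones;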

    Read the article

  • Unidirectional OneToMany in Doctrine 2

    - by darja
    I have two Doctrine entities that look like this:

        /**
         * @Entity
         * @Table(name = "Locales")
         */
        class Locale
        {
            /**
             * @Id @Column(type="integer")
             * @GeneratedValue(strategy="IDENTITY")
             */
            private $id;

            /** @Column(length=2, name="Name", type="string") */
            private $code;
        }

        /**
         * @Entity
         * @Table(name = "localized")
         */
        class LocalizedStrings
        {
            /**
             * @Id @Column(type="integer")
             * @GeneratedValue(strategy="IDENTITY")
             */
            private $id;

            /** @Column(name="Locale", type="integer") */
            private $locale;

            /** @Column(name="SomeText", type="string", length=300) */
            private $someText;
        }

    I'd like to create a reference between these entities: LocalizedStrings needs a reference to Locale, but Locale doesn't need a reference back to LocalizedStrings. How do I write such a mapping in Doctrine 2?

    Read the article

  • Scraping paginated items from a website using scrapy

    - by Mridang Agarwalla
    I'm using Scrapy to scrape items from a site, but I can't work out how to implement this scraping pattern. The site I'm trying to scrape is a forum, and I scrape it once a day. Each page has a table containing posts; new posts are added to the top of the table, and as more posts are made, older posts move further back through the pages due to pagination. This is a very simple scenario, and we can assume the order of the posts never changes. I would like to scrape all the "new" records until the last post scraped yesterday is encountered. I have configured my spider to paginate endlessly; when it encounters yesterday's last scraped post, it should stop. How can I implement this? (My Scrapy installation works with my Django installation using django-dynamic-scraper.)

    Read the article

  • PHP: Strange behaviour while calling custom php functions

    - by baltusaj
    I am facing some strange behavior while coding in PHP with Flex. Let me explain the situation: I have two functions, let's say

        populateTable()  // puts some data in a table made with Flex
        createXML()      // creates an XML file which Fusion Charts uses to draw a chart

    Now, if I call populateTable() alone, the table gets populated with data, but if I call it together with createXML(), the table doesn't get populated while createXML() does its work, i.e. creates an XML file. Even if I run the following code, only the XML file gets generated and the table remains empty, although populateTable() is called before createXML(). Any idea what may be going wrong?

    MXML part:

        <mx:HTTPService id="userRequest" url="request.php" method="POST" resultFormat="e4x">
            <mx:request xmlns="">
                <getResult>send</getResult>
            </mx:request>
        </mx:HTTPService>

    and

        <mx:DataGrid id="dgUserRequest" dataProvider="{userRequest.lastResult.user}"
                     x="28.5" y="36" width="525" height="250">
            <mx:columns>
                <mx:DataGridColumn headerText="No."  dataField="no"/>
                <mx:DataGridColumn headerText="Name" dataField="name"/>
                <mx:DataGridColumn headerText="Age"  dataField="age"/>
            </mx:columns>
        </mx:DataGrid>

    PHP part:

        <?php
        function initialize($username, $password, $database)
        {
            # Connect to the database
            $link = mysql_connect("localhost", $username, $password);
            if (!$link) {
                die('Could not connect to the database: ' . mysql_error());
            }

            # Select the database
            $db_selected = mysql_select_db($database, $link);
            if (!$db_selected) {
                die('Could not select the DB: ' . mysql_error());
            }

            // populateTable();
            createXML();
        }

        function populateTable()
        {
            if ($_POST['getResult'] == 'send') {
                $Result = mysql_query("SELECT * FROM session");
                $Return = "<Users>";
                $no = 1;
                while ($row = mysql_fetch_object($Result)) {
                    $Return .= "<user><no>" . $no . "</no><name>" . $row->name
                             . "</name><age>" . $row->age . "</age><salary>"
                             . $row->salary . "</salary></user>";
                    $no = $no + 1;
                }
                $Return .= "</Users>";
                mysql_free_result($Result);
                print($Return);
            }
        }

        function createXML()
        {
            $users = array(
                "0" => array("", 0),
                "1" => array("Obama", 0),
                "2" => array("Zardari", 0),
                "3" => array("Imran Khan", 0),
                "4" => array("Ahmadenijad", 0)
            );
            // only Obama and Ahmadenijad are selected; the XML file will
            // contain info related to them only
            $selectedUsers = array(1, 4);

            // Extract the salaries of the selected users
            $size = count($selectedUsers);
            for ($i = 0; $i < $size; $i++) {
                $salary = 0;
                $name   = $users[$selectedUsers[$i]][0];
                $result = mysql_query("SELECT salary FROM userInfo WHERE name='$name'");
                if ($row = mysql_fetch_array($result)) {
                    $salary = $row['salary'];
                }
                $users[$selectedUsers[$i]][1] = $salary;
            }

            // Create the XML string
            $chartContent = "<chart caption=\"Users Vs Salaries\" formatNumberScale=\"0\" pieSliceDepth=\"30\" startingAngle=\"125\">";
            for ($i = 0; $i < $size; $i++) {
                $chartContent .= "<set label=\"" . $users[$selectedUsers[$i]][0]
                               . "\" value=\"" . $users[$selectedUsers[$i]][1] . "\"/>";
            }
            $chartContent .= "<styles>"
                           . "<definition>"
                           . "<style type=\"font\" name=\"CaptionFont\" size=\"16\" color=\"666666\"/>"
                           . "<style type=\"font\" name=\"SubCaptionFont\" bold=\"0\"/>"
                           . "</definition>"
                           . "<application>"
                           . "<apply toObject=\"caption\" styles=\"CaptionFont\"/>"
                           . "<apply toObject=\"SubCaption\" styles=\"SubCaptionFont\"/>"
                           . "</application>"
                           . "</styles>"
                           . "</chart>";

            $file_handle = fopen('ChartData.xml', 'w');
            fwrite($file_handle, $chartContent);
            fclose($file_handle);
        }

        initialize("root", "", "hiddenpeak");
        ?>

    Read the article

  • How to create a backup from SqlAlchemy?

    - by swilliams
    I'm writing a Pylons app and am trying to create a simple backup system in which every table is serialized and tarred up into a single file for an administrator to download, to be used to restore the app should something bad happen. I can serialize my table data just fine using the SQLAlchemy serializer, and I can deserialize it fine as well, but I can't figure out how to commit those changes back to the database. In order to serialize my data, I do this:

        from myproject.model.meta import Session
        from sqlalchemy.ext.serializer import loads, dumps

        q = Session.query(MyTable)
        serialized_data = dumps(q.all())

    To test things out, I go ahead and truncate MyTable, then attempt to restore from serialized_data:

        from myproject.model import meta

        restore_q = loads(serialized_data, meta.metadata, Session)

    This doesn't seem to do anything. I've tried calling Session.commit after the fact and individually walking through all the objects in restore_q and adding them, but nothing seems to work. What am I missing? Or is there a better way to do what I'm aiming for? I don't want to shell out and touch the database directly, since SQLAlchemy supports different database engines.

    Read the article

  • Future proof Primary Key design in postgresql

    - by John P
    I've always used either auto-generated columns or sequences for my primary keys in the past. With the current system I'm working on, there is the possibility of eventually having to partition the data, which has never been a requirement before. Knowing that I may need to partition the data in the future, is there any advantage to using UUIDs for PKs instead of the database's built-in sequences? If so, is there a design pattern that can safely generate relatively short keys (say, 6 characters instead of the usual long form, e.g. e6709870-5cbc-11df-a08a-0800200c9a66)? 36^6 keys per table is more than sufficient for any table I can imagine. I will be using the keys in URLs, so conciseness is important.
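
    One possible pattern (a sketch only; the function and sequence names are made up) keeps an ordinary sequence for uniqueness and exposes a short base-36 rendering of it:

        -- hypothetical helper: base-36 encode a sequence value into a short key
        CREATE FUNCTION to_base36(n bigint) RETURNS text AS $$
        DECLARE
          digits text := '0123456789abcdefghijklmnopqrstuvwxyz';
          result text := '';
        BEGIN
          IF n = 0 THEN RETURN '0'; END IF;
          WHILE n > 0 LOOP
            result := substr(digits, (n % 36)::int + 1, 1) || result;
            n := n / 36;
          END LOOP;
          RETURN result;
        END;
        $$ LANGUAGE plpgsql IMMUTABLE;

        SELECT to_base36(nextval('my_table_id_seq'));  -- e.g. 'k3x9a'

    Note that, unlike UUIDs, sequence-derived keys are still guessable and still funnel all inserts through one generator, so on their own they do not solve cross-partition key generation.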

    Read the article
