Search Results

Search found 29194 results on 1168 pages for 'non null'.


  • mysql foreign key problem.

    - by JP19
    Hi, What is wrong with the foreign key addition here:

        mysql> create table notes (
            id int(11) NOT NULL auto_increment PRIMARY KEY,
            note_type_id smallint(5) NOT NULL,
            data TEXT NOT NULL,
            created_date datetime NOT NULL,
            modified_date timestamp NOT NULL on update now()
        ) Engine=InnoDB;
        Query OK, 0 rows affected (0.08 sec)

        mysql> create table notetypes (
            id smallint(5) NOT NULL auto_increment PRIMARY KEY,
            type varchar(255) NOT NULL UNIQUE
        ) Engine=InnoDB;
        Query OK, 0 rows affected (0.00 sec)

        mysql> alter table `notes` add constraint foreign key(`note_type_id`)
            references `notetypes`.`id` on update cascade on delete restrict;
        ERROR 1005 (HY000): Can't create table './admin/#sql-43e_b762.frm' (errno: 150)

    Thanks JP
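
    A likely cause, offered as a reading of the error rather than part of the original question: MySQL's REFERENCES clause takes table(column) syntax, so `notetypes`.`id` is parsed as a table named `id` in a database named `notetypes`, which does not exist, and InnoDB reports errno 150. A hedged rewrite of the failing statement:

        alter table `notes`
            add constraint foreign key (`note_type_id`)
            references `notetypes` (`id`)
            on update cascade on delete restrict;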

    Read the article

  • Returning multiple aggregate functions as rows

    - by SDLFunTimes
    I need some help formulating a select statement. I need to select the total quantity shipped for each distinct part color, so the result should be one row per color with the color name and the total. Here's my schema:

        create table s (
            sno    char(5)  not null,
            sname  char(20) not null,
            status smallint,
            city   char(15),
            primary key (sno)
        );

        create table p (
            pno    char(6)  not null,
            pname  char(20) not null,
            color  char(6),
            weight smallint,
            city   char(15),
            primary key (pno)
        );

        create table sp (
            sno char(5)  not null,
            pno char(6)  not null,
            qty integer  not null,
            primary key (sno, pno)
        );
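
    One way to formulate it (a sketch, assuming the total is the sum of sp.qty per part color across the p and sp tables):

        select p.color, sum(sp.qty) as total_shipped
        from p
        join sp on sp.pno = p.pno
        group by p.color;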

    Read the article

  • Header Guard Issues - Getting Swallowed Alive

    - by gjnave
    I'm totally at wit's end: I can't figure out my dependency issues. I've read countless posts and blogs and reworked my code so many times that I can't even remember what almost worked and what didn't. I continually get not only redefinition errors, but also class-not-defined errors. I rework the header guards and remove some errors simply to find others. I somehow got everything down to one error but then even that got broken while trying to fix it. Would you please help me figure out the problem?

    card.cpp

        #include <iostream>
        #include <cctype>
        #include "card.h"
        using namespace std;

        // ====DECL======
        Card::Card() {
            abilities = 0;
            flavorText = 0;
            keywords = 0;
            artifact = 0;
            classType = new char[strlen("Card") + 1];
            classType = "Card";
        }

        Card::~Card (){
            delete name;
            delete abilities;
            delete flavorText;
            artifact = NULL;
        }

        // ------------
        Card::Card(const Card & to_copy) {
            name = new char[strlen(to_copy.name) + 1]; // creating dynamic array
            strcpy(to_copy.name, name);
            type = to_copy.type;
            color = to_copy.color;
            manaCost = to_copy.manaCost;
            abilities = new char[strlen(to_copy.abilities) + 1];
            strcpy(abilities, to_copy.abilities);
            flavorText = new char[strlen(to_copy.flavorText) + 1];
            strcpy(flavorText, to_copy.flavorText);
            keywords = new char[strlen(to_copy.keywords) + 1];
            strcpy(keywords, to_copy.keywords);
            inPlay = to_copy.inPlay;
            tapped = to_copy.tapped;
            enchanted = to_copy.enchanted;
            cursed = to_copy.cursed;
            if (to_copy.type != ARTIFACT)
                artifact = to_copy.artifact;
        }

        // ====DECL=====
        int Card::equipArtifact(Artifact* to_equip){
            artifact = to_equip;
        }

        Artifact * Card::unequipArtifact(Card * unequip_from){
            Artifact * to_remove = artifact;
            artifact = NULL;
            return to_remove; // put card in hand or in graveyard
        }

        int Card::enchant( Card * to_enchant){
            to_enchant->enchanted = true;
            cout << "enchanted" << endl;
        }

        int Card::disenchant( Card * to_disenchant){
            to_disenchant->enchanted = false;
            cout << "Enchantment Removed" << endl;
        }

        // ========DECL=====
        Spell::Spell() {
            currPower = basePower;
            currToughness = baseToughness;
            classType = new char[strlen("Spell") + 1];
            classType = "Spell";
        }

        Spell::~Spell(){}

        // ---------------
        Spell::Spell(const Spell & to_copy){
            currPower = to_copy.currPower;
            basePower = to_copy.basePower;
            currToughness = to_copy.currToughness;
            baseToughness = to_copy.baseToughness;
        }

        // =========
        int Spell::attack( Spell *& blocker ){
            blocker->currToughness -= currPower;
            currToughness -= blocker->currToughness;
        }

        //==========
        int Spell::counter (Spell *& to_counter){
            cout << to_counter->name << " was countered by " << name << endl;
        }

        // ============
        int Spell::heal (Spell *& to_heal, int amountOfHealth){
            to_heal->currToughness += amountOfHealth;
        }

        // -------
        Creature::Creature(){
            summoningSick = true;
        }

        // =====DECL======
        Land::Land(){
            color = NON;
            classType = new char[strlen("Land") + 1];
            classType = "Land";
        }

        // ------
        int Land::generateMana(int mana){
            // ...
        // }

    card.h

        #ifndef CARD_H
        #define CARD_H
        #include <cctype>
        #include <iostream>
        #include "conception.h"

        class Artifact;
        class Spell;

        class Card : public Conception {
        public:
            Card();
            Card(const Card &);
            ~Card();
        protected:
            char* name;
            enum CardType { INSTANT, CREATURE, LAND, ENCHANTMENT, ARTIFACT, PLANESWALKER};
            enum CardColor { WHITE, BLUE, BLACK, RED, GREEN, NON };
            CardType type;
            CardColor color;
            int manaCost;
            char* abilities;
            char* flavorText;
            char* keywords;
            bool inPlay;
            bool tapped;
            bool cursed;
            bool enchanted;
            Artifact* artifact;
            virtual int enchant( Card * );
            virtual int disenchant (Card * );
            virtual int equipArtifact( Artifact* );
            virtual Artifact* unequipArtifact(Card * );
        };

        // ------------
        class Spell: public Card {
        public:
            Spell();
            ~Spell();
            Spell(const Spell &);
        protected:
            virtual int heal( Spell *&, int );
            virtual int attack( Spell *& );
            virtual int counter( Spell*& );
            int currToughness;
            int baseToughness;
            int currPower;
            int basePower;
        };

        class Land: public Card {
        public:
            Land();
            ~Land();
        protected:
            virtual int generateMana(int);
        };

        class Forest: public Land {
        public:
            Forest();
            ~Forest();
        protected:
            int generateMana();
        };

        class Creature: public Spell {
        public:
            Creature();
            ~Creature();
        protected:
            bool summoningSick;
        };

        class Sorcery: public Spell {
        public:
            Sorcery();
            ~Sorcery();
        protected:
        };

        #endif

    conception.h -- this is an "uber class" from which everything derives

        class Conception{
        public:
            Conception();
            ~Conception();
        protected:
            char* classType;
        };

    conception.cpp

        Conception::Conception{
            Conception(){
                classType = new char[11];
                char = "Conception";
        }

    game.cpp -- this is an incomplete class as of this code

        #include <iostream>
        #include <cctype>
        #include "game.h"
        #include "player.h"

        Battlefield::Battlefield(){
            card = 0;
        }

        Battlefield::~Battlefield(){
            delete card;
        }

        Battlefield::Battlefield(const Battlefield & to_copy){
        }

        // ===========
        /*
        class Game(){
        public:
            Game();
            ~Game();
        protected:
            Player** player; // for multiple players
            Battlefield* root; // for battlefield
            getPlayerMove(); // ask player what to do
            addToBattlefield();
            removeFromBattlefield();
            sendAttack();
        }
        */
        #endif

    game.h

        #ifndef GAME_H
        #define GAME_H
        #include "list.h"

        class CardList();

        class Battlefield : CardList{
        public:
            Battlefield();
            ~Battlefield();
        protected:
            Card* card; // make an array
        };

        class Game : Conception{
        public:
            Game();
            ~Game();
        protected:
            Player** player; // for multiple players
            Battlefield* root; // for battlefield
            getPlayerMove(); // ask player what to do
            addToBattlefield();
            removeFromBattlefield();
            sendAttack();
            Battlefield* field;
        };

    list.cpp

        #include <iostream>
        #include <cctype>
        #include "list.h"

        // ==========
        LinkedList::LinkedList(){
            root = new Node;
            classType = new char[strlen("LinkedList") + 1];
            classType = "LinkedList";
        };

        LinkedList::~LinkedList(){
            delete root;
        }

        LinkedList::LinkedList(const LinkedList & obj) {
            // code to copy
        }

        // ---------
        // =========
        int LinkedList::delete_all(Node* root){
            if (root = 0) return 0;
            delete_all(root->next);
            root = 0;
        }

        int LinkedList::add( Conception*& is){
            if (root == 0){
                root = new Node;
                root->next = 0;
            } else {
                Node * curr = root;
                root = new Node;
                root->next=curr;
                root->it = is;
            }
        }

        int LinkedList::remove(Node * root, Node * prev, Conception* is){
            if (root = 0) return -1;
            if (root->it == is){
                root->next = root->next;
                return 0;
            }
            remove(root->next, root, is);
            return 0;
        }

        Conception* LinkedList::find(Node*& root, const Conception* is, Conception* holder = NULL) {
            if (root==0) return NULL;
            if (root->it == is){
                return root->it;
            }
            holder = find(root->next, is);
            return holder;
        }

        Node* LinkedList::goForward(Node * root){
            if (root==0) return root;
            if (root->next == 0) return root;
            else return root->next;
        }

        // ============
        Node* LinkedList::goBackward(Node * root){
            root = root->prev;
        }

    list.h

        #ifndef LIST_H
        #define LIST_H
        #include <iostream>
        #include "conception.h"

        class Node : public Conception {
        public:
            Node() : next(0), prev(0), it(0) {
                it = 0;
                classType = new char[strlen("Node") + 1];
                classType = "Node";
            };
            ~Node(){
                delete it;
                delete next;
                delete prev;
            }
            Node* next;
            Node* prev;
            Conception* it; // generic object
        };

        // ----------------------
        class LinkedList : public Conception {
        public:
            LinkedList();
            ~LinkedList();
            LinkedList(const LinkedList&);
            friend bool operator== (Conception& thing_1, Conception& thing_2 );
        protected:
            virtual int delete_all(Node*);
            virtual int add( Conception*& );
            // virtual Conception* find(Node *&, const Conception*, Conception* );
            // virtual int remove( Node *, Node *, Conception* ); // removes question with keyword
            int display_all(node*& );
            virtual Node* goForward(Node *);
            virtual Node* goBackward(Node *);
            Node* root;
            // write copy constructor
        };

        // =============
        class CircularLinkedList : public LinkedList {
        public:
            // CircularLinkedList();
            // ~CircularLinkedList();
            // CircularLinkedList(const CircularLinkedList &);
        };

        class DoubleLinkedList : public LinkedList {
        public:
            // DoubleLinkedList();
            // ~DoubleLinkedList();
            // DoubleLinkedList(const DoubleLinkedList &);
        protected:
        };
        // END OF LIST Hierarchy
        #endif

    player.cpp

        #include <iostream>
        #include "player.h"
        #include "list.h"
        using namespace std;

        Library::Library(){
            root = 0;
        }

        Library::~Library(){
            delete card;
        }

        // ====DECL=========
        Player::~Player(){
            delete fname;
            delete lname;
            delete deck;
        }

        Wizard::~Wizard(){
            delete mana;
            delete rootL;
            delete rootH;
        }

        // =====Player======
        void Player::changeName(const char[] first, const char[] last){
            char* backup1 = new char[strlen(fname) + 1];
            strcpy(backup1, fname);
            char* backup2 = new char[strlen(lname) + 1];
            strcpy(backup1, lname);
            if (first != NULL){
                fname = new char[strlen(first) +1];
                strcpy(fname, first);
            }
            if (last != NULL){
                lname = new char[strlen(last) +1];
                strcpy(lname, last);
            }
            return 0;
        }

        // ==========
        void Player::seeStats(Stats*& to_put){
            to_put->wins = stats->wins;
            to_put->losses = stats->losses;
            to_put->winRatio = stats->winRatio;
        }

        // ----------
        void Player::displayDeck(const LinkedList* deck){
        }

        // ================
        void CardList::findCard(Node* root, int id, NodeCard*& is){
            if (root == NULL) return;
            if (root->it.id == id){
                copyCard(root->it, is);
                return;
            }
            else findCard(root->next, id, is);
        }

        // --------
        void CardList::deleteAll(Node* root){
            if (root == NULL) return;
            deleteAll(root->next);
            root->next = NULL;
        }

        // ---------
        void CardList::removeCard(Node* root, int id){
            if (root == NULL) return;
            if (root->id = id){
                root->prev->next = root->next; // the prev link of root, looks back to next of prev node, and sets to where root next is pointing
            }
            return;
        }

        // ---------
        void CardList::addCard(Card* to_add){
            if (!root){
                root = new Node;
                root->next = NULL;
                root->prev = NULL;
                root->it = &to_add;
                return;
            } else {
                Node* original = root;
                root = new Node;
                root->next = original;
                root->prev = NULL;
                original->prev = root;
            }
        }

        // -----------
        void CardList::displayAll(Node*& root){
            if (root == NULL) return;
            cout << "Card Name: " << root->it.cardName;
            cout << " || Type: " << root->it.type << endl;
            cout << " --------------- " << endl;
            if (root->classType == "Spell"){
                cout << "Base Power: " << root->it.basePower;
                cout << " || Current Power: " << root->it.currPower << endl;
                cout << "Base Toughness: " << root->it.baseToughness;
                cout << " || Current Toughness: " << root->it.currToughness << endl;
            }
            cout << "Card Type: " << root->it.currPower;
            cout << " || Card Color: " << root->it.color << endl;
            cout << "Mana Cost" << root->it.manaCost << endl;
            cout << "Keywords: " << root->it.keywords << endl;
            cout << "Flavor Text: " << root->it.flavorText << endl;
            cout << " ----- Class Type: " << root->it.classType << " || ID: " << root->it.id << " ----- " << endl;
            cout << " ******************************************" << endl;
            cout << endl;

        // -------
        void CardList::copyCard(const Card& to_get, Card& put_to){
            put_to.type = to_get.type;
            put_to.color = to_get.color;
            put_to.manaCost = to_get.manaCost;
            put_to.inPlay = to_get.inPlay;
            put_to.tapped = to_get.tapped;
            put_to.class = to_get.class;
            put_to.id = to_get.id;
            put_to.enchanted = to_get.enchanted;
            put_to.artifact = to_get.artifact;
            put_to.class = to_get.class;
            put.to.abilities = new char[strlen(to_get.abilities) +1];
            strcpy(put_to.abilities, to_get.abilities);
            put.to.keywords = new char[strlen(to_get.keywords) +1];
            strcpy(put_to.keywords, to_get.keywords);
            put.to.flavorText = new char[strlen(to_get.flavorText) +1];
            strcpy(put_to.flavorText, to_get.flavorText);
            if (to_get.class = "Spell"){
                put_to.baseToughness = to_get.baseToughness;
                put_to.basePower = to_get.basePower;
                put_to.currToughness = to_get.currToughness;
                put_to.currPower = to_get.currPower;
            }
        }

        // ----------

    player.h

        #ifndef player.h
        #define player.h
        #include "list.h"

        // ============
        class CardList() : public LinkedList(){
        public:
            CardList();
            ~CardList();
        protected:
            virtual void findCard(Card&);
            virtual void addCard(Card* );
            virtual void removeCard(Node* root, int id);
            virtual void deleteAll();
            virtual void displayAll();
            virtual void copyCard(const Conception*, Node*&);
            Node* root;
        }

        // ---------
        class Library() : public CardList(){
        public:
            Library();
            ~Library();
        protected:
            Card* card;
            int numCards;
            findCard(Card&); // get Card and fill empty template
        }

        // -----------
        class Deck() : public CardList(){
        public:
            Deck();
            ~Deck();
        protected:
            enum deckColor { WHITE, BLUE, BLACK, RED, GREEN, MIXED };
            char* deckName;
        }

        // ===============
        class Mana(int amount) : public Conception {
        public:
            Mana() : displayTotal(0), classType(0) {
                displayTotal = 0;
                classType = new char[strlen("Mana") + 1];
                classType = "Mana";
            };
        protected:
            int accrued;
            void add();
            void remove();
            int displayTotal();
        }

        inline Mana::add(){ accrued += 1; }
        inline Mana::remove(){ accrued -= 1; }
        inline Mana::displayTotal(){ return accrued; }

        // ================
        class Stats() : public Conception {
        public:
            friend class Player;
            friend class Game;
            Stats() : wins(0), losses(0), winRatio(0) {
                wins = 0;
                losses = 0;
                if ( (wins + losses != 0)
                    winRatio = wins / (wins + losses);
                else winRatio = 0;
                classType = new char[strlen("Stats") + 1];
                classType = "Stats";
            }
        protected:
            int wins;
            int losses;
            float winRatio;
            void int getStats(Stats*& );
        }

        // ==================
        class Player() : public Conception{
        public:
            Player() : wins(0), losses(0), winRatio(0) {
                fname = NULL;
                lname = NULL;
                stats = NULL;
                CardList = NULL;
                classType = new char[strlen("Player") + 1];
                classType = "Player";
            };
            ~Player();
            Player(const Player & obj);
        protected: // member variables
            char* fname;
            char* lname;
            Stats stats; // holds previous game statistics
            CardList* deck[]; // hold multiple decks that player might use - put ll in this
        private: // member functions
            void changeName(const char[], const char[]);
            void shuffleDeck(int);
            void seeStats(Stats*& );
            void displayDeck(int);
            chooseDeck();
        }

        // --------------------
        class Wizard(Card) : public Player(){
        public:
            Wizard() : { mana = NULL; rootL = NULL; rootH = NULL};
            ~Wizard();
        protected:
            playCard(const Card &);
            removeCard(Card &);
            attackWithCard(Card &);
            enchantWithCard(Card &);
            disenchantWithCard(Card &);
            healWithCard(Card &);
            equipWithCard(Card &);
            Mana* mana[];
            Library* rootL; // Library
            Library* rootH; // Hand
        }
        #endif

    Read the article

  • mysql select query optimization

    - by Saharsh Shah
    I have two tables, testa and testb:

        CREATE TABLE `testa` (
            `id`   INT(10) NOT NULL AUTO_INCREMENT,
            `name` VARCHAR(50) DEFAULT NULL,
            PRIMARY KEY (`id`)
        );

        CREATE TABLE `testb` (
            `id`   INT(10) NOT NULL AUTO_INCREMENT,
            `name` VARCHAR(50) DEFAULT NULL,
            `aid1` INT(10) DEFAULT NULL,
            `aid2` INT(10) DEFAULT NULL,
            `aid3` INT(10) DEFAULT NULL,
            PRIMARY KEY (`id`)
        );

    Currently I am running the query below to retrieve all rows where id in the testa table matches any of the columns aid1, aid2, aid3 in testb. The query returns accurate results, but it takes at least 30 seconds to execute, which is too much. I have also tried to optimise my query using UNION but failed to do so.

        SELECT a.id, a.name, b.name, b.id
        FROM testb b
        INNER JOIN testa a
            ON b.aid1 = a.id
            OR b.aid2 = a.id
            OR b.aid3 = a.id;

    How do I optimize my query so its total execution time is within 2-3 seconds? Thanks in advance...
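
    One rewrite worth trying (a sketch, not from the original question): OR conditions across different columns often defeat index use, so each branch gets its own join, and an index on each aid column lets every branch seek.

        ALTER TABLE testb ADD INDEX (aid1), ADD INDEX (aid2), ADD INDEX (aid3);

        SELECT a.id, a.name, b.name, b.id FROM testb b INNER JOIN testa a ON b.aid1 = a.id
        UNION
        SELECT a.id, a.name, b.name, b.id FROM testb b INNER JOIN testa a ON b.aid2 = a.id
        UNION
        SELECT a.id, a.name, b.name, b.id FROM testb b INNER JOIN testa a ON b.aid3 = a.id;

    UNION removes duplicate rows, so the combined result should match what the single OR join returns.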

    Read the article

  • Type hinting and optional attributes in PHP

    - by Álvaro G. Vicario
    I have a class method that deals with dates:

        public function setAvailability(DateTime $start, DateTime $end){
        }

    Since item availability can have a lower limit, an upper limit, both or none, I'd like to make setAvailability() accept NULL values as well. However, the NULL constant violates the type hinting:

        $foo->setAvailability(NULL, $end);

    triggers:

        Catchable fatal error: Argument 1 passed to Foo::setAvailability() must be an instance of DateTime, null given

    And, as far as I know, I cannot have a DateTime instance with no value. (Can I?) For a reason I cannot grasp, this seems to work:

        public function setAvailability(DateTime $start=NULL, DateTime $end=NULL){
        }
        ...
        $foo->setAvailability(NULL, $end);

    But it looks like a hack that works by pure chance. How would you deal with unset dates in PHP classes?

    Read the article

  • Will creating index help in this case

    - by The King
    I'm still a learning user of SQL Server 2005. Here is my table structure:

        CREATE TABLE [dbo].[Trn_PostingGroups](
            [ControlGroup] [char](5) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
            [PracticeCode] [char](5) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
            [ScanDate] [smalldatetime] NULL,
            [DepositDate] [smalldatetime] NULL,
            [NameOfFile] [varchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
            [DepositValue] [decimal](11, 2) NULL,
            [RecordStatus] [char](1) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
            CONSTRAINT [PK_Trn_PostingGroups_1] PRIMARY KEY CLUSTERED
            (
                [ControlGroup] ASC,
                [PracticeCode] ASC
            ) WITH (IGNORE_DUP_KEY = OFF) ON [PRIMARY]
        ) ON [PRIMARY]

    Scenario 1: Suppose I have a query like this...

        Select * from Trn_PostingGroups where PracticeCode = 'ABC'

    Will indexing on PracticeCode separately help me in making my query faster?

    Scenario 2:

        Select * from Trn_PostingGroups
        where ControlGroup = 12701 and PracticeCode = 'ABC' and NameOfFile = 'FileName1'

    Will indexing on NameOfFile separately help me in making my query faster?
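
    For what it's worth (my sketch, not part of the original question): in scenario 1 the clustered primary key leads on ControlGroup, so a predicate on PracticeCode alone cannot seek it, and a separate nonclustered index should help. In scenario 2 the query already supplies both primary key columns, which identify at most one row, so an extra index on NameOfFile is unlikely to make it faster. A hypothetical index for scenario 1:

        CREATE NONCLUSTERED INDEX IX_Trn_PostingGroups_PracticeCode
        ON dbo.Trn_PostingGroups (PracticeCode)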

    Read the article

  • MySQL how to display data from two tables.

    - by gew
    I'm trying to display the username of the person who has submitted the most articles, but I don't know how to do it using MySQL & PHP. Can someone help me? Here is the MySQL code:

        CREATE TABLE users (
            user_id  INT UNSIGNED NOT NULL AUTO_INCREMENT,
            username VARCHAR(255) DEFAULT NULL,
            pass     CHAR(40) NOT NULL,
            PRIMARY KEY (user_id)
        );

        CREATE TABLE users_articles (
            id       INT UNSIGNED NOT NULL AUTO_INCREMENT,
            user_id  INT UNSIGNED NOT NULL,
            title    TEXT NOT NULL,
            acontent LONGTEXT NOT NULL,
            PRIMARY KEY (id)
        );

    Here is the code I have so far:

        $mysqli = mysqli_connect("localhost", "root", "", "sitename");
        $dbc = mysqli_query($mysqli, "SELECT COUNT(*) as coun, user_id FROM users_articles
                                      GROUP BY user_id ORDER BY coun DESC LIMIT 1");
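
    A hedged way to finish the job in SQL alone (a sketch using the schema above; it joins back to users so the username comes out together with the count):

        SELECT u.username, COUNT(*) AS coun
        FROM users_articles ua
        INNER JOIN users u ON u.user_id = ua.user_id
        GROUP BY u.user_id, u.username
        ORDER BY coun DESC
        LIMIT 1;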

    Read the article

  • Association in Entity Framework 4

    - by Marsharks
    I have two tables, a problem table and a problem history table. As you would expect, a problem can have many histories associated with it.

        CREATE TABLE [dbo].[Problem](
            [Last_Update] [datetime] NULL,
            [Problem_Id] [int] NOT NULL,
            [Incident_Count] [int] NULL
        )

        ALTER TABLE [dbo].[Problem]
        ADD CONSTRAINT [PK_Problem] PRIMARY KEY CLUSTERED ( [Problem_Id] ASC )

        CREATE TABLE [dbo].[Problem_History](
            [Last_Update] [datetime] NULL,
            [Problem_Id] [int] NOT NULL,
            [Severity_Chg_Flag] [char](1) NULL
        )

        ALTER TABLE [dbo].[Problem_History] ADD [Create_DateTime] [datetime] NOT NULL

        ALTER TABLE [dbo].[Problem_History] WITH CHECK
        ADD CONSTRAINT [FK_Problem_History_Problem] FOREIGN KEY([Problem_Id])
        REFERENCES [dbo].[Problem] ([Problem_Id])

    The problem is that when I drag this into an Entity Model, the associations are not included. Any ideas? I would like to point out that the problem history table has no separate key of its own; it shares the problem id.
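
    A likely explanation (my suggestion, not part of the original question): Entity Framework only builds associations between entities that have primary keys, and Problem_History has none, so the model treats it as keyless and drops the relationship. A hedged sketch that gives the history table a composite key:

        ALTER TABLE [dbo].[Problem_History]
        ADD CONSTRAINT [PK_Problem_History]
        PRIMARY KEY CLUSTERED ([Problem_Id] ASC, [Create_DateTime] ASC)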

    Read the article

  • Change master table PK and update related table FK (changing PK from Autoincrement to UUID on Mysql)

    - by eleonzx
    I have two related tables: Groups and Clients. Clients belong to Groups, so I have a foreign key "group_id" that references the group a client belongs to. I'm changing the Group id from an autoincrement to a UUID. So what I need is to generate a UUID for each Group and update the Clients table at once to reflect the changes and keep the records related. Is there a way to do this with a multiple-table update on MySQL? Adding table definitions for clarification:

        CREATE TABLE `groups` (
            `id` char(36) NOT NULL,
            `name` varchar(255) DEFAULT NULL,
            `created` datetime DEFAULT NULL,
            `modified` datetime DEFAULT NULL,
            PRIMARY KEY (`id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8$$

        CREATE TABLE `clients` (
            `id` char(36) NOT NULL,
            `name` varchar(255) NOT NULL,
            `group_id` char(36) DEFAULT NULL,
            `active` tinyint(1) DEFAULT '1',
            PRIMARY KEY (`id`),
            KEY `fkgp` (`group_id`),
            CONSTRAINT `fkgp` FOREIGN KEY (`group_id`) REFERENCES `groups` (`id`)
                ON DELETE NO ACTION ON UPDATE CASCADE
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8$$
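
    One observation of mine, not part of the original question: the foreign key above is declared ON UPDATE CASCADE, so InnoDB will propagate a changed group id to clients.group_id automatically. If that constraint is in place, a single-table update may be all that is needed:

        UPDATE `groups` SET `id` = UUID();

    MySQL evaluates UUID() once per row here, so each group receives its own new id and the cascade keeps every client pointing at the right group.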

    Read the article

  • phpMyAdmin "No database selected" MySQL

    - by user1751660
    I downloaded a MySQL backup file and promptly imported it into MAMP's phpMyAdmin. I got this return:

        Error
        SQL query:

        --
        -- Database: `mysql`
        --

        -- --------------------------------------------------------

        --
        -- Table structure for table `columns_priv`
        --

        CREATE TABLE IF NOT EXISTS `columns_priv` (
            `Host` CHAR( 60 ) COLLATE utf8_bin NOT NULL DEFAULT '',
            `Db` CHAR( 64 ) COLLATE utf8_bin NOT NULL DEFAULT '',
            `User` CHAR( 16 ) COLLATE utf8_bin NOT NULL DEFAULT '',
            `Table_name` CHAR( 64 ) COLLATE utf8_bin NOT NULL DEFAULT '',
            `Column_name` CHAR( 64 ) COLLATE utf8_bin NOT NULL DEFAULT '',
            `Timestamp` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
            `Column_priv` SET( 'Select', 'Insert', 'Update', 'References' ) CHARACTER SET utf8 NOT NULL DEFAULT '',
            PRIMARY KEY ( `Host`, `Db`, `User`, `Table_name`, `Column_name` )
        ) ENGINE = MYISAM DEFAULT CHARSET = utf8 COLLATE = utf8_bin COMMENT = 'Column privileges';

        MySQL said: #1046 - No database selected

    I did not alter the .sql file at all. Any hints on how I can get this puppy going locally? Thanks!
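
    The usual explanation (my suggestion, not part of the original question) is that the dump never selects a schema, so the CREATE TABLE statements have no database to land in. Either choose a database in phpMyAdmin before importing, or add a selection to the top of the file, for example:

        CREATE DATABASE IF NOT EXISTS `mysql`;
        USE `mysql`;

    Since this dump recreates MySQL's own privilege tables, importing it over a working installation is worth a second thought.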

    Read the article

  • How can I eager-load a child collection mapped to a non-primary key in NHibernate 2.1.2?

    - by David Rubin
    Hi, I have two objects with a many-to-many relationship between them, as follows:

        public class LeftHandSide
        {
            public LeftHandSide()
            {
                Name = String.Empty;
                Rights = new HashSet<RightHandSide>();
            }

            public int Id { get; set; }
            public string Name { get; set; }
            public ICollection<RightHandSide> Rights { get; set; }
        }

        public class RightHandSide
        {
            public RightHandSide()
            {
                OtherProp = String.Empty;
                Lefts = new HashSet<LeftHandSide>();
            }

            public int Id { get; set; }
            public string OtherProp { get; set; }
            public ICollection<LeftHandSide> Lefts { get; set; }
        }

    and I'm using a legacy database, so my mappings look like this. Notice that LeftHandSide and RightHandSide are associated by a different column than RightHandSide's primary key.

        <class name="LeftHandSide" table="[dbo].[lefts]" lazy="false">
          <id name="Id" column="ID" unsaved-value="0">
            <generator class="identity" />
          </id>
          <property name="Name" not-null="true" />
          <set name="Rights" table="[dbo].[lefts2rights]">
            <key column="leftId" />
            <!-- THIS IS THE IMPORTANT BIT: I MUST USE PROPERTY-REF -->
            <many-to-many class="RightHandSide" column="rightProp" property-ref="OtherProp" />
          </set>
        </class>

        <class name="RightHandSide" table="[dbo].[rights]" lazy="false">
          <id name="Id" column="id" unsaved-value="0">
            <generator class="identity" />
          </id>
          <property name="OtherProp" column="otherProp" />
          <set name="Lefts" table="[dbo].[lefts2rights]">
            <!-- THIS IS THE IMPORTANT BIT: I MUST USE PROPERTY-REF -->
            <key column="rightProp" property-ref="OtherProp" />
            <many-to-many class="LeftHandSide" column="leftId" />
          </set>
        </class>

    The problem comes when I go to do a query:

        LeftHandSide lhs = _session.CreateCriteria<LeftHandSide>()
            .Add(Expression.IdEq(13))
            .UniqueResult<LeftHandSide>();

    works just fine. But

        LeftHandSide lhs = _session.CreateCriteria<LeftHandSide>()
            .Add(Expression.IdEq(13))
            .SetFetchMode("Rights", FetchMode.Join)
            .UniqueResult<LeftHandSide>();

    throws an exception (see below). Interestingly,

        RightHandSide rhs = _session.CreateCriteria<RightHandSide>()
            .Add(Expression.IdEq(127))
            .SetFetchMode("Lefts", FetchMode.Join)
            .UniqueResult<RightHandSide>();

    seems to be perfectly fine as well.

        NHibernate.Exceptions.GenericADOException
        Message: Error performing LoadByUniqueKey[SQL: SQL not available]
        Source: NHibernate
        StackTrace:
        c:\opt\nhibernate\2.1.2\source\src\NHibernate\Type\EntityType.cs(563,0): at NHibernate.Type.EntityType.LoadByUniqueKey(String entityName, String uniqueKeyPropertyName, Object key, ISessionImplementor session)
        c:\opt\nhibernate\2.1.2\source\src\NHibernate\Type\EntityType.cs(428,0): at NHibernate.Type.EntityType.ResolveIdentifier(Object value, ISessionImplementor session, Object owner)
        c:\opt\nhibernate\2.1.2\source\src\NHibernate\Type\EntityType.cs(300,0): at NHibernate.Type.EntityType.NullSafeGet(IDataReader rs, String[] names, ISessionImplementor session, Object owner)
        c:\opt\nhibernate\2.1.2\source\src\NHibernate\Persister\Collection\AbstractCollectionPersister.cs(695,0): at NHibernate.Persister.Collection.AbstractCollectionPersister.ReadElement(IDataReader rs, Object owner, String[] aliases, ISessionImplementor session)
        c:\opt\nhibernate\2.1.2\source\src\NHibernate\Collection\Generic\PersistentGenericSet.cs(54,0): at NHibernate.Collection.Generic.PersistentGenericSet`1.ReadFrom(IDataReader rs, ICollectionPersister role, ICollectionAliases descriptor, Object owner)
        c:\opt\nhibernate\2.1.2\source\src\NHibernate\Loader\Loader.cs(706,0): at NHibernate.Loader.Loader.ReadCollectionElement(Object optionalOwner, Object optionalKey, ICollectionPersister persister, ICollectionAliases descriptor, IDataReader rs, ISessionImplementor session)
        c:\opt\nhibernate\2.1.2\source\src\NHibernate\Loader\Loader.cs(385,0): at NHibernate.Loader.Loader.ReadCollectionElements(Object[] row, IDataReader resultSet, ISessionImplementor session)
        c:\opt\nhibernate\2.1.2\source\src\NHibernate\Loader\Loader.cs(326,0): at NHibernate.Loader.Loader.GetRowFromResultSet(IDataReader resultSet, ISessionImplementor session, QueryParameters queryParameters, LockMode[] lockModeArray, EntityKey optionalObjectKey, IList hydratedObjects, EntityKey[] keys, Boolean returnProxies)
        c:\opt\nhibernate\2.1.2\source\src\NHibernate\Loader\Loader.cs(453,0): at NHibernate.Loader.Loader.DoQuery(ISessionImplementor session, QueryParameters queryParameters, Boolean returnProxies)
        c:\opt\nhibernate\2.1.2\source\src\NHibernate\Loader\Loader.cs(236,0): at NHibernate.Loader.Loader.DoQueryAndInitializeNonLazyCollections(ISessionImplementor session, QueryParameters queryParameters, Boolean returnProxies)
        c:\opt\nhibernate\2.1.2\source\src\NHibernate\Loader\Loader.cs(1649,0): at NHibernate.Loader.Loader.DoList(ISessionImplementor session, QueryParameters queryParameters)
        c:\opt\nhibernate\2.1.2\source\src\NHibernate\Loader\Loader.cs(1568,0): at NHibernate.Loader.Loader.ListIgnoreQueryCache(ISessionImplementor session, QueryParameters queryParameters)
        c:\opt\nhibernate\2.1.2\source\src\NHibernate\Loader\Loader.cs(1562,0): at NHibernate.Loader.Loader.List(ISessionImplementor session, QueryParameters queryParameters, ISet`1 querySpaces, IType[] resultTypes)
        c:\opt\nhibernate\2.1.2\source\src\NHibernate\Loader\Criteria\CriteriaLoader.cs(73,0): at NHibernate.Loader.Criteria.CriteriaLoader.List(ISessionImplementor session)
        c:\opt\nhibernate\2.1.2\source\src\NHibernate\Impl\SessionImpl.cs(1936,0): at NHibernate.Impl.SessionImpl.List(CriteriaImpl criteria, IList results)
        c:\opt\nhibernate\2.1.2\source\src\NHibernate\Impl\CriteriaImpl.cs(246,0): at NHibernate.Impl.CriteriaImpl.List(IList results)
        c:\opt\nhibernate\2.1.2\source\src\NHibernate\Impl\CriteriaImpl.cs(237,0): at NHibernate.Impl.CriteriaImpl.List()
        c:\opt\nhibernate\2.1.2\source\src\NHibernate\Impl\CriteriaImpl.cs(398,0): at NHibernate.Impl.CriteriaImpl.UniqueResult()
        c:\opt\nhibernate\2.1.2\source\src\NHibernate\Impl\CriteriaImpl.cs(263,0): at NHibernate.Impl.CriteriaImpl.UniqueResult[T]()
        D:\proj\CMS3\branches\nh_auth\DomainModel2Tests\Authorization\TempTests.cs(46,0): at CMS.DomainModel.Authorization.TempTests.Test1()

        Inner Exception
        System.Collections.Generic.KeyNotFoundException
        Message: The given key was not present in the dictionary.
        Source: mscorlib
        StackTrace:
        at System.ThrowHelper.ThrowKeyNotFoundException()
        at System.Collections.Generic.Dictionary`2.get_Item(TKey key)
        c:\opt\nhibernate\2.1.2\source\src\NHibernate\Persister\Entity\AbstractEntityPersister.cs(2047,0): at NHibernate.Persister.Entity.AbstractEntityPersister.GetAppropriateUniqueKeyLoader(String propertyName, IDictionary`2 enabledFilters)
        c:\opt\nhibernate\2.1.2\source\src\NHibernate\Persister\Entity\AbstractEntityPersister.cs(2037,0): at NHibernate.Persister.Entity.AbstractEntityPersister.LoadByUniqueKey(String propertyName, Object uniqueKey, ISessionImplementor session)
        c:\opt\nhibernate\2.1.2\source\src\NHibernate\Type\EntityType.cs(552,0): at NHibernate.Type.EntityType.LoadByUniqueKey(String entityName, String uniqueKeyPropertyName, Object key, ISessionImplementor session)

    I'm using NHibernate 2.1.2 and I've been debugging into the NHibernate source, but I'm coming up empty. Any suggestions? Thanks so much!

    Read the article

  • Default /etc/apt/sources.list?

    - by piemesons
    I need the default source list for Ubuntu 10.04. Can anybody help me? Here is mine:

        # Ubuntu supported packages
        deb http://archive.ubuntu.com/ubuntu/ lucid main restricted multiverse universe
        deb http://archive.ubuntu.com/ubuntu/ lucid-backports main restricted universe multiverse
        deb http://archive.ubuntu.com/ubuntu/ lucid-updates main restricted multiverse universe
        deb http://security.ubuntu.com/ubuntu lucid-security main restricted universe multiverse
        deb http://security.ubuntu.com/ubuntu lucid-proposed main restricted universe multiverse
        deb-src http://archive.ubuntu.com/ubuntu/ lucid main restricted multiverse universe
        deb-src http://archive.ubuntu.com/ubuntu/ lucid-backports main restricted universe multiverse
        deb-src http://archive.ubuntu.com/ubuntu/ lucid-updates main restricted multiverse universe
        deb-src http://security.ubuntu.com/ubuntu lucid-security main restricted universe multiverse
        deb-src http://security.ubuntu.com/ubuntu lucid-proposed main restricted universe multiverse

        # Canonical Commercial Repository
        deb http://archive.canonical.com/ubuntu lucid partner
        deb http://archive.canonical.com/ubuntu lucid-backports partner
        deb http://archive.canonical.com/ubuntu lucid-updates partner
        deb http://archive.canonical.com/ubuntu lucid-security partner
        deb http://archive.canonical.com/ubuntu lucid-proposed partner
        deb-src http://archive.canonical.com/ubuntu lucid partner
        deb-src http://archive.canonical.com/ubuntu lucid-backports partner
        deb-src http://archive.canonical.com/ubuntu lucid-updates partner
        deb-src http://archive.canonical.com/ubuntu lucid-security partner
        deb-src http://archive.canonical.com/ubuntu lucid-proposed partner

        # medibuntu
        deb http://packages.medibuntu.org/ lucid free non-free
        deb-src http://packages.medibuntu.org/ lucid free non-free

        # PlayOnLinux
        deb http://deb.playonlinux.com/ lucid main

        # opera
        deb http://deb.opera.com/opera/ lenny non-free

        # google
        deb http://dl.google.com/linux/deb/ stable non-free main

        # Dropbox Official Source
        deb http://linux.dropbox.com/ubuntu karmic main

        # Skype
        deb http://download.skype.com/linux/repos/debian/ stable non-free

    This is the error I am getting (sudo apt-get update):

        Get:9 http://dl.google.com stable/main Packages [1,076B]
        Err http://ppa.launchpad.net lucid/main Packages
        404 Not Found
        Get:10 http://dl.google.com stable/main Packages [735B]

    and finally:

        Fetched 9,724B in 3s (2,645B/s)
        W: Failed to fetch http://ppa.launchpad.net/bisig/ppa/ubuntu/dists/lucid/main/binary-i386/Packages.gz 404 Not Found
        E: Some index files failed to download, they have been ignored, or old ones used instead.

    Read the article

  • PPA causing 404 error?

    - by piemesons
    I need the default source list for Ubuntu 10.04. Can anybody help me? Here is mine:

        # Ubuntu supported packages
        deb http://archive.ubuntu.com/ubuntu/ lucid main restricted multiverse universe
        deb http://archive.ubuntu.com/ubuntu/ lucid-backports main restricted universe multiverse
        deb http://archive.ubuntu.com/ubuntu/ lucid-updates main restricted multiverse universe
        deb http://security.ubuntu.com/ubuntu lucid-security main restricted universe multiverse
        deb http://security.ubuntu.com/ubuntu lucid-proposed main restricted universe multiverse
        deb-src http://archive.ubuntu.com/ubuntu/ lucid main restricted multiverse universe
        deb-src http://archive.ubuntu.com/ubuntu/ lucid-backports main restricted universe multiverse
        deb-src http://archive.ubuntu.com/ubuntu/ lucid-updates main restricted multiverse universe
        deb-src http://security.ubuntu.com/ubuntu lucid-security main restricted universe multiverse
        deb-src http://security.ubuntu.com/ubuntu lucid-proposed main restricted universe multiverse

        # Canonical Commercial Repository
        deb http://archive.canonical.com/ubuntu lucid partner
        deb http://archive.canonical.com/ubuntu lucid-backports partner
        deb http://archive.canonical.com/ubuntu lucid-updates partner
        deb http://archive.canonical.com/ubuntu lucid-security partner
        deb http://archive.canonical.com/ubuntu lucid-proposed partner
        deb-src http://archive.canonical.com/ubuntu lucid partner
        deb-src http://archive.canonical.com/ubuntu lucid-backports partner
        deb-src http://archive.canonical.com/ubuntu lucid-updates partner
        deb-src http://archive.canonical.com/ubuntu lucid-security partner
        deb-src http://archive.canonical.com/ubuntu lucid-proposed partner

        # medibuntu
        deb http://packages.medibuntu.org/ lucid free non-free
        deb-src http://packages.medibuntu.org/ lucid free non-free

        # PlayOnLinux
        deb http://deb.playonlinux.com/ lucid main

        # opera
        deb http://deb.opera.com/opera/ lenny non-free

        # google
        deb http://dl.google.com/linux/deb/ stable non-free main

        # Dropbox Official Source
        deb http://linux.dropbox.com/ubuntu karmic main

        # Skype
        deb http://download.skype.com/linux/repos/debian/ stable non-free

    This is the error I am getting (sudo apt-get update):

        Get:9 http://dl.google.com stable/main Packages [1,076B]
        Err http://ppa.launchpad.net lucid/main Packages
        404 Not Found
        Get:10 http://dl.google.com stable/main Packages [735B]

    and finally:

        Fetched 9,724B in 3s (2,645B/s)
        W: Failed to fetch http://ppa.launchpad.net/bisig/ppa/ubuntu/dists/lucid/main/binary-i386/Packages.gz 404 Not Found
        E: Some index files failed to download, they have been ignored, or old ones used instead.

    Read the article

  • Integrate Bing Search API into ASP.Net application

    - by sreejukg
    A couple of months back, I wrote an article about how to integrate the Bing Search engine (API 2.0) with an ASP.Net website. You can refer to the article here: http://weblogs.asp.net/sreejukg/archive/2012/04/07/integrate-bing-api-for-search-inside-asp-net-web-application.aspx

    Things are changing rapidly in the tech world and Bing has also changed! The Bing Search API 2.0 will work until August 1, 2012; after that it will not return results. Shocked? Don't worry: the API has moved to the Windows Azure Marketplace, where you can sign up and continue using it, and there is a free version available based on your usage. In this article, I am going to explain how you can integrate the new Bing API that is available in the Windows Azure Marketplace with your website. You can access the Windows Azure Marketplace from the below link:

    https://datamarket.azure.com/

    There are lots of applications available for you to subscribe to and use. Bing is one of them. You can find the new Bing Search API at the below link:

    https://datamarket.azure.com/dataset/5BA839F1-12CE-4CCE-BF57-A49D98D29A44

    To get access to the Bing Search API, first you need to register an account with the Windows Azure Marketplace. Sign in to the Windows Azure Marketplace site using your Windows Live account. Once you sign in with your Windows Live account, you need to register for a Windows Azure Marketplace account. From the Windows Azure Marketplace, you will see the sign in button in the top right of the page. Clicking on the sign in button will take you to the Windows Live ID authentication page. You can enter a Windows Live ID here to log in. Once logged in you will see the registration page for the Windows Azure Marketplace.

    You can agree or disagree to the email address usage by Microsoft. I believe selecting the check box means you will get email about what is happening in the Windows Azure Marketplace. Click on the continue button once you are done. On the next page, you should accept the terms of use; it is not optional, you must agree to the terms and conditions. Scroll down the page, select the I agree checkbox and click on the Register button. Now you are a registered member of the Windows Azure Marketplace and can subscribe to data applications.

    In order to use the Bing API in your application, you must obtain your Account Key. The previous version of Bing required an API key; the current version uses an Account Key instead. Once you are logged in to the Windows Azure Marketplace, go to the "My Account" section in the top menu. From the My Account section, you can manage your subscriptions and Account Keys. Account Keys will be used by your applications to access the subscriptions from the marketplace. Click on the My Account link; you can see Account Keys in the left menu, where you can add an account key or use the default Account Key available. Creating an account key is a very simple process, and you can remove the account keys you create if necessary.

    The next step is to subscribe to the Bing Search API. At this moment, Bing offers 2 APIs for search. The available options are as follows:

    1. Bing Search API - https://datamarket.azure.com/dataset/5ba839f1-12ce-4cce-bf57-a49d98d29a44
    2. Bing Search API – Web Results only - https://datamarket.azure.com/dataset/8818f55e-2fe5-4ce3-a617-0b8ba8419f65

    The difference is that the latter will give you only web results, whereas with the other you can specify the source type such as image, video, web, news, etc. Carefully choose the API based on your application requirements. In this article, I am going to use the Web Results Only API, but the steps will be similar for both.

    Go to the API page https://datamarket.azure.com/dataset/8818f55e-2fe5-4ce3-a617-0b8ba8419f65; you can see the subscription options on the right side, and at the bottom of the page you can see the free option. Since I am going to use the free option, just click the Sign Up link for it. Select the I agree check box and click on the Sign Up button. You will get a receipt page that details your subscription. Now you are ready to use the Bing Search API – Web Results.

    The next step is to integrate the API into your ASP.Net application. If you go to the Search API page (as well as the receipt page), you can see a .Net C# Class Library link; click on the link and you will get a code file named "BingSearchContainer.cs". In the following sections I am going to demonstrate the use of the Bing Search API from an ASP.Net application.

    Create an empty ASP.Net web application. Now add the downloaded code file ("BingSearchContainer.cs") to the project: right click your project in solution explorer, Add -> Existing Item, then browse to the downloaded location, select the "BingSearchContainer.cs" file and add it to the project. To build the code file you need to add a reference to the following library:

        System.Data.Services.Client

    You can find the library in the .Net tab when you select Add -> Reference. Try to build your project now; it should build without any errors.

    Add an ASP.Net page to the project. I have included a text box and a button, then a GridView on the page. The idea is to search the text entered and display the results in the GridView. In the button click event handler for the search button, I have used code that queries the Bing container and binds the results (the full handler, with paging added, appears later in this article). Now run your project, enter some text in the text box and click the Search button; you will see the results coming from Bing, cool. I entered the text "Microsoft" in the textbox, clicked on the button and got the results back.

    Searching Specific Websites

    If you want to search a particular website, you pass the site url with site:<site url name>, and if you have more sites, use a pipe (|). E.g. the following search query

        site:microsoft.com | site:adobe.com design

    will search the word design and return the results from Microsoft.com and Adobe.com. See the sample code that searches only Microsoft.com for the text entered in the above sample:

        var webResults = bingContainer.Web("site:www.Microsoft.com " + txtSearch.Text, null, null, null, null, null, null);

    Paging the results returned by the API

    By default the Bing API will return 100 results based on your query. The default code file that you downloaded from Bing doesn't include any option for this, but you can modify the downloaded code to perform paging. The Bing API supports two parameters: $top (for the number of results to return) and $skip (for the number of records to skip). So if you want the 3rd page of results with page size = 10, you need to pass $top = 10 and $skip = 20. Open BingSearchContainer.cs in the editor. You can see the Web method in it as follows:

        public DataServiceQuery<WebResult> Web(String Query, String Market, String Adult, Double? Latitude, Double? Longitude, String WebFileType, String Options)
        {

    In the method signature, I have added two more parameters:

        public DataServiceQuery<WebResult> Web(String Query, String Market, String Adult, Double? Latitude, Double? Longitude, String WebFileType, String Options, int resultCount, int pageNo)
        {

    and in the method, you need to pass the parameters to the query variable:

        query = query.AddQueryOption("$top", resultCount);
        query = query.AddQueryOption("$skip", (pageNo - 1) * resultCount);
        return query;

    Note that I didn't perform any validation, but you need to check conditions such as resultCount and pageNo being greater than or equal to 1. If the parameters are not valid, the Bing Search API will throw an error. Now see the following code in the SearchPage.aspx.cs file:

        protected void btnSearch_Click(object sender, EventArgs e)
        {
            var bingContainer = new Bing.BingSearchContainer(new Uri("https://api.datamarket.azure.com/Bing/SearchWeb/"));
            // replace this value with your account key
            var accountKey = "your key";
            // the next line configures the bingContainer to use your credentials.
            bingContainer.Credentials = new NetworkCredential(accountKey, accountKey);
            var webResults = bingContainer.Web("site:microsoft.com " + txtSearch.Text, null, null, null, null, null, null, 3, 2);
            lstResults.DataSource = webResults;
            lstResults.DataBind();
        }

    The above code will return 3 results starting from the second page (by skipping the first 3 results).

    Bing provides complete integration to its offerings. When you develop search based applications, you can use the power of Bing to perform the search. Integrating the Bing Search API into an ASP.Net application is a simple process, and without investing much time you can develop a good search based application. Make sure you read the terms of use before designing the application and decide which API usage is suitable for you.

    Further readings:

    BING API Migration Guide http://go.microsoft.com/fwlink/?LinkID=248077
    Bing API FAQ http://go.microsoft.com/fwlink/?LinkID=252146
    Bing API Schema Guide http://go.microsoft.com/fwlink/?LinkID=252151

    Read the article

  • Something for the weekend - Whats the most complex query?

    - by simonsabin
    Whenever I teach about SQL Server performance tuning I try to get across the message that there is no such thing as a table. Does that sound odd? Well it isn't, trust me. Rather than tables you need to consider structures. You have:

    1. Heaps
    2. Indexes (b-trees)

    Some people split indexes in two, clustered and non-clustered; this I feel confuses the situation, as people associate clustered indexes with sorting but don't associate non-clustered indexes with sorting, and this is wrong. Clustered and non-clustered indexes are the same b-tree structure (and even more so with SQL 2005), with the leaf pages sorted in a linked list according to the keys of the index. The difference is that non-clustered indexes include in their structure either the clustered key(s), or the row identifier for the row in the table (see http://sqlblog.com/blogs/kalen_delaney/archive/2008/03/16/nonclustered-index-keys.aspx for more details). Beyond that they are the same: they have key columns, which are stored on the root and intermediary pages, and included columns, which are on the leaf level.

    The reason this is important is that this is how the optimiser sees the world, which means it can use any of these structures to resolve your query. Even if your query only accesses one table, the optimiser can access multiple structures to get your results. One commonly sees this with a non-clustered index scan and then a key lookup (clustered index seek), but importantly it's not restricted to just using one non-clustered index and the clustered index or heap, and that's the challenge for the weekend.

    So the challenge for the weekend is to produce the most complex single table query. For those clever bods amongst you that are thinking, great, I will just use lots of XQuery functions, sorry, these are the rules:

    1. You have to use a table from AdventureWorks (2005 or 2008)
    2. You can add whatever indexes you like, but you must document these
    3. You cannot use XQuery, Spatial, HierarchyId, Full Text or any open rowset function
    4. You can only reference your table once, i.e. a FROM clause with ONE table and no JOINs
    5. No sub queries

    The aim of this is to show how the optimiser can use multiple structures to build the results of a query, and also to highlight why the optimiser is doing that. How many structures can you get the optimiser to use? As an example, create these two indexes on AdventureWorks2008:

        create index IX_Person_Person on Person.Person (lastName, FirstName, NameStyle, PersonType)

        create index IX_Person_Person on Person.Person (BusinessentityId, ModifiedDate) with drop_existing

        select lastName, ModifiedDate
        from Person.Person
        where LastName = 'Smith'

    You will see that the optimiser has decided not to access the underlying clustered index of the table but to use the two indexes above to resolve the query. This highlights how the optimiser considers all storage structures (clustered indexes, non-clustered indexes and heaps) when trying to resolve a query.

    So are you up to the challenge for the weekend to produce the most complex single table query? The prize is a pdf version of a popular SQL Server book, or a physical book if you live in the UK.

    Read the article

  • How to: Check which table is the biggest, in SQL Server

    - by AngelEyes
    The company I work with had its DB double in size lately, so I needed to find out which tables were the biggest. I found this on the web, and decided it's worth remembering! Taken from http://www.sqlteam.com/article/finding-the-biggest-tables-in-a-database, the code is from http://www.sqlteam.com/downloads/BigTables.sql

        /**************************************************************************************
        *
        *  BigTables.sql
        *  Bill Graziano (SQLTeam.com)
        *  [email protected]
        *  v1.1
        *
        **************************************************************************************/

        DECLARE @id INT
        DECLARE @type CHARACTER(2)
        DECLARE @pages INT
        DECLARE @dbname SYSNAME
        DECLARE @dbsize DEC(15, 0)
        DECLARE @bytesperpage DEC(15, 0)
        DECLARE @pagesperMB DEC(15, 0)

        CREATE TABLE #spt_space
          (
             objid    INT NULL,
             ROWS     INT NULL,
             reserved DEC(15) NULL,
             data     DEC(15) NULL,
             indexp   DEC(15) NULL,
             unused   DEC(15) NULL
          )

        SET nocount ON

        -- Create a cursor to loop through the user tables
        DECLARE c_tables CURSOR FOR
          SELECT id
          FROM   sysobjects
          WHERE  xtype = 'U'

        OPEN c_tables

        FETCH NEXT FROM c_tables INTO @id

        WHILE @@FETCH_STATUS = 0
          BEGIN
              /* Code from sp_spaceused */
              INSERT INTO #spt_space
                          (objid,
                           reserved)
              SELECT objid = @id,
                     SUM(reserved)
              FROM   sysindexes
              WHERE  indid IN ( 0, 1, 255 )
                     AND id = @id

              SELECT @pages = SUM(dpages)
              FROM   sysindexes
              WHERE  indid < 2
                     AND id = @id

              SELECT @pages = @pages + Isnull(SUM(used), 0)
              FROM   sysindexes
              WHERE  indid = 255
                     AND id = @id

              UPDATE #spt_space
              SET    data = @pages
              WHERE  objid = @id

              /* index: sum(used) where indid in (0, 1, 255) - data */
              UPDATE #spt_space
              SET    indexp = (SELECT SUM(used)
                               FROM   sysindexes
                               WHERE  indid IN ( 0, 1, 255 )
                                      AND id = @id) - data
              WHERE  objid = @id

              /* unused: sum(reserved) - sum(used) where indid in (0, 1, 255) */
              UPDATE #spt_space
              SET    unused = reserved - (SELECT SUM(used)
                                          FROM   sysindexes
                                          WHERE  indid IN ( 0, 1, 255 )
                                                 AND id = @id)
              WHERE  objid = @id

              UPDATE #spt_space
              SET    ROWS = i.ROWS
              FROM   sysindexes i
              WHERE  i.indid < 2
                     AND i.id = @id
                     AND objid = @id

              FETCH NEXT FROM c_tables INTO @id
          END

        SELECT TOP 25 table_name = (SELECT LEFT(name, 25)
                                    FROM   sysobjects
                                    WHERE  id = objid),
                      ROWS = CONVERT(CHAR(11), ROWS),
                      reserved_kb = Ltrim(Str(reserved * d.low / 1024., 15, 0) + ' ' + 'KB'),
                      data_kb = Ltrim(Str(data * d.low / 1024., 15, 0) + ' ' + 'KB'),
                      index_size_kb = Ltrim(Str(indexp * d.low / 1024., 15, 0) + ' ' + 'KB'),
                      unused_kb = Ltrim(Str(unused * d.low / 1024., 15, 0) + ' ' + 'KB')
        FROM   #spt_space,
               MASTER.dbo.spt_values d
        WHERE  d.NUMBER = 1
               AND d.TYPE = 'E'
        ORDER  BY reserved DESC

        DROP TABLE #spt_space

        CLOSE c_tables

        DEALLOCATE c_tables
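
    For newer systems, a shorter alternative (a sketch of mine, not part of the original article; it assumes SQL Server 2005 or later, where sys.dm_db_partition_stats replaces the old sysindexes bookkeeping):

        SELECT TOP (25)
               t.name AS table_name,
               SUM(CASE WHEN ps.index_id IN (0, 1) THEN ps.row_count ELSE 0 END) AS [rows],
               SUM(ps.reserved_page_count) * 8 AS reserved_kb
        FROM   sys.dm_db_partition_stats AS ps
               JOIN sys.tables AS t ON t.[object_id] = ps.[object_id]
        GROUP  BY t.name
        ORDER  BY reserved_kb DESC;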

    Read the article

  • Beware Sneaky Reads with Unique Indexes

    - by Paul White NZ
    A few days ago, Sandra Mueller (twitter | blog) asked a question using twitter’s #sqlhelp hash tag: “Might SQL Server retrieve (out-of-row) LOB data from a table, even if the column isn’t referenced in the query?” Leaving aside trivial cases (like selecting a computed column that does reference the LOB data), one might be tempted to say that no, SQL Server does not read data you haven’t asked for.  In general, that’s quite correct; however there are cases where SQL Server might sneakily retrieve a LOB column… Example Table Here’s a T-SQL script to create that table and populate it with 1,000 rows: CREATE TABLE dbo.LOBtest ( pk INTEGER IDENTITY NOT NULL, some_value INTEGER NULL, lob_data VARCHAR(MAX) NULL, another_column CHAR(5) NULL, CONSTRAINT [PK dbo.LOBtest pk] PRIMARY KEY CLUSTERED (pk ASC) ); GO DECLARE @Data VARCHAR(MAX); SET @Data = REPLICATE(CONVERT(VARCHAR(MAX), 'x'), 65540);   WITH Numbers (n) AS ( SELECT ROW_NUMBER() OVER (ORDER BY (SELECT 0)) FROM master.sys.columns C1, master.sys.columns C2 ) INSERT LOBtest WITH (TABLOCKX) ( some_value, lob_data ) SELECT TOP (1000) N.n, @Data FROM Numbers N WHERE N.n <= 1000; Test 1: A Simple Update Let’s run a query to subtract one from every value in the some_value column: UPDATE dbo.LOBtest WITH (TABLOCKX) SET some_value = some_value - 1; As you might expect, modifying this integer column in 1,000 rows doesn’t take very long, or use many resources.  The STATITICS IO and TIME output shows a total of 9 logical reads, and 25ms elapsed time.  The query plan is also very simple: Looking at the Clustered Index Scan, we can see that SQL Server only retrieves the pk and some_value columns during the scan: The pk column is needed by the Clustered Index Update operator to uniquely identify the row that is being changed.  The some_value column is used by the Compute Scalar to calculate the new value.  (In case you are wondering what the Top operator is for, it is used to enforce SET ROWCOUNT). Test 2: Simple Update with an Index Now let’s create a nonclustered index keyed on the some_value column, with lob_data as an included column: CREATE NONCLUSTERED INDEX [IX dbo.LOBtest some_value (lob_data)] ON dbo.LOBtest (some_value) INCLUDE ( lob_data ) WITH ( FILLFACTOR = 100, MAXDOP = 1, SORT_IN_TEMPDB = ON ); This is not a useful index for our simple update query; imagine that someone else created it for a different purpose.  Let’s run our update query again: UPDATE dbo.LOBtest WITH (TABLOCKX) SET some_value = some_value - 1; We find that it now requires 4,014 logical reads and the elapsed query time has increased to around 100ms.  The extra logical reads (4 per row) are an expected consequence of maintaining the nonclustered index. The query plan is very similar to before (click to enlarge): The Clustered Index Update operator picks up the extra work of maintaining the nonclustered index. The new Compute Scalar operators detect whether the value in the some_value column has actually been changed by the update.  SQL Server may be able to skip maintaining the nonclustered index if the value hasn’t changed (see my previous post on non-updating updates for details).  Our simple query does change the value of some_data in every row, so this optimization doesn’t add any value in this specific case. The output list of columns from the Clustered Index Scan hasn’t changed from the one shown previously: SQL Server still just reads the pk and some_data columns.  Cool. 
Overall then, adding the nonclustered index hasn’t had any startling effects, and the LOB column data still isn’t being read from the table.  Let’s see what happens if we make the nonclustered index unique. Test 3: Simple Update with a Unique Index Here’s the script to create a new unique index, and drop the old one: CREATE UNIQUE NONCLUSTERED INDEX [UQ dbo.LOBtest some_value (lob_data)] ON dbo.LOBtest (some_value) INCLUDE ( lob_data ) WITH ( FILLFACTOR = 100, MAXDOP = 1, SORT_IN_TEMPDB = ON ); GO DROP INDEX [IX dbo.LOBtest some_value (lob_data)] ON dbo.LOBtest; Remember that SQL Server only enforces uniqueness on index keys (the some_data column).  The lob_data column is simply stored at the leaf-level of the non-clustered index.  With that in mind, we might expect this change to make very little difference.  Let’s see: UPDATE dbo.LOBtest WITH (TABLOCKX) SET some_value = some_value - 1; Whoa!  Now look at the elapsed time and logical reads: Scan count 1, logical reads 2016, physical reads 0, read-ahead reads 0, lob logical reads 36015, lob physical reads 0, lob read-ahead reads 15992.   CPU time = 172 ms, elapsed time = 16172 ms. Even with all the data and index pages in memory, the query took over 16 seconds to update just 1,000 rows, performing over 52,000 LOB logical reads (nearly 16,000 of those using read-ahead). Why on earth is SQL Server reading LOB data in a query that only updates a single integer column? The Query Plan The query plan for test 3 looks a bit more complex than before: In fact, the bottom level is exactly the same as we saw with the non-unique index.  The top level has heaps of new stuff though, which I’ll come to in a moment. You might be expecting to find that the Clustered Index Scan is now reading the lob_data column (for some reason).  After all, we need to explain where all the LOB logical reads are coming from.  Sadly, when we look at the properties of the Clustered Index Scan, we see exactly the same as before: SQL Server is still only reading the pk and some_value columns – so what’s doing the LOB reads? Updates that Sneakily Read Data We have to go as far as the Clustered Index Update operator before we see LOB data in the output list: [Expr1020] is a bit flag added by an earlier Compute Scalar.  It is set true if the some_value column has not been changed (part of the non-updating updates optimization I mentioned earlier). The Clustered Index Update operator adds two new columns: the lob_data column, and some_value_OLD.  The some_value_OLD column, as the name suggests, is the pre-update value of the some_value column.  At this point, the clustered index has already been updated with the new value, but we haven’t touched the nonclustered index yet. An interesting observation here is that the Clustered Index Update operator can read a column into the data flow as part of its update operation.  SQL Server could have read the LOB data as part of the initial Clustered Index Scan, but that would mean carrying the data through all the operations that occur prior to the Clustered Index Update.  The server knows it will have to go back to the clustered index row to update it, so it delays reading the LOB data until then.  Sneaky! Why the LOB Data Is Needed This is all very interesting (I hope), but why is SQL Server reading the LOB data?  For that matter, why does it need to pass the pre-update value of the some_value column out of the Clustered Index Update? The answer relates to the top row of the query plan for test 3.  
I’ll reproduce it here for convenience. Notice that this is a wide (per-index) update plan. SQL Server used a narrow (per-row) update plan in test 2, where the Clustered Index Update took care of maintaining the nonclustered index too. I’ll talk more about this difference shortly.

The Split/Sort/Collapse combination is an optimization, which aims to make per-index update plans more efficient. It does this by breaking each update into a delete/insert pair, reordering the operations, removing any redundant operations, and finally applying the net effect of all the changes to the nonclustered index.

Imagine we had a unique index which currently holds three rows with the values 1, 2, and 3. If we run a query that adds 1 to each row value, we would end up with values 2, 3, and 4. The net effect of all the changes is the same as if we simply deleted the value 1, and added a new value 4.

By applying net changes, SQL Server can also avoid false unique-key violations. If we tried to immediately update the value 1 to a 2, it would conflict with the existing value 2 (which would soon be updated to 3, of course) and the query would fail. You might argue that SQL Server could avoid the uniqueness violation by starting with the highest value (3) and working down. That’s fine, but it’s not possible to generalize this logic to work with every possible update query.

SQL Server has to use a wide update plan if it sees any risk of false uniqueness violations. It’s worth noting that the logic SQL Server uses to detect whether these violations are possible has definite limits. As a result, you will often receive a wide update plan, even when you can see that no violations are possible.

Another benefit of this optimization is that it includes a sort on the index key as part of its work. Processing the index changes in index key order promotes sequential I/O against the nonclustered index.

A side-effect of all this is that the net changes might include one or more inserts. In order to insert a new row in the index, SQL Server obviously needs all the columns – the key column and the included LOB column. This is the reason SQL Server reads the LOB data as part of the Clustered Index Update.

In addition, the some_value_OLD column is required by the Split operator (it turns updates into delete/insert pairs). In order to generate the correct index key delete operation, it needs the old key value.

The irony is that in this case the Split/Sort/Collapse optimization is anything but. Reading all that LOB data is extremely expensive, so it is sad that the current version of SQL Server has no way to avoid it.

Finally, for completeness, I should mention that the Filter operator is there to filter out the non-updating updates.

Beating the Set-Based Update with a Cursor

One situation where SQL Server can see that false unique-key violations aren’t possible is where it can guarantee that only one row is being updated.
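For example, an update with an equality predicate on the unique clustered key touches exactly one row, so a narrow plan suffices – a quick sketch of my own, not from the original post:

UPDATE dbo.LOBtest
SET some_value = some_value - 1
WHERE pk = 1;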
Armed with this knowledge, we can write a cursor (or the WHILE-loop equivalent) that updates one row at a time, and so avoids reading the LOB data:

SET NOCOUNT ON;
SET STATISTICS XML, IO, TIME OFF;

DECLARE @PK INTEGER, @StartTime DATETIME;
SET @StartTime = GETUTCDATE();

DECLARE curUpdate CURSOR
    LOCAL FORWARD_ONLY KEYSET SCROLL_LOCKS
FOR
    SELECT L.pk
    FROM LOBtest L
    ORDER BY L.pk ASC;

OPEN curUpdate;

WHILE (1 = 1)
BEGIN
    FETCH NEXT FROM curUpdate INTO @PK;

    IF @@FETCH_STATUS = -1 BREAK;
    IF @@FETCH_STATUS = -2 CONTINUE;

    UPDATE dbo.LOBtest
    SET some_value = some_value - 1
    WHERE CURRENT OF curUpdate;
END;

CLOSE curUpdate;
DEALLOCATE curUpdate;

SELECT DATEDIFF(MILLISECOND, @StartTime, GETUTCDATE());

That completes the update in 1,280 milliseconds (remember test 3 took over 16 seconds!). I used the WHERE CURRENT OF syntax there and a KEYSET cursor, just for the fun of it. One could just as well use a WHERE clause that specified the primary key value instead.

Clustered Indexes

A clustered index is the ultimate index with included columns: all non-key columns are included columns in a clustered index. Let’s re-create the test table and data with an updatable primary key, and without any non-clustered indexes:

IF OBJECT_ID(N'dbo.LOBtest', N'U') IS NOT NULL
    DROP TABLE dbo.LOBtest;
GO
CREATE TABLE dbo.LOBtest
(
    pk INTEGER NOT NULL,
    some_value INTEGER NULL,
    lob_data VARCHAR(MAX) NULL,
    another_column CHAR(5) NULL,
    CONSTRAINT [PK dbo.LOBtest pk]
        PRIMARY KEY CLUSTERED (pk ASC)
);
GO
DECLARE @Data VARCHAR(MAX);
SET @Data = REPLICATE(CONVERT(VARCHAR(MAX), 'x'), 65540);

WITH Numbers (n) AS
(
    SELECT ROW_NUMBER() OVER (ORDER BY (SELECT 0))
    FROM master.sys.columns C1, master.sys.columns C2
)
INSERT LOBtest WITH (TABLOCKX)
    (pk, some_value, lob_data)
SELECT TOP (1000)
    N.n, N.n, @Data
FROM Numbers N
WHERE N.n <= 1000;

Now here’s a query to modify the cluster keys:

UPDATE dbo.LOBtest
SET pk = pk + 1;

As you can see from the query plan, the Split/Sort/Collapse optimization is present, and we also gain an Eager Table Spool, for Halloween protection. In addition, SQL Server now has no choice but to read the LOB data in the Clustered Index Scan. The performance is not great, as you might expect (even though there is no non-clustered index to maintain):

Table 'LOBtest'. Scan count 1, logical reads 2011, physical reads 0, read-ahead reads 0, lob logical reads 36015, lob physical reads 0, lob read-ahead reads 15992.
Table 'Worktable'. Scan count 1, logical reads 2040, physical reads 0, read-ahead reads 0, lob logical reads 34000, lob physical reads 0, lob read-ahead reads 8000.
SQL Server Execution Times: CPU time = 483 ms, elapsed time = 17884 ms.

Notice how the LOB data is read twice: once from the Clustered Index Scan, and again from the work table in tempdb used by the Eager Spool. If you try the same test with a non-unique clustered index (rather than a primary key), you’ll get a much more efficient plan that just passes the cluster key (including uniqueifier) around, with no LOB data or other non-key columns. A unique non-clustered index (on a heap) works well too. Both those queries complete in a few tens of milliseconds, with no LOB reads, and just a few thousand logical reads. (In fact the heap is rather more efficient.) There are lots more fun combinations to try that I don’t have space for here.

Final Thoughts

The behaviour shown in this post is not limited to LOB data by any means.
If the conditions are met, any unique index that has included columns can produce similar behaviour – something to bear in mind when adding large INCLUDE columns to achieve covering queries, perhaps.

Paul White
Email: [email protected]
Twitter: @PaulWhiteNZ
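As a footnote to the cursor section above: the WHILE-loop equivalent it mentions might look something like the sketch below. This is an illustration rather than code from the original post – each UPDATE targets a single key, so the narrow per-row plan applies:

DECLARE @PK INTEGER = 0;

WHILE (1 = 1)
BEGIN
    -- Walk the clustered index one key at a time
    SELECT @PK = MIN(L.pk) FROM dbo.LOBtest L WHERE L.pk > @PK;
    IF @PK IS NULL BREAK;

    UPDATE dbo.LOBtest
    SET some_value = some_value - 1
    WHERE pk = @PK;
END;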

    Read the article

  • SQL SERVER – Capturing Wait Types and Wait Stats Information at Interval – Wait Type – Day 5 of 28

    - by pinaldave
    Earlier, I tried to cover some important points about wait stats in detail. Here are some points that we covered:

- DMVs related to wait stats reset when we reset SQL Server services
- DMVs related to wait stats reset when we manually reset the wait types

However, at times there is a need to make this data persistent so that we can take a look at it later on. Sometimes, performance tuning experts make some modifications to the server and try to measure the wait stats at that point in time and again after some duration. I use the following method to measure the wait stats over time.

-- Create Table
CREATE TABLE [MyWaitStatTable](
    [wait_type] [nvarchar](60) NOT NULL,
    [waiting_tasks_count] [bigint] NOT NULL,
    [wait_time_ms] [bigint] NOT NULL,
    [max_wait_time_ms] [bigint] NOT NULL,
    [signal_wait_time_ms] [bigint] NOT NULL,
    [CurrentDateTime] DATETIME NOT NULL,
    [Flag] INT
)
GO

-- Populate Table at Time 1
INSERT INTO MyWaitStatTable
    ([wait_type],[waiting_tasks_count],[wait_time_ms],[max_wait_time_ms],[signal_wait_time_ms],[CurrentDateTime],[Flag])
SELECT [wait_type],[waiting_tasks_count],[wait_time_ms],[max_wait_time_ms],[signal_wait_time_ms], GETDATE(), 1
FROM sys.dm_os_wait_stats
GO

-- Desired delay (for one hour)
WAITFOR DELAY '01:00:00'

-- Populate Table at Time 2
INSERT INTO MyWaitStatTable
    ([wait_type],[waiting_tasks_count],[wait_time_ms],[max_wait_time_ms],[signal_wait_time_ms],[CurrentDateTime],[Flag])
SELECT [wait_type],[waiting_tasks_count],[wait_time_ms],[max_wait_time_ms],[signal_wait_time_ms], GETDATE(), 2
FROM sys.dm_os_wait_stats
GO

-- Check the difference between Time 1 and Time 2
SELECT T1.wait_type,
    T1.wait_time_ms Original_WaitTime,
    T2.wait_time_ms LaterWaitTime,
    (T2.wait_time_ms - T1.wait_time_ms) DifferenceWaitTime
FROM MyWaitStatTable T1
INNER JOIN MyWaitStatTable T2 ON T1.wait_type = T2.wait_type
WHERE T2.wait_time_ms > T1.wait_time_ms
    AND T1.Flag = 1
    AND T2.Flag = 2
ORDER BY DifferenceWaitTime DESC
GO

-- Clean up
DROP TABLE MyWaitStatTable
GO

Notice that the script uses an additional column called Flag. I use it to record when I captured the wait stats, and then use it in my SELECT query to retrieve the wait stats belonging to that time group. Many times, I capture more than 5 or 6 different sets of wait stats, and I find this method very convenient for finding the difference between them. In a future blog post, we will talk about specific wait stats. Read all the posts in the Wait Types and Queue series.

Reference: Pinal Dave (http://blog.SQLAuthority.com)
Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL DMV, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology
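A small variation, not from the original post: on SQL Server 2012 and later, the LAG window function can compute the same difference without the self-join – a sketch, assuming the snapshot table above and sequential Flag values:

-- Delta between each snapshot and the previous one, per wait type
SELECT wait_type,
       CurrentDateTime,
       wait_time_ms,
       wait_time_ms - LAG(wait_time_ms) OVER
           (PARTITION BY wait_type ORDER BY Flag) AS DifferenceWaitTime
FROM MyWaitStatTable
ORDER BY DifferenceWaitTime DESC;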

    Read the article

  • ODI 11g – How to override SQL at runtime?

    - by David Allan
    Following on from the posting some time back entitled ‘ODI 11g – Simple, Powerful, Flexible’, here we push the envelope even further. Rather than having the SQL we override defined statically in the interface design, we will have it configurable via a variable… at runtime. Imagine you have a well-defined interface shape that you want to be fulfilled, and that shape can be satisfied from a number of different sources – that is what this allows: the ability for one interface to consume data from many different places using variables. The cool thing about ODI’s reference API is that it can be fantastically flexible and useful.

When I use the variable as the option value and execute the top-level scenario that uses this temporary interface, I get prompted (or, to be precise, can be prompted) for the value of the variable. Note I am using the <@=odiRef.getObjectName("L","EMP", "SCOTT","D")@> notation for the table reference; since this is resolved at runtime, the context will resolve to the correct table name etc. Each time I execute, I could use a different source provider (obviously with some dependencies on KMs/technologies here). For example, the following Groovy snippet executes the scenario twice: the first run queries EMP in the SCOTT model, and the second takes its data from the OTHERS datastore in the BOB model.

m=new Properties();
m.put("DEMO.SQLSTR", "select empno, deptno from <@=odiRef.getObjectName(\"L\",\"EMP\", \"SCOTT\",\"D\")@>");
s=new StartupParams(m);
runtimeAgent.startScenario("TOP", null, s, null, "GLOBAL", 5, null, true);

m2=new Properties();
m2.put("DEMO.SQLSTR", "select empno, deptno from <@=odiRef.getObjectName(\"L\",\"OTHERS\", \"BOB\",\"D\")@>");
s2=new StartupParams(m2);
runtimeAgent.startScenario("TOP", null, s2, null, "GLOBAL", 5, null, true);

You’ll need a patch to 11.1.1.6 for this type of capability – thanks to my ole buddy Ron Gonzalez from the Enterprise Management group for help pushing the envelope!

    Read the article

  • Data Masking for Oracle E-Business Suite

    - by Troy Kitch
    E-Business Suite customers can now use Oracle Data Masking to obscure sensitive information in non-production environments. Many organizations are inadvertently exposed when copying sensitive or regulated production data into non-production database environments for development, quality assurance or outsourcing purposes. Due to weak security controls and unmonitored access, these non-production environments have increasingly become the target of cyber criminals. Learn more about the announcement here.

    Read the article

  • Microsoft Sponsored - Give Camp

    - by Ken Lovely, MCSE, MCDBA, MCTS
    Are you ready to connect with the local tech community for a good cause? GiveCamp needs your support. For one weekend in June, we’ll take on the technology wish lists of 20 non-profit organizations, and we’re looking for about 100 volunteers, both technical and non-technical, to help us do it. A typical GiveCamp draws 75 to 100 volunteers. Individuals can work with their colleagues in company teams, or they can opt to be matched with fellow volunteers who have complementary skill sets. Everyone is welcome to head home for the evenings – but there are always the diehards who work from Friday kickoff straight through Sunday afternoon. Food and drinks, especially of the caffeinated variety, are provided, along with game systems for breaks.

Technical volunteers: We're looking for graphic or UX designers, developers with .NET/Java/LAMP/Open Source/CMS experience, project managers, system/network administrators, DBAs, and non-profit technical consultants and web strategists.

Non-technical volunteers: Beyond the technology, there are many other aspects that make GiveCamp a success. We need non-technical volunteers to run errands, help with setting up and cleaning up, and everything in between.

Whether you can offer a couple of hours of your time or join GiveCamp for a couple of days, your support is needed. Sign up at http://www.eventbrite.com/event/650615007. Feel free to contact me or Dani Diaz of Microsoft for more information.

    Read the article

  • User-Defined Customer Events & their impact (FA Type Profile)

    - by Rajesh Sharma
    CC&B automatically creates field activities when a specific Customer Event takes place. This depends on the way you have set up your Field Activity Type Profiles, the templates within, and the associated SP Condition(s) on the template. CC&B uses the service point type, its state, and the referenced customer event to determine which field activity type to generate.

Customer events available in the base product include:

Cut for Non-payment (CNP)
Disconnect Warning (DIWA)
Reconnect for Payment (REPY)
Reread (RERD)
Stop Service (STOP)
Start Service (STRT)
Start/Stop (STSP)

Note the field values/codes defined for each event. CC&B also provides the flexibility to define a new set of customer events. These can be defined in the Look Up CUST_EVT_FLG. Values from the Look Up are used on the Field Activity Type Profile Template page.

So what's the use of having user-defined Customer Events? And how will the system detect such events in order to create field activities? Well, the system can only detect such events when you reference a user-defined customer event on a Severance Event Type for an event of type Create Field Activities. This way you can create additional field activities of a specific field activity type for user-defined customer events.

One of our customers adopted this feature and created a user-defined customer event CNPW - Cut for Non-payment for Water Services. This event was then linked on a Field Activity Type Profile and referenced on a Severance Event - CUT FOR NON PAY-W. The associated Severance Process was configured to trigger a reconnection process if it was cancelled (done by defining a Post Cancel Algorithm). Whenever this Severance Event was executed, a specific type of Field Activity was generated for disconnection purposes. The Field Activity type was determined by the system from the Field Activity Type Profile referenced for the SP Type, the SP's state, and the referenced user-defined customer event. All was working well until they realized that, in spite of the Severance Process getting cancelled (when a payment was made), the Post Cancel Algorithm was not executed to start a Reconnection Severance Process for the purpose of generating a reconnection field activity and reconnecting the service.

Basically, the Post Cancel Algorithm (if specified on a Severance Process Template) is triggered when a Severance Process gets cancelled because a credit transaction has affected/relieved a Service Agreement's debt. So what exactly was happening?

Now we come to the actual question: what is the impact of having a user-defined customer event? System-defined/base customer events are hard-coded across the entire system. There is an impact even if you remove a base customer event entry from the Look Up. User-defined customer events are not recognized by the system anywhere else except in the severance process, as described above. There are a few programs with routines that first validate the completion of disconnection field activities raised as a result of the customer event CNP - Cut for Non-payment, in order to perform other associated actions. One such program is the Post Cancel Algorithm, referenced on a Severance Process Template, generally used to reconnect services which were disconnected by another Severance Event, specifically CNP - Cut for Non-Payment.
The post-cancel algorithm provided by the product, SEV POST CAN, does the following (below is the algorithm's description):

"This algorithm is called after a severance process has been cancelled (typically because the debt was paid and the SA is no longer eligible to be on the severance process). It checks to see if the process has a completed 'disconnect' event and, if so, starts a reconnect process using the Reconnect Severance Process Template defined in the parameter."

Note the phrase "completed 'disconnect' event": this algorithm implicitly checks for field activities in Completed status that were generated from Severance Events as a result of the CNP - Cut for Non-payment customer event.

Now if we look back at the customer's issue, we can see that the Post Cancel Algorithm was triggered, but it could not find any completed CNP - Cut for Non-payment field activity, and hence could not start a reconnection severance process. This was because a field activity had instead been generated and completed for the customer event CNPW - Cut for Non-payment of Water Services.

To conclude: if you introduce new customer events that extend or simulate the base customer events included in the product, ensure that there is no other impact, direct or indirect, on the other business functions that the application has to offer.

    Read the article

  • XNA - Error while rendering a texture to a 2D render target via SpriteBatch

    - by Jared B
    I've got this simple code that uses SpriteBatch to draw a texture onto a RenderTarget2D:

private void drawScene(GameTime g)
{
    GraphicsDevice.Clear(skyColor);
    GraphicsDevice.SetRenderTarget(targetScene);
    drawSunAndMoon();
    effect.Fog = true;
    GraphicsDevice.SetVertexBuffer(line);
    effect.MainEffect.CurrentTechnique.Passes[0].Apply();
    GraphicsDevice.DrawPrimitives(PrimitiveType.TriangleStrip, 0, 2);
    GraphicsDevice.SetRenderTarget(null);
    SceneTexture = targetScene;
}

private void drawPostProcessing(GameTime g)
{
    effect.SceneTexture = SceneTexture;
    GraphicsDevice.SetRenderTarget(targetBloom);
    spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.Opaque, null, null, null);
    {
        if (Bloom)
            effect.BlurEffect.CurrentTechnique.Passes[0].Apply();
        spriteBatch.Draw(
            targetScene,
            new Rectangle(0, 0, Window.ClientBounds.Width, Window.ClientBounds.Height),
            Color.White);
    }
    spriteBatch.End();
    BloomTexture = targetBloom;
    GraphicsDevice.SetRenderTarget(null);
}

Both methods are called from my Draw(GameTime gameTime) function. First drawScene is called, then drawPostProcessing is called. The thing is, when I run this code I get an error on the spriteBatch.Draw call:

The render target must not be set on the device when it is used as a texture.

I already found the solution, which is to copy the contents of the render target (targetScene) into a separate texture, so the draw doesn't reference the render target that is currently set on the device. However, to my knowledge, the only way of doing this is to write:

GraphicsDevice.SetRenderTarget(outputTarget)
SpriteBatch.Draw(inputTarget, ...)
GraphicsDevice.SetRenderTarget(null)

Which encounters the same exact problem I'm having right now. So, the question I'm asking is: how would I render inputTarget to outputTarget without reference issues?

    Read the article

< Previous Page | 152 153 154 155 156 157 158 159 160 161 162 163  | Next Page >