Search Results

Search found 8320 results on 333 pages for 'tables'.

Page 297 of 333

  • performance issue: difference between select s.* and select *

    - by kamil
    Recently I had a performance problem with my query. The issue is described here: poor Hibernate select performance comparing to running directly - how debug? After a long time of struggling, I've finally discovered that a query with a select prefix like:

        select sth.* from Something as sth...

    is 300 times slower than a query started this way:

        select * from Something as sth...

    Could somebody help me and answer why that is so? Some external documents on this would be really useful. The tables used for testing were: SALES_UNIT, which contains some basic info about a sales unit node, such as name etc. Its only association is to the table SALES_UNIT_TYPE, as ManyToOne. The primary key is ID plus the field VALID_FROM_DTTM, which is a date. SALES_UNIT_RELATION contains the PARENT-CHILD relation between sales unit nodes and consists of SALES_UNIT_PARENT_ID, SALES_UNIT_CHILD_ID and VALID_TO_DTTM/VALID_FROM_DTTM, with no association to any tables. The PK here is ..PARENT_ID, ..CHILD_ID and VALID_FROM_DTTM. The actual queries I ran were:

        select s.* from sales_unit s left join sales_unit_relation r on (s.sales_unit_id = r.sales_unit_child_id) where r.sales_unit_child_id is null

        select * from sales_unit s left join sales_unit_relation r on (s.sales_unit_id = r.sales_unit_child_id) where r.sales_unit_child_id is null

    Same query: both use a left join, and the only difference is the select list.
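
    A first diagnostic step (my suggestion, not part of the original question; EXPLAIN is shown in MySQL style, and the statement name varies by vendor) is to compare the execution plans the database picks for the two statements:

        EXPLAIN select s.* from sales_unit s
            left join sales_unit_relation r on (s.sales_unit_id = r.sales_unit_child_id)
            where r.sales_unit_child_id is null;

        EXPLAIN select * from sales_unit s
            left join sales_unit_relation r on (s.sales_unit_id = r.sales_unit_child_id)
            where r.sales_unit_child_id is null;

    If the two plans differ in index choice or join order, that difference, rather than Hibernate itself, is the likely source of the 300x gap.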

    Read the article

  • Problem with multiple :dependent => :destroy when multiple polymorphic is true

    - by piemesons
    I have four models: Question, Answer, Comment and Vote. Consider it the same as Stack Overflow: Question has_many comments, Answer has_many comments, Question has_many votes, Answer has_many votes, Comment has_many votes. Here are the models (only the relevant parts):

        class Comment < ActiveRecord::Base
          belongs_to :commentable, :polymorphic => true
          has_many :votes, :as => :votable, :dependent => :destroy
        end

        class Question < ActiveRecord::Base
          has_many :comments, :as => :commentable, :dependent => :destroy
          has_many :answers, :dependent => :destroy
          has_many :votes, :as => :votable, :dependent => :destroy
        end

        class Vote < ActiveRecord::Base
          belongs_to :votable, :polymorphic => true
        end

        class Answer < ActiveRecord::Base
          belongs_to :question, :counter_cache => true
          has_many :comments, :as => :commentable, :dependent => :destroy
        end

    Now the problem: whenever I try to delete any question/answer/comment, it gives me this error:

        NoMethodError in QuestionsController#destroy
        undefined method `each' for 0:Fixnum

    If I remove this line from any one of the models (Question/Answer/Comment):

        has_many :votes, :as => :votable, :dependent => :destroy

    then it works perfectly. It seems there is a problem while deleting the records: ActiveRecord is not able to find the proper path because of the multiple joins between the tables.

    Read the article

  • MySQL subquery and bracketing

    - by text
    Here are my tables with sample values:

        respondents:
            respondentid : 1
            age          : 2
            gender       : male

        survey_questions:
            id       : 1
            question : Q1
            answer   : sample answer

        answers:
            respondentid : 1
            question     : Q1
            answer       : 1   -- id of survey question

    I want to display all respondents who answered a certain survey, display all answers, total the answers, and group them according to age bracket. I tried using this query:

        SELECT res.Age, res.Gender, answer.id, answer.respondentid,
               SUM(CASE WHEN res.Gender='Male' THEN 1 ELSE 0 END) AS males,
               SUM(CASE WHEN res.Gender='Female' THEN 1 ELSE 0 END) AS females,
               CASE WHEN res.Age < 1 THEN 'age1'
                    WHEN res.Age BETWEEN 1 AND 4 THEN 'age2'
                    WHEN res.Age BETWEEN 4 AND 9 THEN 'age3'
                    WHEN res.Age BETWEEN 10 AND 14 THEN 'age4'
                    WHEN res.Age BETWEEN 15 AND 19 THEN 'age5'
                    WHEN res.Age BETWEEN 20 AND 29 THEN 'age6'
                    WHEN res.Age BETWEEN 30 AND 39 THEN 'age7'
                    WHEN res.Age BETWEEN 40 AND 49 THEN 'age8'
                    ELSE 'age9' END AS ageband
        FROM Respondents AS res
        INNER JOIN Answers AS answer ON answer.respondentid=res.respondentid
        INNER JOIN Questions AS question ON answer.Answer=question.id
        WHERE answer.Question='Q1'
        GROUP BY ageband
        ORDER BY res.Age ASC

    I was able to get the data, but the listing of all the answers is not present. Do I have to subquery a SELECT into my current SELECT statement to show the answers? I want to produce something like this:

        ex: # of respondents is 3, ages: 2, 3 and 6
        Question: what are your favorite subjects?
        Ages 1-4:
            subject 1: 1
            subject 2: 2
            subject 3: 2
            total respondents for ages 1-4: 2
        Ages 5-10:
            subject 1: 1
            subject 2: 1
            subject 3: 0
            total respondents for ages 5-10: 1
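
    A sketch of one way to get the per-answer listing (table and column names are taken from the question; the age bands are simplified to the two in the desired output): grouping by both the age band and the answer text yields one row per subject per band, instead of one collapsed row per band:

        SELECT CASE
                 WHEN res.Age BETWEEN 1 AND 4 THEN 'ages 1-4'
                 WHEN res.Age BETWEEN 5 AND 10 THEN 'ages 5-10'
                 ELSE 'other'
               END              AS ageband,
               question.answer  AS chosen_answer,
               COUNT(*)         AS respondents
        FROM Respondents AS res
        INNER JOIN Answers   AS answer   ON answer.respondentid = res.respondentid
        INNER JOIN Questions AS question ON answer.answer = question.id
        WHERE answer.question = 'Q1'
        GROUP BY ageband, question.answer
        ORDER BY ageband, chosen_answer;

    The band totals from the original single-level GROUP BY query can then be kept as a second result set, or the two can be combined in application code.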

    Read the article

  • MySQL: optimization of table (indexing, foreign key) with no primary keys

    - by Haradzieniec
    Each member has 0 or more orders. Each order contains at least 1 item. memberid is varchar, not integer - that's OK (please do not mention that's not very good, I can't change it). So, there are 3 tables: members, orders and order_items. orders and order_items are below:

        CREATE TABLE `orders` (
            `orderid` INT(11) UNSIGNED NOT NULL AUTO_INCREMENT,
            `memberid` VARCHAR( 20 ),
            `Time` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
            `info` VARCHAR( 3200 ) NULL,
            PRIMARY KEY (orderid),
            FOREIGN KEY (memberid) REFERENCES members(memberid)
        ) ENGINE = InnoDB;

        CREATE TABLE `order_items` (
            `orderid` INT(11) UNSIGNED NOT NULL,
            `item_number_in_cart` tinyint(1) NOT NULL,  -- 5 items in cart = 5 rows
            `price` DECIMAL (6,2) NOT NULL,
            FOREIGN KEY (orderid) REFERENCES orders(orderid)
        ) ENGINE = InnoDB;

    So, the order_items table looks like (orderid - item_number_in_cart - price):

        ...
        1000456 - 1 - 24.99
        1000456 - 2 - 39.99
        1000456 - 3 - 4.99
        1000456 - 4 - 17.97
        1000457 - 1 - 20.00
        1000458 - 1 - 99.99
        1000459 - 1 - 2.99
        1000459 - 2 - 69.99
        1000460 - 1 - 4.99
        ...

    As you see, the order_items table has no primary key (and I think there is no sense in creating an auto_increment id for this table, because we always extract the data as a whole block with WHERE orderid='1000456' ORDER BY item_number_in_cart ASC - an id wouldn't be helpful in queries). Once data is inserted into order_items, it's not UPDATEd, just SELECTed. The questions are: I think it's a good idea to put an index on item_number_in_cart. Could anybody please confirm that? Is there anything else I have to do with order_items to increase the performance, or does that look pretty good? I could be missing something, because I'm a newbie. Thank you in advance.
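
    A sketch of an answer to the indexing question (my suggestion, not from the question): since rows are always fetched by orderid and sorted by item_number_in_cart, one composite index serves both the filter and the sort; an index on item_number_in_cart alone would not be used by that query:

        ALTER TABLE `order_items`
            ADD INDEX `idx_orderid_itemno` (`orderid`, `item_number_in_cart`);

    Note that in InnoDB the foreign key on orderid already requires an index with orderid as its leftmost column; the composite index above satisfies that requirement too, so no separate single-column index on orderid is needed.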

    Read the article

  • Too many columns to index - use MySQL partitions?

    - by Christopher Padfield
    We have an application with a table with 20+ columns that are all searchable. Building indexes for all these columns would make write queries very slow, and any really useful index would often have to span multiple columns, increasing the number of indexes needed. However, for 95% of these searches, only a small subset of the rows needs to be searched, and quite a small number at that - say 50,000 rows. So, we have considered using MySQL partitioned tables - having a column that is basically isActive, which is what we divide the two partitions by. Most search queries would be run with isActive=1, so they would hit the small 50,000-row partition and be quick without other indexes. The only issue is that the set of rows where isActive=1 is not fixed; i.e. it's not based on the date of the row or anything fixed like that; we will need to update isActive based on use of the data in that row. As I understand it that is no problem, though; the data would just be moved from one partition to another during the UPDATE query. We do have a PK on id for the row, though, and I am not sure if this is a problem: the manual seemed to suggest the partitioning column has to be part of every unique key, including the primary key. This would be a huge problem for us, because the primary key ID has no bearing on whether the row isActive.
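
    A sketch of what the manual's restriction implies in practice (the table name searchable_data is invented for illustration): MySQL requires every unique key, including the primary key, to contain all partitioning columns, so one workaround is to widen the primary key rather than abandon it:

        ALTER TABLE searchable_data
            DROP PRIMARY KEY,
            ADD PRIMARY KEY (id, isActive);

        ALTER TABLE searchable_data
            PARTITION BY LIST (isActive) (
                PARTITION p_active   VALUES IN (1),
                PARTITION p_inactive VALUES IN (0)
            );

    The (id, isActive) key only enforces uniqueness on the pair, but as long as id stays auto-generated it remains unique in practice, and an UPDATE that flips isActive moves the row between partitions automatically, as the asker expects.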

    Read the article

  • Why do I get this exception? ("An item with the same key has already been added.")

    - by Alan
    @knittel: NewSellerID is the result of a lookup on tblSellers. These tables (tblSellerListings and tblSellers) are not "officially" joined with a foreign key relationship, either in the model or in the database, but I want some referential integrity maintained for the future. So my issue remains: why do I get the exception ("An item with the same key has already been added.") with this code, unless I begin each iteration of the foreach loop with a new ObjectContext and end it with SaveChanges - which I think will hurt performance? Also, could you tell me why ORCSolutionsDataService.tblSellerListings (an ADO.NET Data Services/WCF object) is not IDisposable, like LINQ to Entities?

        // Add listings to previous seller
        int NewSellerID = 0;
        // Look up existing Seller key using SellerUniqueEBAYID
        var qryCurrentSeller = from s in service.tblSellers
                               where s.SellerEBAYUserID == SellerUserID
                               select s;
        foreach (var s in qryCurrentSeller)
            NewSellerID = s.SellerID;
        // Save the selected listings for this seller
        foreach (DataGridViewRow dgr in dgvRows)
        {
            ORCSolutionsDataService.tblSellerListings NewSellerListing = new ORCSolutionsDataService.tblSellerListings();
            NewSellerListing.ItemID = dgr.Cells["txtSellerItemID"].Value.ToString();
            NewSellerListing.Title = dgr.Cells["txtSellerItemTitle"].Value.ToString();
            NewSellerListing.CurrentPrice = Convert.ToDecimal(dgr.Cells["txtSellerItemPrice"].Value);
            NewSellerListing.QuantitySold = Convert.ToInt32(dgr.Cells["txtSellerItemSold"].Value);
            NewSellerListing.EndTime = Convert.ToDateTime(dgr.Cells["txtSellerItemEnds"].Value);
            NewSellerListing.CategoryName = dgr.Cells["txtSellerItemCategory"].Value.ToString();
            NewSellerListing.ExtendedPrice = Convert.ToDecimal(dgr.Cells["txtExtendedReceipts"].Value);
            NewSellerListing.RetrievedDtime = Convert.ToDateTime(dtSellerDataRetrieved.ToString());
            NewSellerListing.SellerID = NewSellerID;
            service.AddTotblSellerListings(NewSellerListing);
        }
        service.SaveChanges();
        }
        catch (Exception ex)
        {
            MessageBox.Show("Unable to add a new case. Exception: " + ex.Message);
        }

    Read the article

  • Database/Object Mapping

    - by Eric
    Hello everyone, this is a beginner question, but it's been frustrating me... I am using C#, by the way. I'd like to make a few classes, each with their own properties and methods. I would also like to have a database to store certain instances of these classes in case I ever need to look at them again. So, for example:

        class Polygon
        {
            String name;
            Double perimeter;
            int numSides;

            public Double GetArea()
            {
                // ...
            }
        }

        class Circle
        {
            String name;
            Double radius;

            public void PrintName()
            {
                // ...
            }
        }

    Say I've got these classes. I also want a database that has the TABLES "Polygon" and "Circle" with the COLUMNS "name", "perimeter", "radius", etc. And I want an easy way to save a class instance into the database, or pull a class instance out of the database. I have previously been using MS Access for my database work, which I don't mind using, but I would prefer that nothing other than .NET need be installed. I've been researching online a bit, but I wanted to get some opinions on here. I have looked at LINQ to SQL, but it seems you need SQL Server. Is this true? If so, I'd really rather not use it, because I don't want to have to have it installed everywhere. Anyway, I'm just fishing for some ideas/insights/suggestions/etc., so please help me out if you can. Thanks.

    Read the article

  • Get info from multiple files, match it and then display to end user, what is fastest?

    - by Patrick
    Hi, I need to build a website where we display data that is refreshed every 5 minutes in a text file with a | separator. I currently use Java to do this. What I do now: I grab the text file for every request through the website, process it, and then display the data to the end user. This works fine, since Java can go through some 5000 lines of data fast, and when I filter it, it is still extremely fast. However, now management wants the following: they added 3 more text files with the | separator, and want me to also read those files and match the information on certain fields; if there is a match, that information should also be displayed to the end user. I think that soon enough, although Java is fast, I will run into trouble when 10 people want that information and I have to run through 4 files in total matching the information. What can I do to make this process super fast? My creative solutions so far:

        - Leave it this way, since Java is fast and end users can wait (probably less than 1 second).
        - Have a background process that dumps new data into a MySQL database every 5 minutes, since databases are extremely good at getting the same data from multiple tables.

    Thank you!
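
    A sketch of the second option (the table and file names are invented for illustration): MySQL can bulk-load the pipe-delimited files directly, after which the field matching becomes a plain join:

        LOAD DATA LOCAL INFILE '/data/feed_main.txt'
        REPLACE INTO TABLE feed_main
        FIELDS TERMINATED BY '|';

        -- match records across two of the feeds on a shared key
        SELECT m.*, e.extra_field
        FROM feed_main m
        JOIN feed_extra e ON e.record_id = m.record_id;

    Reloading every 5 minutes in the background keeps page requests independent of file parsing, so ten concurrent users cost ten indexed queries rather than ten four-file scans.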

    Read the article

  • Why can't we just use a hash of the passphrase as the encryption key (and IV) with symmetric encryption algorithms?

    - by TX_
    Inspired by my previous question, now I have a very interesting idea: do you really ever need to use Rfc2898DeriveBytes or similar classes to "securely derive" the encryption key and initialization vector from the passphrase string, or will just a simple hash of that string work equally well as a key/IV when encrypting the data with a symmetric algorithm (e.g. AES, DES, etc.)? I see tons of AES encryption code snippets where the Rfc2898DeriveBytes class is used to derive the encryption key and initialization vector (IV) from the password string. It is assumed that one should use a random salt and a shitload of iterations to derive a secure enough key/IV for the encryption. While deriving bytes from a password string using this method is quite useful in some scenarios, I think it's not applicable when encrypting data with symmetric algorithms! Here is why: using a salt makes sense when there is a possibility of building precalculated rainbow tables, so that an attacker who gets his hands on a hash can look up the original password. But... with symmetric data encryption, I think this is not required, as the hash of the password string, i.e. the encryption key, is never stored anywhere. So, if we just take the SHA1 hash of the password and use it as the encryption key/IV, isn't that going to be equally secure? What is the purpose of using the Rfc2898DeriveBytes class to generate the key/IV from a password string (which is a very, very performance-intensive operation), when we could just use a SHA1 (or any other) hash of that password? A hash would result in a random bit distribution in the key (as opposed to using the string bytes directly), and the attacker would have to brute-force the whole key range anyway (e.g. if the key length is 256 bits, he would have to try 2^256 combinations). So either I'm wrong in a dangerous way, or all those samples of AES encryption (including many upvoted answers here at SO) that use the Rfc2898DeriveBytes method to generate the encryption key and IV are just wrong.

    Read the article

  • rake db:migrate fails when trying to do inserts

    - by anthony
    I'm trying to get a database populated so I can begin working on a project. This project is already built, and I'm being brought in to help with front-end work. The problem is I can't get rake db:migrate to do any inserts. Every time I run rake db:migrate I get this:

        ...
        == 20081220084043 CreateTimeDimension: migrating ==============================
        -- create_table(:time_dimension)
           - 0.0870s
        INSERT time_dimension(time_key, year, month, day, day_of_week, weekend, quarter) VALUES(20080101, 2008, 1, 1, 'Tuesday', false, 1)
        rake aborted!
        Could not load driver (uninitialized constant Mysql::Driver)
        ...

    I'm building on an MBP with Snow Leopard. I've installed Xcode from the disk that comes with the Mac. I've updated Ruby, installed Rails and all the needed gems. I have the 64-bit version of MySQL installed. I've tried the 32-bit version of MySQL, and I've even tried installing from MacPorts (via http://www.robbyonrails.com/articles/2010/02/08/installing-ruby-on-rails-passenger-postgresql-mysql-oh-my-zsh-on-snow-leopard-fourth-edition). The mysql gem is installed using:

        sudo env ARCHFLAGS="-arch x86_64" gem install mysql -- --with-mysql-config=/path/to/mysql/bin/mysql_config

    The migrate creates the tables just fine, but it dies every. single. time. it tries an insert. Any help would be great.

    Read the article

  • How to add a new entry to a multiple has_many association?

    - by siulamvictor
    I am not sure if I am doing this correctly. I have 3 models: Account, User, and Event. An Account contains a group of Users. Each User has their own username and password for login, but Users under the same Account can access the same Account data. An Event is created by a User, and other Users in the same Account can also read or edit it. I created the following migrations and models.

    User migration:

        class CreateUsers < ActiveRecord::Migration
          def self.up
            create_table :users do |t|
              t.integer :account_id
              t.string :username
              t.string :password
              t.timestamps
            end
          end
          def self.down
            drop_table :users
          end
        end

    Account migration:

        class CreateAccounts < ActiveRecord::Migration
          def self.up
            create_table :accounts do |t|
              t.string :name
              t.timestamps
            end
          end
          def self.down
            drop_table :accounts
          end
        end

    Event migration:

        class CreateEvents < ActiveRecord::Migration
          def self.up
            create_table :events do |t|
              t.integer :account_id
              t.integer :user_id
              t.string :name
              t.string :location
              t.timestamps
            end
          end
          def self.down
            drop_table :events
          end
        end

    Account model:

        class Account < ActiveRecord::Base
          has_many :users
          has_many :events
        end

    User model:

        class User < ActiveRecord::Base
          belongs_to :account
        end

    Event model:

        class Event < ActiveRecord::Base
          belongs_to :account
          belongs_to :user
        end

    So.... is this setup correct? Every time a user creates a new account, the system will ask for the user information, e.g. username and password. How can I add them to the correct tables? And how can I add a new event? I am sorry for such a long question; I don't understand very well the Rails way of handling such a data structure. Thank you guys for answering me. :)

    Read the article

  • .tpl files and website problem

    - by whitstone86
    Apologies if the title is in lowercase, but it's describing an extension format. I've started using Dwoo as my template engine for PHP, and am not sure how to convert my PHP files into .tpl templates. My site is similar to, but not the same as, http://library.digiguide.com/lib/programme/Medium-319648/Drama/ in its design (except the colour scheme and site name are different, plus it's in PHP - so copyright issues are avoided here; the design arguably could be seen as parody even though the content is different). The database is called tvguide, and it has these tables:

        Programmes (House M.D., Medium, Police Stop!, American Dad!), with table names:
            housemdonair, mediumonair, policestopair, americandad1

        Episodes - the table names for the above programmes' episode guides:
            housemdepidata, mediumepidata, policestopepidata, americandad1epidata

    All of them have the following columns:

        id (not an auto-increment, since I wish to dynamically generate a page from this)
        episodename
        seriesnumber
        episodenumber
        episodesynopsis (the above four after id do exactly as stated)

    I have a pagination script that works; it displays 20 records per page, as I want it to. It is called pmcPagination.php - but I won't post it in full since it would take up too much space. However, I'm trying to get it so that variables are filled in like this (ok, so the examples below are ASP.NET, but if there's a PHP/MySQL equivalent I would gratefully appreciate it!!):

        http://library.digiguide.com/lib/episode/741168
        http://library.digiguide.com/lib/episode/714829

    with the episode detail and data. My site works, but it's fairly basic, and it's not going online until my bugs are fixed. mod_rewrite is enabled, so my site reads as:

        http://mytvguide.com/episode/123456
        http://mytvguide.com/programme/123456
        http://mytvguide.com/WorldInAction/123456/Documentary/

    I've tried looking on Google, but am not sure how to get this TV guide script to work at its best - I think .tpl plus PHP/MySQL is the way to go. Any advice anyone has on making this project into a fully workable, ready-to-use site would be much appreciated; I've spent months refining it! P.S. Apologies for the length of this; I hope it describes my project well.

    Read the article

  • get_browser not working

    - by tazphoenix
    It's not working. I mean, I have many scripts to get the IP and OS, but get_browser is a built-in function and should work - yet it doesn't. When I try a print_r on the function's result I get:

        Array
        (
            [browser_name_regex] => §^.*$§
            [browser_name_pattern] => *
            [browser] => Default Browser
            [version] => 0
            [majorver] => 0
            [minorver] => 0
            [platform] => unknown
            [alpha] =>
            [beta] =>
            [win16] =>
            [win32] =>
            [win64] =>
            [frames] => 1
            [iframes] =>
            [tables] => 1
            [cookies] =>
            [backgroundsounds] =>
            [cdf] =>
            [vbscript] =>
            [javaapplets] =>
            [javascript] =>
            [activexcontrols] =>
            [isbanned] =>
            [ismobiledevice] =>
            [issyndicationreader] =>
            [crawler] =>
            [cssversion] => 0
            [supportscss] =>
            [aol] =>
            [aolversion] => 0
        )

    I'm using Windows 7 and Firefox.

    Read the article

  • How to run stored procedures and ad-hoc scripts asynchronously with "loosely" connected SQL Server 2000 clients

    - by sanga
    Is there a way to queue a script against an instance of SQL Server while it is not connected, and then have it run on the instance the next time it connects? This needs to happen without any intervention from me. Background, if you are interested: we have about 120 machines, each with its own instance of SQL Server 2000. Most of them are laptops. We have merge replication set up with each one. From time to time, there is a need to delete "rogue" GUIDs from some tables in some instances that overwrite legitimate records on the main publisher, as well as to perform administrative tasks via stored procedure or ad-hoc SQL statements. The problem is there is no telling when each machine is going to be connected to the network. Some folks turn their machines completely off at the end of the day. Others disconnect their machines and take them on business trips, home for the weekend, etc. Did I mention that about 35 of these machines are in utility trucks and "attempt" to sync over a wireless connection? Thanks in advance for any assistance or suggestions. Sanga
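
    A sketch of one approach (all table and column names are invented for illustration): publish a small "pending scripts" table through the existing merge replication, and have each instance execute whatever it has not yet applied, for example from a startup or agent job:

        CREATE TABLE PendingScripts (
            ScriptId   INT            NOT NULL PRIMARY KEY,
            ScriptText NVARCHAR(4000) NOT NULL
        );

        CREATE TABLE AppliedScripts (  -- local bookkeeping on each instance
            ScriptId  INT      NOT NULL PRIMARY KEY,
            AppliedAt DATETIME NOT NULL DEFAULT GETDATE()
        );

        -- executed by a job on each client after sync:
        DECLARE cur CURSOR FOR
            SELECT p.ScriptId, p.ScriptText
            FROM PendingScripts p
            LEFT JOIN AppliedScripts a ON a.ScriptId = p.ScriptId
            WHERE a.ScriptId IS NULL
            ORDER BY p.ScriptId;
        DECLARE @id INT, @sql NVARCHAR(4000);
        OPEN cur;
        FETCH NEXT FROM cur INTO @id, @sql;
        WHILE @@FETCH_STATUS = 0
        BEGIN
            EXEC (@sql);
            INSERT INTO AppliedScripts (ScriptId) VALUES (@id);
            FETCH NEXT FROM cur INTO @id, @sql;
        END;
        CLOSE cur;
        DEALLOCATE cur;

    Since PendingScripts replicates like any other article, the scripts arrive automatically the next time a laptop syncs, with no manual intervention.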

    Read the article

  • object references an unsaved transient instance

    - by developer
    Hi, I have 2 tables, user and userprofile, both with almost identical fields. The user table references the userprofile table by the primary key ID. My requirement is that on the click of a button I need to dump the user table record to the userprofile table. For a particular user record, if there is a corresponding userprofile entry, I am able to dump the data successfully; but if there is no record in the userprofile table, I need to create a new record by dumping all the data. My problem is that I can update the data when the record is present in the userprofile table, but in the case where I have to create a new record, I get the error below:

        object references an unsaved transient instance - save the transient instance before flushing

    The mapping is:

        <class name="User">
          <id name="ID" type="Int32">
            <generator class="native" />
          </id>
          <many-to-one name="Pid" class="UserProfile" />
        </class>

    UserProfile is another table, and Pid above references the primary key ID of the UserProfile table.

    Read the article

  • Replace beginning words (SQL Server 2005, set-based)

    - by Newbie
    I have the tables below.

        tblInput
        Id   WordPosition   Words
        --   ------------   -----
        1    1              Hi
        1    2              How
        1    3              are
        1    4              you
        2    1              Ok
        2    2              This
        2    3              is
        2    4              me

        tblReplacement
        Id   ReplacementWords
        --   ----------------
        1    Hi
        2    are
        3    Ok
        4    This

    tblInput holds the list of words, while tblReplacement holds the words that we need to search for in tblInput; if a match is found, we need to replace those. But the catch is that we replace a word only if the match is found at the beginning. I.e. in tblInput, in the case of Id 1, the only word to be replaced is 'Hi' and not 'are', since 'How' comes before 'are' and is not in the tblReplacement list. In the case of Id 2, the words to be replaced are 'Ok' and 'This': both are present in tblReplacement, and once the first word 'Ok' is replaced, 'This' becomes the first remaining word of Id group 2, so it is replaced as well. So the desired output is:

        Id   NewWordsAfterReplacement
        --   ------------------------
        1    How
        1    are
        1    you
        2    is
        2    me

    My approach so far:

        ;WITH Cte1 AS (
            SELECT t1.Id, t1.Words, t2.ReplacementWords
            FROM tblInput t1
            CROSS JOIN tblReplacement t2
        ),
        Cte2 AS (
            SELECT Id, NewWordsAfterReplacement = REPLACE(Words, ReplacementWords, '')
            FROM Cte1
        )
        SELECT * FROM Cte2 WHERE NewWordsAfterReplacement <> ''

    But I am not getting the desired output; it replaces all the matching words. Urgent help needed (set-based). I am using SQL Server 2005. Thanks.
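
    A set-based sketch (my suggestion, not the asker's code; table and column names are from the question): a recursive CTE, supported in SQL Server 2005, can walk the leading words of each Id and mark a word for removal only if it matches tblReplacement and every word before it was also removed:

        ;WITH ToRemove AS (
            -- anchor: the first word of each Id, if it is a replacement word
            SELECT i.Id, i.WordPosition
            FROM tblInput i
            JOIN tblReplacement r ON r.ReplacementWords = i.Words
            WHERE i.WordPosition = 1
            UNION ALL
            -- recurse: the next word, but only while the previous one was removed
            SELECT i.Id, i.WordPosition
            FROM tblInput i
            JOIN ToRemove t ON t.Id = i.Id AND i.WordPosition = t.WordPosition + 1
            JOIN tblReplacement r ON r.ReplacementWords = i.Words
        )
        SELECT i.Id, i.Words AS NewWordsAfterReplacement
        FROM tblInput i
        LEFT JOIN ToRemove t ON t.Id = i.Id AND t.WordPosition = i.WordPosition
        WHERE t.Id IS NULL
        ORDER BY i.Id, i.WordPosition;

    With the sample data this removes 'Hi' for Id 1 and both 'Ok' and 'This' for Id 2, leaving exactly the desired output.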

    Read the article

  • Is it possible to cache all the data in a SQL Server CE database using LinqToSql?

    - by DanM
    I'm using LINQ to SQL to query a small, simple SQL Server CE database. I've noticed that any operations involving sub-properties are disappointingly slow. For example, if I have a Customer table that is referenced by an Order table, LINQ to SQL will automatically create an EntitySet<Order> property. This is a nice convenience, allowing me to do things like Customer.Order.Where(o => o.ProductName == "Stopwatch"), but for some reason SQL Server CE hangs up pretty badly when I try to do stuff like this. One of my queries, which isn't really that complicated, takes 3-4 seconds to complete. I can get the speed up to acceptable, even fast, if I just grab the two tables individually, convert them to List<Customer> and List<Order>, and then join them manually with my own query, but this throws out a lot of what makes LINQ to SQL so appealing. So, I'm wondering if I can somehow get the whole database into RAM and just query that way, then occasionally save it. Is this possible? How? If not, is there anything else I can do to boost the performance besides resorting to doing all the joins manually? Note: my database in its initial state is about 250K, and I don't expect it to grow to more than 1-2 MB, so loading the data into RAM certainly wouldn't be a problem from a memory point of view. Update - here are the table definitions for the example I used in my question:

        create table Order (
            Id int identity(1, 1) primary key,
            ProductName ntext null
        )

        create table Customer (
            Id int identity(1, 1) primary key,
            OrderId int null references Order (Id)
        )

    Read the article

  • How to store and collect data for mining such information as most viewed for last 24 hours, last 7 days, etc.

    - by Kirzilla
    Hello, let's imagine that we have a high-traffic project (a tube site) which should provide sorting using these options (NOT IN REAL TIME). The number of videos is about 200K, all information about the videos is stored in MySQL, and the number of daily video views is about 1.5KK. As instruments we have the hard disk drive (text files), MySQL, and Redis. The views to sort by:

        - top viewed
        - top viewed last 24 hours
        - top viewed last 7 days
        - top viewed last 30 days
        - top rated last 365 days

    How should I store such information? The first idea is to log all visits to text files (a single file per hour, for example visits_20080101_00.log). At the beginning of each hour, calculate views per video for the previous hour and insert this information into MySQL. Then recalculate the totals (for the last 24 hours) and update the statistics in the tables. At the beginning of every day we have to do the same, but recalculating for the last 7 days, last 30 days, and last 365 days. This method seems very poor to me, because we would have to store information about the last 365 days for each video to make correct calculations. Are there any other good methods? Probably we have to choose other instruments for this? Thank you.
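
    A sketch of the aggregation tables this plan implies (the names are invented): instead of keeping raw visits for 365 days, keep one row per video per hour, and sum over a sliding range; a daily rollup built the same way covers the longer windows:

        CREATE TABLE video_views_hourly (
            video_id   INT      NOT NULL,
            hour_start DATETIME NOT NULL,
            views      INT      NOT NULL DEFAULT 0,
            PRIMARY KEY (video_id, hour_start)
        );

        -- top viewed over the last 24 hours
        SELECT video_id, SUM(views) AS views_24h
        FROM video_views_hourly
        WHERE hour_start >= NOW() - INTERVAL 24 HOUR
        GROUP BY video_id
        ORDER BY views_24h DESC
        LIMIT 100;

    A video_views_daily table queried the same way serves the 7/30/365-day lists while storing at most 365 rows per video, which addresses the storage concern above.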

    Read the article

  • Updating a column in a table only if after the update it won't be negative, and identifying all updated rows

    - by Azeem
    Hello all, I need some help with a SQL query. Here is what I need to do; I'm lost on a few aspects, as outlined below. I have four relevant tables:

        - Table A has the price per unit for all resources. I can look up the price using a resource id.
        - Table B has the funds available to a given user.
        - Table C has the resource production information for a given user (including the number of units to produce every day).
        - Table D has the number of units ever produced by any given user (identified by user id and resource id).

    Having said that, I need to run a nightly batch job to do the following:

        a. For all users, identify whether they have the funds needed to produce the number of resources specified in table C, and deduct the funds from table B if they are available (calculating the cost using table A).
        b. Start the process to produce the resources, and after production is complete, update table D using values from table C.

    I figure the second part can be done using an UPDATE with a subquery. However, I'm not sure how I should go about doing part a. I can only think of using a cursor to fetch each row, examine it, and update. Is there a single SQL statement that will help me avoid having to process each row manually? Additionally, for any rows that weren't updated, the part b SQL should not produce resources for that user. Basically, I'm attempting to rework the SQL implementing this logic, which currently sits in a stored procedure, into something that will run a lot faster (and won't process each row separately). Please let me know any ideas and thoughts. Thanks! - Azeem
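
    A sketch of part (a) as a single statement (all table and column names below are invented, since the question names the tables only A-D; the OUTPUT clause is SQL Server syntax, and the sketch assumes one production row per user - with several resources per user, the costs would first need summing in a derived table): deduct funds only where they cover the day's production cost, and capture who was charged:

        CREATE TABLE #charged (user_id INT NOT NULL);

        UPDATE b
        SET    b.funds = b.funds - (c.units_per_day * a.price_per_unit)
        OUTPUT inserted.user_id INTO #charged
        FROM   user_funds b                                         -- table B
        JOIN   production c      ON c.user_id = b.user_id           -- table C
        JOIN   resource_prices a ON a.resource_id = c.resource_id   -- table A
        WHERE  b.funds >= c.units_per_day * a.price_per_unit;

    Part (b) then updates table D only for users present in #charged, which also satisfies the rule that users whose funds were insufficient produce nothing.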

    Read the article

  • Warehouse Attributes model

    - by Orapps
    I am given a training project for building a schema. The following is the assignment; all I would like to ask you guys is that I am not sure what attributes would go in the tables. Please see the problem below:

        - Create one new company and attach 10 attributes to the company.
        - The company has two warehouses, one in Torrance and another in Costa Mesa. Each warehouse should have 10 attributes.
        - The company sells 100 products and services. Each product and service should have 10 attributes.
        - The company receives orders. The orders should have HEADERS and LINES (I am not sure what these are). Each header and line should have 10 attributes.
        - There should be a table for quantity on hand. The quantity on hand will increase once the warehouse receives goods.

    Please, I only want some hints or clues on what attributes would generally go in the highlighted tables (for the header/line question, see the sketch below). Thanks for the help in advance, guys.
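
    On the header/line question specifically, a common pattern (a sketch with invented column names, not part of the assignment) is: the header holds facts about the order as a whole, and each line holds one ordered product:

        CREATE TABLE order_headers (
            order_id     INT PRIMARY KEY,
            customer_id  INT NOT NULL,
            order_date   DATE NOT NULL,
            ship_to_city VARCHAR(50)
        );

        CREATE TABLE order_lines (
            order_id    INT NOT NULL REFERENCES order_headers(order_id),
            line_number INT NOT NULL,
            product_id  INT NOT NULL,
            quantity    INT NOT NULL,
            unit_price  DECIMAL(10,2) NOT NULL,
            PRIMARY KEY (order_id, line_number)
        );

    Receiving goods into a warehouse would then increment the matching row in the quantity-on-hand table, keyed by (warehouse_id, product_id).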

    Read the article

  • Handling nulls in a data warehouse

    - by rrydman
    I'd like to ask your input on best practices for handling null or empty data values as they pertain to data warehousing and SSIS/SSAS. I have several fact and dimension tables that contain null values in different rows. Specifics:

        1) What is the best way to handle null date/time values? Should I make a 'default' row in my time or date dimension and point SSIS to that default row when a null is found?
        2) What is the best way to handle nulls/empty values inside dimension data? E.g., I have some rows in an 'Accounts' dimension that have empty (not NULL) values in the Account Name column. Should I convert these empty or null values in the column to a specific default value?
        3) Similar to point 1 above - what should I do if I end up with a fact table row that has no record in one of the dimension columns? Do I need default dimension records for each dimension in case this happens?
        4) Any suggestions or tips on how to handle these operations in SQL Server Integration Services (SSIS)? The best data flow configurations or best transformation objects to use would be helpful.

    Thanks :-)
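
    One widely used convention (a sketch; the 'Accounts' dimension is from the question, while the table names and the -1 surrogate key value are assumptions) is a dedicated "unknown" member per dimension, so fact rows never carry NULL keys:

        -- reserve a surrogate key for the unknown member
        INSERT INTO dim_account (account_key, account_name)
        VALUES (-1, 'Unknown');

        -- during the fact load, map missing or empty lookups to it
        INSERT INTO fact_sales (account_key, amount)
        SELECT COALESCE(d.account_key, -1), s.amount
        FROM staging_sales s
        LEFT JOIN dim_account d
               ON d.account_name = NULLIF(LTRIM(RTRIM(s.account_name)), '');

    The NULLIF/TRIM combination folds empty strings into the unknown member, which covers point 2, and the same -1 row in a date dimension covers points 1 and 3; in SSIS this maps naturally to a Lookup transformation whose no-match output assigns the default key.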

    Read the article

  • DB optimization to use it as a queue

    - by anony
    We have a table called worktable which has some columns (key (the primary key), ptime, aname, status, content). We have something called a producer, which puts rows into this table, and a consumer, which does an ORDER BY on the key column and fetches the first row that has status 'pending'. The consumer does some processing on this row:

        1. updates status to "processing"
        2. does some processing using content
        3. deletes the row

    We are facing contention issues when we try to run multiple consumers (probably due to the ORDER BY, which does a full table scan). Using Advanced Queues would be our next step, but before we go there we want to check what maximum throughput we can achieve with multiple consumers and a producer on the table. What optimizations can we do to get the best numbers possible? Could we do in-memory processing, where a consumer fetches 1000 rows at a time, processes them and deletes them - would that improve things? What are the other possibilities? Partitioning of the table? Parallelization? Index-organized tables?
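
    A sketch of one contention fix (Oracle syntax, since Advanced Queues and index-organized tables suggest Oracle; the table and column names are from the question): an index on (status, key) removes the full scan, and SKIP LOCKED lets each consumer claim the first row no other consumer holds, without blocking:

        CREATE INDEX worktable_pending_ix ON worktable (status, key);

        DECLARE
            CURSOR pending_rows IS
                SELECT key, content
                FROM   worktable
                WHERE  status = 'pending'
                ORDER  BY key
                FOR UPDATE SKIP LOCKED;
            v_key     worktable.key%TYPE;
            v_content worktable.content%TYPE;
        BEGIN
            OPEN pending_rows;
            FETCH pending_rows INTO v_key, v_content;
            IF pending_rows%FOUND THEN
                -- ... process v_content here ...
                DELETE FROM worktable WHERE key = v_key;
            END IF;
            CLOSE pending_rows;
            COMMIT;
        END;
        /

    The row lock itself marks the row as claimed, so the separate UPDATE of status to 'processing' can be dropped, saving one write per message.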

    Read the article

  • Optimize SQL query (Facebook-like application)

    - by fabriciols
    My application is similar to Facebook, and I'm trying to optimize the query that gets user records. A user's records are those where he appears as src or dst. The src is in usermuralentry directly; the dst list is in usermuralentry_user. So an entry can have one src and many dst. I have these tables:

        mysql> desc usermuralentry;
        +-----------------+------------------+------+-----+---------+----------------+
        | Field           | Type             | Null | Key | Default | Extra          |
        +-----------------+------------------+------+-----+---------+----------------+
        | id              | int(11)          | NO   | PRI | NULL    | auto_increment |
        | user_src_id     | int(11)          | NO   | MUL | NULL    |                |
        | private         | tinyint(1)       | NO   |     | NULL    |                |
        | content         | longtext         | NO   |     | NULL    |                |
        | date            | datetime         | NO   |     | NULL    |                |
        | last_update     | datetime         | NO   |     | NULL    |                |
        +-----------------+------------------+------+-----+---------+----------------+
        10 rows in set (0.10 sec)

        mysql> desc usermuralentry_user;
        +-------------------+---------+------+-----+---------+----------------+
        | Field             | Type    | Null | Key | Default | Extra          |
        +-------------------+---------+------+-----+---------+----------------+
        | id                | int(11) | NO   | PRI | NULL    | auto_increment |
        | usermuralentry_id | int(11) | NO   | MUL | NULL    |                |
        | userinfo_id       | int(11) | NO   | MUL | NULL    |                |
        +-------------------+---------+------+-----+---------+----------------+
        3 rows in set (0.00 sec)

    And I use the following query to retrieve entries for two users:

        SELECT *
        FROM usermuralentry AS a, usermuralentry_user AS b
        WHERE a.user_src_id IN ( 1, 2 )
           OR ( a.id = b.usermuralentry_id AND b.userinfo_id IN ( 1, 2 ) );

    EXPLAIN shows both tables scanned in full:

        id: 1  select_type: SIMPLE  table: b  type: ALL
        possible_keys: usermuralentry_id, usermuralentry_user_bcd7114e, usermuralentry_user_6b192ca7
        key: NULL  rows: 147188  Extra:

        id: 1  select_type: SIMPLE  table: a  type: ALL
        possible_keys: PRIMARY
        key: NULL  rows: 1371289  Extra: Range checked for each record (index map: 0x1)

    It is taking A LOT of time... Any tips to optimize? Could the table schema be better for my application?
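
    A sketch of one rewrite (keeping the question's table and column names): the OR spanning conditions on two different tables prevents MySQL from using an index for either side, which is why the plan shows full scans; splitting the query into a UNION lets each branch use its own index:

        SELECT a.* FROM usermuralentry a
        WHERE a.user_src_id IN (1, 2)
        UNION
        SELECT a.* FROM usermuralentry a
        JOIN usermuralentry_user b ON b.usermuralentry_id = a.id
        WHERE b.userinfo_id IN (1, 2);

    With indexes on usermuralentry(user_src_id) and usermuralentry_user(userinfo_id, usermuralentry_id), both branches become index lookups, and UNION also removes the duplicated entry rows the original cross join produced; if columns from b are needed as well, each branch can select them explicitly.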

    Read the article

  • cakephp - form for belongsTo Model

    - by user1511579
    I created the following model to link 2 related tables:

        class Ficha extends AppModel {
            //public $useTable = 'ficha_seg';
            var $primaryKey = 'id_ficha';
            var $name = 'Ficha';
            var $belongsTo = array(
                'Perigo' => array(
                    'className' => 'Perigo',
                    'foreignKey' => false,
                    'conditions' => 'Perigo.id_fichas = Ficha.id_ficha'
                )
            );
        }

    Now, I have a form that requires data from the class Ficha and is then redirected to another .ctp page where I will input the data for the table "Perigos". However, since I'm still a newbie in CakePHP, I'm having difficulty building that second form to insert the data into the table "Perigos". Here is the code I have at the moment related to the second form. FichasController.php (the method where it is supposed to save the data to the table "Perigos"):

        public function preencher_ficha(){
            if ($this->request->is('ficha')) {
                $this->Ficha->create();
                if ($this->Ficha->Perigo->save($this->request->data)) {
                    $last_id = $this->Ficha->getLastInsertID();
                    $this->Session->setFlash('Your post has been updated '.$last_id.'.');
                    //$this->redirect(array('action' => 'preencher_ficha'));
                } else {
                    $this->Session->setFlash('Unable to qualquer coisa your post.');
                }
            }
        }

    The preencher_ficha.ctp file with the form:

        echo $this->Form->create('Ficha->Perigo', array('action' => 'index'));
        echo $this->Form->input('class_subst', array('label' => 'Classificação:'));
        echo $this->Form->input('simbolos_perigo', array('label' => 'Símbolos:'));
        echo $this->Form->input('frases_r', array('label' => 'Frases:'));
        echo $this->Form->end('Finalizar Ficha');

    Here I guess the create part is wrong, but I think I have errors in the controller part too.

    Read the article

  • Mapping element via joining table with NHibernate

    - by NhibernateIdiot
    This is stuff I've done lots of times before, but my mind is just blanking at the moment. I'll try to give a simple overview of my current situation. I have 3 tables, as shown below:

        Office           > id, name
        Person           > id, name
        Office_Personnel > office_id, person_id

    I then have a model for Person (id, name) and Office; however, the Office model contains personnel information:

        public class Office
        {
            int Id {get;set;}
            string Name {get;set;}
            ICollection<Person> Personnel {get;set;}
        }

    Mapping Person is easy, but now I'm a bit stumped as to why Office won't map properly. I chose to use a set when mapping the Personnel, as there shouldn't be any duplicates; however, it doesn't seem to work as I would expect:

        <set name="Personnel" table="office_personnel" cascade="all">
          <key column="office_id" />
          <one-to-many class="Person"/>
        </set>

    Now one thing that strikes me as odd is that there is no indication of what Person should bind to (person_id). It keeps trying to find an office_id column within the Person table. I'm sure this is just some simple problem and I'm being an idiot, but any help would be great! On a side note, I was weighing up whether I should even bother having a middle-man table, as I could put an Office_Id column directly in the Person table, but I'm not 100% sure whether the Person class could be in multiple Offices further down the line in the real project...

    Read the article
