Search Results

Search found 31357 results on 1255 pages for 'database indexes'.

  • Stored procedure using a cursor in MySQL

    - by RAVI
    I wrote a stored procedure using a cursor in MySQL, but the procedure takes 10 seconds to fetch the result even though the result set has only 450 records. I want to know why the procedure takes that much time. The procedure is below:

        DELIMITER //
        DROP PROCEDURE IF EXISTS curdemo123//
        CREATE PROCEDURE curdemo123(IN Branchcode INT, IN vYear INT, IN vMonth INT)
        BEGIN
          DECLARE EndOfData, tempamount INT DEFAULT 0;
          DECLARE tempagent_code, tempplantype, tempsaledate CHAR(12);
          DECLARE tempspot_rate DOUBLE;
          DECLARE var1, totalrow INT DEFAULT 1;
          DECLARE cur1 CURSOR FOR
            SELECT SQL_CALC_FOUND_ROWS ad.agentCode, ad.planType, ad.amount, ad.date
            FROM adplan_detailstbl ad
            WHERE ad.branchCode = Branchcode
              AND (ad.date BETWEEN '2009-12-1' AND '2009-12-31')
            ORDER BY ad.NUM_ID ASC;
          DECLARE CONTINUE HANDLER FOR SQLSTATE '02000' SET EndOfData = 1;

          DROP TEMPORARY TABLE IF EXISTS temptable;
          CREATE TEMPORARY TABLE temptable (
            agent_code VARCHAR(15),
            plan_type  CHAR(12),
            sale       DOUBLE,
            spot_rate  DOUBLE DEFAULT '0.0',
            dATE       DATE
          );

          OPEN cur1;
          SET totalrow = FOUND_ROWS();
          WHILE var1 <= totalrow DO
            FETCH cur1 INTO tempagent_code, tempplantype, tempamount, tempsaledate;
            IF ((tempplantype = 'Unit Plan' OR tempplantype = 'MIP') OR tempplantype = 'STUP') THEN
              SELECT spotRate INTO tempspot_rate
              FROM spot_amount
              WHERE ((monthCode = vMonth AND year = vYear)
                AND ((agentCode = tempagent_code AND branchCode = Branchcode)
                AND (planType = tempplantype)));
              INSERT INTO temptable
              VALUES (tempagent_code, tempplantype, tempamount, tempspot_rate, tempsaledate);
            ELSE
              INSERT INTO temptable (agent_code, plan_type, sale, dATE)
              VALUES (tempagent_code, tempplantype, tempamount, tempsaledate);
            END IF;
            SET var1 = var1 + 1;
          END WHILE;
          CLOSE cur1;

          SELECT * FROM temptable;
          DROP TABLE temptable;
        END //
        DELIMITER ;
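
    A cursor that issues a separate SELECT against spot_amount for every row is a likely cause of the slowness. As a sketch (assuming the tables and columns are exactly as shown above; untested against the real schema), the whole loop can usually be replaced by one set-based statement inside the procedure:

        SELECT ad.agentCode,
               ad.planType,
               ad.amount,
               CASE WHEN ad.planType IN ('Unit Plan', 'MIP', 'STUP')
                    THEN COALESCE(sa.spotRate, 0.0)   -- per-row lookup becomes a join
                    ELSE 0.0
               END AS spot_rate,
               ad.date
        FROM adplan_detailstbl ad
        LEFT JOIN spot_amount sa
               ON sa.agentCode  = ad.agentCode
              AND sa.branchCode = ad.branchCode
              AND sa.planType   = ad.planType
              AND sa.monthCode  = vMonth
              AND sa.year       = vYear
        WHERE ad.branchCode = Branchcode
          AND ad.date BETWEEN '2009-12-01' AND '2009-12-31'
        ORDER BY ad.NUM_ID;

    If the per-row lookup must stay, a composite index on spot_amount (agentCode, branchCode, planType, monthCode, year) should at least make each lookup cheap.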

  • Seeking a GUI auto-format feature for T-SQL

    - by dvanaria
    Is there a freely available GUI tool for Microsoft SQL Server (via T-SQL) that provides an auto-format feature? I constantly find myself writing queries in SQL Query Analyzer (Microsoft's standard GUI tool for T-SQL), cutting and pasting the whole thing into SQLyog (a GUI tool for MySQL), pressing F12 to have it reformatted into an easily readable, industry-standard format, and then pasting it back into Query Analyzer to execute. I do this all the time at work and haven't been able to find an alternative. I realize that SQLyog is no longer free software; what I'm looking for is specifically an MS SQL Server interface with auto-formatting. Thanks in advance for your help.

  • Unable to drop a foreign key in MySQL (error 150)

    - by Shantanu Gupta
    I am trying to create a foreign key on my table, but when I execute the query it fails with error 150:

        Error Code : 1005
        Can't create table '.\vts#sql-6ec_1.frm' (errno: 150) (0 ms taken)

    My query to create the foreign key:

        ALTER TABLE `vts`.`tblguardian`
          ADD CONSTRAINT `FK_tblguardian`
          FOREIGN KEY (`GuardianPickPointId`)
          REFERENCES `tblpickpoint` (`PickPointId`);

    EDIT: Now I am trying to drop this constraint, but that fails again with the same error I got when trying to create the foreign key:

        ALTER TABLE `vts`.`tblguardian` DROP INDEX `FK_tblguardian`;

    Primary key table:

        CREATE TABLE `tblpickpoint` (
          `PickPointId` int(4) NOT NULL auto_increment,
          `PickPointName` varchar(500) default NULL,
          `PickPointLabel` varchar(500) default NULL,
          `PickPointLatLong` varchar(100) NOT NULL,
          PRIMARY KEY (`PickPointId`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1 CHECKSUM=1 DELAY_KEY_WRITE=1 ROW_FORMAT=DYNAMIC

    Foreign key table:

        CREATE TABLE `tblguardian` (
          `GuardianId` int(4) NOT NULL auto_increment,
          `GuardianName` varchar(500) default NULL,
          `GuardianAddress` varchar(500) default NULL,
          `GuardianMobilePrimary` varchar(15) NOT NULL,
          `GuardianMobileSecondary` varchar(15) default NULL,
          `GuardianPickPointId` int(4) default NULL,
          PRIMARY KEY (`GuardianId`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1
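
    Assuming the constraint did get created under the name FK_tblguardian, the drop fails because InnoDB will not drop an index that is still required by a foreign key. The constraint has to be dropped before the index, roughly like this:

        ALTER TABLE `vts`.`tblguardian` DROP FOREIGN KEY `FK_tblguardian`;
        -- only after that, if the leftover index itself should go:
        ALTER TABLE `vts`.`tblguardian` DROP INDEX `FK_tblguardian`;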

  • MS Access 2003 - Failure to create MDE file: error VBA is corrupt?

    - by Justin
    OK, so this is a brand-new snag I have run into. I am trying to build a new MDE from my source MDB file, and it is locking up Access. In the MDB I first compact and repair, then select "create a new MDE", just as I have done many times before. It looks like it starts the process, but it never gets to the final compact step, and Access stops responding. After I force-close the app, I look in the folder where I am trying to create the MDE and see a new Access db1 file there. If I try to open it, I get an error saying the file is not found, and then that the Visual Basic for Applications project is corrupt. The thing is, I only made a very simple adjustment to the code since the last MDE build, and I have double- and triple-checked it since: it's just an "open this form, close that one" addition, so that's not the cause. I did, however, have my source MDB file on a disc that I copied to my laptop, and then tried to re-link the tables to the network drive (they had been linked to other tables on my local drive so that I could develop offline). Could that be it? PLEASE HELP!!!

  • Update a MySQL table from JSP

    - by vishnu
    I have the following in a JSP file, but the values are not updated in the MySQL table. Maybe it is not committing. How can I solve this?

        String passc1 = request.getParameter("passc1");
        String accid = request.getParameter("accid");
        int i = 0;
        String sql = " update customertb "
                   + " set passwd = ?"
                   + " where acc_no = ?;";
        try {
            PreparedStatement ps = con.prepareStatement(sql);
            ps.setString(1, passc1);
            ps.setString(2, accid);
            i = ps.executeUpdate();
        } catch (Exception e) {
            // do something with Exception here. Maybe just throw it up again
        } finally {
            con.close();
        }
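
    Two things are worth checking here. The empty catch block swallows any SQLException, so a failing update looks like a silent no-op; and if the connection was opened with autocommit disabled, the update needs an explicit commit. A sketch with the same table and column names as above (connection handling assumed, and the trailing semicolon inside the SQL string dropped, since some drivers reject it):

        String sql = "update customertb set passwd = ? where acc_no = ?";
        try {
            PreparedStatement ps = con.prepareStatement(sql);
            ps.setString(1, passc1);
            ps.setString(2, accid);
            int rows = ps.executeUpdate();   // check this: 0 means no row matched acc_no
            if (!con.getAutoCommit()) {
                con.commit();                // needed only when autocommit is off
            }
            ps.close();
        } catch (Exception e) {
            e.printStackTrace();             // surface the real error instead of hiding it
        } finally {
            con.close();
        }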

  • Yahoo Query Language Problem

    - by Damiano
    Hello everybody! Today I started with Yahoo Query Language. I want to use it to retrieve stock details, so I'm talking about Yahoo Finance, and I think there is a bug in this language. This is my query:

        select * from yahoo.finance.quoteslist where symbol='@^GSPC'

    I ALWAYS get 51 results! That's impossible; take a look at http://it.finance.yahoo.com/q/cp?s=^GSPC where there are 500 results! I also tried some paging parameters:

        select * from yahoo.finance.quoteslist(50,30) where symbol='@^GSPC'    -- to get from 50 to 80
        select * from yahoo.finance.quoteslist(100) where symbol='@^GSPC'      -- to get the first 100 results
        select * from yahoo.finance.quoteslist where symbol='@^GSPC' limit 30 offset 50

    but the last stock is ALWAYS:

        <quote symbol="BBY">
          <Symbol>BBY</Symbol>
          <LastTradePriceOnly>41.03</LastTradePriceOnly>
          <LastTradeDate>5/7/2010</LastTradeDate>
          <LastTradeTime>4:00pm</LastTradeTime>
          <Change>-0.48</Change>
          <Open>41.35</Open>
          <DaysHigh>42.35</DaysHigh>
          <DaysLow>39.60</DaysLow>
          <Volume>14129531</Volume>
        </quote>

    Why do I have this kind of problem? Thank you so much for your support! (P.S. I tested it in the Yahoo YQL console.)

  • Combining cache methods - memcache/disk based

    - by Industrial
    Hi! Here's the deal. We would have taken the completely static HTML road to solve performance issues, but since the site will be partially dynamic, that won't work for us. What we have thought of instead is using memcache + eAccelerator to speed up PHP and take care of caching the most-used data. Here are the two approaches we are considering right now: use memcache for all major queries and leave it alone to do what it does best, or use memcache for the most commonly retrieved data and combine it with a standard hard-drive-backed cache for further usage. The major advantage of only using memcache is of course the performance, but as the number of users increases, the memory usage gets heavy. Combining the two sounds like a more natural approach to us, even though it is a theoretical compromise in performance. Memcached appears to have some replication features available as well, which may come in handy when it's time to add nodes. Which approach should we use? Is it stupid to compromise and combine the two methods? Should we instead focus on utilizing memcache and upgrading the memory as the load increases with the number of users? Thanks a lot!
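
    For the combined approach, the usual pattern is a read-through wrapper: check memcache first, fall back to the disk cache, and repopulate memory on a disk hit. A minimal PHP sketch, assuming a connected Memcache instance and a writable cache directory (both hypothetical):

        <?php
        function cache_get($memcache, $key) {
            $value = $memcache->get($key);
            if ($value !== false) {
                return $value;                          // hot path: served from memory
            }
            $file = '/var/cache/app/' . md5($key);      // hypothetical disk-cache location
            if (is_readable($file)) {
                $value = unserialize(file_get_contents($file));
                $memcache->set($key, $value, 0, 300);   // repopulate the memory tier for 5 min
                return $value;
            }
            return false;                               // miss in both tiers
        }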

  • Systems design question: DB connection management in load-balanced n-tier

    - by aoven
    I'm wondering about the best approach to designing a DB connection manager for a load-balanced n-tier system. Classic n-tier looks like this:

        Client -> BusinessServer -> DBServer

    A load-balancing solution as I see it would then look like this:

                            +--> BusinessServer --+
        Client -> Gateway --+--> BusinessServer --+--> SessionServer --+
                            +--> BusinessServer --+                    +--> DBServer
                            +--> ...           ---+--------------------+

    As pictured, the business server component is load-balanced via multiple instances, and a hardware gateway distributes the load among them. The session server probably needs to sit outside the load-balancing array, because it manages state, which mustn't be duplicated. Barring any major errors in the design so far, what is the best way to implement DB connection management? I've come up with a couple of options, but there may be others I'm not aware of.

    Option 1: introduce a new Broker component between the DBServer and the other components and let it handle the DB connections. The upside is that all the connections can be managed from a single point, which is very convenient. The downside is that there is now an additional "single point of failure" in the system, and since other components must go through it for every request that involves the DB in some way, it is also a bottleneck.

    Option 2: move the DB connection management into the BusinessServer and SessionServer components and let each handle its own DB connections. The upside is that there is no additional "single point of failure" or bottleneck component. The downside is that there is also no control over possible conflicts and deadlocks beyond what the DBServer itself can provide.

    What else can be done? FWIW: the technology is .NET, but none of the vendor-specific stacks are used (e.g. no WCF, MSMQ or the like).
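
    One .NET-specific point that bears on option 2: ADO.NET already pools connections per connection string within each process, so letting every BusinessServer manage its own connections often just means leaning on the provider's built-in pool. A sketch (table name hypothetical):

        using System.Data.SqlClient;

        static int CountOrders(string connString)
        {
            using (var conn = new SqlConnection(connString))             // drawn from the pool
            using (var cmd = new SqlCommand("SELECT COUNT(*) FROM Orders", conn))
            {
                conn.Open();
                return (int)cmd.ExecuteScalar();                         // Dispose returns the connection to the pool
            }
        }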

  • Model Binding to a List using non-sequential indexes. Can I access the index later?

    - by Kid A
    I'm following Phil's great tutorial on model binding to a list. I use input names like this:

        book[5804].title
        book[5804].author
        book[1234].title
        book[1234].author

    This works well and the data gets back to the model just fine, populating a list of books. What I'm looking for is a way to get access, in the model, to the index that was used to send each book; I'd like to get that number, 5804. The index is of semantic importance: if I can access it, it saves me from setting another property on the object (the book ID). Is there a way to see, either on the FormCollection or on the model after UpdateModel is called, what the index was when it was sent up?
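
    If the binder itself doesn't expose the index, it can be recovered straight from the posted field names. A sketch against the FormCollection mentioned above (the regex assumes the book[5804].title naming shown):

        using System.Linq;
        using System.Text.RegularExpressions;

        var indexes = form.AllKeys
            .Select(key => Regex.Match(key, @"^book\[(\d+)\]"))
            .Where(m => m.Success)
            .Select(m => int.Parse(m.Groups[1].Value))
            .Distinct()
            .ToList();   // e.g. [5804, 1234]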

  • Which Oracle table uses a sequence?

    - by Jaú
    Given a sequence, I need to find out which table.column gets its values. As far as I know, Oracle doesn't keep track of this relationship, so searching the source code for the sequence would be the only way. Is that right? Does anyone know of a way to find this sequence-table relationship?
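
    For the stored-code side of that search, Oracle's dictionary views can do the looking. A sketch (sequence name hypothetical; this catches triggers and PL/SQL, but not SQL issued from client applications):

        -- objects that reference the sequence at all
        SELECT name, type
        FROM   user_dependencies
        WHERE  referenced_name = 'MY_SEQ'
        AND    referenced_type = 'SEQUENCE';

        -- the exact source lines, typically a trigger doing MY_SEQ.NEXTVAL
        SELECT name, type, line, text
        FROM   user_source
        WHERE  UPPER(text) LIKE '%MY_SEQ.NEXTVAL%';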

  • Is hierarchical data in MySQL fast to retrieve?

    - by ajsie
    I've got a list of all countries, states, and cities (and sub-cities, villages, etc.) in an XML file, and retrieving, for example, all of a state's cities is really quick with XML (using an XML parser). I wonder: if I put all this information in MySQL, is retrieving a state's cities as fast as with XML? After all, XML is designed to store hierarchical data, while relational databases like MySQL are not. The list contains around 500,000 entities. So I wonder how fast it would be using either the adjacency list model or the nested set model, and which one I should use, given that there could (theoretically) be unlimited levels under a state. Which is fastest for this huge dataset? Thanks!
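
    For the "all cities of one state" access pattern, the adjacency list is a single indexed lookup and should be very fast even at 500,000 rows. A sketch (table and column names assumed):

        CREATE TABLE place (
            id        INT PRIMARY KEY,
            parent_id INT NULL,             -- NULL for top-level countries
            name      VARCHAR(100),
            INDEX idx_parent (parent_id)    -- makes "children of X" a direct index scan
        );

        -- all cities directly under state 42:
        SELECT id, name FROM place WHERE parent_id = 42;

    The nested set model pays off when whole subtrees of unlimited depth are needed in one query; for children-only lookups it buys little over an indexed parent_id.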

  • PHP: table structure

    - by A3efan
    I'm developing a website that offers audio courses, and each course can have multiple lessons. I want to display each course in its own table with its lessons. These are my tables and SQL statement:

        Table: courses  (id, title)
        Table: lessons  (id, cid (course id), title, date, file)

        $sql = "SELECT lessons.*, courses.title AS course
                FROM lessons
                INNER JOIN courses ON courses.id = lessons.cid
                GROUP BY lessons.id
                ORDER BY lessons.id";

    Can someone help me with the PHP code? This is the code I have written:

        mysql_select_db($database_config, $config);
        mysql_query("set names utf8");
        $sql = "SELECT lessons.*, courses.title AS course
                FROM lessons
                INNER JOIN courses ON courses.id = lessons.cid
                GROUP BY lessons.id
                ORDER BY lessons.id";
        $result = mysql_query($sql) or die(mysql_error());
        while ($row = mysql_fetch_assoc($result)) {
            echo "<p><span class='heading1'>" . $row['course'] . "</span></p>";
            echo "<p class='datum'>Posted under <a href='*'>*</a>, latest update on "
               . strftime("%A %d %B %Y %H:%M", strtotime($row['date']))
               . "</p>";
            echo "<div id='text'>";
            echo "<p>...</p>";
            echo "<table cellpadding='1' cellspacing='1'>";
            echo "<tr>";
            echo "<th>Nr.</th>";
            echo "<th width='450'>Lesson</th>";
            echo "<th>Date</th>";
            echo "<th>Download</th>";
            echo "</tr>";
            echo "<tr>";
            echo "<td>" . $row['nr'] . "</td>";
            echo "<td>" . $row['title'] . "</td>";
            echo "<td>" . strftime("%d/%m/%Y", strtotime($row['date'])) . "</td>";
            echo "<td><a href='audio/" . rawurlencode($row['file']) . "'>MP3</a></td>";
            echo "</tr>";
            echo "</table>";
            echo "</div>";
            echo "<br>";
        }
        ?>
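
    To get one table per course rather than one per lesson, track the current course while looping and emit the table header only when it changes. A sketch (assumes the query is changed to ORDER BY courses.id, lessons.id so rows arrive grouped; the GROUP BY lessons.id does nothing useful and can be dropped):

        $current = null;
        while ($row = mysql_fetch_assoc($result)) {
            if ($row['course'] !== $current) {
                if ($current !== null) echo "</table>";   // close the previous course
                $current = $row['course'];
                echo "<p><span class='heading1'>" . htmlspecialchars($current) . "</span></p>";
                echo "<table cellpadding='1' cellspacing='1'>";
                echo "<tr><th>Nr.</th><th width='450'>Lesson</th><th>Date</th><th>Download</th></tr>";
            }
            echo "<tr><td>" . $row['id'] . "</td>"
               . "<td>" . htmlspecialchars($row['title']) . "</td>"
               . "<td>" . strftime("%d/%m/%Y", strtotime($row['date'])) . "</td>"
               . "<td><a href='audio/" . rawurlencode($row['file']) . "'>MP3</a></td></tr>";
        }
        if ($current !== null) echo "</table>";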

  • How to use Data aware controls "correctly"?

    - by lyborko
    Hi, I would like to ask experienced users: do you prefer to use data-aware controls to add, insert, delete, and edit data in a DB, or do you favor doing it manually? I developed some DB applications in which, for the sake of a "user-friendly policy", I ran into a complicated web of table events (AfterInsert, AfterEdit, After..., and BeforeEdit, BeforeInsert, Before...). After that it was quite nasty work to debug the application. Aware of this risk, in a later application I tried to avoid the problem and paid increased attention to writing readable, comprehensible code. Everything seemed fine at the beginning, but as I needed to handle some preprocessing before sending and loading data, etc., I ran into the same problems again, slowly and inevitably. Sometimes I could not use data-aware controls at all, and what seemed to be a "cool" feature of a data-aware control at the beginning turned into an obstacle in the end. I "had to" write special routines to make non-data-aware controls behave as if they were data-aware. Then I asked myself: why on earth should I use data-aware controls? Is it better to base the application architecture on non-data-aware controls? It requires more time to write bug-proof code, of course, but is it worth it? I don't know. This has happened to me several times, as if jinxed: paradise at the beginning, hell at the end. I don't know whether I'm using the wrong method to write DB programs, or whether there is some standard common practice I'm missing. Or is this a common problem for everybody? Thanks for your advice and experiences.

  • Alternative design for a synonyms table?

    - by Majid
    I am working on an app which suggests alternative words/phrases for input text, and I have doubts about what would be a good design for the synonyms table. Design considerations: the number of synonyms is variable, e.g. "football" has one synonym ("soccer") but "particular" has two ("particularly", "specifically"); if "football" is a synonym of "soccer", the relation exists in the opposite direction as well; our goal is to query a word and find its synonyms; and we want to keep the table small and make adding new words easy. What comes to my mind is a two-column design with column A = word and column B = a delimited list of synonyms. Is there a better alternative? What about using two tables, one for words and the other for relations?
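
    A sketch of the two-table variant (names assumed; storing each pair once keeps the table small, at the cost of checking both columns when querying):

        CREATE TABLE word (
            id   INT PRIMARY KEY,
            text VARCHAR(100) UNIQUE
        );

        CREATE TABLE synonym (
            word_a INT NOT NULL REFERENCES word(id),
            word_b INT NOT NULL REFERENCES word(id),
            PRIMARY KEY (word_a, word_b)   -- convention: word_a < word_b, so each pair exists once
        );

        -- all synonyms of 'football', covering both directions:
        SELECT w2.text
        FROM   word w1
        JOIN   synonym s ON w1.id IN (s.word_a, s.word_b)
        JOIN   word w2   ON w2.id IN (s.word_a, s.word_b) AND w2.id <> w1.id
        WHERE  w1.text = 'football';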

  • PostgreSQL - best way to return an array of key-value pairs

    - by Matt W
    I'm trying to select a number of fields, one of which needs to be an array in which each element contains two values: a name (character varying) and an ID (numeric). I know how to return an array of single values (using the ARRAY keyword), but I'm unsure how to return an array of an object that itself contains two values. The query is something like:

        SELECT t.field1,
               t.field2,
               ARRAY( -- with each element containing two values, i.e. {'TheName', 1}
               )
        FROM MyTable t

    I read that one way to do this is by selecting the values into a type and then creating an array of that type. The problem is that the rest of the function already returns a type, which means I would then have nested types. Is that OK? If so, how would you read this data back in application code, i.e. with a .NET data provider like Npgsql? Any help is much appreciated.
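
    One approach that avoids anonymous records is a named composite type, which PostgreSQL (8.3+) lets you build arrays of. A sketch (type, table, and column names assumed):

        CREATE TYPE name_id AS (name character varying, id numeric);

        SELECT t.field1,
               t.field2,
               ARRAY(SELECT ROW(x.name, x.id)::name_id
                     FROM   related x
                     WHERE  x.parent_id = t.id) AS pairs
        FROM MyTable t;

    On the client side, Npgsql may hand such a column back as a string representation of the record array that then has to be parsed, so it is worth prototyping the round trip before committing to nested types.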

  • How do I get save (no exclamation point) semantics in an ActiveRecord transaction?

    - by James A. Rosen
    I have two models, Person and Address, which I'd like to create in a transaction. That is, I want to try to create the Person and, if that succeeds, create the related Address. I would like to use save semantics (return true or false) rather than save! semantics (raise ActiveRecord::StatementInvalid or not). This doesn't work, because the save call doesn't trigger a rollback on the transaction:

        class Person
          def save_with_address(address_options = {})
            transaction do
              self.save
              address = Address.build(address_options)
              address.person = self
              address.save
            end
          end
        end

    (Changing the self.save call to an if self.save block around the rest doesn't help, because the Person save still succeeds even when the Address one fails.) And this doesn't work, because it raises the ActiveRecord::StatementInvalid exception out of the transaction block without triggering an ActiveRecord::Rollback:

        class Person
          def save_with_address(address_options = {})
            transaction do
              save!
              address = Address.build(address_options)
              address.person = self
              address.save!
            end
          end
        end

    The Rails documentation specifically warns against catching ActiveRecord::StatementInvalid inside the transaction block. I guess my first question is: why isn't this transaction block... transacting on both saves?
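
    One way to keep boolean semantics is to use save, raise ActiveRecord::Rollback yourself when either save returns false, and return the result: Rollback aborts the transaction but is swallowed by the transaction block, so nothing propagates. A sketch (using Address.new in place of Address.build, which is an association method):

        class Person
          def save_with_address(address_options = {})
            saved = false
            transaction do
              address = Address.new(address_options)
              address.person = self
              saved = save && address.save
              raise ActiveRecord::Rollback unless saved   # undo the Person save too
            end
            saved
          end
        end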

  • SQL Design: representing a default value with overrides?

    - by Mark Harrison
    I need a sparse table which contains a set of "override" values for another table. I also need to specify the default value for the items overridden. For example, if the default value is 17, then foo, bar, baz will have the values 17, 21, 17:

        table "things"          table "xvalue"
        name   stuff            name   xval
        ----   -----            ----   ----
        foo    ...              bar    21
        bar    ...
        baz    ...

    If I don't care about a FK from xvalue.name -> things.name, I could simply put a "DEFAULT" name:

        table "xvalue"
        name      xval
        ----      ----
        DEFAULT   17
        bar       21

    But I like having a FK. I could have a separate default table, but it seems odd to have 2x the number of tables:

        table "xvalue_default"      table "xvalue"
        xval                        name   xval
        ----                        ----   ----
        17                          bar    21

    I could have a "defaults table":

        tablename   attributename   defaultvalue
        ---------   -------------   ------------
        xvalue      xval            17

    but then I run into type issues on defaultvalue. My operations guys prefer as compact a representation as possible, so they can most easily see the "diff" or deviations from the default. What's the best way to represent this, including the default value? This will be for Oracle 10.2, if that makes a difference.
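
    With the separate default table, reads can stay simple: a COALESCE over an outer join picks the override when present and the default otherwise. A sketch using the names above (assumes xvalue_default holds exactly one row):

        SELECT t.name,
               COALESCE(x.xval, d.xval) AS xval
        FROM things t
        LEFT JOIN  xvalue x         ON x.name = t.name
        CROSS JOIN xvalue_default d;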

  • How can I execute a SQL query in emacs lisp?

    - by Chris R
    I want to execute an SQL query and get its result in elisp:

        (let ((results (do-sql-query "SELECT * FROM a_table")))
          (do-something-with results))

    I'm using Postgres, and I already know all of my connection information (host, username, password, db, etc.). I just want to execute the query and get the result back, synchronously.
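
    If no elisp Postgres binding is at hand, shelling out to psql synchronously is a workable sketch (connection details are placeholders; the password would come from ~/.pgpass):

        (defun do-sql-query (query)
          "Run QUERY against Postgres via psql; return the raw output as a string."
          (shell-command-to-string
           (format "psql -h localhost -U myuser -d mydb -t -A -F '|' -c %s"
                   (shell-quote-argument query))))

        ;; -t: tuples only, -A: unaligned, -F: field separator.
        ;; Split into rows and fields with:
        ;;   (mapcar (lambda (line) (split-string line "|"))
        ;;           (split-string (do-sql-query "SELECT * FROM a_table") "\n" t))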

  • SQLAlchemy - relationship limited on more than just the foreign key

    - by Marian
    I have a wiki DB layout with pages and revisions. Each Revision has a page_id referencing the Page and a page relationship to the referenced page; each Page has an all_revisions relationship to all its revisions. So far, so common. But I want to implement different epochs for the pages: if a page was deleted and is recreated, the new revisions get a new epoch. To help find the correct revisions, each page has a current_epoch field. Now I want to provide a revisions relation on the page that contains only the revisions whose epoch matches the page's current one. This is what I've tried:

        revisions = relationship('Revision',
            primaryjoin = and_(
                'Page.id == Revision.page_id',
                'Page.current_epoch == Revision.epoch',
            ),
            foreign_keys=['Page.id', 'Page.current_epoch']
        )

    Full code (you may run that as it is). However, this always raises ArgumentError: Could not determine relationship direction for primaryjoin condition... I've tried everything that came to mind, but it didn't work. What am I doing wrong? Is this a bad approach, and could it be done other than with a relationship?
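
    The direction error usually means SQLAlchemy cannot tell which side of the join is "foreign". Since page_id and epoch live on Revision, marking the Revision columns (not the Page ones) as the foreign keys should let it infer the direction. A sketch along those lines (untested against the full model):

        revisions = relationship(
            'Revision',
            primaryjoin='and_(Page.id == Revision.page_id, '
                        'Page.current_epoch == Revision.epoch)',
            foreign_keys='[Revision.page_id, Revision.epoch]',
            viewonly=True,  # this relation filters; writes go through all_revisions
        )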

  • How to track history of db tables that include many-to-many mapping tables?

    - by chacmool
    I have seen several questions here on tracking DB history, but can't seem to find one that matches our situation. We need to track the history of several tables, some of which are many-to-many linking tables. Say we have this schema:

        EntityA: id, name
        EntityB: id, name
        ABLink:  A_id, B_id

    Tracking changes to EntityA or EntityB seems pretty straightforward: we can keep a log table with the same columns plus a date stamp and user. But what about the links? How do we maintain the set of links that are valid for a given version of the data? We need to be able to recreate a history of the data showing changes in chronological order, so if a link is added or deleted, we indicate that, etc.
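
    One workable scheme is to give the link table interval-style history rows instead of a current-state flag, so membership can be reconstructed for any moment. A sketch (names assumed):

        CREATE TABLE ABLink_history (
            A_id       INT       NOT NULL,
            B_id       INT       NOT NULL,
            valid_from TIMESTAMP NOT NULL,   -- when the link was created
            valid_to   TIMESTAMP NULL,       -- NULL while the link is still active
            changed_by VARCHAR(50)
        );

        -- the set of links in effect at moment :t
        SELECT A_id, B_id
        FROM   ABLink_history
        WHERE  valid_from <= :t
          AND  (valid_to IS NULL OR valid_to > :t);

    Deleting a link fills in valid_to; re-adding it later inserts a fresh row.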

  • Using Entity Framework, how do I specify a sort on a navigation property?

    - by Jared
    I have two tables, [Category] and [Item], connected by a join table, [CategoryAndItem], which has two primary-key fields: [CategoryKey] and [ItemKey]. Foreign keys exist appropriately, and Entity Framework has no problem pulling this in and creating the correct navigation properties that connect the entity objects. Basically, each category can have multiple items, and items can be in multiple categories. The problem is that the order of items is specified per category, so a particular item might be third in one category but fifth in another. In the past, I have added a [Sequence] field to the join table and modified the stored procedure to handle it. But since Entity Framework is replacing my stored procedures, I need to figure out how to make it handle the sequence. Any suggestions?
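
    Once the join table carries a payload column like Sequence, Entity Framework can no longer collapse it into a pure many-to-many: CategoryAndItem has to be mapped as an entity in its own right, and the ordering is then expressed through it. A sketch (entity-set and property names assumed):

        // items of one category, in that category's own order
        var items = context.CategoryAndItems
            .Where(ci => ci.CategoryKey == categoryKey)
            .OrderBy(ci => ci.Sequence)
            .Select(ci => ci.Item)
            .ToList();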

  • What's the best way to migrate a Django DB from SQLite to MySQL?

    - by Inshim
    I need to migrate my DB from SQLite to MySQL, and the various tools/scripts out there are too numerous for me to easily spot the safest and most elegant solution. This snippet seemed nice (http://djangosnippets.org/snippets/14/), but it appears not to have been updated in three years, which is worrying. Can you recommend a solution that is known to be reliable with Django 1.1.1?
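
    One route that avoids third-party scripts entirely is Django's own serializers. A sketch (run the dump against the existing SQLite settings, then point the DATABASE_* settings at MySQL):

        python manage.py dumpdata > dump.json    # with the SQLite settings active
        # edit settings.py: switch DATABASE_* to MySQL, then create the schema
        python manage.py syncdb --noinput
        python manage.py loaddata dump.json

    One known wrinkle: syncdb pre-populates tables such as django_content_type, so those rows may need clearing before loaddata to avoid integrity errors.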

  • MongoDB RSS Feed Entries, Embed the Entries in the Feed Object?

    - by Patrick Klingemann
    I am saving a reference to an RSS feed in MongoDB, and each feed has an ever-growing list of entries. As I design my schema, I'm concerned about this statement from the MongoDB "Schema Design - Embed vs. Reference" documentation: "If the amount of data to embed is huge (many megabytes), you may reach the limit on size of a single object." If I understand the statement correctly, this will surely happen. So the question is: am I correct to assume that I should not embed the feed entries within a feed, because I'll eventually reach the limit on the size of a single object?
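
    Given that, the usual shape is a feeds collection plus a separate entries collection that references it; an index on the reference keeps per-feed queries cheap. A mongo-shell sketch (field names assumed):

        db.feeds.insert({ _id: feedId, url: "http://example.com/rss", title: "..." });
        db.entries.insert({ feed_id: feedId, title: "...", published: new Date() });

        db.entries.ensureIndex({ feed_id: 1, published: -1 });   // fast per-feed listing
        db.entries.find({ feed_id: feedId }).sort({ published: -1 }).limit(20);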
