Search Results

Search found 28985 results on 1160 pages for 'sql training'.

  • Clustered index on frequently changing reference table of one or more foreign keys

    - by Ian
    My concern is the performance of a clustered index on a reference table that sees many rapid inserts and deletes.

    Table 1 "Collection": collection_pk int (among other fields)
    Table 2 "Item": item_pk int (among other fields)
    Reference table "Collection_Items": collection_pk int, item_pk int (combined primary key)

    Because the primary key is composed of both pks, a clustered index is created and the data is physically ordered in the table according to the combined keys. I have many users creating and deleting collections, and adding and removing items in those collections, very frequently. All of this affects the "Collection_Items" table and its clustered index.

    QUESTION: Since the "Collection_Items" table is so dynamic, wouldn't there be a big performance hit from constantly re-sorting the table rows because of the clustered index? If yes, what should I do to minimize this?
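
    For reference, a minimal sketch of the schema being described, assuming SQL Server (where a primary key is clustered by default). The table and column names come from the question; the NONCLUSTERED variant is one common way, offered here as a suggestion rather than the thread's answer, to avoid constant physical reordering:

      CREATE TABLE Collection_Items (
          collection_pk int NOT NULL,
          item_pk       int NOT NULL,
          -- the composite primary key is clustered by default, so rows are
          -- physically ordered by (collection_pk, item_pk)
          CONSTRAINT PK_Collection_Items
              PRIMARY KEY CLUSTERED (collection_pk, item_pk)
      );

      -- under heavy insert/delete churn, the same logical key can instead be
      -- backed by a nonclustered index, leaving the table as a heap
      ALTER TABLE Collection_Items DROP CONSTRAINT PK_Collection_Items;
      ALTER TABLE Collection_Items
          ADD CONSTRAINT PK_Collection_Items
          PRIMARY KEY NONCLUSTERED (collection_pk, item_pk);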

  • Getting match counts in a query on a large table is very slow

    - by Roy Roes
    I have a MySQL table "items" with two integer fields: seid and tiid. The table has about 35,000,000 records, so it's very large.

      seid  tiid
      ----------
         1     1
         2     2
         2     3
         2     4
         3     4
         4     1
         4     2

    The table has a primary key on both fields, an index on seid, and an index on tiid. Someone types in one or more tiid values, and I would like to get the seid with the most matches. For example, when someone types 1,2,3, I would like to get seid 2 and 4 as the result; they both match two of the tiid values. My query so far:

      SELECT COUNT(*) AS c, seid
      FROM items
      WHERE tiid IN (1, 2, 3)
      GROUP BY seid
      HAVING c = (SELECT COUNT(*) AS c, seid
                  FROM items
                  WHERE tiid IN (1, 2, 3)
                  GROUP BY seid
                  ORDER BY c DESC
                  LIMIT 1)

    But this query is extremely slow because of the large table. Does anyone know how to construct a better query for this purpose?
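
    One way the aggregation could be computed only once, offered as a sketch of my own rather than anything from the thread: it assumes MySQL 8+ for window functions, and a composite index on (tiid, seid) would let it run entirely from the index:

      SELECT seid, c
      FROM (
          SELECT seid,
                 COUNT(*) AS c,
                 -- RANK() keeps ties, so every seid sharing the top
                 -- match count is returned (here: seid 2 and 4)
                 RANK() OVER (ORDER BY COUNT(*) DESC) AS rnk
          FROM items
          WHERE tiid IN (1, 2, 3)
          GROUP BY seid
      ) AS ranked
      WHERE rnk = 1;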

  • Allowing BBCode but not JavaScript

    - by user1405062
    Hello, I'm trying to get the hang of using BBCode on my normal PHP site (not a forum or anything, just a normal site). I have seen a few posts like this one, http://www.pixel2life.com/forums/index.php?/topic/10659-php-bbcode-parser/, which say I need a BBCode parser. I was wondering whether anyone can show me how I would use one. I have a status box where users can update their status, and I would like to allow BBCode but not JavaScript. When inserting into the DB, I strip the status like so:

      $status  = mysql_real_escape_string($_POST['status']);
      $status2 = strip_tags($status);

    That stops the JavaScript and tags from getting through, but I need the BBCode tags to come through while JavaScript stays blocked. Is there any way to do this? I then just echo it out:

      echo $status2;

    but only plain text shows. Could someone show me how to use the BBCode parser, let BBCode through while stopping JavaScript, and echo out the BBCode formatting?

  • Store latitudes and longitudes in database for proximity/radius search using Google Maps API, .NET a

    - by poojad
    What is the approach for storing the latitudes and longitudes of multiple addresses as a one-time setup? I need to find nearby stores using Google Maps, and I have to get the latitudes and longitudes of all the available stores. As the data is huge and may increase or change in the future, can anyone suggest an approach that takes performance and maintenance into consideration? Thank you.
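
    For the radius-search side, one widely used pattern, sketched here as a suggestion (the stores table and its lat/lng columns are hypothetical, and MySQL is assumed), is to store plain latitude/longitude columns and filter by great-circle distance in SQL:

      -- @lat/@lng are the search origin; 3959 is the Earth's radius in miles
      -- (use 6371 for kilometres)
      SELECT id, name,
             (3959 * ACOS(COS(RADIANS(@lat)) * COS(RADIANS(lat))
                        * COS(RADIANS(lng) - RADIANS(@lng))
                    + SIN(RADIANS(@lat)) * SIN(RADIANS(lat)))) AS distance
      FROM stores
      HAVING distance < 25    -- radius in miles
      ORDER BY distance
      LIMIT 20;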

  • I want to do a SQL update loop statement using do-while in PHP

    - by Jean
    Hello, I want to loop the update statement, but it only loops once. Here is the code I am using:

      do {
          mysql_select_db($database_ll, $ll);
          $query_query = "update table set ex='$71[1]' where field='val'";
          $query = mysql_query($query_query, $ll) or die(mysql_error());
          $row_domain_all = mysql_fetch_assoc($query);
      } while ($row_query = mysql_fetch_assoc($query));

    Thanks Jean

  • LINQ .NET with dynamically generated controls

    - by Stuart
    This is a very strange problem, and I really don't have a clue what's causing it. What is supposed to happen: a call to the BLL and then the DAL returns some data via a LINQ SPROC call. The returned IMultipleResults object is processed and all results are stored in a hashtable. The hashtable is stored in session, and the UI layer then uses these results to dynamically generate some GridViews. Easy, you would think. But when I run the code, I don't get any GridViews. If I take out the call to the BLL and DAL, the GridViews appear, but with nothing in them. Why does the page render correctly only when I take out the call to get the data? Thanks.

  • Is COUNT(*) really expensive?

    - by Anil Namde
    I have a page with 4 tabs displaying 4 different reports based on different tables. I obtain the row count of each table using a SELECT COUNT(*) FROM <table> query and display the number of rows available in each table on the tabs. As a result, each page postback executes 5 COUNT(*) queries (4 to get the counts and 1 for pagination) plus 1 query for the report content. Now my question: are COUNT(*) queries really expensive? Should I keep the row counts (at least those displayed on the tabs) in the page's view state instead of querying multiple times?
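
    As a hedged aside (the question does not name the database engine, so SQL Server is an assumption here based on the view-state mention): when an approximate count is acceptable, it can be read from index metadata instead of scanning the table at all:

      -- approximate row count from partition metadata (SQL Server);
      -- index_id 0 = heap, 1 = clustered index; the table name is hypothetical
      SELECT SUM(row_count) AS approx_rows
      FROM sys.dm_db_partition_stats
      WHERE object_id = OBJECT_ID('dbo.MyTable')
        AND index_id IN (0, 1);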

  • How do you write a recursive stored procedure?

    - by Grayson Mitchell
    I simply want a stored procedure that calculates a unique id and inserts it. If it fails, it just calls itself to regenerate said id. I have been looking for an example but can't find one, and I am not sure how I should get the SP to call itself and set the appropriate output parameter. I would also appreciate someone pointing out how to test this SP.

      ALTER PROCEDURE [dbo].[DataContainer_Insert]
          @SomeData varchar(max),
          @DataContainerId int out
      AS
      BEGIN
          -- SET NOCOUNT ON added to prevent extra result sets from
          -- interfering with SELECT statements.
          SET NOCOUNT ON;
          BEGIN TRY
              SELECT @UserId = MAX(UserId) FROM DataContainer
              INSERT INTO DataContainer (UserId, SomeData) VALUES (@UserId, SomeData)
              SELECT @DataContainerId = scope_identity()
          END TRY
          BEGIN CATCH
              -- try again
              exec DataContainer_Insert @DataContainerId, @SomeData
          END CATCH
      END
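
    For what it's worth, here is a sketch of how the procedure might look once its apparent slips are fixed; this is my own reading of the intent, not an answer from the thread. Note that SQL Server caps nested procedure calls at 32 levels, so the self-call in CATCH can only retry a bounded number of times:

      ALTER PROCEDURE [dbo].[DataContainer_Insert]
          @SomeData varchar(max),
          @DataContainerId int OUTPUT
      AS
      BEGIN
          SET NOCOUNT ON;
          DECLARE @UserId int;    -- the original never declares @UserId
          BEGIN TRY
              SELECT @UserId = ISNULL(MAX(UserId), 0) + 1 FROM DataContainer;
              INSERT INTO DataContainer (UserId, SomeData)
              VALUES (@UserId, @SomeData);    -- @ was missing on SomeData
              SELECT @DataContainerId = SCOPE_IDENTITY();
          END TRY
          BEGIN CATCH
              -- retry: parameters go in declaration order, and OUTPUT is
              -- needed for the id to reach the caller
              EXEC dbo.DataContainer_Insert @SomeData, @DataContainerId OUTPUT;
          END CATCH
      END

    A quick test could then be:

      DECLARE @id int;
      EXEC dbo.DataContainer_Insert 'some data', @id OUTPUT;
      SELECT @id;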

  • Select top 50 records in SQL

    - by air
    I have the following database table, named tbl_rec:

      recno  uid  uname  points
      =========================
          1    a    abc     10
          2    b    bac      8
          3    c    cvb     12
          4    d    aty     13
          5    f    cyu      9
        ...

    I have about 5000 records in this table. I want to select the 50 records with the highest points. I can't use a LIMIT statement, as I am already using LIMIT for paging. Thanks
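
    A hedged sketch of one way around the clash (my suggestion, not from the thread): MySQL accepts LIMIT inside a derived table, so the top-50 selection and the paging LIMIT can coexist:

      SELECT *
      FROM (
          SELECT recno, uid, uname, points
          FROM tbl_rec
          ORDER BY points DESC
          LIMIT 50                -- the 50 highest-points records
      ) AS top50
      LIMIT 10 OFFSET 0;          -- paging applied separately, outside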

  • How to avoid geometric slowdown with large Linq transactions?

    - by Shaul
    I've written some really nice, funky libraries for use in LinqToSql. (Some day when I have time to think about it I might make it open source... :) ) Anyway, I'm not sure if this is related to my libraries or not, but I've discovered that when I have a large number of changed objects in one transaction and then call DataContext.GetChangeSet(), things start getting reaalllly slooowwwww. When I break into the code, I find that my program is spinning its wheels doing an awful lot of Equals() comparisons between the objects in the change set.

    I can't guarantee this is true, but I suspect that if there are n objects in the change set, then the call to GetChangeSet() is causing every object to be compared to every other object for equivalence, i.e. at best (n^2 - n)/2 calls to Equals().

    Yes, of course I could commit each object separately, but that kinda defeats the purpose of transactions. And in the program I'm writing, I could have a batch job containing 100,000 separate items that all need to be committed together; that's around 5 billion comparisons.

    So the questions are: (1) Is my assessment of the situation correct? Do you get this behavior in pure, textbook LinqToSql, or is this something my libraries are doing? (2) Is there a standard/reasonable workaround so that I can create my batch without making the program geometrically slower with every extra object in the change set?

  • How to join two queries in SQL (Oracle)

    - by MAHESH A SONI
    How can I join these queries?

      SELECT RCTDT, SUM(RCTAMOUNT), COUNT(RCTAMOUNT)
      FROM RECEIPTS4
      WHERE RCTDT BETWEEN '01-nov-2009' AND '30-nov-2009'
        AND RCTTYPE = 'CA'
        AND RCTAMOUNT > 0
      GROUP BY RCTDT

      SELECT RCTDT, SUM(RCTAMOUNT), COUNT(RCTAMOUNT)
      FROM RECEIPTS4
      WHERE RCTDT BETWEEN '01-nov-2009' AND '30-nov-2009'
        AND RCTTYPE = 'CQ'
        AND RCTAMOUNT > 0
      GROUP BY RCTDT
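
    One way to fold both into a single pass, offered as a sketch rather than the thread's accepted answer, is conditional aggregation over the two receipt types:

      SELECT RCTDT,
             SUM(CASE WHEN RCTTYPE = 'CA' THEN RCTAMOUNT ELSE 0 END) AS ca_sum,
             COUNT(CASE WHEN RCTTYPE = 'CA' THEN RCTAMOUNT END)      AS ca_count,
             SUM(CASE WHEN RCTTYPE = 'CQ' THEN RCTAMOUNT ELSE 0 END) AS cq_sum,
             COUNT(CASE WHEN RCTTYPE = 'CQ' THEN RCTAMOUNT END)      AS cq_count
      FROM RECEIPTS4
      WHERE RCTDT BETWEEN '01-nov-2009' AND '30-nov-2009'
        AND RCTTYPE IN ('CA', 'CQ')
        AND RCTAMOUNT > 0
      GROUP BY RCTDT;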

  • multi-row update table with "different" data

    - by kralco626
    I think the best way to explain this is to tell you what I have. I have two tables, A and B; both have columns Field1 and Field2, but Field2 is not populated in table B. I want to populate Field2 of table B with Field2 of table A wherever Field1 of table A matches Field1 of table B. Something like:

      update tableB set Field2 = tableA.field2 where tablea.field1 = tableb.field1

    The reason this may seem so odd and obscure is that I'm trying to do an initial data load from an old database to a new one. Please let me know if you need clarification.
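
    The question doesn't name a database engine, so here is a hedged sketch of the portable correlated-subquery form, which most engines accept (SQL Server, for one, also offers a more convenient UPDATE ... FROM join syntax):

      UPDATE tableB
      SET Field2 = (SELECT a.Field2
                    FROM tableA AS a
                    WHERE a.Field1 = tableB.Field1)   -- assumes Field1 is unique in tableA
      WHERE EXISTS (SELECT 1
                    FROM tableA AS a
                    WHERE a.Field1 = tableB.Field1);  -- keeps unmatched rows from being nulled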

  • Optimize INSERT / UPDATE / DELETE operation

    - by clime
    I wonder if the following script can be optimized somehow. It writes a lot to disk, because it deletes possibly up-to-date rows and reinserts them. I was thinking about applying something like "insert ... on duplicate key update", and found some possibilities for single-row updates, but I don't know how to apply it in the context of an INSERT INTO ... SELECT query.

      CREATE OR REPLACE FUNCTION update_member_search_index() RETURNS VOID AS $$
      DECLARE
          member_content_type_id INTEGER;
      BEGIN
          member_content_type_id := (SELECT id FROM django_content_type
                                     WHERE app_label = 'web' AND model = 'member');

          DELETE FROM watson_searchentry WHERE content_type_id = member_content_type_id;

          INSERT INTO watson_searchentry (engine_slug, content_type_id, object_id,
                                          object_id_int, title, description, content,
                                          url, meta_encoded)
          SELECT 'default',
                 member_content_type_id,
                 web_member.id,
                 web_member.id,
                 web_member.name,
                 '',
                 web_user.email||' '||web_member.normalized_name||' '||web_country.name,
                 '',
                 '{}'
          FROM web_member
          INNER JOIN web_user ON (web_member.user_id = web_user.id)
          INNER JOIN web_country ON (web_member.country_id = web_country.id)
          WHERE web_user.is_active = TRUE;
      END;
      $$ LANGUAGE plpgsql;

    EDIT: Schemas of web_member, watson_searchentry, web_user, web_country: http://pastebin.com/3tRVPPVi. (content_type_id, object_id_int) is a unique pair in watson_searchentry, but at the moment no index enforces it (there has been no use for one). This script should run at most once a day, for full rebuilds of the search index.
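
    For reference, a sketch of how the delete-and-reinsert could become an upsert. This assumes PostgreSQL 9.5+ (ON CONFLICT does not exist in earlier versions) and first adding the unique index on (content_type_id, object_id_int) that the question notes is currently missing; inside the function, the INSERT would then become:

      -- prerequisite: ON CONFLICT needs a matching unique index
      CREATE UNIQUE INDEX watson_searchentry_ct_obj_uq
          ON watson_searchentry (content_type_id, object_id_int);

      INSERT INTO watson_searchentry (engine_slug, content_type_id, object_id,
                                      object_id_int, title, description, content,
                                      url, meta_encoded)
      SELECT 'default', member_content_type_id, m.id, m.id, m.name, '',
             u.email || ' ' || m.normalized_name || ' ' || c.name, '', '{}'
      FROM web_member m
      JOIN web_user u ON m.user_id = u.id
      JOIN web_country c ON m.country_id = c.id
      WHERE u.is_active = TRUE
      ON CONFLICT (content_type_id, object_id_int)
      DO UPDATE SET title = EXCLUDED.title,
                    content = EXCLUDED.content;
      -- entries for users who are no longer active would still need a
      -- separate, narrower DELETE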

  • Inexplicably slow query in MySQL

    - by Brandon M.
    Given this result set:

      mysql> EXPLAIN SELECT c.cust_name, SUM(l.line_subtotal) FROM customer c
          ->   JOIN slip s ON s.cust_id = c.cust_id
          ->   JOIN line l ON l.slip_id = s.slip_id
          ->   JOIN vendor v ON v.vend_id = l.vend_id WHERE v.vend_name = 'blahblah'
          ->   GROUP BY c.cust_name
          ->   HAVING SUM(l.line_subtotal) > 49999
          ->   ORDER BY c.cust_name;

      id | select_type | table | type   | possible_keys                   | key           | key_len | ref                  | rows | Extra
      ---+-------------+-------+--------+---------------------------------+---------------+---------+----------------------+------+----------------------------------------------
       1 | SIMPLE      | v     | ref    | PRIMARY,idx_vend_name           | idx_vend_name | 12      | const                |    1 | Using where; Using temporary; Using filesort
       1 | SIMPLE      | l     | ref    | idx_vend_id                     | idx_vend_id   | 4       | csv_import.v.vend_id |  446 |
       1 | SIMPLE      | s     | eq_ref | PRIMARY,idx_cust_id,idx_slip_id | PRIMARY       | 4       | csv_import.l.slip_id |    1 |
       1 | SIMPLE      | c     | eq_ref | PRIMARY,cIndex                  | PRIMARY       | 4       | csv_import.s.cust_id |    1 |

      4 rows in set (0.04 sec)

    I'm a bit baffled as to why the query referenced by this EXPLAIN statement is still taking about a minute to execute. Isn't it true that this query only has to search through 449 rows? Anyone have any idea as to what could be slowing it down so much?
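
    A hedged guess at a remedy, mine rather than the thread's: the plan touches few rows, but each of the 446 line rows found through idx_vend_id still needs a lookup back into the table. A covering composite index (the name here is hypothetical) would let that step run from the index alone:

      CREATE INDEX idx_vend_slip_subtotal
          ON line (vend_id, slip_id, line_subtotal);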

  • Running an application from a USB device...

    - by Workshop Alex
    I'm working on a proof-of-concept application: a WCF service with a console host and a client, both on a single USB device. On the same device I will also have the client application, which will connect to this service. The service uses the Entity Framework to connect to the database, which in this POC will just return a list of names. If it works, it will be used for a larger project.

    Creating the client and service was easy, and this works well from USB. But getting the service to connect to the database isn't. I've found a site suggesting that I should modify machine.config, but that stops the XCopy deployment. This project cannot change any setting of the PC, so that suggestion is out. I cannot create a deployment setup either. The whole thing just needs to run from the USB disk.

    So, how do I get it to run? (The service just selects a list of names from the database, which it returns to the client. If this POC works, it will do far more complex things!)

  • Query crashes MS Access

    - by user284651
    THE TASK: I am in the process of migrating a DB from MS Access to Maximizer. In order to do this I must take 64 tables in MS Access and merge them into one. The output must be a TAB or CSV file, which will then be imported into Maximizer.

    THE PROBLEM: Access is unable to perform a query this complex, it seems, as it crashes any time I run the query.

    ALTERNATIVES: I have thought about a few alternatives and would like to do the least time-consuming one, while also taking advantage of any opportunities to learn something new:

    1. Export each table to CSV, import the files into SQLite, and then write a query there to do what Access fails to do (merge 64 tables); see the sketch after this list.
    2. Export each table to CSV and write a script that reads each one and merges the CSVs into a single CSV.
    3. Somehow connect to the MS Access DB (via an API) and write a script to pull data from each table and merge it into a CSV file.

    QUESTION: What do you recommend?
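
    If option 1 were chosen, the merge step in SQLite could look roughly like this sketch; the table names are hypothetical, and it assumes all 64 imported tables share the same column layout:

      -- repeat the UNION ALL arm for each of the 64 imported tables
      CREATE TABLE merged AS
      SELECT * FROM table_01
      UNION ALL
      SELECT * FROM table_02
      UNION ALL
      SELECT * FROM table_03;

      -- then export as CSV from the sqlite3 command-line shell:
      --   .headers on
      --   .mode csv
      --   .output merged.csv
      --   SELECT * FROM merged;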

  • LINQ - if condition

    - by ile
    In the code below, the commented part is what I need to solve. Is there a way to write such a query in LINQ? I need this because I will need sorting based on Status.

      var result = (
          from contact in db.Contacts
          join user in db.Users on contact.CreatedByUserID equals user.UserID
          join deal in db.Deals on contact.ContactID equals deal.ContactID into deals
          orderby contact.ContactID descending
          select new ContactListView
          {
              ContactID = contact.ContactID,
              FirstName = contact.FirstName,
              LastName = contact.LastName,
              Email = contact.Email,
              Deals = deals.Count(),
              EstValue = deals.Sum(e => e.EstValue),
              SalesAgent = user.FirstName + " " + user.LastName,
              Tasks = 7,
              // This is the critical part
              if (Deals == 0) Status = "Prospect";
              else Status = "Client";
              // End of critical part...
          })
          .OrderBy(filterQuery.OrderBy + " " + filterQuery.OrderType)
          .Where(filterQuery.Status);

  • Manual Linq to SQL entity framework mapping

    - by kprobst
    I've been playing with the O/R designer in VS, and I was wondering if someone could shed some light on this. I'm used to O/R mappers that are largely manual (homegrown, and e.g. NHibernate). I don't mind coding the entity classes myself, since they don't change all that often to begin with, and I have this irrational fear of designers and auto-generated code as it is.

    I have noticed that the generated entity classes contain a lot of boilerplate extensibility methods, e.g. On[Property]Changed() and so on, where [Property] is a mapped member of the class. These are placed in the setters of the property accessors. I assume it's OK if I don't include these when I hand-code the classes, correct? They would be nice if I needed some sort of interception pattern, but that's certainly not the case here.

    I guess I just need to know whether any of those methods are required by the entity framework to keep track of changes to the mapped types, in order for things to work when updating the database. Thanks!

  • Best .NET Solution for Frequently Changed Database

    - by sestocker
    I am currently architecting a small CRUD application. Their database is a huge mess and will be changing frequently over the course of the next 6 months to a year. What would you recommend for my data layer:

    1) ORM (if so, which one?)
    2) Linq2Sql
    3) Stored procedures
    4) Parameterized queries

    I really need a solution that will be dynamic enough (both fast and easy) that I can replace tables and add/delete columns frequently.

    Note: I do not have much experience with ORM (only a little SubSonic) and generally tend to use stored procedures, so maybe that would be the way to go. I would love to learn Linq2Sql or NHibernate if either would allow for the situation described above.
