Search Results

Search found 27339 results on 1094 pages for 'sql dmv'.

  • How to update a table with a list of values at a time?

    - by VJ
    I have: update NewLeaderBoards set MonthlyRank = (Select RowNumber() from LeaderBoards). I also tried it this way: (Select RowNumber() from LeaderBoards) as NewRanks update NewLeaderBoards set MonthlyRank = NewRanks. Neither works for me. Can anyone suggest how I can perform an update like this?
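
    One common approach (a sketch only: ROW_NUMBER() needs an OVER clause, and the join key and ordering column below, UserId and Score, are assumed names that are not in the question) is to rank the source rows in a CTE and join it back for the UPDATE:

        ;WITH Ranked AS (
            SELECT UserId,                                         -- assumed shared key between the two tables
                   ROW_NUMBER() OVER (ORDER BY Score DESC) AS NewRank
            FROM   LeaderBoards
        )
        UPDATE n
        SET    n.MonthlyRank = r.NewRank
        FROM   NewLeaderBoards AS n
        JOIN   Ranked          AS r ON r.UserId = n.UserId;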

    Read the article

  • What is the correct way to increment a field making up part of a composite key

    - by Tr1stan
    I have a bunch of tables whose primary key is made up of the foreign keys of other tables (composite key). For example, the attributes (as a very cut-down version) might look like this:

        A[aPK, SomeFields]  1:M  B[bPK, aFK, SomeFields]  1:M  C[cPK, bFK, aFK, SomeFields]

    As data this could look like:

        A[aPK, SomeFields]:
        1, Foo
        2, Bar

        B[bPK, aFK, SomeFields]:
        1, 1, FooData1
        2, 1, FooData2
        1, 2, BarData1
        2, 2, BarData2

        C[cPK, bFK, aFK, SomeFields]:
        1, 1, 1, FooData1More
        2, 1, 1, FooData1More
        1, 2, 1, FooData2More
        2, 2, 1, FooData2More
        1, 1, 2, BarData1More
        2, 1, 2, BarData1More
        1, 2, 2, BarData2More
        2, 2, 2, BarData2More

    I've got this running in an MSSQL DBMS and I'm looking for the best way to increment the leftmost column in each table when a new tuple is added to it. I can't use the auto-increment Identity Specification option, as that has no idea it is part of a composite key. I also don't want to use an aggregate such as MAX(field)+1, as this will have adverse effects with multiple users inputting data, rolling back, etc. There might, however, be a nice trigger-based option here, but I'm not sure. This must be a common issue, so I'm hoping someone has a lovely solution. As a side note which may or may not affect the answer, I'm using Entity Framework 1.0 as my ORM, within a C# MVC application.
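
    One way this is often handled in SQL Server (a sketch only, and not necessarily a fit here since the poster ruled out plain MAX()+1): keep the MAX()+1 calculation but serialize concurrent inserts for the same parent with locking hints inside a single statement, so two sessions cannot read the same maximum. @aFK and @SomeFields below are assumed parameters.

        -- Hand out the next bPK per parent row atomically; UPDLOCK/HOLDLOCK make
        -- concurrent inserts for the same aFK queue up instead of racing.
        INSERT INTO B (bPK, aFK, SomeFields)
        SELECT COALESCE(MAX(bPK), 0) + 1, @aFK, @SomeFields
        FROM   B WITH (UPDLOCK, HOLDLOCK)
        WHERE  aFK = @aFK;

    The same pattern could live inside an INSTEAD OF INSERT trigger if the application should not need to know about it.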

    Read the article

  • MySQL Database Design with Internationalization

    - by Some name
    Hello, I'm going to start work on a medium-sized application and I'm planning its DB design. One thing I'm not sure about is this: I will have many tables which will need internationalization, such as membership_options, gender_options, language_options, etc. Each of these tables will share common i18n fields, like title, alternative_title, short_description, description. In your opinion, which is the best way to do it? Have an i18n table with the same fields for each of the tables that will need them, or do something like this?

        Membership table                 Gender table
        ----------------                 ------------
        id | created_at                  id | created_at
        1  | 22.03.2001                  1  | 14.08.2002
        2  | 22.03.2001                  2  | 14.08.2002

        General translation table
        -------------------------
        record_id | table_name | string_name | alternative_title | ... | id_language
        1         | membership | regular     | null              | ... | 1 (English)
        1         | membership | normale     | null              | ... | 2 (Italian)
        1         | gender     | man         | null              | ... | 1 (English)
        1         | gender     | uomo        | null              | ... | 2 (Italian)

    This would avoid me repeating something like:

        membership_translation table
        ----------------------------
        membership_id | name    | alternative_title | id_lang
        1             | regular | null              | 1
        1             | normale | null              | 2

        gender_translation table
        ------------------------
        gender_id | name | alternative_title | id_lang
        1         | man  | null              | 1
        1         | uomo | null              | 2

    and so on, so I would probably reduce the number of DB tables, but I'm not sure about performance. I'm not much of a DB designer, so please let me know.
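
    For reference, a minimal DDL sketch of the single shared translation table described above (MySQL syntax; the column sizes are assumptions):

        CREATE TABLE general_translation (
            record_id   INT UNSIGNED NOT NULL,     -- id of the row in the source table
            table_name  VARCHAR(64)  NOT NULL,     -- e.g. 'membership', 'gender'
            id_language INT UNSIGNED NOT NULL,
            title             VARCHAR(255) NULL,
            alternative_title VARCHAR(255) NULL,
            short_description TEXT NULL,
            description       TEXT NULL,
            PRIMARY KEY (record_id, table_name, id_language)
        ) ENGINE=InnoDB;

    With the composite primary key, fetching the translations for one row in one language is a single index seek, so the main cost of the shared table is the extra join rather than the lookup itself.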

    Read the article

  • Generate dynamic UPDATE command from Expression<Func<T, T>>

    - by Rui Jarimba
    I'm trying to generate an UPDATE command based on Expression trees (for a batch update). Assuming the following UPDATE command: UPDATE Product SET ProductTypeId = 123, ProcessAttempts = ProcessAttempts + 1 For an expression like this: Expression<Func<Product, Product>> updateExpression = entity => new Product() { ProductTypeId = 123, ProcessAttempts = entity.ProcessAttempts + 1 }; How can I generate the SET part of the command? SET ProductTypeId = 123, ProcessAttempts = ProcessAttempts + 1

    Read the article

  • Select where and where not

    - by Simon
    I have a table containing lessons that I called "cours" (French), with several courses inside, and I have linked them to students with a junction table between them to record whether they attend the lessons or not. I would like to return both the rows the SELECT matches and the rows it does NOT match. So, if one student follows 3 courses out of 5, I would like to return the 3 courses that he follows and the 2 courses that he doesn't follow. Is there a way to do it?
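
    A sketch of one way to do it with a LEFT JOIN; apart from the cours table, every name here (cours_etudiant, cours_id, etudiant_id, id, titre, @student_id) is an assumption, since the question doesn't give the schema:

        -- Returns every course once, flagged 1 if the student follows it and 0 if not.
        SELECT c.id,
               c.titre,
               CASE WHEN ce.etudiant_id IS NULL THEN 0 ELSE 1 END AS suit_le_cours
        FROM   cours AS c
        LEFT JOIN cours_etudiant AS ce
               ON  ce.cours_id   = c.id
               AND ce.etudiant_id = @student_id;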

    Read the article

  • How can I make a multi-search SPROC/UDF by passing a table-valued argument to it?

    - by Shimmy
    I actually want to achieve the following. This is the table argument I want to pass to the server:

        <items>
          <item category="cats">1</item>
          <item category="dogs">2</item>
        </items>

        SELECT * FROM Item
        WHERE Item.Category = <one of the items in the XML list>
          AND Item.ReferenceId = <the corresponding value of that item xml element>
        -- Or in other words:
        SELECT FROM Items WHERE Item IN XML according to the specified columns.

    Am I clear enough? I don't mind doing it in a way other than XML. What I need is to select rows that match an array of pairs of values for two of the table's columns.
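
    A sketch of the XML route in T-SQL, shredding the document with nodes()/value() and joining on both columns (the nvarchar(50) type for Category is an assumption, and the inline DECLARE initializer is SQL Server 2008 syntax):

        DECLARE @criteria xml = N'<items>
          <item category="cats">1</item>
          <item category="dogs">2</item>
        </items>';

        SELECT i.*
        FROM   @criteria.nodes('/items/item') AS x(n)
        JOIN   Item AS i
               ON  i.Category    = x.n.value('@category', 'nvarchar(50)')
               AND i.ReferenceId = x.n.value('.', 'int');

    On SQL Server 2008+, a table-valued parameter (a user-defined table type with Category and ReferenceId columns) would express the same thing without any XML.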

    Read the article

  • Get column of a MySQL entry

    - by Xelluloid
    Is there a way to get the name of the column a database entry belongs to? Say I have three columns named col1, col2 and col3. Now, for every row, I want to select the name of the column holding the maximum value, something like: Select name_of_column(max(col1, col2, col3)). I know that I can look up a column name by its ordinal position in the information_schema.COLUMNS table, but how do I get the ordinal position of a database entry within a table?
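
    A sketch of one way to do this per row in MySQL with GREATEST and CASE (it assumes the three columns are comparable and not NULL; my_table is a placeholder name):

        SELECT CASE GREATEST(col1, col2, col3)
                 WHEN col1 THEN 'col1'
                 WHEN col2 THEN 'col2'
                 ELSE           'col3'
               END AS name_of_max_column
        FROM   my_table;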

    Read the article

  • How to solve the "Digg" problem in MongoDB

    - by user193116
    A while back, a Digg developer posted this blog, "http://about.digg.com/blog/looking-future-cassandra", where he described one of the issues that were not optimally solved in MySQL. This was cited as one of the reasons for their move to Cassandra. I have been playing with MongoDB and I would like to understand how to implement the MongoDB collections for this problem. From the article, the schema for this information in MySQL is:

        CREATE TABLE Diggs (
          id      INT(11),
          itemid  INT(11),
          userid  INT(11),
          digdate DATETIME,
          PRIMARY KEY (id),
          KEY user (userid),
          KEY item (itemid)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8;

        CREATE TABLE Friends (
          id           INT(10) AUTO_INCREMENT,
          userid       INT(10),
          username     VARCHAR(15),
          friendid     INT(10),
          friendname   VARCHAR(15),
          mutual       TINYINT(1),
          date_created DATETIME,
          PRIMARY KEY (id),
          UNIQUE KEY Friend_unique (userid, friendid),
          KEY Friend_friend (friendid)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8;

    This problem is ubiquitous in social networking implementations: people befriend a lot of people, and they in turn digg a lot of things. Quickly showing a user what his or her friends are up to is critical. I understand that several blogs have since provided a pure RDBMS solution with indexes for this issue; however, I am curious how this could be solved in MongoDB.

    Read the article

  • Reuse select query in a procedure in Oracle

    - by Jer
    How would I store the result of a select statement so I can reuse the results with an IN clause for other queries? Here's some pseudo code:

        declare
          ids <type?>;
        begin
          ids := select id from table_with_ids;
          select * from table1 where id in (ids);
          select * from table2 where id in (ids);
        end;

    ... or will the optimizer do this for me if I simply put the sub-query in both select statements?
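
    One way this pseudo code could be made concrete (a sketch; it assumes you are allowed to create a schema-level collection type and that id is a NUMBER):

        CREATE TYPE num_list AS TABLE OF NUMBER;
        /
        DECLARE
          ids num_list;
        BEGIN
          SELECT id BULK COLLECT INTO ids FROM table_with_ids;

          FOR r1 IN (SELECT * FROM table1
                     WHERE id IN (SELECT column_value FROM TABLE(ids))) LOOP
            NULL;  -- process table1 rows here
          END LOOP;

          FOR r2 IN (SELECT * FROM table2
                     WHERE id IN (SELECT column_value FROM TABLE(ids))) LOOP
            NULL;  -- process table2 rows here
          END LOOP;
        END;
        /

    Whether this beats simply repeating the sub-query depends on how expensive that sub-query is; the optimizer does not automatically share its result between two separate statements.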

    Read the article

  • Improving performance in this query

    - by Luiz Gustavo F. Gama
    I have 3 tables with user logins: sis_login = administrators, tb_rb_estrutura = coordinators, tb_usuario = clients. I created a VIEW to unite all these users, separating them by level, as follows:

        create view `login_names` as
        select `n1`.`cod_login`   as `id`, '1' as `level`, `n1`.`nom_user`        as `name` from `dados`.`sis_login` `n1`
        union all
        select `n2`.`id`          as `id`, '2' as `level`, `n2`.`nom_funcionario` as `name` from `tb_rb_estrutura` `n2`
        union all
        select `n3`.`cod_usuario` as `id`, '3' as `level`, `n3`.`dsc_nome`        as `name` from `tb_usuario` `n3`;

    So up to three repeated ids can occur for different users, which is why I separated them by level. This VIEW exists only to return a user name for a given id and level. Considering there are about 500,000 registered users, this view takes about 1 second to load. That is too much, and it becomes a real problem when I need to return the latest posts on the forums of my website. The forum tables return the user id and level, and then the name is looked up in this VIEW. I have 18 forums registered. When I run the query, it takes one second for each forum = 18 seconds. OMG. This page loads every time somebody enters my website. This is my query:

        select `x`.`forum_id`, `x`.`topic_id`, `l`.`nome`
        from ( select `t`.`forum_id`, `t`.`topic_id`, `t`.`data`, `t`.`user_id`, `t`.`user_level` from `tb_forum_topics` `t`
               union all
               select `a`.`forum_id`, `a`.`topic_id`, `a`.`data`, `a`.`user_id`, `a`.`user_level` from `tb_forum_answers` `a` ) `x`
        left outer join `login_names` `l`
               on `l`.`id` = `x`.`user_id` and `l`.`level` = `x`.`user_level`
        group by `x`.`forum_id` asc

    USING EXPLAIN:

        id    select_type   table       type  possible_keys  key   key_len  ref   rows    Extra
        1     PRIMARY       <derived2>  ALL   NULL           NULL  NULL     NULL  6       Using temporary; Using filesort
        1     PRIMARY       <derived4>  ALL   NULL           NULL  NULL     NULL  530415
        4     DERIVED       n1          ALL   NULL           NULL  NULL     NULL  114
        5     UNION         n2          ALL   NULL           NULL  NULL     NULL  2
        6     UNION         n3          ALL   NULL           NULL  NULL     NULL  530299
        NULL  UNION RESULT              ALL   NULL           NULL  NULL     NULL  NULL
        2     DERIVED       t           ALL   NULL           NULL  NULL     NULL  3
        3     UNION         r           ALL   NULL           NULL  NULL     NULL  3
        NULL  UNION RESULT              ALL   NULL           NULL  NULL     NULL  NULL

    Can somebody help me or give a suggestion?
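
    A sketch of one alternative worth trying, using only the table and column names already shown in the question (and assuming cod_login, id and cod_usuario are the primary keys of their tables, as the view suggests): skip the UNION ALL view for this page and join each login table directly, so MySQL can use each table's primary key for the lookup instead of materializing the whole 500,000-row view.

        select x.forum_id,
               x.topic_id,
               coalesce(n1.nom_user, n2.nom_funcionario, n3.dsc_nome) as name
        from ( select forum_id, topic_id, user_id, user_level from tb_forum_topics
               union all
               select forum_id, topic_id, user_id, user_level from tb_forum_answers ) as x
        left join `dados`.`sis_login` n1 on x.user_level = 1 and n1.cod_login   = x.user_id
        left join tb_rb_estrutura     n2 on x.user_level = 2 and n2.id          = x.user_id
        left join tb_usuario          n3 on x.user_level = 3 and n3.cod_usuario = x.user_id
        group by x.forum_id;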

    Read the article

  • How to avoid sub-queries to gain performance

    - by chun
    Hi, I have a reporting query with 2 long sub-queries:

        SELECT r1.code_centre, r1.libelle_centre, r1.id_equipe, r1.equipe, r1.id_file_attente, r1.libelle_file_attente,
               r1.id_date, r1.tranche, r1.id_granularite_de_periode, r1.granularite, r1.ContactsTraites, r1.ContactsenParcage,
               r1.ContactsenComm, r1.DureeTraitementContacts, r1.DureeComm, r1.DureeParcage,
               r2.AgentsConnectes, r2.DureeConnexion, r2.DureeTraitementAgents, r2.DureePostTraitement
        FROM (
            SELECT cc.id_centre_contact, cc.code_centre, cc.libelle_centre, a.id_equipe, a.equipe, a.id_file_attente,
                   f.libelle_file_attente, a.id_date, g.tranche, g.id_granularite_de_periode, g.granularite,
                   sum(Nb_Contacts_Traites) as ContactsTraites,
                   sum(Nb_Contacts_en_Parcage) as ContactsenParcage,
                   sum(Nb_Contacts_en_Communication) as ContactsenComm,
                   sum(Duree_Traitement/1000) as DureeTraitementContacts,
                   sum(Duree_Communication / 1000 + Duree_Conference / 1000 + Duree_Com_Interagent / 1000) as DureeComm,
                   sum(Duree_Parcage/1000) as DureeParcage
            FROM agr_synthese_activite_media_fa_agent a, centre_contact cc, direction_contact dc,
                 granularite_de_periode g, media m, file_attente f
            WHERE m.id_media = a.id_media
              AND cc.id_centre_contact = a.id_centre_contact
              AND a.id_direction_contact = dc.id_direction_contact
              AND dc.direction_contact = 'INCOMING'
              AND a.id_file_attente = f.id_file_attente
              AND m.media = 'PHONE'
              AND ( ( g.valeur_min = date_format(a.id_date,'%d/%m') and g.granularite = 'Jour')
                 or ( g.granularite = 'Heure' and a.id_th_heure = g.id_granularite_de_periode) )
            GROUP BY cc.id_centre_contact, a.id_equipe, a.id_file_attente, a.id_date, g.tranche, g.id_granularite_de_periode
        ) r1,
        (
            (SELECT cc.id_centre_contact, cc.code_centre, cc.libelle_centre, a.id_equipe, a.equipe, a.id_date,
                    g.tranche, g.id_granularite_de_periode, g.granularite,
                    count(distinct a.id_agent) as AgentsConnectes,
                    sum(Duree_Connexion / 1000) as DureeConnexion,
                    sum(Duree_en_Traitement / 1000) as DureeTraitementAgents,
                    sum(Duree_en_PostTraitement / 1000) as DureePostTraitement
             FROM activite_agent a, centre_contact cc, granularite_de_periode g
             WHERE ( g.valeur_min = date_format(a.id_date,'%d/%m') and g.granularite = 'Jour')
               AND cc.id_centre_contact = a.id_centre_contact
             GROUP BY cc.id_centre_contact, a.id_equipe, a.id_date, g.tranche, g.id_granularite_de_periode)
            UNION
            (SELECT cc.id_centre_contact, cc.code_centre, cc.libelle_centre, a.id_equipe, a.equipe, a.id_date,
                    g.tranche, g.id_granularite_de_periode, g.granularite,
                    count(distinct a.id_agent) as AgentsConnectes,
                    sum(Duree_Connexion / 1000) as DureeConnexion,
                    sum(Duree_en_Traitement / 1000) as DureeTraitementAgents,
                    sum(Duree_en_PostTraitement / 1000) as DureePostTraitement
             FROM activite_agent a, centre_contact cc, granularite_de_periode g
             WHERE ( g.granularite = 'Heure' AND a.id_th_heure = g.id_granularite_de_periode)
               AND cc.id_centre_contact = a.id_centre_contact
             GROUP BY cc.id_centre_contact, a.id_equipe, a.id_date, g.tranche, g.id_granularite_de_periode)
        ) r2
        WHERE r1.id_centre_contact = r2.id_centre_contact
          AND r1.id_equipe = r2.id_equipe
          AND r1.id_date = r2.id_date
          AND r1.tranche = r2.tranche
          AND r1.id_granularite_de_periode = r2.id_granularite_de_periode
        GROUP BY r1.id_centre_contact, r1.id_equipe, r1.id_file_attente, r1.id_date, r1.tranche, r1.id_granularite_de_periode
        ORDER BY r1.code_centre, r1.libelle_centre, r1.equipe, r1.libelle_file_attente, r1.id_date, r1.id_granularite_de_periode, r1.tranche

    The EXPLAIN shows:

        | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
        '1', 'PRIMARY', '<derived3>', 'ALL', NULL, NULL, NULL, NULL, '2520', 'Using temporary; Using filesort'
        '1', 'PRIMARY', '<derived2>', 'ALL', NULL, NULL, NULL, NULL, '4378', 'Using where; Using join buffer'
        '3', 'DERIVED', 'a', 'ALL', 'fk_Activite_Agent_centre_contact', NULL, NULL, NULL, '83433', 'Using temporary; Using filesort'
        '3', 'DERIVED', 'g', 'ref', 'Index_granularite,Index_Valeur_min', 'Index_Valeur_min', '23', 'func', '1', 'Using where'
        '3', 'DERIVED', 'cc', 'ALL', 'PRIMARY', NULL, NULL, NULL, '6', 'Using where; Using join buffer'
        '4', 'UNION', 'g', 'ref', 'PRIMARY,Index_granularite', 'Index_granularite', '23', '', '24', 'Using where; Using temporary; Using filesort'
        '4', 'UNION', 'a', 'ref', 'fk_Activite_Agent_centre_contact,fk_activite_agent_TH_heure', 'fk_activite_agent_TH_heure', '5', 'reporting_acd.g.Id_Granularite_de_periode', '2979', 'Using where'
        '4', 'UNION', 'cc', 'ALL', 'PRIMARY', NULL, NULL, NULL, '6', 'Using where; Using join buffer'
        NULL, 'UNION RESULT', '<union3,4>', 'ALL', NULL, NULL, NULL, NULL, NULL, ''
        '2', 'DERIVED', 'g', 'range', 'PRIMARY,Index_granularite,Index_Valeur_min', 'Index_granularite', '23', NULL, '389', 'Using where; Using temporary; Using filesort'
        '2', 'DERIVED', 'a', 'ALL', 'fk_agr_synthese_activite_media_fa_agent_centre_contact,fk_agr_synthese_activite_media_fa_agent_direction_contact,fk_agr_synthese_activite_media_fa_agent_file_attente,fk_agr_synthese_activite_media_fa_agent_media,fk_agr_synthese_activite_media_fa_agent_th_heure', NULL, NULL, NULL, '20903', 'Using where; Using join buffer'
        '2', 'DERIVED', 'cc', 'eq_ref', 'PRIMARY', 'PRIMARY', '4', 'reporting_acd.a.Id_Centre_Contact', '1', ''
        '2', 'DERIVED', 'f', 'eq_ref', 'PRIMARY', 'PRIMARY', '4', 'reporting_acd.a.Id_File_Attente', '1', ''
        '2', 'DERIVED', 'dc', 'eq_ref', 'PRIMARY', 'PRIMARY', '4', 'reporting_acd.a.Id_Direction_Contact', '1', 'Using where'
        '2', 'DERIVED', 'm', 'eq_ref', 'PRIMARY', 'PRIMARY', '4', 'reporting_acd.a.Id_Media', '1', 'Using where'

    I don't understand it very clearly, but I think the problem is that it does full scans. I changed all the sub-queries to views (CREATE VIEW AS the sub-query) and the result is the same. Thanks for any advice.

    Read the article

  • Removing "Using temporary; Using filesort" from this MySQL select+join+group by query

    - by claytontstanley
    I have the following query:

        select t.Chunk as LeftChunk,
               t.ChunkHash as LeftChunkHash,
               q.Chunk as RightChunk,
               q.ChunkHash as RightChunkHash,
               count(t.ChunkHash) as ChunkCount
        from chunksubset as t
        join chunksubset as q on t.ID = q.ID
        group by LeftChunkHash, RightChunkHash

    And the following explain table:

        id  select_type  table    type    possible_keys                key          key_len  ref                      rows    Extra
        1   SIMPLE       subsets  ref     PRIMARY,IDIndex,SubsetIndex  SubsetIndex  767      const                    522014  Using where; Using temporary; Using filesort
        1   SIMPLE       subsets  eq_ref  PRIMARY,IDIndex,SubsetIndex  PRIMARY      771      sotero.subsets.Id,const  1       Using where; Using index
        1   SIMPLE       c        ref     IDIndex                      IDIndex      4        sotero.subsets.Id        12      Using where
        1   SIMPLE       c        ref     IDIndex                      IDIndex      4        sotero.subsets.Id        12

    Note the "Using temporary; Using filesort". When this query is run, I quickly run out of RAM (presumably because of the temp table), then the HDD kicks in and the query slows to a halt. I thought it might be an index issue, so I started adding a few that sort of made sense:

        Table   Non_unique  Key_name                   Seq_in_index  Column_name  Collation  Cardinality  Sub_part  Packed  Null  Index_type
        chunks  0           PRIMARY                    1             ChunkId      A          17796190     NULL      NULL          BTREE
        chunks  1           ChunkHashIndex             1             ChunkHash    A          243783       NULL      NULL          BTREE
        chunks  1           IDIndex                    1             Id           A          1483015      NULL      NULL          BTREE
        chunks  1           ChunkIndex                 1             Chunk        A          243783       NULL      NULL          BTREE
        chunks  1           ChunkTypeIndex             1             ChunkType    A          2            NULL      NULL          BTREE
        chunks  1           chunkHashByChunkIDIndex    1             ChunkHash    A          243783       NULL      NULL          BTREE
        chunks  1           chunkHashByChunkIDIndex    2             ChunkId      A          17796190     NULL      NULL          BTREE
        chunks  1           chunkHashByChunkTypeIndex  1             ChunkHash    A          243783       NULL      NULL          BTREE
        chunks  1           chunkHashByChunkTypeIndex  2             ChunkType    A          261708       NULL      NULL          BTREE
        chunks  1           chunkHashByIDIndex         1             ChunkHash    A          243783       NULL      NULL          BTREE
        chunks  1           chunkHashByIDIndex         2             Id           A          17796190     NULL      NULL          BTREE

    But it is still using the temporary table. The DB engine is MyISAM. How can I get rid of the "Using temporary; Using filesort" in this query? Just changing to InnoDB without explaining the underlying cause is not a particularly satisfying answer. Besides, if the solution is just to add the proper index, that's much easier than migrating to another DB engine.

    Read the article

  • SSAS dimension source table changed - how to propagate changes to analysis server?

    - by Phil
    Hi, Sorry if the question isn't phrased very well but I'm new to SSAS and don't know the correct terms. I have changed the name of a table and its columns. I am using said table as a dimension for my cube, so now the cube won't process. Presumably I need to make updates in the analysis server to reflect changes to the source database? I have no idea where to start - any help gratefully received. Thanks Phil

    Read the article

  • Is there a tool to see the difference between two database tables in MSSQL?

    - by reinier
    What is a good tool to see the differences between 2 tables (or, even better, the datasets returned by 2 queries)? EDIT: I'm not interested in schema changes; just assume that the schemas are the same. Background as to why: I'm porting some legacy code which can fill a database with some pre-calculated data. The easiest way to see if I got everything right is to compare the output of the old program with that of the new one. I was thinking that some kind of 'diff' tool for databases would be great for this.
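
    If a plain query is acceptable instead of a dedicated tool, a quick sketch using EXCEPT works whenever both result sets have the same columns (OldResults and NewResults are placeholder names for the two tables or query outputs):

        -- Rows present in the old output but not the new one, and vice versa.
        SELECT 'only in old' AS side, d.* FROM (SELECT * FROM OldResults EXCEPT SELECT * FROM NewResults) AS d
        UNION ALL
        SELECT 'only in new' AS side, d.* FROM (SELECT * FROM NewResults EXCEPT SELECT * FROM OldResults) AS d;

    An empty result means the two sets match. SQL Server also ships a command-line utility, tablediff.exe, intended for exactly this kind of row-level comparison between two tables.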

    Read the article

  • Getting Stored Procedure Information from .Net

    - by Ben
    Hi, I am trying to get some data about a stored procedure (or function) back from a database using .NET. The first thing I need to be able to do is get the stored proc from the database and turn it into string format. The information I need is: the returned set of columns, the tables used within the SP, and the stored procedures called from the SP. The only way of doing this that I can think of at the moment is parsing the text and looking for keyword matches. Is there a better way of doing this? Any ideas? Thanks.
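
    A sketch of the catalog-view route, which avoids hand-parsing for the dependency part (SQL Server 2008 or later; dbo.MyProc is a placeholder name, and these queries can be issued from .NET like any other command):

        -- Tables and procedures the procedure references:
        SELECT referenced_schema_name, referenced_entity_name, referenced_minor_name
        FROM   sys.dm_sql_referenced_entities('dbo.MyProc', 'OBJECT');

        -- The procedure's source text, if you still want to inspect it yourself:
        SELECT OBJECT_DEFINITION(OBJECT_ID('dbo.MyProc')) AS proc_text;

    Getting the returned columns is harder from metadata alone on older versions; one pragmatic approach is to execute the procedure with SET FMTONLY ON and read the schema of the (empty) result set it returns.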

    Read the article

  • Retrieving top 10 rows and summing all others in row 11

    - by Mario
    Hello all, I have the following query that retrieves the number of users per country:

        SELECT C.CountryID AS CountryID, C.CountryName AS Country, COUNT(FirstName) AS Origin
        FROM Users AS U
        INNER JOIN Country AS C ON C.CountryID = U.CountryOfOrgin
        GROUP BY C.CountryName, C.CountryID

    What I need is a way to get the top 10 and then sum all other users into a single row. I know how to get the top 10, but I'm stuck on getting the remaining countries into a single row. Is there a simple way to do it? For example, if the above query returns 17 records, the top ten are displayed and a sum of the users from the 7 remaining countries should appear on row 11. On that row 11 the CountryID would be 0 and the CountryName "Others". Thanks for your help!
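
    One possible shape for this (a sketch in SQL Server syntax, reusing the tables and columns from the question; the ranking rule, most users first, is an assumption):

        WITH counts AS (
            SELECT C.CountryID, C.CountryName AS Country, COUNT(U.FirstName) AS Origin,
                   ROW_NUMBER() OVER (ORDER BY COUNT(U.FirstName) DESC) AS rn
            FROM   Users AS U
            INNER JOIN Country AS C ON C.CountryID = U.CountryOfOrgin
            GROUP BY C.CountryID, C.CountryName
        )
        SELECT CountryID, Country, Origin
        FROM   counts
        WHERE  rn <= 10
        UNION ALL
        SELECT 0, 'Others', SUM(Origin)   -- everything beyond the top 10, rolled into one row
        FROM   counts
        WHERE  rn > 10
        HAVING COUNT(*) > 0;              -- skip the row when there are 10 countries or fewer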

    Read the article

  • Timeout LinqToSql inserting millions of records

    - by Bas
    I'm inserting approximately 3 million records into a database using this solution. Eventually, when the application has been inserting records for a while (my last run lasted around 4 hours), it fails with the following SqlException: "Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding." What's the best way to handle this exception? Is there a way to prevent it from happening, or should I catch the exception? Thanks in advance!

    Read the article

  • Customizing Mail Message in SSIS Event Handler

    - by Eric Ness
    I want to add an email notification to an SSIS 2005 package event handler. I've added a Send Mail task to the event handler, and I'd like to customize the email body to include things like the error description. I've tried including @[System::ErrorDescription] in the MessageSource field, but the mail message doesn't include the value of ErrorDescription, only the name of the variable.

    Read the article

  • Connecting to a secure database from a website host

    - by jim
    Hello all, I've got a requirement to both read and write data via a .NET web service to a SQL Server database that's on a private network. This database is currently accessed via a VPN connection by remote client software (on standard desktop machines) to get the latest product prices and to upload product stock sales. I've been tasked with finding a way to centralise this access through a web service that the clients then call, rather than having them use the VPN route to connect directly to the database. My question concerns my .NET service's relationship to the SQL Server database: what are the options for connecting to a private network VPN from a domain host, so that the web service can both read and write data to the database? For now, I'm not too concerned about client connectivity and security (though I appreciate that this will have to be worked out too); I'm really just interested in discovering the options available to let my .NET web service connect to the private network in as painless and transparent a way as possible. Switching the database onto public hosting is not an option, so I have to work with the scenario described above for now, unless there's a compelling rationale to do otherwise. Thanks all... jim

    Read the article

  • Regarding some Update Stored procedure

    - by Serenity
    I have two tables as follows:

        Table1:
        -------------------------------------------
        PageID | Content | TitleID(FK) | LanguageID
        -------------------------------------------
        1      | abc     | 101         | 1
        2      | xyz     | 102         | 1
        -------------------------------------------

        Table2:
        -----------------------------
        TitleID | Title  | LanguageID
        -----------------------------
        101     | Title1 | 1
        102     | Title2 | 1
        -----------------------------

    I don't want to add duplicates to my Table1 (the content table); that is, there can be no two pages with the same title. What check do I need to add to my insert/update stored procedure? How do I make sure duplicates are never added? I have tried the following:

        CREATE PROC InsertUpdatePageContent
        (
            @PageID int,
            @Content nvarchar(2000),
            @TitleID int
        )
        AS
        BEGIN
            IF (@PageID = -1)
            BEGIN
                IF (NOT EXISTS (SELECT TitleID FROM Table1 WHERE LANGUAGEID = @LANGUAGEID))
                BEGIN
                    INSERT INTO Table1 (Content, TitleID) VALUES (@Content, @TitleID)
                END
            END
            ELSE
            BEGIN
                IF (NOT EXISTS (SELECT TitleID FROM Table1 WHERE LANGUAGEID = @LANGUAGEID))
                BEGIN
                    UPDATE Table1 SET Content = @Content, TitleID = @TitleID WHERE PAGEID = @PAGEID
                END
            END
        END

    What is happening now is that it inserts new records fine and won't allow duplicates to be added, but the update is giving me a problem. On my .aspx page I have a drop-down list control bound to a data source that returns Table2 (the title table), and a text box in which the user types the page content to be stored. When I update, let's say I have a row in Table1 as shown above with PageID = 1. If I update this row without changing the title in the drop-down, only the content in the text box, it does not update the record, and when the stored procedure's update query does not execute, the page displays a label that says "Page with this title exists already." So whenever I update an existing record, that label is displayed on screen. How do I change that IF condition in the update part of my stored procedure? EDIT: @gbn :: Will that IF condition work in the case of an update? I mean, let's say I am updating the page with TitleID = 1 and I only changed its content; when I update, it's going to evaluate that IF condition and still won't update, because TitleID = 1 already exists! It will only update if TitleID = 1 is not there in Table1. Isn't it? I guess I am getting confused. Please answer. Thanks.
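
    A sketch of the check the question is after (it treats "duplicate" as "another page already using the same TitleID"; the @LANGUAGEID reference from the original procedure is left out because it is not one of the declared parameters):

        -- Insert branch: block only if some page already uses this title.
        IF NOT EXISTS (SELECT 1 FROM Table1 WHERE TitleID = @TitleID)
            INSERT INTO Table1 (Content, TitleID) VALUES (@Content, @TitleID);

        -- Update branch: block only if a *different* page uses this title,
        -- so re-saving the same page with its own title still works.
        IF NOT EXISTS (SELECT 1 FROM Table1 WHERE TitleID = @TitleID AND PageID <> @PageID)
            UPDATE Table1 SET Content = @Content, TitleID = @TitleID WHERE PageID = @PageID;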

    Read the article

  • MySQL - Structure for Permissions to Objects

    - by Kerry
    What would be an ideal structure for user permissions on objects? I've seen many related posts about general permissions, or which sections a user can access, which consist of users, userGroups and userGroupRelations tables or something of that nature. In my system there are many different objects that can get created, and each one has to be able to be turned on or off. For instance, take a password manager that has groups and sub-groups: Group 1, Group 2, Group 3, Group 4, Group 5, Group 6, Group 7, Group 8, Group 9, Group 10. Each group can contain a set of passwords. A user can be given read, write, edit and delete permissions on any group. More groups can be created at any point in time. If someone has permission on a group, I should be able to either grant him permissions on all sub-groups OR restrict it to just that group. My current thought is to have a users table and then a permissions table with columns like:

        permission_id (int)      PRIMARY KEY
        user_id       (int)      INDEX
        object_id     (int)      INDEX
        type          (varchar)  INDEX
        read          (bool)
        write         (bool)
        edit          (bool)
        delete        (bool)

    This has worked in the past, but the new system I'm building needs to be able to scale rapidly, and I am unsure whether this is the best structure. It also makes giving someone permissions on all sub-groups of a group more difficult. So, as a question: should I use the above structure, or can someone point me in the direction of a better one?
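
    For reference, a DDL sketch of that permissions table in MySQL (column names are adjusted slightly because READ, WRITE and DELETE are reserved words; everything else follows the column list in the question, and the sizes are assumptions):

        CREATE TABLE object_permissions (
            permission_id INT UNSIGNED NOT NULL AUTO_INCREMENT,
            user_id       INT UNSIGNED NOT NULL,
            object_id     INT UNSIGNED NOT NULL,
            object_type   VARCHAR(32)  NOT NULL,
            can_read      TINYINT(1)   NOT NULL DEFAULT 0,
            can_write     TINYINT(1)   NOT NULL DEFAULT 0,
            can_edit      TINYINT(1)   NOT NULL DEFAULT 0,
            can_delete    TINYINT(1)   NOT NULL DEFAULT 0,
            PRIMARY KEY (permission_id),
            UNIQUE KEY uq_user_object (user_id, object_type, object_id),
            KEY ix_object (object_type, object_id)
        ) ENGINE=InnoDB;

    The unique key keeps one row per user/object pair and doubles as the lookup index for permission checks; cascading a grant down to sub-groups can then be an INSERT ... SELECT over the group hierarchy rather than a schema change.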

    Read the article

  • Extract multiple values from one column in MySql

    - by Neil
    I've noticed that MySQL has extensive search capabilities, allowing both wildcards and regular expressions. However, I'm somewhat in a bind, since I'm trying to extract multiple values from a single string in my select query. For example, if I had the text "<span>Test</span> this <span>query</span>", I could perhaps use regular expressions to find and extract the values "Test" or "query", but in my case I have potentially n such strings to extract. And since I can't define n columns in my select statement, I'm stuck. Is there any way I could get a list of values (ideally separated by commas) of all the text contained within span tags? In other words, if I ran this query, I would get "Test,query" as the value of spanlist: select <insert logic here> as spanlist from HtmlPages ...

    Read the article

  • Accessing non-related entities in LinqToSql entity classes

    - by Chris Johnson
    In LinqToSql, if I want to access a non-related entity in an entity partial class, how do I do this without creating a new DataContext? Here's the scenario: I have the tables Client, IssueType and ClientIssueType. A Client may specify a list of IssueTypes if they do not want to use the default IssueTypes. I have the default IssueTypes in the ClientIssueType table with a ClientId of null. In my Client partial I'd like to try to retrieve all IssueTypes, and if none are found, return all default IssueTypes. The only way I can see of accessing the IssueTypes with a null ClientId is by accessing the table through a new DataContext, which is problematic once I want to start assigning them to Issues. Where am I going wrong?

    Read the article
