Search Results

Search found 27339 results on 1094 pages for 'sql dmv'.

Page 622/1094 | < Previous Page | 618 619 620 621 622 623 624 625 626 627 628 629  | Next Page >

  • SqlCE DB occasionally freezes on one handheld, not another

    - by Michael
    I have two types of custom handhelds which are similar but slightly different, each running the same WinForms application and a WinCE database:

        Type 1: WinCE 4.2, 400 MHz, 93244 KB
        Type 2: WinCE 5.0, 520 MHz, 84208 KB

    Type 1 will happily proceed through a large batch DB operation initiated by the app, but Type 2 will consistently begin c-r-a-w-l-i-n-g (for several to many cycles) at around the 200-cycle mark. At several points it will begin running normally and then crawl again. The app does several DB operations (inserts, updates and selects, no deletes). To simplify my situation, I've built a small test app which essentially does this:

        command_s.CommandText = "select dvr from vr where vid = 2211250";
        command_u.CommandText = "update pvr set LocationID=81 where Status='OK' and vri = 27861";
        while (going)
        {
            command_s.ExecuteScalar();
            command_u.ExecuteNonQuery();
        }

    and set it off running on the two units side by side. Sure enough, the slower (400 MHz) unit is outpacing the faster (520 MHz) unit (it's about 5000 cycles ahead right now) and I can see noticeable pauses on the 520 MHz unit. What is causing this?
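
    One thing worth ruling out before blaming the hardware is the access path of the repeated statements: if the predicate columns aren't indexed, every cycle is a table scan, and scan cost can vary a lot between devices. A minimal sketch, assuming the table and column names from the test query above and assuming no such indexes exist yet:

        -- Hypothetical supporting indexes for the test queries above;
        -- verify against the actual SQL CE schema before creating them.
        CREATE INDEX IX_vr_vid         ON vr  (vid);
        CREATE INDEX IX_pvr_vri_status ON pvr (vri, Status);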

    Read the article

  • Complicated conditional SQL query

    - by DevAno1
    I'm not even sure if it's possible, but I need it for my Access database. I have the following DB structure (diagram not included). Now I need to perform a query that takes category_id from my product and does the magic:

    - let's say the product belongs to a console (category_id is in table console): from console_types take type_id where the category_id matches;
    - but if the product belongs to a console game (category_id is in table console_game): from console_game take game_cat_id where the category_id matches.

    I'm not sure the database is capable of such a thing. If not, I'm really stuck. Maybe there is a way to split this into 2 or 3 separate queries?
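
    One way to approach the branching, sketched with column names taken from the description rather than a confirmed schema (type_id, game_cat_id, p.id), is to resolve both cases in separate SELECTs and combine them with a UNION, so whichever table holds the category contributes the row:

        -- Sketch only; 123 stands in for the product being looked up.
        SELECT ct.type_id AS resolved_id
        FROM product AS p
        INNER JOIN console_types AS ct ON ct.category_id = p.category_id
        WHERE p.id = 123
        UNION ALL
        SELECT cg.game_cat_id AS resolved_id
        FROM product AS p
        INNER JOIN console_game AS cg ON cg.category_id = p.category_id
        WHERE p.id = 123;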

    Read the article

  • How can I do more than one level of cascading deletes in Linq?

    - by Gary McGill
    If I have a Customers table linked to an Orders table, and I want to delete a customer and its corresponding orders, then I can do:

        dataContext.Orders.DeleteAllOnSubmit(customer.Orders);
        dataContext.Customers.DeleteOnSubmit(customer);

    ...which is great. However, what if I also have an OrderItems table, and I want to delete the order items for each of the orders deleted? I can see how I could use DeleteAllOnSubmit to cause the deletion of all the order items for a single order, but how can I do it for all the orders?

    Read the article

  • How to Generate XML from Database

    - by Nisarg Mehta
    Hi, I am fetching data from two tables, CARRIER_IFTA and IFTA_NAME. My SELECT query is:

        SELECT t1.IFTA_LICENSE_NUMBER, t1.IFTA_BASE_STATE, t2.NAME_TYPE, t2.NAME
        FROM CARRIER_IFTA t1
        INNER JOIN IFTA_NAME t2 ON t1.IFTA_LICENSE_NUMBER = t2.IFTA_LICENSE_NUMBER

    The data comes back like this:

        IFTA_LICENSE_NUMBER  IFTA_BASE_STATE  NAME_TYPE  NAME
        -----------------------------------------------------
        630908333            US               LG         XYZ
        630908333            US               MG         PQR
        730908344            US               LG         ABC

    Now, using XSLT, I want to generate XML like this:

        <T0019>
          <IFTA_ACCOUNT>
            <IFTA_LICENSE_NUMBER>630908333</IFTA_LICENSE_NUMBER>
            <IFTA_BASE_STATE>US</IFTA_BASE_STATE>
            <IFTA_NAME>
              <NAME_TYPE>LG</NAME_TYPE>
              <NAME>XYZ</NAME>
            </IFTA_NAME>
            <IFTA_NAME>
              <NAME_TYPE>MG</NAME_TYPE>
              <NAME>PQR</NAME>
            </IFTA_NAME>
          </IFTA_ACCOUNT>
          <IFTA_ACCOUNT>
            <IFTA_LICENSE_NUMBER>730908344</IFTA_LICENSE_NUMBER>
            <IFTA_BASE_STATE>US</IFTA_BASE_STATE>
            <IFTA_NAME>
              <NAME_TYPE>LG</NAME_TYPE>
              <NAME>ABC</NAME>
            </IFTA_NAME>
          </IFTA_ACCOUNT>
        </T0019>

    I have used the XSLT below, but it is not giving me the desired result:

        <xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="2.0">
          <xsl:template match="/ROWSET">
            <xsl:element name="T0019">
              <xsl:apply-templates select="IFTAACCOUNT"/>
            </xsl:element>
          </xsl:template>
          <xsl:template match="IFTAACCOUNT">
            <xsl:element name="IFTAACCOUNT">
              <xsl:apply-templates select="IFTA_CARRIER_ID_NUMBER"/>
            </xsl:element>
          </xsl:template>
          <xsl:template match="IFTA_LICENSE_NUMBER">
            <xsl:element name="IFTA_LICENSE_NUMBER">
              <xsl:apply-templates/>
            </xsl:element>
          </xsl:template>
          <xsl:template match="IFTA_BASE_STATE">
            <xsl:element name="IFTA_BASE_STATE">
              <xsl:apply-templates/>
            </xsl:element>
          </xsl:template>
          <xsl:template match="IRP_NAME">
            <IRP_NAME>
              <xsl:apply-templates select="NAME"/>
              <xsl:apply-templates select="NAME_TYPE"/>
            </IRP_NAME>
          </xsl:template>
          <xsl:template match="NAME">
            <xsl:element name="NAME">
              <xsl:value-of select="."/>
            </xsl:element>
          </xsl:template>
          <xsl:template match="NAME_TYPE">
            <xsl:element name="NAME_TYPE">
              <xsl:apply-templates/>
            </xsl:element>
          </xsl:template>
        </xsl:stylesheet>

    Please help me. Thanks in advance.

    Read the article

  • LINQ Group By to project into a non-anonymous type?

    - by vikp
    Hi, I have the following LINQ example:

        var colorDistribution =
            from product in ctx.Products
            group product by product.Color into productColors
            select new
            {
                Color = productColors.Key,
                Count = productColors.Count()
            };

    All this works and makes perfect sense. What I'm trying to achieve is to group into a strong type instead of an anonymous type. For example, I have a ProductColour class and I would like to group into a List<ProductColour>. Is this possible? Thank you.

    Read the article

  • Update a list of things without hitting every entry

    - by bobobobo
    I have a list in a database that the user should be able to reorder. On load from the database, I simply ORDER BY order_value:

        itemname | order_value (int)
        ---------+------------------
        salad    | 1
        mango    | 2
        orange   | 3
        apples   | 4

    By drag-and-drop, the user should be able to move apples so that it appears at the top of the list:

        itemname | order_value (int)
        ---------+------------------
        apples   | 4
        salad    | 1
        mango    | 2
        orange   | 3

    OK. So now internally I have to update EVERY list item! If the list has 20 or 100 items, that's a lot of updates for a simple drag operation:

        itemname | order_value (int)
        ---------+------------------
        apples   | 1
        salad    | 2
        mango    | 3
        orange   | 4

    I'd rather do it with only one update. One way I thought of is to make the order value a double:

        itemname | order_value (double)
        ---------+---------------------
        salad    | 1.0
        mango    | 2.0
        orange   | 3.0
        apples   | 4.0

    So after the drag-and-drop operation, I assign apples a value that is less than the item it is to appear in front of:

        itemname | order_value (double)
        ---------+---------------------
        apples   | 0.5
        salad    | 1.0
        mango    | 2.0
        orange   | 3.0

    ...and if an item is dragged into the middle somewhere, its order value becomes bigger than the one it appears after. Here I moved orange to be between salad and mango:

        itemname | order_value (double)
        ---------+---------------------
        apples   | 0.5
        salad    | 1.0
        orange   | 1.5
        mango    | 2.0

    Any thoughts on better ways to do this?
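
    For what it's worth, the integer scheme can also be maintained without issuing one UPDATE per row: a single statement shifts the affected range and a second one places the moved row, so the statement count stays constant however long the list is. A sketch, assuming a table named items with the columns shown above, no unique constraint on order_value, and a move of apples from position 4 to position 1:

        -- Move the row currently at position 4 up to position 1.
        UPDATE items
        SET order_value = order_value + 1
        WHERE order_value >= 1 AND order_value < 4;

        UPDATE items
        SET order_value = 1
        WHERE itemname = 'apples';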

    Read the article

  • How to get count of another table in a left join

    - by Sinan
    I have multiple tables:

        post
        id | name
        ---+-----------
        1  | post-name1
        2  | post-name2

        user
        id | username
        ---+---------
        1  | user1
        2  | user2

        post_user
        post_id | user_id
        --------+--------
        1       | 1
        2       | 1

        post_comments
        post_id | comment_id
        --------+-----------
        1       | 1
        1       | 2
        1       | 3

    I am using a query like this:

        SELECT post.id, post.title, user.id AS uid, username
        FROM `post`
        LEFT JOIN post_user ON post.id = post_user.post_id
        LEFT JOIN user ON user.id = post_user.user_id
        ORDER BY post_date DESC

    It works as intended. However, I would also like to get the number of comments for each post. How can I modify this query to get the comment count? Any ideas?
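
    A common way to get the count, sketched against the tables above, is to pre-aggregate the comments in a derived table and LEFT JOIN it in, so posts with no comments still appear with a count of zero:

        SELECT post.id, post.title, user.id AS uid, username,
               COALESCE(pc.comment_count, 0) AS comment_count
        FROM `post`
        LEFT JOIN post_user ON post.id = post_user.post_id
        LEFT JOIN user ON user.id = post_user.user_id
        LEFT JOIN (
            SELECT post_id, COUNT(*) AS comment_count
            FROM post_comments
            GROUP BY post_id
        ) AS pc ON pc.post_id = post.id
        ORDER BY post_date DESC;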

    Read the article

  • I want to run two or more procedures in parallel

    - by binod gyawali
    I have a list of stored procedures. The procedures do not depend on each other, so I need to run the independent procedures in parallel. There are 4 procedures to be run in parallel; when they have run successfully, I need to move on to the next task. These procedures create about 10 tables.

    The next task is to execute a second set of procedures. I have made a table describing which of the tables created above each of these procedures depends on. After any one of the first 4 procedures completes, I should come to this second set, find the procedures whose dependency tables have already been created, and execute them.

    Running the 4 procedures in parallel is done by DTS. The difficulty for me is handing off from the first 4 procedures to the second set of procedures. Please help me complete this task. Thanks in advance.
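
    One way to drive the hand-off, assuming the dependency table pairs a procedure name with each table it needs (proc_dependency and its columns are hypothetical names, not the real schema), is to ask which procedures currently have no missing dependency tables:

        SELECT d.proc_name
        FROM proc_dependency AS d
        LEFT JOIN INFORMATION_SCHEMA.TABLES AS t
               ON t.TABLE_NAME = d.table_name
        GROUP BY d.proc_name
        HAVING COUNT(*) = COUNT(t.TABLE_NAME);  -- every dependency table exists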

    Read the article

  • Group Specific set of data by Day

    - by Jacques444
    I need to get a certain subgroup of data per day, separated by weekday. For example:

        SELECT weekday, bla, blabla, blablabla
        FROM dbo.blabla
        WHERE bla >= @StartDate AND bla <= @EndDate

    I need the output to be:

        Monday   bla  blabla  blablabla
        Tuesday  bla  blabla  blablabla

    If someone could help me, that would be awesome. Thanks & Regards, Jacques
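
    If this is SQL Server (the dbo. prefix suggests so), one sketch is to derive the weekday name from the date column and group on it; the COUNT(*) is only a placeholder for whatever summary is actually needed per weekday:

        SELECT DATENAME(weekday, bla) AS [weekday],
               COUNT(*)               AS row_count      -- placeholder aggregate
        FROM dbo.blabla
        WHERE bla >= @StartDate AND bla <= @EndDate
        GROUP BY DATENAME(weekday, bla), DATEPART(weekday, bla)
        ORDER BY DATEPART(weekday, bla);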

    Read the article

  • select all values from a dimension for which there are facts in all other dimensions

    - by ideasculptor
    I've tried to simplify for the purposes of asking this question; hopefully this will be comprehensible. Basically, I have a fact table with a time dimension, another dimension, and a hierarchical dimension. For the purposes of the question, let's assume the hierarchical dimension is zip code and state. The other dimension is just descriptive; let's call it 'customer'. Let's assume there are 50 customers.

    I need to find the set of states for which there is at least one zip code in which EVERY customer has at least one fact row for each day in the time dimension. If a zip code has only 49 customers, I don't care about it. If even one of the 50 customers doesn't have a value for even one day in a zip code, I don't care about it. Finally, I also need to know which zip codes qualified the state for selection. Note that there is no requirement that every zip code have a full data set, only that at least one zip code does.

    I don't mind making multiple queries and doing some processing on the client side. This is a data set that only needs to be generated once per day and can be cached. I don't even see a particularly clean way to do it with multiple queries short of simple brute-force iteration, and there are a heck of a lot of 'zip codes' in the data set (not actually zip codes, but there are approximately 100,000 entries in the lower level of the hierarchy and several hundred in the top level, so zipcode-state is a reasonable analogy).
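
    A relational-division style sketch of the zip-code test, using assumed table and column names since the real schema isn't given (fact, day_dim, customer, and a zip dimension carrying the state): count distinct customers per zip code and day, then keep only zip codes where every day is present and every day has all customers; the qualifying states fall out of the join to the zip/state dimension.

        SELECT z.state, t.zip_id
        FROM (
            SELECT f.zip_id,
                   f.day_id,
                   COUNT(DISTINCT f.customer_id) AS cust_cnt
            FROM fact AS f
            GROUP BY f.zip_id, f.day_id
        ) AS t
        JOIN zip AS z ON z.zip_id = t.zip_id
        GROUP BY z.state, t.zip_id
        HAVING COUNT(*)        = (SELECT COUNT(*) FROM day_dim)   -- every day present
           AND MIN(t.cust_cnt) = (SELECT COUNT(*) FROM customer); -- all customers, every day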

    Read the article

  • how to implement undo operation in datagridview

    - by ush
    Hi, I have created an application in C#.NET. Using this application we can update a DataGridView, and now I need to implement undo in it. Please give me some ideas.

        private void button29_Click(object sender, EventArgs e)
        {
            DataTable dt;
            dt.RejectChanges();
        }

    Using the above code I can undo changes before updating, but I need an undo feature like in Word. Please suggest something. Thanks in advance.

    Read the article

  • Question about joins and table with Millions of rows

    - by xRobot
    I have to create 2 tables:

        Magazine (10 million rows; columns: id, title, genres, printing, price)
        Author   (180 million rows; columns: id, name, magazine_id)

    Every author writes for ONLY ONE magazine and every magazine has many authors. So if I want to know all the authors of the Motors magazine, I have to use this query:

        SELECT *
        FROM Author, Magazine
        WHERE (Author.magazine_id = Magazine.id) AND (genres = 'Motors')

    The same applies to the printing and price columns. To avoid these joins with tables of millions of rows, I thought of using these tables instead:

        Magazine (10 million rows; columns: id, title, genres, printing, price)
        Author   (180 million rows; columns: id, name, magazine_id, genres, printing, price)

    and this query:

        SELECT * FROM Author WHERE genres = 'Motors'

    Is this a good approach? I can use PostgreSQL or MySQL.
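
    For comparison, the normalized version written as an explicit join, with the supporting indexes it would want; whether those indexes exist is an assumption, and the denormalized alternative trades the join away against 180 million rows of duplicated magazine attributes that must be kept in sync:

        SELECT a.id, a.name, m.title, m.printing, m.price
        FROM Author AS a
        JOIN Magazine AS m ON m.id = a.magazine_id
        WHERE m.genres = 'Motors';

        -- Assumed, not known to exist:
        CREATE INDEX idx_magazine_genres ON Magazine (genres);
        CREATE INDEX idx_author_magazine ON Author (magazine_id);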

    Read the article

  • Does a TransactionScope that exists only to select data require a call to Complete()

    - by fordareh
    In order to select data from part of an application that isn't affected by dirty data, I create a TransactionScope that specifies a ReadUncommitted IsolationLevel, as per the suggestion from Hanselman here. My question is: do I still need to execute the oTS.Complete() call at the end of the using block, even if this transaction scope was not built for the purpose of bridging object dependencies across 2 databases during an Insert, Update, or Delete? Ex:

        List<string> oStrings = null;
        using (SomeDataContext oCtxt = new SomeDataContext(sConnStr))
        using (TransactionScope oTS = new TransactionScope(TransactionScopeOption.Required,
               new TransactionOptions { IsolationLevel = System.Transactions.IsolationLevel.ReadUncommitted }))
        {
            oStrings = oCtxt.EStrings.ToList();
            oTS.Complete();
        }

    Read the article

  • How to prune data set by frequency to conform to paper's description

    - by sakura90
    The MovieLens data set provides a table with columns:

        userid | movieid | tag | timestamp

    I have trouble reproducing the way they pruned the MovieLens data set used in "Tag Informed Collaborative Filtering" by Zhen, Li and Young. In section 4.1 (Data Set) of the paper, it writes: "For the tagging information, we only keep those tags which are added on at least 3 distinct movies. As for the users, we only keep those users who used at least 3 distinct tags in their tagging history. For movies, we only keep those movies that are annotated by at least 3 distinct tags."

    I tried to query the database:

        SELECT TMP.userid, COUNT(*) AS tagnum
        FROM (SELECT DISTINCT T.userid AS userid, T.tag AS tag FROM tags T) AS TMP
        GROUP BY TMP.userid
        HAVING tagnum >= 3;

    I got a list of 1760 users who used at least 3 distinct tags. However, some of the tags are not added on at least 3 distinct movies. Any help is appreciated.
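
    For reference, sketches of the other two constraints against the same tags table; note the three filters interact (removing a user can push a tag below 3 movies), so in practice the pruning may need to be repeated until the surviving set stabilizes:

        -- Tags added on at least 3 distinct movies.
        SELECT T.tag
        FROM tags T
        GROUP BY T.tag
        HAVING COUNT(DISTINCT T.movieid) >= 3;

        -- Movies annotated with at least 3 distinct tags.
        SELECT T.movieid
        FROM tags T
        GROUP BY T.movieid
        HAVING COUNT(DISTINCT T.tag) >= 3;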

    Read the article

  • Is there a problem when I call SqlDataAdapter.Update and at the same time call SqlDataReader.Read

    - by Ahmed Said
    I have two applications: one updates a single table which has a constant number of rows (128 rows) using the SqlDataAdapter.Update method, and another that selects from this table periodically using a SqlDataReader. Sometimes the DataReader returns only 127 rows, not 128, yet the updating application does not remove or insert any rows; it only updates. I am asking: what is the cause of this behaviour?

    Read the article

  • Native XML Web Services with Basic Authentication and SSL

    - by tom
    I'm using SQL Server 2005 and Native XML Web Services. Integrated authentication over HTTP on port 80 works fine, but I need basic authentication, which requires SSL. If I change the web service endpoint to SSL, I always get a connection reset (101). I tried several ports (80, 443, 9999) with the same outcome. What is the error?
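
    For reference, a minimal sketch of the kind of endpoint involved, based on the SQL Server 2005 CREATE ENDPOINT syntax; the endpoint name, path, site and web method are placeholders rather than the asker's actual setup, and BASIC authentication additionally requires a server certificate bound to the SSL port on the host:

        CREATE ENDPOINT sql_https_endpoint
            STATE = STARTED
        AS HTTP (
            PATH = '/sql',
            AUTHENTICATION = (BASIC),
            PORTS = (SSL),
            SSL_PORT = 443,
            SITE = '*'
        )
        FOR SOAP (
            WEBMETHOD 'GetServerVersion' (NAME = 'master.dbo.xp_msver'),
            BATCHES = ENABLED,
            WSDL = DEFAULT,
            DATABASE = 'master'
        );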

    Read the article

  • ASP Function that returns result from stored procedures

    - by Brad
    I am working on a project that requires me to hop into two separate DBs, so I have figured I need to have multiple functions inside of my VB page. The only problem is that I am not sure how to get this all accomplished. So far I have figured out the overall structure; I just need help implementing it.

    Here is my idea: the main function calls two other functions; call them sub-function 1 and sub-function 2. The main function takes the saved session information for the e-mail address and passes it into sub-function 1, which needs to open a new connection to the DB, run the following stored procedure, and then return the result. Here is the stored procedure, and what I think is correct:

        CREATE PROCEDURE WEB_User
        (
            @EMAIL_ADDRESS varchar(80) = [EMAIL_ADDRESS]
        )
        AS
        SELECT MEMBER_NUMBER
        FROM WEB_LOGIN
        WHERE EMAIL_ADDRESS = @EMAIL_ADDRESS

    So my questions are: what is the function supposed to look like? How do I send the session information to the procedure? And finally, how do I return the stored procedure's results and push them back into the main function so they can be carried into sub-function 2? Thank you in advance for your help... I really appreciate it!
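
    As a sanity check before wiring up the VB functions, the procedure can be exercised directly from a query window; the address below is only a sample value standing in for the session's e-mail:

        EXEC WEB_User @EMAIL_ADDRESS = 'someone@example.com';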

    Read the article

  • SQLAlchemy & Complex Queries

    - by user356594
    I have to implement ACLs for an existing application, so I added user, group and groupmembers tables to the database. I defined a many-to-many relationship between user and group via the association table groupmembers. In order to protect some resources of the app (i.e. item) I added an additional association table, auth_items, which should be used as an association table for the many-to-many relationship between groups/users and a specific item. auth_items has the following columns:

        user_id  -- user table
        group_id -- group table
        item_id  -- item table

    At least one of the user_id and group_id columns is set, so it's possible to define access to a specific item for a group or for a user. I have used the AssociationProxy to define the relationship between users/groups and items.

    I now want to display all items which the user has access to, and I'm having a really hard time doing that. The following criteria are used:

    - All items which are owned by the user should be shown (item.owner_id = user.id).
    - All public items should be shown (item.access = public).
    - All items which the user has access to should be shown (auth_item.user_id = user.id).
    - All items which a group of the user has access to should be shown.

    The first two criteria are quite straightforward, but I have a hard time doing the others. Here is my approach:

        clause = and_(item.access == 'public')
        if user is not None:
            clause = or_(clause,
                         item.owner == user,
                         item.users.contains(user),
                         item.groups.contains(group for group in user.groups))

    The group criterion produces an error:

        item.groups.contains(group for group in user.groups)

    I am actually not sure if this is a good approach at all. What is the best approach when filtering many-to-many relationships? How can I filter a many-to-many relationship based on another list/relationship? By the way, I am using the latest SQLAlchemy (0.6) and Elixir versions. Thanks for any insights.

    Read the article

  • What is the correct way to increment a field making up part of a composite key

    - by Tr1stan
    I have a bunch of tables whose primary key is made up of the foreign keys of other tables (a composite key). Therefore, as a very cut-down version, the attributes might look like this:

        A[aPK, SomeFields]  1:M  B[bPK, aFK, SomeFields]  1:M  C[cPK, bFK, aFK, SomeFields]

    As data this could look like:

        A[aPK, SomeFields]:
        1, Foo
        2, Bar

        B[bPK, aFK, SomeFields]:
        1, 1, FooData1
        2, 1, FooData2
        1, 2, BarData1
        2, 2, BarData2

        C[cPK, bFK, aFK, SomeFields]:
        1, 1, 1, FooData1More
        2, 1, 1, FooData1More
        1, 2, 1, FooData2More
        2, 2, 1, FooData2More
        1, 1, 2, BarData1More
        2, 1, 2, BarData1More
        1, 2, 2, BarData2More
        2, 2, 2, BarData2More

    I've got this running in an MSSQL DBMS and I'm looking for the best way to increment the left-most column in each table when a new tuple is added to it. I can't use the auto-increment Identity Specification option, as that has no idea it is part of a composite key. I also don't want to use an aggregate function such as MAX(field) + 1, as this will have adverse effects with multiple users inputting data, rolling back, etc. There might, however, be a nice trigger-based option here, but I'm not sure. This must be a common issue, so I'm hoping that someone has a lovely solution. As an aside, which may or may not affect the answer, I'm using Entity Framework 1.0 as my ORM within a C# MVC application.
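
    For what it's worth, one common SQL Server compromise is to compute the per-parent maximum inside the insert itself while holding a range lock, so concurrent sessions can't pick the same value. A sketch against table B above (the question rules out a plain MAX+1, so this is only a possible trade-off, and the same statement could live inside an INSTEAD OF INSERT trigger); @aFK and @SomeFields are placeholders for the incoming values:

        BEGIN TRANSACTION;

        INSERT INTO B (bPK, aFK, SomeFields)
        SELECT COALESCE(MAX(bPK), 0) + 1, @aFK, @SomeFields
        FROM B WITH (UPDLOCK, HOLDLOCK)    -- serialize per-parent numbering
        WHERE aFK = @aFK;

        COMMIT TRANSACTION;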

    Read the article

  • Generate dynamic UPDATE command from Expression<Func<T, T>>

    - by Rui Jarimba
    I'm trying to generate an UPDATE command based on expression trees (for a batch update). Assuming the following UPDATE command:

        UPDATE Product
        SET ProductTypeId = 123,
            ProcessAttempts = ProcessAttempts + 1

    For an expression like this:

        Expression<Func<Product, Product>> updateExpression = entity => new Product()
        {
            ProductTypeId = 123,
            ProcessAttempts = entity.ProcessAttempts + 1
        };

    How can I generate the SET part of the command?

        SET ProductTypeId = 123, ProcessAttempts = ProcessAttempts + 1

    Read the article

  • Select where and where not

    - by Simon
    I have a table containing lessons that I called "cours" (French), with several courses inside, and I have linked them to students with a junction table between them to record whether they attend the lessons or not. I would like to return both the data matched by the SELECT and the data that is NOT matched. So, if one student follows 3 courses out of 5, I would like to return the 3 courses that he follows and the 2 courses that he doesn't follow. Is there a way to do it?
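
    One common shape for this, sketched with a hypothetical junction table inscription(cours_id, student_id) since the real table and column names aren't given: LEFT JOIN the junction rows restricted to the student, so every course comes back exactly once with a flag saying whether the student follows it.

        SELECT c.id,
               c.name,
               CASE WHEN i.student_id IS NULL THEN 0 ELSE 1 END AS follows
        FROM cours AS c
        LEFT JOIN inscription AS i
               ON i.cours_id = c.id
              AND i.student_id = @student_id;   -- the student being looked up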

    Read the article

  • Auto increment with a Unit Of Work

    - by Derick
    Context: I'm building a persistence layer to abstract the different types of databases I'll be needing. On the relational side I have MySQL, Oracle and PostgreSQL. Let's take the following simplified MySQL tables:

        CREATE TABLE Contact (
            ID   varchar(15),
            NAME varchar(30)
        );

        CREATE TABLE Address (
            ID         varchar(15),
            CONTACT_ID varchar(15),
            NAME       varchar(50)
        );

    I use code to generate system-specific alphanumeric unique IDs fitting 15 chars, in this case. Thus, if I insert a Contact record with its Addresses, I have my generated Contact.ID and Address.CONTACT_IDs before committing. I've created a Unit of Work (amongst others) as per Martin Fowler's patterns to add transaction support, and I'm using a key-based Identity Map in the UoW to track the changed records in memory. It works like a charm for the scenario above; all pretty standard stuff so far.

    The question scenario comes in when I have a database that is not under my control and the ID fields are auto-increment (or, in Oracle, sequences). In this case I do not have the db-generated Contact.ID beforehand, so when I create my Address I do not have a value for Address.CONTACT_ID. The transaction has not been started on the DB session, since everything is kept in the Identity Map in memory.

    Question: what is a good approach to address this (avoiding unnecessary db round trips)?

    Some ideas: retrieve the last ID. I can do a call to the database to retrieve the last ID, like:

        SELECT Auto_increment
        FROM information_schema.tables
        WHERE table_name = 'Contact';

    But this is MySQL-specific, and probably something similar can be done for the other databases. If I do this, I would need to do the first insert, get the ID and then update the children (Address.CONTACT_IDs), all in the current transaction context.
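
    As a sketch of the per-engine alternatives to reading information_schema (which can race with other sessions), each engine can hand back the key it just generated within the current session; Contact is used here as a stand-in for a table whose ID really is database-generated, and exact usage depends on the driver:

        -- MySQL: value generated by this session's last insert.
        INSERT INTO Contact (NAME) VALUES ('Alice');
        SELECT LAST_INSERT_ID();

        -- PostgreSQL: INSERT INTO Contact (NAME) VALUES ('Alice') RETURNING ID;
        -- Oracle:     INSERT INTO Contact (ID, NAME)
        --             VALUES (contact_seq.NEXTVAL, 'Alice') RETURNING ID INTO :new_id;
        -- SQL Server: SELECT SCOPE_IDENTITY();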

    Read the article

  • MySQL Database Design with Internationalization

    - by Some name
    Hello, I'm going to start work on a medium-sized application and I'm planning its DB design. One thing that I'm not sure about is this: I will have many tables which will need internationalization, such as membership_options, gender_options, language_options, etc. Each of these tables will share common i18n fields, like title, alternative_title, short_description, description.

    In your opinion, which is the best way to do it? Have an i18n table with the same fields for each of the tables that will need them, or do something like:

        Membership table             Gender table
        ----------------             ------------
        id | created_at              id | created_at
        1  - 22.03.2001              1  - 14.08.2002
        2  - 22.03.2001              2  - 14.08.2002

        General translation table
        -------------------------
        record_id | table_name | string_name | alternative_title | .... | id_language
        1         - membership - regular     - null                     - 1 (english)
        1         - membership - normale     - null                     - 2 (italian)
        1         - gender     - man         - null                     - 1 (english)
        1         - gender     - uomo        - null                     - 2 (italian)

    This would avoid repeating something like:

        membership_translation table
        -----------------------------
        membership_id | name    | alternative_title | id_lang
        1             - regular - null              - 1
        1             - normale - null              - 2

        gender_translation table
        -----------------------------
        gender_id | name | alternative_title | id_lang
        1         - man  - null              - 1
        1         - uomo - null              - 2

    and so on. I would probably reduce the number of DB tables, but I'm not sure about performance. I'm not much of a DB designer, so please let me know.
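
    A sketch of what the single generic translation table might look like as MySQL DDL, using the column names from the example above; the exact column list, lengths and key are assumptions:

        CREATE TABLE general_translation (
            record_id         INT          NOT NULL,
            table_name        VARCHAR(64)  NOT NULL,
            string_name       VARCHAR(255) NULL,
            alternative_title VARCHAR(255) NULL,
            id_language       INT          NOT NULL,
            PRIMARY KEY (record_id, table_name, id_language)
        ) ENGINE=InnoDB;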

    Read the article
