Search Results

Search found 37647 results on 1506 pages for 'sql performance'.

  • SQL with Regular Expressions vs Indexes with Logical Merging Functions

    - by geeko
    Hello lads, I am trying to develop a complex textual search engine. I have thousands of textual pages from many books, and I need to find the pages that satisfy complex logical criteria. These criteria can contain virtually any combination of the following: A: full words; B: word roots (similar to stems, i.e. all words with certain key letters); C: word templates (in some languages, roots are filled into templates to form various parts of speech such as adjectives or past/present verbs); D: the logical connectives AND/OR/XOR/NOT/IF/IFF, with parentheses to state priorities. Now, would it be faster to keep the pages' full text in the database (not indexed) and search through them all using SQL and regular expressions? Or would it be better to construct an index of word/root/template-to-page-location tuples, so that searches for individual words, roots and templates are fast? The indexed approach gets tricky once logical connectives enter the query, so I thought of the following steps for such cases: 1: search separately for each individual word/root/template in the specified query; 2: on a priority basis, merge two result lists (from step 1) at a time, depending on the logical connective. For example, if we are searching for "he AND (is OR was)": 1: search for "he", "is" and "was" separately and get a result list for each word; 2: merge the result lists of "is" and "was" using an OR-MERGE function; 3: merge the output of the OR-MERGE with the list for "he" using an AND-MERGE function. The result of step 3 is then returned as the result of the specified query. What do you think, gurus? Which is faster? Any better ideas? Thank you all in advance.
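
    With the index approach, the OR-MERGE and AND-MERGE functions described above map directly onto set operators, so the database can do the merging. A minimal sketch, assuming a hypothetical inverted-index table WordIndex(Term, PageId) that is not part of the question (UNION and INTERSECT are standard SQL, though not every engine supports INTERSECT):

        -- Result list for the query "he AND (is OR was)"
        WITH OrMerged AS (
            SELECT PageId FROM WordIndex WHERE Term = 'is'   -- list for "is"
            UNION                                            -- OR-MERGE
            SELECT PageId FROM WordIndex WHERE Term = 'was'  -- list for "was"
        )
        SELECT PageId FROM WordIndex WHERE Term = 'he'       -- list for "he"
        INTERSECT                                            -- AND-MERGE
        SELECT PageId FROM OrMerged;

    Roots and templates could be handled the same way by also indexing (Root, PageId) and (Template, PageId) pairs, and NOT maps to EXCEPT against the full page list.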

  • I want to search in SQL Server with a must-have parameter in one column

    - by sherif4csharp
    Hello, I am using C# and MS SQL Server 2008. I have a table like this:

        id | propertyTypeId | FinishingQualityId | title | Description | features
        ---|----------------|--------------------|-------|-------------|---------
        1  | 1              | 2                  | prop1 | propDEsc1   | 1,3,5,7
        2  | 2              | 3                  | prop2 | propDEsc2   | 1,3
        3  | 6              | 5                  | prop3 | propDEsc3   | 1
        4  | 5              | 4                  | prop4 | propDEsc4   | 3,5
        5  | 4              | 6                  | prop5 | propDEsc5   | 5,7
        6  | 4              | 6                  | prop6 | propDEsc6   |

    And here is my stored procedure (it searches the same table):

        CREATE PROCEDURE propertySearch
            @Id int = null,
            @pageSize float,
            @pageIndex int,
            @totalpageCount int output,
            @title nvarchar(150) = null,
            @propertyTypeid int = null,
            @finishingqualityid int = null,
            @features nvarchar(max) = null  -- this parameter is sent like '1,3' (for example)
        AS
        BEGIN
            SELECT ROW_NUMBER() OVER (ORDER BY id) AS TempId,
                   id, title, description, propertyTypeId,
                   propertyType.name AS propertyTypeName,
                   finishingQualityId,
                   finishingQuality.Name AS finishingQualityName,
                   features
            INTO #TempTable
            FROM property
            JOIN propertyType ON propertyType.id = property.propertyTypeId
            JOIN finishingQuality ON finishingQuality.id = property.finishingQualityId
            WHERE property.id = ISNULL(@Id, property.id)
              AND property.PropertyTypeId = ISNULL(@propertyTypeid, property.propertyTypeId)

            SELECT @totalpageCount = (SELECT COUNT(*) FROM #TempTable) / @pageSize

            SELECT * FROM #TempTable
            WHERE TempId BETWEEN (@pageIndex - 1) * @pageSize + 1 AND (@pageIndex * @pageSize)
        END
        GO

    I cannot filter the table by the features I send here. The table has too many rows, and I want to extend this procedure to filter the data: for example, when I pass 1,3 in the @features parameter, I want to get back only rows one and two of the example table above, i.e. rows that have every feature I send. Many thanks to everyone who has helped me and will help.
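
    For the "must have every feature I send" filter, one possible shape is sketched below, outside the procedure for clarity. It assumes the features column stays a comma-separated string and uses an XML trick to split @features, since STRING_SPLIT does not exist in SQL Server 2008; wrapping both sides in commas keeps feature 1 from matching 13:

        DECLARE @features nvarchar(max) = '1,3';
        DECLARE @f xml = '<x>' + REPLACE(@features, ',', '</x><x>') + '</x>';

        SELECT p.*
        FROM property AS p
        WHERE NOT EXISTS (                -- no requested feature may be missing from the row
            SELECT 1
            FROM @f.nodes('/x') AS t(c)
            WHERE ',' + p.features + ',' NOT LIKE '%,' + t.c.value('.', 'nvarchar(10)') + ',%'
        );

    Against the sample data this keeps rows 1 and 2 only. The same predicate can be dropped into the procedure's WHERE clause, guarded with @features IS NULL OR (...) so the parameter stays optional.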

  • Select highest rated, oldest track

    - by Blair McMillan
    I have several tables: CREATE TABLE [dbo].[Tracks]( [Id] [uniqueidentifier] NOT NULL, [Artist_Id] [uniqueidentifier] NOT NULL, [Album_Id] [uniqueidentifier] NOT NULL, [Title] [nvarchar](255) NOT NULL, [Length] [int] NOT NULL, CONSTRAINT [PK_Tracks_1] PRIMARY KEY CLUSTERED ( [Id] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] ) ON [PRIMARY] CREATE TABLE [dbo].[TrackHistory]( [Id] [int] IDENTITY(1,1) NOT NULL, [Track_Id] [uniqueidentifier] NOT NULL, [Datetime] [datetime] NOT NULL, CONSTRAINT [PK_TrackHistory] PRIMARY KEY CLUSTERED ( [Id] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] ) ON [PRIMARY] INSERT INTO [cooltunes].[dbo].[TrackHistory] ([Track_Id] ,[Datetime]) VALUES ("335294B0-735E-4E2C-8389-8326B17CE813" ,GETDATE()) CREATE TABLE [dbo].[Ratings]( [Id] [int] IDENTITY(1,1) NOT NULL, [Track_Id] [uniqueidentifier] NOT NULL, [User_Id] [uniqueidentifier] NOT NULL, [Rating] [tinyint] NOT NULL, CONSTRAINT [PK_Ratings] PRIMARY KEY CLUSTERED ( [Id] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] ) ON [PRIMARY] INSERT INTO [cooltunes].[dbo].[Ratings] ([Track_Id] ,[User_Id] ,[Rating]) VALUES ("335294B0-735E-4E2C-8389-8326B17CE813" ,"C7D62450-8BE6-40F6-80F1-A539DA301772" ,1) Users User_Id|Guid Other fields Links between the tables are pretty obvious. TrackHistory has each track added to it as a row whenever it is played ie. a track will appear in there many times. Ratings value will either be 1 or -1. What I'm trying to do is select the Track with the highest rating, that is more than 2 hours old, and if there is a duplicate rating for a track (ie a track receives 6 +1 ratings and 1 - rating, giving that track a total rating of 5, another track also has a total rating of 5), the track that was last played the longest ago should be returned. (If all tracks have been played within the last 2 hours, no rows should be returned) I'm getting somewhere doing each part individually using the link above, SUM(Value) and GROUP BY Track_Id, but I'm having trouble putting it all together. Hopefully someone with a bit more (MS)SQL knowledge will be able to help me. Many thanks!
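
    A sketch of how the pieces could fit together, using the tables above; the 2-hour cutoff and the tie-break come from the question, and pre-aggregating in derived tables keeps the ratings sum from being multiplied by the number of history rows. (One aside, offered as an observation rather than part of the question: Rating is declared tinyint, which cannot store -1, so it may need to become smallint for +1/-1 voting.)

        SELECT TOP 1
               t.Id, t.Title, r.TotalRating, h.LastPlayed
        FROM Tracks t
        JOIN (SELECT Track_Id, SUM(CAST(Rating AS int)) AS TotalRating
              FROM Ratings GROUP BY Track_Id) r ON r.Track_Id = t.Id
        JOIN (SELECT Track_Id, MAX([Datetime]) AS LastPlayed
              FROM TrackHistory GROUP BY Track_Id) h ON h.Track_Id = t.Id
        WHERE h.LastPlayed < DATEADD(hour, -2, GETDATE())  -- no rows if everything played recently
        ORDER BY r.TotalRating DESC, h.LastPlayed ASC;     -- highest rating, then longest-ago last play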

  • sql foreign keys

    - by Paul Est
    I created these tables with the following syntax in phpMyAdmin: DROP TABLE IF EXISTS users; DROP TABLE IF EXISTS info; CREATE TABLE users ( user_id int unsigned NOT NULL auto_increment, email varchar(100) NOT NULL default '', pwd varchar(32) NOT NULL default '', isAdmin int(1) unsigned NOT NULL, PRIMARY KEY (user_id) ) TYPE=INNODB; CREATE TABLE info ( info_id int unsigned NOT NULL auto_increment, first_name varchar(100) NOT NULL default '', last_name varchar(100) NOT NULL default '', address varchar(300) NOT NULL default '', zipcode varchar(100) NOT NULL default '', personal_phone varchar(100) NOT NULL default '', mobilephone varchar(100) NOT NULL default '', faxe varchar(100) NOT NULL default '', email2 varchar(100) NOT NULL default '', country varchar(100) NOT NULL default '', sex varchar(1) NOT NULL default '', birth varchar(1) NOT NULL default '', email varchar(100) NOT NULL default '', PRIMARY KEY (info_id), FOREIGN KEY (email) REFERENCES users(email) ON UPDATE CASCADE ON DELETE CASCADE ) TYPE=INNODB; But it shows the error "#1064 - You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'TYPE=INNODB' at line 11". If I remove the TYPE=INNODB at the end of the CREATE statements, it shows the error "#1005 - Can't create table 'curriculo.info' (errno: 150)".
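
    Two adjustments usually clear these particular messages; this is an assumption about the cause, not a verified diagnosis of the poster's server. Recent MySQL versions only accept ENGINE= (the old TYPE= syntax was removed, hence #1064), and errno 150 typically means the referenced column has no index or a mismatched type, so users(email) needs a unique key before info(email) can reference it:

        CREATE TABLE users (
          user_id int unsigned NOT NULL auto_increment,
          email varchar(100) NOT NULL default '',
          pwd varchar(32) NOT NULL default '',
          isAdmin int(1) unsigned NOT NULL,
          PRIMARY KEY (user_id),
          UNIQUE KEY uq_users_email (email)   -- referenced columns must be indexed
        ) ENGINE=InnoDB;                      -- ENGINE= replaces the removed TYPE= syntax

    The info table would then end with ENGINE=InnoDB as well, or its foreign key could reference users(user_id) instead, which is already the primary key.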

  • google analytics reverse transaction not working with sales performance

    - by prasad maganti
    We have a Google Analytics account and are trying to do a reverse transaction. We created a transaction on one date and the reverse transaction on some other date. After the reverse transaction, the original transaction disappears from the transactions list. Is that expected or abnormal behavior? Also, if we check the same order data in Sales Performance, the reverse transaction is not reflected on the date we created the transaction; it is reflected on the date we made the reverse transaction. It should not work like this: the reverse transaction should affect the same date as the original transaction.

  • SQL group results as a column array

    - by Radek
    Hi guys, this is an SQL question and don't know which type of JOIN, GROUP BY etc. to use, it is for a chat program where messages are related to rooms and each day in a room is linked to a transcript etc. Basically, when outputting my transcripts, I need to show which users have chatted on that transcript. At the moment I link them through the messages like so: SELECT rooms.id, rooms.name, niceDate, room_transcripts.date, long FROM room_transcripts JOIN rooms ON room_transcripts.room=rooms.id JOIN transcript_users ON transcript_users.room=rooms.id AND transcript_users.date=room_transcripts.date JOIN users ON transcript_users.user=users.id WHERE room_transcripts.deleted=0 AND rooms.id IN (1,2) ORDER BY room_transcripts.id DESC, long ASC The result set looks like this: Array ( [0] => Array ( [id] => 2 [name] => Room 2 [niceDate] => Wednesday, April 14 [date] => 2010-04-14 [long] => Jerry Seinfeld ) [1] => Array ( [id] => 1 [name] => Room 1 [niceDate] => Wednesday, April 14 [date] => 2010-04-14 [long] => Jerry Seinfeld ) [2] => Array ( [id] => 1 [name] => Room 1 [niceDate] => Wednesday, April 14 [date] => 2010-04-14 [long] => Test Users ) ) I would like though for each element in the array to represent one transcript entry and for the users to be grouped in an array as the entry's element. So 'long' will be an array listing all the names. Can this be done? At the moment I just append the names and when the transcript date and room changes I echo them retrospectively, but I will do the same for files and highlighted messages and it's messy. Thanks.
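
    If this is MySQL (the print_r output suggests PHP, and the engine is not stated), GROUP_CONCAT can collapse the user names into one field per transcript, which is the "column array" being asked for; SQL Server would need FOR XML PATH instead. A sketch of that variant of the query, with the names exploded back into an array in PHP afterwards; older SQL modes accept this GROUP BY, while ONLY_FULL_GROUP_BY would require listing the other selected columns too:

        SELECT rooms.id, rooms.name, niceDate, room_transcripts.date,
               GROUP_CONCAT(DISTINCT users.long ORDER BY users.long SEPARATOR ', ') AS user_names
        FROM room_transcripts
        JOIN rooms            ON room_transcripts.room = rooms.id
        JOIN transcript_users ON transcript_users.room = rooms.id
                             AND transcript_users.date = room_transcripts.date
        JOIN users            ON transcript_users.user = users.id
        WHERE room_transcripts.deleted = 0 AND rooms.id IN (1,2)
        GROUP BY room_transcripts.id
        ORDER BY room_transcripts.id DESC;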

  • Error in datatype (nvarchar instead of ntext)

    - by prabu R
    I am importing data from excel(.xls) to SQL Server 2008 using SSIS. I have included IMEX=1 in the connection string of excel connection manager. But a column consists of a value as below: 4-Hour Engineer Dispatch ASPP Engr Dispatch 1: Up to 1 dispatch (8 hours) per year. Hours exceeding allocation billed @ 1.5x hourly rate w/ 8-hr min Engr Dispatch: 8-hrs to arrive on-site from Ciena's determination of need On-Site Engineer Dispatch - 8 Hour ASPP Engr Dispatch 8: Up to 8 dispatch (64 hours) per year. Hours exceeding allocation billed @ 1.5x hourly rate w/ 8-hr min Engr Dispatch: NBD to dispatch from Ciena's determination of need Per Incident On Site Support ASPP Engr Dispatch 12: Up to 12 dispatch (96 hours) per year. Hours exceeding allocation billed @ 1.5x hourly rate w/ 8-hr min Engr Dispatch: Next day to arrive on-site from Ciena's determination of need Resident Engineer Engr Dispatch: 2-hrs to arrive on-site from Ciena's determination of need Engr Dispatch: 4-hrs to arrive on-site from Ciena's determination of need ASPP Engr Dispatch 2: Up to 2 dispatch (16 hours) per year. Hours exceeding allocation billed @ 1.5x hourly rate w/ 8-hr min N Actually there are about 600 rows in that excel file. But the above mentioned value is present after 450 rows only. So, the datatype of that column is taken as nvarchar(255) as default instead of ntext and so i am getting error. Anybody please help out... Thanks in advance...

  • SQL Server - Cascade delete has multiple paths

    - by Anders Juul
    Hi all, I have two tables, Results and ComparedResults. ComparedResults has two columns which reference the primary key of the Results table. My problem is that if a record in Results is deleted, I wish to delete all records in ComparedResults which reference the deleted record, regardless of whether it's one column or the other (and the columns may reference the same Results row). A row in Results may be deleted directly or through a cascade delete caused by deleting in a third table. Googling this suggests that I need to disable cascade delete and rewrite all cascade deletes to use triggers instead. Is that REALLY necessary? I'd be prepared to do much restructuring of the database to avoid this, as my main area is OO programming, and databases should 'just work'. It is hard to see, however, how a restructuring could help, as I would just move the problem around... Or am I missing something? I am also a bit at a loss as to why my initial construct should even be a problem for SQL Server?! Any comments welcome and much appreciated! Anders, Denmark
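
    Whatever mechanism ends up firing it (a trigger or simply a delete procedure called instead of a raw DELETE), the cleanup itself is only two statements. A minimal sketch as a stored procedure, assuming the two referencing columns are named ResultId1 and ResultId2 and the Results key is an int column called Id (none of these names are given in the question):

        CREATE PROCEDURE dbo.DeleteResult @ResultId int
        AS
        BEGIN
            BEGIN TRANSACTION;

            -- Remove comparisons that reference the result through either column
            DELETE FROM ComparedResults
            WHERE ResultId1 = @ResultId OR ResultId2 = @ResultId;

            -- Then remove the result itself
            DELETE FROM Results WHERE Id = @ResultId;

            COMMIT TRANSACTION;
        END

    The cascade-from-a-third-table case still needs its own handling, which is where the trigger suggestion from the search results comes in.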

  • SQLAuthority News - Guest Post - Performance Counters Gathering using PowerShell

    Laerte Junior has previously helped me personally to resolve an issue with the PowerShell installation on my computer, and he did an awesome job. He has now sent this wonderful article about performance counters for the readers of this blog. I really liked it, and I expect all of you who are PowerShell geeks, you [...]

  • SQL Outer Join on a bunch of Inner Joined results

    - by Matthew Frederick
    I received some great help on joining a table to itself and am trying to take it to the next level. The SQL below is from the help but with my addition of the select line beginning with COUNT, the inner join to the Recipient table, and the Group By. SELECT Event.EventID AS EventID, Event.EventDate AS EventDateUTC, Participant2.ParticipantID AS AwayID, Participant1.ParticipantID AS HostID, COUNT(Recipient.ChallengeID) AS AllChallenges FROM Event INNER JOIN Matchup Matchup1 ON (Event.EventID = Matchup1.EventID) INNER JOIN Matchup Matchup2 ON (Event.EventID = Matchup2.EventID) INNER JOIN Participant Participant1 ON (Matchup1.Host = 1 AND Matchup1.ParticipantID = Participant1.ParticipantID) INNER JOIN Participant Participant2 ON (Matchup2.Host != 1 AND Matchup2.ParticipantID = Participant2.ParticipantID) INNER JOIN Recipient ON (Event.EventID = Recipient.EventID) WHERE Event.CategoryID = 1 AND Event.Resolved = 0 AND Event.Type = 1 GROUP BY Recipient.ChallengeID ORDER BY EventDateUTC ASC My goal is to get a count of how many rows in the Recipient table match the EventID in Event. This code works fine except that I also want to get results where there are 0 matching rows in Recipient. I want 15 rows (= the number of events) but I get 2 rows, one with a count of 1 and one with a count of 2 (which is appropriate for an inner join as there are 3 rows in the sample Recipient table, one for one EventID and two for another EventID). I thought that either a LEFT join or an OUTER join was what I was looking for, but I know that I'm not quite getting how the tables are actually joined. A LEFT join there gives me one more row with 0, which happens to be EventID 1 (first thing in the table), but that's all. Errors advise me that I can't just change that INNER join to an OUTER. I tried some parenthesizing and some subselects and such but can't seem to make it work.
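
    For what it is worth, the usual shape of the zero-count case is to make only the Recipient join a LEFT JOIN, count a Recipient column (COUNT of a column ignores NULLs, so unmatched events come back as 0), and group by the event rather than by Recipient.ChallengeID. A stripped-down sketch of that pattern, with the Matchup/Participant joins omitted to isolate the counting; in the full query those joins would also multiply the count unless the recipients are counted with COUNT(DISTINCT) or in a subquery. This is an assumption about intent, not a tested query against the real schema:

        SELECT Event.EventID,
               Event.EventDate AS EventDateUTC,
               COUNT(Recipient.ChallengeID) AS AllChallenges    -- 0 when no recipients match
        FROM Event
        LEFT JOIN Recipient ON Event.EventID = Recipient.EventID
        WHERE Event.CategoryID = 1 AND Event.Resolved = 0 AND Event.Type = 1
        GROUP BY Event.EventID, Event.EventDate
        ORDER BY EventDateUTC ASC;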

  • SQL Select - adding field to Select is changing the results

    - by nycdan
    I'm stumped by this SQL problem that I suspect will be easy pickings for someone out there. I have a table that contains rows representing several daily lists of ranked items. The relevent fields are as follows: ID, ListID, ItemID, ItemName, ItemRank, Date. I have a query that returns the items that were on a list yesterday but not today (Items Off List) as follows: Select ItemID, ListID, ItemName, convert(varchar(10),MAX(date),101) as date, COUNT(ItemName) as days_on_list From Table Group By ItemID, ListID, ItemName Having Max(date) = DATEADD("d",-1,convert(varchar(10),getdate(),101)) and ListID = 1 Order By ListID, ItemName, COUNT(ItemName) Basically I'm looking for records where the max date is yesterday. It works fine and shows the number of days each item was previously on the list (although not necessarily consecutively, but that's fine for now). The problem is when I try to add ranking to see what yesterday's rank was. I tried the following: Select ItemID, ListID, ItemName, ranking, convert(varchar(10),MAX(date),101) as date, COUNT(ItemName) as days_on_list From Table Group By ItemID, ListID, ItemName, ranking Having Max(date) = DATEADD("d",-1,convert(varchar(10),getdate(),101)) and ListID = 1 Order By ListID, ItemName, ranking, COUNT(ItemName) This returns a great deal more records than the previous query so something isn't right with it. I want the same number of records, but with the ranking included. I can get the rank by doing a self-join with a subquery and getting records where the ItemID occurs yesterday but not today - but then I don't know how to get the Count any more. Appreciation in advance for any help with this. ======== SOLVED ============== Select ItemID, ListID, ItemName, ranking, convert(varchar(10),MAX(date),101) as date, COUNT(ItemName) as days_on_list from Table T Where date = DATEADD("d",-1,convert(varchar(10),getdate(),101)) and ListID = 1 and T.ItemID Not In (select T.ItemID from Table T join Table T2 on T.ItemID = T2.ItemID and T.ListID = T2.ListID where T.date = DATEADD("d",-1,convert(varchar(10),getdate(),101)) and T2.date = convert (varchar(10),getdate(),101) and T.ListID = 1) Group by ItemID, ListID, ItemName, ranking Basically, what I did was create a subquery that finds all items that appear in both days, and finds items that appeared yesterday but are not in the set of items that appeared both days. Then I was able to do the aggregate function and grouping correctly. I would NOT be surprised if this is more convoluted than necessary but I understand it and can modify it as needed and performance doesn't seem to be an issue. Thanks everyone for the assist.

  • SQL Server: Clustering by timestamp; pros/cons

    - by Ian Boyd
    I have a table in SQL Server where I want inserts to be added to the end of the table (as opposed to a clustering key that would cause them to be inserted in the middle). This means I want the table clustered by some column that will constantly increase. This could be achieved by clustering on a datetime column: CREATE TABLE Things ( ... CreatedDate datetime DEFAULT getdate(), [timestamp] timestamp, CONSTRAINT [IX_Things] UNIQUE CLUSTERED (CreatedDate) ) But I can't guarantee that two Things won't have the same time. So my requirements can't really be achieved by a datetime column. I could add a dummy identity int column, and cluster on that: CREATE TABLE Things ( ... RowID int IDENTITY(1,1), [timestamp] timestamp, CONSTRAINT [IX_Things] UNIQUE CLUSTERED (RowID) ) But you'll notice that my table already contains a timestamp column; a column which is guaranteed to be monotonically increasing. This is exactly the characteristic I want for a candidate cluster key. So I cluster the table on the rowversion (aka timestamp) column: CREATE TABLE Things ( ... [timestamp] timestamp, CONSTRAINT [IX_Things] UNIQUE CLUSTERED (timestamp) ) Rather than adding a dummy identity int column (RowID) to ensure an order, I use what I already have. What I'm looking for are thoughts on why this is a bad idea, and what other ideas are better. Note: Community wiki, since the answers are subjective.

  • SQL: script to create country, state tables

    - by pcampbell
    Consider writing an application that requires registration for an entity, and the schema has been defined to require the country, state/prov/county data to be normalized. This is fairly typical stuff here. Naming also is important to reflect. Each country has a different name for this entity: USA = states Australia = states + territories Canada = provinces + territories Mexico = states Brazil = states Sweden = provinces UK = counties, principalities, and perhaps more! Most times when approaching this problem, I have to scratch together a list of good countries, and the states/prov/counties of each. The app may be concerned with a few countries and not others. The process is full of pain. It typically involves one of two approaches: opening up some previous DB and creating a CREATE script based on those tables. Run that script in the context of the new system. creating a DTS package from database1 to database2, with all the DDL and data included in the transfer. My goal now is to script the creation and insert of the countries that I'd be concerned with in the app of the day. When I want to roll out Countries X/Y/Z, I'll open CountryX.sql, and load its states into the ProvState table. Question: do you have a set of scripts in your toolset to create schema and data for countries and state/province/county? If so, would you share your scripts here? (U.K. citizens, please feel free to correct me by way of a comment in the use of counties.)
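
    A sketch of the kind of script this usually turns into, with hypothetical table and column names since the question does not fix a schema; keeping one INSERT file per country (CountryCA.sql, CountryUS.sql, ...) makes the per-app pick-and-choose workable:

        CREATE TABLE Country (
            CountryCode   char(2)       NOT NULL PRIMARY KEY,  -- e.g. ISO 3166-1 alpha-2
            Name          nvarchar(100) NOT NULL,
            ProvStateTerm nvarchar(50)  NOT NULL               -- 'State', 'Province', 'County', ...
        );

        CREATE TABLE ProvState (
            ProvStateCode nvarchar(10)  NOT NULL,
            CountryCode   char(2)       NOT NULL REFERENCES Country (CountryCode),
            Name          nvarchar(100) NOT NULL,
            PRIMARY KEY (CountryCode, ProvStateCode)
        );

        -- CountryCA.sql (abbreviated): Canada and its provinces/territories
        INSERT INTO Country (CountryCode, Name, ProvStateTerm) VALUES ('CA', 'Canada', 'Province');
        INSERT INTO ProvState (ProvStateCode, CountryCode, Name) VALUES ('ON', 'CA', 'Ontario');
        INSERT INTO ProvState (ProvStateCode, CountryCode, Name) VALUES ('BC', 'CA', 'British Columbia');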

  • What's the performance penalty of using isKindOfClass in Objective C?

    - by durcicko
    I'm considering introducing: if ([myInstance isKindOfClass:[SomeClass class]]) { do something...} into a piece of code that gets called pretty often. Will I introduce a significant performance penalty? In Objective C, is there a quicker way of assessing whether a given object instance is of certain class type? For example, is the following quicker? (I realize the test is somewhat different) if (myInstance.class == [SomeClass class]) { do something else...}

  • Is it possible to simulate load balancers, and how much performance gain do they provide?

    - by Lakhlani Prashant
    I have websites which run on IIS (ASP.NET applications, some of them in DotNetNuke), and we are expecting higher traffic on some of the sites, so we are planning to add a load balancer. But before doing that, we just want to know whether it is worth it. So, I want to know: is it possible to simulate a load balancer, and how much performance gain does one provide?

  • Correct way to generate order numbers in SQL Server

    - by Anton Gogolev
    This question certainly applies to a much broader scope, but here it is. I have a basic ecommerce app, where users can, naturally enough, place orders. Said orders need to have a unique number, which I'm trying to generate right now. Each order is Vendor-specific. Basically, I have an OrderNumberInfo (VendorID, OrderNumber) table. Now whenever a customer places an order I need to increment OrderNumber for a particular Vendor and return that value. Naturally, I don't want other processes to interfere with me, so I need to exclusively lock this row somehow: begin transaction declare @n int select @n = OrderNumber from OrderNumberInfo where VendorID = @vendorID update OrderNumberInfo set OrderNumber = @n + 1 where OrderNumber = @n and VendorID = @vendorID commit transaction Now, I've read about select ... with (updlock rowlock), pessimistic locking, etc., but just cannot fit all this into a coherent picture: How do these hints play with SQL Server 2008's snapshot isolation? Do they perform row-level, page-level or even table-level locks? How does this tolerate multiple users trying to generate numbers for a single Vendor? What isolation levels are appropriate here? And generally - what is the way to do such things?
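
    One pattern worth mentioning is doing the read and the increment in a single UPDATE, so there is no separate SELECT that needs a lock hint at all; the exclusive row lock taken by the UPDATE makes concurrent callers for the same vendor queue up behind each other until commit. A sketch against the OrderNumberInfo table described above, offered as an illustration of the single-statement approach rather than the only correct answer; whether the pre- or post-increment value is handed out is a detail to adjust:

        DECLARE @n int;

        UPDATE OrderNumberInfo
        SET    @n = OrderNumber = OrderNumber + 1   -- read and increment in one atomic statement
        WHERE  VendorID = @vendorID;

        SELECT @n AS NewOrderNumber;                -- the freshly assigned number for this vendor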

  • How to access SQL CE 3.5 from VB6

    - by Masterfu
    We have a .NET 3.5 SP1 application written in C# that stores data in an SQL CE 3.5 Database. We also need to access (read only) this very data from a legacy VB6 application. I don't know if this is at all possible. There are several approaches to this problem that I can think of. 1) I have read about ADOCE Connections, but this seems to be an option for embedded Visual Basic only 2) I can't seem to get a connection working using ADODB.Connection Objects like so Dim MyConnObj As New ADODB.Connection ' Microsoft.SQLSERVER.CE.OLEDB.3.5 ' Microsoft.SQLSERVER.MOBILE.OLEDB.3.0 MyConnObj.ConnectionString = "Provider=SQLOLEDB;Data Source=c:\test.sdf" MyConnObj.Open Maybe this is just a bad choice of providers? I also tried the providers that appear as comments above and different connection strings, but to no avail. Both providers are not installed on my dev machine and won't be installed on my customer's machine. 3) Maybe there is a way to use a more generic approach like ODBC? But I believe this would result in setup / deployment work, right? Does anyone have any experience with this scenario? As you can see, I am really looking for some good starting points. I also accept answers like "This is drop dead simple and so are you" as long as they come with some guiding directions ;-)

  • How to prevent the LINQ to SQL designer from undoing my changes

    - by anonim.developer
    Dear all, thanks for your attention in advance. I've run into an issue with the LINQ to SQL designer in VS 2008 SP1 which has made me CRAZY. I use LINQ to SQL as my DAL. It seems LINQ to SQL speeds up coding at first, but lots of issues arise later, specifically with table or object inheritance. In this case I have a class Entity that all other entity classes generated by the LINQ to SQL designer inherit from: public abstract class Entity { public virtual Guid ID { get; protected set; } } public partial class User : monius.Data.Entity { } And the following is generated by the L2S designer (DataModel.designer.cs): [Column(Storage = "_ID", AutoSync = AutoSync.OnInsert, DbType = "UniqueIdentifier NOT NULL", IsPrimaryKey = true, IsDbGenerated = true, UpdateCheck = UpdateCheck.Never)] [DataMember(Order = 1)] public System.Guid ID { get { return this._ID; } set { if ((this._ID != value)) { this.OnIDChanging(value); this.SendPropertyChanging(); this._ID = value; this.SendPropertyChanged("ID"); this.OnIDChanged(); } } } When I compile the code, VS warns me: Warning 1 'User.ID' hides inherited member 'Entity.ID'. To make the current member override that implementation, add the override keyword. Otherwise add the new keyword. That warning is obvious, and I have to change the code generated by the L2S designer (DataModel.designer.cs) to […] public override System.Guid ID { … protected set … } Then the code compiles with no error or warning and everyone is happy. But that is not the end of the story. As soon as I make changes to entities in the diagram (dbml), or even open the dbml file to view it, any change I made manually to the designer file vanishes and POOF! Redo AGAIN. That is a painful job. Now I wonder if there is a way to force the L2S designer not to change portions of the auto-generated code. I'd appreciate it if someone could kindly help me with this issue.

  • Does 3D modeling software choice during asset creation affect performance at runtime?

    - by user134143
    Does the software used to create 3D assets (for game development specifically) have an impact on the efficiency of the program? In other words: is it possible to reduce the operating footprint of an application merely by utilizing alternative development software during production of 3D assets? If you use two different applications to create a 3-dimensional image of a box, can one of them result in better performance if aspects of the image are identical? Sorry if this question seems vague; I am attempting to get the information I need without causing unnecessary debate over specific software choice. Thank you.

  • SQL: Need help with query construction.

    - by Geeknidas
    Hi guys, I am relatively new to SQL and I need some help with some basic query construction. Problem: retrieve the number of orders and the customer id from a table, based on a set of parameters. I want to write a query that figures out the number of orders under each customer (column: CustomerId), along with the CustomerId, where the number of orders is greater than or equal to 10 and the status of the order is Active. Moreover, I also want to know the first transaction date of an order belonging to each CustomerId. Table description: product_orders

        Orderid  CustomerId  Transaction_date  Status
        -------  ----------  ----------------  -------
        1        23          2-2-10            Active
        2        22          2-3-10            Active
        3        23          2-3-10            Deleted
        4        23          2-3-10            Active

    Query that I have written: select count(*), customerid from product_orders where status = 'Active' GROUP BY customerid ORDER BY customerid; The above statement gives me the count of all orders under a customer id, but it does not fulfil the condition of at least 10 orders. I also do not know how to display the first transaction date along with the orders under a CustomerId (the status could be Active or Deleted, it doesn't matter). The ideal solution should look like:

        Total Orders  CustomerID  Transaction Date (the first transaction date)
        ------------  ----------  ----------------
        11            23          1-2-10

    Thanks in advance. I hope you guys would be kind enough to stop by and help me out. Cheers, Leonidas
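
    A sketch of the usual shape of this query; the count includes only Active orders while the first date considers any status, which is how the question reads, and it assumes Transaction_date is stored as a date type so MIN returns the earliest transaction:

        SELECT SUM(CASE WHEN Status = 'Active' THEN 1 ELSE 0 END) AS TotalOrders,
               CustomerId,
               MIN(Transaction_date) AS FirstTransactionDate   -- earliest order, any status
        FROM product_orders
        GROUP BY CustomerId
        HAVING SUM(CASE WHEN Status = 'Active' THEN 1 ELSE 0 END) >= 10
        ORDER BY CustomerId;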

  • Find out which row caused the error

    - by Felipe Fiali
    I have a big fat query that's written dynamically to integrate some data. Basically what it does is query some tables, join some other ones, treat some data, and then insert it into a final table. The problem is that there's too much data, and we can't really trust the sources, because there could be some errored or inconsistent data. For example, I've spent almost an hour looking for an error while developing using a customer's database because somewhere in the middle of my big fat query there was an error converting some varchar to datetime. It turned out to be that they had some sales dating '2009-02-29', an out-of-range date. And yes, I know. Why was that stored as varchar? Well, the source database has 3 columns for dates, 'Month', 'Day' and 'Year'. I have no idea why it's like that, but still, it is. But how the hell would I treat that, if the source is not trustable? I can't HANDLE exceptions, I really need that it comes up to another level with the original message, but I wanted to provide some more info, so that the user could at least try to solve it before calling us. So I thought about displaying to the user the row number, or some ID that would at least give him some idea of what record he'd have to correct. That's also a hard job because there will be times when the integration will run up to 80000 records. And in an 80000 records integration, a single dummy error message: 'The conversion of a varchar data type to a datetime data type resulted in an out-of-range datetime value' means nothing at all. So any idea would be appreciated. Oh I'm using SQL Server 2005 with Service Pack 3.
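
    Since the source cannot be trusted, one way to point the user at the offending records is to hunt for them up front instead of letting the big conversion fail. A sketch assuming the three source columns are literally named Year, Month and Day in a table called SourceSales (the question describes but does not name them); ISDATE exists in SQL Server 2005, though it is sensitive to language and dateformat settings, so treat this as a first pass:

        -- List source rows whose Year/Month/Day pieces do not form a real date (e.g. 2009-02-29)
        SELECT SourceId, [Year], [Month], [Day]
        FROM   SourceSales
        WHERE  ISDATE([Year] + '-' + [Month] + '-' + [Day]) = 0;

    Reporting these rows before the integration runs gives the user a concrete record to fix rather than a generic out-of-range message.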

  • How to get XML element/attribute name in SQL Server 2005

    - by OG Dude
    Hi, I have a simple procedure in SQL Server 2005 which takes some XML as input. The element attributes correspond to field names in tables. I'd like to be able to determine <elementName>, <attribNameX> dynamically as to avoid having to hardcode them into the procedure. How can I do this? The XML looks like this: <ROOT> <elementName attribName1 = "xxx" attribName2 = "yyy"/> <elementName attribName1 = "aaa" attribName2 = "bbb"/> ... </ROOT> The stored procedure like this: CREATE PROC dbo.myProc ( @XMLInput varchar(1000) ) AS BEGIN SET NOCOUNT ON DECLARE @XMLDocHandle int EXEC sp_xml_preparedocument @XMLDocHandle OUTPUT, @XMLInput SELECT someTable.someCol FROM dbo.someTable JOIN OPENXML (@XMLDocHandle, '/ROOT/elementName',1) WITH (attrib1Name int, attrib2Name int) AS XMLData ON someTable.attribName1 = XMLData.attribName1 AND someTable.attribName2 = XMLData.attribName2 EXEC sp_xml_removedocument @XMLDocHandle END GO The question has been asked here before but maybe there is a cleaner solution. Additionally, I'd like to pass the tablename as a parameter as well - I read some stuff arguing that this is bad style - so what would be a good solution for having a dynamic tablename? Thanks a lot in advance, /David
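
    If the document is loaded into the xml data type rather than OPENXML, the XQuery local-name() function can report the names without hardcoding them. A minimal sketch (the nvarchar sizes are arbitrary, and whether this beats OPENXML for the actual join is a separate question):

        DECLARE @x xml;
        SET @x = N'<ROOT>
          <elementName attribName1="xxx" attribName2="yyy"/>
          <elementName attribName1="aaa" attribName2="bbb"/>
        </ROOT>';

        -- Names of the elements under ROOT
        SELECT e.n.value('local-name(.)', 'nvarchar(200)') AS ElementName
        FROM @x.nodes('/ROOT/*') AS e(n);

        -- Names and values of those elements' attributes
        SELECT a.n.value('local-name(.)', 'nvarchar(200)') AS AttributeName,
               a.n.value('.', 'nvarchar(400)')             AS AttributeValue
        FROM @x.nodes('/ROOT/*/@*') AS a(n);

    For the dynamic table name, dynamic SQL built with QUOTENAME and run through sp_executesql is the usual compromise, though that is exactly the "bad style" trade-off the question mentions.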

  • SQL 2008 Querying Soap XML

    - by Vince
    I have been trying to process this SOAP XML return using SQL but all I get is NULL or nothing at all. I have tried different ways and pasted them all below. Declare @xmlMsg xml; Set @xmlMsg = '<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"> <soap:Body> <SendWarrantyEmailResponse xmlns="http://Web.Services.Warranty/"> <SendWarrantyEmailResult xmlns="http://Web.Services.SendWarrantyResult"> <WarrantyNumber>120405000000015</WarrantyNumber> <Result>Cannot Send Email to anonymous account!</Result> <HasError>true</HasError> <MsgUtcTime>2012-06-07T01:11:36.8665126Z</MsgUtcTime> </SendWarrantyEmailResult> </SendWarrantyEmailResponse> </soap:Body> </soap:Envelope>'; declare @table table (data xml); insert into @table values (@xmlMsg); select data from @table; WITH xmlnamespaces ('http://schemas.xmlsoap.org/soap/envelope/' as [soap], 'http://Web.Services.Warranty' as SendWarrantyEmailResponse, 'http://Web.Services.SendWarrantyResult' as SendWarrantyEmailResult) SELECT Data.value('(/soap:Envelope[1]/soap:Body[1]/SendWarrantyEmailResponse[1]/SendWarrantyEmailResult[1]/WarrantyNumber[1])[1]','VARCHAR(500)') AS WarrantyNumber FROM @Table ; ;with xmlnamespaces('http://schemas.xmlsoap.org/soap/envelope/' as [soap],'http://Web.Services.Warranty' as SendWarrantyEmailResponse,'http://Web.Services.SendWarrantyResult' as SendWarrantyEmailResult) --select @xmlMsg.value('(/soap:Envelope/soap:Body/SendWarrantyEmailResponse/SendWarrantyEmailResult/WarrantyNumber)[0]', 'nvarchar(max)') --select T.N.value('.', 'nvarchar(max)') from @xmlMsg.nodes('/soap:Envelope/soap:Body/SendWarrantyEmailResponse/SendWarrantyEmailResult') as T(N) select @xmlMsg.value('(/soap:Envelope/soap:Body/SendWarrantyEmailResponse/SendWarrantyEmailResult/HasError)[1]','bit') as test ;with xmlnamespaces('http://schemas.xmlsoap.org/soap/envelope/' as [soap],'http://Web.Services.Warranty' as [SendWarrantyEmailResponse],'http://Web.Services.SendWarrantyResult' as [SendWarrantyEmailResult]) SELECT SendWarrantyEmailResult.value('WarrantyNumber[1]','varchar(max)') AS WarrantyNumber, SendWarrantyEmailResult.value('Result[1]','varchar(max)') AS Result, SendWarrantyEmailResult.value('HasError[1]','bit') AS HasError, SendWarrantyEmailResult.value('MsgUtcTime[1]','datetime') AS MsgUtcTime FROM @xmlMsg.nodes('/soap:Envelope/soap:Body/SendWarrantyEmailResponse/SendWarrantyEmailResult') SendWarrantyEmailResults(SendWarrantyEmailResult)
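
    For comparison, one detail that commonly yields NULL here is the namespace binding: SendWarrantyEmailResponse sits in http://Web.Services.Warranty/ (note the trailing slash), while SendWarrantyEmailResult and its children sit in http://Web.Services.SendWarrantyResult, and every step of the path needs the matching prefix. A sketch of that binding, offered as an assumption about the cause rather than a verified fix:

        ;WITH XMLNAMESPACES ('http://schemas.xmlsoap.org/soap/envelope/' AS soap,
                             'http://Web.Services.Warranty/'            AS w,
                             'http://Web.Services.SendWarrantyResult'   AS r)
        SELECT @xmlMsg.value('(/soap:Envelope/soap:Body/w:SendWarrantyEmailResponse/r:SendWarrantyEmailResult/r:WarrantyNumber)[1]', 'varchar(50)') AS WarrantyNumber,
               @xmlMsg.value('(/soap:Envelope/soap:Body/w:SendWarrantyEmailResponse/r:SendWarrantyEmailResult/r:HasError)[1]', 'bit') AS HasError;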

  • PHP and performance

    - by Naif
    I always hear that PHP is for medium and small websites whereas .NET and Java for enterprise applications. My question is about PHP. Why is PHP not a good option for enterprise web applications? Is it because if the web application becomes bigger then PHP will be slower as it is an interpreted language? I know that corporate world will choose .NET or J2EE because of the integration with their products and because of back end services, etc. However, if we just have PHP for building sites and web applications then how can we use it to perform well with big sites? In short, Is there a relationship between the performance of PHP and the size of the website? What are the factors that make PHP not appropriate option for big sites?
