Search Results

Search found 27519 results on 1101 pages for 'sql learner'.

Page 593/1101 | < Previous Page | 589 590 591 592 593 594 595 596 597 598 599 600  | Next Page >

  • Transactional isolation level needed for safely incrementing ids

    - by Knut Arne Vedaa
    I'm writing a small piece of software that is to insert records into a database used by a commercial application. The unique primary keys (ids) in the relevant table(s) are sequential, but do not seem to be set to "auto increment". Thus, I assume, I will have to find the largest id, increment it and use that value for the record I'm inserting. In pseudo-code for brevity:

        id = select max(id) from some_table
        id++
        insert into some_table values(id, othervalues...)

    Now, if another thread started the same transaction before the first one finished its insert, you would get two identical ids and a failure when trying to insert the last one. You could check for that failure and retry, but a simpler solution might be setting an isolation level on the transaction. For this, would I need SERIALIZABLE or a lower level? Additionally, is this, generally, a sound way of solving the problem? Are there any other ways of doing it?
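
    One way to close the gap between the read and the insert, sketched below for SQL Server with illustrative table and column names, is to take an update lock while reading the current maximum, so two concurrent transactions cannot both see the same value:

        BEGIN TRANSACTION;
        DECLARE @id int;
        -- UPDLOCK + HOLDLOCK keep a second transaction from reading the same
        -- MAX(id) until this transaction commits.
        SELECT @id = COALESCE(MAX(id), 0) + 1
        FROM some_table WITH (UPDLOCK, HOLDLOCK);
        INSERT INTO some_table (id, othervalue) VALUES (@id, 'example');
        COMMIT TRANSACTION;

    Plain SERIALIZABLE would also prevent the duplicate insert, but typically by deadlocking one of the two transactions, which then has to retry.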

    Read the article

  • Tables with no Primary Key

    - by Matt Hamilton
    I have several tables whose only unique data is a uniqueidentifier (a Guid) column. Because guids are non-sequential (and they're client-side generated so I can't use newsequentialid()), I have made a non-primary, non-clustered index on this ID field rather than giving the tables a clustered primary key. I'm wondering what the performance implications are for this approach. I've seen some people suggest that tables should have an auto-incrementing ("identity") int as a clustered primary key even if it doesn't have any meaning, as it means that the database engine itself can use that value to quickly look up a row instead of having to use a bookmark. My database is merge-replicated across a bunch of servers, so I've shied away from identity int columns as they're a bit hairy to get right in replication. What are your thoughts? Should tables have primary keys? Or is it ok to not have any clustered indexes if there are no sensible columns to index that way?
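
    The commonly suggested layout described above would look roughly like this (a sketch, SQL Server syntax, illustrative names): a meaningless identity column as the clustered primary key, with the client-generated guid kept unique by a non-clustered index:

        CREATE TABLE dbo.Document (
            DocumentKey int IDENTITY(1,1) NOT NULL,  -- narrow, ever-increasing cluster key
            DocumentId  uniqueidentifier NOT NULL,   -- client-generated guid
            CONSTRAINT PK_Document PRIMARY KEY CLUSTERED (DocumentKey)
        );
        CREATE UNIQUE NONCLUSTERED INDEX IX_Document_DocumentId
            ON dbo.Document (DocumentId);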

    Read the article

  • SQL Server 2005 replication issue

    - by Francesco
    Hi, I have a peer-to-peer merge replication between two SQL Server 2005 instances. The first server is both the publisher and the distributor. All works fine, but if the VPN goes down for a couple of hours the replication goes down too, and I need to manually restart SQL Server Agent. In the SQL Server Agent properties I have set the two agent restart options, but nothing happens. How can I set up an automatic restart of SQL Server Agent when the VPN goes down? Thanks

    Read the article

  • SQL Concurrent test update question

    - by ptoinson
    Howdy Folks, I have a SQL Server 2008 database in which I have a table for Tags. A tag is just an id and a name. The definition of the Tag table looks like:

        CREATE TABLE [dbo].[Tag](
            [ID] [int] IDENTITY(1,1) NOT NULL,
            [Name] [varchar](255) NOT NULL
            CONSTRAINT [PK_Tag] PRIMARY KEY CLUSTERED ([ID] ASC)
            WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
                  ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON)
        )

    Name also has a unique index. Further, I have several processes adding data to this table at a pretty rapid rate. These processes use a stored proc that looks like:

        ALTER PROC [dbo].[lg_Tag_Insert]
            @Name varchar(255)
        AS
            DECLARE @ID int
            SET @ID = (SELECT ID FROM Tag WHERE Name = @Name)
            IF @ID IS NULL
            BEGIN
                INSERT Tag(Name) VALUES (@Name)
                RETURN SCOPE_IDENTITY()
            END
            ELSE
            BEGIN
                RETURN @ID
            END

    My issue is that, other than being a novice at concurrent database design, there seems to be a race condition that is causing me to occasionally get an error that I'm trying to enter duplicate keys (Name) into the DB. The error is: Cannot insert duplicate key row in object 'dbo.Tag' with unique index 'IX_Tag_Name'. This makes sense, I'm just not sure how to fix it. If it were application code I would know how to lock the right areas; SQL Server is quite a different beast. First question: what is the proper way to code this "check, then update" pattern? It seems I need to get an exclusive lock on the row during the check, rather than a shared lock, but it's not clear to me the best way to do that. Any help in the right direction will be greatly appreciated. Thanks in advance.
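
    One common way to close this race, sketched here rather than offered as the only fix, is to hold an update lock on the key for the duration of the check and insert, so two concurrent callers with the same new Name serialize instead of both passing the NULL check:

        ALTER PROC [dbo].[lg_Tag_Insert]
            @Name varchar(255)
        AS
        BEGIN
            SET NOCOUNT ON;
            DECLARE @ID int;
            BEGIN TRANSACTION;
            -- UPDLOCK + HOLDLOCK make the existence check exclusive until commit,
            -- so a second caller waits here instead of also seeing "no row".
            SELECT @ID = ID FROM Tag WITH (UPDLOCK, HOLDLOCK) WHERE Name = @Name;
            IF @ID IS NULL
            BEGIN
                INSERT Tag(Name) VALUES (@Name);
                SET @ID = SCOPE_IDENTITY();
            END
            COMMIT TRANSACTION;
            RETURN @ID;
        END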

    Read the article

  • Why would I do an inner join on a non-distinct field?

    - by froadie
    I just came across a query that does an inner join on a non-distinct field. I've never seen this before and I'm a little confused about this usage. Something like:

        SELECT DISTINCT all, my, stuff
        FROM myTable
        INNER JOIN myOtherTable
            ON myTable.nonDistinctField = myOtherTable.nonDistinctField
        (WHERE some filters here...)

    I'm not quite sure what my question is or how to phrase it, or why exactly this confuses me, but I was wondering if anyone could explain why someone would need to do an inner join on a non-distinct field and then select only distinct values...? Is there ever a legitimate use of an inner join on a non-distinct field? What would be the purpose? And if there is a legitimate reason for such a query, can you give examples of where it would be used?
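
    Joining on non-unique columns is routine: any one-to-many relationship joins this way, and the join multiplies rows that DISTINCT then collapses. A small illustration with hypothetical tables:

        -- CustomerId is unique in Customer but repeats in Orders; the join
        -- yields one row per matching order, and DISTINCT collapses them when
        -- you only want each customer that has at least one matching order.
        SELECT DISTINCT c.CustomerName
        FROM Customer c
        INNER JOIN Orders o ON o.CustomerId = c.CustomerId
        WHERE o.OrderDate >= '2010-01-01';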

    Read the article

  • Saving data to server with user accounts.

    - by AKRamkumar
    Ok, so for an app I am making, I want the user to be able to save data online. On my website, I will provide a web server with tables of UserName/Password/SaveData. How can I do this without overloading the server? How can I guarantee security? Is there a design pattern for this? This is going to be a free application, available to the public, and I would like for users' settings to be available no matter the computer they are using. Is there a better way of doing this? I am using MEF for plugins, so is there a way I can save plugin data as well?
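
    A minimal storage sketch for the UserName/Password/SaveData idea (SQL Server syntax, illustrative names; the key point is storing a salted hash, never the password itself):

        CREATE TABLE UserAccount (
            UserId       int IDENTITY(1,1) PRIMARY KEY,
            UserName     varchar(64) NOT NULL UNIQUE,
            PasswordHash varbinary(64) NOT NULL,  -- hash of salt + password, never plain text
            PasswordSalt varbinary(16) NOT NULL   -- random per-user salt
        );
        CREATE TABLE UserSaveData (
            UserId    int NOT NULL REFERENCES UserAccount(UserId),
            PluginKey varchar(64) NOT NULL,       -- lets each MEF plugin keep its own blob
            SaveData  varbinary(max) NOT NULL,
            PRIMARY KEY (UserId, PluginKey)
        );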

    Read the article

  • How to design a generic database whose layout may change over time?

    - by mawg
    Here's a tricky one - how do I programmatically create and interrogate a database whose contents I can't really foresee? I am implementing a generic input form system. The user can create PHP forms with a WYSIWYG layout and use them for any purpose he wishes. He can also query the input. So, we have three stages:

    1. a form is designed and generated. This is a one-off procedure, although the form can be edited later. This designs the database.
    2. someone or several people make use of the form - say for daily sales reports, stock keeping, payroll, etc. Their input to the forms is written to the database.
    3. others, maybe management, can query the database and generate reports.

    Since these forms are generic, I can't predict the database structure - other than to say that it will reflect HTML form fields and consist of the data input from a collection of edit boxes, memos, radio buttons and the like. Questions and remarks:

    A) how can I best structure the database, in terms of tables and columns? What about primary keys? My first thought was to use the control name to identify each column, then I realized that the user can edit the form and rename, so that maybe "name" becomes "employee" or "wages" becomes "salary". I am leaning towards a unique number for each.

    B) how best to key the rows? I was thinking of a timestamp to allow me to query, and a column for the row id from A)

    C) I have to handle column rename/insert/delete. For deletion, I am unsure whether to delete the data from the database. Even if the user is not inputting it from the form any more, he may wish to query what was previously entered. Or there may be some legal requirements to retain the data. Any gotchas in column rename/insert/delete?

    D) For the querying, I can have my PHP interrogate the database to get column names and generate a form with a list where each entry has a database column name, a checkbox to say if it should be used in the query and, based on column type, some selection criteria. That ought to be enough to build searches like "position = 'senior salesman' and salary > 50k".

    E) I probably have to generate some fancy charts - graphs, histograms, pie charts, etc. for query results of numerical data over time. I need to find some good FOSS PHP for this.

    F) What else have I forgotten? This all seems very tricky to me, but I am a database n00b - maybe it is simple to you gurus?
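
    One standard way to handle a schema you can't predict is an entity-attribute-value layout: fields and submissions each get their own table, and every individual answer is one row. A sketch, assuming MySQL and illustrative names:

        CREATE TABLE form_field (      -- one row per control; survives renames
            field_id   INT AUTO_INCREMENT PRIMARY KEY,
            form_id    INT NOT NULL,
            label      VARCHAR(255) NOT NULL,           -- current display name, freely editable
            field_type VARCHAR(32)  NOT NULL,           -- edit box, memo, radio, ...
            deleted    TINYINT(1)   NOT NULL DEFAULT 0  -- soft delete keeps old data queryable
        );
        CREATE TABLE form_submission (
            submission_id INT AUTO_INCREMENT PRIMARY KEY,
            form_id       INT NOT NULL,
            submitted_at  TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
        );
        CREATE TABLE form_value (      -- one row per answered field per submission
            submission_id INT NOT NULL,
            field_id      INT NOT NULL,
            value         TEXT,
            PRIMARY KEY (submission_id, field_id)
        );

    Renaming a control then only touches form_field.label, and the numeric field_id plays the role of the "unique number for each" mentioned in A).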

    Read the article

  • What are the types and inner workings of a query optimizer?

    - by Frank Developer
    As I understand it, most query optimizers are cost-based. Some can be influenced by hints like FIRST_ROWS(). Others are tailored for OLAP. Is it possible to know more detailed logic about how Informix IDS and SE's optimizers decide what's the best route for processing a query, other than SET EXPLAIN? Is there any documentation which illustrates the ranking of SELECT statements? I would imagine that "SELECT col FROM table WHERE ROWID = n" ranks 1st. What are the rest of them?.. If I'm not mistaken, Informix's ROWID is a SERIAL(INT) which allows for a max of 2G rows, or maybe it uses INT8 for TBs of rows?.. However, I think Oracle uses HEX values for ROWID. Too bad ROWID can't often be used, since a row's ROWID can change. So maybe ROWID is used by the optimizer as a counter? Perhaps it could be used for implementing the query progress idea I mentioned in my "Begin viewing query results before query completes" question? For some reason, I feel it wouldn't be that difficult to report a query's progress while it is being processed, perhaps at the expense of some slight overhead, but it would be nice to know ahead of time: a "Google-like" estimate of how many rows meet a query's criteria, display its progress every 100, 200, 500 or 1,000 rows, give users the ability to cancel it at any time and start displaying the qualifying rows as they are put into the current list, while it continues searching?.. This is just one example; perhaps we could think of other neat/useful features - the ingredients are more or less there. Perhaps we could fine-tune each query with more granularity than currently available? OLTP queries tend to be mostly static and pre-defined. The "what-ifs" are more OLAP, so let's try to add more control and intelligence to it? So, therefore, being able to more precisely control - not just "hint-influence" - a query is what's needed, and therefore it would be necessary to know how the optimizer's logic is programmed. We could then have dynamic SELECT and other statements for specific situations! Maybe even tell IDS to read blocks of index nodes at a time instead of one-by-one, etc.
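
    For the SET EXPLAIN route mentioned above, the usage is just a session switch; Informix then writes the chosen access plan to a text file (sqexplain.out):

        SET EXPLAIN ON;
        SELECT col FROM tab WHERE rowid = 1;  -- plan for this query is appended to sqexplain.out
        SET EXPLAIN OFF;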

    Read the article

  • ISBNdb Retrieving and Managing Info

    - by Pierre Sylvestre
    Given a set of databases, how can one get information on a book, with a given price shown first (which consists of the average of a hidden list of prices coming from different web sites), followed by an optional second section that shows the list of the different web sites with their pages? To take an example, given a query for ISBN 9785554443331, it returns "Chemistry: The Central Science, 11th edition":

        new: $50
        used, good condition: $35
        used, poor condition: $20

    If the result does not match our product list, an option to "click here to visit our partner" appears, which returns:

        Atextbook: $10
        Btextbook: $10
        Ctextbook: $9
        Dtextbook: $8.50

    I understand that the first search would be done simultaneously on our database and on the web: the database to determine whether or not we have the book, and the web to get the average of the prices from a given list of web sites. Thank you in advance for the help

    Read the article

  • LINQtoSQL Custom Constructor off Partial Class?

    - by sah302
    Hi all, I read this question here: http://stackoverflow.com/questions/82409/is-there-a-way-to-override-the-empty-constructor-in-a-class-generated-by-linqtosq Typically my constructor would look like:

        public User(String username, String password, String email, DateTime birthday, Char gender)
        {
            this.Id = Guid.NewGuid();
            this.DateCreated = this.DateModified = DateTime.Now;
            this.Username = username;
            this.Password = password;
            this.Email = email;
            this.Birthday = birthday;
            this.Gender = gender;
        }

    However, as read in that question, you want to use the partial method OnCreated() instead, to assign values without overwriting the default constructor. Okay, so I got this:

        partial void OnCreated()
        {
            this.Id = Guid.NewGuid();
            this.DateCreated = this.DateModified = DateTime.Now;
            this.Username = username;
            this.Password = password;
            this.Email = email;
            this.Birthday = birthday;
            this.Gender = gender;
        }

    However, this gives me two errors: "Partial Methods must be declared private" and "Partial Methods must have empty method bodies". Alright, I change it to Private Sub OnCreated() to remove both of those errors. However, I am still stuck with... how can I pass it values as I would with a normal custom constructor? Also, I am doing this in VB (I converted the code above since I know most people know/prefer C#), so would that have an effect on this?

    Read the article

  • When is referential integrity not appropriate?

    - by Curtis Inderwiesche
    I understand the need to have referential integrity for limiting specific values on entry, or possibly preventing their removal when deletion is requested. However, I am unclear as to valid use cases which would exclude this mechanism from always being used. I guess this falls into several sub-questions:

    1. When is referential integrity not appropriate?
    2. Is it appropriate to have fields containing multiple and/or possibly incomplete subsets of a foreign key's list?
    3. Typically, should this be a schema structure design decision or an interface design decision? (Or possibly neither, or both.)

    Thoughts?
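
    For reference, the mechanism under discussion, as it is usually declared (standard SQL, illustrative names):

        ALTER TABLE order_line
            ADD CONSTRAINT fk_order_line_order
            FOREIGN KEY (order_id) REFERENCES orders (order_id)
            ON DELETE RESTRICT;  -- blocks deleting an order that still has lines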

    Read the article

  • Any way to speed up this hierarchical query?

    - by RenderIn
    I've got a serious performance problem with a hierarchical query that I can't seem to fix. I am modeling several organization charts in my database, each representing a virtual organization within our company. For example, we have several temporary committees that are created from time to time, and there may be a Committee Organizer role at the top of this virtual hierarchy, with several people assigned to the Committee Member role beneath the organizer. Some of our virtual organizations have many levels and several branches at each level.

    I have a single table in which I represent all the role assignments, i.e. a ROLE_ID column and a PARENT_ROLE_ID column which is a foreign key to the ROLE_ID column. For each assignment we also store as a column the location in the company where this person has the assignment. For example, the Committee Organizer would have a company-level/CEO assignment, while the committee members would have department-level assignments such as ACCOUNTING, MARKETING, etc. So to model the organizer/member relationship for two individuals we would have:

        ROLE_ID = 4, PARENT_ROLE_ID = NULL, EMPLOYEE_NUMBER = 213423, COMPANY_LOCATION = CEO
        ROLE_ID = 5, PARENT_ROLE_ID = 4,    EMPLOYEE_NUMBER = 838221, COMPANY_LOCATION = ACCOUNTING

    Here's where things get tricky. I have an application that every person in the organization can log in to. When they log in they should be able to view all the virtual organizations in our company, e.g. the committee members should be able to see the committee organizer and vice-versa. However, only the committee organizer should be able to edit the committee members. The difficulty is in determining whether an individual (who can have multiple role assignments) has edit access for each other assignment. While this seems simple in the example, consider a virtual organization in which we have President at the top, 5 departments directly beneath him, and 2 subdepartments below each department. We only want people in the Accounting department to be able to edit individuals in the subdepartments belonging to the Accounting department. They should not have edit access to anybody in the Marketing department or its subdepartments.

    To determine edit access when a user views a virtual organization in our company, I run a query that executes two inline views: A) hierarchically query for all assignments in this virtual organization, using SYS_CONNECT_BY_PATH to store the entire path to each user/role/company_location, and B) hierarchically retrieve all the assignments the logged-in individual has, using SYS_CONNECT_BY_PATH to store the entire path to each of these assignments. The result of the query is all the records from A), plus a boolean, determined by joining with B), which flags whether the logged-in user has edit access for each record.

    Indexes don't seem to be helping... it simply appears that there is too much processing going on to separate all the records and then determine edit access. One issue is that I can't store the SYS_CONNECT_BY_PATH and index it... determining whether an individual record has edit access consists of comparing:

        test_record_sys_path LIKE individual_record_sys_path || '%'

    Is a materialized view the answer?
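
    For reference, the shape of inline view A) as described, in Oracle syntax with an assumed table name (the question does not give one):

        SELECT role_id, employee_number, company_location,
               SYS_CONNECT_BY_PATH(role_id, '/') AS role_path
        FROM   role_assignment
        START WITH parent_role_id IS NULL
        CONNECT BY PRIOR role_id = parent_role_id;

    The edit-access join then reduces to the path-prefix test quoted above: a user may edit a record when one of his own role_path values is a prefix of that record's role_path.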

    Read the article

  • Are database triggers evil?

    - by WW
    Are database triggers a bad idea? In my experience they are evil, because they can result in surprising side effects and are difficult to debug (especially when one trigger fires another). Often developers do not even think of looking to see if there is a trigger. On the other hand, it seems like if you have logic that must occur every time a new FOO is created in the database, then the most foolproof place to put it is an insert trigger on the FOO table. The only time we're using triggers is for really simple things like setting the ModifiedDate.
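
    The ModifiedDate case mentioned above, as a minimal sketch (SQL Server syntax, illustrative table and key names; assumes recursive triggers are off, the SQL Server default):

        CREATE TRIGGER trg_Foo_SetModifiedDate
        ON dbo.Foo
        AFTER UPDATE
        AS
        BEGIN
            SET NOCOUNT ON;  -- avoids extra rowcount messages confusing callers
            UPDATE f
            SET    f.ModifiedDate = GETDATE()
            FROM   dbo.Foo AS f
            JOIN   inserted AS i ON i.FooId = f.FooId;  -- only the rows just updated
        END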

    Read the article

  • Looping through recordset with VBA

    - by Robert
    I am trying to assign salespeople (rsSalespeople) to customers (rsCustomers) in a round-robin fashion in the following manner:

    1. Navigate to the first Customer; assign the first SalesPerson to the Customer. Move to the next Customer.
    2. If rsSalespeople is not at EOF, move to the next SalesPerson; if rsSalespeople is at EOF, MoveFirst to loop back to the first SalesPerson. Assign this (current) SalesPerson to the (current) Customer.
    3. Repeat step 2 until rsCustomers is at EOF (EOF = True, i.e. end of recordset).

    It's been a while since I dealt with VBA, so I'm a bit rusty, but here is what I have come up with so far:

        Private Sub Command31_Click()
            'On Error GoTo ErrHandler
            Dim intCustomer As Integer
            Dim intSalesperson As Integer
            Dim rsCustomers As DAO.Recordset
            Dim rsSalespeople As DAO.Recordset
            Dim strSQL As String

            strSQL = "SELECT CustomerID, SalespersonID FROM Customers WHERE SalespersonID Is Null"
            Set rsCustomers = CurrentDb.OpenRecordset(strSQL)
            strSQL = "SELECT SalespersonID FROM Salespeople"
            Set rsSalespeople = CurrentDb.OpenRecordset(strSQL)

            rsCustomers.MoveFirst
            rsSalespeople.MoveFirst
            Do While Not rsCustomers.EOF
                intCustomers = rsCustomers!CustomerID
                intSalesperson = rsSalespeople!SalespersonID
                strSQL = "UPDATE Customers SET SalespersonID = " & intSalesperson & _
                         " WHERE CustomerID = " & intCustomer
                DoCmd.RunSQL (strSQL)
                rsCustomers.MoveNext
                If Not rsSalespeople.EOF Then
                    rsSalespeople.MoveNext
                Else
                    rsSalespeople.MoveFirst
                End If
            Loop

        ExitHandler:
            Set rsCustomers = Nothing
            Set rsSalespeople = Nothing
            Exit Sub

        ErrHandler:
            MsgBox (Err.Description)
            Resume ExitHandler
        End Sub

    My tables are defined like so:

        Customers: CustomerID, Name, SalespersonID
        Salespeople: SalespersonID, Name

    With ten customers and 5 salespeople, my intended result would look like:

        CustomerID--Name--SalespersonID
        1---A---1
        2---B---2
        3---C---3
        4---D---4
        5---E---5
        6---F---1
        7---G---2
        8---H---3
        9---I---4
        10---J---5

    The above code works for the initial loop through the Salespeople recordset, but errors out when the end of the recordset is found. Regardless of the EOF, it appears it still tries to execute the rsSalespeople.MoveFirst command. Am I not checking rsSalespeople.EOF properly? Any ideas to get this code to work?

    Read the article

  • TooManyRowsAffectedException with encrypted triggers

    - by Jon Masters
    I'm using NHibernate to update 2 columns in a table that has 3 encrypted triggers on it. The triggers are not owned by me and I can not make changes to them, so unfortunately I can't SET NOCOUNT ON inside of them. Is there another way to get around the TooManyRowsAffectedException that is thrown on commit?

    Update 1: So far the only way I've gotten around the issue is to step around the .Save routine with:

        var query = session.CreateSQLQuery(
            "update Orders set Notes = :Notes, Status = :Status where OrderId = :Order");
        query.SetString("Notes", orderHeader.Notes);
        query.SetString("Status", orderHeader.OrderStatus);
        query.SetInt32("Order", orderHeader.OrderHeaderId);
        query.ExecuteUpdate();

    It feels dirty and is not easy to extend, but it doesn't crater.

    Read the article

  • Converting delimited string to multiple values in mysql

    - by epo
    I have a MySQL legacy table which contains a client identifier and a list of items, the latter as a comma-delimited string, e.g. "xyz001", "foo,bar,baz". This is legacy stuff and the user insists on being able to edit a comma-delimited string. They now have a requirement for a report table with the above broken into separate rows, e.g.:

        "xyz001", "foo"
        "xyz001", "bar"
        "xyz001", "baz"

    Breaking the string into substrings is easily doable and I have written a procedure to do this by creating a separate table, but that requires triggers to deal with deletes, updates and inserts. This query is required rarely (say once a month) but has to be absolutely up to date when it is run, so e.g. the overhead of triggers is not warranted and scheduled tasks to create the table might not be timely enough. Is there any way to write a function to return a table or a set, so that I can join the identifier with the individual items on demand?
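
    MySQL stored functions can only return scalar values, not sets, but one common on-demand workaround is to join against a small helper table of integers and peel off one item per integer with SUBSTRING_INDEX. A sketch with illustrative table and column names:

        -- seq(n) holds integers 1..N, where N >= the longest list's item count.
        SELECT t.client_id,
               SUBSTRING_INDEX(SUBSTRING_INDEX(t.items, ',', seq.n), ',', -1) AS item
        FROM   legacy_table AS t
        JOIN   seq
          ON   seq.n <= 1 + LENGTH(t.items) - LENGTH(REPLACE(t.items, ',', ''))
        ORDER BY t.client_id, seq.n;

    The inner SUBSTRING_INDEX takes the first n items, the outer one keeps only the nth, and the join condition caps n at the number of commas plus one.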

    Read the article

  • Split function in where clause

    - by abhishek-khandelwal
    Hello friends, I am using the following query in LINQ. The Product table stores data of the following type: abc-def, bcd-fgh, abc-xyz.

        var query = from prod in db.Product
                    join cat in db.category on prod.categoryId equals cat.categoryID
                    where prod.productName.Split('-')[0] == "abc"

    But that query produces a problem. Please give some suggestions for splitting in the where clause.

    Read the article

  • Compiled Queries and "Parameters cannot be sequences"

    - by David B
    I thought that compiled queries would perform the same query translation as DataContext. Yet I'm getting a run-time error when I try to use a query with a .Contains method call. Where have I gone wrong?

        //private member which holds a compiled query.
        Func<DataAccess.DataClasses1DataContext, List<int>, List<DataAccess.TestRecord>>
            compiledFiftyRecordQuery = System.Data.Linq.CompiledQuery.Compile
                <DataAccess.DataClasses1DataContext, List<int>, List<DataAccess.TestRecord>>
                ((dc, ids) => dc.TestRecords.Where(tr => ids.Contains(tr.ID)).ToList());

        //this method calls the compiled query.
        public void FiftyRecordCompiledQueryByID()
        {
            List<int> IDs = GetRandomInts(50);

            //System.NotSupportedException
            //{"Parameters cannot be sequences."}
            List<DataAccess.TestRecord> results =
                compiledFiftyRecordQuery(myContext, IDs);
        }

    Read the article

  • How to debug issues with differing execution times in different contexts.

    - by Dave
    The following question seems to be haunting me more consistently than most other questions recently. What kinds of things would you suggest I suggest that they look for when trying to debug "performance issues" like this?

        ok, get this - running this in Query Analyzer takes < 1 second:

            exec usp_MyAccount_Allowance_Activity '1/1/1900', null, 187128

        debugging locally, this takes 10 seconds:

            DataSet allowanceBalance = SqlHelper.ExecuteDataset(
                WebApplication.SQLConn(),
                CommandType.StoredProcedure,
                "usp_MyAccount_Allowance_Activity",
                Params);

        same parameters

    Read the article

  • How to add an IF statement in SQL?

    - by shin
    I have the following MySQL query with two parameters, $catname and $limit=1, and it is working fine:

        SELECT P.*, C.Name AS CatName
        FROM omc_product AS P
        LEFT JOIN omc_category AS C ON C.id = P.category_id
        WHERE C.Name = '$catname' AND P.status = 'active'
        ORDER BY RAND()
        LIMIT 0, $limit

    Now I want to add another parameter, $order. $order can be either ORDER BY RAND() or ORDER BY product_order in the table omc_product. Could anyone tell me how to write this query please? Thanks in advance.
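
    One way to switch the ordering inside a single statement, assuming $order is validated in PHP against a whitelist before being interpolated, is a CASE expression in the ORDER BY (a sketch):

        SELECT P.*, C.Name AS CatName
        FROM omc_product AS P
        LEFT JOIN omc_category AS C ON C.id = P.category_id
        WHERE C.Name = '$catname' AND P.status = 'active'
        ORDER BY CASE WHEN '$order' = 'rand'
                      THEN RAND()           -- random order
                      ELSE P.product_order  -- fixed column order
                 END
        LIMIT 0, $limit

    Alternatively, and often more simply, build the ORDER BY clause in PHP from the whitelisted $order value before running the query.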

    Read the article

  • Why does this LEFT JOIN query fail to load all the data in the left table?

    - by lzyy
    users table:

        +----+----------+
        | id | username |
        +----+----------+
        |  1 | tom      |
        |  2 | jelly    |
        |  3 | foo      |
        |  4 | bar      |
        +----+----------+

    groups table:

        +----+---------+---------+
        | id | user_id | title   |
        +----+---------+---------+
        |  2 |       1 | title 1 |
        |  4 |       1 | title 2 |
        +----+---------+---------+

    the query:

        SELECT users.username, users.id, count(groups.title) as group_count
        FROM users
        LEFT JOIN groups ON users.id = groups.user_id

    result:

        +----------+----+-------------+
        | username | id | group_count |
        +----------+----+-------------+
        | tom      |  1 |           2 |
        +----------+----+-------------+

    Where is the rest of the users' info? The result is the same as with an inner join - shouldn't a left join return all of the left table's data? PS: I'm using MySQL.
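
    The left join is not the problem here: COUNT() with no GROUP BY aggregates the whole result into a single row. Grouping by user restores one row per user, including those with zero groups (a sketch of the common fix):

        SELECT users.username, users.id, COUNT(groups.title) AS group_count
        FROM users
        LEFT JOIN groups ON users.id = groups.user_id
        GROUP BY users.id, users.username;

    With the sample data this returns tom with 2, and jelly, foo and bar with 0, since COUNT(groups.title) ignores the NULLs produced by the outer join.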

    Read the article

  • Code changes in the Dtsx file in an SSIS package not reflecting after deploying and running on the server

    - by SKumar
    I have a folder called Test-Deploy in which I keep all the dtsx files, the manifest and the configuration files. Whenever I want to deploy the SSIS package, I run the manifest file in this folder and deploy. My problem is that I had to change one of the dtsx files. So I opened only that particular dtsx file in BI Studio, updated it and built it. After the build, I copied the dtsx file from the bin folder to my Test-Deploy folder. When I deployed and ran this new package from the Test-Deploy folder, the changes I made were not reflected in the result. I could not find any difference in the results before and after the change. My doubt is: has it saved my previous dtsx file somewhere on the server, and is it executing that same dtsx file instead of the new one?

    Read the article

  • What data structures and algorithms are applied within data warehouse cubes?

    - by Jeff Meatball Yang
    I understand that cubes are optimized data structures for aggregating and "slicing" large amounts of data. I just don't know how they are implemented. I can imagine a lot of this technology is proprietary, but are there any resources that I could use to start implementing my own cube technology? Set theory and lots of math are probably involved (and welcome as suggestions!), but I'm primarily interested in implementations: the data structures and query algorithms. Thanks!
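
    For a flavor of what a cube materializes, SQL's grouping extensions (CUBE syntax as in Oracle or SQL Server 2008, illustrative names) compute an aggregate for every combination of the chosen dimensions; a cube engine effectively precomputes and indexes results like these:

        SELECT region, product, sale_year, SUM(amount) AS total_sales
        FROM   fact_sales
        GROUP BY CUBE (region, product, sale_year);
        -- 2^3 = 8 grouping combinations, from (region, product, sale_year)
        -- down to the single grand total

    The interesting implementation questions are then which of those aggregates to store versus derive on the fly, and how to index them for fast slicing.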

    Read the article
