Search Results

Search found 98443 results on 3938 pages for 'sql server'.


  • Subversion/Hudson/Sonar/Artifactory - too much for my little server to handle! Help!

    - by Ricket
    I have a little dedicated server. It's at a cheap price and has a simple AMD 1800+ (1.5 GHz), 256 MB DDR RAM, ...need I continue? And I think I'm overloading it already. It's running CentOS 5.4, and I have installed the following:

        Webmin
        Apache
        MySQL
        Subversion as an Apache module
        Hudson (standalone)
        Sonar (standalone, runs with a standalone Jetty install)
        Artifactory (standalone)

    That's pretty much it. But I'm having problems; pages are loading quite slowly. Network speed of the server is excellent, but I think I'm just running out of CPU and/or memory. A side effect of the pages loading slowly is that Hudson sometimes times out, not being able to start Maven or contact Sonar in a certain amount of time.

    I think the next step to speed things up might be to move to an application server and use the WAR versions of Hudson, Sonar and Artifactory together on that server. I don't know that it will help, but it just seems to make sense, especially with Sonar running on its own Jetty install and the other two probably running their own mini application servers as well. Am I correct in thinking this? Is this the right course of action? Any other tips on how to make the server run faster? I can post more data if you'd like, just let me know what else would help you answer my question.

    Oh, also just to cure any suspicions, I don't have any sort of virus or spyware. I protect my SSH access with DenyHosts (which has blocked 300+ brute forcers in the past few months), and I have confirmed that the top four processes in terms of memory and CPU usage are Sonar, Artifactory, Hudson, and MySQL.

    Edit: I just thought of another thing that I'd like you to comment on as well: Apache currently has 8 spawned slave processes, taking 42 MB of RAM apiece, and this is not my web server. Is everything else able to function if I shut down Apache? Can you point me towards a tutorial or something on migrating Subversion from Apache to something that might work along with the other three applications, maybe even make Subversion a WAR file or something?

    Read the article

  • Displaying tree path of record in SQL Server 2005

    - by jskiles1
    An example of my tree table is ([id] is an identity):

        [id]  [parent_id]  [path]
        1     NULL         1
        2     1            1-2
        3     1            1-3
        4     3            1-3-4

    My goal is to query quickly for multiple rows of this table and view the full path of each node from its root, through its superiors, down to itself. The ultimate question is: should I generate this path on insert and maintain it in its own column, or generate it at query time to save disk space? I guess it depends on whether this table is write-heavy or read-heavy. I've been contemplating several approaches to using the "path" characteristic of this parent/child relationship and I just can't seem to settle on one. This "path" is simply for display purposes and serves absolutely no purpose other than that. Here is what I have done to implement this "path":

    1. AFTER INSERT TRIGGER - requires passing a NULL path to the insert and updating the path for the record at the inserted row's identity
    2. INSTEAD OF INSERT TRIGGER - does not require the insert to pass a NULL path, but does require the trigger to insert with a NULL path and update the path for the record at SCOPE_IDENTITY()
    3. STORED PROCEDURE - requires all inserts into this table to go through the stored procedure implementing the trigger logic
    4. VIEW - requires building the path in the view

    1 and 2 seem annoying if massive amounts of data are entered at once. 3 seems annoying because all inserts must go through the procedure in order to have a valid path populated. 1, 2, and 3 require maintaining a path column on the table. 4 removes all the limitations of the above but requires the view to perform the path logic and requires use of the view whenever a path is to be displayed. I have successfully implemented all of the above approaches and I'm mainly looking for some advice. Am I way off the mark here, or are any of the above acceptable? Each has its advantages and disadvantages.
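
    For reference, one way to implement option 4 without storing the path at all is a recursive CTE, which SQL Server 2005 supports. A minimal sketch, assuming the table is named dbo.Tree with the id/parent_id columns from the question (the table name is hypothetical):

        -- Build the root-to-node path at query time; dbo.Tree is an assumed name.
        WITH TreePath (id, parent_id, path) AS
        (
            SELECT id, parent_id, CAST(id AS varchar(900))
            FROM dbo.Tree
            WHERE parent_id IS NULL
            UNION ALL
            SELECT c.id, c.parent_id,
                   CAST(p.path + '-' + CAST(c.id AS varchar(12)) AS varchar(900))
            FROM dbo.Tree AS c
            JOIN TreePath AS p ON c.parent_id = p.id
        )
        SELECT id, parent_id, path
        FROM TreePath;

    Wrapped in a view, this gives the display path on demand without maintaining a path column.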

    Read the article

  • SQL: join within same table with different 'where' clause

    - by Pmarcoen
    Ok, so the problem I'm facing is this: I have a table with 3 columns: ID, Key and Value.

        ID | Key | Value
        ================
        1  | 1   | ab
        1  | 2   | cd
        1  | 3   | ef
        2  | 1   | gh
        2  | 2   | ij
        2  | 3   | kl

    Now I want to select the value of Keys 1 & 3 for all IDs. The result should look like this:

        ID | 1  | 3
        ================
        1  | ab | ef
        2  | gh | kl

    So, per ID, one row containing the Values for Keys 1 & 3. I tried using a join, but since I need to use multiple where clauses I can't figure out how to get this to work.
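
    One common way to do this is a self-join with a different key filter on each side. A minimal sketch, assuming SQL Server-style brackets and a table named KeyValues (the table name is hypothetical; columns follow the question):

        -- Pivot Keys 1 and 3 onto a single row per ID.
        SELECT a.ID,
               a.Value AS Key1Value,
               b.Value AS Key3Value
        FROM KeyValues AS a
        JOIN KeyValues AS b
          ON b.ID = a.ID
         AND b.[Key] = 3
        WHERE a.[Key] = 1;

    An INNER JOIN drops IDs that lack either key; a LEFT JOIN on the Key = 3 side would keep them with a NULL.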

    Read the article

  • MS SQL Server BEGIN/END vs BEGIN TRANS/COMMIT/ROLLBACK

    - by Rich
    I have been trying to find info on the web about the differences between these statements, and it seems to me they are identical, but I can't find confirmation of that or any kind of comparison between the two. What is the difference between doing this:

        BEGIN
            -- Some update, insert, set statements
        END

    and doing this:

        BEGIN TRANS
            -- Some update, insert, set statements
        COMMIT TRANS

    Note that there is only the need to roll back in the case of some exception or timeout or other general failure; there would not be a conditional reason to roll back.
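
    For what it's worth, BEGIN ... END is only a control-of-flow block and has no transactional effect, while BEGIN TRANSACTION ... COMMIT makes the enclosed statements atomic. A minimal sketch of the transactional form with an explicit rollback path, assuming SQL Server 2005 or later; the table and column names are placeholders:

        BEGIN TRY
            BEGIN TRANSACTION;
                UPDATE dbo.SomeTable SET SomeColumn = 1 WHERE SomeID = 42;          -- placeholder work
                INSERT INTO dbo.SomeLog (SomeID, LoggedAt) VALUES (42, GETDATE());  -- placeholder work
            COMMIT TRANSACTION;
        END TRY
        BEGIN CATCH
            IF @@TRANCOUNT > 0
                ROLLBACK TRANSACTION;   -- undo everything on any error
        END CATCH;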

    Read the article

  • Convert SQL with Inner AND Outer Join to L2S

    - by Refracted Paladin
    I need to convert the below sproc to a LINQ query. At the very bottom is what I have so far. For reference, the fields behind the "splat" (not my sproc) are:

        ImmunizationID int, HAReviewID int, ImmunizationMaintID int,
        ImmunizationOther varchar(50), ImmunizationDate smalldatetime,
        ImmunizationReasonID int

    The first two are PK and FK, respectively. The other two ints are linked to the Maint table, where their description is stored. That is what I am stuck on: the INNER JOIN and the LEFT OUTER JOIN. Thanks.

        SELECT tblHAReviewImmunizations.*,
               tblMaintItem.ItemDescription,
               tblMaintItem2.ItemDescription AS Reason
        FROM dbo.tblHAReviewImmunizations
        INNER JOIN dbo.tblMaintItem
            ON dbo.tblHAReviewImmunizations.ImmunizationMaintID = dbo.tblMaintItem.ItemID
        LEFT OUTER JOIN dbo.tblMaintItem AS tblMaintItem2
            ON dbo.tblHAReviewImmunizations.ImmunizationReasonID = tblMaintItem2.ItemID
        WHERE HAReviewID = @haReviewID

    My attempt so far:

        public static DataTable GetImmunizations(int haReviewID)
        {
            using (var context = McpDataContext.Create())
            {
                var currentImmunizations =
                    from haReviewImmunization in context.tblHAReviewImmunizations
                    where haReviewImmunization.HAReviewID == haReviewID
                    join maintItem in context.tblMaintItems
                        on haReviewImmunization.ImmunizationReasonID equals maintItem.ItemID into g
                    from maintItem in g.DefaultIfEmpty()
                    let Immunization = GetImmunizationNameByID(haReviewImmunization.ImmunizationMaintID)
                    select new
                    {
                        haReviewImmunization.ImmunizationDate,
                        haReviewImmunization.ImmunizationOther,
                        Immunization,
                        Reason = maintItem == null ? " " : maintItem.ItemDescription
                    };

                return currentImmunizations.CopyLinqToDataTable();
            }
        }

        private static string GetImmunizationNameByID(int? immunizationID)
        {
            using (var context = McpDataContext.Create())
            {
                var domainName = from maintItem in context.tblMaintItems
                                 where maintItem.ItemID == immunizationID
                                 select maintItem.ItemDescription;
                return domainName.SingleOrDefault();
            }
        }

    Read the article

  • Different execution plan for similar queries

    - by Graham Clements
    I am running two very similar update queries, but for a reason unknown to me they are using completely different execution plans. Normally this wouldn't be a problem, but they both update exactly the same number of rows, and one uses an execution plan that is far inferior to the other: 4 seconds vs 2 minutes. Scaled up, this is causing me a massive problem. The only difference between the two queries is that one uses the column CLI and the other DLI. These columns have exactly the same datatype and are both indexed exactly the same, but for the DLI query's execution plan the index is not used. Any help as to why this is happening is much appreciated.

        -- Query 1
        UPDATE a
        SET DestKey = (
            SELECT TOP 1 b.PrefixKey
            FROM refPrefixDetail AS b
            WHERE a.DLI LIKE b.Prefix + '%'
            ORDER BY len(b.Prefix) DESC
        )
        FROM CallData AS a

        -- Query 2
        UPDATE a
        SET DestKey = (
            SELECT TOP 1 b.PrefixKey
            FROM refPrefixDetail b
            WHERE a.CLI LIKE b.Prefix + '%'
            ORDER BY len(b.Prefix) DESC
        )
        FROM CallData AS a
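
    One thing worth ruling out before anything else is stale statistics on the columns involved. The sketch below is only a rough diagnostic starting point; the table names come from the question, everything else is illustrative:

        UPDATE STATISTICS dbo.CallData WITH FULLSCAN;
        UPDATE STATISTICS dbo.refPrefixDetail WITH FULLSCAN;

        SET STATISTICS IO ON;
        SET STATISTICS TIME ON;
        -- Re-run both UPDATE statements and compare logical reads, CPU time and
        -- the actual execution plans for the CLI and DLI versions.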

    Read the article

  • SQL: How to join multiple columns with the same name into one column

    - by Choi Shun Chi
    There is a superclass account {User, TYPE} and three subclasses:

        saving {User, ID, balance, TYPE, interest, curency_TYPE}
        time   {User, ID, balance, TYPE, interest, curency_TYPE, start_date, due_date, period}
        fore   {User, ID, balance, interest, curency_TYPE}

    User and TYPE form the primary key of account and a foreign key of the three subclasses; ID is the primary key of each subclass. How can I make a list showing all IDs in one column? The same goes for balance and TYPE. The problem I ran into: I tried "a.ID as saving, b.ID as time", but that shows them as separate columns.
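
    If the goal is simply to stack the subclass rows so that all IDs appear in one column, a UNION ALL is the usual tool. A minimal sketch, assuming SQL Server-style brackets; the table and column names follow the question, and fore has no TYPE column so NULL is substituted:

        SELECT ID, balance, [TYPE], 'saving' AS source FROM saving
        UNION ALL
        SELECT ID, balance, [TYPE], 'time' AS source FROM [time]
        UNION ALL
        SELECT ID, balance, NULL AS [TYPE], 'fore' AS source FROM fore;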

    Read the article

  • SQL latest/top items in category

    - by drozzy
    What is a scalable way to select the latest 10 items from each category? I have a schema like this:

        item
        category
        updated

    I want to select the 10 last-updated items from each category. The current solution I can come up with is to query for the categories first and then issue some sort of union query:

        query = none
        for cat in categories:
            query += select top 10 from table where category=cat order by updated

    I am not sure how efficient this will be for bigger databases (1 million rows). If there is a way to do this in one go, that would be nice. Any help appreciated.
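
    On SQL Server 2005 and later (and most databases with window functions), this can be done in one statement with ROW_NUMBER. A minimal sketch, assuming the table is named Items (the table name is hypothetical; the columns follow the question):

        SELECT item, category, updated
        FROM (
            SELECT item, category, updated,
                   ROW_NUMBER() OVER (PARTITION BY category ORDER BY updated DESC) AS rn
            FROM Items
        ) AS ranked
        WHERE rn <= 10
        ORDER BY category, rn;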

    Read the article

  • LINQ to SQL: On load processing of lazy loaded associations

    - by Matt Holmes
    If I have an object that lazy loads an association with very large objects, is there a way I can do processing at the time the lazy load occurs? I thought I could use AssociateWith or LoadWith from DataLoadOptions, but there are very, very specific restrictions on what you can do in those. Basically I need to be notified when an EntitySet<T> decides it's time to load the associated object, so I can catch that event and do some processing on the loaded object. I don't want to simply walk through the EntitySet when I load the parent object, because that will force all the lazy loaded items to load (defeating the purpose of lazy loading entirely).

    Read the article

  • SQL multiple primary keys - localization

    - by Max Malmgren
    I am trying to implement some localization in my database. It looks something like this (prefixes only for clarification):

        tbl-Categories: ID, Language, Name
        tbl-Articles:   ID, CategoryID

    Now, in tbl-Categories I want a primary key spanning ID and Language, so that every combination of ID and Language is unique. In tbl-Articles I would like a foreign key referencing ID in Categories, but not Language, since I do not want to bind an article to a certain language, only to a category. Of course, I cannot add a foreign key to part of the primary key. I also cannot have the primary key only on the ID of Categories, since then there can only be one language. Having no primary keys disables foreign keys altogether, and that is also not a great solution. Do you have any ideas how I can solve this in an elegant fashion? Thanks.
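
    One common way around this is to split the table: keep a language-neutral categories table that articles reference, and move the per-language names into a translation table with a composite key. A rough sketch with illustrative names:

        CREATE TABLE Categories (
            ID int NOT NULL PRIMARY KEY
        );

        CREATE TABLE CategoryTranslations (
            CategoryID int           NOT NULL REFERENCES Categories (ID),
            Language   char(2)       NOT NULL,
            Name       nvarchar(100) NOT NULL,
            PRIMARY KEY (CategoryID, Language)
        );

        CREATE TABLE Articles (
            ID         int NOT NULL PRIMARY KEY,
            CategoryID int NOT NULL REFERENCES Categories (ID)
        );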

    Read the article

  • Select multiple records from a SQL database table in a master-detail scenario

    - by Trex
    Hello, I have two tables in a master-detail relationship. The structure is more or less as follows:

    Master table: MasterID, DetailID, date, ...

        masterID1, detailID1, 2010/5/1, ...
        masterID2, detailID1, 2008/6/14, ...
        masterID3, detailID1, 2009/5/25, ...
        masterID4, detailID2, 2008/7/24, ...
        masterID5, detailID2, 2010/4/1, ...
        masterID6, detailID4, 2008/9/16, ...

    Details table: DetailID, ...

        detailID1, ...
        detailID2, ...
        detailID3, ...
        detailID4, ...

    I need to get all the records from the details table plus the LAST record from the master table (last by the date in the master table), like this:

        detailID1, masterID1, 2010/5/1, ...
        detailID2, masterID5, 2010/4/1, ...
        detailID3, null, null, ...
        detailID4, masterID6, 2008/9/16, ...

    I have no idea how to do this. Can anybody help me? Thanks a lot. Jan
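
    On SQL Server 2005 and later, OUTER APPLY is one straightforward way to pick the newest master row per detail. A minimal sketch, assuming the tables are named Details and Master (the table names are hypothetical; the columns follow the question):

        SELECT d.DetailID, latest.MasterID, latest.[date]
        FROM Details AS d
        OUTER APPLY (
            SELECT TOP 1 m.MasterID, m.[date]
            FROM Master AS m
            WHERE m.DetailID = d.DetailID
            ORDER BY m.[date] DESC
        ) AS latest;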

    Read the article

  • Complicated SQL query

    - by user507779
    LookupTable:

        userid, mobileid, startedate,  enddate,     owner
        1,      1,        12-12-2000,  01-01-2001,  asd
        2,      2,        12-12-2000,  01-01-2001,  dgs
        3,      3,        02-01-2001,  01-01-2002,  sdg
        4,      4,        12-12-2000,  01-01-2001,  sdg

    UserInfoTable:

        userid, firstname, lastname, address
        1,      tom,       do,       test
        2,      sam,       smith,    asds
        3,      john,      saw,      asdasda
        4,      peter,     winston,  near by

    Mobile:

        Mobileid, Name,     number, imeinumber
        1,        apple,    123,    1111111
        2,        nokia,    456,    2222222
        3,        vodafone, 789,    3333333

    CallLogs:

        id, Mobileid, callednumbers (string), date,        totalduration
        1,  1,        123,123,321,            13-12-2000,  30
        2,  1,        123,123,321,            14-12-2000,  30
        3,  2,        123,123,321,            13-12-2000,  30
        4,  2,        123,123,321,            14-12-2000,  30
        5,  3,        123,123,321,            13-12-2000,  30
        6,  3,        123,123,321,            14-12-2000,  30
        1,  1,        123,123,321,            13-01-2002,  30
        2,  1,        123,123,321,            14-01-2002,  30

    I want a query which will return the following: firstname, lastname, mobile.name as mobilename, and callednumbers (as concatenated strings from different rows in the CallLogs table), and I need it for the year 2000. Example:

        firstname, lastname, mobilename, callednumbers
        tom,       do,       apple,      123,123,321, 123,123,321
        sam,       smith,    nokia,      123,123,321, 123,123,321
        peter,     winston,  apple,      123,123,321, 123,123,321

    Any help will be highly appreciated.
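
    On SQL Server, the usual trick for concatenating the matching CallLogs rows into one string is a correlated subquery with FOR XML PATH. A rough sketch; the table and column names follow the question, and it assumes the date column can be treated as a datetime for the year filter:

        SELECT u.firstname,
               u.lastname,
               m.Name AS mobilename,
               STUFF((
                   SELECT ', ' + c.callednumbers
                   FROM CallLogs AS c
                   WHERE c.Mobileid = l.mobileid
                     AND YEAR(c.[date]) = 2000
                   FOR XML PATH('')
               ), 1, 2, '') AS callednumbers
        FROM LookupTable AS l
        JOIN UserInfoTable AS u ON u.userid = l.userid
        JOIN Mobile AS m ON m.Mobileid = l.mobileid;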

    Read the article

  • What happens when the server that the Remote Desktop Connection Broker runs on goes down?

    - by Frank Owen
    I would like to set up the Remote Desktop Connection Broker to allow better load balancing of the two terminal servers we have, as well as allowing users to re-establish a session on the correct server if they get disconnected. My worry is: if I set this up and the server this service is running on goes down, do the terminal servers stop accepting connections, or do they just lose the benefit of having RDCB turned on? I don't want to add another point of failure in this equation unless I have to.

    Read the article

  • Question About DateCreated and DateModified Columns - MS SQL Server

    - by user311509
        CREATE TABLE Customer
        (
            customerID int identity (500,20) CONSTRAINT
            .
            .
            dateCreated datetime DEFAULT GetDate() NOT NULL,
            dateModified datetime DEFAULT GetDate() NOT NULL
        );

    When I insert a record, dateCreated and dateModified get set to the default date/time. When I update/modify the record, dateModified and dateCreated remain as is. What should I do? Obviously, I need the dateCreated value to remain as it was inserted the first time, while dateModified keeps changing whenever a change/modification occurs in the record's fields. In other words, can you please write a sample quick trigger? I don't know much yet... Any help will be appreciated.
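
    A minimal sketch of such a trigger, assuming customerID is the primary key as in the question (the trigger name is illustrative):

        CREATE TRIGGER trg_Customer_Update
        ON Customer
        AFTER UPDATE
        AS
        BEGIN
            SET NOCOUNT ON;
            UPDATE c
            SET dateModified = GETDATE(),
                dateCreated  = d.dateCreated   -- put back the original creation date
            FROM Customer AS c
            JOIN deleted AS d ON d.customerID = c.customerID;
        END;

    This relies on the RECURSIVE_TRIGGERS database option being OFF (the default), so the trigger's own UPDATE does not re-fire it.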

    Read the article

  • Separating date ranges into separate segments in SQL

    - by Richard
    I have a table containing two date fields and an identifier (id, fromdate and todate). These dates overlap in any and every possible way. I need to produce a list of segments, each with a start and end date, describing the separate segments in that list. For example:

        id, FromDate,   ToDate
        1,  1944-12-11, 1944-12-31
        2,  1945-01-01, 1945-12-31
        3,  1945-01-01, 1945-06-30
        4,  1945-12-31, 1946-05-01
        5,  1944-12-17, 1946-03-30

    should produce all the segments of all the overlaps:

        1, 1944-12-11, 1944-12-16
        1, 1944-12-17, 1944-12-31
        5, 1944-12-17, 1944-12-31
        2, 1945-01-01, 1945-06-30
        3, 1945-01-01, 1945-06-30
        5, 1945-01-01, 1945-06-30
        2, 1945-07-01, 1945-12-09
        5, 1945-07-01, 1945-12-09
        2, 1945-12-10, 1945-12-31
        4, 1945-12-10, 1945-12-31
        5, 1945-12-10, 1945-12-31
        4, 1946-01-01, 1946-03-30
        5, 1946-01-01, 1946-03-30
        4, 1946-04-01, 1946-05-01

    Or perhaps a diagram might help (each bar represents a date range from the lists above):

        INPUT
        1 <---->
        2 <----------->
        3 <----->
        4 <---------->
        5 <----------------->

        OUTPUT
        1 <->
        1 <->
        5 <->
        2 <----->
        3 <----->
        5 <----->
        2 <->
        5 <->
        2 <->
        4 <->
        5 <->
        4 <->
        5 <->
        4 <---->

    Please help.
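
    One general approach is to collect every FromDate and ToDate as a boundary, pair consecutive boundaries into candidate segments, and join each segment back to the ranges that cover it. A rough sketch (SQL Server 2005+ syntax); the table name Ranges is assumed, and the inclusive/exclusive handling of segment endpoints (the one-day adjustments visible in the expected output) is left out for brevity:

        WITH Boundaries AS (
            SELECT FromDate AS d FROM Ranges
            UNION
            SELECT ToDate FROM Ranges
        ),
        Ordered AS (
            SELECT d, ROW_NUMBER() OVER (ORDER BY d) AS rn
            FROM Boundaries
        ),
        Segments AS (
            SELECT a.d AS SegStart, b.d AS SegEnd
            FROM Ordered AS a
            JOIN Ordered AS b ON b.rn = a.rn + 1
        )
        SELECT r.id, s.SegStart, s.SegEnd
        FROM Segments AS s
        JOIN Ranges AS r
          ON r.FromDate <= s.SegStart AND r.ToDate >= s.SegEnd
        ORDER BY s.SegStart, r.id;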

    Read the article

  • SQL Design: representing a default value with overrides?

    - by Mark Harrison
    I need a sparse table which contains a set of "override" values for another table. I also need to specify the default value for the items overridden. For example, if the default value is 17, then foo, bar, baz will have the values 17, 21, 17:

        table "things"          table "xvalue"
        name  stuff             name  xval
        ----  -----             ----  ----
        foo   ...               bar   21
        bar   ...
        baz   ...

    If I don't care about a FK from xvalue.name to things.name, I could simply put a "DEFAULT" name:

        table "xvalue"
        name     xval
        ----     ----
        DEFAULT  17
        bar      21

    But I like having a FK. I could have a separate default table, but it seems odd to have 2x the number of tables:

        table "xvalue_default"      table "xvalue"
        xval                        name  xval
        ----                        ----  ----
        17                          bar   21

    I could have a "defaults table":

        tablename  attributename  defaultvalue
        xvalue     xval           17

    but then I run into type issues on defaultvalue. My operations guys prefer as compact a representation as possible, so they can most easily see the "diff" or deviations from the default. What's the best way to represent this, including the default value? This will be for Oracle 10.2, if that makes a difference.
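
    One option that keeps the FK and stays close to the "overrides only" representation is a one-row defaults table applied with COALESCE at query time. A rough Oracle-flavored sketch; it assumes things.name is declared unique, and the single-row constraint column is illustrative:

        CREATE TABLE xvalue_default (
            only_row CHAR(1) DEFAULT 'X' PRIMARY KEY CHECK (only_row = 'X'),
            xval     NUMBER NOT NULL
        );

        CREATE TABLE xvalue (
            name VARCHAR2(30) PRIMARY KEY REFERENCES things (name),
            xval NUMBER NOT NULL
        );

        SELECT t.name, COALESCE(x.xval, d.xval) AS xval
        FROM things t
        LEFT JOIN xvalue x ON x.name = t.name
        CROSS JOIN xvalue_default d;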

    Read the article

  • SQL Access INSERT INTO Autonumber Field

    - by KrazyKash
    I'm trying to make a Visual Basic application which is connected to a Microsoft Access database using OLEDB. Inside my database I have a Users table with the following layout:

        ID       - Autonumber
        Username - Text
        Password - Text
        Email    - Text

    To insert data into the table I use the following query:

        INSERT INTO Users (Username, Password, Email)
        VALUES ('004606', 'Password', '[email protected]')

    However, I seem to get an error with this statement, and according to VB it's a syntax error. But then I tried the following query:

        INSERT INTO Users (Username) VALUES ('004606')

    This query seemed to work absolutely fine... So the problem is I can insert into just one field but not all three (excluding the ID field, because it's an autonumber). Any help would be appreciated. Thanks
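
    For what it's worth, PASSWORD is a reserved word in the Access/Jet SQL dialect, which is a likely cause of exactly this kind of syntax error; bracketing the column names usually avoids it. A minimal sketch using the same values as above:

        INSERT INTO Users ([Username], [Password], [Email])
        VALUES ('004606', 'Password', '[email protected]')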

    Read the article

  • Why can't I shrink a transaction log file, even after backup?

    - by Jordan Hudson
    I have a database that has a 28 GB transaction log file. Recovery mode is simple. I just took a full backup of the database, and then ran both:

        backup log dbmcms with truncate_only
        DBCC SHRINKFILE ('Wxlog0', TRUNCATEONLY)

    The name of the db is db_mcms and the name of the transaction log file is Wxlog0. Neither has helped. I'm not sure what to do next.
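
    A rough next step is to check what SQL Server reports as holding the log and to confirm the logical file name before shrinking again; the sketch below uses the database and file names from the question:

        SELECT name, log_reuse_wait_desc
        FROM sys.databases
        WHERE name = 'db_mcms';

        USE db_mcms;
        SELECT name, type_desc, size
        FROM sys.database_files;          -- confirm the log file's logical name

        DBCC SHRINKFILE ('Wxlog0', 1);    -- shrink toward a 1 MB target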

    Read the article
