Search Results

Search found 47861 results on 1915 pages for 'sql update'.

Page 326/1915 | < Previous Page | 322 323 324 325 326 327 328 329 330 331 332 333  | Next Page >

  • Problems Enforcing Referential Integrity on SQL Server Tables

    - by SidC
    Hello All, I have a SQL Server 2005 database comprised of Customer, Quote, QuoteDetail tables. I want/need to enforce referential integrity such that when an insert is made on quotedetail, the quote and customer tables are also affected. I have tried my best to set up primary/foreign keys on my tables but need some help. Here's the scripts for my tables as they stand now (please don't laugh): Customers: USE [Diel_inventory] GO /****** Object: Table [dbo].[Customers] Script Date: 05/08/2010 03:39:04 ******/ SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO CREATE TABLE [dbo].[Customers]( [pkCustID] [int] IDENTITY(1,1) NOT NULL, [CompanyName] [nvarchar](50) NULL, [Address] [nvarchar](50) NULL, [City] [nvarchar](50) NULL, [State] [nvarchar](2) NULL, [ZipCode] [nvarchar](5) NULL, [OfficePhone] [nvarchar](12) NULL, [OfficeFAX] [nvarchar](12) NULL, [Email] [nvarchar](50) NULL, [PrimaryContactName] [nvarchar](50) NULL, CONSTRAINT [PK_Customers] PRIMARY KEY CLUSTERED ([pkCustID] ASC)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] ) ON [PRIMARY] Quotes: USE [Diel_inventory] GO /****** Object: Table [dbo].[Quotes] Script Date: 05/08/2010 03:30:46 ******/ SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO CREATE TABLE [dbo].[Quotes]( [pkQuoteID] [int] IDENTITY(1,1) NOT NULL, [fkCustomerID] [int] NOT NULL, [QuoteDate] [timestamp] NOT NULL, [NeedbyDate] [datetime] NULL, [QuoteAmt] [decimal](6, 2) NOT NULL, [QuoteApproved] [bit] NOT NULL, [fkOrderID] [int] NOT NULL, CONSTRAINT [PK_Bids] PRIMARY KEY CLUSTERED ( [pkQuoteID] ASC)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] ) ON [PRIMARY] GO ALTER TABLE [dbo].[Quotes] WITH CHECK ADD CONSTRAINT [fkCustomerID] FOREIGN KEY([fkCustomerID]) REFERENCES [dbo].[Customers] ([pkCustID]) GO ALTER TABLE [dbo].[Quotes] CHECK CONSTRAINT [fkCustomerID] QuoteDetail: USE [Diel_inventory] GO /****** Object: Table [dbo].[QuoteDetail] Script Date: 05/08/2010 03:31:58 ******/ SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO CREATE TABLE [dbo].[QuoteDetail]( [ID] [int] IDENTITY(1,1) NOT NULL, [fkQuoteID] [int] NOT NULL, [fkCustomerID] [int] NOT NULL, [fkPartID] [int] NULL, [PartNumber1] [float] NOT NULL, [Qty1] [int] NOT NULL, [PartNumber2] [float] NULL, [Qty2] [int] NULL, [PartNumber3] [float] NULL, [Qty3] [int] NULL, [PartNumber4] [float] NULL, [Qty4] [int] NULL, [PartNumber5] [float] NULL, [Qty5] [int] NULL, [PartNumber6] [float] NULL, [Qty6] [int] NULL, [PartNumber7] [float] NULL, [Qty7] [int] NULL, [PartNumber8] [float] NULL, [Qty8] [int] NULL, [PartNumber9] [float] NULL, [Qty9] [int] NULL, [PartNumber10] [float] NULL, [Qty10] [int] NULL, [PartNumber11] [float] NULL, [Qty11] [int] NULL, [PartNumber12] [float] NULL, [Qty12] [int] NULL, [PartNumber13] [float] NULL, [Qty13] [int] NULL, [PartNumber14] [float] NULL, [Qty14] [int] NULL, [PartNumber15] [float] NULL, [Qty15] [int] NULL, [PartNumber16] [float] NULL, [Qty16] [int] NULL, [PartNumber17] [float] NULL, [Qty17] [int] NULL, [PartNumber18] [float] NULL, [Qty18] [int] NULL, [PartNumber19] [float] NULL, [Qty19] [int] NULL, [PartNumber20] [float] NULL, [Qty20] [int] NULL, CONSTRAINT [PK_QuoteDetail] PRIMARY KEY CLUSTERED ( [ID] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] ) ON [PRIMARY] GO ALTER TABLE [dbo].[QuoteDetail] WITH CHECK ADD CONSTRAINT 
[FK_QuoteDetail_Customers] FOREIGN KEY ([fkCustomerID]) REFERENCES [dbo].[Customers] ([pkCustID]) GO ALTER TABLE [dbo].[QuoteDetail] CHECK CONSTRAINT [FK_QuoteDetail_Customers] GO ALTER TABLE [dbo].[QuoteDetail] WITH CHECK ADD CONSTRAINT [FK_QuoteDetail_PartList] FOREIGN KEY ([fkPartID]) REFERENCES [dbo].[PartList] ([RecID]) GO ALTER TABLE [dbo].[QuoteDetail] CHECK CONSTRAINT [FK_QuoteDetail_PartList] GO ALTER TABLE [dbo].[QuoteDetail] WITH CHECK ADD CONSTRAINT [FK_QuoteDetail_Quotes] FOREIGN KEY([fkQuoteID]) REFERENCES [dbo].[Quotes] ([pkQuoteID]) GO ALTER TABLE [dbo].[QuoteDetail] CHECK CONSTRAINT [FK_QuoteDetail_Quotes] Your advice/guidance on how to set these up so that customer ID in Customers is the same as in Quotes (referential integrity) and that CustomerID is inserted on Quotes and Customers when an insert is made to QuoteDetial would be much appreciated. Thanks, Sid
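
    As a rough illustration (not from the question itself): once the foreign keys above are in place, referential integrity is maintained by inserting the parent rows first and carrying their generated identity values down to the child rows. The company, dates and part values below are made up.

      -- Hedged sketch only: parent rows first, generated keys reused in the children.
      DECLARE @CustID int, @QuoteID int;

      INSERT INTO dbo.Customers (CompanyName, City, State)
      VALUES (N'Acme Corp', N'Dayton', N'OH');
      SET @CustID = SCOPE_IDENTITY();

      -- QuoteDate is a timestamp (rowversion) column, so it is not supplied here.
      INSERT INTO dbo.Quotes (fkCustomerID, NeedbyDate, QuoteAmt, QuoteApproved, fkOrderID)
      VALUES (@CustID, '2010-06-01', 1500.00, 0, 0);
      SET @QuoteID = SCOPE_IDENTITY();

      -- The detail row references the quote; the customer is already reachable
      -- through Quotes.fkCustomerID, so repeating it here is optional.
      INSERT INTO dbo.QuoteDetail (fkQuoteID, fkCustomerID, PartNumber1, Qty1)
      VALUES (@QuoteID, @CustID, 12345, 10);

    A follow-up thought: replacing the PartNumber1..20/Qty1..20 columns with a child table holding one row per part would let the same foreign keys cover any number of parts per quote.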

    Read the article

  • SQL Server architecture guidance

    - by Liam
    Hi, We are designing a new version of our existing product on a new schema. It's an internal web application with possibly 100 concurrent users (max). This will run on a SQL Server 2008 database. One of the discussion items recently is whether we should have a single database or split the database, for performance reasons, across 2 separate databases. The database could grow anywhere from 50-100GB over 5 years. We are developers and not DBAs, so it would be nice to get some general guidance. [I know the answer is not simple as it depends on the schema, archiving policy, amount of data etc.]
    Option 1: Single main database [this is my preferred option]. The plan would be to have all the tables in a single database, and possibly to use filegroups and partitioning to separate the data across multiple disks if required [use schemas if appropriate]. This should deal with the performance concerns. One of the comments on this was that a single server instance would still be processing this data, so there would still be a processing bottleneck. For reporting we could have a separate reporting DB, but this is still being discussed.
    Option 2: Split the database into 2 separate databases. DB1 - Customers, Accounts, Customer resources etc. DB2 - This would contain the bulk of the data [i.e. vehicle tracking data, financial transaction tables etc]. These tables would typically contain a lot of data. [It could reside on a separate server if required.] This plan would involve keeping the main data in a smaller database [DB1] and retaining the [mainly] read-only transaction-type data in a separate DB [DB2]. The UI would mainly read from DB1 and thus be more responsive. [I'm aware that this option makes it harder for referential integrity to be enforced.]
    Points for consideration: As we are at the design stage we can at least make proper use of indexes to deal with performance issues, so that's why option 1 is attractive to me, and it's more of a standard approach. For both options we are considering implementing an archiving database. Apologies for the long question. In summary, the question is: 1 DB or 2? Thanks in advance, Liam
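
    As a rough sketch of the filegroup/partitioning idea mentioned under Option 1 (the object names, file paths and boundary date below are hypothetical, not from the actual schema):

      -- Two filegroups, each backed by a file on a different disk.
      ALTER DATABASE MainDB ADD FILEGROUP FG_History;
      ALTER DATABASE MainDB ADD FILEGROUP FG_Current;
      ALTER DATABASE MainDB ADD FILE
          (NAME = 'MainDB_History', FILENAME = 'E:\Data\MainDB_History.ndf')
          TO FILEGROUP FG_History;
      ALTER DATABASE MainDB ADD FILE
          (NAME = 'MainDB_Current', FILENAME = 'D:\Data\MainDB_Current.ndf')
          TO FILEGROUP FG_Current;

      -- Partition the large transaction-style data by date across the filegroups.
      CREATE PARTITION FUNCTION pfByYear (datetime)
          AS RANGE RIGHT FOR VALUES ('2011-01-01');
      CREATE PARTITION SCHEME psByYear
          AS PARTITION pfByYear TO (FG_History, FG_Current);

      CREATE TABLE dbo.VehicleTracking
      (
          TrackingID bigint IDENTITY(1,1) NOT NULL,
          VehicleID  int NOT NULL,
          RecordedAt datetime NOT NULL,
          CONSTRAINT PK_VehicleTracking PRIMARY KEY CLUSTERED (RecordedAt, TrackingID)
      ) ON psByYear (RecordedAt);

    This keeps everything in one database (so referential integrity and a single backup chain are preserved) while still letting the heavy tables live on separate spindles.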

    Read the article

  • SQL-How to retrieve the correct data using php

    - by Programatt
    I am new to SQL so please excuse my question if it is simple. I have a database with a few tables. 1 is a users table, the others are application tables that contain the users preferences for receiving notifications about that application based on the country they are interested in. What I want to do, is retrieve the e-mail address of all users that have an interest in that country. I am struggling to think about how to do this. I currently have the following query constructed, and the code to populate the values function check($string) { if (isset($_POST[$string])) { $print = implode(', ', $_POST[$string]); //Converts an array into a single string $imanageSQLArr = Array(); if (substr_count($print,'Benelux') > 0) { $imanageSQLArr[0] = "checked"; } else { $imanageSQLArr[0] = "off"; } if (substr_count($print, 'France') > 0) { $imanageSQLArr[1] = "checked"; } else { $imanageSQLArr[1] = "off"; } if (substr_count($print, 'Germany') > 0) { $imanageSQLArr[2] = "checked"; } else { $imanageSQLArr[2] = "off"; } if (substr_count($print, 'Italy') > 0) { $imanageSQLArr[3] = "checked"; } else { $imanageSQLArr[3] = "off"; } if (substr_count($print, 'Netherlands') > 0) { $imanageSQLArr[4] = "checked"; } else { $imanageSQLArr[4] = "off"; } if (substr_count($print, 'Portugal') > 0) { $imanageSQLArr[5] = "checked"; } else { $imanageSQLArr[5] = "off"; } if (substr_count($print, 'Spain') > 0) { $imanageSQLArr[6] = "checked"; } else { $imanageSQLArr[6] = "off"; } if (substr_count($print, 'Sweden') > 0) { $imanageSQLArr[7] = "checked"; } else { $imanageSQLArr[7] = "off"; } if (substr_count($print, 'Switzerland') > 0) { $imanageSQLArr[8] = "checked"; } else { $imanageSQLArr[8] = "off"; } if (substr_count($print, 'UK') > 0) { $imanageSQLArr[9] = "checked"; } else { $imanageSQLArr[9] = "off"; } and the query $tocheck = $db->prepare( "SELECT users.email FROM users,app WHERE users.id=app.userid AND BENELUX=:BENELUX AND FRANCE=:FRANCE AND GERMANY=:GERMANY AND ITALY=:ITALY AND NETHERLANDS=:NETHERLANDS AND PORTUGAL=:PORTUGAL AND SPAIN=:SPAIN AND SWEDEN=:SWEDEN AND SWITZERLAND=:SWITZERLAND AND UK=:UK"); $tocheck->execute($country); $row = $tocheck->fetchAll(); This does retrieve data, but only people who's preferences match EXACTLY what is put (so what they haven't selected is taken into account as much as what they have). Any help would be greatly appreciated.
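
    As a hedged sketch of the usual fix (not tested against the poster's schema): build the WHERE clause only from the countries that were actually ticked, and join those conditions with OR, so a user matching any one selected country is returned and the unselected columns are ignored.

      -- Example for a visitor who ticked France and Germany only; column names
      -- follow the query in the question, which stores 'checked' / 'off'.
      SELECT DISTINCT users.email
      FROM users
      JOIN app ON app.userid = users.id
      WHERE app.FRANCE  = 'checked'
         OR app.GERMANY = 'checked';

    In the PHP, that means assembling the OR list from the checked boxes only (and skipping the query entirely when nothing is ticked) rather than binding every country column.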

    Read the article

  • Comments Parent-Child query with indentation

    - by poldoj
    I've been trying to retrieve comments to articles in a pretty common blog fashion way. Here's my sample code: -- ---------------------------- -- Sample Table structure for [dbo].[Comments] -- ---------------------------- CREATE TABLE [dbo].[Comments] ( [CommentID] int NOT NULL , [AddedDate] datetime NOT NULL , [AddedBy] nvarchar(256) NOT NULL , [ArticleID] int NOT NULL , [Body] nvarchar(4000) NOT NULL , [parentCommentID] int NULL ) GO -- ---------------------------- -- Sample Records of Comments -- ---------------------------- INSERT INTO [dbo].[Comments] ([CommentID], [AddedDate], [AddedBy], [ArticleID], [Body], [parentCommentID]) VALUES (N'1', N'2011-11-26 23:18:07.000', N'user', N'1', N'body', null); GO INSERT INTO [dbo].[Comments] ([CommentID], [AddedDate], [AddedBy], [ArticleID], [Body], [parentCommentID]) VALUES (N'2', N'2011-11-26 23:18:50.000', N'user', N'2', N'body', null); GO INSERT INTO [dbo].[Comments] ([CommentID], [AddedDate], [AddedBy], [ArticleID], [Body], [parentCommentID]) VALUES (N'3', N'2011-11-26 23:19:09.000', N'user', N'1', N'body', null); GO INSERT INTO [dbo].[Comments] ([CommentID], [AddedDate], [AddedBy], [ArticleID], [Body], [parentCommentID]) VALUES (N'4', N'2011-11-26 23:19:46.000', N'user', N'3', N'body', null); GO INSERT INTO [dbo].[Comments] ([CommentID], [AddedDate], [AddedBy], [ArticleID], [Body], [parentCommentID]) VALUES (N'5', N'2011-11-26 23:20:16.000', N'user', N'1', N'body', N'1'); GO INSERT INTO [dbo].[Comments] ([CommentID], [AddedDate], [AddedBy], [ArticleID], [Body], [parentCommentID]) VALUES (N'6', N'2011-11-26 23:20:42.000', N'user', N'1', N'body', N'1'); GO INSERT INTO [dbo].[Comments] ([CommentID], [AddedDate], [AddedBy], [ArticleID], [Body], [parentCommentID]) VALUES (N'7', N'2011-11-26 23:21:25.000', N'user', N'1', N'body', N'6'); GO -- ---------------------------- -- Indexes structure for table Comments -- ---------------------------- -- ---------------------------- -- Primary Key structure for table [dbo].[Comments] -- ---------------------------- ALTER TABLE [dbo].[Comments] ADD PRIMARY KEY ([CommentID]) GO -- ---------------------------- -- Foreign Key structure for table [dbo].[Comments] -- ---------------------------- ALTER TABLE [dbo].[Comments] ADD FOREIGN KEY ([parentCommentID]) REFERENCES [dbo]. [Comments] ([CommentID]) ON DELETE NO ACTION ON UPDATE NO ACTION GO I thought I could use a CTE query to do the job like this: WITH CommentsCTE(CommentID, AddedDate, AddedBy, ArticleID, Body, parentCommentID, lvl, sortcol) AS ( SELECT CommentID, AddedDate, AddedBy, ArticleID, Body, parentCommentID, 0, cast(CommentID as varbinary(max)) FROM Comments UNION ALL SELECT P.CommentID, P.AddedDate, P.AddedBy, P.ArticleID, P.Body, P.parentCommentID, PP.lvl+1, CAST(sortcol + CAST(P.CommentID AS BINARY(4)) AS VARBINARY(max)) FROM Comments AS P JOIN CommentsCTE AS PP ON P.parentCommentID = PP.CommentID ) SELECT REPLICATE('--', lvl) + right('>',lvl)+ AddedBy + ' :: ' + Body, CommentID, parentCommentID, lvl FROM CommentsCTE WHERE ArticleID = 1 order by sortcol go but the results have been very disappointing so far, and after days of tweaking I decided to ask for help. I was looking for a method to display hierarchical comments to articles like it happens in blogs. [edit] The problem with this query is that I get duplicates because I couldn't figure out how to properly select the ArticleID which I want comments from to display. I'm also looking for a method that sorts children entries by date within a same level. 
An example of what I'm trying to accomplish could be something like: (ArticleID[post retrieved]) ------------------------- ------------------------- (Comments[related to the article id above]) first comment[no parent] --[first child to first comment] --[second child to first comment] ----[first child to second child comment to first comment] --[third child to first comment] ----[first child to third child comment to first comment] ------[(recursive child): first child to first child to third child comment to first comment] ------[(recursive child): second child to first child to third child comment to first comment] second comment[no parent] third comment[no parent] --[first child to third comment] I kinda got myself lost in all this mess...I appreciate any help or simpler ways to get this working. Thanks
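
    One hedged way to get there (untested, but built only from the table above): anchor the CTE on the top-level comments of the wanted article, and build the sort key from AddedDate plus CommentID so siblings sort by date within each level.

      DECLARE @ArticleID int = 1;

      WITH CommentsCTE AS
      (
          SELECT CommentID, AddedDate, AddedBy, ArticleID, Body, parentCommentID,
                 0 AS lvl,
                 CAST(CONVERT(char(23), AddedDate, 121)
                      + RIGHT('0000000000' + CAST(CommentID AS varchar(10)), 10) AS varchar(max)) AS sortcol
          FROM dbo.Comments
          WHERE ArticleID = @ArticleID
            AND parentCommentID IS NULL          -- top-level comments only

          UNION ALL

          SELECT C.CommentID, C.AddedDate, C.AddedBy, C.ArticleID, C.Body, C.parentCommentID,
                 P.lvl + 1,
                 P.sortcol + CONVERT(char(23), C.AddedDate, 121)
                           + RIGHT('0000000000' + CAST(C.CommentID AS varchar(10)), 10)
          FROM dbo.Comments AS C
          JOIN CommentsCTE AS P ON C.parentCommentID = P.CommentID
      )
      SELECT REPLICATE('--', lvl) + AddedBy + ' :: ' + Body AS Comment,
             CommentID, parentCommentID, lvl
      FROM CommentsCTE
      ORDER BY sortcol;

    Filtering on ArticleID in the anchor (rather than in the outer SELECT) is what removes the duplicates, because child comments are then only reached through their own article's root comments.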

    Read the article

  • Issue with SQL query for activity stream/feed

    - by blabus
    I'm building an application that allows users to recommend music to each other, and am having trouble building a query that would return a 'stream' of recommendations that involve both the user themselves, as well as any of the user's friends. This is my table structure: Recommendations ID Sender Recipient [other columns...] -- ------ --------- ------------------ r1 u1 u3 ... r2 u3 u2 ... r3 u4 u3 ... Users ID Email First Name Last Name [other columns...] --- ----- ---------- --------- ------------------ u1 ... ... ... ... u2 ... ... ... ... u3 ... ... ... ... u4 ... ... ... ... Relationships ID Sender Recipient Status [other columns...] --- ------ --------- -------- ------------------ rl1 u1 u2 accepted ... rl2 u3 u1 accepted ... rl3 u1 u4 accepted ... rl4 u3 u2 accepted ... So for user 'u4' (who is friends with 'u1'), I want to query for a 'stream' of recommendations relevant to u4. This stream would include all recommendations in which either the sender or recipient is u4, as well as all recommendations in which the sender or recipient is u1 (the friend). This is what I have for the query so far: SELECT * FROM recommendations WHERE recommendations.sender IN ( SELECT sender FROM relationships WHERE recipient='u4' AND status='accepted' UNION SELECT recipient FROM relationships WHERE sender='u4' AND status='accepted') OR recommendations.recipient IN ( SELECT sender FROM relationships WHERE recipient='u4' AND status='accepted' UNION SELECT recipient FROM relationships WHERE sender='u4' AND status='accepted') UNION SELECT * FROM recommendations WHERE recommendations.sender='u4' OR recommendations.recipient='u4' GROUP BY recommendations.id ORDER BY datecreated DESC Which seems to work, as far as I can see (I'm no SQL expert). It returns all of the records from the Recommendations table that would be 'relevant' to a given user. However, I'm now having trouble also getting data from the Users table as well. The Recommendations table has the sender's and recipient's ID (foreign keys), but I'd also like to get the first and last name of each as well. I think I require some sort of JOIN, but I'm lost on how to proceed, and was looking for help on that. (And also, if anyone sees any areas for improvement in my current query, I'm all ears.) Thanks!
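
    A hedged sketch of the join being asked for (the first/last name column names are assumed, since the question only shows them as headings): keep the existing stream query as a derived table and join Users twice, once for the sender and once for the recipient.

      SELECT r.*,
             s.first_name AS sender_first_name,
             s.last_name  AS sender_last_name,
             p.first_name AS recipient_first_name,
             p.last_name  AS recipient_last_name
      FROM (
              -- the stream query from above goes here unchanged
              SELECT * FROM recommendations
              WHERE sender = 'u4' OR recipient = 'u4'
           ) AS r
      JOIN users AS s ON s.id = r.sender
      JOIN users AS p ON p.id = r.recipient
      ORDER BY r.datecreated DESC;

    The two aliases matter: one join resolves the sender's name and the other the recipient's, so both sets of columns come back on every recommendation row.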

    Read the article

  • two sql queries in one place

    - by Luke
    <?php $results = mysql_query("SELECT * FROM ".TBL_SUB_RESULTS." WHERE user_submitted != '$_SESSION[username]' AND home_user = '$_SESSION[username]' OR away_user = '$_SESSION[username]' ") ; $num_rows = mysql_num_rows($results); if ($num_rows > 0) { while( $row = mysql_fetch_assoc($results)) { extract($row); $q = mysql_query("SELECT name FROM ".TBL_FRIENDLY." WHERE id = '$ccompid'"); while( $row = mysql_fetch_assoc($q)) { extract($row); ?> <table cellspacing="10" style='border: 1px dotted' width="300" bgcolor="#eeeeee"> <tr> <td><b><? echo $name; ?></b></td> </tr><tr> <td width="100"><? echo $home_user; ?></td> <td width="50"><? echo $home_score; ?></td> <td>-</td> <td width="50"><? echo $away_score; ?></td> <td width="100"><? echo $away_user; ?></td> </tr><tr> <td colspan="2"><A HREF="confirmresult.php?fixid=<? echo $fix_id; ?>">Accept / Decline</a></td> </tr></table><br> <? } } } else { echo "<b>You currently have no results awaiting confirmation</b>"; } ?> I am trying to run two queries as you can see. But they aren't both working. Is there a better way to structure this, I am having a brain freeze! Thanks OOOH by the way, my SQL wont stay in this form! I will protect it afterwards
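
    A hedged sketch of one restructuring (the table names are placeholders for whatever TBL_SUB_RESULTS and TBL_FRIENDLY resolve to, and :username stands for the session username): the per-row name lookup can be folded into the first query with a JOIN, and the AND/OR mix wants parentheses so it matches the intended logic.

      SELECT r.*, f.name
      FROM sub_results AS r
      JOIN friendly    AS f ON f.id = r.ccompid
      WHERE r.user_submitted <> :username
        AND (r.home_user = :username OR r.away_user = :username);

    One loop over that single result set then renders each row, instead of issuing a second query inside the while loop.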

    Read the article

  • Rails - How do i update a records value in a join model ?

    - by ChrisWesAllen
    I have a join model in a HABTM relationship with a through association (details below). I'm trying to find a record, find the value of that record's attribute, change the value, and update the record, but am having a hard time doing it. The model setup is this: User.rb has_many :choices has_many :interests, :through => :choices Interest.rb has_many :choices has_many :users, :through => :choices Choice.rb belongs_to :user belongs_to :interest and Choice has user_id, interest_id, score as fields. And I find the 'object' like so: @choice = Choice.where(:user_id => @user.id, :interest_id => interest.id) So the model Choice has an attribute called :score. How do I find the value of the score column, +1/-1 it, and then resave? I tried @choice.score = @choice.score + 1 @choice.update_attributes(params[:choice]) flash[:notice] = "Successfully updated choices value." but I get "undefined method score"... What did I miss?
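
    The "undefined method" happens because where(...) returns a relation rather than a single record, so something like Choice.where(...).first is needed before reading score. For what it's worth, the underlying increment can also be pushed down to the database as one statement; the table and column names below are inferred from the model and the ids are placeholders.

      -- Hedged sketch: one atomic read-modify-write in the database.
      UPDATE choices
      SET score = score + 1
      WHERE user_id = :user_id
        AND interest_id = :interest_id;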

    Read the article

  • R: How to update a package and keep it from reverting to the original?

    - by John
    I want to upgrade the package ggplot2: library(ggplot2) packageDescription("ggplot2")["Version"] > 0.8.3 But the current version is 0.8.7. I tried update.packages(), which seemed to work OK. But it still returned older version 0.8.3. So I downloaded and installed the package source from Cran, which says 0.8.7 in the download page. I then install it via the GUI menu in R. It returns ** building package indices ... * DONE (ggplot2) I then run: packageDescription("ggplot2")["Version"] > 0.8.3 And still I have the older version! I don't know why this is not working, what's more I had already come across this problem before and solved it (I can't remember exactly what) but now it has gone back to the older version! What's the easiest way to keep packages like this updated automatically and not have them refer back to older packages?

    Read the article

  • How to update Eclipse from 3.4 (Ganymede) to 3.5 (Galileo)?

    - by Vilx-
    I've got my Eclipse 3.4 environment set up nice and cozy the way I like it. It took me some time, too, to find all the plugins (Mylyn, PDT, Subclipse), set all the settings, etc. Now I see that some of the plugins (like PDT) only support 3.5 in their latest versions. Is it possible to update from 3.4 to 3.5? I'd hate to do it all again. I read in some mailing list that it's possible, but the conversation trailed off in another direction. Google wasn't much help, and neither was Eclipse's documentation.

    Read the article

  • MySQL UPDATE WHERE IN for each listed value separately?

    - by Tom
    Hi, I've got the following type of SQL: UPDATE photo AS f LEFT JOIN car AS c ON f.car_id=c.car_id SET f.photo_status=1 , c.photo_count=c.photo_count+1 WHERE f.photo_id IN ($ids) Basically, two tables (car & photo) are related. The list in $ids contains unique photo ids, such as (34, 87, 98, 12). With the query, I'm setting the status of each photo in that list to "1" in the photo table and simultaneously incrementing the photo count in the car table for the car at hand. It works but there's one snag: Because the list can contain multiple photo ids that relate to the same car, the photo count only ever gets incremented once. If the list had 10 photos associated with the same car, photo_count would become 1 .... whereas I'd like to increment it to 10. Is there a way to make the incrementation occur for each photo individually through the join, as opposed to MySQL overthinking it for me? I hope the above makes sense. Thanks.
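
    A hedged rewrite that should do what is being asked (untested): precompute the number of listed photos per car in a derived table and add that count, since MySQL updates each car row at most once in a multi-table UPDATE no matter how many photo rows match it.

      UPDATE photo AS f
      JOIN car AS c
        ON c.car_id = f.car_id
      JOIN (SELECT car_id, COUNT(*) AS cnt
            FROM photo
            WHERE photo_id IN (34, 87, 98, 12)
            GROUP BY car_id) AS n
        ON n.car_id = c.car_id
      SET f.photo_status = 1,
          c.photo_count  = c.photo_count + n.cnt
      WHERE f.photo_id IN (34, 87, 98, 12);

    Each photo row still gets its status set individually, while each car's count goes up by however many of its photos appear in the list.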

    Read the article

  • Kernel Error during upgrade or update commands

    - by Ashesh
    I am getting these errors during sudo apt-get update and upgrade... I tried all possible options no success. I recently upgraded to 13.04 and had problems with Broadcom WiFi. Fixed tat issues using the clean script... but looks like it did not install the Kernel properly.. Here is the o/p of the few scripts I ran: ashesh@ashesh-HPdv4:~$ sudo dpkg -r bcmwl-kernel-source (Reading database ... 175338 files and directories currently installed.) Removing bcmwl-kernel-source ... Removing all DKMS Modules Done. update-initramfs: deferring update (trigger activated) Processing triggers for initramfs-tools ... update-initramfs: Generating /boot/initrd.img-3.8.0-25-generic cp: reading ‘/lib/modules/3.8.0-25-generic/kernel/drivers/mtd/mtd.ko’: Input/output error cp: failed to extend ‘/tmp/mkinitramfs_8gjKwQ//lib/modules/3.8.0-25-generic/kernel/drivers/mtd/mtd.ko’: Input/output error cp: reading ‘/lib/modules/3.8.0-25-generic/kernel/drivers/net/ethernet/sfc/sfc.ko’: Input/output error cp: failed to extend ‘/tmp/mkinitramfs_8gjKwQ//lib/modules/3.8.0-25-generic/kernel/drivers/net/ethernet/sfc/sfc.ko’: Input/output error cp: reading ‘/lib/modules/3.8.0-25-generic/kernel/drivers/net/ethernet/mellanox/mlx4/mlx4_core.ko’: Input/output error cp: failed to extend ‘/tmp/mkinitramfs_8gjKwQ//lib/modules/3.8.0-25-generic/kernel/drivers/net/ethernet/mellanox/mlx4/mlx4_core.ko’: Input/output error cp: reading ‘/lib/modules/3.8.0-25-generic/kernel/drivers/net/ethernet/broadcom/bnx2x/bnx2x.ko’: Input/output error cp: failed to extend ‘/tmp/mkinitramfs_8gjKwQ//lib/modules/3.8.0-25-generic/kernel/drivers/net/ethernet/broadcom/bnx2x/bnx2x.ko’: Input/output error cp: reading ‘/lib/modules/3.8.0-25-generic/kernel/drivers/net/ethernet/broadcom/cnic.ko’: Input/output error cp: failed to extend ‘/tmp/mkinitramfs_8gjKwQ//lib/modules/3.8.0-25-generic/kernel/drivers/net/ethernet/broadcom/cnic.ko’: Input/output error cp: reading ‘/lib/modules/3.8.0-25-generic/kernel/drivers/net/ethernet/qlogic/netxen/netxen_nic.ko’: Input/output error cp: failed to extend ‘/tmp/mkinitramfs_8gjKwQ//lib/modules/3.8.0-25-generic/kernel/drivers/net/ethernet/qlogic/netxen/netxen_nic.ko’: Input/output error cp: reading ‘/lib/modules/3.8.0-25-generic/kernel/drivers/net/ethernet/brocade/bna/bna.ko’: Input/output error cp: failed to extend ‘/tmp/mkinitramfs_8gjKwQ//lib/modules/3.8.0-25-generic/kernel/drivers/net/ethernet/brocade/bna/bna.ko’: Input/output error cp: reading ‘/lib/modules/3.8.0-25-generic/kernel/drivers/scsi/libfc/libfc.ko’: Input/output error cp: failed to extend ‘/tmp/mkinitramfs_8gjKwQ//lib/modules/3.8.0-25-generic/kernel/drivers/scsi/libfc/libfc.ko’: Input/output error cp: reading ‘/lib/modules/3.8.0-25-generic/kernel/drivers/scsi/advansys.ko’: Input/output error cp: failed to extend ‘/tmp/mkinitramfs_8gjKwQ//lib/modules/3.8.0-25-generic/kernel/drivers/scsi/advansys.ko’: Input/output error cp: reading ‘/lib/modules/3.8.0-25-generic/kernel/drivers/scsi/be2iscsi/be2iscsi.ko’: Input/output error cp: failed to extend ‘/tmp/mkinitramfs_8gjKwQ//lib/modules/3.8.0-25-generic/kernel/drivers/scsi/be2iscsi/be2iscsi.ko’: Input/output error cp: reading ‘/lib/modules/3.8.0-25-generic/kernel/drivers/scsi/bnx2i/bnx2i.ko’: Input/output error cp: failed to extend ‘/tmp/mkinitramfs_8gjKwQ//lib/modules/3.8.0-25-generic/kernel/drivers/scsi/bnx2i/bnx2i.ko’: Input/output error Bus error (core dumped) depmod: ../libkmod/libkmod-elf.c:207: elf_get_mem: Assertion `offset < elf->size' failed. 
Aborted (core dumped) I am not a techie but I need your support to resolve this without re-installation from scratch....

    Read the article

  • Spatial data in the UK

    - by simonsabin
    I am just loving the fact that the Ordnance Survey has now released a huge amount of data that can be used freely. I've downloaded the Panorama (tm) data http://www.ordnancesurvey.co.uk/oswebsite/products/land-form-panorama-contours/index.html , which is all the contours for the UK. This I've loaded into SQL Server using Safe Computing's FME ( http://www.safe.com/ ), because the data is an AutoCAD DXF file and translating that to SQL Server spatial data is not easy. The FME workbench is not...(read more)
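
    As a hand-rolled illustration only (FME does the real DXF translation): contour data in SQL Server typically ends up in a geometry column using British National Grid coordinates. The table name, coordinates and height below are made up.

      CREATE TABLE dbo.Contours
      (
          ContourID int IDENTITY(1,1) PRIMARY KEY,
          HeightM   int NOT NULL,
          Shape     geometry NOT NULL
      );

      -- SRID 27700 = OSGB36 / British National Grid
      INSERT INTO dbo.Contours (HeightM, Shape)
      VALUES (150, geometry::STGeomFromText(
          'LINESTRING (451220 206830, 451260 206870, 451300 206900)', 27700));

      SELECT ContourID, HeightM, Shape.STLength() AS LengthM
      FROM dbo.Contours;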

    Read the article

  • Data Services Update for .NET 3.5 SP1…

    - by joelvarty
    I have started writing OData-style services for a couple of clients, and I noticed that a lot of the classes in the API were missing… That's because I needed to download the update; just having .NET 3.5 SP1 wasn't enough: http://blogs.msdn.com/astoriateam/archive/2010/01/27/data-services-update-for-net-3-5-sp1-available-for-download.aspx   More later - joel

    Read the article

  • SQL Saturday Richmond, VA

    - by Mike
    Very excited to announce that I’ll be holding 2 sessions at SQL Saturday in VA on April 10th. If there are any frequent readers of SQLTeam.com attending, please make sure to say hi! Topics I’m covering are partitioning & loading data real time and an introduction to performance tuning. Hope to see you there! SQL Saturday Richmond Schedule

    Read the article

  • Working with Temporal Data in SQL Server

    - by Dejan Sarka
    My third Pluralsight course, Working with Temporal Data in SQL Server, is published. I am really proud of the second part of the course, where I discuss optimization of temporal queries. This was a nearly impossible task for decades; the first solutions appeared only recently. I present six solutions altogether (and one more that is not a solution), and I invented four of them. http://pluralsight.com/training/Courses/TableOfContents/working-with-temporal-data-sql-server

    Read the article

  • Making Use of Plan Explorer in my own Environment

    - by Jonathan Kehayias
    Back in October 2010, I briefly blogged about the SQL Sentry Plan Explorer in my blog post wrap-up for SQLBits 7, and how impressed I was with what I saw of the alpha demo from Greg Gonzalez ( Blog | Twitter ) while I was at SQLBits 7 in York.  To be 100% honest and transparent, Greg gave me early access to this tool after discussing it at SQLBits 7, and I had the opportunity to test a number of pre-beta releases, where I was able to offer significant feedback and submit bugs in the...(read more)

    Read the article

  • Some thoughts on the Virtualization Feedback in the SSWUG Newsletters

    - by Jonathan Kehayias
    Last Thursday, March 25, 2010, the topic of virtualization of SQL Server came up in the SSWUG Newsletter, with Steven Wynkoop asking if people's perceptions and experiences have changed since the last time he covered virtualizing SQL Server.  I unfortunately missed the last coverage of this topic, but it appears from the newsletter that there was a general consensus that a low-traffic solution could be fine, but if you had a heavy-hitting application, the net advice was to avoid a virtual environment...(read more)

    Read the article

  • Understanding Column Properties for a SQL Server Table

    Designing a table can be a little complicated if you don’t have the correct knowledge of data types, relationships, and even column properties. In this tip, Brady Upton goes over the column properties and provides examples.
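
    Not taken from the tip itself, but a generic illustration of the kinds of column properties such a design exercise touches: identity, nullability, defaults, computed columns and sparse columns.

      CREATE TABLE dbo.OrderLine
      (
          OrderLineID int IDENTITY(1,1) NOT NULL PRIMARY KEY,   -- identity property
          Qty         int NOT NULL CONSTRAINT DF_OrderLine_Qty DEFAULT (1),
          UnitPrice   decimal(9,2) NOT NULL,
          LineTotal   AS (Qty * UnitPrice) PERSISTED,           -- computed column
          Note        nvarchar(200) SPARSE NULL                 -- sparse, nullable
      );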

    Read the article

  • Red Gate Rolls Out the Red Carpet for SQL Server Users

    SQL in the City, the unique event for database developers and administrators organized by Red Gate Software, hits the streets of London and Seattle this fall. Now in its fourth year, it features presentations by some of the world’s top SQL Server speakers.

    Read the article

  • Want to know how SQL Server logging works?

    - by Jonathan Kehayias
    Have you ever wondered why SQL Server logging works the way it does?  While I have long understood the importance behind the SQL Server Transaction Log, I have never actually understood exactly why it works the way it does, despite having attended Paul Randal’s ( Blog / Twitter ) session at PASS Summit last year.  Over the last few months I have been working on writing a book on troubleshooting the most common problems that I see asked on the forums, and as a part of that I have been digging...(read more)

    Read the article

  • Book review: Microsoft System Center Enterprise Suite Unleashed

    - by BuckWoody
    I know, I know – what’s a database guy doing reading a book on System Center? Well, I need it from time to time. System Center is actually a collection of about 7 different products that you can use to manage and monitor your software and hardware, from drive space through Microsoft Office, UNIX systems, and yes, SQL Server. It’s that last part I care about the most, and so I’ve dealt with Data Protection Manager and System Center Operations Manager (I call it SCOM) in SQL Server. But I wasn’t familiar with the rest of the suite, nor was I as familiar as I needed to be with the “Essentials” release – a separate product that groups together the main features of System Center into a single offering for smaller organizations. These companies usually run with a smaller IT shop, so they sometimes opt for this product to help them monitor everything, including SQL Server. So I picked up “Microsoft System Center Enterprise Suite Unleashed” by Chris Amaris and a cast of others. I don’t normally like to get a technical book by multiple authors – I just find that most of the time it’s quite jarring to switch from author to author, but I think this group did pretty well here. The first chapter on introducing System Center has helped me talk with others about what the product does, and which pieces fit well together with SQL Server. The writing is well done, and I didn’t find a jump from author to author as I went along. The information is sequential, meaning that they lead you from install to configuration and then use. It’s very much a concepts-and-how-to book, and a big one at that – over 950 pages of learning! It was a pretty quick read, though, since I skipped the installation parts and there are lots of screenshots. I’m not sure you’d be an expert on the product when you finish reading this book, but I would say you’re more than halfway there. I would say it suits someone who learns through examples best, since they have a lot of step-by-step examples. I do recommend that you take a look if you have to interact with this product, or even if you are a smaller shop and you’re the primary IT resource. The last few chapters deal with System Center Essentials, and honestly it was the best part of the book for me.

    Read the article

  • General monitoring for SQL Server Analysis Services using Performance Monitor

    - by Testas
    A recent customer engagement required a monitoring solution for SSAS. Because of the time restrictions placed upon this, native Windows Performance Monitor (Perfmon) and SQL Server Profiler were used; a third-party tool would have meant the customer providing an additional monitoring server that was not available. I wanted to outline the performance monitoring counters that were used to monitor the system on which SSAS was running. Because of the slow query performance that was occurring in certain scenarios, Perfmon was used to establish whether any pressure was being placed on the disk, CPU or memory subsystem when concurrent connections run the same query, and Profiler to pinpoint how the query was being managed within SSAS (Profiler I will leave for another blog). This guide is not designed to provide a definitive list of what should be used when monitoring SSAS; different situations may require the addition or removal of counters. However, I hope that it serves as a good basis for starting your monitoring of SSAS. I would also like to acknowledge Chris Webb’s awesome chapters from “Expert Cube Development” that also helped shape my monitoring strategy: http://cwebbbi.spaces.live.com/blog/cns!7B84B0F2C239489A!6657.entry
    Simulating Connections: To simulate the additional connections to the SSAS server whilst monitoring, I used ascmd to run multiple connections against the typical and worst-performing queries that were identified by the customer. A similar script can be downloaded from CodePlex at http://www.codeplex.com/SQLSrvAnalysisSrvcs (file name: ASCMD_StressTestingScripts.zip).
    Performance Monitor: Within Performance Monitor, a counter log was created that contained the list of counters below. The important point to note when running the counter log is that the RUN AS property within the counter log properties should be changed to an account that has rights to the SSAS instance when monitoring MSAS counters. Failure to do so means that the counter log runs under the system account; no errors or warnings are given while running the counter log, and it is not until you need to view the MSAS counters that you find they are not displayed. If your connection simulation takes hours, this could prove quite frustrating if not done beforehand.
    The counters used (Object \ Counter (Instance): justification):
    - System \ Processor Queue Length (N/A): Indicates how many threads are waiting for execution against the processor. If this counter is consistently higher than around 5 when processor utilization approaches 100%, then this is a good indication that there is more work (active threads) available (ready for execution) than the machine's processors are able to handle.
    - System \ Context Switches/sec (N/A): Measures how frequently the processor has to switch from user- to kernel-mode to handle a request from a thread running in user mode. The heavier the workload running on your machine, the higher this counter will generally be, but over the long term the value of this counter should remain fairly constant. If this counter suddenly starts increasing, however, it may be an indication of a malfunctioning device, especially if the Processor \ Interrupts/sec (_Total) counter shows a similar unexplained increase.
    - Process \ % Processor Time (sqlservr): Should definitely be used if Processor \ % Processor Time (_Total) is maxing at 100%, to assess the effect of the SQL Server process on the processor.
    - Process \ % Processor Time (msmdsrv): Should definitely be used if Processor \ % Processor Time (_Total) is maxing at 100%, to assess the effect of the Analysis Services process on the processor.
    - Process \ Working Set (sqlservr): If the Memory \ Available MBytes counter is decreasing, this counter can be run to indicate whether the process is consuming larger and larger amounts of RAM. Process(instance) \ Working Set measures the size of the working set for each process, which indicates the number of allocated pages the process can address without generating a page fault.
    - Process \ Working Set (msmdsrv): As above, for the Analysis Services process.
    - Processor \ % Processor Time (_Total and individual cores): Measures the total utilization of the processor by all running processes. If the machine has multiple processors, be mindful that _Total only provides an average.
    - Processor \ % Privileged Time (_Total): Shows how the OS is handling basic IO requests. If kernel-mode utilization is high, your machine is likely underpowered, as it is too busy handling basic OS housekeeping functions to be able to effectively run other applications.
    - Processor \ % User Time (_Total): Shows how the applications are behaving from a processor perspective; a high percentage utilisation suggests that the server is dealing with too many applications and may require upgrading the hardware or scaling out.
    - Processor \ Interrupts/sec (_Total): The average rate, in incidents per second, at which the processor received and serviced hardware interrupts. It should be consistent over time, but a sudden unexplained increase could indicate a device malfunction, which can be confirmed using the System \ Context Switches/sec counter.
    - Memory \ Pages/sec (N/A): Indicates the rate at which pages are read from or written to disk to resolve hard page faults. This counter is a primary indicator of the kinds of faults that cause system-wide delays, and the primary counter to watch for signs of insufficient RAM to meet your server's needs. A good idea here is to configure a Perfmon alert that triggers when the number of pages per second exceeds 50 per paging disk on your system. You may also want to review the configuration of the page file on the server.
    - Memory \ Available MBytes (N/A): The amount of physical memory, in megabytes, available to processes running on the computer. If this counter is greater than 10% of the actual RAM in your machine then you probably have more than enough RAM. Monitor it regularly to see if any downward trend develops, and set an alert to trigger if it drops below 2% of the installed RAM.
    - Physical Disk \ Disk Transfers/sec (each physical disk): If it goes above 10 disk I/Os per second then you have poor response time for your disk.
    - Physical Disk \ % Idle Time (_Total): If Disk Transfers/sec is above 25 disk I/Os per second, use this counter, which measures the percentage of time that the disk is idle during the measurement interval. If this counter falls below 20%, read/write requests are likely queuing up for a disk which is unable to service them in a timely fashion.
    - Physical Disk \ Disk Queue Length (the OLAP and SQL physical disks): A value that is consistently less than 2 means that the disk system is handling the IO requests against the physical disk.
    - Network Interface \ Bytes Total/sec (the NIC): Should be monitored over a period of time to see if there is an increase or decrease in network utilisation.
    - Network Interface \ Current Bandwidth (the NIC): An estimate of the current bandwidth of the network interface in bits per second (bps).
    - MSAS 2005: Memory \ Memory Limit High KB (N/A): Shows (as a percentage) the high memory limit configured for SSAS in C:\Program Files\Microsoft SQL Server\MSAS10.MSSQLSERVER\OLAP\Config\msmdsrv.ini.
    - MSAS 2005: Memory \ Memory Limit Low KB (N/A): Shows (as a percentage) the low memory limit configured for SSAS in the same msmdsrv.ini file.
    - MSAS 2005: Memory \ Memory Usage KB (N/A): Displays the memory usage of the server process.
    - MSAS 2005: Memory \ File Store KB (N/A): Displays the amount of memory that is reserved for the cache. Note that if the total memory limit in msmdsrv.ini is set to 0, no memory is reserved for the cache.
    - MSAS 2005: Storage Engine Query \ Queries from Cache Direct/sec (N/A): Displays the rate of queries answered directly from the cache.
    - MSAS 2005: Storage Engine Query \ Queries from Cache Filtered/sec (N/A): Displays the rate of queries answered by filtering an existing cache entry.
    - MSAS 2005: Storage Engine Query \ Queries from File/sec (N/A): Displays the rate of queries answered from files.
    - MSAS 2005: Storage Engine Query \ Average time/query (N/A): Displays the average time of a query.
    - MSAS 2005: Connection \ Current connections (N/A): Displays the number of connections against the SSAS instance.
    - MSAS 2005: Connection \ Requests/sec (N/A): Displays the rate of query requests per second.
    - MSAS 2005: Locks \ Current Lock Waits (N/A): Displays the number of connections waiting on a lock.
    - MSAS 2005: Threads \ Query Pool Job Queue Length (N/A): The number of queries in the job queue.
    - MSAS 2005: Proc Aggregations \ Temp file bytes written/sec (N/A): Shows the number of bytes of data written to a temporary file.
    - MSAS 2005: Proc Aggregations \ Temp file rows written/sec (N/A): Shows the number of rows of data written to a temporary file.

    Read the article
