Search Results

Search found 5641 results on 226 pages for 'maintenance plan'.


  • Boot to VHD backup plan

    - by Josh Barker
    I have a machine that I just reinstalled Windows and all of my applications onto... what a chore that is. I want to avoid this entirely from now on by creating an image. My first thought was to see if it is possible to copy a VHD file while you are booted into it, since I am using Windows 7 Ultimate as boot-to-VHD (without a parent machine). Is this possible, and if so, how could I accomplish it? Keep in mind, this is my personal machine and I'm trying to keep things inexpensive (a good script would work). Thanks, Josh

    Read the article

  • Clone a Red Hat RAID as part of a disaster recovery plan

    - by Campo
    I am looking for recommendations on cloning a Red Hat mirrored RAID to a single hard drive located in the same machine. The idea is that if the server's hardware ever has an issue, we have a similar hardware machine ready to go: all we would have to do is pop in the cloned drive. If the server's RAID ever failed, we could just switch to the single drive to maintain uptime, and restore the original configuration on the spare server from a backup. This is a restaurant and they are open 7 days a week. We do have time from 12:00 am to 9:00 am to perform the necessary steps for a clone, and we are talking about under 10 GB of information. There is a database on the server. I have looked into rsync and Clonezilla, but I am just not confident that either is capable of completing the task I want. Looking for some suggestions, and possibly a step-by-step guide if you could be so kind.
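
    One possible approach, sketched below as an assumption rather than a tested recipe: quiesce the database, mirror the filesystem to the spare drive with rsync, then make the spare bootable. Mount points, service names, and device names here are all hypothetical.

        # stop the database briefly so its files are copied in a consistent state
        service mysqld stop

        # mirror the root filesystem onto the spare drive mounted at /mnt/clone,
        # preserving permissions, ACLs, xattrs and hard links; skip pseudo-filesystems
        rsync -aAXH --delete \
            --exclude=/proc/ --exclude=/sys/ --exclude=/dev/ \
            --exclude=/tmp/ --exclude=/mnt/ \
            / /mnt/clone/

        service mysqld start

        # install a boot loader on the spare drive (GRUB legacy; device name is a guess)
        grub-install --root-directory=/mnt/clone /dev/sdb

    Run nightly from cron, the 12:00 am to 9:00 am window is far more than enough for 10 GB, and after the first full pass rsync only transfers changes.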

    Read the article

  • Setting up IIS7 to mimic a GoDaddy shared hosting plan

    - by NerdFury
    I host multiple domains on a GoDaddy shared hosting account. I would like to set up a website locally in IIS 7 that mimics the setup of my hosted account, so that I can test and debug applications locally before deploying; debugging after deploying, or discovering there are issues after deploying, is frustrating. I have created a folder WebRoot and put my main application in that folder. I created a website in IIS 7 and pointed it at that folder. I set up bindings with a fake domain, and created a matching entry in my hosts file to make the fake domain point at 127.0.0.1. I then created a folder www.otherdomain.com under WebRoot, created an application underneath my website, and pointed it at this folder. I can't find how I can add bindings to the web application to have it referenced as a different fake domain, rather than as a subdirectory under my root domain. What would be the proper way to set up IIS to best simulate the environment on the GoDaddy servers?
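
    In IIS, host-header bindings belong to sites, not to applications beneath a site, so an application under the root site can't be addressed by its own domain. One way to mimic the GoDaddy layout is a separate site per fake domain, each pointed at its own folder under WebRoot. A sketch using appcmd (the site name, path, and host name here are hypothetical):

        REM create a second site with its own host-header binding
        %windir%\system32\inetsrv\appcmd add site /name:"OtherDomain" ^
            /bindings:http/*:80:www.otherdomain.local ^
            /physicalPath:"C:\WebRoot\www.otherdomain.com"

    Then add a matching hosts-file entry (127.0.0.1 www.otherdomain.local), just as you did for the first fake domain.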

    Read the article

  • ESXi disaster recovery plan

    - by Marlin
    I have a VMware infrastructure where I am using the free version of ESXi 5. As a result, I cannot use vMotion and the other cool features that come with a paid ESXi license. I am using snapshots for backups, but they are stored on the local hard drive. I need a better backup scenario, one where I can recover in the event of a hard drive failure. I tried Openfiler but could not get it right. What backup method can I try, given my situation?

    Read the article

  • Hello Operator, My Switch Is Bored

    - by Paul White
    This is a post for T-SQL Tuesday #43 hosted by my good friend Rob Farley. The topic this month is Plan Operators. I haven't taken part in T-SQL Tuesday before, but I do like to write about execution plans, so this seemed like a good time to start.

    This post is in two parts. The first part is primarily an excuse to use a pretty bad play on words in the title of this blog post (if you're too young to know what a telephone operator or a switchboard is, I hate you). The second part of the post looks at an invisible query plan operator (so to speak).

    1. My Switch Is Bored

    Allow me to present the rare and interesting execution plan operator, Switch. Following the Books Online description of Switch, I had a go at producing a Fast Forward Cursor plan that used the TOP operator, but had no luck. That may be due to my lack of skill with cursors, I'm not too sure. The only application of Switch in SQL Server 2012 that I am familiar with requires a local partitioned view:

        CREATE TABLE dbo.T1 (c1 int NOT NULL CHECK (c1 BETWEEN 00 AND 24));
        CREATE TABLE dbo.T2 (c1 int NOT NULL CHECK (c1 BETWEEN 25 AND 49));
        CREATE TABLE dbo.T3 (c1 int NOT NULL CHECK (c1 BETWEEN 50 AND 74));
        CREATE TABLE dbo.T4 (c1 int NOT NULL CHECK (c1 BETWEEN 75 AND 99));
        GO
        CREATE VIEW V1
        AS
        SELECT c1 FROM dbo.T1
        UNION ALL
        SELECT c1 FROM dbo.T2
        UNION ALL
        SELECT c1 FROM dbo.T3
        UNION ALL
        SELECT c1 FROM dbo.T4;

    Not only that, but it needs to be an updatable local partitioned view. We'll need some primary keys to meet that requirement:

        ALTER TABLE dbo.T1 ADD CONSTRAINT PK_T1 PRIMARY KEY (c1);
        ALTER TABLE dbo.T2 ADD CONSTRAINT PK_T2 PRIMARY KEY (c1);
        ALTER TABLE dbo.T3 ADD CONSTRAINT PK_T3 PRIMARY KEY (c1);
        ALTER TABLE dbo.T4 ADD CONSTRAINT PK_T4 PRIMARY KEY (c1);

    We also need an INSERT statement that references the view. Even more specifically, to see a Switch operator, we need to perform a single-row insert (multi-row inserts use a different plan shape):

        INSERT dbo.V1 (c1) VALUES (1);

    And now…the execution plan. The Constant Scan manufactures a single row with no columns. The Compute Scalar works out which partition of the view the new value should go in. The Assert checks that the computed partition number is not null (if it is, an error is returned). The Nested Loops Join executes exactly once, with the partition id as an outer reference (correlated parameter). The Switch operator checks the value of the parameter and executes the corresponding input only. If the partition id is 0, the uppermost Clustered Index Insert is executed, adding a row to table T1. If the partition id is 1, the next lower Clustered Index Insert is executed, adding a row to table T2…and so on.

    In case you were wondering, here's a query and execution plan for a multi-row insert to the view:

        INSERT dbo.V1 (c1) VALUES (1), (2);

    Yuck! An Eager Table Spool and four Filters! I prefer the Switch plan. My guess is that almost all the old strategies that used a Switch operator have been replaced over time, using things like a regular Concatenation Union All combined with Start-Up Filters on its inputs. Other new (relative to the Switch operator) features like table partitioning have specific execution plan support that doesn't need the Switch operator either. This feels like a bit of a shame, but perhaps it is just nostalgia on my part, it's hard to know. Please do let me know if you encounter a query that can still use the Switch operator in 2012 – it must be very bored if this is the only possible modern usage!

    2. Invisible Plan Operators

    The second part of this post uses an example based on a question Dave Ballantyne asked using the SQL Sentry Plan Explorer plan upload facility. If you haven't tried that yet, make sure you're on the latest version of the (free) Plan Explorer software, and then click the Post to SQLPerformance.com button. That will create a site question with the query plan attached (which can be anonymized if the plan contains sensitive information). Aaron Bertrand and I keep a close eye on questions there, so if you have ever wanted to ask a query plan question of either of us, that's a good way to do it.

    The problem

    The issue I want to talk about revolves around a query issued against a calendar table. The script below creates a simplified version and adds 100 years of per-day information to it:

        USE tempdb;
        GO
        CREATE TABLE dbo.Calendar
        (
            dt date NOT NULL,
            isWeekday bit NOT NULL,
            theYear smallint NOT NULL,

            CONSTRAINT PK__dbo_Calendar_dt
                PRIMARY KEY CLUSTERED (dt)
        );
        GO
        -- Monday is the first day of the week for me
        SET DATEFIRST 1;

        -- Add 100 years of data
        INSERT dbo.Calendar WITH (TABLOCKX)
            (dt, isWeekday, theYear)
        SELECT
            CA.dt,
            isWeekday = CASE WHEN DATEPART(WEEKDAY, CA.dt) IN (6, 7) THEN 0 ELSE 1 END,
            theYear = YEAR(CA.dt)
        FROM Sandpit.dbo.Numbers AS N
        CROSS APPLY
        (
            VALUES (DATEADD(DAY, N.n - 1, CONVERT(date, '01 Jan 2000', 113)))
        ) AS CA (dt)
        WHERE N.n BETWEEN 1 AND 36525;

    The following query counts the number of weekend days in 2013:

        SELECT Days = COUNT_BIG(*)
        FROM dbo.Calendar AS C
        WHERE theYear = 2013
        AND isWeekday = 0;

    It returns the correct result (104) using a plan that simply scans the clustered index. The query optimizer has managed to estimate the number of rows returned from the table exactly, based purely on the default statistics created separately on the two columns referenced in the query's WHERE clause. (Well, almost exactly, the unrounded estimate is 104.289 rows.)

    There is already an invisible operator in this query plan – a Filter operator used to apply the WHERE clause predicates. We can see it by re-running the query with the enormously useful (but undocumented) trace flag 9130 enabled.
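
    The resulting plans appear as images in the original post. As a hedged aside for anyone reproducing the effect: the trace flag can be enabled for a single statement with the (equally undocumented, sysadmin-only) QUERYTRACEON hint, so nothing needs to change server-wide. Strictly for inspection on a test system:

        SELECT Days = COUNT_BIG(*)
        FROM dbo.Calendar AS C
        WHERE theYear = 2013
        AND isWeekday = 0
        OPTION (QUERYTRACEON 9130); -- show residual predicates as explicit Filter operators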

    Now we can see the full picture. The whole table is scanned, returning all 36,525 rows, before the Filter narrows that down to just the 104 we want. Without the trace flag, the Filter is incorporated in the Clustered Index Scan as a residual predicate. It is a little bit more efficient than using a separate operator, but residual predicates are still something you will want to avoid where possible. The estimates are still spot on, though.

    Anyway, looking to improve the performance of this query, Dave added the following filtered index to the Calendar table:

        CREATE NONCLUSTERED INDEX Weekends
        ON dbo.Calendar (theYear)
        WHERE isWeekday = 0;

    The original query now produces a much more efficient plan. Unfortunately, the estimated number of rows produced by the seek is now wrong (365 instead of 104). What's going on? The estimate was spot on before we added the index!

    Explanation

    You might want to grab a coffee for this bit. Using another trace flag or two (8606 and 8612) we can see that the cardinality estimates were exactly right initially. The highlighted information shows the initial cardinality estimates for the base table (36,525 rows), the result of applying the two relational selects in our WHERE clause (104 rows), and after performing the COUNT_BIG(*) group by aggregate (1 row). All of these are correct, but that was before cost-based optimization got involved :)

    Cost-based optimization

    When cost-based optimization starts up, the logical tree above is copied into a structure (the 'memo') that has one group per logical operation (roughly speaking). The logical read of the base table (LogOp_Get) ends up in group 7; the two predicates (LogOp_Select) end up in group 8 (with the details of the selections in subgroups 0-6). These two groups still have the correct cardinalities, as trace flag 8608 output (initial memo contents) shows.

    During cost-based optimization, a rule called SelToIdxStrategy runs on group 8. Its job is to match logical selections to indexable expressions (SARGs). It successfully matches the selections (theYear = 2013, isWeekday = 0) to the filtered index, and writes a new alternative into the memo structure. The new alternative is entered into group 8 as option 1 (option 0 was the original LogOp_Select).

    The new alternative is to do nothing (PhyOp_NOP = no operation), but to instead follow the new logical instructions listed below the NOP. The LogOp_GetIdx (full read of an index) goes into group 21, and the LogOp_SelectIdx (selection on an index) is placed in group 22, operating on the result of group 21. The definition of the comparison 'theYear = 2013' (ScaOp_Comp downwards) was already present in the memo starting at group 2, so no new memo groups are created for that.

    New Cardinality Estimates

    The new memo groups require two new cardinality estimates to be derived. First, LogOp_GetIdx (full read of the index) gets a predicted cardinality of 10,436. This number comes from the filtered index statistics:

        DBCC SHOW_STATISTICS (Calendar, Weekends) WITH STAT_HEADER;

    The second new cardinality derivation is for the LogOp_SelectIdx applying the predicate (theYear = 2013). To get a number for this, the cardinality estimator uses statistics for the column 'theYear', producing an estimate of 365 rows (there are 365 days in 2013!):

        DBCC SHOW_STATISTICS (Calendar, theYear) WITH HISTOGRAM;

    This is where the mistake happens. Cardinality estimation should have used the filtered index statistics here, to get an estimate of 104 rows:

        DBCC SHOW_STATISTICS (Calendar, Weekends) WITH HISTOGRAM;

    Unfortunately, the logic has lost sight of the link between the read of the filtered index (LogOp_GetIdx) in group 21, and the selection on that index (LogOp_SelectIdx) in group 22 that it is deriving a cardinality estimate for. The correct cardinality estimate (104 rows) is still present in the memo, attached to group 8, but that group now has a PhyOp_NOP implementation.

    Skipping over the rest of cost-based optimization (in a belated attempt at brevity), we can see the optimizer's final output using trace flag 8607. This output shows the (incorrect, but understandable) 365-row estimate for the index range operation, and the correct 104 estimate still attached to its PhyOp_NOP. This tree still has to go through a few post-optimizer rewrites and 'copy out' from the memo structure into a tree suitable for the execution engine. One step in this process removes PhyOp_NOP, discarding its 104-row cardinality estimate as it does so.

    To finish this section on a more positive note, consider what happens if we add an OVER clause to the query aggregate.

    This isn't intended to be a 'fix' of any sort, I just want to show you that the 104 estimate can survive and be used if later cardinality estimation needs it:

        SELECT Days = COUNT_BIG(*) OVER ()
        FROM dbo.Calendar AS C
        WHERE theYear = 2013
        AND isWeekday = 0;

    In the estimated execution plan, note the 365 estimate at the Index Seek, but the 104 lives again at the Segment! We can imagine the lost predicate 'isWeekday = 0' as sitting between the seek and the segment in an invisible Filter operator that drops the estimate from 365 to 104. Even though the NOP group is removed after optimization (so we don't see it in the execution plan), bear in mind that all cost-based choices were made with the 104-row memo group present, so although things look a bit odd, it shouldn't affect the optimizer's plan selection.

    I should also mention that we can work around the estimation issue by including the index's filtering columns in the index key:

        CREATE NONCLUSTERED INDEX Weekends
        ON dbo.Calendar (theYear, isWeekday)
        WHERE isWeekday = 0
        WITH (DROP_EXISTING = ON);

    There are some downsides to doing this, including that changes to the isWeekday column may now require Halloween Protection, but that is unlikely to be a big problem for a static calendar table ;) With the updated index in place, the original query produces an execution plan with the correct cardinality estimation showing at the Index Seek.

    That's all for today, remember to let me know about any Switch plans you come across on a modern instance of SQL Server! Finally, here are some other posts of mine that cover other plan operators: Segment and Sequence Project; Common Subexpression Spools; Why Plan Operators Run Backwards; Row Goals and the Top Operator; Hash Match Flow Distinct; Top N Sort; Index Spools and Page Splits; Singleton and Range Seeks; Bitmaps; Hash Join Performance; Compute Scalar.

    © 2013 Paul White – All Rights Reserved
    Twitter: @SQL_Kiwi

    Read the article

  • Back out plan for a Web App

    - by nobody
    We need a back-out plan for a web app whose first maintenance release is going to production soon. The issue we are facing is that even if we back out the new EAR and deploy the old one, the data that was keyed in using the new release would not support the old (current) business rules, since there are enormous changes in the business rules. Can you suggest how we should tackle this issue?

    Read the article

  • MSSQL 2008 - Bit Param Evaluation alters Execution Plan

    - by Nathanial Woolls
    I have been working on migrating some of our data from Microsoft SQL Server 2000 to 2008. Among the usual hiccups and whatnot, I've run across something strange. Linked below is a SQL query that returns very quickly under 2000, but takes 20 minutes under 2008. I have read quite a bit on upgrading SQL Server and went down the usual paths of checking indexes, statistics, etc. before coming to the conclusion that the following condition, found in the WHERE clause, causes the execution plan for the steps that follow it to change dramatically:

        And ( @bOnlyUnmatched = 0 -- offending line
              Or Not Exists(

    The SQL statements and execution plans are linked below. A coworker was able to rewrite a portion of the WHERE clause using a CASE statement, which seems to "trick" the optimizer into using a better execution plan. The version with the CASE statement is also contained in the linked archive. I'd like to see if someone has an explanation as to why this is happening, and whether there may be a more elegant solution than using a CASE statement. While we can work around this specific issue, I'd like to have a broader understanding of what is happening to ensure the rest of the migration is as painless as possible. Zip file with SQL statements and XML execution plans. Thanks in advance!
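
    For readers without the archive, the CASE workaround usually takes a shape like the sketch below (the table and column names here are invented; the poster's actual code is in the linked zip). Folding the OR into a single CASE predicate encourages per-branch short-circuit evaluation instead of the one-size-fits-both plan the optimizer builds for the OR:

        AND 1 = CASE
                    WHEN @bOnlyUnmatched = 0 THEN 1
                    WHEN NOT EXISTS (SELECT 1
                                     FROM dbo.Matches AS M
                                     WHERE M.KeyCol = T.KeyCol) THEN 1
                    ELSE 0
                END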

    Read the article

  • Open plan office annoyance

    - by arturito
    Not a technical question, but related to IT. At the moment I work in an open plan office, and the guy next to me talks to himself while programming. It annoys my colleague and me so much that we put earphones on with the music volume set to max. Does anyone know a good and polite solution to get him to stop?

    Read the article

  • Any software for managing data centers, incidents, and SLAs?

    - by coderwhiz
    I have been looking into using OTRS::ITSM to manage all of our services, but wanted to know if any software exists for this that is not built into a ticket system. Pretty much, I want to add all services and tie them to an SLA. Once that is done, we would manually declare incidents, and the system would automatically send out notifications to the proper e-mail lists for downtime/planned maintenance. It would be nice if it also calculated reports for SLAs, etc. Thanks.

    Read the article

  • Database maintenance, big binary table optimization

    - by jgemedina
    Hi, I have a huge database, around 1 TB in size. Most of the space is consumed by a table that stores images; the table currently has almost 800k rows. Server response time has increased, and I would like to know which techniques I should use or you would recommend. Partitioning? Or how should I reorganize the table? Every row is accessed by the image ID column, and the table has its clustered index on that column. Every two days I reorganize the index, and every 7 days I rebuild it, but that does not seem to be working. Any suggestions?
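
    Index maintenance alone won't reduce I/O on a LOB-heavy table, so one option worth sketching is partitioning by the image ID, so that scans, maintenance, and backups touch one slice at a time. Everything below is an assumption fitted to the description: the table, column, and index names are invented, the boundary values would need tuning, and native partitioning requires Enterprise Edition.

        -- split rows into four ranges by image id (boundaries are made up)
        CREATE PARTITION FUNCTION pf_ImageId (int)
        AS RANGE RIGHT FOR VALUES (200000, 400000, 600000);

        CREATE PARTITION SCHEME ps_ImageId
        AS PARTITION pf_ImageId ALL TO ([PRIMARY]);

        -- rebuild the clustered index onto the partition scheme to move the data
        -- (assumes the existing clustered index is named PK_Images)
        CREATE UNIQUE CLUSTERED INDEX PK_Images
        ON dbo.Images (image_id)
        WITH (DROP_EXISTING = ON)
        ON ps_ImageId (image_id);

    Moving the image column itself out of the hot path (a separate filegroup, or FILESTREAM storage) is the other common route, since the row data then stays small and cache-friendly.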

    Read the article

  • Issue with Selenium code maintenance

    - by Rajasankar
    I want to group the common methods in one file and use them from there. For example, logging in to a page using Selenium may be needed in multiple tests: define it in class A and call it in class B. However, it throws a NullPointerException. Class A has:

        public void test_Login() throws Exception {
            try {
                selenium.setTimeout("60000");
                selenium.open("http://localhost");
                selenium.windowFocus();
                selenium.windowMaximize();
                selenium.windowFocus();
                selenium.type("userName", "admin");
                selenium.type("password", "admin");
                Result = selenium.isElementPresent("//input[@type='image']");
                selenium.click("//input[@type='image']");
                selenium.waitForPageToLoad(Timeout);
            } catch (Exception ex) {
                System.out.println(ex);
                ex.printStackTrace();
            }
        }

    with all the other Java syntax in place, and class B has:

        public void test_kk() throws Exception {
            try {
                a.test_Login();
            } catch (Exception ex) {
                System.out.println(ex);
                ex.printStackTrace();
            }
        }

    again with all the syntax in place. When I execute class B, I get this error:

        java.lang.NullPointerException
            at A.test_Login(A.java:32)
            at B.test_kk(savefile.java:58)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
            at java.lang.reflect.Method.invoke(Unknown Source)
            at junit.framework.TestCase.runTest(TestCase.java:168)
            at junit.framework.TestCase.runBare(TestCase.java:134)
            at com.thoughtworks.selenium.SeleneseTestCase.runBare(SeleneseTestCase.java:212)
            at junit.framework.TestResult$1.protect(TestResult.java:110)
            at junit.framework.TestResult.runProtected(TestResult.java:128)
            at junit.framework.TestResult.run(TestResult.java:113)
            at junit.framework.TestCase.run(TestCase.java:124)
            at junit.framework.TestSuite.runTest(TestSuite.java:232)
            at junit.framework.TestSuite.run(TestSuite.java:227)
            at junit.textui.TestRunner.doRun(TestRunner.java:116)
            at junit.textui.TestRunner.doRun(TestRunner.java:109)
            at junit.textui.TestRunner.run(TestRunner.java:77)
            at B.main(B.java:77)

    I hope someone has tried this before. I may be missing something here.
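
    The stack trace points at the selenium field inside A being null: only B (the running SeleneseTestCase) had its session started, while A's field was never initialized. One fix, sketched here as an assumption about the surrounding code, is to hand B's live Selenium instance to A instead of letting A carry its own:

        import com.thoughtworks.selenium.Selenium;

        public class A {
            private final Selenium selenium;

            // A borrows an already-started session rather than owning one
            public A(Selenium selenium) {
                this.selenium = selenium;
            }

            public void login() throws Exception {
                selenium.open("http://localhost");
                selenium.type("userName", "admin");
                selenium.type("password", "admin");
                selenium.click("//input[@type='image']");
                selenium.waitForPageToLoad("60000");
            }
        }

    In B (a SeleneseTestCase subclass), construct the helper after setUp() has run, e.g. A a = new A(this.selenium); then a.login();. Note too that if A extends SeleneseTestCase, naming the helper test_Login is risky, since JUnit will try to run it as a test of its own, without a started session.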

    Read the article

  • Dynamic mod_rewrite or how to plan a dynamic website

    - by Sophia Gavish
    Hi, I'm trying to make clean URLs for a blog on a dynamic website, but I think the problem is that I don't know how to plan the website schema. I read about how to use mod_rewrite, and all I found is how to turn "http://www.website.com/?category&date&post-title" into "http://www.website.com/category/date/post-title". That works OK for me. The problem is that if my URL looks like "http://www.website.com/blog/?id=34", this method won't work, as far as I can tell. So, I have two questions: 1. Is there a way to use mod_rewrite (maybe reading from a txt file) to read the post title of my blog and rewrite my URL by date and post-title? 2. Should I rewrite my website to query the data from one index file on the homepage, and use mod_rewrite to write the nice URL? Should I also query by the date and the title of the post instead of just the post ID?
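
    On question 1: mod_rewrite can in fact read key/value pairs from a text file via RewriteMap, though that directive has to live in the server or virtual-host configuration, not in .htaccess. A sketch under those assumptions (the file path, URL layout, and parameter name are all hypothetical):

        # in the vhost config: one "slug id" pair per line, e.g. "my-post-title 34"
        RewriteMap blogmap txt:/etc/apache2/blog-posts.txt

        RewriteEngine On
        # /blog/2010/12/my-post-title -> /blog/?id=34
        RewriteRule ^blog/[0-9]{4}/[0-9]{2}/([a-z0-9-]+)/?$ /blog/?id=${blogmap:$1} [L]

    The simpler and more common design is your option 2: put both the ID and the slug in the public URL, rewrite to a single index script on the ID alone, and treat the date and title segments as decoration that the script can verify (and redirect to correct if they don't match).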

    Read the article

  • Web service performance testing plan, Microsoft .NET WS, SQL

    - by zxed
    Trying to answer a question to come up with a testing plan. It has to do with using a website and/or web service that queries a SQL Server to get data and display it to the user.

    * The solution must be able to handle an estimated 2,000 users, approximately 700 concurrent users, and 10,000+ website hits a month. Database calls should handle 100,000 queries via the website/web service a month. The system is used at multiple times during a 24-hour period; however, networking and bandwidth traffic decrease after 5 pm.
    * Two Windows 2003 servers are used, one for web, another for SQL. Both are located in the same room. User access is varied and users can be far or near (it's a centralized system); users access via the WWW.

    Read the article

  • Best way to plan a task?

    - by Indigo Praveen
    Hi all, as I am very new to programming, I am very curious about learning the best ways/practices of programming. Whenever I want to write any program, I start directly with coding, while some guys say that you should plan your program before starting the code. But I don't understand the real value of creating class diagrams and all that kind of stuff, because I think that ultimately I have to write the code. Can you guys please share your experiences of how you do your programming, i.e. what is your first step when you start an application?

    Read the article

  • Why is there an extra "Using where" in the execution plan of this query?

    - by user366534
    I see this plan for the query:

        EXPLAIN SELECT *
        FROM `subscribers`
        WHERE state = 4
        AND date_added < '2010-12-23 11:47:45'

    It shows:

        id  select_type  table        type   possible_keys     key               key_len  ref   rows  Extra
        1   SIMPLE       subscribers  range  state_date_added  state_date_added  9        NULL  8     Using where

    Here are the indexes of the table (all BTREE; the empty Sub_part, Packed, Null and Comment columns are omitted):

        Table        Non_unique  Key_name          Seq_in_index  Column_name    Collation  Cardinality
        subscribers  0           PRIMARY           1             subscriber_id  A          382039
        subscribers  0           email_list_id     1             email_address  A          191019
        subscribers  0           email_list_id     2             list_id        A          382039
        subscribers  1           FK_list_id        1             list_id        A          10
        subscribers  1           state_date_added  1             state          A          12
        subscribers  1           state_date_added  2             date_added     A          8128

    The last two rows describe the index that should serve the query. Why does the Extra column say "Using where"? Even if I fetch only the state and date_added columns, the Extra column says "Using where; Using index". I understand why it says Using index, but I don't understand the Using where here.

    Read the article

  • Which job is better for a newbie: one that requires you to create a new program frequently, or something like software maintenance?

    - by MobileDev123
    One of my friends has just completed his college degree and is ready to join the programmers' world. Today he has two offers: one where he creates new projects every time, and another in software maintenance. The remaining factors are not important to him; what he wants to know is which option is better. My experience favors the second option, because my first job was a maintenance one and I could learn from the mistakes my fellow programmers had made while coding. But I soon switched to a new job that required me to create a new project every time. I enjoyed both, but I must admit that my first job gives me more of an advantage today. Still, my experience won't necessarily benefit him. So I want to know: what is the general approach? If I had to give him a final verdict on these two, what should I tell him? Edit: Everybody deserves one upvote here; I am really learning a lot from you guys.

    Read the article

  • ATG Live Webcast Dec. 6th: Minimizing EBS Maintenance Downtimes

    - by Bill Sawyer
    This webcast provides an overview of the plans and decisions you can make, and the actions you can take, that will help you minimize maintenance downtimes for your E-Business Suite instances. It is targeted at system administrators, DBAs, developers, and implementers. This session, led by Elke Phelps, Senior Principal Product Manager, and Santiago Bastidas, Principal Product Manager, will cover best practices, tools, utilities, and tasks to minimize your maintenance downtimes during the four key maintenance phases. Topics will include:

    Pre-Patching: Reviewing the list of patches and analyzing their impact
    Patching Trials: Testing the patch prior to actual production deployment
    Patch Deployment: Applying patches to your system
    Post-Patching Analysis: Validating the patch application

    Date: Thursday, December 6, 2012
    Time: 8:00 AM - 9:00 AM Pacific Standard Time
    Presenters: Elke Phelps, Senior Principal Product Manager; Santiago Bastidas, Principal Product Manager
    Webcast Registration Link (preregistration is optional but encouraged)

    To hear the audio feed:
    Domestic Participant Dial-In Number: 877-697-8128
    International Participant Dial-In Number: 706-634-9568
    Dial-In Passcode: 103200

    To see the presentation, the Direct Access Web Conference details are:
    Website URL: https://ouweb.webex.com
    Meeting Number: 595757500

    If you miss the webcast, or you have missed any webcast, don't worry -- we'll post links to the recording as soon as it's available from Oracle University. You can monitor this blog for pointers to the replay, and you can find our archive of past webcasts and training here. If you have any questions or comments, feel free to email Bill Sawyer (Senior Manager, Applications Technology Curriculum) at BilldotSawyer-AT-Oracle-DOT-com.

    Read the article

  • Building a Maven child project that depends on another project's child project with Bamboo

    - by kosoant
    I have two Maven projects:

    Project AAA
    * AAA-Core
    * AAA-Other

    Project BBB
    * BBB-Core
    * BBB-AAA-specific

    I want to create a build plan in Bamboo to build the BBB-AAA-specific project. The plan configuration is such that this project depends on the AAA-Other project build, so everything should work OK. But when I try to run the BBB-AAA-specific Bamboo plan, I get an error that states: "Unable to find resource 'foo.bar.AAA:AAA:pom:0.0.1-SNAPSHOT' in repository snapshots (http://foo.bar.com)". What is going on? The Bamboo builds for AAA-Core and AAA-Other work as expected.
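
    A Bamboo plan dependency only sequences the builds; Maven still has to resolve foo.bar.AAA:AAA:pom from a repository it can reach. One sketch of a fix, assuming no shared repository is being populated yet, is to have the AAA plan run mvn deploy so its snapshots land somewhere BBB's plan can see them (the repository id and URL below are invented):

        <!-- in the AAA parent pom.xml -->
        <distributionManagement>
          <snapshotRepository>
            <id>internal-snapshots</id>
            <url>http://repo.example.com/snapshots</url>
          </snapshotRepository>
        </distributionManagement>

        <!-- in BBB's pom.xml (or a shared settings.xml profile) -->
        <repositories>
          <repository>
            <id>internal-snapshots</id>
            <url>http://repo.example.com/snapshots</url>
            <snapshots><enabled>true</enabled></snapshots>
          </repository>
        </repositories>

    A quicker but more fragile alternative is to make both plans share one local repository and have the AAA plan run mvn install rather than plain mvn package.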

    Read the article

  • SQL restore from single file db to filegroup

    - by Mauro
    I have a 180 GB MOSS 2007 database whose maintenance (i.e. backups and restores) is becoming a problem. Part of the issue can be resolved by splitting the three content sites into their own site collections; however, this will likely still leave me with a 100 GB DB to deal with. While this isn't entirely problematic for SQL, it does mean that backups/restores take far too long. My idea is to split each of the databases into 30 GB files, then import the content into them, which should distribute the content across the filegroups, making it much easier/faster to back up and restore. Is there a way to back up from a single file and restore to a filegroup? If I have the wrong understanding of filegroups, then I'm more than happy to find out about other methods of managing the size of databases.
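
    There is no direct path from a single-file backup to a multi-filegroup restore: the filegroup layout has to exist in the database first, and the data then has to be moved onto it, after which filegroup-level backups become possible. A hedged sketch of the moving parts (database, table, file names, paths, and sizes are all invented, and note that restructuring a SharePoint content database in place may fall outside Microsoft's support statement, so verify before touching production):

        -- add a filegroup and a 30 GB file to the existing database
        ALTER DATABASE WSS_Content ADD FILEGROUP FG1;
        ALTER DATABASE WSS_Content
        ADD FILE (NAME = WSS_Content_FG1,
                  FILENAME = 'D:\Data\WSS_Content_FG1.ndf',
                  SIZE = 30GB)
        TO FILEGROUP FG1;

        -- rebuilding a clustered index onto FG1 moves that table's data there
        CREATE UNIQUE CLUSTERED INDEX PK_BigTable
        ON dbo.BigTable (Id)
        WITH (DROP_EXISTING = ON)
        ON FG1;

        -- after which backups can target one filegroup at a time
        BACKUP DATABASE WSS_Content
        FILEGROUP = 'FG1'
        TO DISK = 'E:\Backup\WSS_Content_FG1.bak';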

    Read the article

  • General High-Level Assessment

    - by tcarper
    Guys and gals, I've been tasked with a doozy of an assignment. The objective is something akin to a "laying on of hands" for several database servers that work in concert to provide data to various web, client-server, and tablet-synced distributed client-server programs. More specifically, I've been asked to come up with a "maintenance plan" that includes recommendations for future work to improve these machines' performance/reliability/security/etc. Might there be some good articles on the interwebs y'all could point me towards that would give me a good basis to start from? Articles of the "these are the top 4 overarching categories, and this is how you should proceed when drilling down on each of them" sort would be fabulous. The databases are all SQL 2005, however the compatibility level is 80 and they were originally created with ERwin based on SQL 6.5. The OSs are all Windows Server 2003. Thanks all! Tim

    Read the article

  • How to maintain a VPS server?

    - by clorz
    Assuming I have no experience in running them, what would count as a good maintenance routine for a VPS running a mail server and LAMP with a couple of sites? I've had one for quite a while now, but I have been doing what feels right without any guidance. It's an Ubuntu server, and the only thing I do is SSH in once a month and run apt-get update, then apt-get upgrade. Last year it suggested upgrading the distro, which I did. I waded through a bunch of diffs, broke the mail server in the process, and fixed it later on, so it turned out fine. Was this the right thing to do, or should I have stuck with the old version and just updated the packages? Is there a difference in the routine if it is Fedora?
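
    For the patching half of the routine, Ubuntu can apply security updates on a schedule by itself. A short sketch using the stock unattended-upgrades package (the paths are the Ubuntu defaults, but verify on your release):

        # install and switch on automatic security updates
        apt-get install unattended-upgrades
        dpkg-reconfigure -plow unattended-upgrades

        # /etc/apt/apt.conf.d/20auto-upgrades should end up containing:
        #   APT::Periodic::Update-Package-Lists "1";
        #   APT::Periodic::Unattended-Upgrade "1";

    Distribution upgrades are a different matter: on a production VPS it is usually safer to stay on an LTS release, take a snapshot or backup first, and treat do-release-upgrade as a planned maintenance event rather than a monthly chore.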

    Read the article
