Search Results

Search found 33569 results on 1343 pages for 'sql backup and restore'.

Page 779/1343 | < Previous Page | 775 776 777 778 779 780 781 782 783 784 785 786  | Next Page >

  • New e-learning course on Business Intelligence

    - by simonsabin
    I just got this from fellow SQL MVP Chris Testa O'Neil: "I am pleased to announce the release of the Author Model eCourseCollection 6233 AE: Implementing and Maintaining Business Intelligence in Microsoft® SQL Server® 2008: Integration Services, Reporting Services and Analysis Services. This 24-hour collection provides you with the skills and knowledge required for implementing and maintaining business intelligence solutions on SQL Server 2008. You will learn about the SQL Server technologies, such as Integration Services, Analysis Services, and Reporting Services. This collection also helps students to prepare for Exam 70-448 and can be accessed from: http://www.microsoft.com/learning/elearning/course/6233.mspx"

    Read the article

  • Linked servers and performance impact: Direction matters!

    - by Linchi Shea
    When you have some data on a SQL Server instance (say SQL01) and you want to move the data to another SQL Server instance (say SQL02) through openquery(), you can either push the data from SQL01, or pull the data from SQL02. To push the data, you can run a SQL script like the following on SQL01, which is the source server:

        -- The push script
        -- Run this on SQL01
        use testDB
        go
        insert openquery(SQL02, 'select * from testDB.dbo.target_table')
          select * from source_table;

    To pull the data, you can run...(read more)
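
    The excerpt is cut off before the pull script; as a hedged sketch (not the article's exact code), the pull direction typically reverses the roles: run on SQL02, the destination, with a linked server pointing back at SQL01:

        -- The pull script (sketch)
        -- Run this on SQL02, which is the destination server
        use testDB
        go
        insert into target_table
          select * from openquery(SQL01, 'select * from testDB.dbo.source_table');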

    Read the article

  • PASS Summit 2011 – Part IV

    - by Tara Kizer
    This is the final blog for my PASS Summit 2011 series.  Well okay, a mini-series, I guess. On the last day of the conference, I attended Keith Elmore’s and Boris Baryshnikov’s (both from Microsoft) “Introducing the Microsoft SQL Server Code Named “Denali” Performance Dashboard Reports”, Jeremiah Peschka’s (blog|twitter) “Rewrite your T-SQL for Great Good!”, and Kimberly Tripp’s (blog|twitter) “Isolated Disasters in VLDBs”.

    Keith and Boris talked about the lifecycle of a session, figuring out the running time and the waiting time.  They pointed out the transient nature of the reports.  You could be drilling into one to uncover a problem, but the session may have ended by the time you’ve drilled all of the way down.  Also, the reports are for troubleshooting live problems, not historical ones.  You can use Management Data Warehouse for historical troubleshooting.  The reports provide similar benefits to Activity Monitor; however, Activity Monitor doesn’t provide context-sensitive drill-through. One thing I learned in Keith’s and Boris’ session was that the buffer cache hit ratio should really never be below 87% due to the read-ahead mechanism in SQL Server.  When a page is read, SQL Server reads the entire extent.  So for every page read, you get seven more reads.  If you need any of those seven extra pages, well, they are already in cache.

    I had a lot of fun in Jeremiah’s session about refactoring code, plus I learned a lot.  His slides were visually presented in a fun way, which just made for a more upbeat presentation.  Jeremiah says that before you start refactoring, you should look at your system.  Investigate missing or too many indexes, out-of-date statistics, and other areas that could be leading to your code running slow.  He talked about code standards.  He suggested using common abbreviations for aliases instead of one-letter aliases.  I’m a big offender of one-letter aliases, but he makes a good point.  He said that join order does not matter to the optimizer, but it does matter to those who have to read your code.  Now let’s get into refactoring!

    - Eliminate useless things – useless/unneeded joins and columns.  If you don’t need it, get rid of it!
    - Instead of using DISTINCT/JOIN, replace with EXISTS (see the sketch below)
    - Simplify your conditions; use UNION or better yet UNION ALL instead of OR to avoid a scan and use indexes for each union query
    - Branching logic – instead of IF this, IF that, and on and on… use dynamic SQL (sp_executesql, please!) or use a parameterized query in the application
    - Correlated subqueries – YUCK! Replace with a join
    - Eliminate repeated patterns

    Last, but certainly not least, was Kimberly’s session.  Kimberly is my favorite speaker.  I attended her two-day pre-conference seminar at PASS Summit 2005 as well as a SQL Immersion Event last December.  Did I mention she’s my favorite speaker?  Okay, enough of that. Kimberly’s session was packed with demos.  I had seen some of it in the SQL Immersion Event, but it was very nice to get a refresher on these, especially since I’ve got a VLDB with some growing pains.  One key takeaway from her session is the idea to use a log shipping solution with a load delay, such as 6, 8, or 24 hours behind the primary.  In the case of, say, an accidentally dropped table in a VLDB, we could retrieve it from the secondary database rather than waiting an eternity for a restore to complete.  Kimberly let us know that in SQL Server 2012 (it finally has a name!), online rebuilds are supported even if there are LOB columns in your table. This will simplify custom code that intelligently figures out if an online rebuild is possible. There was actually one last time slot for sessions that day, but I had an airplane to catch and my kids to see!
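
    A minimal sketch of the DISTINCT/JOIN-to-EXISTS rewrite Jeremiah describes, using hypothetical Customers and Orders tables (not from the session):

        -- Before: DISTINCT is only there to undo row duplication from the join
        SELECT DISTINCT c.CustomerID, c.CustomerName
        FROM dbo.Customers AS c
        JOIN dbo.Orders AS o ON o.CustomerID = c.CustomerID;

        -- After: EXISTS asks the actual question ("has at least one order")
        -- and lets the optimizer stop probing after the first match
        SELECT c.CustomerID, c.CustomerName
        FROM dbo.Customers AS c
        WHERE EXISTS (SELECT 1 FROM dbo.Orders AS o
                      WHERE o.CustomerID = c.CustomerID);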

    Read the article

  • Help With Hard links And Symlinks Moving Directory And Files

    - by Julio
    This is what I would like to do. I have a symlink "/var" linking to "/tmpfs/var.1":

        /var -> /tmpfs/var.1

    I start a script called "cache_tmpfs" from /etc/rc.local on startup; this script copies the /var.backup/* contents to /tmpfs/var.1/:

        cp -dpRxf /var.backup/* /tmpfs/var.1/

    Now the problem is that the kernel has the messages log file open at /var/log/messages. Is it possible to remove the current /var symlink and recreate a new one (pointing to /var.backup instead of /tmpfs/var.1) without issues, given that files, once opened by the system, stay referenced like hard links?

        rm /var && ln -s /var.backup /var

    Thanks...

    Read the article

  • How your Standard can become AWEsome

    - by NeilHambly
    Having tried to make a fun play on words to illustrate that, for Standard Editions of SQL Server 2005/2008, since the releases of these Cumulative Updates (SQL 2005 SP3 & CU4 / SQL 2008 SP1 & CU2) we can make real use of AWE! Since mid-2009, when these CUs were released, the required privilege “locking pages in memory”, which previously was only available in Enterprise Edition, can be used, allowing us to make use of those AWE APIs for resolving working-set trim issues that resulted...(read more)
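
    A quick way to confirm the privilege is actually being used (a sketch; this DMV exists in SQL Server 2008 and later, not in 2005):

        -- A nonzero value means the instance is locking buffer pool pages
        -- via the Lock Pages in Memory privilege
        SELECT locked_page_allocations_kb
        FROM   sys.dm_os_process_memory;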

    Read the article

  • Did You Know: So Many User Groups, So Little Time

    - by Kalen Delaney
    In May and June of this year, I'll be giving four user group presentations plus a SQL Saturday. You can check my schedule for links to the relevant sites, and a description of my topics, as soon as they are available. This post is mainly just a heads-up, so you can make your plans. http://schedule.KalenDelaney.com

    May 12: The inaugural meeting of the Sacramento SQL Server User Group (evening)
    May 13: Central California .Net Users Group (evening)
    June 8: Colorado PASS (evening)
    June 12: SQL Saturday #43,...(read more)

    Read the article

  • SCOM, 90 days in, III. Stuff to Add

    - by merrillaldrich
    This is the third installment of a series on our deployment of System Center at my workplace, with emphasis on the SQL Server MP. At this point we’ve got Operations Manager installed and up and running, and we’ve been able to categorize all the monitored servers into production, preproduction, test and DR using groups that have dynamic membership rules. We’ve got the SQL management pack working with out-of-the-box settings, and used it to locate all the SQL Server stack services like the engine, reporting...(read more)

    Read the article

  • zip being too nice (OS X)

    - by stib
    I use zip to do a regular backup of a local directory onto a remote machine. They don't believe in things like rsync here, so it's the best I can do (?). Here's the script I use:

        echo $(date)>>~/backuplog.txt;
        if [[ -e /Volumes/backup/ ]]; then
            cd /Volumes/Non-RAID_Storage/;
            for file in projects/*; do
                nice -n 10 zip -vru9 /Volumes/backup/nonRaidStorage.backup.zip "$file" 2>&1 | grep -v "zip info: local extra (21 bytes)">>~/backuplog.txt;
            done;
        else
            echo "backup volume not mounted">>~/backuplog.txt;
        fi

    This all works fine, except that zip never uses much CPU, so it seems to be taking longer than it should. It never seems to get above 5%. I tried making it nice -20, but that didn't make any difference. Is it just the network or disk speeds bottlenecking the process, or am I doing something wrong?

    Read the article

  • Introducing sp_ssiscatalog (v1.0.0.0)

    - by jamiet
    Regular readers of my blog may know that over the last year I have made available a suite of SQL Server Reporting Services (SSRS) reports that provide visualisations of the data in the SQL Server Integration Services (SSIS) 2012 Catalog. Those reports are available at http://ssisreportingpack.codeplex.com. As I have built these reports and used them myself on a real life project, a couple of things have dawned on me:

    - As soon as your SSIS Catalog gets a significant amount of data in it, the performance of the reports degrades rapidly. This is hampered by the fact that there are limitations as to the SQL statements that I can embed within a SSRS report.
    - SSIS professionals are data guys at heart, and those types of people feel more comfortable in a query environment rather than having to go through the rigmarole of standing up a reporting server (well, I know I do anyway).

    Hence I have decided to take a different tack with the reporting pack. Taking my lead from Adam Machanic’s sp_whoisactive and Brent Ozar’s sp_blitz, I have produced sp_ssiscatalog, a stored procedure that makes it easy to get at the crucial data in the SSIS Catalog. I will spend the rest of this blog explaining exactly what sp_ssiscatalog does and how to use it, but if you would rather just download the bits yourself and start to play you can download v1.0.0.0 from DB v1.0.0.0.

    Usage Scenarios

    Most Recent Execution

    I find that the most frequent information that one needs to get from the SSIS Catalog is information pertaining to the most recent execution. Hence if you execute sp_ssiscatalog with no parameters, that is exactly what you will get:

        EXEC [dbo].[sp_ssiscatalog]

    This will return up to 5 resultsets:

    - EXECUTION - Summary information about the execution including status, start time & end time
    - EVENTS - All events that occurred during the execution
    - OnError,OnTaskFailed - All events where event_name is either OnError or OnTaskFailed
    - OnWarning - All events where event_name is OnWarning
    - EXECUTABLE_STATS - Duration and execution result of every executable in the execution

    All 5 resultsets will be displayed if there is any data satisfying that resultset. In other words, if there are no (for example) OnWarning events, then the OnWarning resultset will not be displayed. The display of these 5 resultsets can be toggled respectively by these 5 optional parameters (all of which are of type BIT): @exec_execution, @exec_events, @exec_errors, @exec_warnings, @exec_executable_stats

    Any Execution

    As just explained, the default behaviour is to supply data for the most recent execution. If you wish to specify which execution the data should be returned for, simply supply the execution_id as a parameter:

        EXEC [dbo].[sp_ssiscatalog] 6

    All Executions

    sp_ssiscatalog can also return information about all executions:

        EXEC [dbo].[sp_ssiscatalog] @operation_type='execs'

    The most recent execution will appear at the top. sp_ssiscatalog provides a number of parameters that enable you to filter the resultset: @execs_folder_name, @execs_project_name, @execs_package_name, @execs_executed_as_name, @execs_status_desc. Some typical usages might be:

        -- Return all failed executions
        EXEC [dbo].[sp_ssiscatalog] @operation_type='execs',@execs_status_desc='failed'
        -- Return all executions for a specified folder
        EXEC [dbo].[sp_ssiscatalog] @operation_type='execs',@execs_folder_name='My folder'
        -- Return all executions of a specified package in a specified project
        EXEC [dbo].[sp_ssiscatalog] @operation_type='execs',@execs_project_name='My project', @execs_package_name='Pkg.dtsx'

    Installing sp_ssiscatalog

    Under the covers sp_ssiscatalog actually calls many other stored procedures and functions, hence creating it on your server is not simply a case of running a CREATE PROCEDURE script. I maintain the code in an SQL Server Data Tools (SSDT) database project, which means that you have two ways of obtaining it.

    Download the source code

    You can download the latest (at the time of writing) source code from http://ssisreportingpack.codeplex.com/SourceControl/changeset/view/70192. Hit the download button to download all the source code in a zip file. The contents of that zip file will include an SSDT database project which you can open up in SSDT and publish just like any other SSDT database project. You can publish to a new database or any existing database, even [SSISDB] if you prefer.

    Download a dacpac

    Maintaining the code in an SSDT database project means that it can all get packaged up into a dacpac that you can then publish to your SQL Server. That dacpac is available from DB v1.0.0.0. Ordinarily a dacpac can be deployed to a SQL Server from SSMS using the Deploy Dacpac wizard; however, in this case there is a limitation. Due to sp_ssiscatalog referring to objects in the SSIS Catalog (which it has to do of course), the dacpac contains a SqlCmd variable to store the name of the database that underpins the SSIS Catalog; unfortunately the Deploy Dacpac wizard in SSMS has a rather gaping limitation in that it cannot deploy dacpacs containing SqlCmd variables. Hence, we can use the command-line tool, sqlpackage.exe, instead. Don’t worry if reverting to the command-line sounds a little daunting, I assure you it is not. Simply open a Visual Studio command-prompt, cd to the folder containing the downloaded dacpac, and type:

        "%PROGRAMFILES(x86)%\Microsoft SQL Server\110\DAC\bin\sqlpackage.exe" /action:Publish /TargetDatabaseName:SsisReportingPack /SourceFile:SSISReportingPack.dacpac /Variables:SSISDB=SSISDB /TargetServerName:(local)

    or the shortened form:

        "%PROGRAMFILES(x86)%\Microsoft SQL Server\110\DAC\bin\sqlpackage.exe" /a:Publish /tdn:SsisReportingPack /sf:SSISReportingPack.dacpac /v:SSISDB=SSISDB /tsn:(local)

    remembering to set your server name appropriately (here mine is set to “(local)”). If everything works successfully, you’re done! You’ll have a new database called [SsisReportingPack] which contains sp_ssiscatalog.

    Good luck with sp_ssiscatalog. I have been using it extensively on my own projects recently and it has proved to be very useful indeed. Rest assured, however, I will be adding many new capabilities in the future. Feedback is welcome. @Jamiet

    Read the article

  • Sendmail not working as desired in bash script

    - by dan08
    This is the code I have in a bash script that runs as a cronjob. The cronjob runs as root.

        /usr/sbin/sendmail [email protected]<<EOF
        subject:Backup Error!
        from:backup@server01
        $error
        EOF

    There is code after this. The email I get comes from the root user on the machine, and the message body includes:

        subject:Backup Error!
        from:backup@server01
        $error
        EOF
        More code... that is in the script all the way to the end...

    I have tried other variations; this is the closest I've got. I tried this in a regular script and it worked properly. What's going on, and how can I send this email, specifying the subject and the from sender?

    Read the article

  • Cloud computing - database loading question

    - by workwise
    Following is the situation; I want to know whether what I want is possible in cloud computing and whether it is the best way for me:

    1) My main site has a database with tables with millions of rows, and entries are added almost every second.
    2) I will set up a MySQL mirror, so there will be a backup database always in sync with the main one.
    3) There are a few tens of thousands of images - growing. So say the total size of the images is a few tens of gigabytes. I will be keeping the image data also in sync on the backup server.
    4) There can be short periods where traffic goes to 100X the average traffic.
    5) I will be using memcache heavily - most database data and even frequently used disk files/images will be in RAM.

    I want the main site to run on a dedicated server. The backup server is, say, an Amazon EC2 instance. Now note that since it is a live backup, I need to run a small instance continuously. I want that when I anticipate high traffic, I should be able to run a large instance on the cloud and transfer the traffic there. The main point is - I do not want to spend time "loading" the database on the large instance, as that typically can take a few minutes or even hours (experience). So is it possible to just scale the memory/CPU on demand, without having to load the database or sync up the filesystem? I want to set up my backup scripts etc. just ONCE. Thanks JP

    Read the article

  • How to prevent creation of monitors.xml?

    - by user222723
    I'm using Ubuntu on a tablet and have a problem with the screen rotation, which I can fix by removing ~/.config/monitors.xml, but sadly every time I rotate the screen a new monitors.xml is created. Is there any way to prevent this? I already tried to create an empty file with the same name as root, but it was still overwritten after rotating the screen.

    Edit: I think I finally found the reason for the problem. Every time the orientation is changed, the new orientation is saved in monitors.xml while the original monitors.xml is saved as monitors.xml.backup. By playing around with chattr I found out that this causes Ubuntu to try to restore monitors.xml from monitors.xml.backup after every login. So if I turn the screen to the left and then back to normal, monitors.xml says "orientation=normal" and monitors.xml.backup says "orientation=left". After the login, Ubuntu overwrites monitors.xml with the backup, uses its configuration, and turns the screen to the left.

    Read the article

  • Oiling the gears for the data dictionary

    Documenting the database is always a challenge, and there are many techniques you can use to help all the people on your team understand what all your tables are used for. David Poole brings us an easy way to implement a framework for documentation.
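
    The framework itself is behind the link, but the usual SQL Server building block for this kind of data dictionary is extended properties. A minimal sketch, assuming a hypothetical dbo.Customers table:

        -- Attach a description to a table; the same call works for columns,
        -- views, and procedures by adding further @levelNtype arguments
        EXEC sys.sp_addextendedproperty
            @name = N'MS_Description',
            @value = N'One row per customer; source of record for billing.',
            @level0type = N'SCHEMA', @level0name = N'dbo',
            @level1type = N'TABLE',  @level1name = N'Customers';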

    Read the article

  • Hierarchies on Steroids #2: A Replacement for Nested Sets Calculations

    In this sequel to his first "Hierarchies on Steroids" article, SQL Server MVP Jeff Moden shows us how to build a pre-aggregated table that will answer most of the questions that you could ask of a typical hierarchy. Any bets on whether Santa is packin’ a Tally Table in his bag or not?
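
    For anyone who hasn't met one, the Tally Table the teaser winks at is just a table of sequential integers; one common way to build it (a generic sketch, not the article's code):

        -- Build a 1..10000 numbers (Tally) table by cross-joining a couple
        -- of large catalog views, then key it for fast lookups
        SELECT TOP (10000) IDENTITY(int, 1, 1) AS N
        INTO   dbo.Tally
        FROM   sys.all_columns ac1
        CROSS JOIN sys.all_columns ac2;

        ALTER TABLE dbo.Tally
            ADD CONSTRAINT PK_Tally PRIMARY KEY CLUSTERED (N);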

    Read the article

  • Why isn't cron running my script?

    - by Jingqiang Zhang
    I want to use the Backup and Whenever gems to automatically back up my database. When I connect to the server by ssh as an added user and run backup perform -t my_backup, it works well. But the cron entry:

        0 22 * * * /bin/bash -l -c 'backup perform -t my_backup'

    doesn't run at 22:00. When I use cat /etc/crontab to check cron's config file, it is:

        SHELL=/bin/sh
        PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
        # m h dom mon dow user  command
        17 *    * * *   root    cd / && run-parts --report /etc/cron.hourly
        25 6    * * *   root    test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
        47 6    * * 7   root    test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )
        52 6    1 * *   root    test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly )

    /bin/bash and /bin/sh are different. What's the reason, and what should I do?

    Read the article

  • Light Weight Monitoring using Extended Events

    SQL Server 2008 introduces Extended Events for performance monitoring. SQL Server Extended Events is a general event-handling system for server systems. So why another event-handling system? We already have Activity Monitor, Perfmon, SQL Profiler, and DMVs. However, Extended Events ... [Read Full Article]
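
    As a flavor of the syntax, a minimal sketch (session name and one-second threshold are arbitrary) that captures long-running statements into a ring buffer on SQL Server 2008:

        -- Create a lightweight session; duration is in microseconds in SQL Server 2008
        CREATE EVENT SESSION [LongRunningStatements] ON SERVER
        ADD EVENT sqlserver.sql_statement_completed
            (WHERE duration > 1000000)
        ADD TARGET package0.ring_buffer;

        -- Start collecting
        ALTER EVENT SESSION [LongRunningStatements] ON SERVER STATE = START;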

    Read the article

  • 7 Things that High Availability is Not

    Wesley has heard High Availability touted as all sorts of technological cure-all for busy SysAdmins and DBAs, and now he's taking a stand against it. There are a range of things that High Availability is regularly confused with (either deliberately or innocently), and Wesley's clearing it all up.

    Read the article

  • Rolling Along: PASS Board Year 2, Q2

    - by Denise McInerney
    Eighteen months into my time as a PASS Director, I’m especially proud of what the Virtual Chapters have accomplished and want to share that progress with you. I'm also pleased that the organization has invested more resources to support the VCs. In this quarter I got to attend two conferences and meet more members of the SQL community.

    Virtual Chapters

    In the first six months of 2013, VCs have hosted more than 50 webinars, offering free technical education to over 6,200 attendees. This is a great benefit to PASS members; thanks to the VC leaders, volunteers and speakers who contribute their time to produce these events. The Performance VC held their “Summer Performance Palooza”, an event featuring eight back-to-back sessions. Links to the session recordings can be found on the VC's web site. The new webinar platform, GoToWebinar, has been rolled out to all the VCs. This is a more stable, scalable platform and represents an important investment in the future of the VCs. A few new VCs are in the planning stages, including one focused on Security and one for Russian speakers. Visit the Virtual Chapter home page to sign up for the chapters that interest you. Each Virtual Chapter is offering a discount code for PASS Summit 2013. Be sure to ask your VC leader for the code to save $200 on Summit registration.

    24 Hours of PASS

    The next 24HOP will be on July 31. This Summit Preview edition will feature 24 consecutive webcasts presented by experts who will be speaking at Summit in October. Registration for this free event is open now, and we will be using the GoToWebinar platform for 24HOP also.

    Business Analytics Conference

    April marked the first PASS Business Analytics Conference in Chicago. This introduced PASS to another segment of data professionals: the analysts and data scientists who work with the world’s growing collection of data. Overall the inaugural event was a success and gave us a glimpse into this increasingly important space. After Chicago, the Board had several serious discussions about the lessons learned from this event and what we should do next. We agreed to apply those lessons and continue to invest in this event; there will be a PASS Business Analytics Conference in 2014. I’m very pleased the next event will be in San Jose, CA, the heart of Silicon Valley, a place where a great deal of investment and innovation in data analytics is taking place.

    Global SQL Community

    Over the last couple of years PASS has been taking steps to become more relevant to SQL communities in different parts of the world. In May I had the opportunity to attend SQL Bits XI in Nottingham, England. It was enlightening to meet and talk with SQL professionals from around the U.K. as well as many other European countries. The many SQL Bits volunteers put on a great event and were gracious hosts.

    Budgets

    The Board passed the FY14 budget at the end of June. The budget process can be challenging and requires the Board to make some difficult choices about where to allocate resources. Overall I’m satisfied with the decisions we made and think we are investing in the right activities and programs.

    Next Up

    The Board is meeting July 18-19 in Kansas City. We will be holding the Executive Committee election for the Exec Co that will take office in 2014. We will also be discussing plans for the next BA conference as well as the next steps for our Global Growth initiative. Applications for the upcoming Board of Directors election open on July 24. If you are considering running for the Board, you can visit the PASS elections site to learn more about the election process, and I encourage anyone considering running to reach out to current and past Board members to learn about what the role entails. Plans for the next PASS Summit are in full swing. We are working on some fun new ideas to introduce attendees to the many ways to become involved in the SQL community.

    Read the article

  • How to use RunAs command for SSMS if option does not exist

    In order to use your normal Windows login and your admin login to connect to SQL Server using SSMS you need to use the "Run as" feature. What do you do in the case of Windows 7 or Windows Vista where you can’t find the Run As Different User option?

    Read the article

  • Spend a Week With Kalen Delaney in the Boston Area

    - by Adam Machanic
    If you're reading this blog, you're undoubtedly already familiar with Kalen Delaney. She's been writing the premier internals book series for Microsoft since SQL Server 2000, teaching SQL Server for many years before that, and is known as one of the most knowledgeable people in the world when it comes to how SQL Server works and the art of applying that knowledge to your day-to-day work. Given Kalen's extreme depth of knowledge and reputation as a fantastic teacher, it should come as no surprise that last time...(read more)

    Read the article

  • Android: How to restore List data when pressing the "back" button?

    - by Rob
    Hi there, my question is about restoring complex activity-related data when coming back to an activity using the "back" button. Activity A has a ListView which is connected to an ArrayAdapter serving as its data source - this happens in onCreate(). By default, if I move to activity B and press "back" to get back to activity A, does my list stay intact with all the data, or do I just get a visual "copy" of the screen while the data is lost? What can I do when more than two activities are involved? Let's say activity A starts activity B, which starts activity C, and then I press "back" twice to get to A. How do I ensure the integrity of A's data when it gets back to the foreground? PrefsManager does not seem to handle complex objects very intuitively. Thanks, Rob

    Read the article

  • Is it possible to choose the Amazon Glacier Vault when backing up from Synology NAS?

    - by Jorrit Reedijk
    I am using Amazon Glacier Backup on a Synology NAS (rackstation), but the Synology backup software for Glacier does not allow me to choose the vault to back up to. This means I cannot give my own name to the vault, and I also cannot back up to a previously created vault (I am having this problem now after replacing a Synology rackstation). Is there any way to choose which vault your Synology backs up to? Edit: I have now posted this question to the Synology forum also: http://forum.synology.com/enu/viewtopic.php?f=174&t=86557&e=0 - if I can solve the problem I'll post the solution here also.

    Read the article

  • Serialize a C# class to XML. Store the XML as a string in SQL Server and then restore the class later

    - by BrianK
    I want to serialize a class to XML and store that in a field in a database. I can serialize with this:

        StringWriter sw = new StringWriter();
        XmlSerializer xmlser = new XmlSerializer(typeof(MyClass));
        xmlser.Serialize(sw, myClassVariable);
        string s = sw.ToString();
        sw.Close();

    That works, but it has the namespaces in it:

        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"

    Will these slow down the deserialization because it will go out to those and verify the XML? I got rid of the namespaces by creating a blank XmlSerializerNamespaces and using that to serialize, but then the XML still had namespaces around integer variables:

        <anyType xmlns:q1="http://www.w3.org/2001/XMLSchema" d3p1:type="q1:int" xmlns:d3p1="http://www.w3.org/2001/XMLSchema-instance">
          3
        </anyType>

    My questions are: Is it necessary to have the namespaces for deserialization, and if not, how do I get rid of them? And how do I tell it the fields are ints so it doesn't put in "anyType"? Thanks, Brian
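
    On the storage side of this pattern, a minimal sketch of a holding table (table and column names are hypothetical); using the xml data type rather than a plain string column gets well-formedness checking on insert:

        -- Hypothetical table for holding serialized objects; the xml type
        -- validates well-formedness and supports XQuery later
        CREATE TABLE dbo.SerializedObjects
        (
            ObjectID   int IDENTITY(1,1) PRIMARY KEY,
            ObjectType sysname  NOT NULL,   -- e.g. 'MyClass'
            Payload    xml      NOT NULL,
            SavedAt    datetime NOT NULL DEFAULT GETDATE()
        );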

    Read the article
