Search Results

Search found 33569 results on 1343 pages for 'sql backup and restore'.

  • Introduction to SQL Server 2014 CTP1 Memory-Optimized Tables

    There are a number of new features that became available with SQL Server 2014. One of the more exciting features is the new Memory-Optimized tables. In this article Greg Larson explores how to create Memory-Optimized tables, and what he found during his initial exploration of this new type of table.
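
    For a flavour of the syntax involved, here is a minimal sketch of a memory-optimized table. The table and column names are hypothetical, it uses the RTM-era form of the syntax (details shifted slightly across the CTPs), and it assumes the database already has a MEMORY_OPTIMIZED_DATA filegroup, a prerequisite for this feature:

        -- Assumes a MEMORY_OPTIMIZED_DATA filegroup already exists in the database.
        -- Memory-optimized tables need a primary key; here it is a nonclustered
        -- hash index, which requires an explicit BUCKET_COUNT.
        CREATE TABLE dbo.SessionState
        (
            SessionId INT NOT NULL
                PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
            UserName  NVARCHAR(50) NOT NULL,
            LastTouch DATETIME2 NOT NULL
        )
        WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);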

    Read the article

  • SQL Saturday 300 BBQ Crawl

    - by Bill Graziano
    SQL Saturday #300 is coming up right here in Kansas City on September 13th, 2014.  This is our fifth SQL Saturday which means it's the fifth anniversary of our now infamous BBQ Crawl.  We get together on Friday afternoon before the event and visit a few local joints.  We've done nice places and we've done dives.  We haven’t picked the venues yet but I promise you’ll be well fed! And if you’re thinking about the BBQ crawl you should think about submitting a session.  Our call for speakers closes Tuesday, July 15th so you just have time!  If you’re going to be at the event, contact me and I’ll get you added to the list.

    Read the article

  • Is it possible to use Back in Time with Ubuntu One without storing the backup files locally?

    - by leousa
    I use Deja Dup as my backup tool. I love its simplicity and the fact that I can store my backups directly on my Ubuntu One account. Now I really like the flexibility of Back in Time, but I cannot find a clean way to store my backups in Ubuntu One. Some threads suggest using the Ubuntu One folder in your system, and that works, but it also keeps a local copy of the backup on my system, and I do not want that. Any workaround for this? Thanks in advance

    Read the article

  • SQL Database Management Survey

    Win one of two $50 Amazon vouchers by entering our database management survey. We’re finding out more about how SQL database professionals are doing backup and recovery, using cloud services and more. Answer the short survey for a chance to win.

    Read the article

  • Encryption password

    - by George
    I am running Ubuntu 12.04 LTS. I thought that everything in my Documents folder was being backed up by Ubuntu One, as every time I put a file in my Documents folder a box popped up stating it was backing up the file. When I looked at the left-hand list of programmes on my desktop screen there was an extra box in the list. I clicked the box and it stated an encryption password was required. It seems the latest files have not been backed up because of this. Can anyone explain what this encryption password is and how I get it?

    Read the article

  • How to build the SQL community

    - by simonsabin
    I’ve been running SQLBits for 5 years and have always had a desire to make the SQL community better. I’ve often thought about running for the board but have never stood. Just over a year ago I was at a meeting with some SQL leaders about growing PASS globally. At that meeting a friend of mine offered to help the board from an international perspective. I thought he was mad. James runs his own business, has been managing the sponsors for SQLBits and has 3 kids to look after; no way would he have the...(read more)

    Read the article

  • How do I move (copy) my entire Ubuntu system to a different hard disk?

    - by boywithaxe
    The HDD I have Ubuntu installed on is about to fail. I would rather not lose 3 years' worth of data, customisation and apps. I am looking for a way to move the complete system (SWAP included, because I'm not sure if I can relink the system to a new SWAP partition) to another HDD. Not the complete HDD, though: only the partition containing Ubuntu, copied to a partition on a different HDD. Basically I'd like to do what I've been able to do with Norton Ghost for my Windows install. I thought about using Clonezilla but I think I would have issues with GRUB (especially trying to boot from a different UUID than what is in the conf file). Do you know of any way this could be done? PS, my home directory is encrypted but that's not really an issue, because I can work around that. EDIT: changed the explanation to make it clearer

    Read the article

  • Introduction to the SQL Server Analysis Services Neural Network Data Mining Algorithm

    In data mining and machine learning circles, the neural network is one of the most difficult algorithms to explain. Fortunately, SQL Server Analysis Services allows for a simple implementation of the algorithm for data analytics. Dallas Snider explains.
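
    As a taste of what that implementation looks like, a mining model using the algorithm can be declared in a few lines of DMX, the SSAS data mining language. This is a hedged sketch with hypothetical column names, not taken from the article:

        -- DMX: declare a mining model trained with the SSAS neural network
        -- algorithm. PREDICT marks the output the network learns to estimate.
        CREATE MINING MODEL CustomerChurnNN
        (
            CustomerKey   LONG   KEY,
            Age           LONG   CONTINUOUS,
            YearlyIncome  DOUBLE CONTINUOUS,
            Churned       TEXT   DISCRETE PREDICT
        )
        USING Microsoft_Neural_Network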

    Read the article

  • Is there a way to backup AD and DNS in Windows 2008 without backing up the whole Volume?

    - by EtherDragon
    I would like to know if there is a (cost-free) way to back up Windows Server 2008 Active Directory and DNS settings without using Windows Server Backup. The problem stems from not having a separate volume available to store the resulting backup from Windows Server Backup. I examined the command line options with wbadmin and it also expects the destination to be a dedicated volume for the backup. ~ED

    Read the article

  • SQL Injection: How it Works and How to Thwart it

    This is an extract from the book Tribal SQL. In this article, Kevin Feasel explains SQL injection attacks, how to defend against them, and how to keep your Chief Information Security Officer from appearing on the nightly news.
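
    The classic defence against injection is parameterisation: keep user input as data, never as query text. A minimal T-SQL sketch of the contrast, with a hypothetical Customers table:

        -- Vulnerable: user input is concatenated into the statement, so a value
        -- like ' OR 1=1 -- changes the meaning of the query itself.
        DECLARE @name NVARCHAR(50) = N'Smith';
        DECLARE @unsafe NVARCHAR(200) =
            N'SELECT * FROM dbo.Customers WHERE LastName = ''' + @name + N'''';
        EXEC (@unsafe);

        -- Safer: sp_executesql passes the value as a typed parameter, so it is
        -- only ever treated as data, never as executable SQL.
        EXEC sp_executesql
            N'SELECT * FROM dbo.Customers WHERE LastName = @p',
            N'@p NVARCHAR(50)',
            @p = @name;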

    Read the article

  • SQL Saturday #248 Tampa

    SQLSaturday BI & Big Data Edition is a free training event for everyone interested in learning about Business Intelligence & Big Data with a focus on the Microsoft SQL Server platform. This event will be held November 9th, 2013. In addition to our free Saturday event, we will also host four paid full-day pre-conferences.

    Read the article

  • Looking for SQL People

    - by simonsabin
    I’m looking for 2 SQL people to join our data team. I need people with strong SQL skills who are keen to develop an exciting data platform. Desirable skills are MDX/SSAS, data warehousing and experience of the finance industry. The role is a full-time role based in London. If you are interested then let me know either via my http://sqlblogcasts.com/blogs/simons/contact.aspx or via twitter @simon_sabin https://twitter.com/intent/tweet?screen_name=simon_sabin&text=Read your blog send me details . No...(read more)

    Read the article

  • Using Friendly Names for SQL Servers via DNS

    Wouldn't it be great if your HR folks only had to put in HR-SQL.mydomain.com for the database connection in their reports? They wouldn't have to remember it was on server Nile and they certainly wouldn't have to change their reports if you migrated their database from the Nile server to the server named Danube. In DNS there are two easy ways to do this.

    Read the article

  • Setting Up Your SQL Server Agent Correctly

    It is important to set up SQL Server Agent security on the principle of 'executing with minimum privileges', and to ensure that errors are properly logged and alerts are set up for a comprehensive range of errors. SQL Server Agent allows fine-grained control of every job step that should allow tasks to be run entirely safely even if they occasionally need special privileges.
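
    In practice, 'minimum privileges' for job steps usually means credentials and proxies rather than widening the Agent service account's rights. A hedged sketch, with hypothetical account and proxy names, of wiring up a proxy for a CmdExec job step:

        -- A credential stores the Windows account the job step should run as.
        USE master;
        CREATE CREDENTIAL EtlCredential
            WITH IDENTITY = N'CONTOSO\etl_service', SECRET = N'<password here>';

        -- A proxy exposes that credential to SQL Server Agent, one subsystem
        -- at a time.
        USE msdb;
        EXEC dbo.sp_add_proxy
            @proxy_name = N'EtlProxy', @credential_name = N'EtlCredential';
        EXEC dbo.sp_grant_proxy_to_subsystem
            @proxy_name = N'EtlProxy', @subsystem_name = N'CmdExec';

        -- Only logins explicitly granted the proxy may use it in job steps.
        EXEC dbo.sp_grant_login_to_proxy
            @proxy_name = N'EtlProxy', @login_name = N'CONTOSO\job_owner';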

    Read the article

  • I need to get past my permissions to recover data

    - by adsmz
    Due to some mishaps, I am unable to boot into Kubuntu at all. However, my data is still on the hard drive. I managed to get one of the other two computers to which I have access to read the disk by booting into a live CD session of Kubuntu. The only storage medium to which I have access is a 30 GB data stick. Here's where the trouble starts: in music alone, I have to back up about 60 GB. Obviously this is going to have to be split into chunks and moved over to the second spare PC until I can reinstall Kubuntu on my laptop. All of the data that needs to be backed up is behind a permissions wall, so while I can view it, I can't interact with it directly. I know copying and moving through the terminal can get around this with sudo cp or sudo mv, but is there a way to first compress multiple folders into a single archive, then move it? (While we're on the subject, what compression method would be best for large volumes of music in MP3, WAV, and OGG format?)

    Read the article

  • Can't reset BackInTime snapshot path

    - by user87337
    I've been using BackInTime with Ubuntu 12.04. The disk I was saving to is no longer available. BackInTime insists that I bring it back. (It says "Can't find snapshots folder. If it is on a removable drive please plug it and then press OK") No matter what I've tried, I can't seem to get beyond this point. I've even tried removing BackInTime and re-installing it. The problem persists. How can I change the snapshots path without the missing disk? ---Jack

    Read the article

  • backing up a remote vps?

    - by ajsie
    I am using a VPS at a hosting company. I can access it through SSH and I have set up WebDAV. I asked them if they back up the VPS and they told me they make a backup every day. But I wonder if I should back up my important files on the VPS to my local computer, because it seems unsafe to upload all my important files and then delete them from my local machine without knowing whether they are backed up properly. If I should make backups, how and how often should I do it? What program could I use? Any best practices I should know about? Thanks!

    Read the article

  • rsync useful w/ encrypted files?

    - by barrycarter
    Is rsync efficient for transferring encrypted files? More specifically: I encrypt 'x' with my public key and call the result 'y'. I rsync 'y' to my backup server. 'x' changes slightly. I encrypt the modified 'x' and rsync the modified 'y' to my backup server. Is this efficient? I know a small change in 'x' yields a large change in 'y', but is the change localized? Or has 'y' changed so thoroughly that rsync is not much better than scp? I currently back up my "critical" files by tarring/bzipping them nightly, then encrypting the .tar.bz2 file and rsyncing it to my backup server. Many of the individual files don't change but, of course, the tar file changes if even one of the files changes. Is this efficient? Should I be encrypting and backing up each file individually? That way, unchanged files will take no time to rsync.

    Read the article

  • Fragmented Log files could be slowing down your database

    - by Fatherjack
    Something that is sometimes forgotten by a lot of DBAs is the fact that database log files get fragmented in the same way that you get fragmentation in a data file. The cause is very different but the effect is the same: too much effort reading and writing data. Data files get fragmented as data is changed through normal system activity; INSERTs, UPDATEs and DELETEs cause fragmentation, and most experienced DBAs are monitoring their indexes for fragmentation and dealing with it accordingly. However, you don't hear about so many working on their log files.

    How can a log file get fragmented? I'm glad you asked. When you create a database, at least two files are created on the disk storage: an mdf for the data and an ldf for the log file (you can also have ndf files for extra data storage, but that's off topic for now). It is wholly possible to have more than one log file, but in most cases there is little point in creating more than one, as the log file is written to in a 'wrap-around' method (more on that later).

    When a log file is created at the time that a database is created, the file is actually subdivided into a number of virtual log files (VLFs). The number and size of these VLFs depends on the size chosen for the log file. VLFs are also created in the space added to a log file when a log file growth event takes place. Do you have your log files set to auto grow? Then you have potentially been introducing many VLFs into your log file. Let's see how many VLFs we have in a brand new database:

        USE master
        GO
        CREATE DATABASE VLF_Test ON
        ( NAME = VLF_Test,
          FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test.mdf',
          SIZE = 100, MAXSIZE = 500, FILEGROWTH = 50 )
        LOG ON
        ( NAME = VLF_Test_Log,
          FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test_log.ldf',
          SIZE = 5MB, MAXSIZE = 250MB, FILEGROWTH = 5MB );
        GO
        USE VLF_Test
        GO
        DBCC LOGINFO;

    This creates a new database with the specified file sizes and then returns the DBCC LOGINFO results to the script editor. The DBCC LOGINFO results have plenty of interesting information in them, but let's first note that there are 4 rows of information; this relates to the fact that 4 VLFs have been created in the log file. The values in the FileSize column are the sizes of each VLF in bytes, and you will see that the last one to be created is slightly larger than the others. So, a 5MB log file has 4 VLFs of roughly 1.25MB each.

    Let's alter the CREATE DATABASE script to create a log file that's a bit bigger and see what happens. Replace the log file details in the script above with:

        LOG ON
        ( NAME = VLF_Test_Log,
          FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test_log.ldf',
          SIZE = 1GB, MAXSIZE = 25GB, FILEGROWTH = 1GB );

    With a bigger log file specified we get more VLFs. What if we make it bigger again?

        LOG ON
        ( NAME = VLF_Test_Log,
          FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test_log.ldf',
          SIZE = 5GB, MAXSIZE = 250GB, FILEGROWTH = 5GB );

    This time even more VLFs are created within our log file: we now have a 5GB log file comprised of 16 VLFs of 320MB each. In fact these sizes fall into all the ranges that control the VLF creation criteria (what a coincidence!). The rules that are followed when a log file is created or has its size increased are pretty basic:

        - If the file growth is lower than 64MB then 4 VLFs are created
        - If the growth is between 64MB and 1GB then 8 VLFs are created
        - If the growth is greater than 1GB then 16 VLFs are created

    Now the potential for chaos comes if the default values and settings for log file growth are used. By default a database gets a 1MB log file with unlimited growth in steps of 10%. The database we just created is 6MB; let's add some data and see what happens:

        USE VLF_Test
        GO
        -- We need somewhere to put the data, so a table is in order
        IF OBJECT_ID('A_Table') IS NOT NULL
            DROP TABLE A_Table
        GO
        CREATE TABLE A_Table
        ( Col_A int IDENTITY,
          Col_B CHAR(8000) )
        GO
        -- Check the state of the log file: 4 VLFs found
        EXECUTE ('DBCC LOGINFO');
        GO
        -- Insert some data and then check the state of the log file again
        INSERT A_Table (col_b)
        SELECT TOP 500 REPLICATE('a',2000)
        FROM sys.columns AS sc, sys.columns AS sc2
        GO
        -- After 500 rows we get 22 VLFs
        EXECUTE ('DBCC LOGINFO');
        GO
        -- Let's insert more rows
        INSERT A_Table (col_b)
        SELECT TOP 2000 REPLICATE('a',2000)
        FROM sys.columns AS sc, sys.columns AS sc2
        GO 10
        -- 2000 rows inserted in each of 10 batches and we suddenly have 107 VLFs
        EXECUTE ('DBCC LOGINFO');

    Well, that escalated quickly! Our log file is split, internally, into 107 fragments after a few thousand inserts. The same happens with any logged transactions; I just chose to illustrate this with INSERTs. Having too many VLFs can cause performance degradation at times of database start up, log backup and log restore operations, so it's well worth keeping a check on this property.

    How do we prevent excessive VLF creation? Create the database with larger files and larger growth steps, and actively choose to grow your databases rather than leaving it to the Auto Grow event, so that growths are made with an optimal size.

    How do we resolve a situation of a database with too many VLFs? This process needs to be done when the database is under little or no stress so that you don't affect system users. The steps are:

        1. Back up the log: BACKUP LOG YourDBName TO YourBackupDestinationOfChoice
        2. Shrink the log file to its smallest possible size: DBCC SHRINKFILE(FileNameOfTLogHere, TRUNCATEONLY) *
        3. Re-size the log file to the size you want it to be, taking into account your expected needs for the coming months or year: ALTER DATABASE YourDBName MODIFY FILE ( NAME = FileNameOfTLogHere, SIZE = TheSizeYouWantItToBeIn_MB ) *

        * If you don't know the file name of your log file, run sp_helpfile while connected to the database you want to work on and you will get the details you need. Note that the resize step can take quite a while.

    This is already detailed far better than I can explain it by Kimberly Tripp in her blog 8-Steps-to-better-Transaction-Log-throughput.aspx. The result of this will be a log file with a VLF count according to the rules above.

    Knowing when VLFs are being created: by complete coincidence, while I have been writing this blog (it's been quite some time from its inception to going live), Jonathan Kehayias from SQLSkills.com has written a great article on how to track database file growth using Event Notifications and Service Broker. I strongly recommend taking a look at it, as it will catch any sneaky auto grows that take place and let you know about them right away.
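
    As a quick way to keep an eye on the numbers, here is a small sketch that counts VLFs per database. It assumes SQL Server 2016 SP2 or later, where sys.dm_db_log_info exposes the same information as DBCC LOGINFO in queryable form; on the 2008-era versions used above you would capture DBCC LOGINFO output per database instead:

        -- Count VLFs for every database on the instance (SQL Server 2016 SP2+).
        SELECT d.name   AS database_name,
               COUNT(*) AS vlf_count
        FROM sys.databases AS d
        CROSS APPLY sys.dm_db_log_info(d.database_id) AS li
        GROUP BY d.name
        ORDER BY vlf_count DESC;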

    Hassle-free monitoring of VLFs: if you are lucky or wise enough to be using SQL Monitor or another monitoring tool that lets you write your own custom metrics, then you can keep an eye on this very easily. There is a custom metric for VLFs (written by Stuart Ainsworth) already on the site, and there are some others there that are very useful, so take a moment or two to look around while you are there.

    Resources:

        - MSDN: http://msdn.microsoft.com/en-us/library/ms179355(v=sql.105).aspx
        - Kimberly Tripp at SQLSkills.com: http://www.sqlskills.com/BLOGS/KIMBERLY/post/8-Steps-to-better-Transaction-Log-throughput.aspx
        - Thomas LaRock at Simple-Talk.com: http://www.simple-talk.com/sql/database-administration/monitoring-sql-server-virtual-log-file-fragmentation/

    Disclosure: I am a Friend of Red Gate. This means that I am more than likely to say good things about Red Gate DBA and Developer tools. No matter how awesome I make them sound, take the time to compare them with other products before you contact the Red Gate sales team to make your order.

    Read the article

  • 12c - SQL Text Expansion

    - by noreply(at)blogger.com (Thomas Kyte)
    Here is another small but very useful new feature in Oracle Database 12c: SQL Text Expansion. It will come in handy in two cases:

        1. You are asked to tune what looks like a simple query, maybe a two-table join with simple predicates. But it turns out the two tables are each views of views of views and so on. In other words, you've been asked to 'tune' a 15 page query, not a two-liner.
        2. You are asked to take a look at a query against tables with VPD (virtual private database) policies. In other words, you have no idea what you are trying to 'tune'.

    A new function, EXPAND_SQL_TEXT, in the DBMS_UTILITY package makes seeing what the "real" SQL is quite easy. For example, take the common view ALL_USERS. We can now:

        ops$tkyte%ORA12CR1> variable x clob
        ops$tkyte%ORA12CR1> begin
          2          dbms_utility.expand_sql_text
          3          ( input_sql_text => 'select * from all_users',
          4            output_sql_text => :x );
          5  end;
          6  /

        PL/SQL procedure successfully completed.

        ops$tkyte%ORA12CR1> print x

        X
        --------------------------------------------------------------------------------
        SELECT "A1"."USERNAME" "USERNAME","A1"."USER_ID" "USER_ID","A1"."CREATED"
        "CREATED","A1"."COMMON" "COMMON" FROM  (SELECT "A4"."NAME" "USERNAME",
        "A4"."USER#" "USER_ID","A4"."CTIME" "CREATED",DECODE(BITAND("A4"."SPARE1",128),
        128,'YES','NO') "COMMON" FROM "SYS"."USER$" "A4","SYS"."TS$" "A3","SYS"."TS$"
        "A2" WHERE "A4"."DATATS#"="A3"."TS#" AND "A4"."TEMPTS#"="A2"."TS#" AND
        "A4"."TYPE#"=1) "A1"

    Now it is easy to see what query is really being executed at runtime, regardless of how many views of views you might have. You can see the expanded text, and that will probably lead you to the conclusion that maybe that 27 table join to 25 tables you don't even care about might better be written as a two table join.

    Further, if you've ever tried to figure out what a VPD policy might be doing to your SQL, you know it was hard to do at best. Christian Antognini wrote up a way to sort of see it, but you never got to see the entire SQL statement: http://www.antognini.ch/2010/02/tracing-vpd-predicates/. Now, with this function, it becomes rather trivial to see the expanded SQL after the VPD has been applied. We can see this by setting up a small table with a VPD policy:

        ops$tkyte%ORA12CR1> create table my_table
          2  (  data        varchar2(30),
          3     OWNER       varchar2(30) default USER
          4  )
          5  /

        Table created.

        ops$tkyte%ORA12CR1> create or replace
          2  function my_security_function( p_schema in varchar2,
          3                                 p_object in varchar2 )
          4  return varchar2
          5  as
          6  begin
          7     return 'owner = USER';
          8  end;
          9  /

        Function created.

        ops$tkyte%ORA12CR1> begin
          2     dbms_rls.add_policy
          3     ( object_schema   => user,
          4       object_name     => 'MY_TABLE',
          5       policy_name     => 'MY_POLICY',
          6       function_schema => user,
          7       policy_function => 'My_Security_Function',
          8       statement_types => 'select, insert, update, delete',
          9       update_check    => TRUE );
         10  end;
         11  /

        PL/SQL procedure successfully completed.

    And then expanding a query against it:

        ops$tkyte%ORA12CR1> begin
          2          dbms_utility.expand_sql_text
          3          ( input_sql_text => 'select * from my_table',
          4            output_sql_text => :x );
          5  end;
          6  /

        PL/SQL procedure successfully completed.

        ops$tkyte%ORA12CR1> print x

        X
        --------------------------------------------------------------------------------
        SELECT "A1"."DATA" "DATA","A1"."OWNER" "OWNER" FROM  (SELECT "A2"."DATA"
        "DATA","A2"."OWNER" "OWNER" FROM "OPS$TKYTE"."MY_TABLE" "A2" WHERE
        "A2"."OWNER"=USER@!) "A1"

    Not an earth shattering new feature, but extremely useful in certain cases. I know I'll be using it when someone asks me to look at a query that looks simple but has a twenty page plan associated with it!

    Read the article
