Search Results

Search found 296 results on 12 pages for 'henrik bak'.

  • Free book to download: Kanban and Scrum, by Henrik Kniberg and Mattias Skarin, translated into French

    Hello, A volunteer translation of Kanban and Scrum - making the most of both by Henrik Kniberg and Mattias Skarin, published by InfoQ, has been produced by Claude Aubry, Frédéric Faure, Antoine Vernois, and Fabrice Aimetti. You can therefore download it free of charge from the Developpez.com hosting dedicated to Henrik's contributions. Soon you will also be able to read it entirely online. And of course, a big thank you to the four translators for...

    Read the article

  • ARM TechCon 2013: Oracle Summary from Henrik Stahl

    - by hinkmond
    Henrik Stahl posted a good blog post summary of Oracle's involvement at last week's ARM TechCon 2013 in Santa Clara, Calif. Lots of new and interesting items to note from this year's conference. See: ARM TechCon 2013 Summary Here's a quote: If you have been following Java news, you are already aware of the fact that there has been a lot of investment in Java for ARM-based devices and servers over the last couple of years... Good stuff related to Java Embedded on ARM chips, but even better stuff coming soon... Stay tuned. Hinkmond

    Read the article

  • Download databasename.bak file

    - by Jordon
    I downloaded a databasename.bak file from my hosting company, but when I try to restore that DB file in SQL Server 2008 it keeps giving me the following error: The media family on device 'C:\go4sharepoint_1384_8481.bak' is incorrectly formed. SQL Server cannot process this media family. RESTORE HEADERONLY is terminating abnormally. (Microsoft SQL Server, Error: 3241) According to this error, and from the following link http://www.sqlcoffee.com/Troubleshooting047.htm, it is clear that the file I am downloading is either corrupt or is getting corrupted on the way. Any idea why I keep receiving this error? I have tried almost everything but am unable to fix this problem; please help me.
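
    A hedged first check, since error 3241 usually means the .bak was damaged in transit (an ASCII-mode FTP transfer will corrupt it) or comes from a newer SQL Server version: re-download the file in binary mode, compare its size against the copy on the server, and then ask SQL Server to validate it. A minimal sketch, reusing the path from the error message:

    -- validate the backup set without restoring it
    RESTORE VERIFYONLY FROM DISK = N'C:\go4sharepoint_1384_8481.bak';
    -- if that succeeds, list the files inside before running the real RESTORE
    RESTORE FILELISTONLY FROM DISK = N'C:\go4sharepoint_1384_8481.bak';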

    Read the article

  • Restore .bak file created on Windows XP Pro in Windows XP Pro 64

    - by Kobojunkie
    I have a situation that I need help with. I backed up my files on Windows XP using the System Backup utility/wizard and then installed a new operating system on the machine. Now I want to restore my old files via the .bak file, but it is not being recognized at all. Did I do this wrong, or is there still a way to get my old files back on my new OS? Thanks in advance!

    Read the article

  • How to Import a .bak file using MySQL?

    - by user682526
    I have no knowledge of MySQL, and I have received a .bak file from one of my customers who asked me to "import" this file using MySQL. I installed the free MySQL Community Server download, which includes the MySQL Command Line Client. Now I don't know how to import this .bak file, and I can't read the data that I need. I tried installing the MySQL EE free trial as well, but it doesn't give me anything that lets me open a DB GUI so I can import the .bak file either. I'm frustrated and can't get anywhere. Can you please help? Thanks!
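
    A hedged note before any command: MySQL has no native .bak format. If the customer's file actually came from Microsoft SQL Server, MySQL cannot read it at all; but if it is a renamed mysqldump text file (open it in a text editor; it would start with readable SQL statements), the command-line client can replay it. A minimal sketch with a hypothetical database name:

    mysql -u root -p -e "CREATE DATABASE customer_db"
    mysql -u root -p customer_db < databasename.bak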

    Read the article

  • SharePoint 2010 restore .bak error

    - by Quiel
    I'm having issues restoring my .bak files to my own SharePoint environment due to a version difference. My version is currently: SharePoint Server with Enterprise Client Access License; Microsoft SharePoint Foundation 2010 core: 14.0.4763.1000; Security Update for Microsoft SharePoint Foundation 2010 (KB2494001): 14.0.6106.5008. The server version (where I'm creating my backups) is: SharePoint Server with Enterprise Client Access License; Microsoft SharePoint Foundation 2010 core: 14.0.4763.1000; Security Update for Office SharePoint Foundation 2010 (KB2345322): 14.0.5123.5002. I would like to know if there is a way to make my backups from the server compatible with my version. My only option is to downgrade my version, since I'm not an admin on the server. Is my Client Access License an issue? Can I just downgrade my security update to match the one on the server? If there is a way, would you please be kind enough to tell me how?

    Read the article

  • Fastest way to compress a database or .bak file and transfer it

    - by Nai
    As per the question title: I wonder if there are special programs or commands that make zipping up a .bak file and transferring it super quick. I read about xp_cmdshell here, but I'm not sure about the speed. My .bak file is about 12 GB at the moment. Related to this is the possibility of using Red Gate's SQL Data Compare to transfer just the differential data across the network, but I have never used SQL Data Compare before, and I'm not sure how it goes about doing INSERTs on tables with primary keys and such. Also, I'm not sure about the speed. Does anyone have any experience with this program or similar programs? Cheers!
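
    One hedged option, assuming the server is SQL Server 2008 or later (Enterprise edition in 2008; Standard and up from 2008 R2): native backup compression writes a much smaller .bak in a single step, so there is nothing to zip before the transfer. A minimal sketch with hypothetical names:

    BACKUP DATABASE [MyDB]
        TO DISK = N'D:\Backups\MyDB_compressed.bak'   -- hypothetical target path
        WITH COMPRESSION, INIT, STATS = 10;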

    Read the article

  • DatabaseName.bak File Transfer Problem

    - by Jordon
    I downloaded a databasename.bak file from my hosting company, but when I try to restore that DB file in SQL Server 2008 it keeps giving me the following error: The media family on device 'C:\go4sharepoint_1384_8481.bak' is incorrectly formed. SQL Server cannot process this media family. RESTORE HEADERONLY is terminating abnormally. (Microsoft SQL Server, Error: 3241) According to this error and the following link http://www.sqlcoffee.com/Troubleshooting047.htm ... Later, when I tried restoring the file directly on the server, it restored correctly; but when I transferred the same file using the FTP program FileZilla and tried to restore the downloaded copy, it gave the above error. That is, the file is getting corrupted during the FTP transfer. Any idea how I can keep the file from getting corrupted? Note: the file was downloaded using FileZilla with TransferType set to Binary. Thank you.

    Read the article

  • T-SQL syntax to restore .bak to a new DB

    - by justSteve
    I need to automate the creation of a duplicate DB from the .bak of my production DB. I've done the operation plenty of times via the GUI, but when executing from the command line I'm a little confused by the various switches, in particular the file names and being sure ownership is correctly replicated. I'm just looking for the T-SQL syntax for RESTORE that accomplishes that. Thanks.
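
    A minimal sketch of the usual pattern, with hypothetical names and paths: ask the .bak for its logical file names first, then restore under the new database name, moving the physical files so they don't collide with production; ownership does not travel inside the backup, so reset it afterwards if it matters:

    -- 1) note the LogicalName values reported here
    RESTORE FILELISTONLY FROM DISK = N'D:\Backups\Prod.bak';

    -- 2) restore to a duplicate database, relocating both files
    RESTORE DATABASE [ProdCopy]
        FROM DISK = N'D:\Backups\Prod.bak'
        WITH MOVE N'Prod_Data' TO N'D:\Data\ProdCopy.mdf',   -- logical names from step 1
             MOVE N'Prod_Log'  TO N'D:\Data\ProdCopy_log.ldf',
             RECOVERY, STATS = 10;

    -- 3) the restoring login becomes the owner; change it explicitly if needed
    ALTER AUTHORIZATION ON DATABASE::[ProdCopy] TO [sa];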

    Read the article

  • How to restore a very large .bak file (180 GB) in SQL Server 2008

    - by Umutos
    Hello! I have a very large .bak file (180 GB) that was created by Microsoft SQL Server 2008, and I have to restore it. I first installed Microsoft SQL Server 2008 Express and tried to restore it in SQL Server Management Studio Express, but it didn't work because there is a size limit. Does anybody know a method for restoring the file? It's the first time I have worked with Microsoft SQL Server, and I have no clue what to do. It's really urgent, and I would be grateful for any help! Thanks a lot! Umutos

    Read the article

  • Why won't Ruby recognize Haml under 64-bit Ubuntu while using the Jekyll static blog generator?

    - by oldmanjoyce
    I have been trying, quite unsuccessfully, to run henrik's fork of the Jekyll static blog generator on 64-bit Ubuntu. I just can't seem to figure this out, and I've tried a bunch of different things. Originally I posted this over at Stack Overflow, but this is probably the better spot for it. The base stats of my machine: Ubuntu 9.04, 64-bit, ruby 1.8.7 (2008-08-11 patchlevel 72) [x86_64-linux], rubygems 1.3.1. When I attempt to build the site, this is what happens:

    $ jekyll --pygments
    Configuration from ./_config.yml
    Using Sass for CSS generation
    You must have the haml gem installed first
    Using rdiscount for Markdown
    Building site: . -> ./_site
    /home/chris/.gem/gems/henrik-jekyll-0.5.2/bin/../lib/jekyll/core_ext.rb:27:in `method_missing': undefined method 'header' for #, page=# ..... cut ..... (NoMethodError)
    from (haml):9:in `render'
    from /home/chris/.gem/gems/haml-2.2.3/lib/haml/engine.rb:167:in 'render'
    from /home/chris/.gem/gems/haml-2.2.3/lib/haml/engine.rb:167:in 'instance_eval'
    from /home/chris/.gem/gems/haml-2.2.3/lib/haml/engine.rb:167:in 'render'
    from /home/chris/.gem/gems/henrik-jekyll-0.5.2/bin/../lib/jekyll/convertible.rb:72:in 'render_haml_in_context'
    from /home/chris/.gem/gems/henrik-jekyll-0.5.2/bin/../lib/jekyll/convertible.rb:105:in 'do_layout'
    from /home/chris/.gem/gems/henrik-jekyll-0.5.2/bin/../lib/jekyll/post.rb:226:in 'render'
    from /home/chris/.gem/gems/henrik-jekyll-0.5.2/bin/../lib/jekyll/site.rb:172:in 'read_posts'
    from /home/chris/.gem/gems/henrik-jekyll-0.5.2/bin/../lib/jekyll/site.rb:171:in 'each'
    from /home/chris/.gem/gems/henrik-jekyll-0.5.2/bin/../lib/jekyll/site.rb:171:in 'read_posts'
    from /home/chris/.gem/gems/henrik-jekyll-0.5.2/bin/../lib/jekyll/site.rb:210:in 'transform_pages'
    from /home/chris/.gem/gems/henrik-jekyll-0.5.2/bin/../lib/jekyll/site.rb:126:in 'process'
    from /home/chris/.gem/gems/henrik-jekyll-0.5.2/bin/jekyll:135
    from /home/chris/.gem/bin/jekyll:19:in `load'
    from /home/chris/.gem/bin/jekyll:19

    I added spaces to the left of the ClosedStruct to enable better visibility - sorry that my inline html/formatting isn't perfect. I also cut out some middle text that is just data.

    $ gem list
    *** LOCAL GEMS ***
    actionmailer (2.3.4)
    actionpack (2.3.4)
    activerecord (2.3.4)
    activeresource (2.3.4)
    activesupport (2.3.4)
    classifier (1.3.1)
    directory_watcher (1.2.0)
    haml (2.2.3)
    haml-edge (2.3.27)
    henrik-jekyll (0.5.2)
    liquid (2.0.0)
    maruku (0.6.0)
    open4 (0.9.6)
    rack (1.0.0)
    rails (2.3.4)
    rake (0.8.7)
    rdiscount (1.3.5)
    RedCloth (4.2.2)
    stemmer (1.0.1)
    syntax (1.0.0)

    Some output for path verification:

    $ echo $PATH
    /home/chris/.gem/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games
    $ which haml
    /home/chris/.gem/bin/haml
    $ which jekyll
    /home/chris/.gem/bin/jekyll

    Read the article

  • SQL Server 2005 to 2008 .bak file help please!

    - by Brandon
    I have a SQL Server 2005 database backup that I want to transfer to SQL Server 2008 on my server. I spent 3 days transferring the .bak file from my own machine to my server. I then tried to restore the .bak file and got an error. I then read online about a completely different method for adding a SQL Server 2005 database to SQL Server 2008: the detach-and-attach method, which means I detach the database in SQL Server 2005, transfer the MDF file via FTP to my server, and then attach it in SQL Server 2008. Well, I have already used a lot of bandwidth transferring the .bak file to my server. Is there a way to convert the .bak file that is already on my server to an MDF file and attach it in SQL Server 2008?

    Read the article

  • How to solve a CUDA crash when running the CUDA example fluidsGL?

    - by sam
    I use Ubuntu 12.04 64-bit with a GTX 560 Ti. I installed CUDA by following these instructions:

    wget http://developer.download.nvidia.com/compute/cuda/4_2/rel/toolkit/cudatoolkit_4.2.9_linux_64_ubuntu11.04.run
    wget http://developer.download.nvidia.com/compute/cuda/4_2/rel/drivers/devdriver_4.2_linux_64_295.41.run
    wget http://developer.download.nvidia.com/compute/cuda/4_2/rel/sdk/gpucomputingsdk_4.2.9_linux.run
    chmod +x cudatoolkit_4.2.9_linux_64_ubuntu11.04.run
    sudo ./cudatoolkit_4.2.9_linux_64_ubuntu11.04.run
    echo "/usr/local/cuda/lib64" > ~/cuda.conf
    echo "/usr/local/cuda/lib" >> ~/cuda.conf
    sudo mv ~/cuda.conf /etc/ld.so.conf.d/cuda.conf
    sudo ldconfig
    echo 'export PATH=$PATH:/usr/local/cuda/bin' >> ~/.bashrc
    chmod +x gpucomputingsdk_4.2.9_linux.run
    ./gpucomputingsdk_4.2.9_linux.run
    sudo apt-get install build-essential libx11-dev libglu1-mesa-dev freeglut3-dev libxi-dev libxmu-dev gcc-4.4 g++-4.4
    sed 's/g++ -fPIC/g++-4.4 -fPIC/g' ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk > ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk.bak; mv ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk.bak ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk
    sed 's/gcc -fPIC/gcc-4.4 -fPIC/g' ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk > ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk.bak; mv ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk.bak ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk
    sed 's/-L$(SHAREDDIR)\/lib/-L$(SHAREDDIR)\/lib -L\/usr\/lib\/nvidia-current/g' ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk > ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk.bak; mv ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk.bak ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk
    sed 's/-L$(SHAREDDIR)\/lib -L\/usr\/lib\/nvidia-current $(NVCUVIDLIB)/-L$(SHAREDDIR)\/lib $(NVCUVIDLIB)/g' ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk > ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk.bak; mv ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk.bak ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk

    After I run ~/NVIDIA_GPU_Computing_SDK/C/bin/linux/release/fluidsGL it gets stuck; even the mouse and keyboard stop responding. How can I solve this? Thank you~

    Read the article

  • Windows Server 2008 CMD Task Scheduler task not running

    - by Jonathan Platovsky
    I have a BAT/CMD file that runs completely when run from the command prompt. When I run it through the Task Scheduler, it only partially runs. Here is a copy of the file:

    cd\sqlbackup
    ren Apps_Backup*.* Apps.Bak
    ren Apps_Was_Backup*.* Apps_Was.Bak
    xcopy /Y c:\sqlbackup\*.bak c:\sqlbackup\11\*.bak
    xcopy /y c:\sqlbackup\*.bak \\igweb01\c$\sqlbackup\*.bak
    Move /y c:\sqlbackup\*.bak "\\igsrv01\d$\sql backup\"

    The last two lines do not run when the Task Scheduler calls the file, but again, they work when run manually from the command line. All the commands against the local server run, but the last two lines, which go to other servers, do not.
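
    A hedged guess, since only the UNC steps fail: a task that runs as Local System (or without stored credentials) has no rights on the \\igweb01 and \\igsrv01 administrative shares, while everything local still works. One sketch of a fix, assuming a hypothetical MYDOMAIN\svc-backup account that can reach both shares; registering the task under it stores credentials so the network copies run unattended:

    rem register the batch file to run daily at 02:00 under a domain account
    rem (schtasks prompts for the account's password because of /RP *)
    schtasks /Create /TN "SqlBackupCopy" /TR "C:\sqlbackup\backup.cmd" /SC DAILY /ST 02:00 /RU MYDOMAIN\svc-backup /RP *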

    Read the article

  • What is the best way to change the replication scheme of 2 currently replicated slaves?

    - by mmattax
    I have MySQL replication set up in production as follows:

    DB1 -> DB2
    DB1 -> BAK

    where DB2 and BAK are slaves of DB1. All 3 servers are in sync (0 seconds behind the master) and have 30+ GB of data. I'd like to put the servers in a new master-slave configuration as follows:

    DB1 -> DB2 -> BAK

    What is the best way to change the master host on BAK? Is there a way to avoid having to stop the slave thread on DB2 and take a mysqldump for BAK (a 5-6 hour process)?
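
    A hedged sketch of the usual re-pointing procedure, with placeholder host names and coordinates. One prerequisite worth stressing: DB2 must run with log-bin and log_slave_updates, otherwise the writes it replicates from DB1 never reach its own binary log, and BAK would silently stop receiving them. Since all three servers are at 0 seconds behind, you can briefly pause writes, note DB2's position, and re-point BAK without any mysqldump:

    -- on DB2: confirm binary logging and note the current position
    SHOW MASTER STATUS;   -- e.g. File: mysql-bin.000042, Position: 107 (placeholders)

    -- on BAK: re-point replication at DB2
    STOP SLAVE;
    CHANGE MASTER TO
        MASTER_HOST     = 'db2.example.com',   -- placeholder host
        MASTER_USER     = 'repl',              -- placeholder replication account
        MASTER_PASSWORD = '...',
        MASTER_LOG_FILE = 'mysql-bin.000042',
        MASTER_LOG_POS  = 107;
    START SLAVE;
    SHOW SLAVE STATUS\G   -- Seconds_Behind_Master should return to 0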

    Read the article

  • Full/Differential backup - what's used to determine the differential backup content?

    - by gernblandston
    Let's say I have a 'MyDB' SQL Server 2005 database (simple recovery) in which I do a full backup on Sunday and differentials every other night:

    BACKUP DATABASE [MyDB] TO DISK = N'c:\Database Backups\MyDB\MyDB_Full.bak' WITH NOFORMAT, INIT, NAME = N'MyDB.BAK', SKIP, NOREWIND, NOUNLOAD, STATS = 10

    and

    BACKUP DATABASE [MyDB] TO DISK = N'c:\Database Backups\MyDB\MyDB_Diff.bak' WITH NOINIT, DIFFERENTIAL, NAME = 'MyDB.BAK', STATS = 10

    What does the differential backup process use to decide what data gets backed up on the differential nights? Does it need the MyDB_Full.bak file to do its business? If I wanted to save disk space, could I zip up the MyDB_Full.bak file to a .zip file after it's created without adversely affecting the differential backups, and, if I needed to restore, just unzip the full backup before starting?
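
    For context, a hedged note on the mechanism: SQL Server tracks changed extents in differential changed map (DCM) pages inside the database itself, so the differential backup never reads MyDB_Full.bak, and zipping it away between backups is safe; the full backup is needed again only at restore time. A minimal restore sketch under that assumption:

    -- restore the Sunday full first, leaving the database ready for more backups
    RESTORE DATABASE [MyDB]
        FROM DISK = N'c:\Database Backups\MyDB\MyDB_Full.bak'
        WITH NORECOVERY;

    -- apply the latest differential and bring the database online
    -- (the diff file uses NOINIT, so FILE = <n> may be needed to pick the right backup set)
    RESTORE DATABASE [MyDB]
        FROM DISK = N'c:\Database Backups\MyDB\MyDB_Diff.bak'
        WITH RECOVERY;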

    Read the article

  • Does SQL Server Maintenance Cleanup consider exact times when deleting old backups?

    - by Heinzi
    Let's say I have a daily maintenance task that backs up all databases and then removes backups that are older than 3 days. Now let's say the first backup on day 1, starting at 10:00, results in the following files:

    db1.bak 2012-01-01 10:04
    db2.bak 2012-01-01 10:06

    Now let's say on day 4 the first step of the maintenance task (backing up the DBs) happens to finish at 10:05. Will SQL Server delete db1.bak and keep db2.bak (which would be logical, but might surprise the user), or keep both, or remove both?

    Read the article

  • ack misses results (vs. grep)

    - by techpeace
    I'm sure I'm misunderstanding something about ack's file/directory ignore defaults, but perhaps somebody could shed some light on this for me:

    mbuck$ grep logout -R app/views/
    Binary file app/views/shared/._header.html.erb.bak.swp matches
    Binary file app/views/shared/._header.html.erb.swp matches
    app/views/shared/_header.html.erb.bak: <%= link_to logout_text, logout_path, { :title => logout_text, :class => 'login-menuitem' } %>
    mbuck$ ack logout app/views/
    mbuck$

    Whereas...

    mbuck$ ack -u logout app/views/
    Binary file app/views/shared/._header.html.erb.bak.swp matches
    Binary file app/views/shared/._header.html.erb.swp matches
    app/views/shared/_header.html.erb.bak
    98:<%= link_to logout_text, logout_path, { :title => logout_text, :class => 'login-menuitem' } %>

    Simply calling ack without options can't find the result within a .bak file, but calling with the --unrestricted option can find the result. As far as I can tell, though, ack does not ignore .bak files by default.

    Read the article

  • Permanently load a module

    - by Radu
    I have a Compaq Presario CQ-61 320SQ and I am using Ubuntu 10.04, because after updating to 10.10 my mouse and touchpad wouldn't work, the network wouldn't work, sound wouldn't work... (I managed to fix most of these after almost a month of googling, but not all; my 2 desktops have no problem with 10.10), so I decided to switch back to 10.04, where I have one problem: my broadband speed is very low because of the kernel module r8169. I downloaded the correct module, r8101, and every time the computer boots an rc.local entry fixes this. Question: can I load the module permanently from a specific location? I have heard about /etc/modules, but there I only supply the module name, whereas I have to load mine from a specific path (where is the default path for that?). Thank you.

    So I studied the script: it creates the file r8101.ko in /lib/modules/`uname -r`/kernel/drivers/net, so I think that as long as nobody deletes that file and I don't update the kernel, adding r8101 to /etc/modules may work, with r8169 added to the blacklist... I will give it a try.

    EDIT2: So I added r8101 to /etc/modules and blacklisted r8169 in /etc/modprobe.d/blacklist.conf. It still uses the old module; lsmod prints:

    radu@adu:~$ lsmod | grep r8
    r8101    67626    0
    r8169    34108    0
    mii      4381     1 r8169

    EDIT: the module is loaded using this script that came with it:

    #!/bin/sh
    # invoke insmod with all arguments we got
    # and use a pathname, as insmod doesn't look in . by default
    TARGET_PATH=/lib/modules/`uname -r`/kernel/drivers/net
    echo
    echo "Check old driver and unload it."
    check=`lsmod | grep r8169`
    if [ "$check" != "" ]; then
        echo "rmmod r8169"
        /sbin/rmmod r8169
    fi
    check=`lsmod | grep r8101`
    if [ "$check" != "" ]; then
        echo "rmmod r8101"
        /sbin/rmmod r8101
    fi
    echo "Build the module and install"
    echo "-------------------------------" >> log.txt
    date 1>>log.txt
    make all 1>>log.txt || exit 1
    module=`ls src/*.ko`
    module=${module#src/}
    module=${module%.ko}
    if [ "$module" == "" ]; then
        echo "No driver exists!!!"
        exit 1
    elif [ "$module" != "r8169" ]; then
        if test -e $TARGET_PATH/r8169.ko ; then
            echo "Backup r8169.ko"
            if test -e $TARGET_PATH/r8169.bak ; then
                i=0
                while test -e $TARGET_PATH/r8169.bak$i
                do
                    i=$(($i+1))
                done
                echo "rename r8169.ko to r8169.bak$i"
                mv $TARGET_PATH/r8169.ko $TARGET_PATH/r8169.bak$i
            else
                echo "rename r8169.ko to r8169.bak"
                mv $TARGET_PATH/r8169.ko $TARGET_PATH/r8169.bak
            fi
        fi
    fi
    echo "Depending module. Please wait."
    depmod -a
    echo "load module $module"
    modprobe $module
    echo "Completed."
    exit 0
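
    A hedged sketch of how to make this stick without rc.local, assuming the vendor script's r8101.ko stays at /lib/modules/$(uname -r)/kernel/drivers/net: blacklist the in-tree driver, register the replacement, re-index the module tree, and rebuild the initramfs. The last step matters because r8169 is typically loaded from the initramfs early in boot, which is why a blacklist alone can appear to be ignored (as in EDIT2 above):

    echo "blacklist r8169" | sudo tee -a /etc/modprobe.d/blacklist.conf
    echo "r8101" | sudo tee -a /etc/modules   # load the replacement at boot
    sudo depmod -a                            # re-index /lib/modules for the new .ko
    sudo update-initramfs -u                  # keep the initramfs from loading r8169 first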

    Read the article

  • Real tortoises keep it slow and steady. How about the backups?

    - by Maria Zakourdaev
      … Four tortoises were playing in the backyard when they decided they needed hibiscus flower snacks. They pooled their money and sent the smallest tortoise out to fetch the snacks. Two days passed and there was no sign of the tortoise. "You know, she is taking a lot of time", said one of the tortoises. A little voice from just outside the fence said, "If you are going to talk that way about me, I won't go."

    Is it too much to ask of a quite expensive 3rd-party backup tool to be way faster than the SQL Server native backup? Or at least to save a respectable amount of storage by producing really smaller backup files? By saying "really smaller", I mean at least getting a file half the size.

    After Googling the internet in an attempt to understand what other "SQL people" are using for database backups, I see that most people are using one of three tools, which are the main players in the SQL backup area:

    LiteSpeed by Quest
    SQL Backup by Red Gate
    SQL Safe by Idera

    The feedback about those tools is truly emotional and happy. However, while reading the forums and blogs I have wondered whether many simply became accustomed to using the above tools back in the SQL 2000 and 2005 days. This can easily be understood: a 300GB database backup, for instance, using a regular SQL 2005 backup statement would have run for about 3 hours and produced a ~150GB file (depending on the content, of course). Then you take a 3rd-party tool which performs the same backup in 30 minutes, resulting in a 30GB file, leaving you speechless; you run to management persuading them to buy it because it is definitely worth the price. In addition to the increased speed and disk space savings, you would also get backup file encryption and virtual restore, features that are still missing from SQL Server. But if you, like me, don't need these additional features and only want a tool that performs a full backup MUCH faster AND produces a far smaller backup file (like the gain you observed back in the SQL 2005 days), you will be quite disappointed. The SQL Server backup compression feature has totally changed the market picture.

    Medium-size database. Take a look at the table below; check out how my SQL Server 2008 R2 compares to the other tools when backing up a 300GB database. It appears that, when talking about backup speed, SQL 2008 R2 compresses and performs the backup in overall times similar to all three other tools. The 3rd-party tools' maximum compression levels take twice as long. The backup file gain is not that impressive, except at the highest compression levels, but the price you pay is a very high CPU load and a much longer run time. Only SQL Safe by Idera was quite fast at its maximum compression level, but for most of the run time it used 95% CPU on the server. Note that I used two types of destination storage, SATA (11 disks) and FC (53 disks); obviously, on the faster storage the backup was ready in half the time. Looking at the above results, should we spend money and bother with another layer of complexity and a software middleman for medium-sized databases? I'm definitely not going to do so.

    Very large database. As the next phase of this benchmark, I moved to a 6-terabyte database, which was actually my main backup target. Note how using multiple files enables the SQL Server backup operation to use parallel I/O and remarkably increases its speed, especially when the backup device is heavily striped. SQL Server supports a maximum of 64 backup devices for a single backup operation, but the most speed is gained when using one file per CPU (in the case above, 8 files for a 2 Quad CPU server). The impact of additional files is minimal. However, SQL Safe doesn't show any speed improvement between 4 files and 8 files. Of course, with such huge databases every half percent of compression translates into noticeable numbers. Saving almost 470GB of space may turn the backup tool into quite a valuable purchase. Still, the backup speed and high CPU load are variables that should be taken into consideration. As for us, backup speed is more critical than storage, and we cannot allow a production server to sustain 95% CPU for such a long time. Bottom line: 3rd-party backup tool developers, we are waiting for a breakthrough release. There are a few unanswered questions, like the restore speed comparison between different tools and the impact of multiple backup files on the restore operation. Stay tuned for the next benchmarks.

    Benchmark server: SQL Server 2008 R2 SP1, 2 Quad CPU. Database location: NetApp FC 15K aggregate, 53 discs.

    Backup statements: No matter how good a tool's UI is, we need to run the backup tasks from inside SQL Server Agent to make sure they are covered by our monitoring systems. I used extended stored procedures (command-line execution is also an option; I haven't noticed any impact on backup performance).

    SQL Server native backup:
    backup database <DBNAME> to disk= '\\<networkpath>\par1.bak', disk= '\\<networkpath>\par2.bak', disk= '\\<networkpath>\par3.bak' with format, compression

    LiteSpeed:
    EXECUTE master.dbo.xp_backup_database @database = N'<DBName>', @backupname = N'<DBName> full backup', @desc = N'Test', @compressionlevel = 8, @filename = N'\\<networkpath>\par1.bak', @filename = N'\\<networkpath>\par2.bak', @filename = N'\\<networkpath>\par3.bak', @init = 1

    SQL Backup (Red Gate):
    EXECUTE master.dbo.sqlbackup '-SQL "BACKUP DATABASE <DBNAME> TO DISK= ''\\<networkpath>\par1.sqb'', DISK= ''\\<networkpath>\par2.sqb'', DISK= ''\\<networkpath>\par3.sqb'' WITH DISKRETRYINTERVAL = 30, DISKRETRYCOUNT = 10, COMPRESSION = 4, INIT"'

    SQL Safe (Idera):
    EXECUTE master.dbo.xp_ss_backup @database = 'UCMSDB', @filename = '\\<networkpath>\par1.bak', @backuptype = 'Full', @compressionlevel = 4, @backupfile = '\\<networkpath>\par2.bak', @backupfile = '\\<networkpath>\par3.bak'

    If you still insist on using 3rd-party tools for the backups in your production environment at maximum compression level, you will definitely need to consider limiting CPU usage, which will increase the backup operation time even more:

    RedGate: use the THREADPRIORITY option (values 0-6)
    LiteSpeed: use @throttle (a percentage, like 70%)
    SQL Safe: the only thing I have found was the @Threads option.

    Yours, Maria

    Read the article

  • Wubi install: How do I increase swap size

    - by Diogenes Lantern
    I am trying to increase the swap file size on my Wubi install. I followed the answer here:

    sudo su
    swapoff -a
    cd /host/ubuntu/disks/
    mv swap.disk swap.disk.bak
    dd if=/dev/zero of=swap.disk bs=1024 count=2097152
    mkswap swap.disk
    swapon -a
    free -m

    until I reached:

    mv swap.disk swap.disk.bak

    at which point I got the following:

    root@ubuntu:/host/ubuntu/disks# mv swap.disk swap.disk.bak
    mv: cannot move `swap.disk' to `swap.disk.bak': Operation not permitted

    My 256 MB of swap space is all used up. I would like a total of twice that. Is there a method of setting it that would not involve guesswork on my part?
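
    A hedged reading of that failure: "Operation not permitted" on the NTFS-backed /host usually means the swap file is still in use, so it is worth proving that swapoff actually released it before touching the file. A minimal sketch; as an alternative (hypothetical file name), a second 256 MB swap file alongside the first reaches the 512 MB total without moving the busy one:

    sudo swapoff -a
    cat /proc/swaps    # must list nothing before swap.disk can be moved or replaced

    cd /host/ubuntu/disks/
    sudo dd if=/dev/zero of=swap2.disk bs=1024 count=262144   # 256 MB, hypothetical name
    sudo mkswap swap2.disk
    sudo swapon swap2.disk    # add an /etc/fstab entry to make it permanent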

    Read the article

  • SQL Server: restoring a backup results in an error

    - by Mario
    I have a database in dev (SQL Server 2005 on Windows Server 2008) that I need to move to prod (SQL Server 2000 on Windows Server 2003). My process is as follows:

    1. Log in to dev and open SQL Server Management Studio.
    2. Right-click the database | Tasks | Backup. Keep all default options (full backup etc.).
    3. Move the .bak file locally to prod (no network drive), log in to prod, and open SQL Server Enterprise Manager.
    4. Right-click the Databases node | All Tasks | Restore database.
    5. Change "Restore as database" to reflect the same database name.
    6. Click the 'From device' radio button.
    7. Click 'Select Devices'.
    8. Click Restore from: Add..., and browse to the .bak file (small, only 6 MB).

    Now I am ready to restore the database, so I click OK and get the following error: "The media family on device 'E:...bak' is incorrectly formed. SQL Server cannot process this media family. RESTORE DATABASE is terminating abnormally." This error is immediate. I have tried a few different variations of this: restoring the DB to the dev machine with a different DB name and log file names (where it originated), creating an empty database with the same physical path to the files before trying to restore to it, and making a few different .bak files, making sure they are verified before uploading them to prod. I know for a fact that the directory for the .mdf and .ldf files exists on prod, though the files themselves don't exist. If, before I click OK to restore, I go to the Options tab instead, I get the following error: Error 3241: The media family on device 'E:...bak' is incorrectly formed. SQL Server cannot process this media family. RESTORE FILELIST is terminating abnormally. Anyone have any bright ideas?

    Read the article

  • How to convert Windows filenames (from a checksums.md5) to *nix notation so I can use it on my shell with md5sum?

    - by Somebody still uses you MS-DOS
    I have some checksums.md5 verification files from an NTFS external drive, but they use Windows notation: \ instead of /, spaces between file names (not escaped), and reserved shell characters (like (, &, and ', to name a few). The checksums.md5 file has a bunch of checksums and filenames:

    ;Created by program
    ;2010
    f12f75c1f2d1a658dc32ca6ef9ef3ffc *My Windows & Files (2010)\[bak]\testing.wmv
    53445e1a0821b790872e60bd7a166887 *My Windows Files' 2 (2012)\[bak]\testing.wmv
    53445e1a0821b790872e60bd7a166887 *My Windows Files ˜nicóde (2012)\[bak]\testing.wmv
    ;Finished

    I want to use this checksums.md5 to verify the files that I've copied to my machine, but I'm on Linux, so I need to convert the names inside checksums.md5 from Windows to Linux notation to use the md5sum utility from the shell. The first line in my example would become:

    f12f75c1f2d1a658dc32ca6ef9ef3ffc My\ Windows\ \&\ Files\ \(2010\)/\[bak\]/testing.wmv

    Is there an application for this (converting a file listing from Windows cmd notation to Linux shell notation), or will I need to create a bash script using sed that just "replaces" what is "wrong" in the filenames?
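
    A minimal sketch under two assumptions: GNU md5sum, which reads each line literally as "hash *filename" (so the shell escaping in the attempt above is unnecessary), and a checksums.md5 that may carry Windows CRLF line endings. Dropping the ';' comment lines, stripping any trailing CR, and flipping the backslashes is usually all the conversion needed:

    sed -e 's/\r$//' -e '/^;/d' -e 's,\\,/,g' checksums.md5 > checksums.unix.md5
    md5sum -c checksums.unix.md5

    If the accented names ('˜nicóde') still fail to match, that points to an encoding mismatch between the NTFS names and the Linux locale; convmv can rename the copied files to UTF-8.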

    Read the article

  • EXECUTE master.dbo.xp_delete_file folder permission issue

    - by Alex
    I'm trying to run a Maintenance Cleanup Task to remove .bak files older than 2 days (simple enough). I've been trying all varieties of .bak, BAK, and *.*, and editing the path, but the files are still not getting removed even though I receive a "job succeeded" log message. I'm now at the point where I believe it's a folder permission issue. How do I make sure my SA has the proper permissions to remove files from a folder? Thanks.
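
    Two hedged suspects, based on common xp_delete_file gotchas rather than anything in the log: the Maintenance Cleanup Task wants the file extension without a leading dot ('bak', not '.bak'), and the delete runs as the SQL Server service account, so that Windows account (not the sa login) needs Modify rights on the folder in NTFS. A minimal sketch of the underlying (undocumented) call, with a hypothetical path:

    EXECUTE master.dbo.xp_delete_file
        0,                          -- 0 = backup files
        N'D:\Backups\',             -- hypothetical backup folder
        N'bak',                     -- extension WITHOUT the dot
        N'2012-01-01T00:00:00',     -- cutoff: delete .bak files older than this
        1;                          -- 1 = also check first-level subfolders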

    Read the article
