Search Results

Search found 19701 results on 789 pages for 'disk images'.


  • How can I upgrade from Ubuntu Intrepid Ibex 8.10 to Jaunty 9.04 when old-releases no longer has the necessary packages?

    - by tommy chheng
    I changed my sources.list to:

        deb http://old-releases.ubuntu.com/ubuntu/ intrepid main restricted universe multiverse
        deb http://old-releases.ubuntu.com/ubuntu/ intrepid-updates main restricted universe multiverse
        deb http://old-releases.ubuntu.com/ubuntu/ intrepid-security main restricted universe multiverse

    I tried installing update-manager-core with sudo apt-get install update-manager-core, but I get this error:

        1 upgraded, 3 newly installed, 0 to remove and 40 not upgraded.
        Need to get 2506kB/2555kB of archives.
        After this operation, 4346kB of additional disk space will be used.
        Do you want to continue [Y/n]? Y
        Err http://old-releases.ubuntu.com intrepid-updates/main update-manager-core 1:0.93.34 404 Not Found
        Err http://old-releases.ubuntu.com intrepid-security/main dpkg 1.14.20ubuntu6.3 404 Not Found
        Failed to fetch http://old-releases.ubuntu.com/ubuntu/pool/main/d/dpkg/dpkg_1.14.20ubuntu6.3_amd64.deb 404 Not Found
        Failed to fetch http://old-releases.ubuntu.com/ubuntu/pool/main/u/update-manager/update-manager-core_0.93.34_amd64.deb 404 Not Found
        E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?

    Running apt-get update or --fix-missing returns the same errors. How can I successfully upgrade from Intrepid Ibex to Jaunty 9.04?
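
    One approach that is sometimes suggested for end-of-life releases is to skip the intrepid pools entirely and point sources.list straight at the jaunty pools on old-releases, then dist-upgrade. A rough sketch, assuming those jaunty pools are still present on old-releases.ubuntu.com (worth checking the URLs in a browser first):

        # /etc/apt/sources.list (sketch)
        deb http://old-releases.ubuntu.com/ubuntu/ jaunty main restricted universe multiverse
        deb http://old-releases.ubuntu.com/ubuntu/ jaunty-updates main restricted universe multiverse
        deb http://old-releases.ubuntu.com/ubuntu/ jaunty-security main restricted universe multiverse

        sudo apt-get update
        sudo apt-get dist-upgrade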

    Read the article

  • Where did my backup files go? Can they be recovered?

    - by Ken
    I just purchased a Western Digital Essential SE 1TB external hard drive from Best Buy at their recommendation. I then exchanged it for a Toshiba Canvio (I think that was the name). I have a Toshiba Qosmio X505-Q898. The Canvio locked up my computer, rewrote some kind of OS file, and erased all the restore points as well as the system image backup (according to Best Buy) just by plugging it in for the first time. Never even got to the install part or anything -- plugged it in and fried my computer. They spent about an hour and a half on my computer, got it back to a somewhat working condition and gave me access to my files. So now they say I have to back it up using my recovery disk and rewrite my OS. Enter the Essential. Brought it home last night, plugged it in and installed everything. Works perfect, no problems. Backed up everything on it. I unplugged and replugged it twice to make sure that everything was on it. The Essential told me it had both the HDD and SSD backed up. So I reinstalled my OS. Plugged the Essential in and everything loads right up. Went to retrieve my files and the Western Digital has nothing on it. It shows all my music, pics, etc. as still being on my computer and needing to be backed up, but there are no files on my computer now. Where is this information coming from, and where did my files go? It's about 810GB worth of files I've amassed over several years. Is there any way to recover data from this? I plan to contact Western Digital and Best Buy, but I thought I would check here too. Any advice will be appreciated, as a lot of these files are invaluable to me.

    Read the article

  • Photoshop CS5 performance over network drive (cifs)

    - by grub
    Hello everyone. I installed a QNAP TS-410 NAS for a customer (a professional photographer) with three Hitachi Deskstar 7200rpm 2TB disks configured as RAID5. The NAS and the workstations are connected over a Gigabit network. He and his co-worker access the photos (about 1TB of them) over a mapped network drive from their Windows machines (Windows XP 32-bit and Windows 7 Ultimate 32-bit). Both use Photoshop CS5 to edit the photos. The problem is that saving an edited photo takes a really long time, about three times as long as opening it. After some tests I can exclude the network, the NAS and the Windows machines as the source of the issue. I think the problem is the Photoshop software and its handling of network drives; officially, network drives are not supported by Adobe. I do not have any experience with Adobe products, especially Adobe Photoshop CS5. What are your recommendations to solve the performance issue? Should my customer copy the photos to the local drive, edit them and upload them again to the network drive, or is Adobe Drive or Adobe Version Cue the answer? One requirement is that the photos need to be accessible/editable from both computers even when one of them is offline. Adobe Version Cue needs a dedicated service running to be usable, so that solution is not possible as far as I understand the Cue software. Thank you for your input and have a nice day :-) Greetings grub
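
    If the copy-locally workflow turns out to be the practical answer, a minimal sketch of keeping a local working copy in sync with the NAS from Windows might look like this. The share and folder names are made up; robocopy is built into Windows 7 and available for XP via the Windows Server 2003 Resource Kit:

        :: pull the current job to a local scratch folder before editing
        robocopy \\qnap\photos\current-job D:\work\current-job /E /XO

        :: after editing in Photoshop, push only the newer files back
        robocopy D:\work\current-job \\qnap\photos\current-job /E /XO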

    Read the article

  • Encrypted Windows 7 & Linux Advice Wanted

    - by Miles
    I would like to set up my laptop to dual boot Arch Linux and Windows 7, with file sharing and encryption. I just wanted some advice on going about this, because I have not dealt with encryption or file sharing before. I have two 500GB hard drives, and this is my plan:
    1) Install Windows 7 across both hard drives.
    2) Use a live CD to wipe out the Windows boot loader and replace it with GRUB Legacy.
    3) Use the live CD to wipe out the second hard drive and resize the Windows partition located on the first hard drive.
    4) Install Arch Linux alongside Windows 7 on the first hard drive, with all remaining space going to the home folder as ext2.
    5) Install TrueCrypt and Ext2Fsd.
    Concerns: Is this the most efficient way to share files between both OSes, or should I just be using NTFS to store all my data? How would file permissions work when sharing files between Windows and Linux? Is there a high likelihood of corruption, and how easy is it to back up files from an encrypted disk? Is there anything I should look out for, such as a conflict between GRUB and TrueCrypt? Thank you for any advice, and feel free to post any links you might find useful to me. I am trying to plan this out so I can minimize downtime, as I do not want to spend more than a night on this, nor do I want to run into a major problem some time in the future.
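
    If the NTFS route wins out for the shared data, a sketch of how that partition could be mounted from Arch with ntfs-3g; the device name and mount point below are assumptions, and the uid/gid/umask options sidestep most of the permission-mapping questions by making one Linux user own everything on the share:

        # /etc/fstab on Arch (sketch; check the real device with lsblk or fdisk -l)
        /dev/sda3  /mnt/shared  ntfs-3g  defaults,uid=1000,gid=1000,umask=022  0  0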

    Read the article

  • Why does dd finish instantly when piping to cat?

    - by agsamek
    I start bash on Cygwin and type:

        dd if=/dev/zero | cat /dev/null

    It finishes instantly. When I type:

        dd if=/dev/zero > /dev/null

    it runs as expected and I can issue killall -USR1 dd to see the progress. Why does the former invocation finish instantly? Is it the same on a Linux box?

    Explanation of why I asked such a stupid question, and a possibly not so stupid question: I was compressing hdd images and some were compressed incorrectly. I ended up with the following script showing the problem:

        while sleep 1 ; do killall -v -USR1 dd ; done &
        dd if=/dev/zero bs=5000000 count=200 | gzip -c | gzip -cd | wc -c

    wc should print 1000000000 at the end. The problem is that it does not on my machine:

        bash-3.2$ dd if=/dev/zero bs=5000000 count=200 | gzip -c | gzip -cd | wc -c
        13+0 records in
        12+0 records out
        60000000 bytes (60 MB) copied, 0.834 s, 71.9 MB/s
        27+0 records in
        26+0 records out
        130000000 bytes (130 MB) copied, 1.822 s, 71.4 MB/s
        200+0 records in
        200+0 records out
        1000000000 bytes (1.0 GB) copied, 13.231 s, 75.6 MB/s
        1005856128

    Is it a bug, or am I doing something wrong once again?
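
    For what it's worth, the first behaviour has a simple explanation: cat was given /dev/null as a filename argument, so it never reads its stdin at all; it prints the (empty) contents of /dev/null and exits, the pipe closes, and dd is killed by SIGPIPE as soon as it writes to the closed pipe. The behaviour is the same on Linux. A quick sketch of the difference:

        # cat has a filename argument, ignores the pipe and exits at once;
        # dd then gets SIGPIPE and stops
        dd if=/dev/zero | cat /dev/null

        # to stream through cat, let it read stdin and redirect its output
        dd if=/dev/zero | cat > /dev/null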

    Read the article

  • Restrict computers or users from the Internet but allow access to the intranet and Windows Update / ePO?

    - by MoSiAc
    This may be impossible, but I've been asked to try to find something about it, and so far nothing I have found is workable. I need to restrict specific machines or user accounts from regular Internet access but let them have access to the intranet portion of our network. I do not have Active Directory control, nor does anyone at my local workplace (corporate control is in a different state). I have tried going through IPsec and doing this per local machine, but that system seems to have been removed from the images that are installed on these machines, so that is out. So far the only other option I can think of is assigning the machines a specific IP address and removing their gateway access. This would probably work, but the machines need to be able to receive updates that are being pushed to them through ePO and LanDesk. I would really like to do this at the user level, because then if I need to do tech work on the machine and need Internet access I can get to it, but a "special" user could log in and not be able to get into anything.
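
    One per-user idea that might be worth testing: point the restricted account's WinINET proxy at a dead address, so browsers that honour the per-user proxy cannot reach the Internet while direct intranet, ePO and LanDesk traffic is untouched (the agents normally do not use the per-user proxy, but that is worth verifying in your environment). The proxy address and bypass list below are made-up examples, and a user who can edit their own Internet settings can undo it:

        :: run once in the restricted user's session (HKCU is per-user)
        reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings" /v ProxyEnable /t REG_DWORD /d 1 /f
        reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings" /v ProxyServer /t REG_SZ /d "127.0.0.1:1" /f
        reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings" /v ProxyOverride /t REG_SZ /d "*.intranet.example.com;<local>" /f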

    Read the article

  • WSUS performance for unneeded updates

    - by mhouston100
    We have a WSUS server serving around 300 PCs and a couple of dozen servers, and a discussion came up at work as to which products to include. We have a single SQL 2005 instance on one of the servers and it has NEVER been updated. My first thought was to just tick the box for SQL 2005 and let WSUS do its thing, upgrading to the latest service pack at least. One of the other guys here has the opinion that having updates that are relevant to only a small selection of hosts would affect the performance of WSUS as a whole, claiming that each update does a 'check' against all the hosts or something similar. My argument is that manually updating these servers is obviously not working, as the admins are not paying attention to what is needed. So my question is: Do updates that only affect a subset of the hosts affect the overall performance of the WSUS server in relation to ALL the hosts? (Disk space is not an issue at this point.) Is there any performance justification for or against manually updating small numbers of products? Basically I'm needing a rebuttal against his argument, and I'm unable to find any concrete documentation to prove him wrong.

    Read the article

  • Back up an existing Linux server to a VirtualBox virtual machine

    - by user146526
    I have some servers and VPSs with many companies across the world. I want to back them up locally. I have some backup solutions enabled to remote hosts, but I want to have a local backup on a computer at home. What I am thinking is:
    1) Create a VirtualBox virtual machine and install the same version of Linux as the server.
    2) Use rsync to back up the server to the local VirtualBox machine, something like:

        rsync -av --delete --progress --exclude '/dev/' --exclude '/proc/' root@server_ip:// /

    3) Repeat the command every few days to update the files.
    4) In case of a hard disk failure, or any other bad event, reverse the rsync command, get the files back and continue my business.
    I tried it with two OpenVZ VPSs, where one was a backup of the other. I also tried to transfer a normal Linux server host to an OpenVZ machine and it worked great. That way looks pretty clean and easy to me; this is the kind of solution I am looking for. However I need to be sure that this will work if I am going to do it. The question is, will that work OK? Does anyone see any problem with that? Do you have any other suggestions? Thanks
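
    For reference, a slightly more defensive version of the command in step 2, with more of the pseudo and volatile filesystems excluded and numeric UIDs/GIDs preserved. This is only a sketch that assumes the same pull-into-the-VM's-root approach as above; you would probably also want to keep the VM's own /boot and /etc/fstab out of the sync, since its disk layout differs from the server's:

        rsync -aAX --numeric-ids --delete --progress \
            --exclude='/dev/*' --exclude='/proc/*' --exclude='/sys/*' \
            --exclude='/tmp/*' --exclude='/run/*' --exclude='/mnt/*' \
            --exclude='/media/*' --exclude='/lost+found' \
            root@server_ip:/ /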

    Read the article

  • Apache2 403 permission denied on Ubuntu 12.04

    - by skeniver
    I have a sub-directory in my /var/www folder called prod, which is password protected. It was all working fine until I asked my server admin to help me set up allow-all access to one particular file. Now the entire folder is just giving me a 403 error. This is the sites-enabled file:

        <VirtualHost *:80>
            ServerAdmin [email protected]
            # Server name
            ServerName prod.xxx.co.uk
            DocumentRoot /var/www/prod

            <Directory /var/www/prod>
                Options Indexes FollowSymLinks MultiViews +ExecCGI Includes
                AllowOverride None
                Order allow,deny
                AuthType Basic
                AuthName "Please log in"
                AuthUserFile /home/ubuntu/.htpasswd
                Require valid-user
            </Directory>

            <Directory /var/www/prod/xxx/cgi-bin/api.pl>
                Allow from All
                Satisfy Any
            </Directory>

            ScriptAlias /xxx/cgi-bin/ /var/www/prod/xxx/cgi-bin/

            ErrorLog ${APACHE_LOG_DIR}/prod.xxx.error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/prod.xxx.access.log combined
        </VirtualHost>

    Now he's unsure why this is blocking me out completely. No permissions have been changed, but this is the /var/www/ folder:

        4 drwxr-xr-x 2 root root     4096 Jan  3 21:10 images
        4 drwxr-sr-x 4 root www-data 4096 Mar 31 14:47 jslib
        4 drwxr-xr-x 7 root root     4096 Jun  2 13:00 prod

    When I try to visit http://prod.xxx.co.uk, I don't get asked for the password; I just get 403'd. I hope I've given enough information... Is anyone able to spot something he can't?
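
    In case it helps the discussion: under Apache 2.2, "Order allow,deny" with no Allow line denies every host, and the default "Satisfy All" then returns 403 before authentication is even attempted, which matches the no-password-prompt symptom. A sketch of the two directory blocks with that assumption fixed (a <Files> block is normally the better match for a single script, since <Directory> matches directories):

        <Directory /var/www/prod>
            Options Indexes FollowSymLinks MultiViews +ExecCGI Includes
            AllowOverride None
            Order allow,deny
            # without an Allow line, "allow,deny" denies every host
            Allow from all
            AuthType Basic
            AuthName "Please log in"
            AuthUserFile /home/ubuntu/.htpasswd
            Require valid-user
        </Directory>

        # host check OR password for the API script, so it skips the prompt
        <Directory /var/www/prod/xxx/cgi-bin/api.pl>
            Allow from all
            Satisfy Any
        </Directory>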

    Read the article

  • How do you back up your own files? [on hold]

    - by Antonis Christofides
    I'm a system administrator and I use rsnapshot to back up some servers, and duplicity for some others. Both work fine, each one with advantages and disadvantages. Despite that, I am at a loss on how to back up my own private files. I'd use duplicity to automatically back up my files to a remote server; but the problem is that once in a while I must do a full backup. My emails and important files are 9G, and I expect this to increase. Uploading through aDSL at 1Mbit would take 20 hours. Too much. rsnapshot doesn't require periodic full backups (only the first time), but it must be running on the remote server and have a means to connect to my computer; if the server is compromised (or simply if the NSA decides to use it), my own machine is also compromised. Not good. The only solution I've come up with is to use encfs, use unison to synchronize the files to a remote server, and use duplicity or rsnapshot on the remote server to back up these files. In that case, the question is whether I can sync the files on many computers; is it possible for encfs to be used with the same key on many computers? I also think that if I append one character to the unencrypted file, its encrypted encfs counterpart might change a lot, so that incrementals with duplicity would be less efficient, but that's not a big deal. Maybe also, when I need to restore a file, finding the correct file to restore could be a pain, because of filename encryption. I wonder whether there is any other possibility that I've overlooked. Maybe I'm asking too much for my personal use, and I should settle for an external disk?
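
    One variant of the encfs idea that might avoid double-encrypting on the server: encfs has a --reverse mode that presents an encrypted view of a plain directory, so only ciphertext ever leaves the machine and rsync or duplicity can run against that view. A minimal sketch with made-up paths; the .encfs6.xml config file carries the password-protected key material, so sharing it between machines is what would let several computers use the same encryption, though that is worth testing before relying on it:

        # present an encrypted, read-only view of ~/private
        encfs --reverse ~/private ~/private-ciphered

        # push only ciphertext to the remote server; --reverse is designed so
        # the ciphertext stays stable and rsync incrementals keep working
        rsync -av --delete ~/private-ciphered/ user@backuphost:backups/private/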

    Read the article

  • Computer not finding hard drives on boot -sometimes-

    - by todd.pund
    Computer specs: Mobo: Gigabyte Ultra Durable 3 - GA-970A-UD3; Processor: first-gen i7 3.2GHz; RAM: 8GB Kingston DDR3 1066; Video card: EVGA NVidia GTX 460 1GB; Hard drives: 500MB 7200rpm x2 (can't remember the brand, sorry, I'm at work). Last week my developer preview of Windows 8 ran out, so I put my copy of Windows 7 back on the computer. The computer at that point started suffering from frequent freezing and crashing. When I rebooted the computer, sometimes it wouldn't find the system HD at all. When I looked at the POST screen it seemed to show that it wasn't finding either of the HDs. Then yesterday when turning on the computer I just got GRUB as a message (not a GRUB prompt, just GRUB). I haven't had a dual boot of Linux for at least a year. I loaded the Windows 7 recovery console from the disk and ran:

        bootrec /fixboot
        bootrec /fixmbr

    which did not help. At that point I just installed Ubuntu 13.04 over the Windows 7 install and still received the GRUB message at POST. I went into the BIOS and switched the hard drive priorities, and then it loaded into Ubuntu fine. For several days everything was just hunky dory, until I installed the Ubuntu version of Steam, installed Portal and tried to run it. At that point the computer froze, and after hard rebooting it couldn't find the hard disks again. Then after restarting the system it loaded up fine again and there have been no issues since (I have not tried to launch Portal again). My next thought is to remove the system hard drive and try to use the secondary as the master, to see if the primary HD is bad. I'm sorry if this has been confusing; I'll answer any questions I can. Any thoughts?

    Read the article

  • What's throttling the database?

    - by Troels Arvin
    Hardware: Intel x86_64 with 192GB of RAM. OS: CentOS 5.4 x86_64. DBMS: DB2 v9.7.1, 64-bit. During certain special workloads (e.g. parallel REORGs/RUNSTATS), I've seen the server transporting 450MB/s at 25000 IO/s (yes, there is probably some storage-system caching happening here) while all CPU cores were happily working in an even mix of usermode/wait. And disk benchmark tools can also bring some very satisfying bandwidth and IO/s numbers to the table. On the other hand, we also have another scenario: a single rather complex query with at least one large table scan. db2's "list applications" reports that the query is Executing (not locked). IO: at most 10MB/s, 500 IO/s; CPU: two cores in 99.9% wait state, all other cores 100% idle. The tables which the query reads from have been altered to have LOCKSIZE=TABLE, so I would think that lock list work is zero. What's going on in such a situation? What tools/snapshots/... can I use to gain better insight in such a case?
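
    Not an answer, but a sketch of the kind of first-pass probes that might narrow it down (the database name is a placeholder): the OS side shows whether the disks are truly idle or saturated with small synchronous reads, and the DB2 side shows what the agents are doing and how the buffer pools behave during the slow scan:

        # OS level: per-device utilisation, await and request sizes
        iostat -xk 5
        vmstat 5

        # DB2 level: application, buffer pool and agent activity
        db2 get snapshot for applications on MYDB
        db2 get snapshot for bufferpools on MYDB
        db2pd -db MYDB -applications -agents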

    Read the article

  • Bad HD and robocopy

    - by acidzombie24
    After my HD gave me CRC errors, I wanted to copy one drive to another, so I picked up a new 1TB HD. I am using the command:

        robocopy G: J: /MIR /COPYALL /ZB

    First it tried copying the file a few times (I didn't count; it's not in my window anymore) and got an access denied error, error 5. Then it tried again and locked up. I tried copying that specific file (14MB) and Windows says "can't read from source file or disk". I started robocopy again. Hopefully it will ignore it after a failed attempt or two, but what options can I use to say: if it doesn't work, continue to the next file? It looked like that is what it was doing, but for this last one it repeated more than 4 times and finally locked up. I'm open to other copy solutions, though I prefer built-in ones. I am using Windows 7. Also, how might I do this without the /MIR option; is /S /E good enough? Flag reference here. -edit- I see I can control retries with /R: but I am still open to alternative solutions.
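
    For reference, the retry behaviour is what makes unreadable files hang: robocopy's defaults are /R:1000000 retries with a /W:30 second wait. A sketch with those clamped, a log so failures can be reviewed afterwards, and an exclusion for a file already known to be bad; note that /E is /S plus empty directories and, unlike /MIR, never deletes anything on the destination:

        :: one retry, one-second wait, keep a log and keep going
        robocopy G: J: /MIR /COPYALL /ZB /R:1 /W:1 /LOG:C:\robocopy.log /TEE

        :: same idea without mirroring, skipping a known-unreadable file
        :: (badfile.dat is a placeholder name)
        robocopy G: J: /E /COPYALL /ZB /R:1 /W:1 /XF badfile.dat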

    Read the article

  • Ubuntu 9.04: Ripping CDs with grip?

    - by chris
    I tried to rip a CD tonight, and couldn't figure out how to configure grip - /dev/cdrom doesn't seem to be the mount point for music CDs any more. How can I configure grip to find CDs?

    Update: /etc/fstab has

        /dev/scd0 /media/cdrom0 udf,iso9660 user,noauto,exec,utf8 0 0

    But there's nothing visible in /media/cdrom0 (or /media/cdrom, which is a symlink to cdrom0). There's an icon on the desktop labeled "Audio Disk" and opening it shows the .wav files on the CD. The location is cdda://sr0/, but grip doesn't like that either. Trying to manually mount /dev/sr0, I get:

        $ sudo mount -t auto /dev/sr0 foo/
        mount: block device /dev/sr0 is write-protected, mounting read-only
        mount: you must specify the filesystem type

    Update 2: Tried to change the media handling preferences (From a file browser, Edit-Preferences, Media, CD Audio) to "Do Nothing". CD still doesn't mount.

    Update 3: With an audio CD in the drive:

        $ ls -l /dev/ | grep cd
        lrwxrwxrwx 1 root root 3 2009-09-15 22:13 cdrom1 -> sr0
        lrwxrwxrwx 1 root root 3 2009-09-15 22:13 cdrw1 -> sr0
        drwxr-xr-x 2 root root 60 2009-09-15 22:13 pktcdvd
        lrwxrwxrwx 1 root root 3 2009-09-15 22:13 scd0 -> sr0
        crw-rw----+ 1 root cdrom 21, 2 2009-09-15 22:13 sg2
        brw-rw----+ 1 root cdrom 11, 0 2009-09-15 22:13 sr0
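
    One thing worth noting here is that audio CDs never appear under the fstab mounts because they carry no filesystem at all; rippers read the raw block device directly. A quick sanity check that the drive and device name are right, before pointing grip's CD-ROM device setting at /dev/sr0 instead of /dev/cdrom (cdparanoia is in the Ubuntu repositories):

        # query the table of contents straight from the raw device
        cdparanoia -Q -d /dev/sr0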

    Read the article

  • What should I encrypt in Debian during install?

    - by ianfuture
    I have seen various guides and recommendations on the web about how best to do this, but nothing that clearly explains the best way and why. As I understand it, part of Debian needs to be left unencrypted on its own partition during install so that it can boot; most info I have seen calls this /boot and sets the boot flag. Next, I believe the best approach is to create another partition out of all the rest of the disk space, encrypt it, create an LVM on top of that, and then within the LVM create my various partitions, name them, and select their size and filesystem type. Can I include /swap in the encrypted LVM part? Is this approach sound? If so, what are the partitions I should use (this is going to be a minimal server install, with a view to installing what I need for a dev server as and when I need it)? Finally, how does the installer know what to put in each partition I define? I appreciate there is more than one question here, but any help and suggestions would be appreciated. If further clarification is needed, please mention it in the comments. Thanks.. Ian
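
    The Debian installer's guided "encrypted LVM" option builds essentially this layout for you, but as a sketch of the moving parts behind the manual route (device names are examples; the installer's partitioner drives the same tools, and yes, swap can live inside the encrypted volume group):

        # /dev/sda1 -> small unencrypted /boot (ext2, boot flag)
        # /dev/sda2 -> one big LUKS container holding an LVM volume group
        cryptsetup luksFormat /dev/sda2
        cryptsetup luksOpen /dev/sda2 crypt

        pvcreate /dev/mapper/crypt
        vgcreate vg0 /dev/mapper/crypt
        lvcreate -L 2G  -n swap vg0
        lvcreate -L 10G -n root vg0
        lvcreate -l 100%FREE -n home vg0

        mkswap    /dev/vg0/swap
        mkfs.ext4 /dev/vg0/root
        mkfs.ext4 /dev/vg0/home

    Each logical volume is then simply assigned a mount point in the partitioner; the installer places files according to those mount points, not according to the volume names.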

    Read the article

  • 1080p monitor: connected through VGA - perfect, HDMI - awful

    - by develroot
    When I connect my 23" monitor (1920x1080) to my PC through HDMI, I encounter some problems. It's not full screen: there are ~1cm black borders on the right and left sides and ~0.5cm black borders on the top and the bottom of the monitor. That's pretty frustrating. I tried adjusting overscan, but I can't manually type the % of overscan that I need. I can only select between, e.g., 8 and 10% in the AMD Vision Engine Control Center, but what I need is 9%. Next, even if I select 10% and the image fits all the corners, after logging off all the settings are lost and I have to do it again and again. Text, images, everything looks blurry. That surprised me a lot; shouldn't HDMI quality be better than VGA's? When connected through HDMI, the text isn't readable. It's like a very low refresh rate, although I'm running at 60Hz. Also the text has something like little shadows, very very annoying. Are there any tips to get the same quality with HDMI as with VGA? (Running on an integrated ATI Radeon HD4200 which, apparently, is the best card I have ever seen in terms of integrated ones.)

    Read the article

  • How is network mounted software executed?

    - by CptSupermrkt
    I would like to understand how network-mounted software works. For example, at my place of work, we have a software server. Each client machine (hundreds of them) automatically mounts directories from the software server on boot. For example, a program like Matlab is installed just once on the software server, but each client machine can start up an instance of Matlab. What is going on under the hood? Let's say I run /opt/bin/matlab and /opt/ is mounted from the software server: what happens when I press Enter to execute matlab on a client machine? The process is on the client machine, and I've already narrowed down that there isn't any implicit or hidden file transfer (i.e. copying Matlab to my machine temporarily for that session) by running Matlab on a computer with nearly zero disk space (i.e. not enough room for the transfer). Since Matlab was installed on the server, how is my client computer executing it? What mechanism is controlling this? What is happening behind the scenes?
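
    A small experiment that makes the mechanism visible: the client's kernel treats the NFS (or CIFS) mount like any other filesystem, so exec() simply maps the binary from the mounted path and fetches its pages over the network as they are touched; nothing is staged to local disk first. Paths and the tool name below are hypothetical:

        # on the client, the share is just another filesystem
        mount -t nfs softwareserver:/export/opt /opt

        # run a program that exists only on the server
        /opt/bin/sometool &

        # the running process's executable still resolves to the mounted path;
        # its pages are demand-loaded over the network, not copied locally
        readlink /proc/$!/exe
        grep sometool /proc/$!/maps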

    Read the article

  • Automatically update SVN repository on another server

    - by Mikey C
    We have 2 Ubuntu web servers, one of which is our staging server (Staging) and the other is our live server (Live). Staging has our Subversion repository, as well as the latest version of our sites on it. Because the SVN server is running on Staging, I've added post-commit hook scripts so that the staging server automatically has the latest code. Easy. However, I'd like one of the repositories on Live to also stay updated. This is a repository of images, PDFs and suchlike. When a team member commits to this, I'd like it to automatically update on the live servers so it can be used in mailings, content managed pages etc. I'd add something to the post-commit to SSH across and update, but for security, we can only SSH from one server to another as user 'commandLine', whereas the 'www-data' user runs the post-commit. I'd rather not run a cron on Live to update every 5 minutes, but I can't see another way of doing it without altering all our user permissions. Any ideas?
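
    If the restriction is on which remote account may be logged into (rather than on which local user may initiate ssh), one pattern worth considering is a forced-command key: give www-data on Staging its own key, and on Live pin that key to a single command in commandLine's authorized_keys, so a compromised www-data can only ever trigger an svn update of that one working copy. A sketch with placeholder hostnames and paths:

        #!/bin/sh
        # hooks/post-commit on Staging (runs as www-data)
        REPOS="$1"
        REV="$2"

        # update Staging's own working copy as before
        svn update -q /var/www/staging/assets

        # kick Live; the key is restricted on the Live side, so whatever
        # command is requested here, only the forced command will run
        ssh -i /var/www/.ssh/assets_deploy_key commandLine@live.example.com

        # on Live, in ~commandLine/.ssh/authorized_keys (one line):
        # command="svn update -q /var/www/assets",no-pty,no-port-forwarding ssh-rsa AAAA... www-data@staging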

    Read the article

  • How to improve Samba performance on VirtualBox machine?

    - by ColinM
    I am running a Windows 7 64-bit host and an Ubuntu 9.04 32-bit guest inside VirtualBox 4.0.0 on a laptop which has internet connectivity via WiFi. The main use is writing code, for which I use NetBeans. My dev environment is the virtual machine, and I use Samba on the VM to share the code directory so that I can use NetBeans on the host as my IDE. Unfortunately NetBeans does a lot of disk access, and the poor Samba performance makes the IDE hardly usable. How can I improve the performance of the Samba share? On my desktop it isn't so bad, but I don't know what the difference would be since they are similar setups (Windows 7 hosts, cloned guests, SSDs, VBox guests using SATA in AHCI mode, etc.). With bridged networking, is the performance between the host and guest limited by the physical hardware (Intel 6200 AGN on the laptop)? I switched to host-only and it didn't seem to improve performance at all. To clarify "bad performance": I used 7-Zip to zip a project directory and got 19kbs to 500kbs depending on the size of the files being zipped. On my desktop it was in the ~10mbs range. Any tips for VirtualBox/Samba configuration to improve the performance? I am using Samba 3.3.2. Hopefully Samba with SMB2 support will be released soon..
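
    A few smb.conf settings that are commonly suggested for workloads with many small reads and writes; treat them as an experiment rather than a fix (they go in the [global] section on the Ubuntu guest, followed by an smbd restart). The other variable worth isolating is VirtualBox's NIC emulation, since switching the guest's adapter type sometimes changes small-I/O throughput more than Samba settings do:

        [global]
            socket options = TCP_NODELAY IPTOS_LOWDELAY
            read raw = yes
            write raw = yes
            # oplocks let the Windows client cache file data aggressively;
            # they are usually on by default, but worth confirming
            oplocks = yes
            level2 oplocks = yes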

    Read the article

  • Ubuntu 12.04 very slow, especially with Android Studio

    - by Nada
    I have an old laptop with the following specification: Memory: 485 MiB; Processor: Genuine Intel CPU T2300 @ 1.66 GHz ×2; OS type: 32-bit; Disk: 78.1 GB. I installed Ubuntu 12.04 LTS on it and noticed that the overall system is very slow in responding. I searched about that on the internet and found some articles talking about how to make Ubuntu 12.04 LTS run fast. I applied everything they said, including downloading the LXDE desktop environment, but there was no difference in the system response time. Then I needed to develop some Android applications, so I downloaded Android Studio (Beta) 0.8.6. The problem became worse than before: whenever I try to open Android Studio the screen is frozen for some minutes, it then takes a long time to load the projects and initialize the workspace, and when I try to move the cursor it moves very slowly. When I tried to run my first application on the AVD it took three hours and still had not run. I deleted Android Studio and installed it again several times, trying to solve the problem, but still nothing changed. Please, if you have any suggestions that may help me make my laptop and Android Studio work faster, I will appreciate it. Thank you in advance.

    Read the article

  • Recommend a UK based VPS host equivalent to Dreamhost [closed]

    - by Pez Cuckow
    I appreciate this question could be considered subjective and argumentative, so could people make recommendations rather than arguing about which is best; I believe the "correct" answer is the one closest to what I am looking for. Basically, I live in the UK but have been using the US-based Dreamhost for about 6 years now, and my web projects are getting to the scale where the websites need to be UK-based to cope with the demand and load. I originally had shared hosting with Dreamhost but upgraded to a VPS a while ago, getting 512MB of RAM and unlimited disk space, bandwidth and domains for $30. Their control panel is a custom, easy-to-use build that they have created in-house and offers features very similar to other web panels (as far as I am aware). So basically my question boils down to: is there anywhere that offers an equivalent package? In all honesty, as long as I have over 50GB HDD space and unlimited domains it doesn't really matter. Are there any VPS providers you would recommend as reliable? I promise to check every link posted; many thanks for your time!

    Read the article

  • Automatic OS reset on a dedicated internet-browsing Windows 7 PC

    - by camelCase
    I have just purchased a new Acer Revo nettop PC for dedicated internet browsing. It will be the only PC on a home network. My original plan was to install one virtual PC for family browsing, another for remote web-based server administration, and ban browser use from the host Windows 7 OS. The idea was that I could recover to a fresh VHD image once a week to eliminate any build-up of malware inside the browser VMs. However, I am now looking for alternative solutions, since the Intel Atom CPU does not have the hardware VT support which Windows Virtual PC requires. Would it be possible to engineer some type of routine overnight host OS wipe and recovery? I guess cyber cafes do something like this? The only user data that would need to be retained across a recovery would be browser bookmarks, but these could be exported to a remote service. Edit 1: I am thinking the OS reset could be done via some disk image recovery process. Edit 2: Just had a brainwave. Routine browsing could be done via the new Google Chrome OS. I have just seen a video of Google Chrome OS booting off a USB pen drive in seconds.

    Read the article

  • Problems creating a functioning table

    - by Hoser
    This is a pretty simple SQL query I would assume, but I'm having problems getting it to work.

        if (object_id('#InfoTable') is not null)
        Begin
            Drop Table #InfoTable
        End

        create table #InfoTable
        (NameOfObject varchar(50), NameOfCounter varchar(50), SampledValue float(30), DayStamp datetime)

        insert into #InfoTable(NameOfObject, NameOfCounter, SampledValue, DayStamp)
        select vPerformanceRule.ObjectName AS NameOfObject,
               vPerformanceRule.CounterName AS NameOfCounter,
               Perf.vPerfRaw.SampleValue AS SampledValue,
               Perf.vPerfHourly.DateTime AS DayStamp
        from vPerformanceRule, vPerformanceRuleInstance, Perf.vPerfHourly, Perf.vPerfRaw
        where (ObjectName like 'Logical Disk' and CounterName like '% Free Space'
               AND SampleValue > 95 AND SampleValue < 100)
        order by DayStamp desc

        select NameOfObject, NameOfCounter, SampledValue, DayStamp
        from #InfoTable

        Drop Table #InfoTable

    I've tried various other forms of syntax, but no matter what I do, I get these error messages.

        Msg 207, Level 16, State 1, Line 10 Invalid column name 'NameOfObject'.
        Msg 207, Level 16, State 1, Line 10 Invalid column name 'NameOfCounter'.
        Msg 207, Level 16, State 1, Line 10 Invalid column name 'SampledValue'.
        Msg 207, Level 16, State 1, Line 10 Invalid column name 'DayStamp'.
        Msg 207, Level 16, State 1, Line 22 Invalid column name 'NameOfObject'.
        Msg 207, Level 16, State 1, Line 22 Invalid column name 'NameOfCounter'.
        Msg 207, Level 16, State 1, Line 22 Invalid column name 'SampledValue'.
        Msg 207, Level 16, State 1, Line 22 Invalid column name 'DayStamp'.

    Line 10 is the first 'insert into' line, and line 22 is the second select line. Any ideas?
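
    One thing that may be relevant (offered as a guess, not a certainty): OBJECT_ID('#InfoTable') looks in the current database, but temp tables live in tempdb, so the DROP never fires; if an older #InfoTable with different columns is still hanging around in the session, the INSERT and SELECT get bound against that stale definition and report invalid column names. A sketch of the usual guard:

        IF OBJECT_ID('tempdb..#InfoTable') IS NOT NULL
            DROP TABLE #InfoTable;

        CREATE TABLE #InfoTable
        (
            NameOfObject  varchar(50),
            NameOfCounter varchar(50),
            SampledValue  float,
            DayStamp      datetime
        );

    Dropping the leftover table once by hand (or simply opening a new connection) should also clear any bad binding before the corrected script runs.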

    Read the article

  • Moving a lot of small files between servers using rsync

    - by Adirael
    Hello guys, I'm moving a lot of files (about 2 million) between two servers in different locations using rsync over ssh. It seems to work fine, but I just realised I'm losing some files in the process. I have server 1, with the original data, and server 2, with the copy. Server 1 runs CentOS 5 and server 2 runs Ubuntu 10. I'm doing the transfer from server 2's command line like this:

        rsync -e ssh -avzn usr@server1:/remote/path /local/path

    I did the first file transfer using tar, but I didn't think of piping it through ssh, and it failed because the disk on server 1 was almost full, so I transferred it anyway (it was about 200GB) and got about 80% of the files. Then I piped another tar with the rest of the files (they're in folders; I have 100 folders with about 30 subfolders each, with files inside) and now I have everything on server 2. I wanted to be sure, so my two options were getting the md5sum of all the files and checking them, or running an rsync on server 2 against server 1, which is what I did. It picked up some of the missing stuff, and now it says there's nothing more to do (DRY RUN). But at least two files are still missing inside a subfolder. I ran that same rsync on that folder, but it's still a dry run. Am I doing something wrong? Thanks, and sorry for the wall of text.
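
    Worth double-checking the -n in the command above: that is rsync's dry-run flag, so the invocation as written only reports what it would copy without transferring anything. Beyond that, a couple of verification passes that show more than a plain run does; -c compares checksums instead of size and mtime (slow but thorough) and --itemize-changes spells out why each file would be transferred. Same placeholder paths as in the question, and watch the trailing slash, since /remote/path and /remote/path/ nest differently on the receiving side:

        # checksum-based dry run, listing exactly what differs and why
        rsync -e ssh -avc --dry-run --itemize-changes usr@server1:/remote/path/ /local/path/

        # independent cross-check: compare file counts on both sides
        ssh usr@server1 'find /remote/path -type f | wc -l'
        find /local/path -type f | wc -l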

    Read the article
