Search Results

Search found 26798 results on 1072 pages for 'difference between detach attach and restore backup a db'.


  • Acer: recovering Windows Vista

    - by Charlie Pigarelli
    My computer's history is very long, even though the computer itself is only 4 years old. A year ago I installed Windows 7 on this Acer M1610, which had Vista before. My technician left me 2 recovery discs for "Acer Vista" before upgrading it to Windows 7. Then the computer had some trouble: the graphics card broke, and we decided to use another computer. Yesterday I had the great idea of merging the two computers into one better one, so I moved the graphics card from the second computer into the Acer, and everything went well. But then the speed troubles it had before came back, so I decided to reinstall the very first Windows: Vista, back again. I booted the computer with those 2 DVD-Rs my technician left me, and at the end of the process it asked me to insert "the backup cd number one or the system disk". I found 2 original Acer "Blank Recovery Disc" DVD-Rs and tried with those: rejected. I tried with empty DVD±Rs: rejected. I tried with CDs: rejected. I don't have any system disc with me, except for those 2 DVD-Rs my technician left me. What am I supposed to do now? I even tried the supposed Alt+F9/F10 shortcuts that should start the recovery without any disc, but nothing happened. PS: the installation cannot complete if I do not insert the right disk. (The recovery discs use Acer eRecovery Management as the recovery software.)


  • Skip Corrupt Revisions During SvnAdmin Load

    - by cisellis
    I have a dump file that I am generating from VSS with the use of the VSS2SVN script. I've tested the generated dump file before and some of the revisions are corrupt for one reason or another (binary data or long path strings seem to be the main culprit). This is fine. In the past I have used svndumpfilter to split the dump file, remove the corrupt revisions and continue to load the repository. It worked but took a lot of manual effort to start the load, hit the bad revision, split the dump file, continue loading the repo, etc. This dump file is pretty large (~5GB) and takes several hours to load. I think I know the answer to this but is there any way to simply tell svnadmin load to keep going and skip corrupt revisions? I know how to verify, backup, etc. the dump file and don't need any of that. I don't care about recovering corrupt revisions. I just want to start the load, walk away, and not worry about checking it every few hours to manually remove the corrupt revisions. Is that possible? Thanks.
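
    As far as I know, svnadmin load (as of 1.6) has no flag to skip bad revisions, so the split-and-continue dance seems unavoidable; what can be automated is the splitting itself. Below is a minimal Python sketch that streams a dump and drops a given set of revisions, honoring the Content-length framing so binary file content is never misread as record headers. The script name and revision numbers are placeholders, and any later copy operation that references a dropped revision will still fail the load.

        # filter_dump.py - drop whole revisions from an svnadmin dump stream.
        # Sketch only: the revision numbers to skip are passed as arguments;
        # copy operations referencing a dropped revision will still break.
        import sys

        def filter_dump(src, dst, skip_revs):
            keep = True
            while True:
                line = src.readline()
                if not line:
                    break
                if line.startswith(b"Revision-number: "):
                    keep = int(line.split()[1]) not in skip_revs
                if keep:
                    dst.write(line)
                if line.startswith(b"Content-length: "):
                    # The header block ends here; copy the length-framed body
                    # verbatim so binary content is never parsed as headers.
                    blank = src.readline()
                    body = src.read(int(line.split()[1]))
                    if keep:
                        dst.write(blank)
                        dst.write(body)

        if __name__ == "__main__":
            bad = {int(r) for r in sys.argv[1:]}
            filter_dump(sys.stdin.buffer, sys.stdout.buffer, bad)

    Usage would be something like python filter_dump.py 1234 5678 < vss.dump > filtered.dump, then svnadmin load as before; whether the resulting gaps in revision numbering matter depends on your copy history, so verify the result on a scratch repository first.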


  • How have I locked myself out of my Ubuntu VPS?

    - by Sanoj
    I have an Ubuntu Server VPS (OpenVZ), and yesterday I installed php-fpm, but I guess something went wrong with the installation, because since then I cannot log in to my server over SSH with PuTTY or WinSCP. The message I get when connecting is Network error: Connection timed out. Immediately after the installation I was not able to use emacs either; I had to re-install it with apt-get install emacs. I have tried clearing the firewall and rebooting the server from my web-based "control panel", but it doesn't help. The commands I used to install PHP-fpm came from Installing PHP 5.3, Nginx And PHP-fpm On Ubuntu/Debian, and I guess it has something to do with these:

        cd /tmp
        wget http://us.archive.ubuntu.com/ubuntu/pool/main/k/krb5/libkrb53_1.6.dfsg.4~beta1-5ubuntu2_i386.deb
        wget http://us.archive.ubuntu.com/ubuntu/pool/main/i/icu/libicu38_3.8-6ubuntu0.2_i386.deb
        sudo dpkg -i *.deb
        sudo echo "deb http://php53.dotdeb.org stable all" >> /etc/apt/sources.list
        sudo apt-get update
        sudo apt-get install php5-cli php5-common php5-suhosin
        sudo apt-get install php5-fpm php5-cgi

    The websites hosted on my server work fine. Has anyone had the same experience, or does anyone know how this could happen? I guess I'll have to re-install Ubuntu Server from my "control panel" now, but I would like to avoid this situation in the future. Finally, I have backups of everything, so nothing is lost if I have to re-install the machine.
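
    A "Connection timed out" (rather than "Connection refused") usually means the packets are being silently dropped, which points at a firewall or routing problem rather than a broken sshd. A quick way to tell the two apart from any outside machine is a raw TCP probe; here is a minimal sketch (the hostname is a placeholder for your VPS address):

        # probe_ssh.py - check whether sshd is reachable at all.
        import socket

        HOST = "vps.example.com"  # placeholder

        try:
            with socket.create_connection((HOST, 22), timeout=5) as s:
                print("reachable:", s.recv(64))  # sshd sends its banner first
        except socket.timeout:
            print("timed out -> packets dropped en route (firewall/routing)")
        except ConnectionRefusedError:
            print("refused -> host reachable, nothing listening on port 22")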


  • 2000 Server, user can't log on

    - by Mike I
    I hope you can help me. I recently upgraded a workstation at my office (to a whole new machine) and ran into a pretty serious problem. Until 5:00 PM Friday, I could access my mail on the Exchange 2000 server. When I shut the old workstation down and put in the new one, I tried to set up an account: I put the server name in the appropriate field, typed my username, and hit Check Names, but my username does not come up. So to troubleshoot (it is also an SMB server), I tried to log on to my file share (my local credentials are the same as the user account's credentials on the server). When I try to log on to the share, I just get the username/password prompt, which I had never gotten before, since the credentials are the same. Again in troubleshooting mode, I tried to log on with my user from another workstation; it still can't authenticate. Every other user can authenticate and load their shares/mailboxes. I have restored Exchange from the backup as of 3 days ago (Thursday), but the exact same issue is still there. I really do not understand what is wrong and what else I can do to troubleshoot. If anyone has some pointers for me, I will surely accept them. Thanks, Mike


  • Ubuntu Server mdadm drbd ocfs2 kvm hangs under heavy file reading

    - by Stefano Annese
    I have deployed four Ubuntu 10.04 servers, coupled two by two in a cluster scenario. On both sides we have software RAID1 disks, DRBD8, and OCFS2, and on top of that several KVM machines run with qcow2 disks. I followed this: Link. Corosync is only used for DRBD and OCFS; the KVM machines are run "manually". When it works, it's fine: good performance, good I/O. But at some point one of the two clusters started hanging. We then tried with just one server turned on, and it hung just the same. It seems to happen when a heavy READ occurs in one of the virtual machines, that is, during an rsync backup. When this happens, the virtual machines are no longer reachable, and the physical server responds to pings with considerable delay, but no screen and no SSH are available. All we can do is force a shutdown (hold the button) and restart, and when it comes back up, the RAID array that DRBD relies on is resyncing. We see this every time it hangs. After a couple of weeks of pain on one side, this morning the other cluster hung as well, even though it has a different motherboard, RAM, and KVM instances. What both have in common is the heavy-read rsync scenario and Western Digital RAID Edition disks. Can anybody give me some input on how to solve this issue?


  • CIFS mounted drive setting "sticky bit" on all files, cannot change permissions or modify files

    - by mattmcmanus
    I have a folder mounted on an Ubuntu 8.10 server via CIFS whose permissions I simply cannot change once it is mounted. Here is a breakdown of what's going on: all files within the mounted folder automatically have their permissions set to -rwxrwSrwx, regardless of whether the file was created on the Windows server or on the Linux machine. I have the same directory mounted on two other Linux servers (both running 9.10 instead of 8.10) with no problems at all. They all use the same fstab options and the same credentials:

        //server/folder /media/backups cifs credentials=/etc/samba/.arcadia_cred,noexec,noserverino 0 0

    I've run chmod a million different ways, all of which report success; however, the permissions never actually change. The issue began after I upgraded from 8.04 to 8.10. Any idea why this may be happening on one machine? Since it started after an upgrade, I'm not sure what the best thing to do is. Any help you could give would be great! None of my automated backup scripts are working because of this!
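
    To help pin down where the bogus mode is being invented, a tiny round-trip test run on both the 8.10 and 9.10 boxes will show whether chmod's "success" ever reaches the server. A minimal sketch; the path is a placeholder for any file on the mount:

        # cifs_mode_test.py - does a mode change round-trip on the CIFS mount?
        # Run on both the 8.10 and 9.10 boxes and compare the output.
        import os
        import stat

        path = "/media/backups/perm_test_file"     # placeholder
        open(path, "w").close()
        os.chmod(path, 0o644)                      # reports success either way
        mode = stat.S_IMODE(os.stat(path).st_mode)
        print("expected 0o644, got %s" % oct(mode))  # stray setgid bit shows up here
        os.remove(path)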


  • Massive SQL issue shutting down our site.

    - by Pselus
    Our website has started timing out like crazy today; all of our clients are finding it unusable. The only error we can seem to trace down as a potential problem is this one, from the Microsoft OLE DB Provider for ODBC Drivers:

        SQLAllocHandle on SQL_HANDLE_DBC failed

    I have no idea what it means or how to go about fixing it. Has anyone encountered this error before? Currently you can log in to our site, but once you go to do anything else, you find yourself logged out, or nothing happens. We have a lot of Ajax going on, so the "nothing happens" probably has to do with the Ajax pages not loading properly due to the logouts, so nothing displays to the user. Like I said, I'm at a loss. Does anyone have any advice on this error? EDIT: I realize that this isn't necessarily a programming question, but we are a small startup company that just yesterday started talking about how we need to get a backup server running. Apparently we talked about it too late. We don't have a DBA, just 2 mid-level programmers trying their hardest to keep our clients happy. So please, if you have any assistance, give it, but please don't close my question right now. EDIT 2: Turns out we had something running on our server called "ServerMask" that makes our IIS server look like Apache to the outside world. Shutting it down fixed our issue. Still no idea why it caused trouble, but it was apparently the problem. Thanks to everyone who tried to help.


  • Why do Microsoft Windows updates take so long to install?

    - by Mathieu Pagé
    Hi, I have a question that is not related to a problem I have, just something I'd like to understand: why are Windows updates so slow? First, Windows Update needs to find which updates you need, and this takes about 5 minutes. What is happening behind the scenes during those 5 minutes? I would have thought it would be enough to compare the updates you already have against the complete list of updates, or to check the version numbers of a couple of files. Then, when it comes time to install the updates, they also take a long time: some 1 MB updates take 2, 3, or 5 minutes to install. What is taking so long? I would have thought it was simply a matter of backing up the old file, uncompressing the new one, and replacing the old one, which should be really fast. Is Windows doing something else? For comparison, under Linux you can find which updates you need in about 20 seconds, and installing them is usually pretty fast (the time it takes to uncompress the files). I can do a complete upgrade of my Linux machine in about 25 minutes (download 600-800 MB of updates, hundreds of them, and install them), while under Windows, 25 minutes is the time it needs just to find which updates are needed and install about 5-10 of them. I just updated a Windows XP Home machine from SP1a to SP3 plus all other updates; it took me more than 3 hours. Doing something like that in the Linux world takes about 30 minutes. I don't want to bash Microsoft here; I genuinely want to know what they do differently that makes it so slow.


  • Git fails to push with error 'out of memory'

    - by jwir3
    I'm using gitosis on a server that has a low amount of memory, specifically around 512 MB. When I try to push a large folder (it happens to be a backup from an Android phone), I get:

        me@corellia:~/Configs/$ git push origin master
        Counting objects: 18, done.
        Delta compression using up to 8 threads.
        Compressing objects: 100% (14/14), done.
        fatal: Out of memory, malloc failed
        MiB | 685 KiB/s
        error: pack-objects died of signal 13
        error: failed to push some refs to 'git@dagobah:Configs'

    I've been searching the web, and notably found http://www.mail-archive.com/[email protected]/msg01747.html as well as http://git.661346.n2.nabble.com/Out-of-memory-error-during-git-push-td5443705.html, but these don't seem to help me, because I am not actually out of memory when I push. When I run top during the push, I get:

        24262 git 18 0 16204 6084 1096 S 2 1.2 0:00.12 git-unpack-obje

    Also, if I run head /proc/meminfo during the push, I get:

        MemTotal:  524288 kB
        MemFree:   289408 kB
        Buffers:        0 kB
        Cached:         0 kB
        SwapCached:     0 kB
        Active:         0 kB
        Inactive:       0 kB
        HighTotal:      0 kB
        HighFree:       0 kB
        LowTotal:  524288 kB

    So it seems I have enough memory free, but the push still fails, and I'm not enough of a git guru to figure out what is happening. I would appreciate it if someone could give me a hand here, tell me what could be causing this problem, and what I can do to solve it. Thanks! EDIT: The output of the ulimit -a command:

        scottj@dagobah:~$ ulimit -a
        core file size          (blocks, -c) 0
        data seg size           (kbytes, -d) unlimited
        scheduling priority             (-e) 0
        file size               (blocks, -f) unlimited
        pending signals                 (-i) 204800
        max locked memory       (kbytes, -l) 32
        max memory size         (kbytes, -m) unlimited
        open files                      (-n) 1024
        pipe size            (512 bytes, -p) 8
        POSIX message queues     (bytes, -q) 819200
        real-time priority              (-r) 0
        stack size              (kbytes, -s) 10240
        cpu time               (seconds, -t) unlimited
        max user processes              (-u) 204800
        virtual memory          (kbytes, -v) unlimited
        file locks                      (-x) unlimited


  • cron doesn't execute any tasks, but logs them as executed

    - by FractalizeR
    I have a strange problem on one of my servers: cron does not execute any tasks, but it writes to its log that the task has been executed successfully, like some simulation mode is activated...

        Apr 30 03:03:08 nd-10049 crond[13387]: (root) CMD (php /usr/local/frb/backup.php)
        Apr 30 03:05:01 nd-10049 crond[13397]: (root) CMD (php /home/support/public_html/cron/cron_hourly.php>/home/support/public_html/cron/hourly.log)
        Apr 30 03:09:01 nd-10049 crond[19108]: (root) CMD (/etc/webmin/cron/tempdelete.pl )
        Apr 30 03:10:01 nd-10049 crond[19467]: (root) CMD (php /home/support/public_html/cron/cron_hourly.php>/home/support/public_html/cron/hourly.log)
        Apr 30 03:14:44 nd-10049 crontab[21154]: (root) BEGIN EDIT (root)
        Apr 30 03:15:01 nd-10049 crond[21309]: (root) CMD (php /home/support/public_html/cron/cron_hourly.php>/home/support/public_html/cron/hourly.log)
        Apr 30 03:15:38 nd-10049 crontab[21154]: (root) REPLACE (root)
        Apr 30 03:15:38 nd-10049 crontab[21154]: (root) END EDIT (root)
        Apr 30 03:16:01 nd-10049 crond[14961]: (root) RELOAD (cron/root)
        Apr 30 03:20:02 nd-10049 crond[22620]: (root) CMD (php /home/support/public_html/cron/cron_hourly.php)

    There are no errors about cron in the common log (messages). The OS is CentOS. What can I do to diagnose this? What could the problem be?
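
    One way to narrow this down is to separate "crond never actually spawns the command" from "the command runs but fails silently" (a PATH or environment difference is a classic cause of the latter, since crond logs CMD when it forks, not when the command succeeds). Point a temporary entry such as * * * * * python /root/cron_probe.py at a trivial probe that leaves a trace; a minimal sketch, with the script and log paths as placeholders:

        # cron_probe.py - point a temporary crontab entry at this script to
        # prove whether crond really spawns anything; it records the time,
        # the uid and the PATH each run sees.
        import datetime
        import os
        import pwd

        with open("/tmp/cron_probe.log", "a") as log:
            log.write("%s uid=%d user=%s PATH=%s\n" % (
                datetime.datetime.now().isoformat(),
                os.getuid(),
                pwd.getpwuid(os.getuid()).pw_name,
                os.environ.get("PATH", ""),
            ))

    If the log stays empty while crond keeps logging CMD lines, the problem is in crond itself (or its PAM setup); if the log fills up, the problem is in the jobs' environment.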


  • Creating a tar file with checksums included

    - by wazoox
    Here's my problem: I need to archive a lot (up to 60 TB) of big files (usually 30 to 40 GB each) into tar files. I would like to make checksums (md5, sha1, whatever) of these files before archiving; however, not reading every file twice (once for checksumming, once for tar'ing) is more or less a necessity to achieve very high archiving performance (LTO-4 wants 120 MB/s sustained, and the backup window is limited). So I need some way to read a file once, feeding a checksumming tool on one side and building a tar to tape on the other side, something along the lines of:

        tar cf - files | tee tarfile.tar | md5sum -

    Except that I don't want the checksum of the whole archive (which is what this sample shell code gives) but a checksum for each individual file in the archive. I've studied the GNU tar, Pax, and Star options. I've looked at the source of Archive::Tar. I see no obvious way to achieve this; it looks like I'll have to hand-build something in C or similar. Perl/Python/etc simply won't cut it performance-wise, and the various tar programs lack the necessary "plugin architecture". Does anyone know of an existing solution to this before I start code-churning?
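
    For what it's worth, the single-read plumbing is easy to express with Python's tarfile module: wrap each file object so that every block tar reads is also fed to the hash, producing an md5sum-compatible list alongside the archive. You've said an interpreted language won't hit 120 MB/s, so treat this purely as a sketch of the structure a C implementation would follow; the output names are placeholders, and files are passed on the command line.

        # tar_md5.py - build a tar archive and a per-file md5 list in one pass.
        import hashlib
        import sys
        import tarfile

        class HashingReader(object):
            """Wraps a file object; hashes every block as tarfile reads it."""
            def __init__(self, fileobj, digest):
                self.fileobj = fileobj
                self.digest = digest
            def read(self, size=-1):
                data = self.fileobj.read(size)
                self.digest.update(data)
                return data

        with tarfile.open("backup.tar", "w") as tar, open("backup.md5", "w") as sums:
            for path in sys.argv[1:]:
                info = tar.gettarinfo(path)
                digest = hashlib.md5()
                with open(path, "rb") as f:
                    tar.addfile(info, HashingReader(f, digest))
                sums.write("%s  %s\n" % (digest.hexdigest(), path))

    Since tarfile pulls the data through HashingReader.read(), each file really is read from disk exactly once, and backup.md5 comes out in a format that md5sum -c can verify later.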


  • What is the fastest way to copy all data to a new larger hard drive?

    - by SUPER user
    I was certain this would have been covered before, but I cannot find an answer amongst all the almost-duplicates that come up; sorry if I've missed something obvious. I have a full 320 GB disk inside my machine, a new 1 TB disk to replace it, and a USB 2.0 chassis. It is only data on a single partition, no OS/apps involved, and the old drive will be kept somewhere as a backup (no secure wiping etc.). The simple option would be to put the new disk in the USB chassis, copy the files, then swap the drives. But for USB pen drives, reading is around 4x faster than writing. If the same is true for a USB SATA chassis (is it?), then it would be significantly faster to swap the drives first and read from the old drive over USB, right? The other consideration is that copying lots of files is usually slower than copying a single file of equivalent size. Is Windows 7 smart enough to do everything in a single lump like that, or is there specialised software that should be used instead? (Even if SATA-to-SATA copying is faster than involving USB, knowing what to do when that isn't an option is useful information.) Summary:

    1. Does a USB SATA chassis suffer from a read/write inequality (like a USB pen drive does, but unlike a direct SATA connection)?
    2. Can Windows 7 do sequential access? (I can't find confirmation that Robocopy does this.)
    3. Or is it necessary to use a bootable CD/USB with something like Clonezilla to achieve sequential copy speeds?
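
    On the "single lump" point: for a file-level copy, what matters is simply that each file is streamed in large sequential chunks rather than read and written in small interleaved pieces. A minimal sketch of that pattern, assuming a plain file-level copy rather than a block-level clone; the drive letters and buffer size are placeholders, and it copies data only (no timestamps or ACLs, which a block-level tool like Clonezilla would preserve):

        # seq_copy.py - file-level copy, one large sequential stream per file.
        import os
        import shutil

        def copy_tree_sequential(src_root, dst_root, bufsize=32 * 1024 * 1024):
            for dirpath, dirnames, filenames in os.walk(src_root):
                rel = os.path.relpath(dirpath, src_root)
                dst_dir = os.path.join(dst_root, rel)
                os.makedirs(dst_dir, exist_ok=True)
                for name in filenames:
                    src = os.path.join(dirpath, name)
                    dst = os.path.join(dst_dir, name)
                    with open(src, "rb") as fs, open(dst, "wb") as fd:
                        shutil.copyfileobj(fs, fd, bufsize)  # big reads, big writes

        copy_tree_sequential("D:\\", "E:\\")  # placeholders: old and new drive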


  • Windows 7 - User profile corrupted on standby/hibernate

    - by Dogbert
    I have a friend who uses Windows 7 for her home PC. She has a RAID1 array using up-to-date Intel Matrix Storage drivers, and the entire array is backed up to a separate internal SATA HDD via Acronis True Image every night. Over the weekend, she lets her machine go into suspend after 4 hours of inactivity, and then into hibernation after 6 hours of inactivity. Her Acronis backup system does nightly incremental backups and a full backup every Saturday night. She also has AVG Free Antivirus installed, which does a full scan every Monday. So far, on two occasions, her user profile has been corrupt on Sunday morning. I couldn't find any solution that allowed me to repair her profile, so I end up having to recreate the profile (as suggested by the MS Knowledge Base, http://windows.microsoft.com/en-us/windows7/fix-a-corrupted-user-profile ; yep, no way to fix it, just clobber the whole thing), then copy over her data, recreate her Outlook profile, reconfigure all third-party applications, etc. It's a real nightmare and takes 4 hours each time. Are there any suggestions on how to resolve this profile corruption? It was happening even before the RAID/Acronis solution was in place, but I thought I'd provide as much information as possible.


  • Can't connect to Synology DiskStation through HTTPS after certificate import on Windows 7

    - by LeonidasFett
    A little background to my problem: I have a Synology DiskStation 213j that I use as a backup/data storage solution. When I'm at work, I would like to push and pull files from my DiskStation, but I can't use a VPN, since outgoing VPN connections are forbidden there. So I wanted to try HTTPS so I can at least connect securely to the web interface. I mostly use Chrome, which uses the Windows certificate store, so I tried importing a self-signed certificate into it, without success: I still get a warning in Chrome telling me the connection is not secure because it can't be verified. When I import the certificate into Firefox, though, it works and I can connect through HTTPS. I checked my domain on this site: http://www.sslshopper.com/ssl-checker.html. It shows no errors, only a warning that the certificate is self-signed, which is OK in this case. Anyone got any idea why importing the certificate into Windows 7 doesn't work? I tried right-clicking the domain.mydomain.de.crt file --> Install Certificate --> Next --> both options here (for "Place certificate in the following store:" I selected "Third Party Root Certificate Authorities"), to no avail.


  • VMDK recovery after migration from ESX 3.5 to 4 and an attempted fallback

    - by olgirard
    Hi, I've tried to migrate some VMs from my ESX 3.5i environment to a brand new vSphere 4.0 U1. The two platforms are running simultaneously, sharing the same SAN. I migrated each VM by stopping it, unregistering it in vCenter (ESX 3.5, which I'll call esx3), registering it in vSphere (ESX 4, which I'll call esx4), and upgrading the virtual hardware before powering it up (first mistake). vMotion was enabled on esx4, which seems to have been a second mistake. After a day or so I encountered problems reaching the esx4 server and decided to unregister my VM from esx4 and fall back to esx3. The VM refused to boot on esx3; I supposed this was due to the virtual hardware being version 7, so I created a new VM pointing to the vmdk of the old one. Everything seemed fine until I logged into the server and discovered that it was running on the original disk, with every snapshot ignored, even those created on esx3. I tried to boot the VM on esx4 again, but it doesn't power up because "The parent virtual disk has been modified since the child was created". As a backup I do have a copy of a later state of the drive, but it was generated between two snapshots (an OVF made with Converter Standalone). Do I have a chance to recover at least some files from the virtual drive, or (as I think) is it all over? I've made enough mistakes for this time. Thanks for your help.


  • Internet-based sync software that will keep running after Windows Live Sync stops doing PC-to-PC syncs?

    - by Warren P
    According to the Wikipedia page, Microsoft Live Sync will shortly stop offering its PC-to-PC sync service. There are lots of apps to sync two PCs on the same LAN, but I want to sync two PCs that are in different cities, across the internet, traversing two different NATs, and that requires some kind of service running on the internet that both machines connect into. There are already a few questions about syncing folders and files, but this is not a duplicate, because none of them answer this basic question: Microsoft Live Sync works better than rsync or any of the linked sync solutions in any of the "not really duplicates" because it works even when the two PCs are behind NATs and firewalls that forbid direct connectivity; Windows Live Sync has a free, always-on internet server that all the client PCs connect into. I'm looking for a FREE (no-fees) Microsoft Live Sync work-alike PC-to-PC sync solution that works at least between PCs and Macs as well as between PCs, and works behind NATs and firewalls at least as well as Microsoft's solution. (Note that Microsoft's solution makes only outbound socket calls to a Microsoft server, so this solution must necessarily include a server-hub component that is hosted publicly on a free site and which does not require that I set up, manage, and pay for my own public internet hosting site.) Hint: none of the answers in the linked duplicate are equivalent (PureSync, FreeFileSync, BestSync 2010, SyncButler, Comodo BackUp, QuickShadow, Gbridge), in that none of them work for the PC-to-Mac situation where firewalls and NATs prevent direct connection, or else they require money to be paid. When Microsoft Live Sync / Live Mesh finally kills direct PC-to-PC mode, the limitation will be that you will have to pay for more than 25 GB of cloud service, and you can then only sync PC #1 to PC #2 by first syncing up to the cloud, then down to the other clients. I can currently sync 100 GB of data from one computer to another, only temporarily "moving the data" through Microsoft's servers, without using up my SkyDrive storage quota.


  • Blu-ray BD-R: Would you physically store it in a CaseLogic Wallet pocket?

    - by Rob
    I keep several backup copies of my material and files. For my DVDs, one set of copies is kept in a CaseLogic wallet folder pack so that I can easily move it around when visiting friends, family, or for business; this is highly convenient. The other sets are kept in their jewel cases in hard plastic see-through storage boxes. Although CaseLogic wallet material is designed to be abrasion-free, their caveat is that external dust will be the cause of any blemishes. If hard dust gets into these pockets, which is inevitable, it will occasionally cause light, hair-like scratches on the disc surface as the discs are removed and returned while accessing their contents. This is of no consequence, as the laser and error correction can more than cope with it. I'm aware that the Blu-ray spec requires anti-scratch measures on disc surfaces, but I was wondering: given the smaller pits, would dust and light scratches from wallet storage cause more problems with Blu-rays than they would with DVDs? I'm using Blu-ray BD-R and BD-R DL write-once media.


  • "Error: Unknown error" when trying to start virtual machine from VMware server

    - by slhck
    Problem We are running VMware Server 2.0.0 build-116503 on a Ubuntu 10.04 LTS server. There is a virtual machine installed, running Lotus Domino on Windows Server 2003. Ever since a sudden power failure last week, the virtual machine won't properly start up. When I run the command: vmrun -T server -h https://127.0.0.1:8333/sdk -u root -p jk2x2208 start "[standard] lotus/test.vmx" … after 30 seconds it displays: Error: Unknown error That's about everything I get. I know the command is right, since that's what we've used all the time. This has happened last Saturday after a scheduled backup shutdown, and somehow I was able to start it again. This week, it happened again, and I can't get it back up. Occasionally, I also get: Error: Cannot connect to the virtual machine When I get this, and I run the start command, it seemingly works. Why is this so random? Which configuration could have been messed up? What I've tried / other info I already shut down VMware itself with /etc/init.d/vmware stop. This works. I tried to start VMware again with /etc/init.d/vmware start. It complains that it's "not configured", which is why I had to rm /etc/vmware/not_configured, and then try to start again. There have been no software updates on the machine, and no configuration changes


  • Assistance setting up a connection from an offsite server to the LAN via RRAS VPN - Server 2008 R2

    - by Paul D'Ambra
    I have an office LAN protected by a Zyxel Zywall USG 300. On it I've set up an L2TP/IPsec VPN that accepts connections using a shared secret, and I've tested this from multiple clients. I have a server offsite and want to set up RRAS with a persistent connection to the VPN so that it can carry out network jobs even with no one logged in (I'm using it for Microsoft DPM secondary backup). If I create a VPN connection as if I were setting up a user's laptop, it can dial in with no problem, but if I set up a demand-dial interface in RRAS, it errors. My steps:

    1. Enable RRAS, ticking only "demand dial interface (branch office routing)".
    2. Select Network Interfaces, right-click and choose "New Demand-dial Interface".
    3. Name the VPN "ToCompany".
    4. Select "Connect using VPN", and then L2TP as the VPN type.
    5. Enter the IP address (double-checked for typos!).
    6. Select "Route IP packets on this interface".
    7. Specify the static route to the remote network as 10.0.0.0/24 with a metric of 1.
    8. Add the dial-out credentials (again double-checked for typos and confirmed against other VPN connections).
    9. Click Finish.

    Now I right-click the new interface and choose Properties; on the Security tab I change data encryption to optional and select only PAP for authentication (both as per the Zywall manufacturer), then click the advanced settings against the type of VPN and set the shared secret. Finally I select the new interface, right-click, and choose Connect. This dials and then errors out with either 720 or 811 as the error code. However, if I create a VPN connection by going to Network & Sharing Center and setting it up as if I were creating a VPN from my laptop to the office (say), it dials successfully. So I know the VPN settings are correct and the machine can connect to the VPN, which suggests very strongly that the problem is how I'm setting up RRAS. Can anyone help?


  • Using a modem for sending a voice recording

    - by ircmaxell
    I've got an interesting one for you. I've been going over my server monitoring and notification systems (Nagios-based) and realized that if our internet connection goes down, there's no way for them to notify me. I already have a modem listening (via CentOS 5) on a spare POTS line so that I can dial in if our internet goes down. I was wondering if I could come up with a script (shell, Python, etc.) that can dial out and play a recorded message (a wave file, I'm guessing) when the call is picked up. I know Windows supports voice calls over a voice modem; I was wondering if a solution exists for Linux. I know Asterisk can probably do it, but isn't that overkill (a full-blown VoIP system just for a notification mechanism that will hopefully never be used)? And wouldn't it interfere with the modem's primary function as a backup network interface (PPP spawned via mgetty)? I've done some searching and haven't really come up with much. I know how to dial out from the command line, but only as a modem (not as voice). Worst case, I could set it up to dial out as a modem and just remember that a call with modem sounds from that number is the notification... Any insight would be appreciated...
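
    For what it's worth, vgetty (the voice companion to the mgetty you're already running) is the usual non-Asterisk answer; it is designed to share the line with mgetty and can play recordings on voice modems. If you want to script the dial-out yourself, the IS-101/V.253 voice AT commands are the relevant piece. A rough pyserial sketch follows; the device, baud rate, and phone number are placeholders, the modem must actually be a voice modem (i.e. accept AT+FCLASS=8), and the audio playback itself (AT+VTX plus chipset-specific PCM encoding) is omitted, since vgetty handles that part far better.

        # voice_dial.py - rough sketch of dialling out in voice mode.
        import time
        import serial  # pyserial

        def cmd(port, s, wait=1.0):
            port.write((s + "\r").encode())
            time.sleep(wait)
            return port.read(port.in_waiting or 1)

        with serial.Serial("/dev/ttyS1", 38400, timeout=2) as modem:
            print(cmd(modem, "ATZ", 2))           # reset
            print(cmd(modem, "AT+FCLASS=8"))      # switch to voice mode
            print(cmd(modem, "ATD5551234;", 15))  # dial; ';' marks a voice call
            # ...on supporting chipsets: AT+VLS / AT+VTX, then stream the message...
            print(cmd(modem, "ATH"))              # hang up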


  • Is it possible to boot Windows 7 by default when your hard drive is partitioned with two OSes?

    - by Muhammad
    I have a PC with a hard drive that's partitioned for both Windows 7 and Ubuntu. I primarily use Windows 7 and only occasionally (once a week) use Ubuntu. When I boot up my computer, I usually get taken to a boot menu that includes about 5 different options: 3 are for Ubuntu's configurations, one is for swap, and one is for Windows 7. After I select Windows 7 or Ubuntu from this menu, I get taken to another menu that again asks me for Windows 7 or Ubuntu; this time there are only 2 options, Windows 7 and Ubuntu. (Side note: from experience I've learned that most boot menus are timed, and so are these.) So if I ever turn my computer on without actually sitting in front of it for a few minutes, it boots into Ubuntu. I'm trying to figure out what I need to do to first get rid of one of the 2 boot menus, and, if possible, I'm looking for help changing my boot options so that Windows 7 loads by default (even with the boot menu wait of about 30 seconds). My hard drive's partitions are laid out like this:

    1. Windows 7 (C partition)
    2. Multimedia (D partition; I just use this for backup/non-OS stuff)
    3. Ubuntu (home directory)
    4. Swap

    Is there any other information I need to provide?


  • Windows XP: How to delete files and folders that cannot be deleted?

    - by glenneroo
    I have a backup copy of a previous Windows' Documents and Settings folder which only contains my original user and, within it, 2 more directories: Favorites and Local Settings. When I try to delete Local Settings I get one error, and when I try to delete Favorites I get another (the error dialogs were screenshots that haven't survived here). I ran this in a cmd shell:

        attrib *.* -r -a -s -h /s

    ...but it did not help, nor did it return any errors/warnings. I used Unlocker v1.8.5 and LockHunter repeatedly at multiple levels to see if any files were in use, but both always say: No Files Locked. Update #1: I was able to rename the directory, which now gives me a warning before (trying to) delete; if I press Yes (or Yes to All), I get another error. Update #2: I let chkdsk /f run, which required a reboot since it's on my primary system partition. During Stage 2 scanning, I received about 40 of these:

        Deleting an index entry from index $0 of file 25.

    ...followed by:

        Deleting index entry cookies in index $I30 of file 37576.

    ...but I still get the first error dialog above when trying to delete. Update #3: Digging deeper, the 99 is the name of one of many directories located deep in here:

        C:\Documents and Settings.OLD\User\Local Settings\Application Data\Microsoft\Messenger\[email protected]\SharingMetadata\[email protected]\DFSR\Staging\CS{D4E4AE55-B5E2-F03B-5189-6C4DA6E41788}\

    Inside each of those directories were files with names such as:

        2300-{C93D01AC-0739-4FD9-88C7-13D2F21A208E}-v2300-{C93D01AC-0739-4FD9-88C7-13D2F21A208E}-v2300-Downloaded.frx

    I noticed that, unlike all the directories, I couldn't rename any of these files. I also noticed that the file and directory names were extremely long: the original directory is 194 characters and the filenames are 100+ characters, so together the length exceeds the 255-character limit, which is bad and would explain the error message I posted in Update #1. Partial solution: rename all directories until the total path length is less than 100. Afterwards I was able to rename the .frx files, not to mention delete everything inside the Local Settings directory. This is only a partial solution, because this (empty) directory is still undeletable:

        C:\1\2\Favorites\Wien\What To Do..

    I'm guessing that because of the ".." at the end, Windows (Explorer and cmd) can't deal with it. Any ideas?
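
    A suggestion for the leftover "What To Do.." directory: the Win32 path layer silently strips trailing dots, so Explorer and cmd keep operating on "What To Do" (which doesn't exist) instead of the real name. Prefixing the path with \\?\ turns that normalization off; from cmd the usual form of the trick is rd /s /q "\\?\C:\1\2\Favorites\Wien\What To Do..", and the same prefix works from Python, as in this small sketch:

        # rm_trailing_dots.py - remove a directory whose name ends in dots.
        # The \\?\ prefix makes Windows take the path literally (no stripping
        # of trailing dots or spaces), so the real name can finally be
        # addressed. The directory is empty per the above, so rmdir suffices.
        import os

        os.rmdir(r"\\?\C:\1\2\Favorites\Wien\What To Do..")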


  • Ubuntu upgrade process failed

    - by Spin0us
    I tried to dist-upgrade my Ubuntu server in my Percona cluster, but it failed with this message:

        The following packages have unmet dependencies:
         libmysqlclient18 : Depends: libmariadbclient18 (= 5.5.33a+maria-1~precise) but it is not installable

    And here is the package listing:

        # dpkg --list | grep -E 'percona|mysql'
        ii  libdbd-mysql-perl                  4.020-1build2              Perl5 database interface to the MySQL database
        iU  libmysqlclient18                   5.5.33a+maria-1~precise    Virtual package to satisfy external depends
        ii  mariadb-common                     5.5.33a+maria-1~precise    MariaDB database common files (e.g. /etc/mysql/conf.d/mariadb.cnf)
        ii  percona-xtrabackup                 2.1.5-680-1.precise        Open source backup tool for InnoDB and XtraDB
        ii  percona-xtradb-cluster-client-5.5  5.5.31-23.7.5-438.precise  Percona Server database client binaries
        ii  percona-xtradb-cluster-common-5.5  5.5.33-23.7.6-496.precise  Percona Server database common files (e.g. /etc/mysql/my.cnf)
        ii  percona-xtradb-cluster-galera-2.x  157.precise                Galera components of Percona XtraDB Cluster
        ii  percona-xtradb-cluster-server-5.5  5.5.31-23.7.5-438.precise  Percona Server database server binaries
        ii  php5-mysql                         5.3.10-1ubuntu3.8          MySQL module for php5

    During the install of the server, MariaDB and Galera Cluster were installed first, then removed and replaced by Percona XtraDB Cluster, so I think this is the source of the problem. But how can I resolve it without reinstalling everything? UPDATE 1:

        # apt-cache policy libmariadbclient18
        libmariadbclient18:
          Installed: (none)
          Candidate: (none)
          Version table:
             5.5.32+maria-1~precise 0
                100 /var/lib/dpkg/status


  • Fully FOSS email solution

    - by Ravi
    I am looking at various FOSS options to build a robust email solution for a government-funded university. Commercial options are to be chosen only in the worst-case scenario. Here are the requirements:

    - Approx 1000-1500 users - Postfix or Exim? (Sendmail is out ;-))
    - Mailing lists for different groups / need a web-based archive - Mailman? Sympa?
    - Centralised identity store - OpenLDAP? Fedora 389 DS?
    - Secure IMAP only, no POP3 required - Courier? Dovecot? Cyrus?
    - Anti-spam - SpamAssassin? What else?
    - Calendaring - ??
    - Webmail - good to have, not mandatory, but needs to be very secure... so SquirrelMail is out ;-)?

    Other questions:

    - What mailbox storage format to use, and where to store it? Database or file system?
    - Simple and effective HA options?
    - Is there a web proxy equivalent to Squid in the mail server world? Software load balancers? CARP?
    - Monitoring and alerting? Backup?

    The government wants to stimulate the local economy by buying hardware locally from whitebox vendors, and local consultants and university students will do the integration. We looked at out-of-the-box integrated solutions like Axigen, Zimbra and GMail, but each was ruled out in favour of a DIY approach, in the hope of full control over the data and avoiding vendor lock-in - which I thought was a smart thing to do. I wish more provincial governments in the developing world would think of these sorts of initiatives. As for the OS: Debian or FreeBSD would be first preference (commercial OSes need not apply), with CentOS as a second-tier option...


  • Mac SMB connections to a Windows 2003 server leaving open files

    - by Bruce Garlock
    We have several Mac clients (both 10.5 and 10.6) mounting a share from a Windows 2003 server. At least once a day, our archivist goes into this share to archive items from it to the backup server. Most of the time she has no issues: she copies a folder to the archive server and, when it's done, deletes it from the share. Then she will come upon one, and it will say she doesn't have permission. When I go into the open sessions on Windows 2003, it says that a particular user has a READ lock on the file. Of course, this person does not have the file open, and the only way we can delete it is to close the open session on the file. My thoughts:

    1. The Mac likes to "sprinkle" hidden resource-fork files onto SMB servers, and possibly these are left behind when the Mac that last wrote to the share closes the file.
    2. Windows 2003 has a bug and doesn't properly release the oplock on the file?
    3. Steve Ballmer just doesn't like Macs, so he wants to annoy everyone by not releasing file locks :-)

    What can be done about this? It happens every day, and sometimes several times per day! Many thanks, Bruce

