Search Results

Search found 27958 results on 1119 pages for 'failed to load viewstate'.


  • 4.4.1 Timeout in 10 minute intervals SMTP on batch email jobs

    - by TEEKAY
    I am running a job that uses SMTP and can run in excess of an hour, emailing the entire time. It's not my code but a workflow-based app, so I just get a form to configure the mail server, subject, message, etc. and can't see its implementation. I know it is .NET and SmtpClient. I have been seeing 4.4.1 timeouts every 10 minutes, reported by the application as the response from the server. The number of emails in those 10-minute sessions is variable, between 100 and 150, which leads me to ask about the 10-minute timeout specifically. I have found there are several Exchange properties (though I don't know what version they are running) that set timeout limits (http://technet.microsoft.com/en-us/library/bb232205%28v=exchg.150%29.aspx). Would the values for ConnectionInactivityTimeOut and ConnectionTimeout be what is controlling the timeouts? Finally, does Exchange treat the consistent connection(s) it keeps receiving from the same source as one continuous connection, causing the timeout every 10 minutes? I am using the mail server's static IP. Thanks if anyone can shed any light on my problem. EDIT - My belief is that the library just keeps the connections around and isn't wrapped in any cleanup code or using statement. That said, I still haven't made any progress on this issue in the last year and just requeue the failed sends as I see them.

    Read the article

  • Backup data from RAID 1 disk out of its server

    - by Doomsday
    I'm facing what should be a fairly simple problem. I've extracted a working disk from a RAID1 array and want to copy only the data (the filesystem and RAID configuration don't matter) to another location (another filesystem). My problem is that I can't mount this disk properly on another Linux system. I first looked at the partition table:

        # fdisk -l /dev/sdc
        Disk /dev/sdc: 640.1 GB, 640135028736 bytes
        255 heads, 63 sectors/track, 77825 cylinders, total 1250263728 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

        Device Boot      Start          End      Blocks   Id  System
        /dev/sdc1           63   1249535699   624767818+  fd  Linux raid autodetect
        /dev/sdc2   1249535700   1250017649      240975   fd  Linux raid autodetect
        /dev/sdc3   1250017650   1250258624      120487+  82  Linux swap / Solaris

    I understood I should use the dmraid tools. Once installed:

        # cat /proc/mdstat
        Personalities :
        md0 : inactive sdc1[1](S)
              624767744 blocks
        unused devices: <none>

    And some other information:

        # mdadm --examine /dev/sdc1
        /dev/sdc1:
                  Magic : a92b4efc
                Version : 0.90.00
                   UUID : 8f292f54:7e5aef72:7e5ab5fd:b348fd05
          Creation Time : Mon Jun 2 03:39:41 2008
             Raid Level : raid1
          Used Dev Size : 624767744 (595.82 GiB 639.76 GB)
             Array Size : 624767744 (595.82 GiB 639.76 GB)
           Raid Devices : 2
          Total Devices : 2
        Preferred Minor : 0
            Update Time : Tue Feb 7 22:34:59 2012
                  State : clean
         Active Devices : 2
        Working Devices : 2
         Failed Devices : 0
          Spare Devices : 0
               Checksum : a505b324 - correct
                 Events : 15148

              Number   Major   Minor   RaidDevice State
        this     1       8        1        1      active sync   /dev/sda1
           0     0       8       17        0      active sync   /dev/sdb1
           1     1       8        1        1      active sync   /dev/sda1

    From here I tried to mount, but I'm not comfortable with the dm tools and how they work:

        # mount /dev/sdc1 /mnt/sdc1
        mount: unknown filesystem type 'linux_raid_member'
        # mount /dev/md0 /mnt/sdc1
        mount: /dev/md0: can't read superblock

    I've seen options to alter the RAID array with mdadm, but I only want to copy the data off its filesystem before wiping it. Does anyone have a clue?
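
    A minimal sketch of one common approach, assuming the device names from the question: the half-detected, inactive md0 has to be stopped first, then the single member can be assembled as a degraded, read-only array and mounted normally.

        mdadm --stop /dev/md0                              # clear the inactive, half-assembled array
        mdadm --assemble --run --readonly /dev/md0 /dev/sdc1   # --run starts it despite the missing mirror
        mount -o ro /dev/md0 /mnt/sdc1                     # read-only mount; copy the data off from here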

    Read the article

  • Ubuntu server apt-get says "(-5 - No address associated with hostname)"

    - by Srini
    I have an Ubuntu 12.04 server. Running sudo apt-get update on it produces errors like this:

        W: Failed to fetch http://au.archive.ubuntu.com/ubuntu/dists/precise-backports/main/binary-i386/Packages
        Something wicked happened resolving 'au.archive.ubuntu.com:http' (-5 - No address associated with hostname)

    I am able to ping all the other hosts on the network and also Google's DNS at 8.8.8.8, but I am unable to ping www.google.com. So I'm guessing something is wrong with my DNS setup, but I'm not sure what. I use a static IP and my /etc/network/interfaces looks like this:

        auto eth0
        iface eth0 inet static
        address 192.168.1.50
        netmask 255.255.255.0
        network 192.168.1.0
        broadcast 192.168.0.255
        gateway 192.168.1.1
        #dns-nameserver 203.12.160.35 203.12.160.36
        #nameserver 203.12.160.35 203.12.160.36

    My /etc/resolv.conf and /etc/resolvconf/resolv.conf.d/base are both empty, and my /etc/resolvconf/resolv.conf.d/original says:

        nameserver 192.168.1.1

    Any help would be greatly appreciated. P.S. I've googled it a bit and the common resolution is to switch to DHCP, which I don't want to do since this is my home server. Thanks, Srini
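
    A hedged sketch of the usual fix on 12.04 with resolvconf: declare the nameservers in the interface stanza itself (the commented-out line above uses the singular form; the documented keyword is the plural dns-nameservers) and bring the interface back up. The addresses below are illustrative; note also that the posted broadcast address (192.168.0.255) does not match the 192.168.1.0/24 network.

        # add inside the eth0 stanza of /etc/network/interfaces (needs the resolvconf package)
        dns-nameservers 192.168.1.1 8.8.8.8

        # apply and verify what resolvconf generated
        sudo ifdown eth0 && sudo ifup eth0
        cat /etc/resolv.conf
        nslookup au.archive.ubuntu.com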

    Read the article

  • MySQL based authentication with crypt()ed password fails in Apache 2.2

    - by Fester Bestertester
    I'm trying to set up a simple CalDAV/CardDAV server with a Radicale backend and an Apache 2.2 frontend. So far it's all nice and simple, but I can't get the MySQL-based authentication to work. I'd like to authenticate users against an existing MySQL database, and I need the REMOTE_USER variable to be set (pretty much like in the configuration examples for Radicale). I've tried mod_auth_mysql, which authenticated the users nicely but failed to set the REMOTE_USER variable. The newer alternative seems to be mod_authn_dbd, which doesn't seem to like the crypt()ed passwords in the MySQL database. According to the documentation, crypted passwords should work, so maybe I'm just missing a simple parameter. The configuration looks like this:

        DBDriver mysql
        DBDParams "sock=/var/run/mysqld/mysqld.sock dbname=myAuthDB user=myAuthUser pass=myAuthPW"
        <Directory />
            AllowOverride None
            Order allow,deny
            allow from all
            AuthName 'CalDav'
            AuthType Basic
            AuthBasicProvider dbd
            require valid-user
            AuthDBDUserPWQuery "SELECT crypt FROM myAuthTable WHERE id=%s"
        </Directory>

    I've tested the query and it works fine. And as mentioned before, mod_auth_mysql worked nicely against the same database but didn't set the required variables. Am I just missing some configuration parameter? Or is mod_authn_dbd just not the right tool to achieve what I want?
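
    One thing worth checking before blaming the module: mod_authn_dbd passes the value returned by the query to apr_password_validate(), which understands crypt(), $apr1$ (MD5) and {SHA} style hashes but not MySQL's own PASSWORD() or SHA1() hex strings. A hedged way to compare formats (table and column names taken from the question; the test user and password are placeholders):

        # inspect what is actually stored for a test user
        mysql -u myAuthUser -p myAuthDB -e "SELECT id, crypt FROM myAuthTable WHERE id='testuser'"

        # generate reference hashes in formats apr_password_validate can verify
        openssl passwd -crypt 'testpassword'     # traditional 13-character crypt() (older OpenSSL releases)
        htpasswd -nbd testuser 'testpassword'    # -d forces crypt() format, as used in htpasswd files

        # if the stored value looks like '*94BDCEBE...' (MySQL PASSWORD()) or a 40-char SHA1 hex string,
        # Apache cannot verify it and every login will fail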

    Read the article

  • How can I add a second hard drive to a previously configured UEFI/AHCI Windows 8 machine?

    - by pflyer
    I recently purchased a new Windows 8 PC. It came with one hard drive, and I want to add a second one to it. This second hard drive is the data drive from my previous computer. However, I have run into issues when the system accesses it. The drive is found in the BIOS, but it is not seen by Explorer or Disk Management. I have added the drive to the next available SATA slot: SATA 2. The machine is a UEFI/AHCI based machine. In my reading I have found people documenting the following: 1) adding multi-partition hard drives (like mine) to UEFI-based machines is not possible; 2) you can only add blank hard drives to UEFI-based machines. However, that did not work for me either: I tried adding it as a drive with unallocated space and then as a drive with a single simple partition, and both attempts failed. My ultimate question: what is the proper procedure for adding a second hard drive to a UEFI/AHCI machine? I do not want to reinstall the OS and start from scratch, as I have seen suggested elsewhere. There has to be a way to accomplish this without all that hassle. Thanks in advance for your help.

    Read the article

  • Xubuntu stuck after login

    - by viraptor
    How can I debug an issue with Xubuntu 12.04 (fresh install) which just sits idle for about 30 seconds after login? The login screen is displayed correctly. After login I get my desktop background, but no panels or auto-starting apps. It doesn't seem to be an authentication/PAM issue, because I can log in without delay at the console while the graphical session is still stuck. There's no disk or CPU activity and no obvious respawning of any process when I look at htop. There's nothing obviously wrong in .xsession-errors. The most interesting errors:

        openConnection: connect: No such file or directory
        cannot connect to brltty at :0
        WARNING: gnome-keyring:: couldn't connect to: /tmp/keyring-wFn4VR/pkcs11: No such file or directory
        ...
        (polkit-gnome-authentication-agent-1:2131): polkit-gnome-1-WARNING **: Failed to register client:
          GDBus.Error:org.freedesktop.DBus.Error.ServiceUnknown: The name org.gnome.SessionManager was not
          provided by any .service files
        ** Message: applet now removed from the notification area
        ** Message: using fallback from indicator to GtkStatusIcon
        ...
        (xfce4-indicator-plugin:2176): libindicator-WARNING **: IndicatorObject class does not have an accessible description.
        ...
        (xfce4-indicator-plugin:2176): Indicator-Application-WARNING **: Unable to get application list: Operation was cancelled

    Bootchart seems to end before I log in, so it's not that helpful. Where else can I look for information?
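
    A hedged first step rather than a diagnosis: xfce4-session can hang on a stale saved session or a root-owned ICE/X authority file, both of which are cheap to rule out from the working console login. The paths below are the Xfce defaults.

        # move the cached session aside and retry the graphical login
        mv ~/.cache/sessions ~/.cache/sessions.bak
        # a root-owned authority file (e.g. after running GUI apps under sudo) also stalls startup
        ls -l ~/.ICEauthority ~/.Xauthority
        # watch what the session manager is waiting on during the hang
        tail -f ~/.xsession-errors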

    Read the article

  • How to display/define Mirror/Striping pairs with mdadm

    - by Chris
    I want to create a standard Linux software RAID10 over 4 HDDs. The server has 4 HDDs, two pairs from different vendors, in order to avoid batch problems. I want the mirrors to span two different vendors, and then the stripe to run over the mirror pairs. I could do that by manually creating RAID1/0, but mdadm supports RAID level 10 directly. I just can't figure out how the RAID10 is then handled and how the data is distributed.

        mdadm --detail /dev/md10
        /dev/md10:
                Version : 1.2
          Creation Time : Wed May 28 11:06:23 2014
             Raid Level : raid10
             Array Size : 1953260544 (1862.77 GiB 2000.14 GB)
          Used Dev Size : 976630272 (931.39 GiB 1000.07 GB)
           Raid Devices : 4
          Total Devices : 4
            Persistence : Superblock is persistent
            Update Time : Wed May 28 11:06:23 2014
                  State : clean, resyncing (PENDING)
         Active Devices : 4
        Working Devices : 4
         Failed Devices : 0
          Spare Devices : 0
                 Layout : near=2
             Chunk Size : 512K
                   Name : pdwhost:10  (local to host pdwhost)
                   UUID : a3de0ad5:9e694ee1:addc6786:c4449e40
                 Events : 0

            Number   Major   Minor   RaidDevice State
               0       8        1        0      active sync   /dev/sda1
               1       8       81        1      active sync   /dev/sdf1
               2       8       97        2      active sync   /dev/sdg1
               3       8      113        3      active sync   /dev/sdh1

    This doesn't really give any information about that. How it should be: RAID1/mirror over /dev/sda1 + /dev/sdf1 and over /dev/sdg1 + /dev/sdh1, then RAID0 over the two RAID1 pairs. Is it possible to do that with the built-in "level=10", and how can I see which pairs are mirrored? Thanks a lot for your help.
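
    A hedged sketch of how this is usually reasoned about: with the default near=2 layout, md is generally described as keeping the two copies of each chunk on adjacent devices in command-line order, so devices 0+1 form one mirrored pair and 2+3 the other. Listing the drives interleaved by vendor therefore gives vendor-crossing mirrors; the device names below follow the question.

        # one disk from each vendor per adjacent pair (slots 0+1 and 2+3)
        mdadm --create /dev/md10 --level=10 --layout=n2 --chunk=512 --raid-devices=4 \
              /dev/sda1 /dev/sdf1 /dev/sdg1 /dev/sdh1

        # the RaidDevice column in --detail shows which slot (and therefore which pair) each disk occupies
        mdadm --detail /dev/md10 | grep -E 'RaidDevice|/dev/sd'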

    Read the article

  • Can a USB/IDE/SATA adapter be flaky?

    - by Ward
    I use USB/IDE/SATA converters a lot and on the two that I have now, I sometimes get errors copying files to drives. It only happens when I'm copying big files to the drive (big can mean as little as 100MB, I think it happens more often with bigger files - 300MB or more), and basically the copy will fail and I'll get one or more error messages about "Delayed write failed." But if I disconnect the drive and re-connect it, I'll usually be able to continue. (The file that was being copied will be corrupt, but otherwise the drive is fine.) I just noticed a new type of flakiness: the data transfer rate can vary widely. I copied one set of files (5x300MB files) and it took 10+minutes, then I copied another set (approx. the same sizes) and it took less than a minute. I haven't done systematic testing, the other things I'm doing on my laptop at the same time might have some impact, and I haven't cross-checked the two adapters I have and the 3 hard drives I'm working with to see if there's a pattern. I'm more wondering if anyone else has seen anything like this.

    Read the article

  • Can SSH into remote server but can't SCP?

    - by ArtfulDodger2012
    I can SSH into the remote server just fine using private key authentication with a prompt for the passphrase. However, I'm getting permission denied when I try to SCP a file using the same passphrase. Here's my output:

        $ scp -v [file] [user]@[remoteserver.com]:/home/[my dir]
        Executing: program /usr/bin/ssh host [remoteserver.com], user [user], command scp -v -t /home/[my dir]
        OpenSSH_5.3p1 Debian-3ubuntu7, OpenSSL 0.9.8k 25 Mar 2009
        debug1: Reading configuration data /home/[my dir].ssh/config
        debug1: Applying options for [remoteserver.com]
        debug1: Reading configuration data /etc/ssh/ssh_config
        debug1: Applying options for *
        debug1: Connecting to [remoteserver.com] [[remoteserver.com]] port 22.
        debug1: Connection established.
        debug1: identity file /home/[user]/.ssh/aws_corp type -1
        debug1: Remote protocol version 2.0, remote software version OpenSSH_5.3p1 Debian-3ubuntu7
        debug1: match: OpenSSH_5.3p1 Debian-3ubuntu7 pat OpenSSH*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_5.3p1 Debian-3ubuntu7
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: server->client aes128-ctr hmac-md5 none
        debug1: kex: client->server aes128-ctr hmac-md5 none
        debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
        debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
        debug1: Host '[remoteserver.com]' is known and matches the RSA host key.
        debug1: Found key in /home/[my dir]/.ssh/known_hosts:12
        debug1: ssh_rsa_verify: signature correct
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug1: Authentications that can continue: publickey
        debug1: Next authentication method: publickey
        debug1: Trying private key: /home/[my dir]/.ssh/aws_corp
        debug1: PEM_read_PrivateKey failed
        debug1: read PEM private key done: type <unknown>
        Enter passphrase for key '/home/[my dir]/.ssh/aws_corp':
        debug1: read PEM private key done: type RSA
        Connection closed by [remote server]
        lost connection

    I've searched for answers but can't find quite the same problem, or I am just being thick. Either way, any help is much appreciated. Cheers!
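
    Two quick things worth trying, as a sketch rather than a diagnosis (paths follow the sanitized output above): load the key into ssh-agent so scp does not have to re-read the encrypted key each time, and confirm that a plain non-interactive command works cleanly, since scp fails if the remote shell prints anything unexpected or the remote path is wrong.

        eval "$(ssh-agent -s)"
        ssh-add ~/.ssh/aws_corp                               # enter the passphrase once here

        ssh [user]@[remoteserver.com] 'echo ok; which scp'    # must print cleanly, nothing else
        scp -v -i ~/.ssh/aws_corp [file] [user]@[remoteserver.com]:/home/[my dir]/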

    Read the article

  • Discrepancy in file size on disk and ls output

    - by smokinguns
    I have a script that checks for gzipped files larger than 1MB and outputs the files along with their sizes as a report. This is the code:

        myReport=`ls -ltrh "$somePath" | egrep '\.gz$' | awk '{print $9,"=>",$5}'`

        # Count files that exceed 1MB
        oversizeFiles=`find "$somePath" -maxdepth 1 -size +1M -iname "*.gz" -print0 | xargs -0 ls -lh | wc -l`

        if [ $oversizeFiles -eq 0 ];then
            status="PASS"
        else
            status="CHECK FAILED. FOUND FILES GREATER THAN 1MB"
        fi

        echo -e $status"\n"$myReport

    The problem is that the ls command reports the file sizes as 1.0MB, but the status is "FAIL" because the $oversizeFiles variable's value is 2. I checked the file sizes on disk and two files are 1.1MB. Why the discrepancy, and how should I modify the script so that I can generate an accurate report? BTW, I'm on a Mac. Here is what the man page for "find" says on Mac OS X:

        -size n[ckMGTP]
                True if the file's size, rounded up, in 512-byte blocks is n. If n is followed by a c,
                then the primary is true if the file's size is n bytes (characters). Similarly if n is
                followed by a scale indicator then the file's size is compared to n scaled as:
                      k       kilobytes (1024 bytes)
                      M       megabytes (1024 kilobytes)
                      G       gigabytes (1024 megabytes)
                      T       terabytes (1024 gigabytes)
                      P       petabytes (1024 terabytes)
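
    The mismatch is most likely between two different roundings: ls -h rounds for display (a file a few percent over 1 MiB can still show as "1.0M"), while -size +1M matches anything larger than 1 MiB. A hedged rework that compares exact byte counts on both sides, using BSD stat syntax since the question is on a Mac:

        # report exact sizes in bytes and count the same population the check counts
        find "$somePath" -maxdepth 1 -iname '*.gz' -size +1048576c -print0 |
            xargs -0 stat -f '%N => %z bytes'        # BSD/macOS stat; GNU would be: stat -c '%n => %s'

        # or make the report itself byte-accurate instead of relying on ls -lh
        myReport=$(find "$somePath" -maxdepth 1 -iname '*.gz' -exec stat -f '%N => %z' {} +)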

    Read the article

  • Print from Linux to Windows networked printer

    - by wonkothenoob
    I want to print from a Debian (Lenny) workstation to a Windows networked printer. I'm not even sure what type of Windows network this is. Our tech support is friendly but doesn't want to get involved with supporting Linux. I need to use it for a variety of reasons and am completely stumped because I know nothing about Windows networking. They gave me the URI smb://msprint.ourorg.edu as the "address" of the printer and further confirmed that the domain is "OURORG" and the share is "PHYS-PRI". I've installed CUPS and made sure that it's running as a daemon, I've clicked on the system-config-printer[1] icon, selected the printer as a Windows printer shared via SAMBA and entered the above URI. Attempting to print a test page just sees it sit in the queue. I attempted to see if I could access the share using two other methods.

    Method 1. First I tried smbclient from the CLI:

        $ smbclient -L //msprint.ourorg.edu -U user23
        timeout connecting to 192.168.44.3:445
        timeout connecting to 192.168.44.3:139
        Connection to msprint.ourorg.edu failed (Error NT_STATUS_ACCESS_DENIED)

    Method 2. I tried to use the GUI tool Smb4K. This shows me four other top-level groupings (I'm assuming they're domains?), one of which is the one our IT department supplied to me. Clicking them shows a bunch of other machines with (what I assume are NetBIOS names?) including my own. I see all sorts of other networked printers belonging to other departments but none within mine, and certainly not the PHYS-PRI one suggested to me by the IT folks. I realize that I'm probably using the wrong terminology for the Windows network, but can anyone help me with this? What steps should I be taking to debug this? Do I need to actually run my machine as a SAMBA server to authenticate to the printer, or should I just be able to communicate using CUPS?

    [1] It's a GUI to CUPS configuration: http://cyberelk.net/tim/software/system-config-printer/
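
    A hedged sketch of the command-line route, using the domain and share names quoted by tech support; the queue name, password and driver choice are placeholders. The key detail is that the domain usually has to be passed explicitly, both to smbclient and inside the CUPS device URI.

        # test access with the domain made explicit
        smbclient -L //msprint.ourorg.edu -U user23 -W OURORG

        # define the queue in CUPS; smbspool accepts smb://user:pass@WORKGROUP/server/share
        lpadmin -p phys-pri -E \
                -v 'smb://user23:PASSWORD@OURORG/msprint.ourorg.edu/PHYS-PRI' \
                -m raw          # raw passes jobs through untouched; a proper PPD for the printer model is better
        lp -d phys-pri /etc/hostname        # quick test job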

    Read the article

  • Upgrading memory in a laptop

    - by ulidtko
    I'm a bit confused about all the memory types and various bus frequencies of modern consumer PCs, and am requesting expert help on the subject. So far I'm confident of this: I have an Asus X51L laptop with an unknown set of configuration options. The CPU in it supports PAE, so I still have a chance to extend the memory beyond 3 GiB, and the upper limit of the system is 8 GiB (?). The laptop has two SODIMM slots, one of which is occupied by a 2 GiB module and the other empty. The dmidecode and lshw tools consistently report a 533 MHz frequency for the module. That last point confuses me the most. I failed to find the characteristics of the northbridge in this laptop, and still can't figure out what DDR2 to look for. Is it DDR2-1066? Or rather PC2-8500/PC2-8600? Wouldn't a DDR2-800 module harm the system's performance? Which kind of modules should I look for in stores? Update: I bought a 2 GiB DDR2-800 SODIMM, and it seems the system can't handle 4 GiB of memory. Installed by itself in either slot, both the new and the old module (which, by the way, happens to be marked GDDR2-677) work perfectly; i.e. any configuration resulting in 2 GiB works. When both modules are installed, though (totalling 4 GiB), the memtest86 tool produces horrible artifacts and crashes and the system reboots; an Ubuntu system can be started and even logged into a Unity session, but it also reboots under even a minor RAM load. So it's pretty obvious to me now that this laptop doesn't support 4 GiB of RAM or more.

    Read the article

  • Does migrating 2 domain controllers between 2 datacentres require both virtual machines to be shut down at the same time?

    - by Imagineer
    I was attempting to migrate two virtual machines that are domain controllers between two datacentres running ESX 3.5 and ESX 4.1. I was advised to shut down both domain controllers at the same time during the migration process, to avoid USN rollback and other replication issues. These are the steps I was planning to perform:

    1. Shut down both DCs.
    2. Copy both VMs' files across to the new datacentre using Veeam FastSCP (connecting to both vCenters by IP address instead of hostname).
    3. Power them up at the new datacentre.
    4. Configure the network interface/DNS/DHCP for both DCs in the new datacentre.

    The reason I tried Veeam FastSCP rather than VMware Standalone Converter is that it copies rather than converts. Someone also suggested that I use a backup-and-restore app like Veeam Backup & Replication. It sounds like a simple job, but after shutting down both DCs the transfer rate using FastSCP was extremely slow, registering only 1KB/s as opposed to the normal 1MB/s (or more). When that transfer attempt failed, I tried to cold clone both DCs, which resulted in both ESX hosts getting disconnected. I tried troubleshooting by referring to the VMware KB article "Diagnosing an ESX Server that is Disconnected or Not Responding in VirtualCenter". It seems that DNS being down was the cause of all the unusual behaviour: the moment I powered up the DCs via the VMware console command, the ESX hosts were able to connect to the vCenter again. How can I avoid such a pitfall again? Am I doing it correctly? Any help would be greatly appreciated! Thank you.

    Read the article

  • Disk doesn't contain a valid partition table

    - by Jeevan Dongre
    I was running an m1.small Ubuntu EC2 instance. I was running out of disk space, so I upgraded my instance to medium. When I upgraded I actually got 429.5 GB of space, and after that I added a 10 GB volume too. When I run "sudo fdisk -l" I get these results:

        Disk /dev/sda1: 8589 MB, 8589934592 bytes
        255 heads, 63 sectors/track, 1044 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

        Disk /dev/sda1 doesn't contain a valid partition table

        Disk /dev/sda2: 429.5 GB, 429461078016 bytes
        255 heads, 63 sectors/track, 52212 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

        Disk /dev/sda2 doesn't contain a valid partition table

        Disk /dev/sdf: 10.7 GB, 10737418240 bytes
        255 heads, 63 sectors/track, 1305 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

    sda1 is the primary partition and sda2 is what was added when I upgraded my instance to medium. But the problem persists: I am not able to pull code from git, which gives me this error:

        remote: Counting objects: 409, done.
        remote: Compressing objects: 100% (236/236), done.
        fatal: write error: No space left on device
        fatal: index-pack failed
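
    Two hedged observations rather than a definitive answer: fdisk prints "doesn't contain a valid partition table" for any device that carries a filesystem directly (as EC2 images usually do), so that message by itself is not the problem; and the extra space only helps once a filesystem on it is mounted where git is writing. A quick sketch to check, and to put the new 10 GB volume to use (device names from the fdisk output above; mkfs destroys existing data, so only run it on the empty new volume):

        df -h                    # which mounted filesystem is actually at 100%?
        lsblk                    # confirm /dev/sda2 and /dev/sdf are attached but unmounted

        # format and mount the new 10 GB EBS volume (ONLY if it is new/empty)
        sudo mkfs -t ext4 /dev/sdf
        sudo mkdir -p /data
        sudo mount /dev/sdf /data
        echo '/dev/sdf /data ext4 defaults,nofail 0 2' | sudo tee -a /etc/fstab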

    Read the article

  • Moving a lot of small files between servers using rsync

    - by Adirael
    Hello guys, I'm moving a lot of files (about 2 million) between two servers in different locations using rsync over ssh. It seems to work fine, but I just realised I'm losing some files in the process. I have server 1 with the original data and server 2 with the copy. Server 1 runs CentOS 5 and server 2 runs Ubuntu 10. I'm doing the transfer from server 2's command line like this:

        rsync -e ssh -avzn usr@server1:/remote/path /local/path

    The first file move I did using tar, but I didn't think of piping it through ssh and it failed because the disk on server 1 was almost full, so I transferred it anyway (it was about 200GB) and got about 80% of the files. Then I piped another tar with the rest of the files (they're in folders; I have 100 folders with about 30 subfolders each, with files inside) and now I have everything on server 2. I wanted to be sure, so my two options were getting the md5sum of all the files and checking them, or running an rsync on server 2 against server 1, which is what I did. It found some missing stuff and now it says there's nothing more to do (DRY RUN). But there are at least two files missing inside a subfolder. I ran that same rsync on just that folder, but again only as a dry run. Am I doing something wrong? Thanks, and sorry for the wall of text.
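
    A hedged sketch for the verification pass, keeping the paths exactly as posted: note that the -n in the command above is --dry-run, so that invocation only reports differences and never copies anything. Dropping it and adding checksum comparison plus itemised output makes missing or differing files visible and actually transfers them.

        # re-run without -n; -c compares full checksums, -i lists exactly what changes and why
        rsync -e ssh -avzci usr@server1:/remote/path /local/path

        # independent spot check of a suspect subfolder (folder name is hypothetical) on both ends
        find /local/path/somefolder -type f | wc -l
        ssh usr@server1 'find /remote/path/somefolder -type f | wc -l'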

    Read the article

  • Nginx + PHPBB3 reverse proxy images problem

    - by siberiano
    Hello all, I have a problem with my nginx frontend + Apache2 backend + phpBB3 setup. It doesn't load the CSS or the images either. I get constant errors like these:

        2010/04/14 16:57:25 [error] 13365#0: *69 open() "/var/www/foo/styles/styles/coffee_time/theme/large.css"
        failed (2: No such file or directory), client: 83.44.175.237, server: www.foo.com,
        request: "GET /styles/coffee_time/theme/large.css HTTP/1.1", host: "www.foo.com",
        referrer: "http://www.foo.com/viewforum.php?f=43"

    This is my config for the site:

        server {
            listen 80;
            server_name www.foo.com;
            access_log /var/log/nginx/foo.access.log;

            # serve static files directly
            location ~* ^.+.(jpg|jpeg|gif|css|png|js|ico)$ {
                access_log off;
                expires 30d;
                root /var/www/trasteando/;
            }

            location / {
                root /var/www/foo/;
                index /var/www/foo/index.php;
            }

            # proxy the PHP scripts to predefined upstream .apache. #
            location ~ .php$ {
                proxy_pass http://apache;
            }

            location /styles/ {
                root /var/www/foo/styles/;
            }

    Read the article

  • Cannot boot Windows XP from a cloned hard disk - what can I do?

    - by Martin
    My configuration: a PC (some years old) with an MSI K8N-Neo-4F motherboard and 1 GB RAM. Disk 1 (Maxtor, SATA II, 250 GB) has two partitions: Partition 1 (48 GB) with Windows XP Professional (NTFS), and Partition 2 (190 GB) with data (NTFS). I wanted a larger and faster disk (the PC is incredibly slow and the disk rattles constantly when I open an application or during Windows startup), so I took Disk 2 (Seagate, SATA II, 500 GB), installed it in the PC, first created a 400 GB partition at the end of the disk and cloned the data to it, which worked well. I then installed a swap partition and a partition for Ubuntu Linux 12.10 on the first part of the disk, so I was able to boot Linux and the old Windows XP via the Linux system selection at startup. Now I wanted to move Windows XP to the new disk, so I deleted the Linux partitions, cloned Windows XP to the new disk (with free tools - EaseUS), left both disks in the PC and tried to select the new hard drive as the boot device. This did not work; the PC refused to boot from the second disk. I tried many things, such as:

    - making the boot partition on the 2nd drive "active" in the Windows system preferences
    - modifying the boot.ini file to boot from the second disk; booting then ended with an error message stating that it was not possible to boot from this disk because of a hardware failure or something similar
    - removing the original disk and plugging the new one into the same SATA port as the original one; booting also failed with an error message
    - repairing the MBR by booting into recovery mode from the Windows XP installation CD-ROM, selecting the second disk and running "FIXMBR", which said that everything was fine with the MBR; after that the PC at least tried to boot from the newer disk, but startup hung at the blue screen with the Windows logo... no luck
    - deleting the cloned partition and cloning again, this time with Macrium Reflect Free, with no more success booting

    I have tried a lot of things without success, so I wonder what I am doing wrong. What can I do to successfully clone my Windows XP partition so the original disk is replaced by a larger, bootable one?

    Read the article

  • Server resolve issues not consistent

    - by bobthemac
    I am having some weird issues with my web server. It has a public IP address and is set up in an OpenVZ virtual machine. Accessing the site from outside works fine every time, but when trying to connect out from the server I can't always do so: sometimes I can connect out and resolve addresses, sometimes I can't. The issue is visible over ssh when trying a wget against Google: sometimes it works and I get the index.html page, and sometimes I get nothing. The issue is even more visible in WordPress, where you can't view themes, but after a few presses of the try-again button you can. I have searched Google and found nothing about this issue. Does anyone here have any idea what could be causing this strange behaviour? Ports 80 and 2222 are open for web and ssh. Two example captures:

        Failed:  17:26:33.398412 IP 86.148.184.124.38445 > 176.9.36.252.http: Flags [.], ack 98383, win 632,
                 options [nop,nop,TS val 3070086 ecr 323106946], length 0
        Passed:  17:30:00.179630 IP 146.90.206.241.50091 > 176.9.36.252.http: Flags [F.], seq 1, ack 1, win 115,
                 options [nop,nop,TS val 13740559 ecr 323308537], length 0

    Thanks in advance
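
    Intermittent outbound resolution on a container is often down to one dead entry in /etc/resolv.conf, with each lookup either succeeding or timing out depending on which server it hits first. A hedged sketch to separate DNS failures from general connectivity failures; the resolver and target IPs are placeholders.

        cat /etc/resolv.conf                        # list the configured resolvers
        dig @8.8.8.8 google.com +short              # a known-good resolver, for comparison
        dig @203.0.113.53 google.com +short         # repeat for each configured resolver (placeholder IP)

        wget -O /dev/null http://google.com         # name resolution + connection
        wget -O /dev/null http://203.0.113.80/      # replace with any IP known to serve HTTP;
                                                    # if IP-only fetches always work, DNS is the culprit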

    Read the article

  • grub refuses to install to raid array

    - by ronno
    I have a software RAID 0 setup dual-booting Windows 7 and Ubuntu 12.04. The GRUB bootloader that is already on the hard drive seems to work fine. However, since the latest package update for grub, it refuses to install the new version to the hard disk. grub-install throws the following error:

        /usr/sbin/grub-probe: error: cannot find a GRUB drive for /dev/mapper/<raid name>_RAID0p9. Check your device.map.
        Auto-detection of a filesystem of /dev/mapper/<raid name>_RAID0p9 failed.
        Try with --recheck.
        If the problem persists please report this together with the output of
        "/usr/sbin/grub-probe --device-map="/boot/grub/device.map" --target=fs -v /boot/grub" to <[email protected]>

    update-grub prints the same "/usr/sbin/grub-probe: error: cannot find a GRUB drive for /dev/mapper/<raid name>_RAID0p9. Check your device.map." on every alternate line. I don't understand what exactly is going on. I'm afraid to reinstall the grub package because it might mess up the boot, which currently works fine. Is it safe to just ignore this?
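
    A hedged sketch of the usual recovery path rather than a definitive fix: a stale /boot/grub/device.map that predates the current mapper device names is a common cause of this exact grub-probe complaint, and 12.04's GRUB 1.99 should still ship grub-mkdevicemap to rebuild it. The mapper name below is a placeholder; keep the backup so nothing is lost if this does not help.

        sudo cp /boot/grub/device.map /boot/grub/device.map.bak
        sudo grub-mkdevicemap                          # regenerate the drive map (GRUB 1.99)
        sudo grub-probe --target=fs -v /boot/grub      # should now identify the filesystem
        sudo grub-install --recheck /dev/mapper/<raid-name>_RAID0   # whole array, not the p9 partition
        sudo update-grub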

    Read the article

  • Recommended Win2k8 Server software to fix my RAID-0 issue

    - by Jason Kealey
    I'm running an Asus P6T V2 Deluxe. It has six SATA ports and supports onboard RAID. I am using two of those ports for a RAID0 array of 1.5TB Seagate drives using the onboard RAID controller. One of them is giving me SMART warnings and I want to preemptively replace it. I pulled out two other 1.5TB drives from another computer and am ready to use one or both, if necessary. I can't run any SMART diagnostic software from within Windows because it only sees the hardware RAID-0 array, not each individual drive. The first thing I tried was a slow sector-by-sector copy using a free tool called EASEUS Disk Copy. Used the bootdisk, copied (took like 16 hours), unplugged the defective drive and plugged the new one in its place. The motherboard didn't recognize the new drive as being part of the known setup, so it did not want to boot. The second thing I tried was using other software (I forget the name) to copy the partition from within Windows. The first software failed because I had a server operating system. I found another software (I forget the name) which supported a server OS and did a partition copy onto the new drive. This seemed to work and the OS started to boot, but blue screened and started a reboot cycle. I'm assuming the software I was using was no good as it was trying to copy the boot disk while it was in use. I am looking for recommendations on what software to use to fix my problem without doing a re-install. Everything is backed up but my computer works fine and I'd like to avoid re-installation when possible. However, my system would be back up now if I had just started over on a second RAID array. :)

    Read the article

  • How do I encapsulate the application server from the web and database servers?

    - by SNyamathi
    So I've been doing some reading, and it seems like the best practice would be to have separate database, application, and web servers. There are a few things I've failed to understand - please feel free to recommend any reading materials that would address these topics.

    Database (assume MySQL) and application server communication: does the database server do any sort of checks on the SQL commands sent/returned, or is it just a "dumb pipe" that responds to SQL commands by spitting back data?

    Application server (assume Tomcat) and web server: almost the reverse here - is it the web server that is more of a pipe to the internet, forwarding requests to the application server and spitting back responses? I'm not wording this well, but I'm trying to ask: is the application server responsible for validating data received from requests? For example: parsing POSTs, validating user logins, encrypting/decrypting data.

    Furthermore, how do these two servers communicate? I'm trying to keep things as flexible as possible here, so while I could write a web server in Java and use Java to communicate between the web and app server, that doesn't sound very modular. What if I want to use Python or some other language to replace the web server later on? What if I want to make a non-web-facing application, used in house, written in C++ or something?

    Read the article

  • Chef bash resource not executing as specified user

    - by Arthur Maltson
    I'm writing a Chef cookbook to install Hubot. In the recipe, I do the following:

        bash "install hubot" do
          user hubot_user
          group hubot_group
          cwd install_dir
          code <<-EOH
            wget https://github.com/downloads/github/hubot/hubot-#{node['hubot']['version']}.tar.gz && \
            tar xzvf hubot-#{node['hubot']['version']}.tar.gz && \
            cd hubot && \
            npm install
          EOH
        end

    However, when I run chef-client on the server installing the cookbook, I get a permission-denied error writing to the home directory of the user that runs chef-client, not the hubot user. For some reason npm is trying to run under the wrong user, not the user specified in the bash resource. I am able to run sudo su - hubot -c "npm install /usr/local/hubot/hubot" manually, and this gets the result I want (it installs hubot as the hubot user). However, it seems chef-client isn't executing the command as the hubot user. Below you'll find the chef-client output. Thank you in advance.

        Saving to: `hubot-2.1.0.tar.gz'
             0K ......                                                100%  563K=0.01s
        2012-01-23 12:32:55 (563 KB/s) - `hubot-2.1.0.tar.gz' saved [7115/7115]
        npm ERR! Could not create /home/<user-chef-client-uses>/.npm/log/1.2.0/package.tgz
        npm ERR! Failed creating the tarball.
        npm ERR! couldn't pack /tmp/npm-1327339976597/1327339976597-0.13104878342710435/contents/package to /home/<user-chef-client-uses>/.npm/log/1.2.0/package.tgz
        npm ERR! error installing [email protected] Error: EACCES, permission denied '/home/<user-chef-client-uses>/.npm/log'
        ...
        npm not ok
        ---- End output of "bash" "/tmp/chef-script20120123-25024-u9nps2-0" ----
        Ran "bash" "/tmp/chef-script20120123-25024-u9nps2-0" returned 1
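
    A hedged workaround, kept in shell so it fits the existing code block: on many Chef versions the user attribute of a script resource switches the uid but does not change $HOME, so npm still resolves ~ to the invoking user's home and trips over its permissions there. Exporting HOME at the top of the script body sidesteps that; the home directory path is an assumption.

        # added at the start of the bash resource's code block (inside the <<-EOH heredoc)
        export HOME=/home/hubot      # assumed home dir; `user` changes the uid but not the environment
        cd hubot && npm install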

    Read the article

  • SQL Server 2005 Service Pack 3 won’t install.

    - by AngryHacker
    I am trying to install SQL Server 2005 Service Pack 3 and it keeps failing. It comes back with the following:

        Microsoft SQL Server 2005 - Update 'Service Pack 3 for SQL Server Database Services 2005 ENU (KB955706)'
        could not be installed. Error code 1603.

    The detailed dump reveals the following:

        MSI (s) (90:C8) [13:50:17:776]: Note: 1: 1729
        MSI (s) (90:C8) [13:50:17:776]: Transforming table Error.
        MSI (s) (90:C8) [13:50:17:776]: Note: 1: 2262 2: Error 3: -2147287038
        MSI (s) (90:C8) [13:50:17:792]: Transforming table Error.
        MSI (s) (90:C8) [13:50:17:792]: Transforming table Error.
        MSI (s) (90:C8) [13:50:17:792]: Note: 1: 2262 2: Error 3: -2147287038
        MSI (s) (90:C8) [13:50:17:792]: Transforming table Error.
        MSI (s) (90:C8) [13:50:17:792]: Note: 1: 2262 2: Error 3: -2147287038
        MSI (s) (90:C8) [13:50:17:792]: Transforming table Error.
        MSI (s) (90:C8) [13:50:17:792]: Note: 1: 2262 2: Error 3: -2147287038
        MSI (s) (90:C8) [13:50:17:792]: Transforming table Error.
        MSI (s) (90:C8) [13:50:17:792]: Note: 1: 2262 2: Error 3: -2147287038
        MSI (s) (90:C8) [13:50:17:807]: Transforming table Error.
        MSI (s) (90:C8) [13:50:17:807]: Transforming table Error.
        MSI (s) (90:C8) [13:50:17:807]: Note: 1: 2262 2: Error 3: -2147287038
        MSI (s) (90:C8) [13:50:17:807]: Transforming table Error.
        MSI (s) (90:C8) [13:50:17:807]: Note: 1: 2262 2: Error 3: -2147287038
        MSI (s) (90:C8) [13:50:17:807]: Transforming table Error.
        MSI (s) (90:C8) [13:50:17:807]: Note: 1: 2262 2: Error 3: -2147287038
        MSI (s) (90:C8) [13:50:17:807]: Product: Microsoft SQL Server 2005 -- Configuration failed.

    Does this mean anything to anybody? By the way, this question originally came from Stack Overflow (936895).

    Read the article

  • Httpd and LDAP Authentication not working for sub-pages

    - by DavisTasar
    I just recently installed a Nagios implementation, and I'm trying to get LDAP authentication working for httpd on Red Hat (the nagios.conf Apache config is below, sanitized of course):

        ScriptAlias /nagios/cgi-bin "/usr/local/nagios/sbin"

        <Directory "/usr/local/nagios/sbin">
            #SSLRequireSSL
            Options ExecCGI
            AllowOverride none
            AuthType Basic
            AuthName "LDAP Authentication"
            AuthLDAPURL "ldap://my.domain.controller:389/OU=Users,DC=my,DC=domain,DC=controller?sAMAccountName?sub?(objectClass=user)" NONE
            AuthzLDAPAuthoritative off
            AuthLDAPBindDN "CN=NagiosAdmin,DC=my,DC=domain,DC=controller"
            AuthLDAPBindPassword "myPassword"
            require valid-user
        </Directory>

        Alias /nagios "/usr/local/nagios/share"

        <Directory /usr/local/nagios/share>
            #SSLRequireSSL
            Options None
            AllowOverride none
            AuthBasicProvider ldap
            AuthType Basic
            AuthName "LDAP Authentication"
            AuthzLDAPAuthoritative off
            AuthLDAPURL "ldap://my.domain.controller:389/OU=Users,DC=my,DC=domain,DC=controller?sAMAccountName?sub?(objectClass=user)" NONE
            AuthLDAPBindDN "CN=NagiosAdmin,DC=my,DC=domain,DC=controller"
            AuthLDAPBindPassword "myPassword"
            require valid-user
        </Directory>

    Now, the initial authentication works: when you first hit the page you can log in just fine. However, when you go anywhere else, it prompts you for authentication again, fails (re-prompting), and logs this error message:

        [Mon Oct 21 14:46:23 2013] [error] [client 172.28.9.30] access to /nagios/cgi-bin/statusmap.cgi failed,
        reason: verification of user id '<myuseraccount>' not configured, referer: http://<nagiosserver>/nagios/side.php

    I'm almost certain it's a simple flag or option, but I just can't find it, and I don't have a lot of experience working with Apache. Any assistance would be greatly appreciated.

    Read the article

  • Unable to ssh into BeagleBone Black

    - by SamuraiT
    I wanted to install pip onto the BeagleBone Black, and I tried this:

        /usr/bin/ntpdate -b -s -u pool.ntp.org
        opkg update && opkg install python-pip python-setuptools

    It threw errors, but unfortunately I didn't log them. This happened a week ago and isn't solved yet. I wanted to solve it now and tried to connect by ssh, but failed. When I ping the BeagleBone it responds, and the Cloud9 IDE is working too, but not ssh. I don't think this is a serious problem, since I can connect to the BeagleBone by other methods such as Cloud9. However, to use Python on the BeagleBone, I need to connect by ssh. Before trying to update and install python-pip, I could connect by ssh. Do you have any ideas to solve this connection problem? Notes: I use the default OS (Angstrom), I don't use an SD card, the host PC is a Mac running OS X 10.9, and I connect over USB. I checked this but it wasn't helpful: http://stackoverflow.com/questions/19233516/cannot-connect-to-beagle-bone-black. I can connect with the GateOne SSH client, but am still unable to connect from the terminal.
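
    A hedged sketch of what to check from the Cloud9 shell, which still works: Angstrom typically runs the dropbear ssh daemon under systemd, and an interrupted opkg run can leave it stopped or without host keys. The unit and key names below are the usual defaults, not confirmed for this image.

        systemctl status dropbear.socket        # is the ssh listener still enabled?
        journalctl -b | grep -i dropbear        # look for errors such as missing host keys
        systemctl restart dropbear.socket

        # only if the host key went missing during the failed update:
        dropbearkey -t rsa -f /etc/dropbear/dropbear_rsa_host_key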

    Read the article
