Search Results

Search found 17847 results on 714 pages for 'virtual disk'.


  • mint linux, DVD drive keeps randomly being accessed. unsure how to find culprit

    - by juicebox
    I have a workstation running Mint Linux 12. The DVD drive keeps randomly "activating": it makes noise, the light turns on, and it appears to check whether a disk is inserted. At first I thought I was being hacked and someone or something was probing for media in the DVD-ROM drive, but I ruled that out with netstat and rkhunter. The only thing in my logs that might help point out the problem are these repeated chunks in syslog:

        Mar 24 17:47:31 rich-MINT kernel: [ 9846.551422] ata2.00: cmd a0/00:00:00:08:00/00:00:00:00:00/a0 tag 0 pio 16392 in
        Mar 24 17:47:31 rich-MINT kernel: [ 9846.551424]          res 51/40:01:00:00:00/00:00:00:00:00/a0 Emask 0x10 (ATA bus error)
        Mar 24 17:47:31 rich-MINT kernel: [ 9846.551427] ata2.00: status: { DRDY ERR }
        Mar 24 17:47:31 rich-MINT kernel: [ 9846.551433] ata2.00: hard resetting link
        Mar 24 17:47:32 rich-MINT kernel: [ 9846.868012] ata2.01: hard resetting link
        Mar 24 17:47:32 rich-MINT kernel: [ 9847.344054] ata2.00: SATA link up 1.5 Gbps (SStatus 113 SControl 310)
        Mar 24 17:47:32 rich-MINT kernel: [ 9847.344067] ata2.01: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
        Mar 24 17:47:32 rich-MINT kernel: [ 9847.376118] ata2.00: configured for PIO0
        Mar 24 17:47:32 rich-MINT kernel: [ 9847.393047] ata2.01: configured for UDMA/133
        Mar 24 17:47:32 rich-MINT kernel: [ 9847.397046] ata2: EH complete

    and again:

        Mar 24 17:55:28 rich-MINT kernel: [10323.633268] sr 1:0:0:0: ioctl_internal_command return code = 8000002
        Mar 24 17:55:28 rich-MINT kernel: [10323.633270] : Sense Key : Aborted Command [current] [descriptor]
        Mar 24 17:55:28 rich-MINT kernel: [10323.633275] : Add. Sense: No additional sense information
        Mar 24 17:55:11 rich-MINT kernel: [10306.640009] ata2.00: link is slow to respond, please be patient (ready=0)
        Mar 24 17:55:16 rich-MINT kernel: [10310.840009] ata2.00: SRST failed (errno=-16)
        Mar 24 17:55:16 rich-MINT kernel: [10310.840016] ata2.00: hard resetting link
        Mar 24 17:55:16 rich-MINT kernel: [10311.160013] ata2.01: hard resetting link
        Mar 24 17:55:16 rich-MINT kernel: [10311.636061] ata2.00: SATA link up 1.5 Gbps (SStatus 113 SControl 310)
        Mar 24 17:55:16 rich-MINT kernel: [10311.636075] ata2.01: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
        Mar 24 17:55:16 rich-MINT kernel: [10311.668122] ata2.00: configured for PIO0
        Mar 24 17:55:16 rich-MINT kernel: [10311.684854] ata2.01: configured for UDMA/133
        Mar 24 17:55:17 rich-MINT kernel: [10312.105473] ata2: EH complete

    (Copied from Pastebin - http://pastebin.com/YNDrnyzH) If any Linux masters could take a quick look at these log outputs and help me understand what is going on, it would be much appreciated.
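    One way to narrow down what keeps waking the drive (a sketch, assuming the optical drive is /dev/sr0 and that a userspace process, rather than the controller itself, is doing the polling): log anything that opens the device, and optionally ask udisks to stop polling it for a while.

        # Log any process that has the optical drive open (run as root; /dev/sr0 is an assumption)
        while true; do
            fuser -v /dev/sr0 2>&1 | grep -v '^$'
            sleep 2
        done >> /tmp/sr0-watch.log

        # If udisks' media polling turns out to be the culprit, inhibit it
        # (this command blocks and keeps the inhibit active while it runs)
        udisks --inhibit-polling /dev/sr0

    If nothing ever shows up in the log while the noise and resets happen, the ATA bus errors are more likely a cabling, drive, or controller issue than any process checking for a disc.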

    Read the article

  • Adobe Reader not loading form content

    - by wullxz
    We have an FDL file which is used to offer an online application facility. The FDL is filled out and sent to a mailbox. When I open the received file, Adobe Reader starts, loads the document in Internet Explorer (I had to change my default browser because it doesn't work in Chrome - the customer uses IE as default) and displays a warning that Adobe Reader has blocked the connection to the server where the initial document is saved. I can then click "Trust this document once" (translated by me!) or "Add this host to trusted hosts" (also translated by me!). The second option doesn't work at all; the first option works but is a little bit annoying.
    I looked into Adobe Reader's options (Edit - "Voreinstellungen", i.e. Preferences, in German / the last option - Security (advanced)) and found the possibility to add hosts, files and directories, or to allow Adobe Reader to use the "Trusted Websites" list from the Internet Options. When I add the website either to Trusted Websites or to the trusted list in Adobe Reader's options, the warning no longer pops up, but the content of the input boxes prefilled by the applicant doesn't show up on Windows 7 - although it does show up on Windows XP. This screenshot shows the settings window described in the last paragraph; the big input box at the bottom normally holds the trusted files/directories/hosts list.
    System information:
        Windows 7 Enterprise x64
        Adobe Reader X
        multiple IE versions (mine is the latest, but there's also IE 7 or 8)
    How do I get Adobe Reader to load the content of the form? This behaviour can be reproduced on a PC: when opening an FDF from a command line, the form fields are blank even though there is data in the FDF and the PDF is located in a manually entered trusted folder.
    Steps to reproduce:
        1. Clean-install a Windows 7 PC (or use a virtual box)
        2. Map a network drive to a shared folder with a subfolder, e.g. c:\test\docs becomes m:\docs
        3. Set security permissions to allow full control to everyone
        4. Add an FDF and a matching PDF file in the subfolder
        5. Manually add m:\docs to each of the trusted folders in the trust manager registry settings
        6. Ensure that Enhanced Security is on
        7. Run a command line to open the FDF file
    Expected result: the PDF is opened in Adobe Reader with the form fields filled out with data.
    Actual results: the PDF is opened with blank fields, and a "yellow bar" appears asking to add the document to trusted locations.
    It appears that Adobe Reader XI is ignoring the privileged locations entries in the registry. Adding the document via the yellow bar adds the individual document, within the same folder, to the privileged locations, but that means the process has to be repeated for every document that needs to be opened from the folder.

    Read the article

  • usb_modeswitch not switching

    - by deniz
    After I upgraded from kernel 2.6.18 to 3.5.3, usb_modeswitch stopped working for me. Although lsusb shows my USB modem, usb_modeswitch does not switch it; instead of switching the modem it says "No devices in default mode found. Nothing to do. Bye." Can you offer a solution? My system information and the output of lsusb, dmesg, usb-devices and usb_modeswitch are below.

        Kernel: Linux 3.5.3
        usb_modeswitch: 1.2.3-1
        usb_modeswitch-data: 20120120-1
        usbutils: 006-1
        libusb: 1.0.8-0.1

        root@localhost$ lsusb
        Bus 002 Device 029: ID 12d1:1446 Huawei Technologies Co., Ltd.

        root@localhost$ dmesg
        [70112.477080] usb 2-1.4: new high-speed USB device number 30 using ehci_hcd
        [70112.567757] scsi49 : usb-storage 2-1.4:1.0
        [70112.567842] scsi50 : usb-storage 2-1.4:1.1
        [70113.571433] scsi 49:0:0:0: CD-ROM  HUAWEI  Mass Storage  2.31 PQ: 0 ANSI: 2
        [70113.572304] scsi 50:0:0:0: Direct-Access  HUAWEI  TF CARD Storage  PQ: 0 ANSI: 2
        [70113.574169] sr0: scsi-1 drive
        [70113.574223] sr 49:0:0:0: Attached scsi CD-ROM sr0
        [70113.574250] sr 49:0:0:0: Attached scsi generic sg1 type 5
        [70113.574350] sd 50:0:0:0: Attached scsi generic sg2 type 0
        [70113.577173] sd 50:0:0:0: [sdb] Attached SCSI removable disk

        root@localhost$ usb-devices
        T:  Bus=02 Lev=02 Prnt=02 Port=03 Cnt=01 Dev#= 30 Spd=480 MxCh= 0
        D:  Ver= 2.00 Cls=00(>ifc ) Sub=00 Prot=00 MxPS=64 #Cfgs=  1
        P:  Vendor=12d1 ProdID=1446 Rev=00.00
        S:  Manufacturer=Huawei Technologies
        S:  Product=HUAWEI Mobile
        C:  #Ifs= 2 Cfg#= 1 Atr=c0 MxPwr=500mA
        I:  If#= 0 Alt= 0 #EPs= 2 Cls=08(stor.) Sub=06 Prot=50 Driver=usb-storage
        I:  If#= 1 Alt= 0 #EPs= 2 Cls=08(stor.) Sub=06 Prot=50 Driver=usb-storage

        root@localhost$ cat /etc/usb_modeswitch.d/12d1\:1446
        # Huawei, newer modems
        TargetVendor= 0x12d1
        TargetProductList="1001,1406,140b,140c,1412,141b,1433,1436,14ac,1506"
        MessageContent="55534243123456780000000000000011062000000100000000000000000000"

        root@localhost$ usb_modeswitch -c /etc/usb_modeswitch.d/12d1:1446 -v 12d1 -p 1446 -W
        * usb_modeswitch: handle USB devices with multiple modes
        * Version 1.2.3 (C) Josua Dietze 2012
        * Based on libusb0 (0.1.12 and above)
        ! PLEASE REPORT NEW CONFIGURATIONS !
        DefaultVendor=  0x12d1
        DefaultProduct= 0x1446
        TargetVendor=   0x12d1
        TargetProduct=  not set
        TargetClass=    not set
        TargetProductList="1001,1406,140b,140c,1412,141b,1433,1436,14ac,1506"
        DetachStorageOnly=0
        HuaweiMode=0
        SierraMode=0
        SonyMode=0
        QisdaMode=0
        GCTMode=0
        KobilMode=0
        SequansMode=0
        MobileActionMode=0
        CiscoMode=0
        MessageEndpoint=  not set
        MessageContent="55534243123456780000000000000011062000000100000000000000000000"
        NeedResponse=0
        ResponseEndpoint= not set
        InquireDevice enabled (default)
        Success check disabled
        System integration mode disabled
        usb_set_debug: Setting debugging level to 15 (on)
        usb_os_find_busses: Skipping non bus directory devices
        usb_os_find_busses: Skipping non bus directory drivers
        usb_os_find_busses: Skipping non bus directory uevent
        usb_os_find_busses: Skipping non bus directory drivers_probe
        usb_os_find_busses: Skipping non bus directory drivers_autoprobe
        Looking for target devices ...
         No devices in target mode or class found
        Looking for default devices ...
         No devices in default mode found. Nothing to do. Bye.

    Thanks in advance.
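    Two things that might be worth trying, as a sketch: drive the switch by hand instead of via the config file, and see whether ejecting the emulated CD-ROM flips the modem, which works on many Huawei sticks. The target product id 1506 is taken from the TargetProductList above, but whether it matches this particular firmware is an assumption.

        # Call usb_modeswitch directly with an explicit target and the message from the config
        usb_modeswitch -W -v 12d1 -p 1446 -V 12d1 -P 1506 \
            -M "55534243123456780000000000000011062000000100000000000000000000"

        # Many Huawei modems also switch when their fake CD-ROM is ejected
        eject /dev/sr0

    If the direct call still reports "No devices in default mode found", the problem is more likely in how this usb_modeswitch/libusb build enumerates devices on the 3.5 kernel than in the device data itself.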

    Read the article

  • How might I stop BACKSCATTER using Qmail?

    - by alecb
    New to Server Fault, so please pardon me if my details are too much. A Linux box acting as a virtual host for domain hosting; it runs CentOS and Parallels Plesk 9.x. Regardless of the following, the spam keeps flowing in at 1-3 messages per second.
    An explanation of the problem: "The xinetd service listens for SMTP connections and forwards to qmail-smtpd. The qmail service only processes the queue, but does not control messages coming into the queue - that's why stopping it has no effect. If you stop xinetd AND qmail, then kill any open qmail-smtpd processes, all mail flow comes to a stop SOMETIMES. The problem is that qmail-smtpd is not smart enough to check for valid mailboxes on the localhost before accepting the mail. So it accepts bad mail with a forged reply-to address, which gets processed in the queue by qmail. Qmail cannot deliver locally and bounces to the forged reply-to address."
    We believe the fix is to patch the qmail-smtpd process to give it the intelligence to check for the existence of local mailboxes BEFORE accepting the message. The problem is that when we try to compile the chkuser patch, we run into failures due to the Plesk control panel. Is anyone aware of something we could do differently or better?
    Other things that have NOT worked thus far:
        - Turning off any and all mail processes (to check, as an indicator, whether an individual account has been compromised; this has been verified as NOT the case)
        - Turning off mail AND HTTP server processes (in case of a compromised formmail)
        - Running Exim in lieu of qmail (an easy/quick install, but xinetd forces Exim to close and restarts qmail on its own)
        - Turning on SPF protection via the Plesk GUI - does not help
        - Turning on greylisting via the Plesk GUI - does not help
        - Disabling bounce notifications via the command line
    Things that MIGHT work but have complications:
        - Using Postfix instead of qmail (I have no knowledge of Postfix and don't want to bother with it unless anyone knows it has the potential to handle backscatter WELL before I invest the time)
        - As mentioned above, compiling the chkuser patch, which we believe will STOP this problem along with qmail (because of Plesk in the mix, the compile fails every time, and Parallels Plesk support is unresponsive unless I cough up MONEY)
    If I don't clear the spam out of the outgoing mail queue nightly, it clogs up with millions of spams and brings down the OUTGOING email services. Any and all help is welcome and appreciated!

    Read the article

  • Strange permission errors with Windows Server 2008

    - by Spirit
    I just don't know a better way to describe my issue, which is driving me nuts. I am trying to establish a test domain with virtual machines on a box that has Windows 7 with VMware Workstation installed. The purpose of this domain is so that we can try and test different situations before they go into the production network. I built a VM with Windows Server 2008 R2 and I am using that VM as a template to make other servers for the domain by cloning it. I raise a DC with one clone and a member server with another clone, and I add the server to the domain. I am following a standard procedure, as always (it is not my first domain). Then I make an admin account and add it as a member of the Domain Admins and Enterprise Admins groups. That admin has full privileges on the DC - no problem there. But on the other server it has... somewhat half the privileges, and I can't log in via RDP. I tried with another account; same issues.
    For example (with half the privileges): I can't open the Event Viewer if I go via Start - Administrative Tools - Event Viewer, but I can open the Event Viewer via Server Manager. You can see this in the image below. I mean, WTF? I am going crazy; I haven't experienced anything similar in my three years of experience, and I have already lost 3 days troubleshooting this.
    Could this be related to the cloning? Perhaps if I make fresh installs of Windows Server 2008 there won't be any problems? I've raised test domains as VMs on other occasions before, and there weren't any problems then. This is VMware Workstation 8; I've made clones before on Workstation 7 and it didn't have any problems. Anyone have any ideas?
    UPDATE: This is the info from the event log when I try to access via RDP:

        An account failed to log on.
        Subject:
            Security ID:     NULL SID
            Account Name:    -
            Account Domain:  -
            Logon ID:        0x0
        Logon Type:          3
        Account For Which Logon Failed:
            Security ID:     NULL SID
            Account Name:    pat.coleman
            Account Domain:  lab
        Failure Information:
            Failure Reason:  Domain sid inconsistent.
            Status:          0xc000006d
            Sub Status:      0xc000019b
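    The "Domain sid inconsistent" status is consistent with clones that still carry the template's machine SID. A minimal sketch of one way to test that theory (assuming the clone can be stripped of its identity before re-joining; sysprep also resets activation and other machine-specific state): generalize the cloned member server, let it reboot through OOBE, then join the domain again.

        rem Run inside the cloned member server; re-join the domain after OOBE completes
        C:\Windows\System32\Sysprep\sysprep.exe /generalize /oobe /reboot

    Whether you use full or linked clones in Workstation 8, the guest still ends up with the template's SID unless it is generalized, so fresh installs would also avoid the problem, just more slowly.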

    Read the article

  • can't use periods in ServerName [Lion Apache installation]

    - by punchfacechamp
    I can access my host like this: http://keggyshop - but I can't use periods: http://keggyshop.dev. Here is my virtual host directive:

        <VirtualHost *:80>
            ServerName keggyshop
            ServerAlias keggyshop.dev
            DocumentRoot "~/sites/2012/keggy/web/pages/keggy/120528/sandbox/public"
            <Directory "~/sites/2012/keggy/web/pages/keggy/120528/sandbox/public">
                Options Includes FollowSymLinks
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    hosts file:

        127.0.0.1 keggyshop
        127.0.0.1 keggyshop.dev

    traceroute for keggyshop:

        user$ traceroute keggyshop
        traceroute to keggyshop (192.168.1.184), 64 hops max, 52 byte packets
         1  keggyshop (192.168.1.184)  1.188 ms  0.683 ms  0.747 ms

    traceroute for keggyshop.dev:

        user$ traceroute keggyshop.dev
        traceroute: Warning: keggyshop.dev has multiple addresses; using 184.106.15.239
        traceroute to keggyshop.dev (184.106.15.239), 64 hops max, 52 byte packets
         1  * 192.168.1.1 (192.168.1.1)  0.856 ms  0.568 ms
         2  10.81.192.1 (10.81.192.1)  15.232 ms  7.002 ms  7.936 ms
         3  gig-0-3-0-6-nycmnya-rtr2.nyc.rr.com (24.29.97.122)  7.962 ms  7.813 ms  7.712 ms
         4  bun101.nycmnytg-rtr001.nyc.rr.com (184.152.112.107)  10.999 ms  14.001 ms  15.466 ms
         5  bun6-nycmnytg-rtr002.nyc.rr.com (24.29.148.250)  11.231 ms  17.321 ms  12.745 ms
         6  107.14.19.24 (107.14.19.24)  13.972 ms  11.704 ms  16.477 ms
         7  ae-1-0.pr0.nyc30.tbone.rr.com (66.109.6.161)  9.237 ms  11.896 ms 107.14.19.153 (107.14.19.153)  7.481 ms
         8  xe-5-0-6.ar2.ewr1.us.nlayer.net (69.31.94.57)  16.682 ms  11.791 ms  11.981 ms
         9  ae3-90g.cr1.ewr1.us.nlayer.net (69.31.94.117)  12.977 ms  15.706 ms  9.709 ms
        10  xe-5-0-0.cr1.ord1.us.nlayer.net (69.22.142.74)  30.473 ms  30.497 ms  31.750 ms
        11  ae1-20g.ar1.ord6.us.nlayer.net (69.31.110.250)  36.699 ms  50.785 ms  35.957 ms
        12  as19994.xe-1-0-7.ar1.ord6.us.nlayer.net (69.31.110.242)  34.723 ms  31.118 ms  29.967 ms
        13  coreb.ord1.rackspace.net (184.106.126.138)  30.471 ms corea.ord1.rackspace.net (184.106.126.136)  33.392 ms  35.210 ms
        14  core1-coreb.ord1.rackspace.net (184.106.126.129)  32.453 ms core1-corea.ord1.rackspace.net (184.106.126.125)  32.020 ms core1-coreb.ord1.rackspace.net (184.106.126.129)  32.417 ms
        15  core1-aggr401a-3.ord1.rackspace.net (173.203.0.157)  31.274 ms  34.854 ms  30.194 ms
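    The traceroute above resolves keggyshop.dev to a public Rackspace address, which suggests the request never reaches the local Apache at all - name resolution, not the VirtualHost block, is the first thing to check. A small diagnostic sketch for OS X Lion, using only stock tools:

        # Flush the resolver cache, then ask the system resolver (not DNS directly) what it thinks
        sudo dscacheutil -flushcache
        sudo killall -HUP mDNSResponder
        dscacheutil -q host -a name keggyshop.dev   # should return 127.0.0.1 if /etc/hosts is honoured

    If that query still returns 184.106.15.239, the hosts-file entry is not being used for that name (a stray character or encoding issue in /etc/hosts is a common cause), and the Apache configuration as written is probably fine.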

    Read the article

  • Trouble with Debian Lenny and Sphinx

    - by Ando
    I have a very basic understanding of Linux systems, but I have a server that was set up a while ago to host some web apps. Recently I decided to test out and implement Sphinx, but unfortunately I can't get the install to work. I'm running a Debian Lenny distro, and when I try to install Sphinx it says:

        checking MySQL include files... configure: error: missing include files.
        ******************************************************************************
        ERROR: cannot find MySQL include files.
        Check that you do have MySQL include files installed.
        The package name is typically 'mysql-devel'.
        If include files are installed on your system, but you are still getting
        this message, you should do one of the following:
        1) either specify includes location explicitly, using --with-mysql-includes;
        2) or specify MySQL installation root location explicitly, using --with-mysql;
        3) or make sure that the path to 'mysql_config' program is listed in
           your PATH environment variable.
        To disable MySQL support, use --without-mysql option.
        ******************************************************************************

    I do have MySQL 5.1 installed, but I can't find the include files. AND one more thing: I read around the net that I probably need libmysqlclient15-dev, but when I try to install that using apt-get I receive the following error:

        The following packages were automatically installed and are no longer required:
          libxcb-aux0 libts-0.0-0 libxcb-atom1 ttf-dejavu-extra hunspell-en-us g++-4.3
          libmysql++3 libnspr4-0d libdirectfb-1.0-0 libxcb-event1 libasound2
          libstdc++6-4.3-dev libhunspell-1.2-0 ttf-dejavu libmozjs2d
          conkeror-spawn-process-helper libnss3-1d
        Use 'apt-get autoremove' to remove them.
        The following NEW packages will be installed:
          libmysqlclient15-dev
        0 upgraded, 1 newly installed, 0 to remove and 276 not upgraded.
        Need to get 7590 kB of archives.
        After this operation, 26.3 MB of additional disk space will be used.
        WARNING: The following packages cannot be authenticated!
          libmysqlclient15-dev
        Install these packages without verification [y/N]? Y
        Err http://ftp.us.debian.org/debian/ lenny/main libmysqlclient15-dev amd64 5.0.51a-24+lenny5
          404 Not Found [IP: 35.9.37.225 80]
        Err http://security.debian.org/ lenny/updates/main libmysqlclient15-dev amd64 5.0.51a-24+lenny5
          404 Not Found [IP: 149.20.20.6 80]
        Failed to fetch http://security.debian.org/pool/updates/main/m/mysql-dfsg-5.0/libmysqlclient15-dev_5.0.51a-24+lenny5_amd64.deb  404 Not Found [IP: 149.20.20.6 80]
        E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?

    Can you help me out by suggesting how to install the required packages and run Sphinx?
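    The 404s are what you would expect once a release has been archived: Lenny's packages are no longer on the regular mirrors. A sketch of two possible ways forward; the archive URL is the standard Debian archive, and the header/library paths in option 2 are assumptions about where your MySQL 5.1 build put them:

        # Option 1: point apt at the Debian archive, where Lenny packages now live
        echo "deb http://archive.debian.org/debian lenny main" >> /etc/apt/sources.list
        apt-get update
        apt-get install libmysqlclient15-dev   # may again warn that the packages cannot be authenticated

        # Option 2: if the headers are already on disk somewhere, tell Sphinx's configure script where
        ./configure --with-mysql-includes=/usr/include/mysql --with-mysql-libs=/usr/lib/mysql
        make && make install

    The --with-mysql-includes / --with-mysql-libs route is the one the configure error itself suggests, and it avoids mixing an old libmysqlclient15 dev package with a MySQL 5.1 server.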

    Read the article

  • recommendations for efficient offsite remote backup solution of vm's

    - by senorsmile
    I am looking for recommendations for backing up my current 6 vm's(and soon to grow to up to 20). Currently I am running a two node proxmox cluster(which is a debian base using kvm for virtualization with a custom web front end to administer). I have two nearly identical boxes with amd phenom II x4's and asus motherboards. Each has 4 500 GB sata2 hdd's, 1 for the os and other data for the proxmox install, and 3 using mdadm+drbd+lvm to share the 1.5 TB's of storage between the two machines. I mount lvm images to kvm for all of the virtual machines. I currently have the ability to do live transfer from one machine to the other, typically within seconds(it takes about 2 minutes on the largest vm running win2008 with m$ sql server). I am using proxmox's built-in vzdump utility to take snapshots of the vm's and store those on an external harddrive on the network. I then have jungledisk service (using rackspace) to sync the vzdump folder for remote offsite backup. This is all fine and dandy, but it's not very scalable. For one, the backups themselves can take up to a few hours every night. With jungledisk's block level incremental transfers, the sync only transfers a small portion of the data offsite, but that still takes at least a half an hour. The much better solution would of course be something that allows me to instantly take the difference of two time points (say what was written from 6am to 7am), zip it, then send that difference file to the backup server which would instantly transfer to the remote storage on rackspace. I have looked a little into zfs and it's ability to do send/receive. That coupled with a pipe of the data in bzip or something would seem perfect. However, it seems that implementing a nexenta server with zfs would essentially require at least one or two more dedicated storage servers to serve iSCSI block volumes (via zvol's???) to the proxmox servers. I would prefer to keep the setup as minimal as possible (i.e. NOT having separate storage servers) if at all possible. I have also briefly read about zumastor. It looks like it could also do what I want, but it appears to have halted development in 2008. So, zfs, zumastor or other?
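    On the zfs idea specifically, the send/receive workflow described above is essentially "snapshot, ship only the delta, compress it in the pipe". A rough sketch of what one cycle could look like, assuming a dataset named tank/vmstore on the sending side and an ssh-reachable box holding a pool at the far end; all names are placeholders, not a recommendation of nexenta over anything else:

        # Take a new snapshot and ship only the difference since the previous one
        NOW=$(date +%Y%m%d%H)
        PREV=$(zfs list -H -t snapshot -o name -s creation | grep '^tank/vmstore@' | tail -1)

        zfs snapshot tank/vmstore@${NOW}
        zfs send -i "${PREV}" tank/vmstore@${NOW} | bzip2 | \
            ssh backup@offsite "bunzip2 | zfs receive -F tank/vmstore-backup"

    The receiving side does not have to be a dedicated iSCSI storage head for this part - anything that can hold a pool and run sshd will accept the stream - though whether that satisfies the rest of the proxmox/LVM constraints is a separate question.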

    Read the article

  • Do RAID controllers commonly have SATA drive brand compatibility issues?

    - by Jeff Atwood
    We've struggled with the RAID controller in our database server, a Lenovo ThinkServer RD120. It is a rebranded Adaptec that Lenovo / IBM dubs the ServeRAID 8k. We have patched this ServeRAID 8k up to the very latest and greatest: RAID bios version RAID backplane bios version Windows Server 2008 driver This RAID controller has had multiple critical BIOS updates even in the short 4 month time we've owned it, and the change history is just.. well, scary. We've tried both write-back and write-through strategies on the logical RAID drives. We still get intermittent I/O errors under heavy disk activity. They are not common, but serious when they happen, as they cause SQL Server 2008 I/O timeouts and sometimes failure of SQL connection pools. We were at the end of our rope troubleshooting this problem. Short of hardcore stuff like replacing the entire server, or replacing the RAID hardware, we were getting desperate. When I first got the server, I had a problem where drive bay #6 wasn't recognized. Switching out hard drives to a different brand, strangely, fixed this -- and updating the RAID BIOS (for the first of many times) fixed it permanently, so I was able to use the original "incompatible" drive in bay 6. On a hunch, I began to assume that the Western Digital SATA hard drives I chose were somehow incompatible with the ServeRAID 8k controller. Buying 6 new hard drives was one of the cheaper options on the table, so I went for 6 Hitachi (aka IBM, aka Lenovo) hard drives under the theory that an IBM/Lenovo RAID controller is more likely to work with the drives it's typically sold with. Looks like that hunch paid off -- we've been through three of our heaviest load days (mon,tue,wed) without a single I/O error of any kind. Prior to this we regularly had at least one I/O "event" in this time frame. It sure looks like switching brands of hard drive has fixed our intermittent RAID I/O problems! While I understand that IBM/Lenovo probably tests their RAID controller exclusively with their own brand of hard drives, I'm disturbed that a RAID controller would have such subtle I/O problems with particular brands of hard drives. So my question is, is this sort of SATA drive incompatibility common with RAID controllers? Are there some brands of drives that work better than others, or are "validated" against particular RAID controller? I had sort of assumed that all commodity SATA hard drives were alike and would work reasonably well in any given RAID controller (of sufficient quality).

    Read the article

  • unable to join domain using virtualbox

    - by FreshPrinceOfSO
    I'm in the process of setting up a VM environment for a MS certification exam (70-462). Following the training kit's instructions, I've set up a domain controller (DC) and two members (SQL-A, SQL-B) thus far. I can't figure out why I can't join the domain.

        DC
          IPv4 Address . . . : 10.10.10.10(Preferred)
          Subnet Mask. . . . : 255.0.0.0
          DNS Servers. . . . : ::1
                               127.0.0.1
        SQL-A
          IPv4 Address . . . : 10.10.10.20(Preferred)
          Subnet Mask. . . . : 255.0.0.0
          DNS Servers. . . . : 10.10.10.10
        SQL-B
          IPv4 Address . . . : 10.10.10.30(Preferred)
          Subnet Mask. . . . : 255.0.0.0
          DNS Servers. . . . : 10.10.10.10

    I've read how to do networking between virtual machines in VirtualBox and the documentation. After trying various network adapter configurations, I can't get them to communicate in order to have the two members join the domain. When I ping from .30 to .10, I get:

        ping 10.10.10.10
        Pinging 10.10.10.10 with 32 bytes of data:
        Reply from 10.10.10.20: Destination host unreachable.
        Reply from 10.10.10.20: Destination host unreachable.
        Reply from 10.10.10.20: Destination host unreachable.
        Reply from 10.10.10.20: Destination host unreachable.

    Trying to join the domain:

        netdom join SQL-A /domain:contso.com
        The specified domain either does not exist or could not be contacted.
        The command failed to complete successfully.

    Within VirtualBox, I've tried the following combinations for the network adapter:

        Attached to       - Promiscuous Mode
        -------------------------------
        NAT
        Bridged Adapter   - Deny
        Bridged Adapter   - Allow VMs
        Bridged Adapter   - Allow All
        Internal Network  - Deny
        Internal Network  - Allow VMs
        Internal Network  - Allow All
        Host-only Adapter - Deny
        Host-only Adapter - Allow VMs
        Host-only Adapter - Allow All

    Edit: ipconfig /all of DC, ipconfig /all of SQL-A
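    For a self-contained lab like this, attaching all three guests to the same named internal network (promiscuous mode left at its default) is usually enough on the VirtualBox side, provided the guests' own firewalls allow ICMP and AD traffic. A sketch using VBoxManage - the VM names and the network name "labnet" are assumptions:

        # Attach each guest's first NIC to the same internal network
        VBoxManage modifyvm "DC"    --nic1 intnet --intnet1 labnet
        VBoxManage modifyvm "SQL-A" --nic1 intnet --intnet1 labnet
        VBoxManage modifyvm "SQL-B" --nic1 intnet --intnet1 labnet

    Two details in the output above are also worth a second look: the "Destination host unreachable" replies come from 10.10.10.20, which suggests that capture was taken on SQL-A rather than SQL-B, and the DC points at ::1/127.0.0.1 for DNS (fine for the DC itself), while the members must keep 10.10.10.10 as their only DNS server for netdom to locate the domain.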

    Read the article

  • How to enable caching on Apache / Ubuntu Linux?

    - by Jim Mischel
    I have a large (several megabytes) XML file that's updated rather frequently (every 10 minutes or less) and gets a lot of traffic. I'd like to implement some caching to reduce bandwidth and server load. Looking at the Apache documentation, I see a dizzying array of configuration options involving various combinations of mod_expires, mod_headers, and mod_cache (and variants). I end up running in circles and the results aren't what I expect. I'm comfortable editing the various configuration files if I have some idea of what I'm supposed to change, but at the moment I'm poking around in the dark, and that's never a comfortable feeling. So perhaps if I describe what I want, somebody here can take me by the hand and say, "This is what you need to do."
    Periodically this file, call it "stuff.xml", is updated and a new version is copied to the directory. The external URL would be, for example, http://example.com/stuff.xml. Understand, this part works: whenever I request the file, I get the expected result. But the file is big and I want to save bandwidth, so first I'd like to implement conditional GET semantics with the If-Modified-Since header. How do I do this? I've enabled mod_headers and mod_expires and added the <FilesMatching> section to my httpd.conf as recommended in countless examples I've seen online, but that didn't change the behaviour when I made a conditional GET request: I always get a status 200 with the entire document. So how the heck do I implement this? That would cut down on needless transfers.
    I'd also like to limit the amount of data transferred. Seeing as this is XML, gzipping it should save me 50% or more. My next step would be to somehow gzip the file and, if it's not too difficult, store it in memory. That would cut down on per-access data transfer and also reduce disk transfers. So how do I implement this type of caching? Thanks in advance.
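    For a static file served straight off disk, Apache's default handler already answers If-Modified-Since with 304 Not Modified unless something strips the Last-Modified header, so most of the remaining work is compression and sensible cache lifetimes. A sketch of the relevant directives - the 10-minute lifetime is a guess matched to the update cycle, and it assumes mod_deflate and mod_expires are loaded in this build:

        # Compress XML on the fly (mod_deflate)
        AddOutputFilterByType DEFLATE application/xml text/xml

        # Cache hints for the one big file (mod_expires emits both Expires and Cache-Control: max-age)
        <Files "stuff.xml">
            FileETag MTime Size
            ExpiresActive On
            ExpiresDefault "access plus 10 minutes"
        </Files>

    To verify the conditional GET path, something like curl -s -o /dev/null -w '%{http_code}\n' -z stuff.xml http://example.com/stuff.xml (curl's -z sends If-Modified-Since based on the local copy's mtime) should print 304 while the server copy is unchanged; if it still prints 200, the request is probably not carrying the conditional header at all.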

    Read the article

  • How to force Debian to boot new Kernel?

    - by ThE_-_BliZZarD
    I'm running Debian 6 (Debian GNU/Linux, with Linux 2.6.32-5-amd64) under GRUB 2 (1.98+20100804-14+squeeze1) on a remote system (no possibility to view the pre-boot messages). I compiled and installed a new kernel, but I can not get it to boot.
    What I have done: I installed the packages via

        dpkg -i linux-headers-3.5.3.20120914-amd64_3.5.3.20120914-amd64-10.00.Custom_amd64.deb linux-image-3.5.3.20120914-amd64_3.5.3.20120914-amd64-10.00.Custom_amd64.deb

    This updated the GRUB configuration. My /boot/grub/grub.cfg now contains:

        menuentry 'Debian GNU/Linux, with Linux 3.5.3.20120914-amd64' --class debian --class gnu-linux --class gnu --class os {
                insmod raid
                insmod mdraid
                insmod part_msdos
                insmod part_msdos
                insmod ext2
                set root='(md0)'
                search --no-floppy --fs-uuid --set 5a3882a9-c7df-4f6a-9feb-f03e3e37be01
                echo 'Loading Linux 3.5.3.20120914-amd64 ...'
                linux /vmlinuz-3.5.3.20120914-amd64 root=UUID=003242b5-121b-49f3-b32f-1b40aea56eed ro acpi=ht quiet panic=10
                echo 'Loading initial ramdisk ...'
                initrd /initrd.img-3.5.3.20120914-amd64
        }
        menuentry 'Debian GNU/Linux, with Linux 2.6.32-5-amd64' --class debian --class gnu-linux --class gnu --class os {
                insmod raid
                insmod mdraid
                insmod part_msdos
                insmod part_msdos
                insmod ext2
                set root='(md0)'
                search --no-floppy --fs-uuid --set 5a3882a9-c7df-4f6a-9feb-f03e3e37be01
                echo 'Loading Linux 2.6.32-5-amd64 ...'
                linux /vmlinuz-2.6.32-5-amd64 root=UUID=003242b5-121b-49f3-b32f-1b40aea56eed ro acpi=ht quiet panic=10
                echo 'Loading initial ramdisk ...'
                initrd /initrd.img-2.6.32-5-amd64
        }

    I used grub-set-default "Debian GNU/Linux, with Linux 2.6.32-5-amd64" to set the old kernel as the default, and then grub-reboot "Debian GNU/Linux, with Linux 3.5.3.20120914-amd64" to boot into the new kernel once. After update-grub I rebooted the system, but every time it comes back up with the old kernel (2.6). I then tried setting the new one as the default (grub-set-default 0, update-grub, reboot), but it still boots the old one. The syslogs contain NO hint whatsoever about trying to boot the new kernel - only the old one.
    Would there be any hints regarding problems with a kernel? Is there a way to enable debug logging in GRUB? What am I doing wrong? How can I force the system to boot the new kernel?
    Edit: hardware of the remote machine. CPU:

        cat /proc/cpuinfo
        processor       : 0
        vendor_id       : AuthenticAMD
        cpu family      : 16
        model           : 5
        model name      : AMD Athlon(tm) II X4 605e Processor
        stepping        : 3
        cpu MHz         : 2294.898
        cache size      : 512 KB
        physical id     : 0
        siblings        : 4
        core id         : 0
        cpu cores       : 4
        apicid          : 0
        initial apicid  : 0
        fpu             : yes
        fpu_exception   : yes
        cpuid level     : 5
        wp              : yes
        flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm 3dnowext 3dnow constant_tsc rep_good nonstop_tsc extd_apicid pni monitor cx16 popcnt lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt
        bogomips        : 4589.77
        TLB size        : 1024 4K pages
        clflush size    : 64
        cache_alignment : 64
        address sizes   : 48 bits physical, 48 bits virtual
        power management: ts ttp tm stc 100mhzsteps hwpstate

    (Copied only the first; 3 more follow.) The server is a Fujitsu PRIMERGY MX130 S1.
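    One detail that often bites here: grub-set-default and grub-reboot only take effect when GRUB_DEFAULT is set to "saved" in /etc/default/grub; otherwise GRUB keeps using the static default entry (usually entry 0, and if the new kernel sorts below the old one the result looks exactly like this). A sketch of the sequence, with Debian's default paths - whether this is the actual cause on your box is an assumption:

        # Make GRUB honour the saved/one-shot default, then arm a one-time boot of the new kernel
        sed -i 's/^GRUB_DEFAULT=.*/GRUB_DEFAULT=saved/' /etc/default/grub
        update-grub
        grub-reboot "Debian GNU/Linux, with Linux 3.5.3.20120914-amd64"
        reboot

        # If the new kernel comes up and behaves, make it the permanent default
        grub-set-default "Debian GNU/Linux, with Linux 3.5.3.20120914-amd64"

    If your grub-reboot refuses the full title, the numeric index works as well (0 for the first menuentry in grub.cfg). The panic=10 already on the kernel line is a sensible safety net for a remote box: a crashing new kernel should reboot on its own, and with the one-shot default it falls back to the old kernel.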

    Read the article

  • "Error loading operating system": Win7/Vista

    - by LookitsPuck
    Hey fellas, Have this computer for about 2 years now. Originally had Vista installed, now have Windows 7 installed. Both on separate hard drives. Also have another drive used strictly for media. About a week ago, the Vista hard drive started going on its way out. Was getting problems on startup. After a few BIOS settings, I was able to get into Windows 7 and everything was fine. However, I started remembering the startup issues, so I deleted the bootup for Vista under msconfig. Didn't restart the computer at that time, though. For a few days, everything was ok. Last night I play a little poker, then hit the hay. I wake up to a good ole "Error loading operating system" on the screen. Just wonderful. Looks like the computer restarted overnight (auto updates, anyone?). So, after a big of finagling and half hearted tries, I can't get past the "Error loading operating system" screen. FWIW, in the BIOS it can see my hard drives fine. So I move on. I get my Windows 7 installation disk to try and do a repair. Go in the BIOS, change boot priority to DVD drive, and we're on our merry way. After loading from the disc, I first try jumping into the "Repair your computer" section. That opens up the System Recovery Options. However, this is where the problem comes into play. I don't see any operating systems here. Nada. What's odd though is if I click on the Load Drivers button, I can see my Windows 7 partition (C:), and can go through the files and folders without issue. What do I do at this point? I can't repair it. It seems like I can traverse the hard drive without issue when in an open dialog in the System Recovery Options, but I'm getting the good ole "Error loading computer" on bootup. Suggestions? Thanks all!!
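    Since the Windows 7 partition is still readable from the recovery environment but Startup Repair lists no operating system, one low-risk step is to rebuild the boot code and the BCD by hand from the System Recovery Options command prompt - a sketch, on the assumption that the MBR/boot sector still points at the now-failing Vista disk:

        rem From the command prompt inside System Recovery Options on the Windows 7 DVD
        bootrec /fixmbr
        bootrec /fixboot
        bootrec /rebuildbcd

    The /rebuildbcd pass scans the disks for Windows installations and offers to add the ones it finds, which is usually enough to get the Windows 7 install listed and bootable again without the old Vista drive.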

    Read the article

  • How should I set up my Hyper-V server and network topology?

    - by Daniel Waechter
    This is my first time setting up either Hyper-V or Windows 2008, so please bear with me. I am setting up a pretty decent server running Windows Server 2008 R2 to be a remote (colocated) Hyper-V host. It will be hosting Linux and Windows VMs, initially for developers to use but eventually also to do some web hosting and other tasks. Currently I have two VMs, one Windows and one Ubuntu Linux, running pretty well, and I plan to clone them for future use. Right now I'm considering the best ways to configure developer and administrator access to the server once it is moved into the colocation facility, and I'm seeking advice on that. My thought is to set up a VPN for access to certain features of the VMs on the server, but I have a few different options for going about this: Connect the server to an existing hardware firewall (an old-ish Netscreen 5-GT) that can create a VPN and map external IPs to the VMs, which will have their own IPs exposed through the virtual interface. One problem with this choice is that I'm the only one trained on the Netscreen, and its interface is a bit baroque, so others may have difficulty maintaining it. Advantage is that I already know how to do it, and I know it will do what I need. Connect the server directly to the network and configure the Windows 2008 firewall to restrict access to the VMs and set up a VPN. I haven't done this before, so it will have a learning curve, but I'm willing to learn if this option is better long-term than the Netscreen. Another advantage is that I won't have to train anyone on the Netscreen interface. Still, I'm not certain if the capabilities of the Windows software firewall as far as creating VPNs, setting up rules for external access to certain ports on the IPs of Hyper-V servers, etc. Will it be sufficient for my needs and easy enough to set up / maintain? Anything else? What are the limitations of my approaches? What are the best practices / what has worked well for you? Remember that I need to set up developer access as well as consumer access to some services. Is a VPN even the right choice?

    Read the article

  • Rsync: windows 7, synology: login error and permission denied error

    - by loonboon
    Good day to all of you. I'm running into strange/stupid errors, and I hope somebody will be so kind as to help me out. I have to admit I am by no means a guru, so please bear with me :-)
    Situation:
        - Synology NAS (runs Linux) and a Windows 7 desktop (1 normal/restricted user Lisa and 1 admin user).
        - Data from the W7 desktop is to be rsynced to the Synology: /volume1/home/Lisa/Backup
        - Rsync command: c:\cygwin\bin\rsync -avz /cygdrive/e/Lisa/ [email protected]:/volume1/homes/Lisa/Backup
        - I've set up ssh per these two threads: http://www.cesareriva.com/archives/102 and http://www.cesareriva.com/archives/112
    Now the horrors begin:
        - root is allowed to run the rsync successfully; however, root doesn't log in automatically, so I can not use rsync in W7 batch scripts, which is of course required.
        - Lisa is allowed to log in automatically, but she can not successfully finish the rsync command because of permission errors: "rsync change dir /volume1/homes/Lisa/Backup failed: permission denied". This happens for each and every file and subdirectory rsync tries to create. However, the main directory (Backup) is created. When I try to copy files from Windows Explorer to the directory 'Backup' using the very same user Lisa, everything goes smoothly.
    So, obviously, there is a permission problem somewhere: either my rsync command isn't correct, or the folder permissions for homes/Lisa aren't correct (but then again, Windows 7 copies files to that folder without any problems, so that makes me believe the homes/Lisa permissions aren't the problem). I also tried adding --chmod=Dugo+x --chmod=ugo+r, which I found somewhere on the web, to the rsync command, but this didn't solve anything and gave the exact same errors.
    Would anybody please, please help me fix this? I am utterly frustrated about this, because I have been trying for a month to get everything to work and it simply doesn't. I bought the big Synology to end the horrors of 20 external USB disks once and for all (we have many pictures and home videos of our deceased dogs and want to watch these, the horror being 'what material is on what disk'). I'll gladly return the favour of somebody helping me out by buying you a nice beer (PayPal) if you can end my misery. I am not extremely skilled in Linux (not at all :-( ), so if you could add an extra word of explanation where possible, I'd be very grateful. I really hope somebody can help me out. Thank you in advance, Lisa
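    Given that root can rsync but Lisa cannot, the simplest explanation to rule out is that the Backup tree on the NAS is owned by root (left over from the earlier successful runs) rather than by Lisa. A small sketch of what to check over ssh on the Synology side - the path is taken from the rsync command above, and the group name "users" is a guess at the DSM default:

        # On the Synology, over ssh as root
        ls -ld /volume1/homes/Lisa /volume1/homes/Lisa/Backup   # who owns what?

        # If Backup (or anything inside it) belongs to root, hand it back to Lisa
        chown -R Lisa:users /volume1/homes/Lisa/Backup
        chmod -R u+rwX /volume1/homes/Lisa/Backup

    After that, re-running the rsync as Lisa should no longer hit "permission denied" on directories that were previously created as root; Windows Explorer worked all along because it goes through Samba with its own permission mapping, not through ssh as the Lisa Unix user.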

    Read the article

  • How to get more information from the system crash

    - by viraptor
    I'd like to debug an issue I'm having with a Linux (Debian stable) server, but I'm running out of ideas of how to confirm any diagnosis.
    Some background: the servers are DL160-class machines with hardware RAID between two disks. They're running a lot of services, mostly utilising the network interface and CPU. There are 8 CPUs, and the 7 "main", most CPU-hungry processes are bound to one core each via CPU affinity; other random background scripts are not forced anywhere. The filesystem is writing ~1.5k blocks/s the whole time (going above 2k/s at peak times). Normal CPU usage for those servers is ~60% on 7 cores and some minimal usage on the last (whatever's running on shells, usually).
    What actually happens is that the "main" services start using 100% CPU at some point, mainly stuck in kernel time. After a couple of seconds, the load average goes over 400 and we lose any way to connect to the box (KVM is on its way, but not there yet). Sometimes we see the kernel reporting a hung task (but not always):

        [118951.272884] INFO: task zsh:15911 blocked for more than 120 seconds.
        [118951.272955] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
        [118951.273037] zsh           D 0000000000000000     0 15911      1
        [118951.273093]  ffff8101898c3c48 0000000000000046 0000000000000000 ffffffffa0155e0a
        [118951.273183]  ffff8101a753a080 ffff81021f1c5570 ffff8101a753a308 000000051f0fd740
        [118951.273274]  0000000000000246 0000000000000000 00000000ffffffbd 0000000000000001
        [118951.273335] Call Trace:
        [118951.273424]  [<ffffffffa0155e0a>] :ext3:__ext3_journal_dirty_metadata+0x1e/0x46
        [118951.273510]  [<ffffffff804294f6>] schedule_timeout+0x1e/0xad
        [118951.273563]  [<ffffffff8027577c>] __pagevec_free+0x21/0x2e
        [118951.273613]  [<ffffffff80428b0b>] wait_for_common+0xcf/0x13a
        [118951.273692]  [<ffffffff8022c168>] default_wake_function+0x0/0xe
        ....

    This would point at RAID/disk failure; however, sometimes the tasks are hung on the kernel's gettsc, which would indicate some general weird hardware behaviour.
    It's also running MySQL (almost read-only, 99% cache hit), which seems to spawn a lot more threads during the system problems. During the day it does ~200kq/s (selects) and ~10q/s (writes). The host is never running out of memory or swapping, and no OOM reports are spotted.
    We've got many boxes with similar/same hardware and they all seem to behave that way, but I'm not sure which part fails, so it's probably not a good idea to just grab something more powerful and hope the problem goes away. The applications themselves don't really report anything wrong when they're running, and I can run anything safely on the same hardware in an isolated environment. What can I do to narrow down the problem? Where else should I look for an explanation?
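    Since the box becomes unreachable before anyone can look at it, it may help to have something continuously recording which tasks are stuck in uninterruptible sleep (state D) and what kernel wait channel they are blocked on - that is usually the quickest way to separate "storage controller stalls" from "clock/scheduler weirdness". A rough sketch of a lightweight watcher to leave running, assuming nothing beyond procps and a writable /var/log:

        #!/bin/sh
        # Log D-state processes, their kernel wait channel, and the load average every few seconds
        while true; do
            date
            ps -eo state,pid,wchan:32,comm | awk '$1 == "D"'
            cat /proc/loadavg
            echo '---'
            sleep 5
        done >> /var/log/dstate-watch.log

    If a serial console or iLO is available before the KVM arrives, triggering SysRq task dumps while the box is wedging (echo t > /proc/sysrq-trigger, or the keyboard/serial equivalent) would add full stack traces for the stuck tasks to the kernel log, assuming anything can still execute at that point.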

    Read the article

  • Postfix flow/hook reference, or high-level overview?

    - by threecheeseopera
    The Postfix MTA consists of several components/services that work together to perform the different stages of delivery and receipt of mail; these include the smtp daemon, the pickup and cleanup processes, the queue manager, the smtp service, pipe/spawn/virtual/rewrite ... and others (including the possibility of custom components). Postfix also provides several types of hooks that allow it to integrate with external software, such as policy servers, filters, bounce handlers, loggers, and authentication mechanisms; these hooks can be connected to different components/stages of the delivery process, and can communicate via (at least) IPC, network, database, several types of flat files, or a predefined protocol (e.g. milter). An old and very limited example of this is shown at this page. My question: Does anyone have access to a resource that describes these hooks, the components/delivery stages that the hook can interact with, and the supported communication methods? Or, more likely, documentation of the various Postfix components and the hooks/methods that they support? For example: Given the requirement "if the recipient primary MX server matches 'shadysmtpd', check the recipient address against a list; if there is a match, terminate the SMTP connection without notice". My software would need to 1) integrate into the proper part of the SMTP process, 2) use some method to perform the address check (TCP map server? regular expressions? mysql?), and 3) implement the required action (connection termination). Additionally, there will probably be several methods to accomplish this, and another requirement would be to find that which best fits (ex: a network server might be faster than a flat-file lookup; or, if a large volume of mail might be affected by this check, it should be performed as early in the mail process as possible). Real-world example: The apolicy policy server (performs checks on addresses according to user-defined rules) is designed as a standalone TCP server that hooks into Postfix inside the smtpd component via the directive 'check_policy_service inet:127.0.0.1:10001' in the 'smtpd_client_restrictions' configuration option. This means that, when Postfix first receives an item of mail to be delivered, it will create a TCP connection to the policy server address:port for the purpose of determining if the client is allowed to send mail from this server (in addition to whatever other restrictions / restriction lookup methods are defined in that option); the proper action will be taken based on the server's response. Notes: 1)The Postfix architecture page describes some of this information in ascii art; what I am hoping for is distilled, condensed, reference material. 2) Please correct me if I am wrong on any level; there is a mountain of material, and I am just one man ;) Thanks!

    Read the article

  • Different files on shared partition?

    - by Matt Robertson
    I am dual-booting Windows 8 and Ubuntu 12.04. My partition scheme looks like this:

        /dev/sda1 - Windows 8 (ntfs)
        /dev/sda2 - Ubuntu / (ext4)
        /dev/sda3 - Ubuntu home (ext4)
        /dev/sda5 - swap
        /dev/sda6 - Shared data partition (exfat)

    (First off, yes, I do have the exfat libraries installed on Ubuntu.)
    I created some PNG images in Windows and saved them on my shared partition. From Ubuntu, I edited the images in GIMP and saved them (replacing the ones on the shared partition). When I boot into Windows, the files appear unchanged - exactly like they did before I edited them from Ubuntu. I even added a folder and deleted some other files, but none of these changes exist in Windows. When I boot into Ubuntu, all of the changes are still there. It is as if Windows is caching the old file structure... How is this possible? Thanks in advance.
    Edit - commands output:

        lsblk
        NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
        sda      8:0    0 465.8G  0 disk
        +-sda1   8:1    0 165.1G  0 part
        +-sda2   8:2    0  21.3G  0 part /
        +-sda3   8:3    0  98.9G  0 part /home
        +-sda4   8:4    0     1K  0 part
        +-sda5   8:5    0   7.8G  0 part [SWAP]
        +-sda6   8:6    0 172.7G  0 part /mnt/shared_data

        /etc/fstab
        # <file system> <mount point> <type> <options> <dump> <pass>
        proc /proc proc nodev,noexec,nosuid 0 0
        # /dev/sda2
        UUID=8f700f65-b5c7-4afc-a6fb-8f9271e0fb5e / ext4 errors=remount-ro 0 1
        # /dev/sda3
        UUID=f0d688b7-22bd-4fa7-bc1b-a594af2933fa /home ext4 defaults 0 2
        # /dev/sda5
        UUID=3bc2399b-5deb-4f04-924b-d4fc77491997 none swap sw 0 0
        # /dev/sda6
        UUID=F2DE-BC47 /mnt/shared_data exfat defaults 0 3

        /etc/mtab
        /dev/sda2 / ext4 rw,errors=remount-ro 0 0
        proc /proc proc rw,noexec,nosuid,nodev 0 0
        sysfs /sys sysfs rw,noexec,nosuid,nodev 0 0
        none /sys/fs/fuse/connections fusectl rw 0 0
        none /sys/kernel/debug debugfs rw 0 0
        none /sys/kernel/security securityfs rw 0 0
        udev /dev devtmpfs rw,mode=0755 0 0
        devpts /dev/pts devpts rw,noexec,nosuid,gid=5,mode=0620 0 0
        tmpfs /run tmpfs rw,noexec,nosuid,size=10%,mode=0755 0 0
        none /run/lock tmpfs rw,noexec,nosuid,nodev,size=5242880 0 0
        none /run/shm tmpfs rw,nosuid,nodev 0 0
        /dev/sda3 /home ext4 rw 0 0
        /dev/sda6 /mnt/shared_data fuseblk rw,nosuid,nodev,allow_other,blksize=4096 0 0
        binfmt_misc /proc/sys/fs/binfmt_misc binfmt_misc rw,noexec,nosuid,nodev 0 0
        gvfs-fuse-daemon /home/matt/.gvfs fuse.gvfs-fuse-daemon rw,nosuid,nodev,user=matt 0 0
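    The symptom (Windows showing a stale view of a partition that Ubuntu has since changed) is what Windows 8's Fast Startup tends to produce: a normal "shutdown" is a partial hibernate, so cached filesystem state is restored on the next boot instead of being re-read from disk. A sketch of how to rule that out - disabling hibernation also disables Fast Startup; whether it is actually the cause here is an assumption:

        rem From an elevated command prompt in Windows 8
        powercfg /h off

        rem Alternatively, do a full restart (not a shutdown) before switching to Ubuntu
        shutdown /r /t 0

    If the changes made in Ubuntu appear in Windows after a full restart but not after a shutdown/power-on cycle, Fast Startup was the culprit and the choice is between leaving it off or always restarting before booting the other OS.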

    Read the article

  • Lag spikes at full CPU usage, maybe video card

    - by Roberts
    I am posting this thread in a hurry, so a few things may be missed (I will update tomorrow).
    My PC specs:
        Motherboard - Gigabyte GA-945PL-S3
        CPU - Dual-core Intel Core 2 Duo E4300, 1800 MHz (9 x 200)
        OS - Microsoft Windows 7 Ultimate, 32-bit, version 6.1.7601
    I bought a new video card, a GeForce 210, one month ago and didn't have any problems. Then I wanted to overclock it - in other words, "play with it" - so I installed Gigabyte EasyBoost from the CD and overclocked the GPU from 590 MHz by +110 MHz and the memory to its maximum of 960 MHz, up from 800 MHz. Benchmarks showed a slightly bigger score. Then I overclocked the shader clock from 1405 to [..] (I don't really remember). I was playing Modern Warfare 2 when, all of a sudden, the computer froze as I wanted to select a team; I was AFK before that. I had to reset the CMOS. After that I had problems with Skype: unread messages and no sound. Then I figured out that whenever I open EasyBoost, Skype starts to glitch again. Now I use EVGA Precision X.
    Now, after a month, I cleaned the computer and closed the case (it had been open all the time). I started to overclock the GPU clock only (just a bit), because there were no problems to stop me. Sometimes, under heavy CPU load, the graphics start to lag; dragging a window is painful to watch, too. Sometimes the screen freezes for 5 to 10 seconds (I can see that hard disk activity is at its maximum). You may say the CPU is at fault, isn't it? But sometimes the lag spikes start randomly when the CPU load is at maximum. All three benchmark programs (PerformanceTest, NovaBench and MSI Kombustor) show that the performance of my video card has dropped by about 25%. BUT! The CPU score is lower too. I ignored these problems, but when I refreshed the Windows Experience Index I was shocked.
    A month before (in Latvian, but not so hard to understand): [screenshot]. Now (with upgraded RAM): [screenshot]. This happened when I tried to capture Minecraft with Fraps on the GPU underclocked to 580 MHz (default: 590 MHz): [screenshot].
    All drivers are up to date. Average CPU temperature is 55°C to 75°C (at around 70°C these lag spikes sometimes start). The video card's temperatures are 45°C to 60°C (it is very hard to reach 60°C). So my hope is that the video card is fine, because this card is very new and I want to upgrade the CPU anyway. Apologies for my mistakes in vocabulary (I am trying to type this as fast as I can).

    Read the article

  • what can cause a folder to become indestructible?

    - by JustJeff
    I have a directory that I want to delete, but windows (xp sp3) is giving me the run-around and the folder is now effectively indestructible. Attempts to open the folder, either via explorer or cmd.exe are met with 'd:/temp/foo Is Not Accessible. Access is denied'. Attempts to delete the folder result in 'Cannot delete foo: The directory is not empty' So I can't delete it because supposedly it's not empty, but windows won't let me in it for some reason, so I can't clean it out first. There's nothing in it of consequence, and basically I just want to delete it at this point. Thinking that some other process must have a lock on it, I used the SysInternals 'handles' and Process Explorer to look for open handles with the directory name. These turned up no matches. (The directory name is not actually 'foo', it is something more unique but 'foo' is easier to type here). I put the machine through a restart, and the problem persists. I did a search for the folder name with regedit, to see what other apps might be aware of it. No match. The properties dialog was mildly interesting. The Read-Only attribute is 'semi-checked', i.e., the grayish check mark you get when some parts are and some parts aren't. Naturally I immediately unchecked this, and tried to delete the folder. No go. Opening properties again reveals the gray check mark next to Read-Only has returned. All the stats, size, size on disk, files, folders, all these are zero. There do not appear to be any shares on the folder, so that's not it either. Finally, I tried opening the partition's properties, and running the Tools/Error Checking utility. This didn't turn up any problems either. Fwiw, this directory was created by [a popular gui zip tool] when I tried to unpack a tar-and-zipped archive created on another system with command line utils. The archive was definitely corrupt, but I've never seen such a file do anything worse than crash the zip app, and certainly never leave permanent glitches in the file system. So what else can possibly be going on to make this folder behave this way?
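    Not an explanation of how the folder got into this half-owned, "not accessible / not empty" state, but since the stated goal is simply to delete it, the usual XP-era sequence is to re-grant yourself rights and then remove the tree, falling back to a filesystem check if NTFS itself is confused. A sketch, using the d:\temp\foo placeholder from the question (cacls is the XP tool; takeown/icacls only arrived with Vista):

        rem Re-grant access to the tree, then try removing it
        cacls d:\temp\foo /T /E /G Administrators:F
        rd /s /q d:\temp\foo

        rem If it still claims to be non-empty or inaccessible, let NTFS check the volume -
        rem a corrupt directory entry left behind by the crashed archive extraction is a plausible culprit
        chkdsk d: /f

    The semi-checked Read-Only attribute that keeps coming back is normal for folders (it reflects per-file attributes rather than the folder itself), so it is probably a red herring here.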

    Read the article

  • XP shared folders not accessible after BIOS changed

    - by stijn
    Here's what worked for over a year: PC A runs Windows 7, PC B runs Windows XP. Both are on the same subnet behind a router. A uses user account X, but logs in to PC B using the Administrator account. PC B is a Dell Precision 470; a known problem with these is that sometimes, when their power cable is plugged in, they somehow lose all BIOS settings. This happened yesterday. After this happens Windows won't boot, because the default BIOS setting is 'RAID ON' while there is no RAID configured. No problem though: changing the BIOS setting to 'RAID OFF' makes it boot without problems. Note that in the meantime nothing config-related was changed on machine A; it wasn't even on.
    Indeed, after doing this everything is fine. Everything includes all normal operations: remote desktop from PC A to PC B, running Synergy between A and B, accessing shared folders from B to A. But accessing the shared folders on B from A does not work any more. I tried pretty much everything I found via Google (fiddling with policies/registry keys/...) but to no avail.

        > ping -a 192.168.2.2
        Pinging A [192.168.2.2] with 32 bytes of data:
        Reply from 192.168.2.2: bytes=32 time<1ms TTL=128

        > net view \\192.168.2.2
        System error 5 has occurred.
        Access is denied.

        > net use /persistent:no K: \\A\myshare /user:A\USERNAME PASSWORD
        > net use /persistent:no K: \\192.168.2.2\myshare /user:192.168.2.2\USERNAME PASSWORD
        > net use /persistent:no K: \\192.168.2.2\myshare /user:USERNAME PASSWORD
        System error 86 has occurred.
        The specified network password is not correct.

    A solution to this would be great: I haven't been able to do any work since yesterday ;]
    Update: after taking the hard drive out of B and putting it in another Precision 470 with almost exactly the same hardware (at first sight, only the video card differs), the shared folders work. Putting the disk back into B, the same problem remains. Why does this depend on hardware, and more importantly, on which hardware?

    Read the article

  • load-causing processes disappearing from "top" ps -o pcpu shows bogus numbers

    - by Alec Matusis
    I administer a large number of servers, and I have this problem only with Ubuntu 10.04 LTS: I run a server under normal load (say a load average of 3.0 on an 8-core server). The "top" command shows the processes taking a certain % of CPU that cause this load average, say:

        PID   USER  PR NI  VIRT RES SHR  S %CPU %MEM     TIME+ COMMAND
        11008 mysql 20  0 25.9g 22g 5496 S   67 76.0 643539:38 mysqld

        ps -o pcpu,pid -p11008
        %CPU   PID
        53.1 11008

    Everything is consistent. Then all of a sudden the process causing the load average disappears from "top", but the process continues to run normally (albeit with a slight performance decrease), and the system load average becomes somewhat higher. The output of ps -o pcpu becomes bogus:

        # ps -o pcpu,pid -p11008
        %CPU   PID
        317910278 1587

    This has happened to at least 5 different servers (different brand-new IBM System x hardware), each running different software: one httpd 2.2, one mysqld 5.1, and one Twisted Python TCP server. Each time the kernel was between 2.6.32-32-server and 2.6.32-40-server. I updated some machines to 2.6.32-41-server, and it has not happened on those yet, but the bug is rare (once every 60 days or so).
    This is from an affected machine:

        top - 10:39:06 up 73 days, 17:57,  3 users,  load average: 6.62, 5.60, 5.34
        Tasks: 207 total,   2 running, 205 sleeping,   0 stopped,   0 zombie
        Cpu(s): 11.4%us, 18.0%sy,  0.0%ni, 66.3%id,  4.3%wa,  0.0%hi,  0.0%si,  0.0%st
        Mem:  74341464k total, 71985004k used,  2356460k free,   236456k buffers
        Swap:  3906552k total,      328k used,  3906224k free, 24838212k cached

        PID  USER PR NI  VIRT  RES  SHR S %CPU %MEM    TIME+ COMMAND
        805  root 20  0     0    0    0 S    3  0.0  1493:09 fct0-worker
        982  root 20  0     0    0    0 S    1  0.0 111:35.05 fioa-data-groom
        914  root 20  0     0    0    0 S    0  0.0 884:42.71 fct1-worker
        1068 root 20  0 19364 1496 1060 R    0  0.0   0:00.02 top

    Nothing causing high load shows up in top, but I have two highly loaded mysqld instances on the box that suddenly show a crazy %CPU:

        # ps -o pcpu,pid,cmd -p1587
        %CPU   PID CMD
        317713124 1587 /nail/encap/mysql-5.1.60/libexec/mysqld

    and

        # ps -o pcpu,pid,cmd -p1624
        %CPU   PID CMD
        2802  1624 /nail/encap/mysql-5.1.60/libexec/mysqld

    Here are the numbers from

        # cat /proc/1587/stat
        1587 (mysqld) S 1212 1088 1088 0 -1 4202752 14307313 0 162 0 85773299069 4611685932654088833 0 0 20 0 52 0 3549 27255418880 5483524 18446744073709551615 4194304 11111617 140733749236976 140733749235984 8858659 0 552967 4102 26345 18446744073709551615 0 0 17 5 0 0 0 0 0

    The 14th and 15th numbers, according to man proc, are supposed to be:

        utime %lu   Amount of time that this process has been scheduled in user mode, measured in clock ticks (divide by sysconf(_SC_CLK_TCK)). This includes guest time, guest_time (time spent running a virtual CPU, see below), so that applications that are not aware of the guest time field do not lose that time from their calculations.
        stime %lu   Amount of time that this process has been scheduled in kernel mode, measured in clock ticks (divide by sysconf(_SC_CLK_TCK)).

    On a normal server these numbers advance every time I check /proc/PID/stat. On a buggy server they are stuck at a ridiculously high value like 4611685932654088833, and they do not change. Has anyone encountered this bug?

    Read the article

  • Why do my Application Compatibility Toolkit Data Collectors fail to write to my ACT Log Share?

    - by Jay Michaud
    I am trying to get the Microsoft Application Compatibility Toolkit 5.6 (version 5.6.7320.0) to work, but I cannot get the Data Collectors to write to the ACT Log Share. The configuration is as follows.

        Machine: ACT-Server
        Domain: mydomain.example.com
        OS: Windows 7 Enterprise 64-bit Edition
        Windows Firewall configuration: File and Printer Sharing (SMB-In) is enabled for Public, Domain, and Private networks
        ACT Log Share: ACT

        Share permissions*:
        Group/user names   Allow permissions
        ---------------------------------------
        Everyone           Full Control
        Administrator      Full Control
        Domain Admins      Full Control
        Administrators     Full Control
        ANONYMOUS LOGON    Full Control

        Folder permissions*:
        Group/user name    Allow permissions       Apply to
        -------------------------------------------------
        ANONYMOUS LOGON    Read, write & execute   This folder, subfolders, and files
        Domain Admins      Full control            This folder, subfolders, and files
        Everyone           Read, write & execute   This folder, subfolders, and files
        Administrators     Full control            This folder, subfolders, and files
        CREATOR OWNER      Full control            Subfolders and files
        SYSTEM             Full control            This folder, subfolders, and files
        INTERACTIVE        Traverse folder / execute file, List folder / read data, Read attributes, Read extended attributes, Create files / write data, Create folders / append data, Write attributes, Write extended attributes, Delete subfolders and files, Delete, Read permissions   This folder, subfolders, and files
        SERVICE            (same as INTERACTIVE)
        BATCH              (same as INTERACTIVE)

    *I am fully aware that these permissions are excessive, but that is beside the point of this question.
    Some of the clients running the Data Collector are domain members, but some are not. I am working under the assumption that this is a Windows file-sharing permission issue or a network access policy issue, but of course I could be wrong. It is my understanding that the Data Collector runs in the security context of the SYSTEM account, which for domain members appears on the network as MYDOMAIN\machineaccount. It is also my understanding, from reading numerous pieces of documentation, that setting the ANONYMOUS LOGON permissions as I have above should allow these computer accounts and non-domain-joined computers to access the share.
    To test connectivity, I set up the Windows XP Mode virtual machine (VM) on ACT-Server. In the VM, I opened a command prompt running as SYSTEM (using the old "at" command trick) and used it to run explorer.exe. In this Windows Explorer instance, I typed \\ACT-Server\ACT into the address bar, and then I was prompted for logon credentials; the goal, though, was not to be prompted. I also used the "net use /delete" command in the command prompt window to delete connections to the \\ACT-Server\IPC$ share each time my connection attempt failed. I have made sure that the appropriate exceptions are Since ACT-Server is a domain member, the "Network access: Sharing and security model for local accounts" security policy is set to "Classic - local users authenticate as themselves". In spite of this, I still tried enabling the Guest account and adding permissions for it on the share, to no effect.
    What am I missing here? How do I allow anonymous logons to a shared folder as a step toward getting my ACT Data Collectors to deposit their data correctly? Am I even on the right track, or is the issue elsewhere?

    Read the article

  • Operating systems on SD cards

    - by HisDudeness
    I was getting some wild ideas over the last few days, like putting some operating systems onto SD cards rather than on my hard drive. I'll go into the details now and explain what led me to consider this probably abominable decision. I am on a laptop (which means I have a native SD-card reader) currently running a cross-distro setup, with a bunch of Linux systems (placed in dedicated ext4 logical partitions inside a huge extended one) managed by a single GRUB. To this day, my laptop hasn't seen a Windows system even through binoculars. I was thinking about placing the whole OS part of my setup onto a Secure Digital card, saving my entire 500 GB hard drive for documents, music, videos and so on, and being able to just remove the SD and boot my system on another computer too, as well as having the possibility of booting other systems on mine by just plugging in another SD, without having to keep it permanently in my PC. Also, in the remote case that in the near future I want to boot Windows 8 on it, I have read that it causes major boot incompatibility issues with other systems by requiring a digital signature in order for them to start. By having it on a removable drive, I could just set it aside when I don't need it and swap its card for the Linux one, and so have no obstacle to booting them. Now, my questions are: I know that, unlike traditional rotating disk drives, integrated-circuit ones have a limited lifespan in terms of rewrite cycles. Is that an obstacle to this kind of usage? I mean, some Ultrabooks use SSDs now; is it the same issue, or are there differences between Solid State Drives and Secure Digital cards in that sense? Maybe having them store system files that sit in fixed positions (defeating the even use of cells, i.e. wear levelling), constantly re-read and updated, quickly renders them unserviceable, does it? Second question: are all motherboards and BIOSes able to boot from SD cards just as they are from USB pen drives (provided the card reader is USB-connected)? Or can bootloaders like GRUB not be installed on SDs and still work? If they can't, is installing GRUB to the MBR and pointing a boot entry at the SD a solution? Will it work? Are there any other problems with installing OSs on a Secure Digital? See the sketch after this paragraph for the bootloader side.
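    On the bootloader half of the question: GRUB 2 does not care whether the target is a hard disk, an SSD or an SD card, as long as the firmware exposes it as a disk at boot time. As a rough sketch (the device names are assumptions: a native reader typically appears as /dev/mmcblk0, a USB-attached one as an ordinary /dev/sdX), installing GRUB onto the card itself rather than into the internal drive's MBR would look something like:

      # mount the card's boot/root partition somewhere convenient
      sudo mount /dev/mmcblk0p1 /mnt/sd
      # put GRUB's files on the card and its boot code into the card's own MBR,
      # leaving the internal hard drive's MBR untouched
      sudo grub-install --boot-directory=/mnt/sd/boot /dev/mmcblk0

    Whether the BIOS will actually boot from the card is a separate matter: readers attached over USB are usually presented as ordinary USB mass-storage devices and are bootable wherever USB boot is supported, while some internal (PCIe-attached) readers are not visible to the BIOS at all, in which case the fallback described above (GRUB in the internal drive's MBR with a menu entry pointing at the SD) is the workable route.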

    Read the article

  • How to get an inactive RAID device working again?

    - by Jonik
    After booting, my RAID1 device (/dev/md_d0 *) sometimes ends up in some funny state and I cannot mount it. (* Originally I created /dev/md0, but it has somehow changed itself into /dev/md_d0.)

      # mount /opt
      mount: wrong fs type, bad option, bad superblock on /dev/md_d0,
             missing codepage or helper program, or other error
             (could this be the IDE device where you in fact use
             ide-scsi so that sr0 or sda or so is needed?)
             In some cases useful info is found in syslog - try
             dmesg | tail or so

    The RAID device appears to be inactive somehow:

      # cat /proc/mdstat
      Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
      md_d0 : inactive sda4[0](S)
              241095104 blocks

      # mdadm --detail /dev/md_d0
      mdadm: md device /dev/md_d0 does not appear to be active.

    The question is: how do I make the device active again (using mdadm, I presume)? (Other times it is fine (active) after boot, and I can mount it manually without problems. But it still won't mount automatically even though I have it in /etc/fstab:

      /dev/md_d0 /opt ext4 defaults 0 0

    So a bonus question: what should I do to make the RAID device mount automatically at /opt at boot time?) This is an Ubuntu 9.10 workstation. Background info about my RAID setup is in this question.

    Edit: My /etc/mdadm/mdadm.conf looks like this. I've never touched this file, at least by hand.

      # by default, scan all partitions (/proc/partitions) for MD superblocks.
      # alternatively, specify devices to scan, using wildcards if desired.
      DEVICE partitions
      # auto-create devices with Debian standard permissions
      CREATE owner=root group=disk mode=0660 auto=yes
      # automatically tag new arrays as belonging to the local system
      HOMEHOST <system>
      # instruct the monitoring daemon where to send mail alerts
      MAILADDR <my mail address>
      # definitions of existing MD arrays
      # This file was auto-generated on Wed, 27 Jan 2010 17:14:36 +0200

    In /proc/partitions the last entry is md_d0, at least now, after a reboot when the device happens to be active again. (I'm not sure if it would be the same when it's inactive.)

    Resolution: as Jimmy Hedman suggested, I took the output of mdadm --examine --scan:

      ARRAY /dev/md0 level=raid1 num-devices=2 UUID=de8fbd92[...]

    and added it to /etc/mdadm/mdadm.conf, which seems to have fixed the main problem. After changing /etc/fstab to use /dev/md0 again (instead of /dev/md_d0), the RAID device also gets mounted automatically!
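    For completeness, a hedged sketch of the reactivation commands (array and device names taken from the output above; this is the generic recipe, not necessarily the exact sequence that was used here):

      # try to start the assembled-but-inactive array in place
      sudo mdadm --run /dev/md_d0
      # or tear it down and let mdadm reassemble it from the on-disk superblocks
      sudo mdadm --stop /dev/md_d0
      sudo mdadm --assemble --scan
      # persist the array definition so it comes up as /dev/md0 on every boot
      sudo mdadm --examine --scan | sudo tee -a /etc/mdadm/mdadm.conf
      sudo update-initramfs -u

    With the ARRAY line in mdadm.conf and /etc/fstab pointing at /dev/md0, the automatic mount at /opt follows as described in the resolution above.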

    Read the article
